[v2,0/7] pmdomain/cpuidle-psci: Support s2idle/s2ram on PREEMPT_RT

Message ID 20240527142557.321610-1-ulf.hansson@linaro.org

Message

Ulf Hansson May 27, 2024, 2:25 p.m. UTC
Updates in v2:
	- Rebased and fixed a small issue in genpd, see patch3.
	- Re-tested on v6.9-rt5 (PREEMPT_RT enabled)
	- Re-tested on v6.10-rc1 (for regressions, PREEMPT_RT disabled)

The hierarchical PM domain topology and the corresponding domain-idle-states
are currently disabled on a PREEMPT_RT based configuration. The main reason is
that spinlocks are turned into sleepable locks on PREEMPT_RT, which means
genpd and runtime PM can't be used in the atomic idle-path when
selecting/entering an idle-state.

For s2idle/s2ram this is an unnecessary limitation that this series intends to
address. Note that support for CPU hotplug is left to future improvements.
More information about this is available in the commit messages.
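
To illustrate the locking constraint, here is a minimal sketch (for
illustration only, not code from the series): on PREEMPT_RT a spinlock_t is
backed by an rtmutex and may sleep, while a raw_spinlock_t keeps its
non-sleeping semantics, which is what the atomic idle-path requires.

/* Illustration only: lock behaviour on PREEMPT_RT (not from the series). */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);         /* becomes a sleeping rtmutex on PREEMPT_RT */
static DEFINE_RAW_SPINLOCK(demo_raw_lock); /* always a real spinning lock */

/* Fine in task context, but must not be taken in the atomic idle-path on RT. */
static void demo_task_context(void)
{
	spin_lock(&demo_lock);
	/* ... ordinary task-context work ... */
	spin_unlock(&demo_lock);
}

/* Safe in the atomic idle-path, also with PREEMPT_RT enabled. */
static void demo_idle_path(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&demo_raw_lock, flags);
	/* ... select/enter the domain-idle-state ... */
	raw_spin_unlock_irqrestore(&demo_raw_lock, flags);
}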

I have tested this on a Dragonboard 410c.

Kind regards
Ulf Hansson


Ulf Hansson (7):
  pmdomain: core: Enable s2idle for CPU PM domains on PREEMPT_RT
  pmdomain: core: Don't hold the genpd-lock when calling
    dev_pm_domain_set()
  pmdomain: core: Use dev_name() instead of kobject_get_path() in
    debugfs
  cpuidle: psci-domain: Enable system-wide suspend on PREEMPT_RT
  cpuidle: psci: Drop redundant assignment of CPUIDLE_FLAG_RCU_IDLE
  cpuidle: psci: Enable the hierarchical topology for s2ram on
    PREEMPT_RT
  cpuidle: psci: Enable the hierarchical topology for s2idle on
    PREEMPT_RT

 drivers/cpuidle/cpuidle-psci-domain.c | 10 ++--
 drivers/cpuidle/cpuidle-psci.c        | 26 ++++++----
 drivers/pmdomain/core.c               | 75 +++++++++++++++++++--------
 include/linux/pm_domain.h             |  5 +-
 4 files changed, 80 insertions(+), 36 deletions(-)

Comments

Nikunj Kela May 28, 2024, 7:55 p.m. UTC | #1
On 5/27/2024 7:25 AM, Ulf Hansson wrote:
> To allow a genpd provider for a CPU PM domain to enter a domain-idle-state
> during s2idle on a PREEMPT_RT based configuration, we can't use a regular
> spinlock, as regular spinlocks are turned into sleepable locks on PREEMPT_RT.
>
> To address this problem, let's convert to using a raw spinlock, but only
> for genpd providers that have the GENPD_FLAG_CPU_DOMAIN bit set. In this
> way, the lock can still be acquired/released in atomic context, which is
> needed in the idle-path for PREEMPT_RT.
>
> Do note that the genpd power-on/off notifiers may also be fired during
> s2idle, but these are already prepared for PREEMPT_RT as they are based on
> raw notifiers. However, consumers of them may need to adapt accordingly to
> work properly on PREEMPT_RT.
>
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> ---
>
> Changes in v2:
> 	- None.
>
> ---
>  drivers/pmdomain/core.c   | 47 ++++++++++++++++++++++++++++++++++++++-
>  include/linux/pm_domain.h |  5 ++++-
>  2 files changed, 50 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
> index 623d15b68707..072e6bdb6ee6 100644
> --- a/drivers/pmdomain/core.c
> +++ b/drivers/pmdomain/core.c
> @@ -117,6 +117,48 @@ static const struct genpd_lock_ops genpd_spin_ops = {
>  	.unlock = genpd_unlock_spin,
>  };
>  
> +static void genpd_lock_raw_spin(struct generic_pm_domain *genpd)
> +	__acquires(&genpd->raw_slock)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave(&genpd->raw_slock, flags);
> +	genpd->raw_lock_flags = flags;
> +}
> +
> +static void genpd_lock_nested_raw_spin(struct generic_pm_domain *genpd,
> +					int depth)
> +	__acquires(&genpd->raw_slock)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave_nested(&genpd->raw_slock, flags, depth);
> +	genpd->raw_lock_flags = flags;
> +}
> +
> +static int genpd_lock_interruptible_raw_spin(struct generic_pm_domain *genpd)
> +	__acquires(&genpd->raw_slock)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave(&genpd->raw_slock, flags);
> +	genpd->raw_lock_flags = flags;
> +	return 0;
> +}
> +
> +static void genpd_unlock_raw_spin(struct generic_pm_domain *genpd)
> +	__releases(&genpd->raw_slock)
> +{
> +	raw_spin_unlock_irqrestore(&genpd->raw_slock, genpd->raw_lock_flags);
> +}
> +
> +static const struct genpd_lock_ops genpd_raw_spin_ops = {
> +	.lock = genpd_lock_raw_spin,
> +	.lock_nested = genpd_lock_nested_raw_spin,
> +	.lock_interruptible = genpd_lock_interruptible_raw_spin,
> +	.unlock = genpd_unlock_raw_spin,
> +};
> +
>  #define genpd_lock(p)			p->lock_ops->lock(p)
>  #define genpd_lock_nested(p, d)		p->lock_ops->lock_nested(p, d)
>  #define genpd_lock_interruptible(p)	p->lock_ops->lock_interruptible(p)
> @@ -2079,7 +2121,10 @@ static void genpd_free_data(struct generic_pm_domain *genpd)
>  
>  static void genpd_lock_init(struct generic_pm_domain *genpd)
>  {
> -	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
> +	if (genpd->flags & GENPD_FLAG_CPU_DOMAIN) {
> +		raw_spin_lock_init(&genpd->raw_slock);
> +		genpd->lock_ops = &genpd_raw_spin_ops;
> +	} else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {

Hi Ulf, though you are targeting only CPU domains for now, I wonder if
FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
it is safe for atomic context which won't be the case for PREEMPT_RT? 


>  		spin_lock_init(&genpd->slock);
>  		genpd->lock_ops = &genpd_spin_ops;
>  	} else {
> diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
> index f24546a3d3db..0d7fc47de3bc 100644
> --- a/include/linux/pm_domain.h
> +++ b/include/linux/pm_domain.h
> @@ -194,8 +194,11 @@ struct generic_pm_domain {
>  			spinlock_t slock;
>  			unsigned long lock_flags;
>  		};
> +		struct {
> +			raw_spinlock_t raw_slock;
> +			unsigned long raw_lock_flags;
> +		};
>  	};
> -
>  };
>  
>  static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)
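
As a side note to the commit message above: the genpd power-on/off notifiers
are based on raw notifiers, but their consumers may need to adapt for
PREEMPT_RT. A minimal sketch of such a consumer, assuming the existing
dev_pm_genpd_add_notifier() API and keeping the callback free of sleeping
locks (illustration only, not part of the series):

/*
 * Illustration only (not part of the series): a genpd power notifier
 * consumer that stays PREEMPT_RT friendly by avoiding sleeping primitives
 * in the callback, which may run in atomic context during s2idle.
 */
#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/pm_domain.h>

static int demo_genpd_notifier(struct notifier_block *nb,
			       unsigned long action, void *data)
{
	switch (action) {
	case GENPD_NOTIFY_PRE_OFF:
		/* Quiesce/save state using non-sleeping accessors only. */
		break;
	case GENPD_NOTIFY_ON:
		/* Restore state; no mutexes or regular spinlocks here on RT. */
		break;
	default:
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block demo_nb = {
	.notifier_call = demo_genpd_notifier,
};

/* Registration happens from task context, for example at probe time. */
static int demo_register(struct device *dev)
{
	return dev_pm_genpd_add_notifier(dev, &demo_nb);
}
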
Ulf Hansson May 30, 2024, 8:15 a.m. UTC | #2
On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>
>
> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
> > [...]
> > @@ -2079,7 +2121,10 @@ static void genpd_free_data(struct generic_pm_domain *genpd)
> >
> >  static void genpd_lock_init(struct generic_pm_domain *genpd)
> >  {
> > -     if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
> > +     if (genpd->flags & GENPD_FLAG_CPU_DOMAIN) {
> > +             raw_spin_lock_init(&genpd->raw_slock);
> > +             genpd->lock_ops = &genpd_raw_spin_ops;
> > +     } else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
>
> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
> FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
> it is safe for atomic context which won't be the case for PREEMPT_RT?

You have a point!

However, we also need to limit the use of raw spinlocks, from
PREEMPT_RT point of view. In other words, just because a genpd
provider is capable of executing its callbacks in atomic context,
doesn't always mean that it should use raw spinlocks too.

[...]

Kind regards
Uffe
Nikunj Kela May 30, 2024, 2:23 p.m. UTC | #3
On 5/30/2024 1:15 AM, Ulf Hansson wrote:
> On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>>
>> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
>>> [...]
>> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
>> FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
>> it is safe for atomic context which won't be the case for PREEMPT_RT?
> You have a point!
>
> However, we also need to limit the use of raw spinlocks, from
> PREEMPT_RT point of view. In other words, just because a genpd
> provider is capable of executing its callbacks in atomic context,
> doesn't always mean that it should use raw spinlocks too.

Got it! Thanks. Maybe in future, if there is a need, a new GENPD FLAG
for RT, something like GENPD_FLAG_IRQ_SAFE_RT, can be added to address this.


>
> [...]
>
> Kind regards
> Uffe
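
To make the GENPD_FLAG_IRQ_SAFE_RT idea above concrete, here is a
hypothetical sketch of how such a flag could be wired into genpd_lock_init().
The flag does not exist today; the name is only the suggestion from this
discussion, and the sketch assumes the raw spinlock ops introduced by patch 1:

/*
 * Hypothetical sketch only: GENPD_FLAG_IRQ_SAFE_RT is not an existing flag.
 * It merely illustrates how an opt-in flag could select the raw spinlock
 * ops alongside GENPD_FLAG_CPU_DOMAIN.
 */
static void genpd_lock_init(struct generic_pm_domain *genpd)
{
	if (genpd->flags & (GENPD_FLAG_CPU_DOMAIN | GENPD_FLAG_IRQ_SAFE_RT)) {
		raw_spin_lock_init(&genpd->raw_slock);
		genpd->lock_ops = &genpd_raw_spin_ops;
	} else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
		spin_lock_init(&genpd->slock);
		genpd->lock_ops = &genpd_spin_ops;
	} else {
		mutex_init(&genpd->mlock);
		genpd->lock_ops = &genpd_mtx_ops;
	}
}
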
Ulf Hansson June 3, 2024, 1:58 p.m. UTC | #4
On Thu, 30 May 2024 at 16:23, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>
>
> On 5/30/2024 1:15 AM, Ulf Hansson wrote:
> > On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
> >>
> >> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
> >>> [...]
> >> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
> >> FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
> >> it is safe for atomic context which won't be the case for PREEMPT_RT?
> > You have a point!
> >
> > However, we also need to limit the use of raw spinlocks, from
> > PREEMPT_RT point of view. In other words, just because a genpd
> > provider is capable of executing its callbacks in atomic context,
> > doesn't always mean that it should use raw spinlocks too.
>
> Got it! Thanks. Maybe in future, if there is a need, a new GENPD FLAG
> for RT, something like GENPD_FLAG_IRQ_SAFE_RT, can be added to address this.

Yes, I agree, something along those lines would make sense.

BTW, did you manage to get some time to test the series on your end?

Kind regards
Uffe
Nikunj Kela June 4, 2024, 5:22 p.m. UTC | #5
On 6/3/2024 6:58 AM, Ulf Hansson wrote:
> On Thu, 30 May 2024 at 16:23, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>>
>> On 5/30/2024 1:15 AM, Ulf Hansson wrote:
>>> On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>>>> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
>>>>> [...]
>>>> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
>>>> FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
>>>> it is safe for atomic context which won't be the case for PREEMPT_RT?
>>> You have a point!
>>>
>>> However, we also need to limit the use of raw spinlocks, from
>>> PREEMPT_RT point of view. In other words, just because a genpd
>>> provider is capable of executing its callbacks in atomic context,
>>> doesn't always mean that it should use raw spinlocks too.
>> Got it! Thanks. Maybe in future, if there is a need, a new GENPD FLAG
>> for RT, something like GENPD_FLAG_IRQ_SAFE_RT, can be added to address this.
> Yes, I agree, something along those lines would make sense.
>
> BTW, did you manage to get some time to test the series on your end?

I haven't been able to spend time testing it, but I have requested Maulik to
test it and update you. Thanks.


> Kind regards
> Uffe
Sebastian Andrzej Siewior July 8, 2024, 1:53 p.m. UTC | #6
On 2024-05-27 16:25:50 [+0200], Ulf Hansson wrote:
> Updates in v2:
> 	- Rebased and fixed a small issue in genpd, see patch3.
> 	- Re-tested on v6.9-rt5 (PREEMPT_RT enabled)
> 	- Re-tested on v6.10-rc1 (for regressions, PREEMPT_RT disabled)
> 
> The hierarchical PM domain topology and the corresponding domain-idle-states
> are currently disabled on a PREEMPT_RT based configuration. The main reason is
> that spinlocks are turned into sleepable locks on PREEMPT_RT, which means
> genpd and runtime PM can't be used in the atomic idle-path when
> selecting/entering an idle-state.
> 
> For s2idle/s2ram this is an unnecessary limitation that this series intends to
> address. Note that support for CPU hotplug is left to future improvements.
> More information about this is available in the commit messages.

I looked at it and it seems limited to pmdomain/core.c. Also, I don't
know if there is a ->set_performance_state callback set, since the one I
checked has mutex locking ;)
So if this is needed, then so be it. s2ram wouldn't be used in "production"
but in a "safe state", so I wouldn't worry too much about latency spikes.
Not sure what it means for the other modes.
I am not too worried for now, please don't let it spread more than needed ;)

> Kind regards
> Ulf Hansson

Sebastian
Ulf Hansson July 9, 2024, 3:31 p.m. UTC | #7
On Mon, 8 Jul 2024 at 15:53, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2024-05-27 16:25:50 [+0200], Ulf Hansson wrote:
> > [...]
>
> I looked at it and it seems limited to pmdomain/core.c. Also, I don't
> know if there is a ->set_performance_state callback set, since the one I
> checked has mutex locking ;)
> So if this is needed, then so be it. s2ram wouldn't be used in "production"
> but in a "safe state", so I wouldn't worry too much about latency spikes.
> Not sure what it means for the other modes.
> I am not too worried for now, please don't let it spread more than needed ;)

Thanks for taking a look and for providing your thoughts. Can I
consider that as an "ack" for the whole series?

Before I decide to apply this I am awaiting some additional
confirmation from Qcom guys. It's getting late for v6.11, so I may
need to make another re-spin, but let's see.

Kind regards
Uffe
Raghavendra Kakarla July 25, 2024, 10:32 a.m. UTC | #8
On 5/27/2024 7:55 PM, Ulf Hansson wrote:
> [...]
> 
> I have tested this on a Dragonboard 410c.
This series was tested on qcm6490 with PREEMPT_RT enabled. For the series,

Tested-by: Raghavendra Kakarla <quic_rkakarla@quicinc.com> # qcm6490 with PREEMPT_RT enabled

I tested APSS suspend and then SoC power collapse through the s2idle and s2ram
paths. The APSS and SoC power-collapse stats are incremented in:
/sys/kernel/debug/pm_genpd/power-domain-cluster/idle_states
/sys/kernel/debug/qcom_stats/cxsd

Without this series, they are not incremented.

Thanks,
Raghavendra K.
> [...]
Sebastian Andrzej Siewior July 31, 2024, 8:15 a.m. UTC | #9
On 2024-07-09 17:31:12 [+0200], Ulf Hansson wrote:
> 
> Thanks for taking a look and for providing your thoughts. Can I
> consider that as an "ack" for the whole series?

Yes.

Sebastian
Ulf Hansson Aug. 5, 2024, 11:35 a.m. UTC | #10
+ Raghavendra Kakarla, Sebastian Andrzej Siewior

On Mon, 27 May 2024 at 16:26, Ulf Hansson <ulf.hansson@linaro.org> wrote:
>
> [...]

Just wanted to let you all know that I have queued up this series via
my pmdomain tree. Please let me know if you have any further
comments/suggestions.

Kind regards
Uffe