
[v2,0/7] pmdomain/cpuidle-psci: Support s2idle/s2ram on PREEMPT_RT

Message ID 20240527142557.321610-1-ulf.hansson@linaro.org

Message

Ulf Hansson May 27, 2024, 2:25 p.m. UTC
Updates in v2:
	- Rebased and fixed a small issue in genpd, see patch 3.
	- Re-tested on v6.9-rt5 (PREEMPT_RT enabled)
	- Re-tested on v6.10-rc1 (for regressions, PREEMPT_RT disabled)

The hierarchical PM domain topology and the corresponding domain-idle-states
are currently disabled on a PREEMPT_RT based configuration. The main reason is
that spinlocks are turned into sleepable locks on PREEMPT_RT, which means
genpd and runtime PM can't be used in the atomic idle-path when
selecting/entering an idle-state.
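
To illustrate the constraint, here is a minimal sketch (not part of this
series, names are illustrative only) of why a regular spinlock can't be taken
in the atomic idle-path on PREEMPT_RT, while a raw spinlock can:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(regular_lock);	/* becomes a sleeping lock on PREEMPT_RT */
static DEFINE_RAW_SPINLOCK(idle_lock);	/* keeps non-sleeping semantics */

/* Process context: taking a spinlock_t is fine, even if it sleeps on RT. */
static void process_context_sketch(void)
{
	spin_lock(&regular_lock);
	/* ... */
	spin_unlock(&regular_lock);
}

/*
 * Atomic idle-path: interrupts are disabled, so on PREEMPT_RT only a raw
 * spinlock may be taken here. Taking 'regular_lock' instead would trigger
 * a "sleeping function called from invalid context" splat.
 */
static void idle_path_sketch(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&idle_lock, flags);
	/* select/enter the domain-idle-state */
	raw_spin_unlock_irqrestore(&idle_lock, flags);
}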

For s2idle/s2ram this is an unnecessary limitation that this series intends to
address. Note that support for CPU hotplug is left to future improvements.
More information is available in the commit messages.

I have tested this on a Dragonboard 410c.

Kind regards
Ulf Hansson


Ulf Hansson (7):
  pmdomain: core: Enable s2idle for CPU PM domains on PREEMPT_RT
  pmdomain: core: Don't hold the genpd-lock when calling
    dev_pm_domain_set()
  pmdomain: core: Use dev_name() instead of kobject_get_path() in
    debugfs
  cpuidle: psci-domain: Enable system-wide suspend on PREEMPT_RT
  cpuidle: psci: Drop redundant assignment of CPUIDLE_FLAG_RCU_IDLE
  cpuidle: psci: Enable the hierarchical topology for s2ram on
    PREEMPT_RT
  cpuidle: psci: Enable the hierarchical topology for s2idle on
    PREEMPT_RT

 drivers/cpuidle/cpuidle-psci-domain.c | 10 ++--
 drivers/cpuidle/cpuidle-psci.c        | 26 ++++++----
 drivers/pmdomain/core.c               | 75 +++++++++++++++++++--------
 include/linux/pm_domain.h             |  5 +-
 4 files changed, 80 insertions(+), 36 deletions(-)

Comments

Nikunj Kela May 28, 2024, 7:55 p.m. UTC | #1
On 5/27/2024 7:25 AM, Ulf Hansson wrote:
> To allow a genpd provider for a CPU PM domain to enter a domain-idle-state
> during s2idle on a PREEMPT_RT based configuration, we can't use regular
> spinlocks, as they are turned into sleepable locks on PREEMPT_RT.
>
> To address this problem, let's convert to using a raw spinlock, but
> only for genpd providers that have the GENPD_FLAG_CPU_DOMAIN bit set. In
> this way, the lock can still be acquired/released in atomic context, which
> is needed in the idle-path on PREEMPT_RT.
>
> Do note that the genpd power-on/off notifiers may also be fired during
> s2idle, but these are already prepared for PREEMPT_RT as they are based on
> raw notifiers. However, consumers of them may need to adapt accordingly
> to work properly on PREEMPT_RT.
>
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> ---
>
> Changes in v2:
> 	- None.
>
> ---
>  drivers/pmdomain/core.c   | 47 ++++++++++++++++++++++++++++++++++++++-
>  include/linux/pm_domain.h |  5 ++++-
>  2 files changed, 50 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
> index 623d15b68707..072e6bdb6ee6 100644
> --- a/drivers/pmdomain/core.c
> +++ b/drivers/pmdomain/core.c
> @@ -117,6 +117,48 @@ static const struct genpd_lock_ops genpd_spin_ops = {
>  	.unlock = genpd_unlock_spin,
>  };
>  
> +static void genpd_lock_raw_spin(struct generic_pm_domain *genpd)
> +	__acquires(&genpd->raw_slock)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave(&genpd->raw_slock, flags);
> +	genpd->raw_lock_flags = flags;
> +}
> +
> +static void genpd_lock_nested_raw_spin(struct generic_pm_domain *genpd,
> +					int depth)
> +	__acquires(&genpd->raw_slock)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave_nested(&genpd->raw_slock, flags, depth);
> +	genpd->raw_lock_flags = flags;
> +}
> +
> +static int genpd_lock_interruptible_raw_spin(struct generic_pm_domain *genpd)
> +	__acquires(&genpd->raw_slock)
> +{
> +	unsigned long flags;
> +
> +	raw_spin_lock_irqsave(&genpd->raw_slock, flags);
> +	genpd->raw_lock_flags = flags;
> +	return 0;
> +}
> +
> +static void genpd_unlock_raw_spin(struct generic_pm_domain *genpd)
> +	__releases(&genpd->raw_slock)
> +{
> +	raw_spin_unlock_irqrestore(&genpd->raw_slock, genpd->raw_lock_flags);
> +}
> +
> +static const struct genpd_lock_ops genpd_raw_spin_ops = {
> +	.lock = genpd_lock_raw_spin,
> +	.lock_nested = genpd_lock_nested_raw_spin,
> +	.lock_interruptible = genpd_lock_interruptible_raw_spin,
> +	.unlock = genpd_unlock_raw_spin,
> +};
> +
>  #define genpd_lock(p)			p->lock_ops->lock(p)
>  #define genpd_lock_nested(p, d)		p->lock_ops->lock_nested(p, d)
>  #define genpd_lock_interruptible(p)	p->lock_ops->lock_interruptible(p)
> @@ -2079,7 +2121,10 @@ static void genpd_free_data(struct generic_pm_domain *genpd)
>  
>  static void genpd_lock_init(struct generic_pm_domain *genpd)
>  {
> -	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
> +	if (genpd->flags & GENPD_FLAG_CPU_DOMAIN) {
> +		raw_spin_lock_init(&genpd->raw_slock);
> +		genpd->lock_ops = &genpd_raw_spin_ops;
> +	} else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {

Hi Ulf, though you are targeting only CPU domains for now, I wonder if
FLAG_IRQ_SAFE would be a better choice? The description of the flag says
it is safe for atomic context, which won't be the case on PREEMPT_RT?


>  		spin_lock_init(&genpd->slock);
>  		genpd->lock_ops = &genpd_spin_ops;
>  	} else {
> diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
> index f24546a3d3db..0d7fc47de3bc 100644
> --- a/include/linux/pm_domain.h
> +++ b/include/linux/pm_domain.h
> @@ -194,8 +194,11 @@ struct generic_pm_domain {
>  			spinlock_t slock;
>  			unsigned long lock_flags;
>  		};
> +		struct {
> +			raw_spinlock_t raw_slock;
> +			unsigned long raw_lock_flags;
> +		};
>  	};
> -
>  };
>  
>  static inline struct generic_pm_domain *pd_to_genpd(struct dev_pm_domain *pd)
Ulf Hansson May 30, 2024, 8:15 a.m. UTC | #2
On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>
>
> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
> > [...]
> > @@ -2079,7 +2121,10 @@ static void genpd_free_data(struct generic_pm_domain *genpd)
> >
> >  static void genpd_lock_init(struct generic_pm_domain *genpd)
> >  {
> > -     if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
> > +     if (genpd->flags & GENPD_FLAG_CPU_DOMAIN) {
> > +             raw_spin_lock_init(&genpd->raw_slock);
> > +             genpd->lock_ops = &genpd_raw_spin_ops;
> > +     } else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
>
> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
> FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
> it is safe for atomic context which won't be the case for PREEMPT_RT?

You have a point!

However, we also need to limit the use of raw spinlocks from a
PREEMPT_RT point of view. In other words, just because a genpd
provider is capable of executing its callbacks in atomic context
doesn't always mean that it should use raw spinlocks too.

[...]

Kind regards
Uffe
Nikunj Kela May 30, 2024, 2:23 p.m. UTC | #3
On 5/30/2024 1:15 AM, Ulf Hansson wrote:
> On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>>
>> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
>>> [...]
>> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
>> FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
>> it is safe for atomic context which won't be the case for PREEMPT_RT?
> You have a point!
>
> However, we also need to limit the use of raw spinlocks, from
> PREEMPT_RT point of view. In other words, just because a genpd
> provider is capable of executing its callbacks in atomic context,
> doesn't always mean that it should use raw spinlocks too.

Got it! Thanks. Maybe in the future, if there is a need, a new genpd flag
for RT, something like GENPD_FLAG_IRQ_SAFE_RT, could be added to address this.
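
For illustration, a hypothetical sketch of how such a flag could slot into
genpd_lock_init() (the GENPD_FLAG_IRQ_SAFE_RT name and bit value below are
assumptions, not existing code; the mutex fallback follows the existing
default path):

#define GENPD_FLAG_IRQ_SAFE_RT	(1U << 7)	/* hypothetical flag, assumes a free bit */

static void genpd_lock_init(struct generic_pm_domain *genpd)
{
	/* Opt in to raw spinlocks, implicitly (CPU domains) or explicitly. */
	if (genpd->flags & (GENPD_FLAG_CPU_DOMAIN | GENPD_FLAG_IRQ_SAFE_RT)) {
		raw_spin_lock_init(&genpd->raw_slock);
		genpd->lock_ops = &genpd_raw_spin_ops;
	} else if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
		spin_lock_init(&genpd->slock);
		genpd->lock_ops = &genpd_spin_ops;
	} else {
		mutex_init(&genpd->mlock);
		genpd->lock_ops = &genpd_mtx_ops;
	}
}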


>
> [...]
>
> Kind regards
> Uffe
Ulf Hansson June 3, 2024, 1:58 p.m. UTC | #4
On Thu, 30 May 2024 at 16:23, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>
>
> On 5/30/2024 1:15 AM, Ulf Hansson wrote:
> > On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
> >>
> >> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
> >>> [...]
> >> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
> >> FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
> >> it is safe for atomic context which won't be the case for PREEMPT_RT?
> > You have a point!
> >
> > However, we also need to limit the use of raw spinlocks, from
> > PREEMPT_RT point of view. In other words, just because a genpd
> > provider is capable of executing its callbacks in atomic context,
> > doesn't always mean that it should use raw spinlocks too.
>
> Got it! Thanks. Maybe in future, if there is a need, a new GENPD FLAG
> for RT, something like GENPD_FLAG_IRQ_SAFE_RT, can be added to address this.

Yes, I agree, something along those lines would make sense.

BTW, did you manage to get some time to test the series on your end?

Kind regards
Uffe
Nikunj Kela June 4, 2024, 5:22 p.m. UTC | #5
On 6/3/2024 6:58 AM, Ulf Hansson wrote:
> On Thu, 30 May 2024 at 16:23, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>>
>> On 5/30/2024 1:15 AM, Ulf Hansson wrote:
>>> On Tue, 28 May 2024 at 21:56, Nikunj Kela <quic_nkela@quicinc.com> wrote:
>>>> On 5/27/2024 7:25 AM, Ulf Hansson wrote:
>>>>> [...]
>>>> Hi Ulf, though you are targeting only CPU domains for now, I wonder if
>>>> FLAG_IRQ_SAFE will be a better choice?  The description of the flag says
>>>> it is safe for atomic context which won't be the case for PREEMPT_RT?
>>> You have a point!
>>>
>>> However, we also need to limit the use of raw spinlocks, from
>>> PREEMPT_RT point of view. In other words, just because a genpd
>>> provider is capable of executing its callbacks in atomic context,
>>> doesn't always mean that it should use raw spinlocks too.
>> Got it! Thanks. Maybe in future, if there is a need, a new GENPD FLAG
>> for RT, something like GENPD_FLAG_IRQ_SAFE_RT, can be added to address this.
> Yes, I agree, something along those lines would make sense.
>
> BTW, did you manage to get some time to test the series on your end?

I haven't been able to spend time testing it, but I have requested Maulik to
test it and update you. Thanks


> Kind regards
> Uffe