Message ID | 20250430160943.2836-1-ImanDevel@gmail.com |
---|---|
State | New |
Series | [v3] cpufreq: fix locking order in store_local_boost to prevent deadlock |
On Fri, May 9, 2025 at 7:29 AM Seyediman Seyedarab <imandevel@gmail.com> wrote:
>
> On 25/05/02 10:36AM, Viresh Kumar wrote:
> > On 30-04-25, 12:09, Seyediman Seyedarab wrote:
> > > Lockdep reports a possible circular locking dependency[1] when
> > > writing to /sys/devices/system/cpu/cpufreq/policyN/boost,
> > > triggered by power-profiles-daemon at boot.
> > >
> > > [...]
> >
> > Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
> >
> > --
> > viresh
>
> Hi there,
>
> Just following up to see if there's anything you'd like me to
> change or address in the patch before it can move forward.
> Please let me know if any updates are needed.

I'm kind of wondering why local_boost needs cpus_read_lock() at all.
Holding the policy rwsem blocks CPU online/offline already for this
policy.

Is that because ->set_boost() may need to synchronize with the other policies?
On Fri, May 9, 2025 at 7:03 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, May 9, 2025 at 7:29 AM Seyediman Seyedarab <imandevel@gmail.com> wrote:
> >
> > On 25/05/02 10:36AM, Viresh Kumar wrote:
> > > On 30-04-25, 12:09, Seyediman Seyedarab wrote:
> > > > Lockdep reports a possible circular locking dependency[1] when
> > > > writing to /sys/devices/system/cpu/cpufreq/policyN/boost,
> > > > triggered by power-profiles-daemon at boot.
> > > >
> > > > [...]
> > >
> > > Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
> > >
> > > --
> > > viresh
> >
> > Hi there,
> >
> > Just following up to see if there's anything you'd like me to
> > change or address in the patch before it can move forward.
> > Please let me know if any updates are needed.
>
> I'm kind of wondering why local_boost needs cpus_read_lock() at all.
> Holding the policy rwsem blocks CPU online/offline already for this
> policy.
>
> Is that because ->set_boost() may need to synchronize with the other policies?

IOW, what can go wrong if the cpus_read_lock() locking is dropped from
there altogether?
On 25/05/10 01:41PM, Rafael J. Wysocki wrote:
> On Fri, May 9, 2025 at 7:03 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Fri, May 9, 2025 at 7:29 AM Seyediman Seyedarab <imandevel@gmail.com> wrote:
> > >
> > > On 25/05/02 10:36AM, Viresh Kumar wrote:
> > > > On 30-04-25, 12:09, Seyediman Seyedarab wrote:
> > > > > Lockdep reports a possible circular locking dependency[1] when
> > > > > writing to /sys/devices/system/cpu/cpufreq/policyN/boost,
> > > > > triggered by power-profiles-daemon at boot.
> > > > >
> > > > > [...]
> > > >
> > > > Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
> > > >
> > > > --
> > > > viresh
> > >
> > > Hi there,
> > >
> > > Just following up to see if there's anything you'd like me to
> > > change or address in the patch before it can move forward.
> > > Please let me know if any updates are needed.
> >
> > I'm kind of wondering why local_boost needs cpus_read_lock() at all.
> > Holding the policy rwsem blocks CPU online/offline already for this
> > policy.
> >
> > Is that because ->set_boost() may need to synchronize with the other policies?
>
> IOW, what can go wrong if the cpus_read_lock() locking is dropped from
> there altogether?

I think ->set_boost() being per-policy makes cpus_read_lock() unnecessary
here. Since we already hold the policy lock, any topology changes
involving this policy should be blocked. And because we're not iterating
over all CPUs or policies to set boost, we don't need to worry about CPU
hotplug synchronization in this case.

Regards,
Seyediman
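For context, here is a minimal sketch of what the store() path would reduce back to if the cpu_hotplug_lock handling were dropped altogether, as discussed above. It is reconstructed from the pre-patch ("-") lines visible in the diff below and is only an illustration of the suggestion, not an applied patch:

static ssize_t store(struct kobject *kobj, struct attribute *attr,
		     const char *buf, size_t count)
{
	struct cpufreq_policy *policy = to_policy(kobj);
	struct freq_attr *fattr = to_attr(attr);

	if (!fattr->store)
		return -EIO;

	/*
	 * Assumption under discussion: policy->rwsem alone is enough here,
	 * because ->set_boost() is per-policy and store_local_boost() would
	 * call policy_set_boost() without any cpus_read_lock()/unlock().
	 */
	guard(cpufreq_policy_write)(policy);

	if (likely(!policy_is_inactive(policy)))
		return fattr->store(policy, buf, count);

	return -EBUSY;
}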
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 21fa733a2..b349adbeb 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -622,10 +622,7 @@ static ssize_t store_local_boost(struct cpufreq_policy *policy,
 	if (!policy->boost_supported)
 		return -EINVAL;
 
-	cpus_read_lock();
 	ret = policy_set_boost(policy, enable);
-	cpus_read_unlock();
-
 	if (!ret)
 		return count;
 
@@ -1006,16 +1003,28 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
 {
 	struct cpufreq_policy *policy = to_policy(kobj);
 	struct freq_attr *fattr = to_attr(attr);
+	int ret = -EBUSY;
 
 	if (!fattr->store)
 		return -EIO;
 
-	guard(cpufreq_policy_write)(policy);
+	/*
+	 * store_local_boost() requires cpu_hotplug_lock to be held, and must be
+	 * called with that lock acquired *before* taking policy->rwsem to avoid
+	 * lock ordering violations.
+	 */
+	if (fattr == &local_boost)
+		cpus_read_lock();
 
-	if (likely(!policy_is_inactive(policy)))
-		return fattr->store(policy, buf, count);
+	scoped_guard(cpufreq_policy_write, policy) {
+		if (likely(!policy_is_inactive(policy)))
+			ret = fattr->store(policy, buf, count);
+	}
 
-	return -EBUSY;
+	if (fattr == &local_boost)
+		cpus_read_unlock();
+
+	return ret;
 }
 
 static void cpufreq_sysfs_release(struct kobject *kobj)
Lockdep reports a possible circular locking dependency[1] when
writing to /sys/devices/system/cpu/cpufreq/policyN/boost,
triggered by power-profiles-daemon at boot.

store_local_boost() used to acquire cpu_hotplug_lock *after*
the policy lock had already been taken by the store() handler.
However, the expected locking hierarchy is to acquire
cpu_hotplug_lock before the policy guard. This inverted lock order
creates a *theoretical* deadlock possibility.

Acquire cpu_hotplug_lock in the store() handler *only* for the
local_boost attribute, before entering the policy guard block,
and remove the cpus_read_lock/unlock() calls from store_local_boost().
Also switch from guard() to scoped_guard() to allow explicitly wrapping
the policy guard inside the cpu_hotplug_lock critical section.

[1]
======================================================
WARNING: possible circular locking dependency detected
6.15.0-rc4-debug #28 Not tainted
------------------------------------------------------
power-profiles-/596 is trying to acquire lock:
ffffffffb147e910 (cpu_hotplug_lock){++++}-{0:0}, at: store_local_boost+0x6a/0xd0

but task is already holding lock:
ffff9eaa48377b80 (&policy->rwsem){++++}-{4:4}, at: store+0x37/0x90

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&policy->rwsem){++++}-{4:4}:
       down_write+0x29/0xb0
       cpufreq_online+0x841/0xa00
       cpufreq_add_dev+0x71/0x80
       subsys_interface_register+0x14b/0x170
       cpufreq_register_driver+0x154/0x250
       amd_pstate_register_driver+0x36/0x70
       amd_pstate_init+0x1e7/0x270
       do_one_initcall+0x67/0x2c0
       kernel_init_freeable+0x230/0x270
       kernel_init+0x15/0x130
       ret_from_fork+0x2c/0x50
       ret_from_fork_asm+0x11/0x20

-> #1 (subsys mutex#3){+.+.}-{4:4}:
       __mutex_lock+0xc2/0x930
       subsys_interface_register+0x83/0x170
       cpufreq_register_driver+0x154/0x250
       amd_pstate_register_driver+0x36/0x70
       amd_pstate_init+0x1e7/0x270
       do_one_initcall+0x67/0x2c0
       kernel_init_freeable+0x230/0x270
       kernel_init+0x15/0x130
       ret_from_fork+0x2c/0x50
       ret_from_fork_asm+0x11/0x20

-> #0 (cpu_hotplug_lock){++++}-{0:0}:
       __lock_acquire+0x1087/0x17e0
       lock_acquire.part.0+0x66/0x1b0
       cpus_read_lock+0x2a/0xc0
       store_local_boost+0x6a/0xd0
       store+0x50/0x90
       kernfs_fop_write_iter+0x135/0x200
       vfs_write+0x2ab/0x540
       ksys_write+0x6c/0xe0
       do_syscall_64+0xbb/0x1d0
       entry_SYSCALL_64_after_hwframe+0x56/0x5e

Signed-off-by: Seyediman Seyedarab <ImanDevel@gmail.com>
---
Changes in v3:
- Rebased over PM tree's linux-next branch
- Added a comment to explain why this piece of code is required
- Switched from guard() to scoped_guard() to allow explicitly wrapping
  the policy guard inside the cpu_hotplug_lock critical section.

Changes in v2:
- Restrict cpu_hotplug_lock acquisition to only
  the local_boost attribute in store() handler.

Regards,
Seyediman

 drivers/cpufreq/cpufreq.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)
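As an editorial aid, the locking order before and after the patch can be summarized as follows (an informal sketch of the call flow, not literal kernel code):

/*
 * Before the patch (the order lockdep reports as inverted):
 *   store()               takes policy->rwsem (write)
 *     store_local_boost() then takes cpu_hotplug_lock via cpus_read_lock()
 *
 * After the patch (matches the expected hierarchy):
 *   store()               takes cpu_hotplug_lock first, only for local_boost,
 *                         then policy->rwsem via scoped_guard(cpufreq_policy_write)
 *     store_local_boost() calls policy_set_boost() with no hotplug locking
 */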