[00/10] cgroup/cpuset: Miscellaneous partition bug fixes and enhancements

Message ID 20250330215248.3620801-1-longman@redhat.com

Message

Waiman Long March 30, 2025, 9:52 p.m. UTC
This patch series fixes a number of bugs in the cpuset partition code and
improves remote partition handling. The test_cpuset_prs.sh selftest is also
enhanced to allow more rigorous remote partition testing.

Waiman Long (10):
  cgroup/cpuset: Fix race between newly created partition and dying one
  cgroup/cpuset: Fix incorrect isolated_cpus update in
    update_parent_effective_cpumask()
  cgroup/cpuset: Fix error handling in remote_partition_disable()
  cgroup/cpuset: Remove remote_partition_check() & make
    update_cpumasks_hier() handle remote partition
  cgroup/cpuset: Don't allow creation of local partition over a remote
    one
  cgroup/cpuset: Code cleanup and comment update
  cgroup/cpuset: Remove unneeded goto in sched_partition_write() and
    rename it
  selftest/cgroup: Update test_cpuset_prs.sh to use | as effective CPUs
    and state separator
  selftest/cgroup: Clean up and restructure test_cpuset_prs.sh
  selftest/cgroup: Add a remote partition transition test to
    test_cpuset_prs.sh

 include/linux/cgroup-defs.h                   |   1 +
 include/linux/cgroup.h                        |   2 +-
 kernel/cgroup/cgroup.c                        |   6 +
 kernel/cgroup/cpuset-internal.h               |   1 +
 kernel/cgroup/cpuset.c                        | 401 +++++++-----
 .../selftests/cgroup/test_cpuset_prs.sh       | 617 ++++++++++++------
 6 files changed, 649 insertions(+), 379 deletions(-)

Comments

Tejun Heo March 31, 2025, 11:13 p.m. UTC | #1
Hello,

On Sun, Mar 30, 2025 at 05:52:39PM -0400, Waiman Long wrote:
...
> One possible way to fix this is to iterate the dying cpusets as well and
> avoid using the exclusive CPUs in those dying cpusets. However, this
> can still cause random partition creation failures or other anomalies
> due to racing. A better way to fix this race is to reset the partition
> state at the moment when a cpuset is being killed.

I'm not a big fan of adding another method call in the destruction path.
css_offline() is where the kill can be seen from all CPUs and notified to
the controller and I'm not sure why bringing it sooner would be necessary to
close the race window. Can't the creation side drain the cgroups that are
going down if the asynchronous part is a problem? e.g. We already have
cgroup_lock_and_drain_offline() which isn't the most scalable thing but
partition operations aren't very frequent, right? And if that's a problem,
there should be a way to make it reasonably quicker.
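
Roughly something like the sketch below on the enabling side (untested,
just to illustrate; the helper name and placement are made up):

/*
 * Illustrative only: called on the partition-enable path with no locks
 * held, before the exclusive CPU conflict checks.
 */
static int cpuset_partition_update_drained(struct cpuset *cs, int new_prs)
{
        int ret;

        /*
         * Waits for csses whose offlining is still in flight and returns
         * with cgroup_mutex held, so dying cpusets have gone through
         * cpuset_css_offline() and released their exclusive CPUs before
         * we validate the new partition.
         */
        cgroup_lock_and_drain_offline(&cgrp_dfl_root.cgrp);

        mutex_lock(&cpuset_mutex);
        ret = 0;        /* ... existing partition validation/update ... */
        mutex_unlock(&cpuset_mutex);

        cgroup_unlock();
        return ret;
}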

Thanks.
Waiman Long April 1, 2025, 3:12 a.m. UTC | #2
On 3/31/25 7:13 PM, Tejun Heo wrote:
> Hello,
>
> On Sun, Mar 30, 2025 at 05:52:39PM -0400, Waiman Long wrote:
> ...
>> One possible way to fix this is to iterate the dying cpusets as well and
>> avoid using the exclusive CPUs in those dying cpusets. However, this
>> can still cause random partition creation failures or other anomalies
>> due to racing. A better way to fix this race is to reset the partition
>> state at the moment when a cpuset is being killed.
> I'm not a big fan of adding another method call in the destruction path.
> css_offline() is where the kill can be seen from all CPUs and notified to
> the controller and I'm not sure why bringing it sooner would be necessary to
> close the race window. Can't the creation side drain the cgroups that are
> going down if the asynchronous part is a problem? e.g. We already have
> cgroup_lock_and_drain_offline() which isn't the most scalable thing but
> partition operations aren't very frequent, right? And if that's a problem,
> there should be a way to make it reasonably quicker.

The problem is the RCU delay between the time a cgroup is killed and 
enters the dying state and the time the partition is deactivated when 
cpuset_css_offline() is called. That delay can be rather lengthy 
depending on the current workload.

Another alternative that I can think of is to scan the remote partition 
list (for a remote partition) or the sibling cpusets (for a local 
partition) whenever a conflict is detected while enabling a partition. 
When a dying cpuset partition is detected, deactivate it immediately to 
resolve the conflict. Otherwise, the dying partition will still be 
deactivated at cpuset_css_offline() time.

That will be a bit more complex, but I think it can still solve the 
problem without adding a new method. What do you think? If you are OK 
with that, I will send out a new patch later this week.
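
In rough pseudo-C, the scan would look something like the sketch below 
(names are made up and it only shows the sibling/local case; the remote 
partition list would be walked the same way, and the actual deactivation 
would reuse whatever cpuset_css_offline() does today):

/*
 * Sketch: on a conflict while enabling a partition, look for a dying
 * child of @parent that is still a valid partition and holds the CPUs
 * we want, and tear its partition down early.
 */
static bool prune_conflicting_dying_partition(struct cpuset *parent,
                                              struct cpumask *wanted)
{
        struct cgroup_subsys_state *css;
        bool pruned = false;

        lockdep_assert_held(&cpuset_mutex);

        rcu_read_lock();
        css_for_each_child(css, &parent->css) {
                struct cpuset *cp = css_cs(css);

                /* Only dying children that are still valid partitions */
                if (!css_is_dying(css) || !is_partition_valid(cp))
                        continue;
                if (!cpumask_intersects(cp->effective_xcpus, wanted))
                        continue;

                /* ... do the deactivation cpuset_css_offline() would do ... */
                pruned = true;
        }
        rcu_read_unlock();

        return pruned;
}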

Thanks,
Longman
Tejun Heo April 1, 2025, 7:59 p.m. UTC | #3
Hello, Waiman.

On Mon, Mar 31, 2025 at 11:12:06PM -0400, Waiman Long wrote:
> The problem is the RCU delay between the time a cgroup is killed and is in a
> dying state and when the partition is deactivated when cpuset_css_offline()
> is called. That delay can be rather lengthy depending on the current
> workload.

If we don't have to do it too often, synchronize_rcu_expedited() may be
workable too. What do you think?

> Another alternative that I can think of is to scan the remote partition list
> for remote partition and sibling cpusets for local partition whenever some
> kind of conflicts are detected when enabling a partition. When a dying
> cpuset partition is detected, deactivate it immediately to resolve the
> conflict. Otherwise, the dying partition will still be deactivated at
> cpuset_css_offline() time.
> 
> That will be a bit more complex and I think can still get the problem solved
> without adding a new method. What do you think? If you are OK with that, I
> will send out a new patch later this week.

If synchronize_rcu_expedited() won't do, let's go with the original patch.
The operation does make general sense in that it's for a distinctive step in
the destruction process although I'm a bit curious why it's called before
DYING is set.

Thanks.
Waiman Long April 1, 2025, 8:41 p.m. UTC | #4
On 4/1/25 3:59 PM, Tejun Heo wrote:
> Hello, Waiman.
>
> On Mon, Mar 31, 2025 at 11:12:06PM -0400, Waiman Long wrote:
>> The problem is the RCU delay between the time a cgroup is killed and is in a
>> dying state and when the partition is deactivated when cpuset_css_offline()
>> is called. That delay can be rather lengthy depending on the current
>> workload.
> If we don't have to do it too often, synchronize_rcu_expedited() may be
> workable too. What do you think?

I don't think we ever call synchronize_rcu() in the cgroup code except 
for the rstat flush. In fact, we didn't use to have an easy way to know 
if there were dying cpusets hanging around. Now we can probably use the 
root cgroup's nr_dying_subsys[cpuset_cgrp_id] to know if we need a 
synchronize_rcu*() call to wait for them. However, I still need to check 
whether there is any race window that would cause us to miss them.
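
Something along the lines of the sketch below (function name is made up; 
it assumes the root cgroup's nr_dying_subsys[] count stays accurate, and 
as said above this alone may still leave a window because the offlining 
itself remains asynchronous):

static void cpuset_wait_for_dying_csses(void)
{
        /* Nothing to wait for if no cpuset css is dying anywhere. */
        if (!cgrp_dfl_root.cgrp.nr_dying_subsys[cpuset_cgrp_id])
                return;

        /*
         * Wait out the grace period that precedes the offlining so that
         * cpuset_css_offline() gets a chance to run and release the
         * dying partitions' exclusive CPUs.
         */
        synchronize_rcu();
}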

>
>> Another alternative that I can think of is to scan the remote partition list
>> for remote partition and sibling cpusets for local partition whenever some
>> kind of conflicts are detected when enabling a partition. When a dying
>> cpuset partition is detected, deactivate it immediately to resolve the
>> conflict. Otherwise, the dying partition will still be deactivated at
>> cpuset_css_offline() time.
>>
>> That will be a bit more complex and I think can still get the problem solved
>> without adding a new method. What do you think? If you are OK with that, I
>> will send out a new patch later this week.
> If synchronize_rcu_expedited() won't do, let's go with the original patch.
> The operation does make general sense in that it's for a distinctive step in
> the destruction process although I'm a bit curious why it's called before
> DYING is set.

Again, we have to synchronize the css_is_dying() test in 
is_cpuset_online(), which is used by cpuset_for_each_child(), against 
the calling of cpuset_css_killed(). The setting of the CSS_DYING flag is 
protected by cgroup_mutex, while most of the cpuset code is protected by 
cpuset_mutex, so the two operations can be asynchronous with each other. 
Therefore I have to make sure that by the time CSS_DYING is set, the 
cpuset_css_killed() call has already been invoked. I will need a similar 
check if we decide to use synchronize_rcu*() to wait for the completion 
of the cpuset_css_offline() call.
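
To make the ordering concrete, the intent in kill_css() is roughly the 
sketch below (abridged; assumes the new ->css_killed() method from patch 
1, and only shows the new call relative to the flag setting):

/* kernel/cgroup/cgroup.c -- sketch of the intended ordering only */
static void kill_css(struct cgroup_subsys_state *css)
{
        lockdep_assert_held(&cgroup_mutex);

        if (css->flags & CSS_DYING)
                return;

        /*
         * Give the controller its "killed" notification (for cpuset,
         * cpuset_css_killed() resetting the partition) before CSS_DYING
         * becomes visible, so that anyone who observes css_is_dying()
         * under cpuset_mutex also finds the partition already torn down.
         */
        if (css->ss->css_killed)
                css->ss->css_killed(css);

        css->flags |= CSS_DYING;

        /* ... existing percpu-ref kill and offlining steps ... */
}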

As I am also dealing with a lot of locking related issues, I am more 
attuned to these kinds of race conditions and want to make sure nothing 
bad will happen.

Cheers,
Longman
Waiman Long April 1, 2025, 8:56 p.m. UTC | #5
On 4/1/25 4:41 PM, Waiman Long wrote:
>
> On 4/1/25 3:59 PM, Tejun Heo wrote:
>> Hello, Waiman.
>>
>> On Mon, Mar 31, 2025 at 11:12:06PM -0400, Waiman Long wrote:
>>> The problem is the RCU delay between the time a cgroup is killed and 
>>> is in a
>>> dying state and when the partition is deactivated when 
>>> cpuset_css_offline()
>>> is called. That delay can be rather lengthy depending on the current
>>> workload.
>> If we don't have to do it too often, synchronize_rcu_expedited() may be
>> workable too. What do you think?
>
> I don't think we ever call synchronize_rcu() in the cgroup code except 
> for rstat flush. In fact, we didn't use to have an easy way to know if 
> there were dying cpusets hanging around. Now we can probably use the 
> root cgroup's nr_dying_subsys[cpuset_cgrp_id] to know if we need to 
> use synchronize_rcu*() call to wait for it. However, I still need to 
> check if there is any racing window that will cause us to miss it.

Sorry, I don't think I can use synchronize_rcu_expedited(), as the use 
case that I am seeing most often is the creation of isolated partitions 
for running latency-sensitive applications like DPDK. Using 
synchronize_rcu_expedited() will send IPIs to all the CPUs, which may 
break the latency guarantee required by those applications. Just using 
synchronize_rcu(), however, will have unpredictable latency, impacting 
the user experience.

>
>>
>>> Another alternative that I can think of is to scan the remote 
>>> partition list
>>> for remote partition and sibling cpusets for local partition 
>>> whenever some
>>> kind of conflicts are detected when enabling a partition. When a dying
>>> cpuset partition is detected, deactivate it immediately to resolve the
>>> conflict. Otherwise, the dying partition will still be deactivated at
>>> cpuset_css_offline() time.
>>>
>>> That will be a bit more complex and I think can still get the 
>>> problem solved
>>> without adding a new method. What do you think? If you are OK with 
>>> that, I
>>> will send out a new patch later this week.
>> If synchronize_rcu_expedited() won't do, let's go with the original 
>> patch.
>> The operation does make general sense in that it's for a distinctive 
>> step in
>> the destruction process although I'm a bit curious why it's called 
>> before
>> DYING is set.
>
Because of the above, I still prefer either using the original patch or 
scanning for dying cpuset partitions when a conflict is detected. Please 
let me know what you think.

Thanks,
Longman