
[V2,2/3] sched: Fix race in idle_balance()

Message ID 1391728237-4441-3-git-send-email-daniel.lezcano@linaro.org
State Accepted
Commit e5fc66119ec97054eefc83f173a7ee9e133c3c3a
Headers show

Commit Message

Daniel Lezcano Feb. 6, 2014, 11:10 p.m. UTC
The scheduler's main function, schedule(), checks whether there are any tasks
left on the runqueue. If there are none, it calls idle_balance() to try to pull
a task onto the current runqueue, assuming the cpu will otherwise go idle.

But idle_balance() releases rq->lock in order to look up the sched domains and
takes the lock again right after. That opens a window where another cpu may put
a task on our runqueue: we will not go idle after all, yet we have already set
idle_stamp, thinking we would.

This patch closes the window by checking, after retaking the lock, whether the
runqueue has been modified without a task having been pulled; if so, we return
early, so __schedule() will not go idle right after.

Cc: alex.shi@linaro.org
Cc: peterz@infradead.org
Cc: mingo@kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched/fair.c |    7 +++++++
 1 file changed, 7 insertions(+)

Comments

Daniel Lezcano Feb. 11, 2014, 11:11 a.m. UTC | #1
On 02/10/2014 10:24 AM, Preeti Murthy wrote:
> Hi Daniel,
>
> Isn't the only scenario where another cpu can put an idle task on
> our runqueue,

Well, I am not sure I understand what you mean, but I assume you are asking 
whether it is possible for a task to be pulled while we are idle, right?

This patch fixes the race when the current cpu is *about* to enter idle 
when calling schedule().


> in nohz_idle_balance() where only the cpus in
> the nohz.idle_cpus_mask are iterated through. But for the case
> that this patch is addressing, the cpu in question is not yet a part
> of the nohz.idle_cpus_mask right?
>
> Any other case would trigger load balancing on the same cpu, but
> we are preempt_disabled and interrupt disabled at this point.
>
> Thanks
>
> Regards
> Preeti U Murthy
>
> On Fri, Feb 7, 2014 at 4:40 AM, Daniel Lezcano
> <daniel.lezcano@linaro.org> wrote:
>> The scheduler main function 'schedule()' checks if there are no more tasks
>> on the runqueue. Then it checks if a task should be pulled in the current
>> runqueue in idle_balance() assuming it will go to idle otherwise.
>>
>> But the idle_balance() releases the rq->lock in order to lookup in the sched
>> domains and takes the lock again right after. That opens a window where
>> another cpu may put a task in our runqueue, so we won't go to idle but
>> we have filled the idle_stamp, thinking we will.
>>
>> This patch closes the window by checking if the runqueue has been modified
>> but without pulling a task after taking the lock again, so we won't go to idle
>> right after in the __schedule() function.
>>
>> Cc: alex.shi@linaro.org
>> Cc: peterz@infradead.org
>> Cc: mingo@kernel.org
>> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
>> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
>> ---
>>   kernel/sched/fair.c |    7 +++++++
>>   1 file changed, 7 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 428bc9d..5ebc681 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6589,6 +6589,13 @@ void idle_balance(struct rq *this_rq)
>>
>>          raw_spin_lock(&this_rq->lock);
>>
>> +       /*
>> +        * While browsing the domains, we released the rq lock.
>> +        * A task could have be enqueued in the meantime
>> +        */
>> +       if (this_rq->nr_running && !pulled_task)
>> +               return;
>> +
>>          if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
>>                  /*
>>                   * We are going idle. next_balance may be set based on
>> --
>> 1.7.9.5
>>
Alex Shi Feb. 13, 2014, 7:45 a.m. UTC | #2
On 02/11/2014 07:11 PM, Daniel Lezcano wrote:
> On 02/10/2014 10:24 AM, Preeti Murthy wrote:
>> HI Daniel,
>>
>> Isn't the only scenario where another cpu can put an idle task on
>> our runqueue,
> 
> Well, I am not sure to understand what you meant, but I assume you are
> asking if it is possible to have a task to be pulled when we are idle,
> right ?
> 
> This patch fixes the race when the current cpu is *about* to enter idle
> when calling schedule().

Preeti said that she didn't see how it is possible to insert a task on the cpu.

I also did a quick check; maybe the task comes from the wakeup path?
Alex Shi Feb. 13, 2014, 7:46 a.m. UTC | #3
On 02/07/2014 07:10 AM, Daniel Lezcano wrote:
> The scheduler main function 'schedule()' checks if there are no more tasks
> on the runqueue. Then it checks if a task should be pulled in the current
> runqueue in idle_balance() assuming it will go to idle otherwise.
> 
> But the idle_balance() releases the rq->lock in order to lookup in the sched
> domains and takes the lock again right after. That opens a window where
> another cpu may put a task in our runqueue, so we won't go to idle but
> we have filled the idle_stamp, thinking we will.
> 
> This patch closes the window by checking if the runqueue has been modified
> but without pulling a task after taking the lock again, so we won't go to idle
> right after in the __schedule() function.
> 
> Cc: alex.shi@linaro.org
> Cc: peterz@infradead.org
> Cc: mingo@kernel.org
> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
> Signed-off-by: Peter Zijlstra <peterz@infradead.org>
> ---
>  kernel/sched/fair.c |    7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 428bc9d..5ebc681 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6589,6 +6589,13 @@ void idle_balance(struct rq *this_rq)
>  
>  	raw_spin_lock(&this_rq->lock);
>  
> +	/*
> +	 * While browsing the domains, we released the rq lock.
> +	 * A task could have be enqueued in the meantime
> +	 */

Would you mind moving the following lines up to here?

        if (curr_cost > this_rq->max_idle_balance_cost)
                this_rq->max_idle_balance_cost = curr_cost;

> +	if (this_rq->nr_running && !pulled_task)
> +		return;
> +
>  	if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
>  		/*
>  		 * We are going idle. next_balance may be set based on
>
Daniel Lezcano Feb. 13, 2014, 10:22 a.m. UTC | #4
On 02/13/2014 11:10 AM, Preeti U Murthy wrote:
> Hi,
>
> On 02/13/2014 01:15 PM, Alex Shi wrote:
>> On 02/11/2014 07:11 PM, Daniel Lezcano wrote:
>>> On 02/10/2014 10:24 AM, Preeti Murthy wrote:
>>>> HI Daniel,
>>>>
>>>> Isn't the only scenario where another cpu can put an idle task on
>>>> our runqueue,
>>>
>>> Well, I am not sure to understand what you meant, but I assume you are
>>> asking if it is possible to have a task to be pulled when we are idle,
>>> right ?
>>>
>>> This patch fixes the race when the current cpu is *about* to enter idle
>>> when calling schedule().
>>
>> Preeti said the she didn't see a possible to insert a task on the cpu.
>>
>> I also did a quick check, maybe task come from wakeup path?
>
> Yes this is possible. Thanks for pointing this :)
>
> Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>

Thanks for the review !

   -- Daniel

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 428bc9d..5ebc681 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6589,6 +6589,13 @@  void idle_balance(struct rq *this_rq)
 
 	raw_spin_lock(&this_rq->lock);
 
+	/*
+	 * While browsing the domains, we released the rq lock.
+	 * A task could have been enqueued in the meantime.
+	 */
+	if (this_rq->nr_running && !pulled_task)
+		return;
+
 	if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
 		/*
 		 * We are going idle. next_balance may be set based on