From patchwork Tue Jan 28 17:16:37 2014
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 23807
Message-Id: <20140128171948.009146613@infradead.org>
User-Agent: quilt/0.60-1
Date: Tue, 28 Jan 2014 18:16:37 +0100
From: Peter Zijlstra
To: linux-kernel@vger.kernel.org
Cc: mingo@kernel.org, daniel.lezcano@linaro.org, pjt@google.com,
	bsegall@google.com, Steven Rostedt, Vincent Guittot, Peter Zijlstra
Subject: [PATCH 3/9] sched: Move idle_stamp up to the core
References: <20140128171634.974847076@infradead.org>
Content-Disposition: inline; filename=daniel_lezcano-3_sched-move_idle_stamp_up_to_the_core.patch

From: Daniel Lezcano

idle_balance() modifies the idle_stamp field of the rq, so this information
is currently shared between core.c and fair.c. Since the previous patch lets
us tell whether the cpu is about to go idle, encapsulate the idle_stamp
handling in core.c by moving it up to the caller.

idle_balance() now returns true when a balance occurred and the cpu will not
go idle, and false when no balance happened and the cpu is going idle.
Cc: mingo@kernel.org
Cc: alex.shi@linaro.org
Cc: peterz@infradead.org
Signed-off-by: Daniel Lezcano
Signed-off-by: Peter Zijlstra
---
 kernel/sched/core.c  | 11 +++++++++--
 kernel/sched/fair.c  | 14 ++++++--------
 kernel/sched/sched.h |  2 +-
 3 files changed, 16 insertions(+), 11 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2681,8 +2681,15 @@ static void __sched __schedule(void)
 
 	pre_schedule(rq, prev);
 
-	if (unlikely(!rq->nr_running))
-		idle_balance(rq);
+	if (unlikely(!rq->nr_running)) {
+		/*
+		 * We must set idle_stamp _before_ calling idle_balance(), such
+		 * that we measure the duration of idle_balance() as idle time.
+		 */
+		rq->idle_stamp = rq_clock(rq);
+		if (idle_balance(rq))
+			rq->idle_stamp = 0;
+	}
 
 	put_prev_task(rq, prev);
 	next = pick_next_task(rq);
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6535,7 +6535,7 @@ static int load_balance(int this_cpu, st
  * idle_balance is called by schedule() if this_cpu is about to become
  * idle. Attempts to pull tasks from other CPUs.
  */
-void idle_balance(struct rq *this_rq)
+int idle_balance(struct rq *this_rq)
 {
 	struct sched_domain *sd;
 	int pulled_task = 0;
@@ -6543,10 +6543,8 @@ void idle_balance(struct rq *this_rq)
 	u64 curr_cost = 0;
 	int this_cpu = this_rq->cpu;
 
-	this_rq->idle_stamp = rq_clock(this_rq);
-
 	if (this_rq->avg_idle < sysctl_sched_migration_cost)
-		return;
+		return 0;
 
 	/*
 	 * Drop the rq->lock, but keep IRQ/preempt disabled.
@@ -6584,10 +6582,8 @@ void idle_balance(struct rq *this_rq)
 		interval = msecs_to_jiffies(sd->balance_interval);
 		if (time_after(next_balance, sd->last_balance + interval))
 			next_balance = sd->last_balance + interval;
-		if (pulled_task) {
-			this_rq->idle_stamp = 0;
+		if (pulled_task)
 			break;
-		}
 	}
 	rcu_read_unlock();
 
@@ -6598,7 +6594,7 @@ void idle_balance(struct rq *this_rq)
 	 * A task could have be enqueued in the meantime
 	 */
 	if (this_rq->nr_running && !pulled_task)
-		return;
+		return 1;
 
 	if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
 		/*
@@ -6610,6 +6606,8 @@ void idle_balance(struct rq *this_rq)
 
 	if (curr_cost > this_rq->max_idle_balance_cost)
 		this_rq->max_idle_balance_cost = curr_cost;
+
+	return pulled_task;
 }
 
 /*
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1176,7 +1176,7 @@ extern const struct sched_class idle_sch
 extern void update_group_power(struct sched_domain *sd, int cpu);
 extern void trigger_load_balance(struct rq *rq);
 
-extern void idle_balance(struct rq *this_rq);
+extern int idle_balance(struct rq *this_rq);
 
 extern void idle_enter_fair(struct rq *this_rq);
 extern void idle_exit_fair(struct rq *this_rq);