From patchwork Wed Jul 20 00:18:21 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 2774
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
	patches@linaro.org, greearb@candelatech.com, edt@aei.ca,
	Peter Zijlstra, "Paul E. McKenney"
Subject: [PATCH tip/core/urgent 5/7] sched: Add irq_{enter, exit}() to scheduler_ipi()
Date: Tue, 19 Jul 2011 17:18:21 -0700
Message-Id: <1311121103-16978-5-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.3.2
In-Reply-To: <20110720001738.GA16369@linux.vnet.ibm.com>
References: <20110720001738.GA16369@linux.vnet.ibm.com>

From: Peter Zijlstra

Ensure scheduler_ipi() calls irq_{enter,exit} when it does some actual
work.
Traditionally we never did any actual work from the resched IPI and
all magic happened in the return from interrupt path.

Now that we actually do some work, we need to ensure irq_{enter,exit}
are called so that we don't confuse things.

This affects things like timekeeping, NO_HZ, and RCU: basically
everything with a hook in irq_enter/exit.

Explicit examples of things going wrong are:

  sched_clock_cpu() -- has a callback when leaving NO_HZ state to take
  a new reading from GTOD and TSC. Without this callback, time is
  stuck in the past.

  RCU -- needs in_irq() to work in order to avoid some nasty deadlocks.

Signed-off-by: Peter Zijlstra
Signed-off-by: Paul E. McKenney
---
 kernel/sched.c |   44 ++++++++++++++++++++++++++++++++++++++------
 1 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 9769c75..1930ee1 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2544,13 +2544,9 @@ static int ttwu_remote(struct task_struct *p, int wake_flags)
 }
 
 #ifdef CONFIG_SMP
-static void sched_ttwu_pending(void)
+static void sched_ttwu_do_pending(struct task_struct *list)
 {
 	struct rq *rq = this_rq();
-	struct task_struct *list = xchg(&rq->wake_list, NULL);
-
-	if (!list)
-		return;
 
 	raw_spin_lock(&rq->lock);
 
@@ -2563,9 +2559,45 @@ static void sched_ttwu_pending(void)
 	raw_spin_unlock(&rq->lock);
 }
 
+#ifdef CONFIG_HOTPLUG_CPU
+
+static void sched_ttwu_pending(void)
+{
+	struct rq *rq = this_rq();
+	struct task_struct *list = xchg(&rq->wake_list, NULL);
+
+	if (!list)
+		return;
+
+	sched_ttwu_do_pending(list);
+}
+
+#endif /* CONFIG_HOTPLUG_CPU */
+
 void scheduler_ipi(void)
 {
-	sched_ttwu_pending();
+	struct rq *rq = this_rq();
+	struct task_struct *list = xchg(&rq->wake_list, NULL);
+
+	if (!list)
+		return;
+
+	/*
+	 * Not all reschedule IPI handlers call irq_enter/irq_exit, since
+	 * traditionally all their work was done from the interrupt return
+	 * path. Now that we actually do some work, we need to make sure
+	 * we do call them.
+	 *
+	 * Some archs already do call them, luckily irq_enter/exit nest
+	 * properly.
+	 *
+	 * Arguably we should visit all archs and update all handlers,
+	 * however a fair share of IPIs are still resched only so this would
+	 * somewhat pessimize the simple resched case.
+	 */
+	irq_enter();
+	sched_ttwu_do_pending(list);
+	irq_exit();
 }
 
 static void ttwu_queue_remote(struct task_struct *p, int cpu)
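
[Editor's note: to make the changelog's "irq_enter/exit nest properly" and
"RCU needs in_irq()" points concrete, here is a minimal user-space sketch of
the hardirq accounting involved. It is not kernel code: the shift/mask
values, the trivial irq_exit(), and the rcu_style_check() helper are
assumptions of this model (the real kernel keeps the hardirq count in the
per-CPU preempt_count, and its irq_exit() also handles softirqs and NO_HZ
bookkeeping). It only illustrates why a nested irq_enter()/irq_exit() pair
in scheduler_ipi() is harmless on architectures whose IPI entry code already
called them.]

/*
 * User-space model (illustrative sketch only) of the hardirq accounting
 * behind irq_enter()/irq_exit() and in_irq().
 */
#include <assert.h>
#include <stdio.h>

#define HARDIRQ_SHIFT	16
#define HARDIRQ_OFFSET	(1U << HARDIRQ_SHIFT)
#define HARDIRQ_MASK	(0xffU << HARDIRQ_SHIFT)

static unsigned int preempt_count;	/* per-CPU in the real kernel */

static int in_irq(void)
{
	return preempt_count & HARDIRQ_MASK;	/* non-zero in hardirq context */
}

static void irq_enter(void)
{
	preempt_count += HARDIRQ_OFFSET;	/* nesting: counts just stack up */
}

static void irq_exit(void)
{
	preempt_count -= HARDIRQ_OFFSET;
}

static void rcu_style_check(void)
{
	/*
	 * RCU's deadlock avoidance keys off in_irq(); without an enclosing
	 * irq_enter() this would report process context.
	 */
	printf("in_irq() = %d\n", !!in_irq());
}

int main(void)
{
	irq_enter();		/* arch IPI entry code that already accounts the irq */
	irq_enter();		/* nested pair added by scheduler_ipi() */
	rcu_style_check();	/* prints "in_irq() = 1" */
	irq_exit();
	assert(in_irq());	/* outer level still active, nothing confused */
	irq_exit();
	assert(!in_irq());	/* fully unwound */
	return 0;
}

[On architectures whose reschedule IPI path never calls irq_enter()/irq_exit(),
in_irq() would read false while scheduler_ipi() runs its wakeups; that is the
situation the patch closes by wrapping sched_ttwu_do_pending() in an explicit
irq_enter()/irq_exit() pair.]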