From patchwork Wed Jul 20 04:54:15 2011
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 2781
Date: Tue, 19 Jul 2011 21:54:15 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, mingo@elte.hu, laijs@cn.fujitsu.com,
	dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@polymtl.ca, josh@joshtriplett.org, niv@us.ibm.com,
	tglx@linutronix.de, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
	patches@linaro.org, greearb@candelatech.com, edt@aei.ca
Subject: Re: [PATCH tip/core/urgent 1/7] rcu: decrease rcu_report_exp_rnp coupling with scheduler
Message-ID: <20110720045414.GC2400@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20110720001738.GA16369@linux.vnet.ibm.com>
	<1311121103-16978-1-git-send-email-paulmck@linux.vnet.ibm.com>
	<1311129618.5345.2.camel@twins>
In-Reply-To: <1311129618.5345.2.camel@twins>

On Wed, Jul 20, 2011 at 04:40:18AM +0200, Peter Zijlstra wrote:
> On Tue, 2011-07-19 at 17:18 -0700, Paul E. McKenney wrote:
> > +++ b/kernel/rcutree_plugin.h
> > @@ -696,8 +696,10 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp)
> >  	raw_spin_lock_irqsave(&rnp->lock, flags);
> >  	for (;;) {
> >  		if (!sync_rcu_preempt_exp_done(rnp))
> > +			raw_spin_unlock_irqrestore(&rnp->lock, flags);
> >  			break;
>
> I bet that'll all work much better if you wrap it in curly braces like:
>
> 	if (!sync_rcu_preempt_exp_done(rnp)) {
> 		raw_spin_unlock_irqrestore(&rnp->lock, flags);
> 		break;
> 	}
>
> That might also explain those explosions Ed and Ben have been seeing.

Indeed.  Must be the call of the snake.  :-(  Thank you for catching
this!

> >  		if (rnp->parent == NULL) {
> > +			raw_spin_unlock_irqrestore(&rnp->lock, flags);
> >  			wake_up(&sync_rcu_preempt_exp_wq);
> >  			break;
> >  		}
> > @@ -707,7 +709,6 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp)
> >  		raw_spin_lock(&rnp->lock);	/* irqs already disabled */
> >  		rnp->expmask &= ~mask;
> >  	}
> > -	raw_spin_unlock_irqrestore(&rnp->lock, flags);
> > }

So this time I am testing the exact patch series before resending.  In
the meantime, here is the updated version of this patch.

							Thanx, Paul

------------------------------------------------------------------------

rcu: decrease rcu_report_exp_rnp coupling with scheduler

PREEMPT_RCU read-side critical sections blocking an expedited grace
period invoke rcu_report_exp_rnp().  When the last such critical
section has completed, rcu_report_exp_rnp() invokes the scheduler to
wake up the task that invoked synchronize_rcu_expedited() -- needlessly
holding the root rcu_node structure's lock while doing so, thus
needlessly providing a way for RCU and the scheduler to deadlock.  This
commit therefore releases the root rcu_node structure's lock before
calling wake_up().

Reported-by: Ed Tomlinson
Signed-off-by: Paul E. McKenney
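
To make the rationale above concrete in a self-contained way: the rule
the patch enforces is "drop the lock, then do the wakeup," so the waker
never runs wakeup/scheduler code while holding a lock that code might
also want.  Below is a rough userspace sketch of that ordering.  The
pthread primitives and all names here (exp_lock, exp_wq, exp_blockers,
exp_reader_done(), exp_wait()) are illustrative stand-ins for the
kernel's rcu_node lock and sync_rcu_preempt_exp_wq, not the actual
kernel API:

	#include <pthread.h>

	static pthread_mutex_t exp_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t exp_wq = PTHREAD_COND_INITIALIZER;
	static int exp_blockers;	/* readers still blocking the expedited GP */

	/*
	 * Called as each blocking reader finishes.  Note the ordering:
	 * the lock is released first and only then is the wakeup
	 * issued, mirroring the patch's placement of
	 * raw_spin_unlock_irqrestore() before wake_up().
	 */
	void exp_reader_done(void)
	{
		int last;

		pthread_mutex_lock(&exp_lock);
		last = (--exp_blockers == 0);
		pthread_mutex_unlock(&exp_lock);	/* release first... */
		if (last)
			pthread_cond_broadcast(&exp_wq);	/* ...then wake */
	}

	/*
	 * The synchronize_rcu_expedited() side: sleep until the last
	 * reader reports in.
	 */
	void exp_wait(void)
	{
		pthread_mutex_lock(&exp_lock);
		while (exp_blockers > 0)
			pthread_cond_wait(&exp_wq, &exp_lock);
		pthread_mutex_unlock(&exp_lock);
	}

Because the waiter rechecks its predicate under the lock, issuing the
broadcast after the unlock cannot lose a wakeup; the analogous
reasoning is what makes it safe for the patch to call wake_up()
outside the rcu_node lock.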
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 75113cb..6abef3c 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -695,9 +695,12 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp)
 
 	raw_spin_lock_irqsave(&rnp->lock, flags);
 	for (;;) {
-		if (!sync_rcu_preempt_exp_done(rnp))
+		if (!sync_rcu_preempt_exp_done(rnp)) {
+			raw_spin_unlock_irqrestore(&rnp->lock, flags);
 			break;
+		}
 		if (rnp->parent == NULL) {
+			raw_spin_unlock_irqrestore(&rnp->lock, flags);
 			wake_up(&sync_rcu_preempt_exp_wq);
 			break;
 		}
@@ -707,7 +710,6 @@ static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp)
 		raw_spin_lock(&rnp->lock);	/* irqs already disabled */
 		rnp->expmask &= ~mask;
 	}
-	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 }
 
 /*
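
One more self-contained illustration of the original bug, for anyone
skimming the thread: C binds only the single next statement to an
unbraced "if", regardless of indentation, which is exactly why the
first version unlocked conditionally but broke out of the loop
unconditionally.  A minimal userspace demonstration -- the printf()
calls are hypothetical stand-ins, nothing here is kernel code:

	#include <stdio.h>

	int main(void)
	{
		int done = 1;	/* pretend the grace period completed */

		/*
		 * Buggy shape: the indentation suggests both statements
		 * are guarded, but only the printf() belongs to the
		 * "if" -- the break always executes, so this loop body
		 * runs exactly once regardless of "done", and the
		 * unlock stand-in runs only on the !done path.
		 */
		for (;;) {
			if (!done)
				printf("unlock\n");	/* conditional */
				break;			/* unconditional! */
		}

		/*
		 * Fixed shape, as in the patch: braces bind the unlock
		 * and the break together, so both happen or neither does.
		 */
		for (;;) {
			if (!done) {
				printf("unlock\n");
				break;
			}
			printf("report completion up the tree\n");
			break;	/* stand-in for the rest of the loop body */
		}
		return 0;
	}

Recent GCC and Clang will both flag the first shape under
-Wmisleading-indentation.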