From patchwork Thu Apr  5 22:26:04 2012
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 7686
Date: Thu, 5 Apr 2012 15:26:04 -0700
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@efficios.com,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
	fweisbec@gmail.com, patches@linaro.org, sergey.senozhatsky@gmail.com
Subject: [PATCH tip/urgent] rcu: Permit call_rcu() from CPU_DYING notifiers
Message-ID: <20120405222604.GA15713@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com

As of commit 29494be7, RCU adopts callbacks from the dying CPU in its
CPU_DYING notifier, which means that any callbacks posted by later
CPU_DYING notifiers are ignored until the CPU comes back online.  A
WARN_ON_ONCE() was added to __call_rcu() by commit e5601400 to check
for this condition.  Although this condition did not trigger (at least
as far as I know) during -next testing, it did recently trigger in
mainline (https://lkml.org/lkml/2012/4/2/34).

This commit therefore causes RCU's CPU_DEAD notifier to adopt any
callbacks that were posted by CPU_DYING notifiers, and removes the
WARN_ON_ONCE() from __call_rcu().  A more targeted warning for callback
posting from offline CPUs will be added as a separate commit.

Signed-off-by: Paul E. McKenney
Tested-by: Sergey Senozhatsky

---
 rcutree.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 1050d6d..4c927e6 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1406,11 +1406,41 @@ static void rcu_cleanup_dying_cpu(struct rcu_state *rsp)
 static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp)
 {
 	unsigned long flags;
+	int i;
 	unsigned long mask;
 	int need_report = 0;
 	struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);
 	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rnp. */
 
+	/* If a CPU_DYING notifier has enqueued callbacks, adopt them. */
+	if (rdp->nxtlist != NULL) {
+		struct rcu_data *receive_rdp;
+
+		local_irq_save(flags);
+		receive_rdp = per_cpu_ptr(rsp->rda, smp_processor_id());
+
+		/* Adjust the counts. */
+		receive_rdp->qlen_lazy += rdp->qlen_lazy;
+		receive_rdp->qlen += rdp->qlen;
+		rdp->qlen_lazy = 0;
+		rdp->qlen = 0;
+
+		/*
+		 * Adopt all callbacks.  The outgoing CPU was in no shape
+		 * to advance them, so make them all go through a full
+		 * grace period.
+		 */
+		*receive_rdp->nxttail[RCU_NEXT_TAIL] = rdp->nxtlist;
+		receive_rdp->nxttail[RCU_NEXT_TAIL] =
+				rdp->nxttail[RCU_NEXT_TAIL];
+		local_irq_restore(flags);
+
+		/* Initialize the outgoing CPU's callback list. */
+		rdp->nxtlist = NULL;
+		for (i = 0; i < RCU_NEXT_SIZE; i++)
+			rdp->nxttail[i] = &rdp->nxtlist;
+	}
+
 	/* Adjust any no-longer-needed kthreads. */
 	rcu_stop_cpu_kthread(cpu);
 	rcu_node_kthread_setaffinity(rnp, -1);
@@ -1820,7 +1850,6 @@ __call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu),
 	 * a quiescent state betweentimes.
 	 */
 	local_irq_save(flags);
-	WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
 	rdp = this_cpu_ptr(rsp->rda);
 
 	/* Add the callback to our list. */