From patchwork Sat Jan  5 17:49:02 2013
X-Patchwork-Submitter: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
X-Patchwork-Id: 13831
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, edumazet@google.com, darren@dvhart.com,
	fweisbec@gmail.com, sbw@mit.edu, patches@linaro.org,
	"Paul E. McKenney", "Paul E. McKenney"
Subject: [PATCH tip/core/rcu 12/14] rcu: Rename n_nocb_gp_requests to need_future_gp
Date: Sat,  5 Jan 2013 09:49:02 -0800
Message-Id: <1357408144-15830-12-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.8
In-Reply-To: <1357408144-15830-1-git-send-email-paulmck@linux.vnet.ibm.com>
References: <20130105174844.GA14172@linux.vnet.ibm.com>
	<1357408144-15830-1-git-send-email-paulmck@linux.vnet.ibm.com>

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

CPUs going idle need to be able to indicate their need for future grace
periods.  A mechanism for doing this already exists for no-callbacks
CPUs, so the idea is to re-use that mechanism.  This commit therefore
moves the ->n_nocb_gp_requests field of the rcu_node structure out from
under the CONFIG_RCU_NOCB_CPU #ifdef and renames it to ->need_future_gp.

Signed-off-by: Paul E. McKenney
Signed-off-by: Paul E. McKenney
---
 kernel/rcutree.h        |  4 ++--
 kernel/rcutree_plugin.h | 18 +++++++++---------
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index 282b1d7..775d96c 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -198,9 +198,9 @@ struct rcu_node {
 #ifdef CONFIG_RCU_NOCB_CPU
 	wait_queue_head_t nocb_gp_wq[2];
 				/* Place for rcu_nocb_kthread() to wait GP. */
-	int n_nocb_gp_requests[2];
-				/* Counts of upcoming no-CB GP requests. */
 #endif /* #ifdef CONFIG_RCU_NOCB_CPU */
+	int need_future_gp[2];
+				/* Counts of upcoming no-CB GP requests. */
 	raw_spinlock_t fqslock ____cacheline_internodealigned_in_smp;
 } ____cacheline_internodealigned_in_smp;
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 736dd2c..e4037bd 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -2057,7 +2057,7 @@ static int rcu_nocb_needs_gp(struct rcu_state *rsp)
 {
 	struct rcu_node *rnp = rcu_get_root(rsp);
 
-	return rnp->n_nocb_gp_requests[(ACCESS_ONCE(rnp->completed) + 1) & 0x1];
+	return rnp->need_future_gp[(ACCESS_ONCE(rnp->completed) + 1) & 0x1];
 }
 
 /*
@@ -2071,8 +2071,8 @@ static int rcu_nocb_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp)
 	int needmore;
 
 	wake_up_all(&rnp->nocb_gp_wq[c & 0x1]);
-	rnp->n_nocb_gp_requests[c & 0x1] = 0;
-	needmore = rnp->n_nocb_gp_requests[(c + 1) & 0x1];
+	rnp->need_future_gp[c & 0x1] = 0;
+	needmore = rnp->need_future_gp[(c + 1) & 0x1];
 	trace_rcu_future_grace_period(rsp->name, rnp->gpnum, rnp->completed, c,
 				      rnp->level, rnp->grplo, rnp->grphi,
 				      needmore ? "CleanupMore" : "Cleanup");
@@ -2080,7 +2080,7 @@ static int rcu_nocb_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp)
 }
 
 /*
- * Set the root rcu_node structure's ->n_nocb_gp_requests field
+ * Set the root rcu_node structure's ->need_future_gp field
  * based on the sum of those of all rcu_node structures.  This does
  * double-count the root rcu_node structure's requests, but this
  * is necessary to handle the possibility of a rcu_nocb_kthread()
@@ -2089,7 +2089,7 @@ static int rcu_nocb_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp)
  */
 static void rcu_nocb_gp_set(struct rcu_node *rnp, int nrq)
 {
-	rnp->n_nocb_gp_requests[(rnp->completed + 1) & 0x1] += nrq;
+	rnp->need_future_gp[(rnp->completed + 1) & 0x1] += nrq;
 }
 
 static void rcu_init_one_nocb(struct rcu_node *rnp)
@@ -2220,7 +2220,7 @@ static void rcu_nocb_wait_gp(struct rcu_data *rdp)
 	c = rnp->completed + 2;
 
 	/* Count our request for a grace period. */
-	rnp->n_nocb_gp_requests[c & 0x1]++;
+	rnp->need_future_gp[c & 0x1]++;
 	trace_rcu_future_grace_period(rdp->rsp->name, rnp->gpnum,
 				      rnp->completed, c, rnp->level,
 				      rnp->grplo, rnp->grphi, "Startleaf");
@@ -2264,10 +2264,10 @@ static void rcu_nocb_wait_gp(struct rcu_data *rdp)
 		 * Adjust counters accordingly and start the
 		 * needed grace period.
 		 */
-		rnp->n_nocb_gp_requests[c & 0x1]--;
+		rnp->need_future_gp[c & 0x1]--;
 		c = rnp_root->completed + 1;
-		rnp->n_nocb_gp_requests[c & 0x1]++;
-		rnp_root->n_nocb_gp_requests[c & 0x1]++;
+		rnp->need_future_gp[c & 0x1]++;
+		rnp_root->need_future_gp[c & 0x1]++;
 		trace_rcu_future_grace_period(rdp->rsp->name, rnp->gpnum,
 					      rnp->completed,
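
For readers following the diff: the mechanism being generalized here is a
two-element counter array indexed by grace-period number parity.  Requests
for grace period number c are counted in slot c & 0x1; when grace period c
completes, its slot is zeroed and recycled for grace period c + 2, and the
other slot tells cleanup whether a further grace period is already needed.
Below is a minimal user-space sketch of that bookkeeping.  The names
fake_rcu_node, record_gp_request, and cleanup_gp are invented for this
illustration, and all of the locking, wakeups, and rcu_node hierarchy of
the real kernel code are omitted; this is not the kernel implementation
itself.

#include <stdio.h>

/*
 * Sketch of the ->need_future_gp[2] bookkeeping.  Two slots suffice
 * because, as the diff shows, requests target only rnp->completed + 1
 * or rnp->completed + 2, so by the time grace period c ends, slot
 * c & 0x1 can be reused for grace period c + 2.
 */
struct fake_rcu_node {
	unsigned long completed;	/* Number of last completed GP. */
	int need_future_gp[2];		/* Request counts, indexed by GP parity. */
};

/* Count a request for grace period number c (cf. rcu_nocb_wait_gp()). */
static void record_gp_request(struct fake_rcu_node *rnp, unsigned long c)
{
	rnp->need_future_gp[c & 0x1]++;
}

/*
 * Grace period c has completed: zero its slot and report whether the
 * other slot already holds requests (cf. rcu_nocb_gp_cleanup()'s
 * "needmore" and its "CleanupMore"/"Cleanup" tracing).
 */
static int cleanup_gp(struct fake_rcu_node *rnp, unsigned long c)
{
	rnp->need_future_gp[c & 0x1] = 0;
	rnp->completed = c;
	return rnp->need_future_gp[(c + 1) & 0x1];
}

int main(void)
{
	struct fake_rcu_node rnp = { .completed = 4 };

	record_gp_request(&rnp, 5);	/* Lands in slot 5 & 0x1 == 1. */
	record_gp_request(&rnp, 6);	/* Lands in slot 6 & 0x1 == 0. */

	/* GP 5 ends with a GP 6 request pending: prints "needmore = 1". */
	printf("needmore = %d\n", cleanup_gp(&rnp, 5));

	/* GP 6 ends with nothing pending: prints "needmore = 0". */
	printf("needmore = %d\n", cleanup_gp(&rnp, 6));
	return 0;
}

The rename in this patch changes none of that logic; it only moves the
array out from under CONFIG_RCU_NOCB_CPU so that idle CPUs, not just
no-callbacks CPUs, can use the same slots to register their need for
future grace periods.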