From patchwork Tue Oct 30 16:27:54 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
X-Patchwork-Id: 12586
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, edumazet@google.com, darren@dvhart.com,
	fweisbec@gmail.com, sbw@mit.edu, patches@linaro.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Subject: [PATCH tip/core/rcu 5/6] rcu: Clarify memory-ordering properties of grace-period primitives
Date: Tue, 30 Oct 2012 09:27:54 -0700
Message-Id: <1351614475-22895-5-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.8
In-Reply-To: <1351614475-22895-1-git-send-email-paulmck@linux.vnet.ibm.com>
References: <20121030162728.GA22648@linux.vnet.ibm.com>
 <1351614475-22895-1-git-send-email-paulmck@linux.vnet.ibm.com>

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

This commit explicitly states the memory-ordering properties of the
RCU grace-period primitives.  Although these properties were in some
sense implied by the fundamental property of RCU ("a grace period must
wait for all pre-existing RCU read-side critical sections to complete"),
stating them explicitly will be a great labor-saving device.

Reported-by: Oleg Nesterov
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/rcupdate.h |   20 ++++++++++++++++++++
 kernel/rcutree.c         |   16 ++++++++++++++++
 kernel/rcutree_plugin.h  |    8 ++++++++
 3 files changed, 44 insertions(+), 0 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 7c968e4..91d530a 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -90,6 +90,20 @@ extern void do_trace_rcu_torture_read(char *rcutorturename,
  * that started after call_rcu() was invoked.  RCU read-side critical
  * sections are delimited by rcu_read_lock() and rcu_read_unlock(),
  * and may be nested.
+ *
+ * Note that all CPUs must agree that the grace period extended beyond
+ * all pre-existing RCU read-side critical sections.  This means that
+ * on systems with more than one CPU, when "func()" is invoked, each
+ * CPU is guaranteed to have executed a full memory barrier since the
+ * end of its last RCU read-side critical section whose beginning
+ * preceded the call to call_rcu().  Note that this guarantee includes
+ * CPUs that are offline, idle, or executing in user mode, as well as
+ * CPUs that are executing in the kernel.  Furthermore, if CPU A
+ * invoked call_rcu() and CPU B invoked the resulting RCU callback
+ * function "func()", then both CPU A and CPU B are guaranteed to execute
+ * a full memory barrier during the time interval between the call to
+ * call_rcu() and the invocation of "func()" -- even if CPU A and CPU B
+ * are the same CPU (but again only if the system has more than one CPU).
  */
 extern void call_rcu(struct rcu_head *head,
 		     void (*func)(struct rcu_head *head));
@@ -118,6 +132,9 @@ extern void call_rcu(struct rcu_head *head,
  * OR
  * - rcu_read_lock_bh() and rcu_read_unlock_bh(), if in process context.
  * These may be nested.
+ *
+ * See the description of call_rcu() for more detailed information on
+ * memory ordering guarantees.
  */
 extern void call_rcu_bh(struct rcu_head *head,
 			void (*func)(struct rcu_head *head));
@@ -137,6 +154,9 @@ extern void call_rcu_bh(struct rcu_head *head,
  * OR
  * anything that disables preemption.
  * These may be nested.
+ *
+ * See the description of call_rcu() for more detailed information on
+ * memory ordering guarantees.
  */
 extern void call_rcu_sched(struct rcu_head *head,
 			   void (*func)(struct rcu_head *rcu));
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index e4c2192..ca32215 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -2233,6 +2233,19 @@ static inline int rcu_blocking_is_gp(void)
  * softirq handlers will have completed, since in some kernels, these
  * handlers can run in process context, and can block.
  *
+ * Note that this guarantee implies a further memory-ordering guarantee.
+ * On systems with more than one CPU, when synchronize_sched() returns,
+ * each CPU is guaranteed to have executed a full memory barrier since
+ * the end of its last RCU-sched read-side critical section whose beginning
+ * preceded the call to synchronize_sched().  Note that this guarantee
+ * includes CPUs that are offline, idle, or executing in user mode, as
+ * well as CPUs that are executing in the kernel.  Furthermore, if CPU A
+ * invoked synchronize_sched(), which returned to its caller on CPU B,
+ * then both CPU A and CPU B are guaranteed to have executed a full memory
+ * barrier during the execution of synchronize_sched() -- even if CPU A
+ * and CPU B are the same CPU (but again only if the system has more than
+ * one CPU).
+ *
  * This primitive provides the guarantees made by the (now removed)
  * synchronize_kernel() API.  In contrast, synchronize_rcu() only
  * guarantees that rcu_read_lock() sections will have completed.
@@ -2259,6 +2272,9 @@ EXPORT_SYMBOL_GPL(synchronize_sched);
  * read-side critical sections have completed.  RCU read-side critical
  * sections are delimited by rcu_read_lock_bh() and rcu_read_unlock_bh(),
  * and may be nested.
+ *
+ * See the description of synchronize_sched() for more detailed information
+ * on memory ordering guarantees.
  */
 void synchronize_rcu_bh(void)
 {
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index f921154..0f370a8 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -670,6 +670,9 @@ EXPORT_SYMBOL_GPL(kfree_call_rcu);
  * concurrently with new RCU read-side critical sections that began while
  * synchronize_rcu() was waiting.  RCU read-side critical sections are
  * delimited by rcu_read_lock() and rcu_read_unlock(), and may be nested.
+ *
+ * See the description of synchronize_sched() for more detailed information
+ * on memory ordering guarantees.
  */
 void synchronize_rcu(void)
 {
@@ -875,6 +878,11 @@ EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
 
 /**
  * rcu_barrier - Wait until all in-flight call_rcu() callbacks complete.
+ *
+ * Note that this primitive will not always wait for an RCU grace period
+ * to complete.  For example, if there are no RCU callbacks queued anywhere
+ * in the system, then rcu_barrier() is within its rights to return
+ * immediately, without waiting for anything, much less an RCU grace period.
  */
 void rcu_barrier(void)
 {
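
As a concrete illustration of why the guarantees documented above are
labor-saving, here is a minimal sketch of the classic RCU update pattern,
in the style of Documentation/RCU/whatisRCU.txt.  All identifiers below
(struct foo, gbl_foo, foo_mutex, foo_get_a(), foo_update_a()) are
hypothetical examples, not names introduced by this patch.  The point to
notice is the complete absence of explicit smp_mb() calls in the updater:

#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int a;
};

static struct foo __rcu *gbl_foo;	/* assumed initialized elsewhere */
static DEFINE_MUTEX(foo_mutex);		/* serializes updaters */

/* Reader: sees either the old or the new version, never a mix. */
static int foo_get_a(void)
{
	int ret;

	rcu_read_lock();
	ret = rcu_dereference(gbl_foo)->a;
	rcu_read_unlock();
	return ret;
}

/* Updater: publish a new version, wait a grace period, free the old. */
static void foo_update_a(int new_a)
{
	struct foo *new_fp;
	struct foo *old_fp;

	new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
	if (!new_fp)
		return;
	new_fp->a = new_a;

	mutex_lock(&foo_mutex);
	old_fp = rcu_dereference_protected(gbl_foo,
					   lockdep_is_held(&foo_mutex));
	rcu_assign_pointer(gbl_foo, new_fp);
	mutex_unlock(&foo_mutex);

	/*
	 * Per the guarantee documented in this patch: once
	 * synchronize_rcu() returns, every CPU that was within a
	 * pre-existing RCU read-side critical section has executed a
	 * full memory barrier, so old_fp can safely be freed without
	 * any explicit smp_mb() in this function.
	 */
	synchronize_rcu();
	kfree(old_fp);
}

The same reasoning applies to the call_rcu() form of the update: because
both the CPU that enqueues the callback and the CPU that invokes "func()"
are guaranteed a full memory barrier in between, the callback may kfree()
the old structure without additional barriers.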