From patchwork Tue Oct 30 16:54:35 2012
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 12605
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, edumazet@google.com, darren@dvhart.com,
	fweisbec@gmail.com, sbw@mit.edu, patches@linaro.org,
	"Paul E. McKenney", "Paul E. McKenney"
Subject: [PATCH tip/core/rcu 6/7] rcu: Move synchronize_sched_expedited() state to rcu_state
Date: Tue, 30 Oct 2012 09:54:35 -0700
Message-Id: <1351616076-25617-6-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.8
In-Reply-To: <1351616076-25617-1-git-send-email-paulmck@linux.vnet.ibm.com>
References: <20121030165415.GA25438@linux.vnet.ibm.com>
	<1351616076-25617-1-git-send-email-paulmck@linux.vnet.ibm.com>

From: "Paul E. McKenney"

Tracing (debugfs) of expedited RCU primitives is required, which in turn
requires that the relevant data be located where the tracing code can
find it, not in its current static global variables in kernel/rcutree.c.
This commit therefore moves sync_sched_expedited_started and
sync_sched_expedited_done to the rcu_state structure, as fields
->expedited_start and ->expedited_done, respectively.

Signed-off-by: Paul E. McKenney
Signed-off-by: Paul E. McKenney
---
 kernel/rcutree.c | 20 +++++++++-----------
 kernel/rcutree.h |  3 +++
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 6789055..3c72e5e 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -2249,9 +2249,6 @@ void synchronize_rcu_bh(void)
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_bh);
 
-static atomic_long_t sync_sched_expedited_started = ATOMIC_LONG_INIT(0);
-static atomic_long_t sync_sched_expedited_done = ATOMIC_LONG_INIT(0);
-
 static int synchronize_sched_expedited_cpu_stop(void *data)
 {
 	/*
@@ -2310,6 +2307,7 @@ void synchronize_sched_expedited(void)
 {
 	long firstsnap, s, snap;
 	int trycount = 0;
+	struct rcu_state *rsp = &rcu_sched_state;
 
 	/*
 	 * If we are in danger of counter wrap, just do synchronize_sched().
@@ -2319,8 +2317,8 @@ void synchronize_sched_expedited(void)
 	 * counter wrap on a 32-bit system.  Quite a few more CPUs would of
 	 * course be required on a 64-bit system.
 	 */
-	if (ULONG_CMP_GE((ulong)atomic_read(&sync_sched_expedited_started),
-			 (ulong)atomic_read(&sync_sched_expedited_done) +
+	if (ULONG_CMP_GE((ulong)atomic_long_read(&rsp->expedited_start),
+			 (ulong)atomic_long_read(&rsp->expedited_done) +
 			 ULONG_MAX / 8)) {
 		synchronize_sched();
 		return;
@@ -2330,7 +2328,7 @@ void synchronize_sched_expedited(void)
 	 * Take a ticket.  Note that atomic_inc_return() implies a
 	 * full memory barrier.
 	 */
-	snap = atomic_long_inc_return(&sync_sched_expedited_started);
+	snap = atomic_long_inc_return(&rsp->expedited_start);
 	firstsnap = snap;
 	get_online_cpus();
 	WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id()));
@@ -2345,7 +2343,7 @@ void synchronize_sched_expedited(void)
 		put_online_cpus();
 
 		/* Check to see if someone else did our work for us. */
-		s = atomic_long_read(&sync_sched_expedited_done);
+		s = atomic_long_read(&rsp->expedited_done);
 		if (ULONG_CMP_GE((ulong)s, (ulong)firstsnap)) {
 			smp_mb(); /* ensure test happens before caller kfree */
 			return;
@@ -2360,7 +2358,7 @@ void synchronize_sched_expedited(void)
 		}
 
 		/* Recheck to see if someone else did our work for us. */
-		s = atomic_long_read(&sync_sched_expedited_done);
+		s = atomic_long_read(&rsp->expedited_done);
 		if (ULONG_CMP_GE((ulong)s, (ulong)firstsnap)) {
 			smp_mb(); /* ensure test happens before caller kfree */
 			return;
@@ -2374,7 +2372,7 @@ void synchronize_sched_expedited(void)
 		 * period works for us.
 		 */
 		get_online_cpus();
-		snap = atomic_long_read(&sync_sched_expedited_started);
+		snap = atomic_long_read(&rsp->expedited_start);
 		smp_mb(); /* ensure read is before try_stop_cpus(). */
 	}
 
@@ -2385,12 +2383,12 @@ void synchronize_sched_expedited(void)
 	 * than we did already did their update.
 	 */
 	do {
-		s = atomic_long_read(&sync_sched_expedited_done);
+		s = atomic_long_read(&rsp->expedited_done);
 		if (ULONG_CMP_GE((ulong)s, (ulong)snap)) {
 			smp_mb(); /* ensure test happens before caller kfree */
 			break;
 		}
-	} while (atomic_long_cmpxchg(&sync_sched_expedited_done, s, snap) != s);
+	} while (atomic_long_cmpxchg(&rsp->expedited_done, s, snap) != s);
 
 	put_online_cpus();
 }
diff --git a/kernel/rcutree.h b/kernel/rcutree.h
index a7c945d..88f3d9d 100644
--- a/kernel/rcutree.h
+++ b/kernel/rcutree.h
@@ -404,6 +404,9 @@ struct rcu_state {
 						/*  _rcu_barrier(). */
 	/* End of fields guarded by barrier_mutex. */
 
+	atomic_long_t expedited_start;		/* Starting ticket. */
+	atomic_long_t expedited_done;		/* Done ticket. */
+
 	unsigned long jiffies_force_qs;		/* Time at which to invoke */
 						/*  force_quiescent_state(). */
 	unsigned long n_force_qs;		/* Number of calls to */
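
[Editor's note] For readers who want to experiment with the ticket scheme
above outside the kernel, the following is a minimal userspace sketch of the
start/done counter protocol that synchronize_sched_expedited() implements.
It is illustrative only: C11 atomics stand in for the kernel's atomic_long_t,
ULONG_CMP_GE() is copied from the kernel's definition, and the actual
grace-period forcing step (try_stop_cpus() in the real code) is stubbed out
as a comment. Only the expedited_start/expedited_done names and the
ULONG_MAX/8 wrap-avoidance constant come from the patch; everything else
here is an assumption, not kernel code.

/*
 * Userspace sketch of the expedited grace-period ticket scheme.
 * NOT kernel code: C11 atomics replace atomic_long_t, and the
 * grace-period forcing itself is elided.
 */
#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

/* True if a >= b even after counter wrap, as in the kernel. */
#define ULONG_CMP_GE(a, b)	(ULONG_MAX / 2 >= (a) - (b))

static atomic_long expedited_start;	/* Starting ticket. */
static atomic_long expedited_done;	/* Done ticket. */

static void expedited_sketch(void)
{
	long firstsnap, s, snap;

	/* Near counter wrap, the kernel falls back to synchronize_sched(). */
	if (ULONG_CMP_GE((unsigned long)atomic_load(&expedited_start),
			 (unsigned long)atomic_load(&expedited_done) +
			 ULONG_MAX / 8))
		return;

	/* Take a ticket; the atomic RMW acts as a full memory barrier. */
	snap = atomic_fetch_add(&expedited_start, 1) + 1;
	firstsnap = snap;

	/* ... force quiescent states on all CPUs here ... */

	/* If a later grace period already covered us, just return. */
	s = atomic_load(&expedited_done);
	if (ULONG_CMP_GE((unsigned long)s, (unsigned long)firstsnap))
		return;

	/*
	 * Advance the done ticket to our snapshot, unless someone who
	 * started later than we did already did their update.
	 */
	do {
		s = atomic_load(&expedited_done);
		if (ULONG_CMP_GE((unsigned long)s, (unsigned long)snap))
			break;
	} while (!atomic_compare_exchange_strong(&expedited_done, &s, snap));
}

int main(void)
{
	expedited_sketch();
	printf("start=%ld done=%ld\n",
	       (long)atomic_load(&expedited_start),
	       (long)atomic_load(&expedited_done));
	return 0;
}

The do/while maps the kernel's atomic_long_cmpxchg() loop onto C11's
atomic_compare_exchange_strong(): expedited_done only ever moves forward,
and a waiter whose snapshot is already covered backs out without writing,
which is what lets concurrent expedited grace periods share one pair of
counters.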