From patchwork Wed Jun 8 19:29:42 2011
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 1870
From: "Paul E. McKenney"
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
	patches@linaro.org, "Paul E. McKenney"
Subject: [PATCH tip/core/rcu 03/28] rcu: Streamline code produced by __rcu_read_unlock()
Date: Wed, 8 Jun 2011 12:29:42 -0700
Message-Id: <1307561407-13809-3-git-send-email-paulmck@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.7.3.2
In-Reply-To: <20110608192943.GA13211@linux.vnet.ibm.com>
References: <20110608192943.GA13211@linux.vnet.ibm.com>

Given some common flag combinations, particularly -Os, gcc will inline
rcu_read_unlock_special() despite its being in an unlikely() clause.
Use noinline to prohibit this misoptimization.

In addition, move the second barrier() in __rcu_read_unlock() so that
it is not on the common-case code path.  This allows the compiler to
generate better code for the common-case path through
__rcu_read_unlock() (see the standalone sketch after the diff below).

Finally, fix up whitespace in kernel/lockdep.c to keep checkpatch happy.

Suggested-by: Linus Torvalds
Signed-off-by: Paul E. McKenney
---
 kernel/rcutree_plugin.h |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index ea2e2fb..40a6db7 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -284,7 +284,7 @@ static struct list_head *rcu_next_node_entry(struct task_struct *t,
  * notify RCU core processing or task having blocked during the RCU
  * read-side critical section.
  */
-static void rcu_read_unlock_special(struct task_struct *t)
+static noinline void rcu_read_unlock_special(struct task_struct *t)
 {
 	int empty;
 	int empty_exp;
@@ -387,11 +387,11 @@ void __rcu_read_unlock(void)
 	struct task_struct *t = current;
 
 	barrier();  /* needed if we ever invoke rcu_read_unlock in rcutree.c */
-	--t->rcu_read_lock_nesting;
-	barrier();  /* decrement before load of ->rcu_read_unlock_special */
-	if (t->rcu_read_lock_nesting == 0 &&
-	    unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
-		rcu_read_unlock_special(t);
+	if (--t->rcu_read_lock_nesting == 0) {
+		barrier();  /* decr before ->rcu_read_unlock_special load */
+		if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
+			rcu_read_unlock_special(t);
+	}
 #ifdef CONFIG_PROVE_LOCKING
 	WARN_ON_ONCE(ACCESS_ONCE(t->rcu_read_lock_nesting) < 0);
 #endif /* #ifdef CONFIG_PROVE_LOCKING */
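
For readers less familiar with the pattern, here is a minimal standalone
sketch (not kernel code; the names reader_nesting, slow_path_flags, and
handle_slow_path are made up for illustration) of the fast-path/slow-path
shape the patch produces: the common nested-unlock case reduces to a bare
decrement, the compiler barrier and flag check run only when nesting
reaches zero, and the rarely taken handler is marked noinline so gcc
cannot pull it back into the caller even at -Os.

	#include <stdio.h>

	/* Hypothetical per-task state, standing in for the kernel's
	 * ->rcu_read_lock_nesting and ->rcu_read_unlock_special. */
	static int reader_nesting;
	static volatile int slow_path_flags;	/* volatile stands in for ACCESS_ONCE() */

	/* Compiler-only barrier, as in the kernel's barrier() macro (GCC/Clang). */
	#define barrier() __asm__ __volatile__("" ::: "memory")

	/* noinline keeps this rarely taken work out of the caller's fast path,
	 * mirroring the noinline added to rcu_read_unlock_special(). */
	static __attribute__((noinline)) void handle_slow_path(void)
	{
		printf("special unlock handling\n");
		slow_path_flags = 0;
	}

	/* Sketch of the reworked unlock: the second barrier() and the flag
	 * test run only when nesting drops to zero, so the nested-unlock
	 * case is just a decrement. */
	static void reader_unlock(void)
	{
		if (--reader_nesting == 0) {
			barrier();	/* decrement before the flag load */
			if (__builtin_expect(slow_path_flags != 0, 0))
				handle_slow_path();
		}
	}

	int main(void)
	{
		reader_nesting = 2;
		slow_path_flags = 1;
		reader_unlock();	/* nested: fast path only */
		reader_unlock();	/* outermost: takes the slow path */
		return 0;
	}

The sketch only illustrates the control-flow shape; how much the generated
code improves depends on the compiler and flags, which is why the patch
pairs the restructuring with the explicit noinline.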