From patchwork Tue Sep 6 18:00:43 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
X-Patchwork-Id: 3936
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: mingo@elte.hu, laijs@cn.fujitsu.com, dipankar@in.ibm.com,
	akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
	josh@joshtriplett.org, niv@us.ibm.com, tglx@linutronix.de,
	peterz@infradead.org, rostedt@goodmis.org, Valdis.Kletnieks@vt.edu,
	dhowells@redhat.com, eric.dumazet@gmail.com, darren@dvhart.com,
	patches@linaro.org, Frederic Weisbecker, "Paul E. McKenney",
	Peter Zijlstra
McKenney" , Peter Zijlstra Subject: [PATCH tip/core/rcu 49/55] rcu: Detect illegal rcu dereference in extended quiescent state Date: Tue, 6 Sep 2011 11:00:43 -0700 Message-Id: <1315332049-2604-49-git-send-email-paulmck@linux.vnet.ibm.com> X-Mailer: git-send-email 1.7.3.2 In-Reply-To: <20110906180015.GA2560@linux.vnet.ibm.com> References: <20110906180015.GA2560@linux.vnet.ibm.com> From: Frederic Weisbecker Report that none of the rcu read lock maps are held while in an RCU extended quiescent state (in this case, the RCU extended quiescent state is dyntick-idle mode). This helps detect any use of rcu_dereference() and friends from within dyntick-idle mode. Signed-off-by: Frederic Weisbecker Cc: Paul E. McKenney Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Thomas Gleixner Cc: Lai Jiangshan Signed-off-by: Paul E. McKenney --- include/linux/rcupdate.h | 36 ++++++++++++++++++++++++++++++++++++ kernel/rcupdate.c | 17 ++++++++++++++++- kernel/rcutiny.c | 14 ++++++++++++++ kernel/rcutree.c | 16 ++++++++++++++++ 4 files changed, 82 insertions(+), 1 deletions(-) diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h index 8d7efc8..7d8fa7c 100644 --- a/include/linux/rcupdate.h +++ b/include/linux/rcupdate.h @@ -231,6 +231,14 @@ static inline void destroy_rcu_head_on_stack(struct rcu_head *head) } #endif /* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */ + +#if defined(CONFIG_PROVE_RCU) && defined(CONFIG_NO_HZ) +extern bool rcu_check_extended_qs(void); +#else +static inline bool rcu_check_extended_qs(void) { return false; } +#endif + + #ifdef CONFIG_DEBUG_LOCK_ALLOC #define PROVE_RCU(a) a @@ -264,11 +272,25 @@ extern int debug_lockdep_rcu_enabled(void); * * Checks debug_lockdep_rcu_enabled() to prevent false positives during boot * and while lockdep is disabled. + * + * Note that if the CPU is in an extended quiescent state, for example, + * if the CPU is in dyntick-idle mode, then rcu_read_lock_held() returns + * false even if the CPU did an rcu_read_lock(). The reason for this is + * that RCU ignores CPUs that are in extended quiescent states, so such + * a CPU is effectively never in an RCU read-side critical section + * regardless of what RCU primitives it invokes. This state of affairs + * is required -- RCU would otherwise need to periodically wake up + * dyntick-idle CPUs, which would defeat the whole purpose of dyntick-idle + * mode. */ static inline int rcu_read_lock_held(void) { if (!debug_lockdep_rcu_enabled()) return 1; + + if (rcu_check_extended_qs()) + return 0; + return lock_is_held(&rcu_lock_map); } @@ -292,6 +314,16 @@ extern int rcu_read_lock_bh_held(void); * * Check debug_lockdep_rcu_enabled() to prevent false positives during boot * and while lockdep is disabled. + * + * Note that if the CPU is in an extended quiescent state, for example, + * if the CPU is in dyntick-idle mode, then rcu_read_lock_held() returns + * false even if the CPU did an rcu_read_lock(). The reason for this is + * that RCU ignores CPUs that are in extended quiescent states, so such + * a CPU is effectively never in an RCU read-side critical section + * regardless of what RCU primitives it invokes. This state of affairs + * is required -- RCU would otherwise need to periodically wake up + * dyntick-idle CPUs, which would defeat the whole purpose of dyntick-idle + * mode. 
  */
 #ifdef CONFIG_PREEMPT
 static inline int rcu_read_lock_sched_held(void)
@@ -300,6 +332,10 @@ static inline int rcu_read_lock_sched_held(void)
 
 	if (!debug_lockdep_rcu_enabled())
 		return 1;
+
+	if (rcu_check_extended_qs())
+		return 0;
+
 	if (debug_locks)
 		lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
 	return lockdep_opinion || preempt_count() != 0 || irqs_disabled();
diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
index 5031caf..e4d8a98 100644
--- a/kernel/rcupdate.c
+++ b/kernel/rcupdate.c
@@ -87,12 +87,27 @@ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
  * that require that they be called within an RCU read-side critical
  * section.
  *
- * Check debug_lockdep_rcu_enabled() to prevent false positives during boot.
+ * Check debug_lockdep_rcu_enabled() to prevent false positives during boot
+ * and while lockdep is disabled.
+ *
+ * Note that if the CPU is in an extended quiescent state, for example,
+ * if the CPU is in dyntick-idle mode, then rcu_read_lock_held() returns
+ * false even if the CPU did an rcu_read_lock(). The reason for this is
+ * that RCU ignores CPUs that are in extended quiescent states, so such
+ * a CPU is effectively never in an RCU read-side critical section
+ * regardless of what RCU primitives it invokes. This state of affairs
+ * is required -- RCU would otherwise need to periodically wake up
+ * dyntick-idle CPUs, which would defeat the whole purpose of dyntick-idle
+ * mode.
  */
 int rcu_read_lock_bh_held(void)
 {
 	if (!debug_lockdep_rcu_enabled())
 		return 1;
+
+	if (rcu_check_extended_qs())
+		return 0;
+
 	return in_softirq() || irqs_disabled();
 }
 EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
diff --git a/kernel/rcutiny.c b/kernel/rcutiny.c
index da775c8..9e493b9 100644
--- a/kernel/rcutiny.c
+++ b/kernel/rcutiny.c
@@ -78,6 +78,20 @@ void rcu_exit_nohz(void)
 	rcu_dynticks_nesting++;
 }
 
+
+#ifdef CONFIG_PROVE_RCU
+
+bool rcu_check_extended_qs(void)
+{
+	if (!rcu_dynticks_nesting)
+		return true;
+
+	return false;
+}
+EXPORT_SYMBOL_GPL(rcu_check_extended_qs);
+
+#endif
+
 #endif /* #ifdef CONFIG_NO_HZ */
 
 /*
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index f0a9432..c9b4adf 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -465,6 +465,22 @@ void rcu_irq_exit(void)
 	rcu_enter_nohz();
 }
 
+#ifdef CONFIG_PROVE_RCU
+
+bool rcu_check_extended_qs(void)
+{
+	struct rcu_dynticks *rdtp;
+
+	rdtp = &per_cpu(rcu_dynticks, raw_smp_processor_id());
+	if (atomic_read(&rdtp->dynticks) & 0x1)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(rcu_check_extended_qs);
+
+#endif /* CONFIG_PROVE_RCU */
+
 #ifdef CONFIG_SMP
 
 /*
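
[ Illustration only, not part of the patch: a minimal sketch of the kind
  of usage this check is meant to catch.  The struct foo, gbl_foo, and
  read_foo_val() names are invented for this example; only the RCU
  primitives are real kernel API. ]

#include <linux/rcupdate.h>

struct foo {
	int val;
};

static struct foo __rcu *gbl_foo;

/*
 * Fine when called from normal (non-idle) context: the RCU read-side
 * critical section protects the dereference.
 */
static int read_foo_val(void)
{
	struct foo *p;
	int val;

	rcu_read_lock();
	p = rcu_dereference(gbl_foo);
	val = p ? p->val : -1;
	rcu_read_unlock();
	return val;
}

[ If read_foo_val() were instead called from the idle loop after the CPU
  had entered dyntick-idle mode (after rcu_enter_nohz()), RCU would be
  ignoring that CPU, so the rcu_read_lock() above would not actually
  protect the dereference.  With this patch and CONFIG_PROVE_RCU=y,
  rcu_check_extended_qs() reports the extended quiescent state -- a zero
  rcu_dynticks_nesting under TINY_RCU, an even ->dynticks value under
  TREE_RCU -- so rcu_read_lock_held() and friends return 0 and the
  PROVE_RCU checks behind rcu_dereference() flag the access instead of
  silently allowing it. ]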