Message ID: 20190617124718.1232976-1-arnd@arndb.de
State:      Accepted
Commit:     886532aee3cd42d95196601ed16d7c3d4679e9e5
Series:     locking/lockdep: Move mark_lock() inside CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING
Hi Arnd,

On Mon, Jun 17, 2019 at 02:47:05PM +0200, Arnd Bergmann wrote:
> The last cleanup patch triggered another issue, as now another function
> should be moved into the same section:
>
> kernel/locking/lockdep.c:3580:12: error: 'mark_lock' defined but not used [-Werror=unused-function]
>  static int mark_lock(struct task_struct *curr, struct held_lock *this,
>
> Move mark_lock() into the same #ifdef section as its only caller, and
> remove the now-unused mark_lock_irq() stub helper.
>
> Fixes: 0d2cc3b34532 ("locking/lockdep: Move valid_state() inside CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING")
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  kernel/locking/lockdep.c | 73 +++++++++++++++++++---------------------
>  1 file changed, 34 insertions(+), 39 deletions(-)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 48a840adb281..43e880ceafc2 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -437,13 +437,6 @@ static int verbose(struct lock_class *class)
>  	return 0;
>  }
>
> -/*
> - * Stack-trace: tightly packed array of stack backtrace
> - * addresses. Protected by the graph_lock.
> - */
> -unsigned long nr_stack_trace_entries;
> -static unsigned long stack_trace[MAX_STACK_TRACE_ENTRIES];
> -
>  static void print_lockdep_off(const char *bug_msg)
>  {
>  	printk(KERN_DEBUG "%s\n", bug_msg);
> @@ -453,6 +446,15 @@ static void print_lockdep_off(const char *bug_msg)
>  #endif
>  }
>
> +unsigned long nr_stack_trace_entries;
> +
> +#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)

Is this necessary, given that CONFIG_PROVE_LOCKING selects TRACE_IRQFLAGS?
I find that having both of the symbols makes this really hard to read.
For example:

> +/*
> + * Stack-trace: tightly packed array of stack backtrace
> + * addresses. Protected by the graph_lock.
> + */
> +static unsigned long stack_trace[MAX_STACK_TRACE_ENTRIES];

This is used later on by print_lock_trace(), which is only predicated on
#ifdef CONFIG_PROVE_LOCKING.

Will
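For context on Will's point: CONFIG_PROVE_LOCKING pulls in TRACE_IRQFLAGS via a Kconfig `select`, which is why checking both symbols is redundant whenever PROVE_LOCKING is the one that matters. A sketch of the relevant entry (the exact option list in lib/Kconfig.debug varies by kernel version):

```kconfig
config PROVE_LOCKING
	bool "Lock debugging: prove locking correctness"
	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
	select LOCKDEP
	select DEBUG_LOCK_ALLOC
	select TRACE_IRQFLAGS
	default n
```

Because `select` forces the selected symbol on, `defined(CONFIG_PROVE_LOCKING)` implies `defined(CONFIG_TRACE_IRQFLAGS)`, so `#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)` collapses to `#ifdef CONFIG_PROVE_LOCKING`.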
On Mon, Jun 24, 2019 at 2:18 PM Will Deacon <will@kernel.org> wrote:
> On Mon, Jun 17, 2019 at 02:47:05PM +0200, Arnd Bergmann wrote:
> >
> > +unsigned long nr_stack_trace_entries;
> > +
> > +#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
>
> Is this necessary, given that CONFIG_PROVE_LOCKING selects TRACE_IRQFLAGS?
> I find that having both of the symbols makes this really hard to read.

Probably not. I have removed the CONFIG_TRACE_IRQFLAGS check from all
instances in this file now, and will give it some more testing. It took me
a few iterations to get to a version of this patch that had no build
failures, so I'm a bit careful. I had copied the #if check from what
protected some of the callers.

If the change below works, I'll fold it into my patch and send it again.

       Arnd

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -448,7 +448,7 @@ static void print_lockdep_off(const char *bug_msg)
 
 unsigned long nr_stack_trace_entries;
 
-#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
+#ifdef CONFIG_PROVE_LOCKING
 /*
  * Stack-trace: tightly packed array of stack backtrace
  * addresses. Protected by the graph_lock.
@@ -491,7 +491,7 @@ unsigned int max_lockdep_depth;
 DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);
 #endif
 
-#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
+#ifdef CONFIG_PROVE_LOCKING
 /*
  * Locking printouts:
  */
@@ -2969,7 +2969,7 @@ static void check_chain_key(struct task_struct *curr)
 #endif
 }
 
-#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
+#ifdef CONFIG_PROVE_LOCKING
 static int mark_lock(struct task_struct *curr, struct held_lock *this,
 		     enum lock_usage_bit new_bit);
 
@@ -3608,7 +3608,7 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	return ret;
 }
 
-#else /* defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING) */
+#else /* CONFIG_PROVE_LOCKING */
 
 static inline int
 mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
@@ -3627,7 +3627,7 @@ static inline int separate_irq_context(struct task_struct *curr,
 	return 0;
 }
 
-#endif /* defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING) */
+#endif /* CONFIG_PROVE_LOCKING */
 
 /*
  * Initialize a lock instance's lock-class mapping info:
@@ -4321,8 +4321,7 @@ static void __lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie
  */
 static void check_flags(unsigned long flags)
 {
-#if defined(CONFIG_PROVE_LOCKING) && defined(CONFIG_DEBUG_LOCKDEP) && \
-	defined(CONFIG_TRACE_IRQFLAGS)
+#if defined(CONFIG_PROVE_LOCKING) && defined(CONFIG_DEBUG_LOCKDEP)
 	if (!debug_locks)
 		return;
On Tue, Jun 25, 2019 at 01:46:07AM -0700, tip-bot for Arnd Bergmann wrote:
> Commit-ID:  886532aee3cd42d95196601ed16d7c3d4679e9e5
> Gitweb:     https://git.kernel.org/tip/886532aee3cd42d95196601ed16d7c3d4679e9e5
> Author:     Arnd Bergmann <arnd@arndb.de>
> AuthorDate: Mon, 17 Jun 2019 14:47:05 +0200
> Committer:  Ingo Molnar <mingo@kernel.org>
> CommitDate: Tue, 25 Jun 2019 10:17:07 +0200
>
> locking/lockdep: Move mark_lock() inside CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING
>
> The last cleanup patch triggered another issue, as now another function
> should be moved into the same section:
>
> kernel/locking/lockdep.c:3580:12: error: 'mark_lock' defined but not used [-Werror=unused-function]
>  static int mark_lock(struct task_struct *curr, struct held_lock *this,
>
> Move mark_lock() into the same #ifdef section as its only caller, and
> remove the now-unused mark_lock_irq() stub helper.
>
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Bart Van Assche <bvanassche@acm.org>
> Cc: Frederic Weisbecker <frederic@kernel.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Waiman Long <longman@redhat.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Yuyang Du <duyuyang@gmail.com>
> Fixes: 0d2cc3b34532 ("locking/lockdep: Move valid_state() inside CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING")
> Link: https://lkml.kernel.org/r/20190617124718.1232976-1-arnd@arndb.de
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
>  kernel/locking/lockdep.c | 73 ++++++++++++++++++++++--------------------------
>  1 file changed, 34 insertions(+), 39 deletions(-)

Hmm, I was hoping we could fold in the simplification that Arnd came up
with yesterday:

https://lkml.kernel.org/r/CAK8P3a2X_5p9QOKG-jcozR4P8iPNJAY2ObXgfqt=bBD+hZdnSg@mail.gmail.com

Will
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 48a840adb281..43e880ceafc2 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -437,13 +437,6 @@ static int verbose(struct lock_class *class)
 	return 0;
 }
 
-/*
- * Stack-trace: tightly packed array of stack backtrace
- * addresses. Protected by the graph_lock.
- */
-unsigned long nr_stack_trace_entries;
-static unsigned long stack_trace[MAX_STACK_TRACE_ENTRIES];
-
 static void print_lockdep_off(const char *bug_msg)
 {
 	printk(KERN_DEBUG "%s\n", bug_msg);
@@ -453,6 +446,15 @@ static void print_lockdep_off(const char *bug_msg)
 #endif
 }
 
+unsigned long nr_stack_trace_entries;
+
+#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
+/*
+ * Stack-trace: tightly packed array of stack backtrace
+ * addresses. Protected by the graph_lock.
+ */
+static unsigned long stack_trace[MAX_STACK_TRACE_ENTRIES];
+
 static int save_trace(struct lock_trace *trace)
 {
 	unsigned long *entries = stack_trace + nr_stack_trace_entries;
@@ -475,6 +477,7 @@ static int save_trace(struct lock_trace *trace)
 
 	return 1;
 }
+#endif
 
 unsigned int nr_hardirq_chains;
 unsigned int nr_softirq_chains;
@@ -488,6 +491,7 @@ unsigned int max_lockdep_depth;
 DEFINE_PER_CPU(struct lockdep_stats, lockdep_stats);
 #endif
 
+#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
 /*
  * Locking printouts:
  */
@@ -505,6 +509,7 @@ static const char *usage_str[] =
 #undef LOCKDEP_STATE
 	[LOCK_USED] = "INITIAL USE",
 };
+#endif
 
 const char * __get_key_name(struct lockdep_subclass_key *key, char *str)
 {
@@ -2964,12 +2969,10 @@ static void check_chain_key(struct task_struct *curr)
 #endif
 }
 
+#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
 static int mark_lock(struct task_struct *curr, struct held_lock *this,
 		     enum lock_usage_bit new_bit);
 
-#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING)
-
-
 static void print_usage_bug_scenario(struct held_lock *lock)
 {
 	struct lock_class *class = hlock_class(lock);
@@ -3545,35 +3548,6 @@ static int separate_irq_context(struct task_struct *curr,
 	return 0;
 }
 
-#else /* defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING) */
-
-static inline
-int mark_lock_irq(struct task_struct *curr, struct held_lock *this,
-		  enum lock_usage_bit new_bit)
-{
-	WARN_ON(1); /* Impossible innit? when we don't have TRACE_IRQFLAG */
-	return 1;
-}
-
-static inline int
-mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
-{
-	return 1;
-}
-
-static inline unsigned int task_irq_context(struct task_struct *task)
-{
-	return 0;
-}
-
-static inline int separate_irq_context(struct task_struct *curr,
-		struct held_lock *hlock)
-{
-	return 0;
-}
-
-#endif /* defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING) */
-
 /*
  * Mark a lock with a usage bit, and validate the state transition:
  */
@@ -3634,6 +3608,27 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this,
 	return ret;
 }
 
+#else /* defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING) */
+
+static inline int
+mark_usage(struct task_struct *curr, struct held_lock *hlock, int check)
+{
+	return 1;
+}
+
+static inline unsigned int task_irq_context(struct task_struct *task)
+{
+	return 0;
+}
+
+static inline int separate_irq_context(struct task_struct *curr,
+				       struct held_lock *hlock)
+{
+	return 0;
+}
+
+#endif /* defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_PROVE_LOCKING) */
+
 /*
  * Initialize a lock instance's lock-class mapping info:
  */
The last cleanup patch triggered another issue, as now another function
should be moved into the same section:

kernel/locking/lockdep.c:3580:12: error: 'mark_lock' defined but not used [-Werror=unused-function]
 static int mark_lock(struct task_struct *curr, struct held_lock *this,

Move mark_lock() into the same #ifdef section as its only caller, and
remove the now-unused mark_lock_irq() stub helper.

Fixes: 0d2cc3b34532 ("locking/lockdep: Move valid_state() inside CONFIG_TRACE_IRQFLAGS && CONFIG_PROVE_LOCKING")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 kernel/locking/lockdep.c | 73 +++++++++++++++++++---------------------
 1 file changed, 34 insertions(+), 39 deletions(-)

-- 
2.20.0