Message ID | 1346350718-30937-21-git-send-email-paulmck@linux.vnet.ibm.com
---|---
State | Superseded
On Thu, Aug 30, 2012 at 11:18:36AM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paul.mckenney@linaro.org>
> 
> In the C language, signed overflow is undefined. It is true that
> twos-complement arithmetic normally comes to the rescue, but the
> compiler can subvert this any time it has any information about the
> values being compared. For example, given "if (a - b > 0)", if the
> compiler has enough information to realize that (for example) the
> value of "a" is positive and that of "b" is negative, the compiler
> is within its rights to optimize to a simple "if (1)", which might
> not be what you want.
> 
> This commit therefore converts synchronize_rcu_expedited()'s work-done
> detection counter from signed to unsigned.
> 
> Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Reviewed-by: Josh Triplett <josh@joshtriplett.org>

> kernel/rcutree_plugin.h | 8 ++++----
> 1 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> index befb0b2..7ed45c9 100644
> --- a/kernel/rcutree_plugin.h
> +++ b/kernel/rcutree_plugin.h
> @@ -677,7 +677,7 @@ void synchronize_rcu(void)
>  EXPORT_SYMBOL_GPL(synchronize_rcu);
>  
>  static DECLARE_WAIT_QUEUE_HEAD(sync_rcu_preempt_exp_wq);
> -static long sync_rcu_preempt_exp_count;
> +static unsigned long sync_rcu_preempt_exp_count;
>  static DEFINE_MUTEX(sync_rcu_preempt_exp_mutex);
>  
>  /*
> @@ -792,7 +792,7 @@ void synchronize_rcu_expedited(void)
>  	unsigned long flags;
>  	struct rcu_node *rnp;
>  	struct rcu_state *rsp = &rcu_preempt_state;
> -	long snap;
> +	unsigned long snap;
>  	int trycount = 0;
>  
>  	smp_mb(); /* Caller's modifications seen first by other CPUs. */
> @@ -811,10 +811,10 @@ void synchronize_rcu_expedited(void)
>  			synchronize_rcu();
>  			return;
>  		}
> -		if ((ACCESS_ONCE(sync_rcu_preempt_exp_count) - snap) > 0)
> +		if (ULONG_CMP_LT(snap, ACCESS_ONCE(sync_rcu_preempt_exp_count)))
>  			goto mb_ret; /* Others did our work for us. */
>  	}
> -	if ((ACCESS_ONCE(sync_rcu_preempt_exp_count) - snap) > 0)
> +	if (ULONG_CMP_LT(snap, ACCESS_ONCE(sync_rcu_preempt_exp_count)))
>  		goto unlock_mb_ret; /* Others did our work for us. */
> 
>  	/* force all RCU readers onto ->blkd_tasks lists. */
> -- 
> 1.7.8
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index befb0b2..7ed45c9 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -677,7 +677,7 @@ void synchronize_rcu(void)
 EXPORT_SYMBOL_GPL(synchronize_rcu);
 
 static DECLARE_WAIT_QUEUE_HEAD(sync_rcu_preempt_exp_wq);
-static long sync_rcu_preempt_exp_count;
+static unsigned long sync_rcu_preempt_exp_count;
 static DEFINE_MUTEX(sync_rcu_preempt_exp_mutex);
 
 /*
@@ -792,7 +792,7 @@ void synchronize_rcu_expedited(void)
 	unsigned long flags;
 	struct rcu_node *rnp;
 	struct rcu_state *rsp = &rcu_preempt_state;
-	long snap;
+	unsigned long snap;
 	int trycount = 0;
 
 	smp_mb(); /* Caller's modifications seen first by other CPUs. */
@@ -811,10 +811,10 @@ void synchronize_rcu_expedited(void)
 			synchronize_rcu();
 			return;
 		}
-		if ((ACCESS_ONCE(sync_rcu_preempt_exp_count) - snap) > 0)
+		if (ULONG_CMP_LT(snap, ACCESS_ONCE(sync_rcu_preempt_exp_count)))
			goto mb_ret; /* Others did our work for us. */
 	}
-	if ((ACCESS_ONCE(sync_rcu_preempt_exp_count) - snap) > 0)
+	if (ULONG_CMP_LT(snap, ACCESS_ONCE(sync_rcu_preempt_exp_count)))
 		goto unlock_mb_ret; /* Others did our work for us. */
 
 	/* force all RCU readers onto ->blkd_tasks lists. */