Message ID: d12b25130db5a8b645929eb93d1cf4ddc2d68826.1352196505.git.viresh.kumar@linaro.org
State: Accepted
[ Added John Stultz ]

On Tue, 2012-11-06 at 16:08 +0530, Viresh Kumar wrote:
> Till now, we weren't migrating a running timer because with migration
> del_timer_sync() can't detect that the timer's handler has not yet finished.
>
> Now, when can we actually reach the code (inside __mod_timer()) where
>
>         base->running_timer == timer
>
> i.e. we are trying to migrate the current timer? I can see only the
> following case:
>
> - The timer re-armed itself, i.e. we are currently running the interrupt
>   handler of a timer and it rearmed itself from there. At this time the
>   user might have called del_timer_sync() or not. If not, then there is
>   no harm in re-arming the timer?
>
> Now, when somebody tries to delete a timer, obviously he doesn't want to
> run it any more for now. So, why should we ever re-arm a timer which is
> scheduled for deletion?
>
> This patch tries to fix "migration of a running timer", assuming the above
> theory is correct :)

That's a question for Thomas or John (hello! Thomas or John :-)

> So, now when we get a call to del_timer_sync(), we will mark it scheduled
> for deletion in an additional variable. This would be checked whenever we
> try to modify/arm it again. If it is currently scheduled for deletion, we
> must not modify/arm it.
>
> And so, whenever we reach the situation where:
>
>         base->running_timer == timer
>
> we are sure nobody is waiting in del_timer_sync().
>
> We will clear this flag as soon as the timer is deleted, so that it can
> be started again after deleting it successfully.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
>  include/linux/timer.h |  2 ++
>  kernel/timer.c        | 42 +++++++++++++++++++++++++-----------------
>  2 files changed, 27 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/timer.h b/include/linux/timer.h
> index 8c5a197..6aa720f 100644
> --- a/include/linux/timer.h
> +++ b/include/linux/timer.h
> @@ -22,6 +22,7 @@ struct timer_list {
>  	unsigned long data;
>
>  	int slack;
> +	int sched_del;

Make that a bool, as it's just a flag. Maybe gcc can optimize or something.

>
>  #ifdef CONFIG_TIMER_STATS
>  	int start_pid;
> @@ -77,6 +78,7 @@ extern struct tvec_base boot_tvec_bases;
>  		.data = (_data),				\
>  		.base = (void *)((unsigned long)&boot_tvec_bases + (_flags)), \
>  		.slack = -1,					\
> +		.sched_del = 0,					\
>  		__TIMER_LOCKDEP_MAP_INITIALIZER(		\
>  			__FILE__ ":" __stringify(__LINE__))	\
>  	}
> diff --git a/kernel/timer.c b/kernel/timer.c
> index 1170ece..14e1f76 100644
> --- a/kernel/timer.c
> +++ b/kernel/timer.c
> @@ -622,6 +622,7 @@ static void do_init_timer(struct timer_list *timer, unsigned int flags,
>  	timer->entry.next = NULL;
>  	timer->base = (void *)((unsigned long)base | flags);
>  	timer->slack = -1;
> +	timer->sched_del = 0;
>  #ifdef CONFIG_TIMER_STATS
>  	timer->start_site = NULL;
>  	timer->start_pid = -1;
> @@ -729,6 +730,12 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
>
>  	base = lock_timer_base(timer, &flags);
>
> +	if (timer->sched_del) {
> +		/* Don't schedule it again, as it is getting deleted */
> +		ret = -EBUSY;
> +		goto out_unlock;
> +	}
> +
>  	ret = detach_if_pending(timer, base, false);
>  	if (!ret && pending_only)
>  		goto out_unlock;
> @@ -746,21 +753,12 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
>  	new_base = per_cpu(tvec_bases, cpu);
>
>  	if (base != new_base) {
> -		/*
> -		 * We are trying to schedule the timer on the local CPU.
> -		 * However we can't change timer's base while it is running,
> -		 * otherwise del_timer_sync() can't detect that the timer's
> -		 * handler yet has not finished. This also guarantees that
> -		 * the timer is serialized wrt itself.
> -		 */
> -		if (likely(base->running_timer != timer)) {
> -			/* See the comment in lock_timer_base() */
> -			timer_set_base(timer, NULL);
> -			spin_unlock(&base->lock);
> -			base = new_base;
> -			spin_lock(&base->lock);
> -			timer_set_base(timer, base);
> -		}
> +		/* See the comment in lock_timer_base() */
> +		timer_set_base(timer, NULL);
> +		spin_unlock(&base->lock);
> +		base = new_base;
> +		spin_lock(&base->lock);
> +		timer_set_base(timer, base);
>  	}
>
>  	timer->expires = expires;
> @@ -1039,9 +1037,11 @@ EXPORT_SYMBOL(try_to_del_timer_sync);
>   */
>  int del_timer_sync(struct timer_list *timer)
>  {
> -#ifdef CONFIG_LOCKDEP
> +	struct tvec_base *base;
>  	unsigned long flags;
>
> +#ifdef CONFIG_LOCKDEP
> +
>  	/*
>  	 * If lockdep gives a backtrace here, please reference
>  	 * the synchronization rules above.
> @@ -1051,6 +1051,12 @@ int del_timer_sync(struct timer_list *timer)
>  	lock_map_release(&timer->lockdep_map);
>  	local_irq_restore(flags);
>  #endif
> +
> +	/* Timer is scheduled for deletion, don't let it re-arm itself */
> +	base = lock_timer_base(timer, &flags);
> +	timer->sched_del = 1;
> +	spin_unlock_irqrestore(&base->lock, flags);

I don't think this is good enough. For one thing, it doesn't handle
try_to_del_timer_sync() or even del_timer_sync() for that matter. As
that may return success when the timer happens to be running on another
CPU.

We have this:

	CPU0				CPU1
	----				----
	timerA (running)
	  mod_timer(timerA)
	  [ migrate to CPU2 ]
	  release timer base lock
					del_timer_sync(timerA)
					  timer->sched_del = true
					  try_to_del_timer_sync(timerA)
					    base(CPU2)->timer != timerA
					      [TRUE!]
	timerA (finishes)

	Fail!

-- Steve

> +
>  	/*
>  	 * don't use it in hardirq context, because it
>  	 * could lead to deadlock.
> @@ -1058,8 +1064,10 @@ int del_timer_sync(struct timer_list *timer)
>  	WARN_ON(in_irq() && !tbase_get_irqsafe(timer->base));
>  	for (;;) {
>  		int ret = try_to_del_timer_sync(timer);
> -		if (ret >= 0)
> +		if (ret >= 0) {
> +			timer->sched_del = 0;
>  			return ret;
> +		}
>  		cpu_relax();
>  	}
>  }
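The "scheduled for deletion" flag the patch proposes can be modeled in a few lines of userspace C. This is only a sketch under toy names (`toy_timer`, `toy_mod_timer`, etc. are hypothetical, not the kernel API, and a pthread mutex stands in for the timer base spinlock); it shows the intended behavior of the `-EBUSY` path added to `__mod_timer()`, not Steve's cross-CPU race, which a single-lock model cannot exhibit.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/*
 * Toy userspace model of the patch's sched_del flag (hypothetical
 * names, not the kernel API).  A timer marked as scheduled for
 * deletion refuses to be re-armed, mirroring the -EBUSY check the
 * patch adds to __mod_timer().
 */
struct toy_timer {
	pthread_mutex_t lock;	/* stands in for the tvec_base lock */
	bool pending;		/* armed and waiting to fire */
	bool sched_del;		/* deletion in progress: refuse re-arm */
};

/* Arm (or re-arm) the timer; fails while a deletion is pending. */
static int toy_mod_timer(struct toy_timer *t)
{
	int ret = 0;

	pthread_mutex_lock(&t->lock);
	if (t->sched_del)
		ret = -1;		/* the patch returns -EBUSY here */
	else
		t->pending = true;
	pthread_mutex_unlock(&t->lock);
	return ret;
}

/* First half of deletion: block any re-arm from the handler. */
static void toy_mark_sched_del(struct toy_timer *t)
{
	pthread_mutex_lock(&t->lock);
	t->sched_del = true;
	pthread_mutex_unlock(&t->lock);
}

/* Second half: detach the timer and clear the flag again. */
static void toy_finish_del(struct toy_timer *t)
{
	pthread_mutex_lock(&t->lock);
	t->pending = false;
	t->sched_del = false;	/* patch clears it so the timer is reusable */
	pthread_mutex_unlock(&t->lock);
}
```

A handler re-arming itself between `toy_mark_sched_del()` and `toy_finish_del()` gets the error return instead of silently re-queueing, which is exactly the semantics change Thomas objects to later in the thread: every caller would now have to handle the failure.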
On 27 November 2012 19:17, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Tue, 2012-11-06 at 16:08 +0530, Viresh Kumar wrote:
>> diff --git a/kernel/timer.c b/kernel/timer.c
>> @@ -729,6 +730,12 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
>>
>>  	base = lock_timer_base(timer, &flags);
>>
>> +	if (timer->sched_del) {
>> +		/* Don't schedule it again, as it is getting deleted */
>> +		ret = -EBUSY;
>> +		goto out_unlock;
>> +	}
>> +
>>  	ret = detach_if_pending(timer, base, false);
>>  	if (!ret && pending_only)
>>  		goto out_unlock;
>> @@ -746,21 +753,12 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
>>  	new_base = per_cpu(tvec_bases, cpu);
>>
>>  	if (base != new_base) {
>> -		/*
>> -		 * We are trying to schedule the timer on the local CPU.
>> -		 * However we can't change timer's base while it is running,
>> -		 * otherwise del_timer_sync() can't detect that the timer's
>> -		 * handler yet has not finished. This also guarantees that
>> -		 * the timer is serialized wrt itself.
>> -		 */
>> -		if (likely(base->running_timer != timer)) {
>> -			/* See the comment in lock_timer_base() */
>> -			timer_set_base(timer, NULL);
>> -			spin_unlock(&base->lock);
>> -			base = new_base;
>> -			spin_lock(&base->lock);
>> -			timer_set_base(timer, base);
>> -		}
>> +		/* See the comment in lock_timer_base() */
>> +		timer_set_base(timer, NULL);
>> +		spin_unlock(&base->lock);
>> +		base = new_base;
>> +		spin_lock(&base->lock);
>> +		timer_set_base(timer, base);
>>  	}
>
> I don't think this is good enough. For one thing, it doesn't handle
> try_to_del_timer_sync() or even del_timer_sync() for that matter. As
> that may return success when the timer happens to be running on another
> CPU.
>
> We have this:
>
> 	CPU0				CPU1
> 	----				----
> 	timerA (running)
> 	  mod_timer(timerA)
> 	  [ migrate to CPU2 ]
> 	  release timer base lock
> 					del_timer_sync(timerA)
> 					  timer->sched_del = true
> 					  try_to_del_timer_sync(timerA)
> 					    base(CPU2)->timer != timerA
> 					      [TRUE!]
> 	timerA (finishes)
>
> 	Fail!
Hi Steven/Thomas,

I came back to this patch after completing some other stuff and posting
the wq part of this patchset separately.

I got your point and understand how this would fail.

@Thomas: I need your opinion first. Do you like this concept of migrating
a running timer or not? Or do you see some basic problem with this concept?

If not (i.e. I can go ahead with another version), then I have a solution
to fix the earlier problems reported by Steven:

The problem lies with del_timer_sync(), which just checks
base->running_timer != timer to see if the timer is currently running.

What if we add another variable in struct timer_list that stores whether
we are running the timer callback or not? Before we call the callback in
the timer core, we will set this variable, and we will reset it after the
callback finishes.

del_timer_sync() will then have something like:

	if (base->running_timer != timer)
		remove timer and return;
	else if (timer->running_callback)
		go back to its loop...

So, with my existing patch plus this change, del_timer_sync() will not
return until the callback has completed on CPU0. But what can happen now
is that base->running_timer == timer can be true for two CPUs
simultaneously: cpu0 (running the callback) and cpu2 (running the hardware
timer). Will that cause any issues?

--
viresh
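The retry condition being proposed here — spin not only while the timer is the running timer on its current base, but also while a per-timer "callback in flight" flag is set — can be captured as a small predicate. This is a sketch only: `toy_safe_to_remove` and a `running_callback` field are hypothetical names from this thread, not anything the kernel actually has.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the proposed del_timer_sync() retry test.  Returns true
 * only when it is safe to detach the timer, i.e. its handler is live
 * neither on the current base nor (after a migration) on some other
 * CPU that is still executing the callback.
 */
static bool toy_safe_to_remove(bool running_on_this_base,
			       bool running_callback)
{
	if (running_on_this_base)
		return false;	/* base->running_timer == timer: retry */
	if (running_callback)
		return false;	/* handler migrated, still executing: retry */
	return true;		/* nobody is running the handler: detach */
}
```

The second test is the one missing from the original patch: in Steve's race, the timer has migrated to CPU2, so the base check passes even though the callback is still running on CPU0.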
[Steven replied to a personal Ping!!, including everybody again]

On 9 April 2013 19:25, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Tue, 2013-04-09 at 14:05 +0530, Viresh Kumar wrote:
>> Ping!!
>
> Remind me again. What problem are you trying to solve?

I was trying to migrate a running timer which arms itself, so that we don't
keep a cpu busy just for servicing this timer.

>> On 20 March 2013 20:43, Viresh Kumar <viresh.kumar@linaro.org> wrote:
>>> Hi Steven/Thomas,
>>>
>>> I came back to this patch after completing some other stuff and posting
>>> the wq part of this patchset separately.
>>>
>>> I got your point and understand how this would fail.
>>>
>>> @Thomas: I need your opinion first. Do you like this concept of migrating
>>> a running timer or not? Or do you see some basic problem with this concept?
>
> I'll let Thomas answer this, but to me, this sounds really racy.

Sure.

>>> If not (i.e. I can go ahead with another version), then I have a solution
>>> to fix the earlier problems reported by Steven:
>>>
>>> The problem lies with del_timer_sync(), which just checks
>>> base->running_timer != timer to see if the timer is currently running.
>>>
>>> What if we add another variable in struct timer_list that stores whether
>>> we are running the timer callback or not? Before we call the callback in
>>> the timer core, we will set this variable, and we will reset it after the
>>> callback finishes.
>>>
>>> del_timer_sync() will have something like:
>>>
>>> 	if (base->running_timer != timer)
>>> 		remove timer and return;
>
> For example, this didn't fix the issue. You removed the timer when it
> was still running, because base->running_timer did not equal timer.

You are correct and I was stupid. I wanted to write this instead:

del_timer_sync() will have something like:

	if (base->running_timer != timer)
		if (timer->running_callback)
			go back to its loop...
		else
			remove timer and return;

i.e. if we aren't running on our base cpu, just check if our callback is
executing somewhere else due to migration.
On 9 April 2013 20:22, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> [Steven replied to a personal Ping!!, including everybody again]
>
> [...]
>
> I wanted to write this instead:
>
> del_timer_sync() will have something like:
>
> 	if (base->running_timer != timer)
> 		if (timer->running_callback)
> 			go back to its loop...
> 		else
> 			remove timer and return;
>
> i.e. if we aren't running on our base cpu, just check if our callback is
> executing somewhere else due to migration.

Ping!!
On 24 April 2013 16:52, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> On 9 April 2013 20:22, Viresh Kumar <viresh.kumar@linaro.org> wrote:
>> [Steven replied to a personal Ping!!, including everybody again]
>>
>> [...]
>>
>> I wanted to write this instead:
>>
>> del_timer_sync() will have something like:
>>
>> 	if (base->running_timer != timer)
>> 		if (timer->running_callback)
>> 			go back to its loop...
>> 		else
>> 			remove timer and return;
>>
>> i.e. if we aren't running on our base cpu, just check if our callback is
>> executing somewhere else due to migration.
>
> Ping!!

Ping!!
On Mon, 13 May 2013, Viresh Kumar wrote:
> On 24 April 2013 16:52, Viresh Kumar <viresh.kumar@linaro.org> wrote:
>> On 9 April 2013 20:22, Viresh Kumar <viresh.kumar@linaro.org> wrote:
>>> [Steven replied to a personal Ping!!, including everybody again]
>>>
>>> [...]
>>>
>>> I was trying to migrate a running timer which arms itself, so that we don't
>>> keep a cpu busy just for servicing this timer.

Which mechanism is migrating the timer away?

>>> [...]
>>>
>>> @Thomas: I need your opinion first. Do you like this concept of migrating
>>> a running timer or not? Or do you see some basic problem with this concept?

I have no objections to the functionality per se, but the proposed
solution is not going to fly.

Aside of bloating the data structure, you're changing the semantics of
__mod_timer(). No __mod_timer() caller can deal with -EBUSY. So you'd
break the world and some more.

Here is a list of questions:

 - Which mechanism migrates timers?
 - How is that mechanism triggered?
 - How does that deal with CPU-bound timers?

Thanks,

	tglx
diff --git a/include/linux/timer.h b/include/linux/timer.h
index 8c5a197..6aa720f 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -22,6 +22,7 @@ struct timer_list {
 	unsigned long data;
 
 	int slack;
+	int sched_del;
 
 #ifdef CONFIG_TIMER_STATS
 	int start_pid;
@@ -77,6 +78,7 @@ extern struct tvec_base boot_tvec_bases;
 		.data = (_data),				\
 		.base = (void *)((unsigned long)&boot_tvec_bases + (_flags)), \
 		.slack = -1,					\
+		.sched_del = 0,					\
 		__TIMER_LOCKDEP_MAP_INITIALIZER(		\
 			__FILE__ ":" __stringify(__LINE__))	\
 	}
diff --git a/kernel/timer.c b/kernel/timer.c
index 1170ece..14e1f76 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -622,6 +622,7 @@ static void do_init_timer(struct timer_list *timer, unsigned int flags,
 	timer->entry.next = NULL;
 	timer->base = (void *)((unsigned long)base | flags);
 	timer->slack = -1;
+	timer->sched_del = 0;
 #ifdef CONFIG_TIMER_STATS
 	timer->start_site = NULL;
 	timer->start_pid = -1;
@@ -729,6 +730,12 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
 
 	base = lock_timer_base(timer, &flags);
 
+	if (timer->sched_del) {
+		/* Don't schedule it again, as it is getting deleted */
+		ret = -EBUSY;
+		goto out_unlock;
+	}
+
 	ret = detach_if_pending(timer, base, false);
 	if (!ret && pending_only)
 		goto out_unlock;
@@ -746,21 +753,12 @@ __mod_timer(struct timer_list *timer, unsigned long expires,
 	new_base = per_cpu(tvec_bases, cpu);
 
 	if (base != new_base) {
-		/*
-		 * We are trying to schedule the timer on the local CPU.
-		 * However we can't change timer's base while it is running,
-		 * otherwise del_timer_sync() can't detect that the timer's
-		 * handler yet has not finished. This also guarantees that
-		 * the timer is serialized wrt itself.
-		 */
-		if (likely(base->running_timer != timer)) {
-			/* See the comment in lock_timer_base() */
-			timer_set_base(timer, NULL);
-			spin_unlock(&base->lock);
-			base = new_base;
-			spin_lock(&base->lock);
-			timer_set_base(timer, base);
-		}
+		/* See the comment in lock_timer_base() */
+		timer_set_base(timer, NULL);
+		spin_unlock(&base->lock);
+		base = new_base;
+		spin_lock(&base->lock);
+		timer_set_base(timer, base);
 	}
 
 	timer->expires = expires;
@@ -1039,9 +1037,11 @@ EXPORT_SYMBOL(try_to_del_timer_sync);
  */
 int del_timer_sync(struct timer_list *timer)
 {
-#ifdef CONFIG_LOCKDEP
+	struct tvec_base *base;
 	unsigned long flags;
 
+#ifdef CONFIG_LOCKDEP
+
 	/*
 	 * If lockdep gives a backtrace here, please reference
 	 * the synchronization rules above.
@@ -1051,6 +1051,12 @@ int del_timer_sync(struct timer_list *timer)
 	lock_map_release(&timer->lockdep_map);
 	local_irq_restore(flags);
 #endif
+
+	/* Timer is scheduled for deletion, don't let it re-arm itself */
+	base = lock_timer_base(timer, &flags);
+	timer->sched_del = 1;
+	spin_unlock_irqrestore(&base->lock, flags);
+
 	/*
 	 * don't use it in hardirq context, because it
 	 * could lead to deadlock.
@@ -1058,8 +1064,10 @@ int del_timer_sync(struct timer_list *timer)
 	WARN_ON(in_irq() && !tbase_get_irqsafe(timer->base));
 	for (;;) {
 		int ret = try_to_del_timer_sync(timer);
-		if (ret >= 0)
+		if (ret >= 0) {
+			timer->sched_del = 0;
 			return ret;
+		}
 		cpu_relax();
 	}
 }
Till now, we weren't migrating a running timer because with migration
del_timer_sync() can't detect that the timer's handler has not yet finished.

Now, when can we actually reach the code (inside __mod_timer()) where

	base->running_timer == timer

i.e. we are trying to migrate the current timer? I can see only the
following case:

- The timer re-armed itself, i.e. we are currently running the interrupt
  handler of a timer and it rearmed itself from there. At this time the
  user might have called del_timer_sync() or not. If not, then there is
  no harm in re-arming the timer?

Now, when somebody tries to delete a timer, obviously he doesn't want to
run it any more for now. So, why should we ever re-arm a timer which is
scheduled for deletion?

This patch tries to fix "migration of a running timer", assuming the above
theory is correct :)

So, now when we get a call to del_timer_sync(), we will mark it scheduled
for deletion in an additional variable. This would be checked whenever we
try to modify/arm it again. If it is currently scheduled for deletion, we
must not modify/arm it.

And so, whenever we reach the situation where:

	base->running_timer == timer

we are sure nobody is waiting in del_timer_sync().

We will clear this flag as soon as the timer is deleted, so that it can
be started again after deleting it successfully.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 include/linux/timer.h |  2 ++
 kernel/timer.c        | 42 +++++++++++++++++++++++++-----------------
 2 files changed, 27 insertions(+), 17 deletions(-)