Message ID | f9568a6c5140b99cbd5b48de21f4d5b12a76d92e.1740499185.git.jerome.forissier@linaro.org
State      | New
Series     | Uthreads
Hi Jerome,

On Tue, 25 Feb 2025 at 18:35, Jerome Forissier <jerome.forissier@linaro.org> wrote:
>
> Introduce a uthread scheduling loop into udelay() when CONFIG_UTHREAD
> is enabled. This means that any uthread calling into udelay() may yield
> to uthread and be scheduled again later.
>
> While not strictly necessary since uthread_schedule() is already called
> by schedule(), tests show that it is desirable to call it in a tight
> loop instead of calling __usleep(). It gives more opportunities for
> other threads to make progress and results in better performances.

Some examples of timing gains would be nice.

>
> Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
> ---
>  lib/time.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/lib/time.c b/lib/time.c
> index d88edafb196..d1a1a66f301 100644
> --- a/lib/time.c
> +++ b/lib/time.c
> @@ -17,6 +17,7 @@
>  #include <asm/global_data.h>
>  #include <asm/io.h>
>  #include <linux/delay.h>
> +#include <uthread.h>
>
>  #ifndef CFG_WD_PERIOD
>  # define CFG_WD_PERIOD (10 * 1000 * 1000) /* 10 seconds default */
> @@ -197,7 +198,14 @@ void udelay(unsigned long usec)
>          do {
>                  schedule();
>                  kv = usec > CFG_WD_PERIOD ? CFG_WD_PERIOD : usec;
> -                __udelay(kv);
> +                if (CONFIG_IS_ENABLED(UTHREAD)) {
> +                        ulong t0 = timer_get_us();
> +                        while (timer_get_us() < t0 + kv)

Do we make progress by constantly scheduling new tasks? Perhaps we
should at least leave the task running for some time?

Thanks
/Ilias

> +                                uthread_schedule();
> +                } else {
> +                        __udelay(kv);
> +                }
>                  usec -= kv;
>          } while(usec);
> +
>  }
> --
> 2.43.0
>
On Fri, Feb 28, 2025 at 03:38:22PM +0200, Ilias Apalodimas wrote:
> Hi Jerome,
>
> On Tue, 25 Feb 2025 at 18:35, Jerome Forissier
> <jerome.forissier@linaro.org> wrote:
> >
> > Introduce a uthread scheduling loop into udelay() when CONFIG_UTHREAD
> > is enabled. This means that any uthread calling into udelay() may yield
> > to uthread and be scheduled again later.
> >
> > While not strictly necessary since uthread_schedule() is already called
> > by schedule(), tests show that it is desirable to call it in a tight
> > loop instead of calling __usleep(). It gives more opportunities for
> > other threads to make progress and results in better performances.
>
> Some examples of timing gains would be nice.
>
> >
> > Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
> > ---
> >  lib/time.c | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/time.c b/lib/time.c
> > index d88edafb196..d1a1a66f301 100644
> > --- a/lib/time.c
> > +++ b/lib/time.c
> > @@ -17,6 +17,7 @@
> >  #include <asm/global_data.h>
> >  #include <asm/io.h>
> >  #include <linux/delay.h>
> > +#include <uthread.h>
> >
> >  #ifndef CFG_WD_PERIOD
> >  # define CFG_WD_PERIOD (10 * 1000 * 1000) /* 10 seconds default */
> > @@ -197,7 +198,14 @@ void udelay(unsigned long usec)
> >          do {
> >                  schedule();
> >                  kv = usec > CFG_WD_PERIOD ? CFG_WD_PERIOD : usec;
> > -                __udelay(kv);
> > +                if (CONFIG_IS_ENABLED(UTHREAD)) {
> > +                        ulong t0 = timer_get_us();
> > +                        while (timer_get_us() < t0 + kv)
>
> Do we make progress by constantly scheduling new tasks? Perhaps we
> should at least leave the task running for some time?

If I get the point, the UTHREAD is a cooperative framework, which means
a task yields the control flow only when it considers nothing else could
be done. And there's no preemption (at least in this revision). Thus I
don't think it's a problem.

> Thanks
> /Ilias

Best regards,
Yao Zi

> > +                                uthread_schedule();
> > +                } else {
> > +                        __udelay(kv);
> > +                }
> >          usec -= kv;
> >          } while(usec);
> > +
> >  }
> > --
> > 2.43.0
> >
On 2/28/25 15:16, Yao Zi wrote:
> On Fri, Feb 28, 2025 at 03:38:22PM +0200, Ilias Apalodimas wrote:
>> Hi Jerome,
>>
>> On Tue, 25 Feb 2025 at 18:35, Jerome Forissier
>> <jerome.forissier@linaro.org> wrote:
>>>
>>> Introduce a uthread scheduling loop into udelay() when CONFIG_UTHREAD
>>> is enabled. This means that any uthread calling into udelay() may yield
>>> to uthread and be scheduled again later.
>>>
>>> While not strictly necessary since uthread_schedule() is already called
>>> by schedule(), tests show that it is desirable to call it in a tight
>>> loop instead of calling __usleep(). It gives more opportunities for
>>> other threads to make progress and results in better performances.
>>
>> Some examples of timing gains would be nice.

I'll try to provide numbers in v3.

>>
>>>
>>> Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
>>> ---
>>>  lib/time.c | 10 +++++++++-
>>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/time.c b/lib/time.c
>>> index d88edafb196..d1a1a66f301 100644
>>> --- a/lib/time.c
>>> +++ b/lib/time.c
>>> @@ -17,6 +17,7 @@
>>>  #include <asm/global_data.h>
>>>  #include <asm/io.h>
>>>  #include <linux/delay.h>
>>> +#include <uthread.h>
>>>
>>>  #ifndef CFG_WD_PERIOD
>>>  # define CFG_WD_PERIOD (10 * 1000 * 1000) /* 10 seconds default */
>>> @@ -197,7 +198,14 @@ void udelay(unsigned long usec)
>>>          do {
>>>                  schedule();
>>>                  kv = usec > CFG_WD_PERIOD ? CFG_WD_PERIOD : usec;
>>> -                __udelay(kv);
>>> +                if (CONFIG_IS_ENABLED(UTHREAD)) {
>>> +                        ulong t0 = timer_get_us();
>>> +                        while (timer_get_us() < t0 + kv)
>>
>> Do we make progress by constantly scheduling new tasks? Perhaps we
>> should at least leave the task running for some time?
>
> If I get the point, the UTHREAD is a cooperative framework, which means
> a task yields the control flow only when it considers nothing else could
> be done. And there's no preemption (at least in this revision). Thus I
> don't think it's a problem.

That's correct. Code always executes uninterrupted until it reaches
uthread_schedule().

Thanks,
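The cooperative model described in this exchange can be illustrated with a
small standalone program. The sketch below is not U-Boot code and does not
use the real uthread API (whose details are not shown in this thread); it
relies on POSIX ucontext only to demonstrate the property Yao and Jerome
describe: a task keeps the CPU until it explicitly yields, and nothing
preempts it in between.

    /*
     * Toy illustration of cooperative scheduling (illustration only:
     * this is not U-Boot code and not the uthread API).
     */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    static void yield_to_main(void)
    {
            /* The only point where this task can lose the CPU */
            swapcontext(&task_ctx, &main_ctx);
    }

    static void task(void)
    {
            for (int i = 0; i < 3; i++) {
                    printf("task: step %d ran to completion\n", i);
                    yield_to_main();
            }
    }

    int main(void)
    {
            static char stack[64 * 1024];

            getcontext(&task_ctx);
            task_ctx.uc_stack.ss_sp = stack;
            task_ctx.uc_stack.ss_size = sizeof(stack);
            task_ctx.uc_link = &main_ctx;   /* where to go if the task returns */
            makecontext(&task_ctx, task, 0);

            for (int i = 0; i < 3; i++) {
                    printf("main: resuming task\n");
                    /* Run the task until its next explicit yield */
                    swapcontext(&main_ctx, &task_ctx);
            }
            return 0;
    }

With a preemptive scheduler the two sets of printf() lines could interleave
mid-step; here they never do. That is why the busy-wait in udelay() is
harmless, as long as every long-running piece of code eventually reaches a
yield point such as uthread_schedule().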
Hi Yao,

On Fri, 28 Feb 2025 at 16:16, Yao Zi <ziyao@disroot.org> wrote:
>
> On Fri, Feb 28, 2025 at 03:38:22PM +0200, Ilias Apalodimas wrote:
> > Hi Jerome,
> >
> > On Tue, 25 Feb 2025 at 18:35, Jerome Forissier
> > <jerome.forissier@linaro.org> wrote:
> > >
> > > Introduce a uthread scheduling loop into udelay() when CONFIG_UTHREAD
> > > is enabled. This means that any uthread calling into udelay() may yield
> > > to uthread and be scheduled again later.
> > >
> > > While not strictly necessary since uthread_schedule() is already called
> > > by schedule(), tests show that it is desirable to call it in a tight
> > > loop instead of calling __usleep(). It gives more opportunities for
> > > other threads to make progress and results in better performances.
> >
> > Some examples of timing gains would be nice.
> >
> > >
> > > Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
> > > ---
> > >  lib/time.c | 10 +++++++++-
> > >  1 file changed, 9 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/time.c b/lib/time.c
> > > index d88edafb196..d1a1a66f301 100644
> > > --- a/lib/time.c
> > > +++ b/lib/time.c
> > > @@ -17,6 +17,7 @@
> > >  #include <asm/global_data.h>
> > >  #include <asm/io.h>
> > >  #include <linux/delay.h>
> > > +#include <uthread.h>
> > >
> > >  #ifndef CFG_WD_PERIOD
> > >  # define CFG_WD_PERIOD (10 * 1000 * 1000) /* 10 seconds default */
> > > @@ -197,7 +198,14 @@ void udelay(unsigned long usec)
> > >          do {
> > >                  schedule();
> > >                  kv = usec > CFG_WD_PERIOD ? CFG_WD_PERIOD : usec;
> > > -                __udelay(kv);
> > > +                if (CONFIG_IS_ENABLED(UTHREAD)) {
> > > +                        ulong t0 = timer_get_us();
> > > +                        while (timer_get_us() < t0 + kv)

I think it's better to rewrite this as timer_get_us() - t0 < kv.

> >
> > Do we make progress by constantly scheduling new tasks? Perhaps we
> > should at least leave the task running for some time?
>
> If I get the point, the UTHREAD is a cooperative framework, which means
> a task yields the control flow only when it considers nothing else could
> be done. And there's no preemption (at least in this revision). Thus I
> don't think it's a problem.

I wasn't implying it would cause a problem, I was just wondering if we
could optimize it easily somehow. But I think it's fine as is.

Thanks
/Ilias

>
> > Thanks
> > /Ilias
>
> Best regards,
> Yao Zi
>
> > > +                                uthread_schedule();
> > > +                } else {
> > > +                        __udelay(kv);
> > > +                }
> > >          usec -= kv;
> > >          } while(usec);
> > > +
> > >  }
> > > --
> > > 2.43.0
> > >
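Ilias does not spell out the rationale for the rewrite, but the usual one is
that the subtraction form is the standard wraparound-safe idiom: with
unsigned arithmetic, timer_get_us() - t0 < kv measures elapsed time
correctly even if the counter wraps between the two reads, whereas
timer_get_us() < t0 + kv can end the wait early when t0 + kv overflows.
Whether the counter can realistically wrap depends on the platform's timer
width, but the subtraction form costs nothing and is safe either way. A
small self-contained demonstration in plain C (using a fake 32-bit
microsecond counter, not U-Boot code):

    #include <stdio.h>
    #include <stdint.h>

    /* Wraparound-safe: elapsed time via unsigned subtraction */
    static int still_waiting_sub(uint32_t now, uint32_t t0, uint32_t delay)
    {
            return (uint32_t)(now - t0) < delay;
    }

    /* Not safe: t0 + delay may overflow and compare against a small value */
    static int still_waiting_add(uint32_t now, uint32_t t0, uint32_t delay)
    {
            return now < t0 + delay;
    }

    int main(void)
    {
            uint32_t t0 = 0xFFFFFF00u;      /* timer close to wrapping */
            uint32_t now = 0xFFFFFF80u;     /* 128 us later, before the wrap */
            uint32_t delay = 512;           /* we should still be waiting */

            printf("subtraction form: %d (expected 1)\n",
                   still_waiting_sub(now, t0, delay));
            printf("addition form:    %d (wrongly 0: wait ends ~384 us early)\n",
                   still_waiting_add(now, t0, delay));
            return 0;
    }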
diff --git a/lib/time.c b/lib/time.c
index d88edafb196..d1a1a66f301 100644
--- a/lib/time.c
+++ b/lib/time.c
@@ -17,6 +17,7 @@
 #include <asm/global_data.h>
 #include <asm/io.h>
 #include <linux/delay.h>
+#include <uthread.h>
 
 #ifndef CFG_WD_PERIOD
 # define CFG_WD_PERIOD (10 * 1000 * 1000) /* 10 seconds default */
@@ -197,7 +198,14 @@ void udelay(unsigned long usec)
         do {
                 schedule();
                 kv = usec > CFG_WD_PERIOD ? CFG_WD_PERIOD : usec;
-                __udelay(kv);
+                if (CONFIG_IS_ENABLED(UTHREAD)) {
+                        ulong t0 = timer_get_us();
+                        while (timer_get_us() < t0 + kv)
+                                uthread_schedule();
+                } else {
+                        __udelay(kv);
+                }
                 usec -= kv;
         } while(usec);
+
 }
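For reference, if Ilias's review comment is folded in, the UTHREAD branch of
the hunk might end up looking like the sketch below. This is only a guess at
a possible v3 shape, not the actual follow-up patch; the only change is the
loop condition, the control flow stays the same.

    /* Sketch only: possible v3 shape of the udelay() hunk */
    if (CONFIG_IS_ENABLED(UTHREAD)) {
            ulong t0 = timer_get_us();

            /*
             * Yield until kv microseconds have elapsed; the
             * subtraction keeps the test correct even if the
             * timer value wraps between reads.
             */
            while (timer_get_us() - t0 < kv)
                    uthread_schedule();
    } else {
            __udelay(kv);
    }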
Introduce a uthread scheduling loop into udelay() when CONFIG_UTHREAD
is enabled. This means that any uthread calling into udelay() may yield
to another uthread and be scheduled again later.

While not strictly necessary since uthread_schedule() is already called
by schedule(), tests show that it is desirable to call it in a tight
loop instead of calling __usleep(). It gives more opportunities for
other threads to make progress and results in better performance.

Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
---
 lib/time.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)