[0/3] cpufreq: schedutil: Minor fix and cleanups

Message ID cover.1488437503.git.viresh.kumar@linaro.org

Viresh Kumar March 2, 2017, 8:33 a.m. UTC
Hi,

The first patch fixes an issue and the other two do cleanups. I have run
hackbench with these patches and no regressions were noticed.

--
viresh

Viresh Kumar (3):
  cpufreq: schedutil: move cached_raw_freq to struct sugov_policy
  cpufreq: schedutil: Pass sg_policy to get_next_freq()
  cpufreq: schedutil: remove redundant code from
    sugov_next_freq_shared()

 kernel/sched/cpufreq_schedutil.c | 37 +++++++++++++------------------------
 1 file changed, 13 insertions(+), 24 deletions(-)

-- 
2.7.1.410.g6faf27b

Comments

Viresh Kumar March 3, 2017, 3:07 a.m. UTC | #1
On 02-03-17, 23:05, Rafael J. Wysocki wrote:
> On Thursday, March 02, 2017 02:03:20 PM Viresh Kumar wrote:
> > cached_raw_freq applies to the entire cpufreq policy and not individual
> > CPUs. Apart from wasting per-cpu memory, it is actually wrong to keep it
> > in struct sugov_cpu as we may end up comparing next_freq with a stale
> > cached_raw_freq of a random CPU.
> >
> > Move cached_raw_freq to struct sugov_policy.
> >
> > Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
>
> Any chance for a Fixes: tag?

Fixes: 5cbea46984d6 ("cpufreq: schedutil: map raw required frequency to driver frequency")

Sorry for missing that in the first place.

-- 
viresh
Viresh Kumar March 6, 2017, 4:45 a.m. UTC | #2
On 04-03-17, 01:11, Rafael J. Wysocki wrote:
> So one idea is that if SCHED_CPUFREQ_RT_DL is set in flags, we don't even
> need to start the loop which is quite a cost to simply notice that there's
> nothing to do.

Hmm. Isn't the probability of this flag being set the same for all CPUs in the
policy? If so, why do we need to handle the current CPU specially?

> Also I don't quite agree with adding an extra pair of integer multiplications
> to that loop just to get rid of the extra args.

But that should be cheap enough, as we would be multiplying by 1 in one of them
and by 0 in the other.

Isn't that better than keeping the same code in two places?

Also, as I mentioned in the commit log, the extra comparisons done for the
current CPU are balanced out once we have three CPUs in the policy, since we
end up doing one comparison less for every other CPU in the policy. With
quad-core policies we reduce the number of comparisons by 1, and for octa-core
ones we reduce it by 5.

-- 
viresh
Rafael J. Wysocki March 8, 2017, 12:54 p.m. UTC | #3
On Wed, Mar 8, 2017 at 12:15 PM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> On 08-03-17, 11:50, Rafael J. Wysocki wrote:
>> So overall, maybe you can move the flags check to
>> sugov_update_shared(), so that you don't need to pass flags to
>> sugov_next_freq_shared(), and then do what you did to util and max.
>
> Just to confirm, below is what you are suggesting?

Yes, it is.

> -------------------------8<-------------------------
>
>  1 file changed, 9 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 78468aa051ab..f5ffe241812e 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -217,30 +217,19 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
>         sugov_update_commit(sg_policy, time, next_f);
>  }
>
> -static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu,
> -                                          unsigned long util, unsigned long max,
> -                                          unsigned int flags)
> +static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu)
>  {
>         struct sugov_policy *sg_policy = sg_cpu->sg_policy;
>         struct cpufreq_policy *policy = sg_policy->policy;
> -       unsigned int max_f = policy->cpuinfo.max_freq;
>         u64 last_freq_update_time = sg_policy->last_freq_update_time;
> +       unsigned long util = 0, max = 1;
>         unsigned int j;
>
> -       if (flags & SCHED_CPUFREQ_RT_DL)
> -               return max_f;
> -
> -       sugov_iowait_boost(sg_cpu, &util, &max);
> -
>         for_each_cpu(j, policy->cpus) {
> -               struct sugov_cpu *j_sg_cpu;
> +               struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
>                 unsigned long j_util, j_max;
>                 s64 delta_ns;
>
> -               if (j == smp_processor_id())
> -                       continue;
> -
> -               j_sg_cpu = &per_cpu(sugov_cpu, j);
>                 /*
>                  * If the CPU utilization was last updated before the previous
>                  * frequency update and the time elapsed between the last update
> @@ -254,7 +243,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu,
>                         continue;
>                 }
>                 if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL)
> -                       return max_f;
> +                       return policy->cpuinfo.max_freq;
>
>                 j_util = j_sg_cpu->util;
>                 j_max = j_sg_cpu->max;
> @@ -289,7 +278,11 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
>         sg_cpu->last_update = time;
>
>         if (sugov_should_update_freq(sg_policy, time)) {
> -               next_f = sugov_next_freq_shared(sg_cpu, util, max, flags);
> +               if (flags & SCHED_CPUFREQ_RT_DL)
> +                       next_f = sg_policy->policy->cpuinfo.max_freq;
> +               else
> +                       next_f = sugov_next_freq_shared(sg_cpu);
> +
>                 sugov_update_commit(sg_policy, time, next_f);
>         }
>

>> But that would be a 4.12 change anyway.
>
> Sure.

And IMO the subject/changelog should not talk about "redundant code", because
the code in question is not in fact redundant; it should instead talk about
refactoring the code to eliminate one conditional from the for_each_cpu()
loop and to make that loop treat all CPUs in the same way (more
symmetrically).

Thanks,
Rafael