
[V3,3/9] cpufreq: Cap the default transition delay value to 10 ms

Message ID 1b93c94cb8b4914314e4f50304c3cb11c53d8b14.1500373914.git.viresh.kumar@linaro.org
State Accepted
Commit e948bc8fbee077735c2b71b991a5ca5e573f3506

Commit Message

Viresh Kumar July 19, 2017, 10:12 a.m. UTC
If transition_delay_us isn't defined by the cpufreq driver, the default
value of transition delay (time after which the cpufreq governor will
try updating the frequency again) is currently calculated by multiplying
transition_latency (nsec) with LATENCY_MULTIPLIER (1000) and then
converting this time to usec. That gives the exact same value as
transition_latency, just that the time unit is usec instead of nsec.

With acpi-cpufreq, for example, transition_latency is set to around 10
usec and we get a transition delay of 10 ms, which seems to be a
reasonable amount of time to reevaluate the frequency.

But for platforms where frequency switching isn't that fast (like ARM),
the transition_latency varies from 500 usec to 3 ms, and the transition
delay becomes 500 ms to 3 seconds. Of course, that is a pretty bad
default value to start with.

We can try to come up with a better formula (instead of multiplying by
LATENCY_MULTIPLIER) to solve this problem, but would that be worth it?

This patch takes a simple approach and caps the default transition
delay at 10 ms. Of course, userspace can still change this value at any
time, and individual drivers can instead provide transition_delay_us.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

---
 drivers/cpufreq/cpufreq.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

-- 
2.13.0.71.gd7076ec9c9cb

Comments

Leonard Crestez July 25, 2017, 11:54 a.m. UTC | #1
On Wed, 2017-07-19 at 15:42 +0530, Viresh Kumar wrote:
> If transition_delay_us isn't defined by the cpufreq driver, the default
> value of transition delay (time after which the cpufreq governor will
> try updating the frequency again) is currently calculated by multiplying
> transition_latency (nsec) with LATENCY_MULTIPLIER (1000) and then
> converting this time to usec. That gives the exact same value as
> transition_latency, just that the time unit is usec instead of nsec.
>
> With acpi-cpufreq for example, transition_latency is set to around 10
> usec and we get transition delay as 10 ms. Which seems to be a
> reasonable amount of time to reevaluate the frequency again.
>
> But for platforms where frequency switching isn't that fast (like ARM),
> the transition_latency varies from 500 usec to 3 ms, and the transition
> delay becomes 500 ms to 3 seconds. Of course, that is a pretty bad
> default value to start with.
>
> We can try to come across a better formula (instead of multiplying with
> LATENCY_MULTIPLIER) to solve this problem, but will that be worth it ?
>
> This patch tries a simple approach and caps the maximum value of default
> transition delay to 10 ms. Of course, userspace can still come in and
> change this value anytime or individual drivers can rather provide
> transition_delay_us instead.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
> ---
>  drivers/cpufreq/cpufreq.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> index c426d21822f7..d00cde871c15 100644
> --- a/drivers/cpufreq/cpufreq.c
> +++ b/drivers/cpufreq/cpufreq.c
> @@ -532,8 +532,19 @@ unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
>  		return policy->transition_delay_us;
>  
>  	latency = policy->cpuinfo.transition_latency / NSEC_PER_USEC;
> -	if (latency)
> -		return latency * LATENCY_MULTIPLIER;
> +	if (latency) {
> +		/*
> +		 * For platforms that can change the frequency very fast (< 10
> +		 * us), the above formula gives a decent transition delay. But
> +		 * for platforms where transition_latency is in milliseconds, it
> +		 * ends up giving unrealistic values.
> +		 *
> +		 * Cap the default transition delay to 10 ms, which seems to be
> +		 * a reasonable amount of time after which we should reevaluate
> +		 * the frequency.
> +		 */
> +		return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);
> +	}
>  
>  	return LATENCY_MULTIPLIER;
>  }

This patch made its way into linux-next and it seems to cause imx SoCs
to almost always hang around their max frequency with the ondemand
governor, even when almost completely idle. The lowest frequency is
never reached. This seems wrong?

This driver calculates transition_latency at probe time; the value is
not terribly accurate, but it reaches values like latency = 109 us, so
this patch clamps the delay at about 10% of the old value.

It's worth noting that the default IMX config has HZ=100 and
NO_HZ_IDLE=y, so maybe doing idle checks at a rate comparable to the
jiffy tick screws stuff up? I don't understand what ondemand is trying
to do.
Rafael J. Wysocki July 26, 2017, 12:19 a.m. UTC | #2
On Tuesday, July 25, 2017 02:54:46 PM Leonard Crestez wrote:
> On Wed, 2017-07-19 at 15:42 +0530, Viresh Kumar wrote:
> > [...]
>
> This patch made its way into linux-next and it seems to cause imx SoCs
> to almost always hang around their max frequency with the ondemand
> governor, even when almost completely idle. The lowest frequency is
> never reached. This seems wrong?
>
> This driver calculates transition_latency at probe time; the value is
> not terribly accurate, but it reaches values like latency = 109 us, so
> this patch clamps the delay at about 10% of the old value.
>
> It's worth noting that the default IMX config has HZ=100 and
> NO_HZ_IDLE=y, so maybe doing idle checks at a rate comparable to the
> jiffy tick screws stuff up? I don't understand what ondemand is trying
> to do.


I've dropped this commit from linux-next for now.

Thanks,
Rafael
Viresh Kumar July 26, 2017, 6:06 a.m. UTC | #3
On 25-07-17, 14:54, Leonard Crestez wrote:

Thanks for reporting, Leonard, really appreciate it.

> This patch made it's way into linux-next and it seems to cause imx socs
> to almost always hang around their max frequency with the ondemand
> governor, even when almost completely idle. The lowest frequency is
> never reached. This seems wrong?

Sure thing. Though I tried to reproduce it on Hikey today and it all
worked fine with the ondemand governor. The latency is around 500 us
here. I don't have a clue right now about what's wrong in your setup.

> This driver calculates transition_latency at probe time, the value is
> not terribly accurate but it reaches values like latency = 109 us, so

So this is the value that is stored in the global variable
"transition_latency" in the imx6q-cpufreq.c file, i.e.
transition_latency = 109000 (ns) to be exact?

> this patch clamps it at about 10% of the value.


Without this patch the sampling rate of the ondemand governor would be
109 ms, and after it the rate is capped at 10 ms. Why would that screw
up anyone's setup? I don't have an answer to that right now.

> It's worth noting that the default IMX config has HZ=100 and
> NO_HZ_IDLE=y, so maybe doing idle checks at a rate comparable to the
> jiffie tick screws stuff up?


Maybe, but I am not sure at all.

Can you try a few things?

- Don't use this patch and try to change ondemand's sampling rate from
  sysfs. Try setting it to 10000 and see if the behavior is identical
  to after this patch.

- Find out how much time it really takes to change the frequency of
  the CPU. I don't really think 109 us is the right transition latency.
  Use the attached patch for that and look for the print message.

-- 
viresh

From 95d830ad8765d6c35fb9e91d5028bf3cf1ff2451 Mon Sep 17 00:00:00 2001
Message-Id: <95d830ad8765d6c35fb9e91d5028bf3cf1ff2451.1501049147.git.viresh.kumar@linaro.org>
From: Viresh Kumar <viresh.kumar@linaro.org>
Date: Fri, 2 Jun 2017 15:00:58 +0530
Subject: [PATCH] cpufreq: Find transition latency dynamically

Test patch to find latency dynamically.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>

---
 drivers/cpufreq/cpufreq.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 0d6fbb3099b4..e62d670dffd2 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1076,6 +1076,62 @@ __weak struct cpufreq_governor *cpufreq_default_governor(void)
 	return NULL;
 }
 
+static int find_dvfs_latency(struct cpufreq_policy *policy, unsigned long freq,
+			     unsigned int *latency_ns)
+{
+	u64 time;
+	int ret;
+
+	time = local_clock();
+	ret = __cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);
+	*latency_ns = local_clock() - time;
+
+	return ret;
+}
+
+/*
+ * Find the transition latency dynamically by:
+ * - Switching to min freq first.
+ * - Then switching to max freq.
+ * - And finally switching back to the initial freq.
+ *
+ * The maximum duration of the above three freq changes should be good enough to
+ * find the maximum transition latency for a platform.
+ */
+static void cpufreq_find_target_latency(struct cpufreq_policy *policy)
+{
+	unsigned long initial_freq = policy->cur;
+	unsigned int latency_ns, latency_max_ns;
+	int ret;
+
+	if (!has_target())
+		return;
+
+	/* Go to min frequency first */
+	ret = find_dvfs_latency(policy, policy->cpuinfo.min_freq, &latency_ns);
+	if (ret)
+		return;
+
+	latency_max_ns = latency_ns;
+
+	/* Go to max frequency then.. */
+	ret = find_dvfs_latency(policy, policy->cpuinfo.max_freq, &latency_ns);
+	if (ret)
+		return;
+
+	latency_max_ns = max(latency_max_ns, latency_ns);
+
+	/* And finally switch back to where we started from */
+	ret = find_dvfs_latency(policy, initial_freq, &latency_ns);
+	if (ret)
+		return;
+
+	policy->cpuinfo.transition_latency = max(latency_max_ns, latency_ns);
+
+	pr_info("%s: Setting transition latency to %u ns for policy of CPU%d\n",
+		__func__, policy->cpuinfo.transition_latency, policy->cpu);
+}
+
 static int cpufreq_init_policy(struct cpufreq_policy *policy)
 {
 	struct cpufreq_governor *gov = NULL;
@@ -1345,6 +1401,8 @@ static int cpufreq_online(unsigned int cpu)
 	}
 
 	if (new_policy) {
+		cpufreq_find_target_latency(policy);
+
 		ret = cpufreq_add_dev_interface(policy);
 		if (ret)
 			goto out_exit_policy;
-- 
2.13.0.71.gd7076ec9c9cb


Viresh Kumar Aug. 2, 2017, 3:23 a.m. UTC | #4
On 01-08-17, 20:48, Leonard Crestez wrote:
> > > I found this by enabling the power:cpu_frequency tracepoint event and
> > > checking for deltas with a script. Enabling CPU_FREQ_STAT show this:
> > >
> > > time_in_state:
> > >
> > > 396000 1609
> >
> > So we still stay at the lowest frequency most of the time.
>
> Yes


Weren't the numbers you shared taken after this patch was applied?
What's wrong with the numbers, then?

> > Maybe can you compare these values with and without this patch to let
> > us know?
>
> Without the patch it is always at low freq. Sampling at a lower
> frequency means spikes get ignored.


-- 
viresh
Viresh Kumar Aug. 16, 2017, 6:34 a.m. UTC | #5
On 28-07-17, 10:58, Viresh Kumar wrote:
> At this point I really feel that this is a hardware specific problem
> and it was working by chance until now. And I am not sure if we
> shouldn't be stopping this patch from getting merged just because of
> that.
>
> At least you can teach your distribution to go increase the sampling
> rate from userspace to make it all work.


It's been three weeks since my last email on this thread and no reply
yet from any of the IMX maintainers. Can someone please help here?

@Shawn: Can you help debug a bit here, to see what gets screwed up by
this commit? It's just that your platform isn't able to change
frequency at a 10 ms rate.

@Rafael: I am not sure, but should we be stopping this patch because
some hardware isn't able to change frequency at a 10 ms interval and is
just faking the transition delay to start with?

Maybe we get this merged again and the IMX guys can figure out what's
wrong on their platform and how to fix it?

-- 
viresh
Viresh Kumar Aug. 17, 2017, 3:38 a.m. UTC | #6
On 16-08-17, 12:42, Leonard Crestez wrote:
> I reported the initial issue but did not have the time to do a more
> thorough investigation, this is more complicated than it seems. I said
> this before but maybe it got lost:
>
> I don't think the odd behavior I noticed justifies keeping the patch
> from merging.


Thanks for that.

I will resend this patch now.

-- 
viresh

Patch

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index c426d21822f7..d00cde871c15 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -532,8 +532,19 @@ unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
 		return policy->transition_delay_us;
 
 	latency = policy->cpuinfo.transition_latency / NSEC_PER_USEC;
-	if (latency)
-		return latency * LATENCY_MULTIPLIER;
+	if (latency) {
+		/*
+		 * For platforms that can change the frequency very fast (< 10
+		 * us), the above formula gives a decent transition delay. But
+		 * for platforms where transition_latency is in milliseconds, it
+		 * ends up giving unrealistic values.
+		 *
+		 * Cap the default transition delay to 10 ms, which seems to be
+		 * a reasonable amount of time after which we should reevaluate
+		 * the frequency.
+		 */
+		return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);
+	}
 
 	return LATENCY_MULTIPLIER;
 }