
[Linaro-big-little,v2] cpufreq/arm-bl-cpufreq: Add simple cpufreq big.LITTLE switcher frontend

Message ID 20120321114840.GB2070@linaro.org
State Not Applicable

Commit Message

Dave Martin March 21, 2012, 11:48 a.m. UTC
On Wed, Mar 21, 2012 at 03:41:46PM +0530, Avik Sil wrote:
> With this patch, the default governor is userspace but after one hour of
> booting it's switching to performance governor!
> 
> # tail -n 20 /var/log/syslog
> Jan  1 00:00:06 linaro-developer kernel: init: ureadahead main process
> (614) terminated with status 5
> Jan  1 00:00:06 linaro-developer kernel: EXT4-fs (mmcblk0p2):
> re-mounted. Opts: errors=remount-ro
> Jan  1 00:00:06 linaro-developer kernel: init: udev-fallback-graphics
> main process (789) terminated with status 1
> Jan  1 00:00:06 linaro-developer kernel: init: failsafe main process
> (779) killed by TERM signal
> Jan  1 00:00:06 linaro-developer cron[826]: (CRON) INFO (pidfile fd = 3)
> Jan  1 00:00:06 linaro-developer cron[840]: (CRON) STARTUP (fork ok)
> Jan  1 00:00:06 linaro-developer cron[840]: (CRON) INFO (Running @reboot
> jobs)
> Jan  1 00:00:06 linaro-developer kernel: init: tty1 main process (859)
> killed by TERM signal
> Jan  1 00:01:06 linaro-developer kernel: ondemand governor failed, too
> long transition latency of HW, fallback to performance governor
> Jan  1 00:01:06 linaro-developer kernel: arm-bl-cpufreq: Switching to
> cluster 0
> Jan  1 00:01:06 linaro-developer kernel: arm-bl-cpufreq: Switching to
> cluster 0
> Jan  1 00:01:06 linaro-developer kernel: ondemand governor failed, too
> long transition latency of HW, fallback to performance governor
> Jan  1 00:01:06 linaro-developer kernel: arm-bl-cpufreq: Switching to
> cluster 0
> Jan  1 00:01:06 linaro-developer kernel: arm-bl-cpufreq: Switching to
> cluster 0
> Jan  1 00:01:06 linaro-developer kernel: ondemand governor failed, too
> long transition latency of HW, fallback to performance governor
> Jan  1 00:01:06 linaro-developer kernel: arm-bl-cpufreq: Switching to
> cluster 0
> Jan  1 00:01:06 linaro-developer kernel: arm-bl-cpufreq: Switching to
> cluster 0
> Jan  1 00:01:06 linaro-developer kernel: ondemand governor failed, too
> long transition latency of HW, fallback to performance governor
> Jan  1 00:01:06 linaro-developer kernel: arm-bl-cpufreq: Switching to
> cluster 0
> Jan  1 00:01:06 linaro-developer kernel: arm-bl-cpufreq: Switching to
> cluster 0

It looks like something in userspace is trying to switch to a different
governor after boot.  The ondemand governor requires a reasonable
estimate of the hardware's transition latency in order to work --
currently the driver doesn't set one, so cpufreq falls back to the
performance governor.

I don't know what is trying to switch governors.  If you can't guess
either, the patch below may give you a clue about where the write is
coming from.
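For reference, the fallback in the log happens because the driver never
fills in policy->cpuinfo.transition_latency, so ondemand treats the
latency as unknown and refuses to run.  Once the cluster switch time has
actually been measured, the fix should be a one-liner in the driver's
init path -- a sketch only, with a placeholder value rather than a
measured figure:

```c
/* In arm-bl-cpufreq's ->init() callback: report the worst-case
 * cluster switch time in nanoseconds.  1 ms is a placeholder
 * here, not a measurement of the real hardware.
 */
policy->cpuinfo.transition_latency = 1 * NSEC_PER_MSEC;
```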

Cheers
---Dave

Patch

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 987a165..d271664 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -29,6 +29,7 @@ 
 #include <linux/completion.h>
 #include <linux/mutex.h>
 #include <linux/syscore_ops.h>
+#include <linux/sched.h>
 
 #include <trace/events/power.h>
 
@@ -424,6 +425,17 @@  static ssize_t show_scaling_governor(struct cpufreq_policy *policy, char *buf)
 }
 
 
+static void printk_task_chain(struct task_struct *t)
+{
+	get_task_struct(t);
+	if (t->real_parent && t->real_parent != t && t->pid != 1) {
+		printk_task_chain(t->real_parent);
+		printk("->");
+	}
+	printk("%s[%d]", t->comm, t->pid);
+	put_task_struct(t);
+}
+
 /**
  * store_scaling_governor - store policy for the specified CPU
  */
@@ -446,6 +458,10 @@  static ssize_t store_scaling_governor(struct cpufreq_policy *policy,
 						&new_policy.governor))
 		return -EINVAL;
 
+	printk(KERN_INFO "cpufreq: ");
+	printk_task_chain(current);
+	printk(": scaling_governor -> %s on CPU%d\n", str_governor, policy->cpu);
+
 	/* Do not use cpufreq_set_policy here or the user_policy.max
 	   will be wrongly overridden */
 	ret = __cpufreq_set_policy(policy, &new_policy);