From patchwork Thu Dec 3 04:07:53 2015
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 57596
From: Viresh Kumar
To: Rafael Wysocki
Cc: linaro-kernel@lists.linaro.org, linux-pm@vger.kernel.org,
 ashwin.chaugule@linaro.org, Viresh Kumar, "Rafael J. Wysocki",
 linux-kernel@vger.kernel.org (open list)
Wysocki" , linux-kernel@vger.kernel.org (open list) Subject: [PATCH V2 5/6] cpufreq: governor: replace per-cpu delayed work with timers Date: Thu, 3 Dec 2015 09:37:53 +0530 Message-Id: <691a3524afaa8d08d61987b1c2913948b4e59f01.1449115453.git.viresh.kumar@linaro.org> X-Mailer: git-send-email 2.6.2.198.g614a2ac In-Reply-To: References: In-Reply-To: References: Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org cpufreq governors evaluate load at sampling rate and based on that they update frequency for a group of CPUs belonging to the same cpufreq policy. This is required to be done in a single thread for all policy->cpus, but because we don't want to wakeup idle CPUs to do just that, we use deferrable work for this. If we would have used a single delayed deferrable work for the entire policy, there were chances that the CPU required to run the handler can be in idle and we might end up not changing the frequency for the entire group with load variations. And so we were forced to keep per-cpu works, and only the one that expires first need to do the real work and others are rescheduled for next sampling time. We have been using the more complex solution until now, where we used a delayed deferrable work for this, which is a combination of a timer and a work. This could be made lightweight by keeping per-cpu deferred timers with a single work item, which is scheduled by the first timer that expires. This patch does just that and here are important changes: - The timer handler will run in irq context and so we need to use a spin_lock instead of the timer_mutex. And so a separate timer_lock is created. This also makes the use of the mutex and lock quite clear, as we know what exactly they are protecting. - A new field 'skip_work' is added to track when the timer handlers can queue a work. More comments present in code. Suggested-by: Rafael J. Wysocki Signed-off-by: Viresh Kumar Reviewed-by: Ashwin Chaugule --- drivers/cpufreq/cpufreq_governor.c | 139 +++++++++++++++++++++---------------- drivers/cpufreq/cpufreq_governor.h | 20 ++++-- drivers/cpufreq/cpufreq_ondemand.c | 8 +-- 3 files changed, 98 insertions(+), 69 deletions(-) -- 2.6.2.198.g614a2ac -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/ diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c index 999e1f6addf9..a3f9bc9b98e9 100644 --- a/drivers/cpufreq/cpufreq_governor.c +++ b/drivers/cpufreq/cpufreq_governor.c @@ -158,47 +158,52 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu) } EXPORT_SYMBOL_GPL(dbs_check_cpu); -static inline void __gov_queue_work(int cpu, struct dbs_data *dbs_data, - unsigned int delay) +void gov_add_timers(struct cpufreq_policy *policy, unsigned int delay) { - struct cpu_dbs_info *cdbs = dbs_data->cdata->get_cpu_cdbs(cpu); - - mod_delayed_work_on(cpu, system_wq, &cdbs->dwork, delay); -} - -void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy, - unsigned int delay, bool all_cpus) -{ - int i; + struct dbs_data *dbs_data = policy->governor_data; + struct cpu_dbs_info *cdbs; + int cpu; - if (!all_cpus) { - /* - * Use raw_smp_processor_id() to avoid preemptible warnings. 
-                 * We know that this is only called with all_cpus == false from
-                 * works that have been queued with *_work_on() functions and
-                 * those works are canceled during CPU_DOWN_PREPARE so they
-                 * can't possibly run on any other CPU.
-                 */
-                __gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
-        } else {
-                for_each_cpu(i, policy->cpus)
-                        __gov_queue_work(i, dbs_data, delay);
+        for_each_cpu(cpu, policy->cpus) {
+                cdbs = dbs_data->cdata->get_cpu_cdbs(cpu);
+                cdbs->timer.expires = jiffies + delay;
+                add_timer_on(&cdbs->timer, cpu);
         }
 }
-EXPORT_SYMBOL_GPL(gov_queue_work);
+EXPORT_SYMBOL_GPL(gov_add_timers);
 
-static inline void gov_cancel_work(struct dbs_data *dbs_data,
-                                   struct cpufreq_policy *policy)
+static inline void gov_cancel_timers(struct cpufreq_policy *policy)
 {
+        struct dbs_data *dbs_data = policy->governor_data;
         struct cpu_dbs_info *cdbs;
         int i;
 
         for_each_cpu(i, policy->cpus) {
                 cdbs = dbs_data->cdata->get_cpu_cdbs(i);
-                cancel_delayed_work_sync(&cdbs->dwork);
+                del_timer_sync(&cdbs->timer);
         }
 }
 
+void gov_cancel_work(struct cpu_common_dbs_info *shared)
+{
+        unsigned long flags;
+
+        /*
+         * No work will be queued from timer handlers after skip_work is
+         * updated. And so we can safely cancel the work first and then the
+         * timers.
+         */
+        spin_lock_irqsave(&shared->timer_lock, flags);
+        shared->skip_work++;
+        spin_unlock_irqrestore(&shared->timer_lock, flags);
+
+        cancel_work_sync(&shared->work);
+
+        gov_cancel_timers(shared->policy);
+
+        shared->skip_work = 0;
+}
+
 /* Will return if we need to evaluate cpu load again or not */
 static bool need_load_eval(struct cpu_common_dbs_info *shared,
                            unsigned int sampling_rate)
@@ -217,29 +222,21 @@ static bool need_load_eval(struct cpu_common_dbs_info *shared,
         return true;
 }
 
-static void dbs_timer(struct work_struct *work)
+static void dbs_work_handler(struct work_struct *work)
 {
-        struct cpu_dbs_info *cdbs = container_of(work, struct cpu_dbs_info,
-                                                 dwork.work);
-        struct cpu_common_dbs_info *shared = cdbs->shared;
+        struct cpu_common_dbs_info *shared = container_of(work, struct
+                        cpu_common_dbs_info, work);
         struct cpufreq_policy *policy;
         struct dbs_data *dbs_data;
         unsigned int sampling_rate, delay;
-        bool modify_all = true;
-
-        mutex_lock(&shared->timer_mutex);
+        bool eval_load;
 
         policy = shared->policy;
-
-        /*
-         * Governor might already be disabled and there is no point continuing
-         * with the work-handler.
-         */
-        if (!policy)
-                goto unlock;
-
         dbs_data = policy->governor_data;
 
+        /* Kill all timers */
+        gov_cancel_timers(policy);
+
         if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
                 struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 
@@ -250,14 +247,44 @@ static void dbs_timer(struct work_struct *work)
                 sampling_rate = od_tuners->sampling_rate;
         }
 
-        if (!need_load_eval(cdbs->shared, sampling_rate))
-                modify_all = false;
+        eval_load = need_load_eval(shared, sampling_rate);
 
-        delay = dbs_data->cdata->gov_dbs_timer(policy, modify_all);
-        gov_queue_work(dbs_data, policy, delay, modify_all);
+        /*
+         * Make sure cpufreq_governor_limits() isn't evaluating load in
+         * parallel.
+         */
+        mutex_lock(&shared->timer_mutex);
+        delay = dbs_data->cdata->gov_dbs_timer(policy, eval_load);
+        mutex_unlock(&shared->timer_mutex);
+
+        shared->skip_work--;
+        gov_add_timers(policy, delay);
+}
+
+static void dbs_timer_handler(unsigned long data)
+{
+        struct cpu_dbs_info *cdbs = (struct cpu_dbs_info *)data;
+        struct cpu_common_dbs_info *shared = cdbs->shared;
+        struct cpufreq_policy *policy;
+        unsigned long flags;
+
+        spin_lock_irqsave(&shared->timer_lock, flags);
+        policy = shared->policy;
+
+        /*
+         * Timer handler isn't allowed to queue work at the moment, because:
+         * - Another timer handler has done that
+         * - We are stopping the governor
+         * - Or we are updating the sampling rate of ondemand governor
+         */
+        if (shared->skip_work)
+                goto unlock;
+
+        shared->skip_work++;
+        queue_work(system_wq, &shared->work);
 
 unlock:
-        mutex_unlock(&shared->timer_mutex);
+        spin_unlock_irqrestore(&shared->timer_lock, flags);
 }
 
 static void set_sampling_rate(struct dbs_data *dbs_data,
@@ -288,6 +315,8 @@ static int alloc_common_dbs_info(struct cpufreq_policy *policy,
                 cdata->get_cpu_cdbs(j)->shared = shared;
 
         mutex_init(&shared->timer_mutex);
+        spin_lock_init(&shared->timer_lock);
+        INIT_WORK(&shared->work, dbs_work_handler);
         return 0;
 }
@@ -452,7 +481,9 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
                 if (ignore_nice)
                         j_cdbs->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
 
-                INIT_DEFERRABLE_WORK(&j_cdbs->dwork, dbs_timer);
+                __setup_timer(&j_cdbs->timer, dbs_timer_handler,
+                              (unsigned long)j_cdbs,
+                              TIMER_DEFERRABLE | TIMER_IRQSAFE);
         }
 
         if (cdata->governor == GOV_CONSERVATIVE) {
@@ -470,8 +501,7 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
                 od_ops->powersave_bias_init_cpu(cpu);
         }
 
-        gov_queue_work(dbs_data, policy, delay_for_sampling_rate(sampling_rate),
-                       true);
+        gov_add_timers(policy, delay_for_sampling_rate(sampling_rate));
 
         return 0;
 }
@@ -485,16 +515,9 @@ static int cpufreq_governor_stop(struct cpufreq_policy *policy,
         if (!shared || !shared->policy)
                 return -EBUSY;
 
-        /*
-         * Work-handler must see this updated, as it should not proceed any
-         * further after governor is disabled. And so timer_mutex is taken while
-         * updating this value.
-         */
-        mutex_lock(&shared->timer_mutex);
+        gov_cancel_work(shared);
         shared->policy = NULL;
-        mutex_unlock(&shared->timer_mutex);
 
-        gov_cancel_work(dbs_data, policy);
         return 0;
 }
 
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 0c7589016b6c..76742902491e 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -132,12 +132,20 @@ static void *get_cpu_dbs_info_s(int cpu)                                \
 struct cpu_common_dbs_info {
         struct cpufreq_policy *policy;
         /*
-         * percpu mutex that serializes governor limit change with dbs_timer
-         * invocation. We do not want dbs_timer to run when user is changing
-         * the governor or limits.
+         * Per policy mutex that serializes load evaluation from limit-change
+         * and work-handler.
          */
         struct mutex timer_mutex;
+
+        /*
+         * Per policy lock that serializes access to queuing work from timer
+         * handlers.
+         */
+        spinlock_t timer_lock;
+
         ktime_t time_stamp;
+        unsigned int skip_work;
+        struct work_struct work;
 };
 
 /* Per cpu structures */
@@ -152,7 +160,7 @@ struct cpu_dbs_info {
          * wake-up from idle.
          */
         unsigned int prev_load;
-        struct delayed_work dwork;
+        struct timer_list timer;
         struct cpu_common_dbs_info *shared;
 };
 
@@ -268,11 +276,11 @@ static ssize_t show_sampling_rate_min_gov_pol                           \
 
 extern struct mutex cpufreq_governor_lock;
 
+void gov_add_timers(struct cpufreq_policy *policy, unsigned int delay);
+void gov_cancel_work(struct cpu_common_dbs_info *shared);
 void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
 int cpufreq_governor_dbs(struct cpufreq_policy *policy,
                 struct common_dbs_data *cdata, unsigned int event);
-void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-                unsigned int delay, bool all_cpus);
 void od_register_powersave_bias_handler(unsigned int (*f)
                 (struct cpufreq_policy *, unsigned int, unsigned int),
                 unsigned int powersave_bias);
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index fc0384b4d02d..f879012cf849 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -286,13 +286,11 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
                         continue;
 
                 next_sampling = jiffies + usecs_to_jiffies(new_rate);
-                appointed_at = dbs_info->cdbs.dwork.timer.expires;
+                appointed_at = dbs_info->cdbs.timer.expires;
 
                 if (time_before(next_sampling, appointed_at)) {
-                        cancel_delayed_work_sync(&dbs_info->cdbs.dwork);
-
-                        gov_queue_work(dbs_data, policy,
-                                       usecs_to_jiffies(new_rate), true);
+                        gov_cancel_work(shared);
+                        gov_add_timers(policy, usecs_to_jiffies(new_rate));
                 }
         }
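
[Editorial sketch, appended after the diff as promised in the changelog.]
For readers who want the queuing scheme in isolation, below is a minimal,
self-contained sketch of the pattern this patch introduces: per-cpu
deferrable timers racing to queue one shared work item, with a
spinlock-protected skip_work counter deciding who may queue it. This is
not code from the patch; it is written against the 4.4-era timer API
(__setup_timer() with an unsigned long cookie), it is not compile-tested,
and every demo_* name is invented for illustration.

/*
 * Illustrative sketch only, not part of the patch. Models per-CPU
 * deferrable timers that funnel into a single work item.
 */
#include <linux/cpumask.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/workqueue.h>

struct demo_shared {
        spinlock_t timer_lock;          /* serializes queuing from timers */
        unsigned int skip_work;         /* non-zero: timers must not queue */
        struct work_struct work;        /* one work item for all CPUs */
};

static struct demo_shared demo;
static DEFINE_PER_CPU(struct timer_list, demo_timer);

static void demo_arm_timers(unsigned long delay)
{
        int cpu;

        for_each_online_cpu(cpu) {
                struct timer_list *t = &per_cpu(demo_timer, cpu);

                t->expires = jiffies + delay;
                add_timer_on(t, cpu);
        }
}

/* Mirrors dbs_work_handler(): runs in process context. */
static void demo_work_fn(struct work_struct *work)
{
        struct demo_shared *sh = container_of(work, struct demo_shared, work);
        int cpu;

        /* Kill all timers first; only this handler re-arms them. */
        for_each_online_cpu(cpu)
                del_timer_sync(&per_cpu(demo_timer, cpu));

        /* ... evaluate load and update the frequency here ... */

        /*
         * No timer is armed now, so the unlocked decrement cannot race
         * with demo_timer_fn(). Then restart the per-CPU timers.
         */
        sh->skip_work--;
        demo_arm_timers(HZ / 10);
}

/* Mirrors dbs_timer_handler(): runs in irq context. */
static void demo_timer_fn(unsigned long data)
{
        struct demo_shared *sh = (struct demo_shared *)data;
        unsigned long flags;

        spin_lock_irqsave(&sh->timer_lock, flags);
        /*
         * Queue the shared work only if nobody else did: another CPU's
         * timer may have won the race, or a stop may be in progress.
         */
        if (!sh->skip_work) {
                sh->skip_work++;
                queue_work(system_wq, &sh->work);
        }
        spin_unlock_irqrestore(&sh->timer_lock, flags);
}

static void demo_start(void)
{
        int cpu;

        spin_lock_init(&demo.timer_lock);
        INIT_WORK(&demo.work, demo_work_fn);

        for_each_online_cpu(cpu)
                /* Deferrable: don't wake an idle CPU just for sampling. */
                __setup_timer(&per_cpu(demo_timer, cpu), demo_timer_fn,
                              (unsigned long)&demo,
                              TIMER_DEFERRABLE | TIMER_IRQSAFE);

        demo_arm_timers(HZ / 10);
}

static void demo_stop(void)
{
        unsigned long flags;
        int cpu;

        /* Forbid further queuing, then cancel work before timers. */
        spin_lock_irqsave(&demo.timer_lock, flags);
        demo.skip_work++;
        spin_unlock_irqrestore(&demo.timer_lock, flags);

        cancel_work_sync(&demo.work);
        for_each_online_cpu(cpu)
                del_timer_sync(&per_cpu(demo_timer, cpu));
        demo.skip_work = 0;
}

The ordering in demo_stop() mirrors gov_cancel_work() above: once
skip_work is raised under the lock, no timer handler can queue the work
anymore, so the work can safely be cancelled first and the timers second,
with no window in which a dying timer re-queues the work.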