From patchwork Mon Jun 22 08:02:51 2015
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 50136
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Rafael Wysocki, Preeti U Murthy
Cc: linaro-kernel@lists.linaro.org, linux-pm@vger.kernel.org, Viresh Kumar
Subject: [PATCH 04/10] cpufreq: ondemand: only queue canceled works from update_sampling_rate()
Date: Mon, 22 Jun 2015 13:32:51 +0530
Message-Id: <4a478b21805d9454ce48e115ab1b7bc61f06cabb.1434959517.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.4.0
X-Mailing-List: linux-pm@vger.kernel.org

The sampling rate is updated with a call to update_sampling_rate(), which
processes CPUs one by one. While the work is canceled on a per-CPU basis,
it is (by mistake) re-queued for all policy->cpus. That wastes CPU cycles
queuing works that are already queued and were never canceled. This patch
queues the work only on the CPU whose work was canceled earlier.

gov_queue_work() was missing a CPU parameter, and it is cleaner to merge
the 'modify_all' flag and the new 'cpu' parameter into a single 'cpus'
mask. So this patch also changes the prototype of gov_queue_work() and
fixes its call sites.
Fixes: 031299b3be30 ("cpufreq: governors: Avoid unnecessary per cpu timer interrupts")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_conservative.c |  4 ++--
 drivers/cpufreq/cpufreq_governor.c     | 30 ++++++++++--------------------
 drivers/cpufreq/cpufreq_governor.h     |  2 +-
 drivers/cpufreq/cpufreq_ondemand.c     |  7 ++++---
 4 files changed, 17 insertions(+), 26 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index f53719e5bed9..2ab53d96c078 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -116,11 +116,11 @@ static void cs_check_cpu(int cpu, unsigned int load)
 }
 
 static unsigned int cs_dbs_timer(struct cpu_dbs_info *cdbs,
-				 struct dbs_data *dbs_data, bool modify_all)
+				 struct dbs_data *dbs_data, bool load_eval)
 {
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 
-	if (modify_all)
+	if (load_eval)
 		dbs_check_cpu(dbs_data, cdbs->ccdbs->policy->cpu);
 
 	return delay_for_sampling_rate(cs_tuners->sampling_rate);
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 836aefd03c1b..416a8c5665dd 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -167,7 +167,7 @@ static inline void __gov_queue_work(int cpu, struct dbs_data *dbs_data,
 }
 
 void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-		    unsigned int delay, bool all_cpus)
+		    unsigned int delay, const struct cpumask *cpus)
 {
 	int i;
 
@@ -175,19 +175,8 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 	if (!policy->governor_enabled)
 		goto out_unlock;
 
-	if (!all_cpus) {
-		/*
-		 * Use raw_smp_processor_id() to avoid preemptible warnings.
-		 * We know that this is only called with all_cpus == false from
-		 * works that have been queued with *_work_on() functions and
-		 * those works are canceled during CPU_DOWN_PREPARE so they
-		 * can't possibly run on any other CPU.
-		 */
-		__gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
-	} else {
-		for_each_cpu(i, policy->cpus)
-			__gov_queue_work(i, dbs_data, delay);
-	}
+	for_each_cpu(i, cpus)
+		__gov_queue_work(i, dbs_data, delay);
 
 out_unlock:
 	mutex_unlock(&cpufreq_governor_lock);
@@ -232,7 +221,8 @@ static void dbs_timer(struct work_struct *work)
 	struct cpufreq_policy *policy = ccdbs->policy;
 	struct dbs_data *dbs_data = policy->governor_data;
 	unsigned int sampling_rate, delay;
-	bool modify_all = true;
+	const struct cpumask *cpus;
+	bool load_eval;
 
 	mutex_lock(&ccdbs->timer_mutex);
 
@@ -246,11 +236,11 @@ static void dbs_timer(struct work_struct *work)
 		sampling_rate = od_tuners->sampling_rate;
 	}
 
-	if (!need_load_eval(cdbs->ccdbs, sampling_rate))
-		modify_all = false;
+	load_eval = need_load_eval(cdbs->ccdbs, sampling_rate);
+	cpus = load_eval ? policy->cpus : cpumask_of(raw_smp_processor_id());
 
-	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
-	gov_queue_work(dbs_data, policy, delay, modify_all);
+	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, load_eval);
+	gov_queue_work(dbs_data, policy, delay, cpus);
 
 	mutex_unlock(&ccdbs->timer_mutex);
 }
@@ -474,7 +464,7 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
 	}
 
 	gov_queue_work(dbs_data, policy,
-		       delay_for_sampling_rate(sampling_rate), true);
+		       delay_for_sampling_rate(sampling_rate), policy->cpus);
 
 	return 0;
 }
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index a0d24149f18c..dc2ad8a427f3 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -273,7 +273,7 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
 int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 		struct common_dbs_data *cdata, unsigned int event);
 void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-		unsigned int delay, bool all_cpus);
+		unsigned int delay, const struct cpumask *cpus);
 void od_register_powersave_bias_handler(unsigned int (*f)
 		(struct cpufreq_policy *, unsigned int, unsigned int),
 		unsigned int powersave_bias);
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 11db20079fc6..774bbddae2c9 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -192,7 +192,7 @@ static void od_check_cpu(int cpu, unsigned int load)
 }
 
 static unsigned int od_dbs_timer(struct cpu_dbs_info *cdbs,
-				 struct dbs_data *dbs_data, bool modify_all)
+				 struct dbs_data *dbs_data, bool load_eval)
 {
 	struct cpufreq_policy *policy = cdbs->ccdbs->policy;
 	unsigned int cpu = policy->cpu;
@@ -201,7 +201,7 @@ static unsigned int od_dbs_timer(struct cpu_dbs_info *cdbs,
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
 	int delay = 0, sample_type = dbs_info->sample_type;
 
-	if (!modify_all)
+	if (!load_eval)
 		goto max_delay;
 
 	/* Common NORMAL_SAMPLE setup */
@@ -284,7 +284,8 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 			mutex_lock(&dbs_info->cdbs.ccdbs->timer_mutex);
 
 			gov_queue_work(dbs_data, policy,
-				       usecs_to_jiffies(new_rate), true);
+				       usecs_to_jiffies(new_rate),
+				       cpumask_of(cpu));
 		}
 		mutex_unlock(&dbs_info->cdbs.ccdbs->timer_mutex);