From patchwork Mon Jun 9 08:51:24 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Viresh Kumar <viresh.kumar@linaro.org>
X-Patchwork-Id: 31535
From: Viresh Kumar <viresh.kumar@linaro.org>
To: rjw@rjwysocki.net
Cc: linaro-kernel@lists.linaro.org, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org, arvind.chauhan@arm.com,
	srivatsa.bhat@linux.vnet.ibm.com, svaidy@linux.vnet.ibm.com,
	ego@linux.vnet.ibm.com, pavel@ucw.cz,
	Viresh Kumar <viresh.kumar@linaro.org>
Subject: [PATCH Resend] cpufreq: governor: remove copy_prev_load from 'struct cpu_dbs_common_info'
Date: Mon, 9 Jun 2014 14:21:24 +0530
X-Mailer: git-send-email 2.0.0.rc2
X-Mailing-List: linux-pm@vger.kernel.org

'copy_prev_load' was recently added by commit 18b46ab (cpufreq: governor:
Be friendly towards latency-sensitive bursty workloads). It is actually a
bit redundant, as we also have 'prev_load', which can store any integer
value and can be used instead of 'copy_prev_load' by setting it to zero.

The true load can also turn out to be zero during long idle intervals (and
hence the actual value of 'prev_load' and the overloaded value can clash).
However, this is not a problem: if the true load really was zero in the
previous interval, it makes sense to evaluate the load afresh for the
current interval rather than copying the previous load.

So, drop 'copy_prev_load' and use 'prev_load' instead. Update the comments
as well to make this clearer.

There is another change here which was probably missed by Srivatsa during
the last round of updates. The unlikely() in the 'if' statement covered
only half of the condition, whereas the whole condition should come under
it.

Also, checkpatch is made silent, as it was reporting this (--strict
option):

CHECK: Alignment should match open parenthesis
+		if (unlikely(wall_time > (2 * sampling_rate) &&
+			j_cdbs->prev_load)) {

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---
Resend: Updated comments/logs as suggested by Srivatsa.
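As a side note, the overloaded-sentinel pattern applied by this patch can
be seen in isolation in the following minimal userspace sketch (plain C,
not kernel code; the function name and the numbers are illustrative only):

#include <stdio.h>

static unsigned int prev_load;	/* 0 means: nothing saved, recalculate */

static unsigned int get_load(unsigned int wall_time, unsigned int idle_time,
			     unsigned int sampling_rate)
{
	unsigned int load;

	if (wall_time > (2 * sampling_rate) && prev_load) {
		/* Woke up from a long idle period: reuse the saved load. */
		load = prev_load;

		/* Destructive copy, so the reuse happens only once. */
		prev_load = 0;
	} else {
		load = 100 * (wall_time - idle_time) / wall_time;
		prev_load = load;
	}
	return load;
}

int main(void)
{
	printf("%u\n", get_load(10, 2, 10));	/* busy interval: 80 */
	printf("%u\n", get_load(100, 99, 10));	/* long idle: reuses 80 */
	printf("%u\n", get_load(100, 99, 10));	/* recalculated afresh: 1 */
	return 0;
}

The third call shows why a clash with a true zero load is harmless: a zero
'prev_load' simply forces a fresh calculation, which is what we want after
a genuinely idle interval anyway.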
 drivers/cpufreq/cpufreq_governor.c | 19 ++++++++++++++-----
 drivers/cpufreq/cpufreq_governor.h |  9 +++++----
 2 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 9004450..1b44496 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -131,15 +131,25 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu)
 		 * timer would not have fired during CPU-idle periods. Hence
 		 * an unusually large 'wall_time' (as compared to the sampling
 		 * rate) indicates this scenario.
+		 *
+		 * prev_load can be zero in two cases and we must recalculate it
+		 * for both cases:
+		 * - during long idle intervals
+		 * - explicitly set to zero
 		 */
-		if (unlikely(wall_time > (2 * sampling_rate)) &&
-		    j_cdbs->copy_prev_load) {
+		if (unlikely(wall_time > (2 * sampling_rate) &&
+			     j_cdbs->prev_load)) {
 			load = j_cdbs->prev_load;
-			j_cdbs->copy_prev_load = false;
+
+			/*
+			 * Perform a destructive copy, to ensure that we copy
+			 * the previous load only once, upon the first wake-up
+			 * from idle.
+			 */
+			j_cdbs->prev_load = 0;
 		} else {
 			load = 100 * (wall_time - idle_time) / wall_time;
 			j_cdbs->prev_load = load;
-			j_cdbs->copy_prev_load = true;
 		}
 
 		if (load > max_load)
@@ -373,7 +383,6 @@ int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 				(j_cdbs->prev_cpu_wall - j_cdbs->prev_cpu_idle);
 			j_cdbs->prev_load = 100 * prev_load /
 					(unsigned int) j_cdbs->prev_cpu_wall;
-			j_cdbs->copy_prev_load = true;
 
 			if (ignore_nice)
 				j_cdbs->prev_cpu_nice =
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index c2a5b7e..cc401d1 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -134,12 +134,13 @@ struct cpu_dbs_common_info {
 	u64 prev_cpu_idle;
 	u64 prev_cpu_wall;
 	u64 prev_cpu_nice;
-	unsigned int prev_load;
 	/*
-	 * Flag to ensure that we copy the previous load only once, upon the
-	 * first wake-up from idle.
+	 * Used to keep track of load in the previous interval. However, when
+	 * explicitly set to zero, it is used as a flag to ensure that we copy
+	 * the previous load to the current interval only once, upon the first
+	 * wake-up from idle.
 	 */
-	bool copy_prev_load;
+	unsigned int prev_load;
 	struct cpufreq_policy *cur_policy;
 	struct delayed_work work;
 	/*
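A brief note on the unlikely() change, in compilable form (unlikely() here
is the standard __builtin_expect() wrapper as used by the kernel; the
helper function itself is illustrative only):

#include <stdbool.h>

#define unlikely(x)	__builtin_expect(!!(x), 0)

static bool reuse_prev_load(unsigned int wall_time,
			    unsigned int sampling_rate,
			    unsigned int prev_load)
{
	/*
	 * Before: only 'wall_time > (2 * sampling_rate)' was marked cold,
	 * while '&& prev_load' sat outside the hint:
	 *
	 *	if (unlikely(wall_time > (2 * sampling_rate)) && prev_load)
	 *
	 * After: the whole condition is inside the hint, so it matches the
	 * branch the compiler actually lays out.
	 */
	return unlikely(wall_time > (2 * sampling_rate) && prev_load);
}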