From patchwork Thu Jul 3 16:25:58 2014
X-Patchwork-Submitter: Morten Rasmussen <morten.rasmussen@arm.com>
X-Patchwork-Id: 33050
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	peterz@infradead.org, mingo@kernel.org
Cc: rjw@rjwysocki.net, vincent.guittot@linaro.org,
	daniel.lezcano@linaro.org, preeti@linux.vnet.ibm.com,
	Dietmar.Eggemann@arm.com, pjt@google.com
Subject: [RFCv2 PATCH 11/23] sched: Introduce an unweighted cpu_load array
Date: Thu, 3 Jul 2014 17:25:58 +0100
Message-Id: <1404404770-323-12-git-send-email-morten.rasmussen@arm.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>
References: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>

From: Dietmar Eggemann <Dietmar.Eggemann@arm.com>

Maintain an unweighted (uw) cpu_load array, rq.uw_cpu_load[], as the
unweighted counterpart of rq.cpu_load[].

Signed-off-by: Dietmar Eggemann <Dietmar.Eggemann@arm.com>
---
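A note on the update rule (illustration only, not part of the patch):
each cpu_load[] index is an exponentially decayed average of the
instantaneous load, with scale = 1 << idx, so higher indexes react more
slowly. The standalone sketch below mirrors the arithmetic of
__update_cpu_load() for a single array and omits the missed-tick
handling; the constant input of 1024 is made up for illustration. The
patch applies the identical rule to uw_cpu_load[].

#include <stdio.h>

#define CPU_LOAD_IDX_MAX 5

static unsigned long cpu_load[CPU_LOAD_IDX_MAX];

/* Same update rule as __update_cpu_load(), for one array and without
 * decay_load_missed() handling of skipped ticks. */
static void update_load(unsigned long load)
{
	unsigned long scale;
	int i;

	cpu_load[0] = load;	/* idx 0 follows the input directly */

	for (i = 1, scale = 2; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
		unsigned long old_load = cpu_load[i];
		unsigned long new_load = load;

		/* Round up when rising so the average can actually
		 * reach the input instead of stalling one below it. */
		if (new_load > old_load)
			new_load += scale - 1;

		cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
	}
}

int main(void)
{
	int tick, i;

	for (tick = 1; tick <= 8; tick++) {
		update_load(1024);	/* constant input of 1024 */
		printf("tick %d:", tick);
		for (i = 0; i < CPU_LOAD_IDX_MAX; i++)
			printf(" %4lu", cpu_load[i]);
		printf("\n");
	}
	return 0;
}

With a constant input, idx 0 jumps straight to 1024 on the first tick
while idx 4 starts at 64 and climbs slowly; the round-up on rising load
is what lets every index eventually converge on the input.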
 kernel/sched/core.c  |    4 +++-
 kernel/sched/proc.c  |   22 ++++++++++++++++++----
 kernel/sched/sched.h |    1 +
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d7544a..d814064 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7114,8 +7114,10 @@ void __init sched_init(void)
 		init_tg_rt_entry(&root_task_group, &rq->rt, NULL, i, NULL);
 #endif
 
-		for (j = 0; j < CPU_LOAD_IDX_MAX; j++)
+		for (j = 0; j < CPU_LOAD_IDX_MAX; j++) {
 			rq->cpu_load[j] = 0;
+			rq->uw_cpu_load[j] = 0;
+		}
 
 		rq->last_load_update_tick = jiffies;
 
diff --git a/kernel/sched/proc.c b/kernel/sched/proc.c
index 16f5a30..2260092 100644
--- a/kernel/sched/proc.c
+++ b/kernel/sched/proc.c
@@ -471,6 +471,7 @@ decay_load_missed(unsigned long load, unsigned long missed_updates, int idx)
  * every tick. We fix it up based on jiffies.
  */
 static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
+			      unsigned long uw_this_load,
 			      unsigned long pending_updates)
 {
 	int i, scale;
@@ -479,14 +480,20 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
 
 	/* Update our load: */
 	this_rq->cpu_load[0] = this_load; /* Fasttrack for idx 0 */
+	this_rq->uw_cpu_load[0] = uw_this_load; /* Fasttrack for idx 0 */
 	for (i = 1, scale = 2; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
-		unsigned long old_load, new_load;
+		unsigned long old_load, new_load, uw_old_load, uw_new_load;
 
 		/* scale is effectively 1 << i now, and >> i divides by scale */
 
 		old_load = this_rq->cpu_load[i];
 		old_load = decay_load_missed(old_load, pending_updates - 1, i);
 		new_load = this_load;
+
+		uw_old_load = this_rq->uw_cpu_load[i];
+		uw_old_load = decay_load_missed(uw_old_load,
+						pending_updates - 1, i);
+		uw_new_load = uw_this_load;
 		/*
 		 * Round up the averaging division if load is increasing. This
 		 * prevents us from getting stuck on 9 if the load is 10, for
@@ -494,8 +501,12 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
 		 */
 		if (new_load > old_load)
 			new_load += scale - 1;
+		if (uw_new_load > uw_old_load)
+			uw_new_load += scale - 1;
 
 		this_rq->cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
+		this_rq->uw_cpu_load[i] = (uw_old_load * (scale - 1) +
+					   uw_new_load) >> i;
 	}
 
 	sched_avg_update(this_rq);
@@ -535,6 +546,7 @@ void update_idle_cpu_load(struct rq *this_rq)
 {
 	unsigned long curr_jiffies = ACCESS_ONCE(jiffies);
 	unsigned long load = get_rq_runnable_load(this_rq);
+	unsigned long uw_load = this_rq->cfs.uw_runnable_load_avg;
 	unsigned long pending_updates;
 
 	/*
@@ -546,7 +558,7 @@ void update_idle_cpu_load(struct rq *this_rq)
 	pending_updates = curr_jiffies - this_rq->last_load_update_tick;
 	this_rq->last_load_update_tick = curr_jiffies;
 
-	__update_cpu_load(this_rq, load, pending_updates);
+	__update_cpu_load(this_rq, load, uw_load, pending_updates);
 }
 
 /*
@@ -569,7 +581,7 @@ void update_cpu_load_nohz(void)
 		 * We were idle, this means load 0, the current load might be
 		 * !0 due to remote wakeups and the sort.
 		 */
-		__update_cpu_load(this_rq, 0, pending_updates);
+		__update_cpu_load(this_rq, 0, 0, pending_updates);
 	}
 	raw_spin_unlock(&this_rq->lock);
 }
@@ -581,11 +593,13 @@ void update_cpu_load_nohz(void)
  */
 void update_cpu_load_active(struct rq *this_rq)
 {
 	unsigned long load = get_rq_runnable_load(this_rq);
+	unsigned long uw_load = this_rq->cfs.uw_runnable_load_avg;
+
 	/*
 	 * See the mess around update_idle_cpu_load() / update_cpu_load_nohz().
 	 */
 	this_rq->last_load_update_tick = jiffies;
-	__update_cpu_load(this_rq, load, 1);
+	__update_cpu_load(this_rq, load, uw_load, 1);
 	calc_load_account_active(this_rq);
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d7d2ee2..455d152 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -521,6 +521,7 @@ struct rq {
 #endif
 #define CPU_LOAD_IDX_MAX 5
 	unsigned long cpu_load[CPU_LOAD_IDX_MAX];
+	unsigned long uw_cpu_load[CPU_LOAD_IDX_MAX];
 	unsigned long last_load_update_tick;
 #ifdef CONFIG_NO_HZ_COMMON
 	u64 nohz_stamp;
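For context on why an unweighted counterpart is useful (my reading of
the series' motivation, not wording from this patch): cpu_load[] is
derived from nice-weighted load, so a fully busy low-priority task
contributes almost nothing, which is a misleading signal for capacity
and energy decisions. The toy program below illustrates the difference;
the weights are the kernel's prio_to_weight[] entries for nice 0 (1024)
and nice 10 (110), and the two-CPU scenario is made up.

#include <stdio.h>

/* prio_to_weight[] entries for nice 0 and nice 10 */
#define WEIGHT_NICE_0	1024
#define WEIGHT_NICE_10	110

int main(void)
{
	/*
	 * Two CPUs, each 100% busy with a single task (runnable
	 * fraction expressed out of 1024); only the nice level differs.
	 */
	unsigned long runnable = 1024;

	unsigned long w_cpu0 = runnable * WEIGHT_NICE_0 / 1024;  /* 1024 */
	unsigned long w_cpu1 = runnable * WEIGHT_NICE_10 / 1024; /*  110 */

	printf("weighted load:   cpu0=%lu cpu1=%lu\n", w_cpu0, w_cpu1);
	printf("unweighted load: cpu0=%lu cpu1=%lu\n", runnable, runnable);
	return 0;
}

A balancer reading only the weighted numbers treats cpu1 as nearly
idle even though it is 100% busy; the unweighted figures preserve that
information, which is what uw_cpu_load[] makes available per rq.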