From patchwork Thu Jul 3 16:25:55 2014
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 33048
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	peterz@infradead.org, mingo@kernel.org
Cc: rjw@rjwysocki.net, vincent.guittot@linaro.org,
	daniel.lezcano@linaro.org, preeti@linux.vnet.ibm.com,
	Dietmar.Eggemann@arm.com, pjt@google.com
Subject: [RFCv2 PATCH 08/23] sched: Aggregate unweighted load contributed by task entities on parenting cfs_rq
Date: Thu, 3 Jul 2014 17:25:55 +0100
Message-Id: <1404404770-323-9-git-send-email-morten.rasmussen@arm.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>
References: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>

From: Dietmar Eggemann <Dietmar.Eggemann@arm.com>

Energy-aware scheduling relies on cpu utilization. To be able to
maintain it, we need a per-run-queue signal: the sum of the unweighted
(i.e. not scaled by task priority) load contributions of the runnable
task entities. The unweighted runnable load on a run queue is therefore
maintained alongside the existing (weighted) runnable load.

This patch is the unweighted counterpart of "sched: Aggregate load
contributed by task entities on parenting cfs_rq" (commit id
2dac754e10a5).
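For illustration, the unweighted contribution uses the same
geometric-series average as the weighted one, with NICE_0_LOAD
substituted for the task's priority-derived weight, so two tasks with
the same runnable_avg_sum contribute equally regardless of nice level.
A minimal user-space sketch of the arithmetic, assuming NICE_0_LOAD =
1024 (the default) and the nice-10 weight of 110 from the kernel's
priority-to-weight table; the helper name contrib() is made up and this
is deliberately not the kernel code:

/*
 * Illustrative user-space sketch only -- not kernel code. It mimics
 * the contribution arithmetic of __update_task_entity_contrib(): the
 * unweighted variant substitutes NICE_0_LOAD for the task's weight.
 */
#include <stdio.h>

#define NICE_0_LOAD	1024UL	/* load weight of a nice-0 task */

static unsigned long contrib(unsigned long runnable_avg_sum,
			     unsigned long runnable_avg_period,
			     unsigned long weight)
{
	return runnable_avg_sum * weight / (runnable_avg_period + 1);
}

int main(void)
{
	unsigned long sum = 23000, period = 45999;

	/* Weighted: a nice-10 task (weight 110) contributes far less. */
	printf("weighted, nice 0:  %lu\n", contrib(sum, period, 1024)); /* 512 */
	printf("weighted, nice 10: %lu\n", contrib(sum, period, 110));  /*  55 */

	/* Unweighted: both tasks contribute the same. */
	printf("unweighted:        %lu\n", contrib(sum, period, NICE_0_LOAD));
	return 0;
}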
Signed-off-by: Dietmar Eggemann <Dietmar.Eggemann@arm.com>
---
 include/linux/sched.h |    1 +
 kernel/sched/debug.c  |    4 ++++
 kernel/sched/fair.c   |   26 ++++++++++++++++++++++----
 kernel/sched/sched.h  |    1 +
 4 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 1507390..b5eeae0 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1105,6 +1105,7 @@ struct sched_avg {
 	u64 last_runnable_update;
 	s64 decay_count;
 	unsigned long load_avg_contrib;
+	unsigned long uw_load_avg_contrib;
 };
 
 #ifdef CONFIG_SCHEDSTATS
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 695f977..78d4151 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -96,6 +96,7 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
 	P(se->avg.runnable_avg_sum);
 	P(se->avg.runnable_avg_period);
 	P(se->avg.load_avg_contrib);
+	P(se->avg.uw_load_avg_contrib);
 	P(se->avg.decay_count);
 #endif
 #undef PN
@@ -215,6 +216,8 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 #ifdef CONFIG_SMP
 	SEQ_printf(m, "  .%-30s: %ld\n", "runnable_load_avg",
 			cfs_rq->runnable_load_avg);
+	SEQ_printf(m, "  .%-30s: %ld\n", "uw_runnable_load_avg",
+			cfs_rq->uw_runnable_load_avg);
 	SEQ_printf(m, "  .%-30s: %ld\n", "blocked_load_avg",
 			cfs_rq->blocked_load_avg);
 #ifdef CONFIG_FAIR_GROUP_SCHED
@@ -635,6 +638,7 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
 	P(se.avg.runnable_avg_sum);
 	P(se.avg.runnable_avg_period);
 	P(se.avg.load_avg_contrib);
+	P(se.avg.uw_load_avg_contrib);
 	P(se.avg.decay_count);
 #endif
 	P(policy);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 981406e..1ee47b3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2345,6 +2345,8 @@ static inline u64 __synchronize_entity_decay(struct sched_entity *se)
 		return 0;
 
 	se->avg.load_avg_contrib = decay_load(se->avg.load_avg_contrib, decays);
+	se->avg.uw_load_avg_contrib = decay_load(se->avg.uw_load_avg_contrib,
+						 decays);
 	se->avg.decay_count = 0;
 
 	return decays;
@@ -2451,12 +2453,18 @@ static inline void __update_task_entity_contrib(struct sched_entity *se)
 	contrib = se->avg.runnable_avg_sum * scale_load_down(se->load.weight);
 	contrib /= (se->avg.runnable_avg_period + 1);
 	se->avg.load_avg_contrib = scale_load(contrib);
+
+	contrib = se->avg.runnable_avg_sum * scale_load_down(NICE_0_LOAD);
+	contrib /= (se->avg.runnable_avg_period + 1);
+	se->avg.uw_load_avg_contrib = scale_load(contrib);
 }
 
 /* Compute the current contribution to load_avg by se, return any delta */
-static long __update_entity_load_avg_contrib(struct sched_entity *se)
+static long __update_entity_load_avg_contrib(struct sched_entity *se,
+					     long *uw_contrib_delta)
 {
 	long old_contrib = se->avg.load_avg_contrib;
+	long uw_old_contrib = se->avg.uw_load_avg_contrib;
 
 	if (entity_is_task(se)) {
 		__update_task_entity_contrib(se);
@@ -2465,6 +2473,10 @@ static long __update_entity_load_avg_contrib(struct sched_entity *se)
 		__update_group_entity_contrib(se);
 	}
 
+	if (uw_contrib_delta)
+		*uw_contrib_delta = se->avg.uw_load_avg_contrib -
+				    uw_old_contrib;
+
 	return se->avg.load_avg_contrib - old_contrib;
 }
 
@@ -2484,7 +2496,7 @@ static inline void update_entity_load_avg(struct sched_entity *se,
 					  int update_cfs_rq)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	long contrib_delta;
+	long contrib_delta, uw_contrib_delta;
 	u64 now;
 
 	/*
@@ -2499,13 +2511,15 @@ static inline void update_entity_load_avg(struct sched_entity *se,
 	if (!__update_entity_runnable_avg(now, &se->avg, se->on_rq))
 		return;
 
-	contrib_delta = __update_entity_load_avg_contrib(se);
+	contrib_delta = __update_entity_load_avg_contrib(se, &uw_contrib_delta);
 
 	if (!update_cfs_rq)
 		return;
 
-	if (se->on_rq)
+	if (se->on_rq) {
 		cfs_rq->runnable_load_avg += contrib_delta;
+		cfs_rq->uw_runnable_load_avg += uw_contrib_delta;
+	}
 	else
 		subtract_blocked_load_contrib(cfs_rq, -contrib_delta);
 }
@@ -2582,6 +2596,8 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 	}
 
 	cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
+	cfs_rq->uw_runnable_load_avg += se->avg.uw_load_avg_contrib;
+
 	/* we force update consideration on load-balancer moves */
 	update_cfs_rq_blocked_load(cfs_rq, !wakeup);
 }
@@ -2600,6 +2616,8 @@ static inline void dequeue_entity_load_avg(struct cfs_rq *cfs_rq,
 	update_cfs_rq_blocked_load(cfs_rq, !sleep);
 
 	cfs_rq->runnable_load_avg -= se->avg.load_avg_contrib;
+	cfs_rq->uw_runnable_load_avg -= se->avg.uw_load_avg_contrib;
+
 	if (sleep) {
 		cfs_rq->blocked_load_avg += se->avg.load_avg_contrib;
 		se->avg.decay_count = atomic64_read(&cfs_rq->decay_counter);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c971359..46cb8bd 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -337,6 +337,7 @@ struct cfs_rq {
 	 * the FAIR_GROUP_SCHED case).
 	 */
 	unsigned long runnable_load_avg, blocked_load_avg;
+	unsigned long uw_runnable_load_avg;
 	atomic64_t decay_counter;
 	u64 last_decay;
 	atomic_long_t removed_load;
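A reviewer-oriented note, not part of the patch: the invariant the
hunks above maintain is that cfs_rq->uw_runnable_load_avg equals the
sum of uw_load_avg_contrib over the currently enqueued entities --
added on enqueue, subtracted on dequeue, and adjusted by the signed
delta on periodic updates while the entity is on the run queue. A toy
user-space model of those three paths (names are hypothetical,
deliberately not the kernel code):

/* Toy model of the aggregation invariant -- illustrative only. */
struct toy_rq { unsigned long uw_sum; };
struct toy_se { unsigned long uw_contrib; int on_rq; };

static void toy_enqueue(struct toy_rq *rq, struct toy_se *se)
{
	rq->uw_sum += se->uw_contrib;	/* cf. enqueue_entity_load_avg() */
	se->on_rq = 1;
}

static void toy_dequeue(struct toy_rq *rq, struct toy_se *se)
{
	rq->uw_sum -= se->uw_contrib;	/* cf. dequeue_entity_load_avg() */
	se->on_rq = 0;
}

static void toy_update(struct toy_rq *rq, struct toy_se *se,
		       unsigned long new_contrib)
{
	long delta = (long)new_contrib - (long)se->uw_contrib;

	se->uw_contrib = new_contrib;
	if (se->on_rq)			/* cf. update_entity_load_avg() */
		rq->uw_sum += (unsigned long)delta;
}

Note that, unlike the weighted signal, the patch adds no unweighted
counterpart to blocked_load_avg: the uw delta is only applied while the
entity is enqueued, as visible in the update_entity_load_avg() hunk.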