From patchwork Mon Sep 22 16:24:05 2014
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 37692
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: peterz@infradead.org, mingo@redhat.com
Cc: dietmar.eggemann@arm.com, pjt@google.com, bsegall@google.com,
	vincent.guittot@linaro.org, nicolas.pitre@linaro.org,
	mturquette@linaro.org, rjw@rjwysocki.net,
	linux-kernel@vger.kernel.org, Morten Rasmussen <morten.rasmussen@arm.com>
Subject: [PATCH 5/7] sched: Implement usage tracking
Date: Mon, 22 Sep 2014 17:24:05 +0100
Message-Id: <1411403047-32010-6-git-send-email-morten.rasmussen@arm.com>
In-Reply-To: <1411403047-32010-1-git-send-email-morten.rasmussen@arm.com>
References: <1411403047-32010-1-git-send-email-morten.rasmussen@arm.com>

With the framework for runnable tracking now fully in place, per-entity
usage tracking is a simple and low-overhead addition.

This is a rebased and significantly cut down version of a patch
originally authored by Paul Turner <pjt@google.com>.
cc: Paul Turner <pjt@google.com>
cc: Ben Segall <bsegall@google.com>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 include/linux/sched.h |    1 +
 kernel/sched/debug.c  |    1 +
 kernel/sched/fair.c   |   16 +++++++++++++---
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 18f5262..0bcd8a7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1080,6 +1080,7 @@ struct sched_avg {
 	u64 last_runnable_update;
 	s64 decay_count;
 	unsigned long load_avg_contrib;
+	u32 usage_avg_sum;
 };
 
 #ifdef CONFIG_SCHEDSTATS
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index c7fe1ea0..ed5a9ce 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -95,6 +95,7 @@ static void print_cfs_group_stats(struct seq_file *m, int cpu, struct task_group
 #ifdef CONFIG_SMP
 	P(se->avg.runnable_avg_sum);
 	P(se->avg.runnable_avg_period);
+	P(se->avg.usage_avg_sum);
 	P(se->avg.load_avg_contrib);
 	P(se->avg.decay_count);
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 52abb3e..d8a8c83 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2299,7 +2299,8 @@ unsigned long arch_scale_load_capacity(int cpu);
  */
 static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 							struct sched_avg *sa,
-							int runnable)
+							int runnable,
+							int running)
 {
 	u64 delta, periods;
 	u32 runnable_contrib;
@@ -2341,6 +2342,8 @@ static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 		if (runnable)
 			sa->runnable_avg_sum += (delta_w * scale_cap)
 						>> SCHED_CAPACITY_SHIFT;
+		if (running)
+			sa->usage_avg_sum += delta_w;
 		sa->runnable_avg_period += delta_w;
 
 		delta -= delta_w;
@@ -2353,6 +2356,7 @@ static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 						  periods + 1);
 		sa->runnable_avg_period = decay_load(sa->runnable_avg_period,
 						     periods + 1);
+		sa->usage_avg_sum = decay_load(sa->usage_avg_sum, periods + 1);
 
 		/* Efficiently calculate \Sum (1..n_period) 1024*y^i */
 		runnable_contrib = __compute_runnable_contrib(periods);
@@ -2360,6 +2364,8 @@ static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 		if (runnable)
 			sa->runnable_avg_sum += (runnable_contrib * scale_cap)
 						>> SCHED_CAPACITY_SHIFT;
+		if (running)
+			sa->usage_avg_sum += runnable_contrib;
 		sa->runnable_avg_period += runnable_contrib;
 	}
 
@@ -2367,6 +2373,8 @@ static __always_inline int __update_entity_runnable_avg(u64 now, int cpu,
 	if (runnable)
 		sa->runnable_avg_sum += (delta * scale_cap)
 					>> SCHED_CAPACITY_SHIFT;
+	if (running)
+		sa->usage_avg_sum += delta;
 	sa->runnable_avg_period += delta;
 
 	return decayed;
@@ -2473,7 +2481,7 @@ static inline void __update_group_entity_contrib(struct sched_entity *se)
 static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
 {
 	__update_entity_runnable_avg(rq_clock_task(rq), rq->cpu, &rq->avg,
-				     runnable);
+				     runnable, runnable);
 	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
 }
 #else /* CONFIG_FAIR_GROUP_SCHED */
@@ -2539,7 +2547,8 @@ static inline void update_entity_load_avg(struct sched_entity *se,
 	else
 		now = cfs_rq_clock_task(group_cfs_rq(se));
 
-	if (!__update_entity_runnable_avg(now, cpu, &se->avg, se->on_rq))
+	if (!__update_entity_runnable_avg(now, cpu, &se->avg, se->on_rq,
+					  cfs_rq->curr == se))
 		return;
 
 	contrib_delta = __update_entity_load_avg_contrib(se);
@@ -2980,6 +2989,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		 */
 		update_stats_wait_end(cfs_rq, se);
 		__dequeue_entity(cfs_rq, se);
+		update_entity_load_avg(se, 1);
 	}
 
 	update_stats_curr_start(cfs_rq, se);