From patchwork Thu Jul 3 16:25:56 2014
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 33043
From: Morten Rasmussen
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	peterz@infradead.org, mingo@kernel.org
Cc: rjw@rjwysocki.net, vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
	preeti@linux.vnet.ibm.com, Dietmar.Eggemann@arm.com, pjt@google.com
Subject: [RFCv2 PATCH 09/23] sched: Maintain the unweighted load contribution
	of blocked entities
Date: Thu, 3 Jul 2014 17:25:56 +0100
Message-Id: <1404404770-323-10-git-send-email-morten.rasmussen@arm.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>
References: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>

From: Dietmar Eggemann

The unweighted blocked load on a run queue is maintained alongside the
existing (weighted) blocked load. This patch is the unweighted counterpart
of "sched: Maintain the load contribution of blocked entities" (commit id
9ee474f55664).

Note: The unweighted blocked load is not used for energy aware scheduling
yet.
Signed-off-by: Dietmar Eggemann
---
 kernel/sched/debug.c |    2 ++
 kernel/sched/fair.c  |   22 +++++++++++++++++-----
 kernel/sched/sched.h |    2 +-
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 78d4151..ffa56a8 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -220,6 +220,8 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 			cfs_rq->uw_runnable_load_avg);
 	SEQ_printf(m, "  .%-30s: %ld\n", "blocked_load_avg",
 			cfs_rq->blocked_load_avg);
+	SEQ_printf(m, "  .%-30s: %ld\n", "uw_blocked_load_avg",
+			cfs_rq->uw_blocked_load_avg);
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	SEQ_printf(m, "  .%-30s: %ld\n", "tg_load_contrib",
 			cfs_rq->tg_load_contrib);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1ee47b3..c6207f7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2481,12 +2481,18 @@ static long __update_entity_load_avg_contrib(struct sched_entity *se,
 }
 
 static inline void subtract_blocked_load_contrib(struct cfs_rq *cfs_rq,
-						long load_contrib)
+						long load_contrib,
+						long uw_load_contrib)
 {
 	if (likely(load_contrib < cfs_rq->blocked_load_avg))
 		cfs_rq->blocked_load_avg -= load_contrib;
 	else
 		cfs_rq->blocked_load_avg = 0;
+
+	if (likely(uw_load_contrib < cfs_rq->uw_blocked_load_avg))
+		cfs_rq->uw_blocked_load_avg -= uw_load_contrib;
+	else
+		cfs_rq->uw_blocked_load_avg = 0;
 }
 
 static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq);
@@ -2521,7 +2527,8 @@ static inline void update_entity_load_avg(struct sched_entity *se,
 		cfs_rq->uw_runnable_load_avg += uw_contrib_delta;
 	} else
-		subtract_blocked_load_contrib(cfs_rq, -contrib_delta);
+		subtract_blocked_load_contrib(cfs_rq, -contrib_delta,
+						-uw_contrib_delta);
 }
 
 /*
@@ -2540,12 +2547,14 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 	if (atomic_long_read(&cfs_rq->removed_load)) {
 		unsigned long removed_load;
 		removed_load = atomic_long_xchg(&cfs_rq->removed_load, 0);
-		subtract_blocked_load_contrib(cfs_rq, removed_load);
+		subtract_blocked_load_contrib(cfs_rq, removed_load, 0);
 	}
 
 	if (decays) {
 		cfs_rq->blocked_load_avg = decay_load(cfs_rq->blocked_load_avg,
 						decays);
+		cfs_rq->uw_blocked_load_avg =
+			decay_load(cfs_rq->uw_blocked_load_avg, decays);
 		atomic64_add(decays, &cfs_rq->decay_counter);
 		cfs_rq->last_decay = now;
 	}
@@ -2591,7 +2600,8 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 
 	/* migrated tasks did not contribute to our blocked load */
 	if (wakeup) {
-		subtract_blocked_load_contrib(cfs_rq, se->avg.load_avg_contrib);
+		subtract_blocked_load_contrib(cfs_rq, se->avg.load_avg_contrib,
+						se->avg.uw_load_avg_contrib);
 		update_entity_load_avg(se, 0);
 	}
 
@@ -2620,6 +2630,7 @@ static inline void dequeue_entity_load_avg(struct cfs_rq *cfs_rq,
 	if (sleep) {
 		cfs_rq->blocked_load_avg += se->avg.load_avg_contrib;
+		cfs_rq->uw_blocked_load_avg += se->avg.uw_load_avg_contrib;
 		se->avg.decay_count = atomic64_read(&cfs_rq->decay_counter);
 	} /* migrations, e.g. sleep=0 leave decay_count == 0 */
 }
@@ -7481,7 +7492,8 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
 	 */
 	if (se->avg.decay_count) {
 		__synchronize_entity_decay(se);
-		subtract_blocked_load_contrib(cfs_rq, se->avg.load_avg_contrib);
+		subtract_blocked_load_contrib(cfs_rq, se->avg.load_avg_contrib,
+						se->avg.uw_load_avg_contrib);
 	}
 #endif
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 46cb8bd..3f1eeb3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -337,7 +337,7 @@ struct cfs_rq {
 	 * the FAIR_GROUP_SCHED case).
 	 */
 	unsigned long runnable_load_avg, blocked_load_avg;
-	unsigned long uw_runnable_load_avg;
+	unsigned long uw_runnable_load_avg, uw_blocked_load_avg;
 	atomic64_t decay_counter;
 	u64 last_decay;
 	atomic_long_t removed_load;
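For readers less familiar with the per-entity load-tracking code, the
following standalone sketch (not kernel code; the toy_* names, the struct
layout and the simplified halving decay are stand-ins invented for this
note) illustrates the rule the patch follows: every operation on the
weighted blocked load of a cfs_rq, clamp-at-zero subtraction as well as
periodic decay, is mirrored on the unweighted sum.

/*
 * Standalone illustration only -- not kernel code. It merely mirrors the
 * shape of subtract_blocked_load_contrib() and update_cfs_rq_blocked_load()
 * in the diff above.
 */
#include <stdio.h>

struct toy_cfs_rq {
	unsigned long blocked_load_avg;		/* weighted, pre-existing */
	unsigned long uw_blocked_load_avg;	/* unweighted, added by this patch */
};

/* Clamp-at-zero subtraction, applied to both sums in lockstep. */
static void toy_subtract_blocked(struct toy_cfs_rq *cfs_rq,
				 unsigned long load, unsigned long uw_load)
{
	cfs_rq->blocked_load_avg =
		load < cfs_rq->blocked_load_avg ?
			cfs_rq->blocked_load_avg - load : 0;
	cfs_rq->uw_blocked_load_avg =
		uw_load < cfs_rq->uw_blocked_load_avg ?
			cfs_rq->uw_blocked_load_avg - uw_load : 0;
}

/*
 * Periodic decay, applied to both sums for the same number of elapsed
 * periods. The kernel uses decay_load() (multiplication by y^n, with
 * y^32 = 0.5); a plain halving per period stands in for it here.
 */
static void toy_decay_blocked(struct toy_cfs_rq *cfs_rq, unsigned int decays)
{
	cfs_rq->blocked_load_avg >>= decays;
	cfs_rq->uw_blocked_load_avg >>= decays;
}

int main(void)
{
	struct toy_cfs_rq rq = { 0, 0 };

	/*
	 * A low-weight (niced) task goes to sleep: its weighted contribution
	 * is small, while its unweighted contribution is roughly the
	 * weight-independent (NICE_0-style) one.
	 */
	rq.blocked_load_avg += 107;	/* cf. se->avg.load_avg_contrib */
	rq.uw_blocked_load_avg += 1024;	/* cf. se->avg.uw_load_avg_contrib */

	toy_decay_blocked(&rq, 1);	/* one decay period elapses */
	printf("after decay:    weighted=%lu unweighted=%lu\n",
	       rq.blocked_load_avg, rq.uw_blocked_load_avg);

	/*
	 * The task leaves the blocked set again; the clamp keeps the already
	 * decayed sums from underflowing.
	 */
	toy_subtract_blocked(&rq, 107, 1024);
	printf("after subtract: weighted=%lu unweighted=%lu\n",
	       rq.blocked_load_avg, rq.uw_blocked_load_avg);

	return 0;
}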