From patchwork Thu Jul 3 16:25:57 2014
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 33034
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	peterz@infradead.org, mingo@kernel.org
Cc: rjw@rjwysocki.net, vincent.guittot@linaro.org,
	daniel.lezcano@linaro.org, preeti@linux.vnet.ibm.com,
	Dietmar.Eggemann@arm.com, pjt@google.com
Subject: [RFCv2 PATCH 10/23] sched: Account for blocked unweighted load waking back up
Date: Thu, 3 Jul 2014 17:25:57 +0100
Message-Id: <1404404770-323-11-git-send-email-morten.rasmussen@arm.com>
In-Reply-To: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>
References: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>

From: Dietmar Eggemann <Dietmar.Eggemann@arm.com>

Migrate the unweighted blocked load of an entity away from its old run
queue when the entity is migrated to another cpu during wake-up. This
patch is the unweighted counterpart of "sched: Account for blocked load
waking back up" (commit aff3e4988444).

Note: The unweighted blocked load is not used for energy-aware
scheduling yet.

Signed-off-by: Dietmar Eggemann <Dietmar.Eggemann@arm.com>
---
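[Editor's note, below the cut line: for readers unfamiliar with the
removed-load mechanism this patch extends -- a task that wakes up on a
different cpu is not holding its old runqueue's lock, so it posts its
load contribution into an atomic "mailbox" on the old cfs_rq, which the
owning cpu drains at its next blocked-load update. The toy userspace
sketch below illustrates that handshake with C11 atomics; all names
(toy_cfs_rq, toy_migrate_away, toy_drain_removed) and values are
hypothetical and stand in for the kernel code, they are not it.]

/*
 * Toy model of the removed-load "mailbox" (hypothetical names, plain
 * C11 userspace code -- not the kernel implementation).
 */
#include <stdatomic.h>
#include <stdio.h>

struct toy_cfs_rq {
	/* Totals, only touched by the cpu that owns the runqueue. */
	long blocked_load;		/* weighted blocked load */
	long uw_blocked_load;		/* unweighted blocked load */
	/* Mailboxes, posted to by remote cpus at migration time. */
	atomic_long removed_load;
	atomic_long uw_removed_load;
};

/* Wake-up migration: post both contributions to the old runqueue. */
static void toy_migrate_away(struct toy_cfs_rq *old_rq,
			     long contrib, long uw_contrib)
{
	atomic_fetch_add(&old_rq->removed_load, contrib);
	atomic_fetch_add(&old_rq->uw_removed_load, uw_contrib);
}

/* Owner's next blocked-load update: drain both mailboxes to zero. */
static void toy_drain_removed(struct toy_cfs_rq *rq)
{
	rq->blocked_load -= atomic_exchange(&rq->removed_load, 0);
	rq->uw_blocked_load -= atomic_exchange(&rq->uw_removed_load, 0);
}

int main(void)
{
	/* Arbitrary example values; the mailboxes start drained (zero). */
	struct toy_cfs_rq rq = { .blocked_load = 1024,
				 .uw_blocked_load = 2048 };

	toy_migrate_away(&rq, 512, 1024);	/* task wakes on another cpu */
	toy_drain_removed(&rq);			/* owner's next update */

	printf("blocked=%ld uw_blocked=%ld\n",
	       rq.blocked_load, rq.uw_blocked_load);
	return 0;
}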
 kernel/sched/fair.c  | 9 +++++++--
 kernel/sched/sched.h | 2 +-
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c6207f7..93c8dbe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2545,9 +2545,11 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
 		return;
 
 	if (atomic_long_read(&cfs_rq->removed_load)) {
-		unsigned long removed_load;
+		unsigned long removed_load, uw_removed_load;
 		removed_load = atomic_long_xchg(&cfs_rq->removed_load, 0);
-		subtract_blocked_load_contrib(cfs_rq, removed_load, 0);
+		uw_removed_load = atomic_long_xchg(&cfs_rq->uw_removed_load, 0);
+		subtract_blocked_load_contrib(cfs_rq, removed_load,
+					      uw_removed_load);
 	}
 
 	if (decays) {
@@ -4606,6 +4608,8 @@ migrate_task_rq_fair(struct task_struct *p, int next_cpu)
 		se->avg.decay_count = -__synchronize_entity_decay(se);
 		atomic_long_add(se->avg.load_avg_contrib,
 						&cfs_rq->removed_load);
+		atomic_long_add(se->avg.uw_load_avg_contrib,
+						&cfs_rq->uw_removed_load);
 	}
 
 	/* We have migrated, no longer consider this task hot */
@@ -7553,6 +7557,7 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 #ifdef CONFIG_SMP
 	atomic64_set(&cfs_rq->decay_counter, 1);
 	atomic_long_set(&cfs_rq->removed_load, 0);
+	atomic_long_set(&cfs_rq->uw_removed_load, 0);
 #endif
 }
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3f1eeb3..d7d2ee2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -340,7 +340,7 @@ struct cfs_rq {
 	unsigned long uw_runnable_load_avg, uw_blocked_load_avg;
 	atomic64_t decay_counter;
 	u64 last_decay;
-	atomic_long_t removed_load;
+	atomic_long_t removed_load, uw_removed_load;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/* Required to track per-cpu representation of a task_group */
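[Editor's note: the exchange-to-zero drain is what keeps the mailbox
safe without taking the remote runqueue's lock -- atomic_long_xchg()
hands the owner exactly what was posted before the drain, and a
contribution added concurrently simply stays in removed_load (and now
uw_removed_load) until the next update. Keeping the two mailboxes
symmetric means the unweighted blocked load is posted, drained, and
subtracted on the same schedule as the weighted load it mirrors.]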