From patchwork Tue Nov 8 08:26:07 2016
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 81249
From: Vincent Guittot
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	dietmar.eggemann@arm.com
Cc: yuyang.du@intel.com, Morten.Rasmussen@arm.com, pjt@google.com,
	bsegall@google.com, kernellwp@gmail.com, Vincent Guittot
Subject: [PATCH 1/6 v6] sched: factorize attach/detach entity
Date: Tue, 8 Nov 2016 09:26:07 +0100
Message-Id: <1478593572-26671-2-git-send-email-vincent.guittot@linaro.org>
In-Reply-To: <1478593572-26671-1-git-send-email-vincent.guittot@linaro.org>
References: <1478593572-26671-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Factorize post_init_entity_util_avg() and part of attach_task_cfs_rq()
into one function, attach_entity_cfs_rq().
Create a symmetric detach_entity_cfs_rq() function.

Signed-off-by: Vincent Guittot
---
 kernel/sched/fair.c | 54 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 32 insertions(+), 22 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c242944..6dd9ea9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -708,9 +708,7 @@ void init_entity_runnable_average(struct sched_entity *se)
 }
 
 static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq);
-static int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq);
-static void update_tg_load_avg(struct cfs_rq *cfs_rq, int force);
-static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se);
+static void attach_entity_cfs_rq(struct sched_entity *se);
 
 /*
  * With new tasks being created, their initial util_avgs are extrapolated
@@ -742,7 +740,6 @@ void post_init_entity_util_avg(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
 	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
-	u64 now = cfs_rq_clock_task(cfs_rq);
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
@@ -770,14 +767,12 @@ void post_init_entity_util_avg(struct sched_entity *se)
 			 * such that the next switched_to_fair() has the
 			 * expected state.
 			 */
-			se->avg.last_update_time = now;
+			se->avg.last_update_time = cfs_rq_clock_task(cfs_rq);
 			return;
 		}
 	}
 
-	update_cfs_rq_load_avg(now, cfs_rq, false);
-	attach_entity_load_avg(cfs_rq, se);
-	update_tg_load_avg(cfs_rq, false);
+	attach_entity_cfs_rq(se);
 }
 
 #else /* !CONFIG_SMP */
@@ -8687,30 +8682,19 @@ static inline bool vruntime_normalized(struct task_struct *p)
 	return false;
 }
 
-static void detach_task_cfs_rq(struct task_struct *p)
+static void detach_entity_cfs_rq(struct sched_entity *se)
 {
-	struct sched_entity *se = &p->se;
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	u64 now = cfs_rq_clock_task(cfs_rq);
 
-	if (!vruntime_normalized(p)) {
-		/*
-		 * Fix up our vruntime so that the current sleep doesn't
-		 * cause 'unlimited' sleep bonus.
-		 */
-		place_entity(cfs_rq, se, 0);
-		se->vruntime -= cfs_rq->min_vruntime;
-	}
-
 	/* Catch up with the cfs_rq and remove our load when we leave */
 	update_cfs_rq_load_avg(now, cfs_rq, false);
 	detach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
 }
 
-static void attach_task_cfs_rq(struct task_struct *p)
+static void attach_entity_cfs_rq(struct sched_entity *se)
 {
-	struct sched_entity *se = &p->se;
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	u64 now = cfs_rq_clock_task(cfs_rq);
 
@@ -8722,10 +8706,36 @@ static void attach_task_cfs_rq(struct task_struct *p)
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
 #endif
 
-	/* Synchronize task with its cfs_rq */
+	/* Synchronize entity with its cfs_rq */
 	update_cfs_rq_load_avg(now, cfs_rq, false);
 	attach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
+}
+
+static void detach_task_cfs_rq(struct task_struct *p)
+{
+	struct sched_entity *se = &p->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+	u64 now = cfs_rq_clock_task(cfs_rq);
+
+	if (!vruntime_normalized(p)) {
+		/*
+		 * Fix up our vruntime so that the current sleep doesn't
+		 * cause 'unlimited' sleep bonus.
+		 */
+		place_entity(cfs_rq, se, 0);
+		se->vruntime -= cfs_rq->min_vruntime;
+	}
+
+	detach_entity_cfs_rq(se);
+}
+
+static void attach_task_cfs_rq(struct task_struct *p)
+{
+	struct sched_entity *se = &p->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+	attach_entity_cfs_rq(se);
 
 	if (!vruntime_normalized(p))
 		se->vruntime += cfs_rq->min_vruntime;