From patchwork Mon Sep 26 12:19:49 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vincent Guittot <vincent.guittot@linaro.org>
X-Patchwork-Id: 77024
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	dietmar.eggemann@arm.com, kernellwp@gmail.com
Cc: yuyang.du@intel.com, Morten.Rasmussen@arm.com,
	linaro-kernel@lists.linaro.org, pjt@google.com, bsegall@google.com,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH 3/7 v4] sched: factorize PELT update
Date: Mon, 26 Sep 2016 14:19:49 +0200
Message-Id: <1474892393-5095-4-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1474892393-5095-1-git-send-email-vincent.guittot@linaro.org>
References: <1474892393-5095-1-git-send-email-vincent.guittot@linaro.org>

Every time we modify the load/utilization of a sched_entity, we start by
syncing it with its cfs_rq. This update is currently done in different ways:
- when attaching/detaching a sched_entity, we update the cfs_rq and then
  sync the entity with the cfs_rq.
- when enqueueing/dequeuing the sched_entity, we update both the
  sched_entity and cfs_rq metrics to now.

Use update_load_avg() every time we have to update and sync the cfs_rq and
the sched_entity before changing the state of a sched_entity.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 75 +++++++++++++++++------------------------------------
 1 file changed, 24 insertions(+), 51 deletions(-)

-- 
1.9.1

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3d29492..625e7f7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3088,8 +3088,14 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 	return decayed || removed_load;
 }
 
+/*
+ * Optional action to be done while updating the load average
+ */
+#define UPDATE_TG	0x1
+#define SKIP_AGE_LOAD	0x2
+
 /* Update task and its cfs_rq load average */
-static inline void update_load_avg(struct sched_entity *se, int update_tg)
+static inline void update_load_avg(struct sched_entity *se, int flags)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	u64 now = cfs_rq_clock_task(cfs_rq);
@@ -3100,11 +3106,12 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
 	 * Track task load average for carrying it to new CPU after migrated, and
 	 * track group sched_entity load average for task_h_load calc in migration
 	 */
-	__update_load_avg(now, cpu, &se->avg,
+	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
+		__update_load_avg(now, cpu, &se->avg,
 			  se->on_rq * scale_load_down(se->load.weight),
 			  cfs_rq->curr == se, NULL);
 
-	if (update_cfs_rq_load_avg(now, cfs_rq, true) && update_tg)
+	if (update_cfs_rq_load_avg(now, cfs_rq, true) && (flags & UPDATE_TG))
 		update_tg_load_avg(cfs_rq, 0);
 }
 
@@ -3118,26 +3125,6 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
  */
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	if (!sched_feat(ATTACH_AGE_LOAD))
-		goto skip_aging;
-
-	/*
-	 * If we got migrated (either between CPUs or between cgroups) we'll
-	 * have aged the average right before clearing @last_update_time.
-	 *
-	 * Or we're fresh through post_init_entity_util_avg().
-	 */
-	if (se->avg.last_update_time) {
-		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-				  &se->avg, 0, 0, NULL);
-
-		/*
-		 * XXX: we could have just aged the entire load away if we've been
-		 * absent from the fair class for too long.
-		 */
-	}
-
-skip_aging:
 	se->avg.last_update_time = cfs_rq->avg.last_update_time;
 	cfs_rq->avg.load_avg += se->avg.load_avg;
 	cfs_rq->avg.load_sum += se->avg.load_sum;
@@ -3157,9 +3144,6 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
  */
 static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
-			  &se->avg, se->on_rq * scale_load_down(se->load.weight),
-			  cfs_rq->curr == se, NULL);
 
 	sub_positive(&cfs_rq->avg.load_avg, se->avg.load_avg);
 	sub_positive(&cfs_rq->avg.load_sum, se->avg.load_sum);
@@ -3174,34 +3158,20 @@ static inline void
 enqueue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
 	struct sched_avg *sa = &se->avg;
-	u64 now = cfs_rq_clock_task(cfs_rq);
-	int migrated, decayed;
-
-	migrated = !sa->last_update_time;
-	if (!migrated) {
-		__update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
-			se->on_rq * scale_load_down(se->load.weight),
-			cfs_rq->curr == se, NULL);
-	}
-
-	decayed = update_cfs_rq_load_avg(now, cfs_rq, !migrated);
 
 	cfs_rq->runnable_load_avg += sa->load_avg;
 	cfs_rq->runnable_load_sum += sa->load_sum;
 
-	if (migrated)
+	if (!sa->last_update_time) {
 		attach_entity_load_avg(cfs_rq, se);
-
-	if (decayed || migrated)
 		update_tg_load_avg(cfs_rq, 0);
+	}
 }
 
 /* Remove the runnable load generated by se from cfs_rq's runnable load average */
 static inline void
 dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	update_load_avg(se, 1);
-
 	cfs_rq->runnable_load_avg =
 		max_t(long, cfs_rq->runnable_load_avg - se->avg.load_avg, 0);
 	cfs_rq->runnable_load_sum =
@@ -3275,7 +3245,10 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
 	return 0;
 }
 
-static inline void update_load_avg(struct sched_entity *se, int not_used)
+#define UPDATE_TG	0x0
+#define SKIP_AGE_LOAD	0x0
+
+static inline void update_load_avg(struct sched_entity *se, int not_used1)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct rq *rq = rq_of(cfs_rq);
@@ -3423,6 +3396,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	if (renorm && !curr)
 		se->vruntime += cfs_rq->min_vruntime;
 
+	update_load_avg(se, UPDATE_TG);
 	enqueue_entity_load_avg(cfs_rq, se);
 	account_entity_enqueue(cfs_rq, se);
 	update_cfs_shares(cfs_rq);
@@ -3497,6 +3471,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	 * Update run-time statistics of the 'current'.
 	 */
 	update_curr(cfs_rq);
+	update_load_avg(se, UPDATE_TG);
 	dequeue_entity_load_avg(cfs_rq, se);
 
 	update_stats_dequeue(cfs_rq, se, flags);
@@ -3575,7 +3550,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		 */
 		update_stats_wait_end(cfs_rq, se);
 		__dequeue_entity(cfs_rq, se);
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 	}
 
 	update_stats_curr_start(cfs_rq, se);
@@ -3693,7 +3668,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
 	/*
 	 * Ensure that runnable average is periodically updated.
 	 */
-	update_load_avg(curr, 1);
+	update_load_avg(curr, UPDATE_TG);
 	update_cfs_shares(cfs_rq);
 
 #ifdef CONFIG_SCHED_HRTICK
@@ -4582,7 +4557,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 		update_cfs_shares(cfs_rq);
 	}
 
@@ -4641,7 +4616,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 		if (cfs_rq_throttled(cfs_rq))
 			break;
 
-		update_load_avg(se, 1);
+		update_load_avg(se, UPDATE_TG);
 		update_cfs_shares(cfs_rq);
 	}
 
@@ -8520,7 +8495,6 @@ static void detach_task_cfs_rq(struct task_struct *p)
 {
 	struct sched_entity *se = &p->se;
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	u64 now = cfs_rq_clock_task(cfs_rq);
 
 	if (!vruntime_normalized(p)) {
 		/*
@@ -8532,7 +8506,7 @@ static void detach_task_cfs_rq(struct task_struct *p)
 	}
 
 	/* Catch up with the cfs_rq and remove our load when we leave */
-	update_cfs_rq_load_avg(now, cfs_rq, false);
+	update_load_avg(se, 0);
 	detach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
 }
@@ -8540,7 +8514,6 @@ static void detach_task_cfs_rq(struct task_struct *p)
 static void attach_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	u64 now = cfs_rq_clock_task(cfs_rq);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	/*
@@ -8551,7 +8524,7 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 #endif
 
 	/* Synchronize task with its cfs_rq */
-	update_cfs_rq_load_avg(now, cfs_rq, false);
+	update_load_avg(se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
 }
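
Editor's note: the patch above funnels every load/utilization sync through
update_load_avg() driven by two flags. The standalone C sketch below only
models that calling convention; it is not kernel code, and everything in it
(toy_update_load_avg, toy_age, the toy_* structs, the halving "decay") is an
illustrative stand-in for the real PELT machinery in fair.c, shown under
those assumptions.

/*
 * Toy model of the flag-based factorization: one helper ages the entity
 * (unless told not to), brings its runqueue up to date, and optionally
 * propagates the change to the task group.
 */
#include <stdio.h>
#include <stdint.h>

#define UPDATE_TG	0x1	/* propagate the change to the task group */
#define SKIP_AGE_LOAD	0x2	/* do not age the entity before syncing */

struct toy_avg {
	uint64_t last_update_time;
	long load_avg;
};

struct toy_cfs_rq {
	struct toy_avg avg;
	long tg_load;		/* stand-in for the task-group contribution */
};

struct toy_entity {
	struct toy_avg avg;
	struct toy_cfs_rq *cfs_rq;
};

/* Toy stand-in for the real decay: halve the average once per elapsed tick. */
static void toy_age(struct toy_avg *sa, uint64_t now)
{
	for (uint64_t t = sa->last_update_time; t < now; t++)
		sa->load_avg /= 2;
	sa->last_update_time = now;
}

/* Single entry point mirroring the calling convention the patch introduces. */
static void toy_update_load_avg(struct toy_entity *se, uint64_t now, int flags)
{
	/* Age the entity unless it is brand new/migrated or the caller skips it. */
	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
		toy_age(&se->avg, now);

	/* The runqueue-level average is always brought up to date. */
	toy_age(&se->cfs_rq->avg, now);

	if (flags & UPDATE_TG)
		se->cfs_rq->tg_load = se->cfs_rq->avg.load_avg;
}

int main(void)
{
	struct toy_cfs_rq rq = { .avg = { .last_update_time = 1, .load_avg = 800 } };
	struct toy_entity se = { .avg = { .last_update_time = 1, .load_avg = 400 },
				 .cfs_rq = &rq };

	/* Enqueue/dequeue-style path: age both and tell the task group. */
	toy_update_load_avg(&se, 2, UPDATE_TG);

	/* Attach-style path with aging disabled: sync the cfs_rq only. */
	toy_update_load_avg(&se, 3, SKIP_AGE_LOAD);

	printf("se=%ld cfs_rq=%ld tg=%ld\n", se.avg.load_avg, rq.avg.load_avg, rq.tg_load);
	return 0;
}

The shape is the point: callers no longer open-code the aging of the entity
and of its cfs_rq before attach/detach or enqueue/dequeue; they pick flags
for the single helper instead.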