From patchwork Thu Nov 16 14:21:52 2017
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 119044
From: Vincent Guittot
To: peterz@infradead.org, linux-kernel@vger.kernel.org
Cc: Vincent Guittot, Yuyang Du, Ingo Molnar, Mike Galbraith, Chris Mason,
    Linus Torvalds, Dietmar Eggemann, Josef Bacik, Ben Segall, Paul Turner,
    Tejun Heo, Morten Rasmussen
Subject: [PATCH v4] sched: Update runnable propagation rule
Date: Thu, 16 Nov 2017 15:21:52 +0100
Message-Id: <1510842112-21028-1-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1510841397-29119-1-git-send-email-vincent.guittot@linaro.org>
References: <1510841397-29119-1-git-send-email-vincent.guittot@linaro.org>

Unlike running, the runnable part can't be directly propagated through the
hierarchy when we migrate a task. The main reason is that runnable time can
be shared with other sched_entities that stay on the rq; this runnable time
then also remains on the prev cfs_rq and must not be removed.
Instead, we can estimate what the new runnable of the prev cfs_rq should be
and check that this estimate stays within a plausible range. The
prop_runnable_sum is a good estimate when adding runnable_sum, but it fails
most often when we remove it. In that case we can use the formula below:

  gcfs_rq's runnable_sum = gcfs_rq->avg.load_sum / gcfs_rq->load.weight

which assumes that tasks are equally runnable, which is not true but is easy
to compute.

Besides these estimates, we have several simple rules that help us to filter
out wrong ones:

 - ge->avg.runnable_sum <= LOAD_AVG_MAX
 - ge->avg.runnable_sum >= ge->avg.running_sum (ge->avg.util_sum << LOAD_AVG_MAX)
 - ge->avg.runnable_sum can't increase when we detach a task

Cc: Yuyang Du
Cc: Ingo Molnar
Cc: Mike Galbraith
Cc: Chris Mason
Cc: Linus Torvalds
Cc: Dietmar Eggemann
Cc: Josef Bacik
Cc: Ben Segall
Cc: Paul Turner
Cc: Tejun Heo
Cc: Morten Rasmussen
Signed-off-by: Vincent Guittot
Signed-off-by: Peter Zijlstra (Intel)
Link: http://lkml.kernel.org/r/20171019150442.GA25025@linaro.org
---
Hi Peter,

Please forget v3, which doesn't compile. I have rebased the patch, updated the
2 comments that were unclear, and fixed the computation of running_sum by
using arch_scale_cpu_capacity() instead of >> SCHED_CAPACITY_SHIFT.

For illustration, a standalone toy sketch of the propagation rules above is
appended after the diff.

 kernel/sched/fair.c | 102 +++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 73 insertions(+), 29 deletions(-)

-- 
2.7.4

Acked-by: Peter Zijlstra (Intel)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0989676..7d4dd7e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3413,9 +3413,9 @@ void set_task_rq_fair(struct sched_entity *se,
  * _IFF_ we look at the pure running and runnable sums. Because they
  * represent the very same entity, just at different points in the hierarchy.
  *
- *
- * Per the above update_tg_cfs_util() is trivial (and still 'wrong') and
- * simply copies the running sum over.
+ * Per the above update_tg_cfs_util() is trivial and simply copies the running
+ * sum over (but still wrong, because the group entity and group rq do not have
+ * their PELT windows aligned).
  *
  * However, update_tg_cfs_runnable() is more complex. So we have:
  *
@@ -3424,11 +3424,11 @@ void set_task_rq_fair(struct sched_entity *se,
  * And since, like util, the runnable part should be directly transferable,
  * the following would _appear_ to be the straight forward approach:
  *
- *   grq->avg.load_avg = grq->load.weight * grq->avg.running_avg	(3)
+ *   grq->avg.load_avg = grq->load.weight * grq->avg.runnable_avg	(3)
  *
  * And per (1) we have:
  *
- *   ge->avg.running_avg == grq->avg.running_avg
+ *   ge->avg.runnable_avg == grq->avg.runnable_avg
  *
  * Which gives:
  *
@@ -3447,27 +3447,28 @@ void set_task_rq_fair(struct sched_entity *se,
  * to (shortly) return to us. This only works by keeping the weights as
  * integral part of the sum. We therefore cannot decompose as per (3).
  *
- * OK, so what then?
+ * Another reason this doesn't work is that runnable isn't a 0-sum entity.
+ * Imagine a rq with 2 tasks that each are runnable 2/3 of the time. Then the
+ * rq itself is runnable anywhere between 2/3 and 1 depending on how the
+ * runnable sections of these tasks overlap (or not). If they were to perfectly
+ * align, the rq as a whole would be runnable 2/3 of the time. If however we
+ * always have at least 1 runnable task, the rq as a whole is always runnable.
  *
+ * So we'll have to approximate.. :/
  *
- * Another way to look at things is:
+ * Given the constraint:
  *
- *   grq->avg.load_avg = \Sum se->avg.load_avg
+ *   ge->avg.running_sum <= ge->avg.runnable_sum <= LOAD_AVG_MAX
  *
- * Therefore, per (2):
+ * We can construct a rule that adds runnable to a rq by assuming minimal
+ * overlap.
  *
- *   grq->avg.load_avg = \Sum se->load.weight * se->avg.runnable_avg
+ * On removal, we'll assume each task is equally runnable; which yields:
  *
- * And the very thing we're propagating is a change in that sum (someone
- * joined/left). So we can easily know the runnable change, which would be, per
- * (2) the already tracked se->load_avg divided by the corresponding
- * se->weight.
+ *   grq->avg.runnable_sum = grq->avg.load_sum / grq->load.weight
  *
- * Basically (4) but in differential form:
+ * XXX: only do this for the part of runnable > running ?
  *
- *   d(runnable_avg) += se->avg.load_avg / se->load.weight
- *                                                              (5)
- *   ge->avg.load_avg += ge->load.weight * d(runnable_avg)
  */
 static inline void
@@ -3479,6 +3480,14 @@ update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
 	if (!delta)
 		return;
 
+	/*
+	 * The relation between sum and avg is:
+	 *
+	 *   LOAD_AVG_MAX - 1024 + sa->period_contrib
+	 *
+	 * however, the PELT windows are not aligned between grq and gse.
+	 */
+
 	/* Set new sched_entity's utilization */
 	se->avg.util_avg = gcfs_rq->avg.util_avg;
 	se->avg.util_sum = se->avg.util_avg * LOAD_AVG_MAX;
@@ -3491,33 +3500,68 @@ update_tg_cfs_util(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
 static inline void
 update_tg_cfs_runnable(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
 {
-	long runnable_sum = gcfs_rq->prop_runnable_sum;
-	long runnable_load_avg, load_avg;
-	s64 runnable_load_sum, load_sum;
+	long delta_avg, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;
+	unsigned long runnable_load_avg, load_avg;
+	u64 runnable_load_sum, load_sum = 0;
+	s64 delta_sum;
 
 	if (!runnable_sum)
 		return;
 
 	gcfs_rq->prop_runnable_sum = 0;
 
+	if (runnable_sum >= 0) {
+		/*
+		 * Add runnable; clip at LOAD_AVG_MAX. Reflects that until
+		 * the CPU is saturated running == runnable.
+		 */
+		runnable_sum += se->avg.load_sum;
+		runnable_sum = min(runnable_sum, (long)LOAD_AVG_MAX);
+	} else {
+		/*
+		 * Estimate the new unweighted runnable_sum of the gcfs_rq by
+		 * assuming all tasks are equally runnable.
+		 */
+		if (scale_load_down(gcfs_rq->load.weight)) {
+			load_sum = div_s64(gcfs_rq->avg.load_sum,
+					   scale_load_down(gcfs_rq->load.weight));
+		}
+
+		/* But make sure to not inflate se's runnable */
+		runnable_sum = min(se->avg.load_sum, load_sum);
+	}
+
+	/*
+	 * runnable_sum can't be lower than running_sum.
+	 * As running_sum is scaled with CPU capacity whereas runnable_sum is
+	 * not, we rescale running_sum first.
+	 */
+	running_sum = se->avg.util_sum /
+		arch_scale_cpu_capacity(NULL, cpu_of(rq_of(cfs_rq)));
+	runnable_sum = max(runnable_sum, running_sum);
+
 	load_sum = (s64)se_weight(se) * runnable_sum;
 	load_avg = div_s64(load_sum, LOAD_AVG_MAX);
 
-	add_positive(&se->avg.load_sum, runnable_sum);
-	add_positive(&se->avg.load_avg, load_avg);
+	delta_sum = load_sum - (s64)se_weight(se) * se->avg.load_sum;
+	delta_avg = load_avg - se->avg.load_avg;
 
-	add_positive(&cfs_rq->avg.load_avg, load_avg);
-	add_positive(&cfs_rq->avg.load_sum, load_sum);
+	se->avg.load_sum = runnable_sum;
+	se->avg.load_avg = load_avg;
+	add_positive(&cfs_rq->avg.load_avg, delta_avg);
+	add_positive(&cfs_rq->avg.load_sum, delta_sum);
 
 	runnable_load_sum = (s64)se_runnable(se) * runnable_sum;
 	runnable_load_avg = div_s64(runnable_load_sum, LOAD_AVG_MAX);
+	delta_sum = runnable_load_sum - se_weight(se) * se->avg.runnable_load_sum;
+	delta_avg = runnable_load_avg - se->avg.runnable_load_avg;
 
-	add_positive(&se->avg.runnable_load_sum, runnable_sum);
-	add_positive(&se->avg.runnable_load_avg, runnable_load_avg);
+	se->avg.runnable_load_sum = runnable_sum;
+	se->avg.runnable_load_avg = runnable_load_avg;
 
 	if (se->on_rq) {
-		add_positive(&cfs_rq->avg.runnable_load_avg, runnable_load_avg);
-		add_positive(&cfs_rq->avg.runnable_load_sum, runnable_load_sum);
+		add_positive(&cfs_rq->avg.runnable_load_avg, delta_avg);
+		add_positive(&cfs_rq->avg.runnable_load_sum, delta_sum);
 	}
 }
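
For anyone who wants to play with the propagation rules outside the kernel,
here is the standalone toy sketch mentioned above (not part of the patch). It
models only the three steps described in the changelog: add-and-clip on
attach, the equal-runnability estimate on detach, and the running_sum floor.
struct toy_avg, propagate_runnable() and every number below are made up for
illustration; the real code operates on struct sched_avg in
kernel/sched/fair.c and uses the kernel's own scaling helpers.

#include <stdio.h>

#define LOAD_AVG_MAX	47742	/* maximum PELT sum in kernels of that era */

struct toy_avg {
	long load_sum;		/* weighted runnable sum of the group rq */
	long load_weight;	/* total weight of the entities still queued */
	long se_load_sum;	/* current unweighted runnable sum of the gse */
	long running_sum;	/* running (util) sum of the gse, capacity scaled out */
};

/* Estimate the group entity's new runnable_sum after a change of 'delta'. */
static long propagate_runnable(const struct toy_avg *a, long delta)
{
	long runnable_sum;

	if (delta >= 0) {
		/* Attach: add and clip; until the CPU saturates, running == runnable. */
		runnable_sum = a->se_load_sum + delta;
		if (runnable_sum > LOAD_AVG_MAX)
			runnable_sum = LOAD_AVG_MAX;
	} else {
		/*
		 * Detach: assume the remaining tasks are equally runnable,
		 * i.e. estimate = load_sum / load.weight, but never inflate
		 * the group entity's current runnable.
		 */
		long estimate = a->load_weight ? a->load_sum / a->load_weight : 0;

		runnable_sum = a->se_load_sum < estimate ? a->se_load_sum : estimate;
	}

	/* runnable can never be lower than running. */
	if (runnable_sum < a->running_sum)
		runnable_sum = a->running_sum;

	return runnable_sum;
}

int main(void)
{
	/* Hypothetical numbers: the group rq just lost one task's contribution. */
	struct toy_avg a = {
		.load_sum	= 20480000,	/* 20000 * weight of 1024 */
		.load_weight	= 1024,
		.se_load_sum	= 30000,
		.running_sum	= 15000,
	};

	printf("new runnable_sum = %ld\n", propagate_runnable(&a, -10000));
	return 0;
}

Built with a plain "gcc toy.c && ./a.out", it prints "new runnable_sum =
20000": the equal-runnability estimate (20480000 / 1024) is lower than the
group entity's previous sum (30000) and still above the running floor
(15000), so that estimate is the value that gets propagated.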