From patchwork Thu Aug 1 14:40:22 2019
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 170385
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux-kernel@vger.kernel.org, mingo@redhat.com, peterz@infradead.org
Cc: pauld@redhat.com, valentin.schneider@arm.com, srikar@linux.vnet.ibm.com,
    quentin.perret@arm.com, dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com,
    Vincent Guittot
Subject: [PATCH v2 6/8] sched/fair: use load instead of runnable load
Date: Thu, 1 Aug 2019 16:40:22 +0200
Message-Id: <1564670424-26023-7-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1564670424-26023-1-git-send-email-vincent.guittot@linaro.org>
References: <1564670424-26023-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Runnable load was introduced to take into account the case where blocked
load biases the load-balance decision, which was selecting an underutilized
group with a huge blocked load whereas other groups were overloaded.

The load is now used only when groups are overloaded. In this case, it is
worth being conservative and taking into account the sleeping tasks that
might wake up on the CPU.
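To make the distinction concrete, here is a toy userspace model. It is a
sketch, not kernel code: the cpu_stats struct, its field names and the
numbers are invented, and the real cfs_rq->avg.load_avg and
cfs_rq->avg.runnable_load_avg are decayed PELT averages rather than plain
counters.

#include <stdio.h>

/*
 * Toy model, NOT kernel code.  "runnable_load" stands in for the
 * contribution of tasks currently queued on the CPU; "blocked_load"
 * for the decaying contribution of tasks sleeping on it.
 */
struct cpu_stats {
	unsigned long runnable_load;	/* queued tasks only */
	unsigned long blocked_load;	/* sleeping tasks */
};

/* Spirit of cpu_runnable_load(): blocked tasks are ignored. */
static unsigned long toy_runnable_load(const struct cpu_stats *c)
{
	return c->runnable_load;
}

/* Spirit of the new cpu_load(): blocked contributions still count. */
static unsigned long toy_load(const struct cpu_stats *c)
{
	return c->runnable_load + c->blocked_load;
}

int main(void)
{
	/* CPU 0: few tasks queued, but many sleepers that may wake soon. */
	struct cpu_stats cpu0 = { .runnable_load = 100, .blocked_load = 900 };
	/* CPU 1: genuinely busy, almost no blocked load. */
	struct cpu_stats cpu1 = { .runnable_load = 800, .blocked_load = 50 };

	printf("runnable_load: cpu0=%lu cpu1=%lu\n",
	       toy_runnable_load(&cpu0), toy_runnable_load(&cpu1));
	printf("load:          cpu0=%lu cpu1=%lu\n",
	       toy_load(&cpu0), toy_load(&cpu1));
	return 0;
}

By runnable load, cpu0 looks almost idle and would keep attracting tasks
even though its sleepers can wake at any moment; by load, cpu0 is already
the busier one. Once groups are overloaded, the second, conservative view
is the one this patch switches to.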
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

-- 
2.7.4

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f05f1ad..dfaf0b8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5445,6 +5445,11 @@ static unsigned long cpu_runnable_load(struct rq *rq)
 	return cfs_rq_runnable_load_avg(&rq->cfs);
 }
 
+static unsigned long cpu_load(struct rq *rq)
+{
+	return cfs_rq_load_avg(&rq->cfs);
+}
+
 static unsigned long capacity_of(int cpu)
 {
 	return cpu_rq(cpu)->cpu_capacity;
@@ -5540,7 +5545,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	s64 this_eff_load, prev_eff_load;
 	unsigned long task_load;
 
-	this_eff_load = cpu_runnable_load(cpu_rq(this_cpu));
+	this_eff_load = cpu_load(cpu_rq(this_cpu));
 
 	if (sync) {
 		unsigned long current_load = task_h_load(current);
@@ -5558,7 +5563,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 		this_eff_load *= 100;
 	this_eff_load *= capacity_of(prev_cpu);
 
-	prev_eff_load = cpu_runnable_load(cpu_rq(prev_cpu));
+	prev_eff_load = cpu_load(cpu_rq(prev_cpu));
 	prev_eff_load -= task_load;
 	if (sched_feat(WA_BIAS))
 		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
@@ -5646,7 +5651,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		max_spare_cap = 0;
 
 		for_each_cpu(i, sched_group_span(group)) {
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			runnable_load += load;
 
 			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
@@ -5787,7 +5792,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				continue;
 			}
 
-		load = cpu_runnable_load(cpu_rq(i));
+		load = cpu_load(cpu_rq(i));
 		if (load < min_load) {
 			min_load = load;
 			least_loaded_cpu = i;
@@ -8128,7 +8133,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
 			env->flags |= LBF_NOHZ_AGAIN;
 
-		sgs->group_load += cpu_runnable_load(rq);
+		sgs->group_load += cpu_load(rq);
 		sgs->group_util += cpu_util(i);
 		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
@@ -8569,7 +8574,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	init_sd_lb_stats(&sds);
 
 	/*
-	 * Compute the various statistics relavent for load balancing at
+	 * Compute the various statistics relevant for load balancing at
 	 * this level.
 	 */
 	update_sd_lb_stats(env, &sds);
@@ -8748,10 +8753,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
 		case migrate_load:
 			/*
-			 * When comparing with load imbalance, use cpu_runnable_load()
+			 * When comparing with load imbalance, use cpu_load()
			 * which is not scaled with the CPU capacity.
 			 */
-			load = cpu_runnable_load(rq);
+			load = cpu_load(rq);
 
 			if (nr_running == 1 && load > env->imbalance &&
 			    !check_cpu_capacity(rq, env->sd))
@@ -8759,7 +8764,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
 			/*
 			 * For the load comparisons with the other CPU's, consider
-			 * the cpu_runnable_load() scaled with the CPU capacity, so
+			 * the cpu_load() scaled with the CPU capacity, so
 			 * that the load can be moved away from the CPU that is
 			 * potentially running at a lower capacity.
 			 *
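For the wake_affine_weight() hunks above, the surrounding arithmetic is
unchanged; only the metric feeding it is. Below is a hedged userspace
re-derivation of that comparison, assuming the WA_BIAS scheduler feature
is enabled and skipping the sync-wakeup shortcut; all inputs are invented
PELT-scale numbers, with 1024 standing for a full-capacity CPU and 117 a
typical imbalance_pct.

#include <stdio.h>

/*
 * Hedged sketch, NOT the kernel function: mirrors the shape of the
 * wake_affine_weight() comparison after this patch.
 * Returns 1 to pull the task to this_cpu, 0 to leave it near prev_cpu.
 */
static int toy_wake_affine_weight(long long this_load, long long prev_load,
				  long long task_load,
				  long long this_capacity,
				  long long prev_capacity,
				  long long imbalance_pct)
{
	/* cpu_load() of the waking CPU, plus the task we would place there. */
	long long this_eff_load = this_load + task_load;
	/* cpu_load() of the previous CPU, minus the task that left it. */
	long long prev_eff_load = prev_load - task_load;

	/* WA_BIAS: make migrating away from prev_cpu slightly harder. */
	this_eff_load *= 100;
	prev_eff_load *= 100 + (imbalance_pct - 100) / 2;

	/* Cross-multiply by the other CPU's capacity instead of dividing. */
	this_eff_load *= prev_capacity;
	prev_eff_load *= this_capacity;

	return this_eff_load < prev_eff_load;
}

int main(void)
{
	/* (500)*100*1024 vs (700)*108*1024 -> prints "affine wakeup? 1" */
	printf("affine wakeup? %d\n",
	       toy_wake_affine_weight(300, 900, 200, 1024, 1024, 117));
	return 0;
}

With cpu_load() feeding both sides, blocked load parked on either CPU now
counts against it in this comparison as well, matching the conservative
intent described in the changelog.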