From patchwork Tue Aug 26 11:06:50 2014
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 35985
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	preeti@linux.vnet.ibm.com, linux@arm.linux.org.uk,
	linux-arm-kernel@lists.infradead.org
Cc: riel@redhat.com, Morten.Rasmussen@arm.com, efault@gmx.de,
	nicolas.pitre@linaro.org, linaro-kernel@lists.linaro.org,
	daniel.lezcano@linaro.org, dietmar.eggemann@arm.com,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v5 07/12] sched: test the cpu's capacity in wake affine
Date: Tue, 26 Aug 2014 13:06:50 +0200
Message-Id: <1409051215-16788-8-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1409051215-16788-1-git-send-email-vincent.guittot@linaro.org>
References: <1409051215-16788-1-git-send-email-vincent.guittot@linaro.org>

Currently the task always wakes affine on this_cpu if the latter is idle.
With this patch, before waking up the task on this_cpu, we check that
this_cpu's capacity is not significantly reduced by RT tasks or irq
activity.

Use cases where the number of irqs and/or the time spent under irq is
significant will benefit from this: a task that is woken up by an irq or
softirq will no longer run on the same CPU as that irq (and softirq) but
on an idle one.
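To make the effect concrete, here is a small stand-alone sketch of the
capacity-weighted comparison the patched wake_affine() ends up doing.
This is an illustration only, not kernel code: the helper name
wake_affine_balanced(), the capacity/load numbers and the imbalance_pct
value are made up, and the effective_load() contributions are simply
folded into the load arguments.

/*
 * Illustration only, not kernel code: a user-space sketch of the
 * capacity-weighted comparison done by the patched wake_affine().
 * The helper name, the capacity/load numbers and imbalance_pct are
 * made up; effective_load() contributions are folded into the load
 * arguments.
 */
#include <stdio.h>

static int wake_affine_balanced(long long this_load, long long prev_load,
				unsigned long this_capacity,
				unsigned long prev_capacity,
				unsigned int imbalance_pct)
{
	/* Each side is weighted by the *other* CPU's capacity ... */
	long long this_eff_load = 100LL * (long long)prev_capacity;
	/* ... and the previous CPU gets half the imbalance_pct margin. */
	long long prev_eff_load = (100LL + (imbalance_pct - 100) / 2) *
				  (long long)this_capacity;

	if (this_load > 0) {
		this_eff_load *= this_load;
		prev_eff_load *= prev_load;
	}

	/* Wake affine only if this side does not end up heavier. */
	return this_eff_load <= prev_eff_load;
}

int main(void)
{
	/*
	 * Hypothetical numbers: prev_cpu keeps its full capacity (1024)
	 * while irq/RT activity leaves this_cpu with only 600; both CPUs
	 * are otherwise idle (loads == 0) and imbalance_pct is 125.
	 */
	printf("balanced = %d\n",
	       wake_affine_balanced(0, 0, 600, 1024, 125));
	return 0;
}

With these made-up numbers, this_eff_load is 102400 against a
prev_eff_load of 67200, so the affine wakeup is refused even though
this_cpu is idle, which is exactly the case the old "balanced = true"
shortcut for this_load <= 0 used to let through.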
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 17c16cc..18db43e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4281,6 +4281,7 @@ static int wake_wide(struct task_struct *p)
 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 {
 	s64 this_load, load;
+	s64 this_eff_load, prev_eff_load;
 	int idx, this_cpu, prev_cpu;
 	struct task_group *tg;
 	unsigned long weight;
@@ -4324,21 +4325,21 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 * Otherwise check if either cpus are near enough in load to allow this
 	 * task to be woken on this_cpu.
 	 */
-	if (this_load > 0) {
-		s64 this_eff_load, prev_eff_load;
+	this_eff_load = 100;
+	this_eff_load *= capacity_of(prev_cpu);
+
+	prev_eff_load = 100 + (sd->imbalance_pct - 100) / 2;
+	prev_eff_load *= capacity_of(this_cpu);
 
-		this_eff_load = 100;
-		this_eff_load *= capacity_of(prev_cpu);
+	if (this_load > 0) {
 		this_eff_load *= this_load +
 			effective_load(tg, this_cpu, weight, weight);
 
-		prev_eff_load = 100 + (sd->imbalance_pct - 100) / 2;
-		prev_eff_load *= capacity_of(this_cpu);
 		prev_eff_load *= load + effective_load(tg, prev_cpu, 0, weight);
+	}
+
+	balanced = this_eff_load <= prev_eff_load;
 
-		balanced = this_eff_load <= prev_eff_load;
-	} else
-		balanced = true;
 
 	schedstat_inc(p, se.statistics.nr_wakeups_affine_attempts);
 	if (!balanced)