From patchwork Mon Jun 30 16:05:39 2014
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 32776
From: Vincent Guittot <vincent.guittot@linaro.org>
To: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org, linux@arm.linux.org.uk, linux-arm-kernel@lists.infradead.org
Cc: preeti@linux.vnet.ibm.com, Morten.Rasmussen@arm.com, efault@gmx.de, nicolas.pitre@linaro.org, linaro-kernel@lists.linaro.org, daniel.lezcano@linaro.org, dietmar.eggemann@arm.com, Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v3 08/12] sched: move cfs task on a CPU with higher capacity
Date: Mon, 30 Jun 2014 18:05:39 +0200
Message-Id: <1404144343-18720-9-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1404144343-18720-1-git-send-email-vincent.guittot@linaro.org>
References: <1404144343-18720-1-git-send-email-vincent.guittot@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

If a CPU is busy handling a lot of IRQs, trigger a load balance to check whether it is worth moving its tasks to another CPU that has more available capacity.

As a side note, this will not generate more spurious ilb kicks, because we already trigger an ilb when there is more than one busy CPU. If this CPU is the only one with a task, we will trigger the ilb once to migrate that task.
The nohz_kick_needed() function has also been cleaned up a bit while adding the new test.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 58 +++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 41 insertions(+), 17 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a23c938..742ad88 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5944,6 +5944,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		return true;
 	}
 
+	/*
+	 * The group capacity is reduced probably because of activity from
+	 * other sched classes or interrupts which use part of the available
+	 * capacity.
+	 */
+	if ((sg->sgc->capacity_orig * 100) > (sgs->group_capacity *
+				env->sd->imbalance_pct))
+		return true;
+
 	return false;
 }
 
@@ -6421,13 +6429,24 @@ static int need_active_balance(struct lb_env *env)
 	struct sched_domain *sd = env->sd;
 
 	if (env->idle == CPU_NEWLY_IDLE) {
+		int src_cpu = env->src_cpu;
 
 		/*
 		 * ASYM_PACKING needs to force migrate tasks from busy but
 		 * higher numbered CPUs in order to pack all tasks in the
 		 * lowest numbered CPUs.
 		 */
-		if ((sd->flags & SD_ASYM_PACKING) && env->src_cpu > env->dst_cpu)
+		if ((sd->flags & SD_ASYM_PACKING) && src_cpu > env->dst_cpu)
+			return 1;
+
+		/*
+		 * If the CPUs share their cache and the src_cpu's capacity is
+		 * reduced because of other sched_class activity or IRQs, we
+		 * trigger an active balance to move the task.
+		 */
+		if ((sd->flags & SD_SHARE_PKG_RESOURCES)
+			&& ((capacity_orig_of(src_cpu) * 100) > (capacity_of(src_cpu) *
+					sd->imbalance_pct)))
 			return 1;
 	}
 
@@ -6529,6 +6548,8 @@ redo:
 
 	schedstat_add(sd, lb_imbalance[idle], env.imbalance);
 
+	env.src_cpu = busiest->cpu;
+
 	ld_moved = 0;
 	if (busiest->nr_running > 1) {
 		/*
@@ -6538,7 +6559,6 @@ redo:
 		 * correctly treated as an imbalance.
 		 */
 		env.flags |= LBF_ALL_PINNED;
-		env.src_cpu = busiest->cpu;
 		env.src_rq = busiest;
 		env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);
 
@@ -7233,9 +7253,10 @@ static inline int nohz_kick_needed(struct rq *rq)
 	struct sched_domain *sd;
 	struct sched_group_capacity *sgc;
 	int nr_busy, cpu = rq->cpu;
+	bool kick = false;
 
 	if (unlikely(rq->idle_balance))
-		return 0;
+		return false;
 
 	/*
 	 * We may be recently in ticked or tickless idle mode. At the first
@@ -7249,38 +7270,41 @@ static inline int nohz_kick_needed(struct rq *rq)
 	 * balancing.
 	 */
 	if (likely(!atomic_read(&nohz.nr_cpus)))
-		return 0;
+		return false;
 
 	if (time_before(now, nohz.next_balance))
-		return 0;
+		return false;
 
 	if (rq->nr_running >= 2)
-		goto need_kick;
+		return true;
 
 	rcu_read_lock();
 	sd = rcu_dereference(per_cpu(sd_busy, cpu));
-
 	if (sd) {
 		sgc = sd->groups->sgc;
 		nr_busy = atomic_read(&sgc->nr_busy_cpus);
 
-		if (nr_busy > 1)
-			goto need_kick_unlock;
+		if (nr_busy > 1) {
+			kick = true;
+			goto unlock;
+		}
+
+		if ((rq->cfs.h_nr_running >= 1)
+		 && ((rq->cpu_capacity * sd->imbalance_pct) <
+					(rq->cpu_capacity_orig * 100))) {
+			kick = true;
+			goto unlock;
+		}
 	}
 
 	sd = rcu_dereference(per_cpu(sd_asym, cpu));
-
 	if (sd && (cpumask_first_and(nohz.idle_cpus_mask,
 				  sched_domain_span(sd)) < cpu))
-		goto need_kick_unlock;
+		kick = true;
 
+unlock:
 	rcu_read_unlock();
-	return 0;
-
-need_kick_unlock:
-	rcu_read_unlock();
-need_kick:
-	return 1;
+	return kick;
 }
 #else
 static void nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle) { }
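[Editor's note] All three new tests in the patch hinge on the same arithmetic: the original capacity of a CPU (or group), scaled by 100, is compared against the remaining capacity scaled by the domain's imbalance_pct (e.g. 125, meaning a 25% margin). The helper below is a stand-alone illustrative sketch, not kernel code; `capacity_reduced` and its parameter names are hypothetical stand-ins for the kernel's capacity_orig_of()/capacity_of() values.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the patch's capacity test: returns true when
 * the usable capacity has dropped below capacity_orig by more than the
 * margin encoded in imbalance_pct (125 == a 25% margin).
 */
bool capacity_reduced(unsigned long capacity_orig,
		      unsigned long capacity,
		      unsigned int imbalance_pct)
{
	/* capacity_orig * 100 > capacity * imbalance_pct */
	return capacity_orig * 100 > capacity * imbalance_pct;
}
```

With capacity_orig = 1024 and imbalance_pct = 125, the check fires once the remaining capacity falls to 819 or below (1024 * 100 / 125 ≈ 819), i.e. once roughly 20% of the CPU has been consumed by IRQs or another sched class.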
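[Editor's note] The nohz_kick_needed() rework replaces the need_kick/need_kick_unlock labels with a single `kick` flag and one unlock path. The model below is a simplified, hypothetical sketch of that control flow only: the `rq_model` struct, field names, and the omission of RCU locking and the sd_asym check are all simplifications for illustration.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flattened view of the runqueue state consulted by the
 * reworked decision logic (not the kernel's struct rq). */
struct rq_model {
	int nr_running;			/* all classes */
	int cfs_h_nr_running;		/* cfs tasks, incl. hierarchy */
	unsigned long cpu_capacity;	/* capacity left for cfs */
	unsigned long cpu_capacity_orig;
	int nr_busy_cpus;		/* busy CPUs sharing the cache */
	unsigned int imbalance_pct;
};

bool nohz_kick_needed_model(const struct rq_model *rq)
{
	bool kick = false;

	/* More than one runnable task: always worth a kick. */
	if (rq->nr_running >= 2)
		return true;

	/* Several busy CPUs in the cache-sharing domain. */
	if (rq->nr_busy_cpus > 1) {
		kick = true;
		goto unlock;
	}

	/* New in the patch: a lone cfs task on a CPU whose capacity is
	 * reduced beyond the imbalance_pct margin by IRQ/RT activity. */
	if (rq->cfs_h_nr_running >= 1 &&
	    rq->cpu_capacity * rq->imbalance_pct <
	    rq->cpu_capacity_orig * 100) {
		kick = true;
		goto unlock;
	}
unlock:
	/* in the kernel, the single rcu_read_unlock() lives here */
	return kick;
}
```

The single exit makes it harder to leak the RCU read lock than the original three-label version, which is presumably why the cleanup accompanies the new test.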