From patchwork Fri Oct 18 11:52:17 2013
X-Patchwork-Submitter: Vincent Guittot
X-Patchwork-Id: 21117
From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux-kernel@vger.kernel.org, peterz@infradead.org, mingo@kernel.org, pjt@google.com, Morten.Rasmussen@arm.com, cmetcalf@tilera.com, tony.luck@intel.com, alex.shi@intel.com, preeti@linux.vnet.ibm.com, linaro-kernel@lists.linaro.org
Cc: rjw@sisk.pl, paulmck@linux.vnet.ibm.com, corbet@lwn.net, tglx@linutronix.de, len.brown@intel.com, arjan@linux.intel.com, amit.kucheria@linaro.org, l.majewski@samsung.com, Vincent Guittot <vincent.guittot@linaro.org>
Subject: [RFC][PATCH v5 04/14] sched: do load balance only with packing cpus
Date: Fri, 18 Oct 2013 13:52:17 +0200
Message-Id: <1382097147-30088-4-git-send-email-vincent.guittot@linaro.org>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1382097147-30088-1-git-send-email-vincent.guittot@linaro.org>
References: <1382097147-30088-1-git-send-email-vincent.guittot@linaro.org>

Tasks will be scheduled only on the CPUs that participate in the packing
effort. A CPU participates in the packing effort when it is its own buddy.

For ILB, look for an idle CPU close to the packing CPUs whenever possible.
The goal is to avoid waking up a CPU which doesn't share the power domain
of the pack buddy CPU.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 76 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5547831..7149f38 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -186,6 +186,17 @@ void sched_init_granularity(void)
  */
 DEFINE_PER_CPU(int, sd_pack_buddy);
 
+static inline bool is_packing_cpu(int cpu)
+{
+	int my_buddy = per_cpu(sd_pack_buddy, cpu);
+	return (my_buddy == -1) || (cpu == my_buddy);
+}
+
+static inline int get_buddy(int cpu)
+{
+	return per_cpu(sd_pack_buddy, cpu);
+}
+
 /*
  * Look for the best buddy CPU that can be used to pack small tasks
  * We make the assumption that it doesn't wort to pack on CPU that share the
@@ -245,6 +256,32 @@ void update_packing_domain(int cpu)
 	pr_debug("CPU%d packing on CPU%d\n", cpu, id);
 	per_cpu(sd_pack_buddy, cpu) = id;
 }
+
+static int check_nohz_packing(int cpu)
+{
+	if (!is_packing_cpu(cpu))
+		return true;
+
+	return false;
+}
+#else /* CONFIG_SCHED_PACKING_TASKS */
+
+static inline bool is_packing_cpu(int cpu)
+{
+	return 1;
+}
+
+static inline int get_buddy(int cpu)
+{
+	return -1;
+}
+
+static inline int check_nohz_packing(int cpu)
+{
+	return false;
+}
+
+
 #endif /* CONFIG_SCHED_PACKING_TASKS */
 #endif /* CONFIG_SMP */
@@ -3370,7 +3407,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 	do {
 		unsigned long load, avg_load;
-		int local_group;
+		int local_group, packing_cpus = 0;
 		int i;
 
 		/* Skip over this group if it has no CPUs allowed */
@@ -3392,8 +3429,14 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 				load = target_load(i, load_idx);
 
 			avg_load += load;
+
+			if (is_packing_cpu(i))
+				packing_cpus = 1;
 		}
 
+		if (!packing_cpus)
+			continue;
+
 		/* Adjust by relative CPU power of the group */
 		avg_load = (avg_load * SCHED_POWER_SCALE) / group->sgp->power;
@@ -3448,7 +3491,8 @@ static int select_idle_sibling(struct task_struct *p, int target)
 	/*
 	 * If the prevous cpu is cache affine and idle, don't be stupid.
 	 */
-	if (i != target && cpus_share_cache(i, target) && idle_cpu(i))
+	if (i != target && cpus_share_cache(i, target) && idle_cpu(i)
+			&& is_packing_cpu(i))
 		return i;
 
 	/*
@@ -3463,7 +3507,8 @@ static int select_idle_sibling(struct task_struct *p, int target)
 			goto next;
 
 		for_each_cpu(i, sched_group_cpus(sg)) {
-			if (i == target || !idle_cpu(i))
+			if (i == target || !idle_cpu(i)
+					|| !is_packing_cpu(i))
 				goto next;
 		}
@@ -3528,9 +3573,13 @@ select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
 	}
 
 	if (affine_sd) {
-		if (cpu != prev_cpu && wake_affine(affine_sd, p, sync))
+		if (cpu != prev_cpu && (wake_affine(affine_sd, p, sync)
+				|| !is_packing_cpu(prev_cpu)))
 			prev_cpu = cpu;
 
+		if (!is_packing_cpu(prev_cpu))
+			prev_cpu = get_buddy(prev_cpu);
+
 		new_cpu = select_idle_sibling(p, prev_cpu);
 		goto unlock;
 	}
@@ -5593,7 +5642,26 @@ static struct {
 
 static inline int find_new_ilb(int call_cpu)
 {
+	struct sched_domain *sd;
 	int ilb = cpumask_first(nohz.idle_cpus_mask);
+	int buddy = get_buddy(call_cpu);
+
+	/*
+	 * If we have a pack buddy CPU, we try to run load balance on a CPU
+	 * that is close to the buddy.
+	 */
+	if (buddy != -1) {
+		for_each_domain(buddy, sd) {
+			if (sd->flags & SD_SHARE_CPUPOWER)
+				continue;
+
+			ilb = cpumask_first_and(sched_domain_span(sd),
+					nohz.idle_cpus_mask);
+
+			if (ilb < nr_cpu_ids)
+				break;
+		}
+	}
 
 	if (ilb < nr_cpu_ids && idle_cpu(ilb))
 		return ilb;
@@ -5874,6 +5942,10 @@ static inline int nohz_kick_needed(struct rq *rq, int cpu)
 	if (rq->nr_running >= 2)
 		goto need_kick;
 
+	/* This cpu doesn't contribute to packing effort */
+	if (check_nohz_packing(cpu))
+		goto need_kick;
+
 	rcu_read_lock();
 	for_each_domain(cpu, sd) {
 		struct sched_group *sg = sd->groups;