From patchwork Mon May 26 22:19:35 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nicolas Pitre
X-Patchwork-Id: 30948
From: Nicolas Pitre
To: Peter Zijlstra, Ingo Molnar
Cc: Vincent Guittot, Daniel Lezcano, Morten Rasmussen,
 "Rafael J. Wysocki", linux-kernel@vger.kernel.org,
 linaro-kernel@lists.linaro.org
Subject: [PATCH v2 2/6] sched/fair.c: change "has_capacity" to "has_free_capacity"
Date: Mon, 26 May 2014 18:19:35 -0400
Message-id: <1401142779-6633-3-git-send-email-nicolas.pitre@linaro.org>
X-Mailer: git-send-email 1.8.4.108.g55ea5f6
In-reply-to: <1401142779-6633-1-git-send-email-nicolas.pitre@linaro.org>
References: <1401142779-6633-1-git-send-email-nicolas.pitre@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The capacity of a CPU/group should be some intrinsic value that doesn't
change with task placement.  It is like a container whose capacity is
stable regardless of the amount of liquid in it (its "utilization")...
unless the container itself is crushed, that is, but that's another story.

Therefore let's rename "has_capacity" to "has_free_capacity" in order to
better convey the intended meaning.

Signed-off-by: Nicolas Pitre
---
 kernel/sched/fair.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 16f0dddb70..8f9ac4826c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1030,7 +1030,7 @@ struct numa_stats {
 
 	/* Approximate capacity in terms of runnable tasks on a node */
 	unsigned long task_capacity;
-	int has_capacity;
+	int has_free_capacity;
 };
 
 /*
@@ -1056,8 +1056,8 @@ static void update_numa_stats(struct numa_stats *ns, int nid)
 	 * the @ns structure is NULL'ed and task_numa_compare() will
 	 * not find this node attractive.
 	 *
-	 * We'll either bail at !has_capacity, or we'll detect a huge imbalance
-	 * and bail there.
+	 * We'll either bail at !has_free_capacity, or we'll detect a huge
+	 * imbalance and bail there.
	 */
 	if (!cpus)
 		return;
 
@@ -1065,7 +1065,7 @@ static void update_numa_stats(struct numa_stats *ns, int nid)
 	ns->load = (ns->load * SCHED_POWER_SCALE) / ns->compute_capacity;
 	ns->task_capacity =
 		DIV_ROUND_CLOSEST(ns->compute_capacity, SCHED_POWER_SCALE);
-	ns->has_capacity = (ns->nr_running < ns->task_capacity);
+	ns->has_free_capacity = (ns->nr_running < ns->task_capacity);
 }
 
 struct task_numa_env {
@@ -1196,8 +1196,8 @@ static void task_numa_compare(struct task_numa_env *env,
 
 	if (!cur) {
 		/* Is there capacity at our destination? */
-		if (env->src_stats.has_capacity &&
-		    !env->dst_stats.has_capacity)
+		if (env->src_stats.has_free_capacity &&
+		    !env->dst_stats.has_free_capacity)
 			goto unlock;
 
 		goto balance;
@@ -1302,8 +1302,8 @@ static int task_numa_migrate(struct task_struct *p)
 	groupimp = group_weight(p, env.dst_nid) - groupweight;
 	update_numa_stats(&env.dst_stats, env.dst_nid);
 
-	/* If the preferred nid has capacity, try to use it. */
-	if (env.dst_stats.has_capacity)
+	/* If the preferred nid has free capacity, try to use it. */
+	if (env.dst_stats.has_free_capacity)
 		task_numa_find_cpu(&env, taskimp, groupimp);
 
 	/* No space available on the preferred nid. Look elsewhere. */
@@ -5536,7 +5536,7 @@ struct sg_lb_stats {
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	int group_imb; /* Is there an imbalance in the group ? */
-	int group_has_capacity; /* Is there extra capacity in the group? */
+	int group_has_free_capacity;
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
@@ -5903,7 +5903,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_capacity = sg_capacity(env, group);
 
 	if (sgs->group_capacity > sgs->sum_nr_running)
-		sgs->group_has_capacity = 1;
+		sgs->group_has_free_capacity = 1;
 }
 
 /**
@@ -6027,7 +6027,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 		 * with a large weight task outweighs the tasks on the system).
 		 */
 		if (prefer_sibling && sds->local &&
-		    sds->local_stat.group_has_capacity)
+		    sds->local_stat.group_has_free_capacity)
 			sgs->group_capacity = min(sgs->group_capacity, 1U);
 
 		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
@@ -6287,8 +6287,8 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		goto force_balance;
 
 	/* SD_BALANCE_NEWIDLE trumps SMP nice when underutilized */
-	if (env->idle == CPU_NEWLY_IDLE && local->group_has_capacity &&
-	    !busiest->group_has_capacity)
+	if (env->idle == CPU_NEWLY_IDLE && local->group_has_free_capacity &&
+	    !busiest->group_has_free_capacity)
 		goto force_balance;
 
 	/*