From patchwork Thu Jul 3 16:25:59 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Morten Rasmussen <morten.rasmussen@arm.com>
X-Patchwork-Id: 33051
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	peterz@infradead.org, mingo@kernel.org
Cc: rjw@rjwysocki.net, vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
	preeti@linux.vnet.ibm.com, Dietmar.Eggemann@arm.com, pjt@google.com
Subject: [RFCv2 PATCH 12/23] sched: Rename weighted_cpuload() to cpu_load()
Date: Thu, 3 Jul 2014 17:25:59 +0100
Message-Id: <1404404770-323-13-git-send-email-morten.rasmussen@arm.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>
References: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>

From: Dietmar Eggemann <dietmar.eggemann@arm.com>

The function weighted_cpuload() is the only one in the group of load
related functions used in the scheduler load balancing code
(weighted_cpuload(), source_load(), target_load(), task_h_load()) which
carries an explicit 'weighted' identifier in its name. Get rid of this
'weighted' identifier since following patches will introduce a
weighted/unweighted switch as an argument for these functions.
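The rename is mechanical: cpu_load() keeps the exact body of
weighted_cpuload(), as the diff below shows. A minimal before/after
sketch; the switch-argument form at the end is only an assumed shape
for the later patches, not something this patch adds:

/* Before this patch (kernel/sched/fair.c): */
static unsigned long weighted_cpuload(const int cpu)
{
	return cpu_rq(cpu)->cfs.runnable_load_avg;
}

/* After this patch: same body, new name. */
static unsigned long cpu_load(const int cpu)
{
	return cpu_rq(cpu)->cfs.runnable_load_avg;
}

/*
 * Assumed shape of the weighted/unweighted switch mentioned above;
 * the parameter name 'uw' is hypothetical and not part of this patch:
 *
 *	static unsigned long cpu_load(const int cpu, int uw);
 */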
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
 kernel/sched/fair.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 93c8dbe..784fdab 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1014,7 +1014,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	return group_faults(p, dst_nid) < (group_faults(p, src_nid) * 3 / 4);
 }
 
-static unsigned long weighted_cpuload(const int cpu);
+static unsigned long cpu_load(const int cpu);
 static unsigned long source_load(int cpu, int type);
 static unsigned long target_load(int cpu, int type);
 static unsigned long capacity_of(int cpu);
@@ -1045,7 +1045,7 @@ static void update_numa_stats(struct numa_stats *ns, int nid)
 		struct rq *rq = cpu_rq(cpu);
 
 		ns->nr_running += rq->nr_running;
-		ns->load += weighted_cpuload(cpu);
+		ns->load += cpu_load(cpu);
 		ns->compute_capacity += capacity_of(cpu);
 
 		cpus++;
@@ -4036,7 +4036,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 #ifdef CONFIG_SMP
 /* Used instead of source_load when we know the type == 0 */
-static unsigned long weighted_cpuload(const int cpu)
+static unsigned long cpu_load(const int cpu)
 {
 	return cpu_rq(cpu)->cfs.runnable_load_avg;
 }
@@ -4051,7 +4051,7 @@ static unsigned long weighted_cpuload(const int cpu)
 static unsigned long source_load(int cpu, int type)
 {
 	struct rq *rq = cpu_rq(cpu);
-	unsigned long total = weighted_cpuload(cpu);
+	unsigned long total = cpu_load(cpu);
 
 	if (type == 0 || !sched_feat(LB_BIAS))
 		return total;
@@ -4066,7 +4066,7 @@ static unsigned long source_load(int cpu, int type)
 static unsigned long target_load(int cpu, int type)
 {
 	struct rq *rq = cpu_rq(cpu);
-	unsigned long total = weighted_cpuload(cpu);
+	unsigned long total = cpu_load(cpu);
 
 	if (type == 0 || !sched_feat(LB_BIAS))
 		return total;
@@ -4433,7 +4433,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 
 	/* Traverse only the allowed CPUs */
 	for_each_cpu_and(i, sched_group_cpus(group), tsk_cpus_allowed(p)) {
-		load = weighted_cpuload(i);
+		load = cpu_load(i);
 
 		if (load < min_load || (load == min_load && i == this_cpu)) {
 			min_load = load;
@@ -5926,7 +5926,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		sgs->nr_numa_running += rq->nr_numa_running;
 		sgs->nr_preferred_running += rq->nr_preferred_running;
 #endif
-		sgs->sum_weighted_load += weighted_cpuload(i);
+		sgs->sum_weighted_load += cpu_load(i);
 		if (idle_cpu(i))
 			sgs->idle_cpus++;
 	}
@@ -6388,7 +6388,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 	int i;
 
 	for_each_cpu_and(i, sched_group_cpus(group), env->cpus) {
-		unsigned long capacity, capacity_factor, wl;
+		unsigned long capacity, capacity_factor, load;
 		enum fbq_type rt;
 
 		rq = cpu_rq(i);
@@ -6421,28 +6421,29 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		if (!capacity_factor)
 			capacity_factor = fix_small_capacity(env->sd, group);
 
-		wl = weighted_cpuload(i);
+		load = cpu_load(i);
 
 		/*
-		 * When comparing with imbalance, use weighted_cpuload()
+		 * When comparing with imbalance, use cpu_load()
 		 * which is not scaled with the cpu capacity.
 		 */
-		if (capacity_factor && rq->nr_running == 1 && wl > env->imbalance)
+		if (capacity_factor && rq->nr_running == 1 &&
+		    load > env->imbalance)
 			continue;
 
 		/*
 		 * For the load comparisons with the other cpu's, consider
-		 * the weighted_cpuload() scaled with the cpu capacity, so
+		 * the cpu_load() scaled with the cpu capacity, so
 		 * that the load can be moved away from the cpu that is
 		 * potentially running at a lower capacity.
 		 *
-		 * Thus we're looking for max(wl_i / capacity_i), crosswise
+		 * Thus we're looking for max(load_i / capacity_i), crosswise
 		 * multiplication to rid ourselves of the division works out
-		 * to: wl_i * capacity_j > wl_j * capacity_i; where j is
+		 * to: load_i * capacity_j > load_j * capacity_i; where j is
 		 * our previous maximum.
 		 */
-		if (wl * busiest_capacity > busiest_load * capacity) {
-			busiest_load = wl;
+		if (load * busiest_capacity > busiest_load * capacity) {
+			busiest_load = load;
 			busiest_capacity = capacity;
 			busiest = rq;
 		}
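The "crosswise multiplication" in the comment above replaces the ratio
comparison load_i / capacity_i > load_j / capacity_j with
load_i * capacity_j > load_j * capacity_i, so the busiest-queue test
never has to perform an integer division. A standalone sketch of the
same trick, outside the kernel and with made-up numbers (cpu_stat,
busier_than() and the values are illustrative only):

#include <stdio.h>

/*
 * Illustrative only: pick the "busier" of two CPUs by comparing
 * load/capacity ratios via cross-multiplication, as the comment in
 * find_busiest_queue() describes.
 */
struct cpu_stat {
	unsigned long load;		/* runnable load */
	unsigned long capacity;		/* compute capacity */
};

/* True if 'a' has a higher load/capacity ratio than 'b'. */
static int busier_than(const struct cpu_stat *a, const struct cpu_stat *b)
{
	/*
	 * a->load / a->capacity > b->load / b->capacity
	 * rewritten without division as:
	 * a->load * b->capacity > b->load * a->capacity
	 */
	return a->load * b->capacity > b->load * a->capacity;
}

int main(void)
{
	struct cpu_stat big    = { .load = 600, .capacity = 1024 };
	struct cpu_stat little = { .load = 400, .capacity =  512 };

	/* 400/512 (~0.78) beats 600/1024 (~0.59), so 'little' is busier. */
	printf("busier: %s\n", busier_than(&little, &big) ? "little" : "big");
	return 0;
}

In find_busiest_queue() the running maximum (busiest_load,
busiest_capacity) plays the role of 'b', so each candidate CPU is
compared against the busiest queue seen so far.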