From patchwork Thu Jul 3 16:25:51 2014
X-Patchwork-Submitter: Morten Rasmussen
X-Patchwork-Id: 33053
From: Morten Rasmussen <morten.rasmussen@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, peterz@infradead.org,
	mingo@kernel.org
Cc: rjw@rjwysocki.net, vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
	preeti@linux.vnet.ibm.com, Dietmar.Eggemann@arm.com, pjt@google.com
Subject: [RFCv2 PATCH 04/23] sched: Allocate and initialize energy data structures
Date: Thu, 3 Jul 2014 17:25:51 +0100
Message-Id: <1404404770-323-5-git-send-email-morten.rasmussen@arm.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>
References: <1404404770-323-1-git-send-email-morten.rasmussen@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

From: Dietmar Eggemann

The per sched group (sg) sched_group_energy structure plus the related
idle_state and capacity_state arrays are allocated like the other sched
domain (sd) hierarchy data structures. This includes the freeing of
sched_group_energy structures that are not used.

One problem is that the number of elements in the idle_state and the
capacity_state arrays is not fixed and has to be retrieved in __sdt_alloc()
so that the sched_group_energy structure and the two arrays can be allocated
in one memory chunk. The array pointers (idle_states and cap_states) are
initialized there to point to the correct places inside that chunk.

The new function init_sched_energy() initializes the sched_group_energy
structure and the two arrays in case the sd topology level contains energy
information.
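The single-chunk allocation described above follows a common kernel pattern:
one kzalloc_node() call sized for the structure plus both variable-length
arrays, with the array pointers then aimed at the tail of the chunk. Below is
a minimal sketch of that pattern, assuming simplified type and helper names
(example_energy, alloc_energy) that are not part of this patch; the real
layout and offsets are the ones used in __sdt_alloc() in the diff.

struct example_energy {
	unsigned int nr_idle_states;
	struct idle_state *idle_states;		/* points into the same chunk */
	unsigned int nr_cap_states;
	struct capacity_state *cap_states;	/* points into the same chunk */
};

/* Sketch only: allocate the struct plus both arrays in one chunk on 'node'. */
static struct example_energy *alloc_energy(unsigned int nr_idle_states,
					   unsigned int nr_cap_states, int node)
{
	struct example_energy *e;

	e = kzalloc_node(sizeof(*e) +
			 nr_idle_states * sizeof(struct idle_state) +
			 nr_cap_states * sizeof(struct capacity_state),
			 GFP_KERNEL, node);
	if (!e)
		return NULL;

	/* both arrays live immediately after the structure */
	e->idle_states = (struct idle_state *)(e + 1);
	e->cap_states = (struct capacity_state *)(e->idle_states + nr_idle_states);
	e->nr_idle_states = nr_idle_states;
	e->nr_cap_states = nr_cap_states;
	return e;
}

The patch computes the same offsets from &sge->cap_states plus the pointer
size, which lands at the end of the structure provided cap_states is the last
member of struct sched_group_energy (the structure itself is introduced
earlier in this series); the resulting layout, structure followed by
idle_states[] followed by cap_states[], is the same.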
Signed-off-by: Dietmar Eggemann
---
 kernel/sched/core.c  | 71 +++++++++++++++++++++++++++++++++++++++++++++++++-
 kernel/sched/sched.h | 35 +++++++++++++++++++++++++
 2 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 54f5722..ecece17 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5539,6 +5539,7 @@ static void free_sched_domain(struct rcu_head *rcu)
 		free_sched_groups(sd->groups, 1);
 	} else if (atomic_dec_and_test(&sd->groups->ref)) {
 		kfree(sd->groups->sgc);
+		kfree(sd->groups->sge);
 		kfree(sd->groups);
 	}
 	kfree(sd);
@@ -5799,6 +5800,8 @@ static int get_group(int cpu, struct sd_data *sdd, struct sched_group **sg)
 		*sg = *per_cpu_ptr(sdd->sg, cpu);
 		(*sg)->sgc = *per_cpu_ptr(sdd->sgc, cpu);
 		atomic_set(&(*sg)->sgc->ref, 1); /* for claim_allocations */
+		(*sg)->sge = *per_cpu_ptr(sdd->sge, cpu);
+		atomic_set(&(*sg)->sge->ref, 1); /* for claim_allocations */
 	}

 	return cpu;
@@ -5888,6 +5891,28 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 	atomic_set(&sg->sgc->nr_busy_cpus, sg->group_weight);
 }

+static void init_sched_energy(int cpu, struct sched_domain *sd,
+			      struct sched_domain_topology_level *tl)
+{
+	struct sched_group *sg = sd->groups;
+	struct sched_group_energy *energy = sg->sge;
+	sched_domain_energy_f fn = tl->energy;
+	struct cpumask *mask = sched_group_cpus(sg);
+
+	if (!fn || !fn(cpu))
+		return;
+
+	if (cpumask_weight(mask) > 1)
+		check_sched_energy_data(cpu, fn, mask);
+
+	energy->nr_idle_states = fn(cpu)->nr_idle_states;
+	memcpy(energy->idle_states, fn(cpu)->idle_states,
+	       energy->nr_idle_states*sizeof(struct idle_state));
+	energy->nr_cap_states = fn(cpu)->nr_cap_states;
+	memcpy(energy->cap_states, fn(cpu)->cap_states,
+	       energy->nr_cap_states*sizeof(struct capacity_state));
+}
+
 /*
  * Initializers for schedule domains
  * Non-inlined to reduce accumulated stack pressure in build_sched_domains()
@@ -5978,6 +6003,9 @@ static void claim_allocations(int cpu, struct sched_domain *sd)

 	if (atomic_read(&(*per_cpu_ptr(sdd->sgc, cpu))->ref))
 		*per_cpu_ptr(sdd->sgc, cpu) = NULL;
+
+	if (atomic_read(&(*per_cpu_ptr(sdd->sge, cpu))->ref))
+		*per_cpu_ptr(sdd->sge, cpu) = NULL;
 }

 #ifdef CONFIG_NUMA
@@ -6383,10 +6411,24 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 		if (!sdd->sgc)
 			return -ENOMEM;

+		sdd->sge = alloc_percpu(struct sched_group_energy *);
+		if (!sdd->sge)
+			return -ENOMEM;
+
 		for_each_cpu(j, cpu_map) {
 			struct sched_domain *sd;
 			struct sched_group *sg;
 			struct sched_group_capacity *sgc;
+			struct sched_group_energy *sge;
+			sched_domain_energy_f fn = tl->energy;
+			unsigned int nr_idle_states = 0;
+			unsigned int nr_cap_states = 0;
+
+			if (fn && fn(j)) {
+				nr_idle_states = fn(j)->nr_idle_states;
+				nr_cap_states = fn(j)->nr_cap_states;
+				BUG_ON(!nr_idle_states || !nr_cap_states);
+			}

 			sd = kzalloc_node(sizeof(struct sched_domain) + cpumask_size(),
 					GFP_KERNEL, cpu_to_node(j));
@@ -6410,6 +6452,26 @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 				return -ENOMEM;

 			*per_cpu_ptr(sdd->sgc, j) = sgc;
+
+			sge = kzalloc_node(sizeof(struct sched_group_energy) +
+				nr_idle_states*sizeof(struct idle_state) +
+				nr_cap_states*sizeof(struct capacity_state),
+				GFP_KERNEL, cpu_to_node(j));
+
+			if (!sge)
+				return -ENOMEM;
+
+			sge->idle_states = (struct idle_state *)
+					   ((void *)&sge->cap_states +
+					    sizeof(sge->cap_states));
+
+			sge->cap_states = (struct capacity_state *)
+					  ((void *)&sge->cap_states +
+					   sizeof(sge->cap_states) +
+					   nr_idle_states*
+					   sizeof(struct idle_state));
+
+			*per_cpu_ptr(sdd->sge, j) = sge;
 		}
 	}

@@ -6438,6 +6500,8 @@ static void __sdt_free(const struct cpumask *cpu_map)
 				kfree(*per_cpu_ptr(sdd->sg, j));
 			if (sdd->sgc)
 				kfree(*per_cpu_ptr(sdd->sgc, j));
+			if (sdd->sge)
+				kfree(*per_cpu_ptr(sdd->sge, j));
 		}
 		free_percpu(sdd->sd);
 		sdd->sd = NULL;
@@ -6445,6 +6509,8 @@ static void __sdt_free(const struct cpumask *cpu_map)
 		sdd->sg = NULL;
 		free_percpu(sdd->sgc);
 		sdd->sgc = NULL;
+		free_percpu(sdd->sge);
+		sdd->sge = NULL;
 	}
 }

@@ -6516,10 +6582,13 @@ static int build_sched_domains(const struct cpumask *cpu_map,

 	/* Calculate CPU capacity for physical packages and nodes */
 	for (i = nr_cpumask_bits-1; i >= 0; i--) {
+		struct sched_domain_topology_level *tl = sched_domain_topology;
+
 		if (!cpumask_test_cpu(i, cpu_map))
 			continue;

-		for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
+		for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent, tl++) {
+			init_sched_energy(i, sd, tl);
 			claim_allocations(i, sd);
 			init_sched_groups_capacity(i, sd);
 		}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d300a64..1a5f1ee 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -790,6 +790,41 @@ static inline unsigned int group_first_cpu(struct sched_group *group)

 extern int group_balance_cpu(struct sched_group *sg);

+/*
+ * Check that the per-cpu provided sd energy data is consistent for all cpus
+ * within the mask.
+ */
+static inline void check_sched_energy_data(int cpu, sched_domain_energy_f fn,
+					   const struct cpumask *cpumask)
+{
+	struct cpumask mask;
+	int i;
+
+	cpumask_xor(&mask, cpumask, get_cpu_mask(cpu));
+
+	for_each_cpu(i, &mask) {
+		int y;
+
+		BUG_ON(fn(i)->nr_idle_states != fn(cpu)->nr_idle_states);
+
+		for (y = 0; y < (fn(i)->nr_idle_states); y++) {
+			BUG_ON(fn(i)->idle_states[y].power !=
+			       fn(cpu)->idle_states[y].power);
+			BUG_ON(fn(i)->idle_states[y].wu_energy !=
+			       fn(cpu)->idle_states[y].wu_energy);
+		}
+
+		BUG_ON(fn(i)->nr_cap_states != fn(cpu)->nr_cap_states);
+
+		for (y = 0; y < (fn(i)->nr_cap_states); y++) {
+			BUG_ON(fn(i)->cap_states[y].cap !=
+			       fn(cpu)->cap_states[y].cap);
+			BUG_ON(fn(i)->cap_states[y].power !=
+			       fn(cpu)->cap_states[y].power);
+		}
+	}
+}
+
 #else

 static inline void sched_ttwu_pending(void) { }
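For context, init_sched_energy() simply copies whatever the topology level's
energy callback returns. The sketch below shows roughly what such per-level
data could look like; the sched_group_energy, idle_state and capacity_state
definitions and the ->energy hook come from earlier patches in this series,
the callback name cluster_energy is hypothetical, and all numbers are invented
purely for illustration.

static struct idle_state idle_states_cluster[] = {
	{ .power = 25, .wu_energy = 10 },	/* e.g. WFI */
	{ .power =  5, .wu_energy = 50 },	/* e.g. cluster power-down */
};

static struct capacity_state cap_states_cluster[] = {
	{ .cap =  512, .power = 100 },		/* lowest P-state */
	{ .cap = 1024, .power = 300 },		/* highest P-state */
};

static struct sched_group_energy energy_cluster = {
	.nr_idle_states	= ARRAY_SIZE(idle_states_cluster),
	.idle_states	= idle_states_cluster,
	.nr_cap_states	= ARRAY_SIZE(cap_states_cluster),
	.cap_states	= cap_states_cluster,
};

/* hypothetical per-level callback wired into the topology table */
static struct sched_group_energy *cluster_energy(int cpu)
{
	return &energy_cluster;
}

With data like this provided for every cpu of a group, check_sched_energy_data()
passes (all cpus report identical tables) and init_sched_energy() copies the
two arrays into the per-group chunk allocated in __sdt_alloc().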