From patchwork Wed Jun 15 10:17:53 2016
X-Patchwork-Submitter: Juri Lelli <juri.lelli@arm.com>
X-Patchwork-Id: 70099
From: Juri Lelli <juri.lelli@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    devicetree@vger.kernel.org, peterz@infradead.org, vincent.guittot@linaro.org,
    robh+dt@kernel.org, mark.rutland@arm.com, linux@arm.linux.org.uk,
    sudeep.holla@arm.com, lorenzo.pieralisi@arm.com, catalin.marinas@arm.com,
    will.deacon@arm.com, morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
    juri.lelli@arm.com, broonie@kernel.org, sgurrappadi@nvidia.com
Subject: [PATCH v5 4/8] arm64: parse cpu capacity-dmips-mhz from DT
Date: Wed, 15 Jun 2016 11:17:53 +0100
Message-Id: <1465985877-18271-5-git-send-email-juri.lelli@arm.com>
In-Reply-To: <1465985877-18271-1-git-send-email-juri.lelli@arm.com>
References: <1465985877-18271-1-git-send-email-juri.lelli@arm.com>

With the introduction of the cpu capacity-dmips-mhz binding, CPU
capacities can now be calculated from values extracted from DT and
information coming from cpufreq. Add parsing of DT information at boot
time, and complement it with cpufreq information. Also, store such
information using per-CPU variables, as we do for arm.

Caveat: the information provided by this patch will start to be used in
the future. We need to #define arch_scale_cpu_capacity to something
provided in arch, so that the scheduler's default implementation (which
is used when arch_scale_cpu_capacity is not defined) is overridden.
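As a reference for what the parser consumes, a minimal DT sketch of the
binding follows (illustrative only: node names, compatibles and the
1024/578 values are made-up big.LITTLE-style numbers, not taken from
this series):

  cpus {
          #address-cells = <2>;
          #size-cells = <0>;

          cpu@0 {
                  device_type = "cpu";
                  compatible = "arm,cortex-a57";
                  reg = <0x0 0x0>;
                  capacity-dmips-mhz = <1024>;
          };

          cpu@100 {
                  device_type = "cpu";
                  compatible = "arm,cortex-a53";
                  reg = <0x0 0x100>;
                  capacity-dmips-mhz = <578>;
          };
  };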
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
---
Changes from v1:
 - normalize w.r.t. highest capacity found in DT
 - bailout conditions (all-or-nothing)

Changes from v4:
 - parsing modified to reflect change in binding (capacity-dmips-mhz)
---
 arch/arm64/kernel/topology.c | 145 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 145 insertions(+)

-- 
2.7.0
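A worked example of the two normalization passes implemented below (all
numbers are illustrative, not from a real platform). Suppose DT gives
capacity-dmips-mhz = 1024 for a big CPU and 578 for a LITTLE one, and
cpufreq later reports policy->max of 1100000 kHz and 850000 kHz
respectively:

  DT pass:      capacity_scale = max(1024, 578) = 1024
                big:    (1024 << 10) / 1024 = 1024
                LITTLE: (578  << 10) / 1024 = 578

  cpufreq pass: raw(big)    = 1024 * 1100000 / 1000 = 1126400
                raw(LITTLE) = 578  *  850000 / 1000 = 491300
                capacity_scale = 1126400
                big:    (1126400 << 10) / 1126400 = 1024
                LITTLE: (491300  << 10) / 1126400 = 446

That is, the highest-capacity CPU always ends up at
SCHED_CAPACITY_SCALE (1024) and every other CPU is scaled relative to
it.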
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 694f6de..780c2c7 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -19,10 +19,152 @@
 #include <linux/nodemask.h>
 #include <linux/of.h>
 #include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/cpufreq.h>
 
 #include <asm/cputype.h>
 #include <asm/topology.h>
 
+static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
+
+unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
+{
+	return per_cpu(cpu_scale, cpu);
+}
+
+static void set_capacity_scale(unsigned int cpu, unsigned long capacity)
+{
+	per_cpu(cpu_scale, cpu) = capacity;
+}
+
+static u32 capacity_scale;
+static u32 *raw_capacity;
+static bool cap_parsing_failed;
+
+static void __init parse_cpu_capacity(struct device_node *cpu_node, int cpu)
+{
+	int ret;
+	u32 cpu_capacity;
+
+	if (cap_parsing_failed)
+		return;
+
+	ret = of_property_read_u32(cpu_node,
+				   "capacity-dmips-mhz",
+				   &cpu_capacity);
+	if (!ret) {
+		if (!raw_capacity) {
+			raw_capacity = kzalloc(sizeof(*raw_capacity) *
+					       num_possible_cpus(), GFP_KERNEL);
+			if (!raw_capacity) {
+				pr_err("cpu_capacity: failed to allocate memory"
+				       " for raw capacities\n");
+				cap_parsing_failed = true;
+				return;
+			}
+		}
+		capacity_scale = max(cpu_capacity, capacity_scale);
+		raw_capacity[cpu] = cpu_capacity;
+		pr_debug("cpu_capacity: %s cpu_capacity=%u (raw)\n",
+			 cpu_node->full_name, raw_capacity[cpu]);
+	} else {
+		pr_err("cpu_capacity: missing %s raw capacity "
+		       "(fallback to 1024 for all CPUs)\n",
+		       cpu_node->full_name);
+		cap_parsing_failed = true;
+		kfree(raw_capacity);
+	}
+}
+
+static void normalize_cpu_capacity(void)
+{
+	u64 capacity;
+	int cpu;
+
+	if (WARN_ON(!raw_capacity) || cap_parsing_failed)
+		return;
+
+	pr_debug("cpu_capacity: capacity_scale=%u\n", capacity_scale);
+	for_each_possible_cpu(cpu) {
+		pr_debug("cpu_capacity: cpu=%d raw_capacity=%u\n",
+			 cpu, raw_capacity[cpu]);
+		capacity = (raw_capacity[cpu] << SCHED_CAPACITY_SHIFT)
+			/ capacity_scale;
+		set_capacity_scale(cpu, capacity);
+		pr_debug("cpu_capacity: CPU%d cpu_capacity=%lu\n",
+			 cpu, arch_scale_cpu_capacity(NULL, cpu));
+	}
+}
+
+#ifdef CONFIG_CPU_FREQ
+static cpumask_var_t cpus_to_visit;
+static bool cap_parsing_done;
+
+static int
+init_cpu_capacity_callback(struct notifier_block *nb,
+			   unsigned long val,
+			   void *data)
+{
+	struct cpufreq_policy *policy = data;
+	int cpu;
+
+	if (cap_parsing_failed || cap_parsing_done)
+		return 0;
+
+	switch (val) {
+	case CPUFREQ_NOTIFY:
+		pr_debug("cpu_capacity: init cpu capacity for CPUs [%*pbl] "
+			 "(to_visit=%*pbl)\n",
+			 cpumask_pr_args(policy->related_cpus),
+			 cpumask_pr_args(cpus_to_visit));
+		cpumask_andnot(cpus_to_visit,
+			       cpus_to_visit,
+			       policy->related_cpus);
+		for_each_cpu(cpu, policy->related_cpus) {
+			raw_capacity[cpu] = arch_scale_cpu_capacity(NULL, cpu) *
+					    policy->max / 1000UL;
+			capacity_scale = max(raw_capacity[cpu], capacity_scale);
+		}
+		if (cpumask_empty(cpus_to_visit)) {
+			normalize_cpu_capacity();
+			kfree(raw_capacity);
+			pr_debug("cpu_capacity: parsing done\n");
+			cap_parsing_done = true;
+		}
+	}
+	return 0;
+}
+
+static struct notifier_block init_cpu_capacity_notifier = {
+	.notifier_call = init_cpu_capacity_callback,
+};
+
+static int __init register_cpufreq_notifier(void)
+{
+	if (cap_parsing_failed)
+		return -EINVAL;
+
+	if (!alloc_cpumask_var(&cpus_to_visit, GFP_KERNEL)) {
+		pr_err("cpu_capacity: failed to allocate memory for "
+		       "cpus_to_visit\n");
+		return -ENOMEM;
+	}
+	cpumask_copy(cpus_to_visit, cpu_possible_mask);
+
+	return cpufreq_register_notifier(&init_cpu_capacity_notifier,
+					 CPUFREQ_POLICY_NOTIFIER);
+}
+core_initcall(register_cpufreq_notifier);
+#else
+static int __init free_raw_capacity(void)
+{
+	kfree(raw_capacity);
+
+	return 0;
+}
+core_initcall(free_raw_capacity);
+#endif
+
 static int __init get_cpu_for_node(struct device_node *node)
 {
 	struct device_node *cpu_node;
@@ -34,6 +176,7 @@ static int __init get_cpu_for_node(struct device_node *node)
 
 	for_each_possible_cpu(cpu) {
 		if (of_get_cpu_node(cpu, NULL) == cpu_node) {
+			parse_cpu_capacity(cpu_node, cpu);
 			of_node_put(cpu_node);
 			return cpu;
 		}
@@ -185,6 +328,8 @@ static int __init parse_dt_topology(void)
 	if (ret != 0)
 		goto out_map;
 
+	normalize_cpu_capacity();
+
 	/*
 	 * Check that all cores are in the topology; the SMP code will
 	 * only mark cores described in the DT as possible.
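
The same arithmetic can be checked outside the kernel with a small
stand-alone C program (an illustrative sketch, not part of the patch;
the CPU count, DT values and frequencies are made up):

  #include <stdio.h>

  #define SCHED_CAPACITY_SHIFT	10
  /* Illustrative topology: two "big" CPUs and two "LITTLE" CPUs. */
  #define NR_CPUS		4

  static unsigned long long raw_capacity[NR_CPUS];
  static unsigned long long capacity_scale;
  static unsigned long cpu_scale[NR_CPUS];

  /* Mimic normalize_cpu_capacity() from the patch. */
  static void normalize_cpu_capacity(void)
  {
  	int cpu;

  	for (cpu = 0; cpu < NR_CPUS; cpu++)
  		cpu_scale[cpu] = (raw_capacity[cpu] << SCHED_CAPACITY_SHIFT)
  			/ capacity_scale;
  }

  int main(void)
  {
  	/* capacity-dmips-mhz values as they might appear in DT. */
  	unsigned long long dt_capacity[NR_CPUS] = { 1024, 1024, 578, 578 };
  	/* cpufreq policy->max per CPU, in kHz. */
  	unsigned long long max_freq[NR_CPUS] = { 1100000, 1100000,
  						 850000, 850000 };
  	int cpu;

  	/* Boot-time pass: normalize DT values against the largest one. */
  	capacity_scale = 0;
  	for (cpu = 0; cpu < NR_CPUS; cpu++) {
  		raw_capacity[cpu] = dt_capacity[cpu];
  		if (raw_capacity[cpu] > capacity_scale)
  			capacity_scale = raw_capacity[cpu];
  	}
  	normalize_cpu_capacity();

  	/* cpufreq pass: weight by max frequency, then re-normalize. */
  	capacity_scale = 0;
  	for (cpu = 0; cpu < NR_CPUS; cpu++) {
  		raw_capacity[cpu] = cpu_scale[cpu] * max_freq[cpu] / 1000ULL;
  		if (raw_capacity[cpu] > capacity_scale)
  			capacity_scale = raw_capacity[cpu];
  	}
  	normalize_cpu_capacity();

  	for (cpu = 0; cpu < NR_CPUS; cpu++)
  		printf("CPU%d cpu_capacity=%lu\n", cpu, cpu_scale[cpu]);

  	return 0;
  }

With these inputs it prints 1024 for the big CPUs and 446 for the
LITTLE ones, matching the worked example above.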