From patchwork Fri Apr 10 08:42:02 2020
X-Patchwork-Submitter: Lukasz Luba
X-Patchwork-Id: 189655
From: Lukasz Luba
Subject: [PATCH v6 02/10] PM / EM: introduce em_dev_register_perf_domain function
Date: Fri, 10 Apr 2020 09:42:02 +0100
Message-Id: <20200410084210.24932-3-lukasz.luba@arm.com>
In-Reply-To: <20200410084210.24932-1-lukasz.luba@arm.com>

Add a new function to the Energy Model framework which is going to support new devices. This function will help in the transition and make it smoother. For now it still checks whether the cpumask is a valid pointer; that check will be removed later, once the new structures and infrastructure are ready.
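For illustration, a minimal sketch of how a cpufreq driver's ->init() path could adopt the device-based registration added by this patch. The names my_cpufreq_init and my_em_cb are hypothetical; the callback is assumed to be a struct em_data_callback set up with EM_DATA_CB() elsewhere in the driver.

#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/energy_model.h>
#include <linux/pm_opp.h>

/* Illustrative only, not part of the patch. */
static int my_cpufreq_init(struct cpufreq_policy *policy)
{
	struct device *cpu_dev;
	int nr_opp;

	cpu_dev = get_cpu_device(cpumask_first(policy->cpus));
	if (!cpu_dev)
		return -ENODEV;

	nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
	if (nr_opp <= 0)
		return -ENODEV;

	/* Previously: em_register_perf_domain(policy->cpus, nr_opp, &my_em_cb); */

	/* The cpumask is still mandatory for CPU devices at this stage. */
	return em_dev_register_perf_domain(cpu_dev, nr_opp, &my_em_cb, policy->cpus);
}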
Signed-off-by: Lukasz Luba Acked-by: Daniel Lezcano --- include/linux/energy_model.h | 13 ++++++++++-- kernel/power/energy_model.c | 40 ++++++++++++++++++++++++++++++------ 2 files changed, 45 insertions(+), 8 deletions(-) diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index fe336a9eb5d4..7c048df98447 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -2,6 +2,7 @@ #ifndef _LINUX_ENERGY_MODEL_H #define _LINUX_ENERGY_MODEL_H #include +#include #include #include #include @@ -42,7 +43,7 @@ struct em_perf_domain { #define em_span_cpus(em) (to_cpumask((em)->cpus)) #ifdef CONFIG_ENERGY_MODEL -#define EM_CPU_MAX_POWER 0xFFFF +#define EM_MAX_POWER 0xFFFF struct em_data_callback { /** @@ -59,7 +60,7 @@ struct em_data_callback { * and frequency. * * The power is the one of a single CPU in the domain, expressed in - * milli-watts. It is expected to fit in the [0, EM_CPU_MAX_POWER] + * milli-watts. It is expected to fit in the [0, EM_MAX_POWER] * range. * * Return 0 on success. @@ -71,6 +72,8 @@ struct em_data_callback { struct em_perf_domain *em_cpu_get(int cpu); int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, struct em_data_callback *cb); +int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, + struct em_data_callback *cb, cpumask_t *span); /** * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain @@ -174,6 +177,12 @@ static inline int em_register_perf_domain(cpumask_t *span, { return -EINVAL; } +static inline +int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, + struct em_data_callback *cb, cpumask_t *span) +{ + return -EINVAL; +} static inline struct em_perf_domain *em_cpu_get(int cpu) { return NULL; diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c index 9892d548a0fa..875b163e54ab 100644 --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -125,7 +125,7 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states, * The power returned by active_state() is expected to be * positive, in milli-watts and to fit into 16 bits. */ - if (!power || power > EM_CPU_MAX_POWER) { + if (!power || power > EM_MAX_POWER) { pr_err("pd%d: invalid power: %lu\n", cpu, power); goto free_ps_table; } @@ -183,10 +183,13 @@ struct em_perf_domain *em_cpu_get(int cpu) EXPORT_SYMBOL_GPL(em_cpu_get); /** - * em_register_perf_domain() - Register the Energy Model of a performance domain - * @span : Mask of CPUs in the performance domain + * em_dev_register_perf_domain() - Register the Energy Model (EM) for a device + * @dev : Device for which the EM is to register * @nr_states : Number of performance states to register * @cb : Callback functions providing the data of the Energy Model + * @span : Pointer to cpumask_t, which in case of a CPU device is + * obligatory. It can be taken from i.e. 'policy->cpus'. For other + * type of devices this should be set to NULL. * * Create Energy Model tables for a performance domain using the callbacks * defined in cb. 
@@ -196,14 +199,14 @@ EXPORT_SYMBOL_GPL(em_cpu_get); * * Return 0 on success */ -int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, - struct em_data_callback *cb) +int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, + struct em_data_callback *cb, cpumask_t *span) { unsigned long cap, prev_cap = 0; struct em_perf_domain *pd; int cpu, ret = 0; - if (!span || !nr_states || !cb) + if (!dev || !span || !nr_states || !cb) return -EINVAL; /* @@ -255,4 +258,29 @@ int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, return ret; } +EXPORT_SYMBOL_GPL(em_dev_register_perf_domain); + +/** + * em_register_perf_domain() - Register the Energy Model of a performance domain + * @span : Mask of CPUs in the performance domain + * @nr_states : Number of capacity states to register + * @cb : Callback functions providing the data of the Energy Model + * + * Create Energy Model tables for a performance domain using the callbacks + * defined in cb. + * + * If multiple clients register the same performance domain, all but the first + * registration will be ignored. + * + * Return 0 on success + */ +int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, + struct em_data_callback *cb) +{ + struct device *cpu_dev; + + cpu_dev = get_cpu_device(cpumask_first(span)); + + return em_dev_register_perf_domain(cpu_dev, nr_states, cb, span); +} EXPORT_SYMBOL_GPL(em_register_perf_domain);
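To make the callback plumbing concrete, a hedged sketch of how a driver could provide the Energy Model data. The table values and the names my_power_table, my_active_power and my_em_cb are invented for the example; the active_power() prototype follows the device-based form used later in this series (compare _get_power() in the OPP patch), and the callback is expected to ceil *kHz to the next supported frequency and report its power in milli-watts.

#include <linux/device.h>
#include <linux/energy_model.h>
#include <linux/kernel.h>

/* Illustrative only: made-up frequency/power pairs in ascending order. */
static const struct {
	unsigned long khz;
	unsigned long mw;
} my_power_table[] = {
	{ 200000, 150 },
	{ 400000, 400 },
	{ 600000, 900 },
};

static int my_active_power(unsigned long *mW, unsigned long *kHz,
			   struct device *dev)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(my_power_table); i++) {
		if (my_power_table[i].khz >= *kHz) {
			*kHz = my_power_table[i].khz;
			*mW = my_power_table[i].mw;
			return 0;
		}
	}

	return -EINVAL;
}

static struct em_data_callback my_em_cb = EM_DATA_CB(my_active_power);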
From patchwork Fri Apr 10 08:42:04 2020
X-Patchwork-Submitter: Lukasz Luba
X-Patchwork-Id: 189654
From: Lukasz Luba
Subject: [PATCH v6 04/10] PM / EM: add support for other devices than CPUs in Energy Model
Date: Fri, 10 Apr 2020 09:42:04 +0100
Message-Id: <20200410084210.24932-5-lukasz.luba@arm.com>
In-Reply-To: <20200410084210.24932-1-lukasz.luba@arm.com>

Add support for devices other than CPUs. The registration function no longer requires a valid cpumask pointer and is ready to handle new devices. Some of the internal structures have been reorganized in order to keep a consistent view (e.g. the per-CPU 'pd' pointers have been removed). To track usage of the Energy Model structures, they are protected with a kref.

Signed-off-by: Lukasz Luba Reported-by: kbuild test robot --- include/linux/energy_model.h | 32 ++- kernel/power/energy_model.c | 433 +++++++++++++++++++++++++++++------ 2 files changed, 389 insertions(+), 76 deletions(-) diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index 7076cb22b247..f6b8077cc875 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -12,8 +12,10 @@ /** * em_perf_state - Performance state of a performance domain - * @frequency: The CPU frequency in KHz, for consistency with CPUFreq - * @power: The power consumed by 1 CPU at this level, in milli-watts + * @frequency: The frequency in KHz, for consistency with CPUFreq + * @power: The power consumed at this level, in milli-watts (by 1 CPU or + by a registered device). It can be a total power: static and + dynamic. * @cost: The cost coefficient associated with this level, used during * energy calculation. Equal to: power * max_frequency / frequency */ @@ -27,12 +29,15 @@ struct em_perf_state { * em_perf_domain - Performance domain * @table: List of performance states, in ascending order * @nr_perf_states: Number of performance states - * @cpus: Cpumask covering the CPUs of the domain + * @cpus: Cpumask covering the CPUs of the domain. It's here + * for performance reasons to avoid potential cache + * misses during energy calculations in the scheduler * - * A "performance domain" represents a group of CPUs whose performance is - * scaled together. All CPUs of a performance domain must have the same - * micro-architecture. Performance domains often have a 1-to-1 mapping with - * CPUFreq policies. + * In case of CPU device, a "performance domain" represents a group of CPUs + * whose performance is scaled together. All CPUs of a performance domain + * must have the same micro-architecture. Performance domains often have + * a 1-to-1 mapping with CPUFreq policies. In case of other devices the 'cpus' + * field is unused.
*/ struct em_perf_domain { struct em_perf_state *table; @@ -71,10 +76,13 @@ struct em_data_callback { #define EM_DATA_CB(_active_power_cb) { .active_power = &_active_power_cb } struct em_perf_domain *em_cpu_get(int cpu); +struct em_perf_domain *em_pd_get(struct device *dev); +void em_pd_put(struct device *dev); int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, struct em_data_callback *cb); int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, struct em_data_callback *cb, cpumask_t *span); +void em_dev_unregister_perf_domain(struct device *dev); /** * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain @@ -184,10 +192,20 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, { return -EINVAL; } +static inline void em_dev_unregister_perf_domain(struct device *dev) +{ +} static inline struct em_perf_domain *em_cpu_get(int cpu) { return NULL; } +static inline struct em_perf_domain *em_pd_get(struct device *dev) +{ + return NULL; +} +static inline void em_pd_put(struct device *dev) +{ +} static inline unsigned long em_pd_energy(struct em_perf_domain *pd, unsigned long max_util, unsigned long sum_util) { diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c index 5b8a1566526a..5967a21b56fc 100644 --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -2,8 +2,9 @@ /* * Energy Model of CPUs * - * Copyright (c) 2018, Arm ltd. + * Copyright (c) 2018-2020, Arm ltd. * Written by: Quentin Perret, Arm ltd. + * Improvements provided by: Lukasz Luba, Arm ltd. */ #define pr_fmt(fmt) "energy_model: " fmt @@ -12,17 +13,46 @@ #include #include #include +#include +#include #include #include -/* Mapping of each CPU to the performance domain to which it belongs. */ -static DEFINE_PER_CPU(struct em_perf_domain *, em_data); +/** + * em_device - Performance domain wrapper for device + * @em_pd: Performance domain which carries the energy model + * @dev: Device for which this performance domain is set + * @id: Id of this performance domain + * @em_dev_list: List entry to connect all the devices perf. domain + * @debug_dir: Optional debug directory + * + * Internal structure. It contains a "performance domain" and the corresponding + * device. + */ +struct em_device { + struct em_perf_domain *em_pd; + struct device *dev; + int id; + struct kref kref; + struct list_head em_dev_list; +#ifdef CONFIG_DEBUG_FS + struct dentry *debug_dir; +#endif +}; +static DEFINE_IDA(em_dev_ida); /* * Mutex serializing the registrations of performance domains and letting * callbacks defined by drivers sleep. 
*/ static DEFINE_MUTEX(em_pd_mutex); +/* List of devices' energy model, protected by 'em_pd_mutex' */ +static LIST_HEAD(em_pd_dev_list); + +static bool _is_cpu_device(struct device *dev) +{ + return (dev->bus == &cpu_subsys); +} #ifdef CONFIG_DEBUG_FS static struct dentry *rootdir; @@ -43,28 +73,74 @@ static void em_debug_create_ps(struct em_perf_state *ps, struct dentry *pd) static int em_debug_cpus_show(struct seq_file *s, void *unused) { - seq_printf(s, "%*pbl\n", cpumask_pr_args(to_cpumask(s->private))); + struct em_perf_domain *em_pd = s->private; + struct cpumask *mask = em_span_cpus(em_pd); + + seq_printf(s, "%*pbl\n", cpumask_pr_args(mask)); return 0; } DEFINE_SHOW_ATTRIBUTE(em_debug_cpus); -static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) +static int em_debug_dev_show(struct seq_file *s, void *unused) { + struct em_device *em_dev = s->private; + const char *domain_name; + + domain_name = dev_name(em_dev->dev); + if (!domain_name) + domain_name = "unknown"; + + seq_printf(s, "%s\n", domain_name); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(em_debug_dev); + +static int em_debug_type_show(struct seq_file *s, void *unused) +{ + struct em_device *em_dev = s->private; + + if (_is_cpu_device(em_dev->dev)) + seq_puts(s, "EM cpufreq device\n"); + else + seq_puts(s, "EM devfreq device\n"); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(em_debug_type); + +static void em_debug_create_pd(struct em_device *em_dev) +{ + struct em_perf_domain *em_pd = em_dev->em_pd; struct dentry *d; char name[8]; int i; - snprintf(name, sizeof(name), "pd%d", cpu); + snprintf(name, sizeof(name), "pd%d", em_dev->id); /* Create the directory of the performance domain */ d = debugfs_create_dir(name, rootdir); - debugfs_create_file("cpus", 0444, d, pd->cpus, &em_debug_cpus_fops); + if (_is_cpu_device(em_dev->dev)) + debugfs_create_file("cpus", 0444, d, em_pd, + &em_debug_cpus_fops); + else + debugfs_create_file("dev_name", 0444, d, em_dev, + &em_debug_dev_fops); + + debugfs_create_file("type", 0444, d, em_dev, &em_debug_type_fops); /* Create a sub-directory for each performance state */ - for (i = 0; i < pd->nr_perf_states; i++) - em_debug_create_ps(&pd->table[i], d); + for (i = 0; i < em_pd->nr_perf_states; i++) + em_debug_create_ps(&em_pd->table[i], d); + + em_dev->debug_dir = d; +} + +static void em_debug_remove_pd(struct em_device *em_dev) +{ + debugfs_remove_recursive(em_dev->debug_dir); } static int __init em_debug_init(void) @@ -76,40 +152,34 @@ static int __init em_debug_init(void) } core_initcall(em_debug_init); #else /* CONFIG_DEBUG_FS */ -static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) {} +static void em_debug_create_pd(struct em_device *em_dev) {} +static void em_debug_remove_pd(struct em_device *em_dev) {} #endif -static struct em_perf_domain * -em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, - cpumask_t *span) + +static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd, + int nr_states, struct em_data_callback *cb) { unsigned long opp_eff, prev_opp_eff = ULONG_MAX; unsigned long power, freq, prev_freq = 0; - int i, ret, cpu = cpumask_first(span); struct em_perf_state *table; - struct em_perf_domain *pd; + int i, ret; u64 fmax; - if (!cb->active_power) - return NULL; - - pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL); - if (!pd) - return NULL; - table = kcalloc(nr_states, sizeof(*table), GFP_KERNEL); if (!table) - goto free_pd; + return -ENOMEM; /* Build the list of performance states for this performance domain */ for (i 
= 0, freq = 0; i < nr_states; i++, freq++) { /* * active_power() is a driver callback which ceils 'freq' to - * lowest performance state of 'cpu' above 'freq' and updates + * lowest performance state of 'dev' above 'freq' and updates * 'power' and 'freq' accordingly. */ ret = cb->active_power(&power, &freq, dev); if (ret) { - pr_err("pd%d: invalid perf. state: %d\n", cpu, ret); + dev_err(dev, "EM: invalid perf. state: %d\n", + ret); goto free_ps_table; } @@ -118,7 +188,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, * higher performance states. */ if (freq <= prev_freq) { - pr_err("pd%d: non-increasing freq: %lu\n", cpu, freq); + dev_err(dev, "EM: non-increasing freq: %lu\n", + freq); goto free_ps_table; } @@ -127,7 +198,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, * positive, in milli-watts and to fit into 16 bits. */ if (!power || power > EM_MAX_POWER) { - pr_err("pd%d: invalid power: %lu\n", cpu, power); + dev_err(dev, "EM: invalid power: %lu\n", + power); goto free_ps_table; } @@ -142,8 +214,8 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, */ opp_eff = freq / power; if (opp_eff >= prev_opp_eff) - pr_warn("pd%d: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n", - cpu, i, i - 1); + dev_dbg(dev, "EM: hertz/watts ratio non-monotonically decreasing: em_perf_state %d >= em_perf_state%d\n", + i, i - 1); prev_opp_eff = opp_eff; } @@ -156,30 +228,143 @@ em_create_pd(struct device *dev, int nr_states, struct em_data_callback *cb, pd->table = table; pd->nr_perf_states = nr_states; - cpumask_copy(to_cpumask(pd->cpus), span); - - em_debug_create_pd(pd, cpu); - return pd; + return 0; free_ps_table: kfree(table); -free_pd: - kfree(pd); + return -EINVAL; +} + +static struct em_perf_domain *em_create_pd(struct device *dev, int nr_states, + struct em_data_callback *cb, + cpumask_t *cpus) +{ + struct em_perf_domain *pd; + int ret; + + if (_is_cpu_device(dev)) { + pd = kzalloc(sizeof(*pd) + cpumask_size(), GFP_KERNEL); + if (!pd) + return NULL; + + cpumask_copy(em_span_cpus(pd), cpus); + } else { + pd = kzalloc(sizeof(*pd), GFP_KERNEL); + if (!pd) + return NULL; + } + + ret = em_create_perf_table(dev, pd, nr_states, cb); + if (ret) { + kfree(pd); + return NULL; + } + + return pd; +} + +static bool em_cpus_pd_exist(cpumask_t *span) +{ + struct em_device *em_dev; + + /* The em_pd_mutex should be held already. */ + if (list_empty(&em_pd_dev_list)) + return 0; + + list_for_each_entry(em_dev, &em_pd_dev_list, em_dev_list) { + if (!_is_cpu_device(em_dev->dev)) + continue; + + if (cpumask_intersects(span, em_span_cpus(em_dev->em_pd))) + return 1; + } + + return 0; +} + +static struct em_device *_em_dev_find_existing(struct device *dev) +{ + struct em_device *em_dev; + + /* The em_pd_mutex should be held already. */ + if (list_empty(&em_pd_dev_list)) + return NULL; + + list_for_each_entry(em_dev, &em_pd_dev_list, em_dev_list) + if (em_dev->dev == dev) + return em_dev; return NULL; } +/** + * em_pd_get() - Return the performance domain for a device + * @dev : Device to find the performance domain for + * + * Returns the performance domain to which 'dev' belongs, or NULL if it doesn't + * exist. 
+ */ +struct em_perf_domain *em_pd_get(struct device *dev) +{ + struct em_perf_domain *pd = NULL; + struct em_device *em_dev; + + if (IS_ERR_OR_NULL(dev)) + return NULL; + + if (_is_cpu_device(dev)) + return em_cpu_get(dev->id); + + mutex_lock(&em_pd_mutex); + + em_dev = _em_dev_find_existing(dev); + if (em_dev) { + kref_get(&em_dev->kref); + pd = em_dev->em_pd; + } + + mutex_unlock(&em_pd_mutex); + return pd; +} +EXPORT_SYMBOL_GPL(em_pd_get); + /** * em_cpu_get() - Return the performance domain for a CPU * @cpu : CPU to find the performance domain for * - * Return: the performance domain to which 'cpu' belongs, or NULL if it doesn't + * This function is especially useful for subsystems which operate on CPU id, + * like topology setup code does. It simplifies code in those subsystems. + * + * Returns the performance domain to which 'cpu' belongs, or NULL if it doesn't * exist. */ struct em_perf_domain *em_cpu_get(int cpu) { - return READ_ONCE(per_cpu(em_data, cpu)); + struct em_device *em_dev; + + mutex_lock(&em_pd_mutex); + + if (list_empty(&em_pd_dev_list)) + goto unlock; + + list_for_each_entry(em_dev, &em_pd_dev_list, em_dev_list) { + if (!_is_cpu_device(em_dev->dev)) + continue; + + if (cpumask_test_cpu(cpu, em_span_cpus(em_dev->em_pd))) { + /* + * Don't increment kref here, we won't free the EM + * in CPU hotlpug. + */ + mutex_unlock(&em_pd_mutex); + return em_dev->em_pd; + } + } + +unlock: + mutex_unlock(&em_pd_mutex); + return NULL; } EXPORT_SYMBOL_GPL(em_cpu_get); @@ -188,7 +373,7 @@ EXPORT_SYMBOL_GPL(em_cpu_get); * @dev : Device for which the EM is to register * @nr_states : Number of performance states to register * @cb : Callback functions providing the data of the Energy Model - * @span : Pointer to cpumask_t, which in case of a CPU device is + * @cpus : Pointer to cpumask_t, which in case of a CPU device is * obligatory. It can be taken from i.e. 'policy->cpus'. For other * type of devices this should be set to NULL. * @@ -201,13 +386,14 @@ EXPORT_SYMBOL_GPL(em_cpu_get); * Return 0 on success */ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, - struct em_data_callback *cb, cpumask_t *span) + struct em_data_callback *cb, cpumask_t *cpus) { unsigned long cap, prev_cap = 0; struct em_perf_domain *pd; + struct em_device *em_dev; int cpu, ret = 0; - if (!dev || !span || !nr_states || !cb) + if (!dev || !nr_states || !cb) return -EINVAL; /* @@ -216,51 +402,104 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, */ mutex_lock(&em_pd_mutex); - for_each_cpu(cpu, span) { - /* Make sure we don't register again an existing domain. */ - if (READ_ONCE(per_cpu(em_data, cpu))) { - ret = -EEXIST; - goto unlock; + em_dev = _em_dev_find_existing(dev); + if (em_dev) { + mutex_unlock(&em_pd_mutex); + dev_dbg(dev, "EM: found exisiting pd%d\n", em_dev->id); + return -EEXIST; + } + + if (_is_cpu_device(dev)) { + if (!cpus) { + mutex_unlock(&em_pd_mutex); + dev_err(dev, "EM: invalid CPU mask\n"); + return -EINVAL; } - /* - * All CPUs of a domain must have the same micro-architecture - * since they all share the same table. 
- */ - cap = arch_scale_cpu_capacity(cpu); - if (prev_cap && prev_cap != cap) { - pr_err("CPUs of %*pbl must have the same capacity\n", - cpumask_pr_args(span)); - ret = -EINVAL; - goto unlock; + /* Make sure we don't register domain for existing CPUs */ + if (em_cpus_pd_exist(cpus)) { + mutex_unlock(&em_pd_mutex); + return -EEXIST; + } + + for_each_cpu(cpu, cpus) { + /* + * All CPUs of a domain must have the same + * micro-architecture since they all share the same + * table. + */ + cap = arch_scale_cpu_capacity(cpu); + if (prev_cap && prev_cap != cap) { + dev_err(dev, "EM: CPUs of %*pbl must have the same capacity\n", + cpumask_pr_args(cpus)); + + mutex_unlock(&em_pd_mutex); + return -EINVAL; + } + prev_cap = cap; } - prev_cap = cap; } - /* Create the performance domain and add it to the Energy Model. */ - pd = em_create_pd(dev, nr_states, cb, span); + pd = em_create_pd(dev, nr_states, cb, cpus); if (!pd) { - ret = -EINVAL; - goto unlock; + mutex_unlock(&em_pd_mutex); + return -EINVAL; } - for_each_cpu(cpu, span) { - /* - * The per-cpu array can be read concurrently from em_cpu_get(). - * The barrier enforces the ordering needed to make sure readers - * can only access well formed em_perf_domain structs. - */ - smp_store_release(per_cpu_ptr(&em_data, cpu), pd); + em_dev = kzalloc(sizeof(struct em_device), GFP_KERNEL); + if (!em_dev) { + ret = -ENOMEM; + goto free_pd; } - pr_debug("Created perf domain %*pbl\n", cpumask_pr_args(span)); -unlock: + em_dev->id = ida_simple_get(&em_dev_ida, 0, 0, GFP_KERNEL); + if (em_dev->id < 0) { + ret = em_dev->id; + goto free_em_dev; + } + + em_dev->em_pd = pd; + em_dev->dev = dev; + + kref_init(&em_dev->kref); + list_add(&em_dev->em_dev_list, &em_pd_dev_list); + + em_debug_create_pd(em_dev); + dev_dbg(dev, "EM: created perf domain pd%d\n", em_dev->id); + mutex_unlock(&em_pd_mutex); + return 0; +free_em_dev: + kfree(em_dev); +free_pd: + kfree(pd->table); + kfree(pd); + + mutex_unlock(&em_pd_mutex); return ret; } EXPORT_SYMBOL_GPL(em_dev_register_perf_domain); +static void _em_release(struct kref *ref) +{ + struct em_device *em_dev = container_of(ref, struct em_device, kref); + struct em_perf_domain *pd = em_dev->em_pd; + + dev_dbg(em_dev->dev, "EM: freeing perf domain pd%d\n", em_dev->id); + + /* The em_pd_mutex is already locked. */ + list_del(&em_dev->em_dev_list); + + em_debug_remove_pd(em_dev); + + ida_simple_remove(&em_dev_ida, em_dev->id); + + kfree(pd->table); + kfree(pd); + kfree(em_dev); +} + /** * em_register_perf_domain() - Register the Energy Model of a performance domain * @span : Mask of CPUs in the performance domain @@ -285,3 +524,59 @@ int em_register_perf_domain(cpumask_t *span, unsigned int nr_states, return em_dev_register_perf_domain(cpu_dev, nr_states, cb, span); } EXPORT_SYMBOL_GPL(em_register_perf_domain); + +/** + * em_dev_unregister_perf_domain() - Unregister Energy Model (EM) for a device + * @dev : Device for which the EM is registered + * + * Try to unregister the EM for the specified device (it checks current + * reference counter). The EM for CPUs will not be freed. 
+ */ +void em_dev_unregister_perf_domain(struct device *dev) +{ + struct em_device *em_dev, *tmp; + + if (IS_ERR_OR_NULL(dev)) + return; + + /* We don't support freeing CPU structures in hotplug */ + if (_is_cpu_device(dev)) { + dev_dbg_once(dev, "EM: the structures are not going to be removed\n"); + return; + } + + mutex_lock(&em_pd_mutex); + + if (list_empty(&em_pd_dev_list)) { + mutex_unlock(&em_pd_mutex); + return; + } + + list_for_each_entry_safe(em_dev, tmp, &em_pd_dev_list, em_dev_list) { + if (em_dev->dev == dev) { + kref_put(&em_dev->kref, _em_release); + break; + } + } + + mutex_unlock(&em_pd_mutex); +} +EXPORT_SYMBOL_GPL(em_dev_unregister_perf_domain); + +/** + * em_pd_put() - Decrement reference of the Energy Model (EM) for the device + * @dev : Device for which the EM is registered + * + * Decrement reference of the EM for the specified device. If it was the last + * reference then it will trigger cleaning. The EM for CPUs will not be freed, + * though. It goes in pair with em_pd_get() (which incremented a kref counter). + * This should be called when the code which called em_pd_get() has finished + * its work with EM structs and won't use them in future. Typical places where + * this should be called are e.g. driver unregister function or framework clean + * up functions. + */ +void em_pd_put(struct device *dev) +{ + em_dev_unregister_perf_domain(dev); +} +EXPORT_SYMBOL_GPL(em_pd_put);
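As a usage sketch for the interfaces introduced above (registration without a cpumask, reference counting with em_pd_get()/em_pd_put(), and cleanup with em_dev_unregister_perf_domain()), a hedged example follows. The driver hooks, my_em_cb and NR_MY_OPPS are hypothetical.

#include <linux/device.h>
#include <linux/energy_model.h>

/* Illustrative only, not part of the patch. */
static int my_driver_probe(struct device *dev)
{
	/* A non-CPU device registers without a cpumask. */
	return em_dev_register_perf_domain(dev, NR_MY_OPPS, &my_em_cb, NULL);
}

static void my_driver_remove(struct device *dev)
{
	em_dev_unregister_perf_domain(dev);
}

/* A consumer (e.g. a thermal governor) takes and drops a reference. */
static void my_consumer_walk_states(struct device *dev)
{
	struct em_perf_domain *pd = em_pd_get(dev);
	int i;

	if (!pd)
		return;

	for (i = 0; i < pd->nr_perf_states; i++)
		dev_info(dev, "state %d: %lu kHz, %lu mW\n", i,
			 pd->table[i].frequency, pd->table[i].power);

	em_pd_put(dev);
}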
From patchwork Fri Apr 10 08:42:06 2020
X-Patchwork-Submitter: Lukasz Luba
X-Patchwork-Id: 189653
From: Lukasz Luba
Subject: [PATCH v6 06/10] PM / EM: change name of em_pd_energy to em_cpu_energy
Date: Fri, 10 Apr 2020 09:42:06 +0100
Message-Id: <20200410084210.24932-7-lukasz.luba@arm.com>
In-Reply-To: <20200410084210.24932-1-lukasz.luba@arm.com>

The Energy Model framework now supports other devices than CPUs. Refactor some of the functions in order to prevent wrong usage. The old function em_pd_energy() has too generic a name. It must not be used without a proper cpumask pointer, which is only possible for CPU devices. Thus, rename it and add a proper description to warn of potential wrong usage for other devices.

Signed-off-by: Lukasz Luba --- include/linux/energy_model.h | 11 ++++++++--- kernel/sched/fair.c | 2 +- 2 files changed, 9 insertions(+), 4 deletions(-) diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index 73c43e4c8a3b..1790199aa24f 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -83,15 +83,20 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states, void em_dev_unregister_perf_domain(struct device *dev); /** - * em_pd_energy() - Estimates the energy consumed by the CPUs of a perf. domain + * em_cpu_energy() - Estimates the energy consumed by the CPUs of a + performance domain * @pd : performance domain for which energy has to be estimated * @max_util : highest utilization among CPUs of the domain * @sum_util : sum of the utilization of all CPUs in the domain * + * This function must be used only for CPU devices. There is no validation, + * i.e. if the EM is a CPU type and has cpumask allocated. It is called from + * the scheduler code quite frequently and that is why there are no checks. + * * Return: the sum of the energy consumed by the CPUs of the domain assuming * a capacity state satisfying the max utilization of the domain.
*/ -static inline unsigned long em_pd_energy(struct em_perf_domain *pd, +static inline unsigned long em_cpu_energy(struct em_perf_domain *pd, unsigned long max_util, unsigned long sum_util) { unsigned long freq, scale_cpu; @@ -199,7 +204,7 @@ static inline struct em_perf_domain *em_pd_get(struct device *dev) static inline void em_pd_put(struct device *dev) { } -static inline unsigned long em_pd_energy(struct em_perf_domain *pd, +static inline unsigned long em_cpu_energy(struct em_perf_domain *pd, unsigned long max_util, unsigned long sum_util) { return 0; diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 02f323b85b6d..30f6392f628c 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -6464,7 +6464,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd) max_util = max(max_util, cpu_util); } - return em_pd_energy(pd->em_pd, max_util, sum_util); + return em_cpu_energy(pd->em_pd, max_util, sum_util); } /*
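The scheduler-side estimate above relies on the pre-computed 'cost' coefficient documented for em_perf_state (cost = power * max_frequency / frequency). A small worked example with invented OPPs and max_frequency = 1500000 kHz: 500000 kHz at 120 mW gives cost 360, 1000000 kHz at 350 mW gives cost 525, and 1500000 kHz at 700 mW gives cost 700. The framework computes this internally when the table is created; the sketch below merely restates the documented formula.

#include <linux/energy_model.h>
#include <linux/math64.h>

/* Illustrative only: restating cost = power * max_frequency / frequency. */
static void my_fill_costs(struct em_perf_state *table, int nr_states)
{
	u64 fmax = table[nr_states - 1].frequency;
	int i;

	for (i = 0; i < nr_states; i++)
		table[i].cost = div64_u64(fmax * table[i].power,
					  table[i].frequency);
}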
From patchwork Fri Apr 10 08:42:08 2020
X-Patchwork-Submitter: Lukasz Luba
X-Patchwork-Id: 189652
From: Lukasz Luba
Subject: [PATCH v6 08/10] OPP: refactor dev_pm_opp_of_register_em() and update related drivers
Date: Fri, 10 Apr 2020 09:42:08 +0100
Message-Id: <20200410084210.24932-9-lukasz.luba@arm.com>
In-Reply-To: <20200410084210.24932-1-lukasz.luba@arm.com>

The Energy Model framework now supports devices other than CPUs. Drop the CPU-specific interface based on a cpumask and take a struct device instead. Also add a return value, which callers might use. This new interface provides an easy way to create a simple Energy Model, which can then be used by e.g. the thermal subsystem.

Signed-off-by: Lukasz Luba Acked-by: Daniel Lezcano --- drivers/cpufreq/cpufreq-dt.c | 2 +- drivers/cpufreq/imx6q-cpufreq.c | 2 +- drivers/cpufreq/mediatek-cpufreq.c | 2 +- drivers/cpufreq/omap-cpufreq.c | 2 +- drivers/cpufreq/qcom-cpufreq-hw.c | 2 +- drivers/cpufreq/scpi-cpufreq.c | 2 +- drivers/cpufreq/vexpress-spc-cpufreq.c | 2 +- drivers/opp/of.c | 71 ++++++++++++++++---------- include/linux/pm_opp.h | 15 +++++- 9 files changed, 65 insertions(+), 35 deletions(-) diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c index 26fe8dfb9ce6..f9f03fd49b83 100644 --- a/drivers/cpufreq/cpufreq-dt.c +++ b/drivers/cpufreq/cpufreq-dt.c @@ -275,7 +275,7 @@ static int cpufreq_init(struct cpufreq_policy *policy) policy->cpuinfo.transition_latency = transition_latency; policy->dvfs_possible_from_any_cpu = true; - dev_pm_opp_of_register_em(policy->cpus); + dev_pm_opp_of_register_em(cpu_dev, policy->cpus); return 0; diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c index fdb2ffffbd15..ef7b34c1fd2b 100644 --- a/drivers/cpufreq/imx6q-cpufreq.c +++ b/drivers/cpufreq/imx6q-cpufreq.c @@ -193,7 +193,7 @@ static int imx6q_cpufreq_init(struct cpufreq_policy *policy) policy->clk = clks[ARM].clk; cpufreq_generic_init(policy, freq_table, transition_latency); policy->suspend_freq = max_freq; - dev_pm_opp_of_register_em(policy->cpus); + dev_pm_opp_of_register_em(cpu_dev, policy->cpus); return 0; } diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c index 0c98dd08273d..7d1212c9b7c8 100644 --- a/drivers/cpufreq/mediatek-cpufreq.c +++ b/drivers/cpufreq/mediatek-cpufreq.c @@ -448,7 +448,7 @@ static int mtk_cpufreq_init(struct cpufreq_policy *policy) policy->driver_data = info; policy->clk = info->cpu_clk; - dev_pm_opp_of_register_em(policy->cpus); + dev_pm_opp_of_register_em(info->cpu_dev, policy->cpus); return 0; } diff --git a/drivers/cpufreq/omap-cpufreq.c b/drivers/cpufreq/omap-cpufreq.c index 8d14b42a8c6f..3694bb030df3 100644 --- a/drivers/cpufreq/omap-cpufreq.c +++ b/drivers/cpufreq/omap-cpufreq.c @@ -131,7 +131,7 @@ static int omap_cpu_init(struct cpufreq_policy *policy) /* FIXME: what's the actual transition time?
*/ cpufreq_generic_init(policy, freq_table, 300 * 1000); - dev_pm_opp_of_register_em(policy->cpus); + dev_pm_opp_of_register_em(mpu_dev, policy->cpus); return 0; } diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c index fc92a8842e25..0a04b6f03b9a 100644 --- a/drivers/cpufreq/qcom-cpufreq-hw.c +++ b/drivers/cpufreq/qcom-cpufreq-hw.c @@ -238,7 +238,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy) goto error; } - dev_pm_opp_of_register_em(policy->cpus); + dev_pm_opp_of_register_em(cpu_dev, policy->cpus); policy->fast_switch_possible = true; diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c index 20d1f85d5f5a..b0f5388b8854 100644 --- a/drivers/cpufreq/scpi-cpufreq.c +++ b/drivers/cpufreq/scpi-cpufreq.c @@ -167,7 +167,7 @@ static int scpi_cpufreq_init(struct cpufreq_policy *policy) policy->fast_switch_possible = false; - dev_pm_opp_of_register_em(policy->cpus); + dev_pm_opp_of_register_em(cpu_dev, policy->cpus); return 0; diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c index 83c85d3d67e3..4e8b1dee7c9a 100644 --- a/drivers/cpufreq/vexpress-spc-cpufreq.c +++ b/drivers/cpufreq/vexpress-spc-cpufreq.c @@ -450,7 +450,7 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy) policy->freq_table = freq_table[cur_cluster]; policy->cpuinfo.transition_latency = 1000000; /* 1 ms */ - dev_pm_opp_of_register_em(policy->cpus); + dev_pm_opp_of_register_em(cpu_dev, policy->cpus); if (is_bL_switching_enabled()) per_cpu(cpu_last_req_freq, policy->cpu) = diff --git a/drivers/opp/of.c b/drivers/opp/of.c index 5b75829a915d..4500ce46d476 100644 --- a/drivers/opp/of.c +++ b/drivers/opp/of.c @@ -1036,18 +1036,18 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_of_node); /* * Callback function provided to the Energy Model framework upon registration. - * This computes the power estimated by @CPU at @kHz if it is the frequency + * This computes the power estimated by @dev at @kHz if it is the frequency * of an existing OPP, or at the frequency of the first OPP above @kHz otherwise * (see dev_pm_opp_find_freq_ceil()). This function updates @kHz to the ceiled * frequency and @mW to the associated power. The power is estimated as - * P = C * V^2 * f with C being the CPU's capacitance and V and f respectively - * the voltage and frequency of the OPP. + * P = C * V^2 * f with C being the device's capacitance and V and f + * respectively the voltage and frequency of the OPP. * - * Returns -ENODEV if the CPU device cannot be found, -EINVAL if the power - * calculation failed because of missing parameters, 0 otherwise. + * Returns -EINVAL if the power calculation failed because of missing + * parameters, 0 otherwise. 
*/ -static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz, - struct device *cpu_dev) +static int __maybe_unused _get_power(unsigned long *mW, unsigned long *kHz, + struct device *dev) { struct dev_pm_opp *opp; struct device_node *np; @@ -1056,7 +1056,7 @@ static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz, u64 tmp; int ret; - np = of_node_get(cpu_dev->of_node); + np = of_node_get(dev->of_node); if (!np) return -EINVAL; @@ -1066,7 +1066,7 @@ static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz, return -EINVAL; Hz = *kHz * 1000; - opp = dev_pm_opp_find_freq_ceil(cpu_dev, &Hz); + opp = dev_pm_opp_find_freq_ceil(dev, &Hz); if (IS_ERR(opp)) return -EINVAL; @@ -1086,30 +1086,38 @@ static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz, /** * dev_pm_opp_of_register_em() - Attempt to register an Energy Model - * @cpus : CPUs for which an Energy Model has to be registered + * @dev : Device for which an Energy Model has to be registered + * @cpus : CPUs for which an Energy Model has to be registered. For + * other type of devices it should be set to NULL. * * This checks whether the "dynamic-power-coefficient" devicetree property has * been specified, and tries to register an Energy Model with it if it has. + * Having this property means the voltages are known for OPPs and the EM + * might be calculated. */ -void dev_pm_opp_of_register_em(struct cpumask *cpus) +int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus) { - struct em_data_callback em_cb = EM_DATA_CB(_get_cpu_power); - int ret, nr_opp, cpu = cpumask_first(cpus); - struct device *cpu_dev; + struct em_data_callback em_cb = EM_DATA_CB(_get_power); struct device_node *np; + int ret, nr_opp; u32 cap; - cpu_dev = get_cpu_device(cpu); - if (!cpu_dev) - return; + if (IS_ERR_OR_NULL(dev)) { + ret = -EINVAL; + goto failed; + } - nr_opp = dev_pm_opp_get_opp_count(cpu_dev); - if (nr_opp <= 0) - return; + nr_opp = dev_pm_opp_get_opp_count(dev); + if (nr_opp <= 0) { + ret = -EINVAL; + goto failed; + } - np = of_node_get(cpu_dev->of_node); - if (!np) - return; + np = of_node_get(dev->of_node); + if (!np) { + ret = -EINVAL; + goto failed; + } /* * Register an EM only if the 'dynamic-power-coefficient' property is @@ -1120,9 +1128,20 @@ void dev_pm_opp_of_register_em(struct cpumask *cpus) */ ret = of_property_read_u32(np, "dynamic-power-coefficient", &cap); of_node_put(np); - if (ret || !cap) - return; + if (ret || !cap) { + dev_dbg(dev, "Couldn't find proper 'dynamic-power-coefficient' in DT\n"); + ret = -EINVAL; + goto failed; + } - em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, cpus); + ret = em_dev_register_perf_domain(dev, nr_opp, &em_cb, cpus); + if (ret) + goto failed; + + return 0; + +failed: + dev_dbg(dev, "Couldn't register Energy Model %d\n", ret); + return ret; } EXPORT_SYMBOL_GPL(dev_pm_opp_of_register_em); diff --git a/include/linux/pm_opp.h b/include/linux/pm_opp.h index 747861816f4f..8df8b82054e5 100644 --- a/include/linux/pm_opp.h +++ b/include/linux/pm_opp.h @@ -11,6 +11,7 @@ #ifndef __LINUX_OPP_H__ #define __LINUX_OPP_H__ +#include #include #include @@ -360,7 +361,11 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpuma struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev); struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp); int of_get_required_opp_performance_state(struct device_node *np, int index); -void dev_pm_opp_of_register_em(struct cpumask 
*cpus); +int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus); +static inline void dev_pm_opp_of_unregister_em(struct device *dev) +{ + em_dev_unregister_perf_domain(dev); +} #else static inline int dev_pm_opp_of_add_table(struct device *dev) { @@ -400,7 +405,13 @@ static inline struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp) return NULL; } -static inline void dev_pm_opp_of_register_em(struct cpumask *cpus) +static inline int dev_pm_opp_of_register_em(struct device *dev, + struct cpumask *cpus) +{ + return -ENOTSUPP; +} + +static inline void dev_pm_opp_of_unregister_em(struct device *dev) { }
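A hedged sketch of how a non-CPU (e.g. devfreq) driver could use the refactored helper: registration is attempted with a NULL cpumask and only succeeds when the device's DT node carries 'dynamic-power-coefficient' and OPPs with voltages, as described above; unregistration mirrors it on the remove path. The function names are hypothetical.

#include <linux/device.h>
#include <linux/pm_opp.h>

/* Illustrative only, not part of the patch. */
static int my_devfreq_probe(struct device *dev)
{
	int ret;

	ret = dev_pm_opp_of_register_em(dev, NULL);
	if (ret)
		dev_dbg(dev, "no Energy Model registered: %d\n", ret);

	return 0;	/* the Energy Model is optional here */
}

static void my_devfreq_remove(struct device *dev)
{
	dev_pm_opp_of_unregister_em(dev);
}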
From patchwork Fri Apr 10 08:42:10 2020
X-Patchwork-Submitter: Lukasz Luba
X-Patchwork-Id: 189651
From: Lukasz Luba
Subject: [PATCH v6 10/10] drm/panfrost: Register devfreq cooling and attempt to add Energy Model
Date: Fri, 10 Apr 2020 09:42:10 +0100
Message-Id: <20200410084210.24932-11-lukasz.luba@arm.com>
In-Reply-To: <20200410084210.24932-1-lukasz.luba@arm.com>

Register a devfreq cooling device and attempt to register an Energy Model. This adds the devfreq device to the Energy Model framework and creates dedicated and unified data structures used e.g. in the thermal framework. The last NULL parameter indicates that the power model is simplified and built from the DT 'dynamic-power-coefficient', voltage and frequency.

Reviewed-by: Steven Price Reviewed-by: Alyssa Rosenzweig Signed-off-by: Lukasz Luba --- drivers/gpu/drm/panfrost/panfrost_devfreq.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c index 413987038fbf..8759a73db153 100644 --- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c +++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c @@ -105,7 +105,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev) } pfdev->devfreq.devfreq = devfreq; - cooling = of_devfreq_cooling_register(dev->of_node, devfreq); + cooling = devfreq_cooling_em_register(devfreq, NULL); if (IS_ERR(cooling)) DRM_DEV_INFO(dev, "Failed to register cooling device\n"); else