From patchwork Wed Mar 27 14:40:23 2019
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 161265
From: Ulf Hansson <ulf.hansson@linaro.org>
To: "Rafael J. Wysocki", linux-pm@vger.kernel.org
Cc: Frederic Weisbecker, Thomas Gleixner, Sudeep Holla,
	Lorenzo Pieralisi, Mark Rutland, Daniel Lezcano,
	"Raju P.L.S.S.S.N", Stephen Boyd, Tony Lindgren, Kevin Hilman,
	Lina Iyer, Ulf Hansson, Viresh Kumar, Vincent Guittot,
	Geert Uytterhoeven, linux-arm-kernel@lists.infradead.org,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v13 2/4] PM / Domains: Add support for CPU devices to genpd
Date: Wed, 27 Mar 2019 15:40:23 +0100
Message-Id: <20190327144023.25383-1-ulf.hansson@linaro.org>
X-Mailer: git-send-email 2.17.1

To enable a device belonging to a CPU to be attached to a PM domain
managed by genpd, let's make a few changes to it, so as to make it
convenient to manage the specifics around CPUs.

To be able to quickly find out which CPUs are attached to a genpd,
which typically becomes useful from a genpd governor as the following
changes are about to show, let's add a cpumask to struct
generic_pm_domain. At the point when a CPU device gets attached to a
genpd, let's update the genpd's cpumask. Moreover, let's also propagate
changes to the cpumask upwards in the topology to the master PM
domains.
In this way, the cpumask for a genpd hierarchically reflects all CPUs
attached to the topology below it.

Finally, let's make this an opt-in feature, to avoid having to manage
CPUs and the cpumask for a genpd that doesn't need it. For that reason,
let's add a new genpd configuration bit, GENPD_FLAG_CPU_DOMAIN.

Cc: Lina Iyer
Co-developed-by: Lina Iyer
Acked-by: Rafael J. Wysocki
Acked-by: Daniel Lezcano
Signed-off-by: Ulf Hansson
---
Changes in v13:
	- None (re-based).

---
 drivers/base/power/domain.c | 65 ++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h   | 13 ++++++++
 2 files changed, 77 insertions(+), 1 deletion(-)

-- 
2.17.1

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index ff6f992f7a1d..ecac03dcc9b2 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <linux/cpu.h>

 #include "power.h"

@@ -128,6 +129,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
 #define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
 #define genpd_is_always_on(genpd)	(genpd->flags & GENPD_FLAG_ALWAYS_ON)
 #define genpd_is_active_wakeup(genpd)	(genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
+#define genpd_is_cpu_domain(genpd)	(genpd->flags & GENPD_FLAG_CPU_DOMAIN)

 static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
 		const struct generic_pm_domain *genpd)
@@ -1454,6 +1456,56 @@ static void genpd_free_dev_data(struct device *dev,
 	dev_pm_put_subsys_data(dev);
 }

+static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
+				   int cpu, bool set, unsigned int depth)
+{
+	struct gpd_link *link;
+
+	if (!genpd_is_cpu_domain(genpd))
+		return;
+
+	list_for_each_entry(link, &genpd->slave_links, slave_node) {
+		struct generic_pm_domain *master = link->master;
+
+		genpd_lock_nested(master, depth + 1);
+		__genpd_update_cpumask(master, cpu, set, depth + 1);
+		genpd_unlock(master);
+	}
+
+	if (set)
+		cpumask_set_cpu(cpu, genpd->cpus);
+	else
+		cpumask_clear_cpu(cpu, genpd->cpus);
+}
+
+static void genpd_update_cpumask(struct generic_pm_domain *genpd,
+				 struct device *dev, bool set)
+{
+	int cpu;
+
+	if (!genpd_is_cpu_domain(genpd))
+		return;
+
+	for_each_possible_cpu(cpu) {
+		if (get_cpu_device(cpu) == dev) {
+			__genpd_update_cpumask(genpd, cpu, set, 0);
+			return;
+		}
+	}
+}
+
+static void genpd_set_cpumask(struct generic_pm_domain *genpd,
+			      struct device *dev)
+{
+	genpd_update_cpumask(genpd, dev, true);
+}
+
+static void genpd_clear_cpumask(struct generic_pm_domain *genpd,
+				struct device *dev)
+{
+	genpd_update_cpumask(genpd, dev, false);
+}
+
 static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 			    struct gpd_timing_data *td)
 {
@@ -1475,6 +1527,7 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,

 	genpd_lock(genpd);

+	genpd_set_cpumask(genpd, dev);
 	dev_pm_domain_set(dev, &genpd->domain);

 	genpd->device_count++;
@@ -1532,6 +1585,7 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
 	genpd->device_count--;
 	genpd->max_off_time_changed = true;

+	genpd_clear_cpumask(genpd, dev);
 	dev_pm_domain_set(dev, NULL);

 	list_del_init(&pdd->list_node);
@@ -1768,11 +1822,18 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	if (genpd_is_always_on(genpd) && !genpd_status_on(genpd))
 		return -EINVAL;

+	if (genpd_is_cpu_domain(genpd) &&
+	    !zalloc_cpumask_var(&genpd->cpus, GFP_KERNEL))
+		return -ENOMEM;
+
 	/* Use only one "off" state if there were no states declared */
 	if (genpd->state_count == 0) {
 		ret = genpd_set_default_power_state(genpd);
-		if (ret)
+		if (ret) {
+			if (genpd_is_cpu_domain(genpd))
+				free_cpumask_var(genpd->cpus);
 			return ret;
+		}
 	} else if (!gov && genpd->state_count > 1) {
 		pr_warn("%s: no governor for states\n", genpd->name);
 	}
@@ -1818,6 +1879,8 @@ static int genpd_remove(struct generic_pm_domain *genpd)
 	list_del(&genpd->gpd_list_node);
 	genpd_unlock(genpd);
 	cancel_work_sync(&genpd->power_off_work);
+	if (genpd_is_cpu_domain(genpd))
+		free_cpumask_var(genpd->cpus);
 	if (genpd->free_states)
 		genpd->free_states(genpd->states, genpd->state_count);

diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 8e1399231753..a6e251fe9deb 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <linux/cpumask.h>

 /*
  * Flags to control the behaviour of a genpd.
@@ -42,11 +43,22 @@
  * GENPD_FLAG_ACTIVE_WAKEUP:	Instructs genpd to keep the PM domain powered
  *				on, in case any of its attached devices is used
  *				in the wakeup path to serve system wakeups.
+ *
+ * GENPD_FLAG_CPU_DOMAIN:	Instructs genpd that it should expect to get
+ *				devices attached, which may belong to CPUs or
+ *				possibly have subdomains with CPUs attached.
+ *				This flag enables the genpd backend driver to
+ *				deploy idle power management support for CPUs
+ *				and groups of CPUs. Note that the backend
+ *				driver must then comply with the so-called
+ *				last-man-standing algorithm for the CPUs in the
+ *				PM domain.
  */
 #define GENPD_FLAG_PM_CLK	 (1U << 0)
 #define GENPD_FLAG_IRQ_SAFE	 (1U << 1)
 #define GENPD_FLAG_ALWAYS_ON	 (1U << 2)
 #define GENPD_FLAG_ACTIVE_WAKEUP (1U << 3)
+#define GENPD_FLAG_CPU_DOMAIN	 (1U << 4)

 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -94,6 +106,7 @@ struct generic_pm_domain {
 	unsigned int suspended_count;	/* System suspend device counter */
 	unsigned int prepared_count;	/* Suspend counter of prepared devices */
 	unsigned int performance_state;	/* Aggregated max performance state */
+	cpumask_var_t cpus;		/* A cpumask of the attached CPUs */
 	int (*power_off)(struct generic_pm_domain *domain);
 	int (*power_on)(struct generic_pm_domain *domain);
 	struct opp_table *opp_table;	/* OPP table of the genpd */
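
As the changelog describes, the cpumask handling is opt-in via
GENPD_FLAG_CPU_DOMAIN. Below is a minimal sketch of how a genpd
provider could opt in; the domain name "cluster0", the cluster_pd
object and its power_on/power_off callbacks are hypothetical, while
pm_genpd_init(), pm_genpd_add_device(), get_cpu_device() and the new
flag are the existing/added interfaces this patch relies on. A real
driver would typically attach only the CPUs of one cluster, often via
DT-based attach paths rather than an explicit loop.

#include <linux/cpu.h>
#include <linux/init.h>
#include <linux/pm_domain.h>

/* Hypothetical platform callbacks; a real driver would program hardware. */
static int cluster_power_on(struct generic_pm_domain *pd)  { return 0; }
static int cluster_power_off(struct generic_pm_domain *pd) { return 0; }

static struct generic_pm_domain cluster_pd = {
	.name		= "cluster0",
	.power_on	= cluster_power_on,
	.power_off	= cluster_power_off,
	/* Opt in: this domain will have CPU devices (or CPU subdomains). */
	.flags		= GENPD_FLAG_CPU_DOMAIN | GENPD_FLAG_IRQ_SAFE,
};

static int __init cluster_pd_setup(void)
{
	int cpu, ret;

	/* With GENPD_FLAG_CPU_DOMAIN set, this allocates cluster_pd.cpus. */
	ret = pm_genpd_init(&cluster_pd, NULL, false);
	if (ret)
		return ret;

	/* Attaching a CPU device sets the matching bit in cluster_pd.cpus. */
	for_each_possible_cpu(cpu) {
		ret = pm_genpd_add_device(&cluster_pd, get_cpu_device(cpu));
		if (ret)
			return ret;
	}

	return 0;
}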
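
The changelog also notes that the cpumask typically becomes useful from
a genpd governor, which arrives in a later patch of this series. Purely
as an illustration of how genpd->cpus can be consumed (this is not that
governor), a hypothetical helper that checks whether every CPU below
the domain stays idle long enough could look like this; the per-CPU
next_wakeup[] input is assumed to be supplied by the caller.

#include <linux/cpumask.h>
#include <linux/ktime.h>
#include <linux/pm_domain.h>

/*
 * Hypothetical helper: decide whether the domain may enter a state with
 * the given minimum residency. genpd->cpus hierarchically contains every
 * CPU attached below this domain, so one walk also covers subdomains.
 */
static bool cpus_idle_long_enough(struct generic_pm_domain *genpd,
				  const ktime_t *next_wakeup, ktime_t now,
				  s64 min_residency_ns)
{
	int cpu;

	/* Only domains marked GENPD_FLAG_CPU_DOMAIN track a cpumask. */
	if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN))
		return true;

	for_each_cpu_and(cpu, genpd->cpus, cpu_online_mask) {
		s64 idle_ns = ktime_to_ns(ktime_sub(next_wakeup[cpu], now));

		if (idle_ns < min_residency_ns)
			return false;
	}

	return true;
}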