From patchwork Wed Mar 27 14:35:46 2019
X-Patchwork-Submitter: Ulf Hansson
X-Patchwork-Id: 161262
From: Ulf Hansson <ulf.hansson@linaro.org>
To: "Rafael J. Wysocki", linux-pm@vger.kernel.org
Cc: Frederic Weisbecker, Thomas Gleixner, Sudeep Holla, Lorenzo Pieralisi,
 Mark Rutland, Daniel Lezcano, "Raju P.L.S.S.S.N", Stephen Boyd,
 Tony Lindgren, Kevin Hilman, Lina Iyer, Ulf Hansson, Viresh Kumar,
 Vincent Guittot, Geert Uytterhoeven, linux-arm-kernel@lists.infradead.org,
 linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/4] PM / Domains: Add support for CPU devices to genpd
Date: Wed, 27 Mar 2019 15:35:46 +0100
Message-Id: <20190327143548.25305-3-ulf.hansson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190327143548.25305-1-ulf.hansson@linaro.org>
References: <20190327143548.25305-1-ulf.hansson@linaro.org>

To enable a device belonging to a CPU to be attached to a PM domain
managed by genpd, let's make a few changes to genpd, so as to make it
convenient to manage the specifics around CPUs.

To be able to quickly find out which CPUs are attached to a genpd,
which typically becomes useful from a genpd governor as the following
changes will show, let's add a cpumask to struct generic_pm_domain.
At the point when a CPU device gets attached to a genpd, let's update
the genpd's cpumask. Moreover, let's also propagate changes to the
cpumask upwards in the topology to the master PM domains. In this way,
the cpumask for a genpd hierarchically reflects all CPUs attached to
the topology below it.

Finally, let's make this an opt-in feature, to avoid having to manage
CPUs and the cpumask for a genpd that doesn't need it. For that reason,
let's add a new genpd configuration bit, GENPD_FLAG_CPU_DOMAIN.

Cc: Lina Iyer
Co-developed-by: Lina Iyer
Acked-by: Rafael J. Wysocki
Acked-by: Daniel Lezcano
Signed-off-by: Ulf Hansson
---

Changes in v13:
        - None (re-based).

---
 drivers/base/power/domain.c | 65 ++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h   | 13 ++++++++
 2 files changed, 77 insertions(+), 1 deletion(-)

-- 
2.17.1

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index ff6f992f7a1d..ecac03dcc9b2 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -22,6 +22,7 @@
 #include <linux/sched.h>
 #include <linux/suspend.h>
 #include <linux/export.h>
+#include <linux/cpu.h>
 
 #include "power.h"
@@ -128,6 +129,7 @@ static const struct genpd_lock_ops genpd_spin_ops = {
 #define genpd_is_irq_safe(genpd)       (genpd->flags & GENPD_FLAG_IRQ_SAFE)
 #define genpd_is_always_on(genpd)      (genpd->flags & GENPD_FLAG_ALWAYS_ON)
 #define genpd_is_active_wakeup(genpd)  (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
+#define genpd_is_cpu_domain(genpd)     (genpd->flags & GENPD_FLAG_CPU_DOMAIN)
 
 static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
                const struct generic_pm_domain *genpd)
@@ -1454,6 +1456,56 @@ static void genpd_free_dev_data(struct device *dev,
        dev_pm_put_subsys_data(dev);
 }
 
+static void __genpd_update_cpumask(struct generic_pm_domain *genpd,
+                                   int cpu, bool set, unsigned int depth)
+{
+       struct gpd_link *link;
+
+       if (!genpd_is_cpu_domain(genpd))
+               return;
+
+       list_for_each_entry(link, &genpd->slave_links, slave_node) {
+               struct generic_pm_domain *master = link->master;
+
+               genpd_lock_nested(master, depth + 1);
+               __genpd_update_cpumask(master, cpu, set, depth + 1);
+               genpd_unlock(master);
+       }
+
+       if (set)
+               cpumask_set_cpu(cpu, genpd->cpus);
+       else
+               cpumask_clear_cpu(cpu, genpd->cpus);
+}
+
+static void genpd_update_cpumask(struct generic_pm_domain *genpd,
+                                struct device *dev, bool set)
+{
+       int cpu;
+
+       if (!genpd_is_cpu_domain(genpd))
+               return;
+
+       for_each_possible_cpu(cpu) {
+               if (get_cpu_device(cpu) == dev) {
+                       __genpd_update_cpumask(genpd, cpu, set, 0);
+                       return;
+               }
+       }
+}
+
+static void genpd_set_cpumask(struct generic_pm_domain *genpd,
+                             struct device *dev)
+{
+       genpd_update_cpumask(genpd, dev, true);
+}
+
+static void genpd_clear_cpumask(struct generic_pm_domain *genpd,
+                               struct device *dev)
+{
+       genpd_update_cpumask(genpd, dev, false);
+}
+
 static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
                            struct gpd_timing_data *td)
 {
@@ -1475,6 +1527,7 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 
        genpd_lock(genpd);
 
+       genpd_set_cpumask(genpd, dev);
        dev_pm_domain_set(dev, &genpd->domain);
 
        genpd->device_count++;
@@ -1532,6 +1585,7 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
        genpd->device_count--;
        genpd->max_off_time_changed = true;
 
+       genpd_clear_cpumask(genpd, dev);
        dev_pm_domain_set(dev, NULL);
 
        list_del_init(&pdd->list_node);
@@ -1768,11 +1822,18 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
        if (genpd_is_always_on(genpd) && !genpd_status_on(genpd))
                return -EINVAL;
 
+       if (genpd_is_cpu_domain(genpd) &&
+           !zalloc_cpumask_var(&genpd->cpus, GFP_KERNEL))
+               return -ENOMEM;
+
        /* Use only one "off" state if there were no states declared */
        if (genpd->state_count == 0) {
                ret = genpd_set_default_power_state(genpd);
-               if (ret)
+               if (ret) {
+                       if (genpd_is_cpu_domain(genpd))
+                               free_cpumask_var(genpd->cpus);
                        return ret;
+               }
        } else if (!gov && genpd->state_count > 1) {
                pr_warn("%s: no governor for states\n", genpd->name);
        }
@@ -1818,6 +1879,8 @@ static int genpd_remove(struct generic_pm_domain *genpd)
        list_del(&genpd->gpd_list_node);
        genpd_unlock(genpd);
        cancel_work_sync(&genpd->power_off_work);
+       if (genpd_is_cpu_domain(genpd))
+               free_cpumask_var(genpd->cpus);
        if (genpd->free_states)
                genpd->free_states(genpd->states, genpd->state_count);
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 8e1399231753..a6e251fe9deb 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -16,6 +16,7 @@
 #include <linux/of.h>
 #include <linux/notifier.h>
 #include <linux/spinlock.h>
+#include <linux/cpumask.h>
 
 /*
  * Flags to control the behaviour of a genpd.
@@ -42,11 +43,22 @@
  * GENPD_FLAG_ACTIVE_WAKEUP: Instructs genpd to keep the PM domain powered
  *                           on, in case any of its attached devices is used
  *                           in the wakeup path to serve system wakeups.
+ *
+ * GENPD_FLAG_CPU_DOMAIN:    Instructs genpd that it should expect to get
+ *                           devices attached, which may belong to CPUs or
+ *                           possibly have subdomains with CPUs attached.
+ *                           This flag enables the genpd backend driver to
+ *                           deploy idle power management support for CPUs
+ *                           and groups of CPUs. Note that, the backend
+ *                           driver must then comply with the so called,
+ *                           last-man-standing algorithm, for the CPUs in the
+ *                           PM domain.
  */
 #define GENPD_FLAG_PM_CLK        (1U << 0)
 #define GENPD_FLAG_IRQ_SAFE      (1U << 1)
 #define GENPD_FLAG_ALWAYS_ON     (1U << 2)
 #define GENPD_FLAG_ACTIVE_WAKEUP (1U << 3)
+#define GENPD_FLAG_CPU_DOMAIN    (1U << 4)
 
 enum gpd_status {
        GPD_STATE_ACTIVE = 0,   /* PM domain is active */
@@ -94,6 +106,7 @@ struct generic_pm_domain {
        unsigned int suspended_count;   /* System suspend device counter */
        unsigned int prepared_count;    /* Suspend counter of prepared devices */
        unsigned int performance_state; /* Aggregated max performance state */
+       cpumask_var_t cpus;             /* A cpumask of the attached CPUs */
        int (*power_off)(struct generic_pm_domain *domain);
        int (*power_on)(struct generic_pm_domain *domain);
        struct opp_table *opp_table;    /* OPP table of the genpd */
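
[Editor's note] For context, below is a minimal, hypothetical sketch of how a
platform genpd provider could opt in to the new flag. It is not part of this
patch: the domain name, the empty power callbacks and the use of
GENPD_FLAG_IRQ_SAFE alongside GENPD_FLAG_CPU_DOMAIN are illustrative
assumptions only.

/*
 * Illustrative only (not from this series): a hypothetical provider that
 * marks its PM domain as a CPU domain, so genpd tracks the CPUs of the
 * devices attached below it.
 */
#include <linux/pm_domain.h>

static int my_cpu_pd_power_on(struct generic_pm_domain *pd)
{
        /* Platform-specific code to power up the CPU cluster would go here. */
        return 0;
}

static int my_cpu_pd_power_off(struct generic_pm_domain *pd)
{
        /* Platform-specific code to power down the CPU cluster would go here. */
        return 0;
}

static struct generic_pm_domain my_cpu_pd = {
        .name = "my-cpu-cluster-pd",            /* hypothetical name */
        .power_on = my_cpu_pd_power_on,
        .power_off = my_cpu_pd_power_off,
        /*
         * GENPD_FLAG_CPU_DOMAIN is the opt-in added by this patch;
         * GENPD_FLAG_IRQ_SAFE is assumed here because CPU domains are
         * typically managed from atomic context.
         */
        .flags = GENPD_FLAG_CPU_DOMAIN | GENPD_FLAG_IRQ_SAFE,
};

static int my_cpu_pd_setup(void)
{
        /* With the flag set, pm_genpd_init() also allocates genpd->cpus. */
        return pm_genpd_init(&my_cpu_pd, NULL, false);
}

Once a CPU device gets attached to such a domain, genpd_set_cpumask() records
that CPU in the domain's cpus mask and, per __genpd_update_cpumask() above,
propagates it to any master domains that also set GENPD_FLAG_CPU_DOMAIN.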