From patchwork Fri Jul 29 21:56:20 2016
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 73056
From: Lina Iyer <lina.iyer@linaro.org>
To: ulf.hansson@linaro.org, khilman@kernel.org, rjw@rjwysocki.net,
	linux-pm@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: andy.gross@linaro.org, sboyd@codeaurora.org,
	linux-arm-msm@vger.kernel.org, Lina Iyer <lina.iyer@linaro.org>
Subject: [PATCH v2 09/14] PM / cpu_domains: Add PM Domain governor for CPUs
Date: Fri, 29 Jul 2016 15:56:20 -0600
Message-Id: <1469829385-11511-10-git-send-email-lina.iyer@linaro.org>
In-Reply-To: <1469829385-11511-1-git-send-email-lina.iyer@linaro.org>
References: <1469829385-11511-1-git-send-email-lina.iyer@linaro.org>
X-Mailing-List: linux-pm@vger.kernel.org

A PM domain comprising CPUs may be powered off when all the CPUs in the
domain are powered down. Powering down a CPU domain is generally an
expensive operation, so the power/performance trade-offs should be
considered.
The time between the last CPU powering down and the first CPU powering up
in a domain is the time available for the domain to sleep. Ideally, the
sleep time of the domain should fulfill the residency requirement of the
domain's idle state. To do this effectively, read the next wakeup time of
the cluster's CPUs and ensure that the domain's idle-state sleep time
satisfies each CPU's QoS requirement (the PM QoS CPU_DMA_LATENCY
constraint) as well as the state's residency.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 80 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 1 deletion(-)

-- 
2.7.4

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index f80b308..089c8d6 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -17,9 +17,12 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
+#include
 
 #define CPU_PD_NAME_MAX 36
 
@@ -52,6 +55,81 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
+static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
+{
+	struct generic_pm_domain *genpd = pd_to_genpd(pd);
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+	int qos_ns = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	u64 sleep_ns;
+	ktime_t earliest, next_wakeup;
+	int cpu;
+	int i;
+
+	/* Reset the last set genpd state, default to index 0 */
+	genpd->state_idx = 0;
+
+	/* We don't want to power down if QoS is 0 */
+	if (!qos_ns)
+		return false;
+
+	/*
+	 * Find the sleep time for the cluster.
+	 * The time between now and the first wakeup of any CPU in
+	 * this domain hierarchy is the time available for the
+	 * domain to be idle.
+	 *
+	 * We only care about the next wakeup of any online CPU in
+	 * the cluster. Hotplugging off any of the CPUs we care about
+	 * will wait on the genpd lock until we are done. Any other
+	 * CPU hotplug is of no consequence to our sleep time.
+	 */
+	earliest = ktime_set(KTIME_SEC_MAX, 0);
+	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
+		next_wakeup = tick_nohz_get_next_wakeup(cpu);
+		if (earliest.tv64 > next_wakeup.tv64)
+			earliest = next_wakeup;
+	}
+
+	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
+	if (sleep_ns <= 0)
+		return false;
+
+	/*
+	 * Find the deepest sleep state that satisfies the residency
+	 * requirement and the QoS constraint.
+	 */
+	for (i = genpd->state_count - 1; i >= 0; i--) {
+		u64 state_sleep_ns;
+
+		state_sleep_ns = genpd->states[i].power_off_latency_ns +
+			genpd->states[i].power_on_latency_ns +
+			genpd->states[i].residency_ns;
+
+		/*
+		 * If we can't sleep to save power in the state, move on
+		 * to the next shallower idle state.
+		 */
+		if (state_sleep_ns > sleep_ns)
+			continue;
+
+		/*
+		 * We also don't want to sleep more than we should to
+		 * guarantee QoS.
+		 */
+		if (state_sleep_ns < (qos_ns * NSEC_PER_USEC))
+			break;
+	}
+
+	if (i >= 0)
+		genpd->state_idx = i;
+
+	return (i >= 0);
+}
+
+static struct dev_power_governor cpu_pd_gov = {
+	.power_down_ok = cpu_pd_down_ok,
+};
+
 static int cpu_pd_attach_cpu(struct cpu_pm_domain *cpu_pd, int cpu)
 {
 	int ret;
@@ -166,7 +244,7 @@ static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
 
 	/* Register the CPU genpd */
 	pr_debug("adding %s as CPU PM domain\n", pd->genpd->name);
-	ret = pm_genpd_init(pd->genpd, &simple_qos_governor, false);
+	ret = pm_genpd_init(pd->genpd, &cpu_pd_gov, false);
 	if (ret) {
 		pr_err("Unable to initialize domain %s\n", dn->full_name);
 		goto fail;