From patchwork Wed Jan 10 16:44:14 2018
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 124120
From: Sudeep Holla
To: linux-amlogic@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-pm@vger.kernel.org
Cc: Sudeep Holla, linux-kernel@vger.kernel.org, Viresh Kumar, Jeremy Linton, Lorenzo Pieralisi, Mark Rutland
Subject: [PATCH v2 1/2] drivers: psci: remove cluster terminology and dependency on physical_package_id
Date: Wed, 10 Jan 2018 16:44:14 +0000
Message-Id: <1515602655-12740-2-git-send-email-sudeep.holla@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515602655-12740-1-git-send-email-sudeep.holla@arm.com>
References: <1515602655-12740-1-git-send-email-sudeep.holla@arm.com>

Since the term "cluster" is not well defined in the architecture, we
should avoid using it. Moreover, the physical package id is currently
mapped to so-called "clusters" on ARM/ARM64 platforms, which is itself
contentious.

The PSCI checker currently uses the physical package id on the
assumption that CPU power domains map to "clusters", and the physical
package id in the code as it stands does map to cluster boundaries. On
that basis it tests "cluster" idle states as best it can. However, the
CPU power domain often, but not always, maps directly to the processor
topology.

This patch removes the PSCI checker's dependency on the topology's
physical_package_id. It also replaces all occurrences of "clusters"
with cpu_groups, which are derived from the core_sibling_mask and may
not map directly to a physical "cluster".
Cc: Mark Rutland
Acked-by: Lorenzo Pieralisi
Signed-off-by: Sudeep Holla
---
 drivers/firmware/psci_checker.c | 46 ++++++++++++++++++++---------------------
 1 file changed, 22 insertions(+), 24 deletions(-)

-- 
2.7.4

diff --git a/drivers/firmware/psci_checker.c b/drivers/firmware/psci_checker.c
index f3f4f810e5df..bb1c068bff19 100644
--- a/drivers/firmware/psci_checker.c
+++ b/drivers/firmware/psci_checker.c
@@ -77,8 +77,8 @@ static int psci_ops_check(void)
 	return 0;
 }
 
-static int find_clusters(const struct cpumask *cpus,
-			 const struct cpumask **clusters)
+static int find_cpu_groups(const struct cpumask *cpus,
+			   const struct cpumask **cpu_groups)
 {
 	unsigned int nb = 0;
 	cpumask_var_t tmp;
@@ -88,11 +88,11 @@ static int find_clusters(const struct cpumask *cpus,
 	cpumask_copy(tmp, cpus);
 
 	while (!cpumask_empty(tmp)) {
-		const struct cpumask *cluster =
+		const struct cpumask *cpu_group =
 			topology_core_cpumask(cpumask_any(tmp));
 
-		clusters[nb++] = cluster;
-		cpumask_andnot(tmp, tmp, cluster);
+		cpu_groups[nb++] = cpu_group;
+		cpumask_andnot(tmp, tmp, cpu_group);
 	}
 
 	free_cpumask_var(tmp);
@@ -170,24 +170,24 @@ static int hotplug_tests(void)
 {
 	int err;
 	cpumask_var_t offlined_cpus;
-	int i, nb_cluster;
-	const struct cpumask **clusters;
+	int i, nb_cpu_group;
+	const struct cpumask **cpu_groups;
 	char *page_buf;
 
 	err = -ENOMEM;
 	if (!alloc_cpumask_var(&offlined_cpus, GFP_KERNEL))
 		return err;
-	/* We may have up to nb_available_cpus clusters. */
-	clusters = kmalloc_array(nb_available_cpus, sizeof(*clusters),
-				 GFP_KERNEL);
-	if (!clusters)
+	/* We may have up to nb_available_cpus cpu_groups. */
+	cpu_groups = kmalloc_array(nb_available_cpus, sizeof(*cpu_groups),
+				   GFP_KERNEL);
+	if (!cpu_groups)
 		goto out_free_cpus;
 	page_buf = (char *)__get_free_page(GFP_KERNEL);
 	if (!page_buf)
-		goto out_free_clusters;
+		goto out_free_cpu_groups;
 
 	err = 0;
-	nb_cluster = find_clusters(cpu_online_mask, clusters);
+	nb_cpu_group = find_cpu_groups(cpu_online_mask, cpu_groups);
 
 	/*
 	 * Of course the last CPU cannot be powered down and cpu_down() should
@@ -197,24 +197,22 @@ static int hotplug_tests(void)
 	err += down_and_up_cpus(cpu_online_mask, offlined_cpus);
 
 	/*
-	 * Take down CPUs by cluster this time. When the last CPU is turned
-	 * off, the cluster itself should shut down.
+	 * Take down CPUs by cpu group this time. When the last CPU is turned
+	 * off, the cpu group itself should shut down.
 	 */
-	for (i = 0; i < nb_cluster; ++i) {
-		int cluster_id =
-			topology_physical_package_id(cpumask_any(clusters[i]));
+	for (i = 0; i < nb_cpu_group; ++i) {
 		ssize_t len = cpumap_print_to_pagebuf(true, page_buf,
-						      clusters[i]);
+						      cpu_groups[i]);
 		/* Remove trailing newline. */
 		page_buf[len - 1] = '\0';
-		pr_info("Trying to turn off and on again cluster %d "
-			"(CPUs %s)\n", cluster_id, page_buf);
-		err += down_and_up_cpus(clusters[i], offlined_cpus);
+		pr_info("Trying to turn off and on again group %d (CPUs %s)\n",
+			i, page_buf);
+		err += down_and_up_cpus(cpu_groups[i], offlined_cpus);
 	}
 
 	free_page((unsigned long)page_buf);
-out_free_clusters:
-	kfree(clusters);
+out_free_cpu_groups:
+	kfree(cpu_groups);
 out_free_cpus:
 	free_cpumask_var(offlined_cpus);
 	return err;
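
For readers unfamiliar with the kernel's cpumask helpers, the
partitioning idea behind find_cpu_groups() above can be shown with a
minimal, self-contained user-space sketch. Everything in it is
illustrative: the sibling masks are invented, and plain uint64_t
bitmasks stand in for struct cpumask, topology_core_cpumask() and
cpumask_andnot().

#include <stdio.h>
#include <stdint.h>

#define NR_CPUS 8

/*
 * Hypothetical per-CPU core-sibling masks: CPUs 0-3 form one group,
 * CPUs 4-7 another. The kernel derives these from the topology.
 */
static const uint64_t core_sibling_mask[NR_CPUS] = {
	0x0f, 0x0f, 0x0f, 0x0f, 0xf0, 0xf0, 0xf0, 0xf0,
};

static int find_cpu_groups(uint64_t online, uint64_t *cpu_groups)
{
	int nb = 0;
	uint64_t tmp = online;

	while (tmp) {
		/* Pick any remaining CPU (cf. cpumask_any) ... */
		int cpu = __builtin_ctzll(tmp);
		/* ... and take its whole sibling group at once. */
		uint64_t group = core_sibling_mask[cpu] & online;

		cpu_groups[nb++] = group;
		tmp &= ~group;	/* cf. cpumask_andnot(tmp, tmp, cpu_group) */
	}
	return nb;
}

int main(void)
{
	uint64_t cpu_groups[NR_CPUS];
	int i, nb = find_cpu_groups(0xffULL, cpu_groups);

	for (i = 0; i < nb; i++)
		printf("group %d: CPU mask 0x%02llx\n", i,
		       (unsigned long long)cpu_groups[i]);
	return 0;
}

With the masks above this prints two groups (0x0f and 0xf0), derived
purely from the sibling masks and independent of any physical package
id, which is the property the patch relies on.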