From patchwork Tue Jan 9 16:49:27 2018
X-Patchwork-Submitter: Sudeep Holla
X-Patchwork-Id: 123975
From: Sudeep Holla
To: linux-arm-kernel@lists.infradead.org, linux-pm@vger.kernel.org
Cc: Sudeep Holla, linux-kernel@vger.kernel.org, Viresh Kumar, Jeremy Linton, Lorenzo Pieralisi, Mark Rutland
Subject: [PATCH 1/2] drivers: psci: remove cluster terminology and dependency on physical_package_id
Date: Tue, 9 Jan 2018 16:49:27 +0000
Message-Id: <1515516568-31359-2-git-send-email-sudeep.holla@arm.com>
In-Reply-To: <1515516568-31359-1-git-send-email-sudeep.holla@arm.com>
References: <1515516568-31359-1-git-send-email-sudeep.holla@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Since the definition of the term "cluster" is not well
defined in the architecture, we should avoid using it. Also, the physical
package id is currently mapped to so-called "clusters" on ARM/ARM64
platforms, which is already debatable. This patch removes the PSCI
checker's dependency on physical_package_id from the topology. It also
replaces all occurrences of "cluster" with cpu_group, which is derived
from core_sibling_mask and may not map directly to a physical "cluster".

Cc: Mark Rutland
Cc: Lorenzo Pieralisi
Signed-off-by: Sudeep Holla
---
 drivers/firmware/psci_checker.c | 46 ++++++++++++++++++++---------------------
 1 file changed, 22 insertions(+), 24 deletions(-)

-- 
2.7.4

diff --git a/drivers/firmware/psci_checker.c b/drivers/firmware/psci_checker.c
index f3f4f810e5df..bb1c068bff19 100644
--- a/drivers/firmware/psci_checker.c
+++ b/drivers/firmware/psci_checker.c
@@ -77,8 +77,8 @@ static int psci_ops_check(void)
 	return 0;
 }
 
-static int find_clusters(const struct cpumask *cpus,
-			 const struct cpumask **clusters)
+static int find_cpu_groups(const struct cpumask *cpus,
+			   const struct cpumask **cpu_groups)
 {
 	unsigned int nb = 0;
 	cpumask_var_t tmp;
@@ -88,11 +88,11 @@ static int find_clusters(const struct cpumask *cpus,
 	cpumask_copy(tmp, cpus);
 
 	while (!cpumask_empty(tmp)) {
-		const struct cpumask *cluster =
+		const struct cpumask *cpu_group =
 			topology_core_cpumask(cpumask_any(tmp));
 
-		clusters[nb++] = cluster;
-		cpumask_andnot(tmp, tmp, cluster);
+		cpu_groups[nb++] = cpu_group;
+		cpumask_andnot(tmp, tmp, cpu_group);
 	}
 
 	free_cpumask_var(tmp);
@@ -170,24 +170,24 @@ static int hotplug_tests(void)
 {
 	int err;
 	cpumask_var_t offlined_cpus;
-	int i, nb_cluster;
-	const struct cpumask **clusters;
+	int i, nb_cpu_group;
+	const struct cpumask **cpu_groups;
 	char *page_buf;
 
 	err = -ENOMEM;
 	if (!alloc_cpumask_var(&offlined_cpus, GFP_KERNEL))
 		return err;
-	/* We may have up to nb_available_cpus clusters. */
-	clusters = kmalloc_array(nb_available_cpus, sizeof(*clusters),
-				 GFP_KERNEL);
-	if (!clusters)
+	/* We may have up to nb_available_cpus cpu_groups. */
+	cpu_groups = kmalloc_array(nb_available_cpus, sizeof(*cpu_groups),
+				   GFP_KERNEL);
+	if (!cpu_groups)
 		goto out_free_cpus;
 	page_buf = (char *)__get_free_page(GFP_KERNEL);
 	if (!page_buf)
-		goto out_free_clusters;
 
 	err = 0;
-	nb_cluster = find_clusters(cpu_online_mask, clusters);
+		goto out_free_cpu_groups;
+
+	err = 0;
+	nb_cpu_group = find_cpu_groups(cpu_online_mask, cpu_groups);
 
 	/*
 	 * Of course the last CPU cannot be powered down and cpu_down() should
@@ -197,24 +197,22 @@ static int hotplug_tests(void)
 	err += down_and_up_cpus(cpu_online_mask, offlined_cpus);
 
 	/*
-	 * Take down CPUs by cluster this time. When the last CPU is turned
-	 * off, the cluster itself should shut down.
+	 * Take down CPUs by cpu group this time. When the last CPU is turned
+	 * off, the cpu group itself should shut down.
 	 */
-	for (i = 0; i < nb_cluster; ++i) {
-		int cluster_id =
-			topology_physical_package_id(cpumask_any(clusters[i]));
+	for (i = 0; i < nb_cpu_group; ++i) {
 		ssize_t len = cpumap_print_to_pagebuf(true, page_buf,
-						      clusters[i]);
+						      cpu_groups[i]);
 		/* Remove trailing newline. */
 		page_buf[len - 1] = '\0';
-		pr_info("Trying to turn off and on again cluster %d "
-			"(CPUs %s)\n", cluster_id, page_buf);
-		err += down_and_up_cpus(clusters[i], offlined_cpus);
+		pr_info("Trying to turn off and on again group %d (CPUs %s)\n",
+			i, page_buf);
+		err += down_and_up_cpus(cpu_groups[i], offlined_cpus);
 	}
 
 	free_page((unsigned long)page_buf);
-out_free_clusters:
-	kfree(clusters);
+out_free_cpu_groups:
+	kfree(cpu_groups);
 out_free_cpus:
 	free_cpumask_var(offlined_cpus);
 	return err;