From patchwork Thu Aug 11 09:33:44 2016
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 73715
From: Zhen Lei
To: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel, Rob Herring, Frank Rowand, devicetree
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v6 10/14] arm64/numa: define numa_distance as array to simplify code
Date: Thu, 11 Aug 2016 17:33:44 +0800
Message-ID: <1470908028-8596-11-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1470908028-8596-1-git-send-email-thunder.leizhen@huawei.com>
List-ID: devicetree@vger.kernel.org

1. MAX_NUMNODES is based on CONFIG_NODES_SHIFT, whose default value is currently very small.
2. Even if the default value of MAX_NUMNODES is enlarged to 64, the size of numa_distance is only 4K, which is still acceptable when the Image is run on other processors.
3. It makes __node_distance() faster than before.
Signed-off-by: Zhen Lei
---
 arch/arm64/include/asm/numa.h |  1 -
 arch/arm64/mm/numa.c          | 74 +++----------------------------------------
 2 files changed, 5 insertions(+), 70 deletions(-)

diff --git a/arch/arm64/include/asm/numa.h b/arch/arm64/include/asm/numa.h
index 600887e..9b6cc38 100644
--- a/arch/arm64/include/asm/numa.h
+++ b/arch/arm64/include/asm/numa.h
@@ -32,7 +32,6 @@ static inline const struct cpumask *cpumask_of_node(int node)
 void __init arm64_numa_init(void);
 int __init numa_add_memblk(int nodeid, u64 start, u64 end);
 void __init numa_set_distance(int from, int to, int distance);
-void __init numa_free_distance(void);
 void __init early_map_cpu_to_node(unsigned int cpu, int nid);
 void numa_store_cpu_info(unsigned int cpu);
 
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index 69aacf4..bd4d26a9 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -32,8 +32,7 @@ EXPORT_SYMBOL(node_data);
 nodemask_t numa_nodes_parsed __initdata;
 static int cpu_to_node_map[NR_CPUS] = { [0 ... NR_CPUS-1] = NUMA_NO_NODE };
 
-static int numa_distance_cnt;
-static u8 *numa_distance;
+static u8 numa_distance[MAX_NUMNODES][MAX_NUMNODES];
 static bool numa_off;
 
 static __init int numa_parse_early_param(char *opt)
@@ -242,59 +241,6 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
 }
 
 /**
- * numa_free_distance
- *
- * The current table is freed.
- */
-void __init numa_free_distance(void)
-{
-	size_t size;
-
-	if (!numa_distance)
-		return;
-
-	size = numa_distance_cnt * numa_distance_cnt *
-		sizeof(numa_distance[0]);
-
-	memblock_free(__pa(numa_distance), size);
-	numa_distance_cnt = 0;
-	numa_distance = NULL;
-}
-
-/**
- *
- * Create a new NUMA distance table.
- *
- */
-static int __init numa_alloc_distance(void)
-{
-	size_t size;
-	u64 phys;
-	int i, j;
-
-	size = nr_node_ids * nr_node_ids * sizeof(numa_distance[0]);
-	phys = memblock_find_in_range(0, PFN_PHYS(max_pfn),
-				      size, PAGE_SIZE);
-	if (WARN_ON(!phys))
-		return -ENOMEM;
-
-	memblock_reserve(phys, size);
-
-	numa_distance = __va(phys);
-	numa_distance_cnt = nr_node_ids;
-
-	/* fill with the default distances */
-	for (i = 0; i < numa_distance_cnt; i++)
-		for (j = 0; j < numa_distance_cnt; j++)
-			numa_distance[i * numa_distance_cnt + j] = i == j ?
-				LOCAL_DISTANCE : REMOTE_DISTANCE;
-
-	pr_debug("Initialized distance table, cnt=%d\n", numa_distance_cnt);
-
-	return 0;
-}
-
-/**
  * numa_set_distance - Set inter node NUMA distance from node to node.
  * @from: the 'from' node to set distance
  * @to: the 'to' node to set distance
@@ -309,12 +255,7 @@ static int __init numa_alloc_distance(void)
  */
 void __init numa_set_distance(int from, int to, int distance)
 {
-	if (!numa_distance) {
-		pr_warn_once("Warning: distance table not allocated yet\n");
-		return;
-	}
-
-	if (from >= numa_distance_cnt || to >= numa_distance_cnt ||
+	if (from >= MAX_NUMNODES || to >= MAX_NUMNODES ||
 			from < 0 || to < 0) {
 		pr_warn_once("Warning: node ids are out of bound, from=%d to=%d distance=%d\n",
 			     from, to, distance);
@@ -328,7 +269,7 @@ void __init numa_set_distance(int from, int to, int distance)
 		return;
 	}
 
-	numa_distance[from * numa_distance_cnt + to] = distance;
+	numa_distance[from][to] = distance;
 }
 
 /**
@@ -336,9 +277,9 @@ void __init numa_set_distance(int from, int to, int distance)
  */
 int __node_distance(int from, int to)
 {
-	if (from >= numa_distance_cnt || to >= numa_distance_cnt)
+	if (from >= MAX_NUMNODES || to >= MAX_NUMNODES)
 		return from == to ? LOCAL_DISTANCE : REMOTE_DISTANCE;
-	return numa_distance[from * numa_distance_cnt + to];
+	return numa_distance[from][to];
 }
 EXPORT_SYMBOL(__node_distance);
 
@@ -378,11 +319,6 @@ static int __init numa_init(int (*init_func)(void))
 	nodes_clear(numa_nodes_parsed);
 	nodes_clear(node_possible_map);
 	nodes_clear(node_online_map);
-	numa_free_distance();
-
-	ret = numa_alloc_distance();
-	if (ret < 0)
-		return ret;
 
 	ret = init_func();
 	if (ret < 0)
-- 
2.5.0