From patchwork Fri Jul 28 12:13:15 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 707884
Message-ID: <20230728120931.116763463@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, Tom Lendacky, Andrew Cooper, Arjan van de Ven,
    "James E.J. Bottomley", Dick Kennedy, James Smart, "Martin K. Petersen",
    linux-scsi@vger.kernel.org, Guenter Roeck, linux-hwmon@vger.kernel.org,
    Jean Delvare, Huang Rui, Juergen Gross, Steve Wahl, Mike Travis,
    Dimitri Sivanich, Russ Anderson
Subject: [patch v2 26/38] x86/cpu: Use common topology code for Intel
References: <20230728105650.565799744@linutronix.de>
Date: Fri, 28 Jul 2023 14:13:15 +0200 (CEST)

Intel CPUs use either topology leaf 0xb/0x1f evaluation or the legacy
SMP/HT evaluation based on CPUID leaf 0x1/0x4.

Move it over to the consolidated topology code and remove the random
topology hacks which are sprinkled into the Intel and the common code.

No functional change intended.
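For context, the legacy SMP/HT evaluation mentioned above derives the number
of SMT siblings from CPUID leaf 0x1 (EBX[23:16], valid when the HTT bit is
set) and the core count from leaf 0x4 (EAX[31:26] + 1), which is what the
removed detect_ht_early()/detect_num_cpu_cores() helpers did. The following
is a minimal user-space sketch of that evaluation, for illustration only; it
is not part of the patch and uses the GCC/clang <cpuid.h> helpers rather than
the kernel's cpuid()/cpuid_count():

/* Illustration only: legacy CPUID leaf 0x1/0x4 topology evaluation. */
#include <stdio.h>
#include <cpuid.h>	/* __get_cpuid(), __get_cpuid_count() */

int main(void)
{
	unsigned int eax, ebx, ecx, edx;
	unsigned int smt_siblings = 1, max_cores = 1;

	/* Leaf 0x1: EBX[23:16] = logical CPUs per package, valid if HTT (EDX bit 28) */
	if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & (1u << 28)))
		smt_siblings = (ebx >> 16) & 0xff;

	/* Leaf 0x4, subleaf 0: EAX[31:26] + 1 = cores per package, if the leaf is valid */
	if (__get_cpuid_count(4, 0, &eax, &ebx, &ecx, &edx) && (eax & 0x1f))
		max_cores = (eax >> 26) + 1;

	printf("logical CPUs per package: %u, cores per package: %u\n",
	       smt_siblings, max_cores);
	return 0;
}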
Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/cpu/common.c          |   65 ----------------------------------
 arch/x86/kernel/cpu/cpu.h             |    4 --
 arch/x86/kernel/cpu/intel.c           |   25 -------------
 arch/x86/kernel/cpu/topology_common.c |    5 ++
 4 files changed, 4 insertions(+), 95 deletions(-)

--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -784,19 +784,6 @@ static void get_model_name(struct cpuinf
 	*(s + 1) = '\0';
 }
 
-void detect_num_cpu_cores(struct cpuinfo_x86 *c)
-{
-	unsigned int eax, ebx, ecx, edx;
-
-	c->x86_max_cores = 1;
-	if (!IS_ENABLED(CONFIG_SMP) || c->cpuid_level < 4)
-		return;
-
-	cpuid_count(4, 0, &eax, &ebx, &ecx, &edx);
-	if (eax & 0x1f)
-		c->x86_max_cores = (eax >> 26) + 1;
-}
-
 void cpu_detect_cache_sizes(struct cpuinfo_x86 *c)
 {
 	unsigned int n, dummy, ebx, ecx, edx, l2size;
@@ -858,54 +845,6 @@ static void cpu_detect_tlb(struct cpuinf
 		tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]);
 }
 
-int detect_ht_early(struct cpuinfo_x86 *c)
-{
-#ifdef CONFIG_SMP
-	u32 eax, ebx, ecx, edx;
-
-	if (!cpu_has(c, X86_FEATURE_HT))
-		return -1;
-
-	if (cpu_has(c, X86_FEATURE_CMP_LEGACY))
-		return -1;
-
-	if (cpu_has(c, X86_FEATURE_XTOPOLOGY))
-		return -1;
-
-	cpuid(1, &eax, &ebx, &ecx, &edx);
-
-	smp_num_siblings = (ebx & 0xff0000) >> 16;
-	if (smp_num_siblings == 1)
-		pr_info_once("CPU0: Hyper-Threading is disabled\n");
-#endif
-	return 0;
-}
-
-void detect_ht(struct cpuinfo_x86 *c)
-{
-#ifdef CONFIG_SMP
-	int index_msb, core_bits;
-
-	if (topo_is_converted(c))
-		return;
-
-	if (detect_ht_early(c) < 0)
-		return;
-
-	index_msb = get_count_order(smp_num_siblings);
-	c->topo.pkg_id = apic->phys_pkg_id(c->topo.initial_apicid, index_msb);
-
-	smp_num_siblings = smp_num_siblings / c->x86_max_cores;
-
-	index_msb = get_count_order(smp_num_siblings);
-
-	core_bits = get_count_order(c->x86_max_cores);
-
-	c->topo.core_id = apic->phys_pkg_id(c->topo.initial_apicid, index_msb) &
-		((1 << core_bits) - 1);
-#endif
-}
-
 static void get_cpu_vendor(struct cpuinfo_x86 *c)
 {
 	char *v = c->x86_vendor_id;
@@ -1853,10 +1792,6 @@ static void identify_cpu(struct cpuinfo_
 		       c->x86, c->x86_model);
 	}
 
-#ifdef CONFIG_X86_64
-	detect_ht(c);
-#endif
-
 	x86_init_rdrand(c);
 	setup_pku(c);
 	setup_cet(c);

--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -76,11 +76,7 @@ extern void init_intel_cacheinfo(struct
 extern void init_amd_cacheinfo(struct cpuinfo_x86 *c);
 extern void init_hygon_cacheinfo(struct cpuinfo_x86 *c);
 
-extern void detect_num_cpu_cores(struct cpuinfo_x86 *c);
-extern int detect_extended_topology_early(struct cpuinfo_x86 *c);
 extern int detect_extended_topology(struct cpuinfo_x86 *c);
-extern int detect_ht_early(struct cpuinfo_x86 *c);
-extern void detect_ht(struct cpuinfo_x86 *c);
 extern void check_null_seg_clears_base(struct cpuinfo_x86 *c);
 
 void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c);

--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -489,13 +489,6 @@ static void early_init_intel(struct cpui
 	}
 
 	check_memory_type_self_snoop_errata(c);
-
-	/*
-	 * Get the number of SMT siblings early from the extended topology
-	 * leaf, if available. Otherwise try the legacy SMT detection.
-	 */
-	if (detect_extended_topology_early(c) < 0)
-		detect_ht_early(c);
 }
 
 static void bsp_init_intel(struct cpuinfo_x86 *c)
@@ -777,24 +770,6 @@ static void init_intel(struct cpuinfo_x8
 
 	intel_workarounds(c);
 
-	/*
-	 * Detect the extended topology information if available. This
-	 * will reinitialise the initial_apicid which will be used
-	 * in init_intel_cacheinfo()
-	 */
-	detect_extended_topology(c);
-
-	if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
-		/*
-		 * let's use the legacy cpuid vector 0x1 and 0x4 for topology
-		 * detection.
-		 */
-		detect_num_cpu_cores(c);
-#ifdef CONFIG_X86_32
-		detect_ht(c);
-#endif
-	}
-
 	init_intel_cacheinfo(c);
 
 	if (c->cpuid_level > 9) {

--- a/arch/x86/kernel/cpu/topology_common.c
+++ b/arch/x86/kernel/cpu/topology_common.c
@@ -64,7 +64,6 @@ bool topo_is_converted(struct cpuinfo_x8
 	/* Temporary until everything is converted over. */
 	switch (boot_cpu_data.x86_vendor) {
 	case X86_VENDOR_AMD:
-	case X86_VENDOR_INTEL:
 	case X86_VENDOR_HYGON:
 		return false;
 	default:
@@ -129,6 +128,10 @@ static void parse_topology(struct topo_s
 	case X86_VENDOR_ZHAOXIN:
 		parse_legacy(tscan);
 		break;
+	case X86_VENDOR_INTEL:
+		if (!IS_ENABLED(CONFIG_CPU_SUP_INTEL) || !cpu_parse_topology_ext(tscan))
+			parse_legacy(tscan);
+		break;
 	}
 }
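
For reference, the extended topology path that parse_topology() now tries
first for Intel walks CPUID leaf 0xb (or 0x1f where available); each subleaf
reports a level type (SMT, core, ...) and the x2APIC ID shift width for that
level. Below is a minimal user-space sketch of such a walk, for illustration
only; the kernel's actual parser is cpu_parse_topology_ext(), which is not
part of this patch:

/* Illustration only: enumerating CPUID leaf 0xb subleafs. */
#include <stdio.h>
#include <cpuid.h>	/* __get_cpuid_count() */

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	for (unsigned int subleaf = 0; ; subleaf++) {
		/* Returns 0 when leaf 0xb is not supported at all */
		if (!__get_cpuid_count(0xb, subleaf, &eax, &ebx, &ecx, &edx))
			break;

		unsigned int level_type = (ecx >> 8) & 0xff;	/* 1 = SMT, 2 = core */
		unsigned int shift	 = eax & 0x1f;		/* x2APIC ID shift    */

		if (!level_type)	/* invalid subleaf terminates the walk */
			break;

		printf("subleaf %u: level type %u, x2APIC ID shift %u, x2APIC ID %u\n",
		       subleaf, level_type, shift, edx);
	}
	return 0;
}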