From patchwork Wed Apr 6 16:45:06 2022
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 558569
From: James Morse
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: James Morse, Catalin Marinas
Subject: [stable:PATCH v4.9.309 03/43] arm64: Add MIDR encoding for Arm
 Cortex-A55 and Cortex-A35
Date: Wed, 6 Apr 2022 17:45:06 +0100
Message-Id: <20220406164546.1888528-3-james.morse@arm.com>
In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com>
References: <0220406164217.1888053-1-james.morse@arm.com>
 <20220406164546.1888528-1-james.morse@arm.com>

From: Suzuki K Poulose

commit 6e616864f21160d8d503523b60a53a29cecc6f24 upstream.

Update the MIDR encodings for the Cortex-A55 and Cortex-A35.
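For illustration, the composition these constants feed into can be sketched in stand-alone C (not part of the patch; the shifts and the fixed 0xf architecture nibble mirror the MIDR_EL1 layout used by asm/cputype.h):

#include <stdint.h>
#include <stdio.h>

#define MIDR_PARTNUM_SHIFT       4
#define MIDR_ARCHITECTURE_SHIFT 16
#define MIDR_IMPLEMENTOR_SHIFT  24

#define MIDR_CPU_MODEL(imp, partnum)                     \
        (((uint32_t)(imp) << MIDR_IMPLEMENTOR_SHIFT) |   \
         (0xfu << MIDR_ARCHITECTURE_SHIFT) |             \
         ((uint32_t)(partnum) << MIDR_PARTNUM_SHIFT))

#define ARM_CPU_IMP_ARM         0x41    /* 'A' */
#define ARM_CPU_PART_CORTEX_A55 0xD05

int main(void)
{
        /* Expected output: 0x410fd050 */
        printf("MIDR model for Cortex-A55: 0x%08x\n",
               MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55));
        return 0;
}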
Cc: Mark Rutland
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: James Morse
---
 arch/arm64/include/asm/cputype.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 39d1db68748d..2270a5f31271 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -83,6 +83,8 @@
 #define ARM_CPU_PART_CORTEX_A53 0xD03
 #define ARM_CPU_PART_CORTEX_A73 0xD09
 #define ARM_CPU_PART_CORTEX_A75 0xD0A
+#define ARM_CPU_PART_CORTEX_A35 0xD04
+#define ARM_CPU_PART_CORTEX_A55 0xD05

 #define APM_CPU_PART_POTENZA 0x000

@@ -98,6 +100,8 @@
 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
 #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
 #define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
+#define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
+#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
 #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)

From patchwork Wed Apr 6 16:45:07 2022
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 558570
From: James Morse
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: James Morse, Catalin Marinas
Subject: [stable:PATCH v4.9.309 04/43] arm64: capabilities: Update prototype
 for enable call back
Date: Wed, 6 Apr 2022 17:45:07 +0100
Message-Id: <20220406164546.1888528-4-james.morse@arm.com>
In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com>
References: <0220406164217.1888053-1-james.morse@arm.com>
 <20220406164546.1888528-1-james.morse@arm.com>

From: Dave Martin

[ Upstream commit c0cda3b8ee6b4b6851b2fd8b6db91fd7b0e2524a ]

We issue the enable() call back for all CPU hwcaps capabilities
available on the system, on all the CPUs. So far we have ignored the
argument passed to the call back, which had a prototype to accept a
"void *" for use with on_each_cpu() and later with stop_machine().
However, with commit 0a0d111d40fd1 ("arm64: cpufeature: Pass capability
structure to ->enable callback"), there are some users of the argument
who want the matching capability struct pointer where there are
multiple matching criteria for a single capability. Clean up the
declaration of the call back to make it clear:

 1) Renamed to cpu_enable(), to imply taking the necessary actions on
    the called CPU for the entry.
 2) Pass a const pointer to the capability, to allow the call back to
    check the entry (e.g. to check if any action is needed on the CPU).
 3) We don't care about the result of the call back, so turn the return
    type to void.
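A stand-alone sketch of the resulting shape (illustrative types only, not the kernel's): the callback receives the matching descriptor and returns nothing, and a small adapter restores the int (*)(void *) shape that stop_machine() expects, exactly as __enable_cpu_capability() does in the diff below:

#include <stdio.h>

struct capability {
        const char *desc;
        int (*matches)(const struct capability *cap);
        void (*cpu_enable)(const struct capability *cap);
};

static int always_matches(const struct capability *cap)
{
        (void)cap;
        return 1;
}

static void enable_demo(const struct capability *cap)
{
        /* A cpu_enable() callback may re-check its own entry, as the
         * errata callbacks do with entry->matches(). */
        if (!cap->matches(cap))
                return;
        printf("enabling %s\n", cap->desc);
}

static int enable_adapter(void *arg)    /* shape stop_machine() expects */
{
        const struct capability *cap = arg;

        cap->cpu_enable(cap);
        return 0;
}

int main(void)
{
        struct capability cap = { "demo capability", always_matches, enable_demo };

        return enable_adapter(&cap);
}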
Cc: Will Deacon
Cc: Catalin Marinas
Cc: Mark Rutland
Cc: Andre Przywara
Cc: James Morse
Acked-by: Robin Murphy
Reviewed-by: Julien Thierry
Signed-off-by: Dave Martin
[suzuki: convert more users, rename call back and drop results]
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
---
 arch/arm64/include/asm/cpufeature.h |  7 ++++-
 arch/arm64/include/asm/processor.h  |  5 ++--
 arch/arm64/kernel/cpu_errata.c      | 44 ++++++++++++++---------------
 arch/arm64/kernel/cpufeature.c      | 34 +++++++++++++---------
 arch/arm64/kernel/fpsimd.c          |  1 +
 arch/arm64/kernel/traps.c           |  4 +--
 arch/arm64/mm/fault.c               |  3 +-
 7 files changed, 56 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e7bef3d936d8..984a9c81d65a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -77,7 +77,12 @@ struct arm64_cpu_capabilities {
 	u16 capability;
 	int def_scope;	/* default scope */
 	bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
-	int (*enable)(void *);	/* Called on all active CPUs */
+	/*
+	 * Take the appropriate actions to enable this capability for this CPU.
+	 * For each successfully booted CPU, this method is called for each
+	 * globally detected capability.
+	 */
+	void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index d27e472bbbf1..367141e05c34 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -37,6 +37,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -219,8 +220,8 @@ static inline void spin_lock_prefetch(const void *ptr)

 #endif

-int cpu_enable_pan(void *__unused);
-int cpu_enable_cache_maint_trap(void *__unused);
+void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused);
+void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused);

 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index bf4da33d77e3..cc62e3376345 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -48,11 +48,11 @@ has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
 	       (arm64_ftr_reg_ctrel0.sys_val & mask);
 }

-static int cpu_enable_trap_ctr_access(void *__unused)
+static void
+cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 {
 	/* Clear SCTLR_EL1.UCT */
 	config_sctlr_el1(SCTLR_EL1_UCT, 0);
-	return 0;
 }

 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -152,25 +152,25 @@ static void call_hvc_arch_workaround_1(void)
 	arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
 }

-static int enable_smccc_arch_workaround_1(void *data)
+static void
+enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 {
-	const struct arm64_cpu_capabilities *entry = data;
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;

 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return 0;
+		return;

 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return 0;
+		return;

 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return 0;
+			return;
 		cb = call_hvc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_hvc_start;
 		smccc_end = __smccc_workaround_1_hvc_end;
@@ -180,19 +180,19 @@ static int enable_smccc_arch_workaround_1(void *data)
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return 0;
+			return;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;

 	default:
-		return 0;
+		return;
 	}

 	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);

-	return 0;
+	return;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */

@@ -391,7 +391,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "ARM errata 826319, 827319, 824069",
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
 		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x02),
-		.enable = cpu_enable_cache_maint_trap,
+		.cpu_enable = cpu_enable_cache_maint_trap,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_819472
@@ -400,7 +400,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "ARM errata 819472",
 		.capability = ARM64_WORKAROUND_CLEAN_CACHE,
 		MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x01),
-		.enable = cpu_enable_cache_maint_trap,
+		.cpu_enable = cpu_enable_cache_maint_trap,
 	},
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_832075
@@ -460,45 +460,45 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
 		.matches = has_mismatched_cache_type,
 		.def_scope = SCOPE_LOCAL_CPU,
-		.enable = cpu_enable_trap_ctr_access,
+		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 	{
 		.desc = "Mismatched cache type",
 		.capability = ARM64_MISMATCHED_CACHE_TYPE,
 		.matches = has_mismatched_cache_type,
 		.def_scope = SCOPE_LOCAL_CPU,
-		.enable = cpu_enable_trap_ctr_access,
+		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-		.enable = enable_smccc_arch_workaround_1,
+		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 #endif
 #ifdef CONFIG_ARM64_SSBD
@@ -524,8 +524,8 @@ void verify_local_cpu_errata_workarounds(void)

 	for (; caps->matches; caps++) {
 		if (cpus_have_cap(caps->capability)) {
-			if (caps->enable)
-				caps->enable((void *)caps);
+			if (caps->cpu_enable)
+				caps->cpu_enable(caps);
 		} else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
 			pr_crit("CPU%d: Requires work around for %s, not detected"
 				" at boot time\n",
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6601dd4005c3..8e037a519e02 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -800,7 +800,8 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				ID_AA64PFR0_CSV3_SHIFT);
 }

-static int kpti_install_ng_mappings(void *__unused)
+static void
+kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
 {
 	typedef void (kpti_remap_fn)(int, int, phys_addr_t);
 	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
@@ -810,7 +811,7 @@ static int kpti_install_ng_mappings(void *__unused)
 	int cpu = smp_processor_id();

 	if (kpti_applied)
-		return 0;
+		return;

 	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
@@ -821,7 +822,7 @@ static int kpti_install_ng_mappings(void *__unused)
 	if (!cpu)
 		kpti_applied = true;

-	return 0;
+	return;
 }

 static int __init parse_kpti(char *str)
@@ -838,7 +839,7 @@ static int __init parse_kpti(char *str)
 early_param("kpti", parse_kpti);
 #endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */

-static int cpu_copy_el2regs(void *__unused)
+static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
 	 * Copy register values that aren't redirected by hardware.
@@ -850,8 +851,6 @@ static int cpu_copy_el2regs(void *__unused)
 	 */
 	if (!alternatives_applied)
 		write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
-
-	return 0;
 }

 static const struct arm64_cpu_capabilities arm64_features[] = {
@@ -875,7 +874,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
 		.sign = FTR_UNSIGNED,
 		.min_field_value = 1,
-		.enable = cpu_enable_pan,
+		.cpu_enable = cpu_enable_pan,
 	},
 #endif /* CONFIG_ARM64_PAN */
 #if defined(CONFIG_AS_LSE) && defined(CONFIG_ARM64_LSE_ATOMICS)
@@ -923,7 +922,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_HAS_VIRT_HOST_EXTN,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = runs_at_el2,
-		.enable = cpu_copy_el2regs,
+		.cpu_enable = cpu_copy_el2regs,
 	},
 	{
 		.desc = "32-bit EL0 Support",
@@ -947,7 +946,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
 		.def_scope = SCOPE_SYSTEM,
 		.matches = unmap_kernel_at_el0,
-		.enable = kpti_install_ng_mappings,
+		.cpu_enable = kpti_install_ng_mappings,
 	},
 #endif
 	{},
@@ -1075,6 +1074,14 @@ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 	}
 }

+static int __enable_cpu_capability(void *arg)
+{
+	const struct arm64_cpu_capabilities *cap = arg;
+
+	cap->cpu_enable(cap);
+	return 0;
+}
+
 /*
  * Run through the enabled capabilities and enable() it on all active
  * CPUs
@@ -1090,14 +1097,15 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
 			/* Ensure cpus_have_const_cap(num) works */
 			static_branch_enable(&cpu_hwcap_keys[num]);

-		if (caps->enable) {
+		if (caps->cpu_enable) {
 			/*
 			 * Use stop_machine() as it schedules the work allowing
 			 * us to modify PSTATE, instead of on_each_cpu() which
 			 * uses an IPI, giving us a PSTATE that disappears when
 			 * we return.
 			 */
-			stop_machine(caps->enable, (void *)caps, cpu_online_mask);
+			stop_machine(__enable_cpu_capability, (void *)caps,
+				     cpu_online_mask);
 		}
 	}
 }
@@ -1155,8 +1163,8 @@ verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
 				smp_processor_id(), caps->desc);
 			cpu_die_early();
 		}
-		if (caps->enable)
-			caps->enable((void *)caps);
+		if (caps->cpu_enable)
+			caps->cpu_enable(caps);
 	}
 }
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 1d5890f19ca3..ee34be8bed03 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -26,6 +26,7 @@
 #include
 #include
+#include
 #include

 #define FPEXC_IOF (1 << 0)
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index b6fd2a21b015..adf18b9a2c03 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -34,6 +34,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -432,10 +433,9 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
 	force_signal_inject(SIGILL, ILL_ILLOPC, regs, 0);
 }

-int cpu_enable_cache_maint_trap(void *__unused)
+void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 {
 	config_sctlr_el1(SCTLR_EL1_UCI, 0);
-	return 0;
 }

 #define __user_cache_maint(insn, address, res) \
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index e973002530de..a0c3efbc3717 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -727,7 +727,7 @@ asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
 NOKPROBE_SYMBOL(do_debug_exception);

 #ifdef CONFIG_ARM64_PAN
-int cpu_enable_pan(void *__unused)
+void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 {
 	/*
 	 * We modify PSTATE. This won't work from irq context as the PSTATE
@@ -737,6 +737,5 @@ int cpu_enable_pan(void *__unused)
 	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
 	asm(SET_PSTATE_PAN(1));
-	return 0;
 }
 #endif /* CONFIG_ARM64_PAN */

From patchwork Wed Apr 6 16:45:08 2022
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 558571
From: James Morse
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: James Morse, Catalin Marinas
Subject: [stable:PATCH v4.9.309 05/43] arm64: capabilities: Move errata work
 around check on boot CPU
Date: Wed, 6 Apr 2022 17:45:08 +0100
Message-Id: <20220406164546.1888528-5-james.morse@arm.com>
In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com>
References: <0220406164217.1888053-1-james.morse@arm.com>
 <20220406164546.1888528-1-james.morse@arm.com>

From: Suzuki K Poulose

[ Upstream commit 5e91107b06811f0ca147cebbedce53626c9c4443 ]

We trigger the CPU errata work around check on the boot CPU from
smp_prepare_boot_cpu() to make sure that we run the checks only after
the CPU feature infrastructure is initialised. While this is correct,
we can also do this from init_cpu_features(), which initialises the
infrastructure and is called only on the boot CPU. This helps to
consolidate the CPU capability handling to cpufeature.c. No functional
changes.

Cc: Will Deacon
Cc: Catalin Marinas
Cc: Mark Rutland
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
---
 arch/arm64/kernel/cpufeature.c | 5 +++++
 arch/arm64/kernel/smp.c        | 6 ------
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 8e037a519e02..65779a1644d1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -476,6 +476,11 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 		init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
 	}

+	/*
+	 * Run the errata work around checks on the boot CPU, once we have
+	 * initialised the cpu feature infrastructure.
+	 */
+	update_cpu_errata_workarounds();
 }

 static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 13b9c20a84b5..ea4aedb6bbdc 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -444,12 +444,6 @@ void __init smp_prepare_boot_cpu(void)
 	jump_label_init();
 	cpuinfo_store_boot_cpu();
 	save_boot_cpu_run_el();
-	/*
-	 * Run the errata work around checks on the boot CPU, once we have
-	 * initialised the cpu feature infrastructure from
-	 * cpuinfo_store_boot_cpu() above.
-	 */
-	update_cpu_errata_workarounds();
 }

 static u64 __init of_get_cpu_mpidr(struct device_node *dn)

From patchwork Wed Apr 6 16:45:11 2022
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 558572
From: James Morse
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: James Morse, Catalin Marinas
Subject: [stable:PATCH v4.9.309 08/43] arm64: capabilities: Add flags to
 handle the conflicts on late CPU
Date: Wed, 6 Apr 2022 17:45:11 +0100
Message-Id: <20220406164546.1888528-8-james.morse@arm.com>
In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com>
References: <0220406164217.1888053-1-james.morse@arm.com>
 <20220406164546.1888528-1-james.morse@arm.com>

From: Suzuki K Poulose

[ Upstream commit 5b4747c5dce7a873e1e7fe1608835825f714267a ]

When a CPU is brought up, it is checked against the caps that are known
to be enabled on the system (via verify_local_cpu_capabilities()).
Based on the state of the capability on the CPU vs. that of the system,
we could have the following combinations of conflict:

	x-----------------------------x
	| Type  | System   | Late CPU |
	|-----------------------------|
	|  a    |   y      |    n     |
	|-----------------------------|
	|  b    |   n      |    y     |
	x-----------------------------x

Case (a) is not permitted for caps which are system features, which the
system expects all the CPUs to have (e.g. VHE), while (a) is ignored
for all errata work arounds. However, there could be exceptions to the
plain filtering approach: e.g. KPTI is an optional feature for a late
CPU as long as the system already enables it.

Case (b) is not permitted for errata work arounds that cannot be
activated after the kernel has finished booting, and we ignore (b) for
features. Here, yet again, KPTI is an exception, where if a late CPU
needs KPTI we are too late to enable it (because we change the
allocation of ASIDs etc.).

Add two different flags to indicate how the conflict should be handled:

 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - CPUs may have the capability
 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU  - CPUs may not have the capability

Now that we have the flags to describe the behavior of the errata and
the features, as we treat them, define types for ERRATUM and FEATURE.
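The decision the two flags encode can be sketched as a stand-alone helper (a simplified illustration, not the kernel's verification code; the bit values match the BIT(4)/BIT(5) definitions in the diff below):

#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU (1u << 4)	/* case (b) allowed */
#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU  (1u << 5)	/* case (a) allowed */

/* May a late CPU come online, given the system and CPU capability state? */
static int late_cpu_allowed(unsigned int type, int system_has_cap, int cpu_has_cap)
{
	if (system_has_cap && !cpu_has_cap)	/* case (a) */
		return !!(type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
	if (!system_has_cap && cpu_has_cap)	/* case (b) */
		return !!(type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
	return 1;				/* states agree: no conflict */
}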
Cc: Will Deacon
Cc: Mark Rutland
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
---
 arch/arm64/include/asm/cpufeature.h | 68 +++++++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c      | 10 ++---
 arch/arm64/kernel/cpufeature.c      | 22 +++++-----
 3 files changed, 84 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b7d0baeaad41..542a082dfdfc 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -130,6 +130,7 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  *    an action, based on the severity (e.g, a CPU could be prevented from
  *    booting or cause a kernel panic). The CPU is allowed to "affect" the
  *    state of the capability, if it has not been finalised already.
+ *    See section 5 for more details on conflicts.
  *
  * 4) Action: As mentioned in (2), the kernel can take an action for each
  *    detected capability, on all CPUs on the system. Appropriate actions
@@ -147,6 +148,34 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
  *
  *	check_local_cpu_capabilities() -> verify_local_cpu_capabilities()
  *
+ * 5) Conflicts: Based on the state of the capability on a late CPU vs.
+ *    the system state, we could have the following combinations :
+ *
+ *	x-----------------------------x
+ *	| Type  | System   | Late CPU |
+ *	|-----------------------------|
+ *	|  a    |   y      |    n     |
+ *	|-----------------------------|
+ *	|  b    |   n      |    y     |
+ *	x-----------------------------x
+ *
+ *     Two separate flag bits are defined to indicate whether each kind of
+ *     conflict can be allowed:
+ *	ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU - Case(a) is allowed
+ *	ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - Case(b) is allowed
+ *
+ *     Case (a) is not permitted for a capability that the system requires
+ *     all CPUs to have in order for the capability to be enabled. This is
+ *     typical for capabilities that represent enhanced functionality.
+ *
+ *     Case (b) is not permitted for a capability that must be enabled
+ *     during boot if any CPU in the system requires it in order to run
+ *     safely. This is typical for erratum work arounds that cannot be
+ *     enabled after the corresponding capability is finalised.
+ *
+ *     In some non-typical cases either both (a) and (b), or neither,
+ *     should be permitted. This can be described by including neither
+ *     or both flags in the capability's type field.
 */

@@ -160,6 +189,33 @@ extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
 #define SCOPE_SYSTEM ARM64_CPUCAP_SCOPE_SYSTEM
 #define SCOPE_LOCAL_CPU ARM64_CPUCAP_SCOPE_LOCAL_CPU

+/*
+ * Is it permitted for a late CPU to have this capability when system
+ * hasn't already enabled it ?
+ */
+#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU ((u16)BIT(4))
+/* Is it safe for a late CPU to miss this capability when system has it */
+#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU ((u16)BIT(5))
+
+/*
+ * CPU errata workarounds that need to be enabled at boot time if one or
+ * more CPUs in the system requires it. When one of these capabilities
+ * has been enabled, it is safe to allow any CPU to boot that doesn't
+ * require the workaround. However, it is not safe if a "late" CPU
+ * requires a workaround and the system hasn't enabled it already.
+ */
+#define ARM64_CPUCAP_LOCAL_CPU_ERRATUM		\
+	(ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
+/*
+ * CPU feature detected at boot time based on system-wide value of a
+ * feature. It is safe for a late CPU to have this feature even though
+ * the system hasn't enabled it, although the feature will not be used
+ * by Linux in this case. If the system has enabled this feature already,
+ * then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_SYSTEM_FEATURE	\
+	(ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
@@ -193,6 +249,18 @@ static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
 	return cap->type & ARM64_CPUCAP_SCOPE_MASK;
 }

+static inline bool
+cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
+}
+
+static inline bool
+cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
+{
+	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
+}
+
 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
 extern struct static_key_false arm64_const_caps_ready;
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 40f2c203e907..242c2ec110d6 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -369,14 +369,14 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 #endif	/* CONFIG_ARM64_SSBD */

 #define MIDR_RANGE(model, min, max)			\
-	.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,		\
+	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,		\
 	.matches = is_affected_midr_range,		\
 	.midr_model = model,				\
 	.midr_range_min = min,				\
 	.midr_range_max = max

 #define MIDR_ALL_VERSIONS(model)			\
-	.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,		\
+	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,		\
 	.matches = is_affected_midr_range,		\
 	.midr_model = model,				\
 	.midr_range_min = 0,				\
@@ -459,14 +459,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "Mismatched cache line size",
 		.capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
 		.matches = has_mismatched_cache_type,
-		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 	{
 		.desc = "Mismatched cache type",
 		.capability = ARM64_MISMATCHED_CACHE_TYPE,
 		.matches = has_mismatched_cache_type,
-		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.cpu_enable = cpu_enable_trap_ctr_access,
 	},
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -504,7 +504,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #ifdef CONFIG_ARM64_SSBD
 	{
 		.desc = "Speculative Store Bypass Disable",
-		.type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
 		.capability = ARM64_SSBD,
 		.matches = has_ssbd_mitigation,
 	},
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4f3fc0bbf9c3..51567cd83099 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -865,7 +865,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
 		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_useable_gicv3_cpuif,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.field_pos = ID_AA64PFR0_GIC_SHIFT,
@@ -876,7 +876,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Privileged Access Never",
 		.capability = ARM64_HAS_PAN,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64MMFR1_EL1,
 		.field_pos = ID_AA64MMFR1_PAN_SHIFT,
@@ -889,7 +889,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "LSE atomic instructions",
 		.capability = ARM64_HAS_LSE_ATOMICS,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64ISAR0_EL1,
 		.field_pos = ID_AA64ISAR0_ATOMICS_SHIFT,
@@ -900,14 +900,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Software prefetching using PRFM",
 		.capability = ARM64_HAS_NO_HW_PREFETCH,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_no_hw_prefetch,
 	},
 #ifdef CONFIG_ARM64_UAO
 	{
 		.desc = "User Access Override",
 		.capability = ARM64_HAS_UAO,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64MMFR2_EL1,
 		.field_pos = ID_AA64MMFR2_UAO_SHIFT,
@@ -921,21 +921,21 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 #ifdef CONFIG_ARM64_PAN
 	{
 		.capability = ARM64_ALT_PAN_NOT_UAO,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = cpufeature_pan_not_uao,
 	},
 #endif /* CONFIG_ARM64_PAN */
 	{
 		.desc = "Virtualization Host Extensions",
 		.capability = ARM64_HAS_VIRT_HOST_EXTN,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = runs_at_el2,
 		.cpu_enable = cpu_copy_el2regs,
 	},
 	{
 		.desc = "32-bit EL0 Support",
 		.capability = ARM64_HAS_32BIT_EL0,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = has_cpuid_feature,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
@@ -945,14 +945,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "Reduced HYP mapping offset",
 		.capability = ARM64_HYP_OFFSET_LOW,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = hyp_offset_low,
 	},
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 	{
 		.desc = "Kernel page table isolation (KPTI)",
 		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
 		.matches = unmap_kernel_at_el0,
 		.cpu_enable = kpti_install_ng_mappings,
 	},
@@ -963,7 +963,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 #define HWCAP_CAP(reg, field, s, min_value, cap_type, cap)	\
 	{							\
 		.desc = #cap,					\
-		.type = ARM64_CPUCAP_SCOPE_SYSTEM,		\
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,		\
 		.matches = has_cpuid_feature,			\
 		.sys_reg = reg,					\
 		.field_pos = field,				\

From patchwork Wed Apr 6 16:45:13 2022
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 558567
From: James Morse
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: James Morse, Catalin Marinas
Subject: [stable:PATCH v4.9.309 10/43] arm64: Add helpers for checking CPU
 MIDR against a range
Date: Wed, 6 Apr 2022 17:45:13 +0100
Message-Id: <20220406164546.1888528-10-james.morse@arm.com>
In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com>
References: <0220406164217.1888053-1-james.morse@arm.com>
 <20220406164546.1888528-1-james.morse@arm.com>

From: Suzuki K Poulose

[ Upstream commit 1df310505d6d544802016f6bae49aab836ae8510 ]

Add helpers for checking if the given CPU midr falls in a range of
variants/revisions for a given model.
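A user-space sketch of the check being introduced (mask and shift values mirror the MIDR_EL1 layout in asm/cputype.h; this simplifies what MIDR_IS_CPU_MODEL_RANGE() expands to and is not the kernel code itself):

#include <stdint.h>

#define MIDR_REVISION_MASK  0xfu
#define MIDR_VARIANT_MASK   (0xfu << 20)
#define MIDR_CPU_MODEL_MASK ((0xffu << 24) | (0xfu << 16) | (0xfffu << 4))

struct midr_range {
        uint32_t model;
        uint32_t rv_min;
        uint32_t rv_max;
};

static int is_midr_in_range(uint32_t midr, const struct midr_range *range)
{
        uint32_t model   = midr & MIDR_CPU_MODEL_MASK;
        /* Variant sits above revision, so one numeric compare orders both. */
        uint32_t var_rev = midr & (MIDR_VARIANT_MASK | MIDR_REVISION_MASK);

        return model == range->model &&
               var_rev >= range->rv_min && var_rev <= range->rv_max;
}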
Cc: Will Deacon
Cc: Mark Rutland
Cc: Ard Biesheuvel
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
---
 arch/arm64/include/asm/cpufeature.h |  4 ++--
 arch/arm64/include/asm/cputype.h    | 30 +++++++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c      | 14 +++++---------
 3 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 542a082dfdfc..f8646d7fae49 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -10,6 +10,7 @@
 #define __ASM_CPUFEATURE_H

 #include
+#include
 #include
 #include

@@ -229,8 +230,7 @@ struct arm64_cpu_capabilities {
 	void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);
 	union {
 		struct {	/* To be used for erratum handling only */
-			u32 midr_model;
-			u32 midr_range_min, midr_range_max;
+			struct midr_range midr_range;
 		};

 		struct {	/* Feature register checking */
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index 2270a5f31271..e76245ea6b60 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -113,6 +113,36 @@

 #define read_cpuid(reg) read_sysreg_s(SYS_ ## reg)

+/*
+ * Represent a range of MIDR values for a given CPU model and a
+ * range of variant/revision values.
+ *
+ * @model	- CPU model as defined by MIDR_CPU_MODEL
+ * @rv_min	- Minimum value for the revision/variant as defined by
+ *		  MIDR_CPU_VAR_REV
+ * @rv_max	- Maximum value for the variant/revision for the range.
+ */
+struct midr_range {
+	u32 model;
+	u32 rv_min;
+	u32 rv_max;
+};
+
+#define MIDR_RANGE(m, v_min, r_min, v_max, r_max)		\
+	{							\
+		.model = m,					\
+		.rv_min = MIDR_CPU_VAR_REV(v_min, r_min),	\
+		.rv_max = MIDR_CPU_VAR_REV(v_max, r_max),	\
+	}
+
+#define MIDR_ALL_VERSIONS(m) MIDR_RANGE(m, 0, 0, 0xf, 0xf)
+
+static inline bool is_midr_in_range(u32 midr, struct midr_range const *range)
+{
+	return MIDR_IS_CPU_MODEL_RANGE(midr, range->model,
+				       range->rv_min, range->rv_max);
+}
+
 /*
  * The CPU ID never changes at run time, so we might as well tell the
  * compiler that it's constant. Use this function to read the CPU ID
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 858a4954907d..e2630e88e8ad 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -27,10 +27,10 @@ static bool __maybe_unused
 is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
 {
+	u32 midr = read_cpuid_id();
+
 	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
-	return MIDR_IS_CPU_MODEL_RANGE(read_cpuid_id(), entry->midr_model,
-				       entry->midr_range_min,
-				       entry->midr_range_max);
+	return is_midr_in_range(midr, &entry->midr_range);
 }

 static bool
@@ -370,15 +370,11 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,

 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)	\
 	.matches = is_affected_midr_range,			\
-	.midr_model = model,					\
-	.midr_range_min = MIDR_CPU_VAR_REV(v_min, r_min),	\
-	.midr_range_max = MIDR_CPU_VAR_REV(v_max, r_max)
+	.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)

 #define CAP_MIDR_ALL_VERSIONS(model)				\
 	.matches = is_affected_midr_range,			\
-	.midr_model = model,					\
-	.midr_range_min = MIDR_CPU_VAR_REV(0, 0),		\
-	.midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)
+	.midr_range = MIDR_ALL_VERSIONS(model)

 #define MIDR_FIXED(rev, revidr_mask) \
 	.fixed_revs = (struct arm64_midr_revidr[]){{ (rev), (revidr_mask) }, {}}

From patchwork Wed Apr 6 16:45:14 2022
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 558568
From: James Morse
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: James Morse, Catalin Marinas
Subject: [stable:PATCH v4.9.309 11/43] arm64: capabilities: Add support for
 checks based on a list of MIDRs
Date: Wed, 6 Apr 2022 17:45:14 +0100
Message-Id: <20220406164546.1888528-11-james.morse@arm.com>
In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com>
References: <0220406164217.1888053-1-james.morse@arm.com>
 <20220406164546.1888528-1-james.morse@arm.com>

From: Suzuki K Poulose

[ Upstream commit be5b299830c63ed76e0357473c4218c85fb388b3 ]

Add helpers for detecting an erratum on a list of MIDR ranges of
affected CPUs, with the same work around.
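The list variant simply walks a sentinel-terminated array; a sketch building on the midr_range sketch shown after patch 10/43 (model == 0 acts as the terminator, matching the {} entry in the diff below):

static int is_midr_in_range_list(uint32_t midr, const struct midr_range *ranges)
{
        /* The terminating {} entry has model == 0. */
        for (; ranges->model; ranges++)
                if (is_midr_in_range(midr, ranges))
                        return 1;
        return 0;
}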
Cc: Will Deacon
Cc: Mark Rutland
Cc: Ard Biesheuvel
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
[ardb: add Cortex-A35 to kpti_safe_list[] as well]
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
---
 arch/arm64/include/asm/cpufeature.h |  1 +
 arch/arm64/include/asm/cputype.h    |  9 +++++
 arch/arm64/kernel/cpu_errata.c      | 62 +++++++++++++++++------------
 arch/arm64/kernel/cpufeature.c      | 21 +++++-----
 4 files changed, 58 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f8646d7fae49..45206dd20ffd 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -233,6 +233,7 @@ struct arm64_cpu_capabilities {
 			struct midr_range midr_range;
 		};

+		const struct midr_range *midr_range_list;
 		struct {	/* Feature register checking */
 			u32 sys_reg;
 			u8 field_pos;
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index e76245ea6b60..61041e051acb 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -143,6 +143,15 @@ static inline bool is_midr_in_range(u32 midr, struct midr_range const *range)
 				       range->rv_min, range->rv_max);
 }

+static inline bool
+is_midr_in_range_list(u32 midr, struct midr_range const *ranges)
+{
+	while (ranges->model)
+		if (is_midr_in_range(midr, ranges++))
+			return true;
+	return false;
+}
+
 /*
  * The CPU ID never changes at run time, so we might as well tell the
  * compiler that it's constant. Use this function to read the CPU ID
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index e2630e88e8ad..69c3492cb063 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -33,6 +33,14 @@ is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
 	return is_midr_in_range(midr, &entry->midr_range);
 }

+static bool __maybe_unused
+is_affected_midr_range_list(const struct arm64_cpu_capabilities *entry,
+			    int scope)
+{
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+	return is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+}
+
 static bool
 has_mismatched_cache_type(const struct arm64_cpu_capabilities *entry,
 			  int scope)
@@ -383,6 +391,10 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)

+#define CAP_MIDR_RANGE_LIST(list)				\
+	.matches = is_affected_midr_range_list,			\
+	.midr_range_list = list
+
 /* Errata affecting a range of revisions of given model variant */
 #define ERRATA_MIDR_REV_RANGE(m, var, r_min, r_max)	\
 	ERRATA_MIDR_RANGE(m, var, r_min, var, r_max)
@@ -396,6 +408,29 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_ALL_VERSIONS(model)

+/* Errata affecting a list of midr ranges, with same work around */
+#define ERRATA_MIDR_RANGE_LIST(midr_list)			\
+	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
+	CAP_MIDR_RANGE_LIST(midr_list)
+
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+
+/*
+ * List of CPUs where we need to issue a psci call to
+ * harden the branch predictor.
+ */
+static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+	{},
+};
+
+#endif
+
 const struct arm64_cpu_capabilities arm64_errata[] = {
 #if defined(CONFIG_ARM64_ERRATUM_826319) || \
 	defined(CONFIG_ARM64_ERRATUM_827319) || \
@@ -486,32 +521,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-		.cpu_enable = enable_smccc_arch_workaround_1,
-	},
-	{
-		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		ERRATA_MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
 		.cpu_enable = enable_smccc_arch_workaround_1,
 	},
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 51567cd83099..1b5afb80247d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -767,6 +767,17 @@ static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 				int __unused)
 {
+	/* List of CPUs that are not vulnerable and don't need KPTI */
+	static const struct midr_range kpti_safe_list[] = {
+		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
+		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+		MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+	};
 	char const *str = "command line option";
 	u64 pfr0 = read_system_reg(SYS_ID_AA64PFR0_EL1);
@@ -792,16 +803,8 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
 		return true;

 	/* Don't force KPTI for CPUs that are not vulnerable */
-	switch (read_cpuid_id() & MIDR_CPU_MODEL_MASK) {
-	case MIDR_CAVIUM_THUNDERX2:
-	case MIDR_BRCM_VULCAN:
-	case MIDR_CORTEX_A53:
-	case MIDR_CORTEX_A55:
-	case MIDR_CORTEX_A57:
-	case MIDR_CORTEX_A72:
-	case MIDR_CORTEX_A73:
+	if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
 		return false;
-	}

 	/* Defer to CPU feature registers */
 	return !cpuid_feature_extract_unsigned_field(pfr0,

From patchwork Wed Apr 6 16:45:16 2022
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 558566
From: James Morse
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: James Morse, Catalin Marinas
Subject: [stable:PATCH v4.9.309 13/43] clocksource/drivers/arm_arch_timer:
 Introduce generic errata handling infrastructure
Date: Wed, 6 Apr 2022 17:45:16 +0100
Message-Id: <20220406164546.1888528-13-james.morse@arm.com>
In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com>
References: <0220406164217.1888053-1-james.morse@arm.com>
 <20220406164546.1888528-1-james.morse@arm.com>

From: Ding Tianhong

commit 16d10ef29f25aba923779234bb93a451b14d20e6 upstream.

Currently we have code inline in the arch timer probe path to cater for
Freescale erratum A-008585, complete with ifdeffery. This is a little
ugly, and will get worse as we try to add more errata handling.

This patch refactors the handling of Freescale erratum A-008585. Now
the erratum is described in a generic arch_timer_erratum_workaround
structure, and the probe path can iterate over these to detect errata
and enable workarounds.

This will simplify the addition and maintenance of code handling
Hisilicon erratum 161010101.
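The heart of the A-008585 workaround being generalised here is a read-until-stable loop; a stand-alone sketch of the pattern (the retry bound of 200 comes from the patch's comment; the read callback stands in for the sysreg access):

#include <stdint.h>

/* Read a counter twice, back to back, until both reads agree. */
static uint64_t stable_counter_read(uint64_t (*read)(void))
{
        uint64_t old, new;
        int retries = 200;      /* "well beyond the highest number observed" */

        do {
                old = read();
                new = read();
        } while (old != new && --retries);

        return new;
}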
+config ARM_ARCH_TIMER_OOL_WORKAROUND + bool + config FSL_ERRATUM_A008585 bool "Workaround for Freescale/NXP Erratum A-008585" default y depends on ARM_ARCH_TIMER && ARM64 + select ARM_ARCH_TIMER_OOL_WORKAROUND help This option enables a workaround for Freescale/NXP Erratum A-008585 ("ARM generic timer may contain an erroneous diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index c2a5e8252cd7..4b268ef9cc78 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -96,27 +96,58 @@ early_param("clocksource.arm_arch_timer.evtstrm", early_evtstrm_cfg); */ #ifdef CONFIG_FSL_ERRATUM_A008585 -DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled); -EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled); +/* + * The number of retries is an arbitrary value well beyond the highest number + * of iterations the loop has been observed to take. + */ +#define __fsl_a008585_read_reg(reg) ({ \ + u64 _old, _new; \ + int _retries = 200; \ + \ + do { \ + _old = read_sysreg(reg); \ + _new = read_sysreg(reg); \ + _retries--; \ + } while (unlikely(_old != _new) && _retries); \ + \ + WARN_ON_ONCE(!_retries); \ + _new; \ +}) -static int fsl_a008585_enable = -1; - -u32 __fsl_a008585_read_cntp_tval_el0(void) +static u32 notrace fsl_a008585_read_cntp_tval_el0(void) { return __fsl_a008585_read_reg(cntp_tval_el0); } -u32 __fsl_a008585_read_cntv_tval_el0(void) +static u32 notrace fsl_a008585_read_cntv_tval_el0(void) { return __fsl_a008585_read_reg(cntv_tval_el0); } -u64 __fsl_a008585_read_cntvct_el0(void) +static u64 notrace fsl_a008585_read_cntvct_el0(void) { return __fsl_a008585_read_reg(cntvct_el0); } -EXPORT_SYMBOL(__fsl_a008585_read_cntvct_el0); -#endif /* CONFIG_FSL_ERRATUM_A008585 */ +#endif + +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND +const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround = NULL; +EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround); + +DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled); +EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled); + +static const struct arch_timer_erratum_workaround ool_workarounds[] = { +#ifdef CONFIG_FSL_ERRATUM_A008585 + { + .id = "fsl,erratum-a008585", + .read_cntp_tval_el0 = fsl_a008585_read_cntp_tval_el0, + .read_cntv_tval_el0 = fsl_a008585_read_cntv_tval_el0, + .read_cntvct_el0 = fsl_a008585_read_cntvct_el0, + }, +#endif +}; +#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */ static __always_inline void arch_timer_reg_write(int access, enum arch_timer_reg reg, u32 val, @@ -267,8 +298,8 @@ static __always_inline void set_next_event(const int access, unsigned long evt, arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk); } -#ifdef CONFIG_FSL_ERRATUM_A008585 -static __always_inline void fsl_a008585_set_next_event(const int access, +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND +static __always_inline void erratum_set_next_event_generic(const int access, unsigned long evt, struct clock_event_device *clk) { unsigned long ctrl; @@ -286,20 +317,20 @@ static __always_inline void fsl_a008585_set_next_event(const int access, arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk); } -static int fsl_a008585_set_next_event_virt(unsigned long evt, +static int erratum_set_next_event_virt(unsigned long evt, struct clock_event_device *clk) { - fsl_a008585_set_next_event(ARCH_TIMER_VIRT_ACCESS, evt, clk); + erratum_set_next_event_generic(ARCH_TIMER_VIRT_ACCESS, evt, clk); return 0; } -static int fsl_a008585_set_next_event_phys(unsigned long evt, +static int 
erratum_set_next_event_phys(unsigned long evt, struct clock_event_device *clk) { - fsl_a008585_set_next_event(ARCH_TIMER_PHYS_ACCESS, evt, clk); + erratum_set_next_event_generic(ARCH_TIMER_PHYS_ACCESS, evt, clk); return 0; } -#endif /* CONFIG_FSL_ERRATUM_A008585 */ +#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */ static int arch_timer_set_next_event_virt(unsigned long evt, struct clock_event_device *clk) @@ -329,16 +360,16 @@ static int arch_timer_set_next_event_phys_mem(unsigned long evt, return 0; } -static void fsl_a008585_set_sne(struct clock_event_device *clk) +static void erratum_workaround_set_sne(struct clock_event_device *clk) { -#ifdef CONFIG_FSL_ERRATUM_A008585 +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND if (!static_branch_unlikely(&arch_timer_read_ool_enabled)) return; if (arch_timer_uses_ppi == VIRT_PPI) - clk->set_next_event = fsl_a008585_set_next_event_virt; + clk->set_next_event = erratum_set_next_event_virt; else - clk->set_next_event = fsl_a008585_set_next_event_phys; + clk->set_next_event = erratum_set_next_event_phys; #endif } @@ -371,7 +402,7 @@ static void __arch_timer_setup(unsigned type, BUG(); } - fsl_a008585_set_sne(clk); + erratum_workaround_set_sne(clk); } else { clk->features |= CLOCK_EVT_FEAT_DYNIRQ; clk->name = "arch_mem_timer"; @@ -600,7 +631,7 @@ static void __init arch_counter_register(unsigned type) clocksource_counter.archdata.vdso_direct = true; -#ifdef CONFIG_FSL_ERRATUM_A008585 +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND /* * Don't use the vdso fastpath if errata require using * the out-of-line counter accessor. @@ -888,12 +919,15 @@ static int __init arch_timer_of_init(struct device_node *np) arch_timer_c3stop = !of_property_read_bool(np, "always-on"); -#ifdef CONFIG_FSL_ERRATUM_A008585 - if (fsl_a008585_enable < 0) - fsl_a008585_enable = of_property_read_bool(np, "fsl,erratum-a008585"); - if (fsl_a008585_enable) { - static_branch_enable(&arch_timer_read_ool_enabled); - pr_info("Enabling workaround for FSL erratum A-008585\n"); +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND + for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) { + if (of_property_read_bool(np, ool_workarounds[i].id)) { + timer_unstable_counter_workaround = &ool_workarounds[i]; + static_branch_enable(&arch_timer_read_ool_enabled); + pr_info("arch_timer: Enabling workaround for %s\n", + timer_unstable_counter_workaround->id); + break; + } } #endif From patchwork Wed Apr 6 16:45:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558565 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27755C433F5 for ; Wed, 6 Apr 2022 18:06:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240113AbiDFSIX (ORCPT ); Wed, 6 Apr 2022 14:08:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56632 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240441AbiDFSHy (ORCPT ); Wed, 6 Apr 2022 14:07:54 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id DE5E618CD0C; Wed, 6 Apr 2022 09:46:15 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C501512FC; Wed, 6 Apr 2022 09:46:15 -0700 (PDT) 
Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2334C3F73B; Wed, 6 Apr 2022 09:46:15 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 15/43] arm64: arch_timer: Add erratum handler for CPU-specific capability Date: Wed, 6 Apr 2022 17:45:18 +0100 Message-Id: <20220406164546.1888528-15-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier commit 0064030c6fd4ca6cfab42de037b2a89445beeead upstream. Should we ever have a workaround for an erratum that is detected using a capability and affecting a particular CPU, it'd be nice to have a way to probe them directly. Acked-by: Thomas Gleixner Signed-off-by: Marc Zyngier Signed-off-by: James Morse --- arch/arm64/include/asm/arch_timer.h | 1 + drivers/clocksource/arm_arch_timer.c | 28 ++++++++++++++++++++++++---- 2 files changed, 25 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h index 5cd964e90d11..1b0d7e994e0c 100644 --- a/arch/arm64/include/asm/arch_timer.h +++ b/arch/arm64/include/asm/arch_timer.h @@ -39,6 +39,7 @@ extern struct static_key_false arch_timer_read_ool_enabled; enum arch_timer_erratum_match_type { ate_match_dt, + ate_match_local_cap_id, }; struct arch_timer_erratum_workaround { diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index 015b28e7f1f2..138dbfdfb413 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -162,6 +162,13 @@ bool arch_timer_check_dt_erratum(const struct arch_timer_erratum_workaround *wa, return of_property_read_bool(np, wa->id); } +static +bool arch_timer_check_local_cap_erratum(const struct arch_timer_erratum_workaround *wa, + const void *arg) +{ + return this_cpu_has_cap((uintptr_t)wa->id); +} + static const struct arch_timer_erratum_workaround * arch_timer_iterate_errata(enum arch_timer_erratum_match_type type, ate_match_fn_t match_fn, @@ -192,14 +199,16 @@ static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type t { const struct arch_timer_erratum_workaround *wa; ate_match_fn_t match_fn = NULL; - - if (static_branch_unlikely(&arch_timer_read_ool_enabled)) - return; + bool local = false; switch (type) { case ate_match_dt: match_fn = arch_timer_check_dt_erratum; break; + case ate_match_local_cap_id: + match_fn = arch_timer_check_local_cap_erratum; + local = true; + break; default: WARN_ON(1); return; @@ -209,8 +218,17 @@ static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type t if (!wa) return; + if (needs_unstable_timer_counter_workaround()) { + if (wa != timer_unstable_counter_workaround) + pr_warn("Can't enable workaround for %s (clashes with %s)\n", + wa->desc, + timer_unstable_counter_workaround->desc); + return; + } + arch_timer_enable_workaround(wa); - pr_info("Enabling global workaround for %s\n", wa->desc); + pr_info("Enabling %s workaround for %s\n", + local ? 
"local" : "global", wa->desc); } #else @@ -470,6 +488,8 @@ static void __arch_timer_setup(unsigned type, BUG(); } + arch_timer_check_ool_workaround(ate_match_local_cap_id, NULL); + erratum_workaround_set_sne(clk); } else { clk->features |= CLOCK_EVT_FEAT_DYNIRQ; From patchwork Wed Apr 6 16:45:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558564 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D625EC433FE for ; Wed, 6 Apr 2022 18:06:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240097AbiDFSIy (ORCPT ); Wed, 6 Apr 2022 14:08:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43550 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240449AbiDFSHy (ORCPT ); Wed, 6 Apr 2022 14:07:54 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id C00E818D98A; Wed, 6 Apr 2022 09:46:16 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A738123A; Wed, 6 Apr 2022 09:46:16 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 05D433F73B; Wed, 6 Apr 2022 09:46:15 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 16/43] arm64: arch_timer: Add workaround for ARM erratum 1188873 Date: Wed, 6 Apr 2022 17:45:19 +0100 Message-Id: <20220406164546.1888528-16-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier commit 95b861a4a6d94f64d5242605569218160ebacdbe upstream. When running on Cortex-A76, a timer access from an AArch32 EL0 task may end up with a corrupted value or register. The workaround for this is to trap these accesses at EL1/EL2 and execute them there. This only affects versions r0p0, r1p0 and r2p0 of the CPU. Acked-by: Mark Rutland Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas Signed-off-by: James Morse --- arch/arm64/Kconfig | 12 ++++++++++++ arch/arm64/include/asm/cpucaps.h | 3 ++- arch/arm64/include/asm/cputype.h | 2 ++ arch/arm64/kernel/cpu_errata.c | 8 ++++++++ drivers/clocksource/arm_arch_timer.c | 15 +++++++++++++++ 5 files changed, 39 insertions(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index b12275be0e13..a36595c1557b 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -441,6 +441,18 @@ config ARM64_ERRATUM_1024718 If unsure, say Y. +config ARM64_ERRATUM_1188873 + bool "Cortex-A76: MRC read following MRRC read of specific Generic Timer in AArch32 might give incorrect result" + default y + help + This option adds work arounds for ARM Cortex-A76 erratum 1188873 + + Affected Cortex-A76 cores (r0p0, r1p0, r2p0) could cause + register corruption when accessing the timer registers from + AArch32 userspace. + + If unsure, say Y. 
+ config CAVIUM_ERRATUM_22375 bool "Cavium erratum 22375, 24313" default y diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h index 8c7c4b23a8b1..d4a46764c1ad 100644 --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -38,7 +38,8 @@ #define ARM64_HARDEN_BRANCH_PREDICTOR 17 #define ARM64_SSBD 18 #define ARM64_MISMATCHED_CACHE_TYPE 19 +#define ARM64_WORKAROUND_1188873 20 -#define ARM64_NCAPS 20 +#define ARM64_NCAPS 21 #endif /* __ASM_CPUCAPS_H */ diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h index 61041e051acb..76b551e83f2d 100644 --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -85,6 +85,7 @@ #define ARM_CPU_PART_CORTEX_A75 0xD0A #define ARM_CPU_PART_CORTEX_A35 0xD04 #define ARM_CPU_PART_CORTEX_A55 0xD05 +#define ARM_CPU_PART_CORTEX_A76 0xD0B #define APM_CPU_PART_POTENZA 0x000 @@ -102,6 +103,7 @@ #define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75) #define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35) #define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55) +#define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) #define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2) diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index 69c3492cb063..37cb8c23ccc6 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -532,6 +532,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = { .capability = ARM64_SSBD, .matches = has_ssbd_mitigation, }, +#endif +#ifdef CONFIG_ARM64_ERRATUM_1188873 + { + /* Cortex-A76 r0p0 to r2p0 */ + .desc = "ARM erratum 1188873", + .capability = ARM64_WORKAROUND_1188873, + ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 2, 0), + }, #endif { } diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c index 138dbfdfb413..e70d0974470c 100644 --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -130,6 +130,13 @@ static u64 notrace fsl_a008585_read_cntvct_el0(void) } #endif +#ifdef CONFIG_ARM64_ERRATUM_1188873 +static u64 notrace arm64_1188873_read_cntvct_el0(void) +{ + return read_sysreg(cntvct_el0); +} +#endif + #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround = NULL; EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround); @@ -148,6 +155,14 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = { .read_cntvct_el0 = fsl_a008585_read_cntvct_el0, }, #endif +#ifdef CONFIG_ARM64_ERRATUM_1188873 + { + .match_type = ate_match_local_cap_id, + .id = (void *)ARM64_WORKAROUND_1188873, + .desc = "ARM erratum 1188873", + .read_cntvct_el0 = arm64_1188873_read_cntvct_el0, + }, +#endif }; typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *, From patchwork Wed Apr 6 16:45:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558563 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org 
(Postfix) with ESMTP id 878E2C4332F for ; Wed, 6 Apr 2022 18:06:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240155AbiDFSI6 (ORCPT ); Wed, 6 Apr 2022 14:08:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42952 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240473AbiDFSHz (ORCPT ); Wed, 6 Apr 2022 14:07:55 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 65FB2B89BD; Wed, 6 Apr 2022 09:46:19 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4CC5112FC; Wed, 6 Apr 2022 09:46:19 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9FBEE3F73B; Wed, 6 Apr 2022 09:46:18 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 19/43] arm64: Make ARM64_ERRATUM_1188873 depend on COMPAT Date: Wed, 6 Apr 2022 17:45:22 +0100 Message-Id: <20220406164546.1888528-19-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier commit c2b5bba3967a000764e9148e6f020d776b7ecd82 upstream. Since ARM64_ERRATUM_1188873 only affects AArch32 EL0, it makes some sense that it should depend on COMPAT. Signed-off-by: Marc Zyngier Signed-off-by: Will Deacon Signed-off-by: James Morse --- arch/arm64/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 93bf53aa02d4..42719bd58046 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -444,6 +444,7 @@ config ARM64_ERRATUM_1024718 config ARM64_ERRATUM_1188873 bool "Cortex-A76: MRC read following MRRC read of specific Generic Timer in AArch32 might give incorrect result" default y + depends on COMPAT select ARM_ARCH_TIMER_OOL_WORKAROUND help This option adds work arounds for ARM Cortex-A76 erratum 1188873 From patchwork Wed Apr 6 16:45:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558562 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 64A67C4332F for ; Wed, 6 Apr 2022 18:07:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240205AbiDFSJK (ORCPT ); Wed, 6 Apr 2022 14:09:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56730 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240493AbiDFSIC (ORCPT ); Wed, 6 Apr 2022 14:08:02 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 0B7E34FC62; Wed, 6 Apr 2022 09:46:22 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E6E521516; Wed, 6 Apr 2022 09:46:21 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com 
[10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 45B043F73B; Wed, 6 Apr 2022 09:46:21 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 22/43] arm64: Add Neoverse-N2, Cortex-A710 CPU part definition Date: Wed, 6 Apr 2022 17:45:25 +0100 Message-Id: <20220406164546.1888528-22-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose commit 2d0d656700d67239a57afaf617439143d8dac9be upstream. Add the CPU Partnumbers for the new Arm designs. Cc: Catalin Marinas Cc: Mark Rutland Cc: Will Deacon Acked-by: Catalin Marinas Reviewed-by: Anshuman Khandual Signed-off-by: Suzuki K Poulose Link: https://lore.kernel.org/r/20211019163153.3692640-2-suzuki.poulose@arm.com Signed-off-by: Will Deacon Signed-off-by: James Morse --- arch/arm64/include/asm/cputype.h | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h index eec5d0aceb50..1a1c9ecc9fcc 100644 --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -88,6 +88,8 @@ #define ARM_CPU_PART_CORTEX_A76 0xD0B #define ARM_CPU_PART_NEOVERSE_N1 0xD0C #define ARM_CPU_PART_CORTEX_A77 0xD0D +#define ARM_CPU_PART_CORTEX_A710 0xD47 +#define ARM_CPU_PART_NEOVERSE_N2 0xD49 #define APM_CPU_PART_POTENZA 0x000 @@ -108,6 +110,8 @@ #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1) #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77) +#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) +#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) #define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2) From patchwork Wed Apr 6 16:45:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558561 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F8E8C433F5 for ; Wed, 6 Apr 2022 18:07:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240234AbiDFSJX (ORCPT ); Wed, 6 Apr 2022 14:09:23 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45626 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240504AbiDFSID (ORCPT ); Wed, 6 Apr 2022 14:08:03 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E17B7FFF73; Wed, 6 Apr 2022 09:46:22 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C8B1623A; Wed, 6 Apr 2022 09:46:22 -0700 (PDT) Received: from eglon.cambridge.arm.com 
(eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 277633F73B; Wed, 6 Apr 2022 09:46:22 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 23/43] arm64: Add Cortex-X2 CPU part definition Date: Wed, 6 Apr 2022 17:45:26 +0100 Message-Id: <20220406164546.1888528-23-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Anshuman Khandual commit 72bb9dcb6c33cfac80282713c2b4f2b254cd24d1 upstream. Add the CPU Partnumbers for the new Arm designs. Cc: Will Deacon Cc: Suzuki Poulose Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual Reviewed-by: Suzuki K Poulose Link: https://lore.kernel.org/r/1642994138-25887-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas Signed-off-by: James Morse --- arch/arm64/include/asm/cputype.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h index 1a1c9ecc9fcc..498316001ccd 100644 --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -89,6 +89,7 @@ #define ARM_CPU_PART_NEOVERSE_N1 0xD0C #define ARM_CPU_PART_CORTEX_A77 0xD0D #define ARM_CPU_PART_CORTEX_A710 0xD47 +#define ARM_CPU_PART_CORTEX_X2 0xD48 #define ARM_CPU_PART_NEOVERSE_N2 0xD49 #define APM_CPU_PART_POTENZA 0x000 @@ -111,6 +112,7 @@ #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1) #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77) #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) +#define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2) #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) From patchwork Wed Apr 6 16:45:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558560 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9ADDC433FE for ; Wed, 6 Apr 2022 18:07:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240065AbiDFSJc (ORCPT ); Wed, 6 Apr 2022 14:09:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50578 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240515AbiDFSIE (ORCPT ); Wed, 6 Apr 2022 14:08:04 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id A5760140DFA; Wed, 6 Apr 2022 09:46:24 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8C95C23A; Wed, 6 Apr 2022 09:46:24 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by 
usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DF4B73F73B; Wed, 6 Apr 2022 09:46:23 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 25/43] arm64: entry.S: Add ventry overflow sanity checks Date: Wed, 6 Apr 2022 17:45:28 +0100 Message-Id: <20220406164546.1888528-25-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit 4330e2c5c04c27bebf89d34e0bc14e6943413067 upstream. Subsequent patches add even more code to the ventry slots. Ensure kernels that overflow a ventry slot don't get built. Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas Signed-off-by: James Morse --- arch/arm64/kernel/entry.S | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index ca978d7d98eb..0414b0494dd3 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -74,6 +74,7 @@ .macro kernel_ventry, el, label, regsize = 64 .align 7 +.Lventry_start\@: #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 alternative_if ARM64_UNMAP_KERNEL_AT_EL0 .if \el == 0 @@ -89,6 +90,7 @@ alternative_else_nop_endif sub sp, sp, #S_FRAME_SIZE b el\()\el\()_\label +.org .Lventry_start\@ + 128 // Did we overflow the ventry slot? .endm .macro tramp_alias, dst, sym @@ -935,6 +937,7 @@ __ni_sys_trace: add x30, x30, #(1b - tramp_vectors) isb ret +.org 1b + 128 // Did we overflow the ventry slot? .endm .macro tramp_exit, regsize = 64 From patchwork Wed Apr 6 16:45:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558559 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 68497C433F5 for ; Wed, 6 Apr 2022 18:07:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240296AbiDFSJo (ORCPT ); Wed, 6 Apr 2022 14:09:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54268 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240531AbiDFSIG (ORCPT ); Wed, 6 Apr 2022 14:08:06 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 68FAF19456B; Wed, 6 Apr 2022 09:46:26 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 505A923A; Wed, 6 Apr 2022 09:46:26 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A31B93F73B; Wed, 6 Apr 2022 09:46:25 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 27/43] arm64: entry: Free up another register on kpti's tramp_exit path Date: Wed, 6 Apr 2022 17:45:30 +0100 Message-Id: <20220406164546.1888528-27-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> 
<20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit 03aff3a77a58b5b52a77e00537a42090ad57b80b upstream. Kpti stashes x30 in far_el1 while it uses x30 for all its work. Making the vectors a per-cpu data structure will require a second register. Allow tramp_exit two registers before it unmaps the kernel, by leaving x30 on the stack, and stashing x29 in far_el1. Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas Signed-off-by: James Morse --- arch/arm64/kernel/entry.S | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 160a3131a190..40647b5e279e 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -244,14 +244,16 @@ alternative_else_nop_endif ldp x24, x25, [sp, #16 * 12] ldp x26, x27, [sp, #16 * 13] ldp x28, x29, [sp, #16 * 14] - ldr lr, [sp, #S_LR] - add sp, sp, #S_FRAME_SIZE // restore sp .if \el == 0 -alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 +alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0 + ldr lr, [sp, #S_LR] + add sp, sp, #S_FRAME_SIZE // restore sp + eret +alternative_else_nop_endif #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 bne 4f - msr far_el1, x30 + msr far_el1, x29 tramp_alias x30, tramp_exit_native br x30 4: @@ -259,6 +261,8 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 br x30 #endif .else + ldr lr, [sp, #S_LR] + add sp, sp, #S_FRAME_SIZE // restore sp eret .endif .endm @@ -947,10 +951,12 @@ __ni_sys_trace: .macro tramp_exit, regsize = 64 adr x30, tramp_vectors msr vbar_el1, x30 - tramp_unmap_kernel x30 + ldr lr, [sp, #S_LR] + tramp_unmap_kernel x29 .if \regsize == 64 - mrs x30, far_el1 + mrs x29, far_el1 .endif + add sp, sp, #S_FRAME_SIZE // restore sp eret .endm From patchwork Wed Apr 6 16:45:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558558 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60173C433F5 for ; Wed, 6 Apr 2022 18:07:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239745AbiDFSJq (ORCPT ); Wed, 6 Apr 2022 14:09:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33180 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240545AbiDFSII (ORCPT ); Wed, 6 Apr 2022 14:08:08 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 2D80119533E; Wed, 6 Apr 2022 09:46:28 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 143BB12FC; Wed, 6 Apr 2022 09:46:28 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 66D233F73B; Wed, 6 Apr 2022 09:46:27 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 29/43] arm64: entry: Allow tramp_alias to access symbols after the 4K boundary Date: Wed, 6 Apr 2022 17:45:32 +0100 Message-Id: <20220406164546.1888528-29-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: 
<20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit 6c5bf79b69f911560fbf82214c0971af6e58e682 upstream. Systems using kpti enter and exit the kernel through a trampoline mapping that is always mapped, even when the kernel is not. tramp_valias is a macro to find the address of a symbol in the trampoline mapping. Adding extra sets of vectors will expand the size of the entry.tramp.text section to beyond 4K. tramp_valias will be unable to generate addresses for symbols beyond 4K as it uses the 12 bit immediate of the add instruction. As there are now two registers available when tramp_alias is called, use the extra register to avoid the 4K limit of the 12 bit immediate. Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas [ Removed SDEI for backport ] Signed-off-by: James Morse --- arch/arm64/kernel/entry.S | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index d665714cdca6..4c4f7df5f0f3 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -97,9 +97,12 @@ .org .Lventry_start\@ + 128 // Did we overflow the ventry slot? .endm - .macro tramp_alias, dst, sym + .macro tramp_alias, dst, sym, tmp mov_q \dst, TRAMP_VALIAS - add \dst, \dst, #(\sym - .entry.tramp.text) + adr_l \tmp, \sym + add \dst, \dst, \tmp + adr_l \tmp, .entry.tramp.text + sub \dst, \dst, \tmp .endm // This macro corrupts x0-x3. It is the caller's duty @@ -254,10 +257,10 @@ alternative_else_nop_endif #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 bne 4f msr far_el1, x29 - tramp_alias x30, tramp_exit_native + tramp_alias x30, tramp_exit_native, x29 br x30 4: - tramp_alias x30, tramp_exit_compat + tramp_alias x30, tramp_exit_compat, x29 br x30 #endif .else From patchwork Wed Apr 6 16:45:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558557 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3E578C43217 for ; Wed, 6 Apr 2022 18:07:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240330AbiDFSJr (ORCPT ); Wed, 6 Apr 2022 14:09:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56430 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239752AbiDFSIJ (ORCPT ); Wed, 6 Apr 2022 14:08:09 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 18C0B1965D3; Wed, 6 Apr 2022 09:46:29 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E9F6A23A; Wed, 6 Apr 2022 09:46:28 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 48D143F73B; Wed, 6 Apr 2022 09:46:28 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 30/43] arm64: entry: Don't assume tramp_vectors is the start of the vectors Date: Wed, 6 Apr 2022 17:45:33 +0100 Message-Id: 
<20220406164546.1888528-30-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit ed50da7764535f1e24432ded289974f2bf2b0c5a upstream. The tramp_ventry macro uses tramp_vectors as the address of the vectors when calculating which ventry in the 'full fat' vectors to branch to. While there is one set of tramp_vectors, this will be true. Adding multiple sets of vectors will break this assumption. Move the generation of the vectors to a macro, and pass the start of the vectors as an argument to tramp_ventry. Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas Signed-off-by: James Morse --- arch/arm64/kernel/entry.S | 28 +++++++++++++++------------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 4c4f7df5f0f3..114922c1c3e3 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -926,7 +926,7 @@ __ni_sys_trace: sub \dst, \dst, PAGE_SIZE .endm - .macro tramp_ventry, regsize = 64 + .macro tramp_ventry, vector_start, regsize .align 7 1: .if \regsize == 64 @@ -948,9 +948,9 @@ __ni_sys_trace: #else ldr x30, =vectors #endif - prfm plil1strm, [x30, #(1b - tramp_vectors)] + prfm plil1strm, [x30, #(1b - \vector_start)] msr vbar_el1, x30 - add x30, x30, #(1b - tramp_vectors + 4) + add x30, x30, #(1b - \vector_start + 4) isb ret .org 1b + 128 // Did we overflow the ventry slot? @@ -968,19 +968,21 @@ __ni_sys_trace: eret .endm - .align 11 -ENTRY(tramp_vectors) + .macro generate_tramp_vector +.Lvector_start\@: .space 0x400 - tramp_ventry - tramp_ventry - tramp_ventry - tramp_ventry + .rept 4 + tramp_ventry .Lvector_start\@, 64 + .endr + .rept 4 + tramp_ventry .Lvector_start\@, 32 + .endr + .endm - tramp_ventry 32 - tramp_ventry 32 - tramp_ventry 32 - tramp_ventry 32 + .align 11 +ENTRY(tramp_vectors) + generate_tramp_vector END(tramp_vectors) ENTRY(tramp_exit_native) From patchwork Wed Apr 6 16:45:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558556 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE863C433FE for ; Wed, 6 Apr 2022 18:07:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240349AbiDFSJz (ORCPT ); Wed, 6 Apr 2022 14:09:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54344 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239864AbiDFSIL (ORCPT ); Wed, 6 Apr 2022 14:08:11 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id B24B719852D; Wed, 6 Apr 2022 09:46:31 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8F8BF12FC; Wed, 6 Apr 2022 09:46:31 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E220C3F73B; Wed, 6 Apr 2022 09:46:30 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, 
stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 33/43] arm64: entry: Allow the trampoline text to occupy multiple pages Date: Wed, 6 Apr 2022 17:45:36 +0100 Message-Id: <20220406164546.1888528-33-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit a9c406e6462ff14956d690de7bbe5131a5677dc9 upstream. Adding a second set of vectors to .entry.tramp.text will make it larger than a single 4K page. Allow the trampoline text to occupy up to three pages by adding two more fixmap slots. Previous changes to tramp_valias allowed it to reach beyond a single page. Reviewed-by: Catalin Marinas Signed-off-by: James Morse --- arch/arm64/include/asm/fixmap.h | 6 ++++-- arch/arm64/include/asm/sections.h | 6 ++++++ arch/arm64/kernel/entry.S | 2 +- arch/arm64/kernel/vmlinux.lds.S | 2 +- arch/arm64/mm/mmu.c | 11 ++++++++--- 5 files changed, 20 insertions(+), 7 deletions(-) diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h index feee38303afe..4ffe0d698fa7 100644 --- a/arch/arm64/include/asm/fixmap.h +++ b/arch/arm64/include/asm/fixmap.h @@ -53,9 +53,11 @@ enum fixed_addresses { FIX_TEXT_POKE0, #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 - FIX_ENTRY_TRAMP_TEXT, + FIX_ENTRY_TRAMP_TEXT3, + FIX_ENTRY_TRAMP_TEXT2, + FIX_ENTRY_TRAMP_TEXT1, FIX_ENTRY_TRAMP_DATA, -#define TRAMP_VALIAS (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT)) +#define TRAMP_VALIAS (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT1)) #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ __end_of_permanent_fixed_addresses, diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h index 4e7e7067afdb..09ebd37d5aa3 100644 --- a/arch/arm64/include/asm/sections.h +++ b/arch/arm64/include/asm/sections.h @@ -26,5 +26,11 @@ extern char __hyp_text_start[], __hyp_text_end[]; extern char __idmap_text_start[], __idmap_text_end[]; extern char __irqentry_text_start[], __irqentry_text_end[]; extern char __mmuoff_data_start[], __mmuoff_data_end[]; +extern char __entry_tramp_text_start[], __entry_tramp_text_end[]; + +static inline size_t entry_tramp_text_size(void) +{ + return __entry_tramp_text_end - __entry_tramp_text_start; +} #endif /* __ASM_SECTIONS_H */ diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 6e1d02d87d45..b9a757216a80 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -917,7 +917,7 @@ __ni_sys_trace: .endm .macro tramp_data_page dst - adr \dst, .entry.tramp.text + adr_l \dst, .entry.tramp.text sub \dst, \dst, PAGE_SIZE .endm diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index fa3ffad50a61..17fc1671b990 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -261,7 +261,7 @@ ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1)) <= SZ_4K, "Hibernate exit text too big or misaligned") #endif #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 -ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE, +ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) <= 3*PAGE_SIZE, "Entry trampoline text too big") #endif /* diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 60be5bc0984a..36bd50091c4b 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -438,6 +438,7 @@ static void __init map_kernel_segment(pgd_t 
*pgd, void *va_start, void *va_end, #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 static int __init map_entry_trampoline(void) { + int i; extern char __entry_tramp_text_start[]; pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC; @@ -448,11 +449,15 @@ static int __init map_entry_trampoline(void) /* Map only the text into the trampoline page table */ memset(tramp_pg_dir, 0, PGD_SIZE); - __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE, - prot, pgd_pgtable_alloc, 0); + __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, + entry_tramp_text_size(), prot, pgd_pgtable_alloc, + 0); /* Map both the text and data into the kernel page table */ - __set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot); + for (i = 0; i < DIV_ROUND_UP(entry_tramp_text_size(), PAGE_SIZE); i++) + __set_fixmap(FIX_ENTRY_TRAMP_TEXT1 - i, + pa_start + i * PAGE_SIZE, prot); + if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) { extern char __entry_tramp_data_start[]; From patchwork Wed Apr 6 16:45:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558555 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 90B8BC4332F for ; Wed, 6 Apr 2022 18:07:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240107AbiDFSJ4 (ORCPT ); Wed, 6 Apr 2022 14:09:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54332 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240012AbiDFSIO (ORCPT ); Wed, 6 Apr 2022 14:08:14 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 6B40319ABE4; Wed, 6 Apr 2022 09:46:34 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3578012FC; Wed, 6 Apr 2022 09:46:34 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 886043F73B; Wed, 6 Apr 2022 09:46:33 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 36/43] arm64: entry: Add vectors that have the bhb mitigation sequences Date: Wed, 6 Apr 2022 17:45:39 +0100 Message-Id: <20220406164546.1888528-36-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit ba2689234be92024e5635d30fe744f4853ad97db upstream. Some CPUs affected by Spectre-BHB need a sequence of branches, or a firmware call to be run before any indirect branch. This needs to go in the vectors. No CPU needs both. While this can be patched in, it would run on all CPUs as there is a single set of vectors. If only one part of a big/little combination is affected, the unaffected CPUs have to run the mitigation too. Create extra vectors that include the sequence. Subsequent patches will allow affected CPUs to select this set of vectors. Later patches will modify the loop count to match what the CPU requires. 
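[ Illustrative aside, not part of the patch: with the BHB mitigations enabled, the generate_tramp_vector macro in the diff below emits three vector sets back to back, each 0x800 bytes -- the unused EL1 half (.space 0x400) plus eight 128-byte-aligned EL0 ventries. A minimal user-space sketch of that layout arithmetic follows; the enum and helper names are invented for the sketch. ]

#include <stdio.h>

/* One vector set: 0x400 bytes of .space padding for the EL1 half,
 * then eight ventries at 128 bytes each -- 0x800 (2KB) in total. */
#define VECTOR_SET_BYTES 0x800UL

/* Same order the sets are generated in: loop, firmware, canonical. */
enum tramp_set { SET_BHB_LOOP, SET_BHB_FW, SET_CANONICAL };

static unsigned long tramp_set_offset(enum tramp_set set)
{
	return set * VECTOR_SET_BYTES;
}

int main(void)
{
	/* tramp_exit's "add x30, x30, SZ_4K" in the diff below relies on
	 * the canonical set sitting two sets past tramp_vectors: */
	printf("canonical set at +%#lx\n", tramp_set_offset(SET_CANONICAL));
	return 0;	/* prints 0x1000, i.e. SZ_4K */
}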
Reviewed-by: Catalin Marinas Signed-off-by: James Morse --- arch/arm64/include/asm/assembler.h | 25 ++++++++++++++ arch/arm64/include/asm/vectors.h | 34 +++++++++++++++++++ arch/arm64/kernel/entry.S | 53 +++++++++++++++++++++++++----- include/linux/arm-smccc.h | 7 ++++ 4 files changed, 110 insertions(+), 9 deletions(-) create mode 100644 arch/arm64/include/asm/vectors.h diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 3f85bbcd7e40..4541c539eb92 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -494,4 +494,29 @@ alternative_endif .Ldone\@: .endm + .macro __mitigate_spectre_bhb_loop tmp +#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY + mov \tmp, #32 +.Lspectre_bhb_loop\@: + b . + 4 + subs \tmp, \tmp, #1 + b.ne .Lspectre_bhb_loop\@ + dsb nsh + isb +#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ + .endm + + /* Save/restores x0-x3 to the stack */ + .macro __mitigate_spectre_bhb_fw +#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY + stp x0, x1, [sp, #-16]! + stp x2, x3, [sp, #-16]! + mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3 +alternative_cb arm64_update_smccc_conduit + nop // Patched to SMC/HVC #0 +alternative_cb_end + ldp x2, x3, [sp], #16 + ldp x0, x1, [sp], #16 +#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ + .endm #endif /* __ASM_ASSEMBLER_H */ diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h new file mode 100644 index 000000000000..16ca74260375 --- /dev/null +++ b/arch/arm64/include/asm/vectors.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2022 ARM Ltd. + */ +#ifndef __ASM_VECTORS_H +#define __ASM_VECTORS_H + +/* + * Note: the order of this enum corresponds to two arrays in entry.S: + * tramp_vecs and __bp_harden_el1_vectors. By default the canonical + * 'full fat' vectors are used directly. + */ +enum arm64_bp_harden_el1_vectors { +#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY + /* + * Perform the BHB loop mitigation, before branching to the canonical + * vectors. + */ + EL1_VECTOR_BHB_LOOP, + + /* + * Make the SMC call for firmware mitigation, before branching to the + * canonical vectors. + */ + EL1_VECTOR_BHB_FW, +#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ + + /* + * Remap the kernel before branching to the canonical vectors. + */ + EL1_VECTOR_KPTI, +}; + +#endif /* __ASM_VECTORS_H */ diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index b732480007a4..6e637315fd80 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -921,13 +921,26 @@ __ni_sys_trace: sub \dst, \dst, PAGE_SIZE .endm - .macro tramp_ventry, vector_start, regsize, kpti + +#define BHB_MITIGATION_NONE 0 +#define BHB_MITIGATION_LOOP 1 +#define BHB_MITIGATION_FW 2 + + .macro tramp_ventry, vector_start, regsize, kpti, bhb .align 7 1: .if \regsize == 64 msr tpidrro_el0, x30 // Restored in kernel_ventry .endif + .if \bhb == BHB_MITIGATION_LOOP + /* + * This sequence must appear before the first indirect branch. i.e. the + * ret out of tramp_ventry. It appears here because x30 is free. + */ + __mitigate_spectre_bhb_loop x30 + .endif // \bhb == BHB_MITIGATION_LOOP + .if \kpti == 1 /* * Defend against branch aliasing attacks by pushing a dummy @@ -952,6 +965,15 @@ __ni_sys_trace: ldr x30, =vectors .endif // \kpti == 1 + .if \bhb == BHB_MITIGATION_FW + /* + * The firmware sequence must appear before the first indirect branch. + * i.e. the ret out of tramp_ventry. 
But it also needs the stack to be + * mapped to save/restore the registers the SMC clobbers. + */ + __mitigate_spectre_bhb_fw + .endif // \bhb == BHB_MITIGATION_FW + add x30, x30, #(1b - \vector_start + 4) ret .org 1b + 128 // Did we overflow the ventry slot? @@ -959,6 +981,9 @@ __ni_sys_trace: .macro tramp_exit, regsize = 64 adr x30, tramp_vectors +#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY + add x30, x30, SZ_4K +#endif msr vbar_el1, x30 ldr lr, [sp, #S_LR] tramp_unmap_kernel x29 @@ -969,26 +994,32 @@ __ni_sys_trace: eret .endm - .macro generate_tramp_vector, kpti + .macro generate_tramp_vector, kpti, bhb .Lvector_start\@: .space 0x400 .rept 4 - tramp_ventry .Lvector_start\@, 64, \kpti + tramp_ventry .Lvector_start\@, 64, \kpti, \bhb .endr .rept 4 - tramp_ventry .Lvector_start\@, 32, \kpti + tramp_ventry .Lvector_start\@, 32, \kpti, \bhb .endr .endm #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 /* * Exception vectors trampoline. + * The order must match __bp_harden_el1_vectors and the + * arm64_bp_harden_el1_vectors enum. */ .pushsection ".entry.tramp.text", "ax" .align 11 ENTRY(tramp_vectors) - generate_tramp_vector kpti=1 +#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_LOOP + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_FW +#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_NONE END(tramp_vectors) ENTRY(tramp_exit_native) @@ -1015,7 +1046,7 @@ __entry_tramp_data_start: * Exception vectors for spectre mitigations on entry from EL1 when * kpti is not in use. */ - .macro generate_el1_vector + .macro generate_el1_vector, bhb .Lvector_start\@: kernel_ventry 1, sync_invalid // Synchronous EL1t kernel_ventry 1, irq_invalid // IRQ EL1t @@ -1028,17 +1059,21 @@ __entry_tramp_data_start: kernel_ventry 1, error_invalid // Error EL1h .rept 4 - tramp_ventry .Lvector_start\@, 64, kpti=0 + tramp_ventry .Lvector_start\@, 64, 0, \bhb .endr .rept 4 - tramp_ventry .Lvector_start\@, 32, kpti=0 + tramp_ventry .Lvector_start\@, 32, 0, \bhb .endr .endm +/* The order must match tramp_vecs and the arm64_bp_harden_el1_vectors enum. 
*/ .pushsection ".entry.text", "ax" .align 11 ENTRY(__bp_harden_el1_vectors) - generate_el1_vector +#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY + generate_el1_vector bhb=BHB_MITIGATION_LOOP + generate_el1_vector bhb=BHB_MITIGATION_FW +#endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ END(__bp_harden_el1_vectors) .popsection diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h index 6366b04c7d5f..040266891414 100644 --- a/include/linux/arm-smccc.h +++ b/include/linux/arm-smccc.h @@ -85,6 +85,13 @@ ARM_SMCCC_SMC_32, \ 0, 0x7fff) +#define ARM_SMCCC_ARCH_WORKAROUND_3 \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 0x3fff) + +#define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED 1 + #ifndef __ASSEMBLY__ #include From patchwork Wed Apr 6 16:45:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558554 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 19673C43219 for ; Wed, 6 Apr 2022 18:08:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240169AbiDFSJ6 (ORCPT ); Wed, 6 Apr 2022 14:09:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34678 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240109AbiDFSIW (ORCPT ); Wed, 6 Apr 2022 14:08:22 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 303FB1FCF7; Wed, 6 Apr 2022 09:46:35 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1754923A; Wed, 6 Apr 2022 09:46:35 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6A3FD3F73B; Wed, 6 Apr 2022 09:46:34 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 37/43] arm64: entry: Add macro for reading symbol addresses from the trampoline Date: Wed, 6 Apr 2022 17:45:40 +0100 Message-Id: <20220406164546.1888528-37-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit b28a8eebe81c186fdb1a0078263b30576c8e1f42 upstream. The trampoline code needs to use the address of symbols in the wider kernel, e.g. vectors. PC-relative addressing wouldn't work as the trampoline code doesn't run at the address the linker expected. tramp_ventry uses a literal pool, unless CONFIG_RANDOMIZE_BASE is set, in which case it uses the data page as a literal pool because the data page can be unmapped when running in user-space, which is required for CPUs vulnerable to meltdown. Pull this logic out as a macro, instead of adding a third copy of it. 
Reviewed-by: Catalin Marinas [ Removed SDEI for stable backport ] Signed-off-by: James Morse --- arch/arm64/kernel/entry.S | 22 +++++++++++++++------- 1 file changed, 15 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 6e637315fd80..ec46c89759a8 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -921,6 +921,15 @@ __ni_sys_trace: sub \dst, \dst, PAGE_SIZE .endm + .macro tramp_data_read_var dst, var +#ifdef CONFIG_RANDOMIZE_BASE + tramp_data_page \dst + add \dst, \dst, #:lo12:__entry_tramp_data_\var + ldr \dst, [\dst] +#else + ldr \dst, =\var +#endif + .endm #define BHB_MITIGATION_NONE 0 #define BHB_MITIGATION_LOOP 1 @@ -951,13 +960,7 @@ __ni_sys_trace: b . 2: tramp_map_kernel x30 -#ifdef CONFIG_RANDOMIZE_BASE - tramp_data_page x30 - isb - ldr x30, [x30] -#else - ldr x30, =vectors -#endif + tramp_data_read_var x30, vectors prfm plil1strm, [x30, #(1b - \vector_start)] msr vbar_el1, x30 isb @@ -1037,7 +1040,12 @@ END(tramp_exit_compat) .align PAGE_SHIFT .globl __entry_tramp_data_start __entry_tramp_data_start: +__entry_tramp_data_vectors: .quad vectors +#ifdef CONFIG_ARM_SDE_INTERFACE +__entry_tramp_data___sdei_asm_trampoline_next_handler: + .quad __sdei_asm_handler +#endif /* CONFIG_ARM_SDE_INTERFACE */ .popsection // .rodata #endif /* CONFIG_RANDOMIZE_BASE */ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ From patchwork Wed Apr 6 16:45:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558553 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32673C433FE for ; Wed, 6 Apr 2022 18:08:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240133AbiDFSJ7 (ORCPT ); Wed, 6 Apr 2022 14:09:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54354 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239924AbiDFSIx (ORCPT ); Wed, 6 Apr 2022 14:08:53 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 1483BB877; Wed, 6 Apr 2022 09:46:36 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EF3E312FC; Wed, 6 Apr 2022 09:46:35 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4C43A3F73B; Wed, 6 Apr 2022 09:46:35 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 38/43] arm64: Add percpu vectors for EL1 Date: Wed, 6 Apr 2022 17:45:41 +0100 Message-Id: <20220406164546.1888528-38-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit bd09128d16fac3c34b80bd6a29088ac632e8ce09 upstream. The Spectre-BHB workaround adds a firmware call to the vectors. This is needed on some CPUs, but not others. 
To prevent the unaffected CPU in a big/little pair from making the firmware call, create per-CPU vectors. The per-cpu vectors only apply when returning from EL0. Systems using KPTI can use the canonical 'full-fat' vectors directly at EL1; the trampoline exit code will switch to this_cpu_vector on exit to EL0. Systems not using KPTI should always use this_cpu_vector. this_cpu_vector will point at a vector in tramp_vecs or __bp_harden_el1_vectors, depending on whether KPTI is in use. Reviewed-by: Catalin Marinas Signed-off-by: James Morse --- arch/arm64/include/asm/mmu.h | 2 +- arch/arm64/include/asm/vectors.h | 27 +++++++++++++++++++++++++++ arch/arm64/kernel/cpufeature.c | 11 +++++++++++ arch/arm64/kernel/entry.S | 16 ++++++++++------ arch/arm64/kvm/hyp/switch.c | 9 ++++++--- 5 files changed, 55 insertions(+), 10 deletions(-) diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index 6ac34c75f4e1..5eff1c49270d 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -34,7 +34,7 @@ typedef struct { */ #define ASID(mm) ((mm)->context.id.counter & 0xffff) -static inline bool arm64_kernel_unmapped_at_el0(void) +static __always_inline bool arm64_kernel_unmapped_at_el0(void) { return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) && cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0); diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h index 16ca74260375..3f76dfd9e074 100644 --- a/arch/arm64/include/asm/vectors.h +++ b/arch/arm64/include/asm/vectors.h @@ -5,6 +5,15 @@ #ifndef __ASM_VECTORS_H #define __ASM_VECTORS_H +#include +#include + +#include + +extern char vectors[]; +extern char tramp_vectors[]; +extern char __bp_harden_el1_vectors[]; + /* * Note: the order of this enum corresponds to two arrays in entry.S: * tramp_vecs and __bp_harden_el1_vectors. By default the canonical @@ -31,4 +40,22 @@ enum arm64_bp_harden_el1_vectors { EL1_VECTOR_KPTI, }; +/* The vectors to use on return from EL0. e.g.
to remap the kernel */ +DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector); + +#ifndef CONFIG_UNMAP_KERNEL_AT_EL0 +#define TRAMP_VALIAS 0 +#endif + +static inline const char * +arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot) +{ + if (arm64_kernel_unmapped_at_el0()) + return (char *)TRAMP_VALIAS + SZ_2K * slot; + + WARN_ON_ONCE(slot == EL1_VECTOR_KPTI); + + return __bp_harden_el1_vectors + SZ_2K * slot; +} + #endif /* __ASM_VECTORS_H */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 1b5afb80247d..b4a6f881c3c0 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -20,15 +20,18 @@ #include #include +#include #include #include #include + #include #include #include #include #include #include +#include #include unsigned long elf_hwcap __read_mostly; @@ -49,6 +52,8 @@ unsigned int compat_elf_hwcap2 __read_mostly; DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS); EXPORT_SYMBOL(cpu_hwcaps); +DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors; + DEFINE_STATIC_KEY_ARRAY_FALSE(cpu_hwcap_keys, ARM64_NCAPS); EXPORT_SYMBOL(cpu_hwcap_keys); @@ -821,6 +826,12 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused) static bool kpti_applied = false; int cpu = smp_processor_id(); + if (__this_cpu_read(this_cpu_vector) == vectors) { + const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI); + + __this_cpu_write(this_cpu_vector, v); + } + if (kpti_applied) return; diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index ec46c89759a8..746a5fe133c5 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -75,7 +75,6 @@ .macro kernel_ventry, el, label, regsize = 64 .align 7 .Lventry_start\@: -#ifdef CONFIG_UNMAP_KERNEL_AT_EL0 .if \el == 0 /* * This must be the first instruction of the EL0 vector entries. 
It is @@ -90,7 +89,6 @@ .endif .Lskip_tramp_vectors_cleanup\@: .endif -#endif sub sp, sp, #S_FRAME_SIZE b el\()\el\()_\label @@ -983,10 +981,14 @@ __ni_sys_trace: .endm .macro tramp_exit, regsize = 64 - adr x30, tramp_vectors -#ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY - add x30, x30, SZ_4K -#endif + tramp_data_read_var x30, this_cpu_vector +alternative_if_not ARM64_HAS_VIRT_HOST_EXTN + mrs x29, tpidr_el1 +alternative_else + mrs x29, tpidr_el2 +alternative_endif + ldr x30, [x30, x29] + msr vbar_el1, x30 ldr lr, [sp, #S_LR] tramp_unmap_kernel x29 @@ -1046,6 +1048,8 @@ __entry_tramp_data_vectors: __entry_tramp_data___sdei_asm_trampoline_next_handler: .quad __sdei_asm_handler #endif /* CONFIG_ARM_SDE_INTERFACE */ +__entry_tramp_data_this_cpu_vector: + .quad this_cpu_vector .popsection // .rodata #endif /* CONFIG_RANDOMIZE_BASE */ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 0a2f37bceab0..1751d2763cc1 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -26,7 +26,7 @@ #include #include #include - +#include extern struct exception_table_entry __start___kvm_ex_table; extern struct exception_table_entry __stop___kvm_ex_table; @@ -107,11 +107,14 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu) static void __hyp_text __deactivate_traps_vhe(void) { - extern char vectors[]; /* kernel exception vectors */ + const char *host_vectors = vectors; write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); write_sysreg(CPACR_EL1_FPEN, cpacr_el1); - write_sysreg(vectors, vbar_el1); + + if (!arm64_kernel_unmapped_at_el0()) + host_vectors = __this_cpu_read(this_cpu_vector); + write_sysreg(host_vectors, vbar_el1); } static void __hyp_text __deactivate_traps_nvhe(void) From patchwork Wed Apr 6 16:45:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558552 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08600C433EF for ; Wed, 6 Apr 2022 18:08:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240201AbiDFSKD (ORCPT ); Wed, 6 Apr 2022 14:10:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58020 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240105AbiDFSIz (ORCPT ); Wed, 6 Apr 2022 14:08:55 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id E95171A4D6D; Wed, 6 Apr 2022 09:46:36 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D01D41516; Wed, 6 Apr 2022 09:46:36 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2EE443F73B; Wed, 6 Apr 2022 09:46:36 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 39/43] KVM: arm64: Add templates for BHB mitigation sequences Date: Wed, 6 Apr 2022 17:45:42 +0100 Message-Id: <20220406164546.1888528-39-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: 
<0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org KVM writes the Spectre-v2 mitigation template at the beginning of each vector when a CPU requires a specific sequence to run. Because the template is copied, it cannot be modified by the alternatives at runtime. As the KVM template code is intertwined with the bp-hardening callbacks, all templates must have a bp-hardening callback. Add templates for calling ARCH_WORKAROUND_3 and one for each value of K in the branchy loop. Identify these sequences by a new parameter template_start, and add a copy of install_bp_hardening_cb() that is able to install them. Signed-off-by: James Morse --- arch/arm64/include/asm/cpucaps.h | 3 +- arch/arm64/include/asm/kvm_mmu.h | 2 +- arch/arm64/include/asm/mmu.h | 6 +++ arch/arm64/kernel/bpi.S | 50 ++++++++++++++++++++++ arch/arm64/kernel/cpu_errata.c | 71 +++++++++++++++++++++++++++++++- 5 files changed, 128 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h index d4a46764c1ad..9935e55a3cc7 100644 --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -39,7 +39,8 @@ #define ARM64_SSBD 18 #define ARM64_MISMATCHED_CACHE_TYPE 19 #define ARM64_WORKAROUND_1188873 20 +#define ARM64_SPECTRE_BHB 21 -#define ARM64_NCAPS 21 +#define ARM64_NCAPS 22 #endif /* __ASM_CPUCAPS_H */ diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index ff721659eb94..4a2c95854856 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -362,7 +362,7 @@ static inline void *kvm_get_hyp_vector(void) struct bp_hardening_data *data = arm64_get_bp_hardening_data(); void *vect = kvm_ksym_ref(__kvm_hyp_vector); - if (data->fn) { + if (data->template_start) { vect = __bp_harden_hyp_vecs_start + data->hyp_vectors_slot * SZ_2K; diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h index 5eff1c49270d..f4377b005cba 100644 --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -45,6 +45,12 @@ typedef void (*bp_hardening_cb_t)(void); struct bp_hardening_data { int hyp_vectors_slot; bp_hardening_cb_t fn; + + /* + * template_start is only used by the BHB mitigation to identify the + * hyp_vectors_slot sequence. + */ + const char *template_start; }; #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR diff --git a/arch/arm64/kernel/bpi.S b/arch/arm64/kernel/bpi.S index dc4eb154e33b..313f2b59eef9 100644 --- a/arch/arm64/kernel/bpi.S +++ b/arch/arm64/kernel/bpi.S @@ -73,3 +73,53 @@ ENTRY(__smccc_workaround_1_smc_end) ENTRY(__smccc_workaround_1_hvc_start) smccc_workaround_1 hvc ENTRY(__smccc_workaround_1_hvc_end) + +ENTRY(__smccc_workaround_3_smc_start) + sub sp, sp, #(8 * 4) + stp x2, x3, [sp, #(8 * 0)] + stp x0, x1, [sp, #(8 * 2)] + mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3 + smc #0 + ldp x2, x3, [sp, #(8 * 0)] + ldp x0, x1, [sp, #(8 * 2)] + add sp, sp, #(8 * 4) +ENTRY(__smccc_workaround_3_smc_end) + +ENTRY(__spectre_bhb_loop_k8_start) + sub sp, sp, #(8 * 2) + stp x0, x1, [sp, #(8 * 0)] + mov x0, #8 +2: b . + 4 + subs x0, x0, #1 + b.ne 2b + dsb nsh + isb + ldp x0, x1, [sp, #(8 * 0)] + add sp, sp, #(8 * 2) +ENTRY(__spectre_bhb_loop_k8_end) + +ENTRY(__spectre_bhb_loop_k24_start) + sub sp, sp, #(8 * 2) + stp x0, x1, [sp, #(8 * 0)] + mov x0, #24 +2: b .
+ 4 + subs x0, x0, #1 + b.ne 2b + dsb nsh + isb + ldp x0, x1, [sp, #(8 * 0)] + add sp, sp, #(8 * 2) +ENTRY(__spectre_bhb_loop_k24_end) + +ENTRY(__spectre_bhb_loop_k32_start) + sub sp, sp, #(8 * 2) + stp x0, x1, [sp, #(8 * 0)] + mov x0, #32 +2: b . + 4 + subs x0, x0, #1 + b.ne 2b + dsb nsh + isb + ldp x0, x1, [sp, #(8 * 0)] + add sp, sp, #(8 * 2) +ENTRY(__spectre_bhb_loop_k32_end) diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index 0d08249cbdab..acccc685b670 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -74,6 +74,14 @@ extern char __smccc_workaround_1_smc_start[]; extern char __smccc_workaround_1_smc_end[]; extern char __smccc_workaround_1_hvc_start[]; extern char __smccc_workaround_1_hvc_end[]; +extern char __smccc_workaround_3_smc_start[]; +extern char __smccc_workaround_3_smc_end[]; +extern char __spectre_bhb_loop_k8_start[]; +extern char __spectre_bhb_loop_k8_end[]; +extern char __spectre_bhb_loop_k24_start[]; +extern char __spectre_bhb_loop_k24_end[]; +extern char __spectre_bhb_loop_k32_start[]; +extern char __spectre_bhb_loop_k32_end[]; static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, const char *hyp_vecs_end) @@ -87,12 +95,14 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K); } +static DEFINE_SPINLOCK(bp_lock); +static int last_slot = -1; + static void __install_bp_hardening_cb(bp_hardening_cb_t fn, const char *hyp_vecs_start, const char *hyp_vecs_end) { - static int last_slot = -1; - static DEFINE_SPINLOCK(bp_lock); + int cpu, slot = -1; spin_lock(&bp_lock); @@ -113,6 +123,7 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn, __this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot); __this_cpu_write(bp_hardening_data.fn, fn); + __this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start); spin_unlock(&bp_lock); } #else @@ -544,3 +555,59 @@ const struct arm64_cpu_capabilities arm64_errata[] = { { } }; + +#ifdef CONFIG_KVM +static const char *kvm_bhb_get_vecs_end(const char *start) +{ + if (start == __smccc_workaround_3_smc_start) + return __smccc_workaround_3_smc_end; + else if (start == __spectre_bhb_loop_k8_start) + return __spectre_bhb_loop_k8_end; + else if (start == __spectre_bhb_loop_k24_start) + return __spectre_bhb_loop_k24_end; + else if (start == __spectre_bhb_loop_k32_start) + return __spectre_bhb_loop_k32_end; + + return NULL; +} + +void kvm_setup_bhb_slot(const char *hyp_vecs_start) +{ + int cpu, slot = -1; + const char *hyp_vecs_end; + + if (!IS_ENABLED(CONFIG_KVM) || !is_hyp_mode_available()) + return; + + hyp_vecs_end = kvm_bhb_get_vecs_end(hyp_vecs_start); + if (WARN_ON_ONCE(!hyp_vecs_start || !hyp_vecs_end)) + return; + + spin_lock(&bp_lock); + for_each_possible_cpu(cpu) { + if (per_cpu(bp_hardening_data.template_start, cpu) == hyp_vecs_start) { + slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu); + break; + } + } + + if (slot == -1) { + last_slot++; + BUG_ON(((__bp_harden_hyp_vecs_end - __bp_harden_hyp_vecs_start) + / SZ_2K) <= last_slot); + slot = last_slot; + __copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end); + } + + __this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot); + __this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start); + spin_unlock(&bp_lock); +} +#else +#define __smccc_workaround_3_smc_start NULL +#define __spectre_bhb_loop_k8_start NULL +#define __spectre_bhb_loop_k24_start NULL +#define __spectre_bhb_loop_k32_start NULL + +void 
kvm_setup_bhb_slot(const char *hyp_vecs_start) { }; #endif From patchwork Wed Apr 6 16:45:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: James Morse X-Patchwork-Id: 558551 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A0F6C4332F for ; Wed, 6 Apr 2022 18:08:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240183AbiDFSKF (ORCPT ); Wed, 6 Apr 2022 14:10:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50142 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240194AbiDFSJK (ORCPT ); Wed, 6 Apr 2022 14:09:10 -0400 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 41BD1B41; Wed, 6 Apr 2022 09:46:40 -0700 (PDT) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 746DC12FC; Wed, 6 Apr 2022 09:46:40 -0700 (PDT) Received: from eglon.cambridge.arm.com (eglon.cambridge.arm.com [10.1.196.218]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C74D03F73B; Wed, 6 Apr 2022 09:46:39 -0700 (PDT) From: James Morse To: linux-kernel@vger.kernel.org, stable@vger.kernel.org Cc: James Morse , Catalin Marinas Subject: [stable:PATCH v4.9.309 43/43] arm64: Use the clearbhb instruction in mitigations Date: Wed, 6 Apr 2022 17:45:46 +0100 Message-Id: <20220406164546.1888528-43-james.morse@arm.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220406164546.1888528-1-james.morse@arm.com> References: <0220406164217.1888053-1-james.morse@arm.com> <20220406164546.1888528-1-james.morse@arm.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org commit 228a26b912287934789023b4132ba76065d9491c upstream. Future CPUs may implement a clearbhb instruction that is sufficient to mitigate Spectre-BHB. CPUs that implement this instruction, but not CSV2.3, must be affected by Spectre-BHB. Add support to use this instruction as the BHB mitigation on CPUs that support it. The instruction is in the hint space, so it will be treated as a NOP by older CPUs. Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas [ modified for stable: Use a KVM vector template instead of alternatives ] Signed-off-by: James Morse --- arch/arm64/include/asm/assembler.h | 7 +++++++ arch/arm64/include/asm/cpufeature.h | 13 +++++++++++++ arch/arm64/include/asm/sysreg.h | 3 +++ arch/arm64/include/asm/vectors.h | 7 +++++++ arch/arm64/kernel/bpi.S | 5 +++++ arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++ arch/arm64/kernel/cpufeature.c | 1 + arch/arm64/kernel/entry.S | 8 ++++++++ 8 files changed, 58 insertions(+) diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index 459ce3766814..a6aaeb871d5f 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -94,6 +94,13 @@ hint #20 .endm +/* + * Clear Branch History instruction + */ + .macro clearbhb + hint #22 + .endm + /* * Sanitise a 64-bit bounded index wrt speculation, returning zero if out * of bounds.
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 6c0388665251..58a32511da8f 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -387,6 +387,19 @@ static inline bool supports_csv2p3(int scope) return csv2_val == 3; } +static inline bool supports_clearbhb(int scope) +{ + u64 isar2; + + if (scope == SCOPE_LOCAL_CPU) + isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1); + else + isar2 = read_system_reg(SYS_ID_AA64ISAR2_EL1); + + return cpuid_feature_extract_unsigned_field(isar2, + ID_AA64ISAR2_CLEARBHB_SHIFT); +} + static inline bool system_supports_32bit_el0(void) { return cpus_have_const_cap(ARM64_HAS_32BIT_EL0); diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index dc03704ccc79..46e97be12e02 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -174,6 +174,9 @@ #define ID_AA64ISAR0_SHA1_SHIFT 8 #define ID_AA64ISAR0_AES_SHIFT 4 +/* id_aa64isar2 */ +#define ID_AA64ISAR2_CLEARBHB_SHIFT 28 + /* id_aa64pfr0 */ #define ID_AA64PFR0_CSV3_SHIFT 60 #define ID_AA64PFR0_CSV2_SHIFT 56 diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h index f222d8e033b3..695583b9a145 100644 --- a/arch/arm64/include/asm/vectors.h +++ b/arch/arm64/include/asm/vectors.h @@ -33,6 +33,12 @@ enum arm64_bp_harden_el1_vectors { * canonical vectors. */ EL1_VECTOR_BHB_FW, + + /* + * Use the ClearBHB instruction, before branching to the canonical + * vectors. + */ + EL1_VECTOR_BHB_CLEAR_INSN, #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ /* @@ -44,6 +50,7 @@ enum arm64_bp_harden_el1_vectors { #ifndef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY #define EL1_VECTOR_BHB_LOOP -1 #define EL1_VECTOR_BHB_FW -1 +#define EL1_VECTOR_BHB_CLEAR_INSN -1 #endif /* !CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ /* The vectors to use on return from EL0. e.g. to remap the kernel */ diff --git a/arch/arm64/kernel/bpi.S b/arch/arm64/kernel/bpi.S index 313f2b59eef9..d3fd8bf42d86 100644 --- a/arch/arm64/kernel/bpi.S +++ b/arch/arm64/kernel/bpi.S @@ -123,3 +123,8 @@ ENTRY(__spectre_bhb_loop_k32_start) ldp x0, x1, [sp, #(8 * 0)] add sp, sp, #(8 * 2) ENTRY(__spectre_bhb_loop_k32_end) + +ENTRY(__spectre_bhb_clearbhb_start) + hint #22 /* aka clearbhb */ + isb +ENTRY(__spectre_bhb_clearbhb_end) diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index d6bc44a7d471..710808624a82 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -84,6 +84,8 @@ extern char __spectre_bhb_loop_k24_start[]; extern char __spectre_bhb_loop_k24_end[]; extern char __spectre_bhb_loop_k32_start[]; extern char __spectre_bhb_loop_k32_end[]; +extern char __spectre_bhb_clearbhb_start[]; +extern char __spectre_bhb_clearbhb_end[]; static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, const char *hyp_vecs_end) @@ -590,6 +592,7 @@ static void update_mitigation_state(enum mitigation_state *oldp, * - Mitigated by a branchy loop a CPU specific number of times, and listed * in our "loop mitigated list". * - Mitigated in software by the firmware Spectre v2 call. + * - Has the ClearBHB instruction to perform the mitigation. * - Has the 'Exception Clears Branch History Buffer' (ECBHB) feature, so no * software mitigation in the vectors is needed. * - Has CSV2.3, so is unaffected. 
@@ -729,6 +732,9 @@ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, if (supports_csv2p3(scope)) return false; + if (supports_clearbhb(scope)) + return true; + if (spectre_bhb_loop_affected(scope)) return true; @@ -769,6 +775,8 @@ static const char *kvm_bhb_get_vecs_end(const char *start) return __spectre_bhb_loop_k24_end; else if (start == __spectre_bhb_loop_k32_start) return __spectre_bhb_loop_k32_end; + else if (start == __spectre_bhb_clearbhb_start) + return __spectre_bhb_clearbhb_end; return NULL; } @@ -810,6 +818,7 @@ static void kvm_setup_bhb_slot(const char *hyp_vecs_start) #define __spectre_bhb_loop_k8_start NULL #define __spectre_bhb_loop_k24_start NULL #define __spectre_bhb_loop_k32_start NULL +#define __spectre_bhb_clearbhb_start NULL static void kvm_setup_bhb_slot(const char *hyp_vecs_start) { }; #endif @@ -834,6 +843,11 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry) } else if (cpu_mitigations_off()) { pr_info_once("spectre-bhb mitigation disabled by command line option\n"); } else if (supports_ecbhb(SCOPE_LOCAL_CPU)) { + state = SPECTRE_MITIGATED; + } else if (supports_clearbhb(SCOPE_LOCAL_CPU)) { + kvm_setup_bhb_slot(__spectre_bhb_clearbhb_start); + this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN); + state = SPECTRE_MITIGATED; } else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) { switch (spectre_bhb_loop_affected(SCOPE_SYSTEM)) { diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 82590761db64..9b7e7d2f236e 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -99,6 +99,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = { }; static const struct arm64_ftr_bits ftr_id_aa64isar2[] = { + ARM64_FTR_BITS(FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0), ARM64_FTR_END, }; diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 746a5fe133c5..1f79abb1e5dd 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -932,6 +932,7 @@ __ni_sys_trace: #define BHB_MITIGATION_NONE 0 #define BHB_MITIGATION_LOOP 1 #define BHB_MITIGATION_FW 2 +#define BHB_MITIGATION_INSN 3 .macro tramp_ventry, vector_start, regsize, kpti, bhb .align 7 @@ -948,6 +949,11 @@ __ni_sys_trace: __mitigate_spectre_bhb_loop x30 .endif // \bhb == BHB_MITIGATION_LOOP + .if \bhb == BHB_MITIGATION_INSN + clearbhb + isb + .endif // \bhb == BHB_MITIGATION_INSN + .if \kpti == 1 /* * Defend against branch aliasing attacks by pushing a dummy @@ -1023,6 +1029,7 @@ ENTRY(tramp_vectors) #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_LOOP generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_FW + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_INSN #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_NONE END(tramp_vectors) @@ -1085,6 +1092,7 @@ ENTRY(__bp_harden_el1_vectors) #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY generate_el1_vector bhb=BHB_MITIGATION_LOOP generate_el1_vector bhb=BHB_MITIGATION_FW + generate_el1_vector bhb=BHB_MITIGATION_INSN #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ END(__bp_harden_el1_vectors) .popsection
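The net effect of the series is the precedence spectre_bhb_enable_mitigation() applies in the hunk above: hardware ECBHB first, then the clearbhb instruction, then the branchy loop, and finally the firmware call. A minimal user-space sketch of that ladder follows; struct cpu_caps and its fields are invented stand-ins for the kernel's cpu_mitigations_off(), supports_ecbhb(), supports_clearbhb() and spectre_bhb_loop_affected() checks and the ARCH_WORKAROUND_3 firmware probe.

#include <stdbool.h>
#include <stdio.h>

enum bhb_mitigation { BHB_NONE, BHB_ECBHB, BHB_INSN, BHB_LOOP, BHB_FW };

struct cpu_caps {			/* assumed inputs, not real kernel state */
	bool mitigations_off;		/* "mitigations=off" on the command line */
	bool has_ecbhb;			/* exception entry clears branch history */
	bool has_clearbhb;		/* dedicated hint-space instruction */
	int  loop_k;			/* 8/24/32 if on the "loop mitigated" list */
	bool fw_workaround_3;		/* firmware implements ARCH_WORKAROUND_3 */
};

/* Models the if/else-if ladder in spectre_bhb_enable_mitigation():
 * cheaper, more complete mitigations take precedence. */
static enum bhb_mitigation pick_bhb_mitigation(const struct cpu_caps *c)
{
	if (c->mitigations_off)
		return BHB_NONE;
	if (c->has_ecbhb)		/* hardware does it; nothing to install */
		return BHB_ECBHB;
	if (c->has_clearbhb)		/* single instruction in the vectors */
		return BHB_INSN;
	if (c->loop_k)			/* branchy loop, CPU-specific count */
		return BHB_LOOP;
	if (c->fw_workaround_3)		/* fall back to the SMCCC call */
		return BHB_FW;
	return BHB_NONE;
}

int main(void)
{
	struct cpu_caps loop_cpu = { .loop_k = 8 };
	struct cpu_caps clearbhb_cpu = { .has_clearbhb = true };

	printf("loop CPU -> %d, clearbhb CPU -> %d\n",
	       pick_bhb_mitigation(&loop_cpu), pick_bhb_mitigation(&clearbhb_cpu));
	return 0;
}

In the real code the chosen sequence is then wired up twice: as a tramp/EL1 vector slot for exception entry, and as a KVM hyp vector template via kvm_setup_bhb_slot().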