From patchwork Wed Apr 6 18:26:11 2022
X-Patchwork-Id: 558538
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vladimir Murzin, James Morse
Subject: [PATCH 4.9 02/43] arm64: Remove useless UAO IPI and describe how this gets enabled
Date: Wed, 6 Apr 2022 20:26:11 +0200
Message-Id: <20220406182436.748478940@linuxfoundation.org>
In-Reply-To: <20220406182436.675069715@linuxfoundation.org>

From: James Morse

commit c8b06e3fddddaae1a87ed479edcb8b3d85caecc7 upstream.

Since its introduction, the UAO enable call was broken and useless.
Commit 2a6dcb2b5f3e ("arm64: cpufeature: Schedule enable() calls instead
of calling them via IPI") fixed the framework so that these calls are
scheduled and can modify PSTATE. Now the call is just useless. Remove it.

UAO is enabled by the code patching which causes get_user() and friends
to use the 'ldtr' family of instructions. This relies on the PSTATE.UAO
bit being set to match addr_limit, which we do in uao_thread_switch()
called via __switch_to().

All that is needed to enable UAO is to patch the code and call
schedule(). __apply_alternatives_multi_stop() calls stop_machine() when
it modifies the kernel text to enable the alternatives (including the
UAO code in uao_thread_switch()). Once stop_machine() has finished,
__switch_to() is called to reschedule the original task; this causes
PSTATE.UAO to be set appropriately. An explicit enable() call is not
needed.

Reported-by: Vladimir Murzin
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
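For context, the switch-time hook the message relies on can be sketched in
code. This is a paraphrase of the 4.9-era uao_thread_switch() in
arch/arm64/kernel/process.c, not a verbatim quote; the ALTERNATIVE() is the
code that __apply_alternatives_multi_stop() patches:

/*
 * Paraphrased sketch: once the ARM64_HAS_UAO alternative has been
 * patched in, every context switch sets PSTATE.UAO to match the
 * incoming task's addr_limit, so no explicit enable() call is needed.
 */
static void uao_thread_switch(struct task_struct *next)
{
        if (IS_ENABLED(CONFIG_ARM64_UAO)) {
                if (task_thread_info(next)->addr_limit == KERNEL_DS)
                        asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
                else
                        asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
        }
}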
---
 arch/arm64/include/asm/processor.h |    1 -
 arch/arm64/kernel/cpufeature.c     |    5 ++++-
 arch/arm64/mm/fault.c              |   14 --------------
 3 files changed, 4 insertions(+), 16 deletions(-)

--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -220,7 +220,6 @@ static inline void spin_lock_prefetch(co
 #endif

 int cpu_enable_pan(void *__unused);
-int cpu_enable_uao(void *__unused);
 int cpu_enable_cache_maint_trap(void *__unused);

 #endif /* __ASSEMBLY__ */
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -905,7 +905,10 @@ static const struct arm64_cpu_capabiliti
         .sys_reg = SYS_ID_AA64MMFR2_EL1,
         .field_pos = ID_AA64MMFR2_UAO_SHIFT,
         .min_field_value = 1,
-        .enable = cpu_enable_uao,
+        /*
+         * We rely on stop_machine() calling uao_thread_switch() to set
+         * UAO immediately after patching.
+         */
 },
 #endif /* CONFIG_ARM64_UAO */
 #ifdef CONFIG_ARM64_PAN
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -740,17 +740,3 @@ int cpu_enable_pan(void *__unused)
         return 0;
 }
 #endif /* CONFIG_ARM64_PAN */
-
-#ifdef CONFIG_ARM64_UAO
-/*
- * Kernel threads have fs=KERNEL_DS by default, and don't need to call
- * set_fs(), devtmpfs in particular relies on this behaviour.
- * We need to enable the feature at runtime (instead of adding it to
- * PSR_MODE_EL1h) as the feature may not be implemented by the cpu.
- */
-int cpu_enable_uao(void *__unused)
-{
-        asm(SET_PSTATE_UAO(1));
-        return 0;
-}
-#endif /* CONFIG_ARM64_UAO */
From patchwork Wed Apr 6 18:26:12 2022
X-Patchwork-Id: 558531
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mark Rutland, Dave Martin, Suzuki K Poulose, Will Deacon, James Morse
Subject: [PATCH 4.9 03/43] arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35
Date: Wed, 6 Apr 2022 20:26:12 +0200
Message-Id: <20220406182436.776830404@linuxfoundation.org>
In-Reply-To: <20220406182436.675069715@linuxfoundation.org>

From: Suzuki K Poulose

commit 6e616864f21160d8d503523b60a53a29cecc6f24 upstream.

Update the MIDR encodings for the Cortex-A55 and Cortex-A35.

Cc: Mark Rutland
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm64/include/asm/cputype.h |    4 ++++
 1 file changed, 4 insertions(+)

--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -83,6 +83,8 @@
 #define ARM_CPU_PART_CORTEX_A53         0xD03
 #define ARM_CPU_PART_CORTEX_A73         0xD09
 #define ARM_CPU_PART_CORTEX_A75         0xD0A
+#define ARM_CPU_PART_CORTEX_A35         0xD04
+#define ARM_CPU_PART_CORTEX_A55         0xD05

 #define APM_CPU_PART_POTENZA            0x000
@@ -98,6 +100,8 @@
 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
 #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
 #define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
+#define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35)
+#define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55)
 #define MIDR_THUNDERX   MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
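The new MIDR_CORTEX_* values simply compose the implementer, architecture
and part-number fields of MIDR_EL1. A minimal standalone sketch, with the
field shifts copied from cputype.h (main() and the prints are illustrative
only):

#include <stdio.h>
#include <stdint.h>

/* Field shifts as defined in arch/arm64/include/asm/cputype.h */
#define MIDR_IMPLEMENTOR_SHIFT   24
#define MIDR_ARCHITECTURE_SHIFT  16
#define MIDR_PARTNUM_SHIFT        4

#define MIDR_CPU_MODEL(imp, partnum)                    \
        (((imp) << MIDR_IMPLEMENTOR_SHIFT) |            \
         (0xf << MIDR_ARCHITECTURE_SHIFT) |             \
         ((partnum) << MIDR_PARTNUM_SHIFT))

#define ARM_CPU_IMP_ARM          0x41
#define ARM_CPU_PART_CORTEX_A35  0xD04
#define ARM_CPU_PART_CORTEX_A55  0xD05

int main(void)
{
        /* Variant and revision bits are zero in the model value. */
        printf("MIDR_CORTEX_A35 = 0x%08x\n",
               (uint32_t)MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35));
        printf("MIDR_CORTEX_A55 = 0x%08x\n",
               (uint32_t)MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55));
        return 0;
}

This prints 0x410fd040 and 0x410fd050, the usual MIDR model values for
these two cores.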
From patchwork Wed Apr 6 18:26:16 2022
X-Patchwork-Id: 558539
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mark Rutland, Dave Martin, Suzuki K Poulose, Will Deacon, Ard Biesheuvel, James Morse
Subject: [PATCH 4.9 07/43] arm64: capabilities: Prepare for fine grained capabilities
Date: Wed, 6 Apr 2022 20:26:16 +0200
Message-Id: <20220406182436.894315694@linuxfoundation.org>
In-Reply-To: <20220406182436.675069715@linuxfoundation.org>

From: Suzuki K Poulose

[ Upstream commit 143ba05d867af34827faf99e0eed4de27106c7cb ]

We use arm64_cpu_capabilities to represent CPU ELF HWCAPs exposed to
userspace and the CPU hwcaps used by the kernel, which include cpu
features and CPU errata work arounds. Capabilities have some properties
that decide how they should be treated:

1) Detection, i.e. scope: A cap could be "detected" either:
    - if it is present on at least one CPU (SCOPE_LOCAL_CPU)
   or
    - if it is present on all the CPUs (SCOPE_SYSTEM)

2) When is it enabled? A cap is treated as "enabled" when the system
   takes some action based on whether the capability is detected or
   not, e.g. setting some control register or patching the kernel code.
   Right now, we treat all caps as enabled at boot time, after all the
   CPUs are brought up by the kernel. But there are certain caps which
   are enabled early during boot (e.g. VHE, GIC_CPUIF for NMI) and which
   the kernel starts using even before the secondary CPUs are brought
   up. We need a way to describe this for each capability.

3) Conflict on a late CPU: When a CPU is brought up, it is checked
   against the caps that are known to be enabled on the system (via
   verify_local_cpu_capabilities()). Based on the state of the
   capability on the CPU vs. that of the system, we could have the
   following combinations of conflict:

	x-----------------------------x
	| Type  | System   | Late CPU |
	|-----------------------------|
	|   a   |    y     |    n     |
	|-----------------------------|
	|   b   |    n     |    y     |
	x-----------------------------x

   Case (a) is not permitted for caps which are system features, which
   the system expects all CPUs to have (e.g. VHE), while (a) is ignored
   for all errata work arounds. However, there could be exceptions to
   this plain filtering approach, e.g. KPTI is an optional feature for
   a late CPU as long as the system already enables it.

   Case (b) is not permitted for errata work arounds that cannot be
   delayed, and we ignore (b) for features. Here, yet again, KPTI is an
   exception: if a late CPU needs KPTI we are too late to enable it
   (because we change the allocation of ASIDs etc.).

So this calls for a lot more fine grained behavior for each capability.
And if we define all the attributes to control their behavior properly,
we may be able to use a single table for the CPU hwcaps (which cover
errata and features, not the ELF HWCAPs). This is a preparatory step to
get there. More bits will be added for the properties listed above.

We are going to use a bit-mask to encode all the properties of a
capability. This patch encodes the "SCOPE" of the capability.

As such there is no change in how the capabilities are treated.

Cc: Mark Rutland
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
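The scope encoding can be modelled outside the kernel. A minimal
standalone sketch, assuming only the bit values from the diff below
(struct cap is a stand-in for struct arm64_cpu_capabilities):

#include <stdio.h>
#include <stdint.h>

/* Bit encodings as introduced by this patch in cpufeature.h */
#define ARM64_CPUCAP_SCOPE_LOCAL_CPU  ((uint16_t)(1u << 0))
#define ARM64_CPUCAP_SCOPE_SYSTEM     ((uint16_t)(1u << 1))
#define ARM64_CPUCAP_SCOPE_MASK \
        (ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_SCOPE_LOCAL_CPU)

struct cap {                      /* stand-in for arm64_cpu_capabilities */
        const char *desc;
        uint16_t type;            /* scope bits live in the low bits */
};

/* Mirrors cpucap_default_scope(): mask off everything but the scope */
static int cpucap_default_scope(const struct cap *c)
{
        return c->type & ARM64_CPUCAP_SCOPE_MASK;
}

int main(void)
{
        struct cap feature = { "Privileged Access Never",
                               ARM64_CPUCAP_SCOPE_SYSTEM };
        struct cap erratum = { "Mismatched cache line size",
                               ARM64_CPUCAP_SCOPE_LOCAL_CPU };

        printf("%s: scope=%#x\n", feature.desc, cpucap_default_scope(&feature));
        printf("%s: scope=%#x\n", erratum.desc, cpucap_default_scope(&erratum));
        return 0;
}

The point of the bit-mask (rather than the old enum) is that later patches
can OR further property bits into the same u16 type field.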
---
 arch/arm64/include/asm/cpufeature.h |  105 +++++++++++++++++++++++++++++++++---
 arch/arm64/kernel/cpu_errata.c      |   10 +--
 arch/arm64/kernel/cpufeature.c      |   30 +++++-----
 3 files changed, 119 insertions(+), 26 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -66,16 +66,104 @@ struct arm64_ftr_reg {

 extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;

-/* scope of capability check */
-enum {
-        SCOPE_SYSTEM,
-        SCOPE_LOCAL_CPU,
-};
+/*
+ * CPU capabilities:
+ *
+ * We use arm64_cpu_capabilities to represent system features, errata work
+ * arounds (both used internally by kernel and tracked in cpu_hwcaps) and
+ * ELF HWCAPs (which are exposed to user).
+ *
+ * To support systems with heterogeneous CPUs, we need to make sure that we
+ * detect the capabilities correctly on the system and take appropriate
+ * measures to ensure there are no incompatibilities.
+ *
+ * This comment tries to explain how we treat the capabilities.
+ * Each capability has the following list of attributes:
+ *
+ * 1) Scope of Detection: The system detects a given capability by
+ *    performing some checks at runtime. This could be, e.g., checking the
+ *    value of a field in a CPU ID feature register or checking the cpu
+ *    model. The capability provides a call back ( @matches() ) to
+ *    perform the check. Scope defines how the checks should be performed.
+ *    There are two cases:
+ *
+ *     a) SCOPE_LOCAL_CPU: check all the CPUs and "detect" if at least one
+ *        matches. This implies, we have to run the check on all the
+ *        booting CPUs, until the system decides that the state of the
+ *        capability is finalised. (See section 2 below.)
+ *     Or
+ *     b) SCOPE_SYSTEM: check all the CPUs and "detect" if all the CPUs
+ *        match. This implies, we run the check only once, when the
+ *        system decides to finalise the state of the capability. If the
+ *        capability relies on a field in one of the CPU ID feature
+ *        registers, we use the sanitised value of the register from the
+ *        CPU feature infrastructure to make the decision.
+ *
+ *    The process of detection is usually denoted by "update" capability
+ *    state in the code.
+ *
+ * 2) Finalise the state: The kernel should finalise the state of a
+ *    capability at some point during its execution and take necessary
+ *    actions if any. Usually, this is done after all the boot-time
+ *    enabled CPUs are brought up by the kernel, so that it can make a
+ *    better decision based on the available set of CPUs. However, there
+ *    are some special cases, where the action is taken during early
+ *    boot by the primary boot CPU (e.g., running the kernel at EL2 with
+ *    Virtualisation Host Extensions). The kernel usually disallows any
+ *    changes to the state of a capability once it finalises the capability
+ *    and takes any action, as it may be impossible to execute the actions
+ *    safely. A CPU brought up after a capability is "finalised" is
+ *    referred to as a "late CPU" w.r.t. the capability, e.g. all secondary
+ *    CPUs are treated as "late CPUs" for capabilities determined by the
+ *    boot CPU.
+ *
+ * 3) Verification: When a CPU is brought online (e.g., by user or by the
+ *    kernel), the kernel should make sure that it is safe to use the CPU,
+ *    by verifying that the CPU is compliant with the state of the
+ *    capabilities finalised already. This happens via:
+ *
+ *      secondary_start_kernel() -> check_local_cpu_capabilities()
+ *
+ *    As explained in (2) above, capabilities could be finalised at
+ *    different points in the execution. Each CPU is verified against the
+ *    "finalised" capabilities and if there is a conflict, the kernel takes
+ *    an action, based on the severity (e.g., a CPU could be prevented from
+ *    booting or cause a kernel panic). The CPU is allowed to "affect" the
+ *    state of the capability, if it has not been finalised already.
+ *
+ * 4) Action: As mentioned in (2), the kernel can take an action for each
+ *    detected capability, on all CPUs on the system. Appropriate actions
+ *    include turning on an architectural feature, modifying the control
+ *    registers (e.g., SCTLR, TCR etc.) or patching the kernel via
+ *    alternatives. The kernel patching is batched and performed at a
+ *    later point. The actions are always initiated only after the
+ *    capability is finalised. This is usually denoted by "enabling" the
+ *    capability. The actions are initiated as follows:
+ *      a) Action is triggered on all online CPUs, after the capability
+ *         is finalised, invoked within the stop_machine() context from
+ *         enable_cpu_capabilities().
+ *      b) For any late CPU, brought up after (1), the action is
+ *         triggered via:
+ *
+ *         check_local_cpu_capabilities() -> verify_local_cpu_capabilities()
+ */

+/* Decide how the capability is detected. On a local CPU vs System wide */
+#define ARM64_CPUCAP_SCOPE_LOCAL_CPU    ((u16)BIT(0))
+#define ARM64_CPUCAP_SCOPE_SYSTEM       ((u16)BIT(1))
+#define ARM64_CPUCAP_SCOPE_MASK         \
+        (ARM64_CPUCAP_SCOPE_SYSTEM |    \
+         ARM64_CPUCAP_SCOPE_LOCAL_CPU)
+
+#define SCOPE_SYSTEM    ARM64_CPUCAP_SCOPE_SYSTEM
+#define SCOPE_LOCAL_CPU ARM64_CPUCAP_SCOPE_LOCAL_CPU

 struct arm64_cpu_capabilities {
         const char *desc;
         u16 capability;
-        int def_scope;                  /* default scope */
+        u16 type;
         bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
         /*
          * Take the appropriate actions to enable this capability for this CPU.
@@ -100,6 +188,11 @@ struct arm64_cpu_capabilities {
         };
 };

+static inline int cpucap_default_scope(const struct arm64_cpu_capabilities *cap)
+{
+        return cap->type & ARM64_CPUCAP_SCOPE_MASK;
+}
+
 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
 extern struct static_key_false arm64_const_caps_ready;
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -369,14 +369,14 @@ static bool has_ssbd_mitigation(const st
 #endif  /* CONFIG_ARM64_SSBD */

 #define MIDR_RANGE(model, min, max) \
-        .def_scope = SCOPE_LOCAL_CPU, \
+        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \
         .matches = is_affected_midr_range, \
         .midr_model = model, \
         .midr_range_min = min, \
         .midr_range_max = max

 #define MIDR_ALL_VERSIONS(model) \
-        .def_scope = SCOPE_LOCAL_CPU, \
+        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \
         .matches = is_affected_midr_range, \
         .midr_model = model, \
         .midr_range_min = 0, \
@@ -459,14 +459,14 @@ const struct arm64_cpu_capabilities arm6
         .desc = "Mismatched cache line size",
         .capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
         .matches = has_mismatched_cache_type,
-        .def_scope = SCOPE_LOCAL_CPU,
+        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
         .cpu_enable = cpu_enable_trap_ctr_access,
 },
 {
         .desc = "Mismatched cache type",
         .capability = ARM64_MISMATCHED_CACHE_TYPE,
         .matches = has_mismatched_cache_type,
-        .def_scope = SCOPE_LOCAL_CPU,
+        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
         .cpu_enable = cpu_enable_trap_ctr_access,
 },
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -504,7 +504,7 @@ const struct arm64_cpu_capabilities arm6
 #ifdef CONFIG_ARM64_SSBD
 {
         .desc = "Speculative Store Bypass Disable",
-        .def_scope = SCOPE_LOCAL_CPU,
+        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
         .capability = ARM64_SSBD,
         .matches = has_ssbd_mitigation,
 },
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -865,7 +865,7 @@ static const struct arm64_cpu_capabiliti
 {
         .desc = "GIC system register CPU interface",
         .capability = ARM64_HAS_SYSREG_GIC_CPUIF,
-        .def_scope = SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
         .matches = has_useable_gicv3_cpuif,
         .sys_reg = SYS_ID_AA64PFR0_EL1,
         .field_pos = ID_AA64PFR0_GIC_SHIFT,
@@ -876,7 +876,7 @@ static const struct arm64_cpu_capabiliti
 {
         .desc = "Privileged Access Never",
         .capability = ARM64_HAS_PAN,
-        .def_scope = SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
         .matches = has_cpuid_feature,
         .sys_reg = SYS_ID_AA64MMFR1_EL1,
         .field_pos = ID_AA64MMFR1_PAN_SHIFT,
@@ -889,7 +889,7 @@ static const struct arm64_cpu_capabiliti
 {
         .desc = "LSE atomic instructions",
         .capability = ARM64_HAS_LSE_ATOMICS,
-        .def_scope = SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
         .matches = has_cpuid_feature,
         .sys_reg = SYS_ID_AA64ISAR0_EL1,
         .field_pos = ID_AA64ISAR0_ATOMICS_SHIFT,
@@ -900,14 +900,14 @@ static const struct arm64_cpu_capabiliti
 {
         .desc = "Software prefetching using PRFM",
         .capability = ARM64_HAS_NO_HW_PREFETCH,
-        .def_scope = SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
         .matches = has_no_hw_prefetch,
 },
 #ifdef CONFIG_ARM64_UAO
 {
         .desc = "User Access Override",
         .capability = ARM64_HAS_UAO,
-        .def_scope = SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
         .matches = has_cpuid_feature,
         .sys_reg = SYS_ID_AA64MMFR2_EL1,
         .field_pos = ID_AA64MMFR2_UAO_SHIFT,
@@ -921,21 +921,21 @@ static const struct arm64_cpu_capabiliti
 #ifdef CONFIG_ARM64_PAN
 {
         .capability = ARM64_ALT_PAN_NOT_UAO,
-        .def_scope = SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
         .matches = cpufeature_pan_not_uao,
 },
 #endif /* CONFIG_ARM64_PAN */
"Virtualization Host Extensions", .capability = ARM64_HAS_VIRT_HOST_EXTN, - .def_scope = SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SCOPE_SYSTEM, .matches = runs_at_el2, .cpu_enable = cpu_copy_el2regs, }, { .desc = "32-bit EL0 Support", .capability = ARM64_HAS_32BIT_EL0, - .def_scope = SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SCOPE_SYSTEM, .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64PFR0_EL1, .sign = FTR_UNSIGNED, @@ -945,14 +945,14 @@ static const struct arm64_cpu_capabiliti { .desc = "Reduced HYP mapping offset", .capability = ARM64_HYP_OFFSET_LOW, - .def_scope = SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SCOPE_SYSTEM, .matches = hyp_offset_low, }, #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 { .desc = "Kernel page table isolation (KPTI)", .capability = ARM64_UNMAP_KERNEL_AT_EL0, - .def_scope = SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SCOPE_SYSTEM, .matches = unmap_kernel_at_el0, .cpu_enable = kpti_install_ng_mappings, }, @@ -960,16 +960,16 @@ static const struct arm64_cpu_capabiliti {}, }; -#define HWCAP_CAP(reg, field, s, min_value, type, cap) \ +#define HWCAP_CAP(reg, field, s, min_value, cap_type, cap) \ { \ .desc = #cap, \ - .def_scope = SCOPE_SYSTEM, \ + .type = ARM64_CPUCAP_SCOPE_SYSTEM, \ .matches = has_cpuid_feature, \ .sys_reg = reg, \ .field_pos = field, \ .sign = s, \ .min_field_value = min_value, \ - .hwcap_type = type, \ + .hwcap_type = cap_type, \ .hwcap = cap, \ } @@ -1046,7 +1046,7 @@ static bool cpus_have_elf_hwcap(const st static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps) { for (; hwcaps->matches; hwcaps++) - if (hwcaps->matches(hwcaps, hwcaps->def_scope)) + if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps))) cap_set_elf_hwcap(hwcaps); } @@ -1073,7 +1073,7 @@ static void update_cpu_capabilities(cons const char *info) { for (; caps->matches; caps++) { - if (!caps->matches(caps, caps->def_scope)) + if (!caps->matches(caps, cpucap_default_scope(caps))) continue; if (!cpus_have_cap(caps->capability) && caps->desc) From patchwork Wed Apr 6 18:26:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558543 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0581EC43217 for ; Wed, 6 Apr 2022 19:27:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231174AbiDFT3B (ORCPT ); Wed, 6 Apr 2022 15:29:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48512 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229572AbiDFT1K (ORCPT ); Wed, 6 Apr 2022 15:27:10 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB5D3B53F4; Wed, 6 Apr 2022 11:28:12 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 81929B8253D; Wed, 6 Apr 2022 18:28:11 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B9A24C385A3; Wed, 6 Apr 2022 18:28:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269690; bh=3207PX1bCCiNBnTyir6/vytaoStzuPc8YRYrS1GA3n4=; 
From: Suzuki K Poulose

[ Upstream commit 5b4747c5dce7a873e1e7fe1608835825f714267a ]

When a CPU is brought up, it is checked against the caps that are known
to be enabled on the system (via verify_local_cpu_capabilities()). Based
on the state of the capability on the CPU vs. that of the system, we
could have the following combinations of conflict:

	x-----------------------------x
	| Type  | System   | Late CPU |
	|-----------------------------|
	|   a   |    y     |    n     |
	|-----------------------------|
	|   b   |    n     |    y     |
	x-----------------------------x

Case (a) is not permitted for caps which are system features, which the
system expects all CPUs to have (e.g. VHE), while (a) is ignored for all
errata work arounds. However, there could be exceptions to this plain
filtering approach, e.g. KPTI is an optional feature for a late CPU as
long as the system already enables it.

Case (b) is not permitted for errata work arounds that cannot be
activated after the kernel has finished booting. And we ignore (b) for
features. Here, yet again, KPTI is an exception: if a late CPU needs
KPTI we are too late to enable it (because we change the allocation of
ASIDs etc.).

Add two different flags to indicate how the conflict should be handled:

	ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - CPUs may have the capability
	ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU  - CPUs may not have the capability

Now that we have the flags to describe the behavior of the errata and
the features, as we treat them, define types for ERRATUM and FEATURE.

Cc: Will Deacon
Cc: Mark Rutland
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
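The resulting policy is easy to model in isolation. A minimal standalone
sketch of the conflict rules, using the flag values from the diff that
follows; late_cpu_ok() is an illustrative stand-in for the checks done
during late-CPU verification, not the kernel's actual function:

#include <stdbool.h>
#include <stdio.h>
#include <stdint.h>

#define ARM64_CPUCAP_SCOPE_LOCAL_CPU         ((uint16_t)(1u << 0))
#define ARM64_CPUCAP_SCOPE_SYSTEM            ((uint16_t)(1u << 1))
#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU  ((uint16_t)(1u << 4))
#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU   ((uint16_t)(1u << 5))

#define ARM64_CPUCAP_LOCAL_CPU_ERRATUM \
        (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
#define ARM64_CPUCAP_SYSTEM_FEATURE \
        (ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)

/* Model of the late-CPU conflict rules from the table above. */
static bool late_cpu_ok(uint16_t type, bool system_has, bool cpu_has)
{
        if (system_has && !cpu_has)     /* case (a): system y, late CPU n */
                return type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU;
        if (!system_has && cpu_has)     /* case (b): system n, late CPU y */
                return type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU;
        return true;                    /* no conflict */
}

int main(void)
{
        /* A late CPU that suddenly requires an erratum workaround: case (b). */
        printf("erratum, case (b): %s\n",
               late_cpu_ok(ARM64_CPUCAP_LOCAL_CPU_ERRATUM, false, true)
               ? "ok" : "reject");
        /* A late CPU missing a system feature the system enabled: case (a). */
        printf("feature, case (a): %s\n",
               late_cpu_ok(ARM64_CPUCAP_SYSTEM_FEATURE, true, false)
               ? "ok" : "reject");
        return 0;
}

Both prints say "reject", matching the two "not permitted" cases in the
commit message.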
---
 arch/arm64/include/asm/cpufeature.h |   68 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c      |   10 ++---
 arch/arm64/kernel/cpufeature.c      |   22 +++++------
 3 files changed, 84 insertions(+), 16 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -130,6 +130,7 @@ extern struct arm64_ftr_reg arm64_ftr_re
  *    an action, based on the severity (e.g., a CPU could be prevented from
  *    booting or cause a kernel panic). The CPU is allowed to "affect" the
  *    state of the capability, if it has not been finalised already.
+ *    See section 5 for more details on conflicts.
  *
  * 4) Action: As mentioned in (2), the kernel can take an action for each
  *    detected capability, on all CPUs on the system.
@@ -147,6 +148,34 @@ extern struct arm64_ftr_reg arm64_ftr_re
  *
  *      check_local_cpu_capabilities() -> verify_local_cpu_capabilities()
  *
+ * 5) Conflicts: Based on the state of the capability on a late CPU vs.
+ *    the system state, we could have the following combinations:
+ *
+ *      x-----------------------------x
+ *      | Type  | System   | Late CPU |
+ *      |-----------------------------|
+ *      |   a   |    y     |    n     |
+ *      |-----------------------------|
+ *      |   b   |    n     |    y     |
+ *      x-----------------------------x
+ *
+ *    Two separate flag bits are defined to indicate whether each kind of
+ *    conflict can be allowed:
+ *      ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU  - Case (a) is allowed
+ *      ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - Case (b) is allowed
+ *
+ *    Case (a) is not permitted for a capability that the system requires
+ *    all CPUs to have in order for the capability to be enabled. This is
+ *    typical for capabilities that represent enhanced functionality.
+ *
+ *    Case (b) is not permitted for a capability that must be enabled
+ *    during boot if any CPU in the system requires it in order to run
+ *    safely. This is typical for erratum work arounds that cannot be
+ *    enabled after the corresponding capability is finalised.
+ *
+ *    In some non-typical cases, either both (a) and (b), or neither,
+ *    should be permitted. This can be described by including neither
+ *    or both flags in the capability's type field.
+ */

@@ -160,6 +189,33 @@ extern struct arm64_ftr_reg arm64_ftr_re
 #define SCOPE_SYSTEM    ARM64_CPUCAP_SCOPE_SYSTEM
 #define SCOPE_LOCAL_CPU ARM64_CPUCAP_SCOPE_LOCAL_CPU

+/*
+ * Is it permitted for a late CPU to have this capability when the
+ * system hasn't already enabled it?
+ */
+#define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU     ((u16)BIT(4))
+/* Is it safe for a late CPU to miss this capability when the system has it? */
+#define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU      ((u16)BIT(5))
+
+/*
+ * CPU errata workarounds that need to be enabled at boot time if one or
+ * more CPUs in the system requires it. When one of these capabilities
+ * has been enabled, it is safe to allow any CPU to boot that doesn't
+ * require the workaround. However, it is not safe if a "late" CPU
+ * requires a workaround and the system hasn't enabled it already.
+ */
+#define ARM64_CPUCAP_LOCAL_CPU_ERRATUM          \
+        (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
+/*
+ * CPU feature detected at boot time based on the system-wide value of a
+ * feature. It is safe for a late CPU to have this feature even though
+ * the system hasn't enabled it, although the feature will not be used
+ * by Linux in this case. If the system has enabled this feature already,
+ * then every late CPU must have it.
+ */
+#define ARM64_CPUCAP_SYSTEM_FEATURE             \
+        (ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)
+
 struct arm64_cpu_capabilities {
         const char *desc;
         u16 capability;
@@ -193,6 +249,18 @@ static inline int cpucap_default_scope(c
         return cap->type & ARM64_CPUCAP_SCOPE_MASK;
 }

+static inline bool
+cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
+{
+        return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
+}
+
+static inline bool
+cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
+{
+        return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
+}
+
 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
 extern struct static_key_false arm64_const_caps_ready;
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -369,14 +369,14 @@ static bool has_ssbd_mitigation(const st
 #endif  /* CONFIG_ARM64_SSBD */

 #define MIDR_RANGE(model, min, max) \
-        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \
+        .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
         .matches = is_affected_midr_range, \
         .midr_model = model, \
         .midr_range_min = min, \
         .midr_range_max = max

 #define MIDR_ALL_VERSIONS(model) \
-        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU, \
+        .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
         .matches = is_affected_midr_range, \
         .midr_model = model, \
         .midr_range_min = 0, \
@@ -459,14 +459,14 @@ const struct arm64_cpu_capabilities arm6
         .desc = "Mismatched cache line size",
         .capability = ARM64_MISMATCHED_CACHE_LINE_SIZE,
         .matches = has_mismatched_cache_type,
-        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+        .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
         .cpu_enable = cpu_enable_trap_ctr_access,
 },
 {
         .desc = "Mismatched cache type",
         .capability = ARM64_MISMATCHED_CACHE_TYPE,
         .matches = has_mismatched_cache_type,
-        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+        .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
         .cpu_enable = cpu_enable_trap_ctr_access,
 },
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -504,7 +504,7 @@ const struct arm64_cpu_capabilities arm6
 #ifdef CONFIG_ARM64_SSBD
 {
         .desc = "Speculative Store Bypass Disable",
-        .type = ARM64_CPUCAP_SCOPE_LOCAL_CPU,
+        .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
         .capability = ARM64_SSBD,
         .matches = has_ssbd_mitigation,
 },
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -865,7 +865,7 @@ static const struct arm64_cpu_capabiliti
 {
         .desc = "GIC system register CPU interface",
         .capability = ARM64_HAS_SYSREG_GIC_CPUIF,
-        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SYSTEM_FEATURE,
         .matches = has_useable_gicv3_cpuif,
         .sys_reg = SYS_ID_AA64PFR0_EL1,
         .field_pos = ID_AA64PFR0_GIC_SHIFT,
@@ -876,7 +876,7 @@ static const struct arm64_cpu_capabiliti
 {
         .desc = "Privileged Access Never",
         .capability = ARM64_HAS_PAN,
-        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SYSTEM_FEATURE,
         .matches = has_cpuid_feature,
         .sys_reg = SYS_ID_AA64MMFR1_EL1,
         .field_pos = ID_AA64MMFR1_PAN_SHIFT,
@@ -889,7 +889,7 @@ static const struct arm64_cpu_capabiliti
 {
         .desc = "LSE atomic instructions",
         .capability = ARM64_HAS_LSE_ATOMICS,
-        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SYSTEM_FEATURE,
         .matches = has_cpuid_feature,
         .sys_reg = SYS_ID_AA64ISAR0_EL1,
         .field_pos = ID_AA64ISAR0_ATOMICS_SHIFT,
@@ -900,14 +900,14 @@ static const struct arm64_cpu_capabiliti
 {
         .desc = "Software prefetching using PRFM",
         .capability = ARM64_HAS_NO_HW_PREFETCH,
-        .type = ARM64_CPUCAP_SCOPE_SYSTEM,
+        .type = ARM64_CPUCAP_SYSTEM_FEATURE,
         .matches = has_no_hw_prefetch,
 },
 #ifdef CONFIG_ARM64_UAO
 {
"User Access Override", .capability = ARM64_HAS_UAO, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64MMFR2_EL1, .field_pos = ID_AA64MMFR2_UAO_SHIFT, @@ -921,21 +921,21 @@ static const struct arm64_cpu_capabiliti #ifdef CONFIG_ARM64_PAN { .capability = ARM64_ALT_PAN_NOT_UAO, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = cpufeature_pan_not_uao, }, #endif /* CONFIG_ARM64_PAN */ { .desc = "Virtualization Host Extensions", .capability = ARM64_HAS_VIRT_HOST_EXTN, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = runs_at_el2, .cpu_enable = cpu_copy_el2regs, }, { .desc = "32-bit EL0 Support", .capability = ARM64_HAS_32BIT_EL0, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64PFR0_EL1, .sign = FTR_UNSIGNED, @@ -945,14 +945,14 @@ static const struct arm64_cpu_capabiliti { .desc = "Reduced HYP mapping offset", .capability = ARM64_HYP_OFFSET_LOW, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = hyp_offset_low, }, #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 { .desc = "Kernel page table isolation (KPTI)", .capability = ARM64_UNMAP_KERNEL_AT_EL0, - .type = ARM64_CPUCAP_SCOPE_SYSTEM, + .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = unmap_kernel_at_el0, .cpu_enable = kpti_install_ng_mappings, }, @@ -963,7 +963,7 @@ static const struct arm64_cpu_capabiliti #define HWCAP_CAP(reg, field, s, min_value, cap_type, cap) \ { \ .desc = #cap, \ - .type = ARM64_CPUCAP_SCOPE_SYSTEM, \ + .type = ARM64_CPUCAP_SYSTEM_FEATURE, \ .matches = has_cpuid_feature, \ .sys_reg = reg, \ .field_pos = field, \ From patchwork Wed Apr 6 18:26:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558548 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8BAC3C433EF for ; Wed, 6 Apr 2022 19:16:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229816AbiDFTR0 (ORCPT ); Wed, 6 Apr 2022 15:17:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39672 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229933AbiDFTPy (ORCPT ); Wed, 6 Apr 2022 15:15:54 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE9C127FCBB; Wed, 6 Apr 2022 11:28:13 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 63DB461B89; Wed, 6 Apr 2022 18:28:13 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 71F89C385A1; Wed, 6 Apr 2022 18:28:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269692; bh=01v3XoePHGzw/UoYUpx/8ppu8DorS5rqyoT218B2w50=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UZ/R4KHDgbKeWe6kh88kEdEts6m7DMHcPtdDc1lfN5WcsbJCTbs2zKuPBqckQGIyg bLvGXTJK7NGiwjq0wMsM4CEAQmp88hbIIGh8b7Cd2ZWZpjmp+TseiV1YcixTqK5GLV 3n8VCBj5iVPuEPCXqkaXfD6IuyEAtBj4HPfGi8ow= From: Greg Kroah-Hartman 
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Will Deacon, Mark Rutland, Ard Biesheuvel, Dave Martin, Suzuki K Poulose, James Morse
Subject: [PATCH 4.9 09/43] arm64: capabilities: Clean up midr range helpers
Date: Wed, 6 Apr 2022 20:26:18 +0200
Message-Id: <20220406182436.955249794@linuxfoundation.org>
In-Reply-To: <20220406182436.675069715@linuxfoundation.org>

From: Suzuki K Poulose

[ Upstream commit 5e7951ce19abf4113645ae789c033917356ee96f ]

We are about to introduce generic MIDR range helpers. Clean up the
existing helpers in erratum handling, preparing them to use the generic
versions.

Cc: Will Deacon
Cc: Mark Rutland
Cc: Ard Biesheuvel
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
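The ERRATA_MIDR_* helpers introduced below are plain designated-initializer
fragments, so their expansion can be checked in isolation. A minimal
standalone sketch; the struct, the matches() stub, and the toy type value
(bit 0 | bit 5, the encoding from the previous patch) are simplified
stand-ins for the kernel's definitions:

#include <stdint.h>
#include <stdio.h>

#define MIDR_VARIANT_SHIFT 20
#define MIDR_CPU_VAR_REV(var, rev) (((var) << MIDR_VARIANT_SHIFT) | (rev))

struct cap {                            /* simplified arm64_cpu_capabilities */
        uint16_t type;
        int (*matches)(void);
        uint32_t midr_model;
        uint32_t midr_range_min, midr_range_max;
};

static int is_affected_midr_range(void) { return 0; }  /* stub */

#define ARM64_CPUCAP_LOCAL_CPU_ERRATUM ((uint16_t)0x21)  /* BIT(0)|BIT(5) */

#define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)       \
        .matches = is_affected_midr_range,                      \
        .midr_model = model,                                    \
        .midr_range_min = MIDR_CPU_VAR_REV(v_min, r_min),       \
        .midr_range_max = MIDR_CPU_VAR_REV(v_max, r_max)

#define ERRATA_MIDR_RANGE(model, v_min, r_min, v_max, r_max)    \
        .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,                 \
        CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)

#define ERRATA_MIDR_REV_RANGE(m, var, r_min, r_max)             \
        ERRATA_MIDR_RANGE(m, var, r_min, var, r_max)

#define MIDR_CORTEX_A53 0x410fd030u

int main(void)
{
        /* Cortex-A53 r0p0 - r0p2, as in the 826319/827319/824069 entry */
        struct cap c = { ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 2) };

        printf("model=%#x range=[%#x, %#x]\n",
               c.midr_model, c.midr_range_min, c.midr_range_max);
        return 0;
}

Spelling the variant/revision out as separate macro arguments is what lets
the next patch swap the three midr_* fields for a single struct midr_range
without touching any table entry.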
"ARM errata 819472", .capability = ARM64_WORKAROUND_CLEAN_CACHE, - MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x01), + ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 1), .cpu_enable = cpu_enable_cache_maint_trap, }, #endif @@ -408,9 +426,9 @@ const struct arm64_cpu_capabilities arm6 /* Cortex-A57 r0p0 - r1p2 */ .desc = "ARM erratum 832075", .capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE, - MIDR_RANGE(MIDR_CORTEX_A57, - MIDR_CPU_VAR_REV(0, 0), - MIDR_CPU_VAR_REV(1, 2)), + ERRATA_MIDR_RANGE(MIDR_CORTEX_A57, + 0, 0, + 1, 2), }, #endif #ifdef CONFIG_ARM64_ERRATUM_834220 @@ -418,9 +436,9 @@ const struct arm64_cpu_capabilities arm6 /* Cortex-A57 r0p0 - r1p2 */ .desc = "ARM erratum 834220", .capability = ARM64_WORKAROUND_834220, - MIDR_RANGE(MIDR_CORTEX_A57, - MIDR_CPU_VAR_REV(0, 0), - MIDR_CPU_VAR_REV(1, 2)), + ERRATA_MIDR_RANGE(MIDR_CORTEX_A57, + 0, 0, + 1, 2), }, #endif #ifdef CONFIG_ARM64_ERRATUM_845719 @@ -428,7 +446,7 @@ const struct arm64_cpu_capabilities arm6 /* Cortex-A53 r0p[01234] */ .desc = "ARM erratum 845719", .capability = ARM64_WORKAROUND_845719, - MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04), + ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 4), }, #endif #ifdef CONFIG_CAVIUM_ERRATUM_23154 @@ -436,7 +454,7 @@ const struct arm64_cpu_capabilities arm6 /* Cavium ThunderX, pass 1.x */ .desc = "Cavium erratum 23154", .capability = ARM64_WORKAROUND_CAVIUM_23154, - MIDR_RANGE(MIDR_THUNDERX, 0x00, 0x01), + ERRATA_MIDR_REV_RANGE(MIDR_THUNDERX, 0, 0, 1), }, #endif #ifdef CONFIG_CAVIUM_ERRATUM_27456 @@ -444,15 +462,15 @@ const struct arm64_cpu_capabilities arm6 /* Cavium ThunderX, T88 pass 1.x - 2.1 */ .desc = "Cavium erratum 27456", .capability = ARM64_WORKAROUND_CAVIUM_27456, - MIDR_RANGE(MIDR_THUNDERX, - MIDR_CPU_VAR_REV(0, 0), - MIDR_CPU_VAR_REV(1, 1)), + ERRATA_MIDR_RANGE(MIDR_THUNDERX, + 0, 0, + 1, 1), }, { /* Cavium ThunderX, T81 pass 1.0 */ .desc = "Cavium erratum 27456", .capability = ARM64_WORKAROUND_CAVIUM_27456, - MIDR_RANGE(MIDR_THUNDERX_81XX, 0x00, 0x00), + ERRATA_MIDR_REV(MIDR_THUNDERX_81XX, 0, 0), }, #endif { @@ -472,32 +490,32 @@ const struct arm64_cpu_capabilities arm6 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR { .capability = ARM64_HARDEN_BRANCH_PREDICTOR, - MIDR_ALL_VERSIONS(MIDR_CORTEX_A57), + ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A57), .cpu_enable = enable_smccc_arch_workaround_1, }, { .capability = ARM64_HARDEN_BRANCH_PREDICTOR, - MIDR_ALL_VERSIONS(MIDR_CORTEX_A72), + ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A72), .cpu_enable = enable_smccc_arch_workaround_1, }, { .capability = ARM64_HARDEN_BRANCH_PREDICTOR, - MIDR_ALL_VERSIONS(MIDR_CORTEX_A73), + ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73), .cpu_enable = enable_smccc_arch_workaround_1, }, { .capability = ARM64_HARDEN_BRANCH_PREDICTOR, - MIDR_ALL_VERSIONS(MIDR_CORTEX_A75), + ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A75), .cpu_enable = enable_smccc_arch_workaround_1, }, { .capability = ARM64_HARDEN_BRANCH_PREDICTOR, - MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN), + ERRATA_MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN), .cpu_enable = enable_smccc_arch_workaround_1, }, { .capability = ARM64_HARDEN_BRANCH_PREDICTOR, - MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2), + ERRATA_MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2), .cpu_enable = enable_smccc_arch_workaround_1, }, #endif From patchwork Wed Apr 6 18:26:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558546 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org 
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Will Deacon, Mark Rutland, Ard Biesheuvel, Dave Martin, Suzuki K Poulose, James Morse
Subject: [PATCH 4.9 10/43] arm64: Add helpers for checking CPU MIDR against a range
Date: Wed, 6 Apr 2022 20:26:19 +0200
Message-Id: <20220406182436.984391315@linuxfoundation.org>
In-Reply-To: <20220406182436.675069715@linuxfoundation.org>

From: Suzuki K Poulose

[ Upstream commit 1df310505d6d544802016f6bae49aab836ae8510 ]

Add helpers for checking if the given CPU midr falls in a range of
variants/revisions for a given model.

Cc: Will Deacon
Cc: Mark Rutland
Cc: Ard Biesheuvel
Reviewed-by: Dave Martin
Signed-off-by: Suzuki K Poulose
Signed-off-by: Will Deacon
Signed-off-by: Ard Biesheuvel
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
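The new check is pure bit arithmetic, so it can be exercised standalone.
A minimal sketch with the MIDR field masks copied from cputype.h; the
Cortex-A57 values model the erratum 832075 window (r0p0 - r1p2) used
elsewhere in this series:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MIDR_PARTNUM_SHIFT        4
#define MIDR_ARCHITECTURE_SHIFT  16
#define MIDR_VARIANT_SHIFT       20
#define MIDR_IMPLEMENTOR_SHIFT   24
#define MIDR_PARTNUM_MASK        (0xfffu << MIDR_PARTNUM_SHIFT)
#define MIDR_ARCHITECTURE_MASK   (0xfu << MIDR_ARCHITECTURE_SHIFT)
#define MIDR_VARIANT_MASK        (0xfu << MIDR_VARIANT_SHIFT)
#define MIDR_IMPLEMENTOR_MASK    (0xffu << MIDR_IMPLEMENTOR_SHIFT)
#define MIDR_REVISION_MASK       0xfu
#define MIDR_CPU_VAR_REV(var, rev) (((var) << MIDR_VARIANT_SHIFT) | (rev))

struct midr_range {
        uint32_t model;
        uint32_t rv_min;
        uint32_t rv_max;
};

#define MIDR_RANGE(m, v_min, r_min, v_max, r_max)               \
        { .model = m,                                           \
          .rv_min = MIDR_CPU_VAR_REV(v_min, r_min),             \
          .rv_max = MIDR_CPU_VAR_REV(v_max, r_max) }

static bool is_midr_in_range(uint32_t midr, const struct midr_range *range)
{
        /* Compare the model bits; check variant/revision against the range */
        uint32_t model = midr & (MIDR_IMPLEMENTOR_MASK |
                                 MIDR_ARCHITECTURE_MASK | MIDR_PARTNUM_MASK);
        uint32_t rv = midr & (MIDR_VARIANT_MASK | MIDR_REVISION_MASK);

        return model == range->model && rv >= range->rv_min && rv <= range->rv_max;
}

int main(void)
{
        const struct midr_range a57 = MIDR_RANGE(0x410fd070u, 0, 0, 1, 2);

        printf("A57 r1p2: %d\n", is_midr_in_range(0x411fd072u, &a57)); /* 1 */
        printf("A57 r1p3: %d\n", is_midr_in_range(0x411fd073u, &a57)); /* 0 */
        return 0;
}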
---
 arch/arm64/include/asm/cpufeature.h |    4 ++--
 arch/arm64/include/asm/cputype.h    |   30 ++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpu_errata.c      |   14 +++++---------
 3 files changed, 37 insertions(+), 11 deletions(-)

--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -10,6 +10,7 @@
 #define __ASM_CPUFEATURE_H

 #include
+#include
 #include
 #include
@@ -229,8 +230,7 @@ struct arm64_cpu_capabilities {
         void (*cpu_enable)(const struct arm64_cpu_capabilities *cap);
         union {
                 struct {        /* To be used for erratum handling only */
-                        u32 midr_model;
-                        u32 midr_range_min, midr_range_max;
+                        struct midr_range midr_range;
                 };

                 struct {        /* Feature register checking */
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -114,6 +114,36 @@
 #define read_cpuid(reg)                 read_sysreg_s(SYS_ ## reg)

 /*
+ * Represent a range of MIDR values for a given CPU model and a
+ * range of variant/revision values.
+ *
+ * @model   - CPU model as defined by MIDR_CPU_MODEL
+ * @rv_min  - Minimum value for the revision/variant as defined by
+ *            MIDR_CPU_VAR_REV
+ * @rv_max  - Maximum value for the variant/revision for the range.
+ */
+struct midr_range {
+        u32 model;
+        u32 rv_min;
+        u32 rv_max;
+};
+
+#define MIDR_RANGE(m, v_min, r_min, v_max, r_max)               \
+        {                                                       \
+                .model = m,                                     \
+                .rv_min = MIDR_CPU_VAR_REV(v_min, r_min),       \
+                .rv_max = MIDR_CPU_VAR_REV(v_max, r_max),       \
+        }
+
+#define MIDR_ALL_VERSIONS(m) MIDR_RANGE(m, 0, 0, 0xf, 0xf)
+
+static inline bool is_midr_in_range(u32 midr, struct midr_range const *range)
+{
+        return MIDR_IS_CPU_MODEL_RANGE(midr, range->model,
+                                       range->rv_min, range->rv_max);
+}
+
+/*
  * The CPU ID never changes at run time, so we might as well tell the
  * compiler that it's constant. Use this function to read the CPU ID
  * rather than directly reading processor_id or read_cpuid() directly.
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -27,10 +27,10 @@
 static bool __maybe_unused
 is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
 {
+        u32 midr = read_cpuid_id();
+
         WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
-        return MIDR_IS_CPU_MODEL_RANGE(read_cpuid_id(), entry->midr_model,
-                                       entry->midr_range_min,
-                                       entry->midr_range_max);
+        return is_midr_in_range(midr, &entry->midr_range);
 }

 static bool
@@ -370,15 +370,11 @@ static bool has_ssbd_mitigation(const st

 #define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max)       \
         .matches = is_affected_midr_range,                      \
-        .midr_model = model,                                    \
-        .midr_range_min = MIDR_CPU_VAR_REV(v_min, r_min),       \
-        .midr_range_max = MIDR_CPU_VAR_REV(v_max, r_max)
+        .midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)

 #define CAP_MIDR_ALL_VERSIONS(model)                            \
         .matches = is_affected_midr_range,                      \
-        .midr_model = model,                                    \
-        .midr_range_min = MIDR_CPU_VAR_REV(0, 0),               \
-        .midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)
+        .midr_range = MIDR_ALL_VERSIONS(model)

 #define MIDR_FIXED(rev, revidr_mask) \
         .fixed_revs = (struct arm64_midr_revidr[]){{ (rev), (revidr_mask) }, {}}
From patchwork Wed Apr 6 18:26:22 2022
X-Patchwork-Id: 558536
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ding Tianhong, Mark Rutland, Daniel Lezcano, James Morse
Subject: [PATCH 4.9 13/43] clocksource/drivers/arm_arch_timer: Introduce generic errata handling infrastructure
Date: Wed, 6 Apr 2022 20:26:22 +0200
Message-Id: <20220406182437.070605313@linuxfoundation.org>
In-Reply-To: <20220406182436.675069715@linuxfoundation.org>

From: Ding Tianhong

commit 16d10ef29f25aba923779234bb93a451b14d20e6 upstream.

Currently we have code inline in the arch timer probe path to cater for
Freescale erratum A-008585, complete with ifdeffery. This is a little
ugly, and will get worse as we try to add more errata handling.

This patch refactors the handling of Freescale erratum A-008585. Now the
erratum is described in a generic arch_timer_erratum_workaround
structure, and the probe path can iterate over these to detect errata
and enable workarounds.

This will simplify the addition and maintenance of code handling
Hisilicon erratum 161010101.

Signed-off-by: Ding Tianhong
[Mark: split patch, correct Kconfig, reword commit message]
Signed-off-by: Mark Rutland
Acked-by: Daniel Lezcano
Signed-off-by: Daniel Lezcano
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
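The core of the refactoring is a table of workaround descriptors matched
by erratum id. A minimal standalone sketch of that dispatch; the fake read
function and find_wa() are illustrative stand-ins (the real probe code
matches the id against a device-tree property and reads system registers):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Shape of the structure introduced by this patch */
struct arch_timer_erratum_workaround {
        const char *id;                 /* the erratum id, e.g. from DT */
        uint32_t (*read_cntp_tval_el0)(void);
        uint32_t (*read_cntv_tval_el0)(void);
        uint64_t (*read_cntvct_el0)(void);
};

static uint64_t fake_fsl_read_cntvct(void) { return 42; }  /* stand-in */

static const struct arch_timer_erratum_workaround ool_workarounds[] = {
        { .id = "fsl,erratum-a008585", .read_cntvct_el0 = fake_fsl_read_cntvct },
};

/* Model of the probe-time loop: pick the entry whose id matches. */
static const struct arch_timer_erratum_workaround *find_wa(const char *id)
{
        for (size_t i = 0;
             i < sizeof(ool_workarounds) / sizeof(ool_workarounds[0]); i++)
                if (strcmp(ool_workarounds[i].id, id) == 0)
                        return &ool_workarounds[i];
        return NULL;
}

int main(void)
{
        const struct arch_timer_erratum_workaround *wa =
                find_wa("fsl,erratum-a008585");

        if (wa && wa->read_cntvct_el0)
                printf("cntvct via workaround: %llu\n",
                       (unsigned long long)wa->read_cntvct_el0());
        return 0;
}

Adding a new erratum then means appending one table entry rather than
threading more ifdeffery through the probe path.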
---
 arch/arm64/include/asm/arch_timer.h  |   40 +++++----------
 drivers/clocksource/Kconfig          |    4 +
 drivers/clocksource/arm_arch_timer.c |   90 ++++++++++++++++++++++-----------
 3 files changed, 80 insertions(+), 54 deletions(-)

--- a/arch/arm64/include/asm/arch_timer.h
+++ b/arch/arm64/include/asm/arch_timer.h
@@ -29,41 +29,29 @@

 #include

-#if IS_ENABLED(CONFIG_FSL_ERRATUM_A008585)
+#if IS_ENABLED(CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND)
 extern struct static_key_false arch_timer_read_ool_enabled;
-#define needs_fsl_a008585_workaround() \
+#define needs_unstable_timer_counter_workaround() \
         static_branch_unlikely(&arch_timer_read_ool_enabled)
 #else
-#define needs_fsl_a008585_workaround()  false
+#define needs_unstable_timer_counter_workaround()  false
 #endif

-u32 __fsl_a008585_read_cntp_tval_el0(void);
-u32 __fsl_a008585_read_cntv_tval_el0(void);
-u64 __fsl_a008585_read_cntvct_el0(void);
-
-/*
- * The number of retries is an arbitrary value well beyond the highest number
- * of iterations the loop has been observed to take.
- */
-#define __fsl_a008585_read_reg(reg) ({                  \
-        u64 _old, _new;                                 \
-        int _retries = 200;                             \
-                                                        \
-        do {                                            \
-                _old = read_sysreg(reg);                \
-                _new = read_sysreg(reg);                \
-                _retries--;                             \
-        } while (unlikely(_old != _new) && _retries);   \
-                                                        \
-        WARN_ON_ONCE(!_retries);                        \
-        _new;                                           \
-})
+
+struct arch_timer_erratum_workaround {
+        const char *id;         /* Indicate the Erratum ID */
+        u32 (*read_cntp_tval_el0)(void);
+        u32 (*read_cntv_tval_el0)(void);
+        u64 (*read_cntvct_el0)(void);
+};
+
+extern const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround;

 #define arch_timer_reg_read_stable(reg) \
 ({                                                      \
         u64 _val;                                       \
-        if (needs_fsl_a008585_workaround())             \
-                _val = __fsl_a008585_read_##reg();      \
+        if (needs_unstable_timer_counter_workaround())  \
+                _val = timer_unstable_counter_workaround->read_##reg();\
         else                                            \
                 _val = read_sysreg(reg);                \
         _val;                                           \
--- a/drivers/clocksource/Kconfig
+++ b/drivers/clocksource/Kconfig
@@ -305,10 +305,14 @@ config ARM_ARCH_TIMER_EVTSTREAM
           This must be disabled for hardware validation purposes to detect any
           hardware anomalies of missing events.

+config ARM_ARCH_TIMER_OOL_WORKAROUND
+        bool
+
 config FSL_ERRATUM_A008585
         bool "Workaround for Freescale/NXP Erratum A-008585"
         default y
         depends on ARM_ARCH_TIMER && ARM64
+        select ARM_ARCH_TIMER_OOL_WORKAROUND
         help
           This option enables a workaround for Freescale/NXP Erratum
           A-008585 ("ARM generic timer may contain an erroneous
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -96,27 +96,58 @@ early_param("clocksource.arm_arch_timer.
 */

 #ifdef CONFIG_FSL_ERRATUM_A008585
-DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled);
-EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled);
-
-static int fsl_a008585_enable = -1;
+/*
+ * The number of retries is an arbitrary value well beyond the highest number
+ * of iterations the loop has been observed to take.
+ */ +#define __fsl_a008585_read_reg(reg) ({ \ + u64 _old, _new; \ + int _retries = 200; \ + \ + do { \ + _old = read_sysreg(reg); \ + _new = read_sysreg(reg); \ + _retries--; \ + } while (unlikely(_old != _new) && _retries); \ + \ + WARN_ON_ONCE(!_retries); \ + _new; \ +}) -u32 __fsl_a008585_read_cntp_tval_el0(void) +static u32 notrace fsl_a008585_read_cntp_tval_el0(void) { return __fsl_a008585_read_reg(cntp_tval_el0); } -u32 __fsl_a008585_read_cntv_tval_el0(void) +static u32 notrace fsl_a008585_read_cntv_tval_el0(void) { return __fsl_a008585_read_reg(cntv_tval_el0); } -u64 __fsl_a008585_read_cntvct_el0(void) +static u64 notrace fsl_a008585_read_cntvct_el0(void) { return __fsl_a008585_read_reg(cntvct_el0); } -EXPORT_SYMBOL(__fsl_a008585_read_cntvct_el0); -#endif /* CONFIG_FSL_ERRATUM_A008585 */ +#endif + +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND +const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround = NULL; +EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround); + +DEFINE_STATIC_KEY_FALSE(arch_timer_read_ool_enabled); +EXPORT_SYMBOL_GPL(arch_timer_read_ool_enabled); + +static const struct arch_timer_erratum_workaround ool_workarounds[] = { +#ifdef CONFIG_FSL_ERRATUM_A008585 + { + .id = "fsl,erratum-a008585", + .read_cntp_tval_el0 = fsl_a008585_read_cntp_tval_el0, + .read_cntv_tval_el0 = fsl_a008585_read_cntv_tval_el0, + .read_cntvct_el0 = fsl_a008585_read_cntvct_el0, + }, +#endif +}; +#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */ static __always_inline void arch_timer_reg_write(int access, enum arch_timer_reg reg, u32 val, @@ -267,8 +298,8 @@ static __always_inline void set_next_eve arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk); } -#ifdef CONFIG_FSL_ERRATUM_A008585 -static __always_inline void fsl_a008585_set_next_event(const int access, +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND +static __always_inline void erratum_set_next_event_generic(const int access, unsigned long evt, struct clock_event_device *clk) { unsigned long ctrl; @@ -286,20 +317,20 @@ static __always_inline void fsl_a008585_ arch_timer_reg_write(access, ARCH_TIMER_REG_CTRL, ctrl, clk); } -static int fsl_a008585_set_next_event_virt(unsigned long evt, +static int erratum_set_next_event_virt(unsigned long evt, struct clock_event_device *clk) { - fsl_a008585_set_next_event(ARCH_TIMER_VIRT_ACCESS, evt, clk); + erratum_set_next_event_generic(ARCH_TIMER_VIRT_ACCESS, evt, clk); return 0; } -static int fsl_a008585_set_next_event_phys(unsigned long evt, +static int erratum_set_next_event_phys(unsigned long evt, struct clock_event_device *clk) { - fsl_a008585_set_next_event(ARCH_TIMER_PHYS_ACCESS, evt, clk); + erratum_set_next_event_generic(ARCH_TIMER_PHYS_ACCESS, evt, clk); return 0; } -#endif /* CONFIG_FSL_ERRATUM_A008585 */ +#endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */ static int arch_timer_set_next_event_virt(unsigned long evt, struct clock_event_device *clk) @@ -329,16 +360,16 @@ static int arch_timer_set_next_event_phy return 0; } -static void fsl_a008585_set_sne(struct clock_event_device *clk) +static void erratum_workaround_set_sne(struct clock_event_device *clk) { -#ifdef CONFIG_FSL_ERRATUM_A008585 +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND if (!static_branch_unlikely(&arch_timer_read_ool_enabled)) return; if (arch_timer_uses_ppi == VIRT_PPI) - clk->set_next_event = fsl_a008585_set_next_event_virt; + clk->set_next_event = erratum_set_next_event_virt; else - clk->set_next_event = fsl_a008585_set_next_event_phys; + clk->set_next_event = erratum_set_next_event_phys; #endif } 
@@ -371,7 +402,7 @@ static void __arch_timer_setup(unsigned BUG(); } - fsl_a008585_set_sne(clk); + erratum_workaround_set_sne(clk); } else { clk->features |= CLOCK_EVT_FEAT_DYNIRQ; clk->name = "arch_mem_timer"; @@ -600,7 +631,7 @@ static void __init arch_counter_register clocksource_counter.archdata.vdso_direct = true; -#ifdef CONFIG_FSL_ERRATUM_A008585 +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND /* * Don't use the vdso fastpath if errata require using * the out-of-line counter accessor. @@ -888,12 +919,15 @@ static int __init arch_timer_of_init(str arch_timer_c3stop = !of_property_read_bool(np, "always-on"); -#ifdef CONFIG_FSL_ERRATUM_A008585 - if (fsl_a008585_enable < 0) - fsl_a008585_enable = of_property_read_bool(np, "fsl,erratum-a008585"); - if (fsl_a008585_enable) { - static_branch_enable(&arch_timer_read_ool_enabled); - pr_info("Enabling workaround for FSL erratum A-008585\n"); +#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND + for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) { + if (of_property_read_bool(np, ool_workarounds[i].id)) { + timer_unstable_counter_workaround = &ool_workarounds[i]; + static_branch_enable(&arch_timer_read_ool_enabled); + pr_info("arch_timer: Enabling workaround for %s\n", + timer_unstable_counter_workaround->id); + break; + } } #endif From patchwork Wed Apr 6 18:26:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558532 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E641C433F5 for ; Wed, 6 Apr 2022 20:08:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233710AbiDFUK0 (ORCPT ); Wed, 6 Apr 2022 16:10:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51414 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233698AbiDFUKQ (ORCPT ); Wed, 6 Apr 2022 16:10:16 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 46FB7CF4B3; Wed, 6 Apr 2022 11:27:17 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id EBF7EB824E1; Wed, 6 Apr 2022 18:27:15 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5F23CC385A3; Wed, 6 Apr 2022 18:27:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269634; bh=T0hUhy+ZEHBPKg8ig2zcn/mvi2SifcsVfTmk4w6GxtQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=oaw8xf875UaT0VQxcuA/kQ3b7gN4z7SfO+5nKQGfFptaozx7OjQh3vpreNSxVerQ2 RyNFd4TfsT/ZtRcTF4J6ATLdgLwSpRU43GeFKUZS1ehyg2PCV7GM1aSW26/xJb3alK eigwSgnTvmpy95VZ5cXcPSVaroZOqCNalIDOAjxc= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Hanjun Guo , Marc Zyngier , James Morse Subject: [PATCH 4.9 14/43] arm64: arch_timer: Add infrastructure for multiple erratum detection methods Date: Wed, 6 Apr 2022 20:26:23 +0200 Message-Id: <20220406182437.099454875@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: 
<20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier commit 651bb2e9dca6e6dbad3fba5f6e6086a23575b8b5 upstream. We're currently stuck with DT when it comes to handling errata, which is pretty restrictive. In order to make things more flexible, let's introduce an infrastructure that could support alternative discovery methods. No change in functionality. Acked-by: Thomas Gleixner Reviewed-by: Hanjun Guo Signed-off-by: Marc Zyngier [ morse: Removed the changes to HiSilicon erratum 161010101, which isn't present in v4.9 ] Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/arch_timer.h | 7 ++- drivers/clocksource/arm_arch_timer.c | 81 ++++++++++++++++++++++++++++++----- 2 files changed, 76 insertions(+), 12 deletions(-) --- a/arch/arm64/include/asm/arch_timer.h +++ b/arch/arm64/include/asm/arch_timer.h @@ -37,9 +37,14 @@ extern struct static_key_false arch_time #define needs_unstable_timer_counter_workaround() false #endif +enum arch_timer_erratum_match_type { + ate_match_dt, +}; struct arch_timer_erratum_workaround { - const char *id; /* Indicate the Erratum ID */ + enum arch_timer_erratum_match_type match_type; + const void *id; + const char *desc; u32 (*read_cntp_tval_el0)(void); u32 (*read_cntv_tval_el0)(void); u64 (*read_cntvct_el0)(void); --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -140,13 +140,81 @@ EXPORT_SYMBOL_GPL(arch_timer_read_ool_en static const struct arch_timer_erratum_workaround ool_workarounds[] = { #ifdef CONFIG_FSL_ERRATUM_A008585 { + .match_type = ate_match_dt, .id = "fsl,erratum-a008585", + .desc = "Freescale erratum a008585", .read_cntp_tval_el0 = fsl_a008585_read_cntp_tval_el0, .read_cntv_tval_el0 = fsl_a008585_read_cntv_tval_el0, .read_cntvct_el0 = fsl_a008585_read_cntvct_el0, }, #endif }; + +typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *, + const void *); + +static +bool arch_timer_check_dt_erratum(const struct arch_timer_erratum_workaround *wa, + const void *arg) +{ + const struct device_node *np = arg; + + return of_property_read_bool(np, wa->id); +} + +static const struct arch_timer_erratum_workaround * +arch_timer_iterate_errata(enum arch_timer_erratum_match_type type, + ate_match_fn_t match_fn, + void *arg) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) { + if (ool_workarounds[i].match_type != type) + continue; + + if (match_fn(&ool_workarounds[i], arg)) + return &ool_workarounds[i]; + } + + return NULL; +} + +static +void arch_timer_enable_workaround(const struct arch_timer_erratum_workaround *wa) +{ + timer_unstable_counter_workaround = wa; + static_branch_enable(&arch_timer_read_ool_enabled); +} + +static void arch_timer_check_ool_workaround(enum arch_timer_erratum_match_type type, + void *arg) +{ + const struct arch_timer_erratum_workaround *wa; + ate_match_fn_t match_fn = NULL; + + if (static_branch_unlikely(&arch_timer_read_ool_enabled)) + return; + + switch (type) { + case ate_match_dt: + match_fn = arch_timer_check_dt_erratum; + break; + default: + WARN_ON(1); + return; + } + + wa = arch_timer_iterate_errata(type, match_fn, arg); + if (!wa) + return; + + arch_timer_enable_workaround(wa); + pr_info("Enabling global workaround for %s\n", wa->desc); +} + +#else +#define arch_timer_check_ool_workaround(t,a) do { } while(0) #endif /* CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */ static __always_inline @@ -919,17 +987,8 @@
static int __init arch_timer_of_init(str arch_timer_c3stop = !of_property_read_bool(np, "always-on"); -#ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND - for (i = 0; i < ARRAY_SIZE(ool_workarounds); i++) { - if (of_property_read_bool(np, ool_workarounds[i].id)) { - timer_unstable_counter_workaround = &ool_workarounds[i]; - static_branch_enable(&arch_timer_read_ool_enabled); - pr_info("arch_timer: Enabling workaround for %s\n", - timer_unstable_counter_workaround->id); - break; - } - } -#endif + /* Check for globally applicable workarounds */ + arch_timer_check_ool_workaround(ate_match_dt, np); /* * If we cannot rely on firmware initializing the timer registers then From patchwork Wed Apr 6 18:26:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558530 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BE4BC433EF for ; Wed, 6 Apr 2022 20:15:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231650AbiDFUQ4 (ORCPT ); Wed, 6 Apr 2022 16:16:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47992 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234317AbiDFUPa (ORCPT ); Wed, 6 Apr 2022 16:15:30 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E4131B6089; Wed, 6 Apr 2022 11:27:20 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id C4E1EB8252B; Wed, 6 Apr 2022 18:27:18 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 319D0C385A1; Wed, 6 Apr 2022 18:27:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269637; bh=6G22upvbXoxta8nk2RQbeDekiGmPjZ2D3Z7Cr5183MA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=cETam1jJgbMpSVIgPvdbSpjUj/gHVQtBUyzvXkRCHVYvm9ZSAIBAcPSc6CZJVv1cB KwLE/PPSrd00DOUTbXvgIrDcOq9nTfE42+EmK8s976f6VlEThKWYkuDMvEWsgFCewD iYgjx6GPspOabBY1ngMJg1ap15q0bKRiLX3qbyb0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Thomas Gleixner , Marc Zyngier , James Morse Subject: [PATCH 4.9 15/43] arm64: arch_timer: Add erratum handler for CPU-specific capability Date: Wed, 6 Apr 2022 20:26:24 +0200 Message-Id: <20220406182437.127728140@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier commit 0064030c6fd4ca6cfab42de037b2a89445beeead upstream. Should we ever have a workaround for an erratum that is detected using a capability and affecting a particular CPU, it'd be nice to have a way to probe them directly. 
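[ A standalone C sketch of the matching scheme this patch extends; it is not kernel code: the struct layout, this_cpu_has_cap() and the device-tree lookup below are stand-ins. The point is that a DT matcher keys on a property name, while the local-capability matcher reinterprets the same id field as a CPU capability number. ]

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	enum match_type { ate_match_dt, ate_match_local_cap_id };

	struct workaround {
		enum match_type match_type;
		const void *id;		/* property name or capability number */
	};

	/* Fake capability test: pretend only capability 20 is set on this CPU. */
	static bool this_cpu_has_cap(uintptr_t cap)
	{
		return cap == 20;
	}

	/* Fake DT lookup: pretend the node has the fsl,erratum-a008585 property. */
	static bool dt_has_property(const char *prop)
	{
		return strcmp(prop, "fsl,erratum-a008585") == 0;
	}

	static bool wa_matches(const struct workaround *wa)
	{
		switch (wa->match_type) {
		case ate_match_dt:
			return dt_has_property(wa->id);
		case ate_match_local_cap_id:
			return this_cpu_has_cap((uintptr_t)wa->id);
		}
		return false;
	}

	int main(void)
	{
		struct workaround dt  = { ate_match_dt, "fsl,erratum-a008585" };
		struct workaround cap = { ate_match_local_cap_id,
					  (void *)(uintptr_t)20 };

		printf("dt match: %d, local capability match: %d\n",
		       wa_matches(&dt), wa_matches(&cap));
		return 0;
	}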
Acked-by: Thomas Gleixner Signed-off-by: Marc Zyngier Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/arch_timer.h | 1 + drivers/clocksource/arm_arch_timer.c | 28 ++++++++++++++++++++++++---- 2 files changed, 25 insertions(+), 4 deletions(-) --- a/arch/arm64/include/asm/arch_timer.h +++ b/arch/arm64/include/asm/arch_timer.h @@ -39,6 +39,7 @@ extern struct static_key_false arch_time enum arch_timer_erratum_match_type { ate_match_dt, + ate_match_local_cap_id, }; struct arch_timer_erratum_workaround { --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -162,6 +162,13 @@ bool arch_timer_check_dt_erratum(const s return of_property_read_bool(np, wa->id); } +static +bool arch_timer_check_local_cap_erratum(const struct arch_timer_erratum_workaround *wa, + const void *arg) +{ + return this_cpu_has_cap((uintptr_t)wa->id); +} + static const struct arch_timer_erratum_workaround * arch_timer_iterate_errata(enum arch_timer_erratum_match_type type, ate_match_fn_t match_fn, @@ -192,14 +199,16 @@ static void arch_timer_check_ool_workaro { const struct arch_timer_erratum_workaround *wa; ate_match_fn_t match_fn = NULL; - - if (static_branch_unlikely(&arch_timer_read_ool_enabled)) - return; + bool local = false; switch (type) { case ate_match_dt: match_fn = arch_timer_check_dt_erratum; break; + case ate_match_local_cap_id: + match_fn = arch_timer_check_local_cap_erratum; + local = true; + break; default: WARN_ON(1); return; @@ -209,8 +218,17 @@ static void arch_timer_check_ool_workaro if (!wa) return; + if (needs_unstable_timer_counter_workaround()) { + if (wa != timer_unstable_counter_workaround) + pr_warn("Can't enable workaround for %s (clashes with %s)\n", + wa->desc, + timer_unstable_counter_workaround->desc); + return; + } + arch_timer_enable_workaround(wa); - pr_info("Enabling global workaround for %s\n", wa->desc); + pr_info("Enabling %s workaround for %s\n", + local ?
"local" : "global", wa->desc); } #else @@ -470,6 +488,8 @@ static void __arch_timer_setup(unsigned BUG(); } + arch_timer_check_ool_workaround(ate_match_local_cap_id, NULL); + erratum_workaround_set_sne(clk); } else { clk->features |= CLOCK_EVT_FEAT_DYNIRQ; From patchwork Wed Apr 6 18:26:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558542 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 255D7C433FE for ; Wed, 6 Apr 2022 19:31:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230211AbiDFTcz (ORCPT ); Wed, 6 Apr 2022 15:32:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50516 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231626AbiDFTcC (ORCPT ); Wed, 6 Apr 2022 15:32:02 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 007D81B9FC5; Wed, 6 Apr 2022 11:27:22 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 9FEBAB8253C; Wed, 6 Apr 2022 18:27:21 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id E94D2C385A3; Wed, 6 Apr 2022 18:27:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269640; bh=q+CDZVRrCXAK8ZiYWzkHLt6sYUFD4HaQ4RysPdfvcjk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SpHwCgj0X32OP6ZcTcw/pcdSG8xHLNyH962EaF2iug6j6Tpp3c7vhlPsspaZk/aUe pQlPUkltjKJiOEMUc1V2gTRwBrVzAdgRNyPpz7+idu9P1g95BY2nwcq0LwG1iTcCsV nQ2QC1g/8D7CcNgLok4maWVj3rKx5x6DHQHzHVuk= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Mark Rutland , Marc Zyngier , Catalin Marinas , James Morse Subject: [PATCH 4.9 16/43] arm64: arch_timer: Add workaround for ARM erratum 1188873 Date: Wed, 6 Apr 2022 20:26:25 +0200 Message-Id: <20220406182437.155996244@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier commit 95b861a4a6d94f64d5242605569218160ebacdbe upstream. When running on Cortex-A76, a timer access from an AArch32 EL0 task may end up with a corrupted value or register. The workaround for this is to trap these accesses at EL1/EL2 and execute them there. This only affects versions r0p0, r1p0 and r2p0 of the CPU. Acked-by: Mark Rutland Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/Kconfig | 12 ++++++++++++ arch/arm64/include/asm/cpucaps.h | 3 ++- arch/arm64/include/asm/cputype.h | 2 ++ arch/arm64/kernel/cpu_errata.c | 8 ++++++++ drivers/clocksource/arm_arch_timer.c | 15 +++++++++++++++ 5 files changed, 39 insertions(+), 1 deletion(-) --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -441,6 +441,18 @@ config ARM64_ERRATUM_1024718 If unsure, say Y. 
+config ARM64_ERRATUM_1188873 + bool "Cortex-A76: MRC read following MRRC read of specific Generic Timer in AArch32 might give incorrect result" + default y + help + This option adds a workaround for ARM Cortex-A76 erratum 1188873. + + Affected Cortex-A76 cores (r0p0, r1p0, r2p0) could cause + register corruption when accessing the timer registers from + AArch32 userspace. + + If unsure, say Y. + config CAVIUM_ERRATUM_22375 bool "Cavium erratum 22375, 24313" default y --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -38,7 +38,8 @@ #define ARM64_HARDEN_BRANCH_PREDICTOR 17 #define ARM64_SSBD 18 #define ARM64_MISMATCHED_CACHE_TYPE 19 +#define ARM64_WORKAROUND_1188873 20 -#define ARM64_NCAPS 20 +#define ARM64_NCAPS 21 #endif /* __ASM_CPUCAPS_H */ --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -85,6 +85,7 @@ #define ARM_CPU_PART_CORTEX_A75 0xD0A #define ARM_CPU_PART_CORTEX_A35 0xD04 #define ARM_CPU_PART_CORTEX_A55 0xD05 +#define ARM_CPU_PART_CORTEX_A76 0xD0B #define APM_CPU_PART_POTENZA 0x000 @@ -102,6 +103,7 @@ #define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75) #define MIDR_CORTEX_A35 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A35) #define MIDR_CORTEX_A55 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A55) +#define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) #define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2) --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -533,6 +533,14 @@ const struct arm64_cpu_capabilities arm6 .matches = has_ssbd_mitigation, }, #endif +#ifdef CONFIG_ARM64_ERRATUM_1188873 + { + /* Cortex-A76 r0p0 to r2p0 */ + .desc = "ARM erratum 1188873", + .capability = ARM64_WORKAROUND_1188873, + ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 2, 0), + }, +#endif { } }; --- a/drivers/clocksource/arm_arch_timer.c +++ b/drivers/clocksource/arm_arch_timer.c @@ -130,6 +130,13 @@ static u64 notrace fsl_a008585_read_cntv } #endif +#ifdef CONFIG_ARM64_ERRATUM_1188873 +static u64 notrace arm64_1188873_read_cntvct_el0(void) +{ + return read_sysreg(cntvct_el0); +} +#endif + #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND const struct arch_timer_erratum_workaround *timer_unstable_counter_workaround = NULL; EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround); @@ -148,6 +155,14 @@ static const struct arch_timer_erratum_w .read_cntvct_el0 = fsl_a008585_read_cntvct_el0, }, #endif +#ifdef CONFIG_ARM64_ERRATUM_1188873 + { + .match_type = ate_match_local_cap_id, + .id = (void *)ARM64_WORKAROUND_1188873, + .desc = "ARM erratum 1188873", + .read_cntvct_el0 = arm64_1188873_read_cntvct_el0, + }, +#endif }; typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *, From patchwork Wed Apr 6 18:26:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558540 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 797F0C433F5 for ; Wed, 6 Apr 2022 19:48:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232646AbiDFTu0
(ORCPT ); Wed, 6 Apr 2022 15:50:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51148 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232417AbiDFTtg (ORCPT ); Wed, 6 Apr 2022 15:49:36 -0400 Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1B3D518BCFD; Wed, 6 Apr 2022 11:27:26 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sin.source.kernel.org (Postfix) with ESMTPS id 7D895CE24A6; Wed, 6 Apr 2022 18:27:24 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9A197C385A1; Wed, 6 Apr 2022 18:27:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269643; bh=Litla4dQ6FJ4XIrqe0GBrFnwKM6GLSoVgCqTTyuj7+g=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=u4gTRVEmtREQeHX9NOyZ2RT/psDtnsZKTL/7LTxomCqWgMft/1x4jBB7eQjCPtIgD rKGXcewE4Z71iK1Y7m3uF9rmYwqMUnMeRh9ziChagWJq6+KhqqE1FLP+jFUY3AROC0 Kc8uBue+hR0yBjzZ79jFnkibXxpQrKAgFSjC2ERE= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Marc Zyngier , Arnd Bergmann , Catalin Marinas , James Morse Subject: [PATCH 4.9 17/43] arm64: arch_timer: avoid unused function warning Date: Wed, 6 Apr 2022 20:26:26 +0200 Message-Id: <20220406182437.184811467@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Arnd Bergmann commit 040f340134751d73bd03ee92fabb992946c55b3d upstream. arm64_1188873_read_cntvct_el0() is protected by the correct CONFIG_ARM64_ERRATUM_1188873 #ifdef, but the only reference to it is also inside a CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND section, and causes a warning if that is disabled: drivers/clocksource/arm_arch_timer.c:323:20: error: 'arm64_1188873_read_cntvct_el0' defined but not used [-Werror=unused-function] Since the erratum requires that we always apply the workaround in the timer driver, select that symbol as we do for SoC-specific errata.
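[ A minimal out-of-kernel reproduction of the warning this patch fixes; CONFIG_A and CONFIG_B are invented stand-ins for the two kernel options. With the second option left undefined, the static function is defined but never referenced, which is exactly the case the added select rules out. Build with -Wunused-function to see it. ]

	/* gcc -Wunused-function demo: definition and caller behind different options. */
	#define CONFIG_A 1		/* stands in for CONFIG_ARM64_ERRATUM_1188873 */
	/* #define CONFIG_B 1 */	/* stands in for CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND */

	#ifdef CONFIG_A
	static unsigned long read_cntvct(void)	/* defined here... */
	{
		return 0;
	}
	#endif

	#ifdef CONFIG_B
	unsigned long workaround_read(void)	/* ...but only referenced here */
	{
		return read_cntvct();
	}
	#endif

	int main(void)
	{
		return 0;
	}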
Fixes: 95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873") Acked-by: Marc Zyngier Signed-off-by: Arnd Bergmann Signed-off-by: Catalin Marinas Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/Kconfig | 1 + 1 file changed, 1 insertion(+) --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -444,6 +444,7 @@ config ARM64_ERRATUM_1024718 config ARM64_ERRATUM_1188873 bool "Cortex-A76: MRC read following MRRC read of specific Generic Timer in AArch32 might give incorrect result" default y + select ARM_ARCH_TIMER_OOL_WORKAROUND help This option adds a workaround for ARM Cortex-A76 erratum 1188873. From patchwork Wed Apr 6 18:26:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558541 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 26E59C433EF for ; Wed, 6 Apr 2022 19:43:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232312AbiDFTpi (ORCPT ); Wed, 6 Apr 2022 15:45:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43362 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232395AbiDFTp3 (ORCPT ); Wed, 6 Apr 2022 15:45:29 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B020A1E110D; Wed, 6 Apr 2022 11:27:26 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 4C19361C57; Wed, 6 Apr 2022 18:27:26 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 53C7CC385A3; Wed, 6 Apr 2022 18:27:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269645; bh=1f90r0PPcWyC00HpJvDQcFTIRgfNox54/raYZWqCdlo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=2hj8AgYaJdghULcuPho5HQJyN4Ms0g54iIOWvjNR6FCmUrexSjegxlgv1o8D1bWuh V2pOAEHBnFSECA1mkbhfnnivOvON2P3vYk/EOogUAvprbWlPWe+0MA53xg8UTr9BYL qIPZJxgfXh2rsSyuR0Jyjmlj6LtU9WvNi4r95pRw= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Marc Zyngier , Catalin Marinas , James Morse Subject: [PATCH 4.9 18/43] arm64: Add silicon-errata.txt entry for ARM erratum 1188873 Date: Wed, 6 Apr 2022 20:26:27 +0200 Message-Id: <20220406182437.212978702@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Marc Zyngier commit e03a4e5bb7430f9294c12f02c69eb045d010e942 upstream. Document that we actually work around ARM erratum 1188873. Fixes: 95b861a4a6d9 ("arm64: arch_timer: Add workaround for ARM erratum 1188873") Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- Documentation/arm64/silicon-errata.txt | 1 + 1 file changed, 1 insertion(+) --- a/Documentation/arm64/silicon-errata.txt +++ b/Documentation/arm64/silicon-errata.txt @@ -55,6 +55,7 @@ stable kernels.
| ARM | Cortex-A57 | #834220 | ARM64_ERRATUM_834220 | | ARM | Cortex-A72 | #853709 | N/A | | ARM | Cortex-A55 | #1024718 | ARM64_ERRATUM_1024718 | +| ARM | Cortex-A76 | #1188873 | ARM64_ERRATUM_1188873 | | ARM | MMU-500 | #841119,#826419 | N/A | | | | | | | Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 | From patchwork Wed Apr 6 18:26:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558547 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6D415C433F5 for ; Wed, 6 Apr 2022 19:18:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229569AbiDFTUo (ORCPT ); Wed, 6 Apr 2022 15:20:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41188 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229813AbiDFTTv (ORCPT ); Wed, 6 Apr 2022 15:19:51 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EC21BCB014; Wed, 6 Apr 2022 11:27:40 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 88C8661CAF; Wed, 6 Apr 2022 18:27:40 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8E1B6C385A3; Wed, 6 Apr 2022 18:27:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269660; bh=5QGqmIOiPwdFlNzxrmBfw5JiQh6nPBsLiPK0p4wy6D0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=vlSTqo5Td8NwH4EcMYvGkfXJA1N5gySRs7bZ4S8L5D72rr0Z0xzkAvV9DlNhdvqTq bp/wSagta3jnFnX8YOM7qWx1IQKvpHf6r79WnbBkVT3YE7X4Ilu55cJIuQT6lFLIaY 4L0WCQa2RfFhSIJyvNoScnfy2u5zA93BQQWOq5Ag= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Catalin Marinas , Mark Rutland , Will Deacon , Anshuman Khandual , Suzuki K Poulose , James Morse Subject: [PATCH 4.9 22/43] arm64: Add Neoverse-N2, Cortex-A710 CPU part definition Date: Wed, 6 Apr 2022 20:26:31 +0200 Message-Id: <20220406182437.326405168@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: Suzuki K Poulose commit 2d0d656700d67239a57afaf617439143d8dac9be upstream. Add the CPU part numbers for the new Arm designs.
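[ A small C sketch of what these part-number constants encode, simplified: MIDR_EL1 keeps the implementer in bits [31:24] and the part number in bits [15:4]. The kernel's MIDR_CPU_MODEL() also fills the architecture field, which is omitted here. ]

	#include <stdint.h>
	#include <stdio.h>

	#define ARM_CPU_IMP_ARM			0x41	/* ASCII 'A' */
	#define ARM_CPU_PART_CORTEX_A710	0xD47
	#define ARM_CPU_PART_NEOVERSE_N2	0xD49

	/* Simplified model of MIDR_CPU_MODEL(): implementer and part number only. */
	static uint32_t midr_model(uint32_t imp, uint32_t part)
	{
		return (imp << 24) | (part << 4);
	}

	int main(void)
	{
		uint32_t midr = midr_model(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2);

		printf("implementer=%#x part=%#x\n",
		       (unsigned int)(midr >> 24),
		       (unsigned int)((midr >> 4) & 0xfff));
		return 0;
	}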
Cc: Catalin Marinas Cc: Mark Rutland Cc: Will Deacon Acked-by: Catalin Marinas Reviewed-by: Anshuman Khandual Signed-off-by: Suzuki K Poulose Link: https://lore.kernel.org/r/20211019163153.3692640-2-suzuki.poulose@arm.com Signed-off-by: Will Deacon Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/cputype.h | 4 ++++ 1 file changed, 4 insertions(+) --- a/arch/arm64/include/asm/cputype.h +++ b/arch/arm64/include/asm/cputype.h @@ -88,6 +88,8 @@ #define ARM_CPU_PART_CORTEX_A76 0xD0B #define ARM_CPU_PART_NEOVERSE_N1 0xD0C #define ARM_CPU_PART_CORTEX_A77 0xD0D +#define ARM_CPU_PART_CORTEX_A710 0xD47 +#define ARM_CPU_PART_NEOVERSE_N2 0xD49 #define APM_CPU_PART_POTENZA 0x000 @@ -108,6 +110,8 @@ #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1) #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77) +#define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) +#define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) #define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2) From patchwork Wed Apr 6 18:26:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558535 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9DF6EC433EF for ; Wed, 6 Apr 2022 20:02:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233463AbiDFUEQ (ORCPT ); Wed, 6 Apr 2022 16:04:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60274 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235593AbiDFUDx (ORCPT ); Wed, 6 Apr 2022 16:03:53 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EB6F21FF205; Wed, 6 Apr 2022 11:27:51 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 84FE761B89; Wed, 6 Apr 2022 18:27:51 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 935F9C385A5; Wed, 6 Apr 2022 18:27:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269670; bh=TEO1ws/joQIZlBlf0QTXMmwaDURYuXSVRVRTRhqI9DI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=znpkq99dOhVAghgYw7ANzbsqI0s7urmGc5oUtuPlinSgyI1lOmOzeYMFgyVzz8vAT j386/XoQOF0n07g7YoDsHAVVaadXguQd03WQfB1ML7ECaM0Jh5BG2swc6S/eGbydWp /7VhCSfpLAvlmhz8o6QEsE7YfdnD3UFeSxuxrN1k= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Russell King (Oracle)" , Catalin Marinas , James Morse Subject: [PATCH 4.9 26/43] arm64: entry: Make the trampoline cleanup optional Date: Wed, 6 Apr 2022 20:26:35 +0200 Message-Id: <20220406182437.441263260@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 
In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit d739da1694a0eaef0358a42b76904b611539b77b upstream. Subsequent patches will add additional sets of vectors that use the same tricks as the kpti vectors to reach the full-fat vectors. The full-fat vectors contain some cleanup for kpti that is patched in by alternatives when kpti is in use. Once there are additional vectors, the cleanup will be needed in more cases. But on big/little systems, the cleanup would be harmful if no trampoline vector were in use. Instead of forcing CPUs that don't need a trampoline vector to use one, make the trampoline cleanup optional. Entry at the top of the vectors will skip the cleanup. The trampoline vectors can then skip the first instruction, triggering the cleanup to run. Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/entry.S | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -76,16 +76,20 @@ .align 7 .Lventry_start\@: #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 -alternative_if ARM64_UNMAP_KERNEL_AT_EL0 .if \el == 0 + /* + * This must be the first instruction of the EL0 vector entries. It is + * skipped by the trampoline vectors, to trigger the cleanup. + */ + b .Lskip_tramp_vectors_cleanup\@ .if \regsize == 64 mrs x30, tpidrro_el0 msr tpidrro_el0, xzr .else mov x30, xzr .endif +.Lskip_tramp_vectors_cleanup\@: .endif -alternative_else_nop_endif #endif sub sp, sp, #S_FRAME_SIZE @@ -934,7 +938,7 @@ __ni_sys_trace: #endif prfm plil1strm, [x30, #(1b - tramp_vectors)] msr vbar_el1, x30 - add x30, x30, #(1b - tramp_vectors) + add x30, x30, #(1b - tramp_vectors + 4) isb ret .org 1b + 128 // Did we overflow the ventry slot? 
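[ A toy C model of the control flow the patch above sets up; all names here are invented. Direct entry at a vector slot hits a branch that jumps over the kpti cleanup, while the trampoline now enters at slot + 4, one instruction in, so the cleanup runs only on the trampoline path. ]

	#include <stdio.h>

	static void kpti_cleanup(void)
	{
		puts("restore x30, clear tpidrro_el0");	/* models the cleanup insns */
	}

	static void common_entry(void)
	{
		puts("handle exception");
	}

	/* Entry at slot + 0: the leading branch skips the cleanup. */
	static void ventry_direct(void)
	{
		common_entry();
	}

	/* Entry at slot + 4, as the trampoline now does: cleanup runs first. */
	static void ventry_from_trampoline(void)
	{
		kpti_cleanup();
		common_entry();
	}

	int main(void)
	{
		ventry_direct();
		ventry_from_trampoline();
		return 0;
	}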
From patchwork Wed Apr 6 18:26:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558528 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6913FC433EF for ; Wed, 6 Apr 2022 20:18:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229559AbiDFUUZ (ORCPT ); Wed, 6 Apr 2022 16:20:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230188AbiDFUUA (ORCPT ); Wed, 6 Apr 2022 16:20:00 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1F1D1274E41; Wed, 6 Apr 2022 11:27:56 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id BC2E3B8253D; Wed, 6 Apr 2022 18:27:54 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 39390C385A1; Wed, 6 Apr 2022 18:27:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269673; bh=Xq7EH7iSEgU+tAkN5ctw2qcn2E6EM6pyEbMVP9TNox0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=iHf/ZqLhL8h4UKUTC/Q/QNWzDRa6/xxe8rD5k9QoaD/uHj+9GnCyQZiUeNjXC0amU 0rnZa41qd3QE1cGrGeshgh+EmVXM/aaxEfWfR/TmnEsaPPgganKIKUmre+esAs9ZuH O1LXsCB9pmX2Sxf5H1SYl+UYszNkWv1yvpgMZZ2I= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Russell King (Oracle)" , Catalin Marinas , James Morse Subject: [PATCH 4.9 27/43] arm64: entry: Free up another register on kpti's tramp_exit path Date: Wed, 6 Apr 2022 20:26:36 +0200 Message-Id: <20220406182437.470448070@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit 03aff3a77a58b5b52a77e00537a42090ad57b80b upstream. Kpti stashes x30 in far_el1 while it uses x30 for all its work. Making the vectors a per-cpu data structure will require a second register. Allow tramp_exit two registers before it unmaps the kernel, by leaving x30 on the stack, and stashing x29 in far_el1.
Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/entry.S | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -244,14 +244,16 @@ alternative_else_nop_endif ldp x24, x25, [sp, #16 * 12] ldp x26, x27, [sp, #16 * 13] ldp x28, x29, [sp, #16 * 14] - ldr lr, [sp, #S_LR] - add sp, sp, #S_FRAME_SIZE // restore sp .if \el == 0 -alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 +alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0 + ldr lr, [sp, #S_LR] + add sp, sp, #S_FRAME_SIZE // restore sp + eret +alternative_else_nop_endif #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 bne 4f - msr far_el1, x30 + msr far_el1, x29 tramp_alias x30, tramp_exit_native br x30 4: @@ -259,6 +261,8 @@ alternative_insn eret, nop, ARM64_UNMAP_ br x30 #endif .else + ldr lr, [sp, #S_LR] + add sp, sp, #S_FRAME_SIZE // restore sp eret .endif .endm @@ -947,10 +951,12 @@ __ni_sys_trace: .macro tramp_exit, regsize = 64 adr x30, tramp_vectors msr vbar_el1, x30 - tramp_unmap_kernel x30 + ldr lr, [sp, #S_LR] + tramp_unmap_kernel x29 .if \regsize == 64 - mrs x30, far_el1 + mrs x29, far_el1 .endif + add sp, sp, #S_FRAME_SIZE // restore sp eret .endm From patchwork Wed Apr 6 18:26:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558549 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0815C433EF for ; Wed, 6 Apr 2022 19:09:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229446AbiDFTLp (ORCPT ); Wed, 6 Apr 2022 15:11:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36386 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230176AbiDFTLH (ORCPT ); Wed, 6 Apr 2022 15:11:07 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 35C93282B33; Wed, 6 Apr 2022 11:28:18 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id E0E26B8253A; Wed, 6 Apr 2022 18:28:16 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3951CC385A3; Wed, 6 Apr 2022 18:28:15 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269695; bh=F8uuznHn4+qQNfOl3xbyGgObdn518zMlTYxyggh8fRI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=z56bzeEkRWaXBFV2hL1gwvMSRRu7dNWBVOitZ1Ts8WRaK/Dg8HYiv3Ey5Spy4PVIp WMiKsRFgdsit2OjJqhWQWaxZhSZAKv8e7rtDoRp69PfxBnqouZXAQX6fV8BrhMQoXZ culIepix6twfOTl9zgGjndefy0adx1sg8jjbP+FI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Russell King (Oracle)" , Catalin Marinas , James Morse Subject: [PATCH 4.9 29/43] arm64: entry: Allow tramp_alias to access symbols after the 4K boundary Date: Wed, 6 Apr 2022 20:26:38 +0200 Message-Id: <20220406182437.527213548@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: 
<20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit 6c5bf79b69f911560fbf82214c0971af6e58e682 upstream. Systems using kpti enter and exit the kernel through a trampoline mapping that is always mapped, even when the kernel is not. tramp_valias is a macro to find the address of a symbol in the trampoline mapping. Adding extra sets of vectors will expand the size of the entry.tramp.text section to beyond 4K. tramp_valias will be unable to generate addresses for symbols beyond 4K as it uses the 12 bit immediate of the add instruction. As there are now two registers available when tramp_alias is called, use the extra register to avoid the 4K limit of the 12 bit immediate. Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas [ Removed SDEI for backport ] Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/entry.S | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -97,9 +97,12 @@ .org .Lventry_start\@ + 128 // Did we overflow the ventry slot? .endm - .macro tramp_alias, dst, sym + .macro tramp_alias, dst, sym, tmp mov_q \dst, TRAMP_VALIAS - add \dst, \dst, #(\sym - .entry.tramp.text) + adr_l \tmp, \sym + add \dst, \dst, \tmp + adr_l \tmp, .entry.tramp.text + sub \dst, \dst, \tmp .endm // This macro corrupts x0-x3. It is the caller's duty @@ -254,10 +257,10 @@ alternative_else_nop_endif #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 bne 4f msr far_el1, x29 - tramp_alias x30, tramp_exit_native + tramp_alias x30, tramp_exit_native, x29 br x30 4: - tramp_alias x30, tramp_exit_compat + tramp_alias x30, tramp_exit_compat, x29 br x30 #endif .else From patchwork Wed Apr 6 18:26:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558550 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F121C433EF for ; Wed, 6 Apr 2022 19:02:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229849AbiDFTCZ (ORCPT ); Wed, 6 Apr 2022 15:02:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52400 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229927AbiDFTAX (ORCPT ); Wed, 6 Apr 2022 15:00:23 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E37275DA7F; Wed, 6 Apr 2022 11:28:35 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 800A861B89; Wed, 6 Apr 2022 18:28:35 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 93C6BC385A1; Wed, 6 Apr 2022 18:28:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269714; bh=exiXNA1MO5/MztigFZQV7CS5/yR+3l8J3hwduGMGaT8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pkCiE2+m69oDqQ+BawcGbOut+gBvIKVkqgVKlgZkgM8WtLOdkPF7r+02dQYWWSKsH r30Tm/4hn/YLOfV+o0Od8qR38y2URRL3Ga1g4w1LsbeXAxx5IZlHSX3/u7cCPtRrEC Qzu4am0S6Qii/iaX+jpUqGWgCeT9k6IWFrNJ6CwE= 
From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Russell King (Oracle)" , Catalin Marinas , James Morse Subject: [PATCH 4.9 30/43] arm64: entry: Don't assume tramp_vectors is the start of the vectors Date: Wed, 6 Apr 2022 20:26:39 +0200 Message-Id: <20220406182437.556792297@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit ed50da7764535f1e24432ded289974f2bf2b0c5a upstream. The tramp_ventry macro uses tramp_vectors as the address of the vectors when calculating which ventry in the 'full fat' vectors to branch to. While there is one set of tramp_vectors, this will be true. Adding multiple sets of vectors will break this assumption. Move the generation of the vectors to a macro, and pass the start of the vectors as an argument to tramp_ventry. Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/entry.S | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -926,7 +926,7 @@ __ni_sys_trace: sub \dst, \dst, PAGE_SIZE .endm - .macro tramp_ventry, regsize = 64 + .macro tramp_ventry, vector_start, regsize .align 7 1: .if \regsize == 64 @@ -948,9 +948,9 @@ __ni_sys_trace: #else ldr x30, =vectors #endif - prfm plil1strm, [x30, #(1b - tramp_vectors)] + prfm plil1strm, [x30, #(1b - \vector_start)] msr vbar_el1, x30 - add x30, x30, #(1b - tramp_vectors + 4) + add x30, x30, #(1b - \vector_start + 4) isb ret .org 1b + 128 // Did we overflow the ventry slot?
@@ -968,19 +968,21 @@ __ni_sys_trace: eret .endm - .align 11 -ENTRY(tramp_vectors) + .macro generate_tramp_vector +.Lvector_start\@: .space 0x400 - tramp_ventry - tramp_ventry - tramp_ventry - tramp_ventry - - tramp_ventry 32 - tramp_ventry 32 - tramp_ventry 32 - tramp_ventry 32 + .rept 4 + tramp_ventry .Lvector_start\@, 64 + .endr + .rept 4 + tramp_ventry .Lvector_start\@, 32 + .endr + .endm + + .align 11 +ENTRY(tramp_vectors) + generate_tramp_vector END(tramp_vectors) ENTRY(tramp_exit_native) From patchwork Wed Apr 6 18:26:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558544 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41940C433FE for ; Wed, 6 Apr 2022 19:27:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230491AbiDFT3B (ORCPT ); Wed, 6 Apr 2022 15:29:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48460 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232265AbiDFT2S (ORCPT ); Wed, 6 Apr 2022 15:28:18 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 014AD1A7775; Wed, 6 Apr 2022 11:28:47 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 926C661C57; Wed, 6 Apr 2022 18:28:46 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 97E54C385A3; Wed, 6 Apr 2022 18:28:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269726; bh=VZkau1X7QBVtVcpCZQUhBpuYje66qoH0Q8TgtCm2pQ0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=NbWMjUBwITXrv6jSKohuDC1tXi4GsWLD4iEn1ekgiLuBJ78T2DrYKuBtT65RVnKQi MPaPwSamJvxEigsWuBehlDs0u69FqXC4RSIFSs1MCMp+jFQoavrUdRF0Qv74XKXV9s OOE4v3Troio+cOoXaz0PDZ9fVwSV2tBts8JlUxmg= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Russell King (Oracle)" , Catalin Marinas , James Morse Subject: [PATCH 4.9 34/43] arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations Date: Wed, 6 Apr 2022 20:26:43 +0200 Message-Id: <20220406182437.671737147@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit aff65393fa1401e034656e349abd655cfe272de0 upstream. kpti is an optional feature; for systems not using kpti, a set of vectors for the spectre-bhb mitigations is needed. Add another set of vectors, __bp_harden_el1_vectors, that will be used if a mitigation is needed and kpti is not in use. The EL1 ventries are repeated verbatim as there is no additional work needed for entry from EL1.
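[ A rough C model, with invented names, of where the new vectors fit: kpti systems keep entering through the trampoline vectors, which already carry the mitigation work, and only CPUs that need a Spectre-BHB mitigation without kpti switch to the new __bp_harden_el1_vectors. ]

	#include <stdbool.h>
	#include <stdio.h>

	enum vectors { VECTORS_PLAIN, VECTORS_TRAMP, VECTORS_BP_HARDEN_EL1 };

	static enum vectors pick_vectors(bool needs_bhb_mitigation, bool uses_kpti)
	{
		if (uses_kpti)
			return VECTORS_TRAMP;		/* trampoline handles it */
		if (needs_bhb_mitigation)
			return VECTORS_BP_HARDEN_EL1;	/* the set added here */
		return VECTORS_PLAIN;
	}

	int main(void)
	{
		printf("no mitigation: %d, kpti: %d, mitigation without kpti: %d\n",
		       pick_vectors(false, false),
		       pick_vectors(true, true),
		       pick_vectors(true, false));
		return 0;
	}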
Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/entry.S | 34 +++++++++++++++++++++++++++++++++- 1 file changed, 33 insertions(+), 1 deletion(-) --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -924,10 +924,11 @@ __ni_sys_trace: .macro tramp_ventry, vector_start, regsize, kpti .align 7 1: - .if \kpti == 1 .if \regsize == 64 msr tpidrro_el0, x30 // Restored in kernel_ventry .endif + + .if \kpti == 1 /* * Defend against branch aliasing attacks by pushing a dummy * entry onto the return stack and using a RET instruction to @@ -1011,6 +1012,37 @@ __entry_tramp_data_start: #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ /* + * Exception vectors for spectre mitigations on entry from EL1 when + * kpti is not in use. + */ + .macro generate_el1_vector +.Lvector_start\@: + kernel_ventry 1, sync_invalid // Synchronous EL1t + kernel_ventry 1, irq_invalid // IRQ EL1t + kernel_ventry 1, fiq_invalid // FIQ EL1t + kernel_ventry 1, error_invalid // Error EL1t + + kernel_ventry 1, sync // Synchronous EL1h + kernel_ventry 1, irq // IRQ EL1h + kernel_ventry 1, fiq_invalid // FIQ EL1h + kernel_ventry 1, error_invalid // Error EL1h + + .rept 4 + tramp_ventry .Lvector_start\@, 64, kpti=0 + .endr + .rept 4 + tramp_ventry .Lvector_start\@, 32, kpti=0 + .endr + .endm + + .pushsection ".entry.text", "ax" + .align 11 +ENTRY(__bp_harden_el1_vectors) + generate_el1_vector +END(__bp_harden_el1_vectors) + .popsection + +/* * Special system call wrappers. */ ENTRY(sys_rt_sigreturn_wrapper) From patchwork Wed Apr 6 18:26:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558533 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F070C433F5 for ; Wed, 6 Apr 2022 20:07:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233908AbiDFUJ0 (ORCPT ); Wed, 6 Apr 2022 16:09:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44378 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233841AbiDFUIm (ORCPT ); Wed, 6 Apr 2022 16:08:42 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 80B3F29C97E; Wed, 6 Apr 2022 11:28:55 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 1D56661BCB; Wed, 6 Apr 2022 18:28:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 297C8C385A3; Wed, 6 Apr 2022 18:28:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269734; bh=zTA/gwxscYLWt3/KQ7o9xr51+1aDngsucM7Pk5mFjtY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=sQAOaSZlA0NiexhkvHJZajLSLZSblqsyYFqzE+rw1bCKJsIRv64GO9DLJTo5UZi2z z6bMD9zM/ThQAPXzK606sDQVDC2dIYiQgrlXsJIHmqL24bBqrN1EcAcXmVlf/H958S x0JYQtXvCSrPw3s105g9ov+qhxosJM+ZLZckknhI= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Catalin Marinas , James Morse Subject: [PATCH 4.9 37/43] arm64: entry: Add macro for 
reading symbol addresses from the trampoline Date: Wed, 6 Apr 2022 20:26:46 +0200 Message-Id: <20220406182437.757706906@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit b28a8eebe81c186fdb1a0078263b30576c8e1f42 upstream. The trampoline code needs to use the address of symbols in the wider kernel, e.g. vectors. PC-relative addressing wouldn't work as the trampoline code doesn't run at the address the linker expected. tramp_ventry uses a literal pool, unless CONFIG_RANDOMIZE_BASE is set, in which case it uses the data page as a literal pool because the data page can be unmapped when running in user-space, which is required for CPUs vulnerable to meltdown. Pull this logic out as a macro, instead of adding a third copy of it. Reviewed-by: Catalin Marinas [ Removed SDEI for stable backport ] Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/entry.S | 22 +++++++++++++++------- 1 file changed, 15 insertions(+), 7 deletions(-) --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -921,6 +921,15 @@ __ni_sys_trace: sub \dst, \dst, PAGE_SIZE .endm + .macro tramp_data_read_var dst, var +#ifdef CONFIG_RANDOMIZE_BASE + tramp_data_page \dst + add \dst, \dst, #:lo12:__entry_tramp_data_\var + ldr \dst, [\dst] +#else + ldr \dst, =\var +#endif + .endm #define BHB_MITIGATION_NONE 0 #define BHB_MITIGATION_LOOP 1 @@ -951,13 +960,7 @@ __ni_sys_trace: b . 2: tramp_map_kernel x30 -#ifdef CONFIG_RANDOMIZE_BASE - tramp_data_page x30 - isb - ldr x30, [x30] -#else - ldr x30, =vectors -#endif + tramp_data_read_var x30, vectors prfm plil1strm, [x30, #(1b - \vector_start)] msr vbar_el1, x30 isb @@ -1037,7 +1040,12 @@ END(tramp_exit_compat) .align PAGE_SHIFT .globl __entry_tramp_data_start __entry_tramp_data_start: +__entry_tramp_data_vectors: .quad vectors +#ifdef CONFIG_ARM_SDE_INTERFACE +__entry_tramp_data___sdei_asm_trampoline_next_handler: + .quad __sdei_asm_handler +#endif /* CONFIG_ARM_SDE_INTERFACE */ .popsection // .rodata #endif /* CONFIG_RANDOMIZE_BASE */ #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ From patchwork Wed Apr 6 18:26:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558534 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59997C433F5 for ; Wed, 6 Apr 2022 20:04:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233564AbiDFUFy (ORCPT ); Wed, 6 Apr 2022 16:05:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60066 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233572AbiDFUEZ (ORCPT ); Wed, 6 Apr 2022 16:04:25 -0400 Received: from dfw.source.kernel.org (dfw.source.kernel.org [IPv6:2604:1380:4641:c500::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 12BDF286F53; Wed, 6 Apr 2022 11:28:22 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id 
A38AD61B89; Wed, 6 Apr 2022 18:28:21 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id B7FB5C385A3; Wed, 6 Apr 2022 18:28:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269701; bh=yaUyPPEw7sVJJEQquPNAOiyO1/DjZWJahlZiljRNwOQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=1IbVv9RvMqbT0P1Pt1qTGj3xTw52sIbRrua/q3mgyRrukNczHa/OQXgYiuHOynGL9 knGakJxqwINXJG47ViPz3mNXHYa8lPTuh+ke9Pyib/CdjxkutaGg6ZMVj/VKjn2MbO KgSjQUIJJONfTDgDkNVkd3s7Jmmd1avdp5x+rT9w= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, James Morse Subject: [PATCH 4.9 39/43] KVM: arm64: Add templates for BHB mitigation sequences Date: Wed, 6 Apr 2022 20:26:48 +0200 Message-Id: <20220406182437.814721153@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse KVM writes the Spectre-v2 mitigation template at the beginning of each vector when a CPU requires a specific sequence to run. Because the template is copied, it cannot be modified by the alternatives at runtime. As the KVM template code is intertwined with the bp-hardening callbacks, all templates must have a bp-hardening callback. Add templates for calling ARCH_WORKAROUND_3 and one for each value of K in the branchy loop. Identify these sequences by a new parameter template_start, and add a copy of install_bp_hardening_cb() that is able to install them. Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/cpucaps.h | 3 + arch/arm64/include/asm/kvm_mmu.h | 2 - arch/arm64/include/asm/mmu.h | 6 +++ arch/arm64/kernel/bpi.S | 50 +++++++++++++++++++++++++++ arch/arm64/kernel/cpu_errata.c | 71 +++++++++++++++++++++++++++++++++++++-- 5 files changed, 128 insertions(+), 4 deletions(-) --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -39,7 +39,8 @@ #define ARM64_SSBD 18 #define ARM64_MISMATCHED_CACHE_TYPE 19 #define ARM64_WORKAROUND_1188873 20 +#define ARM64_SPECTRE_BHB 21 -#define ARM64_NCAPS 21 +#define ARM64_NCAPS 22 #endif /* __ASM_CPUCAPS_H */ --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -362,7 +362,7 @@ static inline void *kvm_get_hyp_vector(v struct bp_hardening_data *data = arm64_get_bp_hardening_data(); void *vect = kvm_ksym_ref(__kvm_hyp_vector); - if (data->fn) { + if (data->template_start) { vect = __bp_harden_hyp_vecs_start + data->hyp_vectors_slot * SZ_2K; --- a/arch/arm64/include/asm/mmu.h +++ b/arch/arm64/include/asm/mmu.h @@ -45,6 +45,12 @@ typedef void (*bp_hardening_cb_t)(void); struct bp_hardening_data { int hyp_vectors_slot; bp_hardening_cb_t fn; + + /* + * template_start is only used by the BHB mitigation to identify the + * hyp_vectors_slot sequence. 
+ */ + const char *template_start; }; #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR --- a/arch/arm64/kernel/bpi.S +++ b/arch/arm64/kernel/bpi.S @@ -73,3 +73,53 @@ ENTRY(__smccc_workaround_1_smc_end) ENTRY(__smccc_workaround_1_hvc_start) smccc_workaround_1 hvc ENTRY(__smccc_workaround_1_hvc_end) + +ENTRY(__smccc_workaround_3_smc_start) + sub sp, sp, #(8 * 4) + stp x2, x3, [sp, #(8 * 0)] + stp x0, x1, [sp, #(8 * 2)] + mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3 + smc #0 + ldp x2, x3, [sp, #(8 * 0)] + ldp x0, x1, [sp, #(8 * 2)] + add sp, sp, #(8 * 4) +ENTRY(__smccc_workaround_3_smc_end) + +ENTRY(__spectre_bhb_loop_k8_start) + sub sp, sp, #(8 * 2) + stp x0, x1, [sp, #(8 * 0)] + mov x0, #8 +2: b . + 4 + subs x0, x0, #1 + b.ne 2b + dsb nsh + isb + ldp x0, x1, [sp, #(8 * 0)] + add sp, sp, #(8 * 2) +ENTRY(__spectre_bhb_loop_k8_end) + +ENTRY(__spectre_bhb_loop_k24_start) + sub sp, sp, #(8 * 2) + stp x0, x1, [sp, #(8 * 0)] + mov x0, #24 +2: b . + 4 + subs x0, x0, #1 + b.ne 2b + dsb nsh + isb + ldp x0, x1, [sp, #(8 * 0)] + add sp, sp, #(8 * 2) +ENTRY(__spectre_bhb_loop_k24_end) + +ENTRY(__spectre_bhb_loop_k32_start) + sub sp, sp, #(8 * 2) + stp x0, x1, [sp, #(8 * 0)] + mov x0, #32 +2: b . + 4 + subs x0, x0, #1 + b.ne 2b + dsb nsh + isb + ldp x0, x1, [sp, #(8 * 0)] + add sp, sp, #(8 * 2) +ENTRY(__spectre_bhb_loop_k32_end) --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -74,6 +74,14 @@ extern char __smccc_workaround_1_smc_sta extern char __smccc_workaround_1_smc_end[]; extern char __smccc_workaround_1_hvc_start[]; extern char __smccc_workaround_1_hvc_end[]; +extern char __smccc_workaround_3_smc_start[]; +extern char __smccc_workaround_3_smc_end[]; +extern char __spectre_bhb_loop_k8_start[]; +extern char __spectre_bhb_loop_k8_end[]; +extern char __spectre_bhb_loop_k24_start[]; +extern char __spectre_bhb_loop_k24_end[]; +extern char __spectre_bhb_loop_k32_start[]; +extern char __spectre_bhb_loop_k32_end[]; static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, const char *hyp_vecs_end) @@ -87,12 +95,14 @@ static void __copy_hyp_vect_bpi(int slot flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K); } +static DEFINE_SPINLOCK(bp_lock); +static int last_slot = -1; + static void __install_bp_hardening_cb(bp_hardening_cb_t fn, const char *hyp_vecs_start, const char *hyp_vecs_end) { - static int last_slot = -1; - static DEFINE_SPINLOCK(bp_lock); + int cpu, slot = -1; spin_lock(&bp_lock); @@ -113,6 +123,7 @@ static void __install_bp_hardening_cb(bp __this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot); __this_cpu_write(bp_hardening_data.fn, fn); + __this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start); spin_unlock(&bp_lock); } #else @@ -544,3 +555,59 @@ const struct arm64_cpu_capabilities arm6 { } }; + +#ifdef CONFIG_KVM +static const char *kvm_bhb_get_vecs_end(const char *start) +{ + if (start == __smccc_workaround_3_smc_start) + return __smccc_workaround_3_smc_end; + else if (start == __spectre_bhb_loop_k8_start) + return __spectre_bhb_loop_k8_end; + else if (start == __spectre_bhb_loop_k24_start) + return __spectre_bhb_loop_k24_end; + else if (start == __spectre_bhb_loop_k32_start) + return __spectre_bhb_loop_k32_end; + + return NULL; +} + +void kvm_setup_bhb_slot(const char *hyp_vecs_start) +{ + int cpu, slot = -1; + const char *hyp_vecs_end; + + if (!IS_ENABLED(CONFIG_KVM) || !is_hyp_mode_available()) + return; + + hyp_vecs_end = kvm_bhb_get_vecs_end(hyp_vecs_start); + if (WARN_ON_ONCE(!hyp_vecs_start || !hyp_vecs_end)) + return; + + spin_lock(&bp_lock); 
+ for_each_possible_cpu(cpu) { + if (per_cpu(bp_hardening_data.template_start, cpu) == hyp_vecs_start) { + slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu); + break; + } + } + + if (slot == -1) { + last_slot++; + BUG_ON(((__bp_harden_hyp_vecs_end - __bp_harden_hyp_vecs_start) + / SZ_2K) <= last_slot); + slot = last_slot; + __copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end); + } + + __this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot); + __this_cpu_write(bp_hardening_data.template_start, hyp_vecs_start); + spin_unlock(&bp_lock); +} +#else +#define __smccc_workaround_3_smc_start NULL +#define __spectre_bhb_loop_k8_start NULL +#define __spectre_bhb_loop_k24_start NULL +#define __spectre_bhb_loop_k32_start NULL + +void kvm_setup_bhb_slot(const char *hyp_vecs_start) { }; +#endif From patchwork Wed Apr 6 18:26:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558529 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 04D4CC433F5 for ; Wed, 6 Apr 2022 20:18:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233681AbiDFUUM (ORCPT ); Wed, 6 Apr 2022 16:20:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48190 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234104AbiDFUTc (ORCPT ); Wed, 6 Apr 2022 16:19:32 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3E2E728D2A0; Wed, 6 Apr 2022 11:28:29 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 01840B824E1; Wed, 6 Apr 2022 18:28:28 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6B25FC385A3; Wed, 6 Apr 2022 18:28:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269706; bh=NgqVS5WrlftclBEuSycZ8OoPFaK+Gbw/CFEjdjdLH8w=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=vzSRhOgddLkSmjECL5R9J+p0qSz5XQWO4+szW0tDNy5yLkli3boBGbfl6OYRQ7rML yNY/nOs8IyBSuOrDXN2cNBKkXZOmFXAwAPPhf6rVXTFa+JQGm0P8hozXo3XeUlrsSx LKjp2QsvaFf1RCxWeyF/CdSzCFvRlnEPK7duwhck= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Russell King (Oracle)" , Catalin Marinas , James Morse Subject: [PATCH 4.9 41/43] KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated Date: Wed, 6 Apr 2022 20:26:50 +0200 Message-Id: <20220406182437.872916215@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit a5905d6af492ee6a4a2205f0d550b3f931b03d03 upstream. KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC calls are implemented, and to preserve that state during migration through its firmware register interface. Add the necessary boilerplate for SMCCC_ARCH_WORKAROUND_3. 
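
For context, a guest discovers this the same way as the existing workarounds: it calls SMCCC_ARCH_FEATURES with the function ID it is asking about. Below is a minimal sketch of such a probe, assuming the SMC conduit and the ARM_SMCCC_* constants used elsewhere in this series; a real guest must first confirm SMCCC v1.1 via PSCI_FEATURES and use HVC or SMC to match its PSCI conduit:

	#include <linux/arm-smccc.h>

	/* Sketch: ask the hypervisor/firmware if ARCH_WORKAROUND_3 is implemented. */
	static bool have_arch_workaround_3(void)
	{
		struct arm_smccc_res res;

		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
				  ARM_SMCCC_ARCH_WORKAROUND_3, &res);

		/* A negative return in a0 means the workaround is not supported. */
		return (int)res.a0 >= 0;
	}
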
Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas [ kvm code moved to arch/arm/kvm, removed fw regs ABI. Added 32bit stub ] Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm/include/asm/kvm_host.h | 5 +++++ arch/arm/kvm/psci.c | 4 ++++ arch/arm64/include/asm/kvm_host.h | 4 ++++ 3 files changed, 13 insertions(+) --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -349,4 +349,9 @@ static inline int kvm_arm_have_ssbd(void return KVM_SSBD_UNKNOWN; } +static inline bool kvm_arm_spectre_bhb_mitigated(void) +{ + /* 32bit guests don't need firmware for this */ + return false; +} #endif /* __ARM_KVM_HOST_H__ */ --- a/arch/arm/kvm/psci.c +++ b/arch/arm/kvm/psci.c @@ -431,6 +431,10 @@ int kvm_hvc_call_handler(struct kvm_vcpu break; } break; + case ARM_SMCCC_ARCH_WORKAROUND_3: + if (kvm_arm_spectre_bhb_mitigated()) + val = SMCCC_RET_SUCCESS; + break; } break; default: --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -452,4 +452,8 @@ static inline int kvm_arm_have_ssbd(void } } +static inline bool kvm_arm_spectre_bhb_mitigated(void) +{ + return arm64_get_spectre_bhb_state() == SPECTRE_MITIGATED; +} #endif /* __ARM64_KVM_HOST_H__ */ From patchwork Wed Apr 6 18:26:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558545 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 29673C433FE for ; Wed, 6 Apr 2022 19:27:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229611AbiDFT26 (ORCPT ); Wed, 6 Apr 2022 15:28:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50136 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229540AbiDFT1G (ORCPT ); Wed, 6 Apr 2022 15:27:06 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 318AC293E53; Wed, 6 Apr 2022 11:28:32 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id E3456B8252B; Wed, 6 Apr 2022 18:28:30 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 34B9CC385A1; Wed, 6 Apr 2022 18:28:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269709; bh=7/JpIuHZ73gicMMcNnzxRtATCQ3Zste224UU9OXJZcQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LXgLULf9BAmIfT9uflgo6NX1++gPaF17xxmpwWPEsK8Fx33q4n66B3dMQ7GjcryXg SDBYcVvQBpgFeI3QScPaDtFsJYxtkr0YTn0QucykdghjcfCxRNRhrMScwpfRkEwaUn 4e61jwSEtkH0gtSODzAf2lQtAGmcIqvOwqvEFwTU= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Joey Gouly , Will Deacon , Marc Zyngier , James Morse , Alexandru Elisei , Suzuki K Poulose , Reiji Watanabe , Catalin Marinas Subject: [PATCH 4.9 42/43] arm64: add ID_AA64ISAR2_EL1 sys register Date: Wed, 6 Apr 2022 20:26:51 +0200 Message-Id: <20220406182437.901166311@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: 
<20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit 9e45365f1469ef2b934f9d035975dbc9ad352116 upstream. This is a new ID register, introduced in 8.7. Signed-off-by: Joey Gouly Cc: Will Deacon Cc: Marc Zyngier Cc: James Morse Cc: Alexandru Elisei Cc: Suzuki K Poulose Cc: Reiji Watanabe Acked-by: Marc Zyngier Link: https://lore.kernel.org/r/20211210165432.8106-3-joey.gouly@arm.com Signed-off-by: Catalin Marinas Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/cpu.h | 1 + arch/arm64/include/asm/sysreg.h | 1 + arch/arm64/kernel/cpufeature.c | 9 +++++++++ arch/arm64/kernel/cpuinfo.c | 1 + 4 files changed, 12 insertions(+) --- a/arch/arm64/include/asm/cpu.h +++ b/arch/arm64/include/asm/cpu.h @@ -36,6 +36,7 @@ struct cpuinfo_arm64 { u64 reg_id_aa64dfr1; u64 reg_id_aa64isar0; u64 reg_id_aa64isar1; + u64 reg_id_aa64isar2; u64 reg_id_aa64mmfr0; u64 reg_id_aa64mmfr1; u64 reg_id_aa64mmfr2; --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -70,6 +70,7 @@ #define SYS_ID_AA64ISAR0_EL1 sys_reg(3, 0, 0, 6, 0) #define SYS_ID_AA64ISAR1_EL1 sys_reg(3, 0, 0, 6, 1) +#define SYS_ID_AA64ISAR2_EL1 sys_reg(3, 0, 0, 6, 2) #define SYS_ID_AA64MMFR0_EL1 sys_reg(3, 0, 0, 7, 0) #define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1) --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -98,6 +98,10 @@ static const struct arm64_ftr_bits ftr_i ARM64_FTR_END, }; +static const struct arm64_ftr_bits ftr_id_aa64isar2[] = { + ARM64_FTR_END, +}; + static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { ARM64_FTR_BITS(FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0), @@ -332,6 +336,7 @@ static const struct __ftr_reg_entry { /* Op1 = 0, CRn = 0, CRm = 6 */ ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0), ARM64_FTR_REG(SYS_ID_AA64ISAR1_EL1, ftr_aa64raz), + ARM64_FTR_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2), /* Op1 = 0, CRn = 0, CRm = 7 */ ARM64_FTR_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0), @@ -459,6 +464,7 @@ void __init init_cpu_features(struct cpu init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1, info->reg_id_aa64dfr1); init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1, info->reg_id_aa64isar0); init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1, info->reg_id_aa64isar1); + init_cpu_ftr_reg(SYS_ID_AA64ISAR2_EL1, info->reg_id_aa64isar2); init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1, info->reg_id_aa64mmfr0); init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1, info->reg_id_aa64mmfr1); init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2); @@ -570,6 +576,8 @@ void update_cpu_features(int cpu, info->reg_id_aa64isar0, boot->reg_id_aa64isar0); taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu, info->reg_id_aa64isar1, boot->reg_id_aa64isar1); + taint |= check_update_ftr_reg(SYS_ID_AA64ISAR2_EL1, cpu, + info->reg_id_aa64isar2, boot->reg_id_aa64isar2); /* * Differing PARange support is fine as long as all peripherals and @@ -689,6 +697,7 @@ static u64 __raw_read_system_reg(u32 sys case SYS_ID_AA64MMFR2_EL1: return read_cpuid(ID_AA64MMFR2_EL1); case SYS_ID_AA64ISAR0_EL1: return read_cpuid(ID_AA64ISAR0_EL1); case SYS_ID_AA64ISAR1_EL1: return read_cpuid(ID_AA64ISAR1_EL1); + case SYS_ID_AA64ISAR2_EL1: return read_cpuid(ID_AA64ISAR2_EL1); case SYS_CNTFRQ_EL0: return read_cpuid(CNTFRQ_EL0); case SYS_CTR_EL0: return read_cpuid(CTR_EL0); --- a/arch/arm64/kernel/cpuinfo.c +++ 
b/arch/arm64/kernel/cpuinfo.c @@ -335,6 +335,7 @@ static void __cpuinfo_store_cpu(struct c info->reg_id_aa64dfr1 = read_cpuid(ID_AA64DFR1_EL1); info->reg_id_aa64isar0 = read_cpuid(ID_AA64ISAR0_EL1); info->reg_id_aa64isar1 = read_cpuid(ID_AA64ISAR1_EL1); + info->reg_id_aa64isar2 = read_cpuid(ID_AA64ISAR2_EL1); info->reg_id_aa64mmfr0 = read_cpuid(ID_AA64MMFR0_EL1); info->reg_id_aa64mmfr1 = read_cpuid(ID_AA64MMFR1_EL1); info->reg_id_aa64mmfr2 = read_cpuid(ID_AA64MMFR2_EL1); From patchwork Wed Apr 6 18:26:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Greg Kroah-Hartman X-Patchwork-Id: 558537 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F106FC433F5 for ; Wed, 6 Apr 2022 19:58:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229922AbiDFUA0 (ORCPT ); Wed, 6 Apr 2022 16:00:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40582 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233449AbiDFT7w (ORCPT ); Wed, 6 Apr 2022 15:59:52 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DC2CF1EACB; Wed, 6 Apr 2022 11:28:34 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 88EB2B8253D; Wed, 6 Apr 2022 18:28:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id DA0CBC385A3; Wed, 6 Apr 2022 18:28:31 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org; s=korg; t=1649269712; bh=/Y8tw8fZoZbeTRsOt2DoxXpyxEnPuJ3YWXe6IV9CfaA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lXRCXsShVHU7pKrd85xN0mU7KwGQdhsriwld9UEPb8gPGntDeQT/ZuCxw6+hTAf+R /LIuW0uxDRhdrD+BBC7NF+DP5WTVOJynchTYZZXYWQZevYwHIAjEsz6HK6GOyZnkrs uuTe4gh3N5WiAlVutgCyFRJsEafy1i9IQlj3JWf0= From: Greg Kroah-Hartman To: linux-kernel@vger.kernel.org Cc: Greg Kroah-Hartman , stable@vger.kernel.org, "Russell King (Oracle)" , Catalin Marinas , James Morse Subject: [PATCH 4.9 43/43] arm64: Use the clearbhb instruction in mitigations Date: Wed, 6 Apr 2022 20:26:52 +0200 Message-Id: <20220406182437.930125262@linuxfoundation.org> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220406182436.675069715@linuxfoundation.org> References: <20220406182436.675069715@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: stable@vger.kernel.org From: James Morse commit 228a26b912287934789023b4132ba76065d9491c upstream. Future CPUs may implement a clearbhb instruction that is sufficient to mitigate Spectre-BHB. CPUs that implement this instruction, but not CSV2.3, must be affected by Spectre-BHB. Add support to use this instruction as the BHB mitigation on CPUs that support it. The instruction is in the hint space, so it will be treated as a NOP by older CPUs. 
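
Support for the instruction is advertised through the CLEARBHB field of the ID_AA64ISAR2_EL1 register added by the previous patch: a four-bit field at bits [31:28], where any non-zero value means clearbhb is implemented. The following is an illustrative, self-contained sketch of the extraction that supports_clearbhb() performs, using the ID_AA64ISAR2_CLEARBHB_SHIFT value this patch defines rather than the kernel's cpuid_feature_extract_unsigned_field() helper:

	#define ID_AA64ISAR2_CLEARBHB_SHIFT	28

	/* Sketch: non-zero CLEARBHB field means clearbhb (hint #22) is implemented. */
	static inline int cpu_has_clearbhb(unsigned long long isar2)
	{
		return (isar2 >> ID_AA64ISAR2_CLEARBHB_SHIFT) & 0xf;
	}
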
Reviewed-by: Russell King (Oracle) Reviewed-by: Catalin Marinas [ modified for stable: Use a KVM vector template instead of alternatives ] Signed-off-by: James Morse Signed-off-by: Greg Kroah-Hartman --- arch/arm64/include/asm/assembler.h | 7 +++++++ arch/arm64/include/asm/cpufeature.h | 13 +++++++++++++ arch/arm64/include/asm/sysreg.h | 3 +++ arch/arm64/include/asm/vectors.h | 7 +++++++ arch/arm64/kernel/bpi.S | 5 +++++ arch/arm64/kernel/cpu_errata.c | 14 ++++++++++++++ arch/arm64/kernel/cpufeature.c | 1 + arch/arm64/kernel/entry.S | 8 ++++++++ 8 files changed, 58 insertions(+) --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -95,6 +95,13 @@ .endm /* + * Clear Branch History instruction + */ + .macro clearbhb + hint #22 + .endm + +/* * Sanitise a 64-bit bounded index wrt speculation, returning zero if out * of bounds. */ --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -387,6 +387,19 @@ static inline bool supports_csv2p3(int s return csv2_val == 3; } +static inline bool supports_clearbhb(int scope) +{ + u64 isar2; + + if (scope == SCOPE_LOCAL_CPU) + isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1); + else + isar2 = read_system_reg(SYS_ID_AA64ISAR2_EL1); + + return cpuid_feature_extract_unsigned_field(isar2, + ID_AA64ISAR2_CLEARBHB_SHIFT); +} + static inline bool system_supports_32bit_el0(void) { return cpus_have_const_cap(ARM64_HAS_32BIT_EL0); --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -174,6 +174,9 @@ #define ID_AA64ISAR0_SHA1_SHIFT 8 #define ID_AA64ISAR0_AES_SHIFT 4 +/* id_aa64isar2 */ +#define ID_AA64ISAR2_CLEARBHB_SHIFT 28 + /* id_aa64pfr0 */ #define ID_AA64PFR0_CSV3_SHIFT 60 #define ID_AA64PFR0_CSV2_SHIFT 56 --- a/arch/arm64/include/asm/vectors.h +++ b/arch/arm64/include/asm/vectors.h @@ -33,6 +33,12 @@ enum arm64_bp_harden_el1_vectors { * canonical vectors. */ EL1_VECTOR_BHB_FW, + + /* + * Use the ClearBHB instruction, before branching to the canonical + * vectors. + */ + EL1_VECTOR_BHB_CLEAR_INSN, #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ /* @@ -44,6 +50,7 @@ enum arm64_bp_harden_el1_vectors { #ifndef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY #define EL1_VECTOR_BHB_LOOP -1 #define EL1_VECTOR_BHB_FW -1 +#define EL1_VECTOR_BHB_CLEAR_INSN -1 #endif /* !CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ /* The vectors to use on return from EL0. e.g. to remap the kernel */ --- a/arch/arm64/kernel/bpi.S +++ b/arch/arm64/kernel/bpi.S @@ -123,3 +123,8 @@ ENTRY(__spectre_bhb_loop_k32_start) ldp x0, x1, [sp, #(8 * 0)] add sp, sp, #(8 * 2) ENTRY(__spectre_bhb_loop_k32_end) + +ENTRY(__spectre_bhb_clearbhb_start) + hint #22 /* aka clearbhb */ + isb +ENTRY(__spectre_bhb_clearbhb_end) --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -84,6 +84,8 @@ extern char __spectre_bhb_loop_k24_start extern char __spectre_bhb_loop_k24_end[]; extern char __spectre_bhb_loop_k32_start[]; extern char __spectre_bhb_loop_k32_end[]; +extern char __spectre_bhb_clearbhb_start[]; +extern char __spectre_bhb_clearbhb_end[]; static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start, const char *hyp_vecs_end) @@ -590,6 +592,7 @@ static void update_mitigation_state(enum * - Mitigated by a branchy loop a CPU specific number of times, and listed * in our "loop mitigated list". * - Mitigated in software by the firmware Spectre v2 call. + * - Has the ClearBHB instruction to perform the mitigation. 
* - Has the 'Exception Clears Branch History Buffer' (ECBHB) feature, so no * software mitigation in the vectors is needed. * - Has CSV2.3, so is unaffected. @@ -729,6 +732,9 @@ bool is_spectre_bhb_affected(const struc if (supports_csv2p3(scope)) return false; + if (supports_clearbhb(scope)) + return true; + if (spectre_bhb_loop_affected(scope)) return true; @@ -769,6 +775,8 @@ static const char *kvm_bhb_get_vecs_end( return __spectre_bhb_loop_k24_end; else if (start == __spectre_bhb_loop_k32_start) return __spectre_bhb_loop_k32_end; + else if (start == __spectre_bhb_clearbhb_start) + return __spectre_bhb_clearbhb_end; return NULL; } @@ -810,6 +818,7 @@ static void kvm_setup_bhb_slot(const cha #define __spectre_bhb_loop_k8_start NULL #define __spectre_bhb_loop_k24_start NULL #define __spectre_bhb_loop_k32_start NULL +#define __spectre_bhb_clearbhb_start NULL static void kvm_setup_bhb_slot(const char *hyp_vecs_start) { }; #endif @@ -835,6 +844,11 @@ void spectre_bhb_enable_mitigation(const pr_info_once("spectre-bhb mitigation disabled by command line option\n"); } else if (supports_ecbhb(SCOPE_LOCAL_CPU)) { state = SPECTRE_MITIGATED; + } else if (supports_clearbhb(SCOPE_LOCAL_CPU)) { + kvm_setup_bhb_slot(__spectre_bhb_clearbhb_start); + this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN); + + state = SPECTRE_MITIGATED; } else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) { switch (spectre_bhb_loop_affected(SCOPE_SYSTEM)) { case 8: --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -99,6 +99,7 @@ static const struct arm64_ftr_bits ftr_i }; static const struct arm64_ftr_bits ftr_id_aa64isar2[] = { + ARM64_FTR_BITS(FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0), ARM64_FTR_END, }; --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -932,6 +932,7 @@ __ni_sys_trace: #define BHB_MITIGATION_NONE 0 #define BHB_MITIGATION_LOOP 1 #define BHB_MITIGATION_FW 2 +#define BHB_MITIGATION_INSN 3 .macro tramp_ventry, vector_start, regsize, kpti, bhb .align 7 @@ -948,6 +949,11 @@ __ni_sys_trace: __mitigate_spectre_bhb_loop x30 .endif // \bhb == BHB_MITIGATION_LOOP + .if \bhb == BHB_MITIGATION_INSN + clearbhb + isb + .endif // \bhb == BHB_MITIGATION_INSN + .if \kpti == 1 /* * Defend against branch aliasing attacks by pushing a dummy @@ -1023,6 +1029,7 @@ ENTRY(tramp_vectors) #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_LOOP generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_FW + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_INSN #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_NONE END(tramp_vectors) @@ -1085,6 +1092,7 @@ ENTRY(__bp_harden_el1_vectors) #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY generate_el1_vector bhb=BHB_MITIGATION_LOOP generate_el1_vector bhb=BHB_MITIGATION_FW + generate_el1_vector bhb=BHB_MITIGATION_INSN #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ END(__bp_harden_el1_vectors) .popsection
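
With all of the above applied, spectre_bhb_enable_mitigation() ends up selecting exactly one mitigation per CPU, preferring cheaper options first. The sketch below condenses that decision order into self-contained C; it is illustrative only: the stand-in parameters replace the real supports_ecbhb()/supports_clearbhb()/spectre_bhb_loop_affected() probes, and the command-line kill switch and KVM vector-slot setup shown earlier are omitted:

	enum bhb_mitigation { BHB_NONE, BHB_HW_ECBHB, BHB_CLEAR_INSN, BHB_LOOP, BHB_FW };

	/*
	 * Sketch of the per-CPU preference order: hardware ECBHB needs no vector
	 * change, the clearbhb hint beats the branchy loop, and the firmware
	 * ARCH_WORKAROUND_3 call is the last resort.
	 */
	static enum bhb_mitigation choose_bhb_mitigation(int has_ecbhb, int has_clearbhb,
							 int loop_k /* 0, 8, 24 or 32 */,
							 int fw_wa3)
	{
		if (has_ecbhb)		/* exception entry itself clears the BHB */
			return BHB_HW_ECBHB;
		if (has_clearbhb)	/* use the EL1_VECTOR_BHB_CLEAR_INSN vectors */
			return BHB_CLEAR_INSN;
		if (loop_k)		/* EL1_VECTOR_BHB_LOOP, K taken branches */
			return BHB_LOOP;
		if (fw_wa3)		/* EL1_VECTOR_BHB_FW */
			return BHB_FW;
		return BHB_NONE;
	}
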