From patchwork Thu Mar 21 23:05:51 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 160833
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, Dave.Martin@arm.com, shankerd@codeaurora.org,
    julien.thierry@arm.com, mlangsdo@redhat.com, stefan.wahren@i2e.com,
    Andre.Przywara@arm.com, linux-kernel@vger.kernel.org,
    Jeremy Linton, Andre Przywara, Stefan Wahren
Subject: [PATCH v6 04/10] arm64: Advertise mitigation of Spectre-v2, or lack thereof
Date: Thu, 21 Mar 2019 18:05:51 -0500
Message-Id: <20190321230557.45107-5-jeremy.linton@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190321230557.45107-1-jeremy.linton@arm.com>
References: <20190321230557.45107-1-jeremy.linton@arm.com>

From: Marc Zyngier

We currently have a list of CPUs affected by Spectre-v2, for which we
check that the firmware implements ARCH_WORKAROUND_1. It turns out
that not all firmware implements the required mitigation, and that we
fail to let the user know about it.

Instead, let's slightly revamp our checks, and rely on a whitelist of
cores that are known to be non-vulnerable, and let the user know the
status of the mitigation in the kernel log.
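For readers skimming the thread, the detection order this introduces is
easiest to see in isolation. The sketch below is illustrative only and is
not the kernel code: the helper predicates (cpu_has_csv2(),
cpu_in_safe_list(), nospectre_v2_cmdline()) are hypothetical stand-ins for
the ID_AA64PFR0_EL1 read, the MIDR safe-list lookup and the command-line
flag, and the firmware probe is stubbed to return the patch's -1/0/1
convention.

/*
 * Illustrative sketch of the new Spectre-v2 decision order:
 * CSV2 -> safe list -> firmware probe -> nospectre_v2 override -> report.
 * All helpers are stand-ins, not the kernel implementation.
 */
#include <stdbool.h>
#include <stdio.h>

static bool cpu_has_csv2(void)         { return false; } /* ID_AA64PFR0_EL1.CSV2 */
static bool cpu_in_safe_list(void)     { return false; } /* MIDR in safe list */
static bool nospectre_v2_cmdline(void) { return false; } /* "nospectre_v2" given */
/* -1: firmware lacks the workaround, 0: not required, 1: installed */
static int  detect_harden_bp_fw(void)  { return -1; }

/* Returns true if the CPU should be flagged as needing BP hardening. */
static bool check_branch_predictor(void)
{
	int need_wa;

	if (cpu_has_csv2())        /* hardware is immune */
		return false;
	if (cpu_in_safe_list())    /* known-unaffected core */
		return false;

	need_wa = detect_harden_bp_fw();
	if (!need_wa)              /* firmware says nothing is required */
		return false;

	if (nospectre_v2_cmdline()) {
		printf("spectrev2 mitigation disabled by command line option\n");
		return false;
	}

	if (need_wa < 0)           /* vulnerable, but firmware can't help */
		printf("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");

	return need_wa > 0;
}

int main(void)
{
	printf("BP hardening capability set: %s\n",
	       check_branch_predictor() ? "yes" : "no");
	return 0;
}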
Signed-off-by: Marc Zyngier
[This makes more sense in front of the sysfs patch]
[Pick pieces of that patch into this and move it earlier]
Signed-off-by: Jeremy Linton
Reviewed-by: Andre Przywara
Tested-by: Stefan Wahren
---
 arch/arm64/kernel/cpu_errata.c | 107 +++++++++++++++++----------------
 1 file changed, 55 insertions(+), 52 deletions(-)

-- 
2.20.1

Reviewed-by: Suzuki K Poulose

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index cf623657cf3c..2b6e6d8e105b 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -131,9 +131,9 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
 	__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
 }
 
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
+				    const char *hyp_vecs_start,
+				    const char *hyp_vecs_end)
 {
 	static DEFINE_RAW_SPINLOCK(bp_lock);
 	int cpu, slot = -1;
@@ -177,23 +177,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 }
 #endif	/* CONFIG_KVM_INDIRECT_VECTORS */
 
-static void install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
-				    bp_hardening_cb_t fn,
-				    const char *hyp_vecs_start,
-				    const char *hyp_vecs_end)
-{
-	u64 pfr0;
-
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
-	if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
-		return;
-
-	__install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
-}
-
 #include <uapi/linux/psci.h>
 #include <linux/arm-smccc.h>
 #include <linux/psci.h>
@@ -228,31 +211,27 @@ static int __init parse_nospectre_v2(char *str)
 }
 early_param("nospectre_v2", parse_nospectre_v2);
 
-static void
-enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+/*
+ * -1: No workaround
+ *  0: No workaround required
+ *  1: Workaround installed
+ */
+static int detect_harden_bp_fw(void)
 {
 	bp_hardening_cb_t cb;
 	void *smccc_start, *smccc_end;
 	struct arm_smccc_res res;
 	u32 midr = read_cpuid_id();
 
-	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
-		return;
-
-	if (__nospectre_v2) {
-		pr_info_once("spectrev2 mitigation disabled by command line option\n");
-		return;
-	}
-
 	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
-		return;
+		return -1;
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -263,23 +242,23 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
 		if ((int)res.a0 < 0)
-			return;
+			return -1;
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
-		return;
+		return -1;
 	}
 
 	if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
 	    ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
 		cb = qcom_link_stack_sanitization;
 
-	install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
+	install_bp_hardening_cb(cb, smccc_start, smccc_end);
 
-	return;
+	return 1;
 }
 #endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
@@ -521,24 +500,48 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	CAP_MIDR_RANGE_LIST(midr_list)
 
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
-
 /*
- * List of CPUs where we need to issue a psci call to
- * harden the branch predictor.
+ * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
-static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
-	MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
-	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
-	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
-	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
-	MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
-	{},
+static const struct midr_range spectre_v2_safe_list[] = {
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+	{ /* sentinel */ }
 };
 
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	int need_wa;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	/* If the CPU has CSV2 set, we're safe */
+	if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
+						 ID_AA64PFR0_CSV2_SHIFT))
+		return false;
+
+	/* Alternatively, we have a list of unaffected CPUs */
+	if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
+		return false;
+
+	/* Fallback to firmware detection */
+	need_wa = detect_harden_bp_fw();
+	if (!need_wa)
+		return false;
+
+	/* forced off */
+	if (__nospectre_v2) {
+		pr_info_once("spectrev2 mitigation disabled by command line option\n");
+		return false;
+	}
+
+	if (need_wa < 0)
+		pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+
+	return (need_wa > 0);
+}
 #endif
 
 #ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -717,8 +720,8 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
-		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
 	},
 #endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS