From patchwork Wed Jan  9 23:55:42 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 155096
From: Jeremy Linton <jeremy.linton@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton <jeremy.linton@arm.com>
Subject: [PATCH v3 5/7] arm64: add sysfs vulnerability show for spectre v2
Date: Wed,  9 Jan 2019 17:55:42 -0600
Message-Id: <20190109235544.2992426-6-jeremy.linton@arm.com>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20190109235544.2992426-1-jeremy.linton@arm.com>
References: <20190109235544.2992426-1-jeremy.linton@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add code to track whether all the cores in the machine are vulnerable
and whether all the vulnerable cores have been mitigated. Once we have
that information, we can add the sysfs stub and provide an accurate
view of what is known about the machine.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 61 +++++++++++++++++++++++++++++++---
 1 file changed, 56 insertions(+), 5 deletions(-)

--
2.17.2

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 8dde8c616b7e..ee286d606d9b 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -111,6 +111,11 @@ atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
 uint arm64_requested_vuln_attrs = VULN_SPECTREV1;
 
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static bool __hardenbp_enab = true;
+#endif
+
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
@@ -233,15 +238,19 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
-	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
+		__hardenbp_enab = false;
 		return;
+	}
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		if ((int)res.a0 < 0) {
+			__hardenbp_enab = false;
 			return;
+		}
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -251,14 +260,17 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		if ((int)res.a0 < 0) {
+			__hardenbp_enab = false;
 			return;
+		}
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
+		__hardenbp_enab = false;
 		return;
 	}
 
@@ -509,7 +521,32 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
+	defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+
+
+static bool __spectrev2_safe = true;
+
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	bool is_vul;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+
+	if (is_vul)
+		__spectrev2_safe = false;
+
+	arm64_requested_vuln_attrs |= VULN_SPECTREV2;
+
+	return is_vul;
+}
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -707,7 +744,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
+		.midr_range_list = arm64_bp_harden_smccc_cpus,
 	},
 #endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -758,4 +797,16 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
 
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	if (__spectrev2_safe)
+		return sprintf(buf, "Not affected\n");
+
+	if (__hardenbp_enab)
+		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
+
+	return sprintf(buf, "Vulnerable\n");
+}
+
 #endif