From patchwork Thu Jan 3 00:49:19 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 154688
From: Jeremy Linton <jeremy.linton@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
	suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
	mark.rutland@arm.com, linux-kernel@vger.kernel.org, ykaukab@suse.de,
	julien.thierry@arm.com, mlangsdo@redhat.com, steven.price@arm.com,
	Jeremy Linton <jeremy.linton@arm.com>
Subject: [PATCH v2 5/7] arm64: add sysfs vulnerability show for spectre v2
Date: Wed, 2 Jan 2019 18:49:19 -0600
Message-Id: <20190103004921.1928921-6-jeremy.linton@arm.com>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20190103004921.1928921-1-jeremy.linton@arm.com>
References: <20190103004921.1928921-1-jeremy.linton@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add code to track whether all the cores in the machine are vulnerable,
and whether all the vulnerable cores have been mitigated. Once we have
that information we can add the sysfs stub and provide an accurate view
of what is known about the machine.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
---
 arch/arm64/kernel/cpu_errata.c | 69 +++++++++++++++++++++++++++++++---
 1 file changed, 64 insertions(+), 5 deletions(-)

-- 
2.17.2

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 2352955b1259..96a55accefa9 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,6 +109,11 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static bool __hardenbp_enab = true;
+#endif
+
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
@@ -231,15 +236,19 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	if (!entry->matches(entry, SCOPE_LOCAL_CPU))
 		return;
 
-	if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
+	if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
+		__hardenbp_enab = false;
 		return;
+	}
 
 	switch (psci_ops.conduit) {
 	case PSCI_CONDUIT_HVC:
 		arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		if ((int)res.a0 < 0) {
+			__hardenbp_enab = false;
 			return;
+		}
 		cb = call_hvc_arch_workaround_1;
 		/* This is a guest, no need to patch KVM vectors */
 		smccc_start = NULL;
@@ -249,14 +258,17 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 	case PSCI_CONDUIT_SMC:
 		arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
 				  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
-		if ((int)res.a0 < 0)
+		if ((int)res.a0 < 0) {
+			__hardenbp_enab = false;
 			return;
+		}
 		cb = call_smc_arch_workaround_1;
 		smccc_start = __smccc_workaround_1_smc_start;
 		smccc_end = __smccc_workaround_1_smc_end;
 		break;
 
 	default:
+		__hardenbp_enab = false;
 		return;
 	}
 
@@ -507,7 +519,36 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+#if defined(CONFIG_HARDEN_BRANCH_PREDICTOR) || \
+	defined(CONFIG_GENERIC_CPU_VULNERABILITIES)
+
+static enum { A64_SV2_UNSET, A64_SV2_SAFE, A64_SV2_UNSAFE } __spectrev2_safe = A64_SV2_UNSET;
+
+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	bool is_vul;
+	bool has_csv2;
+	u64 pfr0;
+
+	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+	is_vul = is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list);
+
+	pfr0 = read_cpuid(ID_AA64PFR0_EL1);
+	has_csv2 = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT);
+
+	if (is_vul)
+		__spectrev2_safe = A64_SV2_UNSAFE;
+	else if (__spectrev2_safe == A64_SV2_UNSET && has_csv2)
+		__spectrev2_safe = A64_SV2_SAFE;
+
+	return is_vul;
+}
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -705,7 +746,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.cpu_enable = enable_smccc_arch_workaround_1,
-		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+		.matches = check_branch_predictor,
+		.midr_range_list = arm64_bp_harden_smccc_cpus,
 	},
 #endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -751,4 +794,20 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
 	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
 }
 
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+		char *buf)
+{
+	switch (__spectrev2_safe) {
+	case A64_SV2_SAFE:
+		return sprintf(buf, "Not affected\n");
+	case A64_SV2_UNSAFE:
+		if (__hardenbp_enab)
+			return sprintf(buf,
+				"Mitigation: Branch predictor hardening\n");
+		return sprintf(buf, "Vulnerable\n");
+	default:
+		return sprintf(buf, "Unknown\n");
+	}
+}
+
 #endif
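
For reference, the string returned by cpu_show_spectre_v2() above reaches
userspace through the generic CONFIG_GENERIC_CPU_VULNERABILITIES plumbing in
drivers/base/cpu.c, which provides a weak default and hooks the attribute into
the "vulnerabilities" sysfs group; the strong arm64 definition added by this
patch overrides the weak one. A rough sketch of that generic glue, paraphrased
from memory and not part of this patch (details may differ by kernel version):

/* drivers/base/cpu.c (paraphrased): generic spectre_v2 sysfs glue */
ssize_t __weak cpu_show_spectre_v2(struct device *dev,
				   struct device_attribute *attr, char *buf)
{
	/* Weak default, overridden by the arm64 implementation above */
	return sprintf(buf, "Not affected\n");
}

static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);

static struct attribute *cpu_root_vulnerabilities_attrs[] = {
	&dev_attr_meltdown.attr,
	&dev_attr_spectre_v1.attr,
	&dev_attr_spectre_v2.attr,
	NULL
};

static const struct attribute_group cpu_root_vulnerabilities_group = {
	.name  = "vulnerabilities",
	.attrs = cpu_root_vulnerabilities_attrs,
};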
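
As a quick way to exercise the new attribute, the hypothetical userspace
helper below (not part of this patch) just reads the sysfs file and prints
whatever cpu_show_spectre_v2() reported; it assumes a kernel built with
CONFIG_GENERIC_CPU_VULNERABILITIES=y so the file exists:

/* spectre_v2_check.c: print the spectre_v2 status string exported above */
#include <stdio.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

	if (!f) {
		/* Attribute missing: kernel too old or option disabled */
		perror("spectre_v2");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		/* Expected with this patch: "Not affected",
		 * "Mitigation: Branch predictor hardening",
		 * "Vulnerable", or "Unknown". */
		fputs(line, stdout);
	fclose(f);
	return 0;
}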