From patchwork Fri Jan 25 18:07:03 2019
X-Patchwork-Submitter: Jeremy Linton
X-Patchwork-Id: 156622
Delivered-To: patch@linaro.org
From: Jeremy Linton
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, marc.zyngier@arm.com,
    suzuki.poulose@arm.com, dave.martin@arm.com, shankerd@codeaurora.org,
    linux-kernel@vger.kernel.org, ykaukab@suse.de, julien.thierry@arm.com,
    mlangsdo@redhat.com, steven.price@arm.com, stefan.wahren@i2se.com,
    Jeremy Linton, Christoffer Dall, kvmarm@lists.cs.columbia.edu
Subject: [PATCH v4 04/12] arm64: remove the ability to build a kernel without
 hardened branch predictors
Date: Fri, 25 Jan 2019 12:07:03 -0600
Message-Id: <20190125180711.1970973-5-jeremy.linton@arm.com>
X-Mailer: git-send-email 2.17.2
In-Reply-To: <20190125180711.1970973-1-jeremy.linton@arm.com>
References:
<20190125180711.1970973-1-jeremy.linton@arm.com>
List-ID: linux-kernel@vger.kernel.org

Buried behind EXPERT is the ability to build a kernel without hardened
branch predictors. This needlessly clutters up the code and creates
opportunities for bugs. It also removes the kernel's ability to determine
whether the machine it's running on is vulnerable. Since it's also
possible to disable the mitigation at boot time, let's remove the config
option.

Signed-off-by: Jeremy Linton
Cc: Christoffer Dall
Cc: kvmarm@lists.cs.columbia.edu
---
 arch/arm64/Kconfig               | 17 -----------------
 arch/arm64/include/asm/kvm_mmu.h | 12 ------------
 arch/arm64/include/asm/mmu.h     | 12 ------------
 arch/arm64/kernel/cpu_errata.c   | 19 -------------------
 arch/arm64/kernel/entry.S        |  2 --
 arch/arm64/kvm/Kconfig           |  3 ---
 arch/arm64/kvm/hyp/hyp-entry.S   |  2 --
 7 files changed, 67 deletions(-)

-- 
2.17.2

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0baa632bf0a8..6b4c6d3fdf4d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1005,23 +1005,6 @@ config UNMAP_KERNEL_AT_EL0
 
 	  If unsure, say Y.
 
-config HARDEN_BRANCH_PREDICTOR
-	bool "Harden the branch predictor against aliasing attacks" if EXPERT
-	default y
-	help
-	  Speculation attacks against some high-performance processors rely on
-	  being able to manipulate the branch predictor for a victim context by
-	  executing aliasing branches in the attacker context.  Such attacks
-	  can be partially mitigated against by clearing internal branch
-	  predictor state and limiting the prediction logic in some situations.
-
-	  This config option will take CPU-specific actions to harden the
-	  branch predictor against aliasing attacks and may rely on specific
-	  instruction sequences or control bits being set by the system
-	  firmware.
-
-	  If unsure, say Y.
-
 config HARDEN_EL2_VECTORS
 	bool "Harden EL2 vector mapping against system register leak" if EXPERT
 	default y
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index a5c152d79820..9dd680194db9 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -444,7 +444,6 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
 	return ret;
 }
 
-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 /*
  * EL2 vectors can be mapped and rerouted in a number of ways,
  * depending on the kernel configuration and CPU present:
@@ -529,17 +528,6 @@ static inline int kvm_map_vectors(void)
 	return 0;
 }
 
-#else
-static inline void *kvm_get_hyp_vector(void)
-{
-	return kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));
-}
-
-static inline int kvm_map_vectors(void)
-{
-	return 0;
-}
-#endif
 
 DECLARE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 3e8063f4f9d3..20fdf71f96c3 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -95,13 +95,9 @@ struct bp_hardening_data {
 	bp_hardening_cb_t fn;
 };
 
-#if (defined(CONFIG_HARDEN_BRANCH_PREDICTOR) ||	\
-     defined(CONFIG_HARDEN_EL2_VECTORS))
 extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
 extern atomic_t arm64_el2_vector_last_slot;
-#endif  /* CONFIG_HARDEN_BRANCH_PREDICTOR || CONFIG_HARDEN_EL2_VECTORS */
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
 static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
@@ -120,14 +116,6 @@ static inline void arm64_apply_bp_hardening(void)
 	if (d->fn)
 		d->fn();
 }
-#else
-static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
-{
-	return NULL;
-}
-
-static inline void arm64_apply_bp_hardening(void)	{ }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 extern void paging_init(void);
 extern void bootmem_init(void);
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 934d50788ca3..de09a3537cd4 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,13 +109,11 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
 DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
 
-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 extern char __smccc_workaround_1_smc_start[];
 extern char __smccc_workaround_1_smc_end[];
 
@@ -165,17 +163,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
 	__this_cpu_write(bp_hardening_data.fn, fn);
 	raw_spin_unlock(&bp_lock);
 }
-#else
-#define __smccc_workaround_1_smc_start		NULL
-#define __smccc_workaround_1_smc_end		NULL
-
-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
-				      const char *hyp_vecs_start,
-				      const char *hyp_vecs_end)
-{
-	__this_cpu_write(bp_hardening_data.fn, fn);
-}
-#endif	/* CONFIG_KVM_INDIRECT_VECTORS */
 
 static void install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
 				    bp_hardening_cb_t fn,
@@ -279,7 +266,6 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
 
 	return;
 }
-#endif	/* CONFIG_HARDEN_BRANCH_PREDICTOR */
 
 DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
 
@@ -516,7 +502,6 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 	.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,			\
 	CAP_MIDR_RANGE_LIST(midr_list)
 
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 
 /*
  * List of CPUs where we need to issue a psci call to
@@ -535,8 +520,6 @@ static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
 	{},
 };
 
-#endif
-
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 
 static const struct midr_range arm64_harden_el2_vectors[] = {
@@ -710,13 +693,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
 	},
 #endif
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	{
 		.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
 		.cpu_enable = enable_smccc_arch_workaround_1,
 		ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
 	},
-#endif
 #ifdef CONFIG_HARDEN_EL2_VECTORS
 	{
 		.desc = "EL2 vector hardening",
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index bee54b7d17b9..3f0eaaf704c8 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -842,11 +842,9 @@ el0_irq_naked:
 #endif
 
 	ct_user_exit
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
 	tbz	x22, #55, 1f
 	bl	do_el0_irq_bp_hardening
 1:
-#endif
 	irq_handler
 
 #ifdef CONFIG_TRACE_IRQFLAGS
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a3f85624313e..402bcfb85f25 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -58,9 +58,6 @@ config KVM_ARM_PMU
 	  Adds support for a virtual Performance Monitoring Unit (PMU) in
 	  virtual machines.
 
-config KVM_INDIRECT_VECTORS
-	def_bool KVM && (HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS)
-
 source "drivers/vhost/Kconfig"
 
 endif # VIRTUALIZATION
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 53c9344968d4..e02ddf40f113 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -272,7 +272,6 @@ ENTRY(__kvm_hyp_vector)
 	valid_vect	el1_error		// Error 32-bit EL1
 ENDPROC(__kvm_hyp_vector)
 
-#ifdef CONFIG_KVM_INDIRECT_VECTORS
 .macro hyp_ventry
 	.align 7
 1:	.rept 27
@@ -331,4 +330,3 @@ ENTRY(__smccc_workaround_1_smc_start)
 	ldp	x0, x1, [sp, #(8 * 2)]
 	add	sp, sp, #(8 * 4)
 ENTRY(__smccc_workaround_1_smc_end)
-#endif