From patchwork Wed Oct 31 13:57:00 2018
X-Patchwork-Submitter: David Long <dave.long@linaro.org>
X-Patchwork-Id: 149805
From: David Long <dave.long@linaro.org>
To: stable@vger.kernel.org, Russell King - ARM Linux, Florian Fainelli,
 Tony Lindgren, Marc Zyngier, Mark Rutland
Cc: Greg KH, Mark Brown
Subject: [PATCH 4.9 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
Date: Wed, 31 Oct 2018 09:57:00 -0400
Message-Id: <20181031135713.2873-12-dave.long@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181031135713.2873-1-dave.long@linaro.org>
References: <20181031135713.2873-1-dave.long@linaro.org>

From: Marc Zyngier

Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.

In order to avoid aliasing attacks against the branch predictor, let's
invalidate the BTB on guest exit. This is made complicated by the fact
that we cannot take a branch before invalidating the BTB.

We only apply this to A12 and A17, which are the only two ARM cores on
which this is useful.

Signed-off-by: Marc Zyngier
Signed-off-by: Russell King
Boot-tested-by: Tony Lindgren
Reviewed-by: Tony Lindgren
Signed-off-by: David A. Long
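
For background: the invalidation itself is the BPIALL CP15 operation
(c7, c5, 6) followed by an ISB, exactly as issued in the new vectors
below. A minimal C sketch of that operation, using a hypothetical
helper name (illustrative only; the patch has to do this in hand-written
assembly because no branch may be taken before the predictor is clean):

	/* Hypothetical helper, not part of this patch. */
	static inline void btb_invalidate_all(void)
	{
		/* BPIALL: invalidate the entire branch predictor array */
		asm volatile("mcr p15, 0, %0, c7, c5, 6" : : "r" (0) : "memory");
		/* Make the invalidation visible before the next branch */
		asm volatile("isb" : : : "memory");
	}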
---
 arch/arm/include/asm/kvm_asm.h |  2 -
 arch/arm/include/asm/kvm_mmu.h | 17 ++++++++-
 arch/arm/kvm/hyp/hyp-entry.S   | 69 ++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 3 deletions(-)

-- 
2.17.1

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 8ef05381984b..24f3ec7c9fbe 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -61,8 +61,6 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern char __kvm_hyp_vector[];
-
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index e2f05cedaf97..625edef2a54f 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -248,7 +248,22 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
 
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kvm_ksym_ref(__kvm_hyp_vector);
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	{
+		extern char __kvm_hyp_vector_bp_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
+	}
+
+#endif
+	default:
+	{
+		extern char __kvm_hyp_vector[];
+		return kvm_ksym_ref(__kvm_hyp_vector);
+	}
+	}
 }
 
 static inline int kvm_map_vectors(void)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 96beb53934c9..de242d9598c6 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -71,6 +71,66 @@ __kvm_hyp_vector:
 	W(b)	hyp_irq
 	W(b)	hyp_fiq
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_bp_inv:
+	.global __kvm_hyp_vector_bp_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset          7 */
+	W(add)	sp, sp, #1	/* Undef          6 */
+	W(add)	sp, sp, #1	/* Syscall        5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort     3 */
+	W(add)	sp, sp, #1	/* HVC            2 */
+	W(add)	sp, sp, #1	/* IRQ            1 */
+	W(nop)			/* FIQ            0 */
+
+	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
+	isb
+
+#ifdef CONFIG_THUMB2_KERNEL
+	/*
+	 * Yet another silly hack: Use VPIDR as a temp register.
+	 * Thumb2 is really a pain, as SP cannot be used with most
+	 * of the bitwise instructions. The vect_br macro ensures
+	 * things gets cleaned-up.
+	 */
+	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mov	r0, sp
+	and	r0, r0, #7
+	sub	sp, sp, r0
+	push	{r1, r2}
+	mov	r1, r0
+	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
+	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
+#endif
+
+.macro vect_br val, targ
+ARM(	eor	sp, sp, #\val	)
+ARM(	tst	sp, #7		)
+ARM(	eorne	sp, sp, #\val	)
+
+THUMB(	cmp	r1, #\val	)
+THUMB(	popeq	{r1, r2}	)
+
+	beq	\targ
+.endm
+
+	vect_br	0, hyp_fiq
+	vect_br	1, hyp_irq
+	vect_br	2, hyp_hvc
+	vect_br	3, hyp_dabt
+	vect_br	4, hyp_pabt
+	vect_br	5, hyp_svc
+	vect_br	6, hyp_undef
+	vect_br	7, hyp_reset
+#endif
+
 .macro invalid_vector label, cause
 	.align
 \label:	mov	r0, #\cause
@@ -132,6 +192,14 @@ hyp_hvc:
 	beq	1f
 
 	push	{lr}
+	/*
+	 * Pushing r2 here is just a way of keeping the stack aligned to
+	 * 8 bytes on any path that can trigger a HYP exception. Here,
+	 * we may well be about to jump into the guest, and the guest
+	 * exit would otherwise be badly decoded by our fancy
+	 * "decode-exception-without-a-branch" code...
+	 */
+	push	{r2, lr}
 
 	mov	lr, r0
 	mov	r0, r1
@@ -142,6 +210,7 @@ THUMB(	orr	lr, #1)
 	blx	lr			@ Call the HYP function
 
 	pop	{lr}
+	pop	{r2, lr}
 1:	eret
 
 guest_trap:
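
To make the SP trick above concrete: each of the eight vector slots
falls through the remaining W(add) instructions before reaching the
common BPIALL, so on arrival the low three bits of SP encode the entry
(the FIQ slot runs only the W(nop) and adds 0, HVC adds 2, Reset adds
all 7), and vect_br then EORs the matching value back out of SP before
branching to the real handler. A standalone C model of that decode
step, under the same 8-byte stack alignment assumption the comment
states (names are illustrative, not from the patch):

	#include <assert.h>

	enum hyp_entry { FIQ = 0, IRQ = 1, HVC = 2, DABT = 3,
			 PABT = 4, SVC = 5, UNDEF = 6, RESET = 7 };

	/* SP was 8-byte aligned on entry, so sp & 7 is exactly the tag. */
	static enum hyp_entry decode_entry(unsigned long sp)
	{
		return (enum hyp_entry)(sp & 7);
	}

	int main(void)
	{
		unsigned long sp = 0x1000;	/* hypothetical aligned hyp stack */

		assert(decode_entry(sp + 2) == HVC);	/* HVC slot: two adds ran */
		assert(decode_entry(sp + 0) == FIQ);	/* FIQ slot: only the nop */
		assert(decode_entry(sp + 7) == RESET);	/* Reset: all seven adds */
		return 0;
	}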