From patchwork Mon Oct 15 15:32:05 2018
X-Patchwork-Submitter: David Long
X-Patchwork-Id: 148861
From: David Long
To: Russell King - ARM Linux, Florian Fainelli, Tony Lindgren, Marc Zyngier, Mark Rutland
Cc: Greg KH, Mark Brown
Subject: [PATCH 4.14 11/24] ARM: KVM: invalidate BTB on guest exit for Cortex-A12/A17
Date: Mon, 15 Oct 2018 11:32:05 -0400
Message-Id: <1539617538-22328-12-git-send-email-dave.long@linaro.org>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1539617538-22328-1-git-send-email-dave.long@linaro.org>
References: <1539617538-22328-1-git-send-email-dave.long@linaro.org>

From: Marc Zyngier

Commit 3f7e8e2e1ebda787f156ce46e3f0a9ce2833fa4f upstream.

In order to avoid aliasing attacks against the branch predictor, let's
invalidate the BTB on guest exit. This is made complicated by the fact
that we cannot take a branch before invalidating the BTB.

We only apply this to A12 and A17, which are the only two ARM cores on
which this is useful.

Signed-off-by: Marc Zyngier
Signed-off-by: Russell King
Boot-tested-by: Tony Lindgren
Reviewed-by: Tony Lindgren
Signed-off-by: David A. Long

---
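[ Note, not part of the upstream commit: the "decode an exception without
taking a branch" trick in __kvm_hyp_vector_bp_inv below is easier to follow
in plain C. The stand-alone model here is a sketch only; the handler names
come from the patch, but the 0x1000 starting SP and the loop structure are
illustrative, and only the add/eor/tst arithmetic mirrors the assembly.

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* Handler names indexed by the encoded value (FIQ = 0 ... reset = 7). */
	static const char *handler[8] = {
		"hyp_fiq", "hyp_irq", "hyp_hvc", "hyp_dabt",
		"hyp_pabt", "hyp_svc", "hyp_undef", "hyp_reset",
	};
	unsigned long entry, val;

	for (entry = 0; entry < 8; entry++) {
		unsigned long sp = 0x1000;	/* any 8-byte-aligned value */

		/*
		 * Encode, as in __kvm_hyp_vector_bp_inv: an exception taken
		 * through a given vector slot falls through the remaining
		 * "W(add) sp, sp, #1" slots, so FIQ adds 0, IRQ adds 1, ...
		 * reset adds 7. No branch is taken on the way, so BPIALL
		 * can run before the branch predictor is consulted again.
		 */
		sp += entry;

		/* Decode, as in the ARM flavour of the vect_br macro. */
		for (val = 0; val < 8; val++) {
			sp ^= val;		/* eor   sp, sp, #val */
			if ((sp & 7) == 0) {	/* tst   sp, #7; beq  */
				printf("entry %lu -> %s\n", entry, handler[val]);
				assert(sp == 0x1000);	/* SP is restored */
				break;
			}
			sp ^= val;		/* eorne sp, sp, #val */
		}
	}
	return 0;
}

The point of the eor/tst/eorne probe is that the matching XOR also clears
the low bits again, so each handler is entered with the original,
8-byte-aligned SP. ]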
 arch/arm/include/asm/kvm_asm.h |  2 --
 arch/arm/include/asm/kvm_mmu.h | 17 +++++++++-
 arch/arm/kvm/hyp/hyp-entry.S   | 71 ++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 85 insertions(+), 5 deletions(-)

-- 
2.5.0

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 14d68a4..b598e66 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -61,8 +61,6 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern char __kvm_hyp_vector[];
-
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 8a098e6..85d48c9 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -246,7 +246,22 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
 
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kvm_ksym_ref(__kvm_hyp_vector);
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	{
+		extern char __kvm_hyp_vector_bp_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
+	}
+
+#endif
+	default:
+	{
+		extern char __kvm_hyp_vector[];
+		return kvm_ksym_ref(__kvm_hyp_vector);
+	}
+	}
 }
 
 static inline int kvm_map_vectors(void)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 95a2fae..e789f52 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -71,6 +71,66 @@ __kvm_hyp_vector:
 	W(b)	hyp_irq
 	W(b)	hyp_fiq
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_bp_inv:
+	.global __kvm_hyp_vector_bp_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset          7 */
+	W(add)	sp, sp, #1	/* Undef          6 */
+	W(add)	sp, sp, #1	/* Syscall        5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort     3 */
+	W(add)	sp, sp, #1	/* HVC            2 */
+	W(add)	sp, sp, #1	/* IRQ            1 */
+	W(nop)			/* FIQ            0 */
+
+	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
+	isb
+
+#ifdef CONFIG_THUMB2_KERNEL
+	/*
+	 * Yet another silly hack: Use VPIDR as a temp register.
+	 * Thumb2 is really a pain, as SP cannot be used with most
+	 * of the bitwise instructions. The vect_br macro ensures
+	 * things gets cleaned-up.
+	 */
+	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mov	r0, sp
+	and	r0, r0, #7
+	sub	sp, sp, r0
+	push	{r1, r2}
+	mov	r1, r0
+	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
+	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
+#endif
+
+.macro vect_br val, targ
+ARM(	eor	sp, sp, #\val	)
+ARM(	tst	sp, #7		)
+ARM(	eorne	sp, sp, #\val	)
+
+THUMB(	cmp	r1, #\val	)
+THUMB(	popeq	{r1, r2}	)
+
+	beq	\targ
+.endm
+
+	vect_br	0, hyp_fiq
+	vect_br	1, hyp_irq
+	vect_br	2, hyp_hvc
+	vect_br	3, hyp_dabt
+	vect_br	4, hyp_pabt
+	vect_br	5, hyp_svc
+	vect_br	6, hyp_undef
+	vect_br	7, hyp_reset
+#endif
+
 .macro invalid_vector label, cause
 	.align
 \label:	mov	r0, #\cause
@@ -149,7 +209,14 @@ hyp_hvc:
 	bx	ip
 
 1:
-	push	{lr}
+	/*
+	 * Pushing r2 here is just a way of keeping the stack aligned to
+	 * 8 bytes on any path that can trigger a HYP exception. Here,
+	 * we may well be about to jump into the guest, and the guest
+	 * exit would otherwise be badly decoded by our fancy
+	 * "decode-exception-without-a-branch" code...
+	 */
+	push	{r2, lr}
 
 	mov	lr, r0
 	mov	r0, r1
@@ -159,7 +226,7 @@ hyp_hvc:
 THUMB(	orr	lr, #1)
 	blx	lr			@ Call the HYP function
 
-	pop	{lr}
+	pop	{r2, lr}
 	eret
 
 guest_trap:
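[ A similar sketch, also not part of the commit, for the C side: the
dispatch added to kvm_get_hyp_vector() picks the BTB-invalidating vector
table only on the affected cores. The enum and the runtime "hardened" flag
below are illustrative stand-ins for read_cpuid_part() and the compile-time
CONFIG_HARDEN_BRANCH_PREDICTOR check.

#include <stdio.h>

/* Illustrative stand-ins for the MIDR part numbers in cputype.h. */
enum cpu_part { CORTEX_A12, CORTEX_A15, CORTEX_A17, CORTEX_OTHER };

/*
 * Model of kvm_get_hyp_vector(): only cores whose predictor is
 * invalidated by BPIALL (Cortex-A12 and A17 in this patch) get the
 * hardened vector table; everything else keeps the plain one.
 */
static const char *get_hyp_vector(enum cpu_part part, int hardened)
{
	if (hardened) {
		switch (part) {
		case CORTEX_A12:
		case CORTEX_A17:
			return "__kvm_hyp_vector_bp_inv";
		default:
			break;
		}
	}
	return "__kvm_hyp_vector";
}

int main(void)
{
	printf("A17, hardened:   %s\n", get_hyp_vector(CORTEX_A17, 1));
	printf("A15, hardened:   %s\n", get_hyp_vector(CORTEX_A15, 1));
	printf("A12, unhardened: %s\n", get_hyp_vector(CORTEX_A12, 0));
	return 0;
}

Modelling the config option as a runtime flag is a simplification; in the
kernel the unhardened build compiles the extra cases away entirely. ]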