From patchwork Mon Feb 26 08:20:22 2018
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 129599
From: Alex Shi
To: Marc Zyngier, Will Deacon, Ard Biesheuvel, Catalin Marinas,
 stable@vger.kernel.org, Paolo Bonzini, Radim Krčmář, Christoffer Dall,
 Russell King, kvm@vger.kernel.org (open list:KERNEL VIRTUAL MACHINE (KVM)),
 linux-arm-kernel@lists.infradead.org (moderated list:KERNEL VIRTUAL MACHINE (KVM) FOR ARM),
 kvmarm@lists.cs.columbia.edu (open list:KERNEL VIRTUAL MACHINE (KVM) FOR ARM),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 48/52] arm: KVM: Invalidate BTB on guest exit for Cortex-A12/A17
Date: Mon, 26 Feb 2018 16:20:22 +0800
Message-Id: <1519633227-29832-49-git-send-email-alex.shi@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1519633227-29832-1-git-send-email-alex.shi@linaro.org>
References: <1519633227-29832-1-git-send-email-alex.shi@linaro.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Marc Zyngier

** Not yet queued for inclusion in mainline **

In order to avoid aliasing attacks against the branch predictor, let's
invalidate the BTB on guest exit. This is made complicated by the fact
that we cannot take a branch before invalidating the BTB.

We only apply this to A12 and A17, which are the only two ARM cores on
which this is useful.
Signed-off-by: Marc Zyngier
Signed-off-by: Will Deacon
Signed-off-by: Alex Shi

Conflicts:
	no hvc stub in hyp_hvc in arch/arm/kvm/hyp/hyp-entry.S
---
 arch/arm/include/asm/kvm_asm.h |  2 --
 arch/arm/include/asm/kvm_mmu.h | 18 ++++++++++-
 arch/arm/kvm/hyp/hyp-entry.S   | 71 ++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 86 insertions(+), 5 deletions(-)

-- 
2.7.4

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 8ef0538..24f3ec7 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -61,8 +61,6 @@ struct kvm_vcpu;
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
 
-extern char __kvm_hyp_vector[];
-
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index d10e362..2887129 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -37,6 +37,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -225,7 +226,22 @@ static inline unsigned int kvm_get_vmid_bits(void)
 
 static inline void *kvm_get_hyp_vector(void)
 {
-	return kvm_ksym_ref(__kvm_hyp_vector);
+	switch(read_cpuid_part()) {
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	case ARM_CPU_PART_CORTEX_A12:
+	case ARM_CPU_PART_CORTEX_A17:
+	{
+		extern char __kvm_hyp_vector_bp_inv[];
+		return kvm_ksym_ref(__kvm_hyp_vector_bp_inv);
+	}
+
+#endif
+	default:
+	{
+		extern char __kvm_hyp_vector[];
+		return kvm_ksym_ref(__kvm_hyp_vector);
+	}
+	}
 }
 
 static inline int kvm_map_vectors(void)
diff --git a/arch/arm/kvm/hyp/hyp-entry.S b/arch/arm/kvm/hyp/hyp-entry.S
index 96beb53..b6b8cb1 100644
--- a/arch/arm/kvm/hyp/hyp-entry.S
+++ b/arch/arm/kvm/hyp/hyp-entry.S
@@ -71,6 +71,66 @@ __kvm_hyp_vector:
 	W(b)	hyp_irq
 	W(b)	hyp_fiq
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+	.align 5
+__kvm_hyp_vector_bp_inv:
+	.global __kvm_hyp_vector_bp_inv
+
+	/*
+	 * We encode the exception entry in the bottom 3 bits of
+	 * SP, and we have to guarantee to be 8 bytes aligned.
+	 */
+	W(add)	sp, sp, #1	/* Reset          7 */
+	W(add)	sp, sp, #1	/* Undef          6 */
+	W(add)	sp, sp, #1	/* Syscall        5 */
+	W(add)	sp, sp, #1	/* Prefetch abort 4 */
+	W(add)	sp, sp, #1	/* Data abort     3 */
+	W(add)	sp, sp, #1	/* HVC            2 */
+	W(add)	sp, sp, #1	/* IRQ            1 */
+	W(nop)			/* FIQ            0 */
+
+	mcr	p15, 0, r0, c7, c5, 6	/* BPIALL */
+	isb
+
+#ifdef CONFIG_THUMB2_KERNEL
+	/*
+	 * Yet another silly hack: Use VPIDR as a temp register.
+	 * Thumb2 is really a pain, as SP cannot be used with most
+	 * of the bitwise instructions. The vect_br macro ensures
+	 * things gets cleaned-up.
+	 */
+	mcr	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mov	r0, sp
+	and	r0, r0, #7
+	sub	sp, sp, r0
+	push	{r1, r2}
+	mov	r1, r0
+	mrc	p15, 4, r0, c0, c0, 0	/* VPIDR */
+	mrc	p15, 0, r2, c0, c0, 0	/* MIDR  */
+	mcr	p15, 4, r2, c0, c0, 0	/* VPIDR */
+#endif
+
+.macro vect_br val, targ
+ARM(	eor	sp, sp, #\val	)
+ARM(	tst	sp, #7		)
+ARM(	eorne	sp, sp, #\val	)
+
+THUMB(	cmp	r1, #\val	)
+THUMB(	popeq	{r1, r2}	)
+
+	beq	\targ
+.endm
+
+	vect_br	0, hyp_fiq
+	vect_br	1, hyp_irq
+	vect_br	2, hyp_hvc
+	vect_br	3, hyp_dabt
+	vect_br	4, hyp_pabt
+	vect_br	5, hyp_svc
+	vect_br	6, hyp_undef
+	vect_br	7, hyp_reset
+#endif
+
 .macro invalid_vector label, cause
 	.align
 \label:	mov	r0, #\cause
@@ -131,7 +191,14 @@ hyp_hvc:
 	mrceq	p15, 4, r0, c12, c0, 0	@ get HVBAR
 	beq	1f
 
-	push	{lr}
+	/*
+	 * Pushing r2 here is just a way of keeping the stack aligned to
+	 * 8 bytes on any path that can trigger a HYP exception. Here,
+	 * we may well be about to jump into the guest, and the guest
+	 * exit would otherwise be badly decoded by our fancy
+	 * "decode-exception-without-a-branch" code...
+	 */
+	push	{r2, lr}
 
 	mov	lr, r0
 	mov	r0, r1
@@ -141,7 +208,7 @@ hyp_hvc:
 THUMB(	orr	lr, #1)
 	blx	lr			@ Call the HYP function
 
-	pop	{lr}
+	pop	{r2, lr}
 
 1:	eret
 
 guest_trap: