From patchwork Fri Jan 17 15:03:13 2014
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 23337
From: Marc Zyngier <marc.zyngier@arm.com>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, kvm@vger.kernel.org
Cc: Catalin Marinas, Christoffer Dall
Subject: [RFC PATCH 3/3] arm64: KVM: flush VM pages before letting the guest enable caches
Date: Fri, 17 Jan 2014 15:03:13 +0000
Message-Id: <1389970993-19371-4-git-send-email-marc.zyngier@arm.com>
In-Reply-To: <1389970993-19371-1-git-send-email-marc.zyngier@arm.com>
References: <1389970993-19371-1-git-send-email-marc.zyngier@arm.com>
X-Mailer: git-send-email 1.8.3.4

When the guest runs with caches disabled (like in an early boot sequence, for example), all the writes go directly to RAM, bypassing the caches altogether. Once the MMU and caches are enabled, whatever sits in the cache suddenly becomes visible, which isn't what the guest expects.

A way to avoid this potential disaster is to invalidate the cache when the MMU is being turned on.
For this, we hook into the SCTLR_EL1 trapping code, and scan the stage-2 page tables, invalidating the pages/sections that have already been mapped in.

Signed-off-by: Marc Zyngier
Reviewed-by: Catalin Marinas
---
 arch/arm/kvm/mmu.c               | 72 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 arch/arm64/kvm/sys_regs.c        |  5 ++-
 3 files changed, 77 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 415fd63..704c939 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -187,6 +187,78 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
 	}
 }
 
+void stage2_flush_ptes(struct kvm *kvm, pmd_t *pmd,
+		       unsigned long addr, unsigned long end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte)) {
+			hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+			kvm_flush_dcache_to_poc((void*)hva, PAGE_SIZE);
+		}
+	} while(pte++, addr += PAGE_SIZE, addr != end);
+}
+
+void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
+		       unsigned long addr, unsigned long end)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		next = pmd_addr_end(addr, end);
+		if (!pmd_none(*pmd)) {
+			if (kvm_pmd_huge(*pmd)) {
+				hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+				kvm_flush_dcache_to_poc((void*)hva, PMD_SIZE);
+			} else {
+				stage2_flush_ptes(kvm, pmd, addr, next);
+			}
+		}
+	} while(pmd++, addr = next, addr != end);
+}
+
+void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
+		       unsigned long addr, unsigned long end)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(pgd, addr);
+	do {
+		next = pud_addr_end(addr, end);
+		if (!pud_none(*pud)) {
+			if (pud_huge(*pud)) {
+				hva_t hva = gfn_to_hva(kvm, addr >> PAGE_SHIFT);
+				kvm_flush_dcache_to_poc((void*)hva, PUD_SIZE);
+			} else {
+				stage2_flush_pmds(kvm, pud, addr, next);
+			}
+		}
+	} while(pud++, addr = next, addr != end);
+}
+
+void stage2_flush_vm(struct kvm *kvm)
+{
+	unsigned long long addr = 0;
+	unsigned long end = KVM_PHYS_SIZE;
+	unsigned long next;
+	pgd_t *pgd;
+
+	spin_lock(&kvm->mmu_lock);
+
+	pgd = kvm->arch.pgd + pgd_index(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		stage2_flush_puds(kvm, pgd, addr, next);
+	} while(pgd++, addr = next, addr != end);
+
+	spin_unlock(&kvm->mmu_lock);
+}
+
 /**
  * free_boot_hyp_pgd - free HYP boot page tables
  *
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 2232dd0..b7b2ca3 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -139,6 +139,7 @@ static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
 	}
 }
 
+void stage2_flush_vm(struct kvm *kvm);
 
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5e92b9e..32e440f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include <asm/kvm_mmu.h>
 #include
 #include
 #include
@@ -150,8 +151,10 @@ static bool access_sctlr_el1(struct kvm_vcpu *vcpu,
 	val = *vcpu_reg(vcpu, p->Rt);
 	vcpu_sys_reg(vcpu, r->reg) = val;
-	if ((val & (0b101)) == 0b101)	/* MMU+Caches enabled? */
+	if ((val & (0b101)) == 0b101) {	/* MMU+Caches enabled? */
 		vcpu->arch.hcr_el2 &= ~HCR_TVM;
+		stage2_flush_vm(vcpu->kvm);
+	}
 
 	return true;
 }
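
A note for readers less familiar with the SCTLR_EL1 layout: the (val & (0b101)) == 0b101 test in access_sctlr_el1() checks bit 0 (SCTLR_EL1.M, stage-1 MMU enable) and bit 2 (SCTLR_EL1.C, data cache enable) together, so the stage-2 flush only fires on a trapped write that turns both on, after which HCR_EL2.TVM is cleared and subsequent SCTLR_EL1 writes stop trapping. The standalone sketch below only illustrates that bit test; the helper name mmu_and_caches_enabled() is invented for the example and is not part of the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCTLR_EL1_M  (UINT64_C(1) << 0)  /* bit 0: stage-1 MMU enable */
#define SCTLR_EL1_C  (UINT64_C(1) << 2)  /* bit 2: data cache enable */

/* Hypothetical helper, equivalent to the (val & 0b101) == 0b101 check above. */
static bool mmu_and_caches_enabled(uint64_t sctlr)
{
	return (sctlr & (SCTLR_EL1_M | SCTLR_EL1_C)) ==
	       (SCTLR_EL1_M | SCTLR_EL1_C);
}

int main(void)
{
	uint64_t sctlr = 0;	/* early boot: MMU and caches off, writes bypass the cache */

	printf("flush on this write? %s\n",
	       mmu_and_caches_enabled(sctlr) ? "yes" : "no");	/* no */

	sctlr |= SCTLR_EL1_M | SCTLR_EL1_C;	/* guest enables MMU + caches */
	printf("flush on this write? %s\n",
	       mmu_and_caches_enabled(sctlr) ? "yes" : "no");	/* yes */

	return 0;
}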