From patchwork Sun May 17 01:02:29 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 48618
From: Sasha Levin
To: stable@vger.kernel.org, stable-commits@vger.kernel.org
Cc: sasha.levin@oracle.com
Subject: [PATCH 3.18 008/222] arm/arm64: KVM: Introduce stage2_unmap_vm
Date: Sat, 16 May 2015 21:02:29 -0400
Message-Id: <1431824764-20044-9-git-send-email-sasha.levin@oracle.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1431824764-20044-1-git-send-email-sasha.levin@oracle.com>
References: <1431824764-20044-1-git-send-email-sasha.levin@oracle.com>

From: Christoffer Dall

commit 957db105c99792ae8ef61ffc9ae77d910f6471da upstream.

Introduce a new function to unmap user RAM regions in the stage2 page
tables. This is needed on reboot (or when the guest turns off the MMU)
to ensure we fault in pages again and make the dcache, RAM, and icache
coherent.

Using unmap_stage2_range for the whole guest physical range does not
work, because that unmaps IO regions (such as the GIC) which will not
be recreated or, in the best case, faulted in on a page-by-page basis.

Call this function on the second and subsequent calls to the
KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect that the guest
Stage-1 MMU is off when faulting in pages and make the caches coherent.
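The selective-unmap policy can be modeled outside the kernel. Below is a
minimal userspace C sketch, not kernel code: toy_vma, toy_unmap and the
pfnmap flag are invented stand-ins for vm_area_struct,
unmap_stage2_range() and VM_PFNMAP, and only illustrate the
clamp-and-skip logic that stage2_unmap_memslot() below implements.

/*
 * Toy model: walk a guest memory region, clamp it against each backing
 * "VMA", and unmap only the pieces that are ordinary RAM.
 */
#include <stdint.h>
#include <stdio.h>

struct toy_vma {
	uint64_t start, end;	/* host virtual range backing guest RAM */
	int pfnmap;		/* set for device/IO mappings */
};

/* Stand-in for unmap_stage2_range(): just report what would be unmapped. */
static void toy_unmap(uint64_t gpa, uint64_t size)
{
	printf("unmap stage-2: gpa 0x%llx + 0x%llx\n",
	       (unsigned long long)gpa, (unsigned long long)size);
}

static void toy_unmap_memslot(uint64_t hva, uint64_t gpa_base, uint64_t size,
			      const struct toy_vma *vmas, int nvmas)
{
	uint64_t reg_start = hva, reg_end = hva + size;

	for (int i = 0; i < nvmas && hva < reg_end; i++) {
		const struct toy_vma *vma = &vmas[i];

		if (vma->start >= reg_end)
			break;

		/* Intersection of this VMA with the memory region. */
		uint64_t vm_start = hva > vma->start ? hva : vma->start;
		uint64_t vm_end = reg_end < vma->end ? reg_end : vma->end;

		if (!vma->pfnmap)	/* skip IO regions such as the GIC */
			toy_unmap(gpa_base + (vm_start - reg_start),
				  vm_end - vm_start);
		hva = vm_end;
	}
}

int main(void)
{
	/* RAM, device (pfnmap), hole, RAM: only the RAM pieces are unmapped. */
	struct toy_vma vmas[] = {
		{ 0x1000, 0x3000, 0 },
		{ 0x3000, 0x4000, 1 },
		{ 0x5000, 0x8000, 0 },
	};
	toy_unmap_memslot(0x1000, 0x80000000ULL, 0x7000, vmas, 3);
	return 0;
}

Running it prints two unmap calls; the device window in the middle is
left alone, just as the real loop skips VM_PFNMAP VMAs.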
Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
Signed-off-by: Shannon Zhao
Signed-off-by: Sasha Levin
---
 arch/arm/include/asm/kvm_mmu.h   |  1 +
 arch/arm/kvm/arm.c               |  7 +++++
 arch/arm/kvm/mmu.c               | 65 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 4 files changed, 74 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index f867060..63e0ecc 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -52,6 +52,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 24c9ca4..827ff48 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -658,6 +658,13 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	if (ret)
 		return ret;
 
+	/*
+	 * Ensure a rebooted VM will fault in RAM pages and detect if the
+	 * guest MMU is turned off and flush the caches as needed.
+	 */
+	if (vcpu->arch.has_run_once)
+		stage2_unmap_vm(vcpu->kvm);
+
 	vcpu_reset_hcr(vcpu);
 
 	/*
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 16ae5f0..1dc9778 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -612,6 +612,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
 
+static void stage2_unmap_memslot(struct kvm *kvm,
+				 struct kvm_memory_slot *memslot)
+{
+	hva_t hva = memslot->userspace_addr;
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t size = PAGE_SIZE * memslot->npages;
+	hva_t reg_end = hva + size;
+
+	/*
+	 * A memory region could potentially cover multiple VMAs, and any holes
+	 * between them, so iterate over all of them to find out if we should
+	 * unmap any of them.
+	 *
+	 *     +--------------------------------------------+
+	 * +---------------+----------------+   +----------------+
+	 * |   : VMA 1     |      VMA 2     |   |    VMA 3  :    |
+	 * +---------------+----------------+   +----------------+
+	 *     |               memory region                |
+	 *     +--------------------------------------------+
+	 */
+	do {
+		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		hva_t vm_start, vm_end;
+
+		if (!vma || vma->vm_start >= reg_end)
+			break;
+
+		/*
+		 * Take the intersection of this VMA with the memory region
+		 */
+		vm_start = max(hva, vma->vm_start);
+		vm_end = min(reg_end, vma->vm_end);
+
+		if (!(vma->vm_flags & VM_PFNMAP)) {
+			gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
+			unmap_stage2_range(kvm, gpa, vm_end - vm_start);
+		}
+		hva = vm_end;
+	} while (hva < reg_end);
+}
+
+/**
+ * stage2_unmap_vm - Unmap Stage-2 RAM mappings
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the memregions and unmap any regular RAM
+ * backing memory already mapped to the VM.
+ */
+void stage2_unmap_vm(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_unmap_memslot(kvm, memslot);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
 /**
  * kvm_free_stage2_pgd - free all stage-2 tables
  * @kvm: The KVM struct pointer for the VM.
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 123b521..14a74f1 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -83,6 +83,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
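
For context, the path that reaches stage2_unmap_vm() is the
KVM_ARM_VCPU_INIT ioctl. The following is a minimal userspace sketch of
a VMM's reboot path; it assumes an ARM host with /dev/kvm, elides all
error handling, and only shows that it is the second init call on an
already-run vCPU that triggers the new unmap.

/* Minimal sketch: re-running KVM_ARM_VCPU_INIT on guest reboot. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
	struct kvm_vcpu_init init;

	/* Let KVM pick the vCPU target suited to this host. */
	ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);

	/* First init: plain vCPU setup before the guest ever runs. */
	ioctl(vcpu, KVM_ARM_VCPU_INIT, &init);

	/* ... KVM_RUN loop until the guest requests a reboot ... */

	/*
	 * Second init on reset: with this patch applied, KVM also calls
	 * stage2_unmap_vm(), so RAM pages are faulted back in and the
	 * caches made coherent while the guest Stage-1 MMU is off.
	 */
	ioctl(vcpu, KVM_ARM_VCPU_INIT, &init);
	return 0;
}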