From patchwork Fri Apr 24 05:27:55 2015
X-Patchwork-Submitter: Shannon Zhao
X-Patchwork-Id: 47542
From: shannon.zhao@linaro.org
To: stable@vger.kernel.org
Cc: jslaby@suse.cz, christoffer.dall@linaro.org, shannon.zhao@linaro.org
Subject: [PATCH for 3.12.y stable 57/63] arm/arm64: KVM: Introduce stage2_unmap_vm
Date: Fri, 24 Apr 2015 13:27:55 +0800
Message-Id: <1429853281-6136-58-git-send-email-shannon.zhao@linaro.org>
X-Mailer: git-send-email 1.9.5.msysgit.1
In-Reply-To: <1429853281-6136-1-git-send-email-shannon.zhao@linaro.org>
References: <1429853281-6136-1-git-send-email-shannon.zhao@linaro.org>

From: Christoffer Dall

commit 957db105c99792ae8ef61ffc9ae77d910f6471da upstream.

Introduce a new function to unmap user RAM regions in the stage2 page
tables.  This is needed on reboot (or when the guest turns off the
MMU) to ensure we fault in pages again and make the dcache, RAM, and
icache coherent.

Using unmap_stage2_range for the whole guest physical range does not
work, because that unmaps IO regions (such as the GIC) which will not
be recreated or in the best case faulted in on a page-by-page basis.

Call this function on secondary and subsequent calls to the
KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest
Stage-1 MMU is off when faulting in pages and make the caches coherent.
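For readers tracing the call path: stage2_unmap_vm() is reached from
userspace when a VMM re-issues the KVM_ARM_VCPU_INIT ioctl on a vcpu
that has already run, which is how a guest reboot is requested. Below
is a minimal, hypothetical sketch of that userspace side (not part of
this patch; the vcpu fd and the preferred target are assumed to have
been obtained earlier via KVM_CREATE_VCPU and KVM_ARM_PREFERRED_TARGET):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Hypothetical VMM helper: re-init a vcpu to reset the guest.
 * Because the vcpu has run once (vcpu->arch.has_run_once is set
 * kernel-side), this second KVM_ARM_VCPU_INIT now reaches
 * stage2_unmap_vm(), so guest RAM faults back in and the caches
 * are made coherent before the rebooted guest runs.
 */
static int reset_vcpu(int vcpu_fd, const struct kvm_vcpu_init *preferred)
{
	struct kvm_vcpu_init init;

	memset(&init, 0, sizeof(init));
	init.target = preferred->target; /* e.g. KVM_ARM_TARGET_CORTEX_A15 */

	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}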
Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
Signed-off-by: Shannon Zhao
---
 arch/arm/include/asm/kvm_mmu.h   |  1 +
 arch/arm/kvm/arm.c               |  7 +++++
 arch/arm/kvm/mmu.c               | 65 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 4 files changed, 74 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 17b9307..8cd8856 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -47,6 +47,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 8f4761b..d1c5946 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -673,6 +673,13 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	if (ret)
 		return ret;
 
+	/*
+	 * Ensure a rebooted VM will fault in RAM pages and detect if the
+	 * guest MMU is turned off and flush the caches as needed.
+	 */
+	if (vcpu->arch.has_run_once)
+		stage2_unmap_vm(vcpu->kvm);
+
 	vcpu_reset_hcr(vcpu);
 
 	/*
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5c31e3ff..a79baa5 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -528,6 +528,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
 
+static void stage2_unmap_memslot(struct kvm *kvm,
+				 struct kvm_memory_slot *memslot)
+{
+	hva_t hva = memslot->userspace_addr;
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t size = PAGE_SIZE * memslot->npages;
+	hva_t reg_end = hva + size;
+
+	/*
+	 * A memory region could potentially cover multiple VMAs, and any holes
+	 * between them, so iterate over all of them to find out if we should
+	 * unmap any of them.
+	 *
+	 *     +--------------------------------------------+
+	 * +---------------+----------------+   +----------------+
+	 * |   : VMA 1     |      VMA 2     |   |    VMA 3  :    |
+	 * +---------------+----------------+   +----------------+
+	 *     |               memory region                |
+	 *     +--------------------------------------------+
+	 */
+	do {
+		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		hva_t vm_start, vm_end;
+
+		if (!vma || vma->vm_start >= reg_end)
+			break;
+
+		/*
+		 * Take the intersection of this VMA with the memory region
+		 */
+		vm_start = max(hva, vma->vm_start);
+		vm_end = min(reg_end, vma->vm_end);
+
+		if (!(vma->vm_flags & VM_PFNMAP)) {
+			gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
+			unmap_stage2_range(kvm, gpa, vm_end - vm_start);
+		}
+		hva = vm_end;
+	} while (hva < reg_end);
+}
+
+/**
+ * stage2_unmap_vm - Unmap Stage-2 RAM mappings
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the memregions and unmap any regular RAM
+ * backing memory already mapped to the VM.
+ */
+void stage2_unmap_vm(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_unmap_memslot(kvm, memslot);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
 /**
  * kvm_free_stage2_pgd - free all stage-2 tables
  * @kvm: The KVM struct pointer for the VM.
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5966ad5..6e127e7 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -74,6 +74,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
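As a worked example of the hva-to-IPA arithmetic in
stage2_unmap_memslot() above, here is a standalone sketch with
invented addresses (plain user-space C mirroring the kernel logic,
not kernel code; all values are hypothetical):

#include <assert.h>
#include <stdint.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	/* Hypothetical memslot: 16 pages of guest RAM at IPA 0x80000000,
	 * backed by host user memory starting at 0x7f0000000000. */
	uint64_t userspace_addr = 0x7f0000000000ULL;
	uint64_t base_ipa       = 0x80000000ULL;
	uint64_t size           = 16 * 4096;
	uint64_t reg_end        = userspace_addr + size;

	/* Hypothetical VMA overlapping only part of the region. */
	uint64_t vma_start = userspace_addr - 2 * 4096; /* starts before it */
	uint64_t vma_end   = userspace_addr + 5 * 4096; /* ends inside it   */

	/* The same clamping stage2_unmap_memslot() does. */
	uint64_t vm_start = MAX(userspace_addr, vma_start);
	uint64_t vm_end   = MIN(reg_end, vma_end);

	/* Translate the clamped hva window back into guest physical space. */
	uint64_t gpa = base_ipa + (vm_start - userspace_addr);

	assert(gpa == 0x80000000ULL);           /* offset 0 into the slot */
	assert(vm_end - vm_start == 5 * 4096);  /* only 5 pages unmapped  */
	return 0;
}

Note the VM_PFNMAP check in the real function: device mappings such as
the GIC are skipped precisely because, as the commit message explains,
they would not be recreated or faulted back in page by page.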