From patchwork Thu Apr 30 12:12:26 2015
X-Patchwork-Submitter: Jiri Slaby
X-Patchwork-Id: 47818
From: Jiri Slaby
To: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Christoffer Dall, Shannon Zhao, Jiri Slaby
Subject: [PATCH 3.12 57/63] arm/arm64: KVM: Introduce stage2_unmap_vm
Date: Thu, 30 Apr 2015 14:12:26 +0200
Message-Id:
X-Mailer: git-send-email 2.3.5
In-Reply-To: <45aaf85687dd6ac119c55c5ec0dbe0bef0e62235.1430387326.git.jslaby@suse.cz>
References: <45aaf85687dd6ac119c55c5ec0dbe0bef0e62235.1430387326.git.jslaby@suse.cz>

From: Christoffer Dall

3.12-stable review patch.  If anyone has any objections, please let me know.

===============

commit 957db105c99792ae8ef61ffc9ae77d910f6471da upstream.

Introduce a new function to unmap user RAM regions in the stage2 page
tables.  This is needed on reboot (or when the guest turns off the
MMU) to ensure we fault in pages again and make the dcache, RAM, and
icache coherent.

Using unmap_stage2_range for the whole guest physical range does not
work, because that unmaps IO regions (such as the GIC) which will not
be recreated or in the best case faulted in on a page-by-page basis.

Call this function on secondary and subsequent calls to the
KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest
Stage-1 MMU is off when faulting in pages and make the caches
coherent.
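As an illustrative aside (not part of the patch), the kernel path
described above is reached from userspace roughly as in the sketch
below.  The helper name reset_vcpu() and the bare-bones error handling
are assumptions made for the example; KVM_ARM_PREFERRED_TARGET and
KVM_ARM_VCPU_INIT are the real ioctls a VMM would use, the former
filling in a usable target for the latter:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * On a guest reboot, a VMM typically re-issues KVM_ARM_VCPU_INIT on
 * the existing vcpu fd.  With this patch, the second and later calls
 * reach stage2_unmap_vm(), so RAM pages are faulted in again with
 * clean caches once the guest restarts.
 */
static int reset_vcpu(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;

	/* Ask KVM which CPU target the host prefers. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
		return -1;

	/* Not the first init: the kernel will unmap stage-2 RAM. */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}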
Acked-by: Marc Zyngier
Signed-off-by: Christoffer Dall
Signed-off-by: Shannon Zhao
Signed-off-by: Jiri Slaby
---
 arch/arm/include/asm/kvm_mmu.h   |  1 +
 arch/arm/kvm/arm.c               |  7 +++++
 arch/arm/kvm/mmu.c               | 65 ++++++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 4 files changed, 74 insertions(+)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 17b93071bb17..8cd885699420 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -47,6 +47,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 8f4761b5af85..d1c5946e33a2 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -673,6 +673,13 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	if (ret)
 		return ret;
 
+	/*
+	 * Ensure a rebooted VM will fault in RAM pages and detect if the
+	 * guest MMU is turned off and flush the caches as needed.
+	 */
+	if (vcpu->arch.has_run_once)
+		stage2_unmap_vm(vcpu->kvm);
+
 	vcpu_reset_hcr(vcpu);
 
 	/*
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 5c31e3fff597..a79baa59fe15 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -528,6 +528,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
 
+static void stage2_unmap_memslot(struct kvm *kvm,
+				 struct kvm_memory_slot *memslot)
+{
+	hva_t hva = memslot->userspace_addr;
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t size = PAGE_SIZE * memslot->npages;
+	hva_t reg_end = hva + size;
+
+	/*
+	 * A memory region could potentially cover multiple VMAs, and any holes
+	 * between them, so iterate over all of them to find out if we should
+	 * unmap any of them.
+	 *
+	 *     +--------------------------------------------+
+	 * +---------------+----------------+   +----------------+
+	 * |   : VMA 1     |      VMA 2     |   |    VMA 3  :    |
+	 * +---------------+----------------+   +----------------+
+	 *     |               memory region                |
+	 *     +--------------------------------------------+
+	 */
+	do {
+		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		hva_t vm_start, vm_end;
+
+		if (!vma || vma->vm_start >= reg_end)
+			break;
+
+		/*
+		 * Take the intersection of this VMA with the memory region
+		 */
+		vm_start = max(hva, vma->vm_start);
+		vm_end = min(reg_end, vma->vm_end);
+
+		if (!(vma->vm_flags & VM_PFNMAP)) {
+			gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
+			unmap_stage2_range(kvm, gpa, vm_end - vm_start);
+		}
+		hva = vm_end;
+	} while (hva < reg_end);
+}
+
+/**
+ * stage2_unmap_vm - Unmap Stage-2 RAM mappings
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the memregions and unmap any regular RAM
+ * backing memory already mapped to the VM.
+ */
+void stage2_unmap_vm(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_unmap_memslot(kvm, memslot);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
 /**
  * kvm_free_stage2_pgd - free all stage-2 tables
  * @kvm: The KVM struct pointer for the VM.
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5966ad5a356f..6e127e7ca687 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -74,6 +74,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
 
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
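To make the VMA walk in stage2_unmap_memslot() concrete: the function
clamps each VMA to the memslot and skips VM_PFNMAP (IO) mappings,
unmapping only the RAM-backed intersections.  Below is a small
standalone sketch of the same max/min interval arithmetic; the
addresses and the printout are made up purely for illustration:

#include <stdio.h>

int main(void)
{
	/* Hypothetical memslot: hva [0x1000, 0x6000), guest base 0x80000000. */
	unsigned long hva = 0x1000, reg_end = 0x6000;
	unsigned long base_gpa = 0x80000000;
	/* Hypothetical VMA overlapping the tail of the memslot. */
	unsigned long vma_start = 0x3000, vma_end = 0x9000;

	/* Intersection of the VMA with the memory region, as in the patch. */
	unsigned long vm_start = hva > vma_start ? hva : vma_start;
	unsigned long vm_end = reg_end < vma_end ? reg_end : vma_end;
	unsigned long gpa = base_gpa + (vm_start - hva);

	/* Prints: unmap gpa 0x80002000 size 0x3000 */
	printf("unmap gpa %#lx size %#lx\n", gpa, vm_end - vm_start);
	return 0;
}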