From patchwork Tue Oct 3 03:10:52 2017
X-Patchwork-Submitter: Jintack Lim
X-Patchwork-Id: 114645
From: Jintack Lim <jintack.lim@linaro.org>
To: christoffer.dall@linaro.org, marc.zyngier@arm.com,
	kvmarm@lists.cs.columbia.edu
Cc: jintack@cs.columbia.edu, pbonzini@redhat.com, rkrcmar@redhat.com,
	catalin.marinas@arm.com, will.deacon@arm.com, linux@armlinux.org.uk,
	mark.rutland@arm.com, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Jintack Lim <jintack.lim@linaro.org>
Subject: [RFC PATCH v2 10/31] KVM: arm/arm64: Unmap/flush shadow stage 2 page tables
Date: Mon, 2 Oct 2017 22:10:52 -0500
Message-Id: <1507000273-3735-8-git-send-email-jintack.lim@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1507000273-3735-1-git-send-email-jintack.lim@linaro.org>
References: <1507000273-3735-1-git-send-email-jintack.lim@linaro.org>

From: Christoffer Dall <christoffer.dall@linaro.org>

Unmap/flush shadow stage 2 page tables for the nested VMs as well as the
stage 2 page table for the guest hypervisor.

Note: A bunch of the code in mmu.c relating to MMU notifiers is
currently dealt with in an extremely abrupt way, for example by clearing
out an entire shadow stage-2 table. This will be handled in a more
efficient way using the reverse mapping feature in a later version of
the patch series.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
---

Notes:
    v1-->v2:
    - Removed an unnecessary per-vcpu iteration from the
      kvm_nested_s2_all_vcpus_*() functions and dropped "all_vcpus" from
      their names; the list of nested MMUs is per VM, not per vcpu.
    - Renamed kvm_nested_s2_unmap() to kvm_nested_s2_clear().
    - Renamed kvm_nested_s2_teardown() to kvm_nested_s2_free().
    - Removed the unused kvm_nested_s2_init() function.
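A note for reviewers on locking, since it is easy to miss in the diff
below: kvm_nested_s2_wp(), kvm_nested_s2_clear() and kvm_nested_s2_flush()
all expect the caller to hold kvm->mmu_lock while they walk the per-VM
list of shadow stage 2 MMUs (kvm_nested_s2_free() is only called from VM
teardown). A minimal sketch of the intended calling pattern, modelled on
the kvm_mmu_wp_memory_region() hunk below; the function name
example_wp_all_stage2() and the whole-IPA-range write-protect are
hypothetical, for illustration only:

	/* Hypothetical illustration, not part of this patch. */
	static void example_wp_all_stage2(struct kvm *kvm)
	{
		spin_lock(&kvm->mmu_lock);
		/* Write-protect the VM's own stage 2 page tables... */
		kvm_stage2_wp_range(kvm, &kvm->arch.mmu, 0, KVM_PHYS_SIZE);
		/* ...and every shadow stage 2 table of the nested VMs. */
		kvm_nested_s2_wp(kvm);
		spin_unlock(&kvm->mmu_lock);
		/* Stale TLB entries must go before the tables are reused. */
		kvm_flush_remote_tlbs(kvm);
	}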
 arch/arm/include/asm/kvm_mmu.h   |  6 ++++++
 arch/arm64/include/asm/kvm_mmu.h |  5 +++++
 arch/arm64/kvm/mmu-nested.c      | 40 ++++++++++++++++++++++++++++++++++++++++
 virt/kvm/arm/arm.c               |  6 +++++-
 virt/kvm/arm/mmu.c               | 17 +++++++++++++++++
 5 files changed, 73 insertions(+), 1 deletion(-)

-- 
1.9.1

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 86fdc70..d3eafc5 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -221,6 +221,12 @@ static inline unsigned int kvm_get_vmid_bits(void)
 	return 8;
 }
 
+static inline void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu) { }
+static inline void kvm_nested_s2_free(struct kvm *kvm) { }
+static inline void kvm_nested_s2_wp(struct kvm *kvm) { }
+static inline void kvm_nested_s2_clear(struct kvm *kvm) { }
+static inline void kvm_nested_s2_flush(struct kvm *kvm) { }
+
 static inline u64 kvm_get_vttbr(struct kvm_s2_vmid *vmid,
 				struct kvm_s2_mmu *mmu)
 {
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 452912f..7fc7a83 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -325,6 +325,11 @@ static inline unsigned int kvm_get_vmid_bits(void)
 struct kvm_nested_s2_mmu *get_nested_mmu(struct kvm_vcpu *vcpu, u64 vttbr);
 struct kvm_s2_mmu *vcpu_get_active_s2_mmu(struct kvm_vcpu *vcpu);
 void update_nested_s2_mmu(struct kvm_vcpu *vcpu);
+void kvm_nested_s2_unmap(struct kvm_vcpu *vcpu);
+void kvm_nested_s2_free(struct kvm *kvm);
+void kvm_nested_s2_wp(struct kvm *kvm);
+void kvm_nested_s2_clear(struct kvm *kvm);
+void kvm_nested_s2_flush(struct kvm *kvm);
 
 static inline u64 kvm_get_vttbr(struct kvm_s2_vmid *vmid,
 				struct kvm_s2_mmu *mmu)
diff --git a/arch/arm64/kvm/mmu-nested.c b/arch/arm64/kvm/mmu-nested.c
index c436daf..3ee20f2 100644
--- a/arch/arm64/kvm/mmu-nested.c
+++ b/arch/arm64/kvm/mmu-nested.c
@@ -1,6 +1,7 @@
 /*
  * Copyright (C) 2017 - Columbia University and Linaro Ltd.
  * Author: Jintack Lim <jintack.lim@linaro.org>
+ * Author: Christoffer Dall <christoffer.dall@linaro.org>
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -21,6 +22,45 @@
 #include
 #include
 
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_wp(struct kvm *kvm)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		kvm_stage2_wp_range(kvm, &nested_mmu->mmu, 0, KVM_PHYS_SIZE);
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_clear(struct kvm *kvm)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		kvm_unmap_stage2_range(kvm, &nested_mmu->mmu, 0, KVM_PHYS_SIZE);
+}
+
+/* expects kvm->mmu_lock to be held */
+void kvm_nested_s2_flush(struct kvm *kvm)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		kvm_stage2_flush_range(&nested_mmu->mmu, 0, KVM_PHYS_SIZE);
+}
+
+void kvm_nested_s2_free(struct kvm *kvm)
+{
+	struct kvm_nested_s2_mmu *nested_mmu;
+	struct list_head *nested_mmu_list = &kvm->arch.nested_mmu_list;
+
+	list_for_each_entry_rcu(nested_mmu, nested_mmu_list, list)
+		__kvm_free_stage2_pgd(kvm, &nested_mmu->mmu);
+}
+
 static struct kvm_nested_s2_mmu *lookup_nested_mmu(struct kvm_vcpu *vcpu,
 						   u64 vttbr)
 {
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 4548d77..08706f8 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -187,6 +187,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	free_percpu(kvm->arch.last_vcpu_ran);
 	kvm->arch.last_vcpu_ran = NULL;
 
+	kvm_nested_s2_free(kvm);
+
 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
 		if (kvm->vcpus[i]) {
 			kvm_arch_vcpu_free(kvm->vcpus[i]);
@@ -926,8 +928,10 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	 * Ensure a rebooted VM will fault in RAM pages and detect if the
 	 * guest MMU is turned off and flush the caches as needed.
 	 */
-	if (vcpu->arch.has_run_once)
+	if (vcpu->arch.has_run_once) {
 		stage2_unmap_vm(vcpu->kvm);
+		kvm_nested_s2_clear(vcpu->kvm);
+	}
 
 	vcpu_reset_hcr(vcpu);
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index ca10799..3143f81 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -434,6 +434,8 @@ static void stage2_flush_vm(struct kvm *kvm)
 	kvm_for_each_memslot(memslot, slots)
 		stage2_flush_memslot(&kvm->arch.mmu, memslot);
 
+	kvm_nested_s2_flush(kvm);
+
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);
 }
@@ -1268,6 +1270,7 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
 
 	spin_lock(&kvm->mmu_lock);
 	kvm_stage2_wp_range(kvm, &kvm->arch.mmu, start, end);
+	kvm_nested_s2_wp(kvm);
 	spin_unlock(&kvm->mmu_lock);
 	kvm_flush_remote_tlbs(kvm);
 }
@@ -1306,6 +1309,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
+	kvm_nested_s2_wp(kvm);
 }
 
 static void coherent_cache_guest_page(struct kvm_vcpu *vcpu, kvm_pfn_t pfn,
@@ -1643,6 +1647,7 @@ static int handle_hva_to_gpa(struct kvm *kvm,
 static int kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 {
 	kvm_unmap_stage2_range(kvm, &kvm->arch.mmu, gpa, size);
+	kvm_nested_s2_clear(kvm);
 	return 0;
 }
 
@@ -1682,6 +1687,7 @@ static int kvm_set_spte_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 	 * through this calling path.
 	 */
 	stage2_set_pte(&kvm->arch.mmu, NULL, gpa, pte, 0);
+	kvm_nested_s2_clear(kvm);
 	return 0;
 }
 
@@ -1716,6 +1722,11 @@ static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 	if (pte_none(*pte))
 		return 0;
 
+	/*
+	 * TODO: Handle nested_mmu structures here using the reverse mapping in
+	 * a later version of the patch series.
+	 */
+
 	return stage2_ptep_test_and_clear_young(pte);
 }
 
@@ -1736,6 +1747,11 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *data)
 	if (!pte_none(*pte))	/* Just a page... */
 		return pte_young(*pte);
 
+	/*
+	 * TODO: Handle nested_mmu structures here using the reverse mapping in
+	 * a later version of the patch series.
+	 */
+
 	return 0;
 }
 
@@ -1992,6 +2008,7 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 
 	spin_lock(&kvm->mmu_lock);
 	kvm_unmap_stage2_range(kvm, &kvm->arch.mmu, gpa, size);
+	kvm_nested_s2_clear(kvm);
 	spin_unlock(&kvm->mmu_lock);
 }