From patchwork Tue May 27 18:02:39 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 892752
Date: Tue, 27 May 2025 19:02:39 +0100
In-Reply-To: <20250527180245.1413463-1-tabba@google.com>
References: <20250527180245.1413463-1-tabba@google.com>
Message-ID: <20250527180245.1413463-11-tabba@google.com>
Subject: [PATCH v10 10/16] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
 vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
 david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
 liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
 steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
 quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
 quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com,
 james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev,
 maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com,
 roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
 rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
 jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
 ira.weiny@intel.com, tabba@google.com

From: Ackerley Tng <ackerleytng@google.com>

For memslots backed by guest_memfd with shared memory support, the KVM
MMU always faults in pages from guest_memfd, never from the
userspace_addr.

Function names have also been updated for accuracy: kvm_mem_is_private()
returns true only when the current private/shared state (in the CoCo
sense) of the memory is private, and returns false if the current state
is explicitly or implicitly shared, e.g., when the memory belongs to a
non-CoCo VM. kvm_mmu_faultin_pfn_private() is renamed to
kvm_mmu_faultin_pfn_gmem() to indicate that it can be used to fault in
not just private memory but, more generally, memory from guest_memfd.
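As a rough sketch of the resulting fault routing (illustrative only, not
part of the patch; fault_from_gmem() is the real helper added in the diff
below, and kvm_gmem_memslot_supports_shared() comes from earlier in this
series):

  #include <stdbool.h>

  /*
   * Illustrative model of the routing in __kvm_mmu_faultin_pfn() after
   * this patch: private faults, and any fault on a slot whose
   * guest_memfd supports shared memory, are served from guest_memfd;
   * only the remaining shared faults go through GUP on userspace_addr.
   */
  enum fault_source { FROM_GMEM, FROM_USERSPACE_ADDR };

  static enum fault_source pick_fault_source(bool is_private,
                                             bool slot_supports_shared_gmem)
  {
          if (is_private || slot_supports_shared_gmem)
                  return FROM_GMEM;          /* kvm_mmu_faultin_pfn_gmem() */
          return FROM_USERSPACE_ADDR;        /* __kvm_faultin_pfn(), GUP */
  }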
Co-developed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 arch/x86/kvm/mmu/mmu.c   | 38 +++++++++++++++++++++++---------------
 include/linux/kvm_host.h | 25 +++++++++++++++++++++++--
 2 files changed, 46 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2b6376986f96..5b7df2905aa9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3289,6 +3289,11 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
 }
 
+static inline bool fault_from_gmem(struct kvm_page_fault *fault)
+{
+	return fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot);
+}
+
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -4465,21 +4470,25 @@ static inline u8 kvm_max_level_for_order(int order)
 	return PG_LEVEL_4K;
 }
 
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
-					u8 max_level, int gmem_order)
+static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
+					    struct kvm_page_fault *fault,
+					    int order)
 {
-	u8 req_max_level;
+	u8 max_level = fault->max_level;
 
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	max_level = min(kvm_max_level_for_order(gmem_order), max_level);
+	max_level = min(kvm_max_level_for_order(order), max_level);
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn);
-	if (req_max_level)
-		max_level = min(max_level, req_max_level);
+	if (fault->is_private) {
+		u8 level = kvm_x86_call(private_max_mapping_level)(kvm, fault->pfn);
+
+		if (level)
+			max_level = min(max_level, level);
+	}
 
 	return max_level;
 }
@@ -4491,10 +4500,10 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 			      r == RET_PF_RETRY, fault->map_writable);
 }
 
-static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
-				       struct kvm_page_fault *fault)
+static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu,
+				    struct kvm_page_fault *fault)
 {
-	int max_order, r;
+	int gmem_order, r;
 
 	if (!kvm_slot_has_gmem(fault->slot)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
@@ -4502,15 +4511,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
-			     &fault->refcounted_page, &max_order);
+			     &fault->refcounted_page, &gmem_order);
 	if (r) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return r;
 	}
 
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
-	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
-							 fault->max_level, max_order);
+	fault->max_level = kvm_max_level_for_fault_and_order(vcpu->kvm, fault, gmem_order);
 
 	return RET_PF_CONTINUE;
 }
@@ -4520,8 +4528,8 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 {
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
 
-	if (fault->is_private)
-		return kvm_mmu_faultin_pfn_private(vcpu, fault);
+	if (fault_from_gmem(fault))
+		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
 
 	foll |= FOLL_NOWAIT;
 	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index edb3795a64b9..b1786ef6d8ea 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2524,10 +2524,31 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 					 struct kvm_gfn_range *range);
 
+/*
+ * Returns true if the given gfn's private/shared status (in the CoCo sense) is
+ * private.
+ *
+ * A return value of false indicates that the gfn is explicitly or implicitly
+ * shared (i.e., non-CoCo VMs).
+ */
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
-	return IS_ENABLED(CONFIG_KVM_GMEM) &&
-	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
+	struct kvm_memory_slot *slot;
+
+	if (!IS_ENABLED(CONFIG_KVM_GMEM))
+		return false;
+
+	slot = gfn_to_memslot(kvm, gfn);
+	if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) {
+		/*
+		 * Without in-place conversion support, if a guest_memfd memslot
+		 * supports shared memory, then all the slot's memory is
+		 * considered not private, i.e., implicitly shared.
+		 */
+		return false;
+	}
+
+	return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
 }
 #else
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
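
For reference, a standalone sketch of the level clamping that
kvm_max_level_for_fault_and_order() performs with the order returned by
kvm_gmem_get_pfn(). This is illustrative userspace code, not kernel code:
the helper names are made up, the numbers assume the x86 layout where an
order-9 folio backs a 2M mapping and an order-18 folio backs a 1G
mapping, and the extra private_max_mapping_level() clamp applied to
private faults is omitted:

  #include <stdio.h>

  enum { PG_LEVEL_4K = 1, PG_LEVEL_2M = 2, PG_LEVEL_1G = 3 };

  /*
   * Order -> level, mirroring kvm_max_level_for_order() on x86, where
   * an order-9 folio (512 * 4K) spans 2M and an order-18 folio spans 1G.
   */
  static int level_for_order(int order)
  {
          if (order >= 18)
                  return PG_LEVEL_1G;
          if (order >= 9)
                  return PG_LEVEL_2M;
          return PG_LEVEL_4K;
  }

  /* The gmem allocation order can only lower the fault's max_level. */
  static int clamp_level(int fault_max_level, int gmem_order)
  {
          int order_level = level_for_order(gmem_order);

          return order_level < fault_max_level ? order_level : fault_max_level;
  }

  int main(void)
  {
          /* A 1G-capable fault backed by an order-9 folio maps at 2M. */
          printf("%d\n", clamp_level(PG_LEVEL_1G, 9));   /* prints 2 */
          /* An order-0 (4K) folio forces a 4K mapping. */
          printf("%d\n", clamp_level(PG_LEVEL_1G, 0));   /* prints 1 */
          return 0;
  }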