From patchwork Tue May 27 18:02:31 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 892756
Subject: [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to
 CONFIG_KVM_GENERIC_GMEM_POPULATE
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
 vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
 david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
 liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
 steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
 quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
 quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com,
 james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev,
 maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com,
 roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
 rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
 jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
 ira.weiny@intel.com, tabba@google.com
Date: Tue, 27 May 2025 19:02:31 +0100
Message-ID: <20250527180245.1413463-3-tabba@google.com>
In-Reply-To: <20250527180245.1413463-1-tabba@google.com>

The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range
with guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its
purpose clearer.

Reviewed-by: Gavin Shan
Reviewed-by: Ira Weiny
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/Kconfig     | 4 ++--
 include/linux/kvm_host.h | 2 +-
 virt/kvm/Kconfig         | 2 +-
 virt/kvm/guest_memfd.c   | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fe8ea8c097de..b37258253543 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -46,7 +46,7 @@ config KVM_X86
 	select HAVE_KVM_PM_NOTIFIER if PM
 	select KVM_GENERIC_HARDWARE_ENABLING
 	select KVM_GENERIC_PRE_FAULT_MEMORY
-	select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
+	select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM
 	select KVM_WERROR if WERROR

 config KVM
@@ -145,7 +145,7 @@ config KVM_AMD_SEV
 	depends on KVM_AMD && X86_64
 	depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
 	select ARCH_HAS_CC_PLATFORM
-	select KVM_GENERIC_PRIVATE_MEM
+	select KVM_GENERIC_GMEM_POPULATE
 	select HAVE_KVM_ARCH_GMEM_PREPARE
 	select HAVE_KVM_ARCH_GMEM_INVALIDATE
 	help
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d6900995725d..7ca23837fa52 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2533,7 +2533,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
 #endif

-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 /**
  * kvm_gmem_populate() - Populate/prepare a GPA range with guest data
  *
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 49df4e32bff7..559c93ad90be 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -116,7 +116,7 @@ config KVM_GMEM
 	select XARRAY_MULTI
 	bool

-config KVM_GENERIC_PRIVATE_MEM
+config KVM_GENERIC_GMEM_POPULATE
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
 	select KVM_GMEM
 	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b2aa6bf24d3a..befea51bbc75 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -638,7 +638,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);

-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
 {
From patchwork Tue May 27 18:02:33 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 892755
Subject: [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to
 kvm->arch.supports_gmem
From: Fuad Tabba
Date: Tue, 27 May 2025 19:02:33 +0100
Message-ID: <20250527180245.1413463-5-tabba@google.com>
In-Reply-To: <20250527180245.1413463-1-tabba@google.com>
The bool has_private_mem is used to indicate whether guest_memfd is
supported. Rename it to supports_gmem to make its meaning clearer and to
decouple memory being private from guest_memfd.

Reviewed-by: Gavin Shan
Reviewed-by: Ira Weiny
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/svm/svm.c          | 4 ++--
 arch/x86/kvm/x86.c              | 3 +--
 4 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4a83fbae7056..709cc2a7ba66 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1331,7 +1331,7 @@ struct kvm_arch {
 	unsigned int indirect_shadow_pages;
 	u8 mmu_valid_gen;
 	u8 vm_type;
-	bool has_private_mem;
+	bool supports_gmem;
 	bool has_protected_state;
 	bool pre_fault_allowed;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
@@ -2254,7 +2254,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,

 #ifdef CONFIG_KVM_GMEM
-#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
 #else
 #define kvm_arch_supports_gmem(kvm) false
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b66f1bf24e06..69bf2ef22ed0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3486,7 +3486,7 @@ static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault
 	 * on RET_PF_SPURIOUS until the update completes, or an actual spurious
 	 * case might go down the slow path. Either case will resolve itself.
 	 */
-	if (kvm->arch.has_private_mem &&
+	if (kvm->arch.supports_gmem &&
 	    fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
 		return false;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a89c271a1951..a05b7dc7b717 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5110,8 +5110,8 @@ static int svm_vm_init(struct kvm *kvm)
 			(type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
 		to_kvm_sev_info(kvm)->need_init = true;

-		kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
-		kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+		kvm->arch.supports_gmem = (type == KVM_X86_SNP_VM);
+		kvm->arch.pre_fault_allowed = !kvm->arch.supports_gmem;
 	}

 	if (!pause_filter_count || !pause_filter_thresh)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index be7bb6d20129..035ced06b2dd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12718,8 +12718,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		return -EINVAL;

 	kvm->arch.vm_type = type;
-	kvm->arch.has_private_mem =
-		(type == KVM_X86_SW_PROTECTED_VM);
+	kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
 	/* Decided by the vendor code for other VM types. */
 	kvm->arch.pre_fault_allowed =
 		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
From patchwork Tue May 27 18:02:35 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 892754
Subject: [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock
From: Fuad Tabba
Date: Tue, 27 May 2025 19:02:35 +0100
Message-ID: <20250527180245.1413463-7-tabba@google.com>
In-Reply-To: <20250527180245.1413463-1-tabba@google.com>
Fix comments so that they refer to slots_lock instead of slots_locks
(remove trailing s).

Reviewed-by: Gavin Shan
Reviewed-by: David Hildenbrand
Reviewed-by: Ira Weiny
Signed-off-by: Fuad Tabba
---
 include/linux/kvm_host.h | 2 +-
 virt/kvm/kvm_main.c      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d9616ee6acc7..ae70e4e19700 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -859,7 +859,7 @@ struct kvm {
 	struct notifier_block pm_notifier;
 #endif
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
-	/* Protected by slots_locks (for writes) and RCU (for reads) */
+	/* Protected by slots_lock (for writes) and RCU (for reads) */
 	struct xarray mem_attr_array;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2468d50a9ed4..6289ea1685dd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -333,7 +333,7 @@ void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
 	 * All current use cases for flushing the TLBs for a specific memslot
 	 * are related to dirty logging, and many do the TLB flush out of
 	 * mmu_lock. The interaction between the various operations on memslot
-	 * must be serialized by slots_locks to ensure the TLB flush from one
+	 * must be serialized by slots_lock to ensure the TLB flush from one
	 * operation is observed by any other operation on the same memslot.
	 */
	lockdep_assert_held(&kvm->slots_lock);

From patchwork Tue May 27 18:02:37 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 892753
Subject: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
From: Fuad Tabba
Date: Tue, 27 May 2025 19:02:37 +0100
Message-ID: <20250527180245.1413463-9-tabba@google.com>
In-Reply-To: <20250527180245.1413463-1-tabba@google.com>

This patch enables
support for shared memory in guest_memfd, including mapping that memory at the host userspace. This support is gated by the configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a guest_memfd instance. Co-developed-by: Ackerley Tng Signed-off-by: Ackerley Tng Signed-off-by: Fuad Tabba --- arch/x86/include/asm/kvm_host.h | 10 ++++ arch/x86/kvm/x86.c | 3 +- include/linux/kvm_host.h | 13 ++++++ include/uapi/linux/kvm.h | 1 + virt/kvm/Kconfig | 5 ++ virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++ 6 files changed, 112 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 709cc2a7ba66..ce9ad4cd93c5 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level, #ifdef CONFIG_KVM_GMEM #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem) + +/* + * CoCo VMs with hardware support that use guest_memfd only for backing private + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled. 
+ */ +#define kvm_arch_supports_gmem_shared_mem(kvm) \ + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \ + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \ + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM)) #else #define kvm_arch_supports_gmem(kvm) false +#define kvm_arch_supports_gmem_shared_mem(kvm) false #endif #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 035ced06b2dd..2a02f2457c42 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) return -EINVAL; kvm->arch.vm_type = type; - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM); + kvm->arch.supports_gmem = + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM; /* Decided by the vendor code for other VM types. */ kvm->arch.pre_fault_allowed = type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 80371475818f..ba83547e62b0 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm) } #endif +/* + * Returns true if this VM supports shared mem in guest_memfd. + * + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for + * guest_memfd is enabled. 
+ */ +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM) +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm) +{ + return false; +} +#endif + #ifndef kvm_arch_has_readonly_mem static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm) { diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index b6ae8ad8934b..c2714c9d1a0e 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes { #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3) #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd) +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0) struct kvm_create_guest_memfd { __u64 size; diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 559c93ad90be..df225298ab10 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE config HAVE_KVM_ARCH_GMEM_INVALIDATE bool depends on KVM_GMEM + +config KVM_GMEM_SHARED_MEM + select KVM_GMEM + bool + prompt "Enable support for non-private (shared) memory in guest_memfd" diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 6db515833f61..5d34712f64fc 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn) return gfn - slot->base_gfn + slot->gmem.pgoff; } +static bool kvm_gmem_supports_shared(struct inode *inode) +{ + u64 flags; + + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)) + return false; + + flags = (u64)inode->i_private; + + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED; +} + + +#ifdef CONFIG_KVM_GMEM_SHARED_MEM +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf) +{ + struct inode *inode = file_inode(vmf->vma->vm_file); + struct folio *folio; + vm_fault_t ret = VM_FAULT_LOCKED; + + folio = kvm_gmem_get_folio(inode, vmf->pgoff); + if (IS_ERR(folio)) { + int err = PTR_ERR(folio); + + if (err == -EAGAIN) + 
return VM_FAULT_RETRY; + + return vmf_error(err); + } + + if (WARN_ON_ONCE(folio_test_large(folio))) { + ret = VM_FAULT_SIGBUS; + goto out_folio; + } + + if (!folio_test_uptodate(folio)) { + clear_highpage(folio_page(folio, 0)); + kvm_gmem_mark_prepared(folio); + } + + vmf->page = folio_file_page(folio, vmf->pgoff); + +out_folio: + if (ret != VM_FAULT_LOCKED) { + folio_unlock(folio); + folio_put(folio); + } + + return ret; +} + +static const struct vm_operations_struct kvm_gmem_vm_ops = { + .fault = kvm_gmem_fault_shared, +}; + +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma) +{ + if (!kvm_gmem_supports_shared(file_inode(file))) + return -ENODEV; + + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) != + (VM_SHARED | VM_MAYSHARE)) { + return -EINVAL; + } + + vma->vm_ops = &kvm_gmem_vm_ops; + + return 0; +} +#else +#define kvm_gmem_mmap NULL +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */ + static struct file_operations kvm_gmem_fops = { + .mmap = kvm_gmem_mmap, .open = generic_file_open, .release = kvm_gmem_release, .fallocate = kvm_gmem_fallocate, @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args) u64 flags = args->flags; u64 valid_flags = 0; + if (kvm_arch_supports_gmem_shared_mem(kvm)) + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED; + if (flags & ~valid_flags) return -EINVAL; @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot, offset + size > i_size_read(inode)) goto err; + if (kvm_gmem_supports_shared(inode) && + !kvm_arch_supports_gmem_shared_mem(kvm)) + goto err; + filemap_invalidate_lock(inode->i_mapping); start = offset >> PAGE_SHIFT; From patchwork Tue May 27 18:02:39 2025 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 892752
Date: Tue, 27 May 2025 19:02:39 +0100 In-Reply-To: <20250527180245.1413463-1-tabba@google.com> References: <20250527180245.1413463-1-tabba@google.com>
Message-ID: <20250527180245.1413463-11-tabba@google.com> Subject: [PATCH v10 10/16] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com From: Ackerley Tng For memslots backed by guest_memfd with shared mem support, the KVM MMU always faults in pages from guest_memfd, and not from the userspace_addr.
Function names have also been updated for accuracy - kvm_mem_is_private() returns true only when the current private/shared state (in the CoCo sense) of the memory is private, and returns false if the current state is shared explicitly or implicitly, e.g., belongs to a non-CoCo VM. kvm_mmu_faultin_pfn_gmem() is updated to indicate that it can be used to fault in not just private memory, but more generally, from guest_memfd. Co-developed-by: Fuad Tabba Signed-off-by: Fuad Tabba Co-developed-by: David Hildenbrand Signed-off-by: David Hildenbrand Signed-off-by: Ackerley Tng --- arch/x86/kvm/mmu/mmu.c | 38 +++++++++++++++++++++++--------------- include/linux/kvm_host.h | 25 +++++++++++++++++++++++-- 2 files changed, 46 insertions(+), 17 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 2b6376986f96..5b7df2905aa9 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3289,6 +3289,11 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private); } +static inline bool fault_from_gmem(struct kvm_page_fault *fault) +{ + return fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot); +} + void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { struct kvm_memory_slot *slot = fault->slot; @@ -4465,21 +4470,25 @@ static inline u8 kvm_max_level_for_order(int order) return PG_LEVEL_4K; } -static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn, - u8 max_level, int gmem_order) +static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm, + struct kvm_page_fault *fault, + int order) { - u8 req_max_level; + u8 max_level = fault->max_level; if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; - max_level = min(kvm_max_level_for_order(gmem_order), max_level); + max_level = min(kvm_max_level_for_order(order), max_level); if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; - req_max_level =
kvm_x86_call(private_max_mapping_level)(kvm, pfn); - if (req_max_level) - max_level = min(max_level, req_max_level); + if (fault->is_private) { + u8 level = kvm_x86_call(private_max_mapping_level)(kvm, fault->pfn); + + if (level) + max_level = min(max_level, level); + } return max_level; } @@ -4491,10 +4500,10 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu, r == RET_PF_RETRY, fault->map_writable); } -static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu, - struct kvm_page_fault *fault) +static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault) { - int max_order, r; + int gmem_order, r; if (!kvm_slot_has_gmem(fault->slot)) { kvm_mmu_prepare_memory_fault_exit(vcpu, fault); @@ -4502,15 +4511,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu, } r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn, - &fault->refcounted_page, &max_order); + &fault->refcounted_page, &gmem_order); if (r) { kvm_mmu_prepare_memory_fault_exit(vcpu, fault); return r; } fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY); - fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn, - fault->max_level, max_order); + fault->max_level = kvm_max_level_for_fault_and_order(vcpu->kvm, fault, gmem_order); return RET_PF_CONTINUE; } @@ -4520,8 +4528,8 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu, { unsigned int foll = fault->write ? 
FOLL_WRITE : 0; - if (fault->is_private) - return kvm_mmu_faultin_pfn_private(vcpu, fault); + if (fault_from_gmem(fault)) + return kvm_mmu_faultin_pfn_gmem(vcpu, fault); foll |= FOLL_NOWAIT; fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll, diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index edb3795a64b9..b1786ef6d8ea 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2524,10 +2524,31 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, struct kvm_gfn_range *range); +/* + * Returns true if the given gfn's private/shared status (in the CoCo sense) is + * private. + * + * A return value of false indicates that the gfn is explicitly or implicitly + * shared (i.e., non-CoCo VMs). + */ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) { - return IS_ENABLED(CONFIG_KVM_GMEM) && - kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE; + struct kvm_memory_slot *slot; + + if (!IS_ENABLED(CONFIG_KVM_GMEM)) + return false; + + slot = gfn_to_memslot(kvm, gfn); + if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) { + /* + * Without in-place conversion support, if a guest_memfd memslot + * supports shared memory, then all the slot's memory is + * considered not private, i.e., implicitly shared. 
+ */ + return false; + } + + return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE; } #else static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) From patchwork Tue May 27 18:02:41 2025 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 892751
Date: Tue, 27 May 2025 19:02:41 +0100 In-Reply-To: <20250527180245.1413463-1-tabba@google.com> References: <20250527180245.1413463-1-tabba@google.com> Message-ID: <20250527180245.1413463-13-tabba@google.com> Subject: [PATCH v10 12/16] KVM: arm64: Refactor user_mem_abort() calculation of force_pte From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
To simplify the code and to make the assumptions clearer, refactor user_mem_abort() by immediately setting force_pte to true if the conditions are met. Also, remove the comment about logging_active being guaranteed to never be true for VM_PFNMAP memslots, since it's not actually correct. No functional change intended. Reviewed-by: David Hildenbrand Signed-off-by: Fuad Tabba --- arch/arm64/kvm/mmu.c | 13 ++++--------- 1 file changed, 4 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index eeda92330ade..9865ada04a81 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1472,7 +1472,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, bool fault_is_perm) { int ret = 0; - bool write_fault, writable, force_pte = false; + bool write_fault, writable; bool exec_fault, mte_allowed; bool device = false, vfio_allow_any_uc = false; unsigned long mmu_seq; @@ -1484,6 +1484,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, gfn_t gfn; kvm_pfn_t pfn; bool logging_active = memslot_is_logging(memslot); + bool force_pte = logging_active || is_protected_kvm_enabled(); long vma_pagesize, fault_granule; enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R; struct kvm_pgtable *pgt; @@ -1536,16 +1537,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, return -EFAULT; } - /* - * logging_active is guaranteed to never be true for VM_PFNMAP - * memslots.
- */ - if (logging_active || is_protected_kvm_enabled()) { - force_pte = true; + if (force_pte) vma_shift = PAGE_SHIFT; - } else { + else vma_shift = get_vma_page_shift(vma, hva); - } switch (vma_shift) { #ifndef __PAGETABLE_PMD_FOLDED From patchwork Tue May 27 18:02:43 2025 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 892750
Date: Tue, 27 May 2025 19:02:43 +0100 In-Reply-To: <20250527180245.1413463-1-tabba@google.com> References: <20250527180245.1413463-1-tabba@google.com> Message-ID: <20250527180245.1413463-15-tabba@google.com> Subject: [PATCH v10 14/16] KVM: arm64: Enable mapping guest_memfd in arm64 From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Enable mapping guest_memfd backed memory in host userspace on arm64. For now, this applies to all arm64 VM types that use guest_memfd. In the future, new VM types can restrict this via kvm_arch_supports_gmem_shared_mem(). Signed-off-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 5 +++++ arch/arm64/kvm/Kconfig | 1 + arch/arm64/kvm/mmu.c | 7 +++++++ 3 files changed, 13 insertions(+) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 08ba91e6fb03..8add94929711 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1593,4 +1593,9 @@ static inline bool kvm_arch_has_irq_bypass(void) return true; } +#ifdef CONFIG_KVM_GMEM +#define kvm_arch_supports_gmem(kvm) true +#define kvm_arch_supports_gmem_shared_mem(kvm) IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) +#endif + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig index 096e45acadb2..8c1e1964b46a 100644 --- a/arch/arm64/kvm/Kconfig +++ b/arch/arm64/kvm/Kconfig @@ -38,6 +38,7 @@ menuconfig KVM select HAVE_KVM_VCPU_RUN_PID_CHANGE select SCHED_INFO select GUEST_PERF_EVENTS if PERF_EVENTS + select KVM_GMEM_SHARED_MEM help Support hosting virtualized guest machines. diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 896c56683d88..03da08390bf0 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -2264,6 +2264,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, if ((new->base_gfn + new->npages) > (kvm_phys_size(&kvm->arch.mmu) >> PAGE_SHIFT)) return -EFAULT; + /* + * Only support guest_memfd backed memslots with shared memory, since + * there aren't any CoCo VMs that support only private memory on arm64.
+ */ + if (kvm_slot_has_gmem(new) && !kvm_gmem_memslot_supports_shared(new)) + return -EINVAL; + hva = new->userspace_addr; reg_end = hva + (new->npages << PAGE_SHIFT); From patchwork Tue May 27 18:02:45 2025 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 892749
(2048-bit key) header.d=google.com header.i=@google.com header.b="31RMTLMK" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-441c122fa56so17920295e9.2 for ; Tue, 27 May 2025 11:03:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1748369000; x=1748973800; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=YUv/6AxiCYz3znG3EpZR9aFsyQsCRwX0eENShgHFqLA=; b=31RMTLMKEna7+tcGHIdsXXtuq8LxypDgWUyipqr66Fn1rwUXBNoDL/V8M/Cw1WuvmR V6drPLHIEzeuJ4ZcLC276tpDNxnOMBfANHC4sJjxZvCHnZ1K3IG+aavRBkDBPZuD/hOM zPyUDxp10GTxeh0sAqVh7mmuIDS5AHWuSQr7fGoEhVNckkx9NDkyXZEtfkfhCx27buG3 BCw92/7cDKy2Yvj9QHQHEKa6HM91Ex/PpwG8tgRPTPeuzPZ2twWRV1V8LvyLZ/SvumuK GmlfEXOwLoNIAtgFnqjdNtUvw4/yE+4gAlFUdsinmUwOSZzRgjHmBe59lAwjZNjjXoyq FtwA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1748369000; x=1748973800; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=YUv/6AxiCYz3znG3EpZR9aFsyQsCRwX0eENShgHFqLA=; b=wtHeGPhInoCvdWVdeHvSiRTwJFPyoBDkg2WX9ZIjewZJu/Qmua/ktH9GaXhEhCJWOl j6DQ9wHK5m4slel4YWo3io/S6B8wlWfeIVsjhHQ9HBy3qycfKO0V08Luxh3rpe+aylrT 92BxZmXCbE1ESwGf+JaXcIBmBZ8SyPgDZKmyaA589eqbBUSmwIPGCuQBog5hXSaa1kGn +grEwmg8wxkpjwnNX1B+8yVenDXVNh53LtCzxFJ79zgWYT8WuC6ci/Ftw60wir2qVX4C 4u6xkBPSUsed9Uz/ELjWvo8AvULhakoFY9+fSpnO1n5sahI3Pp2GpDtU3Gta1yXRPrg9 X5ZQ== X-Forwarded-Encrypted: i=1; AJvYcCXIzMTMEn3IODwaTgQr64mi0VvnsPcOr9xnOLUqXNk0jFRpaG4LYtg9Ev1N1J872pi1RbXEKmDgT270Rw8z@vger.kernel.org X-Gm-Message-State: AOJu0YyhqCB0Ti6eMVB35Sa6IoVRD2lL+hhEoLqV/MvqcuFjtlGc0G+r lDeXTvhOpJVyuu+9fuy7GEnAiZ+KdbtLNeRgi+3b/HExvsdDY8m0TGkgpGsYemCv4YxcYHx7SZ4 MIw== X-Google-Smtp-Source: AGHT+IHwxDDpSOhp5JOPkw7Az3zA1+GOA36Sv5kSChHXs1Dol8FyTA+PxlhBcFS+OXHG5kgq1/hOAHktgQ== X-Received: from wmsd9.prod.google.com ([2002:a05:600c:3ac9:b0:442:f482:bba5]) 
Date: Tue, 27 May 2025 19:02:45 +0100
In-Reply-To: <20250527180245.1413463-1-tabba@google.com>
References: <20250527180245.1413463-1-tabba@google.com>
Message-ID: <20250527180245.1413463-17-tabba@google.com>
Subject: [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
 vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
 david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
 liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
 steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
 quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
 quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com,
 james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev,
 maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com,
 roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
 rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
 jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
 ira.weiny@intel.com, tabba@google.com

Expand the guest_memfd selftests to include testing mapping guest
memory for VM types that support it.

Also, build the guest_memfd selftest for arm64.

Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 tools/testing/selftests/kvm/Makefile.kvm     |   1 +
 .../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
 2 files changed, 142 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f62b0a5aba35..ccf95ed037c3 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
 TEST_GEN_PROGS_arm64 += arch_timer
 TEST_GEN_PROGS_arm64 += coalesced_io_test
 TEST_GEN_PROGS_arm64 += dirty_log_perf_test
+TEST_GEN_PROGS_arm64 += guest_memfd_test
 TEST_GEN_PROGS_arm64 += get-reg-list
 TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
 TEST_GEN_PROGS_arm64 += memslot_perf_test
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..3d6765bc1f28 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
+{
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	memset(mem, val, page_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap should succeed");
+}
+
+static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
 {
 	char *mem;
 
 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
 	}
 }
 
-static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
+static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
+						  uint64_t guest_memfd_flags,
+						  size_t page_size)
 {
-	size_t page_size = getpagesize();
-	uint64_t flag;
 	size_t size;
 	int fd;
 
 	for (size = 1; size < page_size; size++) {
-		fd = __vm_create_guest_memfd(vm, size, 0);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
+		fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
+		TEST_ASSERT(fd < 0 && errno == EINVAL,
 			    "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
 			    size);
 	}
-
-	for (flag = BIT(0); flag; flag <<= 1) {
-		fd = __vm_create_guest_memfd(vm, page_size, flag);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
-			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
-			    flag);
-	}
 }
 
 static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
@@ -170,30 +197,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }
 
-int main(int argc, char *argv[])
+#define GUEST_MEMFD_TEST_SLOT 10
+#define GUEST_MEMFD_TEST_GPA 0x100000000
+
+static bool check_vm_type(unsigned long vm_type)
 {
-	size_t page_size;
+	/*
+	 * Not all architectures support KVM_CAP_VM_TYPES. However, those that
+	 * support guest_memfd have that support for the default VM type.
+	 */
+	if (vm_type == VM_TYPE_DEFAULT)
+		return true;
+
+	return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
+}
+
+static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
+			   bool expect_mmap_allowed)
+{
+	struct kvm_vm *vm;
 	size_t total_size;
+	size_t page_size;
 	int fd;
-	struct kvm_vm *vm;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+	if (!check_vm_type(vm_type))
+		return;
 
 	page_size = getpagesize();
 	total_size = page_size * 4;
 
-	vm = vm_create_barebones();
+	vm = vm_create_barebones_type(vm_type);
 
-	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
+	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
 
-	fd = vm_create_guest_memfd(vm, total_size, 0);
+	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
 
 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+
+	if (expect_mmap_allowed)
+		test_mmap_allowed(fd, page_size, total_size);
+	else
+		test_mmap_denied(fd, page_size, total_size);
+
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);
 
 	close(fd);
+	kvm_vm_release(vm);
+}
+
+static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
+					    uint64_t expected_valid_flags)
+{
+	size_t page_size = getpagesize();
+	struct kvm_vm *vm;
+	uint64_t flag = 0;
+	int fd;
+
+	if (!check_vm_type(vm_type))
+		return;
+
+	vm = vm_create_barebones_type(vm_type);
+
+	for (flag = BIT(0); flag; flag <<= 1) {
+		fd = __vm_create_guest_memfd(vm, page_size, flag);
+
+		if (flag & expected_valid_flags) {
+			TEST_ASSERT(fd >= 0,
+				    "guest_memfd() with flag '0x%lx' should be valid",
+				    flag);
+			close(fd);
+		} else {
+			TEST_ASSERT(fd < 0 && errno == EINVAL,
+				    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
+				    flag);
+		}
+	}
+
+	kvm_vm_release(vm);
+}
+
+static void test_gmem_flag_validity(void)
+{
+	uint64_t non_coco_vm_valid_flags = 0;
+
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+		non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
+	test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
+
+#ifdef __x86_64__
+	test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
+#endif
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+
+	test_gmem_flag_validity();
+
+	test_with_type(VM_TYPE_DEFAULT, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
+			       true);
+	}

+#ifdef __x86_64__
+	test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(KVM_X86_SW_PROTECTED_VM,
+			       GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
+	}
+#endif
+}