From patchwork Wed Jan 29 17:23:10 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 860694
Date: Wed, 29 Jan 2025 17:23:10 +0000
In-Reply-To: <20250129172320.950523-1-tabba@google.com>
Message-ID: <20250129172320.950523-2-tabba@google.com>
Subject: [RFC PATCH v2 01/11] mm: Consolidate freeing of typed folios on final folio_put()
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
    vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
    mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
    wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
    jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
    jthoughton@google.com, tabba@google.com

Some folio types, such as hugetlb, handle freeing their own folios. Moreover,
guest_memfd will need to be notified when a folio's reference count reaches 0,
so that it can perform shared-to-private conversion without the folio actually
being freed at that point.

As a first step towards that, consolidate the freeing of typed folios into a
single helper. Hugetlb folios are the first user; later in this series,
guest_memfd becomes the second.
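Since the commit message anticipates guest_memfd as a second typed-folio user,
a rough sketch of how such a case could slot into the new dispatch helper might
look like the following. PGTY_guestmem and kvm_gmem_handle_folio_put() are
hypothetical names used only for illustration; they are not part of this patch.

	static void free_typed_folio(struct folio *folio)
	{
		switch (folio_get_type(folio)) {
		case PGTY_hugetlb:
			if (IS_ENABLED(CONFIG_HUGETLBFS))
				free_huge_folio(folio);
			return;
		case PGTY_guestmem:	/* hypothetical second user */
			if (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
				kvm_gmem_handle_folio_put(folio);
			return;
		default:
			WARN_ON_ONCE(1);
		}
	}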
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/page-flags.h | 15 +++++++++++++++
 mm/swap.c                  | 22 +++++++++++++++++-----
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 691506bdf2c5..6615f2f59144 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -962,6 +962,21 @@ static inline bool page_has_type(const struct page *page)
 	return page_mapcount_is_type(data_race(page->page_type));
 }
 
+static inline int page_get_type(const struct page *page)
+{
+	return page->page_type >> 24;
+}
+
+static inline bool folio_has_type(const struct folio *folio)
+{
+	return page_has_type(&folio->page);
+}
+
+static inline int folio_get_type(const struct folio *folio)
+{
+	return page_get_type(&folio->page);
+}
+
 #define FOLIO_TYPE_OPS(lname, fname)					\
 static __always_inline bool folio_test_##fname(const struct folio *folio) \
 {									\
diff --git a/mm/swap.c b/mm/swap.c
index 10decd9dffa1..8a66cd9cb9da 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -94,6 +94,18 @@ static void page_cache_release(struct folio *folio)
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 }
 
+static void free_typed_folio(struct folio *folio)
+{
+	switch (folio_get_type(folio)) {
+	case PGTY_hugetlb:
+		if (IS_ENABLED(CONFIG_HUGETLBFS))
+			free_huge_folio(folio);
+		return;
+	default:
+		WARN_ON_ONCE(1);
+	}
+}
+
 void __folio_put(struct folio *folio)
 {
 	if (unlikely(folio_is_zone_device(folio))) {
@@ -101,8 +113,8 @@ void __folio_put(struct folio *folio)
 		return;
 	}
 
-	if (folio_test_hugetlb(folio)) {
-		free_huge_folio(folio);
+	if (unlikely(folio_has_type(folio))) {
+		free_typed_folio(folio);
 		return;
 	}
 
@@ -934,13 +946,13 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 		if (!folio_ref_sub_and_test(folio, nr_refs))
 			continue;
 
-		/* hugetlb has its own memcg */
-		if (folio_test_hugetlb(folio)) {
+		if (unlikely(folio_has_type(folio))) {
+			/* typed folios have their own memcg, if any */
 			if (lruvec) {
 				unlock_page_lruvec_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
-			free_huge_folio(folio);
+			free_typed_folio(folio);
 			continue;
 		}
 		folio_unqueue_deferred_split(folio);

From patchwork Wed Jan 29 17:23:12 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 860693
Date: Wed, 29 Jan 2025 17:23:12 +0000
In-Reply-To: <20250129172320.950523-1-tabba@google.com>
Message-ID: <20250129172320.950523-4-tabba@google.com>
Subject: [RFC PATCH v2 03/11] KVM: guest_memfd: Allow host to map guest_memfd() pages
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Add support for mmap() and fault() of guest_memfd backed memory in the host,
for VMs that support in-place conversion between shared and private memory.
To that end, add the ability to check whether the VM type has that support,
and only allow mapping guest_memfd memory if it does. This behavior is gated
behind a new configuration option, CONFIG_KVM_GMEM_SHARED_MEM.
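For context, the flow this enables from userspace looks roughly like the sketch
below: create a guest_memfd with the existing KVM_CREATE_GUEST_MEMFD ioctl, then
mmap() it. This is illustrative only; it assumes a VM type for which
kvm_arch_gmem_supports_shared_mem() returns true (such as the software-protected
types handled later in this series) and omits error handling.

	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <linux/kvm.h>

	static void *map_guest_memfd(int vm_fd, size_t size)
	{
		struct kvm_create_guest_memfd gmem = {
			.size = size,
			.flags = 0,
		};
		/* KVM_CREATE_GUEST_MEMFD returns a new guest_memfd file descriptor. */
		int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

		/* With this patch, the mapping must be MAP_SHARED or mmap() fails. */
		return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			    gmem_fd, 0);
	}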
Signed-off-by: Fuad Tabba <tabba@google.com>
---

This patch series allows shared memory support for software VMs on x86. It
also introduces a similar VM type for arm64 and allows shared memory support
for it. In the future, pKVM will also support shared memory.

---
 include/linux/kvm_host.h | 11 ++++++
 virt/kvm/Kconfig         |  4 +++
 virt/kvm/guest_memfd.c   | 77 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 401439bb21e3..408429f13bf4 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -717,6 +717,17 @@ static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
 }
 #endif
 
+/*
+ * Arch code must define kvm_arch_gmem_supports_shared_mem if support for
+ * private memory is enabled and it supports in-place shared/private conversion.
+ */
+#if !defined(kvm_arch_gmem_supports_shared_mem) && !IS_ENABLED(CONFIG_KVM_PRIVATE_MEM)
+static inline bool kvm_arch_gmem_supports_shared_mem(struct kvm *kvm)
+{
+	return false;
+}
+#endif
+
 #ifndef kvm_arch_has_readonly_mem
 static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
 {
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 54e959e7d68f..4e759e8020c5 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -124,3 +124,7 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
 config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
 	depends on KVM_PRIVATE_MEM
+
+config KVM_GMEM_SHARED_MEM
+	select KVM_PRIVATE_MEM
+	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 47a9f68f7b24..86441581c9ae 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -307,7 +307,84 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
 	return gfn - slot->base_gfn + slot->gmem.pgoff;
 }
 
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
+{
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct folio *folio;
+	vm_fault_t ret = VM_FAULT_LOCKED;
+
+	filemap_invalidate_lock_shared(inode->i_mapping);
+
+	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
+	if (IS_ERR(folio)) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_filemap;
+	}
+
+	if (folio_test_hwpoison(folio)) {
+		ret = VM_FAULT_HWPOISON;
+		goto out_folio;
+	}
+
+	if (WARN_ON_ONCE(folio_test_guestmem(folio))) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_folio;
+	}
+
+	/* No support for huge pages. */
+	if (WARN_ON_ONCE(folio_nr_pages(folio) > 1)) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_folio;
+	}
+
+	if (!folio_test_uptodate(folio)) {
+		clear_highpage(folio_page(folio, 0));
+		folio_mark_uptodate(folio);
+	}
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
+
+out_folio:
+	if (ret != VM_FAULT_LOCKED) {
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+out_filemap:
+	filemap_invalidate_unlock_shared(inode->i_mapping);
+
+	return ret;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+	.fault = kvm_gmem_fault,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct kvm_gmem *gmem = file->private_data;
+
+	if (!kvm_arch_gmem_supports_shared_mem(gmem->kvm))
+		return -ENODEV;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE)) {
+		return -EINVAL;
+	}
+
+	file_accessed(file);
+	vm_flags_set(vma, VM_DONTDUMP);
+	vma->vm_ops = &kvm_gmem_vm_ops;
+
+	return 0;
+}
+#else
+#define kvm_gmem_mmap NULL
+#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
+
 static struct file_operations kvm_gmem_fops = {
+	.mmap		= kvm_gmem_mmap,
 	.open		= generic_file_open,
 	.release	= kvm_gmem_release,
 	.fallocate	= kvm_gmem_fallocate,

From patchwork Wed Jan 29 17:23:14 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 860692
Date: Wed, 29 Jan 2025 17:23:14 +0000
In-Reply-To: <20250129172320.950523-1-tabba@google.com>
Message-ID: <20250129172320.950523-6-tabba@google.com>
Subject: [RFC PATCH v2 05/11] KVM: guest_memfd: Handle in-place shared memory as guest_memfd backed memory
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

For VMs that allow sharing guest_memfd backed memory in-place, handle that
memory the same as "private" guest_memfd memory: faulting it in, whether from
the host or from the guest, goes through the guest_memfd subsystem.

Note that the word "private" in the name of kvm_mem_is_private() doesn't
necessarily mean that the memory isn't shared; the name is a product of the
history and evolution of guest_memfd and the various names it has received.
In effect, the function multiplexes between the normal page-fault path and
the guest_memfd backed page-fault path.
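To make the multiplexing concrete, a rough, illustrative sketch of how a fault
path can key off kvm_mem_is_private() is shown below. The two faultin_via_*()
helpers are hypothetical stand-ins for the arch-specific paths, not functions
from this series.

	static int faultin_pfn_sketch(struct kvm *kvm, struct kvm_memory_slot *slot,
				      gfn_t gfn)
	{
		/*
		 * For VM types where guest_memfd also backs shared memory,
		 * kvm_mem_is_private() now returns true for any gfn in a slot
		 * that can be backed by guest_memfd, so all such faults take
		 * the guest_memfd path.
		 */
		if (kvm_mem_is_private(kvm, gfn))
			return faultin_via_guest_memfd(kvm, slot, gfn);

		/* Otherwise, fall back to the normal, VMA-based fault path. */
		return faultin_via_gup(kvm, slot, gfn);
	}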
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/kvm_host.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 408429f13bf4..e57cdf4e3f3f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2503,7 +2503,8 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 #else
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
-	return false;
+	return kvm_arch_gmem_supports_shared_mem(kvm) &&
+	       kvm_slot_can_be_private(gfn_to_memslot(kvm, gfn));
 }
 #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */

From patchwork Wed Jan 29 17:23:16 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 860691
Date: Wed, 29 Jan 2025 17:23:16 +0000
In-Reply-To: <20250129172320.950523-1-tabba@google.com>
Message-ID: <20250129172320.950523-8-tabba@google.com>
Subject: [RFC PATCH v2 07/11] KVM: arm64: Refactor user_mem_abort() calculation of force_pte
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

To simplify the code and make the assumptions clearer, refactor
user_mem_abort() so that force_pte is set to true as soon as logging_active
is known to be true. Also, add a check that enforces the existing assumption
that logging_active is never true for a VM_PFNMAP memslot.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..1ec362d0d093 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1436,7 +1436,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  bool fault_is_perm)
 {
 	int ret = 0;
-	bool write_fault, writable, force_pte = false;
+	bool write_fault, writable;
 	bool exec_fault, mte_allowed;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
@@ -1448,6 +1448,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
+	bool force_pte = logging_active;
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
@@ -1493,12 +1494,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
 	 */
-	if (logging_active) {
-		force_pte = true;
+	if (WARN_ON_ONCE(logging_active && (vma->vm_flags & VM_PFNMAP)))
+		return -EFAULT;
+
+	if (force_pte)
 		vma_shift = PAGE_SHIFT;
-	} else {
+	else
 		vma_shift = get_vma_page_shift(vma, hva);
-	}
 
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED

From patchwork Wed Jan 29 17:23:18 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 860690
Date: Wed, 29 Jan 2025 17:23:18 +0000
In-Reply-To: <20250129172320.950523-1-tabba@google.com>
Message-ID: <20250129172320.950523-10-tabba@google.com>
Subject: [RFC PATCH v2 09/11] KVM: arm64: Introduce KVM_VM_TYPE_ARM_SW_PROTECTED machine type
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Introduce a new virtual machine type, KVM_VM_TYPE_ARM_SW_PROTECTED, to serve
as a development and testing vehicle for Confidential (CoCo) VMs, similar to
the x86 KVM_X86_SW_PROTECTED_VM type. Initially, this is used to test
guest_memfd without needing any underlying protection.

Similar to the x86 type, this is currently only for development and testing.
Do not use KVM_VM_TYPE_ARM_SW_PROTECTED for "real" VMs, and especially not in
production. The behavior and effective ABI for software-protected VMs is
unstable.
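For reference, creating such a VM from userspace would look roughly like the
sketch below. This is illustrative only: it assumes the host supports
specifying a 40-bit IPA size, and it omits error handling. The new type bit can
be combined with the IPA-size field because both live in the machine-type
argument of KVM_CREATE_VM.

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int create_sw_protected_vm(void)
	{
		int kvm_fd = open("/dev/kvm", O_RDWR);
		unsigned long type = KVM_VM_TYPE_ARM_SW_PROTECTED |
				     KVM_VM_TYPE_ARM_IPA_SIZE(40);

		/* Bits outside KVM_VM_TYPE_MASK are now rejected with -EINVAL. */
		return ioctl(kvm_fd, KVM_CREATE_VM, type);
	}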
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 Documentation/virt/kvm/api.rst    |  5 +++++
 arch/arm64/include/asm/kvm_host.h | 10 ++++++++++
 arch/arm64/kvm/arm.c              |  5 +++++
 arch/arm64/kvm/mmu.c              |  3 ---
 include/uapi/linux/kvm.h          |  6 ++++++
 5 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index f15b61317aad..7953b07c8c2b 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -214,6 +214,11 @@ exposed by the guest CPUs in ID_AA64MMFR0_EL1[PARange]. It only affects
 size of the address translated by the stage2 level (guest physical to
 host physical address translations).
 
+KVM_VM_TYPE_ARM_SW_PROTECTED is currently only for development and testing of
+confidential VMs without having underlying support. Do not use
+KVM_VM_TYPE_ARM_SW_PROTECTED for "real" VMs, and especially not in production.
+The behavior and effective ABI for software-protected VMs is unstable.
+
 4.3 KVM_GET_MSR_INDEX_LIST, KVM_GET_MSR_FEATURE_INDEX_LIST
 ----------------------------------------------------------
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e18e9244d17a..e8a0db2ac4fa 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -380,6 +380,8 @@ struct kvm_arch {
 	 * the associated pKVM instance in the hypervisor.
 	 */
 	struct kvm_protected_vm pkvm;
+
+	unsigned long vm_type;
 };
 
 struct kvm_vcpu_fault_info {
@@ -1529,4 +1531,12 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
 #define kvm_has_s1poe(k)				\
 	(kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP))
 
+#define kvm_arch_has_private_mem(kvm)			\
+	(IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) &&		\
+	 ((kvm)->arch.vm_type & KVM_VM_TYPE_ARM_SW_PROTECTED))
+
+#define kvm_arch_gmem_supports_shared_mem(kvm)		\
+	(IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) &&	\
+	 ((kvm)->arch.vm_type & KVM_VM_TYPE_ARM_SW_PROTECTED))
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..ecdb8db619d8 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -171,6 +171,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
 	int ret;
 
+	if (type & ~KVM_VM_TYPE_MASK)
+		return -EINVAL;
+
 	mutex_init(&kvm->arch.config_lock);
 
 #ifdef CONFIG_LOCKDEP
@@ -212,6 +215,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 
 	bitmap_zero(kvm->arch.vcpu_features, KVM_VCPU_MAX_FEATURES);
 
+	kvm->arch.vm_type = type;
+
 	return 0;
 
 err_free_cpumask:
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c1f3ddb88cb9..8e19248533f1 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -869,9 +869,6 @@ static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long type)
 	u64 mmfr0, mmfr1;
 	u32 phys_shift;
 
-	if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
-		return -EINVAL;
-
 	phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type);
 	if (is_protected_kvm_enabled()) {
 		phys_shift = kvm_ipa_limit;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 3ac805c5abf1..a3973d2b1a69 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -656,6 +656,12 @@ struct kvm_enable_cap {
 #define KVM_VM_TYPE_ARM_IPA_SIZE_MASK	0xffULL
 #define KVM_VM_TYPE_ARM_IPA_SIZE(x)		\
 	((x) & KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
+
+#define KVM_VM_TYPE_ARM_SW_PROTECTED	(1UL << 9)
+
+#define KVM_VM_TYPE_MASK	(KVM_VM_TYPE_ARM_IPA_SIZE_MASK | \
+				 KVM_VM_TYPE_ARM_SW_PROTECTED)
+
 /*
  * ioctls for /dev/kvm fds:
  */

From patchwork Wed Jan 29 17:23:20 2025
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 860689
Date: Wed, 29 Jan 2025 17:23:20 +0000
In-Reply-To: <20250129172320.950523-1-tabba@google.com>
Message-ID: <20250129172320.950523-12-tabba@google.com>
Subject: [RFC PATCH v2 11/11] KVM: guest_memfd: selftests: guest_memfd mmap() test when mapping is allowed
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Expand the guest_memfd selftests to cover mapping guest memory from the host
for VM types that support it. Also, build the guest_memfd selftest for
aarch64.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 tools/testing/selftests/kvm/Makefile          |  1 +
 .../testing/selftests/kvm/guest_memfd_test.c  | 75 +++++++++++++++++--
 tools/testing/selftests/kvm/lib/kvm_util.c    |  3 +-
 3 files changed, 71 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 41593d2e7de9..c998eb3c3b77 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -174,6 +174,7 @@ TEST_GEN_PROGS_aarch64 += coalesced_io_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
 TEST_GEN_PROGS_aarch64 += dirty_log_perf_test
+TEST_GEN_PROGS_aarch64 += guest_memfd_test
 TEST_GEN_PROGS_aarch64 += guest_print_test
 TEST_GEN_PROGS_aarch64 += get-reg-list
 TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..f1e89f72b89f 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -34,12 +34,48 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_allowed(int fd, size_t total_size)
 {
+	size_t page_size = getpagesize();
+	const char val = 0xaa;
+	char *mem;
+	int ret;
+	int i;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap should succeed");
+}
+
+static void test_mmap_denied(int fd, size_t total_size)
+{
+	size_t page_size = getpagesize();
 	char *mem;
 
 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -170,19 +206,30 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }
 
-int main(int argc, char *argv[])
+unsigned long get_shared_type(void)
 {
-	size_t page_size;
+#ifdef __x86_64__
+	return KVM_X86_SW_PROTECTED_VM;
+#endif
+#ifdef __aarch64__
+	return KVM_VM_TYPE_ARM_SW_PROTECTED;
+#endif
+	return 0;
+}
+
+void test_vm_type(unsigned long type, bool is_shared)
+{
+	struct kvm_vm *vm;
 	size_t total_size;
+	size_t page_size;
 	int fd;
-	struct kvm_vm *vm;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
 
 	page_size = getpagesize();
 	total_size = page_size * 4;
 
-	vm = vm_create_barebones();
+	vm = vm_create_barebones_type(type);
 
 	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
@@ -190,10 +237,26 @@ int main(int argc, char *argv[])
 	fd = vm_create_guest_memfd(vm, total_size, 0);
 
 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+
+	if (is_shared)
+		test_mmap_allowed(fd, total_size);
+	else
+		test_mmap_denied(fd, total_size);
+
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);
 
 	close(fd);
+	kvm_vm_release(vm);
+}
+
+int main(int argc, char *argv[])
+{
+	test_vm_type(VM_TYPE_DEFAULT, false);
+
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+		test_vm_type(get_shared_type(), true);
+
+	return 0;
 }
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 480e3a40d197..098ea04ec099 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -347,9 +347,8 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
 	}
 
 #ifdef __aarch64__
-	TEST_ASSERT(!vm->type, "ARM doesn't support test-provided types");
 	if (vm->pa_bits != 40)
-		vm->type = KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
+		vm->type |= KVM_VM_TYPE_ARM_IPA_SIZE(vm->pa_bits);
 #endif
 
 	vm_open(vm);