[RFC,0/4] mm: Introduce guest_memfd library

Message ID 20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com

Elliot Berman Aug. 5, 2024, 6:34 p.m. UTC
In preparation for adding more features to KVM's guest_memfd, refactor
and introduce a library which abstracts some of the core-mm decisions
about managing folios associated with the file. The refactor serves two
purposes:

1. Provide an easier way to reason about memory in guest_memfd. With KVM
supporting multiple confidentiality models (TDX, SEV-SNP, pKVM, ARM
CCA), and upcoming support for allowing kernel and userspace to access
this memory, it seems necessary to create a stronger abstraction between
core-mm concerns and hypervisor concerns.

2. Provide a common implementation for other hypervisors (Gunyah) to use.

To create a guest_memfd, the owner provides operations to attempt to
unmap the folio and check whether a folio is accessible to the host. The
owner can call guest_memfd_make_inaccessible() to ensure Linux doesn't
have the folio mapped.
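
As a rough sketch, an owner wires things up along these lines (the
guest_memfd_alloc() signature and the callbacks beyond ->release are
approximations of the diffs in patch 1, not verbatim):

static int hyp_release(struct inode *inode)
{
	/* Tear down hypervisor-side state for this guest_memfd. */
	return 0;
}

static struct guest_memfd_operations hyp_gmem_ops = {
	.release = hyp_release,
	/* plus the unmap/accessibility callbacks, see patch 1 */
};

static struct file *hyp_create_gmem(size_t size)
{
	/* GUEST_MEMFD_FLAG_NO_DIRECT_MAP is introduced in patch 3 */
	return guest_memfd_alloc("hyp-gmem", &hyp_gmem_ops, size,
				 GUEST_MEMFD_FLAG_NO_DIRECT_MAP);
}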

The series first introduces a guest_memfd library based on the current
KVM (next) implementation, then adds a few features needed for Gunyah and
arm64 pKVM. The Gunyah usage of the series will be posted separately,
shortly after this series. I'll work with Fuad on using the
guest_memfd library for arm64 pKVM based on the feedback received.

I've not yet investigated deeply whether having the guest_memfd library
helps live migration. I'd appreciate any input on that part.

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
Elliot Berman (4):
      mm: Introduce guest_memfd
      kvm: Convert to use mm/guest_memfd
      mm: guest_memfd: Add option to remove guest private memory from direct map
      mm: guest_memfd: Add ability for mmap'ing pages

 include/linux/guest_memfd.h |  59 ++++++
 mm/Kconfig                  |   3 +
 mm/Makefile                 |   1 +
 mm/guest_memfd.c            | 427 ++++++++++++++++++++++++++++++++++++++++++++
 virt/kvm/Kconfig            |   1 +
 virt/kvm/guest_memfd.c      | 299 +++++--------------------------
 virt/kvm/kvm_main.c         |   2 -
 virt/kvm/kvm_mm.h           |   6 -
 8 files changed, 539 insertions(+), 259 deletions(-)
---
base-commit: 8400291e289ee6b2bf9779ff1c83a291501f017b
change-id: 20240722-guest-memfd-lib-455f24115d46

Best regards,
Elliot

Comments

Patrick Roy Aug. 6, 2024, 3:39 p.m. UTC | #1
Hi Elliot,

On Mon, 2024-08-05 at 19:34 +0100, Elliot Berman wrote:
> This patch was reworked from Patrick's patch:
> https://lore.kernel.org/all/20240709132041.3625501-6-roypat@amazon.co.uk/

yaay :D

> While guest_memfd is not available to be mapped by userspace, it is
> still accessible through the kernel's direct map. This means that in
> scenarios where guest-private memory is not hardware protected, it can
> be speculatively read and its contents potentially leaked through
> hardware side-channels. Removing guest-private memory from the direct
> map thus mitigates a large class of speculative execution issues
> [1, Table 1].
> 
> Direct map removal does not reuse the `.prepare` machinery, since
> `prepare` can be called multiple times, and it is the responsibility of
> the preparation routine to not "prepare" the same folio twice [2]. Thus,
> instead, explicitly check if `filemap_grab_folio` allocated a new folio,
> and remove the returned folio from the direct map only if this was the
> case.

My patch did this, but you separated the PG_uptodate logic from the
direct map removal, right?

> The patch uses release_folio instead of free_folio to reinsert pages
> back into the direct map as by the time free_folio is called,
> folio->mapping can already be NULL. This means that a call to
> folio_inode inside free_folio might dereference a NULL pointer, leaving no
> way to access the inode which stores the flags that allow determining
> whether the page was removed from the direct map in the first place.

I thought release_folio was only called for folios with PG_private=1?
You choose PG_private=1 to mean "this folio is in the direct map", so it
gets called for exactly the wrong folios (more on that below, too).

> [1]: https://download.vusec.net/papers/quarantine_raid23.pdf
> 
> Cc: Patrick Roy <roypat@amazon.co.uk>
> Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
> ---
>  include/linux/guest_memfd.h |  8 ++++++
>  mm/guest_memfd.c            | 65 ++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 72 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
> index be56d9d53067..f9e4a27aed67 100644
> --- a/include/linux/guest_memfd.h
> +++ b/include/linux/guest_memfd.h
> @@ -25,6 +25,14 @@ struct guest_memfd_operations {
>         int (*release)(struct inode *inode);
>  };
> 
> +/**
> + * @GUEST_MEMFD_FLAG_NO_DIRECT_MAP: When making folios inaccessible by host, also
> + *                                  remove them from the kernel's direct map.
> + */
> +enum {
> +       GUEST_MEMFD_FLAG_NO_DIRECT_MAP          = BIT(0),
> +};
> +
>  /**
>   * @GUEST_MEMFD_GRAB_UPTODATE: Ensure pages are zeroed/up to date.
>   *                             If trusted hyp will do it, can omit this flag
> diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
> index 580138b0f9d4..e9d8cab72b28 100644
> --- a/mm/guest_memfd.c
> +++ b/mm/guest_memfd.c
> @@ -7,9 +7,55 @@
>  #include <linux/falloc.h>
>  #include <linux/guest_memfd.h>
>  #include <linux/pagemap.h>
> +#include <linux/set_memory.h>
> +
> +static inline int guest_memfd_folio_private(struct folio *folio)
> +{
> +       unsigned long nr_pages = folio_nr_pages(folio);
> +       unsigned long i;
> +       int r;
> +
> +       for (i = 0; i < nr_pages; i++) {
> +               struct page *page = folio_page(folio, i);
> +
> +               r = set_direct_map_invalid_noflush(page);
> +               if (r < 0)
> +                       goto out_remap;
> +       }
> +
> +       folio_set_private(folio);

Mh, you've inverted the semantics of PG_private in the context of gmem
here, compared to my patch. For me, PG_private=1 meant "this folio is
back in the direct map". For you it means "this folio is removed from
the direct map". 

Could you elaborate on why you require these different semantics for
PG_private? Actually, I think in this patch series, you could just drop
the PG_private stuff altogether, as the only place you do
folio_test_private is in guest_memfd_clear_private, but iirc calling
set_direct_map_default_noflush on a page that's already in the direct
map is a NOOP anyway.

On the other hand, as Paolo pointed out in my patches [1], just using a
page flag to track direct map presence for gmem is not enough. We
actually need to keep a refcount in folio->private to keep track of how
many different actors request a folio's direct map presence (in the
specific case in my patch series, it was different pfn_to_gfn_caches for
the kvm-clock structures of different vcpus, which the guest can place
into the same gfn). While this might not be a concern for the the
pKVM/Gunyah case, where the guest dictates memory state, it's required
for the non-CoCo case where KVM/userspace can set arbitrary guest gfns
to shared if it needs/wants to access them for whatever reason. So for
this we'd need to have PG_private=1 mean "direct map entry restored" (as
if PG_private=0, there is no folio->private).

[1]: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#m0608c4b6a069b3953d7ee97f48577d32688a3315

> +       return 0;
> +out_remap:
> +       for (; i > 0; i--) {
> +               struct page *page = folio_page(folio, i - 1);
> +
> +               BUG_ON(set_direct_map_default_noflush(page));
> +       }
> +       return r;
> +}
> +
> +static inline void guest_memfd_folio_clear_private(struct folio *folio)
> +{
> +       unsigned long start = (unsigned long)folio_address(folio);
> +       unsigned long nr = folio_nr_pages(folio);
> +       unsigned long i;
> +
> +       if (!folio_test_private(folio))
> +               return;
> +
> +       for (i = 0; i < nr; i++) {
> +               struct page *page = folio_page(folio, i);
> +
> +               BUG_ON(set_direct_map_default_noflush(page));
> +       }
> +       flush_tlb_kernel_range(start, start + folio_size(folio));
> +
> +       folio_clear_private(folio);
> +}
> 
>  struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
>  {
> +       unsigned long gmem_flags = (unsigned long)file->private_data;
>         struct inode *inode = file_inode(file);
>         struct guest_memfd_operations *ops = inode->i_private;
>         struct folio *folio;
> @@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
>                         goto out_err;
>         }
> 
> +       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
> +               r = guest_memfd_folio_private(folio);
> +               if (r)
> +                       goto out_err;
> +       }
> +

How does a caller of guest_memfd_grab_folio know whether a folio needs
to be removed from the direct map? E.g. how can a caller know ahead of
time whether guest_memfd_grab_folio will return a freshly allocated
folio (which thus needs to be removed from the direct map), vs a folio
that already exists and has been removed from the direct map (probably
fine to remove from direct map again), vs a folio that already exists
and is currently re-inserted into the direct map for whatever reason
(must not remove these from the direct map, as other parts of
KVM/userspace probably don't expect the direct map entries to disappear
from underneath them). I couldn't figure this one out for my series,
which is why I went with hooking into the PG_uptodate logic to always
remove direct map entries on freshly allocated folios.

>         /*
>          * Ignore accessed, referenced, and dirty flags.  The memory is
>          * unevictable and there is no storage to write back to.
> @@ -213,14 +265,25 @@ static bool gmem_release_folio(struct folio *folio, gfp_t gfp)
>         if (ops->invalidate_end)
>                 ops->invalidate_end(inode, offset, nr);
> 
> +       guest_memfd_folio_clear_private(folio);
> +
>         return true;
>  }
> 
> +static void gmem_invalidate_folio(struct folio *folio, size_t offset, size_t len)
> +{
> +       /* not yet supported */
> +       BUG_ON(offset || len != folio_size(folio));
> +
> +       BUG_ON(!gmem_release_folio(folio, 0));
> +}
> +
>  static const struct address_space_operations gmem_aops = {
>         .dirty_folio = noop_dirty_folio,
>         .migrate_folio = gmem_migrate_folio,
>         .error_remove_folio = gmem_error_folio,
>         .release_folio = gmem_release_folio,
> +       .invalidate_folio = gmem_invalidate_folio,
>  };
> 
>  static inline bool guest_memfd_check_ops(const struct guest_memfd_operations *ops)
> @@ -241,7 +304,7 @@ struct file *guest_memfd_alloc(const char *name,
>         if (!guest_memfd_check_ops(ops))
>                 return ERR_PTR(-EINVAL);
> 
> -       if (flags)
> +       if (flags & ~GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
>                 return ERR_PTR(-EINVAL);
> 
>         /*
> 
> --
> 2.34.1
> 

Best, 
Patrick
Elliot Berman Aug. 6, 2024, 8:13 p.m. UTC | #2
On Tue, Aug 06, 2024 at 04:39:24PM +0100, Patrick Roy wrote:
> 
> Hi Elliot,
> 
> On Mon, 2024-08-05 at 19:34 +0100, Elliot Berman wrote:
> > This patch was reworked from Patrick's patch:
> > https://lore.kernel.org/all/20240709132041.3625501-6-roypat@amazon.co.uk/
> 
> yaay :D
> 
> > While guest_memfd is not available to be mapped by userspace, it is
> > still accessible through the kernel's direct map. This means that in
> > scenarios where guest-private memory is not hardware protected, it can
> > be speculatively read and its contents potentially leaked through
> > hardware side-channels. Removing guest-private memory from the direct
> > map thus mitigates a large class of speculative execution issues
> > [1, Table 1].
> > 
> > Direct map removal does not reuse the `.prepare` machinery, since
> > `prepare` can be called multiple times, and it is the responsibility of
> > the preparation routine to not "prepare" the same folio twice [2]. Thus,
> > instead, explicitly check if `filemap_grab_folio` allocated a new folio,
> > and remove the returned folio from the direct map only if this was the
> > case.
> 
> My patch did this, but you separated the PG_uptodate logic from the
> direct map removal, right?
> 
> > The patch uses release_folio instead of free_folio to reinsert pages
> > back into the direct map as by the time free_folio is called,
> > folio->mapping can already be NULL. This means that a call to
> > folio_inode inside free_folio might dereference a NULL pointer, leaving no
> > way to access the inode which stores the flags that allow determining
> > whether the page was removed from the direct map in the first place.
> 
> I thought release_folio was only called for folios with PG_private=1?
> You choose PG_private=1 to mean "this folio is in the direct map", so it
> gets called for exactly the wrong folios (more on that below, too).
> 

PG_private=1 should mean "this folio is not in the direct map".

> > [1]: https://download.vusec.net/papers/quarantine_raid23.pdf
> > 
> > Cc: Patrick Roy <roypat@amazon.co.uk>
> > Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
> > ---
> >  include/linux/guest_memfd.h |  8 ++++++
> >  mm/guest_memfd.c            | 65 ++++++++++++++++++++++++++++++++++++++++++++-
> >  2 files changed, 72 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
> > index be56d9d53067..f9e4a27aed67 100644
> > --- a/include/linux/guest_memfd.h
> > +++ b/include/linux/guest_memfd.h
> > @@ -25,6 +25,14 @@ struct guest_memfd_operations {
> >         int (*release)(struct inode *inode);
> >  };
> > 
> > +/**
> > + * @GUEST_MEMFD_FLAG_NO_DIRECT_MAP: When making folios inaccessible by host, also
> > + *                                  remove them from the kernel's direct map.
> > + */
> > +enum {
> > +       GUEST_MEMFD_FLAG_NO_DIRECT_MAP          = BIT(0),
> > +};
> > +
> >  /**
> >   * @GUEST_MEMFD_GRAB_UPTODATE: Ensure pages are zeroed/up to date.
> >   *                             If trusted hyp will do it, can omit this flag
> > diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
> > index 580138b0f9d4..e9d8cab72b28 100644
> > --- a/mm/guest_memfd.c
> > +++ b/mm/guest_memfd.c
> > @@ -7,9 +7,55 @@
> >  #include <linux/falloc.h>
> >  #include <linux/guest_memfd.h>
> >  #include <linux/pagemap.h>
> > +#include <linux/set_memory.h>
> > +
> > +static inline int guest_memfd_folio_private(struct folio *folio)
> > +{
> > +       unsigned long nr_pages = folio_nr_pages(folio);
> > +       unsigned long i;
> > +       int r;
> > +
> > +       for (i = 0; i < nr_pages; i++) {
> > +               struct page *page = folio_page(folio, i);
> > +
> > +               r = set_direct_map_invalid_noflush(page);
> > +               if (r < 0)
> > +                       goto out_remap;
> > +       }
> > +
> > +       folio_set_private(folio);
> 
> Mh, you've inverted the semantics of PG_private in the context of gmem
> here, compared to my patch. For me, PG_private=1 meant "this folio is
> back in the direct map". For you it means "this folio is removed from
> the direct map". 
> 
> Could you elaborate on why you require these different semantics for
> PG_private? Actually, I think in this patch series, you could just drop
> the PG_private stuff altogether, as the only place you do
> folio_test_private is in guest_memfd_clear_private, but iirc calling
> set_direct_map_default_noflush on a page that's already in the direct
> map is a NOOP anyway.
> 
> On the other hand, as Paolo pointed out in my patches [1], just using a
> page flag to track direct map presence for gmem is not enough. We
> actually need to keep a refcount in folio->private to keep track of how
> many different actors request a folio's direct map presence (in the
> specific case in my patch series, it was different pfn_to_gfn_caches for
> the kvm-clock structures of different vcpus, which the guest can place
> > into the same gfn). While this might not be a concern for the
> pKVM/Gunyah case, where the guest dictates memory state, it's required
> for the non-CoCo case where KVM/userspace can set arbitrary guest gfns
> to shared if it needs/wants to access them for whatever reason. So for
> this we'd need to have PG_private=1 mean "direct map entry restored" (as
> if PG_private=0, there is no folio->private).
> 
> [1]: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#m0608c4b6a069b3953d7ee97f48577d32688a3315
> 

I wonder if we can use the folio refcount itself, assuming we can rely
on refcount == 1 meaning we can do a shared->private conversion.

In gpc_map_gmem, we convert private->shared. There's no problem here in
the non-CoCo case.

In gpc_unmap, we *try* to convert back from shared->private. If
refcount>2, then the conversion would fail. The last gpc_unmap would be
able to successfully convert back to private.

Do you see any concerns with this approach?
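
As a sketch, roughly this (helper name invented here; it assumes the
filemap's reference plus the caller's are the only expected references
when nobody else is using the folio):

static int gmem_try_shared_to_private(struct folio *folio)
{
	/* filemap ref + caller's ref == 2; anything above means another user */
	if (folio_ref_count(folio) > 2)
		return -EBUSY;

	return guest_memfd_folio_private(folio);
}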

> > +       return 0;
> > +out_remap:
> > +       for (; i > 0; i--) {
> > +               struct page *page = folio_page(folio, i - 1);
> > +
> > +               BUG_ON(set_direct_map_default_noflush(page));
> > +       }
> > +       return r;
> > +}
> > +
> > +static inline void guest_memfd_folio_clear_private(struct folio *folio)
> > +{
> > +       unsigned long start = (unsigned long)folio_address(folio);
> > +       unsigned long nr = folio_nr_pages(folio);
> > +       unsigned long i;
> > +
> > +       if (!folio_test_private(folio))
> > +               return;
> > +
> > +       for (i = 0; i < nr; i++) {
> > +               struct page *page = folio_page(folio, i);
> > +
> > +               BUG_ON(set_direct_map_default_noflush(page));
> > +       }
> > +       flush_tlb_kernel_range(start, start + folio_size(folio));
> > +
> > +       folio_clear_private(folio);
> > +}
> > 
> >  struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
> >  {
> > +       unsigned long gmem_flags = (unsigned long)file->private_data;
> >         struct inode *inode = file_inode(file);
> >         struct guest_memfd_operations *ops = inode->i_private;
> >         struct folio *folio;
> > @@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
> >                         goto out_err;
> >         }
> > 
> > +       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
> > +               r = guest_memfd_folio_private(folio);
> > +               if (r)
> > +                       goto out_err;
> > +       }
> > +
> 
> How does a caller of guest_memfd_grab_folio know whether a folio needs
> to be removed from the direct map? E.g. how can a caller know ahead of
> time whether guest_memfd_grab_folio will return a freshly allocated
> folio (which thus needs to be removed from the direct map), vs a folio
> that already exists and has been removed from the direct map (probably
> fine to remove from direct map again), vs a folio that already exists
> and is currently re-inserted into the direct map for whatever reason
> (must not remove these from the direct map, as other parts of
> KVM/userspace probably don't expect the direct map entries to disappear
> from underneath them). I couldn't figure this one out for my series,
> which is why I went with hooking into the PG_uptodate logic to always
> remove direct map entries on freshly allocated folios.
> 

gmem_flags come from the owner. If the caller (in the non-CoCo case) wants
to restore the direct map right away, it'd have to be a direct
operation. As an optimization, we could add an option that asks for the page
in "shared" state. If allocating a new page, we can return it right away
without removing it from the direct map. If grabbing an existing folio, it
would try to do the private->shared conversion.
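
As a sketch (the helper name and the new_folio bookkeeping are invented
here, not part of this series):

static struct folio *gmem_grab_shared(struct folio *folio, bool new_folio)
{
	if (new_folio)
		return folio;	/* fresh page: never left the direct map */

	/* Existing folio: convert private->shared before handing it out. */
	guest_memfd_folio_clear_private(folio);
	return folio;
}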

Thanks for the feedback, it was helpful!

- Elliot

> >         /*
> >          * Ignore accessed, referenced, and dirty flags.  The memory is
> >          * unevictable and there is no storage to write back to.
> > @@ -213,14 +265,25 @@ static bool gmem_release_folio(struct folio *folio, gfp_t gfp)
> >         if (ops->invalidate_end)
> >                 ops->invalidate_end(inode, offset, nr);
> > 
> > +       guest_memfd_folio_clear_private(folio);
> > +
> >         return true;
> >  }
> > 
> > +static void gmem_invalidate_folio(struct folio *folio, size_t offset, size_t len)
> > +{
> > +       /* not yet supported */
> > +       BUG_ON(offset || len != folio_size(folio));
> > +
> > +       BUG_ON(!gmem_release_folio(folio, 0));
> > +}
> > +
> >  static const struct address_space_operations gmem_aops = {
> >         .dirty_folio = noop_dirty_folio,
> >         .migrate_folio = gmem_migrate_folio,
> >         .error_remove_folio = gmem_error_folio,
> >         .release_folio = gmem_release_folio,
> > +       .invalidate_folio = gmem_invalidate_folio,
> >  };
> > 
> >  static inline bool guest_memfd_check_ops(const struct guest_memfd_operations *ops)
> > @@ -241,7 +304,7 @@ struct file *guest_memfd_alloc(const char *name,
> >         if (!guest_memfd_check_ops(ops))
> >                 return ERR_PTR(-EINVAL);
> > 
> > -       if (flags)
> > +       if (flags & ~GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
> >                 return ERR_PTR(-EINVAL);
> > 
> >         /*
> > 
> > --
> > 2.34.1
> > 
> 
> Best, 
> Patrick
>
Patrick Roy Aug. 7, 2024, 6:48 a.m. UTC | #3
On Tue, 2024-08-06 at 21:13 +0100, Elliot Berman wrote:
> On Tue, Aug 06, 2024 at 04:39:24PM +0100, Patrick Roy wrote:
>>
>> Hi Elliot,
>>
>> On Mon, 2024-08-05 at 19:34 +0100, Elliot Berman wrote:
>>> This patch was reworked from Patrick's patch:
>>> https://lore.kernel.org/all/20240709132041.3625501-6-roypat@amazon.co.uk/
>>
>> yaay :D
>>
>>> While guest_memfd is not available to be mapped by userspace, it is
>>> still accessible through the kernel's direct map. This means that in
>>> scenarios where guest-private memory is not hardware protected, it can
>>> be speculatively read and its contents potentially leaked through
>>> hardware side-channels. Removing guest-private memory from the direct
>>> map thus mitigates a large class of speculative execution issues
>>> [1, Table 1].
>>>
>>> Direct map removal does not reuse the `.prepare` machinery, since
>>> `prepare` can be called multiple times, and it is the responsibility of
>>> the preparation routine to not "prepare" the same folio twice [2]. Thus,
>>> instead, explicitly check if `filemap_grab_folio` allocated a new folio,
>>> and remove the returned folio from the direct map only if this was the
>>> case.
>>
>> My patch did this, but you separated the PG_uptodate logic from the
>> direct map removal, right?
>>
>>> The patch uses release_folio instead of free_folio to reinsert pages
>>> back into the direct map as by the time free_folio is called,
>>> folio->mapping can already be NULL. This means that a call to
>>> folio_inode inside free_folio might dereference a NULL pointer, leaving no
>>> way to access the inode which stores the flags that allow determining
>>> whether the page was removed from the direct map in the first place.
>>
>> I thought release_folio was only called for folios with PG_private=1?
>> You choose PG_private=1 to mean "this folio is in the direct map", so it
>> gets called for exactly the wrong folios (more on that below, too).
>>
> 
> PG_private=1 should mean "this folio is not in the direct map".

Right. I just checked my patch and it indeed means the same there. No
idea what I was on about yesterday. I think I only had Paolo's comment
about using folio->private for refcounting sharings in mind, so I
thought "to use folio->private, you need PG_private=1, therefore
PG_private=1 means shared" (I just checked, and while
folio_attach_private causes PG_private=1 to be set, page_set_private
does not). Obviously my comments below and especially here on PG_private
were nonsense. Sorry about that!

>>> [1]: https://download.vusec.net/papers/quarantine_raid23.pdf
>>>
>>> Cc: Patrick Roy <roypat@amazon.co.uk>
>>> Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
>>> ---
>>>  include/linux/guest_memfd.h |  8 ++++++
>>>  mm/guest_memfd.c            | 65 ++++++++++++++++++++++++++++++++++++++++++++-
>>>  2 files changed, 72 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
>>> index be56d9d53067..f9e4a27aed67 100644
>>> --- a/include/linux/guest_memfd.h
>>> +++ b/include/linux/guest_memfd.h
>>> @@ -25,6 +25,14 @@ struct guest_memfd_operations {
>>>         int (*release)(struct inode *inode);
>>>  };
>>>
>>> +/**
>>> + * @GUEST_MEMFD_FLAG_NO_DIRECT_MAP: When making folios inaccessible by host, also
>>> + *                                  remove them from the kernel's direct map.
>>> + */
>>> +enum {
>>> +       GUEST_MEMFD_FLAG_NO_DIRECT_MAP          = BIT(0),
>>> +};
>>> +
>>>  /**
>>>   * @GUEST_MEMFD_GRAB_UPTODATE: Ensure pages are zeroed/up to date.
>>>   *                             If trusted hyp will do it, can omit this flag
>>> diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
>>> index 580138b0f9d4..e9d8cab72b28 100644
>>> --- a/mm/guest_memfd.c
>>> +++ b/mm/guest_memfd.c
>>> @@ -7,9 +7,55 @@
>>>  #include <linux/falloc.h>
>>>  #include <linux/guest_memfd.h>
>>>  #include <linux/pagemap.h>
>>> +#include <linux/set_memory.h>
>>> +
>>> +static inline int guest_memfd_folio_private(struct folio *folio)
>>> +{
>>> +       unsigned long nr_pages = folio_nr_pages(folio);
>>> +       unsigned long i;
>>> +       int r;
>>> +
>>> +       for (i = 0; i < nr_pages; i++) {
>>> +               struct page *page = folio_page(folio, i);
>>> +
>>> +               r = set_direct_map_invalid_noflush(page);
>>> +               if (r < 0)
>>> +                       goto out_remap;
>>> +       }
>>> +
>>> +       folio_set_private(folio);
>>
>> Mh, you've inverted the semantics of PG_private in the context of gmem
>> here, compared to my patch. For me, PG_private=1 meant "this folio is
>> back in the direct map". For you it means "this folio is removed from
>> the direct map".
>>
>> Could you elaborate on why you require these different semantics for
>> PG_private? Actually, I think in this patch series, you could just drop
>> the PG_private stuff altogether, as the only place you do
>> folio_test_private is in guest_memfd_clear_private, but iirc calling
>> set_direct_map_default_noflush on a page that's already in the direct
>> map is a NOOP anyway.
>>
>> On the other hand, as Paolo pointed out in my patches [1], just using a
>> page flag to track direct map presence for gmem is not enough. We
>> actually need to keep a refcount in folio->private to keep track of how
>> many different actors request a folio's direct map presence (in the
>> specific case in my patch series, it was different pfn_to_gfn_caches for
>> the kvm-clock structures of different vcpus, which the guest can place
>> into the same gfn). While this might not be a concern for the
>> pKVM/Gunyah case, where the guest dictates memory state, it's required
>> for the non-CoCo case where KVM/userspace can set arbitrary guest gfns
>> to shared if it needs/wants to access them for whatever reason. So for
>> this we'd need to have PG_private=1 mean "direct map entry restored" (as
>> if PG_private=0, there is no folio->private).
>>
>> [1]: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#m0608c4b6a069b3953d7ee97f48577d32688a3315
>>
> 
> I wonder if we can use the folio refcount itself, assuming we can rely
> on refcount == 1 meaning we can do a shared->private conversion.
> 
> In gpc_map_gmem, we convert private->shared. There's no problem here in
> the non-CoCo case.
> 
> In gpc_unmap, we *try* to convert back from shared->private. If
> refcount>2, then the conversion would fail. The last gpc_unmap would be
> able to successfully convert back to private.
> 
> Do you see any concerns with this approach?

The gfn_to_pfn_cache does not keep an elevated refcount on the cached
page, and instead responds to MMU notifiers to detect whether the cached
translation has been invalidated, iirc. So the folio refcount will
not reflect the number of gpcs holding that folio.

>>> +       return 0;
>>> +out_remap:
>>> +       for (; i > 0; i--) {
>>> +               struct page *page = folio_page(folio, i - 1);
>>> +
>>> +               BUG_ON(set_direct_map_default_noflush(page));
>>> +       }
>>> +       return r;
>>> +}
>>> +
>>> +static inline void guest_memfd_folio_clear_private(struct folio *folio)
>>> +{
>>> +       unsigned long start = (unsigned long)folio_address(folio);
>>> +       unsigned long nr = folio_nr_pages(folio);
>>> +       unsigned long i;
>>> +
>>> +       if (!folio_test_private(folio))
>>> +               return;
>>> +
>>> +       for (i = 0; i < nr; i++) {
>>> +               struct page *page = folio_page(folio, i);
>>> +
>>> +               BUG_ON(set_direct_map_default_noflush(page));
>>> +       }
>>> +       flush_tlb_kernel_range(start, start + folio_size(folio));
>>> +
>>> +       folio_clear_private(folio);
>>> +}
>>>
>>>  struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
>>>  {
>>> +       unsigned long gmem_flags = (unsigned long)file->private_data;
>>>         struct inode *inode = file_inode(file);
>>>         struct guest_memfd_operations *ops = inode->i_private;
>>>         struct folio *folio;
>>> @@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
>>>                         goto out_err;
>>>         }
>>>
>>> +       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
>>> +               r = guest_memfd_folio_private(folio);
>>> +               if (r)
>>> +                       goto out_err;
>>> +       }
>>> +
>>
>> How does a caller of guest_memfd_grab_folio know whether a folio needs
>> to be removed from the direct map? E.g. how can a caller know ahead of
>> time whether guest_memfd_grab_folio will return a freshly allocated
>> folio (which thus needs to be removed from the direct map), vs a folio
>> that already exists and has been removed from the direct map (probably
>> fine to remove from direct map again), vs a folio that already exists
>> and is currently re-inserted into the direct map for whatever reason
>> (must not remove these from the direct map, as other parts of
>> KVM/userspace probably don't expect the direct map entries to disappear
>> from underneath them). I couldn't figure this one out for my series,
>> which is why I went with hooking into the PG_uptodate logic to always
>> remove direct map entries on freshly allocated folios.
>>
> 
>> gmem_flags come from the owner. If the caller (in the non-CoCo case) wants
>> to restore the direct map right away, it'd have to be a direct
>> operation. As an optimization, we could add an option that asks for the page
>> in "shared" state. If allocating a new page, we can return it right away
>> without removing it from the direct map. If grabbing an existing folio, it
>> would try to do the private->shared conversion.
> 
> Thanks for the feedback, it was helpful!
> 
> - Elliot
> 
>>>         /*
>>>          * Ignore accessed, referenced, and dirty flags.  The memory is
>>>          * unevictable and there is no storage to write back to.
>>> @@ -213,14 +265,25 @@ static bool gmem_release_folio(struct folio *folio, gfp_t gfp)
>>>         if (ops->invalidate_end)
>>>                 ops->invalidate_end(inode, offset, nr);
>>>
>>> +       guest_memfd_folio_clear_private(folio);
>>> +
>>>         return true;
>>>  }
>>>
>>> +static void gmem_invalidate_folio(struct folio *folio, size_t offset, size_t len)
>>> +{
>>> +       /* not yet supported */
>>> +       BUG_ON(offset || len != folio_size(folio));
>>> +
>>> +       BUG_ON(!gmem_release_folio(folio, 0));
>>> +}
>>> +
>>>  static const struct address_space_operations gmem_aops = {
>>>         .dirty_folio = noop_dirty_folio,
>>>         .migrate_folio = gmem_migrate_folio,
>>>         .error_remove_folio = gmem_error_folio,
>>>         .release_folio = gmem_release_folio,
>>> +       .invalidate_folio = gmem_invalidate_folio,
>>>  };
>>>
>>>  static inline bool guest_memfd_check_ops(const struct guest_memfd_operations *ops)
>>> @@ -241,7 +304,7 @@ struct file *guest_memfd_alloc(const char *name,
>>>         if (!guest_memfd_check_ops(ops))
>>>                 return ERR_PTR(-EINVAL);
>>>
>>> -       if (flags)
>>> +       if (flags & ~GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
>>>                 return ERR_PTR(-EINVAL);
>>>
>>>         /*
>>>
>>> --
>>> 2.34.1
>>>
>>
>> Best,
>> Patrick
>>
Patrick Roy Aug. 7, 2024, 10:57 a.m. UTC | #4
On Wed, 2024-08-07 at 07:48 +0100, Patrick Roy wrote:
> 
> 
> On Tue, 2024-08-06 at 21:13 +0100, Elliot Berman wrote:
>> On Tue, Aug 06, 2024 at 04:39:24PM +0100, Patrick Roy wrote:
>>>
>>> Hi Elliot,
>>>
>>> On Mon, 2024-08-05 at 19:34 +0100, Elliot Berman wrote:
>>>> This patch was reworked from Patrick's patch:
>>>> https://lore.kernel.org/all/20240709132041.3625501-6-roypat@amazon.co.uk/
>>>
>>> yaay :D
>>>
>>>> While guest_memfd is not available to be mapped by userspace, it is
>>>> still accessible through the kernel's direct map. This means that in
>>>> scenarios where guest-private memory is not hardware protected, it can
>>>> be speculatively read and its contents potentially leaked through
>>>> hardware side-channels. Removing guest-private memory from the direct
>>>> map thus mitigates a large class of speculative execution issues
>>>> [1, Table 1].
>>>>
>>>> Direct map removal does not reuse the `.prepare` machinery, since
>>>> `prepare` can be called multiple times, and it is the responsibility of
>>>> the preparation routine to not "prepare" the same folio twice [2]. Thus,
>>>> instead, explicitly check if `filemap_grab_folio` allocated a new folio,
>>>> and remove the returned folio from the direct map only if this was the
>>>> case.
>>>
>>> My patch did this, but you separated the PG_uptodate logic from the
>>> direct map removal, right?
>>>
>>>> The patch uses release_folio instead of free_folio to reinsert pages
>>>> back into the direct map as by the time free_folio is called,
>>>> folio->mapping can already be NULL. This means that a call to
>>>> folio_inode inside free_folio might dereference a NULL pointer, leaving no
>>>> way to access the inode which stores the flags that allow determining
>>>> whether the page was removed from the direct map in the first place.
>>>
>>> I thought release_folio was only called for folios with PG_private=1?
>>> You choose PG_private=1 to mean "this folio is in the direct map", so it
>>> gets called for exactly the wrong folios (more on that below, too).
>>>
>>
>> PG_private=1 should mean "this folio is not in the direct map".
> 
> Right. I just checked my patch and it indeed means the same there. No
> idea what I was on about yesterday. I think I only had Paolo's comment
> about using folio->private for refcounting sharings in mind, so I
> thought "to use folio->private, you need PG_private=1, therefore
> PG_private=1 means shared" (I just checked, and while
> folio_attach_private causes PG_private=1 to be set, page_set_private
> does not). Obviously my comments below and especially here on PG_private
> were nonsense. Sorry about that!
> 
>>>> [1]: https://download.vusec.net/papers/quarantine_raid23.pdf
>>>>
>>>> Cc: Patrick Roy <roypat@amazon.co.uk>
>>>> Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
>>>> ---
>>>>  include/linux/guest_memfd.h |  8 ++++++
>>>>  mm/guest_memfd.c            | 65 ++++++++++++++++++++++++++++++++++++++++++++-
>>>>  2 files changed, 72 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
>>>> index be56d9d53067..f9e4a27aed67 100644
>>>> --- a/include/linux/guest_memfd.h
>>>> +++ b/include/linux/guest_memfd.h
>>>> @@ -25,6 +25,14 @@ struct guest_memfd_operations {
>>>>         int (*release)(struct inode *inode);
>>>>  };
>>>>
>>>> +/**
>>>> + * @GUEST_MEMFD_FLAG_NO_DIRECT_MAP: When making folios inaccessible by host, also
>>>> + *                                  remove them from the kernel's direct map.
>>>> + */
>>>> +enum {
>>>> +       GUEST_MEMFD_FLAG_NO_DIRECT_MAP          = BIT(0),
>>>> +};
>>>> +
>>>>  /**
>>>>   * @GUEST_MEMFD_GRAB_UPTODATE: Ensure pages are zeroed/up to date.
>>>>   *                             If trusted hyp will do it, can omit this flag
>>>> diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
>>>> index 580138b0f9d4..e9d8cab72b28 100644
>>>> --- a/mm/guest_memfd.c
>>>> +++ b/mm/guest_memfd.c
>>>> @@ -7,9 +7,55 @@
>>>>  #include <linux/falloc.h>
>>>>  #include <linux/guest_memfd.h>
>>>>  #include <linux/pagemap.h>
>>>> +#include <linux/set_memory.h>
>>>> +
>>>> +static inline int guest_memfd_folio_private(struct folio *folio)
>>>> +{
>>>> +       unsigned long nr_pages = folio_nr_pages(folio);
>>>> +       unsigned long i;
>>>> +       int r;
>>>> +
>>>> +       for (i = 0; i < nr_pages; i++) {
>>>> +               struct page *page = folio_page(folio, i);
>>>> +
>>>> +               r = set_direct_map_invalid_noflush(page);
>>>> +               if (r < 0)
>>>> +                       goto out_remap;
>>>> +       }
>>>> +
>>>> +       folio_set_private(folio);
>>>
>>> Mh, you've inverted the semantics of PG_private in the context of gmem
>>> here, compared to my patch. For me, PG_private=1 meant "this folio is
>>> back in the direct map". For you it means "this folio is removed from
>>> the direct map".
>>>
>>> Could you elaborate on why you require these different semantics for
>>> PG_private? Actually, I think in this patch series, you could just drop
>>> the PG_private stuff altogether, as the only place you do
>>> folio_test_private is in guest_memfd_clear_private, but iirc calling
>>> set_direct_map_default_noflush on a page that's already in the direct
>>> map is a NOOP anyway.
>>>
>>> On the other hand, as Paolo pointed out in my patches [1], just using a
>>> page flag to track direct map presence for gmem is not enough. We
>>> actually need to keep a refcount in folio->private to keep track of how
>>> many different actors request a folio's direct map presence (in the
>>> specific case in my patch series, it was different pfn_to_gfn_caches for
>>> the kvm-clock structures of different vcpus, which the guest can place
>>> into the same gfn). While this might not be a concern for the
>>> pKVM/Gunyah case, where the guest dictates memory state, it's required
>>> for the non-CoCo case where KVM/userspace can set arbitrary guest gfns
>>> to shared if it needs/wants to access them for whatever reason. So for
>>> this we'd need to have PG_private=1 mean "direct map entry restored" (as
>>> if PG_private=0, there is no folio->private).
>>>
>>> [1]: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#m0608c4b6a069b3953d7ee97f48577d32688a3315
>>>
>>
>> I wonder if we can use the folio refcount itself, assuming we can rely
>> on refcount == 1 meaning we can do a shared->private conversion.
>>
>> In gpc_map_gmem, we convert private->shared. There's no problem here in
>> the non-CoCo case.
>>
>> In gpc_unmap, we *try* to convert back from shared->private. If
>> refcount>2, then the conversion would fail. The last gpc_unmap would be
>> able to successfully convert back to private.
>>
>> Do you see any concerns with this approach?
> 
> The gfn_to_pfn_cache does not keep an elevated refcount on the cached
> page, and instead responds to MMU notifiers to detect whether the cached
> translation has been invalidated, iirc. So the folio refcount will
> not reflect the number of gpcs holding that folio.
> 
>>>> +       return 0;
>>>> +out_remap:
>>>> +       for (; i > 0; i--) {
>>>> +               struct page *page = folio_page(folio, i - 1);
>>>> +
>>>> +               BUG_ON(set_direct_map_default_noflush(page));
>>>> +       }
>>>> +       return r;
>>>> +}
>>>> +
>>>> +static inline void guest_memfd_folio_clear_private(struct folio *folio)
>>>> +{
>>>> +       unsigned long start = (unsigned long)folio_address(folio);
>>>> +       unsigned long nr = folio_nr_pages(folio);
>>>> +       unsigned long i;
>>>> +
>>>> +       if (!folio_test_private(folio))
>>>> +               return;
>>>> +
>>>> +       for (i = 0; i < nr; i++) {
>>>> +               struct page *page = folio_page(folio, i);
>>>> +
>>>> +               BUG_ON(set_direct_map_default_noflush(page));
>>>> +       }
>>>> +       flush_tlb_kernel_range(start, start + folio_size(folio));
>>>> +
>>>> +       folio_clear_private(folio);
>>>> +}
>>>>
>>>>  struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
>>>>  {
>>>> +       unsigned long gmem_flags = (unsigned long)file->private_data;
>>>>         struct inode *inode = file_inode(file);
>>>>         struct guest_memfd_operations *ops = inode->i_private;
>>>>         struct folio *folio;
>>>> @@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
>>>>                         goto out_err;
>>>>         }
>>>>
>>>> +       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
>>>> +               r = guest_memfd_folio_private(folio);
>>>> +               if (r)
>>>> +                       goto out_err;
>>>> +       }
>>>> +
>>>
>>> How does a caller of guest_memfd_grab_folio know whether a folio needs
>>> to be removed from the direct map? E.g. how can a caller know ahead of
>>> time whether guest_memfd_grab_folio will return a freshly allocated
>>> folio (which thus needs to be removed from the direct map), vs a folio
>>> that already exists and has been removed from the direct map (probably
>>> fine to remove from direct map again), vs a folio that already exists
>>> and is currently re-inserted into the direct map for whatever reason
>>> (must not remove these from the direct map, as other parts of
>>> KVM/userspace probably don't expect the direct map entries to disappear
>>> from underneath them). I couldn't figure this one out for my series,
>>> which is why I went with hooking into the PG_uptodate logic to always
>>> remove direct map entries on freshly allocated folios.
>>>
>>
>> gmem_flags come from the owner. If the caller (in the non-CoCo case) wants

Ah, oops, I got it mixed up with the new `flags` parameter. 

>> to restore the direct map right away, it'd have to be a direct
>> operation. As an optimization, we could add an option that asks for the page
>> in "shared" state. If allocating a new page, we can return it right away
>> without removing it from the direct map. If grabbing an existing folio, it
>> would try to do the private->shared conversion.

My concern is more with the implicit shared->private conversion that
happens on every call to guest_memfd_grab_folio (and thus
kvm_gmem_get_pfn) when grabbing existing folios. If something else
marked the folio as shared, then we cannot punch it out of the direct
map again until that something is done using the folio (when working on
my RFC, kvm_gmem_get_pfn was indeed called on existing folios that were
temporarily marked shared, as I was seeing panics because of this). And
if the folio is currently private, there's nothing to do. So either way,
guest_memfd_grab_folio shouldn't touch the direct map entry for existing
folios.

>>
>> Thanks for the feedback, it was helpful!
>>
>> - Elliot
>>
>>>>         /*
>>>>          * Ignore accessed, referenced, and dirty flags.  The memory is
>>>>          * unevictable and there is no storage to write back to.
>>>> @@ -213,14 +265,25 @@ static bool gmem_release_folio(struct folio *folio, gfp_t gfp)
>>>>         if (ops->invalidate_end)
>>>>                 ops->invalidate_end(inode, offset, nr);
>>>>
>>>> +       guest_memfd_folio_clear_private(folio);
>>>> +
>>>>         return true;
>>>>  }
>>>>
>>>> +static void gmem_invalidate_folio(struct folio *folio, size_t offset, size_t len)
>>>> +{
>>>> +       /* not yet supported */
>>>> +       BUG_ON(offset || len != folio_size(folio));
>>>> +
>>>> +       BUG_ON(!gmem_release_folio(folio, 0));
>>>> +}
>>>> +
>>>>  static const struct address_space_operations gmem_aops = {
>>>>         .dirty_folio = noop_dirty_folio,
>>>>         .migrate_folio = gmem_migrate_folio,
>>>>         .error_remove_folio = gmem_error_folio,
>>>>         .release_folio = gmem_release_folio,
>>>> +       .invalidate_folio = gmem_invalidate_folio,
>>>>  };
>>>>
>>>>  static inline bool guest_memfd_check_ops(const struct guest_memfd_operations *ops)
>>>> @@ -241,7 +304,7 @@ struct file *guest_memfd_alloc(const char *name,
>>>>         if (!guest_memfd_check_ops(ops))
>>>>                 return ERR_PTR(-EINVAL);
>>>>
>>>> -       if (flags)
>>>> +       if (flags & ~GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
>>>>                 return ERR_PTR(-EINVAL);
>>>>
>>>>         /*
>>>>
>>>> --
>>>> 2.34.1
>>>>
>>>
>>> Best,
>>> Patrick
>>>
Elliot Berman Aug. 7, 2024, 7:06 p.m. UTC | #5
On Wed, Aug 07, 2024 at 11:57:35AM +0100, Patrick Roy wrote:
> On Wed, 2024-08-07 at 07:48 +0100, Patrick Roy wrote:
> > On Tue, 2024-08-06 at 21:13 +0100, Elliot Berman wrote:
> >> On Tue, Aug 06, 2024 at 04:39:24PM +0100, Patrick Roy wrote:
> >>> On the other hand, as Paolo pointed out in my patches [1], just using a
> >>> page flag to track direct map presence for gmem is not enough. We
> >>> actually need to keep a refcount in folio->private to keep track of how
> >>> many different actors request a folio's direct map presence (in the
> >>> specific case in my patch series, it was different pfn_to_gfn_caches for
> >>> the kvm-clock structures of different vcpus, which the guest can place
> >>> into the same gfn). While this might not be a concern for the
> >>> pKVM/Gunyah case, where the guest dictates memory state, it's required
> >>> for the non-CoCo case where KVM/userspace can set arbitrary guest gfns
> >>> to shared if it needs/wants to access them for whatever reason. So for
> >>> this we'd need to have PG_private=1 mean "direct map entry restored" (as
> >>> if PG_private=0, there is no folio->private).
> >>>
> >>> [1]: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#m0608c4b6a069b3953d7ee97f48577d32688a3315
> >>>
> >>
> >> I wonder if we can use the folio refcount itself, assuming we can rely
> >> on refcount == 1 meaning we can do a shared->private conversion.
> >>
> >> In gpc_map_gmem, we convert private->shared. There's no problem here in
> >> the non-CoCo case.
> >>
> >> In gpc_unmap, we *try* to convert back from shared->private. If
> >> refcount>2, then the conversion would fail. The last gpc_unmap would be
> >> able to successfully convert back to private.
> >>
> >> Do you see any concerns with this approach?
> > 
> > The gfn_to_pfn_cache does not keep an elevated refcount on the cached
> > page, and instead responds to MMU notifiers to detect whether the cached
> > translation has been invalidated, iirc. So the folio refcount will
> > not reflect the number of gpcs holding that folio.
> > 

Ah, fair enough. This is kinda like a GUP pin which would prevent us
from making the page private, but without the pin part.

[...]

> >>>>  struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
> >>>>  {
> >>>> +       unsigned long gmem_flags = (unsigned long)file->private_data;
> >>>>         struct inode *inode = file_inode(file);
> >>>>         struct guest_memfd_operations *ops = inode->i_private;
> >>>>         struct folio *folio;
> >>>> @@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
> >>>>                         goto out_err;
> >>>>         }
> >>>>
> >>>> +       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
> >>>> +               r = guest_memfd_folio_private(folio);
> >>>> +               if (r)
> >>>> +                       goto out_err;
> >>>> +       }
> >>>> +
> >>>
> >>> How does a caller of guest_memfd_grab_folio know whether a folio needs
> >>> to be removed from the direct map? E.g. how can a caller know ahead of
> >>> time whether guest_memfd_grab_folio will return a freshly allocated
> >>> folio (which thus needs to be removed from the direct map), vs a folio
> >>> that already exists and has been removed from the direct map (probably
> >>> fine to remove from direct map again), vs a folio that already exists
> >>> and is currently re-inserted into the direct map for whatever reason
> >>> (must not remove these from the direct map, as other parts of
> >>> KVM/userspace probably don't expect the direct map entries to disappear
> >>> from underneath them). I couldn't figure this one out for my series,
> >>> which is why I went with hooking into the PG_uptodate logic to always
> >>> remove direct map entries on freshly allocated folios.
> >>>
> >>
> >> gmem_flags come from the owner. If the caller (in the non-CoCo case) wants
> 
> Ah, oops, I got it mixed up with the new `flags` parameter. 
> 
> >> to restore the direct map right away, it'd have to be a direct
> >> operation. As an optimization, we could add an option that asks for the page
> >> in "shared" state. If allocating a new page, we can return it right away
> >> without removing it from the direct map. If grabbing an existing folio, it
> >> would try to do the private->shared conversion.
> 
> My concern is more with the implicit shared->private conversion that
> happens on every call to guest_memfd_grab_folio (and thus
> kvm_gmem_get_pfn) when grabbing existing folios. If something else
> marked the folio as shared, then we cannot punch it out of the direct
> map again until that something is done using the folio (when working on
> my RFC, kvm_gmem_get_pfn was indeed called on existing folios that were
> temporarily marked shared, as I was seeing panics because of this). And
> if the folio is currently private, there's nothing to do. So either way,
> guest_memfd_grab_folio shouldn't touch the direct map entry for existing
> folios.
> 

What I did could be documented/commented better.

If ops->accessible() is *not* provided, all guest_memfd allocations are
immediately removed from the direct map and treated like guest-private
memory (the goal is to match what KVM does today on tip).

If ops->accessible() is provided, then guest_memfd allocations start
as "shared" and KVM/Gunyah need to do the shared->private conversion
when they want to make the folio private. "Shared" is the default
because that is effectively a no-op.
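
As pseudocode, the intended grab-time policy is roughly (a sketch, not
the actual patch):

static int gmem_initial_state(struct guest_memfd_operations *ops,
			      unsigned long gmem_flags, struct folio *folio)
{
	/* Owner supports host access: start "shared", which is a no-op. */
	if (ops->accessible)
		return 0;

	/* Otherwise treat the folio as guest-private right away. */
	if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
		return guest_memfd_folio_private(folio);

	return 0;
}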

For the non-CoCo case you're interested in, we'd have the
ops->accessible() provided and we wouldn't pull out the direct map from
gpc.

Thanks,
Elliot
Patrick Roy Aug. 8, 2024, 1:05 p.m. UTC | #6
On Wed, 2024-08-07 at 20:06 +0100, Elliot Berman wrote:
>>>>>>  struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
>>>>>>  {
>>>>>> +       unsigned long gmem_flags = (unsigned long)file->private_data;
>>>>>>         struct inode *inode = file_inode(file);
>>>>>>         struct guest_memfd_operations *ops = inode->i_private;
>>>>>>         struct folio *folio;
>>>>>> @@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
>>>>>>                         goto out_err;
>>>>>>         }
>>>>>>
>>>>>> +       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
>>>>>> +               r = guest_memfd_folio_private(folio);
>>>>>> +               if (r)
>>>>>> +                       goto out_err;
>>>>>> +       }
>>>>>> +
>>>>>
>>>>> How does a caller of guest_memfd_grab_folio know whether a folio needs
>>>>> to be removed from the direct map? E.g. how can a caller know ahead of
>>>>> time whether guest_memfd_grab_folio will return a freshly allocated
>>>>> folio (which thus needs to be removed from the direct map), vs a folio
>>>>> that already exists and has been removed from the direct map (probably
>>>>> fine to remove from direct map again), vs a folio that already exists
>>>>> and is currently re-inserted into the direct map for whatever reason
>>>>> (must not remove these from the direct map, as other parts of
>>>>> KVM/userspace probably don't expect the direct map entries to disappear
>>>>> from underneath them). I couldn't figure this one out for my series,
>>>>> which is why I went with hooking into the PG_uptodate logic to always
>>>>> remove direct map entries on freshly allocated folios.
>>>>>
>>>>
>>>> gmem_flags come from the owner. If the caller (in the non-CoCo case) wants
>>
>> Ah, oops, I got it mixed up with the new `flags` parameter.
>>
>>>> to restore the direct map right away, it'd have to be a direct
>>>> operation. As an optimization, we could add an option that asks for the page
>>>> in "shared" state. If allocating a new page, we can return it right away
>>>> without removing it from the direct map. If grabbing an existing folio, it
>>>> would try to do the private->shared conversion.
>>
>> My concern is more with the implicit shared->private conversion that
>> happens on every call to guest_memfd_grab_folio (and thus
>> kvm_gmem_get_pfn) when grabbing existing folios. If something else
>> marked the folio as shared, then we cannot punch it out of the direct
>> map again until that something is done using the folio (when working on
>> my RFC, kvm_gmem_get_pfn was indeed called on existing folios that were
>> temporarily marked shared, as I was seeing panics because of this). And
>> if the folio is currently private, there's nothing to do. So either way,
>> guest_memfd_grab_folio shouldn't touch the direct map entry for existing
>> folios.
>>
>
> What I did could be documented/commented better.

No worries, thanks for taking the time to walk me through understanding
it!

> If ops->accessible() is *not* provided, all guest_memfd allocations are
> immediately removed from the direct map and treated like guest-private
> memory (the goal is to match what KVM does today on tip).

Ah, so if ops->accessible() is not provided, then there will never be
any shared memory inside gmem (like today, where gmem doesn't support
shared memory altogether), and thus there are no problems with just
unconditionally doing set_direct_map_invalid_noflush in
guest_memfd_grab_folio, because all existing folios already have their
direct map entry removed. Got it!

> If ops->accessible() is provided, then guest_memfd allocations start
> as "shared" and KVM/Gunyah need to do the shared->private conversion
> when they want to make the folio private. "Shared" is the default
> because that is effectively a no-op.
> For the non-CoCo case you're interested in, we'd have the
> ops->accessible() provided and we wouldn't pull out the direct map from
> gpc.

So in pKVM/Gunyah's case, guest memory starts as shared, and at some
point the guest will issue a hypercall (or similar) to flip it to
private, at which point it'll get removed from the direct map?

That isn't really what we want for our case. We consider the folios as
private straight away, as we do not let the guest control their state at
all. Everything is always "accessible" to both KVM and userspace in the
sense that they can just flip gfns to shared as they please without the
guest having any say in it.

I think we should untangle the behavior of guest_memfd_grab_folio from
the presence of ops->accessible. E.g.  instead of direct map removal
being dependent on ops->accessible we should have some
GRAB_FOLIO_RETURN_SHARED flag for gmem_flags, which is set for y'all,
and not set for us (I don't think we should have a "call
set_direct_map_invalid_noflush unconditionally in
guest_memfd_grab_folio" mode at all, because if sharing gmem is
supported, then that is broken, and if sharing gmem is not supported
then only removing direct map entries for freshly allocated folios gets
us the same result of "all folios never in the direct map" while
avoiding some no-op direct map operations).

Because we would still use ->accessible, albeit for us that would be
more for bookkeeping along the lines of "which gfns does userspace
currently require to be in the direct map?". I haven't completely
thought it through, but what I could see working for us would be a pair
of ioctls for marking ranges accessible/inaccessible, with
"accessibility" stored in some xarray (somewhat like Fuad's patches, I
guess? [1]).

In a world where we have a "sharing refcount", the "make accessible"
ioctl reinserts into the direct map (if needed), lifts the "sharings
refcount" for each folio in the given gfn range, and marks the range as
accessible.  And the "make inaccessible" ioctl would first check that
userspace has unmapped all those gfns again, and if yes, mark them as
inaccessible, drop the "sharings refcount" by 1 for each, and removes
from the direct map again if it held the last reference (if userspace
still has some gfns mapped, the ioctl would just fail).
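
Roughly, I'd imagine the pair of handlers to look something like the
below. This is only a sketch: locking, the xarray bookkeeping, and TLB
flushing are elided, order-0 folios are assumed, and the
folio_*_sharing_count() helpers are made-up names for wherever the
"sharings refcount" ends up living:

static int gmem_make_accessible(struct inode *inode, pgoff_t index)
{
    struct folio *folio = filemap_lock_folio(inode->i_mapping, index);
    int r = 0;

    if (IS_ERR(folio))
        return PTR_ERR(folio);

    /* the first sharing puts the folio back into the direct map */
    if (folio_sharing_count(folio) == 0)
        r = set_direct_map_default_noflush(folio_page(folio, 0));
    if (!r)
        folio_inc_sharing_count(folio);

    folio_unlock(folio);
    folio_put(folio);
    return r;
}

static int gmem_make_inaccessible(struct inode *inode, pgoff_t index)
{
    struct folio *folio = filemap_lock_folio(inode->i_mapping, index);
    int r = 0;

    if (IS_ERR(folio))
        return PTR_ERR(folio);

    if (folio_mapped(folio)) {
        /* userspace still has this gfn mapped, so just fail */
        r = -EBUSY;
    } else if (folio_dec_sharing_count(folio) == 0) {
        /* dropped the last sharing, punch it back out of the direct map */
        r = set_direct_map_invalid_noflush(folio_page(folio, 0));
    }

    folio_unlock(folio);
    folio_put(folio);
    return r;
}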

I guess for pKVM/Gunyah, there wouldn't be userspace ioctls, but instead
the above would happen in handlers for share/unshare hypercalls. But the
overall flow would be similar. The only difference is the default state
of guest memory (shared for you, private for us). You want a
guest_memfd_grab_folio that essentially returns folios with "sharing
refcount == 1" (and thus present in the direct map), while we want the
opposite.

So I think something like the following should work for both of us
(modulo some error handling):

static struct folio *__kvm_gmem_get_folio(struct file *file, pgoff_t index, bool prepare, bool *fresh)
{
    // as today's kvm_gmem_get_folio, except
    ...
    if (!folio_test_uptodate(folio)) {
        ...
        if (fresh)
            *fresh = true;
    }
    ...
}

struct folio *kvm_gmem_get_folio(struct file *file, pgoff_t index, bool prepare)
{
    bool fresh = false;
    unsigned long gmem_flags = /* ... */;
    struct folio *folio = __kvm_gmem_get_folio(file, index, prepare, &fresh);
    if (gmem_flags & GRAB_FOLIO_RETURN_SHARED) {
        // if "sharing refcount == 0", inserts back into direct map and lifts refcount, otherwise just lifts refcount
        guest_memfd_folio_clear_private(folio);
    } else {
        if (fresh)
            guest_memfd_folio_private(folio);
    }
    return folio;
}

Now, thinking ahead, there are probably optimizations here where we defer
the direct map manipulations to gmem_fault, at which point having a
guest_memfd_grab_folio that doesn't remove direct map entries for fresh
folios would be useful in our non-CoCo usecase too. But that should also
be easily achievable by maybe having a flag to kvm_gmem_get_folio that
forces the behavior of GRAB_FOLIO_RETURN_SHARED, independently of whether
GRAB_FOLIO_RETURN_SHARED is set in gmem_flags.

How does that sound to you?

[1]: https://lore.kernel.org/kvm/20240801090117.3841080-1-tabba@google.com/

> Thanks,
> Elliot

Best,
Patrick
Elliot Berman Aug. 8, 2024, 10:16 p.m. UTC | #7
On Thu, Aug 08, 2024 at 02:05:55PM +0100, Patrick Roy wrote:
> On Wed, 2024-08-07 at 20:06 +0100, Elliot Berman wrote:
> >>>>>>  struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
> >>>>>>  {
> >>>>>> +       unsigned long gmem_flags = (unsigned long)file->private_data;
> >>>>>>         struct inode *inode = file_inode(file);
> >>>>>>         struct guest_memfd_operations *ops = inode->i_private;
> >>>>>>         struct folio *folio;
> >>>>>> @@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
> >>>>>>                         goto out_err;
> >>>>>>         }
> >>>>>>
> >>>>>> +       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
> >>>>>> +               r = guest_memfd_folio_private(folio);
> >>>>>> +               if (r)
> >>>>>> +                       goto out_err;
> >>>>>> +       }
> >>>>>> +
> >>>>>
> >>>>> How does a caller of guest_memfd_grab_folio know whether a folio needs
> >>>>> to be removed from the direct map? E.g. how can a caller know ahead of
> >>>>> time whether guest_memfd_grab_folio will return a freshly allocated
> >>>>> folio (which thus needs to be removed from the direct map), vs a folio
> >>>>> that already exists and has been removed from the direct map (probably
> >>>>> fine to remove from direct map again), vs a folio that already exists
> >>>>> and is currently re-inserted into the direct map for whatever reason
> >>>>> (must not remove these from the direct map, as other parts of
> >>>>> KVM/userspace probably don't expect the direct map entries to disappear
> >>>>> from underneath them). I couldn't figure this one out for my series,
> >>>>> which is why I went with hooking into the PG_uptodate logic to always
> >>>>> remove direct map entries on freshly allocated folios.
> >>>>>
> >>>>
> >>>> gmem_flags come from the owner. If the caller (in non-CoCo case) wants
> >>
> >> Ah, oops, I got it mixed up with the new `flags` parameter.
> >>
> >>>> to restore the direct map right away, it'd have to be a direct
> >>>> operation. As an optimization, we could add option that asks for page in
> >>>> "shared" state. If allocating new page, we can return it right away
> >>>> without removing from direct map. If grabbing existing folio, it would
> >>>> try to do the private->shared conversion.
> >>
> >> My concern is more with the implicit shared->private conversion that
> >> happens on every call to guest_memfd_grab_folio (and thus
> >> kvm_gmem_get_pfn) when grabbing existing folios. If something else
> >> marked the folio as shared, then we cannot punch it out of the direct
> >> map again until that something is done using the folio (when working on
> >> my RFC, kvm_gmem_get_pfn was indeed called on existing folios that were
> >> temporarily marked shared, as I was seeing panics because of this). And
> >> if the folio is currently private, there's nothing to do. So either way,
> >> guest_memfd_grab_folio shouldn't touch the direct map entry for existing
> >> folios.
> >>
> >
> > What I did could be documented/commented better.
> 
> No worries, thanks for taking the time to walk me through understanding
> it!
> 
> > If ops->accessible() is *not* provided, all guest_memfd allocations will
> > immediately be removed from the direct map and treated like guest
> > private (goal is to match what KVM does today on tip).
> 
> Ah, so if ops->accessible() is not provided, then there will never be
> any shared memory inside gmem (like today, where gmem doesn't support
> shared memory altogether), and thus there's no problems with just
> unconditionally doing set_direct_map_invalid_noflush in
> guest_memfd_grab_folio, because all existing folios already have their
> direct map entry removed. Got it!
> 
> > If ops->accessible() is provided, then guest_memfd allocations start
> > as "shared" and KVM/Gunyah need to do the shared->private conversion
> > when they want to do the private conversion on the folio. "Shared" is
> > the default because that is effectively a no-op.
> > For the non-CoCo case you're interested in, we'd have the
> > ops->accessible() provided and we wouldn't pull out the direct map from
> > gpc.
> 
> So in pKVM/Gunyah's case, guest memory starts as shared, and at some
> point the guest will issue a hypercall (or similar) to flip it to
> private, at which point it'll get removed from the direct map?
> 
> That isn't really what we want for our case. We consider the folios as
> private straight away, as we do not let the guest control their state at
> all. Everything is always "accessible" to both KVM and userspace in the
> sense that they can just flip gfns to shared as they please without the
> guest having any say in it.
> 
> I think we should untangle the behavior of guest_memfd_grab_folio from
> the presence of ops->accessible. E.g.  instead of direct map removal
> being dependent on ops->accessible we should have some
> GRAB_FOLIO_RETURN_SHARED flag for gmem_flags, which is set for y'all,
> and not set for us (I don't think we should have a "call
> set_direct_map_invalid_noflush unconditionally in
> guest_memfd_grab_folio" mode at all, because if sharing gmem is
> supported, then that is broken, and if sharing gmem is not supported
> then only removing direct map entries for freshly allocated folios gets
> us the same result of "all folios never in the direct map" while
> avoiding some no-op direct map operations).
> 
> Because we would still use ->accessible, albeit for us that would be
> more for bookkeeping along the lines of "which gfns does userspace
> currently require to be in the direct map?". I haven't completely
> thought it through, but what I could see working for us would be a pair
> of ioctls for marking ranges accessible/inaccessible, with
> "accessibility" stored in some xarray (somewhat like Fuad's patches, I
> guess? [1]).
> 
> In a world where we have a "sharing refcount", the "make accessible"
> ioctl reinserts into the direct map (if needed), lifts the "sharings
> refcount" for each folio in the given gfn range, and marks the range as
> accessible.  And the "make inaccessible" ioctl would first check that
> userspace has unmapped all those gfns again, and if yes, mark them as
> inaccessible, drop the "sharings refcount" by 1 for each, and removes
> from the direct map again if it held the last reference (if userspace
> still has some gfns mapped, the ioctl would just fail).
> 

I am warming up to the sharing refcount idea. How does the sharing
refcount look for kvm gpc?

> I guess for pKVM/Gunyah, there wouldn't be userspace ioctls, but instead
> the above would happen in handlers for share/unshare hypercalls. But the
> overall flow would be similar. The only difference is the default state
> of guest memory (shared for you, private for us). You want a
> guest_memfd_grab_folio that essentially returns folios with "sharing
> refcount == 1" (and thus present in the direct map), while we want the
> opposite.
> 
> So I think something like the following should work for both of us
> (modulo some error handling):
> 
> static struct folio *__kvm_gmem_get_folio(struct file *file, pgoff_t index, bool prepare, bool *fresh)
> {
>     // as today's kvm_gmem_get_folio, except
>     ...
>     if (!folio_test_uptodate(folio)) {
>         ...
>         if (fresh)
>             *fresh = true;
>     }
>     ...
> }
> 
> struct folio *kvm_gmem_get_folio(struct file *file, pgoff_t index, bool prepare)
> {
>     bool fresh = false;
>     unsigned long gmem_flags = /* ... */;
>     struct folio *folio = __kvm_gmem_get_folio(file, index, prepare, &fresh);
>     if (gmem_flags & GRAB_FOLIO_RETURN_SHARED) {
>         // if "sharing refcount == 0", inserts back into direct map and lifts refcount, otherwise just lifts refcount
>         guest_memfd_folio_clear_private(folio);
>     } else {
>         if (fresh)
>             guest_memfd_folio_private(folio);
>     }
>     return folio;
> }
> 
> Now, thinking ahead, there are probably optimizations here where we defer
> the direct map manipulations to gmem_fault, at which point having a
> guest_memfd_grab_folio that doesn't remove direct map entries for fresh
> folios would be useful in our non-CoCo usecase too. But that should also
> be easily achievable by maybe having a flag to kvm_gmem_get_folio that
> forces the behavior of GRAB_FOLIO_RETURN_SHARED, independently of whether
> GRAB_FOLIO_RETURN_SHARED is set in gmem_flags.
> 
> How does that sound to you?
> 

Yeah, I think this is a good idea.

I'm also thinking of making a few tweaks to the ops structure:

struct guest_memfd_operations {
        int (*invalidate_begin)(struct inode *inode, pgoff_t offset, unsigned long nr);
        void (*invalidate_end)(struct inode *inode, pgoff_t offset, unsigned long nr);
        int (*prepare_accessible)(struct inode *inode, struct folio *folio);
        int (*prepare_private)(struct inode *inode, struct folio *folio);
        int (*release)(struct inode *inode);
};

When grabbing a folio, we'd always call either prepare_accessible() or
prepare_private() based on GRAB_FOLIO_RETURN_SHARED. In the
prepare_private() case, guest_memfd can also ensure the folio is
unmapped and not pinned. If userspace tries to grab the folio in the
pKVM/Gunyah case, prepare_accessible() will fail and grab_folio returns
an error. There are a lot of details I'm glossing over, but I hope this
gives a rough idea of the direction I was thinking.
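
Roughly, grab_folio would then dispatch on the flag we discussed. A
sketch only (I'm glossing over the uptodate/zeroing step and the
unmap/pin checks here, and GRAB_FOLIO_RETURN_SHARED is the gmem_flags
bit from your proposal):

struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
{
        unsigned long gmem_flags = (unsigned long)file->private_data;
        struct inode *inode = file_inode(file);
        struct guest_memfd_operations *ops = inode->i_private;
        struct folio *folio;
        int r;

        folio = filemap_grab_folio(inode->i_mapping, index);
        if (IS_ERR(folio))
                return folio;

        /* dispatch on the default state the owner asked for */
        if (gmem_flags & GRAB_FOLIO_RETURN_SHARED)
                r = ops->prepare_accessible(inode, folio);
        else
                r = ops->prepare_private(inode, folio);
        if (r) {
                folio_unlock(folio);
                folio_put(folio);
                return ERR_PTR(r);
        }

        return folio;
}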

In some cases, prepare_accessible() and the invalidate_*() functions
might effectively be the same thing, except that invalidate_*() could
operate on a range larger than a folio. That would be useful because we
could offer an optimization that reclaims a batch of pages rather than,
e.g., flushing caches for every page.

Thanks,
Elliot
Patrick Roy Aug. 9, 2024, 3:02 p.m. UTC | #8
On Thu, 2024-08-08 at 23:16 +0100, Elliot Berman wrote:
> On Thu, Aug 08, 2024 at 02:05:55PM +0100, Patrick Roy wrote:
>> On Wed, 2024-08-07 at 20:06 +0100, Elliot Berman wrote:
>>>>>>>>  struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
>>>>>>>>  {
>>>>>>>> +       unsigned long gmem_flags = (unsigned long)file->private_data;
>>>>>>>>         struct inode *inode = file_inode(file);
>>>>>>>>         struct guest_memfd_operations *ops = inode->i_private;
>>>>>>>>         struct folio *folio;
>>>>>>>> @@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
>>>>>>>>                         goto out_err;
>>>>>>>>         }
>>>>>>>>
>>>>>>>> +       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
>>>>>>>> +               r = guest_memfd_folio_private(folio);
>>>>>>>> +               if (r)
>>>>>>>> +                       goto out_err;
>>>>>>>> +       }
>>>>>>>> +
>>>>>>>
>>>>>>> How does a caller of guest_memfd_grab_folio know whether a folio needs
>>>>>>> to be removed from the direct map? E.g. how can a caller know ahead of
>>>>>>> time whether guest_memfd_grab_folio will return a freshly allocated
>>>>>>> folio (which thus needs to be removed from the direct map), vs a folio
>>>>>>> that already exists and has been removed from the direct map (probably
>>>>>>> fine to remove from direct map again), vs a folio that already exists
>>>>>>> and is currently re-inserted into the direct map for whatever reason
>>>>>>> (must not remove these from the direct map, as other parts of
>>>>>>> KVM/userspace probably don't expect the direct map entries to disappear
>>>>>>> from underneath them). I couldn't figure this one out for my series,
>>>>>>> which is why I went with hooking into the PG_uptodate logic to always
>>>>>>> remove direct map entries on freshly allocated folios.
>>>>>>>
>>>>>>
>>>>>> gmem_flags come from the owner. If the caller (in non-CoCo case) wants
>>>>
>>>> Ah, oops, I got it mixed up with the new `flags` parameter.
>>>>
>>>>>> to restore the direct map right away, it'd have to be a direct
>>>>>> operation. As an optimization, we could add option that asks for page in
>>>>>> "shared" state. If allocating new page, we can return it right away
>>>>>> without removing from direct map. If grabbing existing folio, it would
>>>>>> try to do the private->shared conversion.
>>>>
>>>> My concern is more with the implicit shared->private conversion that
>>>> happens on every call to guest_memfd_grab_folio (and thus
>>>> kvm_gmem_get_pfn) when grabbing existing folios. If something else
>>>> marked the folio as shared, then we cannot punch it out of the direct
>>>> map again until that something is done using the folio (when working on
>>>> my RFC, kvm_gmem_get_pfn was indeed called on existing folios that were
>>>> temporarily marked shared, as I was seeing panics because of this). And
>>>> if the folio is currently private, there's nothing to do. So either way,
>>>> guest_memfd_grab_folio shouldn't touch the direct map entry for existing
>>>> folios.
>>>>
>>>
>>> What I did could be documented/commented better.
>>
>> No worries, thanks for taking the time to walk me through understanding
>> it!
>>
>>> If ops->accessible() is *not* provided, all guest_memfd allocations will
>>> immediately be removed from the direct map and treated like guest
>>> private (goal is to match what KVM does today on tip).
>>
>> Ah, so if ops->accessible() is not provided, then there will never be
>> any shared memory inside gmem (like today, where gmem doesn't support
>> shared memory altogether), and thus there's no problems with just
>> unconditionally doing set_direct_map_invalid_noflush in
>> guest_memfd_grab_folio, because all existing folios already have their
>> direct map entry removed. Got it!
>>
>>> If ops->accessible() is provided, then guest_memfd allocations start
>>> as "shared" and KVM/Gunyah need to do the shared->private conversion
>>> when they want to do the private conversion on the folio. "Shared" is
>>> the default because that is effectively a no-op.
>>> For the non-CoCo case you're interested in, we'd have the
>>> ops->accessible() provided and we wouldn't pull out the direct map from
>>> gpc.
>>
>> So in pKVM/Gunyah's case, guest memory starts as shared, and at some
>> point the guest will issue a hypercall (or similar) to flip it to
>> private, at which point it'll get removed from the direct map?
>>
>> That isn't really what we want for our case. We consider the folios as
>> private straight away, as we do not let the guest control their state at
>> all. Everything is always "accessible" to both KVM and userspace in the
>> sense that they can just flip gfns to shared as they please without the
>> guest having any say in it.
>>
>> I think we should untangle the behavior of guest_memfd_grab_folio from
>> the presence of ops->accessible. E.g.  instead of direct map removal
>> being dependent on ops->accessible we should have some
>> GRAB_FOLIO_RETURN_SHARED flag for gmem_flags, which is set for y'all,
>> and not set for us (I don't think we should have a "call
>> set_direct_map_invalid_noflush unconditionally in
>> guest_memfd_grab_folio" mode at all, because if sharing gmem is
>> supported, then that is broken, and if sharing gmem is not supported
>> then only removing direct map entries for freshly allocated folios gets
>> us the same result of "all folios never in the direct map" while
>> avoiding some no-op direct map operations).
>>
>> Because we would still use ->accessible, albeit for us that would be
>> more for bookkeeping along the lines of "which gfns does userspace
>> currently require to be in the direct map?". I haven't completely
>> thought it through, but what I could see working for us would be a pair
>> of ioctls for marking ranges accessible/inaccessible, with
>> "accessibility" stored in some xarray (somewhat like Fuad's patches, I
>> guess? [1]).
>>
>> In a world where we have a "sharing refcount", the "make accessible"
>> ioctl reinserts into the direct map (if needed), lifts the "sharings
>> refcount" for each folio in the given gfn range, and marks the range as
>> accessible.  And the "make inaccessible" ioctl would first check that
>> userspace has unmapped all those gfns again, and if yes, mark them as
>> inaccessible, drop the "sharings refcount" by 1 for each, and removes
>> from the direct map again if it held the last reference (if userspace
>> still has some gfns mapped, the ioctl would just fail).
>>
> 
> I am warming up to the sharing refcount idea. How does the sharing
> refcount look for kvm gpc?

I've come up with the below rough draft (written as a new commit on
top of my RFC series [1], with some bits from your patch copied in).
With this, I was able to actually boot a Firecracker VM with
multiple vCPUs (which previously didn't work because of different vCPUs
putting their kvm-clock structures into the same guest page). 

Best, 
Patrick

[1]: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/T/#ma44793da6bc000a2c22b1ffe37292b9615881838

---