[RFC,V1,0/5] selftests: KVM: selftests for fd-based approach of supporting private memory

Message ID 20220408210545.3915712-1-vannapurve@google.com

Message

Vishal Annapurve April 8, 2022, 9:05 p.m. UTC
This series implements selftests targeting the feature floated by Chao
via:
https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/

The changes below aim to test the fd-based approach for guest private memory
in the context of normal (non-confidential) VMs executing on non-confidential
platforms.

Confidential platforms, along with a confidentiality-aware software stack,
support a notion of private/shared accesses from the confidential VMs.
Generally, a bit in the GPA conveys whether an access is shared or private.
Non-confidential platforms have no such notion of private or shared accesses
from the guest VMs. To support this notion, KVM_HC_MAP_GPA_RANGE is modified
to allow marking accesses from a VM within a GPA range as always shared or
private. Any suggestions for implementing this hypercall more cleanly, or an
alternative to it, are appreciated.
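
For illustration only, a guest-side call might look roughly like the sketch
below. The inline asm follows the standard KVM hypercall ABI (nr in rax,
args in rbx/rcx/rdx); KVM_MAP_GPA_RANGE_ENCRYPTED is the existing attribute
flag from arch/x86/include/uapi/asm/kvm_para.h, and the "always
private/shared" semantics are the hack added in patch 1, so the flag actually
used there may differ:

#include <stdint.h>
#include <linux/kvm_para.h>	/* KVM_HC_MAP_GPA_RANGE */
#include <asm/kvm_para.h>	/* KVM_MAP_GPA_RANGE_ENCRYPTED */

/*
 * Sketch only: guest-side invocation of the modified KVM_HC_MAP_GPA_RANGE.
 * Uses vmcall (AMD guests would need vmmcall); return value comes back in rax.
 */
static uint64_t map_gpa_range(uint64_t gpa, uint64_t npages, uint64_t attrs)
{
	uint64_t ret;

	asm volatile("vmcall"
		     : "=a"(ret)
		     : "a"(KVM_HC_MAP_GPA_RANGE), "b"(gpa), "c"(npages),
		       "d"(attrs)
		     : "memory");
	return ret;
}

/* e.g. flip one 4K page at GPA 0xc0000000 to always-private: */
/*	map_gpa_range(0xc0000000, 1, KVM_MAP_GPA_RANGE_ENCRYPTED);	*/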

The priv_memfd_test.c file adds a suite of two basic selftests that access
private memory from the guest via private/shared accesses and check whether
the contents can be leaked to, or accessed by, the VMM via the shared memory
view.
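
For reference, the VMM-side "no leak" check is conceptually just a scan of the
shared view. A minimal sketch follows; addr_gpa2hva() and TEST_ASSERT() are
the standard selftest helpers, and GUEST_PATTERN is a placeholder for whatever
the guest writes through its private mapping:

#include <stdint.h>
#include "kvm_util.h"

#define GUEST_PATTERN	0xaa

/* Sketch: after the guest writes GUEST_PATTERN via its private access,
 * the shared view that the VMM sees must not contain that pattern.
 */
static void check_no_leak(struct kvm_vm *vm, uint64_t gpa, uint64_t size)
{
	uint8_t *hva = addr_gpa2hva(vm, gpa);
	uint64_t i;

	for (i = 0; i < size; i++)
		TEST_ASSERT(hva[i] != GUEST_PATTERN,
			    "Private contents leaked to shared view at offset 0x%lx", i);
}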

Test results:
1) PMPAT - PrivateMemoryPrivateAccess test passes
2) PMSAT - PrivateMemorySharedAccess test currently fails and needs more
analysis to understand the reason for the failure.

Important: the patch below is needed to avoid a host kernel crash while
running these tests:
https://github.com/vishals4gh/linux/commit/b9adedf777ad84af39042e9c19899600a4add68a

GitHub link for the patches posted as part of this series:
https://github.com/vishals4gh/linux/commits/priv_memfd_selftests_v1
Note that this series depends on Chao's v5 patches mentioned above, applied
on top of 5.17.

Vishal Annapurve (5):
  x86: kvm: HACK: Allow testing of priv memfd approach
  selftests: kvm: Fix inline assembly for hypercall
  selftests: kvm: Add a basic selftest test priv memfd
  selftests: kvm: priv_memfd_test: Add support for memory conversion
  selftests: kvm: priv_memfd_test: Add shared access test

 arch/x86/include/uapi/asm/kvm_para.h          |   1 +
 arch/x86/kvm/mmu/mmu.c                        |   9 +-
 arch/x86/kvm/x86.c                            |  16 +-
 include/linux/kvm_host.h                      |   3 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/lib/x86_64/processor.c      |   2 +-
 tools/testing/selftests/kvm/priv_memfd_test.c | 410 ++++++++++++++++++
 virt/kvm/kvm_main.c                           |   2 +-
 8 files changed, 436 insertions(+), 8 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/priv_memfd_test.c

Comments

Nikunj A. Dadhania April 11, 2022, 12:01 p.m. UTC | #1
On 4/9/2022 2:35 AM, Vishal Annapurve wrote:
> This series implements selftests targeting the feature floated by Chao
> via:
> https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/
> 

Thanks for working on this.

> Below changes aim to test the fd based approach for guest private memory
> in context of normal (non-confidential) VMs executing on non-confidential
> platforms.
> 
> Confidential platforms along with the confidentiality aware software
> stack support a notion of private/shared accesses from the confidential
> VMs.
> Generally, a bit in the GPA conveys the shared/private-ness of the
> access. Non-confidential platforms don't have a notion of private or
> shared accesses from the guest VMs. To support this notion,
> KVM_HC_MAP_GPA_RANGE
> is modified to allow marking an access from a VM within a GPA range as
> always shared or private. Any suggestions regarding implementing this ioctl
> alternatively/cleanly are appreciated.
> 
> priv_memfd_test.c file adds a suite of two basic selftests to access private
> memory from the guest via private/shared access and checking if the contents
> can be leaked to/accessed by vmm via shared memory view.
> 
> Test results:
> 1) PMPAT - PrivateMemoryPrivateAccess test passes
> 2) PMSAT - PrivateMemorySharedAccess test fails currently and needs more
> analysis to understand the reason of failure.

That could be because of the return code (*r = -1) set along with
KVM_EXIT_MEMORY_ERROR. Since -1 is -EPERM, this gets interpreted as an EPERM
failure of KVM_RUN in the VMM when vcpu_run exits.

	+	vcpu->run->exit_reason = KVM_EXIT_MEMORY_ERROR;
	+	vcpu->run->memory.flags = flags;
	+	vcpu->run->memory.padding = 0;
	+	vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT;
	+	vcpu->run->memory.size = PAGE_SIZE;
	+	fault->pfn = -1;
	+	*r = -1;
	+	return true;


Regards
Nikunj

[1] https://lore.kernel.org/all/20220310140911.50924-10-chao.p.peng@linux.intel.com/#t
Chao Peng April 12, 2022, 8:25 a.m. UTC | #2
On Mon, Apr 11, 2022 at 05:31:09PM +0530, Nikunj A. Dadhania wrote:
> On 4/9/2022 2:35 AM, Vishal Annapurve wrote:
> > This series implements selftests targeting the feature floated by Chao
> > via:
> > https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/
> > 
> 
> Thanks for working on this.
> 
> > Below changes aim to test the fd based approach for guest private memory
> > in context of normal (non-confidential) VMs executing on non-confidential
> > platforms.
> > 
> > Confidential platforms along with the confidentiality aware software
> > stack support a notion of private/shared accesses from the confidential
> > VMs.
> > Generally, a bit in the GPA conveys the shared/private-ness of the
> > access. Non-confidential platforms don't have a notion of private or
> > shared accesses from the guest VMs. To support this notion,
> > KVM_HC_MAP_GPA_RANGE
> > is modified to allow marking an access from a VM within a GPA range as
> > always shared or private. Any suggestions regarding implementing this ioctl
> > alternatively/cleanly are appreciated.
> > 
> > priv_memfd_test.c file adds a suite of two basic selftests to access private
> > memory from the guest via private/shared access and checking if the contents
> > can be leaked to/accessed by vmm via shared memory view.
> > 
> > Test results:
> > 1) PMPAT - PrivateMemoryPrivateAccess test passes
> > 2) PMSAT - PrivateMemorySharedAccess test fails currently and needs more
> > analysis to understand the reason of failure.
> 
> That could be because of the return code (*r = -1) from the KVM_EXIT_MEMORY_ERROR. 
> This gets interpreted as -EPERM in the VMM when the vcpu_run exits.
> 
> 	+	vcpu->run->exit_reason = KVM_EXIT_MEMORY_ERROR;
> 	+	vcpu->run->memory.flags = flags;
> 	+	vcpu->run->memory.padding = 0;
> 	+	vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT;
> 	+	vcpu->run->memory.size = PAGE_SIZE;
> 	+	fault->pfn = -1;
> 	+	*r = -1;
> 	+	return true;

That's true. The current private mem patches treat KVM_EXIT_MEMORY_ERROR as an
error for KVM_RUN. That behavior needs to be discussed, but right now (v5) it
hits the ASSERT in tools/testing/selftests/kvm/lib/kvm_util.c before you have
a chance to handle KVM_EXIT_MEMORY_ERROR in this patch series.

void vcpu_run(struct kvm_vm *vm, uint32_t vcpuid)
{
        int ret = _vcpu_run(vm, vcpuid);
        TEST_ASSERT(ret == 0, "KVM_RUN IOCTL failed, "
                "rc: %i errno: %i", ret, errno);
}
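
One way to sidestep that in the selftest, until the KVM_RUN behavior is
settled, is to call _vcpu_run() directly and treat the -EPERM plus
KVM_EXIT_MEMORY_ERROR combination as the expected outcome. A rough sketch
(exit-reason and run-struct field names per the v5 proposal):

#include <errno.h>

static void run_vcpu_expect_memory_error(struct kvm_vm *vm, uint32_t vcpuid)
{
	struct kvm_run *run = vcpu_state(vm, vcpuid);
	int ret = _vcpu_run(vm, vcpuid);

	/* KVM currently returns -1 (-EPERM) together with the new exit reason. */
	TEST_ASSERT(ret == -1 && errno == EPERM,
		    "Unexpected KVM_RUN result, rc: %i errno: %i", ret, errno);
	TEST_ASSERT(run->exit_reason == KVM_EXIT_MEMORY_ERROR,
		    "Unexpected exit reason: %u", run->exit_reason);
	/* run->memory.gpa / run->memory.size describe the faulting range. */
}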

Thanks,
Chao

> 
> 
> Regards
> Nikunj
> 
> [1] https://lore.kernel.org/all/20220310140911.50924-10-chao.p.peng@linux.intel.com/#t
Andy Lutomirski April 13, 2022, 12:16 a.m. UTC | #3
On Fri, Apr 8, 2022, at 2:05 PM, Vishal Annapurve wrote:
> This series implements selftests targeting the feature floated by Chao
> via:
> https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/
>
> Below changes aim to test the fd based approach for guest private memory
> in context of normal (non-confidential) VMs executing on non-confidential
> platforms.
>
> Confidential platforms along with the confidentiality aware software
> stack support a notion of private/shared accesses from the confidential
> VMs.
> Generally, a bit in the GPA conveys the shared/private-ness of the
> access. Non-confidential platforms don't have a notion of private or
> shared accesses from the guest VMs. To support this notion,
> KVM_HC_MAP_GPA_RANGE
> is modified to allow marking an access from a VM within a GPA range as
> always shared or private. Any suggestions regarding implementing this ioctl
> alternatively/cleanly are appreciated.

This is fantastic.  I do think we need to decide how this should work in general.  We have a few platforms with somewhat different properties:

TDX: The guest decides, per memory access (using a GPA bit), whether an access is private or shared.  In principle, the same address could be *both* and be distinguished by only that bit, and the two addresses would refer to different pages.

SEV: The guest decides, per memory access (using a GPA bit), whether an access is private or shared.  At any given time, a physical address (with that bit masked off) can be private, shared, or invalid, but it can't be valid as private and shared at the same time.

pKVM (currently, as I understand it): the guest decides by hypercall, in advance of an access, which addresses are private and which are shared.

This series, if I understood it correctly, is like TDX except with no hardware security.
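
(Purely illustrative sketch of the TDX/SEV-style GPA aliasing described above;
the actual bit position is platform-specific, e.g. SEV reports the C-bit
position via CPUID 0x8000001F and TDX's shared-bit position depends on the
GPAW:)

#include <stdbool.h>
#include <stdint.h>

/* Two guest addresses that differ only in the shared bit refer to the same
 * slot but select the private vs. shared view of it.
 */
static inline bool gpa_is_shared(uint64_t gpa, unsigned int shared_bit)
{
	return gpa & (1ULL << shared_bit);
}

static inline uint64_t gpa_strip_shared_bit(uint64_t gpa, unsigned int shared_bit)
{
	return gpa & ~(1ULL << shared_bit);
}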

Sean or Chao, do you have a clear sense of whether the current fd-based private memory proposal can cleanly support SEV and pKVM?  What, if anything, needs to be done on the API side to get that working well?  I don't think we need to support SEV or pKVM right away to get this merged, but I do think we should understand how the API can map to them.
Michael Roth April 13, 2022, 1:42 p.m. UTC | #4
On Tue, Apr 12, 2022 at 05:16:22PM -0700, Andy Lutomirski wrote:
> On Fri, Apr 8, 2022, at 2:05 PM, Vishal Annapurve wrote:
> > This series implements selftests targeting the feature floated by Chao
> > via:
> > https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/
> >
> > Below changes aim to test the fd based approach for guest private memory
> > in context of normal (non-confidential) VMs executing on non-confidential
> > platforms.
> >
> > Confidential platforms along with the confidentiality aware software
> > stack support a notion of private/shared accesses from the confidential
> > VMs.
> > Generally, a bit in the GPA conveys the shared/private-ness of the
> > access. Non-confidential platforms don't have a notion of private or
> > shared accesses from the guest VMs. To support this notion,
> > KVM_HC_MAP_GPA_RANGE
> > is modified to allow marking an access from a VM within a GPA range as
> > always shared or private. Any suggestions regarding implementing this ioctl
> > alternatively/cleanly are appreciated.
> 
> This is fantastic.  I do think we need to decide how this should work in general.  We have a few platforms with somewhat different properties:
> 
> TDX: The guest decides, per memory access (using a GPA bit), whether an access is private or shared.  In principle, the same address could be *both* and be distinguished by only that bit, and the two addresses would refer to different pages.
> 
> SEV: The guest decides, per memory access (using a GPA bit), whether an access is private or shared.  At any given time, a physical address (with that bit masked off) can be private, shared, or invalid, but it can't be valid as private and shared at the same time.
> 
> pKVM (currently, as I understand it): the guest decides by hypercall, in advance of an access, which addresses are private and which are shared.
> 
> This series, if I understood it correctly, is like TDX except with no hardware security.
> 
> Sean or Chao, do you have a clear sense of whether the current fd-based private memory proposal can cleanly support SEV and pKVM?  What, if anything, needs to be done on the API side to get that working well?  I don't think we need to support SEV or pKVM right away to get this merged, but I do think we should understand how the API can map to them.

I've been looking at porting the SEV-SNP hypervisor patches over to
using memfd, and I hit an issue that I think is generally applicable
to SEV/SEV-ES as well. Namely at guest init time we have something
like the following flow:

  VMM:
    - allocate shared memory to back the guest and map it into guest
      address space
    - initialize shared memory with the initial memory contents (namely
      the BIOS)
    - ask KVM to encrypt these pages in-place and measure them to
      generate the initial measured payload for attestation, via
      KVM_SEV_LAUNCH_UPDATE with the GPA for each range of memory to
      encrypt.
  KVM:
    - issue SEV_LAUNCH_UPDATE firmware command, which takes an HPA as
      input and does an in-place encryption/measure of the page.

With the current v5 of the memfd/UPM series, I think the expected flow is that
we would fallocate() these ranges from the private fd backend in advance of
calling KVM_SEV_LAUNCH_UPDATE (if the VMM does it afterwards, we'd destroy the
initial guest payload, since the pages would be replaced by newly-allocated
ones). But if the VMM does it before, the VMM has no way to initialize the
guest memory contents, since mmap()/pwrite() are disallowed due to
MFD_INACCESSIBLE.
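
To make the ordering problem concrete, here's a rough sketch (MFD_INACCESSIBLE
is the flag from Chao's v5 series; the bios_* arguments are placeholders):

#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: once the memfd is created MFD_INACCESSIBLE, the VMM can still
 * reserve backing for the BIOS region, but has no way left to seed the
 * initial payload that KVM_SEV_LAUNCH_UPDATE is supposed to encrypt and
 * measure in place.
 */
static int make_private_bios_fd(const void *bios_image, size_t bios_size,
				off_t bios_off)
{
	int fd = memfd_create("guest-private", MFD_INACCESSIBLE);

	/* Reserving the range still works... */
	assert(fallocate(fd, 0, bios_off, bios_size) == 0);

	/* ...but both ways of writing the payload from userspace fail. */
	assert(mmap(NULL, bios_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, bios_off) == MAP_FAILED);
	assert(pwrite(fd, bios_image, bios_size, bios_off) < 0);

	return fd;	/* allocated, but contents never initialized */
}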

I think something similar to your proposal[1] here of making pread()/pwrite()
possible for private-fd-backed memory that's been flagged as "shareable"
would work for this case. Although here the "shareable" flag could be
removed immediately upon successful completion of the SEV_LAUNCH_UPDATE
firmware command.

I think with TDX this isn't an issue because their analogous TDH.MEM.PAGE.ADD
seamcall takes a pair of source/dest HPA as input params, so the VMM
wouldn't need write access to dest HPA at any point, just source HPA.

[1] https://lwn.net/ml/linux-kernel/eefc3c74-acca-419c-8947-726ce2458446@www.fastmail.com/
Chao Peng April 14, 2022, 10:07 a.m. UTC | #5
On Wed, Apr 13, 2022 at 08:42:00AM -0500, Michael Roth wrote:
> On Tue, Apr 12, 2022 at 05:16:22PM -0700, Andy Lutomirski wrote:
> > On Fri, Apr 8, 2022, at 2:05 PM, Vishal Annapurve wrote:
> > > This series implements selftests targeting the feature floated by Chao
> > > via:
> > > https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.intel.com/
> > >
> > > Below changes aim to test the fd based approach for guest private memory
> > > in context of normal (non-confidential) VMs executing on non-confidential
> > > platforms.
> > >
> > > Confidential platforms along with the confidentiality aware software
> > > stack support a notion of private/shared accesses from the confidential
> > > VMs.
> > > Generally, a bit in the GPA conveys the shared/private-ness of the
> > > access. Non-confidential platforms don't have a notion of private or
> > > shared accesses from the guest VMs. To support this notion,
> > > KVM_HC_MAP_GPA_RANGE
> > > is modified to allow marking an access from a VM within a GPA range as
> > > always shared or private. Any suggestions regarding implementing this ioctl
> > > alternatively/cleanly are appreciated.
> > 
> > This is fantastic.  I do think we need to decide how this should work in general.  We have a few platforms with somewhat different properties:
> > 
> > TDX: The guest decides, per memory access (using a GPA bit), whether an access is private or shared.  In principle, the same address could be *both* and be distinguished by only that bit, and the two addresses would refer to different pages.
> > 
> > SEV: The guest decides, per memory access (using a GPA bit), whether an access is private or shared.  At any given time, a physical address (with that bit masked off) can be private, shared, or invalid, but it can't be valid as private and shared at the same time.
> > 
> > pKVM (currently, as I understand it): the guest decides by hypercall, in advance of an access, which addresses are private and which are shared.
> > 
> > This series, if I understood it correctly, is like TDX except with no hardware security.
> > 
> > Sean or Chao, do you have a clear sense of whether the current fd-based private memory proposal can cleanly support SEV and pKVM?  What, if anything, needs to be done on the API side to get that working well?  I don't think we need to support SEV or pKVM right away to get this merged, but I do think we should understand how the API can map to them.
> 
> I've been looking at porting the SEV-SNP hypervisor patches over to
> using memfd, and I hit an issue that I think is generally applicable
> to SEV/SEV-ES as well. Namely at guest init time we have something
> like the following flow:
> 
>   VMM:
>     - allocate shared memory to back the guest and map it into guest
>       address space
>     - initialize shared memory with the initial memory contents (namely
>       the BIOS)
>     - ask KVM to encrypt these pages in-place and measure them to
>       generate the initial measured payload for attestation, via
>       KVM_SEV_LAUNCH_UPDATE with the GPA for each range of memory to
>       encrypt.
>   KVM:
>     - issue SEV_LAUNCH_UPDATE firmware command, which takes an HPA as
>       input and does an in-place encryption/measure of the page.
> 
> With current v5 of the memfd/UPM series, I think the expected flow is that
> we would fallocate() these ranges from the private fd backend in advance of
> calling KVM_SEV_LAUNCH_UPDATE (if VMM does it after we'd destroy the initial
> guest payload, since they'd be replaced by newly-allocated pages). But if
> VMM does it before, VMM has no way to initialize the guest memory contents,
> since mmap()/pwrite() are disallowed due to MFD_INACCESSIBLE.

OK, so for SEV, basically the VMM puts the vBIOS directly into guest memory
and then does in-place measurement.

TDX has no problem because TDX temporarily uses a VMM buffer (vs. guest memory)
to hold the vBIOS and then asks the SEAM module to measure and copy it into
guest memory.

Maybe something like SHM_LOCK should be used instead of the aggressive
MFD_INACCESSIBLE. Before the VMM calls SHM_LOCK on the memfd, the content can
be changed, but after that it is no longer visible to the userspace VMM. This
gives userspace a chance to modify the data in the private pages beforehand.
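
A userspace-visible analogue of that "populate, then lock" idea, using today's
memfd write seals rather than whatever form the final interface takes (seals
only stop further userspace writes; unlike MFD_INACCESSIBLE they don't hide
the pages from the host, so this sketch just illustrates the flow):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: write the initial payload while the fd is still writable, then
 * seal it so the userspace VMM can no longer modify (or grow/shrink) it.
 * Error handling omitted for brevity.
 */
static int populated_then_locked_fd(const void *payload, size_t size)
{
	int fd = memfd_create("guest-private", MFD_ALLOW_SEALING);

	ftruncate(fd, size);
	pwrite(fd, payload, size, 0);

	fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE | F_SEAL_SHRINK | F_SEAL_GROW);
	return fd;
}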

Chao
> 
> I think something similar to your proposal[1] here of making pread()/pwrite()
> possible for private-fd-backed memory that's been flagged as "shareable"
> would work for this case. Although here the "shareable" flag could be
> removed immediately upon successful completion of the SEV_LAUNCH_UPDATE
> firmware command.
> 
> I think with TDX this isn't an issue because their analogous TDH.MEM.PAGE.ADD
> seamcall takes a pair of source/dest HPA as input params, so the VMM
> wouldn't need write access to dest HPA at any point, just source HPA.
> 
> [1] https://lwn.net/ml/linux-kernel/eefc3c74-acca-419c-8947-726ce2458446@www.fastmail.com/