
[0/4] Process some MMIO-related errors without KVM exit

Message ID 20240923141810.76331-1-iorlov@amazon.com

Message

Ivan Orlov Sept. 23, 2024, 2:18 p.m. UTC
Currently, KVM may return a variety of internal errors to the VMM when
the guest accesses MMIO, and some of them could be gracefully handled at
the KVM level instead. Moreover, some of the MMIO-related errors are
handled differently in VMX than in SVM, which is inconsistent and should
be fixed. This patch series introduces KVM-level handling for the
following situations:

1) The guest accesses MMIO during event delivery: inject a triple fault
instead of returning an internal error on VMX or looping infinitely on
SVM

2) The guest fetches an instruction from MMIO: inject #UD and resume
guest execution without returning an internal error

Additionally, this patch series includes a KVM selftest which covers
different cases of MMIO misuse; a rough sketch of the event delivery
case is shown below.

Also, update set_memory_region_test to expect a triple fault when
starting a VM with no RAM.
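
For reference, a rough guest-code sketch of the event delivery case; the
MMIO address and descriptor values below are made up for illustration
and are not taken verbatim from the actual selftest:

    #include <stdint.h>

    /* Hypothetical GPA that is not backed by a memslot, so accesses to it
     * are treated as MMIO by KVM.
     */
    #define FAULTY_MMIO_GPA 0xc0000000UL

    struct desc_ptr {
        uint16_t limit;
        uint64_t base;
    } __attribute__((packed));

    static void guest_code(void)
    {
        /* Point the IDT into the unbacked (MMIO) region... */
        struct desc_ptr idt = {
            .limit = 0xfff,
            .base  = FAULTY_MMIO_GPA,
        };

        __asm__ __volatile__("lidt %0" :: "m"(idt));

        /*
         * ...and raise an exception.  Delivering the #BP now requires
         * reading the IDT descriptor from MMIO, i.e. this exercises
         * "MMIO during event delivery".
         */
        __asm__ __volatile__("int3");
    }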

Ivan Orlov (4):
  KVM: vmx, svm, mmu: Fix MMIO during event delivery handling
  KVM: x86: Inject UD when fetching from MMIO
  selftests: KVM: Change expected exit code in test_zero_memory_regions
  selftests: KVM: Add new test for faulty mmio usage

 arch/x86/include/asm/kvm_host.h               |   6 +
 arch/x86/kvm/emulate.c                        |   3 +
 arch/x86/kvm/kvm_emulate.h                    |   1 +
 arch/x86/kvm/mmu/mmu.c                        |  13 +-
 arch/x86/kvm/svm/svm.c                        |   4 +
 arch/x86/kvm/vmx/vmx.c                        |  21 +-
 arch/x86/kvm/x86.c                            |   7 +-
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/set_memory_region_test.c    |   3 +-
 .../selftests/kvm/x86_64/faulty_mmio.c        | 199 ++++++++++++++++++
 10 files changed, 242 insertions(+), 16 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/faulty_mmio.c

Comments

Sean Christopherson Sept. 26, 2024, 12:06 a.m. UTC | #1
On Tue, Sep 24, 2024, Ivan Orlov wrote:
> On Mon, Sep 23, 2024 at 02:46:17PM -0700, Sean Christopherson wrote:
>  > >
> > > > No.  This is not architectural behavior.  It's not even remotely
> > > > close to
> > > > architectural behavior.  KVM's behavior isn't great, but making up
> > > > _guest visible_
> > > > behavior is not going to happen.
> > >
> > > Is this a no to the whole series or from the cover letter?
> > 
> > The whole series.
> > 
> > > For patch 1 we have observed that if a guest has incorrectly set its
> > > IDT base to point inside of an MMIO region it will result in a triple
> > > fault (bare metal Cascade Lake Intel).
> > 
> > That happens because the IDT is garbage and/or the CPU is getting master abort
> > semantics back, not because anything in the x86 architectures says that accessing
> > MMIO during exception vectoring goes straight to shutdown.
> >
> 
> Hi Sean, thank you for the detailed reply.
> 
> Yes, I ran the reproducer on my AMD Ryzen 5 once again, and it seems like
> pointing the IDT descriptor base to a framebuffer works perfectly fine,
> without any triple faults, so injecting a triple fault into the guest is
> not a correct solution.
> 
> However, I believe KVM should demonstrate consistent behaviour in
> handling MMIO during event delivery on vmx/svm, either by returning
> an event delivery error in both cases or by going into an infinite loop
> in both cases. I'm going to test kvm/next with the commits you
> mentioned, and send a fix if it still hits an infinite loop on SVM.
> 
> Also, since detecting such an issue at the KVM level doesn't introduce
> much complexity, I reckon returning a sane error flag would be nice. For
> instance, we could set one of the 'internal.data' elements to identify
> that the problem occurred due to MMIO during event delivery.
> 
> > > Yes, a sane operating system is not really going to set its IDT
> > > or GDT base to point into an MMIO region, but we've seen occurrences,
> > > normally when other external things have gone horribly wrong.
> > >
> > > Ivan can clarify as to what's been seen on AMD platforms regarding the
> > > infinite loop for patch one. This was also tested on bare metal
> > > hardware. Injection of the #UD within patch 2 may be debatable but I
> > > believe Ivan has some more data from experiments backing this up.
> > 
> > I have no problems improving KVM's handling of scenarios that KVM can't emulate,
> > but there needs to be reasonable justification for taking on complexity, and KVM
> > must not make up guest visible behavior.
> 
> Regarding the #UD-related change: the way I formulated it in the
> cover letter and the patch is confusing, sorry. We are _already_ enqueuing
> an #UD when fetching from MMIO, so I believe it is already architecturally
> incorrect (see handle_emulation_failure in arch/x86/kvm/x86.c). However,
> we return an emulation failure after that,

Yeah, but only because the alternative sucks worse.  If KVM unconditionally exited
with an emulation error, then unsuspecting (read: old) VMMs would likely terminate
the guest, which gives guest userspace a way to DoS the entire VM, especially on
older CPUs where KVM needs to emulate much more often.

	if (kvm->arch.exit_on_emulation_error ||
	    (emulation_type & EMULTYPE_SKIP)) {
		prepare_emulation_ctxt_failure_exit(vcpu);
		return 0;
	}

	kvm_queue_exception(vcpu, UD_VECTOR);

	if (!is_guest_mode(vcpu) && kvm_x86_call(get_cpl)(vcpu) == 0) {
		prepare_emulation_ctxt_failure_exit(vcpu);
		return 0;
	}

	return 1;

And that's exactly why KVM_CAP_EXIT_ON_EMULATION_FAILURE exists.  VMMs that know
they won't unintentionally give guest userspace what amounts to a privilege
escalation can trap the emulation failure, do some logging or whatever, and then
take whatever action they want to take.
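
For context, a rough userspace-side sketch of what opting in and catching
the resulting exit looks like; the helper names are made up, and setup of
vm_fd/run (KVM_CREATE_VM, KVM_CREATE_VCPU, mmap of the run struct) is
elided:

    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    /* Ask KVM to report emulation failures in guest userspace to the VMM
     * instead of injecting a #UD into the guest.
     */
    static int enable_emulation_failure_exits(int vm_fd)
    {
        struct kvm_enable_cap cap = {
            .cap = KVM_CAP_EXIT_ON_EMULATION_FAILURE,
            .args = { 1 },
        };

        return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }

    /* Inspect a KVM_RUN exit and log emulation failures. */
    static void log_emulation_failure(struct kvm_run *run)
    {
        if (run->exit_reason != KVM_EXIT_INTERNAL_ERROR ||
            run->internal.suberror != KVM_INTERNAL_ERROR_EMULATION)
            return;

        if (run->emulation_failure.flags &
            KVM_INTERNAL_ERROR_EMULATION_FLAG_INSTRUCTION_BYTES)
            fprintf(stderr, "emulation failed, insn is %u bytes\n",
                    run->emulation_failure.insn_size);

        /* From here the VMM can log more state, emulate the instruction
         * itself, or terminate the guest.
         */
    }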

> and I believe how we do this
> is debatable. I maintain we should either set a flag in emulation_failure.flags
> to indicate that the error happened due to a fetch from MMIO (to give more
> information to the VMM),

Generally speaking, I'm not opposed to adding more information along those lines.
Though realistically, I don't know that an extra flag is warranted in this case,
as it shouldn't be _that_ hard for userspace to deduce what went wrong, especially
if KVM_TRANSLATE2[*] lands (though I'm somewhat curious as to why QEMU doesn't do
the page walks itself).

[*] https://lore.kernel.org/all/20240910152207.38974-1-nikwip@amazon.de
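
As an aside, the existing KVM_TRANSLATE vcpu ioctl can already be used
from userspace to resolve a guest linear address (e.g. the faulting RIP)
to a GPA without waiting for KVM_TRANSLATE2; a minimal sketch, assuming
vcpu_fd and rip come from the usual vCPU setup and KVM_GET_REGS:

    struct kvm_translation tr = {
        .linear_address = rip,  /* e.g. the guest RIP from KVM_GET_REGS */
    };

    if (ioctl(vcpu_fd, KVM_TRANSLATE, &tr) == 0 && tr.valid) {
        /* tr.physical_address is the GPA; userspace can then check
         * whether it falls inside any of its memslots.
         */
        printf("linear %llx -> GPA %llx\n",
               tr.linear_address, tr.physical_address);
    }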

> or we shouldn't return an error at all... Maybe it should be KVM_EXIT_MMIO with
> some flag set? What do you think?

It'd be a breaking change and added complexity, for no benefit as far as I can
tell.  KVM_EXIT_INTERNAL_ERROR is _not_ a death sentence, or at least it doesn't
have to be.  Most VMMs do terminate the guest, but nothing is stopping userspace
from grabbing RIP and emulating the instruction.  I.e. userspace doesn't need
"permission" in the form of KVM_EXIT_MMIO to try and keep the guest alive.
Ivan Orlov Sept. 27, 2024, 12:13 p.m. UTC | #2
On Wed, Sep 25, 2024 at 05:06:45PM -0700, Sean Christopherson wrote:
> 
> Yeah, but only because the alternative sucks worse.  If KVM unconditionally exited
> with an emulation error, then unsuspecting (read: old) VMMs would likely terminate
> the guest, which gives guest userspace a way to DoS the entire VM, especially on
> older CPUs where KVM needs to emulate much more often.
> 
>         if (kvm->arch.exit_on_emulation_error ||
>             (emulation_type & EMULTYPE_SKIP)) {
>                 prepare_emulation_ctxt_failure_exit(vcpu);
>                 return 0;
>         }
> 
>         kvm_queue_exception(vcpu, UD_VECTOR);
> 
>         if (!is_guest_mode(vcpu) && kvm_x86_call(get_cpl)(vcpu) == 0) {
>                 prepare_emulation_ctxt_failure_exit(vcpu);
>                 return 0;
>         }
> 
>         return 1;
> 
> And that's exactly why KVM_CAP_EXIT_ON_EMULATION_FAILURE exists.  VMMs that know
> they won't unintentionally give guest userspace what amounts to a privilege
> escalation can trap the emulation failure, do some logging or whatever, and then
> take whatever action they want to take.
> 

Hi Sean,

Makes sense, thank you for the explanation.

> > and I believe how we do this
> > is debatable. I maintain we should either set a flag in emulation_failure.flags
> > to indicate that the error happened due to a fetch from MMIO (to give more
> > information to the VMM),
> 
> Generally speaking, I'm not opposed to adding more information along those lines.
> Though realistically, I don't know that an extra flag is warranted in this case,
> as it shouldn't be _that_ hard for userspace to deduce what went wrong, especially
> if KVM_TRANSLATE2[*] lands (though I'm somewhat curious as to why QEMU doesn't do
> the page walks itself).
> 
> [*] https://lore.kernel.org/all/20240910152207.38974-1-nikwip@amazon.de
> 

Fair enough, but I still believe that it would be good to provide more
information about the failure to the VMM (considering that KVM tries to
emulate the instruction anyway, adding a flag won't introduce any
performance overhead). I'll think about how we could do that without
being redundant :)

> > or we shouldn't return an error at all... Maybe it should be KVM_EXIT_MMIO with
> > some flag set? What do you think?
> 
> It'd be a breaking change and added complexity, for no benefit as far as I can
> tell.  KVM_EXIT_INTERNAL_ERROR is _not_ a death sentence, or at least it doesn't
> have to be.  Most VMMs do terminate the guest, but nothing is stopping userspace
> from grabbing RIP and emulating the instruction.  I.e. userspace doesn't need
> "permission" in the form of KVM_EXIT_MMIO to try and keep the guest alive.

Yeah, I just thought that "internal error" is not the best exit code for
situations where the guest fetches from MMIO (since it is a perfectly
legal operation from the architectural point of view). But I agree that
it would be a breaking change without functional benefit (especially if
we provide a flag about what happened :) ).

P.S. I tested the latest kvm/next, and if we set the IDT descriptor base to
an MMIO address it still falls into an infinite loop on SVM. I'm going
to send a fix in the next couple of days.

Kind regards,
Ivan Orlov