diff mbox series

[RFC,3/5] selftests: KVM: SEV IOCTL test

Message ID 20240710220540.188239-4-pratikrajesh.sampat@amd.com
State New
Series [RFC,1/5] selftests: KVM: Add a basic SNP smoke test

Commit Message

Pratik R. Sampat July 10, 2024, 10:05 p.m. UTC
Introduce tests for the SEV and SEV-ES ioctls that exercise the boot
path of launch, update, and finish on an invalid policy.

Signed-off-by: Pratik R. Sampat <pratikrajesh.sampat@amd.com>
---
 .../selftests/kvm/x86_64/sev_smoke_test.c     | 57 +++++++++++++++++++
 1 file changed, 57 insertions(+)

Comments

Sean Christopherson Aug. 9, 2024, 3:45 p.m. UTC | #1
On Thu, Jul 11, 2024, Pratik Rajesh Sampat wrote:
> >> +static void sev_guest_status_assert(struct kvm_vm *vm, uint32_t type)
> >> +{
> >> +	struct kvm_sev_guest_status status;
> >> +	bool cond;
> >> +	int ret;
> >> +
> >> +	ret = __vm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &status);
> >> +	cond = type == KVM_X86_SEV_VM ? !ret : ret;
> >> +	TEST_ASSERT(cond,
> >> +		    "KVM_SEV_GUEST_STATUS should fail, invalid VM Type.");
> >> +}
> >> +
> >> +static void test_sev_launch(void *guest_code, uint32_t type, uint64_t policy)
> >> +{
> >> +	struct kvm_vcpu *vcpu;
> >> +	struct kvm_vm *vm;
> >> +	struct ucall uc;
> >> +	bool cond;
> >> +	int ret;
> >> +
> > 
> > Maybe a block comment here indicating what you're actually doing would
> > be good, because I'm a bit confused.
> > 
> > A policy value of 0 is valid for SEV, so you expect each call to
> > succeed, right? And, actually, for SEV-ES the launch start will succeed,
> > too, but the launch update will fail because LAUNCH_UPDATE_VMSA is not
> > valid for SEV, but then the launch measure should succeed. Is that
> > right? What about the other calls?
> > 
> 
> Sure, I can do that.
> Yes for SEV, the policy value of 0 succeeds for everything except when
> we try to run and we see a KVM_EXIT_IO.
> 
> For SEV-ES, with the policy value of 0 - we don't see launch_start
> succeed. It fails with EIO in this case. After that, all subsequent
> calls for SEV-ES fail as well. I guess the core idea behind this
> test is to ensure that once the first bad case of launch_start fails, we
> should see a cascading list of failures.
>
> >> +	vm = vm_sev_create_with_one_vcpu(type, guest_code, &vcpu);
> >> +	ret = sev_vm_launch_start(vm, 0);
> >> +	cond = type == KVM_X86_SEV_VM ? !ret : ret;
> >> +	TEST_ASSERT(cond,

Don't bury the result in a local boolean.  It's confusing, and _worse_ for debug
as it makes it impossible to see what actually failed (the assert message will
simply print "cond", which is useless).


> >> +		    "KVM_SEV_LAUNCH_START should fail, invalid policy.");

This is a blatant lie, because the KVM_X86_SEV_VM case apparently expects success.
Similar to Tom's comments about explaining what this code is doing, these assert
messages need to explain what the actual expected result is, provide a hint as
to _why_ that result is expected, and print the result.  As is, this will be
unnecessarily difficult to debug if/when it fails.
Pratik R. Sampat Aug. 13, 2024, 3:23 p.m. UTC | #2
On 8/9/2024 10:45 AM, Sean Christopherson wrote:
> On Thu, Jul 11, 2024, Pratik Rajesh Sampat wrote:
>>>> +static void sev_guest_status_assert(struct kvm_vm *vm, uint32_t type)
>>>> +{
>>>> +	struct kvm_sev_guest_status status;
>>>> +	bool cond;
>>>> +	int ret;
>>>> +
>>>> +	ret = __vm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &status);
>>>> +	cond = type == KVM_X86_SEV_VM ? !ret : ret;
>>>> +	TEST_ASSERT(cond,
>>>> +		    "KVM_SEV_GUEST_STATUS should fail, invalid VM Type.");
>>>> +}
>>>> +
>>>> +static void test_sev_launch(void *guest_code, uint32_t type, uint64_t policy)
>>>> +{
>>>> +	struct kvm_vcpu *vcpu;
>>>> +	struct kvm_vm *vm;
>>>> +	struct ucall uc;
>>>> +	bool cond;
>>>> +	int ret;
>>>> +
>>>
>>> Maybe a block comment here indicating what you're actually doing would
>>> be good, because I'm a bit confused.
>>>
>>> A policy value of 0 is valid for SEV, so you expect each call to
>>> succeed, right? And, actually, for SEV-ES the launch start will succeed,
>>> too, but the launch update will fail because LAUNCH_UPDATE_VMSA is not
>>> valid for SEV, but then the launch measure should succeed. Is that
>>> right? What about the other calls?
>>>
>>
>> Sure, I can do that.
>> Yes for SEV, the policy value of 0 succeeds for everything except when
>> we try to run and we see a KVM_EXIT_IO.
>>
>> For SEV-ES, with the policy value of 0 - we don't see launch_start
>> succeed. It fails with EIO in this case. After that, all subsequent
>> calls for SEV-ES fail as well. I guess the core idea behind this
>> test is to ensure that once the first bad case of launch_start fails, we
>> should see a cascading list of failures.
>>
>>>> +	vm = vm_sev_create_with_one_vcpu(type, guest_code, &vcpu);
>>>> +	ret = sev_vm_launch_start(vm, 0);
>>>> +	cond = type == KVM_X86_SEV_VM ? !ret : ret;
>>>> +	TEST_ASSERT(cond,
> 
> Don't bury the result in a local boolean.  It's confusing, and _worse_ for debug
> as it makes it impossible to see what actually failed (the assert message will
> simply print "cond", which is useless).
> 

Ack, I will make sure all the other occurrences of similar booleans are
also removed and the conditions themselves are passed into the assert.

> 
>>>> +		    "KVM_SEV_LAUNCH_START should fail, invalid policy.");
> 
> This is a blatant lie, because the KVM_X86_SEV_VM case apparently expects success.
> Similar to Tom's comments about explaining what this code is doing, these assert
> messages need to explain what the actual expected result is, provide a hint as
> to _why_ that result is expected, and print the result.  As is, this will be
> unnecessarily difficult to debug if/when it fails.

Right. I'll make the error messages more reflective of what they actually
check, and add an explanation of why we expect this behavior.

Thanks!
- Pratik
Patch

diff --git a/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c b/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
index 1a50a280173c..500c67b3793b 100644
--- a/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86_64/sev_smoke_test.c
@@ -131,12 +131,69 @@  static void test_sync_vmsa(uint32_t type, uint32_t policy)
 	kvm_vm_free(vm);
 }
 
+static void sev_guest_status_assert(struct kvm_vm *vm, uint32_t type)
+{
+	struct kvm_sev_guest_status status;
+	bool cond;
+	int ret;
+
+	ret = __vm_sev_ioctl(vm, KVM_SEV_GUEST_STATUS, &status);
+	cond = type == KVM_X86_SEV_VM ? !ret : ret;
+	TEST_ASSERT(cond,
+		    "KVM_SEV_GUEST_STATUS should fail, invalid VM Type.");
+}
+
+static void test_sev_launch(void *guest_code, uint32_t type, uint64_t policy)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	struct ucall uc;
+	bool cond;
+	int ret;
+
+	vm = vm_sev_create_with_one_vcpu(type, guest_code, &vcpu);
+	ret = sev_vm_launch_start(vm, 0);
+	cond = type == KVM_X86_SEV_VM ? !ret : ret;
+	TEST_ASSERT(cond,
+		    "KVM_SEV_LAUNCH_START should fail, invalid policy.");
+
+	ret = sev_vm_launch_update(vm, policy);
+	cond = type == KVM_X86_SEV_VM ? !ret : ret;
+	TEST_ASSERT(cond,
+		    "KVM_SEV_LAUNCH_UPDATE should fail, invalid policy.");
+	sev_guest_status_assert(vm, type);
+
+	ret = sev_vm_launch_measure(vm, alloca(256));
+	cond = type == KVM_X86_SEV_VM ? !ret : ret;
+	TEST_ASSERT(cond,
+		    "KVM_SEV_LAUNCH_MEASURE should fail, invalid policy.");
+	sev_guest_status_assert(vm, type);
+
+	ret = sev_vm_launch_finish(vm);
+	cond = type == KVM_X86_SEV_VM ? !ret : ret;
+	TEST_ASSERT(cond,
+		    "KVM_SEV_LAUNCH_FINISH should fail, invalid policy.");
+	sev_guest_status_assert(vm, type);
+
+	vcpu_run(vcpu);
+	get_ucall(vcpu, &uc);
+	cond = type == KVM_X86_SEV_VM ?
+		vcpu->run->exit_reason == KVM_EXIT_IO :
+		vcpu->run->exit_reason == KVM_EXIT_FAIL_ENTRY;
+	TEST_ASSERT(cond,
+		    "vcpu_run should fail, invalid policy.");
+
+	kvm_vm_free(vm);
+}
+
 static void test_sev(void *guest_code, uint32_t type, uint64_t policy)
 {
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 	struct ucall uc;
 
+	test_sev_launch(guest_code, type, policy);
+
 	vm = vm_sev_create_with_one_vcpu(type, guest_code, &vcpu);
 
 	/* TODO: Validate the measurement is as expected. */