[RFC,v1,0/4] Add support for the Bus Lock Threshold

Message ID 20240709175145.9986-1-manali.shukla@amd.com

Message

Manali Shukla July 9, 2024, 5:51 p.m. UTC
Malicious guests can cause bus locks to degrade the performance of a
system. Non-WB (write-back) and misaligned locked RMW
(read-modify-write) instructions are referred to as "bus locks" and
require system-wide synchronization among all processors to guarantee
atomicity. Bus locks can impose notable performance penalties for all
processors within the system.

Support for the Bus Lock Threshold is indicated by CPUID
Fn8000_000A_EDX[29] BusLockThreshold=1. The VMCB provides a Bus Lock
Threshold enable bit and an unsigned 16-bit Bus Lock Threshold count.

VMCB intercept bit
    VMCB Offset     Bits    Function
    14h             5       Intercept bus lock operations

Bus lock threshold count
    VMCB Offset     Bits    Function
    120h            15:0    Bus lock counter

During VMRUN, the bus lock threshold count is fetched and stored in an
internal count register.  Before executing a bus lock within the
guest, the processor checks the count in this internal register. If the
count is greater than zero, the processor executes the bus lock and
decrements the count. If the count is zero, the bus lock is not
executed, and a Bus Lock Threshold #VMEXIT is instead triggered to
transfer control to the Virtual Machine Monitor (VMM).

A Bus Lock Threshold #VMEXIT is reported to the VMM with VMEXIT code
0xA5 (VMEXIT_BUSLOCK). EXITINFO1 and EXITINFO2 are set to 0 on
a VMEXIT_BUSLOCK.  On a #VMEXIT, the processor writes the current
value of the Bus Lock Threshold Counter to the VMCB.
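
In pseudocode, the per-bus-lock check behaves roughly as follows (an
illustrative sketch of the architectural behavior described above, with
made-up names; this is not kernel or microcode source):

    if (bus_lock_intercept_enabled) {
            if (internal_bus_lock_count > 0) {
                    internal_bus_lock_count--;      /* bus lock executes normally */
            } else {
                    /* EXITCODE 0xA5 (VMEXIT_BUSLOCK), EXITINFO1/2 = 0 */
                    vmexit(VMEXIT_BUSLOCK);
            }
    }
    /* On a #VMEXIT, the current count is written back to VMCB offset 120h. */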

More details about the Bus Lock Threshold feature can be found in AMD
APM [1].

Patches are prepared on kvm-x86/svm (704ec48fc2fb)

Testing done:
- Added a selftest for the Bus Lock Threshold functionality.
- Tested the Bus Lock Threshold functionality on SEV and SEV-ES guests.
- Tested the Bus Lock Threshold functionality on nested guests.

Qemu changes can be found on:
Repo: https://github.com/AMDESE/qemu.git
Branch: buslock_threshold

Qemu commandline to use the bus lock threshold functionality:
qemu-system-x86_64 -enable-kvm -cpu EPYC-Turin,+svm -M q35,bus-lock-ratelimit=10 \ ..

[1]: AMD64 Architecture Programmer's Manual Pub. 24593, April 2024,
     Vol 2, 15.14.5 Bus Lock Threshold.
     https://bugzilla.kernel.org/attachment.cgi?id=306250

Manali Shukla (2):
  x86/cpufeatures: Add CPUID feature bit for the Bus Lock Threshold
  KVM: x86: nSVM: Implement support for nested Bus Lock Threshold

Nikunj A Dadhania (2):
  KVM: SVM: Enable Bus lock threshold exit
  KVM: selftests: Add bus lock exit test

 arch/x86/include/asm/cpufeatures.h            |   1 +
 arch/x86/include/asm/svm.h                    |   5 +-
 arch/x86/include/uapi/asm/svm.h               |   2 +
 arch/x86/kvm/governed_features.h              |   1 +
 arch/x86/kvm/svm/nested.c                     |  25 ++++
 arch/x86/kvm/svm/svm.c                        |  48 ++++++++
 arch/x86/kvm/svm/svm.h                        |   1 +
 arch/x86/kvm/x86.h                            |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/x86_64/svm_buslock_test.c   | 114 ++++++++++++++++++
 10 files changed, 198 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/svm_buslock_test.c


base-commit: 704ec48fc2fbd4e41ec982662ad5bf1eee33eeb2

Comments

Manali Shukla July 30, 2024, 4:52 a.m. UTC | #1
On 7/9/2024 11:21 PM, Manali Shukla wrote:
> Malicious guests can cause bus locks to degrade the performance of a
> system. Non-WB (write-back) and misaligned locked RMW
> (read-modify-write) instructions are referred to as "bus locks" and
> require system-wide synchronization among all processors to guarantee
> atomicity. Bus locks can impose notable performance penalties for all
> processors within the system.
> 
> Support for the Bus Lock Threshold is indicated by CPUID
> Fn8000_000A_EDX[29] BusLockThreshold=1. The VMCB provides a Bus Lock
> Threshold enable bit and an unsigned 16-bit Bus Lock Threshold count.
> 
> VMCB intercept bit
>     VMCB Offset     Bits    Function
>     14h             5       Intercept bus lock operations
> 
> Bus lock threshold count
>     VMCB Offset     Bits    Function
>     120h            15:0    Bus lock counter
> 
> During VMRUN, the bus lock threshold count is fetched and stored in an
> internal count register.  Before executing a bus lock within the
> guest, the processor checks the count in this internal register. If the
> count is greater than zero, the processor executes the bus lock and
> decrements the count. If the count is zero, the bus lock is not
> executed, and a Bus Lock Threshold #VMEXIT is instead triggered to
> transfer control to the Virtual Machine Monitor (VMM).
> 
> A Bus Lock Threshold #VMEXIT is reported to the VMM with VMEXIT code
> 0xA5 (VMEXIT_BUSLOCK). EXITINFO1 and EXITINFO2 are set to 0 on
> a VMEXIT_BUSLOCK.  On a #VMEXIT, the processor writes the current
> value of the Bus Lock Threshold Counter to the VMCB.
> 
> More details about the Bus Lock Threshold feature can be found in AMD
> APM [1].
> 
> Patches are prepared on kvm-x86/svm (704ec48fc2fb)
> 
> Testing done:
> - Added a selftest for the Bus Lock Threshold functionality.
> - Tested the Bus Lock Threshold functionality on SEV and SEV-ES guests.
> - Tested the Bus Lock Threshold functionality on nested guests.
> 
> Qemu changes can be found on:
> Repo: https://github.com/AMDESE/qemu.git
> Branch: buslock_threshold
> 
> Qemu commandline to use the bus lock threshold functionality:
> qemu-system-x86_64 -enable-kvm -cpu EPYC-Turin,+svm -M q35,bus-lock-ratelimit=10 \ ..
> 
> [1]: AMD64 Architecture Programmer's Manual Pub. 24593, April 2024,
>      Vol 2, 15.14.5 Bus Lock Threshold.
>      https://bugzilla.kernel.org/attachment.cgi?id=306250
> 
> Manali Shukla (2):
>   x86/cpufeatures: Add CPUID feature bit for the Bus Lock Threshold
>   KVM: x86: nSVM: Implement support for nested Bus Lock Threshold
> 
> Nikunj A Dadhania (2):
>   KVM: SVM: Enable Bus lock threshold exit
>   KVM: selftests: Add bus lock exit test
> 
>  arch/x86/include/asm/cpufeatures.h            |   1 +
>  arch/x86/include/asm/svm.h                    |   5 +-
>  arch/x86/include/uapi/asm/svm.h               |   2 +
>  arch/x86/kvm/governed_features.h              |   1 +
>  arch/x86/kvm/svm/nested.c                     |  25 ++++
>  arch/x86/kvm/svm/svm.c                        |  48 ++++++++
>  arch/x86/kvm/svm/svm.h                        |   1 +
>  arch/x86/kvm/x86.h                            |   1 +
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../selftests/kvm/x86_64/svm_buslock_test.c   | 114 ++++++++++++++++++
>  10 files changed, 198 insertions(+), 1 deletion(-)
>  create mode 100644 tools/testing/selftests/kvm/x86_64/svm_buslock_test.c
> 
> 
> base-commit: 704ec48fc2fbd4e41ec982662ad5bf1eee33eeb2

A gentle reminder.

-Manali
Sean Christopherson Aug. 16, 2024, 7:37 p.m. UTC | #2
On Tue, Jul 09, 2024, Manali Shukla wrote:
> Malicious guests can cause bus locks to degrade the performance of

I would say "misbehaving", I bet the overwhelming majority of bus locks in practice
are due to legacy/crusty software, not malicious software.

> a system. Non-WB(write-back) and misaligned locked
> RMW(read-modify-write) instructions are referred to as "bus locks" and
> require system wide synchronization among all processors to guarantee
> atomicity.  The bus locks may incur significant performance penalties
> for all processors in the system.
> 
> The Bus Lock Threshold feature proves beneficial for hypervisors
> seeking to restrict guests' ability to initiate numerous bus locks,
> thereby preventing system slowdowns that affect all tenants.

None of this actually says what the feature does.

> Presence of the Bus Lock threshold feature is indicated via CPUID
> function 0x8000000A_EDX[29]
> 
> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
> ---
>  arch/x86/include/asm/cpufeatures.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 3c7434329661..10f397873790 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -381,6 +381,7 @@
>  #define X86_FEATURE_V_SPEC_CTRL		(15*32+20) /* Virtual SPEC_CTRL */
>  #define X86_FEATURE_VNMI		(15*32+25) /* Virtual NMI */
>  #define X86_FEATURE_SVME_ADDR_CHK	(15*32+28) /* "" SVME addr check */
> +#define X86_FEATURE_BUS_LOCK_THRESHOLD	(15*32+29) /* "" Bus lock threshold */

I would strongly prefer to enumerate this in /proc/cpuinfo, having to manually
query CPUID to see if a CPU supports a feature I want to test is beyond annoying.
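
For reference, in this tree the empty quoted string in the comment is what
keeps a flag out of /proc/cpuinfo (compare the neighboring "" SVME addr check
entry above), so exposing it would be a one-line tweak along these lines (a
sketch, assuming that pre-existing convention still applies to this file):

    #define X86_FEATURE_BUS_LOCK_THRESHOLD	(15*32+29) /* Bus lock threshold */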

>  /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
>  #define X86_FEATURE_AVX512VBMI		(16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
> 
> base-commit: 704ec48fc2fbd4e41ec982662ad5bf1eee33eeb2
> -- 
> 2.34.1
>
Sean Christopherson Aug. 16, 2024, 7:54 p.m. UTC | #3
On Tue, Jul 09, 2024, Manali Shukla wrote:
> From: Nikunj A Dadhania <nikunj@amd.com>
> 
> Malicious guests can cause bus locks to degrade the performance of
> system. Non-WB(write-back) and misaligned locked RMW(read-modify-write)
> instructions are referred to as "bus locks" and require system wide
> synchronization among all processors to guarantee atomicity.  Bus locks
> may incur significant performance penalties for all processors in the
> system.

Copy+pasting the background into every changelog isn't helpful.  Instead, focus
on what the feature actually does, and simply mention what bus locks are in
passing.  If someone really doesn't know, it shouldn't be hard for them to find
the previous changelog.

> The Bus Lock Threshold feature proves beneficial for hypervisors seeking
> to restrict guests' ability to initiate numerous bus locks, thereby
> preventing system slowdowns that affect all tenants.
> 
> Support for the buslock threshold is indicated via CPUID function
> 0x8000000A_EDX[29].
> 
> VMCB intercept bit
> VMCB Offset	Bits	Function
> 14h	        5	Intercept bus lock operations
>                         (occurs after guest instruction finishes)
> 
> Bus lock threshold
> VMCB Offset	Bits	Function
> 120h	        15:0	Bus lock counter

I can make a pretty educated guess as to how this works, but this is a pretty
simple feature, i.e. there's no reason not to document how it works in the
changelog.
 
> Use the KVM capability KVM_CAP_X86_BUS_LOCK_EXIT to enable the feature.
> 
> When the bus lock threshold counter reaches to zero, KVM will exit to
> user space by setting KVM_RUN_BUS_LOCK in vcpu->run->flags in
> bus_lock_exit handler, indicating that a bus lock has been detected in
> the guest.
> 
> More details about the Bus Lock Threshold feature can be found in AMD
> APM [1].
> 
> [1]: AMD64 Architecture Programmer's Manual Pub. 24593, April 2024,
>      Vol 2, 15.14.5 Bus Lock Threshold.
>      https://bugzilla.kernel.org/attachment.cgi?id=306250
> 
> [Manali:
>   - Added exit reason string for SVM_EXIT_BUS_LOCK.
>   - Moved enablement and disablement of bus lock intercept support.
>     to svm_vcpu_after_set_cpuid().
>   - Massage commit message.
>   - misc cleanups.
> ]

No need for this since you are listed as co-author.

> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> Co-developed-by: Manali Shukla <manali.shukla@amd.com>
> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
> ---
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 7d396f5fa010..9f1d51384eac 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -191,6 +191,9 @@ module_param(pause_filter_count_shrink, ushort, 0444);
>  static unsigned short pause_filter_count_max = KVM_SVM_DEFAULT_PLE_WINDOW_MAX;
>  module_param(pause_filter_count_max, ushort, 0444);
>  
> +static unsigned short bus_lock_counter = KVM_SVM_DEFAULT_BUS_LOCK_COUNTER;
> +module_param(bus_lock_counter, ushort, 0644);

This should be read-only, otherwise the behavior is non-deterministic, e.g. as
proposed, a new value won't take effect until a vCPU happens to trigger a bus lock exit.

If we really want it to be writable, then a per-VM capability is likely a better
solution.

Actually, we already have a capability, which means there's zero reason for this
module param to exist.  Userspace already has to opt-in to turning on bus lock
detection, i.e. userspace already has the opportunity to provide a different
threshold.

That said, unless someone specifically needs a threshold other than '0', I vote
to keep the uAPI as-is and simply exit on every bus lock.
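
For reference, the opt-in userspace already performs looks roughly like this
(a minimal sketch of the existing KVM_CAP_X86_BUS_LOCK_EXIT uAPI; vm_fd is
assumed to be an open VM file descriptor):

    struct kvm_enable_cap cap = {
            .cap = KVM_CAP_X86_BUS_LOCK_EXIT,
            .args[0] = KVM_BUS_LOCK_DETECTION_EXIT,
    };

    ioctl(vm_fd, KVM_ENABLE_CAP, &cap);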
 
>  /*
>   * Use nested page tables by default.  Note, NPT may get forced off by
>   * svm_hardware_setup() if it's unsupported by hardware or the host kernel.
> @@ -3231,6 +3234,19 @@ static int invpcid_interception(struct kvm_vcpu *vcpu)
>  	return kvm_handle_invpcid(vcpu, type, gva);
>  }
>  
> +static int bus_lock_exit(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_svm *svm = to_svm(vcpu);
> +
> +	vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
> +	vcpu->run->flags |= KVM_RUN_X86_BUS_LOCK;
> +
> +	/* Reload the counter again */
> +	svm->vmcb->control.bus_lock_counter = bus_lock_counter;
> +
> +	return 0;
> +}
> +
>  static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
>  	[SVM_EXIT_READ_CR0]			= cr_interception,
>  	[SVM_EXIT_READ_CR3]			= cr_interception,
> @@ -3298,6 +3314,7 @@ static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
>  	[SVM_EXIT_CR4_WRITE_TRAP]		= cr_trap,
>  	[SVM_EXIT_CR8_WRITE_TRAP]		= cr_trap,
>  	[SVM_EXIT_INVPCID]                      = invpcid_interception,
> +	[SVM_EXIT_BUS_LOCK]			= bus_lock_exit,
>  	[SVM_EXIT_NPF]				= npf_interception,
>  	[SVM_EXIT_RSM]                          = rsm_interception,
>  	[SVM_EXIT_AVIC_INCOMPLETE_IPI]		= avic_incomplete_ipi_interception,
> @@ -4356,6 +4373,27 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)

Why on earth is this in svm_vcpu_after_set_cpuid()?  This has nothing to do with
guest CPUID.
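
If the enabling is purely a function of the per-VM capability, it could live
in VMCB init instead, e.g. (a sketch only, assuming init_vmcb() and the field
names from this patch):

    /* In init_vmcb(), keyed solely off the per-VM capability: */
    if (vcpu->kvm->arch.bus_lock_detection_enabled) {
            svm_set_intercept(svm, INTERCEPT_BUSLOCK);
            svm->vmcb->control.bus_lock_counter = bus_lock_counter;
    }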

>  		set_msr_interception(vcpu, svm->msrpm, MSR_IA32_FLUSH_CMD, 0,
>  				     !!guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D));
>  
> +	if (cpu_feature_enabled(X86_FEATURE_BUS_LOCK_THRESHOLD) &&

This should be a slow path, there's zero reason to check for host support as
bus_lock_detection_enabled should be allowed if and only if it's supported.

> +	    vcpu->kvm->arch.bus_lock_detection_enabled) {
> +		svm_set_intercept(svm, INTERCEPT_BUSLOCK);
> +
> +		/*
> +		 * The CPU decrements the bus lock counter every time a bus lock
> +		 * is detected. Once the counter reaches zero a VMEXIT_BUSLOCK
> +		 * is generated. A value of zero for bus lock counter means a
> +		 * VMEXIT_BUSLOCK at every bus lock detection.
> +		 *
> +		 * Currently, default value for bus_lock_counter is set to 10.

Please don't document the default _here_.  Because inevitably this will become
stale when the default changes.

> +		 * So, the VMEXIT_BUSLOCK is generated after every 10 bus locks
> +		 * detected.
> +		 */
> +		svm->vmcb->control.bus_lock_counter = bus_lock_counter;
> +		pr_debug("Setting buslock counter to %u\n", bus_lock_counter);
> +	} else {
> +		svm_clr_intercept(svm, INTERCEPT_BUSLOCK);
> +		svm->vmcb->control.bus_lock_counter = 0;
> +	}
> +
>  	if (sev_guest(vcpu->kvm))
>  		sev_vcpu_after_set_cpuid(svm);
>  
> @@ -5149,6 +5187,11 @@ static __init void svm_set_cpu_caps(void)
>  		kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
>  	}
>  
> +	if (cpu_feature_enabled(X86_FEATURE_BUS_LOCK_THRESHOLD)) {
> +		pr_info("Bus Lock Threashold supported\n");
> +		kvm_caps.has_bus_lock_exit = true;
> +	}
> +
>  	/* CPUID 0x80000008 */
>  	if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD) ||
>  	    boot_cpu_has(X86_FEATURE_AMD_SSBD))
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index d80a4c6b5a38..2a77232105da 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -58,6 +58,7 @@ void kvm_spurious_fault(void);
>  #define KVM_VMX_DEFAULT_PLE_WINDOW_MAX	UINT_MAX
>  #define KVM_SVM_DEFAULT_PLE_WINDOW_MAX	USHRT_MAX
>  #define KVM_SVM_DEFAULT_PLE_WINDOW	3000
> +#define KVM_SVM_DEFAULT_BUS_LOCK_COUNTER	10

There's zero reason this needs to be in x86.h.  I don't even see a reason to
have a #define, there's literally one user.
Sean Christopherson Aug. 16, 2024, 8:21 p.m. UTC | #4
On Tue, Jul 09, 2024, Manali Shukla wrote:
> From: Nikunj A Dadhania <nikunj@amd.com>
> 
> Malicious guests can cause bus locks to degrade the performance of
> a system.  The Bus Lock Threshold feature is beneficial for
> hypervisors aiming to restrict the ability of the guests to perform
> excessive bus locks and slow down the system for all the tenants.
> 
> Add a test case to verify the Bus Lock Threshold feature for SVM.
> 
> [Manali:
>   - The KVM_CAP_X86_BUS_LOCK_EXIT capability is not enabled while
>     vcpus are created, changed the VM and vCPU creation logic to
>     resolve the mentioned issue.
>   - Added nested guest test case for bus lock exit.
>   - massage commit message.
>   - misc cleanups. ]

Again, 99% of the changelog is boilerplate that does nothing to help me
understand what the test actually does.

> 
> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
> Co-developed-by: Manali Shukla <manali.shukla@amd.com>
> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
> ---
>  tools/testing/selftests/kvm/Makefile          |   1 +
>  .../selftests/kvm/x86_64/svm_buslock_test.c   | 114 ++++++++++++++++++
>  2 files changed, 115 insertions(+)
>  create mode 100644 tools/testing/selftests/kvm/x86_64/svm_buslock_test.c
> 
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index ce8ff8e8ce3a..711ec195e386 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -94,6 +94,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/smaller_maxphyaddr_emulation_test
>  TEST_GEN_PROGS_x86_64 += x86_64/smm_test
>  TEST_GEN_PROGS_x86_64 += x86_64/state_test
>  TEST_GEN_PROGS_x86_64 += x86_64/vmx_preemption_timer_test
> +TEST_GEN_PROGS_x86_64 += x86_64/svm_buslock_test
>  TEST_GEN_PROGS_x86_64 += x86_64/svm_vmcall_test
>  TEST_GEN_PROGS_x86_64 += x86_64/svm_int_ctl_test
>  TEST_GEN_PROGS_x86_64 += x86_64/svm_nested_shutdown_test
> diff --git a/tools/testing/selftests/kvm/x86_64/svm_buslock_test.c b/tools/testing/selftests/kvm/x86_64/svm_buslock_test.c
> new file mode 100644
> index 000000000000..dcb595999046
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/svm_buslock_test.c

I would *very* strongly prefer to have a bus lock test that is common to VMX
and SVM.  For L1, there's no unique behavior.  And for L2, assuming we don't
support nested bus lock enabling, the only vendor specific bits are launching
L2.

I.e. writing this so it works on both VMX and SVM should be quite straightforward.

> @@ -0,0 +1,114 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * svm_buslock_test
> + *
> + * Copyright (C) 2024 Advanced Micro Devices, Inc.
> + *
> + * SVM testing: Buslock exit

Keep the Copyright, ditch everything else.

> + */
> +
> +#include "test_util.h"
> +#include "kvm_util.h"
> +#include "processor.h"
> +#include "svm_util.h"
> +
> +#define NO_ITERATIONS 100

Heh, NR_ITERATIONS.

> +#define __cacheline_aligned __aligned(128)

Eh, I would just split a page, that's about as future proof as we can get in
terms of cache line sizes.
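
A page-straddling operand guarantees a split lock regardless of the cache line
size; a rough sketch of what that could look like in the selftest (hypothetical
names, relying on the selftest's PAGE_SIZE definition):

    /* Two pages back to back; place the counter across the page boundary. */
    static char buslock_pages[2 * PAGE_SIZE] __aligned(PAGE_SIZE);

    static inline atomic_long_t *buslock_val(void)
    {
            return (atomic_long_t *)&buslock_pages[PAGE_SIZE - sizeof(long) / 2];
    }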

> +
> +struct buslock_test {
> +	unsigned char pad[126];
> +	atomic_long_t val;
> +} __packed;
> +
> +struct buslock_test test __cacheline_aligned;
> +
> +static __always_inline void buslock_atomic_add(int i, atomic_long_t *v)
> +{
> +	asm volatile(LOCK_PREFIX "addl %1,%0"
> +		     : "+m" (v->counter)
> +		     : "ir" (i) : "memory");
> +}
> +
> +static void buslock_add(void)
> +{
> +	/*
> +	 * Increment a cache unaligned variable atomically.
> +	 * This should generate a bus lock exit.

So... this test doesn't actually verify that a bus lock exit occurs.  The userspace
side will eat an exit if one occurs, but there's literally not a single TEST_ASSERT()
in here.
Manali Shukla Aug. 22, 2024, 9:43 a.m. UTC | #5
Hi Sean,

Thank you for reviewing my patches.

On 8/17/2024 1:07 AM, Sean Christopherson wrote:
> On Tue, Jul 09, 2024, Manali Shukla wrote:
>> Malicious guests can cause bus locks to degrade the performance of
> 
> I would say "misbehaving", I bet the overwhelming majority of bus locks in practice
> are due to legacy/crusty software, not malicious software.
> 

Ack.

>> a system. Non-WB(write-back) and misaligned locked
>> RMW(read-modify-write) instructions are referred to as "bus locks" and
>> require system wide synchronization among all processors to guarantee
>> atomicity.  The bus locks may incur significant performance penalties
>> for all processors in the system.
>>
>> The Bus Lock Threshold feature proves beneficial for hypervisors
>> seeking to restrict guests' ability to initiate numerous bus locks,
>> thereby preventing system slowdowns that affect all tenants.
> 
> None of this actually says what the feature does.
> 

Sure I will rewrite the commit message. 

>> Presence of the Bus Lock threshold feature is indicated via CPUID
>> function 0x8000000A_EDX[29]
>>
>> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
>> ---
>>  arch/x86/include/asm/cpufeatures.h | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
>> index 3c7434329661..10f397873790 100644
>> --- a/arch/x86/include/asm/cpufeatures.h
>> +++ b/arch/x86/include/asm/cpufeatures.h
>> @@ -381,6 +381,7 @@
>>  #define X86_FEATURE_V_SPEC_CTRL		(15*32+20) /* Virtual SPEC_CTRL */
>>  #define X86_FEATURE_VNMI		(15*32+25) /* Virtual NMI */
>>  #define X86_FEATURE_SVME_ADDR_CHK	(15*32+28) /* "" SVME addr check */
>> +#define X86_FEATURE_BUS_LOCK_THRESHOLD	(15*32+29) /* "" Bus lock threshold */
> 
> I would strongly prefer to enumerate this in /proc/cpuinfo, having to manually
> query CPUID to see if a CPU supports a feature I want to test is beyond annoying.

I will do the modifications accordingly.

> 
>>  /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
>>  #define X86_FEATURE_AVX512VBMI		(16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
>>
>> base-commit: 704ec48fc2fbd4e41ec982662ad5bf1eee33eeb2
>> -- 
>> 2.34.1
>>
 - Manali
Manali Shukla Aug. 24, 2024, 5:35 a.m. UTC | #6
Hi Sean,
Thank you for reviewing my patches.

On 8/17/2024 1:24 AM, Sean Christopherson wrote:
> On Tue, Jul 09, 2024, Manali Shukla wrote:
>> From: Nikunj A Dadhania <nikunj@amd.com>
>>
>> Malicious guests can cause bus locks to degrade the performance of
>> system. Non-WB(write-back) and misaligned locked RMW(read-modify-write)
>> instructions are referred to as "bus locks" and require system wide
>> synchronization among all processors to guarantee atomicity.  Bus locks
>> may incur significant performance penalties for all processors in the
>> system.
> 
> Copy+pasting the background into every changelog isn't helpful.  Instead, focus
> on what the feature actually does, and simply mention what bus locks are in
> passing.  If someone really doesn't know, it shouldn't be hard for them to find
> the previous changelog.
> 

Sure. I will rewrite the commit messages based on the suggestions.

>> The Bus Lock Threshold feature proves beneficial for hypervisors seeking
>> to restrict guests' ability to initiate numerous bus locks, thereby
>> preventing system slowdowns that affect all tenants.
>>
>> Support for the buslock threshold is indicated via CPUID function
>> 0x8000000A_EDX[29].
>>
>> VMCB intercept bit
>> VMCB Offset	Bits	Function
>> 14h	        5	Intercept bus lock operations
>>                         (occurs after guest instruction finishes)
>>
>> Bus lock threshold
>> VMCB Offset	Bits	Function
>> 120h	        15:0	Bus lock counter
> 
> I can make a pretty educated guess as to how this works, but this is a pretty
> simple feature, i.e. there's no reason not to document how it works in the
> changelog.
>  

Sure.

>> Use the KVM capability KVM_CAP_X86_BUS_LOCK_EXIT to enable the feature.
>>
>> When the bus lock threshold counter reaches to zero, KVM will exit to
>> user space by setting KVM_RUN_BUS_LOCK in vcpu->run->flags in
>> bus_lock_exit handler, indicating that a bus lock has been detected in
>> the guest.
>>
>> More details about the Bus Lock Threshold feature can be found in AMD
>> APM [1].
>>
>> [1]: AMD64 Architecture Programmer's Manual Pub. 24593, April 2024,
>>      Vol 2, 15.14.5 Bus Lock Threshold.
>>      https://bugzilla.kernel.org/attachment.cgi?id=306250
>>
>> [Manali:
>>   - Added exit reason string for SVM_EXIT_BUS_LOCK.
>>   - Moved enablement and disablement of bus lock intercept support.
>>     to svm_vcpu_after_set_cpuid().
>>   - Massage commit message.
>>   - misc cleanups.
>> ]
> 
> No need for this since you are listed as co-author.
> 

Ack.

>> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
>> Co-developed-by: Manali Shukla <manali.shukla@amd.com>
>> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
>> ---
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index 7d396f5fa010..9f1d51384eac 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -191,6 +191,9 @@ module_param(pause_filter_count_shrink, ushort, 0444);
>>  static unsigned short pause_filter_count_max = KVM_SVM_DEFAULT_PLE_WINDOW_MAX;
>>  module_param(pause_filter_count_max, ushort, 0444);
>>  
>> +static unsigned short bus_lock_counter = KVM_SVM_DEFAULT_BUS_LOCK_COUNTER;
>> +module_param(bus_lock_counter, ushort, 0644);
> 
> This should be read-only, otherwise the behavior is non-deterministic, e.g. as
> proposed, a new value won't take effect until a vCPU happens to trigger a bus lock exit.
> 
> If we really want it to be writable, then a per-VM capability is likely a better
> solution.
> 
> Actually, we already have a capability, which means there's zero reason for this
> module param to exist.  Userspace already has to opt-in to turning on bus lock
> detection, i.e. userspace already has the opportunity to provide a different
> threshold.
> 
> That said, unless someone specifically needs a threshold other than '0', I vote
> to keep the uAPI as-is and simply exit on every bus lock.
>  

According to APM [1],
"The VMCB provides a Bus Lock Threshold enable bit and an unsigned 16-bit
Bus Lock Threshold count. On VMRUN, this value is loaded into an internal count register. Before
the processor executes a bus lock in the guest, it checks the value of this register. If the value is greater
than 0, the processor executes the bus lock successfully and decrements the count. If the value is 0, the
bus lock is not executed and a #VMEXIT to the VMM is taken."

So, a bus_lock_counter value of "0" always results in VMEXIT_BUSLOCK, which means
the default value of bus_lock_counter should be greater than or equal to "1".

I can remove the module parameter and initialize bus_lock_counter to "1"?

[1]: AMD64 Architecture Programmer's Manual Pub. 24593, April 2024,
        Vol 2, 15.14.5 Bus Lock Threshold.
        https://bugzilla.kernel.org/attachment.cgi?id=306250

-Manali
Manali Shukla Aug. 26, 2024, 10:29 a.m. UTC | #7
Hi Sean,
Thank you for reviewing my changes.

On 8/17/2024 1:51 AM, Sean Christopherson wrote:
> On Tue, Jul 09, 2024, Manali Shukla wrote:
>> From: Nikunj A Dadhania <nikunj@amd.com>
>>
>> Malicious guests can cause bus locks to degrade the performance of
>> a system.  The Bus Lock Threshold feature is beneficial for
>> hypervisors aiming to restrict the ability of the guests to perform
>> excessive bus locks and slow down the system for all the tenants.
>>
>> Add a test case to verify the Bus Lock Threshold feature for SVM.
>>
>> [Manali:
>>   - The KVM_CAP_X86_BUS_LOCK_EXIT capability is not enabled while
>>     vcpus are created, changed the VM and vCPU creation logic to
>>     resolve the mentioned issue.
>>   - Added nested guest test case for bus lock exit.
>>   - massage commit message.
>>   - misc cleanups. ]
> 
> Again, 99% of the changelog is boilerplate that does nothing to help me
> understand what the test actually does.
> 

Sure. I will rewrite the commit messages for all the patches.

>>
>> Signed-off-by: Nikunj A Dadhania <nikunj@amd.com>
>> Co-developed-by: Manali Shukla <manali.shukla@amd.com>
>> Signed-off-by: Manali Shukla <manali.shukla@amd.com>
>> ---
>>  tools/testing/selftests/kvm/Makefile          |   1 +
>>  .../selftests/kvm/x86_64/svm_buslock_test.c   | 114 ++++++++++++++++++
>>  2 files changed, 115 insertions(+)
>>  create mode 100644 tools/testing/selftests/kvm/x86_64/svm_buslock_test.c
>>
>> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
>> index ce8ff8e8ce3a..711ec195e386 100644
>> --- a/tools/testing/selftests/kvm/Makefile
>> +++ b/tools/testing/selftests/kvm/Makefile
>> @@ -94,6 +94,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/smaller_maxphyaddr_emulation_test
>>  TEST_GEN_PROGS_x86_64 += x86_64/smm_test
>>  TEST_GEN_PROGS_x86_64 += x86_64/state_test
>>  TEST_GEN_PROGS_x86_64 += x86_64/vmx_preemption_timer_test
>> +TEST_GEN_PROGS_x86_64 += x86_64/svm_buslock_test
>>  TEST_GEN_PROGS_x86_64 += x86_64/svm_vmcall_test
>>  TEST_GEN_PROGS_x86_64 += x86_64/svm_int_ctl_test
>>  TEST_GEN_PROGS_x86_64 += x86_64/svm_nested_shutdown_test
>> diff --git a/tools/testing/selftests/kvm/x86_64/svm_buslock_test.c b/tools/testing/selftests/kvm/x86_64/svm_buslock_test.c
>> new file mode 100644
>> index 000000000000..dcb595999046
>> --- /dev/null
>> +++ b/tools/testing/selftests/kvm/x86_64/svm_buslock_test.c
> 
> I would *very* strongly prefer to have a bus lock test that is common to VMX
> and SVM.  For L1, there's no unique behavior.  And for L2, assuming we don't
> support nested bus lock enabling, the only vendor specific bits are launching
> L2.
> 
> I.e. writing this so it works on both VMX and SVM should be quite straightforward.
> 

Sure I will try to write a common test for SVM and VMX.

>> @@ -0,0 +1,114 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * svm_buslock_test
>> + *
>> + * Copyright (C) 2024 Advanced Micro Devices, Inc.
>> + *
>> + * SVM testing: Buslock exit
> 
> Keep the Copyright, ditch everything else.

Sure.

> 
>> + */
>> +
>> +#include "test_util.h"
>> +#include "kvm_util.h"
>> +#include "processor.h"
>> +#include "svm_util.h"
>> +
>> +#define NO_ITERATIONS 100
> 
> Heh, NR_ITERATIONS.

Ack.

> 
>> +#define __cacheline_aligned __aligned(128)
> 
> Eh, I would just split a page, that's about as future proof as we can get in
> terms of cache line sizes.
> 

Sure.

>> +
>> +struct buslock_test {
>> +	unsigned char pad[126];
>> +	atomic_long_t val;
>> +} __packed;
>> +
>> +struct buslock_test test __cacheline_aligned;
>> +
>> +static __always_inline void buslock_atomic_add(int i, atomic_long_t *v)
>> +{
>> +	asm volatile(LOCK_PREFIX "addl %1,%0"
>> +		     : "+m" (v->counter)
>> +		     : "ir" (i) : "memory");
>> +}
>> +
>> +static void buslock_add(void)
>> +{
>> +	/*
>> +	 * Increment a cache unaligned variable atomically.
>> +	 * This should generate a bus lock exit.
> 
> So... this test doesn't actually verify that a bus lock exit occurs.  The userspace
> side will eat an exit if one occurs, but there's literally not a single TEST_ASSERT()
> in here.

Agreed, How about doing following?

+       for (;;) {
+               struct ucall uc;
+
+               vcpu_run(vcpu);
+
+               if (run->exit_reason == KVM_EXIT_IO) {
+                       switch (get_ucall(vcpu, &uc)) {
+                       case UCALL_ABORT:
+                               REPORT_GUEST_ASSERT(uc);
+                               /* NOT REACHED */
+                       case UCALL_SYNC:
+                               break;
+                       case UCALL_DONE:
+                               goto done;
+                       default:
+                               TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
+                       }
+               }
+
+               TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_X86_BUS_LOCK);
+               TEST_ASSERT_EQ(run->flags, KVM_RUN_X86_BUS_LOCK);
+               run->flags &= ~KVM_RUN_X86_BUS_LOCK;
+               run->exit_reason = 0;
+       }

- Manali
Sean Christopherson Aug. 26, 2024, 4:06 p.m. UTC | #8
On Mon, Aug 26, 2024, Manali Shukla wrote:
> >> +struct buslock_test {
> >> +	unsigned char pad[126];
> >> +	atomic_long_t val;
> >> +} __packed;
> >> +
> >> +struct buslock_test test __cacheline_aligned;
> >> +
> >> +static __always_inline void buslock_atomic_add(int i, atomic_long_t *v)
> >> +{
> >> +	asm volatile(LOCK_PREFIX "addl %1,%0"
> >> +		     : "+m" (v->counter)
> >> +		     : "ir" (i) : "memory");
> >> +}
> >> +
> >> +static void buslock_add(void)
> >> +{
> >> +	/*
> >> +	 * Increment a cache unaligned variable atomically.
> >> +	 * This should generate a bus lock exit.
> > 
> > So... this test doesn't actually verify that a bus lock exit occurs.  The userspace
> > side will eat an exit if one occurs, but there's literally not a single TEST_ASSERT()
> > in here.
> 
> Agreed, How about doing following?
> 
> +       for (;;) {
> +               struct ucall uc;
> +
> +               vcpu_run(vcpu);
> +
> +               if (run->exit_reason == KVM_EXIT_IO) {
> +                       switch (get_ucall(vcpu, &uc)) {
> +                       case UCALL_ABORT:
> +                               REPORT_GUEST_ASSERT(uc);
> +                               /* NOT REACHED */
> +                       case UCALL_SYNC:
> +                               break;
> +                       case UCALL_DONE:
> +                               goto done;
> +                       default:
> +                               TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
> +                       }
> +               }
> +
> +               TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_X86_BUS_LOCK);

I doubt this works, the UCALL_SYNC above will fallthrough to this assert.  I
assume run->exit_reason needs a continue for UCALL_SYNC.

> +               TEST_ASSERT_EQ(run->flags, KVM_RUN_X86_BUS_LOCK);
> +               run->flags &= ~KVM_RUN_X86_BUS_LOCK;

No need, KVM should clear the flag if the exit isn't due to a bus lock.

> +               run->exit_reason = 0;

Again, no need, KVM should take care of resetting exit_reason.

> +       }
> 
> - Manali
> 
>
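
Folding that feedback in, the run loop would look roughly like this (a sketch
reusing the helpers from the proposal above, untested; the "done" label is
assumed to follow as in the original proposal):

    for (;;) {
            struct ucall uc;

            vcpu_run(vcpu);

            if (run->exit_reason == KVM_EXIT_IO) {
                    switch (get_ucall(vcpu, &uc)) {
                    case UCALL_ABORT:
                            REPORT_GUEST_ASSERT(uc);
                            /* NOT REACHED */
                    case UCALL_SYNC:
                            continue;
                    case UCALL_DONE:
                            goto done;
                    default:
                            TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
                    }
            }

            TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_X86_BUS_LOCK);
            TEST_ASSERT_EQ(run->flags, KVM_RUN_X86_BUS_LOCK);
    }
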
Sean Christopherson Aug. 26, 2024, 4:15 p.m. UTC | #9
On Sat, Aug 24, 2024, Manali Shukla wrote:
> > Actually, we already have a capability, which means there's zero reason for this
> > module param to exist.  Userspace already has to opt-in to turning on bus lock
> > detection, i.e. userspace already has the opportunity to provide a different
> > threshold.
> > 
> > That said, unless someone specifically needs a threshold other than '0', I vote
> > to keep the uAPI as-is and simply exit on every bus lock.
> >  
> 
> According to APM [1],
> "The VMCB provides a Bus Lock Threshold enable bit and an unsigned 16-bit Bus
> Lock Threshold count. On VMRUN, this value is loaded into an internal count
> register. Before the processor executes a bus lock in the guest, it checks
> the value of this register. If the value is greater than 0, the processor
> executes the bus lock successfully and decrements the count. If the value is
> 0, the bus lock is not executed and a #VMEXIT to the VMM is taken."
> 
> So, the bus_lock_counter value "0" always results in VMEXIT_BUSLOCK, so the
> default value of the bus_lock_counter should be greater or equal to "1".

Ugh, so AMD's bus-lock VM-Exit is fault-like.  That's annoying.

> I can remove the module parameter and initialize the value of
> bus_lock_counter as "1" ?

No, because that will have the effect of detecting every other bus lock, whereas
the intent is to detect _every_ bus lock.

I think the only sane approach is to set it to '0' when enabled, and then set it
to '1' on a bus lock exit _before_ exiting to userspace.  If userspace or the
guest mucks with RIP or the guest code stream and doesn't immediately trigger the
bus lock, then so be it.  That only defers the allowed bus lock to a later time,
so effectively such shenanigans would penalize the guest even more.

We'll need to document that KVM on AMD exits to userspace with RIP pointing at
the offending instruction, whereas KVM on Intel exits with RIP pointing at the
instruction after the guilty instruction.
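
Concretely, that approach would make the exit handler look something like this
(a sketch against the field names in the posted patch, not a final
implementation), with the counter loaded with '0' wherever the intercept is
enabled so that the very first bus lock exits:

    static int bus_lock_exit(struct kvm_vcpu *vcpu)
    {
            struct vcpu_svm *svm = to_svm(vcpu);

            vcpu->run->exit_reason = KVM_EXIT_X86_BUS_LOCK;
            vcpu->run->flags |= KVM_RUN_X86_BUS_LOCK;

            /*
             * The exit is fault-like, i.e. RIP still points at the offending
             * instruction, so re-arm the threshold with '1' to allow exactly
             * one bus lock before the next exit.
             */
            svm->vmcb->control.bus_lock_counter = 1;

            return 0;
    }
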
Borislav Petkov Aug. 29, 2024, 6:48 a.m. UTC | #10
On Fri, Aug 16, 2024 at 12:37:52PM -0700, Sean Christopherson wrote:
> I would strongly prefer to enumerate this in /proc/cpuinfo, having to manually
> query CPUID to see if a CPU supports a feature I want to test is beyond annoying.

Why?

We have tools/arch/x86/kcpuid/kcpuid.c for that.
Manali Shukla Aug. 29, 2024, 9:41 a.m. UTC | #11
Hi Sean,

Thank you for reviewing my patches.

On 8/26/2024 9:36 PM, Sean Christopherson wrote:
> On Mon, Aug 26, 2024, Manali Shukla wrote:
>>>> +struct buslock_test {
>>>> +	unsigned char pad[126];
>>>> +	atomic_long_t val;
>>>> +} __packed;
>>>> +
>>>> +struct buslock_test test __cacheline_aligned;
>>>> +
>>>> +static __always_inline void buslock_atomic_add(int i, atomic_long_t *v)
>>>> +{
>>>> +	asm volatile(LOCK_PREFIX "addl %1,%0"
>>>> +		     : "+m" (v->counter)
>>>> +		     : "ir" (i) : "memory");
>>>> +}
>>>> +
>>>> +static void buslock_add(void)
>>>> +{
>>>> +	/*
>>>> +	 * Increment a cache unaligned variable atomically.
>>>> +	 * This should generate a bus lock exit.
>>>
>>> So... this test doesn't actually verify that a bus lock exit occurs.  The userspace
>>> side will eat an exit if one occurs, but there's literally not a single TEST_ASSERT()
>>> in here.
>>
>> Agreed, How about doing following?
>>
>> +       for (;;) {
>> +               struct ucall uc;
>> +
>> +               vcpu_run(vcpu);
>> +
>> +               if (run->exit_reason == KVM_EXIT_IO) {
>> +                       switch (get_ucall(vcpu, &uc)) {
>> +                       case UCALL_ABORT:
>> +                               REPORT_GUEST_ASSERT(uc);
>> +                               /* NOT REACHED */
>> +                       case UCALL_SYNC:
>> +                               break;
>> +                       case UCALL_DONE:
>> +                               goto done;
>> +                       default:
>> +                               TEST_FAIL("Unknown ucall 0x%lx.", uc.cmd);
>> +                       }
>> +               }
>> +
>> +               TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_X86_BUS_LOCK);
> 
> I doubt this works, the UCALL_SYNC above will fallthrough to this assert.  I
> assume run->exit_reason needs a continue for UCALL_SYNC.
>

I agree, there should be a continue for UCALL_SYNC in place of break. I will
correct it in V2. 

I didn't observe this issue because UCALL_SYNC is invoked when GUEST_SYNC() is
called from the guest code. Since GUEST_SYNC() is not present in the guest
code used in the bus lock test case, UCALL_SYNC was never triggered.
 
>> +               TEST_ASSERT_EQ(run->flags, KVM_RUN_X86_BUS_LOCK);
>> +               run->flags &= ~KVM_RUN_X86_BUS_LOCK;
>
> No need, KVM should clear the flag if the exit isn't due to a bus lock.

Sure I will remove this.

> 
>> +               run->exit_reason = 0;
> 
> Again, no need, KVM should take care of resetting exit_reason.

Ack.

> 
>> +       }
>>

- Manali
Sean Christopherson Aug. 30, 2024, 4:42 a.m. UTC | #12
On Thu, Aug 29, 2024, Borislav Petkov wrote:
> On Fri, Aug 16, 2024 at 12:37:52PM -0700, Sean Christopherson wrote:
> > I would strongly prefer to enumerate this in /proc/cpuinfo, having to manually
> > query CPUID to see if a CPU supports a feature I want to test is beyond annoying.
> 
> Why?
> 
> We have tools/arch/x86/kcpuid/kcpuid.c for that.

Ah, sorry, if the platform+kernel supports the feature, not just raw CPU.  And
because that utility is not available by default on most targets I care about,
and having to build and copy over a binary is annoying (though this is a minor
gripe).

That said, what I really want in most cases is to know if _KVM_ supports a
feature.  I'll think more on this, I have a few vague ideas for getting a pile
of information out of KVM without needing to add more uABI.
Borislav Petkov Aug. 30, 2024, 8:21 a.m. UTC | #13
On Thu, Aug 29, 2024 at 09:42:40PM -0700, Sean Christopherson wrote:
> Ah, sorry, if the platform+kernel supports the feature, not just raw CPU.

Yeah, that's not always trivial, as I'm sure you know. Especially if it is
a complicated feature like SNP, for example, which needs fw and the platform to
be configured properly and so on.

> And because that utility is not available by default on most targets I care
> about, and having to build and copy over a binary is annoying (though this
> is a minor gripe).

I'm keeping that thing as simple as possible on purpose. So if you wanna make
it available on such targets, I'm all ears.
 
> That said, what I really want in most cases is to know if _KVM_ supports
> a feature.  I'll think more on this, I have a few vague ideas for getting
> a pile of information out of KVM without needing to add more uABI.

That's exactly my pet peeve - making it a uABI and then supporting it foreva.

We have tried to explain what cpuinfo should be:

Documentation/arch/x86/cpuinfo.rst

The gist of it is:

"So, the current use of /proc/cpuinfo is to show features which the kernel has
*enabled* and *supports*. As in: the CPUID feature flag is there, there's an
additional setup which the kernel has done while booting and the functionality
is ready to use. A perfect example for that is "user_shstk" where additional
code enablement is present in the kernel to support shadow stack for user
programs."

So if it is something that has been enabled and is actively supported, then
sure, ofc. What I don't want to have there is a partial mirror of every
possible CPUID flag which is going to be a senseless and useless madness.

Dunno, I guess if we had a

"virt: ..."

line in /proc/cpuinfo which has flags of what the hypervisor has enabled as
a feature, it might not be such a wrong idea... with the above caveats, ofc.
I don't think you want a flurry of patches setting all possible flags just
because.

Or maybe somewhere else where you can query it conveniently...