From patchwork Fri Aug 2 18:22:35 2024
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 816413
Date: Fri, 2 Aug 2024 18:22:35 +0000
In-Reply-To: <20240802182240.1916675-1-coltonlewis@google.com>
References: <20240802182240.1916675-1-coltonlewis@google.com>
Message-ID: <20240802182240.1916675-2-coltonlewis@google.com>
Subject: [PATCH 1/6] KVM: x86: selftests: Fix typos in macro variable use
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Mingwei Zhang, Jinrong Liang, Jim Mattson, Aaron Lewis,
    Sean Christopherson, Paolo Bonzini, Shuah Khan,
    linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
    Colton Lewis

Without the leading underscore, these references pick up a variable
from the calling scope instead of the macro parameter. The code only
worked before by accident, because every calling scope happened to
have a variable with the right name.

Signed-off-by: Colton Lewis
---
 tools/testing/selftests/kvm/x86_64/pmu_counters_test.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 698cb36989db..0e305e43a93b 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -174,7 +174,7 @@ do {									\
 
 #define GUEST_TEST_EVENT(_idx, _event, _pmc, _pmc_msr, _ctrl_msr, _value, FEP)	\
 do {										\
-	wrmsr(pmc_msr, 0);							\
+	wrmsr(_pmc_msr, 0);							\
 										\
 	if (this_cpu_has(X86_FEATURE_CLFLUSHOPT))				\
 		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflushopt .", FEP);	\
@@ -331,9 +331,9 @@ __GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector,		\
 		       expect_gp ? "#GP" : "no fault", msr, vector)		\
 
 #define GUEST_ASSERT_PMC_VALUE(insn, msr, val, expected)			\
-	__GUEST_ASSERT(val == expected_val,					\
+	__GUEST_ASSERT(val == expected,						\
 		       "Expected " #insn "(0x%x) to yield 0x%lx, got 0x%lx",	\
-		       msr, expected_val, val);
+		       msr, expected, val);
 
 static void guest_test_rdpmc(uint32_t rdpmc_idx, bool expect_success,
 			     uint64_t expected_val)
From patchwork Fri Aug 2 18:22:37 2024
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 816412
Date: Fri, 2 Aug 2024 18:22:37 +0000
In-Reply-To: <20240802182240.1916675-1-coltonlewis@google.com>
References: <20240802182240.1916675-1-coltonlewis@google.com>
Message-ID: <20240802182240.1916675-4-coltonlewis@google.com>
Subject: [PATCH 3/6] KVM: x86: selftests: Set up AMD VM in pmu_counters_test
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Mingwei Zhang, Jinrong Liang, Jim Mattson, Aaron Lewis,
    Sean Christopherson, Paolo Bonzini, Shuah Khan,
    linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
    Colton Lewis

Branch in main() depending on whether the CPU is Intel or AMD. The two
are subject to vastly different requirements because the AMD PMU lacks
many of the properties the Intel PMU defines, including the entire
CPUID 0xa leaf where Intel reports its PMU properties, and it has no
consistent notion of PMU versions. Every AMD feature is a separate
CPUID flag, and the features are not the same as Intel's.

Set up a VM for testing core AMD counters and ensure the proper CPUID
features are set.
Signed-off-by: Colton Lewis
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 80 ++++++++++++++++---
 1 file changed, 68 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 0e305e43a93b..a11df073331a 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -33,7 +33,7 @@
 static uint8_t kvm_pmu_version;
 static bool kvm_has_perf_caps;
 
-static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
+static struct kvm_vm *intel_pmu_vm_create(struct kvm_vcpu **vcpu,
 						  void *guest_code,
 						  uint8_t pmu_version,
 						  uint64_t perf_capabilities)
@@ -303,7 +303,7 @@ static void test_arch_events(uint8_t pmu_version, uint64_t perf_capabilities,
 	if (!pmu_version)
 		return;
 
-	vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_test_arch_events,
+	vm = intel_pmu_vm_create(&vcpu, guest_test_arch_events,
 					 pmu_version, perf_capabilities);
 
 	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH,
@@ -463,7 +463,7 @@ static void test_gp_counters(uint8_t pmu_version, uint64_t perf_capabilities,
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 
-	vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_test_gp_counters,
+	vm = intel_pmu_vm_create(&vcpu, guest_test_gp_counters,
 					 pmu_version, perf_capabilities);
 
 	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_NR_GP_COUNTERS,
@@ -530,7 +530,7 @@ static void test_fixed_counters(uint8_t pmu_version, uint64_t perf_capabilities,
 	struct kvm_vcpu *vcpu;
 	struct kvm_vm *vm;
 
-	vm = pmu_vm_create_with_one_vcpu(&vcpu, guest_test_fixed_counters,
+	vm = intel_pmu_vm_create(&vcpu, guest_test_fixed_counters,
 					 pmu_version, perf_capabilities);
 
 	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_PMU_FIXED_COUNTERS_BITMASK,
@@ -627,18 +627,74 @@ static void test_intel_counters(void)
 	}
 }
 
-int main(int argc, char *argv[])
+static uint8_t nr_core_counters(void)
 {
-	TEST_REQUIRE(kvm_is_pmu_enabled());
+	const uint8_t nr_counters = kvm_cpu_property(X86_PROPERTY_NUM_PERF_CTR_CORE);
+	const bool core_ext = kvm_cpu_has(X86_FEATURE_PERF_CTR_EXT_CORE);
+	/* The default numbers promised if the property is 0 */
+	const uint8_t amd_nr_core_ext_counters = 6;
+	const uint8_t amd_nr_core_counters = 4;
+
+	if (nr_counters != 0)
+		return nr_counters;
+
+	if (core_ext)
+		return amd_nr_core_ext_counters;
+
+	return amd_nr_core_counters;
+}
+
+static void guest_test_core_counters(void)
+{
+	GUEST_DONE();
+}
 
-	TEST_REQUIRE(host_cpu_is_intel);
-	TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
-	TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0);
+static void test_core_counters(void)
+{
+	uint8_t nr_counters = nr_core_counters();
+	bool core_ext = kvm_cpu_has(X86_FEATURE_PERF_CTR_EXT_CORE);
+	bool perf_mon_v2 = kvm_cpu_has(X86_FEATURE_PERF_MON_V2);
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
 
-	kvm_pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
-	kvm_has_perf_caps = kvm_cpu_has(X86_FEATURE_PDCM);
+	vm = vm_create_with_one_vcpu(&vcpu, guest_test_core_counters);
 
-	test_intel_counters();
+	/* This property may not be there in older underlying CPUs,
+	 * but it simplifies the test code for it to be set
+	 * unconditionally.
+	 */
+	vcpu_set_cpuid_property(vcpu, X86_PROPERTY_NUM_PERF_CTR_CORE, nr_counters);
+	if (core_ext)
+		vcpu_set_cpuid_feature(vcpu, X86_FEATURE_PERF_CTR_EXT_CORE);
+	if (perf_mon_v2)
+		vcpu_set_cpuid_feature(vcpu, X86_FEATURE_PERF_MON_V2);
+
+	pr_info("Testing core counters: CoreExt = %u, PerfMonV2 = %u, NumCounters = %u\n",
+		core_ext, perf_mon_v2, nr_counters);
+
+	run_vcpu(vcpu);
+
+	kvm_vm_free(vm);
+}
+
+static void test_amd_counters(void)
+{
+	test_core_counters();
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_is_pmu_enabled());
+
+	if (host_cpu_is_intel) {
+		TEST_REQUIRE(kvm_cpu_has_p(X86_PROPERTY_PMU_VERSION));
+		TEST_REQUIRE(kvm_cpu_property(X86_PROPERTY_PMU_VERSION) > 0);
+		kvm_pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
+		kvm_has_perf_caps = kvm_cpu_has(X86_FEATURE_PDCM);
+		test_intel_counters();
+	} else if (host_cpu_is_amd) {
+		/* AMD CPUs don't have the same properties to look at. */
+		test_amd_counters();
+	}
 
 	return 0;
 }
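
The nr_core_counters() fallback in the patch above mirrors what the
hardware advertises via CPUID. As a rough host-side illustration, the
sketch below is not part of the patch: host_nr_core_counters() is a
made-up helper, and the leaf/bit positions (PerfCtrExtCore in CPUID
Fn8000_0001 ECX[23], NumCorePmc in Fn8000_0022 EBX[3:0]) are my reading
of the AMD PPR rather than something the patch states.

#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t host_nr_core_counters(void)
{
	uint32_t eax, ebx, ecx, edx;
	uint32_t max_ext_leaf;
	uint8_t nr_counters = 0;
	int core_ext = 0;

	/* Highest supported extended leaf. */
	__cpuid(0x80000000, max_ext_leaf, ebx, ecx, edx);

	/* PerfCtrExtCore flag (assumed: Fn8000_0001 ECX bit 23). */
	if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
		core_ext = !!(ecx & (1u << 23));

	/* NumCorePmc field (assumed: Fn8000_0022 EBX bits 3:0). */
	if (max_ext_leaf >= 0x80000022) {
		__cpuid(0x80000022, eax, ebx, ecx, edx);
		nr_counters = ebx & 0xf;
	}

	if (nr_counters)
		return nr_counters;

	/* Same architectural defaults the selftest assumes. */
	return core_ext ? 6 : 4;
}

int main(void)
{
	printf("core counters: %u\n", host_nr_core_counters());
	return 0;
}

If the counter-count field reads zero, typically because leaf
0x80000022 is absent, the same defaults apply as in the selftest: six
counters with CoreExt, otherwise four.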
From patchwork Fri Aug 2 18:22:39 2024
X-Patchwork-Submitter: Colton Lewis
X-Patchwork-Id: 816411
Date: Fri, 2 Aug 2024 18:22:39 +0000
In-Reply-To: <20240802182240.1916675-1-coltonlewis@google.com>
References: <20240802182240.1916675-1-coltonlewis@google.com>
Message-ID: <20240802182240.1916675-6-coltonlewis@google.com>
Subject: [PATCH 5/6] KVM: x86: selftests: Test core events
From: Colton Lewis
To: kvm@vger.kernel.org
Cc: Mingwei Zhang, Jinrong Liang, Jim Mattson, Aaron Lewis,
    Sean Christopherson, Paolo Bonzini, Shuah Khan,
    linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org,
    Colton Lewis

Test events on core counters by iterating through every combination of
the events in amd_pmu_zen_events with every core counter. For each
combination, calculate the appropriate addresses for the event
select/control register and the counter register. The base addresses
and layout scheme change depending on whether the CoreExt feature is
present.

To do the testing, reuse GUEST_TEST_EVENT to run a standard known
workload. Decouple it from guest_assert_event_count (now
guest_assert_intel_event_count) so it generalizes to AMD. Then assert
the most specific detail that can reasonably be known about each
counter result: the exact count is defined and known for some events,
while others are merely asserted to be nonzero.

Note on exact counts: AMD counts one more branch than Intel for the
same workload. Though I can't confirm a reason, the most plausible
explanation is that the boundary of the loop instruction is counted
differently. Presumably, when the counter reaches 0 and execution
continues to the next instruction, AMD counts this as a branch and
Intel doesn't.
Signed-off-by: Colton Lewis
---
 .../selftests/kvm/x86_64/pmu_counters_test.c | 87 ++++++++++++++++---
 1 file changed, 77 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 9620fc33d26e..fae078b444b3 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -29,6 +29,9 @@
 /* Total number of instructions retired within the measured section. */
 #define NUM_INSNS_RETIRED (NUM_LOOPS * NUM_INSNS_PER_LOOP + NUM_EXTRA_INSNS)
+/* AMD counting one extra branch. Probably at loop boundary condition. */
+#define NUM_BRANCH_INSNS_RETIRED_AMD (NUM_LOOPS+1)
+#define NUM_INSNS_RETIRED_AMD (NUM_INSNS_RETIRED+1)
 
 static uint8_t kvm_pmu_version;
 static bool kvm_has_perf_caps;
@@ -98,7 +101,7 @@ static uint8_t guest_get_pmu_version(void)
  * Sanity check that in all cases, the event doesn't count when it's disabled,
  * and that KVM correctly emulates the write of an arbitrary value.
  */
-static void guest_assert_event_count(uint8_t idx,
+static void guest_assert_intel_event_count(uint8_t idx,
 				     struct kvm_x86_pmu_feature event,
 				     uint32_t pmc, uint32_t pmc_msr)
 {
@@ -140,6 +143,33 @@ static void guest_assert_event_count(uint8_t idx,
 	GUEST_ASSERT_EQ(_rdpmc(pmc), 0xdead);
 }
 
+static void guest_assert_amd_event_count(uint8_t evt_idx, uint8_t cnt_idx, uint32_t pmc_msr)
+{
+	uint64_t count;
+	uint64_t count_pmc;
+
+	count = rdmsr(pmc_msr);
+	count_pmc = _rdpmc(cnt_idx);
+	GUEST_ASSERT_EQ(count, count_pmc);
+
+	switch (evt_idx) {
+	case AMD_ZEN_CORE_CYCLES_INDEX:
+		GUEST_ASSERT_NE(count, 0);
+		break;
+	case AMD_ZEN_INSTRUCTIONS_INDEX:
+		GUEST_ASSERT_EQ(count, NUM_INSNS_RETIRED_AMD);
+		break;
+	case AMD_ZEN_BRANCHES_INDEX:
+		GUEST_ASSERT_EQ(count, NUM_BRANCH_INSNS_RETIRED_AMD);
+		break;
+	case AMD_ZEN_BRANCH_MISSES_INDEX:
+		GUEST_ASSERT_NE(count, 0);
+		break;
+	default:
+		break;
+	}
+
+}
 /*
  * Enable and disable the PMC in a monolithic asm blob to ensure that the
  * compiler can't insert _any_ code into the measured sequence.  Note, ECX
@@ -172,28 +202,29 @@ do {									\
 	);									\
 } while (0)
 
-#define GUEST_TEST_EVENT(_idx, _event, _pmc, _pmc_msr, _ctrl_msr, _value, FEP)	\
+#define GUEST_TEST_EVENT(_pmc_msr, _ctrl_msr, _ctrl_value, FEP)		\
 do {										\
 	wrmsr(_pmc_msr, 0);							\
 										\
 	if (this_cpu_has(X86_FEATURE_CLFLUSHOPT))				\
-		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflushopt .", FEP);	\
+		GUEST_MEASURE_EVENT(_ctrl_msr, _ctrl_value, "clflushopt .", FEP); \
 	else if (this_cpu_has(X86_FEATURE_CLFLUSH))				\
-		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "clflush .", FEP);	\
+		GUEST_MEASURE_EVENT(_ctrl_msr, _ctrl_value, "clflush .", FEP);	\
 	else									\
-		GUEST_MEASURE_EVENT(_ctrl_msr, _value, "nop", FEP);		\
-										\
-	guest_assert_event_count(_idx, _event, _pmc, _pmc_msr);		\
+		GUEST_MEASURE_EVENT(_ctrl_msr, _ctrl_value, "nop", FEP);	\
 } while (0)
 
 static void __guest_test_arch_event(uint8_t idx, struct kvm_x86_pmu_feature event,
 				    uint32_t pmc, uint32_t pmc_msr,
 				    uint32_t ctrl_msr, uint64_t ctrl_msr_value)
 {
-	GUEST_TEST_EVENT(idx, event, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, "");
+	GUEST_TEST_EVENT(pmc_msr, ctrl_msr, ctrl_msr_value, "");
+	guest_assert_intel_event_count(idx, event, pmc, pmc_msr);
 
-	if (is_forced_emulation_enabled)
-		GUEST_TEST_EVENT(idx, event, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, KVM_FEP);
+	if (is_forced_emulation_enabled) {
+		GUEST_TEST_EVENT(pmc_msr, ctrl_msr, ctrl_msr_value, KVM_FEP);
+		guest_assert_intel_event_count(idx, event, pmc, pmc_msr);
+	}
 }
 
 #define X86_PMU_FEATURE_NULL						\
@@ -684,9 +715,45 @@ static void guest_test_rdwr_core_counters(void)
 	}
 }
 
+static void __guest_test_core_event(uint8_t event_idx, uint8_t counter_idx)
+{
+	/* One fortunate area of actual compatibility! This register
+	 * layout is the same for both AMD and Intel.
+	 */
+	uint64_t eventsel = ARCH_PERFMON_EVENTSEL_OS |
+		ARCH_PERFMON_EVENTSEL_ENABLE |
+		amd_pmu_zen_events[event_idx];
+	bool core_ext = this_cpu_has(X86_FEATURE_PERF_CTR_EXT_CORE);
+	uint64_t esel_msr_base = core_ext ? MSR_F15H_PERF_CTL : MSR_K7_EVNTSEL0;
+	uint64_t cnt_msr_base = core_ext ? MSR_F15H_PERF_CTR : MSR_K7_PERFCTR0;
+	uint64_t msr_step = core_ext ? 2 : 1;
+	uint64_t esel_msr = esel_msr_base + msr_step * counter_idx;
+	uint64_t cnt_msr = cnt_msr_base + msr_step * counter_idx;
+
+	GUEST_TEST_EVENT(cnt_msr, esel_msr, eventsel, "");
+	guest_assert_amd_event_count(event_idx, counter_idx, cnt_msr);
+
+	if (is_forced_emulation_enabled) {
+		GUEST_TEST_EVENT(cnt_msr, esel_msr, eventsel, KVM_FEP);
+		guest_assert_amd_event_count(event_idx, counter_idx, cnt_msr);
+	}
+
+}
+
+static void guest_test_core_events(void)
+{
+	uint8_t nr_counters = this_cpu_property(X86_PROPERTY_NUM_PERF_CTR_CORE);
+
+	for (uint8_t i = 0; i < NR_AMD_ZEN_EVENTS; i++) {
+		for (uint8_t j = 0; j < nr_counters; j++)
+			__guest_test_core_event(i, j);
+	}
+}
+
 static void guest_test_core_counters(void)
 {
 	guest_test_rdwr_core_counters();
+	guest_test_core_events();
 	GUEST_DONE();
 }
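
To make the address arithmetic in __guest_test_core_event() concrete,
here is a small host-side sketch that is not part of the patch:
show_layout() is a made-up helper, and the MSR constants are the
msr-index.h values as I understand them (MSR_K7_EVNTSEL0 0xc0010000,
MSR_K7_PERFCTR0 0xc0010004, MSR_F15H_PERF_CTL 0xc0010200,
MSR_F15H_PERF_CTR 0xc0010201). It prints the event-select/counter MSR
pairs produced by the two layouts.

#include <stdint.h>
#include <stdio.h>

/* Legacy (K7) layout: four event selects, then four counters; step 1. */
#define MSR_K7_EVNTSEL0		0xc0010000
#define MSR_K7_PERFCTR0		0xc0010004
/* CoreExt (F15h) layout: interleaved CTL/CTR pairs; step 2. */
#define MSR_F15H_PERF_CTL	0xc0010200
#define MSR_F15H_PERF_CTR	0xc0010201

static void show_layout(int core_ext, unsigned int nr_counters)
{
	uint64_t esel_base = core_ext ? MSR_F15H_PERF_CTL : MSR_K7_EVNTSEL0;
	uint64_t cnt_base = core_ext ? MSR_F15H_PERF_CTR : MSR_K7_PERFCTR0;
	uint64_t step = core_ext ? 2 : 1;

	printf("%s layout:\n", core_ext ? "CoreExt" : "legacy");
	for (unsigned int i = 0; i < nr_counters; i++)
		printf("  counter %u: evtsel 0x%llx, counter 0x%llx\n", i,
		       (unsigned long long)(esel_base + step * i),
		       (unsigned long long)(cnt_base + step * i));
}

int main(void)
{
	show_layout(0, 4);	/* 0xc0010000..3 paired with 0xc0010004..7 */
	show_layout(1, 6);	/* 0xc0010200/1, 0xc0010202/3, ..., 0xc001020a/b */
	return 0;
}

The legacy scheme keeps event selects and counters in two separate
blocks, hence a step of 1 within each block, while the CoreExt scheme
interleaves CTL/CTR pairs starting at 0xc0010200, hence a step of 2.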