From: Jinrong Liang <cloudliang@tencent.com>
Hi,
This patch set adds tests to ensure consistent PMU event filter behavior. Specifically, the patches strengthen the test coverage of KVM's PMU event filter, add documentation, and make other small changes.
The first patch replaces int with uint32_t for nevents to ensure consistency and readability in the code. The second patch adds fixed_counter_bitmap to create_pmu_event_filter() so that the same creator can also control the use of guest fixed counters. The third patch adds test cases for unsupported input values in the PMU filter, including unsupported "action" values, unsupported "flags" values, and unsupported "nevents" values. It also tests that setting non-existent fixed counters in the fixed bitmap doesn't fail.
The fourth patch updates the documentation for the KVM_SET_PMU_EVENT_FILTER ioctl to include a detailed description of how fixed performance events are handled in the pmu filter. The fifth patch adds tests to cover that pmu_event_filter works as expected when applied to fixed performance counters, even when no fixed counters exist. The sixth patch adds a test to ensure that setting both generic and fixed performance event filters does not affect the consistency of the fixed performance filter behavior in KVM. The seventh patch adds a test to verify the behavior of the pmu event filter when an incomplete kvm_pmu_event_filter structure is used.
These changes help to ensure that KVM's PMU event filter functions as expected in all supported use cases. These patches have been tested and verified to function properly.
Thanks for your review and feedback.
Sincerely, Jinrong Liang
Previous: https://lore.kernel.org/kvm/20230414110056.19665-1-cloudliang@tencent.com
v2:
  - Wrap the code in the documentation in a code block. (Bagas Sanjaya)
Jinrong Liang (7):
  KVM: selftests: Replace int with uint32_t for nevents
  KVM: selftests: Apply create_pmu_event_filter() to fixed ctrs
  KVM: selftests: Test unavailable event filters are rejected
  KVM: x86/pmu: Add documentation for fixed ctr on PMU filter
  KVM: selftests: Check if pmu_event_filter meets expectations on fixed ctrs
  KVM: selftests: Check gp event filters without affecting fixed event filters
  KVM: selftests: Test pmu event filter with incompatible kvm_pmu_event_filter
 Documentation/virt/kvm/api.rst               |  21 ++
 .../kvm/x86_64/pmu_event_filter_test.c       | 239 ++++++++++++++++--
 2 files changed, 243 insertions(+), 17 deletions(-)
base-commit: a25497a280bbd7bbcc08c87ddb2b3909affc8402
From: Jinrong Liang <cloudliang@tencent.com>
Defined as type __u32, the nevents field in kvm_pmu_event_filter can only accept positive values within a specific range. Therefore, replacing int with uint32_t for nevents ensures consistency and readability in the code. This change has been tested and verified to work correctly with all relevant code.
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index 1f60dfae69e0..c0521fc9e8f6 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -194,7 +194,7 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
static struct kvm_pmu_event_filter * -create_pmu_event_filter(const uint64_t event_list[], int nevents, +create_pmu_event_filter(const uint64_t event_list[], uint32_t nevents, uint32_t action, uint32_t flags) { struct kvm_pmu_event_filter *f; @@ -648,7 +648,7 @@ const struct masked_events_test test_cases[] = { };
static int append_test_events(const struct masked_events_test *test, - uint64_t *events, int nevents) + uint64_t *events, uint32_t nevents) { const uint64_t *evts; int i; @@ -670,7 +670,7 @@ static bool bool_eq(bool a, bool b) }
static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events, - int nevents) + uint32_t nevents) { int ntests = ARRAY_SIZE(test_cases); struct perf_counter c; @@ -695,7 +695,7 @@ static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events, } }
-static void add_dummy_events(uint64_t *events, int nevents) +static void add_dummy_events(uint64_t *events, uint32_t nevents) { int i;
@@ -714,7 +714,7 @@ static void add_dummy_events(uint64_t *events, int nevents)
static void test_masked_events(struct kvm_vcpu *vcpu) { - int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS; + uint32_t nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS; uint64_t events[MAX_FILTER_EVENTS];
/* Run the test cases against a sparse PMU event filter. */ @@ -726,7 +726,7 @@ static void test_masked_events(struct kvm_vcpu *vcpu) }
static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events, - int nevents, uint32_t flags) + uint32_t nevents, uint32_t flags) { struct kvm_pmu_event_filter *f; int r;
On Thu, Apr 20, 2023, Jinrong Liang wrote:
From: Jinrong Liang <cloudliang@tencent.com>
Defined as type __u32, the nevents field in kvm_pmu_event_filter can only accept positive values within a specific range. Therefore, replacing int with uint32_t for nevents ensures consistency and readability in the code.
Not really. It fixes one type of inconsistency that is fairly common (userspace passing an integer count to the kernel), and replaces it with a different type of inconsistency (signed iterator comparing against an unsigned count). There's already one of those in remove_event(), but I'd rather not create more.
Passing an unsigned int to track what *should* be a small-ish, positive integer can also make it more difficult to detect bugs, e.g. assertions like this won't work:
TEST_ASSERT(nevents >= 0);
If this code were being written from scratch then I wouldn't object to using uint32_t everywhere, but I don't see the point of trying to retroactively change the code.
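As an illustration of that tradeoff, here is a minimal, hypothetical sketch (not code from this series): with an unsigned count the sanity assertion becomes vacuous, while a signed iterator reintroduces a mixed-sign comparison.

	#include <assert.h>
	#include <stdint.h>

	/* Hypothetical helper, for illustration only. */
	static void walk_events(const uint64_t *events, uint32_t nevents)
	{
		int i;

		/* Always true for an unsigned type, so the check catches nothing. */
		assert(nevents >= 0);

		/* Signed iterator vs. unsigned bound: -Wsign-compare territory. */
		for (i = 0; i < nevents; i++)
			(void)events[i];
	}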
From: Jinrong Liang <cloudliang@tencent.com>
Add fixed_counter_bitmap to create_pmu_event_filter() so that the same creator can also be used to control the guest fixed counters.
No functional change intended.
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c | 31 ++++++++++++-------
 1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index c0521fc9e8f6..4e87eea6986b 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -192,19 +192,22 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents) return f; }
- static struct kvm_pmu_event_filter * create_pmu_event_filter(const uint64_t event_list[], uint32_t nevents, - uint32_t action, uint32_t flags) + uint32_t action, uint32_t flags, + uint32_t fixed_counter_bitmap) { struct kvm_pmu_event_filter *f; int i;
f = alloc_pmu_event_filter(nevents); f->action = action; + f->fixed_counter_bitmap = fixed_counter_bitmap; f->flags = flags; - for (i = 0; i < nevents; i++) - f->events[i] = event_list[i]; + if (f->nevents) { + for (i = 0; i < f->nevents; i++) + f->events[i] = event_list[i]; + }
return f; } @@ -213,7 +216,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action) { return create_pmu_event_filter(event_list, ARRAY_SIZE(event_list), - action, 0); + action, 0, 0); }
/* @@ -260,7 +263,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu) struct kvm_pmu_event_filter *f; uint64_t count;
- f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0); + f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0, 0); count = test_with_filter(vcpu, f);
free(f); @@ -544,7 +547,7 @@ static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu,
f = create_pmu_event_filter(masked_events, nmasked_events, KVM_PMU_EVENT_ALLOW, - KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + KVM_PMU_EVENT_FLAG_MASKED_EVENTS, 0); r.raw = test_with_filter(vcpu, f); free(f);
@@ -726,12 +729,14 @@ static void test_masked_events(struct kvm_vcpu *vcpu) }
static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events, - uint32_t nevents, uint32_t flags) + uint32_t nevents, uint32_t flags, uint32_t action, + uint32_t fixed_counter_bitmap) { struct kvm_pmu_event_filter *f; int r;
- f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags); + f = create_pmu_event_filter(events, nevents, action, flags, + fixed_counter_bitmap); r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f); free(f);
@@ -747,14 +752,16 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu) * Unfortunately having invalid bits set in event data is expected to * pass when flags == 0 (bits other than eventsel+umask). */ - r = run_filter_test(vcpu, &e, 1, 0); + r = run_filter_test(vcpu, &e, 1, 0, KVM_PMU_EVENT_ALLOW, 0); TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
- r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS, + KVM_PMU_EVENT_ALLOW, 0); TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
e = KVM_PMU_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf); - r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); + r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS, + KVM_PMU_EVENT_ALLOW, 0); TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing"); }
On Thu, Apr 20, 2023, Jinrong Liang wrote:
From: Jinrong Liang <cloudliang@tencent.com>
Add fixed_counter_bitmap to create_pmu_event_filter() so that the same creator can also be used to control the guest fixed counters.
No functional change intended.
Signed-off-by: Jinrong Liang cloudliang@tencent.com
 .../kvm/x86_64/pmu_event_filter_test.c | 31 ++++++++++++-------
 1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index c0521fc9e8f6..4e87eea6986b 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -192,19 +192,22 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents) return f; }
static struct kvm_pmu_event_filter * create_pmu_event_filter(const uint64_t event_list[], uint32_t nevents,
uint32_t action, uint32_t flags)
uint32_t action, uint32_t flags,
uint32_t fixed_counter_bitmap)
Rather than force callers to pass in every field, often with '0', what about adding an overlay struct similar to what selftests do for MSR arrays?
Ugh, and "event_filter()" uses dynamic allocation for a statically sized array. Even worse, it does ugly variable shadowing of "event_list".
E.g. if we define an overlay, then we can also define a "base" filter to use as the reference. Copying the base filter will be somewhat expensive, but probably no more than dynamic allocation, and the cost is all but guaranteed to be meaningless in both cases.
struct __kvm_pmu_event_filter {
	__u32 action;
	__u32 nevents;
	__u32 fixed_counter_bitmap;
	__u32 flags;
	__u32 pad[4];
	__u64 events[MAX_FILTER_EVENTS];
};

/*
 * The events in the base filter comprise Intel's eight architectural events
 * plus AMD's "retired branch instructions" for Zen[123] (and possibly other
 * AMD CPUs).
 */
static const struct __kvm_pmu_event_filter base_event_filter = {
	.nevents = ARRAY_SIZE,
	.events = {
		EVENT(0x3c, 0),
		EVENT(0xc0, 0),
		EVENT(0x3c, 1),
		EVENT(0x2e, 0x4f),
		EVENT(0x2e, 0x41),
		EVENT(0xc4, 0),
		EVENT(0xc5, 0),
		EVENT(0xa4, 1),
		AMD_ZEN_BR_RETIRED,
	},
};
{ struct kvm_pmu_event_filter *f; int i; f = alloc_pmu_event_filter(nevents); f->action = action;
- f->fixed_counter_bitmap = fixed_counter_bitmap; f->flags = flags;
- for (i = 0; i < nevents; i++)
f->events[i] = event_list[i];
- if (f->nevents) {
for (i = 0; i < f->nevents; i++)
This is an unrelated, pointless change. The body of the loop will never execute if f->nevents is 0.
f->events[i] = event_list[i];
- }
return f; } @@ -213,7 +216,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action) { return create_pmu_event_filter(event_list, ARRAY_SIZE(event_list),
action, 0);
action, 0, 0);
} /* @@ -260,7 +263,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu) struct kvm_pmu_event_filter *f; uint64_t count;
- f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
- f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0, 0); count = test_with_filter(vcpu, f);
free(f); @@ -544,7 +547,7 @@ static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu, f = create_pmu_event_filter(masked_events, nmasked_events, KVM_PMU_EVENT_ALLOW,
KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
KVM_PMU_EVENT_FLAG_MASKED_EVENTS, 0); r.raw = test_with_filter(vcpu, f); free(f);
@@ -726,12 +729,14 @@ static void test_masked_events(struct kvm_vcpu *vcpu) } static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
uint32_t nevents, uint32_t flags)
Blech, this helper is very poorly named. "run" strongly suggests running the vCPU. This should really be something like set_pmu_event_filter(). And for the common case of setting a single event, provide a wrapper for that too.
uint32_t nevents, uint32_t flags, uint32_t action,
uint32_t fixed_counter_bitmap)
Forcing common users to pass two hardcoded params just to reuse a few lines of code is not a good tradeoff. With all of the above refactoring, the new test can be done in a few lines of code without needing to update a bunch of callers. Of course, the rework will mean updating all callers,
	f = base_event_filter;
	f.fixed_counter_bitmap = ~GENMASK_ULL(nr_fixed_counters, 0);
	r = set_pmu_event_filter(vcpu, &f);
	TEST_ASSERT(!r, "Setting non-existent fixed counters in the fixed bitmap should not fail.");
Also, I've lost the context, but please add a property for the number of fixed counters, e.g. to be able to do:
uint32_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
That won't be totally sufficient when KVM gains support for the bitmask CPUID leaf, but it won't become totally invalid either.
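One possible shape for such a property, assuming the selftests' existing KVM_X86_CPU_PROPERTY() machinery (the name and placement below are a sketch, not a committed definition); the fixed counter count comes from CPUID.0xA.EDX[4:0], the same bits the patch extracts with FIXED_CTR_NUM_MASK:

	/* Hypothetical addition to tools/testing/selftests/kvm/include/x86_64/processor.h. */
	#define X86_PROPERTY_PMU_NR_FIXED_COUNTERS	KVM_X86_CPU_PROPERTY(0xa, 0, EDX, 0, 4)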
Something like this as an end result, spread over multiple patches. Completely untested and most definitely won't compile, but it should provide the general idea.
// SPDX-License-Identifier: GPL-2.0 /* * Test for x86 KVM_SET_PMU_EVENT_FILTER. * * Copyright (C) 2022, Google LLC. * * This work is licensed under the terms of the GNU GPL, version 2. * * Verifies the expected behavior of allow lists and deny lists for * virtual PMU events. */
#define _GNU_SOURCE /* for program_invocation_short_name */ #include "test_util.h" #include "kvm_util.h" #include "processor.h"
/* * In lieu of copying perf_event.h into tools... */ #define ARCH_PERFMON_EVENTSEL_OS (1ULL << 17) #define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22)
/* End of stuff taken from perf_event.h. */
/* Oddly, this isn't in perf_event.h. */ #define ARCH_PERFMON_BRANCHES_RETIRED 5
#define NUM_BRANCHES 42 #define FIXED_CTR_NUM_MASK GENMASK_ULL(4, 0) #define PMU_EVENT_FILTER_INVALID_ACTION (KVM_PMU_EVENT_DENY + 1) #define PMU_EVENT_FILTER_INVALID_FLAGS (KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1) #define PMU_EVENT_FILTER_INVALID_NEVENTS (MAX_FILTER_EVENTS + 1)
/* * This is how the event selector and unit mask are stored in an AMD * core performance event-select register. Intel's format is similar, * but the event selector is only 8 bits. */ #define EVENT(select, umask) ((select & 0xf00UL) << 24 | (select & 0xff) | \ (umask & 0xff) << 8)
/* * "Branch instructions retired", from the Intel SDM, volume 3, * "Pre-defined Architectural Performance Events." */
#define INTEL_BR_RETIRED EVENT(0xc4, 0)
/* * "Retired branch instructions", from Processor Programming Reference * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors, * Preliminary Processor Programming Reference (PPR) for AMD Family * 17h Model 31h, Revision B0 Processors, and Preliminary Processor * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision * B1 Processors Volume 1 of 2. */
#define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)
struct __kvm_pmu_event_filter { __u32 action; __u32 nevents; __u32 fixed_counter_bitmap; __u32 flags; __u32 pad[4]; __u64 events[MAX_FILTER_EVENTS]; };
/* * The events in the base filter comprises Intel's eight architectural events * plus AMD's "retired branch instructions" for Zen[123] (and possibly other * AMD CPUs). */ static const struct __kvm_pmu_event_filter base_event_filter = { .nevents = ARRAY_SIZE, .events = { EVENT(0x3c, 0), EVENT(0xc0, 0), EVENT(0x3c, 1), EVENT(0x2e, 0x4f), EVENT(0x2e, 0x41), EVENT(0xc4, 0), EVENT(0xc5, 0), EVENT(0xa4, 1), AMD_ZEN_BR_RETIRED, }, };
/* * If we encounter a #GP during the guest PMU sanity check, then the guest * PMU is not functional. Inform the hypervisor via GUEST_SYNC(0). */ static void guest_gp_handler(struct ex_regs *regs) { GUEST_SYNC(0); }
/* * Check that we can write a new value to the given MSR and read it back. * The caller should provide a non-empty set of bits that are safe to flip. * * Return on success. GUEST_SYNC(0) on error. */ static void check_msr(uint32_t msr, uint64_t bits_to_flip) { uint64_t v = rdmsr(msr) ^ bits_to_flip;
wrmsr(msr, v); if (rdmsr(msr) != v) GUEST_SYNC(0);
v ^= bits_to_flip; wrmsr(msr, v); if (rdmsr(msr) != v) GUEST_SYNC(0); }
static void intel_guest_code(void) { check_msr(MSR_CORE_PERF_GLOBAL_CTRL, 1); check_msr(MSR_P6_EVNTSEL0, 0xffff); check_msr(MSR_IA32_PMC0, 0xffff); GUEST_SYNC(1);
for (;;) { uint64_t br0, br1;
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE | ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED); wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1); br0 = rdmsr(MSR_IA32_PMC0); __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); br1 = rdmsr(MSR_IA32_PMC0); GUEST_SYNC(br1 - br0); } }
/* * To avoid needing a check for CPUID.80000001:ECX.PerfCtrExtCore[bit 23], * this code uses the always-available, legacy K7 PMU MSRs, which alias to * the first four of the six extended core PMU MSRs. */ static void amd_guest_code(void) { check_msr(MSR_K7_EVNTSEL0, 0xffff); check_msr(MSR_K7_PERFCTR0, 0xffff); GUEST_SYNC(1);
for (;;) { uint64_t br0, br1;
wrmsr(MSR_K7_EVNTSEL0, 0); wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE | ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED); br0 = rdmsr(MSR_K7_PERFCTR0); __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); br1 = rdmsr(MSR_K7_PERFCTR0); GUEST_SYNC(br1 - br0); } }
/* * Run the VM to the next GUEST_SYNC(value), and return the value passed * to the sync. Any other exit from the guest is fatal. */ static uint64_t run_vcpu_to_sync(struct kvm_vcpu *vcpu) { struct ucall uc;
vcpu_run(vcpu); TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO); get_ucall(vcpu, &uc); TEST_ASSERT(uc.cmd == UCALL_SYNC, "Received ucall other than UCALL_SYNC: %lu", uc.cmd); return uc.args[1]; }
/* * In a nested environment or if the vPMU is disabled, the guest PMU * might not work as architected (accessing the PMU MSRs may raise * #GP, or writes could simply be discarded). In those situations, * there is no point in running these tests. The guest code will perform * a sanity check and then GUEST_SYNC(success). In the case of failure, * the behavior of the guest on resumption is undefined. */ static bool sanity_check_pmu(struct kvm_vcpu *vcpu) { bool success;
vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler); success = run_vcpu_to_sync(vcpu); vm_install_exception_handler(vcpu->vm, GP_VECTOR, NULL);
return success; }
/* * Remove the first occurrence of 'event' (if any) from the filter's * event list. */ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f, uint64_t event) { bool found = false; int i;
for (i = 0; i < f->nevents; i++) { if (found) f->events[i - 1] = f->events[i]; else found = f->events[i] == event; } if (found) f->nevents--; return f; }
static void test_without_filter(struct kvm_vcpu *vcpu) { uint64_t count = run_vcpu_to_sync(vcpu);
if (count != NUM_BRANCHES) pr_info("%s: Branch instructions retired = %lu (expected %u)\n", __func__, count, NUM_BRANCHES); TEST_ASSERT(count, "Allowed PMU event is not counting"); }
static uint64_t test_with_filter(struct kvm_vcpu *vcpu, struct __kvm_pmu_event_filter *__f) { struct kvm_pmu_event_filter *f = (void *)__f;
vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f); return run_vcpu_to_sync(vcpu); }
static uint64_t test_with_base_filter(struct kvm_vcpu *vcpu, uint32_t action) { struct __kvm_pmu_event_filter f = base_event_filter;
f.action = action; return test_with_filter(vcpu, &f); }
static void test_amd_deny_list(struct kvm_vcpu *vcpu) { struct __kvm_pmu_event_filter f = base_event_filter;
f.nevents = 1; f.events[0] = EVENT(0x1C2, 0); count = test_with_filter(vcpu, f); if (count != NUM_BRANCHES) pr_info("%s: Branch instructions retired = %lu (expected %u)\n", __func__, count, NUM_BRANCHES); TEST_ASSERT(count, "Allowed PMU event is not counting"); }
static void test_member_deny_list(struct kvm_vcpu *vcpu) { uint64_t count = test_with_base_filter(vcpu, KVM_PMU_EVENT_DENY);
if (count) pr_info("%s: Branch instructions retired = %lu (expected 0)\n", __func__, count); TEST_ASSERT(!count, "Disallowed PMU Event is counting"); }
static void test_member_allow_list(struct kvm_vcpu *vcpu) { uint64_t count = test_with_base_filter(vcpu, KVM_PMU_EVENT_ALLOW);
if (count != NUM_BRANCHES) pr_info("%s: Branch instructions retired = %lu (expected %u)\n", __func__, count, NUM_BRANCHES); TEST_ASSERT(count, "Allowed PMU event is not counting"); }
static void test_not_member_deny_list(struct kvm_vcpu *vcpu) { struct __kvm_pmu_event_filter f = base_event_filter;
f.action = KVM_PMU_EVENT_DENY; remove_event(&f, INTEL_BR_RETIRED); remove_event(&f, AMD_ZEN_BR_RETIRED);
count = test_with_filter(vcpu, f); if (count != NUM_BRANCHES) pr_info("%s: Branch instructions retired = %lu (expected %u)\n", __func__, count, NUM_BRANCHES); TEST_ASSERT(count, "Allowed PMU event is not counting"); }
static void test_not_member_allow_list(struct kvm_vcpu *vcpu) { struct __kvm_pmu_event_filter f = base_event_filter; uint64_t count;
f.action = KVM_PMU_EVENT_ALLOW; remove_event(f, INTEL_BR_RETIRED); remove_event(f, AMD_ZEN_BR_RETIRED);
count = test_with_filter(vcpu, &f); if (count) pr_info("%s: Branch instructions retired = %lu (expected 0)\n", __func__, count); TEST_ASSERT(!count, "Disallowed PMU Event is counting"); }
/* * Verify that setting KVM_PMU_CAP_DISABLE prevents the use of the PMU. * * Note that KVM_CAP_PMU_CAPABILITY must be invoked prior to creating VCPUs. */ static void test_pmu_config_disable(void (*guest_code)(void)) { struct kvm_vcpu *vcpu; int r; struct kvm_vm *vm;
r = kvm_check_cap(KVM_CAP_PMU_CAPABILITY); if (!(r & KVM_PMU_CAP_DISABLE)) return;
vm = vm_create(1);
vm_enable_cap(vm, KVM_CAP_PMU_CAPABILITY, KVM_PMU_CAP_DISABLE);
vcpu = vm_vcpu_add(vm, 0, guest_code); vm_init_descriptor_tables(vm); vcpu_init_descriptor_tables(vcpu);
TEST_ASSERT(!sanity_check_pmu(vcpu), "Guest should not be able to use disabled PMU.");
kvm_vm_free(vm); }
/* * On Intel, check for a non-zero PMU version, at least one general-purpose * counter per logical processor, and support for counting the number of branch * instructions retired. */ static bool use_intel_pmu(void) { return host_cpu_is_intel && kvm_cpu_property(X86_PROPERTY_PMU_VERSION) && kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS) && kvm_pmu_has(X86_PMU_FEATURE_BRANCH_INSNS_RETIRED); }
static bool is_zen1(uint32_t family, uint32_t model) { return family == 0x17 && model <= 0x0f; }
static bool is_zen2(uint32_t family, uint32_t model) { return family == 0x17 && model >= 0x30 && model <= 0x3f; }
static bool is_zen3(uint32_t family, uint32_t model) { return family == 0x19 && model <= 0x0f; }
/* * Determining AMD support for a PMU event requires consulting the AMD * PPR for the CPU or reference material derived therefrom. The AMD * test code herein has been verified to work on Zen1, Zen2, and Zen3. * * Feel free to add more AMD CPUs that are documented to support event * select 0xc2 umask 0 as "retired branch instructions." */ static bool use_amd_pmu(void) { uint32_t family = kvm_cpu_family(); uint32_t model = kvm_cpu_model();
return host_cpu_is_amd && (is_zen1(family, model) || is_zen2(family, model) || is_zen3(family, model)); }
/* * "MEM_INST_RETIRED.ALL_LOADS", "MEM_INST_RETIRED.ALL_STORES", and * "MEM_INST_RETIRED.ANY" from https://perfmon-events.intel.com/ * supported on Intel Xeon processors: * - Sapphire Rapids, Ice Lake, Cascade Lake, Skylake. */ #define MEM_INST_RETIRED 0xD0 #define MEM_INST_RETIRED_LOAD EVENT(MEM_INST_RETIRED, 0x81) #define MEM_INST_RETIRED_STORE EVENT(MEM_INST_RETIRED, 0x82) #define MEM_INST_RETIRED_LOAD_STORE EVENT(MEM_INST_RETIRED, 0x83)
static bool supports_event_mem_inst_retired(void) { uint32_t eax, ebx, ecx, edx;
cpuid(1, &eax, &ebx, &ecx, &edx); if (x86_family(eax) == 0x6) { switch (x86_model(eax)) { /* Sapphire Rapids */ case 0x8F: /* Ice Lake */ case 0x6A: /* Skylake */ /* Cascade Lake */ case 0x55: return true; } }
return false; }
/* * "LS Dispatch", from Processor Programming Reference * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors, * Preliminary Processor Programming Reference (PPR) for AMD Family * 17h Model 31h, Revision B0 Processors, and Preliminary Processor * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision * B1 Processors Volume 1 of 2. */ #define LS_DISPATCH 0x29 #define LS_DISPATCH_LOAD EVENT(LS_DISPATCH, BIT(0)) #define LS_DISPATCH_STORE EVENT(LS_DISPATCH, BIT(1)) #define LS_DISPATCH_LOAD_STORE EVENT(LS_DISPATCH, BIT(2))
#define INCLUDE_MASKED_ENTRY(event_select, mask, match) \ KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, false) #define EXCLUDE_MASKED_ENTRY(event_select, mask, match) \ KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, true)
struct perf_counter { union { uint64_t raw; struct { uint64_t loads:22; uint64_t stores:22; uint64_t loads_stores:20; }; }; };
static uint64_t masked_events_guest_test(uint32_t msr_base) { uint64_t ld0, ld1, st0, st1, ls0, ls1; struct perf_counter c; int val;
/* * The actual values of the counters don't determine the outcome of * the test. Only that they are zero or non-zero. */ ld0 = rdmsr(msr_base + 0); st0 = rdmsr(msr_base + 1); ls0 = rdmsr(msr_base + 2);
__asm__ __volatile__("movl $0, %[v];" "movl %[v], %%eax;" "incl %[v];" : [v]"+m"(val) :: "eax");
ld1 = rdmsr(msr_base + 0); st1 = rdmsr(msr_base + 1); ls1 = rdmsr(msr_base + 2);
c.loads = ld1 - ld0; c.stores = st1 - st0; c.loads_stores = ls1 - ls0;
return c.raw; }
static void intel_masked_events_guest_code(void) { uint64_t r;
for (;;) { wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
wrmsr(MSR_P6_EVNTSEL0 + 0, ARCH_PERFMON_EVENTSEL_ENABLE | ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD); wrmsr(MSR_P6_EVNTSEL0 + 1, ARCH_PERFMON_EVENTSEL_ENABLE | ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_STORE); wrmsr(MSR_P6_EVNTSEL0 + 2, ARCH_PERFMON_EVENTSEL_ENABLE | ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD_STORE);
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x7);
r = masked_events_guest_test(MSR_IA32_PMC0);
GUEST_SYNC(r); } }
static void amd_masked_events_guest_code(void) { uint64_t r;
for (;;) { wrmsr(MSR_K7_EVNTSEL0, 0); wrmsr(MSR_K7_EVNTSEL1, 0); wrmsr(MSR_K7_EVNTSEL2, 0);
wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE | ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD); wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE | ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_STORE); wrmsr(MSR_K7_EVNTSEL2, ARCH_PERFMON_EVENTSEL_ENABLE | ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD_STORE);
r = masked_events_guest_test(MSR_K7_PERFCTR0);
GUEST_SYNC(r); } }
static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu, const uint64_t masked_events[], const int nmasked_events) { struct __kvm_pmu_event_filter f = { .nevents = nmasked_events, .action = KVM_PMU_EVENT_ALLOW, .flags = KVM_PMU_EVENT_FLAG_MASKED_EVENTS, }; struct perf_counter r;
memcpy(f.events, masked_events, sizeof(uint64_t) * nmasked_events); r.raw = test_with_filter(vcpu, f); return r; }
/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */ #define MAX_FILTER_EVENTS 300 #define MAX_TEST_EVENTS 10
#define ALLOW_LOADS BIT(0) #define ALLOW_STORES BIT(1) #define ALLOW_LOADS_STORES BIT(2)
struct masked_events_test { uint64_t intel_events[MAX_TEST_EVENTS]; uint64_t intel_event_end; uint64_t amd_events[MAX_TEST_EVENTS]; uint64_t amd_event_end; const char *msg; uint32_t flags; };
/* * These are the test cases for the masked events tests. * * For each test, the guest enables 3 PMU counters (loads, stores, * loads + stores). The filter is then set in KVM with the masked events * provided. The test then verifies that the counters agree with which * ones should be counting and which ones should be filtered. */ const struct masked_events_test test_cases[] = { { .intel_events = { INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x81), }, .amd_events = { INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)), }, .msg = "Only allow loads.", .flags = ALLOW_LOADS, }, { .intel_events = { INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82), }, .amd_events = { INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)), }, .msg = "Only allow stores.", .flags = ALLOW_STORES, }, { .intel_events = { INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83), }, .amd_events = { INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(2)), }, .msg = "Only allow loads + stores.", .flags = ALLOW_LOADS_STORES, }, { .intel_events = { INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83), }, .amd_events = { INCLUDE_MASKED_ENTRY(LS_DISPATCH, ~(BIT(0) | BIT(1)), 0), }, .msg = "Only allow loads and stores.", .flags = ALLOW_LOADS | ALLOW_STORES, }, { .intel_events = { INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82), }, .amd_events = { INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)), }, .msg = "Only allow loads and loads + stores.", .flags = ALLOW_LOADS | ALLOW_LOADS_STORES }, { .intel_events = { INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFE, 0x82), }, .amd_events = { INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)), }, .msg = "Only allow stores and loads + stores.", .flags = ALLOW_STORES | ALLOW_LOADS_STORES }, { .intel_events = { INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0), }, .amd_events = { INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0), }, .msg = "Only allow loads, stores, and loads + stores.", .flags = ALLOW_LOADS | ALLOW_STORES | ALLOW_LOADS_STORES }, };
static int append_test_events(const struct masked_events_test *test, uint64_t *events, uint32_t nevents) { const uint64_t *evts; int i;
evts = use_intel_pmu() ? test->intel_events : test->amd_events; for (i = 0; i < MAX_TEST_EVENTS; i++) { if (evts[i] == 0) break;
events[nevents + i] = evts[i]; }
return nevents + i; }
static bool bool_eq(bool a, bool b) { return a == b; }
static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events, uint32_t nevents) { int ntests = ARRAY_SIZE(test_cases); struct perf_counter c; int i, n;
for (i = 0; i < ntests; i++) { const struct masked_events_test *test = &test_cases[i];
/* Do any test case events overflow MAX_TEST_EVENTS? */ assert(test->intel_event_end == 0); assert(test->amd_event_end == 0);
n = append_test_events(test, events, nevents);
c = run_masked_events_test(vcpu, events, n); TEST_ASSERT(bool_eq(c.loads, test->flags & ALLOW_LOADS) && bool_eq(c.stores, test->flags & ALLOW_STORES) && bool_eq(c.loads_stores, test->flags & ALLOW_LOADS_STORES), "%s loads: %u, stores: %u, loads + stores: %u", test->msg, c.loads, c.stores, c.loads_stores); } }
static void add_dummy_events(uint64_t *events, uint32_t nevents) { int i;
for (i = 0; i < nevents; i++) { int event_select = i % 0xFF; bool exclude = ((i % 4) == 0);
if (event_select == MEM_INST_RETIRED || event_select == LS_DISPATCH) event_select++;
events[i] = KVM_PMU_ENCODE_MASKED_ENTRY(event_select, 0, 0, exclude); } }
static void test_masked_events(struct kvm_vcpu *vcpu) { uint32_t nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS; uint64_t events[MAX_FILTER_EVENTS];
/* Run the test cases against a sparse PMU event filter. */ run_masked_events_tests(vcpu, events, 0);
/* Run the test cases against a dense PMU event filter. */ add_dummy_events(events, MAX_FILTER_EVENTS); run_masked_events_tests(vcpu, events, nevents); }
static int set_pmu_event_filter(struct kvm_vcpu *vcpu, struct __kvm_pmu_event_filter *f) { return __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, (struct kvm_pmu_event_filter *)&f); }
static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, uint64_t event, uint32_t flags, uint32_t action) { struct __kvm_pmu_event_filter f = { .nevents = 1, .flags = flags, .action = action, .events = { event, }, }; return do_vcpu_set_pmu_event_filter(vcpu, &f); }
static void test_filter_ioctl(struct kvm_vcpu *vcpu) { uint32_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS); struct __kvm_pmu_event_filter f; uint64_t e = ~0ul; int r;
/* * Unfortunately having invalid bits set in event data is expected to * pass when flags == 0 (bits other than eventsel+umask). */ r = set_pmu_single_event_filter(vcpu, e, 0); TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
r = set_pmu_single_event_filter(vcpu, e, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
e = KVM_PMU_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf); r = set_pmu_single_event_filter(vcpu, e, KVM_PMU_EVENT_FLAG_MASKED_EVENTS); TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
/* * Test input of unsupported "action" values should return an error. * The only values currently supported are 0 or 1. */ f = base_event_filter; f.action = PMU_EVENT_FILTER_INVALID_ACTION; r = set_pmu_event_filter(vcpu, e, 0, PMU_EVENT_FILTER_INVALID_ACTION); TEST_ASSERT(r != 0, "Set invalid action is expected to fail.");
/* * Test input of unsupported "flags" values should return an error. * The only values currently supported are 0 or 1. */ r = set_pmu_single_event_filter(vcpu, e, PMU_EVENT_FILTER_INVALID_FLAGS); TEST_ASSERT(r != 0, "Set invalid flags is expected to fail.");
/* * Test input of unsupported "nevents" values should return an error. * The only values currently supported are those less than or equal to * MAX_FILTER_EVENTS. */ f = base_event_filter; f.nevents = PMU_EVENT_FILTER_INVALID_NEVENTS; r = set_pmu_event_filter(vcpu, &f); TEST_ASSERT(r != 0, "Setting PMU event filters that exceeds the maximum supported value should fail");
/* * In this case, setting non-existent fixed counters in the fixed bitmap * doesn't fail. */ f = base_event_filter; f.fixed_counter_bitmap = ~GENMASK_ULL(nr_fixed_counters, 0); r = set_pmu_event_filter(vcpu, &f); TEST_ASSERT(!r, "Setting non-existent fixed counters in the fixed bitmap should not fail."); }
int main(int argc, char *argv[]) { void (*guest_code)(void); struct kvm_vcpu *vcpu, *vcpu2 = NULL; struct kvm_vm *vm;
TEST_REQUIRE(get_kvm_param_bool("enable_pmu")); TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_FILTER)); TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS));
TEST_REQUIRE(use_intel_pmu() || use_amd_pmu()); guest_code = use_intel_pmu() ? intel_guest_code : amd_guest_code;
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
vm_init_descriptor_tables(vm); vcpu_init_descriptor_tables(vcpu);
TEST_REQUIRE(sanity_check_pmu(vcpu));
if (use_amd_pmu()) test_amd_deny_list(vcpu);
test_without_filter(vcpu); test_member_deny_list(vcpu); test_member_allow_list(vcpu); test_not_member_deny_list(vcpu); test_not_member_allow_list(vcpu);
if (use_intel_pmu() && supports_event_mem_inst_retired() && kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS) >= 3) vcpu2 = vm_vcpu_add(vm, 2, intel_masked_events_guest_code); else if (use_amd_pmu()) vcpu2 = vm_vcpu_add(vm, 2, amd_masked_events_guest_code);
if (vcpu2) test_masked_events(vcpu2); test_filter_ioctl(vcpu);
kvm_vm_free(vm);
test_pmu_config_disable(guest_code);
return 0; }
From: Jinrong Liang <cloudliang@tencent.com>
Add unsupported-input test cases for the PMU filter. Specifically, test the input of unsupported "action" values, unsupported "flags" values, and unsupported "nevents" values, which should all return an error, as they are currently unsupported by the filter. Additionally, test that setting non-existent fixed counters in the fixed bitmap doesn't fail.
This change aims to improve the testing of the PMU filter and ensure that it functions correctly in all supported use cases. The patch has been tested and verified to function correctly.
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c | 52 +++++++++++++++++++
 1 file changed, 52 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index 4e87eea6986b..a3d5c30ce914 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -27,6 +27,10 @@ #define ARCH_PERFMON_BRANCHES_RETIRED 5
#define NUM_BRANCHES 42 +#define FIXED_CTR_NUM_MASK GENMASK_ULL(4, 0) +#define PMU_EVENT_FILTER_INVALID_ACTION (KVM_PMU_EVENT_DENY + 1) +#define PMU_EVENT_FILTER_INVALID_FLAGS (KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1) +#define PMU_EVENT_FILTER_INVALID_NEVENTS (MAX_FILTER_EVENTS + 1)
/* * This is how the event selector and unit mask are stored in an AMD @@ -743,10 +747,22 @@ static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events, return r; }
+static uint8_t get_kvm_supported_fixed_num(void) +{ + const struct kvm_cpuid_entry2 *kvm_entry; + + if (host_cpu_is_amd) + return 0; + + kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0); + return kvm_entry->edx & FIXED_CTR_NUM_MASK; +} + static void test_filter_ioctl(struct kvm_vcpu *vcpu) { uint64_t e = ~0ul; int r; + uint8_t max_fixed_num = get_kvm_supported_fixed_num();
/* * Unfortunately having invalid bits set in event data is expected to @@ -763,6 +779,42 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu) r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS, KVM_PMU_EVENT_ALLOW, 0); TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing"); + + /* + * Test input of unsupported "action" values should return an error. + * The only values currently supported are 0 or 1. + */ + r = run_filter_test(vcpu, 0, 0, 0, PMU_EVENT_FILTER_INVALID_ACTION, 0); + TEST_ASSERT(r != 0, "Set invalid action is expected to fail."); + + /* + * Test input of unsupported "flags" values should return an error. + * The only values currently supported are 0 or 1. + */ + r = run_filter_test(vcpu, 0, 0, PMU_EVENT_FILTER_INVALID_FLAGS, + KVM_PMU_EVENT_ALLOW, 0); + TEST_ASSERT(r != 0, "Set invalid flags is expected to fail."); + + /* + * Test input of unsupported "nevents" values should return an error. + * The only values currently supported are those less than or equal to + * MAX_FILTER_EVENTS. + */ + r = run_filter_test(vcpu, event_list, PMU_EVENT_FILTER_INVALID_NEVENTS, + 0, KVM_PMU_EVENT_ALLOW, 0); + TEST_ASSERT(r != 0, + "Setting PMU event filters that exceeds the maximum supported value should fail"); + + /* + * In this case, set non-exist fixed counters in the fixed bitmap + * doesn't fail. + */ + if (max_fixed_num) { + r = run_filter_test(vcpu, 0, 0, 0, KVM_PMU_EVENT_ALLOW, + ~GENMASK_ULL(max_fixed_num, 0)); + TEST_ASSERT(r == 0, + "Set invalid or non-exist fixed cunters in the fixed bitmap fail."); + } }
int main(int argc, char *argv[])
On Thu, Apr 20, 2023, Jinrong Liang wrote:
From: Jinrong Liang <cloudliang@tencent.com>
Add unsupported-input test cases for the PMU filter. Specifically, test the input of unsupported "action" values, unsupported "flags" values, and unsupported "nevents" values, which should all return an error, as they are currently unsupported by the filter. Additionally, test that setting non-existent fixed counters in the fixed bitmap doesn't fail.
This change aims to improve the testing of the PMU filter and ensure that it functions correctly in all supported use cases. The patch has been tested and verified to function correctly.
Signed-off-by: Jinrong Liang cloudliang@tencent.com
 .../kvm/x86_64/pmu_event_filter_test.c | 52 +++++++++++++++++++
 1 file changed, 52 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index 4e87eea6986b..a3d5c30ce914 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -27,6 +27,10 @@ #define ARCH_PERFMON_BRANCHES_RETIRED 5 #define NUM_BRANCHES 42 +#define FIXED_CTR_NUM_MASK GENMASK_ULL(4, 0) +#define PMU_EVENT_FILTER_INVALID_ACTION (KVM_PMU_EVENT_DENY + 1) +#define PMU_EVENT_FILTER_INVALID_FLAGS (KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1) +#define PMU_EVENT_FILTER_INVALID_NEVENTS (MAX_FILTER_EVENTS + 1) /*
- This is how the event selector and unit mask are stored in an AMD
@@ -743,10 +747,22 @@ static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events, return r; } +static uint8_t get_kvm_supported_fixed_num(void) +{
- const struct kvm_cpuid_entry2 *kvm_entry;
- if (host_cpu_is_amd)
return 0;
- kvm_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0xa, 0);
- return kvm_entry->edx & FIXED_CTR_NUM_MASK;
Ah, I got ahead of myself. This is where the KVM_X86_PROPERTY comes in.
From: Jinrong Liang <cloudliang@tencent.com>
Update the documentation for the KVM_SET_PMU_EVENT_FILTER ioctl to include a detailed description of how fixed performance events are handled in the pmu filter. The action and fixed_counter_bitmap members of the pmu filter determine whether fixed performance events can be programmed by the guest. This information is helpful for correctly configuring the fixed_counter_bitmap and action fields to filter fixed performance events.
Suggested-by: Like Xu <likexu@tencent.com>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/oe-kbuild-all/202304150850.rx4UDDsB-lkp@intel.com
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
 Documentation/virt/kvm/api.rst | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index a69e91088d76..b5836767e0e7 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5122,6 +5122,27 @@ Valid values for 'action'::
   #define KVM_PMU_EVENT_ALLOW 0
   #define KVM_PMU_EVENT_DENY 1
 
+Via this API, KVM userspace can also control the behavior of the VM's fixed
+counters (if any) by configuring the "action" and "fixed_counter_bitmap" fields.
+
+Specifically, KVM follows the following pseudo-code when determining whether to
+allow the guest FixCtr[i] to count its pre-defined fixed event::
+
+  FixCtr[i]_is_allowed = (action == ALLOW) && (bitmap & BIT(i)) ||
+    (action == DENY) && !(bitmap & BIT(i));
+  FixCtr[i]_is_denied = !FixCtr[i]_is_allowed;
+
+Note once this API interface is called, the default zero value of the field
+"fixed_counter_bitmap" will implicitly affect all fixed counters, even if it's
+expected to be used only to control the events on generic counters.
+
+In addition, pre-defined performance events on the fixed counters already have
+event_select and unit_mask values defined, which means userspace can also
+control fixed counters by configuring "action" + "events" fields.
+
+When there is a contradiction between these two policies, the fixed performance
+counter will only follow the rule of the pseudo-code above.
+
 4.121 KVM_PPC_SVM_OFF
 ---------------------
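For illustration, a minimal userspace sketch of the pseudo-code above (hypothetical vm_fd, error handling elided): with action == ALLOW and only bit 0 set in "fixed_counter_bitmap", FixCtr[0] is allowed to count and the remaining fixed counters are denied.

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* vm_fd is assumed to be an open KVM VM file descriptor. */
	static int allow_only_fixed_counter0(int vm_fd)
	{
		struct kvm_pmu_event_filter filter;

		memset(&filter, 0, sizeof(filter));
		filter.action = KVM_PMU_EVENT_ALLOW;
		filter.nevents = 0;			/* empty allow list for generic events */
		filter.fixed_counter_bitmap = 0x1;	/* allow FixCtr[0], deny the other fixed counters */

		return ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, &filter);
	}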
On Thu, Apr 20, 2023, Jinrong Liang wrote:
From: Jinrong Liang <cloudliang@tencent.com>
Update the documentation for the KVM_SET_PMU_EVENT_FILTER ioctl to include a detailed description of how fixed performance events are handled in the pmu filter. The action and fixed_counter_bitmap members of the pmu filter determine whether fixed performance events can be programmed by the guest. This information is helpful for correctly configuring the fixed_counter_bitmap and action fields to filter fixed performance events.
Suggested-by: Like Xu <likexu@tencent.com>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/oe-kbuild-all/202304150850.rx4UDDsB-lkp@intel.com
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
Please post this separately from the selftests changes.
1 file changed, 21 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index a69e91088d76..b5836767e0e7 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -5122,6 +5122,27 @@ Valid values for 'action':: #define KVM_PMU_EVENT_ALLOW 0 #define KVM_PMU_EVENT_DENY 1 +Via this API, KVM userspace can also control the behavior of the VM's fixed +counters (if any) by configuring the "action" and "fixed_counter_bitmap" fields.
+Specifically, KVM follows the following pseudo-code when determining whether to +allow the guest FixCtr[i] to count its pre-defined fixed event::
- FixCtr[i]_is_allowed = (action == ALLOW) && (bitmap & BIT(i)) ||
- (action == DENY) && !(bitmap & BIT(i));
- FixCtr[i]_is_denied = !FixCtr[i]_is_allowed;
+Note once this API interface is called, the default zero value of the field
No, there is no "default" value. Userspace provides the exact value. The KVM *selftest* clears fixed_counter_bitmap in all cases, but there is no default anywhere.
+"fixed_counter_bitmap" will implicitly affect all fixed counters, even if it's
There is no implicit behavior, userspace very explicitly provides fixed_counter_bitmap.
+expected to be used only to control the events on generic counters.
I would rather phrase this as:
--- KVM always consumes fixed_counter_bitmap, it's userspace's responsibility to ensure fixed_counter_bitmap is set correctly, e.g. if userspace wants to define a filter that only affects general purpose counters. ---
+In addition, pre-defined performance events on the fixed counters already have +event_select and unit_mask values defined, which means userspace can also +control fixed counters by configuring "action"+ "events" fields.
+When there is a contradiction between these two policies, the fixed performance +counter will only follow the rule of the pseudo-code above.
This is unnecessarily vague. I think what you're saying is, with a slight reword of the first paragraph too:
--- Note, the "events" field also applies to fixed counters' hardcoded event_select and unit_mask values. "fixed_counter_bitmap" has higher priority than "events" if there is a contradiction between the two. ---
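A concrete sketch of that priority rule (assumptions: vm_fd is an open VM file descriptor, error handling is elided, and 0xc0 with unit_mask 0 is the architectural "instructions retired" event backing Intel's fixed counter 0):

	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static void deny_event_but_not_fixed_counter0(int vm_fd)
	{
		struct kvm_pmu_event_filter *f = calloc(1, sizeof(*f) + sizeof(__u64));

		f->action = KVM_PMU_EVENT_DENY;
		f->nevents = 1;
		f->events[0] = 0xc0;		/* event_select 0xc0, unit_mask 0: instructions retired */
		f->fixed_counter_bitmap = 0;	/* bit 0 clear: FixCtr[0] is not in the deny bitmap */

		/* Per the wording above, the bitmap wins and FixCtr[0] keeps counting. */
		ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, f);
		free(f);
	}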
From: Jinrong Liang <cloudliang@tencent.com>
Add tests to cover that pmu_event_filter works as expected when it's applied to fixed performance counters, even when no fixed counters exist (e.g. an Intel guest with pmu version=1 or an AMD guest).
Signed-off-by: Jinrong Liang <cloudliang@tencent.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c | 109 ++++++++++++++++++
 1 file changed, 109 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index a3d5c30ce914..0f54c53d7fff 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -31,6 +31,7 @@ #define PMU_EVENT_FILTER_INVALID_ACTION (KVM_PMU_EVENT_DENY + 1) #define PMU_EVENT_FILTER_INVALID_FLAGS (KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1) #define PMU_EVENT_FILTER_INVALID_NEVENTS (MAX_FILTER_EVENTS + 1) +#define INTEL_PMC_IDX_FIXED 32
/* * This is how the event selector and unit mask are stored in an AMD @@ -817,6 +818,113 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu) } }
+static void intel_guest_run_fixed_counters(uint8_t fixed_ctr_idx) +{ + for (;;) { + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + wrmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx, 0); + + /* Only OS_EN bit is enabled for fixed counter[idx]. */ + wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * fixed_ctr_idx)); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, + BIT_ULL(INTEL_PMC_IDX_FIXED + fixed_ctr_idx)); + __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES})); + wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); + + GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx)); + } +} + +static struct kvm_vcpu *new_vcpu(void *guest_code) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + vm = vm_create_with_one_vcpu(&vcpu, guest_code); + vm_init_descriptor_tables(vm); + vcpu_init_descriptor_tables(vcpu); + + return vcpu; +} + +static void free_vcpu(struct kvm_vcpu *vcpu) +{ + kvm_vm_free(vcpu->vm); +} + +static uint64_t test_fixed_ctr_without_filter(struct kvm_vcpu *vcpu) +{ + return run_vcpu_to_sync(vcpu); +} + +static const uint32_t actions[] = { + KVM_PMU_EVENT_ALLOW, + KVM_PMU_EVENT_DENY, +}; + +static uint64_t test_fixed_ctr_with_filter(struct kvm_vcpu *vcpu, + uint32_t action, + uint32_t bitmap) +{ + struct kvm_pmu_event_filter *f; + uint64_t r; + + f = create_pmu_event_filter(0, 0, action, 0, bitmap); + r = test_with_filter(vcpu, f); + free(f); + return r; +} + +static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap) +{ + return (action == KVM_PMU_EVENT_ALLOW && (bitmap & BIT_ULL(idx))) || + (action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx))); +} + +static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu, + uint8_t fixed_ctr_idx, + uint8_t max_fixed_num) +{ + uint8_t i; + uint32_t bitmap; + uint64_t count; + bool expected; + + /* + * Check the fixed performance counter can count normally works when + * KVM userspace doesn't set any pmu filter. + */ + TEST_ASSERT(max_fixed_num && test_fixed_ctr_without_filter(vcpu), + "Fixed counter does not exist or does not work as expected."); + + for (i = 0; i < ARRAY_SIZE(actions); i++) { + for (bitmap = 0; bitmap < BIT_ULL(max_fixed_num); bitmap++) { + expected = fixed_ctr_is_allowed(fixed_ctr_idx, actions[i], bitmap); + count = test_fixed_ctr_with_filter(vcpu, actions[i], bitmap); + + TEST_ASSERT(expected == !!count, + "Fixed event filter does not work as expected."); + } + } +} + +static void test_fixed_counter_bitmap(void) +{ + struct kvm_vcpu *vcpu; + uint8_t idx, max_fixed_num = get_kvm_supported_fixed_num(); + + /* + * Check that pmu_event_filter works as expected when it's applied to + * fixed performance counters. + */ + for (idx = 0; idx < max_fixed_num; idx++) { + vcpu = new_vcpu(intel_guest_run_fixed_counters); + vcpu_args_set(vcpu, 1, idx); + test_fixed_ctr_action_and_bitmap(vcpu, idx, max_fixed_num); + free_vcpu(vcpu); + } +} + int main(int argc, char *argv[]) { void (*guest_code)(void); @@ -860,6 +968,7 @@ int main(int argc, char *argv[]) kvm_vm_free(vm);
test_pmu_config_disable(guest_code); + test_fixed_counter_bitmap();
return 0; }
On Thu, Apr 20, 2023, Jinrong Liang wrote:
From: Jinrong Liang <cloudliang@tencent.com>
Add tests to cover that pmu_event_filter works as expected when it's applied to fixed performance counters, even when no fixed counters exist (e.g. an Intel guest with pmu version=1 or an AMD guest).
Signed-off-by: Jinrong Liang cloudliang@tencent.com
 .../kvm/x86_64/pmu_event_filter_test.c | 109 ++++++++++++++++++
 1 file changed, 109 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c index a3d5c30ce914..0f54c53d7fff 100644 --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c @@ -31,6 +31,7 @@ #define PMU_EVENT_FILTER_INVALID_ACTION (KVM_PMU_EVENT_DENY + 1) #define PMU_EVENT_FILTER_INVALID_FLAGS (KVM_PMU_EVENT_FLAG_MASKED_EVENTS + 1) #define PMU_EVENT_FILTER_INVALID_NEVENTS (MAX_FILTER_EVENTS + 1) +#define INTEL_PMC_IDX_FIXED 32 /*
- This is how the event selector and unit mask are stored in an AMD
@@ -817,6 +818,113 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu) } } +static void intel_guest_run_fixed_counters(uint8_t fixed_ctr_idx) +{
- for (;;) {
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
wrmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx, 0);
/* Only OS_EN bit is enabled for fixed counter[idx]. */
wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, BIT_ULL(4 * fixed_ctr_idx));
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL,
BIT_ULL(INTEL_PMC_IDX_FIXED + fixed_ctr_idx));
__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
GUEST_SYNC(rdmsr(MSR_CORE_PERF_FIXED_CTR0 + fixed_ctr_idx));
- }
+}
+static struct kvm_vcpu *new_vcpu(void *guest_code) +{
- struct kvm_vm *vm;
- struct kvm_vcpu *vcpu;
- vm = vm_create_with_one_vcpu(&vcpu, guest_code);
- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
Unnecessary copy+paste, this test doesn't setup a #GP handler.
- return vcpu;
+}
+static void free_vcpu(struct kvm_vcpu *vcpu) +{
- kvm_vm_free(vcpu->vm);
+}
+static uint64_t test_fixed_ctr_without_filter(struct kvm_vcpu *vcpu) +{
- return run_vcpu_to_sync(vcpu);
+}
Please don't add a wrappers that are single line passthroughs.
+static const uint32_t actions[] = {
- KVM_PMU_EVENT_ALLOW,
- KVM_PMU_EVENT_DENY,
+};
(a) don't define global variables with super common names (this test sets a bad precedent). (b) this array is used in *one* function, i.e. it can be a local variable. (c) using an array doesn't save you code and just obfuscates what's happening.
+static uint64_t test_fixed_ctr_with_filter(struct kvm_vcpu *vcpu,
Don't abbreviate "counter", there's really no need and "ctr" versus "ctrl" is already confusing enough.
uint32_t action,
uint32_t bitmap)
+{
- struct kvm_pmu_event_filter *f;
- uint64_t r;
- f = create_pmu_event_filter(0, 0, action, 0, bitmap);
- r = test_with_filter(vcpu, f);
- free(f);
- return r;
+}
+static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap) +{
- return (action == KVM_PMU_EVENT_ALLOW && (bitmap & BIT_ULL(idx))) ||
(action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx)));
This helper shouldn't exist. It's largely a symptom of using an array.
+}
+static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
uint8_t fixed_ctr_idx,
uint8_t max_fixed_num)
+{
- uint8_t i;
- uint32_t bitmap;
- uint64_t count;
- bool expected;
- /*
* Check the fixed performance counter can count normally works when
* KVM userspace doesn't set any pmu filter.
*/
- TEST_ASSERT(max_fixed_num && test_fixed_ctr_without_filter(vcpu),
"Fixed counter does not exist or does not work as expected.");
- for (i = 0; i < ARRAY_SIZE(actions); i++) {
for (bitmap = 0; bitmap < BIT_ULL(max_fixed_num); bitmap++) {
You're comparing a 32-bit value against a 64-bit value.
expected = fixed_ctr_is_allowed(fixed_ctr_idx, actions[i], bitmap);
count = test_fixed_ctr_with_filter(vcpu, actions[i], bitmap);
TEST_ASSERT(expected == !!count,
"Fixed event filter does not work as expected.");
}
- }
static uint64_t test_with_fixed_counter_filter(struct kvm_vcpu *vcpu, uint32_t action, uint32_t bitmap) { ...
}
static void __test_fixed_counter_bitmap(...) { uint32_t bitmap;
TEST_ASSERT(nr_fixed_counters < sizeof(bitmap));
for (i = 0; i < BIT(nr_fixed_counters); i++) { count = test_with_fixed_counter_filter(vcpu, KVM_PMU_EVENT_ALLOW, bitmap); TEST_ASSERT(!!count == !!(bitmap & BIT(idx)));
count = test_with_fixed_counter_filter(vcpu, KVM_PMU_EVENT_DENY, bitmap); TEST_ASSERT(!!count == !(bitmap & BIT(idx)));
} }
+}
+static void test_fixed_counter_bitmap(void) +{
- struct kvm_vcpu *vcpu;
- uint8_t idx, max_fixed_num = get_kvm_supported_fixed_num();
- /*
* Check that pmu_event_filter works as expected when it's applied to
* fixed performance counters.
*/
- for (idx = 0; idx < max_fixed_num; idx++) {
vcpu = new_vcpu(intel_guest_run_fixed_counters);
vcpu_args_set(vcpu, 1, idx);
test_fixed_ctr_action_and_bitmap(vcpu, idx, max_fixed_num);
free_vcpu(vcpu);
- }
+}
int main(int argc, char *argv[]) { void (*guest_code)(void); @@ -860,6 +968,7 @@ int main(int argc, char *argv[]) kvm_vm_free(vm); test_pmu_config_disable(guest_code);
- test_fixed_counter_bitmap();
return 0; } -- 2.31.1
From: Jinrong Liang <cloudliang@tencent.com>
Add a test to ensure that setting both generic and fixed performance event filters does not affect the consistency of the fixed performance filter behavior in KVM. This test helps to ensure that the fixed performance filter works as expected even when generic performance event filters are also set.
Signed-off-by: Jinrong Liang cloudliang@tencent.com
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 0f54c53d7fff..9be4c6f8fb7e 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -889,6 +889,7 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 	uint32_t bitmap;
 	uint64_t count;
 	bool expected;
+	struct kvm_pmu_event_filter *f;
 
 	/*
 	 * Check that the fixed performance counter counts normally when
@@ -902,6 +903,19 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 			expected = fixed_ctr_is_allowed(fixed_ctr_idx, actions[i], bitmap);
 			count = test_fixed_ctr_with_filter(vcpu, actions[i], bitmap);
 
 			TEST_ASSERT(expected == !!count,
 				    "Fixed event filter does not work as expected.");
+
+			/*
+			 * Check that setting both events[] and fixed_counter_bitmap
+			 * does not affect the consistency of the fixed ctrs' behaviour.
+			 *
+			 * Note, the fixed_counter_bitmap rule has high priority.
+			 */
+			f = event_filter(actions[i]);
+			f->fixed_counter_bitmap = bitmap;
+			count = test_with_filter(vcpu, f);
+			TEST_ASSERT(expected == !!count,
+				    "Fixed event filter does not work as expected.");
 		}
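For context, a rough sketch of what the pre-existing event_filter() helper is assumed to look like (details may differ in the actual test); this is why pairing it with fixed_counter_bitmap exercises both the generic events[] list and the fixed counter bitmap in a single filter:

	/* Assumed shape of the existing helper; not part of this patch. */
	static struct kvm_pmu_event_filter *event_filter(uint32_t action)
	{
		struct kvm_pmu_event_filter *f;
		int i;

		/* Populate the filter with the test's default list of generic events. */
		f = alloc_pmu_event_filter(ARRAY_SIZE(event_list));
		f->action = action;
		for (i = 0; i < ARRAY_SIZE(event_list); i++)
			f->events[i] = event_list[i];

		return f;
	}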
On Thu, Apr 20, 2023, Jinrong Liang wrote:
From: Jinrong Liang cloudliang@tencent.com
From: Jinrong Liang cloudliang@tencent.com
Add a test to ensure that setting both generic and fixed performance event filters does not affect the consistency of the fixed performance filter behavior in KVM. This test helps to ensure that the fixed performance filter works as expected even when generic performance event filters are also set.
Signed-off-by: Jinrong Liang cloudliang@tencent.com
 .../selftests/kvm/x86_64/pmu_event_filter_test.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 0f54c53d7fff..9be4c6f8fb7e 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -889,6 +889,7 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 	uint32_t bitmap;
 	uint64_t count;
 	bool expected;
+	struct kvm_pmu_event_filter *f;
 
 	/*
 	 * Check that the fixed performance counter counts normally when
@@ -902,6 +903,19 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 			expected = fixed_ctr_is_allowed(fixed_ctr_idx, actions[i], bitmap);
 			count = test_fixed_ctr_with_filter(vcpu, actions[i], bitmap);
 			TEST_ASSERT(expected == !!count,
 				    "Fixed event filter does not work as expected.");
+
+			/*
+			 * Check that setting both events[] and fixed_counter_bitmap
+			 * does not affect the consistency of the fixed ctrs' behaviour.
+			 *
+			 * Note, the fixed_counter_bitmap rule has high priority.
"high" is ambiguous without a baseline. I believe what you want to say is "the fixed_counter_bitmap has higher priority than the events list".
+			 */
+			f = event_filter(actions[i]);
+			f->fixed_counter_bitmap = bitmap;
+			count = test_with_filter(vcpu, f);
+			TEST_ASSERT(expected == !!count,
+				    "Fixed event filter does not work as expected.");
 		}
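A possible rewording of the new comment along those lines (illustrative only, not the posted patch):

	/*
	 * Note, the fixed_counter_bitmap has higher priority than the
	 * events list.
	 */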
-- 2.31.1
From: Jinrong Liang cloudliang@tencent.com
From: Jinrong Liang cloudliang@tencent.com
Add test to verify the behavior of the pmu event filter when an incomplete kvm_pmu_event_filter structure is used. By running the test, we can ensure that the pmu event filter correctly handles incomplete structures and does not allow events to be counted when they should not be.
Signed-off-by: Jinrong Liang cloudliang@tencent.com
---
 .../kvm/x86_64/pmu_event_filter_test.c | 23 +++++++++++++++++++
 1 file changed, 23 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 9be4c6f8fb7e..a6b6e0d086ae 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -881,6 +881,24 @@ static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap)
 		(action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx)));
 }
 
+struct incompatible_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+};
+
+static uint64_t test_incompatible_filter(struct kvm_vcpu *vcpu, uint32_t action,
+					 uint32_t bitmap)
+{
+	struct incompatible_pmu_event_filter err_f;
+
+	err_f.action = action;
+	err_f.fixed_counter_bitmap = bitmap;
+	ioctl((vcpu->vm)->fd, KVM_SET_PMU_EVENT_FILTER, &err_f.action);
+
+	return run_vcpu_to_sync(vcpu);
+}
+
 static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 					     uint8_t fixed_ctr_idx,
 					     uint8_t max_fixed_num)
@@ -918,6 +936,11 @@ static void test_fixed_ctr_action_and_bitmap(struct kvm_vcpu *vcpu,
 			TEST_ASSERT(expected == !!count,
 				    "Fixed event filter does not work as expected.");
+
+			/* Test incompatible event filter works as expected. */
+			count = test_incompatible_filter(vcpu, actions[i], bitmap);
+			TEST_ASSERT(expected == !!count,
+				    "Incompatible filter does not work as expected.");
 		}
 	}
 }
On Thu, Apr 20, 2023, Jinrong Liang wrote:
From: Jinrong Liang cloudliang@tencent.com
From: Jinrong Liang cloudliang@tencent.com
Add test to verify the behavior of the pmu event filter when an incomplete kvm_pmu_event_filter structure is used. By running the test, we can ensure that the pmu event filter correctly handles incomplete structures and does not allow events to be counted when they should not be.
Signed-off-by: Jinrong Liang cloudliang@tencent.com
 .../kvm/x86_64/pmu_event_filter_test.c | 23 +++++++++++++++++++
 1 file changed, 23 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 9be4c6f8fb7e..a6b6e0d086ae 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -881,6 +881,24 @@ static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap)
 		(action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx)));
 }
 
+struct incompatible_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+};
+static uint64_t test_incompatible_filter(struct kvm_vcpu *vcpu, uint32_t action,
+					 uint32_t bitmap)
+{
+	struct incompatible_pmu_event_filter err_f;
+
+	err_f.action = action;
+	err_f.fixed_counter_bitmap = bitmap;
+	ioctl((vcpu->vm)->fd, KVM_SET_PMU_EVENT_FILTER, &err_f.action);
This is completely busted. It "passes" by luck, not because it's a valid test. The size of the argument is embedded in the IOCTL number itself, which means that unless glibc is being very nice and using a macro + typeof + sizeof to sanity check things, which I highly doubt is the case, this ioctl() is passing random stack data, a.k.a. garbage, to KVM.
In short, drop this patch.
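To make the failure mode concrete, here is a rough sketch of the well-defined alternative, reusing helpers already present in this test (create_pmu_event_filter() and the selftests' vm_ioctl() wrapper are assumed; this is an illustration, not a replacement patch). KVM copies a full sizeof(struct kvm_pmu_event_filter) from the pointer passed to the ioctl, so the buffer handed to it must really be that large:

	struct kvm_pmu_event_filter *f;

	/*
	 * Build a full-sized, initialized filter so KVM's copy of the entire
	 * struct reads defined data instead of stray stack bytes.
	 */
	f = create_pmu_event_filter(0, 0, action, 0, bitmap);
	vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
	free(f);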
Sean Christopherson seanjc@google.com wrote on Thu, May 25, 2023 at 07:50:
On Thu, Apr 20, 2023, Jinrong Liang wrote:
From: Jinrong Liang cloudliang@tencent.com
From: Jinrong Liang cloudliang@tencent.com
Add test to verify the behavior of the pmu event filter when an incomplete kvm_pmu_event_filter structure is used. By running the test, we can ensure that the pmu event filter correctly handles incomplete structures and does not allow events to be counted when they should not be.
Signed-off-by: Jinrong Liang cloudliang@tencent.com
 .../kvm/x86_64/pmu_event_filter_test.c | 23 +++++++++++++++++++
 1 file changed, 23 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 9be4c6f8fb7e..a6b6e0d086ae 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -881,6 +881,24 @@ static bool fixed_ctr_is_allowed(uint8_t idx, uint32_t action, uint32_t bitmap)
 		(action == KVM_PMU_EVENT_DENY && !(bitmap & BIT_ULL(idx)));
 }
+struct incompatible_pmu_event_filter {
+	__u32 action;
+	__u32 nevents;
+	__u32 fixed_counter_bitmap;
+};
+static uint64_t test_incompatible_filter(struct kvm_vcpu *vcpu, uint32_t action,
+					 uint32_t bitmap)
+{
+	struct incompatible_pmu_event_filter err_f;
+
+	err_f.action = action;
+	err_f.fixed_counter_bitmap = bitmap;
+	ioctl((vcpu->vm)->fd, KVM_SET_PMU_EVENT_FILTER, &err_f.action);
This is completely busted. It "passes" by luck, not because it's a valid test. The size of the argument is embedded in the IOCTL number itself, which means that unless glibc is being very nice and using a macro + typeof + sizeof to sanity check things, which I highly doubt is the case, this ioctl() is passing random stack data, a.k.a. garbage, to KVM.
In short, drop this patch.
Thank you for letting us know about the issues with the patch. I will drop the patch as suggested. Would you advise me to prepare version 3 to remove this patch?
On Thu, May 25, 2023, Jinrong Liang wrote:
Sean Christopherson seanjc@google.com wrote on Thu, May 25, 2023 at 07:50:
+static uint64_t test_incompatible_filter(struct kvm_vcpu *vcpu, uint32_t action,
+					 uint32_t bitmap)
+{
+	struct incompatible_pmu_event_filter err_f;
+
+	err_f.action = action;
+	err_f.fixed_counter_bitmap = bitmap;
+	ioctl((vcpu->vm)->fd, KVM_SET_PMU_EVENT_FILTER, &err_f.action);
This is completely busted. It "passes" by luck, not because it's a valid test. The size of the argument is embedded in the IOCTL number itself, which means that unless glibc is being very nice and using a macro + typeof + sizeof to sanity check things, which I highly doubt is the case, this ioctl() is passing random stack data, a.k.a. garbage, to KVM.
In short, drop this patch.
Thank you for letting us know about the issues with the patch. I will drop the patch as suggested. Would you advise me to prepare version 3 to remove this patch?
More comments on the other patches are incoming, please hold off on v3 until then.
Jinrong Liang ljr.kernel@gmail.com wrote on Thu, Apr 20, 2023 at 18:46:
From: Jinrong Liang cloudliang@tencent.com
From: Jinrong Liang cloudliang@tencent.com
Hi,
This patch set adds some tests to ensure consistent PMU performance event filter behavior. Specifically, the patches aim to improve KVM's PMU event filter by strengthening the test coverage, adding documentation, and making other small changes.
The first patch replaces int with uint32_t for nevents to ensure consistency and readability in the code. The second patch adds fixed_counter_bitmap to create_pmu_event_filter() to support the use of the same creator to control the use of guest fixed counters. The third patch adds test cases for unsupported input values in PMU filter, including unsupported "action" values, unsupported "flags" values, and unsupported "nevents" values. Also, it tests setting non-existent fixed counters in the fixed bitmap doesn't fail.
The fourth patch updates the documentation for KVM_SET_PMU_EVENT_FILTER ioctl to include a detailed description of how fixed performance events are handled in the pmu filter. The fifth patch adds tests to cover that pmu_event_filter works as expected when applied to fixed performance counters, even if there is no fixed counter exists. The sixth patch adds a test to ensure that setting both generic and fixed performance event filters does not affect the consistency of the fixed performance filter behavior in KVM. The seventh patch adds a test to verify the behavior of the pmu event filter when an incomplete kvm_pmu_event_filter structure is used.
These changes help to ensure that KVM's PMU event filter functions as expected in all supported use cases. These patches have been tested and verified to function properly.
Thanks for your review and feedback.
Sincerely, Jinrong Liang
Previous: https://lore.kernel.org/kvm/20230414110056.19665-1-cloudliang@tencent.com
v2:
- Wrap the code from the documentation in a block of code; (Bagas Sanjaya)
Jinrong Liang (7): KVM: selftests: Replace int with uint32_t for nevents KVM: selftests: Apply create_pmu_event_filter() to fixed ctrs KVM: selftests: Test unavailable event filters are rejected KVM: x86/pmu: Add documentation for fixed ctr on PMU filter KVM: selftests: Check if pmu_event_filter meets expectations on fixed ctrs KVM: selftests: Check gp event filters without affecting fixed event filters KVM: selftests: Test pmu event filter with incompatible kvm_pmu_event_filter
Documentation/virt/kvm/api.rst | 21 ++ .../kvm/x86_64/pmu_event_filter_test.c | 239 ++++++++++++++++-- 2 files changed, 243 insertions(+), 17 deletions(-)
base-commit: a25497a280bbd7bbcc08c87ddb2b3909affc8402
2.31.1
Polite ping.
Should I post version 3 to fix the problem of two "From: Jinrong Liang cloudliang@tencent.com"?
Thanks
On Mon, May 22, 2023, Jinrong Liang wrote:
Jinrong Liang ljr.kernel@gmail.com wrote on Thu, Apr 20, 2023 at 18:46:
Jinrong Liang (7): KVM: selftests: Replace int with uint32_t for nevents KVM: selftests: Apply create_pmu_event_filter() to fixed ctrs KVM: selftests: Test unavailable event filters are rejected KVM: x86/pmu: Add documentation for fixed ctr on PMU filter KVM: selftests: Check if pmu_event_filter meets expectations on fixed ctrs KVM: selftests: Check gp event filters without affecting fixed event filters KVM: selftests: Test pmu event filter with incompatible kvm_pmu_event_filter
Documentation/virt/kvm/api.rst | 21 ++ .../kvm/x86_64/pmu_event_filter_test.c | 239 ++++++++++++++++-- 2 files changed, 243 insertions(+), 17 deletions(-)
base-commit: a25497a280bbd7bbcc08c87ddb2b3909affc8402
2.31.1
Polite ping.
Sorry for the delay, I'm finally getting into review mode for the 6.5 cycle.
Should I post version 3 to fix the problem of two "From: Jinrong Liang cloudliang@tencent.com"?
No need, that's trivial to fixup when applying (if it even requires fixup).