Oliver Upton <oupton@kernel.org> writes:
Re-reading this patch...
On Tue, Dec 09, 2025 at 08:51:15PM +0000, Colton Lewis wrote:
The KVM API for event filtering says that counters do not count when blocked by the event filter. To enforce that, the event filter must be rechecked on every load, because it may have changed since the last time the guest wrote a value.
Just directly state that this is guarding against userspace programming an unsupported event ID.
Sure
+static void kvm_pmu_apply_event_filter(struct kvm_vcpu *vcpu)
+{
+	struct arm_pmu *pmu = vcpu->kvm->arch.arm_pmu;
+	u64 evtyper_set = ARMV8_PMU_EXCLUDE_EL0 |
+			  ARMV8_PMU_EXCLUDE_EL1;
+	u64 evtyper_clr = ARMV8_PMU_INCLUDE_EL2;
+	u8 i;
+	u64 val;
+	u64 evsel;
+
+	if (!pmu)
+		return;
+
+	for (i = 0; i < pmu->hpmn_max; i++) {
Iterate the bitmask of counters and you'll handle the cycle counter 'for free'.
Will do.
<snip>
+		val = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i);
+		evsel = val & kvm_pmu_event_mask(vcpu->kvm);
+
+		if (vcpu->kvm->arch.pmu_filter &&
+		    !test_bit(evsel, vcpu->kvm->arch.pmu_filter))
+			val |= evtyper_set;
+
+		val &= ~evtyper_clr;
+		write_pmevtypern(i, val);
</snip>
This all needs to be shared with writethrough_pmevtyper() instead of open-coding the same thing.
Will do.
Thanks, Oliver