The callback pmu->read() can be called with interrupts enabled. Currently, on ARM, this can cause the following callchain:
armpmu_read() -> armpmu_event_update() -> armv7pmu_read_counter()
The last function might modify the counter selector register and then read the target counter, without taking any lock. With interrupts enabled, a PMU interrupt could occur and modify the selector register as well, between the selection and read of the interrupted context.
Save and restore the value of the selector register in the PMU interrupt handler, ensuring the interrupted context is left with the correct PMU registers selected.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: stable@vger.kernel.org
---
 arch/arm/kernel/perf_event_v7.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/arch/arm/kernel/perf_event_v7.c b/arch/arm/kernel/perf_event_v7.c
index a4fb0f8..b7be2a3 100644
--- a/arch/arm/kernel/perf_event_v7.c
+++ b/arch/arm/kernel/perf_event_v7.c
@@ -736,10 +736,22 @@ static inline int armv7_pmnc_counter_has_overflowed(u32 pmnc, int idx)
 	return pmnc & BIT(ARMV7_IDX_TO_COUNTER(idx));
 }
-static inline void armv7_pmnc_select_counter(int idx)
+static inline u32 armv7_pmsel_read(void)
+{
+	u32 pmsel;
+
+	asm volatile("mrc p15, 0, %0, c9, c12, 5" : "=&r" (pmsel));
+	return pmsel;
+}
+
+static inline void armv7_pmsel_write(u32 counter)
 {
-	u32 counter = ARMV7_IDX_TO_COUNTER(idx);
 	asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (counter));
+}
+
+static inline void armv7_pmnc_select_counter(int idx)
+{
+	armv7_pmsel_write(ARMV7_IDX_TO_COUNTER(idx));
 	isb();
 }
@@ -952,8 +964,15 @@ static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	struct perf_sample_data data;
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
+	u32 pmsel;
 	int idx;
+
+	/*
+	 * Save pmsel in case the interrupted context was using it.
+	 */
+	pmsel = armv7_pmsel_read();
+
 	/*
 	 * Get and reset the IRQ flags
 	 */
@@ -1004,6 +1023,8 @@ static irqreturn_t armv7pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	 */
 	irq_work_run();
+	armv7_pmsel_write(pmsel);
+
 	return IRQ_HANDLED;
 }
-- 1.9.1
[typo in subject: resore -> restore]
On Wed, Jul 17, 2019 at 09:17:06AM +0100, Julien Thierry wrote:
> The callback pmu->read() can be called with interrupts enabled. Currently, on ARM, this can cause the following callchain:
>
> armpmu_read() -> armpmu_event_update() -> armv7pmu_read_counter()
Why can't we just disable irqs in armv7pmu_read_counter() ?
Will
On 01/08/2019 14:01, Will Deacon wrote:
> [typo in subject: resore -> restore]
>
> On Wed, Jul 17, 2019 at 09:17:06AM +0100, Julien Thierry wrote:
>> The callback pmu->read() can be called with interrupts enabled. Currently, on ARM, this can cause the following callchain:
>>
>> armpmu_read() -> armpmu_event_update() -> armv7pmu_read_counter()
>
> Why can't we just disable irqs in armv7pmu_read_counter() ?
We could. But since we get rid of the lock afterwards (otherwise this would be the only reason left to keep the lock), we might as well find another solution.
Thanks,