From: Colin Ian King <colin.i.king@gmail.com>
commit 08245672cdc6505550d1a5020603b0a8d4a6dcc7 upstream.
The left shift of the 32-bit int constant 1 is evaluated using 32-bit arithmetic and then passed as a 64-bit function argument. In the case where i is 32 or more, this can overflow. Avoid this by doing the shift with the BIT_ULL() macro instead.
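For illustration only (not part of the patch), a minimal user-space sketch of the difference, using a stand-in for the kernel's BIT_ULL() and a hypothetical 64-bit mask like the one in the driver:

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for the kernel's BIT_ULL(): force 64-bit arithmetic. */
	#define BIT_ULL(nr)	(1ULL << (nr))

	int main(void)
	{
		uint64_t mask = 0;
		int i = 32;

		/*
		 * Undefined behaviour: 1 is a 32-bit int, so shifting it by
		 * 32 or more overflows before the result is widened to the
		 * 64-bit type of mask.
		 */
		/* mask |= 1 << i; */

		/* Safe: the shift is performed in 64-bit arithmetic. */
		mask |= BIT_ULL(i);

		printf("mask = 0x%llx\n", (unsigned long long)mask);
		return 0;
	}

In the loop below the index only reaches 32 when more than 32 counters are present, but using BIT_ULL() keeps the mask construction correct regardless of x86_pmu.num_counters.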
Fixes: 471af006a747 ("perf/x86/amd: Constrain Large Increment per Cycle events")
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Ian Rogers <irogers@google.com>
Acked-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/20221202135149.1797974-1-colin.i.king@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/events/amd/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -969,7 +969,7 @@ static int __init amd_core_pmu_init(void
 	 * numbered counter following it.
 	 */
 	for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
-		even_ctr_mask |= 1 << i;
+		even_ctr_mask |= BIT_ULL(i);
 
 	pair_constraint = (struct event_constraint)
 		__EVENT_CONSTRAINT(0, even_ctr_mask, 0,