5.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Zheng Yejian <zhengyejian@huaweicloud.com>
[ Upstream commit 49aa8a1f4d6800721c7971ed383078257f12e8f9 ]
In __tracing_open(), when a max latency tracer has run on a CPU, the start time of that CPU's buffer is updated, and event entries with timestamps earlier than the buffer's start time are then skipped (see tracing_iter_reset()).
A softlockup can occur if the kernel is non-preemptible and too many entries are skipped in the loop that resets every CPU buffer, so add a cond_resched() to avoid it.
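For context, after this change the skip loop in tracing_iter_reset() has roughly the following shape (a simplified sketch of kernel/trace/trace.c with declarations and iterator setup omitted, not the verbatim 5.4 source):

	/*
	 * Entries with timestamps before the buffer's time_start are
	 * stale leftovers from before a max latency tracer reset the
	 * window; count and skip them.
	 */
	while ((event = ring_buffer_iter_peek(buf_iter, &ts))) {
		if (ts >= iter->trace_buffer->time_start)
			break;		/* first entry inside the window */
		entries++;
		ring_buffer_iter_advance(buf_iter);
		/* This could be a big loop */
		cond_resched();
	}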
Cc: stable@vger.kernel.org
Fixes: 2f26ebd549b9a ("tracing: use timestamp to determine start of latency traces")
Link: https://lore.kernel.org/20240827124654.3817443-1-zhengyejian@huaweicloud.com
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Zheng Yejian <zhengyejian@huaweicloud.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/trace/trace.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8bf28d482abe..67466563d86f 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3487,6 +3487,8 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu)
 			break;
 		entries++;
 		ring_buffer_iter_advance(buf_iter);
+		/* This could be a big loop */
+		cond_resched();
 	}
 
 	per_cpu_ptr(iter->trace_buffer->data, cpu)->skipped_entries = entries;
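Note that tracing_iter_reset() is reached from __tracing_open() in process context, so yielding via cond_resched() inside the loop is safe; the loop only becomes long when a latency tracer has moved time_start far past many buffered entries.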