On Mon, 08 Feb 2021 11:12:14 +0100 gregkh@linuxfoundation.org wrote:
> The patch below does not apply to the 4.14-stable tree. If someone wants
> it applied there, or to any other stable or longterm tree, then please
> email the backport, including the original git commit id to
> <stable@vger.kernel.org>.
>
> thanks,
>
> greg k-h
Here's the 4.14 port (should also work for 4.9 and 4.4):
------------------ original commit in Linus's tree ------------------
From 7e0a9220467dbcfdc5bc62825724f3e52e50ab31 Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
Date: Fri, 29 Jan 2021 10:13:53 -0500
Subject: [PATCH] fgraph: Initialize tracing_graph_pause at task creation
On some archs, the idle task can call into cpu_suspend(). The cpu_suspend() call will disable or pause function graph tracing, as some paths in bringing down the CPU can have issues if their return addresses are modified. The task_struct structure has a "tracing_graph_pause" atomic counter; while that counter is non-zero, the function graph tracer will not modify the task's return addresses.
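For context, pause_graph_tracing() and unpause_graph_tracing() are nothing more than an atomic increment and decrement of that per-task counter, and the function graph entry path bails out while the counter is non-zero. Roughly (paraphrased and simplified from include/linux/ftrace.h and ftrace_push_return_trace(); not a verbatim quote of any one tree):

static inline void pause_graph_tracing(void)
{
	atomic_inc(&current->tracing_graph_pause);
}

static inline void unpause_graph_tracing(void)
{
	atomic_dec(&current->tracing_graph_pause);
}

/* in ftrace_push_return_trace(): refuse to patch the return
 * address of a task that has tracing paused */
if (unlikely(atomic_read(&current->tracing_graph_pause)))
	return -EBUSY;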
The problem is that the tracing_graph_pause counter is only initialized when the function graph tracer is enabled. On these architectures, that initialization can corrupt the counter of an idle task that is suspended at the time:
   CPU 1                                CPU 2
   -----                                -----
   do_idle()
     cpu_suspend()
       pause_graph_tracing()
         task_struct->tracing_graph_pause++ (0 -> 1)

                                        start_graph_tracing()
                                          for_each_online_cpu(cpu) {
                                            ftrace_graph_init_idle_task(cpu)
                                              task_struct->tracing_graph_pause = 0 (1 -> 0)

       unpause_graph_tracing()
         task_struct->tracing_graph_pause-- (0 -> -1)
The counter should have gone from 1 back to 0, enabling function graph tracing again. Instead it ends up at -1, which keeps tracing disabled for that idle task.
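If it helps, the lost update reduces to this hypothetical user-space C snippet (the local variable stands in for the per-task counter; the names match the kernel ones only for readability):

#include <stdio.h>

int main(void)
{
	int tracing_graph_pause = 0;

	tracing_graph_pause++;		/* CPU 1: pause_graph_tracing(),   0 -> 1  */
	tracing_graph_pause = 0;	/* CPU 2: ftrace_graph_init_idle_task()    */
	tracing_graph_pause--;		/* CPU 1: unpause_graph_tracing(), 0 -> -1 */

	/* prints -1: the counter never gets back to zero, so the
	 * tracer treats this task as permanently paused */
	printf("tracing_graph_pause = %d\n", tracing_graph_pause);
	return 0;
}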
There is no reason the tracing_graph_pause field of the task_struct cannot be initialized at boot up.
Cc: stable@vger.kernel.org
Fixes: 380c4b1411ccd ("tracing/function-graph-tracer: append the tracing_graph_flag")
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=211339
Reported-by: pierre.gondois@arm.com
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Index: linux-test.git/kernel/trace/ftrace.c
===================================================================
--- linux-test.git.orig/kernel/trace/ftrace.c
+++ linux-test.git/kernel/trace/ftrace.c
@@ -6666,7 +6666,6 @@ static int alloc_retstack_tasklist(struc
 	}

 	if (t->ret_stack == NULL) {
-		atomic_set(&t->tracing_graph_pause, 0);
 		atomic_set(&t->trace_overrun, 0);
 		t->curr_ret_stack = -1;
 		/* Make sure the tasks see the -1 first: */
@@ -6878,7 +6877,6 @@ static DEFINE_PER_CPU(struct ftrace_ret_
 static void
 graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
 {
-	atomic_set(&t->tracing_graph_pause, 0);
 	atomic_set(&t->trace_overrun, 0);
 	t->ftrace_timestamp = 0;
 	/* make curr_ret_stack visible before we add the ret_stack */
Index: linux-test.git/include/linux/ftrace.h
===================================================================
--- linux-test.git.orig/include/linux/ftrace.h
+++ linux-test.git/include/linux/ftrace.h
@@ -792,7 +792,9 @@ typedef int (*trace_func_graph_ent_t)(st
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER

 /* for init task */
-#define INIT_FTRACE_GRAPH		.ret_stack = NULL,
+#define INIT_FTRACE_GRAPH				\
+	.ret_stack = NULL,				\
+	.tracing_graph_pause	= ATOMIC_INIT(0),

 /*
  * Stack of return addresses for functions
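In case it is not obvious why the ftrace.h hunk is enough: INIT_FTRACE_GRAPH is expanded inside the static initializer of the boot idle task, so the counter is now set exactly once, at build time, and every forked task inherits the zeroed value when the parent's task_struct is copied. A rough sketch of how the macro is consumed in 4.14 (simplified, most fields omitted):

/* include/linux/init_task.h (simplified) */
#define INIT_TASK(tsk)						\
{								\
	.state		= 0,					\
	/* ... many other fields ... */				\
	INIT_FTRACE_GRAPH					\
}

/* init/init_task.c */
struct task_struct init_task = INIT_TASK(init_task);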