From: Masami Hiramatsu <mhiramat@kernel.org>
commit e03b4a084ea6b0a18b0e874baec439e69090c168 upstream.
The in_nmi() check in pre_handler_kretprobe() is meant to avoid recursion, and blindly assumes that anything NMI is recursive.
However, since commit:
9b38cc704e84 ("kretprobe: Prevent triggering kretprobe from within kprobe_flush_task")
there is a better way to detect and avoid actual recursion.
By setting a dummy kprobe, any actual exceptions will terminate early (by trying to handle the dummy kprobe), and recursion will not happen.
Employ this to avoid the kretprobe_table_lock() recursion, replacing the over-eager in_nmi() check.
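For context, the dummy-kprobe guard used here (kprobe_busy_begin()/kprobe_busy_end(), added by 9b38cc704e84) works roughly as sketched below; this is a simplified paraphrase for illustration, not the exact upstream code. While the guard is held, current_kprobe is non-NULL on this CPU, so any kprobe that fires (e.g. from an NMI) is treated as a reentrant hit and rejected instead of recursing into the kretprobe table locks.

  /* Sketch only: dummy kprobe used purely as a per-CPU "busy" marker. */
  struct kprobe kprobe_busy = {
          .addr = (void *)get_kprobe,
  };

  void kprobe_busy_begin(void)
  {
          struct kprobe_ctlblk *kcb;

          preempt_disable();
          /* Pretend a kprobe is already being handled on this CPU... */
          __this_cpu_write(current_kprobe, &kprobe_busy);
          kcb = get_kprobe_ctlblk();
          kcb->kprobe_status = KPROBE_HIT_ACTIVE;
  }

  void kprobe_busy_end(void)
  {
          /* ...so nested kprobe/kretprobe hits bail out until we clear it. */
          __this_cpu_write(current_kprobe, NULL);
          preempt_enable();
  }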
Cc: stable@vger.kernel.org # 5.9.x
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/159870615628.1229682.6087311596892125907.stgit@dev...
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/kprobes.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1359,7 +1359,8 @@ static void cleanup_rp_inst(struct kretp
 	struct hlist_node *next;
 	struct hlist_head *head;
 
-	/* No race here */
+	/* To avoid recursive kretprobe by NMI, set kprobe busy here */
+	kprobe_busy_begin();
 	for (hash = 0; hash < KPROBE_TABLE_SIZE; hash++) {
 		kretprobe_table_lock(hash, &flags);
 		head = &kretprobe_inst_table[hash];
@@ -1369,6 +1370,8 @@ static void cleanup_rp_inst(struct kretp
 		}
 		kretprobe_table_unlock(hash, &flags);
 	}
+	kprobe_busy_end();
+
 	free_rp_inst(rp);
 }
 NOKPROBE_SYMBOL(cleanup_rp_inst);
@@ -1937,17 +1940,6 @@ static int pre_handler_kretprobe(struct
 	unsigned long hash, flags = 0;
 	struct kretprobe_instance *ri;
 
-	/*
-	 * To avoid deadlocks, prohibit return probing in NMI contexts,
-	 * just skip the probe and increase the (inexact) 'nmissed'
-	 * statistical counter, so that the user is informed that
-	 * something happened:
-	 */
-	if (unlikely(in_nmi())) {
-		rp->nmissed++;
-		return 0;
-	}
-
 	/* TODO: consider to only swap the RA after the last pre_handler fired */
 	hash = hash_ptr(current, KPROBE_HASH_BITS);
 	raw_spin_lock_irqsave(&rp->lock, flags);