The patch below does not apply to the 6.18-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.18.y
git checkout FETCH_HEAD
git cherry-pick -x 73721d8676771c6c7b06d4e636cc053fc76afefd
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2026010547-passenger-getting-5dde@gregkh' --subject-prefix 'PATCH 6.18.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 73721d8676771c6c7b06d4e636cc053fc76afefd Mon Sep 17 00:00:00 2001
From: Chenghao Duan <duanchenghao@kylinos.cn>
Date: Wed, 31 Dec 2025 15:19:21 +0800
Subject: [PATCH] LoongArch: BPF: Enhance the bpf_arch_text_poke() function
Enhance the bpf_arch_text_poke() function to enable accurate location of BPF program entry points.
When modifying the entry point of a BPF program, skip the "move t0, ra" instruction at the entry so that the jump address is computed and copied correctly.
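To illustrate, the program-entry layout this relies on looks roughly as follows (a sketch that mirrors the comment in the patch below; the exact prologue is emitted by the LoongArch JIT, and the byte offsets assume the fixed 4-byte LoongArch instruction size):

	/* Layout at a JITed BPF prog entry (sketch):
	 *
	 *   <entry + 0>:  move  t0, ra    - must not be overwritten
	 *   <entry + 4>:  nop             - patch site, LOONGARCH_INSN_SIZE
	 *                                    past the entry
	 *   ...
	 */

So a poke aimed at the program entry has to start writing the long jump one instruction in, leaving "move t0, ra" intact.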
Cc: stable@vger.kernel.org
Fixes: 677e6123e3d2 ("LoongArch: BPF: Disable trampoline for kernel module function trace")
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index 9f6e93343b6e..d1d5a65308b9 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -1309,15 +1309,30 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type old_t,
 {
 	int ret;
 	bool is_call;
+	unsigned long size = 0;
+	unsigned long offset = 0;
+	void *image = NULL;
+	char namebuf[KSYM_NAME_LEN];
 	u32 old_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
 	u32 new_insns[LOONGARCH_LONG_JUMP_NINSNS] = {[0 ... 4] = INSN_NOP};
 
 	/* Only poking bpf text is supported. Since kernel function entry
 	 * is set up by ftrace, we rely on ftrace to poke kernel functions.
 	 */
-	if (!is_bpf_text_address((unsigned long)ip))
+	if (!__bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf))
		return -ENOTSUPP;
 
+	image = ip - offset;
+
+	/* zero offset means we're poking bpf prog entry */
+	if (offset == 0) {
+		/* skip to the nop instruction in bpf prog entry:
+		 * move t0, ra
+		 * nop
+		 */
+		ip = image + LOONGARCH_INSN_SIZE;
+	}
+
 	is_call = old_t == BPF_MOD_CALL;
 	ret = emit_jump_or_nops(old_addr, ip, old_insns, is_call);
 	if (ret)
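For context, a sketch of how a caller exercises this path: bpf_arch_text_poke(), BPF_MOD_CALL and the NULL-means-nop convention are the real kernel interface, while prog, tr_image and the surrounding attach logic here are illustrative placeholders only.

	/* Attach: replace the NOPs at the prog entry with a call to a
	 * trampoline image. With this fix the poke is redirected one
	 * instruction past the entry, preserving "move t0, ra".
	 */
	ret = bpf_arch_text_poke(prog->bpf_func, BPF_MOD_CALL,
				 NULL /* old: nops */, tr_image /* new target */);

	/* Detach: write the NOPs back. */
	if (!ret)
		ret = bpf_arch_text_poke(prog->bpf_func, BPF_MOD_CALL,
					 tr_image /* old target */, NULL /* new: nops */);

Before this change, is_bpf_text_address() could only say whether ip was inside BPF text; __bpf_address_lookup() additionally reports ip's offset within the JITed image, which is what lets the code detect (offset == 0) and skip the entry instruction.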