Running a program with bpf-to-bpf function calls results in a data access exception (0x300) with the call trace below:
  [c000000000113f28] bpf_int_jit_compile+0x238/0x750 (unreliable)
  [c00000000037d2f8] bpf_check+0x2008/0x2710
  [c000000000360050] bpf_prog_load+0xb00/0x13a0
  [c000000000361d94] __sys_bpf+0x6f4/0x27c0
  [c000000000363f0c] sys_bpf+0x2c/0x40
  [c000000000032434] system_call_exception+0x164/0x330
  [c00000000000c1e8] system_call_vectored_common+0xe8/0x278
as bpf_int_jit_compile() tries to write to the write-protected JIT code location during the extra pass.
Fix it by holding off write-protection of the JIT code until the extra pass, where the branch target address fixups happen.
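For reference, a simplified sketch of the resulting flow at the tail of bpf_int_jit_compile() (condensed from the diff below, not the verbatim upstream source):

	/*
	 * Sketch only: write-protection is deferred to the pass in which
	 * branch target fixups are complete.
	 */
	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));

	if (!fp->is_func || extra_pass) {
		/* No further fixups will touch the image: lock it read-only. */
		bpf_jit_binary_lock_ro(bpf_hdr);
		bpf_prog_fill_jited_linfo(fp, addrs);
		/* ... cleanup of addrs and jit_data ... */
	}
	/* else: subprog awaiting the extra pass; image stays writable. */

Placing the lock inside the (!fp->is_func || extra_pass) guard covers both cases: programs without subprogs (single pass) and programs with subprogs (write-protected only on the extra pass).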
Cc: stable@vger.kernel.org
Fixes: 62e3d4210ac9 ("powerpc/bpf: Write protect JIT code")
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index fcbf7a917c56..90ce75f0f1e2 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -241,8 +241,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	fp->jited_len = alloclen;
 
 	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
-	bpf_jit_binary_lock_ro(bpf_hdr);
 	if (!fp->is_func || extra_pass) {
+		bpf_jit_binary_lock_ro(bpf_hdr);
 		bpf_prog_fill_jited_linfo(fp, addrs);
 out_addrs:
 		kfree(addrs);