Running a program with bpf-to-bpf function calls results in a data access exception (0x300) with the below call trace:
  [c000000000113f28] bpf_int_jit_compile+0x238/0x750 (unreliable)
  [c00000000037d2f8] bpf_check+0x2008/0x2710
  [c000000000360050] bpf_prog_load+0xb00/0x13a0
  [c000000000361d94] __sys_bpf+0x6f4/0x27c0
  [c000000000363f0c] sys_bpf+0x2c/0x40
  [c000000000032434] system_call_exception+0x164/0x330
  [c00000000000c1e8] system_call_vectored_common+0xe8/0x278
as bpf_int_jit_compile() tries writing to a write-protected JIT code location during the extra pass.
Fix it by holding off write protection of the JIT code until the extra pass, where branch target address fixup happens.
Cc: stable@vger.kernel.org
Fixes: 62e3d4210ac9 ("powerpc/bpf: Write protect JIT code")
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index fcbf7a917c56..90ce75f0f1e2 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -241,8 +241,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	fp->jited_len = alloclen;
 
 	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
-	bpf_jit_binary_lock_ro(bpf_hdr);
 	if (!fp->is_func || extra_pass) {
+		bpf_jit_binary_lock_ro(bpf_hdr);
 		bpf_prog_fill_jited_linfo(fp, addrs);
 out_addrs:
 		kfree(addrs);
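For readers who do not have the file in front of them, here is a simplified sketch of the tail of the powerpc bpf_int_jit_compile() with this change applied (paraphrased from memory rather than copied from the tree, so treat the details as approximate). The point is the ordering: for programs with bpf-to-bpf calls (fp->is_func set), the first JIT pass stashes its state and returns, the extra pass later patches the real call targets into the image, and only then is the image write protected.

	fp->bpf_func = (void *)image;
	fp->jited = 1;
	fp->jited_len = alloclen;

	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
	if (!fp->is_func || extra_pass) {
		/* Image is final: either a single-function program, or the
		 * extra pass has already fixed up the bpf-to-bpf call
		 * targets, so it is safe to write protect it now.
		 */
		bpf_jit_binary_lock_ro(bpf_hdr);
		bpf_prog_fill_jited_linfo(fp, addrs);
out_addrs:
		kfree(addrs);
	} else {
		/* Program has subprograms: keep the JIT state around so the
		 * extra pass can patch the call targets in the image, which
		 * is why the image has to stay writable until then.
		 */
		jit_data->addrs = addrs;
		jit_data->image = image;
		jit_data->header = bpf_hdr;
	}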
Hari Bathini wrote:
> Running a program with bpf-to-bpf function calls results in a data access
> exception (0x300) with the below call trace:
>
> [...]
Thanks for the fix!
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
On 10/25/21 8:15 AM, Naveen N. Rao wrote:
> Hari Bathini wrote:
>> [...]
>
> Thanks for the fix!
>
> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
LGTM, I presume this fix will be routed via Michael.
BPF selftests have plenty of BPF-to-BPF calls in there, too bad this was caught so late. :/
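For anyone reading along: a bpf-to-bpf call just means one BPF function calling another BPF function (a subprogram) rather than a kernel helper, so the JIT has to emit a real function call and fix up its target during the extra pass. A minimal, hypothetical example of such a program (illustrative only, not taken from the selftests) looks roughly like this:

  /* Hypothetical example, not from the selftests: the static noinline
   * function is compiled as a separate BPF subprogram, so loading this
   * program exercises the bpf-to-bpf call / extra-pass JIT path.
   */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  static __noinline int add_one(int x)
  {
          return x + 1;
  }

  SEC("socket")
  int prog(struct __sk_buff *skb)
  {
          /* Call into the subprogram; the JIT emits a real call here. */
          return add_one(skb->len);
  }

  char LICENSE[] SEC("license") = "GPL";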
Daniel Borkmann <daniel@iogearbox.net> writes:
> On 10/25/21 8:15 AM, Naveen N. Rao wrote:
>> [...]
>>
>> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>
> LGTM, I presume this fix will be routed via Michael.
Thanks for reviewing, I've picked it up.
> BPF selftests have plenty of BPF-to-BPF calls in there, too bad this was
> caught so late. :/
Yeah :/
STRICT_KERNEL_RWX is not on by default in all our defconfigs, so that's probably why no one caught it.
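For anyone wondering what the locking actually does: it boils down to marking the JIT image pages read-only and executable, and a later write to those pages is what trips the 0x300 data access exception above. A rough sketch of the generic helper, written from memory, so double check it against include/linux/filter.h for this kernel version:

  /* Rough sketch from memory of the generic helper, not verified against
   * the exact source: once this runs, any further write to the JIT image
   * (such as the extra-pass branch target fixup) faults.
   */
  static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
  {
          set_vm_flush_reset_perms(hdr);
          set_memory_ro((unsigned long)hdr, hdr->pages);
          set_memory_x((unsigned long)hdr, hdr->pages);
  }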
I used to run the BPF selftests but they stopped building for me a while back, I'll see if I can get them going again.
cheers
Michael Ellerman wrote:
> Daniel Borkmann <daniel@iogearbox.net> writes:
>> [...]
>>
>> BPF selftests have plenty of BPF-to-BPF calls in there, too bad this was
>> caught so late. :/
>
> Yeah :/
>
> STRICT_KERNEL_RWX is not on by default in all our defconfigs, so that's
> probably why no one caught it.
Yeah, sorry - we should have caught this sooner.
> I used to run the BPF selftests but they stopped building for me a while
> back, I'll see if I can get them going again.
Ravi had started looking into getting the selftests working well before he left. I will take a look at this.
Thanks,
Naveen
"Naveen N. Rao" naveen.n.rao@linux.ibm.com writes:
> Michael Ellerman wrote:
>> [...]
>>
>> I used to run the BPF selftests but they stopped building for me a while
>> back, I'll see if I can get them going again.
>
> Ravi had started looking into getting the selftests working well before he
> left. I will take a look at this.
Thanks.
I got them building with something like:
  - turning on DEBUG_INFO and DEBUG_INFO_BTF and rebuilding vmlinux
  - grabbing clang 13 from:
    https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.0/clang+...
  - PATH=$HOME/clang+llvm-13.0.0-powerpc64le-linux-ubuntu-18.04/bin/:$PATH
  - apt install:
    - libelf-dev
    - dwarves
    - python-docutils
    - libcap-dev
The DEBUG_INFO requirement is a bit of a pain for me. I generally don't build with that enabled, because the resulting kernels are stupidly large. I'm not sure if that's a hard requirement, or if the vmlinux has to match the running kernel exactly?
There is logic in tools/testing/selftests/bpf/Makefile to use VMLINUX_H instead of extracting the BTF from the vmlinux (line 247), but AFAICS that's unreachable since 1a3449c19407 ("selftests/bpf: Clarify build error if no vmlinux"), which makes it a hard error to not have a VMLINUX_BTF.
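Side note for anyone not familiar with that part of the build: the vmlinux BTF is used to generate a vmlinux.h header carrying kernel type definitions, which the selftest BPF programs include (hence the need for either a BTF-carrying vmlinux or a pre-generated header via VMLINUX_H). A hypothetical, minimal example of such a program, not taken from the selftests:

  /* Hypothetical example: struct task_struct comes from the generated
   * vmlinux.h, which is why the build wants BTF (or a pre-generated
   * vmlinux.h passed in via VMLINUX_H).
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char LICENSE[] SEC("license") = "GPL";

  SEC("tp_btf/sched_switch")
  int BPF_PROG(on_switch, bool preempt, struct task_struct *prev,
               struct task_struct *next)
  {
          bpf_printk("switching to pid %d", next->pid);
          return 0;
  }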
cheers
On Tue, Nov 2, 2021 at 6:48 AM Michael Ellerman <mpe@ellerman.id.au> wrote:
"Naveen N. Rao" naveen.n.rao@linux.ibm.com writes:
Michael Ellerman wrote:
Daniel Borkmann daniel@iogearbox.net writes:
On 10/25/21 8:15 AM, Naveen N. Rao wrote:
Hari Bathini wrote:
Running program with bpf-to-bpf function calls results in data access exception (0x300) with the below call trace:
[c000000000113f28] bpf_int_jit_compile+0x238/0x750 (unreliable) [c00000000037d2f8] bpf_check+0x2008/0x2710 [c000000000360050] bpf_prog_load+0xb00/0x13a0 [c000000000361d94] __sys_bpf+0x6f4/0x27c0 [c000000000363f0c] sys_bpf+0x2c/0x40 [c000000000032434] system_call_exception+0x164/0x330 [c00000000000c1e8] system_call_vectored_common+0xe8/0x278
as bpf_int_jit_compile() tries writing to write protected JIT code location during the extra pass.
Fix it by holding off write protection of JIT code until the extra pass, where branch target addresses fixup happens.
Cc: stable@vger.kernel.org Fixes: 62e3d4210ac9 ("powerpc/bpf: Write protect JIT code") Signed-off-by: Hari Bathini hbathini@linux.ibm.com
arch/powerpc/net/bpf_jit_comp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
Thanks for the fix!
Reviewed-by: Naveen N. Rao naveen.n.rao@linux.vnet.ibm.com
LGTM, I presume this fix will be routed via Michael.
Thanks for reviewing, I've picked it up.
BPF selftests have plenty of BPF-to-BPF calls in there, too bad this was caught so late. :/
Yeah :/
STRICT_KERNEL_RWX is not on by default in all our defconfigs, so that's probably why no one caught it.
Yeah, sorry - we should have caught this sooner.
I used to run the BPF selftests but they stopped building for me a while back, I'll see if I can get them going again.
Ravi had started looking into getting the selftests working well before he left. I will take a look at this.
Thanks.
I got them building with something like:
- turning on DEBUG_INFO and DEBUG_INFO_BTF and rebuilding vmlinux
- grabbing clang 13 from: https://github.com/llvm/llvm-project/releases/download/llvmorg-13.0.0/clang+...
- PATH=$HOME/clang+llvm-13.0.0-powerpc64le-linux-ubuntu-18.04/bin/:$PATH
- apt install:
- libelf-dev
- dwarves
- python-docutils
- libcap-dev
The DEBUG_INFO requirement is a bit of a pain for me. I generally don't
We do need DWARF to be present during BTF generation. We don't really need to preserve DWARF after BTF is generated, though. But no one added that config option and corresponding optimization. If you can figure out how to do that, I'm sure a bunch of folks will appreciate being able to specify CONFIG_DEBUG_INFO_BTF without CONFIG_DEBUG_INFO dependency.
> There is logic in tools/testing/selftests/bpf/Makefile to use VMLINUX_H
> instead of extracting the BTF from the vmlinux (line 247), but AFAICS
> that's unreachable since 1a3449c19407 ("selftests/bpf: Clarify build
> error if no vmlinux"), which makes it a hard error to not have a
> VMLINUX_BTF.
Yeah, you can pass a pre-generated vmlinux.h through VMLINUX_H, which we do for libbpf CI (see [0]) when running the latest selftests against old kernels (we test 4.9 and 5.5 currently). The latest vmlinux image (which you can override with VMLINUX_BTF) is required for the custom kernel module which we use during selftests. But if you don't provide the matching kernel, everything should still build fine; the test module just won't load properly and we'll skip a few tests. You should still get good coverage.
So in short, given we are able to build the selftests and run them against 4.9 and 5.5, you should be able to as well.
[0] https://github.com/libbpf/libbpf/blob/master/travis-ci/vmtest/build_selftest...
On Mon, 25 Oct 2021 11:26:49 +0530, Hari Bathini wrote:
> Running a program with bpf-to-bpf function calls results in a data access
> exception (0x300) with the below call trace:
>
> [...]
Applied to powerpc/next.
[1/1] powerpc/bpf: fix write protecting JIT code
      https://git.kernel.org/powerpc/c/44a8214de96bafb5210e43bfa2c97c19bf75af3d
cheers