This is a note to let you know that I've just added the patch titled
bpf: fix bpf_tail_call() x64 JIT
to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=su...
The filename of the patch is:
     bpf-fix-bpf_tail_call-x64-jit.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let stable@vger.kernel.org know about it.
From foo@baz Mon Jan 29 13:22:08 CET 2018
From: Daniel Borkmann <daniel@iogearbox.net>
Date: Mon, 29 Jan 2018 02:48:55 +0100
Subject: bpf: fix bpf_tail_call() x64 JIT
To: gregkh@linuxfoundation.org
Cc: ast@kernel.org, stable@vger.kernel.org, Alexei Starovoitov <ast@fb.com>, "David S . Miller" <davem@davemloft.net>
Message-ID: <b7bd813935a7bc6a5f4fe4a3f199034f571c9b70.1517190206.git.daniel@iogearbox.net>
From: Alexei Starovoitov <ast@fb.com>
[ Upstream commit 90caccdd8cc0215705f18b92771b449b01e2474a ]
- bpf prog_array, just like all other types of bpf array, accepts a 32-bit
  index. Clarify that in the comment.
- fix the x64 JIT of bpf_tail_call(), which was incorrectly loading 8 instead
  of 4 bytes
- tighten the corresponding check in the interpreter to stay consistent
  (see the sketch after this list)
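To make the index-width point concrete, here is a minimal user-space sketch
(an illustration, not kernel code; the helpers in_bounds_u64/in_bounds_u32 are
made up for this example) contrasting the old 64-bit bounds check with the
fixed 32-bit one:

#include <stdint.h>
#include <stdio.h>

/* bpf_tail_call() takes a 32-bit index, so only the low 32 bits of the
 * BPF register R3 may participate in the bounds check.
 */
static int in_bounds_u64(uint64_t r3, uint32_t max_entries)
{
	uint64_t index = r3;		/* old interpreter: full 64-bit compare */
	return index < max_entries;
}

static int in_bounds_u32(uint64_t r3, uint32_t max_entries)
{
	uint32_t index = r3;		/* fixed interpreter: truncate to 32 bits */
	return index < max_entries;
}

int main(void)
{
	uint64_t r3 = 0x100000001ULL;	/* low 32 bits = 1, upper bits set */

	/* old: out of bounds (0x100000001 >= 2); fixed: in bounds (1 < 2) */
	printf("u64 check: %d, u32 check: %d\n",
	       in_bounds_u64(r3, 2), in_bounds_u32(r3, 2));
	return 0;
}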
The JIT bug can be triggered after the introduction of the BPF_F_NUMA_NODE
flag in commit 96eabe7a40aa in 4.14. Before that, map_flags would stay zero,
so although the JIT code is wrong, it would still check bounds correctly.
Hence the two Fixes tags. All other JITs don't have this problem.
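To illustrate why a non-zero map_flags is what exposes the bad 8-byte load,
here is a rough user-space sketch. The two-field struct fake_map is invented
for this example; it assumes only what the commit relies on, namely that
max_entries and map_flags are adjacent 32-bit fields and that x86-64 is
little-endian:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_map {
	uint32_t max_entries;
	uint32_t map_flags;	/* can be non-zero once BPF_F_NUMA_NODE exists */
};

int main(void)
{
	struct fake_map m = { .max_entries = 4, .map_flags = 0 };
	uint64_t qword;

	/* the buggy JIT did an 8-byte load:
	 * mov rax, qword ptr [rsi + offsetof(struct bpf_array, map.max_entries)]
	 */
	memcpy(&qword, &m.max_entries, sizeof(qword));
	printf("flags=0: qword load sees %llu (happens to equal max_entries)\n",
	       (unsigned long long)qword);

	m.map_flags = 1;	/* e.g. BPF_F_NUMA_NODE set */
	memcpy(&qword, &m.max_entries, sizeof(qword));
	printf("flags=1: qword load sees %llu (bogus bound)\n",
	       (unsigned long long)qword);
	return 0;
}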
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Fixes: 96eabe7a40aa ("bpf: Allow selecting numa node during map creation")
Fixes: b52f00e6a715 ("x86: bpf_jit: implement bpf_tail_call() helper")
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/net/bpf_jit_comp.c |    4 ++--
 kernel/bpf/core.c           |    2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -278,9 +278,9 @@ static void emit_bpf_tail_call(u8 **ppro
 	/* if (index >= array->map.max_entries)
 	 *	  goto out;
 	 */
-	EMIT4(0x48, 0x8B, 0x46,                   /* mov rax, qword ptr [rsi + 16] */
+	EMIT2(0x89, 0xD2);                        /* mov edx, edx */
+	EMIT3(0x39, 0x56,                         /* cmp dword ptr [rsi + 16], edx */
 	      offsetof(struct bpf_array, map.max_entries));
-	EMIT3(0x48, 0x39, 0xD0);                  /* cmp rax, rdx */
 #define OFFSET1 43 /* number of bytes to jump */
 	EMIT2(X86_JBE, OFFSET1);                  /* jbe out */
 	label1 = cnt;
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -715,7 +715,7 @@ select_insn:
 		struct bpf_map *map = (struct bpf_map *) (unsigned long) BPF_R2;
 		struct bpf_array *array = container_of(map, struct bpf_array, map);
 		struct bpf_prog *prog;
-		u64 index = BPF_R3;
+		u32 index = BPF_R3;
 
 		if (unlikely(index >= array->map.max_entries))
 			goto out;
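For reference, the replacement encoding works because writing a 32-bit
register on x86-64 zero-extends it into the full 64-bit register, so
"mov edx, edx" is exactly the u32 truncation, and the following cmp does a
4-byte load of map.max_entries. A small C model of that zero-extension (a
sketch of the instruction's documented effect, not of the kernel emitter):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t rdx = 0xdeadbeef00000007ULL;	/* tail-call index in R3/rdx */

	/* effect of "mov edx, edx": the upper 32 bits of rdx are cleared */
	rdx = (uint32_t)rdx;
	printf("rdx after mov edx, edx: 0x%llx\n", (unsigned long long)rdx);

	/* "cmp dword ptr [rsi + 16], edx" then compares 32 bits on both
	 * sides, so neither operand carries stale upper bits.
	 */
	return 0;
}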
Patches currently in stable-queue which might be from daniel@iogearbox.net are
queue-4.9/bpf-avoid-false-sharing-of-map-refcount-with-max_entries.patch
queue-4.9/x86-bpf_jit-small-optimization-in-emit_bpf_tail_call.patch
queue-4.9/bpf-reject-stores-into-ctx-via-st-and-xadd.patch
queue-4.9/bpf-fix-32-bit-divide-by-zero.patch
queue-4.9/bpf-fix-bpf_tail_call-x64-jit.patch
queue-4.9/bpf-arsh-is-not-supported-in-32-bit-alu-thus-reject-it.patch
queue-4.9/bpf-fix-divides-by-zero.patch
queue-4.9/bpf-introduce-bpf_jit_always_on-config.patch