The previous commit improves the precision in scalar(32)_min_max_add and scalar(32)_min_max_sub. The improvement in precision occurs in cases when all outcomes overflow or underflow, respectively. This commit adds selftests that exercise those cases.
Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
---
 .../selftests/bpf/progs/verifier_bounds.c | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
index 30e16153fdf1..20fb0fef5719 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -1371,4 +1371,89 @@ __naked void mult_sign_ovf(void)
 	  __imm(bpf_skb_store_bytes)
 	: __clobber_all);
 }
+
+SEC("socket")
+__description("64-bit addition overflow, all outcomes overflow")
+__success __log_level(2)
+__msg("7: (0f) r5 += r3 {{.*}} R5_w=scalar(smin=0x800003d67e960f7d,umin=0x551ee3d67e960f7d,umax=0xc0149fffffffffff,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x3d67e960f7d; 0xfffffc298169f082))")
+__retval(0)
+__naked void add64_ovf(void)
+{
+	asm volatile (
+	"call %[bpf_get_prandom_u32];"
+	"r3 = r0;"
+	"r4 = 0x950a43d67e960f7d ll;"
+	"r3 |= r4;"
+	"r5 = 0xc014a00000000000 ll;"
+	"r5 += r3;"
+	"r0 = 0;"
+	"exit"
+	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("32-bit addition overflow, all outcomes overflow")
+__success __log_level(2)
+__msg("5: (0c) w5 += w3 {{.*}} R5_w=scalar(smin=umin=umin32=0x20130018,smax=umax=umax32=0x8000ffff,smin32=0x80000018,var_off=(0x18; 0xffffffe7))")
+__retval(0)
+__naked void add32_ovf(void)
+{
+	asm volatile (
+	"call %[bpf_get_prandom_u32];"
+	"r3 = r0;"
+	"w4 = 0xa0120018;"
+	"w3 |= w4;"
+	"w5 = 0x80010000;"
+	"w5 += w3;"
+	"r0 = 0;"
+	"exit"
+	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("64-bit subtraction overflow, all outcomes underflow")
+__success __log_level(2)
+__msg("6: (1f) r3 -= r1 {{.*}} R3_w=scalar(umin=1,umax=0x8000000000000000)")
+__retval(0)
+__naked void sub64_ovf(void)
+{
+	asm volatile (
+	"call %[bpf_get_prandom_u32];"
+	"r1 = r0;"
+	"r2 = 0x8000000000000000 ll;"
+	"r1 |= r2;"
+	"r3 = 0x0;"
+	"r3 -= r1;"
+	"r0 = 0;"
+	"exit"
+	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+SEC("socket")
+__description("32-bit subtraction overflow, all outcomes underflow")
+__success __log_level(2)
+__msg("5: (1c) w3 -= w1 {{.*}} R3_w=scalar(smin=umin=umin32=1,smax=umax=umax32=0x80000000,var_off=(0x0; 0xffffffff))")
+__retval(0)
+__naked void sub32_ovf(void)
+{
+	asm volatile (
+	"call %[bpf_get_prandom_u32];"
+	"r1 = r0;"
+	"w2 = 0x80000000;"
+	"w1 |= w2;"
+	"r3 = 0x0;"
+	"w3 -= w1;"
+	"r0 = 0;"
+	"exit"
+	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
On Tue, 2025-06-17 at 19:17 -0400, Harishankar Vishwanathan wrote:
> The previous commit improves the precision in scalar(32)_min_max_add
> and scalar(32)_min_max_sub. The improvement in precision occurs in
> cases when all outcomes overflow or underflow, respectively. This
> commit adds selftests that exercise those cases.
>
> Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
> Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
> Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
Could you please also add test cases when one bound overflows while another does not? Or these are covered by some other tests?
[...]
> +SEC("socket")
> +__description("64-bit addition overflow, all outcomes overflow")
> +__success __log_level(2)
> +__msg("7: (0f) r5 += r3 {{.*}} R5_w=scalar(smin=0x800003d67e960f7d,umin=0x551ee3d67e960f7d,umax=0xc0149fffffffffff,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x3d67e960f7d; 0xfffffc298169f082))")
Would it be possible to pick some more "human readable" constants here? As-is it is hard to make sense what verifier actually computes.
> +__retval(0)
> +__naked void add64_ovf(void)
> +{
> +	asm volatile (
> +	"call %[bpf_get_prandom_u32];"
> +	"r3 = r0;"
> +	"r4 = 0x950a43d67e960f7d ll;"
> +	"r3 |= r4;"
> +	"r5 = 0xc014a00000000000 ll;"
> +	"r5 += r3;"
> +	"r0 = 0;"
> +	"exit"
> +	:
> +	: __imm(bpf_get_prandom_u32)
> +	: __clobber_all);
> +}
[...]
On Wed, Jun 18, 2025 at 5:22 PM Eduard Zingerman eddyz87@gmail.com wrote:
> On Tue, 2025-06-17 at 19:17 -0400, Harishankar Vishwanathan wrote:
> > The previous commit improves the precision in scalar(32)_min_max_add
> > and scalar(32)_min_max_sub. The improvement in precision occurs in
> > cases when all outcomes overflow or underflow, respectively. This
> > commit adds selftests that exercise those cases.
> >
> > Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
> > Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
> > Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
>
> Could you please also add test cases when one bound overflows while
> another does not? Or these are covered by some other tests?
Yes this is possible and I can add such test cases. These are not covered by other tests as far as I can see.
[...]
> > +SEC("socket")
> > +__description("64-bit addition overflow, all outcomes overflow")
> > +__success __log_level(2)
> > +__msg("7: (0f) r5 += r3 {{.*}} R5_w=scalar(smin=0x800003d67e960f7d,umin=0x551ee3d67e960f7d,umax=0xc0149fffffffffff,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x3d67e960f7d; 0xfffffc298169f082))")
>
> Would it be possible to pick some more "human readable" constants
> here? As-is it is hard to make sense what verifier actually computes.
>
> > +__retval(0)
> > +__naked void add64_ovf(void)
> > +{
> > +	asm volatile (
> > +	"call %[bpf_get_prandom_u32];"
> > +	"r3 = r0;"
> > +	"r4 = 0x950a43d67e960f7d ll;"
> > +	"r3 |= r4;"
> > +	"r5 = 0xc014a00000000000 ll;"
> > +	"r5 += r3;"
> > +	"r0 = 0;"
> > +	"exit"
> > +	:
> > +	: __imm(bpf_get_prandom_u32)
> > +	: __clobber_all);
> > +}
It is possible to pick more human readable constants, but the precision gains might not be as apparent. For instance, with the above (current) test case, the old scalar_min_max_add() produced [umin_value=0x3d67e960f7d, umax_value=U64_MAX], while the updated scalar_min_max_add() produces a much more precise [0x551ee3d67e960f7d, 0xc0149fffffffffff], a bound that has close to 2**63 fewer inhabitants.
For the purposes of a test case, if human readability is more important than demonstrating a large precision gain, I can switch to a more readable one, similar to the one shown in the commit message of v1 of the patch [1]:
With the old scalar_min_max_add(), we get r3's bounds set to unbounded, i.e., [0, U64_MAX] after instruction 6: (0f) r3 += r3
0: R1=ctx() R10=fp0
0: (18) r3 = 0x8000000000000000 ; R3_w=0x8000000000000000
2: (18) r4 = 0x0                ; R4_w=0
4: (87) r4 = -r4                ; R4_w=scalar()
5: (4f) r3 |= r4                ; R3_w=scalar(smax=-1,umin=0x8000000000000000,var_off=(0x8000000000000000; 0x7fffffffffffffff)) R4_w=scalar()
6: (0f) r3 += r3                ; R3_w=scalar()
7: (b7) r0 = 1                  ; R0_w=1
8: (95) exit
With the new scalar_min_max_add(), we get r3's bounds set to [0, 0xfffffffffffffffe], a bound that is only marginally more precise, with just one fewer inhabitant.
...
6: (0f) r3 += r3                ; R3_w=scalar(umax=0xfffffffffffffffe)
7: (b7) r0 = 1                  ; R0_w=1
8: (95) exit
Please advise which test cases to prefer. I will follow up with a v3.
[1]: https://lore.kernel.org/bpf/20250610221356.2663491-1-harishankar.vishwanatha...
[...]
On Thu, 2025-06-19 at 17:13 -0400, Harishankar Vishwanathan wrote:
> On Wed, Jun 18, 2025 at 5:22 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > On Tue, 2025-06-17 at 19:17 -0400, Harishankar Vishwanathan wrote:
> > > The previous commit improves the precision in scalar(32)_min_max_add
> > > and scalar(32)_min_max_sub. The improvement in precision occurs in
> > > cases when all outcomes overflow or underflow, respectively. This
> > > commit adds selftests that exercise those cases.
> > >
> > > Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
> > > Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
> > > Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
> >
> > Could you please also add test cases when one bound overflows while
> > another does not? Or these are covered by some other tests?
>
> Yes this is possible and I can add such test cases. These are not
> covered by other tests as far as I can see.
Great, thank you.
> > > +SEC("socket")
> > > +__description("64-bit addition overflow, all outcomes overflow")
> > > +__success __log_level(2)
> > > +__msg("7: (0f) r5 += r3 {{.*}} R5_w=scalar(smin=0x800003d67e960f7d,umin=0x551ee3d67e960f7d,umax=0xc0149fffffffffff,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x3d67e960f7d; 0xfffffc298169f082))")
> >
> > Would it be possible to pick some more "human readable" constants
> > here? As-is it is hard to make sense what verifier actually computes.
> >
> > > +__retval(0)
> > > +__naked void add64_ovf(void)
> > > +{
> > > +	asm volatile (
> > > +	"call %[bpf_get_prandom_u32];"
> > > +	"r3 = r0;"
> > > +	"r4 = 0x950a43d67e960f7d ll;"
> > > +	"r3 |= r4;"
> > > +	"r5 = 0xc014a00000000000 ll;"
> > > +	"r5 += r3;"
> > > +	"r0 = 0;"
> > > +	"exit"
> > > +	:
> > > +	: __imm(bpf_get_prandom_u32)
> > > +	: __clobber_all);
> > > +}
> It is possible to pick more human readable constants, but the
> precision gains might not be as apparent. For instance, with the above
> (current) test case, the old scalar_min_max_add() produced
> [umin_value=0x3d67e960f7d, umax_value=U64_MAX], while the updated
> scalar_min_max_add() produces a much more precise
> [0x551ee3d67e960f7d, 0xc0149fffffffffff], a bound that has close to
> 2**63 fewer inhabitants.
>
> For the purposes of a test case, if human readability is more
> important than demonstrating a large precision gain, I can switch to a
> more readable one, similar to the one shown in the commit message of
> v1 of the patch [1]:
>
> With the old scalar_min_max_add(), we get r3's bounds set to
> unbounded, i.e., [0, U64_MAX] after instruction 6: (0f) r3 += r3
>
> 0: R1=ctx() R10=fp0
> 0: (18) r3 = 0x8000000000000000 ; R3_w=0x8000000000000000
> 2: (18) r4 = 0x0                ; R4_w=0
> 4: (87) r4 = -r4                ; R4_w=scalar()
> 5: (4f) r3 |= r4                ; R3_w=scalar(smax=-1,umin=0x8000000000000000,var_off=(0x8000000000000000; 0x7fffffffffffffff)) R4_w=scalar()
> 6: (0f) r3 += r3                ; R3_w=scalar()
> 7: (b7) r0 = 1                  ; R0_w=1
> 8: (95) exit
>
> With the new scalar_min_max_add(), we get r3's bounds set to
> [0, 0xfffffffffffffffe], a bound that is only marginally more precise,
> with just one fewer inhabitant.
>
> ...
> 6: (0f) r3 += r3                ; R3_w=scalar(umax=0xfffffffffffffffe)
> 7: (b7) r0 = 1                  ; R0_w=1
> 8: (95) exit
>
> Please advise which test cases to prefer. I will follow up with a v3.
Hm, I see, that's an interesting angle. The problem is, if I do something silly changing the code and this test fails, I'd have a hard time understanding the expected output. Therefore, I'd prefer something more obvious.
Maybe let's go with this:
SEC("tc")
__success
__naked void test1(void)
{
	asm volatile (
	"r3 = 0xa000000000000000 ll;"
	"r4 = 0x0;"
	"r4 = -r4;"
	"r3 |= r4;"
	"r3 += r3;"
	"r0 = 1;"
	"exit;"
	:
	: __imm(bpf_get_prandom_u32)
	: __clobber_all);
}
Here is verifier log comparison:
master: 5: (0f) r3 += r3 ; R3_w=scalar()
branch: 5: (0f) r3 += r3 ; R3_w=scalar(umin=0x4000000000000000,umax=0xfffffffffffffffe)
?
[...]
On Thu, Jun 19, 2025 at 5:55 PM Eduard Zingerman eddyz87@gmail.com wrote:
> On Thu, 2025-06-19 at 17:13 -0400, Harishankar Vishwanathan wrote:
> > On Wed, Jun 18, 2025 at 5:22 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > > On Tue, 2025-06-17 at 19:17 -0400, Harishankar Vishwanathan wrote:
[...]
> Hm, I see, that's an interesting angle. The problem is, if I do
> something silly changing the code and this test fails, I'd have a hard
> time understanding the expected output. Therefore, I'd prefer
> something more obvious.
>
> Maybe let's go with this:
>
> SEC("tc")
> __success
> __naked void test1(void)
> {
> 	asm volatile (
> 	"r3 = 0xa000000000000000 ll;"
> 	"r4 = 0x0;"
> 	"r4 = -r4;"
> 	"r3 |= r4;"
> 	"r3 += r3;"
> 	"r0 = 1;"
> 	"exit;"
> 	:
> 	: __imm(bpf_get_prandom_u32)
> 	: __clobber_all);
> }
>
> Here is verifier log comparison:
>
> master: 5: (0f) r3 += r3 ; R3_w=scalar()
> branch: 5: (0f) r3 += r3 ; R3_w=scalar(umin=0x4000000000000000,umax=0xfffffffffffffffe)
>
> ?
Okay, this is both readable and demonstrates the precision gains. I'll follow up with a v3 with similar updated test cases for full overflow and partial overflow for all four functions.
[...]