On Fri, 2024-02-02 at 17:13 +0100, Eric Dumazet wrote:
On Fri, Feb 2, 2024 at 5:07 PM Paolo Abeni <pabeni@redhat.com> wrote:
In very slow environments, most big TCP cases, including segmentation and reassembly of big TCP packets, have a good chance of failing: by default the TCP client uses a write size well below 64KB. If the host is slow enough, autocorking is unable to build actual big TCP packets.
Address the issue using much larger write operations.
Note that it is hard to observe the issue without an extremely slow and/or overloaded environment; reduce the TCP transfer time to allow for much easier/faster reproducibility.
Fixes: 6bb382bcf742 ("selftests: add a selftest for big tcp")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
 tools/testing/selftests/net/big_tcp.sh | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/net/big_tcp.sh b/tools/testing/selftests/net/big_tcp.sh
index cde9a91c4797..2db9d15cd45f 100755
--- a/tools/testing/selftests/net/big_tcp.sh
+++ b/tools/testing/selftests/net/big_tcp.sh
@@ -122,7 +122,9 @@ do_netperf() {
 	local netns=$1
 
 	[ "$NF" = "6" ] && serip=$SERVER_IP6
-	ip net exec $netns netperf -$NF -t TCP_STREAM -H $serip 2>&1 >/dev/null
+
+	# use large write to be sure to generate big tcp packets
+	ip net exec $netns netperf -$NF -t TCP_STREAM -l 1 -H $serip -- -m 262144 2>&1 >/dev/null
 }
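[Editor's note: a minimal sketch, not part of the patch, illustrating why 262144 was chosen as the write size: to reliably build big TCP packets, each write must comfortably exceed the legacy 64KB GSO limit, so autocorking does not have to coalesce many small writes on a slow host. The variable names are illustrative only.]

```shell
# Hypothetical check: compare the patch's -m value against the
# legacy 64KB GSO/GRO packet size limit that big TCP raises.
LEGACY_GSO_LIMIT=$((64 * 1024))   # 65536 bytes
WRITE_SIZE=262144                 # the -m value used in the patch

if [ "$WRITE_SIZE" -gt "$LEGACY_GSO_LIMIT" ]; then
	echo "$WRITE_SIZE-byte writes can exceed the $LEGACY_GSO_LIMIT-byte limit"
else
	echo "$WRITE_SIZE-byte writes are too small for big TCP packets"
fi
```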
Interesting.
I think we set tcp_wmem[1] to 262144 on our hosts. I think the netperf default write size depends on tcp_wmem[1].
I haven't dug into the netperf source, but the above would be consistent with what I observe: in my VM I see 16KB writes, and tcp_wmem[1] is 16K.
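[Editor's note: a quick sketch for checking the local defaults discussed above. It assumes a Linux /proc filesystem and falls back gracefully where that path is absent; the variable names are illustrative.]

```shell
# Print the three tcp_wmem fields (min, default, max, in bytes).
# The middle field (tcp_wmem[1]) is the default send-buffer size
# that the netperf default write size appears to track.
f=/proc/sys/net/ipv4/tcp_wmem
if [ -r "$f" ]; then
	read -r wmin wdef wmax < "$f"
else
	wmin=unknown wdef=unknown wmax=unknown
fi
echo "tcp_wmem: min=$wmin default=$wdef max=$wmax"
```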
Reviewed-by: Eric Dumazet <edumazet@google.com>
Thanks!
Paolo