Adding Mat(s) for awareness; it would be great (but difficult) to have mptcp too in the long run ;)
On 8/27/24 21:32, Willem de Bruijn wrote:
From: Willem de Bruijn willemb@google.com
Lay the groundwork to import into kselftests the over 150 packetdrill TCP/IP conformance tests on github.com/google/packetdrill.
Florian recently added support for packetdrill tests in nf_conntrack, in commit a8a388c2aae49 ("selftests: netfilter: add packetdrill based conntrack tests").
This patch takes a slightly different implementation and reuses the ksft python library for its KTAP, ksft, NetNS and other such tooling.
It also anticipates the large number of testcases by creating a separate kselftest for each feature (directory). It does this by copying the template script packetdrill_ksft.py for each directory, and putting those copies in TEST_CUSTOM_PROGS so that kselftests runs each of them.
To demonstrate the code with minimal patch size, initially import only two features/directories from github. One with a single script, and one with two. This was the only reason to pick tcp/inq and tcp/md5.
Any future imports of packetdrill tests should require no additional coding. Just add the tcp/$FEATURE directory with *.pkt files.
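To make the mechanism concrete, a per-directory runner could look roughly like the sketch below. This is illustrative only, not the actual packetdrill_ksft.py: it assumes the ksft_run/ksft_exit helpers from net/lib (via the lib/py boilerplate discussed further down) and a packetdrill binary in PATH, and the exact calling conventions may differ.

#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
#
# Illustrative sketch of a per-directory packetdrill runner: execute
# every *.pkt file next to this script and report one KTAP case each.

import glob
import os
import subprocess

# Assumes the per-directory lib/py/__init__.py shim makes the shared
# ksft helpers importable; see the RFC discussion below.
from lib.py import ksft_run, ksft_exit


def _run_pkt(pkt):
    # packetdrill exits non-zero on failure; the raised exception marks
    # the corresponding KTAP case as not ok.
    subprocess.run(["packetdrill", pkt], check=True)


def main():
    here = os.path.dirname(os.path.abspath(__file__))
    cases = []
    for pkt in sorted(glob.glob(os.path.join(here, "*.pkt"))):
        case = lambda pkt=pkt: _run_pkt(pkt)
        case.__name__ = os.path.splitext(os.path.basename(pkt))[0]
        cases.append(case)
    ksft_run(cases)
    ksft_exit()


if __name__ == "__main__":
    main()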
Implementation notes:
- restore alphabetical order when adding the new directory to tools/testing/selftests/Makefile
- copied *.pkt files and support verbatim from the github project, except for
  - update common/defaults.sh path (there are two paths on github)
  - add SPDX headers
  - remove one author statement
  - Acknowledgment: drop an e (checkpatch)
Tested:
  make -C tools/testing/selftests/ \
          TARGETS=net/packetdrill \
          install INSTALL_PATH=$KSFT_INSTALL_PATH
  # in virtme-ng
  sudo ./run_kselftest.sh -c net/packetdrill
  sudo ./run_kselftest.sh -t net/packetdrill:tcp_inq.py
Result:

  kselftest: Running tests in net/packetdrill
  TAP version 13
  1..2
  # timeout set to 45
  # selftests: net/packetdrill: tcp_inq.py
  # KTAP version 1
  # 1..4
  # ok 1 tcp_inq.client-v4
  # ok 2 tcp_inq.client-v6
  # ok 3 tcp_inq.server-v4
  # ok 4 tcp_inq.server-v6
  # # Totals: pass:4 fail:0 xfail:0 xpass:0 skip:0 error:0
  ok 1 selftests: net/packetdrill: tcp_inq.py
  # timeout set to 45
  # selftests: net/packetdrill: tcp_md5.py
  # KTAP version 1
  # 1..2
  # ok 1 tcp_md5.md5-only-on-client-ack-v4
  # ok 2 tcp_md5.md5-only-on-client-ack-v6
  # # Totals: pass:2 fail:0 xfail:0 xpass:0 skip:0 error:0
  ok 2 selftests: net/packetdrill: tcp_md5.py
Signed-off-by: Willem de Bruijn willemb@google.com
RFC points for discussion
ksft: the choice for this python framework introduces a dependency on the YNL scripts, and some non-obvious code:
- to include the net/lib dep in tools/testing/selftests/Makefile
- a boilerplate lib/py/__init__.py that each user of ksft will need
It seems preferable to me to use ksft.py over reinventing the wheel, e.g., to print KTAP output. But perhaps we can make it more obvious for future ksft users, and make the dependency on YNL optional.
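For reference, the boilerplate __init__.py in question is roughly of this shape (a sketch only; the number of ".." components depends on where the test directory sits relative to net/lib, so treat the layout as an assumption):

# SPDX-License-Identifier: GPL-2.0
#
# Sketch of a lib/py/__init__.py shim: make the shared helpers under
# net/lib/py importable from a test directory.

import sys
from pathlib import Path

# Illustrative: walk up to the kselftest root; adjust for the real layout.
KSFT_DIR = (Path(__file__).parent / "../../../..").resolve()

sys.path.append(KSFT_DIR.as_posix())

# Re-export ksft_run, ksft_exit, NetNS and friends for the test scripts.
from net.lib.py import *  # noqa: E402,F401,F403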
kselftest-per-directory: copying packetdrill_ksft.py to create a separate script per dir is a bit of a hack.
Additionally, in some setups the test directory is read-only, so avoiding file creation there would be better.
What about placing in each subdirectory a trivial wrapper that invokes the 'main' packetdrill_ksft.py script, passing the (sub-)directory to run/process as an argument?
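Something along these lines, purely to illustrate the idea (the relative path and the flag name are invented, not an existing interface):

#!/usr/bin/env python3
# SPDX-License-Identifier: GPL-2.0
#
# Trivial per-subdirectory wrapper: re-exec the shared runner and tell
# it which directory's *.pkt files to process.

import os
import sys

HERE = os.path.dirname(os.path.abspath(__file__))
MAIN = os.path.join(HERE, os.pardir, "packetdrill_ksft.py")

os.execv(sys.executable, [sys.executable, MAIN, "--test-dir", HERE])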
A single script is much simpler, optionally with nested KTAP (not supported yet by ksft). But I'm afraid that running time without intermediate output will be very long once we integrate all the packetdrill scripts.
If I read correctly, this runs the scripts in the given directory sequentially (as opposed to the default packetdrill run_all.py behavior that uses many concurrent threads).
I guess/fear that running all the packetdrill tests in a single batch would take quite a long time, which in turn could be problematic for CI integration. Currently there are a couple of CI test-cases with runtime above 1h, but that is bad ;)
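Just as an illustration of a possible middle ground (rough sketch, not existing code): the runner for one directory could fan the scripts out to a small thread pool, similar in spirit to run_all.py, and only serialize the KTAP reporting. Real code would also need per-test isolation, e.g. one netns per worker.

import concurrent.futures
import glob
import os
import subprocess


def _run_pkt(pkt):
    # Placeholder invocation; tolerances, netns setup etc. are omitted.
    proc = subprocess.run(["packetdrill", pkt],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    return pkt, proc.returncode == 0


def run_dir(path, jobs=8):
    pkts = sorted(glob.glob(os.path.join(path, "*.pkt")))
    with concurrent.futures.ThreadPoolExecutor(max_workers=jobs) as pool:
        results = list(pool.map(_run_pkt, pkts))

    # Serialize reporting so the KTAP output stays well formed.
    print("KTAP version 1")
    print(f"1..{len(results)}")
    for i, (pkt, ok) in enumerate(results, 1):
        print(f"{'ok' if ok else 'not ok'} {i} {os.path.basename(pkt)}")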
nf_conntrack: we can dedup the common.sh.
*.pkt files: which of the 150+ scripts on github are candidates for kselftests, all or a subset (to avoid change detector tests)? And what is the best way to eventually send up to 150 files, ~7K LoC?
I have no idea WRT the overall test stability; if some specific case/dir is subject to very frequent false positives/negatives we could postpone importing it, but ideally IMHO all the github scripts are good candidates.
Side note: I think it would be great to have an easy command-line parameter to run only a specified script/test-case.
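E.g. something as small as this would do (argument handling invented, shown only to illustrate the knob):

import argparse
import glob
import os


def parse_args():
    parser = argparse.ArgumentParser()
    # Zero or more script names; default is every *.pkt in the test dir.
    parser.add_argument("pkt", nargs="*",
                        help="run only these .pkt script(s)")
    return parser.parse_args()


def select_scripts(args, test_dir):
    if args.pkt:
        return [os.path.join(test_dir, p) for p in args.pkt]
    return sorted(glob.glob(os.path.join(test_dir, "*.pkt")))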
Thanks!
Paolo