Hi Willem,
On 09/09/2024 15:15, Willem de Bruijn wrote:
Matthieu Baerts wrote:
Hi Jakub,
On 07/09/2024 02:04, Jakub Kicinski wrote:
On Fri, 06 Sep 2024 19:28:08 -0400 Willem de Bruijn wrote:
No, we opted for this design precisely to reuse the existing kselftest infra, rather than reimplement it in our wrapper, as I did in the RFC.
OK, I understood from the discussions around the RFC that, by using the kselftest infra, the tests would automatically be executed in dedicated netns, and that it could also help run tests in parallel. That sounded great to me, but from what I see, that is not the case by default.
Perhaps that's something to change in the defaults for run_tests.
Since the infra exists, that is preferable over reimplementing it for one particular subset of tests.
Or, if not all kselftests can run in a netns (quite likely), this needs to be opt-in: perhaps a variable defined in the Makefile, to tell kselftest to enable the feature for this target.
Indeed, I was thinking along the same lines.
Yes, I was also thinking about a variable defined in the Makefile.
Because I suppose this variable will not be added in this cycle: if a v3 is planned, would it be OK to simply prefix the 'packetdrill' commands with "unshare -n"? That would be similar to what is already done in Netfilter, and it would prevent interfering with other tests or host settings.
Each target is built and booted separately, right?
From what I see, on NIPA, there are two executors dedicated to packetdrill: one with and one without a debug kernel config. Each of them is running in a dedicated VM.
So yes, on NIPA, we are safe.
But someone could run these packetdrill tests manually on a test machine, then switch to something else and hit unexpected behaviours.
These three initial tests share set_defaults.sh, so in practice this should be fine. Most importantly, they do not affect any tests outside net/packetdrill.
But agreed that netns are needed before adding more.
The unshare approach sounds fine to me. Easier than plumbing a Makefile variable through to the standalone run_kselftest.sh.
Yes, easier indeed.
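For reference, a minimal sketch of what the "unshare -n" wrapping could look like. The helper name and the exact packetdrill invocation are illustrative assumptions, not the actual runner:

```shell
#!/bin/bash
# Hypothetical sketch: run one packetdrill script inside a fresh
# network namespace, similar to what the Netfilter selftests do.
run_in_netns() {
	local script="$1"
	# unshare -n gives the child its own netns; loopback is down in
	# a fresh netns, so bring it up before running the test.
	unshare -n bash -c "ip link set lo up && packetdrill '$script'"
}
```

This keeps any address, route, or sysctl change made by a test confined to the throwaway namespace instead of the host.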
3 give up on target proliferation; on a quick count we have 15 targets in ksft for various bits of networking, faaar more than anyone else
- fewer targets limits the need for libraries, libraries local to the target are trivial to handle
- ksft has no other form of "grouping" tests, if we collapse into a small number of targets it will be hard to run a group of tests
It is good to have targets: to easily run a group of tests related to a modification that has just been done, to limit the size of the required kernel config, etc. It is probably easier to have different libs per target/subsystem, and when something can be re-used elsewhere, it can maybe be extracted to a more generic lib?
The conflicting CONFIGs between targets could be an issue. Even with packetdrill I had to check HZ and saw a difference with net/bpf.
If some kernel configs conflict, a check might be needed in ksft_runner.sh, because I know some CIs, like LKFT, build the kernel once, mixing the different kernel config files, then run the different targets on different VMs.
But maybe this case does not need to be considered, and just adding CONFIG_HZ_100=y to the config file is enough.
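As a sketch of such a guard, assuming the packetdrill scripts expect CONFIG_HZ=100: the file locations, helper names, and skip policy below are assumptions for illustration, not the actual ksft_runner.sh:

```shell
#!/bin/bash
# Hypothetical CONFIG_HZ guard for a runner script.
KSFT_SKIP=4	# kselftest convention: exit code 4 means "skipped"

# Best-effort read of the running kernel's CONFIG_HZ value.
kernel_hz() {
	local cfg="/boot/config-$(uname -r)"
	if [ -r "$cfg" ]; then
		sed -n 's/^CONFIG_HZ=//p' "$cfg"
	elif [ -r /proc/config.gz ]; then
		zcat /proc/config.gz | sed -n 's/^CONFIG_HZ=//p'
	fi
}

# Succeed when HZ is unknown (be permissive) or matches the expected
# value; fail otherwise so the caller can skip instead of failing.
hz_ok() {
	local hz="$1" want="${2:-100}"
	[ -z "$hz" ] || [ "$hz" = "$want" ]
}

# In the runner this would become something like:
#   hz_ok "$(kernel_hz)" 100 || \
#       { echo "SKIP: unexpected CONFIG_HZ"; exit "$KSFT_SKIP"; }
```

Skipping with the dedicated kselftest exit code keeps the result visible to the harness, rather than producing confusing timing failures on a mismatched kernel.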
That said, there could probably be a way, in between -c (collection/target) and -t (individual test), to select tests using a wildcard.
There are probably too many ways to run the selftests: personally, I never use run_kselftest.sh. Either I use 'make run_tests' to run all tests, or, when I need to check one specific selftest, I often execute the script directly, instead of dealing with the kselftest infrastructure. In that case, I can also simply use a bash for-loop with a wildcard to execute all the tests I want.
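A tiny illustration of that kind of loop; the helper and the path in the usage comment are hypothetical:

```shell
#!/bin/bash
# Hypothetical helper: execute every script matching a glob directly,
# without going through the kselftest harness.
run_glob() {
	local pattern="$1" t rc=0
	for t in $pattern; do
		[ -e "$t" ] || continue	# unmatched glob stays literal; skip
		echo "== $t =="
		"./$t" || { echo "FAIL: $t"; rc=1; }
	done
	return "$rc"
}
# Usage (path illustrative):
#   cd tools/testing/selftests/net/mptcp && run_glob 'mptcp_*.sh'
```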
Cheers, Matt