Hi all,
Happy new year! I'm just picking up this thread again, after having a bunch of other things come up at the end of December. I've since implemented some of the direct feedback on the patch, but wanted to clarify some overall direction too:
One idea I've had in the past is to keep a list of "test suites to be run when KUnit is ready". This is partly because it's much nicer to have all the test suites run together as part of a single (K)TAP output, so this could be a way of implementing (at least part of) that.
I had a look at implementing this, but it doesn't seem to win us much with the current structure: since we call kunit_run_all_tests() before a filesystem is available, KUnit will always be "ready" (and the built-in tests run) before we've had a chance to load modules, which may contain further tests.
One option would be to defer kunit_run_all_tests() until we think we have the full set of tests, but there's no specific point at which we know that all required modules have been loaded. We could defer to an explicit user-triggered "run the tests now" interface (debugfs?), but that might break the expectation that tests execute automatically on init.
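To make the trade-off concrete, here's a small userspace model of the deferred-run idea: suites register onto a pending list instead of executing at init, and an explicit trigger (a debugfs write, in the real kernel) flushes them all as one combined TAP document. All names here (defer_suite, run_all_pending, etc.) are made up for illustration - this is not KUnit API, just a sketch of the control flow:

```python
# Hypothetical userspace model, NOT kernel/KUnit code. Suites register onto
# a pending list; an explicit trigger later runs them all in one TAP document.

pending_suites = []

def defer_suite(name, run):
    """Register a suite to run later, rather than at KUnit init time."""
    pending_suites.append((name, run))

def run_all_pending():
    """The explicit 'run the tests now' trigger: one combined TAP document."""
    lines = ["TAP version 14", f"1..{len(pending_suites)}"]
    for i, (name, run) in enumerate(pending_suites, 1):
        ok = run()
        lines.append(f"{'ok' if ok else 'not ok'} {i} - {name}")
    pending_suites.clear()
    return "\n".join(lines)

# A built-in suite registers early; a module's suite registers whenever
# the module happens to load. Neither runs until the trigger fires.
defer_suite("builtin-suite", lambda: True)
defer_suite("module-suite", lambda: True)
print(run_all_pending())
```

The upside is the single plan line covering everything; the downside is exactly the one above - nothing runs until someone fires the trigger, so the "tests run automatically on init" expectation is gone.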
Alternatively, I could properly split the TAP output, and just run tests whenever they're probed - either from the built-in set at init, or as modules are loaded at arbitrary points in the future. However, I'm not sure what expectations consumers of the TAP output might have.
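For comparison, here's the same kind of sketch for the split-output alternative: each suite emits its own self-contained TAP document at the moment it's probed. Again, the function name and output details are illustrative assumptions, not the actual KUnit output code:

```python
# Hypothetical sketch, NOT KUnit code: each probed suite emits a complete,
# standalone TAP document rather than contributing to one combined run.

def emit_suite_tap(name, results):
    """Emit a self-contained TAP document for one suite's results."""
    lines = ["TAP version 14", f"1..{len(results)}"]
    for i, (case, ok) in enumerate(results, 1):
        lines.append(f"{'ok' if ok else 'not ok'} {i} - {case}")
    return "\n".join(lines)

# A built-in suite runs at init...
print(emit_suite_tap("builtin-suite", [("case1", True)]))
# ...and a module's suite runs later as a separate TAP document, so the
# consumer must cope with multiple plans arriving at arbitrary times.
print(emit_suite_tap("module-suite", [("case1", True), ("case2", True)]))
```

This keeps tests running automatically as they appear, but the consumer now sees several independent version/plan blocks interleaved in the log instead of one, which is the expectation question above.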
Are there any preferences on the approach here?
Cheers,
Jeremy