On Fri, Apr 04, 2025 at 12:10:13PM +0100, Richard Fitzgerald wrote:
On 4/4/25 07:53, David Gow wrote:
- Make sure these tests don't automatically enable themselves if no
driver which depends on the firmware loader is enabled. (e.g. make it depend on the library, rather than selecting it. If there's an extra "dummy" option which force-enables it, that's fine, but that shouldn't be enabled if KUNIT_ALL_TESTS is)
I thought someone had already patched that, but apparently not. I'm not the only person who thought "ALL_TESTS" meant "all tests", not "some tests". Given that ALL_TESTS is badly named and really means "all tests for modules that are already selected", these tests should only build if cs_dsp is specifically selected.
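For illustration, the usual KUnit Kconfig pattern for this (symbol names here are assumptions, not the actual patch) is to make the test depend on the library rather than select it, so KUNIT_ALL_TESTS only pulls the test in when the library is already in the config:

```
# Hypothetical sketch: the test depends on the cs_dsp library instead of
# selecting it, so default KUNIT_ALL_TESTS only enables the test when
# FW_CS_DSP is already enabled by something else in the config.
config FW_CS_DSP_KUNIT_TEST
	tristate "KUnit tests for Cirrus Logic cs_dsp" if !KUNIT_ALL_TESTS
	depends on KUNIT && FW_CS_DSP
	default KUNIT_ALL_TESTS
```

The `if !KUNIT_ALL_TESTS` on the prompt keeps the option manually selectable while letting the all-tests default take over when that mode is in use.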
The change didn't enable the drivers in the config fragment for all tests, and TBH I suspect it's a better idea to have a specific config option for the tests that depends on KUnit or something, since the drivers will pull in subsystems, which gets to be a mess and possibly fragile.
However, the users of cs_dsp can have a lot of dependencies that are entirely irrelevant to cs_dsp, so cs_dsp *could* be tested without those. (This also applies to other Cirrus KUnit tests.) I started on some patches to make the Kconfig for the tested libraries visible when CONFIG_KUNIT is enabled, so that they can be selected for testing without having to enable a large bunch of other frameworks just to be able to enable a user of the library. They are somewhere in my to-do pile.
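A sketch of what such a patch could look like (symbol and prompt text are assumptions, not the pending patches): give the normally prompt-less, select-only library a user-visible prompt that only appears when KUNIT is enabled:

```
# Hypothetical: cs_dsp normally has no prompt and is only ever enabled via
# select from its users. Making the prompt conditional on KUNIT lets a
# test config enable the library directly, without any of those users.
config FW_CS_DSP
	tristate "Cirrus Logic cs_dsp firmware parser" if KUNIT
```

With that in place a .kunitconfig only needs CONFIG_KUNIT, CONFIG_FW_CS_DSP and the test option, rather than a whole audio subsystem.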
Yeah, I think that's the most sensible thing for coverage.
Related to this I created a UM kunit.py config specifically to run all Cirrus KUnit tests with minimum other clutter. But does that mean nobody outside Cirrus will run it? Or is there a process somewhere that runs kunit.py on all configs in tools/testing/kunit/configs ?
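For anyone wanting to try such a config, kunit.py can be pointed at a fragment with --kunitconfig; the fragment name below is a placeholder, not the actual file:

```
# Run under UM against a specific config fragment. Substitute the real
# name of the Cirrus fragment in tools/testing/kunit/configs.
./tools/testing/kunit/kunit.py run \
	--kunitconfig=tools/testing/kunit/configs/<cirrus-config>
```

As far as I know there is no CI that sweeps every fragment in that directory, so a config only gets exercised by whoever passes it explicitly.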
I'm not aware of anything. I'm running --alltests (on virtual hardware rather than um due to the environment where things get run).
- As a result, make them more explicitly enabled with --alltests, and
probably disabled -- or only run a subset -- in the default. Currently this is mostly decided by whether CONFIG_REGMAP is enabled; having a specific item to use for these tests would be less surprising.
- If any of the individual tests are particularly slow (more than a
~second or so on fast hardware), mark them as slow. Most people still enable slow tests, so they'll still get run most of the time, but they can be skipped on old m68k machines, gated behind the quick tests passing in CI systems, etc.
I don't expect any individual test cases are slow (I'll check through). One of the advantages of parameterizing them was that it avoids a single test case taking a long time looping through a lot of testing. They should be a quick write-read-check-exit. "Slowness" is tricky to judge: the slowest hardware I have is an ARM7 clocked at a few hundred MHz, or QEMU in full emulation.
On the scale of KUnit tests there are quite a few that are noticeable; it's just a second or two per test, but even in qemu the tests are usually ticking by faster than can be read.
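For reference, marking an individual case slow is a one-line change in the case table using the KUNIT_CASE_SLOW() macro, which sets the "slow" speed attribute so runners can filter or defer it (test names below are made up for illustration):

```
/* Sketch with hypothetical test names: KUNIT_CASE_SLOW() registers the
 * case like KUNIT_CASE() but tags it with the slow speed attribute. */
static struct kunit_case cs_dsp_example_cases[] = {
	KUNIT_CASE(cs_dsp_quick_write_read_test),
	KUNIT_CASE_SLOW(cs_dsp_full_coverage_test),
	{ }
};
```

Runners that understand test attributes can then skip the slow cases while still executing everything else in the suite.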