On Thu, Dec 01, 2022 at 06:33:33PM +0100, Jaroslav Kysela wrote:
> Obtain all test parameters from the configuration files. The defaults are defined in the pcm-test.conf file. The test count and parameters may be variable per specific hardware.
>
> Also, handle alt_formats field now (with the fixes in the format loop). It replaces the original "automatic" logic which is not so universal.
>
> The code may be further extended to skip various tests based on the configuration hints, if the exact PCM hardware parameters are not available for the given hardware.
> --- a/tools/testing/selftests/alsa/conf.d/Lenovo_ThinkPad_P1_Gen2.conf
> +++ b/tools/testing/selftests/alsa/conf.d/Lenovo_ThinkPad_P1_Gen2.conf
> @@ -55,6 +55,14 @@ card.hda {
>  		period_size 24000
>  		buffer_size 192000
>  	}
> +	test.time3 {
> +		access RW_INTERLEAVED
> +		format S16_LE
> +		rate 44100
> +		channels 2
> +		period_size 24000
> +		buffer_size 192000
> +	}
I really do think we should be giving these names which help people understand what the tests are intending to cover, it'll make it easier to both understand the results and maintain the configurations going forward. Or at least commenting things, but names are probably better. Since the timeN is also used to figure out what type of test we're doing, that'd mean either adding an explicit test_type field
pcm.test.48k2_S16 {
	test_type time

or nesting the tests under a per-type block
pcm.test.time.48k2_S16
or alternatively adding a human readable name field

pcm.test.time1 {
	description "48kHz Stereo S16_LE"
which is more readable but does mean that automated systems aren't going to surface the meaningful name for users so readily - you get things like
https://linux.kernelci.org/test/plan/id/6388c0cba8274c94402abd12/
https://linux.kernelci.org/test/plan/id/6388ce6efef77e61ab2abd10/
so there's a UI barrier before people see the test.
mixer-test is kind of "fun" in how many test results it can generate on bigger systems, but hey; there's also some output corruption going on in the first link which loses us the capture tests. I have toyed with the idea of putting the control names into the mixer test names, but some of the test systems currently struggle with parsing spaces in the test name.
I do see this is all kind of baked into snd_config_get_type() unfortunately, so perhaps the new description/name field is the best option here? We could add that incrementally.
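Just to sketch what I mean (illustrative only - the "description" key, the helper name and the fallback to the node id are all made up here, not anything in the patch), reading an optional field with the stock alsa-lib config API would look something like:

#include <alsa/asoundlib.h>

/*
 * Sketch only: pick a human readable name for a test node.  The
 * "description" key and the fallback to the node id are illustrative.
 */
static const char *test_description(snd_config_t *test_cfg)
{
	snd_config_t *n;
	const char *s;

	/* prefer an explicit description string when one is configured */
	if (snd_config_search(test_cfg, "description", &n) >= 0 &&
	    snd_config_get_string(n, &s) >= 0)
		return s;
	/* otherwise fall back to the node id, e.g. "time1" */
	if (snd_config_get_id(test_cfg, &s) >= 0)
		return s;
	return "unknown";
}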
>  	for (pcm = pcm_list; pcm != NULL; pcm = pcm->next) {
> -		test_pcm_time1(pcm, "test.time1", "S16_LE", 48000, 2, 512, 4096);
> -		test_pcm_time1(pcm, "test.time2", "S16_LE", 48000, 2, 24000, 192000);
> +		cfg = pcm->pcm_config;
> +		if (cfg == NULL)
> +			cfg = default_pcm_config;
> +		cfg = conf_get_subtree(cfg, "test", NULL);
> +		if (cfg == NULL)
> +			continue;
> +		snd_config_for_each(i, next, cfg) {
I can see the benefit in moving the defaults to a configuration file instead of code, but rather than having it be an either/or it seems much better to have the board-specific configuration file extend the defaults, resulting in us looping over both files if we've got both. We'd need something that avoids collisions; perhaps the simplest thing would be to just add an element to the printed test name for the source of the config, so we get output like:
ok 1 test.default.time1.0.0.0.PLAYBACK
ok 2 test.system.time1.0.0.0.PLAYBACK
That does mean that the system test list can't replace the generic test list, but like I said elsewhere I think that would be a good thing for clarity anyway ("X works on system A but not the very similar system B, so what's broken about system B...").
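Roughly what I have in mind, purely as a sketch - run_one_test() and the function names are placeholders for the existing per-test code rather than anything in the patch, and I'm just using the plain alsa-lib config calls:

#include <alsa/asoundlib.h>

/* placeholder for the existing per-test code */
typedef void (*run_one_test_t)(const char *source, snd_config_t *test_cfg);

/* run every test found under "test" in one configuration tree, tagging
 * the results with which tree they came from */
static void run_tests_from(const char *source, snd_config_t *root,
			   run_one_test_t run_one_test)
{
	snd_config_t *tests;
	snd_config_iterator_t i, next;

	if (root == NULL)
		return;
	if (snd_config_search(root, "test", &tests) < 0)
		return;
	snd_config_for_each(i, next, tests)
		run_one_test(source, snd_config_iterator_entry(i));
}

/* the board specific file extends rather than replaces the defaults */
static void run_all_tests(snd_config_t *default_cfg, snd_config_t *system_cfg,
			  run_one_test_t run_one_test)
{
	run_tests_from("default", default_cfg, run_one_test);
	run_tests_from("system", system_cfg, run_one_test);
}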
> --- /dev/null
> +++ b/tools/testing/selftests/alsa/pcm-test.conf
> @@ -0,0 +1,16 @@
> +pcm.test.time1 {
> +	format S16_LE
> +	alt_formats [ S32_LE ]
> +	rate 48000
> +	channels 2
> +	period_size 512
> +	buffer_size 4096
> +}
> +pcm.test.time2 {
> +	format S16_LE
> +	alt_formats [ S32_LE ]
> +	rate 48000
> +	channels 2
> +	period_size 24000
> +	buffer_size 192000
> +}
It's probably especially important that anything in a default configuration should skip when the constraints aren't satisfied, since we've no idea what hardware we're running on. Rather than requiring skipping to be explicitly configured, perhaps we could just set a flag based on whether we're reading the default tests or a system-specific file; I'm not sure I see a sensible use case for system-specific tests specifying a configuration that can't be satisfied. Doing things that way, the flag could either mean we add skipping or that we report two results for each configured test:
not ok 1 test.system.time1.0.0.0.PLAYBACK.constraints
ok 2 test.system.time1.0.0.0.PLAYBACK # SKIP
which is perhaps marginally simpler to implement and makes it clearer in the results whether it was a straight-up logic failure rather than a timing failure.
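In code terms I'm picturing something like the below, again only as a sketch - the helper name and flag are made up, and I'm assuming the usual kselftest result helpers:

#include <stdbool.h>

#include "../kselftest.h"

/*
 * Sketch only: when the configured hardware parameters can't be
 * satisfied, tests from the default file just skip; tests from a
 * system specific file also report the unmet constraints as a hard
 * failure, since the board file claims the hardware can do this.
 */
static void report_unsatisfied(const char *test_name, bool from_system_conf)
{
	if (from_system_conf)
		ksft_test_result_fail("%s.constraints\n", test_name);
	ksft_test_result_skip("%s\n", test_name);
}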
I would also like to see 44.1kHz, 96kHz and at least one mono and one 6 channel configuration added (in my patches I added 8kHz mono since it's the most common practical mono format, and 8kHz stereo so that if 8kHz mono doesn't work it's a bit more obvious whether it's mono or 8kHz that's broken). That could definitely be done incrementally though.
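Purely for illustration, that would just be more entries in the default pcm-test.conf along these lines - the names and the period/buffer sizes here are placeholders I've picked, not anything from my patches or this series:

# illustrative names and period/buffer sizes only
pcm.test.time4 {
	format S16_LE
	rate 44100
	channels 2
	period_size 512
	buffer_size 4096
}
pcm.test.time5 {
	format S16_LE
	rate 96000
	channels 2
	period_size 1024
	buffer_size 8192
}
pcm.test.time6 {
	format S16_LE
	rate 8000
	channels 1
	period_size 256
	buffer_size 2048
}
pcm.test.time7 {
	format S16_LE
	rate 48000
	channels 6
	period_size 512
	buffer_size 4096
}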