On Fri, 6 Nov 2020 at 06:54, Arpitha Raghunandan <98.arpi@gmail.com> wrote:
On 06/11/20 1:25 am, Marco Elver wrote:
On Thu, Nov 05, 2020 at 04:02PM +0100, Marco Elver wrote:
On Thu, 5 Nov 2020 at 15:30, Arpitha Raghunandan <98.arpi@gmail.com> wrote:
[...]
I tried adding support to run each parameter as a distinct test case by making changes to kunit_run_case_catch_errors(). The issue is that, since the results are reported in KTAP format, each parameter then ends up being treated as a subtest of another subtest (the KUnit test case).
Do you have example output? That might help understand what's going on.
The change that I tried can be seen here (based on the v4 patch): https://gist.github.com/arpi-r/4822899087ca4cc34572ed9e45cc5fee.
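(Roughly, the idea is to loop over the generated parameters inside kunit_run_case_catch_errors() and emit one result line per parameter. A simplified sketch only, not the actual diff: kunit_run_one_param() is a hypothetical stand-in for the usual try/catch setup, and generate_params()/param_value are the hooks this series adds, assuming the v4 generate_params(prev) signature:)

/* Sketch in the context of lib/kunit/test.c. */
static void kunit_run_case_catch_errors(struct kunit_suite *suite,
					struct kunit_case *test_case)
{
	struct kunit test;
	const void *param = NULL;
	bool all_ok = true;
	int i = 0;

	do {
		/* A fresh, fully initialized kunit instance per parameter. */
		kunit_init_test(&test, test_case->name, test_case->log);

		/* generate_params()/param_value as added by this series. */
		if (test_case->generate_params) {
			param = test_case->generate_params(param);
			if (!param)
				break;	/* no more parameters */
			test.param_value = param;
		}

		/* Hypothetical stand-in for the usual try/catch run of the case body. */
		kunit_run_one_param(suite, test_case, &test);
		all_ok = all_ok && test.success;

		/*
		 * One result line per parameter; together with the "1..1"
		 * plan above it, this is what makes each parameter show up
		 * as a subtest nested under the test case.
		 */
		pr_info("%s %d - %s %d\n",
			test.success ? "ok" : "not ok",
			(int)kunit_test_case_num(suite, test_case),
			test_case->name, ++i);
	} while (test_case->generate_params);

	test_case->success = all_ok;
}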
Using the kunit tool, I get this error:
[19:20:41] [ERROR] expected 7 test suites, but got -1
[ERROR] no tests run!
[19:20:41] ============================================================
[19:20:41] Testing complete. 0 tests run. 0 failed. 0 crashed.
But this error is only because of how the tool displays the results. The test actually does run, as can be seen in the dmesg output:
TAP version 14
1..7
    # Subtest: ext4_inode_test
    1..1
    ok 1 - inode_test_xtimestamp_decoding 1
    ok 1 - inode_test_xtimestamp_decoding 2
    ok 1 - inode_test_xtimestamp_decoding 3
    ok 1 - inode_test_xtimestamp_decoding 4
    ok 1 - inode_test_xtimestamp_decoding 5
    ok 1 - inode_test_xtimestamp_decoding 6
    ok 1 - inode_test_xtimestamp_decoding 7
    ok 1 - inode_test_xtimestamp_decoding 8
    ok 1 - inode_test_xtimestamp_decoding 9
    ok 1 - inode_test_xtimestamp_decoding 10
    ok 1 - inode_test_xtimestamp_decoding 11
    ok 1 - inode_test_xtimestamp_decoding 12
    ok 1 - inode_test_xtimestamp_decoding 13
    ok 1 - inode_test_xtimestamp_decoding 14
    ok 1 - inode_test_xtimestamp_decoding 15
    ok 1 - inode_test_xtimestamp_decoding 16
ok 1 - ext4_inode_test
(followed by other kunit test outputs)
Hmm, interesting. Let me play with your patch a bit.
One option is to just have the test case number increment as well, i.e. have this:

| ok 1 - inode_test_xtimestamp_decoding#1
| ok 2 - inode_test_xtimestamp_decoding#2
| ok 3 - inode_test_xtimestamp_decoding#3
| ok 4 - inode_test_xtimestamp_decoding#4
| ok 5 - inode_test_xtimestamp_decoding#5
...
Or is there something else I missed?
Right, so TAP wants the exact number of tests it will run ahead of time. In that case, we can still report the result of each parameter run as a diagnostic line. Please see my proposed patch below, which still does proper initialization/destruction of each parameter case as if it were its own test case.
With it the output looks as follows:
| TAP version 14
| 1..6
|     # Subtest: ext4_inode_test
|     1..1
|     # ok param#0 - inode_test_xtimestamp_decoding
|     # ok param#1 - inode_test_xtimestamp_decoding
|     # ok param#2 - inode_test_xtimestamp_decoding
|     # ok param#3 - inode_test_xtimestamp_decoding
|     # ok param#4 - inode_test_xtimestamp_decoding
|     # ok param#5 - inode_test_xtimestamp_decoding
|     # ok param#6 - inode_test_xtimestamp_decoding
|     # ok param#7 - inode_test_xtimestamp_decoding
|     # ok param#8 - inode_test_xtimestamp_decoding
|     # ok param#9 - inode_test_xtimestamp_decoding
|     # ok param#10 - inode_test_xtimestamp_decoding
|     # ok param#11 - inode_test_xtimestamp_decoding
|     # ok param#12 - inode_test_xtimestamp_decoding
|     # ok param#13 - inode_test_xtimestamp_decoding
|     # ok param#14 - inode_test_xtimestamp_decoding
|     # ok param#15 - inode_test_xtimestamp_decoding
|     ok 1 - inode_test_xtimestamp_decoding
| ok 1 - ext4_inode_test
Would that be reasonable? If so, feel free to take the patch and test/adjust as required.
I'm not sure about the best format -- is there a recommended format for parameterized test result output? If not, I suppose we can put anything we like into the diagnostic.
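To illustrate the reporting side only (condensed sketch, not the actual patch -- kunit_print_param_result() is a made-up helper name): each parameter run gets reported as a '#'-prefixed diagnostic, so the declared plan keeps counting only the test cases themselves.

#include <kunit/test.h>		/* struct kunit, struct kunit_case */
#include <linux/printk.h>

/*
 * Hypothetical helper: report one parameter run as a KTAP diagnostic
 * line.  Because the line starts with '#', it does not count against
 * the "1..N" plan, but each parameter still gets an ok/not ok record.
 */
static void kunit_print_param_result(struct kunit *test,
				     struct kunit_case *test_case,
				     int param_index)
{
	pr_info("# %s param#%d - %s\n",
		test->success ? "ok" : "not ok",
		param_index, test_case->name);
}

This would be called once per parameter from the same per-parameter loop, before the single "ok N - <case>" result line for the test case is printed as before.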
I think this output format should be fine for parameterized tests. However, this patch has the same issue as before: while the tests run and this output can be seen in dmesg, the kunit tool still trips over it. It gives a similar error:
[11:07:38] [ERROR] expected 7 test suites, but got -1
[11:07:38] [ERROR] expected_suite_index -1, but got 2
[11:07:38] [ERROR] got unexpected test suite: kunit-try-catch-test
[ERROR] no tests run!
[11:07:38] ============================================================
[11:07:38] Testing complete. 0 tests run. 0 failed. 0 crashed.
I'd suggest testing without these patches and diffing the output. AFAIK we're not adding any new non-# output, so it might be a pre-existing bug in some parsing code. Either that, or the parsing code does not respect the # correctly?
Thanks,
-- Marco