The (K)TAP spec encourages test output to begin with a 'test plan': a count of the number of tests being run of the form: 1..n
However, some test suites might not know the number of subtests in advance (for example, KUnit's parameterised tests use a generator function). In this case, it's not possible to print the test plan in advance.
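For illustration, recognising a plan line is a simple pattern match. This is a sketch of the idea only; the exact regex in kunit_parser.py may differ:

```python
import re

# "1..n" declares that n subtests will follow. This pattern is an
# illustration of the concept, not necessarily the exact one used by
# kunit_parser.py.
TEST_PLAN = re.compile(r'^1\.\.([0-9]+)$')

def parse_plan(line):
    """Return the declared subtest count, or None if there is no plan."""
    match = TEST_PLAN.match(line.strip())
    return int(match.group(1)) if match else None

print(parse_plan('1..3'))          # 3
print(parse_plan('# Subtest: x'))  # None
```

A test suite that cannot know its subtest count up front simply never emits such a line, which is why the parser needs to treat "no plan" as valid input.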
kunit_tool already parses test output which doesn't contain a plan, but reports an error. Since we want to use nested subtests with KUnit parameterised tests, remove this error.
Signed-off-by: David Gow <davidgow@google.com>
---
 tools/testing/kunit/kunit_parser.py    | 5 ++---
 tools/testing/kunit/kunit_tool_test.py | 5 ++++-
 2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 3355196d0515..50ded55c168c 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -340,8 +340,8 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
 	"""
 	Parses test plan line and stores the expected number of subtests in
 	test object. Reports an error if expected count is 0.
-	Returns False and reports missing test plan error if fails to parse
-	test plan.
+	Returns False and sets expected_count to None if there is no valid test
+	plan.

 	Accepted format:
 	- '1..[number of subtests]'
@@ -356,7 +356,6 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
 	match = TEST_PLAN.match(lines.peek())
 	if not match:
 		test.expected_count = None
-		test.add_error('missing plan line!')
 		return False
 	test.log.append(lines.pop())
 	expected_count = int(match.group(1))
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 9c4126731457..bc8793145713 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -191,7 +191,10 @@ class KUnitParserTest(unittest.TestCase):
 		result = kunit_parser.parse_run_tests(
 			kunit_parser.extract_tap_lines(
 			file.readlines()))
-		self.assertEqual(2, result.test.counts.errors)
+		# A missing test plan is not an error.
+		self.assertEqual(0, result.test.counts.errors)
+		# All tests should be accounted for.
+		self.assertEqual(10, result.test.counts.total())
 		self.assertEqual(
 			kunit_parser.TestStatus.SUCCESS,
 			result.status)
Currently, the results for individual parameters in a parameterised test are simply output as (K)TAP diagnostic lines. However, the plan was always[1] to make these (K)TAP subtests when kunit_tool supported them.
With [2], these are now supported. (v5 will print out an error about the missing plan line, but this can safely be ignored, and will hopefully be changed). As a result, individual test parameter results are parsed, displayed in the formatted results, and counted for test statistics.
[1]: https://lore.kernel.org/linux-kselftest/CABVgOSnJAgWvTTABaF082LuYjAoAWzrBsyu... [2]: https://lore.kernel.org/linux-kselftest/20211006001447.20919-1-dlatypov@goog...
Signed-off-by: David Gow <davidgow@google.com>
---
Note that this was previously posted as: https://lore.kernel.org/all/20211006071112.2206942-1-davidgow@google.com/
No changes have been made, save for a trivial rebase on the current kselftest/kunit branch.
 lib/kunit/test.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 3bd741e50a2d..85265f9a66a1 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -508,6 +508,8 @@ int kunit_run_tests(struct kunit_suite *suite)
 			/* Get initial param. */
 			param_desc[0] = '\0';
 			test.param_value = test_case->generate_params(NULL, param_desc);
+			kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+				  "# Subtest: %s", test_case->name);
 		}

 		do {
@@ -520,9 +522,8 @@ int kunit_run_tests(struct kunit_suite *suite)
 			}

 			kunit_log(KERN_INFO, &test,
-				  KUNIT_SUBTEST_INDENT
-				  "# %s: %s %d - %s",
-				  test_case->name,
+				  KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+				  "%s %d - %s",
 				  kunit_status_to_ok_not_ok(test.status),
 				  test.param_index + 1,
 				  param_desc);
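The net effect (using a hypothetical test case name; exact spacing depends on KUNIT_SUBTEST_INDENT) is that parameter results move from diagnostic lines to nested subtest result lines:

```
Before:
    # example_case: ok 1 - first parameter

After:
        # Subtest: example_case
        ok 1 - first parameter
        ok 2 - second parameter
```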
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
Currently, the results for individual parameters in a parameterised test are simply output as (K)TAP diagnostic lines. However, the plan was always[1] to make these (K)TAP subtests when kunit_tool supported them.
With [2], these are now supported. (v5 will print out an error about the missing plan line, but this can safely be ignored, and will hopefully be
Should this commit description be updated?
changed). As a result, individual test parameter results are parsed, displayed in the formatted results, and counted for test statistics.
This works for me.
One concern I have for the future is if showing all the parameters by default might become too verbose? Should there eventually be a verbosity/test-level flag that controls how deep we go? We could elect to only print FAILED subtests after we hit the depth limit.
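One possible shape for that, purely as a sketch (none of these names exist in kunit_tool today): print every result down to a maximum depth, and below that only surface failures:

```python
# Hypothetical sketch of a --depth option for kunit_tool's printed
# results: show everything down to max_depth, and below that only
# FAILED subtests. All names here are illustrative, not real
# kunit_tool APIs.

class Result:
    def __init__(self, name, passed, subtests=()):
        self.name = name
        self.passed = passed
        self.subtests = list(subtests)

def render(result, max_depth, depth=0, lines=None):
    """Collect printable lines, hiding deep PASSED subtests."""
    if lines is None:
        lines = []
    if depth <= max_depth or not result.passed:
        status = 'PASSED' if result.passed else 'FAILED'
        lines.append('  ' * depth + f'[{status}] {result.name}')
    for sub in result.subtests:
        render(sub, max_depth, depth + 1, lines)
    return lines

suite = Result('fat_test', False, [
    Result('fat_time_fat2unix_test', False, [
        Result('Earliest possible UTC', True),   # hidden at depth 2
        Result('Leap Day / Year', False),        # shown: it failed
    ]),
])
for line in render(suite, max_depth=1):
    print(line)
```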
Testing this out with: $ ./tools/testing/kunit/kunit.py --kunitconfig=fs/fat
Before:
[17:55:48] Starting KUnit Kernel (1/1)...
[17:55:48] ============================================================
[17:55:51] ================== fat_test (3 subtests) ===================
[17:55:51] [PASSED] fat_checksum_test
[17:55:51] [PASSED] fat_time_fat2unix_test
[17:55:51] [PASSED] fat_time_unix2fat_test
[17:55:51] ==================== [PASSED] fat_test =====================
[17:55:51] ============================================================
[17:55:51] Testing complete. Passed: 3, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
[17:55:51] Elapsed time: 7.784s total, 0.001s configuring, 4.790s building, 2.877s running

After:
[17:56:22] Starting KUnit Kernel (1/1)...
[17:56:22] ============================================================
[17:56:25] ================== fat_test (3 subtests) ===================
[17:56:25] [PASSED] fat_checksum_test
[17:56:25] ================== fat_time_fat2unix_test ==================
[17:56:25] [PASSED] Earliest possible UTC (1980-01-01 00:00:00)
[17:56:25] [PASSED] Latest possible UTC (2107-12-31 23:59:58)
[17:56:25] [PASSED] Earliest possible (UTC-11) (== 1979-12-31 13:00:00 UTC)
[17:56:25] [PASSED] Latest possible (UTC+11) (== 2108-01-01 10:59:58 UTC)
[17:56:25] [PASSED] Leap Day / Year (1996-02-29 00:00:00)
[17:56:25] [PASSED] Year 2000 is leap year (2000-02-29 00:00:00)
[17:56:25] [PASSED] Year 2100 not leap year (2100-03-01 00:00:00)
[17:56:25] [PASSED] Leap year + timezone UTC+1 (== 2004-02-29 00:30:00 UTC)
[17:56:25] [PASSED] Leap year + timezone UTC-1 (== 2004-02-29 23:30:00 UTC)
[17:56:25] [PASSED] VFAT odd-second resolution (1999-12-31 23:59:59)
[17:56:25] [PASSED] VFAT 10ms resolution (1980-01-01 00:00:00:0010)
[17:56:25] ============= [PASSED] fat_time_fat2unix_test ==============
[17:56:25] ================== fat_time_unix2fat_test ==================
[17:56:25] [PASSED] Earliest possible UTC (1980-01-01 00:00:00)
[17:56:25] [PASSED] Latest possible UTC (2107-12-31 23:59:58)
[17:56:25] [PASSED] Earliest possible (UTC-11) (== 1979-12-31 13:00:00 UTC)
[17:56:25] [PASSED] Latest possible (UTC+11) (== 2108-01-01 10:59:58 UTC)
[17:56:25] [PASSED] Leap Day / Year (1996-02-29 00:00:00)
[17:56:25] [PASSED] Year 2000 is leap year (2000-02-29 00:00:00)
[17:56:25] [PASSED] Year 2100 not leap year (2100-03-01 00:00:00)
[17:56:25] [PASSED] Leap year + timezone UTC+1 (== 2004-02-29 00:30:00 UTC)
[17:56:25] [PASSED] Leap year + timezone UTC-1 (== 2004-02-29 23:30:00 UTC)
[17:56:25] [PASSED] VFAT odd-second resolution (1999-12-31 23:59:59)
[17:56:25] [PASSED] VFAT 10ms resolution (1980-01-01 00:00:00:0010)
[17:56:25] ============= [PASSED] fat_time_unix2fat_test ==============
[17:56:25] ==================== [PASSED] fat_test =====================
[17:56:25] ============================================================
[17:56:25] Testing complete. Passed: 23, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
[17:56:25] Elapsed time: 7.733s total, 0.001s configuring, 4.740s building, 2.915s running
Looks similar when run with --kunitconfig=fs/ext4.
This "inverted" nesting of PASSED looks a bit "wrong" at first.
[17:56:25] [PASSED] VFAT 10ms resolution (1980-01-01 00:00:00:0010) [17:56:25] ============= [PASSED] fat_time_unix2fat_test ============== [17:56:25] ==================== [PASSED] fat_test =====================
But I know it's so that we can show results as incrementally as possible, so I'm fine with it. (I imagine our users won't necessarily make that connection, however.)
Signed-off-by: David Gow davidgow@google.com
Signed-off-by: Daniel Latypov dlatypov@google.com
On Fri, Oct 22, 2021 at 9:15 AM Daniel Latypov dlatypov@google.com wrote:
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
Currently, the results for individual parameters in a parameterised test are simply output as (K)TAP diagnostic lines. However, the plan was always[1] to make these (K)TAP subtests when kunit_tool supported them.
With [2], these are now supported. (v5 will print out an error about the missing plan line, but this can safely be ignored, and will hopefully be
Should this commit description be updated?
Whoops, yup. I didn't want to change anything without making this a v2, but I'll definitely update it now.
changed). As a result, individual test parameter results are parsed, displayed in the formatted results, and counted for test statistics.
This works for me.
One concern I have for the future is if showing all the parameters by default might become too verbose? Should there eventually be a verbosity/test-level flag that controls how deep we go? We could elect to only print FAILED subtests after we hit the depth limit.
Totally agree with this. A --depth option is definitely going to become necessary here, and I think that printing FAILED subtests after that limit is sensible default behaviour for it.
Testing this out with: $ ./tools/testing/kunit/kunit.py --kunitconfig=fs/fat
[snipped console output]
Looks similar when run with --kunitconfig=fs/ext4.
This "inverted" nesting of PASSED looks a bit "wrong" at first.
[17:56:25] [PASSED] VFAT 10ms resolution (1980-01-01 00:00:00:0010) [17:56:25] ============= [PASSED] fat_time_unix2fat_test ============== [17:56:25] ==================== [PASSED] fat_test =====================
But I know it's so that we can show results as incrementally as possible, so I'm fine with it. (I imagine our users won't necessarily make that connection, however.)
Yeah, this is definitely something for which there's no "perfect" way of handling it. The fact that the number of '=' signs is based on the length of the name means that even that might not look consistent. I'm sure there are things we could experiment with to make this clearer, e.g. indenting or swapping out the '=' for '-' on subtests (though there's definitely a limit to how deep we could go with something like that).
Signed-off-by: David Gow davidgow@google.com
Signed-off-by: Daniel Latypov dlatypov@google.com
(Was this supposed to be a Tested-by or Reviewed-by or something?)
On Thu, Oct 21, 2021 at 11:16 PM David Gow davidgow@google.com wrote:
On Fri, Oct 22, 2021 at 9:15 AM Daniel Latypov dlatypov@google.com wrote:
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
Currently, the results for individual parameters in a parameterised test are simply output as (K)TAP diagnostic lines. However, the plan was always[1] to make these (K)TAP subtests when kunit_tool supported them.
With [2], these are now supported. (v5 will print out an error about the missing plan line, but this can safely be ignored, and will hopefully be
Should this commit description be updated?
Whoops, yup. I didn't want to change anything without making this a v2, but I'll definitely update it now.
changed). As a result, individual test parameter results are parsed, displayed in the formatted results, and counted for test statistics.
This works for me.
One concern I have for the future is if showing all the parameters by default might become too verbose? Should there eventually be a verbosity/test-level flag that controls how deep we go? We could elect to only print FAILED subtests after we hit the depth limit.
Totally agree with this. A --depth option is definitely going to become necessary here, and I think that printing FAILED subtests after that limit is sensible default behaviour for it.
Testing this out with: $ ./tools/testing/kunit/kunit.py --kunitconfig=fs/fat
[snipped console output]
Looks similar when run with --kunitconfig=fs/ext4.
This "inverted" nesting of PASSED looks a bit "wrong" at first.
[17:56:25] [PASSED] VFAT 10ms resolution (1980-01-01 00:00:00:0010) [17:56:25] ============= [PASSED] fat_time_unix2fat_test ============== [17:56:25] ==================== [PASSED] fat_test =====================
But I know it's so that we can show results as incrementally as possible, so I'm fine with it. (I imagine our users won't necessarily make that connection, however.)
Yeah, this is definitely something for which there's no "perfect" way of handling it. The fact that the number of '=' signs is based on the length of the name means that even that might not look consistent. I'm sure there are things we could experiment with to make this clearer, e.g. indenting or swapping out the '=' for '-' on subtests (though there's definitely a limit to how deep we could go with something like that).
To be clear, I don't think we need to do anything about it at this moment. Just noting that it might cause confusion (and if it causes enough later on, then maybe we should revisit it).
Signed-off-by: David Gow davidgow@google.com
Signed-off-by: Daniel Latypov dlatypov@google.com
(Was this supposed to be a Tested-by or Reviewed-by or something?)
Oops, muscle memory kicked in since I had just typed a Signed-off-by recently....
Reviewed-by: Daniel Latypov dlatypov@google.com
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
The (K)TAP spec encourages test output to begin with a 'test plan': a count of the number of tests being run of the form: 1..n
However, some test suites might not know the number of subtests in advance (for example, KUnit's parameterised tests use a generator function). In this case, it's not possible to print the test plan in advance.
kunit_tool already parses test output which doesn't contain a plan, but reports an error. Since we want to use nested subtests with KUnit parameterised tests, remove this error.
Signed-off-by: David Gow davidgow@google.com
 tools/testing/kunit/kunit_parser.py    | 5 ++---
 tools/testing/kunit/kunit_tool_test.py | 5 ++++-
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 3355196d0515..50ded55c168c 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -340,8 +340,8 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
 	"""
 	Parses test plan line and stores the expected number of subtests in
 	test object. Reports an error if expected count is 0.
-	Returns False and reports missing test plan error if fails to parse
-	test plan.
+	Returns False and sets expected_count to None if there is no valid test
+	plan.

 	Accepted format:
 	- '1..[number of subtests]'
@@ -356,7 +356,6 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
 	match = TEST_PLAN.match(lines.peek())
 	if not match:
 		test.expected_count = None
-		test.add_error('missing plan line!')
This works well, but there's an edge case.
This patch means we no longer print an error when there are no test cases in a subtest. We relied on a check just a bit lower in this function.
Consider
$ ./tools/testing/kunit/kunit.py parse <<EOF TAP version 14 1..1 # Subtest: suite 1..1 # Subtest: case ok 1 - case ok 1 - suite EOF
This produces the following output (timestamps removed)
============================================================ ==================== suite (1 subtest) ===================== =========================== case =========================== ====================== [PASSED] case ======================= ====================== [PASSED] suite ====================== ============================================================
Should we surface some sort of error here?
On Fri, Oct 22, 2021 at 9:29 AM Daniel Latypov dlatypov@google.com wrote:
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
The (K)TAP spec encourages test output to begin with a 'test plan': a count of the number of tests being run of the form: 1..n
However, some test suites might not know the number of subtests in advance (for example, KUnit's parameterised tests use a generator function). In this case, it's not possible to print the test plan in advance.
kunit_tool already parses test output which doesn't contain a plan, but reports an error. Since we want to use nested subtests with KUnit parameterised tests, remove this error.
test.add_error('missing plan line!')
This works well, but there's an edge case.
This patch means we no longer print an error when there are no test cases in a subtest. We relied on a check just a bit lower in this function.
Consider
$ ./tools/testing/kunit/kunit.py parse <<EOF TAP version 14 1..1 # Subtest: suite 1..1 # Subtest: case ok 1 - case ok 1 - suite EOF
This produces the following output (timestamps removed)
============================================================ ==================== suite (1 subtest) ===================== =========================== case =========================== ====================== [PASSED] case ======================= ====================== [PASSED] suite ====================== ============================================================
Should we surface some sort of error here?
I thought about this a bit (and started prototyping it), and think the answer is probably "no" (or, perhaps, "optionally"). Largely because I think it'd be technically valid to have, e.g., a parameterised test whose generator function can legitimately provide zero subtests. And while that's probably worth warning about if it's the only test running, if you're trying to run all tests, and one random subtest of a test of a suite has no subtests, that seems like it'd be more annoying to error on than anything else.
That being said, I'm not opposed to implementing it as an option, or at least having the test status set to NO_ERROR. The implementation I've experimented with basically moves the check to "parse_test", and errors if the number of subtests is 0 after parsing, if parent_test is true (or main, but my rough plan was to make main imply parent_test, and adjust the various conditions to match). I haven't looked into exactly how this is bubbled up yet, but I'd be okay with having an error if there are no tests run at all.
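For what it's worth, the approach described above might look roughly like this (a sketch only; `Test` and `parent_test` echo names in kunit_parser.py, but the logic here is illustrative, not the actual implementation):

```python
# Hypothetical sketch: rather than erroring in parse_test_plan() when
# the plan line is missing, check after parsing whether a parent test
# (a suite or "# Subtest:" header) ended up with zero subtests.
# The class below is a minimal stand-in, not kunit_parser's real Test.

class Test:
    def __init__(self, name, parent_test=False):
        self.name = name
        self.parent_test = parent_test
        self.subtests = []
        self.errors = []

    def add_error(self, error):
        self.errors.append(error)

def check_subtest_count(test):
    """Error only for parent tests that produced no children."""
    if test.parent_test and not test.subtests:
        test.add_error('0 tests run!')

empty_suite = Test('suite', parent_test=True)
check_subtest_count(empty_suite)
print(empty_suite.errors)  # ['0 tests run!']

leaf = Test('case')
check_subtest_count(leaf)
print(leaf.errors)  # []
```

A leaf test case is deliberately exempt, since only tests that declared (or implied) subtests can meaningfully have "none".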
I'll keep playing with this anyway: it's definitely a bit more of a minefield than I'd originally thought. :-)
-- David
return False test.log.append(lines.pop()) expected_count = int(match.group(1))
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py index 9c4126731457..bc8793145713 100755 --- a/tools/testing/kunit/kunit_tool_test.py +++ b/tools/testing/kunit/kunit_tool_test.py @@ -191,7 +191,10 @@ class KUnitParserTest(unittest.TestCase): result = kunit_parser.parse_run_tests( kunit_parser.extract_tap_lines( file.readlines()))
self.assertEqual(2, result.test.counts.errors)
# A missing test plan is not an error.
self.assertEqual(0, result.test.counts.errors)
# All tests should be accounted for.
self.assertEqual(10, result.test.counts.total()) self.assertEqual( kunit_parser.TestStatus.SUCCESS, result.status)
-- 2.33.0.1079.g6e70778dc9-goog
On Thu, Oct 21, 2021 at 11:10 PM David Gow davidgow@google.com wrote:
On Fri, Oct 22, 2021 at 9:29 AM Daniel Latypov dlatypov@google.com wrote:
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
The (K)TAP spec encourages test output to begin with a 'test plan': a count of the number of tests being run of the form: 1..n
However, some test suites might not know the number of subtests in advance (for example, KUnit's parameterised tests use a generator function). In this case, it's not possible to print the test plan in advance.
kunit_tool already parses test output which doesn't contain a plan, but reports an error. Since we want to use nested subtests with KUnit parameterised tests, remove this error.
test.add_error('missing plan line!')
This works well, but there's an edge case.
This patch means we no longer print an error when there are no test cases in a subtest. We relied on a check just a bit lower in this function.
Consider
$ ./tools/testing/kunit/kunit.py parse <<EOF
TAP version 14
1..1
# Subtest: suite
1..1
# Subtest: case
ok 1 - case
ok 1 - suite
EOF
This produces the following output (timestamps removed)
============================================================
==================== suite (1 subtest) =====================
=========================== case ===========================
====================== [PASSED] case =======================
====================== [PASSED] suite ======================
============================================================
Should we surface some sort of error here?
I thought about this a bit (and started prototyping it), and think the answer is probably "no" (or, perhaps, "optionally"). Largely because I think it'd be technically valid to have, e.g., a parameterised test whose generator function can legitimately provide zero subtests. And
That's the question. Should we report PASSED in that case as we do now?
Let's consider parameterised tests, our only current example in KUnit.
I think in most cases, it's a bug if we get 0 cases, and we should let the user know somehow. Should it be an error/warning? Maybe not, but wouldn't it be better to report SKIPPED? (This would require a change in KUnit on the kernel side; I'm not suggesting we do this in the parser.)
while that's probably worth warning about if it's the only test running, if you're trying to run all tests, and one random subtest of a test of a suite has no subtests, that seems like it'd be more annoying to error on than anything else.
That being said, I'm not opposed to implementing it as an option, or at least having the test status set to NO_ERROR. The implementation I've experimented with basically moves the check to "parse_test", and errors if the number of subtests is 0 after parsing, if parent_test is true (or main, but my rough plan was to make main imply parent_test, and adjust the various conditions to match). I haven't looked into exactly how this is bubbled up yet, but I'd be okay with having an error if there are no tests run at all.
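A rough sketch of the approach described above (all names here are hypothetical illustrations, not the actual kunit_tool implementation):

```python
# Hypothetical sketch of moving the zero-subtest check to after parsing.
# None of these names are the real kunit_tool code.
class Test:
    def __init__(self, name, parent_test=False):
        self.name = name
        self.parent_test = parent_test
        self.subtests = []
        self.errors = []

    def add_error(self, msg):
        self.errors.append(f'{self.name}: {msg}')

def check_subtest_count(test):
    """Run after parse_test: flag a parent test that produced no subtests."""
    if test.parent_test and not test.subtests:
        test.add_error('0 tests run!')

suite = Test('suite', parent_test=True)
check_subtest_count(suite)
print(suite.errors)  # ['suite: 0 tests run!']
```

The point of checking after parsing, rather than at the plan line, is that a plan-less parent is only suspicious once we know it never produced any subtest results.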
I'll keep playing with this anyway: it's definitely a bit more of a minefield than I'd originally thought. :-)
-- David
 		return False
 	test.log.append(lines.pop())
 	expected_count = int(match.group(1))
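For illustration, the plan-parsing contract after this patch can be sketched in isolation (an editorial approximation, not the real kunit_parser code; the TEST_PLAN pattern here is an assumption):

```python
import re

# Assumption: this pattern mirrors kunit_parser's TEST_PLAN regex.
TEST_PLAN = re.compile(r'^\s*1\.\.([0-9]+)')

def parse_plan_line(line):
    """Return the expected subtest count, or None when the line is not
    a valid test plan (with this patch, no longer reported as an error)."""
    match = TEST_PLAN.match(line)
    if not match:
        return None  # expected_count stays unknown; parsing continues
    return int(match.group(1))

print(parse_plan_line('1..4'))          # 4
print(parse_plan_line('# Subtest: x'))  # None
```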
On Sat, Oct 23, 2021 at 6:42 AM Daniel Latypov dlatypov@google.com wrote:
On Thu, Oct 21, 2021 at 11:10 PM David Gow davidgow@google.com wrote:
On Fri, Oct 22, 2021 at 9:29 AM Daniel Latypov dlatypov@google.com wrote:
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
That's the question. Should we report PASSED in that case as we do now?
Let's consider parameterised tests, our only current example in KUnit.
I think in most cases, it's a bug if we get 0 cases, and we should let the user know somehow. Should it be an error/warning? Maybe not, but wouldn't it be better to report SKIPPED? (This would require a change in KUnit on the kernel side; I'm not suggesting we do this in the parser.)
Yeah: there are two sort-of separable decisions here:
1) What result should a test with no subtests return?
2) Do we want to trigger any other errors/warnings?
I think the answer to 1 is that kunit_tool should report the result printed in the KTAP output. I agree, though, that for parameterised tests SKIPPED makes more sense than PASSED. (kunit_tool has a separate NO_TESTS result, which we could maybe try to generate and handle explicitly. I think we might as well leave that for the "no tests run at all" case for now.)
For 2, I feel that this definitely should count as a "warning", but all we have at the moment are "errors", which I feel is probably a bit too strong a term for this. Given errors don't actually halt parsing, I'm okay with generating them in kunit_tool in this case, but I'd probably slightly prefer to leave it with SKIPPED, and maybe add a warning later.
On Fri, Oct 22, 2021 at 5:25 PM David Gow davidgow@google.com wrote:
On Sat, Oct 23, 2021 at 6:42 AM Daniel Latypov dlatypov@google.com wrote:
On Thu, Oct 21, 2021 at 11:10 PM David Gow davidgow@google.com wrote:
On Fri, Oct 22, 2021 at 9:29 AM Daniel Latypov dlatypov@google.com wrote:
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
Yeah: there are two sort-of separable decisions here:
- What result should a test with no subtests return?
- Do we want to trigger any other errors/warnings.
I think the answer to 1 is that kunit_tool should report the result printed in the KTAP output. I agree, though, that for parameterised tests SKIPPED makes more sense than PASSED. (kunit_tool has a separate NO_TESTS result, which we could maybe try to generate and handle explicitly. I think we might as well leave that for the "no tests run at all" case for now.)
For 2, I feel that this definitely should count as a "warning", but all we have at the moment are "errors", which I feel is probably a bit too strong a term for this. Given errors don't actually halt parsing, I'm okay with generating them in kunit_tool in this case, but I'd probably slightly prefer to leave it with SKIPPED, and maybe add a warning later.
I am OK marking it as SKIPPED, but I like the idea of promoting it to a warning in a future change.
Completely ignoring an empty test suite seems wrong, especially when we *do* complain when there *is* a test plan, and not all test cases are accounted for.
My 2c.
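For comparison, the plan-vs-actual check being contrasted here amounts to something like the following (a hypothetical sketch, not the real parser code):

```python
# Hypothetical sketch of the existing plan-vs-actual comparison.
# expected_count is None when no plan line was seen.
def check_expected_count(expected_count, subtests, add_error):
    if expected_count is None:
        return  # no plan printed: nothing to compare, nothing is flagged
    if len(subtests) != expected_count:
        add_error(f'expected {expected_count} subtests but got {len(subtests)}')

errors = []
check_expected_count(2, ['case1'], errors.append)
print(errors)  # ['expected 2 subtests but got 1']
check_expected_count(None, [], errors.append)
print(errors)  # unchanged: a plan-less empty suite is silently accepted
```

This is exactly the asymmetry Daniel points out: with a plan, missing cases are complained about; without one, an empty suite slips through.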
On Fri, Oct 22, 2021 at 3:41 PM Daniel Latypov dlatypov@google.com wrote:
On Thu, Oct 21, 2021 at 11:10 PM David Gow davidgow@google.com wrote:
On Fri, Oct 22, 2021 at 9:29 AM Daniel Latypov dlatypov@google.com wrote:
On Wed, Oct 20, 2021 at 11:28 PM David Gow davidgow@google.com wrote:
That's the question. Should we report PASSED in that case as we do now?
Let's consider parameterised tests, our only current example in KUnit.
I think in most cases, it's a bug if we get 0 cases, and we should let the user know somehow.
Actually, when I tried to pass in an empty parameter array, I get a segfault. So I guess we *do* let the user know somehow :)
The root cause: we call test_case->run_case(test), but the test->param_value == NULL. So the test code will segfault whenever it tries to read from param_value.
A hacky fix:
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 85265f9a66a1..e55f842ae355 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -513,6 +513,8 @@ int kunit_run_tests(struct kunit_suite *suite)
 	}

 	do {
+		if (test_case->generate_params && !test.param_value)
+			break; // there were no parameters generated!
 		kunit_run_case_catch_errors(suite, test_case, &test);

 		if (test_case->generate_params) {
Should it be an error/warning? Maybe not, but wouldn't it be better to report SKIPPED? (This would require a change in KUnit on the kernel side, I'm not suggesting we do this in the parser)
Being a bit more concrete, I was originally thinking of the following:
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 85265f9a66a1..3f7141a72308 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -537,6 +537,9 @@ int kunit_run_tests(struct kunit_suite *suite)
 	} while (test.param_value);

+	if (param_stats.total == 0)
+		test_case->status = KUNIT_SKIPPED;
+
 	kunit_print_test_stats(&test, param_stats);

 	kunit_print_ok_not_ok(&test, true, test_case->status,
But tacking onto the hacky fix above, it could look like
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 85265f9a66a1..a2d93b44ef88 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -513,6 +513,13 @@ int kunit_run_tests(struct kunit_suite *suite)
 	}

 	do {
+		if (test_case->generate_params && !test.param_value) {
+			strncpy(test.status_comment, "No test parameters generated",
+				sizeof(test.status_comment));
+			test_case->status = KUNIT_SKIPPED;
+			break;
+		}
+
 		kunit_run_case_catch_errors(suite, test_case, &test);

 		if (test_case->generate_params) {