Allow running each suite or each test case alone, with one kernel boot per suite or test. The motivation for this is to debug "test hermeticity" issues. This new --run_isolated flag would be a good first step toward narrowing down root causes.
Context: sometimes tests pass/fail depending on what ran before them. Memory corruption errors in particular might only cause noticeable issues later on. But you can also have the opposite, where "fixing" one test causes another to start failing.
Usage:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test example
The last one would provide output like:

======== [PASSED] example ========
[PASSED] example_simple_test
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 0 skipped.
Starting KUnit Kernel (2/3)...
============================================================
======== [SKIPPED] example ========
[SKIPPED] example_skip_test # SKIP this test should be skipped
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 1 skipped.
Starting KUnit Kernel (3/3)...
============================================================
======== [SKIPPED] example ========
[SKIPPED] example_mark_skipped_test # SKIP this test should be skipped
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 1 skipped.
See the last patch's description for a bit more detail.
Meta: the first patch is from another series, with just a reworded commit message: https://lore.kernel.org/linux-kselftest/20210805235145.2528054-2-dlatypov@go...
This patch series is based on Shuah's kunit branch: https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/?h...
Changes:
v1 -> v2: rebase onto Shuah's kunit branch, fix missing code in patch 1.
v2 -> v3: fix mypy errors, drop test plan from output, fix pre-existing bug where kunit was not actually tracking test execution time (new patch 3).
v3 -> v4: attempt to filter out non-KUnit dmesg output when getting the list of test names, using this regex (sketched below): ^[^\s.]+\.[^\s.]+$
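For illustration, here's roughly what that filtering does (a standalone sketch; the noise lines are invented, not real kernel output):

    import re

    # Only lines shaped like "<suite>.<test>" (no spaces, exactly one dot) survive.
    lines = [
        'TAP version 14',                    # dummy header printed by the executor
        '[    0.060000] some dmesg noise',   # hypothetical non-KUnit output
        'example.example_simple_test',
        'example.example_skip_test',
    ]
    tests = [l for l in lines if re.match(r'^[^\s.]+\.[^\s.]+$', l)]
    print(tests)  # ['example.example_simple_test', 'example.example_skip_test']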
Daniel Latypov (4):
  kunit: add 'kunit.action' param to allow listing out tests
  kunit: tool: factor exec + parse steps into a function
  kunit: tool: actually track how long it took to run tests
  kunit: tool: support running each suite/test separately
 lib/kunit/executor.c                   |  45 ++++++++-
 tools/testing/kunit/kunit.py           | 134 +++++++++++++++++--------
 tools/testing/kunit/kunit_tool_test.py |  40 ++++++++
 3 files changed, 173 insertions(+), 46 deletions(-)
base-commit: 3b29021ddd10cfb6b2565c623595bd3b02036f33
Context: It's difficult to map a given .kunitconfig => set of enabled tests. Letting kunit.py figure that out would be useful.
This patch:
* is intended to be an implementation detail used only by kunit.py
* adds a kunit.action module param with one valid non-null value, "list"
* for the "list" action, it simply prints out "<suite>.<test>"
* leaves the kunit.py changes to make use of this for another patch.
Note: kunit.filter_glob is respected for this and all future actions.
Hack: we print a TAP header (but no test plan) to allow kunit.py to use the same code to pick up KUnit output that it does for normal tests. Since this is intended to be an implementation detail, it seems fine for now. Maybe in the future we output each test as SKIPPED or the like.
Go with a more generic "action" param, since it seems like we might eventually have more modes besides just running or listing tests, e.g.
* perhaps a benchmark mode that reruns test cases and reports timing
* perhaps a deflake mode that reruns test cases that failed
* perhaps a mode where we randomize test order to try and catch hermeticity bugs like "test a only passes if run after test b"
Tested:
$ ./tools/testing/kunit/kunit.py run --kernel_arg=kunit.action=list --raw_output=kunit
...
TAP version 14
1..1
example.example_simple_test
example.example_skip_test
example.example_mark_skipped_test
reboot: System halted
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
---
 lib/kunit/executor.c | 45 +++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 40 insertions(+), 5 deletions(-)
diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c
index bab3ab940acc..ce1f57a44ab9 100644
--- a/lib/kunit/executor.c
+++ b/lib/kunit/executor.c
@@ -15,9 +15,16 @@ extern struct kunit_suite * const * const __kunit_suites_end[];
 #if IS_BUILTIN(CONFIG_KUNIT)
 
 static char *filter_glob_param;
+static char *action_param;
+
 module_param_named(filter_glob, filter_glob_param, charp, 0);
 MODULE_PARM_DESC(filter_glob,
 		"Filter which KUnit test suites/tests run at boot-time, e.g. list* or list*.*del_test");
+module_param_named(action, action_param, charp, 0);
+MODULE_PARM_DESC(action,
+		 "Changes KUnit executor behavior, valid values are:\n"
+		 "<none>: run the tests like normal\n"
+		 "'list' to list test names instead of running them.\n");
 
 /* glob_match() needs NULL terminated strings, so we need a copy of filter_glob_param. */
 struct kunit_test_filter {
@@ -196,9 +203,35 @@ static void kunit_print_tap_header(struct suite_set *suite_set)
 	pr_info("1..%d\n", num_of_suites);
 }
 
-int kunit_run_all_tests(void)
+static void kunit_exec_run_tests(struct suite_set *suite_set)
 {
 	struct kunit_suite * const * const *suites;
+
+	kunit_print_tap_header(suite_set);
+
+	for (suites = suite_set->start; suites < suite_set->end; suites++)
+		__kunit_test_suites_init(*suites);
+}
+
+static void kunit_exec_list_tests(struct suite_set *suite_set)
+{
+	unsigned int i;
+	struct kunit_suite * const * const *suites;
+	struct kunit_case *test_case;
+
+	/* Hack: print a tap header so kunit.py can find the start of KUnit output. */
+	pr_info("TAP version 14\n");
+
+	for (suites = suite_set->start; suites < suite_set->end; suites++)
+		for (i = 0; (*suites)[i] != NULL; i++) {
+			kunit_suite_for_each_test_case((*suites)[i], test_case) {
+				pr_info("%s.%s\n", (*suites)[i]->name, test_case->name);
+			}
+		}
+}
+
+int kunit_run_all_tests(void)
+{
 	struct suite_set suite_set = {
 		.start = __kunit_suites_start,
 		.end = __kunit_suites_end,
@@ -207,10 +240,12 @@ int kunit_run_all_tests(void)
 	if (filter_glob_param)
 		suite_set = kunit_filter_suites(&suite_set, filter_glob_param);
 
-	kunit_print_tap_header(&suite_set);
-
-	for (suites = suite_set.start; suites < suite_set.end; suites++)
-		__kunit_test_suites_init(*suites);
+	if (!action_param)
+		kunit_exec_run_tests(&suite_set);
+	else if (strcmp(action_param, "list") == 0)
+		kunit_exec_list_tests(&suite_set);
+	else
+		pr_err("kunit executor: unknown action '%s'\n", action_param);
 
 	if (filter_glob_param) { /* a copy was made of each array */
 		kunit_free_suite_set(suite_set);
On Thu, Sep 30, 2021 at 3:20 PM Daniel Latypov <dlatypov@google.com> wrote:
> [...]
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
Currently this code is copy-pasted between the normal "run" subcommand and the "exec" subcommand.
Given we don't have any interest in just executing the tests without giving the user any indication what happened (i.e. parsing the output), make a function that does both these things and can be reused.
This will be useful when we allow more complicated ways of running tests, e.g. invoking the kernel multiple times instead of just once, etc.
We remove input_data from the ParseRequest so the callers don't have to pass in a dummy value for this field. Named tuples are also immutable, so if they did pass in a dummy, exec_tests() would need to make a copy to call parse_tests().
Removing it also makes KunitParseRequest match the other *Request types, as they only contain user arguments/flags, not data.
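As a minimal illustration of the namedtuple behavior referred to above (Req is a stand-in type, not one of the real *Request tuples):

    from collections import namedtuple

    Req = namedtuple('Req', ['raw_output', 'build_dir', 'json'])
    req = Req(raw_output=None, build_dir='.kunit', json=None)

    # req.json = 'out.json'  # raises AttributeError: named tuples are immutable
    req2 = req._replace(json='out.json')  # the only option: build a whole new tuple
    print(req2.json)  # 'out.json'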
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
---
 tools/testing/kunit/kunit.py | 44 ++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 25 deletions(-)
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 66f67af97971..31eec9f6ecc3 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -34,7 +34,7 @@ KunitExecRequest = namedtuple('KunitExecRequest',
 			      ['timeout', 'build_dir', 'alltests',
 			       'filter_glob', 'kernel_args'])
 KunitParseRequest = namedtuple('KunitParseRequest',
-			       ['raw_output', 'input_data', 'build_dir', 'json'])
+			       ['raw_output', 'build_dir', 'json'])
 KunitRequest = namedtuple('KunitRequest', ['raw_output','timeout', 'jobs',
 					   'build_dir', 'alltests', 'filter_glob',
 					   'kernel_args', 'json', 'make_options'])
@@ -91,23 +91,25 @@ def build_tests(linux: kunit_kernel.LinuxSourceTree,
 			   'built kernel successfully',
 			   build_end - build_start)
 
-def exec_tests(linux: kunit_kernel.LinuxSourceTree,
-	       request: KunitExecRequest) -> KunitResult:
+def exec_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest,
+	       parse_request: KunitParseRequest) -> KunitResult:
 	kunit_parser.print_with_timestamp('Starting KUnit Kernel ...')
 	test_start = time.time()
-	result = linux.run_kernel(
+	run_result = linux.run_kernel(
 		args=request.kernel_args,
 		timeout=None if request.alltests else request.timeout,
-		filter_glob=request.filter_glob,
+		filter_glob=request.filter_glob,
 		build_dir=request.build_dir)
 
 	test_end = time.time()
+	exec_time = test_end - test_start
 
-	return KunitResult(KunitStatus.SUCCESS,
-			   result,
-			   test_end - test_start)
+	# Named tuples are immutable, so we rebuild them here manually
+	result = parse_tests(parse_request, run_result)
+
+	return KunitResult(status=result.status, result=result.result, elapsed_time=exec_time)
 
-def parse_tests(request: KunitParseRequest) -> KunitResult:
+def parse_tests(request: KunitParseRequest, input_data: Iterable[str]) -> KunitResult:
 	parse_start = time.time()
 
 	test_result = kunit_parser.TestResult(kunit_parser.TestStatus.SUCCESS,
@@ -115,7 +117,7 @@ def parse_tests(request: KunitParseRequest) -> KunitResult:
 				      'Tests not Parsed.')
 
 	if request.raw_output:
-		output: Iterable[str] = request.input_data
+		output: Iterable[str] = input_data
 		if request.raw_output == 'all':
 			pass
 		elif request.raw_output == 'kunit':
@@ -126,7 +128,7 @@ def parse_tests(request: KunitParseRequest) -> KunitResult:
 			print(line.rstrip())
 
 	else:
-		test_result = kunit_parser.parse_run_tests(request.input_data)
+		test_result = kunit_parser.parse_run_tests(input_data)
 	parse_end = time.time()
 
 	if request.json:
@@ -165,15 +167,11 @@ def run_tests(linux: kunit_kernel.LinuxSourceTree,
 	exec_request = KunitExecRequest(request.timeout, request.build_dir,
 					request.alltests, request.filter_glob,
 					request.kernel_args)
-	exec_result = exec_tests(linux, exec_request)
-	if exec_result.status != KunitStatus.SUCCESS:
-		return exec_result
-
 	parse_request = KunitParseRequest(request.raw_output,
-					  exec_result.result,
 					  request.build_dir,
 					  request.json)
-	parse_result = parse_tests(parse_request)
+
+	exec_result = exec_tests(linux, exec_request, parse_request)
 
 	run_end = time.time()
 
@@ -184,7 +182,7 @@ def run_tests(linux: kunit_kernel.LinuxSourceTree,
 				config_result.elapsed_time,
 				build_result.elapsed_time,
 				exec_result.elapsed_time))
-	return parse_result
+	return exec_result
 
 def add_common_opts(parser) -> None:
 	parser.add_argument('--build_dir',
@@ -381,15 +379,12 @@ def main(argv, linux=None):
 						  cli_args.alltests,
 						  cli_args.filter_glob,
 						  cli_args.kernel_args)
-		exec_result = exec_tests(linux, exec_request)
 		parse_request = KunitParseRequest(cli_args.raw_output,
-						  exec_result.result,
 						  cli_args.build_dir,
 						  cli_args.json)
-		result = parse_tests(parse_request)
+		result = exec_tests(linux, exec_request, parse_request)
 		kunit_parser.print_with_timestamp((
-			'Elapsed time: %.3fs\n') % (
-				exec_result.elapsed_time))
+			'Elapsed time: %.3fs\n') % (result.elapsed_time))
 		if result.status != KunitStatus.SUCCESS:
 			sys.exit(1)
 	elif cli_args.subcommand == 'parse':
@@ -399,10 +394,9 @@ def main(argv, linux=None):
 		with open(cli_args.file, 'r') as f:
 			kunit_output = f.read().splitlines()
 		request = KunitParseRequest(cli_args.raw_output,
-					    kunit_output,
 					    None,
 					    cli_args.json)
-		result = parse_tests(request)
+		result = parse_tests(request, kunit_output)
 		if result.status != KunitStatus.SUCCESS:
 			sys.exit(1)
 	else:
On Thu, Sep 30, 2021 at 3:20 PM Daniel Latypov <dlatypov@google.com> wrote:
> [...]
Acked-by: Brendan Higgins <brendanhiggins@google.com>
This is a long-standing bug in the kunit tool. Since these files were added, run_kernel() has always yielded lines.

That means the call to run_kernel() returns before the kernel finishes executing tests, potentially before a single line of output is even produced.
So code like this:

    time_start = time.time()
    result = linux.run_kernel(...)
    time_end = time.time()

would only measure the time taken for python to give back the generator object.
From a caller's perspective, the only way to know the kernel has exited is for us to consume all the output from the `result` generator object. Alternatively, we could change run_kernel() to try and do its own bookkeeping and return the total time, but that doesn't seem worth it.
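To illustrate, a self-contained sketch of the generator behavior (fake_run_kernel is a stand-in, not the real linux.run_kernel()):

    import time

    def fake_run_kernel():
        # Like run_kernel(), this is a generator: calling it does no work yet.
        time.sleep(2)  # stand-in for the kernel booting and running tests
        yield 'TAP version 14'

    start = time.time()
    result = fake_run_kernel()                              # returns ~instantly
    print('after call:    %.3fs' % (time.time() - start))  # ~0.000s
    lines = list(result)                 # consuming the output does the work
    print('after consume: %.3fs' % (time.time() - start))  # ~2.000s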
This change makes us record `time_end` after we're done parsing all the output (which should mean we've consumed all of it, or errored out). That means we're including the parsing time as well, but that should be quite small, and it's better than claiming it took 0s to run tests.
Let's use this as an example:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit example
Before: Elapsed time: 7.684s total, 0.001s configuring, 4.692s building, 0.000s running
After: Elapsed time: 6.283s total, 0.001s configuring, 3.202s building, 3.079s running
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
---
 tools/testing/kunit/kunit.py | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 31eec9f6ecc3..5e717594df5b 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -101,12 +101,14 @@ def exec_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest,
 		filter_glob=request.filter_glob,
 		build_dir=request.build_dir)
 
+	result = parse_tests(parse_request, run_result)
+
+	# run_kernel() doesn't block on the kernel exiting.
+	# That only happens after we get the last line of output from `run_result`.
+	# So exec_time here actually contains parsing + execution time, which is fine.
 	test_end = time.time()
 	exec_time = test_end - test_start
 
-	# Named tuples are immutable, so we rebuild them here manually
-	result = parse_tests(parse_request, run_result)
-
 	return KunitResult(status=result.status, result=result.result, elapsed_time=exec_time)
 
 def parse_tests(request: KunitParseRequest, input_data: Iterable[str]) -> KunitResult:
On Thu, Sep 30, 2021 at 3:20 PM Daniel Latypov <dlatypov@google.com> wrote:
> [...]
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
The new --run_isolated flag makes the tool boot the kernel once per suite or test, preventing leftover state from one suite or test from impacting the next. This can be useful as a starting point for debugging test hermeticity issues.
Note: it takes a lot longer, so people should not use it normally.
Consider the following very simplified example:
bool disable_something_for_test = false;
void function_being_tested() {
	...
	if (disable_something_for_test) return;
	...
}

static void test_before(struct kunit *test)
{
	disable_something_for_test = true;
	function_being_tested();
	/* oops, we forgot to reset it back to false */
}

static void test_after(struct kunit *test)
{
	/* oops, now "fixing" test_before can cause test_after to fail! */
	function_being_tested();
}
Presented like this, the issues are obvious, but it gets a lot more complicated to track down as the amount of test setup and helper functions increases.
Another use case is memory corruption. It might not be surfaced as a failure/crash in the test case or suite that caused it. I've noticed in kunit's own unit tests, the 3rd suite after might be the one to finally crash after an out-of-bounds write, for example.
Example usage:
Per suite:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite
...
Starting KUnit Kernel (1/7)...
============================================================
======== [PASSED] kunit_executor_test ========
....
Testing complete. 5 tests run. 0 failed. 0 crashed. 0 skipped.
Starting KUnit Kernel (2/7)...
============================================================
======== [PASSED] kunit-try-catch-test ========
...
Per test:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test
Starting KUnit Kernel (1/23)...
============================================================
======== [PASSED] kunit_executor_test ========
[PASSED] parse_filter_test
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 0 skipped.
Starting KUnit Kernel (2/23)...
============================================================
======== [PASSED] kunit_executor_test ========
[PASSED] filter_subsuite_test
...
It works with filters as well:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite example
...
Starting KUnit Kernel (1/1)...
============================================================
======== [PASSED] example ========
...
It also handles test filters (the glob handling is sketched below): '*.*skip*' runs these 3 tests:
kunit_status.kunit_status_mark_skipped_test
example.example_skip_test
example.example_mark_skipped_test
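For reference, this is roughly how the tool composes the per-boot globs in that case (a sketch of the logic in exec_tests() from this patch, with made-up suite names):

    # User invoked: kunit.py run --run_isolated=suite '*.*skip*'
    filter_glob = '*.*skip*'
    # Suites discovered via kunit.action=list (already narrowed by the glob):
    suites = ['kunit_status', 'example']

    # Keep the test-part of the user's glob and re-apply it to each suite,
    # yielding one kernel boot per suite.
    test_glob = filter_glob.split('.', maxsplit=2)[1]   # '*skip*'
    filter_globs = [s + '.' + test_glob for s in suites]
    print(filter_globs)  # ['kunit_status.*skip*', 'example.*skip*']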
Signed-off-by: Daniel Latypov <dlatypov@google.com>
Reviewed-by: David Gow <davidgow@google.com>
---
 tools/testing/kunit/kunit.py           | 100 +++++++++++++++++++------
 tools/testing/kunit/kunit_tool_test.py |  40 ++++++++++
 2 files changed, 118 insertions(+), 22 deletions(-)
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 5e717594df5b..8c7e8c7b2c97 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -8,15 +8,16 @@
 # Author: Brendan Higgins <brendanhiggins@google.com>
 
 import argparse
-import sys
 import os
+import re
+import sys
 import time
 
 assert sys.version_info >= (3, 7), "Python version is too old"
 
 from collections import namedtuple
 from enum import Enum, auto
-from typing import Iterable
+from typing import Iterable, List
 
 import kunit_config
 import kunit_json
@@ -31,13 +32,13 @@ KunitBuildRequest = namedtuple('KunitBuildRequest',
 			       ['jobs', 'build_dir', 'alltests',
 				'make_options'])
 KunitExecRequest = namedtuple('KunitExecRequest',
-			      ['timeout', 'build_dir', 'alltests',
-			       'filter_glob', 'kernel_args'])
+			      ['timeout', 'build_dir', 'alltests',
+			       'filter_glob', 'kernel_args', 'run_isolated'])
 KunitParseRequest = namedtuple('KunitParseRequest',
 			       ['raw_output', 'build_dir', 'json'])
 KunitRequest = namedtuple('KunitRequest', ['raw_output','timeout', 'jobs',
 					   'build_dir', 'alltests', 'filter_glob',
-					   'kernel_args', 'json', 'make_options'])
+					   'kernel_args', 'run_isolated', 'json', 'make_options'])
 
 KernelDirectoryPath = sys.argv[0].split('tools/testing/kunit/')[0]
 
@@ -91,23 +92,70 @@ def build_tests(linux: kunit_kernel.LinuxSourceTree,
 			   'built kernel successfully',
 			   build_end - build_start)
 
+def _list_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest) -> List[str]:
+	args = ['kunit.action=list']
+	if request.kernel_args:
+		args.extend(request.kernel_args)
+
+	output = linux.run_kernel(args=args,
+			   timeout=None if request.alltests else request.timeout,
+			   filter_glob=request.filter_glob,
+			   build_dir=request.build_dir)
+	lines = kunit_parser.extract_tap_lines(output)
+	# Hack! Drop the dummy TAP version header that the executor prints out.
+	lines.pop()
+
+	# Filter out any extraneous non-test output that might have gotten mixed in.
+	return [l for l in lines if re.match('^[^\s.]+\.[^\s.]+$', l)]
+
+def _suites_from_test_list(tests: List[str]) -> List[str]:
+	"""Extracts all the suites from an ordered list of tests."""
+	suites = []  # type: List[str]
+	for t in tests:
+		parts = t.split('.', maxsplit=2)
+		if len(parts) != 2:
+			raise ValueError(f'internal KUnit error, test name should be of the form "<suite>.<test>", got "{t}"')
+		suite, case = parts
+		if not suites or suites[-1] != suite:
+			suites.append(suite)
+	return suites
+
+
+
 def exec_tests(linux: kunit_kernel.LinuxSourceTree, request: KunitExecRequest,
 	       parse_request: KunitParseRequest) -> KunitResult:
-	kunit_parser.print_with_timestamp('Starting KUnit Kernel ...')
-	test_start = time.time()
-	run_result = linux.run_kernel(
-		args=request.kernel_args,
-		timeout=None if request.alltests else request.timeout,
-		filter_glob=request.filter_glob,
-		build_dir=request.build_dir)
-
-	result = parse_tests(parse_request, run_result)
-
-	# run_kernel() doesn't block on the kernel exiting.
-	# That only happens after we get the last line of output from `run_result`.
-	# So exec_time here actually contains parsing + execution time, which is fine.
-	test_end = time.time()
-	exec_time = test_end - test_start
+	filter_globs = [request.filter_glob]
+	if request.run_isolated:
+		tests = _list_tests(linux, request)
+		if request.run_isolated == 'test':
+			filter_globs = tests
+		if request.run_isolated == 'suite':
+			filter_globs = _suites_from_test_list(tests)
+			# Apply the test-part of the user's glob, if present.
+			if '.' in request.filter_glob:
+				test_glob = request.filter_glob.split('.', maxsplit=2)[1]
+				filter_globs = [g + '.'+ test_glob for g in filter_globs]
+
+	overall_status = kunit_parser.TestStatus.SUCCESS
+	exec_time = 0.0
+	for i, filter_glob in enumerate(filter_globs):
+		kunit_parser.print_with_timestamp('Starting KUnit Kernel ({}/{})...'.format(i+1, len(filter_globs)))
+
+		test_start = time.time()
+		run_result = linux.run_kernel(
+			args=request.kernel_args,
+			timeout=None if request.alltests else request.timeout,
+			filter_glob=filter_glob,
+			build_dir=request.build_dir)
+
+		result = parse_tests(parse_request, run_result)
+		# run_kernel() doesn't block on the kernel exiting.
+		# That only happens after we get the last line of output from `run_result`.
+		# So exec_time here actually contains parsing + execution time, which is fine.
+		test_end = time.time()
+		exec_time += test_end - test_start
+
+		overall_status = kunit_parser.max_status(overall_status, result.status)
 
 	return KunitResult(status=result.status, result=result.result, elapsed_time=exec_time)
 
@@ -168,7 +216,7 @@ def run_tests(linux: kunit_kernel.LinuxSourceTree,
 
 	exec_request = KunitExecRequest(request.timeout, request.build_dir,
 					request.alltests, request.filter_glob,
-					request.kernel_args)
+					request.kernel_args, request.run_isolated)
 	parse_request = KunitParseRequest(request.raw_output,
 					  request.build_dir,
 					  request.json)
@@ -252,6 +300,12 @@ def add_exec_opts(parser) -> None:
 	parser.add_argument('--kernel_args',
 			    help='Kernel command-line parameters. Maybe be repeated',
 			    action='append')
+	parser.add_argument('--run_isolated', help='If set, boot the kernel for each '
+			    'individual suite/test. This can be useful for debugging '
+			    'a non-hermetic test, one that might pass/fail based on '
+			    'what ran before it.',
+			    type=str,
+			    choices=['suite', 'test']),
 
 def add_parse_opts(parser) -> None:
 	parser.add_argument('--raw_output', help='If set don\'t format output from kernel. '
@@ -325,6 +379,7 @@ def main(argv, linux=None):
 				       cli_args.alltests,
 				       cli_args.filter_glob,
 				       cli_args.kernel_args,
+				       cli_args.run_isolated,
 				       cli_args.json,
 				       cli_args.make_options)
 		result = run_tests(linux, request)
@@ -380,7 +435,8 @@ def main(argv, linux=None):
 						  cli_args.build_dir,
 						  cli_args.alltests,
 						  cli_args.filter_glob,
-						  cli_args.kernel_args)
+						  cli_args.kernel_args,
+						  cli_args.run_isolated)
 		parse_request = KunitParseRequest(cli_args.raw_output,
 						  cli_args.build_dir,
 						  cli_args.json)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 619c4554cbff..3251cef25b76 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -477,6 +477,46 @@ class KUnitMainTest(unittest.TestCase):
 			args=['a=1','b=2'], build_dir='.kunit', filter_glob='', timeout=300)
 		self.print_mock.assert_any_call(StrContains('Testing complete.'))
 
+	def test_list_tests(self):
+		want = ['suite.test1', 'suite.test2', 'suite2.test1']
+		self.linux_source_mock.run_kernel.return_value = ['TAP version 14', 'init: random output'] + want
+
+		got = kunit._list_tests(self.linux_source_mock,
+				     kunit.KunitExecRequest(300, '.kunit', False, 'suite*', None, 'suite'))
+
+		self.assertEqual(got, want)
+		# Should respect the user's filter glob when listing tests.
+		self.linux_source_mock.run_kernel.assert_called_once_with(
+			args=['kunit.action=list'], build_dir='.kunit', filter_glob='suite*', timeout=300)
+
+
+	@mock.patch.object(kunit, '_list_tests')
+	def test_run_isolated_by_suite(self, mock_tests):
+		mock_tests.return_value = ['suite.test1', 'suite.test2', 'suite2.test1']
+		kunit.main(['exec', '--run_isolated=suite', 'suite*.test*'], self.linux_source_mock)
+
+		# Should respect the user's filter glob when listing tests.
+		mock_tests.assert_called_once_with(mock.ANY,
+				     kunit.KunitExecRequest(300, '.kunit', False, 'suite*.test*', None, 'suite'))
+		self.linux_source_mock.run_kernel.assert_has_calls([
+			mock.call(args=None, build_dir='.kunit', filter_glob='suite.test*', timeout=300),
+			mock.call(args=None, build_dir='.kunit', filter_glob='suite2.test*', timeout=300),
+		])
+
+	@mock.patch.object(kunit, '_list_tests')
+	def test_run_isolated_by_test(self, mock_tests):
+		mock_tests.return_value = ['suite.test1', 'suite.test2', 'suite2.test1']
+		kunit.main(['exec', '--run_isolated=test', 'suite*'], self.linux_source_mock)
+
+		# Should respect the user's filter glob when listing tests.
+		mock_tests.assert_called_once_with(mock.ANY,
+				     kunit.KunitExecRequest(300, '.kunit', False, 'suite*', None, 'test'))
+		self.linux_source_mock.run_kernel.assert_has_calls([
+			mock.call(args=None, build_dir='.kunit', filter_glob='suite.test1', timeout=300),
+			mock.call(args=None, build_dir='.kunit', filter_glob='suite.test2', timeout=300),
+			mock.call(args=None, build_dir='.kunit', filter_glob='suite2.test1', timeout=300),
+		])
+
 
 if __name__ == '__main__':
 	unittest.main()
On Fri, Oct 1, 2021 at 6:21 AM Daniel Latypov <dlatypov@google.com> wrote:
> [...]
Thanks -- this version works for me.
I think the requirement that test and suite names contain neither full stops nor spaces is a reasonable one. There aren't any current tests which would break it, as far as I know.
Cheers,
-- David
On Thu, Sep 30, 2021 at 4:05 PM David Gow <davidgow@google.com> wrote:
> On Fri, Oct 1, 2021 at 6:21 AM Daniel Latypov <dlatypov@google.com> wrote:
> > [...]
> Thanks -- this version works for me.
>
> I think the requirement that test and suite names contain neither full stops nor spaces is a reasonable one. There aren't any current tests which would break it, as far as I know.
I agree. Is this currently codified in the test naming conventions document?
On Wed, Oct 6, 2021 at 10:23 AM 'Brendan Higgins' via KUnit Development <kunit-dev@googlegroups.com> wrote:
> On Thu, Sep 30, 2021 at 4:05 PM David Gow <davidgow@google.com> wrote:
> > On Fri, Oct 1, 2021 at 6:21 AM Daniel Latypov <dlatypov@google.com> wrote:
> > > [...]
> > Thanks -- this version works for me.
> >
> > I think the requirement that test and suite names contain neither full stops nor spaces is a reasonable one. There aren't any current tests which would break it, as far as I know.
> I agree. Is this currently codified in the test naming conventions document?
https://www.kernel.org/doc/html/latest/dev-tools/kunit/style.html
We have:

  Test suites are named after the subsystem they are part of. If a subsystem contains several suites, the specific area under test should be appended to the subsystem name, separated by an underscore.

  As tests are C functions, they should be named and written in accordance with the kernel coding style.
So nothing is explicitly stated for test suites. Just a preference for underscores, sorta implying you shouldn't use spaces or other delimiters like periods.
On Thu, Sep 30, 2021 at 3:21 PM Daniel Latypov <dlatypov@google.com> wrote:
> [...]
Reviewed-by: Brendan Higgins <brendanhiggins@google.com>