On Tue, May 3, 2022 at 2:49 PM Daniel Latypov <dlatypov@google.com> wrote:
On Tue, May 3, 2022 at 1:37 AM David Gow <davidgow@google.com> wrote:
On Tue, May 3, 2022 at 6:37 AM Daniel Latypov <dlatypov@google.com> wrote:
On Fri, Apr 29, 2022 at 11:56 PM David Gow <davidgow@google.com> wrote:
It's often desirable (particularly in test automation) to run as many tests as possible. This config enables all the tests which work as builtins under UML at present, increasing the total tests run from 156 to 342 (not counting 36 'skipped' tests).
Just to clear up potential confusion for others, I'll note that these aren't counts of test cases. This is from kunit.py's output, so it counts each parameter from parameterized tests as a "subtest."
Copying my command from https://kunit-review.googlesource.com/c/linux/+/5249, one can use this to count the # of test cases:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=... --raw_output=kunit --kernel_args=kunit.action=list | egrep '^[a-z0-9_-]+.[a-z0-9_-]+'
I see this enabling a total of 260 test _cases_ (including skipped).
The default (basically just CONFIG_KUNIT_ALL_TESTS=y) gives 192 (including skipped).
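(For anyone unfamiliar with it: kunit.action=list makes the kernel list each test case's name instead of running it, roughly one suite.test_case per line, e.g.

example.example_simple_test
example.example_skip_test
ext4_inode_test.inode_test_xtimestamp_decoding

so the egrep above is just counting those lines. The names here are purely illustrative, not the full listing.)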
Yup, that's definitely the case. I guess I was still thinking in KTAP terms, where all subtests are effectively tests.
That being said, I do think the total (sub)test count (including parameters, etc.) is the more visible number: not only does kunit_tool print it, but it's also what we've generally been using as our go-to "number of tests."
Yes, I agree it's the number to use here. If there's a v2 of this patch, we could also reword the commit message a bit to make it more explicit. If not, this seems fine as-is. The only issue I saw was a minor typo.
Agreed. Change is fine, but if you do a v2, please fix the wording.
Re the go-to for "number of tests": reminder, we've also been using this to count the # of tests :P
$ git grep 'KUNIT_CASE' | grep -Ev '^Documentation/|get_metrics.sh|include/kunit/test.h' | wc -l
This avoids us having to figure out how to build all the tests, sidesteps the problem that subtests can be dynamically generated via parameterized testing, etc.
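(To illustrate why that works: each statically defined test case is one KUNIT_CASE() entry in a suite's case array, along the lines of

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_simple_test),
	KUNIT_CASE(example_skip_test),
	{}
};

and a parameterized test is likewise a single KUNIT_CASE_PARAM() entry no matter how many parameters it generates at runtime, so counting macro invocations gives a stable per-case number. The suite and test names above are just illustrative.)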
They can be run with:
./tools/testing/kunit/kunit.py run --kunitconfig=./tools/testing/kunit/configs/all_tests_uml.config
This sits in between the KUNIT_ALL_TESTS config (which enables only tests whose dependencies are already enabled) and kunit_tool's --alltests option, which tries to use allyesconfig, takes a very long time to build, and breaks very often.
Signed-off-by: David Gow davidgow@google.com
Tested-by: Daniel Latypov dlatypov@google.com
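(An aside for anyone reading along who hasn't opened the file itself: the config is just a manually maintained list of CONFIG_... options composed with CONFIG_KUNIT_ALL_TESTS=y, so its entries follow roughly the pattern below. This is an illustrative fragment only, not the actual 37-line file:

CONFIG_KUNIT=y
CONFIG_KUNIT_ALL_TESTS=y

# Enable subsystems so that KUNIT_ALL_TESTS picks up their tests, e.g.:
CONFIG_SECURITY=y
CONFIG_EXT4_FS=y

kunit_tool uses the file passed via --kunitconfig as the .kunitconfig for the build, as in the command above.)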
Looks good to me, some small comments below.
 .../kunit/configs/all_tests_uml.config | 37 +++++++++++++++++++
 1 file changed, 37 insertions(+)
 create mode 100644 tools/testing/kunit/configs/all_tests_uml.config
diff --git a/tools/testing/kunit/configs/all_tests_uml.config b/tools/testing/kunit/configs/all_tests_uml.config
new file mode 100644
index 000000000000..bdee36bef4a3
--- /dev/null
+++ b/tools/testing/kunit/configs/all_tests_uml.config
@@ -0,0 +1,37 @@
+# This config enables as many tests as possible under UML.
+# It is intended for use in continuous integration systems and similar for
+# automated testing of as much as possible.
+# The config is manually maintained, though it uses KUNIT_ALL_TESTS=y to enable
+# any tests whose dependencies are already satisfied. Please feel free to add
+# more options if they any new tests.
missing: "enable"? "if they enable any new tests"
Whoops, I was switching from "there are any" to "if they enable any" and clearly got distracted halfway through. :-)
Hmm, should we state a preference for how heavy (time or resource-wise) tests should be? Because the comment says it's meant for automation, but I can imagine humans wanting to run it. (I'm completely fine with us not stating one, just throwing the idea out there for discussion)
I think we're probably okay with being a little bit lenient on test times. The time_test_cases.time64_to_tm_test_date_range and similar tests already take quite a long time in some situations (older hardware, running under some emulators), but are generally pretty close to instant under most UML setups. And not building with allyesconfig already saves us many, many minutes anyway.
Agreed on all points. I personally think it's reasonable to leave things as-is.
We don't have any problematic tests that work on UML yet.