Parsing KTAP is quite an inconvenience, but most of the time the thing you really want to know is "did anything fail"?
Let's give the user this information without them needing to parse anything.
Because of the use of subshells and namespaces, this needs to be communicated via a file. Just write arbitrary data into the file and treat non-empty content as a signal that something failed.
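To illustrate the constraint (a minimal standalone sketch, not code from the patch; the variable and file names are just illustrative): a variable set inside a subshell is lost when that subshell exits, while appends to a shared temp file survive:

    #!/bin/sh
    # A variable assigned in a subshell does not propagate back:
    failed=""
    ( failed="yes" )            # child shell; assignment is lost
    echo "failed='$failed'"     # prints failed=''

    # A temp file written by the subshell does survive:
    tmp="$(mktemp)"
    ( echo "some-collection: test 3" >> "$tmp" )
    [ -s "$tmp" ] && echo "something failed"
    rm "$tmp"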
In case any user depends on the current behaviour, such as running this from a script with `set -e` and parsing the result for failures afterwards, add a flag they can set to get the old behaviour, namely --no-error-on-fail.
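As a usage sketch (hypothetical invocations, assuming this patch is applied):

    #!/bin/sh
    set -e

    # New default: the runner exits non-zero if any test failed.
    ./run_kselftest.sh -c net || echo "some net selftests failed"

    # Old behaviour: exit 0 on plain test failures, e.g. for scripts
    # that still parse the KTAP output themselves afterwards.
    ./run_kselftest.sh -c net --no-error-on-fail > ktap.log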
Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
Changes in v3:
- Fixed quoting
- Link to v2: https://lore.kernel.org/r/20251014-b4-ksft-error-on-fail-v2-1-b3e2657237b8@g...
Changes in v2:
- Fixed bug in report_failure()
- Made error-on-fail the default
- Link to v1: https://lore.kernel.org/r/20251007-b4-ksft-error-on-fail-v1-1-71bf058f5662@g...
---
 tools/testing/selftests/kselftest/runner.sh | 14 ++++++++++----
 tools/testing/selftests/run_kselftest.sh    | 14 ++++++++++++++
 2 files changed, 24 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kselftest/runner.sh b/tools/testing/selftests/kselftest/runner.sh
index 2c3c58e65a419f5ee8d7dc51a37671237a07fa0b..3a62039fa6217f3453423ff011575d0a1eb8c275 100644
--- a/tools/testing/selftests/kselftest/runner.sh
+++ b/tools/testing/selftests/kselftest/runner.sh
@@ -44,6 +44,12 @@ tap_timeout()
 	fi
 }
 
+report_failure()
+{
+	echo "not ok $*"
+	echo "$*" >> "$kselftest_failures_file"
+}
+
 run_one()
 {
 	DIR="$1"
@@ -105,7 +111,7 @@ run_one()
 	echo "# $TEST_HDR_MSG"
 	if [ ! -e "$TEST" ]; then
 		echo "# Warning: file $TEST is missing!"
-		echo "not ok $test_num $TEST_HDR_MSG"
+		report_failure "$test_num $TEST_HDR_MSG"
 	else
 		if [ -x /usr/bin/stdbuf ]; then
 			stdbuf="/usr/bin/stdbuf --output=L "
@@ -123,7 +129,7 @@ run_one()
 			interpreter=$(head -n 1 "$TEST" | cut -c 3-)
 			cmd="$stdbuf $interpreter ./$BASENAME_TEST"
 		else
-			echo "not ok $test_num $TEST_HDR_MSG"
+			report_failure "$test_num $TEST_HDR_MSG"
 			return
 		fi
 	fi
@@ -137,9 +143,9 @@ run_one()
 			echo "ok $test_num $TEST_HDR_MSG # SKIP"
 		elif [ $rc -eq $timeout_rc ]; then \
 			echo "#"
-			echo "not ok $test_num $TEST_HDR_MSG # TIMEOUT $kselftest_timeout seconds"
+			report_failure "$test_num $TEST_HDR_MSG # TIMEOUT $kselftest_timeout seconds"
 		else
-			echo "not ok $test_num $TEST_HDR_MSG # exit=$rc"
+			report_failure "$test_num $TEST_HDR_MSG # exit=$rc"
 		fi)
 		cd - >/dev/null
 	fi
diff --git a/tools/testing/selftests/run_kselftest.sh b/tools/testing/selftests/run_kselftest.sh
index 0443beacf3621ae36cb12ffd57f696ddef3526b5..d4be97498b32e975c63a1167d3060bdeba674c8c 100755
--- a/tools/testing/selftests/run_kselftest.sh
+++ b/tools/testing/selftests/run_kselftest.sh
@@ -33,6 +33,7 @@ Usage: $0 [OPTIONS]
   -c | --collection COLLECTION   Run all tests from COLLECTION
   -l | --list                    List the available collection:test entries
   -d | --dry-run                 Don't actually run any tests
+  -f | --no-error-on-fail        Don't exit with an error just because tests failed
   -n | --netns                   Run each test in namespace
   -h | --help                    Show this usage info
   -o | --override-timeout        Number of seconds after which we timeout
@@ -44,6 +45,7 @@ COLLECTIONS=""
 TESTS=""
 dryrun=""
 kselftest_override_timeout=""
+ERROR_ON_FAIL=true
 while true; do
 	case "$1" in
 		-s | --summary)
@@ -65,6 +67,9 @@ while true; do
 		-d | --dry-run)
 			dryrun="echo"
 			shift ;;
+		-f | --no-error-on-fail)
+			ERROR_ON_FAIL=false
+			shift ;;
 		-n | --netns)
 			RUN_IN_NETNS=1
 			shift ;;
@@ -105,9 +110,18 @@ if [ -n "$TESTS" ]; then
 	available="$(echo "$valid" | sed -e 's/ /\n/g')"
 fi
 
+kselftest_failures_file="$(mktemp --tmpdir kselftest-failures-XXXXXX)"
+export kselftest_failures_file
+
 collections=$(echo "$available" | cut -d: -f1 | sort | uniq)
 for collection in $collections ; do
 	[ -w /dev/kmsg ] && echo "kselftest: Running tests in $collection" >> /dev/kmsg
 	tests=$(echo "$available" | grep "^$collection:" | cut -d: -f2)
 	($dryrun cd "$collection" && $dryrun run_many $tests)
 done
+
+failures="$(cat "$kselftest_failures_file")"
+rm "$kselftest_failures_file"
+if "$ERROR_ON_FAIL" && [ "$failures" ]; then
+	exit 1
+fi
---
base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
change-id: 20251007-b4-ksft-error-on-fail-0c2cb3246041
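A side note on the exit check added at the end of run_kselftest.sh above, for readers less familiar with the idiom: ERROR_ON_FAIL holds the literal command name true or false, so expanding the variable as a command gives a cheap boolean test. A standalone sketch (lowercase names here are only illustrative):

    #!/bin/sh
    # The variable holds the name of a command, /bin/true or /bin/false.
    error_on_fail=true
    failures="net: test 3 timed out"

    # Running "$error_on_fail" succeeds only when the flag is enabled,
    # and [ "$failures" ] succeeds only when the string is non-empty.
    if "$error_on_fail" && [ "$failures" ]; then
        echo "would exit 1 here"
    fi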
Best regards,
On Tue Oct 14, 2025 at 2:45 PM UTC, Brendan Jackman wrote:
> Parsing KTAP is quite an inconvenience, but most of the time the thing you really want to know is "did anything fail"?
> Let's give the user this information without them needing to parse anything.
> Because of the use of subshells and namespaces, this needs to be communicated via a file. Just write arbitrary data into the file and treat non-empty content as a signal that something failed.
> In case any user depends on the current behaviour, such as running this from a script with `set -e` and parsing the result for failures afterwards, add a flag they can set to get the old behaviour, namely --no-error-on-fail.
> Signed-off-by: Brendan Jackman <jackmanb@google.com>
Hi Shuah,
Can you take a look at this?
Cheers, Brendan
Hi Brendan,
Oct 14, 2025 16:45:32 Brendan Jackman <jackmanb@google.com>:
> (...)
> In case any user depends on the current behaviour, such as running this from a script with `set -e` and parsing the result for failures afterwards, add a flag they can set to get the old behaviour, namely --no-error-on-fail.
IMO this new flag is also unnecessary. The user can just do "|| true" when needed.
(...)
Thomas
On Fri Oct 31, 2025 at 12:48 PM UTC, Thomas Weißschuh wrote:
> Hi Brendan,
> Oct 14, 2025 16:45:32 Brendan Jackman <jackmanb@google.com>:
>> (...)
>> In case any user depends on the current behaviour, such as running this from a script with `set -e` and parsing the result for failures afterwards, add a flag they can set to get the old behaviour, namely --no-error-on-fail.
> IMO this new flag is also unnecessary. The user can just do "|| true" when needed.
`|| true` is not the same thing; if you do that, you completely hide all failures of the script. With --no-error-on-fail you only skip the specific case of tests failing.
I did say somewhere in a previous thread that this distinction (test failure vs test harness failure) is always gonna be a bit sketchy for this script, since it's running on the kernel under test. But that doesn't mean we should give up on it completely.
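To spell the difference out with a sketch (illustrative invocations, assuming the patched runner):

    #!/bin/sh
    set -e

    # "|| true" masks every non-zero exit from the runner, so a broken
    # harness and a genuine test failure both end up looking like success.
    ./run_kselftest.sh -c net || true

    # --no-error-on-fail only skips the new exit-on-test-failure path;
    # other non-zero exits from the runner still trip `set -e`.
    ./run_kselftest.sh -c net --no-error-on-fail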