There are several test cases in the vm directory that are still using
exit 0 when they need to be skipped. Use the kselftest framework SKIP
code instead so it can help us distinguish the return status.
Criterion to filter out what should be fixed in the vm directory:
grep -r "exit 0" -B1 | grep -i skip
This change might cause some false positives if people are running
these test scripts directly and only checking their return codes,
which will change from 0 to 4. However, I think the impact should be
small as most of our scripts here are already using this skip code,
and there will be no such issue when running them with the kselftest
framework.
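For illustration, a minimal sketch (not part of this patch) of how a
caller invoking one of these scripts directly could distinguish the
kselftest SKIP code from a real pass, using charge_reserved_hugetlb.sh
as an example:

  #!/bin/sh
  # Run a selftest script and map its exit status; kselftest SKIP is 4.
  ./charge_reserved_hugetlb.sh
  status=$?
  case $status in
  0) echo "PASS" ;;
  4) echo "SKIP (e.g. not running as root)" ;;
  *) echo "FAIL (exit $status)" ;;
  esac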
Signed-off-by: Po-Hsu Lin <po-hsu.lin(a)canonical.com>
---
tools/testing/selftests/vm/charge_reserved_hugetlb.sh | 5 ++++-
tools/testing/selftests/vm/hugetlb_reparenting_test.sh | 5 ++++-
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
index 18d3368..fe8fcfb 100644
--- a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
@@ -1,11 +1,14 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
set -e
if [[ $(id -u) -ne 0 ]]; then
echo "This test must be run as root. Skipping..."
- exit 0
+ exit $ksft_skip
fi
fault_limit_file=limit_in_bytes
diff --git a/tools/testing/selftests/vm/hugetlb_reparenting_test.sh b/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
index d11d1fe..4a9a3af 100644
--- a/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
+++ b/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
@@ -1,11 +1,14 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
set -e
if [[ $(id -u) -ne 0 ]]; then
echo "This test must be run as root. Skipping..."
- exit 0
+ exit $ksft_skip
fi
usage_file=usage_in_bytes
--
2.7.4
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
The selftest for ftrace checks some features by checking if the README has
text that states the feature is supported by that kernel. Unfortunately,
this check gives false positives, because the comparison may not be
performed correctly if there are spaces in the string to check. This is
due to the compare between the required variable and the string with the
":README" suffix stripped, because neither has quotes around it.
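A minimal sketch of the failure mode, using a made-up requirement
string that contains spaces:

  #!/bin/sh
  i='event trigger "hist":README'   # hypothetical requirement
  r=${i%:README}
  # Unquoted, the operands expand to multiple words, so [ fails with
  # "too many arguments" and the README check below is never reached:
  if [ $r != $i ]; then
    echo "would grep README for \"$r\""
  fi
  # Quoted, the comparison behaves as intended:
  if [ "$r" != "$i" ]; then
    echo "grepping README for \"$r\""
  fi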
Link: https://lkml.kernel.org/r/20210820204742.087177341@goodmis.org
Cc: "Tzvetomir Stoyanov" <tz.stoyanov(a)gmail.com>
Cc: Tom Zanussi <zanussi(a)kernel.org>
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Fixes: 1b8eec510ba64 ("selftests/ftrace: Support ":README" suffix for requires")
Acked-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
tools/testing/selftests/ftrace/test.d/functions | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index f68d336b961b..000fd05e84b1 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -137,7 +137,7 @@ check_requires() { # Check required files and tracers
echo "Required tracer $t is not configured."
exit_unsupported
fi
- elif [ $r != $i ]; then
+ elif [ "$r" != "$i" ]; then
if ! grep -Fq "$r" README ; then
echo "Required feature pattern \"$r\" is not in README."
exit_unsupported
--
2.30.2
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
Add a function to remove all dynamic events from the tracing directory. It
requires a loop as some of the dynamic events may depend on others being
removed first. Also add a safety check that prevents it from looping
infinitely due to a bug where an event never gets removed.
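For reference, a small sketch (not part of the patch) of how the
removal command is derived from a dynamic_events entry; the probe name
is made up:

  #!/bin/sh
  # A dynamic_events entry starts with a type character ('p', 'r', 's', ...);
  # writing the same name prefixed with '-' removes the event.
  line='p:kprobes/myprobe vfs_read'
  del=`echo $line | sed -e 's/^.\([^ ]*\).*/-\1/'`
  echo "$del"    # prints '-:kprobes/myprobe'
  # In the tracefs directory this would then be:
  #   echo "$del" >> dynamic_events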
Link: https://lkml.kernel.org/r/20210819152825.348941368@goodmis.org
Cc: "Tzvetomir Stoyanov" <tz.stoyanov(a)gmail.com>
Cc: Tom Zanussi <zanussi(a)kernel.org>
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Acked-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
.../testing/selftests/ftrace/test.d/functions | 22 +++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index a6fac927ee82..f68d336b961b 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -83,6 +83,27 @@ clear_synthetic_events() { # reset all current synthetic events
done
}
+clear_dynamic_events() { # reset all current dynamic events
+ again=1
+ stop=1
+ # loop multiple times as some events require others to be removed first
+ while [ $again -eq 1 ]; do
+ stop=$((stop+1))
+ # Prevent infinite loops
+ if [ $stop -gt 10 ]; then
+ break;
+ fi
+ again=2
+ grep -v '^#' dynamic_events|
+ while read line; do
+ del=`echo $line | sed -e 's/^.\([^ ]*\).*/-\1/'`
+ if ! echo "$del" >> dynamic_events; then
+ again=1
+ fi
+ done
+ done
+}
+
initialize_ftrace() { # Reset ftrace to initial-state
# As the initial state, ftrace will be set to nop tracer,
# no events, no triggers, no filters, no function filters,
@@ -93,6 +114,7 @@ initialize_ftrace() { # Reset ftrace to initial-state
reset_events_filter
reset_ftrace_filter
disable_events
+ clear_dynamic_events
[ -f set_event_pid ] && echo > set_event_pid
[ -f set_ftrace_pid ] && echo > set_ftrace_pid
[ -f set_ftrace_notrace ] && echo > set_ftrace_notrace
--
2.30.2
Add basic tests to cover some regressions that we had.
It's hard to test floppy because some tests require the
presence or absence of a diskette in the drive. To simulate
test conditions and automate the testing I added
"run_*.sh" wrapper scripts that run the tests in QEMU.
The first patch just improves the check for reverted commits
in a commit message. The second patch is required to
generate the minimal initrd used in the next commits. The
rest of the commits are basic floppy tests.
Please comment on the approach and the selftests integration,
and suggest tests that you would like to add.
I thought about adding the possibility to remove/insert
diskettes inside a test. This is possible if we give
the guest access to the QEMU monitor (eject/change commands).
But I didn't find a better way to do it than to map the
monitor to an external port:
-monitor tcp:<ip>:<port>,server,nowait
and access this IP from the guest.
Maybe it's also possible to do this with virtserialport.
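For example, a QEMU invocation along these lines (a sketch with
placeholder kernel, initrd, and image paths, not taken from this
series) would expose the monitor to the guest:

  #!/bin/sh
  # Boot the test kernel with a floppy image and expose the QEMU
  # monitor over TCP; with QEMU user networking the guest can reach
  # the host at 10.0.2.2.
  qemu-system-x86_64 -nographic \
    -kernel bzImage -initrd initrd.cpio \
    -drive file=floppy.img,if=floppy,format=raw \
    -monitor tcp:127.0.0.1:4444,server,nowait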
Denis Efremov (5):
checkpatch: improve handling of revert commits
gen_initramfs.sh: use absolute path for gen_init_cpio
selftests: floppy: add basic tests for opening an empty device
selftests: floppy: add basic tests for a readonly disk
selftests: floppy: add basic rdwr tests
MAINTAINERS | 1 +
scripts/checkpatch.pl | 12 +--
tools/testing/selftests/floppy/.gitignore | 8 ++
tools/testing/selftests/floppy/Makefile | 10 ++
tools/testing/selftests/floppy/config | 1 +
tools/testing/selftests/floppy/empty.c | 58 ++++++++++++
tools/testing/selftests/floppy/init.c | 43 +++++++++
tools/testing/selftests/floppy/lib.sh | 67 +++++++++++++
tools/testing/selftests/floppy/rdonly.c | 99 ++++++++++++++++++++
tools/testing/selftests/floppy/rdwr.c | 67 +++++++++++++
tools/testing/selftests/floppy/run_empty.sh | 16 ++++
tools/testing/selftests/floppy/run_rdonly.sh | 22 +++++
tools/testing/selftests/floppy/run_rdwr.sh | 22 +++++
usr/gen_initramfs.sh | 2 +-
14 files changed, 421 insertions(+), 7 deletions(-)
create mode 100644 tools/testing/selftests/floppy/.gitignore
create mode 100644 tools/testing/selftests/floppy/Makefile
create mode 100644 tools/testing/selftests/floppy/config
create mode 100644 tools/testing/selftests/floppy/empty.c
create mode 100644 tools/testing/selftests/floppy/init.c
create mode 100644 tools/testing/selftests/floppy/lib.sh
create mode 100644 tools/testing/selftests/floppy/rdonly.c
create mode 100644 tools/testing/selftests/floppy/rdwr.c
create mode 100755 tools/testing/selftests/floppy/run_empty.sh
create mode 100755 tools/testing/selftests/floppy/run_rdonly.sh
create mode 100755 tools/testing/selftests/floppy/run_rdwr.sh
--
2.31.1
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
The selftest for ftrace checks some features by checking if the README has
text that states the feature is supported by that kernel. Unfortunately,
this check gives false positives because it many not be checked if there's
spaces in the string to check. This is due to the compare between the
required variable with the ":README" string stripped, because neither has
quotes around them.
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Fixes: 1b8eec510ba64 ("selftests/ftrace: Support ":README" suffix for requires")
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
tools/testing/selftests/ftrace/test.d/functions | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index f68d336b961b..000fd05e84b1 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -137,7 +137,7 @@ check_requires() { # Check required files and tracers
echo "Required tracer $t is not configured."
exit_unsupported
fi
- elif [ $r != $i ]; then
+ elif [ "$r" != "$i" ]; then
if ! grep -Fq "$r" README ; then
echo "Required feature pattern \"$r\" is not in README."
exit_unsupported
--
2.30.2
Update kunit_parser to improve compatibility with the KTAP
specification, including arbitrarily nested tests. This patch
accomplishes three major changes:
- Use a general Test object to represent all tests rather than TestCase
and TestSuite objects. This allows for easier implementation of arbitrary
levels of nested tests and promotes the idea that both test suites and test
cases are tests.
- Print errors incrementally rather than all at once after the
parsing finishes, to maximize the information given to the user when the
parser is given invalid input and to increase the helpfulness
of the timestamps given during printing.
- Increase compatibility with different formats of input. Arbitrary levels
of nested tests are now supported. Also, test cases and test suites may now
be present at the same level of testing.
This patch now implements the KTAP specification as described here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqa….
This patch adjusts the kunit_tool_test.py file to check for
the correct outputs from the new parser and adds a new test to check
the parsing of a KTAP result log with the correct format for multiple nested
subtests (test_is_test_passed-all_passed_nested.log).
This patch also alters the kunit_json.py file to allow for arbitrarily
nested tests.
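As an illustration, the new parser accepts nested KTAP input of the
following shape (a hand-written example, not the exact contents of the
new test_data log):

  KTAP version 1
  1..1
    # Subtest: example_suite
    1..2
      # Subtest: nested_suite
      1..1
      ok 1 - test_case_a
    ok 1 - nested_suite
    ok 2 - test_case_b
  ok 1 - example_suite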
Signed-off-by: Rae Moar <rmoar(a)google.com>
---
tools/testing/kunit/kunit_json.py | 54 +-
tools/testing/kunit/kunit_parser.py | 1191 ++++++++++++-----
tools/testing/kunit/kunit_tool_test.py | 91 +-
.../test_is_test_passed-all_passed_nested.log | 34 +
4 files changed, 986 insertions(+), 384 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
diff --git a/tools/testing/kunit/kunit_json.py b/tools/testing/kunit/kunit_json.py
index f5cca5c38cac..cc4bc9cc6e0f 100644
--- a/tools/testing/kunit/kunit_json.py
+++ b/tools/testing/kunit/kunit_json.py
@@ -11,47 +11,45 @@ import os
import kunit_parser
-from kunit_parser import TestStatus
-
-def get_json_result(test_result, def_config, build_dir, json_path) -> str:
- sub_groups = []
-
- # Each test suite is mapped to a KernelCI sub_group
- for test_suite in test_result.suites:
- sub_group = {
- "name": test_suite.name,
- "arch": "UM",
- "defconfig": def_config,
- "build_environment": build_dir,
- "test_cases": [],
- "lab_name": None,
- "kernel": None,
- "job": None,
- "git_branch": "kselftest",
- }
- test_cases = []
- # TODO: Add attachments attribute in test_case with detailed
- # failure message, see https://api.kernelci.org/schema-test-case.html#get
- for case in test_suite.cases:
- test_case = {"name": case.name, "status": "FAIL"}
- if case.status == TestStatus.SUCCESS:
+from kunit_parser import Test, TestResult, TestStatus
+from typing import Any, Dict
+
+JsonObj = Dict[str, Any]
+
+def _get_group_json(test: Test, def_config: str, build_dir: str) -> JsonObj:
+ sub_groups = [] # List[JsonObj]
+ test_cases = [] # List[JsonObj]
+
+ for subtest in test.subtests:
+ if len(subtest.subtests):
+ sub_group = _get_group_json(subtest, def_config, build_dir)
+ sub_groups.append(sub_group)
+ else:
+ test_case = {"name": subtest.name, "status": "FAIL"}
+ if subtest.status == TestStatus.SUCCESS:
test_case["status"] = "PASS"
- elif case.status == TestStatus.TEST_CRASHED:
+ elif subtest.status == TestStatus.TEST_CRASHED:
test_case["status"] = "ERROR"
test_cases.append(test_case)
- sub_group["test_cases"] = test_cases
- sub_groups.append(sub_group)
+
test_group = {
- "name": "KUnit Test Group",
+ "name": test.name,
"arch": "UM",
"defconfig": def_config,
"build_environment": build_dir,
"sub_groups": sub_groups,
+ "test_cases": test_cases,
"lab_name": None,
"kernel": None,
"job": None,
"git_branch": "kselftest",
}
+ return test_group
+
+def get_json_result(test_result: TestResult, def_config: str, build_dir: str, \
+ json_path: str) -> str:
+ test_group = _get_group_json(test_result.test, def_config, build_dir)
+ test_group["name"] = "KUnit Test Group"
json_obj = json.dumps(test_group, indent=4)
if json_path != 'stdout':
with open(json_path, 'w') as result_path:
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index b88db3f51dc5..bca4d19f7636 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -1,11 +1,15 @@
# SPDX-License-Identifier: GPL-2.0
#
-# Parses test results from a kernel dmesg log.
+# Parses KTAP test results from a kernel dmesg log and incrementally prints
+# results with reader-friendly format. Stores and returns test results in a
+# Test object.
#
# Copyright (C) 2019, Google LLC.
# Author: Felix Guo <felixguoxiuping(a)gmail.com>
# Author: Brendan Higgins <brendanhiggins(a)google.com>
+# Author: Rae Moar <rmoar(a)google.com>
+from __future__ import annotations
import re
from collections import namedtuple
@@ -14,33 +18,84 @@ from enum import Enum, auto
from functools import reduce
from typing import Iterable, Iterator, List, Optional, Tuple
-TestResult = namedtuple('TestResult', ['status','suites','log'])
-
-class TestSuite(object):
+TestResult = namedtuple('TestResult', ['status','test','log'])
+
+class Test(object):
+ """
+ A class to represent a test parsed from KTAP results. All KTAP
+ results within a test log are stored in a main Test object as
+ subtests.
+
+ Attributes:
+ status : TestStatus - status of the test
+ name : str - name of the test
+ expected_count : int - expected number of subtests (0 if single
+ test case and None if unknown expected number of subtests)
+ subtests : List[Test] - list of subtests
+ log : List[str] - log of KTAP lines that correspond to the test
+ counts : TestCounts - counts of the test statuses and errors of
+ subtests or of the test itself if the test is a single
+ test case.
+ """
def __init__(self) -> None:
- self.status = TestStatus.SUCCESS
- self.name = ''
- self.cases = [] # type: List[TestCase]
-
- def __str__(self) -> str:
- return 'TestSuite(' + str(self.status) + ',' + self.name + ',' + str(self.cases) + ')'
+ """
+ Constructs the default attributes of a Test class object.
- def __repr__(self) -> str:
- return str(self)
+ Parameters:
+ None
-class TestCase(object):
- def __init__(self) -> None:
+ Return:
+ None
+ """
self.status = TestStatus.SUCCESS
self.name = ''
+ self.expected_count = 0 # type: Optional[int]
+ self.subtests = [] # type: List[Test]
self.log = [] # type: List[str]
+ self.counts = TestCounts()
def __str__(self) -> str:
- return 'TestCase(' + str(self.status) + ',' + self.name + ',' + str(self.log) + ')'
+ """
+ Returns string representation of a Test class object.
+
+ Parameters:
+ None
+
+ Return:
+ str - string representation of the Test class object
+ """
+ return ('Test(' + str(self.status) + ', ' + self.name + ', ' +
+ str(self.expected_count) + ', ' + str(self.subtests) +
+ ', ' + str(self.log) + ', ' + str(self.counts) + ')')
def __repr__(self) -> str:
+ """
+ Returns string representation of a Test class object.
+
+ Parameters:
+ None
+
+ Return:
+ str - string representation of the Test class object
+ """
return str(self)
+ def add_error(self, message: str):
+ """
+ Adds error to test object by printing the error and
+ incrementing the error count.
+
+ Parameters:
+ message : str - error message to print
+
+ Return:
+ None
+ """
+ print_error('Test ' + self.name + ': ' + message)
+ self.counts.errors += 1
+
class TestStatus(Enum):
+ """An enumeration class to represent the status of a test."""
SUCCESS = auto()
FAILURE = auto()
SKIPPED = auto()
@@ -48,385 +103,889 @@ class TestStatus(Enum):
NO_TESTS = auto()
FAILURE_TO_PARSE_TESTS = auto()
+class TestCounts:
+ """
+ A class to represent the counts of statuses and test errors of
+ subtests or of the test itself if the test is a single test case with
+ no subtests. Note that the sum of the counts of passed, failed,
+ crashed, and skipped should sum to the total number of subtests for
+ the test.
+
+ Attributes:
+ passed : int - the number of tests that have passed
+ failed : int - the number of tests that have failed
+ crashed : int - the number of tests that have crashed
+ skipped : int - the number of tests that have skipped
+ errors : int - the number of errors in the test and subtests
+ """
+ def __init__(self):
+ """
+ Constructs the default attributes of a TestCounts class object.
+ Sets the counts of all test statuses and test errors to be 0.
+
+ Parameters:
+ None
+
+ Return:
+ None
+ """
+ self.passed = 0
+ self.failed = 0
+ self.crashed = 0
+ self.skipped = 0
+ self.errors = 0
+
+ def __str__(self) -> str:
+ """
+ Returns the string representation of the TestCounts object,
+ listing the counts of passed, failed, crashed, and skipped
+ subtests, as well as the number of errors.
+
+ Parameters:
+ None
+
+ Return:
+ str - string representing TestCounts object.
+ """
+ return ('Passed: ' + str(self.passed) + ', Failed: ' +
+ str(self.failed) + ', Crashed: ' + str(self.crashed) +
+ ', Skipped: ' + str(self.skipped) + ', Errors: ' +
+ str(self.errors))
+
+ def total(self) -> int:
+ """
+ Returns total number of subtests or 1 if the test object has
+ no subtests. This number is calculated by the sum of the
+ passed, failed, crashed, and skipped subtests.
+
+ Parameters:
+ None
+
+ Return:
+ int - the total number of subtests or 1 if the test object has
+ no subtests
+ """
+ return self.passed + self.failed + self.crashed + self.skipped
+
+ def add_subtest_counts(self, counts: TestCounts) -> None:
+ """
+ Adds the counts of another TestCounts object to the current
+ TestCounts object. Used to add the counts of a subtest to the
+ parent test.
+
+ Parameters:
+ counts : TestCounts - another TestCounts object whose counts
+ will be added to the counts of the TestCounts object
+
+ Return:
+ None
+ """
+ self.passed += counts.passed
+ self.failed += counts.failed
+ self.crashed += counts.crashed
+ self.skipped += counts.skipped
+ self.errors += counts.errors
+
+ def get_status(self) -> TestStatus:
+ """
+ Returns the expected status of a Test using test counts.
+
+ Parameters:
+ None
+
+ Return:
+ TestStatus - expected status of a Test given test counts
+ """
+ if self.crashed:
+ # If one of the subtests crashes, the expected status of
+ # the Test is crashed.
+ return TestStatus.TEST_CRASHED
+ elif self.failed:
+ # Otherwise if one of the subtests fails, the
+ # expected status of the Test is failed.
+ return TestStatus.FAILURE
+ elif self.passed:
+ # Otherwise if one of the subtests passes, the
+ # expected status of the Test is passed.
+ return TestStatus.SUCCESS
+ else:
+ # Finally, if none of the subtests have failed,
+ # crashed, or passed, the expected status of the
+ # Test is skipped.
+ return TestStatus.SKIPPED
+
+ def add_status(self, status: TestStatus) -> None:
+ """
+ Given inputted status, increments corresponding attribute of
+ TestCounts object.
+
+ Parameters:
+ status : TestStatus - status to be added to the TestCounts
+ object
+
+ Return:
+ None
+ """
+ if status == TestStatus.SUCCESS or \
+ status == TestStatus.NO_TESTS:
+ # if status is NO_TESTS the most appropriate attribute
+ # to increment is passed because the test did not
+ # fail, crash or get skipped.
+ self.passed += 1
+ elif status == TestStatus.FAILURE:
+ self.failed += 1
+ elif status == TestStatus.SKIPPED:
+ self.skipped += 1
+ else:
+ self.crashed += 1
+
class LineStream:
- """Provides a peek()/pop() interface over an iterator of (line#, text)."""
+ """
+ A class to represent the lines of kernel output.
+ Provides a peek()/pop() interface over an iterator of
+ (line#, text).
+
+ Attributes:
+ _lines : Iterator[Tuple[int, str]] - Iterator containing tuple of
+ line number and line of kernel output
+ _next : Tuple[int, str] - Tuple containing next line and the
+ corresponding line number
+ _done : bool - boolean denoting whether the LineStream has reached
+ the end of the lines
+ """
_lines: Iterator[Tuple[int, str]]
_next: Tuple[int, str]
_done: bool
def __init__(self, lines: Iterator[Tuple[int, str]]):
+ """Set defaults for LineStream object and sets _lines
+ attribute to lines parameter.
+ """
self._lines = lines
self._done = False
self._next = (0, '')
self._get_next()
def _get_next(self) -> None:
+ """Sets _next attribute to the upcoming Tuple of line and
+ line number in the LineStream.
+ """
try:
self._next = next(self._lines)
except StopIteration:
self._done = True
def peek(self) -> str:
+ """Returns the line stored in the _next attribute."""
return self._next[1]
def pop(self) -> str:
+ """Returns the line stored in the _next attribute and sets the
+ _next attribute to the following line and line number Tuple.
+ """
n = self._next
self._get_next()
return n[1]
def __bool__(self) -> bool:
+ """Returns whether the LineStream has reached the end of the
+ lines.
+ """
return not self._done
# Only used by kunit_tool_test.py.
def __iter__(self) -> Iterator[str]:
+ """Returns an Iterator object containing all of the lines
+ stored in the LineStream object. This method also empties the
+ LineStream so it reaches the end of the lines.
+ """
while bool(self):
yield self.pop()
def line_number(self) -> int:
+ """Returns the line number of the upcoming line."""
return self._next[0]
-kunit_start_re = re.compile(r'TAP version [0-9]+$')
-kunit_end_re = re.compile('(List of all partitions:|'
- 'Kernel panic - not syncing: VFS:|reboot: System halted)')
+# Parsing helper methods:
+
+KTAP_START = re.compile(r'KTAP version ([0-9]+)$')
+TAP_START = re.compile(r'TAP version ([0-9]+)$')
+KTAP_END = re.compile('(List of all partitions:|'
+ 'Kernel panic - not syncing: VFS:|reboot: System halted)')
def extract_tap_lines(kernel_output: Iterable[str]) -> LineStream:
- def isolate_kunit_output(kernel_output: Iterable[str]) -> Iterator[Tuple[int, str]]:
+ """
+ Returns LineStream object of extracted ktap lines within
+ inputted kernel output.
+
+ Parameters:
+ kernel_output : Iterable[str] - iterable object containing lines
+ of kernel output
+
+ Return:
+ LineStream - LineStream object containing extracted ktap lines.
+ """
+ def isolate_ktap_output(kernel_output: Iterable[str]) \
+ -> Iterator[Tuple[int, str]]:
+ """
+ Helper method of extract_tap_lines that yields extracted
+ ktap lines within inputted kernel output. The output is used to
+ create the LineStream object in extract_tap_lines.
+
+ Parameters:
+ kernel_output : Iterable[str] - iterable object containing lines
+ of kernel output
+
+ Return:
+ Iterator[Tuple[int, str]] - Iterator object containing tuples
+ with extracted ktap lines and their corresponding line
+ number.
+ """
line_num = 0
started = False
for line in kernel_output:
line_num += 1
- line = line.rstrip() # line always has a trailing \n
- if kunit_start_re.search(line):
+ line = line.rstrip() # remove trailing \n
+ if not started and KTAP_START.search(line):
+ prefix_len = len(
+ line.split('KTAP version')[0])
+ started = True
+ yield line_num, line[prefix_len:]
+ elif not started and TAP_START.search(line):
prefix_len = len(line.split('TAP version')[0])
started = True
yield line_num, line[prefix_len:]
- elif kunit_end_re.search(line):
+ elif started and KTAP_END.search(line):
break
elif started:
- yield line_num, line[prefix_len:]
- return LineStream(lines=isolate_kunit_output(kernel_output))
-
-def raw_output(kernel_output) -> None:
+ # remove prefix and indentation
+ line = line[prefix_len:].lstrip()
+ yield line_num, line
+ return LineStream(lines=isolate_ktap_output(kernel_output))
+
+def raw_output(kernel_output: Iterable[str]) -> None:
+ """
+ Prints all of given kernel output.
+
+ Parameters:
+ kernel_output : Iterable[str] - iterable object containing lines
+ of kernel output
+
+ Return:
+ None
+ """
for line in kernel_output:
print(line.rstrip())
-DIVIDER = '=' * 60
-
-RESET = '\033[0;0m'
-
-def red(text) -> str:
- return '\033[1;31m' + text + RESET
-
-def yellow(text) -> str:
- return '\033[1;33m' + text + RESET
+KTAP_VERSIONS = [1]
+TAP_VERSIONS = [13, 14]
+
+def check_version(version_num: int, accepted_versions: List[int], \
+ version_type: str, test: Test) -> None:
+ """
+ Adds errors to the test if the version number is too high or too low.
+
+ Parameters:
+ version_num : int - The inputted version number from the parsed
+ ktap or tap header line
+ accepted_version : List[int] - List of accepted ktap or tap versions
+ version_type : str - 'KTAP' or 'TAP' depending on the type of
+ version line.
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ None
+ """
+ if version_num < min(accepted_versions):
+ test.add_error(version_type + ' version lower than expected!')
+ elif version_num > max(accepted_versions):
+ test.add_error(
+ version_type + ' version higher than expected!')
+
+def parse_ktap_header(lines: LineStream, test: Test) -> bool:
+ """
+ If the next line in LineStream matches the format of ktap or tap
+ header line, the version number is checked, the line is popped,
+ and returns True. Otherwise the method returns False.
+
+ Accepted formats:
+ - 'KTAP version [version number]'
+ - 'TAP version [version number]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ bool : Represents if the next line in the LineStream was parsed as
+ the ktap or tap header line
+ """
+ ktap_match = KTAP_START.match(lines.peek())
+ tap_match = TAP_START.match(lines.peek())
+ if ktap_match:
+ version_num = int(ktap_match.group(1))
+ check_version(version_num, KTAP_VERSIONS, 'KTAP', test)
+ elif tap_match:
+ version_num = int(tap_match.group(1))
+ check_version(version_num, TAP_VERSIONS, 'TAP', test)
+ else:
+ return False
+ test.log.append(lines.pop())
+ return True
+
+TEST_HEADER = re.compile(r'^# Subtest: (.*)$')
+
+def parse_test_header(lines: LineStream, test: Test) -> bool:
+ """
+ If the next line in LineStream matches the format of a test
+ header line, the name of test is set, the line is popped,
+ and returns True. Otherwise the method returns False.
+
+ Accepted format:
+ - '# Subtest: [test name]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ bool : Represents if the next line in the LineStream was parsed as
+ a test header
+ """
+ match = TEST_HEADER.match(lines.peek())
+ if not match:
+ return False
+ test.log.append(lines.pop())
+ test.name = match.group(1)
+ return True
+
+TEST_PLAN = re.compile(r'1\.\.([0-9]+)')
+
+def parse_test_plan(lines: LineStream, test: Test) -> bool:
+ """
+ If the next line in LineStream matches the format of a test
+ plan line, the expected number of subtests is set in test object, an
+ error is thrown if there are 0 tests, the line is popped,
+ and returns True. Otherwise the method adds an error that the test
+ plan is missing to the test object and returns False.
+
+ Accepted format:
+ - '1..[number of subtests]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ bool : Represents if the next line in the LineStream was parsed as
+ a test plan
+ """
+ match = TEST_PLAN.match(lines.peek())
+ if not match:
+ test.expected_count = None
+ test.add_error('missing plan line!')
+ return False
+ test.log.append(lines.pop())
+ expected_count = int(match.group(1))
+ test.expected_count = expected_count
+ if expected_count == 0:
+ test.status = TestStatus.NO_TESTS
+ test.add_error('0 tests run!')
+ return True
+
+TEST_RESULT = re.compile(r'^(ok|not ok) ([0-9]+) (- )?(.*)$')
+
+TEST_RESULT_SKIP = re.compile(r'^(ok|not ok) ([0-9]+) (- )?(.*) # SKIP(.*)$')
+
+def peek_test_name_match(lines: LineStream, test: Test) -> bool:
+ """
+ If the next line in LineStream matches the format of a test
+ result line and the name of the result line matches the name of the
+ current test, the method returns True. Otherwise it returns False.
+
+ Accepted format:
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ bool : Represents if the next line in the LineStream matched a test
+ result line and the name matched the test name
+ """
+ line = lines.peek()
+ match = TEST_RESULT.match(line)
+ if not match:
+ return False
+ name = match.group(4)
+ return (name == test.name)
+
+def parse_test_result(lines: LineStream, test: Test, expected_num: int) \
+ -> bool:
+ """
+ If the next line in LineStream matches the format of a test
+ result line, the status in the result line is added to the test
+ object, the test number is checked to match the expected test number
+ and if not an error is added to the test object, and returns True.
+ Otherwise it returns False. Note that the skip directive is the only
+ directive that causes a change in status and otherwise the directive
+ is included in the name of the test.
+
+ Accepted format:
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+ expected_num : int - expected test number for current test
+
+ Return:
+ bool : Represents if the next line in the LineStream was parsed as a
+ test result line.
+ """
+ line = lines.peek()
+ match = TEST_RESULT.match(line)
+ skip_match = TEST_RESULT_SKIP.match(line)
-def green(text) -> str:
- return '\033[1;32m' + text + RESET
+ # Check if line matches test result line format
+ if not match:
+ return False
+ test.log.append(lines.pop())
-def print_with_timestamp(message) -> None:
- print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
+ # Check test num
+ num = int(match.group(2))
+ if num != expected_num:
+ test.add_error('Expected test number ' +
+ str(expected_num) + ' but found ' + str(num))
-def format_suite_divider(message) -> str:
- return '======== ' + message + ' ========'
+ # Set name of test object
+ if skip_match:
+ test.name = skip_match.group(4)
+ else:
+ test.name = match.group(4)
-def print_suite_divider(message) -> None:
- print_with_timestamp(DIVIDER)
- print_with_timestamp(format_suite_divider(message))
+ # Set status of test object
+ status = match.group(1)
+ if test.status == TestStatus.TEST_CRASHED:
+ return True
+ elif skip_match:
+ test.status = TestStatus.SKIPPED
+ elif status == 'ok':
+ test.status = TestStatus.SUCCESS
+ else:
+ test.status = TestStatus.FAILURE
+ return True
+
+DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^# .*?: kunit test case crashed!$')
+
+def parse_diagnostic(lines: LineStream, test: Test) -> None:
+ """
+ If the next line in LineStream does not match the format of a test
+ case line or test header line, the line is checked if the test has
+ crashed and if so adds an error message, pops the line and adds it to
+ the log.
+
+ Line formats that are not parsed:
+ - '# Subtest: [test name]'
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ None
+ """
+ while lines and not TEST_RESULT.match(lines.peek()) and not \
+ TEST_HEADER.match(lines.peek()):
+ if DIAGNOSTIC_CRASH_MESSAGE.match(lines.peek()):
+ test.status = TestStatus.TEST_CRASHED
+ test.log.append(lines.pop())
+
+# Printing helper methods:
-def print_log(log) -> None:
- for m in log:
- print_with_timestamp(m)
+DIVIDER = '=' * 60
-TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*#).*$')
+RESET = '\033[0;0m'
-def consume_non_diagnostic(lines: LineStream) -> None:
- while lines and not TAP_ENTRIES.match(lines.peek()):
- lines.pop()
+def red(text: str) -> str:
+ """
+ Returns string with added red ansi color code at beginning and reset
+ code at end.
-def save_non_diagnostic(lines: LineStream, test_case: TestCase) -> None:
- while lines and not TAP_ENTRIES.match(lines.peek()):
- test_case.log.append(lines.peek())
- lines.pop()
+ Parameters:
+ text: str -> text to be made red with ansi color codes
-OkNotOkResult = namedtuple('OkNotOkResult', ['is_ok','description', 'text'])
+ Return:
+ str - original text made red with ansi color codes
+ """
+ return '\033[1;31m' + text + RESET
-OK_NOT_OK_SKIP = re.compile(r'^[\s]*(ok|not ok) [0-9]+ - (.*) # SKIP(.*)$')
+def yellow(text: str) -> str:
+ """
+ Returns string with added yellow ansi color code at beginning and
+ reset code at end.
-OK_NOT_OK_SUBTEST = re.compile(r'^[\s]+(ok|not ok) [0-9]+ - (.*)$')
+ Parameters:
+ text: str -> text to be made yellow with ansi color codes
-OK_NOT_OK_MODULE = re.compile(r'^(ok|not ok) ([0-9]+) - (.*)$')
+ Return:
+ str - original text made yellow with ansi color codes
+ """
+ return '\033[1;33m' + text + RESET
-def parse_ok_not_ok_test_case(lines: LineStream, test_case: TestCase) -> bool:
- save_non_diagnostic(lines, test_case)
- if not lines:
- test_case.status = TestStatus.TEST_CRASHED
- return True
- line = lines.peek()
- match = OK_NOT_OK_SUBTEST.match(line)
- while not match and lines:
- line = lines.pop()
- match = OK_NOT_OK_SUBTEST.match(line)
- if match:
- test_case.log.append(lines.pop())
- test_case.name = match.group(2)
- skip_match = OK_NOT_OK_SKIP.match(line)
- if skip_match:
- test_case.status = TestStatus.SKIPPED
- return True
- if test_case.status == TestStatus.TEST_CRASHED:
- return True
- if match.group(1) == 'ok':
- test_case.status = TestStatus.SUCCESS
- else:
- test_case.status = TestStatus.FAILURE
- return True
- else:
- return False
+def green(text: str) -> str:
+ """
+ Returns string with added green ansi color code at beginning and reset
+ code at end.
-SUBTEST_DIAGNOSTIC = re.compile(r'^[\s]+# (.*)$')
-DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^[\s]+# .*?: kunit test case crashed!$')
+ Parameters:
+ text: str -> text to be made green with ansi color codes
-def parse_diagnostic(lines: LineStream, test_case: TestCase) -> bool:
- save_non_diagnostic(lines, test_case)
- if not lines:
- return False
- line = lines.peek()
- match = SUBTEST_DIAGNOSTIC.match(line)
- if match:
- test_case.log.append(lines.pop())
- crash_match = DIAGNOSTIC_CRASH_MESSAGE.match(line)
- if crash_match:
- test_case.status = TestStatus.TEST_CRASHED
- return True
- else:
- return False
+ Return:
+ str - original text made green with ansi color codes
+ """
+ return '\033[1;32m' + text + RESET
-def parse_test_case(lines: LineStream) -> Optional[TestCase]:
- test_case = TestCase()
- save_non_diagnostic(lines, test_case)
- while parse_diagnostic(lines, test_case):
- pass
- if parse_ok_not_ok_test_case(lines, test_case):
- return test_case
- else:
- return None
+ANSI_LEN = len(red(''))
-SUBTEST_HEADER = re.compile(r'^[\s]+# Subtest: (.*)$')
+def print_with_timestamp(message: str) -> None:
+ """
+ Prints message with timestamp at beginning.
-def parse_subtest_header(lines: LineStream) -> Optional[str]:
- consume_non_diagnostic(lines)
- if not lines:
- return None
- match = SUBTEST_HEADER.match(lines.peek())
- if match:
- lines.pop()
- return match.group(1)
- else:
- return None
+ Parameters:
+ message: str -> message to be printed
-SUBTEST_PLAN = re.compile(r'[\s]+[0-9]+\.\.([0-9]+)')
+ Return:
+ None
+ """
+ print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
-def parse_subtest_plan(lines: LineStream) -> Optional[int]:
- consume_non_diagnostic(lines)
- match = SUBTEST_PLAN.match(lines.peek())
- if match:
- lines.pop()
- return int(match.group(1))
+def format_test_divider(message: str, len_message: int) -> str:
+ """
+ Returns string with message centered in fixed width divider.
+
+ Example:
+ '===================== message example ====================='
+
+ Parameters:
+ message: str -> message to be centered in divider line
+ len_message : int -> length of the message to be printed in the
+ divider such that the ansi codes are not counted if the
+ message is colored.
+
+ Return:
+ str - string containing message centered in fixed width divider
+ """
+ default_count = 3 # default number of dashes
+ len_1 = default_count
+ len_2 = default_count
+ difference = len(DIVIDER) - len_message - 2 # 2 spaces added
+ if difference > 0:
+ # calculate number of dashes for each side of the divider
+ len_1 = int(difference / 2)
+ len_2 = difference - len_1
+ return ('=' * len_1) + ' ' + message + ' ' + ('=' * len_2)
+
+def print_test_header(test: Test) -> None:
+ """
+ Prints test header with test name and optionally the expected number
+ of subtests.
+
+ Example:
+ '=================== example (2 subtests) ==================='
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test header
+
+ Return:
+ None
+ """
+ message = test.name
+ if test.expected_count:
+ message += ' (' + str(test.expected_count) + ' subtests)'
+ print_with_timestamp(format_test_divider(message, len(message)))
+
+def print_log(log: Iterable[str]) -> None:
+ """
+ Prints all strings in saved log for test in yellow.
+
+ Parameters:
+ log: Iterable[str] -> Iterable object with all strings saved in log
+ for test
+
+ Return:
+ None
+ """
+ for m in log:
+ print_with_timestamp(yellow(m))
+ print_with_timestamp('')
+
+def format_test_result(test: Test) -> str:
+ """
+ Returns string with formatted test result with colored status and test
+ name.
+
+ Example:
+ '[PASSED] example'
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test result
+
+ Return:
+ str - string containing formatted test result
+ """
+ if test.status == TestStatus.SUCCESS:
+ return (green('[PASSED] ') + test.name)
+ elif test.status == TestStatus.SKIPPED:
+ return (yellow('[SKIPPED] ') + test.name)
+ elif test.status == TestStatus.TEST_CRASHED:
+ print_log(test.log)
+ return (red('[CRASHED] ') + test.name)
else:
- return None
-
-def max_status(left: TestStatus, right: TestStatus) -> TestStatus:
- if left == right:
- return left
- elif left == TestStatus.TEST_CRASHED or right == TestStatus.TEST_CRASHED:
- return TestStatus.TEST_CRASHED
- elif left == TestStatus.FAILURE or right == TestStatus.FAILURE:
- return TestStatus.FAILURE
- elif left == TestStatus.SKIPPED:
- return right
+ print_log(test.log)
+ return (red('[FAILED] ') + test.name)
+
+def print_test_result(test: Test) -> None:
+ """
+ Prints result line with status of test.
+
+ Example:
+ '[PASSED] example'
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test result line
+
+ Return:
+ None
+ """
+ print_with_timestamp(format_test_result(test))
+
+def print_test_footer(test: Test) -> None:
+ """
+ Prints test footer with status of test.
+
+ Example:
+ '===================== [PASSED] example ====================='
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test footer
+
+ Return:
+ None
+ """
+ message = format_test_result(test)
+ print_with_timestamp(format_test_divider(message,
+ len(message) - ANSI_LEN))
+
+def print_summary_line(test: Test) -> None:
+ """
+ Prints summary line of test object. Color of line is dependent on
+ status of test. Color is green if test passes, yellow if test is
+ skipped, and red if the test fails or crashes. Summary line contains
+ counts of the statuses of the test's subtests, or of the test itself if it
+ has no subtests.
+
+ Example:
+ 'Testing complete. Passed: 2, Failed: 0, Crashed: 0, Skipped: 0, \
+ Errors: 0'
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test summary line
+
+ Return:
+ None
+ """
+ if test.status == TestStatus.SUCCESS or \
+ test.status == TestStatus.NO_TESTS:
+ color = green
+ elif test.status == TestStatus.SKIPPED:
+ color = yellow
else:
- return left
-
-def parse_ok_not_ok_test_suite(lines: LineStream,
- test_suite: TestSuite,
- expected_suite_index: int) -> bool:
- consume_non_diagnostic(lines)
- if not lines:
- test_suite.status = TestStatus.TEST_CRASHED
- return False
- line = lines.peek()
- match = OK_NOT_OK_MODULE.match(line)
- if match:
- lines.pop()
- if match.group(1) == 'ok':
- test_suite.status = TestStatus.SUCCESS
- else:
- test_suite.status = TestStatus.FAILURE
- skip_match = OK_NOT_OK_SKIP.match(line)
- if skip_match:
- test_suite.status = TestStatus.SKIPPED
- suite_index = int(match.group(2))
- if suite_index != expected_suite_index:
- print_with_timestamp(
- red('[ERROR] ') + 'expected_suite_index ' +
- str(expected_suite_index) + ', but got ' +
- str(suite_index))
- return True
+ color = red
+ counts = test.counts
+ print_with_timestamp(color('Testing complete. ' + str(counts)))
+
+def print_error(message: str) -> None:
+ """
+ Prints message with error format.
+
+ Parameters:
+ message: str -> message to be used as error message
+
+ Return:
+ None
+ """
+ print_with_timestamp(red('[ERROR] ') + message)
+
+# Other methods:
+
+def bubble_up_test_results(test: Test) -> None:
+ """
+ If the test has subtests, add the test counts of the subtests to the
+ test and check if any of the tests crashed and if so set the test
+ status to crashed. Otherwise if the test has no subtests add the
+ status of the test to the test counts.
+
+ Parameters:
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ None
+ """
+ subtests = test.subtests
+ counts = test.counts
+ status = test.status
+ for t in subtests:
+ counts.add_subtest_counts(t.counts)
+ if counts.total() == 0:
+ counts.add_status(status)
+ elif test.counts.get_status() == TestStatus.TEST_CRASHED:
+ test.status = TestStatus.TEST_CRASHED
+
+def parse_test(lines: LineStream, expected_num: int) -> Test:
+ """
+ Finds next test to parse in LineStream, creates new Test object,
+ parses any subtests of the test, populates Test object with all
+ information (status, name) about the test and the Test objects for
+ any subtests, and then returns the Test object. The method accepts
+ three formats of tests:
+
+ Accepted test formats:
+
+ - Main KTAP/TAP header
+
+ Example:
+
+ KTAP version 1
+ 1..4
+ [subtests]
+
+ - Subtest header line
+
+ Example:
+
+ # Subtest: name
+ 1..3
+ [subtests]
+ ok 1 name
+
+ - Test result line
+
+ Example:
+
+ ok 1 - test
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ expected_num : int - expected test number for test to be parsed
+
+ Return:
+ Test : Test object populated with characteristics and containing any
+ subtests
+ """
+ test = Test()
+ parent_test = False
+ main = parse_ktap_header(lines, test)
+ if main:
+ # If KTAP/TAP header is found, attempt to parse
+ # test plan
+ parse_test_plan(lines, test)
else:
- return False
-
-def bubble_up_errors(status_list: Iterable[TestStatus]) -> TestStatus:
- return reduce(max_status, status_list, TestStatus.SKIPPED)
-
-def bubble_up_test_case_errors(test_suite: TestSuite) -> TestStatus:
- max_test_case_status = bubble_up_errors(x.status for x in test_suite.cases)
- return max_status(max_test_case_status, test_suite.status)
-
-def parse_test_suite(lines: LineStream, expected_suite_index: int) -> Optional[TestSuite]:
- if not lines:
- return None
- consume_non_diagnostic(lines)
- test_suite = TestSuite()
- test_suite.status = TestStatus.SUCCESS
- name = parse_subtest_header(lines)
- if not name:
- return None
- test_suite.name = name
- expected_test_case_num = parse_subtest_plan(lines)
- if expected_test_case_num is None:
- return None
- while expected_test_case_num > 0:
- test_case = parse_test_case(lines)
- if not test_case:
+ # If KTAP/TAP header is not found, test must be a subtest
+ # header or a test result line, so attempt to parse the
+ # subtest header
+ parse_diagnostic(lines, test)
+ parent_test = parse_test_header(lines, test)
+ if parent_test:
+ # If subtest header is found, attempt to parse
+ # test plan and print header
+ parse_test_plan(lines, test)
+ print_test_header(test)
+ expected_count = test.expected_count
+ subtests = []
+ test_num = 1
+ while main or expected_count is None or test_num <= expected_count:
+ # Loop to parse any subtests.
+ # If test is main test, do not break until no lines left.
+ # Otherwise, break after parsing expected number of tests or
+ # if the expected number of tests is unknown, break when a
+ # test result line is found whose name matches the subtest header.
+ if not lines:
+ if expected_count and test_num <= expected_count:
+ test.add_error('missing expected subtests!')
break
- test_suite.cases.append(test_case)
- expected_test_case_num -= 1
- if parse_ok_not_ok_test_suite(lines, test_suite, expected_suite_index):
- test_suite.status = bubble_up_test_case_errors(test_suite)
- return test_suite
- elif not lines:
- print_with_timestamp(red('[ERROR] ') + 'ran out of lines before end token')
- return test_suite
- else:
- print(f'failed to parse end of suite "{name}", at line {lines.line_number()}: {lines.peek()}')
- return None
-
-TAP_HEADER = re.compile(r'^TAP version 14$')
-
-def parse_tap_header(lines: LineStream) -> bool:
- consume_non_diagnostic(lines)
- if TAP_HEADER.match(lines.peek()):
- lines.pop()
- return True
- else:
- return False
-
-TEST_PLAN = re.compile(r'[0-9]+\.\.([0-9]+)')
-
-def parse_test_plan(lines: LineStream) -> Optional[int]:
- consume_non_diagnostic(lines)
- match = TEST_PLAN.match(lines.peek())
- if match:
- lines.pop()
- return int(match.group(1))
- else:
- return None
-
-def bubble_up_suite_errors(test_suites: Iterable[TestSuite]) -> TestStatus:
- return bubble_up_errors(x.status for x in test_suites)
-
-def parse_test_result(lines: LineStream) -> TestResult:
- consume_non_diagnostic(lines)
- if not lines or not parse_tap_header(lines):
- return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines)
- expected_test_suite_num = parse_test_plan(lines)
- if expected_test_suite_num == 0:
- return TestResult(TestStatus.NO_TESTS, [], lines)
- elif expected_test_suite_num is None:
- return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines)
- test_suites = []
- for i in range(1, expected_test_suite_num + 1):
- test_suite = parse_test_suite(lines, i)
- if test_suite:
- test_suites.append(test_suite)
- else:
- print_with_timestamp(
- red('[ERROR] ') + ' expected ' +
- str(expected_test_suite_num) +
- ' test suites, but got ' + str(i - 2))
+ if not expected_count and not main and \
+ peek_test_name_match(lines, test):
break
- test_suite = parse_test_suite(lines, -1)
- if test_suite:
- print_with_timestamp(red('[ERROR] ') +
- 'got unexpected test suite: ' + test_suite.name)
- if test_suites:
- return TestResult(bubble_up_suite_errors(test_suites), test_suites, lines)
- else:
- return TestResult(TestStatus.NO_TESTS, [], lines)
-
-class TestCounts:
- passed: int
- failed: int
- crashed: int
- skipped: int
-
- def __init__(self):
- self.passed = 0
- self.failed = 0
- self.crashed = 0
- self.skipped = 0
-
- def total(self) -> int:
- return self.passed + self.failed + self.crashed + self.skipped
-
-def print_and_count_results(test_result: TestResult) -> TestCounts:
- counts = TestCounts()
- for test_suite in test_result.suites:
- if test_suite.status == TestStatus.SUCCESS:
- print_suite_divider(green('[PASSED] ') + test_suite.name)
- elif test_suite.status == TestStatus.SKIPPED:
- print_suite_divider(yellow('[SKIPPED] ') + test_suite.name)
- elif test_suite.status == TestStatus.TEST_CRASHED:
- print_suite_divider(red('[CRASHED] ' + test_suite.name))
+ subtests.append(parse_test(lines, test_num))
+ test_num += 1
+ test.subtests = subtests
+ if not main:
+ # If not main test, look for test result line
+ parse_diagnostic(lines, test)
+ if (parent_test and peek_test_name_match(lines, test)) or \
+ not parent_test:
+ parse_test_result(lines, test, expected_num)
+ if not parent_test:
+ print_test_result(test)
else:
- print_suite_divider(red('[FAILED] ') + test_suite.name)
- for test_case in test_suite.cases:
- if test_case.status == TestStatus.SUCCESS:
- counts.passed += 1
- print_with_timestamp(green('[PASSED] ') + test_case.name)
- elif test_case.status == TestStatus.SKIPPED:
- counts.skipped += 1
- print_with_timestamp(yellow('[SKIPPED] ') + test_case.name)
- elif test_case.status == TestStatus.TEST_CRASHED:
- counts.crashed += 1
- print_with_timestamp(red('[CRASHED] ' + test_case.name))
- print_log(map(yellow, test_case.log))
- print_with_timestamp('')
- else:
- counts.failed += 1
- print_with_timestamp(red('[FAILED] ') + test_case.name)
- print_log(map(yellow, test_case.log))
- print_with_timestamp('')
- return counts
+ test.add_error('missing subtest result line!')
+ # Add statuses to TestCounts attribute in Test object
+ bubble_up_test_results(test)
+ if parent_test:
+ # If test has subtests and is not the main test object, print
+ # footer.
+ print_test_footer(test)
+ return test
def parse_run_tests(kernel_output: Iterable[str]) -> TestResult:
- counts = TestCounts()
+ """
+ Using kernel output, extract ktap lines, parse the lines for test
+ results, and print condensed test results and a summary line.
+
+ Parameters:
+ kernel_output : Iterable[str] - iterable object contains lines
+ of kernel output
+
+ Return:
+ TestResult - Tuple containing status of main test object, main test
+ object with all subtests, and log of all ktap lines.
+ """
+ print_with_timestamp(DIVIDER)
lines = extract_tap_lines(kernel_output)
- test_result = parse_test_result(lines)
- if test_result.status == TestStatus.NO_TESTS:
- print(red('[ERROR] ') + yellow('no tests run!'))
- elif test_result.status == TestStatus.FAILURE_TO_PARSE_TESTS:
- print(red('[ERROR] ') + yellow('could not parse test results!'))
+ test = Test()
+ if not lines:
+ test.add_error('invalid KTAP input!')
+ test.status = TestStatus.FAILURE_TO_PARSE_TESTS
else:
- counts = print_and_count_results(test_result)
+ test = parse_test(lines, 0)
+ if test.status != TestStatus.NO_TESTS:
+ test.status = test.counts.get_status()
print_with_timestamp(DIVIDER)
- if test_result.status == TestStatus.SUCCESS:
- fmt = green
- elif test_result.status == TestStatus.SKIPPED:
- fmt = yellow
- else:
- fmt =red
- print_with_timestamp(
- fmt('Testing complete. %d tests run. %d failed. %d crashed. %d skipped.' %
- (counts.total(), counts.failed, counts.crashed, counts.skipped)))
- return test_result
+ print_summary_line(test)
+ return TestResult(test.status, test, lines)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 75045aa0f8a1..ca760ee32096 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -106,10 +106,10 @@ class KUnitParserTest(unittest.TestCase):
with open(log_path) as file:
result = kunit_parser.extract_tap_lines(file.readlines())
self.assertContains('TAP version 14', result)
- self.assertContains(' # Subtest: example', result)
- self.assertContains(' 1..2', result)
- self.assertContains(' ok 1 - example_simple_test', result)
- self.assertContains(' ok 2 - example_mock_test', result)
+ self.assertContains('# Subtest: example', result)
+ self.assertContains('1..2', result)
+ self.assertContains('ok 1 - example_simple_test', result)
+ self.assertContains('ok 2 - example_mock_test', result)
self.assertContains('ok 1 - example', result)
def test_output_with_prefix_isolated_correctly(self):
@@ -117,28 +117,28 @@ class KUnitParserTest(unittest.TestCase):
with open(log_path) as file:
result = kunit_parser.extract_tap_lines(file.readlines())
self.assertContains('TAP version 14', result)
- self.assertContains(' # Subtest: kunit-resource-test', result)
- self.assertContains(' 1..5', result)
- self.assertContains(' ok 1 - kunit_resource_test_init_resources', result)
- self.assertContains(' ok 2 - kunit_resource_test_alloc_resource', result)
- self.assertContains(' ok 3 - kunit_resource_test_destroy_resource', result)
- self.assertContains(' foo bar #', result)
- self.assertContains(' ok 4 - kunit_resource_test_cleanup_resources', result)
- self.assertContains(' ok 5 - kunit_resource_test_proper_free_ordering', result)
+ self.assertContains('# Subtest: kunit-resource-test', result)
+ self.assertContains('1..5', result)
+ self.assertContains('ok 1 - kunit_resource_test_init_resources', result)
+ self.assertContains('ok 2 - kunit_resource_test_alloc_resource', result)
+ self.assertContains('ok 3 - kunit_resource_test_destroy_resource', result)
+ self.assertContains('foo bar #', result)
+ self.assertContains('ok 4 - kunit_resource_test_cleanup_resources', result)
+ self.assertContains('ok 5 - kunit_resource_test_proper_free_ordering', result)
self.assertContains('ok 1 - kunit-resource-test', result)
- self.assertContains(' foo bar # non-kunit output', result)
- self.assertContains(' # Subtest: kunit-try-catch-test', result)
- self.assertContains(' 1..2', result)
- self.assertContains(' ok 1 - kunit_test_try_catch_successful_try_no_catch',
+ self.assertContains('foo bar # non-kunit output', result)
+ self.assertContains('# Subtest: kunit-try-catch-test', result)
+ self.assertContains('1..2', result)
+ self.assertContains('ok 1 - kunit_test_try_catch_successful_try_no_catch',
result)
- self.assertContains(' ok 2 - kunit_test_try_catch_unsuccessful_try_does_catch',
+ self.assertContains('ok 2 - kunit_test_try_catch_unsuccessful_try_does_catch',
result)
self.assertContains('ok 2 - kunit-try-catch-test', result)
- self.assertContains(' # Subtest: string-stream-test', result)
- self.assertContains(' 1..3', result)
- self.assertContains(' ok 1 - string_stream_test_empty_on_creation', result)
- self.assertContains(' ok 2 - string_stream_test_not_empty_after_add', result)
- self.assertContains(' ok 3 - string_stream_test_get_string', result)
+ self.assertContains('# Subtest: string-stream-test', result)
+ self.assertContains('1..3', result)
+ self.assertContains('ok 1 - string_stream_test_empty_on_creation', result)
+ self.assertContains('ok 2 - string_stream_test_not_empty_after_add', result)
+ self.assertContains('ok 3 - string_stream_test_get_string', result)
self.assertContains('ok 3 - string-stream-test', result)
def test_parse_successful_test_log(self):
@@ -148,6 +148,13 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
+ def test_parse_successful_nested_tests_log(self):
+ all_passed_log = test_data_path('test_is_test_passed-all_passed_nested.log')
+ with open(all_passed_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
def test_parse_failed_test_log(self):
failed_log = test_data_path('test_is_test_passed-failure.log')
@@ -162,17 +169,19 @@ class KUnitParserTest(unittest.TestCase):
with open(empty_log) as file:
result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(file.readlines()))
- self.assertEqual(0, len(result.suites))
+ self.assertEqual(0, len(result.test.subtests))
self.assertEqual(
kunit_parser.TestStatus.FAILURE_TO_PARSE_TESTS,
result.status)
def test_no_tests(self):
- empty_log = test_data_path('test_is_test_passed-no_tests_run_with_header.log')
- with open(empty_log) as file:
+ header_log = test_data_path('test_is_test_passed-'
+ 'no_tests_run_with_header.log')
+ with open(header_log) as file:
result = kunit_parser.parse_run_tests(
- kunit_parser.extract_tap_lines(file.readlines()))
- self.assertEqual(0, len(result.suites))
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ self.assertEqual(0, len(result.test.subtests))
self.assertEqual(
kunit_parser.TestStatus.NO_TESTS,
result.status)
@@ -182,15 +191,17 @@ class KUnitParserTest(unittest.TestCase):
print_mock = mock.patch('builtins.print').start()
with open(crash_log) as file:
result = kunit_parser.parse_run_tests(
- kunit_parser.extract_tap_lines(file.readlines()))
- print_mock.assert_any_call(StrContains('could not parse test results!'))
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ print_mock.assert_any_call(StrContains('invalid KTAP input!'))
print_mock.stop()
file.close()
def test_crashed_test(self):
crashed_log = test_data_path('test_is_test_passed-crash.log')
with open(crashed_log) as file:
- result = kunit_parser.parse_run_tests(file.readlines())
+ result = kunit_parser.parse_run_tests(
+ file.readlines())
self.assertEqual(
kunit_parser.TestStatus.TEST_CRASHED,
result.status)
@@ -224,7 +235,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_ignores_multiple_prefixes(self):
prefix_log = test_data_path('test_multiple_prefixes.log')
@@ -233,7 +244,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_prefix_mixed_kernel_output(self):
mixed_prefix_log = test_data_path('test_interrupted_tap_output.log')
@@ -242,7 +253,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_prefix_poundsign(self):
pound_log = test_data_path('test_pound_sign.log')
@@ -251,16 +262,16 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_kernel_panic_end(self):
panic_log = test_data_path('test_kernel_panic_interrupt.log')
with open(panic_log) as file:
result = kunit_parser.parse_run_tests(file.readlines())
self.assertEqual(
- kunit_parser.TestStatus.TEST_CRASHED,
+ kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_pound_no_prefix(self):
pound_log = test_data_path('test_pound_no_prefix.log')
@@ -269,7 +280,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
class LinuxSourceTreeTest(unittest.TestCase):
@@ -380,7 +391,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(e.exception.code, 1)
self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
- self.print_mock.assert_any_call(StrContains(' 0 tests run'))
+ self.print_mock.assert_any_call(StrContains('invalid KTAP input!'))
def test_exec_raw_output(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
@@ -388,7 +399,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
for call in self.print_mock.call_args_list:
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
- self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ self.assertNotEqual(call, mock.call(StrContains('0 tests run!')))
def test_run_raw_output(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
@@ -397,7 +408,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
for call in self.print_mock.call_args_list:
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
- self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ self.assertNotEqual(call, mock.call(StrContains('0 tests run!')))
def test_exec_timeout(self):
timeout = 3453
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log b/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
new file mode 100644
index 000000000000..9d5b04fe43a6
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
@@ -0,0 +1,34 @@
+TAP version 14
+1..2
+ # Subtest: sysctl_test
+ 1..4
+ # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
+ ok 1 - sysctl_test_dointvec_null_tbl_data
+ # Subtest: example
+ 1..2
+ init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+ kunit example: all tests passed
+ ok 2 - example
+ # sysctl_test_dointvec_table_len_is_zero: sysctl_test_dointvec_table_len_is_zero passed
+ ok 3 - sysctl_test_dointvec_table_len_is_zero
+ # sysctl_test_dointvec_table_read_but_position_set: sysctl_test_dointvec_table_read_but_position_set passed
+ ok 4 - sysctl_test_dointvec_table_read_but_position_set
+kunit sysctl_test: all tests passed
+ok 1 - sysctl_test
+ # Subtest: example
+ 1..2
+init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+kunit example: all tests passed
+ok 2 - example
--
2.33.0.rc2.250.ged5fa647cd-goog
This patch series adds support for the unix stream
type to sockmap. Sockmap already supports the TCP,
UDP, and unix dgram types. The unix stream support
is similar to unix dgram.
Also add selftests for the unix stream type in sockmap tests.
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/core/sock_map.c | 1 +
net/unix/af_unix.c | 95 ++++++++++++++++---
net/unix/unix_bpf.c | 93 +++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
5 files changed, 191 insertions(+), 54 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
v2 -> v3 :
- check for stream type in update_proto
- remove intermediate variable in __unix_stream_recvmsg
- fix compile warning in unix_stream_recvmsg
v3 -> v4 :
- remove sk_is_unix_stream, just check TCP_ESTABLISHED for UNIX sockets.
- add READ_ONCE in unix_dgram_recvmsg
- remove type check in unix_stream_bpf_update_proto
v4 -> v5 :
- add two missing READ_ONCE for sk_prot.
v5 -> v6 :
- fix READ_ONCE by reading to a local variable first.
v6 -> v7 :
- fix the following compiler error when CONFIG_UNIX is m.
modpost: "sock_map_unhash" [net/unix/unix.ko] undefined!
For the series:
Acked-by: John Fastabend <john.fastabend(a)gmail.com>
Acked-by: Jakub Sitnicki <jakub(a)cloudflare.com>
--
2.20.1
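As a rough userspace illustration (not code from the series), the connected
unix stream pair that the sockmap selftests operate on can be created with
nothing but POSIX calls:

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int sv[2];
        char buf[16];
        ssize_t n;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv)) {
                perror("socketpair");
                return 1;
        }
        /* Stream semantics: a byte stream with no datagram boundaries. */
        write(sv[0], "ping", 4);
        n = read(sv[1], buf, sizeof(buf));
        printf("read %zd bytes\n", n);
        close(sv[0]);
        close(sv[1]);
        return 0;
}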
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
Add a function to remove all dynamic events from the tracing directory. It
requires a loop as some of the dynamic events may depend on others being
removed first. Also add a safety that prevents it from looping infinitely
due to a bug where an event never gets removed.
Link: https://lkml.kernel.org/r/20210819152825.348941368@goodmis.org
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Acked-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
.../testing/selftests/ftrace/test.d/functions | 22 +++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index a6fac927ee82..f68d336b961b 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -83,6 +83,27 @@ clear_synthetic_events() { # reset all current synthetic events
done
}
+clear_dynamic_events() { # reset all current dynamic events
+ again=1
+ stop=1
+ # loop multiple times as some events require others to be removed first
+ while [ $again -eq 1 ]; do
+ stop=$((stop+1))
+ # Prevent infinite loops
+ if [ $stop -gt 10 ]; then
+ break;
+ fi
+ again=2
+ grep -v '^#' dynamic_events|
+ while read line; do
+ del=`echo $line | sed -e 's/^.\([^ ]*\).*/-\1/'`
+ if ! echo "$del" >> dynamic_events; then
+ again=1
+ fi
+ done
+ done
+}
+
initialize_ftrace() { # Reset ftrace to initial-state
# As the initial state, ftrace will be set to nop tracer,
# no events, no triggers, no filters, no function filters,
@@ -93,6 +114,7 @@ initialize_ftrace() { # Reset ftrace to initial-state
reset_events_filter
reset_ftrace_filter
disable_events
+ clear_dynamic_events
[ -f set_event_pid ] && echo > set_event_pid
[ -f set_ftrace_pid ] && echo > set_ftrace_pid
[ -f set_ftrace_notrace ] && echo > set_ftrace_notrace
--
2.30.2
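The removal mechanism the clear_dynamic_events() loop builds on is the
dynamic_events ABI itself: appending a line that starts with '-' deletes the
named event, which is what the sed-built "$del" string does. A minimal C
sketch of a single removal, assuming tracefs is mounted at
/sys/kernel/tracing and a hypothetical probe named "myprobe":

#include <stdio.h>

int main(void)
{
        /* Append mode mirrors the script's '>> dynamic_events'. */
        FILE *f = fopen("/sys/kernel/tracing/dynamic_events", "a");

        if (!f) {
                perror("dynamic_events");
                return 1;
        }
        /* Equivalent of the script turning "p:myprobe ..." into "-:myprobe" */
        if (fprintf(f, "-:myprobe\n") < 0)
                perror("remove");
        fclose(f);
        return 0;
}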
Extend KSM self tests with a performance benchmark. These tests are not
part of regular regression testing, as they are mainly intended to be
used by developers making changes to the memory management subsystem.
This patchset is a respin of the previous series:
v2: https://lkml.org/lkml/2021/8/6/422
v1: https://lkml.org/lkml/2021/8/1/130
Zhansaya Bagdauletkyzy (2):
selftests: vm: add KSM merging time test
selftests: vm: add COW time test for KSM pages
v2 -> v3:
- address COW test review comments
v1 -> v2:
- replace MB with MiB
- address COW test review comments
tools/testing/selftests/vm/ksm_tests.c | 154 ++++++++++++++++++++++++-
1 file changed, 150 insertions(+), 4 deletions(-)
--
2.25.1
The PAC tests check whether the system supports the relevant PAC features,
but instead of skipping the tests when they can't be executed they fail
them, which makes things look broken when they are not.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
tools/testing/selftests/arm64/pauth/pac.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
index 592fe538506e..b743daa772f5 100644
--- a/tools/testing/selftests/arm64/pauth/pac.c
+++ b/tools/testing/selftests/arm64/pauth/pac.c
@@ -25,13 +25,15 @@
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* data key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACA)) \
+ SKIP(return, "PAUTH not enabled"); \
} while (0)
#define ASSERT_GENERIC_PAUTH_ENABLED() \
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* generic key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACG)) \
+ SKIP(return, "Generic PAUTH not enabled"); \
} while (0)
void sign_specific(struct signatures *sign, size_t val)
@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
--
2.20.1
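The SKIP() paths above key off the hwcap bits. A standalone sketch of the
same check, assuming an arm64 target (the HWCAP_PACA fallback below is the
arm64 bit) and the kselftest skip exit code of 4:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_PACA
#define HWCAP_PACA (1UL << 30)  /* arm64 value; fallback for old headers */
#endif

#define KSFT_SKIP 4

int main(void)
{
        unsigned long hwcaps = getauxval(AT_HWCAP);

        if (!(hwcaps & HWCAP_PACA)) {
                printf("SKIP: PAUTH not enabled\n");
                return KSFT_SKIP;
        }
        printf("PAUTH available; the PAC tests would run here\n");
        return 0;
}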
When skipping the tests due to a lack of system support for MTE, we
currently print a message saying FAIL, which makes it look like the test
failed even though it actually reported KSFT_SKIP. That creates some
confusion. Change the message to say SKIP instead so things are clearer.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
tools/testing/selftests/arm64/mte/mte_common_util.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
index f50ac31920d1..0328a1e08f65 100644
--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
+++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
@@ -298,7 +298,7 @@ int mte_default_setup(void)
int ret;
if (!(hwcaps2 & HWCAP2_MTE)) {
- ksft_print_msg("FAIL: MTE features unavailable\n");
+ ksft_print_msg("SKIP: MTE features unavailable\n");
return KSFT_SKIP;
}
/* Get current mte mode */
--
2.20.1
Fix a few issues reported by 0Day/LKP while running selftests/bpf.
Changelog:
V2:
- folded previous similar standalone patch to [1/5], and add acked tag
from Song Liu
- add acked tag to [2/5], [3/5] from Song Liu
- [4/5]: move test_bpftool.py to TEST_PROGS_EXTENDED, since files in
TEST_GEN_PROGS_EXTENDED are generated by make; otherwise it breaks the
out-of-tree install:
'make O=/kselftest-build SKIP_TARGETS= V=1 -C tools/testing/selftests install INSTALL_PATH=/kselftest-install'
- [5/5]: new patch
Li Zhijian (5):
selftests/bpf: enlarge select() timeout for test_maps
selftests/bpf: make test_doc_build.sh work from script directory
selftests/bpf: add default bpftool built by selftests to PATH
selftests/bpf: add missing files required by test_bpftool.sh for
installing
selftests/bpf: exit with KSFT_SKIP if no Makefile found
tools/testing/selftests/bpf/Makefile | 4 +++-
tools/testing/selftests/bpf/test_bpftool.sh | 6 ++++++
tools/testing/selftests/bpf/test_bpftool_build.sh | 2 +-
tools/testing/selftests/bpf/test_doc_build.sh | 10 ++++++++--
tools/testing/selftests/bpf/test_maps.c | 2 +-
5 files changed, 19 insertions(+), 5 deletions(-)
--
2.32.0
Previously, it failed as below:
-------------
root@lkp-skl-d01 /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf# ./test_doc_build.sh
++ realpath --relative-to=/opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf ./test_doc_build.sh
+ SCRIPT_REL_PATH=test_doc_build.sh
++ dirname test_doc_build.sh
+ SCRIPT_REL_DIR=.
++ realpath /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/./../../../../
+ KDIR_ROOT_DIR=/opt/rootfs/v5.14-rc4
+ cd /opt/rootfs/v5.14-rc4
+ for tgt in docs docs-clean
+ make -s -C /opt/rootfs/v5.14-rc4/. docs
make: *** No rule to make target 'docs'. Stop.
+ for tgt in docs docs-clean
+ make -s -C /opt/rootfs/v5.14-rc4/. docs-clean
make: *** No rule to make target 'docs-clean'. Stop.
-----------
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
tools/testing/selftests/bpf/test_doc_build.sh | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/test_doc_build.sh b/tools/testing/selftests/bpf/test_doc_build.sh
index ed12111cd2f0..d67ced95a6cf 100755
--- a/tools/testing/selftests/bpf/test_doc_build.sh
+++ b/tools/testing/selftests/bpf/test_doc_build.sh
@@ -4,9 +4,10 @@ set -e
# Assume script is located under tools/testing/selftests/bpf/. We want to start
# build attempts from the top of kernel repository.
-SCRIPT_REL_PATH=$(realpath --relative-to=$PWD $0)
+SCRIPT_REL_PATH=$(realpath $0)
SCRIPT_REL_DIR=$(dirname $SCRIPT_REL_PATH)
-KDIR_ROOT_DIR=$(realpath $PWD/$SCRIPT_REL_DIR/../../../../)
+KDIR_ROOT_DIR=$(realpath $SCRIPT_REL_DIR/../../../../)
+SCRIPT_REL_DIR=$(dirname $(realpath --relative-to=$KDIR_ROOT_DIR $SCRIPT_REL_PATH))
cd $KDIR_ROOT_DIR
for tgt in docs docs-clean; do
--
2.32.0
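The fix amounts to resolving the script's own location first and only then
walking up to the kernel root, instead of resolving relative to $PWD. A
rough C rendering of that path logic, for illustration only:

#include <libgen.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
        char script[PATH_MAX], joined[PATH_MAX];
        char *root;

        /* Resolve the script itself first, then walk up four levels --
         * the ordering that the shell fix above restores. */
        if (argc < 1 || !realpath(argv[0], script)) {
                perror("realpath");
                return 1;
        }
        snprintf(joined, sizeof(joined), "%s/../../../..", dirname(script));
        root = realpath(joined, NULL);
        printf("kernel root: %s\n", root ? root : "(unresolved)");
        free(root);
        return 0;
}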
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
Add a function to remove all dynamic events from the tracing directory. It
requires a loop as some of the dynamic events may depend on others being
removed first. Also add a safety that prevents it from looping infinitely
due to a bug where an event never gets removed.
Link: https://lkml.kernel.org/r/20210819041842.696873153@goodmis.org
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
.../testing/selftests/ftrace/test.d/functions | 22 +++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index a6fac927ee82..f68d336b961b 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -83,6 +83,27 @@ clear_synthetic_events() { # reset all current synthetic events
done
}
+clear_dynamic_events() { # reset all current dynamic events
+ again=1
+ stop=1
+ # loop multiple times as some events require others to be removed first
+ while [ $again -eq 1 ]; do
+ stop=$((stop+1))
+ # Prevent infinite loops
+ if [ $stop -gt 10 ]; then
+ break;
+ fi
+ again=2
+ grep -v '^#' dynamic_events|
+ while read line; do
+ del=`echo $line | sed -e 's/^.\([^ ]*\).*/-\1/'`
+ if ! echo "$del" >> dynamic_events; then
+ again=1
+ fi
+ done
+ done
+}
+
initialize_ftrace() { # Reset ftrace to initial-state
# As the initial state, ftrace will be set to nop tracer,
# no events, no triggers, no filters, no function filters,
@@ -93,6 +114,7 @@ initialize_ftrace() { # Reset ftrace to initial-state
reset_events_filter
reset_ftrace_filter
disable_events
+ clear_dynamic_events
[ -f set_event_pid ] && echo > set_event_pid
[ -f set_ftrace_pid ] && echo > set_ftrace_pid
[ -f set_ftrace_notrace ] && echo > set_ftrace_notrace
--
2.30.2
The 0Day robot observed that it easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since this test will be scheduled by 0Day onto a random host that could have
only a few CPUs (2-8), enlarge the timeout to avoid a false NG (failure) report.
In practice, I tried pinning it to a single CPU with 'taskset 0x01 ./test_maps'
and found that 10s is likely enough, but I still prefer the larger value of 30.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
V2: update to 30 seconds;
3s is sometimes not enough on a very busy host
taskset 1,1 ./test_maps 9
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
Failed sockmap unexpected timeout
taskset 1,1 ./test_maps 10
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
test_array_map_batch_ops:PASS
test_array_percpu_map_batch_ops:PASS
test_htab_map_batch_ops:PASS
test_htab_percpu_map_batch_ops:PASS
test_lpm_trie_map_batch_ops:PASS
test_sk_storage_map:PASS
test_maps: OK, 0 SKIPPED
taskset 0x01 ./test_maps 9
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
test_array_map_batch_ops:PASS
test_array_percpu_map_batch_ops:PASS
test_htab_map_batch_ops:PASS
test_htab_percpu_map_batch_ops:PASS
test_lpm_trie_map_batch_ops:PASS
test_sk_storage_map:PASS
test_maps: OK, 0 SKIPPED
taskset 0x01 ./test_maps 10
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
test_array_map_batch_ops:PASS
test_array_percpu_map_batch_ops:PASS
test_htab_map_batch_ops:PASS
test_htab_percpu_map_batch_ops:PASS
test_lpm_trie_map_batch_ops:PASS
test_sk_storage_map:PASS
test_maps: OK, 0 SKIPPED
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 30cbf5d98f7d..de58a3070eea 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -985,7 +985,7 @@ static void test_sockmap(unsigned int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 30;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.32.0
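The code being tuned is a plain select() timeout. A self-contained sketch of
the same pattern, with a socketpair standing in for the sockmap sockets,
showing where to.tv_sec bounds the wait:

#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct timeval to = { .tv_sec = 30, .tv_usec = 0 };
        int sv[2], s;
        fd_set w;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv)) {
                perror("socketpair");
                return 1;
        }
        write(sv[0], "x", 1);   /* the peer makes data available */

        FD_ZERO(&w);
        FD_SET(sv[1], &w);
        /* On a loaded host the peer may not be scheduled for seconds,
         * so a 1s budget here produces spurious timeouts. */
        s = select(sv[1] + 1, &w, NULL, NULL, &to);
        if (s <= 0)
                printf("timed out or failed\n");
        else
                printf("data ready well within the window\n");
        return 0;
}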
Hi Greg,
Can you please pull these LKDTM changes for drivers/misc? I forgot
to flush this queue of enhancements earlier. :) Here's what I've got
built up, mostly tweaks for kernelCI, configs, consolidation. This also
includes the __alloc_size hint adjustment I'd sent earlier, now fixed
with a better comment.
Thanks!
-Kees
Kees Cook (4):
lkdtm/bugs: Add ARRAY_BOUNDS to selftests
lkdtm/fortify: Consolidate FORTIFY_SOURCE tests
lkdtm: Add kernel version to failure hints
lkdtm/heap: Avoid __alloc_size hint warning for
VMALLOC_LINEAR_OVERFLOW
drivers/misc/lkdtm/bugs.c | 51 +-----------------------
drivers/misc/lkdtm/core.c | 4 +-
drivers/misc/lkdtm/fortify.c | 53 +++++++++++++++++++++++++
drivers/misc/lkdtm/heap.c | 9 ++++-
drivers/misc/lkdtm/lkdtm.h | 24 ++++++-----
tools/testing/selftests/lkdtm/config | 2 +
tools/testing/selftests/lkdtm/tests.txt | 3 ++
7 files changed, 83 insertions(+), 63 deletions(-)
--
2.30.2
This patchset is an implementation of the futex2 interface on top of the
existing futex.c code.
* What happened to the current futex()?
futex() is implemented using a multiplexed interface that doesn't
scale well and gives people headaches. We don't want to add more
features there.
* New features at futex2()
** NUMA-awareness
In the current implementation, all futex kernel-side infrastructure is
stored on a single NUMA node. Given that, all futex() calls issued by
processors that aren't located on that node pay a memory access
penalty.
** Variable sized futexes
Futexes are used to implement atomic operations in userspace.
Supporting 8, 16, 32 and 64 bit sized futexes allows user libraries to
implement all those sizes in a performant way. Thanks Boost devs for
feedback: https://lists.boost.org/Archives/boost/2021/05/251508.php
Embedded systems or anything with memory constraints could benefit from
using smaller sizes for the futex userspace integer.
** Wait on multiple futexes
Proton (a set of compatibility tools to run Windows games) carries a fork
of Wine that benefits from this feature to implement WaitForMultipleObjects
from Win32 in a performant way. Native game engines will benefit from this
as well,
given that this is a common wait pattern for games.
* The interface
The new interface has one syscall per operation as opposed to the
current multiplexed one. The details can be found in the following
patches, but this is a high level summary of what the interface can do:
- Supports wake/wait semantics, as in futex()
- Supports requeue operations, similarly as FUTEX_CMP_REQUEUE, but with
individual flags for each address
- Supports waiting for a vector of futexes, using a new syscall named
futex_waitv()
- The following features will be implemented in future patchset versions:
- Supports variable sized futexes (8bits, 16bits, 32bits and 64bits)
- Supports NUMA-awareness operations, where the user can specify on
which memory node they would like to operate
* The patchset
Given that futex2 reuses futex code, the patches make futex.c functions
public and modify them as needed.
This patchset can be also found at my git tree:
https://gitlab.collabora.com/tonyk/linux/-/tree/futex2-dev
- Patch 1: Implements 32bit wait/wake
- Patches 2-3: Implement waitv and requeue.
- Patch 4: Add a documentation file which details the interface and
the internal implementation.
- Patches 5-10: Selftests for all operations along with perf
support for futex2.
- Patch 11: Proof of concept of waking threads at waitpid(), not to be
merged as it is.
* Testing
** Stability
- glibc[1]: nptl's low level locking was modified to use futex2 API
(except for PI). All nptl/ tests passed.
- Proton's Wine: Proton/Wine was modified in order to use futex2() for the
emulation of Windows NT sync mechanisms based on futex, called "fsync".
Triple-A games with huge CPU loads and tons of parallel jobs worked
as expected when compared with the previous FUTEX_WAIT_MULTIPLE
implementation at futex(). Some games issue 42k futex2() calls
per second.
- perf: The perf benchmarks tests can also be used to stress the
interface, and they can be found in this patchset.
[1] https://gitlab.collabora.com/tonyk/glibc/-/tree/futex2-dev
** Performance
- Using perf, no significant difference was measured when comparing
futex() and futex2() for the following benchmarks: hash, wake and
wake-parallel.
- I measured a 15% overhead for perf's requeue benchmark, comparing
futex2() to futex(). Requeue patch provides more details about why this
happens and how to overcome this.
* Changelog
Changes from v4:
- Use existing futex.c code when possible
- Cleaned up cover letter, check v4 for a more verbose version
v4: https://lore.kernel.org/lkml/20210603195924.361327-1-andrealmeid@collabora.…
André Almeida (11):
futex2: Implement wait and wake functions
futex2: Implement vectorized wait
futex2: Implement requeue operation
docs: locking: futex2: Add documentation
selftests: futex2: Add wake/wait test
selftests: futex2: Add timeout test
selftests: futex2: Add wouldblock test
selftests: futex2: Add waitv test
selftests: futex2: Add requeue test
perf bench: Add futex2 benchmark tests
kernel: Enable waitpid() for futex2
Documentation/locking/futex2.rst | 185 ++++++
Documentation/locking/index.rst | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 4 +
arch/x86/entry/syscalls/syscall_64.tbl | 4 +
include/linux/compat.h | 23 +
include/linux/futex.h | 103 ++++
include/linux/syscalls.h | 8 +
include/uapi/asm-generic/unistd.h | 11 +-
include/uapi/linux/futex.h | 27 +
init/Kconfig | 7 +
kernel/Makefile | 1 +
kernel/fork.c | 2 +
kernel/futex.c | 111 +---
kernel/futex2.c | 566 ++++++++++++++++++
kernel/sys_ni.c | 9 +
tools/arch/x86/include/asm/unistd_64.h | 12 +
tools/perf/bench/bench.h | 4 +
tools/perf/bench/futex-hash.c | 24 +-
tools/perf/bench/futex-requeue.c | 57 +-
tools/perf/bench/futex-wake-parallel.c | 41 +-
tools/perf/bench/futex-wake.c | 37 +-
tools/perf/bench/futex.h | 47 ++
tools/perf/builtin-bench.c | 18 +-
.../selftests/futex/functional/.gitignore | 3 +
.../selftests/futex/functional/Makefile | 6 +-
.../futex/functional/futex2_requeue.c | 164 +++++
.../selftests/futex/functional/futex2_wait.c | 195 ++++++
.../selftests/futex/functional/futex2_waitv.c | 154 +++++
.../futex/functional/futex_wait_timeout.c | 24 +-
.../futex/functional/futex_wait_wouldblock.c | 33 +-
.../testing/selftests/futex/functional/run.sh | 6 +
.../selftests/futex/include/futex2test.h | 112 ++++
32 files changed, 1865 insertions(+), 134 deletions(-)
create mode 100644 Documentation/locking/futex2.rst
create mode 100644 kernel/futex2.c
create mode 100644 tools/testing/selftests/futex/functional/futex2_requeue.c
create mode 100644 tools/testing/selftests/futex/functional/futex2_wait.c
create mode 100644 tools/testing/selftests/futex/functional/futex2_waitv.c
create mode 100644 tools/testing/selftests/futex/include/futex2test.h
--
2.32.0
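For contrast with the one-syscall-per-operation design proposed above, a
minimal sketch of the existing multiplexed futex() interface (raw syscall,
since glibc provides no wrapper):

#include <linux/futex.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static long futex(uint32_t *uaddr, int op, uint32_t val)
{
        /* One multiplexed entry point for every operation -- the design
         * that futex2 splits into one syscall per operation. */
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

int main(void)
{
        uint32_t word = 1;

        /* Fails fast with EWOULDBLOCK: the word is 1, not the expected 0,
         * so the kernel refuses to put us to sleep. */
        long ret = futex(&word, FUTEX_WAIT, 0);
        printf("FUTEX_WAIT -> %ld\n", ret);

        /* Wake at most one waiter (there are none here). */
        ret = futex(&word, FUTEX_WAKE, 1);
        printf("FUTEX_WAKE woke %ld waiters\n", ret);
        return 0;
}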
The 0Day robot observed that it easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since this test will be scheduled by 0Day onto a random host that could have
only a few CPUs (2-8), enlarge the timeout to avoid a false NG (failure) report.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 30cbf5d98f7d..72673e0428fd 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -985,7 +985,7 @@ static void test_sockmap(unsigned int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 3;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.32.0
From: Bongsu Jeon <bongsu.jeon(a)samsung.com>
This series updates the virtual NCI device driver and NCI selftest code
and adds an NCI test case to selftests.
1/8 to use wait queue in virtual device driver.
2/8 to remove the polling code in selftests.
3/8 to fix a typo.
4/8 to fix the next nlattr offset calculation.
5/8 to fix the wrong condition in if statement.
6/8 to add a flag parameter to the Netlink send function.
7/8 to extract the start/stop discovery function.
8/8 to add the NCI testcase in selftests.
v2:
1/8
- change the commit message.
- add defensive code while reading a frame.
3/8
- change the commit message.
- separate the commit into 3/8~8/8.
Bongsu Jeon (8):
nfc: virtual_ncidev: Use wait queue instead of polling
selftests: nci: Remove the polling code to read a NCI frame
selftests: nci: Fix the typo
selftests: nci: Fix the code for next nlattr offset
selftests: nci: Fix the wrong condition
selftests: nci: Add the flags parameter for the send_cmd_mt_nla
selftests: nci: Extract the start/stop discovery function
selftests: nci: Add the NCI testcase reading T4T Tag
drivers/nfc/virtual_ncidev.c | 9 +-
tools/testing/selftests/nci/nci_dev.c | 416 ++++++++++++++++++++++----
2 files changed, 362 insertions(+), 63 deletions(-)
--
2.32.0
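Patch 1/8 replaces a read-and-retry polling loop with a wait queue on the
kernel side. The userspace analogue of that change is blocking in poll()
with a bounded timeout instead of sleeping and retrying; a generic sketch,
with stdin standing in for the NCI device node:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
        char buf[64];
        int ready;

        /* Block until a frame is readable, with a 5s bound -- no
         * sleep-and-retry loop. */
        ready = poll(&pfd, 1, 5000);
        if (ready > 0 && (pfd.revents & POLLIN)) {
                ssize_t n = read(pfd.fd, buf, sizeof(buf));
                printf("read %zd bytes\n", n);
        } else {
                printf("no frame within the timeout\n");
        }
        return 0;
}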
The goal of these patches is to add a test case for SGX reserved
memory oversubscription, i.e. to make sure that the page reclaimer
and the page fault handler are working correctly.
Change Log
==========
v3:
* Reorganized the patch set into smaller pieces, and refactored the code
so that the test enclave can be created inside each test case. Added a
new test case unclobbered_vdso_oversubscribed that creates a large enough
heap to fill all of the available SGX reserved memory (EPC).
Jarkko Sakkinen (8):
x86/sgx: Add /sys/kernel/debug/x86/sgx_total_mem
selftests/sgx: Assign source for each segment
selftests/sgx: Make data measurement for an enclave segment optional
selftests/sgx: Create a heap for the test enclave
selftests/sgx: Dump segments and /proc/self/maps only on failure
selftests/sgx: Encpsulate the test enclave creation
selftests/sgx: Move setup_test_encl() to each TEST_F()
selftests/sgx: Add a new kselftest: unclobbered_vdso_oversubscribed
Documentation/x86/sgx.rst | 6 ++
arch/x86/kernel/cpu/sgx/main.c | 10 +-
tools/testing/selftests/sgx/load.c | 40 ++++++--
tools/testing/selftests/sgx/main.c | 123 +++++++++++++++++++-----
tools/testing/selftests/sgx/main.h | 7 +-
tools/testing/selftests/sgx/sigstruct.c | 12 ++-
6 files changed, 159 insertions(+), 39 deletions(-)
--
2.32.0
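A consumer of the new debugfs file could size its heap as in the sketch
below. The path comes from the patch title; treating the contents as a
single decimal byte count is an assumption made for illustration:

#include <stdio.h>

#define KSFT_SKIP 4

int main(void)
{
        unsigned long long total = 0;
        FILE *f = fopen("/sys/kernel/debug/x86/sgx_total_mem", "r");

        if (!f) {
                perror("sgx_total_mem");
                return KSFT_SKIP;       /* no SGX or no debugfs: skip */
        }
        if (fscanf(f, "%llu", &total) != 1)
                total = 0;
        fclose(f);
        printf("EPC total: %llu bytes; a heap at least this large forces reclaim\n",
               total);
        return 0;
}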
Extend KSM self tests with a performance benchmark. These tests are not
part of regular regression testing, as they are mainly intended to be
used by developers making changes to the memory management subsystem.
This patchset is a respin of the previous series:
https://lore.kernel.org/lkml/cover.1627828548.git.zhansayabagdaulet@gmail.c…
Zhansaya Bagdauletkyzy (2):
selftests: vm: add KSM merging time test
selftests: vm: add COW time test for KSM pages
v1 -> v2:
- replace MB with MiB
- address COW test review comments
tools/testing/selftests/vm/ksm_tests.c | 152 ++++++++++++++++++++++++-
1 file changed, 148 insertions(+), 4 deletions(-)
--
2.25.1
Introduce selftests to validate the functionality of KSM. The tests are
run on private anonymous pages. Since some KSM tunables are modified,
their starting values are saved and restored after testing. At the
start, run is set to 2 to ensure that only test pages will be merged (we
assume that no applications make madvise syscalls in the background). If
the KSM config is not enabled, all tests will be skipped.
Zhansaya Bagdauletkyzy (4):
selftests: vm: add KSM merge test
selftests: vm: add KSM unmerge test
selftests: vm: add KSM zero page merging test
selftests: vm: add KSM merging across nodes test
v1 -> v2:
- add a test to check KSM unmerging
- add a test to check merging of zero pages
- add a test to check merging in different NUMA nodes
- include command line options for each test
- new options to specify use_zero_pages and merge_across_nodes
- run each test case in run_vmtests.sh
- add some helper functions to make the code more compact:
allocate_memory(), ksm_do_scan(), ksm_merge_pages()
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +
tools/testing/selftests/vm/ksm_tests.c | 516 ++++++++++++++++++++++
tools/testing/selftests/vm/run_vmtests.sh | 96 ++++
4 files changed, 616 insertions(+)
create mode 100644 tools/testing/selftests/vm/ksm_tests.c
--
2.25.1
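The heart of what these tests exercise is the MADV_MERGEABLE opt-in. A
minimal standalone sketch, assuming CONFIG_KSM is enabled and
/sys/kernel/mm/ksm/run is set as described above:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 2 * 1024 * 1024;
        char *map = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        memset(map, 0x42, len); /* identical pages give ksmd work */

        /* Opt the range in; ksmd merges it only while
         * /sys/kernel/mm/ksm/run is 1. */
        if (madvise(map, len, MADV_MERGEABLE)) {
                perror("madvise(MADV_MERGEABLE)");
                return 1;
        }
        puts("range registered with KSM");
        munmap(map, len);
        return 0;
}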
From: Bongsu Jeon <bongsu.jeon(a)samsung.com>
This series updates the virtual NCI device driver and NCI selftest code
and adds an NCI test case to selftests.
1/3 is the patch to use wait queue in virtual device driver.
2/3 is the patch to remove the polling code in selftests.
3/3 is the patch to add the NCI testcase in selftests.
Bongsu Jeon (3):
nfc: Change the virtual NCI device driver to use Wait Queue
selftests: Remove the polling code to read a NCI frame
selftests: Add the NCI testcase reading T4T Tag
drivers/nfc/virtual_ncidev.c | 10 +-
tools/testing/selftests/nci/nci_dev.c | 417 ++++++++++++++++++++++----
2 files changed, 362 insertions(+), 65 deletions(-)
--
2.32.0
v5:
- Rebased to the latest for-5.15 branch of the cgroup git tree and dropped
the 1st v4 patch as it has been merged.
- Update patch 1 to always allow changing a partition root back to member
even if it invalidates child partitions underneath it.
- Adjust the empty effective cpu partition patch to not allow 0 effective
cpus for a terminal partition (which would make it invalid).
- Add a new patch to enable reading of cpuset.cpus.partition to display
the reason that causes an invalid partition.
- Adjust the documentation and testing patch accordingly.
v4:
- Rebased to the for-5.15 branch of cgroup git tree and dropped the
first 3 patches of v3 series which have been merged.
- Besides prohibiting violation of the cpu exclusivity rule, allow arbitrary
changes to cpuset.cpus of a partition root and force the partition root
to become invalid in case any of the partition root constraints
are violated. The documentation file and self test are modified
accordingly.
This patchset makes four enhancements to the cpuset v2 code.
Patch 1: Properly handle partition root tree and make partition
invalid in case changes to cpuset.cpus violate any of the partition
root constraints.
Patch 2: Enable the "cpuset.cpus.partition" file to show the reason
that causes an invalid partition, like "root invalid (No cpu available
due to hotplug)".
Patch 3: Add a new partition state "isolated" to create a partition
root without load balancing. This is for handling intermitten workloads
that have a strict low latency requirement.
Patch 4: Allow partition roots that are not the top cpuset to distribute
all their cpus to child partitions as long as there is no task associated
with that partition root. This allows more flexibility for middleware
to manage multiple partitions.
Patch 5 updates the cgroup-v2.rst file accordingly. Patch 6 adds a new
cpuset test to test the new cpuset partition code.
Waiman Long (6):
cgroup/cpuset: Properly transition to invalid partition
cgroup/cpuset: Show invalid partition reason string
cgroup/cpuset: Add a new isolated cpus.partition type
cgroup/cpuset: Allow non-top parent partition to distribute out all
CPUs
cgroup/cpuset: Update description of cpuset.cpus.partition in
cgroup-v2.rst
kselftest/cgroup: Add cpuset v2 partition root state test
Documentation/admin-guide/cgroup-v2.rst | 116 +--
kernel/cgroup/cpuset.c | 347 ++++++---
tools/testing/selftests/cgroup/Makefile | 5 +-
.../selftests/cgroup/test_cpuset_prs.sh | 663 ++++++++++++++++++
tools/testing/selftests/cgroup/wait_inotify.c | 86 +++
5 files changed, 1068 insertions(+), 149 deletions(-)
create mode 100755 tools/testing/selftests/cgroup/test_cpuset_prs.sh
create mode 100644 tools/testing/selftests/cgroup/wait_inotify.c
--
2.18.1
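The new test script drives the partition interface through ordinary cgroup
file reads and writes. A hedged C sketch of one such interaction, assuming
cgroup2 is mounted at /sys/fs/cgroup and a hypothetical child cgroup named
"test" already exists:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* "test" is a hypothetical, pre-created child cgroup. */
        const char *path = "/sys/fs/cgroup/test/cpuset.cpus.partition";
        char buf[64];
        ssize_t n;
        int fd = open(path, O_RDWR);

        if (fd < 0) {
                perror(path);
                return 1;
        }
        /* Ask for partition root. */
        if (write(fd, "root", 4) < 0)
                perror("write");
        /* Read back: "root", "member", "isolated", or, after this
         * series, a reason string like "root invalid (...)". */
        lseek(fd, 0, SEEK_SET);
        n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0)
                printf("partition state: %.*s", (int)n, buf);
        close(fd);
        return 0;
}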
This patch series adds support for the unix stream
type to sockmap. Sockmap already supports the TCP,
UDP, and unix dgram types. The unix stream support
is similar to unix dgram.
Also add selftests for the unix stream type in sockmap tests.
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/unix/af_unix.c | 91 +++++++++++++++---
net/unix/unix_bpf.c | 93 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
4 files changed, 187 insertions(+), 53 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
v2 -> v3 :
- check for stream type in update_proto
- remove intermediate variable in __unix_stream_recvmsg
- fix compile warning in unix_stream_recvmsg
v3 -> v4 :
- remove sk_is_unix_stream, just check TCP_ESTABLISHED for UNIX sockets.
- add READ_ONCE in unix_dgram_recvmsg
- remove type check in unix_stream_bpf_update_proto
v4 -> v5 :
- add two missing READ_ONCE for sk_prot.
v5 -> v6 :
- fix READ_ONCE by reading to a local variable first.
--
2.20.1
This patch series adds support for the unix stream
type to sockmap. Sockmap already supports the TCP,
UDP, and unix dgram types. The unix stream support
is similar to unix dgram.
Also add selftests for the unix stream type in sockmap tests.
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/unix/af_unix.c | 91 +++++++++++++++---
net/unix/unix_bpf.c | 93 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
4 files changed, 187 insertions(+), 53 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
v2 -> v3 :
- check for stream type in update_proto
- remove intermediate variable in __unix_stream_recvmsg
- fix compile warning in unix_stream_recvmsg
v3 -> v4 :
- remove sk_is_unix_stream, just check TCP_ESTABLISHED for UNIX sockets.
- add READ_ONCE in unix_dgram_recvmsg
- remove type check in unix_stream_bpf_update_proto
v4 -> v5 :
- add two missing READ_ONCE for sk_prot.
v5 -> v6 :
- fix READ_ONCE by reading to a local variable first.
For the series:
Acked-by: John Fastabend <john.fastabend(a)gmail.com>
Acked-by: Jakub Sitnicki <jakub(a)cloudflare.com>
Also rebased on bpf-next
--
2.20.1
Hi Linus,
Please pull the following Kselftest fixes update for Linux 5.14-rc6
This Kselftest fixes update for Linux 5.14-rc6 consists of a single patch
to the sgx test to fix the Q1 and Q2 calculation.
diff is attached.
thanks,
-- Shuah
----------------------------------------------------------------
The following changes since commit 2734d6c1b1a089fb593ef6a23d4b70903526fe0c:
Linux 5.14-rc2 (2021-07-18 14:13:49 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest tags/linux-kselftest-fixes-5.14-rc6
for you to fetch changes up to 567c39047dbee341244fe3bf79fea24ee0897ff9:
selftests/sgx: Fix Q1 and Q2 calculation in sigstruct.c (2021-07-30 17:20:01 -0600)
----------------------------------------------------------------
linux-kselftest-fixes-5.14-rc6
This Kselftest fixes update for Linux 5.14-rc6 consists of a single patch
to the sgx test to fix the Q1 and Q2 calculation.
----------------------------------------------------------------
Tianjia Zhang (1):
selftests/sgx: Fix Q1 and Q2 calculation in sigstruct.c
tools/testing/selftests/sgx/sigstruct.c | 41 +++++++++++++++++----------------
1 file changed, 21 insertions(+), 20 deletions(-)
----------------------------------------------------------------
v4:
- Rebased to the for-5.15 branch of cgroup git tree and dropped the
first 3 patches of v3 series which have been merged.
- Besides prohibiting violation of the cpu exclusivity rule, allow arbitrary
changes to cpuset.cpus of a partition root and force the partition root
to become invalid in case any of the partition root constraints
are violated. The documentation file and self test are modified
accordingly.
v3:
- Add two new patches (patches 2 & 3) to fix bugs found during the
testing process.
- Add a new patch to enable inotify event notification when partition
become invalid.
- Add a test to test event notification when partition become invalid.
v2:
- Drop v1 patch 1.
- Break out some cosmetic changes into a separate patch (patch #1).
- Add a new patch to clarify the transition to invalid partition root
is mainly caused by hotplug events.
- Enhance the partition root state test including CPU online/offline
behavior and fix issues found by the test.
This patchset makes four enhancements to the cpuset v2 code.
Patch 1: Enable event notification on "cpuset.cpus.partition" whenever
the state of a partition changes.
Patch 2: Properly handle partition root tree and make partition
invalid in case changes to cpuset.cpus violate any of the partition
root constraints.
Patch 3: Add a new partition state "isolated" to create a partition
root without load balancing. This is for handling intermittent workloads
that have a strict low latency requirement.
Patch 4: Allow partition roots that are not the top cpuset to distribute
all their cpus to child partitions as long as there is no task associated
with that partition root. This allows more flexibility for middleware
to manage multiple partitions.
Patch 5 updates the cgroup-v2.rst file accordingly. Patch 6 adds a new
cpuset test to test the new cpuset partition code.
Waiman Long (6):
cgroup/cpuset: Enable event notification when partition state changes
cgroup/cpuset: Properly handle partition root tree
cgroup/cpuset: Add a new isolated cpus.partition type
cgroup/cpuset: Allow non-top parent partition root to distribute out
all CPUs
cgroup/cpuset: Update description of cpuset.cpus.partition in
cgroup-v2.rst
kselftest/cgroup: Add cpuset v2 partition root state test
Documentation/admin-guide/cgroup-v2.rst | 104 +--
kernel/cgroup/cpuset.c | 282 +++++---
tools/testing/selftests/cgroup/Makefile | 5 +-
.../selftests/cgroup/test_cpuset_prs.sh | 632 ++++++++++++++++++
tools/testing/selftests/cgroup/wait_inotify.c | 86 +++
5 files changed, 980 insertions(+), 129 deletions(-)
create mode 100755 tools/testing/selftests/cgroup/test_cpuset_prs.sh
create mode 100644 tools/testing/selftests/cgroup/wait_inotify.c
--
2.18.1
When a number of tests fail, it can be useful to get higher-level
statistics of how many tests are failing (or how many parameters are
failing in parameterised tests), and in what cases or suites. This is
already done by some non-KUnit tests, so add support for automatically
generating these for KUnit tests.
This change adds a 'kunit.stats_enabled' switch which has three values:
- 0: No stats are printed (current behaviour)
- 1: Stats are printed only for tests/suites with more than one
subtest (new default)
- 2: Always print test statistics
For parameterised tests, the summary line looks as follows:
" # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16"
For test suites, there are two lines looking like this:
"# ext4_inode_test: pass:1 fail:0 skip:0 total:1"
"# Totals: pass:16 fail:0 skip:0 total:16"
The first line gives the number of direct subtests, the second "Totals"
line is the accumulated sum of all tests and test parameters.
This format is based on the one used by kselftest[1].
[1]: https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/ksel…
Signed-off-by: David Gow <davidgow(a)google.com>
---
This is the long-awaited v2 of the test statistics patch:
https://lore.kernel.org/linux-kselftest/20201211072319.533803-1-davidgow@go…
It updates the patch to apply on current mainline kernels, takes skipped
tests into account, changes the output format to better match what
kselftest uses, and addresses some of the comments from v1.
Please let me know what you think, in particular:
- Is this sufficient to assuage any worries about porting tests to
KUnit?
- Are we printing too many stats by default? For a lot of existing tests
many of them are useless. I'm particularly curious about the separate
"Totals" line, versus the per-suite line -- is that useful? Should it
only be printed when the totals differ?
- Is the output format sufficiently legible for people and/or tools
which may want to parse it?
Cheers,
-- David
Changelog:
Changes since v1:
https://lore.kernel.org/linux-kselftest/20201211072319.533803-1-davidgow@go…
- Rework to use a new struct kunit_result_stats, with helper functions
for adding results, accumulating them over nested structures, etc.
- Support skipped tests, report them separately from failures and
passes.
- New output format to better match kselftest:
- "pass:n fail:n skip:n total:n"
- Changes to stats_enabled parameter:
- Now a module parameter, with description
- Default "1" option now prints even when no tests fail.
- Improved parser fix which doesn't break crashed test detection.
---
lib/kunit/test.c | 109 ++++++++++++++++++++++++++++
tools/testing/kunit/kunit_parser.py | 2 +-
2 files changed, 110 insertions(+), 1 deletion(-)
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index d79ecb86ea57..f246b847024e 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -10,6 +10,7 @@
#include <kunit/test-bug.h>
#include <linux/kernel.h>
#include <linux/kref.h>
+#include <linux/moduleparam.h>
#include <linux/sched/debug.h>
#include <linux/sched.h>
@@ -51,6 +52,51 @@ void __kunit_fail_current_test(const char *file, int line, const char *fmt, ...)
EXPORT_SYMBOL_GPL(__kunit_fail_current_test);
#endif
+/*
+ * KUnit statistic mode:
+ * 0 - disabled
+ * 1 - only when there is more than one subtest
+ * 2 - enabled
+ */
+static int kunit_stats_enabled = 1;
+module_param_named(stats_enabled, kunit_stats_enabled, int, 0644);
+MODULE_PARM_DESC(stats_enabled,
+ "Print test stats: never (0), only for multiple subtests (1), or always (2)");
+
+struct kunit_result_stats {
+ unsigned long passed;
+ unsigned long skipped;
+ unsigned long failed;
+ unsigned long total;
+};
+
+static bool kunit_should_print_stats(struct kunit_result_stats stats)
+{
+ if (kunit_stats_enabled == 0)
+ return false;
+
+ if (kunit_stats_enabled == 2)
+ return true;
+
+ return (stats.total > 1);
+}
+
+static void kunit_print_test_stats(struct kunit *test,
+ struct kunit_result_stats stats)
+{
+ if (!kunit_should_print_stats(stats))
+ return;
+
+ kunit_log(KERN_INFO, test,
+ KUNIT_SUBTEST_INDENT
+ "# %s: pass:%lu fail:%lu skip:%lu total:%lu",
+ test->name,
+ stats.passed,
+ stats.failed,
+ stats.skipped,
+ stats.total);
+}
+
/*
* Append formatted message to log, size of which is limited to
* KUNIT_LOG_SIZE bytes (including null terminating byte).
@@ -393,15 +439,69 @@ static void kunit_run_case_catch_errors(struct kunit_suite *suite,
test_case->status = KUNIT_SUCCESS;
}
+static void kunit_print_suite_stats(struct kunit_suite *suite,
+ struct kunit_result_stats suite_stats,
+ struct kunit_result_stats param_stats)
+{
+ if (kunit_should_print_stats(suite_stats)) {
+ kunit_log(KERN_INFO, suite,
+ "# %s: pass:%lu fail:%lu skip:%lu total:%lu",
+ suite->name,
+ suite_stats.passed,
+ suite_stats.failed,
+ suite_stats.skipped,
+ suite_stats.total);
+ }
+
+ if (kunit_should_print_stats(param_stats)) {
+ kunit_log(KERN_INFO, suite,
+ "# Totals: pass:%lu fail:%lu skip:%lu total:%lu",
+ param_stats.passed,
+ param_stats.failed,
+ param_stats.skipped,
+ param_stats.total);
+ }
+}
+
+static void kunit_update_stats(struct kunit_result_stats *stats,
+ enum kunit_status status)
+{
+ switch (status) {
+ case KUNIT_SUCCESS:
+ stats->passed++;
+ break;
+ case KUNIT_SKIPPED:
+ stats->skipped++;
+ break;
+ case KUNIT_FAILURE:
+ stats->failed++;
+ break;
+ }
+
+ stats->total++;
+}
+
+static void kunit_accumulate_stats(struct kunit_result_stats *total,
+ struct kunit_result_stats add)
+{
+ total->passed += add.passed;
+ total->skipped += add.skipped;
+ total->failed += add.failed;
+ total->total += add.total;
+}
+
int kunit_run_tests(struct kunit_suite *suite)
{
char param_desc[KUNIT_PARAM_DESC_SIZE];
struct kunit_case *test_case;
+ struct kunit_result_stats suite_stats = { 0 };
+ struct kunit_result_stats total_stats = { 0 };
kunit_print_subtest_start(suite);
kunit_suite_for_each_test_case(suite, test_case) {
struct kunit test = { .param_value = NULL, .param_index = 0 };
+ struct kunit_result_stats param_stats = { 0 };
test_case->status = KUNIT_SKIPPED;
if (test_case->generate_params) {
@@ -431,14 +531,23 @@ int kunit_run_tests(struct kunit_suite *suite)
test.param_value = test_case->generate_params(test.param_value, param_desc);
test.param_index++;
}
+
+ kunit_update_stats(&param_stats, test.status);
+
} while (test.param_value);
+ kunit_print_test_stats(&test, param_stats);
+
kunit_print_ok_not_ok(&test, true, test_case->status,
kunit_test_case_num(suite, test_case),
test_case->name,
test.status_comment);
+
+ kunit_update_stats(&suite_stats, test_case->status);
+ kunit_accumulate_stats(&total_stats, param_stats);
}
+ kunit_print_suite_stats(suite, suite_stats, total_stats);
kunit_print_subtest_end(suite);
return 0;
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index b88db3f51dc5..c699f778da06 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -137,7 +137,7 @@ def print_log(log) -> None:
for m in log:
print_with_timestamp(m)
-TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*#).*$')
+TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*# (Subtest:|.*: kunit test case crashed!)).*$')
def consume_non_diagnostic(lines: LineStream) -> None:
while lines and not TAP_ENTRIES.match(lines.peek()):
--
2.32.0.554.ge1b32706d8-goog
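To see the accounting in isolation, here is a userspace rendering of the
kunit_update_stats()/kunit_accumulate_stats() pair from the patch; the enum
values are illustrative stand-ins for the kernel's kunit_status:

#include <stdio.h>

enum status { PASSED, SKIPPED, FAILED };

struct result_stats {
        unsigned long passed, skipped, failed, total;
};

static void update_stats(struct result_stats *s, enum status st)
{
        switch (st) {
        case PASSED:  s->passed++;  break;
        case SKIPPED: s->skipped++; break;
        case FAILED:  s->failed++;  break;
        }
        s->total++;
}

static void accumulate_stats(struct result_stats *total,
                             struct result_stats add)
{
        total->passed  += add.passed;
        total->skipped += add.skipped;
        total->failed  += add.failed;
        total->total   += add.total;
}

int main(void)
{
        struct result_stats param = { 0 }, totals = { 0 };
        enum status results[] = { PASSED, PASSED, SKIPPED, FAILED };

        for (unsigned i = 0; i < sizeof(results) / sizeof(results[0]); i++)
                update_stats(&param, results[i]);
        accumulate_stats(&totals, param);

        /* Mirrors the "# Totals: pass:n fail:n skip:n total:n" line. */
        printf("# Totals: pass:%lu fail:%lu skip:%lu total:%lu\n",
               totals.passed, totals.failed, totals.skipped, totals.total);
        return 0;
}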
--raw_output is nice, but it would be nicer if it could show only output
after KUnit tests have started.
So change the flag to allow specifying a string ('kunit').
Make it so `--raw_output` alone will default to `--raw_output=all` and
have the same original behavior.
Drop the small kunit_parser.raw_output() function since it feels wrong
to put it in "kunit_parser.py" when the point of it is to not parse
anything.
E.g.
$ ./tools/testing/kunit/kunit.py run --raw_output=kunit
...
[15:24:07] Starting KUnit Kernel ...
TAP version 14
1..1
# Subtest: example
1..3
# example_simple_test: initializing
ok 1 - example_simple_test
# example_skip_test: initializing
# example_skip_test: You should not see a line below.
ok 2 - example_skip_test # SKIP this test should be skipped
# example_mark_skipped_test: initializing
# example_mark_skipped_test: You should see a line below.
# example_mark_skipped_test: You should see this line.
ok 3 - example_mark_skipped_test # SKIP this test should be skipped
ok 1 - example
[15:24:10] Elapsed time: 6.487s total, 0.001s configuring, 3.510s building, 0.000s running
Signed-off-by: Daniel Latypov <dlatypov(a)google.com>
---
Documentation/dev-tools/kunit/kunit-tool.rst | 9 ++++++---
tools/testing/kunit/kunit.py | 20 +++++++++++++++-----
tools/testing/kunit/kunit_parser.py | 4 ----
tools/testing/kunit/kunit_tool_test.py | 9 +++++++++
4 files changed, 30 insertions(+), 12 deletions(-)
diff --git a/Documentation/dev-tools/kunit/kunit-tool.rst b/Documentation/dev-tools/kunit/kunit-tool.rst
index c7ff9afe407a..ae52e0f489f9 100644
--- a/Documentation/dev-tools/kunit/kunit-tool.rst
+++ b/Documentation/dev-tools/kunit/kunit-tool.rst
@@ -114,9 +114,12 @@ results in TAP format, you can pass the ``--raw_output`` argument.
./tools/testing/kunit/kunit.py run --raw_output
-.. note::
- The raw output from test runs may contain other, non-KUnit kernel log
- lines.
+The raw output from test runs may contain other, non-KUnit kernel log
+lines. You can see just KUnit output with ``--raw_output=kunit``:
+
+.. code-block:: bash
+
+ ./tools/testing/kunit/kunit.py run --raw_output=kunit
If you have KUnit results in their raw TAP format, you can parse them and print
the human-readable summary with the ``parse`` command for kunit_tool. This
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 7174377c2172..5a931456e718 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -16,6 +16,7 @@ assert sys.version_info >= (3, 7), "Python version is too old"
from collections import namedtuple
from enum import Enum, auto
+from typing import Iterable
import kunit_config
import kunit_json
@@ -114,7 +115,16 @@ def parse_tests(request: KunitParseRequest) -> KunitResult:
'Tests not Parsed.')
if request.raw_output:
- kunit_parser.raw_output(request.input_data)
+ output: Iterable[str] = request.input_data
+ if request.raw_output == 'all':
+ pass
+ elif request.raw_output == 'kunit':
+ output = kunit_parser.extract_tap_lines(output)
+ else:
+ print(f'Unknown --raw_output option "{request.raw_output}"', file=sys.stderr)
+ for line in output:
+ print(line.rstrip())
+
else:
test_result = kunit_parser.parse_run_tests(request.input_data)
parse_end = time.time()
@@ -135,7 +145,6 @@ def parse_tests(request: KunitParseRequest) -> KunitResult:
return KunitResult(KunitStatus.SUCCESS, test_result,
parse_end - parse_start)
-
def run_tests(linux: kunit_kernel.LinuxSourceTree,
request: KunitRequest) -> KunitResult:
run_start = time.time()
@@ -181,7 +190,7 @@ def add_common_opts(parser) -> None:
parser.add_argument('--build_dir',
help='As in the make command, it specifies the build '
'directory.',
- type=str, default='.kunit', metavar='build_dir')
+ type=str, default='.kunit', metavar='build_dir')
parser.add_argument('--make_options',
help='X=Y make option, can be repeated.',
action='append')
@@ -246,8 +255,9 @@ def add_exec_opts(parser) -> None:
action='append')
def add_parse_opts(parser) -> None:
- parser.add_argument('--raw_output', help='don\'t format output from kernel',
- action='store_true')
+ parser.add_argument('--raw_output', help='If set don\'t format output from kernel. '
+ 'If set to --raw_output=kunit, filters to just KUnit output.',
+ type=str, nargs='?', const='all', default=None)
parser.add_argument('--json',
nargs='?',
help='Stores test results in a JSON, and either '
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index b88db3f51dc5..84938fefbac0 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -106,10 +106,6 @@ def extract_tap_lines(kernel_output: Iterable[str]) -> LineStream:
yield line_num, line[prefix_len:]
return LineStream(lines=isolate_kunit_output(kernel_output))
-def raw_output(kernel_output) -> None:
- for line in kernel_output:
- print(line.rstrip())
-
DIVIDER = '=' * 60
RESET = '\033[0;0m'
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 628ab00f74bc..619c4554cbff 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -399,6 +399,15 @@ class KUnitMainTest(unittest.TestCase):
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ def test_run_raw_output_kunit(self):
+ self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
+ kunit.main(['run', '--raw_output=kunit'], self.linux_source_mock)
+ self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
+ self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
+ for call in self.print_mock.call_args_list:
+ self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
+ self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+
def test_exec_timeout(self):
timeout = 3453
kunit.main(['exec', '--timeout', str(timeout)], self.linux_source_mock)
base-commit: f684616e08e9cd9db3cd53fe2e068dfe02481657
--
2.32.0.605.g8dce9f2422-goog
This patch series adds support for the unix stream type to
sockmap. Sockmap already supports the TCP, UDP, and unix dgram
types. The unix stream support is similar to unix dgram.
Also add selftests for the unix stream type to the sockmap tests.
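For reference, once the series is applied, the new coverage should be
exercisable with the usual BPF selftest filter (a sketch; assumes a
built tools/testing/selftests/bpf tree):
$ cd tools/testing/selftests/bpf
$ ./test_progs -t sockmap_listen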
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/core/sock_map.c | 8 +-
net/unix/af_unix.c | 89 ++++++++++++++++--
net/unix/unix_bpf.c | 93 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
5 files changed, 194 insertions(+), 52 deletions(-)
--
2.20.1
From: SeongJae Park <sjpark(a)amazon.de>
When running a test program, 'run_one()' checks if the program has the
execution permission and fails if it doesn't. However, it's easy to
mistakenly miss the permission, as some common tools like 'diff'
don't support the permission change well[1]. Compared to that,
mistakes in the test program's path would be rare, as the paths are
explicitly listed in 'TEST_PROGS'. Therefore, it might make more
sense to resolve the situation on our own and run the program.
For this reason, this commit makes the test program runner function
still print the warning message, but then run the program after
granting the execution permission. To leave nothing modified behind,
it also restores the permission after running it.
[1] https://lore.kernel.org/mm-commits/YRJisBs9AunccCD4@kroah.com/
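Condensed into a standalone sketch, the add-run-restore pattern looks
like this (hypothetical $TEST path; the real change lives in
run_one() below):
	permission_added="false"
	if [ ! -x "$TEST" ]; then
		chmod u+x "$TEST"	# temporarily grant the execute bit
		permission_added="true"
	fi
	"$TEST"
	if [ "$permission_added" = "true" ]; then
		chmod u-x "$TEST"	# put the original mode back
	fi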
Suggested-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: SeongJae Park <sjpark(a)amazon.de>
---
tools/testing/selftests/kselftest/runner.sh | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/kselftest/runner.sh b/tools/testing/selftests/kselftest/runner.sh
index cc9c846585f0..2eb31e945709 100644
--- a/tools/testing/selftests/kselftest/runner.sh
+++ b/tools/testing/selftests/kselftest/runner.sh
@@ -65,15 +65,16 @@ run_one()
TEST_HDR_MSG="selftests: $DIR: $BASENAME_TEST"
echo "# $TEST_HDR_MSG"
- if [ ! -x "$TEST" ]; then
- echo -n "# Warning: file $TEST is "
- if [ ! -e "$TEST" ]; then
- echo "missing!"
- else
- echo "not executable, correct this."
- fi
+ if [ ! -e "$TEST" ]; then
+ echo "# Warning: file $TEST is missing!"
echo "not ok $test_num $TEST_HDR_MSG"
else
+ permission_added="false"
+ if [ ! -x "$TEST" ]; then
+ echo "# Warning: file $TEST is not executable"
+ chmod u+x "$TEST"
+ permission_added="true"
+ fi
cd `dirname $TEST` > /dev/null
((((( tap_timeout ./$BASENAME_TEST 2>&1; echo $? >&3) |
tap_prefix >&4) 3>&1) |
@@ -88,6 +89,9 @@ run_one()
else
echo "not ok $test_num $TEST_HDR_MSG # exit=$rc"
fi)
+ if [ "$permission_added" = "true" ]; then
+ chmod u-x "$TEST"
+ fi
cd - >/dev/null
fi
}
--
2.17.1