Hi!
I would like to publish two debug features which were needed for other stuff
I work on.
One is the reworked lx-symbols script, which now actually works on at least
gdb 9.1 and upstream qemu (gdb 9.2 was reported to fail to load the kernel's
debug symbols for some reason unrelated to this patch).
The other feature is the ability to trap all guest exceptions (on SVM for now)
and see them in the kvm trace before they are potentially merged into a
double/triple fault. This can be very useful, and I have already had to patch
KVM by hand a few times for this.
I will, once time permits, implement this feature on Intel as well.
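(For illustration only: assuming the parameter ends up as a writable module
parameter of the common kvm module, something like
`echo 0x4000 > /sys/module/kvm/parameters/force_intercept_exceptions_mask'
would then intercept only vector 14 (#PF); the exact name and interface are
whatever the patches define.)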
V2:
* Some more refactoring and workarounds for lx-symbols script
* Added the KVM_GUESTDBG_BLOCKIRQ flag to enable 'block interrupts on
single step', together with the KVM_CAP_SET_GUEST_DEBUG2 capability
to indicate which guest debug flags are supported.
This replaces the unconditional blocking of interrupts on single
step that was done in the previous version of this patch set.
Patches for qemu to use this feature will be sent soon (a rough usage
sketch follows the changelog).
* Reworked the 'intercept all exceptions for debug' feature according
to the review feedback:
- renamed the parameter that enables the feature and
moved it to the common kvm module
(only the SVM part is currently implemented, though)
- disabled the feature for SEV guests, as was suggested during the review
- made the vmexit table const again, as was suggested in the review as well.
V3:
* Modified a selftest to cover the KVM_GUESTDBG_BLOCKIRQ flag
* Rebased on kvm/queue
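For reference, below is a rough userspace sketch (not the actual qemu
patches, just an illustration, assuming an already-created VM and vcpu fd)
of how the new flag could be used together with the new capability:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Single-step the guest with interrupts blocked, if KVM supports it. */
static int single_step_blockirq(int vm_fd, int vcpu_fd)
{
	struct kvm_guest_debug dbg = { 0 };
	int supported;

	/* KVM_CAP_SET_GUEST_DEBUG2 reports the supported KVM_GUESTDBG_* bits. */
	supported = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_SET_GUEST_DEBUG2);
	if (supported <= 0 || !(supported & KVM_GUESTDBG_BLOCKIRQ))
		return -1;

	dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP |
		      KVM_GUESTDBG_BLOCKIRQ;
	return ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &dbg);
}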
Best regards,
Maxim Levitsky
Maxim Levitsky (6):
KVM: SVM: split svm_handle_invalid_exit
KVM: x86: add force_intercept_exceptions_mask
KVM: SVM: implement force_intercept_exceptions_mask
scripts/gdb: rework lx-symbols gdb script
KVM: x86: implement KVM_GUESTDBG_BLOCKIRQ
KVM: selftests: test KVM_GUESTDBG_BLOCKIRQ
Documentation/virt/kvm/api.rst | 1 +
arch/x86/include/asm/kvm_host.h | 5 +-
arch/x86/include/uapi/asm/kvm.h | 1 +
arch/x86/kvm/svm/svm.c | 87 +++++++-
arch/x86/kvm/svm/svm.h | 6 +-
arch/x86/kvm/x86.c | 12 +-
arch/x86/kvm/x86.h | 2 +
kernel/module.c | 8 +-
scripts/gdb/linux/symbols.py | 203 ++++++++++++------
.../testing/selftests/kvm/x86_64/debug_regs.c | 24 ++-
10 files changed, 266 insertions(+), 83 deletions(-)
--
2.26.3
We are looking to further standardise the output format used by kernel
test frameworks like kselftest and KUnit. Thus far we have used the
TAP (Test Anything Protocol) specification, but it has been extended
in many different ways, so we would like to agree on a common "Kernel
TAP" (KTAP) format to resolve these differences. Thus, below is a
draft of a specification of KTAP. Note that this specification is
largely based on the current format of test results for KUnit tests.
Additionally, this specification was heavily inspired by the KTAP
specification draft by Tim Bird
(https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD…).
However, there are some notable differences from that specification. One
such difference is that the format of nested tests is more fully specified
below. However, nested tests are specified in a way which may not be
compatible with many kselftest nested tests.
=====================
Specification of KTAP
=====================
TAP, or the Test Anything Protocol, is a format for specifying test
results used by a number of projects. Its website and specification
are found at: https://testanything.org/. The Linux kernel uses TAP
output for test results. However, KUnit (and other kernel testing
frameworks such as kselftest) have some special needs for test results
which don't gel perfectly with the original TAP specification. Thus, a
"Kernel TAP" (KTAP) format is specified to extend and alter TAP to
support these use-cases.
KTAP Output consists of 5 major elements (all line-based):
- The version line
- Plan lines
- Test case result lines
- Diagnostic lines
- A bail out line
An important component in this specification of KTAP is the
specification of the format of nested tests. This can be found in the
section on nested tests below.
The version line
----------------
The first line of KTAP output must be the version line. As this
specification documents the first version of KTAP, the recommended
version line is "KTAP version 1". However, since all kernel testing
frameworks use TAP version lines, "TAP version 14" and "TAP version
13" are all acceptable version lines. Version lines with other
versions of TAP or KTAP will not cause the parsing of the test results
to fail but it will produce an error.
Plan lines
----------
Plan lines must follow the format "1..N", where N is the number of
subtests. The second line of KTAP output must be a plan line, which
indicates the number of tests at the highest level, that is, the tests
that do not have a parent. Also, when a test has subtests, the second
line of that test, immediately after the subtest header, must be a plan
line indicating the number of subtests within that test.
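For example, the first two lines of a KTAP document containing four
top-level tests would be:

KTAP version 1
1..4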
Test case result lines
----------------------
Test case result lines must have the format:
<result> <number> [-] [<description>] [<directive>] [<diagnostic data>]
The result can be either "ok", which indicates the test case passed,
or "not ok", which indicates that the test case failed.
The number represents the number of the test case or suite being
performed. The first test case or suite must have the number 1 and the
number must increase by 1 for each additional test case or result at
the same level and within the same testing suite.
The "-" character is optional.
The description is a description of the test, generally the name of
the test, and can be any string of words (it cannot include "#"). The
description is optional.
The directive is used to indicate if a test was skipped. The format
of the directive is: "# SKIP [<skip_description>]". The
skip_description is optional and can be any string of words to
describe why the test was skipped. The result of a test case result
line can be either "ok" or "not ok" if the skip directive is used.
Finally, note that the TAP 14 specification includes TODO directives,
but these are not supported in KTAP.
Examples of test case result lines:
Test passed:
ok 1 - test_case_name
Test was skipped:
not ok 1 - test_case_name # SKIP test_case_name should be skipped
Test failed:
not ok 1 - test_case_name
Diagnostic lines
----------------
Diagnostic lines are used for descriptions of testing operations.
Diagnostic lines are generally formatted as "#
<diagnostic_description>", where the description can be any string.
In practice, however, diagnostic lines are all lines that don't follow
the format of any other KTAP line type. Diagnostic lines can appear
anywhere in the test output after the first two lines. There are a few
special diagnostic lines. Diagnostic lines of the format "# Subtest:
<test_name>" indicate the start of a test with subtests. Also,
diagnostic lines of the format "# <test_name>: <description>" refer to
a specific test and tend to occur before the test result line of that
test, but they are optional.
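Examples of diagnostic lines (the descriptions here are arbitrary):

# Subtest: example_test_suite
# example_test_case: initializing test environment
# running with the default configuration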
Bail out line
-------------
A bail out line can occur anywhere in the KTAP output and will
indicate that a test has crashed. The format of a bail out line is
"Bail out! [<description>]", where the description can give
information on why the bail out occurred and can be any string.
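Example of a bail out line (again, with an arbitrary description):

Bail out! could not initialize test module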
Nested tests
------------
The new specification for KTAP will support an arbitrary number of
nested subtests. Thus, tests can now have subtests and those subtests
can have subtests. This can be useful to further categorize tests and
organize test results.
The new required format for a test with subtests consists of: a
subtest header line, a plan line, all subtests, and a final test
result line.
The first line of the test must be the subtest header line with the
format: "# Subtest: <test_name>".
The second line of the test must be the plan line, which is formatted
as "1..N", where N is the number of subtests.
Following the plan line, all lines pertaining to the subtests will follow.
Finally, the last line of the test is a final test result line with
the format: "(ok|not ok) <number> [-] [<description>] [<directive>]
[<diagnostic data>]", which follows the same format as the general
test case result lines described above. The result line should
indicate the result of the subtests. Thus, if one of the subtests
fails, the test should fail. The description in the final test result
line should match the name of the test in the subtest header.
An example format:
KTAP version 1
1..1
# Subtest: test_suite
1..2
ok 1 - test_1
ok 2 - test_2
ok 1 - test_suite
An example format with multiple levels of nested testing:
KTAP version 1
1..1
# Subtest: test_suite
1..2
# Subtest: sub_test_suite
1..2
ok 1 - test_1
ok 2 test_2
ok 1 - sub_test_suite
ok 2 - test
ok 1 - test_suite
In the instance that the plan line is missing, the end of the test
will be denoted by the final result line containing a description that
matches the name of the test given in the subtest header. Note that,
as a consequence, if the plan line is missing and one of the subtests
has the same name as the test suite, this will cause errors.
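For example, if the nested plan line is missing:

KTAP version 1
1..1
# Subtest: test_suite
ok 1 - test_1
ok 2 - test_2
ok 1 - test_suite

the last line is recognized as the end of test_suite only because its
description matches the name given in the subtest header.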
Lastly, indentation is also recommended for improved readability.
Major differences between TAP 14 and KTAP specification
-------------------------------------------------------
The major differences between the TAP 14 and KTAP specifications are:
- YAML and JSON are not allowed in diagnostic messages
- the TODO directive is not allowed
- KTAP allows for an arbitrary number of tests to be nested with
specified nested test format
Example of KTAP
---------------
KTAP version 1
1..1
# Subtest: test_suite
1..1
# Subtest: sub_test_suite
1..2
ok 1 - test_1
ok 2 test_2
ok 1 - sub_test_suite
ok 1 - test_suite
=========================================
Note on incompatibilities with kselftests
=========================================
To my knowledge, the above specification generally accepts the TAP
format of many non-nested kselftest results.
An example of a common kselftest TAP format for non-nested test
results that is accepted by the above specification:
TAP version 13
1..2
# selftests: vDSO: vdso_test_gettimeofday
# The time is 1628024856.096879
ok 1 selftests: vDSO: vdso_test_gettimeofday
# selftests: vDSO: vdso_test_getcpu
# Could not find __vdso_getcpu
ok 2 selftests: vDSO: vdso_test_getcpu # SKIP
However, one major difference noted with kselftests is the use of more
directives than the "# SKIP" directive. kselftest also supports XPASS
and XFAIL directives. Some additional examples found in kselftests:
not ok 5 selftests: netfilter: nft_concat_range.sh # TIMEOUT 45 seconds
not ok 45 selftests: kvm: kvm_binary_stats_test # exit=127
Should the specification be expanded to include these directives?
However, the format kselftests use for nested test results seems to
differ from the above specification. A typical format for nested tests
looks as follows:
TAP version 13
1..2
# selftests: membarrier: membarrier_test_single_thread
# TAP version 13
# 1..2
# ok 1 sys_membarrier available
# ok 2 sys membarrier invalid command test: command = -1, flags = 0,
errno = 22. Failed as expected
ok 1 selftests: membarrier: membarrier_test_single_thread
# selftests: membarrier: membarrier_test_multi_thread
# TAP version 13
# 1..2
# ok 1 sys_membarrier available
# ok 2 sys membarrier invalid command test: command = -1, flags = 0,
errno = 22. Failed as expected
ok 2 selftests: membarrier: membarrier_test_multi_thread
The major differences here, which do not match the above specification,
are the use of "# " as indentation and the use of a TAP version line,
rather than the subtest header line described above, to denote a new
test with subtests. If these are widely utilized formats in kselftests,
should we include both versions in the specification or should we
attempt to agree on a single format for nested tests? I personally
believe we should try to agree on a single format for nested tests.
This would allow for a cleaner specification of KTAP and would reduce
possible confusion.
====
So what do people think about the above specification?
How should we handle the differences with kselftests?
If this specification is accepted, where should the specification be documented?
Since v1 [1], I added Quentin's acks and applied Andrii's suggestions:
* Pass CFLAGS to libbpf link in patch 3
* Substitute CLANG_CROSS_FLAGS whole in HOST_CFLAGS to avoid accidents,
patch 4
Add support for cross-building BPF tools and selftests with clang, by
passing LLVM=1 or CC=clang to make, as well as CROSS_COMPILE. A single
clang toolchain can generate binaries for multiple architectures, so
instead of having prefixes such as aarch64-linux-gnu-gcc, clang uses the
-target parameter: `clang -target aarch64-linux-gnu'.
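For example, a cross-build of bpftool for arm64 could then look roughly
like `make -C tools/bpf/bpftool LLVM=1 CROSS_COMPILE=aarch64-linux-gnu-'
(an illustrative invocation; see the individual patches for the exact
variables each tool supports).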
Patch 1 adds the parameter in Makefile.include so tools can easily
support this. Patch 2 prepares for the libbpf change from patch 3 (keep
building resolve_btfids's libbpf in the host arch, when cross-building
the kernel with clang). Patches 3-6 enable cross-building BPF tools with
clang.
[1] https://lore.kernel.org/bpf/20211122192019.1277299-1-jean-philippe@linaro.o…
Jean-Philippe Brucker (6):
tools: Help cross-building with clang
tools/resolve_btfids: Support cross-building the kernel with clang
tools/libbpf: Enable cross-building with clang
bpftool: Enable cross-building with clang
tools/runqslower: Enable cross-building with clang
selftests/bpf: Enable cross-building with clang
tools/bpf/bpftool/Makefile | 13 +++++++------
tools/bpf/resolve_btfids/Makefile | 1 +
tools/bpf/runqslower/Makefile | 4 ++--
tools/lib/bpf/Makefile | 3 ++-
tools/scripts/Makefile.include | 13 ++++++++++++-
tools/testing/selftests/bpf/Makefile | 8 ++++----
6 files changed, 28 insertions(+), 14 deletions(-)
--
2.34.1
The current kunit infrastructure defines its own module_init() when
built as a module, which conflicts with the mctp core's own.
So, only allow MCTP_TEST when both MCTP and KUNIT are built-in.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Jeremy Kerr <jk(a)codeconstruct.com.au>
---
net/mctp/Kconfig | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/mctp/Kconfig b/net/mctp/Kconfig
index 15267a5043d9..868c92272cbd 100644
--- a/net/mctp/Kconfig
+++ b/net/mctp/Kconfig
@@ -13,6 +13,6 @@ menuconfig MCTP
channel.
config MCTP_TEST
- tristate "MCTP core tests" if !KUNIT_ALL_TESTS
- depends on MCTP && KUNIT
+ bool "MCTP core tests" if !KUNIT_ALL_TESTS
+ depends on MCTP=y && KUNIT=y
default KUNIT_ALL_TESTS
--
2.30.2
Since kernel commit 43209ea2d17a ("zram: remove max_comp_streams internals"), zram has
switched to per-cpu streams. Even though the kernel still keeps this interface for some
reason, writing to max_comp_streams no longer has any effect. So remove it.
Signed-off-by: Yang Xu <xuyang2018.jy(a)fujitsu.com>
---
tools/testing/selftests/zram/zram01.sh | 4 ----
tools/testing/selftests/zram/zram02.sh | 4 ----
tools/testing/selftests/zram/zram_lib.sh | 22 ----------------------
3 files changed, 30 deletions(-)
diff --git a/tools/testing/selftests/zram/zram01.sh b/tools/testing/selftests/zram/zram01.sh
index 114863d9fb87..28583e4ae546 100755
--- a/tools/testing/selftests/zram/zram01.sh
+++ b/tools/testing/selftests/zram/zram01.sh
@@ -15,9 +15,6 @@ ERR_CODE=0
# Test will create the following number of zram devices:
dev_num=1
-# This is a list of parameters for zram devices.
-# Number of items must be equal to 'dev_num' parameter.
-zram_max_streams="2"
# The zram sysfs node 'disksize' value can be either in bytes,
# or you can use mem suffixes. But in some old kernels, mem
@@ -72,7 +69,6 @@ zram_fill_fs()
check_prereqs
zram_load
-zram_max_streams
zram_compress_alg
zram_set_disksizes
zram_set_memlimit
diff --git a/tools/testing/selftests/zram/zram02.sh b/tools/testing/selftests/zram/zram02.sh
index e83b404807c0..d664974a1317 100755
--- a/tools/testing/selftests/zram/zram02.sh
+++ b/tools/testing/selftests/zram/zram02.sh
@@ -14,9 +14,6 @@ ERR_CODE=0
# Test will create the following number of zram devices:
dev_num=1
-# This is a list of parameters for zram devices.
-# Number of items must be equal to 'dev_num' parameter.
-zram_max_streams="2"
# The zram sysfs node 'disksize' value can be either in bytes,
# or you can use mem suffixes. But in some old kernels, mem
@@ -30,7 +27,6 @@ zram_mem_limits="1M"
check_prereqs
zram_load
-zram_max_streams
zram_set_disksizes
zram_set_memlimit
zram_makeswap
diff --git a/tools/testing/selftests/zram/zram_lib.sh b/tools/testing/selftests/zram/zram_lib.sh
index 6f872f266fd1..0c49f9d1d563 100755
--- a/tools/testing/selftests/zram/zram_lib.sh
+++ b/tools/testing/selftests/zram/zram_lib.sh
@@ -82,28 +82,6 @@ zram_load()
fi
}
-zram_max_streams()
-{
- echo "set max_comp_streams to zram device(s)"
-
- local i=0
- for max_s in $zram_max_streams; do
- local sys_path="/sys/block/zram${i}/max_comp_streams"
- echo $max_s > $sys_path || \
- echo "FAIL failed to set '$max_s' to $sys_path"
- sleep 1
- local max_streams=$(cat $sys_path)
-
- [ "$max_s" -ne "$max_streams" ] && \
- echo "FAIL can't set max_streams '$max_s', get $max_stream"
-
- i=$(($i + 1))
- echo "$sys_path = '$max_streams' ($i/$dev_num)"
- done
-
- echo "zram max streams: OK"
-}
-
zram_compress_alg()
{
echo "test that we can set compression algorithm"
--
2.23.0
We have many test cases that create child processes as well, such as
pidfd_wait. Previously, we would signal/kill the parent process when it
timed out, but this signal would not be sent to its child processes. In
such a case, if a child process does not terminate by itself, the
kselftest framework will hang forever.
The ps tree below shows the situation when kselftest is blocked:
root 1172 0.0 0.0 5996 2500 ? S 07:03 0:00 \_ /bin/bash /lkp/lkp/src/tests/kernel-selftests
root 1216 0.0 0.0 4392 1976 ? S 07:03 0:00 \_ make run_tests -C pidfd
root 1218 0.0 0.0 2396 1652 ? S 07:03 0:00 \_ /bin/sh -c BASE_DIR="/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests"; . /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/kselftest/runner.sh; if [ "X" != "X" ]; then per_test_logging=1; fi; run_many /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_fdinfo_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_open_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_poll_test /usr/src/perf_selftests-x86_64-rhel-
8.
root 12491 0.0 0.0 2396 132 ? S 07:03 0:00 \_ /bin/sh -c BASE_DIR="/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests"; . /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/kselftest/runner.sh; if [ "X" != "X" ]; then per_test_logging=1; fi; run_many /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_fdinfo_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_open_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_poll_test /usr/src/perf_selftests-x86_64-r
he
root 12492 0.0 0.0 2396 132 ? S 07:03 0:00 \_ /bin/sh -c BASE_DIR="/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests"; . /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/kselftest/runner.sh; if [ "X" != "X" ]; then per_test_logging=1; fi; run_many /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_fdinfo_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_open_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_poll_test /usr/src/perf_selftests-x86_
64
root 12493 0.0 0.0 2396 132 ? S 07:03 0:00 \_ /bin/sh -c BASE_DIR="/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests"; . /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/kselftest/runner.sh; if [ "X" != "X" ]; then per_test_logging=1; fi; run_many /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_fdinfo_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_open_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_poll_test /usr/src/perf_selftests-
x8
root 12496 0.0 0.0 2396 132 ? S 07:03 0:00 \_ /bin/sh -c BASE_DIR="/usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests"; . /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/kselftest/runner.sh; if [ "X" != "X" ]; then per_test_logging=1; fi; run_many /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_fdinfo_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_open_test /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/pidfd/pidfd_poll_test /usr/src/perf_selfte
st
root 12498 0.0 0.0 10564 6116 ? S 07:03 0:00 \_ perl /usr/src/perf_selftests-x86_64-rhel-8.3-kselftests-519d81956ee277b4419c723adfb154603c2565ba/tools/testing/selftests/kselftest/prefix.pl
root 12503 0.0 0.0 2452 112 ? T 07:03 0:00 ./pidfd_wait
root 12621 0.0 0.0 2372 1600 ? SLs 07:04 0:00 /usr/sbin/watchdog
root 19438 0.0 0.0 992 60 ? Ss 07:39 0:00 /lkp/lkp/src/bin/event/wakeup activate-monitor
Here we put the test and all of its child processes into one process
group so that kill() can signal all of them on timeout.
Suggested-by: yang xu <xuyang2018.jy(a)cn.fujitsu.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Acked-by: Christian Brauner <christian.brauner(a)ubuntu.com>
---
V2: add acked tag
---
tools/testing/selftests/kselftest_harness.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
index ae0f0f33b2a6..c7251396e7ee 100644
--- a/tools/testing/selftests/kselftest_harness.h
+++ b/tools/testing/selftests/kselftest_harness.h
@@ -875,7 +875,8 @@ static void __timeout_handler(int sig, siginfo_t *info, void *ucontext)
}
t->timed_out = true;
- kill(t->pid, SIGKILL);
+ // signal process group
+ kill(-(t->pid), SIGKILL);
}
void __wait_for_test(struct __test_metadata *t)
@@ -985,6 +986,7 @@ void __run_test(struct __fixture_metadata *f,
ksft_print_msg("ERROR SPAWNING TEST CHILD\n");
t->passed = 0;
} else if (t->pid == 0) {
+ setpgrp();
t->fn(t, variant);
if (t->skip)
_exit(255);
--
2.33.0
Hello,
The aim of this series is to make resctrl_tests run under the
kselftest framework.
- I modify the Makefile of resctrl_test and the Makefile of selftests, to
build/run resctrl_tests using the kselftest framework.
- I set the time limit for resctrl_tests to 120 seconds, to ensure
resctrl_tests finishes within a bounded time (see the note after this list).
- When the resctrl file system is not supported or resctrl_tests is not run
as root, return the kselftest skip code.
- If it does not finish within the time limit, terminate resctrl_tests the
same way as pressing ctrl+c would.
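(Note on the time limit: the kselftest runner reads a per-test-directory
"settings" file for this, so judging from the diffstat below, the new
tools/testing/selftests/resctrl/settings file presumably consists of a
single line along the lines of "timeout=120"; see [PATCH v2 4/5] for the
actual contents.)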
Difference from v1:
- I changed the order of the patches according to Reinette's review.
- "LDLIBS += -lnuma" has no dependency on this patch series, so I deleted
it from [PATCH v2 2/5].
- I separated the license info of the Makefile into a new patch [PATCH v2
3/5].
- I separated "limited time" into a new patch [PATCH v2 4/5].
(There is no change in [PATCH v2 1/5] and [PATCH v2 5/5])
In addition, I think 120s is not a problem since some tests have longer
timeouts (e.g. the net test uses 300s); please let me know if this is wrong.
Thanks,
Shaopeng Tan (5):
selftests/resctrl: Kill the child process created by fork() when the
SIGTERM signal comes
selftests/resctrl: Make resctrl_tests run using kselftest framework
selftests/resctrl: Add license to resctrl_test Makefile
selftests/resctrl: Change default limited time to 120 seconds for
resctrl_tests
selftests/resctrl: Return KSFT_SKIP(4) if resctrl file system is not
supported or resctrl is not run as root
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/resctrl/Makefile | 20 +++++++------------
.../testing/selftests/resctrl/resctrl_tests.c | 4 ++--
tools/testing/selftests/resctrl/resctrl_val.c | 1 +
tools/testing/selftests/resctrl/settings | 1 +
5 files changed, 12 insertions(+), 15 deletions(-)
create mode 100644 tools/testing/selftests/resctrl/settings
--
2.27.0