Add an FAQ entry to the KUnit documentation with some tips for
troubleshooting KUnit and kunit_tool.
These suggestions largely came from an email thread:
https://lore.kernel.org/linux-kselftest/41db8bbd-3ba0-8bde-7352-083bf4b947f…
Signed-off-by: David Gow <davidgow(a)google.com>
---
Documentation/dev-tools/kunit/faq.rst | 32 +++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
diff --git a/Documentation/dev-tools/kunit/faq.rst b/Documentation/dev-tools/kunit/faq.rst
index ea55b2467653..40109d425988 100644
--- a/Documentation/dev-tools/kunit/faq.rst
+++ b/Documentation/dev-tools/kunit/faq.rst
@@ -61,3 +61,35 @@ test, or an end-to-end test.
kernel by installing a production configuration of the kernel on production
hardware with a production userspace and then trying to exercise some behavior
that depends on interactions between the hardware, the kernel, and userspace.
+
+KUnit isn't working, what should I do?
+======================================
+
+Unfortunately, there are a number of things which can break, but here are some
+things to try.
+
+1. Try running ``./tools/testing/kunit/kunit.py run`` with the ``--raw_output``
+ parameter. This might show details or error messages hidden by the kunit_tool
+ parser.
+2. Instead of running ``kunit.py run``, try running ``kunit.py config``,
+ ``kunit.py build``, and ``kunit.py exec`` independently. This can help track
+ down where an issue is occurring. (If you think the parser is at fault, you
+ can run it manually against stdin or a file with ``kunit.py parse``.)
+3. Running the UML kernel directly can often reveal issues or error messages
+ kunit_tool ignores. This should be as simple as running ``./vmlinux`` after
+ building the UML kernel (e.g., by using ``kunit.py build``). Note that UML
+ has some unusual requirements (such as the host having a tmpfs filesystem
+ mounted), and has had issues in the past when built statically and the host
+ has KASLR enabled. (On older host kernels, you may need to run ``setarch
+ `uname -m` -R ./vmlinux`` to disable KASLR.)
+4. Make sure the kernel .config has ``CONFIG_KUNIT=y`` and at least one test
+ (e.g. ``CONFIG_KUNIT_EXAMPLE_TEST=y``). kunit_tool will keep its .config
+ around, so you can see what config was used after running ``kunit.py run``.
+ It also preserves any config changes you might make, so you can
+ enable/disable things with ``make ARCH=um menuconfig`` or similar, and then
+ re-run kunit_tool.
+5. Finally, running ``make ARCH=um defconfig`` before running ``kunit.py run``
+ may help clean up any residual config items which could be causing problems.
+
+If none of the above tricks help, you are always welcome to email any issues to
+kunit-dev(a)googlegroups.com.
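
For quick reference, the troubleshooting flow added above boils down to a
shell sequence roughly like the following (a sketch only: it assumes you are
at the root of the kernel tree and built the UML kernel there, and the
setarch step is only needed on older host kernels where KASLR gets in the
way of a statically built UML kernel):

    # 1. Bypass the parser to see raw kernel output.
    ./tools/testing/kunit/kunit.py run --raw_output

    # 2. Run the individual stages to narrow down where things go wrong.
    ./tools/testing/kunit/kunit.py config
    ./tools/testing/kunit/kunit.py build
    ./tools/testing/kunit/kunit.py exec

    # 3. Boot the UML kernel directly; its output can also be piped back
    #    into the parser.
    ./vmlinux
    setarch `uname -m` -R ./vmlinux
    ./vmlinux | ./tools/testing/kunit/kunit.py parse

    # 4./5. Tweak the config (kunit_tool preserves changes) or reset it.
    make ARCH=um menuconfig
    make ARCH=um defconfig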
--
2.27.0.rc2.251.g90737beb825-goog
When running with conntrack rules, the dropped overlap fragments may cause
sendto to return EPERM. Instead of failing the test outright, just ignore
those errors and continue. If this causes packets with overlapping fragments
to be dropped, that is the expected behavior and is okay. If it instead
causes packets that are expected to be received to be dropped, which should
not happen, it will be detected as a failure.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo(a)canonical.com>
---
tools/testing/selftests/net/ip_defrag.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/net/ip_defrag.c b/tools/testing/selftests/net/ip_defrag.c
index b53fb67f8e5e..62ee927bacae 100644
--- a/tools/testing/selftests/net/ip_defrag.c
+++ b/tools/testing/selftests/net/ip_defrag.c
@@ -192,9 +192,9 @@ static void send_fragment(int fd_raw, struct sockaddr *addr, socklen_t alen,
}
res = sendto(fd_raw, ip_frame, frag_len, 0, addr, alen);
- if (res < 0)
+ if (res < 0 && errno != EPERM)
error(1, errno, "send_fragment");
- if (res != frag_len)
+ if (res >= 0 && res != frag_len)
error(1, 0, "send_fragment: %d vs %d", res, frag_len);
frag_counter++;
@@ -313,9 +313,9 @@ static void send_udp_frags(int fd_raw, struct sockaddr *addr,
iphdr->ip_len = htons(frag_len);
}
res = sendto(fd_raw, ip_frame, frag_len, 0, addr, alen);
- if (res < 0)
+ if (res < 0 && errno != EPERM)
error(1, errno, "sendto overlap: %d", frag_len);
- if (res != frag_len)
+ if (res >= 0 && res != frag_len)
error(1, 0, "sendto overlap: %d vs %d", (int)res, frag_len);
frag_counter++;
}
--
2.25.1
Hi,
Recently, I found some tests were always skipped.
Here is a series of patches to fix those issues.
The prime_numbers test is skipped in some cases because
prime_numbers.ko is not always compiled.
Since CONFIG_PRIME_NUMBERS is not an independently configurable
item (it has no title or help text), it is enabled only if other
configs (DRM_DEBUG_SELFTEST etc.) select it.
To fix this, I added a title and help text for CONFIG_PRIME_NUMBERS.
The sysctl test is skipped because:
- selftests/sysctl/config requires CONFIG_TEST_SYSCTL=y. But
  since lib/test_sysctl.c doesn't use module_init(), test_sysctl
  is not listed under /sys/module/ and the test script gives up.
- Even if we make CONFIG_TEST_SYSCTL=m, the test script checks
  /sys/module/test_sysctl before loading the module and gives up.
- Anyway, since the test module adds sysctl interfaces which are
  useless outside of testing, it would be better as a module.
This series includes fixes for the above 3 points:
- Fix lib/test_sysctl.c to use module_init().
- Fix tools/testing/selftests/sysctl/sysctl.sh to try to load the
  test module if it is not already loaded (nor built in); a sketch
  of this fallback follows the list.
- Fix tools/testing/selftests/sysctl/config to require
  CONFIG_TEST_SYSCTL=m, not y.
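
The sysctl.sh change boils down to a fallback roughly like the following
(a sketch only, not the exact patch; the function name and messages are
illustrative):

    ksft_skip=4   # kselftest convention: exit code 4 means "skipped"

    load_test_module()
    {
        # Nothing to do if test_sysctl is already visible under /sys/module
        # (built in or already loaded).
        if [ -d /sys/module/test_sysctl ]; then
            return 0
        fi
        # Otherwise try to load it instead of giving up immediately.
        if ! modprobe -q test_sysctl; then
            echo "$0: unable to load the test_sysctl module, skipping"
            exit $ksft_skip
        fi
    }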
Thank you,
---
Masami Hiramatsu (4):
lib: Make prime number generator independently selectable
lib: Make test_sysctl initialized as module
selftests/sysctl: Fix to load test_sysctl module
selftests/sysctl: Make sysctl test driver as a module
lib/math/Kconfig | 7 ++++++-
lib/test_sysctl.c | 2 +-
tools/testing/selftests/sysctl/config | 2 +-
tools/testing/selftests/sysctl/sysctl.sh | 13 ++-----------
4 files changed, 10 insertions(+), 14 deletions(-)
--
Masami Hiramatsu (Linaro) <mhiramat(a)kernel.org>
The test-klp-callbacks change replaces the original delay-based
synchronization with completion variables. The completion variable
interlocks the busy module with the concurrent loading of the target
livepatches, so the tests follow the actual execution flow instead of
relying on estimated time delays.
The test-klp-shadow-vars changes first refactor the code into a more
readable example while continuing to verify the component code. The work
is split into two patches to separate the renaming and restructuring
(part 1) from the additions and changes of logic (part 2). The last
change frees memory before bailing out in case of errors.
The patchset is to be merged via the livepatching tree and is based
against: livepatching/for-next
Joe Lawrence (1):
selftests/livepatch: rework test-klp-callbacks to use completion
variables
Yannick Cote (3):
selftests/livepatch: rework test-klp-shadow-vars
selftests/livepatch: more verification in test-klp-shadow-vars
selftests/livepatch: fix mem leaks in test-klp-shadow-vars
lib/livepatch/test_klp_callbacks_busy.c | 42 +++-
lib/livepatch/test_klp_shadow_vars.c | 222 +++++++++---------
.../selftests/livepatch/test-callbacks.sh | 29 ++-
.../selftests/livepatch/test-shadow-vars.sh | 85 ++++---
4 files changed, 214 insertions(+), 164 deletions(-)
--
2.25.4
Hi,
Here is a series adding a "requires:" list to simplify and unify
the requirement checks in each test case.
This series also includes the description-line fix and the
unresolved -> unsupported change ([1/7] and [2/7]).
Currently, we have many similar requirement checkers to find
unconfigured or unsupported (on older kernels) features in
each test case. I think it is a good time to unify those similar
checks.
Like the "description:" or "flags:" lines, this series introduces a
new "requires:" line and converts the current checking code into
"requires:" lines.
The requires line has several good effects: it not only simplifies
the code, it also unifies the reason message, and because the
requirements are checked before running the test case, unneeded
ftrace initialization is skipped.
The requires line supports the following checks (an example follows
the list):
- tracefs interface check: check whether the given file or directory
  exists in tracefs (no suffix) [3/7],[4/7],[5/7]
- available tracer check: check whether the given tracer is available
  (":tracer" suffix) [6/7]
- README feature check: check whether the given string is in the
  README (":README" suffix) [7/7]
Note that since the requires line reports an UNSUPPORTED error,
the requirements must be ftrace features, not user-space
environmental requirements. If there is some issue in user space
(e.g. a missing command, module, etc.), the test must report an
UNRESOLVED error instead.
This series depends on the following 2 commits:
commit 619ee76f5c9f ("selftests/ftrace: Return unsupported if no
error_log file") on Shuah's Kselftest tree
commit bea24f766efc ("selftests/ftrace: Distinguish between hist
and synthetic event checks") on Steven's Tracing tree
It can be applied to a tree which has merged both of them.
Also, you can get the series from the following.
git://git.kernel.org/pub/scm/linux/kernel/git/mhiramat/linux.git ftracetest-requires-v1
Thank you,
---
Masami Hiramatsu (7):
selftests/ftrace: Allow ":" in description
selftests/ftrace: Return unsupported for the unconfigured features
selftests/ftrace: Add "requires:" list support
selftests/ftrace: Convert required interface checks into requires list
selftests/ftrace: Convert check_filter_file() with requires list
selftests/ftrace: Support ":tracer" suffix for requires
selftests/ftrace: Support ":README" suffix for requires
tools/testing/selftests/ftrace/ftracetest | 11 ++++++-
.../selftests/ftrace/test.d/00basic/snapshot.tc | 3 +-
.../selftests/ftrace/test.d/00basic/trace_pipe.tc | 3 +-
.../ftrace/test.d/direct/kprobe-direct.tc | 6 +---
.../ftrace/test.d/dynevent/add_remove_kprobe.tc | 6 +---
.../ftrace/test.d/dynevent/add_remove_synth.tc | 5 +--
.../ftrace/test.d/dynevent/clear_select_events.tc | 11 +------
.../ftrace/test.d/dynevent/generic_clear_event.tc | 8 +----
.../selftests/ftrace/test.d/event/event-enable.tc | 6 +---
.../selftests/ftrace/test.d/event/event-no-pid.tc | 11 +------
.../selftests/ftrace/test.d/event/event-pid.tc | 11 +------
.../ftrace/test.d/event/subsystem-enable.tc | 6 +---
.../ftrace/test.d/event/toplevel-enable.tc | 6 +---
.../ftrace/test.d/ftrace/fgraph-filter-stack.tc | 14 +--------
.../ftrace/test.d/ftrace/fgraph-filter.tc | 8 +----
.../ftrace/test.d/ftrace/func-filter-glob.tc | 8 +----
.../test.d/ftrace/func-filter-notrace-pid.tc | 13 +-------
.../ftrace/test.d/ftrace/func-filter-pid.tc | 13 +-------
.../ftrace/test.d/ftrace/func-filter-stacktrace.tc | 3 +-
.../selftests/ftrace/test.d/ftrace/func_cpumask.tc | 6 +---
.../ftrace/test.d/ftrace/func_event_triggers.tc | 7 ++---
.../ftrace/test.d/ftrace/func_mod_trace.tc | 3 +-
.../ftrace/test.d/ftrace/func_profile_stat.tc | 3 +-
.../ftrace/test.d/ftrace/func_profiler.tc | 12 +-------
.../ftrace/test.d/ftrace/func_set_ftrace_file.tc | 6 ++--
.../ftrace/test.d/ftrace/func_stack_tracer.tc | 8 +----
.../test.d/ftrace/func_traceonoff_triggers.tc | 6 ++--
.../ftrace/test.d/ftrace/tracing-error-log.tc | 12 ++------
tools/testing/selftests/ftrace/test.d/functions | 28 ++++++++++++++----
.../ftrace/test.d/instances/instance-event.tc | 6 +---
.../selftests/ftrace/test.d/instances/instance.tc | 6 +---
.../ftrace/test.d/kprobe/add_and_remove.tc | 3 +-
.../selftests/ftrace/test.d/kprobe/busy_check.tc | 3 +-
.../selftests/ftrace/test.d/kprobe/kprobe_args.tc | 3 +-
.../ftrace/test.d/kprobe/kprobe_args_comm.tc | 3 +-
.../ftrace/test.d/kprobe/kprobe_args_string.tc | 3 +-
.../ftrace/test.d/kprobe/kprobe_args_symbol.tc | 3 +-
.../ftrace/test.d/kprobe/kprobe_args_syntax.tc | 5 +--
.../ftrace/test.d/kprobe/kprobe_args_type.tc | 5 +--
.../ftrace/test.d/kprobe/kprobe_args_user.tc | 5 +--
.../ftrace/test.d/kprobe/kprobe_eventname.tc | 3 +-
.../ftrace/test.d/kprobe/kprobe_ftrace.tc | 6 +---
.../ftrace/test.d/kprobe/kprobe_module.tc | 3 +-
.../ftrace/test.d/kprobe/kprobe_multiprobe.tc | 5 +--
.../ftrace/test.d/kprobe/kprobe_syntax_errors.tc | 5 +--
.../ftrace/test.d/kprobe/kretprobe_args.tc | 3 +-
.../ftrace/test.d/kprobe/kretprobe_maxactive.tc | 4 +--
.../ftrace/test.d/kprobe/multiple_kprobes.tc | 3 +-
.../selftests/ftrace/test.d/kprobe/probepoint.tc | 3 +-
.../selftests/ftrace/test.d/kprobe/profile.tc | 3 +-
.../ftrace/test.d/kprobe/uprobe_syntax_errors.tc | 5 +--
.../ftrace/test.d/preemptirq/irqsoff_tracer.tc | 4 +--
tools/testing/selftests/ftrace/test.d/template | 4 +++
.../selftests/ftrace/test.d/tracer/wakeup.tc | 6 +---
.../selftests/ftrace/test.d/tracer/wakeup_rt.tc | 6 +---
.../inter-event/trigger-action-hist-xfail.tc | 13 +-------
.../inter-event/trigger-field-variable-support.tc | 16 +---------
.../trigger-inter-event-combined-hist.tc | 16 +---------
.../inter-event/trigger-multi-actions-accept.tc | 16 +---------
.../inter-event/trigger-onchange-action-hist.tc | 8 +----
.../inter-event/trigger-onmatch-action-hist.tc | 16 +---------
.../trigger-onmatch-onmax-action-hist.tc | 16 +---------
.../inter-event/trigger-onmax-action-hist.tc | 16 +---------
.../inter-event/trigger-snapshot-action-hist.tc | 20 +------------
.../trigger-synthetic-event-createremove.tc | 11 +------
.../inter-event/trigger-synthetic-event-syntax.tc | 11 +------
.../inter-event/trigger-trace-action-hist.tc | 18 +-----------
.../ftrace/test.d/trigger/trigger-eventonoff.tc | 11 +------
.../ftrace/test.d/trigger/trigger-filter.tc | 11 +------
.../ftrace/test.d/trigger/trigger-hist-mod.tc | 16 +---------
.../test.d/trigger/trigger-hist-syntax-errors.tc | 18 +-----------
.../ftrace/test.d/trigger/trigger-hist.tc | 16 +---------
.../ftrace/test.d/trigger/trigger-multihist.tc | 16 +---------
.../ftrace/test.d/trigger/trigger-snapshot.tc | 16 +---------
.../ftrace/test.d/trigger/trigger-stacktrace.tc | 11 +------
.../test.d/trigger/trigger-trace-marker-hist.tc | 21 +-------------
.../trigger/trigger-trace-marker-snapshot.tc | 21 +-------------
.../trigger-trace-marker-synthetic-kernel.tc | 31 +-------------------
.../trigger/trigger-trace-marker-synthetic.tc | 26 +----------------
.../ftrace/test.d/trigger/trigger-traceonoff.tc | 11 +------
80 files changed, 120 insertions(+), 633 deletions(-)
--
Masami Hiramatsu (Linaro) <mhiramat(a)kernel.org>
As seccomp_benchmark tries to calibrate how many samples will take more
than 5 seconds to execute, it may end up picking a number of samples
that takes around 10 (and up to 12) seconds. As the calibration takes
double that time, it comes to around 20 seconds. The benchmark then
executes the whole thing again, and then once more, with some added
overhead, so the test might take more than 40 seconds, which is too
close to the 45s timeout.
This is very dependent on the system where the test is executed, so it
may not always be observed, but it has been observed on x86 VMs. Using
a 90s timeout seems safe enough.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo(a)canonical.com>
---
tools/testing/selftests/seccomp/settings | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/seccomp/settings b/tools/testing/selftests/seccomp/settings
index d61f00d8cad3..ba4d85f74cd6 100644
--- a/tools/testing/selftests/seccomp/settings
+++ b/tools/testing/selftests/seccomp/settings
@@ -1 +1 @@
-90
+timeout=90
--
2.25.1
From: John Stultz <john.stultz(a)linaro.org>
[ Upstream commit 4bb9d46d47b105a774f9dca642f5271375bca4b2 ]
When I added the expected error testing, I forgot that I needed to
set the return value back to zero when we successfully see an error.
Without this change we only end up testing a single heap before the
test quits.
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: Benjamin Gaignard <benjamin.gaignard(a)linaro.org>
Cc: Brian Starkey <brian.starkey(a)arm.com>
Cc: Laura Abbott <labbott(a)redhat.com>
Cc: "Andrew F. Davis" <afd(a)ti.com>
Cc: linux-kselftest(a)vger.kernel.org
Signed-off-by: John Stultz <john.stultz(a)linaro.org>
Signed-off-by: Shuah Khan <skhan(a)linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
index cd5e1f602ac9..909da9cdda97 100644
--- a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
@@ -351,6 +351,7 @@ static int test_alloc_errors(char *heap_name)
}
printf("Expected error checking passed\n");
+ ret = 0;
out:
if (dmabuf_fd >= 0)
close(dmabuf_fd);
--
2.25.1
As seccomp_benchmark tries to calibrate how many samples will take more
than 5 seconds to execute, it may end up picking a number of samples
that takes around 10 (and up to 12) seconds. As the calibration takes
double that time, it comes to around 20 seconds. The benchmark then
executes the whole thing again, and then once more, with some added
overhead, so the test might take more than 40 seconds, which is too
close to the 45s timeout.
This is very dependent on the system where the test is executed, so it
may not always be observed, but it has been observed on x86 VMs. Using
a 90s timeout seems safe enough.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo(a)canonical.com>
---
tools/testing/selftests/seccomp/settings | 1 +
1 file changed, 1 insertion(+)
create mode 100644 tools/testing/selftests/seccomp/settings
diff --git a/tools/testing/selftests/seccomp/settings b/tools/testing/selftests/seccomp/settings
new file mode 100644
index 000000000000..ba4d85f74cd6
--- /dev/null
+++ b/tools/testing/selftests/seccomp/settings
@@ -0,0 +1 @@
+timeout=90
--
2.25.1