Many systems build/test up-to-date kernels with older libcs, and
an older glibc (2.17) lacks the definition of SOL_DCCP in
/usr/include/bits/socket.h (it was added in the 4.6 timeframe).
Adding the definition to the test program avoids a compilation
failure that gets in the way of building tools/testing/selftests/net.
With the definition added, the test itself works, either skipping
(when DCCP is not configured in the kernel under test) or passing,
so the missing definition appears to be the only dependency here on
a more up-to-date glibc.
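For reference, the same guard in a standalone userspace sketch (this
assumes a kernel built with DCCP support; DCCP_SOCKOPT_SERVICE comes
from linux/dccp.h, and the service code here is made up):

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/dccp.h>

/* Older glibc (e.g. 2.17) lacks this in bits/socket.h. */
#ifndef SOL_DCCP
#define SOL_DCCP 269
#endif

int main(void)
{
	unsigned int service = 42;	/* hypothetical service code */
	int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);

	if (fd < 0) {
		perror("socket");	/* e.g. CONFIG_IP_DCCP is not set */
		return 1;
	}
	if (setsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_SERVICE,
		       &service, sizeof(service)))
		perror("setsockopt");
	close(fd);
	return 0;
}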
Fixes: 11fb60d1089f ("selftests: net: reuseport_addr_any: add DCCP")
Signed-off-by: Alan Maguire <alan.maguire(a)oracle.com>
---
tools/testing/selftests/net/reuseport_addr_any.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/net/reuseport_addr_any.c b/tools/testing/selftests/net/reuseport_addr_any.c
index c623393..b8475cb2 100644
--- a/tools/testing/selftests/net/reuseport_addr_any.c
+++ b/tools/testing/selftests/net/reuseport_addr_any.c
@@ -21,6 +21,10 @@
#include <sys/socket.h>
#include <unistd.h>
+#ifndef SOL_DCCP
+#define SOL_DCCP 269
+#endif
+
static const char *IP4_ADDR = "127.0.0.1";
static const char *IP6_ADDR = "::1";
static const char *IP4_MAPPED6 = "::ffff:127.0.0.1";
--
1.8.3.1
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
The ftrace selftest "ftrace - test for function traceon/off triggers"
enables all events and reads the trace file. Now that reading the trace
file no longer disables tracing, and keeps reading any new data that is
added, the selftest gets stuck reading the trace file: data is added to
it faster than it can be read.
Instead of enabling all events, enable only the scheduler events, so
that the reads can keep up with the writes.
Link: http://lkml.kernel.org/r/20200318111345.0516642e@gandalf.local.home
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Acked-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
.../selftests/ftrace/test.d/ftrace/func_traceonoff_triggers.tc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_traceonoff_triggers.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_traceonoff_triggers.tc
index 0c04282d33dd..1947387fe976 100644
--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_traceonoff_triggers.tc
+++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_traceonoff_triggers.tc
@@ -41,7 +41,7 @@ fi
echo '** ENABLE EVENTS'
-echo 1 > events/enable
+echo 1 > events/sched/enable
echo '** ENABLE TRACING'
enable_tracing
--
2.25.1
On Wed, Mar 18, 2020 at 9:13 AM Steven Rostedt <rostedt(a)goodmis.org> wrote:
>
>
> From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
>
> The ftrace selftest "ftrace - test for function traceon/off triggers"
> enables all events and reads the trace file. Now that reading the trace
> file no longer disables tracing, and keeps reading any new data that is
> added, the selftest gets stuck reading the trace file: data is added to
> it faster than it can be read.
>
> Instead of enabling all events, enable only the scheduler events, so
> that the reads can keep up with the writes.
>
> Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
> ---
> .../selftests/ftrace/test.d/ftrace/func_traceonoff_triggers.tc | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
+ linux-kselftest and my LF email.
thanks,
-- Shuah
If the 'name' array in check_vgem() is not initialized to zero, the
value of name[4] may be random, which can cause strcmp(name, "vgem")
to fail.
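For reference, the function in question looks roughly like this
(lightly paraphrased from the test; it needs <drm/drm.h>, <sys/ioctl.h>
and <string.h>), which shows why the fifth byte must already be zero:

static int check_vgem(int fd)
{
	drm_version_t version = { 0 };
	char name[5] = { 0 };	/* the ioctl writes at most 4 bytes, no NUL */
	int ret;

	version.name_len = 4;
	version.name = name;

	/* Copies the driver name ("vgem", "i915", ...) into name[0..3];
	 * name[4] keeps whatever value it was initialized with. */
	ret = ioctl(fd, DRM_IOCTL_VERSION, &version);
	if (ret)
		return 0;

	/* Only a zeroed name[4] guarantees this comparison terminates. */
	return !strcmp(name, "vgem");
}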
Signed-off-by: Leon He <leon.he(a)unisoc.com>
---
tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
index cd5e1f6..21f3d19 100644
--- a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
@@ -22,7 +22,7 @@
static int check_vgem(int fd)
{
drm_version_t version = { 0 };
- char name[5];
+ char name[5] = { 0 };
int ret;
version.name_len = 4;
--
2.7.4
Hi!
Shuah please consider applying to the kselftest tree.
This set is an attempt to make running tests for different
sets of data easier. The direct motivation is the tls
test which we'd like to run for TLS 1.2 and TLS 1.3,
but currently there is no easy way to invoke the same
tests with different parameters.
Tested all users of kselftest_harness.h.
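To give a feel for the result, a test with variants would look roughly
like this (a sketch against the harness API as this series shapes it;
the fixture and field names are illustrative, see the tls.c conversion
in the last patch for the real thing):

#include "../kselftest_harness.h"

FIXTURE(conn)
{
	int fd;			/* per-test state */
};

/* One variant per set of data the tests should be repeated for. */
FIXTURE_VARIANT(conn)
{
	int tls_version;
};

FIXTURE_VARIANT_ADD(conn, tls12) { .tls_version = 12, };
FIXTURE_VARIANT_ADD(conn, tls13) { .tls_version = 13, };

FIXTURE_SETUP(conn)
{
	self->fd = -1;
}

FIXTURE_TEARDOWN(conn)
{
}

/* Runs once per variant, reported with a dot between fixture and
 * variant name, e.g. conn.tls12.check and conn.tls13.check. */
TEST_F(conn, check)
{
	ASSERT_LE(12, variant->tls_version);
}

TEST_HARNESS_MAIN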
v2:
- don't run tests by fixture
- don't pass params as an explicit argument
v3:
- go back to the original implementation with an extra
parameter, and running by fixture (Kees);
- add LIST_APPEND helper (Kees);
- add a dot between fixture and param name (Kees);
- rename the params to variants (Tim);
v1: https://lore.kernel.org/netdev/20200313031752.2332565-1-kuba@kernel.org/
v2: https://lore.kernel.org/netdev/20200314005501.2446494-1-kuba@kernel.org/
Jakub Kicinski (6):
selftests/seccomp: use correct FIXTURE macro
kselftest: factor out list manipulation to a helper
kselftest: create fixture objects
kselftest: run tests by fixture
kselftest: add fixture variants
selftests: tls: run all tests for TLS 1.2 and TLS 1.3
Documentation/dev-tools/kselftest.rst | 3 +-
tools/testing/selftests/kselftest_harness.h | 233 ++++++++++++++----
tools/testing/selftests/net/tls.c | 93 ++-----
tools/testing/selftests/seccomp/seccomp_bpf.c | 10 +-
4 files changed, 206 insertions(+), 133 deletions(-)
--
2.24.1
Hi!
Shuah please consider applying to the kselftest tree.
This set is an attempt to make running tests for different
sets of data easier. The direct motivation is the tls
test which we'd like to run for TLS 1.2 and TLS 1.3,
but currently there is no easy way to invoke the same
tests with different parameters.
Tested all users of kselftest_harness.h.
v2:
- don't run tests by fixture
- don't pass params as an explicit argument
v3:
- go back to the original implementation with an extra
parameter, and running by fixture (Kees);
- add LIST_APPEND helper (Kees);
- add a dot between fixture and param name (Kees);
- rename the params to variants (Tim);
v4:
- whitespace fixes.
v1: https://lore.kernel.org/netdev/20200313031752.2332565-1-kuba@kernel.org/
v2: https://lore.kernel.org/netdev/20200314005501.2446494-1-kuba@kernel.org/
v3: https://lore.kernel.org/netdev/20200316225647.3129354-1-kuba@kernel.org/
Jakub Kicinski (5):
kselftest: factor out list manipulation to a helper
kselftest: create fixture objects
kselftest: run tests by fixture
kselftest: add fixture variants
selftests: tls: run all tests for TLS 1.2 and TLS 1.3
Documentation/dev-tools/kselftest.rst | 3 +-
tools/testing/selftests/kselftest_harness.h | 236 +++++++++++++++-----
tools/testing/selftests/net/tls.c | 93 ++------
3 files changed, 204 insertions(+), 128 deletions(-)
--
2.24.1
Hi!
This set is an attempt to make running tests for different
sets of data easier. The direct motivation is the tls
test which we'd like to run for TLS 1.2 and TLS 1.3,
but currently there is no easy way to invoke the same
tests with different parameters.
Tested all users of kselftest_harness.h.
v2:
- don't run tests by fixture
- don't pass params as an explicit argument
Note that we lose a little bit of type safety
by not passing parameters as an explicit argument.
If the user passes the name of the wrong fixture as the argument
to CURRENT_FIXTURE(), it will happily cast to that fixture's type.
Jakub Kicinski (4):
selftests/seccomp: use correct FIXTURE macro
kselftest: create fixture objects
kselftest: add fixture parameters
selftests: tls: run all tests for TLS 1.2 and TLS 1.3
Documentation/dev-tools/kselftest.rst | 3 +-
tools/testing/selftests/kselftest_harness.h | 156 ++++++++++++++++--
tools/testing/selftests/net/tls.c | 93 ++---------
tools/testing/selftests/seccomp/seccomp_bpf.c | 10 +-
4 files changed, 168 insertions(+), 94 deletions(-)
--
2.24.1
Hi!
This set is an attempt to make running tests for different
sets of data easier. The direct motivation is the tls
test which we'd like to run for TLS 1.2 and TLS 1.3,
but currently there is no easy way to invoke the same
tests with different parameters.
Tested all users of kselftest_harness.h.
Jakub Kicinski (5):
selftests/seccomp: use correct FIXTURE macro
kselftest: create fixture objects
kselftest: run tests by fixture
kselftest: add fixture parameters
selftests: tls: run all tests for TLS 1.2 and TLS 1.3
Documentation/dev-tools/kselftest.rst | 3 +-
tools/testing/selftests/kselftest_harness.h | 228 +++++++++++++++---
tools/testing/selftests/net/tls.c | 93 ++-----
tools/testing/selftests/seccomp/seccomp_bpf.c | 10 +-
4 files changed, 213 insertions(+), 121 deletions(-)
--
2.24.1
Remove some of the outmoded "Why KUnit" rationale, and move some
UML-specific information to the kunit_tool page. Also update the Getting
Started guide to mention running tests without the kunit_tool wrapper.
Signed-off-by: David Gow <davidgow(a)google.com>
Reviewed-by: Frank Rowand <frank.rowand(a)sony.com>
---
Sorry: I missed a couple of issues in the last version. They're fixed
here, and I think this should be ready to go.
Changelog:
v4:
- Fix typo: s/offsers/offers
- Talk about KUnit tests running on most "architectures" instead of
"kernel configurations.
v3:
https://lore.kernel.org/linux-kselftest/20200214235723.254228-1-davidgow@go…
- Added a note that KUnit can be used with UML, both with and without
kunit_tool to replace the section moved to kunit_tool.
v2:
https://lore.kernel.org/linux-kselftest/f99a3d4d-ad65-5fd1-3407-db33f378b1f…
- Reinstated the "Why Kunit?" section, minus the comparison with other
testing frameworks (covered in the FAQ), and the description of UML.
- Moved the description of UML into to kunit_tool page.
- Tidied up the wording around how KUnit is built and run to make it
work
without the UML description.
v1:
https://lore.kernel.org/linux-kselftest/9c703dea-a9e1-94e2-c12d-3cb0a09e75a…
- Initial patch
Documentation/dev-tools/kunit/index.rst | 40 ++++++----
Documentation/dev-tools/kunit/kunit-tool.rst | 7 ++
Documentation/dev-tools/kunit/start.rst | 80 ++++++++++++++++----
3 files changed, 99 insertions(+), 28 deletions(-)
diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
index d16a4d2c3a41..e93606ecfb01 100644
--- a/Documentation/dev-tools/kunit/index.rst
+++ b/Documentation/dev-tools/kunit/index.rst
@@ -17,14 +17,23 @@ What is KUnit?
==============
KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
-These tests are able to be run locally on a developer's workstation without a VM
-or special hardware.
KUnit is heavily inspired by JUnit, Python's unittest.mock, and
Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
cases, grouping related test cases into test suites, providing common
infrastructure for running tests, and much more.
+KUnit consists of a kernel component, which provides a set of macros for easily
+writing unit tests. Tests written against KUnit will run on kernel boot if
+built-in, or when loaded if built as a module. These tests write out results to
+the kernel log in `TAP <https://testanything.org/>`_ format.
+
+To make running these tests (and reading the results) easier, KUnit offers
+:doc:`kunit_tool <kunit-tool>`, which builds a `User Mode Linux
+<http://user-mode-linux.sourceforge.net>`_ kernel, runs it, and parses the test
+results. This provides a quick way of running KUnit tests during development,
+without requiring a virtual machine or separate hardware.
+
Get started now: :doc:`start`
Why KUnit?
@@ -36,21 +45,20 @@ allow all possible code paths to be tested in the code under test; this is only
possible if the code under test is very small and does not have any external
dependencies outside of the test's control like hardware.
-Outside of KUnit, there are no testing frameworks currently
-available for the kernel that do not require installing the kernel on a test
-machine or in a VM and all require tests to be written in userspace running on
-the kernel; this is true for Autotest, and kselftest, disqualifying
-any of them from being considered unit testing frameworks.
+KUnit provides a common framework for unit tests within the kernel.
+
+KUnit tests can be run on most architectures, and most tests are architecture
+independent. All built-in KUnit tests run on kernel startup. Alternatively,
+KUnit and KUnit tests can be built as modules and tests will run when the test
+module is loaded.
-KUnit addresses the problem of being able to run tests without needing a virtual
-machine or actual hardware with User Mode Linux. User Mode Linux is a Linux
-architecture, like ARM or x86; however, unlike other architectures it compiles
-to a standalone program that can be run like any other program directly inside
-of a host operating system; to be clear, it does not require any virtualization
-support; it is just a regular program.
+.. note::
-Alternatively, kunit and kunit tests can be built as modules and tests will
-run when the test module is loaded.
+ KUnit can also run tests without needing a virtual machine or actual
+ hardware under User Mode Linux. User Mode Linux is a Linux architecture,
+ like ARM or x86, which compiles the kernel as a Linux executable. KUnit
+ can be used with UML either by building with ``ARCH=um`` (like any other
+ architecture), or by using :doc:`kunit_tool <kunit-tool>`.
KUnit is fast. Excluding build time, from invocation to completion KUnit can run
several dozen tests in only 10 to 20 seconds; this might not sound like a big
@@ -81,3 +89,5 @@ How do I use it?
* :doc:`start` - for new users of KUnit
* :doc:`usage` - for a more detailed explanation of KUnit features
* :doc:`api/index` - for the list of KUnit APIs used for testing
+* :doc:`kunit-tool` - for more information on the kunit_tool helper script
+* :doc:`faq` - for answers to some common questions about KUnit
diff --git a/Documentation/dev-tools/kunit/kunit-tool.rst b/Documentation/dev-tools/kunit/kunit-tool.rst
index 50d46394e97e..949af2da81e5 100644
--- a/Documentation/dev-tools/kunit/kunit-tool.rst
+++ b/Documentation/dev-tools/kunit/kunit-tool.rst
@@ -12,6 +12,13 @@ the Linux kernel as UML (`User Mode Linux
<http://user-mode-linux.sourceforge.net/>`_), running KUnit tests, parsing
the test results and displaying them in a user friendly manner.
+kunit_tool addresses the problem of being able to run tests without needing a
+virtual machine or actual hardware with User Mode Linux. User Mode Linux is a
+Linux architecture, like ARM or x86; however, unlike other architectures it
+compiles the kernel as a standalone Linux executable that can be run like any
+other program directly inside of a host operating system. To be clear, it does
+not require any virtualization support: it is just a regular program.
+
What is a kunitconfig?
======================
diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
index 4e1d24db6b13..e1c5ce80ce12 100644
--- a/Documentation/dev-tools/kunit/start.rst
+++ b/Documentation/dev-tools/kunit/start.rst
@@ -9,11 +9,10 @@ Installing dependencies
KUnit has the same dependencies as the Linux kernel. As long as you can build
the kernel, you can run KUnit.
-KUnit Wrapper
-=============
-Included with KUnit is a simple Python wrapper that helps format the output to
-easily use and read KUnit output. It handles building and running the kernel, as
-well as formatting the output.
+Running tests with the KUnit Wrapper
+====================================
+Included with KUnit is a simple Python wrapper which runs tests under User Mode
+Linux, and formats the test results.
The wrapper can be run with:
@@ -21,22 +20,42 @@ The wrapper can be run with:
./tools/testing/kunit/kunit.py run --defconfig
-For more information on this wrapper (also called kunit_tool) checkout the
+For more information on this wrapper (also called kunit_tool) check out the
:doc:`kunit-tool` page.
Creating a .kunitconfig
-=======================
-The Python script is a thin wrapper around Kbuild. As such, it needs to be
-configured with a ``.kunitconfig`` file. This file essentially contains the
-regular Kernel config, with the specific test targets as well.
-
+-----------------------
+If you want to run a specific set of tests (rather than those listed in the
+KUnit defconfig), you can provide Kconfig options in the ``.kunitconfig`` file.
+This file essentially contains the regular Kernel config, with the specific
+test targets as well. The ``.kunitconfig`` should also contain any other config
+options required by the tests.
+
+A good starting point for a ``.kunitconfig`` is the KUnit defconfig:
.. code-block:: bash
cd $PATH_TO_LINUX_REPO
cp arch/um/configs/kunit_defconfig .kunitconfig
-Verifying KUnit Works
----------------------
+You can then add any other Kconfig options you wish, e.g.:
+.. code-block:: none
+
+ CONFIG_LIST_KUNIT_TEST=y
+
+:doc:`kunit_tool <kunit-tool>` will ensure that all config options set in
+``.kunitconfig`` are set in the kernel ``.config`` before running the tests.
+It'll warn you if you haven't included the dependencies of the options you're
+using.
+
+.. note::
+ Note that removing something from the ``.kunitconfig`` will not trigger a
+ rebuild of the ``.config`` file: the configuration is only updated if the
+ ``.kunitconfig`` is not a subset of ``.config``. This means that you can use
+ other tools (such as make menuconfig) to adjust other config options.
+
+
+Running the tests
+-----------------
To make sure that everything is set up correctly, simply invoke the Python
wrapper from your kernel repo:
@@ -62,6 +81,41 @@ followed by a list of tests that are run. All of them should be passing.
Because it is building a lot of sources for the first time, the
``Building KUnit kernel`` step may take a while.
+Running tests without the KUnit Wrapper
+=======================================
+
+If you'd rather not use the KUnit Wrapper (if, for example, you need to
+integrate with other systems, or use an architecture other than UML), KUnit can
+be included in any kernel, and the results read out and parsed manually.
+
+.. note::
+ KUnit is not designed for use in a production system, and it's possible that
+ tests may reduce the stability or security of the system.
+
+
+
+Configuring the kernel
+----------------------
+
+In order to enable KUnit itself, you simply need to enable the ``CONFIG_KUNIT``
+Kconfig option (it's under Kernel Hacking/Kernel Testing and Coverage in
+menuconfig). From there, you can enable any KUnit tests you want: they usually
+have config options ending in ``_KUNIT_TEST``.
+
+KUnit and KUnit tests can be compiled as modules: in this case the tests in a
+module will be run when the module is loaded.
+
+Running the tests
+-----------------
+
+Build and run your kernel as usual. Test output will be written to the kernel
+log in `TAP <https://testanything.org/>`_ format.
+
+.. note::
+ It's possible that there will be other lines and/or data interspersed in the
+ TAP output.
+
+
Writing your first test
=======================
--
2.25.1.481.gfbce0eb801-goog
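For context, the kind of test the updated documents describe looks
roughly like this (a minimal sketch against the KUnit API of this era;
suite and case names are made up):

#include <kunit/test.h>

static void example_add_test(struct kunit *test)
{
	/* A failed expectation is reported in the TAP output. */
	KUNIT_EXPECT_EQ(test, 1 + 2, 3);
}

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_add_test),
	{}
};

static struct kunit_suite example_test_suite = {
	.name = "example",
	.test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);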
Previously, when a selftest timed out, the program would just fall over
and no accounting of failures would be reported (i.e. it would result in
an incomplete TAP report). Instead, add an explicit SIGALRM handler to
cleanly catch and report the timeout.
Before:
[==========] Running 2 tests from 2 test cases.
[ RUN ] timeout.finish
[ OK ] timeout.finish
[ RUN ] timeout.too_long
Alarm clock
After:
[==========] Running 2 tests from 2 test cases.
[ RUN ] timeout.finish
[ OK ] timeout.finish
[ RUN ] timeout.too_long
timeout.too_long: Test terminated by timeout
[ FAIL ] timeout.too_long
[==========] 1 / 2 tests passed.
[ FAILED ]
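The mechanism, reduced to a standalone sketch (the helper names here
are hypothetical; the series wires this into the harness's existing
fork/wait handling of each test case):

#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t test_child;
static volatile bool timed_out;

static void sigalrm_handler(int sig)
{
	kill(test_child, SIGKILL);	/* async-signal-safe */
	timed_out = true;
}

/* Fork one test case and reap it, converting a hang into a report. */
static bool run_with_timeout(void (*test_fn)(void), unsigned int seconds)
{
	int status;

	test_child = fork();
	if (test_child == 0) {
		test_fn();
		_exit(0);
	}

	signal(SIGALRM, sigalrm_handler);
	alarm(seconds);
	waitpid(test_child, &status, 0);
	alarm(0);

	if (timed_out) {
		printf("Test terminated by timeout\n");
		return false;
	}
	return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}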
-Kees
Kees Cook (2):
selftests/seccomp: Move test child waiting logic
selftests/harness: Handle timeouts cleanly
tools/testing/selftests/kselftest_harness.h | 144 ++++++++++++++------
1 file changed, 99 insertions(+), 45 deletions(-)
--
2.20.1
I was working on fixing the cross-compilation for the selftests/vm tests.
Currently, there are two issues in my testing:
1) problem: required library missing from some cross-compile environments:
tools/testing/selftests/vm/mlock-random-test.c requires libcap
be installed. The target line for mlock-random-test in
tools/testing/selftests/vm/Makefile looks like this:
$(OUTPUT)/mlock-random-test: LDLIBS += -lcap
and mlock-random-test.c has this include line:
#include <sys/capability.h>
this is confusing, since this is different from the header file
linux/capability.h. It is associated with the capability library (libcap)
and not the kernel. In any event, on some distros and in some
cross-compile SDKs the package containing these files is not installed
by default.
Once this library is installed, things progress farther. Using an Ubuntu
system, you can install the cross version of this library (for arm64) by doing:
$ sudo apt install libcap-dev:arm64
1) solution:
I would like to add some meta-data about this build dependency, by putting
something in the settings file as a hint to CI build systems. Specifically, I'd like to
create the file 'tools/testing/selftests/vm/settings', with the content:
NEED_LIB=cap
We already use settings for other meta-data about a test (right now, just a
non-default timeout value), but I don't want to create a new file or syntax
for this build dependency data.
Let me know what you think.
I may follow up with some script in the kernel source tree to check these
dependencies, independent of any CI system. I have such a script in Fuego
that I could submit, but it would need some work to fit into the kernel build
flow for kselftest. The goal would be to provide a nicely formatted warning,
with a recommendation for a package install. But that's more work than
I think is needed right now just to let developers know there's a build dependency
here.
2) problem: reference to source-relative header file
the Makefile for vm uses a relative path for include directories.
Specifically, it has the line:
CFLAGS = -Wall -I ../../../../../usr/include $(EXTRA_CFLAGS)
I believe this needs to reference kernel include files from the
output directory, not the source directory.
With the relative include directory path, the program userfaultfd.c
gets compilation error like this:
userfaultfd.c:267:21: error: 'UFFD_API_RANGE_IOCTLS_BASIC' undeclared here (not in a function)
.expected_ioctls = UFFD_API_RANGE_IOCTLS_BASIC,
^
userfaultfd.c: In function 'uffd_poll_thread':
userfaultfd.c:529:8: error: 'UFFD_EVENT_FORK' undeclared (first use in this function)
case UFFD_EVENT_FORK:
^
userfaultfd.c:529:8: note: each undeclared identifier is reported only once for each function it appears in
userfaultfd.c:531:18: error: 'union <anonymous>' has no member named 'fork'
uffd = msg.arg.fork.ufd;
^
2) incomplete solution:
I originally changed this line to read:
CFLAGS = -Wall -I $(KBUILD_OUTPUT)/usr/include $(EXTRA_CFLAGS)
This works when the output directory is specified using KBUILD_OUTPUT,
but not when the output directory is specified using O=
I'm not sure what happens when the output directory is specified
with a non-source-tree current working directory.
In any event, while researching a proper solution to this, I found
the following in tools/testing/selftests/Makefile:
ifneq ($(O),)
	BUILD := $(O)
else
	ifneq ($(KBUILD_OUTPUT),)
		BUILD := $(KBUILD_OUTPUT)/kselftest
	else
		BUILD := $(shell pwd)
		DEFAULT_INSTALL_HDR_PATH := 1
	endif
endif
This doesn't seem right. It looks like the selftests Makefile treats a directory
passed in using O= differently from one specified using KBUILD_OUTPUT
or the current working directory.
In the KBUILD_OUTPUT case, you get an extra 'kselftest' directory layer
that you don't get for the other two.
In contrast, the kernel top-level Makefile has this:
ifeq ("$(origin O)", "command line")
KBUILD_OUTPUT := $(O)
endif
(and from then on, the top-level Makefile appears to only use KBUILD_OUTPUT)
This makes it look like the rest of the kernel build system treats O= and KBUILD_OUTPUT
identically.
Am I missing something, or is there a flaw in the O=/KBUILD_OUTPUT handling in
kselftest? Please let me know and I'll try to work out an appropriate fix for
cross-compiling the vm tests.
-- Tim
Hi Shuah,
We discussed collecting and uploading all syzkaller reproducers
somewhere. You wanted to see how they look. I've uploaded all current
reproducers here:
https://github.com/dvyukov/syzkaller-repros
Minimalistic build/run scripts are included.
+ some testing mailing lists too, as this can be used as a test suite
If you have any potential uses for this, you are welcome to use it.
But then we probably need to find some more official and shared place
for them than my private github.
The test programs can also be bulk updated if necessary, because all
of this is auto-generated.
Thanks
From: SeongJae Park <sjpark(a)amazon.de>
Deletions of configs in the '.kunitconfig' are not applied, because kunit
rebuilds the '.config' only if the '.kunitconfig' is not a subset of the
'.config'. To allow the deletions to be applied, this commit modifies
the '.config' rebuild condition to additionally check the modification
times of those files.
Signed-off-by: SeongJae Park <sjpark(a)amazon.de>
---
tools/testing/kunit/kunit_kernel.py | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
index cc5d844ecca1..a3a5d6c7e66d 100644
--- a/tools/testing/kunit/kunit_kernel.py
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -111,17 +111,22 @@ class LinuxSourceTree(object):
return True
def build_reconfig(self, build_dir):
- """Creates a new .config if it is not a subset of the .kunitconfig."""
+ """Creates a new .config if it is not a subset of, or older than the .kunitconfig."""
kconfig_path = get_kconfig_path(build_dir)
if os.path.exists(kconfig_path):
existing_kconfig = kunit_config.Kconfig()
existing_kconfig.read_from_file(kconfig_path)
- if not self._kconfig.is_subset_of(existing_kconfig):
- print('Regenerating .config ...')
- os.remove(kconfig_path)
- return self.build_config(build_dir)
- else:
+ subset = self._kconfig.is_subset_of(existing_kconfig)
+
+ kunitconfig_mtime = os.path.getmtime(kunitconfig_path)
+ kconfig_mtime = os.path.getmtime(kconfig_path)
+ older = kconfig_mtime < kunitconfig_mtime
+
+ if subset and not older:
return True
+ print('Regenerating .config ...')
+ os.remove(kconfig_path)
+ return self.build_config(build_dir)
else:
print('Generating .config ...')
return self.build_config(build_dir)
--
2.17.1
A recent RFC patch set [1] suggests some additional functionality
may be needed around kunit resources. It seems to require
1. support for resources without allocation
2. support for lookup of such resources
3. support for access to resources across multiple kernel threads
The proposed changes here are designed to address these needs.
The idea is we first generalize the API to support adding
resources with static data; then from there we support named
resources. The latter support is needed because if we are
in a different thread context and only have the "struct kunit *"
to work with, we need a way to identify a resource in lookup.
[1] https://lkml.org/lkml/2020/2/26/1286
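Concretely, the intended flow is something like the following sketch
(function names and signatures as proposed in these patches, so treat
them as tentative; the real API also takes a reference on the found
resource, dropped again with kunit_put_resource()):

static struct kunit_resource res;
static int shared_value = 42;

static void example_test(struct kunit *test)
{
	struct kunit_resource *found;

	/* No allocation: hand KUnit an existing object, tagged by name. */
	kunit_add_named_resource(test, NULL, NULL, &res,
				 "example-name", &shared_value);

	/* ... later, possibly from another kernel thread that only has
	 * the "struct kunit *", find the resource again by its name. */
	found = kunit_find_named_resource(test, "example-name");
	KUNIT_EXPECT_PTR_EQ(test, found->data, (void *)&shared_value);
}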
Alan Maguire (2):
kunit: generalize kunit_resource API beyond allocated resources
kunit: add support for named resources
include/kunit/test.h | 145 ++++++++++++++++++++++------
lib/kunit/kunit-test.c | 103 ++++++++++++++++----
lib/kunit/string-stream.c | 14 ++-
lib/kunit/test.c | 234 +++++++++++++++++++++++++++++++++-------------
4 files changed, 375 insertions(+), 121 deletions(-)
--
1.8.3.1
When kunit tests are run on native (i.e. non-UML) environments, the results
of test execution are often intermixed with dmesg output. This patch
series attempts to solve this by providing a debugfs representation
of the results of the last test run, available as
/sys/kernel/debug/kunit/<testsuite>/results
Changes since v5:
- replaced undefined behaviour use of snprintf(buf, ..., buf) in kunit_log()
with a function to append string to existing log (Frank, patch 1)
- added clarification on log size limitations to documentation
(Frank, patch 4)
Changes since v4:
- added suite-level log expectations to kunit log test (Brendan, patch 2)
- added log expectations (of it being NULL) for case where
CONFIG_KUNIT_DEBUGFS=n to kunit log test (patch 2)
- added patch 3 which replaces subtest tab indentation with 4 space
indentation as per TAP 14 spec (Frank, patch 3)
Changes since v3:
- added CONFIG_KUNIT_DEBUGFS to support conditional compilation of debugfs
representation, including string logging (Frank, patch 1)
- removed unneeded NULL check for test_case in
kunit_suite_for_each_test_case() (Frank, patch 1)
- added kunit log test to verify logging multiple strings works
(Frank, patch 2)
- rephrased description of results file (Frank, patch 3)
Changes since v2:
- updated kunit_status2str() to kunit_status_to_string() and made it
static inline in include/kunit/test.h (Brendan)
- added log string to struct kunit_suite and kunit_case, with log
pointer in struct kunit pointing at the case log. This allows us
to collect kunit_[err|info|warning]() messages at the same time
as we printk() them. This solves for the most part the sharing
of log messages between test execution and debugfs since we
just print the suite log (which contains the test suite preamble)
and the individual test logs. The only exception is the suite-level
status, which we cannot store in the suite log as it would mean
we'd print the suite and its status prior to the suite's results.
(Brendan, patch 1)
- dropped debugfs-based kunit run patch for now so as not to cause
problems with tests currently under development (Brendan)
- fixed doc issues with code block (Brendan, patch 3)
Changes since v1:
- trimmed unneeded include files in lib/kunit/debugfs.c (Greg)
- renamed global debugfs functions to be prefixed with kunit_ (Greg)
- removed error checking for debugfs operations (Greg)
Alan Maguire (4):
kunit: add debugfs /sys/kernel/debug/kunit/<suite>/results display
kunit: add log test
kunit: subtests should be indented 4 spaces according to TAP
kunit: update documentation to describe debugfs representation
Documentation/dev-tools/kunit/usage.rst | 14 +++
include/kunit/test.h | 59 +++++++++++--
lib/kunit/Kconfig | 8 ++
lib/kunit/Makefile | 4 +
lib/kunit/assert.c | 79 ++++++++---------
lib/kunit/debugfs.c | 116 +++++++++++++++++++++++++
lib/kunit/debugfs.h | 30 +++++++
lib/kunit/kunit-test.c | 45 +++++++++-
lib/kunit/test.c | 147 +++++++++++++++++++++++++-------
9 files changed, 421 insertions(+), 81 deletions(-)
create mode 100644 lib/kunit/debugfs.c
create mode 100644 lib/kunit/debugfs.h
--
1.8.3.1
This patch set has several miscellaneous fixes to the resctrl selftest tool.
Some fixes are minor in nature while others are major.
The minor fixes are
1. Typos, comment format
2. Fix MBA feature detection
3. Fix a bug while selecting sibling cpu
4. Remove unnecessary use of variable arguments
5. Change MBM/MBA results reporting format from absolute values to percentage
The major fixes change the CAT and CQM test cases. The CAT test wasn't
really testing CAT, as it never used the cache it had allocated; hence,
change the test case to cover the noisy-neighbor use case. CAT guarantees a
user-specified amount of cache for a process or a group of processes, so
test exactly that: the updated test case checks whether a critical process
is impacted by a noisy neighbor, and fails if it is.
The present CQM test assumes that all the memory allocated for a process
(of a size less than the LLC size) will fit into the cache without any
overlap. While this is mostly true, it cannot *always* be true, by the
nature of how caches work: two addresses can index into the same cache
line. Hence, change the CQM test such that it now uses CAT: allocate a
specific amount of cache using CAT and check whether CQM reports more than
what CAT has allocated.
Fenghua Yu (1):
selftests/resctrl: Fix missing options "-n" and "-p"
Reinette Chatre (4):
selftests/resctrl: Fix feature detection
selftests/resctrl: Fix typo
selftests/resctrl: Fix typo in help text
selftests/resctrl: Ensure sibling CPU is not same as original CPU
Sai Praneeth Prakhya (8):
selftests/resctrl: Fix MBA/MBM results reporting format
selftests/resctrl: Don't use variable argument list for setup function
selftests/resctrl: Fix typos
selftests/resctrl: Modularize fill_buf for new CAT test case
selftests/resctrl: Change Cache Allocation Technology (CAT) test
selftests/resctrl: Change Cache Quality Monitoring (CQM) test
selftests/resctrl: Dynamically select buffer size for CAT test
selftests/resctrl: Cleanup fill_buff after changing CAT test
tools/testing/selftests/resctrl/cache.c | 179 ++++++++-----
tools/testing/selftests/resctrl/cat_test.c | 322 +++++++++++++-----------
tools/testing/selftests/resctrl/cqm_test.c | 210 +++++++++-------
tools/testing/selftests/resctrl/fill_buf.c | 113 ++++++---
tools/testing/selftests/resctrl/mba_test.c | 32 ++-
tools/testing/selftests/resctrl/mbm_test.c | 33 ++-
tools/testing/selftests/resctrl/resctrl.h | 19 +-
tools/testing/selftests/resctrl/resctrl_tests.c | 26 +-
tools/testing/selftests/resctrl/resctrl_val.c | 22 +-
tools/testing/selftests/resctrl/resctrlfs.c | 52 +++-
10 files changed, 592 insertions(+), 416 deletions(-)
--
2.7.4
Fix seccomp relocatable builds. This is a simple fix that uses the
right lib.mk variable, TEST_CUSTOM_PROGS, to keep the custom build
rule and preserve the dependency on the local kselftest_harness.h
header. The custom rule now applies to both seccomp_bpf and
seccomp_benchmark, for simpler logic.
Uses $(OUTPUT) defined in lib.mk to handle build relocation.
The following use-cases work with this change:
In seccomp directory:
make all and make clean
From the top level, via the main Makefile:
make kselftest-install O=objdir ARCH=arm64 HOSTCC=gcc \
CROSS_COMPILE=aarch64-linux-gnu- TARGETS=seccomp
Signed-off-by: Shuah Khan <skhan(a)linuxfoundation.org>
---
tools/testing/selftests/seccomp/Makefile | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/tools/testing/selftests/seccomp/Makefile b/tools/testing/selftests/seccomp/Makefile
index 1760b3e39730..355bcbc0394a 100644
--- a/tools/testing/selftests/seccomp/Makefile
+++ b/tools/testing/selftests/seccomp/Makefile
@@ -1,17 +1,16 @@
# SPDX-License-Identifier: GPL-2.0
-all:
-
-include ../lib.mk
+CFLAGS += -Wl,-no-as-needed -Wall
+LDFLAGS += -lpthread
.PHONY: all clean
-BINARIES := seccomp_bpf seccomp_benchmark
-CFLAGS += -Wl,-no-as-needed -Wall
+include ../lib.mk
+
+# OUTPUT set by lib.mk
+TEST_CUSTOM_PROGS := $(OUTPUT)/seccomp_bpf $(OUTPUT)/seccomp_benchmark
-seccomp_bpf: seccomp_bpf.c ../kselftest_harness.h
- $(CC) $(CFLAGS) $(LDFLAGS) $< -lpthread -o $@
+$(TEST_CUSTOM_PROGS): ../kselftest_harness.h
-TEST_PROGS += $(BINARIES)
-EXTRA_CLEAN := $(BINARIES)
+all: $(TEST_CUSTOM_PROGS)
-all: $(BINARIES)
+EXTRA_CLEAN := $(TEST_CUSTOM_PROGS)
--
2.20.1
Add RSEQ (restartable sequences) support and the related selftest to RISC-V.
The Kconfig option HAVE_REGS_AND_STACK_ACCESS_API is also required by
RSEQ because RSEQ will modify the content of pt_regs.sepc through
instruction_pointer_set() during the fixup procedure. In order to select
the config HAVE_REGS_AND_STACK_ACCESS_API, the missing APIs for accessing
pt_regs are also added in this patch set.
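For reference, the accessors involved look roughly like this (a sketch;
the pt_regs program counter field is spelled sepc or epc depending on
the kernel version):

/* arch/riscv/include/asm/ptrace.h (sketch): the rseq fixup redirects
 * an aborted critical section by rewriting the program counter. */
static inline unsigned long instruction_pointer(struct pt_regs *regs)
{
	return regs->sepc;
}

static inline void instruction_pointer_set(struct pt_regs *regs,
					   unsigned long val)
{
	regs->sepc = val;
}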
The relevant RSEQ tests in kselftest require the Binutils patch "RISC-V:
Fix linker problems with TLS copy relocs" to avoid placing
PREINIT_ARRAY and TLS variable of librseq.so at the same address.
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=3e7bd7f…
A segmentation fault will occur if Binutils lacks this patch.
Vincent Chen (3):
riscv: add required functions to enable HAVE_REGS_AND_STACK_ACCESS_API
riscv: Add support for restartable sequence
rseq/selftests: Add support for riscv
arch/riscv/Kconfig | 2 +
arch/riscv/include/asm/ptrace.h | 29 +-
arch/riscv/kernel/entry.S | 4 +
arch/riscv/kernel/ptrace.c | 99 +++++
arch/riscv/kernel/signal.c | 3 +
tools/testing/selftests/rseq/param_test.c | 23 ++
tools/testing/selftests/rseq/rseq-riscv.h | 622 ++++++++++++++++++++++++++++++
tools/testing/selftests/rseq/rseq.h | 2 +
8 files changed, 783 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/rseq/rseq-riscv.h
--
2.7.4
> -----Original Message-----
> From: Shuah Khan
>
> On 2/28/20 10:50 AM, Bird, Tim wrote:
> >
> >
> >> -----Original Message-----
> >> From: Shuah Khan
> >>
> >> Integrating Kselftest into Kernel CI rings depends on Kselftest build
> >> and install framework to support Kernel CI use-cases. I am kicking off
> >> an effort to support Kselftest runs in Kernel CI rings. Running these
> >> tests in Kernel CI rings will help quality of kernel releases, both
> >> stable and mainline.
> >>
> >> What is required for full support?
> >>
> >> 1. Cross-compilation & relocatable build support
> >> 2. Generates objects in objdir/kselftest without cluttering main objdir
> >> 3. Leave source directory clean
> >> 4. Installs correctly in objdir/kselftest/kselftest_install and adds
> >> itself to run_kselftest.sh script generated during install.
> >>
> >> Note that install step is necessary for all files to be installed for
> >> run time support.
> >>
> >> I looked into the current status and identified problems. The work is
> >> minimal to add full support. Out of 80+ tests, 7 fail to cross-build
> >> and 1 fails to install correctly.
> >>
> >> List is below:
> >>
> >> Tests fails to build: bpf, capabilities, kvm, memfd, mqueue, timens, vm
> >> Tests fail to install: android (partial failure)
> >> Leaves source directory dirty: bpf, seccomp
> >>
> >> I have patches ready for the following issues:
> >>
> >> Kselftest objects (test dirs) clutter top level object directory.
> >> seccomp_bpf generates objects in the source directory.
> >>
> >> I created a topic branch to collect all the patches:
> >>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/?…
> >>
> >> I am going to start working on build problems. If anybody is
> >> interested in helping me with this effort, don't hesitate to
> >> contact me. I first priority is fixing build and install and
> >> then look into tests that leave the source directory dirty.
> >
> > I'm interested in this. I'd like the same cleanups in order to run
> > kselftest in Fuego, and I can try it with additional toolchains
> > and boards. Unfortunately, in terms of running tests, almost all
> > the boards in my lab are running old kernels. So the tests results
> > aren't useful for upstream work. But I can still test
> > compilation and install issues, for the kselftest tests themselves.
> >
>
> Testing compilation and install issues is very valuable. This is one
> area that hasn't had test coverage, compared to running tests. So it's
> great if you can help with build/install on linux-next to catch
> problems in new tests. I am finding that older tests have been stable
> and as new tests come in, we tend to miss catching these types of
> problems.
>
> Especially cross-builds and installs on arm64 and others.
OK. I've got 2 different arm64 compilers, with wildly different SDK setups,
so hopefully this will be useful.
> >>
> >> Detailed report can be found here:
> >>
> >> https://drive.google.com/file/d/11nnWOKIzzOrE4EiucZBn423lzSU_eNNv/view?usp=…
> >
> > Is there anything you'd like me to look at specifically? Do you want me to start
> > at the bottom of the list and work up? I could look at 'vm' or 'timens'.
> >
>
> Yes you can start with vm and timens.
I wrote a test for Fuego and ran into a few interesting issues. Also, I have a question
about the best place to start, and your preference for reporting results. Your feedback
on any of this would be appreciated:
Here are some issues and questions I ran into:
1) overwriting of CC in lib.mk
This line in tools/testing/selftests/lib.mk caused me some grief:
CC := $(CROSS_COMPILE)gcc
One of my toolchains pre-defines CC with a bunch of extra flags, so this didn't work for
that toolchain.
I'm still debugging this. I'm not sure why the weird definition of CC works for the rest
of the kernel but not with kselftest. But I may submit some kind of patch to make this
CC assignment conditional (that is, only do the assignment if it's not already defined)
Let me know what you think.
2) ability to get list of targets would be nice
It would be nice if there were a mechanism to get the list of default targets from
kselftest. I added the following for my own tests, so that I don't have to hard-code
my loop over the individual selftests:
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index 63430e2..9955e71 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -246,4 +246,7 @@ clean:
$(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean;\
done;
+show_targets:
+ @echo $(TARGETS)
+
.PHONY: khdr all run_tests hotplug run_hotplug clean_hotplug run_pstore_crash install clean
This is pretty simple. I can submit this as a proper patch, if you're willing to take
something like it, and we can discuss details if you'd rather see this done another way.
3) different ways to invoke kselftest
There are a number of different ways to invoke kselftest. I'm currently using the
'-C' method for both building and installing.
make ARCH=$ARCHITECTURE TARGETS="$target" -C tools/testing/selftests
make ARCH=$ARCHITECTURE TARGETS="$target" -C tools/testing/selftests install
I think there are now targets for kselftest in the top-level Makefile.
Do you have a preferred method you'd like me to test? Or would you like
me to run my tests with multiple methods?
And I'm using a KBUILD_OUTPUT environment variable, rather than O=.
Let me know if you'd like me to build a matrix of these different build methods.
4) what tree(s) would you like me to test?
I think you mentioned that you'd like to see the tests against 'linux-next'.
Right now I've been doing tests against the 'torvalds' mainline tree, and
the 'linux-kselftest' tree, master branch. Let me know if there are other
branches or trees you like me to test.
5) where would you like test results?
In the short term, I'm testing the compile and install of the tests
and working on the ones that fail for me (I'm getting 17 or 18
failures, depending on the toolchain I'm using, for some of my boards).
However, I'm still debugging my setup; I hope I can drop that down
to the same ones you are seeing shortly.
Longer-term I plan to set up a CI loop for these tests for Fuego, and publish some
kind of matrix results and reports on my own server (https://birdcloud.org/)
I'm generating HTML tables now that work with Fuego's Jenkins
configuration, but I could send the data elsewhere if desired.
This is still under construction. Would you like me to publish results also to
kcidb, or some other repository? I might be able to publish my
results to KernelCI, but I'll end up with a customized report for kselftest,
that will allow drilling down to see output for individual compile or
install failures. I'm not sure how much of that would be supported in
the KernelCI interface. But I recognize you'd probably not like to
have to go to multiple places to see results.
Also, in terms of periodic results do you want any e-mails
sent to the Linux-kselftest list? I thought I'd hold off for now,
and wait for the compile/install fixes to settle down, so that
future e-mails would only report regressions or issues with new tests.
We can discuss this later, as I don't plan to do this quite
yet (and would only do an e-mail after checking with you anyway).
Thanks for any feedback you can provide.
-- Tim
P.S. Also, please let me know who is working on this on the KernelCI
side (if it's not Kevin), so I can CC them on future discussions.
Hi Linus,
Please pull the following Kselftest update for Linux 5.6-rc5.
This Kselftest update for Linux 5.6-rc5 consists of a cleanup patch
to undo changes to global .gitignore that added selftests/lkdtm
objects and add them to a local selftests/lkdtm/.gitignore.
Summary of Linus's comments on local vs. global gitignore scope:
- Keep local gitignore patterns in local files.
- Put only global gitignore patterns in the top-level gitignore file.
Local scope keeps things much better separated. It also incidentally
means that if a directory gets renamed, the gitignore file continues
to work, except when the actual files named in the gitignore are
themselves renamed.
Diff is attached.
I took the liberty of including a summary of your comments on local vs
global gitignore scope in the commit message.
thanks,
-- Shuah
----------------------------------------------------------------
The following changes since commit ef89d0545132d685f73da6f58b7e7fe002536f91:
selftests/rseq: Fix out-of-tree compilation (2020-02-20 08:57:12 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
tags/linux-kselftest-5.6-rc5
for you to fetch changes up to f3a60268f5cec7dae0e9713f5fc65aecc3734c09:
selftest/lkdtm: Use local .gitignore (2020-03-02 08:39:39 -0700)
----------------------------------------------------------------
linux-kselftest-5.6-rc5
This Kselftest update for Linux 5.6-rc5 consists of a cleanup patch
to undo changes to global .gitignore that added selftests/lkdtm
objects and add them to a local selftests/lkdtm/.gitignore.
Summary of Linus's comments on local vs. global gitignore scope:
- Keep local gitignore patterns in local files.
- Put only global gitignore patterns in the top-level gitignore file.
Local scope keeps things much better separated. It also incidentally
means that if a directory gets renamed, the gitignore file continues
to work, except when the actual files named in the gitignore are
themselves renamed.
----------------------------------------------------------------
Christophe Leroy (1):
selftest/lkdtm: Use local .gitignore
.gitignore | 4 ----
tools/testing/selftests/lkdtm/.gitignore | 2 ++
2 files changed, 2 insertions(+), 4 deletions(-)
create mode 100644 tools/testing/selftests/lkdtm/.gitignore
----------------------------------------------------------------
Hello,
This patchset implements a new futex operation, called FUTEX_WAIT_MULTIPLE,
which allows a thread to wait on several futexes at the same time, and be
awoken by any of them.
The use case lies in the Wine implementation of the Windows NT interface
WaitMultipleObjects. This Windows API function allows a thread to sleep
waiting on the first of a set of event sources (mutexes, timers, signal,
console input, etc) to signal. Considering this is a primitive
synchronization operation for Windows applications, being able to quickly
signal events on the producer side, and quickly go to sleep on the
consumer side is essential for good performance of those running over Wine.
Since this API exposes a mechanism to wait on multiple objects, and
we might have multiple waiters for each of these events (an M->N
relationship), the current Linux interfaces fell short in our
performance evaluation of large M,N scenarios. We experimented, for
instance, with eventfd, which has the performance problems discussed
below, but we also experimented with userspace solutions, like making
each consumer wait on a condition variable guarding the entire list of
objects, and then waking up multiple variables on the producer side.
This is prohibitively expensive, since we either need to signal many
condition variables or share one condition variable among multiple
waiters and then verify in userspace that the event was signaled, which
means dealing with frequent false-positive wake-ups.
The natural interface to implement the behavior we want, also
considering that one of the waitable objects is a mutex itself, would be
the futex interface. Therefore, this patchset proposes a mechanism for
a thread to wait on multiple futexes at once, and wake up on the first
futex that is awoken.
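In rough terms, the proposed interface is used as in the sketch below
(the opcode value, struct layout and calling convention are as proposed
by this series and are not part of any released uapi):

#include <linux/futex.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

/* As proposed by this series; not in released kernel headers. */
#ifndef FUTEX_WAIT_MULTIPLE
#define FUTEX_WAIT_MULTIPLE 31
struct futex_wait_block {
	uint32_t *uaddr;
	uint32_t val;		/* expected value, as with FUTEX_WAIT */
	uint32_t bitset;
};
#endif

/* Sleep until any of the futexes is woken (or changes from its
 * expected value); wakers use plain FUTEX_WAKE on individual words. */
static int futex_wait_multiple(struct futex_wait_block *blocks,
			       unsigned int count,
			       const struct timespec *timeout)
{
	return syscall(SYS_futex, blocks, FUTEX_WAIT_MULTIPLE, count,
		       timeout, NULL, 0);
}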
In particular, using futexes in our Wine use case reduced the CPU
utilization by 4% for the game Beat Saber and by 1.5% for the game
Shadow of Tomb Raider, both running over Proton (a Wine based solution
for Windows emulation), when compared to the eventfd interface. This
implementation also doesn't rely of file descriptors, so it doesn't risk
overflowing the resource.
In time, we are also proposing modifications to glibc and libpthread to
make this feature available for Linux native multithreaded applications
using libpthread, which can benefit from the behavior of waiting on any
of a group of futexes.
Technically, the existing FUTEX_WAIT implementation can be easily
reworked by using futex_wait_multiple() with a count of one, and I
have a patch showing how it works. I'm not proposing it, since
futex is such tricky code that I'd be more comfortable having
FUTEX_WAIT_MULTIPLE run upstream for a couple of development cycles
before considering modifying FUTEX_WAIT.
The patch series includes an extensive set of kselftests validating
the behavior of the interface. We also implemented support [1] in
Syzkaller, and the implementation survived the fuzzing.
Finally, if you'd rather pull directly a branch with this set you can
find it here:
https://gitlab.collabora.com/tonyk/linux/commits/futex-dev-v3
The RFC for this patch can be found here:
https://lkml.org/lkml/2019/7/30/1399
=== Performance of eventfd ===
Polling on several eventfd contexts with semaphore semantics would
provide us with the exact semantics we are looking for. However, as
shown below, in a scenario with sufficient producers and consumers, the
eventfd interface itself becomes a bottleneck, in particular because
each thread will compete to acquire a sequence of waitqueue locks for
each eventfd context in the poll list. In addition, in the uncontended
case, where the producer is ready for consumption, eventfd still
requires going into the kernel on the consumer side.
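For comparison, the consumer side of the eventfd pattern we measured
against is roughly this sketch (each event source is an eventfd created
with eventfd(0, EFD_SEMAPHORE); a producer signals by writing 1):

#include <poll.h>
#include <stdint.h>
#include <unistd.h>

#define NOBJS 64

/* Block until any of the event sources signals, then consume one
 * count from it. Both poll() and read() here take each eventfd's
 * ctx->wqh.lock, which is the contended path described below. */
static int wait_any(const int efds[NOBJS])
{
	struct pollfd pfds[NOBJS];
	uint64_t val;
	int i;

	for (i = 0; i < NOBJS; i++) {
		pfds[i].fd = efds[i];
		pfds[i].events = POLLIN;
	}
	if (poll(pfds, NOBJS, -1) < 0)
		return -1;
	for (i = 0; i < NOBJS; i++) {
		if (pfds[i].revents & POLLIN) {
			read(efds[i], &val, sizeof(val));
			return i;
		}
	}
	return -1;
}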
When a write or a read operation on an eventfd file succeeds, it will try
to wake up all threads that are waiting to perform some operation on
the file. The lock (ctx->wqh.lock) that guards access to the file value
(ctx->count) is the same lock used to control access to the waitqueue. When
all those threads wake, they will compete to take this lock. Along
with that, poll() also manipulates the waitqueue and needs to hold
this same lock. This lock is especially hard to acquire when poll() calls
poll_freewait(), where it tries to free all waitqueues associated with
this poll. While doing that, it will compete with the many read and
write operations that have been woken.
In our use case, with a huge number of parallel reads, writes and polls,
this lock is a bottleneck and hurts the performance of applications. Our
futex implementation, however, decreases the spin lock calls by more
than 80% in some user applications.
Finally, eventfd operates on file descriptors, a limited resource
that has shown its limits in our use cases. Despite the
Windows interface not waiting on more than 64 objects at once, we still
have multiple waiters at the same time, and we were easily able to
exhaust the FD limits on applications like games.
Thanks,
André
[1] https://github.com/andrealmeid/syzkaller/tree/futex-wait-multiple
Gabriel Krisman Bertazi (4):
futex: Implement mechanism to wait on any of several futexes
selftests: futex: Add FUTEX_WAIT_MULTIPLE timeout test
selftests: futex: Add FUTEX_WAIT_MULTIPLE wouldblock test
selftests: futex: Add FUTEX_WAIT_MULTIPLE wake up test
include/uapi/linux/futex.h | 20 +
kernel/futex.c | 358 +++++++++++++++++-
.../selftests/futex/functional/.gitignore | 1 +
.../selftests/futex/functional/Makefile | 3 +-
.../futex/functional/futex_wait_multiple.c | 173 +++++++++
.../futex/functional/futex_wait_timeout.c | 38 +-
.../futex/functional/futex_wait_wouldblock.c | 28 +-
.../testing/selftests/futex/functional/run.sh | 3 +
.../selftests/futex/include/futextest.h | 22 ++
9 files changed, 639 insertions(+), 7 deletions(-)
create mode 100644 tools/testing/selftests/futex/functional/futex_wait_multiple.c
--
2.25.0