Hi Shuah,
We discussed collecting and uploading all syzkaller reproducers
somewhere. You wanted to see how they look. I've uploaded all current
reproducers here:
https://github.com/dvyukov/syzkaller-repros
Minimalistic build/run scripts are included.
+ some testing mailing lists too, as this can be used as a test suite.
If you see any potential uses for this, you are welcome to use it.
But then we would probably need to find a more official, shared place
for the reproducers than my personal GitHub.
The test programs can also be bulk updated if necessary, because all
of this is auto-generated.
Thanks
From: SeongJae Park <sjpark(a)amazon.de>
Deletions of configs from the '.kunitconfig' are not applied because kunit
rebuilds '.config' only if the '.kunitconfig' is not a subset of the
'.config'. To allow the deletions to be applied, this commit modifies
the '.config' rebuild condition to additionally check the modification
times of those files.
Signed-off-by: SeongJae Park <sjpark(a)amazon.de>
---
tools/testing/kunit/kunit_kernel.py | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
index cc5d844ecca1..a3a5d6c7e66d 100644
--- a/tools/testing/kunit/kunit_kernel.py
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -111,17 +111,22 @@ class LinuxSourceTree(object):
 		return True
 
 	def build_reconfig(self, build_dir):
-		"""Creates a new .config if it is not a subset of the .kunitconfig."""
+		"""Creates a new .config if it is not a subset of, or older than the .kunitconfig."""
 		kconfig_path = get_kconfig_path(build_dir)
 		if os.path.exists(kconfig_path):
 			existing_kconfig = kunit_config.Kconfig()
 			existing_kconfig.read_from_file(kconfig_path)
-			if not self._kconfig.is_subset_of(existing_kconfig):
-				print('Regenerating .config ...')
-				os.remove(kconfig_path)
-				return self.build_config(build_dir)
-			else:
+			subset = self._kconfig.is_subset_of(existing_kconfig)
+
+			kunitconfig_mtime = os.path.getmtime(kunitconfig_path)
+			kconfig_mtime = os.path.getmtime(kconfig_path)
+			older = kconfig_mtime < kunitconfig_mtime
+
+			if subset and not older:
 				return True
+			print('Regenerating .config ...')
+			os.remove(kconfig_path)
+			return self.build_config(build_dir)
 		else:
 			print('Generating .config ...')
 			return self.build_config(build_dir)
--
2.17.1
Add RSEQ (restartable sequences) support and the related selftests to RISC-V.
The Kconfig option HAVE_REGS_AND_STACK_ACCESS_API is also required by
RSEQ, because RSEQ modifies the content of pt_regs.sepc through
instruction_pointer_set() during the fixup procedure. In order to select
HAVE_REGS_AND_STACK_ACCESS_API, the missing APIs for accessing
pt_regs are also added in this patch set.
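For illustration, here is a minimal hedged sketch of the kind of pt_regs
instruction-pointer accessors implied above; the field name (sepc) follows
this cover letter, and the accessors actually added by the patch may differ:

/* Hedged sketch only; not copied from the patch. */
static inline unsigned long instruction_pointer(struct pt_regs *regs)
{
	return regs->sepc;
}

static inline void instruction_pointer_set(struct pt_regs *regs,
					   unsigned long val)
{
	regs->sepc = val;	/* RSEQ fixup redirects execution here */
}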
The relevant RSEQ tests in kselftest require the Binutils patch "RISC-V:
Fix linker problems with TLS copy relocs" to avoid placing the
PREINIT_ARRAY and the TLS variable of librseq.so at the same address.
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=3e7bd7f…
A segmentation fault will occur if Binutils lacks this patch.
Vincent Chen (3):
riscv: add required functions to enable HAVE_REGS_AND_STACK_ACCESS_API
riscv: Add support for restartable sequence
rseq/selftests: Add support for riscv
arch/riscv/Kconfig | 2 +
arch/riscv/include/asm/ptrace.h | 29 +-
arch/riscv/kernel/entry.S | 4 +
arch/riscv/kernel/ptrace.c | 99 +++++
arch/riscv/kernel/signal.c | 3 +
tools/testing/selftests/rseq/param_test.c | 23 ++
tools/testing/selftests/rseq/rseq-riscv.h | 622 ++++++++++++++++++++++++++++++
tools/testing/selftests/rseq/rseq.h | 2 +
8 files changed, 783 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/rseq/rseq-riscv.h
--
2.7.4
Hello,
This patchset implements a new futex operation, called FUTEX_WAIT_MULTIPLE,
which allows a thread to wait on several futexes at the same time, and be
awoken by any of them.
The use case lies in the Wine implementation of the Windows NT interface
WaitForMultipleObjects. This Windows API function allows a thread to sleep
waiting for the first of a set of event sources (mutexes, timers, signals,
console input, etc.) to signal. Considering this is a primitive
synchronization operation for Windows applications, being able to quickly
signal events on the producer side, and quickly go to sleep on the
consumer side is essential for good performance of those running over Wine.
Since this API exposes a mechanism to wait on multiple objects, and we
might have multiple waiters for each of these events (an M:N
relationship), the current Linux interfaces fell short when we evaluated
performance for large M and N. We experimented, for instance, with
eventfd, which has the performance problems discussed below. We also
experimented with userspace solutions, such as making each consumer wait
on a condition variable guarding the entire list of objects and waking
up multiple variables on the producer side, but this is prohibitively
expensive: we either need to signal many condition variables or share
one condition variable among multiple waiters and then check in
userspace which event was signaled, which means dealing with frequent
false-positive wake-ups.
The natural interface to implement the behavior we want, also
considering that one of the waitable objects is a mutex itself, is
the futex interface. Therefore, this patchset proposes a mechanism for
a thread to wait on multiple futexes at once and wake up on the first
futex that is woken.
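As a rough illustration of how such an interface could be used from
userspace (the opcode value, struct layout, and helper name below are
assumptions made for this sketch, not taken verbatim from the patches):

#include <linux/futex.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

/* Illustrative only: opcode value and wait-block layout are assumptions. */
#ifndef FUTEX_WAIT_MULTIPLE
#define FUTEX_WAIT_MULTIPLE 13
#endif

struct futex_wait_block {
	uint32_t *uaddr;	/* futex word to wait on */
	uint32_t val;		/* expected value; sleep only while *uaddr == val */
	uint32_t bitset;	/* wake filter, e.g. FUTEX_BITSET_MATCH_ANY */
};

/* Sleep until any one of 'count' futexes is woken or the timeout expires. */
static long futex_wait_any(struct futex_wait_block *blocks, unsigned int count,
			   const struct timespec *timeout)
{
	return syscall(SYS_futex, blocks, FUTEX_WAIT_MULTIPLE, count,
		       timeout, NULL, 0);
}

A waiter fills one entry per object it cares about and blocks in a single
syscall, instead of polling one file descriptor per object.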
In particular, using futexes in our Wine use case reduced the CPU
utilization by 4% for the game Beat Saber and by 1.5% for the game
Shadow of Tomb Raider, both running over Proton (a Wine based solution
for Windows emulation), when compared to the eventfd interface. This
implementation also doesn't rely on file descriptors, so it doesn't risk
exhausting that limited resource.
In time, we are also proposing modifications to glibc and libpthread to
make this feature available for Linux native multithreaded applications
using libpthread, which can benefit from the behavior of waiting on any
of a group of futexes.
Technically, the existing FUTEX_WAIT implementation can be easily
reworked by using futex_wait_multiple() with a count of one, and I
have a patch showing how it works. I'm not proposing it here, since
futex is such tricky code that I'd be more comfortable having
FUTEX_WAIT_MULTIPLE upstream for a couple of development cycles
before considering modifying FUTEX_WAIT.
The patch series includes an extensive set of kselftests validating
the behavior of the interface. We also implemented support[1] for it in
syzkaller, and the implementation survived the fuzzing.
Finally, if you'd rather pull directly a branch with this set you can
find it here:
https://gitlab.collabora.com/tonyk/linux/commits/futex-dev-v3
The RFC for this patch can be found here:
https://lkml.org/lkml/2019/7/30/1399
=== Performance of eventfd ===
Polling on several eventfd contexts with semaphore semantics would
provide us with the exact semantics we are looking for. However, as
shown below, in a scenario with sufficient producers and consumers, the
eventfd interface itself becomes a bottleneck, in particular because
each thread will compete to acquire a sequence of waitqueue locks for
each eventfd context in the poll list. In addition, in the uncontended
case, where the producer is ready for consumption, eventfd still
requires going into the kernel on the consumer side.
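For reference, a minimal sketch of the eventfd-based pattern described
above (one semaphore-semantics eventfd per waitable object, polled
together); error handling is omitted and the helper is illustrative:

#include <poll.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Wait for any of 'nobjs' eventfds (created with EFD_SEMAPHORE) to become
 * readable, then consume one count from the first ready one. */
int wait_any_eventfd(const int *efds, int nobjs)
{
	struct pollfd pfds[64];	/* mirrors the 64-object Windows limit */
	uint64_t one;
	int i;

	for (i = 0; i < nobjs; i++) {
		pfds[i].fd = efds[i];
		pfds[i].events = POLLIN;
	}

	poll(pfds, nobjs, -1);	/* kernel entry even in the uncontended case */

	for (i = 0; i < nobjs; i++) {
		if (pfds[i].revents & POLLIN) {
			/* EFD_SEMAPHORE: read() decrements the count by one. */
			read(efds[i], &one, sizeof(one));
			return i;
		}
	}
	return -1;
}

Every producer write and every consumer read/poll above ends up contending
on the same per-context waitqueue lock, which is the bottleneck described
next.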
When a write or a read operation on an eventfd file succeeds, it will try
to wake up all threads that are waiting to perform some operation on
the file. The lock (ctx->wqh.lock) that guards access to the file value
(ctx->count) is the same lock used to control access to the waitqueue. When
all those threads wake, they compete for this lock. Along
with that, poll() also manipulates the waitqueue and needs to hold
this same lock. The lock is especially hard to acquire when poll() calls
poll_freewait(), where it tries to free all waitqueues associated with
this poll. While doing that, it competes with the many read and
write operations that have been woken.
In our use case, with a huge number of parallel reads, writes and polls,
this lock is a bottleneck and hurts application performance. Our
futex implementation, however, decreases these spin lock acquisitions by
more than 80% in some user applications.
Finally, eventfd operates on file descriptors, which are a limited
resource that has shown its limitations in our use cases. Even though the
Windows interface never waits on more than 64 objects at once, we still
have multiple waiters at the same time, and we were easily able to
exhaust the FD limits in applications like games.
Thanks,
André
[1] https://github.com/andrealmeid/syzkaller/tree/futex-wait-multiple
Gabriel Krisman Bertazi (4):
futex: Implement mechanism to wait on any of several futexes
selftests: futex: Add FUTEX_WAIT_MULTIPLE timeout test
selftests: futex: Add FUTEX_WAIT_MULTIPLE wouldblock test
selftests: futex: Add FUTEX_WAIT_MULTIPLE wake up test
include/uapi/linux/futex.h | 20 +
kernel/futex.c | 358 +++++++++++++++++-
.../selftests/futex/functional/.gitignore | 1 +
.../selftests/futex/functional/Makefile | 3 +-
.../futex/functional/futex_wait_multiple.c | 173 +++++++++
.../futex/functional/futex_wait_timeout.c | 38 +-
.../futex/functional/futex_wait_wouldblock.c | 28 +-
.../testing/selftests/futex/functional/run.sh | 3 +
.../selftests/futex/include/futextest.h | 22 ++
9 files changed, 639 insertions(+), 7 deletions(-)
create mode 100644 tools/testing/selftests/futex/functional/futex_wait_multiple.c
--
2.25.0
## TL;DR
This patchset adds a centralized executor to dispatch tests rather than
relying on late_initcall to schedule each test suite separately along
with a couple of new features that depend on it.
## What am I trying to do?
Conceptually, I am trying to provide a mechanism by which test suites
can be grouped together so that they can be reasoned about collectively.
Patches 5 and 6 in this series add features which depend on this:
PATCH 5/7 Prints out a test plan right before KUnit tests are run[1];
this is valuable because it makes it possible for a test
harness to detect whether the number of tests run matches the
number of tests expected to be run, ensuring that no tests
silently failed.
PATCH 6/7 Add a new kernel command-line option which allows the user to
specify that the kernel poweroff, halt, or reboot after
completing all KUnit tests; this is very handy for running
KUnit tests on UML or a VM so that the UML/VM process exits
cleanly immediately after running all tests without needing a
special initramfs.
In addition, by dispatching tests from a single location, we can
guarantee that all KUnit tests run after late_init is complete, which
was a concern during the initial KUnit patchset review (this has not
been a problem in practice, but resolving it with certainty is nevertheless
desirable).
Other use cases for this exist, but the above features should provide an
idea of the value that this could provide.
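As a rough sketch of the dispatch model being described (the section
symbols and function names here are illustrative, not necessarily those
used in the patches): test suites are collected into a dedicated linker
section and run from one loop, instead of each suite scheduling itself
via late_initcall.

#include <linux/printk.h>
/* struct kunit_suite is assumed to come from <kunit/test.h>. */

extern struct kunit_suite *__kunit_suites_start[];
extern struct kunit_suite *__kunit_suites_end[];

static void kunit_run_all_suites(void)
{
	struct kunit_suite **suite;

	/* Test plan (PATCH 5/7): announce up front how many suites will run. */
	pr_info("1..%td\n", __kunit_suites_end - __kunit_suites_start);

	for (suite = __kunit_suites_start; suite < __kunit_suites_end; suite++)
		kunit_run_suite(*suite);	/* assumed per-suite entry point */
}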
Alan Maguire (1):
kunit: test: create a single centralized executor for all tests
Brendan Higgins (5):
vmlinux.lds.h: add linker section for KUnit test suites
arch: um: add linker section for KUnit test suites
init: main: add KUnit to kernel init
kunit: test: add test plan to KUnit TAP format
Documentation: Add kunit_shutdown to kernel-parameters.txt
David Gow (1):
kunit: Add 'kunit_shutdown' option
.../admin-guide/kernel-parameters.txt | 7 ++
arch/um/include/asm/common.lds.S | 4 +
include/asm-generic/vmlinux.lds.h | 8 ++
include/kunit/test.h | 82 ++++++++++++-------
init/main.c | 4 +
lib/kunit/Makefile | 3 +-
lib/kunit/executor.c | 71 ++++++++++++++++
lib/kunit/test.c | 11 ---
tools/testing/kunit/kunit_kernel.py | 2 +-
tools/testing/kunit/kunit_parser.py | 76 ++++++++++++++---
.../test_is_test_passed-all_passed.log | 1 +
.../test_data/test_is_test_passed-crash.log | 1 +
.../test_data/test_is_test_passed-failure.log | 1 +
13 files changed, 217 insertions(+), 54 deletions(-)
create mode 100644 lib/kunit/executor.c
--
2.25.0.341.g760bfbb309-goog
When kunit tests are run on native (i.e. non-UML) environments, the results
of test execution are often intermixed with dmesg output. This patch
series attempts to solve this by providing a debugfs representation
of the results of the last test run, available as
/sys/kernel/debug/kunit/<testsuite>/results
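A rough sketch of how such a per-suite 'results' file could be wired up
with the standard debugfs and seq_file helpers follows; the kunit_suite
fields and the function name are illustrative, not necessarily what the
patches use:

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static struct dentry *kunit_debugfs_root;

static int kunit_results_show(struct seq_file *seq, void *v)
{
	struct kunit_suite *suite = seq->private;

	seq_printf(seq, "%s", suite->log);	/* assumed per-suite log buffer */
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(kunit_results);

void kunit_debugfs_create_suite(struct kunit_suite *suite)
{
	struct dentry *dir;

	if (!kunit_debugfs_root)
		kunit_debugfs_root = debugfs_create_dir("kunit", NULL);

	dir = debugfs_create_dir(suite->name, kunit_debugfs_root);
	debugfs_create_file("results", 0444, dir, suite, &kunit_results_fops);
}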
Changes since v4:
- added suite-level log expectations to kunit log test (Brendan, patch 2)
- added log expectations (of it being NULL) for case where
CONFIG_KUNIT_DEBUGFS=n to kunit log test (patch 2)
- added patch 3 which replaces subtest tab indentation with 4 space
indentation as per TAP 14 spec (Frank, patch 3)
Changes since v3:
- added CONFIG_KUNIT_DEBUGFS to support conditional compilation of debugfs
representation, including string logging (Frank, patch 1)
- removed unneeded NULL check for test_case in
kunit_suite_for_each_test_case() (Frank, patch 1)
- added kunit log test to verify logging multiple strings works
(Frank, patch 2)
- rephrased description of results file (Frank, patch 3)
Changes since v2:
- updated kunit_status2str() to kunit_status_to_string() and made it
static inline in include/kunit/test.h (Brendan)
- added log string to struct kunit_suite and kunit_case, with log
pointer in struct kunit pointing at the case log. This allows us
to collect kunit_[err|info|warning]() messages at the same time
as we printk() them. This solves for the most part the sharing
of log messages between test execution and debugfs since we
just print the suite log (which contains the test suite preamble)
and the individual test logs. The only exception is the suite-level
status, which we cannot store in the suite log as it would mean
we'd print the suite and its status prior to the suite's results.
(Brendan, patch 1)
- dropped debugfs-based kunit run patch for now so as not to cause
problems with tests currently under development (Brendan)
- fixed doc issues with code block (Brendan, patch 3)
Changes since v1:
- trimmed unneeded include files in lib/kunit/debugfs.c (Greg)
- renamed global debugfs functions to be prefixed with kunit_ (Greg)
- removed error checking for debugfs operations (Greg)
Alan Maguire (4):
kunit: add debugfs /sys/kernel/debug/kunit/<suite>/results display
kunit: add log test
kunit: subtests should be indented 4 spaces according to TAP
kunit: update documentation to describe debugfs representation
Documentation/dev-tools/kunit/usage.rst | 13 ++++
include/kunit/test.h | 59 ++++++++++++---
lib/kunit/Kconfig | 8 +++
lib/kunit/Makefile | 4 ++
lib/kunit/assert.c | 79 +++++++++++----------
lib/kunit/debugfs.c | 116 ++++++++++++++++++++++++++++++
lib/kunit/debugfs.h | 30 ++++++++
lib/kunit/kunit-test.c | 45 +++++++++++-
lib/kunit/test.c | 122 ++++++++++++++++++++++++--------
9 files changed, 395 insertions(+), 81 deletions(-)
create mode 100644 lib/kunit/debugfs.c
create mode 100644 lib/kunit/debugfs.h
--
1.8.3.1
## TL;DR
This patchset adds a centralized executor to dispatch tests rather than
relying on late_initcall to schedule each test suite separately along
with a couple of new features that depend on it.
## What am I trying to do?
Conceptually, I am trying to provide a mechanism by which test suites
can be grouped together so that they can be reasoned about collectively.
The last two patches in this series add features which depend on this:
RFC 5/6 Prints out a test plan right before KUnit tests are run[1]; this
is valuable because it makes it possible for a test harness to
detect whether the number of tests run matches the number of
tests expected to be run, ensuring that no tests silently
failed.
RFC 6/6 Add a new kernel command-line option which allows the user to
specify that the kernel poweroff, halt, or reboot after
completing all KUnit tests; this is very handy for running KUnit
tests on UML or a VM so that the UML/VM process exits cleanly
immediately after running all tests without needing a special
initramfs.
In addition, by dispatching tests from a single location, we can
guarantee that all KUnit tests run after late_init is complete, which
was a concern during the initial KUnit patchset review (this has not
been a problem in practice, but resolving it with certainty is nevertheless
desirable).
Other use cases for this exist, but the above features should provide an
idea of the value that this could provide.
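To make the RFC 6/6 behavior concrete, here is a minimal hedged sketch of
how such a command-line option could be handled once the executor has run
all tests (parameter wiring and names are illustrative):

#include <linux/moduleparam.h>
#include <linux/reboot.h>
#include <linux/string.h>

static char *kunit_shutdown;
core_param(kunit_shutdown, kunit_shutdown, charp, 0644);

/* Called after the last suite finishes, so UML/VM runs exit cleanly. */
static void kunit_handle_shutdown(void)
{
	if (!kunit_shutdown)
		return;
	if (!strcmp(kunit_shutdown, "poweroff"))
		kernel_power_off();
	else if (!strcmp(kunit_shutdown, "halt"))
		kernel_halt();
	else if (!strcmp(kunit_shutdown, "reboot"))
		kernel_restart(NULL);
}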
## What work remains to be done?
These patches were based on patches in our non-upstream branch[2], so we
have a pretty good idea that they are usable as presented;
nevertheless, some of the changes done in this patchset could
*definitely* use some review by subsystem experts (linker scripts, init,
etc), and will likely change a lot after getting feedback.
The biggest thing that I know will require additional attention is
integrating this patchset with the KUnit module support patchset[3]. I
have not even attempted to build these patches on top of the module
support patches as I would like to get people's initial thoughts first
(especially Alan's :-) ). I think that making these patches work with
module support should be fairly straight forward, nevertheless.
Brendan Higgins (5):
vmlinux.lds.h: add linker section for KUnit test suites
arch: um: add linker section for KUnit test suites
kunit: test: create a single centralized executor for all tests
init: main: add KUnit to kernel init
kunit: test: add test plan to KUnit TAP format
David Gow (1):
kunit: Add 'kunit_shutdown' option
arch/um/include/asm/common.lds.S | 4 +
include/asm-generic/vmlinux.lds.h | 8 ++
include/kunit/test.h | 16 ++--
init/main.c | 4 +
lib/kunit/Makefile | 3 +-
lib/kunit/executor.c | 74 ++++++++++++++++++
lib/kunit/test.c | 11 ---
tools/testing/kunit/kunit_kernel.py | 2 +-
tools/testing/kunit/kunit_parser.py | 76 +++++++++++++++----
.../test_is_test_passed-all_passed.log | 1 +
.../test_data/test_is_test_passed-crash.log | 1 +
.../test_data/test_is_test_passed-failure.log | 1 +
12 files changed, 170 insertions(+), 31 deletions(-)
create mode 100644 lib/kunit/executor.c
[1]: https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-…
[2]: https://kunit-review.googlesource.com/c/linux/+/1037
[3]: https://patchwork.kernel.org/project/linux-kselftest/list/?series=211727
--
2.24.1.735.g03f4e72817-goog
Memory protection keys enable an application to protect its address
space from inadvertent access by its own code.
This feature is now enabled on powerpc and has been available since
4.16-rc1. The patches move the selftests to an arch-neutral directory
and enhance their test coverage.
Tested on powerpc64 and x86_64 (Skylake-SP).
Link to development branch:
https://github.com/sandip4n/linux/tree/pkey-selftests
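For context, a minimal sketch of the userspace pkeys API that these
selftests exercise (glibc wrappers as on x86-64; error handling mostly
omitted, and the helper is illustrative):

#define _GNU_SOURCE
#include <sys/mman.h>

/* Allocate a key that denies writes and attach it to an existing mapping.
 * Reads keep working; writes fault until the key's rights are relaxed. */
int deny_writes_with_pkey(void *addr, size_t len)
{
	int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);

	if (pkey < 0)
		return -1;

	return pkey_mprotect(addr, len, PROT_READ | PROT_WRITE, pkey);
}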
Changelog
---------
Link to previous version (v17):
https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=154174
v18:
(1) Fixed issues with x86 multilib builds based on
feedback from Dave.
(2) Moved patch 2 to the end of the series.
v17:
(1) Fixed issues with i386 builds when running on x86_64
based on feedback from Dave.
(2) Replaced patch 6 from previous version with patch 7.
This addresses u64 format specifier related concerns
that Michael had raised in v15.
v16:
(1) Rebased on top of latest master.
(2) Switched to u64 instead of using an arch-dependent
pkey_reg_t type for references to the pkey register
based on suggestions from Dave, Michal and Michael.
(3) Removed build time determination of page size based
on suggestion from Michael.
(4) Fixed comment before the definition of __page_o_noops()
from patch 13 ("selftests/vm/pkeys: Introduce powerpc
support").
v15:
(1) Rebased on top of latest master.
(2) Addressed review comments from Dave Hansen.
(3) Moved code for getting or setting pkey bits to new
helpers. These changes replace patch 7 of v14.
(4) Added a fix which ensures that the correct count of
reserved keys is used across different platforms.
(5) Added a fix which ensures that the correct page size
is used as powerpc supports both 4K and 64K pages.
v14:
(1) Incorporated another round of comments from Dave Hansen.
v13:
(1) Incorporated comments for Dave Hansen.
(2) Added one more test for correct pkey-0 behavior.
v12:
(1) Fixed the offset of the pkey field in the siginfo structure for
    x86_64 and powerpc, and use the actual field if the headers
    define it.
v11:
(1) Fixed a deadlock in the ptrace testcase.
v10 and prior:
(1) Moved the testcase to arch neutral directory.
(2) Split the changes into incremental patches.
Desnes A. Nunes do Rosario (1):
selftests/vm/pkeys: Fix number of reserved powerpc pkeys
Ram Pai (16):
selftests/x86/pkeys: Move selftests to arch-neutral directory
selftests/vm/pkeys: Rename all references to pkru to a generic name
selftests/vm/pkeys: Move generic definitions to header file
selftests/vm/pkeys: Fix pkey_disable_clear()
selftests/vm/pkeys: Fix assertion in pkey_disable_set/clear()
selftests/vm/pkeys: Fix alloc_random_pkey() to make it really random
selftests/vm/pkeys: Introduce generic pkey abstractions
selftests/vm/pkeys: Introduce powerpc support
selftests/vm/pkeys: Fix assertion in test_pkey_alloc_exhaust()
selftests/vm/pkeys: Improve checks to determine pkey support
selftests/vm/pkeys: Associate key on a mapped page and detect access
violation
selftests/vm/pkeys: Associate key on a mapped page and detect write
violation
selftests/vm/pkeys: Detect write violation on a mapped
access-denied-key page
selftests/vm/pkeys: Introduce a sub-page allocator
selftests/vm/pkeys: Test correct behaviour of pkey-0
selftests/vm/pkeys: Override access right definitions on powerpc
Sandipan Das (5):
selftests: vm: pkeys: Use sane types for pkey register
selftests: vm: pkeys: Add helpers for pkey bits
selftests: vm: pkeys: Use the correct huge page size
selftests: vm: pkeys: Use the correct page size on powerpc
selftests: vm: pkeys: Fix multilib builds for x86
Thiago Jung Bauermann (2):
selftests/vm/pkeys: Move some definitions to arch-specific header
selftests/vm/pkeys: Make gcc check arguments of sigsafe_printf()
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 73 ++
tools/testing/selftests/vm/pkey-helpers.h | 225 ++++++
tools/testing/selftests/vm/pkey-powerpc.h | 136 ++++
tools/testing/selftests/vm/pkey-x86.h | 181 +++++
.../selftests/{x86 => vm}/protection_keys.c | 696 ++++++++++--------
tools/testing/selftests/x86/.gitignore | 1 -
tools/testing/selftests/x86/Makefile | 2 +-
tools/testing/selftests/x86/pkey-helpers.h | 219 ------
9 files changed, 1002 insertions(+), 532 deletions(-)
create mode 100644 tools/testing/selftests/vm/pkey-helpers.h
create mode 100644 tools/testing/selftests/vm/pkey-powerpc.h
create mode 100644 tools/testing/selftests/vm/pkey-x86.h
rename tools/testing/selftests/{x86 => vm}/protection_keys.c (74%)
delete mode 100644 tools/testing/selftests/x86/pkey-helpers.h
--
2.17.1
Integrating Kselftest into Kernel CI rings depends on the Kselftest build
and install framework supporting Kernel CI use-cases. I am kicking off
an effort to support Kselftest runs in Kernel CI rings. Running these
tests in Kernel CI rings will help the quality of kernel releases, both
stable and mainline.
What is required for full support?
1. Cross-compilation & relocatable build support
2. Generates objects in objdir/kselftest without cluttering main objdir
3. Leave source directory clean
4. Installs correctly in objdir/kselftest/kselftest_install and adds
itself to the run_kselftest.sh script generated during install.
Note that the install step is necessary for all files needed at run
time to be installed.
I looked into the current status and identified problems. The work needed
to add full support is minimal. Out of 80+ tests, 7 fail to cross-build
and 1 fails to install correctly.
The list is below:
Tests that fail to build: bpf, capabilities, kvm, memfd, mqueue, timens, vm
Tests that fail to install: android (partial failure)
Tests that leave the source directory dirty: bpf, seccomp
I have patches ready for the following issues:
Kselftest objects (test dirs) clutter top level object directory.
seccomp_bpf generates objects in the source directory.
I created a topic branch to collect all the patches:
https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/?…
I am going to start working on the build problems. If anybody is
interested in helping me with this effort, don't hesitate to
contact me. My first priority is fixing build and install, and
then looking into tests that leave the source directory dirty.
Detailed report can be found here:
https://drive.google.com/file/d/11nnWOKIzzOrE4EiucZBn423lzSU_eNNv/view?usp=…
thanks,
-- Shuah