The XSAVE feature set supports saving and restoring processor state
components, which the kernel uses for process context switching. The state
components include x87 state for the FPU execution environment, SSE state,
AVX state and so on. To ensure that XSAVE works correctly, add basic
selftests for XSAVE architecture functionality.
This patch set tests and verifies the basic XSAVE/XRSTOR functions in
user space on the x86 platform; the XSAVE contents of a process should
not change during or after signal handling.
This series introduces only the most basic XSAVE tests. In the
future, the intention is to continue expanding the scope of
these selftests to include more kernel XSAVE-related functionality
and XSAVE-managed features like AMX and shadow stacks.
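As a rough illustration of what such a test does, here is a minimal
user-space sketch. This is not code from the series: the buffer size, the
component mask and the output format are assumptions, and a real test must
pin known register values first, since libc code running between the two
snapshots may legitimately touch SSE state.

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define XSAVE_BUF_SIZE 4096	/* assumption; real code sizes this via CPUID leaf 0xD */

static uint8_t buf_before[XSAVE_BUF_SIZE] __attribute__((aligned(64)));
static uint8_t buf_after[XSAVE_BUF_SIZE] __attribute__((aligned(64)));

static inline void do_xsave(void *buf, uint32_t lo, uint32_t hi)
{
	/* XSAVE stores the state components selected in EDX:EAX into a
	 * 64-byte-aligned area. */
	asm volatile("xsave (%0)" : : "r" (buf), "a" (lo), "d" (hi) : "memory");
}

static void sigusr1_handler(int sig)
{
	/* The kernel saves the interrupted context on signal delivery and
	 * restores it on sigreturn; that round trip is what is under test. */
}

int main(void)
{
	signal(SIGUSR1, sigusr1_handler);

	do_xsave(buf_before, 0x3, 0);	/* 0x3 = x87 | SSE */
	raise(SIGUSR1);
	do_xsave(buf_after, 0x3, 0);

	if (memcmp(buf_before, buf_after, XSAVE_BUF_SIZE))
		printf("[FAIL]\txsave state changed across signal\n");
	else
		printf("[PASS]\txsave state unchanged across signal\n");
	return 0;
}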
========
- Change from v3 to v4:
- Improve the comment in patch 1.
- Change from v2 to v3:
- Improve the description of patch 2 git log.
- Change from v1 to v2:
- Improve the cover-letter. (Dave Hansen)
Pengfei Xu (2):
selftests/xsave: test basic XSAVE architecture functionality
selftests/xsave: add xsave test during and after signal handling
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/xsave/.gitignore | 3 +
tools/testing/selftests/xsave/Makefile | 6 +
tools/testing/selftests/xsave/xsave_common.h | 246 ++++++++++++++++++
.../selftests/xsave/xsave_instruction.c | 83 ++++++
.../selftests/xsave/xsave_signal_handle.c | 184 +++++++++++++
6 files changed, 523 insertions(+)
create mode 100644 tools/testing/selftests/xsave/.gitignore
create mode 100644 tools/testing/selftests/xsave/Makefile
create mode 100644 tools/testing/selftests/xsave/xsave_common.h
create mode 100644 tools/testing/selftests/xsave/xsave_instruction.c
create mode 100644 tools/testing/selftests/xsave/xsave_signal_handle.c
--
2.20.1
LKP/0Day reported some build errors in the kvm selftests, and the error
messages are not always the same:
- lib/x86_64/processor.c:1083:31: error: ‘KVM_CAP_NESTED_STATE’ undeclared
(first use in this function); did you mean ‘KVM_CAP_PIT_STATE2’?
- lib/test_util.c:189:30: error: ‘MAP_HUGE_16KB’ undeclared (first use
in this function); did you mean ‘MAP_HUGE_16GB’?
Although kvm depends on the khdr target, the two are still built in
parallel when -j is specified, which causes the compile errors above.
Mark the khdr target as .NOTPARALLEL so that it is always built first.
CC: Philip Li <philip.li(a)intel.com>
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
tools/testing/selftests/lib.mk | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/lib.mk b/tools/testing/selftests/lib.mk
index 7ee911355328..5074b01f2a29 100644
--- a/tools/testing/selftests/lib.mk
+++ b/tools/testing/selftests/lib.mk
@@ -48,6 +48,7 @@ ARCH ?= $(SUBARCH)
# When local build is done, headers are installed in the default
# INSTALL_HDR_PATH usr/include.
.PHONY: khdr
+.NOTPARALLEL:
khdr:
ifndef KSFT_KHDR_INSTALL_DONE
ifeq (1,$(DEFAULT_INSTALL_HDR_PATH))
--
2.33.0
Introduction
============
This patch set depends on:
- support for the euid policy keyword for critical data
(https://lore.kernel.org/linux-integrity/20210705115650.3373599-1-roberto.sa…)
- basic DIGLIM
(https://lore.kernel.org/linux-integrity/20210914163401.864635-1-roberto.sas…)
Introduce the remaining features necessary to upload reference values to
the kernel from RPM headers or from digest lists in other formats.
Loader: automatically uploads digest lists from a directory specified in
the kernel configuration, and executes a user space uploader for digest
lists in a format that is not recognized by the kernel;
LSM: identifies digest list parsers and monitors their activity for
integrity evaluation; it protects digest list parsers from other user
space processes considered untrusted;
Digest list generators: user space tools to generate digest lists from
files (in the compact format) or from the RPM DB;
Digest list uploader and parsers: user space tools responsible for
uploading to the kernel digest lists that are not in the compact format
(e.g. those derived from the RPM DB);
Administration guide: describes the steps necessary to upload all the
digests of an RPM-based Linux distribution to the kernel, using a custom
kernel with the DIGLIM patches applied.
With these changes, DIGLIM is ready to be used by IMA for measurement and
appraisal (this functionality will be added with a future patch set).
DIGLIM already supports appended signatures, but at the moment they cannot
be interpreted by IMA (unsupported ID PKEY_ID_PGP). Another patch set is
necessary to load the PGP keys from the Linux distribution to the system
keyring and to verify the PGP signatures of the RPM headers.
With the patch sets above and the execution policies for IMA proposed some
time ago, it will be possible to generate a measurement list with digest
lists and unknown files, and enable IMA appraisal in enforcing mode.
The kernel command line would be:
ima_template=ima-modsig ima_policy="exec_tcb|tmpfs|digest_lists|appraise_exec_tcb|appraise_tmpfs|appraise_digest_lists"
The effort required for Linux distribution vendors will be to generate and
sign the digest lists for the digest list uploader and the RPM parser. This
could be done for example in the kernel-tools package (or in a separate
package). Existing package signatures are sufficient for remaining files.
Issues/Questions
================
Lockdep (patch 2/9)
-------------------
I'm using iterate_dir() and file_open_root() to iterate and open files
in a directory. Unfortunately, I get the following warning:
============================================
WARNING: possible recursive locking detected
5.15.0-rc1-dont-use-00049-ga5a881519991 #134 Not tainted
--------------------------------------------
swapper/1 is trying to acquire lock:
0000000066812898 (&sb->s_type->i_mutex_key#7){++++}-{4:4}, at: path_openat+0x75d/0xd20
but task is already holding lock:
0000000066812898 (&sb->s_type->i_mutex_key#7){++++}-{4:4}, at: iterate_dir+0x65/0x250
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&sb->s_type->i_mutex_key#7);
lock(&sb->s_type->i_mutex_key#7);
*** DEADLOCK ***
because path_openat() might be trying to lock the directory already
locked by iterate_dir(). What would be a good way to avoid this?
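For reference, a simplified sketch of the pattern that triggers the splat
(hypothetical names, not the actual patch):

struct diglim_dir_ctx {
	struct dir_context ctx;
	struct path *dir_path;
};

static int diglim_dir_actor(struct dir_context *c, const char *name,
			    int len, loff_t pos, u64 ino, unsigned int d_type)
{
	struct diglim_dir_ctx *dctx =
		container_of(c, struct diglim_dir_ctx, ctx);
	struct file *file;
	char *fname;

	/* the directory entry name is not NUL-terminated */
	fname = kstrndup(name, len, GFP_KERNEL);
	if (!fname)
		return -ENOMEM;

	/* iterate_dir() holds inode_lock(dir) while calling this actor;
	 * opening "name" relative to the same directory makes
	 * path_openat() take inode_lock(dir) again: the same lock class
	 * taken twice, hence the lockdep report. */
	file = file_open_root(dctx->dir_path, fname, O_RDONLY, 0);
	if (!IS_ERR(file))
		fput(file);
	kfree(fname);
	return 0;
}

One possible way out would be to only record the names in the actor and
open the files after iterate_dir() has returned and dropped the lock, but
that is exactly the kind of guidance being asked for here.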
Inode availability in security_file_free() (patch 3/9)
------------------------------------------------------
It seems that this hook is called when the last reference to a file is
released. After enabling debugging, sometimes the kernel reported that the
inode I was trying to access was already freed.
To avoid this situation, I'm grabbing an additional reference to the
inode in the security_file_open() hook, to ensure that the inode does not
disappear, and I'm releasing it in the security_file_free() hook. Is this
solution acceptable?
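Concretely, the approach looks roughly like this (a sketch with
hypothetical helper names, not the actual patch):

static int diglim_file_open(struct file *file)
{
	struct inode *inode = file_inode(file);

	if (diglim_inode_of_interest(inode))	/* hypothetical predicate */
		ihold(inode);	/* extra reference, dropped in file_free */
	return 0;
}

static void diglim_file_free(struct file *file)
{
	struct inode *inode = file_inode(file);

	if (diglim_inode_of_interest(inode)) {
		/* the inode is still valid here thanks to ihold() above */
		diglim_process_inode(inode);	/* hypothetical */
		iput(inode);
	}
}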
Roberto Sassu (9):
ima: Introduce new hook DIGEST_LIST_CHECK
diglim: Loader
diglim: LSM
diglim: Tests - LSM
diglim: Compact digest list generator
diglim: RPM digest list generator
diglim: Digest list uploader
diglim: RPM parser
diglim: Admin guide
Documentation/admin-guide/diglim.rst | 136 +++++
Documentation/admin-guide/index.rst | 1 +
.../security/diglim/implementation.rst | 16 +
Documentation/security/diglim/index.rst | 1 +
Documentation/security/diglim/lsm.rst | 65 +++
Documentation/security/diglim/tests.rst | 18 +-
MAINTAINERS | 10 +
security/integrity/diglim/Kconfig | 14 +
security/integrity/diglim/Makefile | 2 +-
security/integrity/diglim/diglim.h | 27 +
security/integrity/diglim/fs.c | 3 +
security/integrity/diglim/hooks.c | 436 ++++++++++++++++
security/integrity/diglim/loader.c | 92 ++++
security/integrity/iint.c | 1 +
security/integrity/ima/ima.h | 1 +
security/integrity/ima/ima_main.c | 3 +-
security/integrity/ima/ima_policy.c | 3 +
security/integrity/integrity.h | 8 +
tools/diglim/Makefile | 27 +
tools/diglim/common.c | 79 +++
tools/diglim/common.h | 59 +++
tools/diglim/compact_gen.c | 349 +++++++++++++
tools/diglim/rpm_gen.c | 334 ++++++++++++
tools/diglim/rpm_parser.c | 483 ++++++++++++++++++
tools/diglim/upload_digest_lists.c | 238 +++++++++
tools/testing/selftests/diglim/Makefile | 12 +-
tools/testing/selftests/diglim/common.h | 9 +
tools/testing/selftests/diglim/selftest.c | 357 ++++++++++++-
28 files changed, 2764 insertions(+), 20 deletions(-)
create mode 100644 Documentation/admin-guide/diglim.rst
create mode 100644 Documentation/security/diglim/lsm.rst
create mode 100644 security/integrity/diglim/hooks.c
create mode 100644 security/integrity/diglim/loader.c
create mode 100644 tools/diglim/Makefile
create mode 100644 tools/diglim/common.c
create mode 100644 tools/diglim/common.h
create mode 100644 tools/diglim/compact_gen.c
create mode 100644 tools/diglim/rpm_gen.c
create mode 100644 tools/diglim/rpm_parser.c
create mode 100644 tools/diglim/upload_digest_lists.c
--
2.25.1
Hi, I have been sharing an old VFAT-formatted hard disk from one PC to
another using Samba, and sometime after kernel 5.14.0 it stopped working:
the disk was apparently no longer being shared, as the mount.smbfs
command on the client failed with error -13, yet mount.smbfs still worked
for ext3 filesystems shared from the same machine that had the VFAT
filesystem.
The only anomaly I saw on the machine with the VFAT-formatted hard disk
was that the output of the mount command truncated the mount point to
the first four characters of its base name.
E.g., when the VFAT filesystem was mounted on /mnt/victoria, the output
of the mount command showed the filesystem mounted on /mnt/vict.
The kernel build was i386 with gcc 11.2.0-4, using:
make -j2 menuconfig bindeb-pkg
The .config is available on request.
The git-bisect was:
victoria:/usr/src/linux# git bisect log
git bisect start '--' 'fs/fat'
# good: [7d2a07b769330c34b4deabeed939325c77a7ec2f] Linux 5.14
git bisect good 7d2a07b769330c34b4deabeed939325c77a7ec2f
# bad: [a3fa7a101dcff93791d1b1bdb3affcad1410c8c1] Merge branches 'akpm' and 'akpm-hotfixes' (patches from Andrew)
git bisect bad a3fa7a101dcff93791d1b1bdb3affcad1410c8c1
# good: [edb0872f44ec9976ea6d052cb4b93cd2d23ac2ba] block: move the bdi from the request_queue to the gendisk
git bisect good edb0872f44ec9976ea6d052cb4b93cd2d23ac2ba
# good: [b0d4adaf3b3c4402d9c3b6186e02aa1e4f7985cd] fat: Add KUnit tests for checksums and timestamps
git bisect good b0d4adaf3b3c4402d9c3b6186e02aa1e4f7985cd
# bad: [c815f04ba94940fbc303a6ea9669e7da87f8e77d] Merge tag 'linux-kselftest-kunit-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
git bisect bad c815f04ba94940fbc303a6ea9669e7da87f8e77d
# first bad commit: [c815f04ba94940fbc303a6ea9669e7da87f8e77d] Merge tag 'linux-kselftest-kunit-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
amarsh04@victoria:~$ mount|grep vic
/dev/sdb6 on /vict type vfat (rw,relatime,uid=65534,gid=65534,fmask=0000,dmask=0000,allow_utime=0022,codepage=437,iocharset=utf8,shortname=mixed,errors=remount-ro)
Happy to run any further tests, but kernel builds are slow on this machine (Pentium D).
Arthur.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
This test assumes that the declared kunit_suite object is the exact one
which is being executed, which KUnit will not guarantee [1].
Specifically, `suite->log` is not initialized until a suite object is
executed. So if KUnit makes a copy of the suite and runs that instead,
this test dereferences an invalid pointer and (hopefully) segfaults.
N.B. since we no longer assume this, we can no longer verify that
`suite->log` is *not* allocated during normal execution.
An alternative to this patch that would allow us to test that would
require exposing an API for the current test to get its current suite.
Exposing that for one internal kunit test seems like overkill, and
grants users more footguns (e.g. reusing a test case in multiple suites
and changing behavior based on the suite name, dynamically modifying the
setup/cleanup funcs, storing/reading stuff out of the suite->log, etc.).
[1] In a subsequent patch, KUnit will allow running subsets of test
cases within a suite by making a copy of the suite with the filtered test
list. But there are other reasons KUnit might execute a copy, e.g. if it
ever wants to support parallel execution of different suites, or
recovering from errors and restarting suites.
Signed-off-by: Daniel Latypov <dlatypov(a)google.com>
Reviewed-by: Brendan Higgins <brendanhiggins(a)google.com>
---
lib/kunit/kunit-test.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/lib/kunit/kunit-test.c b/lib/kunit/kunit-test.c
index d69efcbed624..555601d17f79 100644
--- a/lib/kunit/kunit-test.c
+++ b/lib/kunit/kunit-test.c
@@ -415,12 +415,15 @@ static struct kunit_suite kunit_log_test_suite = {
static void kunit_log_test(struct kunit *test)
{
- struct kunit_suite *suite = &kunit_log_test_suite;
+ struct kunit_suite suite;
+
+ suite.log = kunit_kzalloc(test, KUNIT_LOG_SIZE, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, suite.log);
kunit_log(KERN_INFO, test, "put this in log.");
kunit_log(KERN_INFO, test, "this too.");
- kunit_log(KERN_INFO, suite, "add to suite log.");
- kunit_log(KERN_INFO, suite, "along with this.");
+ kunit_log(KERN_INFO, &suite, "add to suite log.");
+ kunit_log(KERN_INFO, &suite, "along with this.");
#ifdef CONFIG_KUNIT_DEBUGFS
KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
@@ -428,12 +431,11 @@ static void kunit_log_test(struct kunit *test)
KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
strstr(test->log, "this too."));
KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
- strstr(suite->log, "add to suite log."));
+ strstr(suite.log, "add to suite log."));
KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
- strstr(suite->log, "along with this."));
+ strstr(suite.log, "along with this."));
#else
KUNIT_EXPECT_PTR_EQ(test, test->log, (char *)NULL);
- KUNIT_EXPECT_PTR_EQ(test, suite->log, (char *)NULL);
#endif
}
base-commit: a3fa7a101dcff93791d1b1bdb3affcad1410c8c1
--
2.33.0.309.g3052b89438-goog
From: Baolin Wang <baolin.wang(a)linux.alibaba.com>
[ Upstream commit d538ddb97e066571e4fc58b832f40739621b42bb ]
The openat2 test suite fails on ARM64 because the definition of
O_LARGEFILE is different on ARM64. Fix the problem by using the correct
O_LARGEFILE value on ARM64.
"openat2 unexpectedly returned # 3['.../tools/testing/selftests/openat2']
with 208000 (!= 208000)
not ok 102 openat2 with incompatible flags (O_PATH | O_LARGEFILE) fails
with -22 (Invalid argument)"
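For context (a sketch for illustration, not part of the patch): the
kernel defines O_LARGEFILE per architecture, so a test that hard-codes
x86's value builds the wrong flag bits on arm64:

#include <stdio.h>

int main(void)
{
#ifdef __aarch64__
	unsigned int o_largefile = 0x20000;	/* arm64: 0400000 octal */
#else
	unsigned int o_largefile = 0x8000;	/* x86 and other asm-generic users: 0100000 octal */
#endif
	printf("O_LARGEFILE = 0x%x\n", o_largefile);
	return 0;
}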
Fixed change log to improve formatting and clarity:
Shuah Khan <skhan(a)linuxfoundation.org>
Signed-off-by: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Reviewed-by: Aleksa Sarai <cyphar(a)cyphar.com>
Acked-by: Christian Brauner <christian.brauner(a)ubuntu.com>
Signed-off-by: Shuah Khan <skhan(a)linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/openat2/openat2_test.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/openat2/openat2_test.c b/tools/testing/selftests/openat2/openat2_test.c
index b386367c606b..5354cef55c6c 100644
--- a/tools/testing/selftests/openat2/openat2_test.c
+++ b/tools/testing/selftests/openat2/openat2_test.c
@@ -22,7 +22,11 @@
* XXX: This is wrong on {mips, parisc, powerpc, sparc}.
*/
#undef O_LARGEFILE
+#ifdef __aarch64__
+#define O_LARGEFILE 0x20000
+#else
#define O_LARGEFILE 0x8000
+#endif
struct open_how_ext {
struct open_how inner;
--
2.30.2
From: Baolin Wang <baolin.wang(a)linux.alibaba.com>
[ Upstream commit d538ddb97e066571e4fc58b832f40739621b42bb ]
The openat2 test suite fails on ARM64 because the definition of
O_LARGEFILE is different on ARM64. Fix the problem by using the correct
O_LARGEFILE value on ARM64.
"openat2 unexpectedly returned # 3['.../tools/testing/selftests/openat2']
with 208000 (!= 208000)
not ok 102 openat2 with incompatible flags (O_PATH | O_LARGEFILE) fails
with -22 (Invalid argument)"
Fixed change log to improve formatting and clarity:
Shuah Khan <skhan(a)linuxfoundation.org>
Signed-off-by: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Reviewed-by: Aleksa Sarai <cyphar(a)cyphar.com>
Acked-by: Christian Brauner <christian.brauner(a)ubuntu.com>
Signed-off-by: Shuah Khan <skhan(a)linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/openat2/openat2_test.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/openat2/openat2_test.c b/tools/testing/selftests/openat2/openat2_test.c
index 381d874cce99..300af824b07b 100644
--- a/tools/testing/selftests/openat2/openat2_test.c
+++ b/tools/testing/selftests/openat2/openat2_test.c
@@ -22,7 +22,11 @@
* XXX: This is wrong on {mips, parisc, powerpc, sparc}.
*/
#undef O_LARGEFILE
+#ifdef __aarch64__
+#define O_LARGEFILE 0x20000
+#else
#define O_LARGEFILE 0x8000
+#endif
struct open_how_ext {
struct open_how inner;
--
2.30.2
From: Baolin Wang <baolin.wang(a)linux.alibaba.com>
[ Upstream commit d538ddb97e066571e4fc58b832f40739621b42bb ]
The openat2 test suite fails on ARM64 because the definition of
O_LARGEFILE is different on ARM64. Fix the problem by using the correct
O_LARGEFILE value on ARM64.
"openat2 unexpectedly returned # 3['.../tools/testing/selftests/openat2']
with 208000 (!= 208000)
not ok 102 openat2 with incompatible flags (O_PATH | O_LARGEFILE) fails
with -22 (Invalid argument)"
Fixed change log to improve formatting and clarity:
Shuah Khan <skhan(a)linuxfoundation.org>
Signed-off-by: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Reviewed-by: Aleksa Sarai <cyphar(a)cyphar.com>
Acked-by: Christian Brauner <christian.brauner(a)ubuntu.com>
Signed-off-by: Shuah Khan <skhan(a)linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/openat2/openat2_test.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/openat2/openat2_test.c b/tools/testing/selftests/openat2/openat2_test.c
index d7ec1e7da0d0..1bddbe934204 100644
--- a/tools/testing/selftests/openat2/openat2_test.c
+++ b/tools/testing/selftests/openat2/openat2_test.c
@@ -22,7 +22,11 @@
* XXX: This is wrong on {mips, parisc, powerpc, sparc}.
*/
#undef O_LARGEFILE
+#ifdef __aarch64__
+#define O_LARGEFILE 0x20000
+#else
#define O_LARGEFILE 0x8000
+#endif
struct open_how_ext {
struct open_how inner;
--
2.30.2
From: Li Zhijian <lizhijian(a)cn.fujitsu.com>
[ Upstream commit 2d82d73da35b72b53fe0d96350a2b8d929d07e42 ]
The 0Day robot observed that this test easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since 0Day schedules this test on a random host that may have only a few
CPUs (2-8), enlarge the timeout to avoid a false NG report.
In practice, I tried pinning it to a single CPU with 'taskset 0x01
./test_maps' and found that 10s is likely enough, but I still prefer the
larger value of 30.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Acked-by: Song Liu <songliubraving(a)fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-2-lizhijian@cn.fujitsu.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 96c6238a4a1f..3f503ad37a2b 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -730,7 +730,7 @@ static void test_sockmap(int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 30;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.30.2
From: Li Zhijian <lizhijian(a)cn.fujitsu.com>
[ Upstream commit 2d82d73da35b72b53fe0d96350a2b8d929d07e42 ]
The 0Day robot observed that this test easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since 0Day schedules this test on a random host that may have only a few
CPUs (2-8), enlarge the timeout to avoid a false NG report.
In practice, I tried pinning it to a single CPU with 'taskset 0x01
./test_maps' and found that 10s is likely enough, but I still prefer the
larger value of 30.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Acked-by: Song Liu <songliubraving(a)fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-2-lizhijian@cn.fujitsu.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 4e202217fae1..87ba89df9802 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -796,7 +796,7 @@ static void test_sockmap(int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 30;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.30.2
From: Li Zhijian <lizhijian(a)cn.fujitsu.com>
[ Upstream commit 2d82d73da35b72b53fe0d96350a2b8d929d07e42 ]
The 0Day robot observed that this test easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since 0Day schedules this test on a random host that may have only a few
CPUs (2-8), enlarge the timeout to avoid a false NG report.
In practice, I tried pinning it to a single CPU with 'taskset 0x01
./test_maps' and found that 10s is likely enough, but I still prefer the
larger value of 30.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Acked-by: Song Liu <songliubraving(a)fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-2-lizhijian@cn.fujitsu.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 1c4219ceced2..45c7a55f0b8b 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -972,7 +972,7 @@ static void test_sockmap(unsigned int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 30;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.30.2
From: Jussi Maki <joamaki(a)gmail.com>
[ Upstream commit 95413846cca37f20000dd095cf6d91f8777129d7 ]
The program type cannot be deduced from 'tx', which causes an invalid
argument error when trying to load xdp_tx.o using the skeleton.
Rename the section to "xdp" so that libbpf can deduce the program type.
Signed-off-by: Jussi Maki <joamaki(a)gmail.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210731055738.16820-7-joamaki@gmail.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/progs/xdp_tx.c | 2 +-
tools/testing/selftests/bpf/test_xdp_veth.sh | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/xdp_tx.c b/tools/testing/selftests/bpf/progs/xdp_tx.c
index 57912e7c94b0..9ed477776eca 100644
--- a/tools/testing/selftests/bpf/progs/xdp_tx.c
+++ b/tools/testing/selftests/bpf/progs/xdp_tx.c
@@ -3,7 +3,7 @@
#include <linux/bpf.h>
#include "bpf_helpers.h"
-SEC("tx")
+SEC("xdp")
int xdp_tx(struct xdp_md *xdp)
{
return XDP_TX;
diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh
index ba8ffcdaac30..995278e684b6 100755
--- a/tools/testing/selftests/bpf/test_xdp_veth.sh
+++ b/tools/testing/selftests/bpf/test_xdp_veth.sh
@@ -108,7 +108,7 @@ ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy
-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec tx
+ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy
trap cleanup EXIT
--
2.30.2
From: Li Zhijian <lizhijian(a)cn.fujitsu.com>
[ Upstream commit 2d82d73da35b72b53fe0d96350a2b8d929d07e42 ]
The 0Day robot observed that this test easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since 0Day schedules this test on a random host that may have only a few
CPUs (2-8), enlarge the timeout to avoid a false NG report.
In practice, I tried pinning it to a single CPU with 'taskset 0x01
./test_maps' and found that 10s is likely enough, but I still prefer the
larger value of 30.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Acked-by: Song Liu <songliubraving(a)fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-2-lizhijian@cn.fujitsu.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 0d92ebcb335d..179e680e8d13 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -968,7 +968,7 @@ static void test_sockmap(unsigned int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 30;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.30.2
From: Mark Brown <broonie(a)kernel.org>
[ Upstream commit 0c69bd2ca6ee20064dde7853cd749284e053a874 ]
The PAC tests check whether the system supports the relevant PAC
features, but instead of skipping the tests when they cannot be executed,
they fail them, which makes things look like they're not working when
they are.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Link: https://lore.kernel.org/r/20210819165723.43903-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/arm64/pauth/pac.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
index 592fe538506e..b743daa772f5 100644
--- a/tools/testing/selftests/arm64/pauth/pac.c
+++ b/tools/testing/selftests/arm64/pauth/pac.c
@@ -25,13 +25,15 @@
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* data key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACA)) \
+ SKIP(return, "PAUTH not enabled"); \
} while (0)
#define ASSERT_GENERIC_PAUTH_ENABLED() \
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* generic key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACG)) \
+ SKIP(return, "Generic PAUTH not enabled"); \
} while (0)
void sign_specific(struct signatures *sign, size_t val)
@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
--
2.30.2
From: Mark Brown <broonie(a)kernel.org>
[ Upstream commit 83e5dcbece4ea67ec3ad94b897e2844184802fd7 ]
When skipping the tests due to a lack of system support for MTE, we
currently print a message saying FAIL, which makes it look like the test
failed even though the test actually reported KSFT_SKIP, creating some
confusion. Change the error message to say SKIP instead so things are
clearer.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Link: https://lore.kernel.org/r/20210819172902.56211-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/arm64/mte/mte_common_util.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
index 70665ba88cbb..2703bd628d06 100644
--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
+++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
@@ -285,7 +285,7 @@ int mte_default_setup(void)
int ret;
if (!(hwcaps2 & HWCAP2_MTE)) {
- ksft_print_msg("FAIL: MTE features unavailable\n");
+ ksft_print_msg("SKIP: MTE features unavailable\n");
return KSFT_SKIP;
}
/* Get current mte mode */
--
2.30.2
From: Yonghong Song <yhs(a)fb.com>
[ Upstream commit b16ac5bf732a5e23d164cf908ec7742d6a6120d3 ]
libbpf CI has reported that the send_signal test is flaky, although I am
not able to reproduce it in my local environment. But I am able to
reproduce it with the on-demand libbpf CI ([1]).
Based on code analysis, the following is a possible reason: the failing
subtest runs a bpf program in softirq context, and bpf_send_signal() only
sends to a fork of the "test_progs" process. If the underlying current
task is not "test_progs", bpf_send_signal() will not be triggered and the
subtest will fail.
To reduce the chance that the underlying process is not the intended one,
this patch boosts the scheduling priority to -20 (the highest allowed by
setpriority()). I did 10 runs with the on-demand libbpf CI with this
patch and did not observe any failures.
[1] https://github.com/libbpf/libbpf/actions/workflows/ondemand.yml
Signed-off-by: Yonghong Song <yhs(a)fb.com>
Signed-off-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210817190923.3186725-1-yhs@fb.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
.../selftests/bpf/prog_tests/send_signal.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
index 7043e6ded0e6..75b72c751772 100644
--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
+++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
+#include <sys/time.h>
+#include <sys/resource.h>
#include "test_send_signal_kern.skel.h"
static volatile int sigusr1_received = 0;
@@ -41,12 +43,23 @@ static void test_send_signal_common(struct perf_event_attr *attr,
}
if (pid == 0) {
+ int old_prio;
+
/* install signal handler and notify parent */
signal(SIGUSR1, sigusr1_handler);
close(pipe_c2p[0]); /* close read */
close(pipe_p2c[1]); /* close write */
+ /* boost with a high priority so we got a higher chance
+ * that if an interrupt happens, the underlying task
+ * is this process.
+ */
+ errno = 0;
+ old_prio = getpriority(PRIO_PROCESS, 0);
+ ASSERT_OK(errno, "getpriority");
+ ASSERT_OK(setpriority(PRIO_PROCESS, 0, -20), "setpriority");
+
/* notify parent signal handler is installed */
CHECK(write(pipe_c2p[1], buf, 1) != 1, "pipe_write", "err %d\n", -errno);
@@ -62,6 +75,9 @@ static void test_send_signal_common(struct perf_event_attr *attr,
/* wait for parent notification and exit */
CHECK(read(pipe_p2c[0], buf, 1) != 1, "pipe_read", "err %d\n", -errno);
+ /* restore the old priority */
+ ASSERT_OK(setpriority(PRIO_PROCESS, 0, old_prio), "setpriority");
+
close(pipe_c2p[1]);
close(pipe_p2c[0]);
exit(0);
--
2.30.2
From: Jussi Maki <joamaki(a)gmail.com>
[ Upstream commit 95413846cca37f20000dd095cf6d91f8777129d7 ]
The program type cannot be deduced from 'tx', which causes an invalid
argument error when trying to load xdp_tx.o using the skeleton.
Rename the section to "xdp" so that libbpf can deduce the program type.
Signed-off-by: Jussi Maki <joamaki(a)gmail.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210731055738.16820-7-joamaki@gmail.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/progs/xdp_tx.c | 2 +-
tools/testing/selftests/bpf/test_xdp_veth.sh | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/xdp_tx.c b/tools/testing/selftests/bpf/progs/xdp_tx.c
index 94e6c2b281cb..5f725c720e00 100644
--- a/tools/testing/selftests/bpf/progs/xdp_tx.c
+++ b/tools/testing/selftests/bpf/progs/xdp_tx.c
@@ -3,7 +3,7 @@
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
-SEC("tx")
+SEC("xdp")
int xdp_tx(struct xdp_md *xdp)
{
return XDP_TX;
diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh
index ba8ffcdaac30..995278e684b6 100755
--- a/tools/testing/selftests/bpf/test_xdp_veth.sh
+++ b/tools/testing/selftests/bpf/test_xdp_veth.sh
@@ -108,7 +108,7 @@ ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy
-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec tx
+ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy
trap cleanup EXIT
--
2.30.2
From: Li Zhijian <lizhijian(a)cn.fujitsu.com>
[ Upstream commit 2d82d73da35b72b53fe0d96350a2b8d929d07e42 ]
The 0Day robot observed that this test easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since 0Day schedules this test on a random host that may have only a few
CPUs (2-8), enlarge the timeout to avoid a false NG report.
In practice, I tried pinning it to a single CPU with 'taskset 0x01
./test_maps' and found that 10s is likely enough, but I still prefer the
larger value of 30.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Acked-by: Song Liu <songliubraving(a)fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-2-lizhijian@cn.fujitsu.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 51adc42b2b40..7fed68492a2e 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -968,7 +968,7 @@ static void test_sockmap(unsigned int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 30;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.30.2
From: Mark Brown <broonie(a)kernel.org>
[ Upstream commit 0c69bd2ca6ee20064dde7853cd749284e053a874 ]
The PAC tests check whether the system supports the relevant PAC
features, but instead of skipping the tests when they cannot be executed,
they fail them, which makes things look like they're not working when
they are.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Link: https://lore.kernel.org/r/20210819165723.43903-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/arm64/pauth/pac.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
index 592fe538506e..b743daa772f5 100644
--- a/tools/testing/selftests/arm64/pauth/pac.c
+++ b/tools/testing/selftests/arm64/pauth/pac.c
@@ -25,13 +25,15 @@
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* data key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACA)) \
+ SKIP(return, "PAUTH not enabled"); \
} while (0)
#define ASSERT_GENERIC_PAUTH_ENABLED() \
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* generic key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACG)) \
+ SKIP(return, "Generic PAUTH not enabled"); \
} while (0)
void sign_specific(struct signatures *sign, size_t val)
@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
--
2.30.2
From: Mark Brown <broonie(a)kernel.org>
[ Upstream commit 83e5dcbece4ea67ec3ad94b897e2844184802fd7 ]
When skipping the tests due to a lack of system support for MTE, we
currently print a message saying FAIL, which makes it look like the test
failed even though the test actually reported KSFT_SKIP, creating some
confusion. Change the error message to say SKIP instead so things are
clearer.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Link: https://lore.kernel.org/r/20210819172902.56211-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/arm64/mte/mte_common_util.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
index f50ac31920d1..0328a1e08f65 100644
--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
+++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
@@ -298,7 +298,7 @@ int mte_default_setup(void)
int ret;
if (!(hwcaps2 & HWCAP2_MTE)) {
- ksft_print_msg("FAIL: MTE features unavailable\n");
+ ksft_print_msg("SKIP: MTE features unavailable\n");
return KSFT_SKIP;
}
/* Get current mte mode */
--
2.30.2
From: Yonghong Song <yhs(a)fb.com>
[ Upstream commit b16ac5bf732a5e23d164cf908ec7742d6a6120d3 ]
libbpf CI has reported that the send_signal test is flaky, although I am
not able to reproduce it in my local environment. But I am able to
reproduce it with the on-demand libbpf CI ([1]).
Based on code analysis, the following is a possible reason: the failing
subtest runs a bpf program in softirq context, and bpf_send_signal() only
sends to a fork of the "test_progs" process. If the underlying current
task is not "test_progs", bpf_send_signal() will not be triggered and the
subtest will fail.
To reduce the chance that the underlying process is not the intended one,
this patch boosts the scheduling priority to -20 (the highest allowed by
setpriority()). I did 10 runs with the on-demand libbpf CI with this
patch and did not observe any failures.
[1] https://github.com/libbpf/libbpf/actions/workflows/ondemand.yml
Signed-off-by: Yonghong Song <yhs(a)fb.com>
Signed-off-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210817190923.3186725-1-yhs@fb.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
.../selftests/bpf/prog_tests/send_signal.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
index 7043e6ded0e6..75b72c751772 100644
--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
+++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
+#include <sys/time.h>
+#include <sys/resource.h>
#include "test_send_signal_kern.skel.h"
static volatile int sigusr1_received = 0;
@@ -41,12 +43,23 @@ static void test_send_signal_common(struct perf_event_attr *attr,
}
if (pid == 0) {
+ int old_prio;
+
/* install signal handler and notify parent */
signal(SIGUSR1, sigusr1_handler);
close(pipe_c2p[0]); /* close read */
close(pipe_p2c[1]); /* close write */
+ /* boost with a high priority so we got a higher chance
+ * that if an interrupt happens, the underlying task
+ * is this process.
+ */
+ errno = 0;
+ old_prio = getpriority(PRIO_PROCESS, 0);
+ ASSERT_OK(errno, "getpriority");
+ ASSERT_OK(setpriority(PRIO_PROCESS, 0, -20), "setpriority");
+
/* notify parent signal handler is installed */
CHECK(write(pipe_c2p[1], buf, 1) != 1, "pipe_write", "err %d\n", -errno);
@@ -62,6 +75,9 @@ static void test_send_signal_common(struct perf_event_attr *attr,
/* wait for parent notification and exit */
CHECK(read(pipe_p2c[0], buf, 1) != 1, "pipe_read", "err %d\n", -errno);
+ /* restore the old priority */
+ ASSERT_OK(setpriority(PRIO_PROCESS, 0, old_prio), "setpriority");
+
close(pipe_c2p[1]);
close(pipe_p2c[0]);
exit(0);
--
2.30.2
From: Jussi Maki <joamaki(a)gmail.com>
[ Upstream commit 95413846cca37f20000dd095cf6d91f8777129d7 ]
The program type cannot be deduced from 'tx', which causes an invalid
argument error when trying to load xdp_tx.o using the skeleton.
Rename the section to "xdp" so that libbpf can deduce the program type.
Signed-off-by: Jussi Maki <joamaki(a)gmail.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210731055738.16820-7-joamaki@gmail.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/progs/xdp_tx.c | 2 +-
tools/testing/selftests/bpf/test_xdp_veth.sh | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/xdp_tx.c b/tools/testing/selftests/bpf/progs/xdp_tx.c
index 94e6c2b281cb..5f725c720e00 100644
--- a/tools/testing/selftests/bpf/progs/xdp_tx.c
+++ b/tools/testing/selftests/bpf/progs/xdp_tx.c
@@ -3,7 +3,7 @@
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
-SEC("tx")
+SEC("xdp")
int xdp_tx(struct xdp_md *xdp)
{
return XDP_TX;
diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh
index ba8ffcdaac30..995278e684b6 100755
--- a/tools/testing/selftests/bpf/test_xdp_veth.sh
+++ b/tools/testing/selftests/bpf/test_xdp_veth.sh
@@ -108,7 +108,7 @@ ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy
-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec tx
+ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy
trap cleanup EXIT
--
2.30.2
From: Li Zhijian <lizhijian(a)cn.fujitsu.com>
[ Upstream commit 2d82d73da35b72b53fe0d96350a2b8d929d07e42 ]
The 0Day robot observed that this test easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since 0Day schedules this test on a random host that may have only a few
CPUs (2-8), enlarge the timeout to avoid a false NG report.
In practice, I tried pinning it to a single CPU with 'taskset 0x01
./test_maps' and found that 10s is likely enough, but I still prefer the
larger value of 30.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
Signed-off-by: Alexei Starovoitov <ast(a)kernel.org>
Acked-by: Song Liu <songliubraving(a)fb.com>
Link: https://lore.kernel.org/bpf/20210820015556.23276-2-lizhijian@cn.fujitsu.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 30cbf5d98f7d..de58a3070eea 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -985,7 +985,7 @@ static void test_sockmap(unsigned int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 30;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.30.2
From: Mark Brown <broonie(a)kernel.org>
[ Upstream commit 0c69bd2ca6ee20064dde7853cd749284e053a874 ]
The PAC tests check whether the system supports the relevant PAC
features, but instead of skipping the tests when they cannot be executed,
they fail them, which makes things look like they're not working when
they are.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Link: https://lore.kernel.org/r/20210819165723.43903-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/arm64/pauth/pac.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
index 592fe538506e..b743daa772f5 100644
--- a/tools/testing/selftests/arm64/pauth/pac.c
+++ b/tools/testing/selftests/arm64/pauth/pac.c
@@ -25,13 +25,15 @@
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* data key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACA)) \
+ SKIP(return, "PAUTH not enabled"); \
} while (0)
#define ASSERT_GENERIC_PAUTH_ENABLED() \
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* generic key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACG)) \
+ SKIP(return, "Generic PAUTH not enabled"); \
} while (0)
void sign_specific(struct signatures *sign, size_t val)
@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
--
2.30.2
From: Mark Brown <broonie(a)kernel.org>
[ Upstream commit 83e5dcbece4ea67ec3ad94b897e2844184802fd7 ]
When skipping the tests due to a lack of system support for MTE, we
currently print a message saying FAIL, which makes it look like the test
failed even though the test actually reported KSFT_SKIP, creating some
confusion. Change the error message to say SKIP instead so things are
clearer.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Link: https://lore.kernel.org/r/20210819172902.56211-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/arm64/mte/mte_common_util.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
index f50ac31920d1..0328a1e08f65 100644
--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
+++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
@@ -298,7 +298,7 @@ int mte_default_setup(void)
int ret;
if (!(hwcaps2 & HWCAP2_MTE)) {
- ksft_print_msg("FAIL: MTE features unavailable\n");
+ ksft_print_msg("SKIP: MTE features unavailable\n");
return KSFT_SKIP;
}
/* Get current mte mode */
--
2.30.2
From: Yonghong Song <yhs(a)fb.com>
[ Upstream commit b16ac5bf732a5e23d164cf908ec7742d6a6120d3 ]
libbpf CI has reported that the send_signal test is flaky, although I am
not able to reproduce it in my local environment. But I am able to
reproduce it with the on-demand libbpf CI ([1]).
Based on code analysis, the following is a possible reason: the failing
subtest runs a bpf program in softirq context, and bpf_send_signal() only
sends to a fork of the "test_progs" process. If the underlying current
task is not "test_progs", bpf_send_signal() will not be triggered and the
subtest will fail.
To reduce the chance that the underlying process is not the intended one,
this patch boosts the scheduling priority to -20 (the highest allowed by
setpriority()). I did 10 runs with the on-demand libbpf CI with this
patch and did not observe any failures.
[1] https://github.com/libbpf/libbpf/actions/workflows/ondemand.yml
Signed-off-by: Yonghong Song <yhs(a)fb.com>
Signed-off-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210817190923.3186725-1-yhs@fb.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
.../selftests/bpf/prog_tests/send_signal.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
index 023cc532992d..839f7ddaec16 100644
--- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
+++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <test_progs.h>
+#include <sys/time.h>
+#include <sys/resource.h>
#include "test_send_signal_kern.skel.h"
int sigusr1_received = 0;
@@ -41,12 +43,23 @@ static void test_send_signal_common(struct perf_event_attr *attr,
}
if (pid == 0) {
+ int old_prio;
+
/* install signal handler and notify parent */
signal(SIGUSR1, sigusr1_handler);
close(pipe_c2p[0]); /* close read */
close(pipe_p2c[1]); /* close write */
+ /* boost with a high priority so we got a higher chance
+ * that if an interrupt happens, the underlying task
+ * is this process.
+ */
+ errno = 0;
+ old_prio = getpriority(PRIO_PROCESS, 0);
+ ASSERT_OK(errno, "getpriority");
+ ASSERT_OK(setpriority(PRIO_PROCESS, 0, -20), "setpriority");
+
/* notify parent signal handler is installed */
CHECK(write(pipe_c2p[1], buf, 1) != 1, "pipe_write", "err %d\n", -errno);
@@ -62,6 +75,9 @@ static void test_send_signal_common(struct perf_event_attr *attr,
/* wait for parent notification and exit */
CHECK(read(pipe_p2c[0], buf, 1) != 1, "pipe_read", "err %d\n", -errno);
+ /* restore the old priority */
+ ASSERT_OK(setpriority(PRIO_PROCESS, 0, old_prio), "setpriority");
+
close(pipe_c2p[1]);
close(pipe_p2c[0]);
exit(0);
--
2.30.2
From: Jussi Maki <joamaki(a)gmail.com>
[ Upstream commit 95413846cca37f20000dd095cf6d91f8777129d7 ]
The program type cannot be deduced from 'tx', which causes an invalid
argument error when trying to load xdp_tx.o using the skeleton.
Rename the section to "xdp" so that libbpf can deduce the program type.
Signed-off-by: Jussi Maki <joamaki(a)gmail.com>
Signed-off-by: Daniel Borkmann <daniel(a)iogearbox.net>
Acked-by: Andrii Nakryiko <andrii(a)kernel.org>
Link: https://lore.kernel.org/bpf/20210731055738.16820-7-joamaki@gmail.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
tools/testing/selftests/bpf/progs/xdp_tx.c | 2 +-
tools/testing/selftests/bpf/test_xdp_veth.sh | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/xdp_tx.c b/tools/testing/selftests/bpf/progs/xdp_tx.c
index 94e6c2b281cb..5f725c720e00 100644
--- a/tools/testing/selftests/bpf/progs/xdp_tx.c
+++ b/tools/testing/selftests/bpf/progs/xdp_tx.c
@@ -3,7 +3,7 @@
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
-SEC("tx")
+SEC("xdp")
int xdp_tx(struct xdp_md *xdp)
{
return XDP_TX;
diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh
index ba8ffcdaac30..995278e684b6 100755
--- a/tools/testing/selftests/bpf/test_xdp_veth.sh
+++ b/tools/testing/selftests/bpf/test_xdp_veth.sh
@@ -108,7 +108,7 @@ ip link set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1
ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2
ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy
-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec tx
+ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp
ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy
trap cleanup EXIT
--
2.30.2
Allow running each suite or each test case by itself, one per kernel boot.
The motivation for this is to debug "test hermeticity" issues.
This new --run_isolated flag would be a good first step to try and
narrow down root causes.
Context: sometimes tests pass/fail depending on what ran before them.
Memory corruption errors in particular might only cause noticeable
issues later on. But you can also have the opposite, where "fixing" one
test causes another to start failing.
Usage:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=suite
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit --run_isolated=test example
The last one would provide output like
======== [PASSED] example ========
[PASSED] example_simple_test
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 0 skipped.
Starting KUnit Kernel (2/3)...
============================================================
======== [SKIPPED] example ========
[SKIPPED] example_skip_test # SKIP this test should be skipped
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 1 skipped.
Starting KUnit Kernel (3/3)...
============================================================
======== [SKIPPED] example ========
[SKIPPED] example_mark_skipped_test # SKIP this test should be skipped
============================================================
Testing complete. 1 tests run. 0 failed. 0 crashed. 1 skipped.
See the last patch's description for a bit more detail.
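For a feel of what the new flag does under the hood, here is a minimal
Python sketch of the idea. This is an illustration only, not the actual
kunit.py code; list_tests and exec_and_parse are hypothetical callables
standing in for the real helpers:

    def run_isolated(list_tests, exec_and_parse, isolate_by, filter_glob='*'):
        # Ask the kernel for the matching test names, e.g.
        # ['example.example_simple_test', 'example.example_skip_test'].
        tests = list_tests(filter_glob)
        if isolate_by == 'suite':
            # Collapse 'suite.case' names down to unique suite names.
            tests = sorted({t.split('.', 1)[0] for t in tests})
        for glob in tests:
            # Each iteration is a separate kernel boot with a narrowed
            # filter, so tests cannot leak state into each other.
            exec_and_parse(glob)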
Meta:
The first patch is from another series with just a reworded commit
message, https://lore.kernel.org/linux-kselftest/20210805235145.2528054-2-dlatypov@g…
This patch series is based on the 2 patches in
https://lore.kernel.org/linux-kselftest/20210831171926.3832806-2-dlatypov@g….
(That's what adds support for us to run a single test case by itself).
Daniel Latypov (3):
kunit: add 'kunit.action' param to allow listing out tests
kunit: tool: factor exec + parse steps into a function
kunit: tool: support running each suite/test separately
lib/kunit/executor.c | 38 +++++++-
tools/testing/kunit/kunit.py | 127 +++++++++++++++++--------
tools/testing/kunit/kunit_tool_test.py | 40 ++++++++
3 files changed, 160 insertions(+), 45 deletions(-)
base-commit: 23fdafa5ae209688d5d5253786bab666bdb07b69
--
2.33.0.309.g3052b89438-goog
So I just committed these three fixes:
4b93c544e90e ("thunderbolt: test: split up test cases in
tb_test_credit_alloc_all")
ba7b1f861086 ("lib/test_scanf: split up number parsing test routines")
1476ff21abb4 ("iwl: fix debug printf format strings")
for the fallout from -Werror that I could easily check (mainly i386
'allyesconfig' - a situation I don't normally test).
The printk format string one was trivial and I hopefully didn't screw
anything up, but I'd ask people to look at and verify the two other
ones. I tried to be very careful, and organizing the code movement in
such a way that 'git diff' shows that it's doing the same thing before
and after, but hey, mistakes happen.
I found those two test-based ones somewhat annoying, because they both
showed how little the test infrastructure tries to follow kernel
rules. I bet those warnings have been showing up for a long long time,
and people went "that's not a relevant configuration" or had some
other reason to ignore them.
No, the test cases may not be relevant in most situations, but it's
not a good thing when something that is supposed to verify kernel
behavior then violates some very fundamental and core kernel rules.
And maybe it was simply missed. The one thing that was clear when I
did that thunderbolt thing in particular is how easy it is to create
variations of those 'struct some-assertion-struct' things on stack as
part of the KUNIT infrastructure. That's unfortunate. It is possible
that the solution to the kernel stack usage might have been to make
those structures static instead, but I didn't check whether the
description structs really can be.
It would be even nicer if they were 'static const'. As it stands, they
generate not only code that uses up stack, but also the code to
dynamically initialize them on the stack, which is all kinds of nasty.
I took one look at the generated code, and ran away screaming.
Anyway, I'm adding the Kunit maintainer and lists here too, just to
see if maybe it could be possible to make those 'struct kunit_assert'
things and friends be static and const, but at least for the cases
that caused problems for i386, those three commits should make the
build pass.
The test_scanf case didn't actually use the Kunit infrastructure, the
stack use explosion is because gcc doesn't seem to combine stack
allocations in many situations. I know gcc *sometimes* does that stack
allocation combining, but not here. I suspect it might be related to
type aliasing, and only merging stack slots when they have the same
types, and thus triggered by the different result buffer sizes. Maybe.
Linus
We refactored the lib/test_hash.c file into KUnit as part of the student
group LKCAMP [1] introductory hackathon for kernel development.
This test was pointed to our group by Daniel Latypov [2], so its full
conversion into a pure KUnit test was our goal in this patch series, but
we ran into many problems relating to it not being split as unit tests,
which complicated matters a bit, as the reasoning behind the original
tests is quite cryptic for those unfamiliar with hash implementations.
Some interesting developments we'd like to highlight are:
- In patch 1/6 we noticed that there was an unused define directive that
could be removed.
- In patch 5/6 we noticed how stringhash and hash tests are all under
the lib/test_hash.c file, which might cause some confusion, and we
also broke those kernel config entries up.
Overall KUnit developments have been made in the other patches in this
series:
In patches 2/6 through 4/6 and 6/6 we refactored the lib/test_hash.c
file to make it more compatible with the KUnit style, whilst
preserving the original idea of the maintainer who designed it (i.e.
George Spelvin). That structure might be undesirable for unit tests,
but we assume it is enough for a first patch.
This is our first patch series so we hope our contributions are
interesting and also hope to get some useful criticism from the
community :)
[1] - https://lkcamp.dev/
[2] - https://lore.kernel.org/linux-kselftest/CAGS_qxojszgM19u=3HLwFgKX5bm5Khywvs…
Isabella Basso (6):
hash.h: remove unused define directive
test_hash.c: move common definitions to top of file
test_hash.c: split test_int_hash into arch-specific functions
test_hash.c: split test_hash_init
lib/Kconfig.debug: properly split hash test kernel entries
test_hash.c: refactor into kunit
include/linux/hash.h | 5 +-
lib/Kconfig.debug | 28 ++++-
lib/Makefile | 3 +-
lib/test_hash.c | 249 ++++++++++++++++---------------------
tools/include/linux/hash.h | 5 +-
5 files changed, 136 insertions(+), 154 deletions(-)
--
2.33.0
Hi Linus,
Please pull the following Kselftest update for Linux 5.15-rc1.
This Kselftest update for Linux 5.15-rc1 consists of fixes to build
and test failures.
-- openat2 test failure for O_LARGEFILE flag on ARM64
-- x86 test build failures related to glibc 2.34 adding
support for variable sized MINSIGSTKSZ and SIGSTKSZ
-- removing obsolete configs in sync and cpufreq config files
-- minor spelling and duplicate header include cleanups
diff is attached.
thanks,
-- Shuah
----------------------------------------------------------------
The following changes since commit 2734d6c1b1a089fb593ef6a23d4b70903526fe0c:
Linux 5.14-rc2 (2021-07-18 14:13:49 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest tags/linux-kselftest-next-5.15-rc1
for you to fetch changes up to 67d6d80d90fb27b3cc7659f464fa3b87fd67bc14:
selftests/cpufreq: Rename DEBUG_PI_LIST to DEBUG_PLIST (2021-08-31 11:00:02 -0600)
----------------------------------------------------------------
linux-kselftest-next-5.15-rc1
This Kselftest update for Linux 5.15-rc1 consists of fixes to build
and test failures.
-- openat2 test failure for O_LARGEFILE flag on ARM64
-- x86 test build failures related to glibc 2.34 adding
support for variable sized MINSIGSTKSZ and SIGSTKSZ
-- removing obsolete configs in sync and cpufreq config files
-- minor spelling and duplicate header include cleanups
----------------------------------------------------------------
Baolin Wang (1):
selftests: openat2: Fix testing failure for O_LARGEFILE flag
Changcheng Deng (1):
kselftest:sched: remove duplicate include in cs_prctl_test.c
Colin Ian King (1):
selftests: safesetid: Fix spelling mistake "cant" -> "can't"
Jun Miao (1):
selftests/x86: Fix error: variably modified 'altstack_data' at file scope
Li Zhijian (2):
selftests/sync: Remove the deprecated config SYNC
selftests/cpufreq: Rename DEBUG_PI_LIST to DEBUG_PLIST
tools/testing/selftests/cpufreq/config | 2 +-
tools/testing/selftests/openat2/openat2_test.c | 4 ++++
tools/testing/selftests/safesetid/safesetid-test.c | 2 +-
tools/testing/selftests/sched/cs_prctl_test.c | 2 --
tools/testing/selftests/sync/config | 1 -
tools/testing/selftests/x86/mov_ss_trap.c | 4 ++--
tools/testing/selftests/x86/sigreturn.c | 7 +++----
tools/testing/selftests/x86/single_step_syscall.c | 4 ++--
tools/testing/selftests/x86/syscall_arg_fault.c | 7 +++----
9 files changed, 16 insertions(+), 17 deletions(-)
----------------------------------------------------------------
This is similar to TCP-MD5 in functionality, but it is sufficiently
different that the wire formats are incompatible. Compared to TCP-MD5,
more algorithms are supported and multiple keys can be used on the
same connection, but there is still no negotiation mechanism.
Expected use-case is protecting long-duration BGP/LDP connections
between routers using pre-shared keys.
This version is mostly functional; it incorporates ABI feedback from
previous versions and adds tests to kselftests. More discussion and
testing is required and obvious optimizations were skipped in favor of
adding functionality. Here are several flaws:
* RST and TIMEWAIT are mostly unhandled
* Locking is lockdep-clean but needs to be revised
* Sequence Number Extension not implemented
* User is responsible for ensuring keys do not overlap
* Traffic key is not cached (reducing performance)
Not all ABI suggestions were incorporated; they can be discussed further.
However, I very much want to avoid supporting algorithms beyond RFC5926.
The test suite was added under tools/testing/selftests/tcp_authopt. Tests
are written in python using pytest and scapy; they check the API in some
detail and validate packet captures. Python code already exists in the
kernel tree and in kselftests, but virtualenvs are used very little. This
test suite uses `tox` to create a private virtualenv and hide its
dependencies. Let me know if this is OK or how it can be improved.
Limited testing support is also included in nettest and fcnal-test.sh;
those tests are slow and cover much less.
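As a rough illustration of the kind of API the suite exercises, adding
a key from python might look like the sketch below. Note that the
sockopt number, struct layout and algorithm id here are placeholders
invented for illustration; the real definitions live in
include/uapi/linux/tcp.h and the linux_tcp_authopt.py helper from this
series:

    import socket
    import struct

    TCP_AUTHOPT_KEY = 9999    # placeholder, NOT the real sockopt number
    ALG_HMAC_SHA1_96 = 1      # placeholder algorithm id

    def set_tcp_authopt_key(sock, key, send_id=1, recv_id=1):
        # Hypothetical packing: flags, send_id, recv_id, alg, keylen,
        # fixed-size key buffer (struct.pack pads the key with zeros).
        buf = struct.pack('IBBBB80s', 0, send_id, recv_id,
                          ALG_HMAC_SHA1_96, len(key), key)
        sock.setsockopt(socket.IPPROTO_TCP, TCP_AUTHOPT_KEY, buf)

    # e.g. protect a BGP listener with a pre-shared key:
    # set_tcp_authopt_key(sock, b'migsecret')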
Changes for frr: https://github.com/FRRouting/frr/pull/9442
That PR was made early for ABI feedback, it has many issues.
Changes for yabgp: https://github.com/cdleonard/yabgp/commits/tcp_authopt
The patched version of yabgp can establish a BGP session protected by
TCP Authentication Option with a Cisco IOS-XR router, though it is old by now.
Changes since RFCv2:
* Removed local_id from ABI and match on send_id/recv_id/addr
* Add all relevant out-of-tree tests to tools/testing/selftests
* Return an error instead of ignoring unknown flags; hopefully this makes
it easier to extend.
* Check sk_family before __tcp_authopt_info_get_or_create in tcp_set_authopt_key
* Use sock_owned_by_me instead of WARN_ON(!lockdep_sock_is_held(sk))
* Fix some intermediate build failures reported by kbuild robot
* Improve documentation
Link: https://lore.kernel.org/netdev/cover.1628544649.git.cdleonard@gmail.com/
Changes since RFC:
* Split into per-topic commits for ease of review. The intermediate
commits compile with a few "unused function" warnings and don't do
anything useful by themselves.
* Add ABI documentation, including kernel-doc on uapi
* Fix lockdep warnings from crypto by creating pools with one shash for
each cpu
* Accept short options to setsockopt by padding with zeros; this
approach allows increasing the size of the structs in the future.
* Support for aes-128-cmac-96
* Support for binding addresses to keys in a way similar to old tcp_md5
* Add support for retrieving received keyid/rnextkeyid and controlling
the keyid/rnextkeyid being sent.
Link: https://lore.kernel.org/netdev/01383a8751e97ef826ef2adf93bfde3a08195a43.162…
Leonard Crestez (15):
tcp: authopt: Initial support and key management
docs: Add user documentation for tcp_authopt
selftests: Initial tcp_authopt test module
selftests: tcp_authopt: Initial sockopt manipulation
tcp: authopt: Add crypto initialization
tcp: authopt: Compute packet signatures
tcp: authopt: Hook into tcp core
tcp: authopt: Add snmp counters
selftests: tcp_authopt: Test key address binding
selftests: tcp_authopt: Capture and verify packets
selftests: Initial tcp_authopt support for nettest
selftests: Initial tcp_authopt support for fcnal-test
selftests: Add -t tcp_authopt option for fcnal-test.sh
tcp: authopt: Add key selection controls
selftests: tcp_authopt: Add tests for rollover
Documentation/networking/index.rst | 1 +
Documentation/networking/tcp_authopt.rst | 69 +
include/linux/tcp.h | 6 +
include/net/tcp.h | 1 +
include/net/tcp_authopt.h | 134 ++
include/uapi/linux/snmp.h | 1 +
include/uapi/linux/tcp.h | 110 ++
net/ipv4/Kconfig | 14 +
net/ipv4/Makefile | 1 +
net/ipv4/proc.c | 1 +
net/ipv4/tcp.c | 27 +
net/ipv4/tcp_authopt.c | 1168 +++++++++++++++++
net/ipv4/tcp_input.c | 17 +
net/ipv4/tcp_ipv4.c | 5 +
net/ipv4/tcp_minisocks.c | 2 +
net/ipv4/tcp_output.c | 74 +-
net/ipv6/tcp_ipv6.c | 4 +
tools/testing/selftests/net/fcnal-test.sh | 34 +
tools/testing/selftests/net/nettest.c | 34 +-
tools/testing/selftests/tcp_authopt/Makefile | 5 +
.../testing/selftests/tcp_authopt/README.rst | 15 +
tools/testing/selftests/tcp_authopt/config | 6 +
tools/testing/selftests/tcp_authopt/run.sh | 11 +
tools/testing/selftests/tcp_authopt/setup.cfg | 17 +
tools/testing/selftests/tcp_authopt/setup.py | 5 +
.../tcp_authopt/tcp_authopt_test/__init__.py | 0
.../tcp_authopt/tcp_authopt_test/conftest.py | 21 +
.../full_tcp_sniff_session.py | 53 +
.../tcp_authopt_test/linux_tcp_authopt.py | 198 +++
.../tcp_authopt_test/netns_fixture.py | 63 +
.../tcp_authopt/tcp_authopt_test/server.py | 82 ++
.../tcp_authopt/tcp_authopt_test/sockaddr.py | 101 ++
.../tcp_authopt_test/tcp_authopt_alg.py | 276 ++++
.../tcp_authopt/tcp_authopt_test/test_bind.py | 143 ++
.../tcp_authopt_test/test_rollover.py | 181 +++
.../tcp_authopt_test/test_sockopt.py | 74 ++
.../tcp_authopt_test/test_vectors.py | 359 +++++
.../tcp_authopt_test/test_verify_capture.py | 123 ++
.../tcp_authopt/tcp_authopt_test/utils.py | 154 +++
.../tcp_authopt/tcp_authopt_test/validator.py | 158 +++
40 files changed, 3746 insertions(+), 2 deletions(-)
create mode 100644 Documentation/networking/tcp_authopt.rst
create mode 100644 include/net/tcp_authopt.h
create mode 100644 net/ipv4/tcp_authopt.c
create mode 100644 tools/testing/selftests/tcp_authopt/Makefile
create mode 100644 tools/testing/selftests/tcp_authopt/README.rst
create mode 100644 tools/testing/selftests/tcp_authopt/config
create mode 100755 tools/testing/selftests/tcp_authopt/run.sh
create mode 100644 tools/testing/selftests/tcp_authopt/setup.cfg
create mode 100644 tools/testing/selftests/tcp_authopt/setup.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/__init__.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/conftest.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/full_tcp_sniff_session.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/linux_tcp_authopt.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/netns_fixture.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/server.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/sockaddr.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/tcp_authopt_alg.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/test_bind.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/test_rollover.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/test_sockopt.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/test_vectors.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/test_verify_capture.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/utils.py
create mode 100644 tools/testing/selftests/tcp_authopt/tcp_authopt_test/validator.py
base-commit: 3a62c333497b164868fdcd241842a1dd4e331825
--
2.25.1
[root@iaas-rpma gpio]# make
gcc gpio-mockup-cdev.c -o /home/lizhijian/linux/tools/testing/selftests/gpio/gpio-mockup-cdev
gpio-mockup-cdev.c: In function ‘request_line_v2’:
gpio-mockup-cdev.c:24:30: error: storage size of ‘req’ isn’t known
24 | struct gpio_v2_line_request req;
| ^~~
gpio-mockup-cdev.c:32:14: error: ‘GPIO_V2_LINE_FLAG_OUTPUT’ undeclared (first use in this function); did you mean ‘GPIOLINE_FLAG_IS_OUT’?
32 | if (flags & GPIO_V2_LINE_FLAG_OUTPUT) {
| ^~~~~~~~~~~~~~~~~~~~~~~~
Search for headers in the linux tree as other selftests, such as sched, do.
CC: Philip Li <philip.li(a)intel.com>
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
tools/testing/selftests/gpio/Makefile | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/gpio/Makefile b/tools/testing/selftests/gpio/Makefile
index 39f2bbe8dd3d..42ea7d2aa844 100644
--- a/tools/testing/selftests/gpio/Makefile
+++ b/tools/testing/selftests/gpio/Makefile
@@ -3,5 +3,6 @@
TEST_PROGS := gpio-mockup.sh
TEST_FILES := gpio-mockup-sysfs.sh
TEST_GEN_PROGS_EXTENDED := gpio-mockup-cdev
+CFLAGS += -I../../../../usr/include
include ../lib.mk
--
2.31.1
The first KernelCI hackfest[1] early June was successful in getting
a number of kernel developers to work alongside the core KernelCI
team. Test coverage was increased in particular with kselftest,
LTP, KUnit and a new test suite for libcamera. We're now improving
documentation and tooling to make it easier for anyone to get
started. Find out more about KernelCI on https://kernelci.org.
The second hackfest is scheduled for the 6th-10th September. It
should be a good opportunity to start discussing and working on
upstream kernel testing topics ahead of the Linux Plumbers
Conference[2].
Here's the project board where anyone can already add some ideas:
https://github.com/orgs/kernelci/projects/5
There is no registration system, but please reply to this email or
send a message on IRC (#kernelci libera.chat) or kernelci.slack.com
if you would like to take part so you'll get email updates and
invitations to the meetings and open hours sessions online. You
may just drop in and out at any point during the hackfest as you
see fit.
The hackfest features:
* Daily open hours online using Big Blue Button to discuss things
and get support from the KernelCI team
* KernelCI team members available across most time zones to provide
quick feedback
* A curated list of topics and a project board to help set
objectives and coordinate efforts between all contributors
As always, KernelCI is at the service of the kernel community so
please share any feedback you may have to help shape this upcoming
hackfest in the best possible way.
Thanks,
Guillaume
[1] https://foundation.kernelci.org/blog/2021/06/24/the-first-ever-kernelci-hac…
[2] https://www.linuxplumbersconf.org/event/11/page/104-accepted-microconferenc…
On 02/08/2021 10:00, Guillaume Tucker wrote:
> The first KernelCI hackfest[1] early June was successful in getting
> a number of kernel developers to work alongside the core KernelCI
> team. Test coverage was increased in particular with kselftest,
> LTP, KUnit and a new test suite for libcamera. We're now improving
> documentation and tooling to make it easier for anyone to get
> started. Find out more about KernelCI on https://kernelci.org.
>
> The second hackfest is scheduled for the 6th-10th September. It
> should be a good opportunity to start discussing and working on
> upstream kernel testing topics ahead of the Linux Plumbers
> Conference[2].
Please find below some extra information for the KernelCI
Hackfest which is taking place next week. We're expecting at
least some contributors from the Civil Infrastructure Platform
project, the Google Chrome OS kernel team, Collabora kernel
developers and a few more from the wider Linux kernel community.
If you need any direct support, please reply to this email or ask
on kernelci.slack.com or IRC #kernelci (libera.chat).
> Here's the project board where anyone can already add some ideas:
>
> https://github.com/orgs/kernelci/projects/5
In order to add an issue to the workboard, please first create
one in a KernelCI GitHub repository such as kernelci-core:
https://github.com/kernelci/kernelci-core/issues
Each contributor to the hackfest should be added to the
KernelCI "hackers" team, which has permission to edit the
workboard. If you aren't part of this team yet, please ask and
you'll be invited.
Note: Having a GitHub account is not mandatory for taking part in
the hackfest. It's mainly there to facilitate coordination, even
though it is required in order to contribute to the KernelCI
GitHub repositories. Contributions as part of the hackfest may
also be in the kernel tree such as improvements to kselftest,
KUnit or bug fixes, or other test suites such as LTP etc.
> The hackfest features:
>
> * Daily open hours online using Big Blue Button to discuss things
> and get support from the KernelCI team
>
> * KernelCI team members available across most time zones to provide
> quick feedback
>
> * A curated list of topics and a project board to help set
> objectives and coordinate efforts between all contributors
Please see the table below with the proposed daily open hours to
accommodate most time zones:
Region         Zone     Time 1       Time 2
East Asia      GMT+10   17:00-19:00  03:00-05:00
Europe         GMT+2    09:00-11:00  19:00-21:00
UTC            GMT+0    07:00-09:00  17:00-19:00
West America   GMT-7    00:00-02:00  10:00-12:00
They will be held as a Big Blue Button virtual conference with
the same URL as the last hackfest. It's not being shared
publicly to avoid any potential abuse, so please ask if you don't
have it already.
On Monday, the focus should be put on getting started and
reviewing the backlog on the hackfest workboard to distribute
things among people or help new contributors find topics suitable
for them. Open hours are otherwise opportunities to get more
direct support from the KernelCI team or discuss any topic.
See you there!
Best wishes,
Guillaume
> [1] https://foundation.kernelci.org/blog/2021/06/24/the-first-ever-kernelci-hac…
> [2] https://www.linuxplumbersconf.org/event/11/page/104-accepted-microconferenc…
Hi Linus,
Please pull the following KUnit update for Linux 5.15-rc1.
This KUnit update for Linux 5.15-rc1 adds new features and tests:
tool:
-- support for --kernel_args to allow setting module params
-- support for --raw_output option to show just the kunit output during
make
tests:
-- KUnit tests for checksums and timestamps
-- Print test statistics on failure
-- Integrates UBSAN into the KUnit testing framework.
It fails KUnit tests whenever it reports undefined behavior.
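For example, the two new tool options might be combined on the command
line like this (the module param shown is purely hypothetical):
$ ./tools/testing/kunit/kunit.py run --kernel_args=kunit_example.param=1 --raw_output=kunit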
diff is attached.
thanks,
-- Shuah
----------------------------------------------------------------
The following changes since commit 2734d6c1b1a089fb593ef6a23d4b70903526fe0c:
Linux 5.14-rc2 (2021-07-18 14:13:49 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest tags/linux-kselftest-kunit-5.15-rc1
for you to fetch changes up to acd8e8407b8fcc3229d6d8558cac338bea801aed:
kunit: Print test statistics on failure (2021-08-13 13:38:31 -0600)
----------------------------------------------------------------
linux-kselftest-kunit-5.15-rc1
This KUnit update for Linux 5.15-rc1 adds new features and tests:
tool:
-- support for --kernel_args to allow setting module params
-- support for --raw_output option to show just the kunit output during
make
tests:
-- KUnit tests for checksums and timestamps
-- Print test statistics on failure
-- Integrates UBSAN into the KUnit testing framework.
It fails KUnit tests whenever it reports undefined behavior.
----------------------------------------------------------------
Daniel Latypov (2):
kunit: tool: add --kernel_args to allow setting module params
kunit: tool: make --raw_output support only showing kunit output
David Gow (2):
fat: Add KUnit tests for checksums and timestamps
kunit: Print test statistics on failure
Uriel Guajardo (1):
kunit: ubsan integration
Documentation/dev-tools/kunit/kunit-tool.rst | 9 +-
Documentation/dev-tools/kunit/running_tips.rst | 10 ++
fs/fat/.kunitconfig | 5 +
fs/fat/Kconfig | 14 +-
fs/fat/Makefile | 2 +
fs/fat/fat_test.c | 196 +++++++++++++++++++++++++
fs/fat/misc.c | 3 +
lib/kunit/test.c | 109 ++++++++++++++
lib/ubsan.c | 3 +
tools/testing/kunit/kunit.py | 36 +++--
tools/testing/kunit/kunit_parser.py | 6 +-
tools/testing/kunit/kunit_tool_test.py | 29 +++-
12 files changed, 398 insertions(+), 24 deletions(-)
create mode 100644 fs/fat/.kunitconfig
create mode 100644 fs/fat/fat_test.c
----------------------------------------------------------------
Patch 1 fixes a KVM+rseq bug where KVM's handling of TIF_NOTIFY_RESUME,
e.g. for task migration, clears the flag without informing rseq and leads
to stale data in userspace's rseq struct.
Patch 2 is a cleanup to try and make future bugs less likely. It's also
a baby step towards moving and renaming tracehook_notify_resume() since
it has nothing to do with tracing.
Patch 3 is a fix/cleanup to stop overriding x86's unistd_{32,64}.h when
the include path (intentionally) omits tools' uapi headers. KVM's
selftests do exactly that so that they can pick up the uapi headers from
the installed kernel headers, and still use various tools/ headers that
mirror kernel code, e.g. linux/types.h. This allows the new test in
patch 4 to reference __NR_rseq without having to manually define it.
Patch 4 is a regression test for the KVM+rseq bug.
Patch 5 is a cleanup made possible by patch 3.
Based on commit 835d31d319d9 ("Merge tag 'media/v5.15-1' of ...").
v3:
- Collect Ack/Review. [Mathieu, Ben]
- Add explicit smp_wmb() instead of relying on atomic_inc() to do a full
barrier. [Mathieu]
- Add lots and lots of comments in the selftest, especially around why
the migration thread needs a udelay(). [Mathieu]
- Delay between 1us and 10us to reduce the odds of having a hard
dependency on arch/kernel behavior. [Mathieu]
- Dropped an s390 change in patch 2 after a rebase to upstream master.
v2:
- https://lkml.kernel.org/r/20210820225002.310652-1-seanjc@google.com
- Don't touch rseq_cs when handling KVM case so that rseq_syscall() will
still detect a naughty userspace. [Mathieu]
- Use a sequence counter + retry in the test to ensure the process isn't
migrated between sched_getcpu() and reading rseq.cpu_id, i.e. to
avoid a flaky test. [Mathieu]
- Add Mathieu's ack for patch 2.
- Add more comments in the test.
v1: https://lkml.kernel.org/r/20210818001210.4073390-1-seanjc@google.com
Sean Christopherson (5):
KVM: rseq: Update rseq when processing NOTIFY_RESUME on xfer to KVM
guest
entry: rseq: Call rseq_handle_notify_resume() in
tracehook_notify_resume()
tools: Move x86 syscall number fallbacks to .../uapi/
KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration
bugs
KVM: selftests: Remove __NR_userfaultfd syscall fallback
arch/arm/kernel/signal.c | 1 -
arch/arm64/kernel/signal.c | 1 -
arch/csky/kernel/signal.c | 4 +-
arch/mips/kernel/signal.c | 4 +-
arch/powerpc/kernel/signal.c | 4 +-
include/linux/tracehook.h | 2 +
kernel/entry/common.c | 4 +-
kernel/rseq.c | 14 +-
.../x86/include/{ => uapi}/asm/unistd_32.h | 0
.../x86/include/{ => uapi}/asm/unistd_64.h | 3 -
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 3 +
tools/testing/selftests/kvm/rseq_test.c | 236 ++++++++++++++++++
13 files changed, 257 insertions(+), 20 deletions(-)
rename tools/arch/x86/include/{ => uapi}/asm/unistd_32.h (100%)
rename tools/arch/x86/include/{ => uapi}/asm/unistd_64.h (83%)
create mode 100644 tools/testing/selftests/kvm/rseq_test.c
--
2.33.0.153.gba50c8fa24-goog
Update kunit_parser to improve compatibility with the KTAP
specification, including arbitrarily nested tests. This patch
accomplishes three major changes:
- Use a general Test object to represent all tests rather than TestCase
and TestSuite objects. This allows for easier implementation of arbitrary
levels of nested tests and promotes the idea that both test suites and test
cases are tests.
- Print errors incrementally rather than all at once after parsing
finishes, to maximize the information given to the user when the
parser is fed invalid input and to increase the helpfulness of the
timestamps printed along the way. Note that kunit.py parse does not
print incrementally yet; however, this change brings us closer to
that feature.
- Increase compatibility with different input formats. Arbitrary levels
of nested tests are supported, and test cases and test suites may now
appear at the same level of testing.
This patch now implements the KTAP specification as described here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqa….
This patch adjusts the kunit_tool_test.py file to check for
the correct outputs from the new parser and adds a new test to check
the parsing for a KTAP result log with correct format for multiple nested
subtests (test_is_test_passed-all_passed_nested.log).
This patch also alters the kunit_json.py file to allow for arbitrarily
nested tests.
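For reference, a nested KTAP log of the kind the new parser accepts
might look like the following sketch (hand-written to follow the
specification linked above, not copied from a real run; nesting may go
arbitrarily deep):

    TAP version 14
    1..1
        # Subtest: example
        1..2
        ok 1 - example_simple_test
        ok 2 - example_skip_test # SKIP this test should be skipped
    ok 1 - example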
Signed-off-by: Rae Moar <rmoar(a)google.com>
Reviewed-by: Brendan Higgins <brendanhiggins(a)google.com>
---
Change log from v2:
https://lore.kernel.org/linux-kselftest/20210826195505.3066755-1-rmoar@goog…
- Fixes bug of type disagreement in kunit_json.py for build_dir
- Removes raw_output()
- Changes docstrings in kunit_parser.py (class docstring, LineStream
docstrings, add_error(), total(), get_status(), all parsing methods)
- Fixes bug of not printing the diagnostic log when the input runs out of lines
- Sets default status of all tests to TEST_CRASHED
- Adds and prints empty tests with crashed status in case of missing
tests
- Prints 'subtest' in instance of 1 subtest instead of 'subtests'
- Includes checking for 'BUG:' message in search of crash messages in
log (note that parse_crash_in_log method could be removed but would
require deleting tests in kunit_tool_test.py that include the crash
message that is no longer used. If removed, parser would still print
log in cases of test crashed or failure, which would now include
missing subtests)
- Fixes bug of including directives (other than SKIP) in test name
when matching name in result line for subtests
---
Change log from v1:
https://lore.kernel.org/linux-kselftest/20210820200032.2178134-1-rmoar@goog…
- Rebase onto kselftest/kunit branch
- Add tests to kunit_tool_test.py to check that the parser correctly
strips hyphens, produces correct json objects with nested tests,
parses kselftest TAP output, and deals with a missing test plan.
- Fix bug to correctly match the test name when the test plan is missing.
- Fix bug in kunit_tool_test.py pointed out by Daniel where it was not
correctly checking for a proper match to the '0 tests run!' error
message. Reverts changes back to original.
- A few minor changes to commit message using Daniel's comments.
- Change docstrings using Daniel's comments to reduce:
- Shortens some docstrings to be one-line or just description if it is
self explanatory.
- Remove explicit respecification of types of parameters and returns
because this is already specified in the function annotations. However,
some descriptions of the parameters and returns remain and some contain
the type for context. Additionally, the types of public attributes of
classes remain.
- Remove any documentation of 'Return: None'
- Remove docstrings of helper methods within other methods
---
tools/testing/kunit/kunit_json.py | 56 +-
tools/testing/kunit/kunit_parser.py | 1022 ++++++++++++-----
tools/testing/kunit/kunit_tool_test.py | 132 ++-
.../test_is_test_passed-all_passed_nested.log | 34 +
.../test_is_test_passed-kselftest.log | 14 +
.../test_is_test_passed-missing_plan.log | 31 +
.../kunit/test_data/test_strip_hyphen.log | 16 +
7 files changed, 925 insertions(+), 380 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-kselftest.log
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-missing_plan.log
create mode 100644 tools/testing/kunit/test_data/test_strip_hyphen.log
diff --git a/tools/testing/kunit/kunit_json.py b/tools/testing/kunit/kunit_json.py
index f5cca5c38cac..746bec72b9ac 100644
--- a/tools/testing/kunit/kunit_json.py
+++ b/tools/testing/kunit/kunit_json.py
@@ -11,47 +11,47 @@ import os
import kunit_parser
-from kunit_parser import TestStatus
-
-def get_json_result(test_result, def_config, build_dir, json_path) -> str:
- sub_groups = []
-
- # Each test suite is mapped to a KernelCI sub_group
- for test_suite in test_result.suites:
- sub_group = {
- "name": test_suite.name,
- "arch": "UM",
- "defconfig": def_config,
- "build_environment": build_dir,
- "test_cases": [],
- "lab_name": None,
- "kernel": None,
- "job": None,
- "git_branch": "kselftest",
- }
- test_cases = []
- # TODO: Add attachments attribute in test_case with detailed
- # failure message, see https://api.kernelci.org/schema-test-case.html#get
- for case in test_suite.cases:
- test_case = {"name": case.name, "status": "FAIL"}
- if case.status == TestStatus.SUCCESS:
+from kunit_parser import Test, TestResult, TestStatus
+from typing import Any, Dict, Optional
+
+JsonObj = Dict[str, Any]
+
+def _get_group_json(test: Test, def_config: str,
+ build_dir: Optional[str]) -> JsonObj:
+ sub_groups = [] # List[JsonObj]
+ test_cases = [] # List[JsonObj]
+
+ for subtest in test.subtests:
+ if len(subtest.subtests):
+ sub_group = _get_group_json(subtest, def_config,
+ build_dir)
+ sub_groups.append(sub_group)
+ else:
+ test_case = {"name": subtest.name, "status": "FAIL"}
+ if subtest.status == TestStatus.SUCCESS:
test_case["status"] = "PASS"
- elif case.status == TestStatus.TEST_CRASHED:
+ elif subtest.status == TestStatus.TEST_CRASHED:
test_case["status"] = "ERROR"
test_cases.append(test_case)
- sub_group["test_cases"] = test_cases
- sub_groups.append(sub_group)
+
test_group = {
- "name": "KUnit Test Group",
+ "name": test.name,
"arch": "UM",
"defconfig": def_config,
"build_environment": build_dir,
"sub_groups": sub_groups,
+ "test_cases": test_cases,
"lab_name": None,
"kernel": None,
"job": None,
"git_branch": "kselftest",
}
+ return test_group
+
+def get_json_result(test_result: TestResult, def_config: str,
+ build_dir: Optional[str], json_path: str) -> str:
+ test_group = _get_group_json(test_result.test, def_config, build_dir)
+ test_group["name"] = "KUnit Test Group"
json_obj = json.dumps(test_group, indent=4)
if json_path != 'stdout':
with open(json_path, 'w') as result_path:
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 6310a641b151..f1b28def3e78 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -1,11 +1,15 @@
# SPDX-License-Identifier: GPL-2.0
#
-# Parses test results from a kernel dmesg log.
+# Parses KTAP test results from a kernel dmesg log and incrementally prints
+# results with reader-friendly format. Stores and returns test results in a
+# Test object.
#
# Copyright (C) 2019, Google LLC.
# Author: Felix Guo <felixguoxiuping(a)gmail.com>
# Author: Brendan Higgins <brendanhiggins(a)google.com>
+# Author: Rae Moar <rmoar(a)google.com>
+from __future__ import annotations
import re
from collections import namedtuple
@@ -14,33 +18,52 @@ from enum import Enum, auto
from functools import reduce
from typing import Iterable, Iterator, List, Optional, Tuple
-TestResult = namedtuple('TestResult', ['status','suites','log'])
-
-class TestSuite(object):
+TestResult = namedtuple('TestResult', ['status','test','log'])
+
+class Test(object):
+ """
+ A class to represent a test parsed from KTAP results. All KTAP
+ results within a test log are stored in a main Test object as
+ subtests.
+
+ Attributes:
+ status : TestStatus - status of the test
+ name : str - name of the test
+ expected_count : int - expected number of subtests (0 if single
+ test case and None if unknown expected number of subtests)
+ subtests : List[Test] - list of subtests
+ log : List[str] - log of KTAP lines that correspond to the test
+ counts : TestCounts - counts of the test statuses and errors of
+ subtests or of the test itself if the test is a single
+ test case.
+ """
def __init__(self) -> None:
- self.status = TestStatus.SUCCESS
- self.name = ''
- self.cases = [] # type: List[TestCase]
-
- def __str__(self) -> str:
- return 'TestSuite(' + str(self.status) + ',' + self.name + ',' + str(self.cases) + ')'
-
- def __repr__(self) -> str:
- return str(self)
-
-class TestCase(object):
- def __init__(self) -> None:
- self.status = TestStatus.SUCCESS
+ """Creates Test object with default attributes."""
+ self.status = TestStatus.TEST_CRASHED
self.name = ''
+ self.expected_count = 0 # type: Optional[int]
+ self.subtests = [] # type: List[Test]
self.log = [] # type: List[str]
+ self.counts = TestCounts()
def __str__(self) -> str:
- return 'TestCase(' + str(self.status) + ',' + self.name + ',' + str(self.log) + ')'
+ """Returns string representation of a Test class object."""
+ return ('Test(' + str(self.status) + ', ' + self.name +
+ ', ' + str(self.expected_count) + ', ' +
+ str(self.subtests) + ', ' + str(self.log) + ', ' +
+ str(self.counts) + ')')
def __repr__(self) -> str:
+ """Returns string representation of a Test class object."""
return str(self)
+ def add_error(self, error_message: str) -> None:
+ """Records an error that occurred while parsing this test."""
+ self.counts.errors += 1
+ print_error('Test ' + self.name + ': ' + error_message)
+
class TestStatus(Enum):
+ """An enumeration class to represent the status of a test."""
SUCCESS = auto()
FAILURE = auto()
SKIPPED = auto()
@@ -48,381 +71,754 @@ class TestStatus(Enum):
NO_TESTS = auto()
FAILURE_TO_PARSE_TESTS = auto()
+class TestCounts:
+ """
+ Tracks the counts of statuses of all test cases and any errors within
+ a Test.
+
+ Attributes:
+ passed : int - the number of tests that have passed
+ failed : int - the number of tests that have failed
+ crashed : int - the number of tests that have crashed
+ skipped : int - the number of tests that have skipped
+ errors : int - the number of errors in the test and subtests
+ """
+ def __init__(self):
+ """Creates TestCounts object with counts of all test
+ statuses and test errors set to 0.
+ """
+ self.passed = 0
+ self.failed = 0
+ self.crashed = 0
+ self.skipped = 0
+ self.errors = 0
+
+ def __str__(self) -> str:
+ """Returns the string representation of a TestCounts object.
+ """
+ return ('Passed: ' + str(self.passed) +
+ ', Failed: ' + str(self.failed) +
+ ', Crashed: ' + str(self.crashed) +
+ ', Skipped: ' + str(self.skipped) +
+ ', Errors: ' + str(self.errors))
+
+ def total(self) -> int:
+ """Returns the total number of test cases within a test
+ object, where a test case is a test with no subtests.
+ """
+ return (self.passed + self.failed + self.crashed +
+ self.skipped)
+
+ def add_subtest_counts(self, counts: TestCounts) -> None:
+ """
+ Adds the counts of another TestCounts object to the current
+ TestCounts object. Used to add the counts of a subtest to the
+ parent test.
+
+ Parameters:
+ counts - a different TestCounts object whose counts
+ will be added to the counts of the TestCounts object
+ """
+ self.passed += counts.passed
+ self.failed += counts.failed
+ self.crashed += counts.crashed
+ self.skipped += counts.skipped
+ self.errors += counts.errors
+
+ def get_status(self) -> TestStatus:
+ """Returns the aggregated status of a Test using test
+ counts.
+ """
+ if self.crashed:
+ # If one of the subtests crash, the expected status
+ # of the Test is crashed.
+ return TestStatus.TEST_CRASHED
+ elif self.failed:
+ # Otherwise if one of the subtests fail, the
+ # expected status of the Test is failed.
+ return TestStatus.FAILURE
+ elif self.passed:
+ # Otherwise if one of the subtests pass, the
+ # expected status of the Test is passed.
+ return TestStatus.SUCCESS
+ else:
+ # Finally, if none of the subtests have failed,
+ # crashed, or passed, the expected status of the
+ # Test is skipped.
+ return TestStatus.SKIPPED
+
+ def add_status(self, status: TestStatus) -> None:
+ """
+ Increments count of inputted status.
+
+ Parameters:
+ status - status to be added to the TestCounts object
+ """
+ if status == TestStatus.SUCCESS or \
+ status == TestStatus.NO_TESTS:
+ # if status is NO_TESTS the most appropriate
+ # attribute to increment is passed because
+ # the test did not fail, crash or get skipped.
+ self.passed += 1
+ elif status == TestStatus.FAILURE:
+ self.failed += 1
+ elif status == TestStatus.SKIPPED:
+ self.skipped += 1
+ else:
+ self.crashed += 1
+
class LineStream:
- """Provides a peek()/pop() interface over an iterator of (line#, text)."""
+ """
+ A class to represent the lines of kernel output.
+ Provides a peek()/pop() interface over an iterator of
+ (line#, text).
+ """
_lines: Iterator[Tuple[int, str]]
_next: Tuple[int, str]
_done: bool
def __init__(self, lines: Iterator[Tuple[int, str]]):
+ """Creates a new LineStream that wraps the given iterator."""
self._lines = lines
self._done = False
self._next = (0, '')
self._get_next()
def _get_next(self) -> None:
+		"""Advances the LineStream to the next line or sets the _done
+ attribute if the LineStream has reached the end of the lines.
+ """
try:
self._next = next(self._lines)
except StopIteration:
self._done = True
def peek(self) -> str:
+ """Returns the current line, without advancing the LineStream.
+ """
return self._next[1]
def pop(self) -> str:
+ """Returns the current line and advances the LineStream to
+ the next line.
+ """
n = self._next
self._get_next()
return n[1]
def __bool__(self) -> bool:
+ """Returns True if stream has more lines."""
return not self._done
# Only used by kunit_tool_test.py.
def __iter__(self) -> Iterator[str]:
+ """Empties all lines stored in LineStream object into
+ Iterator object and returns the Iterator object.
+ """
while bool(self):
yield self.pop()
def line_number(self) -> int:
+ """Returns the line number of the current line."""
return self._next[0]
-kunit_start_re = re.compile(r'TAP version [0-9]+$')
-kunit_end_re = re.compile('(List of all partitions:|'
- 'Kernel panic - not syncing: VFS:|reboot: System halted)')
+# Parsing helper methods:
+
+KTAP_START = re.compile(r'KTAP version ([0-9]+)$')
+TAP_START = re.compile(r'TAP version ([0-9]+)$')
+KTAP_END = re.compile('(List of all partitions:|'
+ 'Kernel panic - not syncing: VFS:|reboot: System halted)')
def extract_tap_lines(kernel_output: Iterable[str]) -> LineStream:
- def isolate_kunit_output(kernel_output: Iterable[str]) -> Iterator[Tuple[int, str]]:
+ """Extracts KTAP lines from inputted kernel output in LineStream
+ object."""
+ def isolate_ktap_output(kernel_output: Iterable[str]) \
+ -> Iterator[Tuple[int, str]]:
line_num = 0
started = False
for line in kernel_output:
line_num += 1
- line = line.rstrip() # line always has a trailing \n
- if kunit_start_re.search(line):
+ line = line.rstrip() # remove trailing \n
+ if not started and KTAP_START.search(line):
+ # start extracting KTAP lines and set prefix
+ # to number of characters before version line
+ prefix_len = len(
+ line.split('KTAP version')[0])
+ started = True
+ yield line_num, line[prefix_len:]
+ elif not started and TAP_START.search(line):
+ # start extracting KTAP lines and set prefix
+ # to number of characters before version line
prefix_len = len(line.split('TAP version')[0])
started = True
yield line_num, line[prefix_len:]
- elif kunit_end_re.search(line):
+ elif started and KTAP_END.search(line):
+ # stop extracting KTAP lines
break
elif started:
- yield line_num, line[prefix_len:]
- return LineStream(lines=isolate_kunit_output(kernel_output))
-
-DIVIDER = '=' * 60
-
-RESET = '\033[0;0m'
-
-def red(text) -> str:
- return '\033[1;31m' + text + RESET
-
-def yellow(text) -> str:
- return '\033[1;33m' + text + RESET
-
-def green(text) -> str:
- return '\033[1;32m' + text + RESET
-
-def print_with_timestamp(message) -> None:
- print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
+				# remove prefix and any indentation and yield
+ # line with line number
+ line = line[prefix_len:].lstrip()
+ yield line_num, line
+ return LineStream(lines=isolate_ktap_output(kernel_output))
+
+KTAP_VERSIONS = [1]
+TAP_VERSIONS = [13, 14]
+
+def check_version(version_num: int, accepted_versions: List[int],
+ version_type: str, test: Test) -> None:
+ """
+ Adds error to test object if version number is too high or too
+ low.
+
+ Parameters:
+ version_num - The inputted version number from the parsed KTAP or TAP
+ header line
+	accepted_versions - List of accepted KTAP or TAP versions
+ version_type - 'KTAP' or 'TAP' depending on the type of
+ version line.
+ test - Test object for current test being parsed
+ """
+ if version_num < min(accepted_versions):
+ test.add_error(version_type +
+ ' version lower than expected!')
+ elif version_num > max(accepted_versions):
+ test.add_error(
+ version_type + ' version higher than expected!')
+
+def parse_ktap_header(lines: LineStream, test: Test) -> bool:
+ """
+ Parses KTAP/TAP header line and checks version number.
+ Returns False if fails to parse KTAP/TAP header line.
+
+ Accepted formats:
+ - 'KTAP version [version number]'
+ - 'TAP version [version number]'
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+ test - Test object for current test being parsed
+
+ Return:
+ True if successfully parsed KTAP/TAP header line
+ """
+ ktap_match = KTAP_START.match(lines.peek())
+ tap_match = TAP_START.match(lines.peek())
+ if ktap_match:
+ version_num = int(ktap_match.group(1))
+ check_version(version_num, KTAP_VERSIONS, 'KTAP', test)
+ elif tap_match:
+ version_num = int(tap_match.group(1))
+ check_version(version_num, TAP_VERSIONS, 'TAP', test)
+ else:
+ return False
+ test.log.append(lines.pop())
+ return True
-def format_suite_divider(message) -> str:
- return '======== ' + message + ' ========'
+TEST_HEADER = re.compile(r'^# Subtest: (.*)$')
-def print_suite_divider(message) -> None:
- print_with_timestamp(DIVIDER)
- print_with_timestamp(format_suite_divider(message))
+def parse_test_header(lines: LineStream, test: Test) -> bool:
+ """
+ Parses test header and stores test name in test object.
+ Returns False if fails to parse test header line.
-def print_log(log) -> None:
- for m in log:
- print_with_timestamp(m)
+ Accepted format:
+ - '# Subtest: [test name]'
-TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*# (Subtest:|.*: kunit test case crashed!)).*$')
+ Parameters:
+ lines - LineStream of ktap output to parse
+ test - Test object for current test being parsed
-def consume_non_diagnostic(lines: LineStream) -> None:
- while lines and not TAP_ENTRIES.match(lines.peek()):
- lines.pop()
-
-def save_non_diagnostic(lines: LineStream, test_case: TestCase) -> None:
- while lines and not TAP_ENTRIES.match(lines.peek()):
- test_case.log.append(lines.peek())
- lines.pop()
+ Return:
+ True if successfully parsed test header line
+ """
+ match = TEST_HEADER.match(lines.peek())
+ if not match:
+ return False
+ test.log.append(lines.pop())
+ test.name = match.group(1)
+ return True
-OkNotOkResult = namedtuple('OkNotOkResult', ['is_ok','description', 'text'])
+TEST_PLAN = re.compile(r'1\.\.([0-9]+)')
-OK_NOT_OK_SKIP = re.compile(r'^[\s]*(ok|not ok) [0-9]+ - (.*) # SKIP(.*)$')
+def parse_test_plan(lines: LineStream, test: Test) -> bool:
+ """
+ Parses test plan line and stores the expected number of subtests in
+ test object. Reports an error if expected count is 0.
+ Returns False and reports missing test plan error if fails to parse
+ test plan.
-OK_NOT_OK_SUBTEST = re.compile(r'^[\s]+(ok|not ok) [0-9]+ - (.*)$')
+ Accepted format:
+ - '1..[number of subtests]'
-OK_NOT_OK_MODULE = re.compile(r'^(ok|not ok) ([0-9]+) - (.*)$')
+ Parameters:
+ lines - LineStream of ktap output to parse
+ test - Test object for current test being parsed
-def parse_ok_not_ok_test_case(lines: LineStream, test_case: TestCase) -> bool:
- save_non_diagnostic(lines, test_case)
- if not lines:
- test_case.status = TestStatus.TEST_CRASHED
- return True
- line = lines.peek()
- match = OK_NOT_OK_SUBTEST.match(line)
- while not match and lines:
- line = lines.pop()
- match = OK_NOT_OK_SUBTEST.match(line)
- if match:
- test_case.log.append(lines.pop())
- test_case.name = match.group(2)
- skip_match = OK_NOT_OK_SKIP.match(line)
- if skip_match:
- test_case.status = TestStatus.SKIPPED
- return True
- if test_case.status == TestStatus.TEST_CRASHED:
- return True
- if match.group(1) == 'ok':
- test_case.status = TestStatus.SUCCESS
- else:
- test_case.status = TestStatus.FAILURE
- return True
- else:
+ Return:
+ True if successfully parsed test plan line
+ """
+ match = TEST_PLAN.match(lines.peek())
+ if not match:
+ test.expected_count = None
+ test.add_error('missing plan line!')
return False
-
-SUBTEST_DIAGNOSTIC = re.compile(r'^[\s]+# (.*)$')
-DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^[\s]+# .*?: kunit test case crashed!$')
-
-def parse_diagnostic(lines: LineStream, test_case: TestCase) -> bool:
- save_non_diagnostic(lines, test_case)
- if not lines:
+ test.log.append(lines.pop())
+ expected_count = int(match.group(1))
+ test.expected_count = expected_count
+ if expected_count == 0:
+ test.status = TestStatus.NO_TESTS
+ test.add_error('0 tests run!')
+ return True
+
+TEST_RESULT = re.compile(r'^(ok|not ok) ([0-9]+) (- )?([^#]*)( # .*)?$')
+
+TEST_RESULT_SKIP = re.compile(r'^(ok|not ok) ([0-9]+) (- )?(.*) # SKIP(.*)$')
+
+def peek_test_name_match(lines: LineStream, test: Test) -> bool:
+ """
+ Matches current line with the format of a test result line and checks
+ if the name matches the name of the current test.
+ Returns False if fails to match format or name.
+
+ Accepted format:
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+ test - Test object for current test being parsed
+
+ Return:
+ True if matched a test result line and the name matching the
+ expected test name
+ """
+ line = lines.peek()
+ match = TEST_RESULT.match(line)
+ if not match:
return False
+ name = match.group(4)
+ return (name == test.name)
+
+def parse_test_result(lines: LineStream, test: Test,
+ expected_num: int) -> bool:
+ """
+ Parses test result line and stores the status and name in the test
+ object. Reports an error if the test number does not match expected
+ test number.
+ Returns False if fails to parse test result line.
+
+	Note that SKIP is the only directive that causes a
+ change in status.
+
+ Accepted format:
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+ test - Test object for current test being parsed
+ expected_num - expected test number for current test
+
+ Return:
+ True if successfully parsed a test result line.
+ """
line = lines.peek()
- match = SUBTEST_DIAGNOSTIC.match(line)
- if match:
- test_case.log.append(lines.pop())
- crash_match = DIAGNOSTIC_CRASH_MESSAGE.match(line)
- if crash_match:
- test_case.status = TestStatus.TEST_CRASHED
- return True
- else:
+ match = TEST_RESULT.match(line)
+ skip_match = TEST_RESULT_SKIP.match(line)
+
+ # Check if line matches test result line format
+ if not match:
return False
+ test.log.append(lines.pop())
-def parse_test_case(lines: LineStream) -> Optional[TestCase]:
- test_case = TestCase()
- save_non_diagnostic(lines, test_case)
- while parse_diagnostic(lines, test_case):
- pass
- if parse_ok_not_ok_test_case(lines, test_case):
- return test_case
+ # Set name of test object
+ if skip_match:
+ test.name = skip_match.group(4)
else:
- return None
-
-SUBTEST_HEADER = re.compile(r'^[\s]+# Subtest: (.*)$')
-
-def parse_subtest_header(lines: LineStream) -> Optional[str]:
- consume_non_diagnostic(lines)
- if not lines:
- return None
- match = SUBTEST_HEADER.match(lines.peek())
- if match:
- lines.pop()
- return match.group(1)
+ test.name = match.group(4)
+
+ # Check test num
+ num = int(match.group(2))
+ if num != expected_num:
+ test.add_error('Expected test number ' +
+ str(expected_num) + ' but found ' + str(num))
+
+ # Set status of test object
+ status = match.group(1)
+ if skip_match:
+ test.status = TestStatus.SKIPPED
+ elif status == 'ok':
+ test.status = TestStatus.SUCCESS
else:
- return None
+ test.status = TestStatus.FAILURE
+ return True
+
+def parse_diagnostic(lines: LineStream) -> List[str]:
+ """
+ Parse lines that do not match the format of a test result line or
+ test header line and returns them in list.
+
+ Line formats that are not parsed:
+ - '# Subtest: [test name]'
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+
+ Return:
+ Log of diagnostic lines
+ """
+ log = [] # type: List[str]
+ while lines and not TEST_RESULT.match(lines.peek()) and not \
+ TEST_HEADER.match(lines.peek()):
+ log.append(lines.pop())
+ return log
+
+DIAGNOSTIC_CRASH_MESSAGE = re.compile(
+ r'^(BUG:|# .*?: kunit test case crashed!$)')
+
+def parse_crash_in_log(test: Test) -> bool:
+ """
+ Iterate through the lines of the log to parse for crash message.
+ If crash message found, set status to crashed and return True.
+ Otherwise return False.
+
+ Parameters:
+ test - Test object for current test being parsed
+
+ Return:
+ True if crash message found in log
+ """
+ for line in test.log:
+ if DIAGNOSTIC_CRASH_MESSAGE.match(line):
+ test.status = TestStatus.TEST_CRASHED
+ return True
+ return False
-SUBTEST_PLAN = re.compile(r'[\s]+[0-9]+\.\.([0-9]+)')
-def parse_subtest_plan(lines: LineStream) -> Optional[int]:
- consume_non_diagnostic(lines)
- match = SUBTEST_PLAN.match(lines.peek())
- if match:
- lines.pop()
- return int(match.group(1))
- else:
- return None
-
-def max_status(left: TestStatus, right: TestStatus) -> TestStatus:
- if left == right:
- return left
- elif left == TestStatus.TEST_CRASHED or right == TestStatus.TEST_CRASHED:
- return TestStatus.TEST_CRASHED
- elif left == TestStatus.FAILURE or right == TestStatus.FAILURE:
- return TestStatus.FAILURE
- elif left == TestStatus.SKIPPED:
- return right
- else:
- return left
+# Printing helper methods:
-def parse_ok_not_ok_test_suite(lines: LineStream,
- test_suite: TestSuite,
- expected_suite_index: int) -> bool:
- consume_non_diagnostic(lines)
- if not lines:
- test_suite.status = TestStatus.TEST_CRASHED
- return False
- line = lines.peek()
- match = OK_NOT_OK_MODULE.match(line)
- if match:
- lines.pop()
- if match.group(1) == 'ok':
- test_suite.status = TestStatus.SUCCESS
- else:
- test_suite.status = TestStatus.FAILURE
- skip_match = OK_NOT_OK_SKIP.match(line)
- if skip_match:
- test_suite.status = TestStatus.SKIPPED
- suite_index = int(match.group(2))
- if suite_index != expected_suite_index:
- print_with_timestamp(
- red('[ERROR] ') + 'expected_suite_index ' +
- str(expected_suite_index) + ', but got ' +
- str(suite_index))
- return True
- else:
- return False
+DIVIDER = '=' * 60
-def bubble_up_errors(status_list: Iterable[TestStatus]) -> TestStatus:
- return reduce(max_status, status_list, TestStatus.SKIPPED)
+RESET = '\033[0;0m'
-def bubble_up_test_case_errors(test_suite: TestSuite) -> TestStatus:
- max_test_case_status = bubble_up_errors(x.status for x in test_suite.cases)
- return max_status(max_test_case_status, test_suite.status)
+def red(text: str) -> str:
+ """Returns inputted string with red color code."""
+ return '\033[1;31m' + text + RESET
-def parse_test_suite(lines: LineStream, expected_suite_index: int) -> Optional[TestSuite]:
- if not lines:
- return None
- consume_non_diagnostic(lines)
- test_suite = TestSuite()
- test_suite.status = TestStatus.SUCCESS
- name = parse_subtest_header(lines)
- if not name:
- return None
- test_suite.name = name
- expected_test_case_num = parse_subtest_plan(lines)
- if expected_test_case_num is None:
- return None
- while expected_test_case_num > 0:
- test_case = parse_test_case(lines)
- if not test_case:
- break
- test_suite.cases.append(test_case)
- expected_test_case_num -= 1
- if parse_ok_not_ok_test_suite(lines, test_suite, expected_suite_index):
- test_suite.status = bubble_up_test_case_errors(test_suite)
- return test_suite
- elif not lines:
- print_with_timestamp(red('[ERROR] ') + 'ran out of lines before end token')
- return test_suite
- else:
- print(f'failed to parse end of suite "{name}", at line {lines.line_number()}: {lines.peek()}')
- return None
+def yellow(text: str) -> str:
+ """Returns inputted string with yellow color code."""
+ return '\033[1;33m' + text + RESET
-TAP_HEADER = re.compile(r'^TAP version 14$')
+def green(text: str) -> str:
+ """Returns inputted string with green color code."""
+ return '\033[1;32m' + text + RESET
-def parse_tap_header(lines: LineStream) -> bool:
- consume_non_diagnostic(lines)
- if TAP_HEADER.match(lines.peek()):
- lines.pop()
- return True
- else:
- return False
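+# Length of the ANSI color escape sequences that red()/yellow()/green() add.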
+ANSI_LEN = len(red(''))
-TEST_PLAN = re.compile(r'[0-9]+\.\.([0-9]+)')
+def print_with_timestamp(message: str) -> None:
+ """Prints message with timestamp at beginning."""
+ print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
-def parse_test_plan(lines: LineStream) -> Optional[int]:
- consume_non_diagnostic(lines)
- match = TEST_PLAN.match(lines.peek())
- if match:
- lines.pop()
- return int(match.group(1))
- else:
- return None
-
-def bubble_up_suite_errors(test_suites: Iterable[TestSuite]) -> TestStatus:
- return bubble_up_errors(x.status for x in test_suites)
-
-def parse_test_result(lines: LineStream) -> TestResult:
- consume_non_diagnostic(lines)
- if not lines or not parse_tap_header(lines):
- return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines)
- expected_test_suite_num = parse_test_plan(lines)
- if expected_test_suite_num == 0:
- return TestResult(TestStatus.NO_TESTS, [], lines)
- elif expected_test_suite_num is None:
- return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines)
- test_suites = []
- for i in range(1, expected_test_suite_num + 1):
- test_suite = parse_test_suite(lines, i)
- if test_suite:
- test_suites.append(test_suite)
+def format_test_divider(message: str, len_message: int) -> str:
+ """
+ Returns string with message centered in fixed width divider.
+
+ Example:
+ '===================== message example ====================='
+
+ Parameters:
+ message - message to be centered in divider line
+	len_message - length of the message to be printed, excluding
+		any color code characters
+
+ Return:
+ String containing message centered in fixed width divider
+ """
+	default_count = 3  # default number of '=' characters on each side
+ len_1 = default_count
+ len_2 = default_count
+ difference = len(DIVIDER) - len_message - 2 # 2 spaces added
+ if difference > 0:
+		# calculate the number of '=' characters for each side of the divider
+ len_1 = int(difference / 2)
+ len_2 = difference - len_1
+ return ('=' * len_1) + ' ' + message + ' ' + ('=' * len_2)
+
+def print_test_header(test: Test) -> None:
+ """
+ Prints test header with test name and optionally the expected number
+ of subtests.
+
+ Example:
+ '=================== example (2 subtests) ==================='
+
+ Parameters:
+ test - Test object representing current test being printed
+ """
+ message = test.name
+ if test.expected_count:
+ if test.expected_count == 1:
+ message += (' (' + str(test.expected_count) +
+ ' subtest)')
else:
- print_with_timestamp(
- red('[ERROR] ') + ' expected ' +
- str(expected_test_suite_num) +
- ' test suites, but got ' + str(i - 2))
- break
- test_suite = parse_test_suite(lines, -1)
- if test_suite:
- print_with_timestamp(red('[ERROR] ') +
- 'got unexpected test suite: ' + test_suite.name)
- if test_suites:
- return TestResult(bubble_up_suite_errors(test_suites), test_suites, lines)
- else:
- return TestResult(TestStatus.NO_TESTS, [], lines)
+ message += (' (' + str(test.expected_count) +
+ ' subtests)')
+ print_with_timestamp(format_test_divider(message, len(message)))
-class TestCounts:
- passed: int
- failed: int
- crashed: int
- skipped: int
+def print_log(log: Iterable[str]) -> None:
+ """
+	Prints all strings in the saved log for a test in yellow.
- def __init__(self):
- self.passed = 0
- self.failed = 0
- self.crashed = 0
- self.skipped = 0
-
- def total(self) -> int:
- return self.passed + self.failed + self.crashed + self.skipped
-
-def print_and_count_results(test_result: TestResult) -> TestCounts:
- counts = TestCounts()
- for test_suite in test_result.suites:
- if test_suite.status == TestStatus.SUCCESS:
- print_suite_divider(green('[PASSED] ') + test_suite.name)
- elif test_suite.status == TestStatus.SKIPPED:
- print_suite_divider(yellow('[SKIPPED] ') + test_suite.name)
- elif test_suite.status == TestStatus.TEST_CRASHED:
- print_suite_divider(red('[CRASHED] ' + test_suite.name))
- else:
- print_suite_divider(red('[FAILED] ') + test_suite.name)
- for test_case in test_suite.cases:
- if test_case.status == TestStatus.SUCCESS:
- counts.passed += 1
- print_with_timestamp(green('[PASSED] ') + test_case.name)
- elif test_case.status == TestStatus.SKIPPED:
- counts.skipped += 1
- print_with_timestamp(yellow('[SKIPPED] ') + test_case.name)
- elif test_case.status == TestStatus.TEST_CRASHED:
- counts.crashed += 1
- print_with_timestamp(red('[CRASHED] ' + test_case.name))
- print_log(map(yellow, test_case.log))
- print_with_timestamp('')
+ Parameters:
+ log - Iterable object with all strings saved in log for test
+ """
+ for m in log:
+ print_with_timestamp(yellow(m))
+
+def format_test_result(test: Test) -> str:
+ """
+	Returns a string containing the formatted test result, with a
+	colored status and the test name.
+
+ Example:
+ '[PASSED] example'
+
+ Parameters:
+ test - Test object representing current test being printed
+
+ Return:
+ String containing formatted test result
+ """
+ if test.status == TestStatus.SUCCESS:
+ return (green('[PASSED] ') + test.name)
+ elif test.status == TestStatus.SKIPPED:
+ return (yellow('[SKIPPED] ') + test.name)
+ elif test.status == TestStatus.TEST_CRASHED:
+ print_log(test.log)
+ return (red('[CRASHED] ') + test.name)
+ else:
+ print_log(test.log)
+ return (red('[FAILED] ') + test.name)
+
+def print_test_result(test: Test) -> None:
+ """
+ Prints result line with status of test.
+
+ Example:
+ '[PASSED] example'
+
+ Parameters:
+ test - Test object representing current test being printed
+ """
+ print_with_timestamp(format_test_result(test))
+
+def print_test_footer(test: Test) -> None:
+ """
+ Prints test footer with status of test.
+
+ Example:
+ '===================== [PASSED] example ====================='
+
+ Parameters:
+ test - Test object representing current test being printed
+ """
+ message = format_test_result(test)
+ print_with_timestamp(format_test_divider(message,
+ len(message) - ANSI_LEN))
+
+def print_summary_line(test: Test) -> None:
+ """
+	Prints the summary line of a test object. The color of the line
+	depends on the status of the test: green if the test passes, yellow
+	if it is skipped, and red if it fails or crashes. The summary line
+	contains counts of the statuses of the test's subtests, or of the
+	test itself if it has no subtests.
+
+ Example:
+ "Testing complete. Passed: 2, Failed: 0, Crashed: 0, Skipped: 0,
+ Errors: 0"
+
+	Parameters:
+	test - Test object representing current test being printed
+ """
+ if test.status == TestStatus.SUCCESS or \
+ test.status == TestStatus.NO_TESTS:
+ color = green
+ elif test.status == TestStatus.SKIPPED:
+ color = yellow
+ else:
+ color = red
+ counts = test.counts
+ print_with_timestamp(color('Testing complete. ' + str(counts)))
+
+def print_error(error_message: str) -> None:
+ """
+ Prints error message with error format.
+
+ Example:
+ "[ERROR] Test example: missing test plan!"
+
+ Parameters:
+ error_message - message describing error
+ """
+ print_with_timestamp(red('[ERROR] ') + error_message)
+
+# Other methods:
+
+def bubble_up_test_results(test: Test) -> None:
+ """
+	If the test has subtests, add the test counts of the subtests to
+	the test, and if any of the subtests crashed, set the test status
+	to crashed. Otherwise, if the test has no subtests, add the status
+	of the test to the test counts.
+
+ Parameters:
+ test - Test object for current test being parsed
+ """
+ parse_crash_in_log(test)
+ subtests = test.subtests
+ counts = test.counts
+ status = test.status
+ for t in subtests:
+ counts.add_subtest_counts(t.counts)
+ if counts.total() == 0:
+ counts.add_status(status)
+ elif test.counts.get_status() == TestStatus.TEST_CRASHED:
+ test.status = TestStatus.TEST_CRASHED
+
+def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
+ """
+ Finds next test to parse in LineStream, creates new Test object,
+ parses any subtests of the test, populates Test object with all
+ information (status, name) about the test and the Test objects for
+ any subtests, and then returns the Test object. The method accepts
+ three formats of tests:
+
+ Accepted test formats:
+
+ - Main KTAP/TAP header
+
+ Example:
+
+ KTAP version 1
+ 1..4
+ [subtests]
+
+ - Subtest header line
+
+ Example:
+
+ # Subtest: name
+ 1..3
+ [subtests]
+ ok 1 name
+
+ - Test result line
+
+ Example:
+
+ ok 1 - test
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+ expected_num - expected test number for test to be parsed
+ log - list of strings containing any preceding diagnostic lines
+ corresponding to the current test
+
+ Return:
+ Test object populated with characteristics and any subtests
+ """
+ test = Test()
+ test.log.extend(log)
+ parent_test = False
+ main = parse_ktap_header(lines, test)
+ if main:
+ # If KTAP/TAP header is found, attempt to parse
+ # test plan
+ test.name = "main"
+ parse_test_plan(lines, test)
+ else:
+		# If KTAP/TAP header is not found, the test must be a
+		# subtest header or a test result line, so attempt to
+		# parse a subtest header
+ parent_test = parse_test_header(lines, test)
+ if parent_test:
+ # If subtest header is found, attempt to parse
+ # test plan and print header
+ parse_test_plan(lines, test)
+ print_test_header(test)
+ expected_count = test.expected_count
+ subtests = []
+ test_num = 1
+ while expected_count is None or test_num <= expected_count:
+		# Loop to parse any subtests.
+		# Break after parsing the expected number of tests, or,
+		# if the expected number of tests is unknown, break when
+		# a test result line whose name matches the subtest header
+		# is found or there are no more lines in the stream.
+ sub_log = parse_diagnostic(lines)
+ sub_test = Test()
+ if not lines or (peek_test_name_match(lines, test) and
+ not main):
+ if expected_count and test_num <= expected_count:
+ # If parser reaches end of test before
+ # parsing expected number of subtests, print
+ # crashed subtest and record error
+ test.add_error('missing expected subtest!')
+ sub_test.log.extend(sub_log)
+ test.counts.add_status(
+ TestStatus.TEST_CRASHED)
+ print_test_result(sub_test)
else:
- counts.failed += 1
- print_with_timestamp(red('[FAILED] ') + test_case.name)
- print_log(map(yellow, test_case.log))
- print_with_timestamp('')
- return counts
+ test.log.extend(sub_log)
+ break
+ else:
+ sub_test = parse_test(lines, test_num, sub_log)
+ subtests.append(sub_test)
+ test_num += 1
+ test.subtests = subtests
+ if not main:
+ # If not main test, look for test result line
+ test.log.extend(parse_diagnostic(lines))
+ if (parent_test and peek_test_name_match(lines, test)) or \
+ not parent_test:
+ parse_test_result(lines, test, expected_num)
+ else:
+ test.add_error('missing subtest result line!')
+ # Add statuses to TestCounts attribute in Test object
+ bubble_up_test_results(test)
+ if parent_test:
+ # If test has subtests and is not the main test object, print
+ # footer.
+ print_test_footer(test)
+ elif not main:
+ print_test_result(test)
+ return test
def parse_run_tests(kernel_output: Iterable[str]) -> TestResult:
- counts = TestCounts()
+ """
+	Using kernel output, extract KTAP lines, parse the lines for test
+	results and print condensed test results and a summary line.
+
+	Parameters:
+	kernel_output - Iterable object containing lines of kernel output
+
+	Return:
+	TestResult - Tuple containing status of main test object, main test
+		object with all subtests, and log of all KTAP lines.
+ """
+ print_with_timestamp(DIVIDER)
lines = extract_tap_lines(kernel_output)
- test_result = parse_test_result(lines)
- if test_result.status == TestStatus.NO_TESTS:
- print(red('[ERROR] ') + yellow('no tests run!'))
- elif test_result.status == TestStatus.FAILURE_TO_PARSE_TESTS:
- print(red('[ERROR] ') + yellow('could not parse test results!'))
+ test = Test()
+ if not lines:
+ test.add_error('invalid KTAP input!')
+ test.status = TestStatus.FAILURE_TO_PARSE_TESTS
else:
- counts = print_and_count_results(test_result)
+ test = parse_test(lines, 0, [])
+ if test.status != TestStatus.NO_TESTS:
+ test.status = test.counts.get_status()
print_with_timestamp(DIVIDER)
- if test_result.status == TestStatus.SUCCESS:
- fmt = green
- elif test_result.status == TestStatus.SKIPPED:
- fmt = yellow
- else:
- fmt =red
- print_with_timestamp(
- fmt('Testing complete. %d tests run. %d failed. %d crashed. %d skipped.' %
- (counts.total(), counts.failed, counts.crashed, counts.skipped)))
- return test_result
+ print_summary_line(test)
+ return TestResult(test.status, test, lines)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 619c4554cbff..2a8b0b5f4269 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -106,10 +106,10 @@ class KUnitParserTest(unittest.TestCase):
with open(log_path) as file:
result = kunit_parser.extract_tap_lines(file.readlines())
self.assertContains('TAP version 14', result)
- self.assertContains(' # Subtest: example', result)
- self.assertContains(' 1..2', result)
- self.assertContains(' ok 1 - example_simple_test', result)
- self.assertContains(' ok 2 - example_mock_test', result)
+ self.assertContains('# Subtest: example', result)
+ self.assertContains('1..2', result)
+ self.assertContains('ok 1 - example_simple_test', result)
+ self.assertContains('ok 2 - example_mock_test', result)
self.assertContains('ok 1 - example', result)
def test_output_with_prefix_isolated_correctly(self):
@@ -117,28 +117,28 @@ class KUnitParserTest(unittest.TestCase):
with open(log_path) as file:
result = kunit_parser.extract_tap_lines(file.readlines())
self.assertContains('TAP version 14', result)
- self.assertContains(' # Subtest: kunit-resource-test', result)
- self.assertContains(' 1..5', result)
- self.assertContains(' ok 1 - kunit_resource_test_init_resources', result)
- self.assertContains(' ok 2 - kunit_resource_test_alloc_resource', result)
- self.assertContains(' ok 3 - kunit_resource_test_destroy_resource', result)
- self.assertContains(' foo bar #', result)
- self.assertContains(' ok 4 - kunit_resource_test_cleanup_resources', result)
- self.assertContains(' ok 5 - kunit_resource_test_proper_free_ordering', result)
+ self.assertContains('# Subtest: kunit-resource-test', result)
+ self.assertContains('1..5', result)
+ self.assertContains('ok 1 - kunit_resource_test_init_resources', result)
+ self.assertContains('ok 2 - kunit_resource_test_alloc_resource', result)
+ self.assertContains('ok 3 - kunit_resource_test_destroy_resource', result)
+ self.assertContains('foo bar #', result)
+ self.assertContains('ok 4 - kunit_resource_test_cleanup_resources', result)
+ self.assertContains('ok 5 - kunit_resource_test_proper_free_ordering', result)
self.assertContains('ok 1 - kunit-resource-test', result)
- self.assertContains(' foo bar # non-kunit output', result)
- self.assertContains(' # Subtest: kunit-try-catch-test', result)
- self.assertContains(' 1..2', result)
- self.assertContains(' ok 1 - kunit_test_try_catch_successful_try_no_catch',
+ self.assertContains('foo bar # non-kunit output', result)
+ self.assertContains('# Subtest: kunit-try-catch-test', result)
+ self.assertContains('1..2', result)
+ self.assertContains('ok 1 - kunit_test_try_catch_successful_try_no_catch',
result)
- self.assertContains(' ok 2 - kunit_test_try_catch_unsuccessful_try_does_catch',
+ self.assertContains('ok 2 - kunit_test_try_catch_unsuccessful_try_does_catch',
result)
self.assertContains('ok 2 - kunit-try-catch-test', result)
- self.assertContains(' # Subtest: string-stream-test', result)
- self.assertContains(' 1..3', result)
- self.assertContains(' ok 1 - string_stream_test_empty_on_creation', result)
- self.assertContains(' ok 2 - string_stream_test_not_empty_after_add', result)
- self.assertContains(' ok 3 - string_stream_test_get_string', result)
+ self.assertContains('# Subtest: string-stream-test', result)
+ self.assertContains('1..3', result)
+ self.assertContains('ok 1 - string_stream_test_empty_on_creation', result)
+ self.assertContains('ok 2 - string_stream_test_not_empty_after_add', result)
+ self.assertContains('ok 3 - string_stream_test_get_string', result)
self.assertContains('ok 3 - string-stream-test', result)
def test_parse_successful_test_log(self):
@@ -148,6 +148,13 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
+ def test_parse_successful_nested_tests_log(self):
+ all_passed_log = test_data_path('test_is_test_passed-all_passed_nested.log')
+ with open(all_passed_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
def test_parse_failed_test_log(self):
failed_log = test_data_path('test_is_test_passed-failure.log')
@@ -162,17 +169,31 @@ class KUnitParserTest(unittest.TestCase):
with open(empty_log) as file:
result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(file.readlines()))
- self.assertEqual(0, len(result.suites))
+ self.assertEqual(0, len(result.test.subtests))
self.assertEqual(
kunit_parser.TestStatus.FAILURE_TO_PARSE_TESTS,
result.status)
+ def test_missing_test_plan(self):
+ missing_plan_log = test_data_path('test_is_test_passed-'
+ 'missing_plan.log')
+ with open(missing_plan_log) as file:
+ result = kunit_parser.parse_run_tests(
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ self.assertEqual(2, result.test.counts.errors)
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
+
def test_no_tests(self):
- empty_log = test_data_path('test_is_test_passed-no_tests_run_with_header.log')
- with open(empty_log) as file:
+ header_log = test_data_path('test_is_test_passed-'
+ 'no_tests_run_with_header.log')
+ with open(header_log) as file:
result = kunit_parser.parse_run_tests(
- kunit_parser.extract_tap_lines(file.readlines()))
- self.assertEqual(0, len(result.suites))
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ self.assertEqual(0, len(result.test.subtests))
self.assertEqual(
kunit_parser.TestStatus.NO_TESTS,
result.status)
@@ -182,15 +203,17 @@ class KUnitParserTest(unittest.TestCase):
print_mock = mock.patch('builtins.print').start()
with open(crash_log) as file:
result = kunit_parser.parse_run_tests(
- kunit_parser.extract_tap_lines(file.readlines()))
- print_mock.assert_any_call(StrContains('could not parse test results!'))
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ print_mock.assert_any_call(StrContains('invalid KTAP input!'))
print_mock.stop()
file.close()
def test_crashed_test(self):
crashed_log = test_data_path('test_is_test_passed-crash.log')
with open(crashed_log) as file:
- result = kunit_parser.parse_run_tests(file.readlines())
+ result = kunit_parser.parse_run_tests(
+ file.readlines())
self.assertEqual(
kunit_parser.TestStatus.TEST_CRASHED,
result.status)
@@ -216,6 +239,23 @@ class KUnitParserTest(unittest.TestCase):
result.status)
file.close()
+ def test_ignores_hyphen(self):
+ hyphen_log = test_data_path('test_strip_hyphen.log')
+ file = open(hyphen_log)
+ result = kunit_parser.parse_run_tests(file.readlines())
+
+ # A skipped test does not fail the whole suite.
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
+ self.assertEqual(
+ "sysctl_test",
+ result.test.subtests[0].name)
+ self.assertEqual(
+ "example",
+ result.test.subtests[1].name)
+ file.close()
+
def test_ignores_prefix_printk_time(self):
prefix_log = test_data_path('test_config_printk_time.log')
@@ -224,7 +264,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_ignores_multiple_prefixes(self):
prefix_log = test_data_path('test_multiple_prefixes.log')
@@ -233,7 +273,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_prefix_mixed_kernel_output(self):
mixed_prefix_log = test_data_path('test_interrupted_tap_output.log')
@@ -242,7 +282,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_prefix_poundsign(self):
pound_log = test_data_path('test_pound_sign.log')
@@ -251,7 +291,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_kernel_panic_end(self):
panic_log = test_data_path('test_kernel_panic_interrupt.log')
@@ -260,7 +300,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.TEST_CRASHED,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_pound_no_prefix(self):
pound_log = test_data_path('test_pound_no_prefix.log')
@@ -269,7 +309,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
class LinuxSourceTreeTest(unittest.TestCase):
@@ -291,6 +331,14 @@ class LinuxSourceTreeTest(unittest.TestCase):
pass
tree = kunit_kernel.LinuxSourceTree('', kunitconfig_path=dir)
+ def test_kselftest_nested(self):
+ kselftest_log = test_data_path('test_is_test_passed-kselftest.log')
+ with open(kselftest_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
+
# TODO: add more test cases.
@@ -322,6 +370,12 @@ class KUnitJsonTest(unittest.TestCase):
result = self._json_for('test_is_test_passed-no_tests_run_with_header.log')
self.assertEqual(0, len(result['sub_groups']))
+ def test_nested_json(self):
+ result = self._json_for('test_is_test_passed-all_passed_nested.log')
+ self.assertEqual(
+ {'name': 'example_simple_test', 'status': 'PASS'},
+ result["sub_groups"][0]["sub_groups"][0]["test_cases"][0])
+
class StrContains(str):
def __eq__(self, other):
return self in other
@@ -380,7 +434,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(e.exception.code, 1)
self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
- self.print_mock.assert_any_call(StrContains(' 0 tests run'))
+ self.print_mock.assert_any_call(StrContains('invalid KTAP input!'))
def test_exec_raw_output(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
@@ -388,7 +442,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
for call in self.print_mock.call_args_list:
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
- self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ self.assertNotEqual(call, mock.call(StrContains(' 0 tests run!')))
def test_run_raw_output(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
@@ -397,7 +451,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
for call in self.print_mock.call_args_list:
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
- self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ self.assertNotEqual(call, mock.call(StrContains(' 0 tests run!')))
def test_run_raw_output_kunit(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log b/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
new file mode 100644
index 000000000000..9d5b04fe43a6
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
@@ -0,0 +1,34 @@
+TAP version 14
+1..2
+ # Subtest: sysctl_test
+ 1..4
+ # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
+ ok 1 - sysctl_test_dointvec_null_tbl_data
+ # Subtest: example
+ 1..2
+ init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+ kunit example: all tests passed
+ ok 2 - example
+ # sysctl_test_dointvec_table_len_is_zero: sysctl_test_dointvec_table_len_is_zero passed
+ ok 3 - sysctl_test_dointvec_table_len_is_zero
+ # sysctl_test_dointvec_table_read_but_position_set: sysctl_test_dointvec_table_read_but_position_set passed
+ ok 4 - sysctl_test_dointvec_table_read_but_position_set
+kunit sysctl_test: all tests passed
+ok 1 - sysctl_test
+ # Subtest: example
+ 1..2
+init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+kunit example: all tests passed
+ok 2 - example
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-kselftest.log b/tools/testing/kunit/test_data/test_is_test_passed-kselftest.log
new file mode 100644
index 000000000000..65d3f27feaf2
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-kselftest.log
@@ -0,0 +1,14 @@
+TAP version 13
+1..2
+# selftests: membarrier: membarrier_test_single_thread
+# TAP version 13
+# 1..2
+# ok 1 sys_membarrier available
+# ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
+ok 1 selftests: membarrier: membarrier_test_single_thread
+# selftests: membarrier: membarrier_test_multi_thread
+# TAP version 13
+# 1..2
+# ok 1 sys_membarrier available
+# ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
+ok 2 selftests: membarrier: membarrier_test_multi_thread
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-missing_plan.log b/tools/testing/kunit/test_data/test_is_test_passed-missing_plan.log
new file mode 100644
index 000000000000..5cd17b7f818a
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-missing_plan.log
@@ -0,0 +1,31 @@
+KTAP version 1
+ # Subtest: sysctl_test
+ # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
+ ok 1 - sysctl_test_dointvec_null_tbl_data
+ # sysctl_test_dointvec_table_maxlen_unset: sysctl_test_dointvec_table_maxlen_unset passed
+ ok 2 - sysctl_test_dointvec_table_maxlen_unset
+ # sysctl_test_dointvec_table_len_is_zero: sysctl_test_dointvec_table_len_is_zero passed
+ ok 3 - sysctl_test_dointvec_table_len_is_zero
+ # sysctl_test_dointvec_table_read_but_position_set: sysctl_test_dointvec_table_read_but_position_set passed
+ ok 4 - sysctl_test_dointvec_table_read_but_position_set
+ # sysctl_test_dointvec_happy_single_positive: sysctl_test_dointvec_happy_single_positive passed
+ ok 5 - sysctl_test_dointvec_happy_single_positive
+ # sysctl_test_dointvec_happy_single_negative: sysctl_test_dointvec_happy_single_negative passed
+ ok 6 - sysctl_test_dointvec_happy_single_negative
+ # sysctl_test_dointvec_single_less_int_min: sysctl_test_dointvec_single_less_int_min passed
+ ok 7 - sysctl_test_dointvec_single_less_int_min
+ # sysctl_test_dointvec_single_greater_int_max: sysctl_test_dointvec_single_greater_int_max passed
+ ok 8 - sysctl_test_dointvec_single_greater_int_max
+kunit sysctl_test: all tests passed
+ok 1 - sysctl_test
+ # Subtest: example
+ 1..2
+init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+kunit example: all tests passed
+ok 2 - example
diff --git a/tools/testing/kunit/test_data/test_strip_hyphen.log b/tools/testing/kunit/test_data/test_strip_hyphen.log
new file mode 100644
index 000000000000..92ac7c24b374
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_strip_hyphen.log
@@ -0,0 +1,16 @@
+KTAP version 1
+1..2
+ # Subtest: sysctl_test
+ 1..1
+ # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
+ ok 1 - sysctl_test_dointvec_null_tbl_data
+kunit sysctl_test: all tests passed
+ok 1 - sysctl_test
+ # Subtest: example
+ 1..1
+init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 example_simple_test
+kunit example: all tests passed
+ok 2 example
--
2.33.0.259.gc128427fd7-goog
Synchronous Ethernet networks use a physical layer clock to syntonize
the frequency across different network elements.
Multiple reference clock sources can be used: clocks recovered from
PHY ports on the RX side, or external sources like 1PPS GPS, etc.
This patch series introduces a basic interface for reading the DPLL
state on a SyncE capable device. This state gives us information
about the source of the syntonization signal and whether the DPLL
circuit is tuned to the incoming signal.
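As an illustration only (not part of this series), a userspace query
could be issued with a plain rtnetlink request. The numeric value of
RTM_GETSYNCESTATE and the reply layout are defined by the new uAPI
headers in patch 1, so the constant below is a placeholder and the
request carries no payload:

  import os, socket, struct

  RTM_GETSYNCESTATE = 0x5A  # placeholder; real value comes from rtnetlink.h
  NLM_F_REQUEST = 1

  sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW,
                       socket.NETLINK_ROUTE)
  sock.bind((0, 0))
  # struct nlmsghdr: u32 len, u16 type, u16 flags, u32 seq, u32 pid
  sock.send(struct.pack('IHHII', 16, RTM_GETSYNCESTATE, NLM_F_REQUEST,
                        1, os.getpid()))
  print(sock.recv(65535))  # raw reply; a real tool would parse attributes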
Next steps:
- add interface to enable recovered clocks and get information
about them
v2:
- removed whitespace changes
- fix issues reported by test robot
Maciej Machnikowski (2):
rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
ice: add support for reading SyncE DPLL state
drivers/net/ethernet/intel/ice/ice.h | 5 ++
.../net/ethernet/intel/ice/ice_adminq_cmd.h | 34 ++++++++
drivers/net/ethernet/intel/ice/ice_common.c | 62 +++++++++++++++
drivers/net/ethernet/intel/ice/ice_common.h | 4 +
drivers/net/ethernet/intel/ice/ice_devids.h | 3 +
drivers/net/ethernet/intel/ice/ice_main.c | 55 +++++++++++++
drivers/net/ethernet/intel/ice/ice_ptp.c | 35 +++++++++
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 44 +++++++++++
drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 22 ++++++
include/linux/netdevice.h | 6 ++
include/uapi/linux/if_link.h | 43 +++++++++++
include/uapi/linux/rtnetlink.h | 11 ++-
net/core/rtnetlink.c | 77 +++++++++++++++++++
security/selinux/nlmsgtab.c | 3 +-
14 files changed, 399 insertions(+), 5 deletions(-)
--
2.26.3
This test assumes that the declared kunit_suite object is the exact one
which is being executed, which KUnit will not guarantee [1].
Specifically, `suite->log` is not initialized until a suite object is
executed. So if KUnit makes a copy of the suite and runs that instead,
this test dereferences an invalid pointer and (hopefully) segfaults.
N.B. since we no longer assume this, we can no longer verify that
`suite->log` is *not* allocated during normal execution.
An alternative to this patch that would allow us to test that would
require exposing an API for the current test to get its current suite.
Exposing that for one internal kunit test seems like overkill, and
grants users more footguns (e.g. reusing a test case in multiple suites
and changing behavior based on the suite name, dynamically modifying the
setup/cleanup funcs, storing/reading stuff out of the suite->log, etc.).
[1] In a subsequent patch, KUnit will allow running subsets of test
cases within a suite by making a copy of the suite w/ the filtered test
list. But there are other reasons KUnit might execute a copy, e.g. if it
ever wants to support parallel execution of different suites, recovering
from errors and restarting suites.
Signed-off-by: Daniel Latypov <dlatypov(a)google.com>
---
lib/kunit/kunit-test.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/lib/kunit/kunit-test.c b/lib/kunit/kunit-test.c
index d69efcbed624..555601d17f79 100644
--- a/lib/kunit/kunit-test.c
+++ b/lib/kunit/kunit-test.c
@@ -415,12 +415,15 @@ static struct kunit_suite kunit_log_test_suite = {
static void kunit_log_test(struct kunit *test)
{
- struct kunit_suite *suite = &kunit_log_test_suite;
+ struct kunit_suite suite;
+
+ suite.log = kunit_kzalloc(test, KUNIT_LOG_SIZE, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, suite.log);
kunit_log(KERN_INFO, test, "put this in log.");
kunit_log(KERN_INFO, test, "this too.");
- kunit_log(KERN_INFO, suite, "add to suite log.");
- kunit_log(KERN_INFO, suite, "along with this.");
+ kunit_log(KERN_INFO, &suite, "add to suite log.");
+ kunit_log(KERN_INFO, &suite, "along with this.");
#ifdef CONFIG_KUNIT_DEBUGFS
KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
@@ -428,12 +431,11 @@ static void kunit_log_test(struct kunit *test)
KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
strstr(test->log, "this too."));
KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
- strstr(suite->log, "add to suite log."));
+ strstr(suite.log, "add to suite log."));
KUNIT_EXPECT_NOT_ERR_OR_NULL(test,
- strstr(suite->log, "along with this."));
+ strstr(suite.log, "along with this."));
#else
KUNIT_EXPECT_PTR_EQ(test, test->log, (char *)NULL);
- KUNIT_EXPECT_PTR_EQ(test, suite->log, (char *)NULL);
#endif
}
base-commit: 9c849ce86e0fa93a218614eac562ace44053d7ce
--
2.33.0.259.gc128427fd7-goog
0Day will check whether all configs listed under selftests are able
to be enabled properly.
For the missing configs, it will report something like:
LKP WARN miss config CONFIG_SYNC= of sync/config
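For illustration, the cpufreq side of the fix is a one-line rename in
tools/testing/selftests/cpufreq/config; the expected hunk (sketched
here, not quoted from the patch) is:

  -CONFIG_DEBUG_PI_LIST=y
  +CONFIG_DEBUG_PLIST=y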
CC: "Rafael J. Wysocki" <rjw(a)rjwysocki.net>
CC: Viresh Kumar <viresh.kumar(a)linaro.org>
CC: linux-pm(a)vger.kernel.org
Reported-by: kernel test robot <lkp(a)intel.com>
Li Zhijian (2):
selftests/sync: Remove the deprecated config SYNC
selftests/cpufreq: Rename DEBUG_PI_LIST to DEBUG_PLIST
tools/testing/selftests/cpufreq/config | 2 +-
tools/testing/selftests/sync/config | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)
--
2.31.1
Synchronous Ethernet networks use a physical layer clock to syntonize
the frequency across different network elements.
A basic SyncE node, as defined in ITU-T Rec. G.8264, consists of an
Ethernet Equipment Clock (EEC) and has the ability to recover
synchronization from its synchronization inputs - either traffic
interfaces or external frequency sources.
The EEC can synchronize its frequency (syntonize) to any of those sources.
It is also able to select the synchronization source through priority
tables and synchronization status messaging. It also provides the
necessary filtering and holdover capabilities.
This patch series introduces a basic interface for reading the Ethernet
Equipment Clock (EEC) state on a SyncE capable device. This state gives
information about the source of the syntonization signal and the state
of the EEC. This interface is required to implement Synchronization
Status Messaging on upper layers.
Maciej Machnikowski (2):
rtnetlink: Add new RTM_GETEECSTATE message to get SyncE status
ice: add support for reading SyncE DPLL state
drivers/net/ethernet/intel/ice/ice.h | 5 ++
.../net/ethernet/intel/ice/ice_adminq_cmd.h | 34 ++++++++++
drivers/net/ethernet/intel/ice/ice_common.c | 62 ++++++++++++++++++
drivers/net/ethernet/intel/ice/ice_common.h | 4 ++
drivers/net/ethernet/intel/ice/ice_devids.h | 3 +
drivers/net/ethernet/intel/ice/ice_main.c | 57 +++++++++++++++++
drivers/net/ethernet/intel/ice/ice_ptp.c | 35 ++++++++++
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 44 +++++++++++++
drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 22 +++++++
include/linux/netdevice.h | 5 ++
include/uapi/linux/if_link.h | 46 +++++++++++++
include/uapi/linux/rtnetlink.h | 3 +
net/core/rtnetlink.c | 64 +++++++++++++++++++
security/selinux/nlmsgtab.c | 3 +-
14 files changed, 386 insertions(+), 1 deletion(-)
--
2.26.3
Synchronous Ethernet networks use a physical layer clock to syntonize
the frequency across different network elements.
A basic SyncE node, as defined in ITU-T Rec. G.8264, consists of an
Ethernet Equipment Clock (EEC) and has the ability to recover
synchronization from its synchronization inputs - either traffic
interfaces or external frequency sources.
The EEC can synchronize its frequency (syntonize) to any of those sources.
It is also able to select the synchronization source through priority
tables and synchronization status messaging. It also provides the
necessary filtering and holdover capabilities.
This patch series introduces a basic interface for reading the Ethernet
Equipment Clock (EEC) state on a SyncE capable device. This state gives
information about the source of the syntonization signal and the state
of the EEC. This interface is required to implement Synchronization
Status Messaging on upper layers.
Next steps:
- add interface to enable source clocks and get information about them
v2:
- removed whitespace changes
- fix issues reported by test robot
v3:
- Changed naming from SyncE to EEC
- Clarify cover letter and commit message for patch 1
Maciej Machnikowski (2):
rtnetlink: Add new RTM_GETEECSTATE message to get SyncE status
ice: add support for reading SyncE DPLL state
drivers/net/ethernet/intel/ice/ice.h | 5 ++
.../net/ethernet/intel/ice/ice_adminq_cmd.h | 34 +++++++++
drivers/net/ethernet/intel/ice/ice_common.c | 62 ++++++++++++++++
drivers/net/ethernet/intel/ice/ice_common.h | 4 +
drivers/net/ethernet/intel/ice/ice_devids.h | 3 +
drivers/net/ethernet/intel/ice/ice_main.c | 55 ++++++++++++++
drivers/net/ethernet/intel/ice/ice_ptp.c | 35 +++++++++
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 44 +++++++++++
drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 22 ++++++
include/linux/netdevice.h | 6 ++
include/uapi/linux/if_link.h | 43 +++++++++++
include/uapi/linux/rtnetlink.h | 3 +
net/core/rtnetlink.c | 74 +++++++++++++++++++
security/selinux/nlmsgtab.c | 3 +-
14 files changed, 392 insertions(+), 1 deletion(-)
--
2.26.3
SyncE - Synchronous Ethernet is defined in ITU-T Rec. G.8264
(https://www.itu.int/rec/T-REC-G.8264)
SyncE allows synchronizing the frequency of ethernet PHY clock signal
(the frequency used to send the data onto wire), to some reference
clock signal.
Multiple reference clock sources can be available. PHY ports recover
the frequency at which the transmitter sent the data on the RX side.
Alternatively, we can use external sources like 1PPS GPS, etc.
This patch series introduces basic interfaces for communication
with a SyncE capable device.
The first part of the interface allows acquiring the synchronization
state of the DPLL (Digital Phase Locked Loop). The DPLL LOCKED state
means that the frequency generated by it is locked to the input
frequency.
As a result, PHYs connected to it are synchronized to the chosen input
frequency signal.
The second part can be used to select the port from which the clock
gets recovered. Each PHY chip can have multiple pins on which the
recovered clock can be propagated. For example, a SyncE-capable PHY
can recover the carrier frequency of the first port, divide it
internally, and output it as a reference clock on PIN 0.
When such a signal is enabled, the DPLL can LOCK to the frequency
recovered on PIN 0.
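As a rough sketch (not the reference implementation from this series),
driving the proposed SIOC{S|G}SYNCE interface from userspace could look
as follows; the ioctl number and the request layout are assumptions
here, since the real definitions live in the new
include/uapi/linux/net_synce.h and sockios.h additions:

  import fcntl, socket, struct

  SIOCGSYNCE = 0x89F8  # placeholder for the value assigned in sockios.h

  s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  # struct ifreq sketch: 16-byte interface name plus request-specific data
  req = struct.pack('16s16x', b'eth0')
  res = fcntl.ioctl(s.fileno(), SIOCGSYNCE, req)
  print(res)  # would carry the recovered-clock settings for eth0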
Next steps:
- Add CONFIG_SYNCE definition into Kconfig
- Add more configuration interfaces. Aiming at devlink, since this
would be device-wide configuration
Arkadiusz Kubalewski (7):
ptp: Add interface for acquiring DPLL state
selftests/ptp: Add usage of PTP_DPLL_GETSTATE ioctl in testptp
ice: add get_dpll_state ptp interface usage
net: add ioctl interface for recover reference clock on netdev
selftests/net: Add test app for SIOC{S|G}SYNCE
ice: add SIOC{S|G}SYNCE interface usage to recover reference signal
ice: add sysfs interface to configure PHY recovered reference signal
.../net/ethernet/intel/ice/ice_adminq_cmd.h | 62 +++++
drivers/net/ethernet/intel/ice/ice_common.c | 101 ++++++++
drivers/net/ethernet/intel/ice/ice_common.h | 9 +
drivers/net/ethernet/intel/ice/ice_main.c | 4 +
drivers/net/ethernet/intel/ice/ice_ptp.c | 234 +++++++++++++++++-
drivers/net/ethernet/intel/ice/ice_ptp.h | 9 +
drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 6 +
drivers/ptp/ptp_chardev.c | 15 ++
drivers/ptp/ptp_clockmatrix.h | 12 -
drivers/ptp/ptp_private.h | 2 +
drivers/ptp/ptp_sysfs.c | 48 ++++
include/linux/ptp_clock_kernel.h | 9 +
include/uapi/linux/net_synce.h | 21 ++
include/uapi/linux/ptp_clock.h | 27 ++
include/uapi/linux/sockios.h | 4 +
net/core/dev_ioctl.c | 6 +-
tools/testing/selftests/net/Makefile | 1 +
tools/testing/selftests/net/phy_ref_clk.c | 138 +++++++++++
tools/testing/selftests/ptp/testptp.c | 27 +-
19 files changed, 720 insertions(+), 15 deletions(-)
create mode 100644 include/uapi/linux/net_synce.h
create mode 100644 tools/testing/selftests/net/phy_ref_clk.c
base-commit: aba1e4adb54e020d3ca85a4df3ef0f8febe87548
--
2.24.0
This series of patches updates the format of kselftest TAP results to improve
compatibility with the proposed KTAP specification
(https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqa…).
Three changes:
- Change from "# " to " " for indentation of nested tests
- Add subtest header line at start of tests with subtests. Line format
is "# Subtest: [name of test]".
- Remove TAP header in nested tests
Standardizing TAP results would not only allow for clearer documentation
and easier reading; by standardizing the format across different testing
frameworks, we could also share tooling between them.
As an example:
This is a truncated version of TAP results from the kselftest ptrace with the new format changes:
TAP version 13
1..1
# selftests: ptrace: get_syscall_info
# Subtest: selftests: ptrace: get_syscall_info
1..1
# Starting 1 tests from 1 test cases.
# RUN global.get_syscall_info ...
# OK global.get_syscall_info
ok 1 global.get_syscall_info
# PASSED: 1 / 1 tests passed.
# Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
ok 1 selftests: ptrace: get_syscall_info
With the new patch to update the KUnit parser to improve compatibility with the proposed KTAP specification, (https://lore.kernel.org/linux-kselftest/20210826195505.3066755-1-rmoar@goog…) the above TAP results would be parsed as the following:
[20:46:09] ============================================================
[20:46:09] ===== selftests: ptrace: get_syscall_info (1 subtest) ======
[20:46:09] [PASSED] global.get_syscall_info
[20:46:09] ======= [PASSED] selftests: ptrace: get_syscall_info =======
[20:46:09] ============================================================
[20:46:09] Testing complete. Passed: 1, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
Thus, the kunit parser could become a useful tool for kselftest users.
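For example, assuming the kselftest output above were saved to a file
(the file name here is made up), the parser can be driven directly from
Python the same way kunit_tool_test.py does:

  import kunit_parser

  with open('ptrace_results.log') as f:
      result = kunit_parser.parse_run_tests(f.readlines())
  print(result.status)  # TestStatus.SUCCESS for the log above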
Rae Moar (2):
selftests: tool: Add subtest header line and change indentation format
in TAP results
Revert "selftests: Remove KSFT_TAP_LEVEL"
tools/testing/selftests/Makefile | 6 ++++++
tools/testing/selftests/kselftest/prefix.pl | 2 +-
tools/testing/selftests/kselftest/runner.sh | 7 ++++---
3 files changed, 11 insertions(+), 4 deletions(-)
--
2.33.0.259.gc128427fd7-goog
Update kunit_parser to improve compatibility with the KTAP
specification, including support for arbitrarily nested tests. This
patch accomplishes three major changes:
- Use a general Test object to represent all tests rather than TestCase
and TestSuite objects. This allows for easier implementation of arbitrary
levels of nested tests and promotes the idea that both test suites and test
cases are tests.
- Print errors incrementally rather than all at once after the
parsing finishes, to maximize the information given to the user when
the parser is fed invalid input and to increase the helpfulness of
the timestamps given during printing. Note that kunit.py parse does
not print incrementally yet. However, this fix brings us closer to
this feature.
- Increase compatibility with different input formats. Arbitrary levels
of nested tests are supported, and test cases and test suites may
now appear at the same level of testing.
This patch now implements the KTAP specification as described here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqa….
This patch adjusts the kunit_tool_test.py file to check for
the correct outputs from the new parser and adds a new test to check
the parsing for a KTAP result log with correct format for multiple nested
subtests (test_is_test_passed-all_passed_nested.log).
This patch also alters the kunit_json.py file to allow for arbitrarily
nested tests.
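For instance, a nested run like the one in
test_is_test_passed-all_passed_nested.log now maps to sub_groups nested
inside sub_groups. A trimmed sketch of the resulting JSON (the real
objects also carry arch, defconfig, build_environment and the other
KernelCI fields):

  {
      "name": "KUnit Test Group",
      "sub_groups": [
          {
              "name": "sysctl_test",
              "sub_groups": [
                  {
                      "name": "example",
                      "sub_groups": [],
                      "test_cases": [
                          {"name": "example_simple_test", "status": "PASS"}
                      ]
                  }
              ],
              "test_cases": [
                  {"name": "sysctl_test_dointvec_null_tbl_data",
                   "status": "PASS"}
              ]
          }
      ],
      "test_cases": []
  }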
Signed-off-by: Rae Moar <rmoar(a)google.com>
Reviewed-by: Brendan Higgins <brendanhiggins(a)google.com>
---
Change log from v1:
https://lore.kernel.org/linux-kselftest/20210820200032.2178134-1-rmoar@goog…
- Rebase onto kselftest/kunit branch
- Add tests to kunit_tool_test.py to check that the parser correctly
strips hyphens, produces correct json objects with nested tests,
parses kselftest TAP output, and handles a missing test plan.
- Fix bug to correctly match test name in instance of a missing test plan.
- Fix bug in kunit_tool_test.py pointed out by Daniel where it was not
correctly checking for a proper match to the '0 tests run!' error
message. Reverts changes back to original.
- A few minor changes to commit message using Daniel's comments.
- Change docstrings using Daniel's comments to reduce:
- Shorten some docstrings to be one-line or just a description if they
are self-explanatory.
- Remove explicit respecification of types of parameters and returns
because this is already specified in the function annotations. However,
some descriptions of the parameters and returns remain and some contain
the type for context. Additionally, the types of public attributes of
classes remain.
- Remove any documentation of 'Return: None'
- Remove docstrings of helper methods within other methods.
---
tools/testing/kunit/kunit_json.py | 55 +-
tools/testing/kunit/kunit_parser.py | 1056 ++++++++++++-----
tools/testing/kunit/kunit_tool_test.py | 134 ++-
.../test_is_test_passed-all_passed_nested.log | 34 +
.../test_is_test_passed-kselftest.log | 14 +
.../test_is_test_passed-missing_plan.log | 31 +
.../kunit/test_data/test_strip_hyphen.log | 16 +
7 files changed, 951 insertions(+), 389 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-kselftest.log
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-missing_plan.log
create mode 100644 tools/testing/kunit/test_data/test_strip_hyphen.log
diff --git a/tools/testing/kunit/kunit_json.py b/tools/testing/kunit/kunit_json.py
index f5cca5c38cac..e7317b4fad9d 100644
--- a/tools/testing/kunit/kunit_json.py
+++ b/tools/testing/kunit/kunit_json.py
@@ -11,47 +11,46 @@ import os
import kunit_parser
-from kunit_parser import TestStatus
-
-def get_json_result(test_result, def_config, build_dir, json_path) -> str:
- sub_groups = []
-
- # Each test suite is mapped to a KernelCI sub_group
- for test_suite in test_result.suites:
- sub_group = {
- "name": test_suite.name,
- "arch": "UM",
- "defconfig": def_config,
- "build_environment": build_dir,
- "test_cases": [],
- "lab_name": None,
- "kernel": None,
- "job": None,
- "git_branch": "kselftest",
- }
- test_cases = []
- # TODO: Add attachments attribute in test_case with detailed
- # failure message, see https://api.kernelci.org/schema-test-case.html#get
- for case in test_suite.cases:
- test_case = {"name": case.name, "status": "FAIL"}
- if case.status == TestStatus.SUCCESS:
+from kunit_parser import Test, TestResult, TestStatus
+from typing import Any, Dict
+
+JsonObj = Dict[str, Any]
+
+def _get_group_json(test: Test, def_config: str, build_dir: str) -> JsonObj:
+ sub_groups = [] # List[JsonObj]
+ test_cases = [] # List[JsonObj]
+
+ for subtest in test.subtests:
+ if len(subtest.subtests):
+ sub_group = _get_group_json(subtest, def_config,
+ build_dir)
+ sub_groups.append(sub_group)
+ else:
+ test_case = {"name": subtest.name, "status": "FAIL"}
+ if subtest.status == TestStatus.SUCCESS:
test_case["status"] = "PASS"
- elif case.status == TestStatus.TEST_CRASHED:
+ elif subtest.status == TestStatus.TEST_CRASHED:
test_case["status"] = "ERROR"
test_cases.append(test_case)
- sub_group["test_cases"] = test_cases
- sub_groups.append(sub_group)
+
test_group = {
- "name": "KUnit Test Group",
+ "name": test.name,
"arch": "UM",
"defconfig": def_config,
"build_environment": build_dir,
"sub_groups": sub_groups,
+ "test_cases": test_cases,
"lab_name": None,
"kernel": None,
"job": None,
"git_branch": "kselftest",
}
+ return test_group
+
+def get_json_result(test_result: TestResult, def_config: str, build_dir: str,
+ json_path: str) -> str:
+ test_group = _get_group_json(test_result.test, def_config, build_dir)
+ test_group["name"] = "KUnit Test Group"
json_obj = json.dumps(test_group, indent=4)
if json_path != 'stdout':
with open(json_path, 'w') as result_path:
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 6310a641b151..4b6086159c7f 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -1,11 +1,15 @@
# SPDX-License-Identifier: GPL-2.0
#
-# Parses test results from a kernel dmesg log.
+# Parses KTAP test results from a kernel dmesg log and incrementally prints
+# results in a reader-friendly format. Stores and returns test results in a
+# Test object.
#
# Copyright (C) 2019, Google LLC.
# Author: Felix Guo <felixguoxiuping(a)gmail.com>
# Author: Brendan Higgins <brendanhiggins(a)google.com>
+# Author: Rae Moar <rmoar(a)google.com>
+from __future__ import annotations
import re
from collections import namedtuple
@@ -14,33 +18,55 @@ from enum import Enum, auto
from functools import reduce
from typing import Iterable, Iterator, List, Optional, Tuple
-TestResult = namedtuple('TestResult', ['status','suites','log'])
-
-class TestSuite(object):
- def __init__(self) -> None:
- self.status = TestStatus.SUCCESS
- self.name = ''
- self.cases = [] # type: List[TestCase]
-
- def __str__(self) -> str:
- return 'TestSuite(' + str(self.status) + ',' + self.name + ',' + str(self.cases) + ')'
-
- def __repr__(self) -> str:
- return str(self)
-
-class TestCase(object):
+TestResult = namedtuple('TestResult', ['status','test','log'])
+
+class Test(object):
+ """
+ A class to represent a test parsed from KTAP results. All KTAP
+ results within a test log are stored in a main Test object as
+ subtests.
+
+ Attributes:
+ status : TestStatus - status of the test
+ name : str - name of the test
+ expected_count : int - expected number of subtests (0 if single
+ test case and None if unknown expected number of subtests)
+ subtests : List[Test] - list of subtests
+ log : List[str] - log of KTAP lines that correspond to the test
+ counts : TestCounts - counts of the test statuses and errors of
+ subtests or of the test itself if the test is a single
+ test case.
+ """
def __init__(self) -> None:
+ """Constructs the default attributes of a Test class object.
+ """
self.status = TestStatus.SUCCESS
self.name = ''
+ self.expected_count = 0 # type: Optional[int]
+ self.subtests = [] # type: List[Test]
self.log = [] # type: List[str]
+ self.counts = TestCounts()
def __str__(self) -> str:
- return 'TestCase(' + str(self.status) + ',' + self.name + ',' + str(self.log) + ')'
+ """Returns string representation of a Test class object."""
+ return ('Test(' + str(self.status) + ', ' + self.name +
+ ', ' + str(self.expected_count) + ', ' +
+ str(self.subtests) + ', ' + str(self.log) + ', ' +
+ str(self.counts) + ')')
def __repr__(self) -> str:
+ """Returns string representation of a Test class object."""
return str(self)
+ def add_error(self, error_message: str) -> None:
+ """Adds error to test object by incrementing the error count
+ and printing the error message.
+ """
+ self.counts.errors += 1
+ print_error('Test ' + self.name + ': ' + error_message)
+
class TestStatus(Enum):
+ """An enumeration class to represent the status of a test."""
SUCCESS = auto()
FAILURE = auto()
SKIPPED = auto()
@@ -48,381 +74,769 @@ class TestStatus(Enum):
NO_TESTS = auto()
FAILURE_TO_PARSE_TESTS = auto()
+class TestCounts:
+ """
+ A class to represent the counts of statuses and test errors of
+ subtests or of the test itself if the test is a single test case with
+	no subtests. Note that the counts of passed, failed, crashed,
+	and skipped should sum to the total number of subtests for the
+	test.
+
+ Attributes:
+ passed : int - the number of tests that have passed
+ failed : int - the number of tests that have failed
+ crashed : int - the number of tests that have crashed
+ skipped : int - the number of tests that have skipped
+ errors : int - the number of errors in the test and subtests
+ """
+ def __init__(self):
+		"""Constructs the default attributes of a TestCounts class
+ object. Sets the counts of all test statuses and test
+ errors to be 0.
+ """
+ self.passed = 0
+ self.failed = 0
+ self.crashed = 0
+ self.skipped = 0
+ self.errors = 0
+
+ def __str__(self) -> str:
+ """Returns the string representation of a TestCounts object.
+ """
+ return ('Passed: ' + str(self.passed) +
+ ', Failed: ' + str(self.failed) +
+ ', Crashed: ' + str(self.crashed) +
+ ', Skipped: ' + str(self.skipped) +
+ ', Errors: ' + str(self.errors))
+
+ def total(self) -> int:
+		"""Returns the total number of subtests, or 1 if the test
+		object has no subtests (the count then represents the test
+		itself). The total is the sum of the passed, failed,
+		crashed, and skipped counts.
+ """
+ return (self.passed + self.failed + self.crashed +
+ self.skipped)
+
+ def add_subtest_counts(self, counts: TestCounts) -> None:
+ """
+ Adds the counts of another TestCounts object to the current
+ TestCounts object. Used to add the counts of a subtest to the
+ parent test.
+
+ Parameters:
+ counts - a different TestCounts object whose counts
+ will be added to the counts of the TestCounts object
+ """
+ self.passed += counts.passed
+ self.failed += counts.failed
+ self.crashed += counts.crashed
+ self.skipped += counts.skipped
+ self.errors += counts.errors
+
+ def get_status(self) -> TestStatus:
+ """Returns the expected status of a Test using test counts."""
+ if self.crashed:
+ # If one of the subtests crash, the expected status
+ # of the Test is crashed.
+ return TestStatus.TEST_CRASHED
+ elif self.failed:
+ # Otherwise if one of the subtests fail, the
+ # expected status of the Test is failed.
+ return TestStatus.FAILURE
+ elif self.passed:
+ # Otherwise if one of the subtests pass, the
+ # expected status of the Test is passed.
+ return TestStatus.SUCCESS
+ else:
+ # Finally, if none of the subtests have failed,
+ # crashed, or passed, the expected status of the
+ # Test is skipped.
+ return TestStatus.SKIPPED
+
+ def add_status(self, status: TestStatus) -> None:
+ """
+ Given inputted status, increments corresponding attribute of
+ TestCounts object.
+
+ Parameters:
+ status - status to be added to the TestCounts object
+ """
+ if status == TestStatus.SUCCESS or \
+ status == TestStatus.NO_TESTS:
+ # if status is NO_TESTS the most appropriate
+ # attribute to increment is passed because
+ # the test did not fail, crash or get skipped.
+ self.passed += 1
+ elif status == TestStatus.FAILURE:
+ self.failed += 1
+ elif status == TestStatus.SKIPPED:
+ self.skipped += 1
+ else:
+ self.crashed += 1
+
class LineStream:
- """Provides a peek()/pop() interface over an iterator of (line#, text)."""
+ """
+ A class to represent the lines of kernel output.
+ Provides a peek()/pop() interface over an iterator of
+ (line#, text).
+ """
_lines: Iterator[Tuple[int, str]]
_next: Tuple[int, str]
_done: bool
def __init__(self, lines: Iterator[Tuple[int, str]]):
+		"""Sets defaults for the LineStream object and sets the
+		_lines attribute to the lines parameter.
+		"""
self._lines = lines
self._done = False
self._next = (0, '')
self._get_next()
def _get_next(self) -> None:
+		"""Advances the LineStream to the next line or sets the _done
+ attribute if the LineStream has reached the end of the lines.
+ """
try:
self._next = next(self._lines)
except StopIteration:
self._done = True
def peek(self) -> str:
+ """Returns the next line in the LineStream without advancing
+ the LineStream.
+ """
return self._next[1]
def pop(self) -> str:
+ """Returns the next line in the LineStream and advances the
+ LineStream to the next line.
+ """
n = self._next
self._get_next()
return n[1]
def __bool__(self) -> bool:
+ """Returns whether the LineStream has reached the end of the
+ lines.
+ """
return not self._done
# Only used by kunit_tool_test.py.
def __iter__(self) -> Iterator[str]:
+		"""Yields and consumes all remaining lines stored in the
+		LineStream object.
+		"""
while bool(self):
yield self.pop()
def line_number(self) -> int:
+ """Returns the line number of the next line in the
+ LineStream.
+ """
return self._next[0]
-kunit_start_re = re.compile(r'TAP version [0-9]+$')
-kunit_end_re = re.compile('(List of all partitions:|'
- 'Kernel panic - not syncing: VFS:|reboot: System halted)')
+# Parsing helper methods:
+
+KTAP_START = re.compile(r'KTAP version ([0-9]+)$')
+TAP_START = re.compile(r'TAP version ([0-9]+)$')
+KTAP_END = re.compile('(List of all partitions:|'
+ 'Kernel panic - not syncing: VFS:|reboot: System halted)')
def extract_tap_lines(kernel_output: Iterable[str]) -> LineStream:
- def isolate_kunit_output(kernel_output: Iterable[str]) -> Iterator[Tuple[int, str]]:
+	"""Extracts KTAP lines from the kernel output and returns them
+	in a LineStream object."""
+ def isolate_ktap_output(kernel_output: Iterable[str]) \
+ -> Iterator[Tuple[int, str]]:
line_num = 0
started = False
for line in kernel_output:
line_num += 1
- line = line.rstrip() # line always has a trailing \n
- if kunit_start_re.search(line):
+ line = line.rstrip() # remove trailing \n
+ if not started and KTAP_START.search(line):
+ # start extracting KTAP lines and set prefix
+ # to number of characters before version line
+ prefix_len = len(
+ line.split('KTAP version')[0])
+ started = True
+ yield line_num, line[prefix_len:]
+ elif not started and TAP_START.search(line):
+ # start extracting KTAP lines and set prefix
+ # to number of characters before version line
prefix_len = len(line.split('TAP version')[0])
started = True
yield line_num, line[prefix_len:]
- elif kunit_end_re.search(line):
+ elif started and KTAP_END.search(line):
+ # stop extracting KTAP lines
break
elif started:
- yield line_num, line[prefix_len:]
- return LineStream(lines=isolate_kunit_output(kernel_output))
-
-DIVIDER = '=' * 60
-
-RESET = '\033[0;0m'
-
-def red(text) -> str:
- return '\033[1;31m' + text + RESET
-
-def yellow(text) -> str:
- return '\033[1;33m' + text + RESET
-
-def green(text) -> str:
- return '\033[1;32m' + text + RESET
-
-def print_with_timestamp(message) -> None:
- print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
-
-def format_suite_divider(message) -> str:
- return '======== ' + message + ' ========'
-
-def print_suite_divider(message) -> None:
- print_with_timestamp(DIVIDER)
- print_with_timestamp(format_suite_divider(message))
-
-def print_log(log) -> None:
- for m in log:
- print_with_timestamp(m)
-
-TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*# (Subtest:|.*: kunit test case crashed!)).*$')
-
-def consume_non_diagnostic(lines: LineStream) -> None:
- while lines and not TAP_ENTRIES.match(lines.peek()):
- lines.pop()
-
-def save_non_diagnostic(lines: LineStream, test_case: TestCase) -> None:
- while lines and not TAP_ENTRIES.match(lines.peek()):
- test_case.log.append(lines.peek())
- lines.pop()
-
-OkNotOkResult = namedtuple('OkNotOkResult', ['is_ok','description', 'text'])
-
-OK_NOT_OK_SKIP = re.compile(r'^[\s]*(ok|not ok) [0-9]+ - (.*) # SKIP(.*)$')
-
-OK_NOT_OK_SUBTEST = re.compile(r'^[\s]+(ok|not ok) [0-9]+ - (.*)$')
-
-OK_NOT_OK_MODULE = re.compile(r'^(ok|not ok) ([0-9]+) - (.*)$')
-
-def parse_ok_not_ok_test_case(lines: LineStream, test_case: TestCase) -> bool:
- save_non_diagnostic(lines, test_case)
- if not lines:
- test_case.status = TestStatus.TEST_CRASHED
- return True
- line = lines.peek()
- match = OK_NOT_OK_SUBTEST.match(line)
- while not match and lines:
- line = lines.pop()
- match = OK_NOT_OK_SUBTEST.match(line)
- if match:
- test_case.log.append(lines.pop())
- test_case.name = match.group(2)
- skip_match = OK_NOT_OK_SKIP.match(line)
- if skip_match:
- test_case.status = TestStatus.SKIPPED
- return True
- if test_case.status == TestStatus.TEST_CRASHED:
- return True
- if match.group(1) == 'ok':
- test_case.status = TestStatus.SUCCESS
- else:
- test_case.status = TestStatus.FAILURE
- return True
+				# remove prefix and any indentation and yield
+ # line with line number
+ line = line[prefix_len:].lstrip()
+ yield line_num, line
+ return LineStream(lines=isolate_ktap_output(kernel_output))
+
+def raw_output(kernel_output: Iterable[str]) -> None:
+ """Prints all lines of kernel output."""
+ for line in kernel_output:
+ print(line.rstrip())
+
+KTAP_VERSIONS = [1]
+TAP_VERSIONS = [13, 14]
+
+def check_version(version_num: int, accepted_versions: List[int],
+ version_type: str, test: Test) -> None:
+ """
+ Adds error to test object if version number is too high or too
+ low.
+
+ Parameters:
+	version_num - The version number from the parsed KTAP or TAP
+		header line
+	accepted_versions - List of accepted KTAP or TAP versions
+ version_type - 'KTAP' or 'TAP' depending on the type of
+ version line.
+ test - Test object for current test being parsed
+ """
+ if version_num < min(accepted_versions):
+ test.add_error(version_type +
+ ' version lower than expected!')
+ elif version_num > max(accepted_versions):
+ test.add_error(
+ version_type + ' version higher than expected!')
+
+def parse_ktap_header(lines: LineStream, test: Test) -> bool:
+ """
+	If the next line in the LineStream matches the format of a KTAP
+	or TAP header line, checks the version number, pops the line, and
+	returns True. Otherwise, returns False.
+
+ Accepted formats:
+ - 'KTAP version [version number]'
+ - 'TAP version [version number]'
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+ test - Test object for current test being parsed
+
+ Return:
+ Boolean that represents if the next line in the LineStream was parsed
+ as the KTAP or TAP header line
+ """
+ ktap_match = KTAP_START.match(lines.peek())
+ tap_match = TAP_START.match(lines.peek())
+ if ktap_match:
+ version_num = int(ktap_match.group(1))
+ check_version(version_num, KTAP_VERSIONS, 'KTAP', test)
+ elif tap_match:
+ version_num = int(tap_match.group(1))
+ check_version(version_num, TAP_VERSIONS, 'TAP', test)
else:
return False
-
-SUBTEST_DIAGNOSTIC = re.compile(r'^[\s]+# (.*)$')
-DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^[\s]+# .*?: kunit test case crashed!$')
-
-def parse_diagnostic(lines: LineStream, test_case: TestCase) -> bool:
- save_non_diagnostic(lines, test_case)
- if not lines:
+ test.log.append(lines.pop())
+ return True
+
+TEST_HEADER = re.compile(r'^# Subtest: (.*)$')
+
+def parse_test_header(lines: LineStream, test: Test) -> bool:
+ """
+	If the next line in the LineStream matches the format of a test
+	header line, sets the name of the test, pops the line, and
+	returns True. Otherwise, returns False.
+
+ Accepted format:
+ - '# Subtest: [test name]'
+
+ Parameters:
+ lines - LineStream of ktap output to parse
+ test - Test object for current test being parsed
+
+ Return:
+ Boolean that represents if the next line in the LineStream was parsed
+ as a test header
+ """
+ match = TEST_HEADER.match(lines.peek())
+ if not match:
return False
+ test.log.append(lines.pop())
+ test.name = match.group(1)
+ return True
+
+TEST_PLAN = re.compile(r'1\.\.([0-9]+)')
+
+def parse_test_plan(lines: LineStream, test: Test) -> bool:
+ """
+	If the next line in the LineStream matches the format of a test
+	plan line, sets the expected number of subtests on the test
+	object, records an error if the plan declares 0 tests, pops the
+	line, and returns True. Otherwise, adds a missing-plan error to
+	the test object and returns False.
+
+ Accepted format:
+ - '1..[number of subtests]'
+
+ Parameters:
+ lines - LineStream of ktap output to parse
+ test - Test object for current test being parsed
+
+ Return:
+ Boolean that represents if the next line in the LineStream was parsed
+ as a test plan
+ """
+ match = TEST_PLAN.match(lines.peek())
+ if not match:
+ test.expected_count = None
+ test.add_error('missing plan line!')
+ return False
+ test.log.append(lines.pop())
+ expected_count = int(match.group(1))
+ test.expected_count = expected_count
+ if expected_count == 0:
+ test.status = TestStatus.NO_TESTS
+ test.add_error('0 tests run!')
+ return True
+
+TEST_RESULT = re.compile(r'^(ok|not ok) ([0-9]+) (- )?(.*)$')
+
+TEST_RESULT_SKIP = re.compile(r'^(ok|not ok) ([0-9]+) (- )?(.*) # SKIP(.*)$')
+
+def peek_test_name_match(lines: LineStream, test: Test) -> bool:
+ """
+	If the next line in the LineStream matches the format of a test
+	result line and the name in the result line matches the name of
+	the current test, returns True. Otherwise, returns False.
+
+ Accepted format:
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+ test - Test object for current test being parsed
+
+ Return:
+ Boolean that represents if the next line in the LineStream matched a
+ test result line and the name matched the expected test name
+ """
line = lines.peek()
- match = SUBTEST_DIAGNOSTIC.match(line)
- if match:
- test_case.log.append(lines.pop())
- crash_match = DIAGNOSTIC_CRASH_MESSAGE.match(line)
- if crash_match:
- test_case.status = TestStatus.TEST_CRASHED
- return True
- else:
+ match = TEST_RESULT.match(line)
+ if not match:
return False
+ name = match.group(4)
+ return (name == test.name)
+
+def parse_test_result(lines: LineStream, test: Test,
+ expected_num: int) -> bool:
+ """
+	If the next line in the LineStream matches the format of a test
+	result line, records the status from the result line on the test
+	object, checks that the test number matches the expected number
+	(adding an error to the test object if it does not), pops the
+	line, and returns True. Otherwise, returns False.
+
+	Note that the skip directive is the only directive that causes a
+	change in status; any other directive is included in the name of
+	the test.
+
+ Accepted format:
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+ test - Test object for current test being parsed
+ expected_num - expected test number for current test
+
+ Return:
+ Boolean that represents if the next line in the LineStream was parsed
+ as a test result line.
+ """
+ line = lines.peek()
+ match = TEST_RESULT.match(line)
+ skip_match = TEST_RESULT_SKIP.match(line)
-def parse_test_case(lines: LineStream) -> Optional[TestCase]:
- test_case = TestCase()
- save_non_diagnostic(lines, test_case)
- while parse_diagnostic(lines, test_case):
- pass
- if parse_ok_not_ok_test_case(lines, test_case):
- return test_case
- else:
- return None
-
-SUBTEST_HEADER = re.compile(r'^[\s]+# Subtest: (.*)$')
+ # Check if line matches test result line format
+ if not match:
+ return False
+ test.log.append(lines.pop())
-def parse_subtest_header(lines: LineStream) -> Optional[str]:
- consume_non_diagnostic(lines)
- if not lines:
- return None
- match = SUBTEST_HEADER.match(lines.peek())
- if match:
- lines.pop()
- return match.group(1)
+ # Set name of test object
+ if skip_match:
+ test.name = skip_match.group(4)
else:
- return None
+ test.name = match.group(4)
-SUBTEST_PLAN = re.compile(r'[\s]+[0-9]+\.\.([0-9]+)')
+ # Check test num
+ num = int(match.group(2))
+ if num != expected_num:
+ test.add_error('Expected test number ' +
+ str(expected_num) + ' but found ' + str(num))
-def parse_subtest_plan(lines: LineStream) -> Optional[int]:
- consume_non_diagnostic(lines)
- match = SUBTEST_PLAN.match(lines.peek())
- if match:
- lines.pop()
- return int(match.group(1))
- else:
- return None
-
-def max_status(left: TestStatus, right: TestStatus) -> TestStatus:
- if left == right:
- return left
- elif left == TestStatus.TEST_CRASHED or right == TestStatus.TEST_CRASHED:
- return TestStatus.TEST_CRASHED
- elif left == TestStatus.FAILURE or right == TestStatus.FAILURE:
- return TestStatus.FAILURE
- elif left == TestStatus.SKIPPED:
- return right
- else:
- return left
-
-def parse_ok_not_ok_test_suite(lines: LineStream,
- test_suite: TestSuite,
- expected_suite_index: int) -> bool:
- consume_non_diagnostic(lines)
- if not lines:
- test_suite.status = TestStatus.TEST_CRASHED
- return False
- line = lines.peek()
- match = OK_NOT_OK_MODULE.match(line)
- if match:
- lines.pop()
- if match.group(1) == 'ok':
- test_suite.status = TestStatus.SUCCESS
- else:
- test_suite.status = TestStatus.FAILURE
- skip_match = OK_NOT_OK_SKIP.match(line)
- if skip_match:
- test_suite.status = TestStatus.SKIPPED
- suite_index = int(match.group(2))
- if suite_index != expected_suite_index:
- print_with_timestamp(
- red('[ERROR] ') + 'expected_suite_index ' +
- str(expected_suite_index) + ', but got ' +
- str(suite_index))
+ # Set status of test object
+ status = match.group(1)
+ if test.status == TestStatus.TEST_CRASHED:
return True
+ elif skip_match:
+ test.status = TestStatus.SKIPPED
+ elif status == 'ok':
+ test.status = TestStatus.SUCCESS
else:
- return False
-
-def bubble_up_errors(status_list: Iterable[TestStatus]) -> TestStatus:
- return reduce(max_status, status_list, TestStatus.SKIPPED)
+ test.status = TestStatus.FAILURE
+ return True
+
+def parse_diagnostic(lines: LineStream) -> List[str]:
+ """
+	Collects any lines that do not match the format of a test result
+	line or test header line, popping them from the LineStream and
+	adding them to the returned log of diagnostic lines.
+
+ Line formats that are not parsed:
+ - '# Subtest: [test name]'
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+
+ Return:
+ Log of diagnostic lines
+ """
+ log = [] # type: List[str]
+ while lines and not TEST_RESULT.match(lines.peek()) and not \
+ TEST_HEADER.match(lines.peek()):
+ log.append(lines.pop())
+ return log
+
+DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^# .*?: kunit test case crashed!$')
+
+def parse_crash_in_log(test: Test) -> bool:
+ """
+	Iterates through the lines of the log looking for a crash
+	message. If one is found, sets the test status to crashed and
+	returns True. Otherwise, returns False.
+
+ Parameters:
+ test - Test object for current test being parsed
+
+ Return:
+ Boolean that represents if crash message found in log
+ """
+ for line in test.log:
+ if DIAGNOSTIC_CRASH_MESSAGE.match(line):
+ test.status = TestStatus.TEST_CRASHED
+ return True
+ return False
-def bubble_up_test_case_errors(test_suite: TestSuite) -> TestStatus:
- max_test_case_status = bubble_up_errors(x.status for x in test_suite.cases)
- return max_status(max_test_case_status, test_suite.status)
+# Printing helper methods:
-def parse_test_suite(lines: LineStream, expected_suite_index: int) -> Optional[TestSuite]:
- if not lines:
- return None
- consume_non_diagnostic(lines)
- test_suite = TestSuite()
- test_suite.status = TestStatus.SUCCESS
- name = parse_subtest_header(lines)
- if not name:
- return None
- test_suite.name = name
- expected_test_case_num = parse_subtest_plan(lines)
- if expected_test_case_num is None:
- return None
- while expected_test_case_num > 0:
- test_case = parse_test_case(lines)
- if not test_case:
- break
- test_suite.cases.append(test_case)
- expected_test_case_num -= 1
- if parse_ok_not_ok_test_suite(lines, test_suite, expected_suite_index):
- test_suite.status = bubble_up_test_case_errors(test_suite)
- return test_suite
- elif not lines:
- print_with_timestamp(red('[ERROR] ') + 'ran out of lines before end token')
- return test_suite
- else:
- print(f'failed to parse end of suite "{name}", at line {lines.line_number()}: {lines.peek()}')
- return None
+DIVIDER = '=' * 60
-TAP_HEADER = re.compile(r'^TAP version 14$')
+RESET = '\033[0;0m'
-def parse_tap_header(lines: LineStream) -> bool:
- consume_non_diagnostic(lines)
- if TAP_HEADER.match(lines.peek()):
- lines.pop()
- return True
- else:
- return False
+def red(text: str) -> str:
+	"""Returns the input string wrapped in the red color code."""
+ return '\033[1;31m' + text + RESET
-TEST_PLAN = re.compile(r'[0-9]+\.\.([0-9]+)')
+def yellow(text: str) -> str:
+	"""Returns the input string wrapped in the yellow color code."""
+ return '\033[1;33m' + text + RESET
-def parse_test_plan(lines: LineStream) -> Optional[int]:
- consume_non_diagnostic(lines)
- match = TEST_PLAN.match(lines.peek())
- if match:
- lines.pop()
- return int(match.group(1))
- else:
- return None
-
-def bubble_up_suite_errors(test_suites: Iterable[TestSuite]) -> TestStatus:
- return bubble_up_errors(x.status for x in test_suites)
-
-def parse_test_result(lines: LineStream) -> TestResult:
- consume_non_diagnostic(lines)
- if not lines or not parse_tap_header(lines):
- return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines)
- expected_test_suite_num = parse_test_plan(lines)
- if expected_test_suite_num == 0:
- return TestResult(TestStatus.NO_TESTS, [], lines)
- elif expected_test_suite_num is None:
- return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines)
- test_suites = []
- for i in range(1, expected_test_suite_num + 1):
- test_suite = parse_test_suite(lines, i)
- if test_suite:
- test_suites.append(test_suite)
- else:
- print_with_timestamp(
- red('[ERROR] ') + ' expected ' +
- str(expected_test_suite_num) +
- ' test suites, but got ' + str(i - 2))
- break
- test_suite = parse_test_suite(lines, -1)
- if test_suite:
- print_with_timestamp(red('[ERROR] ') +
- 'got unexpected test suite: ' + test_suite.name)
- if test_suites:
- return TestResult(bubble_up_suite_errors(test_suites), test_suites, lines)
- else:
- return TestResult(TestStatus.NO_TESTS, [], lines)
+def green(text: str) -> str:
+	"""Returns the input string wrapped in the green color code."""
+ return '\033[1;32m' + text + RESET
-class TestCounts:
- passed: int
- failed: int
- crashed: int
- skipped: int
+ANSI_LEN = len(red(''))
- def __init__(self):
- self.passed = 0
- self.failed = 0
- self.crashed = 0
- self.skipped = 0
+def print_with_timestamp(message: str) -> None:
+ """Prints message with timestamp at beginning."""
+ print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
- def total(self) -> int:
- return self.passed + self.failed + self.crashed + self.skipped
-
-def print_and_count_results(test_result: TestResult) -> TestCounts:
- counts = TestCounts()
- for test_suite in test_result.suites:
- if test_suite.status == TestStatus.SUCCESS:
- print_suite_divider(green('[PASSED] ') + test_suite.name)
- elif test_suite.status == TestStatus.SKIPPED:
- print_suite_divider(yellow('[SKIPPED] ') + test_suite.name)
- elif test_suite.status == TestStatus.TEST_CRASHED:
- print_suite_divider(red('[CRASHED] ' + test_suite.name))
+def format_test_divider(message: str, len_message: int) -> str:
+ """
+ Returns string with message centered in fixed width divider.
+
+ Example:
+ '===================== message example ====================='
+
+ Parameters:
+ message - message to be centered in divider line
+	len_message - length of the message as printed, excluding any
+		color code characters
+
+ Return:
+ String containing message centered in fixed width divider
+ """
+ default_count = 3 # default number of dashes
+ len_1 = default_count
+ len_2 = default_count
+ difference = len(DIVIDER) - len_message - 2 # 2 spaces added
+ if difference > 0:
+ # calculate number of dashes for each side of the divider
+ len_1 = int(difference / 2)
+ len_2 = difference - len_1
+ return ('=' * len_1) + ' ' + message + ' ' + ('=' * len_2)
+
+def print_test_header(test: Test) -> None:
+ """
+ Prints test header with test name and optionally the expected number
+ of subtests.
+
+ Example:
+ '=================== example (2 subtests) ==================='
+
+ Parameters:
+ test - Test object representing current test being printed
+ """
+ message = test.name
+ if test.expected_count:
+ message += ' (' + str(test.expected_count) + ' subtests)'
+ print_with_timestamp(format_test_divider(message, len(message)))
+
+def print_log(log: Iterable[str]) -> None:
+ """
+	Prints all strings in the test's saved log in yellow.
+
+ Parameters:
+ log - Iterable object with all strings saved in log for test
+ """
+ for m in log:
+ print_with_timestamp(yellow(m))
+ print_with_timestamp('')
+
+def format_test_result(test: Test) -> str:
+ """
+	Returns string with the formatted test result: the colored status
+	followed by the test name.
+
+ Example:
+ '[PASSED] example'
+
+ Parameters:
+ test - Test object representing current test being printed
+
+ Return:
+ String containing formatted test result
+ """
+ if test.status == TestStatus.SUCCESS:
+ return (green('[PASSED] ') + test.name)
+ elif test.status == TestStatus.SKIPPED:
+ return (yellow('[SKIPPED] ') + test.name)
+ elif test.status == TestStatus.TEST_CRASHED:
+ print_log(test.log)
+ return (red('[CRASHED] ') + test.name)
+ else:
+ print_log(test.log)
+ return (red('[FAILED] ') + test.name)
+
+def print_test_result(test: Test) -> None:
+ """
+ Prints result line with status of test.
+
+ Example:
+ '[PASSED] example'
+
+ Parameters:
+ test - Test object representing current test being printed
+ """
+ print_with_timestamp(format_test_result(test))
+
+def print_test_footer(test: Test) -> None:
+ """
+ Prints test footer with status of test.
+
+ Example:
+ '===================== [PASSED] example ====================='
+
+ Parameters:
+ test - Test object representing current test being printed
+ """
+ message = format_test_result(test)
+ print_with_timestamp(format_test_divider(message,
+ len(message) - ANSI_LEN))
+
+def print_summary_line(test: Test) -> None:
+ """
+ Prints summary line of test object. Color of line is dependent on
+ status of test. Color is green if test passes, yellow if test is
+	skipped, and red if the test fails or crashes. The summary line
+	contains the status counts of the test's subtests, or of the test
+	itself if it has no subtests.
+
+ Example:
+ "Testing complete. Passed: 2, Failed: 0, Crashed: 0, Skipped: 0,
+ Errors: 0"
+
+	Parameters:
+	test - Test object representing current test being printed
+ """
+ if test.status == TestStatus.SUCCESS or \
+ test.status == TestStatus.NO_TESTS:
+ color = green
+ elif test.status == TestStatus.SKIPPED:
+ color = yellow
+ else:
+ color = red
+ counts = test.counts
+ print_with_timestamp(color('Testing complete. ' + str(counts)))
+
+def print_error(error_message: str) -> None:
+ """
+ Prints error message with error format.
+
+ Example:
+ "[ERROR] Test example: missing test plan!"
+
+ Parameters:
+ error_message - message describing error
+ """
+ print_with_timestamp(red('[ERROR] ') + error_message)
+
+# Other methods:
+
+def bubble_up_test_results(test: Test) -> None:
+ """
+	If the test has subtests, adds the counts of the subtests to the
+	test and, if any subtest crashed, sets the test status to
+	crashed. Otherwise, if the test has no subtests, adds the status
+	of the test to the test counts.
+
+ Parameters:
+ test - Test object for current test being parsed
+ """
+ parse_crash_in_log(test)
+ subtests = test.subtests
+ counts = test.counts
+ status = test.status
+ for t in subtests:
+ counts.add_subtest_counts(t.counts)
+ if counts.total() == 0:
+ counts.add_status(status)
+ elif test.counts.get_status() == TestStatus.TEST_CRASHED:
+ test.status = TestStatus.TEST_CRASHED
+
+def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
+ """
+	Finds the next test to parse in the LineStream, creates a new
+	Test object, parses any subtests of the test, populates the Test
+	object with all information (status, name) about the test and the
+	Test objects for any subtests, and then returns the Test object.
+	The method accepts three formats of tests:
+
+ Accepted test formats:
+
+ - Main KTAP/TAP header
+
+ Example:
+
+ KTAP version 1
+ 1..4
+ [subtests]
+
+ - Subtest header line
+
+ Example:
+
+ # Subtest: name
+ 1..3
+ [subtests]
+ ok 1 name
+
+ - Test result line
+
+ Example:
+
+ ok 1 - test
+
+ Parameters:
+ lines - LineStream of KTAP output to parse
+ expected_num - expected test number for test to be parsed
+ log - list of strings containing any preceding diagnostic lines
+ corresponding to the current test
+
+ Return:
+ Test object populated with characteristics and any subtests
+ """
+ test = Test()
+ test.log.extend(log)
+ parent_test = False
+ main = parse_ktap_header(lines, test)
+ if main:
+ # If KTAP/TAP header is found, attempt to parse
+ # test plan
+ test.name = "main"
+ parse_test_plan(lines, test)
+ else:
+		# If KTAP/TAP header is not found, test must be a subtest
+		# header or test result line, so attempt to parse a
+		# subtest header
+ parent_test = parse_test_header(lines, test)
+ if parent_test:
+ # If subtest header is found, attempt to parse
+ # test plan and print header
+ parse_test_plan(lines, test)
+ print_test_header(test)
+ expected_count = test.expected_count
+ subtests = []
+ test_num = 1
+ while main or expected_count is None or test_num <= expected_count:
+ # Loop to parse any subtests.
+		# If the test is the main test, do not break until no
+		# lines are left. Otherwise, break after parsing the
+		# expected number of tests or, if the expected number is
+		# unknown, when a test result line matching the subtest
+		# header name is found.
+ if not lines:
+ if expected_count and test_num <= expected_count:
+ test.add_error('missing expected subtests!')
+ break
+ sub_log = parse_diagnostic(lines)
+ if not expected_count and not main and \
+ peek_test_name_match(lines, test):
+ test.log.extend(sub_log)
+ break
+ subtests.append(parse_test(lines, test_num, sub_log))
+ test_num += 1
+ test.subtests = subtests
+ if not main:
+ # If not main test, look for test result line
+ test.log.extend(parse_diagnostic(lines))
+ if (parent_test and peek_test_name_match(lines, test)) or \
+ not parent_test:
+ parse_test_result(lines, test, expected_num)
else:
- print_suite_divider(red('[FAILED] ') + test_suite.name)
- for test_case in test_suite.cases:
- if test_case.status == TestStatus.SUCCESS:
- counts.passed += 1
- print_with_timestamp(green('[PASSED] ') + test_case.name)
- elif test_case.status == TestStatus.SKIPPED:
- counts.skipped += 1
- print_with_timestamp(yellow('[SKIPPED] ') + test_case.name)
- elif test_case.status == TestStatus.TEST_CRASHED:
- counts.crashed += 1
- print_with_timestamp(red('[CRASHED] ' + test_case.name))
- print_log(map(yellow, test_case.log))
- print_with_timestamp('')
- else:
- counts.failed += 1
- print_with_timestamp(red('[FAILED] ') + test_case.name)
- print_log(map(yellow, test_case.log))
- print_with_timestamp('')
- return counts
+ test.add_error('missing subtest result line!')
+ # Add statuses to TestCounts attribute in Test object
+ bubble_up_test_results(test)
+ if parent_test:
+ # If test has subtests and is not the main test object, print
+ # footer.
+ print_test_footer(test)
+ elif not main:
+ print_test_result(test)
+ return test
def parse_run_tests(kernel_output: Iterable[str]) -> TestResult:
- counts = TestCounts()
+ """
+	Using kernel output, extracts KTAP lines, parses them for test
+	results, and prints condensed test results and a summary line.
+
+ Parameters:
+	kernel_output - Iterable object containing lines of kernel output
+
+ Return:
+	TestResult - Tuple containing the status of the main test object,
+	the main test object with all subtests, and a log of all KTAP lines.
+ """
+ print_with_timestamp(DIVIDER)
lines = extract_tap_lines(kernel_output)
- test_result = parse_test_result(lines)
- if test_result.status == TestStatus.NO_TESTS:
- print(red('[ERROR] ') + yellow('no tests run!'))
- elif test_result.status == TestStatus.FAILURE_TO_PARSE_TESTS:
- print(red('[ERROR] ') + yellow('could not parse test results!'))
+ test = Test()
+ if not lines:
+ test.add_error('invalid KTAP input!')
+ test.status = TestStatus.FAILURE_TO_PARSE_TESTS
else:
- counts = print_and_count_results(test_result)
+ test = parse_test(lines, 0, [])
+ if test.status != TestStatus.NO_TESTS:
+ test.status = test.counts.get_status()
print_with_timestamp(DIVIDER)
- if test_result.status == TestStatus.SUCCESS:
- fmt = green
- elif test_result.status == TestStatus.SKIPPED:
- fmt = yellow
- else:
- fmt =red
- print_with_timestamp(
- fmt('Testing complete. %d tests run. %d failed. %d crashed. %d skipped.' %
- (counts.total(), counts.failed, counts.crashed, counts.skipped)))
- return test_result
+ print_summary_line(test)
+ return TestResult(test.status, test, lines)
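
A minimal usage sketch of the reworked parser may help here. It assumes
it is run from tools/testing/kunit (so kunit_parser is importable) and
that parse_run_tests keeps the signature shown above; the KTAP sample
and the expected values are inferred from the docstrings, not taken
from a real run.

  import kunit_parser

  # Hand-written KTAP output: one suite ("example") with one passing case.
  ktap = [
      'KTAP version 1\n',
      '1..1\n',
      '    # Subtest: example\n',
      '    1..1\n',
      '    ok 1 - example_simple_test\n',
      'ok 1 - example\n',
  ]

  result = kunit_parser.parse_run_tests(ktap)
  print(result.status)              # expected: TestStatus.SUCCESS
  print(result.test.counts.passed)  # expected: 1

The nested '# Subtest' block exercises the recursive parse_test() path
described in the docstrings above.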
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 619c4554cbff..e527b90de8ea 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -106,10 +106,10 @@ class KUnitParserTest(unittest.TestCase):
with open(log_path) as file:
result = kunit_parser.extract_tap_lines(file.readlines())
self.assertContains('TAP version 14', result)
- self.assertContains(' # Subtest: example', result)
- self.assertContains(' 1..2', result)
- self.assertContains(' ok 1 - example_simple_test', result)
- self.assertContains(' ok 2 - example_mock_test', result)
+ self.assertContains('# Subtest: example', result)
+ self.assertContains('1..2', result)
+ self.assertContains('ok 1 - example_simple_test', result)
+ self.assertContains('ok 2 - example_mock_test', result)
self.assertContains('ok 1 - example', result)
def test_output_with_prefix_isolated_correctly(self):
@@ -117,28 +117,28 @@ class KUnitParserTest(unittest.TestCase):
with open(log_path) as file:
result = kunit_parser.extract_tap_lines(file.readlines())
self.assertContains('TAP version 14', result)
- self.assertContains(' # Subtest: kunit-resource-test', result)
- self.assertContains(' 1..5', result)
- self.assertContains(' ok 1 - kunit_resource_test_init_resources', result)
- self.assertContains(' ok 2 - kunit_resource_test_alloc_resource', result)
- self.assertContains(' ok 3 - kunit_resource_test_destroy_resource', result)
- self.assertContains(' foo bar #', result)
- self.assertContains(' ok 4 - kunit_resource_test_cleanup_resources', result)
- self.assertContains(' ok 5 - kunit_resource_test_proper_free_ordering', result)
+ self.assertContains('# Subtest: kunit-resource-test', result)
+ self.assertContains('1..5', result)
+ self.assertContains('ok 1 - kunit_resource_test_init_resources', result)
+ self.assertContains('ok 2 - kunit_resource_test_alloc_resource', result)
+ self.assertContains('ok 3 - kunit_resource_test_destroy_resource', result)
+ self.assertContains('foo bar #', result)
+ self.assertContains('ok 4 - kunit_resource_test_cleanup_resources', result)
+ self.assertContains('ok 5 - kunit_resource_test_proper_free_ordering', result)
self.assertContains('ok 1 - kunit-resource-test', result)
- self.assertContains(' foo bar # non-kunit output', result)
- self.assertContains(' # Subtest: kunit-try-catch-test', result)
- self.assertContains(' 1..2', result)
- self.assertContains(' ok 1 - kunit_test_try_catch_successful_try_no_catch',
+ self.assertContains('foo bar # non-kunit output', result)
+ self.assertContains('# Subtest: kunit-try-catch-test', result)
+ self.assertContains('1..2', result)
+ self.assertContains('ok 1 - kunit_test_try_catch_successful_try_no_catch',
result)
- self.assertContains(' ok 2 - kunit_test_try_catch_unsuccessful_try_does_catch',
+ self.assertContains('ok 2 - kunit_test_try_catch_unsuccessful_try_does_catch',
result)
self.assertContains('ok 2 - kunit-try-catch-test', result)
- self.assertContains(' # Subtest: string-stream-test', result)
- self.assertContains(' 1..3', result)
- self.assertContains(' ok 1 - string_stream_test_empty_on_creation', result)
- self.assertContains(' ok 2 - string_stream_test_not_empty_after_add', result)
- self.assertContains(' ok 3 - string_stream_test_get_string', result)
+ self.assertContains('# Subtest: string-stream-test', result)
+ self.assertContains('1..3', result)
+ self.assertContains('ok 1 - string_stream_test_empty_on_creation', result)
+ self.assertContains('ok 2 - string_stream_test_not_empty_after_add', result)
+ self.assertContains('ok 3 - string_stream_test_get_string', result)
self.assertContains('ok 3 - string-stream-test', result)
def test_parse_successful_test_log(self):
@@ -148,6 +148,13 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
+ def test_parse_successful_nested_tests_log(self):
+ all_passed_log = test_data_path('test_is_test_passed-all_passed_nested.log')
+ with open(all_passed_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
def test_parse_failed_test_log(self):
failed_log = test_data_path('test_is_test_passed-failure.log')
@@ -162,17 +169,31 @@ class KUnitParserTest(unittest.TestCase):
with open(empty_log) as file:
result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(file.readlines()))
- self.assertEqual(0, len(result.suites))
+ self.assertEqual(0, len(result.test.subtests))
self.assertEqual(
kunit_parser.TestStatus.FAILURE_TO_PARSE_TESTS,
result.status)
+ def test_missing_test_plan(self):
+ missing_plan_log = test_data_path('test_is_test_passed-'
+ 'missing_plan.log')
+ with open(missing_plan_log) as file:
+ result = kunit_parser.parse_run_tests(
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ self.assertEqual(2, result.test.counts.errors)
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
+
def test_no_tests(self):
- empty_log = test_data_path('test_is_test_passed-no_tests_run_with_header.log')
- with open(empty_log) as file:
+ header_log = test_data_path('test_is_test_passed-'
+ 'no_tests_run_with_header.log')
+ with open(header_log) as file:
result = kunit_parser.parse_run_tests(
- kunit_parser.extract_tap_lines(file.readlines()))
- self.assertEqual(0, len(result.suites))
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ self.assertEqual(0, len(result.test.subtests))
self.assertEqual(
kunit_parser.TestStatus.NO_TESTS,
result.status)
@@ -182,15 +203,17 @@ class KUnitParserTest(unittest.TestCase):
print_mock = mock.patch('builtins.print').start()
with open(crash_log) as file:
result = kunit_parser.parse_run_tests(
- kunit_parser.extract_tap_lines(file.readlines()))
- print_mock.assert_any_call(StrContains('could not parse test results!'))
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ print_mock.assert_any_call(StrContains('invalid KTAP input!'))
print_mock.stop()
file.close()
def test_crashed_test(self):
crashed_log = test_data_path('test_is_test_passed-crash.log')
with open(crashed_log) as file:
- result = kunit_parser.parse_run_tests(file.readlines())
+ result = kunit_parser.parse_run_tests(
+ file.readlines())
self.assertEqual(
kunit_parser.TestStatus.TEST_CRASHED,
result.status)
@@ -216,6 +239,23 @@ class KUnitParserTest(unittest.TestCase):
result.status)
file.close()
+ def test_ignores_hyphen(self):
+ hyphen_log = test_data_path('test_strip_hyphen.log')
+ file = open(hyphen_log)
+ result = kunit_parser.parse_run_tests(file.readlines())
+
+		# Stripping the hyphen must not affect the parsed status or names.
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
+ self.assertEqual(
+ "sysctl_test",
+ result.test.subtests[0].name)
+ self.assertEqual(
+ "example",
+ result.test.subtests[1].name)
+ file.close()
+
def test_ignores_prefix_printk_time(self):
prefix_log = test_data_path('test_config_printk_time.log')
@@ -224,7 +264,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_ignores_multiple_prefixes(self):
prefix_log = test_data_path('test_multiple_prefixes.log')
@@ -233,7 +273,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_prefix_mixed_kernel_output(self):
mixed_prefix_log = test_data_path('test_interrupted_tap_output.log')
@@ -242,7 +282,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_prefix_poundsign(self):
pound_log = test_data_path('test_pound_sign.log')
@@ -251,16 +291,16 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_kernel_panic_end(self):
panic_log = test_data_path('test_kernel_panic_interrupt.log')
with open(panic_log) as file:
result = kunit_parser.parse_run_tests(file.readlines())
self.assertEqual(
- kunit_parser.TestStatus.TEST_CRASHED,
+ kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_pound_no_prefix(self):
pound_log = test_data_path('test_pound_no_prefix.log')
@@ -269,7 +309,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
class LinuxSourceTreeTest(unittest.TestCase):
@@ -291,6 +331,14 @@ class LinuxSourceTreeTest(unittest.TestCase):
pass
tree = kunit_kernel.LinuxSourceTree('', kunitconfig_path=dir)
+ def test_kselftest_nested(self):
+ kselftest_log = test_data_path('test_is_test_passed-kselftest.log')
+ with open(kselftest_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
+
# TODO: add more test cases.
@@ -322,6 +370,12 @@ class KUnitJsonTest(unittest.TestCase):
result = self._json_for('test_is_test_passed-no_tests_run_with_header.log')
self.assertEqual(0, len(result['sub_groups']))
+ def test_nested_json(self):
+ result = self._json_for('test_is_test_passed-all_passed_nested.log')
+ self.assertEqual(
+ {'name': 'example_simple_test', 'status': 'PASS'},
+ result["sub_groups"][0]["sub_groups"][0]["test_cases"][0])
+
class StrContains(str):
def __eq__(self, other):
return self in other
@@ -380,7 +434,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(e.exception.code, 1)
self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
- self.print_mock.assert_any_call(StrContains(' 0 tests run'))
+ self.print_mock.assert_any_call(StrContains('invalid KTAP input!'))
def test_exec_raw_output(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
@@ -388,7 +442,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
for call in self.print_mock.call_args_list:
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
- self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ self.assertNotEqual(call, mock.call(StrContains(' 0 tests run!')))
def test_run_raw_output(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
@@ -397,7 +451,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
for call in self.print_mock.call_args_list:
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
- self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ self.assertNotEqual(call, mock.call(StrContains(' 0 tests run!')))
def test_run_raw_output_kunit(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log b/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
new file mode 100644
index 000000000000..9d5b04fe43a6
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
@@ -0,0 +1,34 @@
+TAP version 14
+1..2
+ # Subtest: sysctl_test
+ 1..4
+ # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
+ ok 1 - sysctl_test_dointvec_null_tbl_data
+ # Subtest: example
+ 1..2
+ init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+ kunit example: all tests passed
+ ok 2 - example
+ # sysctl_test_dointvec_table_len_is_zero: sysctl_test_dointvec_table_len_is_zero passed
+ ok 3 - sysctl_test_dointvec_table_len_is_zero
+ # sysctl_test_dointvec_table_read_but_position_set: sysctl_test_dointvec_table_read_but_position_set passed
+ ok 4 - sysctl_test_dointvec_table_read_but_position_set
+kunit sysctl_test: all tests passed
+ok 1 - sysctl_test
+ # Subtest: example
+ 1..2
+init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+kunit example: all tests passed
+ok 2 - example
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-kselftest.log b/tools/testing/kunit/test_data/test_is_test_passed-kselftest.log
new file mode 100644
index 000000000000..65d3f27feaf2
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-kselftest.log
@@ -0,0 +1,14 @@
+TAP version 13
+1..2
+# selftests: membarrier: membarrier_test_single_thread
+# TAP version 13
+# 1..2
+# ok 1 sys_membarrier available
+# ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
+ok 1 selftests: membarrier: membarrier_test_single_thread
+# selftests: membarrier: membarrier_test_multi_thread
+# TAP version 13
+# 1..2
+# ok 1 sys_membarrier available
+# ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
+ok 2 selftests: membarrier: membarrier_test_multi_thread
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-missing_plan.log b/tools/testing/kunit/test_data/test_is_test_passed-missing_plan.log
new file mode 100644
index 000000000000..5cd17b7f818a
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-missing_plan.log
@@ -0,0 +1,31 @@
+KTAP version 1
+ # Subtest: sysctl_test
+ # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
+ ok 1 - sysctl_test_dointvec_null_tbl_data
+ # sysctl_test_dointvec_table_maxlen_unset: sysctl_test_dointvec_table_maxlen_unset passed
+ ok 2 - sysctl_test_dointvec_table_maxlen_unset
+ # sysctl_test_dointvec_table_len_is_zero: sysctl_test_dointvec_table_len_is_zero passed
+ ok 3 - sysctl_test_dointvec_table_len_is_zero
+ # sysctl_test_dointvec_table_read_but_position_set: sysctl_test_dointvec_table_read_but_position_set passed
+ ok 4 - sysctl_test_dointvec_table_read_but_position_set
+ # sysctl_test_dointvec_happy_single_positive: sysctl_test_dointvec_happy_single_positive passed
+ ok 5 - sysctl_test_dointvec_happy_single_positive
+ # sysctl_test_dointvec_happy_single_negative: sysctl_test_dointvec_happy_single_negative passed
+ ok 6 - sysctl_test_dointvec_happy_single_negative
+ # sysctl_test_dointvec_single_less_int_min: sysctl_test_dointvec_single_less_int_min passed
+ ok 7 - sysctl_test_dointvec_single_less_int_min
+ # sysctl_test_dointvec_single_greater_int_max: sysctl_test_dointvec_single_greater_int_max passed
+ ok 8 - sysctl_test_dointvec_single_greater_int_max
+kunit sysctl_test: all tests passed
+ok 1 - sysctl_test
+ # Subtest: example
+ 1..2
+init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+kunit example: all tests passed
+ok 2 - example
diff --git a/tools/testing/kunit/test_data/test_strip_hyphen.log b/tools/testing/kunit/test_data/test_strip_hyphen.log
new file mode 100644
index 000000000000..92ac7c24b374
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_strip_hyphen.log
@@ -0,0 +1,16 @@
+KTAP version 1
+1..2
+ # Subtest: sysctl_test
+ 1..1
+ # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
+ ok 1 - sysctl_test_dointvec_null_tbl_data
+kunit sysctl_test: all tests passed
+ok 1 - sysctl_test
+ # Subtest: example
+ 1..1
+init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 example_simple_test
+kunit example: all tests passed
+ok 2 example
--
2.33.0.259.gc128427fd7-goog
v7:
- Simplify the documentation patch (patch 5) as suggested by Tejun.
- Fix a typo in patch 2 and improper commit log in patch 3.
v6:
- Remove duplicated tmpmask from update_prstate() which should fix the
frame size too large problem reported by kernel test robot.
v5:
- Rebased to the latest for-5.15 branch of cgroup git tree and drop the
1st v4 patch as it has been merged.
- Update patch 1 to always allow changing partition root back to member
even if it invalidates child partitions underneath it.
- Adjust the empty effective cpu partition patch to not allow 0 effective
cpus for a terminal partition (which would make it invalid).
- Add a new patch to enable reading of cpuset.cpus.partition to display
the reason that causes invalid partition.
- Adjust the documentation and testing patch accordingly.
This patchset makes four enhancements to the cpuset v2 code.
Patch 1: Properly handle partition root tree and make partition
invalid in case changes to cpuset.cpus violate any of the partition
root constraints.
Patch 2: Enable the "cpuset.cpus.partition" file to show the reason
that causes invalid partition like "root invalid (No cpu available
due to hotplug)".
Patch 3: Add a new partition state "isolated" to create a partition
root without load balancing. This is for handling intermittent workloads
that have a strict low latency requirement.
Patch 4: Allow partition roots that are not the top cpuset to distribute
all its cpus to child partitions as long as there is no task associated
with that partition root. This allows more flexibility for middleware
to manage multiple partitions.
Patch 5 updates the cgroup-v2.rst file accordingly. Patch 6 adds a new
cpuset test to test the new cpuset partition code.
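As a rough illustration of the interface these patches extend, the
following sketch drives the cpuset v2 partition files from user space.
It assumes cgroup2 is mounted at /sys/fs/cgroup with the cpuset
controller available and sufficient privileges; the child cgroup name
"part1" is made up for illustration.

  from pathlib import Path

  root = Path('/sys/fs/cgroup')
  # Enable the cpuset controller for children (assumes it is not
  # already enabled).
  (root / 'cgroup.subtree_control').write_text('+cpuset')

  part = root / 'part1'
  part.mkdir(exist_ok=True)
  (part / 'cpuset.cpus').write_text('2-3')

  # Patch 3 adds the "isolated" type: a partition root without load
  # balancing.
  (part / 'cpuset.cpus.partition').write_text('isolated')

  # With patch 2, reading the file back may show why a partition is
  # invalid, e.g. "root invalid (No cpu available due to hotplug)".
  print((part / 'cpuset.cpus.partition').read_text().strip())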
Waiman Long (6):
cgroup/cpuset: Properly transition to invalid partition
cgroup/cpuset: Show invalid partition reason string
cgroup/cpuset: Add a new isolated cpus.partition type
cgroup/cpuset: Allow non-top parent partition to distribute out all
CPUs
cgroup/cpuset: Update description of cpuset.cpus.partition in
cgroup-v2.rst
kselftest/cgroup: Add cpuset v2 partition root state test
Documentation/admin-guide/cgroup-v2.rst | 112 +--
kernel/cgroup/cpuset.c | 337 ++++++---
tools/testing/selftests/cgroup/Makefile | 5 +-
.../selftests/cgroup/test_cpuset_prs.sh | 663 ++++++++++++++++++
tools/testing/selftests/cgroup/wait_inotify.c | 86 +++
5 files changed, 1050 insertions(+), 153 deletions(-)
create mode 100755 tools/testing/selftests/cgroup/test_cpuset_prs.sh
create mode 100644 tools/testing/selftests/cgroup/wait_inotify.c
--
2.18.1
The current PTP driver exposes one PTP device to user space, which
binds one or more network interfaces to provide timestamping. Using a
timecounter/cyclecounter, however, we can virtualize any number of PTP
clocks on top of the same free-running physical clock.
The purpose of having multiple PTP virtual clocks is to let user space
use them directly and easily for synchronizing multiple PTP domains.
user
space:    ^                              ^
          | SO_TIMESTAMPING new flag:    | Packets with
          | SOF_TIMESTAMPING_BIND_PHC    | TX/RX HW timestamps
          v                              v
        +--------------------------------------------+
sock:   |       sock (new member sk_bind_phc)        |
        +--------------------------------------------+
          ^                              ^
          | ethtool_get_phc_vclocks      | Convert HW timestamps
          |                              | to sk_bind_phc
          v                              v
        +--------------+--------------+--------------+
vclock: |     ptp1     |     ptp2     |     ptpN     |
        +--------------+--------------+--------------+
pclock: |              ptp0 free running             |
        +--------------------------------------------+
The block diagram above shows how it works. Besides the PTP virtual
clocks, conversion of packet HW timestamps to the bound PHC is also
done in the sock driver. For user space, PTP virtual clocks can be
created via sysfs, and the extended SO_TIMESTAMPING API (new flag
SOF_TIMESTAMPING_BIND_PHC) can be used to bind one PTP virtual clock
for timestamping (a user-space sketch follows the commands below).
The test tool timestamping.c (together with linuxptp phc_ctl tool) can
be used to verify:
# echo 4 > /sys/class/ptp/ptp0/n_vclocks
[ 129.399472] ptp ptp0: new virtual clock ptp2
[ 129.404234] ptp ptp0: new virtual clock ptp3
[ 129.409532] ptp ptp0: new virtual clock ptp4
[ 129.413942] ptp ptp0: new virtual clock ptp5
[ 129.418257] ptp ptp0: guarantee physical clock free running
#
# phc_ctl /dev/ptp2 set 10000
# phc_ctl /dev/ptp3 set 20000
#
# timestamping eno0 2 SOF_TIMESTAMPING_TX_HARDWARE SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_BIND_PHC
# timestamping eno0 2 SOF_TIMESTAMPING_RX_HARDWARE SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_BIND_PHC
# timestamping eno0 3 SOF_TIMESTAMPING_TX_HARDWARE SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_BIND_PHC
# timestamping eno0 3 SOF_TIMESTAMPING_RX_HARDWARE SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_BIND_PHC
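To show the shape of the extended socket API rather than the test tool,
here is a hedged sketch of binding a socket's hardware timestamps to
one of the virtual clocks created above. The struct so_timestamping
layout { int flags; int bind_phc; } and the SOF_TIMESTAMPING_BIND_PHC
flag come from this series; the numeric constants are copied from the
uapi headers, and the PHC index 2 (/dev/ptp2) is only an example.

  import socket
  import struct

  SO_TIMESTAMPING = 37                  # asm-generic/socket.h (old variant)
  SOF_TIMESTAMPING_TX_HARDWARE = 1 << 0
  SOF_TIMESTAMPING_RX_HARDWARE = 1 << 2
  SOF_TIMESTAMPING_RAW_HARDWARE = 1 << 6
  SOF_TIMESTAMPING_BIND_PHC = 1 << 15   # new flag in this series

  flags = (SOF_TIMESTAMPING_TX_HARDWARE |
           SOF_TIMESTAMPING_RX_HARDWARE |
           SOF_TIMESTAMPING_RAW_HARDWARE |
           SOF_TIMESTAMPING_BIND_PHC)

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  # struct so_timestamping { int flags; int bind_phc; }:
  # bind this socket's HW timestamps to virtual clock /dev/ptp2.
  sock.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMPING,
                  struct.pack('ii', flags, 2))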
Changes for v2:
- Converted to num_vclocks for creating virtual clocks.
- Guaranteed physical clock free running when using virtual
clocks.
- Fixed build warning.
- Updated copyright.
Changes for v3:
- Supported PTP virtual clock in default in PTP driver.
- Protected concurrency of ptp->num_vclocks accessing.
- Supported PHC vclocks query via ethtool.
- Extended SO_TIMESTAMPING API for PHC binding.
- Converted HW timestamps to PHC bound, instead of previous
binding domain value to PHC idea.
- Other minor fixes.
Changes for v4:
- Used do_aux_work callback for vclock refreshing instead.
- Used unsigned int for vclocks number, and max_vclocks
for limitation.
- Fixed mutex locking.
- Dynamically allocated memory for vclock index storage.
- Removed ethtool ioctl command for vclocks getting.
- Updated doc for ethtool phc vclocks get.
- Converted to mptcp_setsockopt_sol_socket_timestamping().
- Passed so_timestamping for sock_set_timestamping.
- Fixed checkpatch/build.
- Other minor fixes.
Yangbo Lu (11):
ptp: add ptp virtual clock driver framework
ptp: support ptp physical/virtual clocks conversion
ptp: track available ptp vclocks information
ptp: add kernel API ptp_get_vclocks_index()
ethtool: add a new command for getting PHC virtual clocks
ptp: add kernel API ptp_convert_timestamp()
mptcp: setsockopt: convert to
mptcp_setsockopt_sol_socket_timestamping()
net: sock: extend SO_TIMESTAMPING for PHC binding
net: socket: support hardware timestamp conversion to PHC bound
selftests/net: timestamping: support binding PHC
MAINTAINERS: add entry for PTP virtual clock driver
Documentation/ABI/testing/sysfs-ptp | 20 ++
Documentation/networking/ethtool-netlink.rst | 22 ++
MAINTAINERS | 7 +
drivers/ptp/Makefile | 2 +-
drivers/ptp/ptp_clock.c | 41 +++-
drivers/ptp/ptp_private.h | 39 ++++
drivers/ptp/ptp_sysfs.c | 160 ++++++++++++++
drivers/ptp/ptp_vclock.c | 219 +++++++++++++++++++
include/linux/ethtool.h | 10 +
include/linux/ptp_clock_kernel.h | 31 ++-
include/net/sock.h | 8 +-
include/uapi/linux/ethtool_netlink.h | 15 ++
include/uapi/linux/net_tstamp.h | 17 +-
net/core/sock.c | 65 +++++-
net/ethtool/Makefile | 2 +-
net/ethtool/common.c | 14 ++
net/ethtool/netlink.c | 10 +
net/ethtool/netlink.h | 2 +
net/ethtool/phc_vclocks.c | 94 ++++++++
net/mptcp/sockopt.c | 69 ++++--
net/socket.c | 19 +-
tools/testing/selftests/net/timestamping.c | 62 ++++--
22 files changed, 875 insertions(+), 53 deletions(-)
create mode 100644 drivers/ptp/ptp_vclock.c
create mode 100644 net/ethtool/phc_vclocks.c
base-commit: 19938bafa7ae8fc0a4a2c1c1430abb1a04668da1
--
2.25.1
Synchronous Ethernet networks use a physical layer clock to syntonize
the frequency across different network elements.
Multiple reference clock sources can be used. Clocks recovered from
PHY ports on the RX side or external sources like 1PPS GPS, etc.
This patch series introduces basic interface for reading the DPLL
state on a SyncE capable device. This state gives us information
about the source of the syntonization signal and whether the DPLL
circuit is tuned to the incoming signal.
Next steps:
- add interface to enable recovered clocks and get information
about them
Maciej Machnikowski (2):
rtnetlink: Add new RTM_GETSYNCESTATE message to get SyncE status
ice: add support for reading SyncE DPLL state
drivers/net/ethernet/intel/ice/ice.h | 5 ++
.../net/ethernet/intel/ice/ice_adminq_cmd.h | 34 ++++++++
drivers/net/ethernet/intel/ice/ice_common.c | 62 +++++++++++++++
drivers/net/ethernet/intel/ice/ice_common.h | 4 +
drivers/net/ethernet/intel/ice/ice_devids.h | 3 +
drivers/net/ethernet/intel/ice/ice_main.c | 55 +++++++++++++
drivers/net/ethernet/intel/ice/ice_ptp.c | 35 +++++++++
drivers/net/ethernet/intel/ice/ice_ptp_hw.c | 44 +++++++++++
drivers/net/ethernet/intel/ice/ice_ptp_hw.h | 22 ++++++
include/linux/netdevice.h | 6 ++
include/uapi/linux/if_link.h | 43 +++++++++++
include/uapi/linux/rtnetlink.h | 11 ++-
net/core/rtnetlink.c | 77 +++++++++++++++++++
security/selinux/nlmsgtab.c | 3 +-
14 files changed, 399 insertions(+), 5 deletions(-)
--
2.26.3
From: Changcheng Deng <deng.changcheng(a)zte.com.cn>
tools/testing/selftests/move_mount_set_group/move_mount_set_group_test.c:
225:18-23:WARNING: conversion to bool not needed here
Because the function is defined as
"static int move_mount_set_group_supported(void)",
the return value should be an int rather than a bool.
Reported-by: Zeal Robot <zealci(a)zte.com.cn>
Signed-off-by: Changcheng Deng <deng.changcheng(a)zte.com.cn>
---
.../testing/selftests/move_mount_set_group/move_mount_set_group_test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/move_mount_set_group/move_mount_set_group_test.c b/tools/testing/selftests/move_mount_set_group/move_mount_set_group_test.c
index 860198f..beade21 100644
--- a/tools/testing/selftests/move_mount_set_group/move_mount_set_group_test.c
+++ b/tools/testing/selftests/move_mount_set_group/move_mount_set_group_test.c
@@ -222,7 +222,7 @@ static int move_mount_set_group_supported(void)
AT_FDCWD, SET_GROUP_TO, MOVE_MOUNT_SET_GROUP);
umount2("/tmp", MNT_DETACH);
- return ret < 0 ? false : true;
+ return ret < 0 ? 0 : 1;
}
FIXTURE(move_mount_set_group) {
--
1.8.3.1
Patch 1 fixes a KVM+rseq bug where KVM's handling of TIF_NOTIFY_RESUME,
e.g. for task migration, clears the flag without informing rseq and leads
to stale data in userspace's rseq struct.
Patch 2 is a cleanup to try and make future bugs less likely. It's also
a baby step towards moving and renaming tracehook_notify_resume() since
it has nothing to do with tracing.
Patch 3 is a fix/cleanup to stop overriding x86's unistd_{32,64}.h when
the include path (intentionally) omits tools' uapi headers. KVM's
selftests do exactly that so that they can pick up the uapi headers from
the installed kernel headers, and still use various tools/ headers that
mirror kernel code, e.g. linux/types.h. This allows the new test in
patch 4 to reference __NR_rseq without having to manually define it.
Patch 4 is a regression test for the KVM+rseq bug.
Patch 5 is a cleanup made possible by patch 3.
v2:
- Don't touch rseq_cs when handling KVM case so that rseq_syscall() will
  still detect a naughty userspace. [Mathieu]
- Use a sequence counter + retry in the test to ensure the process isn't
  migrated between sched_getcpu() and reading rseq.cpu_id, i.e. to
  avoid a flaky test (see the sketch after these notes). [Mathieu]
- Add Mathieu's ack for patch 2.
- Add more comments in the test.
v1: https://lkml.kernel.org/r/20210818001210.4073390-1-seanjc@google.com
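The sequence counter + retry scheme mentioned in the v2 notes is the
classic seqcount reader pattern. A minimal, purely illustrative sketch
follows (the real test is C code in rseq_test.c; the names here are
invented):

  class SeqCount:
      """Toy seqcount: the writer bumps the counter around each update."""
      def __init__(self):
          self.seq = 0

      def write(self, do_write):
          self.seq += 1          # odd: write in progress
          do_write()
          self.seq += 1          # even: write complete

  def consistent_read(sc, read_value):
      """Retry until the value is read with no concurrent write."""
      while True:
          start = sc.seq
          if start % 2:          # writer active, try again
              continue
          value = read_value()
          if sc.seq == start:    # no write raced with the read
              return value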
Sean Christopherson (5):
KVM: rseq: Update rseq when processing NOTIFY_RESUME on xfer to KVM
guest
entry: rseq: Call rseq_handle_notify_resume() in
tracehook_notify_resume()
tools: Move x86 syscall number fallbacks to .../uapi/
KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration
bugs
KVM: selftests: Remove __NR_userfaultfd syscall fallback
arch/arm/kernel/signal.c | 1 -
arch/arm64/kernel/signal.c | 1 -
arch/csky/kernel/signal.c | 4 +-
arch/mips/kernel/signal.c | 4 +-
arch/powerpc/kernel/signal.c | 4 +-
arch/s390/kernel/signal.c | 1 -
include/linux/tracehook.h | 2 +
kernel/entry/common.c | 4 +-
kernel/rseq.c | 14 +-
.../x86/include/{ => uapi}/asm/unistd_32.h | 0
.../x86/include/{ => uapi}/asm/unistd_64.h | 3 -
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 3 +
tools/testing/selftests/kvm/rseq_test.c | 154 ++++++++++++++++++
14 files changed, 175 insertions(+), 21 deletions(-)
rename tools/arch/x86/include/{ => uapi}/asm/unistd_32.h (100%)
rename tools/arch/x86/include/{ => uapi}/asm/unistd_64.h (83%)
create mode 100644 tools/testing/selftests/kvm/rseq_test.c
--
2.33.0.rc2.250.ged5fa647cd-goog
0Day checks whether all configs listed under selftests can be enabled
properly. For missing configs, it reports something like:
LKP WARN miss config CONFIG_SYNC= of sync/config
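The check amounts to comparing the CONFIG_ options named in each
selftest config fragment against the kernel's .config. A hedged sketch
of that comparison (paths illustrative; this is not 0Day's actual
implementation):

  import re

  CONFIG_RE = re.compile(r'(CONFIG_[A-Z0-9_]+)=')

  def configs_in(path):
      """Collect CONFIG_... option names assigned in a config file."""
      with open(path) as f:
          return {m.group(1) for m in map(CONFIG_RE.match, f) if m}

  def missing_configs(fragment, dotconfig):
      """Return fragment options that the kernel .config does not set."""
      return configs_in(fragment) - configs_in(dotconfig)

  # Before this series, sync/config still listed the dropped CONFIG_SYNC,
  # which triggers the warning shown above.
  print(missing_configs('tools/testing/selftests/sync/config', '.config'))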
CC: kernel test robot <lkp(a)intel.com>
CC: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
CC: Nick Desaulniers <ndesaulniers(a)google.com>
CC: Masahiro Yamada <masahiroy(a)kernel.org>
CC: wireguard(a)lists.zx2c4.com
CC: netdev(a)vger.kernel.org
CC: "Rafael J. Wysocki" <rjw(a)rjwysocki.net>
CC: Viresh Kumar <viresh.kumar(a)linaro.org>
CC: linux-pm(a)vger.kernel.org
Reported-by: kernel test robot <lkp(a)intel.com>
Li Zhijian (3):
selftests/sync: Remove the deprecated config SYNC
selftests/cpufreq: Rename DEBUG_PI_LIST to DEBUG_PLIST
selftests/wireguard: Rename DEBUG_PI_LIST to DEBUG_PLIST
tools/testing/selftests/cpufreq/config | 2 +-
tools/testing/selftests/sync/config | 1 -
tools/testing/selftests/wireguard/qemu/debug.config | 2 +-
3 files changed, 2 insertions(+), 3 deletions(-)
--
2.31.1
From: Colin Ian King <colin.king(a)canonical.com>
There is a spelling mistake in an error message. Fix it.
Signed-off-by: Colin Ian King <colin.king(a)canonical.com>
---
tools/testing/selftests/safesetid/safesetid-test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/safesetid/safesetid-test.c b/tools/testing/selftests/safesetid/safesetid-test.c
index 0c4d50644c13..4b809c93ba36 100644
--- a/tools/testing/selftests/safesetid/safesetid-test.c
+++ b/tools/testing/selftests/safesetid/safesetid-test.c
@@ -152,7 +152,7 @@ static void write_policies(void)
fd = open(add_whitelist_policy_file, O_WRONLY);
if (fd < 0)
- die("cant open add_whitelist_policy file\n");
+ die("can't open add_whitelist_policy file\n");
written = write(fd, policy_str, strlen(policy_str));
if (written != strlen(policy_str)) {
if (written >= 0) {
--
2.32.0
From: Colin Ian King <colin.king(a)canonical.com>
There is a spelling mistake in an error message. Fix it.
Signed-off-by: Colin Ian King <colin.king(a)canonical.com>
---
tools/testing/selftests/vm/mlock-random-test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/vm/mlock-random-test.c b/tools/testing/selftests/vm/mlock-random-test.c
index ff4d72eb74b9..782ea94dee2f 100644
--- a/tools/testing/selftests/vm/mlock-random-test.c
+++ b/tools/testing/selftests/vm/mlock-random-test.c
@@ -70,7 +70,7 @@ int get_proc_locked_vm_size(void)
}
}
- perror("cann't parse VmLck in /proc/self/status\n");
+ perror("cannot parse VmLck in /proc/self/status\n");
fclose(f);
return -1;
}
--
2.32.0
v6:
- Remove the duplicated tmpmask from update_prstate(), which should fix the
frame-size-too-large problem reported by the kernel test robot.
v5:
- Rebased to the latest for-5.15 branch of cgroup git tree and drop the
1st v4 patch as it has been merged.
- Update patch 1 to always allow changing a partition root back to member
even if it invalidates child partitions underneath it.
- Adjust the empty effective cpu partition patch to not allow 0 effective
cpus for a terminal partition, which would make it invalid.
- Add a new patch to enable reading of cpuset.cpus.partition to display
the reason that causes invalid partition.
- Adjust the documentation and testing patch accordingly.
v4:
- Rebased to the for-5.15 branch of cgroup git tree and dropped the
first 3 patches of v3 series which have been merged.
- Besides prohibiting violations of the cpu exclusivity rule, allow arbitrary
changes to cpuset.cpus of a partition root and force the partition root
to become invalid in case any of the partition root constraints
are violated. The documentation file and self test are modified
accordingly.
This patchset makes four enhancements to the cpuset v2 code.
Patch 1: Properly handle partition root tree and make partition
invalid in case changes to cpuset.cpus violate any of the partition
root constraints.
Patch 2: Enable the "cpuset.cpus.partition" file to show the reason
that causes invalid partition like "root invalid (No cpu available
due to hotplug)".
Patch 3: Add a new partition state "isolated" to create a partition
root without load balancing. This is for handling intermittent workloads
that have a strict low latency requirement.
Patch 4: Allow partition roots that are not the top cpuset to distribute
all their cpus to child partitions as long as there is no task associated
with that partition root. This allows more flexibility for middleware
to manage multiple partitions.
Patch 5 updates the cgroup-v2.rst file accordingly. Patch 6 adds a new
cpuset test to test the new cpuset partition code.
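For context, the interface these patches extend is a plain cgroupfs file. A
minimal sketch, assuming a cgroup at the hypothetical path
/sys/fs/cgroup/test with cpuset.cpus already configured, of switching it to
the new "isolated" type and reading back the (possibly invalid) state:
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
int main(void)
{
	int fd = open("/sys/fs/cgroup/test/cpuset.cpus.partition", O_RDWR);
	char buf[128];
	ssize_t n;
	if (fd < 0)
		return 1;
	if (write(fd, "isolated", strlen("isolated")) < 0)
		perror("write");
	lseek(fd, 0, SEEK_SET);
	n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		/* e.g. "isolated", or after patch 2 something like
		 * "isolated invalid (<reason>)" */
		fputs(buf, stdout);
	}
	close(fd);
	return 0;
}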
Waiman Long (6):
cgroup/cpuset: Properly transition to invalid partition
cgroup/cpuset: Show invalid partition reason string
cgroup/cpuset: Add a new isolated cpus.partition type
cgroup/cpuset: Allow non-top parent partition to distribute out all
CPUs
cgroup/cpuset: Update description of cpuset.cpus.partition in
cgroup-v2.rst
kselftest/cgroup: Add cpuset v2 partition root state test
Documentation/admin-guide/cgroup-v2.rst | 116 +--
kernel/cgroup/cpuset.c | 337 ++++++---
tools/testing/selftests/cgroup/Makefile | 5 +-
.../selftests/cgroup/test_cpuset_prs.sh | 663 ++++++++++++++++++
tools/testing/selftests/cgroup/wait_inotify.c | 86 +++
5 files changed, 1058 insertions(+), 149 deletions(-)
create mode 100755 tools/testing/selftests/cgroup/test_cpuset_prs.sh
create mode 100644 tools/testing/selftests/cgroup/wait_inotify.c
--
2.18.1
There are several test cases in the net directory that are still using
exit 0 or exit 1 when they need to be skipped. Use the kselftest
framework skip code instead so it can help us distinguish the
return status.
Criterion to filter out what should be fixed in net directory:
grep -r "exit [01]" -B1 | grep -i skip
This change might cause some false-positives if people are running
these test scripts directly and only checking their return codes,
which will change from 0 to 4. However I think the impact should be
small as most of our scripts here are already using this skip code.
And there will be no such issue if running them with the kselftest
framework.
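For reference, the value 4 is the KSFT_SKIP code defined in
tools/testing/selftests/kselftest.h; a minimal C sketch of the same
convention (hypothetical test body, constant mirrored from that header):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define KSFT_SKIP 4	/* mirrors tools/testing/selftests/kselftest.h */
int main(void)
{
	if (geteuid() != 0) {
		/* exit(0) here would be counted as a pass; 4 tells the
		 * kselftest runner that this test was skipped */
		printf("SKIP: need root privileges\n");
		exit(KSFT_SKIP);
	}
	printf("ok: running as root\n");
	return 0;
}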
Signed-off-by: Po-Hsu Lin <po-hsu.lin(a)canonical.com>
---
tools/testing/selftests/net/fcnal-test.sh | 5 +++-
tools/testing/selftests/net/fib_rule_tests.sh | 7 ++++--
.../selftests/net/forwarding/devlink_lib.sh | 15 +++++++-----
tools/testing/selftests/net/forwarding/lib.sh | 27 ++++++++++++----------
.../selftests/net/forwarding/router_mpath_nh.sh | 2 +-
.../net/forwarding/router_mpath_nh_res.sh | 2 +-
tools/testing/selftests/net/run_afpackettests | 5 +++-
.../selftests/net/srv6_end_dt46_l3vpn_test.sh | 9 +++++---
.../selftests/net/srv6_end_dt4_l3vpn_test.sh | 9 +++++---
.../selftests/net/srv6_end_dt6_l3vpn_test.sh | 9 +++++---
tools/testing/selftests/net/unicast_extensions.sh | 5 +++-
.../testing/selftests/net/vrf_strict_mode_test.sh | 9 +++++---
12 files changed, 67 insertions(+), 37 deletions(-)
diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh
index a8ad928..9074e25 100755
--- a/tools/testing/selftests/net/fcnal-test.sh
+++ b/tools/testing/selftests/net/fcnal-test.sh
@@ -37,6 +37,9 @@
#
# server / client nomenclature relative to ns-A
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
VERBOSE=0
NSA_DEV=eth1
@@ -3946,7 +3949,7 @@ fi
which nettest >/dev/null
if [ $? -ne 0 ]; then
echo "'nettest' command not found; skipping tests"
- exit 0
+ exit $ksft_skip
fi
declare -i nfail=0
diff --git a/tools/testing/selftests/net/fib_rule_tests.sh b/tools/testing/selftests/net/fib_rule_tests.sh
index a93e6b6..43ea840 100755
--- a/tools/testing/selftests/net/fib_rule_tests.sh
+++ b/tools/testing/selftests/net/fib_rule_tests.sh
@@ -3,6 +3,9 @@
# This test is for checking IPv4 and IPv6 FIB rules API
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
ret=0
PAUSE_ON_FAIL=${PAUSE_ON_FAIL:=no}
@@ -238,12 +241,12 @@ run_fibrule_tests()
if [ "$(id -u)" -ne 0 ];then
echo "SKIP: Need root privileges"
- exit 0
+ exit $ksft_skip
fi
if [ ! -x "$(command -v ip)" ]; then
echo "SKIP: Could not run test without ip tool"
- exit 0
+ exit $ksft_skip
fi
# start clean
diff --git a/tools/testing/selftests/net/forwarding/devlink_lib.sh b/tools/testing/selftests/net/forwarding/devlink_lib.sh
index 13d3d44..2c14a86 100644
--- a/tools/testing/selftests/net/forwarding/devlink_lib.sh
+++ b/tools/testing/selftests/net/forwarding/devlink_lib.sh
@@ -1,6 +1,9 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
##############################################################################
# Defines
@@ -9,11 +12,11 @@ if [[ ! -v DEVLINK_DEV ]]; then
| jq -r '.port | keys[]' | cut -d/ -f-2)
if [ -z "$DEVLINK_DEV" ]; then
echo "SKIP: ${NETIFS[p1]} has no devlink device registered for it"
- exit 1
+ exit $ksft_skip
fi
if [[ "$(echo $DEVLINK_DEV | grep -c pci)" -eq 0 ]]; then
echo "SKIP: devlink device's bus is not PCI"
- exit 1
+ exit $ksft_skip
fi
DEVLINK_VIDDID=$(lspci -s $(echo $DEVLINK_DEV | cut -d"/" -f2) \
@@ -22,7 +25,7 @@ elif [[ ! -z "$DEVLINK_DEV" ]]; then
devlink dev show $DEVLINK_DEV &> /dev/null
if [ $? -ne 0 ]; then
echo "SKIP: devlink device \"$DEVLINK_DEV\" not found"
- exit 1
+ exit $ksft_skip
fi
fi
@@ -32,19 +35,19 @@ fi
devlink help 2>&1 | grep resource &> /dev/null
if [ $? -ne 0 ]; then
echo "SKIP: iproute2 too old, missing devlink resource support"
- exit 1
+ exit $ksft_skip
fi
devlink help 2>&1 | grep trap &> /dev/null
if [ $? -ne 0 ]; then
echo "SKIP: iproute2 too old, missing devlink trap support"
- exit 1
+ exit $ksft_skip
fi
devlink dev help 2>&1 | grep info &> /dev/null
if [ $? -ne 0 ]; then
echo "SKIP: iproute2 too old, missing devlink dev info support"
- exit 1
+ exit $ksft_skip
fi
##############################################################################
diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
index 42e28c9..e7fc5c3 100644
--- a/tools/testing/selftests/net/forwarding/lib.sh
+++ b/tools/testing/selftests/net/forwarding/lib.sh
@@ -4,6 +4,9 @@
##############################################################################
# Defines
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
# Can be overridden by the configuration file.
PING=${PING:=ping}
PING6=${PING6:=ping6}
@@ -38,7 +41,7 @@ check_tc_version()
tc -j &> /dev/null
if [[ $? -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc is missing JSON support"
- exit 1
+ exit $ksft_skip
fi
}
@@ -51,7 +54,7 @@ check_tc_mpls_support()
matchall action pipe &> /dev/null
if [[ $? -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc is missing MPLS support"
- return 1
+ return $ksft_skip
fi
tc filter del dev $dev ingress protocol mpls_uc pref 1 handle 1 \
matchall
@@ -69,7 +72,7 @@ check_tc_mpls_lse_stats()
if [[ $? -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc-flower is missing extended MPLS support"
- return 1
+ return $ksft_skip
fi
tc -j filter show dev $dev ingress protocol mpls_uc | jq . &> /dev/null
@@ -79,7 +82,7 @@ check_tc_mpls_lse_stats()
if [[ $ret -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc-flower produces invalid json output for extended MPLS filters"
- return 1
+ return $ksft_skip
fi
}
@@ -88,7 +91,7 @@ check_tc_shblock_support()
tc filter help 2>&1 | grep block &> /dev/null
if [[ $? -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc is missing shared block support"
- exit 1
+ exit $ksft_skip
fi
}
@@ -97,7 +100,7 @@ check_tc_chain_support()
tc help 2>&1|grep chain &> /dev/null
if [[ $? -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc is missing chain support"
- exit 1
+ exit $ksft_skip
fi
}
@@ -106,7 +109,7 @@ check_tc_action_hw_stats_support()
tc actions help 2>&1 | grep -q hw_stats
if [[ $? -ne 0 ]]; then
echo "SKIP: iproute2 too old; tc is missing action hw_stats support"
- exit 1
+ exit $ksft_skip
fi
}
@@ -115,13 +118,13 @@ check_ethtool_lanes_support()
ethtool --help 2>&1| grep lanes &> /dev/null
if [[ $? -ne 0 ]]; then
echo "SKIP: ethtool too old; it is missing lanes support"
- exit 1
+ exit $ksft_skip
fi
}
if [[ "$(id -u)" -ne 0 ]]; then
echo "SKIP: need root privileges"
- exit 0
+ exit $ksft_skip
fi
if [[ "$CHECK_TC" = "yes" ]]; then
@@ -134,7 +137,7 @@ require_command()
if [[ ! -x "$(command -v "$cmd")" ]]; then
echo "SKIP: $cmd not installed"
- exit 1
+ exit $ksft_skip
fi
}
@@ -143,7 +146,7 @@ require_command $MZ
if [[ ! -v NUM_NETIFS ]]; then
echo "SKIP: importer does not define \"NUM_NETIFS\""
- exit 1
+ exit $ksft_skip
fi
##############################################################################
@@ -203,7 +206,7 @@ for ((i = 1; i <= NUM_NETIFS; ++i)); do
ip link show dev ${NETIFS[p$i]} &> /dev/null
if [[ $? -ne 0 ]]; then
echo "SKIP: could not find all required interfaces"
- exit 1
+ exit $ksft_skip
fi
done
diff --git a/tools/testing/selftests/net/forwarding/router_mpath_nh.sh b/tools/testing/selftests/net/forwarding/router_mpath_nh.sh
index 76efb1f..a0d612e 100755
--- a/tools/testing/selftests/net/forwarding/router_mpath_nh.sh
+++ b/tools/testing/selftests/net/forwarding/router_mpath_nh.sh
@@ -411,7 +411,7 @@ ping_ipv6()
ip nexthop ls >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo "Nexthop objects not supported; skipping tests"
- exit 0
+ exit $ksft_skip
fi
trap cleanup EXIT
diff --git a/tools/testing/selftests/net/forwarding/router_mpath_nh_res.sh b/tools/testing/selftests/net/forwarding/router_mpath_nh_res.sh
index 4898dd4..cb08ffe 100755
--- a/tools/testing/selftests/net/forwarding/router_mpath_nh_res.sh
+++ b/tools/testing/selftests/net/forwarding/router_mpath_nh_res.sh
@@ -386,7 +386,7 @@ ping_ipv6()
ip nexthop ls >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo "Nexthop objects not supported; skipping tests"
- exit 0
+ exit $ksft_skip
fi
trap cleanup EXIT
diff --git a/tools/testing/selftests/net/run_afpackettests b/tools/testing/selftests/net/run_afpackettests
index 8b42e8b..a59cb6a 100755
--- a/tools/testing/selftests/net/run_afpackettests
+++ b/tools/testing/selftests/net/run_afpackettests
@@ -1,9 +1,12 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
if [ $(id -u) != 0 ]; then
echo $msg must be run as root >&2
- exit 0
+ exit $ksft_skip
fi
ret=0
diff --git a/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh b/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh
index 75ada17..aebaab8 100755
--- a/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh
+++ b/tools/testing/selftests/net/srv6_end_dt46_l3vpn_test.sh
@@ -193,6 +193,9 @@
# +---------------------------------------------------+
#
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
readonly LOCALSID_TABLE_ID=90
readonly IPv6_RT_NETWORK=fd00
readonly IPv6_HS_NETWORK=cafe
@@ -543,18 +546,18 @@ host_vpn_isolation_tests()
if [ "$(id -u)" -ne 0 ];then
echo "SKIP: Need root privileges"
- exit 0
+ exit $ksft_skip
fi
if [ ! -x "$(command -v ip)" ]; then
echo "SKIP: Could not run test without ip tool"
- exit 0
+ exit $ksft_skip
fi
modprobe vrf &>/dev/null
if [ ! -e /proc/sys/net/vrf/strict_mode ]; then
echo "SKIP: vrf sysctl does not exist"
- exit 0
+ exit $ksft_skip
fi
cleanup &>/dev/null
diff --git a/tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh b/tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh
index ad7a9fc..1003119 100755
--- a/tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh
+++ b/tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh
@@ -163,6 +163,9 @@
# +---------------------------------------------------+
#
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
readonly LOCALSID_TABLE_ID=90
readonly IPv6_RT_NETWORK=fd00
readonly IPv4_HS_NETWORK=10.0.0
@@ -464,18 +467,18 @@ host_vpn_isolation_tests()
if [ "$(id -u)" -ne 0 ];then
echo "SKIP: Need root privileges"
- exit 0
+ exit $ksft_skip
fi
if [ ! -x "$(command -v ip)" ]; then
echo "SKIP: Could not run test without ip tool"
- exit 0
+ exit $ksft_skip
fi
modprobe vrf &>/dev/null
if [ ! -e /proc/sys/net/vrf/strict_mode ]; then
echo "SKIP: vrf sysctl does not exist"
- exit 0
+ exit $ksft_skip
fi
cleanup &>/dev/null
diff --git a/tools/testing/selftests/net/srv6_end_dt6_l3vpn_test.sh b/tools/testing/selftests/net/srv6_end_dt6_l3vpn_test.sh
index 68708f5..b9b06ef 100755
--- a/tools/testing/selftests/net/srv6_end_dt6_l3vpn_test.sh
+++ b/tools/testing/selftests/net/srv6_end_dt6_l3vpn_test.sh
@@ -164,6 +164,9 @@
# +---------------------------------------------------+
#
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
readonly LOCALSID_TABLE_ID=90
readonly IPv6_RT_NETWORK=fd00
readonly IPv6_HS_NETWORK=cafe
@@ -472,18 +475,18 @@ host_vpn_isolation_tests()
if [ "$(id -u)" -ne 0 ];then
echo "SKIP: Need root privileges"
- exit 0
+ exit $ksft_skip
fi
if [ ! -x "$(command -v ip)" ]; then
echo "SKIP: Could not run test without ip tool"
- exit 0
+ exit $ksft_skip
fi
modprobe vrf &>/dev/null
if [ ! -e /proc/sys/net/vrf/strict_mode ]; then
echo "SKIP: vrf sysctl does not exist"
- exit 0
+ exit $ksft_skip
fi
cleanup &>/dev/null
diff --git a/tools/testing/selftests/net/unicast_extensions.sh b/tools/testing/selftests/net/unicast_extensions.sh
index 66354cd..2d10cca 100755
--- a/tools/testing/selftests/net/unicast_extensions.sh
+++ b/tools/testing/selftests/net/unicast_extensions.sh
@@ -28,12 +28,15 @@
# These tests provide an easy way to flip the expected result of any
# of these behaviors for testing kernel patches that change them.
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
# nettest can be run from PATH or from same directory as this selftest
if ! which nettest >/dev/null; then
PATH=$PWD:$PATH
if ! which nettest >/dev/null; then
echo "'nettest' command not found; skipping tests"
- exit 0
+ exit $ksft_skip
fi
fi
diff --git a/tools/testing/selftests/net/vrf_strict_mode_test.sh b/tools/testing/selftests/net/vrf_strict_mode_test.sh
index 18b982d..865d53c 100755
--- a/tools/testing/selftests/net/vrf_strict_mode_test.sh
+++ b/tools/testing/selftests/net/vrf_strict_mode_test.sh
@@ -3,6 +3,9 @@
# This test is designed for testing the new VRF strict_mode functionality.
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
ret=0
# identifies the "init" network namespace which is often called root network
@@ -371,18 +374,18 @@ vrf_strict_mode_check_support()
if [ "$(id -u)" -ne 0 ];then
echo "SKIP: Need root privileges"
- exit 0
+ exit $ksft_skip
fi
if [ ! -x "$(command -v ip)" ]; then
echo "SKIP: Could not run test without ip tool"
- exit 0
+ exit $ksft_skip
fi
modprobe vrf &>/dev/null
if [ ! -e /proc/sys/net/vrf/strict_mode ]; then
echo "SKIP: vrf sysctl does not exist"
- exit 0
+ exit $ksft_skip
fi
cleanup &> /dev/null
--
2.7.4
When running the openat2 test suite on the ARM64 platform, we hit a failure,
since the definition of O_LARGEFILE is different on ARM64. Set the correct
O_LARGEFILE definition on ARM64 to fix this issue.
Signed-off-by: Baolin Wang <baolin.wang(a)linux.alibaba.com>
---
tools/testing/selftests/openat2/openat2_test.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/openat2/openat2_test.c b/tools/testing/selftests/openat2/openat2_test.c
index d7ec1e7..1bddbe9 100644
--- a/tools/testing/selftests/openat2/openat2_test.c
+++ b/tools/testing/selftests/openat2/openat2_test.c
@@ -22,7 +22,11 @@
* XXX: This is wrong on {mips, parisc, powerpc, sparc}.
*/
#undef O_LARGEFILE
+#ifdef __aarch64__
+#define O_LARGEFILE 0x20000
+#else
#define O_LARGEFILE 0x8000
+#endif
struct open_how_ext {
struct open_how inner;
--
1.8.3.1
When running the openat2 test suite on the ARM64 platform, we got the
failure below, since the definition of O_LARGEFILE is different on ARM64.
Set the correct O_LARGEFILE definition on ARM64 to fix this issue.
"openat2 unexpectedly returned # 3['.../tools/testing/selftests/openat2']
with 208000 (!= 208000)
not ok 102 openat2 with incompatible flags (O_PATH | O_LARGEFILE) fails
with -22 (Invalid argument)"
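For illustration: on arm64 the hardcoded 0x8000 is actually O_NOFOLLOW,
which openat2() accepts together with O_PATH, so the expected EINVAL never
came. A standalone sketch of the check, mirroring the #ifdef from this
patch; it assumes kernel headers new enough to provide __NR_openat2 and
<linux/openat2.h>:
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <linux/openat2.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#undef O_LARGEFILE
#ifdef __aarch64__
#define O_LARGEFILE 0x20000
#else
#define O_LARGEFILE 0x8000
#endif
int main(void)
{
	struct open_how how = { .flags = O_PATH | O_LARGEFILE };
	long ret = syscall(__NR_openat2, AT_FDCWD, ".", &how, sizeof(how));
	if (ret < 0 && errno == EINVAL)
		printf("ok: O_PATH | O_LARGEFILE rejected with EINVAL\n");
	else
		printf("unexpected: ret=%ld (%s)\n", ret,
		       ret < 0 ? strerror(errno) : "success");
	return 0;
}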
Signed-off-by: Baolin Wang <baolin.wang(a)linux.alibaba.com>
Reviewed-by: Aleksa Sarai <cyphar(a)cyphar.com>
Acked-by: Christian Brauner <christian.brauner(a)ubuntu.com>
---
Changes from v1:
- Add reviewed and acked tags from Aleksa and Christian.
- Add failure logs in the commit log.
---
tools/testing/selftests/openat2/openat2_test.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/tools/testing/selftests/openat2/openat2_test.c b/tools/testing/selftests/openat2/openat2_test.c
index d7ec1e7..1bddbe9 100644
--- a/tools/testing/selftests/openat2/openat2_test.c
+++ b/tools/testing/selftests/openat2/openat2_test.c
@@ -22,7 +22,11 @@
* XXX: This is wrong on {mips, parisc, powerpc, sparc}.
*/
#undef O_LARGEFILE
+#ifdef __aarch64__
+#define O_LARGEFILE 0x20000
+#else
#define O_LARGEFILE 0x8000
+#endif
struct open_how_ext {
struct open_how inner;
--
1.8.3.1
This series introduces a new helper, bpf_trace_vprintk, which functions
like bpf_trace_printk but supports > 3 arguments via a pseudo-vararg u64
array. A libbpf convenience macro, bpf_vprintk, is added to support
true vararg calling style.
Helper functions and macros added during the implementation of
bpf_seq_printf and bpf_snprintf do most of the heavy lifting for
bpf_trace_vprintk. There's no novel format string wrangling here.
The use case here is straightforward: giving BPF program writers a more
powerful printk will ease development of BPF programs, particularly
during debugging and testing, where printk tends to be used.
Hypothetically libbpf's bpf_printk convenience macro could be modified
to use bpf_trace_vprintk under the hood. This patchset does not attempt
to do this, though, nor am I confident that it's desired.
This feature was proposed by Andrii in libbpf mirror's issue tracker
[1].
[1] https://github.com/libbpf/libbpf/issues/315
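A rough sketch of the usage this enables (the macro name follows this
posting and assumes a kernel and libbpf with the series applied; the
tracepoint and printed fields are arbitrary):
// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
SEC("tracepoint/syscalls/sys_enter_getpid")
int show_task(void *ctx)
{
	__u64 pid_tgid = bpf_get_current_pid_tgid();
	__u64 uid_gid = bpf_get_current_uid_gid();
	/* bpf_printk tops out at three args; bpf_vprintk packs these
	 * four into a u64 array for the new bpf_trace_vprintk helper */
	bpf_vprintk("pid=%d tgid=%d uid=%d gid=%d",
		    (__u32)pid_tgid, pid_tgid >> 32,
		    (__u32)uid_gid, uid_gid >> 32);
	return 0;
}
char LICENSE[] SEC("license") = "GPL";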
Dave Marchevsky (5):
bpf: merge printk and seq_printf VARARG max macros
bpf: add bpf_trace_vprintk helper
libbpf: Add bpf_vprintk convenience macro
bpftool: only probe trace_vprintk feature in 'full' mode
selftests/bpf: add trace_vprintk test prog
include/linux/bpf.h | 3 +
include/uapi/linux/bpf.h | 23 ++++++
kernel/bpf/core.c | 5 ++
kernel/bpf/helpers.c | 6 +-
kernel/trace/bpf_trace.c | 54 ++++++++++++-
tools/bpf/bpftool/feature.c | 1 +
tools/include/uapi/linux/bpf.h | 23 ++++++
tools/lib/bpf/bpf_helpers.h | 18 +++++
tools/testing/selftests/bpf/Makefile | 3 +-
.../selftests/bpf/prog_tests/trace_vprintk.c | 75 +++++++++++++++++++
.../selftests/bpf/progs/trace_vprintk.c | 25 +++++++
tools/testing/selftests/bpf/test_bpftool.py | 22 +++---
12 files changed, 238 insertions(+), 20 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/trace_vprintk.c
create mode 100644 tools/testing/selftests/bpf/progs/trace_vprintk.c
--
2.30.2
From: jing yangyang <jing.yangyang(a)zte.com.cn>
sizeof, when applied to a pointer-typed expression, gives the size of
the pointer.
./tools/testing/selftests/vm/split_huge_page_test.c:344:36-42: ERROR application
of sizeof to pointer
This issue was detected with the help of Coccinelle.
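To make the warning concrete, a small standalone illustration of the
pitfall (illustrative values, not from the kernel tree):
#include <stdio.h>
#include <string.h>
int main(void)
{
	const char *loc = "/mnt/thp_fs";
	printf("sizeof(loc)  = %zu\n", sizeof(loc));	/* pointer size, 8 on LP64 */
	printf("sizeof(*loc) = %zu\n", sizeof(*loc));	/* a single char, i.e. 1 */
	printf("strlen(loc)  = %zu\n", strlen(loc));	/* the string length, 11 */
	return 0;
}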
Reported-by: Zeal Robot <zealci(a)zte.com.cn>
Signed-off-by: jing yangyang <jing.yangyang(a)zte.com.cn>
---
tools/testing/selftests/vm/split_huge_page_test.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/vm/split_huge_page_test.c b/tools/testing/selftests/vm/split_huge_page_test.c
index 1af16d2..54bf57f 100644
--- a/tools/testing/selftests/vm/split_huge_page_test.c
+++ b/tools/testing/selftests/vm/split_huge_page_test.c
@@ -341,7 +341,7 @@ void split_file_backed_thp(void)
}
/* write something to the file, so a file-backed THP can be allocated */
- num_written = write(fd, tmpfs_loc, sizeof(tmpfs_loc));
+ num_written = write(fd, tmpfs_loc, sizeof(*tmpfs_loc));
close(fd);
if (num_written < 1) {
--
1.8.3.1
From: yong yiran <yong.yiran(a)zte.com.cn>
The 'sys/types.h' and 'sys/wait.h' includes in 'cs_prctl_test.c' are
duplicated. Remove all but the first include of each from
cs_prctl_test.c.
Reported-by: Zeal Robot <zealci(a)zte.com.cn>
Signed-off-by: yong yiran <yong.yiran(a)zte.com.cn>
---
tools/testing/selftests/sched/cs_prctl_test.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/tools/testing/selftests/sched/cs_prctl_test.c b/tools/testing/selftests/sched/cs_prctl_test.c
index 63fe6521c56d..7db9cf822dc7 100644
--- a/tools/testing/selftests/sched/cs_prctl_test.c
+++ b/tools/testing/selftests/sched/cs_prctl_test.c
@@ -25,8 +25,6 @@
#include <sys/types.h>
#include <sched.h>
#include <sys/prctl.h>
-#include <sys/types.h>
-#include <sys/wait.h>
#include <unistd.h>
#include <time.h>
#include <stdio.h>
--
2.25.1
Hi Alexey,
When the LKP team ran the kernel selftests, we found that after this series
of patches, the mqueue testcase mq_perf_tests in kselftest failed with the
following message.
If you confirm and fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp(a)intel.com>
```
# selftests: mqueue: mq_perf_tests
#
# Initial system state:
# Using queue path: /mq_perf_tests
# RLIMIT_MSGQUEUE(soft): 819200
# RLIMIT_MSGQUEUE(hard): 819200
# Maximum Message Size: 8192
# Maximum Queue Size: 10
# Nice value: 0
#
# Adjusted system state for testing:
# RLIMIT_MSGQUEUE(soft): (unlimited)
# RLIMIT_MSGQUEUE(hard): (unlimited)
# Maximum Message Size: 16777216
# Maximum Queue Size: 65530
# Nice value: -20
# Continuous mode: (disabled)
# CPUs to pin: 3
# ./mq_perf_tests: mq_open() at 296: Too many open files
not ok 2 selftests: mqueue: mq_perf_tests # exit=1
```
Test env:
rootfs: debian-10
gcc version: 9
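A quick, hypothetical check when reproducing EMFILE failures like the
mq_open() one above is to print the file-descriptor limit of the test
environment (a sketch, not part of the original report):
#include <stdio.h>
#include <sys/resource.h>
int main(void)
{
	struct rlimit rl;
	if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
		return 1;
	printf("RLIMIT_NOFILE: soft=%llu hard=%llu\n",
	       (unsigned long long)rl.rlim_cur,
	       (unsigned long long)rl.rlim_max);
	return 0;
}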
------
Thanks
Ma Xinjian
There are several test cases in the vm directory that are still using
exit 0 when they need to be skipped. Use the kselftest framework skip
code instead so it can help us distinguish the return status.
Criterion to filter out what should be fixed in vm directory:
grep -r "exit 0" -B1 | grep -i skip
This change might cause some false-positives if people are running
these test scripts directly and only checking their return codes,
which will change from 0 to 4. However I think the impact should be
small as most of our scripts here are already using this skip code.
And there will be no such issue if running them with the kselftest
framework.
Signed-off-by: Po-Hsu Lin <po-hsu.lin(a)canonical.com>
---
tools/testing/selftests/vm/charge_reserved_hugetlb.sh | 5 ++++-
tools/testing/selftests/vm/hugetlb_reparenting_test.sh | 5 ++++-
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
index 18d3368..fe8fcfb 100644
--- a/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/vm/charge_reserved_hugetlb.sh
@@ -1,11 +1,14 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
set -e
if [[ $(id -u) -ne 0 ]]; then
echo "This test must be run as root. Skipping..."
- exit 0
+ exit $ksft_skip
fi
fault_limit_file=limit_in_bytes
diff --git a/tools/testing/selftests/vm/hugetlb_reparenting_test.sh b/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
index d11d1fe..4a9a3af 100644
--- a/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
+++ b/tools/testing/selftests/vm/hugetlb_reparenting_test.sh
@@ -1,11 +1,14 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
set -e
if [[ $(id -u) -ne 0 ]]; then
echo "This test must be run as root. Skipping..."
- exit 0
+ exit $ksft_skip
fi
usage_file=usage_in_bytes
--
2.7.4
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
The selftest for ftrace checks some features by checking if the README has
text that states the feature is supported by that kernel. Unfortunately,
this check gives false positives because it may not be checked correctly if
there are spaces in the string to check. This is due to the compare between
the required variable and the string with the ":README" suffix stripped,
because neither has quotes around it.
Link: https://lkml.kernel.org/r/20210820204742.087177341@goodmis.org
Cc: "Tzvetomir Stoyanov" <tz.stoyanov(a)gmail.com>
Cc: Tom Zanussi <zanussi(a)kernel.org>
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Fixes: 1b8eec510ba64 ("selftests/ftrace: Support ":README" suffix for requires")
Acked-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
tools/testing/selftests/ftrace/test.d/functions | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index f68d336b961b..000fd05e84b1 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -137,7 +137,7 @@ check_requires() { # Check required files and tracers
echo "Required tracer $t is not configured."
exit_unsupported
fi
- elif [ $r != $i ]; then
+ elif [ "$r" != "$i" ]; then
if ! grep -Fq "$r" README ; then
echo "Required feature pattern \"$r\" is not in README."
exit_unsupported
--
2.30.2
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
Add a function to remove all dynamic events from the tracing directory. It
requires a loop as some of the dynamic events may depend on others being
removed first. Also add a safety check that prevents it from looping infinitely
due to a bug where an event never gets removed.
Link: https://lkml.kernel.org/r/20210819152825.348941368@goodmis.org
Cc: "Tzvetomir Stoyanov" <tz.stoyanov(a)gmail.com>
Cc: Tom Zanussi <zanussi(a)kernel.org>
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Acked-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
.../testing/selftests/ftrace/test.d/functions | 22 +++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index a6fac927ee82..f68d336b961b 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -83,6 +83,27 @@ clear_synthetic_events() { # reset all current synthetic events
done
}
+clear_dynamic_events() { # reset all current dynamic events
+ again=1
+ stop=1
+	# loop multiple times as some events require others to be removed first
+ while [ $again -eq 1 ]; do
+ stop=$((stop+1))
+ # Prevent infinite loops
+ if [ $stop -gt 10 ]; then
+ break;
+ fi
+ again=2
+ grep -v '^#' dynamic_events|
+ while read line; do
+ del=`echo $line | sed -e 's/^.\([^ ]*\).*/-\1/'`
+ if ! echo "$del" >> dynamic_events; then
+ again=1
+ fi
+ done
+ done
+}
+
initialize_ftrace() { # Reset ftrace to initial-state
# As the initial state, ftrace will be set to nop tracer,
# no events, no triggers, no filters, no function filters,
@@ -93,6 +114,7 @@ initialize_ftrace() { # Reset ftrace to initial-state
reset_events_filter
reset_ftrace_filter
disable_events
+ clear_dynamic_events
[ -f set_event_pid ] && echo > set_event_pid
[ -f set_ftrace_pid ] && echo > set_ftrace_pid
[ -f set_ftrace_notrace ] && echo > set_ftrace_notrace
--
2.30.2
Add basic tests to cover some regressions that we had.
It's hard to test floppy because some tests require the
presence or absence of a diskette in a drive. To simulate
test conditions and automate the testing, I added
"run_*.sh" wrapper scripts that run the tests in QEMU.
The first patch just improves the check for reverted commits
in a commit message. The second patch is required to
generate a minimal initrd used in the next commits. The rest
of the commits are basic floppy tests.
Please comment on the approach and the selftests integration,
and suggest tests that you would like to add.
I thought about adding the possibility of removing/inserting
diskettes inside a test. This is possible if we give
the guest access to the QEMU monitor (eject/change cmds).
But I didn't find a better way to do it than to map the
monitor to an external port:
-monitor tcp:<ip>:<port>,server,nowait
and access this IP from the guest (see the sketch below).
Maybe it's also possible to do this with a virtserialport.
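A rough sketch of that idea from inside the guest; the HMP command
spelling "eject floppy0", the port number, and the 10.0.2.2 slirp host
address are assumptions to verify:
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4444),  /* -monitor tcp:<ip>:4444,server,nowait */
	};
	const char *cmd = "eject floppy0\n";
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	inet_pton(AF_INET, "10.0.2.2", &addr.sin_addr); /* slirp host address */
	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		perror("connect");
		return 1;
	}
	write(fd, cmd, strlen(cmd));
	close(fd);
	return 0;
}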
Denis Efremov (5):
checkpatch: improve handling of revert commits
gen_initramfs.sh: use absolute path for gen_init_cpio
selftests: floppy: add basic tests for opening an empty device
selftests: floppy: add basic tests for a readonly disk
selftests: floppy: add basic rdwr tests
MAINTAINERS | 1 +
scripts/checkpatch.pl | 12 +--
tools/testing/selftests/floppy/.gitignore | 8 ++
tools/testing/selftests/floppy/Makefile | 10 ++
tools/testing/selftests/floppy/config | 1 +
tools/testing/selftests/floppy/empty.c | 58 ++++++++++++
tools/testing/selftests/floppy/init.c | 43 +++++++++
tools/testing/selftests/floppy/lib.sh | 67 +++++++++++++
tools/testing/selftests/floppy/rdonly.c | 99 ++++++++++++++++++++
tools/testing/selftests/floppy/rdwr.c | 67 +++++++++++++
tools/testing/selftests/floppy/run_empty.sh | 16 ++++
tools/testing/selftests/floppy/run_rdonly.sh | 22 +++++
tools/testing/selftests/floppy/run_rdwr.sh | 22 +++++
usr/gen_initramfs.sh | 2 +-
14 files changed, 421 insertions(+), 7 deletions(-)
create mode 100644 tools/testing/selftests/floppy/.gitignore
create mode 100644 tools/testing/selftests/floppy/Makefile
create mode 100644 tools/testing/selftests/floppy/config
create mode 100644 tools/testing/selftests/floppy/empty.c
create mode 100644 tools/testing/selftests/floppy/init.c
create mode 100644 tools/testing/selftests/floppy/lib.sh
create mode 100644 tools/testing/selftests/floppy/rdonly.c
create mode 100644 tools/testing/selftests/floppy/rdwr.c
create mode 100755 tools/testing/selftests/floppy/run_empty.sh
create mode 100755 tools/testing/selftests/floppy/run_rdonly.sh
create mode 100755 tools/testing/selftests/floppy/run_rdwr.sh
--
2.31.1
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
The selftest for ftrace checks some features by checking if the README has
text that states the feature is supported by that kernel. Unfortunately,
this check gives false positives because it may not be checked correctly if
there are spaces in the string to check. This is due to the compare between
the required variable and the string with the ":README" suffix stripped,
because neither has quotes around it.
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Fixes: 1b8eec510ba64 ("selftests/ftrace: Support ":README" suffix for requires")
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
tools/testing/selftests/ftrace/test.d/functions | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index f68d336b961b..000fd05e84b1 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -137,7 +137,7 @@ check_requires() { # Check required files and tracers
echo "Required tracer $t is not configured."
exit_unsupported
fi
- elif [ $r != $i ]; then
+ elif [ "$r" != "$i" ]; then
if ! grep -Fq "$r" README ; then
echo "Required feature pattern \"$r\" is not in README."
exit_unsupported
--
2.30.2
Update kunit_parser to improve compatibility with the KTAP
specification, including arbitrarily nested tests. This patch
accomplishes three major changes:
- Use a general Test object to represent all tests rather than TestCase
and TestSuite objects. This allows for easier implementation of arbitrary
levels of nested tests and promotes the idea that both test suites and
test cases are tests.
- Print errors incrementally rather than all at once after parsing
finishes, to maximize the information given to the user when the parser
is handed invalid input and to make the timestamps printed along the way
more helpful.
- Increase compatibility with different input formats. Arbitrary levels
of nested tests are supported. Also, test cases and test suites may now
be present on the same level of testing.
This patch now implements the KTAP specification as described here:
https://lore.kernel.org/linux-kselftest/CA+GJov6tdjvY9x12JsJT14qn6c7NViJxqa….
This patch adjusts the kunit_tool_test.py file to check for
the correct outputs from the new parser and adds a new test to check
the parsing of a correctly formatted KTAP result log with multiple
nested subtests (test_is_test_passed-all_passed_nested.log).
This patch also alters the kunit_json.py file to allow for arbitrarily
nested tests.
Signed-off-by: Rae Moar <rmoar(a)google.com>
---
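For reference, a tiny standalone program emitting one plausible nested
KTAP stream of the kind the reworked parser accepts (the indentation and
the optional "- " separator follow the spec draft linked above; the names
are illustrative):
#include <stdio.h>
int main(void)
{
	puts("KTAP version 1");
	puts("1..1");
	puts("    # Subtest: example_suite");
	puts("    1..2");
	puts("    ok 1 case_one");
	puts("    ok 2 case_two # SKIP not implemented");
	puts("ok 1 example_suite");
	return 0;
}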
tools/testing/kunit/kunit_json.py | 54 +-
tools/testing/kunit/kunit_parser.py | 1191 ++++++++++++-----
tools/testing/kunit/kunit_tool_test.py | 91 +-
.../test_is_test_passed-all_passed_nested.log | 34 +
4 files changed, 986 insertions(+), 384 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
diff --git a/tools/testing/kunit/kunit_json.py b/tools/testing/kunit/kunit_json.py
index f5cca5c38cac..cc4bc9cc6e0f 100644
--- a/tools/testing/kunit/kunit_json.py
+++ b/tools/testing/kunit/kunit_json.py
@@ -11,47 +11,45 @@ import os
import kunit_parser
-from kunit_parser import TestStatus
-
-def get_json_result(test_result, def_config, build_dir, json_path) -> str:
- sub_groups = []
-
- # Each test suite is mapped to a KernelCI sub_group
- for test_suite in test_result.suites:
- sub_group = {
- "name": test_suite.name,
- "arch": "UM",
- "defconfig": def_config,
- "build_environment": build_dir,
- "test_cases": [],
- "lab_name": None,
- "kernel": None,
- "job": None,
- "git_branch": "kselftest",
- }
- test_cases = []
- # TODO: Add attachments attribute in test_case with detailed
- # failure message, see https://api.kernelci.org/schema-test-case.html#get
- for case in test_suite.cases:
- test_case = {"name": case.name, "status": "FAIL"}
- if case.status == TestStatus.SUCCESS:
+from kunit_parser import Test, TestResult, TestStatus
+from typing import Any, Dict
+
+JsonObj = Dict[str, Any]
+
+def _get_group_json(test: Test, def_config: str, build_dir: str) -> JsonObj:
+ sub_groups = [] # List[JsonObj]
+ test_cases = [] # List[JsonObj]
+
+ for subtest in test.subtests:
+ if len(subtest.subtests):
+ sub_group = _get_group_json(subtest, def_config, build_dir)
+ sub_groups.append(sub_group)
+ else:
+ test_case = {"name": subtest.name, "status": "FAIL"}
+ if subtest.status == TestStatus.SUCCESS:
test_case["status"] = "PASS"
- elif case.status == TestStatus.TEST_CRASHED:
+ elif subtest.status == TestStatus.TEST_CRASHED:
test_case["status"] = "ERROR"
test_cases.append(test_case)
- sub_group["test_cases"] = test_cases
- sub_groups.append(sub_group)
+
test_group = {
- "name": "KUnit Test Group",
+ "name": test.name,
"arch": "UM",
"defconfig": def_config,
"build_environment": build_dir,
"sub_groups": sub_groups,
+ "test_cases": test_cases,
"lab_name": None,
"kernel": None,
"job": None,
"git_branch": "kselftest",
}
+ return test_group
+
+def get_json_result(test_result: TestResult, def_config: str, build_dir: str, \
+ json_path: str) -> str:
+ test_group = _get_group_json(test_result.test, def_config, build_dir)
+ test_group["name"] = "KUnit Test Group"
json_obj = json.dumps(test_group, indent=4)
if json_path != 'stdout':
with open(json_path, 'w') as result_path:
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index b88db3f51dc5..bca4d19f7636 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -1,11 +1,15 @@
# SPDX-License-Identifier: GPL-2.0
#
-# Parses test results from a kernel dmesg log.
+# Parses KTAP test results from a kernel dmesg log and incrementally prints
+# results with reader-friendly format. Stores and returns test results in a
+# Test object.
#
# Copyright (C) 2019, Google LLC.
# Author: Felix Guo <felixguoxiuping(a)gmail.com>
# Author: Brendan Higgins <brendanhiggins(a)google.com>
+# Author: Rae Moar <rmoar(a)google.com>
+from __future__ import annotations
import re
from collections import namedtuple
@@ -14,33 +18,84 @@ from enum import Enum, auto
from functools import reduce
from typing import Iterable, Iterator, List, Optional, Tuple
-TestResult = namedtuple('TestResult', ['status','suites','log'])
-
-class TestSuite(object):
+TestResult = namedtuple('TestResult', ['status','test','log'])
+
+class Test(object):
+ """
+ A class to represent a test parsed from KTAP results. All KTAP
+ results within a test log are stored in a main Test object as
+ subtests.
+
+ Attributes:
+ status : TestStatus - status of the test
+ name : str - name of the test
+ expected_count : int - expected number of subtests (0 if single
+ test case and None if unknown expected number of subtests)
+ subtests : List[Test] - list of subtests
+ log : List[str] - log of KTAP lines that correspond to the test
+ counts : TestCounts - counts of the test statuses and errors of
+ subtests or of the test itself if the test is a single
+ test case.
+ """
def __init__(self) -> None:
- self.status = TestStatus.SUCCESS
- self.name = ''
- self.cases = [] # type: List[TestCase]
-
- def __str__(self) -> str:
- return 'TestSuite(' + str(self.status) + ',' + self.name + ',' + str(self.cases) + ')'
+ """
+		Constructs the default attributes of a Test class object.
- def __repr__(self) -> str:
- return str(self)
+ Parameters:
+ None
-class TestCase(object):
- def __init__(self) -> None:
+ Return:
+ None
+ """
self.status = TestStatus.SUCCESS
self.name = ''
+ self.expected_count = 0 # type: Optional[int]
+ self.subtests = [] # type: List[Test]
self.log = [] # type: List[str]
+ self.counts = TestCounts()
def __str__(self) -> str:
- return 'TestCase(' + str(self.status) + ',' + self.name + ',' + str(self.log) + ')'
+ """
+ Returns string representation of a Test class object.
+
+ Parameters:
+ None
+
+ Return:
+ str - string representation of the Test class object
+ """
+ return ('Test(' + str(self.status) + ', ' + self.name + ', ' +
+ str(self.expected_count) + ', ' + str(self.subtests) +
+ ', ' + str(self.log) + ', ' + str(self.counts) + ')')
def __repr__(self) -> str:
+ """
+ Returns string representation of a Test class object.
+
+ Parameters:
+ None
+
+ Return:
+ str - string representation of the Test class object
+ """
return str(self)
+ def add_error(self, message: str):
+ """
+ Adds error to test object by printing the error and
+ incrementing the error count.
+
+ Parameters:
+ message : str - error message to print
+
+ Return:
+ None
+ """
+ print_error('Test ' + self.name + ': ' + message)
+ self.counts.errors += 1
+
class TestStatus(Enum):
+ """An enumeration class to represent the status of a test."""
SUCCESS = auto()
FAILURE = auto()
SKIPPED = auto()
@@ -48,385 +103,889 @@ class TestStatus(Enum):
NO_TESTS = auto()
FAILURE_TO_PARSE_TESTS = auto()
+class TestCounts:
+ """
+ A class to represent the counts of statuses and test errors of
+ subtests or of the test itself if the test is a single test case with
+ no subtests. Note that the sum of the counts of passed, failed,
+ crashed, and skipped should sum to the total number of subtests for
+ the test.
+
+ Attributes:
+ passed : int - the number of tests that have passed
+ failed : int - the number of tests that have failed
+ crashed : int - the number of tests that have crashed
+ skipped : int - the number of tests that have skipped
+ errors : int - the number of errors in the test and subtests
+ """
+ def __init__(self):
+ """
+		Constructs the default attributes of a TestCounts class object.
+ Sets the counts of all test statuses and test errors to be 0.
+
+ Parameters:
+ None
+
+ Return:
+ None
+ """
+ self.passed = 0
+ self.failed = 0
+ self.crashed = 0
+ self.skipped = 0
+ self.errors = 0
+
+ def __str__(self) -> str:
+ """
+		Returns a string representation of the TestCounts object,
+		listing the counts of passed, failed, crashed, and skipped
+		subtests, and of errors.
+
+ Parameters:
+ None
+
+ Return:
+ str - string representing TestCounts object.
+ """
+ return ('Passed: ' + str(self.passed) + ', Failed: ' +
+ str(self.failed) + ', Crashed: ' + str(self.crashed) +
+ ', Skipped: ' + str(self.skipped) + ', Errors: ' +
+ str(self.errors))
+
+ def total(self) -> int:
+ """
+ Returns total number of subtests or 1 if the test object has
+ no subtests. This number is calculated by the sum of the
+ passed, failed, crashed, and skipped subtests.
+
+ Parameters:
+ None
+
+ Return:
+ int - the total number of subtests or 1 if the test object has
+ no subtests
+ """
+ return self.passed + self.failed + self.crashed + self.skipped
+
+ def add_subtest_counts(self, counts: TestCounts) -> None:
+ """
+ Adds the counts of another TestCounts object to the current
+ TestCounts object. Used to add the counts of a subtest to the
+ parent test.
+
+ Parameters:
+ counts : TestCounts - another TestCounts object whose counts
+ will be added to the counts of the TestCounts object
+
+ Return:
+ None
+ """
+ self.passed += counts.passed
+ self.failed += counts.failed
+ self.crashed += counts.crashed
+ self.skipped += counts.skipped
+ self.errors += counts.errors
+
+ def get_status(self) -> TestStatus:
+ """
+ Returns the expected status of a Test using test counts.
+
+ Parameters:
+ None
+
+ Return:
+ TestStatus - expected status of a Test given test counts
+ """
+ if self.crashed:
+ # If one of the subtests crash, the expected status of
+ # the Test is crashed.
+ return TestStatus.TEST_CRASHED
+ elif self.failed:
+ # Otherwise if one of the subtests fail, the
+ # expected status of the Test is failed.
+ return TestStatus.FAILURE
+ elif self.passed:
+ # Otherwise if one of the subtests pass, the
+ # expected status of the Test is passed.
+ return TestStatus.SUCCESS
+ else:
+ # Finally, if none of the subtests have failed,
+ # crashed, or passed, the expected status of the
+ # Test is skipped.
+ return TestStatus.SKIPPED
+
+ def add_status(self, status: TestStatus) -> None:
+ """
+ Given inputted status, increments corresponding attribute of
+ TestCounts object.
+
+ Parameters:
+ status : TestStatus - status to be added to the TestCounts
+ object
+
+ Return:
+ None
+ """
+ if status == TestStatus.SUCCESS or \
+ status == TestStatus.NO_TESTS:
+ # if status is NO_TESTS the most appropriate attribute
+ # to increment is passed because the test did not
+ # fail, crash or get skipped.
+ self.passed += 1
+ elif status == TestStatus.FAILURE:
+ self.failed += 1
+ elif status == TestStatus.SKIPPED:
+ self.skipped += 1
+ else:
+ self.crashed += 1
+
class LineStream:
- """Provides a peek()/pop() interface over an iterator of (line#, text)."""
+ """
+ A class to represent the lines of kernel output.
+ Provides a peek()/pop() interface over an iterator of
+ (line#, text).
+
+ Attributes:
+ _lines : Iterator[Tuple[int, str]] - Iterator containing tuple of
+ line number and line of kernel output
+ _next : Tuple[int, str] - Tuple containing next line and the
+ corresponding line number
+ _done : bool - boolean denoting whether the LineStream has reached
+ the end of the lines
+ """
_lines: Iterator[Tuple[int, str]]
_next: Tuple[int, str]
_done: bool
def __init__(self, lines: Iterator[Tuple[int, str]]):
+ """Set defaults for LineStream object and sets _lines
+ attribute to lines parameter.
+ """
self._lines = lines
self._done = False
self._next = (0, '')
self._get_next()
def _get_next(self) -> None:
+ """Sets _next attribute to the upcoming Tuple of line and
+ line number in the LineStream.
+ """
try:
self._next = next(self._lines)
except StopIteration:
self._done = True
def peek(self) -> str:
+ """Returns the line stored in the _next attribute."""
return self._next[1]
def pop(self) -> str:
+ """Returns the line stored in the _next attribute and sets the
+ _next attribute to the following line and line number Tuple.
+ """
n = self._next
self._get_next()
return n[1]
def __bool__(self) -> bool:
+ """Returns whether the LineStream has reached the end of the
+ lines.
+ """
return not self._done
# Only used by kunit_tool_test.py.
def __iter__(self) -> Iterator[str]:
+ """Returns an Iterator object containing all of the lines
+ stored in the LineStream object. This method also empties the
+ LineStream so it reaches the end of the lines.
+ """
while bool(self):
yield self.pop()
def line_number(self) -> int:
+ """Returns the line number of the upcoming line."""
return self._next[0]
-kunit_start_re = re.compile(r'TAP version [0-9]+$')
-kunit_end_re = re.compile('(List of all partitions:|'
- 'Kernel panic - not syncing: VFS:|reboot: System halted)')
+# Parsing helper methods:
+
+KTAP_START = re.compile(r'KTAP version ([0-9]+)$')
+TAP_START = re.compile(r'TAP version ([0-9]+)$')
+KTAP_END = re.compile('(List of all partitions:|'
+ 'Kernel panic - not syncing: VFS:|reboot: System halted)')
def extract_tap_lines(kernel_output: Iterable[str]) -> LineStream:
- def isolate_kunit_output(kernel_output: Iterable[str]) -> Iterator[Tuple[int, str]]:
+ """
+ Returns LineStream object of extracted ktap lines within
+ inputted kernel output.
+
+ Parameters:
+ kernel_output : Iterable[str] - iterable object contains lines
+ of kernel output
+
+ Return:
+ LineStream - LineStream object containing extracted ktap lines.
+ """
+ def isolate_ktap_output(kernel_output: Iterable[str]) \
+ -> Iterator[Tuple[int, str]]:
+ """
+ Helper method of extract_tap_lines that yields extracted
+		ktap lines within inputted kernel output. The output is used to
+		create the LineStream object in extract_tap_lines.
+
+ Parameters:
+ kernel_output : Iterable[str] - iterable object contains lines
+ of kernel output
+
+ Return:
+ Iterator[Tuple[int, str]] - Iterator object containing tuples
+		with extracted ktap lines and their corresponding line
+ number.
+ """
line_num = 0
started = False
for line in kernel_output:
line_num += 1
- line = line.rstrip() # line always has a trailing \n
- if kunit_start_re.search(line):
+ line = line.rstrip() # remove trailing \n
+ if not started and KTAP_START.search(line):
+ prefix_len = len(
+ line.split('KTAP version')[0])
+ started = True
+ yield line_num, line[prefix_len:]
+ elif not started and TAP_START.search(line):
prefix_len = len(line.split('TAP version')[0])
started = True
yield line_num, line[prefix_len:]
- elif kunit_end_re.search(line):
+ elif started and KTAP_END.search(line):
break
elif started:
- yield line_num, line[prefix_len:]
- return LineStream(lines=isolate_kunit_output(kernel_output))
-
-def raw_output(kernel_output) -> None:
+ # remove prefix and indention
+ line = line[prefix_len:].lstrip()
+ yield line_num, line
+ return LineStream(lines=isolate_ktap_output(kernel_output))
+
+def raw_output(kernel_output: Iterable[str]) -> None:
+ """
+ Prints all of given kernel output.
+
+ Parameters:
+ kernel_output : Iterable[str] - iterable object contains lines
+ of kernel output
+
+ Return:
+ None
+ """
for line in kernel_output:
print(line.rstrip())
-DIVIDER = '=' * 60
-
-RESET = '\033[0;0m'
-
-def red(text) -> str:
- return '\033[1;31m' + text + RESET
-
-def yellow(text) -> str:
- return '\033[1;33m' + text + RESET
+KTAP_VERSIONS = [1]
+TAP_VERSIONS = [13, 14]
+
+def check_version(version_num: int, accepted_versions: List[int], \
+ version_type: str, test: Test) -> None:
+ """
+ Adds errors to the test if the version number is too high or too low.
+
+ Parameters:
+ version_num : int - The inputted version number from the parsed
+ ktap or tap header line
+	accepted_versions : List[int] - List of accepted ktap or tap versions
+ version_type : str - 'KTAP' or 'TAP' depending on the type of
+ version line.
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ None
+ """
+ if version_num < min(accepted_versions):
+ test.add_error(version_type + ' version lower than expected!')
+ elif version_num > max(accepted_versions):
+ test.add_error(
+ version_type + ' version higher than expected!')
+
+def parse_ktap_header(lines: LineStream, test: Test) -> bool:
+ """
+ If the next line in LineStream matches the format of ktap or tap
+ header line, the version number is checked, the line is popped,
+ and returns True. Otherwise the method returns False.
+
+ Accepted formats:
+ - 'KTAP version [version number]'
+ - 'TAP version [version number]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ bool : Represents if the next line in the LineStream was parsed as
+ the ktap or tap header line
+ """
+ ktap_match = KTAP_START.match(lines.peek())
+ tap_match = TAP_START.match(lines.peek())
+ if ktap_match:
+ version_num = int(ktap_match.group(1))
+ check_version(version_num, KTAP_VERSIONS, 'KTAP', test)
+ elif tap_match:
+ version_num = int(tap_match.group(1))
+ check_version(version_num, TAP_VERSIONS, 'TAP', test)
+ else:
+ return False
+ test.log.append(lines.pop())
+ return True
+
+TEST_HEADER = re.compile(r'^# Subtest: (.*)$')
+
+def parse_test_header(lines: LineStream, test: Test) -> bool:
+ """
+ If the next line in LineStream matches the format of a test
+ header line, the name of test is set, the line is popped,
+ and returns True. Otherwise the method returns False.
+
+ Accepted format:
+ - '# Subtest: [test name]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ bool : Represents if the next line in the LineStream was parsed as
+ a test header
+ """
+ match = TEST_HEADER.match(lines.peek())
+ if not match:
+ return False
+ test.log.append(lines.pop())
+ test.name = match.group(1)
+ return True
+
+TEST_PLAN = re.compile(r'1\.\.([0-9]+)')
+
+def parse_test_plan(lines: LineStream, test: Test) -> bool:
+ """
+ If the next line in LineStream matches the format of a test
+ plan line, the expected number of subtests is set in test object, an
+ error is thrown if there are 0 tests, the line is popped,
+ and returns True. Otherwise the method adds an error that the test
+ plan is missing to the test object and returns False.
+
+ Accepted format:
+ - '1..[number of subtests]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ bool : Represents if the next line in the LineStream was parsed as
+ a test plan
+ """
+ match = TEST_PLAN.match(lines.peek())
+ if not match:
+ test.expected_count = None
+ test.add_error('missing plan line!')
+ return False
+ test.log.append(lines.pop())
+ expected_count = int(match.group(1))
+ test.expected_count = expected_count
+ if expected_count == 0:
+ test.status = TestStatus.NO_TESTS
+ test.add_error('0 tests run!')
+ return True
+
+TEST_RESULT = re.compile(r'^(ok|not ok) ([0-9]+) (- )?(.*)$')
+
+TEST_RESULT_SKIP = re.compile(r'^(ok|not ok) ([0-9]+) (- )?(.*) # SKIP(.*)$')
+
+def peek_test_name_match(lines: LineStream, test: Test) -> bool:
+ """
+ If the next line in LineStream matches the format of a test
+ result line and the name of the result line matches the name of the
+ current test, the method returns True. Otherwise it returns False.
+
+ Accepted format:
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ bool : Represents if the next line in the LineStream matched a test
+ result line and the name matched the test name
+ """
+ line = lines.peek()
+ match = TEST_RESULT.match(line)
+ if not match:
+ return False
+ name = match.group(4)
+ return (name == test.name)
+
+def parse_test_result(lines: LineStream, test: Test, expected_num: int) \
+ -> bool:
+ """
+ If the next line in LineStream matches the format of a test
+ result line, the status in the result line is added to the test
+ object, the test number is checked to match the expected test number
+ and if not an error is added to the test object, and returns True.
+	Otherwise it returns False. Note that the skip directive is the only
+ directive that causes a change in status and otherwise the directive
+ is included in the name of the test.
+
+ Accepted format:
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+ expected_num : int - expected test number for current test
+
+ Return:
+ bool : Represents if the next line in the LineStream was parsed as a
+ test result line.
+ """
+ line = lines.peek()
+ match = TEST_RESULT.match(line)
+ skip_match = TEST_RESULT_SKIP.match(line)
-def green(text) -> str:
- return '\033[1;32m' + text + RESET
+ # Check if line matches test result line format
+ if not match:
+ return False
+ test.log.append(lines.pop())
-def print_with_timestamp(message) -> None:
- print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
+ # Check test num
+ num = int(match.group(2))
+ if num != expected_num:
+ test.add_error('Expected test number ' +
+ str(expected_num) + ' but found ' + str(num))
-def format_suite_divider(message) -> str:
- return '======== ' + message + ' ========'
+ # Set name of test object
+ if skip_match:
+ test.name = skip_match.group(4)
+ else:
+ test.name = match.group(4)
-def print_suite_divider(message) -> None:
- print_with_timestamp(DIVIDER)
- print_with_timestamp(format_suite_divider(message))
+ # Set status of test object
+ status = match.group(1)
+ if test.status == TestStatus.TEST_CRASHED:
+ return True
+ elif skip_match:
+ test.status = TestStatus.SKIPPED
+ elif status == 'ok':
+ test.status = TestStatus.SUCCESS
+ else:
+ test.status = TestStatus.FAILURE
+ return True
+
+DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^# .*?: kunit test case crashed!$')
+
+def parse_diagnostic(lines: LineStream, test: Test) -> None:
+ """
+	While the next line in LineStream does not match the format of a
+	test result line or test header line, check whether the line
+	reports a crash (and if so mark the test as crashed), then pop the
+	line and add it to the log.
+
+ Line formats that are not parsed:
+ - '# Subtest: [test name]'
+ - '[ok|not ok] [test number] [-] [test name] [optional skip
+ directive]'
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ None
+ """
+ while lines and not TEST_RESULT.match(lines.peek()) and not \
+ TEST_HEADER.match(lines.peek()):
+ if DIAGNOSTIC_CRASH_MESSAGE.match(lines.peek()):
+ test.status = TestStatus.TEST_CRASHED
+ test.log.append(lines.pop())
+
+# Printing helper methods:
-def print_log(log) -> None:
- for m in log:
- print_with_timestamp(m)
+DIVIDER = '=' * 60
-TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*#).*$')
+RESET = '\033[0;0m'
-def consume_non_diagnostic(lines: LineStream) -> None:
- while lines and not TAP_ENTRIES.match(lines.peek()):
- lines.pop()
+def red(text: str) -> str:
+ """
+ Returns string with added red ansi color code at beginning and reset
+ code at end.
-def save_non_diagnostic(lines: LineStream, test_case: TestCase) -> None:
- while lines and not TAP_ENTRIES.match(lines.peek()):
- test_case.log.append(lines.peek())
- lines.pop()
+ Parameters:
+ text: str -> text to be made red with ansi color codes
-OkNotOkResult = namedtuple('OkNotOkResult', ['is_ok','description', 'text'])
+ Return:
+ str - original text made red with ansi color codes
+ """
+ return '\033[1;31m' + text + RESET
-OK_NOT_OK_SKIP = re.compile(r'^[\s]*(ok|not ok) [0-9]+ - (.*) # SKIP(.*)$')
+def yellow(text: str) -> str:
+ """
+ Returns string with added yellow ansi color code at beginning and
+ reset code at end.
-OK_NOT_OK_SUBTEST = re.compile(r'^[\s]+(ok|not ok) [0-9]+ - (.*)$')
+ Parameters:
+ text: str -> text to be made yellow with ansi color codes
-OK_NOT_OK_MODULE = re.compile(r'^(ok|not ok) ([0-9]+) - (.*)$')
+ Return:
+ str - original text made yellow with ansi color codes
+ """
+ return '\033[1;33m' + text + RESET
-def parse_ok_not_ok_test_case(lines: LineStream, test_case: TestCase) -> bool:
- save_non_diagnostic(lines, test_case)
- if not lines:
- test_case.status = TestStatus.TEST_CRASHED
- return True
- line = lines.peek()
- match = OK_NOT_OK_SUBTEST.match(line)
- while not match and lines:
- line = lines.pop()
- match = OK_NOT_OK_SUBTEST.match(line)
- if match:
- test_case.log.append(lines.pop())
- test_case.name = match.group(2)
- skip_match = OK_NOT_OK_SKIP.match(line)
- if skip_match:
- test_case.status = TestStatus.SKIPPED
- return True
- if test_case.status == TestStatus.TEST_CRASHED:
- return True
- if match.group(1) == 'ok':
- test_case.status = TestStatus.SUCCESS
- else:
- test_case.status = TestStatus.FAILURE
- return True
- else:
- return False
+def green(text: str) -> str:
+ """
+ Returns string with added green ansi color code at beginning and reset
+ code at end.
-SUBTEST_DIAGNOSTIC = re.compile(r'^[\s]+# (.*)$')
-DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^[\s]+# .*?: kunit test case crashed!$')
+ Parameters:
+ text: str -> text to be made green with ansi color codes
-def parse_diagnostic(lines: LineStream, test_case: TestCase) -> bool:
- save_non_diagnostic(lines, test_case)
- if not lines:
- return False
- line = lines.peek()
- match = SUBTEST_DIAGNOSTIC.match(line)
- if match:
- test_case.log.append(lines.pop())
- crash_match = DIAGNOSTIC_CRASH_MESSAGE.match(line)
- if crash_match:
- test_case.status = TestStatus.TEST_CRASHED
- return True
- else:
- return False
+ Return:
+ str - original text made green with ansi color codes
+ """
+ return '\033[1;32m' + text + RESET
-def parse_test_case(lines: LineStream) -> Optional[TestCase]:
- test_case = TestCase()
- save_non_diagnostic(lines, test_case)
- while parse_diagnostic(lines, test_case):
- pass
- if parse_ok_not_ok_test_case(lines, test_case):
- return test_case
- else:
- return None
+ANSI_LEN = len(red(''))
-SUBTEST_HEADER = re.compile(r'^[\s]+# Subtest: (.*)$')
+def print_with_timestamp(message: str) -> None:
+ """
+ Prints message with timestamp at beginning.
-def parse_subtest_header(lines: LineStream) -> Optional[str]:
- consume_non_diagnostic(lines)
- if not lines:
- return None
- match = SUBTEST_HEADER.match(lines.peek())
- if match:
- lines.pop()
- return match.group(1)
- else:
- return None
+ Parameters:
+ message: str -> message to be printed
-SUBTEST_PLAN = re.compile(r'[\s]+[0-9]+\.\.([0-9]+)')
+ Return:
+ None
+ """
+ print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
-def parse_subtest_plan(lines: LineStream) -> Optional[int]:
- consume_non_diagnostic(lines)
- match = SUBTEST_PLAN.match(lines.peek())
- if match:
- lines.pop()
- return int(match.group(1))
+def format_test_divider(message: str, len_message: int) -> str:
+ """
+ Returns string with message centered in fixed width divider.
+
+ Example:
+ '===================== message example ====================='
+
+ Parameters:
+ message: str -> message to be centered in divider line
+ len_message : int -> length of the message to be printed in the
+ divider such that the ansi codes are not counted if the
+ message is colored.
+
+ Return:
+ str - string containing message centered in fixed width divider
+ """
+	default_count = 3 # default number of '=' characters on each side
+ len_1 = default_count
+ len_2 = default_count
+ difference = len(DIVIDER) - len_message - 2 # 2 spaces added
+ if difference > 0:
+		# calculate number of '=' characters for each side of the divider
+ len_1 = int(difference / 2)
+ len_2 = difference - len_1
+ return ('=' * len_1) + ' ' + message + ' ' + ('=' * len_2)
+
+def print_test_header(test: Test) -> None:
+ """
+ Prints test header with test name and optionally the expected number
+ of subtests.
+
+ Example:
+ '=================== example (2 subtests) ==================='
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test header
+
+ Return:
+ None
+ """
+ message = test.name
+ if test.expected_count:
+ message += ' (' + str(test.expected_count) + ' subtests)'
+ print_with_timestamp(format_test_divider(message, len(message)))
+
+def print_log(log: Iterable[str]) -> None:
+ """
+ Prints all strings in saved log for test in yellow.
+
+ Parameters:
+ log: Iterable[str] -> Iterable object with all strings saved in log
+ for test
+
+ Return:
+ None
+ """
+ for m in log:
+ print_with_timestamp(yellow(m))
+ print_with_timestamp('')
+
+def format_test_result(test: Test) -> str:
+ """
+ Returns string with formatted test result with colored status and test
+ name.
+
+ Example:
+ '[PASSED] example'
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test result
+
+ Return:
+ str - string containing formatted test result
+ """
+ if test.status == TestStatus.SUCCESS:
+ return (green('[PASSED] ') + test.name)
+ elif test.status == TestStatus.SKIPPED:
+ return (yellow('[SKIPPED] ') + test.name)
+ elif test.status == TestStatus.TEST_CRASHED:
+ print_log(test.log)
+ return (red('[CRASHED] ') + test.name)
else:
- return None
-
-def max_status(left: TestStatus, right: TestStatus) -> TestStatus:
- if left == right:
- return left
- elif left == TestStatus.TEST_CRASHED or right == TestStatus.TEST_CRASHED:
- return TestStatus.TEST_CRASHED
- elif left == TestStatus.FAILURE or right == TestStatus.FAILURE:
- return TestStatus.FAILURE
- elif left == TestStatus.SKIPPED:
- return right
+ print_log(test.log)
+ return (red('[FAILED] ') + test.name)
+
+def print_test_result(test: Test) -> None:
+ """
+ Prints result line with status of test.
+
+ Example:
+ '[PASSED] example'
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test result line
+
+ Return:
+ None
+ """
+ print_with_timestamp(format_test_result(test))
+
+def print_test_footer(test: Test) -> None:
+ """
+ Prints test footer with status of test.
+
+ Example:
+ '===================== [PASSED] example ====================='
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test footer
+
+ Return:
+ None
+ """
+ message = format_test_result(test)
+ print_with_timestamp(format_test_divider(message,
+ len(message) - ANSI_LEN))
+
+def print_summary_line(test: Test) -> None:
+ """
+ Prints summary line of test object. Color of line is dependent on
+ status of test. Color is green if test passes, yellow if test is
+ skipped, and red if the test fails or crashes. Summary line contains
+	counts of the statuses of the test's subtests, or of the test
+	itself if it has no subtests.
+
+ Example:
+ 'Testing complete. Passed: 2, Failed: 0, Crashed: 0, Skipped: 0, \
+ Errors: 0'
+
+ Parameters:
+ test: Test -> Test object representing current test object being
+ parsed and information used to print test summary line
+
+ Return:
+ None
+ """
+ if test.status == TestStatus.SUCCESS or \
+ test.status == TestStatus.NO_TESTS:
+ color = green
+ elif test.status == TestStatus.SKIPPED:
+ color = yellow
else:
- return left
-
-def parse_ok_not_ok_test_suite(lines: LineStream,
- test_suite: TestSuite,
- expected_suite_index: int) -> bool:
- consume_non_diagnostic(lines)
- if not lines:
- test_suite.status = TestStatus.TEST_CRASHED
- return False
- line = lines.peek()
- match = OK_NOT_OK_MODULE.match(line)
- if match:
- lines.pop()
- if match.group(1) == 'ok':
- test_suite.status = TestStatus.SUCCESS
- else:
- test_suite.status = TestStatus.FAILURE
- skip_match = OK_NOT_OK_SKIP.match(line)
- if skip_match:
- test_suite.status = TestStatus.SKIPPED
- suite_index = int(match.group(2))
- if suite_index != expected_suite_index:
- print_with_timestamp(
- red('[ERROR] ') + 'expected_suite_index ' +
- str(expected_suite_index) + ', but got ' +
- str(suite_index))
- return True
+ color = red
+ counts = test.counts
+ print_with_timestamp(color('Testing complete. ' + str(counts)))
+
+def print_error(message: str) -> None:
+ """
+ Prints message with error format.
+
+ Parameters:
+ message: str -> message to be used as error message
+
+ Return:
+ None
+ """
+ print_with_timestamp(red('[ERROR] ') + message)
+
+# Other methods:
+
+def bubble_up_test_results(test: Test) -> None:
+ """
+	If the test has subtests, add the counts of the subtests to the
+	test's counts and, if any subtest crashed, mark the test itself
+	as crashed. Otherwise, if the test has no subtests, add the
+	test's own status to its counts.
+
+ Parameters:
+ test : Test - Test object representing current test object being
+ parsed
+
+ Return:
+ None
+ """
+ subtests = test.subtests
+ counts = test.counts
+ status = test.status
+ for t in subtests:
+ counts.add_subtest_counts(t.counts)
+ if counts.total() == 0:
+ counts.add_status(status)
+ elif test.counts.get_status() == TestStatus.TEST_CRASHED:
+ test.status = TestStatus.TEST_CRASHED
+
+def parse_test(lines: LineStream, expected_num: int) -> Test:
+ """
+ Finds next test to parse in LineStream, creates new Test object,
+ parses any subtests of the test, populates Test object with all
+ information (status, name) about the test and the Test objects for
+ any subtests, and then returns the Test object. The method accepts
+ three formats of tests:
+
+ Accepted test formats:
+
+ - Main KTAP/TAP header
+
+ Example:
+
+ KTAP version 1
+ 1..4
+ [subtests]
+
+ - Subtest header line
+
+ Example:
+
+ # Subtest: name
+ 1..3
+ [subtests]
+ ok 1 name
+
+ - Test result line
+
+ Example:
+
+ ok 1 - test
+
+ Parameters:
+ lines : LineStream - LineStream object containing ktap lines from
+ kernel output
+ expected_num : int - expected test number for test to be parsed
+
+ Return:
+ Test : Test object populated with characteristics and containing any
+ subtests
+ """
+ test = Test()
+ parent_test = False
+ main = parse_ktap_header(lines, test)
+ if main:
+ # If KTAP/TAP header is found, attempt to parse
+ # test plan
+ parse_test_plan(lines, test)
else:
- return False
-
-def bubble_up_errors(status_list: Iterable[TestStatus]) -> TestStatus:
- return reduce(max_status, status_list, TestStatus.SKIPPED)
-
-def bubble_up_test_case_errors(test_suite: TestSuite) -> TestStatus:
- max_test_case_status = bubble_up_errors(x.status for x in test_suite.cases)
- return max_status(max_test_case_status, test_suite.status)
-
-def parse_test_suite(lines: LineStream, expected_suite_index: int) -> Optional[TestSuite]:
- if not lines:
- return None
- consume_non_diagnostic(lines)
- test_suite = TestSuite()
- test_suite.status = TestStatus.SUCCESS
- name = parse_subtest_header(lines)
- if not name:
- return None
- test_suite.name = name
- expected_test_case_num = parse_subtest_plan(lines)
- if expected_test_case_num is None:
- return None
- while expected_test_case_num > 0:
- test_case = parse_test_case(lines)
- if not test_case:
+ # If KTAP/TAP header is not found, test must be subtest
+		# header or test result line, so attempt to parse a
+		# subtest header
+ parse_diagnostic(lines, test)
+ parent_test = parse_test_header(lines, test)
+ if parent_test:
+ # If subtest header is found, attempt to parse
+ # test plan and print header
+ parse_test_plan(lines, test)
+ print_test_header(test)
+ expected_count = test.expected_count
+ subtests = []
+ test_num = 1
+ while main or expected_count is None or test_num <= expected_count:
+ # Loop to parse any subtests.
+ # If test is main test, do not break until no lines left.
+		# Otherwise, break after parsing the expected number of tests
+		# or, if the expected number of tests is unknown, break when a
+		# test result line matching the subtest header name is found.
+ if not lines:
+ if expected_count and test_num <= expected_count:
+ test.add_error('missing expected subtests!')
break
- test_suite.cases.append(test_case)
- expected_test_case_num -= 1
- if parse_ok_not_ok_test_suite(lines, test_suite, expected_suite_index):
- test_suite.status = bubble_up_test_case_errors(test_suite)
- return test_suite
- elif not lines:
- print_with_timestamp(red('[ERROR] ') + 'ran out of lines before end token')
- return test_suite
- else:
- print(f'failed to parse end of suite "{name}", at line {lines.line_number()}: {lines.peek()}')
- return None
-
-TAP_HEADER = re.compile(r'^TAP version 14$')
-
-def parse_tap_header(lines: LineStream) -> bool:
- consume_non_diagnostic(lines)
- if TAP_HEADER.match(lines.peek()):
- lines.pop()
- return True
- else:
- return False
-
-TEST_PLAN = re.compile(r'[0-9]+\.\.([0-9]+)')
-
-def parse_test_plan(lines: LineStream) -> Optional[int]:
- consume_non_diagnostic(lines)
- match = TEST_PLAN.match(lines.peek())
- if match:
- lines.pop()
- return int(match.group(1))
- else:
- return None
-
-def bubble_up_suite_errors(test_suites: Iterable[TestSuite]) -> TestStatus:
- return bubble_up_errors(x.status for x in test_suites)
-
-def parse_test_result(lines: LineStream) -> TestResult:
- consume_non_diagnostic(lines)
- if not lines or not parse_tap_header(lines):
- return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines)
- expected_test_suite_num = parse_test_plan(lines)
- if expected_test_suite_num == 0:
- return TestResult(TestStatus.NO_TESTS, [], lines)
- elif expected_test_suite_num is None:
- return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines)
- test_suites = []
- for i in range(1, expected_test_suite_num + 1):
- test_suite = parse_test_suite(lines, i)
- if test_suite:
- test_suites.append(test_suite)
- else:
- print_with_timestamp(
- red('[ERROR] ') + ' expected ' +
- str(expected_test_suite_num) +
- ' test suites, but got ' + str(i - 2))
+ if not expected_count and not main and \
+ peek_test_name_match(lines, test):
break
- test_suite = parse_test_suite(lines, -1)
- if test_suite:
- print_with_timestamp(red('[ERROR] ') +
- 'got unexpected test suite: ' + test_suite.name)
- if test_suites:
- return TestResult(bubble_up_suite_errors(test_suites), test_suites, lines)
- else:
- return TestResult(TestStatus.NO_TESTS, [], lines)
-
-class TestCounts:
- passed: int
- failed: int
- crashed: int
- skipped: int
-
- def __init__(self):
- self.passed = 0
- self.failed = 0
- self.crashed = 0
- self.skipped = 0
-
- def total(self) -> int:
- return self.passed + self.failed + self.crashed + self.skipped
-
-def print_and_count_results(test_result: TestResult) -> TestCounts:
- counts = TestCounts()
- for test_suite in test_result.suites:
- if test_suite.status == TestStatus.SUCCESS:
- print_suite_divider(green('[PASSED] ') + test_suite.name)
- elif test_suite.status == TestStatus.SKIPPED:
- print_suite_divider(yellow('[SKIPPED] ') + test_suite.name)
- elif test_suite.status == TestStatus.TEST_CRASHED:
- print_suite_divider(red('[CRASHED] ' + test_suite.name))
+ subtests.append(parse_test(lines, test_num))
+ test_num += 1
+ test.subtests = subtests
+ if not main:
+ # If not main test, look for test result line
+ parse_diagnostic(lines, test)
+ if (parent_test and peek_test_name_match(lines, test)) or \
+ not parent_test:
+ parse_test_result(lines, test, expected_num)
+ if not parent_test:
+ print_test_result(test)
else:
- print_suite_divider(red('[FAILED] ') + test_suite.name)
- for test_case in test_suite.cases:
- if test_case.status == TestStatus.SUCCESS:
- counts.passed += 1
- print_with_timestamp(green('[PASSED] ') + test_case.name)
- elif test_case.status == TestStatus.SKIPPED:
- counts.skipped += 1
- print_with_timestamp(yellow('[SKIPPED] ') + test_case.name)
- elif test_case.status == TestStatus.TEST_CRASHED:
- counts.crashed += 1
- print_with_timestamp(red('[CRASHED] ' + test_case.name))
- print_log(map(yellow, test_case.log))
- print_with_timestamp('')
- else:
- counts.failed += 1
- print_with_timestamp(red('[FAILED] ') + test_case.name)
- print_log(map(yellow, test_case.log))
- print_with_timestamp('')
- return counts
+ test.add_error('missing subtest result line!')
+ # Add statuses to TestCounts attribute in Test object
+ bubble_up_test_results(test)
+ if parent_test:
+ # If test has subtests and is not the main test object, print
+ # footer.
+ print_test_footer(test)
+ return test
def parse_run_tests(kernel_output: Iterable[str]) -> TestResult:
- counts = TestCounts()
+ """
+ Using kernel output, extract ktap lines, parse the lines for test
+	results, and print condensed test results and a summary line.
+
+ Parameters:
+ kernel_output : Iterable[str] - iterable object contains lines
+ of kernel output
+
+ Return:
+	TestResult - Tuple containing status of main test object, main test
+ object with all subtests, and log of all ktap lines.
+ """
+ print_with_timestamp(DIVIDER)
lines = extract_tap_lines(kernel_output)
- test_result = parse_test_result(lines)
- if test_result.status == TestStatus.NO_TESTS:
- print(red('[ERROR] ') + yellow('no tests run!'))
- elif test_result.status == TestStatus.FAILURE_TO_PARSE_TESTS:
- print(red('[ERROR] ') + yellow('could not parse test results!'))
+ test = Test()
+ if not lines:
+ test.add_error('invalid KTAP input!')
+ test.status = TestStatus.FAILURE_TO_PARSE_TESTS
else:
- counts = print_and_count_results(test_result)
+ test = parse_test(lines, 0)
+ if test.status != TestStatus.NO_TESTS:
+ test.status = test.counts.get_status()
print_with_timestamp(DIVIDER)
- if test_result.status == TestStatus.SUCCESS:
- fmt = green
- elif test_result.status == TestStatus.SKIPPED:
- fmt = yellow
- else:
- fmt =red
- print_with_timestamp(
- fmt('Testing complete. %d tests run. %d failed. %d crashed. %d skipped.' %
- (counts.total(), counts.failed, counts.crashed, counts.skipped)))
- return test_result
+ print_summary_line(test)
+ return TestResult(test.status, test, lines)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 75045aa0f8a1..ca760ee32096 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -106,10 +106,10 @@ class KUnitParserTest(unittest.TestCase):
with open(log_path) as file:
result = kunit_parser.extract_tap_lines(file.readlines())
self.assertContains('TAP version 14', result)
- self.assertContains(' # Subtest: example', result)
- self.assertContains(' 1..2', result)
- self.assertContains(' ok 1 - example_simple_test', result)
- self.assertContains(' ok 2 - example_mock_test', result)
+ self.assertContains('# Subtest: example', result)
+ self.assertContains('1..2', result)
+ self.assertContains('ok 1 - example_simple_test', result)
+ self.assertContains('ok 2 - example_mock_test', result)
self.assertContains('ok 1 - example', result)
def test_output_with_prefix_isolated_correctly(self):
@@ -117,28 +117,28 @@ class KUnitParserTest(unittest.TestCase):
with open(log_path) as file:
result = kunit_parser.extract_tap_lines(file.readlines())
self.assertContains('TAP version 14', result)
- self.assertContains(' # Subtest: kunit-resource-test', result)
- self.assertContains(' 1..5', result)
- self.assertContains(' ok 1 - kunit_resource_test_init_resources', result)
- self.assertContains(' ok 2 - kunit_resource_test_alloc_resource', result)
- self.assertContains(' ok 3 - kunit_resource_test_destroy_resource', result)
- self.assertContains(' foo bar #', result)
- self.assertContains(' ok 4 - kunit_resource_test_cleanup_resources', result)
- self.assertContains(' ok 5 - kunit_resource_test_proper_free_ordering', result)
+ self.assertContains('# Subtest: kunit-resource-test', result)
+ self.assertContains('1..5', result)
+ self.assertContains('ok 1 - kunit_resource_test_init_resources', result)
+ self.assertContains('ok 2 - kunit_resource_test_alloc_resource', result)
+ self.assertContains('ok 3 - kunit_resource_test_destroy_resource', result)
+ self.assertContains('foo bar #', result)
+ self.assertContains('ok 4 - kunit_resource_test_cleanup_resources', result)
+ self.assertContains('ok 5 - kunit_resource_test_proper_free_ordering', result)
self.assertContains('ok 1 - kunit-resource-test', result)
- self.assertContains(' foo bar # non-kunit output', result)
- self.assertContains(' # Subtest: kunit-try-catch-test', result)
- self.assertContains(' 1..2', result)
- self.assertContains(' ok 1 - kunit_test_try_catch_successful_try_no_catch',
+ self.assertContains('foo bar # non-kunit output', result)
+ self.assertContains('# Subtest: kunit-try-catch-test', result)
+ self.assertContains('1..2', result)
+ self.assertContains('ok 1 - kunit_test_try_catch_successful_try_no_catch',
result)
- self.assertContains(' ok 2 - kunit_test_try_catch_unsuccessful_try_does_catch',
+ self.assertContains('ok 2 - kunit_test_try_catch_unsuccessful_try_does_catch',
result)
self.assertContains('ok 2 - kunit-try-catch-test', result)
- self.assertContains(' # Subtest: string-stream-test', result)
- self.assertContains(' 1..3', result)
- self.assertContains(' ok 1 - string_stream_test_empty_on_creation', result)
- self.assertContains(' ok 2 - string_stream_test_not_empty_after_add', result)
- self.assertContains(' ok 3 - string_stream_test_get_string', result)
+ self.assertContains('# Subtest: string-stream-test', result)
+ self.assertContains('1..3', result)
+ self.assertContains('ok 1 - string_stream_test_empty_on_creation', result)
+ self.assertContains('ok 2 - string_stream_test_not_empty_after_add', result)
+ self.assertContains('ok 3 - string_stream_test_get_string', result)
self.assertContains('ok 3 - string-stream-test', result)
def test_parse_successful_test_log(self):
@@ -148,6 +148,13 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
+ def test_parse_successful_nested_tests_log(self):
+ all_passed_log = test_data_path('test_is_test_passed-all_passed_nested.log')
+ with open(all_passed_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(
+ kunit_parser.TestStatus.SUCCESS,
+ result.status)
def test_parse_failed_test_log(self):
failed_log = test_data_path('test_is_test_passed-failure.log')
@@ -162,17 +169,19 @@ class KUnitParserTest(unittest.TestCase):
with open(empty_log) as file:
result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(file.readlines()))
- self.assertEqual(0, len(result.suites))
+ self.assertEqual(0, len(result.test.subtests))
self.assertEqual(
kunit_parser.TestStatus.FAILURE_TO_PARSE_TESTS,
result.status)
def test_no_tests(self):
- empty_log = test_data_path('test_is_test_passed-no_tests_run_with_header.log')
- with open(empty_log) as file:
+ header_log = test_data_path('test_is_test_passed-'
+ 'no_tests_run_with_header.log')
+ with open(header_log) as file:
result = kunit_parser.parse_run_tests(
- kunit_parser.extract_tap_lines(file.readlines()))
- self.assertEqual(0, len(result.suites))
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ self.assertEqual(0, len(result.test.subtests))
self.assertEqual(
kunit_parser.TestStatus.NO_TESTS,
result.status)
@@ -182,15 +191,17 @@ class KUnitParserTest(unittest.TestCase):
print_mock = mock.patch('builtins.print').start()
with open(crash_log) as file:
result = kunit_parser.parse_run_tests(
- kunit_parser.extract_tap_lines(file.readlines()))
- print_mock.assert_any_call(StrContains('could not parse test results!'))
+ kunit_parser.extract_tap_lines(
+ file.readlines()))
+ print_mock.assert_any_call(StrContains('invalid KTAP input!'))
print_mock.stop()
file.close()
def test_crashed_test(self):
crashed_log = test_data_path('test_is_test_passed-crash.log')
with open(crashed_log) as file:
- result = kunit_parser.parse_run_tests(file.readlines())
+ result = kunit_parser.parse_run_tests(
+ file.readlines())
self.assertEqual(
kunit_parser.TestStatus.TEST_CRASHED,
result.status)
@@ -224,7 +235,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_ignores_multiple_prefixes(self):
prefix_log = test_data_path('test_multiple_prefixes.log')
@@ -233,7 +244,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_prefix_mixed_kernel_output(self):
mixed_prefix_log = test_data_path('test_interrupted_tap_output.log')
@@ -242,7 +253,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_prefix_poundsign(self):
pound_log = test_data_path('test_pound_sign.log')
@@ -251,16 +262,16 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_kernel_panic_end(self):
panic_log = test_data_path('test_kernel_panic_interrupt.log')
with open(panic_log) as file:
result = kunit_parser.parse_run_tests(file.readlines())
self.assertEqual(
- kunit_parser.TestStatus.TEST_CRASHED,
+ kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
def test_pound_no_prefix(self):
pound_log = test_data_path('test_pound_no_prefix.log')
@@ -269,7 +280,7 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
- self.assertEqual('kunit-resource-test', result.suites[0].name)
+ self.assertEqual('kunit-resource-test', result.test.subtests[0].name)
class LinuxSourceTreeTest(unittest.TestCase):
@@ -380,7 +391,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(e.exception.code, 1)
self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
- self.print_mock.assert_any_call(StrContains(' 0 tests run'))
+ self.print_mock.assert_any_call(StrContains('invalid KTAP input!'))
def test_exec_raw_output(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
@@ -388,7 +399,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
for call in self.print_mock.call_args_list:
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
- self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ self.assertNotEqual(call, mock.call(StrContains('0 tests run!')))
def test_run_raw_output(self):
self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
@@ -397,7 +408,7 @@ class KUnitMainTest(unittest.TestCase):
self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
for call in self.print_mock.call_args_list:
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
- self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ self.assertNotEqual(call, mock.call(StrContains('0 tests run!')))
def test_exec_timeout(self):
timeout = 3453
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log b/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
new file mode 100644
index 000000000000..9d5b04fe43a6
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-all_passed_nested.log
@@ -0,0 +1,34 @@
+TAP version 14
+1..2
+ # Subtest: sysctl_test
+ 1..4
+ # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
+ ok 1 - sysctl_test_dointvec_null_tbl_data
+ # Subtest: example
+ 1..2
+ init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+ kunit example: all tests passed
+ ok 2 - example
+ # sysctl_test_dointvec_table_len_is_zero: sysctl_test_dointvec_table_len_is_zero passed
+ ok 3 - sysctl_test_dointvec_table_len_is_zero
+ # sysctl_test_dointvec_table_read_but_position_set: sysctl_test_dointvec_table_read_but_position_set passed
+ ok 4 - sysctl_test_dointvec_table_read_but_position_set
+kunit sysctl_test: all tests passed
+ok 1 - sysctl_test
+ # Subtest: example
+ 1..2
+init_suite
+ # example_simple_test: initializing
+ # example_simple_test: example_simple_test passed
+ ok 1 - example_simple_test
+ # example_mock_test: initializing
+ # example_mock_test: example_mock_test passed
+ ok 2 - example_mock_test
+kunit example: all tests passed
+ok 2 - example
--
2.33.0.rc2.250.ged5fa647cd-goog
This patch series adds support for the unix stream type
in sockmap. Sockmap already supports TCP, UDP, and
unix dgram types. The unix stream support is similar
to unix dgram.
Also add selftests for unix stream type in sockmap tests.
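For illustration, here is a minimal userspace sketch of what this
enables, assuming a BPF_MAP_TYPE_SOCKMAP map has already been created
and loaded (its fd in map_fd) and that libbpf's bpf_map_update_elem()
is available:

    #include <sys/socket.h>
    #include <bpf/bpf.h>    /* libbpf: bpf_map_update_elem() */

    /*
     * Insert an already-connected unix stream socket into a sockmap.
     * With this series applied, the insert follows the same pattern
     * as TCP; unconnected sockets are rejected by the kernel.
     */
    static int add_unix_stream_sock(int map_fd, int connected_sk)
    {
            int key = 0;

            return bpf_map_update_elem(map_fd, &key, &connected_sk, BPF_ANY);
    }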
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/core/sock_map.c | 1 +
net/unix/af_unix.c | 95 ++++++++++++++++---
net/unix/unix_bpf.c | 93 +++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
5 files changed, 191 insertions(+), 54 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
v2 -> v3 :
- check for stream type in update_proto
- remove intermediate variable in __unix_stream_recvmsg
- fix compile warning in unix_stream_recvmsg
v3 -> v4 :
- remove sk_is_unix_stream, just check TCP_ESTABLISHED for UNIX sockets.
- add READ_ONCE in unix_dgram_recvmsg
- remove type check in unix_stream_bpf_update_proto
v4 -> v5 :
- add two missing READ_ONCE for sk_prot.
v5 -> v6 :
- fix READ_ONCE by reading to a local variable first.
v6 -> v7 :
- fix the following compiler error when CONFIG_UNIX is m.
modpost: "sock_map_unhash" [net/unix/unix.ko] undefined!
For the series:
Acked-by: John Fastabend <john.fastabend(a)gmail.com>
Acked-by: Jakub Sitnicki <jakub(a)cloudflare.com>
--
2.20.1
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
Add a function to remove all dynamic events from the tracing directory. It
requires a loop as some of the dynamic events may depend on others being
removed first. Also add a safety that prevents it from looping infinitely
due to a bug where an event never gets removed.
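For reference, the removal protocol the new helper relies on is the
standard dynamic_events one: a definition written back with its leading
type character replaced by '-' deletes the event. A minimal C sketch of
the same operation (assuming tracefs is mounted at /sys/kernel/tracing):

    #include <stdio.h>

    /* Delete one dynamic event, e.g. name = "kprobes/myprobe" for an
     * event that was created with "p:kprobes/myprobe ...". */
    static int remove_dynamic_event(const char *name)
    {
            FILE *f = fopen("/sys/kernel/tracing/dynamic_events", "a");

            if (!f)
                    return -1;
            fprintf(f, "-:%s\n", name);     /* '-' requests removal */
            return fclose(f);
    }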
Link: https://lkml.kernel.org/r/20210819152825.348941368@goodmis.org
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Acked-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
.../testing/selftests/ftrace/test.d/functions | 22 +++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index a6fac927ee82..f68d336b961b 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -83,6 +83,27 @@ clear_synthetic_events() { # reset all current synthetic events
done
}
+clear_dynamic_events() { # reset all current dynamic events
+ again=1
+ stop=1
+  # loop multiple times as some events require others to be removed first
+ while [ $again -eq 1 ]; do
+ stop=$((stop+1))
+ # Prevent infinite loops
+ if [ $stop -gt 10 ]; then
+ break;
+ fi
+ again=2
+ grep -v '^#' dynamic_events|
+ while read line; do
+ del=`echo $line | sed -e 's/^.\([^ ]*\).*/-\1/'`
+ if ! echo "$del" >> dynamic_events; then
+ again=1
+ fi
+ done
+ done
+}
+
initialize_ftrace() { # Reset ftrace to initial-state
# As the initial state, ftrace will be set to nop tracer,
# no events, no triggers, no filters, no function filters,
@@ -93,6 +114,7 @@ initialize_ftrace() { # Reset ftrace to initial-state
reset_events_filter
reset_ftrace_filter
disable_events
+ clear_dynamic_events
[ -f set_event_pid ] && echo > set_event_pid
[ -f set_ftrace_pid ] && echo > set_ftrace_pid
[ -f set_ftrace_notrace ] && echo > set_ftrace_notrace
--
2.30.2
Extend KSM self tests with a performance benchmark. These tests are not
part of regular regression testing, as they are mainly intended to be
used by developers making changes to the memory management subsystem.
This patchset is a respin of the previous series:
v2: https://lkml.org/lkml/2021/8/6/422
v1: https://lkml.org/lkml/2021/8/1/130
Zhansaya Bagdauletkyzy (2):
selftests: vm: add KSM merging time test
selftests: vm: add COW time test for KSM pages
v2 -> v3:
- address COW test review comments
v1 -> v2:
- replace MB with MiB
- address COW test review comments
tools/testing/selftests/vm/ksm_tests.c | 154 ++++++++++++++++++++++++-
1 file changed, 150 insertions(+), 4 deletions(-)
--
2.25.1
The PAC tests check to see if the system supports the relevant PAC features
but instead of skipping the tests if they can't be executed, they fail the
tests, which makes things look like they're not working when they are.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
tools/testing/selftests/arm64/pauth/pac.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
index 592fe538506e..b743daa772f5 100644
--- a/tools/testing/selftests/arm64/pauth/pac.c
+++ b/tools/testing/selftests/arm64/pauth/pac.c
@@ -25,13 +25,15 @@
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* data key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACA)) \
+ SKIP(return, "PAUTH not enabled"); \
} while (0)
#define ASSERT_GENERIC_PAUTH_ENABLED() \
do { \
unsigned long hwcaps = getauxval(AT_HWCAP); \
/* generic key instructions are not in NOP space. This prevents a SIGILL */ \
- ASSERT_NE(0, hwcaps & HWCAP_PACG) TH_LOG("Generic PAUTH not enabled"); \
+ if (!(hwcaps & HWCAP_PACG)) \
+ SKIP(return, "Generic PAUTH not enabled"); \
} while (0)
void sign_specific(struct signatures *sign, size_t val)
@@ -256,7 +258,7 @@ TEST(single_thread_different_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
@@ -299,7 +301,7 @@ TEST(exec_changed_keys)
unsigned long hwcaps = getauxval(AT_HWCAP);
/* generic and data key instructions are not in NOP space. This prevents a SIGILL */
- ASSERT_NE(0, hwcaps & HWCAP_PACA) TH_LOG("PAUTH not enabled");
+ ASSERT_PAUTH_ENABLED();
if (!(hwcaps & HWCAP_PACG)) {
TH_LOG("WARNING: Generic PAUTH not enabled. Skipping generic key checks");
nkeys = NKEYS - 1;
--
2.20.1
When skipping the tests due to a lack of system support for MTE, we
currently print a message saying FAIL, which makes it look like the test
failed even though the test actually reported KSFT_SKIP, creating some
confusion. Change the error message to say SKIP instead so things are
clearer.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
tools/testing/selftests/arm64/mte/mte_common_util.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
index f50ac31920d1..0328a1e08f65 100644
--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
+++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
@@ -298,7 +298,7 @@ int mte_default_setup(void)
int ret;
if (!(hwcaps2 & HWCAP2_MTE)) {
- ksft_print_msg("FAIL: MTE features unavailable\n");
+ ksft_print_msg("SKIP: MTE features unavailable\n");
return KSFT_SKIP;
}
/* Get current mte mode */
--
2.20.1
Fix a few issues reported by 0Day/LKP while running selftests/bpf.
Changelog:
V2:
- folded previous similar standalone patch to [1/5], and add acked tag
from Song Liu
- add acked tag to [2/5], [3/5] from Song Liu
- [4/5]: move test_bpftool.py to TEST_PROGS_EXTENDED, files in TEST_GEN_PROGS_EXTENDED
are generated by make. Otherwise, it will break out-of-tree install:
'make O=/kselftest-build SKIP_TARGETS= V=1 -C tools/testing/selftests install INSTALL_PATH=/kselftest-install'
- [5/5]: new patch
Li Zhijian (5):
selftests/bpf: enlarge select() timeout for test_maps
selftests/bpf: make test_doc_build.sh work from script directory
selftests/bpf: add default bpftool built by selftests to PATH
selftests/bpf: add missing files required by test_bpftool.sh for
installing
selftests/bpf: exit with KSFT_SKIP if no Makefile found
tools/testing/selftests/bpf/Makefile | 4 +++-
tools/testing/selftests/bpf/test_bpftool.sh | 6 ++++++
tools/testing/selftests/bpf/test_bpftool_build.sh | 2 +-
tools/testing/selftests/bpf/test_doc_build.sh | 10 ++++++++--
tools/testing/selftests/bpf/test_maps.c | 2 +-
5 files changed, 19 insertions(+), 5 deletions(-)
--
2.32.0
Previously, it failed as below:
-------------
root@lkp-skl-d01 /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf# ./test_doc_build.sh
++ realpath --relative-to=/opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf ./test_doc_build.sh
+ SCRIPT_REL_PATH=test_doc_build.sh
++ dirname test_doc_build.sh
+ SCRIPT_REL_DIR=.
++ realpath /opt/rootfs/v5.14-rc4/tools/testing/selftests/bpf/./../../../../
+ KDIR_ROOT_DIR=/opt/rootfs/v5.14-rc4
+ cd /opt/rootfs/v5.14-rc4
+ for tgt in docs docs-clean
+ make -s -C /opt/rootfs/v5.14-rc4/. docs
make: *** No rule to make target 'docs'. Stop.
+ for tgt in docs docs-clean
+ make -s -C /opt/rootfs/v5.14-rc4/. docs-clean
make: *** No rule to make target 'docs-clean'. Stop.
-----------
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
tools/testing/selftests/bpf/test_doc_build.sh | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/test_doc_build.sh b/tools/testing/selftests/bpf/test_doc_build.sh
index ed12111cd2f0..d67ced95a6cf 100755
--- a/tools/testing/selftests/bpf/test_doc_build.sh
+++ b/tools/testing/selftests/bpf/test_doc_build.sh
@@ -4,9 +4,10 @@ set -e
# Assume script is located under tools/testing/selftests/bpf/. We want to start
# build attempts from the top of kernel repository.
-SCRIPT_REL_PATH=$(realpath --relative-to=$PWD $0)
+SCRIPT_REL_PATH=$(realpath $0)
SCRIPT_REL_DIR=$(dirname $SCRIPT_REL_PATH)
-KDIR_ROOT_DIR=$(realpath $PWD/$SCRIPT_REL_DIR/../../../../)
+KDIR_ROOT_DIR=$(realpath $SCRIPT_REL_DIR/../../../../)
+SCRIPT_REL_DIR=$(dirname $(realpath --relative-to=$KDIR_ROOT_DIR $SCRIPT_REL_PATH))
cd $KDIR_ROOT_DIR
for tgt in docs docs-clean; do
--
2.32.0
From: "Steven Rostedt (VMware)" <rostedt(a)goodmis.org>
Add a function to remove all dynamic events from the tracing directory. It
requires a loop as some of the dynamic events may depend on others being
removed first. Also add a safety that prevents it from looping infinitely
due to a bug where an event never gets removed.
Link: https://lkml.kernel.org/r/20210819041842.696873153@goodmis.org
Cc: Shuah Khan <shuah(a)kernel.org>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: linux-kselftest(a)vger.kernel.org
Signed-off-by: Steven Rostedt (VMware) <rostedt(a)goodmis.org>
---
.../testing/selftests/ftrace/test.d/functions | 22 +++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/tools/testing/selftests/ftrace/test.d/functions b/tools/testing/selftests/ftrace/test.d/functions
index a6fac927ee82..f68d336b961b 100644
--- a/tools/testing/selftests/ftrace/test.d/functions
+++ b/tools/testing/selftests/ftrace/test.d/functions
@@ -83,6 +83,27 @@ clear_synthetic_events() { # reset all current synthetic events
done
}
+clear_dynamic_events() { # reset all current dynamic events
+ again=1
+ stop=1
+  # loop multiple times as some events require others to be removed first
+ while [ $again -eq 1 ]; do
+ stop=$((stop+1))
+ # Prevent infinite loops
+ if [ $stop -gt 10 ]; then
+ break;
+ fi
+ again=2
+ grep -v '^#' dynamic_events|
+ while read line; do
+ del=`echo $line | sed -e 's/^.\([^ ]*\).*/-\1/'`
+ if ! echo "$del" >> dynamic_events; then
+ again=1
+ fi
+ done
+ done
+}
+
initialize_ftrace() { # Reset ftrace to initial-state
# As the initial state, ftrace will be set to nop tracer,
# no events, no triggers, no filters, no function filters,
@@ -93,6 +114,7 @@ initialize_ftrace() { # Reset ftrace to initial-state
reset_events_filter
reset_ftrace_filter
disable_events
+ clear_dynamic_events
[ -f set_event_pid ] && echo > set_event_pid
[ -f set_ftrace_pid ] && echo > set_ftrace_pid
[ -f set_ftrace_notrace ] && echo > set_ftrace_notrace
--
2.30.2
0Day robot observed that it easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since this test will be scheduled by 0Day to a random host that could have
only a few CPUs (2-8), enlarge the timeout to avoid a false NG report.
In practice, I tried pinning it to a single CPU with 'taskset 0x01 ./test_maps'
and found that 10 seconds is likely enough, but I still prefer the larger
value of 30.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
V2: update to 30 seconds
3 seconds is sometimes not enough on a very busy host
taskset 1,1 ./test_maps 9
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
Failed sockmap unexpected timeout
taskset 1,1 ./test_maps 10
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
test_array_map_batch_ops:PASS
test_array_percpu_map_batch_ops:PASS
test_htab_map_batch_ops:PASS
test_htab_percpu_map_batch_ops:PASS
test_lpm_trie_map_batch_ops:PASS
test_sk_storage_map:PASS
test_maps: OK, 0 SKIPPED
taskset 0x01 ./test_maps 9
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
test_array_map_batch_ops:PASS
test_array_percpu_map_batch_ops:PASS
test_htab_map_batch_ops:PASS
test_htab_percpu_map_batch_ops:PASS
test_lpm_trie_map_batch_ops:PASS
test_sk_storage_map:PASS
test_maps: OK, 0 SKIPPED
taskset 0x01 ./test_maps 10
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
Fork 1024 tasks to 'test_update_delete'
Fork 1024 tasks to 'test_update_delete'
Fork 100 tasks to 'test_hashmap'
Fork 100 tasks to 'test_hashmap_percpu'
Fork 100 tasks to 'test_hashmap_sizes'
Fork 100 tasks to 'test_hashmap_walk'
Fork 100 tasks to 'test_arraymap'
Fork 100 tasks to 'test_arraymap_percpu'
test_array_map_batch_ops:PASS
test_array_percpu_map_batch_ops:PASS
test_htab_map_batch_ops:PASS
test_htab_percpu_map_batch_ops:PASS
test_lpm_trie_map_batch_ops:PASS
test_sk_storage_map:PASS
test_maps: OK, 0 SKIPPED
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 30cbf5d98f7d..de58a3070eea 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -985,7 +985,7 @@ static void test_sockmap(unsigned int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 30;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.32.0
Hi Greg,
Can you please pull these LKDTM changes for drivers/misc? I forgot
to flush this queue of enhancements earlier. :) Here's what I've got
built up, mostly tweaks for kernelCI, configs, consolidation. This also
includes the __alloc_size hint adjustment I'd sent earlier, now fixed
with a better comment.
Thanks!
-Kees
Kees Cook (4):
lkdtm/bugs: Add ARRAY_BOUNDS to selftests
lkdtm/fortify: Consolidate FORTIFY_SOURCE tests
lkdtm: Add kernel version to failure hints
lkdtm/heap: Avoid __alloc_size hint warning for
VMALLOC_LINEAR_OVERFLOW
drivers/misc/lkdtm/bugs.c | 51 +-----------------------
drivers/misc/lkdtm/core.c | 4 +-
drivers/misc/lkdtm/fortify.c | 53 +++++++++++++++++++++++++
drivers/misc/lkdtm/heap.c | 9 ++++-
drivers/misc/lkdtm/lkdtm.h | 24 ++++++-----
tools/testing/selftests/lkdtm/config | 2 +
tools/testing/selftests/lkdtm/tests.txt | 3 ++
7 files changed, 83 insertions(+), 63 deletions(-)
--
2.30.2
This patchset is an implementation of the futex2 interface on top of the
existing futex.c code.
* What happened to the current futex()?
futex() is implemented using a multiplexed interface that doesn't
scale well and gives headaches to people. We don't want to add more
features there.
* New features at futex2()
** NUMA-awareness
At the current implementation, all futex kernel side infrastructure is
stored on a single node. Given that, all futex() calls issued by
processors that aren't located on that node will have a memory access
penalty when doing it.
** Variable sized futexes
Futexes are used to implement atomic operations in userspace.
Supporting 8, 16, 32 and 64 bit sized futexes allows user libraries to
implement all those sizes in a performant way. Thanks Boost devs for
feedback: https://lists.boost.org/Archives/boost/2021/05/251508.php
Embedded systems or anything with memory constraints could benefit from
using smaller sizes for the futex userspace integer.
** Wait on multiple futexes
Proton's (a set of compatibility tools to run Windows games) fork of Wine
benefits from this feature to implement WaitForMultipleObjects from Win32 in
a performant way. Native game engines will benefit from this as well,
given that this is a common wait pattern for games.
* The interface
The new interface has one syscall per operation as opposed to the
current multiplexing one. The details can be found in the following
patches, but this is a high level summary of what the interface can do:
- Supports wake/wait semantics, as in futex()
- Supports requeue operations, similarly as FUTEX_CMP_REQUEUE, but with
individual flags for each address
- Supports waiting for a vector of futexes, using a new syscall named
futex_waitv()
- The following features will be implemented in next patchset versions:
- Supports variable sized futexes (8bits, 16bits, 32bits and 64bits)
- Supports NUMA-awareness operations, where the user can specify the
memory node on which they would like to operate
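As a rough illustration of the proposed interface, here is a minimal
userspace sketch of waiting on a vector of 32-bit futexes. The struct
layout and syscall number below are meant to mirror this series'
headers but may still change; treat every name here as illustrative
rather than final ABI:

    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* One entry per futex to wait on (sketch of the series' uapi). */
    struct futex_waitv {
            void *uaddr;            /* address of the futex word */
            unsigned int val;       /* expected value at uaddr */
            unsigned int flags;     /* e.g. FUTEX_32 for a 32-bit futex */
    };

    /* Thin wrapper; __NR_futex_waitv comes from the patched headers. */
    static int futex2_waitv(struct futex_waitv *waiters, unsigned int nr,
                            unsigned int flags, struct timespec *timo)
    {
            return syscall(__NR_futex_waitv, waiters, nr, flags, timo);
    }

Waiting on a vector of futexes in a single call like this is what lets
Wine-style code emulate WaitForMultipleObjects without the userspace
multiplexing tricks needed on top of the current futex().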
* The patchset
Given that futex2 reuses futex code, the patches make futex.c functions
public and modify them as needed.
This patchset can be also found at my git tree:
https://gitlab.collabora.com/tonyk/linux/-/tree/futex2-dev
- Patch 1: Implements 32bit wait/wake
- Patches 2-3: Implement waitv and requeue.
- Patch 4: Add a documentation file which details the interface and
the internal implementation.
- Patches 5-10: Selftests for all operations along with perf
support for futex2.
- Patch 11: Proof of concept of waking threads at waitpid(), not to be
merged as it is.
* Testing
** Stability
- glibc[1]: nptl's low level locking was modified to use futex2 API
(except for PI). All nptl/ tests passed.
- Proton's Wine: Proton/Wine was modified in order to use futex2() for the
emulation of Windows NT sync mechanisms based on futex, called "fsync".
Triple-A games with huge CPU's loads and tons of parallel jobs worked
as expected when compared with the previous FUTEX_WAIT_MULTIPLE
implementation at futex(). Some games issue 42k futex2() calls
per second.
- perf: The perf benchmarks tests can also be used to stress the
interface, and they can be found in this patchset.
[1] https://gitlab.collabora.com/tonyk/glibc/-/tree/futex2-dev
** Performance
- Using perf, no significant difference was measured when comparing
futex() and futex2() for the following benchmarks: hash, wake and
wake-parallel.
- I measured a 15% overhead for perf's requeue benchmark, comparing
futex2() to futex(). Requeue patch provides more details about why this
happens and how to overcome this.
* Changelog
Changes from v4:
- Use existing futex.c code when possible
- Cleaned up cover letter, check v4 for a more verbose version
v4: https://lore.kernel.org/lkml/20210603195924.361327-1-andrealmeid@collabora.…
André Almeida (11):
futex2: Implement wait and wake functions
futex2: Implement vectorized wait
futex2: Implement requeue operation
docs: locking: futex2: Add documentation
selftests: futex2: Add wake/wait test
selftests: futex2: Add timeout test
selftests: futex2: Add wouldblock test
selftests: futex2: Add waitv test
selftests: futex2: Add requeue test
perf bench: Add futex2 benchmark tests
kernel: Enable waitpid() for futex2
Documentation/locking/futex2.rst | 185 ++++++
Documentation/locking/index.rst | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 4 +
arch/x86/entry/syscalls/syscall_64.tbl | 4 +
include/linux/compat.h | 23 +
include/linux/futex.h | 103 ++++
include/linux/syscalls.h | 8 +
include/uapi/asm-generic/unistd.h | 11 +-
include/uapi/linux/futex.h | 27 +
init/Kconfig | 7 +
kernel/Makefile | 1 +
kernel/fork.c | 2 +
kernel/futex.c | 111 +---
kernel/futex2.c | 566 ++++++++++++++++++
kernel/sys_ni.c | 9 +
tools/arch/x86/include/asm/unistd_64.h | 12 +
tools/perf/bench/bench.h | 4 +
tools/perf/bench/futex-hash.c | 24 +-
tools/perf/bench/futex-requeue.c | 57 +-
tools/perf/bench/futex-wake-parallel.c | 41 +-
tools/perf/bench/futex-wake.c | 37 +-
tools/perf/bench/futex.h | 47 ++
tools/perf/builtin-bench.c | 18 +-
.../selftests/futex/functional/.gitignore | 3 +
.../selftests/futex/functional/Makefile | 6 +-
.../futex/functional/futex2_requeue.c | 164 +++++
.../selftests/futex/functional/futex2_wait.c | 195 ++++++
.../selftests/futex/functional/futex2_waitv.c | 154 +++++
.../futex/functional/futex_wait_timeout.c | 24 +-
.../futex/functional/futex_wait_wouldblock.c | 33 +-
.../testing/selftests/futex/functional/run.sh | 6 +
.../selftests/futex/include/futex2test.h | 112 ++++
32 files changed, 1865 insertions(+), 134 deletions(-)
create mode 100644 Documentation/locking/futex2.rst
create mode 100644 kernel/futex2.c
create mode 100644 tools/testing/selftests/futex/functional/futex2_requeue.c
create mode 100644 tools/testing/selftests/futex/functional/futex2_wait.c
create mode 100644 tools/testing/selftests/futex/functional/futex2_waitv.c
create mode 100644 tools/testing/selftests/futex/include/futex2test.h
--
2.32.0
0Day robot observed that it easily times out on a heavily loaded host.
-------------------
# selftests: bpf: test_maps
# Fork 1024 tasks to 'test_update_delete'
# Fork 1024 tasks to 'test_update_delete'
# Fork 100 tasks to 'test_hashmap'
# Fork 100 tasks to 'test_hashmap_percpu'
# Fork 100 tasks to 'test_hashmap_sizes'
# Fork 100 tasks to 'test_hashmap_walk'
# Fork 100 tasks to 'test_arraymap'
# Fork 100 tasks to 'test_arraymap_percpu'
# Failed sockmap unexpected timeout
not ok 3 selftests: bpf: test_maps # exit=1
# selftests: bpf: test_lru_map
# nr_cpus:8
-------------------
Since this test will be scheduled by 0Day to a random host that could have
only a few CPUs (2-8), enlarge the timeout to avoid a false NG report.
Reported-by: kernel test robot <lkp(a)intel.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
---
tools/testing/selftests/bpf/test_maps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
index 30cbf5d98f7d..72673e0428fd 100644
--- a/tools/testing/selftests/bpf/test_maps.c
+++ b/tools/testing/selftests/bpf/test_maps.c
@@ -985,7 +985,7 @@ static void test_sockmap(unsigned int tasks, void *data)
FD_ZERO(&w);
FD_SET(sfd[3], &w);
- to.tv_sec = 1;
+ to.tv_sec = 3;
to.tv_usec = 0;
s = select(sfd[3] + 1, &w, NULL, NULL, &to);
if (s == -1) {
--
2.32.0
From: Bongsu Jeon <bongsu.jeon(a)samsung.com>
This series updates the virtual NCI device driver and NCI selftest code
and adds the NCI test case in selftests.
1/8 to use wait queue in virtual device driver.
2/8 to remove the polling code in selftests.
3/8 to fix a typo.
4/8 to fix the next nlattr offset calculation.
5/8 to fix the wrong condition in if statement.
6/8 to add a flag parameter to the Netlink send function.
7/8 to extract the start/stop discovery function.
8/8 to add the NCI testcase in selftests.
v2:
1/8
- change the commit message.
- add the defense code while reading a frame.
3/8
- change the commit message.
- separate the commit into 3/8~8/8.
Bongsu Jeon (8):
nfc: virtual_ncidev: Use wait queue instead of polling
selftests: nci: Remove the polling code to read a NCI frame
selftests: nci: Fix the typo
selftests: nci: Fix the code for next nlattr offset
selftests: nci: Fix the wrong condition
selftests: nci: Add the flags parameter for the send_cmd_mt_nla
selftests: nci: Extract the start/stop discovery function
selftests: nci: Add the NCI testcase reading T4T Tag
drivers/nfc/virtual_ncidev.c | 9 +-
tools/testing/selftests/nci/nci_dev.c | 416 ++++++++++++++++++++++----
2 files changed, 362 insertions(+), 63 deletions(-)
--
2.32.0
The goal of these patches is to add a test case for SGX reserved
memory oversubscription, i.e. make sure that the page reclaimer
and the page fault handler are working correctly.
Change Log
==========
v3:
* Reorganized the patch set into smaller pieces, and refactored the code
so that the test enclave can be created inside each test case. Added a
new test case unclobbered_vdso_oversubscribed that creates a large enough
heap to fill all of the available SGX reserved memory (EPC).
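For illustration, a user-space sketch of how such a test might size its
heap from the debugfs file added in patch 1; treating the value as the
total EPC size in bytes is an assumption made here, not something the
shortlog below spells out:

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/x86/sgx_total_mem", "r");
	unsigned long heap_size = 0;

	if (!f)
		return 1;
	if (fscanf(f, "%lu", &heap_size) != 1)
		heap_size = 0;
	fclose(f);
	if (!heap_size)
		return 1;
	/* Ask for at least the whole EPC so page reclaim must kick in. */
	printf("creating enclave heap of %lu bytes\n", heap_size);
	return 0;
}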
Jarkko Sakkinen (8):
x86/sgx: Add /sys/kernel/debug/x86/sgx_total_mem
selftests/sgx: Assign source for each segment
selftests/sgx: Make data measurement for an enclave segment optional
selftests/sgx: Create a heap for the test enclave
selftests/sgx: Dump segments and /proc/self/maps only on failure
selftests/sgx: Encapsulate the test enclave creation
selftests/sgx: Move setup_test_encl() to each TEST_F()
selftests/sgx: Add a new kselftest: unclobbered_vdso_oversubscribed
Documentation/x86/sgx.rst | 6 ++
arch/x86/kernel/cpu/sgx/main.c | 10 +-
tools/testing/selftests/sgx/load.c | 40 ++++++--
tools/testing/selftests/sgx/main.c | 123 +++++++++++++++++++-----
tools/testing/selftests/sgx/main.h | 7 +-
tools/testing/selftests/sgx/sigstruct.c | 12 ++-
6 files changed, 159 insertions(+), 39 deletions(-)
--
2.32.0
Extend KSM self tests with a performance benchmark. These tests are not
part of regular regression testing, as they are mainly intended to be
used by developers making changes to the memory management subsystem.
This patchset is a respin of the previous series:
https://lore.kernel.org/lkml/cover.1627828548.git.zhansayabagdaulet@gmail.c…
Zhansaya Bagdauletkyzy (2):
selftests: vm: add KSM merging time test
selftests: vm: add COW time test for KSM pages
v1 -> v2:
- replace MB with MiB
- address COW test review comments
tools/testing/selftests/vm/ksm_tests.c | 152 ++++++++++++++++++++++++-
1 file changed, 148 insertions(+), 4 deletions(-)
--
2.25.1
Introduce selftests to validate the functionality of KSM. The tests are
run on private anonymous pages. Since some KSM tunables are modified,
their starting values are saved and restored after testing. At the
start, run is set to 2 to ensure that only test pages will be merged (we
assume that no applications make madvise syscalls in the background). If
the KSM config is not enabled, all tests will be skipped.
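As a minimal sketch (not the selftest itself) of the save/restore
pattern described above, using the standard KSM sysfs interface under
/sys/kernel/mm/ksm/:

#include <stdio.h>

static int ksm_read(const char *name, long *val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", val) != 1)
		*val = -1;
	fclose(f);
	return 0;
}

static int ksm_write(const char *name, long val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%ld", val);
	fclose(f);
	return 0;
}

int main(void)
{
	long saved_run;

	if (ksm_read("run", &saved_run))
		return 1;
	ksm_write("run", 2);	/* unmerge pre-existing pages first */
	/* ... run the merge tests with run=1 here ... */
	ksm_write("run", saved_run);	/* restore the starting value */
	return 0;
}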
Zhansaya Bagdauletkyzy (4):
selftests: vm: add KSM merge test
selftests: vm: add KSM unmerge test
selftests: vm: add KSM zero page merging test
selftests: vm: add KSM merging across nodes test
v1 -> v2:
- add a test to check KSM unmerging
- add a test to check merging of zero pages
- add a test to check merging in different NUMA nodes
- include command line options for each test
- new options to specify use_zero_pages and merge_across_nodes
- run each test case in run_vmtests.sh
- add some helper functions to make the code more compact:
allocate_memory(), ksm_do_scan(), ksm_merge_pages()
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +
tools/testing/selftests/vm/ksm_tests.c | 516 ++++++++++++++++++++++
tools/testing/selftests/vm/run_vmtests.sh | 96 ++++
4 files changed, 616 insertions(+)
create mode 100644 tools/testing/selftests/vm/ksm_tests.c
--
2.25.1
From: Bongsu Jeon <bongsu.jeon(a)samsung.com>
This series updates the virtual NCI device driver and NCI selftest code
and adds the NCI test case in selftests.
1/3 is the patch to use wait queue in virtual device driver.
2/3 is the patch to remove the polling code in selftests.
3/3 is the patch to add the NCI testcase in selftests.
Bongsu Jeon (3):
nfc: Change the virtual NCI device driver to use Wait Queue
selftests: Remove the polling code to read a NCI frame
selftests: Add the NCI testcase reading T4T Tag
drivers/nfc/virtual_ncidev.c | 10 +-
tools/testing/selftests/nci/nci_dev.c | 417 ++++++++++++++++++++++----
2 files changed, 362 insertions(+), 65 deletions(-)
--
2.32.0
v5:
- Rebased to the latest for-5.15 branch of cgroup git tree and drop the
1st v4 patch as it has been merged.
- Update patch 1 to always allow changing partition root back to member
even if it invalidates child partitions underneath it.
- Adjust the empty effective cpu partition patch to not allow 0 effective
cpus for a terminal partition, which would make it invalid.
- Add a new patch to enable reading of cpuset.cpus.partition to display
the reason that causes invalid partition.
- Adjust the documentation and testing patch accordingly.
v4:
- Rebased to the for-5.15 branch of cgroup git tree and dropped the
first 3 patches of v3 series which have been merged.
- Beside prohibiting violation of cpu exclusivity rule, allow arbitrary
changes to cpuset.cpus of a partition root and force the partition root
to become invalid in case any of the partition root constraints
are violated. The documentation file and self test are modified
accordingly.
This patchset makes four enhancements to the cpuset v2 code.
Patch 1: Properly handle partition root tree and make partition
invalid in case changes to cpuset.cpus violate any of the partition
root constraints.
Patch 2: Enable the "cpuset.cpus.partition" file to show the reason
that causes invalid partition like "root invalid (No cpu available
due to hotplug)".
Patch 3: Add a new partition state "isolated" to create a partition
root without load balancing. This is for handling intermittent workloads
that have a strict low latency requirement (see the sketch after the
patch summaries below).
Patch 4: Allow partition roots that are not the top cpuset to distribute
all their cpus to child partitions as long as there is no task associated
with that partition root. This allows more flexibility for middleware
to manage multiple partitions.
Patch 5 updates the cgroup-v2.rst file accordingly. Patch 6 adds a new
cpuset test to test the new cpuset partition code.
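A minimal usage sketch for the new "isolated" type of patch 3; the
cgroup path is a made-up example, and the cgroup is assumed to already
qualify as a partition root:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path =
		"/sys/fs/cgroup/rt-workload/cpuset.cpus.partition";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* "member", "root" and (with this series) "isolated" are valid. */
	if (write(fd, "isolated", strlen("isolated")) < 0)
		perror("write");
	close(fd);
	return 0;
}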
Waiman Long (6):
cgroup/cpuset: Properly transition to invalid partition
cgroup/cpuset: Show invalid partition reason string
cgroup/cpuset: Add a new isolated cpus.partition type
cgroup/cpuset: Allow non-top parent partition to distribute out all
CPUs
cgroup/cpuset: Update description of cpuset.cpus.partition in
cgroup-v2.rst
kselftest/cgroup: Add cpuset v2 partition root state test
Documentation/admin-guide/cgroup-v2.rst | 116 +--
kernel/cgroup/cpuset.c | 347 ++++++---
tools/testing/selftests/cgroup/Makefile | 5 +-
.../selftests/cgroup/test_cpuset_prs.sh | 663 ++++++++++++++++++
tools/testing/selftests/cgroup/wait_inotify.c | 86 +++
5 files changed, 1068 insertions(+), 149 deletions(-)
create mode 100755 tools/testing/selftests/cgroup/test_cpuset_prs.sh
create mode 100644 tools/testing/selftests/cgroup/wait_inotify.c
--
2.18.1
This patch series adds support for the unix stream type
for sockmap. Sockmap already supports TCP, UDP,
unix dgram types. The unix stream support is similar
to unix dgram.
Also add selftests for unix stream type in sockmap tests.
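As a rough sketch of what the new selftests exercise (assuming libbpf
>= 0.7 for bpf_map_create), inserting a connected unix stream socket
into a sockmap from user space could look like:

#include <bpf/bpf.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	int pair[2], map_fd, key = 0;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, pair)) {
		perror("socketpair");
		return 1;
	}
	map_fd = bpf_map_create(BPF_MAP_TYPE_SOCKMAP, NULL,
				sizeof(int), sizeof(int), 2, NULL);
	if (map_fd < 0) {
		perror("bpf_map_create");
		return 1;
	}
	/* With this series the update succeeds for unix stream sockets. */
	if (bpf_map_update_elem(map_fd, &key, &pair[0], BPF_ANY))
		perror("bpf_map_update_elem");
	return 0;
}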
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/unix/af_unix.c | 91 +++++++++++++++---
net/unix/unix_bpf.c | 93 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
4 files changed, 187 insertions(+), 53 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
v2 -> v3 :
- check for stream type in update_proto
- remove intermediate variable in __unix_stream_recvmsg
- fix compile warning in unix_stream_recvmsg
v3 -> v4 :
- remove sk_is_unix_stream, just check TCP_ESTABLISHED for UNIX sockets.
- add READ_ONCE in unix_dgram_recvmsg
- remove type check in unix_stream_bpf_update_proto
v4 -> v5 :
- add two missing READ_ONCE for sk_prot.
v5 -> v6 :
- fix READ_ONCE by reading to a local variable first.
For the series:
Acked-by: John Fastabend <john.fastabend(a)gmail.com>
Acked-by: Jakub Sitnicki <jakub(a)cloudflare.com>
Also rebased on bpf-next
--
2.20.1
Hi Linus,
Please pull the following Kselftest fixes update for Linux 5.14-rc6
This Kselftest fixes update for Linux 5.14-rc6 consists of a single patch
to sgx test to fix Q1 and Q2 calculation.
diff is attached.
thanks,
-- Shuah
----------------------------------------------------------------
The following changes since commit 2734d6c1b1a089fb593ef6a23d4b70903526fe0c:
Linux 5.14-rc2 (2021-07-18 14:13:49 -0700)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest tags/linux-kselftest-fixes-5.14-rc6
for you to fetch changes up to 567c39047dbee341244fe3bf79fea24ee0897ff9:
selftests/sgx: Fix Q1 and Q2 calculation in sigstruct.c (2021-07-30 17:20:01 -0600)
----------------------------------------------------------------
linux-kselftest-fixes-5.14-rc6
This Kselftest fixes update for Linux 5.14-rc6 consists of a single patch
to sgx test to fix Q1 and Q2 calculation.
----------------------------------------------------------------
Tianjia Zhang (1):
selftests/sgx: Fix Q1 and Q2 calculation in sigstruct.c
tools/testing/selftests/sgx/sigstruct.c | 41 +++++++++++++++++----------------
1 file changed, 21 insertions(+), 20 deletions(-)
----------------------------------------------------------------
v4:
- Rebased to the for-5.15 branch of cgroup git tree and dropped the
first 3 patches of v3 series which have been merged.
- Beside prohibiting violation of cpu exclusivity rule, allow arbitrary
changes to cpuset.cpus of a partition root and force the partition root
to become invalid in case any of the partition root constraints
are violated. The documentation file and self test are modified
accordingly.
v3:
- Add two new patches (patches 2 & 3) to fix bugs found during the
testing process.
- Add a new patch to enable inotify event notification when partition
become invalid.
- Add a test to test event notification when partition become invalid.
v2:
- Drop v1 patch 1.
- Break out some cosmetic changes into a separate patch (patch #1).
- Add a new patch to clarify the transition to invalid partition root
is mainly caused by hotplug events.
- Enhance the partition root state test including CPU online/offline
behavior and fix issues found by the test.
This patchset makes four enhancements to the cpuset v2 code.
Patch 1: Enable event notification on "cpuset.cpus.partition" whenever
the state of a partition changes.
Patch 2: Properly handle partition root tree and make partition
invalid in case changes to cpuset.cpus violate any of the partition
root constraints.
Patch 3: Add a new partition state "isolated" to create a partition
root without load balancing. This is for handling intermittent workloads
that have a strict low latency requirement.
Patch 4: Allow partition roots that are not the top cpuset to distribute
all their cpus to child partitions as long as there is no task associated
with that partition root. This allows more flexibility for middleware
to manage multiple partitions.
Patch 5 updates the cgroup-v2.rst file accordingly. Patch 6 adds a new
cpuset test to test the new cpuset partition code.
Waiman Long (6):
cgroup/cpuset: Enable event notification when partition state changes
cgroup/cpuset: Properly handle partition root tree
cgroup/cpuset: Add a new isolated cpus.partition type
cgroup/cpuset: Allow non-top parent partition root to distribute out
all CPUs
cgroup/cpuset: Update description of cpuset.cpus.partition in
cgroup-v2.rst
kselftest/cgroup: Add cpuset v2 partition root state test
Documentation/admin-guide/cgroup-v2.rst | 104 +--
kernel/cgroup/cpuset.c | 282 +++++---
tools/testing/selftests/cgroup/Makefile | 5 +-
.../selftests/cgroup/test_cpuset_prs.sh | 632 ++++++++++++++++++
tools/testing/selftests/cgroup/wait_inotify.c | 86 +++
5 files changed, 980 insertions(+), 129 deletions(-)
create mode 100755 tools/testing/selftests/cgroup/test_cpuset_prs.sh
create mode 100644 tools/testing/selftests/cgroup/wait_inotify.c
--
2.18.1
When a number of tests fail, it can be useful to get higher-level
statistics of how many tests are failing (or how many parameters are
failing in parameterised tests), and in what cases or suites. This is
already done by some non-KUnit tests, so add support for automatically
generating these for KUnit tests.
This change adds a 'kunit.stats_enabled' switch which has three values:
- 0: No stats are printed (current behaviour)
- 1: Stats are printed only for tests/suites with more than one
subtest (new default)
- 2: Always print test statistics
For parameterised tests, the summary line looks as follows:
" # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16"
For test suites, there are two lines looking like this:
"# ext4_inode_test: pass:1 fail:0 skip:0 total:1"
"# Totals: pass:16 fail:0 skip:0 total:16"
The first line gives the number of direct subtests, the second "Totals"
line is the accumulated sum of all tests and test parameters.
This format is based on the one used by kselftest[1].
[1]: https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/ksel…
Signed-off-by: David Gow <davidgow(a)google.com>
---
This is the long-awaited v2 of the test statistics patch:
https://lore.kernel.org/linux-kselftest/20201211072319.533803-1-davidgow@go…
It updates the patch to apply on current mainline kernels, takes skipped
tests into account, changes the output format to better match what
kselftest uses, and addresses some of the comments from v1.
Please let me know what you think, in particular:
- Is this sufficient to assuage any worries about porting tests to
KUnit?
- Are we printing too many stats by default: for a lot of existing tests,
many of them are useless. I'm particularly curious about the separate
"Totals" line, versus the per-suite line -- is that useful? Should it
only be printed when the totals differ?
- Is the output format sufficiently legible for people and/or tools
which may want to parse it?
Cheers,
-- David
Changelog:
Changes since v1:
https://lore.kernel.org/linux-kselftest/20201211072319.533803-1-davidgow@go…
- Rework to use a new struct kunit_result_stats, with helper functions
for adding results, accumulating them over nested structures, etc.
- Support skipped tests, report them separately from failures and
passes.
- New output format to better match kselftest:
- "pass:n fail:n skip:n total:n"
- Changes to stats_enabled parameter:
- Now a module parameter, with description
- Default "1" option now prints even when no tests fail.
- Improved parser fix which doesn't break crashed test detection.
---
lib/kunit/test.c | 109 ++++++++++++++++++++++++++++
tools/testing/kunit/kunit_parser.py | 2 +-
2 files changed, 110 insertions(+), 1 deletion(-)
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index d79ecb86ea57..f246b847024e 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -10,6 +10,7 @@
#include <kunit/test-bug.h>
#include <linux/kernel.h>
#include <linux/kref.h>
+#include <linux/moduleparam.h>
#include <linux/sched/debug.h>
#include <linux/sched.h>
@@ -51,6 +52,51 @@ void __kunit_fail_current_test(const char *file, int line, const char *fmt, ...)
EXPORT_SYMBOL_GPL(__kunit_fail_current_test);
#endif
+/*
+ * KUnit statistic mode:
+ * 0 - disabled
+ * 1 - only when there is more than one subtest
+ * 2 - enabled
+ */
+static int kunit_stats_enabled = 1;
+module_param_named(stats_enabled, kunit_stats_enabled, int, 0644);
+MODULE_PARM_DESC(stats_enabled,
+ "Print test stats: never (0), only for multiple subtests (1), or always (2)");
+
+struct kunit_result_stats {
+ unsigned long passed;
+ unsigned long skipped;
+ unsigned long failed;
+ unsigned long total;
+};
+
+static bool kunit_should_print_stats(struct kunit_result_stats stats)
+{
+ if (kunit_stats_enabled == 0)
+ return false;
+
+ if (kunit_stats_enabled == 2)
+ return true;
+
+ return (stats.total > 1);
+}
+
+static void kunit_print_test_stats(struct kunit *test,
+ struct kunit_result_stats stats)
+{
+ if (!kunit_should_print_stats(stats))
+ return;
+
+ kunit_log(KERN_INFO, test,
+ KUNIT_SUBTEST_INDENT
+ "# %s: pass:%lu fail:%lu skip:%lu total:%lu",
+ test->name,
+ stats.passed,
+ stats.failed,
+ stats.skipped,
+ stats.total);
+}
+
/*
* Append formatted message to log, size of which is limited to
* KUNIT_LOG_SIZE bytes (including null terminating byte).
@@ -393,15 +439,69 @@ static void kunit_run_case_catch_errors(struct kunit_suite *suite,
test_case->status = KUNIT_SUCCESS;
}
+static void kunit_print_suite_stats(struct kunit_suite *suite,
+ struct kunit_result_stats suite_stats,
+ struct kunit_result_stats param_stats)
+{
+ if (kunit_should_print_stats(suite_stats)) {
+ kunit_log(KERN_INFO, suite,
+ "# %s: pass:%lu fail:%lu skip:%lu total:%lu",
+ suite->name,
+ suite_stats.passed,
+ suite_stats.failed,
+ suite_stats.skipped,
+ suite_stats.total);
+ }
+
+ if (kunit_should_print_stats(param_stats)) {
+ kunit_log(KERN_INFO, suite,
+ "# Totals: pass:%lu fail:%lu skip:%lu total:%lu",
+ param_stats.passed,
+ param_stats.failed,
+ param_stats.skipped,
+ param_stats.total);
+ }
+}
+
+static void kunit_update_stats(struct kunit_result_stats *stats,
+ enum kunit_status status)
+{
+ switch (status) {
+ case KUNIT_SUCCESS:
+ stats->passed++;
+ break;
+ case KUNIT_SKIPPED:
+ stats->skipped++;
+ break;
+ case KUNIT_FAILURE:
+ stats->failed++;
+ break;
+ }
+
+ stats->total++;
+}
+
+static void kunit_accumulate_stats(struct kunit_result_stats *total,
+ struct kunit_result_stats add)
+{
+ total->passed += add.passed;
+ total->skipped += add.skipped;
+ total->failed += add.failed;
+ total->total += add.total;
+}
+
int kunit_run_tests(struct kunit_suite *suite)
{
char param_desc[KUNIT_PARAM_DESC_SIZE];
struct kunit_case *test_case;
+ struct kunit_result_stats suite_stats = { 0 };
+ struct kunit_result_stats total_stats = { 0 };
kunit_print_subtest_start(suite);
kunit_suite_for_each_test_case(suite, test_case) {
struct kunit test = { .param_value = NULL, .param_index = 0 };
+ struct kunit_result_stats param_stats = { 0 };
test_case->status = KUNIT_SKIPPED;
if (test_case->generate_params) {
@@ -431,14 +531,23 @@ int kunit_run_tests(struct kunit_suite *suite)
test.param_value = test_case->generate_params(test.param_value, param_desc);
test.param_index++;
}
+
+ kunit_update_stats(¶m_stats, test.status);
+
} while (test.param_value);
+ kunit_print_test_stats(&test, param_stats);
+
kunit_print_ok_not_ok(&test, true, test_case->status,
kunit_test_case_num(suite, test_case),
test_case->name,
test.status_comment);
+
+ kunit_update_stats(&suite_stats, test_case->status);
+ kunit_accumulate_stats(&total_stats, param_stats);
}
+ kunit_print_suite_stats(suite, suite_stats, total_stats);
kunit_print_subtest_end(suite);
return 0;
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index b88db3f51dc5..c699f778da06 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -137,7 +137,7 @@ def print_log(log) -> None:
for m in log:
print_with_timestamp(m)
-TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*#).*$')
+TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*# (Subtest:|.*: kunit test case crashed!)).*$')
def consume_non_diagnostic(lines: LineStream) -> None:
while lines and not TAP_ENTRIES.match(lines.peek()):
--
2.32.0.554.ge1b32706d8-goog
--raw_output is nice, but it would be nicer if it could show only output
after KUnit tests have started.
So change the flag to allow specifying a string ('kunit').
Make it so `--raw_output` alone will default to `--raw_output=all` and
have the same original behavior.
Drop the small kunit_parser.raw_output() function since it feels wrong
to put it in "kunit_parser.py" when the point of it is to not parse
anything.
E.g.
$ ./tools/testing/kunit/kunit.py run --raw_output=kunit
...
[15:24:07] Starting KUnit Kernel ...
TAP version 14
1..1
# Subtest: example
1..3
# example_simple_test: initializing
ok 1 - example_simple_test
# example_skip_test: initializing
# example_skip_test: You should not see a line below.
ok 2 - example_skip_test # SKIP this test should be skipped
# example_mark_skipped_test: initializing
# example_mark_skipped_test: You should see a line below.
# example_mark_skipped_test: You should see this line.
ok 3 - example_mark_skipped_test # SKIP this test should be skipped
ok 1 - example
[15:24:10] Elapsed time: 6.487s total, 0.001s configuring, 3.510s building, 0.000s running
Signed-off-by: Daniel Latypov <dlatypov(a)google.com>
---
Documentation/dev-tools/kunit/kunit-tool.rst | 9 ++++++---
tools/testing/kunit/kunit.py | 20 +++++++++++++++-----
tools/testing/kunit/kunit_parser.py | 4 ----
tools/testing/kunit/kunit_tool_test.py | 9 +++++++++
4 files changed, 30 insertions(+), 12 deletions(-)
diff --git a/Documentation/dev-tools/kunit/kunit-tool.rst b/Documentation/dev-tools/kunit/kunit-tool.rst
index c7ff9afe407a..ae52e0f489f9 100644
--- a/Documentation/dev-tools/kunit/kunit-tool.rst
+++ b/Documentation/dev-tools/kunit/kunit-tool.rst
@@ -114,9 +114,12 @@ results in TAP format, you can pass the ``--raw_output`` argument.
./tools/testing/kunit/kunit.py run --raw_output
-.. note::
- The raw output from test runs may contain other, non-KUnit kernel log
- lines.
+The raw output from test runs may contain other, non-KUnit kernel log
+lines. You can see just KUnit output with ``--raw_output=kunit``:
+
+.. code-block:: bash
+
+ ./tools/testing/kunit/kunit.py run --raw_output=kunit
If you have KUnit results in their raw TAP format, you can parse them and print
the human-readable summary with the ``parse`` command for kunit_tool. This
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 7174377c2172..5a931456e718 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -16,6 +16,7 @@ assert sys.version_info >= (3, 7), "Python version is too old"
from collections import namedtuple
from enum import Enum, auto
+from typing import Iterable
import kunit_config
import kunit_json
@@ -114,7 +115,16 @@ def parse_tests(request: KunitParseRequest) -> KunitResult:
'Tests not Parsed.')
if request.raw_output:
- kunit_parser.raw_output(request.input_data)
+ output: Iterable[str] = request.input_data
+ if request.raw_output == 'all':
+ pass
+ elif request.raw_output == 'kunit':
+ output = kunit_parser.extract_tap_lines(output)
+ else:
+ print(f'Unknown --raw_output option "{request.raw_output}"', file=sys.stderr)
+ for line in output:
+ print(line.rstrip())
+
else:
test_result = kunit_parser.parse_run_tests(request.input_data)
parse_end = time.time()
@@ -135,7 +145,6 @@ def parse_tests(request: KunitParseRequest) -> KunitResult:
return KunitResult(KunitStatus.SUCCESS, test_result,
parse_end - parse_start)
-
def run_tests(linux: kunit_kernel.LinuxSourceTree,
request: KunitRequest) -> KunitResult:
run_start = time.time()
@@ -181,7 +190,7 @@ def add_common_opts(parser) -> None:
parser.add_argument('--build_dir',
help='As in the make command, it specifies the build '
'directory.',
- type=str, default='.kunit', metavar='build_dir')
+ type=str, default='.kunit', metavar='build_dir')
parser.add_argument('--make_options',
help='X=Y make option, can be repeated.',
action='append')
@@ -246,8 +255,9 @@ def add_exec_opts(parser) -> None:
action='append')
def add_parse_opts(parser) -> None:
- parser.add_argument('--raw_output', help='don\'t format output from kernel',
- action='store_true')
+ parser.add_argument('--raw_output', help='If set don\'t format output from kernel. '
+ 'If set to --raw_output=kunit, filters to just KUnit output.',
+ type=str, nargs='?', const='all', default=None)
parser.add_argument('--json',
nargs='?',
help='Stores test results in a JSON, and either '
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index b88db3f51dc5..84938fefbac0 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -106,10 +106,6 @@ def extract_tap_lines(kernel_output: Iterable[str]) -> LineStream:
yield line_num, line[prefix_len:]
return LineStream(lines=isolate_kunit_output(kernel_output))
-def raw_output(kernel_output) -> None:
- for line in kernel_output:
- print(line.rstrip())
-
DIVIDER = '=' * 60
RESET = '\033[0;0m'
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 628ab00f74bc..619c4554cbff 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -399,6 +399,15 @@ class KUnitMainTest(unittest.TestCase):
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ def test_run_raw_output_kunit(self):
+ self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
+ kunit.main(['run', '--raw_output=kunit'], self.linux_source_mock)
+ self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
+ self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
+ for call in self.print_mock.call_args_list:
+ self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
+ self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+
def test_exec_timeout(self):
timeout = 3453
kunit.main(['exec', '--timeout', str(timeout)], self.linux_source_mock)
base-commit: f684616e08e9cd9db3cd53fe2e068dfe02481657
--
2.32.0.605.g8dce9f2422-goog
This patch series adds support for the unix stream type
for sockmap. Sockmap already supports TCP, UDP,
unix dgram types. The unix stream support is similar
to unix dgram.
Also add selftests for unix stream type in sockmap tests.
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/core/sock_map.c | 8 +-
net/unix/af_unix.c | 89 ++++++++++++++++--
net/unix/unix_bpf.c | 93 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
5 files changed, 194 insertions(+), 52 deletions(-)
--
2.20.1
From: SeongJae Park <sjpark(a)amazon.de>
When running a test program, 'run_one()' checks if the program has the
execution permission and fails if it doesn't. However, it's easy to
mistakenly drop the permission, as some common tools like 'diff'
don't handle the permission change well[1]. Compared to that, mistakes
in the test program's path would be rare, as those are
explicitly listed in 'TEST_PROGS'. Therefore, it makes more sense
to resolve the situation on our own and run the program.
For this reason, this commit makes the test program runner function
still print the warning message but run the program after granting the
execution permission in that case. To avoid corrupting anything, it also
restores the permission after the run.
[1] https://lore.kernel.org/mm-commits/YRJisBs9AunccCD4@kroah.com/
Suggested-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: SeongJae Park <sjpark(a)amazon.de>
---
tools/testing/selftests/kselftest/runner.sh | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/tools/testing/selftests/kselftest/runner.sh b/tools/testing/selftests/kselftest/runner.sh
index cc9c846585f0..2eb31e945709 100644
--- a/tools/testing/selftests/kselftest/runner.sh
+++ b/tools/testing/selftests/kselftest/runner.sh
@@ -65,15 +65,16 @@ run_one()
TEST_HDR_MSG="selftests: $DIR: $BASENAME_TEST"
echo "# $TEST_HDR_MSG"
- if [ ! -x "$TEST" ]; then
- echo -n "# Warning: file $TEST is "
- if [ ! -e "$TEST" ]; then
- echo "missing!"
- else
- echo "not executable, correct this."
- fi
+ if [ ! -e "$TEST" ]; then
+ echo "# Warning: file $TEST is missing!"
echo "not ok $test_num $TEST_HDR_MSG"
else
+ permission_added="false"
+ if [ ! -x "$TEST" ]; then
+ echo "# Warning: file $TEST is not executable"
+ chmod u+x "$TEST"
+ permission_added="true"
+ fi
cd `dirname $TEST` > /dev/null
((((( tap_timeout ./$BASENAME_TEST 2>&1; echo $? >&3) |
tap_prefix >&4) 3>&1) |
@@ -88,6 +89,9 @@ run_one()
else
echo "not ok $test_num $TEST_HDR_MSG # exit=$rc"
fi)
+ if [ "$permission_added" = "true" ]; then
+ chmod u-x "$TEST"
+ fi
cd - >/dev/null
fi
}
--
2.17.1
From: SeongJae Park <sjpark(a)amazon.de>
Commit 04edafbc0c07 ("mm/damon: add user space selftests") of
linux-mm[1] gives no execute permission to 'debugfs_attrs.sh' file.
This results in a DAMON selftest failure as below:
$ make -C tools/testing/selftests/damon run_tests
make: Entering directory '/home/sjpark/linux/tools/testing/selftests/damon'
TAP version 13
1..1
# selftests: damon: debugfs_attrs.sh
# Warning: file debugfs_attrs.sh is not executable, correct this.
not ok 1 selftests: damon: debugfs_attrs.sh
make: Leaving directory '/home/sjpark/linux/tools/testing/selftests/damon'
To solve the problem, this commit adds the execute permission for
the 'debugfs_attrs.sh' file.
[1] https://github.com/hnaz/linux-mm/commit/04edafbc0c07
Signed-off-by: SeongJae Park <sjpark(a)amazon.de>
---
tools/testing/selftests/damon/debugfs_attrs.sh | 0
1 file changed, 0 insertions(+), 0 deletions(-)
mode change 100644 => 100755 tools/testing/selftests/damon/debugfs_attrs.sh
diff --git a/tools/testing/selftests/damon/debugfs_attrs.sh b/tools/testing/selftests/damon/debugfs_attrs.sh
old mode 100644
new mode 100755
--
2.17.1
v3:
- Add two new patches (patches 2 & 3) to fix bugs found during the
testing process.
- Add a new patch to enable inotify event notification when partition
become invalid.
- Add a test to test event notification when partition become invalid.
v2:
- Drop v1 patch 1.
- Break out some cosmetic changes into a separate patch (patch #1).
- Add a new patch to clarify the transition to invalid partition root
is mainly caused by hotplug events.
- Enhance the partition root state test including CPU online/offline
behavior and fix issues found by the test.
This patchset fixes two bugs and makes four enhancements to the cpuset
v2 code.
Bug fixes:
Patch 2: Fix a hotplug handling bug when just all cpus in subparts_cpus
are offlined.
Patch 3: Fix violation of cpuset locking rule.
Enhancements:
Patch 4: Enable event notification on "cpuset.cpus.partition" when
a partition become invalid.
Patch 5: Clarify the use of invalid partition root and add new checks
to make sure that normal cpuset control file operations will not be
allowed to create invalid partition root. It also fixes some of the
issues in existing code.
Patch 6: Add a new partition state "isolated" to create a partition
root without load balancing. This is for handling intermittent workloads
that have a strict low latency requirement.
Patch 7: Allow partition roots that are not the top cpuset to distribute
all their cpus to child partitions as long as there is no task associated
with that partition root. This allows more flexibility for middleware
to manage multiple partitions.
Patch 8 updates the cgroup-v2.rst file accordingly. Patch 9 adds a new
cpuset test to test the new cpuset partition code.
Waiman Long (9):
cgroup/cpuset: Miscellaneous code cleanup
cgroup/cpuset: Fix a partition bug with hotplug
cgroup/cpuset: Fix violation of cpuset locking rule
cgroup/cpuset: Enable event notification when partition become invalid
cgroup/cpuset: Clarify the use of invalid partition root
cgroup/cpuset: Add a new isolated cpus.partition type
cgroup/cpuset: Allow non-top parent partition root to distribute out
all CPUs
cgroup/cpuset: Update description of cpuset.cpus.partition in
cgroup-v2.rst
kselftest/cgroup: Add cpuset v2 partition root state test
Documentation/admin-guide/cgroup-v2.rst | 94 ++-
kernel/cgroup/cpuset.c | 360 +++++++---
tools/testing/selftests/cgroup/Makefile | 5 +-
.../selftests/cgroup/test_cpuset_prs.sh | 626 ++++++++++++++++++
tools/testing/selftests/cgroup/wait_inotify.c | 67 ++
5 files changed, 1007 insertions(+), 145 deletions(-)
create mode 100755 tools/testing/selftests/cgroup/test_cpuset_prs.sh
create mode 100644 tools/testing/selftests/cgroup/wait_inotify.c
--
2.18.1
On Mon, Aug 09 2021 at 09:54, Rong A. Chen wrote:
> On 8/6/2021 8:42 PM, Thomas Gleixner wrote:
>> On Wed, Aug 04 2021 at 17:04, Rong A. Chen wrote:
>>> On 7/27/2021 10:52 PM, Dave Hansen wrote:
>>>> On 7/26/21 8:11 PM, kernel test robot wrote:
>>>>>>> sparc64-linux-gcc: error: unrecognized command-line option '-mxsave'
>>>>
>>>> Is there something else funky going on here? All of the "-mxsave" flags
>>>> that I can find are under checks for x86 builds, like:
>>>>
>>>> ifeq ($(CAN_BUILD_I386),1)
>>>> $(BINARIES_32): CFLAGS += -m32 -mxsave
>>>> ..
>>>>
>>>> I'm confused how we could have a sparc64 compiler (and only a sparc64
>>>> compiler) that would end up with "-mxsave" in CFLAGS.
>>>
>>> Hi Dave,
>>>
>>> We can reproduce the error and have no idea too, but we have disabled
>>> the test for selftests on non-x86 arch.
>>
>> This smells like a host/target compiler mixup. Can you please make the
>> kernel build verbose with 'V=1' and provide the full build output?
>
> Hi Thomas,
>
> I run the below command:
>
> $make V=1 --keep-going CROSS_COMPILE=sparc64-linux- -j1 O=build_dir
> ARCH=sparc64 -C tools/testing/selftests/vm
> ...
> sparc64-linux-gcc -Wall -I ../../../../usr/include -no-pie -m32 -mxsave
> protection_keys.c -lrt -lpthread -lrt -ldl -lm -o
> /root/linux/tools/testing/selftests/vm/protection_keys_32
> sparc64-linux-gcc: error: unrecognized command-line option '-mxsave'
> make: *** [Makefile:107:
Right. That's clearly broken because all this x86_64 muck is derived
from:
MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/ppc64/')
which obviously fails for cross compiling because it's looking at the
compile machine and not at the target.
Something like the below should cure that, but TBH I lost track
which one of ARCH, SUBARCH, UTS_MACHINE should be used here. The kbuild
folks should know.
Thanks,
tglx
---
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -4,7 +4,6 @@
include local_config.mk
uname_M := $(shell uname -m 2>/dev/null || echo not)
-MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/ppc64/')
# Without this, failed build products remain, with up-to-date timestamps,
# thus tricking Make (and you!) into believing that All Is Well, in subsequent
@@ -46,7 +45,7 @@ TEST_GEN_FILES += transhuge-stress
TEST_GEN_FILES += userfaultfd
TEST_GEN_FILES += split_huge_page_test
-ifeq ($(MACHINE),x86_64)
+ifeq ($(UTS_MACHINE),x86_64)
CAN_BUILD_I386 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_32bit_program.c -m32)
CAN_BUILD_X86_64 := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_64bit_program.c)
CAN_BUILD_WITH_NOPIE := $(shell ./../x86/check_cc.sh $(CC) ../x86/trivial_program.c -no-pie)
@@ -68,7 +67,7 @@ TEST_GEN_FILES += $(BINARIES_64)
endif
else
-ifneq (,$(findstring $(MACHINE),ppc64))
+ifneq (,$(findstring $(UTS_MACHINE),ppc64))
TEST_GEN_FILES += protection_keys
endif
@@ -87,7 +86,7 @@ TEST_FILES := test_vmalloc.sh
KSFT_KHDR_INSTALL := 1
include ../lib.mk
-ifeq ($(MACHINE),x86_64)
+ifeq ($(UTS_MACHINE),x86_64)
BINARIES_32 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_32))
BINARIES_64 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_64))
The current PTP driver exposes one PTP device to user space, which binds
one or more network interfaces to provide timestamping. Actually, we have
a way, utilizing timecounter/cyclecounter, to virtualize any number of
PTP clocks based on the same free-running physical clock.
The purpose of having multiple PTP virtual clocks is to let user space
directly and easily use them for synchronizing multiple domains.
user
space:   ^                            ^
         | SO_TIMESTAMPING new flag:  | Packets with
         | SOF_TIMESTAMPING_BIND_PHC  | TX/RX HW timestamps
         v                            v
       +--------------------------------------------+
sock:  |       sock (new member sk_bind_phc)        |
       +--------------------------------------------+
         ^                            ^
         | ethtool_get_phc_vclocks    | Convert HW timestamps
         |                            | to sk_bind_phc
         v                            v
       +--------------+--------------+--------------+
vclock:|     ptp1     |     ptp2     |     ptpN     |
       +--------------+--------------+--------------+
pclock:|              ptp0 free running             |
       +--------------------------------------------+
The block diagram may explain how it works. Besides the PTP virtual
clocks, the conversion of packet HW timestamps to the bound PHC is also
done in the sock driver. For user space, PTP virtual clocks can be
created via sysfs, and extended SO_TIMESTAMPING API (new flag
SOF_TIMESTAMPING_BIND_PHC) can be used to bind one PTP virtual clock
for timestamping.
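A minimal sketch of the extended API described above, assuming the uapi
as it appears in this series (struct so_timestamping carrying flags plus
a bind_phc vclock index in include/uapi/linux/net_tstamp.h):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/net_tstamp.h>

int main(void)
{
	struct so_timestamping ts = {
		.flags = SOF_TIMESTAMPING_TX_HARDWARE |
			 SOF_TIMESTAMPING_RAW_HARDWARE |
			 SOF_TIMESTAMPING_BIND_PHC,
		.bind_phc = 2,	/* PTP virtual clock index, e.g. /dev/ptp2 */
	};
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
				 &ts, sizeof(ts)))
		perror("SO_TIMESTAMPING");
	return 0;
}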
The test tool timestamping.c (together with linuxptp phc_ctl tool) can
be used to verify:
# echo 4 > /sys/class/ptp/ptp0/n_vclocks
[ 129.399472] ptp ptp0: new virtual clock ptp2
[ 129.404234] ptp ptp0: new virtual clock ptp3
[ 129.409532] ptp ptp0: new virtual clock ptp4
[ 129.413942] ptp ptp0: new virtual clock ptp5
[ 129.418257] ptp ptp0: guarantee physical clock free running
#
# phc_ctl /dev/ptp2 set 10000
# phc_ctl /dev/ptp3 set 20000
#
# timestamping eno0 2 SOF_TIMESTAMPING_TX_HARDWARE SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_BIND_PHC
# timestamping eno0 2 SOF_TIMESTAMPING_RX_HARDWARE SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_BIND_PHC
# timestamping eno0 3 SOF_TIMESTAMPING_TX_HARDWARE SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_BIND_PHC
# timestamping eno0 3 SOF_TIMESTAMPING_RX_HARDWARE SOF_TIMESTAMPING_RAW_HARDWARE SOF_TIMESTAMPING_BIND_PHC
Changes for v2:
- Converted to num_vclocks for creating virtual clocks.
- Guaranteed physical clock free running when using virtual
clocks.
- Fixed build warning.
- Updated copyright.
Changes for v3:
- Supported PTP virtual clock in default in PTP driver.
- Protected concurrent access to ptp->num_vclocks.
- Supported PHC vclocks query via ethtool.
- Extended SO_TIMESTAMPING API for PHC binding.
- Converted HW timestamps to the bound PHC, instead of the previous
idea of binding a domain value to a PHC.
- Other minor fixes.
Changes for v4:
- Used do_aux_work callback for vclock refreshing instead.
- Used unsigned int for the number of vclocks, and max_vclocks
for the limit.
- Fixed mutex locking.
- Dynamically allocated memory for vclock index storage.
- Removed ethtool ioctl command for vclocks getting.
- Updated doc for ethtool phc vclocks get.
- Converted to mptcp_setsockopt_sol_socket_timestamping().
- Passed so_timestamping for sock_set_timestamping.
- Fixed checkpatch/build.
- Other minor fixes.
Changes for v5:
- Fixed checkpatch/build bugs reported by the test robot.
Yangbo Lu (11):
ptp: add ptp virtual clock driver framework
ptp: support ptp physical/virtual clocks conversion
ptp: track available ptp vclocks information
ptp: add kernel API ptp_get_vclocks_index()
ethtool: add a new command for getting PHC virtual clocks
ptp: add kernel API ptp_convert_timestamp()
mptcp: setsockopt: convert to
mptcp_setsockopt_sol_socket_timestamping()
net: sock: extend SO_TIMESTAMPING for PHC binding
net: socket: support hardware timestamp conversion to PHC bound
selftests/net: timestamping: support binding PHC
MAINTAINERS: add entry for PTP virtual clock driver
Documentation/ABI/testing/sysfs-ptp | 20 ++
Documentation/networking/ethtool-netlink.rst | 22 ++
MAINTAINERS | 7 +
drivers/ptp/Makefile | 2 +-
drivers/ptp/ptp_clock.c | 42 +++-
drivers/ptp/ptp_private.h | 39 ++++
drivers/ptp/ptp_sysfs.c | 160 ++++++++++++++
drivers/ptp/ptp_vclock.c | 219 +++++++++++++++++++
include/linux/ethtool.h | 10 +
include/linux/ptp_clock_kernel.h | 31 ++-
include/net/sock.h | 8 +-
include/uapi/linux/ethtool_netlink.h | 15 ++
include/uapi/linux/net_tstamp.h | 17 +-
net/core/sock.c | 65 +++++-
net/ethtool/Makefile | 2 +-
net/ethtool/common.c | 14 ++
net/ethtool/netlink.c | 10 +
net/ethtool/netlink.h | 2 +
net/ethtool/phc_vclocks.c | 94 ++++++++
net/mptcp/sockopt.c | 68 ++++--
net/socket.c | 19 +-
tools/testing/selftests/net/timestamping.c | 55 +++--
22 files changed, 867 insertions(+), 54 deletions(-)
create mode 100644 drivers/ptp/ptp_vclock.c
create mode 100644 net/ethtool/phc_vclocks.c
base-commit: b6df00789e2831fff7a2c65aa7164b2a4dcbe599
--
2.25.1
The goal of these patches is to add a test case for SGX reserved
memory oversubscription, i.e. make sure that the page reclaimer
and the page fault handler are working correctly.
Change Log
==========
v3:
* Reorganized the patch set into smaller pieces, and refactored the code so that
the test enclave can be created inside each test case. Added a new test case
unclobbered_vdso_oversubscribed that creates a large enough heap to
fill all of the available SGX reserved memory (EPC).
Jarkko Sakkinen (8):
x86/sgx: Add /sys/kernel/debug/x86/sgx_total_mem
selftests/sgx: Assign source for each segment
selftests/sgx: Make data measurement for an enclave segment optional
selftests/sgx: Create a heap for the test enclave
selftests/sgx: Dump segments and /proc/self/maps only on failure
selftests/sgx: Encapsulate the test enclave creation
selftests/sgx: Move setup_test_encl() to each TEST_F()
selftests/sgx: Add a new kselftest: unclobbered_vdso_oversubscribed
Documentation/x86/sgx.rst | 6 ++
arch/x86/kernel/cpu/sgx/main.c | 10 +-
tools/testing/selftests/sgx/load.c | 40 ++++++--
tools/testing/selftests/sgx/main.c | 129 ++++++++++++++++++++----
tools/testing/selftests/sgx/main.h | 7 +-
tools/testing/selftests/sgx/sigstruct.c | 12 ++-
6 files changed, 165 insertions(+), 39 deletions(-)
--
2.32.0
This patch series adds support for the unix stream type
for sockmap. Sockmap already supports TCP, UDP,
unix dgram types. The unix stream support is similar
to unix dgram.
Also add selftests for unix stream type in sockmap tests.
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/unix/af_unix.c | 87 ++++++++++++++---
net/unix/unix_bpf.c | 93 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
4 files changed, 184 insertions(+), 52 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
v2 -> v3 :
- check for stream type in update_proto
- remove intermediate variable in __unix_stream_recvmsg
- fix compile warning in unix_stream_recvmsg
v3 -> v4 :
- remove sk_is_unix_stream, just check TCP_ESTABLISHED for UNIX sockets.
- add READ_ONCE in unix_dgram_recvmsg
- remove type check in unix_stream_bpf_update_proto
v4 -> v5 :
- add two missing READ_ONCE for sk_prot.
--
2.20.1
This patch series adds support for the unix stream type
for sockmap. Sockmap already supports TCP, UDP,
unix dgram types. The unix stream support is similar
to unix dgram.
Also add selftests for unix stream type in sockmap tests.
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/unix/af_unix.c | 86 ++++++++++++++---
net/unix/unix_bpf.c | 93 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
4 files changed, 183 insertions(+), 52 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
v2 -> v3 :
- check for stream type in update_proto
- remove intermediate variable in __unix_stream_recvmsg
- fix compile warning in unix_stream_recvmsg
v3 -> v4 :
- remove sk_is_unix_stream, just check TCP_ESTABLISHED for UNIX sockets.
- add READ_ONCE in unix_dgram_recvmsg
- remove type check in unix_stream_bpf_update_proto
--
2.20.1
This patch set depends on:
- https://lore.kernel.org/linux-integrity/20210723085304.1760138-1-roberto.sa…
- https://lore.kernel.org/linux-integrity/20210705115650.3373599-1-roberto.sa…
I still kept pointer math to optimize the size of the digest_list_item_ref
structure. Replacing offsets with pointers would cause the size of the
structure to double. I could do this in the next version of the patch set
if the size change is acceptable.
Digest Lists Integrity Module (DIGLIM) is a new component added to the
integrity subsystem in the kernel, primarily aiming to aid Integrity
Measurement Architecture (IMA) in the process of checking the integrity
of file content and metadata. It accomplishes this task by storing
reference values coming from software vendors and by reporting whether
or not the digest of file content or metadata calculated by IMA (or EVM)
is found among those values. In this way, IMA can decide, depending on
the result of a query, if a measurement should be taken or access to the
file should be granted. The Security Assumptions section explains in more
detail why this component has been placed in the kernel.
The main benefits of using IMA in conjunction with DIGLIM are the
ability to implement advanced remote attestation schemes based on the
usage of a TPM key for establishing a TLS secure channel [1][2], and to
reduce the burden on Linux distribution vendors to extend secure boot at
OS level to applications.
DIGLIM does not have the complexity of feature-rich databases. In fact,
its main functionality comes from the hash table primitives already in
the kernel. It does not have an ad-hoc storage module; it just indexes
data in a fixed format (digest lists, a set of concatenated digests
preceded by a header), copied to kernel memory as is. Lastly, it
does not support database-oriented languages such as SQL, but only
accepts a digest and its algorithm as a query.
The only digest list format supported by DIGLIM is called compact.
However, Linux distribution vendors don't have to generate new digest
lists in this format for the packages they release: already available
information, such as RPM headers and DEB package metadata, can be used
directly as a source for reference values (it already includes file
digests), with a user space parser taking care of the conversion to the
compact format.
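As a rough illustration of how such data could be walked, here is a
hedged userspace sketch: the header fields mirror the ones shown by the
digest_query interface later in this letter (version, algo, type,
modifiers, count, datalen), but their exact order and widths here are
assumptions for illustration, not the layout used by the patches.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SHA256_DIGEST_SIZE 32

/* Hypothetical header layout, for illustration only. */
struct compact_list_hdr {
        uint8_t  version;
        uint8_t  _reserved;     /* must be zero */
        uint16_t type;
        uint16_t modifiers;
        uint16_t algo;
        uint32_t count;         /* number of digests that follow */
        uint32_t datalen;       /* count * digest size */
} __attribute__((packed));

/* Walk a buffer holding one or more (header + concatenated digests)
 * blocks. */
static void walk_digest_lists(const uint8_t *buf, size_t buflen)
{
        size_t off = 0;

        while (off + sizeof(struct compact_list_hdr) <= buflen) {
                struct compact_list_hdr hdr;
                uint32_t i;

                memcpy(&hdr, buf + off, sizeof(hdr));
                off += sizeof(hdr);
                if (hdr.datalen != hdr.count * SHA256_DIGEST_SIZE ||
                    off + hdr.datalen > buflen)
                        break;  /* malformed digest list */
                for (i = 0; i < hdr.count; i++)
                        printf("digest %u: first byte %02x\n", i,
                               buf[off + (size_t)i * SHA256_DIGEST_SIZE]);
                off += hdr.datalen;
        }
}

int main(void)
{
        uint8_t buf[sizeof(struct compact_list_hdr) + SHA256_DIGEST_SIZE] = { 0 };
        struct compact_list_hdr hdr = {
                .version = 1,
                .algo = 2,      /* assumed identifier for sha256 */
                .count = 1,
                .datalen = SHA256_DIGEST_SIZE,
        };

        memcpy(buf, &hdr, sizeof(hdr));
        walk_digest_lists(buf, sizeof(buf));
        return 0;
}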
Although one might expect that storing file or metadata digests for a
Linux distribution would significantly increase memory usage, this does
not seem to be the case. Anticipating the evaluation done in the
Preliminary Performance Evaluation section, protecting binaries
and shared libraries of a minimal Fedora 33 installation requires 208K
of memory for the digest lists plus 556K for indexing.
In exchange for this slightly increased memory usage, DIGLIM improves
the performance of the integrity subsystem. In the considered scenario,
IMA measurement and appraisal with digest lists require, respectively,
less than one quarter and less than half the time of the current
solution.
DIGLIM also keeps track of whether digest lists have been processed in
some way (e.g. measured or appraised by IMA). This is important, for
example, for remote attestation, so that remote verifiers understand
what has been uploaded to the kernel.
DIGLIM behaves like a transactional database, i.e. it has the ability to
roll back to the beginning of the transaction if an error occurred
during the addition of a digest list (the deletion operation always
succeeds). This capability has been tested with an ad-hoc fault
injection mechanism capable of simulating failures during the
operations.
Finally, DIGLIM exposes to user space, through securityfs, the digest
lists currently loaded, the number of digests added, a query interface
and an interface to set digest list labels.
[1] LSS EU 2019
- slides:
https://static.sched.com/hosted_files/lsseu2019/bd/secure_attested_communic…
- video: https://youtu.be/mffdQgkvDNY
[2] FutureTPM EU project, final review meeting demo
- slides:
https://futuretpm.eu/images/07-3-FutureTPM-Final-Review-Slides-WP6-Device-M…
- video: https://vimeo.com/528251864/4c1d55abcd
Binary Integrity
Integrity is a fundamental security property in information systems.
Integrity can be described as the condition a generic component is in
just after it has been released by the entity that created it.
One way to check whether a component is in this condition (called binary
integrity) is to calculate its digest and to compare it with a reference
value (i.e. the digest calculated in controlled conditions, when the
component is released).
IMA, a software part of the integrity subsystem, can perform such
evaluation and execute different actions:
- store the digest in an integrity-protected measurement list, so that
it can be sent to a remote verifier for analysis;
- compare the calculated digest with a reference value (usually
protected with a signature) and deny operations if the file is found
corrupted;
- store the digest in the system log.
Contribution
DIGLIM further enhances the capabilities offered by IMA-based solutions
and, at the same time, makes them more practical to adopt by reusing
existing sources as reference values for integrity decisions.
Possible sources for digest lists are:
- RPM headers;
- Debian repository metadata.
Benefits for IMA Measurement
One of the issues that arises when files are measured by the OS is that,
due to parallel execution, the order in which file accesses happen
cannot be predicted. Since the TPM Platform Configuration Register (PCR)
extend operation, executed after each file measurement,
cryptographically binds the current measurement to the previous ones,
the PCR value at the end of a workload cannot be predicted either.
Thus, even if the usage of a TPM key, bound to a PCR value, should be
allowed when only good files were accessed, the TPM could unexpectedly
deny an operation on that key if file accesses did not happen as stated
by the key policy (which allows only one of the possible sequences).
DIGLIM solves this issue by making the PCR value stable over time
and not dependent on file accesses. The following figure depicts the
current and the new approaches:
IMA measurement list (current)
entry# 1st boot 2nd boot 3rd boot
+----+---------------+ +----+---------------+ +----+---------------+
1: | 10 | file1 measur. | | 10 | file3 measur. | | 10 | file2 measur. |
+----+---------------+ +----+---------------+ +----+---------------+
2: | 10 | file2 measur. | | 10 | file2 measur. | | 10 | file3 measur. |
+----+---------------+ +----+---------------+ +----+---------------+
3: | 10 | file3 measur. | | 10 | file1 measur. | | 10 | file4 measur. |
+----+---------------+ +----+---------------+ +----+---------------+
PCR: Extend != Extend != Extend
file1, file2, file3 file3, file2, file1 file2, file3, file4
PCR Extend definition:
PCR(new value) = Hash(Hash(meas. entry), PCR(previous value))
A new entry in the measurement list is created by IMA for each file
access. Assuming that file1, file2 and file3 are files provided by the
software vendor and file4 is an unknown file, the first two PCR values
above represent a good system state and the third a bad one. The
PCR values are the result of the PCR extend operation performed for each
measurement entry, with the digest of the measurement entry as input.
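To make the order dependence concrete, here is a minimal userspace
sketch (not part of this series) with FNV-1a standing in for the TPM's
hash algorithm: extending the same measurements in a different order
yields a different final PCR value.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in 64-bit hash (FNV-1a); a real TPM would use SHA-1/SHA-256. */
static uint64_t fnv1a(const void *data, size_t len, uint64_t seed)
{
        const uint8_t *p = data;
        uint64_t h = seed;

        while (len--) {
                h ^= *p++;
                h *= 0x100000001b3ULL;
        }
        return h;
}

/* PCR(new value) = Hash(Hash(meas. entry), PCR(previous value)) */
static uint64_t pcr_extend(uint64_t pcr, const char *entry)
{
        uint64_t meas = fnv1a(entry, strlen(entry), 0xcbf29ce484222325ULL);

        return fnv1a(&meas, sizeof(meas), pcr);
}

int main(void)
{
        const char *boot1[] = { "file1", "file2", "file3" };
        const char *boot2[] = { "file3", "file2", "file1" };
        uint64_t pcr1 = 0, pcr2 = 0;
        int i;

        for (i = 0; i < 3; i++) {
                pcr1 = pcr_extend(pcr1, boot1[i]);
                pcr2 = pcr_extend(pcr2, boot2[i]);
        }
        /* Same measurements, different order: the final values differ. */
        printf("1st boot PCR: %016llx\n2nd boot PCR: %016llx\n",
               (unsigned long long)pcr1, (unsigned long long)pcr2);
        return 0;
}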
IMA measurement list (with DIGLIM)
dlist
+--------------+
| header |
+--------------+
| file1 digest |
| file2 digest |
| file3 digest |
+--------------+
dlist is a digest list containing the digest of file1, file2 and file3.
In the intended scenario, it is generated by a software vendor at the
end of the building process, and retrieved by the administrator of the
system where the digest list is loaded.
entry# 1st boot 2nd boot 3rd boot
+----+---------------+ +----+---------------+ +----+---------------+
0: | 11 | dlist measur. | | 11 | dlist measur. | | 11 | dlist measur. |
+----+---------------+ +----+---------------+ +----+---------------+
1: < file1 measur. skip > < file3 measur. skip > < file2 measur. skip >
2: < file2 measur. skip > < file2 measur. skip > < file3 measur. skip >
+----+---------------+
3: < file3 measur. skip > < file1 measur. skip > | 11 | file4 measur. |
+----+---------------+
PCR: Extend = Extend != Extend
dlist dlist dlist, file4
The first entry in the measurement list contains the digest of the
digest list uploaded to the kernel at kernel initialization time.
When a file is accessed, IMA queries DIGLIM with the calculated file
digest and, if it is found, IMA skips the measurement.
Thus, the only information sent to remote verifiers is the list of
files that could possibly be accessed (from the digest list), but not
whether and when they were accessed, plus the measurements of unknown
files.
Despite providing less information, this solution has the advantage that
the good system state (i.e. when only file1, file2 and file3 are
accessed) can now be represented with a deterministic PCR value (the PCR
is extended only with the measurement of the digest list). Also, the bad
system state can still be distinguished from the good state (the PCR is
also extended with the measurement of file4).
If a TPM key is bound to the good PCR value, the TPM would allow the key
to be used if file1, file2 or file3 are accessed, regardless of the
sequence in which they are accessed (the PCR value does not change), and
would revoke the permission when the unknown file4 is accessed (the PCR
value changes). If a system is able to establish a TLS connection with a
peer, this implicitly means that the system was in a good state (i.e.
file4 was not accessed, otherwise the TPM would have denied the usage of
the TPM key due to the key policy).
Benefits for IMA Appraisal
Extending secure boot to applications means being able to verify the
provenance of the files accessed. IMA does this by verifying file
signatures with a key that it trusts, which requires Linux distribution
vendors to additionally include in the package header a signature for
each file that must be verified (there is a dedicated
RPMTAG_FILESIGNATURES section in the RPM header).
The proposed approach is instead to verify data provenance from
already available metadata (file digests) in existing packages. IMA
would verify the signature of the package metadata and search for the
file digests extracted from that metadata and added to the hash table
in the kernel.
For RPMs, file digests can be found in the RPMTAG_FILEDIGESTS section of
RPMTAG_IMMUTABLE, whose signature is in RPMTAG_RSAHEADER. For DEBs, file
digests (unsafe to use due to a weak digest algorithm) can be found in
the md5sum file, which can be indirectly verified from Release.gpg.
The following figure highlights the differences between the current and
the proposed approach.
IMA appraisal (current solution, with file signatures):
appraise
+-----------+
V |
+-------------------------+-----+ +-------+-----+ |
| RPM header | | ima rpm | file1 | sig | |
| ... | | plugin +-------+-----+ +-----+
| file1 sig [to be added] | sig |--------> ... | IMA |
| ... | | +-------+-----+ +-----+
| fileN sig [to be added] | | | fileN | sig |
+-------------------------+-----+ +-------+-----+
In this case, file signatures must be added to the RPM header, so that
the ima rpm plugin can extract them together with the file content. The
RPM header signature is not used.
IMA appraisal (with DIGLIM):
kernel hash table
with RPM header content
+---+ +--------------+
| |--->| file1 digest |
+---+ +--------------+
...
+---+ appraise (file1)
| | <--------------+
+----------------+-----+ +---+ |
| RPM header | | ^ |
| ... | | digest_list | |
| file1 digest | sig | rpm plugin | +-------+ +-----+
| ... | |-------------+--->| file1 | | IMA |
| fileN digest | | +-------+ +-----+
+----------------+-----+ |
^ |
+------------------------------------+
appraise (RPM header)
In this case, the RPM header is used as it is, and its signature is used
for IMA appraisal. Then, the digest_list rpm plugin executes the user
space parser to parse the RPM header and add the extracted digests to a
hash table in the kernel. IMA appraisal of the files in the RPM package
consists of searching for their digests in the hash table.
Other than reusing available information as digest lists, another
advantage is the lower computational overhead compared to the solution
with file signatures (one signature verification for many files plus
digest lookups, instead of per-file signature verification; see
Preliminary Performance Evaluation for more details).
Lifecycle
The lifecycle of DIGLIM is represented in the following figure:
Vendor premises (release process with modifications):
+------------+ +-----------------------+ +------------------------+
| 1. build a | | 2. generate and sign | | 3. publish the package |
| package |-->| a digest list from |-->| and digest list in |
| | | packaged files | | a repository |
+------------+ +-----------------------+ +------------------------+
|
|
User premises: |
V
+---------------------+ +------------------------+ +-----------------+
| 6. use digest lists | | 5. download the digest | | 4. download and |
| for measurement |<--| list and upload to |<--| install the |
| and/or appraisal | | the kernel | | package |
+---------------------+ +------------------------+ +-----------------+
The figure above represents all the steps when a digest list is
generated separately. However, as mentioned in Contribution, in most
cases existing packages can already be used as a source for digest
lists, limiting the effort for software vendors.
If, for example, RPMs are used as a source for digest lists, the figure
above becomes:
Vendor premises (release process without modifications):
+------------+ +------------------------+
| 1. build a | | 2. publish the package |
| package |-->| in a repository |---------------------+
| | | | |
+------------+ +------------------------+ |
|
|
User premises: |
V
+---------------------+ +------------------------+ +-----------------+
| 5. use digest lists | | 4. extract digest list | | 3. download and |
| for measurement |<--| from the package |<--| install the |
| and/or appraisal | | and upload to the | | package |
| | | kernel | | |
+---------------------+ +------------------------+ +-----------------+
Step 4 can be performed with the digest_list rpm plugin and the user
space parser, without changes to rpm itself.
Security Assumptions
As mentioned in the Introduction, DIGLIM will be primarily used in
conjunction with IMA to enforce a mandatory policy on all user space
processes, including those owned by root. Even root, in a system with a
locked-down kernel, cannot affect the enforcement of the mandatory
policy or, if changes are permitted, it cannot do so without being
detected.
Given that the targets of the enforcement are user space processes,
DIGLIM cannot be placed in the target, as a Mandatory Access Control
(MAC) design requires the components responsible for enforcing the
mandatory policy to be separated from the target.
While locking down a system and limiting actions with a mandatory policy
is generally perceived by users as an obstacle, it has noteworthy
benefits for the users themselves.
First, it would promptly block attempts by malicious software to steal
or misuse user assets. Although users could query the package managers
to detect such software, detection would happen after the fact, or it
wouldn't happen at all if the malicious software tampered with the
package managers.
With a mandatory policy enforced by the kernel, users would still be
able to decide which software they want to be executed except that,
unlike package managers, the kernel is not affected by user space
processes or root.
Second, it might make systems more easily verifiable from the outside,
due to the limited set of actions the system allows. When users connect
to a server, not only would they be able to verify the server's
identity, which is already possible with communication protocols like
TLS, but also whether the software running on that server can be
trusted to handle their sensitive data.
Adoption
A former version of DIGLIM is used in the following OSes:
- openEuler 20.09
https://github.com/openeuler-mirror/kernel/tree/openEuler-20.09
- openEuler 21.03
https://github.com/openeuler-mirror/kernel/tree/openEuler-21.03
Originally, DIGLIM was part of IMA (known as IMA Digest Lists). In this
version, it has been redesigned as a standalone module with an API that
makes its functionality accessible to IMA and, eventually, other
subsystems.
User Space Support
Digest lists can be generated and managed with digest-list-tools:
https://github.com/openeuler-mirror/digest-list-tools
It includes two main applications:
- gen_digest_lists: generates digest lists from files in the
filesystem or from the RPM database (more digest list sources can be
supported);
- manage_digest_lists: converts and uploads digest lists to the
kernel.
Integration with rpm is done with the digest_list plugin:
https://gitee.com/src-openeuler/rpm/blob/master/Add-digest-list-plugin.patch
This plugin writes the RPM header and its signature to a file, so that
the file is ready to be appraised by IMA, and calls the user space
parser to convert and upload the digest list to the kernel.
Simple Usage Example (Tested with Fedora 33)
1. Digest list generation (RPM headers and their signature are copied
to the specified directory):
# mkdir /etc/digest_lists
# gen_digest_lists -t file -f rpm+db -d /etc/digest_lists -o add
2. Digest list upload with the user space parser:
# manage_digest_lists -p add-digest -d /etc/digest_lists
3. First digest list query:
# echo sha256-$(sha256sum /bin/cat | cut -d' ' -f1) > /sys/kernel/security/integrity/diglim/digest_query
# cat /sys/kernel/security/integrity/diglim/digest_query
sha256-[...]-0-file_list-rpm-coreutils-8.32-18.fc33.x86_64 (actions: 0): version: 1, algo: sha256, type: 2, modifiers: 1, count: 106, datalen: 3392
4. Second digest list query:
# echo sha256-$(sha256sum /bin/zip | cut -d' ' -f1) > /sys/kernel/security/integrity/diglim/digest_query
# cat /sys/kernel/security/integrity/diglim/digest_query
sha256-[...]-0-file_list-rpm-zip-3.0-27.fc33.x86_64 (actions: 0): version: 1, algo: sha256, type: 2, modifiers: 1, count: 4, datalen: 128
Preliminary Performance Evaluation
This section provides an initial estimation of the overhead introduced
by DIGLIM. The estimation has been performed on a Fedora 33 virtual
machine with 1447 packages installed. The virtual machine has 16 vCPU
(host CPU: AMD Ryzen Threadripper PRO 3955WX 16-Cores) and 2G of RAM
(host memory: 64G). The virtual machine also has a vTPM with libtpms and
swtpm as backend.
After writing the RPM headers to files, the size of the directory
containing them is 36M.
After converting the RPM headers to the compact digest list, the size of
the data being uploaded to the kernel is 3.6M.
The time to load the entire RPM database is 0.628s.
After loading the digest lists to the kernel, the slab usage due to
indexing is (obtained with slab_nomerge in the kernel command line):
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
118144 118144 100% 0,03K 923 128 3692K digest_list_item_ref_cache
102400 102400 100% 0,03K 800 128 3200K digest_item_cache
2646 2646 100% 0,09K 63 42 252K digest_list_item_cache
The stats, obtained from the digests_count interface, introduced later,
are:
Parser digests: 0
File digests: 99100
Metadata digests: 0
Digest list digests: 1423
On this installation, this would be the worst case in which all files
are measured and/or appraised, which is currently not recommended
without enforcing an integrity policy protecting mutable files. Infoflow
LSM is a component to accomplish this task:
https://patchwork.kernel.org/project/linux-integrity/cover/20190818235745.1…
The first manageable goal of IMA with DIGLIM is to use an execution
policy, with measurement and/or appraisal of files executed or mapped in
memory as executable (in addition to kernel modules and firmware). In
this case, the digest list contains the digest only for those files. The
numbers above change as follows.
After converting the RPM headers to the compact digest list, the size of
the data being uploaded to the kernel is 208K.
The time to load the digest of binaries and shared libraries is 0.062s.
After loading the digest lists to the kernel, the slab usage due to
indexing is:
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
7168 7168 100% 0,03K 56 128 224K digest_list_item_ref_cache
7168 7168 100% 0,03K 56 128 224K digest_item_cache
1134 1134 100% 0,09K 27 42 108K digest_list_item_cache
The stats, obtained from the digests_count interface, are:
Parser digests: 0
File digests: 5986
Metadata digests: 0
Digest list digests: 1104
Comparison with IMA
This section compares the performance between the current solution for
IMA measurement and appraisal, and IMA with DIGLIM.
Workload A (without DIGLIM):
1. cat file[0-5985] > /dev/null
Workload B (with DIGLIM):
1. echo $PWD/0-file_list-compact-file[0-1103] >
<securityfs>/integrity/diglim/digest_list_add
2. cat file[0-5985] > /dev/null
Workload A execution time without IMA policy:
real 0m0,155s
user 0m0,008s
sys 0m0,066s
Measurement
IMA policy:
measure fowner=2000 func=FILE_CHECK mask=MAY_READ use_diglim=allow pcr=11 ima_template=ima-sig
use_diglim is a policy keyword not yet supported by IMA.
Workload A execution time with IMA and 5986 files with signature
measured:
real 0m8,273s
user 0m0,008s
sys 0m2,537s
Workload B execution time with IMA, 1104 digest lists with signature
measured and uploaded to the kernel, and 5986 files with signature
accessed but not measured (due to the file digest being found in the
hash table):
real 0m1,837s
user 0m0,036s
sys 0m0,583s
Appraisal
IMA policy:
appraise fowner=2000 func=FILE_CHECK mask=MAY_READ use_diglim=allow
use_diglim is a policy keyword not yet supported by IMA.
Workload A execution time with IMA and 5986 files with file signature
appraised:
real 0m2,197s
user 0m0,011s
sys 0m2,022s
Workload B execution time with IMA, 1104 digest lists with signature
appraised and uploaded to the kernel, and with 5986 files with signature
not verified (due to the file digest being found in the hash table):
real 0m0,982s
user 0m0,020s
sys 0m0,865s
Changelog
v1:
- remove 'ima: Add digest, algo, measured parameters to
ima_measure_critical_data()', replaced by:
https://lore.kernel.org/linux-integrity/20210705090922.3321178-1-roberto.sa…
- add 'Lifecycle' subsection to better clarify how digest lists are
generated and used (suggested by Greg KH)
- remove 'Possible Usages' subsection and add 'Benefits for IMA
Measurement' and 'Benefits for IMA Appraisal' subsubsections
- add 'Preliminary Performance Evaluation' subsection
- declare digest_offset and hdr_offset in the digest_list_item_ref
structure as u32 (sufficient for digest lists of 4G) to make room for a
list_head structure (digest_list_item_ref size: 32)
- implement digest list reference management with a linked list instead of
an array
- reorder structure members for better alignment (suggested by Mauro)
- rename digest_lookup() to __digest_lookup() (suggested by Mauro)
- introduce an object cache for each defined structure
- replace atomic_long_t with unsigned long in h_table structure definition
(suggested by Greg KH)
- remove GPL2 license text and file names (suggested by Greg KH)
- ensure that the _reserved field of compact_list_hdr is equal to zero
(suggested by Greg KH)
- dynamically allocate the buffer in digest_lists_show_htable_len() to
avoid frame size warning (reported by kernel test robot, dynamic
allocation suggested by Mauro)
- split documentation in multiple files and reference the source code
(suggested by Mauro)
- use #ifdef in include/linux/diglim.h
- improve generation of event name for IMA measurements
- add new patch to introduce the 'Remote Attestation' section in the
documentation
- fix assignment of actions variable in digest_list_read() and
digest_list_write()
- always release dentry reference when digest_list_get_secfs_files() is
called
- rewrite add/del and query interfaces to take advantage of m->private
- prevent deletion of a digest list only if there are actions done at
addition time that are not currently being performed
- fix doc warnings (replace Returns with Return:)
- perform queries of digest list digests in the existing tests
- add new tests: digest_list_add_del_test_file_upload_measured,
digest_list_check_measurement_list_test_file_upload and
digest_list_check_measurement_list_test_buffer_upload
- don't return a value from digest_del(), digest_list_ref_del(), and
digest_list_del()
- improve Makefile for tests
Roberto Sassu (12):
diglim: Overview
diglim: Basic definitions
diglim: Objects
diglim: Methods
diglim: Parser
diglim: Interfaces - digest_list_add, digest_list_del
diglim: Interfaces - digest_lists_loaded
diglim: Interfaces - digest_label
diglim: Interfaces - digest_query
diglim: Interfaces - digests_count
diglim: Remote Attestation
diglim: Tests
.../security/diglim/architecture.rst | 45 +
.../security/diglim/implementation.rst | 255 +++
Documentation/security/diglim/index.rst | 14 +
.../security/diglim/introduction.rst | 631 ++++++++
.../security/diglim/remote_attestation.rst | 87 ++
Documentation/security/diglim/tests.rst | 66 +
Documentation/security/index.rst | 1 +
MAINTAINERS | 19 +
include/linux/diglim.h | 28 +
include/linux/kernel_read_file.h | 1 +
include/uapi/linux/diglim.h | 51 +
security/integrity/Kconfig | 1 +
security/integrity/Makefile | 1 +
security/integrity/diglim/Kconfig | 11 +
security/integrity/diglim/Makefile | 8 +
security/integrity/diglim/diglim.h | 157 ++
security/integrity/diglim/fs.c | 782 ++++++++++
security/integrity/diglim/methods.c | 499 ++++++
security/integrity/diglim/parser.c | 274 ++++
security/integrity/integrity.h | 4 +
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/diglim/Makefile | 19 +
tools/testing/selftests/diglim/common.c | 115 ++
tools/testing/selftests/diglim/common.h | 31 +
tools/testing/selftests/diglim/config | 3 +
tools/testing/selftests/diglim/selftest.c | 1382 +++++++++++++++++
26 files changed, 4486 insertions(+)
create mode 100644 Documentation/security/diglim/architecture.rst
create mode 100644 Documentation/security/diglim/implementation.rst
create mode 100644 Documentation/security/diglim/index.rst
create mode 100644 Documentation/security/diglim/introduction.rst
create mode 100644 Documentation/security/diglim/remote_attestation.rst
create mode 100644 Documentation/security/diglim/tests.rst
create mode 100644 include/linux/diglim.h
create mode 100644 include/uapi/linux/diglim.h
create mode 100644 security/integrity/diglim/Kconfig
create mode 100644 security/integrity/diglim/Makefile
create mode 100644 security/integrity/diglim/diglim.h
create mode 100644 security/integrity/diglim/fs.c
create mode 100644 security/integrity/diglim/methods.c
create mode 100644 security/integrity/diglim/parser.c
create mode 100644 tools/testing/selftests/diglim/Makefile
create mode 100644 tools/testing/selftests/diglim/common.c
create mode 100644 tools/testing/selftests/diglim/common.h
create mode 100644 tools/testing/selftests/diglim/config
create mode 100644 tools/testing/selftests/diglim/selftest.c
--
2.25.1
Exit with return code 4 if lkdtm is not available, as other tests do,
in order to properly skip the test.
Signed-off-by: Misono Tomohiro <misono.tomohiro(a)jp.fujitsu.com>
---
I saw the same problem reported here (on 5.14-rc4):
https://lore.kernel.org/lkml/2836f48a-d4e2-7f00-f06c-9f556fbd6332@linuxfoun…
tools/testing/selftests/lkdtm/stack-entropy.sh | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/lkdtm/stack-entropy.sh b/tools/testing/selftests/lkdtm/stack-entropy.sh
index 1b4d95d575f8..14fedeef762e 100755
--- a/tools/testing/selftests/lkdtm/stack-entropy.sh
+++ b/tools/testing/selftests/lkdtm/stack-entropy.sh
@@ -4,13 +4,27 @@
# Measure kernel stack entropy by sampling via LKDTM's REPORT_STACK test.
set -e
samples="${1:-1000}"
+TRIGGER=/sys/kernel/debug/provoke-crash/DIRECT
+KSELFTEST_SKIP_TEST=4
+
+# Verify we have LKDTM available in the kernel.
+if [ ! -r $TRIGGER ] ; then
+ /sbin/modprobe -q lkdtm || true
+ if [ ! -r $TRIGGER ] ; then
+ echo "Cannot find $TRIGGER (missing CONFIG_LKDTM?)"
+ else
+ echo "Cannot write $TRIGGER (need to run as root?)"
+ fi
+ # Skip this test
+ exit $KSELFTEST_SKIP_TEST
+fi
# Capture dmesg continuously since it may fill up depending on sample size.
log=$(mktemp -t stack-entropy-XXXXXX)
dmesg --follow >"$log" & pid=$!
report=-1
for i in $(seq 1 $samples); do
- echo "REPORT_STACK" >/sys/kernel/debug/provoke-crash/DIRECT
+ echo "REPORT_STACK" > $TRIGGER
if [ -t 1 ]; then
percent=$(( 100 * $i / $samples ))
if [ "$percent" -ne "$report" ]; then
--
2.31.1
A common feature of unit testing frameworks is support for sharing a test
configuration across multiple unit tests. Add this functionality to the
KUnit framework. This functionality will be used in the next patch in this
series.
Reviewed-by: Brendan Higgins <brendanhiggins(a)google.com>
Cc: David Gow <davidgow(a)google.com>
Cc: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: kunit-dev(a)googlegroups.com
Cc: linux-kselftest(a)vger.kernel.org
Cc: Bodo Stroesser <bostroesser(a)gmail.com>
Cc: Martin K. Petersen <martin.petersen(a)oracle.com>
Cc: Yanko Kaneti <yaneti(a)declera.com>
Signed-off-by: Bart Van Assche <bvanassche(a)acm.org>
---
include/kunit/test.h | 4 ++++
lib/kunit/test.c | 14 ++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 24b40e5c160b..a6eef96a409c 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -215,6 +215,8 @@ static inline char *kunit_status_to_ok_not_ok(enum kunit_status status)
* struct kunit_suite - describes a related collection of &struct kunit_case
*
* @name: the name of the test. Purely informational.
+ * @init_suite: called once per test suite before the test cases.
+ * @exit_suite: called once per test suite after all test cases.
* @init: called before every test case.
* @exit: called after every test case.
* @test_cases: a null terminated array of test cases.
@@ -229,6 +231,8 @@ static inline char *kunit_status_to_ok_not_ok(enum kunit_status status)
*/
struct kunit_suite {
const char name[256];
+ int (*init_suite)(void);
+ void (*exit_suite)(void);
int (*init)(struct kunit *test);
void (*exit)(struct kunit *test);
struct kunit_case *test_cases;
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index d79ecb86ea57..c271692ced93 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -397,9 +397,19 @@ int kunit_run_tests(struct kunit_suite *suite)
{
char param_desc[KUNIT_PARAM_DESC_SIZE];
struct kunit_case *test_case;
+ int res = 0;
kunit_print_subtest_start(suite);
+ if (suite->init_suite)
+ res = suite->init_suite();
+
+ if (res < 0) {
+ kunit_log(KERN_INFO, suite, KUNIT_SUBTEST_INDENT
+ "# Suite initialization failed (%d)\n", res);
+ goto end;
+ }
+
kunit_suite_for_each_test_case(suite, test_case) {
struct kunit test = { .param_value = NULL, .param_index = 0 };
test_case->status = KUNIT_SKIPPED;
@@ -439,6 +449,10 @@ int kunit_run_tests(struct kunit_suite *suite)
test.status_comment);
}
+ if (suite->exit_suite)
+ suite->exit_suite();
+
+end:
kunit_print_subtest_end(suite);
return 0;
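For illustration, a minimal sketch of a suite using the two new hooks
introduced by this patch; example_test_cases stands for an array of
test cases assumed to be defined elsewhere, and a negative return from
init_suite makes the runner log the failure and skip the suite's cases:

static int example_init_suite(void)
{
        /* Set up state shared by all test cases in the suite; a
         * negative return value marks initialization as failed. */
        return 0;
}

static void example_exit_suite(void)
{
        /* Tear down the shared state. */
}

static struct kunit_suite example_suite = {
        .name = "example",
        .init_suite = example_init_suite,
        .exit_suite = example_exit_suite,
        .test_cases = example_test_cases,
};
kunit_test_suite(example_suite);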
TDX stands for Trust Domain Extensions, which isolates VMs from the
virtual-machine manager (VMM)/hypervisor and any other software on the
platform.
Intel has recently submitted a set of RFC patches for KVM support for
TDX and more information can be found on the latest TDX Support
Patches: https://lkml.org/lkml/2021/7/2/558
Due to the nature of the confidential computing environment that TDX
provides, it is very difficult to verify/test the KVM support. TDX
requires UEFI and the guest kernel to be enlightened, and both are
still under development.
We are working on a set of selftests to close this gap and be able to
verify the KVM functionality to support the TDX lifecycle and the
GHCI [1] interface.
We are looking for any feedback on:
- Patch series itself
- Any suggestion on how we should approach testing TDX functionality.
Do selftests seem reasonable, or should we switch to using KVM
unit tests? I would be happy to get some perspective on how KVM unit
tests can help us more.
- Any test case or scenario that we should add.
- Anything else I have not thought of yet.
The current patch series provides the following capabilities:
- Provide helper functions to create a TD (Trusted Domain) using the KVM
ioctls
- Provide helper functions to create a guest image that can include any
testing code
- Provide helper functions and wrapper functions to write testing code
using GHCI interface
- Add a test case that verifies TDX life cycle
- Add a test case that verifies TDX GHCI port IO
TODOs:
- Use existing function to create page tables dynamically
(i.e. __virt_pg_map())
- Remove arbitrary defined magic numbers for data structure offsets
- Add TDVMCALL for error reporting
- Add additional test cases as some listed below
- Add #VE handlers to help testing more complicated test cases
Other test cases that we are planning to add:
(with credit to sagis(a)google.com)
VM call interface Input Output Result
GetTdVmCallInfo R12=0 None VMCALL_SUCCESS
MapGPA Map private page (GPA.S=0) VMCALL_SUCCESS
MapGPA Map shared page (GPA.S=1) VMCALL_SUCCESS
MapGPA Map already private page as private VMCALL_INVALID_OPERAND
MapGPA Map already shared page as shared VMCALL_INVALID_OPERAND
GetQuote
ReportFatalError
SetupEventNotifyInterrupt Valid interrupt value (32:255) VMCALL_SUCCESS
SetupEventNotifyInterrupt Invalid value (>255) VMCALL_INVALID_OPERAND
Instruction.CPUID R12(EAX)=1, R13(ECX)=0 EBX[8:15]=0x8
EBX[16:23]=X
EBX[24:31]=vcpu_id
ECX[0]=1
ECX[12]=Y
Instruction.CPUID R12(EAX)=1, R13(ECX)=4 VMCALL_INVALID_OPERAND
VE.RequestMMIO
Instruction.HLT VMCALL_SUCCESS
Instruction.IO Read/Write 1/2/4 bytes VMCALL_SUCCESS
Instruction.IO Read/Write 3 bytes VMCALL_INVALID_OPERAND
Instruction.RDMSR Accessible register R11=msr_value VMCALL_SUCCESS
Inaccessible register VMCALL_INVALID_OPERAND
Instruction.WRMSR Accessible register VMCALL_SUCCESS
Inaccessible register VMCALL_INVALID_OPERAND
INSTRUCTION.PCONFIG
[1] Intel TDX Guest-Hypervisor Communication Interface
https://software.intel.com/content/dam/develop/external/us/en/documents/int…
Erdem Aktas (4):
KVM: selftests: Add support for creating non-default type VMs
KVM: selftest: Add helper functions to create TDX VMs
KVM: selftest: Adding TDX life cycle test.
KVM: selftest: Adding test case for TDX port IO
tools/testing/selftests/kvm/Makefile | 6 +-
.../testing/selftests/kvm/include/kvm_util.h | 1 +
.../selftests/kvm/include/x86_64/processor.h | 5 +
tools/testing/selftests/kvm/lib/kvm_util.c | 29 +-
.../selftests/kvm/lib/x86_64/processor.c | 23 ++
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 220 ++++++++++++
.../selftests/kvm/lib/x86_64/tdx_lib.c | 314 ++++++++++++++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 209 ++++++++++++
8 files changed, 800 insertions(+), 7 deletions(-)
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx.h
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
--
2.32.0.432.gabb21c7263-goog
This patch series adds support for the unix stream type
in sockmap. Sockmap already supports the TCP, UDP, and
unix dgram types; the unix stream support is similar to
unix dgram.
It also adds selftests for the unix stream type to the sockmap tests.
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/core/sock_map.c | 8 +-
net/unix/af_unix.c | 86 ++++++++++++++---
net/unix/unix_bpf.c | 96 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
5 files changed, 193 insertions(+), 53 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
v2 -> v3 :
- check for stream type in update_proto
- remove intermediate variable in __unix_stream_recvmsg
- fix compile warning in unix_stream_recvmsg
--
2.20.1
The XSAVE feature set supports the saving and restoring of state
components, and the XSAVE feature is used for process context switching.
The XSAVE state components include x87 state for the FPU execution
environment, SSE state, AVX state and so on. In order to ensure that
XSAVE works correctly, add a basic test for XSAVE architecture
functionality.
This patch set tests and verifies the basic functions of XSAVE/XRSTOR in
user space; during and after signal handling on the x86 platform, the
XSAVE contents of the process should not be changed.
This series introduces only the most basic XSAVE tests. In the
future, the intention is to continue expanding the scope of
these selftests to include more kernel XSAVE-related functionality
and XSAVE-managed features like AMX and shadow stacks.
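As an illustration of the approach (this is not code from the series),
a hedged sketch of the core check: snapshot the state with XSAVE before
and after a signal round-trip and compare. XSAVE_SIZE and the all-ones
component mask are assumptions; the real area size comes from CPUID
leaf 0xd, and the real test keeps the window between the two snapshots
free of compiler-generated FP code.

#include <signal.h>
#include <stdint.h>
#include <string.h>

#define XSAVE_SIZE 4096 /* assumption: query CPUID(0xd) for the real size */

static uint8_t before[XSAVE_SIZE] __attribute__((aligned(64)));
static uint8_t after[XSAVE_SIZE] __attribute__((aligned(64)));

/* Save the state components selected by the EDX:EAX bitmask into buf. */
static inline void do_xsave(void *buf, uint32_t lo, uint32_t hi)
{
        asm volatile("xsave (%0)" : : "r"(buf), "a"(lo), "d"(hi) : "memory");
}

static void handler(int sig)
{
        /* The handler itself may use FPU/SSE; the kernel must restore
         * the interrupted context's XSAVE state on sigreturn. */
        (void)sig;
}

static int xsave_unchanged_by_signal(void)
{
        signal(SIGUSR1, handler);
        do_xsave(before, ~0u, ~0u);
        raise(SIGUSR1);
        do_xsave(after, ~0u, ~0u);
        return memcmp(before, after, XSAVE_SIZE) == 0;
}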
Pengfei Xu (2):
selftests/xsave: test basic XSAVE architecture functionality
selftests/xsave: add xsave test during and after signal handling
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/xsave/.gitignore | 3 +
tools/testing/selftests/xsave/Makefile | 6 +
tools/testing/selftests/xsave/xsave_common.h | 246 ++++++++++++++++++
.../selftests/xsave/xsave_instruction.c | 83 ++++++
.../selftests/xsave/xsave_signal_handle.c | 184 +++++++++++++
6 files changed, 523 insertions(+)
create mode 100644 tools/testing/selftests/xsave/.gitignore
create mode 100644 tools/testing/selftests/xsave/Makefile
create mode 100644 tools/testing/selftests/xsave/xsave_common.h
create mode 100644 tools/testing/selftests/xsave/xsave_instruction.c
create mode 100644 tools/testing/selftests/xsave/xsave_signal_handle.c
--
2.20.1
Extend KSM self tests with a performance benchmark. These tests are not
part of regular regression testing, as they are mainly intended to be
used by developers making changes to the memory management subsystem.
Both patches were developed against linux-next.
Zhansaya Bagdauletkyzy (2):
selftests: vm: add KSM merging time test
selftests: vm: add COW time test for KSM pages
tools/testing/selftests/vm/ksm_tests.c | 136 ++++++++++++++++++++++++-
1 file changed, 132 insertions(+), 4 deletions(-)
--
2.25.1
Currently we don't have full automated tests for the vector length
configuration ABIs offered for SVE: we have a helper binary for setting
the vector length which can be used for manual tests, and we use the
prctl() interface to enumerate the vector lengths but don't actually
verify that the vector lengths enumerated were set.
This patch series provides a small helper which allows us to get the
currently configured vector length using the RDVL instruction via either
a library call or stdout of a process and then uses this to both add
verification of enumerated vector lengths to our existing tests and also
add a new test program which exercises both the prctl() and sysfs
interfaces.
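For context, a hedged sketch of what reading the vector length with
RDVL looks like; the series implements this as an asm helper with a BTI
landing pad, while this inline-asm variant is only illustrative. The
value read this way can then be cross-checked against what
prctl(PR_SVE_SET_VL) claims to have configured.

/* Returns the currently configured SVE vector length in bytes:
 * RDVL Xd, #imm yields imm * VL, so #1 gives VL itself. */
static inline unsigned long rdvl_sve(void)
{
        unsigned long vl;

        asm volatile(".arch_extension sve\n\t"
                     "rdvl %0, #1"
                     : "=r" (vl));
        return vl;
}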
In preparation for the forthcoming support for the Scalable Matrix
Extension (SME) [1], which introduces a new vector length managed via a
very similar hardware interface, the helper and new test program are
parameterised with the goal of allowing reuse for SME.
[1] https://community.arm.com/developer/ip-products/processors/b/processors-ip-…
v5:
- Fix a potentially uninitialized variable case.
- Clarify an error message.
- Add TODO.
v4:
- Fix fscanf() format string handling to properly confirm the newline.
- Pull fclose() out of stdio read helper.
- Change style of child monitoring loop.
v3:
- Add BTI landing pads to the asm helper functions.
- Clean up pipes used to talk to children.
- Remove another unneeded include.
- Make functions in the main executable static.
- Match the newline when parsing vector length from the child.
- Factor out the fscanf() and fclose() from parsing integers from file
descriptors.
- getauxval() returns unsigned long.
v2:
- Tweak log message on failure in sve-probe-vls.
- Stylistic changes in vec-syscfg.
- Flush stdout before forking in vec-syscfg.
- Use EXIT_FAILURE.
- Use fdopen() to get child output.
- Replace a bunch of UNIX API usage with stdio.
- Add a TODO list.
- Verify that we're root before testing writes to /proc.
Mark Brown (4):
kselftest/arm64: Provide a helper binary and "library" for SVE RDVL
kselftest/arm64: Validate vector lengths are set in sve-probe-vls
kselftest/arm64: Add tests for SVE vector configuration
kselftest/arm64: Add a TODO list for floating point tests
tools/testing/selftests/arm64/fp/.gitignore | 2 +
tools/testing/selftests/arm64/fp/Makefile | 11 +-
tools/testing/selftests/arm64/fp/TODO | 4 +
tools/testing/selftests/arm64/fp/rdvl-sve.c | 14 +
tools/testing/selftests/arm64/fp/rdvl.S | 10 +
tools/testing/selftests/arm64/fp/rdvl.h | 8 +
.../selftests/arm64/fp/sve-probe-vls.c | 5 +
tools/testing/selftests/arm64/fp/vec-syscfg.c | 593 ++++++++++++++++++
8 files changed, 644 insertions(+), 3 deletions(-)
create mode 100644 tools/testing/selftests/arm64/fp/TODO
create mode 100644 tools/testing/selftests/arm64/fp/rdvl-sve.c
create mode 100644 tools/testing/selftests/arm64/fp/rdvl.S
create mode 100644 tools/testing/selftests/arm64/fp/rdvl.h
create mode 100644 tools/testing/selftests/arm64/fp/vec-syscfg.c
base-commit: ff1176468d368232b684f75e82563369208bc371
--
2.20.1
Currently we don't have full automated tests for the vector length
configuration ABIs offered for SVE: we have a helper binary for setting
the vector length which can be used for manual tests, and we use the
prctl() interface to enumerate the vector lengths but don't actually
verify that the vector lengths enumerated were set.
This patch series provides a small helper which allows us to get the
currently configured vector length using the RDVL instruction via either
a library call or stdout of a process and then uses this to both add
verification of enumerated vector lengths to our existing tests and also
add a new test program which exercises both the prctl() and sysfs
interfaces.
In preparation for the forthcoming support for the Scalable Matrix
Extension (SME) [1], which introduces a new vector length managed via a
very similar hardware interface, the helper and new test program are
parameterised with the goal of allowing reuse for SME.
[1] https://community.arm.com/developer/ip-products/processors/b/processors-ip-…
v3:
- Add BTI landing pads to the asm helper functions.
- Clean up pipes used to talk to children.
- Remove another unneeded include.
- Make functions in the main executable static.
- Match the newline when parsing vector length from the child.
- Factor out the fscanf() and fclose() from parsing integers from file
descriptors.
- getauxval() returns unsigned long.
v2:
- Tweak log message on failure in sve-probe-vls.
- Stylistic changes in vec-syscfg.
- Flush stdout before forking in vec-syscfg.
- Use EXIT_FAILURE.
- Use fdopen() to get child output.
- Replace a bunch of UNIX API usage with stdio.
- Add a TODO list.
- Verify that we're root before testing writes to /proc.
Mark Brown (4):
kselftest/arm64: Provide a helper binary and "library" for SVE RDVL
kselftest/arm64: Validate vector lengths are set in sve-probe-vls
kselftest/arm64: Add tests for SVE vector configuration
kselftest/arm64: Add a TODO list for floating point tests
tools/testing/selftests/arm64/fp/.gitignore | 2 +
tools/testing/selftests/arm64/fp/Makefile | 11 +-
tools/testing/selftests/arm64/fp/TODO | 3 +
tools/testing/selftests/arm64/fp/rdvl-sve.c | 14 +
tools/testing/selftests/arm64/fp/rdvl.S | 10 +
tools/testing/selftests/arm64/fp/rdvl.h | 8 +
.../selftests/arm64/fp/sve-probe-vls.c | 5 +
tools/testing/selftests/arm64/fp/vec-syscfg.c | 594 ++++++++++++++++++
8 files changed, 644 insertions(+), 3 deletions(-)
create mode 100644 tools/testing/selftests/arm64/fp/TODO
create mode 100644 tools/testing/selftests/arm64/fp/rdvl-sve.c
create mode 100644 tools/testing/selftests/arm64/fp/rdvl.S
create mode 100644 tools/testing/selftests/arm64/fp/rdvl.h
create mode 100644 tools/testing/selftests/arm64/fp/vec-syscfg.c
base-commit: ff1176468d368232b684f75e82563369208bc371
--
2.20.1
The XSAVE feature set supports the saving and restoring of state
components, and the XSAVE feature is used for process context switching.
The XSAVE state components include x87 state for the FPU execution
environment, SSE state, AVX state and so on. In order to ensure that
XSAVE works correctly, add a basic test for XSAVE architecture
functionality.
This patch set tests and verifies the basic functions of XSAVE/XRSTOR in
user space; during and after signal processing on the x86 platform, the
XSAVE contents of the process should not be changed.
This series introduces only the most basic XSAVE tests. In the
future, the intention is to continue expanding the scope of
these selftests to include more kernel XSAVE-related functionality
and XSAVE-managed features like AMX and shadow stacks.
Pengfei Xu (2):
selftests/xsave: test basic XSAVE architecture functionality
selftests/xsave: add xsave test during and after signal handling
tools/testing/selftests/Makefile | 1 +
tools/testing/selftests/xsave/.gitignore | 3 +
tools/testing/selftests/xsave/Makefile | 6 +
tools/testing/selftests/xsave/xsave_common.h | 246 ++++++++++++++++++
.../selftests/xsave/xsave_instruction.c | 83 ++++++
.../selftests/xsave/xsave_signal_handle.c | 184 +++++++++++++
6 files changed, 523 insertions(+)
create mode 100644 tools/testing/selftests/xsave/.gitignore
create mode 100644 tools/testing/selftests/xsave/Makefile
create mode 100644 tools/testing/selftests/xsave/xsave_common.h
create mode 100644 tools/testing/selftests/xsave/xsave_instruction.c
create mode 100644 tools/testing/selftests/xsave/xsave_signal_handle.c
--
2.20.1
This patch series adds support for the unix stream type
in sockmap. Sockmap already supports the TCP, UDP, and
unix dgram types; the unix stream support is similar to
unix dgram.
It also adds selftests for the unix stream type to the sockmap tests.
Jiang Wang (5):
af_unix: add read_sock for stream socket types
af_unix: add unix_stream_proto for sockmap
selftest/bpf: add tests for sockmap with unix stream type.
selftest/bpf: change udp to inet in some function names
selftest/bpf: add new tests in sockmap for unix stream to tcp.
include/net/af_unix.h | 8 +-
net/core/sock_map.c | 8 +-
net/unix/af_unix.c | 88 +++++++++++++++---
net/unix/unix_bpf.c | 93 ++++++++++++++-----
.../selftests/bpf/prog_tests/sockmap_listen.c | 48 ++++++----
5 files changed, 192 insertions(+), 53 deletions(-)
v1 -> v2 :
- Call unhash in shutdown.
- Clean up unix_create1 a bit.
- Return -ENOTCONN if socket is not connected.
--
2.20.1
From: Tianjia Zhang <tianjia.zhang(a)linux.alibaba.com>
Q1 and Q2 are numbers with a *maximum* length of 384 bytes. If the
calculated length of Q1 or Q2 is less than 384 bytes, things go wrong.
E.g. if Q2 is 383 bytes, then
1. The bytes of q2 are copied to sigstruct->q2 in calc_q1q2().
2. The entire 384-byte sigstruct->q2 is reversed, which results in it
being 256 * Q2, given that the last (zero) byte of sigstruct->q2 ends
up in front of the bytes written by calc_q1q2().
Either a change in the key or in the measurement can trigger the bug.
E.g. an unmeasured heap could cause a devastating change in Q1 or Q2.
Reverse exactly the bytes of Q1 and Q2 in calc_q1q2() before returning
to the caller.
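A minimal userspace sketch (not part of the patch) reproduces the
arithmetic with a 4-byte buffer holding a 3-byte value:

#include <stdint.h>
#include <stdio.h>

static void reverse_bytes(uint8_t *p, int len)
{
        int i, j;

        for (i = 0, j = len - 1; i < j; i++, j--) {
                uint8_t t = p[i];
                p[i] = p[j];
                p[j] = t;
        }
}

/* Interpret a byte buffer as a little-endian integer. */
static uint64_t le_value(const uint8_t *p, int len)
{
        uint64_t v = 0;
        int i;

        for (i = len - 1; i >= 0; i--)
                v = (v << 8) | p[i];
        return v;
}

int main(void)
{
        /* Q2 = 0x010203, written big-endian by BN_bn2bin(): only 3 of
         * the 4 buffer bytes are used. */
        uint8_t buf[4] = { 0x01, 0x02, 0x03, 0x00 };

        reverse_bytes(buf, sizeof(buf));        /* old code: full-size reverse */
        printf("%#llx\n", (unsigned long long)le_value(buf, sizeof(buf)));
        /* Prints 0x1020300, i.e. 256 * Q2: the trailing zero byte landed
         * in the least significant position. Reversing only the 3 bytes
         * actually written, as the fix does, yields 0x10203. */
        return 0;
}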
Fixes: dedde2634570 ("selftests/sgx: Trigger the reclaimer in the selftests")
Link: https://lore.kernel.org/linux-sgx/20210301051836.30738-1-tianjia.zhang@linu…
Signed-off-by: Tianjia Zhang <tianjia.zhang(a)linux.alibaba.com>
Signed-off-by: Jarkko Sakkinen <jarkko(a)kernel.org>
---
The original patch did a bad job of explaining the code change, but it
turned out to make sense. I wrote a new description.
v2:
- Added a fixes tag.
tools/testing/selftests/sgx/sigstruct.c | 41 +++++++++++++------------
1 file changed, 21 insertions(+), 20 deletions(-)
diff --git a/tools/testing/selftests/sgx/sigstruct.c b/tools/testing/selftests/sgx/sigstruct.c
index dee7a3d6c5a5..92bbc5a15c39 100644
--- a/tools/testing/selftests/sgx/sigstruct.c
+++ b/tools/testing/selftests/sgx/sigstruct.c
@@ -55,10 +55,27 @@ static bool alloc_q1q2_ctx(const uint8_t *s, const uint8_t *m,
return true;
}
+static void reverse_bytes(void *data, int length)
+{
+ int i = 0;
+ int j = length - 1;
+ uint8_t temp;
+ uint8_t *ptr = data;
+
+ while (i < j) {
+ temp = ptr[i];
+ ptr[i] = ptr[j];
+ ptr[j] = temp;
+ i++;
+ j--;
+ }
+}
+
static bool calc_q1q2(const uint8_t *s, const uint8_t *m, uint8_t *q1,
uint8_t *q2)
{
struct q1q2_ctx ctx;
+ int len;
if (!alloc_q1q2_ctx(s, m, &ctx)) {
fprintf(stderr, "Not enough memory for Q1Q2 calculation\n");
@@ -89,8 +106,10 @@ static bool calc_q1q2(const uint8_t *s, const uint8_t *m, uint8_t *q1,
goto out;
}
- BN_bn2bin(ctx.q1, q1);
- BN_bn2bin(ctx.q2, q2);
+ len = BN_bn2bin(ctx.q1, q1);
+ reverse_bytes(q1, len);
+ len = BN_bn2bin(ctx.q2, q2);
+ reverse_bytes(q2, len);
free_q1q2_ctx(&ctx);
return true;
@@ -152,22 +171,6 @@ static RSA *gen_sign_key(void)
return key;
}
-static void reverse_bytes(void *data, int length)
-{
- int i = 0;
- int j = length - 1;
- uint8_t temp;
- uint8_t *ptr = data;
-
- while (i < j) {
- temp = ptr[i];
- ptr[i] = ptr[j];
- ptr[j] = temp;
- i++;
- j--;
- }
-}
-
enum mrtags {
MRECREATE = 0x0045544145524345,
MREADD = 0x0000000044444145,
@@ -367,8 +370,6 @@ bool encl_measure(struct encl *encl)
/* BE -> LE */
reverse_bytes(sigstruct->signature, SGX_MODULUS_SIZE);
reverse_bytes(sigstruct->modulus, SGX_MODULUS_SIZE);
- reverse_bytes(sigstruct->q1, SGX_MODULUS_SIZE);
- reverse_bytes(sigstruct->q2, SGX_MODULUS_SIZE);
EVP_MD_CTX_destroy(ctx);
RSA_free(key);
--
2.32.0
Create a heap for the test enclave which has the same size as all
available Enclave Page Cache (EPC) pages in the system. This will
guarantee that all test_encl.elf pages *and* the SGX Enclave Control
Structure (SECS) have been swapped out by the page reclaimer during
load time. Actually, this adds a bit more stress than that, since part
of the EPC gets reserved for the Version Array (VA) pages.
For each test, the page fault handler gets triggered on two occasions:
- When SGX_IOC_ENCLAVE_INIT is performed, SECS gets swapped in by the
page fault handler.
- During the execution, each page that is referenced gets swapped in
by the page fault handler.
Jarkko Sakkinen (3):
x86/sgx: Add /sys/kernel/debug/x86/sgx_total_mem
selftests/sgx: Assign source for each segment
selftests/sgx: Trigger the reclaimer and #PF handler
Tianjia Zhang (1):
selftests/sgx: Fix calculations for sub-maximum field sizes
Documentation/x86/sgx.rst | 6 +++
arch/x86/kernel/cpu/sgx/main.c | 10 ++++-
tools/testing/selftests/sgx/load.c | 38 ++++++++++++++----
tools/testing/selftests/sgx/main.c | 42 +++++++++++++++++++-
tools/testing/selftests/sgx/main.h | 4 +-
tools/testing/selftests/sgx/sigstruct.c | 53 +++++++++++++------------
6 files changed, 117 insertions(+), 36 deletions(-)
--
2.32.0
--raw_output is nice, but it would be nicer if it could show only output
after KUnit tests have started.
So change the flag to allow specifying a string ('kunit').
Make it so `--raw_output` alone defaults to `--raw_output=all` and
keeps the original behavior.
Drop the small kunit_parser.raw_output() function since it feels wrong
to put it in "kunit_parser.py" when the point of it is to not parse
anything.
E.g.
$ ./tools/testing/kunit/kunit.py run --raw_output=kunit
...
[15:24:07] Starting KUnit Kernel ...
TAP version 14
1..1
# Subtest: example
1..3
# example_simple_test: initializing
ok 1 - example_simple_test
# example_skip_test: initializing
# example_skip_test: You should not see a line below.
ok 2 - example_skip_test # SKIP this test should be skipped
# example_mark_skipped_test: initializing
# example_mark_skipped_test: You should see a line below.
# example_mark_skipped_test: You should see this line.
ok 3 - example_mark_skipped_test # SKIP this test should be skipped
ok 1 - example
[15:24:10] Elapsed time: 6.487s total, 0.001s configuring, 3.510s building, 0.000s running
Signed-off-by: Daniel Latypov <dlatypov(a)google.com>
---
Documentation/dev-tools/kunit/kunit-tool.rst | 9 ++++++---
tools/testing/kunit/kunit.py | 20 +++++++++++++++-----
tools/testing/kunit/kunit_parser.py | 4 ----
tools/testing/kunit/kunit_tool_test.py | 9 +++++++++
4 files changed, 30 insertions(+), 12 deletions(-)
diff --git a/Documentation/dev-tools/kunit/kunit-tool.rst b/Documentation/dev-tools/kunit/kunit-tool.rst
index c7ff9afe407a..ae52e0f489f9 100644
--- a/Documentation/dev-tools/kunit/kunit-tool.rst
+++ b/Documentation/dev-tools/kunit/kunit-tool.rst
@@ -114,9 +114,12 @@ results in TAP format, you can pass the ``--raw_output`` argument.
./tools/testing/kunit/kunit.py run --raw_output
-.. note::
- The raw output from test runs may contain other, non-KUnit kernel log
- lines.
+The raw output from test runs may contain other, non-KUnit kernel log
+lines. You can see just KUnit output with ``--raw_output=kunit``:
+
+.. code-block:: bash
+
+ ./tools/testing/kunit/kunit.py run --raw_output=kunit
If you have KUnit results in their raw TAP format, you can parse them and print
the human-readable summary with the ``parse`` command for kunit_tool. This
diff --git a/tools/testing/kunit/kunit.py b/tools/testing/kunit/kunit.py
index 7174377c2172..5a931456e718 100755
--- a/tools/testing/kunit/kunit.py
+++ b/tools/testing/kunit/kunit.py
@@ -16,6 +16,7 @@ assert sys.version_info >= (3, 7), "Python version is too old"
from collections import namedtuple
from enum import Enum, auto
+from typing import Iterable
import kunit_config
import kunit_json
@@ -114,7 +115,16 @@ def parse_tests(request: KunitParseRequest) -> KunitResult:
'Tests not Parsed.')
if request.raw_output:
- kunit_parser.raw_output(request.input_data)
+ output: Iterable[str] = request.input_data
+ if request.raw_output == 'all':
+ pass
+ elif request.raw_output == 'kunit':
+ output = kunit_parser.extract_tap_lines(output)
+ else:
+ print(f'Unknown --raw_output option "{request.raw_output}"', file=sys.stderr)
+ for line in output:
+ print(line.rstrip())
+
else:
test_result = kunit_parser.parse_run_tests(request.input_data)
parse_end = time.time()
@@ -135,7 +145,6 @@ def parse_tests(request: KunitParseRequest) -> KunitResult:
return KunitResult(KunitStatus.SUCCESS, test_result,
parse_end - parse_start)
-
def run_tests(linux: kunit_kernel.LinuxSourceTree,
request: KunitRequest) -> KunitResult:
run_start = time.time()
@@ -181,7 +190,7 @@ def add_common_opts(parser) -> None:
parser.add_argument('--build_dir',
help='As in the make command, it specifies the build '
'directory.',
- type=str, default='.kunit', metavar='build_dir')
+ type=str, default='.kunit', metavar='build_dir')
parser.add_argument('--make_options',
help='X=Y make option, can be repeated.',
action='append')
@@ -246,8 +255,9 @@ def add_exec_opts(parser) -> None:
action='append')
def add_parse_opts(parser) -> None:
- parser.add_argument('--raw_output', help='don\'t format output from kernel',
- action='store_true')
+ parser.add_argument('--raw_output', help='If set don\'t format output from kernel. '
+ 'If set to --raw_output=kunit, filters to just KUnit output.',
+ type=str, nargs='?', const='all', default=None)
parser.add_argument('--json',
nargs='?',
help='Stores test results in a JSON, and either '
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index b88db3f51dc5..84938fefbac0 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -106,10 +106,6 @@ def extract_tap_lines(kernel_output: Iterable[str]) -> LineStream:
yield line_num, line[prefix_len:]
return LineStream(lines=isolate_kunit_output(kernel_output))
-def raw_output(kernel_output) -> None:
- for line in kernel_output:
- print(line.rstrip())
-
DIVIDER = '=' * 60
RESET = '\033[0;0m'
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 628ab00f74bc..619c4554cbff 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -399,6 +399,15 @@ class KUnitMainTest(unittest.TestCase):
self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+ def test_run_raw_output_kunit(self):
+ self.linux_source_mock.run_kernel = mock.Mock(return_value=[])
+ kunit.main(['run', '--raw_output=kunit'], self.linux_source_mock)
+ self.assertEqual(self.linux_source_mock.build_reconfig.call_count, 1)
+ self.assertEqual(self.linux_source_mock.run_kernel.call_count, 1)
+ for call in self.print_mock.call_args_list:
+ self.assertNotEqual(call, mock.call(StrContains('Testing complete.')))
+ self.assertNotEqual(call, mock.call(StrContains(' 0 tests run')))
+
def test_exec_timeout(self):
timeout = 3453
kunit.main(['exec', '--timeout', str(timeout)], self.linux_source_mock)
base-commit: f684616e08e9cd9db3cd53fe2e068dfe02481657
--
2.32.0.554.ge1b32706d8-goog