From: Frank Rowand <frank.rowand(a)sony.com>
Clarify some confusing phrasing.
Signed-off-by: Frank Rowand <frank.rowand(a)sony.com>
---
One item that may result in bikeshedding is that I added the spec
version to the title line.
Documentation/dev-tools/ktap.rst | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/Documentation/dev-tools/ktap.rst b/Documentation/dev-tools/ktap.rst
index 878530cb9c27..3b7a26816930 100644
--- a/Documentation/dev-tools/ktap.rst
+++ b/Documentation/dev-tools/ktap.rst
@@ -1,8 +1,8 @@
.. SPDX-License-Identifier: GPL-2.0
-========================================
-The Kernel Test Anything Protocol (KTAP)
-========================================
+===================================================
+The Kernel Test Anything Protocol (KTAP), version 1
+===================================================
TAP, or the Test Anything Protocol is a format for specifying test results used
by a number of projects. Its website and specification are found at this `link
@@ -186,7 +186,7 @@ starting with another KTAP version line and test plan, and end with the overall
result. If one of the subtests fail, for example, the parent test should also
fail.
-Additionally, all result lines in a subtest should be indented. One level of
+Additionally, all lines in a subtest should be indented. One level of
indentation is two spaces: " ". The indentation should begin at the version
line and should end before the parent test's result line.
@@ -225,8 +225,8 @@ Major differences between TAP and KTAP
--------------------------------------
Note the major differences between the TAP and KTAP specification:
-- yaml and json are not recommended in diagnostic messages
-- TODO directive not recognized
+- yaml and json are not recommended in KTAP diagnostic messages
+- TODO directive not recognized in KTAP
- KTAP allows for an arbitrary number of tests to be nested
The TAP14 specification does permit nested tests, but instead of using another
--
Frank Rowand <frank.rowand(a)sony.com>
Hello Dave Hansen,
The patch 5f23f6d082a9: "x86/pkeys: Add self-tests" from Jul 29,
2016, leads to the following Smatch static checker warning:
tools/testing/selftests/vm/protection_keys.c:647 record_pkey_malloc()
warn: address of 'pkey_malloc_records[i]' is probably non-NULL
tools/testing/selftests/vm/protection_keys.c
638 long nr_pkey_malloc_records;
639 void record_pkey_malloc(void *ptr, long size, int prot)
640 {
641 long i;
642 struct pkey_malloc_record *rec = NULL;
643
644 for (i = 0; i < nr_pkey_malloc_records; i++) {
645 rec = &pkey_malloc_records[i];
646 /* find a free record */
--> 647 if (rec)
648 break;
649 }
650 if (!rec) {
This code is supposed to re-allocate memory. If we run out of records,
then allocate 2x the memory. But that only works for the first
allocation, where "pkey_malloc_records" is NULL.
For all following allocations the loop breaks on the first iteration,
so it just selects &pkey_malloc_records[0] and re-uses that again.
651 /* every record is full */
652 size_t old_nr_records = nr_pkey_malloc_records;
653 size_t new_nr_records = (nr_pkey_malloc_records * 2 + 1);
654 size_t new_size = new_nr_records * sizeof(struct pkey_malloc_record);
655 dprintf2("new_nr_records: %zd\n", new_nr_records);
656 dprintf2("new_size: %zd\n", new_size);
657 pkey_malloc_records = realloc(pkey_malloc_records, new_size);
658 pkey_assert(pkey_malloc_records != NULL);
659 rec = &pkey_malloc_records[nr_pkey_malloc_records];
660 /*
661 * realloc() does not initialize memory, so zero it from
662 * the first new record all the way to the end.
663 */
664 for (i = 0; i < new_nr_records - old_nr_records; i++)
665 memset(rec + i, 0, sizeof(*rec));
666 }
667 dprintf3("filling malloc record[%d/%p]: {%p, %ld}\n",
668 (int)(rec - pkey_malloc_records), rec, ptr, size);
669 rec->ptr = ptr;
670 rec->size = size;
671 rec->prot = prot;
672 pkey_last_malloc_record = rec;
673 nr_pkey_malloc_records++;
674 }
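Presumably the check at line 647 was meant to test whether the record
itself is unused, rather than its address. A minimal sketch of one
possible fix (untested, and assuming a free record can be recognized by
a NULL ->ptr, which holds if freed records are zeroed):

	struct pkey_malloc_record *rec = NULL;

	for (i = 0; i < nr_pkey_malloc_records; i++) {
		/* find a free record: one whose ptr was never set or was cleared */
		if (!pkey_malloc_records[i].ptr) {
			rec = &pkey_malloc_records[i];
			break;
		}
	}
	/*
	 * rec stays NULL when every record is in use, so the existing
	 * "if (!rec)" grow-and-realloc path below still triggers.
	 */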
regards,
dan carpenter
Add some coverage of event generation to mixer-test. Rather than doing a
separate set of writes designed to trigger events we add a step to the
existing write_and_verify() which checks whether the value we read back
from non-volatile controls differs from the value before the write, and
verifies that an event is or isn't generated as appropriate. The "tests" for events then
simply check that no spurious or missing events were detected. This avoids
needing further logic to generate appropriate values for each control type
and maximises coverage.
When checking for events we use a timeout of 0. This relies on the kernel
generating any event prior to returning to userspace when setting a control.
That is currently the case, and it is difficult to see it changing; if it
does, the test will need to be updated. Using a timeout of 0 means that we
don't slow things down unduly when checking for no event or when events
fail to be generated.
We don't check behaviour for volatile controls since we can't tell what
the behaviour is supposed to be for any given control.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Reviewed-by: Shuah Khan <skhan(a)linuxfoundation.org>
Reviewed-by: Jaroslav Kysela <perex(a)perex.cz>
---
v3:
- Add a check for a removal event when monitoring for events, if one is
seen then return an error.
v2:
- Get the numid from the API rather than using the control index.
tools/testing/selftests/alsa/mixer-test.c | 154 +++++++++++++++++++++-
1 file changed, 151 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/alsa/mixer-test.c b/tools/testing/selftests/alsa/mixer-test.c
index 0e88f4f3d802..6edb7dca32af 100644
--- a/tools/testing/selftests/alsa/mixer-test.c
+++ b/tools/testing/selftests/alsa/mixer-test.c
@@ -3,7 +3,7 @@
// kselftest for the ALSA mixer API
//
// Original author: Mark Brown <broonie(a)kernel.org>
-// Copyright (c) 2021 Arm Limited
+// Copyright (c) 2021-2 Arm Limited
// This test will iterate over all cards detected in the system, exercising
// every mixer control it can find. This may conflict with other system
@@ -27,11 +27,12 @@
#include "../kselftest.h"
-#define TESTS_PER_CONTROL 4
+#define TESTS_PER_CONTROL 6
struct card_data {
snd_ctl_t *handle;
int card;
+ struct pollfd pollfd;
int num_ctls;
snd_ctl_elem_list_t *ctls;
struct card_data *next;
@@ -43,6 +44,8 @@ struct ctl_data {
snd_ctl_elem_info_t *info;
snd_ctl_elem_value_t *def_val;
int elem;
+ int event_missing;
+ int event_spurious;
struct card_data *card;
struct ctl_data *next;
};
@@ -149,6 +152,7 @@ void find_controls(void)
if (!ctl_data)
ksft_exit_fail_msg("Out of memory\n");
+ memset(ctl_data, 0, sizeof(*ctl_data));
ctl_data->card = card_data;
ctl_data->elem = ctl;
ctl_data->name = snd_ctl_elem_list_get_name(card_data->ctls,
@@ -184,6 +188,26 @@ void find_controls(void)
ctl_list = ctl_data;
}
+ /* Set up for events */
+ err = snd_ctl_subscribe_events(card_data->handle, true);
+ if (err < 0) {
+ ksft_exit_fail_msg("snd_ctl_subscribe_events() failed for card %d: %d\n",
+ card, err);
+ }
+
+ err = snd_ctl_poll_descriptors_count(card_data->handle);
+ if (err != 1) {
+ ksft_exit_fail_msg("Unexpected desciptor count %d for card %d\n",
+ err, card);
+ }
+
+ err = snd_ctl_poll_descriptors(card_data->handle,
+ &card_data->pollfd, 1);
+ if (err != 1) {
+ ksft_exit_fail_msg("snd_ctl_poll_descriptors() failed for %d\n",
+ card, err);
+ }
+
next_card:
if (snd_card_next(&card) < 0) {
ksft_print_msg("snd_card_next");
@@ -194,6 +218,79 @@ void find_controls(void)
snd_config_delete(config);
}
+/*
+ * Block for up to timeout ms for an event, returns a negative value
+ * on error, 0 for no event and 1 for an event.
+ */
+int wait_for_event(struct ctl_data *ctl, int timeout)
+{
+ unsigned short revents;
+ snd_ctl_event_t *event;
+ int count, err;
+ unsigned int mask = 0;
+ unsigned int ev_id;
+
+ snd_ctl_event_alloca(&event);
+
+ do {
+ err = poll(&(ctl->card->pollfd), 1, timeout);
+ if (err < 0) {
+ ksft_print_msg("poll() failed for %s: %s (%d)\n",
+ ctl->name, strerror(errno), errno);
+ return -1;
+ }
+ /* Timeout */
+ if (err == 0)
+ return 0;
+
+ err = snd_ctl_poll_descriptors_revents(ctl->card->handle,
+ &(ctl->card->pollfd),
+ 1, &revents);
+ if (err < 0) {
+ ksft_print_msg("snd_ctl_poll_desciptors_revents() failed for %s: %d\n",
+ ctl->name, err);
+ return err;
+ }
+ if (revents & POLLERR) {
+ ksft_print_msg("snd_ctl_poll_desciptors_revents() reported POLLERR for %s\n",
+ ctl->name);
+ return -1;
+ }
+ /* No read events */
+ if (!(revents & POLLIN)) {
+ ksft_print_msg("No POLLIN\n");
+ continue;
+ }
+
+ err = snd_ctl_read(ctl->card->handle, event);
+ if (err < 0) {
+ ksft_print_msg("snd_ctl_read() failed for %s: %d\n",
+ ctl->name, err);
+ return err;
+ }
+
+ if (snd_ctl_event_get_type(event) != SND_CTL_EVENT_ELEM)
+ continue;
+
+ /* Check that the event refers to the control under test */
+ mask = snd_ctl_event_elem_get_mask(event);
+ ev_id = snd_ctl_event_elem_get_numid(event);
+ if (ev_id != snd_ctl_elem_info_get_numid(ctl->info)) {
+ ksft_print_msg("Event for unexpected ctl %s\n",
+ snd_ctl_event_elem_get_name(event));
+ continue;
+ }
+
+ if ((mask & SND_CTL_EVENT_MASK_REMOVE) == SND_CTL_EVENT_MASK_REMOVE) {
+ ksft_print_msg("Removal event for %s\n",
+ ctl->name);
+ return -1;
+ }
+ } while ((mask & SND_CTL_EVENT_MASK_VALUE) != SND_CTL_EVENT_MASK_VALUE);
+
+ return 1;
+}
+
bool ctl_value_index_valid(struct ctl_data *ctl, snd_ctl_elem_value_t *val,
int index)
{
@@ -428,7 +525,8 @@ int write_and_verify(struct ctl_data *ctl,
{
int err, i;
bool error_expected, mismatch_shown;
- snd_ctl_elem_value_t *read_val, *w_val;
+ snd_ctl_elem_value_t *initial_val, *read_val, *w_val;
+ snd_ctl_elem_value_alloca(&initial_val);
snd_ctl_elem_value_alloca(&read_val);
snd_ctl_elem_value_alloca(&w_val);
@@ -446,6 +544,18 @@ int write_and_verify(struct ctl_data *ctl,
snd_ctl_elem_value_copy(expected_val, write_val);
}
+ /* Store the value before we write */
+ if (snd_ctl_elem_info_is_readable(ctl->info)) {
+ snd_ctl_elem_value_set_id(initial_val, ctl->id);
+
+ err = snd_ctl_elem_read(ctl->card->handle, initial_val);
+ if (err < 0) {
+ ksft_print_msg("snd_ctl_elem_read() failed: %s\n",
+ snd_strerror(err));
+ return err;
+ }
+ }
+
/*
* Do the write, if we have an expected value ignore the error
* and carry on to validate the expected value.
@@ -470,6 +580,30 @@ int write_and_verify(struct ctl_data *ctl,
return err;
}
+ /*
+ * Check for an event if the value changed, or confirm that
+ * there was none if it didn't. We rely on the kernel
+ * generating the notification before it returns from the
+ * write; this is currently true, but should that ever change
+ * this will most likely break and need updating.
+ */
+ if (!snd_ctl_elem_info_is_volatile(ctl->info)) {
+ err = wait_for_event(ctl, 0);
+ if (snd_ctl_elem_value_compare(initial_val, read_val)) {
+ if (err < 1) {
+ ksft_print_msg("No event generated for %s\n",
+ ctl->name);
+ ctl->event_missing++;
+ }
+ } else {
+ if (err != 0) {
+ ksft_print_msg("Spurious event generated for %s\n",
+ ctl->name);
+ ctl->event_spurious++;
+ }
+ }
+ }
+
/*
* Use the library to compare values, if there's a mismatch
* carry on and try to provide a more useful diagnostic than
@@ -898,6 +1032,18 @@ void test_ctl_write_invalid(struct ctl_data *ctl)
ctl->card->card, ctl->elem);
}
+void test_ctl_event_missing(struct ctl_data *ctl)
+{
+ ksft_test_result(!ctl->event_missing, "event_missing.%d.%d\n",
+ ctl->card->card, ctl->elem);
+}
+
+void test_ctl_event_spurious(struct ctl_data *ctl)
+{
+ ksft_test_result(!ctl->event_spurious, "event_spurious.%d.%d\n",
+ ctl->card->card, ctl->elem);
+}
+
int main(void)
{
struct ctl_data *ctl;
@@ -917,6 +1063,8 @@ int main(void)
test_ctl_write_default(ctl);
test_ctl_write_valid(ctl);
test_ctl_write_invalid(ctl);
+ test_ctl_event_missing(ctl);
+ test_ctl_event_spurious(ctl);
}
ksft_exit_pass();
--
2.30.2
Although both iproute2 and the kernel accept 1 and 2 as tos values for
new routes, those are invalid. These values only set ECN bits, which
are ignored during IPv4 fib lookups. Therefore, no packet can actually
match such routes. This selftest only succeeds because it
doesn't verify that the new routes actually work in practice (it
just checks whether the routes are offloaded or not).
It makes more sense to use tos values that don't conflict with ECN.
This way, the selftest won't be affected if we later decide to warn or
even reject invalid tos configurations for new routes.
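For reference, a minimal standalone illustration of why tos 1 and 2 can
never match (the mask values are copied from the kernel's
include/uapi/linux/ip.h and include/net/route.h at the time of writing,
not from this patch):

#include <stdio.h>

/* IPv4 fib lookups clear the two low (ECN) bits of the tos */
#define IPTOS_TOS_MASK	0x1e
#define IPTOS_RT_MASK	(IPTOS_TOS_MASK & ~3)

int main(void)
{
	int tos;

	/* tos 1 and 2 mask to 0 (pure ECN); 4 and 8 survive the mask */
	for (tos = 1; tos <= 8; tos <<= 1)
		printf("tos %#x -> fib lookup key %#x\n",
		       tos, tos & IPTOS_RT_MASK);
	return 0;
}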
Signed-off-by: Guillaume Nault <gnault(a)redhat.com>
---
.../selftests/net/forwarding/fib_offload_lib.sh | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/net/forwarding/fib_offload_lib.sh b/tools/testing/selftests/net/forwarding/fib_offload_lib.sh
index e134a5f529c9..1b3b46292179 100644
--- a/tools/testing/selftests/net/forwarding/fib_offload_lib.sh
+++ b/tools/testing/selftests/net/forwarding/fib_offload_lib.sh
@@ -99,15 +99,15 @@ fib_ipv4_tos_test()
fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 0 metric 1024" false
check_err $? "Route not in hardware when should"
- ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 2 metric 1024
- fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 2 metric 1024" false
+ ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 8 metric 1024
+ fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 8 metric 1024" false
check_err $? "Highest TOS route not in hardware when should"
fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 0 metric 1024" true
check_err $? "Lowest TOS route still in hardware when should not"
- ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 1 metric 1024
- fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 1 metric 1024" true
+ ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 4 metric 1024
+ fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 4 metric 1024" true
check_err $? "Middle TOS route in hardware when should not"
log_test "IPv4 routes with TOS"
@@ -277,11 +277,11 @@ fib_ipv4_replay_tos_test()
ip -n $ns link set dev dummy1 up
ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 0
- ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 1
+ ip -n $ns route add 192.0.2.0/24 dev dummy1 tos 4
devlink -N $ns dev reload $devlink_dev
- fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 1" false
+ fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 4" false
check_err $? "Highest TOS route not in hardware when should"
fib4_trap_check $ns "192.0.2.0/24 dev dummy1 tos 0" true
--
2.21.3
Using tos 0x1 with 'ip route get <IPv4 address> ...' doesn't test much
of the tos option handling: 0x1 just sets an ECN bit, which is cleared
by inet_rtm_getroute() before doing the fib lookup. Let's use 0x10
instead, which is actually taken into account in the route lookup (and
is less surprising for the reader).
For consistency, use 0x10 for the IPv6 route lookup too (IPv6 currently
doesn't clear ECN bits, but might do so in the future).
Signed-off-by: Guillaume Nault <gnault(a)redhat.com>
---
No Fixes tag, since this is for net-next and the original test wasn't
actually broken in the first place.
tools/testing/selftests/net/rtnetlink.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
index c9ce3dfa42ee..0900c5438fbb 100755
--- a/tools/testing/selftests/net/rtnetlink.sh
+++ b/tools/testing/selftests/net/rtnetlink.sh
@@ -216,9 +216,9 @@ kci_test_route_get()
check_err $?
ip route get fe80::1 dev "$devdummy" > /dev/null
check_err $?
- ip route get 127.0.0.1 from 127.0.0.1 oif lo tos 0x1 mark 0x1 > /dev/null
+ ip route get 127.0.0.1 from 127.0.0.1 oif lo tos 0x10 mark 0x1 > /dev/null
check_err $?
- ip route get ::1 from ::1 iif lo oif lo tos 0x1 mark 0x1 > /dev/null
+ ip route get ::1 from ::1 iif lo oif lo tos 0x10 mark 0x1 > /dev/null
check_err $?
ip addr add dev "$devdummy" 10.23.7.11/24
check_err $?
--
2.21.3
We are looking to further standardise the output format used by kernel
test frameworks like kselftest and KUnit. Thus far we have used the
TAP (Test Anything Protocol) specification, but it has been extended
in many different ways, so we would like to agree on a common "Kernel
TAP" (KTAP) format to resolve these differences. Thus, below is a
draft of a specification of KTAP. Note that this specification is
largely based on the current format of test results for KUnit tests.
Additionally, this specification was heavily inspired by the KTAP
specification draft by Tim Bird
(https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD…).
However, there are some notable differences from his specification. One
such difference is that the format of nested tests is more fully
specified below, though in a way which may not be compatible with many
kselftest nested tests.
=====================
Specification of KTAP
=====================
TAP, or the Test Anything Protocol, is a format for specifying test
results used by a number of projects. Its website and specification
are found at: https://testanything.org/. The Linux Kernel uses TAP
output for test results. However, KUnit (and other Kernel testing
frameworks such as kselftest) have some special needs for test results
which don't gel perfectly with the original TAP specification. Thus, a
"Kernel TAP" (KTAP) format is specified to extend and alter TAP to
support these use-cases.
KTAP output consists of 5 major elements (all line-based):
- The version line
- Plan lines
- Test case result lines
- Diagnostic lines
- A bail out line
An important component in this specification of KTAP is the
specification of the format of nested tests. This can be found in the
section on nested tests below.
The version line
----------------
The first line of KTAP output must be the version line. As this
specification documents the first version of KTAP, the recommended
version line is "KTAP version 1". However, since all kernel testing
frameworks use TAP version lines, "TAP version 14" and "TAP version
13" are all acceptable version lines. Version lines with other
versions of TAP or KTAP will not cause the parsing of the test results
to fail but it will produce an error.
Plan lines
----------
Plan lines must follow the format of "1..N" where N is the number of
subtests. The second line of KTAP output must be a plan line, which
indicates the number of tests at the highest level, that is, tests
which have no parent. Likewise, when a test has subtests, the line
immediately after its subtest header must be a plan line which
indicates the number of subtests within that test.
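For example, a version line followed by a plan line announcing two
top-level tests (hypothetical names):

KTAP version 1
1..2
ok 1 - test_1
ok 2 - test_2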
Test case result lines
----------------------
Test case result lines must have the format:
<result> <number> [-] [<description>] [<directive>] [<diagnostic data>]
The result can be either "ok", which indicates the test case passed,
or "not ok", which indicates that the test case failed.
The number represents the number of the test case or suite being
performed. The first test case or suite must have the number 1 and the
number must increase by 1 for each additional test case or result at
the same level and within the same testing suite.
The "-" character is optional.
The description is a description of the test, generally the name of
the test, and can be any string of words (it cannot include "#"). The
description is optional.
The directive is used to indicate if a test was skipped. The format
for the directive is: "# SKIP [<skip_description>]". The
skip_description is optional and can be any string of words to
describe why the test was skipped. The result of the test case result
line can be either "ok" or "not ok" if the skip directive is used.
Finally, note that TAP 14 specification includes TODO directives but
these are not supported for KTAP.
Examples of test case result lines:
Test passed:
ok 1 - test_case_name
Test was skipped:
not ok 1 - test_case_name # SKIP test_case_name should be skipped
Test failed:
not ok 1 - test_case_name
Diagnostic lines
----------------
Diagnostic lines are used for description of testing operations.
Diagnostic lines are generally formatted as "#
<diagnostic_description>", where the description can be any string.
However, in practice, diagnostic lines are all lines that don't match
any other KTAP line format. Diagnostic lines can appear anywhere in the
test output after the first two lines. There are a few
special diagnostic lines. Diagnostic lines of the format "# Subtest:
<test_name>" indicate the start of a test with subtests. Also,
diagnostic lines of the format "# <test_name>: <description>" refer to
a specific test and tend to occur before the test result line of that
test but are optional.
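For example, some possible diagnostic lines (hypothetical test names):

# machine reset before running tests
# Subtest: example_suite
# example_test: could not allocate buffer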
Bail out line
-------------
A bail out line can occur anywhere in the KTAP output and will
indicate that a test has crashed. The format of a bail out line is
"Bail out! [<description>]", where the description can give
information on why the bail out occurred and can be any string.
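An example bail out line (hypothetical description):

Bail out! Kernel panic while running test_2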
Nested tests
------------
The new specification for KTAP will support an arbitrary number of
nested subtests. Thus, tests can now have subtests and those subtests
can have subtests. This can be useful to further categorize tests and
organize test results.
The new required format for a test with subtests consists of: a
subtest header line, a plan line, all subtests, and a final test
result line.
The first line of the test must be the subtest header line with the
format: "# Subtest: <test_name>".
The second line of the test must be the plan line, which is formatted
as "1..N", where N is the number of subtests.
The plan line is then followed by all lines pertaining to the subtests.
Finally, the last line of the test is a final test result line with
the format: "(ok|not ok) <number> [-] [<description>] [<directive>]
[<diagnostic data>]", which follows the same format as the general
test result lines described in this section. The result line should
indicate the result of the subtests. Thus, if one of the subtests
fail, the test should fail. The description in the final test result
line should match the name of the test in the subtest header.
An example format:
KTAP version 1
1..1
# Subtest: test_suite
1..2
ok 1 - test_1
ok 2 - test_2
ok 1 - test_suite
An example format with multiple levels of nested testing:
KTAP version 1
1..1
# Subtest: test_suite
1..2
# Subtest: sub_test_suite
1..2
ok 1 - test_1
ok 2 test_2
ok 1 - sub_test_suite
ok 2 - test
ok 1 - test_suite
If the plan line is missing, the end of the test is denoted by the
final result line, whose description matches the name of the test given
in the subtest header. Note that, as a consequence, if the plan line is
missing and one of the subtests has the same name as the test suite,
this will cause errors.
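For example, a test with a missing plan line, whose end is marked only
by the final result line's description matching the subtest header
(hypothetical names):

KTAP version 1
1..1
# Subtest: test_suite
ok 1 - test_1
ok 2 - test_2
ok 1 - test_suite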
Lastly, indentation is also recommended for improved readability.
Major differences between TAP 14 and KTAP specification
-------------------------------------------------------
Note the major differences between the TAP 14 and KTAP specifications:
- yaml and json are not allowed in diagnostic messages
- TODO directive not allowed
- KTAP allows for an arbitrary number of tests to be nested with
specified nested test format
Example of KTAP
---------------
KTAP version 1
1..1
# Subtest: test_suite
1..1
# Subtest: sub_test_suite
1..2
ok 1 - test_1
ok 2 test_2
ok 1 - sub_test_suite
ok 1 - test_suite
=========================================
Note on incompatibilities with kselftests
=========================================
To my knowledge, the above specification generally accepts the TAP
format of many non-nested kselftest results.
An example of a common kselftests TAP format for non-nested test
results that are accepted by the above specification:
TAP version 13
1..2
# selftests: vDSO: vdso_test_gettimeofday
# The time is 1628024856.096879
ok 1 selftests: vDSO: vdso_test_gettimeofday
# selftests: vDSO: vdso_test_getcpu
# Could not find __vdso_getcpu
ok 2 selftests: vDSO: vdso_test_getcpu # SKIP
However, one major difference noted with kselftests is the use of
directives beyond the "# SKIP" directive. kselftest also supports XPASS
and XFAIL directives. Some additional examples found in kselftests:
not ok 5 selftests: netfilter: nft_concat_range.sh # TIMEOUT 45 seconds
not ok 45 selftests: kvm: kvm_binary_stats_test # exit=127
Should the specification be expanded to include these directives?
However, the general format for kselftests with nested test results
seems to differ from the above specification. It seems that a general
format for nested tests is as follows:
TAP version 13
1..2
# selftests: membarrier: membarrier_test_single_thread
# TAP version 13
# 1..2
# ok 1 sys_membarrier available
# ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
ok 1 selftests: membarrier: membarrier_test_single_thread
# selftests: membarrier: membarrier_test_multi_thread
# TAP version 13
# 1..2
# ok 1 sys_membarrier available
# ok 2 sys membarrier invalid command test: command = -1, flags = 0, errno = 22. Failed as expected
ok 2 selftests: membarrier: membarrier_test_multi_thread
The major differences here, which do not match the above specification,
are the use of "# " as indentation and the use of a TAP version line,
rather than the subtest header line described above, to denote a new
test with subtests. If these are widely utilized formats in kselftests,
should we include both versions in the specification or should we
attempt to agree on a single format for nested tests? I personally
believe we should try to agree on a single format for nested tests.
This would allow for a cleaner specification of KTAP and would reduce
possible confusion.
====
So what do people think about the above specification?
How should we handle the differences with kselftests?
If this specification is accepted, where should the specification be documented?
Hi Linus,
Please pull the following Kselftest fixes update for Linux 5.17-rc3
This Kselftest fixes update for Linux 5.17-rc3 consists of important
fixes to several tests and documentation clarification on running
mainline kselftest on stable releases. A few notable fixes:
- fix kselftest run hangs caused by child processes that haven't been
terminated; the fix signals all child processes
- fix false pass/fail results from vdso_test_abi, openat2, mincore
- build failures when using -j (multiple jobs) option
- exec test build failure due to incorrect build rule for a run-time
created "pipe"
- zram test fixes related to interaction with zram-generator, to make
sure the zram test coordinates device removal with zram-generator
- zram test compression ratio calculation fix and skipping
max_comp_streams.
- increasing rtc test timeout
- cpufreq test to write test results to stdout, which is necessary on
automated test systems
diff is attached.
thanks,
-- Shuah
----------------------------------------------------------------
The following changes since commit e783362eb54cd99b2cac8b3a9aeac942e6f6ac07:
Linux 5.17-rc1 (2022-01-23 10:12:53 +0200)
are available in the Git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest tags/linux-kselftest-fixes-5.17-rc3
for you to fetch changes up to ec049891b2dc16591813eacaddc476b3d27c8c14:
kselftest: Fix vdso_test_abi return status (2022-01-31 10:35:14 -0700)
----------------------------------------------------------------
linux-kselftest-fixes-5.17-rc3
This Kselftest fixes update for Linux 5.17-rc3 consists of important
fixes to several tests and documentation clarification on running
mainline kselftest on stable releases. A few notable fixes:
- fix kselftest run hangs caused by child processes that haven't been
terminated; the fix signals all child processes
- fix false pass/fail results from vdso_test_abi, openat2, mincore
- build failures when using -j (multiple jobs) option
- exec test build failure due to incorrect build rule for a run-time
created "pipe"
- zram test fixes related to interaction with zram-generator, to make
sure the zram test coordinates device removal with zram-generator
- zram test compression ratio calculation fix and skipping
max_comp_streams.
- increasing rtc test timeout
- cpufreq test to write test results to stdout, which is necessary on
automated test systems
----------------------------------------------------------------
Cristian Marussi (4):
selftests: openat2: Print also errno in failure messages
selftests: openat2: Add missing dependency in Makefile
selftests: openat2: Skip testcases that fail with EOPNOTSUPP
selftests: skip mincore.check_file_mmap when fs lacks needed support
Li Zhijian (1):
kselftest: signal all child processes
Muhammad Usama Anjum (2):
selftests/exec: Remove pipe from TEST_GEN_FILES
selftests: futex: Use variable MAKE instead of make
Nícolas F. R. A. Prado (2):
selftests: rtc: Increase test timeout so that all tests run
selftests: cpufreq: Write test output to stdout as well
Shuah Khan (1):
docs/kselftest: clarify running mainline tests on stables
Vincenzo Frascino (1):
kselftest: Fix vdso_test_abi return status
Yang Xu (3):
selftests/zram: Skip max_comp_streams interface on newer kernel
selftests/zram01.sh: Fix compression ratio calculation
selftests/zram: Adapt the situation that /dev/zram0 is being used
Documentation/dev-tools/kselftest.rst | 8 ++
tools/testing/selftests/cpufreq/main.sh | 2 +-
tools/testing/selftests/exec/Makefile | 2 +-
tools/testing/selftests/futex/Makefile | 4 +-
tools/testing/selftests/kselftest_harness.h | 4 +-
tools/testing/selftests/mincore/mincore_selftest.c | 20 ++-
tools/testing/selftests/openat2/Makefile | 2 +-
tools/testing/selftests/openat2/helpers.h | 12 +-
tools/testing/selftests/openat2/openat2_test.c | 12 +-
tools/testing/selftests/rtc/settings | 2 +-
tools/testing/selftests/vDSO/vdso_test_abi.c | 135 ++++++++++-----------
tools/testing/selftests/zram/zram.sh | 15 +--
tools/testing/selftests/zram/zram01.sh | 33 ++---
tools/testing/selftests/zram/zram02.sh | 1 -
tools/testing/selftests/zram/zram_lib.sh | 134 +++++++++++++-------
15 files changed, 209 insertions(+), 177 deletions(-)
----------------------------------------------------------------
Hi Shuah,
I've made this PR to start monitoring the "fixes" branch from the
kselftest tree on kernelci.org:
https://github.com/kernelci/kernelci-core/pull/998
While kselftest changes eventually land in linux-next, monitoring
your tree directly means we can test it earlier and potentially
enable more build variants or experimental tests. Since
kernelci.org also builds and runs some kselftests, we're regularly
finding issues, and people are sending fixes for them. See this
recent story for example:
https://twitter.com/kernelci/status/1488831497259921409
Keeping an eye on kselftest patches with kernelci.org means we
can verify that fixes do what they're supposed to do with a much
larger test coverage than what individual developers can do.
In the past we've been applying kselftest fixes to a branch managed
by kernelci.org to verify them, but having the actual kselftest
tree as part of the workflow would seem much better.
There are several branches in your tree; "fixes" seemed like the
most useful one to pick. I see there is also a "kernelci" branch,
but it hasn't been updated for a while. Reviving it could give you
the possibility to test patches through kernelci.org before
applying them to other branches that get pulled into linux-next
and mainline.
Many things could potentially be done with arbitrary builds and
tests etc. These are some initial suggestions. How does this
sound?
Best wishes,
Guillaume
Allow the ageing timeout that is set on bridges to be customized from
forwarding.config. This allows the tests to be run on hardware which
does not support a 10s timeout (e.g. mv88e6xxx).
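For example, a system whose hardware minimum is 30 seconds could
override the default in forwarding.config (value in centiseconds, as
used by the bridge's ageing_time; 3000 here is just an illustration):

LOW_AGEING_TIME=3000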
Signed-off-by: Tobias Waldekranz <tobias(a)waldekranz.com>
Reviewed-by: Petr Machata <petrm(a)nvidia.com>
---
tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh | 5 +++--
.../testing/selftests/net/forwarding/bridge_vlan_unaware.sh | 5 +++--
.../selftests/net/forwarding/forwarding.config.sample | 2 ++
tools/testing/selftests/net/forwarding/lib.sh | 1 +
4 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh b/tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh
index b90dff8d3a94..64bd00fe9a4f 100755
--- a/tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh
+++ b/tools/testing/selftests/net/forwarding/bridge_vlan_aware.sh
@@ -28,8 +28,9 @@ h2_destroy()
switch_create()
{
- # 10 Seconds ageing time.
- ip link add dev br0 type bridge vlan_filtering 1 ageing_time 1000 \
+ ip link add dev br0 type bridge \
+ vlan_filtering 1 \
+ ageing_time $LOW_AGEING_TIME \
mcast_snooping 0
ip link set dev $swp1 master br0
diff --git a/tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh b/tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh
index c15c6c85c984..1c8a26046589 100755
--- a/tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh
+++ b/tools/testing/selftests/net/forwarding/bridge_vlan_unaware.sh
@@ -27,8 +27,9 @@ h2_destroy()
switch_create()
{
- # 10 Seconds ageing time.
- ip link add dev br0 type bridge ageing_time 1000 mcast_snooping 0
+ ip link add dev br0 type bridge \
+ ageing_time $LOW_AGEING_TIME \
+ mcast_snooping 0
ip link set dev $swp1 master br0
ip link set dev $swp2 master br0
diff --git a/tools/testing/selftests/net/forwarding/forwarding.config.sample b/tools/testing/selftests/net/forwarding/forwarding.config.sample
index b0980a2efa31..4a546509de90 100644
--- a/tools/testing/selftests/net/forwarding/forwarding.config.sample
+++ b/tools/testing/selftests/net/forwarding/forwarding.config.sample
@@ -41,6 +41,8 @@ NETIF_CREATE=yes
# Timeout (in seconds) before ping exits regardless of how many packets have
# been sent or received
PING_TIMEOUT=5
+# Minimum ageing_time (in centiseconds) supported by hardware
+LOW_AGEING_TIME=1000
# Flag for tc match, supposed to be skip_sw/skip_hw which means do not process
# filter by software/hardware
TC_FLAG=skip_hw
diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
index 7da783d6f453..e7e434a4758b 100644
--- a/tools/testing/selftests/net/forwarding/lib.sh
+++ b/tools/testing/selftests/net/forwarding/lib.sh
@@ -24,6 +24,7 @@ PING_COUNT=${PING_COUNT:=10}
PING_TIMEOUT=${PING_TIMEOUT:=5}
WAIT_TIMEOUT=${WAIT_TIMEOUT:=20}
INTERFACE_TIMEOUT=${INTERFACE_TIMEOUT:=600}
+LOW_AGEING_TIME=${LOW_AGEING_TIME:=1000}
REQUIRE_JQ=${REQUIRE_JQ:=yes}
REQUIRE_MZ=${REQUIRE_MZ:=yes}
--
2.25.1