Convert test lib/test_printf.c to KUnit. More information about KUnit can be found at: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html.

KUnit provides a common framework for unit tests in the kernel. KUnit and kselftest are standardizing around KTAP; converting this test to KUnit makes it output KTAP, which we are trying to make the standard test result format for the kernel. More about the KTAP format can be found at: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD8....

I ran both the original and converted tests as-is to produce the output for success of the test in the two cases. I also ran these tests with a small modification to show the difference in the output for failure of the test in both cases. The modification I made is:

- test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
+ test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
Original test success:

[    0.540860] test_printf: loaded.
[    0.540863] test_printf: random seed = 0x5c46c33837bc0619
[    0.541022] test_printf: all 388 tests passed
Original test failure:

[    0.537980] test_printf: loaded.
[    0.537983] test_printf: random seed = 0x1bc1efd881954afb
[    0.538029] test_printf: vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
[    0.538030] test_printf: kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
[    0.538124] test_printf: failed 2 out of 388 tests
[    0.538125] test_printf: random seed used was 0x1bc1efd881954afb
Converted test success:

# Subtest: printf
1..25
ok 1 - test_basic
ok 2 - test_number
ok 3 - test_string
ok 4 - plain
ok 5 - null_pointer
ok 6 - error_pointer
ok 7 - invalid_pointer
ok 8 - symbol_ptr
ok 9 - kernel_ptr
ok 10 - struct_resource
ok 11 - addr
ok 12 - escaped_str
ok 13 - hex_string
ok 14 - mac
ok 15 - ip
ok 16 - uuid
ok 17 - dentry
ok 18 - struct_va_format
ok 19 - time_and_date
ok 20 - struct_clk
ok 21 - bitmap
ok 22 - netdev_features
ok 23 - flags
ok 24 - errptr
ok 25 - fwnode_pointer
ok 1 - printf
Converted test failure:

# Subtest: printf
1..25
ok 1 - test_basic
ok 2 - test_number
ok 3 - test_string
ok 4 - plain
ok 5 - null_pointer
ok 6 - error_pointer
ok 7 - invalid_pointer
ok 8 - symbol_ptr
ok 9 - kernel_ptr
ok 10 - struct_resource
ok 11 - addr
ok 12 - escaped_str
ok 13 - hex_string
ok 14 - mac
# ip: EXPECTATION FAILED at lib/printf_kunit.c:82
vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
# ip: EXPECTATION FAILED at lib/printf_kunit.c:124
kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
not ok 15 - ip
ok 16 - uuid
ok 17 - dentry
ok 18 - struct_va_format
ok 19 - time_and_date
ok 20 - struct_clk
ok 21 - bitmap
ok 22 - netdev_features
ok 23 - flags
ok 24 - errptr
ok 25 - fwnode_pointer
not ok 1 - printf
This patch is based on top of Andy's series that renames KUnit-based tests in lib/ and Rasmus' series on deterministic random testing.
Signed-off-by: Arpitha Raghunandan <98.arpi@gmail.com>
---
Changes v2->v3:
- Split tests into multiple KUNIT_CASE()
- kunittest variable made static
- Allocation and freeing done through init and exit callbacks in KUnit
Changes v1->v2:
- Retain the control flow (early returns) in do_test()
- Display less irrelevant information on test failure
- A more detailed commit message
 lib/Kconfig.debug                     |   7 +-
 lib/Makefile                          |   2 +-
 lib/{test_printf.c => printf_kunit.c} | 266 ++++++++++++++------------
 3 files changed, 150 insertions(+), 125 deletions(-)
 rename lib/{test_printf.c => printf_kunit.c} (78%)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index c789b39ed527..42de097d4c5f 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2051,9 +2051,6 @@ config TEST_STRSCPY
 config TEST_KSTRTOX
 	tristate "Test kstrto*() family of functions at runtime"
 
-config TEST_PRINTF
-	tristate "Test printf() family of functions at runtime"
-
 config TEST_BITMAP
 	tristate "Test bitmap_*() family of functions at runtime"
 	help
@@ -2280,6 +2277,10 @@ config BITS_TEST
 	  If unsure, say N.
 
+config PRINTF_KUNIT_TEST
+	tristate "KUnit test for printf() family of functions at runtime"
+	depends on KUNIT
+
 config TEST_UDELAY
 	tristate "udelay test driver"
 	help
diff --git a/lib/Makefile b/lib/Makefile
index dc76e7d8a453..1351d242447c 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -84,7 +84,6 @@ obj-$(CONFIG_TEST_SORT) += test_sort.o
 obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o
 obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_keys.o
 obj-$(CONFIG_TEST_STATIC_KEYS) += test_static_key_base.o
-obj-$(CONFIG_TEST_PRINTF) += test_printf.o
 obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o
 obj-$(CONFIG_TEST_STRSCPY) += test_strscpy.o
 obj-$(CONFIG_TEST_UUID) += test_uuid.o
@@ -352,3 +351,4 @@ obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o
 obj-$(CONFIG_BITS_TEST) += bits_kunit.o
 obj-$(CONFIG_LINEAR_RANGES_TEST) += linear_ranges_kunit.o
 obj-$(CONFIG_LIST_KUNIT_TEST) += list_kunit.o
+obj-$(CONFIG_PRINTF_KUNIT_TEST) += printf_kunit.o
diff --git a/lib/test_printf.c b/lib/printf_kunit.c
similarity index 78%
rename from lib/test_printf.c
rename to lib/printf_kunit.c
index bbea8b807d1e..46b4a11f4b47 100644
--- a/lib/test_printf.c
+++ b/lib/printf_kunit.c
@@ -5,6 +5,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <kunit/test.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -33,60 +34,55 @@
 static char *test_buffer __initdata;
 static char *alloced_buffer __initdata;
+static struct kunit *kunittest;
-static int __printf(4, 0) __init
+static void __printf(4, 0) __init
 do_test(int bufsize, const char *expect, int elen,
 	const char *fmt, va_list ap)
 {
 	va_list aq;
 	int ret, written;
-	total_tests++;
-
 	memset(alloced_buffer, FILL_CHAR, BUF_SIZE + 2*PAD_SIZE);
 	va_copy(aq, ap);
 	ret = vsnprintf(test_buffer, bufsize, fmt, aq);
 	va_end(aq);
 	if (ret != elen) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) returned %d, expected %d\n",
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) returned %d, expected %d\n",
 			bufsize, fmt, ret, elen);
-		return 1;
+		return;
 	}
 	if (memchr_inv(alloced_buffer, FILL_CHAR, PAD_SIZE)) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote before buffer\n", bufsize, fmt);
-		return 1;
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote before buffer\n", bufsize, fmt);
+		return;
 	}
 	if (!bufsize) {
-		if (memchr_inv(test_buffer, FILL_CHAR, BUF_SIZE + PAD_SIZE)) {
-			pr_warn("vsnprintf(buf, 0, \"%s\", ...) wrote to buffer\n",
-				fmt);
-			return 1;
-		}
-		return 0;
+		if (memchr_inv(test_buffer, FILL_CHAR, BUF_SIZE + PAD_SIZE))
+			KUNIT_FAIL(kunittest, "vsnprintf(buf, 0, \"%s\", ...) wrote to buffer\n", fmt);
+		return;
 	}
 	written = min(bufsize-1, elen);
 	if (test_buffer[written]) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) did not nul-terminate buffer\n",
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) did not nul-terminate buffer\n",
 			bufsize, fmt);
-		return 1;
+		return;
 	}
 	if (memchr_inv(test_buffer + written + 1, FILL_CHAR, BUF_SIZE + PAD_SIZE - (written + 1))) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote beyond the nul-terminator\n",
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote beyond the nul-terminator\n",
 			bufsize, fmt);
-		return 1;
+		return;
 	}
 	if (memcmp(test_buffer, expect, written)) {
-		pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote '%s', expected '%.*s'\n",
+		KUNIT_FAIL(kunittest, "vsnprintf(buf, %d, \"%s\", ...) wrote '%s', expected '%.*s'\n",
 			bufsize, fmt, test_buffer, written, expect);
-		return 1;
+		return;
 	}
-	return 0;
 }
 static void __printf(3, 4) __init
@@ -97,9 +93,8 @@ __test(const char *expect, int elen, const char *fmt, ...)
 	char *p;
 	if (elen >= BUF_SIZE) {
-		pr_err("error in test suite: expected output length %d too long. Format was '%s'.\n",
+		KUNIT_FAIL(kunittest, "error in test suite: expected output length %d too long. Format was '%s'.\n",
 			elen, fmt);
-		failed_tests++;
 		return;
 	}
@@ -111,7 +106,7 @@ __test(const char *expect, int elen, const char *fmt, ...)
 	 * enough and 0), and then we also test that kvasprintf would
 	 * be able to print it as expected.
 	 */
-	failed_tests += do_test(BUF_SIZE, expect, elen, fmt, ap);
+	do_test(BUF_SIZE, expect, elen, fmt, ap);
 	rand = prandom_u32_range_state(&rnd_state, 1, elen + 1);
 	/*
 	 * Except for elen == 0, we have 1 <= rand <= elen < BUF_SIZE,
@@ -120,16 +115,14 @@ __test(const char *expect, int elen, const char *fmt, ...)
 	 * larger than it really is. For the boring case of elen == 0,
 	 * rand is 1, which is of course also <= BUF_SIZE.
 	 */
-	failed_tests += do_test(rand, expect, elen, fmt, ap);
-	failed_tests += do_test(0, expect, elen, fmt, ap);
+	do_test(rand, expect, elen, fmt, ap);
+	do_test(0, expect, elen, fmt, ap);
 	p = kvasprintf(GFP_KERNEL, fmt, ap);
 	if (p) {
-		total_tests++;
 		if (memcmp(p, expect, elen+1)) {
-			pr_warn("kvasprintf(..., \"%s\", ...) returned '%s', expected '%s'\n",
+			KUNIT_FAIL(kunittest, "kvasprintf(..., \"%s\", ...) returned '%s', expected '%s'\n",
 				fmt, p, expect);
-			failed_tests++;
 		}
 		kfree(p);
 	}
@@ -139,9 +132,10 @@ __test(const char *expect, int elen, const char *fmt, ...)
 #define test(expect, fmt, ...) \
 	__test(expect, strlen(expect), fmt, ##__VA_ARGS__)
-static void __init
-test_basic(void)
+static void
+test_basic(struct kunit *ktest)
 {
+	kunittest = ktest;
 	/* Work around annoying "warning: zero-length gnu_printf format string". */
 	char nul = '\0';
@@ -151,9 +145,10 @@ test_basic(void)
 	__test("xxx\0yyy", 7, "xxx%cyyy", '\0');
 }
-static void __init
-test_number(void)
+static void
+test_number(struct kunit *ktest)
 {
+	kunittest = ktest;
 	test("0x1234abcd  ", "%#-12x", 0x1234abcd);
 	test("  0x1234abcd", "%#12x", 0x1234abcd);
 	test("0|001| 12|+123| 1234|-123|-1234", "%d|%03d|%3d|%+d|% d|%+d|% d", 0, 1, 12, 123, 1234, -123, -1234);
@@ -183,9 +178,10 @@ test_number(void)
 #endif
 }
-static void __init
-test_string(void)
+static void
+test_string(struct kunit *ktest)
 {
+	kunittest = ktest;
 	test("", "%s%.0s", "", "123");
 	test("ABCD|abc|123", "%s|%.3s|%.*s", "ABCD", "abcdef", 3, "123456");
 	test("1  |  2|3  |  4|5  ", "%-3s|%3s|%-*s|%*s|%*s", "1", "2", 3, "3", 3, "4", -3, "5");
@@ -301,23 +297,21 @@ plain_hash(void)
  * We can't use test() to test %p because we don't know what output to expect
  * after an address is hashed.
  */
-static void __init
-plain(void)
+static void
+plain(struct kunit *ktest)
 {
+	kunittest = ktest;
 	int err;
 	err = plain_hash();
 	if (err) {
-		pr_warn("plain 'p' does not appear to be hashed\n");
-		failed_tests++;
+		KUNIT_FAIL(kunittest, "plain 'p' does not appear to be hashed\n");
 		return;
 	}
 	err = plain_format();
-	if (err) {
-		pr_warn("hashing plain 'p' has unexpected format\n");
-		failed_tests++;
-	}
+	if (err)
+		KUNIT_FAIL(kunittest, "hashing plain 'p' has unexpected format\n");
 }
 static void __init
@@ -340,9 +334,10 @@ test_hashed(const char *fmt, const void *p)
 /*
  * NULL pointers aren't hashed.
  */
-static void __init
-null_pointer(void)
+static void
+null_pointer(struct kunit *ktest)
 {
+	kunittest = ktest;
 	test(ZEROS "00000000", "%p", NULL);
 	test(ZEROS "00000000", "%px", NULL);
 	test("(null)", "%pE", NULL);
@@ -351,9 +346,10 @@
 /*
  * Error pointers aren't hashed.
  */
-static void __init
-error_pointer(void)
+static void
+error_pointer(struct kunit *ktest)
 {
+	kunittest = ktest;
 	test(ONES "fffffff5", "%p", ERR_PTR(-11));
 	test(ONES "fffffff5", "%px", ERR_PTR(-11));
 	test("(efault)", "%pE", ERR_PTR(-11));
@@ -361,43 +357,50 @@ error_pointer(void)
#define PTR_INVALID ((void *)0x000000ab)
-static void __init
-invalid_pointer(void)
+static void
+invalid_pointer(struct kunit *ktest)
 {
+	kunittest = ktest;
 	test_hashed("%p", PTR_INVALID);
 	test(ZEROS "000000ab", "%px", PTR_INVALID);
 	test("(efault)", "%pE", PTR_INVALID);
 }
-static void __init
-symbol_ptr(void)
+static void
+symbol_ptr(struct kunit *ktest)
 {
+	kunittest = ktest;
 }

-static void __init
-kernel_ptr(void)
+static void
+kernel_ptr(struct kunit *ktest)
 {
+	kunittest = ktest;
 	/* We can't test this without access to kptr_restrict. */
 }

-static void __init
-struct_resource(void)
+static void
+struct_resource(struct kunit *ktest)
 {
+	kunittest = ktest;
 }

-static void __init
-addr(void)
+static void
+addr(struct kunit *ktest)
 {
+	kunittest = ktest;
 }

-static void __init
-escaped_str(void)
+static void
+escaped_str(struct kunit *ktest)
 {
+	kunittest = ktest;
 }
-static void __init
-hex_string(void)
+static void
+hex_string(struct kunit *ktest)
 {
+	kunittest = ktest;
 	const char buf[3] = {0xc0, 0xff, 0xee};
 	test("c0 ff ee|c0:ff:ee|c0-ff-ee|c0ffee",
@@ -406,9 +409,10 @@ hex_string(void)
 	     "%*ph|%*phC|%*phD|%*phN", 3, buf, 3, buf, 3, buf, 3, buf);
 }
-static void __init
-mac(void)
+static void
+mac(struct kunit *ktest)
 {
+	kunittest = ktest;
 	const u8 addr[6] = {0x2d, 0x48, 0xd6, 0xfc, 0x7a, 0x05};
 	test("2d:48:d6:fc:7a:05", "%pM", addr);
@@ -438,16 +442,18 @@ ip6(void)
 {
 }
-static void __init
-ip(void)
+static void
+ip(struct kunit *ktest)
 {
+	kunittest = ktest;
 	ip4();
 	ip6();
 }
-static void __init
-uuid(void)
+static void
+uuid(struct kunit *ktest)
 {
+	kunittest = ktest;
 	const char uuid[16] = {0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7,
 			       0x8, 0x9, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf};
@@ -472,9 +478,10 @@ static struct dentry test_dentry[4] __initdata = {
 	  .d_iname = "romeo" },
 };
-static void __init
-dentry(void)
+static void
+dentry(struct kunit *ktest)
 {
+	kunittest = ktest;
 	test("foo", "%pd", &test_dentry[0]);
 	test("foo", "%pd2", &test_dentry[0]);
@@ -493,14 +500,16 @@ dentry(void)
 	test("  bravo/alfa|  bravo/alfa", "%12pd2|%*pd2", &test_dentry[2], 12, &test_dentry[2]);
 }
-static void __init
-struct_va_format(void)
+static void
+struct_va_format(struct kunit *ktest)
 {
+	kunittest = ktest;
 }
-static void __init
-time_and_date(void)
+static void
+time_and_date(struct kunit *ktest)
 {
+	kunittest = ktest;
 	/* 1543210543 */
 	const struct rtc_time tm = {
 		.tm_sec = 43,
@@ -527,9 +536,10 @@ time_and_date(void)
 	test("15:32:23|0119-00-04", "%ptTtr|%ptTdr", &t, &t);
 }
-static void __init
-struct_clk(void)
+static void
+struct_clk(struct kunit *ktest)
 {
+	kunittest = ktest;
 }
 static void __init
@@ -546,9 +556,10 @@ large_bitmap(void)
 	bitmap_free(bits);
 }
-static void __init
-bitmap(void)
+static void
+bitmap(struct kunit *ktest)
 {
+	kunittest = ktest;
 	DECLARE_BITMAP(bits, 20);
 	const int primes[] = {2,3,5,7,11,13,17,19};
 	int i;
@@ -569,14 +580,16 @@ bitmap(void)
 	large_bitmap();
 }
-static void __init
-netdev_features(void)
+static void
+netdev_features(struct kunit *ktest)
 {
+	kunittest = ktest;
 }
-static void __init
-flags(void)
+static void
+flags(struct kunit *ktest)
 {
+	kunittest = ktest;
 	unsigned long flags;
 	gfp_t gfp;
 	char *cmp_buffer;
@@ -623,8 +636,9 @@ flags(void)
 	kfree(cmp_buffer);
 }
-static void __init fwnode_pointer(void)
+static void fwnode_pointer(struct kunit *ktest)
 {
+	kunittest = ktest;
 	const struct software_node softnodes[] = {
 		{ .name = "first", },
 		{ .name = "second", .parent = &softnodes[0], },
@@ -654,9 +668,10 @@ static void __init fwnode_pointer(void)
 	software_node_unregister(&softnodes[0]);
 }
-static void __init
-errptr(void)
+static void
+errptr(struct kunit *ktest)
 {
+	kunittest = ktest;
 	test("-1234", "%pe", ERR_PTR(-1234));
 	/* Check that %pe with a non-ERR_PTR gets treated as ordinary %p. */
@@ -674,48 +689,57 @@ errptr(void)
 #endif
 }
-static void __init
-test_pointer(void)
-{
-	plain();
-	null_pointer();
-	error_pointer();
-	invalid_pointer();
-	symbol_ptr();
-	kernel_ptr();
-	struct_resource();
-	addr();
-	escaped_str();
-	hex_string();
-	mac();
-	ip();
-	uuid();
-	dentry();
-	struct_va_format();
-	time_and_date();
-	struct_clk();
-	bitmap();
-	netdev_features();
-	flags();
-	errptr();
-	fwnode_pointer();
-}
-
-static void __init selftest(void)
-{
-	alloced_buffer = kmalloc(BUF_SIZE + 2*PAD_SIZE, GFP_KERNEL);
+static int printf_test_init(struct kunit *ktest)
+{
+	alloced_buffer = kunit_kmalloc(ktest, BUF_SIZE + 2*PAD_SIZE, GFP_KERNEL);
 	if (!alloced_buffer)
-		return;
+		return -ENOMEM;
 	test_buffer = alloced_buffer + PAD_SIZE;
+	return 0;
+}
-	test_basic();
-	test_number();
-	test_string();
-	test_pointer();
+static void printf_test_exit(struct kunit *ktest)
+{
+	kunit_kfree(ktest, alloced_buffer);
+}
+
+static struct kunit_case printf_test_cases[] = {
+	KUNIT_CASE(test_basic),
+	KUNIT_CASE(test_number),
+	KUNIT_CASE(test_string),
+	KUNIT_CASE(plain),
+	KUNIT_CASE(null_pointer),
+	KUNIT_CASE(error_pointer),
+	KUNIT_CASE(invalid_pointer),
+	KUNIT_CASE(symbol_ptr),
+	KUNIT_CASE(kernel_ptr),
+	KUNIT_CASE(struct_resource),
+	KUNIT_CASE(addr),
+	KUNIT_CASE(escaped_str),
+	KUNIT_CASE(hex_string),
+	KUNIT_CASE(mac),
+	KUNIT_CASE(ip),
+	KUNIT_CASE(uuid),
+	KUNIT_CASE(dentry),
+	KUNIT_CASE(struct_va_format),
+	KUNIT_CASE(time_and_date),
+	KUNIT_CASE(struct_clk),
+	KUNIT_CASE(bitmap),
+	KUNIT_CASE(netdev_features),
+	KUNIT_CASE(flags),
+	KUNIT_CASE(errptr),
+	KUNIT_CASE(fwnode_pointer),
+	{}
+};
-	kfree(alloced_buffer);
-}
+static struct kunit_suite printf_test_suite = {
+	.name = "printf",
+	.init = printf_test_init,
+	.exit = printf_test_exit,
+	.test_cases = printf_test_cases,
+};
+
+kunit_test_suite(printf_test_suite);
-KSTM_MODULE_LOADERS(test_printf);
 MODULE_AUTHOR("Rasmus Villemoes <linux@rasmusvillemoes.dk>");
 MODULE_LICENSE("GPL");
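[Editor's note: the new PRINTF_KUNIT_TEST Kconfig entry above has no help text. Most KUnit suites also follow the KUNIT_ALL_TESTS pattern from the KUnit style guidelines; a plausible shape for the entry would be the following (sketch only, wording illustrative, not part of this patch):]

```kconfig
config PRINTF_KUNIT_TEST
	tristate "KUnit test for printf() family of functions" if !KUNIT_ALL_TESTS
	depends on KUNIT
	default KUNIT_ALL_TESTS
	help
	  Enable this option to run the unit tests for the printf()
	  family of functions under the KUnit framework.

	  If unsure, say N.
```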
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
Better, indeed.
But can this be improved to have cumulative statistics, like showing only the numbers of total, succeeded, and failed tests, with details of the failed ones?
This patch is based on top of Andy's series that renames KUnit-based tests in lib/ and Rasmus' series on deterministic random testing.
For the reference: https://lore.kernel.org/linux-kselftest/20201016110836.52613-1-andriy.shevch...
...
-static void __init
-test_string(void)
+static void
+test_string(struct kunit *ktest)
I guess no need anymore to keep them on two lines, just combine to one line.
On Tue, Nov 03, 2020 at 01:33:53PM +0200, Andy Shevchenko wrote:
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
Better, indeed.
But can this be improved to have cumulative statistics, like showing only the numbers of total, succeeded, and failed tests, with details of the failed ones?
Is that the proper test output format? We have a standard...
thanks,
greg k-h
On Tue, Nov 3, 2020 at 1:54 PM Greg KH <gregkh@linuxfoundation.org> wrote:
On Tue, Nov 03, 2020 at 01:33:53PM +0200, Andy Shevchenko wrote:
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
...
Better, indeed.
But can this be improved to have cumulative statistics, like showing only the numbers of total, succeeded, and failed tests, with details of the failed ones?
Is that the proper test output format? We have a standard...
I dunno. I'm asking... We have a few possibilities here: a) let it go like above, b) use the cumulative statistics, c) if there is no such thing available, implement and use it, d) ...
On Tue, Nov 03, 2020 at 02:06:23PM +0200, Andy Shevchenko wrote:
On Tue, Nov 3, 2020 at 1:54 PM Greg KH <gregkh@linuxfoundation.org> wrote:
On Tue, Nov 03, 2020 at 01:33:53PM +0200, Andy Shevchenko wrote:
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
...
Better, indeed.
But can this be improved to have cumulative statistics, like showing only the numbers of total, succeeded, and failed tests, with details of the failed ones?
Is that the proper test output format? We have a standard...
I dunno. I'm asking... We have a few possibilities here: a) let it go like above, b) use the cumulative statistics, c) if there is no such thing available, implement and use it, d) ...
We have a standard, please use it.
greg k-h
On Tue 2020-11-03 12:52:23, Greg KH wrote:
On Tue, Nov 03, 2020 at 01:33:53PM +0200, Andy Shevchenko wrote:
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
Better, indeed.
But can this be improved to have cumulative statistics, like showing only the numbers of total, succeeded, and failed tests, with details of the failed ones?
Is that the proper test output format? We have a standard...
What is the standard, please?
The original module produced:
[   48.577739] test_printf: loaded.
[   48.578046] test_printf: all 388 tests passed
It comes from the KSTM_MODULE_LOADERS() macro that was created by commit eebf4dd452377921e3a26 ("kselftest: Add test module framework header"). There are 3 users:
$> git grep KSTM_MODULE_LOADERS
Documentation/dev-tools/kselftest.rst:    KSTM_MODULE_LOADERS(test_foo);
lib/test_bitmap.c:KSTM_MODULE_LOADERS(test_bitmap);
lib/test_printf.c:KSTM_MODULE_LOADERS(test_printf);
lib/test_strscpy.c:KSTM_MODULE_LOADERS(test_strscpy);
tools/testing/selftests/kselftest_module.h:#define KSTM_MODULE_LOADERS(__module) \
When I looked for similar strings, I saw huge creativity:
drivers/firmware/psci/psci_checker.c:	pr_info("Hotplug tests passed OK\n");
drivers/firmware/psci/psci_checker.c:	pr_info("Suspend tests passed OK\n");
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c:	seq_printf(m, "ib ring tests passed.\n");
drivers/mtd/chips/jedec_probe.c:	/* all tests passed - mark as success */
drivers/usb/host/xhci-mem.c:	xhci_dbg(xhci, "TRB math tests passed.\n");
fs/unicode/utf8-selftest.c:	pr_info("All %u tests passed\n", total_tests);
kernel/kcsan/selftest.c:	pr_info("selftest: %d/%d tests passed\n", passed, total);
lib/crc32test.c:	pr_info("crc32c: self tests passed, processed %d bytes in %lld nsec\n",
lib/crc32test.c:	pr_info("crc32c_combine: %d self tests passed\n", runs);
lib/crc32test.c:	pr_info("crc32: self tests passed, processed %d bytes in %lld nsec\n",
lib/crc32test.c:	pr_info("crc32_combine: %d self tests passed\n", runs);
lib/globtest.c:	KERN_INFO "glob: %u self-tests passed, %u failed\n";
lib/random32.c:	pr_info("prandom: %d self tests passed\n", runs);
lib/test_hash.c:	pr_notice("%u tests passed.", tests);
lib/test_hexdump.c:	pr_info("all %u tests passed\n", total_tests);
lib/test_ida.c:	printk("IDA: %u of %u tests passed\n", tests_passed, tests_run);
lib/test_meminit.c:	pr_info("all %d tests passed!\n", num_tests);
lib/test_overflow.c:	pr_info("all tests passed\n");
lib/test_stackinit.c:	pr_info("all tests passed!\n");
lib/test_user_copy.c:	pr_info("tests passed.\n");
lib/test_uuid.c:	pr_info("all %u tests passed\n", total_tests);
lib/test_xarray.c:	printk("XArray: %u of %u tests passed\n", tests_passed, tests_run);
tools/bootconfig/test-bootconfig.sh:	echo "All tests passed"
Binary file tools/testing/kunit/test_data/test_is_test_passed-all_passed.log matches
Binary file tools/testing/kunit/test_data/test_is_test_passed-crash.log matches
Binary file tools/testing/kunit/test_data/test_is_test_passed-failure.log matches
Binary file tools/testing/kunit/test_data/test_output_isolated_correctly.log matches
tools/testing/selftests/bpf/test_tc_tunnel.sh:	echo "OK. All tests passed"
tools/testing/selftests/bpf/test_xdping.sh:echo "OK. All tests passed"
tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c:	printf("Ioctl compatibility tests passed\n");
tools/testing/selftests/kselftest_harness.h:	ksft_print_msg("%s: %u / %u tests passed.\n", ret ? "FAILED" : "PASSED",
tools/testing/selftests/kselftest_module.h:	pr_info("all %u tests passed\n", total_tests);
tools/testing/selftests/net/ipv6_flowlabel.sh:echo OK. All tests passed
tools/testing/selftests/net/msg_zerocopy.sh:	echo "OK. All tests passed"
tools/testing/selftests/net/psock_fanout.c:	printf("OK. All tests passed\n");
tools/testing/selftests/net/psock_snd.sh:echo "OK. All tests passed"
tools/testing/selftests/net/psock_tpacket.c:	printf("OK. All tests passed\n");
tools/testing/selftests/net/so_txtime.sh:echo OK. All tests passed
tools/testing/selftests/net/txtimestamp.sh:	echo "OK. All tests passed"
From my POV, KUnit is a framework that is supposed to actually unify many tests.
There is an alternative framework maintained by Shuah Khan. Compared with KUnit, it prints the results in userspace. The format is quite similar, for example:
tools/testing/selftests/livepatch/# make run_tests
TAP version 13
1..5
# selftests: livepatch: test-livepatch.sh
# TEST: basic function patching ... ok
# TEST: multiple livepatches ... ok
# TEST: atomic replace livepatch ... ok
ok 1 selftests: livepatch: test-livepatch.sh
# selftests: livepatch: test-callbacks.sh
# TEST: target module before livepatch ... ok
# TEST: module_coming notifier ... ok
# TEST: module_going notifier ... ok
# TEST: module_coming and module_going notifiers ... ok
# TEST: target module not present ... ok
# TEST: pre-patch callback -ENODEV ... ok
# TEST: module_coming + pre-patch callback -ENODEV ... ok
# TEST: multiple target modules ... ok
# TEST: busy target module ... ok
# TEST: multiple livepatches ... ok
# TEST: atomic replace ... ok
ok 2 selftests: livepatch: test-callbacks.sh
# selftests: livepatch: test-shadow-vars.sh
# TEST: basic shadow variable API ... ok
ok 3 selftests: livepatch: test-shadow-vars.sh
# selftests: livepatch: test-state.sh
# TEST: system state modification ... ok
# TEST: taking over system state modification ... ok
# TEST: compatible cumulative livepatches ... ok
# TEST: incompatible cumulative livepatches ... ok
ok 4 selftests: livepatch: test-state.sh
# selftests: livepatch: test-ftrace.sh
# TEST: livepatch interaction with ftrace_enabled sysctl ... ok
ok 5 selftests: livepatch: test-ftrace.sh
I do not have a strong opinion about which format is better. But the format:
"doing something ... ok"
is problematic in the kernel log because the log contains messages from all kernel activity. As a result, many unrelated messages might appear between "doing something ..." and "ok". It is much more reliable with stdout/stderr in userspace.
For the kernel, it is more reliable to print results when the test finishes. And it seems to be "easy" to spot an error in a KUnit report.
That said, it would be nice to see some summary on the very last line. It would be especially useful when the log is longer than one screen.
The result of every subtest might help to locate the problem when the system freezes. It might be optional.
I am personally fine with the proposed output of printf_kunit.c.
Best Regards, Petr
On Tue, Nov 03, 2020 at 05:11:47PM +0100, Petr Mladek wrote:
On Tue 2020-11-03 12:52:23, Greg KH wrote:
On Tue, Nov 03, 2020 at 01:33:53PM +0200, Andy Shevchenko wrote:
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
Convert test lib/test_printf.c to KUnit. More information about KUnit can be found at: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html. KUnit provides a common framework for unit tests in the kernel. KUnit and kselftest are standardizing around KTAP, converting this test to KUnit makes this test output in KTAP which we are trying to make the standard test result format for the kernel. More about the KTAP format can be found at: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD8.... I ran both the original and converted tests as is to produce the output for success of the test in the two cases. I also ran these tests with a small modification to show the difference in the output for failure of the test in both cases. The modification I made is:
- test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
+ test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
Original test success:
[ 0.540860] test_printf: loaded.
[ 0.540863] test_printf: random seed = 0x5c46c33837bc0619
[ 0.541022] test_printf: all 388 tests passed
Original test failure:
[ 0.537980] test_printf: loaded.
[ 0.537983] test_printf: random seed = 0x1bc1efd881954afb
[ 0.538029] test_printf: vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
[ 0.538030] test_printf: kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
[ 0.538124] test_printf: failed 2 out of 388 tests
[ 0.538125] test_printf: random seed used was 0x1bc1efd881954afb
Converted test success:
# Subtest: printf
1..25
ok 1 - test_basic
ok 2 - test_number
ok 3 - test_string
ok 4 - plain
ok 5 - null_pointer
ok 6 - error_pointer
ok 7 - invalid_pointer
ok 8 - symbol_ptr
ok 9 - kernel_ptr
ok 10 - struct_resource
ok 11 - addr
ok 12 - escaped_str
ok 13 - hex_string
ok 14 - mac
ok 15 - ip
ok 16 - uuid
ok 17 - dentry
ok 18 - struct_va_format
ok 19 - time_and_date
ok 20 - struct_clk
ok 21 - bitmap
ok 22 - netdev_features
ok 23 - flags
ok 24 - errptr
ok 25 - fwnode_pointer
ok 1 - printf
Converted test failure:
# Subtest: printf
1..25
ok 1 - test_basic
ok 2 - test_number
ok 3 - test_string
ok 4 - plain
ok 5 - null_pointer
ok 6 - error_pointer
ok 7 - invalid_pointer
ok 8 - symbol_ptr
ok 9 - kernel_ptr
ok 10 - struct_resource
ok 11 - addr
ok 12 - escaped_str
ok 13 - hex_string
ok 14 - mac
# ip: EXPECTATION FAILED at lib/printf_kunit.c:82
vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
# ip: EXPECTATION FAILED at lib/printf_kunit.c:124
kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
not ok 15 - ip
ok 16 - uuid
ok 17 - dentry
ok 18 - struct_va_format
ok 19 - time_and_date
ok 20 - struct_clk
ok 21 - bitmap
ok 22 - netdev_features
ok 23 - flags
ok 24 - errptr
ok 25 - fwnode_pointer
not ok 1 - printf
Better, indeed.
But can this be improved to produce cumulative statistics, i.e. showing only the numbers of total, succeeded, and failed tests, with details only for the failed ones?
Is that the proper test output format? We have a standard...
What is the standard, please?
The TAP format should be the standard, no reason the kernel can not spit out the same test message format that the userspace tests do, right?
thanks,
greg k-h
On Tue, Nov 3, 2020 at 8:22 AM Greg KH gregkh@linuxfoundation.org wrote:
On Tue, Nov 03, 2020 at 05:11:47PM +0100, Petr Mladek wrote:
On Tue 2020-11-03 12:52:23, Greg KH wrote:
On Tue, Nov 03, 2020 at 01:33:53PM +0200, Andy Shevchenko wrote:
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
Convert test lib/test_printf.c to KUnit. More information about KUnit can be found at: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html. KUnit provides a common framework for unit tests in the kernel. KUnit and kselftest are standardizing around KTAP, converting this test to KUnit makes this test output in KTAP which we are trying to make the standard test result format for the kernel. More about the KTAP format can be found at: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD8.... I ran both the original and converted tests as is to produce the output for success of the test in the two cases. I also ran these tests with a small modification to show the difference in the output for failure of the test in both cases. The modification I made is:
- test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
+ test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
Original test success: [ 0.540860] test_printf: loaded. [ 0.540863] test_printf: random seed = 0x5c46c33837bc0619 [ 0.541022] test_printf: all 388 tests passed
Original test failure: [ 0.537980] test_printf: loaded. [ 0.537983] test_printf: random seed = 0x1bc1efd881954afb [ 0.538029] test_printf: vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' [ 0.538030] test_printf: kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' [ 0.538124] test_printf: failed 2 out of 388 tests [ 0.538125] test_printf: random seed used was 0x1bc1efd881954afb
Converted test success: # Subtest: printf 1..25 ok 1 - test_basic ok 2 - test_number ok 3 - test_string ok 4 - plain ok 5 - null_pointer ok 6 - error_pointer ok 7 - invalid_pointer ok 8 - symbol_ptr ok 9 - kernel_ptr ok 10 - struct_resource ok 11 - addr ok 12 - escaped_str ok 13 - hex_string ok 14 - mac ok 15 - ip ok 16 - uuid ok 17 - dentry ok 18 - struct_va_format ok 19 - time_and_date ok 20 - struct_clk ok 21 - bitmap ok 22 - netdev_features ok 23 - flags ok 24 - errptr ok 25 - fwnode_pointer ok 1 - printf
Converted test failure: # Subtest: printf 1..25 ok 1 - test_basic ok 2 - test_number ok 3 - test_string ok 4 - plain ok 5 - null_pointer ok 6 - error_pointer ok 7 - invalid_pointer ok 8 - symbol_ptr ok 9 - kernel_ptr ok 10 - struct_resource ok 11 - addr ok 12 - escaped_str ok 13 - hex_string ok 14 - mac # ip: EXPECTATION FAILED at lib/printf_kunit.c:82 vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' # ip: EXPECTATION FAILED at lib/printf_kunit.c:124 kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' not ok 15 - ip ok 16 - uuid ok 17 - dentry ok 18 - struct_va_format ok 19 - time_and_date ok 20 - struct_clk ok 21 - bitmap ok 22 - netdev_features ok 23 - flags ok 24 - errptr ok 25 - fwnode_pointer not ok 1 - printf
Better, indeed.
But can this be improved to produce cumulative statistics, i.e. showing only the numbers of total, succeeded, and failed tests, with details only for the failed ones?
Is that the proper test output format? We have a standard...
What is the standard, please?
The TAP format should be the standard, no reason the kernel can not spit out the same test message format that the userspace tests do, right?
KUnit and kselftest both use the TAP format. Well, we both use slight variations on the TAP format. Nevertheless, we are both unifying around a common variation of TAP called KTAP[1]. To my knowledge, neither kselftest nor KUnit fully obeys the new KTAP spec, but I believe we are both working toward full compliance (as of 5.10, KUnit output should be fully compliant[2]), and both are pretty close.
When I say both kselftest and KUnit are pretty close to obeying KTAP, I mean that for the purpose of Arpitha's change here which we are discussing, the sample segment of output that she shared as the new output of this test *is fully compliant with KTAP and is not expected to change any time soon*. Additionally, most, but not all, of kselftest's tests are compliant as well.
The main visual difference between KUnit and kselftest TAP output comes from two things. First, kselftests tend to make heavy use of diagnostic lines (basically log lines which serve to communicate arbitrary unstructured information about the test); KUnit has facilities for devs to use these in tests, but so far they have not gotten much use, probably due to the smaller-scope nature of KUnit tests. The other difference is that KUnit groups related test cases together into subtests (KUnit calls them test suites, but TAP calls them subtests) whereas kselftest does not; again this largely has to do with the style of tests. Both of these features exist in KTAP, but the output tends to reflect the kinds of tests being written.
Tim, Shuah, David (or anyone else), please let me know if I missed anything.
I hope that answers everyone's questions.
P.S. You can get KUnit results from user space separated out from dmesg if you prefer to run your tests that way[3].
[1] https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD8... [2] We now support the top level test plan as specified, but do not support SKIP tests yet; however, most KUnit tests don't need SKIP tests anyway. See [1] for more information. [3] https://www.kernel.org/doc/html/latest/dev-tools/kunit/usage.html#kunit-debu...
On 03/11/2020 17.11, Petr Mladek wrote:
On Tue 2020-11-03 12:52:23, Greg KH wrote:
On Tue, Nov 03, 2020 at 01:33:53PM +0200, Andy Shevchenko wrote:
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
Convert test lib/test_printf.c to KUnit. More information about KUnit can be found at: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html. KUnit provides a common framework for unit tests in the kernel. KUnit and kselftest are standardizing around KTAP, converting this test to KUnit makes this test output in KTAP which we are trying to make the standard test result format for the kernel. More about the KTAP format can be found at: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD8.... I ran both the original and converted tests as is to produce the output for success of the test in the two cases. I also ran these tests with a small modification to show the difference in the output for failure of the test in both cases. The modification I made is:
- test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
+ test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
Original test success: [ 0.540860] test_printf: loaded. [ 0.540863] test_printf: random seed = 0x5c46c33837bc0619 [ 0.541022] test_printf: all 388 tests passed
Original test failure: [ 0.537980] test_printf: loaded. [ 0.537983] test_printf: random seed = 0x1bc1efd881954afb [ 0.538029] test_printf: vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' [ 0.538030] test_printf: kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' [ 0.538124] test_printf: failed 2 out of 388 tests [ 0.538125] test_printf: random seed used was 0x1bc1efd881954afb
Converted test success: # Subtest: printf 1..25 ok 1 - test_basic ok 2 - test_number ok 3 - test_string ok 4 - plain ok 5 - null_pointer ok 6 - error_pointer ok 7 - invalid_pointer ok 8 - symbol_ptr ok 9 - kernel_ptr ok 10 - struct_resource ok 11 - addr ok 12 - escaped_str ok 13 - hex_string ok 14 - mac ok 15 - ip ok 16 - uuid ok 17 - dentry ok 18 - struct_va_format ok 19 - time_and_date ok 20 - struct_clk ok 21 - bitmap ok 22 - netdev_features ok 23 - flags ok 24 - errptr ok 25 - fwnode_pointer ok 1 - printf
Converted test failure: # Subtest: printf 1..25 ok 1 - test_basic ok 2 - test_number ok 3 - test_string ok 4 - plain ok 5 - null_pointer ok 6 - error_pointer ok 7 - invalid_pointer ok 8 - symbol_ptr ok 9 - kernel_ptr ok 10 - struct_resource ok 11 - addr ok 12 - escaped_str ok 13 - hex_string ok 14 - mac # ip: EXPECTATION FAILED at lib/printf_kunit.c:82 vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' # ip: EXPECTATION FAILED at lib/printf_kunit.c:124 kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' not ok 15 - ip ok 16 - uuid ok 17 - dentry ok 18 - struct_va_format ok 19 - time_and_date ok 20 - struct_clk ok 21 - bitmap ok 22 - netdev_features ok 23 - flags ok 24 - errptr ok 25 - fwnode_pointer not ok 1 - printf
Better, indeed.
But can this be improved to produce cumulative statistics, i.e. showing only the numbers of total, succeeded, and failed tests, with details only for the failed ones?
Is that the proper test output format? We have a standard...
Actually more like xkcd#927.
What is the standard, please?
The original module produced:
[ 48.577739] test_printf: loaded. [ 48.578046] test_printf: all 388 tests passed
It comes from KSTM_MODULE_LOADERS() macro that has been created by the commit eebf4dd452377921e3a26 ("kselftest: Add test module framework header").
That's a bit of a simplification. That code was clearly lifted directly from the original test_printf.c code (707cc7280f452a162c52bc240eae62568b9753c2). test_bitmap.c cloned test_printf.c (including a "Test cases for printf facility" comment...). Later both were converted to use the KSTM header.
As the one coming up with that originally, I certainly endorse its use as a standard way of producing a final, free-form summary, and I prefer to keep that total tally (i.e. 388) being printed for the reasons I've previously stated. [*]
That said, I have absolutely nothing against _also_ producing machine-parsable TAP output. And the above looks to be a good compromise between spitting out one TAP line for each of the 388 test cases (or checks, or atoms, whatever the individual parts are to be called) and treating all of test_printf.c as one single PASS/FAIL test.
[*] If I add some test cases to time_and_date and run the kernel under virtme, I want to see 388 becoming something else, so that I know that I actually ran the code I just added and not some stale vmlinux - maybe I only did "make lib/test_printf.o" and didn't recreate vmlinux, maybe I did "make vmlinux" but failed to notice the build was broken. BTDT.
And those summary lines are even more important to keep given my "deterministic random testing" series - the seed used _must_ be reported (preferably also in the "all good" case, but certainly in the "some failed" case).
Arpitha, thank you for taking that series into account. Is there some way to keep the print of the total number of "atoms" as well as the reporting of the random seed? Or should the "deterministic random testing" be done in the context of KUNIT rather than KSTM? I'm really not sure why we have both nor which of the two one is supposed to write against. But I would prefer that the framework, whichever one, provides a means to get a deterministic sequence of random numbers, so that I won't have to eventually copy-paste the boilerplate to test_sort.c and test_list_sort.c - it's also much nicer if all test modules have a somewhat consistent interface in terms of the module parameters they accept.
Rasmus
On 04/11/20 1:48 pm, Rasmus Villemoes wrote:
On 03/11/2020 17.11, Petr Mladek wrote:
On Tue 2020-11-03 12:52:23, Greg KH wrote:
On Tue, Nov 03, 2020 at 01:33:53PM +0200, Andy Shevchenko wrote:
On Tue, Nov 03, 2020 at 04:40:49PM +0530, Arpitha Raghunandan wrote:
Convert test lib/test_printf.c to KUnit. More information about KUnit can be found at: https://www.kernel.org/doc/html/latest/dev-tools/kunit/index.html. KUnit provides a common framework for unit tests in the kernel. KUnit and kselftest are standardizing around KTAP, converting this test to KUnit makes this test output in KTAP which we are trying to make the standard test result format for the kernel. More about the KTAP format can be found at: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD8.... I ran both the original and converted tests as is to produce the output for success of the test in the two cases. I also ran these tests with a small modification to show the difference in the output for failure of the test in both cases. The modification I made is:
- test("127.000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
+ test("127-000.000.001|127.0.0.1", "%pi4|%pI4", &sa.sin_addr, &sa.sin_addr);
Original test success: [ 0.540860] test_printf: loaded. [ 0.540863] test_printf: random seed = 0x5c46c33837bc0619 [ 0.541022] test_printf: all 388 tests passed
Original test failure: [ 0.537980] test_printf: loaded. [ 0.537983] test_printf: random seed = 0x1bc1efd881954afb [ 0.538029] test_printf: vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' [ 0.538030] test_printf: kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' [ 0.538124] test_printf: failed 2 out of 388 tests [ 0.538125] test_printf: random seed used was 0x1bc1efd881954afb
Converted test success: # Subtest: printf 1..25 ok 1 - test_basic ok 2 - test_number ok 3 - test_string ok 4 - plain ok 5 - null_pointer ok 6 - error_pointer ok 7 - invalid_pointer ok 8 - symbol_ptr ok 9 - kernel_ptr ok 10 - struct_resource ok 11 - addr ok 12 - escaped_str ok 13 - hex_string ok 14 - mac ok 15 - ip ok 16 - uuid ok 17 - dentry ok 18 - struct_va_format ok 19 - time_and_date ok 20 - struct_clk ok 21 - bitmap ok 22 - netdev_features ok 23 - flags ok 24 - errptr ok 25 - fwnode_pointer ok 1 - printf
Converted test failure: # Subtest: printf 1..25 ok 1 - test_basic ok 2 - test_number ok 3 - test_string ok 4 - plain ok 5 - null_pointer ok 6 - error_pointer ok 7 - invalid_pointer ok 8 - symbol_ptr ok 9 - kernel_ptr ok 10 - struct_resource ok 11 - addr ok 12 - escaped_str ok 13 - hex_string ok 14 - mac # ip: EXPECTATION FAILED at lib/printf_kunit.c:82 vsnprintf(buf, 256, "%pi4|%pI4", ...) wrote '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' # ip: EXPECTATION FAILED at lib/printf_kunit.c:124 kvasprintf(..., "%pi4|%pI4", ...) returned '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1' not ok 15 - ip ok 16 - uuid ok 17 - dentry ok 18 - struct_va_format ok 19 - time_and_date ok 20 - struct_clk ok 21 - bitmap ok 22 - netdev_features ok 23 - flags ok 24 - errptr ok 25 - fwnode_pointer not ok 1 - printf
Better, indeed.
But can this be improved to produce cumulative statistics, i.e. showing only the numbers of total, succeeded, and failed tests, with details only for the failed ones?
Is that the proper test output format? We have a standard...
Actually more like xkcd#927.
What is the standard, please?
The original module produced:
[ 48.577739] test_printf: loaded. [ 48.578046] test_printf: all 388 tests passed
It comes from KSTM_MODULE_LOADERS() macro that has been created by the commit eebf4dd452377921e3a26 ("kselftest: Add test module framework header").
That's a bit of a simplification. That code was clearly lifted directly from the original test_printf.c code (707cc7280f452a162c52bc240eae62568b9753c2). test_bitmap.c cloned test_printf.c (including a "Test cases for printf facility" comment...). Later both were converted to use the KSTM header.
As the one coming up with that originally, I certainly endorse its use as a standard way of producing a final, free-form summary, and I prefer to keep that total tally (i.e. 388) being printed for the reasons I've previously stated. [*]
That said, I have absolutely nothing against _also_ producing machine-parsable TAP output. And the above looks to be a good compromise between spitting out one TAP line for each of the 388 test cases (or checks, or atoms, whatever the individual parts are to be called) and treating all of test_printf.c as one single PASS/FAIL test.
[*] If I add some test cases to time_and_date and run the kernel under virtme, I want to see 388 becoming something else, so that I know that I actually ran the code I just added and not some stale vmlinux - maybe I only did "make lib/test_printf.o" and didn't recreate vmlinux, maybe I did "make vmlinux" but failed to notice the build was broken. BTDT.
And those summary lines are even more important to keep given my "deterministic random testing" series - the seed used _must_ be reported (preferably also in the "all good" case, but certainly in the "some failed" case).
Arpitha, thank you for taking that series into account. Is there some way to keep the print of the total number of "atoms" as well as the reporting of the random seed? Or should the "deterministic random testing" be done in the context of KUNIT rather than KSTM? I'm really not sure why we have both nor which of the two one is supposed to write against. But I would prefer that the framework, whichever one, provides a means to get a deterministic sequence of random numbers, so that I won't have to eventually copy-paste the boilerplate to test_sort.c and test_list_sort.c - it's also much nicer if all test modules have a somewhat consistent interface in terms of the module parameters they accept.
Rasmus
The total number of "atoms" can be printed by maintaining a static variable total_count that is incremented as in the original test_printf test. But the reporting of the random seed is currently done in kselftest and so will not show up with KUnit. I am not really sure which is better in this case.
Thanks!
On 06/11/2020 05.04, Arpitha Raghunandan wrote:
The total number of "atoms" can be printed by maintaining a static variable total_count that is incremented as in the original test_printf test. But the reporting of the random seed is currently done in kselftest and so will not show up with KUnit. I am not really sure which is better in this case.
So my real questions are: Why do we have both kselftest and kunit? What's the difference? Are there any plans to merge the two frameworks? What's the point of converting from one to the other if not? And if one is to take a currently stand-alone ad hoc test module and place it under one of those umbrellas, which one should one choose?
I don't really care if the "deterministic random testing" prep work goes with kstm or kunit; whichever framework could provide that boilerplate is the framework I'd make sure to use for subsequent work on various tests.
The reporting of the number of "atoms" in the printf test suite is something I'd miss if not preserved, but if there's sufficient good rationale for doing the conversion to Kunit I could live with that. But if kunit then can't provide a per-test-module rnd_state and handle its seeding (and reporting of what seed was used), I won't ack the conversion.
Rasmus
On Fri, Nov 06, 2020 at 11:31:43AM +0100, Rasmus Villemoes wrote:
On 06/11/2020 05.04, Arpitha Raghunandan wrote:
The total number of "atoms" can be printed by maintaining a static variable total_count that can be incremented as is in the original test_printf test. But, the reporting of the random seed currently is done in kselftest and so will not show up with KUnit. I am not really sure which is better in this case.
So my real questions are: Why do we have both kselftest and kunit?
One is testing code within the kernel image testing it within kernelspace, and one is outside the kernel testing it from userspace.
thanks,
greg k-h