## TL;DR
This patchset adds a centralized executor to dispatch tests, rather than relying on late_initcall to schedule each test suite separately, along with a couple of new features that depend on it.
## What am I trying to do?
Conceptually, I am trying to provide a mechanism by which test suites can be grouped together so that they can be reasoned about collectively. Two patches later in this series add features which depend on this:
PATCH 5/7 Prints out a test plan right before KUnit tests are run[1]; this is valuable because it makes it possible for a test harness to detect whether the number of tests run matches the number of tests expected to be run, ensuring that no tests silently failed.
PATCH 6/7 Adds a new kernel command-line option which allows the user to specify that the kernel power off, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM, so that the UML/VM process exits cleanly immediately after running all tests without needing a special initramfs.
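As an illustration of the intended workflow (the `kunit_shutdown` option and its values follow the patch description; the UML binary name and memory size are just example invocation details):

```
# Boot a UML kernel with KUnit built in; power off as soon as all tests
# have run, so the process exits cleanly without a special initramfs.
./linux mem=256M console=tty kunit_shutdown=poweroff
```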
In addition, by dispatching tests from a single location, we can guarantee that all KUnit tests run after late_init is complete, which was a concern during the initial KUnit patchset review (this has not been a problem in practice, but resolving it with certainty is nevertheless desirable).
Other use cases for this exist, but the above features should provide an idea of the value that this could provide.
Alan Maguire (1):
  kunit: test: create a single centralized executor for all tests
Brendan Higgins (5):
  vmlinux.lds.h: add linker section for KUnit test suites
  arch: um: add linker section for KUnit test suites
  init: main: add KUnit to kernel init
  kunit: test: add test plan to KUnit TAP format
  Documentation: Add kunit_shutdown to kernel-parameters.txt
David Gow (1):
  kunit: Add 'kunit_shutdown' option
 .../admin-guide/kernel-parameters.txt         |  7 ++
 arch/um/include/asm/common.lds.S              |  4 +
 include/asm-generic/vmlinux.lds.h             |  8 ++
 include/kunit/test.h                          | 82 ++++++++++++-------
 init/main.c                                   |  4 +
 lib/kunit/Makefile                            |  3 +-
 lib/kunit/executor.c                          | 71 ++++++++++++++++
 lib/kunit/test.c                              | 11 ---
 tools/testing/kunit/kunit_kernel.py           |  2 +-
 tools/testing/kunit/kunit_parser.py           | 76 ++++++++++++++---
 .../test_is_test_passed-all_passed.log        |  1 +
 .../test_data/test_is_test_passed-crash.log   |  1 +
 .../test_data/test_is_test_passed-failure.log |  1 +
 13 files changed, 217 insertions(+), 54 deletions(-)
 create mode 100644 lib/kunit/executor.c
Add a linker section where KUnit can put references to its test suites. This patch is the first step in transitioning to dispatching all KUnit tests from a centralized executor rather than having each as its own separate late_initcall.
Co-developed-by: Iurii Zaikin <yzaikin@google.com>
Signed-off-by: Iurii Zaikin <yzaikin@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
---
 include/asm-generic/vmlinux.lds.h | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index e00f41aa8ec4f..99a866f49cb3d 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -856,6 +856,13 @@
 	KEEP(*(.con_initcall.init))					\
 	__con_initcall_end = .;
+/* Alignment must be consistent with (kunit_suite *) in include/kunit/test.h */
+#define KUNIT_TEST_SUITES						\
+		. = ALIGN(8);						\
+		__kunit_suites_start = .;				\
+		KEEP(*(.kunit_test_suites))				\
+		__kunit_suites_end = .;
+
 #ifdef CONFIG_BLK_DEV_INITRD
 #define INIT_RAM_FS							\
 	. = ALIGN(4);							\
@@ -1024,6 +1031,7 @@
 	INIT_CALLS							\
 	CON_INITCALL						\
 	INIT_RAM_FS						\
+	KUNIT_TEST_SUITES					\
 	}
#define BSS_SECTION(sbss_align, bss_align, stop_align) \
Add a linker section to UML where KUnit can put references to its test suites. This patch is an early step in transitioning to dispatching all KUnit tests from a centralized executor rather than having each as its own separate late_initcall.
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
---
 arch/um/include/asm/common.lds.S | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/um/include/asm/common.lds.S b/arch/um/include/asm/common.lds.S
index 7145ce6999822..eab9ceb450efd 100644
--- a/arch/um/include/asm/common.lds.S
+++ b/arch/um/include/asm/common.lds.S
@@ -52,6 +52,10 @@
   CON_INITCALL
  }
+  .kunit_test_suites : {
+	KUNIT_TEST_SUITES
+  }
+
   .exitcall : {
	__exitcall_begin = .;
	*(.exitcall.exit)
On 1/30/20 5:08 PM, Brendan Higgins wrote:
Add a linker section to UML where KUnit can put references to its test suites. This patch is an early step in transitioning to dispatching all KUnit tests from a centralized executor rather than having each as its own separate late_initcall.
All architectures please.
The early versions of KUnit documented reliance on UML. Discussion led to the conclusion that real architectures and real hardware would be supported.
Things like this are what make me reluctant to move the devicetree unittests to KUnit.
Can you please add a section to the KUnit documentation that lists things like the expectations, requirements, limitations, etc for a test case that is run by KUnit? Some examples that pop to mind from recent discussions and my own experiences:
- Each test case is invoked after late_init is complete.
  + Exception: the possible value of being able to run a unit test at a specific runlevel has been expressed. If an actual unit test can be shown to require running earlier, this restriction will be re-visited.
- Each test case must be idempotent. Each test case may be called multiple times, and must generate the same result each time it is called.
  + Exception 1: a test case can be declared to not be idempotent [[ mechanism TBD ]], in which case KUnit will not call the test case a second time without the kernel rebooting.
  + Exception 2: hardware may not be deterministic, so a test that always passes or fails when run under UML may not always do so on real hardware. <--- sentence copied from Documentation/dev-tools/kunit/usage.rst
  [[ This item and 1st exception do not exist yet, but will exist in some form if the proposed proc filesystem interface is added. ]]
- KUnit provides a helpful wrapper to simplify building a UML kernel containing the KUnit test cases, booting the UML kernel, and formatting the output from the test cases. This wrapper MUST NOT be required to run the test cases or to determine a test result. The formatting may provide additional analysis and improve readability of a test result.
- .... There is more that belongs here, but I'm getting side tracked here, when I'm trying to instead convert devicetree unittests to KUnit and want to get back to that.
-Frank
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
 arch/um/include/asm/common.lds.S | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/um/include/asm/common.lds.S b/arch/um/include/asm/common.lds.S
index 7145ce6999822..eab9ceb450efd 100644
--- a/arch/um/include/asm/common.lds.S
+++ b/arch/um/include/asm/common.lds.S
@@ -52,6 +52,10 @@
   CON_INITCALL
  }

+  .kunit_test_suites : {
+	KUNIT_TEST_SUITES
+  }
+
   .exitcall : {
	__exitcall_begin = .;
	*(.exitcall.exit)
On Tue, Feb 4, 2020 at 1:59 PM Frank Rowand frowand.list@gmail.com wrote:
On 1/30/20 5:08 PM, Brendan Higgins wrote:
Add a linker section to UML where KUnit can put references to its test suites. This patch is an early step in transitioning to dispatching all KUnit tests from a centralized executor rather than having each as its own separate late_initcall.
All architectures please.
I *am* supporting all architectures with this patchset.
The first patch in this series adds support to all architectures except UML (admittedly I only tried x86 and ARM, 32 bit and 64 bit for both, but I am pretty sure someone tried it for POWER and something else, so maybe I should try it with others before submission). A patch specific for UML, this patch, was needed because UML is a special snowflake and has a bunch of special linker scripts that don't make the change in vmlinux.lds.h (the previous patch) sufficient.
The early versions of KUnit documented reliance on UML. Discussion led to the conclusion that real architectures and real hardware would be supported.
I am *very* aware.
I would never intentionally break support for other architectures. I know it is very important to you, Alan, and others.
Things like this are what make me reluctant to move the devicetree unittests to KUnit.
Hopefully I can reassure you then:
With Alan as a regular contributor who cares very much about non-UML architectures, it would be very unlikely for me to accidentally break support for other architectures without us finding out before a release.
I also periodically test KUnit on linux-next on x86-64. I have gotten bug reports for other architectures from Arnd Bergmann and one of the m68k maintainers, who seems to play around with it as well.
So yeah, other people care about this too, and I would really not want to make any of them unhappy.
Can you please add a section to the KUnit documentation that lists things like the expectations, requirements, limitations, etc for a test case that is run by KUnit? Some examples that pop to mind from recent discussions and my own experiences:
Each test case is invoked after late_init is complete.
- Exception: the possible value of being able to run a unit test at a specific runlevel has been expressed. If an actual unit test can be shown to require running earlier, this restriction will be re-visited.
Each test case must be idempotent. Each test case may be called multiple times, and must generate the same result each time it is called.
- Exception 1: a test case can be declared to not be idempotent [[ mechanism TBD ]], in which case KUnit will not call the test case a second time without the kernel rebooting.
- Exception 2: hardware may not be deterministic, so a test that always passes or fails when run under UML may not always do so on real hardware. <--- sentence copied from Documentation/dev-tools/kunit/usage.rst [[ This item and 1st exception do not exist yet, but will exist in some form if the proposed proc filesystem interface is added. ]]
KUnit provides a helpful wrapper to simplify building a UML kernel containing the KUnit test cases, booting the UML kernel, and formatting the output from the test cases. This wrapper MUST NOT be required to run the test cases or to determine a test result. The formatting may provide additional analysis and improve readability of a test result.
.... There is more that belongs here, but I'm getting side tracked here, when I'm trying to instead convert devicetree unittests to KUnit and want to get back to that.
Sure, I think that's a great start! Thanks for that. I hope you don't mind if I copy and paste some of it.
It kind of sounds like you are talking about more of a requirements doc than the design doc I was imagining in my reply to you on the cover letter, which is fine. The documentation is primarily for people other than me, so whatever you and others think is useful, I will do.
On 2/4/20 4:30 PM, Brendan Higgins wrote:
On Tue, Feb 4, 2020 at 1:59 PM Frank Rowand frowand.list@gmail.com wrote:
On 1/30/20 5:08 PM, Brendan Higgins wrote:
Add a linker section to UML where KUnit can put references to its test suites. This patch is an early step in transitioning to dispatching all KUnit tests from a centralized executor rather than having each as its own separate late_initcall.
All architectures please.
I *am* supporting all architectures with this patchset.
The first patch in this series adds support to all architectures except UML (admittedly I only tried x86 and ARM, 32 bit and 64 bit for
Right you are. My mind did not span from patch 1 to patch 2. Apologies for the noise.
both, but I am pretty sure someone tried it for POWER and something else, so maybe I should try it with others before submission). A patch specific for UML, this patch, was needed because UML is a special snowflake and has a bunch of special linker scripts that don't make the change in vmlinux.lds.h (the previous patch) sufficient.
The early versions of KUnit documented reliance on UML. Discussion led to the conclusion that real architectures and real hardware would be supported.
I am *very* aware.
I would never intentionally break support for other architectures. I know it is very important to you, Alan, and others.
Things like this are what make me reluctant to move the devicetree unittests to KUnit.
Hopefully I can reassure you then:
With Alan as a regular contributor who cares very much about non-UML architectures, it would be very unlikely for me to accidentally break support for other architectures without us finding out before a release.
I also periodically test KUnit on linux-next on x86-64. I have gotten bug reports for other architectures from Arnd Bergmann and one of the m68k maintainers, who seems to play around with it as well.
So yeah, other people care about this too, and I would really not want to make any of them unhappy.
Thanks for the extra reassurance.
Can you please add a section to the KUnit documentation that lists things like the expectations, requirements, limitations, etc for a test case that is run by KUnit? Some examples that pop to mind from recent discussions and my own experiences:
Each test case is invoked after late_init is complete.
- Exception: the possible value of being able to run a unit test at a specific runlevel has been expressed. If an actual unit test can be shown to require running earlier, this restriction will be re-visited.
Each test case must be idempotent. Each test case may be called multiple times, and must generate the same result each time it is called.
- Exception 1: a test case can be declared to not be idempotent [[ mechanism TBD ]], in which case KUnit will not call the test case a second time without the kernel rebooting.
- Exception 2: hardware may not be deterministic, so a test that always passes or fails when run under UML may not always do so on real hardware. <--- sentence copied from Documentation/dev-tools/kunit/usage.rst [[ This item and 1st exception do not exist yet, but will exist in some form if the proposed proc filesystem interface is added. ]]
KUnit provides a helpful wrapper to simplify building a UML kernel containing the KUnit test cases, booting the UML kernel, and formatting the output from the test cases. This wrapper MUST NOT be required to run the test cases or to determine a test result. The formatting may provide additional analysis and improve readability of a test result.
.... There is more that belongs here, but I'm getting side tracked here, when I'm trying to instead convert devicetree unittests to KUnit and want to get back to that.
Sure, I think that's a great start! Thanks for that. I hope you don't mind if I copy and paste some of it.
Please do. And no need to credit me.
It kind of sounds like you are talking about more of a requirements doc than the design doc I was imagining in my reply to you on the cover letter, which is fine. The documentation is primarily for people other than me, so whatever you and others think is useful, I will do.
I wasn't really sure what to label it as. My inspiration was based a little bit on reading through the Linux 5.5 KUnit source and documentation, and trying to understand the expectations of the KUnit framework and what the test cases have to either obey or can expect.
I think there is a lot of history that you know, but is only visible to test implementors if they read through the past couple of years email threads.
-Frank
On Tue, Feb 4, 2020 at 3:17 PM Frank Rowand frowand.list@gmail.com wrote:
On 2/4/20 4:30 PM, Brendan Higgins wrote:
On Tue, Feb 4, 2020 at 1:59 PM Frank Rowand frowand.list@gmail.com wrote:
On 1/30/20 5:08 PM, Brendan Higgins wrote:
Add a linker section to UML where KUnit can put references to its test suites. This patch is an early step in transitioning to dispatching all KUnit tests from a centralized executor rather than having each as its own separate late_initcall.
All architectures please.
I *am* supporting all architectures with this patchset.
The first patch in this series adds support to all architectures except UML (admittedly I only tried x86 and ARM, 32 bit and 64 bit for
Right you are. My mind did not span from patch 1 to patch 2. Apologies for the noise.
both, but I am pretty sure someone tried it for POWER and something else, so maybe I should try it with others before submission). A patch specific for UML, this patch, was needed because UML is a special snowflake and has a bunch of special linker scripts that don't make the change in vmlinux.lds.h (the previous patch) sufficient.
The early versions of KUnit documented reliance on UML. Discussion led to the conclusion that real architectures and real hardware would be supported.
I am *very* aware.
I would never intentionally break support for other architectures. I know it is very important to you, Alan, and others.
Things like this are what make me reluctant to move the devicetree unittests to KUnit.
Hopefully I can reassure you then:
With Alan as a regular contributor who cares very much about non-UML architectures, it would be very unlikely for me to accidentally break support for other architectures without us finding out before a release.
I also periodically test KUnit on linux-next on x86-64. I have gotten bug reports for other architectures from Arnd Bergmann and one of the m68k maintainers, who seems to play around with it as well.
So yeah, other people care about this too, and I would really not want to make any of them unhappy.
Thanks for the extra reassurance.
No worries. This actually makes me think that we should probably have some kind of automated test that at least makes sure that popular KUnit architectures are not broken. Someone is currently adding KUnit support to KernelCI; maybe we can have KernelCI run multiple architecture tests after the initial support is complete.
Can you please add a section to the KUnit documentation that lists things like the expectations, requirements, limitations, etc for a test case that is run by KUnit? Some examples that pop to mind from recent discussions and my own experiences:
Each test case is invoked after late_init is complete.
- Exception: the possible value of being able to run a unit test at a specific runlevel has been expressed. If an actual unit test can be shown to require running earlier, this restriction will be re-visited.
Each test case must be idempotent. Each test case may be called multiple times, and must generate the same result each time it is called.
- Exception 1: a test case can be declared to not be idempotent [[ mechanism TBD ]], in which case KUnit will not call the test case a second time without the kernel rebooting.
- Exception 2: hardware may not be deterministic, so a test that always passes or fails when run under UML may not always do so on real hardware. <--- sentence copied from Documentation/dev-tools/kunit/usage.rst [[ This item and 1st exception do not exist yet, but will exist in some form if the proposed proc filesystem interface is added. ]]
KUnit provides a helpful wrapper to simplify building a UML kernel containing the KUnit test cases, booting the UML kernel, and formatting the output from the test cases. This wrapper MUST NOT be required to run the test cases or to determine a test result. The formatting may provide additional analysis and improve readability of a test result.
.... There is more that belongs here, but I'm getting side tracked here, when I'm trying to instead convert devicetree unittests to KUnit and want to get back to that.
Sure, I think that's a great start! Thanks for that. I hope you don't mind if I copy and paste some of it.
Please do. And no need to credit me.
It kind of sounds like you are talking about more of a requirements doc than the design doc I was imagining in my reply to you on the cover letter, which is fine. The documentation is primarily for people other than me, so whatever you and others think is useful, I will do.
I wasn't really sure what to label it as. My inspiration was based a little bit on reading through the Linux 5.5 KUnit source and documentation, and trying to understand the expectations of the KUnit framework and what the test cases have to either obey or can expect.
I think there is a lot of history that you know, but is only visible to test implementors if they read through the past couple of years email threads.
Yeah, that's no good. We need to provide a better experience than that. David has gotten deeply involved relatively recently: I suspect that he might have some good insight on this.
David, you mentioned offline that there are some philosophical changes in how we think about KUnit that has happened that you feel have never quite been captured in the docs. Do you think this is part of what Frank has pointed out here? If not, do you have any thoughts about what we should call this documentation section?
Shuah's first KUnit PR seemed to imply that KUnit was primarily for UML, or only fully supported under UML. So I think I might be the odd one out thinking that that has changed and the documentation properly conveys that.
On Tue, Feb 4, 2020 at 5:17 PM 'Brendan Higgins' via KUnit Development kunit-dev@googlegroups.com wrote:
On Tue, Feb 4, 2020 at 3:17 PM Frank Rowand frowand.list@gmail.com wrote:
On 2/4/20 4:30 PM, Brendan Higgins wrote:
On Tue, Feb 4, 2020 at 1:59 PM Frank Rowand frowand.list@gmail.com wrote:
Can you please add a section to the KUnit documentation that lists things like the expectations, requirements, limitations, etc for a test case that is run by KUnit? Some examples that pop to mind from recent discussions and my own experiences:
Each test case is invoked after late_init is complete.
- Exception: the possible value of being able to run a unit test at a specific runlevel has been expressed. If an actual unit test can be shown to require running earlier, this restriction will be re-visited.
Each test case must be idempotent. Each test case may be called multiple times, and must generate the same result each time it is called.
- Exception 1: a test case can be declared to not be idempotent [[ mechanism TBD ]], in which case KUnit will not call the test case a second time without the kernel rebooting.
- Exception 2: hardware may not be deterministic, so a test that always passes or fails when run under UML may not always do so on real hardware. <--- sentence copied from Documentation/dev-tools/kunit/usage.rst [[ This item and 1st exception do not exist yet, but will exist in some form if the proposed proc filesystem interface is added. ]]
KUnit provides a helpful wrapper to simplify building a UML kernel containing the KUnit test cases, booting the UML kernel, and formatting the output from the test cases. This wrapper MUST NOT be required to run the test cases or to determine a test result. The formatting may provide additional analysis and improve readability of a test result.
.... There is more that belongs here, but I'm getting side tracked here, when I'm trying to instead convert devicetree unittests to KUnit and want to get back to that.
Sure, I think that's a great start! Thanks for that. I hope you don't mind if I copy and paste some of it.
Please do. And no need to credit me.
It kind of sounds like you are talking about more of a requirements doc than the design doc I was imagining in my reply to you on the cover letter, which is fine. The documentation is primarily for people other than me, so whatever you and others think is useful, I will do.
I wasn't really sure what to label it as. My inspiration was based a little bit on reading through the Linux 5.5 KUnit source and documentation, and trying to understand the expectations of the KUnit framework and what the test cases have to either obey or can expect.
I think there is a lot of history that you know, but is only visible to test implementors if they read through the past couple of years email threads.
Yeah, that's no good. We need to provide a better experience than that. David has gotten deeply involved relatively recently: I suspect that he might have some good insight on this.
David, you mentioned offline that there are some philosophical changes in how we think about KUnit that has happened that you feel have never quite been captured in the docs. Do you think this is part of what Frank has pointed out here? If not, do you have any thoughts about what we should call this documentation section?
Shuah's first KUnit PR seemed to imply that KUnit was primarily for UML, or only fully supported under UML. So I think I might be the odd one out thinking that that has changed and the documentation properly conveys that.
Yeah: I think the documentation could do with some improvements on these fronts: there are a few places which imply that KUnit requires UML, which is definitely not the case. We still want to encourage people to try UML: it's usually the quickest way of running tests, but KUnit itself should remain architecture-independent, as should as many tests as possible.
I think there are probably (at least) two different things that need doing to make these sorts of miscommunications less likely. The bulk of the documentation needs to stop referring to KUnit as something built upon or using UML (while still making it clear that some of our tooling defaults to or requires UML at the moment). This shows up in a lot of the "What is KUnit/Why use KUnit" sections in both the kernel docs and the KUnit website.
Secondly, we need to document the test environment (possibly alongside some test style best practises, possibly separately). Given that work like the debugfs stuff and how to support tests which need __init data or need to only run once is still ongoing, I'm not sure if we can definitively state what the solutions there will be yet, but noting that tests should not depend on a specific architecture like UML (unless they're testing architecture-specific code in a way that can't reasonably be made architecture independent) is worth doing sooner rather than later.
I'll see if I can put a few doc patches together to update/fix at least some of the most egregious examples.
Cheers, -- David
From: Alan Maguire <alan.maguire@oracle.com>
Add a centralized executor to dispatch tests rather than relying on late_initcall to schedule each test suite separately. Centralized execution is for built-in tests only; modules will execute tests when loaded.
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Co-developed-by: Iurii Zaikin <yzaikin@google.com>
Signed-off-by: Iurii Zaikin <yzaikin@google.com>
Co-developed-by: Brendan Higgins <brendanhiggins@google.com>
Signed-off-by: Brendan Higgins <brendanhiggins@google.com>
Reviewed-by: Stephen Boyd <sboyd@kernel.org>
---
 include/kunit/test.h | 73 +++++++++++++++++++++++++++-----------------
 lib/kunit/Makefile   |  3 +-
 lib/kunit/executor.c | 36 ++++++++++++++++++++++
 3 files changed, 83 insertions(+), 29 deletions(-)
 create mode 100644 lib/kunit/executor.c
diff --git a/include/kunit/test.h b/include/kunit/test.h
index 2dfb550c6723a..8a02f93a6b505 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -197,46 +197,63 @@
 void kunit_init_test(struct kunit *test, const char *name);

 int kunit_run_tests(struct kunit_suite *suite);
-/**
- * kunit_test_suites() - used to register one or more &struct kunit_suite
- * with KUnit.
- *
- * @suites: a statically allocated list of &struct kunit_suite.
- *
- * Registers @suites with the test framework. See &struct kunit_suite for
- * more information.
- *
- * When builtin, KUnit tests are all run as late_initcalls; this means
- * that they cannot test anything where tests must run at a different init
- * phase. One significant restriction resulting from this is that KUnit
- * cannot reliably test anything that is initialize in the late_init phase;
- * another is that KUnit is useless to test things that need to be run in
- * an earlier init phase.
- *
- * An alternative is to build the tests as a module. Because modules
- * do not support multiple late_initcall()s, we need to initialize an
- * array of suites for a module.
- *
- * TODO(brendanhiggins@google.com): Don't run all KUnit tests as
- * late_initcalls. I have some future work planned to dispatch all KUnit
- * tests from the same place, and at the very least to do so after
- * everything else is definitely initialized.
+/*
+ * If a test suite is built-in, module_init() gets translated into
+ * an initcall which we don't want as the idea is that for builtins
+ * the executor will manage execution. So ensure we do not define
+ * module_{init|exit} functions for the builtin case when registering
+ * suites via kunit_test_suites() below.
  */
-#define kunit_test_suites(...)						\
-	static struct kunit_suite *suites[] = { __VA_ARGS__, NULL};	\
-	static int kunit_test_suites_init(void)				\
+#ifdef MODULE
+#define kunit_test_suites_for_module(__suites)				\
+	static int __init kunit_test_suites_init(void)			\
 	{								\
+		struct kunit_suite *suites[] = (__suites);		\
 		unsigned int i;						\
+									\
 		for (i = 0; suites[i] != NULL; i++)			\
 			kunit_run_tests(suites[i]);			\
 		return 0;						\
 	}								\
-	late_initcall(kunit_test_suites_init);				\
+	module_init(kunit_test_suites_init);				\
+									\
 	static void __exit kunit_test_suites_exit(void)			\
 	{								\
 		return;							\
 	}								\
 	module_exit(kunit_test_suites_exit)
+#else
+#define kunit_test_suites_for_module(__suites)
+#endif /* MODULE */
+
+#define __kunit_test_suites(unique_array, unique_suites, ...)		\
+	static struct kunit_suite *unique_array[] = { __VA_ARGS__, NULL }; \
+	kunit_test_suites_for_module(unique_array);			\
+	static struct kunit_suite **unique_suites			\
+	__used __aligned(8) __section(.kunit_test_suites) = unique_array
+
+/**
+ * kunit_test_suites() - used to register one or more &struct kunit_suite
+ * with KUnit.
+ *
+ * @suites: a statically allocated list of &struct kunit_suite.
+ *
+ * Registers @suites with the test framework. See &struct kunit_suite for
+ * more information.
+ *
+ * When builtin, KUnit tests are all run via executor; this is done
+ * by placing the array of struct kunit_suite * in the .kunit_test_suites
+ * ELF section.
+ *
+ * An alternative is to build the tests as a module. Because modules do not
+ * support multiple initcall()s, we need to initialize an array of suites for a
+ * module.
+ */
+#define kunit_test_suites(...)						\
+	__kunit_test_suites(__UNIQUE_ID(array),				\
+			    __UNIQUE_ID(suites),			\
+			    __VA_ARGS__)
#define kunit_test_suite(suite) kunit_test_suites(&suite)
diff --git a/lib/kunit/Makefile b/lib/kunit/Makefile
index fab55649b69a5..c282f02ca066b 100644
--- a/lib/kunit/Makefile
+++ b/lib/kunit/Makefile
@@ -3,7 +3,8 @@ obj-$(CONFIG_KUNIT) += kunit.o
 kunit-objs +=				test.o \
					string-stream.o \
					assert.o \
-					try-catch.o
+					try-catch.o \
+					executor.o
obj-$(CONFIG_KUNIT_TEST) += kunit-test.o
diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c
new file mode 100644
index 0000000000000..6429927d598a5
--- /dev/null
+++ b/lib/kunit/executor.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <kunit/test.h>
+
+/*
+ * These symbols point to the .kunit_test_suites section and are defined in
+ * include/asm-generic/vmlinux.lds.h, and consequently must be extern.
+ */
+extern struct kunit_suite * const * const __kunit_suites_start[];
+extern struct kunit_suite * const * const __kunit_suites_end[];
+
+#if IS_BUILTIN(CONFIG_KUNIT)
+
+static int kunit_run_all_tests(void)
+{
+	struct kunit_suite * const * const *suites, * const *subsuite;
+	bool has_test_failed = false;
+
+	for (suites = __kunit_suites_start;
+	     suites < __kunit_suites_end;
+	     suites++) {
+		for (subsuite = *suites; *subsuite != NULL; subsuite++) {
+			if (kunit_run_tests(*subsuite))
+				has_test_failed = true;
+		}
+	}
+
+	if (has_test_failed)
+		return -EFAULT;
+
+	return 0;
+}
+
+late_initcall(kunit_run_all_tests);
+
+#endif /* IS_BUILTIN(CONFIG_KUNIT) */
On 1/30/20 5:08 PM, Brendan Higgins wrote:
From: Alan Maguire alan.maguire@oracle.com
Add a centralized executor to dispatch tests rather than relying on late_initcall to schedule each test suite separately. Centralized execution is for built-in tests only; modules will execute tests when loaded.
Signed-off-by: Alan Maguire alan.maguire@oracle.com Co-developed-by: Iurii Zaikin yzaikin@google.com Signed-off-by: Iurii Zaikin yzaikin@google.com Co-developed-by: Brendan Higgins brendanhiggins@google.com Signed-off-by: Brendan Higgins brendanhiggins@google.com Reviewed-by: Stephen Boyd sboyd@kernel.org
include/kunit/test.h | 73 +++++++++++++++++++++++++++----------------- lib/kunit/Makefile | 3 +- lib/kunit/executor.c | 36 ++++++++++++++++++++++ 3 files changed, 83 insertions(+), 29 deletions(-) create mode 100644 lib/kunit/executor.c
diff --git a/include/kunit/test.h b/include/kunit/test.h index 2dfb550c6723a..8a02f93a6b505 100644 --- a/include/kunit/test.h +++ b/include/kunit/test.h
The following fragment does not match the test.h in Linux 5.5 or 5.4-rc1 (as one possible earlier version), and we are not yet at Linux 5.5-rc1. (A simple way to check for the mismatch: 5.5 has kunit_test_suite() instead of kunit_test_suites().)
I know that there is an alternate tree where some of the development occurs. Can you please add a link in MAINTAINERS? And please note (at least in patch 0) what tree the series is developed against?
-Frank
On Tue, Feb 4, 2020 at 2:27 PM Frank Rowand frowand.list@gmail.com wrote:
On 1/30/20 5:08 PM, Brendan Higgins wrote:
From: Alan Maguire alan.maguire@oracle.com
Add a centralized executor to dispatch tests rather than relying on late_initcall to schedule each test suite separately. Centralized execution is for built-in tests only; modules will execute tests when loaded.
Signed-off-by: Alan Maguire alan.maguire@oracle.com Co-developed-by: Iurii Zaikin yzaikin@google.com Signed-off-by: Iurii Zaikin yzaikin@google.com Co-developed-by: Brendan Higgins brendanhiggins@google.com Signed-off-by: Brendan Higgins brendanhiggins@google.com Reviewed-by: Stephen Boyd sboyd@kernel.org
include/kunit/test.h | 73 +++++++++++++++++++++++++++----------------- lib/kunit/Makefile | 3 +- lib/kunit/executor.c | 36 ++++++++++++++++++++++ 3 files changed, 83 insertions(+), 29 deletions(-) create mode 100644 lib/kunit/executor.c
diff --git a/include/kunit/test.h b/include/kunit/test.h index 2dfb550c6723a..8a02f93a6b505 100644 --- a/include/kunit/test.h +++ b/include/kunit/test.h
The following fragment does not match the test.h in Linux 5.5 or 5.4-rc1 (as one possible earlier version), and we are not yet at Linux 5.5-rc1. (A simple way to check for the mismatch: 5.5 has kunit_test_suite() instead of kunit_test_suites().)
I know that there is an alternate tree where some of the development occurs. Can you please add a link in MAINTAINERS? And please note (at least in patch 0) what tree the series is developed against?
Yep, I was planning on sending an update to the MAINTAINERS with that and some other updates soon.
In future revisions, I will put a link to the tree I developed against. In the meantime, we send pull-requests from here:
https://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git/lo...
And that is what I used as my development base for this series.
Remove KUnit from init calls entirely; instead, call it directly from kernel_init().
Co-developed-by: Alan Maguire alan.maguire@oracle.com Signed-off-by: Alan Maguire alan.maguire@oracle.com Signed-off-by: Brendan Higgins brendanhiggins@google.com Reviewed-by: Stephen Boyd sboyd@kernel.org --- include/kunit/test.h | 9 +++++++++ init/main.c | 4 ++++ lib/kunit/executor.c | 4 +--- 3 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/include/kunit/test.h b/include/kunit/test.h index 8a02f93a6b505..8689dd1459844 100644 --- a/include/kunit/test.h +++ b/include/kunit/test.h @@ -197,6 +197,15 @@ void kunit_init_test(struct kunit *test, const char *name);
int kunit_run_tests(struct kunit_suite *suite);
+#if IS_BUILTIN(CONFIG_KUNIT) +int kunit_run_all_tests(void); +#else +static inline int kunit_run_all_tests(void) +{ + return 0; +} +#endif /* IS_BUILTIN(CONFIG_KUNIT) */ + /* * If a test suite is built-in, module_init() gets translated into * an initcall which we don't want as the idea is that for builtins diff --git a/init/main.c b/init/main.c index 2cd736059416f..90301d4fbd1bb 100644 --- a/init/main.c +++ b/init/main.c @@ -103,6 +103,8 @@ #define CREATE_TRACE_POINTS #include <trace/events/initcall.h>
+#include <kunit/test.h> + static int kernel_init(void *);
extern void init_IRQ(void); @@ -1201,6 +1203,8 @@ static noinline void __init kernel_init_freeable(void)
do_basic_setup();
+ kunit_run_all_tests(); + console_on_rootfs();
/* diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c index 6429927d598a5..b75a46c560847 100644 --- a/lib/kunit/executor.c +++ b/lib/kunit/executor.c @@ -11,7 +11,7 @@ extern struct kunit_suite * const * const __kunit_suites_end[];
#if IS_BUILTIN(CONFIG_KUNIT)
-static int kunit_run_all_tests(void) +int kunit_run_all_tests(void) { struct kunit_suite * const * const *suites, * const *subsuite; bool has_test_failed = false; @@ -31,6 +31,4 @@ static int kunit_run_all_tests(void) return 0; }
-late_initcall(kunit_run_all_tests); - #endif /* IS_BUILTIN(CONFIG_KUNIT) */
On Thu, Jan 30, 2020 at 03:08:09PM -0800, Brendan Higgins wrote:
Remove KUnit from init calls entirely; instead, call it directly from kernel_init().
Co-developed-by: Alan Maguire alan.maguire@oracle.com Signed-off-by: Alan Maguire alan.maguire@oracle.com Signed-off-by: Brendan Higgins brendanhiggins@google.com Reviewed-by: Stephen Boyd sboyd@kernel.org
Reviewed-by: Luis Chamberlain mcgrof@kernel.org
In particular the placement and strategy matches my expectations and allows us to think of __init as a separate beast, as we should.
Luis
TAP 14 allows an optional test plan to be emitted before the start of testing[1]; this is valuable because it makes it possible for a test harness to detect whether the number of tests run matches the number of tests expected to be run, ensuring that no tests silently failed.
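The harness-side check that the test plan enables can be sketched as follows (a simplified illustration, not the actual kunit_parser.py code):

```python
import re

# Given raw TAP output, verify that the number of top-level results
# matches the declared plan; a shortfall means tests silently failed.
def plan_matches(tap_lines):
    plan = None
    results = 0
    for line in tap_lines:
        m = re.match(r'^1\.\.([0-9]+)$', line)
        if m:
            plan = int(m.group(1))
        elif re.match(r'^(ok|not ok) ', line):  # top-level, not indented
            results += 1
    return plan is not None and plan == results

output = ["TAP version 14", "1..2", "ok 1 - suite_a", "not ok 2 - suite_b"]
print(plan_matches(output))       # True: both declared suites reported
print(plan_matches(output[:-1]))  # False: one suite never reported a result
```

Without the plan line, the second case is indistinguishable from a run that only ever had one suite.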
Link[1]: https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-s... Signed-off-by: Brendan Higgins brendanhiggins@google.com Reviewed-by: Stephen Boyd sboyd@kernel.org --- lib/kunit/executor.c | 17 +++++ lib/kunit/test.c | 11 --- tools/testing/kunit/kunit_parser.py | 74 ++++++++++++++++--- .../test_is_test_passed-all_passed.log | 1 + .../test_data/test_is_test_passed-crash.log | 1 + .../test_data/test_is_test_passed-failure.log | 1 + 6 files changed, 82 insertions(+), 23 deletions(-)
diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c index b75a46c560847..7fd16feff157e 100644 --- a/lib/kunit/executor.c +++ b/lib/kunit/executor.c @@ -11,11 +11,28 @@ extern struct kunit_suite * const * const __kunit_suites_end[];
#if IS_BUILTIN(CONFIG_KUNIT)
+static void kunit_print_tap_header(void) +{ + struct kunit_suite * const * const *suites, * const *subsuite; + int num_of_suites = 0; + + for (suites = __kunit_suites_start; + suites < __kunit_suites_end; + suites++) + for (subsuite = *suites; *subsuite != NULL; subsuite++) + num_of_suites++; + + pr_info("TAP version 14\n"); + pr_info("1..%d\n", num_of_suites); +} + int kunit_run_all_tests(void) { struct kunit_suite * const * const *suites, * const *subsuite; bool has_test_failed = false;
+ kunit_print_tap_header(); + for (suites = __kunit_suites_start; suites < __kunit_suites_end; suites++) { diff --git a/lib/kunit/test.c b/lib/kunit/test.c index 9242f932896c7..da56b94261b43 100644 --- a/lib/kunit/test.c +++ b/lib/kunit/test.c @@ -18,16 +18,6 @@ static void kunit_set_failure(struct kunit *test) WRITE_ONCE(test->success, false); }
-static void kunit_print_tap_version(void) -{ - static bool kunit_has_printed_tap_version; - - if (!kunit_has_printed_tap_version) { - pr_info("TAP version 14\n"); - kunit_has_printed_tap_version = true; - } -} - static size_t kunit_test_cases_len(struct kunit_case *test_cases) { struct kunit_case *test_case; @@ -41,7 +31,6 @@ static size_t kunit_test_cases_len(struct kunit_case *test_cases)
static void kunit_print_subtest_start(struct kunit_suite *suite) { - kunit_print_tap_version(); pr_info("\t# Subtest: %s\n", suite->name); pr_info("\t1..%zd\n", kunit_test_cases_len(suite->test_cases)); } diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py index 4ffbae0f67325..78b3bdd03b1e4 100644 --- a/tools/testing/kunit/kunit_parser.py +++ b/tools/testing/kunit/kunit_parser.py @@ -45,6 +45,7 @@ class TestStatus(Enum): FAILURE = auto() TEST_CRASHED = auto() NO_TESTS = auto() + FAILURE_TO_PARSE_TESTS = auto()
kunit_start_re = re.compile(r'^TAP version [0-9]+$') kunit_end_re = re.compile('List of all partitions:') @@ -106,7 +107,7 @@ OkNotOkResult = namedtuple('OkNotOkResult', ['is_ok','description', 'text'])
OK_NOT_OK_SUBTEST = re.compile(r'^\t(ok|not ok) [0-9]+ - (.*)$')
-OK_NOT_OK_MODULE = re.compile(r'^(ok|not ok) [0-9]+ - (.*)$') +OK_NOT_OK_MODULE = re.compile(r'^(ok|not ok) ([0-9]+) - (.*)$')
def parse_ok_not_ok_test_case(lines: List[str], test_case: TestCase, @@ -196,7 +197,9 @@ def max_status(left: TestStatus, right: TestStatus) -> TestStatus: else: return TestStatus.SUCCESS
-def parse_ok_not_ok_test_suite(lines: List[str], test_suite: TestSuite) -> bool: +def parse_ok_not_ok_test_suite(lines: List[str], + test_suite: TestSuite, + expected_suite_index: int) -> bool: consume_non_diagnositic(lines) if not lines: test_suite.status = TestStatus.TEST_CRASHED @@ -209,6 +212,12 @@ def parse_ok_not_ok_test_suite(lines: List[str], test_suite: TestSuite) -> bool: test_suite.status = TestStatus.SUCCESS else: test_suite.status = TestStatus.FAILURE + suite_index = int(match.group(2)) + if suite_index != expected_suite_index: + print_with_timestamp( + red('[ERROR] ') + 'expected_suite_index ' + + str(expected_suite_index) + ', but got ' + + str(suite_index)) return True else: return False @@ -221,7 +230,7 @@ def bubble_up_test_case_errors(test_suite: TestSuite) -> TestStatus: max_test_case_status = bubble_up_errors(lambda x: x.status, test_suite.cases) return max_status(max_test_case_status, test_suite.status)
-def parse_test_suite(lines: List[str]) -> TestSuite: +def parse_test_suite(lines: List[str], expected_suite_index: int) -> TestSuite: if not lines: return None consume_non_diagnositic(lines) @@ -240,7 +249,7 @@ def parse_test_suite(lines: List[str]) -> TestSuite: test_suite.cases.append(test_case) test_case = parse_test_case(lines, expected_test_case_num > 0) expected_test_case_num -= 1 - if parse_ok_not_ok_test_suite(lines, test_suite): + if parse_ok_not_ok_test_suite(lines, test_suite, expected_suite_index): test_suite.status = bubble_up_test_case_errors(test_suite) return test_suite elif not lines: @@ -260,6 +269,17 @@ def parse_tap_header(lines: List[str]) -> bool: else: return False
+TEST_PLAN = re.compile(r'[0-9]+..([0-9]+)') + +def parse_test_plan(lines: List[str]) -> int: + consume_non_diagnositic(lines) + match = TEST_PLAN.match(lines[0]) + if match: + lines.pop(0) + return int(match.group(1)) + else: + return None + def bubble_up_suite_errors(test_suite_list: List[TestSuite]) -> TestStatus: return bubble_up_errors(lambda x: x.status, test_suite_list)
@@ -268,19 +288,34 @@ def parse_test_result(lines: List[str]) -> TestResult: return TestResult(TestStatus.NO_TESTS, [], lines) consume_non_diagnositic(lines) if not parse_tap_header(lines): - return None + return TestResult(TestStatus.NO_TESTS, [], lines) + expected_test_suite_num = parse_test_plan(lines) + if not expected_test_suite_num: + return TestResult(TestStatus.FAILURE_TO_PARSE_TESTS, [], lines) test_suites = [] - test_suite = parse_test_suite(lines) - while test_suite: - test_suites.append(test_suite) - test_suite = parse_test_suite(lines) - return TestResult(bubble_up_suite_errors(test_suites), test_suites, lines) + for i in range(1, expected_test_suite_num + 1): + test_suite = parse_test_suite(lines, i) + if test_suite: + test_suites.append(test_suite) + else: + print_with_timestamp( + red('[ERROR] ') + ' expected ' + + str(expected_test_suite_num) + + ' test suites, but got ' + str(i - 2)) + break + test_suite = parse_test_suite(lines, -1) + if test_suite: + print_with_timestamp(red('[ERROR] ') + + 'got unexpected test suite: ' + test_suite.name) + if test_suites: + return TestResult(bubble_up_suite_errors(test_suites), test_suites, lines) + else: + return TestResult(TestStatus.NO_TESTS, [], lines)
-def parse_run_tests(kernel_output) -> TestResult: +def print_and_count_results(test_result: TestResult) -> None: total_tests = 0 failed_tests = 0 crashed_tests = 0 - test_result = parse_test_result(list(isolate_kunit_output(kernel_output))) for test_suite in test_result.suites: if test_suite.status == TestStatus.SUCCESS: print_suite_divider(green('[PASSED] ') + test_suite.name) @@ -302,6 +337,21 @@ def parse_run_tests(kernel_output) -> TestResult: print_with_timestamp(red('[FAILED] ') + test_case.name) print_log(map(yellow, test_case.log)) print_with_timestamp('') + return total_tests, failed_tests, crashed_tests + +def parse_run_tests(kernel_output) -> TestResult: + total_tests = 0 + failed_tests = 0 + crashed_tests = 0 + test_result = parse_test_result(list(isolate_kunit_output(kernel_output))) + if test_result.status == TestStatus.NO_TESTS: + print(red('[ERROR] ') + yellow('no tests run!')) + elif test_result.status == TestStatus.FAILURE_TO_PARSE_TESTS: + print(red('[ERROR] ') + yellow('could not parse test results!')) + else: + (total_tests, + failed_tests, + crashed_tests) = print_and_count_results(test_result) print_with_timestamp(DIVIDER) fmt = green if test_result.status == TestStatus.SUCCESS else red print_with_timestamp( diff --git a/tools/testing/kunit/test_data/test_is_test_passed-all_passed.log b/tools/testing/kunit/test_data/test_is_test_passed-all_passed.log index 62ebc0288355c..bc0dc8fe35b76 100644 --- a/tools/testing/kunit/test_data/test_is_test_passed-all_passed.log +++ b/tools/testing/kunit/test_data/test_is_test_passed-all_passed.log @@ -1,4 +1,5 @@ TAP version 14 +1..2 # Subtest: sysctl_test 1..8 # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed diff --git a/tools/testing/kunit/test_data/test_is_test_passed-crash.log b/tools/testing/kunit/test_data/test_is_test_passed-crash.log index 0b249870c8be4..4d97f6708c4a5 100644 --- a/tools/testing/kunit/test_data/test_is_test_passed-crash.log +++ 
b/tools/testing/kunit/test_data/test_is_test_passed-crash.log @@ -1,6 +1,7 @@ printk: console [tty0] enabled printk: console [mc-1] enabled TAP version 14 +1..2 # Subtest: sysctl_test 1..8 # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed diff --git a/tools/testing/kunit/test_data/test_is_test_passed-failure.log b/tools/testing/kunit/test_data/test_is_test_passed-failure.log index 9e89d32d5667a..7a416497e3bec 100644 --- a/tools/testing/kunit/test_data/test_is_test_passed-failure.log +++ b/tools/testing/kunit/test_data/test_is_test_passed-failure.log @@ -1,4 +1,5 @@ TAP version 14 +1..2 # Subtest: sysctl_test 1..8 # sysctl_test_dointvec_null_tbl_data: sysctl_test_dointvec_null_tbl_data passed
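In outline, the parser changes above boil down to two standalone checks, sketched here in isolation. (Note that the patch's TEST_PLAN pattern leaves the dots unescaped, so r'[0-9]+..([0-9]+)' would also match a string like '1ab2'; the escaped form below is what the pattern presumably intends.)

```python
import re

TEST_PLAN = re.compile(r'^[0-9]+\.\.([0-9]+)$')     # escaped, unlike the patch
OK_NOT_OK = re.compile(r'^(ok|not ok) ([0-9]+) - (.*)$')

def parse_plan(line):
    """Return the expected number of suites from a '1..N' plan line."""
    m = TEST_PLAN.match(line)
    return int(m.group(1)) if m else None

def check_suite_index(line, expected):
    """Check that a top-level result line carries the expected suite index."""
    m = OK_NOT_OK.match(line)
    return bool(m) and int(m.group(2)) == expected

print(parse_plan("1..2"))                           # 2
print(check_suite_index("ok 1 - sysctl_test", 1))   # True
print(check_suite_index("ok 2 - example", 1))       # False: index mismatch
```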
On 1/30/20 5:08 PM, Brendan Higgins wrote:
TAP 14 allows an optional test plan to be emitted before the start of testing[1]; this is valuable because it makes it possible for a test harness to detect whether the number of tests run matches the number of tests expected to be run, ensuring that no tests silently failed.
Link[1]: https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-s... Signed-off-by: Brendan Higgins brendanhiggins@google.com Reviewed-by: Stephen Boyd sboyd@kernel.org
lib/kunit/executor.c | 17 +++++ lib/kunit/test.c | 11 --- tools/testing/kunit/kunit_parser.py | 74 ++++++++++++++++--- .../test_is_test_passed-all_passed.log | 1 + .../test_data/test_is_test_passed-crash.log | 1 + .../test_data/test_is_test_passed-failure.log | 1 + 6 files changed, 82 insertions(+), 23 deletions(-)
diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c index b75a46c560847..7fd16feff157e 100644 --- a/lib/kunit/executor.c +++ b/lib/kunit/executor.c @@ -11,11 +11,28 @@ extern struct kunit_suite * const * const __kunit_suites_end[]; #if IS_BUILTIN(CONFIG_KUNIT)
+static void kunit_print_tap_header(void)
+{
+	struct kunit_suite * const * const *suites, * const *subsuite;
+	int num_of_suites = 0;
+
+	for (suites = __kunit_suites_start;
+	     suites < __kunit_suites_end;
+	     suites++)
+		for (subsuite = *suites; *subsuite != NULL; subsuite++)
+			num_of_suites++;
+
+	pr_info("TAP version 14\n");
+	pr_info("1..%d\n", num_of_suites);
+}
Subsuites are not in Linux 5.5, as far as I can tell, so it is hard to review this.
But I think this observation will still be valid even though I am not up to speed on subsuites.
num_of_suites counts all test suites declared (as built in). In Linux 5.5, each kunit_suite contains one or more test cases. Each test case leads to either "ok" or "not ok". So instead of counting the number of test suites, the number of test cases needs to be counted.
Ahhh, I think my lack of visibility of what tree this series is against is about to lead me astray! Are the test cases in the subsuite implemented as what TAP 14 calls "subtests"? If so, then the above code that counts suites instead of test cases would be correct.
I'm spinning my wheels now, since I don't know what I am reviewing against, so I'm going to stop reviewing, other than maybe a quick scan.
-Frank
On Tue, Feb 4, 2020 at 3:01 PM Frank Rowand frowand.list@gmail.com wrote:
On 1/30/20 5:08 PM, Brendan Higgins wrote:
TAP 14 allows an optional test plan to be emitted before the start of testing[1]; this is valuable because it makes it possible for a test harness to detect whether the number of tests run matches the number of tests expected to be run, ensuring that no tests silently failed.
Link[1]: https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-s... Signed-off-by: Brendan Higgins brendanhiggins@google.com Reviewed-by: Stephen Boyd sboyd@kernel.org
lib/kunit/executor.c | 17 +++++ lib/kunit/test.c | 11 --- tools/testing/kunit/kunit_parser.py | 74 ++++++++++++++++--- .../test_is_test_passed-all_passed.log | 1 + .../test_data/test_is_test_passed-crash.log | 1 + .../test_data/test_is_test_passed-failure.log | 1 + 6 files changed, 82 insertions(+), 23 deletions(-)
diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c index b75a46c560847..7fd16feff157e 100644 --- a/lib/kunit/executor.c +++ b/lib/kunit/executor.c @@ -11,11 +11,28 @@ extern struct kunit_suite * const * const __kunit_suites_end[];
#if IS_BUILTIN(CONFIG_KUNIT)
+static void kunit_print_tap_header(void)
+{
+	struct kunit_suite * const * const *suites, * const *subsuite;
+	int num_of_suites = 0;
+
+	for (suites = __kunit_suites_start;
+	     suites < __kunit_suites_end;
+	     suites++)
+		for (subsuite = *suites; *subsuite != NULL; subsuite++)
+			num_of_suites++;
+
+	pr_info("TAP version 14\n");
+	pr_info("1..%d\n", num_of_suites);
+}
Subsuites are not in Linux 5.5, as far as I can tell, so it is hard to review this.
But I think this observation will still be valid even though I am not up to speed on subsuites.
"substitutes" are just an artifact from having to group together all suites that occur in a module. It should be mostly immaterial to most users, but we have to have all the suites that occur in a module grouped together in order to support KUnit tests as loadable modules.
I just call them subsuites here because I didn't know what else to call them. Still, the relationship between them should be mostly ignored by users as you seem to recognize below.
Also, module support for KUnit was just recently accepted into torvalds/master:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
You should be able to apply this change to torvalds/master now.
num_of_suites is all test suites declared (as built in). In Linux 5.5, each kunit_suite contains one or more test cases. Each test case leads to either "ok" or "not ok". So instead of counting the number of test suites, the number of test cases needs to be counted.
Nope, but I think you understand correctly below...
Ahhh, I think my lack of visibility of what tree this series is against is about to lead me astray! Are the test cases in the subsuite implemented as what TAP 14 calls "subtests"? If so, then the above code that counts suites instead of test cases would be correct.
Yep! You got it! Test cases are "subtests" in TAP 14. So there is a ok/not ok line for each suite, and nested as subtests above that are indented ok/not ok lines for each test case. KUnit already does this today. We are just moving some of the printing logic out so we can count the suites and print a test plan.
Technically, what we print today (before this change) is legal TAP 14. The test plan is not required. However, the test plan is very helpful as it makes it possible to know if some tests just never ran for some reason.
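To make the suite/subtest relationship concrete, here is a rough sketch of what the output looks like once the plan is added, along with the check a harness can now perform. The suite and test names below are invented for illustration; the exact indentation and phrasing of real KUnit output may differ.

```python
import re

# Hypothetical KUnit TAP 14 output after this change. The top-level
# plan line ("1..2") counts *suites*; each suite's test cases appear
# as indented subtest lines above the suite's own ok/not ok line.
sample = """\
TAP version 14
1..2
\t# Subtest: example-suite
\t1..2
\tok 1 - example_test_foo
\tok 2 - example_test_bar
ok 1 - example-suite
\t# Subtest: another-suite
\t1..1
\tok 1 - another_test
ok 2 - another-suite
"""

# With a plan present, a harness can verify that the number of
# top-level (unindented) result lines matches the declared count,
# catching tests that silently never ran.
plan = int(re.search(r"^1\.\.(\d+)$", sample, re.MULTILINE).group(1))
results = [l for l in sample.splitlines() if re.match(r"(not )?ok \d+", l)]
assert len(results) == plan
```

This is exactly the gap the test plan closes: without the "1..N" line, a run that dropped a suite entirely would still look like a clean pass.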
I'm spinning my wheels now, since I don't know what I am reviewing against, so I'm going to stop reviewing, other than maybe a quick scan.
I just applied this patch series against torvalds/master which I pulled this morning, and it applied cleanly. I will specify in any subsequent revisions that this is based on linux-kselftest/kunit since that's where these patches will be applied, but they should apply against any rc of 5.6 just fine once they are cut.
Cheers!
From: David Gow davidgow@google.com
Add a new kernel command-line option, 'kunit_shutdown', which allows the user to specify that the kernel poweroff, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM so that the UML/VM process exits cleanly immediately after running all tests without needing a special initramfs.
Signed-off-by: David Gow davidgow@google.com
Signed-off-by: Brendan Higgins brendanhiggins@google.com
Reviewed-by: Stephen Boyd sboyd@kernel.org
---
 lib/kunit/executor.c                | 20 ++++++++++++++++++++
 tools/testing/kunit/kunit_kernel.py |  2 +-
 tools/testing/kunit/kunit_parser.py |  2 +-
 3 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/lib/kunit/executor.c b/lib/kunit/executor.c
index 7fd16feff157e..a93821116ccec 100644
--- a/lib/kunit/executor.c
+++ b/lib/kunit/executor.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
+#include <linux/reboot.h>
 #include <kunit/test.h>

 /*
@@ -11,6 +12,23 @@ extern struct kunit_suite * const * const __kunit_suites_end[];

 #if IS_BUILTIN(CONFIG_KUNIT)

+static char *kunit_shutdown;
+core_param(kunit_shutdown, kunit_shutdown, charp, 0644);
+
+static void kunit_handle_shutdown(void)
+{
+	if (!kunit_shutdown)
+		return;
+
+	if (!strcmp(kunit_shutdown, "poweroff"))
+		kernel_power_off();
+	else if (!strcmp(kunit_shutdown, "halt"))
+		kernel_halt();
+	else if (!strcmp(kunit_shutdown, "reboot"))
+		kernel_restart(NULL);
+
+}
+
 static void kunit_print_tap_header(void)
 {
 	struct kunit_suite * const * const *suites, * const *subsuite;
@@ -42,6 +60,8 @@ int kunit_run_all_tests(void)
 		}
 	}
+	kunit_handle_shutdown();
+
 	if (has_test_failed)
 		return -EFAULT;
diff --git a/tools/testing/kunit/kunit_kernel.py b/tools/testing/kunit/kunit_kernel.py
index cc5d844ecca13..43314aa537d30 100644
--- a/tools/testing/kunit/kunit_kernel.py
+++ b/tools/testing/kunit/kunit_kernel.py
@@ -141,7 +141,7 @@ class LinuxSourceTree(object):
 		return True

 	def run_kernel(self, args=[], timeout=None, build_dir=''):
-		args.extend(['mem=256M'])
+		args.extend(['mem=256M', 'kunit_shutdown=halt'])
 		process = self._ops.linux_bin(args, timeout, build_dir)
 		with open(os.path.join(build_dir, 'test.log'), 'w') as f:
 			for line in process.stdout:
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 78b3bdd03b1e4..633811dd9bce8 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -48,7 +48,7 @@ class TestStatus(Enum):
 	FAILURE_TO_PARSE_TESTS = auto()

 kunit_start_re = re.compile(r'^TAP version [0-9]+$')
-kunit_end_re = re.compile('List of all partitions:')
+kunit_end_re = re.compile(r'reboot: System halted')

 def isolate_kunit_output(kernel_output):
 	started = False
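As a sketch of how the parser uses these two regexes, isolate_kunit_output() yields everything from the TAP header up to the new halt message. The log lines below are illustrative boot output, not captured from a real run:

```python
import re

kunit_start_re = re.compile(r'^TAP version [0-9]+$')
kunit_end_re = re.compile(r'reboot: System halted')

def isolate_kunit_output(kernel_output):
    # Yield only the lines from the TAP header (inclusive) up to the
    # shutdown message (exclusive); everything else is boot noise.
    started = False
    for line in kernel_output:
        if kunit_start_re.match(line):
            started = True
            yield line
        elif kunit_end_re.match(line):
            break
        elif started:
            yield line

log = [
    'Linux version 5.6.0-rc1 (illustrative boot noise)',
    'TAP version 14',
    '1..1',
    'ok 1 - example-suite',
    'reboot: System halted',
]
print(list(isolate_kunit_output(log)))
```

This is why the end marker changes here: with kunit_shutdown=halt passed by the tooling, "reboot: System halted" reliably follows the last test line, whereas "List of all partitions:" only appeared when the kernel went on to panic looking for a root filesystem.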
On 1/30/20 5:08 PM, Brendan Higgins wrote:
From: David Gow davidgow@google.com
Add a new kernel command-line option, 'kunit_shutdown', which allows the user to specify that the kernel poweroff, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM so that the UML/VM process exits cleanly immediately after running all tests without needing a special initramfs.
kunit_shutdown needs to be added to Documentation/admin-guide/kernel-parameters.txt
-Frank
On 2/4/20 5:05 PM, Frank Rowand wrote:
On 1/30/20 5:08 PM, Brendan Higgins wrote:
From: David Gow davidgow@google.com
Add a new kernel command-line option, 'kunit_shutdown', which allows the user to specify that the kernel poweroff, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM so that the UML/VM process exits cleanly immediately after running all tests without needing a special initramfs.
kunit_shutdown needs to be added to Documentation/admin-guide/kernel-parameters.txt
Oops, patch 7/7 does exactly that.
-Frank
Add kunit_shutdown, an option to specify that the kernel shuts down after running KUnit tests, to the kernel-parameters.txt documentation.
Signed-off-by: Brendan Higgins brendanhiggins@google.com
Reviewed-by: Stephen Boyd sboyd@kernel.org
---
 Documentation/admin-guide/kernel-parameters.txt | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index ade4e6ec23e03..522fd8bdec949 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2054,6 +2054,13 @@
 			0: force disabled
 			1: force enabled

+	kunit_shutdown	[KERNEL UNIT TESTING FRAMEWORK] Shutdown kernel after
+			running tests.
+			Default: (flag not present) don't shutdown
+			poweroff: poweroff the kernel after running tests
+			halt: halt the kernel after running tests
+			reboot: reboot the kernel after running tests
+
 	kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
 			Default is 0 (don't ignore, but inject #GP)
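As a usage sketch, this is what a manual UML invocation with the new parameter looks like (the binary name and memory size here are illustrative; kunit.py passes kunit_shutdown=halt automatically via run_kernel()):

```
./linux mem=256M kunit_shutdown=poweroff
```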
On 1/30/20 5:08 PM, Brendan Higgins wrote:
Add kunit_shutdown, an option to specify that the kernel shuts down after running KUnit tests, to the kernel-parameters.txt documentation.
Signed-off-by: Brendan Higgins brendanhiggins@google.com Reviewed-by: Stephen Boyd sboyd@kernel.org
Documentation/admin-guide/kernel-parameters.txt | 7 +++++++ 1 file changed, 7 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index ade4e6ec23e03..522fd8bdec949 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2054,6 +2054,13 @@ 0: force disabled 1: force enabled
+	kunit_shutdown	[KERNEL UNIT TESTING FRAMEWORK] Shutdown kernel after
+			running tests.

			running built-in tests. Tests configured as modules will not be run.

My wording might not be consistent with KUnit terminology regarding "built-in" and "modules". Feel free to properly word smith.

+			Default: (flag not present) don't shutdown
+			poweroff: poweroff the kernel after running tests
+			halt: halt the kernel after running tests
+			reboot: reboot the kernel after running tests

 	kvm.ignore_msrs=[KVM] Ignore guest accesses to unhandled MSRs.
 			Default is 0 (don't ignore, but inject #GP)
Quoting Brendan Higgins (2020-01-30 15:08:05)
## TL;DR
This patchset adds a centralized executor to dispatch tests rather than relying on late_initcall to schedule each test suite separately along with a couple of new features that depend on it.
Is there any diff from v1 to v2? I don't know what changed, but I see that my Reviewed-by tag has been put on everything, so I guess everything I said was addressed or discussed in the previous round.
On Mon, Feb 3, 2020 at 11:19 PM Stephen Boyd sboyd@kernel.org wrote:
Quoting Brendan Higgins (2020-01-30 15:08:05)
## TL;DR
This patchset adds a centralized executor to dispatch tests rather than relying on late_initcall to schedule each test suite separately along with a couple of new features that depend on it.
Is there any diff from v1 to v2? I don't know what changed, but I see that my Reviewed-by tag has been put on everything, so I guess everything I said was addressed or discussed in the previous round.
Oh yes, sorry about that. I have gotten a bit lazy in regard to changelogs. I noticed that a lot of people don't seem to care. I'll make a note that you do.
Changes since last revision: - On patch 6/7, I flipped the include order and removed braces from the if statements. - On patch 7/7, I removed the periods from the short descriptions.
On 2/4/20 1:35 PM, Brendan Higgins wrote:
On Mon, Feb 3, 2020 at 11:19 PM Stephen Boyd sboyd@kernel.org wrote:
Quoting Brendan Higgins (2020-01-30 15:08:05)
## TL;DR
This patchset adds a centralized executor to dispatch tests rather than relying on late_initcall to schedule each test suite separately along with a couple of new features that depend on it.
Is there any diff from v1 to v2? I don't know what changed, but I see that my Reviewed-by tag has been put on everything, so I guess everything I said was addressed or discussed in the previous round.
Oh yes, sorry about that. I have gotten a bit lazy in regard to changelogs. I noticed that a lot of people don't seem to care. I'll make a note that you do.
Please ignore those who don't care. Just always include a change log.
You may encounter bike shedding about where the log belongs (in patch 0, in the modified patches, in both locations). The color of my bike shed is simply that they exist somewhere, but my most favorite color is both places.
Changes since last revision:
- On patch 6/7, I flipped the include order and removed braces from the if statements.
- On patch 7/7, I removed the periods from the short descriptions.
On 1/30/20 5:08 PM, Brendan Higgins wrote:
## TL;DR
This patchset adds a centralized executor to dispatch tests rather than relying on late_initcall to schedule each test suite separately along with a couple of new features that depend on it.
And the "couple of new features" are .... ?
## What am I trying to do?
Conceptually, I am trying to provide a mechanism by which test suites can be grouped together so that they can be reasoned about collectively. The last two of three patches in this series add features which depend on this:
PATCH 5/7 Prints out a test plan right before KUnit tests are run[1]; this is valuable because it makes it possible for a test harness to detect whether the number of tests run matches the number of tests expected to be run, ensuring that no tests silently failed.
PATCH 6/7 Add a new kernel command-line option which allows the user to specify that the kernel poweroff, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM so that the UML/VM process exits cleanly immediately after running all tests without needing a special initramfs.
In addition, by dispatching tests from a single location, we can guarantee that all KUnit tests run after late_init is complete, which
That the tests will run after late init (and are guaranteed to do such) needs to be added to the documentation.
-Frank
was a concern during the initial KUnit patchset review (this has not been a problem in practice, but resolving with certainty is nevertheless desirable).
On Tue, Feb 4, 2020 at 1:18 PM Frank Rowand frowand.list@gmail.com wrote:
On 1/30/20 5:08 PM, Brendan Higgins wrote:
## TL;DR
This patchset adds a centralized executor to dispatch tests rather than relying on late_initcall to schedule each test suite separately along with a couple of new features that depend on it.
And the "couple of new features" are .... ?
Sorry, I thought I addressed that in the next section, specifically by calling out: printing the test plan, and adding the new kernel command line option. I will call these out here in any future cover letters.
## What am I trying to do?
Conceptually, I am trying to provide a mechanism by which test suites can be grouped together so that they can be reasoned about collectively. The last two of three patches in this series add features which depend on this:
PATCH 5/7 Prints out a test plan right before KUnit tests are run[1]; this is valuable because it makes it possible for a test harness to detect whether the number of tests run matches the number of tests expected to be run, ensuring that no tests silently failed.
PATCH 6/7 Add a new kernel command-line option which allows the user to specify that the kernel poweroff, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM so that the UML/VM process exits cleanly immediately after running all tests without needing a special initramfs.
In addition, by dispatching tests from a single location, we can guarantee that all KUnit tests run after late_init is complete, which
That the tests will run after late init (and are guaranteed to do such) needs to be added to the documentation.
Yeah, that's reasonable. I am not sure where I should put this in the documentation, however. This seems kind of a technical detail, and all the pages I have now are more of how-tos, I think.
Maybe I should send a patch which adds a page detailing how KUnit works?
I would like to get some other people's thoughts on this. Such a technical guide wouldn't provide me a lot of value, at least not now, so I want to make sure that something like that would be valuable to others.