## TL;DR
This patchset adds a centralized executor to dispatch tests, rather than
relying on late_initcall to schedule each test suite separately, along
with a couple of new features that depend on it.
Also, sorry for the delay in getting this new revision out. I have been
really busy for the past couple weeks.
## What am I trying to do?
Conceptually, I am trying to provide a mechanism by which test suites
can be grouped together so that they can be reasoned about collectively.
Two of the last three patches in this series add features which depend
on this:
PATCH 5/7 Prints out a test plan[1] right before KUnit tests are run;
this is valuable because it makes it possible for a test
harness to detect whether the number of tests run matches the
number of tests expected to be run, ensuring that no tests
silently failed. The test plan includes a count of tests that
will run. With the centralized executor, the tests are located
in a single data structure and thus can be counted.
PATCH 6/7 Adds a new kernel command-line option which allows the user to
specify that the kernel poweroff, halt, or reboot after
completing all KUnit tests; this is very handy for running
KUnit tests on UML or a VM so that the UML/VM process exits
cleanly immediately after running all tests without needing a
special initramfs. The centralized executor provides a
definitive point when all tests have completed and the
poweroff, halt, or reboot could occur.
In addition, by dispatching tests from a single location, we can
guarantee that all KUnit tests run after late_init is complete, which
was a concern during the initial KUnit patchset review (this has not
been a problem in practice, but resolving it with certainty is nevertheless
desirable).
Other use cases for this exist, but the above features should provide an
idea of the value that this could provide.
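To illustrate the value of the test plan in PATCH 5/7: a harness can compare the count in the "1..N" plan line against the number of result lines it actually saw. A minimal sketch in Python, in the spirit of what a harness such as kunit_parser.py could do (this is illustrative, not the actual parser code):

```python
import re

def count_matches_plan(output_lines):
    """Return True if the number of result lines matches the test plan,
    i.e. no test silently failed to run."""
    plan = None
    results = 0
    for line in output_lines:
        stripped = line.strip()
        m = re.match(r"^1\.\.(\d+)$", stripped)
        if m:
            plan = int(m.group(1))
        elif re.match(r"^(ok|not ok) \d+", stripped):
            results += 1
    return plan is not None and plan == results
```

A truncated run (say, a crash after the first of two planned tests) then fails this check even though no "not ok" line was ever printed.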
## Changes since last revision:
- On patch 7/7, I added some additional wording around the
kunit_shutdown command line option explaining that it runs after
built-in tests as suggested by Frank.
- On the cover letter, I improved some wording and added a missing link.
I also specified the base-commit for the series.
- Frank asked for some changes to the documentation; however, David is
taking care of that in a separate patch[2], so I did not make those
changes here. There will be some additional changes necessary
after David's patch is applied.
Alan Maguire (1):
kunit: test: create a single centralized executor for all tests
Brendan Higgins (5):
vmlinux.lds.h: add linker section for KUnit test suites
arch: um: add linker section for KUnit test suites
init: main: add KUnit to kernel init
kunit: test: add test plan to KUnit TAP format
Documentation: Add kunit_shutdown to kernel-parameters.txt
David Gow (1):
kunit: Add 'kunit_shutdown' option
.../admin-guide/kernel-parameters.txt | 8 ++
arch/um/include/asm/common.lds.S | 4 +
include/asm-generic/vmlinux.lds.h | 8 ++
include/kunit/test.h | 82 ++++++++++++-------
init/main.c | 4 +
lib/kunit/Makefile | 3 +-
lib/kunit/executor.c | 71 ++++++++++++++++
lib/kunit/test.c | 11 ---
tools/testing/kunit/kunit_kernel.py | 2 +-
tools/testing/kunit/kunit_parser.py | 76 ++++++++++++++---
.../test_is_test_passed-all_passed.log | 1 +
.../test_data/test_is_test_passed-crash.log | 1 +
.../test_data/test_is_test_passed-failure.log | 1 +
13 files changed, 218 insertions(+), 54 deletions(-)
create mode 100644 lib/kunit/executor.c
base-commit: a2f0b878c3ca531a1706cb2a8b079cea3b17bafc
[1] https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-…
[2] https://patchwork.kernel.org/patch/11383635/
--
2.25.1.481.gfbce0eb801-goog
The arm64 signal tests generate warnings during build since both they and
the toplevel lib.mk define a clean target:
Makefile:25: warning: overriding recipe for target 'clean'
../../lib.mk:126: warning: ignoring old recipe for target 'clean'
Since the inclusion of lib.mk is done in the signal Makefile, there is
no situation in which this warning could be avoided, so just remove the
redundant clean target.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
tools/testing/selftests/arm64/signal/Makefile | 4 ----
1 file changed, 4 deletions(-)
diff --git a/tools/testing/selftests/arm64/signal/Makefile b/tools/testing/selftests/arm64/signal/Makefile
index b497cfea4643..ac4ad0005715 100644
--- a/tools/testing/selftests/arm64/signal/Makefile
+++ b/tools/testing/selftests/arm64/signal/Makefile
@@ -21,10 +21,6 @@ include ../../lib.mk
$(TEST_GEN_PROGS): $(PROGS)
cp $(PROGS) $(OUTPUT)/
-clean:
- $(CLEAN)
- rm -f $(PROGS)
-
# Common test-unit targets to build common-layout test-cases executables
# Needs secondary expansion to properly include the testcase c-file in pre-reqs
.SECONDEXPANSION:
--
2.20.1
Tim Bird started a thread [1] proposing that he document the selftest result
format used by Linux kernel tests.
[1] https://lore.kernel.org/r/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1…
The issue of messages generated by the kernel being tested (that are not
messages directly created by the tests, but are instead triggered as a
side effect of the test) came up. In this thread, I will call these
messages "expected messages". Instead of sidetracking that thread with
a proposal to handle expected messages, I am starting this new thread.
I implemented an API for expected messages that are triggered by tests
in the Devicetree unittest code, with the expectation that the specific
details may change when the Devicetree unittest code adapts the KUnit
API. It seems appropriate to incorporate the concept of expected
messages in Tim's documentation instead of waiting to address the
subject when the Devicetree unittest code adapts the KUnit API, since
Tim's document may become the kernel selftest standard.
Instead of creating a very long email containing multiple objects,
I will reply to this email with a separate reply for each of:
The "expected messages" API implementation and use can be seen in
drivers/of/unittest.c in the mainline kernel.
of_unittest_expect - A proof-of-concept Perl program to filter console
output containing expected messages
of_unittest_expect is also available by cloning
https://github.com/frowand/dt_tools.git
An example raw console output with timestamps and expected messages.
An example of console output processed by filter program
of_unittest_expect to be more human readable. The expected
messages are not removed, but are flagged.
An example of console output processed by filter program
of_unittest_expect to be more human readable. The expected
messages are removed instead of being flagged.
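The actual filter is the of_unittest_expect Perl program described above; purely as an illustration of the flag-versus-remove behavior, a hypothetical Python version might look like the following (the pattern list and the "EXPECTED: " prefix are invented for this sketch):

```python
import re

def filter_expected(lines, patterns, remove=False):
    """Flag (default) or remove console lines that match a list of
    expected-message regex patterns; pass all other lines through."""
    out = []
    for line in lines:
        if any(re.search(p, line) for p in patterns):
            if not remove:
                out.append("EXPECTED: " + line)  # keep the line, but flag it
        else:
            out.append(line)
    return out
```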
Fix
make[1]: execvp: /bin/sh: Argument list too long
encountered with some shells, and a couple more potential problems
in that part of the code.
Yauheni Kaliuta (3):
selftests: do not use .ONESHELL
selftests: fix condition in run_tests
selftests: simplify run_tests
tools/testing/selftests/lib.mk | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
--
2.26.2
Changeset 1eecbcdca2bd ("docs: move protection-keys.rst to the core-api book")
from Jun 7, 2019 converted the protection-keys.txt file to ReST.
A recent change to protection_keys.c partially reverted that
changeset, causing it to point to a non-existent file:
- * Tests x86 Memory Protection Keys (see Documentation/core-api/protection-keys.rst)
+ * Tests Memory Protection Keys (see Documentation/vm/protection-keys.txt)
It sounds to me that the changeset that introduced this change,
4645e3563673 ("selftests/vm/pkeys: rename all references to pkru to a generic name"),
could also have other side effects, as it appears that it was not
generated against upstream code, but, instead, against a version
older than Jun 7, 2019.
Fixes: 4645e3563673 ("selftests/vm/pkeys: rename all references to pkru to a generic name")
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei(a)kernel.org>
---
tools/testing/selftests/vm/protection_keys.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/vm/protection_keys.c b/tools/testing/selftests/vm/protection_keys.c
index fc19addcb5c8..fdbb602ecf32 100644
--- a/tools/testing/selftests/vm/protection_keys.c
+++ b/tools/testing/selftests/vm/protection_keys.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
- * Tests Memory Protection Keys (see Documentation/vm/protection-keys.txt)
+ * Tests Memory Protection Keys (see Documentation/core-api/protection-keys.rst)
*
* There are examples in here of:
* * how to set protection keys on memory
--
2.26.2
Some months ago I started work on a document to formalize how
kselftest implements the TAP specification. However, I didn't finish
that work. Maybe it's time to do so now.
kselftest has developed a few differences from the original
TAP specification, and some extensions that I believe are worth
documenting.
Essentially, we have created our own KTAP (kernel TAP)
format. I think it is worth documenting our conventions, in order to
keep everyone on the same page.
Below is a partially completed document on my understanding
of KTAP, based on examination of some of the kselftest test
output. I have not reconciled this with the kunit output format,
which I believe has some differences (which maybe we should
resolve before we get too far into this).
I submit the document now, before it is finished, because a patch
was recently introduced to alter one of the result conventions
(from SKIP='not ok' to SKIP='ok').
See the document included inline below.
====== start of ktap-doc-rfc.txt ======
Selftest preferred output format
--------------------------------
The Linux kernel selftest system uses TAP (Test Anything Protocol)
output to make testing results consumable by automated systems. A
number of Continuous Integration (CI) systems test the kernel every
day. It is useful for these systems that output from selftest
programs be consistent and machine-parsable.
At the same time, it is useful for test results to be human-readable
as well.
The kernel test result format is based on a variation of TAP.
TAP is a simple text-based format that is
documented on the TAP home page (http://testanything.org/). There
is an official TAP13 specification here:
http://testanything.org/tap-version-13-specification.html
The kernel test result format consists of 5 major elements,
4 of which are line-based:
* the output version line
* the plan line
* one or more test result lines (also called test lines)
* a possible "Bail out!" line
optional elements:
* diagnostic data
The 5th element is diagnostic data, which is used to describe
items running in the test, and possibly to explain test results.
A sample test result is shown below.
Some other lines may be placed in the output by the test harness, and
are not emitted by individual test programs:
* one or more test identification lines
* a possible results summary line
Here is an example:
TAP version 13
1..1
# selftests: cpufreq: main.sh
# pid 8101's current affinity mask: fff
# pid 8101's new affinity mask: 1
ok 1 selftests: cpufreq: main.sh
The output version line is: "TAP version 13"
The test plan is "1..1".
Element details
===============
Output version line
-------------------
The output version line is always "TAP version 13".
Although the kernel test result format has some additions
to the TAP13 format, the version line reported by kselftest tests
is (currently) always the exact string "TAP version 13".
This is always the first line of test output.
Test plan line
--------------
The test plan indicates the number of individual test cases intended to
be executed by the test. It always starts with "1.." and is followed
by the number of test cases. In the example above, "1..1" indicates
that this test reports only 1 test case.
The test plan line can be placed in two locations:
* the second line of test output, when the number of test cases is known
in advance
* as the last line of test output, when the number of test cases is not
known in advance.
Most often, the number of test cases is known in advance, and the test plan
line appears as the second line of test output, immediately following
the output version line. The number of test cases might not be known
in advance if the number of tests is calculated from runtime data.
In this case, the test plan line is emitted as the last line of test
output.
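The two allowed placements can be handled mechanically. A sketch of a parser helper (illustrative only; it assumes the output lines have already been collected into a list):

```python
import re

def find_plan(lines):
    """Return (test_count, line_index) for the plan line, which may be
    either the second line or the last line of output, else None."""
    for pos in (1, len(lines) - 1):  # second line first, then last line
        if 0 <= pos < len(lines):
            m = re.match(r"^1\.\.(\d+)$", lines[pos].strip())
            if m:
                return int(m.group(1)), pos
    return None
```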
Test result lines
-----------------
The test output consists of one or more test result lines that indicate
the actual results for the test. These have the format:
<result> <number> <description> [<directive>] [<diagnostic data>]
The ''result'' must appear at the start of a line (except when a
test is nested; see below), and must consist of one of the following
two phrases:
* ok
* not ok
If the test passed, then the result is reported as "ok". If the test
failed, then the result is reported as "not ok". These must be in
lower case, exactly as shown.
The ''number'' in the test result line represents the number of the
test case being performed by the test program. This is often used by
test harnesses as a unique identifier for each test case. The test
number is a base-10 number, starting with 1. It should increase by
one for each new test result line emitted. If possible the number
for a test case should be kept the same over the lifetime of the test.
The ''description'' is a short description of the test case.
This can be any string of words, but should avoid using colons (':')
except as part of a fully qualified test case name (see below).
Finally, it is possible to use a test directive to indicate another
possible outcome for a test: that it was skipped. To report that
a test case was skipped, the result line should start with the
result "not ok", and the directive "# SKIP" should be placed after
the test description. (Note that this deviates from the TAP13
specification).
A test may be skipped for a variety of reasons, ranging from
insufficient privileges to missing features or resources required
to execute that test case.
It is usually helpful if a diagnostic message is emitted to explain
the reasons for the skip. If the message is a single line and is
short, the diagnostic message may be placed after the '# SKIP'
directive on the same line as the test result. However, if it does
not fit on the test result line, it should precede the test result
line (see diagnostic data, next).
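The result-line grammar above, including the '# SKIP' directive, can be captured with a single regular expression. A sketch (not the real kselftest or KUnit parser code):

```python
import re

# <result> <number> <description> [# SKIP [reason]]
RESULT_RE = re.compile(r"^(ok|not ok) (\d+) ([^#]*?)\s*(?:# (SKIP)\s*(.*))?$")

def parse_result(line):
    """Parse one test result line; return None for non-result lines."""
    m = RESULT_RE.match(line.strip())
    if not m:
        return None
    result, number, desc, directive, reason = m.groups()
    return {
        "passed": result == "ok",
        "number": int(number),
        "description": desc,
        "skipped": directive == "SKIP",
        "skip_reason": reason or "",
    }
```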
Diagnostic data
---------------
Diagnostic data is text that reports on test conditions or test
operations, or that explains test results. In the kernel test
result format, diagnostic data is placed on lines that start with a
hash sign, followed by a space ('# ').
One special format of diagnostic data is a test identification line,
that has the fully qualified name of a test case. Such a test
identification line marks the start of test output for a test case.
In the example above, there are three lines that start with '#'
which precede the test result line:
# selftests: cpufreq: main.sh
# pid 8101's current affinity mask: fff
# pid 8101's new affinity mask: 1
These are used to indicate diagnostic data for the test case
'selftests: cpufreq: main.sh'
Material in comments between the identification line and the test
result line is diagnostic data that can help to interpret the
results of the test.
The TAP specification indicates that automated test harnesses may
ignore any line that is not one of the mandatory prescribed lines
(that is, the output format version line, the plan line, a test
result line, or a "Bail out!" line.)
Bail out!
---------
If a line in the test output starts with 'Bail out!', it indicates
that the test was aborted for some reason. It indicates that
the test is unable to proceed, and no additional tests will be
performed.
This can be used at the very beginning of a test, or anywhere in the
middle of the test, to indicate that the test can not continue.
--- from here on is not-yet-organized material
Tip:
- don't change the test plan based on skipped tests.
- it is better to report that a test case was skipped, than to
not report it
- that is, don't adjust the number of test cases based on skipped
tests
Other things to mention:
TAP13 elements not used:
- yaml for diagnostic messages
- reason: try to keep things line-based, since output from other things
may be interspersed with messages from the test itself
- TODO directive
KTAP Extensions beyond TAP13:
- nesting
- via indentation
- indentation makes it easier for humans to read
- test identifier
- multiple parts, separated by ':'
- summary lines
- can be skipped by CI systems that do their own calculations
Other notes:
- automatic assignment of result status based on exit code
Tips:
- do NOT describe the result in the test line
- the test case description should be the same whether the test
succeeds or fails
- use diagnostic lines to describe or explain results, if this is
desirable
- test numbers are considered harmful
- test harnesses should use the test description as the identifier
- test numbers change when testcases are added or removed
- which means that results can't be compared between different
versions of the test
- recommendations for diagnostic messages:
- reason for failure
- reason for skip
- diagnostic data should always precede the result line
- problem: harness may emit result before test can do assessment
to determine reason for result
- this is what the kernel uses
Differences between kernel test result format and TAP13:
- in KTAP the "# SKIP" directive is placed after the description on
the test result line
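Given this difference (and the open question of whether a SKIP result should be 'ok' or 'not ok'), a harness can tolerate both conventions by keying on the directive rather than on the result field. A minimal sketch:

```python
def classify(line):
    """Classify a result line, treating any '# SKIP' directive as a skip
    regardless of whether the result field says 'ok' or 'not ok'."""
    line = line.strip()
    if "# SKIP" in line:
        return "skip"
    if line.startswith("not ok"):
        return "fail"
    if line.startswith("ok"):
        return "pass"
    return "other"
```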
====== end of ktap-doc-rfc.txt ======
OK - that's the end of the RFC doc.
Here are a few questions:
- is this document desired or not?
- is it too long or too short?
- if the document is desired, where should it be placed?
I assume somewhere under Documentation, and put into
.rst format. Suggestions for a name and location are welcome.
- is this document accurate?
I think KUNIT does a few things differently than this description.
- is the intent to have kunit and kselftest have the same output format?
if so, then these should be rationalized.
Finally,
- Should a SKIP result be 'ok' (TAP13 spec) or 'not ok' (current kselftest practice)?
See https://testanything.org/tap-version-13-specification.html
Regards,
-- Tim
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patches 7-8 prepare nouveau for the meat of this series which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split across mm core, nouveau,
and testing, I'm guessing Jason Gunthorpe's HMM tree would be appropriate.
Ralph Campbell (16):
mm: fix migrate_vma_setup() src_owner and normal pages
nouveau: fix migrate page regression
nouveau: fix mixed normal and device private page migration
mm/hmm: fix test timeout on slower machines
mm/hmm/test: remove redundant page table invalidate
mm/hmm: test mixed normal and device private migrations
nouveau: make nvkm_vmm_ctor() and nvkm_mmu_ptp_get() static
nouveau/hmm: fault one page at a time
mm/hmm: add output flag for compound page mapping
nouveau/hmm: support mapping large sysmem pages
hmm: add tests for HMM_PFN_COMPOUND flag
mm/hmm: optimize migrate_vma_setup() for holes
mm: support THP migration to device private memory
mm/thp: add THP allocation helper
mm/hmm/test: add self tests for THP migration
nouveau: support THP migration to private memory
drivers/gpu/drm/nouveau/nouveau_dmem.c | 177 +++++---
drivers/gpu/drm/nouveau/nouveau_svm.c | 241 +++++------
drivers/gpu/drm/nouveau/nouveau_svm.h | 3 +-
.../gpu/drm/nouveau/nvkm/subdev/mmu/base.c | 6 +-
.../gpu/drm/nouveau/nvkm/subdev/mmu/priv.h | 2 +
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 10 +-
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h | 3 -
.../drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 29 +-
include/linux/gfp.h | 10 +
include/linux/hmm.h | 4 +-
include/linux/migrate.h | 1 +
include/linux/mm.h | 1 +
lib/test_hmm.c | 359 ++++++++++++----
lib/test_hmm_uapi.h | 2 +
mm/hmm.c | 10 +-
mm/huge_memory.c | 46 ++-
mm/internal.h | 1 -
mm/memory.c | 10 +-
mm/memremap.c | 9 +-
mm/migrate.c | 236 +++++++++--
mm/page_alloc.c | 1 +
tools/testing/selftests/vm/hmm-tests.c | 388 +++++++++++++++++-
22 files changed, 1203 insertions(+), 346 deletions(-)
--
2.20.1
On 6/20/20 9:37 PM, akpm(a)linux-foundation.org wrote:
> The mm-of-the-moment snapshot 2020-06-20-21-36 has been uploaded to
>
> http://www.ozlabs.org/~akpm/mmotm/
>
> mmotm-readme.txt says
>
> README for mm-of-the-moment:
>
> http://www.ozlabs.org/~akpm/mmotm/
>
> This is a snapshot of my -mm patch queue. Uploaded at random hopefully
> more than once a week.
drivers/misc/lkdtm/bugs.c has build errors when building UML for i386
(allmodconfig or allyesconfig):
In file included from ../drivers/misc/lkdtm/bugs.c:17:0:
../arch/x86/um/asm/desc.h:7:0: warning: "LDT_empty" redefined
#define LDT_empty(info) (\
In file included from ../arch/um/include/asm/mmu.h:10:0,
from ../include/linux/mm_types.h:18,
from ../include/linux/sched/signal.h:13,
from ../drivers/misc/lkdtm/bugs.c:11:
../arch/x86/um/asm/mm_context.h:65:0: note: this is the location of the previous definition
#define LDT_empty(info) (_LDT_empty(info))
../drivers/misc/lkdtm/bugs.c: In function ‘lkdtm_DOUBLE_FAULT’:
../drivers/misc/lkdtm/bugs.c:428:9: error: variable ‘d’ has initializer but incomplete type
struct desc_struct d = {
^~~~~~~~~~~
../drivers/misc/lkdtm/bugs.c:429:4: error: ‘struct desc_struct’ has no member named ‘type’
.type = 3, /* expand-up, writable, accessed data */
^~~~
../drivers/misc/lkdtm/bugs.c:429:11: warning: excess elements in struct initializer
.type = 3, /* expand-up, writable, accessed data */
^
../drivers/misc/lkdtm/bugs.c:429:11: note: (near initialization for ‘d’)
../drivers/misc/lkdtm/bugs.c:430:4: error: ‘struct desc_struct’ has no member named ‘p’
.p = 1, /* present */
^
../drivers/misc/lkdtm/bugs.c:430:8: warning: excess elements in struct initializer
.p = 1, /* present */
^
../drivers/misc/lkdtm/bugs.c:430:8: note: (near initialization for ‘d’)
../drivers/misc/lkdtm/bugs.c:431:4: error: ‘struct desc_struct’ has no member named ‘d’
.d = 1, /* 32-bit */
^
../drivers/misc/lkdtm/bugs.c:431:8: warning: excess elements in struct initializer
.d = 1, /* 32-bit */
^
../drivers/misc/lkdtm/bugs.c:431:8: note: (near initialization for ‘d’)
../drivers/misc/lkdtm/bugs.c:432:4: error: ‘struct desc_struct’ has no member named ‘g’
.g = 0, /* limit in bytes */
^
../drivers/misc/lkdtm/bugs.c:432:8: warning: excess elements in struct initializer
.g = 0, /* limit in bytes */
^
../drivers/misc/lkdtm/bugs.c:432:8: note: (near initialization for ‘d’)
../drivers/misc/lkdtm/bugs.c:433:4: error: ‘struct desc_struct’ has no member named ‘s’
.s = 1, /* not system */
^
../drivers/misc/lkdtm/bugs.c:433:8: warning: excess elements in struct initializer
.s = 1, /* not system */
^
../drivers/misc/lkdtm/bugs.c:433:8: note: (near initialization for ‘d’)
../drivers/misc/lkdtm/bugs.c:428:21: error: storage size of ‘d’ isn’t known
struct desc_struct d = {
^
../drivers/misc/lkdtm/bugs.c:437:2: error: implicit declaration of function ‘write_gdt_entry’; did you mean ‘init_wait_entry’? [-Werror=implicit-function-declaration]
write_gdt_entry(get_cpu_gdt_rw(smp_processor_id()),
^~~~~~~~~~~~~~~
init_wait_entry
../drivers/misc/lkdtm/bugs.c:437:18: error: implicit declaration of function ‘get_cpu_gdt_rw’; did you mean ‘get_cpu_ptr’? [-Werror=implicit-function-declaration]
write_gdt_entry(get_cpu_gdt_rw(smp_processor_id()),
^~~~~~~~~~~~~~
get_cpu_ptr
../drivers/misc/lkdtm/bugs.c:438:27: error: ‘DESCTYPE_S’ undeclared (first use in this function)
GDT_ENTRY_TLS_MIN, &d, DESCTYPE_S);
^~~~~~~~~~
../drivers/misc/lkdtm/bugs.c:438:27: note: each undeclared identifier is reported only once for each function it appears in
../drivers/misc/lkdtm/bugs.c:428:21: warning: unused variable ‘d’ [-Wunused-variable]
struct desc_struct d = {
^
cc1: some warnings being treated as errors
--
~Randy
Reported-by: Randy Dunlap <rdunlap(a)infradead.org>
As discussed in [1], KUnit tests have hitherto not had a particularly
consistent naming scheme. This adds documentation outlining how tests
and test suites should be named, including how those names should be
used in Kconfig entries and filenames.
[1]:
https://lore.kernel.org/linux-kselftest/202006141005.BA19A9D3@keescook/t/#u
Signed-off-by: David Gow <davidgow(a)google.com>
---
This is a first draft of some naming guidelines for KUnit tests. Note
that I haven't edited it for spelling/grammar/style yet: I wanted to get
some feedback on the actual naming conventions first.
The issues which came most to the forefront while writing it were:
- Do we want to make subsystems a more explicit thing (make the KUnit
framework recognise them, make suites KTAP subtests of them, etc)
- I'm leaning towards no, mainly because it doesn't seem necessary,
and it makes the subsystem-with-only-one-suite case ugly.
- Do we want to support (or encourage) Kconfig options and/or modules at
the subsystem level rather than the suite level?
- This could be nice: it'd avoid the proliferation of a large number
of tiny config options and modules, and would encourage the test for
<module> to be <module>_kunit, without other stuff in-between.
- As test names are also function names, it may actually make sense to
decorate them with "test" or "kunit" or the like.
- If we're testing a function "foo", "test_foo" seems like as good a
name for the function as any. Sure, many cases could have better
names like "foo_invalid_context" or something, but that won't make
sense for everything.
- Alternatively, do we split up the test name and the name of the
function implementing the test?
Thoughts?
Documentation/dev-tools/kunit/index.rst | 1 +
Documentation/dev-tools/kunit/style.rst | 139 ++++++++++++++++++++++++
2 files changed, 140 insertions(+)
create mode 100644 Documentation/dev-tools/kunit/style.rst
diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
index e93606ecfb01..117c88856fb3 100644
--- a/Documentation/dev-tools/kunit/index.rst
+++ b/Documentation/dev-tools/kunit/index.rst
@@ -11,6 +11,7 @@ KUnit - Unit Testing for the Linux Kernel
usage
kunit-tool
api/index
+ style
faq
What is KUnit?
diff --git a/Documentation/dev-tools/kunit/style.rst b/Documentation/dev-tools/kunit/style.rst
new file mode 100644
index 000000000000..9363b5607262
--- /dev/null
+++ b/Documentation/dev-tools/kunit/style.rst
@@ -0,0 +1,139 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+Test Style and Nomenclature
+===========================
+
+Subsystems, Suites, and Tests
+=============================
+
+In order to make tests as easy to find as possible, they're grouped into suites
+and subsystems. A test suite is a group of tests which test a related area of
+the kernel, and a subsystem is a set of test suites which test different parts
+of the same kernel subsystem or driver.
+
+Subsystems
+----------
+
+Every test suite must belong to a subsystem. A subsystem is a collection of one
+or more KUnit test suites which test the same driver or part of the kernel. A
+rule of thumb is that a test subsystem should match a single kernel module. If
+the code being tested can't be compiled as a module, in many cases the subsystem
+should correspond to a directory in the source tree or an entry in the
+MAINTAINERS file. If unsure, follow the conventions set by tests in similar
+areas.
+
+Test subsystems should be named after the code being tested, either after the
+module (wherever possible), or after the directory or files being tested. Test
+subsystems should be named to avoid ambiguity where necessary.
+
+If a test subsystem name has multiple components, they should be separated by
+underscores. Do not include "test" or "kunit" directly in the subsystem name
+unless you are actually testing other tests or the kunit framework itself.
+
+Example subsystems could be:
+
+* ``ext4``
+* ``apparmor``
+* ``kasan``
+
+.. note::
+ The KUnit API and tools do not explicitly know about subsystems. They're
+ simply a way of categorising test suites and naming modules which
+ provides a simple, consistent way for humans to find and run tests. This
+ may change in the future, though.
+
+Suites
+------
+
+KUnit tests are grouped into test suites, which cover a specific area of
+functionality being tested. Test suites can have shared initialisation and
+shutdown code which is run for all tests in the suite.
+Not all subsystems will need to be split into multiple test suites (e.g. simple drivers).
+
+Test suites are named after the subsystem they are part of. If a subsystem
+contains several suites, the specific area under test should be appended to the
+subsystem name, separated by an underscore.
+
+The full test suite name (including the subsystem name) should be specified as
+the ``.name`` member of the ``kunit_suite`` struct, and forms the base for the
+module name (see below).
+
+Example test suites could include:
+
+* ``ext4_inode``
+* ``kunit_try_catch``
+* ``apparmor_property_entry``
+* ``kasan``
+
+Tests
+-----
+
+Individual tests consist of a single function which tests a constrained
+codepath, property, or function. In the test output, individual tests' results
+will show up as subtests of the suite's results.
+
+Tests should be named after what they're testing. This is often the name of the
+function being tested, with a description of the input or codepath being tested.
+As tests are C functions, they should be named and written in accordance with
+the kernel coding style.
+
+.. note::
+ As tests are themselves functions, their names cannot conflict with
+ other C identifiers in the kernel. This may require some creative
+ naming. It's a good idea to make your test functions `static` to avoid
+ polluting the global namespace.
+
+Should it be necessary to refer to a test outside the context of its test suite,
+the *fully-qualified* name of a test should be the suite name followed by the
+test name, separated by a colon (i.e. ``suite:test``).
+
+Test Kconfig Entries
+====================
+
+Every test suite should be tied to a Kconfig entry.
+
+This Kconfig entry must:
+
+* be named ``CONFIG_<name>_KUNIT_TEST``: where <name> is the name of the test
+ suite.
+* be listed either alongside the config entries for the driver/subsystem being
+ tested, or be under [Kernel Hacking]→[Kernel Testing and Coverage]
+* depend on ``CONFIG_KUNIT``
+* be visible only if ``CONFIG_KUNIT_ALL_TESTS`` is not enabled.
+* have a default value of ``CONFIG_KUNIT_ALL_TESTS``.
+* have a brief description of KUnit in the help text
+* include "If unsure, say N" in the help text
+
+Unless there's a specific reason not to (e.g. the test is unable to be built as
+a module), Kconfig entries for tests should be tristate.
+
+An example Kconfig entry:
+
+.. code-block:: none
+
+ config FOO_KUNIT_TEST
+ tristate "KUnit test for foo" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ default KUNIT_ALL_TESTS
+ help
+ This builds unit tests for foo.
+
+ For more information on KUnit and unit tests in general, please refer
+ to the KUnit documentation in Documentation/dev-tools/kunit
+
+ If unsure, say N
+
+
+Test Filenames
+==============
+
+Where possible, test suites should be placed in a separate source file in the
+same directory as the code being tested.
+
+This file should be named ``<suite>_kunit.c``. It may make sense to strip
+excessive namespacing from the source filename (e.g., ``firmware_kunit.c`` instead of
+``<drivername>_firmware.c``), but please ensure the module name does contain the
+full suite name.
+
+
--
2.27.0.111.gc72c7da667-goog
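As a quick illustration of the conventions in the style document above, the Kconfig entry, source filename, and fully-qualified test name can all be derived mechanically from a suite name. The helpers below are a sketch, not part of the patch; the test name in the usage example is invented:

```python
def kconfig_name(suite):
    """Kconfig entry for a test suite: CONFIG_<NAME>_KUNIT_TEST."""
    return "CONFIG_%s_KUNIT_TEST" % suite.upper()

def source_file(suite):
    """Source filename for a test suite: <suite>_kunit.c."""
    return "%s_kunit.c" % suite

def qualified_name(suite, test):
    """Fully-qualified test name: suite and test joined by a colon."""
    return "%s:%s" % (suite, test)
```

For example, the document's ``ext4_inode`` suite would live in ``ext4_inode_kunit.c`` behind ``CONFIG_EXT4_INODE_KUNIT_TEST``.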