Updated the architecture.rst page with the following changes:
-Added the missing article _the_ across the document.
-Reworded content throughout for style and consistency.
-Updated all occurrences of Command Line to Command-line
across the document.
-Corrected grammatical issues, for example,
added _it_ wherever missing.
-Updated all occurrences of "via" to use either
"through" or "using".
-Updated the text preceding the external links and pushed the full
link to a new line for better readability.
-Reworded the content under the config command to make it clearer and more concise.
Signed-off-by: Sadiya Kazi <sadiyakazi(a)google.com>
---
Thank you Bagas for your detailed comments.
I think the current commit message conveys the right intent, as this is not a complete rewrite, so I have retained it.
Also, since we talk about the two parts of the architecture, I have retained the heading as 'kunit_tool (Command-line Test Harness)' instead of 'Running Tests Options'.
Changes since v2:
https://lore.kernel.org/linux-kselftest/20221013080545.1552573-1-sadiyakazi…
-Updated the link descriptions as per Bagas’s feedback
-Reworded the content about the options for running tests and added links as per Bagas’s feedback
Best Regards,
Sadiya Kazi
---
.../dev-tools/kunit/architecture.rst | 118 +++++++++---------
1 file changed, 60 insertions(+), 58 deletions(-)
diff --git a/Documentation/dev-tools/kunit/architecture.rst b/Documentation/dev-tools/kunit/architecture.rst
index 8efe792bdcb9..52b1a30c9f89 100644
--- a/Documentation/dev-tools/kunit/architecture.rst
+++ b/Documentation/dev-tools/kunit/architecture.rst
@@ -4,16 +4,17 @@
KUnit Architecture
==================
-The KUnit architecture can be divided into two parts:
+The KUnit architecture is divided into two parts:
- `In-Kernel Testing Framework`_
-- `kunit_tool (Command Line Test Harness)`_
+- `kunit_tool (Command-line Test Harness)`_
In-Kernel Testing Framework
===========================
The kernel testing library supports KUnit tests written in C using
-KUnit. KUnit tests are kernel code. KUnit does several things:
+KUnit. These KUnit tests are kernel code. KUnit performs the following
+tasks:
- Organizes tests
- Reports test results
@@ -22,19 +23,17 @@ KUnit. KUnit tests are kernel code. KUnit does several things:
Test Cases
----------
-The fundamental unit in KUnit is the test case. The KUnit test cases are
-grouped into KUnit suites. A KUnit test case is a function with type
-signature ``void (*)(struct kunit *test)``.
-These test case functions are wrapped in a struct called
-struct kunit_case.
+The test case is the fundamental unit in KUnit. KUnit test cases are organized
+into suites. A KUnit test case is a function with type signature
+``void (*)(struct kunit *test)``. These test case functions are wrapped in a
+struct called struct kunit_case.
.. note:
``generate_params`` is optional for non-parameterized tests.
-Each KUnit test case gets a ``struct kunit`` context
-object passed to it that tracks a running test. The KUnit assertion
-macros and other KUnit utilities use the ``struct kunit`` context
-object. As an exception, there are two fields:
+Each KUnit test case receives a ``struct kunit`` context object that tracks a
+running test. The KUnit assertion macros and other KUnit utilities use the
+``struct kunit`` context object. As an exception, there are two fields:
- ``->priv``: The setup functions can use it to store arbitrary test
user data.
@@ -75,14 +74,15 @@ with the KUnit test framework.
Executor
--------
-The KUnit executor can list and run built-in KUnit tests on boot.
+The KUnit executor can list and run built-in KUnit tests on boot.
The Test suites are stored in a linker section
-called ``.kunit_test_suites``. For code, see:
-https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/asm-generic/vmlinux.lds.h?h=v5.15#n945.
+called ``.kunit_test_suites``. For the code, see the ``KUNIT_TABLE()`` macro
+definition in
+`include/asm-generic/vmlinux.lds.h <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/inc…>`_.
The linker section consists of an array of pointers to
``struct kunit_suite``, and is populated by the ``kunit_test_suites()``
-macro. To run all tests compiled into the kernel, the KUnit executor
-iterates over the linker section array.
+macro. The KUnit executor iterates over the linker section array to run all
+the tests compiled into the kernel.
.. kernel-figure:: kunit_suitememorydiagram.svg
:alt: KUnit Suite Memory
@@ -90,17 +90,18 @@ iterates over the linker section array.
KUnit Suite Memory Diagram
On the kernel boot, the KUnit executor uses the start and end addresses
-of this section to iterate over and run all tests. For code, see:
-https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib/kunit/executor.c
-
+of this section to iterate over and run all tests. For the implementation of the
+executor, see
+`lib/kunit/executor.c <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib…>`_.
When built as a module, the ``kunit_test_suites()`` macro defines a
``module_init()`` function, which runs all the tests in the compilation
unit instead of utilizing the executor.
In KUnit tests, some error classes do not affect other tests
or parts of the kernel, each KUnit case executes in a separate thread
-context. For code, see:
-https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib/kunit/try-catch.c?h=v5.15#n58
+context. For the implementation details, see the ``kunit_try_catch_run()``
+function in
+`lib/kunit/try-catch.c <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/lib…>`_.
Assertion Macros
----------------
@@ -111,37 +112,36 @@ All expectations/assertions are formatted as:
- ``{EXPECT|ASSERT}`` determines whether the check is an assertion or an
expectation.
+ In the event of a failure, the testing flow differs as follows:
- - For an expectation, if the check fails, marks the test as failed
- and logs the failure.
+ - For expectations, the test is marked as failed and the failure is logged.
- - An assertion, on failure, causes the test case to terminate
- immediately.
+ - Failing assertions, on the other hand, result in the test case being
+ terminated immediately.
- - Assertions call function:
+ - Assertions call the function:
``void __noreturn kunit_abort(struct kunit *)``.
- - ``kunit_abort`` calls function:
+ - ``kunit_abort`` calls the function:
``void __noreturn kunit_try_catch_throw(struct kunit_try_catch *try_catch)``.
- - ``kunit_try_catch_throw`` calls function:
+ - ``kunit_try_catch_throw`` calls the function:
``void kthread_complete_and_exit(struct completion *, long) __noreturn;``
and terminates the special thread context.
- ``<op>`` denotes a check with options: ``TRUE`` (supplied property
- has the boolean value “true”), ``EQ`` (two supplied properties are
+ has the boolean value "true"), ``EQ`` (two supplied properties are
equal), ``NOT_ERR_OR_NULL`` (supplied pointer is not null and does not
- contain an “err” value).
+ contain an "err" value).
- ``[_MSG]`` prints a custom message on failure.
Test Result Reporting
---------------------
-KUnit prints test results in KTAP format. KTAP is based on TAP14, see:
-https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md.
-KTAP (yet to be standardized format) works with KUnit and Kselftest.
-The KUnit executor prints KTAP results to dmesg, and debugfs
-(if configured).
+KUnit prints the test results in KTAP format. KTAP is based on TAP14; see
+Documentation/dev-tools/ktap.rst.
+KTAP works with KUnit and Kselftest. The KUnit executor prints KTAP results to
+dmesg and, if configured, to debugfs.
Parameterized Tests
-------------------
@@ -150,33 +150,35 @@ Each KUnit parameterized test is associated with a collection of
parameters. The test is invoked multiple times, once for each parameter
value and the parameter is stored in the ``param_value`` field.
The test case includes a KUNIT_CASE_PARAM() macro that accepts a
-generator function.
-The generator function is passed the previous parameter and returns the next
-parameter. It also provides a macro to generate common-case generators based on
-arrays.
+generator function. The generator function is passed the previous parameter
+and returns the next parameter. KUnit also provides a macro for generating
+array-based common-case generators.
-kunit_tool (Command Line Test Harness)
+kunit_tool (Command-line Test Harness)
======================================
-kunit_tool is a Python script ``(tools/testing/kunit/kunit.py)``
-that can be used to configure, build, exec, parse and run (runs other
-commands in order) test results. You can either run KUnit tests using
-kunit_tool or can include KUnit in kernel and parse manually.
+``kunit_tool`` is a Python script (``tools/testing/kunit/kunit.py``). It is
+used to configure, build, and execute the kernel, to parse the test results,
+and to run all of the previous commands in the correct order.
+You have two options for running KUnit tests: either build the kernel with
+KUnit enabled and parse the results manually (see
+Documentation/dev-tools/kunit/run_manual.rst), or use ``kunit_tool``
+(see Documentation/dev-tools/kunit/run_wrapper.rst).
- ``configure`` command generates the kernel ``.config`` from a
``.kunitconfig`` file (and any architecture-specific options).
- For some architectures, additional config options are specified in the
- ``qemu_config`` Python script
- (For example: ``tools/testing/kunit/qemu_configs/powerpc.py``).
+ The Python scripts available in the ``qemu_configs`` folder
+ (for example, ``tools/testing/kunit/qemu_configs/powerpc.py``) contain
+ additional configuration options for specific architectures.
It parses both the existing ``.config`` and the ``.kunitconfig`` files
- and ensures that ``.config`` is a superset of ``.kunitconfig``.
- If this is not the case, it will combine the two and run
- ``make olddefconfig`` to regenerate the ``.config`` file. It then
- verifies that ``.config`` is now a superset. This checks if all
- Kconfig dependencies are correctly specified in ``.kunitconfig``.
- ``kunit_config.py`` includes the parsing Kconfigs code. The code which
- runs ``make olddefconfig`` is a part of ``kunit_kernel.py``. You can
- invoke this command via: ``./tools/testing/kunit/kunit.py config`` and
+ to ensure that ``.config`` is a superset of ``.kunitconfig``.
+ If not, it combines the two and runs ``make olddefconfig`` to regenerate the
+ ``.config`` file. It then verifies that ``.config`` is now a superset. This
+ check ensures that all the Kconfig dependencies are correctly specified in
+ ``.kunitconfig``. The ``kunit_config.py`` script contains the code for
+ parsing Kconfigs. The code that runs ``make olddefconfig`` is part of the
+ ``kunit_kernel.py`` script. You can run this command using
+ ``./tools/testing/kunit/kunit.py config`` and
generate a ``.config`` file.
- ``build`` runs ``make`` on the kernel tree with required options
(depends on the architecture and some options, for example: build_dir)
@@ -184,8 +186,8 @@ kunit_tool or can include KUnit in kernel and parse manually.
To build a KUnit kernel from the current ``.config``, you can use the
``build`` argument: ``./tools/testing/kunit/kunit.py build``.
- ``exec`` command executes kernel results either directly (using
- User-mode Linux configuration), or via an emulator such
- as QEMU. It reads results from the log via standard
+ User-mode Linux configuration), or through an emulator such
+ as QEMU. It reads results from the log using standard
output (stdout), and passes them to ``parse`` to be parsed.
If you already have built a kernel with built-in KUnit tests,
you can run the kernel and display the test results with the ``exec``
--
2.38.0.413.g74048e4d9e-goog
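As a companion to the architecture text above, here is a minimal sketch of what
a test case and suite look like in practice. The names used here
(example_add_test, example_test_suite, "architecture-example") are made up for
illustration and are not part of the patch; the macros and structs themselves
(KUNIT_CASE(), KUNIT_EXPECT_EQ(), struct kunit_case, struct kunit_suite,
kunit_test_suites()) are the KUnit API the document describes.

#include <kunit/test.h>

/* A test case has the signature void (*)(struct kunit *test). */
static void example_add_test(struct kunit *test)
{
	/* An expectation logs a failure but lets the test continue. */
	KUNIT_EXPECT_EQ(test, 2 + 2, 4);
}

/* Test case functions are wrapped in struct kunit_case via KUNIT_CASE(). */
static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_add_test),
	{}
};

/* Test cases are grouped into a suite. */
static struct kunit_suite example_test_suite = {
	.name = "architecture-example",
	.test_cases = example_test_cases,
};

/*
 * kunit_test_suites() places the suite in the .kunit_test_suites linker
 * section when built in, or defines a module_init() when built as a module.
 */
kunit_test_suites(&example_test_suite);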
When KUNIT_EXPECT_EQ() or KUNIT_ASSERT_EQ() log a failure, they log the
two values being compared, with numerical values logged in decimal.
In some cases, decimal output is painful to consume, and hexadecimal
output would be more helpful. For example, this is the case for tests
I'm currently developing for the arm64 insn encoding/decoding code,
where comparing two 32-bit instruction opcodes results in output such
as:
| # test_insn_add_shifted_reg: EXPECTATION FAILED at arch/arm64/lib/test_insn.c:2791
| Expected obj_insn == gen_insn, but
| obj_insn == 2332164128
| gen_insn == 1258422304
To make this easier to consume, this patch logs the values in both
decimal and hexadecimal:
| # test_insn_add_shifted_reg: EXPECTATION FAILED at arch/arm64/lib/test_insn.c:2791
| Expected obj_insn == gen_insn, but
| obj_insn == 2332164128 (0x8b020020)
| gen_insn == 1258422304 (0x4b020020)
As can be seen from the example, having hexadecimal makes it
significantly easier for a human to spot which specific bits are
incorrect.
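For context, output like the above comes from an ordinary KUNIT_EXPECT_EQ()
comparison. A minimal, illustrative sketch (the test and variable names are
hypothetical, not taken from the arm64 series; the literal values are the ones
from the log above):

#include <kunit/test.h>
#include <linux/types.h>

static void opcode_compare_example(struct kunit *test)
{
	u32 obj_insn = 0x8b020020;	/* e.g. read back from the object file */
	u32 gen_insn = 0x4b020020;	/* e.g. produced by the encoder */

	/* On mismatch, kunit_binary_assert_format() logs both values. */
	KUNIT_EXPECT_EQ(test, obj_insn, gen_insn);
}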
Signed-off-by: Mark Rutland <mark.rutland(a)arm.com>
Cc: Brendan Higgins <brendan.higgins(a)linux.dev>
Cc: David Gow <davidgow(a)google.com>
Cc: linux-kselftest(a)vger.kernel.org
Cc: kunit-dev(a)googlegroups.com
---
lib/kunit/assert.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/kunit/assert.c b/lib/kunit/assert.c
index d00d6d181ee8..24dec5b48722 100644
--- a/lib/kunit/assert.c
+++ b/lib/kunit/assert.c
@@ -127,13 +127,15 @@ void kunit_binary_assert_format(const struct kunit_assert *assert,
binary_assert->text->right_text);
if (!is_literal(stream->test, binary_assert->text->left_text,
binary_assert->left_value, stream->gfp))
- string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s == %lld\n",
+ string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s == %lld (0x%llx)\n",
binary_assert->text->left_text,
+ binary_assert->left_value,
binary_assert->left_value);
if (!is_literal(stream->test, binary_assert->text->right_text,
binary_assert->right_value, stream->gfp))
- string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s == %lld",
+ string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s == %lld (0x%llx)",
binary_assert->text->right_text,
+ binary_assert->right_value,
binary_assert->right_value);
kunit_assert_print_msg(message, stream);
}
--
2.30.2
Hi,
I've been trying the hmm_tests as of today's commit:
a185a0995518 ("Merge tag 'linux-kselftest-kunit-6.1-rc1-2' ...)
and run into several issues that seemed worth reporting.
First, it seems the FIXTURE_TEARDOWN(hmm) in
tools/testing/selftests/vm/hmm-tests.c
using ASSERT_EQ(ret, 0); can run into an infinite loop of reporting the
assertion failure. I don't know if it's a kselftests issue or if it's a bug to
use asserts in teardown. I hacked it up like this locally to proceed:
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -154,6 +154,11 @@ FIXTURE_TEARDOWN(hmm)
{
int ret = close(self->fd);
+ if (ret != 0) {
+ fprintf(stderr, "close returned (%d) fd is (%d)\n", ret,self->fd);
+ exit(1);
+ }
+
ASSERT_EQ(ret, 0);
self->fd = -1;
}
Next, there are some tests that fail (and thus also trigger the issue above):
# RUN hmm.hmm_device_private.exclusive ...
# hmm-tests.c:1702:exclusive:Expected ret (-16) == 0 (0)
close returned (-1) fd is (3)
# exclusive: Test failed at step #1
# FAIL hmm.hmm_device_private.exclusive
not ok 20 hmm.hmm_device_private.exclusive
# RUN hmm.hmm_device_private.exclusive_mprotect ...
# hmm-tests.c:1756:exclusive_mprotect:Expected ret (-16) == 0 (0)
close returned (-1) fd is (3)
# exclusive_mprotect: Test failed at step #1
# FAIL hmm.hmm_device_private.exclusive_mprotect
not ok 21 hmm.hmm_device_private.exclusive_mprotect
# RUN hmm.hmm_device_private.exclusive_cow ...
# hmm-tests.c:1809:exclusive_cow:Expected ret (-16) == 0 (0)
close returned (-1) fd is (3)
# exclusive_cow: Test failed at step #1
# FAIL hmm.hmm_device_private.exclusive_cow
not ok 22 hmm.hmm_device_private.exclusive_cow
I'll try to check more closely, but if you can reproduce it too, you may
have a better idea of what's going on.
The next thing is more of a question/documentation suggestion. Tons of tests
fail like this:
ok 24 hmm.hmm_device_private.hmm_cow_in_device
# RUN hmm.hmm_device_coherent.open_close ...
could not open hmm dmirror driver (/dev/hmm_dmirror2)
# SKIP DEVICE_COHERENT not available
# OK hmm.hmm_device_coherent.open_close
I assume this is because I run "test_hmm.sh smoke" without the SPM parameters.
The help message doesn't say much about what to specify there for
<spm_addr_dev0> <spm_addr_dev1>. Do these tests need particular hardware
(unlike the rest)? Maybe it could be clarified.
One last thing: I noticed that all these DEVICE_COHERENT tests ultimately count
as OK rather than SKIPPED, which would probably be more appropriate?
# FAILED: 51 / 54 tests passed.
# Totals: pass:50 fail:3 xfail:0 xpass:0 skip:1 error:0
(the skip:1 is due to test 9 "# SKIP Huge page could not be allocated"
which is probably a misconfiguration on my part, so I don't report that as an issue)
Thanks,
Vlastimil