On 11/23/22 16:25, Maxime Ripard wrote:
> We'll need in some tests to control when the device needs to be added
> and removed, so let's split the device creation from the DRM device
> creation function.
>
> Signed-off-by: Maxime Ripard <maxime(a)cerno.tech>
> ---
[...]
> -static int dev_init(struct kunit_resource *res, void *ptr)
> +struct device *drm_kunit_helper_alloc_device(struct kunit *test)
> {
> - char *name = ptr;
> - struct device *dev;
> -
> - dev = root_device_register(name);
> - if (IS_ERR(dev))
> - return PTR_ERR(dev);
> -
> - res->data = dev;
> - return 0;
> + return root_device_register(FAKE_DEVICE_NAME);
I was going to ask if a platform device could be used here, to prevent the
issue you mentioned the other day of root_device not having a bus set. But
I see now that you are making that change in the next patch.
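Just for illustration, a platform-device-based helper could look roughly like
the sketch below (the function name is made up, and the actual helper in the
next patch may well differ):

    #include <linux/err.h>
    #include <linux/platform_device.h>

    /* Sketch only: allocate and register a fake platform device so that,
     * unlike root_device_register(), the device has a bus set. */
    static struct device *example_alloc_platform_device(const char *name)
    {
    	struct platform_device *pdev;
    	int ret;

    	pdev = platform_device_alloc(name, PLATFORM_DEVID_NONE);
    	if (!pdev)
    		return ERR_PTR(-ENOMEM);

    	ret = platform_device_add(pdev);
    	if (ret) {
    		platform_device_put(pdev);
    		return ERR_PTR(ret);
    	}

    	return &pdev->dev;
    }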
Reviewed-by: Javier Martinez Canillas <javierm(a)redhat.com>
--
Best regards,
Javier Martinez Canillas
Core Platforms
Red Hat
On 11/23/22 16:25, Maxime Ripard wrote:
> The name doesn't really fit the conventions for the other helpers in
> DRM/KMS, so let's rename it to make it obvious that we allocate a new
> DRM device.
>
> Signed-off-by: Maxime Ripard <maxime(a)cerno.tech>
> ---
Reviewed-by: Javier Martinez Canillas <javierm(a)redhat.com>
--
Best regards,
Javier Martinez Canillas
Core Platforms
Red Hat
KUnit does a few expensive things when enabled. This hasn't been a
problem because KUnit was only enabled on test kernels, but with a few
people enabling (but not _using_) KUnit on production systems, we need a
runtime way of handling this.
Provide a 'kunit_running' static key (defaulting to false), which allows
us to hide any KUnit code behind a static branch. This should reduce the
performance impact (on other code) of having KUnit enabled to a single
NOP when no tests are running.
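As a rough illustration (not part of this patch), a hot-path KUnit hook
elsewhere in the kernel could then be guarded like the sketch below; the
wrapper name is made up, and it assumes CONFIG_KUNIT is enabled so that
current->kunit_test exists:

    #include <linux/jump_label.h>
    #include <linux/sched.h>
    #include <kunit/test.h>

    /* Sketch only: compiles down to a single NOP via the static branch
     * when no KUnit tests are running. */
    static inline struct kunit *example_get_current_kunit_test(void)
    {
    	if (!static_branch_unlikely(&kunit_running))
    		return NULL;

    	return current->kunit_test;
    }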
Note that, while it looks unintuitive, tests always run entirely within
__kunit_test_suites_init(), so it's safe to decrement the static key at
the end of this function, rather than in __kunit_test_suites_exit(),
which is only there to clean up results in debugfs.
Signed-off-by: David Gow <davidgow(a)google.com>
---
This should be a no-op (other than a possible performance improvement)
functionality-wise, and lays the groundwork for a more optimised static
stub implementation.
The remaining patches in the series add a kunit_get_current_test()
function which is a more friendly and performant wrapper around
current->kunit_test, and use this in the slub test. They also improve
the documentation a bit.
If there are no objections, we'll take the whole series via the KUnit
tree.
No changes since v2:
https://lore.kernel.org/all/20221025071907.1251820-1-davidgow@google.com/
Changes since v1:
https://lore.kernel.org/linux-kselftest/20221021072854.333010-1-davidgow@go…
- No changes in this patch.
- Patch 2/3 is reworked, patch 3/3 is new.
---
include/kunit/test.h | 4 ++++
lib/kunit/test.c | 6 ++++++
2 files changed, 10 insertions(+)
diff --git a/include/kunit/test.h b/include/kunit/test.h
index d7f60e8aab30..b948c32a7b6b 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -16,6 +16,7 @@
#include <linux/container_of.h>
#include <linux/err.h>
#include <linux/init.h>
+#include <linux/jump_label.h>
#include <linux/kconfig.h>
#include <linux/kref.h>
#include <linux/list.h>
@@ -27,6 +28,9 @@
#include <asm/rwonce.h>
+/* Static key: true if any KUnit tests are currently running */
+extern struct static_key_false kunit_running;
+
struct kunit;
/* Size of log associated with test. */
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 90640a43cf62..314717b63080 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -20,6 +20,8 @@
#include "string-stream.h"
#include "try-catch-impl.h"
+DEFINE_STATIC_KEY_FALSE(kunit_running);
+
#if IS_BUILTIN(CONFIG_KUNIT)
/*
* Fail the current test and print an error message to the log.
@@ -612,10 +614,14 @@ int __kunit_test_suites_init(struct kunit_suite * const * const suites, int num_
return 0;
}
+ static_branch_inc(&kunit_running);
+
for (i = 0; i < num_suites; i++) {
kunit_init_suite(suites[i]);
kunit_run_tests(suites[i]);
}
+
+ static_branch_dec(&kunit_running);
return 0;
}
EXPORT_SYMBOL_GPL(__kunit_test_suites_init);
--
2.38.1.584.g0f3c55d4c2-goog
DAMOS lets users do system operations in a data access pattern oriented
way. The data access pattern, which is extracted by DAMON, is in many
cases more accurate than what user space could know. However, in some
situations users may know more than the kernel about the pattern, or have
special requirements for certain types of memory or processes. For
example, some users could have slow swap devices, know which processes are
latency-critical, and therefore want DAMON-based proactive reclamation
(DAMON_RECLAIM) to be applied only to non-anonymous pages of
non-latency-critical processes.
For such restrictions, users could exclude the memory regions from the
initial monitoring regions and use a monitoring operations set that
doesn't dynamically update the monitoring regions, such as fvaddr or
paddr. They could also adjust the DAMOS target access pattern. For
dynamically changing memory layouts and access patterns, though, those are
not enough.
To help with such cases, add an interface, namely DAMOS filters, to the
DAMON kernel API (damon.h). It can be used to avoid DAMOS actions being
applied to specific types of memory. At the moment, it supports filtering
anonymous pages and/or specific memory cgroups in or out for each DAMOS
scheme.
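As a rough sketch of how the kernel API could be used (the function and
type names below are assumptions based on this cover letter, not a
verified interface; see the first patch for the real one):

    /*
     * Sketch only: make a DAMOS scheme skip anonymous pages.
     * damos_new_filter(), damos_add_filter() and DAMOS_FILTER_TYPE_ANON
     * are assumed names based on this cover letter.
     */
    static int damos_example_skip_anon(struct damos *scheme)
    {
    	struct damos_filter *filter;

    	/* matching == true: memory of the given type is filtered out. */
    	filter = damos_new_filter(DAMOS_FILTER_TYPE_ANON, true);
    	if (!filter)
    		return -ENOMEM;

    	damos_add_filter(scheme, filter);
    	return 0;
    }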
This patchset adds the support to all DAMOS actions that the 'paddr'
monitoring operations set supports ('pageout', 'lru_prio', and
'lru_deprio'), and the functionality is exposed via the DAMON kernel API
(damon.h), the DAMON sysfs interface (/sys/kernel/mm/damon/admins/), and
the DAMON_RECLAIM module parameters.
Patches Sequence
----------------
The first patch implements the DAMOS filter interface in the DAMON kernel
API. The second patch makes the physical address space monitoring
operations set support the filters for all DAMOS actions it supports. The
third patch adds an anonymous-pages filter to DAMON_RECLAIM, and the
fourth patch documents DAMON_RECLAIM's new feature. The fifth to seventh
patches implement DAMON sysfs files for the filters, and the eighth patch
connects the files to the DAMOS filters feature. The ninth patch adds
simple self-test cases for the DAMOS filters of the sysfs interface.
Finally, the following two patches (the tenth and eleventh) document the
new features and interfaces.
SeongJae Park (11):
mm/damon/core: implement damos filter
mm/damon/paddr: support DAMOS filters
mm/damon/reclaim: add a parameter called skip_anon for avoiding
anonymous pages reclamation
Docs/admin-guide/damon/reclaim: document 'skip_anon' parameter
mm/damon/sysfs-schemes: implement filters directory
mm/damon/sysfs-schemes: implement filter directory
mm/damon/sysfs-schemes: connect filter directory and filters directory
mm/damon/sysfs-schemes: implement scheme filters
selftests/damon/sysfs: test filters directory
Docs/admin-guide/mm/damon/usage: document DAMOS filters of sysfs
Docs/ABI/damon: document scheme filters files
.../ABI/testing/sysfs-kernel-mm-damon | 29 ++
.../admin-guide/mm/damon/reclaim.rst | 9 +
Documentation/admin-guide/mm/damon/usage.rst | 48 ++-
include/linux/damon.h | 51 +++
mm/damon/core.c | 39 ++
mm/damon/paddr.c | 71 +++-
mm/damon/reclaim.c | 19 +
mm/damon/sysfs-schemes.c | 365 +++++++++++++++++-
tools/testing/selftests/damon/sysfs.sh | 29 ++
9 files changed, 647 insertions(+), 13 deletions(-)
--
2.25.1
On Wed, Nov 23, 2022 at 11:28 PM Maxime Ripard <maxime(a)cerno.tech> wrote:
>
> Hi,
>
> This series introduce Kunit tests to the vc4 KMS driver, but unlike what we
> have been doing so far in KMS, it actually tests the atomic modesetting code.
>
> In order to do so, I've had to improve a fair bit on the Kunit helpers already
> found in the tree in order to register a full blown and somewhat functional KMS
> driver.
>
> It's of course relying on a mock so that we can test it anywhere. The mocking
> approach created a number of issues, the main one being that we need to create
> a decent mock in the first place, see patch 22. The basic idea is that I
> created some structures to provide a decent approximation of the actual
> hardware, and that would support both major architectures supported by vc4.
>
> This is of course meant to evolve over time and support more tests, but I've
> focused on testing the HVS FIFO assignment code which is fairly tricky (and the
> tests have actually revealed one more bug with our current implementation). I
> used to have a userspace implementation of those tests, where I would copy and
> paste the kernel code and run the tests on a regular basis. It was obviously
> fairly suboptimal, so it seemed like the perfect testbed for that series.
>
> Let me know what you think,
> Maxime
>
> To: David Airlie <airlied(a)gmail.com>
> To: Daniel Vetter <daniel(a)ffwll.ch>
> To: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
> To: Maxime Ripard <mripard(a)kernel.org>
> To: Thomas Zimmermann <tzimmermann(a)suse.de>
> Cc: Dave Stevenson <dave.stevenson(a)raspberrypi.com>
> Cc: Javier Martinez Canillas <javierm(a)redhat.com>
> Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
> Cc: Maíra Canal <mairacanal(a)riseup.net>
> Cc: Brendan Higgins <brendan.higgins(a)linux.dev>
> Cc: David Gow <davidgow(a)google.com>
> Cc: linux-kselftest(a)vger.kernel.org
> Cc: kunit-dev(a)googlegroups.com
> Cc: dri-devel(a)lists.freedesktop.org
> Cc: linux-kernel(a)vger.kernel.org
> Cc: linux-media(a)vger.kernel.org
> Cc: linaro-mm-sig(a)lists.linaro.org
> Signed-off-by: Maxime Ripard <maxime(a)cerno.tech>
>
> ---
Hi Maxime,
Thanks very much for this! I'm really excited to see these sorts of
tests being written.
I was able to successfully run these under qemu with:
./tools/testing/kunit/kunit.py run --kunitconfig
drivers/gpu/drm/vc4/tests --arch arm64
--cross_compile=aarch64-linux-gnu-
(and also with clang, using --make_options LLVM=1 instead of the
--cross_compile flag)
On the other hand, they don't compile as a module:
ERROR: modpost: missing MODULE_LICENSE() in drivers/gpu/drm/vc4/tests/vc4_mock.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/vc4/tests/vc4_mock_crtc.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/vc4/tests/vc4_mock_output.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/vc4/tests/vc4_mock_plane.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/vc4/tests/vc4_test_pv_muxing.o
ERROR: modpost: missing MODULE_LICENSE() in
drivers/gpu/drm/tests/drm_managed_test.o
ERROR: modpost: "vc4_drm_driver"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc5_drm_driver"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "drm_kunit_helper_alloc_device"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "__drm_kunit_helper_alloc_drm_device_with_driver"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "__vc4_hvs_alloc"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc4_dummy_plane"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc4_mock_pv" [drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc4_dummy_output"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
ERROR: modpost: "vc4_kms_load" [drivers/gpu/drm/vc4/tests/vc4_mock.ko]
undefined!
ERROR: modpost: "vc4_txp_crtc_data"
[drivers/gpu/drm/vc4/tests/vc4_mock.ko] undefined!
WARNING: modpost: suppressed 17 unresolved symbol warnings because
there were too many)
Most of those are just the need to export some symbols. There's some
work underway to support conditionally exporting symbols only if KUnit
is enabled, which may help:
https://lore.kernel.org/linux-kselftest/20221102175959.2921063-1-rmoar@goog…
Otherwise, I suspect the better short-term solution would just be to
require that the tests are built-in (or at least compiled into
whichever of the drm/vc4 modules makes the most sense).
The only other thing which has me a little confused is the naming of
some of the functions, specifically with the __ prefix. Is it just for
internal functions (many of them aren't static, but maybe they could
use the VISIBLE_IF_KUNIT macro if that makes sense), or for versions
of functions which accept extra arguments? Not a big deal (and maybe
it's a DRM naming convention I'm ignorant of), but I couldn't quite
find a pattern on my first read through.
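For what it's worth, here's a rough sketch of how I imagine the
conditional-export work and VISIBLE_IF_KUNIT mentioned above could apply;
the helper name and signature are made up:

    #include <kunit/visibility.h>

    /* Sketch only: non-static (and exported to the KUnit test namespace)
     * while CONFIG_KUNIT is enabled, static otherwise. */
    VISIBLE_IF_KUNIT int vc4_mock_example_helper(struct kunit *test)
    {
    	/* ... test-only setup ... */
    	return 0;
    }
    EXPORT_SYMBOL_IF_KUNIT(vc4_mock_example_helper);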
But on the whole, these look good from a KUnit point-of-view. It's
really great to see some solid mocking and driver testing, too. There would
be ways to avoid passing the 'struct kunit' around in more places (or
to store extra data as a kunit_resource), but I think it's better
overall to pass it around like you have in this case -- it's certainly
more compatible with things which might span threads (e.g. the
workqueues).
Thanks a bunch,
-- David
Change the KUnit parser to be able to parse test output that complies with
the KTAP version 1 specification format found here:
https://kernel.org/doc/html/latest/dev-tools/ktap.html. Ensure the parser
is able to parse tests with the original KUnit test output format as
well.
KUnit parser now accepts any of the following test output formats:
Original KUnit test output format:
TAP version 14
1..1
# Subtest: kunit-test-suite
1..3
ok 1 - kunit_test_1
ok 2 - kunit_test_2
ok 3 - kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 - kunit-test-suite
KTAP version 1 test output format:
KTAP version 1
1..1
KTAP version 1
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
ok 1 kunit-test-suite
New KUnit test output format (changes made in the next patch of
this series):
KTAP version 1
1..1
KTAP version 1
# Subtest: kunit-test-suite
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 kunit-test-suite
Signed-off-by: Rae Moar <rmoar(a)google.com>
Reviewed-by: Daniel Latypov <dlatypov(a)google.com>
Reviewed-by: David Gow <davidgow(a)google.com>
---
Changes since v2:
https://lore.kernel.org/all/CA+GJov4QZ8yrD8sgGeMYJ4zYkg2CEUX8owqzPFE0BQGe_f…
- Rebased onto linux-kselftest/kunit to correct merge conflict with
recently approved patch
- Fixed typo
- Added test_parse_subtest_header to test whether the “# Subtest:”
line is being parsed correctly when using the new test format
Changes since v1:
https://lore.kernel.org/all/20221104194705.3245738-2-rmoar@google.com/
- Switch order of patches to make changes to the parser before making
changes to the test output
- Change placeholder label for test header from “Test suite” to empty
string
- Change parser to accept the new KTAP version line in the subtest header
before the subtest header line rather than after it.
- Note: I considered changing the parser to allow the top level of testing
to have a '# Subtest' line, as discussed in v1, but this breaks the
missing-test-plan test. So I think it would be best to add this ability at
a later time, or after top-level test name and result lines are discussed
for KTAP v2.
tools/testing/kunit/kunit_parser.py | 79 ++++++++++++-------
tools/testing/kunit/kunit_tool_test.py | 14 ++++
.../test_data/test_parse_ktap_output.log | 8 ++
.../test_data/test_parse_subtest_header.log | 7 ++
4 files changed, 80 insertions(+), 28 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_parse_ktap_output.log
create mode 100644 tools/testing/kunit/test_data/test_parse_subtest_header.log
diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index d0ed5dd5cfc4..4cc2f8b7ecd0 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -441,6 +441,7 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
- '# Subtest: [test name]'
- '[ok|not ok] [test number] [-] [test name] [optional skip
directive]'
+ - 'KTAP version [version number]'
Parameters:
lines - LineStream of KTAP output to parse
@@ -449,8 +450,9 @@ def parse_diagnostic(lines: LineStream) -> List[str]:
Log of diagnostic lines
"""
log = [] # type: List[str]
- while lines and not TEST_RESULT.match(lines.peek()) and not \
- TEST_HEADER.match(lines.peek()):
+ non_diagnostic_lines = [TEST_RESULT, TEST_HEADER, KTAP_START]
+ while lines and not any(re.match(lines.peek())
+ for re in non_diagnostic_lines):
log.append(lines.pop())
return log
@@ -496,11 +498,15 @@ def print_test_header(test: Test) -> None:
test - Test object representing current test being printed
"""
message = test.name
+ if message != "":
+ # Add a leading space before the subtest counts only if a test name
+ # is provided using a "# Subtest" header line.
+ message += " "
if test.expected_count:
if test.expected_count == 1:
- message += ' (1 subtest)'
+ message += '(1 subtest)'
else:
- message += f' ({test.expected_count} subtests)'
+ message += f'({test.expected_count} subtests)'
stdout.print_with_timestamp(format_test_divider(message, len(message)))
def print_log(log: Iterable[str]) -> None:
@@ -647,7 +653,7 @@ def bubble_up_test_results(test: Test) -> None:
elif test.counts.get_status() == TestStatus.TEST_CRASHED:
test.status = TestStatus.TEST_CRASHED
-def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
+def parse_test(lines: LineStream, expected_num: int, log: List[str], is_subtest: bool) -> Test:
"""
Finds next test to parse in LineStream, creates new Test object,
parses any subtests of the test, populates Test object with all
@@ -665,15 +671,32 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
1..4
[subtests]
- - Subtest header line
+ - Subtest header (must include either the KTAP version line or
+ "# Subtest" header line)
- Example:
+ Example (preferred format with both KTAP version line and
+ "# Subtest" line):
+
+ KTAP version 1
+ # Subtest: name
+ 1..3
+ [subtests]
+ ok 1 name
+
+ Example (only "# Subtest" line):
# Subtest: name
1..3
[subtests]
ok 1 name
+ Example (only KTAP version line, compliant with KTAP v1 spec):
+
+ KTAP version 1
+ 1..3
+ [subtests]
+ ok 1 name
+
- Test result line
Example:
@@ -685,28 +708,29 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
expected_num - expected test number for test to be parsed
log - list of strings containing any preceding diagnostic lines
corresponding to the current test
+ is_subtest - boolean indicating whether test is a subtest
Return:
Test object populated with characteristics and any subtests
"""
test = Test()
test.log.extend(log)
- parent_test = False
- main = parse_ktap_header(lines, test)
- if main:
- # If KTAP/TAP header is found, attempt to parse
+ if not is_subtest:
+ # If parsing the main/top-level test, parse KTAP version line and
# test plan
test.name = "main"
+ ktap_line = parse_ktap_header(lines, test)
parse_test_plan(lines, test)
parent_test = True
else:
- # If KTAP/TAP header is not found, test must be subtest
- # header or test result line so parse attempt to parser
- # subtest header
- parent_test = parse_test_header(lines, test)
+ # If not the main test, attempt to parse a test header containing
+ # the KTAP version line and/or subtest header line
+ ktap_line = parse_ktap_header(lines, test)
+ subtest_line = parse_test_header(lines, test)
+ parent_test = (ktap_line or subtest_line)
if parent_test:
- # If subtest header is found, attempt to parse
- # test plan and print header
+ # If KTAP version line and/or subtest header is found, attempt
+ # to parse test plan and print test header
parse_test_plan(lines, test)
print_test_header(test)
expected_count = test.expected_count
@@ -721,7 +745,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
sub_log = parse_diagnostic(lines)
sub_test = Test()
if not lines or (peek_test_name_match(lines, test) and
- not main):
+ is_subtest):
if expected_count and test_num <= expected_count:
# If parser reaches end of test before
# parsing expected number of subtests, print
@@ -735,20 +759,19 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
test.log.extend(sub_log)
break
else:
- sub_test = parse_test(lines, test_num, sub_log)
+ sub_test = parse_test(lines, test_num, sub_log, True)
subtests.append(sub_test)
test_num += 1
test.subtests = subtests
- if not main:
+ if is_subtest:
# If not main test, look for test result line
test.log.extend(parse_diagnostic(lines))
- if (parent_test and peek_test_name_match(lines, test)) or \
- not parent_test:
- parse_test_result(lines, test, expected_num)
- else:
+ if test.name != "" and not peek_test_name_match(lines, test):
test.add_error('missing subtest result line!')
+ else:
+ parse_test_result(lines, test, expected_num)
- # Check for there being no tests
+ # Check for there being no subtests within parent test
if parent_test and len(subtests) == 0:
# Don't override a bad status if this test had one reported.
# Assumption: no subtests means CRASHED is from Test.__init__()
@@ -758,11 +781,11 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
# Add statuses to TestCounts attribute in Test object
bubble_up_test_results(test)
- if parent_test and not main:
+ if parent_test and is_subtest:
# If test has subtests and is not the main test object, print
# footer.
print_test_footer(test)
- elif not main:
+ elif is_subtest:
print_test_result(test)
return test
@@ -785,7 +808,7 @@ def parse_run_tests(kernel_output: Iterable[str]) -> Test:
test.add_error('Could not find any KTAP output. Did any KUnit tests run?')
test.status = TestStatus.FAILURE_TO_PARSE_TESTS
else:
- test = parse_test(lines, 0, [])
+ test = parse_test(lines, 0, [], False)
if test.status != TestStatus.NO_TESTS:
test.status = test.counts.get_status()
stdout.print_with_timestamp(DIVIDER)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 84a08cf07242..d7f669cbf2a8 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -312,6 +312,20 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(kunit_parser._summarize_failed_tests(result),
'Failures: all_failed_suite, some_failed_suite.test2')
+ def test_ktap_format(self):
+ ktap_log = test_data_path('test_parse_ktap_output.log')
+ with open(ktap_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.assertEqual(result.counts, kunit_parser.TestCounts(passed=3))
+ self.assertEqual('suite', result.subtests[0].name)
+ self.assertEqual('case_1', result.subtests[0].subtests[0].name)
+ self.assertEqual('case_2', result.subtests[0].subtests[1].name)
+
+ def test_parse_subtest_header(self):
+ ktap_log = test_data_path('test_parse_subtest_header.log')
+ with open(ktap_log) as file:
+ result = kunit_parser.parse_run_tests(file.readlines())
+ self.print_mock.assert_any_call(StrContains('suite (1 subtest)'))
def line_stream_from_strs(strs: Iterable[str]) -> kunit_parser.LineStream:
return kunit_parser.LineStream(enumerate(strs, start=1))
diff --git a/tools/testing/kunit/test_data/test_parse_ktap_output.log b/tools/testing/kunit/test_data/test_parse_ktap_output.log
new file mode 100644
index 000000000000..ccdf244e5303
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_parse_ktap_output.log
@@ -0,0 +1,8 @@
+KTAP version 1
+1..1
+ KTAP version 1
+ 1..3
+ ok 1 case_1
+ ok 2 case_2
+ ok 3 case_3
+ok 1 suite
diff --git a/tools/testing/kunit/test_data/test_parse_subtest_header.log b/tools/testing/kunit/test_data/test_parse_subtest_header.log
new file mode 100644
index 000000000000..216631092e7b
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_parse_subtest_header.log
@@ -0,0 +1,7 @@
+KTAP version 1
+1..1
+ KTAP version 1
+ # Subtest: suite
+ 1..1
+ ok 1 test
+ok 1 suite
\ No newline at end of file
base-commit: 99c8c9276be71e6bc98979e95d56cdcbe0c2454e
--
2.38.1.584.g0f3c55d4c2-goog
Signal that a test run is complete through perf_test_args instead of
having tests open code a similar solution. Ensure that the field resets
to false at the beginning of a test run as the structure is reused
between test runs, eliminating a couple of bugs:
access_tracking_perf_test hangs indefinitely on a subsequent test run,
as 'done' remains true. The bug doesn't amount to much right now, as x86
supports a single guest mode. However, this is a precondition of
enabling the test for other architectures with >1 guest mode, like
arm64.
memslot_modification_stress_test has the exact opposite problem, where
subsequent test runs complete immediately as 'run_vcpus' remains false.
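For illustration, the pattern boils down to the following self-contained
userspace sketch (C11 atomics stand in for the selftests'
READ_ONCE()/WRITE_ONCE(); this is not the selftest code itself):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_bool stop_vcpus;	/* plays the role of perf_test_args.stop_vcpus */

    static void *vcpu_worker(void *arg)
    {
    	unsigned long iters = 0;

    	(void)arg;
    	while (!atomic_load(&stop_vcpus))
    		iters++;	/* the guest workload would run here */

    	printf("worker ran %lu iterations\n", iters);
    	return NULL;
    }

    int main(void)
    {
    	pthread_t thread;

    	/* Reset at the start of every run: the state is reused across runs. */
    	atomic_store(&stop_vcpus, false);
    	pthread_create(&thread, NULL, vcpu_worker, NULL);

    	/* ... main body of the test run ... */

    	/* Signal completion right before joining the workers. */
    	atomic_store(&stop_vcpus, true);
    	pthread_join(thread, NULL);
    	return 0;
    }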
Co-developed-by: Sean Christopherson <seanjc(a)google.com>
Signed-off-by: Sean Christopherson <seanjc(a)google.com>
[oliver: added commit message, preserve spin_wait_for_next_iteration()]
Signed-off-by: Oliver Upton <oliver.upton(a)linux.dev>
---
tools/testing/selftests/kvm/access_tracking_perf_test.c | 8 +-------
tools/testing/selftests/kvm/include/perf_test_util.h | 3 +++
tools/testing/selftests/kvm/lib/perf_test_util.c | 3 +++
.../selftests/kvm/memslot_modification_stress_test.c | 6 +-----
4 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 76c583a07ea2..942370d57392 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -58,9 +58,6 @@ static enum {
ITERATION_MARK_IDLE,
} iteration_work;
-/* Set to true when vCPU threads should exit. */
-static bool done;
-
/* The iteration that was last completed by each vCPU. */
static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
@@ -211,7 +208,7 @@ static bool spin_wait_for_next_iteration(int *current_iteration)
int last_iteration = *current_iteration;
do {
- if (READ_ONCE(done))
+ if (READ_ONCE(perf_test_args.stop_vcpus))
return false;
*current_iteration = READ_ONCE(iteration);
@@ -321,9 +318,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
mark_memory_idle(vm, nr_vcpus);
access_memory(vm, nr_vcpus, ACCESS_READ, "Reading from idle memory");
- /* Set done to signal the vCPU threads to exit */
- done = true;
-
perf_test_join_vcpu_threads(nr_vcpus);
perf_test_destroy_vm(vm);
}
diff --git a/tools/testing/selftests/kvm/include/perf_test_util.h b/tools/testing/selftests/kvm/include/perf_test_util.h
index eaa88df0555a..536d7c3c3f14 100644
--- a/tools/testing/selftests/kvm/include/perf_test_util.h
+++ b/tools/testing/selftests/kvm/include/perf_test_util.h
@@ -40,6 +40,9 @@ struct perf_test_args {
/* Run vCPUs in L2 instead of L1, if the architecture supports it. */
bool nested;
+ /* Test is done, stop running vCPUs. */
+ bool stop_vcpus;
+
struct perf_test_vcpu_args vcpu_args[KVM_MAX_VCPUS];
};
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 9618b37c66f7..ee3f499ccbd2 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -267,6 +267,7 @@ void perf_test_start_vcpu_threads(int nr_vcpus,
vcpu_thread_fn = vcpu_fn;
WRITE_ONCE(all_vcpu_threads_running, false);
+ WRITE_ONCE(perf_test_args.stop_vcpus, false);
for (i = 0; i < nr_vcpus; i++) {
struct vcpu_thread *vcpu = &vcpu_threads[i];
@@ -289,6 +290,8 @@ void perf_test_join_vcpu_threads(int nr_vcpus)
{
int i;
+ WRITE_ONCE(perf_test_args.stop_vcpus, true);
+
for (i = 0; i < nr_vcpus; i++)
pthread_join(vcpu_threads[i].thread, NULL);
}
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index bb1d17a1171b..3a5e4518307c 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -34,8 +34,6 @@
static int nr_vcpus = 1;
static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
-static bool run_vcpus = true;
-
static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
{
struct kvm_vcpu *vcpu = vcpu_args->vcpu;
@@ -45,7 +43,7 @@ static void vcpu_worker(struct perf_test_vcpu_args *vcpu_args)
run = vcpu->run;
/* Let the guest access its memory until a stop signal is received */
- while (READ_ONCE(run_vcpus)) {
+ while (!READ_ONCE(perf_test_args.stop_vcpus)) {
ret = _vcpu_run(vcpu);
TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret);
@@ -110,8 +108,6 @@ static void run_test(enum vm_guest_mode mode, void *arg)
add_remove_memslot(vm, p->memslot_modification_delay,
p->nr_memslot_modifications);
- run_vcpus = false;
-
perf_test_join_vcpu_threads(nr_vcpus);
pr_info("All vCPU threads joined\n");
--
2.38.1.584.g0f3c55d4c2-goog