On 08/11/2022 02.06, David Matlack wrote:
On Fri, Oct 14, 2022 at 09:03:59PM +0000, Sean Christopherson wrote:
On Tue, Oct 04, 2022, Thomas Huth wrote:
Many KVM selftests are completely silent. This has the disadvantage that users cannot tell what is going on. For example, some time ago a tester asked me how to find out whether a certain new sub-test had been added to one of the s390x test binaries (which he did not compile on his own), which is hard to judge when there is no output at all. So I finally went ahead and implemented TAP output in the s390x-specific tests some months ago.
Now I wonder whether that could be a good strategy for the x86 and generic tests, too?
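For readers unfamiliar with it, TAP (Test Anything Protocol) output is just a plan line followed by one "ok"/"not ok" line per subtest. A hypothetical stream from a converted selftest might look like this (the subtest names here are made up for illustration):

```text
1..3
ok 1 - test_read_access
ok 2 - test_write_access
not ok 3 - test_invalid_args
```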
Taking Andrew's thoughts a step further, I'm in favor of adding TAP output, but only if we implement it in such a way that it reduces the burden of writing new tests. I _really_ like that sync_regs_test's subtests are split into consumable chunks, but I worry that the amount of boilerplate needed will deter test writers and increase the maintenance cost.
And my experience with KVM-unit-tests is that letting test writers specify strings for test names is a bad idea, e.g. using an arbitrary string creates a disconnect between what the user sees and what code is running, and makes it unnecessarily difficult to connect a failure back to code. And if we ever support running specific testcases by name (I'm still not sure this is a net positive), arbitrary strings get really annoying because inevitably an arbitrary string will contain characters that need to be escaped in the shell.
Adding a macro or three to let tests define and run testcases with minimal effort would more or less eliminate the boilerplate. And in theory, providing semi-rigid macros would help force simple tests to conform to standard patterns, which should reduce the cost of someone new understanding a test, and would likely let us do more automagic things in the future.
E.g. something like this in the test:
	KVM_RUN_TESTCASES(vcpu,
			  test_clear_kvm_dirty_regs_bits,
			  test_set_invalid,
			  test_req_and_verify_all_valid_regs,
			  test_set_and_verify_various_reg_values,
			  test_clear_kvm_dirty_regs_bits,
			  );
There is an existing framework in tools/testing/selftests/kselftest_harness.h that provides macros for setting up and running test cases. I converted sync_regs_test to use it below as an example [1].
The harness runs each subtest in a child process, so sharing a VM/VCPU across test cases is not possible. This means setting up and tearing down a VM for every test case, but the harness makes this pretty easy with FIXTURE_{SETUP,TEARDOWN}(). With this harness, we can keep using TEST_ASSERT() as-is, and still run all test cases even if one fails. Plus no need for the hard-coded ksft_*() calls in main().
Hi!
Sorry for not getting back to this earlier - I'm pretty busy with other stuff right now. But your suggestion looks really cool, I like it - so if you've got some spare time to work on the conversion, please go ahead (I won't have much time to work on this in the coming weeks, I think)!
Thomas