On Tue, 23 Jun 2020, David Gow wrote:
On Mon, Jun 22, 2020 at 6:45 AM Frank Rowand <frowand.list@gmail.com> wrote:
Tim Bird started a thread [1] proposing that he document the selftest result format used by Linux kernel tests.
[1] https://lore.kernel.org/r/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB11...
The issue of messages generated by the kernel under test (messages that are not directly created by the tests, but are instead triggered as a side effect of running them) came up. In this thread, I will call these messages "expected messages". Instead of sidetracking that thread with a proposal for handling expected messages, I am starting this new thread.
Thanks for doing this: I think there are quite a few tests which could benefit from something like this.
I think there were actually two separate questions: what we do with unexpected messages (most of which I expect are useless, but some of which may end up being related to an unexpected test failure), and how to have tests "expect" a particular message to appear. I'll stick to the latter for this thread.

Even there, there are two possible interpretations of "expected messages" that we probably want to distinguish explicitly: a message which must be present for the test to pass (which I think best fits the name "expected message"), and a message which the test is likely to produce, but which shouldn't alter the result (an "ignored message"). I don't see much use for the latter at present, but if we wanted to do more things with messages and had some otherwise very verbose tests, it could prove useful.
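To make the distinction concrete, here is a purely hypothetical sketch of how a test might annotate its TAP-style output; the "expected-message"/"ignored-message" directives are invented for this example, and no such syntax exists today:

	ok 1 - test_mount_readonly
	# expected-message: "EXT4-fs (sda1): mounted filesystem"
	# ignored-message: "lockdep is turned off"

A userspace harness could then fail test 1 if the first message never showed up in the kernel log, while silently discarding the second wherever it appeared.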
The other thing I'd note here is that this proposal does all of the actual message filtering in userspace. That makes a lot of sense for kselftest tests, but it does mean the kernel itself can't know whether the test has passed or failed. There's definitely a tradeoff between putting needless string parsing in the kernel and requiring a userland tool to determine the test results. The proposed KCSAN test suite[1] uses tracepoints to do this filtering in the kernel. It's not the cleanest approach, but there's no reason KUnit or similar couldn't implement a nicer API around it.
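For illustration, here is a minimal sketch along the lines of what the KCSAN test does: attach a probe to the console tracepoint so the test can watch everything printk() emits. The helpers expect_message_begin()/expect_message_end() and the expect_str/expect_seen variables are invented for this example, not an existing API:

	#include <linux/compiler.h>
	#include <linux/string.h>
	#include <linux/tracepoint.h>
	#include <trace/events/printk.h>

	static const char *expect_str;	/* substring the test expects to see */
	static bool expect_seen;

	/* Called for every console line while the probe is registered. */
	static void probe_console(void *ignore, const char *text, size_t len)
	{
		if (expect_str && strnstr(text, expect_str, len))
			WRITE_ONCE(expect_seen, true);
	}

	/* Arm the probe before running the code under test. */
	static int expect_message_begin(const char *s)
	{
		expect_str = s;
		expect_seen = false;
		return register_trace_console(probe_console, NULL);
	}

	/* Disarm the probe and report whether the message was seen. */
	static bool expect_message_end(void)
	{
		unregister_trace_console(probe_console, NULL);
		tracepoint_synchronize_unregister();
		return READ_ONCE(expect_seen);
	}

A test would call expect_message_begin("some message"), exercise the code under test, and fail if expect_message_end() returns false.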
For KTF, the way we handled this was to use the APIs for catching function entry and return (via kprobes), specifying printk() as the function to catch and checking its argument string to verify that the expected message was seen. That allows you to verify that messages appear in kernel testing context, but it's not ideal: at function entry, printk() has not yet expanded its arguments into the buffer for display, so only the format string is visible (there may be a better place to trace). If this seems like it could be useful, I could have a go at porting the kprobe support to KUnit, as it helps expand the vocabulary of what can be tested in kernel context; for example, we can also override return values of kernel functions to simulate errors.
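As a rough illustration (a sketch of the idea, not the actual KTF code), a kprobe on printk() entry can check the format-string argument. The names wanted/seen are invented, regs_get_kernel_argument() requires an architecture with HAVE_REGS_AND_STACK_ACCESS_API, and the symbol name may differ across kernel versions:

	#include <linux/kprobes.h>
	#include <linux/ptrace.h>
	#include <linux/string.h>

	static const char *wanted;	/* invented: format substring to match */
	static bool seen;

	/* Runs at printk() entry. Arguments have not been expanded into the
	 * output buffer yet, so we can only match the format string itself. */
	static int printk_pre(struct kprobe *p, struct pt_regs *regs)
	{
		/* First argument: the format string. */
		const char *fmt = (const char *)regs_get_kernel_argument(regs, 0);

		if (wanted && fmt && strstr(fmt, wanted))
			seen = true;
		return 0;
	}

	static struct kprobe printk_probe = {
		.symbol_name	= "printk",
		.pre_handler	= printk_pre,
	};

	/* register_kprobe(&printk_probe) before running the code under test;
	 * afterwards, unregister_kprobe(&printk_probe) and check 'seen'. */

This also shows the limitation mentioned above: a message like "error %d" can only be matched on "error %d", not on the rendered text.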
Alan