On Fri, Jul 12, 2024 at 05:48:14PM +0000, Bird, Tim wrote:
-----Original Message-----
From: Nícolas F. R. A. Prado <nfraprado@collabora.com>
On Thu, Jul 11, 2024 at 01:53:37PM -0600, Shuah Khan wrote:
On 7/10/24 15:49, Shuah Khan wrote:
On 7/10/24 07:11, Greg Kroah-Hartman wrote:
On Fri, Jul 05, 2024 at 07:29:53PM -0400, Nícolas F. R. A. Prado wrote:
Log errors are the most widely used mechanism for reporting issues in the kernel. When an error is logged using the device helpers, e.g. dev_err(), it gets metadata attached that identifies the subsystem and device the message comes from. This series makes use of that metadata in a new test to report which devices logged errors.
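(To make the mechanism concrete: the dev_*() helpers attach SUBSYSTEM= and DEVICE= dictionary lines to each record exported through /dev/kmsg. Below is a rough, hypothetical Python sketch of grouping error-level records by that DEVICE= value; it is only an illustration of the idea, not the test added by this series, and errors_by_device() is an invented name.)

    # Hypothetical sketch, not the code from this series: collect error-level
    # kernel messages and group them by the DEVICE= metadata that dev_err()
    # and friends attach to each record exported through /dev/kmsg.
    import os
    from collections import defaultdict

    def errors_by_device():
        errors = defaultdict(list)
        fd = os.open("/dev/kmsg", os.O_RDONLY | os.O_NONBLOCK)
        try:
            while True:
                try:
                    record = os.read(fd, 8192).decode(errors="replace")
                except BlockingIOError:
                    break                      # reached the end of the log buffer
                header, _, body = record.partition(";")
                level = int(header.split(",")[0]) & 7
                if level > 3:                  # keep KERN_ERR (3) and worse
                    continue
                lines = body.splitlines()
                message = lines[0]
                # Continuation lines look like " SUBSYSTEM=power_supply" and
                # " DEVICE=+power_supply:sbs-8-000b".
                device = next((l.strip()[len("DEVICE="):]
                               for l in lines[1:]
                               if l.strip().startswith("DEVICE=")), None)
                if device:
                    errors[device].append(message)
        finally:
            os.close(fd)
        return errors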
The first two patches move a test and a helper script to keep things organized before this new test is added in the third patch.
It is expected that there might be many false-positive error messages throughout the driver code which will be reported by this test. By having this test in place and working through the results, we can address those occurrences by adjusting the loglevel of the messages that turn out not to be real errors requiring the user's attention. It will also motivate introducing additional error messages where they turn out to be missing, since it will then be possible to detect the underlying issues automatically.
As an example, below you can see the test result for mt8192-asurada-spherion. The single outstanding issue has been investigated and will be addressed in an EC firmware update [1]:
TAP version 13
1..1
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `model_name' property: -6
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `energy_full_design' property: -6
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
not ok 1 +power_supply:sbs-8-000b
Sorry this feedback is coming late.
The test description only includes the device identifier, and not a description of the testcase. (See below for why this might be the case).
Maybe a better testcase description might be: "check that power_supply:sbs-8-000b device has no error messages"
Indeed that would make it more descriptive, but it would also add a lot of redundant information. Since all the test does is check whether each device has error messages, the only unique information for each test case is the device identifier. In other words, I considered the "check that ... device has no error messages" part to be implied by the test suite.
I note that there are 3 different error messages, but only one testcase report generated. It would be a lot of work (probably too much) for this test to interpret the error messages and convert them into testcase language. But it might be worthwhile to report a test result for each type of error message generated.
I don't know how much value that would bring, but it might be an interesting idea to try.
Interpreting the messages seems too much, but if we just compare the messages for uniqueness and use them as is in the test identifiers it would be easy to do, eg:
not ok 1 +power_supply:sbs-8-000b.power_supply sbs-8-000b: driver failed to report `time_to_empty_now' property: -5
not ok 2 +power_supply:sbs-8-000b.power_supply sbs-8-000b: driver failed to report `energy_full_design' property: -6
not ok 3 +power_supply:sbs-8-000b.power_supply sbs-8-000b: driver failed to report `model_name' property: -6
My concern is that I don't know if this actually does a good job of portraying the issues: in this case it's really just one issue that triggers those different messages, but this makes it look like there are 3 independent issues, and the order of the errors is completely lost.
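(Purely for illustration, the deduplication itself would be simple; a naive sketch building on the hypothetical errors_by_device() helper from earlier, with ktap_per_message() likewise an invented name:)

    # Naive sketch of the deduplication idea, reusing the hypothetical
    # errors_by_device() from earlier: one test case per unique message.
    def ktap_per_message(errors):
        cases = []
        for device, messages in errors.items():
            for message in dict.fromkeys(messages):  # dedupe, keep first-seen order
                cases.append(f"{device}.{message}")
        print("TAP version 13")
        print(f"1..{len(cases)}")
        for i, ident in enumerate(cases, 1):
            print(f"not ok {i} {ident}")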
Totals: pass:0 fail:1 xfail:0 xpass:0 skip:0 error:0
There are a few interesting issues that this 'test' raises.
It is not so much a test itself, as a way to convert an error log report into KTAP output.
The 'test' has no knowledge of what testcases are being reported. That is, the actual test has occurred at runtime, inside the driver or device manager code that produced the error string. This 'test' just reports that the device reported an error.
Right. I would argue that the test suite as a whole does test something: "Did any devices on this platform report errors?". And if the answer is yes, then the test cases will detail this information by simply converting the error log into KTAP.
But indeed the bulk of the testing is done by the drivers and frameworks, and this 'test' is mostly about structuring and reporting that data in a way that makes it easier to identify the problematic devices on the system.
As currently structured, there is no way for this to report an 'ok' testcase. I talked with Nicolas on a KernelCI conference call; it is apparently possible to try to find all devices present on a system and report them as 'ok'. But this would produce a LOT of noise, as all the devices that did not have errors would be reported as 'ok'. This level of verbosity is probably not desirable. (It's similar to how a compiler does not report the success of every test it conducts on the source code as it is being compiled. Rather, a compiler only outputs errors and warnings for test failures.)
In terms of tracking regressions, it would be useful to identify the different things that could cause an error message to appear or disappear. For example, some errors related to hardware might change due to flakiness of the hardware. Some errors might be introduced or go away based on a change in the kernel configuration. This is one case where having a long list of "ok" lines would be useful - to disambiguate drivers that were present but didn't have an error from drivers that were not present at all.
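(For illustration, a rough sketch of what enumerating every present device from sysfs and emitting 'ok' for the error-free ones might look like; the "+subsystem:name" identifiers are only an approximation of the kernel's DEVICE= format, and enumerate_devices() / ktap_all_devices() are hypothetical names:)

    # Rough sketch only: enumerate devices from sysfs and emit "ok" for those
    # without logged errors. The "+subsystem:name" identifiers are merely an
    # approximation of the kernel's DEVICE= format.
    import glob
    import os

    def enumerate_devices():
        devices = set()
        for pattern in ("/sys/bus/*/devices/*", "/sys/class/*/*"):
            for path in glob.glob(pattern):
                subsystem = path.split("/")[3]  # bus or class name
                devices.add(f"+{subsystem}:{os.path.basename(path)}")
        return devices

    def ktap_all_devices(errors):
        devices = sorted(enumerate_devices())
        print("TAP version 13")
        print(f"1..{len(devices)}")
        for i, dev in enumerate(devices, 1):
            status = "not ok" if dev in errors else "ok"
            print(f"{status} {i} {dev}")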
Yes, for this I currently don't see another option... However, if we keep the "ok" cases hidden/implicit, I think the concern about drivers not being present at all can be mostly addressed by also running a probe test, like the DT kselftest [1] or the ACPI kselftest [2]. That way, you would rely on the probe test to verify whether all the devices expected on the system are there, and on the error log test to verify whether any of the devices on the system have issues.
I think this test is useful, and am not NAK-ing it. I'm just giving some feedback on the general approach, and hopefully raising some issues to consider, to make the test more useful.
I think more important than merging or not merging the test is getting feedback from the community on how to improve testing on the kernel, so I definitely welcome your feedback. Thank you for taking the time to write this.
Thanks, Nícolas
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tool...
[2] https://lore.kernel.org/all/20240308144933.337107-1-laura.nao@collabora.com