-----Original Message-----
From: Nícolas F. R. A. Prado <nfraprado@collabora.com>

On Thu, Aug 01, 2024 at 02:13:05PM -0600, Shuah Khan wrote:
On 8/1/24 13:15, Nícolas F. R. A. Prado wrote:
On Wed, Jul 31, 2024 at 05:19:45PM -0600, Shuah Khan wrote:
On 7/24/24 15:40, Nícolas F. R. A. Prado wrote:
Introduce a new test to identify regressions causing devices to go missing on the system.
For each bus and class on the system, the test checks the number of devices present against a reference file and reports any missing devices. The reference file needs to have been generated beforehand by the same program on a known-good kernel.
Can you elaborate on how to generate the reference file? It isn't clear.
Indeed, I'll make that information clearer in future versions.
The reference file is generated by passing the --generate-reference flag to the test:
./exist.py --generate-reference
It will be printed as standard output.
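To give an idea, the reference is essentially a per-bus and per-class device count. The schema below is just my illustration of the idea, not necessarily the script's actual output format:

  buses:
    pci: 14
    usb: 9
    i2c: 3
  classes:
    net: 2
    leds: 5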
How about adding an option that takes a filename for the generated file? That would make it easier to use.
Sure, we can do that. Another option would be to write it to the filename that would be looked for by default. So for your machine just calling
./exist.py --generate-reference
could write the reference to ./LENOVO,20XH005JUS.yaml.
Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
Hi,
Key points about this test:
- Goal: Identify regressions causing devices to go missing on the system
- Focus:
- Ease of maintenance: the reference file is generated programmatically
- Minimum of false positives: the script makes as few assumptions as possible about the stability of device identifiers to ensure renames/refactors don't trigger false positives
- How it works: for each bus and class on the system, the test checks the number of devices present against a reference file and reports any missing devices. The reference file needs to have been generated beforehand by the same program on a known-good kernel. A rough sketch of the comparison follows this list.
- Comparison to other tests: It might be possible(*) to replace the discoverable devices test [1] with this. The benefit of this test is that it's easier to set up and maintain and has wider coverage of devices.
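As mentioned above, here is a rough sketch of the comparison, for illustration only (this is not the actual exist.py code; the reference is assumed to be a simple name-to-count mapping):

  import glob
  import os

  def count_devices():
      # Count devices per bus and per class from sysfs (simplified:
      # every entry under /sys/class/<name>/ is treated as a device).
      counts = {}
      for bus in glob.glob('/sys/bus/*'):
          counts['bus/' + os.path.basename(bus)] = len(glob.glob(bus + '/devices/*'))
      for cls in glob.glob('/sys/class/*'):
          counts['class/' + os.path.basename(cls)] = len(glob.glob(cls + '/*'))
      return counts

  def compare(reference, current):
      # Fewer devices than the reference is a failure; more is only a
      # suggestion to regenerate the reference.
      for name, expected in sorted(reference.items()):
          found = current.get(name, 0)
          if found < expected:
              print(f'not ok {name}: expected {expected} devices, found {found}')
          elif found > expected:
              print(f'ok {name} # more devices than reference, consider updating it')
          else:
              print(f'ok {name}')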
Additional detail:
- Having more devices on the running system than the reference does not cause a failure, but a warning is printed in that case to suggest that the reference be updated.
- Missing devices are detected per bus/class based on the number of devices. When the test fails, the known metadata for each of the expected and detected devices is printed, and a simple similarity comparison is done to suggest the devices most likely to be missing.
- The proposed place to store the generated reference files is the 'platform-test-parameters' repository in KernelCI [2].
How would a user run this on their systems - do they need to access this repository in KernelCI?
No, that repository would just be a place where people could find pre-generated reference files (which we'll be using when running this test in KernelCI), but anyone can always generate their own reference files and store them wherever they want.
Thanks for the clarification. This might be a good addition to the documentation. I think this test could benefit from a README or howto.
Sure, I can add a README in the next revision.
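To sketch the idea, the README could boil down to the steps already discussed in this thread (the exact wording is just a suggestion):

  1. On a known-good kernel, generate a reference file for your machine:
       ./exist.py --generate-reference > LENOVO,20XH005JUS.yaml
     (the file name matches your machine's DMI or devicetree identifier)
  2. Keep that file in the test directory, or point the test at it with --reference-dir.
  3. On the kernel under test, run:
       make -C tools/testing/selftests/devices/exist/ run_tests
  4. Regenerate the reference whenever devices are legitimately added to the system.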
This is what I see when I run the test on my system:
make -C tools/testing/selftests/devices/exist/ run_tests
make: Entering directory '/linux/linux_6.11/tools/testing/selftests/devices/exist'
TAP version 13
1..1
# timeout set to 45
# selftests: devices/exist: exist.py
# TAP version 13
# # No matching reference file found (tried './LENOVO,20XH005JUS.yaml')
First generate the reference file for your system like so:
tools/testing/selftests/devices/exist/exist.py --generate-reference > tools/testing/selftests/devices/exist/LENOVO,20XH005JUS.yaml
Worked - I see
TAP version 13
# Using reference file: ./LENOVO,20XH005JUS.yaml
1..76
# Totals: pass:76 fail:0 xfail:0 xpass:0 skip:0 error:0
Things to improve:
- Have the script take a file instead of assuming that the reference file is in the current directory, e.g.: exist.py -f reference_file
The script also has another parameter to specify a different directory to look for the reference file: --reference-dir
But the file name is currently fixed and determined from the system's ID (DMI or Devicetree compatible).
We can definitely have another flag to force a different file name if that's useful. In theory it shouldn't be needed, given that the machine name is used as the filename, but it might come in handy if there are machine name clashes, or if you want to keep references for different stable kernel versions for the same machine in the same directory.
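For what it's worth, here is roughly how I picture the default-name logic together with such an override flag. The --reference-file flag is hypothetical (it's the flag being discussed here, not something exist.py currently has); --reference-dir already exists:

  import argparse
  import os

  def machine_name():
      # DMI-based identifier on ACPI systems, devicetree compatible elsewhere
      dmi = '/sys/class/dmi/id'
      if os.path.isdir(dmi):
          vendor = open(os.path.join(dmi, 'sys_vendor')).read().strip()
          product = open(os.path.join(dmi, 'product_name')).read().strip()
          return f'{vendor},{product}'   # e.g. 'LENOVO,20XH005JUS'
      with open('/proc/device-tree/compatible', 'rb') as f:
          # NUL-separated list of compatible strings; use the most specific one
          return f.read().split(b'\0')[0].decode()

  parser = argparse.ArgumentParser()
  parser.add_argument('--reference-dir', default='.')
  parser.add_argument('--reference-file')   # hypothetical override
  args = parser.parse_args()

  reference = args.reference_file or os.path.join(
      args.reference_dir, machine_name() + '.yaml')
  print(f'Using reference file: {reference}')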
I would recommend having a flag that allows specifying the filename (which overrides the automatically determined filename).

I am watching this thread with great interest, as I am planning on proposing, at Plumbers, another system which will utilize reference values (the 'adding benchmark results support to KTAP/kselftest' topic that I plan to give at the testing MC).
Since this test uses reference values, it has all the same issues as my proposal:
- naming of the file where the reference values live
- some method to automatically select an appropriate reference values file
  This patch uses the system's ID, but other things (like kernel config) will likely affect the set of devices on the system. So a more complex selection mechanism may eventually be needed (a rough illustration follows this list).
- generating or updating the reference value file
- where to store the reference file - e.g. when would it be appropriate to store it upstream and when elsewhere?
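As a purely hypothetical illustration of that selection (not something this patch implements), the test could try progressively less specific reference names before giving up:

  import os
  import platform

  def pick_reference(ref_dir, machine):
      kernel = platform.release()                 # e.g. '6.11.0'
      stable = '.'.join(kernel.split('.')[:2])    # e.g. '6.11'
      candidates = [
          f'{machine},{stable}.yaml',   # per machine and stable kernel series
          f'{machine}.yaml',            # per machine only (current behaviour)
      ]
      for name in candidates:
          path = os.path.join(ref_dir, name)
          if os.path.exists(path):
              return path
      return None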
I don't want to gum up the acceptance of this patch by raising all these issues now. I think it's acceptable to have a tester generate their own reference file prior to first use, as the simplest way to use this test. But I think that having pre-generated reference files with known-good values that testers can use will be valuable in the long term.
These issues can be discussed at Plumbers, IMHO.
Regards,
--
Tim