On Thu, Jan 20, 2022 at 2:21 PM Michał Winiarski <michal.winiarski@intel.com> wrote:
So I'm a bit doubtful about this particular statement, but if you have tried it out and it works well, then that's good too.
I just think the primary benefits of UML are faster compilation and that it's somewhat lighter-weight than bringing up a VM.
It was good enough to debug any problems that I accidentally introduced during the conversion, but perhaps that was a simple enough use case not to run into SIGSEGVs.
Ah, that's good to know.
I don't think that would work. IIUC, the runner currently operates on taint - there's a subset of taints that it considers "fatal" (including TAINT_WARN).
If we have tests that WARN, perhaps we could introduce an extra flag in the test state, at the per-test or per-test-suite level, to mark that the test wants to fail on WARN? Then we would only fail if the test opted in (or the other way around, if opt-out makes more sense and more tests turn out not to WARN as part of their normal test logic).
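To make the idea a bit more concrete, something roughly like the sketch below - completely hypothetical, none of these names exist in KUnit or the runner today, it's just an illustration of the per-suite opt-in:

/*
 * Hypothetical sketch only - this is NOT an existing KUnit or runner API.
 * It just illustrates the opt-in idea: a per-suite flag that a runner
 * could consult before deciding whether TAINT_WARN counts as a failure.
 * All names here are made up.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct suite_policy {
	const char *name;
	bool fail_on_warn;	/* the hypothetical opt-in flag */
};

static const struct suite_policy policies[] = {
	/* A suite that WARNs as part of its normal logic would opt out. */
	{ .name = "example_warning_suite", .fail_on_warn = false },
	/* A suite that should never WARN would opt in. */
	{ .name = "example_quiet_suite",   .fail_on_warn = true  },
};

/* Should TAINT_WARN after this suite be treated as a test failure? */
static bool warn_is_fatal(const char *suite)
{
	for (size_t i = 0; i < sizeof(policies) / sizeof(policies[0]); i++)
		if (!strcmp(policies[i].name, suite))
			return policies[i].fail_on_warn;
	return true;	/* conservative default for suites with no entry */
}

int main(void)
{
	printf("example_warning_suite fatal: %d\n", warn_is_fatal("example_warning_suite"));
	printf("example_quiet_suite fatal: %d\n", warn_is_fatal("example_quiet_suite"));
	return 0;
}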
Yeah, I think this would work. I chatted with Brendan and David about this and suggested this approach.
This definitely seems useful, so I think we should keep it in mind even if we don't get an implementation done in the near future.
It's applied on top of the DRM subsystem integration tree (drm-tip): https://cgit.freedesktop.org/drm-tip
At the time, it was already based on v5.16.
Ack, thanks!
I might take another stab at applying the patches locally, but based on my brief skim over them, everything seemed fine from a KUnit point of view. It's quite clear you've read over KUnit pretty thoroughly (e.g. figured out how to create new managed resources, etc.). So I probably won't have any feedback to give.
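As an aside for anyone reading along who hasn't used them: a test-managed allocation looks roughly like this. This is a minimal made-up example (not taken from the series) - kunit_kzalloc() ties the allocation's lifetime to the test, so KUnit frees it automatically when the test finishes:

// SPDX-License-Identifier: GPL-2.0
/*
 * Minimal sketch of a KUnit-managed allocation (not from the series):
 * kunit_kzalloc() registers the buffer as a test resource, so it is
 * freed automatically when the test completes.
 */
#include <kunit/test.h>
#include <linux/module.h>
#include <linux/slab.h>

static void managed_alloc_example(struct kunit *test)
{
	u32 *buf = kunit_kzalloc(test, 4 * sizeof(*buf), GFP_KERNEL);

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
	KUNIT_EXPECT_EQ(test, buf[0], 0u);
	/* No kfree() needed - KUnit cleans up the resource for us. */
}

static struct kunit_case managed_example_cases[] = {
	KUNIT_CASE(managed_alloc_example),
	{}
};

static struct kunit_suite managed_example_suite = {
	.name = "managed-resource-example",
	.test_cases = managed_example_cases,
};
kunit_test_suite(managed_example_suite);

MODULE_LICENSE("GPL");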
Most of that 17-18s is taken by two subtests of drm_mm_tests:
[22:17:19] ============================================================
[22:17:19] ================= drm_mm_tests (1 subtest) =================
[22:17:27] [PASSED] test_insert
[22:17:27] ================== [PASSED] drm_mm_tests ===================
[22:17:27] ============================================================
[22:17:27] Testing complete. Passed: 1, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
[22:17:27] Elapsed time: 10.400s total, 0.001s configuring, 2.419s building, 7.947s running
[22:17:42] ============================================================
[22:17:42] ================= drm_mm_tests (1 subtest) =================
[22:17:50] [PASSED] test_replace
[22:17:50] ================== [PASSED] drm_mm_tests ===================
[22:17:50] ============================================================
[22:17:50] Testing complete. Passed: 1, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
[22:17:50] Elapsed time: 10.272s total, 0.001s configuring, 2.492s building, 7.776s running
Their runtime can be controlled with the max_prime and max_iterations modparams - I left the default values intact, but we can tweak them to speed things up if needed.
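For reference, these are ordinary module parameters, roughly along the lines of the sketch below (the defaults, types and permissions shown here are assumptions, not necessarily what the test uses):

/*
 * Rough sketch of how such knobs are typically exposed; only the
 * parameter names come from the test, everything else is assumed.
 */
#include <linux/module.h>
#include <linux/moduleparam.h>

static unsigned int max_iterations = 8192;	/* assumed default */
static unsigned int max_prime = 128;		/* assumed default */

module_param(max_iterations, uint, 0400);
MODULE_PARM_DESC(max_iterations, "Upper bound on iterations per subtest");

module_param(max_prime, uint, 0400);
MODULE_PARM_DESC(max_prime, "Largest prime used when generating sizes");

When the tests are built-in (as in a typical kunit.py UML run), such parameters can still be overridden on the kernel command line as <module>.<param>=<value>.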
Ah, I was not concerned about test runtime at all. I was just suggesting that real-time output would be useful if you didn't have it already.