On Thu, Jul 25, 2019 at 06:30:10PM +0200, Paolo Bonzini wrote:
> On 25/07/19 18:20, Sean Christopherson wrote:
> > On Thu, Jul 25, 2019 at 06:10:37PM +0200, Paolo Bonzini wrote:
> > > On 25/07/19 18:09, Sean Christopherson wrote:
> > > > > This investigation confirms it is a new test code failure on
> > > > > stable-rc 5.2.3
> > > >
> > > > No, it only confirms that kvm-unit-tests/master fails on 5.2.*.  To
> > > > confirm a new failure in 5.2.3 you would need to show a test that
> > > > passes on 5.2.2 and fails on 5.2.3.
> > >
> > > I think he meant "a failure in new test code". :)
> >
> > Ah, that does appear to be the case.  So just to be clear, we're good,
> > right?
>
> Yes.  I'm happy to gather ideas on how to avoid this (i.e. 1) if a
> submodule would be useful; 2) where to stick it).
Hi!
First, to be clear: from the LKFT perspective there are no kernel regressions here.

To your point, Paolo - reporting 'fail' because of a missing kernel feature is a generic problem we see across test suites, and it causes tons of pain and misery for CI people.

As a general rule, I'd avoid submodules, and even branches that track specific kernels. Rather (and I don't know if it's possible in this case), the best way to manage it, from both a test author's and a test runner's point of view, is to wrap the test in kernel feature checks, kernel version checks, kernel config checks, etc., and report 'skip' if the environment the test is running in isn't sufficient to run it - something like the sketch below. Then you only have to maintain one version of the test suite, users can always use the latest, and, critically, all failures are actual failures.
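For illustration only (this is not kvm-unit-tests' actual harness API; the EXIT_SKIP value and the kernel_at_least() helper are made up for the sketch), a test can probe its prerequisites up front and turn a missing feature into a skip rather than a failure:

#include <stdio.h>
#include <stdlib.h>
#include <sys/utsname.h>

/* Exit code the runner maps to "skip" -- use whatever your harness expects. */
#define EXIT_SKIP 77

/* Hypothetical helper: does the running kernel report at least major.minor? */
static int kernel_at_least(int major, int minor)
{
        struct utsname u;
        int maj = 0, min = 0;

        if (uname(&u) != 0)
                return 0;
        if (sscanf(u.release, "%d.%d", &maj, &min) != 2)
                return 0;
        return maj > major || (maj == major && min >= minor);
}

int main(void)
{
        /*
         * Example threshold: pretend the feature under test only exists from
         * 5.2 onward.  On older kernels the right answer is "skip", not "fail".
         */
        if (!kernel_at_least(5, 2)) {
                printf("SKIP: kernel too old for this test\n");
                return EXIT_SKIP;
        }

        /* ... actual test body goes here ... */
        printf("PASS\n");
        return EXIT_SUCCESS;
}

Whatever skip convention your runner understands works; 77 above is just a placeholder. The same pattern applies to config checks (e.g. reading /proc/config.gz when the kernel exposes it) or probing for the feature directly instead of comparing version numbers.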
Dan
> Paolo