On Mon, 2 Oct 2023 at 21:55, Michal Wajdeczko <michal.wajdeczko@intel.com> wrote:
On 28.09.2023 22:54, Rae Moar wrote:
On Mon, Sep 25, 2023 at 1:58 PM Michal Wajdeczko <michal.wajdeczko@intel.com> wrote:
In case of parameterized tests we are not providing a test plan, so we can't detect if any result is missing.
Count available params using the same generator as during test execution.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: David Gow <davidgow@google.com>
Cc: Rae Moar <rmoar@google.com>
<...snip...>
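For context, the snipped change boils down to what the description says: walk the same generate_params callback that later drives the test and count how many parameters it yields, then use that count for the plan. A minimal sketch of the idea (the helper name is made up here, not taken from the actual diff):

#include <kunit/test.h>

/* Count the parameters a case will produce by walking its generator. */
static int count_case_params(const struct kunit_case *test_case)
{
	char desc[KUNIT_PARAM_DESC_SIZE];
	const void *param = NULL;
	int count = 0;

	while ((param = test_case->generate_params(param, desc)))
		count++;

	return count;
}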
Hello!
This change largely looks good to me. However, I am not 100 percent confident that the function to generate parameters always produces the same output (or same number of test cases). I would be interested in David's opinion on this.
Right, it's not explicitly specified in the KUNIT_CASE_PARAM nor test_case.generate_params documentation, but I would assume that while generating different output could be fine (and harmless to this patch), e.g. if based on a random seed, IMO it should be prohibited to generate a different number of params, as that would make it harder to compare executions when looking for regressions.
There are definitely some valid cases for parameterised tests to generate different numbers of tests in different configs / environments (e.g., there are some where the number of parameters depends on the number of CPUs). That being said, it shouldn't be a problem in a relatively standard test environment with any of the tests we currently have.
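To make the CPU example concrete, a generator along these lines (purely illustrative, not an existing test) yields a different number of parameters depending on how many CPUs happen to be online when it runs:

#include <kunit/test.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>

/*
 * The CPU id is encoded as (id + 1) so that NULL can still mean
 * "no previous parameter"; the total count tracks num_online_cpus().
 */
static const void *cpu_gen_params(const void *prev, char *desc)
{
	uintptr_t next = (uintptr_t)prev + 1;

	if (next > num_online_cpus())
		return NULL;

	snprintf(desc, KUNIT_PARAM_DESC_SIZE, "cpu-%lu",
		 (unsigned long)(next - 1));
	return (const void *)next;
}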
Some of these issues can be worked around by generating the tests regardless, but skipping them if required.
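A sketch of that workaround, continuing the CPU example above (same includes and id + 1 encoding; the generator would simply iterate up to a fixed bound such as nr_cpu_ids so the number of cases never changes):

/* Keep the generated set fixed and skip what doesn't apply at run time. */
static void example_cpu_test(struct kunit *test)
{
	unsigned long cpu = (unsigned long)test->param_value - 1;

	if (!cpu_online(cpu))
		kunit_skip(test, "cpu %lu not online", cpu);

	/* ... assertions exercising the given CPU would go here ... */
}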
Alternatively, we can introduce a flag to indicate whether the provided param generator is stable or not, and then provide a test plan only for the stable ones.
I think this sounds like a good idea, and a use for the KUnit attributes system. I'd thought that a 'deterministic' attribute would make sense, but it may make sense to split it into separate 'deterministic' and 'fixed structure' flags (the latter only requiring that the number, order, names, etc. of tests and subtests be the same).
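Purely as a strawman (none of this exists in KUnit today), the opt-in could look like a new attributes flag plus a KUNIT_CASE_PARAM variant that sets it, with the TAP layer only pre-computing a plan for cases that carry the flag:

/*
 * Hypothetical: assumes struct kunit_attributes grew a flag such as
 * 'bool fixed_structure' (param count/order/names never vary), which
 * the TAP output code would check before emitting a plan line.
 */
#define KUNIT_CASE_PARAM_FIXED(test_name, gen_params)		\
	{ .run_case = test_name, .name = #test_name,		\
	  .generate_params = gen_params,			\
	  .attr = { .fixed_structure = true } }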
There have been a couple of people asking for a feature to deliberately randomise test ordering, too. We'd want to clear these flags if that's in use. Of course, ideally anyone doing regression testing would be able to use the test/parameter name/description instead of test number, so ordering of tests shouldn't matter unless tests were buggy.