On 11 June 2015 at 15:45, Riku Voipio riku.voipio@linaro.org wrote:
On 11 June 2015 at 06:55, Chase Qi chase.qi@linaro.org wrote:
The data from 'time' and 'perf' will also be saved by LKP. I think 'avg.json' is the right file to parse; it includes the metrics from the benchmark, time, and perf. I added a 'LOOPS' parameter to the test definition to support repeated runs. If we run the test more than once, the data in avg.json will be the average of the runs. Here is a LAVA job example: https://validation.linaro.org/scheduler/job/382401
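As a rough sketch of the parsing step, assuming avg.json is a flat JSON object mapping metric names to averaged numeric values (the file layout and metric names here are assumptions for illustration, not taken from the actual LKP output):

```python
import json

def parse_avg_json(path):
    """Parse an LKP avg.json file (assumed to be a flat metric-name -> value
    mapping) and return sorted (metric, value) pairs for reporting to LAVA."""
    with open(path) as f:
        data = json.load(f)
    results = []
    for metric, value in sorted(data.items()):
        # Keep only numeric entries; skip any textual metadata fields.
        if isinstance(value, (int, float)):
            results.append((metric, float(value)))
    return results
```

Each returned pair could then be reported with the usual lava-test-case measurement call from the test shell wrapper.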
Does the LOOPS parameter map to the iterations variable lkp uses? At least the ebizzy seems to use iterations setting to run itself 100x by default.
Hi Riku,
No. It just runs the test multiple times. For example, if the 'LOOPS' variable is set to 3, the current test definition will run the command 'run-local *.yaml' 3 times, and lkp will calculate the average of the metrics and save it to 'avg.json' automatically. As you mentioned, lkp might have already considered this, and some benchmarks are also designed to run themselves multiple times by default. So to start with, I suggest we set LOOPS to 1 (the default). If we find that test scores vary noticeably between runs, we can update the test plan and increase LOOPS to see if it helps.
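For what that averaging amounts to, here is a minimal sketch (an illustration of averaging per-run metric dicts into one avg.json-style dict; this is not lkp's actual implementation):

```python
def average_runs(runs):
    """Average a list of per-run metric dicts into a single dict,
    the way avg.json aggregates multiple LOOPS (sketch only)."""
    avg = {}
    for metrics in runs:
        for name, value in metrics.items():
            # Accumulate each metric's contribution to the mean.
            avg[name] = avg.get(name, 0.0) + value / len(runs)
    return avg
```

With LOOPS=1 this is just the single run's metrics unchanged, which is why starting at the default and only raising LOOPS when scores fluctuate seems reasonable.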
Here is the current test code https://git.linaro.org/people/chase.qi/test-definitions.git/blob/HEAD:/commo...
Thanks, Chase
Riku