On 06/11/2015 11:55 AM, Chase Qi wrote:
The parsing of the test output is done by LKP: LKP saves the metrics to JSON files, and our test definition decodes the JSON file and sends the results to LAVA. If we want all the sub-metrics, I guess patching the LKP test suite and sending the change upstream is the right way to go. IMHO it can be done, but not at this stage.
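For illustration, here is a rough Python sketch of the kind of decoding the test definition does. The file name, the metric layout and the fixed "pass" result are assumptions on my side, not the exact LKP format; lava-test-case is the helper LAVA provides inside a test shell:

    # Sketch: decode an LKP metrics JSON file and report each metric to LAVA.
    # Assumes a flat mapping of metric name -> value (or list of samples).
    import json
    import subprocess

    def report_metrics(path):
        with open(path) as f:
            metrics = json.load(f)
        for name, value in metrics.items():
            # LKP-style files may store a list of samples; take the first one.
            measurement = value[0] if isinstance(value, list) else value
            subprocess.call([
                "lava-test-case", name.replace("/", "-").replace(" ", "_"),
                "--result", "pass",
                "--measurement", str(measurement),
            ])

    if __name__ == "__main__":
        report_metrics("avg.json")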
Maybe upstream LKP doesn't want a LAVA-specific parser; we would probably need to maintain it ourselves. And if the test output cannot be shown clearly and appropriately, it won't be very helpful for us.
The data from 'time' and 'perf' will also be saved by LKP. I think 'avg.json' is the right file to parse; it includes the metrics of the benchmark as well as the time and perf data. I added a 'LOOPS' parameter to the test definition to support repeated runs. If we run the test more than once, the data in avg.json will be the average of the runs. Here is a LAVA job example: https://validation.linaro.org/scheduler/job/382401
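Just to illustrate what "the average of the runs" means, a small sketch of how per-metric values from several runs could be averaged (the per-run file names and layout here are made up for the example, this is not the actual LKP code):

    # Sketch: compute the per-metric mean over LOOPS runs,
    # which is effectively what avg.json represents.
    import json

    def average_runs(run_files):
        totals, counts = {}, {}
        for path in run_files:
            with open(path) as f:
                for name, value in json.load(f).items():
                    sample = value[0] if isinstance(value, list) else value
                    totals[name] = totals.get(name, 0.0) + float(sample)
                    counts[name] = counts.get(name, 0) + 1
        return {name: totals[name] / counts[name] for name in totals}

    # e.g. with LOOPS=3 (hypothetical per-run result files):
    print(average_runs(["result/1/stats.json", "result/2/stats.json", "result/3/stats.json"]))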
It is hard to figure out anything useful from this link: https://validation.linaro.org/dashboard/streams/anonymous/chase-qi/bundles/8...
It seems it doesn't work now. Could you resend the report once everything is right?
Hopefully, it is what we need. Would you please check and let me know your opinion?