On 11 June 2015 at 08:17, Alex Shi alex.shi@linaro.org wrote:
On 06/11/2015 11:55 AM, Chase Qi wrote:
The parsing of test output is done by LKP; LKP saves metrics to JSON files, and our test definition decodes the JSON file and sends the metrics to LAVA. If we want to have all the sub-metrics, I guess patching the LKP test suite and sending the change upstream is the right way to go. IMHO, it can be done, but not at this stage.
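For reference, the glue is roughly of this shape. This is only a minimal sketch, assuming a flat JSON file of metric-name/value pairs and the standard lava-test-case helper; the file name and metric layout are illustrative, not the exact LKP output format:

#!/usr/bin/env python
# Minimal sketch: decode a JSON metrics file and report each metric to LAVA
# through the lava-test-case helper. The file name and metric layout here
# are illustrative assumptions, not the exact LKP output format.
import json
import subprocess

def report_metrics(json_path):
    with open(json_path) as f:
        metrics = json.load(f)
    for name, value in metrics.items():
        # Values may be scalars or lists of samples; take the first sample here.
        measurement = value[0] if isinstance(value, list) else value
        subprocess.check_call([
            'lava-test-case', name.replace('.', '-'),
            '--result', 'pass',
            '--measurement', str(measurement),
        ])

if __name__ == '__main__':
    report_metrics('avg.json')  # hypothetical path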
Maybe upstream LKP doesn't want our LAVA-specific parsing. We probably need to handle it ourselves. And if the test output cannot be presented clearly and appropriately, it won't be so helpful for us.
There is nothing LAVA-specific there. Chase is using LKP output only, and LKP doesn't save the table you presented in any way. So if we want to have that data, LKP needs to be patched.
The data from 'time' and 'perf' will also be saved by LKP. I think 'avg.json' is the right file to parse; it includes the benchmark metrics as well as the time and perf data. I added a 'LOOPS' parameter to the test definition to support repeated runs. If we run the test more than once, the data in avg.json will be the average of the runs. Here is a LAVA job example: https://validation.linaro.org/scheduler/job/382401
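To be explicit about what I mean by "the average of the runs", it is essentially this. A minimal sketch with made-up per-run file names, not the actual LKP code:

#!/usr/bin/env python
# Minimal sketch of what "avg.json" represents: the same metric averaged
# across N per-run JSON result files. File names and layout are illustrative.
import json

def average_runs(run_files):
    sums, counts = {}, {}
    for path in run_files:
        with open(path) as f:
            run = json.load(f)
        for name, value in run.items():
            samples = value if isinstance(value, list) else [value]
            numeric = [v for v in samples if isinstance(v, (int, float))]
            if not numeric:
                continue
            sums[name] = sums.get(name, 0.0) + sum(numeric)
            counts[name] = counts.get(name, 0) + len(numeric)
    return {name: sums[name] / counts[name] for name in sums}

if __name__ == '__main__':
    # e.g. LOOPS=3 would give three per-run files (hypothetical names)
    print(json.dumps(average_runs(['1/stats.json', '2/stats.json', '3/stats.json']), indent=2))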
It is hard to figure out anything useful from these links. https://validation.linaro.org/dashboard/streams/anonymous/chase-qi/bundles/8...
seems not to work now. Could you resend the report once everything is right?
It does work; here are the detailed results: https://validation.linaro.org/dashboard/streams/anonymous/chase-qi/bundles/8... Alex, we're not kernel hackers and we don't know what's important and what isn't. Chase is asking for help identifying the important bits. Complaining, without details, that what we present is not what you want doesn't help :(
milosz
Hopefully it is what we need. Would you please check it and let me know your opinion?