On 06/10/2015 06:53 PM, Mark Brown wrote:
On 10 June 2015 at 08:04, Alex Shi <alex.shi@linaro.org> wrote:
2, performance testing often produces variable results, which normally calls for repeated runs and collecting statistics such as the average and standard deviation. We need to collect the data from repeated runs and decide how many reruns are needed.
I wonder if this should be done at a level up from the test definition for a given test itself, or perhaps a library that the test definitions can use - like you say this seems like something that's going to be needed for many tests (though for some the test will already have its own implementation).
If my memory is correct, LKP already has this function in its test scripts. We just need to tune it for each benchmark and each board.
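Just to illustrate the idea (this is a minimal sketch, not LKP's or LAVA's actual implementation; run_benchmark() and the ./my_benchmark command are placeholders), a wrapper could repeat the benchmark until the relative standard deviation drops below a threshold or a rerun cap is reached:

    #!/usr/bin/env python3
    # Hypothetical sketch: rerun a benchmark until the relative standard
    # deviation of its results is small enough, or a rerun cap is hit.
    import statistics
    import subprocess

    def run_benchmark():
        # Placeholder: run the benchmark once and return its numeric score.
        out = subprocess.check_output(["./my_benchmark", "--score-only"])
        return float(out.strip())

    def collect(min_runs=3, max_runs=10, rel_stdev_target=0.02):
        results = []
        while len(results) < max_runs:
            results.append(run_benchmark())
            if len(results) >= min_runs:
                mean = statistics.mean(results)
                stdev = statistics.stdev(results)
                if mean and stdev / mean <= rel_stdev_target:
                    break
        return statistics.mean(results), statistics.stdev(results), len(results)

    if __name__ == "__main__":
        mean, stdev, runs = collect()
        print("mean=%.2f stdev=%.2f runs=%d" % (mean, stdev, runs))

The thresholds would of course need tuning per benchmark and per board, as above.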
4, perf tool output is very important for kernel developers to understand why we got a particular performance result and where to improve; it is just as important as the test results themselves. So we had better figure out how much perf data we can get from the testing, and collect it.
This might be something else that could be shared.
Yes, and the same goes for other kinds of profiling tools.
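As a rough illustration only (the benchmark command and event list are placeholders, not part of any existing test definition), the test wrapper could run the benchmark under "perf stat" and keep the counter data next to the scores:

    #!/usr/bin/env python3
    # Hypothetical sketch: wrap the benchmark with "perf stat" so counter
    # data is collected alongside the performance score.
    import subprocess

    def run_with_perf(cmd, perf_out="perf_stat.csv"):
        # -x, writes machine-readable CSV; -o sends it to a file so the
        # benchmark's own stdout stays untouched.
        perf_cmd = ["perf", "stat", "-x,", "-o", perf_out,
                    "-e", "cycles,instructions,cache-misses"] + cmd
        subprocess.check_call(perf_cmd)
        counters = {}
        with open(perf_out) as f:
            for line in f:
                fields = line.strip().split(",")
                # perf stat CSV rows: value,unit,event,... ('#' lines are comments)
                if len(fields) >= 3 and not line.startswith("#"):
                    counters[fields[2]] = fields[0]
        return counters

    if __name__ == "__main__":
        print(run_with_perf(["./my_benchmark", "--score-only"]))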