On 06/30/2011 01:04 PM, Paul Larson wrote:
On Thu, Jun 30, 2011 at 10:44 AM, Daniel Lezcano <daniel.lezcano@linaro.org> wrote:
I attached the results of running these scripts on a fully working cpufreq framework on my Intel box; that will show you the output of the tests.
But cpufreq support is not complete on a Pandaboard, so the results there won't be as clean.
Looking at your results answered some of my questions at least; it seems to have a very different output format than the previous tests. Are the older ones replaced by this, extended by it, or is this a completely separate test suite? If it is meant to be a separate test suite, that makes some sense, as the tests here seem to be consistent pass/fail tests. However, I have a few concerns about making it easy to parse into results that can be stored automatically:
It is the same test suite, but I want the new tests to replace the old ones in the near future.
At present, the new and old tests co-exist.
The test suite is launched in two different manners:
 * running the old tests -> no modification
 * the new way, invoked by 'make check'
Today, you should not have to modify anything, as LAVA still invokes the pm-qa tests the old way.
When all the tests are finished, I would like to switch LAVA over to the new test suite execution, if possible.
So in the meantime, LAVA can continue to run the old tests, while developers can easily invoke the new tests with 'make check' and verify their kernel code each time new tests are committed.
Does that make sense?
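For illustration, the two invocations would look something like this; the individual script path is an assumption on my part, only 'make check' is given above:

    cd pm-qa/cpufreq
    ./test_01.sh          # old way: run one legacy script directly (name assumed)
    make check            # new way: run all the new tests in one go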
for 'cpu0':
    checking scaling_available_governors file presence ... PASS
    checking scaling_governor file presence ... PASS
for 'cpu1':
    checking scaling_available_governors file presence ... PASS
    checking scaling_governor file presence ... PASS
...

    Heading1
        test_id1
        test_id2
    Heading2
        test_id1
        test_id2

This is notoriously a bit tricky to deal with. It can be done, but the parser has to track which heading it is under and modify the test_id (or some attribute of it) to distinguish it from other test cases with the exact same name. Since you have complete control over how you output results, the format can easily be changed in a way that is both easy to parse and easy for a human to read. What might be easier is:

    cpu0_scaling_available_governors_file_exists: PASS
    cpu0_scaling_governor_file_exists: PASS
    cpu1_scaling_available_governors_file_exists: PASS
    cpu1_scaling_governor_file_exists: PASS
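A flat format like that needs no state at all to parse; a minimal sketch (this is not LAVA's actual parser, and the results file name is an assumption):

    #!/bin/sh
    # Read one 'test_id: RESULT' record per line; nothing to track across lines.
    while read -r test_id result; do
        test_id=${test_id%:}              # drop the trailing colon
        printf 'recorded %s = %s\n' "$test_id" "$result"
    done < results.txt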
OK, I can get rid of the nested format. No problem.
Each script does several tests; IMO it would be better to show a description of what the script is doing and finish each line with PASS or FAIL.
Will the following format be OK?
test_01/cpu0 : checking scaling_available_frequencies file ... PASS
test_01/cpu0 : checking scaling_cur_freq file ... PASS
test_01/cpu0 : checking scaling_setspeed file ... PASS
test_01/cpu1 : checking scaling_available_frequencies file ... PASS
test_01/cpu1 : checking scaling_cur_freq file ... PASS
test_01/cpu1 : checking scaling_setspeed file ... PASS
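Each script could emit that format with something like this; a rough sketch, the helper name is made up and only the standard sysfs cpufreq location is assumed:

    #!/bin/sh
    # Print 'test_01/<cpu> : checking <file> file ... PASS|FAIL'
    check_file() {
        cpu=$1
        file=$2
        printf "test_01/%s : checking %s file ... " "$cpu" "$file"
        if [ -f "/sys/devices/system/cpu/$cpu/cpufreq/$file" ]; then
            echo "PASS"
        else
            echo "FAIL"
        fi
    }

    for cpu in cpu0 cpu1; do
        for f in scaling_available_frequencies scaling_cur_freq scaling_setspeed; do
            check_file "$cpu" "$f"
        done
    done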
All the tests are described at:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts
Another thing that I'm curious about here is...
saving governor for cpu0 ... DONE
Is that a result? Or just an informational message? That's not clear, even to a human reader.
The result for a test case is PASS or FAIL.
But in some circumstances we need to do extra work, where a failure does not mean the test case failed but rather that a prerequisite for the test case is not met.
For example, one test case changes the governor to 'userspace'. We have to be 'root' to do such an operation. If the test script is run without 'root' privileges, then the prerequisite is not met and the test script fails, not the test case.
But anyway, I can log the operations not related to the test case to a file and just display PASS or FAIL as a result. It will then be up to the user to look at the log file to understand the problem.
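Something along these lines; a hedged sketch only, the log path and function names are my own invention, not pm-qa's actual code:

    #!/bin/sh
    LOG=./pm-qa.log

    log() {
        echo "$*" >> "$LOG"       # informational messages go to the log, not stdout
    }

    check_root() {
        if [ "$(id -u)" -ne 0 ]; then
            log "prerequisite not met: need root to change the governor"
            return 1              # abort the script; this is not a test case FAIL
        fi
    }

    test_governor_userspace() {
        check_root || return 1
        printf "test_02/cpu0 : setting governor to userspace ... "
        # stderr is sent to the log first, so a redirection error lands there too
        if echo userspace 2>>"$LOG" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor; then
            echo "PASS"
        else
            echo "FAIL"
        fi
    }

    test_governor_userspace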
deviation 0 % for 2333000 is ... VERY GOOD
Same comment as above about having an easier-to-interpret format, but the result here, "VERY GOOD": what does that mean? What are the other possible values? Is this simply another way of saying "PASS"? Or should an actual measurement be reported here?
Yep, I agree it is an informational message and should go to a log file. I will stick to a simple 'PASS' or 'FAIL' result and let the user read the documentation of the test on the wiki page to understand the meaning of these messages (GOOD, VERY GOOD, ...).
e.g. for this one:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Doc/QA/Scripts#test_06
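Concretely, the measured deviation could be reduced to PASS/FAIL like this; a sketch only, where the 10% tolerance, log path and names are assumptions, not what pm-qa actually does:

    #!/bin/sh
    LOG=./pm-qa.log

    # Compare a measured frequency against the requested one; log the raw
    # deviation, print only PASS or FAIL.
    check_deviation() {
        target=$1
        measured=$2
        dev=$(( (measured - target) * 100 / target ))
        [ "$dev" -lt 0 ] && dev=$(( -dev ))
        echo "deviation $dev% for $target" >> "$LOG"
        printf "test_06/cpu0 : frequency %s within tolerance ... " "$target"
        if [ "$dev" -le 10 ]; then
            echo "PASS"
        else
            echo "FAIL"
        fi
    }

    check_deviation 2333000 2333000    # deviation 0% -> PASS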
Thanks -- Daniel
--
http://www.linaro.org/ Linaro.org │ Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro Facebook | http://twitter.com/#!/linaroorg Twitter | http://www.linaro.org/linaro-blog/ Blog