On Thu, Apr 14, 2011 at 1:19 PM, Paul Larson paul.larson@linaro.org wrote:
On Wed, Apr 13, 2011 at 11:25 PM, Alexander Sack asac@linaro.org wrote:
On Wed, Apr 13, 2011 at 12:52 PM, Jeremy Chang jeremy.chang@linaro.org wrote:
Hi, list: Just to let you know, especially the validation team members, I am working on Android support in LAVA, so I created a branch: lp:~jeremychang/lava/android-support. Hi Paul, maybe you can help me by giving it a review. Thanks!
Paul is travelling... I assume he will be able to review this on Friday.
Yes, trying to catch up with email, I'll take a look at it as soon as I can. And thanks for working on this! I'm looking forward to seeing what you've got!
Hi, Paul: Thanks in advance, and I appreciate your comments.
I am working in my personal environment, so I changed some configs to fit it for testing. I also documented them here: https://wiki.linaro.org/JeremyChang/Sandbox/LavaAndroidValidation
On the partitioning... is all of that really necessary? Is there any way to get this down? To support this, we'll need to modify MMC_BLOCK_MINORS for the 10.11 kernels that we have for the master images on most of the boards, as well as on any laptop used to create these images. Would be nice if there were some easy way around this.
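(For reference, the knob in question is a kernel Kconfig option that caps how many minor numbers, and therefore partitions, each MMC/SD card gets. A hedged sketch of the change that would be needed in the 10.11 kernel configs; the exact value is an assumption:)

```
# CONFIG_MMC_BLOCK_MINORS limits the minors (and thus partitions)
# visible per MMC/SD card; the default of 8 is too small for the
# extra Android partitions. Raising it (e.g. to 16) on the master
# image kernels and on any laptop creating images would work around
# this, at the cost of rebuilding those kernels.
CONFIG_MMC_BLOCK_MINORS=16
```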
The rootfs partition will be removed and replaced by an initial ramdisk image soon, but it seems the sdcard partition still cannot be seen from the master image. I am thinking more about this.
-MASTER_STR = "root@master:"
+MASTER_STR = "root@linaro:"
 #Test image recognization string
 TESTER_STR = "root@linaro:"
This is ill-advised, as you now can't tell which image you're booted into. Better to just set the hostname on your master image to "master".
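As an aside, a minimal sketch of why the two prompt strings need to stay distinct (illustrative names only, not the actual LAVA dispatcher code):

```python
# With the hostname on the master image set to "master", the serial
# console prompt alone tells the dispatcher which image it is talking to.
MASTER_STR = "root@master:"   # master image prompt
TESTER_STR = "root@linaro:"   # test image keeps the default hostname

def identify_image(prompt_line):
    """Return which image a serial-console prompt line came from."""
    if MASTER_STR in prompt_line:
        return "master"
    if TESTER_STR in prompt_line:
        return "test"
    return "unknown"
```

If both constants were "root@linaro:", identify_image() could never distinguish the two images.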
Right, I just need to change the hostname. I updated the wiki page. Thanks!
Basically I would like to make a prototype, based on the existing LAVA, that does the testing/validation for Android from deployment through submit_result. I have already made it run the full sequence: deployment, reboot into Android, a basic test, and a simple monkey test.
What I am thinking about now, and what seems to be the trickiest item, is how to present the results of the various Android validations on the dashboard [1]. Because of the platform difference, abrek (Python code) cannot run on the Android target, so the JSON bundle for the dashboard needs to be generated in LAVA itself. I think abrek cannot be used for Android validation and benchmarking.
Why is there an android version of test_abrek here? It doesn't seem to belong, but filling in run_adb_shell_command looks like it would be useful.
Yes, in the end I think the android_abrek command is not needed for Android; I will probably remove it later. The test result conversion can be done in the test actions in LAVA.
Regarding using adb shell to trigger the commands: another thing I am investigating is how to monitor, from the host (LAVA) side, the status of long-running applications (either native or Android Java) like monkey and 0xbench.
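One simple approach would be to poll the target's process list over adb from the host; a hedged sketch (helper names are made up, and the exact `ps` output format varies between Android versions):

```python
import subprocess
import time

def process_running(name, serial=None):
    """Check over adb whether a process (e.g. the monkey tool) is still alive."""
    cmd = ["adb"]
    if serial:
        cmd += ["-s", serial]     # pick a device when several are attached
    cmd += ["shell", "ps"]
    out = subprocess.check_output(cmd).decode("utf-8", "replace")
    return any(name in line for line in out.splitlines())

def wait_for_completion(name, timeout=3600, poll=30):
    """Poll until the named process exits; return False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not process_running(name):
            return True
        time.sleep(poll)
    return False
```

This only tells you the process went away, not whether it succeeded, so it would still need to be paired with collecting the tool's output or exit status.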
For now - to get past this point - you could just implement an action/command that is "monkey_results_to_json".
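Such an action could be quite small; a hedged sketch of the parsing side (the "Events injected:" and "// CRASH:" markers are what monkey prints in practice, but treat them as assumptions for your monkey version, and the result dict layout is illustrative, not the dashboard schema):

```python
import json
import re

def monkey_results_to_json(log_text):
    """Convert raw `adb shell monkey` output into a minimal result dict."""
    crashed = "// CRASH:" in log_text          # monkey's crash marker
    m = re.search(r"Events injected:\s*(\d+)", log_text)
    events = int(m.group(1)) if m else 0
    return {
        "test_id": "monkey",
        "result": "fail" if crashed else "pass",
        "measurement": events,                  # events successfully injected
    }

log = "Events injected: 500\n"
print(json.dumps(monkey_results_to_json(log)))
```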
Zygmunt looked at conversion scripts for gcc tests a while back. You should talk to him; I think he has a good feel for the good and bad approaches to this problem, since he has already been through some of the pain with it. But yes, while abrek can normally do the conversion, some other conversion piece will be needed here since we don't use abrek.
Thanks. I will need to check the conversion scripts.
My understanding is that "attachment" should be used to attach additional information... the main results should be in the "main" JSON part: basically "result" (boolean) and/or "measurement" (float), from what I recall.
Attachments can be used when you need to preserve the whole result as it was, logfiles, even binaries. However, launch control can't easily tell you much about them... just dump them out for you. For instance, we plan to store the serial log in an attachment. It's not something we need the dashboard to query on, but it would be useful to have if we get a failure, and want to investigate further. Parsing the test results, and storing them in measurements and results makes them easier to query for, so that graphs and reports can later be generated.
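To make the split concrete, here is a hedged sketch of a bundle that carries parsed results alongside a raw serial log attachment. The field names follow the dashboard bundle format only loosely (the format string, attachment encoding, and exact keys are assumptions; check the launch-control schema for the version you deploy):

```python
import base64
import json

serial_log = b"... raw console output ..."

bundle = {
    "format": "Dashboard Bundle Format 1.0",   # assumption: match your launch-control
    "test_runs": [{
        "test_id": "monkey",
        # Parsed results: queryable, usable for graphs and reports.
        "test_results": [
            {"test_case_id": "monkey-run", "result": "pass"},
            {"test_case_id": "events-injected", "result": "pass",
             "measurement": 500, "units": "events"},
        ],
        # Attachment: preserved as-is, only dumped back out on request.
        "attachments": {
            "serial.log": base64.b64encode(serial_log).decode("ascii"),
        },
    }],
}
print(json.dumps(bundle, indent=2))
```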
I still don't have much of a feel for generating the graphs and reports from the JSON bundle. I just set up launch-control/0.4 and have started to play with it.
Regarding generating the bundle, I will check how to generate a reasonable software_context and hardware_context for Android.
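Most of what those contexts need is probably reachable through Android build properties; a hedged sketch (the ro.build.*/ro.product.* properties are standard Android, but the helper and the dict layout are made up for illustration, not the dashboard schema):

```python
import subprocess

def adb_getprop(prop):
    """Read one Android system property over adb (hypothetical helper)."""
    out = subprocess.check_output(["adb", "shell", "getprop", prop])
    return out.decode("utf-8", "replace").strip()

def android_contexts():
    """Build minimal software/hardware context dicts from build properties."""
    software = {
        "image": {"name": adb_getprop("ro.build.display.id")},
        "android_version": adb_getprop("ro.build.version.release"),
    }
    hardware = {
        "board": adb_getprop("ro.product.board"),
        "device": adb_getprop("ro.product.device"),
    }
    return software, hardware
```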
I wouldn't worry about this too much right now, it's not a critical thing to do.
Yes, I will start with the most basic format, containing only the test results.
Thanks again for working on this! I'll look at it more in depth when I get home.
Cool!
Thanks, Paul Larson