Multiple conversations over the last week have convinced me that lava-test, as it currently is, is not well suited to the way LAVA is changing.
I should say that I'm writing this email more to start us thinking about where we're going rather than any immediate plans to start coding.
The fundamental problem is that it runs solely on the device under test (DUT). This has two problems:
1) It seems ill-suited to tests where not all of the data produced by the test originates from the device being tested (think power measurement or hdmi capture here).
2) We do too much work on the DUT. As Zygmunt can tell you, just installing lava-test on a fast model is quite a trial; doing the test result parsing and bundle formatting there is just silly.
I think that both of these things suggest that the 'brains' of the test running process should run on the host side, somewhat as lava-android-test does already.
Surprisingly enough, I don't think this necessarily requires changing much at all about how we specify the tests. At the end of the day, a test definition defines a bunch of shell commands to run, and we could move to a model where lava-test sends these to the board[1] to be executed rather than running them through os.system or whatever it runs now (parsing is different I guess, but if we can get the output onto the host, we can just run parsing there).
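To make that concrete, here is the rough shape of what I mean on the host side -- the helper name and the ssh transport are just assumptions for illustration, not anything lava-test does today:

    import subprocess

    def run_on_board(board_address, commands):
        """Run each test command on the DUT, capturing the output on the host.

        Assumes the test image has a reachable ssh daemon; a serial console
        transport would look the same from the caller's point of view.
        """
        log = []
        for cmd in commands:
            proc = subprocess.Popen(
                ["ssh", board_address, cmd],
                stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
            output, _ = proc.communicate()
            log.append((cmd, proc.returncode, output))
        return log

    # Because the host keeps the full output, result parsing and bundle
    # formatting can happen here instead of on the DUT.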
To actually solve the problems of 1 and 2 above, though, we will want some extensions, I think.
For point 1, we clearly need some way to specify how to get the data from the other data source. I don't have any bright ideas here :-)
In the theme of point 2, if we can specify installation in a more declarative way than "run these shell commands" there is a chance we can perform some of these steps on the host -- for example, stream installation could really just drop a pre-compiled binary at a particular location on the testrootfs before flashing it to the SD card. Tests can already depend on debian packages to be installed, which I guess is a particular case of this (and "apt-get install" usually works fine when chrooted into an armel or armhf rootfs with qemu-arm-static in the right place).
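To sketch what I mean (the dictionary layout, paths, and package names here are invented for illustration; the chroot/qemu-arm-static steps are the usual ones):

    import shutil
    import subprocess

    # Hypothetical declarative install section instead of raw shell commands.
    install_spec = {
        "deb_deps": ["stress", "bonnie++"],
        "files": {"/usr/local/bin/stream": "prebuilt/stream-armel"},
    }

    def prepare_rootfs(rootfs, spec):
        """Perform install steps on the host, against an unpacked testrootfs."""
        # Drop pre-compiled binaries straight into the rootfs before it is
        # flashed to the SD card.
        for dest, src in spec["files"].items():
            shutil.copy(src, rootfs + dest)
        # Install debian package dependencies in a chroot; qemu-arm-static
        # lets apt-get run armel/armhf binaries on the x86 host.
        shutil.copy("/usr/bin/qemu-arm-static", rootfs + "/usr/bin/")
        subprocess.check_call(
            ["chroot", rootfs, "apt-get", "install", "-y"] + spec["deb_deps"])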
We might want to take different approaches for different backends -- for example, running the install steps on real hardware might not be any slower and certainly parallelizes better than running them on the host via qemu, but the same is emphatically not the case for fast models.
Comments? Thoughts?
Cheers, mwh
[1] One way of doing this would be to create (on the testrootfs) a shell script that runs all the tests and an upstart job that runs the tests on boot -- this would avoid depending on a reliable network or serial console in the test image (although producing output on the serial console would still be useful for people watching the job).
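For what it's worth, the host-side part of [1] could be as small as something like this -- the file names and the upstart stanzas are guesses, just to show the shape, and /lava/run-all-tests.sh is assumed to be the per-job script generated elsewhere:

    import os
    import textwrap

    RUNNER_SCRIPT = textwrap.dedent("""\
        #!/bin/sh
        # Everything goes to the serial console, so people watching the
        # job still see output even without a working network.
        exec > /dev/console 2>&1
        /lava/run-all-tests.sh
        """)

    UPSTART_JOB = textwrap.dedent("""\
        # /etc/init/lava-test-runner.conf
        start on stopped rc RUNLEVEL=[2345]
        task
        exec /lava/lava-test-runner
        """)

    def install_boot_runner(rootfs):
        """Write the runner script and an upstart job into the testrootfs."""
        runner = os.path.join(rootfs, "lava/lava-test-runner")
        if not os.path.isdir(os.path.dirname(runner)):
            os.makedirs(os.path.dirname(runner))
        with open(runner, "w") as f:
            f.write(RUNNER_SCRIPT)
        os.chmod(runner, 0o755)
        job = os.path.join(rootfs, "etc/init/lava-test-runner.conf")
        with open(job, "w") as f:
            f.write(UPSTART_JOB)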
On 18 May 2012 08:04, Michael Hudson-Doyle michael.hudson@linaro.org wrote:
Multiple conversations over the last week have convinced me that lava-test, as it currently is, is not well suited to the way LAVA is changing.
I should say that I'm writing this email more to start us thinking about where we're going rather than any immediate plans to start coding.
The fundamental problem is that it runs solely on the device under test (DUT). This has two problems:
1) It seems ill-suited to tests where not all of the data produced by the test originates from the device being tested (think power measurement or hdmi capture here).
2) We do too much work on the DUT. As Zygmunt can tell you, just installing lava-test on a fast model is quite a trial; doing the test result parsing and bundle formatting there is just silly.
I think that both of these things suggest that the 'brains' of the test running process should run on the host side, somewhat as lava-android-test does already.
Surprisingly enough, I don't think this necessarily requires changing much at all about how we specify the tests. At the end of the day, a test definition defines a bunch of shell commands to run, and we could move to a model where lava-test sends these to the board[1] to be executed rather than running them through os.system or whatever it runs now (parsing is different I guess, but if we can get the output onto the host, we can just run parsing there).
To actually solve the problems of 1 and 2 above, though, we will want some extensions, I think.
For point 1, we clearly need some way to specify how to get the data from the other data source. I don't have any bright ideas here :-)
Getting data from an external device (and not only the DUT) isn't the only problem. It will be an interesting discussion at Connect. We'll have to run tests with lava-test to change the workload on the DUT and synchronize the data acquisition device to observe what's happening from an hdmi/power point of view with regard to the tests' code path.
In the theme of point 2, if we can specify installation in a more declarative way than "run these shell commands" there is a chance we can perform some of these steps on the host -- for example, stream installation could really just drop a pre-compiled binary at a particular location on the testrootfs before flashing it to the SD card. Tests can already depend on debian packages to be installed, which I guess is a particular case of this (and "apt-get install" usually works fine when chrooted into an armel or armhf rootfs with qemu-arm-static in the right place).
We might want to take different approaches for different backends -- for example, running the install steps on real hardware might not be any slower and certainly parallelizes better than running them on the host via qemu, but the same is emphatically not the case for fast models.
Comments? Thoughts?
The main issue is that lava-test is more than a test runner. It's causing performance issues because we do computation on the DUT. Parsing and compiling are the main bottlenecks. +1 to move the parsing to the host. +1 to use pre-compiled binaries when possible.
Cheers, mwh
[1] One way of doing this would be to create (on the testrootfs) a shell script that runs all the tests and an upstart job that runs the tests on boot
It should be flexible and not tied to Ubuntu images. This is our use case, but we may have to test other OSes that don't use upstart.
-- this would avoid depending on a reliable network or serial console in the test image (although producing output on the serial console would still be useful for people watching the job).
Fathi Boudra fathi.boudra@linaro.org writes:
On 18 May 2012 08:04, Michael Hudson-Doyle michael.hudson@linaro.org wrote:
Multiple conversations over the last week have convinced me that lava-test, as it currently is, is not well suited to the way LAVA is changing.
I should say that I'm writing this email more to start us thinking about where we're going rather than any immediate plans to start coding.
The fundamental problem is that it runs solely on the device under test (DUT). This has two problems:
1) It seems ill-suited to tests where not all of the data produced by the test originates from the device being tested (think power measurement or hdmi capture here).
2) We do too much work on the DUT. As Zygmunt can tell you, just installing lava-test on a fast model is quite a trial; doing the test result parsing and bundle formatting there is just silly.
I think that both of these things suggest that the 'brains' of the test running process should run on the host side, somewhat as lava-android-test does already.
Surprisingly enough, I don't think this necessarily requires changing much at all about how we specify the tests. At the end of the day, a test definition defines a bunch of shell commands to run, and we could move to a model where lava-test sends these to the board[1] to be executed rather than running them through os.system or whatever it runs now (parsing is different I guess, but if we can get the output onto the host, we can just run parsing there).
To actually solve the problems of 1 and 2 above, though, we will want some extensions, I think.
For point 1, we clearly need some way to specify how to get the data from the other data source. I don't have any bright ideas here :-)
Getting data from an external device (and not only the DUT) isn't the only problem. It will be an interesting discussion at Connect. We'll have to run tests with lava-test to change the workload on the DUT and synchronize the data acquisition device to observe what's happening from an hdmi/power point of view with regard to the tests' code path.
Sure. But I was only talking about the specification of tests here, which seems like one of the things that needs to get thought about soonest, because it's such a pain for everyone if we need to change it.
In the theme of point 2, if we can specify installation in a more declarative way than "run these shell commands" there is a chance we can perform some of these steps on the host -- for example, stream installation could really just drop a pre-compiled binary at a particular location on the testrootfs before flashing it to the SD card. Tests can already depend on debian packages to be installed, which I guess is a particular case of this (and "apt-get install" usually works fine when chrooted into an armel or armhf rootfs with qemu-arm-static in the right place).
We might want to take different approaches for different backends -- for example, running the install steps on real hardware might not be any slower and certainly parallelizes better than running them on the host via qemu, but the same is emphatically not the case for fast models.
Comments? Thoughts?
The main issue is that lava-test is more than a test runner. It's causing performance issues because we do computation on the DUT. Parsing and compiling are the main bottlenecks. +1 to move the parsing to the host. +1 to use pre-compiled binaries when possible.
Cheers, mwh
[1] One way of doing this would be to create (on the testrootfs) a shell script that runs all the tests and an upstart job that runs the tests on boot
It should be flexible and not tied to Ubuntu images. This is our use case, but we may have to test other OSes that don't use upstart.
Well, sure. I think all OSes we care about (except possibly Android, which is already in a happier place here) support running a shell script at boot somehow or other...
-- this would avoid depending on a reliable network or serial console in the test image (although producing output on the serial console would still be useful for people watching the job).
Cheers, mwh
On 18 May 2012 13:04, Michael Hudson-Doyle michael.hudson@linaro.org wrote:
Multiple conversations over the last week have convinced me that lava-test, as it currently is, is not well suited to the way LAVA is changing.
I should say that I'm writing this email more to start us thinking about where we're going rather than any immediate plans to start coding.
The fundamental problem is that it runs solely on the device under test (DUT). This has two problems:
- It seems ill-suited to tests where not all of the data
produced by the test originates from the device being tested (think power measurement or hdmi capture here).
- We do too much work on the DUT. As Zygmunt can tell you, just
installing lava-test on a fast model is quite a trial; doing the test result parsing and bundle formatting there is just silly.
I agree with the test result stuff, but if we move it to the host side, it needs collecting and parsing there too. I think we can discuss a more efficient way of collecting results, but I have no good ideas here.
We could enable a result collection and parsing extension; for out-of-order test results we could use a dumb one that collects all the output logs and, without any analysis, just dumps them into the test result.
I think that both of these things suggest that the 'brains' of the test running process should run on the host side, somewhat as lava-android-test does already.
Surprisingly enough, I don't think this necessarily requires changing much at all about how we specify the tests. At the end of the day, a test definition defines a bunch of shell commands to run, and we could move to a model where lava-test sends these to the board[1] to be executed rather than running them through os.system or whatever it runs now (parsing is different I guess, but if we can get the output onto the host, we can just run parsing there).
To actually solve the problems of 1 and 2 above, though, we will want some extensions, I think.
For point 1, we clearly need some way to specify how to get the data from the other data source. I don't have any bright ideas here :-)
In the theme of point 2, if we can specify installation in a more declarative way than "run these shell commands" there is a chance we can perform some of these steps on the host -- for example, stream installation could really just drop a pre-compiled binary at a particular location on the testrootfs before flashing it to the SD card. Tests can already depend on debian packages to be installed, which I guess is a particular case of this (and "apt-get install" usually works fine when chrooted into an armel or armhf rootfs with qemu-arm-static in the right place).
We might want to take different approaches for different backends -- for example, running the install steps on real hardware might not be any slower and certainly parallelizes better than running them on the host via qemu, but the same is emphatically not the case for fast models.
Does qemu simulation work for all platforms? AFAIK it has full support on beagle/panda, but not other platforms.
Comments? Thoughts?
Cheers, mwh
[1] One way of doing this would be to create (on the testrootfs) a shell script that runs all the tests and an upstart job that runs the tests on boot -- this would avoid depending on a reliable network or serial console in the test image (although producing output on the serial console would still be useful for people watching the job).
I think a stable network is necessary, at least in the test case deployment step.
Spring Zhang spring.zhang@linaro.org writes:
On 18 May 2012 13:04, Michael Hudson-Doyle michael.hudson@linaro.org wrote:
Multiple conversations over the last week have convinced me that lava-test, as it currently is, is not well suited to the way LAVA is changing.
I should say that I'm writing this email more to start us thinking about where we're going rather than any immediate plans to start coding.
The fundamental problem is that it runs solely on the device under test (DUT). This has two problems:
- It seems ill-suited to tests where not all of the data
produced by the test originates from the device being tested (think power measurement or hdmi capture here).
- We do too much work on the DUT. As Zygmunt can tell you, just
installing lava-test on a fast model is quite a trial; doing the test result parsing and bundle formatting there is just silly.
I agree with the test result stuff, but if we move it to the host side, it needs collecting and parsing there too. I think we can discuss a more efficient way of collecting results, but I have no good ideas here.
We could enable a result collection and parsing extension; for out-of-order test results we could use a dumb one that collects all the output logs and, without any analysis, just dumps them into the test result.
Yes, I think to start with we should just ship the entire test output from the DUT to the host and parse it there.
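i.e. something roughly like this on the host (the result-line pattern here is just an example; each test definition would keep its own pattern, as it does now, just applied on the host instead of on the DUT):

    import re

    # Example pattern only -- each test definition would carry its own.
    RESULT_LINE = re.compile(
        r"^(?P<test_case_id>\S+):\s*(?P<result>pass|fail)\s*$")

    def parse_output(log_text):
        """Turn the raw output shipped back from the DUT into result dicts."""
        results = []
        for line in log_text.splitlines():
            match = RESULT_LINE.match(line)
            if match:
                results.append(match.groupdict())
        return results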
I think that both of these things suggest that the 'brains' of the test running process should run on the host side, somewhat as lava-android-test does already.
Surprisingly enough, I don't think this necessarily requires changing much at all about how we specify the tests. At the end of the day, a test definition defines a bunch of shell commands to run, and we could move to a model where lava-test sends these to the board[1] to be executed rather than running them through os.system or whatever it runs now (parsing is different I guess, but if we can get the output onto the host, we can just run parsing there).
To actually solve the problems of 1 and 2 above, though, we will want some extensions, I think.
For point 1, we clearly need some way to specify how to get the data from the other data source. I don't have any bright ideas here :-)
In the theme of point 2, if we can specify installation in a more declarative way than "run these shell commands" there is a chance we can perform some of these steps on the host -- for example, stream installation could really just drop a pre-compiled binary at a particular location on the testrootfs before flashing it to the SD card. Tests can already depend on debian packages to be installed, which I guess is a particular case of this (and "apt-get install" usually works fine when chrooted into an armel or armhf rootfs with qemu-arm-static in the right place).
We might want to take different approaches for different backends -- for example, running the install steps on real hardware might not be any slower and certainly parallelizes better than running them on the host via qemu, but the same is emphatically not the case for fast models.
Does qemu simulation work for all platforms? AFAIK it has full support on beagle/panda, but not other platforms.
No, but I think the sort of things that are done during test installation -- installing a package from a PPA, compiling a C file -- could be run just as well under QEMU's beagle emulation as on something more like the DUT itself. But it's something to keep in mind, for sure.
Comments? Thoughts?
Cheers, mwh
[1] One way of doing this would be to create (on the testrootfs) a shell script that runs all the tests and an upstart job that runs the tests on boot -- this would avoid depending on a reliable network or serial console in the test image (although producing output on the serial console would still be useful for people watching the job).
I think a stable network is necessary, at least in the test case deployment step.
Yes, for sure. We've had this goal to run tests without depending on a working network in the test image, but I don't know how important it is to stick to that -- Android tests require the network and it doesn't seem to cause massive problems there...
Cheers, mwh