On Mon, Jan 31, 2011 at 11:58 AM, Paul Larson paul.larson@linaro.org wrote:
- One queue TOTAL. One queue may seem like a bottleneck, but I don't
think it has to be in practice. One process can monitor that queue, then launch a process or thread to handle each new job that comes in.
I think a RabbitMQ message can include the board information, which would let us use one queue; alternatively, we could use different queues for different types of boards.
Right, but my question was, if you are already encoding that information in the job stream, what is the advantage to having a queue for each board type, rather than a single queue?
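To make the single-queue idea concrete, here is a toy sketch using only Python's standard library in place of RabbitMQ (the job fields and the handler are invented for illustration, not part of any agreed format): one monitor drains the queue and launches a worker thread per job, so a single queue need not serialize execution.

```python
import json
import queue
import threading

job_queue = queue.Queue()   # stands in for the single RabbitMQ queue
handled = []                # records which board each job was routed to

def handle_job(job):
    # A real dispatcher would deploy, boot, and test on job["board"];
    # here we just note that the job reached a worker.
    handled.append((job["board"], job["name"]))

def monitor():
    # One monitor watches the queue and spawns a thread per job.
    workers = []
    while True:
        msg = job_queue.get()
        if msg is None:      # sentinel: shut down
            break
        t = threading.Thread(target=handle_job, args=(json.loads(msg),))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()

# The board is encoded in the message itself, so one queue serves all types.
job_queue.put(json.dumps({"board": "panda01", "name": "ltp-run"}))
job_queue.put(json.dumps({"board": "beagle02", "name": "echo-test"}))
job_queue.put(None)

m = threading.Thread(target=monitor)
m.start()
m.join()
```

Since the routing information travels inside the message, adding a new board type needs no new queue.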
Job description: I'd like to see some more detail here. Can you provide an example of a job file that would illustrate how you see it working? We also need to specify the underlying mechanisms that will handle parsing what's in the job file, and calling [something?] to do those tasks. What we have here feels like it might be a bit cumbersome.
I added a detailed one to the spec, like:
1. [Beagle-1, http://snapshots.linaro.org/11.05-daily/linaro-hwpacks/imx51/20110131/0/imag..., http://snapshots.linaro.org/11.05-daily/linaro-headless/20110131/0/images/ta..., 900, ["abrek", "LTP", 180], ["none", "echo 100 > abc", 1], ... ]
2. [IMX51-2, 20100131, 900, ["abrek", "LTP", 180], ["none", "echo 100 > abc", 1], ... ]
This looks almost like JSON, which is what Zygmunt was originally pushing for, IIRC. If we are already going down that path, it seems sensible to take it a step further and have defined sections. For example:
Tests: [
    { name: "LTP",
      testsuite: "abrek",
      subtest: "ltp",    <---- not thrilled about that name "subtest", but can't think of something better to call it at the moment
      timeout: 180,
      reboot_after: True },
    { name: "echo test",
      testsuite: "shell",
      timeout: 20,
      reboot_after: False }
]
What do you think? Obviously, there would be other sections for things like defining the host characteristics, the image to deploy, etc.
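As a rough illustration of how a dispatcher might consume such a Tests section (field names follow the example above; the default values here are my own guesses, not anything we've agreed on):

```python
# Sketch of walking the proposed "Tests" section. "name" and "testsuite"
# are treated as required; the defaults for the optional fields are
# assumptions for this example only.
def parse_tests(section):
    parsed = []
    for entry in section:
        parsed.append({
            "name": entry["name"],
            "testsuite": entry["testsuite"],
            "subtest": entry.get("subtest"),             # optional
            "timeout": entry.get("timeout", 300),        # assumed default
            "reboot_after": entry.get("reboot_after", False),
        })
    return parsed

tests = parse_tests([
    {"name": "LTP", "testsuite": "abrek", "subtest": "ltp",
     "timeout": 180, "reboot_after": True},
    {"name": "echo test", "testsuite": "shell", "timeout": 20,
     "reboot_after": False},
])
```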
Other questions...
What if we have a dependency, how does that get installed on the target image? For example, if I want to do a test that requires bootchart be installed before the system is booted, we should be able to specify that, and have it installed before booting.
What about installing necessary test suites? How do we tell it, in advance, what we need to have installed, and how does it get on the image before we boot, or before we start testing?
I think validation tools, test suites, and necessary files would likely be installed after test image deployment.
Yes, clearly they have to be installed after deployment, but we may also need to consider installing them BEFORE we actually boot the test image.
Had another thought on this tonight. I didn't hear much back on the previous proposal, but how about something more like this as an example of what a job description might look like? In this scenario, there would be some metadata for the job itself, such as timeout values, etc. Then steps would define commands with parameters. That would allow us to be more flexible, and to add support for more commands without changing the format. Hopefully the mailer doesn't mangle this too badly. :)
{
    "job_name": "foo",
    "target": "panda01",
    "timeout": 18000,
    "steps": [
        {
            "command": "deploy",
            "parameters": {
                "rootfs": "http://snapshots.linaro.org/11.05-daily/linaro-developer/20110208/0/images/t...",
                "hwpack": "http://snapshots.linaro.org/11.05-daily/linaro-hwpacks/panda/20110208/0/imag..."
            }
        },
        { "command": "boot_test_image" },
        {
            "command": "test_abrek",
            "parameters": { "test_name": "ltp" }
        },
        {
            "command": "submit_results",
            "parameters": {
                "server": "http://dashboard.linaro.org",
                "stream": "panda01-ltp"
            }
        }
    ]
}
Thanks, Paul Larson
I have a question regarding the test job JSON file and the messages from Scheduler to Dispatcher: is this file created by the Scheduler and put in the message queue where the Dispatcher will fetch it from, or how should this work?
On Mon, Feb 14, 2011 at 5:58 AM, Mirsad Vojnikovic < mirsad.vojnikovic@linaro.org> wrote:
I have a question regarding the test job JSON file and the messages from Scheduler to Dispatcher: is this file created by the Scheduler and put in the message queue where the Dispatcher will fetch it from, or how should this work?
That's one possibility, yes: the scheduler could create this from user input. Another, of course, is that a user could submit a job control file directly to the scheduler rather than dealing with selecting options.
I think this looks good. Is this the JSON file definition we will use initially? Asking because the corresponding Dispatcher WI is set to done: "[qzhang] Define the job description both on dispatcher and scheduler: DONE". I will need at least an initial version to work with on the Scheduler parts.
Also, I wonder if we will support more complex test jobs involving more than one board, either of the same type or different types? If yes, how will the Dispatcher handle those? Should we put that in the queue as a single job or as multiple jobs? How will that affect the JSON file definition?
On Tue, Feb 15, 2011 at 10:23 AM, Mirsad Vojnikovic < mirsad.vojnikovic@linaro.org> wrote:
... I think this looks good. Is this the JSON file definition we will use initially? Asking because the corresponding Dispatcher WI is set to done: "[qzhang] Define the job description both on dispatcher and scheduler: DONE". I will need at least an initial version to work with on the Scheduler parts.
Yes, let's get this documented and go with it for now. I talked to zyga on irc about it and he seemed happy with the last thing I proposed. It will probably need some tweaking, but I think that should be pretty close unless someone can see a reason why this won't work?
Also, I wonder if we will support more complex test jobs involving more than one board, either of the same type or different types? If yes, how will the Dispatcher handle those? Should we put that in the queue as a single job or as multiple jobs? How will that affect the JSON file definition?
We don't really have any tests at the moment that would require that, so
it's purely speculation at this point. If it's just a matter of needing a server to run against, we may *want* to have a stable server to use for things like this, and just have them all use the same thing rather than deploying a second machine to act as a server. If it's really a test with two component parts, it may make more sense to launch them as separate jobs. I think that would be a reasonable approach, but let's revisit it when/if we have something that absolutely requires it, rather than spend a lot of time trying to design around something we don't have need for right now anyway.
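If we ever did need multi-board jobs, one possible shape (purely hypothetical; the "targets" field does not exist in the format discussed above) would be for the scheduler to expand a single submission into separate per-board jobs, as suggested:

```python
# Hypothetical sketch: expand a multi-board submission into one queued
# job per board. "targets" and the name-suffixing scheme are invented
# for this example.
def split_job(job):
    base = {k: v for k, v in job.items() if k != "targets"}
    jobs = []
    for target in job["targets"]:
        j = dict(base)
        j["target"] = target
        j["job_name"] = "%s-%s" % (job["job_name"], target)
        jobs.append(j)
    return jobs

jobs = split_job({"job_name": "client-server", "timeout": 900,
                  "targets": ["panda01", "panda02"], "steps": []})
```

Each resulting job would then flow through the existing single-board path unchanged, which is what makes "multiple jobs" the cheaper design.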
Thanks, Paul Larson
Hi, Paul,
For the Python environment in the deployed test image, how much support will it provide? Can we deploy additional Python libs to the test image if required? The Dispatcher client part may need some.
On 02/17/2011 11:28 AM, Spring Zhang wrote:
Hi, Paul,
For the python environment in the deployed test image, how much support will it provide? Can we deploy additional python libs to the test image if required? Dispatcher client part may need some.
What dependencies are you thinking of?
IMHO, when the client is present on the device we can assume a "normal" system and be okay with most dependencies.
Best regards ZK
Right now, it uses a headless image and has some, but not too much, room for additional things to be installed. There is a possibility that we may, at some point, want to implement this master image in flash though. If that happens, it would likely be something like initramfs/busybox. For now, it is safe to assume that Python is present on the master image, but try to keep the dependencies to a minimum. Now, if you are talking about a client piece that runs in the booted image, that depends on the image we are using. Most would likely have Python, but you would still need to keep it pretty thin. Not sure whether nano will still have it once it is fully stripped down to a minimum. Android images are unlikely to have Python at all.
-Paul Larson