Hey all,

Lately I've been studying the LAVA QA Tracker by looking at its code, setting it up on a LAVA dashboard instance in a VM and trying to use it, and asking some questions on IRC.
I recall asking at the beginning of the month if there were plans on the Linaro side to allocate developers to finish the QA Tracker in a specific timeframe, and was told that some development for this [monthly] cycle should start "next week".
However, apart from my own bazaar branch (crappy; it does not really solve or fix anything so far), I haven't heard of any recent development activity. Did anything happen that I missed?
Also, some people I spoke with were not aware of those plans for this cycle at all. So I'd just like to check on a few things:
- Do you have a target deadline to meet for this?
- Who are the people who will specifically be working on this?
Thanks :)
Hi,
On Tue, Mar 20, 2012 at 12:12:30AM -0400, Jean-François Fortin Tam wrote:
The LAVA QA tracker has been on and off the development agenda, with recent activity indicating that we probably want a solution along those lines.
ATM we are gathering requirements and deciding what to do strategically on this topic. I suspect that the final input will come from the dedicated QA team that we are currently in the process of bootstrapping.
I expect them to be the owners of the requirements for such things in LAVA.
What is your interest in the QA tracker? What use cases would you want to cover? If you want to work on this, I would suggest that we set up a joint meeting with the right stakeholders to nail down what we want to do and how you can best help...

--
Alexander Sack asac@linaro.org
Technical Director, Linaro Platform Teams
http://www.linaro.org | Open source software for ARM SoCs
Hi Jeff,

A bit of background on lava-qatracker. A while back, we were using another qatracker and having some issues. It allowed us to set up milestones for testing, but just linked to a wiki page with steps to test, which a tester would have to follow, then come back to the page and just enter any bug numbers that were found. There was no more detail than that, and just telling someone to follow a wiki made it too easy to miss steps completely.
Just with some quick brainstorming, we thought it would be really useful to have a system that would allow us to define the tests, take the user through them, and let them answer right on the page whether each one passed, failed, or was skipped. It wasn't meant to be a finished solution, just a starting point for the conversation about what features should be included in the first pass. I think that alone would already be a huge improvement, but there are some important things missing. Namely, we need a good way to link manual tests to automated tests on the same image, and visualize them together for the milestone testing. This has proved difficult because on Ubuntu we have separate build artifacts that are combined to make the image. However, over the next month or so, we're looking at moving to a different system for producing the builds, and creating an image at build time that could be submitted. This should simplify the definition of "what is an image" quite a bit.
There are still a number of issues that need to be worked out: for instance, linking and visualization of the results; how to define the tests (a bzr branch with files, or directly in the db, are two options that have been brought up); granularity of products, milestones, etc. I'd love to get input from other interested parties such as Collabora so that it's not only useful to us, but flexible enough to be useful to others as well.
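To make that a bit more concrete, here is roughly the shape of the data model I have in mind. The names below are made up for illustration only and don't necessarily match what's in the code today:

# Purely illustrative sketch, not the current lava-qatracker schema.
from django.db import models

class Milestone(models.Model):
    """A testing milestone, e.g. a monthly release candidate."""
    name = models.CharField(max_length=256)
    bundle_stream = models.CharField(max_length=256)  # dashboard stream holding the automated results
    test_branch = models.CharField(max_length=1024)   # bzr branch with the manual test definitions

class ManualTest(models.Model):
    """One manual test case, loaded from a file in the milestone's branch."""
    milestone = models.ForeignKey(Milestone, on_delete=models.CASCADE)
    name = models.CharField(max_length=256)
    steps = models.TextField()  # the instructions shown to the tester

class ManualTestResult(models.Model):
    """The answer the tester gives on the page: pass, fail, or skip."""
    RESULT_CHOICES = (("pass", "Pass"), ("fail", "Fail"), ("skip", "Skip"))
    test = models.ForeignKey(ManualTest, on_delete=models.CASCADE)
    result = models.CharField(max_length=4, choices=RESULT_CHOICES)
    bug_number = models.CharField(max_length=32, blank=True)   # bug filed by the tester, if any
    bundle_sha1 = models.CharField(max_length=40, blank=True)  # automated result bundle for the same image

Whether the ManualTest rows get synced from files in the branch or edited directly in the db is exactly the open question above; the results side wouldn't care either way.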
In the interest of making it usable enough to demo what's there, and be useful as a conversation starter again, I did a bit of testing on it today and fixed a couple of bugs that might have been preventing you from seeing how to use it. It looks like you've gotten far enough to install it and make it run, so I'll skip over the boring details of that. Update your branch and reinstall though, and I'll start from there:
1. Log in as the admin user and open the admin interface.
2. Click on the +Add button next to "milestones" on that page.
3. Give it a name, bundle stream, etc. You'll also want to give it a branch to use. For the branch, just use "https://code.launchpad.net/~mwhudson/+junk/test-cases", which has some example tests in it.
4. Go back out to the main LAVA interface, click on "QA Tracker" at the top, and select the milestone you just created; under there you'll see how it asks you for test results, etc.
Let me know if you have any issues with getting to this point. I'd like to talk further about your interest, and ideas for what kind of features you're looking to have in a tool like this.
Thanks,
Paul Larson
Hi all,
On Tue, 2012-03-20 at 22:26 -0500, Paul Larson wrote:
I (and Jeff of course) have been looking for a tool to help with tracking manual test results for some time already, and honestly it would be awesome to have it integrated with the automated test results. That's why I'm interested in helping define the QA Tracker's future: that way we could either start planning to work on this tool or discard it as a possible alternative.
I heard that Salveti was planning to use the QA Tracker really soon and even came up with a list of basic requirements/features that he would like to have. They seem very simple in terms of functionality, but might need some definitions, like what an image is, etc.
I know it's not a priority for you guys right now, but it would be nice if we could start officially listing the features we would be aiming for in the next release and what needs to be decided before development starts; otherwise the project will stay stuck until someone else really needs it again.
What do you think?
Honestly, there aren't many features needed in such a tool IMO. What we are looking for is a simple way to track the results of manual tests and generate reports, so it can be very simple and basic, no rocket science. The point is that we would like to align our interests with Linaro's, so we end up with a tool that everybody can take advantage of. I guess a good starting point would be the list of requirements that Salveti cooked up some time ago. Do you guys have it?
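Just to illustrate how basic it could be, something like this (a hypothetical sketch, not code from any existing branch) is all we'd need on the reporting side:

# Hypothetical sketch of the per-milestone summary we are after; made-up names.
from collections import Counter

def milestone_report(results):
    """results: list of (test_name, result) pairs, with result being 'pass', 'fail' or 'skip'."""
    totals = Counter(result for _, result in results)
    failed = sorted(name for name, result in results if result == "fail")
    lines = ["Manual test results:",
             "  pass: %d  fail: %d  skip: %d"
             % (totals["pass"], totals["fail"], totals["skip"])]
    if failed:
        lines.append("  failing tests: " + ", ".join(failed))
    return "\n".join(lines)

print(milestone_report([("boot", "pass"), ("wifi", "fail"), ("suspend", "skip")]))

The interesting part for us is not the report itself, but where the results come from and how they are tied to a given image.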
Again, thanks a lot for this brainstorm.
BR,
-Abner