Greetings Linaro-dev,
I joined the mailing list today and, as a newbie Codethink(er), I am looking at contributing around the Linaro Validation area, especially LAVA. My background is as a programme manager, generally in embedded mobile, and I have just completed a significant WebKit-based development project. First up on Linaro, I have been digesting blueprints to get a feel for the LAVA project scope and to produce a system use-case (or story) list. At the moment the blueprints are not really giving me a system-level picture that pulls together all the threads. Once I have this I can spot some gaps where we can add value. Typical questions I am trying to answer are:
What mechanism is used by the community member to trigger/request an evaluation run?
What needs to be provided by the community member in the evaluation package? What is the minimum? What will be present on the boards? Will the test run content be configurable by the evaluation requester?
To this end, is there a LAVA system-level requirements/scope document? I have come across Linaro burn-down graphs, so maybe there is a Scrum-type requirements backlog I can pick up on.
Regards, Paul
paul.miles@codethink.co.uk
On 08.02.2011 17:48, Paul Miles wrote:
> Greetings Linaro-dev,
> I joined the mailing list today and, as a newbie Codethink(er), I am looking at contributing around the Linaro Validation area, especially LAVA. My background is as a programme manager, generally in embedded mobile, and I have just completed a significant WebKit-based development project. First up on Linaro, I have been digesting blueprints to get a feel for the LAVA project scope and to produce a system use-case (or story) list. At the moment the blueprints are not really giving me a system-level picture that pulls together all the threads. Once I have this I can spot some gaps where we can add value. Typical questions I am trying to answer are:
Hi, Welcome!
> What mechanism is used by the community member to trigger/request an evaluation run?
None, we're not planning that right now.
> What needs to be provided by the community member in the evaluation package? What is the minimum? What will be present on the boards? Will the test run content be configurable by the evaluation requester?
Lots of questions here. Technically, our system will be able to run jobs (it's not running yet). Jobs are described by a bunch of data attributes (the format is not approved yet; it may still change during development as we implement the components and see how they work together).
Currently the idea is that a job will be able to specify the hardware and software required (that's fuzzy, but it's the general idea) and a set of tests to perform, where tests are pluggable and can be defined by end users via our framework. Finally, the results need to be processed in a certain way to fit our "outgoing" format and then be stored in a designated results repository.
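To make that a bit more concrete, here is a purely illustrative sketch of what such a job description could look like. None of this is the agreed format: the attribute names (device_type, image, actions, and so on) and the Python dict notation are just placeholders I am using to show the general shape of the idea.

    # Illustrative only -- these attribute names are placeholders, not an approved schema.
    job = {
        "device_type": "beagle",                   # hardware requirement
        "image": "http://example.org/rootfs.img",  # software to deploy on the board
        "actions": [
            {"command": "deploy_image"},           # put the requested software on the board
            {"command": "run_test",                # a pluggable, user-defined test
             "parameters": {"test_name": "ltp"}},
            {"command": "submit_results",          # convert to the "outgoing" format and store
             "parameters": {"server": "http://example.org/results"}},
        ],
    }

The point is only that the hardware/software requirements, the list of tests, and the result submission step would all be declared in the job itself rather than hard-coded into the system.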
> To this end, is there a LAVA system-level requirements/scope document? I have come across Linaro burn-down graphs, so maybe there is a Scrum-type requirements backlog I can pick up on.
Jump to #linaro on freenode and grab me (zyga) or Paul Larson (plars).
Thanks,
ZK