On Fri, Feb 10, 2012 at 6:16 AM, Benjamin Gaignard
>
> Last time I asked, the cable was on panda03 :-)
> Anyway, e2eaudiotest itself isn't mature enough; I believe we still
> have issues with the audio path settings, but Kurt will work on that (Tom, can
> you confirm?)
>
> I have submitted a lava e2e test to check that the parser works well even
> in error cases.
>
That could be, we need to add tags to the boards that have it so that you
can be sure, but for the audio cables, they are cheap enough that we should
really just put them everywhere. And yes, even if the test fails
currently, we should still run it if it's ready to go. Is it ready to go?
If so, I'd like to get it merged, and either in the daily runs or in some
special multimedia run if that is preferred.
> About cmatest: I have tried to test the latest version (v21) but it failed
> (http://validation.linaro.org/lava-server/scheduler/job/11852) with this
> strange error: "ERROR: There is no test with the specified ID".
> I don't understand where the problem is; I have run the test on my side
> without any issues. How could the test ID be wrong? The same test ID was
> used to install the test itself ... strange.
>
We'll try to look into this today.
> The good news is that the realvideo test is working well:
> http://validation.linaro.org/lava-server/scheduler/job/11619
>
Great! It's done in json, so we can't merge it, but it would be possible to
get it into regular runs anyway. Looks like it takes maybe 45 min. to run.
Any special requirements here?
> I have a lot of meetings this morning, and I leave at 12:30...
> What I wanted to talk about with you and Tom is the number of automatic tests
> run daily on lava. They take so much time that I'm not able to
> run/validate my own tests before 3-4 pm.
> Do you think you can reduce the number of daily tests? For example, is it
> really needed to run the android and ubuntu lab health tests daily on each board?
> Maybe we could also reduce lava test setup time by having a rootfs with the
> lava-tool package pre-installed.
>
> Having a dedicated lava rootfs could also help me speed up test creation,
> because it is always difficult to find the correct match between a hwpack
> and a rootfs.
>
I'm not sure what you mean here; you don't necessarily have to have the
rootfs from the exact same day. As for the backlog on the boards and not
getting your job in quickly, there are a couple of things to be aware of:
First, we had an issue with the scheduler earlier in the week and needed to
shut it down for a bit. Short story is that it's back up now, processing
jobs, and it's mostly cleared the queue that had built up. There are now
lots of boards sitting idle (even snowball). Next, on snowball, we only
have 3 at the moment. We're supposed to get some more, but that means for
now they are going to be pretty busy.
The health checks are part of an effort to try to detect when there's
something wrong with the board, or infrastructure that would prevent jobs
from passing when they normally should. They aren't likely to go away, but
we just merged something that should help us make them faster.
Thanks,
Paul Larson
>
> Regards,
> Benjamin
>
> 2012/2/10 Paul Larson <paul.larson(a)linaro.org>
>
>> Let's talk on Friday, sorry I missed today. I didn't forget you though,
>> just got pulled into an unexpected meeting late in the day after my tsc
>> talk. It never fails that at connects I never manage to talk to everyone I
>> want to, but I wanted to definitely sync with the two of you and see how
>> things were going with cmatest and e2eaudio.
>>
>> One quick point - on the e2e audio tests, I noticed that you seem to have
>> got a failed result (but a result none-the-less) on panda03. I *think* the
>> only board we've managed to connect with the loopback audio cable is
>> panda01. Dave can confirm that. The company we ordered the audio cables
>> from delayed and we didn't get them before the connect. :(
>> In the meantime, if you set the target for the job to look at panda01,
>> you should be able to get to a board with the loopback actually plugged in.
>>
>> Thanks,
>> Paul Larson
>>
>
>
>
> --
>
> Benjamin Gaignard
>
> Multimedia Working Group
>
> Linaro.org <http://www.linaro.org/> │ Open source software for ARM SoCs
>
> Follow Linaro: Facebook <http://www.facebook.com/pages/Linaro> | Twitter <http://twitter.com/#%21/linaroorg> | Blog <http://www.linaro.org/linaro-blog/>
>
>
On Thu, Feb 23, 2012 at 12:36:02PM +1300, Michael Hudson-Doyle wrote:
> On Wed, 22 Feb 2012 10:10:05 +0000, Zygmunt Krynicki <zygmunt.krynicki(a)linaro.org> wrote:
> > On Wed, Feb 22, 2012 at 02:21:57PM +1300, Michael Hudson-Doyle wrote:
> > > Hi all,
> > >
> > > The LAVA team is working on support for private jobs -- we already have
> > > some support for private results, but if the log of the job that
> > > produced the results is publicly visible, this isn't much privacy.
> > >
> > > The model for result security is that a set of results can be:
> > >
> > > - anonymous (anyone can see, anyone can write)
> > > - public (anyone can see, only owning user or group can write)
> > > - private (only owning user or group can see or write)
> > >
> > > Each non-anonymous set of results is owned by a group or user. I think
> > > this model is sufficiently flexible -- the only gap I can see is that
> > > it's not possible to have a stream where a subset of the people who can
> > > see it can submit results to it.
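For illustration, here's a minimal sketch of how those three levels could be
enforced; the class and attribute names are made up, not the actual
lava-dashboard code:

    # Illustrative only -- names invented, not the real dashboard models.
    class BundleStream(object):

        def __init__(self, is_anonymous=False, is_public=False,
                     owner_user=None, owner_group=None):
            self.is_anonymous = is_anonymous
            self.is_public = is_public
            self.owner_user = owner_user    # owning user, or None
            self.owner_group = owner_group  # owning group (a set of users), or None

        def _is_owner(self, user):
            return (user is not None and
                    (user == self.owner_user or
                     (self.owner_group is not None and user in self.owner_group)))

        def can_see(self, user):
            # anonymous and public streams are visible to everyone
            return self.is_anonymous or self.is_public or self._is_owner(user)

        def can_write(self, user):
            # anyone can write to anonymous streams; otherwise only the
            # owning user or a member of the owning group can
            return self.is_anonymous or self._is_owner(user)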
> >
> > We may, one day, want to implement real permissions but for the moment I
> > think the security model we have is sufficient.
>
> 'real permissions'?
Fine grained permissions: "can_see", "can_read", "can_write",
"can_remove", etc. + stacking, resolving deny+allow rules. All the good
ACL stuff.
> > A bigger issue is the abuse of anonymous streams. I'd like to abolish
> > them over the next few months. If anything, they were a workaround
> > for the lack of oauth support in early versions of the dashboard
> > (something that has since proven a failure for our use case). We should
> > IMO move everyone to non-anonymous streams and reserve anonymous streams
> > for mass-filing of profiling information from end-users, something that
> > we have yet to see being used.
>
> Yeah. This should be easy to manage now; I'm not sure how to arrange
> the changeover without getting every user to change their job
> descriptions all at once. Maybe we could say an authenticated request
> to put a bundle into /anonymous/foo just goes into
> /public/personal/$user/foo by magic.
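A rough sketch of the mapping being suggested, purely illustrative -- no such
helper exists today:

    # Invented helper: redirect authenticated submissions aimed at
    # /anonymous/<name>/ into the user's personal public stream.
    def effective_stream_pathname(pathname, user):
        prefix = '/anonymous/'
        if user is not None and pathname.startswith(prefix):
            return '/public/personal/%s/%s' % (user.username,
                                               pathname[len(prefix):])
        return pathname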
We may do something special on v.l.o but in general anonymous streams
should be ... anonymous ;)
> > > Clearly it makes sense to have the set of people who can see the
> > > eventual results and see the job output be the same. Currently the
> > > former group is encoded in the stream name of the submit_results action,
> > > for example:
> > >
> > > {
> > >     "command": "submit_results",
> > >     "parameters": {
> > >         "server": "http://locallava/RPC2/",
> > >         "stream": "/private/personal/mwhudson/test/"
> > >     }
> > > }
> > >
> > > would place results in a stream called 'test' that only I can see or write to, while
> > >
> > > "stream": "/public/team/linaro/kernel/"
> > >
> > > identifies a stream that anyone can see but only members of the linaro
> > > group can put results in.
> > >
> > > The scheduler *could* read out this parameter from the job json and
> > > enforce the privacy rules based on this, but that seems a bit fragile
> > > somehow. I think a top-level attribute in the json describing who can
> > > see the job would make sense -- we can then make sure the stream name on
> > > the submit_results matches this.
> > >
> > > Does the /{public,private}/{personal,team}/{team-or-user-name} syntax
> > > make sense to people? I think it's reasonably clear and nicely terse.
> >
> > You've missed the /{any-other-name,} at the end (a single person can
> > have any number of streams).
>
> Right but the name of the stream is not part of the "who can see it"
> stuff.
Sure, but it affects parsing.
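For reference, a rough sketch of what parsing the current pathname layout
looks like -- /{anonymous}/{name}/ or /{public,private}/{personal,team}/{owner}/{name}/.
This is not the actual dashboard parser, just an illustration:

    # Illustrative pathname parser; not the real dashboard code.
    def parse_pathname(pathname):
        parts = pathname.strip('/').split('/')
        if parts[0] == 'anonymous':
            access, owner_kind, owner = 'anonymous', None, None
            name = '/'.join(parts[1:])
        else:
            access, owner_kind, owner = parts[0], parts[1], parts[2]
            name = '/'.join(parts[3:])
        return access, owner_kind, owner, name

    # parse_pathname('/public/team/linaro/kernel/')
    #   -> ('public', 'team', 'linaro', 'kernel')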
> > Despite being the author I always forget if the privacy flag comes before
> > the owner classification. The words "personal", "private" and "public"
> > are easy to confuse. I was thinking that perhaps we should one day
> > migrate towards something else. The stuff below is my random proposal:
> >
> > ~{team-or-person}/{private,}/{name,}
> >
> > > We should do as much validation at submit time as we can (rejecting jobs
> > > that submit to streams that do not exist, for example).
> >
> > That will break the scheduler / dashboard separation model. You must
> > also remember that scheduler and dashboard can use separate databases
> > so you cannot reason about remote (dashboard) users without an explicit
> > interface (that we don't have).
>
> Well yes. I don't know how much of a benefit that separation is
> really -- some level of separation so that results can be submitted to a
> dashboard by a developer running tests on her desk is useful, but I
> don't know to what extent having the scheduler be able to send results
> to an entirely different dashboard is.
NONE :-)
Let's get rid of it; the only question is what this should look like.
> > On a side note. I think that the very first thing we should do is
> > migrate Job to be a RestrictedResource. Then we can simply allow users
> > to submit --private jobs, or delegate ownership to a --team they are a
> > member of. This will immediately unlock a lot of testing that currently
> > cannot happen (toolchain tests with restricted benchmarks).
>
> Yep. That's on the list.
>
> > When that works we can see how we can bring both extensions closer so
> > that users have a better experience. In my opinion that is to clearly
> > define that scheduler _must_ be in the same database as the dashboard
> > and to discard the full URL in favour of stream name. Less confusion,
> > all validation possible, no real use cases lost (exactly who is using a
> > private dispatcher to schedule tests to a public dashboard, or vice
> > versa?)
>
> Yeah, I agree. The question I was trying (badly) to ask is twofold:
>
> 1) what do we want users to write in their job file?
Writing job files manually is also part of the problem, but I understand
what you are asking about. IMHO we should specify job security at
submission time via a new API. It should not be a part of the document
being sent. Perhaps we could combine that with setting up the destination
stream. Then we would have a really clean user interface, 100%
validated input and 100% reusable jobs. The only case that would be lost
is the dispatcher being able to send stuff to the dashboard. In that case I
think it should follow lava-test, i.e. lose that feature and restrict
itself to making good bundles (that something else can send).
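Purely hypothetical, but something along these lines from the client side;
the method name and arguments are invented for illustration, not an existing
API:

    # Invented example of a submission-time API: security and destination
    # stream are passed alongside the job, not embedded in it, so the same
    # job file stays reusable.
    import xmlrpclib

    server = xmlrpclib.ServerProxy('http://locallava/RPC2/')
    job_json = open('job.json').read()
    server.scheduler.submit_job_to_stream(
        job_json, '/private/personal/mwhudson/test/')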
> 2) (less important) how do we handle the transition from what we have
> now to the answer to 1)?
Introduce the new interface, monitor the old interface, beat people with
a stick if they keep using the old interface, and start rejecting the old
interface in 2012.04.
Thanks
ZK
PS: CC-ing to linaro-validation
I just hit something interesting in Django and thought I'd share. I was
playing with some data from a queryset. I was somewhat aware it's not
your ordinary python list. However, I was surprised to see this issue:
I had a query that was returning TestResult objects. When I iterated
over the list like:
    for r in results:
        print r.measurement
I got the objects I expected. However, at the time I was coding I needed
to know the loop index, so I did something like:
    for i in xrange(results.count()):
        print results[i].measurement
This example acts like it works, but upon looking at the data I realized
I got incorrect results. It seems like results[0] and results[1] are
always the same, but I haven't dug enough to prove that's always the case.
Anyway, I had a feeling using the xrange approach was wrong to begin
with, but it turns out to be actually wrong without telling you.
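For what it's worth, a quick sketch of index-safe alternatives ('results'
being the TestResult queryset from the query above):

    # enumerate() keeps the single iteration the ORM is good at and still
    # gives you the loop index:
    for i, r in enumerate(results):
        print i, r.measurement

    # Or evaluate the queryset once into a plain list and index that, instead
    # of having results[i] issue a fresh LIMIT/OFFSET query each time:
    cached = list(results)
    for i in xrange(len(cached)):
        print i, cached[i].measurement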
-andy
Hi Deepti
Looking at the log from your attached file,
lines 399 and 10278 both show the successful ping message.
In addition, there is no problem in the kernel boot messages: no kernel
panic, and the USB Ethernet is recognised fine.
Can I see the exact boot log from the run where you hit the boot problem?
Thanks
Sangwook
On 10 February 2012 06:42, Deepti Kalakeri <deepti.kalakeri(a)linaro.org>wrote:
> Hello Sangwook,
>
> On Fri, Feb 10, 2012 at 12:38 AM, Sangwook Lee <sangwook.lee(a)linaro.org>wrote:
>
>> Hi Paul
>>
>> I visited the link below and downloaded the file, but the link to the
>> serial.log file is broken.
>> Could you send me the kernel log as an attachment?
>>
>>
> FYI, here is the kernel build log for the hwpack that was being tested:
> https://ci.linaro.org/jenkins/view/Linux%20ARM%20SoC%20%28for-next%29%20CI%….
>
>
> Also, I was able to access the serial log and I have attached the same
> with the mail.
>
>>
I saw the serial log, but it looks fine to me.
> Thanks
>> Sangwook
>>
>>
>>
>> On 9 February 2012 16:57, Paul Larson <paul.larson(a)linaro.org> wrote:
>>
>>>
>>> http://validation.linaro.org/lava-server/dashboard/streams/anonymous/ci-lin… is the link to one of the recent results
>>>
>>> The kernel ci view page for this is at:
>>> http://validation.linaro.org/lava-server/kernel-ci-views/per_board/origen
>>>
>>>
>>> On Thu, Feb 9, 2012 at 8:56 AM, Paul Larson <paul.larson(a)linaro.org>wrote:
>>>
>>>> Hi, I'm looking at the kernel ci results and origen seems to be running
>>>> tests pretty frequently, but always failing to boot. Sangwook, could you
>>>> take a look at the args that are getting passed and make sure there's not
>>>> something here that would explain it hanging on boot? Or is this perhaps
>>>> something in the kernel config that is getting used?
>>>>
>>>> Thanks,
>>>> Paul Larson
>>>>
>>>
>>>
>>
>
>
> --
> Thanks and Regards,
> Deepti
> Infrastructure Team Member, Linaro Platform Teams
> Linaro.org | Open source software for ARM SoCs
> Follow Linaro: http://www.facebook.com/pages/Linaro
> http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
>
>
>
On Tue, Feb 21, 2012 at 10:58:45AM -0600, Andy Doan wrote:
> The real update is now ready:
>
> http://pypi.python.org/pypi/android-benchmark-views/0.1.3
>
> In addition to the fixes listed below, I also made improvements to
> the way the 0xbench results were being displayed.
I think we should track this somehow. An RT system for LAVA? I don't know,
maybe a bug for lava-lab. Otherwise we'll just lose track of this.
I'm including wider audience to get feedback, since this is not
sensitive I'm including the public mailing list.
Best regards
ZK
> thanks
> andy
>
> On 02/20/2012 05:09 PM, Andy Doan wrote:
> >Please disregard this request for now. I've encountered one more bug
> >I'd like to try and fix. It's still safe to deploy this, but I hate to
> >ask you to do two updates for me.
> >
> >On 02/20/2012 04:22 PM, Andy Doan wrote:
> >>A new version of android-benchmark-views is available:
> >>
> >>http://pypi.python.org/pypi/android-benchmark-views/0.1.2
> >>
> >>This fixes a bug with flot charts and adds the ability to review then
> >>publish a report.
> >>
> >>Can someone please update this during the next LAVA maintenance window?
> >>NOTE: this version includes a small schema change for which a south
> >>migration step will be required.
> >>
> >>Thanks,
> >>-andy
> >
>
Hi.
I'd like to see Jenkins moved away from control. It causes random CPU
spikes and, as we already know, control is a busy place. Could we spawn
a VM (if that's stable enough already) solely for running it and
remove the parts that currently run on control?
ZK
Last Friday I thought up an idea that piggybacks on the current
android-benchmark plugin for LAVA to allow people to create
dynamic reports. I wanted to share the basic idea and code to see what
people think.
Currently I define a benchmark report using the Django admin interface.
I simply define a BenchmarkReport (which has a label like 2012.01) and
then BenchmarkRuns which point to their BenchmarkReport and the bundle
stream containing its runs.
The idea I had was to allow a user to specify all this information, have
a view that creates these objects (but does not commit them), and use the
existing logic I have to create reports.
dynamic.png shows the current way the code allows you to create a
dynamic report. dynamic-report.png shows what that report will then look
like.
In its current form, it's all done via an HTTP POST, so you can't really
share a link. I was thinking it could be cool to update it so that the
parameters are passed as HTTP GET parameters. You could then
easily construct a URL to share with someone to compare metrics
between two or more bundle streams, e.g. "hey, take a look at this 'v8
javascript' comparison".
I've included the minimal set of patches I did to make this work.
Comments/Thoughts/Suggestions?
-andy
http://digital-loggers.com/lpc.html
I'm in a talk right now on ktest.pl, which is pretty neat, but made for
basically just doing kernel testing on one machine. One of the things he
mentioned, which we know we've had to deal with, is power control. Instead
of the APC devices like we're using though, he's found a thing called a web
power switch from Digital Loggers. It doesn't support telnet, but can easily
be controlled with wget or curl to tell a port to reset. The nice thing?
It is only $129 (less if you order more). It only seems to have
American power plugs on it, but it will handle 240V, so it could be used with
an adapter.
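For the record, a rough sketch of driving such a switch over HTTP from Python
rather than wget/curl; the outlet?N=ON/OFF/CCL URL form and the credentials
here are assumptions, so check the unit's manual for the exact interface:

    # Assumed HTTP interface (outlet?<n>=ON|OFF|CCL) -- verify against the
    # switch's documentation before relying on it.
    import urllib2

    def set_outlet(host, outlet, state, user='admin', password='1234'):
        """state is 'ON', 'OFF' or 'CCL' (power cycle)."""
        mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, 'http://%s/' % host, user, password)
        opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(mgr))
        opener.open('http://%s/outlet?%d=%s' % (host, outlet, state)).read()

    # e.g. power-cycle whatever is plugged into outlet 3:
    # set_outlet('192.168.1.100', 3, 'CCL')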