Hello,
(This mail could have the epigraph "I'm starting to get CBuild" ;-) ).
I'd like to draw some attention to the native CBuild builders. There appears to be quite a backlog of builds (see the "Pending" section at http://cbuild.validation.linaro.org/helpers/scheduler ), which appears to be caused by builder problems.
On Friday night I noticed that tcpanda05 had been hanging for around a week, and rebooted it, based on the information Dave provided previously. I hope that's OK, as it has been happily cutting down a9-daily's backlog since then (and sorry for only letting you know now - again, it was outside usual working hours). a9-daily's queue does decrease, but rather slowly. At the same time, that queue has the tcpanda06 board commented out - it came that way with the initial import of the "scheduler" repo, so I couldn't know the exact reason. The generic reason is understood - keeping it out of a9-daily lets tcpanda06 quickly pick up jobs from other queues - but it means the board sits idle for half a day. So I temporarily re-enabled it for a9-daily to process the backlog; the change is clearly commented and easily bzr diff'able on toolchain64.
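In case anyone wants to review (or revert) the change, here is a minimal sketch of how to inspect it on toolchain64, assuming the scheduler checkout lives at ~/scheduler (the path is a guess - adjust to wherever the repo actually is):

  cd ~/scheduler   # hypothetical location of the "scheduler" checkout
  bzr status       # lists the locally modified file(s)
  bzr diff         # shows the tcpanda06 line being uncommented for a9-daily
  # bzr revert     # would undo the change if it causes trouble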
That ends the list of issues I actively poked at, but there are more that I couldn't do much about:
1. tcserver01x5, which belongs to the x86_64-heavy queue and apparently is a VM hosted in the LAVA lab, has been hanging for 16 days, with a dozen jobs in its backlog. "tcserver01x5" didn't ping for me from the gateway, but "tcserver01" does, so maybe it's live but has some issues (quick checks below).
2. a9hf-daily has a single active machine, tcpanda11, so it has a permanent, and probably growing, backlog.
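For the record, the connectivity checks behind the first item, run from the gateway (the ssh line assumes login access to the box, which I don't have, so treat it as a suggestion for someone who does):

  ping -c 3 tcserver01x5   # no reply for me
  ping -c 3 tcserver01     # replies, so the host itself seems alive
  ssh tcserver01 uptime    # assumed access; would show whether the host is wedged or just busy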
I understand that leveraging LAVA builds is probably the best way to resolve the issue, and I'm currently validating and tweaking those builds (but I'm not sure whether the build image provided by Michael is hard-float or not, or how to go about adding the missing one).
Paul,
On 28 January 2013 15:35, Paul Sokolovsky <paul.sokolovsky@linaro.org> wrote:
> Hello,
> (This mail could have the epigraph "I'm starting to get CBuild" ;-) ).
I'm glad someone is.
> I'd like to draw some attention to the native CBuild builders. There appears to be quite a backlog of builds (see the "Pending" section at http://cbuild.validation.linaro.org/helpers/scheduler ), which appears to be caused by builder problems.
>
> On Friday night I noticed that tcpanda05 had been hanging for around a week, and rebooted it, based on the information Dave provided previously. I hope that's OK, as it has been happily cutting down a9-daily's backlog since then (and sorry for only letting you know now - again, it was outside usual working hours). a9-daily's queue does decrease, but rather slowly. At the same time, that queue has the tcpanda06 board commented out - it came that way with the initial import of the "scheduler" repo, so I couldn't know the exact reason. The generic reason is understood - keeping it out of a9-daily lets tcpanda06 quickly pick up jobs from other queues - but it means the board sits idle for half a day. So I temporarily re-enabled it for a9-daily to process the backlog; the change is clearly commented and easily bzr diff'able on toolchain64.
>
> That ends the list of issues I actively poked at, but there are more that I couldn't do much about:
> 1. tcserver01x5, which belongs to the x86_64-heavy queue and apparently is a VM hosted in the LAVA lab, has been hanging for 16 days, with a dozen jobs in its backlog. "tcserver01x5" didn't ping for me from the gateway, but "tcserver01" does, so maybe it's live but has some issues.
tcserver01x5 seems to have appeared recently (this month) - I didn't put it there, so I don't know what exactly it is.
> 2. a9hf-daily has a single active machine, tcpanda11, so it has a permanent, and probably growing, backlog.
So 'daily' is the 'do this if you've nothing better to do' queue. Items appear at the rate of once a day, and get purged if not done after two weeks. Around release weeks this queue gets really big, as many higher-priority builds are spawned.
> I understand that leveraging LAVA builds is probably the best way to resolve the issue, and I'm currently validating and tweaking those builds (but I'm not sure whether the build image provided by Michael is hard-float or not, or how to go about adding the missing one).
So in fact the daily queues are probably the best queues to initially move to being pure Lava queues, as they are not currently business critical for the working group (but may soon be).
Sorry to be naive - if by build image you mean the filesystem, then uname -a should tell you whether you have a hard-float (arm-none-linux-gnueabihf) or soft-float (arm-none-linux-gnueabi) system, and you should be able to multi-arch the system to get hold of soft-float binaries.
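For example, a rough sketch for a Debian/Ubuntu-based filesystem, assuming a dpkg new enough to support --add-architecture (older images configure foreign architectures under /etc/dpkg/dpkg.cfg.d/ instead):

  # Make soft-float (armel) packages installable alongside the native armhf ones
  sudo dpkg --add-architecture armel
  sudo apt-get update
  # Individual soft-float packages can then be pulled in with an :armel suffix, e.g.
  sudo apt-get install libc6:armel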
Thanks,
Matt
--
Matthew Gretton-Dann
Linaro Toolchain Working Group
matthew.gretton-dann@linaro.org
Hello,
On Mon, 28 Jan 2013 16:44:32 +0000 Matthew Gretton-Dann <matthew.gretton-dann@linaro.org> wrote:
[]
>> 1. tcserver01x5, which belongs to the x86_64-heavy queue and apparently is a VM hosted in the LAVA lab, has been hanging for 16 days, with a dozen jobs in its backlog. "tcserver01x5" didn't ping for me from the gateway, but "tcserver01" does, so maybe it's live but has some issues.
> tcserver01x5 seems to have appeared recently (this month) - I didn't put it there, so I don't know what exactly it is.
>> 2. a9hf-daily has a single active machine, tcpanda11, so it has a permanent, and probably growing, backlog.
> So 'daily' is the 'do this if you've nothing better to do' queue. Items appear at the rate of once a day, and get purged if not done after two weeks. Around release weeks this queue gets really big, as many higher-priority builds are spawned.
Ah, yes, I remember Michael mentioning something like that too, good to know.
>> I understand that leveraging LAVA builds is probably the best way to resolve the issue, and I'm currently validating and tweaking those builds (but I'm not sure whether the build image provided by Michael is hard-float or not, or how to go about adding the missing one).
> So in fact the daily queues are probably the best queues to initially move to being pure Lava queues, as they are not currently business critical for the working group (but may soon be).
OK, let's go that route. Step by step: right now I'm validating that the LAVA builds match the native ones and are otherwise sane.
> Sorry to be naive - if by build image you mean the filesystem, then uname -a should tell you whether you have a hard-float (arm-none-linux-gnueabihf) or soft-float (arm-none-linux-gnueabi) system, and you should be able to multi-arch the system to get hold of soft-float binaries.
I'm not very knowledgeable about hard-float, and I'm not sure whether it requires kernel support or is a purely user-space thing. I added "uname -a" to the build process and it gives:
Linux ursa4 3.5.0-r1 #1 SMP PREEMPT Tue Aug 14 17:16:00 NZST 2012 armv7l armv7l armv7l GNU/Linux
I don't see anything special there (should hard-float be armv7hl?). However, CBuild itself classifies it as hard-float, based on the URL of the results: http://cbuild.validation.linaro.org/build/gcc-linaro-4.7-2013.01/logs/armv7l... So I wonder whether that's correct, or whether it's another mix-up with the LAVA builds, like the hostname one. I'll look into that, but would appreciate you checking it too.
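For what it's worth, uname apparently doesn't show the float ABI at all, so something like the following should be more conclusive - a sketch, assuming readelf (binutils) and dpkg are present on the build image; a hard-float userland should report "Tag_ABI_VFP_args: VFP registers":

  # Inspect the float ABI of any system binary, e.g. /bin/ls
  readelf -A /bin/ls | grep Tag_ABI_VFP_args || echo "soft-float (no VFP args tag)"
  # On Debian/Ubuntu-based images this is quicker:
  dpkg --print-architecture   # prints "armhf" for hard-float, "armel" for soft-float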