On 25 April 2012 13:59, Paul Sokolovsky paul.sokolovsky@linaro.org wrote:
Hello,
[Alexander, Joey, at the end are some thoughts on how we got to our known EC2 usage, and how we can "optimize" it.]
Sorry, I'm a bit out of context, so adding my comments wherever I see something to comment ;-).
On Tue, 24 Apr 2012 14:23:09 +0530 Zach Pfeffer zach.pfeffer@linaro.org wrote:
On 24 April 2012 13:28, Paul Larson paul.larson@linaro.org wrote:
On Apr 24, 2012 1:30 AM, "Jon Medhurst (Tixy)" tixy@linaro.org wrote:
On Mon, 2012-04-23 at 22:10 -0500, Paul Larson wrote:
I've been trying to run linaro-android-build-cmds.sh a few times now, and I've never actually been able to get all the way through it. The first time was due to a lack of space. The drive I tried to run it on had 6GB left, but apparently that's not enough. So I ran it again on a drive with much
Google says 30GB of free space is needed for their builds, and we need more for our builds. linaro-android-build-cmds.sh really should check for 50GB and refuse to run otherwise.
more free space. This is running on a core i7, 8GB ram. After
Google says 16GB is the requirement; linaro-android-build-cmds.sh really should check and refuse to run otherwise.
A lot of developers have 8GB of memory, and we build Android ICS on an EC2 large instance with 7.5GB of memory, AFAIK.
It's just the same as Google's own check for 64-bitness in their makefiles. Can the build process work on 32 bits? Of course it can, it's just not supported. So it all happens strictly in the following order: 1) users get a visible indication that what they are doing is unsupported; 2) users go and patch that out because they obviously know better; 3) users continue, in full awareness that they (and only they) have put themselves in a position for problems.
12 hours, I finally got far enough to get an error telling me that vendor.tar.bz2 didn't exist (it did... just not in the path
That's why I was a great proponent of adding these build scripts - to be able to add loads of validation *before* starting the build, to avoid cases like spending hours (or days) only to get errors as a result. Building is very easy; setting up a proper environment turns out to be very hard, based on the feedback we saw. IMHO, build scripts should be 5-10KB, and 90% of that should be environment validation for all possible and impossible environment problems which may happen on users' machines (essentially, whenever we hear a problem report, we add a validation for it). The alternative is user frustration, of which we've already seen quite a lot on IRC.
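To make the validation idea concrete, here is a minimal sketch of the kind of pre-build environment checks being discussed. This is not the actual contents of linaro-android-build-cmds.sh; the function names and the BUILD_DIR variable are made up, and the ~50GB threshold is just the figure quoted in this thread:

```shell
#!/bin/sh
# Sketch of pre-build environment validation (illustrative only).

# Succeed if the filesystem holding $1 has at least $2 kilobytes free.
have_free_space() {
    dir=$1
    need_kb=$2
    # df -P gives a stable, parseable format; -k reports 1024-byte blocks.
    avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    [ "$avail_kb" -ge "$need_kb" ]
}

# Succeed on a 64-bit x86 host, mirroring Google's own makefile check.
is_64bit_host() {
    [ "$(uname -m)" = "x86_64" ]
}

# Example gate: ~50GB = 50 * 1024 * 1024 KB. A real script would refuse
# to run here; this sketch only warns so it stays side-effect free.
if ! have_free_space "${BUILD_DIR:-.}" $((50 * 1024 * 1024)); then
    echo "WARNING: less than ~50GB free; an Android build will likely fail" >&2
fi
if ! is_64bit_host; then
    echo "WARNING: a 64-bit build host is required" >&2
fi
```

Every time a user hits a new environment problem, one more small check like these could be appended, which is how the "90% validation" figure above would accumulate.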
after it did a cd - I submitted a small bug for this, which should be a very easy fix). So I'm trying again, but my question is: should we really be including this test for every build, given that it takes so long to run? How long does this normally take to run?
I didn't run these tests when I did the vexpress testing because I knew about the large amount of data that would need to be downloaded and the disk space required.
The other linaro-android-build-env build tests would also require me to set up two new Ubuntu chroots and do that enormous download and build twice more.
I justified not running them to myself using the fact that the tests weren't marked with an 'm' to say they should be run monthly. But to be honest, after 5 hours of testing the release I wanted to be doing other things, not have my work laptop out of action for what would probably be the best part of another day.
If these tests are meant to run monthly, can't we automate this in 'the cloud' somewhere?
That's what I'm hoping - see my 2nd note in this thread where I suggested that android build use this. Then it would be doing this step for us effectively.
Aye, and using linaro-android-media-create to program the images. It would mean we'd be automatically testing exactly the way users would use the builds, which would save a bunch of time. Adding Paul S for visibility.
Creating images with linaro-android-media-create on android-build was already in talks; the following concerns were raised:
- The way it was suggested is to add linaro-android-media-create processing at the end of the build. Android builds are already very complicated and monolithic - for example, it's already very hard to test any changes to the build process.
- That produces even more artifacts (arguably with a low "hit" ratio), when we're already running out of space all the time.
- It requires running as root. We don't have root for build scripts on android-build, and with restricted build launches, I guess it really should stay that way.
More generally, I've recently been wondering whether we're turning our Jenkins instances from build systems into generic script runners. I didn't share that speculation previously, because I thought "indeed, why not?". But now we know what the problem with that is - it costs too much (re: this month's hot discussion of us having gone over the EC2 budget; I don't know if that's widely known).
We use the cloud for Android builds because an Android *build* (i.e. compiling gigabytes of C++ files, produced by humans, and thus expected to contain errors) requires enormous resources and time. It's plainly expensive to use that for file-archive munging. So, what about good old cronjobs? (Alternatively, cheap instances could be used for that kind of work, but someone first needs to implement that, and it's still inefficient cost-wise and arguably more complicated than setting up a cronjob.)
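As an illustration of the cronjob alternative, the archive-munging step could be driven by a plain crontab entry on a cheap persistent machine. The script path and name here are hypothetical, not an existing Linaro tool:

```shell
# Hypothetical crontab entry: run the image-assembly step nightly at
# 02:00, appending output to a log, instead of tying up an EC2 builder.
0 2 * * * /opt/linaro/bin/assemble-android-images.sh >> /var/log/assemble-android-images.log 2>&1
```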
Another way is to rethink what and how we do. For example:
Would mean we'd be automatically testing exactly the way users would use the builds which would save a bunch of time.
At first, this seems like a good idea, but thinking about it more, the machine code executed by the CPU is the same in the standard Android tarballs and in the images produced by l-i-t. So, as far as testing of Android *code* goes, the latter is about as helpful as wrapping a tarball in another round of bzip2 (but it takes extra resources to do it).
I'm not saying that l-i-t images shouldn't be tested - apparently they should be, but that's a *different* kind of testing, with a different priority, schedule, etc.
Thanks, Paul Larson
Thanks, Paul