Hello,
On Thu, 26 Apr 2012 00:32:14 +0530 Vishal Bhoj <vishal.bhoj@linaro.org> wrote:
On 25 April 2012 18:25, Fathi Boudra <fathi.boudra@linaro.org> wrote:
On 25 April 2012 13:59, Paul Sokolovsky <paul.sokolovsky@linaro.org> wrote:
Hello,
[Alexander, Joey: at the end are some thoughts on how we got to our known EC2 usage, and how we can "optimize" it.]
Sorry, I'm a bit out of context, so I'm adding my comments wherever I see something to comment on ;-).
On Tue, 24 Apr 2012 14:23:09 +0530 Zach Pfeffer <zach.pfeffer@linaro.org> wrote:
On 24 April 2012 13:28, Paul Larson <paul.larson@linaro.org> wrote:
On Apr 24, 2012 1:30 AM, "Jon Medhurst (Tixy)" <tixy@linaro.org> wrote:
On Mon, 2012-04-23 at 22:10 -0500, Paul Larson wrote:
> I was trying to run linaro-android-build-cmds.sh a few times now, and
> I've never actually been able to get all the way through it. First
> time was due to a lack of space. The drive I tried to run it on had
> 6G left, but apparently that's not enough. So I ran it again on a
> drive with much
Google says 30GB of free space is needed for their builds, and we need more for ours. linaro-android-build-cmds.sh really should check for 50GB and refuse to run otherwise.
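[Editor's note: a hedged sketch of the kind of up-front check being proposed here; the function name and the way the threshold is passed are illustrative assumptions, not the script's actual code.]

```shell
#!/bin/sh
# Sketch of a free-space guard a build script could run before starting.
# check_free_space DIR REQUIRED_GB -> returns 0 if DIR has enough space.
check_free_space() {
    required_kb=$(( $2 * 1024 * 1024 ))               # GB -> 1K blocks
    avail_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -lt "$required_kb" ]; then
        echo "Error: need ${2}GB free in $1, only $((avail_kb / 1024 / 1024))GB available." >&2
        return 1
    fi
}

# e.g. near the top of linaro-android-build-cmds.sh:
# check_free_space "$PWD" 50 || exit 1
```

`df -Pk` gives POSIX-format output in 1K blocks, so the column parsing stays stable across GNU and BSD implementations.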
> more free space. This is running on a Core i7, 8GB RAM. After
Google says 16GB of RAM is the requirement; linaro-android-build-cmds.sh really should check that and refuse to run otherwise.
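[Editor's note: a similarly hedged sketch of a RAM check. It is Linux-specific (it parses /proc/meminfo), and whether to refuse or merely warn is left open here, since the thread below argues the requirements are an overridable baseline.]

```shell
#!/bin/sh
# Sketch: min_ram_gb_ok REQUIRED_GB -> 0 if the host has at least that
# much physical memory. Linux-only.
min_ram_gb_ok() {
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    [ $(( mem_kb / 1024 / 1024 )) -ge "$1" ]
}

# A script could warn rather than hard-fail, leaving the override to the user:
# min_ram_gb_ok 16 || echo "Warning: 16GB RAM recommended for Android builds." >&2
```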
I build Android on my laptop with 4GB RAM in 2 hrs.
Build, as in your existing working directory, or rebuild from scratch (a new checkout)? I can't believe that - the checkout alone may take longer ;-). But if you know some trick, let us know - maybe we can go from very expensive XLARGE instances to LARGE.
The requirement mentioned by Google is not mandatory, just preferable.
Yes, I elaborated my IMHO on what all those requirements mean - something for the user to override. But again, IMHO it's still a good idea to use them as the baseline, to set users' expectations right.
A lot of developers have 8GB of memory, and we build Android ICS on an EC2 large instance with 7.5GB of memory, AFAIK.
No, we're compliant with the upstream requirements - how could we do it otherwise? We use XLARGE instances with 15GB, and maybe the missing gigabyte is the reason why some of the instances hang after a build (just kidding ;-) ).
It's just the same as Google's own check for 64-bit hosts in their makefiles. Can the build process work on 32 bits? Of course it can - it's just not supported. So it all happens strictly in the following order: 1) users get a visible indication that what they're doing is unsupported; 2) users go and patch that out because they obviously know better; 3) users continue, in full awareness that they (and only they) have put themselves in a position for problems.
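[Editor's note: the "visible indication plus explicit override" pattern described above could be sketched in shell roughly as follows; the FORCE_UNSUPPORTED variable and the messages are illustrative assumptions, not Google's or Linaro's actual code.]

```shell
#!/bin/sh
# Sketch: refuse a 32-bit build host unless the user knowingly overrides.
check_host_arch() {
    case "$1" in
        *64*) return 0 ;;   # x86_64, aarch64, ... - supported
        *)
            if [ -z "$FORCE_UNSUPPORTED" ]; then
                echo "Error: 64-bit build host required (got $1)." >&2
                echo "Set FORCE_UNSUPPORTED=1 to continue at your own risk." >&2
                return 1
            fi
            ;;
    esac
}

# check_host_arch "$(uname -m)" || exit 1
```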
> 12 hours, I finally got far enough to get an error telling me that
> vendor.tar.bz2 didn't exist (it did... just not in the path
That's why I was a great proponent of adding these build scripts - to be able to add loads of validation *before* starting the build, to avoid cases like spending hours (or days) only to get errors as a result. Building is very easy; setting up a proper environment turns out to be very hard, based on the feedback we saw. IMHO, build scripts should be 5-10Kb, and 90% of that should be environment validation for all the possible and impossible environment problems which may happen on users' machines (essentially, whenever we hear a problem report, we add a validation for it). The alternative is user frustration, which we have already seen quite a lot of on IRC.
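[Editor's note: the "90% validation" idea might look like the following in practice - run every check up front and report all failures at once, so the user sees problems in seconds rather than hours in. The helper name and the list of tools checked are made up for illustration.]

```shell
#!/bin/sh
# Sketch: verify all required tools before any build step starts,
# counting failures so every missing piece is reported in one pass.
run_checks() {
    errors=0
    for cmd in "$@"; do
        if ! command -v "$cmd" >/dev/null 2>&1; then
            echo "Check failed: '$cmd' not found in PATH" >&2
            errors=$((errors + 1))
        fi
    done
    return "$errors"
}

# In a real script the list grows with every problem report, and file
# checks (e.g. the vendor tarball path mentioned above) sit alongside:
# run_checks git repo make java curl || exit 1
```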
[]