Today we migrated the www.linaro.org website to a new server at our
provider. The website was down for less than a minute (pingdom didn't
even fire an alert). We've tested it and it appears to be in perfect
order. Should you find any issues, please give me a shout.
IT Services Manager
Open source software for ARM SoCs
On 03/29/2012 02:22 AM, Jon Medhurst (Tixy) wrote:
> On Mon, 2012-03-26 at 12:20 -0700, John Stultz wrote:
>> So after talking about it at the last Linaro Connect, I've finally
>> gotten around to making a first pass at providing config fragments for
>> the linaro kernel. I'd like to propose merging this for 12.04, and
>> doing so early so we can make sure that all the desired config options
are present in the fragments and to allow the various linaro build
>> systems to begin migrating their config generation over.
>> The current tree is here:
>> I'd ask Landing teams to take a look at this, and report any missing
>> config options or fragment chunks they'd like to see.
> John, I've attached a config fragment for Versatile Express.
Great! I've merged that in! There are a few warnings though:
Value requested for CONFIG_ARCH_VEXPRESS_DT not in final .config
Requested value: CONFIG_ARCH_VEXPRESS_DT=y
Value requested for CONFIG_FB_ARMHDLCD not in final .config
Requested value: CONFIG_FB_ARMHDLCD=y
I'm guessing these are features not in the base 3.3 tree? If so you
might want to break those out and re-add them with those patches.
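For reference, warnings like the ones above come from the merge step comparing each fragment against the generated .config. A minimal Python sketch of that check (the fragment contents below are made-up examples, not the actual linaro fragments):

```python
# Minimal sketch of the check the fragment-merge step performs: after
# merging fragments into a final .config, verify each requested option
# survived. File contents here are hypothetical examples.

def parse_config(text):
    """Parse 'CONFIG_FOO=y' style lines into a dict."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("CONFIG_") and "=" in line:
            name, value = line.split("=", 1)
            opts[name] = value
    return opts

def check_fragment(fragment_text, final_config_text):
    """Return warnings for requested values missing from the final .config."""
    requested = parse_config(fragment_text)
    final = parse_config(final_config_text)
    warnings = []
    for name, value in requested.items():
        if final.get(name) != value:
            warnings.append(
                "Value requested for %s not in final .config\n"
                "Requested value: %s=%s" % (name, name, value))
    return warnings

fragment = "CONFIG_ARCH_VEXPRESS_DT=y\nCONFIG_FB_ARMHDLCD=y\nCONFIG_MODULES=y\n"
final = "CONFIG_MODULES=y\n"  # options unknown to the base tree get dropped
for w in check_fragment(fragment, final):
    print(w)
```

Options a fragment requests but the base tree doesn't know about (e.g. features carried in topic branches) simply never reach the final .config, which is exactly what the two warnings above indicate.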
> This includes loadable module support because one of our topic branches
> adds the gator module with a default config of 'm'. I did this because
> Linaro kernels are expected to have this module available but I didn't
> see any reason for it to be built-in, and as there may be versioning
> issues between it and the separate user side daemon, I thought it wise
> to keep the door open for loading an alternate module obtained from
> elsewhere. That decision does mean that all Linaro kernels would need
> loadable module support built in, but I don't think that is a bad idea.
Tushar had a similar request, but I don't think the android configs (at
least the ones I've managed) use modules, so I've added the MODULES=y to
the common ubuntu.conf file.
If this is still objectionable, it can be changed and we can push it
down to the linaro-base.conf, but I want to make sure the Android tree
doesn't run into trouble.
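As a sketch, the fragment layering under discussion would look roughly like this (placement illustrative; CONFIG_GATOR is assumed here as the gator module's option name):

```
# linaro-base.conf -- options every Linaro kernel needs
CONFIG_GATOR=m           # gator as a module would require CONFIG_MODULES=y

# ubuntu.conf -- Ubuntu-specific additions; MODULES=y currently lives here
CONFIG_MODULES=y
```

Moving CONFIG_MODULES=y from ubuntu.conf down into linaro-base.conf is the change being weighed against the Android configs, which may not want modules at all.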
I am forwarding the message to a couple mailing lists which might have
people interested in the Mono port for the ARM hard-float ABI.
2012/2/2 Jo Shields <directhex(a)apebox.org>:
> Right now, Mono is available in Debian armhf. This is a hack - what
> we're actually doing is building Mono as an armhf binary, but built to
> emit soft VFP instructions and using calling conventions and ABI for
> armel. This hack works well enough for pure cross-platform code (like
> the C# compiler) to run, but dies in a heap for anything complex.
> This situation is a bit on the crappy side of crap.
> In order for Mono on armhf not to be a waste of time, a "true" port
> needs to be completed. If I were to make a not-remotely-educated guess,
> I'd say it needs about 550 lines of changes, primarily the addition of
code to emit the correct instructions filling the correct registers in
> mono/mini/mini-arm.c plus a couple of tweaks to related headers.
> Upstream have also indicated that they're happy to provide guidance and
> pointers on how to implement this port, although they're unable to
> provide the requisite code themselves.
> Sadly, unless someone in the community is able to step forward and
> contribute here, it's only a matter of time before the armhf packages
> are rightfully marked RC-buggy, and 100+ packages need to be axed from
> armhf. This would make me sad.
Héctor Orón
Changes since RFC:
* Changed the cpu cooling registration/unregistration APIs to be instance based
* Changed the STATE_ACTIVE trip type to pass the correct instance id
* Added support to restore policy->max_freq after frequency clipping
* Moved the trip cooling stats from a sysfs node to a debugfs node, as suggested
  by Greg KH greg(a)kroah.com
* Incorporated several review comments from eduardo.valentin(a)ti.com
* Report the time spent in each trip point along with the cooling statistics
* Added opp library support in the cpufreq cooling APIs
1) The generic cooling device code is placed inside drivers/thermal/*, as
placing it inside the acpi folder would require unnecessarily enabling acpi code.
2) This patchset adds a new trip type THERMAL_TRIP_STATE_ACTIVE which passes
cooling device instance number and may be helpful for cpufreq cooling devices
to take the correct cooling action.
3) This patchset adds generic cpu cooling low level implementation through
frequency clipping and cpu hotplug. In future, other cpu related cooling
devices may be added here. An ACPI version of this already exists
(drivers/acpi/processor_thermal.c). But this will be useful for platforms
like ARM using the generic thermal interface along with the generic cpu
cooling devices. The cooling device registration APIs return cooling device
pointers which can be easily bound to the thermal zone trip points.
4) This patchset provides a simple way of reporting the cooling achieved for a
trip type. This will help in fine tuning the attached cooling devices.
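As a rough illustration of the frequency-clipping idea, here is a toy user-space model (class and method names are invented for this example, not the kernel API from the patchset):

```python
# Toy model of cpufreq cooling via frequency clipping: when an ACTIVE
# trip fires, clip the policy's max frequency according to the trip's
# cooling-device instance number; when the zone cools down, restore the
# saved max_freq. All names here are illustrative, not the kernel API.

class Policy:
    def __init__(self, max_freq):
        self.max_freq = max_freq

class CpufreqCoolingDevice:
    def __init__(self, policy, clip_freqs):
        self.policy = policy
        self.saved_max = policy.max_freq          # remembered for restore
        self.clip_freqs = sorted(clip_freqs, reverse=True)

    def set_cooling_state(self, instance):
        """instance is the trip's cooling-device instance number; 0 = no cooling."""
        if instance == 0:
            self.policy.max_freq = self.saved_max  # restore after cooldown
        else:
            self.policy.max_freq = self.clip_freqs[instance - 1]

policy = Policy(max_freq=1_200_000)
dev = CpufreqCoolingDevice(policy, clip_freqs=[800_000, 600_000])
dev.set_cooling_state(1)   # first ACTIVE trip crossed: clip
print(policy.max_freq)
dev.set_cooling_state(0)   # zone cooled: max_freq restored
print(policy.max_freq)
```

The per-instance state is what the new THERMAL_TRIP_STATE_ACTIVE trip type carries, letting the cooling device pick the right clipping level for the trip that fired.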
Amit Daniel Kachhap (4):
thermal: Add a new trip type to use cooling device instance number
thermal: Add generic cpufreq cooling implementation
thermal: Add generic cpuhotplug cooling implementation
thermal: Add support to report cooling statistics achieved by cooling
Documentation/thermal/cpu-cooling-api.txt | 57 ++++
Documentation/thermal/sysfs-api.txt | 4 +-
drivers/thermal/Kconfig | 11 +
drivers/thermal/Makefile | 1 +
drivers/thermal/cpu_cooling.c | 484 +++++++++++++++++++++++++++++
drivers/thermal/thermal_sys.c | 165 ++++++++++-
include/linux/cpu_cooling.h | 71 +++++
include/linux/thermal.h | 14 +
8 files changed, 802 insertions(+), 5 deletions(-)
create mode 100644 Documentation/thermal/cpu-cooling-api.txt
create mode 100644 drivers/thermal/cpu_cooling.c
create mode 100644 include/linux/cpu_cooling.h
Following some complaints and issues with the way we're currently
maintaining our main Overlay PPA, I'd like to propose a new way to
produce the Ubuntu LEBs, by generating separate Stable and
Unstable/Development repositories.
At the moment all the development work for the Ubuntu LEB happens in two PPAs:
- Staging PPA (https://launchpad.net/~linaro-maintainers/+archive/staging-overlay):
test and development related builds, usually broken but a common place
to share the packages before making them official at the LEB.
- Overlay PPA (https://launchpad.net/~linaro-maintainers/+archive/overlay):
main overlay repository, and common location for hardware enablement
packages and components released by Linaro (e.g. Linux Linaro).
Using one single main repository seems fine until you have at least
one board fully enabled, with dependencies going from the kernel to
user space, as the enablement usually doesn't keep up with the latest
development and updates produced by the Landing Teams and Working
Groups. One simple case is what happened with the Pandaboard, which
had a fully enabled userspace (OpenGL ES2.0 and HW Decode) but needed
to be locked down to specific kernel and user space packages.
Unlike the Android images, we don't just produce one single tarball
that the user is unable to change or extend. On our side we're
maintaining a real distro, and updating/upgrading packages is
expected. Once we release a fully enabled build, we can't simply break
it with newer updates, because that will make all users unhappy, which
is bad for the users and for Linaro (as it makes it hard to compare hw
enablement and produce demos).
At the same time, we don't want to be locked down into specific
versions, because one of our goals is to make sure the hardware
enablement is always matching the latest kernel/components available.
Working with upstream based components helps us with development and
validation, and also identifying what is still needed to get
everything working from time to time.
Looking at the problem, here's what I think it'd help to fix the situation:
1 - Overlay PPA becomes the main repository for basic platform
development, without having any hardware specific package (not even
the kernel).
This PPA would be used by all the images we're producing
(stable/unstable), as all the changes would be just related with the
distribution. Once we get newer components that are not related with
hardware enablement, and not critical enough to break compatibility
across images, we would be integrating them at this repository (e.g.
powertop, glmark2, libpng, libjpeg-turbo, etc).
2 - We create the Development Overlay PPA, to be the main place
carrying the hardware related pieces, like kernels and drivers.
This would be the main PPA used during our own development, always
containing the latest kernel packages from the Linux Linaro and
Landing Team trees. Here we could also integrate components related
with hardware enablement that are not for prime use yet, but would
help people that are always looking for the latest stuff available.
3 - Create a set of Stable PPAs for every board we're officially supporting:
Here the goal would be to have a single place where we can push the
enablement related changes that are known to be working. At the
Pandaboard case this PPA would contain the 3.1 based kernel with SGX
and HW video decoding working out of the box. Once a new snapshot
based on the development release is known to work in a satisfactory
way, we would simply just copy them over the Stable PPA for the
respective board. This would also help the cases where we have
packages with hw enablement changes that conflict with other boards.
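The "copy the known-good snapshot to the Stable PPA" step could be sketched like this (a toy model of the version-pinning logic only; real copies would go through Launchpad's copy-package machinery, and the PPA names and package versions below are invented):

```python
# Toy model of promoting validated packages from the Development Overlay
# to a board's Stable PPA. PPAs are modelled as {package: version} dicts;
# all names and versions here are made-up examples.

def promote(dev_ppa, stable_ppa, known_good):
    """Copy only the versions validated for this board into its stable PPA."""
    for pkg, version in known_good.items():
        if dev_ppa.get(pkg) == version:
            stable_ppa[pkg] = version
        else:
            print("skipping %s: validated %s no longer in dev overlay"
                  % (pkg, version))
    return stable_ppa

dev_overlay = {"linux-linaro-omap": "3.1-2",
               "pvr-omap4": "1.6-1",
               "glmark2": "2012.03-1"}
known_good_panda = {"linux-linaro-omap": "3.1-2", "pvr-omap4": "1.6-1"}

stable_panda = promote(dev_overlay, {}, known_good_panda)
print(sorted(stable_panda))
```

The point of the model: the stable repository never receives anything that wasn't first validated as a coherent set against the development overlay, so per-board conflicts stay out of it.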
So in the end we'd be generating 2 sets of hwpacks per board: one
based on the Development Overlay (latest components, even if not
working properly), and one based on the Stable PPAs, which we know
will always have better enablement. Both would be using the same
rootfs, as all distro related core changes will be part of the Overlay
PPA.
I believe this can simplify things a bit, and would not consume much
of our time, as the stable repository would just be a snapshot of a
known-to-be-working development PPA.
Please let me know what you all think, as we could have this model
working with the 12.04 Ubuntu Precise based images already.
Ricardo Salveti de Araujo
Postmortem and lessons learned for Linaro's release 2012.03
Highlights and Key Successes
The 12.03 release saw the debut of the refurbished Linaro Website with
a new content management system that makes creating user content much
easier and faster.
Development continues to hit deliverable targets, with the teams
continuing to generate builds for platform LEBs and working groups.
Linux Linaro under the Ubuntu Developer Platform premiered and will
become the baseline on which deliveries will be made. The big.LITTLE
Switcher project has launched and is going well with Platform
enablement and some Test runs/automation in place.
Pre-built release images for the Android and Ubuntu LEBs are currently
available with each of the Linaro monthly releases; however, new with
the Linaro 12.03 release is that now developers can get daily
pre-builds of the Ubuntu LEB which will be based on the current
release of Ubuntu. Pre-built images can be found at
Postmortem and Lessons Learned
On the heels of a very short release cycle came one of the longest
release cycles. Many more blueprints and effort came out of the longer
cycle. The length of the cycle allowed team members to focus and
increase their productivity.
Some issues were highlighted that need to be addressed. Jenkins
scripting was offline for a week affecting CI jobs during release
week. LAVA continues to have some instabilities. There needs to be
some sort of backup or redundancy for Jenkins scripts so that builds
are not offline for an extended period. Testing should not rely solely
on LAVA. In the event of LAVA issues, other testing methods should be
used.
Linaro Release Manager | Project Manager
Linaro.org | Open source software for ARM SoCs
FYI, this is about a test rebuild of precise pangolin, including armhf.
-------- Original Message --------
Subject: third test rebuild of precise pangolin
Date: Mon, 02 Apr 2012 15:43:58 +0200
From: Matthias Klose <doko(a)ubuntu.com>
To: ubuntu-devel <ubuntu-devel-announce(a)lists.ubuntu.com>
CC: ubuntu-devel <ubuntu-devel(a)lists.ubuntu.com>
Another (and probably the last) test rebuild for precise pangolin is currently
running on amd64, armhf and i386 (finished for main, universe is still
building). The results will show up at .
The rebuild is running on the distro buildds, so please be a bit conservative
with uploading packages (e.g. really do a successful local build before upload).
Please look at the build failures for the packages in the "superseded" section
too. It's not guaranteed that these are really fixed.