This is my first post; I'm a complete noob to Linaro, having discovered it
yesterday. Needing to run OpenGL ES applications and media playback, I
was excited to find e.g. this one:
(under Linaro, GStreamer playing a video by DSP in fullscreen)
and this one, admittedly not Linaro but Debian:
(Quake3 using hardware OpenGL ES)
During my first, promising experiments, I was surprised to find that
OpenGL acceleration and hardware A/V decoding don't seem to be part
of the Linaro hardware pack. Does anybody have that integrated, or
how would I go about adding it?
Interestingly, TI has just released those supporting components in a
But they come with an older kernel. To my understanding, the TI code
would have to be recompiled against the Linaro kernel?
Thanks for answers,
As a followup to IRC conversations around backports, releases and QA
today, I'd like to hear what others think of our Linaro PPAs. I'll
start with some history and proposals:
We created a fairly ad hoc PPA layout for the 10.11 cycle, with
and nowadays we also have this PPA for toolchain backports:
and misc other PPAs which I don't really know much about:
(and probably many more)
There are multiple problems with our current approach:
* ~linaro-maintainers membership is poorly defined
* it's not clear which software should go in which PPA
* it's not clear when we can update which PPAs, e.g. can we update them
after 6-monthly releases? how do we QA/validate updates?
In general, it's good to avoid PPA proliferation, both for sanity and
for the comfort of our end users, but I think a consistent set of PPAs
is more important than trying to optimize the number of PPAs to the
smallest subset possible.
Some ideas on addressing this:
* have software ownership split by team; ~toolchain owns gcc-linaro and
qemu-linaro, ~infrastructure owns
* have each team decide between either:
- a single PPA for all their outputs (e.g. ~infrastructure/ppa for
linaro-image-tools and gcc-log-analyzer)
- or multiple PPAs, one per software (e.g. ~toolchain/gcc-linaro PPA
for gcc-linaro, ~toolchain/qemu-linaro PPA for qemu-linaro)
* optionally, additional PPAs for dailies
* optionally, additional PPAs for RC releases
[ this is inspired by the set of PPAs for bzr:
In addition, we'd have a single overlay PPA which would be used for
rootfs builds. We could keep the existing one below
- list of ~linaro-teams
- do we upload latest release of e.g. gcc-linaro to the natty toolchain
PPA if it already gets uploaded to natty proper?
I am using the linaro-m-headless-rootfs, and the tools for btrfs are missing.
I want to put btrfs on the onboard MMC and therefore I would like to
run btrfs-tools on target.
Does anyone have an ARM build of btrfs-tools?
As a result of a series of bugs around linaro-image-tools and daily
images, it seemed that a sensible approach to solving this class of
issues would be to move more data into hardware packs rather than
hardcoding it in linaro-image-tools.
Guilherme, Steve Langasek, and Michael Hudson were all interested in
the idea, so Michael convinced me to start a wiki page with some notes:
I'm sending this for others to get the chance to raise issues with
hwpacks since we don't want to change the format too frequently.
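For context, a hardware pack today is roughly a tarball carrying a simple
key=value metadata file plus packages; the kind of board data that could
migrate out of linaro-image-tools might look like this (a sketch only —
the field names below are illustrative, not a format proposal):

```
NAME=linaro-omap4
VERSION=20101108
ARCHITECTURE=armel
ORIGIN=linaro
SUPPORT=supported
# board details currently hardcoded in linaro-image-tools could
# become fields like these (illustrative values):
KERNEL_ADDR=0x80000000
INITRD_ADDR=0x81600000
SERIAL_TTY=ttyO2
```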
I've just uploaded linaro-image-tools 0.4(.1) to natty; thanks to the
hard work of Michael Hudson, Guilherme Salgado, James Westby, Martin
Ohlsson and many others, we have an awesomely nicer tool to write our
images.
For developers: please note that the branch location just moved from:
(lp:linaro-image-tools will do the right thing)
This doesn't affect existing merge requests, but it might affect your
checkouts; use "bzr pull --remember lp:linaro-image-tools" (or push)
the next time you pull (or push).
Packaging will also be split out into
Once this gets out of NEW and enters natty, I should prepare lucid and
maverick backports into the Tools PPA (ppa:linaro-maintainers/tools);
in the meantime, it's in my PPA:
I had a go at hacking up a tool to copy sparse images to cards less
wastefully (and hopefully faster) by only copying the important
data: see src/fibcp.c here:
It's used like this:
# fibcp image.bin /dev/<sdcard>
There are some caveats, most notably:
* You have to be root, since the tool relies on the FIBMAP ioctl.
* The filesystem the image file is generated on must not "optimise"
explicit writing of blocks of zeros by generating a sparse file.
ext[2-4] should work; not sure about btrfs.
* The image must be freshly generated via a loop mount (e.g., by
linaro-media-create --image_file ...); copied, tar'd or gzip'd images
won't work.
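The sparse-file caveat can be sanity-checked on the filesystem in
question; this sketch (file names made up) contrasts explicitly written
zeros with zeros skipped over by seeking:

```shell
# Explicitly write 8 MB of zeros: a cooperative filesystem allocates
# real blocks for them, which is what fibcp needs.
dd if=/dev/zero of=dense.bin bs=1M count=8
# Seek past the end instead: this makes a sparse file (a hole),
# with no blocks allocated.
dd if=/dev/zero of=sparse.bin bs=1M count=0 seek=8
# Compare actual allocation: dense.bin should show ~8192K,
# sparse.bin close to zero.
du -k dense.bin sparse.bin
```

If a file of explicitly written zeros shows up with near-zero usage,
the filesystem is turning those writes into holes and the image isn't
safe to feed to fibcp.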
It works well enough that e2fsck passes after writing an image to a
card which was previously full of random data.
Anyway, it's there if anyone wants to play with it -- comments welcome.