Enclosed you'll find a link to the agenda, notes and actions from the
Linaro Developer Platforms Weekly Status meeting held on January 26th
in #linaro-meeting on irc.freenode.net at 15:00 UTC.
https://wiki.linaro.org/Platform/Foundations/2011-01-26
Actions from the meeting were as follows:
* wookey, as a reward for great work on the omxil components FTBFS, to
also help get the ffmpeg-dist component building
* slangasek to get tgall_foo his uboot env settings for a BB
Regards,
Tom (tgall_foo)
Developer Platforms Team
"We want great men who, when fortune frowns will not be discouraged."
- Colonel Henry Knox
w) tom.gall att linaro.org
w) tom_gall att vnet.ibm.com
h) tom_gall att mac.com
Hey
I'd like to propose moving our Hudson stuff to Launchpad hosted
branches. This would be useful to:
* share our stuff, perhaps in common repos across hudson instances (see
below)
* have everybody follow best practices of keeping as much as possible
under a VCS
* help move/factor hudson instances (e.g. factor two instances in one,
move from home server or cloud to Linaro/Canonical IS datacenter)
I see two ways to approach this problem:
* each team owns a hudson instance, and stores its config and scripts
in the team's namespace; this doesn't encourage factoring instances
together though
* we create a virtual team of people caring for hudson stuff, e.g.
~linaro-hudson-hackers, and we put branches below that
We will need to keep the Hudson configs private because they might hold
private data such as Hudson's secret key or the user database; perhaps
with some private bzr branches? Or can we move to OpenID and avoid
storing these secret keys in the first place?
The branch with the build scripts would of course be public.
We could have some cron committing each config (changed from the web
UI) hourly, and pushing it to LP, and a cron pulling the latest version
of the scripts hourly (or a special hudson job which does either).
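As a sketch, that hourly round-trip could look something like the
following; the branch URLs and paths are made up for illustration, and
bzr is stubbed out with a shell function so the command sequence is
printed rather than run against a real branch:

```shell
#!/bin/sh
# Dry-run sketch of the hourly Hudson config round-trip. The Launchpad
# branch URLs below are hypothetical, and "bzr" is stubbed so the
# sequence of commands is shown instead of executed for real.
bzr() { echo "bzr $*"; }

CONFIG_BRANCH="lp:~linaro-hudson-hackers/hudson/config"
SCRIPTS_BRANCH="lp:~linaro-hudson-hackers/hudson/scripts"

# 1. Snapshot any config changed from the web UI and push it to Launchpad.
bzr add .
bzr commit -m "hourly config snapshot"
bzr push "$CONFIG_BRANCH"

# 2. Pull the latest version of the (public) build scripts.
bzr pull "$SCRIPTS_BRANCH"
```

The same two steps could equally live in a dedicated Hudson job instead
of cron, as suggested above.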
For scripts, we'd have to agree on some conventions such as namespaces
for Android build scripts (e.g. android-list-changes), hudson job
definitions (e.g. job-build-u-boot, or job-build-android).
A few things might slow us down if we want to proceed:
* Hudson might get renamed due to the situation with Oracle
* getting private branches
* we need some kind of Launchpad accounts for the hudson instances to
commit stuff
Paul, James, Michael Hudson, (Alexander?), and anybody, what are your
thoughts?
Cheers,
--
Loïc Minier
Hi there,
I am trying to implement a cpuidle driver for imx51, and to better
understand how the various C-states map to an ARM SoC, I would like to
get some comments.
First of all, we basically have three major states for imx51, which are
defined in the SoC's specification:
RUN - Core is active, clocks are on, and the required peripheral
modules are active. Software can gate the clocks of modules that are
not in use; in addition, the CCM can enable power gating for the
modules described above.
WAIT - Core is disabled and clock-gated; bus clocks to peripherals can
be on as required. PG (power gating) and SRPG (state-retention power
gating) can be applied to the Cortex-A8 and the different blocks as
described in the section above.
STOP - Core is disabled, peripherals are disabled, bus clocks are off,
and PLLs are off. PG and SRPG can be applied to the Cortex-A8 and the
different blocks as described in the section above.
Naturally, I think the mapping can be:
RUN - C0
WAIT - C1
STOP - C2
Or, if possible, some extra states could be inserted into each C-state
to get C3, C4, and so on.
Since other SoCs, like OMAP or Samsung's chips, already have cpuidle
drivers, I would especially like to compare imx51 with those.
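Whatever mapping the driver ends up registering is visible from
userspace through sysfs, which makes it easy to compare against the
OMAP and Samsung drivers on real hardware. A purely illustrative
sketch, safe to run on any Linux box and degrading gracefully where the
kernel exposes no cpuidle states:

```shell
#!/bin/sh
# List the C-states exposed by whatever cpuidle driver is loaded.
# Purely illustrative: on imx51 the proposal above would surface as
# states corresponding to RUN/WAIT/STOP. On kernels without cpuidle
# support (e.g. inside a container) it just says so.
base=/sys/devices/system/cpu/cpu0/cpuidle
if [ -d "$base" ]; then
    for state in "$base"/state*; do
        printf '%s: %s, exit latency %s us\n' \
            "${state##*/}" "$(cat "$state/name")" "$(cat "$state/latency")"
    done
else
    echo "no cpuidle states exposed by this kernel"
fi
```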
thanks
Yong
On 18 January 2011 03:35, Jaehoon Chung <jh80.chung(a)samsung.com> wrote:
> Hi Per..
>
> it is interesting approach..so
> we want to test your double buffering in our environment(Samsung SoC).
>
> Did you test with SDHCI?
So far I have only tested on mmci for the u5500 and u8500 boards. I
will happily test on a different board if I can get hold of one.
> If you tested with SDHCI, i want to know how much increase the performance.
>
> Thanks,
> Jaehoon Chung
On Sat, Jan 29, 2011, Loïc Minier wrote:
> Backports to Ubuntu 10.04 and Ubuntu 10.10 will be provided shortly.
A backport to Ubuntu 10.04 will take longer to prepare, but a backport
to Ubuntu 10.10 is now available in ppa:linaro-maintainers/tools.
--
Loïc Minier
Hi Folks,
I believe that every Versatile Express board user would agree that
booting a Linux kernel on it is "slightly" more complex than on, say, a
Beagle...
I wrote a short text describing how I do it:
https://wiki.linaro.org/PawelMoll/BootingVEMadeEasy
The interesting bit may be the lack of U-boot in my flow. In the future
the VE Boot Monitor will be able to boot the kernel on its own, but
with a few tricks it is effectively possible already. So the only case
where one would have to use U-boot is to download the kernel over the
network.
Anyway, any feedback is appreciated, especially if the whole thing is
absolutely pointless. Don't hesitate to use the Wiki "Edit" button :-)
Additionally, I think that using a similar technique it would be
possible to prepare a U-boot-less MMC card that automatically boots the
kernel. Let me know if anyone is interested; it could potentially be
useful for the media-create script?
Cheers!
Paweł
Hey Nicolas,
In working on my 2.6.37 based merge tree of the linaro and android
kernels, I've been seeing a crash or hang at init time on my beagleboard
xm.
I reproduced it with your linaro-2.6.37 tree, and I managed to bisect it
down to the following commit:
ff8ea0c44f666883ef1e5545c33f858557b02043 ARM: 6384/1: Remove the domain
switching on ARMv6k/v7 CPUs
Reverting this patch allows the linaro-2.6.37 kernel to boot on a linaro
disk image, and also allows my dev/linaro.android branch to boot using
an android disk image.
Is anyone else seeing this issue?
thanks
-john
On Thu, 27 Jan 2011, Thomas Abraham wrote:
> On 27 January 2011 19:35, Angus Ainslie <angus.ainslie(a)linaro.org> wrote:
> > Hi Thomas,
> >
> > Thanks for pointing out that mailing list. Can you have a quick look a the
> > included patch and tell me if you've seen these upstream for the v310. I
> > found a similar one for the v210 but not the v310.
> >
> > The patch is untested as I haven't received my hardware yet.
>
> Hi Angus,
>
> pdma patch was submitted. The link to the patch is
> http://www.spinics.net/lists/arm-kernel/msg108006.html
>
> I have not seen any patch to change IRQ_EINT(5) trigger level.
>
> defconfig for s5pv310 was deliberately avoided by the maintainer
> because of linus being unhappy with too many defconfigs. So for
> s5pv310, the s5pv210_defconfig is used.
As I'd prefer to use patches in the Linaro kernel tree which are as
close as possible to upstream, including proper authorship
attributions, could you direct me to a git repo where I could find
them?
Alternatively, can you resend them to me? Extracting patches from
mailing list archives is not practical.
Nicolas