Hi All,
The minutes of the power management weekly call can be found at:
https://wiki.linaro.org/WorkingGroups/PowerManagement/Meetings/2011-06-01
Summary:
* A block diagram has been added to the thermal wiki page. A first
port of the thermal framework to Panda is in progress.
* The PowerDebug code has been restructured to prepare for the
addition of new features (GPIO, thermal).
* Some issues with the cpufreq tests have been raised. Vishwa is going
to check whether all patches for OMAP3 cpufreq have been pushed into
the Linaro kernel.
* A first version of the ARM CPU topology definition based on MPIDR is
running (a rough sketch of the idea is below). We are also going to
evaluate the benefits of using the FDT to describe the CPU topology.
* A first version of a common CPU context save/restore for cpuidle
should be proposed soon.
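To illustrate the MPIDR-based approach, here is a minimal C sketch
(the helper and table names are hypothetical, not the actual patch;
the affinity field layout is as defined by the ARM ARM):

#define MPIDR_AFF0(mpidr)  ((mpidr) & 0xff)          /* CPU within cluster */
#define MPIDR_AFF1(mpidr)  (((mpidr) >> 8) & 0xff)   /* cluster */
#define MPIDR_AFF2(mpidr)  (((mpidr) >> 16) & 0xff)  /* higher-level group */

struct cputopo {
        unsigned int core_id;
        unsigned int cluster_id;
};

static struct cputopo cpu_topology[8];   /* one entry per possible CPU */

static inline unsigned int read_mpidr(void)
{
        unsigned int mpidr;

        /* MPIDR: Multiprocessor Affinity Register (CP15 c0, opc2 5) */
        asm volatile("mrc p15, 0, %0, c0, c0, 5" : "=r" (mpidr));
        return mpidr;
}

static void store_cpu_topology(unsigned int cpu)
{
        unsigned int mpidr = read_mpidr();

        cpu_topology[cpu].core_id    = MPIDR_AFF0(mpidr);
        cpu_topology[cpu].cluster_id = MPIDR_AFF1(mpidr);
}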
Vincent
For links and details check:
https://wiki.linaro.org/WorkingGroups/Middleware/Graphics/Status/2011-06-01
== Key Points for wider discussion ==
* Training material request: a "how to release" wiki page for working
groups, with instructions that can be followed step by step, indicating
which routes to take in common cases (when to release as a tarball,
etc.). Action item for Fathi, Ricardo and Tom Gall; perhaps a
discussion should also be started on the mailing list to get feedback
from the teams.
* Managing our monthly releases: the level of quality testing may need
to be decided component by component, especially for components where
correctness testing is not really there (or is challenging to
implement completely). The Validation team should participate, as
perhaps should others (e.g. the landing teams, to get some crisp
baselines from teams actually using our components).
== Team Highlights ==
* Review for the GWG cycle public plans 20110602 (today!). Check the
[[https://wiki.linaro.org/Cycles/1111/PublicPlanReview|PublicPlanReview]] page!
* Memory management: Jesse sent out a status report after the UDS
mini-summit discussions. Jesse will coordinate these activities with
Sumit and Rob.
* NUX work: daily sync-ups with Alexandros, Travis and Jesse trying to
get NUX working. The latest big finding was that NUX tries to support
the OpenGL Shading Language in a way that would not even work on the
desktop; the implementation of the shading language in NUX is not
correct (Alexandros to file a bug). We believe we have a solid fix
coming up.
* Compiz: the blur plugin is the only thing missing; Travis is testing
it now. The merge branch from our team will need to be packaged for
some testing.
* Validation (registering out-of-tree tests and providing the tests to
the validation team): Alexandros worked with Zygmunt and defined two
ways to proceed. If a test does not require anything special, we can
push it into abrek and it will be carried in-tree. For out-of-tree
packages we need to tell the validation team which PPA the package
will live in and what it enables, so they can pull it into validation.
* Managing our monthly releases: Alexandros started the discussion on
how to manage the release of the GWG components. The cycles are being
refactored to use monthly releases of our components, so we may or may
not have interesting work to put into the monthly release every month,
but we should have a good process in place so that we are able to
release the components as needed. Automatic testing of the components,
and testing the other components for possible breakage prior to
merging something (to know how well the changes work and what they
break), is a challenge: not everything we work on has built-in support
for automatic tests, especially for correctness testing. We will have
to check, component by component, which parts of the manual testing
can be automated. NUX has a standalone test, but the tests do not
cover visuals, and there are many combinations of plugins and
configurations in use; writing tests for 100% coverage would be a real
challenge, and manual testing of just the default configuration (which
amounts to the desktop functionality) is the only testing applied
right now. Other possible ideas: basic automatic validation of compiz
functionality may be doable (a screensaver plugin was mentioned as an
example), e.g. by poking compiz via D-Bus, exercising its
window-manager functionality (for example, moving windows around), and
using some tool to check that the drawing is as it should be.
* Related to packaging for the monthly release, a question arose about
what kind of setup we should have in order to release good-quality
tarballs. Alexandros commented that a day or two of testing prior to
the release is good enough for now. Building daily packages and using
the test reports from those components that have self-testing could
also be useful for the team. Alexandros will write up a description of
what he does on the wiki.
* Developing the blueprints further; activities for 11.06 to be
targeted (action for all).
* Training request: how to do the release of components for a group.
We need to collect the material and make sure that everyone in the
team gets some cross-training on how to release components for the
group. Maybe later promote the wiki page to a general howto.
== Upcoming Deliverables ==
* Working now to define the targets for the 11.06 release
== Miscellaneous ==
* Sumit Semwal (TI) introduced himself to the group for the first
time. Welcome Sumit!!
--
Ilias Biris,
Aallonkohina 2D 19, 02320 Espoo, Finland
Tel: +358 50 4839608 (mobile)
Email: ilias dot biris at linaro dot org
Skype: ilias_biris
There has been quite a lot of discussion of image creation, how many
flavours there should be, where we store them and for how long, how
they get generated and delivered etc. And there were many sessions at
UDS as well as a couple of threads on this list covering this.
All this discussion left me feeling that people had got a little
fixated on pre-generated images, and the alternative approach of using
an installer has not received enough attention. Only the server
sessions even mentioned this and even there we are still planning to
start with an image and move to an installer later (IIRC).
So I just thought I'd bring this up as a point to bear in mind as we
move along. I really think we should be taking the installer option
more seriously in the future, and not trying to serve the whole world
of possibilities with images.
There are clear advantages to the 'just dd an image onto removable
media and plug it in to get started' approach, and having one of these
available for that 'instant start' fix makes a lot of sense. I'm not
suggesting we should stop doing that; this is the 'crack' to get new
people addicted :-)
But the more you get into having different images for different
machines and purposes, the more you suffer from the combinatorial
explosion problem. And all those images are actually just combinations
of packages for the hardware and the software wanted, which are
trivially specified and installed by a general-purpose installer (an
example of which already exists in Debian, working across all
architectures, using the same software to run the install as is
ultimately installed on the machines).
By using an installer (actually just a mini-distro image, which could
be made exactly as the images we are currently making) running on the
target hardware you gain the ability to make as many 'tasks'
(currently images) as you like, and support as many boards as you want
to support without the combinatorial explosion. And it can be skinned
and pre-configured in different ways for different use-cases (fully
automated/simple task selection/fully configurable, headless/gui,
netboot/SD image and so on).
I realise that this involves some work, but so does all that
image-generation, and I do believe that the flexibility of this
approach is powerful and we shouldn't ignore it to the degree we
currently are.
Pre-built images come from an 'embedded systems' view of the world.
Real computers can run installers to put their operating systems on
and set themselves up, and we should be taking advantage of that. ARM
computers _are_ real computers now and we should be treating them as
such.
Yes it takes a lot longer on initial boot, which is why I agree that
there should be at least one 'instant gratification' image, but for
real work, having to install your OS the first time you boot a board
is not a big deal, and provides enormous flexibility. We can make it
slick and painless.
So, that's all. We don't necessarily need to have a big debate about
this now, I'd really just like to make sure that this is properly on
the agenda for the next cycle, and that we don't all get so stuck in
the 'everything must be supplied as pre-generated image' mindset that
we back ourselves into a very inefficient corner.
Wookey
--
Principal hats: Linaro, Emdebian, Wookware, Balloonboard, ARM
http://wookware.org/
Hello Arnd,
I'd like to follow up to the discussion which took place during
"Android Code Review" session at LDS
(https://wiki.linaro.org/Platform/Android/Specs/AndroidCodeReview).
You expressed concern that Gerrit is not flexible enough for working
with multiple topic branches; in particular, that it enforces work
against a single (master) branch (as far as I understood).
Well, looking at AOSP's Gerrit instance,
https://review.source.android.com/ one can see that there is a "branch"
field present and shown in the list of patches, and there are patches
which reference non-master branches (here's the list for the "froyo"
branch, for example:
https://review.source.android.com/#q,status:open+branch:froyo,n,z ).
So Gerrit at least allows labelling a patch with the branch it is
submitted against.
So, can you please elaborate on the concerns you expressed? A
generalized user story and a specific example would be very helpful.
(And please keep in mind that I still know little about Gerrit, so if
the above doesn't relate directly to your concerns, I'm sorry; that's
exactly why I'd like to collect more detailed info on the Gerrit
features/misfeatures people know about.)
Thanks,
Paul
Memory Management Mini-Summit
Linaro Developer Summit, Budapest, May 9-11, 2011
=================================================
Hi all. Apologies for this report being so long in coming. I know
others have thrown in their perceptions and opinions on how the
mini-summit went, so I suppose it's my turn.
Outcomes:
---------
* Approach (full proposal under draft, to be sent to the lists below)
- Modified CMA for additional physically contiguous buffer support.
- dma-mapping API changes, enhancements and ARM architecture support.
- "struct dma_buf" based buffer sharing infrastructure with support
from device drivers.
- Pick any "low-hanging fruit" with respect to consolidation
(supporting the ARM arch/sub-arch goals).
* Proposal for work around allocation, mapping and buffer sharing to
be announced on:
- dri-devel
- linux-arm-kernel
- linux-kernel
- linux-media
- linux-mm
- linux-mm-sig
* Communication and Meetings
- New IRC channel #linaro-mm-sig for meetings and general
communication between those working on and interested in these topics
(already created).
- IRC meetings will be weekly, with an option for the constituency to
decide on the ultimate frequency (logs to be emailed to the
linaro-mm-sig list).
- Linaro can provide wiki services and any dial-in needed.
- Next face-to-face meetings:
. Linaro mid-cycle summit (August 1-5, see
https://wiki.linaro.org/Events/2011-08-LDS)
. Linux Plumbers Conference (September 7-9, see
http://www.linuxplumbersconf.org/2011/ocw/proposals/567)
. V4L2 brainstorm meeting (Hans Verkuil to update with details)
Overview and Goals for the 3 days:
----------------------------------
* Day 1 - Component overviews, expected to spill over into day 2
* Day 2 - Concrete use case that outlines a definition of the problem
that we are trying to solve, and shows that we have solved it.
* Day 3 - Dig into the lower level details of the current
implementations. What do we have, what's missing, what's not
implemented for ARM.
This is about memory management, zero-copy pipelines, kernel/userspace
interfaces, memory reservations and much more :-)
In particular, what we would like to end up with is:
* Understand who is working on what; avoid work duplication.
* Focus on a specific problem we want to solve and discuss possible solutions.
* Come up with a plan to fix this specific problem.
* Start enumerating work items that the Linaro Graphics WG can work
on in this cycle.
Day 1:
------
The first day got off to a little bit of a stutter start as the summit
scheduler would not let us indicate that our desired starting time was
immediately after lunch, during the plenaries. However, that didn't
stop people from flocking to the session in droves. By the time I
made the kickoff comments on why we were there, and what we were there
to accomplish (see "Overview and Goals for the 3 days" above), we had
brought in an extra 10 chairs and there were people on the floor and
spilling out into the hallway.
Based upon our experiences from the birds-of-a-feather at the Embedded
Linux Conference, two things dominated day 1. First things first, I
assigned someone to take notes ;-). Etherpad made it really easy for
people to take notes collectively, including those participating
remotely, and for everyone to see who was writing what, but we
definitely needed someone whose focus would be capturing the
proceedings, so thanks to Dave Rusling for shouldering that burden.
The second thing was that we desperately needed an education in each
other's components and subsystems. Without this, we would risk missing
significant areas of discussion, or possibly even find ourselves
violently agreeing on something without realizing it. So, we started
with a series of component overviews: presentations on the order of
20 minutes with some room for Q&A. On day 1, we had:
* V4L2 - Hans Verkuil
* DRM/GEM/KMS - Daniel Vetter
* TTM - Thomas Hellstrom
* CMA - Marek Szyprowski
* VCMM - Zach Pfeffer
All of these (as well as the ones from day 2) are available through
links on the mini-summit wiki
(https://wiki.linaro.org/Events/2011-05-MM).
Day 2:
------
The second day got off to a rather better start than day 1, as we
communicated the start time more clearly to everyone involved and
forgot about the summit scheduler. We (conceptually) picked up where
day 1 left off with one more component overview:
* UMP - Ketil Johnson
and covered the Media Controller API for good measure. From there, we
spent a fair amount of time discussing use cases to illustrate our
problem space. We started (via pre-summit submissions) with a couple
of variations on what amounted to basically the same thing. I think
the actual case is probably best illustrated by the pdf slides from
Sakari Ailus (see the link on the mini-summit wiki). Basically, we
want to take a video input, either from a camera or from a file,
decode it, process it, render to it and/or with it and display it.
These pipeline stages may be handled by hardware, by software on the
CPU or some combination of the two; each stage should be handled by
accepting a buffer from the last stage and operating on it in some
fashion (no copies wherever possible). It turned out that still image
capture can actually be a more complicated version of this use case,
but even something as simple as taking input through the camera and
displaying it (image preview) can involve much of the underpinnings
required to support the more complicated cases. We may indeed start
with this simple case as a proof-of-concept.
Once we had the use case nailed down, we moved onto the actual
components/subsystems that would need to share buffers in order for
the use case to work properly with the zero-copy (or at least
minimal-copy) requirement. We had:
* DRM
* V4L2
* fbdev
* ALSA
* DSP
* User-space (kind of all encompassing and could include things like
OpenCL, which also makes an interesting use case).
* DVB
* Out-of-tree GPU drivers
We wound up the day by discussing exactly what metadata we would want
to track in order to enable the desired levels of sharing with
simultaneous device mappings, cache management and other
considerations (e.g., device peculiarities). What we came up with is
a struct (we called it "dma_buf") that has the following info:
* Size
* Creator/Allocator
* Attributes:
- sharable?
- contiguous?
- device-local?
* Reference count
* Pinning reference count
* CPU cache management data
* Device private data (e.g., quirky tiling modes)
* Scatter list
* Synchronization data (for managing in-flight device transactions)
* Mapping data
These last few (device private data through mapping data) are lists,
with one entry for each device that has a mapping of the buffer. The
mapping data is nominally an address plus per-device cache management
data. We actually got through this part fairly quickly. The biggest
part of the discussion was what to use for handles/identifiers in the
buffer sharing scheme: global identifiers, as GEM uses, or file
descriptors, as favored by Android.
Initially, there was an informal consensus around unique IDs, though
it was not a definitive decision (yet). The atomicity of passing file
descriptors between processes makes them quite attractive for the
task.
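Pulling the bullets above together into C, the structure we sketched
would look something like this (all field names and types here are
illustrative only, not a settled API):

#include <linux/atomic.h>
#include <linux/dma-mapping.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/scatterlist.h>

struct dma_buf {
        size_t           size;          /* buffer size */
        struct device    *allocator;    /* creator/allocator */
        unsigned long    flags;         /* sharable? contiguous? device-local? */
        struct kref      refcount;      /* reference count */
        atomic_t         pin_count;     /* pinning reference count */
        void             *cache_data;   /* CPU cache management data */
        struct sg_table  *sg;           /* scatter list */
        struct list_head mappings;      /* per-device entries, see below */
};

/* One of these per device that has a mapping of the buffer. */
struct dma_buf_mapping {
        struct list_head node;
        struct device    *dev;
        void             *priv;         /* device private data (e.g. tiling) */
        dma_addr_t       addr;          /* mapping data: device address */
        void             *sync;         /* in-flight transaction tracking */
};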
Day 3:
------
By the third day, there was a sense of running out of time and really
needing to ensure that we left with a reasonable set of outcomes (see
the overview and goals section above). In short, we wanted to make
sure that we had a plan/roadmap, a reasonably actionable set of tasks
that could be picked up by Linaro engineers and community members
alike, and that we would not only avoid duplicating new work, but also
reduce some of the existing code duplication that got us to this point
in the first place.
But, we weren't done. We still had to cover the requirements around
allocation and explore the dma-mapping and IOMMU APIs.
This took most of the day, but was a quite fruitful set of
discussions. As with the rest of the discussions, we focused on
leveraging existing technologies as much as possible. With
allocations, however, this wasn't entirely possible as we have devices
on ARM SoCs that do not have an IOMMU and require physically
contiguous buffers in order to operate. After a fair amount of
discussion, it was decided to use a modified version of the current
CMA (see Marek's slides linked from the wiki). It assumes the pages
are movable, and manages the pages rather than the mappings. There was
concern that the API didn't quite fit with other related APIs, so the
changes from the current state will be around those details.
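For context, the allocation API a driver sees would be unchanged by
this; a modified CMA would simply back the existing allocator on
devices without an IOMMU. A minimal (purely illustrative) driver
fragment:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static int example_get_buffer(struct device *dev, size_t size)
{
        dma_addr_t dma_handle;
        void *cpu_addr;

        /* Returns a buffer that looks contiguous to the device; on
         * IOMMU-less hardware it must be physically contiguous, which
         * is what CMA provides. */
        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
        if (!cpu_addr)
                return -ENOMEM;

        /* ... program the device with dma_handle; the CPU uses cpu_addr ... */

        dma_free_coherent(dev, size, cpu_addr, dma_handle);
        return 0;
}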
On the mapping side, we focused on the dma-mapping API, layered on the
IOMMU API where appropriate. Without going into crazy detail, we are
looking at something like four implementations of the dma_map_ops
functions for ARM: with and without IOMMU, and with and without bounce
buffers (these last two exist, but do not use the dma_map_ops API).
Marek has put out patches for comment on the IOMMU-based
implementation, based upon work he had in progress. Also in the area
of dma_map_ops, the sync-related APIs need a start address and offset,
and alloc and free need attribute parameters like map and unmap
already have (to support cacheable/coherent/write-combined). In the
"not involving
dma_map_ops" category, we have a couple of changes that are likely to
be non-trivial (not that any of the other proposed work is). It was
proposed to modify (actually, the word thrown about in the discussions
was "fix") dma_alloc_coherent for ARM to support unmapping from the
kernel linear mapping and the use of HIGHMEM; two separate
implementations, configured at build-time. And, last but not least,
there was a fair amount of concern over the cache management API and
its ability to live cleanly with the IOMMU code and to resist breakage
from other architecture implementations.
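To make the dma_map_ops shape concrete, here is a rough skeleton of
what the IOMMU-backed variant might look like (the function names are
hypothetical and the bodies are stubs; the ops fields shown existed in
the struct dma_map_ops of that era, but this is a sketch, not Marek's
patches):

#include <linux/dma-mapping.h>

static dma_addr_t arm_iommu_map_page(struct device *dev,
                                     struct page *page,
                                     unsigned long offset, size_t size,
                                     enum dma_data_direction dir,
                                     struct dma_attrs *attrs)
{
        dma_addr_t iova = 0;

        /* Allocate an IO virtual address range and program the IOMMU
         * page tables so the device sees a contiguous buffer. */
        return iova;
}

static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
                                 size_t size,
                                 enum dma_data_direction dir,
                                 struct dma_attrs *attrs)
{
        /* Tear down the IOMMU mapping and release the IOVA range. */
}

static struct dma_map_ops arm_iommu_dma_ops = {
        .map_page   = arm_iommu_map_page,
        .unmap_page = arm_iommu_unmap_page,
        /* .map_sg, .unmap_sg, .sync_single_for_cpu, ... */
};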
At this point, we reviewed what we had done and finalized the outcomes
(see the outcomes section at the top). And, with half an hour to
spare, I re-instigated the file descriptors versus unique identifiers
discussion from day 2. I think file descriptors were winning by the
end (especially after people started posting pointers to code samples
of how to actually pass them between processes)....
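For anyone who missed those pointers: the standard mechanism is
SCM_RIGHTS over a UNIX-domain socket. A minimal userspace sketch (my
own illustration, not one of the samples posted in the session):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one open file descriptor to the peer on a UNIX-domain socket;
 * the kernel installs a duplicate of it in the receiving process. */
static int send_fd(int sock, int fd)
{
        char byte = 0;          /* at least one byte of real data must go */
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {
                struct cmsghdr align;   /* forces correct alignment */
                char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg;
        struct cmsghdr *cmsg;

        memset(&u, 0, sizeof(u));
        memset(&msg, 0, sizeof(msg));
        msg.msg_iov        = &iov;
        msg.msg_iovlen     = 1;
        msg.msg_control    = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}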
Attendees:
----------
I will likely miss people here trying to list out everyone, especially
given that some of the sessions were quite literally overflowing the
room we were in. For as accurate an account of attendance as I can
muster, check out the list of attendees on the mini-summit wiki page
or the discussion blueprints we used for scheduling:
https://wiki.linaro.org/Events/2011-05-MM#Attendees
https://blueprints.launchpad.net/linaro-graphics-wg/+spec/linaro-graphics-m…
https://blueprints.launchpad.net/linaro-graphics-wg/+spec/linaro-graphics-m…
https://blueprints.launchpad.net/linaro-graphics-wg/+spec/linaro-graphics-m…
The occupants of the fishbowl (the front/center of the room in closest
proximity to the microphones) were primarily:
Arnd Bergmann
Laurent Pinchart
Hans Verkuil
Mauro Chehab
Daniel Vetter
Sakari Ailus
Thomas Hellstrom
Marek Szyprowski
Jesse Barker
The IRC fishbowl seemed to consist of:
Rob Morell
Jordan Crouse
David Brown
There were certainly others both local and remote participating to
varying degrees that I do not intend to omit, and a special thanks
goes out to Joey Stanford for arranging a larger room for us on days 2
and 3 when we had people sitting on the floor and spilling into the
hallway during day 1.
Hi there Eric,
I have some results now from trying a Thumb-2 kernel on our various
trees for imx:
* It "works" on the packaged 2.6.38 kernel tree (linux-linaro-natty),
but I can't test things the kernel doesn't support (like power
management).
[ John, is it worth trying to turn this on for the vanilla linaro
tree and seeing whether it causes problems for anyone? Are there
any particular tests I should do? ]
* The landing team trees contain some power management code that
doesn't build for Thumb-2 -- if you can suggest who to discuss
that with / where to propose patches, that would be great.
(The affected code appears simple to patch.)
* Many .align directives need to be added where inline data words
are declared in .S files (see the example at the end of this list).
I can advise further on this / suggest a patch. Only the landing
team trees appear to be affected.
* The lt-2.6.38 and linaro-lt-mx5-2.6.35 trees don't seem to boot
for me, either in the default configuration (from
debian.linaro/config/) or in Thumb-2.
I may be doing something wrong...
* The lt-2.6.38 tree has CONFIG_FIQ=y. I believe this might be a
bug: this option is implicitly enabled in sound/soc/imx/Kconfig
if any imx SoC audio support is built.
I believe that FIQ may not be used by any driver applicable to the
mx5 platforms, but I'd like someone to clarify this, since I'm not
familiar with exactly which platforms exist and their hardware
configuration.
If FIQ might be used, then some work is needed in plat-mxc/ssi-fiq.S,
since that code is not currently Thumb-2 compatible. Note that all
exceptions are entered in Thumb state in a Thumb-2 kernel.
There are a couple of core arm patches I recently upstreamed
which would also be needed.
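To illustrate the .align point from the list above (a hypothetical
fragment): Thumb-2 code is only guaranteed 2-byte alignment, so a
.word emitted straight after instructions can land on a halfword
boundary unless it is explicitly aligned:

        ldr     r0, 1f          @ pc-relative load of the inline word
        bx      lr
        .align  2               @ required: force 4-byte alignment
1:      .word   0x12345678      @ inline data word declared in the .S file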
I'm guessing the best way forward is to discuss these with someone
on the landing team.
None of the problems appears to affect the code that is currently
upstream.
Cheers
---Dave
Hi there John,
I've tested the linux-linaro-natty tree with CONFIG_THUMB2_KERNEL on
vexpress, and it seems to work well.
Since we have reasonable confidence in the status of vexpress for
Thumb-2 anyway, the start of this cycle seems a good opportunity to
switch to Thumb-2 for this platform.
Which packaging tree is the active one for the 11.11 cycle? I guess
that would be the appropriate place for this config change.
Cheers
---Dave
Options to add:
CONFIG_THUMB2_KERNEL=y
CONFIG_THUMB2_AVOID_R_ARM_THM_JUMP11=y (this should get turned on automatically:
if so, leave it implicit)
Options to remove:
CONFIG_FPE_NWFPE=n
CONFIG_FPE_FASTFPE=n
Folks
I recently came up with a list of ideas proposed for the problem of the
development cycle. I put those in the wiki; however, it soon dawned on
me that others would need to actually edit the wiki in order to give
feedback. Each user could add a comment or a +1 or something like that,
and deciphering the results would then mean some work going through the
user comments.
So I want a quick and easy way for readers to rate the ideas. MoinMoin
supports star ratings, as described here:
http://moinmo.in/RatingSystemForMoin. Basically it allows readers to
rate options given in a wiki page without having to edit the page; they
just click on the stars.
I made an IT request to have this plugin installed, but was asked to
start a discussion on the list, as the usefulness of this plugin is
doubted.
Am I the only one needing some quick way for readers to rate ideas put
on the wiki? If you also have the same need, +1 it by responding to this
thread :-) Hopefully we can have this support added.
Thank you
--
Ilias Biris,
Aallonkohina 2D 19, 02320 Espoo, Finland
Tel: +358 50 4839608 (mobile)
Email: ilias dot biris at linaro dot org
Skype: ilias_biris
Hi folks,
There is a significant number of patches sent to devicetree-discuss
that were CCed to patches@linaro.org and thus will be counted when
generating our patch stats. However, if it is just a mailing list for
discussion and the patches are actually being sent to LKML (or other
MLs like linux-arm-kernel) as well, they will end up being
double-counted, which is not what we want.
I'd appreciate it if somebody could clarify for me what the purpose of
the devicetree-discuss ML is, and whether or not we should be ignoring
patches sent there.
(Note that even if the right thing to do is to ignore patches sent
there, it's fine, for simplicity's sake, for you all to continue CCing
patches sent there to patches@l.o -- we'll just ignore them when
generating stats.)
Cheers,
--
Guilherme Salgado <https://launchpad.net/~salgado>