Greetings,
Please find below a link to the agenda, minutes and actions from the
Linaro kernel working group weekly meeting of December 13, 2010.
https://wiki.linaro.org/WorkingGroups/KernelConsolidation/Meetings/2010-12-…
== Summary ==
* Reviewed the Freescale BSP. Many drivers are close to being upstreamable.
* eMMC performance improvement: initial numbers show 5% gains for large
writes and 10% gains for large reads.
* USB patches for the 8500 have been accepted into Greg KH's tree and Felipe
Balbi's tree.
* u-boot is in the final phases of the 2010-12 release. Last week's patch
(the environment code would incorrectly report correctable errors) has been
accepted into mainline.
* Working on RTC issues: the timer list and timer queue work, which are
precursors to getting the Android alarms upstreamed. Patches are in -tip and
slated for 2.6.38. LWN article on timers and alarms.
* Working on IRQ code in the core kernel in support of virtual IRQs. Working
on QEMU models of Versatile Express and the Nvidia Tegra (serial port and
eMMC). Dusting off the u-boot work that John Rigby has done so that the
device tree is passed correctly to the kernel. A kernel configuration
management solution is in the works -- "Kconfig fragments": the configuration
process loads a small per-board file that sets defaults, and then combines it
with the full Kconfig, so that definitions can be shared across a family of
SoCs or boards (a minimal illustrative fragment is sketched after this
summary).
* Looked at the gitdm tool; considering sending in a patch to list Linaro
people as belonging to Linaro instead of to whichever company gitdm
currently attributes them to.
* Finished kexec() blueprint specification, will get it reviewed next.
* Progress on RCU priority boosting; the additional energy-management and
scalability tasks look to be easily handled.
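As a purely illustrative aside on the Kconfig-fragment item above: a small
per-board fragment would list only the options that differ from the shared
defaults, and the configuration process then combines it with the full
Kconfig. The file name and option choices below are hypothetical, not part
of the actual proposal.

    # hypothetical per-board fragment, e.g. configs/vexpress.conf
    CONFIG_ARCH_VEXPRESS=y
    CONFIG_SERIAL_AMBA_PL011=y
    CONFIG_MMC_ARMMMCI=y
    # anything not listed here keeps the default from the full Kconfig
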
Regards,
Mounir
All,
The weekly report for the Linaro Infrastructure team may be found at:-
Status report: https://wiki.linaro.org/Platform/Infrastructure/Status/2010-12-23
Burndown chart: awaiting the production of new burndown charts.
There are currently 4 active Infrastructure-related blueprints from the maverick cycle (4 in the last report), with 9 work items in progress (8 last report) and 10 work items still to do (11 last report). These are to be moved into the natty cycle if still required.
* arm-m-validation-dashboard; 0 work items completed (1 last report); 4 in progress (3 last report); 6 to do (6 last report); 0 work items added (1 last report)
* arm-m-image-building-console; no change in status from last report; 3 in progress; 3 to do
* arm-m-automated-testing-framework; no change in status from last report; 1 in progress; 0 to do
* arm-m-testsuites-and-profilers; no change in status from last report; 1 in progress; 1 to do
For the natty cycle, the following lists the blueprints that are currently active or planned but not yet started. Currently there are 5 active blueprints (4 in the last report), with 9 work items in progress (8 last report), 43 work items to do (41 last report), 2 work items postponed in total (2 last report) and 0 work items added (1 added last report).
* other-linaro-n-improve-linaro-media-create: 8 work items completed in total (7 last week); 2 work items in progress (3 last week); 8 work items to do (8 last week); 1 work item added this week (1 last week).
* other-linaro-n-test-result-display-in-launch-control: 0 work items completed (0 last week); 1 work item in progress (1 last week); 10 work items to do (10 last week); 0 work items added (0 last week)
* other-linaro-n-patch-tracking: 0 work items completed (0 last week); 2 work items in progress (2 last week); 9 work items to do (9 last week)
* other-linaro-n-image-building: 2 work items in progress (2 last week); 5 work items to do (5 last week); 2 work items postponed (2 last week); 0 work items added (0 last week)
* other-linaro-n-continuous-integration: Not started - awaiting a Hudson build server (RT#42278).
* other-linaro-n-package-development-tools: Not started; 9 work items to do
* other-linaro-n-benchmark-reports-in-launch-control: 2 work items completed (2 last week); 2 work items in progress (2 last week); 2 work items to do (2 last week)
Specifics accomplished this week include:-
* Reports indicate reduced progress this week due to people being on leave.
* Work on a more comprehensive MX51 test plan than the headless one: https://wiki.linaro.org/Platform/QA/TestCases/IMX51EVK - adding SATA, backlight, touchscreen, keypad, etc.
* [ImproveLinaroMediaCreate] port check_device to Python: DONE.
* Submitted the hardware pack licence request to the TSC for approval.
Please let me know if you have any comments or questions.
Kind Regards,
Ian
Hi
The bug http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46883, filed
against GCC trunk, also happens with Linaro GCC 4.5.
My guess is that a patch backported from trunk into the Linaro
4.5 tree is causing this ICE.
This ICE does not happen on the upstream gcc-4.5 branch.
I haven't figured out the commit yet. Should you need a bug in the Linaro
bug tracker, I will be happy to file one.
Thanks
-Khem
Following a discussion we had on the Freescale BSP, I started a tree
at http://git.linaro.org/gitweb?p=people/arnd/imx.git;a=summary
that has the same contents as the tree on the Freescale git
server, but splits it into six branches at this time, though
the number should increase.
The current split is
* master -- an octopus merge of all the other branches, identical to
the BSP
381 patches total
1286 files changed, 882747 insertions(+), 2932 deletions(-)
* drv-mxc -- all drivers in the new drivers/mxc/ directory and new drivers
in drivers/char/, as these typically introduce new user ABIs
that need very careful review.
23 patches
189 files changed, 97215 insertions(+), 0 deletions(-)
* amd-gpu -- a single but huge driver for the GPU. As is normally the
case with GPU drivers, we can expect long discussions
before it will get considered for mainline.
4 patches
98 files changed, 278321 insertions(+), 0 deletions(-)
* ath -- another single driver that is rather large, for the ath6km
wifi controller. Split out because it is not owned by Freescale.
4 patches
169 files changed, 94561 insertions(+), 0 deletions(-)
* other-subsys -- device drivers for existing subsystems. These should
be largely uncontroversial, because they don't introduce
new ABIs and the code looks clean enough for a
straightforward inclusion through the respective subsystem
maintainers.
Someone still needs to go through these and split them
up by subsystem and separate new drivers from patches
to existing drivers where needed.
159 patches
445 files changed, 270318 insertions(+), 959 deletions(-)
* arch -- everything in the arch/arm directory; this will go through
review on the linux-arm-kernel mailing list.
This also needs to be split up further into smaller branches.
179 patches
378 files changed, 140275 insertions(+), 1991 deletions(-)
* external -- patches that look unrelated to the rest, and are probably
backports of patches that already went upstream.
9 patches
7 files changed, 37 insertions(+), 32 deletions(-)
Arnd
Hi,
If I want to build the system from scratch, what should I do? I mean, where can I get the Linaro deb binary packages? Where can I get the image creation tool? etc. Thank you.
B.R
Paul
Hi,
Due to the holiday season, many key people are on vacation at the moment.
For this reason the weekly Release Meeting has been cancelled. The next
meeting will be on Wednesday 5th January 2011.
Regards,
Jamie.
--
Linaro Release Manager
Hi,
I am working on the blueprint
https://blueprints.launchpad.net/linux-linaro/+spec/other-storage-performan….
Currently I am investigating performance for DMA vs PIO on eMMC.
Pros and cons for DMA on MMC:
+ Offloads the CPU
+ Fewer interrupts: a single interrupt per transfer, compared to
hundreds or even thousands
+ Power savings: DMA consumes less power than the CPU
- Less bandwidth / throughput compared to PIO-CPU
The reason for introducing double buffering in the MMC framework is to
address the throughput issue for DMA on MMC.
The assumption is that the CPU and DMA have higher throughput than the
MMC / SD-card.
My hypothesis is that the difference in performance between PIO mode
and DMA mode for MMC is due to the latency of preparing a DMA job.
If the next DMA job could be prepared while the current job is ongoing,
this latency would be reduced. The biggest part of preparing a DMA job
is cache maintenance.
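To make this concrete, here is a minimal sketch of the idea; the function
name and the way the mapped count is returned are made up for illustration
and are not the actual patches. The expensive part, the cache maintenance,
happens inside dma_map_sg(), so doing it for the next request while the
current transfer runs hides most of that latency.

    /*
     * Illustrative sketch only -- hypothetical names, not the actual patch
     * set.  The idea: hide the cost of cache maintenance by mapping the
     * scatterlist of the *next* request while the DMA job for the
     * *current* request is still running.
     */
    #include <linux/dma-mapping.h>
    #include <linux/mmc/core.h>
    #include <linux/mmc/host.h>
    #include <linux/scatterlist.h>

    /*
     * Called as soon as the next request is known, possibly while the
     * current transfer is still in flight.  dma_map_sg() performs the
     * cache clean/invalidate, which is the expensive part on ux500.
     */
    static int mmc_pre_dma_prepare(struct mmc_host *host,
                                   struct mmc_data *data, int *mapped_sgs)
    {
            enum dma_data_direction dir = (data->flags & MMC_DATA_WRITE) ?
                                          DMA_TO_DEVICE : DMA_FROM_DEVICE;

            *mapped_sgs = dma_map_sg(mmc_dev(host), data->sg,
                                     data->sg_len, dir);
            if (!*mapped_sgs)
                    return -ENOMEM;

            /*
             * When the DMA engine becomes free, the scatterlist is already
             * mapped and the next job can be submitted immediately.
             */
            return 0;
    }

A host driver would call something like this as soon as it knows the
following request, and unmap the scatterlist with dma_unmap_sg() once that
transfer has completed.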
In my case I run on the U5500 (mach-ux500), which has both L1 and L2
caches. The host MMC driver in use is the mmci driver (PL180).
I have done a hack in both the MMC framework and mmci in order to make
a proof of concept, and I have run IOZone to get measurements to test my
case.
The next step, if the results are promising, will be to clean up my
work and send out patches for review.
The DMAC in the ux500 supports two modes, LOG and PHY:
LOG - many logical channels are multiplexed on top of one physical channel
PHY - only one channel per physical channel
The LOG and PHY modes have different latencies, both HW- and SW-wise; one
could almost treat them as two different DMACs. To get wider test
coverage I have tested using both modes.
Summary of the results:
* It is optional for the MMC host driver to utilize the 2-buf
support; 2-buf in the framework requires no change to the host drivers
(a small sketch of this call-the-hook-only-if-present pattern follows
the list below).
* IOZone shows no performance hit on existing drivers* when adding 2-buf
to the framework but not to the host driver.
(* So far I have only tested one driver.)
* The performance gain for DMA using 2-buf is probably proportional to
the cache maintenance time:
the faster the card is, the more significant the cache maintenance
part becomes, and vice versa.
* For the U5500 with 2-buf, the DMA performance is:
Throughput: DMA vanilla vs DMA 2-buf
* read: +5-10 %
* write: +0-3 %
CPU load: CPU vs DMA 2-buf
* read, large data: 10-20 percentage points lower
* read, small data: same as PIO
* write: same load as PIO (why?)
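As a small aside on the "optional for the host driver" point above, the
pattern is simply to call a prepare hook only when the driver provides one.
The ops structure and hook name below are hypothetical (they are not in the
mainline struct mmc_host_ops); the sketch only illustrates why unconverted
drivers keep their current behaviour.

    #include <linux/mmc/core.h>
    #include <linux/mmc/host.h>

    /* Hypothetical hook -- not part of the mainline struct mmc_host_ops. */
    struct mmc_2buf_ops {
            int (*pre_dma_prepare)(struct mmc_host *host,
                                   struct mmc_data *data);
    };

    static void mmc_maybe_prepare_next(struct mmc_host *host,
                                       const struct mmc_2buf_ops *ops,
                                       struct mmc_data *next_data)
    {
            /*
             * The framework only calls the hook when the host driver
             * provides one, so unconverted drivers behave as before.
             */
            if (ops && ops->pre_dma_prepare && next_data)
                    ops->pre_dma_prepare(host, next_data);
    }
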
Here follow two of the measurements from IOZone comparing MMC with and
without double buffering. The rest you can find in the attached text
files.
=== Performance CPU compared with DMA vanilla kernel ===
Absolute diff: MMC-VANILLA-CPU -> MMC-VANILLA-DMA-LOG
KB  reclen  write  rewrite  read  reread  random-read  random-write
51200 4 -14 -8 -1005 -988 -679 -1
cpu: -0.0 -0.1 -0.8 -0.9 -0.7 +0.0
51200 8 -35 -34 -1763 -1791 -1327 +0
cpu: +0.0 -0.1 -0.9 -1.2 -0.7 +0.0
51200 16 +6 -38 -2712 -2728 -2225 +0
cpu: -0.1 -0.0 -1.6 -1.2 -0.7 -0.0
51200 32 -10 -79 -3640 -3710 -3298 -1
cpu: -0.1 -0.2 -1.2 -1.2 -0.7 -0.0
51200 64 +31 -16 -4401 -4533 -4212 -1
cpu: -0.2 -0.2 -0.6 -1.2 -1.2 -0.0
51200 128 +58 -58 -4749 -4776 -4532 -4
cpu: -0.2 -0.0 -1.2 -1.1 -1.2 +0.1
51200 256 +192 +283 -5343 -5347 -5184 +13
cpu: +0.0 +0.1 -1.2 -0.6 -1.2 +0.0
51200 512 +232 +470 -4663 -4690 -4588 +171
cpu: +0.1 +0.1 -4.5 -3.9 -3.8 -0.1
51200 1024 +250 +68 -3151 -3318 -3303 +122
cpu: -0.1 -0.5 -14.0 -13.5 -14.0 -0.1
51200 2048 +224 +401 -2708 -2601 -2612 +161
cpu: -1.7 -1.3 -18.4 -19.5 -17.8 -0.5
51200 4096 +194 +417 -2380 -2361 -2520 +242
cpu: -1.3 -1.6 -19.4 -19.9 -19.4 -0.6
51200 8192 +228 +315 -2279 -2327 -2291 +270
cpu: -1.0 -0.9 -20.8 -20.3 -21.0 -0.6
51200 16384 +254 +289 -2260 -2232 -2269 +308
cpu: -0.8 -0.8 -20.5 -19.9 -21.5 -0.4
=== Performance CPU compared with DMA with MMC double buffering ===
Absolute diff: MMC-VANILLA-CPU -> MMC-MMCI-2-BUF-DMA-LOG
KB  reclen  write  rewrite  read  reread  random-read  random-write
51200 4 -7 -11 -533 -513 -365 +0
cpu: -0.0 -0.1 -0.5 -0.7 -0.4 +0.0
51200 8 -19 -28 -916 -932 -671 +0
cpu: -0.0 -0.0 -0.3 -0.6 -0.2 +0.0
51200 16 +14 -13 -1467 -1479 -1203 +1
cpu: +0.0 -0.1 -0.7 -0.7 -0.2 -0.0
51200 32 +61 +24 -2008 -2088 -1853 +4
cpu: -0.3 -0.2 -0.7 -0.7 -0.2 -0.0
51200 64 +130 +84 -2571 -2692 -2483 +5
cpu: +0.0 -0.4 -0.1 -0.7 -0.7 +0.0
51200 128 +275 +279 -2760 -2747 -2607 +19
cpu: -0.1 +0.1 -0.7 -0.6 -0.7 +0.1
51200 256 +558 +503 -3455 -3429 -3216 +55
cpu: -0.1 +0.1 -0.8 -0.1 -0.8 +0.0
51200 512 +608 +820 -2476 -2497 -2504 +154
cpu: +0.2 +0.5 -3.3 -2.1 -2.7 +0.0
51200 1024 +652 +493 -818 -977 -1023 +291
cpu: +0.0 -0.1 -13.2 -12.8 -13.3 +0.1
51200 2048 +654 +809 -241 -218 -242 +501
cpu: -1.5 -1.2 -16.9 -18.2 -17.0 -0.2
51200 4096 +482 +908 -80 +82 -154 +633
cpu: -1.4 -1.2 -19.1 -18.4 -18.6 -0.2
51200 8192 +643 +810 +199 +186 +182 +675
cpu: -0.8 -0.7 -19.8 -19.2 -19.5 -0.7
51200 16384 +684 +724 +275 +323 +269 +724
cpu: -0.6 -0.7 -19.2 -18.6 -19.8 -0.2