Following a discussion we had on the Freescale BSP, I started a tree
at http://git.linaro.org/gitweb?p=people/arnd/imx.git;a=summary
that has the same contents as the tree on the Freescale git
server, but splits them into six branches at this time, though
the number should increase.
The current split is
* master -- an octopus merge of all the other branches, identical to
the BSP
381 patches total
1286 files changed, 882747 insertions(+), 2932 deletions(-)
* drv-mxc -- all drivers in the new drivers/mxc/ directory and new drivers
in drivers/char/, as these typically introduce new user ABIs
that need very careful review.
23 patches
189 files changed, 97215 insertions(+), 0 deletions(-)
* amd-gpu -- a single but huge driver for the GPU. As is normally the
case with GPU drivers, we can expect long discussions
before it will get considered for mainline
4 patches
98 files changed, 278321 insertions(+), 0 deletions(-)
* ath -- another single driver that is rather large, for the ath6km
wifi controller. Split out because it is not owned by Freescale.
4 patches
169 files changed, 94561 insertions(+), 0 deletions(-)
* other-subsys -- device drivers for existing subsystems. These should
be largely uncontroversial, because they don't introduce
new ABIs and the code looks clean enough for
straightforward inclusion through the respective
subsystem maintainers.
Someone still needs to go through these and split them
up by subsystem and separate new drivers from patches
to existing drivers where needed.
159 patches
445 files changed, 270318 insertions(+), 959 deletions(-)
* arch -- everything in the arch/arm directory; this will go through
review on the linux-arm-kernel mailing list.
This also needs to be split up further into smaller branches.
179 patches
378 files changed, 140275 insertions(+), 1991 deletions(-)
* external -- patches that look unrelated to the rest, and are probably
backported from patches that already went upstream.
9 patches
7 files changed, 37 insertions(+), 32 deletions(-)
Arnd
Hi,
If I want to build the system from scratch, what should I do? I mean, where can I get the Linaro deb binary packages? Where can I get the image creator tool? etc. Thank you.
B.R
Paul
Hi,
Due to the holiday season many key people are on vacation at the moment.
For this reason the weekly Release Meeting has been cancelled. The next
meeting will be Wednesday 5th January 2011.
Regards,
Jamie.
--
Linaro Release Manager
Hi,
I am working on the blueprint
https://blueprints.launchpad.net/linux-linaro/+spec/other-storage-performan….
Currently I am investigating performance for DMA vs PIO on eMMC.
Pros and cons for DMA on MMC
+ Offloads the CPU
+ Fewer interrupts: a single interrupt per transfer, compared to
hundreds or even thousands with PIO
+ Power savings: DMA consumes less power than the CPU
- Less bandwidth / throughput compared to CPU-driven PIO
The reason for introducing double buffering in the MMC framework is to
address the throughput issue for DMA on MMC.
The assumption is that the CPU and DMA have higher throughput than the
MMC / SD-card.
My hypothesis is that the difference in performance between PIO mode
and DMA mode for MMC is due to the latency of preparing a DMA job.
If the next DMA job could be prepared while the current one is ongoing,
this latency would be reduced. The biggest part of preparing a DMA job
is maintenance of caches.
In my case I run on U5500 (mach-ux500) which has both L1 and L2
caches. The host mmc driver in use is the mmci driver (PL180).
I have done a hack in both the MMC framework and mmci in order to make
a proof of concept, and I have run IOZone to get measurements that
support my case.
The next step, if the results are promising will be to clean up my
work and send out patches for review.
The DMAC in ux500 supports two modes, LOG and PHY.
LOG - many logical channels are multiplexed on top of one physical channel
PHY - only one channel per physical channel
LOG and PHY mode have different latencies, both HW- and SW-wise; one
could almost treat them as "two different DMACs". To get wider test
coverage I have tested using both modes.
Summary of the results.
* It is optional for the mmc host driver to utilize the 2-buf
support. 2-buf in the framework requires no changes in the host drivers.
* IOZone shows no performance hit on existing drivers* when adding 2-buf
to the framework but not to the host driver.
(* So far I have only tested one driver)
* The performance gain for DMA using 2-buf is probably proportional to
the cache maintenance time.
The faster the card, the more significant the cache maintenance
part becomes, and vice versa.
* For U5500 with 2-buf, performance for DMA is:
Throughput: DMA vanilla vs DMA 2-buf
* read: +5-10 %
* write: +0-3 %
CPU load: CPU vs DMA 2-buf
* read, large data: 10-20 percentage points lower
* read, small data: same as PIO
* write: same load as PIO (why?)
Here follow two of the IOZone measurements, comparing MMC with and
without double buffering. You can find the rest in the attached text
files.
=== Performance CPU compared with DMA vanilla kernel ===
Absolute diff: MMC-VANILLA-CPU -> MMC-VANILLA-DMA-LOG
random random
KB reclen write rewrite read reread read write
51200 4 -14 -8 -1005 -988 -679 -1
cpu: -0.0 -0.1 -0.8 -0.9 -0.7 +0.0
51200 8 -35 -34 -1763 -1791 -1327 +0
cpu: +0.0 -0.1 -0.9 -1.2 -0.7 +0.0
51200 16 +6 -38 -2712 -2728 -2225 +0
cpu: -0.1 -0.0 -1.6 -1.2 -0.7 -0.0
51200 32 -10 -79 -3640 -3710 -3298 -1
cpu: -0.1 -0.2 -1.2 -1.2 -0.7 -0.0
51200 64 +31 -16 -4401 -4533 -4212 -1
cpu: -0.2 -0.2 -0.6 -1.2 -1.2 -0.0
51200 128 +58 -58 -4749 -4776 -4532 -4
cpu: -0.2 -0.0 -1.2 -1.1 -1.2 +0.1
51200 256 +192 +283 -5343 -5347 -5184 +13
cpu: +0.0 +0.1 -1.2 -0.6 -1.2 +0.0
51200 512 +232 +470 -4663 -4690 -4588 +171
cpu: +0.1 +0.1 -4.5 -3.9 -3.8 -0.1
51200 1024 +250 +68 -3151 -3318 -3303 +122
cpu: -0.1 -0.5 -14.0 -13.5 -14.0 -0.1
51200 2048 +224 +401 -2708 -2601 -2612 +161
cpu: -1.7 -1.3 -18.4 -19.5 -17.8 -0.5
51200 4096 +194 +417 -2380 -2361 -2520 +242
cpu: -1.3 -1.6 -19.4 -19.9 -19.4 -0.6
51200 8192 +228 +315 -2279 -2327 -2291 +270
cpu: -1.0 -0.9 -20.8 -20.3 -21.0 -0.6
51200 16384 +254 +289 -2260 -2232 -2269 +308
cpu: -0.8 -0.8 -20.5 -19.9 -21.5 -0.4
=== Performance CPU compared with DMA with MMC double buffering ===
Absolute diff: MMC-VANILLA-CPU -> MMC-MMCI-2-BUF-DMA-LOG
random random
KB reclen write rewrite read reread read write
51200 4 -7 -11 -533 -513 -365 +0
cpu: -0.0 -0.1 -0.5 -0.7 -0.4 +0.0
51200 8 -19 -28 -916 -932 -671 +0
cpu: -0.0 -0.0 -0.3 -0.6 -0.2 +0.0
51200 16 +14 -13 -1467 -1479 -1203 +1
cpu: +0.0 -0.1 -0.7 -0.7 -0.2 -0.0
51200 32 +61 +24 -2008 -2088 -1853 +4
cpu: -0.3 -0.2 -0.7 -0.7 -0.2 -0.0
51200 64 +130 +84 -2571 -2692 -2483 +5
cpu: +0.0 -0.4 -0.1 -0.7 -0.7 +0.0
51200 128 +275 +279 -2760 -2747 -2607 +19
cpu: -0.1 +0.1 -0.7 -0.6 -0.7 +0.1
51200 256 +558 +503 -3455 -3429 -3216 +55
cpu: -0.1 +0.1 -0.8 -0.1 -0.8 +0.0
51200 512 +608 +820 -2476 -2497 -2504 +154
cpu: +0.2 +0.5 -3.3 -2.1 -2.7 +0.0
51200 1024 +652 +493 -818 -977 -1023 +291
cpu: +0.0 -0.1 -13.2 -12.8 -13.3 +0.1
51200 2048 +654 +809 -241 -218 -242 +501
cpu: -1.5 -1.2 -16.9 -18.2 -17.0 -0.2
51200 4096 +482 +908 -80 +82 -154 +633
cpu: -1.4 -1.2 -19.1 -18.4 -18.6 -0.2
51200 8192 +643 +810 +199 +186 +182 +675
cpu: -0.8 -0.7 -19.8 -19.2 -19.5 -0.7
51200 16384 +684 +724 +275 +323 +269 +724
cpu: -0.6 -0.7 -19.2 -18.6 -19.8 -0.2
Hi,
I want to build the linaro-netbook filesystem from *sources*, as I need
a filesystem built without VFP support.
I checked the wiki and other links but could not find a related
document.
Can anyone please point me to directions on how to do this?
--
Thanks
Amit Mahajan
All,
The weekly report for the Linaro Infrastructure team may be found at:-
Status report: https://wiki.linaro.org/Platform/Infrastructure/Status/2010-12-16
Burndown chart: This link is awaiting the production of new burndown charts.
The Infrastructure-related blueprints from the maverick cycle, of which there are currently 4 active ones (4 in the last report), show 8 work items in progress (8 last report) and 11 work items to undertake (11 last report). These are to be moved into the natty cycle if still required.
* arm-m-validation-dashboard; 1 work item completed; 3 in progress; 6 to do (7 last report); 1 work item added
* arm-m-image-building-console; no change in status from last report; 3 in progress; 3 to do
* arm-m-automated-testing-framework; no change in status from last report; 1 in progress; 0 to do
* arm-m-testsuites-and-profilers; no change in status from last report; 1 in progress; 1 to do
In the natty cycle, the following lists the currently active Blueprints, plus Blueprints planned but not started. Currently there are 4 active Blueprints (4 in the last report), showing 8 work items in progress (8 last report), 41 work items to undertake (41 last report), 0 work items postponed (0 last report) and 1 work item added (8 items added last report).
* other-linaro-n-improve-linaro-media-create: 7 work items completed in total (5 last week); 3 work items in progress (4 last week); 8 work items to do (8 last week); 1 work item added this week (2 last week).
* other-linaro-n-test-result-display-in-launch-control: 0 work items completed (0 last week); 1 work item in progress (1 last week); 10 work items to do (10 last week); 0 work items added (0 last week)
* other-linaro-n-patch-tracking: 0 work item completed (0 last week); 2 work items in progress (2 last week); 9 work items to do (9 last week)
* other-linaro-n-image-building: 2 work items in progress (2 last week); 5 work items to do (5 last week); 2 work items postponed (2 last week); 0 work items added (0 last week)
* other-linaro-n-continuous-integration: Not started - awaiting a Hudson build server (RT#42278).
* other-linaro-n-package-development-tools: Not Started; 9 work items to do
Specifics accomplished this week include:-
* Set up hardware in Cambridge for automated testing; worked on images to boot a stable environment, and be used to flash the test image
* Progress on blueprint tracking - Code changes seem to be sufficient to get burndown charts going again
* other-linaro-n-image-building: "4.6 Multiple hwpacks": DONE
* ImproveLinaroMediaCreate: port setup_sizes, calculatesize and get_device_by_id to python: DONE
* Working on the extended test plan for ux500, three new test cases added to the plan this week.
Please let me know if you have any comments or questions.
Kind Regards,
Ian
I can't get my beagle xM to boot recent dailies at all. I used this
command line:
sudo ./linaro-media-create --dev beagle --rootfs ext3 --mmc /dev/sdb \
--binary /home/mwh/Downloads/linaro-natty-headless-tar-20101214-0.tar.gz \
--hwpack /home/mwh/Downloads/hwpack_linaro-omap3_20101215-0_armel_supported.tar.gz \
--hwpack-force-yes
(although I've tried a few others over the past few days)
The serial console output gets this far:
4686489 bytes read
## Booting kernel from Legacy Image at 80000000 ...
Image Name: Linux
Image Type: ARM Linux Kernel Image (uncompressed)
Data Size: 3704884 Bytes = 3.5 MiB
Load Address: 80008000
Entry Point: 80008000
Verifying Checksum ... OK
## Loading init Ramdisk from Legacy Image at 81600000 ...
Image Name: initramfs
Image Type: ARM Linux RAMDisk Image (uncompressed)
Data Size: 4686425 Bytes = 4.5 MiB
Load Address: 00000000
Entry Point: 00000000
Verifying Checksum ... OK
Loading Kernel Image ... OK
OK
and then just hangs.
A card I burnt a few months ago still boots fine. I'm starting to
suspect my 'scratch' microSD card has given up the ghost, but I thought
I'd ask here before I run out and buy another one.
Cheers,
mwh
PS: can you buy non-crappy microSD cards in multi-packs anywhere?
Greetings,
Enclosed you'll find a link to the agenda, notes and actions from the
Linaro User Platforms Weekly Status meeting dated December 15th held in
#linaro-meeting on irc.freenode.net at 13:00 UTC.
https://wiki.linaro.org/Platform/UserPlatforms/WeeklyStatus/2010-12-15
Status Summary:
- First patch for the Linaro 11.05 Cairo GLES effort landed upstream (2010-12-15)
- Autocolorspace updates in progress, following a request from the
gstreamer team after the initial gstreamer patches
- Successful proof of concept for a NEON-optimized Android JPEG decoder on Linux
- MeeGo packaging for the graphics WG in progress: 18 packaged (41
identified packages total, 12 optional)
Regards,
Tom (tgall_foo)
User Platforms Team
"We want great men who, when fortune frowns will not be discouraged."
- Colonel Henry Knox
w) tom.gall att linaro.org
w) tom_gall att vnet.ibm.com
h) tom_gall att mac.com