The sched_mc feature was originally designed to improve the power
consumption of multi-package systems, and several architecture functions
are available to tune the topology and the scheduler's parameters when
the scheduler rebuilds the sched_domain hierarchy (i.e. when the
sched_mc_power_savings level changes). This patch series is an attempt
to improve the power consumption of dual and quad Cortex-A9 systems when
sched_mc_power_savings is set to 2. The policy of the following patches
is to accept up to 4 threads (configurable) in the run queue of a core
before starting to load balance when the cpu runs at low frequencies,
but to accept only 1 thread at high frequencies, which is the normal
behaviour. The goal is to use only one cpu in light-load situations and
both cpus in heavy-load situations.
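To make the policy concrete, here is a minimal sketch of the decision it
describes (the helpers below are hypothetical illustrations, not the
actual patch code):

/* Illustrative only: frequency-dependent task threshold.
 * cpu_is_at_low_freq() and cpu_rq_nr_running() are hypothetical
 * helpers, not functions from the patches. */
static unsigned int packing_threshold(int cpu)
{
	if (cpu_is_at_low_freq(cpu))
		return 4;	/* accept up to 4 threads (configurable) */
	return 1;		/* normal behaviour at high frequencies */
}

static bool should_start_load_balance(int cpu)
{
	/* pull/push tasks only once the run queue exceeds the
	 * frequency-dependent threshold */
	return cpu_rq_nr_running(cpu) > packing_threshold(cpu);
}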
Patches [1-3] modify the ARM cpu topology according to the
sched_mc_power_savings value and the Cortex id.
Patch [4] enables the ARCH_POWER feature of the scheduler.
Patch [5] adds an ARCH_POWER function for the ARM platform.
Patches [6-7] modify the cpu_power of the CA-9 according to the
sched_mc_power_savings level and the current frequency. The main goal is
to increase the capacity of a core when using a low cpu frequency in
order to pull tasks onto this core. Note that this behaviour is not
really advised, but it can be seen as an intermediate step between the
use of cpu hotplug (which is not a power saving feature) and a new load
balancer which will take low-load situations on dual cores into
account.
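The hook this builds on is the scheduler's arch_scale_freq_power()
(enabled by patches [4-5]). Roughly, an override of the following shape
can report a boosted cpu_power at low frequency; note that the
low-frequency test and the factor of 2 are made up for illustration and
are not taken from the patches:

unsigned long arch_scale_freq_power(struct sched_domain *sd, int cpu)
{
	unsigned long power = SCHED_POWER_SCALE;	/* 1024 = nominal */

	/* Advertise extra capacity at low frequency so the load
	 * balancer keeps packing tasks on this core.
	 * cpu_is_at_low_freq() is a hypothetical helper. */
	if (sched_mc_power_savings == 2 && cpu_is_at_low_freq(cpu))
		power *= 2;

	return power;
}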
Patch [8] ensures that cpu0 is used in priority when only one CPU is
running.
Patch [9] adds some debugfs interfaces for test purposes.
Patch [10] ensures that the cpu_power will be updated periodically.
Patch [11] fixes an issue around the triggering of the idle load balance
(ilb).
TODO list:
- remove the useless start of the ilb when the core has capacity.
- add a method (DT, sysfs, ...) to set the threshold for using 1 or 2
cpus on a dual CA-9.
- irq balancing
The tests hereafter have been done on a u8500 with a linaro-3.1 kernel.
They check that there is no obvious loss of performance when
sched_mc=2.
sysbench --test=cpu --num-threads=12 --max-time=20 run
Test execution summary:     sched_mc=0   sched_mc=2   cpu hotplug
total number of events:     665          664          336
per-request statistics:
  min:                      92.68ms      70.53ms      618.89ms
  avg:                      361.75ms     361.38ms     725.29ms
  max:                      421.08ms     420.73ms     840.74ms
  approx. 95 percentile:    402.28ms     390.53ms     760.17ms
sysbench --test=threads --thread-locks=9 --num-threads=12 --max-time=20 run
Test execution summary:     sched_mc=0   sched_mc=2   cpu hotplug
total number of events:     10000        10000        3129
per-request statistics:
  min:                      1.62ms       1.70ms       13.16ms
  avg:                      22.23ms      21.87ms      76.77ms
  max:                      153.52ms     133.99ms     173.82ms
  approx. 95 percentile:    54.12ms      52.65ms      136.32ms
sysbench --test=threads --thread-locks=2 --num-threads=3 --max-time=20 run
Test execution summary:     sched_mc=0   sched_mc=2   cpu hotplug
total number of events:     10000        10000        10000
per-request statistics:
  min:                      1.38ms       1.38ms       5.70ms
  avg:                      4.67ms       5.37ms       11.85ms
  max:                      36.84ms      32.42ms      32.58ms
  approx. 95 percentile:    14.34ms      12.89ms      21.30ms
cyclictest -q -t -D 20
Only one cpu is used during this test when sched_mc=2, whereas both
cpus are used when sched_mc=0.
Test execution summary:     sched_mc=0   sched_mc=2   cpu hotplug
Avg, Max:                   15, 434      19, 2145     17, 3556
Avg, Max:                   14, 104      19, 1686     17, 3593
Regards,
Vincent
This RFC version of the patch set is intended to share the current work
on providing a thermal solution using the Linux thermal infrastructure.
The closest driver which has the same features and does not use the ACPI
layer is drivers/platform/x86/acerhdf.c.
For the ARM world there is no clear place for such files, so currently I
have placed the temperature sensor driver and a binding layer for the
cooling device, thermal zone and core thermal interfaces inside the
staging directory.
Feel free to comment on the implementation, the directory structure and
the shortcomings.
Amit Daniel Kachhap (4):
ARM: Exynos4: Add thermal sensor driver for samsung exynos4 platform.
ARM: Exynos4: Add thermal sensor driver platform device support
ARM: Exynos4: Enable thermal sensor driver for origen board
ARM: Exynos4: Add thermal interface support for linux thermal layer
arch/arm/mach-exynos4/Kconfig | 5 +
arch/arm/mach-exynos4/Makefile | 3 +-
arch/arm/mach-exynos4/dev-tmu.c | 71 +++
arch/arm/mach-exynos4/include/mach/exynos4-tmu.h | 75 ++++
arch/arm/mach-exynos4/include/mach/irqs.h | 3 +
arch/arm/mach-exynos4/include/mach/map.h | 1 +
arch/arm/mach-exynos4/include/mach/regs-tmu.h | 58 +++
.../mach-exynos4/include/mach/thermal_interface.h | 26 ++
arch/arm/mach-exynos4/mach-origen.c | 10 +
arch/arm/plat-samsung/include/plat/devs.h | 2 +
drivers/staging/Kconfig | 2 +
drivers/staging/Makefile | 1 +
drivers/staging/thermal_exynos4/Kconfig | 12 +
drivers/staging/thermal_exynos4/Makefile | 5 +
drivers/staging/thermal_exynos4/sensor/Kconfig | 14 +
drivers/staging/thermal_exynos4/sensor/Makefile | 4 +
.../thermal_exynos4/sensor/exynos4210_tmu.c | 465 ++++++++++++++++++++
.../staging/thermal_exynos4/thermal_interface.c | 382 ++++++++++++++++
18 files changed, 1138 insertions(+), 1 deletions(-)
create mode 100644 arch/arm/mach-exynos4/dev-tmu.c
create mode 100644 arch/arm/mach-exynos4/include/mach/exynos4-tmu.h
create mode 100644 arch/arm/mach-exynos4/include/mach/regs-tmu.h
create mode 100644 arch/arm/mach-exynos4/include/mach/thermal_interface.h
create mode 100644 drivers/staging/thermal_exynos4/Kconfig
create mode 100644 drivers/staging/thermal_exynos4/Makefile
create mode 100644 drivers/staging/thermal_exynos4/sensor/Kconfig
create mode 100644 drivers/staging/thermal_exynos4/sensor/Makefile
create mode 100644 drivers/staging/thermal_exynos4/sensor/exynos4210_tmu.c
create mode 100644 drivers/staging/thermal_exynos4/thermal_interface.c
Hi,
I am the author of the OMAP display driver, and while developing it
I've often felt that there's something missing in Linux's display area.
I've been planning to write a post about this for a few years already,
but I never got around to it. So here goes at last!
---
First I want to (try to) briefly describe what we have on OMAP, to give
a bit of background for my point of view, and to have an example HW.
The display subsystem (DSS) hardware on OMAP handles only showing
pixels on a display, so it doesn't contain anything that produces
pixels, like 3D units or accelerated copying. All it does is fetch
pixels from SDRAM, possibly apply some modifications to them (color
format conversions etc.), and output them to a display.
The hardware has multiple overlays, which are like hardware windows.
They fetch pixels from SDRAM, and output them in a certain area on the
display (possibly with scaling). Multiple overlays can be composited
into one output.
So we may have something like this, where all overlays read pixels from
separate areas in memory, and all overlays go to the LCD display:
.-----.       .------.          .------.
| mem |------>| ovl0 |-----.--->| LCD  |
'-----'       '------'     |    '------'
.-----.       .------.     |
| mem |------>| ovl1 |-----|
'-----'       '------'     |
.-----.       .------.     |    .------.
| mem |------>| ovl2 |-----'    |  TV  |
'-----'       '------'          '------'
The LCD display can be a rather simple one, like a standard monitor or
a simple panel directly connected to the parallel RGB output, or a more
complex one. A complex panel needs something more than just
turn-it-on-and-go. This may involve sending and receiving messages
between the OMAP and the panel, but more generally, there's a need for
custom code that handles the particular panel. And the complex panel is
not necessarily a panel at all; it may be a buffer chip between the
OMAP and the actual panel.
The software side can be divided into three parts: the lower level
omapdss driver, the lower level panel drivers, and higher level drivers
like omapfb, v4l2 and omapdrm.
The omapdss driver handles the OMAP DSS hardware, and offers a kernel
internal API which the higher level drivers use. The omapdss does not
know anything about fb or drm, it just offers core display services.
The panel drivers handle particular panels/chips. A panel driver may be
very simple in the case of a conventional display, basically doing
pretty much nothing, or a bigger piece of code that handles
communication with the panel.
The higher level drivers handle buffers and tell omapdss things like
where to find the pixels, what size the overlays should be, and use the
omapdss API to turn displays on/off, etc.
---
There are two things that I'm proposing to improve the Linux display
support:
First, there should be a bunch of common video structs and helpers that
are independent of any higher level framework. Things like video
timings, mode databases, and EDID seem to be implemented multiple times
in the kernel. But there shouldn't be anything in them that depends on
any particular display framework, so they could be implemented just
once and all the frameworks could use them.
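As a strawman, this is the kind of framework-neutral struct I have in
mind (the field set is just a sketch, loosely modelled on what omapdss
uses internally):

/* Strawman: a video timing description that fb, drm and v4l2 could all
 * consume, with one shared mode database and EDID parser built around
 * it. */
struct video_timings {
	u32 pixel_clock;	/* kHz */
	u16 x_res, y_res;	/* active width/height in pixels */
	u16 hfp, hbp, hsw;	/* horiz. front porch, back porch, sync width */
	u16 vfp, vbp, vsw;	/* vert. front porch, back porch, sync width */
};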
Second, I think there could be use for a common low level display
framework. Currently the lower level code (display HW handling, etc.)
and the higher level code (buffer management, policies, etc.) usually
seem to be tied together, as in the fb framework or drm. Granted, the
frameworks do not force that, and for OMAP we indeed have omapfb and
omapdrm using the lower level omapdss. But I don't see that this is
anything OMAP specific as such.
I think the lower level framework could have components something like
this (the naming is OMAP oriented, of course):
overlay - a hardware "window", gets pixels from memory, possibly does
format conversions, scaling, etc.
overlay compositor - composes multiple overlays into one output,
possibly doing things like translucency.
output - gets the pixels from the overlay compositor, and sends them
out according to particular video timings when using a conventional
video interface, or via any other means when using non-conventional
video buses like DSI command mode.
display - handles an external display. For conventional displays this
wouldn't do much, but for complex ones it does whatever is needed by
that particular display.
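To sketch what the component interfaces might look like (all names
below are invented for illustration; this is not an existing API):

struct overlay_ops {
	int (*set_buffer)(struct overlay *ovl, dma_addr_t paddr);
	int (*setup)(struct overlay *ovl, const struct overlay_info *info);
	int (*enable)(struct overlay *ovl);
	void (*disable)(struct overlay *ovl);
};

struct display_ops {
	int (*enable)(struct display *d);
	void (*disable)(struct display *d);
	int (*update)(struct display *d);	/* manual-update displays, e.g. DSI command mode */
	int (*get_timings)(struct display *d, struct video_timings *t);
};

A higher level driver (fb, drm, v4l2) would then deal only with buffers
and policy, and drive the hardware through ops like these.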
This is something similar to what DRM has, I believe. The biggest
difference is that the display can be a full blown driver for a complex
piece of HW.
This kind of low level framework would be good for two purposes: 1) I
think it's a good division generally, having the low level HW driver
separate from the higher level buffer/policy management and 2) fb, drm,
v4l2 or any possible future framework could all use the same low level
framework.
---
Now, I'm quite sure the above framework could work quite well with any
OMAP-like hardware with unified memory (i.e. the video buffers are in
SDRAM), where the 3D chips and similar components are separate. But
what I'm not sure about is how the desktop world's gfx cards change
things. Most probably all the above components can be found there as
well in some form, but are there interdependencies between 3D/buffer
management/something else and the video output side?
This was a very rough and quite short proposal, but I'm happy to improve
and extend it if it's not totally shot down.
Tomi
Hello,
The OpenID plugin has been disabled on ci.linaro.org due to a
vulnerability detected in the plugin.
Hence the Single Sign-On option using your Launchpad ID won't work
until this gets fixed.
If you need to use the ci.linaro.org services and need a way to log in,
please create a new user on ci.linaro.org and mail me the details; I
will then give you appropriate access to the service.
---------- Forwarded message ----------
From: Paul Sokolovsky <paul.sokolovsky(a)linaro.org>
Date: Fri, Oct 28, 2011 at 4:09 PM
Subject: FYI: OpenID auth disabled on android-build.linaro.org
To: linaro-android <linaro-android(a)linaro.org>, Alexander Sack <
asac(a)linaro.org>, Danilo Šegan <danilo.segan(a)linaro.org>, Infrastructure <
infrastructure(a)linaro.org>
Hello,
Due to a suspected security issue, OpenID auth was disabled on
android-build.linaro.org. OpenID was never recommended as an auth means
there; username/password auth was recommended instead, so this change
should not affect users. Please let me know if you have any issues.
The ETA for it being re-enabled is not yet known; Danilo Šegan is
tracking this issue.
--
Best Regards,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
--
Thanks and Regards,
Deepti
Infrastructure Team Member, Linaro Platform Teams
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog
Hi,
Let me forward this to more people. :)
I'm still gathering the relevant information for this, so if anyone has
ideas on it, you're welcome to reply to me!
This is the current subscriber list:
Bernhard Rosenkraenzer
Dan Trevino
Deepak Saxena
Frans Gifford
Grant Likely
John Stultz
Kwanghyun LA
Marcus Lorentzon
Mathieu Poirier
Patrik Ryd
Sangwook Lee
Tony Mansson
Yejin Moon
Zach Pfeffer
Thank you all!
BR
Botao Sun
---------- Forwarded message ----------
From: Tony Mansson <tony.mansson(a)linaro.org>
Date: Fri, Oct 28, 2011 at 2:38 AM
Subject: Device Tree for Android
To: Deepak Saxena <dsaxena(a)linaro.org>, Grant Likely <
grant.likely(a)linaro.org>
Cc: Botao Sun <Botao.Sun(a)linaro.org>
Hello,
Please note that there will be a session at Connect titled "Device Tree for
Android" headed by Botao Sun from the Android Platform Team.
https://blueprints.launchpad.net/linaro-android/+spec/linaro-platforms-lc4.…
At the time of writing the session is scheduled for Tuesday at 10:00. Your
presence would be appreciated.
BR,
Tony Mansson
On 28.10.2011 17:46, Jeremiah Foster wrote:
>>>
>>> Android's Linux kernels are supported (maintained?) by Linaro.
>>
>> With my Linaro hat on I must object. Depending on what you meant the statement above is either highly inaccurate or simply untrue.
>
> Hence the question mark. :)
I think what I originally meant is that we don't focus solely on
Android. "Improved" is the word I would use; it implies neither support
nor maintenance, as we sit between the vendors (which are also part of
Linaro) and the upstream kernel community (which we are a part of), and
we have no control over either side or their actions. As to what we do,
check our FAQ (http://www.linaro.org/faqs/) and read on.
>> Android kernel situation is complicated and varies per board/SoC. What Linaro does is try to upstream and unify the kernel for Linaro member companies SoCs.
>
> What does that mean in practice?
Disclaimer: I'm not a kernel developer. I have experience in the
non-Intel part of the world, but I'm not the sort of person with
up-to-date hands-on experience. For that, please look at the traffic on
linaro-dev(a)lists.linaro.org and at our patchwork instance at
patches.linaro.org. You can see what kind of patches we push upstream
and whether they have landed yet.
As for your question, read on.
A lot of ARM devices have a BSP kernel (Android or not) that is
prepared by some 3rd party (sometimes also by the vendor themselves, if
the device is a clone of the reference design), and that kernel is
generally not pushed upstream.
This affects, by far, the vast majority of devices out there (I'd say
that nothing gets pushed by those companies simply because their work
mode does not require such a step; we are working on educating them
about the benefits of working both towards products and towards a
common code base).
The ARM tree in the upstream kernel is, again by far, the largest of
all the architectures. If I remember correctly it is in fact larger
than *all* the other trees combined. The reasons for this are
complicated but can generally be simplified to code duplication between
the different devices and greater diversity in the actual hardware as
compared to other platforms.
To get ARM Linux healthy we need to reduce that clutter and make sure
that support for the latest and greatest hardware is upstream quickly
and the code is being actively maintained by everyone.
>> This is far from finished and uniform. The "BSP" kernel that hardware vendors provide is not supported by Linaro and in fact often contains code that cannot go upstream.
>
> What does it use this proprietary code for? To know the APIs or to get other hardware interface info? Isn't that a little risky? Won't proprietary, and potentially patented IP leak into the Linaro work? (Not that I believe in IP.)
The term proprietary is a bit misleading; the code IS released as GPL.
It is simply there to support some parts (often userspace or
"firmware") that are not open sourced and cannot be, for all practical
considerations.
As for the patches in general, there are different reasons why they are
not suitable for being proposed and included upstream:
1) Shabby code, written against old trees, copy-pasted from another
similar device: maintenance hell. This is, by my unqualified judgment,
the vast majority of the problem.
2) Code that has no good place in the kernel just yet because the
kernel interfaces are insufficient for the extremely complicated world
of ARM SoCs. Off the top of my, unqualified, head: power management,
memory management, everything related to graphics, and probably many
more. Here the reason for not being upstream is that there is no
consensus on how to do something (how to support a certain class of SoC
components) in a way that could be applied to many vendors and to the
non-ARM world as well. The people who write the BSP cannot solve that
problem, so they just implement their own solution to meet the
deadline. Such code is often very good, but there are many similar
solutions that are quite nontrivial to merge into one sensible
codebase. One such example is memory management, where we have no less
than 3 or 4 competing interfaces from various companies, and there is a
working group inside Linaro and the greater Linux community that tries
to solve this problem.
3) Bits that enable proprietary userspace drivers. The reasons are
obvious. This can relate to lots of different things, not only graphics
as people often think. IP and a software edge (optimizations that make
otherwise identical hardware perform better than the competition) are
probably a big motivation here. IP protection is not only used as in
"don't steal our stuff" but rather "hey, with this being binary it is
harder to prove that we violate a specific patent you have". In
retrospect this is a thing those companies obviously need. Just look at
how many Android handset vendors pay, for example, Microsoft for
patents that allegedly apply there. The world of graphics is riddled
with patents, and I'm sure that a big money-laden hardware vendor is a
good target for whoever owns the patents.
>> Linaro has several trees, including a grand unification tree that tries to support all the member companies chips in one tree (and one binary, thanks to device trees) but this effort is years away (my personal estimate, I don't speak for the organization). In addition we have several trees for normal/androidized kernel for each board. In the latest 2011.10 release hardware was not supported in 100% on any board that I'm aware of.
>>
>> Having said that the term "supported" seems inappropriate to me. We do work on those boards though.
>
> How would you define it?
By "work" I meant "we are *working* on making the kernel and userspace
on those boards better in each release". Better is shared amongst:
1) More patches landed upstream, thus less delta.
2) Less duplication within the kernel (better code), more device tree
usage, closer to having one kernel binary that supports several
different boards.
3) Better power management, stability, performance, more features, bells
and whistles.
4) Less delta from the android variant to the normal variant. More
discussion and more consensus on how to join the two worlds.
And let's not forget, my own personal favorite, more validation. The
code is tested both manually and automatically, and the scope, coverage
and quality are pushed forward each time.
>>> Anything that runs Android can run GNU/Linux.
>>
>> This is a gross oversimplification IMHO. You usually get androidized BSP kernel from a few months/years ago with binary parts that have no corresponding source code. Good luck booting vanilla kernel there.
>
> But it appears to me that all the official boards that are targets for Linaro can run a vanilla kernel, is that not the case? If not, what BSP stuff are you referring to - graphics acceleration?
No, I don't think that is the case. A quick glance at
http://git.linaro.org/gitweb will show you how many trees we have.
Except for the explicit upstream trees that Nico maintains, none of the
important changes in the other trees are upstream today (at least not
yet). Even the various Linaro kernels (which are _not_ the BSP kernels)
fail to work sensibly on all of the boards today. Next-gen boards are
usually the ones with the weakest support (although that is rapidly
changing, thanks to what we are doing). Often the most primitive board
features work (like, it kind of boots with a specific boot loader and
the CPU runs), but everything you definitely want those boards to do:
power management, stability, sound, graphics, 3D & multimedia,
wifi/bluetooth, FM radio, GPS(?), DSP support, ARMv7 optimizations, is
simply not there.
Best regards
ZK
PS: I think that you should ask those questions on linaro-dev. I could
be talking nonsense here and the people who really know simply did not
see this message. Therefore I'm cross-posting to linaro-dev.
I've just finished uploading pre-built images for the 11.10 release to:
http://releases.linaro.org/images/11.10/oneiric/
These are built with default options from linaro-media-create, at
pre-set sizes. You can install these with 4 simple commands:
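$ # make sure /dev/sdb really is your SD card before running dd; it will be overwritten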
$ SDCARD=/dev/sdb
$ IMGFILE=overo-nano.img
$ gunzip ${IMGFILE}.gz
$ dd bs=4k if=${IMGFILE} of=${SDCARD}
-andy
Hello,
There were a few user-facing improvements to
https://android-build.linaro.org/ , the Linaro Android Build System,
from which daily builds of Android components (platform, toolchain) can
be downloaded:
It was long-standing feature request, and brings number of improvements
to user experience and system scalability:
a) With 100Mb+ downloads android-build.linaro.org provides, going
plain HTTP instead of SSL improves download speed pretty well.
b) Taking into account that downloads are server by not very powerful
EC2 instance, alleviating it from SSL number-crunching improve
concurrent download speed and build performance.
c) Using plain HTTP gets around SSL certificate issues, the need to
do extra click-thru in browser or use obscure options for wget.
2. HTTP downloads were made the default for the Build Frontend.
Instead of documenting the HTTP download links on some obscure web
page, they were made the default when browsing build results from
https://android-build.linaro.org , e.g.:
https://android-build.linaro.org/builds/~linaro-android/staging-panda/
(Please note that all older links will keep working. The underlying
Jenkins instance also keeps using https:// links.)
3. MD5SUMS files are available for platform downloads.
Now it's easy to verify the integrity of downloads using the well-known
convention of MD5SUMS files. The whole download and integrity check can
be scripted as:
wget -r -nd http://android-build.linaro.org/builds/~linaro-android/staging-panda/65/tar…
md5sum -c MD5SUMS
4. Links to easily access build logs in several formats were added.
Last milestone, we enabled the Jenkins Log Parser plugin, which makes
it quick to get to the location of errors within a 20MB build log file.
However, it was available only from within Jenkins. Now, links to the
most useful log browsing pages have been added directly to the
frontend's build page, following the concept that the frontend page
should put the most useful build information at developers' fingertips.
There are now, under the "Log" subsection on the page:
* Parsed - the parsed log mentioned above, providing a quick jump to
error locations and stats about the build.
* Tail - the last 500KB or so of the log, which oftentimes contains the
error. With one click, the entire log can be browsed, with all lines
still nicely wrapped.
* Raw - the plain-text log, as available before. This is mostly
suitable for download and local viewing, as lines are not wrapped in
most browsers when viewing online.
See https://android-build.linaro.org/builds/~linaro-android/staging-panda/#buil…
for example.
--
Best Regards,
Paul
Linaro.org | Open source software for ARM SoCs
Follow Linaro: http://www.facebook.com/pages/Linaro - http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog