v3 is rebased on top of the latest serial runtime
patches and boot tested with/without DT on OMAP4
SDP and OMAP4 Panda boards.
Patches can be found in the branch listed below.
I also had to pull in a fix for DT testing (already in linux-omap
master) which was missing as the serial runtime branch
was based on an older master commit.
Changes in v3:
-1- Rebased on latest serial runtime patches
-2- Minor typo fixes
Changes in v2:
-1- Got rid of binding to define which uart is console
-2- Added checks to default the clock speed to 48MHz
-3- Added compatible for each OMAP family
-4- Used of_alias_get_id to populate port.line
 git://gitorious.org/runtime_3-0/runtime_3-0.git for_3_3/lo_rc4_uartruntime
Rajendra Nayak (4):
omap-serial: Get rid of all pdev->id usage
omap-serial: Use default clock speed (48Mhz) if not specified
omap-serial: Add minimal device tree support
ARM: omap: pass minimal SoC/board data for UART from dt
.../devicetree/bindings/serial/omap_serial.txt | 10 +++
arch/arm/boot/dts/omap3.dtsi | 31 ++++++++
arch/arm/boot/dts/omap4.dtsi | 28 +++++++
arch/arm/mach-omap2/board-generic.c | 1 -
drivers/tty/serial/omap-serial.c | 80 +++++++++++++++----
5 files changed, 132 insertions(+), 18 deletions(-)
create mode 100644 Documentation/devicetree/bindings/serial/omap_serial.txt
There has been considerable activity around the scheduler lately, and around how
to adapt it to the peculiarities of the new class of hardware coming out,
like the big.LITTLE devices from a number of manufacturers.
The platforms that Linux runs on are very diverse, and they run differing workloads.
For example, most consumer devices will very likely run something like Android,
with common use cases such as audio and/or video playback. Methods to achieve
lower power consumption using a power-aware scheduler are under investigation.
Similarly, for server applications or VM hosting, the scheduler's behavior
shouldn't have adverse performance implications; any power saving on top of that
would be a welcome improvement.
The current issue is that scheduler development is not easily shared between
developers. Each developer has their own 'itch', be it Android use cases, server
workloads, VMs, etc. The risk of optimizing for one's own use case while
causing severe degradation of most other use cases is high.
One way to fix this problem would be to develop a method with which one
could run a given use-case workload on a host, record the activity in an
interchangeable, portable trace file, and then play it back on another
host via a playback application that generates a load approximately similar
to the one observed during recording.
The way the two hosts respond under the same load generated by the playback
application can then be compared, so that the two scheduler implementations
can be evaluated against various metrics (such as performance, power
consumption, etc.).
The fidelity of this approximation is of great importance, but it is debatable
whether a fully identical load can be generated, since the hosts may differ
in ways that make that impossible.
I believe it should be possible at least to simulate a pure CPU load and the
blocking behavior of tasks, in such a way that the resulting scheduler
decisions can be compared and shared among developers.
The recording part, I believe, can be handled by the kernel's tracing
infrastructure, either by using the existing tracepoints or, if need be, by
adding more; possibly even by creating a new tracer solely for this purpose.
Since some applications can adapt their behavior in response to insufficient
system resources (think of media players that can drop frames, for example), I
think it would be beneficial to record such events in the same trace file.
The trace file should have a portable format so that it can be freely shared
between developers. An ASCII format like the one we currently use should be OK,
as long as recording it doesn't perturb execution too much.
The playback application can be implemented in two ways.
One way, the LinSched way, would be to compile the full scheduler
implementation as part of said application and use application-specific methods
to evaluate performance. While this will work, it won't allow a meaningful
comparison of the two hosts.
For both scheduler and platform evaluation, the playback application will
generate the load on the running host by replaying the source host's recorded
workload session. That means emulating process activity like forks, thread
spawning, blocking on resources, etc. It is not yet clear to me whether that is
possible without some kind of kernel-level helper module, but not requiring
one is desirable.
Since one would have the full trace of scheduling activity (past, present, and
future), it would be possible to generate a perfect schedule (as defined by best
performance, or best power consumption) and use it as a yardstick against the
actual scheduler. Comparing the results would give an estimate of the best-case
improvement that could be achieved if the ideal scheduler existed.
I know this is a bit long, but I hope it can serve as a basis for thinking
about how to go about this.
TI seems to be happy with the cpuidle driver based on Colin's coupled
C-state work. Santosh has provided a branch at the end of this message that
is rebased on top of 3.4-rc2. Can we fold this into the TILT tree for April?
You should provide a consolidated version of your fixes and cleanups to
the various LTs too.
---------- Forwarded message ----------
From: Santosh Shilimkar <santosh.shilimkar(a)ti.com>
Date: Mon, Apr 9, 2012 at 10:11 AM
Subject: Re: [PATCHv2 0/5] coupled cpuidle state support
To: Colin Cross <ccross(a)android.com>
Cc: linux-kernel(a)vger.kernel.org, linux-arm-kernel(a)lists.infradead.org,
linux-pm(a)lists.linux-foundation.org, Kevin Hilman <khilman(a)ti.com>, Len
Brown <len.brown(a)intel.com>, Trinabh Gupta <g.trinabh(a)gmail.com>, Arjan van
de Ven <arjan(a)linux.intel.com>, Deepthi Dharwar <deepthi(a)linux.vnet.ibm.com>,
Greg Kroah-Hartman <gregkh(a)suse.de>, Kay Sievers <kay.sievers(a)vrfy.org>,
Daniel Lezcano <daniel.lezcano(a)linaro.org>, Amit Kucheria <
amit.kucheria(a)linaro.org>, Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com>,
Rob Lee <rob.lee(a)linaro.org>
On Friday 30 March 2012 06:23 PM, Santosh Shilimkar wrote:
> On Friday 16 March 2012 05:07 AM, Colin Cross wrote:
>> On Wed, Mar 14, 2012 at 11:29 AM, Colin Cross <ccross(a)android.com> wrote:
>>> * removed the coupled lock, replacing it with atomic counters
>>> * added a check for outstanding pokes before beginning the
>>> final transition to avoid extra wakeups
>>> * made the cpuidle_coupled struct completely private
>>> * fixed kerneldoc comment formatting
>>> * added a patch with a helper function for resynchronizing
>>> cpus after aborting idle
>>> * added a patch (not for merging) to add trace events for
>>> verification and performance testing
>> I forgot to mention, this patch series is on v3.3-rc7, and will
>> conflict with the cpuidle timekeeping patches. If those go in first
>> (which is likely), I will rework this series on top of it. I left it
>> on v3.3-rc7 now to make testing easier.
> I have re-based your series against Len Browns
> next branch  which has time keeping and other cpuidle patches.
> Have also folded the CPU hotplug fix which I posted in the
> original coupled idle patch.
As you know, we have been playing with this series on OMAP
for the last few weeks. This version of the series seems to work as
intended, and I found it pretty stable in my testing. Apart from the CPU
hotplug fix and the trace event comment, the series looks fine.
Reviewed-by: Santosh Shilimkar <santosh.shilimkar(a)ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar(a)ti.com>
An updated version of this series, along with the OMAP cpuidle
driver updates against 3.4-rc2, is available here in
case somebody is interested in looking at it.
So the Google Android team just posted this, which shows their device
emulator running with hardware acceleration.
Since I know they started with qemu, I was curious whether there are any
details on whether these sorts of changes are making it back upstream to
qemu, or whether qemu has its own plans for hardware acceleration?
Unfortunately, there are few technical details in the post above. The
video is clearly running on OS X, so I'm not sure whether this will also be
usable with Linux hosts, but I could be wrong there.
On Sat, Mar 31, 2012 at 6:58 PM, Linus Torvalds wrote:
> - drm dma-buf prime support. Dave Airlie sent me the pull request but
> didn't push very hard for it, it's in my "ok, I can still pull it for
> 3.4 if individual DRM driver people tell me that it will make their
> lives easier." So this is in limbo - I have nothing against it, but I
> won't pull unless I get a few people say "yes, please".
yes, please :-)
Note that the core drm dma-buf/prime support has already been reviewed
by a lot of folks and tested with a few different drivers (exynos,
omapdrm, i915, nouveau, udl), with some driver support that could be
pushed in the 3.5 cycle if the core support makes it in for 3.4.