v3 is rebased on top of the latest serial runtime
patches[1] and boot tested with/without DT on OMAP4
SDP and OMAP4 Panda boards.
Patches can be found here:
git://gitorious.org/omap-pm/linux.git for-dt/serial
I also had to pull in a fix[2] for DT testing (already in linux-omap
master) which was missing as the serial runtime branch[1]
was based on an older master commit.
Changes in v3:
-1- Rebased on latest serial runtime patches
-2- Minor typo fixes
Changes in v2:
-1- Got rid of the binding to define which UART is the console
-2- Added checks to default the clock speed to 48MHz
-3- Added a compatible string for each OMAP family
-4- Used of_alias_get_id to populate port.line (an illustrative sketch follows below)
[1] git://gitorious.org/runtime_3-0/runtime_3-0.git for_3_3/lo_rc4_uartruntime
[2] http://www.spinics.net/lists/arm-kernel/msg150751.html
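As a side note on item -4- of the v2 changes, the of_alias_get_id() approach
roughly follows the usual pattern sketched below. This is only an illustration,
not the actual hunk from the series; the example_get_uart_line() helper is
hypothetical.

/*
 * Illustrative only: resolve the uart line number from the "serialN"
 * alias in the device tree instead of relying on pdev->id.
 */
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/serial_core.h>

static int example_get_uart_line(struct platform_device *pdev,
				 struct uart_port *port)
{
	int id = of_alias_get_id(pdev->dev.of_node, "serial");

	if (id < 0)
		return id;	/* no alias: caller falls back to pdev->id */

	port->line = id;
	return 0;
}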
Rajendra Nayak (4):
omap-serial: Get rid of all pdev->id usage
omap-serial: Use default clock speed (48MHz) if not specified
omap-serial: Add minimal device tree support
ARM: omap: pass minimal SoC/board data for UART from dt
.../devicetree/bindings/serial/omap_serial.txt | 10 +++
arch/arm/boot/dts/omap3.dtsi | 31 ++++++++
arch/arm/boot/dts/omap4.dtsi | 28 +++++++
arch/arm/mach-omap2/board-generic.c | 1 -
drivers/tty/serial/omap-serial.c | 80 +++++++++++++++----
5 files changed, 132 insertions(+), 18 deletions(-)
create mode 100644 Documentation/devicetree/bindings/serial/omap_serial.txt
Hi there,
There has been considerable activity around the scheduler lately, in particular
on how to adapt it to the peculiarities of the new class of hardware coming out,
like the big.LITTLE class of devices from a number of manufacturers.
The platforms that Linux runs on are very diverse, and they run differing workloads.
For example, most consumer devices will very likely run something like Android,
with common use cases such as audio and/or video playback. Methods to achieve
lower power consumption using a power-aware scheduler are under investigation.
Similarly for server applications, or VM hosting, the behavior of the scheduler
shouldn't have adverse performance implications; any power saving on top of that
would be a welcome improvement.
The current issue is that scheduler development is not easily shared between
developers. Each developer has their own 'itch', be it Android use cases, server
workloads, VMs, etc. There is a high risk of optimizing for one's own use case
and causing severe degradation for most other use cases.
One way to fix this problem would be to develop a method with which one
could run a given use-case workload on a host, record the activity in an
interchangeable, portable trace file format, and then play it back on another
host via a playback application that generates a load approximating the one
observed during recording.
The way the two hosts respond to the same load generated by the playback
application can then be compared, so that the two scheduler implementations can
be evaluated against various metrics (performance, power consumption, etc.).
The fidelity of this approximation is of great importance, but it is debatable
whether a fully identical load can be generated, since the hosts may differ in
details that make that impossible.
I believe that it should be possible at least to simulate a pure CPU load, and
the blocking behavior of tasks, in such a way that the resulting scheduler
decisions can be compared and shared among developers.
The recording part, I believe, can be handled by the kernel's tracing
infrastructure, either by using the existing tracepoints or, if need be, adding
more; possibly even creating a new tracer solely for this purpose.
Since some applications can adapt their behavior in response to insufficient
system resources (think of media players that can drop frames, for example), I
think it would be beneficial to record such events in the same trace file.
The trace file should have a portable format so that it can be freely shared
between developers. An ASCII format like the one we currently use should be OK,
as long as it doesn't perturb execution too much while recording.
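Purely as an illustration, and in no way a proposed format, a few records in
such a file might look something like:

  comm=mediaplayer pid=1234 event=run    cpu=1 start=1.002350 dur=0.000870
  comm=mediaplayer pid=1234 event=sleep        start=1.003220 dur=0.014800
  comm=mediaplayer pid=1234 event=wakeup cpu=0 start=1.018020

The exact set of fields would of course follow from whatever the tracepoints can
provide.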
The playback application can be implemented in one of two ways.
One way, the LinSched way, would be to compile the full scheduler implementation
into the playback application itself and use application-specific methods to
evaluate performance. While this would work, it won't allow the two hosts to be
compared in a meaningful manner.
The other way, which allows both scheduler and platform evaluation, is for the
playback application to generate the load on the running host by simulating the
source host's recorded workload session. That means emulating process activity
like forks, thread spawning, blocking on resources, etc. It is not clear to me
yet whether that is possible without some kind of kernel-level helper module,
but not requiring one would be desirable.
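To illustrate that the pure CPU and blocking parts at least could be reproduced
entirely from user space, here is a minimal sketch of what a playback loop could
look like; everything in it (the record layout, the names, the hard-coded
samples) is hypothetical and only stands in for data that would really be parsed
from the recorded trace:

/*
 * Hypothetical playback sketch: one pthread per recorded task, each
 * replaying a series of (run_us, sleep_us) samples taken from the trace.
 * Nothing here is a proposed interface; it only shows that CPU bursts
 * and blocking behaviour can be approximated with plain POSIX primitives.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

struct sample {
	long run_us;	/* time the task was on a CPU */
	long sleep_us;	/* time it was blocked/sleeping afterwards */
};

struct task_replay {
	const char *name;
	const struct sample *samples;
	size_t nr_samples;
};

/* Busy-loop until this thread has consumed 'us' microseconds of CPU time. */
static void burn_cpu(long us)
{
	struct timespec start, now;

	clock_gettime(CLOCK_THREAD_CPUTIME_ID, &start);
	do {
		clock_gettime(CLOCK_THREAD_CPUTIME_ID, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000000L +
		 (now.tv_nsec - start.tv_nsec) / 1000L < us);
}

static void *replay_task(void *arg)
{
	const struct task_replay *t = arg;
	size_t i;

	for (i = 0; i < t->nr_samples; i++) {
		burn_cpu(t->samples[i].run_us);
		usleep(t->samples[i].sleep_us);
	}
	return NULL;
}

int main(void)
{
	/* Would normally be parsed from the recorded trace file. */
	static const struct sample mediaplayer[] = {
		{ 870, 15000 }, { 910, 14800 }, { 850, 15200 },
	};
	static const struct task_replay tasks[] = {
		{ "mediaplayer", mediaplayer, 3 },
	};
	pthread_t tid;

	pthread_create(&tid, NULL, replay_task, (void *)&tasks[0]);
	pthread_join(tid, NULL);
	return 0;
}

Blocking on shared resources (futexes, pipes, and so on) would need something
richer than a plain usleep(), which is where the question of a kernel-level
helper comes in.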
Since one would have the full trace of scheduling activity (past, present and
future), there would be the possibility of generating a perfect schedule (as
defined by best performance, or best power consumption) and using it as a
yardstick against the actual scheduler. Comparing the results, you would get an
estimate of the best-case improvement that could be achieved if an ideal
scheduler existed.
I know this is a bit long, but I hope it can be a basis for thinking about how
to go about developing this.
Regards
-- Pantelis
On 03/29/2012 02:22 AM, Jon Medhurst (Tixy) wrote:
> On Mon, 2012-03-26 at 12:20 -0700, John Stultz wrote:
>> So after talking about it at the last Linaro Connect, I've finally
>> gotten around to making a first pass at providing config fragments for
>> the linaro kernel. I'd like to propose merging this for 12.04, and
>> doing so early so we can make sure that all the desired config options
>> are present in the fragments and to allow the various linaro build
>> systems to begin migrating their config generation over.
>>
>> The current tree is here:
>>
>> http://git.linaro.org/gitweb?p=people/jstultz/android.git;a=shortlog;h=refs…
>>
> [...]
>> I'd ask Landing teams to take a look at this, and report any missing
>> config options or fragment chunks they'd like to see.
> John, I've attached a config fragment for Versatile Express.
Great! I've merged that in! There are a few warnings though:
Value requested for CONFIG_ARCH_VEXPRESS_DT not in final .config
Requested value: CONFIG_ARCH_VEXPRESS_DT=y
Actual value:
Value requested for CONFIG_FB_ARMHDLCD not in final .config
Requested value: CONFIG_FB_ARMHDLCD=y
Actual value:
I'm guessing these are features not in the base 3.3 tree? If so you
might want to break those out and re-add them with those patches.
> This includes loadable module support because one of our topic branches
> adds the gator module with a default config of 'm'. I did this because
> Linaro kernels are expected to have this module available but I didn't
> see any reason for it to be built-in, and as there may be versioning
> issues between it and the separate user side daemon, I thought it wise
> to keep the door open for loading an alternate module obtained from
> elsewhere. That decision does mean that all Linaro kernels would need
> loadable module support built in, but I don't think that is a bad idea.
>
Tushar had a similar request, but I don't think the android configs (at
least the ones I've managed) use modules, so I've added MODULES=y to the
common ubuntu.conf file.
If this is still objectionable, it can be changed and we can push it
down to the linaro-base.conf, but I want to make sure the Android tree
doesn't run into trouble.
thanks
-john
Hello Jo,
I am forwarding this message to a couple of mailing lists which might have
people interested in the Mono port for the ARM hard-float ABI.
2012/2/2 Jo Shields <directhex(a)apebox.org>:
> Right now, Mono is available in Debian armhf. This is a hack - what
> we're actually doing is building Mono as an armhf binary, but built to
> emit soft VFP instructions and using calling conventions and ABI for
> armel. This hack works well enough for pure cross-platform code (like
> the C# compiler) to run, but dies in a heap for anything complex.
>
> This situation is a bit on the crappy side of crap.
>
> In order for Mono on armhf not to be a waste of time, a "true" port
> needs to be completed. If I were to make a not-remotely-educated guess,
> I'd say it needs about 550 lines of changes, primarily the addition of
> code to emit the correct instructions filling the correct registers in
> mono/mini/mini-arm.c plus a couple of tweaks to related headers.
>
> Upstream have also indicated that they're happy to provide guidance and
> pointers on how to implement this port, although they're unable to
> provide the requisite code themselves.
>
> Sadly, unless someone in the community is able to step forward and
> contribute here, it's only a matter of time before the armhf packages
> are rightfully marked RC-buggy, and 100+ packages need to be axed from
> armhf. This would make me sad.
--
Héctor Orón -.. . -... .. .- -. -.. . ...- . .-.. --- .--. . .-.
Changes since RFC:
*Changed the cpu cooling registration/unregistration APIs to be instance based
*Changed the STATE_ACTIVE trip type to pass the correct instance id
*Added support to restore policy->max_freq after doing frequency clipping
*Moved the trip cooling stats from a sysfs node to a debugfs node, as suggested
by Greg KH greg(a)kroah.com
*Incorporated several review comments from eduardo.valentin(a)ti.com
Todo:
*Report time spent in each trip point along with the cooling statistics
*Add opp library support in the cpufreq cooling APIs
Brief Description:
1) The generic cooling devices code is placed inside drivers/thermal/*, as
placing it inside the acpi folder would require unnecessarily enabling ACPI code.
2) This patchset adds a new trip type, THERMAL_TRIP_STATE_ACTIVE, which passes
the cooling device instance number and may help cpufreq cooling devices take
the correct cooling action.
3) This patchset adds a generic cpu cooling low level implementation through
frequency clipping and cpu hotplug. In the future, other cpu related cooling
devices may be added here. An ACPI version of this already exists
(drivers/acpi/processor_thermal.c), but this will be useful for platforms
like ARM using the generic thermal interface along with the generic cpu
cooling devices. The cooling device registration APIs return cooling device
pointers which can easily be bound to thermal zone trip points (a rough usage
sketch follows after this list).
4) This patchset provides a simple way of reporting the cooling achieved for a
trip type. This will help in fine-tuning the attached cooling devices.
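To make the intended flow concrete, here is a rough sketch of how a platform
thermal driver might use such an API. The function names and signatures
(cpufreq_cooling_register() taking a cpumask, and the three-argument
thermal_zone_bind_cooling_device()) are assumptions based on the description
above and on the existing thermal core; the actual interfaces are defined by
the patches themselves (see cpu-cooling-api.txt in this series).

/*
 * Illustrative only: register a cpufreq cooling device for the present
 * CPUs and bind it to an active trip point of a thermal zone. The exact
 * signatures are assumptions, not the real interface from this series.
 */
#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/cpu_cooling.h>
#include <linux/thermal.h>

static struct thermal_cooling_device *example_cdev;

static int example_bind_cpu_cooling(struct thermal_zone_device *tz, int trip)
{
	/* Assumed: create a cooling device clipping frequencies of these CPUs */
	example_cdev = cpufreq_cooling_register(cpu_present_mask);
	if (IS_ERR(example_cdev))
		return PTR_ERR(example_cdev);

	/* Bind the returned cooling device pointer to the given trip point */
	return thermal_zone_bind_cooling_device(tz, trip, example_cdev);
}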
Amit Daniel Kachhap (4):
thermal: Add a new trip type to use cooling device instance number
thermal: Add generic cpufreq cooling implementation
thermal: Add generic cpuhotplug cooling implementation
thermal: Add support to report cooling statistics achieved by cooling
devices
Documentation/thermal/cpu-cooling-api.txt | 57 ++++
Documentation/thermal/sysfs-api.txt | 4 +-
drivers/thermal/Kconfig | 11 +
drivers/thermal/Makefile | 1 +
drivers/thermal/cpu_cooling.c | 484 +++++++++++++++++++++++++++++
drivers/thermal/thermal_sys.c | 165 ++++++++++-
include/linux/cpu_cooling.h | 71 +++++
include/linux/thermal.h | 14 +
8 files changed, 802 insertions(+), 5 deletions(-)
create mode 100644 Documentation/thermal/cpu-cooling-api.txt
create mode 100644 drivers/thermal/cpu_cooling.c
create mode 100644 include/linux/cpu_cooling.h
The two patches were originally in [PATCH V6 0/7] "add a generic cpufreq driver".
I separated them and hope they can go upstream earlier.
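For context, the mechanism the first patch relies on is roughly the one sketched
below: a CPUFREQ_TRANSITION_NOTIFIER that rescales loops_per_jiffy around
frequency changes so that udelay() stays calibrated. This is only an
illustration of the idea, not the patch itself; the example_* names and the
minimal bookkeeping are mine, and the real code would likely also need to handle
the per-cpu loops_per_jiffy values and the !SMP case.

/*
 * Illustrative sketch only: scale the global loops_per_jiffy on cpufreq
 * transitions. Names prefixed example_ are hypothetical.
 */
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <linux/init.h>

static unsigned long example_ref_freq;	/* freq at which l_p_j was calibrated */
static unsigned long example_ref_lpj;	/* loops_per_jiffy at that freq */

static int example_cpufreq_callback(struct notifier_block *nb,
				    unsigned long val, void *data)
{
	struct cpufreq_freqs *freq = data;

	/* Remember the calibration point the first time we are called. */
	if (!example_ref_freq) {
		example_ref_freq = freq->old;
		example_ref_lpj = loops_per_jiffy;
	}

	/* Scale up before a raise, scale down after a drop. */
	if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) ||
	    (val == CPUFREQ_POSTCHANGE && freq->old > freq->new))
		loops_per_jiffy = cpufreq_scale(example_ref_lpj,
						example_ref_freq, freq->new);

	return NOTIFY_OK;
}

static struct notifier_block example_cpufreq_notifier = {
	.notifier_call = example_cpufreq_callback,
};

static int __init example_register_cpufreq_notifier(void)
{
	return cpufreq_register_notifier(&example_cpufreq_notifier,
					 CPUFREQ_TRANSITION_NOTIFIER);
}
core_initcall(example_register_cpufreq_notifier);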
Richard Zhao (2):
ARM: add cpufreq transition notifier to adjust loops_per_jiffy for smp
cpufreq: OMAP: remove loops_per_jiffy recalculate for smp
arch/arm/kernel/smp.c | 54 ++++++++++++++++++++++++++++++++++++++++
drivers/cpufreq/omap-cpufreq.c | 36 --------------------------
2 files changed, 54 insertions(+), 36 deletions(-)
--
1.7.5.4
So after talking about it at the last Linaro Connect, I've finally
gotten around to making a first pass at providing config fragments for
the linaro kernel. I'd like to propose merging this for 12.04, and
doing so early so we can make sure that all the desired config options
are present in the fragments and to allow the various linaro build
systems to begin migrating their config generation over.
The current tree is here:
http://git.linaro.org/gitweb?p=people/jstultz/android.git;a=shortlog;h=refs…
The most relevant commit being:
http://git.linaro.org/gitweb?p=people/jstultz/android.git;a=commitdiff;h=da…
This includes config fragments for a linaro-base, an android-ization
fragment, and board configs for panda, origen and imx53.
I suspect we'll need an ubuntu specific fragment as well as all the
other board fragments.
There is likely to be quite a bit of churn as we decide what sort of
configs are really common and which are board specific. But that's ok.
Configs are generated from the config fragments, as follows:
./scripts/kconfig/merge_config.sh ./configs/linaro-base.conf ./configs/android.conf ./configs/panda.conf
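For reference, each fragment is just a plain list of the CONFIG_ options it
wants set on top of whatever it is merged with; a board fragment could look
roughly like this (purely illustrative, not the actual contents of
configs/panda.conf):

  CONFIG_ARCH_OMAP2PLUS=y
  CONFIG_ARCH_OMAP4=y
  CONFIG_MACH_OMAP4_PANDA=y

merge_config.sh applies the fragments in order and then warns when a requested
value doesn't survive into the final .config.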
You may see warnings, which are not fatal, but should be reported so
they can be properly cleaned up.
I'm asking the Build folks to take a look at the above and consider how
they might merge fragment assembly into their systems, replacing their
current config generation.
I'd ask Landing teams to take a look at this, and report any missing
config options or fragment chunks they'd like to see.
thanks
-john