Optimize performance for single irq
Changes since v4:
* Proper rebase on mmc-next. In the previous patch, a spelling patch in sdio_irq
was accidentally rebased on top as well, causing a merge conflict.
Stefan Nilsson XK (1):
sdio: optimized SDIO IRQ handling for single irq
drivers/mmc/core/sdio_irq.c | 33 ++++++++++++++++++++++++++++++++-
include/linux/mmc/card.h | 1 +
2 files changed, 33 insertions(+), 1 deletions(-)
--
1.7.4.1
[Apologies for the wide distribution of this mail (Debian, Ubuntu and
Fedora main+arm dev lists, plus linaro and lsb) but it's useful to
catch people who care about this issue enough to do some work. Do
please bear the distribution in mind when replying, focussing any
detailed discussion on linaro-dev please. I'll post a summary at the
end if there is material input]
---------------
The subject of the Linux Standards Base (LSB) and ARM came up at the
recent Linaro Dev summit.
During discussion of standardisation of ABI across distributions (Ubuntu,
Debian, Fedora etc) it was suggested that maybe the LSB was a useful
place to specify some kind of agreed minimum.
It turns out that the LSB supports 7 architectures, but does not
include ARM, beyond the catch-all 'generic'. This seemed an odd
omission so I contacted the LSB people who were very helpful, and I
found that they would like to support ARM but a) there was not a clear
binary ABI standard to support in the past (there were lots of
variants) and b) no-one really stepped up to do the work of porting
the LSB docs and tools.
It came up on the LSB list too:
https://lists.linux-foundation.org/pipermail/lsb-discuss/2011-May/006828.ht…
As I say in that thread, I think a) has been dealt with in that there
is now an agreed base with wide-enough support that we could usefully
specify (the armv7, VFP-D16 ('hard-float'), little-endian ABI, as used
by Debian 'armhf', Ubuntu 'armhf', Fedora 'armv7hl', and also Meego
'armv7hl'). That fits with what Linaro is supporting too.
Mats D Wichman kindly gave some idea of how much work is needed to
'port' the LSB in the above thread:
-------------
in that range of options [week, month, year, 6 years], it's closer to
a year than to any of the others.
The spec work isn't hard by itself if there's a reasonable
processor-level ABI document available, which unless something
has changed, is the case for ARM: like the psABI documents
that exist for other architectures, reference is made to such
a base document (or set) for things like register assignments,
calling conventions, exception processing, etc. so it doesn't
have to be written - maybe just pinning down where the base
document offers choices, that the LSB ABI does it this way.
In addition to the spec, there's a great deal of code around
LSB, which all has to be adapted. It's obviously portable
code since it works for seven architectures already, of
varying wordsizes, endian-ness, etc. Where there are details
specific to a processor architecture, we actually have all of
the details stored in a database (which can be browsed at
http://dev.linuxfoundation.org/navigator), and the biggest
task actually becomes populating the database with data for
this new architecture. There are some fairly reasonable tools
for scanning a distribution which would provide a useful
starting point, but then someone has to validate that things
are all correct. Some of the validation happens by building
and running iteratively various checkers which are part of the
software suite anyway. That will require adjusting a number of
makefiles, populating new trees under arch-specific names, etc.
but that part is easy enough, just manual work. It has been
rather a long time since a new architecture was added (I
think PPC64 and S390X were added at about the same time, and
that was many years ago), so the procedure hasn't really been
tested out recently.
Then there are a bunch of test suites which need to run to
validate that a distribution conforms, and in my experience,
this tends to be where new issues show up that break the assumption
that everything's clean and portable, so there may well be
extra debugging here.
------------------
As this is a non-trivial amount of work, the question then arises,
does anyone care about this enough to actually do the work? Linaro is
an obvious organisation that could expend some engineering effort on
this, but to do that it needs some indication that it's more than a
'would-be-nice'.
Who actually uses the LSB for making widely-distributed binaries? Would
anyone do so on ARM if it was specced? Is it important to make ARM a
'real' architecture alongside the others, e.g. especially in server space?
In my experience anyone distributing binaries actually picks a small
set of distros and builds for those explicitly, rather than relying
on the LSB. Does that mean that it's not actually useful in the real
world? I guess in a sense this posting is to the wrong lists; we're
all free software people here who have little use for the LSB. Where
do the proprietary software distributors hang out? :-)
It's easy to think of potential use-cases, and I think ultimately,
unless the LSB is in fact entirely irrelevant, this work will get done
eventually. But should we get on with that now, rather than whatever
else we might be fixing, and if so, who is volunteering to get
involved?
( Jon Masters and I have both expressed interest but are not exactly
brimming over with spare time. Any more for any more?)
Opinions welcome.
Wookey
--
Principal hats: Linaro, Emdebian, Wookware, Balloonboard, ARM
http://wookware.org/
Here is a bunch of scenarios I am planning to integrate into the pm-qa package.
Any ideas or comments will be appreciated.
Note the test cases are designed for a specific host configured to
run a minimal number of services and with no pending cron
jobs. This prerequisite is needed in order not to alter the expected
results.
Thanks
-- Daniel
cpufreq:
--------
(1) test the cpufreq framework is available
check the following files are present in the sysfs path:
/sys/devices/system/cpu/cpu[0-9].*
-> cpufreq/scaling_available_frequencies
-> cpufreq/scaling_cur_freq
-> cpufreq/scaling_setspeed
There are also several other files:
-> cpufreq/cpuinfo_max_freq
-> cpufreq/cpuinfo_cur_freq
-> cpufreq/cpuinfo_min_freq
-> cpufreq/cpuinfo_transition_latency
-> cpufreq/stats/time_in_state
-> cpufreq/stats/total_trans
-> cpufreq/stats/trans_table
-> ...
Should we do some testing on those, or do we assume it is not up to
Linaro to do so, as they are part of the generic cpufreq framework?
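Test (1) above could be sketched as a small Python check. The three required file names are the standard cpufreq sysfs entries; the base path is a parameter (and the helper name is my own) so the sketch can be exercised against a fake tree as well as the real /sys:

```python
import glob
import os

# Required cpufreq sysfs entries per test (1); the optional ones
# (cpuinfo_*, stats/*) could be appended here if we decide to test them.
REQUIRED = [
    "cpufreq/scaling_available_frequencies",
    "cpufreq/scaling_cur_freq",
    "cpufreq/scaling_setspeed",
]

def check_cpufreq_sysfs(base="/sys/devices/system/cpu"):
    """Return a list of (cpu_dir, missing_entry) pairs; empty means PASS."""
    failures = []
    for cpu_dir in sorted(glob.glob(os.path.join(base, "cpu[0-9]*"))):
        for entry in REQUIRED:
            path = os.path.join(cpu_dir, entry)
            if not os.path.exists(path):
                failures.append((cpu_dir, entry))
    return failures
```
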
(2) test the change of the frequency is effective in 'userspace' mode
- set the governor to 'userspace' policy
- for each frequency and cpu
- write the frequency
- wait at least cpuinfo_transition_latency
- read the frequency
- check the frequency is the expected one
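The steps of test (2) for one cpu and one frequency could look like this sketch (a minimal illustration, assuming the standard cpufreq sysfs file names; the cpufreq directory is a parameter so the steps can be tried against a fake tree, and the function name is my own):

```python
import os
import time

def set_and_verify_freq(cpufreq_dir, freq_khz):
    """Write a frequency, wait at least the transition latency,
    read it back and check it is the expected one (test (2))."""
    def read(name):
        with open(os.path.join(cpufreq_dir, name)) as f:
            return f.read().strip()

    # The governor must already be 'userspace' for scaling_setspeed to work.
    assert read("scaling_governor") == "userspace"

    with open(os.path.join(cpufreq_dir, "scaling_setspeed"), "w") as f:
        f.write(str(freq_khz))

    # cpuinfo_transition_latency is expressed in nanoseconds.
    latency_ns = int(read("cpuinfo_transition_latency"))
    time.sleep(latency_ns / 1e9)

    return int(read("scaling_cur_freq")) == freq_khz
```
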
(3) test that changing the frequency affects the performance of a test
program
(*) Write a simple program which takes a cpu list as a parameter in
order to set its affinity. This program computes the number
of cycles (e.g. a simple counter) it executed in 1 second and
displays the result. If more than one cpu is specified, a process
is created for each cpu, with its affinity set accordingly.
(**) - set the governor to userspace policy
- set the frequency to the lower value
- wait at least cpuinfo_transition_latency
- run the test (*) program for each cpu, combine them and
append the results to a file
- for each frequency rerun (**)
- check the result file contains noticeably increasing values
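The (*) helper described above could look like this sketch (the description suggests a small C program; Python is used here for brevity, and the function names are my own). `os.sched_setaffinity` pins a process to the given cpu, and the count-per-second loop plays the role of the "simple counter":

```python
import os
import time
from multiprocessing import Process

def count_for_one_second(cpu=None):
    """Pin to `cpu` (if given) and count iterations for ~1 second."""
    if cpu is not None:
        os.sched_setaffinity(0, {cpu})  # Linux-only
    count = 0
    deadline = time.monotonic() + 1.0
    while time.monotonic() < deadline:
        count += 1
    print("cpu %s: %d" % (cpu, count))
    return count

def run_on_cpus(cpus):
    """One process per cpu, each pinned to its cpu, as described in (*)."""
    procs = [Process(target=count_for_one_second, args=(c,)) for c in cpus]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

At a lower frequency the counter should come out noticeably smaller, which is what the result-file comparison in (**) checks.
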
(4) test the load of the cpu affects the frequency with 'ondemand'
(*) write a simple program which does nothing more than consuming
cpu (no syscall)
for each cpu
- set the governor to 'ondemand'
- run (*) in background
- wait at least cpuinfo_transition_latency *
nr_scaling_available_frequencies
- read the frequency
- kill (*)
- check the frequency is equal to the higher frequency available
- wait at least cpuinfo_transition_latency *
nr_scaling_available_frequencies
- read the frequency
- check the frequency is the lowest available
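The ondemand, userspace, powersave and conservative load tests all share the same skeleton: start a pure-cpu load, wait a multiple of the transition latency, read scaling_cur_freq, and kill the load. A hedged sketch of that common part (paths parameterised and names my own; the load is a Python busy loop standing in for the C (*) program):

```python
import os
import time
from multiprocessing import Process

def busy_loop():
    """Pure cpu consumption, no syscalls in the loop -- the (*) load."""
    while True:
        pass

def freq_under_load(cpufreq_dir, settle_s):
    """Start the load, wait settle_s seconds, read scaling_cur_freq,
    then kill the load. The caller compares the returned frequency
    against whatever the governor under test should have chosen."""
    load = Process(target=busy_loop)
    load.start()
    try:
        time.sleep(settle_s)
        with open(os.path.join(cpufreq_dir, "scaling_cur_freq")) as f:
            return int(f.read().strip())
    finally:
        load.terminate()
        load.join()
```
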
(5) test the load of the cpu does not affect the frequency with 'userspace'
for each cpu
- set the governor to 'userspace'
- set the frequency between min and max frequencies
- wait at least cpuinfo_transition_latency *
nr_scaling_available_frequencies
- run (*) in background
- wait at least cpuinfo_transition_latency *
nr_scaling_available_frequencies
- read the frequency
- kill (*)
- check the frequency is equal to the one we set
(6) test the load of the cpu does not affect the frequency with 'powersave'
for each cpu
- set the governor to 'powersave'
- wait at least cpuinfo_transition_latency *
nr_scaling_available_frequencies
- check the frequency is the lowest available
- run (*) in background
- wait at least cpuinfo_transition_latency *
nr_scaling_available_frequencies
- read the frequency
- kill (*)
- check the frequency is the lowest available
(7) test the load of the cpu affects the frequency with 'conservative'
for each cpu
- set the governor to 'conservative'
for each freq step
- set the up_threshold to the freq step
- wait at least cpuinfo_transition_latency *
nr_scaling_available_frequencies
- run (*) in background
- wait at least cpuinfo_transition_latency *
nr_scaling_available_frequencies
- read the frequency
- check the frequency is equal to the highest one reachable with
that freq_step
- kill (*)
Hello Arnd,
I'd like to follow up to the discussion which took place during
"Android Code Review" session at LDS
(https://wiki.linaro.org/Platform/Android/Specs/AndroidCodeReview).
You expressed concern that Gerrit is not flexible enough for working with
multiple topic branches; in particular, that it enforces work
against a single (master) branch (as far as I understood).
Well, looking at AOSP's Gerrit instance,
https://review.source.android.com/ one can see that there's a "branch"
field present and shown in the list of patches, and there are patches
which reference non-master branches (here's the list for the "froyo"
branch, for example:
https://review.source.android.com/#q,status:open+branch:froyo,n,z ).
So it at least allows labeling a patch with the branch it is submitted
against.
So, can you please elaborate on the concerns you expressed? A
generalized user story and a specific example would be very helpful. (And
please keep in mind that I still know little about Gerrit, so if the
above doesn't relate directly to your concerns, I'm sorry; that's why
I'd like to collect more detailed info on the Gerrit
features/misfeatures people know about.)
Thanks,
Paul
Hi there,
Some of you know about this already as it was discussed a bit at LDS,
but probably not very inclusively.
We have a slightly strange arrangement of projects on Launchpad, with
the main ones being "launch-control" and "lava". AIUI, Zygmunt really
wanted to call "launch-control" "dashboard" from day 1, but that was
clearly an overly generic name. Since then, we've come up with the
"lava" name to cover our automated validation efforts, but currently the
"lava" project just contains code to do with the validation farm in
Cambridge, which is only part of the story.
So. We have a plan to reorganize the projects like so (also documented on
https://blueprints.launchpad.net/linaro-validation-misc/+spec/linaro-platfo…):
- lava -- project group containing:
- lava-server -- contains the django settings file and templates
- this will mostly be extracted from what is lp:launch-control today
- lava-scheduler -- django application for scheduling jobs
- what exists for this is in lp:lava currently
- lava-dispatcher -- tool that runs tests on hardware
- this is also in lp:lava currently
- lava-dashboard -- django application for showing test results
- this is lp:launch-control currently
- lava-tool -- command line entry point and framework
- this actually exists already and has the right name!
- lava-dashboard-tool
- new name for launch-control-tool
- lava-test-tool -- plugin for lava-tool that adds commands for running tests
- new name and refactoring for abrek
- lava-auth-tool -- plugin for lava-tool that adds commands for authenticating against validation.linaro.org
- lava-scheduler-tool -- plugin for lava-tool that adds commands for interacting with the scheduler
- these last two don't actually exist yet
It's possible there are too many lava-tool plugins here, but we can see
how that goes when it comes to implementing those bits.
There will be a certain amount of churn and broken links during the
reorganization, so this is mostly a heads up and apologies in advance.
Bike shedding on the details of the rearrangement will probably be
ignored :)
I'll probably do this on Monday morning my time, which should be nice
and quiet.
Cheers,
mwh
On 27 May 2011 11:45, Deepak Saxena <dsaxena(a)linaro.org> wrote:
> On 26 May 2011 20:17, Zach Pfeffer <zach.pfeffer(a)linaro.org> wrote:
>> My only comment is on:
>>
>> What We're Not Doing
>>
>> Integrating graphics drivers
>> ● Handled by vendor Landing Team
>>
>> I think aspects of this will need to be handled by the kernel team.
>> Especially with the work Jesse and the MM summit are proposing.
>
> Hi Zach,
>
> Clarification on the above point is that we will not be handling integrating
> binary blobs for the different SOCs. Any work that Jesse's team and
> upstream developers do on consolidating towards a single buffer
> management solution will certainly be integrated into the linaro-linux
> tree as it starts stabilizing.
We're going to need to chat about that. I'll need support for binary
blob integration from all teams. Overall I'm asking for each team
(Landing Teams included) to:
(Thanks to Michael Hope for suggesting this list)
1. Build the working group output and Android combinations with each
commit (CI)
* Build last-known-good Android with latest working group output
* Build latest Android with last-released working group output
* Build latest Android with latest working group output
2. Smoke test (CI/test farm)
* Load onto a board
* Run a test script
* Run 0xbench
3. Report any build or smoke test failures (CI)
4. Report any performance regressions (CI)
5. Document Linaro improvements in general and on Android, how to
build the component and how to get it.
6. Add benchmarks to showcase Linaro improvements.
7. Add tests to test functionality.
Some of this will be automated with Gerrit eventually. Until then
it'll be a manual process.
-Zach
Hi there,
Here are the slides I presented at the May 9th plenary; I was
supposed to send them earlier but, surprise, surprise, I never found the
time for it ;-)
We'll see in a month's time how well it worked!
--
Christian Robottom Reis | [+55] 16 9112 6430 | http://launchpad.net/~kiko
Linaro Engineering VP | [ +1] 612 216 4935 | http://async.com.br/~kiko
Hi all,
The public reviews for Linaro plans for this year start today with the first
session talking about Power Management WG plans.
Please participate if you're interested in hearing what we'll be working on.
Feedback is very welcome.
Slides and participation details are at
https://wiki.linaro.org/Cycles/1111/PublicPlanReview
See you there.
Regards,
Amit