Hi,
My question concerns this patch
--------
commit 2ffe2da3e71652d4f4cae19539b5c78c2a239136
Author: Russell King <rmk+kernel(a)arm.linux.org.uk>
Date: Sat Oct 31 16:52:16 2009 +0000
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes
DMA cache coherency handling slightly more interesting. Rather than
being able to rely upon the CPU not accessing the DMA buffer until DMA
has completed, we now must expect that the cache could be loaded with
possibly stale data from the DMA buffer.
Where DMA involves data being transferred to the device, we clean the
cache before handing it over for DMA, otherwise we invalidate the buffer
to get rid of potential writebacks. On DMA Completion, if data was
transferred from the device, we invalidate the buffer to get rid of
any stale speculative prefetches.
Signed-off-by: Russell King <rmk+kernel(a)arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar(a)ti.com>
---------
file: arch/arm/mm/dma-mapping.c
> void ___dma_page_cpu_to_dev(struct page *page, unsigned long off,
> size_t size, enum dma_data_direction dir)
> {
> ...
> if (dir == DMA_FROM_DEVICE) {
> outer_inv_range(paddr, paddr + size);
> ...
> }
> EXPORT_SYMBOL(___dma_page_cpu_to_dev);
>
> void ___dma_page_dev_to_cpu(struct page *page, unsigned long off,
> size_t size, enum dma_data_direction dir)
> {
> ...
> if (dir != DMA_TO_DEVICE)
> outer_inv_range(paddr, paddr + size);
> ...
> }
outer_inv_range() is called twice for DMA_FROM_DEVICE: the first time to
"get rid of potential writebacks" and the second time to "get rid of any
stale speculative prefetches".
outer_inv_range() is a rather expensive operation. In the first case,
isn't it enough to just call outer_sync()?
What about:
void ___dma_page_cpu_to_dev(struct page *page, unsigned long off,
size_t size, enum dma_data_direction dir)
{
...
if (dir == DMA_FROM_DEVICE) {
- outer_inv_range(paddr, paddr + size);
+ outer_sync();
...
}
/Per
Hi,
See the notes in full formatted glory:
https://wiki.linaro.org/Platform/Infrastructure/Meetings/2011-02-15
or see below.
Thanks,
James
=== Attendees ===
* Mattias Backman
* Guilherme Salgado
* Deepti Kalakeri
* James Westby
=== Agenda ===
* Team status
* Actions from last meeting
* AOB
=== Action Items ===
* James to Cc Guilherme on RT for patchwork
* James to email Guilherme with info on patches(a)linaro.org account
=== Action Items from previous meeting ===
* James to talk with Michael about requirement driving not wanting to host artefacts on Hudson
=== Minutes ===
* Team status reports
* Mattias Backman
* Started prototyping BuildFailureBugFiling
* Snowball was taken away again
* Guilherme Salgado
* Wrote design/implementation plans for PatchTracking
* Submitted some patchwork patches and got good feedback
* James Westby
* Continued positive feedback about status.linaro.org, along with requests for improvement
* Deepti Kalakeri
* Set up Hudson locally to experiment with.
* Panda should be arriving this week
* Not going to rely on an IS deployed instance of Hudson/Jenkins for short-term experimentation.
Enclosed you'll find a link to the agenda, notes and actions from the
Linaro Developer Platforms Weekly Status meeting held on February 9th
in #linaro-meeting on irc.freenode.net at 16:00 UTC.
https://wiki.linaro.org/Platform/Foundations/2011-02-09
Actions from the meeting were as follows:
* JamieBennett to come up with test assignments for the new
developer image: carried over
* jcrigby to write up landing team u-boot, kernel workflows and send
for review after first draft is done: carried over
* wookey to work with tgall_foo to get the linaro-nano image building
* tgall_foo and JamieBennett to get linaro-desktop building in Offspring
* slangasek and fturgis to decide whether or not to move to more
recent version of systemtap
* ppearse to investigate how libtool does ldopen for GObject Introspection work
Regards,
Tom (tgall_foo)
Developer Platforms Team
"We want great men who, when fortune frowns will not be discouraged."
- Colonel Henry Knox
w) tom.gall att linaro.org
w) tom_gall att vnet.ibm.com
h) tom_gall att mac.com
Hi,
notes and actions from our Wednesday graphics working group call are
available on the wiki:
+ https://wiki.linaro.org/WorkingGroups/Middleware/Graphics/Notes/2011-02-09
Details about when and where of this meeting can be found here:
+ https://wiki.linaro.org/WorkingGroups/Middleware/Graphics#Weekly%20Public%2…
Summary
=======
* Cairo patchset "gl: Replace built-in shader variables with custom variables"
sent upstream for review.
* Most of the team members are able to access git.linaro.org from their
workplace, but this is not yet confirmed for everyone.
* Added desktop GL support to glcompbench benchmark, in order to promote
adoption and increase contributions and feedback.
* Qtwebkit investigation shows that trace captures for websites (without
<canvas>) can be performed without pixmap tracing.
* Refactored most core Compiz plugins to make them GLES2 ready.
* First experiments adding the Mali (ARM's GPU) and UMP drivers to the Linaro
kernel are ongoing. Bugs have been discovered and are being worked on with
the Samsung landing team.
* First version of Unified Memory Management position:
https://wiki.linaro.org/WorkingGroups/Middleware/Graphics/Projects/UnifiedM…
Action Items
============
* Chunsang and Jesse to discuss UMP validation situation and its implications.
* Jesse to ensure that we have valid conference call numbers for all
members of the gfx WG (deferred from previous meeting).
* Jesse to keep track of nux licensing issues (deferred from previous
meeting).
* Everyone to send Jesse a list of the licenses of the projects they are
working on (both new and existing).
* Everyone to review Rajeev's comments in their blueprints and fix any pending
issues.
* Everyone to update blueprints to reflect current work item status.
Thanks,
--
- Alexandros
Hi All,
Please find enclosed the link to minutes and actions for the multimedia WG
meeting on 8th Feb 2011.
https://wiki.linaro.org/WorkingGroups/Middleware/Multimedia/Notes/2011-02-15
Summary
- buffer pool design doc from the GStreamer lead architect reviewed
- Bellagio patches sent upstream
- libjpeg optimization: agreement reached with Mans on the job split
- instrumented player: good progress; first release expected by the end of the
month
Thanks
Benjamin
_______________________________________________
linaro-dev mailing list
linaro-dev(a)lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev
On 28 January 2011 08:12, Paul Larson <paul.larson(a)linaro.org> wrote:
> Other questions...
>
>
> What does it do after running the test? How, and where does it leave the
> results? What get's added to the results (e.g. serial log) and how does
> that happen? What is the mechanism for then taking that bundle and pushing
> to the server? What happens if there were no results from the client, what
> do we capture in that case?
>
Regarding the test results, I have drafted a test result bundle template; a
job's test result bundle is stored in a text file.
The server part of the dispatcher will get the return value of the test case
and the serial port log, then create the test result together with some info
from the job message and the client dispatcher's test logs.
I think the server dispatcher can proactively fetch the client test logs
after the client dispatcher ends.
If there is no result from the client, then after the timeout:
1. The server dispatcher sends "Ctrl+C" to the client dispatcher to end it.
2. It then restarts the client dispatcher.
3. It tries to retrieve the client dispatcher's remaining test logs.
4. It marks the test case as TIMEOUT and creates the test result.
PS. If step 1 cannot end the client dispatcher, a reboot may be needed,
because the server dispatcher has lost control of the serial line.
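The recovery steps above could be sketched roughly as follows. This is only
an illustration: the `serial` object and its `send`/`read_until`/`fetch_logs`/
`reboot_board` methods, and the restart command, are all hypothetical names,
not part of any actual dispatcher code.

```python
CTRL_C = "\x03"  # the "Ctrl+C" character sent over the serial line


def recover_after_timeout(serial, casename):
    """Sketch of the four recovery steps for a timed-out test case."""
    # Step 1: try to end the client dispatcher with Ctrl+C.
    serial.send(CTRL_C)
    if not serial.read_until("root@testimage:~$", timeout=10):
        # PS case: Ctrl+C failed, so the serial line is lost and the
        # board needs a reboot before anything else can happen.
        serial.reboot_board()
    # Step 2: restart the client dispatcher (hypothetical command).
    serial.send("client-dispatcher --resume\n")
    # Step 3: try to get whatever test logs the client left behind.
    logs = serial.fetch_logs()
    # Step 4: mark the case as TIMEOUT and create a result record anyway.
    return {"casename": casename, "retvalue": "TIMEOUT", "seriallog": logs}
```

The point of step 4 is that even a dead client still produces a result
record, so the bundle never silently loses a test case.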
A test result bundle is composed of the individual test case results; each
test case result includes:
-
LOG
casename:Test case name
testsuite:Test suite name
testcmd:Test case Command
timeout:Timeout
retvalue:Return value
version:kernel version(by "cat /proc/version")
seriallog:
serial port log
ENDLOG
e.g.
-
LOG
casename:PERF001
testsuite:abrek
testcmd:x11perf
timeout:60000
retvalue:0
version:2.6.35-xxxx
seriallog:
xxx
xxx
xxx
ENDLOG
And a test result bundle:
LOG
casename:PERF001
testsuite:abrek
testcmd:x11perf
timeout:60000
retvalue:0
version:2.6.35-xxxx
seriallog:
xxx
xxx
xxx
ENDLOG
LOG
casename:USB002
testsuite:abrek
testcmd:usb_app
timeout:60000
retvalue:0
version:2.6.35-xxxx
seriallog:
yyy
yyy
yyy
ENDLOG
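For what it's worth, the proposed LOG/ENDLOG format is straightforward to
parse on the server side. A rough sketch (the field names are taken from the
template above; the parser itself is assumed, not existing code):

```python
def parse_bundle(text):
    """Parse a LOG/ENDLOG test result bundle into a list of dicts."""
    results, record, in_serial = [], None, False
    for line in text.splitlines():
        if line == "LOG":
            # Start of a new test case record.
            record, in_serial = {"seriallog": []}, False
        elif line == "ENDLOG":
            # End of the record: join the collected serial log lines.
            record["seriallog"] = "\n".join(record["seriallog"])
            results.append(record)
            record = None
        elif record is not None and in_serial:
            # Everything between "seriallog:" and ENDLOG is raw serial log.
            record["seriallog"].append(line)
        elif record is not None and ":" in line:
            key, _, value = line.partition(":")
            if key == "seriallog":
                in_serial = True
            else:
                record[key] = value
    return results
```

One caveat with a line-oriented format like this: a serial log that itself
contains a bare "ENDLOG" line would terminate the record early, so some
escaping or length-prefixing might be worth considering.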
What do you think?
The flow to get a test result:
1. The server dispatcher sends commands to the client dispatcher via the
serial line to invoke it, captures all the serial log, and waits for the
client dispatcher to end.
2. The client dispatcher executes the command and saves logs somewhere on the
board.
3. If the client dispatcher ends normally (returns to the "root@testimage:~$"
prompt), the server dispatcher uses "echo $?" to get the return value;
otherwise it terminates the client dispatcher after the timeout.
4. The server dispatcher fetches the client dispatcher's test logs.
5. The server dispatcher uses the collected information to create the test
result.
--
Best wishes,
Spring Zhang
Hi Mirsad, I'm looking at the recent edits to
https://wiki.linaro.org/Platform/Validation/Specs/ValidationScheduler and
wanted to start a thread to discuss. Would love to hear thoughts from
others as well.
We could probably use some more in the way of implementation details, but
this is starting to take shape pretty well, good work. I have a few
comments below:
> Admin users can also cancel any scheduled jobs.
Job submitters should be allowed to cancel their own jobs too, right?
I think in general, the user stories need tweaking. Many of them center
around automatic scheduling of jobs based on some event (adding a machine,
adding a test, etc). Based on the updated design, this kind of logic would
be in the piece we were referring to as the driver. The scheduler shouldn't
be making those decisions on its own, but it should provide an interface for
humans to schedule jobs (web, cli) as well as an API for machines (driver)
to do this.
> should we avoid scheduling image tests twice because a hwpack is coming in
after images or vv.
Is this a question? Again, I don't think that's the scheduler's call. The
scheduler isn't deciding what tests to run or what to run them on. In this
case, assuming we have the resources to pull it off, running the new image
with both the old and the new hwpack would be good to do.
> Test job definition
Is this different from the job definition used by the dispatcher? Please
tell me if I'm missing something here, but I think to schedule something,
you only really need two blobs of information:
1a. specific host to run on
-OR-
1b. (any/every system matching given criteria)
This one is tricky, and though it sounds really useful, my personal
feeling is that it is of questionable value. In theory, it lets you make
more efficient use of your hardware when you have multiple identical
machines. In practice, what I've seen on similar systems is that humans
typically know exactly which machine they want to run something on. Where
it might really come into play is later, when we have a driver automatically
scheduling jobs for us.
2. job file - this is the piece that the job dispatcher consumes. It could
be handwritten, machine generated, or created based on a web form where the
user selects what they want.
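To make the "two blobs" concrete, a scheduling request might be as small as
this. Every field name here is made up for illustration; nothing below comes
from the spec:

```python
# Blob 1: where to run -- either a specific host (1a) or criteria (1b).
# Blob 2: the job file, passed through opaquely to the dispatcher.
request_specific = {
    "target": {"host": "panda01"},                # case 1a
    "job_file": "deploy-boot-x11perf.json",       # case 2
}
request_criteria = {
    "target": {"criteria": {"board": "panda", "memory_mb": 1024}},  # case 1b
    "job_file": "deploy-boot-x11perf.json",
}


def matches(machine, target):
    """Pick machines for case 1b: every stated criterion must match."""
    if "host" in target:
        return machine.get("host") == target["host"]
    return all(machine.get(k) == v for k, v in target["criteria"].items())
```

The scheduler would only need the `target` blob; the `job_file` blob stays
opaque and is handed to the dispatcher unchanged, which keeps the two
components loosely coupled.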
> Test job status
One distinction I want to make here is job status vs. test result. A failed
test can certainly have a "complete" job status.
Incomplete, as a job status, just means that the dispatcher was unable to
finish all the steps in the job. For instance, suppose we had a test that
required an image to be deployed, booted, and a test run on it. If we tried
to deploy the image and hit a kernel panic on reboot, that is an incomplete
job, because it never made it far enough to run the specified test.
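The distinction could be modeled as two independent fields, roughly like the
sketch below (purely illustrative; the field names and values are not from
the spec):

```python
# Job status: did the dispatcher get through all steps of the job?
# Test result: did the test itself pass? Only meaningful if the job completed.
job_failed_test = {"status": "complete", "test_result": "fail"}
job_kernel_panic = {"status": "incomplete", "test_result": None}


def has_test_result(job):
    """A test result exists only when the dispatcher finished the job."""
    return job["status"] == "complete" and job["test_result"] is not None
```

So a failed test and an incomplete job surface differently: the first is a
problem with the thing under test, the second with the run itself.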
> Link to test results in launch-control
If we tie this closely enough with launch-control, it seems we could just
communicate the job id to the dispatcher so that it gets rolled up with the
bundle. That way the dashboard would have a backlink to the job, and could
create the link to the bundle once it is deserialized. Just a different
option if it's easier. I don't see an obvious advantage to either approach.
Thanks,
Paul Larson