2011/10/8 Feng Wei <feng.wei(a)linaro.org>:
> I tested the DTS file with 'time avconv -i test.dts -f null -'; the
> i.MX53 still takes more than 20s.
> The mx53's cpuinfo follows:
> root@linaro-desktop:/home/linaro# cat /proc/cpuinfo
> Processor : ARMv7 Processor rev 5 (v7l)
> BogoMIPS : 999.42
> Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3
> CPU implementer : 0x41
> CPU architecture: 7
> CPU variant : 0x2
> CPU part : 0xc08
> CPU revision : 5
>
> Hardware : Freescale MX53 LOCO Board
> Revision : 53020
> Serial : 0000000000000000
>
> Is there any difference in the co-processors between the Beagle and the mx53?
Both are Cortex-A8, but the Beagle-xM has core revision r3p2 while the
mx53 has r2p5; that should not make much of a difference.
The times you've reported, are they "real" or "user" times?
Did you also have a pandaboard?
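A minimal sketch for collecting both at once (assuming the test.dts
file above), using os.wait4 to get the child's CPU time alongside the
wall-clock time:

    import os
    import subprocess
    import time

    cmd = ["avconv", "-i", "test.dts", "-f", "null", "-"]

    start = time.time()
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    _, _, rusage = os.wait4(proc.pid, 0)  # blocks until avconv exits
    real = time.time() - start

    # "real" is wall-clock time; "user" is CPU time in the decoder itself.
    print("real %.3fs  user %.3fs  sys %.3fs"
          % (real, rusage.ru_utime, rusage.ru_stime))

A large gap between "real" and "user" would point at I/O or scheduling
rather than the decoder.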
--
Mans Rullgard / mru
Hi Kurt,
Please check the page at
https://wiki.linaro.org/WorkingGroups/Middleware/Multimedia/Specs/1111/Audi…
If there are no problems, I will send it out to the others (broonie,
liam, colin) and to linaro-dev.
Thank you
--
Wei.Feng (irc wei_feng)
Linaro Multimedia Team
Linaro.org │ Open source software for ARM SoCs
Follow Linaro: Facebook | Twitter | Blog
2011/9/30 Feng Wei <feng.wei(a)linaro.org>:
> 2011/9/30 Mans Rullgard <mans.rullgard(a)linaro.org>:
>> 2011/9/30 Feng Wei <feng.wei(a)linaro.org>:
>>> I rebuilt libav with NEON enabled and got new benchmarks, as below:
>>>
>>> time avconv -i SourceCode.dts -f s16le a.pcm
>>> panda -- 4.135s (53% faster than the non-NEON version's 6.320s)
>>> mx53 -- 19.054s (165% faster than the non-NEON version's 50.526s)
>>>
>>> So, as mru said, the DTS decoder is already mostly NEON-optimised.
>>> Although the speed is still not great on the A8 CPUs, I think we
>>> don't need to put it into the next cycle.
>>
>> Which revision of libav did you use for these benchmarks? Did it include the
>> optimisation I added a couple of days ago (baf6b738)? With the latest version,
>> I get almost exactly the same speed on Beagle-xm and Panda. Could you share
>> your test file in case there's something special about it?
>
> Although I got the latest git version today, including baf6b738, the
> results are the same as with my last version.
> I've attached the DTS file.
With that file I get 2.3s on the Beagle-xM and 2.2s on the Panda, using
the latest libav git built with Linaro gcc 4.5-2011.09.
--
Mans Rullgard / mru
Hi Alexander,
Based on the mails, I have tried to capture the requirements in a block
diagram. Please let me know if there are any mistakes in the diagram.
I wanted to add a few points regarding the final comparison block,
which is currently proposed to be done using speech recognition.
I think the comparison can be done very easily using PSNR, which
effectively compares two streams for differences in the audio samples.
This kind of measurement is quite mature in audio codecs and can work
here as well.
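A rough sketch of such a PSNR check, assuming both streams have been
decoded to raw 16-bit mono PCM at the same rate (e.g. with avconv's
"-f s16le") and are sample-aligned; the file names are illustrative:

    import numpy as np

    def psnr_db(ref_path, test_path):
        # Raw signed 16-bit little-endian mono PCM on both sides.
        ref = np.fromfile(ref_path, dtype=np.int16).astype(np.float64)
        tst = np.fromfile(test_path, dtype=np.int16).astype(np.float64)
        n = min(len(ref), len(tst))
        mse = np.mean((ref[:n] - tst[:n]) ** 2)
        if mse == 0.0:
            return float("inf")  # streams are bit-exact
        peak = 32767.0  # full scale for signed 16-bit audio
        return 10.0 * np.log10(peak * peak / mse)

    print("PSNR: %.2f dB" % psnr_db("played.pcm", "captured.pcm"))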
Speech recognition, on the other hand, has its own set of problems
around training the recognition engine, and it is notoriously
error-prone. This was my observation while working on an ASR engine. So
we may end up testing Sphinx ;) rather than the Panda audio. But it is
definitely worth a try.
Block Diagram
http://www.gliffy.com/publish/2944818/
Regards
Rony
-------- Original Message --------
Subject: Re: end-to-end audio testing (jacks)
Date: Tue, 27 Sep 2011 18:25:05 +0200
From: Alexander Sack <asac(a)linaro.org>
To: Kurt Taylor <kurt.taylor(a)linaro.org>
CC: linaro-multimedia(a)lists.linaro.org, David Zinman
<david.zinman(a)linaro.org>
On Tue, Sep 27, 2011 at 5:16 PM, Kurt Taylor <kurt.taylor(a)linaro.org> wrote:
On 27 September 2011 09:18, Alexander Sack <asac(a)linaro.org> wrote:
Hi,
we are looking at landing more and more full stack test cases
for our automated board support status tracking efforts.
While for some hardware ports it's hard to test whether a port really
gets a proper signal etc., we feel that for audio this might be
relatively straightforward: we got the idea that we could connect a
cable from jack out to jack in in the lab and then have a testcase
that plays something using aplay and checks that it gets a proper
input signal on the jack in.
This could be done at the ALSA level first and later at the PA level
(for Ubuntu).
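A rough sketch of that ALSA-level check (device names, the 3 s
duration, tone.wav, and the RMS threshold are all assumptions and
would need tuning per board):

    import subprocess
    import numpy as np

    # Record from the jack in while playing a test tone on the jack out.
    rec = subprocess.Popen(["arecord", "-D", "hw:0", "-f", "S16_LE",
                            "-r", "48000", "-d", "3", "capture.wav"])
    subprocess.run(["aplay", "-D", "hw:0", "tone.wav"], check=True)
    rec.wait()

    # Skip the 44-byte WAV header and check signal energy; an unplugged
    # cable or muted mixer shows up as near-zero RMS.
    samples = np.fromfile("capture.wav", dtype=np.int16, offset=44)
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    print("capture RMS %.1f: %s"
          % (rms, "PASS" if rms > 100.0 else "FAIL"))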
A more advanced idea that came up when discussing options was to use
open-source speech recognition like Sphinx to go one step further and
see if the output we produce comes back as roughly the same input. For
that we could play one or two words, use speech recognition to parse
the capture, and check that the resulting text is stable/expected.
What do you think?
These are really good ideas. I had started a discussion with Torez
several months ago about an automated test for audio. My idea at the
time was to play a sine wave at a particular frequency and use or hack
one of the tuner/frequency-analysis apps to detect the frequency. If
the signal was too garbled or distorted, the app wouldn't recognize
the frequency.
As you know, sound quality is very subjective and depends on the
cables, speakers, amp, etc. I like the speech recognition idea as
well, for the same reasons. It might actually be a better test of the
quality.
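A minimal sketch of the sine-detection approach, using an FFT to find
the dominant frequency; the 440 Hz tone and 48 kHz rate are
assumptions, and the synthesized buffer stands in for a real capture:

    import numpy as np

    def dominant_hz(samples, rate):
        # Window to reduce spectral leakage, then take the strongest bin.
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        return np.fft.rfftfreq(len(samples), 1.0 / rate)[np.argmax(spectrum)]

    rate = 48000
    t = np.arange(rate) / rate
    captured = np.sin(2 * np.pi * 440.0 * t)  # stand-in for capture data

    detected = dominant_hz(captured, rate)
    print("detected %.1f Hz" % detected)
    assert abs(detected - 440.0) < 5.0, "tone missing or too distorted"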
Right. I think it would be hard to measure real audio quality, but if
we get speech recognition going we would at least know that the input
was similar enough to what we played.
I think some experiments with pocketsphinx would make sense, to see how
easy that would be. I am happy to create a blueprint with a quick
outline of the first investigation steps for your backlog.
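A quick sketch of what such an experiment could look like, driving the
pocketsphinx_continuous command-line tool over a captured clip; the
file name and expected word are assumptions:

    import subprocess

    expected = "linaro"  # the word played through the loopback cable
    result = subprocess.run(
        ["pocketsphinx_continuous", "-infile", "capture.wav"],
        capture_output=True, text=True)

    transcript = result.stdout.lower()
    print("PASS" if expected in transcript
          else "FAIL: got %r" % transcript)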
Would MMWG be able to take experimenting and implementing such
end-to-end audio test into their 11.10 work list?
I think this is a really good idea to explore. Could we also maybe use
a camera and face recognition once we hack a pandaboard to do that?
Hm...
Psssst ... I wanted to keep that idea back for a bit :).
--
Alexander Sack
Technical Director, Linaro Platform Teams
http://www.linaro.org | Open source software for ARM SoCs
http://twitter.com/#!/linaroorg -
http://www.linaro.org/linaro-blog