Hello,
I am working on enabling a USB camera on OMAP4460 Android ICS. Can anyone who has done this before help me? I understand we need to enable V4L2 and modify the CameraHAL to use V4LCameraAdapter, but I am not sure exactly what changes need to be made.
Any help is greatly appreciated.
Thanks
Jag
On 29 March 2012 10:04, Andres Rodriguez <andresx7(a)gmail.com> wrote:
> Hi,
>
> I found your version of the TinyHAL project for PandaBoard and I have a
> question. I wanted to configure TinyHAL for another device, but I am unsure
> on how to write the config xml file. Where do you get the names for all the
> controls in the static route?
>
The controls in the XML config are the ALSA controls needed to set up for
playback. The easiest method for you would be to find the closest Ubuntu
UCM config and port it to the XML format. What dev board are you using? It
may already be supported on Ubuntu.
Also, I am working on a new format that will hopefully unify the various
UCM config files across Ubuntu and Android.
Let me know if you end up with a working XML file for your board; I can
make it available to others.
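For illustration, a static route is essentially a list of ALSA control
settings, i.e. the control names you see from `amixer controls` /
`amixer contents` paired with the values that route needs. The element and
attribute names below are hypothetical, and the control names are
board-specific; take the real schema from the PandaBoard config shipped
with TinyHAL:

```xml
<!-- Hypothetical sketch: each <ctl> pairs an ALSA control name with the
     value it should take for this route. The control names come from the
     board's codec driver and can be listed with 'amixer contents'. -->
<path name="speaker-playback">
  <ctl name="DAC1 Playback Switch" val="1"/>
  <ctl name="Speaker Mixer Volume" val="2"/>
  <ctl name="Speaker Playback Volume" val="26"/>
</path>
```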
>
> Should I be specifying here the devices that I see when I invoke amixer?
>
> Regards,
> Andres
>
--
Kurt Taylor (irc krtaylor)
Linaro Multimedia
Linaro.org <http://www.linaro.org/> │ Open source software for ARM SoCs
Follow Linaro: Facebook <http://www.facebook.com/pages/Linaro> |
Twitter <http://twitter.com/#!/linaroorg> |
Blog <http://www.linaro.org/linaro-blog/>
Hi Bryan,
I have written a short writeup of what was required to integrate the
libjpeg-turbo benchmark with LAVA. I have also updated the writeup in the
wiki:
https://wiki.linaro.org/Internal/People/RonyNandy/LAVAforMultimedia
Feel free to update and modify it with new findings and edits. We can
move the wiki page out of People once it is in better shape.
Steps to write a test definition for a Multimedia Component for LAVA
1) A test definition, as a Python script, needs to be written for the
tests/benchmarks you want to run on LAVA.
A simple example is the test definition that was written for
libjpeg-turbo. This example is for Android:
http://bazaar.launchpad.net/~liuyq0307/lava-android-test/tjbench/view/head:…
I suggest you start with a simpler one to be run on Ubuntu. You don't
need a target board to develop the script; developing
it to run x86 code on the host is fine for script development. The
assumption here is that the x86 binary behaves
the same as the Android/Ubuntu ARM port.
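The core of such a test definition is a run step plus a parser that turns
the benchmark output into result records. The snippet below is a
standalone sketch of the parse logic only, not the actual lava-test API;
the benchmark name, output format, and field names are assumptions for
illustration:

```python
import re

# Hypothetical benchmark output: one "<case>: <value> <units>" line per result.
SAMPLE_OUTPUT = """\
tjbench-compress: 42.7 Mpixels/sec
tjbench-decompress: 55.1 Mpixels/sec
"""

# A lava-test definition expresses this as a single regex with named
# groups; here we apply it by hand to show what the parser produces.
PATTERN = r"^(?P<test_case_id>[\w-]+):\s+(?P<measurement>[\d.]+)\s+(?P<units>\S+)"

def parse_results(output):
    """Return one result dict per matching line of benchmark output."""
    results = []
    for line in output.splitlines():
        m = re.match(PATTERN, line)
        if m:
            rec = m.groupdict()
            rec["result"] = "pass"  # benchmarks report measurements, not failures
            results.append(rec)
    return results

if __name__ == "__main__":
    for rec in parse_results(SAMPLE_OUTPUT):
        print(rec)
```

Getting the pattern right on the host against captured benchmark output is
most of the work; the framework then handles running and collecting.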
2) The test definition can be written along the lines of the above example
and validated on the host system using, for example:
$ lava-test example.py
Please run the following commands on an Oneiric host to install
the LAVA utilities:
sudo add-apt-repository ppa:linaro-validation/ppa
sudo apt-get update
sudo apt-get install lava-dashboard
sudo apt-get install lava-dispatcher
sudo apt-get install bzr python-distutils-extra python-testtools
python-parted command-not-found python-yaml python-beautifulsoup
python-wxgtk2.6
sudo apt-get install lava-test
3) Debug the script using lava-test and make sure it does not throw any
errors.
The lava-test commands are:
lava-test uninstall example
lava-test install example
lava-test run example
lava-test parse example
4) The Python script needs to be sent (or a pull request made) to the
Linaro validation team to be merged on the LAVA server so that it gets
displayed on the dashboard.
5) We have yet to run our own LAVA server to do the final merge
locally, but it is on the cards for Jan 12.
Regards
Rony
On 12/01/2011 04:02 AM, Bryan Honza wrote:
> I suppose I found the Ubuntu 11.11 release image and booted it on the Panda:
>
> I ended up using the instructions and panda desktop image from here:
> http://releases.linaro.org/images/11.11/oneiric/ubuntu-desktop/
>
> I found these instructions first,
> http://releases.linaro.org/11.11/ubuntu/leb-panda/ but the media
> create tool didn't work. I spent much time getting qemu to update, only
> to find out media create still wanted to update my system
> packages... I didn't want to go there.
>
> I have yet to figure out where the source code for this release is
> obtained, so I just have binaries for now :)
>
> -Bryan
>
>
>
>
> On 30 November 2011 13:32, Bryan Honza <bryan.honza(a)linaro.org> wrote:
>
> Hi Ilias,
> Yes that's fine...my question was because I saw this note
> indicating that LAVA Test isn't used on Android:
>
> "1) LAVA Test (formerly known as Abrek) is the test runner
> framework. It is on traditional Linux images -- Android is tested
> differently."
> from https://launchpad.net/lava
>
> Thanks,
> Bryan
>
>
> On 30 November 2011 12:51, Ilias Biris <ilias.biris(a)linaro.org> wrote:
>
> Hi Bryan
>
> Eventually we should be using both, but predominantly
> Ubuntu, I guess. Android is usually added after we have done work on the
> Ubuntu side.
>
> Hope this makes sense :-)
>
> Ilias
>
> On 30/11/11 18:56, Bryan Honza wrote:
> > One bit of info that would help me is: should I be using
> Ubuntu or Android?
> >
> >
> --
> Ilias Biris ilias.biris(a)linaro.org
> Project Manager, Linaro
> M: +358504839608, IRC: ibiris, Skype: ilias_biris
> Linaro.org│ Open source software for ARM SoCs
>
>
>
--
Rony Nandy
Multimedia Working Group,
www.linaro.org │ Open source software for ARM SoCs
Hello,
The goal of these two patches is to add debug and trace capabilities to the
ongoing CMA development.
The first patch allows dumping the CMA bitmap status with a simple "cat
/sys/kernel/debug/cma" command.
The second adds trace events that can be used for performance measurement and/or
logging with trace tools:
- to enable them: "echo 1 > /sys/kernel/debug/tracing/events/cma/enable"
- to get the log: "cat /sys/kernel/debug/tracing/trace"
Regards,
Benjamin
--
Benjamin Gaignard
Multimedia Working Group
Linaro.org <http://www.linaro.org/> │ Open source software for ARM SoCs
Follow Linaro: Facebook <http://www.facebook.com/pages/Linaro> |
Twitter <http://twitter.com/#!/linaroorg> |
Blog <http://www.linaro.org/linaro-blog/>
Hi,
I'm trying to make a SIP stack run on a low-end processor (capable of 720 MIPS). The problem is that the stack's echo canceller (EC) is not NEON-optimized, and with a 256 ms EC tail I'm exhausting the CPU.
On one of the forums, I came across NEON optimization trials for the Speex AEC by Linaro. Was that project ever completed? Is there any other work that could help me?
Thank you for answering.
Regards,
Sangram
2011/10/8 Feng Wei <feng.wei(a)linaro.org>:
> I tested the DTS file with 'time avconv -i test.dts -f null -'; the i.MX53 still
> takes more than 20 s.
> Following is mx53's cpuinfo
> root@linaro-desktop:/home/linaro# cat /proc/cpuinfo
> Processor : ARMv7 Processor rev 5 (v7l)
> BogoMIPS : 999.42
> Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3
> CPU implementer : 0x41
> CPU architecture: 7
> CPU variant : 0x2
> CPU part : 0xc08
> CPU revision : 5
>
> Hardware : Freescale MX53 LOCO Board
> Revision : 53020
> Serial : 0000000000000000
>
> Is there any difference in co-processor between beagle and mx53?
Both are Cortex-A8; the Beagle-xM has core revision r3p2
while the MX53 has r2p5, but that should not make much of a
difference.
Are the times you've reported "real" or "user" times?
Do you also have a PandaBoard?
--
Mans Rullgard / mru
Hi Kurt,
Please check the page at
https://wiki.linaro.org/WorkingGroups/Middleware/Multimedia/Specs/1111/Audi…
If there are no problems, I will send it to others like broonie, liam,
colin, and linaro-dev.
Thank you
--
Wei.Feng (irc wei_feng)
Linaro Multimedia Team
Linaro.org │ Open source software for ARM SoCs
Follow Linaro: Facebook | Twitter | Blog
2011/9/30 Feng Wei <feng.wei(a)linaro.org>:
> 2011/9/30 Mans Rullgard <mans.rullgard(a)linaro.org>:
>> 2011/9/30 Feng Wei <feng.wei(a)linaro.org>:
>>> I rebuilt libav with NEON enabled and got new benchmark numbers, as below:
>>>
>>> time avconv -i SourceCode.dts -f s16le a.pcm
>>> panda -- 4.135s (53% better than non-neon version 6.320s)
>>> mx53 -- 19.054s (165% better than non-neon version 50.526s)
>>>
>>> So, as mru said, DTS is mostly NEON-optimized. Although the result is not so
>>> reasonable on the A8 CPU, I think we don't need to put it into the next cycle.
>>
>> Which revision of libav did you use for these benchmarks? Did it include the
>> optimisation I added a couple of days ago (baf6b738)? With the latest version,
>> I get almost exactly the same speed on Beagle-xm and Panda. Could you share
>> your test file in case there's something special about it?
>
> Although I got the latest git version today, including baf6b738, the
> results are the same as with my last version.
> I have attached the DTS file.
With that file I get 2.3s on Beagle-xm, 2.2s on Panda, using latest libav git
built with Linaro gcc 4.5-2011.09.
--
Mans Rullgard / mru
Hi Alexander,
Based on the mails, I have tried to capture the requirements in a
block diagram. Please let me know if there are any mistakes in the diagram.
I wanted to add a few points regarding the final comparison block, for
which speech recognition is being considered.
I think the comparison can easily be done using PSNR, which will
effectively compare two streams for differences in the audio
samples. This kind of measurement is quite mature in audio codecs and
can work here as well.
Speech recognition has its own problems: the recognition engine needs
training, and it is notoriously error-prone. This was my observation
while working on an ASR engine. So in the end we may find ourselves
testing Sphinx ;) rather than the Panda audio. But it is definitely worth a try.
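To make the PSNR idea concrete, here is a minimal sketch of comparing two
sample streams. It is pure Python and assumes 16-bit samples that are
already time-aligned and of equal length; real jack-loopback captures
would first need level and delay alignment:

```python
import math

def psnr(reference, test, max_amplitude=32767.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of audio samples (e.g. signed 16-bit PCM)."""
    if len(reference) != len(test):
        raise ValueError("streams must be the same length")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical streams
    return 10.0 * math.log10(max_amplitude ** 2 / mse)

if __name__ == "__main__":
    # 100 ms of a 440 Hz tone at 48 kHz, plus a copy with small deterministic "noise".
    clean = [int(10000 * math.sin(2 * math.pi * 440 * n / 48000)) for n in range(4800)]
    noisy = [s + ((n % 7) - 3) for n, s in enumerate(clean)]
    print("PSNR vs itself:", psnr(clean, clean))  # inf
    print("PSNR vs noisy copy: %.1f dB" % psnr(clean, noisy))
```

A pass/fail threshold (e.g. "PSNR above N dB") would then stand in for the
subjective quality judgement.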
Block Diagram
http://www.gliffy.com/publish/2944818/
Regards
Rony
-------- Original Message --------
Subject: Re: end-to-end audio testing (jacks)
Date: Tue, 27 Sep 2011 18:25:05 +0200
From: Alexander Sack <asac(a)linaro.org>
To: Kurt Taylor <kurt.taylor(a)linaro.org>
CC: linaro-multimedia(a)lists.linaro.org, David Zinman
<david.zinman(a)linaro.org>
On Tue, Sep 27, 2011 at 5:16 PM, Kurt Taylor <kurt.taylor(a)linaro.org> wrote:
On 27 September 2011 09:18, Alexander Sack <asac(a)linaro.org> wrote:
Hi,
we are looking at landing more and more full-stack test cases
for our automated board support status tracking efforts.
While for some hardware ports it's hard to test whether a port
really gets a proper signal, etc., we feel that for audio this might be
relatively straightforward: we got the idea that we could
connect a cable from jack out to jack in in the lab and then have
a test case that plays something using aplay and checks that it
gets a proper input/signal on the jack in.
This could be done at the ALSA level and later at the PA level (for Ubuntu).
A more advanced idea that came up when discussing options was to
use open-source speech recognition like Sphinx to go
one step further and see if the output we produce yields
roughly the same input. For that we could play one or two words,
use speech recognition to parse them, and check that the resulting
text is stable/expected.
What do you think?
These are really good ideas. I had started a discussion with Torez
several months ago about an automated test for audio. My idea at
the time was to play a sine wave at a particular frequency and use or
hack one of the tuner/frequency-analysis apps to detect the frequency. If
the signal was too garbled or distorted, the app wouldn't recognize the
frequency.
As you know, sound quality is very subjective and depends on the
cables, speakers, amp, etc. I like the speech recognition idea as
well, for the same reasons. It might actually be a better test of
the quality.
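A minimal sketch of that frequency-detection approach, with no
dependencies (naive O(N^2) DFT; the sample rate, tone, and window length
are arbitrary choices for illustration):

```python
import cmath
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest DFT bin below Nyquist.
    Naive O(N^2) DFT: fine for short analysis windows, no dependencies."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):          # skip DC, stop at Nyquist
        acc = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                  for i, s in enumerate(samples))
        mag = abs(acc)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n

if __name__ == "__main__":
    rate, freq, n = 48000, 1000, 480    # 10 ms window -> 100 Hz resolution
    tone = [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]
    print(dominant_frequency(tone, rate))  # -> 1000.0
```

A loopback test would capture from jack-in while aplay plays the tone, run
this detector over the capture, and fail if the detected frequency drifts
from the played one.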
Right. I think it would be hard to measure real audio quality, but if we
get speech recognition going we would at least know that the input was
similar enough to what we played.
I think some experiments with pocketsphinx would make sense to see how
easy that would be. I am happy to create a blueprint for the first
investigation steps for your backlog with a quick outline.
Would MMWG be able to take experimenting and implementing such
end-to-end audio test into their 11.10 work list?
I think this is a really good idea to explore. Could we also maybe
use camera and face recognition when we hack a pandaboard to do
that? Hm...
Psssst ... I wanted to keep that idea back for a bit :).
--
Alexander Sack
Technical Director, Linaro Platform Teams
http://www.linaro.org | Open source software for ARM SoCs
http://twitter.com/#!/linaroorg -
http://www.linaro.org/linaro-blog