Hi Robert and all,
On Tuesday 08 February 2011 14:48:21 Robert Fekete wrote:
Hi,
Thanks for your ideas.
If I am not mistaken, all subdevices in the ISP media pipeline can be interconnected without the need for ARM intervention. But I could be wrong.
Why not ask Hans Verkuil and Laurent Pinchart? Sorry for bringing you in like this but you are the true experts in v4l2.
No worries, I'm glad to be helpful.
On 8 February 2011 13:42, SUBASH PATEL subash.rp@samsung.com wrote:
Thanks for sharing the slides. They were informative on OMAP3. I was referring to similar hardware.
In slide 20, the green blobs are labelled as drivers. Let's consider an example where a camera sensor is connected on a CSI2 interface.
On every CSI2 interrupt (assuming one per frame), we will have to take the frame out and pass it to each ISP block in the diagram. The ARM will be involved.
If we are speaking of performance like HD/Full-HD @ 30fps, imagine the processing this requires from the ARM processor. Every time, it has to take the frame from one component and pass it to the next until the resizer produces a final frame. From there it has to go to an encoder for encoding too.
Whether the ARM processor is involved largely depends on the hardware. The OMAP3 ISP can process video frames from the CSI2 block to the resizer without going through memory and without generating interrupts for intermediate results. The ARM processor will only be interrupted when the resizer has finished processing the frame.
But if we use a dedicated imaging processor, which runs on its own and provides the desired frames (the last yellow box, the ISP resizer output), the ARM can concentrate on something else in the meantime.
We cannot do such a thing with V4L2. As far as I know, since this is a two-processor environment, we require some client/server architecture.
There's some misunderstanding here. V4L2 is a kernel <-> userspace API used to configure the image capture device and manage image buffers. Whether the hardware is directly mapped in the ARM processor memory space, or requires communication with another processor, is an implementation detail.
OMX comes in handy in these cases as it has all of the client, core and component parts. That's why GStreamer is seen as a broker by many media applications.
GStreamer will forward controls to V4L2 or OMX as appropriate, depending on how the hardware delivers the frames.
Nothing prevents you from writing a V4L2 driver that communicates with a second CPU through an IPC mechanism. Once again, V4L2 is just an API, and with the recent additions (such as the media controller API, which will be merged in 2.6.39) you should be able to control any kind of hardware.
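For illustration, here is roughly how such an all-hardware pipeline could be wired up from userspace with the media-ctl test tool once the media controller API is in place. This is a sketch: the entity and pad names below are assumptions based on the OMAP3 ISP; print the real graph on your board with `media-ctl -p` before relying on them.

```shell
# Enable ([1]) the links from the CSI2 receiver through the CCDC and
# previewer to the resizer. Entity/pad names are illustrative.
media-ctl -d /dev/media0 -l '"OMAP3 ISP CSI2a":1 -> "OMAP3 ISP CCDC":0 [1]'
media-ctl -d /dev/media0 -l '"OMAP3 ISP CCDC":2 -> "OMAP3 ISP preview":0 [1]'
media-ctl -d /dev/media0 -l '"OMAP3 ISP preview":1 -> "OMAP3 ISP resizer":0 [1]'
```

Frames then flow between the blocks in hardware, and the application only dequeues the final buffers from the resizer's video node with the usual V4L2 calls.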
Do you have any OMX-based hardware in mind for which you think the V4L2 and media controller APIs couldn't be used efficiently?