On Tuesday 08 February 2011, SUBASH PATEL wrote:
Sent: Sachin Gupta <sachin.gupta@linaro.org>
You are correct that OMX and V4L2 sit at different levels, one being a user-side API and the other a kernel API. But from the point of view of integrating these APIs into OS frameworks like gstreamer or the Android camera service, they are at the same level.
I see. I was slightly misinterpreting the question; I thought you were asking what interface the cameras should export, not what interface a new application should use.
When writing a camera driver, there should be no question at all: there are literally hundreds of cameras supported with v4l2 drivers now, and all Linux applications use v4l2 to access them. Not providing a v4l2 driver is not an option if you want the camera to be used.
When writing an application, you have the choice between using v4l2 directly or using a wrapper library like gstreamer or omx, depending on your needs. Gstreamer seems to have a lot of traction these days, so it sounds like a reasonable thing to use if you want to use a camera, but of course there are a lot of other projects that don't use gstreamer.
I mean that one will have to implement a gstreamer source plugin based on either v4l2 or OMX.
Wouldn't any v4l2 device just work with gstreamer or other libraries? AFAICT, v4l2 is one of the base plugins in gstreamer, while gst-openmax is currently not even packaged for Ubuntu.
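For instance, something along these lines should be all it takes to get frames from any v4l2 driver into a gstreamer pipeline. This is only a rough, untested sketch: the element names are from the gstreamer-0.10 base/good plugin sets, and /dev/video0 is just an assumed device node.

/* Sketch: capture from a V4L2 device through gstreamer's v4l2src element.
 * /dev/video0 and the downstream elements are assumptions. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
        GstElement *pipeline;
        GMainLoop *loop;
        GError *error = NULL;

        gst_init(&argc, &argv);

        /* v4l2src talks to the kernel v4l2 driver; the rest of the
         * pipeline converts and displays the frames. */
        pipeline = gst_parse_launch(
                "v4l2src device=/dev/video0 ! ffmpegcolorspace ! autovideosink",
                &error);
        if (!pipeline) {
                g_printerr("failed to create pipeline: %s\n", error->message);
                g_error_free(error);
                return 1;
        }

        loop = g_main_loop_new(NULL, FALSE);
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_main_loop_run(loop);          /* runs until interrupted */

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
}

The same pipeline works unchanged with any camera that has a v4l2 driver, which is really the point.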
Also, in the way vendors (STE and TI) have gone about implementing OMX, they completely bypass V4L2. The major reason is code sharing among different OS environments. The kernel side of the OMX implementation just facilitates RPC between the imaging coprocessor and the ARM side.
That approach makes a lot of sense if you are only interested in a single camera but want to be portable across proprietary operating systems.
However, if you are interested in working specifically with the Linux community and supporting more than one camera, it doesn't seem helpful at all.
Writing a v4l2 driver is really easy these days, so if the drivers are still missing for a few of our target systems, someone will just write them eventually.
I think when we speak of OMX, we are referring to the OMX-IL layer. This layer is supported as a middleware component.
Right.
If we have a RAW sensor which produces, say, a Bayer pixel format, we will have to have an image pipe to process it before converting it to one of the RGB/YUV formats. Image pipes involve conversions, resizing, etc. It would be an overhead to do these stages on the ARM, and some vendors have proprietary imaging processors for this. These processors may run a custom RTOS.
They may have built a private IPC layer into the Linux kernel and the proprietary OS. The OMX layer works in such scenarios: a concept called distributed OMX works over an RPC mechanism, so OMX client calls made on the ARM side land on the image processor. There, some proprietary driver(s) will be invoked by the remote OMX component to do the image processing mentioned above.
The user-space wrapper/app, a.k.a. the OMX client, is a set of OMX calls which get routed to the proper OMX component through the OMX core. This is similar to a V4L2 client, but instead of controlling the camera through ioctls, we use the OMX-specific Get/Set methods.
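For illustration, a minimal OMX client that configures a camera port looks something like the following. This is a rough, untested sketch; the component name "OMX.vendor.camera" and port index 0 are hypothetical and vendor-specific.

/* Sketch of an OMX IL client configuring a (hypothetical) camera component.
 * Error handling and buffer/state handling are trimmed for brevity. */
#include <string.h>
#include <OMX_Core.h>
#include <OMX_Component.h>

static OMX_ERRORTYPE on_event(OMX_HANDLETYPE h, OMX_PTR app, OMX_EVENTTYPE ev,
                              OMX_U32 d1, OMX_U32 d2, OMX_PTR evdata)
{
        return OMX_ErrorNone;
}

static OMX_ERRORTYPE on_empty(OMX_HANDLETYPE h, OMX_PTR app,
                              OMX_BUFFERHEADERTYPE *buf)
{
        return OMX_ErrorNone;
}

static OMX_ERRORTYPE on_fill(OMX_HANDLETYPE h, OMX_PTR app,
                             OMX_BUFFERHEADERTYPE *buf)
{
        /* A real client would consume the captured frame here. */
        return OMX_ErrorNone;
}

int main(void)
{
        OMX_CALLBACKTYPE cb = { on_event, on_empty, on_fill };
        OMX_HANDLETYPE camera;
        OMX_PARAM_PORTDEFINITIONTYPE port;

        OMX_Init();
        /* The OMX core routes this to the right component, which may in
         * turn run on the imaging coprocessor via the RPC layer above. */
        if (OMX_GetHandle(&camera, (OMX_STRING)"OMX.vendor.camera",
                          NULL, &cb) != OMX_ErrorNone)
                return 1;

        memset(&port, 0, sizeof(port));
        port.nSize = sizeof(port);
        port.nVersion.s.nVersionMajor = 1;
        port.nVersion.s.nVersionMinor = 1;
        port.nPortIndex = 0;                    /* assumed output port */

        /* Get/Set methods instead of ioctls: read the port definition,
         * adjust it, write it back. */
        OMX_GetParameter(camera, OMX_IndexParamPortDefinition, &port);
        port.format.video.nFrameWidth  = 640;
        port.format.video.nFrameHeight = 480;
        port.format.video.eColorFormat = OMX_COLOR_FormatYUV420Planar;
        OMX_SetParameter(camera, OMX_IndexParamPortDefinition, &port);

        OMX_FreeHandle(camera);
        OMX_Deinit();
        return 0;
}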
I don't see why you couldn't just do the same thing with the v4l2 ioctl interface.
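The format setup from the OMX sketch above, done with plain v4l2, is just a couple of ioctls. Again only a sketch, with /dev/video0 assumed and error handling left out:

/* Sketch: the same get/modify/set format negotiation, using the v4l2
 * ioctl interface. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_format fmt;
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0)
                return 1;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        /* Read the current format, adjust it, write it back. */
        ioctl(fd, VIDIOC_G_FMT, &fmt);
        fmt.fmt.pix.width       = 640;
        fmt.fmt.pix.height      = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* Buffers and streaming would then be set up with
         * VIDIOC_REQBUFS/VIDIOC_QBUF/VIDIOC_STREAMON. */
        close(fd);
        return 0;
}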
From my view, the choice between V4L2 and OMX basically depends on the type of sensor and the presence of dedicated hardware. If we already have a dedicated imaging processor, V4L2 can be absent, and we will have to leverage OMX because of its capabilities. But if we are integrating a new sensor which has a built-in accelerator, it makes sense to reduce the silicon area on the SoC and use V4L2 instead.
I would put it the other way round: all cameras are accessed through V4L2 anyway, because you need a common interface to get the raw pixel data out. The choice whether to use OMX or not then remains in user space, so if you have hardware that can accelerate a specific operation, OMX can sit on top of the V4L2 driver and use it. This has multiple significant advantages:
* All applications that are written against the V4L2 API just work on all hardware, not just all hardware except the one that you care about.
* When there are restrictions in distributing the firmware blob that runs on the DSP (e.g. for free-software only distributions), the user experience is that the hardware works out of the box, while installing the extra (proprietary) package gives some extra performance and battery life, but is not required for essential features.
* Building OMX on top of V4L instead of having it talk to the sensor directly makes it more portable, i.e. it lets you combine arbitrary sensors (think high-resolution USB devices to replace a built-in component) with the same accelerator code. Moreover, you can use the video encoding capability standalone, e.g. for transcoding a video that does not come from the camera.
Arnd