Hi All,
In the Multimedia WG we have been posed a question regarding the best way to expose a low-level API for the camera, so this is mainly a question about the pros and cons of V4L2 and OMX relative to each other. To involve a wider community in this topic I am floating this mail on linaro-dev. Please share your views and experiences, and please add anybody else to this mail who can provide valuable input on this.
Thanks, Sachin
On Monday 07 February 2011, Sachin Gupta wrote:
In Multimedia WG we have been posed with a question regarding best way
to expose low level API for camera.so this a questions mainly about pros and cons of v4l2 and omx over each other.So to involve a wider community to discuss this topic I am floating this mail on linaro-dev.Please share your view/experiences.Also please involve any body else in this mail who can provide valuable inputs on this.
I've had to look up what "omx" actually stands for [1][2], but from an outsider's view, they don't seem to be mutually exclusive or even competing interfaces. v4l2 is the interface you use to get at camera data, in whatever format the camera gives you. There are no alternatives to that. OpenMax gives you a way to accelerate video codecs, which is good, but it sits a layer higher up in the stack. Supporting omx is probably a good idea, but it would be totally optional.
Arnd
[1] http://www.khronos.org/openmax/ [2] http://www.freedesktop.org/wiki/GstOpenMAX
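(To make the V4L2 side of Arnd's point above a bit more concrete, here is a minimal sketch of what getting camera frames through V4L2 looks like from userspace. This is only illustrative: the device node, resolution and pixel format are placeholders, and error handling is omitted.)

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);	/* placeholder device node */
	unsigned int i;

	/* Negotiate a frame format with the driver. */
	struct v4l2_format fmt;
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = 640;
	fmt.fmt.pix.height = 480;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* Ask for driver-allocated buffers and mmap them
	 * (assuming the driver grants the four buffers we request). */
	struct v4l2_requestbuffers req;
	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	void *planes[4];
	for (i = 0; i < req.count; i++) {
		struct v4l2_buffer qb;
		memset(&qb, 0, sizeof(qb));
		qb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		qb.memory = V4L2_MEMORY_MMAP;
		qb.index = i;
		ioctl(fd, VIDIOC_QUERYBUF, &qb);
		planes[i] = mmap(NULL, qb.length, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, qb.m.offset);
		ioctl(fd, VIDIOC_QBUF, &qb);	/* hand the buffer to the driver */
	}

	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	ioctl(fd, VIDIOC_STREAMON, &type);

	/* Dequeue one filled frame, then give the buffer back. */
	struct v4l2_buffer buf;
	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_MMAP;
	ioctl(fd, VIDIOC_DQBUF, &buf);
	/* planes[buf.index] now holds buf.bytesused bytes of image data */
	ioctl(fd, VIDIOC_QBUF, &buf);

	ioctl(fd, VIDIOC_STREAMOFF, &type);
	return 0;
}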
Bringing in my boys.
Robert, Linus, what say you?
Arnd,
You are correct that OMX and V4L2 sit at different levels, one being a userside API and the other a kernel API. But from the point of view of integrating these APIs into OS frameworks like GStreamer or the Android camera service, they are at the same level: one will have to implement a GStreamer source plugin based on either V4L2 or OMX. Also, the way vendors (ST-Ericsson and TI) have gone about implementing OMX, they completely bypass V4L2, the major reason being code sharing among different OS environments. The kernel side of the OMX implementation just facilitates RPC between the imaging coprocessor and the ARM side.
Sachin
On Tue, Feb 8, 2011 at 9:30 AM, Lee Jones lee.jones@linaro.org wrote:
Robert, Linus, what say you?
[I'm looping in Harald from Ericsson who worked with Khronos so he can correct me for all inevitable mistakes in trying to understand how Khronos works.]
I mainly come from the kernel-for-the-kernel's-sake side of things, and my remarks are basically the same as Arnd's: what is OMX doing handling hardware? If the OMX stuff *did* have some business with the hardware, it should be a drivers/omx or lib/omx subsystem and everyone could make use of it from there, and that's it. SCSI and VGA and whatever hardware protocol is in the kernel, after all.
That's the simple answer. However I also know from experience why the Khronos Group folks don't see things this way.
Their problem is this: a plethora of operating systems, including Linux, Symbian, Windows Mobile, ITRON, VxWorks, QNX, OSE and a boatload of internal NIH systems, some of them with nonexistent or half-brewed frameworks to handle the stuff that is done in kernel subsystems like V4L2, /media or /sound (ALSA SoC). Their objective is to unify media pipelines across these desperately heterogeneous environments.
First premise of Khronos and OpenMAX: by describing very strict APIs for multimedia streams they set a stage where different companies can develop one implementation each and compete with each other. So, define API and compete by implementation.
Second premise: for each implementation, develop once, deploy everywhere. So each company creates a large body of code, which they port to every OS they need to support. That one of these OSes happens to be Linux is just a technical detail. For OpenMAX, it's just one world where the important stuff (i.e. the portable code and the Open* APIs) can live and thrive.
These basic premises permeate the work in Khronos so deeply that they become a paradigm, or ontology if you like: they define the world in which Open* development takes place. If you buy into one part of the Khronos APIs you usually assume a world where everything is done the Khronos way, with no compromises. E.g. for OpenMAX you know that OpenMAX IL, AL and DL are different things, but surely you expect that you will implement all of it down to the very last bit. (Yes, I know that this does not happen in practice, but such is the Khronos ontology.)
Some may think that OpenMAX is just an obtrusive layer of APIs adding nothing but complexity. If you live in an all-Linux world where, say, the GStreamer, ALSA and V4L2 APIs go hand in hand, this is true. But this is not the world where Khronos lives. Anyway, that's a separate subject. Let us just say that if Linux and GStreamer dominated the world there would be no need for OpenMAX; likewise if, say, Windows Mobile dominated the world and had some framework for multimedia, for that matter. Another implicit premise of the OpenMAX work is obviously that there will always be many OSes involved in this game.
OpenMAX does not really conflict with the kernel way of doing business at all. Its natural place is within stuff like GStreamer or Android's counterpart, where it may or may not be a perfect fit - this is a matter of debate, and admittedly the Linaro multimedia working group has spent some time on it. The way I understand what they do is that they implement interfaces in, say, GStreamer, like gst-openmax and Bellagio, so that the big portable OpenMAX shebang that every vendor creates can easily integrate into Linux.
The problem comes when you have a few DSP/ISP processors and need some kernel services, and then there's this V4L2 API you're supposed to use to get things done in the middle of your world, and in some parts it's not even fully defined; some things in OpenMAX have no counterpart that maps 1-1 to V4L2, for example. From an OpenMAX point of view that's just an obstacle making your code bundle less portable to VxWorks and more heterogeneous, requiring new .c files and #ifdefs and whatever.
So naturally, if you can get away with implementing, say, your own userspace driver for video or audio sinks, and this can be reused on VxWorks and ITRON, you're happy, because that means less hassle. (In theory.)
The fact that the Linux community prefers a V4L2 API for e.g. a camera is totally irrelevant from an OpenMAX point of view; that's just a technical detail of Linux. Naturally, the Khronos mindset is that apps that need photographic images should set up the OpenMAX IL chain and preferably conform to the OpenMAX AL API. However, how that is then handled behind the API is none of OpenMAX's business.
The goal of the portable OpenMAX is to make things like V4L2 irrelevant to application developers. Compare how, say, Qt in theory makes it irrelevant whether you develop for Linux, Symbian or Windows. There is not one word in the specs that mandates that you use the native operating system APIs or frameworks behind the scenes at all, not even in the lowermost DL layer. (There is some portability glue in OpenKODE though; you can compare it to the portability code found in stuff like glib or nspr.) Anything providing the services will do. Needless to say, I don't think this is the view of, say, the V4L2 or ALSA developers. Or even the TI people working on drivers/staging/tidspbridge.
I would say that the problem is exactly this: both are defining how to do things with media streams, but the bridge in between is undefined, which makes for all kinds of strange ad hoc solutions, usually the most comfortable one from the implementer's point of view. They signed up for doing the OpenMAX API, so they recognize that they have to develop and deploy that, but some of them seem to think that, even though they are members of Linaro, they didn't sign up for the ALSA or V4L2 APIs. Is that really so?
I leave the question open, because there was an inflamed debate on another very related subject the other week, namely OpenGL ES vs 3D hardware APIs like DRI/DRM. Again a Khronos API, similar ideas of portable, proprietary, competing code bodies under a standardized API, and no reuse of the community interfaces.
I don't know if Khronos and Linaro could talk about this, say come up with something like "if you're a member of both, you're actually supposed to use the Linux APIs when implementing Khronos stuff" - that kind of high-level politics is not my department. Our problem is that right now the inverse of that statement is assumed. Couldn't we at least say that the native Linux APIs are "preferable", or something like that?
Yours, Linus Walleij
I agree with your mail.
Under Linux, APIs like ALSA and V4L give access to the multimedia hardware, so if you want to have your hardware drivers merged in the Linux kernel, those APIs are the way to go. OpenMax drivers will never be merged. Instead, the OpenMax framework should use the V4L/ALSA drivers to access the hardware.
Exceptions are DSPs/processors. While it is definitely possible to use V4L2 there as well, in practice I don't see this happening anytime soon. It would be a very interesting experiment though.
Of course, if you don't care about getting drivers in the kernel, then I won't stop you from using OpenMax. Personally I think that attempting to write a generic framework like OpenMax is highly problematic due to the wildly different video hardware implementations and the many different types of features.
BTW, if there are features missing in V4L2 that are needed to write an OpenMAX framework on top of it, then please let us know.
Regards,
Hans
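(As a rough sketch of what the DSP experiment mentioned above could look like from userspace, assuming the codec on the coprocessor were exposed as a single memory-to-memory style V4L2 node: the device path and pixel formats here are placeholders, and buffer setup and streaming are elided.)

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Hypothetical userspace view of a DSP video decoder exposed as a
 * memory-to-memory V4L2 device: the compressed bitstream is queued on
 * the OUTPUT side, decoded frames are dequeued from the CAPTURE side
 * of the same file handle. */
static int open_dsp_decoder(void)
{
	int fd = open("/dev/video1", O_RDWR);	/* placeholder node */
	struct v4l2_format out, cap;

	memset(&out, 0, sizeof(out));
	out.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	out.fmt.pix.pixelformat = V4L2_PIX_FMT_MPEG;	/* compressed input */
	ioctl(fd, VIDIOC_S_FMT, &out);

	memset(&cap, 0, sizeof(cap));
	cap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	cap.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;	/* decoded frames */
	ioctl(fd, VIDIOC_S_FMT, &cap);

	/* From here on it is ordinary V4L2 streaming I/O on both queues:
	 * VIDIOC_REQBUFS, VIDIOC_QBUF, VIDIOC_STREAMON, VIDIOC_DQBUF. */
	return fd;
}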
Interesting topic. I have been part of the Khronos work for quite some years, but not anymore. I would not assume that Khronos as a standards body has any view on how the implementation is done, since they want to be agnostic to the OS etc. But I don't get your comment on why OMX is handling hardware: OMX's main purpose is to handle multimedia hardware and offer an interface to that HW that looks identical independent of the vendor delivering that hardware, much like the v4l2 or USB subsystems try to do. And yes, optimally it should be implemented as drivers/omx in Linux with a userspace library on top of that. But a drivers/omx solution would set it up to compete directly with v4l2 in Linux, and that is not a good option; hence the needed enablers should be included in v4l2 etc. instead. And it would need to be able to handle DSP/ISP firmware/load-modules etc. It might also be that some alignment needs to be made between v4l2 and other OSes' implementations, to ease developing drivers for many OSs (sorry, I don't know these details, but you ST-E guys should know).
About your second premise: this has never been expressed inside Khronos by companies or the body (at least not that I have heard), but might rather be how some companies manage their OMX implementations. I would say that for the most part the standards are designed to work alone and are developed in very separate groups. But it would be foolish to design APIs that don't fit together inside the same standards body, hence cross-API work is done and the nice SW stack pictures are drawn ;-).
By the way, IL is about to finalize version 1.2 of OpenMAX IL, which is more than a year's work of aligning all vendors and fixing unclear and buggy parts. Hence this makes it a good candidate for finding common ways of exposing HW that is implemented differently but has the same functionality. There is no point in exposing similar HW in many different ways; that just makes it harder for users.
/Harald
Thanks for the help Harald, much appreciated.
On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson harald.gustafsson@ericsson.com wrote:
OMX main purpose is to handle multimedia hardware and offer an interface to that HW that looks identical indenpendent of the vendor delivering that hardware, much like the v4l2 or USB subsystems tries to do. And yes optimally it should be implemented in drivers/omx in Linux and a user space library on top of that.
Thanks for clarifying this part, it was unclear to me, the reason being that it seems OMX does not imply a userspace/kernelspace separation, and I was thinking of it more as a userspace lib. Now my understanding is that if e.g. OpenMAX defines a certain data structure, say for a PCM frame or whatever, then that exact struct is supposed to be used by the kernelspace/userspace interface, and defined in the include file exported by the kernel.
It might be that some alignment also needs to be made between 4vl2 and other OS's implementation, to ease developing drivers for many OSs (sorry I don't know these details, but you ST-E guys should know).
The basic conflict I would say is that Linux has its own API+ABI, which is defined by V4L and ALSA through a community process without much thought about any existing standard APIs. (In some cases also predating them.)
By the way IL is about to finalize version 1.2 of OpenMAX IL which is more than a years work of aligning all vendors and fixing unclear and buggy parts.
I suspect that the basic problem with Khronos OpenMAX right now is how to handle communities - for example the X consortium had much the same problem a while back: only member companies could take part in the standards process, they of course needed to pay an upfront fee for that, and the majority of these companies didn't exactly send Linux community members to the meetings.
And now all the companies who took part in OpenMAX somehow end up having to do a lot of upfront community work if they want to drive the APIs in a certain direction: discuss it again with the V4L and ALSA maintainers and so on. That takes a lot of time and patience with an uncertain outcome, since this process is autonomous from Khronos. Nobody seems to be doing this; I haven't seen a single patch aimed at trying to unify the APIs so far. I don't know if it'd be welcome.
This, coupled with strict delivery deadlines and a marketing will to state conformance to OpenMAX, of course leads companies into solutions breaking the Linux kernelspace API in order to be able to present this.
Now I think we have a pretty clear view of the problem, I don't know what could be done about it though :-/
Yours, Linus Walleij
On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
Now I think we have a pretty clear view of the problem, I don't know what could be done about it though :-/
One option might be to create an OMX wrapper library around the V4L2 API. Something similar is already available for the old V4L1 API (now removed from the kernel) that allows apps that still speak only V4L1 to use the V4L2 API. This is done in the libv4l1 library. The various v4l libraries are maintained here: http://git.linuxtv.org/v4l-utils.git
Adding a libomx might not be such a bad idea. Linaro might be the appropriate organization to look into this. Any missing pieces in V4L2 needed to create a fully functioning omx API can be discussed and solved.
Making this part of v4l-utils means that it is centrally maintained and automatically picked up by distros.
It will certainly be a non-trivial exercise, but it is a one-time job that should solve a lot of problems. But someone has to do it...
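(To make the wrapper idea a little more concrete, here is a very rough sketch of the kind of shim such a libomx could contain, assuming the standard OpenMAX IL buffer header and plain V4L2 capture ioctls. The function name and the copy-based hand-off are purely illustrative; a real implementation would want to avoid the memcpy.)

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <OMX_Core.h>		/* Khronos OpenMAX IL headers */

/* Hypothetical glue inside a libomx camera component: satisfy an IL
 * FillThisBuffer request by dequeuing a frame from a V4L2 capture fd.
 * 'mapped' holds the mmap'ed V4L2 buffers set up at component init. */
static OMX_ERRORTYPE omx_v4l2_fill_buffer(int fd, void *const *mapped,
					  OMX_BUFFERHEADERTYPE *hdr)
{
	struct v4l2_buffer vb;

	memset(&vb, 0, sizeof(vb));
	vb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	vb.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd, VIDIOC_DQBUF, &vb) < 0)
		return OMX_ErrorHardware;

	/* Copy the frame into the IL buffer and return the V4L2 buffer. */
	hdr->nOffset = 0;
	hdr->nFilledLen = vb.bytesused;
	memcpy(hdr->pBuffer, mapped[vb.index], vb.bytesused);
	ioctl(fd, VIDIOC_QBUF, &vb);

	/* The component would now invoke the client's FillBufferDone callback. */
	return OMX_ErrorNone;
}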
Regarding using V4L to communicate with DSPs/other processors: that too could be something for Linaro to pick up: experiment with it for one particular board, see what (if anything) is needed to make this work. I expect it to be pretty easy, but again, nobody has actually done the initial work.
Once you have an example driver, then it should be much easier for others to follow.
As Linus said, companies are unlikely to start doing this by themselves, but it seems that this work would exactly fit the Linaro purpose. From the Linaro homepage:
"Linaro™ brings together the open source community and the electronics industry to work on key projects, deliver great tools, reduce industry wide fragmentation and provide common foundations for Linux software distributions and stacks to land on."
Spot on, I'd say :-)
Just for the record, let me say again that the V4L2 community will be very happy to assist with this when it comes to extending/improving the V4L2 API to make all this possible.
Regards,
Hans
Regarding using V4L to communicate with DSPs/other processors: that too could be something for Linaro to pick up: experiment with it for one particular board, see what (if anything) is needed to make this work. I expect it to be pretty easy, but again, nobody has actually done the initial work.
Actually this has been done already; unfortunately the project was (and still is) kept under a hat... STMicroelectronics (no -Ericsson in the name ;-)) developed a multimedia framework that used sort-of-DSPs to decode, encode, process etc. and exposed Linux-DVB/V4L/ALSA APIs. It has been used in a number of commercial projects, and "Open Duckbox" picked it up as well:
http://gitorious.org/open-duckbox-project-sh4/pages/Home http://gitorious.org/open-duckbox-project-sh4/tdt/blobs/master/tdt/cvs/drive...
Having said all that, I don't think that any of this will be directly useful in your development. I just mention it as proof that it is viable :-)
Cheers!
Paweł
On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote:
OMX main purpose is to handle multimedia hardware and offer an interface to that HW that looks identical indenpendent of the vendor delivering that hardware, much like the v4l2 or USB subsystems tries to do. And yes optimally it should be implemented in drivers/omx in Linux and a user space library on top of that.
Thanks for clarifying this part, it was unclear to me. The reason being that it seems OMX does not imply userspace/kernelspace separation, and I was thinking more of it as a userspace lib. Now my understanding is that if e.g. OpenMAX defines a certain data structure, say for a PCM frame or whatever, then that exact struct is supposed to be used by the kernelspace/userspace interface, and defined in the include file exported by the kernel.
It might be that some alignment also needs to be made between 4vl2 and other OS's implementation, to ease developing drivers for many OSs (sorry I don't know these details, but you ST-E guys should know).
The basic conflict I would say is that Linux has its own API+ABI, which is defined by V4L and ALSA through a community process without much thought about any existing standard APIs. (In some cases also predating them.)
By the way IL is about to finalize version 1.2 of OpenMAX IL which is more than a years work of aligning all vendors and fixing unclear and buggy parts.
I suspect that the basic problem with Khronos OpenMAX right now is how to handle communities - for example the X consortium had something like the same problem a while back, only member companies could partake in the standard process, and they need of course to pay an upfront fee for that, and the majority of these companies didn't exactly send Linux community members to the meetings.
And now all the companies who took part in OpenMAX somehow end up having to do a lot of upfront community work if they want to drive the API:s in a certain direction, discuss it again with the V4L and ALSA maintainers and so on. Which takes a lot of time and patience with uncertain outcome, since this process is autonomous from Khronos. Nobody seems to be doing this, I javen't seen a single patch aimed at trying to unify the APIs so far. I don't know if it'd be welcome.
This coupled with strict delivery deadlines and a marketing will to state conformance to OpenMAX of course leads companies into solutions breaking the Linux kernelspace API to be able to present this.
From my experience with OMX, one of the issues is that companies usually extend the API to fulfill their platforms' needs, without going through any standardization process. Coupled with the lack of an open and free reference implementation and test tools, this more or less means that OMX implementations are not really compatible with each other, making OMX-based solutions no better than proprietary solutions.
Now I think we have a pretty clear view of the problem, I don't know what could be done about it though :-/
One option might be to create a OMX wrapper library around the V4L2 API. Something similar is already available for the old V4L1 API (now removed from the kernel) that allows apps that still speak V4L1 only to use the V4L2 API. This is done in the libv4l1 library. The various v4l libraries are maintained here: http://git.linuxtv.org/v4l-utils.git
Adding a libomx might not be such a bad idea. Linaro might be the appropriate organization to look into this. Any missing pieces in V4L2 needed to create a fully functioning omx API can be discussed and solved.
Making this part of v4l-utils means that it is centrally maintained and automatically picked up by distros.
It will certainly be a non-trivial exercise, but it is a one-time job that should solve a lot of problems. But someone has to do it...
It's an option, but why would that be needed? Again, from my (probably limited) OMX experience, platforms expose higher-level APIs to applications, implemented on top of OMX. If the OMX layer is itself implemented on top of V4L2, it would just be an extraneous, useless internal layer that could (should?) be removed completely.
Regarding using V4L to communicate with DSPs/other processors: that too could be something for Linaro to pick up: experiment with it for one particular board, see what (if anything) is needed to make this work. I expect it to be pretty easy, but again, nobody has actually done the initial work.
The main issue with the V4L2 API compared with the OMX API is that V4L2 is a kernelspace/userspace API only, while OMX can live entirely in userspace. When the need to communicate with other processors (CPUs, DSPs, dedicated image processing hardware blocks, ...) arises, platforms usually ship with a thin kernel layer to handle the low-level communication protocols, and a userspace OMX library that does the bulk of the work. We would need to be able to do something similar with V4L2.
Once you have an example driver, then it should be much easier for others to follow.
As Linus said, companies are unlikely to start doing this by themselves, but it seems that this work would exactly fit the Linaro purpose. From the Linaro homepage:
"Linaro™ brings together the open source community and the electronics industry to work on key projects, deliver great tools, reduce industry wide fragmentation and provide common foundations for Linux software distributions and stacks to land on."
Spot on, I'd say :-)
Just for the record, let me say again they the V4L2 community will be very happy to assist with this when it comes to extending/improving the V4L2 API to make all this possible.
The first step would probably be to decide what Linux needs. Then I'll also be happy to assist with the implementation phase :-)
Hi,
In order to expand this knowledge outside of Linaro I took the liberty of inviting both linux-media@vger.kernel.org and gstreamer-devel@lists.freedesktop.org. For any newcomer I really recommend doing some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html (the "v4l2 vs omx for camera" thread) before making any comments. And sign up for linaro-dev while you are at it :-)
To make a long story short: different vendors provide custom OpenMAX solutions for, say, Camera/ISP. In the Linux ecosystem there is V4L2 doing much of this work already, and it is evolving with the media controller as well. Then there is the integration in GStreamer... Which solution is the best way forward? Current discussions so far put V4L2 greatly in favour over OMX. Please keep in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether camera drivers should have OMX or V4L2 as the driver front-end. This may perhaps apply to video codecs as well. Then there is the question of how best to make use of this in GStreamer in order to achieve zero-copy, highly efficient multimedia pipelines. Is gst-omx the way forward?
Let the discussion continue...
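(For concreteness, a plain GStreamer capture pipeline built directly on v4l2src could look roughly like the sketch below. The caps string and sink element are assumptions from the 0.10-era API, and whether buffers actually travel zero-copy depends entirely on the elements and allocators involved.)

#include <gst/gst.h>

int main(int argc, char *argv[])
{
	GError *err = NULL;
	GstElement *pipeline;

	gst_init(&argc, &argv);

	/* Capture straight from the V4L2 device and display it; a vendor
	 * could swap in hw-specific elements without the app changing. */
	pipeline = gst_parse_launch(
		"v4l2src ! video/x-raw-yuv,width=640,height=480 ! "
		"ffmpegcolorspace ! autovideosink", &err);
	if (pipeline == NULL) {
		g_printerr("failed to build pipeline: %s\n", err->message);
		return 1;
	}

	gst_element_set_state(pipeline, GST_STATE_PLAYING);
	g_main_loop_run(g_main_loop_new(NULL, FALSE));	/* run until killed */
	return 0;
}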
On 17 February 2011 14:48, Laurent Pinchart < laurent.pinchart@ideasonboard.com> wrote:
It's an option, but why would that be needed ? Again from my (probably limited) OMX experience, platforms expose higher-level APIs to applications, implemented on top of OMX. If the OMX layer is itself implemented on top of V4L2, it would just be an extraneous useless internal layer that could (should ?) be removed completely.
[Robert F] This would be the case in a GStreamer-driven multimedia stack, i.e. implement GStreamer elements using V4L2 directly (or camerabin using v4l2 directly). Perhaps some vendors would provide a library in between as well, but that could be libv4l in that case. If someone had an OpenMAX AL/IL media framework, an OMX component would make sense to have, but in that case it would be a thinner OMX component which in turn is implemented using V4L2. It might be, though, that Khronos provides OS-independent components that vendors implement as the actual HW driver, forgetting that there is a big difference between the driver model of an RTOS and that of Linux (user/kernel space) or any other OS... never mind.
The question is whether the Linux kernel and V4L2 are ready to incorporate several HW blocks (DSP, CPU, ISP, xxHW) in an imaging pipeline, for instance. The reason embedded vendors provide custom solutions is to implement low-power pipelines with no (or minimal) CPU intervention, where dedicated HW does the work most of the time (like full-screen video playback).
A common way of managing memory would of course also be necessary, like hwmem (search for hwmem in linux-mm) handles, to pass buffers between different drivers and processes all the way from sources (camera, video parser/decoder) to sinks (display, HDMI, video encoders (record)).
Perhaps GStreamer experts would like to comment on the future plans for zero-copy/IPC and low-power HW use cases? Could GStreamer adopt some ideas from OMX IL, making OMX IL obsolete? Answering these questions could lead to improved guidelines on what embedded device vendors should provide as hw-driver front-ends in the future. OMX is just one of these; perhaps we could do better, to fit and evolve the Linux ecosystem?
Regarding using V4L to communicate with DSPs/other processors: that too could be something for Linaro to pick up: experiment with it for one particular board, see what (if anything) is needed to make this work. I expect it to be pretty easy, but again, nobody has actually done the initial work.
The main issue with the V4L2 API compared with the OMX API is that V4L2 is a kernelspace/userspace API only, while OMX can live in userspace. When the need to communicate with other processors (CPUs, DSP, dedicated image processing hardware blocks, ...) arises, platforms usually ship with a thin kernel layer to handle low-level communication protocols, and a userspace OMX library that does the bulk of the work. We would need to be able to do something similar with V4L2.
[Robert F] OK, but doesn't the media controller/subdevices work solve many of these issues?
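(As an illustration of the kind of topology the media controller is meant to expose, here is a rough userspace sketch that enumerates the entities of an imaging pipeline. It assumes the media controller ioctls as proposed for mainline at the time, and /dev/media0 is a placeholder node.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Walk the entities (sensor, ISP blocks, DMA engines, ...) that a media
 * controller device exposes for one imaging pipeline. */
int main(void)
{
	int fd = open("/dev/media0", O_RDONLY);
	struct media_entity_desc ent;

	memset(&ent, 0, sizeof(ent));
	ent.id = MEDIA_ENT_ID_FLAG_NEXT;	/* "give me the first entity" */
	while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &ent) == 0) {
		printf("entity %u: %s (%u pads, %u links)\n",
		       ent.id, ent.name, ent.pads, ent.links);
		ent.id |= MEDIA_ENT_ID_FLAG_NEXT;	/* ask for the next one */
	}
	return 0;
}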
BR /Robert Fekete
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete robert.fekete@linaro.org wrote:
To make a long story short: Different vendors provide custom OpenMax solutions for say Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already and is evolving with mediacontroller as well. Then there is the integration in Gstreamer...Which solution is the best way forward. Current discussions so far puts V4L2 greatly in favor of OMX. Please have in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether Camera drivers should have OMX or V4L2 as the driver front end? This may perhaps apply to video codecs as well. Then there is how to in best of ways make use of this in GStreamer in order to achieve no copy highly efficient multimedia pipelines. Is gst-omx the way forward?
just fwiw, there were some patches to make v4l2src work with userptr buffers in case the camera has an mmu and can handle any random non-physically-contiguous buffer.. so there is in theory no reason why a gst capture pipeline could not be zero copy and capture directly into buffers allocated from the display
Certainly a more general way to allocate buffers that any of the hw blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc.) could use, and possibly share across processes for some zero-copy DRI-style rendering, would be nice. Perhaps V4L2_MEMORY_GEM?
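(A rough sketch of the userptr path described above, under the assumption that the capture driver supports USERPTR I/O; the external buffer and its size are placeholders for e.g. a display-owned allocation.)

#include <stddef.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Hand an externally allocated buffer (e.g. from a display allocator)
 * straight to a V4L2 capture queue via USERPTR I/O, so the frame lands
 * there without an extra copy. fd is a streaming-capable capture device;
 * buffer/size come from the other subsystem. */
static int queue_external_buffer(int fd, void *buffer, size_t size)
{
	struct v4l2_requestbuffers req;
	struct v4l2_buffer buf;

	memset(&req, 0, sizeof(req));
	req.count = 1;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_USERPTR;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_USERPTR;
	buf.index = 0;
	buf.m.userptr = (unsigned long)buffer;
	buf.length = size;
	return ioctl(fd, VIDIOC_QBUF, &buf);
}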
Let the discussion continue...
On 17 February 2011 14:48, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote:
OMX main purpose is to handle multimedia hardware and offer an interface to that HW that looks identical indenpendent of the vendor delivering that hardware, much like the v4l2 or USB subsystems tries to do. And yes optimally it should be implemented in drivers/omx in Linux and a user space library on top of that.
Thanks for clarifying this part, it was unclear to me. The reason being that it seems OMX does not imply userspace/kernelspace separation, and I was thinking more of it as a userspace lib. Now my understanding is that if e.g. OpenMAX defines a certain data structure, say for a PCM frame or whatever, then that exact struct is supposed to be used by the kernelspace/userspace interface, and defined in the include file exported by the kernel.
It might be that some alignment also needs to be made between 4vl2 and other OS's implementation, to ease developing drivers for many OSs (sorry I don't know these details, but you ST-E guys should know).
The basic conflict I would say is that Linux has its own API+ABI, which is defined by V4L and ALSA through a community process without much thought about any existing standard APIs. (In some cases also predating them.)
By the way IL is about to finalize version 1.2 of OpenMAX IL which is more than a years work of aligning all vendors and fixing unclear and buggy parts.
I suspect that the basic problem with Khronos OpenMAX right now is how to handle communities - for example the X consortium had something like the same problem a while back, only member companies could partake in the standard process, and they need of course to pay an upfront fee for that, and the majority of these companies didn't exactly send Linux community members to the meetings.
And now all the companies who took part in OpenMAX somehow end up having to do a lot of upfront community work if they want to drive the API:s in a certain direction, discuss it again with the V4L and ALSA maintainers and so on. Which takes a lot of time and patience with uncertain outcome, since this process is autonomous from Khronos. Nobody seems to be doing this, I javen't seen a single patch aimed at trying to unify the APIs so far. I don't know if it'd be welcome.
This coupled with strict delivery deadlines and a marketing will to state conformance to OpenMAX of course leads companies into solutions breaking the Linux kernelspace API to be able to present this.
From my experience with OMX, one of the issues is that companies usually extend the API to fullfill their platform's needs, without going through any standardization process. Coupled with the lack of open and free reference implementation and test tools, this more or less means that OMX implementations are not really compatible with eachother, making OMX-based solution not better than proprietary solutions.
Now I think we have a pretty clear view of the problem, I don't know what could be done about it though :-/
One option might be to create a OMX wrapper library around the V4L2 API. Something similar is already available for the old V4L1 API (now removed from the kernel) that allows apps that still speak V4L1 only to use the V4L2 API. This is done in the libv4l1 library. The various v4l libraries are maintained here: http://git.linuxtv.org/v4l-utils.git
Adding a libomx might not be such a bad idea. Linaro might be the appropriate organization to look into this. Any missing pieces in V4L2 needed to create a fully functioning omx API can be discussed and solved.
Making this part of v4l-utils means that it is centrally maintained and automatically picked up by distros.
It will certainly be a non-trivial exercise, but it is a one-time job that should solve a lot of problems. But someone has to do it...
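To make the wrapper idea slightly more concrete, here is a minimal sketch of how one entry point of such a hypothetical libomx could translate an OMX-style port definition into V4L2 calls. The omx_wrap_* names and the struct are invented for illustration; only the V4L2 ioctl and structures are real.

/* Hypothetical libomx glue: translate an OMX-like port definition into a
 * V4L2 format negotiation.  The omx_wrap_* names are placeholders for
 * whatever the wrapper library would define; the V4L2 side is real API. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

struct omx_wrap_port {           /* stand-in for an OMX port definition */
        unsigned int width;
        unsigned int height;
        unsigned int fourcc;     /* e.g. V4L2_PIX_FMT_UYVY */
};

int omx_wrap_set_port_definition(int v4l2_fd, const struct omx_wrap_port *port)
{
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = port->width;
        fmt.fmt.pix.height = port->height;
        fmt.fmt.pix.pixelformat = port->fourcc;
        fmt.fmt.pix.field = V4L2_FIELD_NONE;

        /* The driver may adjust the requested format; a real wrapper would
         * report the adjusted values back through the OMX port definition. */
        return ioctl(v4l2_fd, VIDIOC_S_FMT, &fmt);
}

The plumbing itself is straightforward; the interesting part of the exercise would be finding the V4L2 gaps that such a wrapper cannot paper over.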
It's an option, but why would that be needed? Again, from my (probably limited) OMX experience, platforms expose higher-level APIs to applications, implemented on top of OMX. If the OMX layer is itself implemented on top of V4L2, it would just be an extraneous, useless internal layer that could (should?) be removed completely.
[Robert F] This would be the case in a GStreamer-driven multimedia stack, i.e. implement GStreamer elements using V4L2 directly (or camerabin using v4l2 directly). Perhaps some vendors would provide a library in between as well, but that could be libv4l in that case. If someone had an OpenMAX AL/IL media framework, an OMX component would make sense to have, but in this case it would be a thinner OMX component which in turn is implemented using V4L2. But it might be that Khronos provides OS-independent components that get implemented by vendors as the actual HW driver, forgetting that there is a big difference between the driver model of an RTOS system and that of Linux (user/kernel space) or any other OS... never mind.
Not even different vendors' omx camera implementations are compatible.. there seems to be too much variation in ISP architecture and features for this.
Another point, and possibly the reason that TI went the OMX camera route, was that a userspace API made it possible to move the camera driver entirely to a co-processor (with the advantages of reduced interrupt latency for SIMCOP processing, and a larger part of the code being OS independent).. doing this in a kernel mode driver would have required even more of syslink in the kernel.
But maybe it would be nice to have a way to have the sensor driver on the Linux side, pipelined with hw and imaging drivers on a co-processor for various algorithms and filters, with configuration all exposed to userspace thru MCF.. I'm not immediately sure how this would work, but it sounds nice at least ;-)
The question is whether the Linux kernel and V4L2 are ready to incorporate several HW blocks (DSP, CPU, ISP, xxHW) in an imaging pipeline, for instance. The reason embedded vendors provide custom solutions is to implement low-power pipelines with no (or minimal) CPU intervention, where dedicated HW does the work most of the time (like full-screen video playback).
A common way of managing memory would of course also be necessary, like hwmem (search for hwmem in linux-mm) handles to pass buffers between different drivers and processes, all the way from sources (camera, video parser/decode) to sinks (display, hdmi, video encoders (record)).
(ahh, ok, you have some of the same thoughts as I do regarding sharing buffers between various drivers)
Perhaps GStreamer experts would like to comment on the future plans for zero-copy/IPC and low-power HW use cases? Could GStreamer adapt some ideas from OMX IL, making OMX IL obsolete?
perhaps OMX should adapt some of the ideas from GStreamer ;-)
OpenMAX is missing some very obvious stuff needed to make it an API for portable applications, like autoplugging, discovery of capabilities/formats supported, etc.. at least with gst I can drop in some hw-specific plugins and have apps continue to work without code changes.
Anyways, it would be an easier argument to make if GStreamer were the one true framework across different OSs, or at least across Linux and Android.
BR, -R
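Picking up the point above about dropping in hardware-specific plugins without application changes, here is a minimal GStreamer sketch: the application builds its pipeline from a description string, so replacing v4l2src with a vendor-specific element (or letting an autoplugger choose) needs no change to the application code. The element names are just examples.

/* Minimal GStreamer (0.10-era) capture pipeline built from a description
 * string.  Swapping "v4l2src" for a hardware-specific source element does
 * not require any change to the application logic below. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
        GstElement *pipeline;
        GError *error = NULL;

        gst_init(&argc, &argv);

        pipeline = gst_parse_launch(
                "v4l2src ! video/x-raw-yuv,width=640,height=480 ! "
                "ffmpegcolorspace ! autovideosink", &error);
        if (!pipeline) {
                g_printerr("Failed to build pipeline: %s\n", error->message);
                return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_main_loop_run(g_main_loop_new(NULL, FALSE));
        return 0;
}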
Answering these questions could lead to improved guidelines on what embedded device vendors would provide as hw-driver front-ends in the future. OMX is just one of these. Perhaps we could do better to fit and evolve the Linux eco-system?
Regarding using V4L to communicate with DSPs/other processors: that too could be something for Linaro to pick up: experiment with it for one particular board, see what (if anything) is needed to make this work. I expect it to be pretty easy, but again, nobody has actually done the initial work.
The main issue with the V4L2 API compared with the OMX API is that V4L2 is a kernelspace/userspace API only, while OMX can live in userspace. When the need to communicate with other processors (CPUs, DSP, dedicated image processing hardware blocks, ...) arises, platforms usually ship with a thin kernel layer to handle low-level communication protocols, and a userspace OMX library that does the bulk of the work. We would need to be able to do something similar with V4L2.
[Robert F] Ok, doesn't mediacontroller/subdevices solve many of these issues?
Once you have an example driver, then it should be much easier for others to follow.
As Linus said, companies are unlikely to start doing this by themselves, but it seems that this work would exactly fit the Linaro purpose. From the Linaro homepage:
"Linaro™ brings together the open source community and the electronics industry to work on key projects, deliver great tools, reduce industry wide fragmentation and provide common foundations for Linux software distributions and stacks to land on."
Spot on, I'd say :-)
Just for the record, let me say again that the V4L2 community will be very happy to assist with this when it comes to extending/improving the V4L2 API to make all this possible.
The first step would probably be to decide what Linux needs. Then I'll also be happy to assist with the implementation phase :-)
-- Regards,
Laurent Pinchart
BR /Robert Fekete
Hi All,
Just wanted to add one last point in this discussion. The imaging coprocessors in today's platforms have a general-purpose DSP attached to them. I have seen some work being done to use this DSP for graphics/audio processing when the camera use case is not running, or when the camera use cases do not consume the full bandwidth of this DSP. I am not sure how v4l2 would fit in such an architecture.
I am not sure if that is the case with all the platforms today, but my feeling is that this is going to be exercised more in future architectures, where a single dedicated dsp/arm processor is used to control the video/imaging-specific hardware blocks and some other tasks could be offloaded to this dedicated processor when it has free bandwidth to support them.
Thanks Sachin
On Tue, Feb 22, 2011 at 8:14 AM, Clark, Rob rob@ti.com wrote:
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete robert.fekete@linaro.org wrote:
Hi,
In order to expand this knowledge outside of Linaro I took the liberty of inviting both linux-media@vger.kernel.org and gstreamer-devel@lists.freedesktop.org. For any newcomer I really recommend doing some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html ("v4l2 vs omx for camera" thread) before making any comments. And sign up for Linaro-dev while you are at it :-)
To make a long story short: different vendors provide custom OpenMAX solutions for, say, Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already, and it is evolving with mediacontroller as well. Then there is the integration in GStreamer... Which solution is the best way forward? Current discussions so far greatly favor V4L2 over OMX. Please keep in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether camera drivers should have OMX or V4L2 as the driver front end. This may perhaps apply to video codecs as well. Then there is how to best make use of this in GStreamer in order to achieve zero-copy, highly efficient multimedia pipelines. Is gst-omx the way forward?
just fwiw, there were some patches to make v4l2src work with userptr buffers in case the camera has an mmu and can handle any random non-physically-contiguous buffer.. so there is in theory no reason why a gst capture pipeline could not be zero copy and capture directly into buffers allocated from the display
Certainly a more general way to allocate buffers that any of the hw blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc) could use, and possibly share across-process for some zero copy DRI style rendering, would be nice. Perhaps V4L2_MEMORY_GEM?
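As a rough sketch of the userptr idea (assuming the format has already been negotiated and the buffers were allocated elsewhere, e.g. by the display subsystem), handing externally allocated buffers to a V4L2 capture device looks roughly like this:

/* Sketch: start capturing into externally allocated buffers using
 * V4L2 USERPTR I/O.  Most error handling is omitted for brevity. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int start_userptr_capture(int fd, void *vaddrs[], size_t length,
                                 unsigned int count)
{
        struct v4l2_requestbuffers req;
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        unsigned int i;

        memset(&req, 0, sizeof(req));
        req.count = count;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_USERPTR;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
                return -1;

        for (i = 0; i < count; i++) {
                struct v4l2_buffer buf;

                memset(&buf, 0, sizeof(buf));
                buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                buf.memory = V4L2_MEMORY_USERPTR;
                buf.index = i;
                buf.m.userptr = (unsigned long)vaddrs[i];
                buf.length = length;
                if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
                        return -1;
        }

        return ioctl(fd, VIDIOC_STREAMON, &type);
}

Whether this is truly zero copy of course depends on the capture hardware being able to write into those buffers directly, which is exactly the mmu point above.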
[snip]
2011/2/23 Sachin Gupta sachin.gupta@linaro.org:
The imaging coprocessor in today's platforms have a general purpose DSP attached to it I have seen some work being done to use this DSP for graphics/audio processing in case the camera use case is not being tried or also if the camera usecases does not consume the full bandwidth of this dsp.I am not sure how v4l2 would fit in such an architecture,
Earlier in this thread I discussed TI's DSPbridge.
In drivers/staging/tidspbridge http://omappedia.org/wiki/DSPBridge_Project you will find the TI hackers happily at work providing a DSP accelerator subsystem.
Isn't it possible for a V4L2 component to use this interface (or something more evolved, generic) as backend for assorted DSP offloading?
So using one kernel framework does not exclude using another one at the same time. Whereas something like DSPbridge will load firmware into DSP accelerators and provide control/datapath for that, this can in turn be used by some camera or codec which in turn presents a V4L2 or ALSA interface.
Yours, Linus Walleij
On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
2011/2/23 Sachin Gupta sachin.gupta@linaro.org:
The imaging coprocessor in today's platforms have a general purpose DSP attached to it I have seen some work being done to use this DSP for graphics/audio processing in case the camera use case is not being tried or also if the camera usecases does not consume the full bandwidth of this dsp.I am not sure how v4l2 would fit in such an architecture,
Earlier in this thread I discussed TI:s DSPbridge.
In drivers/staging/tidspbridge http://omappedia.org/wiki/DSPBridge_Project you find the TI hackers happy at work with providing a DSP accelerator subsystem.
Isn't it possible for a V4L2 component to use this interface (or something more evolved, generic) as backend for assorted DSP offloading?
So using one kernel framework does not exclude using another one at the same time. Whereas something like DSPbridge will load firmware into DSP accelerators and provide control/datapath for that, this can in turn be used by some camera or codec which in turn presents a V4L2 or ALSA interface.
Yes, something along those lines can be done.
While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP instead.
The hardest part will be to identify the missing V4L2 API pieces and design and add them. I don't think the actual driver code will be particularly hard. It should be nothing more than a thin front-end for the DSP. Of course, that's just theory at the moment :-)
The problem is that someone has to do the actual work for the initial driver. And I expect that it will be a substantial amount of work. Future drivers should be *much* easier, though.
A good argument for doing this work is that this API can hide which parts of the video subsystem are hardware and which are software. The application really doesn't care how it is organized. What is done in hardware on one SoC might be done on a DSP instead on another SoC. But the end result is pretty much the same.
Regards,
Hans
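For what it's worth, a "thin front-end" of the kind Hans describes would mostly consist of V4L2 ioctl handlers that forward requests to the coprocessor. A rough sketch, where dsp_rpc_set_format() is a hypothetical placeholder for whatever IPC mechanism the platform provides (DSPbridge, syslink, ...) and only the V4L2 structures and ops are real:

/* Sketch of a "thin front-end" V4L2 capture driver: the ioctl handlers do
 * little more than forward requests to an imaging DSP.  dsp_rpc_set_format()
 * is a placeholder for the platform's IPC layer. */
#include <linux/string.h>
#include <linux/videodev2.h>
#include <media/v4l2-ioctl.h>

int dsp_rpc_set_format(struct v4l2_pix_format *pix);   /* hypothetical */

static int thinfe_querycap(struct file *file, void *priv,
                           struct v4l2_capability *cap)
{
        strlcpy(cap->driver, "dsp-thin-fe", sizeof(cap->driver));
        strlcpy(cap->card, "DSP imaging pipeline", sizeof(cap->card));
        cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
        return 0;
}

static int thinfe_s_fmt_vid_cap(struct file *file, void *priv,
                                struct v4l2_format *f)
{
        /* All the real work happens on the coprocessor. */
        return dsp_rpc_set_format(&f->fmt.pix);
}

static const struct v4l2_ioctl_ops thinfe_ioctl_ops = {
        .vidioc_querycap        = thinfe_querycap,
        .vidioc_s_fmt_vid_cap   = thinfe_s_fmt_vid_cap,
        /* reqbufs/qbuf/dqbuf/streamon would forward to the DSP similarly */
};

The missing V4L2 pieces Hans mentions would presumably show up in the buffer handling and control paths rather than in this plumbing.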
On Thursday 24 February 2011 14:04:19 Hans Verkuil wrote:
On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
2011/2/23 Sachin Gupta sachin.gupta@linaro.org:
The imaging coprocessor in today's platforms have a general purpose DSP attached to it I have seen some work being done to use this DSP for graphics/audio processing in case the camera use case is not being tried or also if the camera usecases does not consume the full bandwidth of this dsp.I am not sure how v4l2 would fit in such an architecture,
Earlier in this thread I discussed TI:s DSPbridge.
In drivers/staging/tidspbridge http://omappedia.org/wiki/DSPBridge_Project you find the TI hackers happy at work with providing a DSP accelerator subsystem.
Isn't it possible for a V4L2 component to use this interface (or something more evolved, generic) as backend for assorted DSP offloading?
So using one kernel framework does not exclude using another one at the same time. Whereas something like DSPbridge will load firmware into DSP accelerators and provide control/datapath for that, this can in turn be used by some camera or codec which in turn presents a V4L2 or ALSA interface.
Yes, something along those lines can be done.
While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP instead.
The hardest part will be to identify the missing V4L2 API pieces and design and add them. I don't think the actual driver code will be particularly hard. It should be nothing more than a thin front-end for the DSP. Of course, that's just theory at the moment :-)
The problem is that someone has to do the actual work for the initial driver. And I expect that it will be a substantial amount of work. Future drivers should be *much* easier, though.
A good argument for doing this work is that this API can hide which parts of the video subsystem are hardware and which are software. The application really doesn't care how it is organized. What is done in hardware on one SoC might be done on a DSP instead on another SoC. But the end result is pretty much the same.
I think the biggest issue we will have here is that part of the inter-processor communication stack lives in userspace on most recent SoCs (OMAP4 comes to mind, for instance). This will make implementing a V4L2 driver that relies on IPC difficult.
It's probably time to start seriously thinking about userspace drivers/libraries/middleware/frameworks/whatever, at least to clearly tell chip vendors what the Linux community expects.
On Thu, Feb 24, 2011 at 7:10 AM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
On Thursday 24 February 2011 14:04:19 Hans Verkuil wrote:
On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
2011/2/23 Sachin Gupta sachin.gupta@linaro.org:
The imaging coprocessor in today's platforms have a general purpose DSP attached to it I have seen some work being done to use this DSP for graphics/audio processing in case the camera use case is not being tried or also if the camera usecases does not consume the full bandwidth of this dsp.I am not sure how v4l2 would fit in such an architecture,
Earlier in this thread I discussed TI:s DSPbridge.
In drivers/staging/tidspbridge http://omappedia.org/wiki/DSPBridge_Project you find the TI hackers happy at work with providing a DSP accelerator subsystem.
Isn't it possible for a V4L2 component to use this interface (or something more evolved, generic) as backend for assorted DSP offloading?
So using one kernel framework does not exclude using another one at the same time. Whereas something like DSPbridge will load firmware into DSP accelerators and provide control/datapath for that, this can in turn be used by some camera or codec which in turn presents a V4L2 or ALSA interface.
Yes, something along those lines can be done.
While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP instead.
The hardest part will be to identify the missing V4L2 API pieces and design and add them. I don't think the actual driver code will be particularly hard. It should be nothing more than a thin front-end for the DSP. Of course, that's just theory at the moment :-)
The problem is that someone has to do the actual work for the initial driver. And I expect that it will be a substantial amount of work. Future drivers should be *much* easier, though.
A good argument for doing this work is that this API can hide which parts of the video subsystem are hardware and which are software. The application really doesn't care how it is organized. What is done in hardware on one SoC might be done on a DSP instead on another SoC. But the end result is pretty much the same.
I think the biggest issue we will have here is that part of the inter- processors communication stack lives in userspace in most recent SoCs (OMAP4 comes to mind for instance). This will make implementing a V4L2 driver that relies on IPC difficult.
It's probably time to start seriously thinking about userspace drivers/librairies/middlewares/frameworks/whatever, at least to clearly tell chip vendors what the Linux community expects.
I suspect more of the IPC framework needs to move down to the kernel.. this is the only way I can see to move the virt->phys address translation to a trusted layer. I'm not sure how others would feel about pushing more of the IPC stack down to the kernel, but at least it would make it easier for a v4l2 driver to leverage the coprocessors..
BR, -R
-- Regards,
Laurent Pinchart
Hi,
On Thu, Feb 24, 2011 at 3:04 PM, Hans Verkuil hverkuil@xs4all.nl wrote:
On Thursday, February 24, 2011 13:29:56 Linus Walleij wrote:
2011/2/23 Sachin Gupta sachin.gupta@linaro.org:
The imaging coprocessor in today's platforms have a general purpose DSP attached to it I have seen some work being done to use this DSP for graphics/audio processing in case the camera use case is not being tried or also if the camera usecases does not consume the full bandwidth of this dsp.I am not sure how v4l2 would fit in such an architecture,
Earlier in this thread I discussed TI:s DSPbridge.
In drivers/staging/tidspbridge http://omappedia.org/wiki/DSPBridge_Project you find the TI hackers happy at work with providing a DSP accelerator subsystem.
Isn't it possible for a V4L2 component to use this interface (or something more evolved, generic) as backend for assorted DSP offloading?
Yes it is, and it has been part of my to-do list for some time now.
So using one kernel framework does not exclude using another one at the same time. Whereas something like DSPbridge will load firmware into DSP accelerators and provide control/datapath for that, this can in turn be used by some camera or codec which in turn presents a V4L2 or ALSA interface.
Yes, something along those lines can be done.
While normally V4L2 talks to hardware it is perfectly fine to talk to a DSP instead.
The hardest part will be to identify the missing V4L2 API pieces and design and add them. I don't think the actual driver code will be particularly hard. It should be nothing more than a thin front-end for the DSP. Of course, that's just theory at the moment :-)
The pieces are already known. I started a project called gst-dsp, which I plan to split into the gst part and the part that communicates with the DSP; the latter part can move to the kernel side with a v4l2 interface.
It's easier to identify the code in the patches for FFmpeg: http://article.gmane.org/gmane.comp.video.ffmpeg.devel/116798
The problem is that someone has to do the actual work for the initial driver. And I expect that it will be a substantial amount of work. Future drivers should be *much* easier, though.
A good argument for doing this work is that this API can hide which parts of the video subsystem are hardware and which are software. The application really doesn't care how it is organized. What is done in hardware on one SoC might be done on a DSP instead on another SoC. But the end result is pretty much the same.
Exactly.
On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete robert.fekete@linaro.org wrote:
Hi,
In order to expand this knowledge outside of Linaro I took the Liberty of inviting both linux-media@vger.kernel.org and gstreamer-devel@lists.freedesktop.org. For any newcomer I really recommend to do some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html ("v4l2 vs omx for camera" thread) before making any comments. And sign up for Linaro-dev while you are at it :-)
To make a long story short: Different vendors provide custom OpenMax solutions for say Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already and is evolving with mediacontroller as well. Then there is the integration in Gstreamer...Which solution is the best way forward. Current discussions so far puts V4L2 greatly in favor of OMX. Please have in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether Camera drivers should have OMX or V4L2 as the driver front end? This may perhaps apply to video codecs as well. Then there is how to in best of ways make use of this in GStreamer in order to achieve no copy highly efficient multimedia pipelines. Is gst-omx the way forward?
just fwiw, there were some patches to make v4l2src work with userptr buffers in case the camera has an mmu and can handle any random non-physically-contiguous buffer.. so there is in theory no reason why a gst capture pipeline could not be zero copy and capture directly into buffers allocated from the display
V4L2 also allows userspace to pass pointers to contiguous physical memory. On TI systems this memory is usually obtained via the out-of-tree cmem module.
Certainly a more general way to allocate buffers that any of the hw blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc) could use, and possibly share across-process for some zero copy DRI style rendering, would be nice. Perhaps V4L2_MEMORY_GEM?
There are two parts to this: first of all you need a way to allocate large buffers. A CMA patch series that does this is available, but not yet merged. I'm not sure of the latest status of this series.
The other part is letting everyone use and share these buffers. There isn't anything for this yet. We have discussed this in the past and we need something generic for this that all subsystems can use. It's not a good idea to tie this to any specific framework like GEM. Instead, any subsystem should be able to use the same subsystem-independent buffer pool API.
The actual code is probably not too bad, but trying to coordinate this over all subsystems is not an easy task.
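Purely to illustrate the shape such a subsystem-independent pool could take, here is a hypothetical header sketch; none of these symbols exist in the kernel, and the real work would be agreeing on allocation attributes, lifetime rules and cross-process export:

/* Hypothetical buffer-pool API, invented only to illustrate the idea of a
 * subsystem-independent allocator that V4L2, GPU and DSP drivers could all
 * share.  None of these symbols exist in the kernel. */
#include <linux/types.h>

struct mmbuf;                           /* opaque shared buffer */

struct mmbuf_attrs {
        size_t  size;
        bool    contiguous;             /* physically contiguous? */
        bool    cached;                 /* CPU-cached mapping wanted? */
};

/* Allocate a buffer from the global pool. */
struct mmbuf *mmbuf_alloc(const struct mmbuf_attrs *attrs);

/* Reference counting so several drivers and processes can share it. */
void mmbuf_get(struct mmbuf *buf);
void mmbuf_put(struct mmbuf *buf);

/* Resolve to something a device can use, or export a handle that can be
 * passed to another process or subsystem. */
dma_addr_t mmbuf_dma_addr(struct mmbuf *buf);
int mmbuf_export_fd(struct mmbuf *buf);

The interesting design questions (cache maintenance, pinning, who owns the backing memory) are exactly the ones that make coordinating this across subsystems hard.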
[snip]
But maybe it would be nice to have a way to have sensor driver on the linux side, pipelined with hw and imaging drivers on a co-processor for various algorithms and filters with configuration all exposed to userspace thru MCF.. I'm not immediately sure how this would work, but it sounds nice at least ;-)
MCF? What does that stand for?
The question is if the Linux kernel and V4L2 is ready to incorporate several HW(DSP, CPU, ISP, xxHW) in an imaging pipeline for instance. The reason Embedded Vendors provide custom solutions is to implement low power non(or minimal) CPU intervention pipelines where dedicated HW does the work most of the time(like full screen Video Playback).
A common way of managing memory would of course also be necessary as well, like hwmem(search for hwmem in Linux-mm) handles to pass buffers in between different drivers and processes all the way from sources(camera, video parser/decode) to sinks(display, hdmi, video encoders(record))
(ahh, ok, you have some of the same thoughts as I do regarding sharing buffers between various drivers)
Perhaps the time is right for someone to start working on this?
Regards,
Hans
[snip]
On Thursday 24 February 2011 14:17:12 Hans Verkuil wrote:
On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete wrote:
Hi,
In order to expand this knowledge outside of Linaro I took the Liberty of inviting both linux-media@vger.kernel.org and gstreamer-devel@lists.freedesktop.org. For any newcomer I really recommend to do some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html ("v4l2 vs omx for camera" thread) before making any comments. And sign up for Linaro-dev while you are at it :-)
To make a long story short: Different vendors provide custom OpenMax solutions for say Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already and is evolving with mediacontroller as well. Then there is the integration in Gstreamer...Which solution is the best way forward. Current discussions so far puts V4L2 greatly in favor of OMX. Please have in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether Camera drivers should have OMX or V4L2 as the driver front end? This may perhaps apply to video codecs as well. Then there is how to in best of ways make use of this in GStreamer in order to achieve no copy highly efficient multimedia pipelines. Is gst-omx the way forward?
just fwiw, there were some patches to make v4l2src work with userptr buffers in case the camera has an mmu and can handle any random non-physically-contiguous buffer.. so there is in theory no reason why a gst capture pipeline could not be zero copy and capture directly into buffers allocated from the display
V4L2 also allows userspace to pass pointers to contiguous physical memory. On TI systems this memory is usually obtained via the out-of-tree cmem module.
On the OMAP3 the ISP doesn't require physically contiguous memory. User pointers can be used quite freely, except that they introduce cache management issues on ARM when speculative prefetching comes into play (those issues are currently ignored completely).
Certainly a more general way to allocate buffers that any of the hw blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc) could use, and possibly share across-process for some zero copy DRI style rendering, would be nice. Perhaps V4L2_MEMORY_GEM?
There are two parts to this: first of all you need a way to allocate large buffers. The CMA patch series is available (but not yet merged) that does this. I'm not sure of the latest status of this series.
Some platforms don't require contiguous memory. What we need is a way to allocate memory in the kernel with various options, and use that memory in various drivers (V4L2, GPU, ...)
The other part is that everyone can use and share these buffers. There isn't anything for this yet. We have discussed this in the past and we need something generic for this that all subsystems can use. It's not a good idea to tie this to any specific framework like GEM. Instead any subsystem should be able to use the same subsystem-independent buffer pool API.
The actual code is probably not too bad, but trying to coordinate this over all subsystems is not an easy task.
[snip]
Not even different vendor's omx camera implementations are compatible.. there seems to be too much various in ISP architecture and features for this.
Another point, and possibly the reason that TI went the OMX camera route, was that a userspace API made it possible to move the camera driver all to a co-processor (with advantages of reduced interrupt latency for SIMCOP processing, and a larger part of the code being OS independent).. doing this in a kernel mode driver would have required even more of syslink in the kernel.
But maybe it would be nice to have a way to have sensor driver on the linux side, pipelined with hw and imaging drivers on a co-processor for various algorithms and filters with configuration all exposed to userspace thru MCF.. I'm not immediately sure how this would work, but it sounds nice at least ;-)
MCF? What does that stand for?
Media Controller Framework I guess.
The question is if the Linux kernel and V4L2 is ready to incorporate several HW(DSP, CPU, ISP, xxHW) in an imaging pipeline for instance. The reason Embedded Vendors provide custom solutions is to implement low power non(or minimal) CPU intervention pipelines where dedicated HW does the work most of the time(like full screen Video Playback).
A common way of managing memory would of course also be necessary as well, like hwmem(search for hwmem in Linux-mm) handles to pass buffers in between different drivers and processes all the way from sources(camera, video parser/decode) to sinks(display, hdmi, video encoders(record))
(ahh, ok, you have some of the same thoughts as I do regarding sharing buffers between various drivers)
Perhaps the time is right for someone to start working on this?
Totally. It's time to start working on lots of things :-)
On Thu, Feb 24, 2011 at 10:17 PM, Hans Verkuil hverkuil@xs4all.nl wrote:
On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete robert.fekete@linaro.org wrote:
Hi,
In order to expand this knowledge outside of Linaro I took the Liberty of inviting both linux-media@vger.kernel.org and gstreamer-devel@lists.freedesktop.org. For any newcomer I really recommend to do some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html ("v4l2 vs omx for camera" thread) before making any comments. And sign up for Linaro-dev while you are at it :-)
To make a long story short: Different vendors provide custom OpenMax solutions for say Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already and is evolving with mediacontroller as well. Then there is the integration in Gstreamer...Which solution is the best way forward. Current discussions so far puts V4L2 greatly in favor of OMX. Please have in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether Camera drivers should have OMX or V4L2 as the driver front end? This may perhaps apply to video codecs as well. Then there is how to in best of ways make use of this in GStreamer in order to achieve no copy highly efficient multimedia pipelines. Is gst-omx the way forward?
just fwiw, there were some patches to make v4l2src work with userptr buffers in case the camera has an mmu and can handle any random non-physically-contiguous buffer.. so there is in theory no reason why a gst capture pipeline could not be zero copy and capture directly into buffers allocated from the display
V4L2 also allows userspace to pass pointers to contiguous physical memory. On TI systems this memory is usually obtained via the out-of-tree cmem module.
Certainly a more general way to allocate buffers that any of the hw blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc) could use, and possibly share across-process for some zero copy DRI style rendering, would be nice. Perhaps V4L2_MEMORY_GEM?
There are two parts to this: first of all you need a way to allocate large buffers. The CMA patch series is available (but not yet merged) that does this. I'm not sure of the latest status of this series.
The ARM maintainer still doesn't agree with these patches, since they don't solve the ARM problem of mapping the same memory with different attributes, but we will try to send the CMA v9 patch soon.
We really do require a physical memory management module. Each chip vendor uses their own implementation: our approach is called CMA, others call it cmem, carveout, hwmem, and so on.
I think Laurent's approach is a similar one.
We will try again to merge CMA.
Thank you, Kyungmin Park
The other part is that everyone can use and share these buffers. There isn't anything for this yet. We have discussed this in the past and we need something generic for this that all subsystems can use. It's not a good idea to tie this to any specific framework like GEM. Instead any subsystem should be able to use the same subsystem-independent buffer pool API.
The actual code is probably not too bad, but trying to coordinate this over all subsystems is not an easy task.
Let the discussion continue...
On 17 February 2011 14:48, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote: > OMX main purpose is to handle multimedia hardware and offer an > interface to that HW that looks identical indenpendent of the vendor > delivering that hardware, much like the v4l2 or USB subsystems tries > to > do. And yes optimally it should be implemented in drivers/omx in > Linux > and a user space library on top of that.
Thanks for clarifying this part, it was unclear to me. The reason being that it seems OMX does not imply userspace/kernelspace separation, and I was thinking more of it as a userspace lib. Now my understanding is that if e.g. OpenMAX defines a certain data structure, say for a PCM frame or whatever, then that exact struct is supposed to be used by the kernelspace/userspace interface, and defined in the include file exported by the kernel.
> It might be that some alignment also needs to be made between 4vl2 > and > other OS's implementation, to ease developing drivers for many OSs > (sorry I don't know these details, but you ST-E guys should know).
The basic conflict I would say is that Linux has its own API+ABI, which is defined by V4L and ALSA through a community process without much thought about any existing standard APIs. (In some cases also predating them.)
> By the way IL is about to finalize version 1.2 of OpenMAX IL which > is > more than a years work of aligning all vendors and fixing unclear > and > buggy parts.
I suspect that the basic problem with Khronos OpenMAX right now is how to handle communities - for example the X consortium had something like the same problem a while back, only member companies could partake in the standard process, and they need of course to pay an upfront fee for that, and the majority of these companies didn't exactly send Linux community members to the meetings.
And now all the companies who took part in OpenMAX somehow end up having to do a lot of upfront community work if they want to drive the API:s in a certain direction, discuss it again with the V4L and ALSA maintainers and so on. Which takes a lot of time and patience with uncertain outcome, since this process is autonomous from Khronos. Nobody seems to be doing this, I javen't seen a single patch aimed at trying to unify the APIs so far. I don't know if it'd be welcome.
This coupled with strict delivery deadlines and a marketing will to state conformance to OpenMAX of course leads companies into solutions breaking the Linux kernelspace API to be able to present this.
From my experience with OMX, one of the issues is that companies usually extend the API to fullfill their platform's needs, without going through any standardization process. Coupled with the lack of open and free reference implementation and test tools, this more or less means that OMX implementations are not really compatible with eachother, making OMX-based solution not better than proprietary solutions.
Now I think we have a pretty clear view of the problem, I don't know what could be done about it though :-/
One option might be to create a OMX wrapper library around the V4L2 API. Something similar is already available for the old V4L1 API (now removed from the kernel) that allows apps that still speak V4L1 only to use the V4L2 API. This is done in the libv4l1 library. The various v4l libraries are maintained here: http://git.linuxtv.org/v4l-utils.git
Adding a libomx might not be such a bad idea. Linaro might be the appropriate organization to look into this. Any missing pieces in V4L2 needed to create a fully functioning omx API can be discussed and solved.
Making this part of v4l-utils means that it is centrally maintained and automatically picked up by distros.
It will certainly be a non-trivial exercise, but it is a one-time job that should solve a lot of problems. But someone has to do it...
It's an option, but why would that be needed ? Again from my (probably limited) OMX experience, platforms expose higher-level APIs to applications, implemented on top of OMX. If the OMX layer is itself implemented on top of V4L2, it would just be an extraneous useless internal layer that could (should ?) be removed completely.
[Robert F] This would be the case in a GStreamer driven multimedia, i.e. Implement GStreamer elements using V4L2 directly(or camerabin using v4l2 directly). Perhaps some vendors would provide a library in between as well but that could be libv4l in that case. If someone would have an OpenMAX AL/IL media framework an OMX component would make sense to have but in this case it would be a thinner OMX component which in turn is implemented using V4L2. But it might be that Khronos provides OS independent components that by vendors gets implemented as the actual HW driver forgetting that there is a big difference in the driver model of an RTOS system compared to Linux(user/kernel space) or any OS...never mind.
Not even different vendor's omx camera implementations are compatible.. there seems to be too much various in ISP architecture and features for this.
Another point, and possibly the reason that TI went the OMX camera route, was that a userspace API made it possible to move the camera driver all to a co-processor (with advantages of reduced interrupt latency for SIMCOP processing, and a larger part of the code being OS independent).. doing this in a kernel mode driver would have required even more of syslink in the kernel.
But maybe it would be nice to have a way to have sensor driver on the linux side, pipelined with hw and imaging drivers on a co-processor for various algorithms and filters with configuration all exposed to userspace thru MCF.. I'm not immediately sure how this would work, but it sounds nice at least ;-)
MCF? What does that stand for?
The question is if the Linux kernel and V4L2 is ready to incorporate several HW(DSP, CPU, ISP, xxHW) in an imaging pipeline for instance. The reason Embedded Vendors provide custom solutions is to implement low power non(or minimal) CPU intervention pipelines where dedicated HW does the work most of the time(like full screen Video Playback).
A common way of managing memory would of course also be necessary as well, like hwmem(search for hwmem in Linux-mm) handles to pass buffers in between different drivers and processes all the way from sources(camera, video parser/decode) to sinks(display, hdmi, video encoders(record))
(ahh, ok, you have some of the same thoughts as I do regarding sharing buffers between various drivers)
Perhaps the time is right for someone to start working on this?
Regards,
Hans
Perhaps GStreamer experts would like to comment on the future plans ahead for zero copying/IPC and low power HW use cases? Could Gstreamer adapt some ideas from OMX IL making OMX IL obsolete?
perhaps OMX should adapt some of the ideas from GStreamer ;-)
OpenMAX is missing some very obvious stuff to make it an API for portable applications like autoplugging, discovery of capabilities/formats supported, etc.. at least with gst I can drop in some hw specific plugins and have apps continue to work without code changes.
Anyways, it would be an easier argument to make if GStreamer was the one true framework across different OSs, or at least across linux and android.
BR, -R
Answering these questions could lead to improved guidelines on what embedded device vendors would provide as hw-driver front-ends in the future. OMX is just one of these. Perhaps we could do better to fit and evolve the Linux eco-system?
Regarding using V4L to communicate with DSPs/other processors: that too could be something for Linaro to pick up: experiment with it for one particular board, see what (if anything) is needed to make this work. I expect it to be pretty easy, but again, nobody has actually done the initial work.
The main issue with the V4L2 API compared with the OMX API is that V4L2 is a kernelspace/userspace API only, while OMX can live in userspace. When the need to communicate with other processors (CPUs, DSP, dedicated image processing hardware blocks, ...) arises, platforms usually ship with a thin kernel layer to handle low-level communication protocols, and a userspace OMX library that does the bulk of the work. We would need to be able to do something similar with V4L2.
[Robert F] Ok, doesn't mediacontroller/subdevices solve many of these issues?
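For reference, a rough sketch of how userspace sees such a pipeline through the media controller API (assuming a /dev/media0 node; error handling trimmed): it simply enumerates the entities the driver exposes (sensor, ISP blocks, DMA engines), after which links between them can be configured with MEDIA_IOC_SETUP_LINK.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
    struct media_entity_desc entity;
    int fd = open("/dev/media0", O_RDWR);

    if (fd < 0)
        return 1;

    memset(&entity, 0, sizeof(entity));
    entity.id = MEDIA_ENT_ID_FLAG_NEXT;        /* start at the first entity */
    while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) == 0) {
        printf("entity %u: %s (type 0x%08x, %u pads, %u links)\n",
               entity.id, entity.name, entity.type,
               entity.pads, entity.links);
        entity.id |= MEDIA_ENT_ID_FLAG_NEXT;   /* ask for the next one */
    }

    return 0;
}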
Once you have an example driver, then it should be much easier for others to follow.
As Linus said, companies are unlikely to start doing this by themselves, but it seems that this work would exactly fit the Linaro purpose. From the Linaro homepage:
"Linaro™ brings together the open source community and the electronics industry to work on key projects, deliver great tools, reduce industry wide fragmentation and provide common foundations for Linux software distributions and stacks to land on."
Spot on, I'd say :-)
Just for the record, let me say again that the V4L2 community will be very happy to assist with this when it comes to extending/improving the V4L2 API to make all this possible.
The first step would probably be to decide what Linux needs. Then I'll also be happy to assist with the implementation phase :-)
-- Regards,
Laurent Pinchart
BR /Robert Fekete
-- Hans Verkuil - video4linux developer - sponsored by Cisco
Hi,
On Thursday 24 February 2011 15:48:20 Kyungmin Park wrote:
On Thu, Feb 24, 2011 at 10:17 PM, Hans Verkuil hverkuil@xs4all.nl wrote:
On Tuesday, February 22, 2011 03:44:19 Clark, Rob wrote:
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete wrote:
Hi,
In order to expand this knowledge outside of Linaro I took the Liberty of inviting both linux-media@vger.kernel.org and gstreamer-devel@lists.freedesktop.org. For any newcomer I really recommend to do some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html ("v4l2 vs omx for camera" thread) before making any comments. And sign up for Linaro-dev while you are at it :-)
To make a long story short: different vendors provide custom OpenMax solutions for, say, Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already, and it is evolving with the media controller as well. Then there is the integration in GStreamer... Which solution is the best way forward? Current discussions so far greatly favor V4L2 over OMX. Please keep in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether camera drivers should have OMX or V4L2 as the driver front end. This may perhaps apply to video codecs as well. Then there is the question of how to best make use of this in GStreamer in order to achieve highly efficient, zero-copy multimedia pipelines. Is gst-omx the way forward?
just fwiw, there were some patches to make v4l2src work with userptr buffers in case the camera has an mmu and can handle any random non-physically-contiguous buffer.. so there is in theory no reason why a gst capture pipeline could not be zero copy and capture directly into buffers allocated from the display
V4L2 also allows userspace to pass pointers to contiguous physical memory. On TI systems this memory is usually obtained via the out-of-tree cmem module.
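For the record, the USERPTR path looks roughly like this (a sketch only; it assumes the device is already opened and format-negotiated, and that the memory comes from somewhere else, e.g. the display side):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Queue one externally allocated buffer into a capture device and start
 * streaming. Real code would use several buffers and check every call. */
static int queue_external_buffer(int fd, void *mem, size_t length)
{
    struct v4l2_requestbuffers req;
    struct v4l2_buffer buf;
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    memset(&req, 0, sizeof(req));
    req.count = 1;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_USERPTR;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
        return -1;

    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_USERPTR;
    buf.index = 0;
    buf.m.userptr = (unsigned long)mem;        /* no copy: DMA lands here */
    buf.length = length;
    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)
        return -1;

    return ioctl(fd, VIDIOC_STREAMON, &type);
}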
Certainly a more general way to allocate buffers that any of the hw blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc) could use, and possibly share across-process for some zero copy DRI style rendering, would be nice. Perhaps V4L2_MEMORY_GEM?
There are two parts to this: first of all you need a way to allocate large buffers. The CMA patch series that does this is available, but not yet merged. I'm not sure of the latest status of this series.
The ARM maintainer still doesn't agree with these patches since they don't solve the problem of different memory attribute mappings on ARM, but we will try to send the CMA v9 patch soon.
We really require a physical memory management module. Each chip vendor uses their own implementation: our approach is called CMA, others call it cmem, carveout, hwmem and so on.
I think Laurent's approach is a similar one.
Just for the record, my global buffers pool RFC didn't try to solve the contiguous memory allocation problem. It aimed at providing drivers (and applications) with an API to allocate and use buffers. How the memory is allocated is outside the scope of the global buffers pool, CMA makes perfect sense for that.
We will try again to merge CMA.
On Thu, Feb 24, 2011 at 7:17 AM, Hans Verkuil hverkuil@xs4all.nl wrote:
There are two parts to this: first of all you need a way to allocate large buffers. The CMA patch series is available (but not yet merged) that does this. I'm not sure of the latest status of this series.
The other part is that everyone can use and share these buffers. There isn't anything for this yet. We have discussed this in the past and we need something generic for this that all subsystems can use. It's not a good idea to tie this to any specific framework like GEM. Instead any subsystem should be able to use the same subsystem-independent buffer pool API.
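Just to make that concrete, here is a purely hypothetical sketch of what such a subsystem-independent handle could look like. None of these names exist in any kernel today; it is only meant to illustrate the exporter/importer idea, where an allocator (CMA, hwmem, a GEM driver, ...) hands out a refcounted handle that other drivers and userspace can pass around without copying.

/* hypothetical -- invented names, kernel-style C for illustration only */
struct mmbuf;                                   /* opaque, refcounted */

struct mmbuf_ops {
    struct sg_table *(*pin)(struct mmbuf *buf, struct device *dev);
    void (*unpin)(struct mmbuf *buf, struct sg_table *sgt);
    void (*release)(struct mmbuf *buf);
};

struct mmbuf *mmbuf_create(const struct mmbuf_ops *ops, size_t size);
int mmbuf_export_fd(struct mmbuf *buf);         /* hand to userspace   */
struct mmbuf *mmbuf_import_fd(int fd);          /* importer side: V4L2,
                                                   GEM, a DSP bridge... */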
yeah, doesn't need to be GEM.. but should at least inter-operate so we can share buffers with the display/gpu..
[snip]
But maybe it would be nice to have a way to have sensor driver on the linux side, pipelined with hw and imaging drivers on a co-processor for various algorithms and filters with configuration all exposed to userspace thru MCF.. I'm not immediately sure how this would work, but it sounds nice at least ;-)
MCF? What does that stand for?
sorry, v4l2 media controller framework
BR, -R
On Tuesday 22 February 2011 03:44:19 Clark, Rob wrote:
On Fri, Feb 18, 2011 at 10:39 AM, Robert Fekete wrote:
In order to expand this knowledge outside of Linaro I took the Liberty of inviting both linux-media@vger.kernel.org and gstreamer-devel@lists.freedesktop.org. For any newcomer I really recommend to do some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html ("v4l2 vs omx for camera" thread) before making any comments. And sign up for Linaro-dev while you are at it :-)
To make a long story short: Different vendors provide custom OpenMax solutions for say Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already and is evolving with mediacontroller as well. Then there is the integration in Gstreamer...Which solution is the best way forward. Current discussions so far puts V4L2 greatly in favor of OMX. Please have in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether Camera drivers should have OMX or V4L2 as the driver front end? This may perhaps apply to video codecs as well. Then there is how to in best of ways make use of this in GStreamer in order to achieve no copy highly efficient multimedia pipelines. Is gst-omx the way forward?
just fwiw, there were some patches to make v4l2src work with userptr buffers in case the camera has an mmu and can handle any random non-physically-contiguous buffer.. so there is in theory no reason why a gst capture pipeline could not be zero copy and capture directly into buffers allocated from the display
Certainly a more general way to allocate buffers that any of the hw blocks (display, imaging, video encoders/decoders, 3d/2d hw, etc) could use, and possibly share across-process for some zero copy DRI style rendering, would be nice. Perhaps V4L2_MEMORY_GEM?
This is something we first discussed in the end of 2009. We need to get people from different subsystems around the same table, with memory management specialists (especially for ARM), and lay the ground for a common memory management system. Discussions on the V4L2 side called this the global buffers pool (see http://lwn.net/Articles/353044/ for instance, more information can be found in the linux-media list archives).
[snip]
Let the discussion continue...
On 17 February 2011 14:48, Laurent Pinchart wrote:
On Thursday 10 February 2011 08:47:15 Hans Verkuil wrote:
On Thursday, February 10, 2011 08:17:31 Linus Walleij wrote:
On Wed, Feb 9, 2011 at 8:44 PM, Harald Gustafsson wrote:
OMX's main purpose is to handle multimedia hardware and offer an interface to that HW that looks identical independent of the vendor delivering that hardware, much like the v4l2 or USB subsystems try to do. And yes, optimally it should be implemented in drivers/omx in Linux with a user space library on top of that.
Thanks for clarifying this part, it was unclear to me. The reason being that it seems OMX does not imply userspace/kernelspace separation, and I was thinking more of it as a userspace lib. Now my understanding is that if e.g. OpenMAX defines a certain data structure, say for a PCM frame or whatever, then that exact struct is supposed to be used by the kernelspace/userspace interface, and defined in the include file exported by the kernel.
It might be that some alignment also needs to be made between v4l2 and other OS's implementations, to ease developing drivers for many OSs (sorry I don't know these details, but you ST-E guys should know).
The basic conflict I would say is that Linux has its own API+ABI, which is defined by V4L and ALSA through a community process without much thought about any existing standard APIs. (In some cases also predating them.)
By the way, the IL group is about to finalize version 1.2 of OpenMAX IL, which is more than a year's work of aligning all vendors and fixing unclear and buggy parts.
I suspect that the basic problem with Khronos OpenMAX right now is how to handle communities - for example, the X consortium had much the same problem a while back: only member companies could take part in the standards process, they of course had to pay an upfront fee for that, and the majority of these companies didn't exactly send Linux community members to the meetings.
And now all the companies who took part in OpenMAX somehow end up having to do a lot of upfront community work if they want to drive the APIs in a certain direction: discuss it again with the V4L and ALSA maintainers and so on. That takes a lot of time and patience with an uncertain outcome, since this process is autonomous from Khronos. Nobody seems to be doing this; I haven't seen a single patch aimed at trying to unify the APIs so far. I don't know if it'd be welcome.
Patches are usually welcome, but one issue with OMX is that it doesn't feel like a real Linux API. Linux developers usually don't like to be forced to use alien APIs that originate in other worlds (such as Windows) and don't feel good on Linux.
This, coupled with strict delivery deadlines and a marketing desire to state conformance to OpenMAX, of course leads companies into solutions that break the Linux kernelspace API in order to present this.
The end result is that Khronos publishes an API spec that chip vendors implement, but nobody in the community is interested in it. OMX is something the Linux community mostly ignores. And I don't see this changing any time soon, or even ever. OMX was designed without the community. If Khronos really want good Linux support, they need to ditch OMX and design something new with the community. I don't see this happening anytime soon though, so the community will keep working on its APIs and pushing vendors to implement them (or even create community-supported implementations). That's a complete waste of resources for everybody.
From my experience with OMX, one of the issues is that companies usually extend the API to fulfill their platform's needs, without going through any standardization process. Coupled with the lack of an open and free reference implementation and test tools, this more or less means that OMX implementations are not really compatible with each other, making OMX-based solutions no better than proprietary solutions.
Now I think we have a pretty clear view of the problem, I don't know what could be done about it though :-/
One option might be to create an OMX wrapper library around the V4L2 API. Something similar is already available for the old V4L1 API (now removed from the kernel) that allows apps that still speak only V4L1 to use the V4L2 API. This is done in the libv4l1 library. The various v4l libraries are maintained here: http://git.linuxtv.org/v4l-utils.git
Adding a libomx might not be such a bad idea. Linaro might be the appropriate organization to look into this. Any missing pieces in V4L2 needed to create a fully functioning omx API can be discussed and solved.
Making this part of v4l-utils means that it is centrally maintained and automatically picked up by distros.
It will certainly be a non-trivial exercise, but it is a one-time job that should solve a lot of problems. But someone has to do it...
It's an option, but why would that be needed ? Again from my (probably limited) OMX experience, platforms expose higher-level APIs to applications, implemented on top of OMX. If the OMX layer is itself implemented on top of V4L2, it would just be an extraneous useless internal layer that could (should ?) be removed completely.
This would be the case in a GStreamer driven multimedia, i.e. Implement GStreamer elements using V4L2 directly(or camerabin using v4l2 directly). Perhaps some vendors would provide a library in between as well but that could be libv4l in that case. If someone would have an OpenMAX AL/IL media framework an OMX component would make sense to have but in this case it would be a thinner OMX component which in turn is implemented using V4L2. But it might be that Khronos provides OS independent components that by vendors gets implemented as the actual HW driver forgetting that there is a big difference in the driver model of an RTOS system compared to Linux(user/kernel space) or any OS...never mind.
Not even different vendor's omx camera implementations are compatible.. there seems to be too much various in ISP architecture and features for this.
Another point, and possibly the reason that TI went the OMX camera route, was that a userspace API made it possible to move the camera driver all to a co-processor (with advantages of reduced interrupt latency for SIMCOP processing, and a larger part of the code being OS independent).. doing this in a kernel mode driver would have required even more of syslink in the kernel.
That's a very valid point. This is why we need to think about what we want as a Linux middleware for multimedia devices. The conclusion might be that everything needs to be pushed in the kernel (although I doubt that), but the goal is to give a clear message to chip vendors. This is in my opinion one of the most urgent tasks.
But maybe it would be nice to have a way to have sensor driver on the linux side, pipelined with hw and imaging drivers on a co-processor for various algorithms and filters with configuration all exposed to userspace thru MCF.. I'm not immediately sure how this would work, but it sounds nice at least ;-)
If the IPC communication layer is in the kernel, that shouldn't be very difficult. If it's in userspace, we need the help of userspace libraries with some kind of userspace driver (in my opinion at least).
The question is if the Linux kernel and V4L2 is ready to incorporate several HW(DSP, CPU, ISP, xxHW) in an imaging pipeline for instance. The reason Embedded Vendors provide custom solutions is to implement low power non(or minimal) CPU intervention pipelines where dedicated HW does the work most of the time(like full screen Video Playback).
A common way of managing memory would of course also be necessary as well, like hwmem(search for hwmem in Linux-mm) handles to pass buffers in between different drivers and processes all the way from sources(camera, video parser/decode) to sinks(display, hdmi, video encoders(record))
(ahh, ok, you have some of the same thoughts as I do regarding sharing buffers between various drivers)
Perhaps GStreamer experts would like to comment on the future plans ahead for zero copying/IPC and low power HW use cases? Could Gstreamer adapt some ideas from OMX IL making OMX IL obsolete?
perhaps OMX should adapt some of the ideas from GStreamer ;-)
I'd very much like to see GStreamer (or something else, maybe lower level, but community-maintained) replace OMX.
Does anyone have any GStreamer vs. OMX memory and CPU usage numbers ? I suppose it depends on the actual OMX implementations, but what I'd like to know is if GStreamer is too heavy for platforms on which OMX works fine.
OpenMAX is missing some very obvious stuff to make it an API for portable applications like autoplugging, discovery of capabilities/formats supported, etc.. at least with gst I can drop in some hw specific plugins and have apps continue to work without code changes.
Anyways, it would be an easier argument to make if GStreamer was the one true framework across different OSs, or at least across linux and android.
Let's push for GStreamer on Android then :-)
On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Perhaps GStreamer experts would like to comment on the future plans ahead for zero copying/IPC and low power HW use cases? Could Gstreamer adapt some ideas from OMX IL making OMX IL obsolete?
perhaps OMX should adapt some of the ideas from GStreamer ;-)
I'd very much like to see GStreamer (or something else, maybe lower level, but community-maintainted) replace OMX.
Yes, it would be great to have something that wraps all the hardware acceleration and could have support for software codecs too, all in a standard interface. It would also be great if this interface would be used in the upper layers like GStreamer, VLC, etc. Kind of what OMX was supposed to be, but open [1].
Oh wait, I'm describing FFmpeg :) (supports v4l2, VA-API, VDPAU, DirectX, and soon OMAP3 DSP)
Cheers.
[1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&targ...
On Saturday, February 26, 2011 14:38:50 Felipe Contreras wrote:
On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Perhaps GStreamer experts would like to comment on the future plans ahead for zero copying/IPC and low power HW use cases? Could Gstreamer adapt some ideas from OMX IL making OMX IL obsolete?
perhaps OMX should adapt some of the ideas from GStreamer ;-)
I'd very much like to see GStreamer (or something else, maybe lower level, but community-maintainted) replace OMX.
Yes, it would be great to have something that wraps all the hardware acceleration and could have support for software codecs too, all in a standard interface. It would also be great if this interface would be used in the upper layers like GStreamer, VLC, etc. Kind of what OMX was supposed to be, but open [1].
Oh wait, I'm describing FFmpeg :) (supports vl42, VA-API, VDPAU, DirectX, and soon OMAP3 DSP)
Cheers.
[1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&targ...
Are there any gstreamer/linaro/etc core developers attending the ELC in San Francisco in April? I think it might be useful to get together before, during or after the conference and see if we can turn this discussion into something more concrete.
It seems to me that there is an overall agreement on what should be done, but that we are far from anything concrete.
Regards,
Hans
On Sat, 2011-02-26 at 14:47 +0100, Hans Verkuil wrote:
On Saturday, February 26, 2011 14:38:50 Felipe Contreras wrote:
On Thu, Feb 24, 2011 at 3:27 PM, Laurent Pinchart laurent.pinchart@ideasonboard.com wrote:
Perhaps GStreamer experts would like to comment on the future plans ahead for zero copying/IPC and low power HW use cases? Could Gstreamer adapt some ideas from OMX IL making OMX IL obsolete?
perhaps OMX should adapt some of the ideas from GStreamer ;-)
I'd very much like to see GStreamer (or something else, maybe lower level, but community-maintainted) replace OMX.
Yes, it would be great to have something that wraps all the hardware acceleration and could have support for software codecs too, all in a standard interface. It would also be great if this interface would be used in the upper layers like GStreamer, VLC, etc. Kind of what OMX was supposed to be, but open [1].
Oh wait, I'm describing FFmpeg :) (supports vl42, VA-API, VDPAU, DirectX, and soon OMAP3 DSP)
Cheers.
[1] http://freedesktop.org/wiki/GstOpenMAX?action=AttachFile&do=get&targ...
Are there any gstreamer/linaro/etc core developers attending the ELC in San Francisco in April? I think it might be useful to get together before, during or after the conference and see if we can turn this discussion in something more concrete.
It seems to me that there is an overall agreement of what should be done, but that we are far from anything concrete.
I will be there and this was definitely a topic I intended to talk about. See you there.
Edward
Regards,
Hans
On Saturday 26 February 2011, Edward Hervey wrote:
Are there any gstreamer/linaro/etc core developers attending the ELC in San Francisco in April? I think it might be useful to get together before, during or after the conference and see if we can turn this discussion in something more concrete.
It seems to me that there is an overall agreement of what should be done, but that we are far from anything concrete.
I will be there and this was definitely a topic I intended to talk about. See you there.
I'll also be there. Should we organize an official BOF session for this and invite more people?
Arnd
Hi All,
Linaro is currently collecting requirements for the next cycle. If you all agree, we can set up a call to discuss what would be interesting things to work on for this topic next cycle, and then I can take the ideas generated to the Linaro TSC for approval.
Thanks Sachin
On Mon, Feb 28, 2011 at 1:19 AM, Arnd Bergmann arnd@arndb.de wrote:
On Saturday 26 February 2011, Edward Hervey wrote:
Are there any gstreamer/linaro/etc core developers attending the ELC in San Francisco in April? I think it might be useful to get together before, during or after the conference and see if we can turn this discussion in something more concrete.
It seems to me that there is an overall agreement of what should be done, but that we are far from anything concrete.
I will be there and this was definitely a topic I intended to talk about. See you there.
I'll also be there. Should we organize an official BOF session for this and invite more people?
Arnd
On Sunday, February 27, 2011 20:49:37 Arnd Bergmann wrote:
On Saturday 26 February 2011, Edward Hervey wrote:
Are there any gstreamer/linaro/etc core developers attending the ELC in San Francisco in April? I think it might be useful to get together before, during or after the conference and see if we can turn this discussion in something more concrete.
It seems to me that there is an overall agreement of what should be done, but that we are far from anything concrete.
I will be there and this was definitely a topic I intended to talk about. See you there.
I'll also be there. Should we organize an official BOF session for this and invite more people?
I think that is an excellent idea. Do you want to organize that? (Always the penalty for suggesting this first :-) )
Regards,
Hans
On Sunday 27 February 2011 20:49:37 Arnd Bergmann wrote:
On Saturday 26 February 2011, Edward Hervey wrote:
Are there any gstreamer/linaro/etc core developers attending the ELC in San Francisco in April? I think it might be useful to get together before, during or after the conference and see if we can turn this discussion in something more concrete.
It seems to me that there is an overall agreement of what should be done, but that we are far from anything concrete.
I will be there and this was definitely a topic I intended to talk about.
See you there.
I'll also be there. Should we organize an official BOF session for this and invite more people?
Any chance of an IRC backchannel and a live audio/video stream for those of us who won't be there ?
Hi,
On Fri, 2011-02-18 at 17:39 +0100, Robert Fekete wrote:
Hi,
In order to expand this knowledge outside of Linaro I took the Liberty of inviting both linux-media@vger.kernel.org and gstreamer-devel@lists.freedesktop.org. For any newcomer I really recommend to do some catch-up reading on http://lists.linaro.org/pipermail/linaro-dev/2011-February/thread.html ("v4l2 vs omx for camera" thread) before making any comments. And sign up for Linaro-dev while you are at it :-)
To make a long story short: Different vendors provide custom OpenMax solutions for say Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already and is evolving with mediacontroller as well. Then there is the integration in Gstreamer...Which solution is the best way forward. Current discussions so far puts V4L2 greatly in favor of OMX. Please have in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether Camera drivers should have OMX or V4L2 as the driver front end? This may perhaps apply to video codecs as well. Then there is how to in best of ways make use of this in GStreamer in order to achieve no copy highly efficient multimedia pipelines. Is gst-omx the way forward?
Let the discussion continue...
I'll try to summarize my perspective here from a GStreamer point of view. You wanted some, here it is :) This is an attempt at answering everything in this mail thread at this time. You can go straight to the last paragraphs for a summary.
The question to be asked, imho, is not "omx or v4l2 or gstreamer", but rather "What purpose does each of those API/interface serve, when do they make sense, and how can they interact in the most efficient way possible"
Looking at the bigger picture, the end goal for all of us is to make the best use of whatever hardware/IP/silicon is available all the way up to end-user applications/use-cases, and to do so in the most efficient way possible (whether in terms of memory/cpu/power usage at the lower levels, or in terms of manpower and flexibility at the higher levels).
Will GStreamer be as cpu/memory efficient as a pure OMX solution ? No, I seriously doubt we'll break down all the fundamental notions in GStreamer to make it use 0 cpu when running some processing.
Can GStreamer provide higher flexibility than a pure OMX solution ? Definitely, unless you already have all the plugins for accessing all the other hw systems out there, the (de)muxers, rtp (de)payloaders, jitter buffers, network components, auto-pluggers, convenience elements and application interaction that GStreamer has been improving over the past 10 years. All that is far from trivial. And as Rob Clark said, you can drop HW-specific gst plugins in and have them work with all existing applications; the same applies to all the other peripheral existing *and* future plugins you need to make a final application. So there you benefit from all the work done by the non-hw-centric community.
Can we make GStreamer use as little cpu/overhead as possible without breaking the fundamental concepts it provides ? Definitely. There are quite a few examples out there of zero-memcpy gst plugins wrapping hw-accelerated systems at a ridiculously low cpu cost (they just take an opaque buffer and pass it down; that's 300-500 cpu instructions for a properly configured setup if my memory serves me right). And efforts have been going on for the past 2 years to make GStreamer overall consume as little cpu as possible, making it as lockless as possible and so forth. The ongoing GStreamer 0.11/1.0 effort will allow breaking down even more barriers for even more efficient usage.
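For those not familiar with the pattern, a minimal sketch in GStreamer 0.10 terms (the hardware pointer is assumed to come from the vendor driver library; buffer lifetime handling is omitted):

#include <gst/gst.h>

/* The element never touches the pixels: it only wraps the pointer it got
 * from the hardware driver in a GstBuffer and pushes it downstream.
 * MALLOCDATA stays NULL so GStreamer never tries to free the memory. */
static GstFlowReturn push_hw_frame(GstPad *srcpad, guint8 *hw_data, guint size)
{
    GstBuffer *buf = gst_buffer_new();

    GST_BUFFER_DATA(buf) = hw_data;
    GST_BUFFER_SIZE(buf) = size;
    GST_BUFFER_FLAG_SET(buf, GST_BUFFER_FLAG_READONLY);

    return gst_pad_push(srcpad, buf);
}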
Can OMX provide a better interface than v4l2 for video sources ? Possibly, but it's doubtful. The V4L2 people have been working on it for ages and it works for a *lot* of devices out there. It is the interface one expects to use on Linux-based systems: you write your kernel drivers with a v4l2 interface and people can use them straight away on any linux setup.
Do hardware/silicon vendors want to write kernel/userspace drivers for their hw-accelerated codecs in all the variants available out there ? No way, they've got better things to do; they need to choose one. Is OMX the best API out there for providing hw-accelerated codecs ? Not in my opinion. Efforts like libva/vdpau are better in that regard, but for most ARM SoCs ... OMX is the closest thing to a '''standard'''. And they (Khronos) don't even provide reference implementations, so you end up with a bunch of header files that everybody {mis|ab}uses.
So where does this leave us ?
* OMX is here for HW-accelerated codecs and vendors are unlikely to switch away from it, but there are other systems popping up that will use other APIs (libva, vdpau, ...).
* V4L2 has a long-standing and evolving interface people expect for video sources on linux-based systems. Making OMX provide an interface as robust/tested as that is going to be hard.
* GStreamer can wrap all existing APIs (including the two mentioned above), and adds the missing blocks to go from standalone components to full-blown future-looking applications/use-cases.
* The main problem... is making all those components talk to each other in the most cpu/mem efficient way possible.
No, GStreamer can't solve all of that last problem. We are working hard on reducing as much as possible the overhead GStreamer brings in while offering the most flexible solution out there, and you can join in to make sure the plugins exposing the various APIs mentioned above make the best use of it. There is a point, though, where we are going to reach our limit.
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about. That is a *massive* challenge on its own. The choice of using GStreamer or not ... is what you want to do once that challenge is solved.
Regards,
Edward
P.S. GStreamer for Android already works : http://www.elinux.org/images/a/a4/Android_and_Gstreamer.ppt
On Thu, 2011-02-24 at 21:19 +0100, Edward Hervey wrote:
Will GStreamer be as cpu/memory efficient as a pure OMX solution ? No, I seriously doubt we'll break down all the fundamental notions in GStreamer to make it use 0 cpu when running some processing.
I blame late night mails...
I meant "Will GStreamer be capable of zero-cpu usage like OMX is capable in some situation". The answer still stands.
But regarding memory usage, GStreamer can do zero-memcpy provided the underlying layers have a mechanism it can use.
Edward
On Thu, Feb 24, 2011 at 2:19 PM, Edward Hervey bilboed@gmail.com wrote:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
yes yes yes yes!!
vaapi/vdpau is half way there, as they cover sharing buffers with X/GL.. but sadly they ignore camera. There are a few other inconveniences with vaapi and possibly vdpau.. at least we'd prefer to have an API that covered decoding config data like SPS/PPS and not just slice data, since config data NALUs are already decoded by our accelerators..
That is a *massive* challenge on its own. The choice of using GStreamer or not ... is what you want to do once that challenge is solved.
Regards,
Edward
P.S. GStreamer for Android already works : http://www.elinux.org/images/a/a4/Android_and_Gstreamer.ppt
yeah, I'm aware of that.. someone please convince google to pick it up and drop stagefright so we can only worry about a single framework between android and linux (and then I look forward to playing with pitivi on an android phone :-))
BR, -R
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deal exactly with buffer passing, pinning of buffers and so on. The CMA (Contiguous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers and HWMEM passes them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC? I think there is *much* interest in this mechanism, people just don't know from the name what it really does. Maybe it should be called mediamem or something instead...
Yours, Linus Walleij
On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deals exactly with buffer passing, pinning of buffers and so on. The CMA (Contigous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers, HWMEM pass them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC?
Yes, please. This sounds promising and we at linux-media would very much like to take a look at this. I hope that the CMA + HWMEM combination is exactly what we need.
Regards,
Hans
I think there is *much* interest in this mechanism, people just don't know from the name what it really does. Maybe it should be called mediamem or something instead...
Yours, Linus Walleij
On Saturday 26 February 2011 13:12:42 Hans Verkuil wrote:
On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the
kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deals exactly with buffer passing, pinning of buffers and so on. The CMA (Contigous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers, HWMEM pass them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC?
Yes, please. This sounds promising and we at linux-media would very much like to take a look at this. I hope that the CMA + HWMEM combination is exactly what we need.
Once again let me restate what I've been saying for some time: CMA must be *optional*. Not all hardware needs contiguous memory. I'll have a look at the next HWMEM version.
On Monday, February 28, 2011 11:11:47 Laurent Pinchart wrote:
On Saturday 26 February 2011 13:12:42 Hans Verkuil wrote:
On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deals exactly with buffer passing, pinning of buffers and so on. The CMA (Contigous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers, HWMEM pass them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC?
Yes, please. This sounds promising and we at linux-media would very much like to take a look at this. I hope that the CMA + HWMEM combination is exactly what we need.
Once again let me restate what I've been telling for some time: CMA must be *optional*. Not all hardware need contiguous memory. I'll have a look at the next HWMEM version.
Yes, it is optional when you look at specific hardware. On a kernel level however it is functionality that is required in order to support all the hardware. There is little point in solving one issue and not the other.
Regards,
Hans
On Monday 28 February 2011 11:21:52 Hans Verkuil wrote:
On Monday, February 28, 2011 11:11:47 Laurent Pinchart wrote:
On Saturday 26 February 2011 13:12:42 Hans Verkuil wrote:
On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deals exactly with buffer passing, pinning of buffers and so on. The CMA (Contigous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers, HWMEM pass them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC?
Yes, please. This sounds promising and we at linux-media would very much like to take a look at this. I hope that the CMA + HWMEM combination is exactly what we need.
Once again let me restate what I've been telling for some time: CMA must be *optional*. Not all hardware need contiguous memory. I'll have a look at the next HWMEM version.
Yes, it is optional when you look at specific hardware. On a kernel level however it is functionality that is required in order to support all the hardware. There is little point in solving one issue and not the other.
I agree. What I meant is that we need to make sure there's no HWMEM -> CMA dependency.
On 28 February 2011 11:33, Laurent Pinchart < laurent.pinchart@ideasonboard.com> wrote:
On Monday 28 February 2011 11:21:52 Hans Verkuil wrote:
On Monday, February 28, 2011 11:11:47 Laurent Pinchart wrote:
On Saturday 26 February 2011 13:12:42 Hans Verkuil wrote:
On Friday, February 25, 2011 18:22:51 Linus Walleij wrote:
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deals exactly with buffer passing, pinning of buffers and so on. The CMA (Contigous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers, HWMEM pass them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC?
Yepp..Johan will do that (his mail is fubar at the moment so I will answer instead :-) )
Yes, please. This sounds promising and we at linux-media would very much like to take a look at this. I hope that the CMA + HWMEM combination is exactly what we need.
Once again let me restate what I've been telling for some time: CMA must be *optional*. Not all hardware need contiguous memory. I'll have a look at the next HWMEM version.
The hwmem API has scattered memory support as well (not implemented yet though).
Yes, it is optional when you look at specific hardware. On a kernel level however it is functionality that is required in order to support all the hardware. There is little point in solving one issue and not the other.
I agree. What I meant is that we need to make sure there's no HWMEM -> CMA dependency.
HWMEM has no CMA dependency, although hwmem is easily adapted on top of CMA (once the speculative prefetch stuff in the ARM arch is resolved).
BR /Robert F
On Sat, Feb 26, 2011 at 2:22 AM, Linus Walleij linus.walleij@linaro.org wrote:
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deals exactly with buffer passing, pinning of buffers and so on. The CMA (Contigous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers, HWMEM pass them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC? I think there is *much* interest in this mechanism, people just don't know from the name what it really does. Maybe it should be called mediamem or something instead...
To Marek,
Can you also update the CMA status and plan?
The important thing is that Russell still doesn't agree with CMA since it doesn't solve the ARM memory attribute mapping issue. Of course there's no way to solve the ARM issue.
We really need a memory solution for multimedia.
Thank you, Kyungmin Park
Yours, Linus Walleij
On Sat, 26 Feb 2011, Kyungmin Park wrote:
On Sat, Feb 26, 2011 at 2:22 AM, Linus Walleij linus.walleij@linaro.org wrote:
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deals exactly with buffer passing, pinning of buffers and so on. The CMA (Contigous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers, HWMEM pass them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC? I think there is *much* interest in this mechanism, people just don't know from the name what it really does. Maybe it should be called mediamem or something instead...
To Marek,
Can you also update the CMA status and plan?
The important thing is still Russell don't agree the CMA since it's not solve the ARM different memory attribute mapping issue. Of course there's no way to solve the ARM issue.
There are at least two ways to solve that issue, and I have suggested both on the lak mailing list already.
1) Make the direct mapped kernel memory usable by CMA mapped through a page-sized two-level page table mapping which would allow for solving the attributes conflict on a per page basis.
2) Use highmem more aggressively and allow only highmem pages for CMA. This makes it quite easy to make sure the target page(s) for CMA have no kernel mappings and therefore no attribute conflict. Furthermore, highmem pages are always relocatable for making physically contiguous segments available.
Nicolas
Hello,
On Saturday, February 26, 2011 8:20 PM Nicolas Pitre wrote:
On Sat, 26 Feb 2011, Kyungmin Park wrote:
On Sat, Feb 26, 2011 at 2:22 AM, Linus Walleij linus.walleij@linaro.org wrote:
2011/2/24 Edward Hervey bilboed@gmail.com:
What *needs* to be solved is an API for data allocation/passing at the kernel level which v4l2,omx,X,GL,vdpau,vaapi,... can use and that userspace (like GStreamer) can pass around, monitor and know about.
I think the patches sent out from ST-Ericsson's Johan Mossberg to linux-mm for "HWMEM" (hardware memory) deals exactly with buffer passing, pinning of buffers and so on. The CMA (Contigous Memory Allocator) has been slightly modified to fit hand-in-glove with HWMEM, so CMA provides buffers, HWMEM pass them around.
Johan, when you re-spin the HWMEM patchset, can you include linaro-dev and linux-media in the CC? I think there is *much* interest in this mechanism, people just don't know from the name what it really does. Maybe it should be called mediamem or something instead...
To Marek,
Can you also update the CMA status and plan?
The important thing is still Russell don't agree the CMA since it's not solve the ARM different memory attribute mapping issue. Of course there's no way to solve the ARM issue.
There are at least two ways to solve that issue, and I have suggested both on the lak mailing list already.
- Make the direct mapped kernel memory usable by CMA mapped through a page-sized two-level page table mapping which would allow for solving the attributes conflict on a per page basis.
That's the solution I'm working on now. I also suggested this in the last CMA discussion; however, there was no response on whether this is the right way.
- Use highmem more aggressively and allow only highmem pages for CMA. This is quite easy to make sure the target page(s) for CMA would have no kernel mappings and therefore no attribute conflict. Furthermore, highmem pages are always relocatable for making physically contiguous segments available.
I'm not sure that highmem is the right solution. First, this would force systems with a rather small amount of memory (like 256M) to use highmem just to support DMA-allocatable memory. It also doesn't solve the issue of the specific memory requirements of our DMA hardware (the multimedia codec needs video memory buffers from 2 physical banks).
The relocation issue has already been addressed in the last CMA patch series. Michal managed to create a framework that allows relocating any pages from the CMA area on demand.
Best regards -- Marek Szyprowski Samsung Poland R&D Center
On Mon, 2011-02-28 at 09:50 +0100, Marek Szyprowski wrote:
Hello,
[...]
I'm not sure that highmem is the right solution. First, this will force systems with rather small amount of memory (like 256M) to use highmem just to support DMA allocable memory. It also doesn't solve the issue with specific memory requirement for our DMA hardware (multimedia codec needs video memory buffers from 2 physical banks).
Could you explain why a codec would require memory buffers from 2 physical banks ?
Thanks,
Edward
The relocation issue has been already addressed in the last CMA patch series. Michal managed to create a framework that allowed to relocate on demand any pages from the CMA area.
Best regards
Marek Szyprowski Samsung Poland R&D Center
Hello,
On Tuesday, March 01, 2011 11:26 AM Edward Hervey wrote:
On Mon, 2011-02-28 at 09:50 +0100, Marek Szyprowski wrote:
Hello,
[...]
I'm not sure that highmem is the right solution. First, this will force systems with rather small amount of memory (like 256M) to use highmem just to support DMA allocable memory. It also doesn't solve the issue with specific memory requirement for our DMA hardware (multimedia codec needs video memory buffers from 2 physical banks).
Could you explain why a codec would require memory buffers from 2 physical banks ?
Well, this is rather a question for the hardware engineer who designed it.
I suspect that the buffers have been split into 2 regions and placed in 2 different memory banks to achieve the performance required to decode/encode a full HD h264 movie. The video codec has 2 AXI master interfaces and I expect it is able to perform 2 transactions to memory at once.
Best regards -- Marek Szyprowski Samsung Poland R&D Center
Hi All,
I have sent an invitation for 3pm UTC Monday 7th March to have a discussion on things that can be explored/worked on in Linaro related to v4l2 support. The idea is to come up with concrete activities that can be targeted in the Linaro MM WG to address this topic.
If anybody is unhappy with the time please let me know. I tried to take care of US, Europe and India time zones. Also please forward the invitation to anybody I have missed.
Thanks Sachin
On Tue, Mar 1, 2011 at 4:21 PM, Marek Szyprowski m.szyprowski@samsung.comwrote:
Hello,
On Tuesday, March 01, 2011 11:26 AM Edward Hervey wrote:
On Mon, 2011-02-28 at 09:50 +0100, Marek Szyprowski wrote:
Hello,
[...]
I'm not sure that highmem is the right solution. First, this will force systems with rather small amount of memory (like 256M) to use highmem just to support DMA allocable memory. It also doesn't solve the issue with specific memory requirement for our DMA hardware (multimedia codec needs video memory buffers from 2 physical banks).
Could you explain why a codec would require memory buffers from 2 physical banks ?
Well, this is rather a question to hardware engineer who designed it.
I suspect that the buffers has been split into 2 regions and placed in 2 different memory banks to achieve the performance required to decode/encode full hd h264 movie. Video codec has 2 AXI master interfaces and I expect it is able to perform 2 transaction to the memory at once.
Best regards
Marek Szyprowski Samsung Poland R&D Center
Hi,
On Fri, Feb 18, 2011 at 6:39 PM, Robert Fekete robert.fekete@linaro.org wrote:
To make a long story short: Different vendors provide custom OpenMax solutions for say Camera/ISP. In the Linux eco-system there is V4L2 doing much of this work already and is evolving with mediacontroller as well. Then there is the integration in Gstreamer...Which solution is the best way forward. Current discussions so far puts V4L2 greatly in favor of OMX. Please have in mind that OpenMAX as a concept is more like GStreamer in many senses. The question is whether Camera drivers should have OMX or V4L2 as the driver front end? This may perhaps apply to video codecs as well. Then there is how to in best of ways make use of this in GStreamer in order to achieve no copy highly efficient multimedia pipelines. Is gst-omx the way forward?
Let the discussion continue...
We are talking about 3 different layers here which don't necessarily overlap. You could have a v4l2 driver, which is wrapped in an OpenMAX IL library, which is wrapped again by gst-openmax. Each layer is different. The problem here is the OMX layer, which is often ill-conceived.
First of all, you have to remember that whatever OMX is supposed to provide, that doesn't apply to camera; you can argue that there's some value in audio/video encoding/decoding, as the interfaces are very simple and easy to standardize, but that's not the case with camera. I haven't worked with OMX camera interfaces, but AFAIK it's very incomplete and vendors have to implement their own interfaces, which defeats the purpose of OMX. So OMX provides nothing in the camera case.
Secondly, there's no OMX kernel interface. You still need something between kernel and user-space, and the only established interface is v4l2. So, even if you choose OMX in user-space, the sensible choice in kernel-space is v4l2; otherwise you would end up with some custom interface, which is never good.
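In other words, whatever sits on top, the camera is ultimately reached through a V4L2 device node. A minimal illustration of that boundary (assuming /dev/video0; no error handling):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    struct v4l2_capability cap;
    int fd = open("/dev/video0", O_RDWR);

    if (fd < 0)
        return 1;

    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("driver: %s, card: %s, caps: 0x%08x\n",
               cap.driver, cap.card, cap.capabilities);
    return 0;
}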
And third, as Laurent already pointed out; OpenMAX is _not_ open. The community has no say in what happens, everything is decided by a consortium, you need to pay money to be in it, to access their bugzilla, to subscribe to their mailing lists, and to get access to their conformance test.
If you forget all the marketing mumbo jumbo about OMX, at the end of the day what is provided is a bunch of headers (and a document explaining how to use them). We (the linux community) can come up with a bunch of headers too; in fact, we already do much more than that with v4l2. The only part missing is encoders/decoders, which if needed could be added very easily (Samsung already does this AFAIK). Right?
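For what it's worth, the shape such a codec interface usually takes from userspace is the memory-to-memory pattern below. This is a sketch only: format negotiation, REQBUFS and STREAMON are omitted, and the buffer indices and the assumption that the bitstream has already been copied into the mmap'ed OUTPUT buffer are mine.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Queue one compressed buffer and dequeue one decoded frame on a
 * memory-to-memory codec node. */
static int decode_one(int fd, unsigned int bitstream_len)
{
    struct v4l2_buffer src, dst;

    memset(&src, 0, sizeof(src));
    src.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;     /* bitstream goes in here */
    src.memory = V4L2_MEMORY_MMAP;
    src.index = 0;
    src.bytesused = bitstream_len;
    if (ioctl(fd, VIDIOC_QBUF, &src) < 0)
        return -1;

    memset(&dst, 0, sizeof(dst));
    dst.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;    /* decoded frame comes out */
    dst.memory = V4L2_MEMORY_MMAP;
    dst.index = 0;
    if (ioctl(fd, VIDIOC_QBUF, &dst) < 0)
        return -1;

    /* The driver runs the hardware codec; both buffers come back. */
    if (ioctl(fd, VIDIOC_DQBUF, &dst) < 0)
        return -1;
    return ioctl(fd, VIDIOC_DQBUF, &src);
}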
Cheers.
On Thursday 10 February 2011 08:17:31 Linus Walleij wrote:
OMX's main purpose is to handle multimedia hardware and offer an interface to that HW that looks identical independent of the vendor delivering that hardware, much like the v4l2 or USB subsystems try to do. And yes, optimally it should be implemented in drivers/omx in Linux and a user space library on top of that.
I believe Hans was pretty clear on this: A new subsystem for video input is not going to happen in the kernel, since there already is one. It took over a decade to migrate all drivers from v4l1 to v4l2, nobody right now feels in the mood to rewrite all the drivers once more.
Thanks for clarifying this part, it was unclear to me. The reason being that it seems OMX does not imply userspace/kernelspace separation, and I was thinking more of it as a userspace lib. Now my understanding is that if e.g. OpenMAX defines a certain data structure, say for a PCM frame or whatever, then that exact struct is supposed to be used by the kernelspace/userspace interface, and defined in the include file exported by the kernel.
I don't think it can work that way. OpenMAX can only define a user-level API, not a kernel-level API. We can extend v4l2 in ways to make it easier to implement OpenMAX libraries on top of it, but there is not going to be a duplication of kernel interfaces for the sake of following a specific API in the upstream kernel.
It might be that some alignment also needs to be made between v4l2 and other OS's implementation, to ease developing drivers for many OSs (sorry I don't know these details, but you ST-E guys should know).
The basic conflict I would say is that Linux has its own API+ABI, which is defined by V4L and ALSA through a community process without much thought about any existing standard APIs. (In some cases also predating them.)
Some people would argue that on the contrary, the standard was written without an understanding of reality and existing practice if it tries to specify a kernel-level ABI ;-)
This has happened a lot before, and it generally doesn't help the adoption of those standards.
This coupled with strict delivery deadlines and a marketing will to state conformance to OpenMAX of course leads companies into solutions breaking the Linux kernelspace API to be able to present this.
Now I think we have a pretty clear view of the problem, I don't know what could be done about it though :-/
We can't stop anyone from shipping incompatible kernels, but we can help the Linaro members understand the problem.
If the goal is to state OpenMAX conformance, it would certainly be a good idea to have people work on providing a hardware-independent OpenMAX IL implementation on top of V4L2 and make sure it works with all the devices.
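As a very rough sketch of what such an implementation could look like (the OMX names are from the public Khronos IL headers, everything else here is made up, and all the port/state machinery is omitted): the component's FillThisBuffer path simply dequeues a frame from a V4L2 capture device.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <OMX_Core.h>
#include <OMX_Component.h>

extern int v4l2_fd;             /* camera node, opened and streaming */
extern void *frame_ptr[4];      /* mmap'ed V4L2 buffers, same indices */

static OMX_ERRORTYPE camera_fill_this_buffer(OMX_HANDLETYPE hComponent,
                                             OMX_BUFFERHEADERTYPE *pBuffer)
{
    struct v4l2_buffer vbuf;

    (void)hComponent;
    memset(&vbuf, 0, sizeof(vbuf));
    vbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    vbuf.memory = V4L2_MEMORY_MMAP;

    if (ioctl(v4l2_fd, VIDIOC_DQBUF, &vbuf) < 0)
        return OMX_ErrorUndefined;

    /* A zero-copy variant would hand out the mmap'ed pointer itself;
     * copying keeps the OMX buffer ownership rules simple here. */
    memcpy(pBuffer->pBuffer + pBuffer->nOffset,
           frame_ptr[vbuf.index], vbuf.bytesused);
    pBuffer->nFilledLen = vbuf.bytesused;

    ioctl(v4l2_fd, VIDIOC_QBUF, &vbuf);     /* recycle the V4L2 buffer */
    return OMX_ErrorNone;
}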
Arnd
On Wed, Feb 9, 2011 at 8:06 PM, Hans Verkuil hverkuil@xs4all.nl wrote:
Exceptions are DSPs/processors. While it is definitely possible to use V4L2 there as well, in practice I don't see this happening anytime soon. It would be a very interesting experiment though.
In drivers/staging/tidspbridge http://omappedia.org/wiki/DSPBridge_Project you find the TI hackers happy at work with providing a DSP accelerator subsystem.
When I look at the code it looks quite TI-centric, but nevertheless: they try to sort out something here.
Their idea is that DSPs can well be used for other things than multimedia, so e.g. your userspace SETI@Home client or whatever that wants to offload some work onto a piece of DSP firmware should be able to access this resource.
Maybe that's the path forward, I don't quite know. Maybe wrapping the DSP bridge into a V4L codec (for example) inside the kernel to accelerate a media stream is the proper thing to do, so we get some cross-subsystem talk for these things.
Yours, Linus Walleij