[Linaro-mm-sig] Memory Management Discussion

Rebecca Schultz Zavin rschultz at google.com
Thu Apr 21 18:36:42 UTC 2011


On Thu, Apr 21, 2011 at 9:08 AM, Zach Pfeffer <zach.pfeffer at linaro.org> wrote:

> This discussion reminded me of a very cool use case that's looming in the
> future:
>
> video analytics
>
> In video analytics you'd:
> capture
> analyze (probably from user space or in a DSP - either way some data
> is flowing up the stack)
> encode to a texture
> possibly send back through analyze
>
> To make this fast, one buffer may transition between many owners - and
> having multiple mappings may make that easier to deal with.
>
> I think people are suggesting a multiple-map scenario, where only one
> mapper is live at a time. We could make this on-demand, where the
> appropriate fixups for a given mapping are done when that mapping
> becomes live.
>

This is already the common case for something like videos being sent to the
GPU as textures.  Typically the system maps a collection of four or so video
output buffers into both the decoder and the GPU, and the two then handle
synchronization between themselves.  Mapping the buffers on demand is too
expensive.  I think the synchronization problem is a separate but related
issue that could be tackled once we have this whole buffer management
solution ironed out :)
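
Just to illustrate what I mean by mapping up front and only synchronizing
per frame, here is a rough sketch.  Every type and function name in it is a
hypothetical stand-in for whatever allocation/sharing API comes out of this
discussion, not an existing interface:

/*
 * Sketch of the map-once / synchronize-per-frame pattern.
 * All names below are hypothetical stubs, purely for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

#define NUM_BUFS   4
#define FRAME_SIZE (1920 * 1080 * 3 / 2)	/* e.g. one NV12 1080p frame */

struct shared_buf {
	void   *vaddr;
	size_t  size;
};

/* Hypothetical allocator: in reality this would come from a central
 * buffer manager (contiguous or IOMMU-backed, as the devices require). */
static struct shared_buf *shared_buf_alloc(size_t size)
{
	struct shared_buf *buf = malloc(sizeof(*buf));

	buf->vaddr = malloc(size);
	buf->size  = size;
	return buf;
}

/* Hypothetical device mapping: performed once at setup, never per frame. */
static void shared_buf_map(struct shared_buf *buf, const char *device)
{
	printf("mapped %zu bytes into %s\n", buf->size, device);
}

/* Hypothetical synchronization points standing in for whatever fence or
 * handshake the decoder and the GPU agree on. */
static void wait_for_decoder(struct shared_buf *buf)   { (void)buf; }
static void release_to_decoder(struct shared_buf *buf) { (void)buf; }

int main(void)
{
	struct shared_buf *bufs[NUM_BUFS];
	int i, frame;

	/* Setup: map the whole pool into both devices up front. */
	for (i = 0; i < NUM_BUFS; i++) {
		bufs[i] = shared_buf_alloc(FRAME_SIZE);
		shared_buf_map(bufs[i], "decoder");
		shared_buf_map(bufs[i], "gpu");
	}

	/* Steady state: only synchronization per frame, no map/unmap
	 * anywhere in the hot path. */
	for (frame = 0; frame < 8; frame++) {
		struct shared_buf *buf = bufs[frame % NUM_BUFS];

		wait_for_decoder(buf);     /* decoder finished writing */
		/* ... GPU samples buf as a texture here ... */
		release_to_decoder(buf);   /* hand the buffer back */
	}
	return 0;
}

The point is that all the expensive mapping work happens in the first loop;
the per-frame loop touches nothing but the fences.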

Rebecca


>
> On 21 April 2011 03:04, Hans Verkuil <hverkuil at xs4all.nl> wrote:
> > On Wednesday, April 20, 2011 23:24:14 Zach Pfeffer wrote:
> >> That's true.
> >>
> >> I think there's some value in splitting the discussion up into 3 areas:
> >>
> >> Allocation
> >> Mapping
> >> Buffer flows
> >
> > I agree with this split. They are for the most part independent problems,
> > but solutions for all three are needed to make a zero-copy pipeline a
> > reality.
> >
> > With regards to V4L: there is currently no buffer sharing possible at all
> > (except in the trivial case of buffers in virtual memory, and of course
> > when using CMEM-type solutions). Even between a V4L capture and output
> > device you cannot share buffers. The main reason is that the videobuf
> > framework used in drivers was a horrid piece of code that prevented any
> > sharing. In 2.6.39 the videobuf2 replacement framework was merged, which
> > will make such sharing much, much easier.
> >
> > This functionality is not yet implemented though, partially because this
> > is all still very new, and partially because we would like to wait and
> > see if a general solution will be created.
> >
> > It should be fairly easy to implement such a solution in the videobuf2
> > framework. Buffer 'handles' can be either 32 or 64 bits, so we have no
> > restriction there. File descriptors would be fine as handles.
> >
> > A requirement to enable buffer sharing between V4L and anything else
> > would probably be that the V4L driver must use videobuf2. Since this is
> > brand new, the initial set of compatible drivers will be very small, but
> > that should improve over time. Anyway, that's our problem :-)
> >
> > A separate problem that I would like to discuss is the interaction of
> > GPU/FB and V4L display devices. There is overlap there and we need to
> > figure out how to handle that. However, this is unrelated to the memory
> > summit. It might be a good topic for a meeting on Tuesday and/or
> > Wednesday morning, though.
> >
> > Regards,
> >
> >        Hans
> >
> >>
> >> There seems to be a general design disconnect between the way Linux
> >> deals with buffer mapping (one mapper at a time, buffers are mapped
> >> and unmapped as they flow through the system) and the way users
> >> actually want things to work (sharing a global buffer that one entity
> >> writes to and another reads without unmapping each time).
> >>
> >> Perhaps there's a solution that will give users the illusion of shared
> >> mappings while ensuring correctness if those mappings are different
> >> (something on-demand, perhaps).
> >>
> >> -Zach
> >>
> >> On 20 April 2011 10:13, Jordan Crouse <jcrouse at codeaurora.org> wrote:
> >> > On 04/19/2011 10:12 PM, Zach Pfeffer wrote:
> >> >>
> >> >> Speaking of Graphics and Multimedia - we may want to discuss IOMMU
> >> >> APIs and distributed memory management. These devices are becoming
> >> >> more prevalent and having a standard way of working with them would
> >> >> be useful.
> >> >>
> >> >> I did a little of this work at Qualcomm and pushed some soundly
> >> >> rejected patches to the kernel, see "mm: iommu: An API to unify
> >> >> IOMMU, CPU and device memory management."
> >> >>
> >> >> -Zach
> >> >
> >> > As we discussed during the meeting at ELC, IOMMU is important, but I
> >> > think there is broad agreement to consolidate (eventually) on the
> >> > standard APIs.  I still think that the memory allocation problem is
> >> > the more interesting one because it affects everybody equally, MMU or
> >> > not.  Not that I want to shut down debate or anything, I just don't
> >> > want to distract us from the larger problem that we face.
> >> >
> >> > Jordan
> >> >
> >> >> On 19 April 2011 20:52, Clark, Rob <rob at ti.com> wrote:
> >> >>>
> >> >>> On Mon, Apr 18, 2011 at 9:45 AM, Sree Kumar <sreeon at gmail.com> wrote:
> >> >>>>
> >> >>>> Thanks Jesse for initiating the mailing list.
> >> >>>>
> >> >>>> We need to address the requirements of Graphics and Multimedia
> >> >>>> Accelerators (IPs). What we really need is a permanent upstream
> >> >>>> solution which accommodates the following requirements and conforms
> >> >>>> to Graphics and Multimedia use cases:
> >> >>>>
> >> >>>> 1. Mechanism to map/unmap the memory. Some of the IPs have the
> >> >>>> ability to address virtual memory and some can address only
> >> >>>> physically contiguous address space. We need to address both of
> >> >>>> these cases.
> >> >>>> 2. Mechanism to allocate and release memory.
> >> >>>> 3. Method to share the memory (zero copy is a MUST for good
> >> >>>> performance) between different device drivers (for example, the
> >> >>>> output of a camera to a multimedia encoder).
> >> >>>> 4. Method to share the memory with different processes in
> >> >>>> userspace. The sharing mechanism should include built-in security
> >> >>>> features.
> >> >>>> Are there any special requirements from V4L or DRM perspectives?
> >> >>>
> >> >>> From a DRI perspective: I guess the global buffer name is restricted
> >> >>> to a 4-byte integer, unless you change the DRI protocol.
> >> >>>
> >> >>> Authentication hooks for the driver (on the X11 driver side) cover a
> >> >>> single authentication for all buffers shared between client and
> >> >>> server, and are done via a 4-byte token exchange between client and
> >> >>> server.  I've not yet had time to look more closely at the
> >> >>> authentication aspect of ION.
> >> >>>
> >> >>> Those are just things off the top of my head; hopefully someone else
> >> >>> from the X11 world chimes in with whatever else I missed.  But I
> >> >>> guess the most important thing is whether or not it can fit within
> >> >>> the existing DRI protocol.  If it does, then the drivers on the
> >> >>> client and server side could use whatever..
> >> >>>
> >> >>> BR,
> >> >>> -R
> >> >>>
> >> >>>
> >> >>>> Thanks,
> >> >>>> Sree
> >> >>>>
> >> >>>
> >> >>
> >> >
> >> >
> >> > --
> >> > Jordan Crouse
> >> > Qualcomm Innovation Center
> >> > Qualcomm Innovation Center is a member of Code Aurora Forum
> >> >
> >>
> >>
> >
>