On Wed, 14 May 2014 10:15:38 +0100, "Jon Medhurst (Tixy)" <tixy(a)linaro.org> wrote:
> On Sun, 2014-03-02 at 13:40 +0800, Grant Likely wrote:
> > On Fri, 28 Feb 2014 14:42:50 +0100, Marek Szyprowski <m.szyprowski(a)samsung.com> wrote:
> > > This patch adds code for automated assignment of reserved memory regions
> > > to struct device. The reserved_mem->ops->device_init()/device_cleanup()
> > > callbacks are called to perform reserved-memory-driver-specific
> > > initialization and cleanup.
> > >
> > > Based on previous code provided by Josh Cartwright <joshc(a)codeaurora.org>
> > >
> > > Signed-off-by: Marek Szyprowski <m.szyprowski(a)samsung.com>
> >
> > Hi Marek,
> >
> > I've not applied this one yet, only because there is still the open
> > issue of whether or not these functions should be called from drivers or
> > from core code. I don't actually have any problems with the content of
> > this patch. Once the user is sorted out I'll merge it.
>
> Has anything more come of these patches? I see some of the series is now
> in Linux 3.15, but the actual patches to let people use the feature
> aren't there yet, namely patches 5 through 8.
>
> My personal immediate interest in these is as a mechanism on arm64 to
> limit CMA to a region of memory that is actually DMA-able by the devices
> (e.g. below 4GB for 32-bit devices without an IOMMU).
>
> For reference, the mail archive for this series is at
> http://lkml.org/lkml/2014/2/28/237
IIRC, the issue I have with patch 5-8 is that I don't like the driver core
going off and doing automagical things to attach regions to devices.
I've not seen any more discussion on this topic since I merged the
patches I was okay with, but I may have missed something.
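To illustrate the alternative I'd prefer: let the driver opt in explicitly,
along the lines of the untested sketch below. The of_reserved_mem_device_*
helper names are only illustrative; the point is that they would invoke the
ops->device_init()/device_cleanup() callbacks this patch adds, from the
driver's probe/remove paths rather than from core code.

#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>

/* Sketch: the driver explicitly claims the reserved region assigned to
 * it, which runs the region's ops->device_init() hook. */
static int foo_probe(struct platform_device *pdev)
{
	of_reserved_mem_device_init(&pdev->dev);

	/* ... normal probe work using the per-device region ... */
	return 0;
}

static int foo_remove(struct platform_device *pdev)
{
	/* Counterpart: runs the region's ops->device_cleanup() hook. */
	of_reserved_mem_device_release(&pdev->dev);
	return 0;
}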
g.
On 5/12/2014 7:37 AM, Pintu Kumar wrote:
> Hi,
> Thanks for the reply.
>
> ----------------------------------------
>> From: arnd(a)arndb.de
>> To: linux-arm-kernel(a)lists.infradead.org
>> CC: pintu.k(a)outlook.com; linux-mm(a)kvack.org; linux-kernel(a)vger.kernel.org; linaro-mm-sig(a)lists.linaro.org
>> Subject: Re: Questions regarding DMA buffer sharing using IOMMU
>> Date: Mon, 12 May 2014 14:00:57 +0200
>>
>> On Monday 12 May 2014 15:12:41 Pintu Kumar wrote:
>>> Hi,
>>> I have some queries regarding IOMMU and CMA buffer sharing.
>>> We have an embedded Linux device (kernel 3.10, RAM: 256MB) in
>>> which the camera and codec support IOMMU but the display does not.
>>> Thus for camera capture we are using IOMMU buffers via
>>> ION/DMABUF, but for all display rendering we are using CMA buffers.
>>> So, the question is how to achieve buffer sharing (zero-copy)
>>> between camera and display using only IOMMU?
>>> Currently we are achieving zero-copy using CMA, and we are
>>> exploring options to use IOMMU.
>>> Now we want to know which option is better: IOMMU or CMA?
>>> If anybody has come across this design, please share your thoughts and results.
>>
>> There is a slight performance overhead in using the IOMMU in general,
>> because the IOMMU has to fetch the page table entries from memory
>> at least some of the time.
>
> OK, we need to check the performance later.
>
>>
>> If that overhead is within the constraints you have for transfers between
>> camera and codec, you are always better off using IOMMU since that
>> means you don't have to do memory migration.
>
> Transfer between camera and codec is fine. But our major concern is sharing a single
> buffer between camera & display. Here the camera supports IOMMU but the display does not.
> Is it possible to render the camera preview (IOMMU buffers) on the display (no IOMMU, and it requires physically contiguous overlay memory)?
>
I'm pretty sure the answer is no for zero-copy IOMMU buffers if one of your
devices does not support an IOMMU. If the data is coming in as individual pages
and the hardware does not support scattered pages, there isn't much you can
do except copy to a contiguous buffer. At least with Ion, the heap types can
be set up in a particular way such that the client never needs to know whether
an IOMMU is present.
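To make the constraint concrete, here is a very rough importer-side sketch of
what the non-IOMMU display path ends up doing with a shared dma-buf. The
function and device names are made up and cleanup on the error paths is
omitted; only the dma-buf core calls themselves are real interfaces.

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Hypothetical display-side import of a buffer exported by the camera. */
static int display_import_buffer(struct device *disp_dev, int fd)
{
	struct dma_buf *buf = dma_buf_get(fd);
	struct dma_buf_attachment *att;
	struct sg_table *sgt;

	if (IS_ERR(buf))
		return PTR_ERR(buf);

	att = dma_buf_attach(buf, disp_dev);
	if (IS_ERR(att))
		return PTR_ERR(att);

	sgt = dma_buf_map_attachment(att, DMA_TO_DEVICE);
	if (IS_ERR(sgt))
		return PTR_ERR(sgt);

	/* Without an IOMMU the display can only scan out one physically
	 * contiguous chunk, so more than one entry here means the data
	 * must first be copied into a contiguous (e.g. CMA) buffer. */
	if (sgt->nents != 1)
		return -EINVAL;	/* or fall back to a copy */

	/* ... program sg_dma_address(sgt->sgl) into the display ... */
	return 0;
}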
> Also, is it possible to share buffers between 2 IOMMU-capable devices?
>
I don't see why not but there isn't a lot of information to go on here.
Thanks,
Laura
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
On Monday 12 May 2014 15:12:41 Pintu Kumar wrote:
> Hi,
> I have some queries regarding IOMMU and CMA buffer sharing.
> We have an embedded Linux device (kernel 3.10, RAM: 256MB) in
> which the camera and codec support IOMMU but the display does not.
> Thus for camera capture we are using IOMMU buffers via
> ION/DMABUF, but for all display rendering we are using CMA buffers.
> So, the question is how to achieve buffer sharing (zero-copy)
> between camera and display using only IOMMU?
> Currently we are achieving zero-copy using CMA, and we are
> exploring options to use IOMMU.
> Now we want to know which option is better: IOMMU or CMA?
> If anybody has come across this design, please share your thoughts and results.
There is a slight performance overhead in using the IOMMU in general,
because the IOMMU has to fetch the page table entries from memory
at least some of the time.
If that overhead is within the constraints you have for transfers between
camera and codec, you are always better off using IOMMU since that
means you don't have to do memory migration.
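To put the two side by side: the zero-copy CMA path you have today is
essentially the normal coherent allocator, and the call looks the same if an
IOMMU sits behind the device's dma ops; what changes is whether the backing
pages have to be physically contiguous. A rough sketch, names illustrative:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/*
 * With CMA-backed dma ops the returned buffer is physically contiguous;
 * with IOMMU-backed dma ops the pages may be scattered but still appear
 * contiguous to the device at *dma.
 */
static void *alloc_device_buffer(struct device *dev, size_t size,
				 dma_addr_t *dma)
{
	return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
}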
Note, however, that we don't yet have a way to describe IOMMU relations
to devices in DT, so whatever you come up with to do this will most
likely be incompatible with what we do in future kernel versions.
Arnd
Certain platforms contain peripherals which have contiguous memory
alignment requirements, necessitating the use of the alignment
argument when obtaining CMA memory. The current maximum
CMA_ALIGNMENT of order 9 translates into a 2MB alignment on systems
with a 4K page size. To accommodate systems whose peripherals have even
larger alignment requirements, increase the upper bound of
CMA_ALIGNMENT from order 9 to order 12.
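For illustration, the alignment in question is the page order a driver passes
to the CMA allocator, roughly as below. The driver helper is made up;
dma_alloc_from_contiguous() caps the align argument at CMA_ALIGNMENT, which is
why the Kconfig upper bound matters.

#include <linux/device.h>
#include <linux/dma-contiguous.h>
#include <linux/mm.h>

/* Hypothetical helper: request a buffer aligned to an order-12 boundary
 * (4096 pages, i.e. 16MB with a 4K page size). */
static struct page *alloc_16mb_aligned(struct device *dev, size_t size)
{
	int count = PAGE_ALIGN(size) >> PAGE_SHIFT;

	return dma_alloc_from_contiguous(dev, count, 12);
}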
Marc Carino (1):
cma: increase CMA_ALIGNMENT upper limit to 12
drivers/base/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--
1.9.1
Hi all,
I have found some scenarios that could need the memory-hotplug feature in
products based on the ARM platform.
A typical scenario can be described as follows:
The developer reserves some memory which can't be accessed by the kernel,
and other functionality can use this memory before the system starts up.
After booting, the system can reclaim the reserved memory through the
memory-hotplug mechanism. For example, the user could add the reserved
memory with a command like the following:
#echo "PHYSICAL_ADDRESS_OF_MEMORY, SIZE_FOR_ADDING_MEMORY" > /kernel/sys/addmemory/addmemory
PHYSICAL_ADDRESS_OF_MEMORY: the starting physical address of the memory being added
SIZE_FOR_ADDING_MEMORY: the size of memory to add; it should be an integral
multiple of the section size.
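For comparison, on architectures that already support memory hotplug the
existing sysfs interface looks roughly like this (the address and block number
are only examples, and the probe file is only present with
CONFIG_ARCH_MEMORY_PROBE):

# add a memory section starting at the given physical address
echo 0x40000000 > /sys/devices/system/memory/probe
# then bring the corresponding memory block online
echo online > /sys/devices/system/memory/memory64/state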
So my question is whether ARM supports memory hotplug, or whether it is
planned. I am very interested in porting the feature from x86 to ARM. I have
finished the above functionality and implemented dynamic memory addition.
I can post my patches if that would be useful.
Please give me your suggestions.
Thanks
xiaofeng.yan
On 04/11/2014 12:18 AM, Jan Kara wrote:
> On Thu 10-04-14 23:57:38, Jan Kara wrote:
>> On Thu 10-04-14 14:22:20, Hans Verkuil wrote:
>>> On 04/10/14 14:15, Jan Kara wrote:
>>>> On Thu 10-04-14 13:07:42, Hans Verkuil wrote:
>>>>> On 04/10/14 12:32, Jan Kara wrote:
>>>>>> Hello,
>>>>>>
>>>>>> On Thu 10-04-14 12:02:50, Marek Szyprowski wrote:
>>>>>>> On 2014-03-17 20:49, Jan Kara wrote:
>>>>>>>> The following patch series is my first stab at abstracting vma handling
>>>>>>>> from the various media drivers. After this patch set drivers have to know
>>>>>>>> much less detail about vmas, their types, and locking. My motivation for
>>>>>>>> the series is that I want to change get_user_pages() locking and I want
>>>>>>>> to handle subtle locking details in as few places as possible.
>>>>>>>>
>>>>>>>> The core of the series is the new helper get_vaddr_pfns() which is given a
>>>>>>>> virtual address and fills PFNs into a provided array. If PFNs correspond to
>>>>>>>> normal pages it also grabs references to these pages. The difference from
>>>>>>>> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
>>>>>>>> mappings which is what the media drivers need.
>>>>>>>>
>>>>>>>> The patches are just compile tested (since I don't have any of the hardware
>>>>>>>> I'm afraid I won't be able to do any more testing anyway) so please handle
>>>>>>>> with care. I'm grateful for any comments.
>>>>>>>
>>>>>>> Thanks for posting this series! I will check if it works with our
>>>>>>> hardware soon. This is something I wanted to introduce some time ago to
>>>>>>> simplify buffer handling in dma-buf, but I had no time to start working on it.
>>>>>> Thanks for having a look at the series.
>>>>>>
>>>>>>> However I would like to go even further with integration of your pfn
>>>>>>> vector idea. This structure looks like the best solution for a compact
>>>>>>> representation of the memory buffer, which should be considered by the
>>>>>>> hardware as contiguous (either contiguous in physical memory or mapped
>>>>>>> contiguously into dma address space by the respective iommu). As you
>>>>>>> already noticed it is widely used by graphics and video drivers.
>>>>>>>
>>>>>>> I would also like to add support for pfn vector directly to the
>>>>>>> dma-mapping subsystem. This can be done quite easily (even with a
>>>>>>> fallback for architectures which don't provide a method for it). I will try
>>>>>>> to prepare rfc soon. This will finally remove the need for hacks in
>>>>>>> media/v4l2-core/videobuf2-dma-contig.c
>>>>>> That would be a worthwhile thing to do. When I was reading the code this
>>>>>> seemed like something which could be done, but I deliberately avoided doing
>>>>>> more unification than necessary for my purposes as I don't have any
>>>>>> hardware to test and don't know all the subtleties in the code... BTW, is
>>>>>> there some way to test the drivers without the physical video HW?
>>>>>
>>>>> You can use the vivi driver (drivers/media/platform/vivi) for this.
>>>>> However, while the vivi driver can import dma buffers it cannot export
>>>>> them. If you want that, then you have to use this tree:
>>>>>
>>>>> http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=vb2-part4
>>>> Thanks for the pointer that looks good. I've also found
>>>> drivers/media/platform/mem2mem_testdev.c which seems to do even more
>>>> testing of the area I made changes to. So now I have to find some userspace
>>>> tool which can issue proper ioctls to setup and use the buffers and I can
>>>> start testing what I wrote :)
>>>
>>> Get the v4l-utils.git repository (http://git.linuxtv.org/cgit.cgi/v4l-utils.git/).
>>> You want the v4l2-ctl tool. Don't use the version supplied by your distro,
>>> that's often too old.
>>>
>>> 'v4l2-ctl --help-streaming' gives the available options for doing streaming.
>>>
>>> So simple capturing from vivi is 'v4l2-ctl --stream-mmap' or '--stream-user'.
>>> You can't test dmabuf unless you switch to the vb2-part4 branch of my tree.
>> Great, it seems to be doing something and it shows there's some bug in my
>> code. Thanks a lot for the help.
> OK, so after a small fix the basic functionality seems to be working. It
> doesn't seem there's a way to test multiplanar buffers with vivi, is there?
For that you need to switch to the vb2-part4 branch as well. That has support
for multiplanar.
Regards,
Hans
On 04/10/14 14:15, Jan Kara wrote:
> On Thu 10-04-14 13:07:42, Hans Verkuil wrote:
>> On 04/10/14 12:32, Jan Kara wrote:
>>> Hello,
>>>
>>> On Thu 10-04-14 12:02:50, Marek Szyprowski wrote:
>>>> On 2014-03-17 20:49, Jan Kara wrote:
>>>>> The following patch series is my first stab at abstracting vma handling
>>>>> from the various media drivers. After this patch set drivers have to know
>>>>> much less detail about vmas, their types, and locking. My motivation for
>>>>> the series is that I want to change get_user_pages() locking and I want
>>>>> to handle subtle locking details in as few places as possible.
>>>>>
>>>>> The core of the series is the new helper get_vaddr_pfns() which is given a
>>>>> virtual address and fills PFNs into a provided array. If PFNs correspond to
>>>>> normal pages it also grabs references to these pages. The difference from
>>>>> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
>>>>> mappings which is what the media drivers need.
>>>>>
>>>>> The patches are just compile tested (since I don't have any of the hardware
>>>>> I'm afraid I won't be able to do any more testing anyway) so please handle
>>>>> with care. I'm grateful for any comments.
>>>>
>>>> Thanks for posting this series! I will check if it works with our
>>>> hardware soon. This is something I wanted to introduce some time ago to
>>>> simplify buffer handling in dma-buf, but I had no time to start working on it.
>>> Thanks for having a look at the series.
>>>
>>>> However I would like to go even further with integration of your pfn
>>>> vector idea. This structure looks like the best solution for a compact
>>>> representation of the memory buffer, which should be considered by the
>>>> hardware as contiguous (either contiguous in physical memory or mapped
>>>> contiguously into dma address space by the respective iommu). As you
>>>> already noticed it is widely used by graphics and video drivers.
>>>>
>>>> I would also like to add support for pfn vector directly to the
>>>> dma-mapping subsystem. This can be done quite easily (even with a
>>>> fallback for architectures which don't provide a method for it). I will try
>>>> to prepare rfc soon. This will finally remove the need for hacks in
>>>> media/v4l2-core/videobuf2-dma-contig.c
>>> That would be a worthwhile thing to do. When I was reading the code this
>>> seemed like something which could be done, but I deliberately avoided doing
>>> more unification than necessary for my purposes as I don't have any
>>> hardware to test and don't know all the subtleties in the code... BTW, is
>>> there some way to test the drivers without the physical video HW?
>>
>> You can use the vivi driver (drivers/media/platform/vivi) for this.
>> However, while the vivi driver can import dma buffers it cannot export
>> them. If you want that, then you have to use this tree:
>>
>> http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=vb2-part4
> Thanks for the pointer that looks good. I've also found
> drivers/media/platform/mem2mem_testdev.c which seems to do even more
> testing of the area I made changes to. So now I have to find some userspace
> tool which can issue proper ioctls to setup and use the buffers and I can
> start testing what I wrote :)
Get the v4l-utils.git repository (http://git.linuxtv.org/cgit.cgi/v4l-utils.git/).
You want the v4l2-ctl tool. Don't use the version supplied by your distro,
that's often too old.
'v4l2-ctl --help-streaming' gives the available options for doing streaming.
So simple capturing from vivi is 'v4l2-ctl --stream-mmap' or '--stream-user'.
You can't test dmabuf unless you switch to the vb2-part4 branch of my tree.
If you need help with testing it's easiest to contact me on the #v4l irc
channel.
Regards,
Hans