Certain platforms contain peripherals with alignment requirements on
contiguous memory allocations, which necessitates using the alignment
argument when obtaining CMA memory. The current maximum
CMA_ALIGNMENT of order 9 translates into a 2MB alignment cap on systems
with a 4K page size. To accommodate systems whose peripherals have even
larger alignment requirements, increase the upper bound of
CMA_ALIGNMENT from order 9 to order 12 (16MB with 4K pages).
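For illustration, this is roughly what an allocation needing the new
maximum alignment could look like; a sketch only, with the device
pointer and buffer size assumed rather than taken from a real driver:

	/* A sketch: dma_alloc_from_contiguous() clamps its alignment
	 * argument to CONFIG_CMA_ALIGNMENT, so order 12 (16MB with 4K
	 * pages) only becomes usable once the Kconfig cap is raised. */
	#include <linux/dma-contiguous.h>
	#include <linux/sizes.h>

	struct page *pages;
	int count = SZ_16M >> PAGE_SHIFT;	/* number of 4K pages */

	pages = dma_alloc_from_contiguous(dev, count, get_order(SZ_16M));
	if (!pages)
		return -ENOMEM;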
Marc Carino (1):
cma: increase CMA_ALIGNMENT upper limit to 12
drivers/base/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--
1.9.1
Hi all,
I have come across some scenarios that need the memory-hotplug feature
in products based on the ARM platform.
A typical scenario can be described as follows:
Developers reserve some memory which can't be accessed by the kernel, so
that other functions can use this memory before the system starts up.
After booting, the system can reclaim the reserved memory through the
memory-hotplug mechanism. For example, a user can add the reserved
memory with the following command:
#echo "PHYSICAL_ADDRESS_OF_MEMORY, SIZE_FOR_ADDING_MEMORY" >
/sys/kernel/addmemory/addmemory
PHYSICAL_ADDRESS_OF_MEMORY: the starting physical address of the memory to add
SIZE_FOR_ADDING_MEMORY: the size of the memory to add; it must be an
integral multiple of the section size.
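For reference, the kernel side of such a node can be quite small. A
minimal sketch, assuming a custom sysfs attribute (the handler name and
node id 0 are my assumptions; add_memory() is the real hotplug entry
point):

	#include <linux/kobject.h>
	#include <linux/memory.h>
	#include <linux/memory_hotplug.h>

	static ssize_t addmemory_store(struct kobject *kobj,
				       struct kobj_attribute *attr,
				       const char *buf, size_t count)
	{
		u64 start, size;
		int err;

		if (sscanf(buf, "%llx,%llx", &start, &size) != 2)
			return -EINVAL;
		/* the size must be an integral multiple of the section size */
		if (!IS_ALIGNED(size, MIN_MEMORY_BLOCK_SIZE))
			return -EINVAL;
		err = add_memory(0, start, size);	/* node 0 assumed */
		if (err)
			return err;
		return count;
	}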
So my question is whether there is a plan for ARM to support memory
hotplug. I am very interested in porting the feature from x86 to ARM. I
have finished the functionality described above and achieved dynamic
memory addition. I can post my patches if that is useful.
Please give me your suggestions.
Thanks
xiaofeng.yan
On 04/11/2014 12:18 AM, Jan Kara wrote:
> On Thu 10-04-14 23:57:38, Jan Kara wrote:
>> On Thu 10-04-14 14:22:20, Hans Verkuil wrote:
>>> On 04/10/14 14:15, Jan Kara wrote:
>>>> On Thu 10-04-14 13:07:42, Hans Verkuil wrote:
>>>>> On 04/10/14 12:32, Jan Kara wrote:
>>>>>> Hello,
>>>>>>
>>>>>> On Thu 10-04-14 12:02:50, Marek Szyprowski wrote:
>>>>>>> On 2014-03-17 20:49, Jan Kara wrote:
>>>>>>>> The following patch series is my first stab at abstracting vma handling
>>>>>>>> from the various media drivers. After this patch set drivers have to know
>>>>>>>> far fewer details about vmas, their types, and locking. My motivation for
>>>>>>>> the series is that I want to change get_user_pages() locking and I want
>>>>>>>> to handle subtle locking details in as few places as possible.
>>>>>>>>
>>>>>>>> The core of the series is the new helper get_vaddr_pfns() which is given a
>>>>>>>> virtual address and it fills PFNs into the provided array. If PFNs correspond to
>>>>>>>> normal pages it also grabs references to these pages. The difference from
>>>>>>>> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
>>>>>>>> mappings which is what the media drivers need.
>>>>>>>>
>>>>>>>> The patches are just compile tested (since I don't have any of the hardware
>>>>>>>> I'm afraid I won't be able to do any more testing anyway) so please handle
>>>>>>>> with care. I'm grateful for any comments.
>>>>>>>
>>>>>>> Thanks for posting this series! I will check if it works with our
>>>>>>> hardware soon. This is something I wanted to introduce some time ago to
>>>>>>> simplify buffer handling in dma-buf, but I had no time to start working on it.
>>>>>> Thanks for having a look in the series.
>>>>>>
>>>>>>> However I would like to go even further with integration of your pfn
>>>>>>> vector idea. This structure looks like the best solution for a compact
>>>>>>> representation of the memory buffer, which should be considered by the
>>>>>>> hardware as contiguous (either contiguous in physical memory or mapped
>>>>>>> contiguously into dma address space by the respective iommu). As you
>>>>>>> already noticed it is widely used by graphics and video drivers.
>>>>>>>
>>>>>>> I would also like to add support for pfn vector directly to the
>>>>>>> dma-mapping subsystem. This can be done quite easily (even with a
>>>>>>> fallback for architectures which don't provide a method for it). I will try
>>>>>>> to prepare an RFC soon. This will finally remove the need for hacks in
>>>>>>> media/v4l2-core/videobuf2-dma-contig.c
>>>>>> That would be a worthwhile thing to do. When I was reading the code this
>>>>>> seemed like something which could be done but I deliberately avoided doing
>>>>>> more unification than necessary for my purposes as I don't have any
>>>>>> hardware to test and don't know all the subtleties in the code... BTW, is
>>>>>> there some way to test the drivers without the physical video HW?
>>>>>
>>>>> You can use the vivi driver (drivers/media/platform/vivi) for this.
>>>>> However, while the vivi driver can import dma buffers it cannot export
>>>>> them. If you want that, then you have to use this tree:
>>>>>
>>>>> http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=vb2-part4
>>>> Thanks for the pointer that looks good. I've also found
>>>> drivers/media/platform/mem2mem_testdev.c which seems to do even more
>>>> testing of the area I made changes to. So now I have to find some userspace
>>>> tool which can issue proper ioctls to setup and use the buffers and I can
>>>> start testing what I wrote :)
>>>
>>> Get the v4l-utils.git repository (http://git.linuxtv.org/cgit.cgi/v4l-utils.git/).
>>> You want the v4l2-ctl tool. Don't use the version supplied by your distro,
>>> that's often too old.
>>>
>>> 'v4l2-ctl --help-streaming' gives the available options for doing streaming.
>>>
>>> So simple capturing from vivi is 'v4l2-ctl --stream-mmap' or '--stream-user'.
>>> You can't test dmabuf unless you switch to the vb2-part4 branch of my tree.
>> Great, it seems to be doing something and it shows there's some bug in my
>> code. Thanks a lot for the help.
> OK, so after a small fix the basic functionality seems to be working. It
> doesn't seem there's a way to test multiplanar buffers with vivi, is there?
For that you need to switch to the vb2-part4 branch as well. That has support
for multiplanar.
Regards,
Hans
On 04/10/14 14:15, Jan Kara wrote:
> On Thu 10-04-14 13:07:42, Hans Verkuil wrote:
>> On 04/10/14 12:32, Jan Kara wrote:
>>> Hello,
>>>
>>> On Thu 10-04-14 12:02:50, Marek Szyprowski wrote:
>>>> On 2014-03-17 20:49, Jan Kara wrote:
>>>>> The following patch series is my first stab at abstracting vma handling
>>>>> from the various media drivers. After this patch set drivers have to know
>>>>> far fewer details about vmas, their types, and locking. My motivation for
>>>>> the series is that I want to change get_user_pages() locking and I want
>>>>> to handle subtle locking details in as few places as possible.
>>>>>
>>>>> The core of the series is the new helper get_vaddr_pfns() which is given a
>>>>> virtual address and it fills PFNs into the provided array. If PFNs correspond to
>>>>> normal pages it also grabs references to these pages. The difference from
>>>>> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
>>>>> mappings which is what the media drivers need.
>>>>>
>>>>> The patches are just compile tested (since I don't have any of the hardware
>>>>> I'm afraid I won't be able to do any more testing anyway) so please handle
>>>>> with care. I'm grateful for any comments.
>>>>
>>>> Thanks for posting this series! I will check if it works with our
>>>> hardware soon. This is something I wanted to introduce some time ago to
>>>> simplify buffer handling in dma-buf, but I had no time to start working on it.
>>> Thanks for having a look in the series.
>>>
>>>> However I would like to go even further with integration of your pfn
>>>> vector idea. This structure looks like the best solution for a compact
>>>> representation of the memory buffer, which should be considered by the
>>>> hardware as contiguous (either contiguous in physical memory or mapped
>>>> contiguously into dma address space by the respective iommu). As you
>>>> already noticed it is widely used by graphics and video drivers.
>>>>
>>>> I would also like to add support for pfn vector directly to the
>>>> dma-mapping subsystem. This can be done quite easily (even with a
>>>> fallback for architectures which don't provide a method for it). I will try
>>>> to prepare an RFC soon. This will finally remove the need for hacks in
>>>> media/v4l2-core/videobuf2-dma-contig.c
>>> That would be a worthwhile thing to do. When I was reading the code this
>>> seemed like something which could be done but I deliberately avoided doing
>>> more unification than necessary for my purposes as I don't have any
>>> hardware to test and don't know all the subtleties in the code... BTW, is
>>> there some way to test the drivers without the physical video HW?
>>
>> You can use the vivi driver (drivers/media/platform/vivi) for this.
>> However, while the vivi driver can import dma buffers it cannot export
>> them. If you want that, then you have to use this tree:
>>
>> http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=vb2-part4
> Thanks for the pointer that looks good. I've also found
> drivers/media/platform/mem2mem_testdev.c which seems to do even more
> testing of the area I made changes to. So now I have to find some userspace
> tool which can issue proper ioctls to setup and use the buffers and I can
> start testing what I wrote :)
Get the v4l-utils.git repository (http://git.linuxtv.org/cgit.cgi/v4l-utils.git/).
You want the v4l2-ctl tool. Don't use the version supplied by your distro,
that's often too old.
'v4l2-ctl --help-streaming' gives the available options for doing streaming.
So simple capturing from vivi is 'v4l2-ctl --stream-mmap' or '--stream-user'.
You can't test dmabuf unless you switch to the vb2-part4 branch of my tree.
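For example, a minimal capture run could look like this (the device node
and frame count here are just illustrative):

	v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=100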
If you need help with testing it's easiest to contact me on the #v4l irc
channel.
Regards,
Hans
On 04/10/14 12:32, Jan Kara wrote:
> Hello,
>
> On Thu 10-04-14 12:02:50, Marek Szyprowski wrote:
>> On 2014-03-17 20:49, Jan Kara wrote:
>>> The following patch series is my first stab at abstracting vma handling
>>> from the various media drivers. After this patch set drivers have to know
>>> far fewer details about vmas, their types, and locking. My motivation for
>>> the series is that I want to change get_user_pages() locking and I want
>>> to handle subtle locking details in as few places as possible.
>>>
>>> The core of the series is the new helper get_vaddr_pfns() which is given a
>>> virtual address and it fills PFNs into the provided array. If PFNs correspond to
>>> normal pages it also grabs references to these pages. The difference from
>>> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
>>> mappings which is what the media drivers need.
>>>
>>> The patches are just compile tested (since I don't have any of the hardware
>>> I'm afraid I won't be able to do any more testing anyway) so please handle
>>> with care. I'm grateful for any comments.
>>
>> Thanks for posting this series! I will check if it works with our
>> hardware soon. This is something I wanted to introduce some time ago to
>> simplify buffer handling in dma-buf, but I had no time to start working on it.
> Thanks for having a look in the series.
>
>> However I would like to go even further with integration of your pfn
>> vector idea. This structure looks like the best solution for a compact
>> representation of the memory buffer, which should be considered by the
>> hardware as contiguous (either contiguous in physical memory or mapped
>> contiguously into dma address space by the respective iommu). As you
>> already noticed it is widely used by graphics and video drivers.
>>
>> I would also like to add support for pfn vector directly to the
>> dma-mapping subsystem. This can be done quite easily (even with a
>> fallback for architectures which don't provide a method for it). I will try
>> to prepare an RFC soon. This will finally remove the need for hacks in
>> media/v4l2-core/videobuf2-dma-contig.c
> That would be a worthwhile thing to do. When I was reading the code this
> seemed like something which could be done but I deliberately avoided doing
> more unification than necessary for my purposes as I don't have any
> hardware to test and don't know all the subtleties in the code... BTW, is
> there some way to test the drivers without the physical video HW?
You can use the vivi driver (drivers/media/platform/vivi) for this.
However, while the vivi driver can import dma buffers it cannot export
them. If you want that, then you have to use this tree:
http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=vb2-part4
Regards,
Hans
Hello,
On 2014-03-17 20:49, Jan Kara wrote:
> The following patch series is my first stab at abstracting vma handling
> from the various media drivers. After this patch set drivers have to know
> far fewer details about vmas, their types, and locking. My motivation for
> the series is that I want to change get_user_pages() locking and I want
> to handle subtle locking details in as few places as possible.
>
> The core of the series is the new helper get_vaddr_pfns() which is given a
> virtual address and it fills PFNs into the provided array. If PFNs correspond to
> normal pages it also grabs references to these pages. The difference from
> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
> mappings which is what the media drivers need.
>
> The patches are just compile tested (since I don't have any of the hardware
> I'm afraid I won't be able to do any more testing anyway) so please handle
> with care. I'm grateful for any comments.
Thanks for posting this series! I will check if it works with our
hardware soon.
This is something I wanted to introduce some time ago to simplify buffer
handling in dma-buf, but I had no time to start working on it.
However I would like to go even further with integration of your pfn
vector idea. This structure looks like the best solution for a compact
representation of a memory buffer which should be treated by the
hardware as contiguous (either contiguous in physical memory, or mapped
contiguously into the DMA address space by the respective IOMMU). As you
already noticed, it is widely used by graphics and video drivers.
I would also like to add support for the pfn vector directly to the
dma-mapping subsystem. This can be done quite easily (even with a
fallback for architectures which don't provide a method for it). I will
try to prepare an RFC soon. This will finally remove the need for hacks
in media/v4l2-core/videobuf2-dma-contig.c.
Thanks for motivating me to finally start working on this!
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
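As an aside, calling code for such a helper might look roughly like the
sketch below; the type and function names (pfn_vec, pfn_vec_create(),
get_vaddr_pfns(), put_vaddr_pfns()) and the exact signature are
assumptions based on the cover letter, not the symbols from the posted
patches:

	/* A sketch: pin the PFNs backing a user virtual range. For normal
	 * pages the helper also takes page references; for pfnmap/mixed/io
	 * mappings only raw PFNs are recorded. */
	struct pfn_vec *vec = pfn_vec_create(nr_pages);	/* hypothetical */
	int pinned;

	pinned = get_vaddr_pfns(start, nr_pages, 1 /* write */, 0 /* force */, vec);
	if (pinned < 0)
		return pinned;
	/* ... program the device with the PFNs in vec ... */
	put_vaddr_pfns(vec);	/* drops any page references taken above */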
On Thu, Apr 10, 2014 at 01:30:06AM +0200, Javier Martinez Canillas wrote:
> commit c0b00a5 ("dma-buf: update debugfs output") modified the
> default exporter name to be the KBUILD_MODNAME pre-processor
> macro instead of __FILE__ but the documentation was not updated.
>
> Also the "Supporting existing mmap interfaces in exporters" section
> title seems wrong, since it talks about the interface used by importers.
>
> Signed-off-by: Javier Martinez Canillas <javier.martinez(a)collabora.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
> ---
> Documentation/dma-buf-sharing.txt | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
> index 505e711..7d61cef 100644
> --- a/Documentation/dma-buf-sharing.txt
> +++ b/Documentation/dma-buf-sharing.txt
> @@ -66,7 +66,7 @@ The dma_buf buffer sharing API usage contains the following steps:
>
> Exporting modules which do not wish to provide any specific name may use the
> helper define 'dma_buf_export()', with the same arguments as above, but
> - without the last argument; a __FILE__ pre-processor directive will be
> + without the last argument; a KBUILD_MODNAME pre-processor directive will be
> inserted in place of 'exp_name' instead.
>
> 2. Userspace gets a handle to pass around to potential buffer-users
> @@ -352,7 +352,7 @@ Being able to mmap an export dma-buf buffer object has 2 main use-cases:
>
> No special interfaces, userspace simply calls mmap on the dma-buf fd.
>
> -2. Supporting existing mmap interfaces in exporters
> +2. Supporting existing mmap interfaces in importers
>
> Similar to the motivation for kernel cpu access it is again important that
> the userspace code of a given importing subsystem can use the same interfaces
> --
> 1.9.0
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
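For context, a minimal sketch of the helper that the corrected paragraph
describes, using the four-argument dma_buf_export() of this era; my_priv,
my_dmabuf_ops, and my_size are placeholders for the exporter's own data:

	/* With dma_buf_export() the exporter name defaults to
	 * KBUILD_MODNAME, as the documentation fix above states. */
	struct dma_buf *dmabuf;

	dmabuf = dma_buf_export(my_priv, &my_dmabuf_ops, my_size, O_RDWR);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);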
Hi,
I was looking at some code (given below) which seems to perform very badly
when attachments and detachments are used to simulate cache coherency.
In the code below, when remote_attach is false (i.e., no remote processors),
the code runs in 8.8 seconds using just the two A9 cores. But when
remote_attach is true, then even though other cores are also executing
and sharing the workload, the same code takes 52.7 seconds. This shows
that detach and attach are very heavy for this kind of code. (The detach
system call performs dma_buf_unmap_attachment and dma_buf_detach; the
attach system call performs dma_buf_attach and dma_buf_map_attachment.)
for (k = 0; k < N; k++) {
	if (remote_attach) {
		/* full unmap/detach + attach/map cycle per outer iteration */
		detach(path);
		attach(path);
	}
	/* relaxation pass over this core's slice of rows */
	for (i = start_indx; i < end_indx; i++) {
		for (j = 0; j < N; j++) {
			if (path[i][j] < (path[i][k] + path[k][j]))
				path[i][j] = path[i][k] + path[k][j];
		}
	}
}
I would like to manage the cache explicitly and flush cache lines rather
than whole pages to reduce the overhead. I also want to access these
buffers from userspace. I can change some kernel code for this. Where
should I start?
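One possible direction, sketched below under stated assumptions: keep the
attachment mapped and bracket the CPU phase with the streaming-DMA sync
calls on the mapped scatterlist, instead of a full detach/attach per
iteration. Here attachment and sgt are assumed to be kept around from
dma_buf_attach() and dma_buf_map_attachment():

	/* A sketch, not the code's existing API: sync the scatterlist
	 * around CPU access rather than unmapping and remapping it. */
	dma_sync_sg_for_cpu(attachment->dev, sgt->sgl, sgt->nents,
			    DMA_BIDIRECTIONAL);
	/* ... CPU updates its slice of path[][] ... */
	dma_sync_sg_for_device(attachment->dev, sgt->sgl, sgt->nents,
			       DMA_BIDIRECTIONAL);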
Thanks in advance.
--Kiran