On 04/10/14 14:15, Jan Kara wrote:
> On Thu 10-04-14 13:07:42, Hans Verkuil wrote:
>> On 04/10/14 12:32, Jan Kara wrote:
>>> Hello,
>>>
>>> On Thu 10-04-14 12:02:50, Marek Szyprowski wrote:
>>>> On 2014-03-17 20:49, Jan Kara wrote:
>>>>> The following patch series is my first stab at abstracting vma handling
>>>>> from the various media drivers. After this patch set drivers have to know
>>>>> far fewer details about vmas, their types, and locking. My motivation for
>>>>> the series is that I want to change get_user_pages() locking and I want
>>>>> to handle subtle locking details in as few places as possible.
>>>>>
>>>>> The core of the series is the new helper get_vaddr_pfns() which is given a
>>>>> virtual address and fills PFNs into the provided array. If the PFNs correspond to
>>>>> normal pages, it also grabs references to these pages. The difference from
>>>>> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
>>>>> mappings which is what the media drivers need.
>>>>>
>>>>> The patches are just compile tested (since I don't have any of the hardware
>>>>> I'm afraid I won't be able to do any more testing anyway) so please handle
>>>>> with care. I'm grateful for any comments.
>>>>
>>>> Thanks for posting this series! I will check if it works with our
>>>> hardware soon. This is something I wanted to introduce some time ago to
>>>> simplify buffer handling in dma-buf, but I had no time to start working on it.
>>> Thanks for having a look in the series.
>>>
>>>> However, I would like to go even further with the integration of your pfn
>>>> vector idea. This structure looks like the best solution for a compact
>>>> representation of the memory buffer, which should be considered by the
>>>> hardware as contiguous (either contiguous in physical memory or mapped
>>>> contiguously into dma address space by the respective iommu). As you
>>>> already noticed, it is widely used by graphics and video drivers.
>>>>
>>>> I would also like to add support for the pfn vector directly to the
>>>> dma-mapping subsystem. This can be done quite easily (even with a
>>>> fallback for architectures which don't provide a method for it). I will
>>>> try to prepare an RFC soon. This will finally remove the need for hacks in
>>>> media/v4l2-core/videobuf2-dma-contig.c
>>> That would be a worthwhile thing to do. When I was reading the code this
>>> seemed like something which could be done, but I deliberately avoided doing
>>> more unification than necessary for my purposes as I don't have any
>>> hardware to test and don't know all the subtleties in the code... BTW, is
>>> there some way to test the drivers without the physical video HW?
>>
>> You can use the vivi driver (drivers/media/platform/vivi) for this.
>> However, while the vivi driver can import dma buffers it cannot export
>> them. If you want that, then you have to use this tree:
>>
>> http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=vb2-part4
> Thanks for the pointer; that looks good. I've also found
> drivers/media/platform/mem2mem_testdev.c, which seems to do even more
> testing of the area I made changes to. So now I have to find some userspace
> tool which can issue the proper ioctls to set up and use the buffers, and I
> can start testing what I wrote :)
Get the v4l-utils.git repository (http://git.linuxtv.org/cgit.cgi/v4l-utils.git/).
You want the v4l2-ctl tool. Don't use the version supplied by your distro;
it is often too old.
'v4l2-ctl --help-streaming' gives the available options for doing streaming.
So simple capturing from vivi is 'v4l2-ctl --stream-mmap' or '--stream-user'.
You can't test dmabuf unless you switch to the vb2-part4 branch of my tree.
If you need help with testing, it's easiest to contact me on the #v4l IRC
channel.
Regards,
Hans
On 04/10/14 12:32, Jan Kara wrote:
> Hello,
>
> On Thu 10-04-14 12:02:50, Marek Szyprowski wrote:
>> On 2014-03-17 20:49, Jan Kara wrote:
>>> The following patch series is my first stab at abstracting vma handling
>>> from the various media drivers. After this patch set drivers have to know
>>> far fewer details about vmas, their types, and locking. My motivation for
>>> the series is that I want to change get_user_pages() locking and I want
>>> to handle subtle locking details in as few places as possible.
>>>
>>> The core of the series is the new helper get_vaddr_pfns() which is given a
>>> virtual address and fills PFNs into the provided array. If the PFNs correspond to
>>> normal pages, it also grabs references to these pages. The difference from
>>> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
>>> mappings which is what the media drivers need.
>>>
>>> The patches are just compile tested (since I don't have any of the hardware
>>> I'm afraid I won't be able to do any more testing anyway) so please handle
>>> with care. I'm grateful for any comments.
>>
>> Thanks for posting this series! I will check if it works with our
>> hardware soon. This is something I wanted to introduce some time ago to
>> simplify buffer handling in dma-buf, but I had no time to start working on it.
> Thanks for having a look in the series.
>
>> However, I would like to go even further with the integration of your pfn
>> vector idea. This structure looks like the best solution for a compact
>> representation of the memory buffer, which should be considered by the
>> hardware as contiguous (either contiguous in physical memory or mapped
>> contiguously into dma address space by the respective iommu). As you
>> already noticed, it is widely used by graphics and video drivers.
>>
>> I would also like to add support for the pfn vector directly to the
>> dma-mapping subsystem. This can be done quite easily (even with a
>> fallback for architectures which don't provide a method for it). I will
>> try to prepare an RFC soon. This will finally remove the need for hacks in
>> media/v4l2-core/videobuf2-dma-contig.c
> That would be a worthwhile thing to do. When I was reading the code this
> seemed like something which could be done, but I deliberately avoided doing
> more unification than necessary for my purposes as I don't have any
> hardware to test and don't know all the subtleties in the code... BTW, is
> there some way to test the drivers without the physical video HW?
You can use the vivi driver (drivers/media/platform/vivi) for this.
However, while the vivi driver can import dma buffers it cannot export
them. If you want that, then you have to use this tree:
http://git.linuxtv.org/cgit.cgi/hverkuil/media_tree.git/log/?h=vb2-part4
Regards,
Hans
Hello,
On 2014-03-17 20:49, Jan Kara wrote:
> The following patch series is my first stab at abstracting vma handling
> from the various media drivers. After this patch set drivers have to know
> far fewer details about vmas, their types, and locking. My motivation for
> the series is that I want to change get_user_pages() locking and I want
> to handle subtle locking details in as few places as possible.
>
> The core of the series is the new helper get_vaddr_pfns() which is given a
> virtual address and fills PFNs into the provided array. If the PFNs correspond to
> normal pages, it also grabs references to these pages. The difference from
> get_user_pages() is that this function can also deal with pfnmap, mixed, and io
> mappings which is what the media drivers need.
>
> The patches are just compile tested (since I don't have any of the hardware
> I'm afraid I won't be able to do any more testing anyway) so please handle
> with care. I'm grateful for any comments.
Thanks for posting this series! I will check if it works with our hardware
soon. This is something I wanted to introduce some time ago to simplify
buffer handling in dma-buf, but I had no time to start working on it.
However, I would like to go even further with the integration of your pfn
vector idea. This structure looks like the best solution for a compact
representation of the memory buffer, which should be considered by the
hardware as contiguous (either contiguous in physical memory or mapped
contiguously into dma address space by the respective iommu). As you
already noticed, it is widely used by graphics and video drivers.
I would also like to add support for the pfn vector directly to the
dma-mapping subsystem. This can be done quite easily (even with a fallback
for architectures which don't provide a method for it). I will try to
prepare an RFC soon. This will finally remove the need for hacks in
media/v4l2-core/videobuf2-dma-contig.c
Thanks for motivating me to finally start working on this!
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
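For a concrete picture of the helper being discussed, here is a minimal sketch of how a videobuf2-style driver might consume a PFN-vector API such as the one described in the quoted cover letter. The struct name, prototypes and flags below are illustrative assumptions, not the API actually posted in the series:

/*
 * Hypothetical sketch only: "struct pfn_vec" and the prototypes of
 * get_vaddr_pfns()/put_vaddr_pfns() are guesses for illustration and do
 * not match the posted patches exactly.
 */
#include <linux/slab.h>
#include <linux/types.h>

struct pfn_vec {
    unsigned int nr_pfns;    /* number of valid entries */
    bool pages_pinned;       /* set when ordinary pages were referenced */
    unsigned long pfns[];    /* the PFNs themselves */
};

int get_vaddr_pfns(unsigned long start, unsigned int nr_pfns, bool write,
                   struct pfn_vec *vec);
void put_vaddr_pfns(struct pfn_vec *vec);

static int pin_user_buffer(unsigned long vaddr, unsigned int nr_pages)
{
    struct pfn_vec *vec;
    int ret;

    vec = kzalloc(sizeof(*vec) + nr_pages * sizeof(unsigned long),
                  GFP_KERNEL);
    if (!vec)
        return -ENOMEM;

    /* resolves ordinary pages as well as pfnmap/mixed/io mappings */
    ret = get_vaddr_pfns(vaddr, nr_pages, true, vec);
    if (ret < 0)
        goto out;

    /* ... program the device or IOMMU with vec->pfns here ... */

    put_vaddr_pfns(vec);    /* drops any page references that were taken */
    ret = 0;
out:
    kfree(vec);
    return ret;
}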
On Thu, Apr 10, 2014 at 01:30:06AM +0200, Javier Martinez Canillas wrote:
> commit c0b00a5 ("dma-buf: update debugfs output") modified the
> default exporter name to be the KBUILD_MODNAME pre-processor
> macro instead of __FILE__ but the documentation was not updated.
>
> Also the "Supporting existing mmap interfaces in exporters" section
> title seems wrong since it talks about the interface used by importers.
>
> Signed-off-by: Javier Martinez Canillas <javier.martinez(a)collabora.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
> ---
> Documentation/dma-buf-sharing.txt | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/dma-buf-sharing.txt b/Documentation/dma-buf-sharing.txt
> index 505e711..7d61cef 100644
> --- a/Documentation/dma-buf-sharing.txt
> +++ b/Documentation/dma-buf-sharing.txt
> @@ -66,7 +66,7 @@ The dma_buf buffer sharing API usage contains the following steps:
>
> Exporting modules which do not wish to provide any specific name may use the
> helper define 'dma_buf_export()', with the same arguments as above, but
> - without the last argument; a __FILE__ pre-processor directive will be
> + without the last argument; a KBUILD_MODNAME pre-processor directive will be
> inserted in place of 'exp_name' instead.
>
> 2. Userspace gets a handle to pass around to potential buffer-users
> @@ -352,7 +352,7 @@ Being able to mmap an export dma-buf buffer object has 2 main use-cases:
>
> No special interfaces, userspace simply calls mmap on the dma-buf fd.
>
> -2. Supporting existing mmap interfaces in exporters
> +2. Supporting existing mmap interfaces in importers
>
> Similar to the motivation for kernel cpu access it is again important that
> the userspace code of a given importing subsystem can use the same interfaces
> --
> 1.9.0
>
> _______________________________________________
> dri-devel mailing list
> dri-devel(a)lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
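As a reminder of what the corrected paragraph refers to, here is a hedged exporter-side sketch using the 3.14-era API (the ops table is an empty placeholder and would need the mandatory callbacks filled in before the export is accepted; later kernels changed these signatures):

#include <linux/dma-buf.h>
#include <linux/fcntl.h>

/* placeholder only; a real exporter wires up map/unmap/mmap/release etc. */
static const struct dma_buf_ops my_dmabuf_ops;

static struct dma_buf *export_with_explicit_name(void *priv, size_t size)
{
    /* exporter name given explicitly ... */
    return dma_buf_export_named(priv, &my_dmabuf_ops, size, O_CLOEXEC,
                                "my-exporter");
}

static struct dma_buf *export_with_default_name(void *priv, size_t size)
{
    /*
     * ... or via the convenience define without the last argument, in
     * which case KBUILD_MODNAME (not __FILE__) is used as 'exp_name',
     * as the corrected documentation states.
     */
    return dma_buf_export(priv, &my_dmabuf_ops, size, O_CLOEXEC);
}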
Hi,
I was looking at some code (given below) which seems to perform very badly
when attachments and detachments are used to simulate cache coherency.
In the code below, when remote_attach is false (i.e. no remote processors),
using just the two A9 cores the code runs in 8.8 seconds. But when
remote_attach is true, then even though there are other cores also
executing and sharing the workload, the same code takes 52.7 seconds.
This shows that detach and attach are very heavy for this kind of code.
(The detach system call performs dma_buf_unmap_attachment and dma_buf_detach;
the attach system call performs dma_buf_attach and dma_buf_map_attachment.)
for (k = 0; k < N; k++) {
    if (remote_attach) {
        /* detach/attach every iteration to fake cache coherency */
        detach(path);
        attach(path);
    }
    /* relax this core's slice of rows against row/column k */
    for (i = start_indx; i < end_indx; i++) {
        for (j = 0; j < N; j++) {
            if (path[i][j] < (path[i][k] + path[k][j])) {
                path[i][j] = path[i][k] + path[k][j];
            }
        }
    }
}
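For reference, the kernel-side sequence that the attach/detach system calls mentioned above presumably wrap looks roughly like the sketch below; the four dma-buf calls are the standard API, while the surrounding driver context (how dmabuf and dev are obtained) is assumed. Every round trip re-creates and re-maps the attachment, which is where the per-iteration cost comes from:

#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

/* one detach/attach round trip as seen from the kernel (sketch) */
static int remap_buffer_once(struct dma_buf *dmabuf, struct device *dev)
{
    struct dma_buf_attachment *attach;
    struct sg_table *sgt;

    attach = dma_buf_attach(dmabuf, dev);
    if (IS_ERR(attach))
        return PTR_ERR(attach);

    sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
    if (IS_ERR(sgt)) {
        dma_buf_detach(dmabuf, attach);
        return PTR_ERR(sgt);
    }

    /* ... the device works on the buffer here ... */

    dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
    dma_buf_detach(dmabuf, attach);
    return 0;
}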
I would like to manage the cache explicitly and flush cache lines rather
than whole pages to reduce the overhead. I also want to access these buffers
from userspace. I can change some kernel code for this. Where should I start?
Thanks in advance.
--Kiran
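One direction worth trying before changing kernel code: keep the attachment and mapping alive for the whole run and bracket only the CPU accesses with the existing CPU-access hooks, which let the exporter limit cache maintenance to the range actually touched. The sketch below uses the range-based signatures of the 3.x-era API and assumes the exporter implements begin_cpu_access/end_cpu_access sensibly; the offsets are illustrative:

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>

/*
 * Sketch: instead of a full detach/attach per iteration, hand ownership of
 * just the touched byte range back and forth around the CPU update.
 */
static int cpu_update_range(struct dma_buf *dmabuf, size_t off, size_t len)
{
    int ret;

    /* make the range coherent for the CPU (may invalidate cache lines) */
    ret = dma_buf_begin_cpu_access(dmabuf, off, len, DMA_BIDIRECTIONAL);
    if (ret)
        return ret;

    /* ... update path[i][j] for the rows living in [off, off + len) ... */

    /* flush/hand the range back before the remote cores read it */
    dma_buf_end_cpu_access(dmabuf, off, len, DMA_BIDIRECTIONAL);
    return 0;
}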
We found that various kinds of specific data can be connected to a dmabuf,
like the iommu mapping of the buffer or the dma descriptors (which can be
shared between several components). Although this information could be
regenerated before every dma operation, for performance's sake it is better
to keep it attached until it really becomes invalid.
Change-Id: I89d43dc3fe1ee3da91c42074da5df71b968e6d3c
Signed-off-by: Bin Wang <binw(a)marvell.com>
---
drivers/base/dma-buf.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++
include/linux/dma-buf.h | 22 ++++++++++
2 files changed, 122 insertions(+), 0 deletions(-)
diff --git a/drivers/base/dma-buf.c b/drivers/base/dma-buf.c
index 08fe897..5c82e60 100644
--- a/drivers/base/dma-buf.c
+++ b/drivers/base/dma-buf.c
@@ -50,6 +50,7 @@ static int dma_buf_release(struct inode *inode, struct file *file)
BUG_ON(dmabuf->vmapping_counter);
+ dma_buf_meta_release(dmabuf);
dmabuf->ops->release(dmabuf);
mutex_lock(&db_list.lock);
@@ -138,6 +139,7 @@ struct dma_buf *dma_buf_export_named(void *priv, const struct dma_buf_ops *ops,
mutex_init(&dmabuf->lock);
INIT_LIST_HEAD(&dmabuf->attachments);
+ INIT_LIST_HEAD(&dmabuf->metas);
mutex_lock(&db_list.lock);
list_add(&dmabuf->list_node, &db_list.head);
@@ -570,6 +572,104 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
}
EXPORT_SYMBOL_GPL(dma_buf_vunmap);
+/**
+ * dma_buf_meta_attach - Attach additional meta data to the dmabuf
+ * @dmabuf: [in] the dmabuf to attach to
+ * @id: [in] the id of the meta data
+ * @pdata: [in] the raw data to be attached
+ * @release: [in] the callback to release the meta data
+ */
+int dma_buf_meta_attach(struct dma_buf *dmabuf, int id, void *pdata,
+ int (*release)(void *))
+{
+ struct dma_buf_meta *pmeta;
+
+ pmeta = kmalloc(sizeof(struct dma_buf_meta), GFP_KERNEL);
+ if (pmeta == NULL)
+ return -ENOMEM;
+
+ pmeta->id = id;
+ pmeta->pdata = pdata;
+ pmeta->release = release;
+
+ mutex_lock(&dmabuf->lock);
+ list_add(&pmeta->node, &dmabuf->metas);
+ mutex_unlock(&dmabuf->lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dma_buf_meta_attach);
+
+/**
+ * dma_buf_meta_dettach - Detach the meta data from the dmabuf by id
+ * @dmabuf: [in] the dmabuf including the meta data
+ * @id: [in] the id of the meta data
+ */
+int dma_buf_meta_dettach(struct dma_buf *dmabuf, int id)
+{
+ struct dma_buf_meta *pmeta, *tmp;
+ int ret = -ENOENT;
+
+ mutex_lock(&dmabuf->lock);
+ list_for_each_entry_safe(pmeta, tmp, &dmabuf->metas, node) {
+ if (pmeta->id == id) {
+ if (pmeta->release)
+ pmeta->release(pmeta->pdata);
+ list_del(&pmeta->node);
+ kfree(pmeta);
+ ret = 0;
+ break;
+ }
+ }
+ mutex_unlock(&dmabuf->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dma_buf_meta_dettach);
+
+/**
+ * dma_buf_meta_fetch - Get the meta data from dmabuf by id
+ * @dmabuf: [in] the dmabuf including the meta data
+ * @id: [in] the id of the meta data
+ */
+void *dma_buf_meta_fetch(struct dma_buf *dmabuf, int id)
+{
+ struct dma_buf_meta *pmeta;
+ void *pdata = NULL;
+
+ mutex_lock(&dmabuf->lock);
+ list_for_each_entry(pmeta, &dmabuf->metas, node) {
+ if (pmeta->id == id) {
+ pdata = pmeta->pdata;
+ break;
+ }
+ }
+ mutex_unlock(&dmabuf->lock);
+
+ return pdata;
+}
+EXPORT_SYMBOL_GPL(dma_buf_meta_fetch);
+
+/**
+ * dma_buf_meta_release - Release all the meta data attached to the dmabuf
+ * @dmabuf: [in] the dmabuf including the meta data
+ */
+void dma_buf_meta_release(struct dma_buf *dmabuf)
+{
+ struct dma_buf_meta *pmeta, *tmp;
+
+ mutex_lock(&dmabuf->lock);
+ list_for_each_entry_safe(pmeta, tmp, &dmabuf->metas, node) {
+ if (pmeta->release)
+ pmeta->release(pmeta->pdata);
+ list_del(&pmeta->node);
+ kfree(pmeta);
+ }
+ mutex_unlock(&dmabuf->lock);
+
+ return;
+}
+
#ifdef CONFIG_DEBUG_FS
static int dma_buf_describe(struct seq_file *s)
{
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index dfac5ed..369d032 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -120,6 +120,7 @@ struct dma_buf {
size_t size;
struct file *file;
struct list_head attachments;
+ struct list_head metas;
const struct dma_buf_ops *ops;
/* mutex to serialize list manipulation, attach/detach and vmap/unmap */
struct mutex lock;
@@ -149,6 +150,20 @@ struct dma_buf_attachment {
};
/**
+ * struct dma_buf_meta - holds varied meta data attached to the buffer
+ * @id: the identification of the meta data
+ * @release: callback to release @pdata.
+ * @node: list of dma_buf_meta.
+ * @pdata: specific meta data.
+ */
+struct dma_buf_meta {
+ int id;
+ struct list_head node;
+ int (*release)(void *pdata);
+ void *pdata;
+};
+
+/**
* get_dma_buf - convenience wrapper for get_file.
* @dmabuf: [in] pointer to dma_buf
*
@@ -194,6 +209,13 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
unsigned long);
void *dma_buf_vmap(struct dma_buf *);
void dma_buf_vunmap(struct dma_buf *, void *vaddr);
+
+int dma_buf_meta_attach(struct dma_buf *dmabuf, int id, void *pdata,
+ int (*release)(void *));
+int dma_buf_meta_dettach(struct dma_buf *dmabuf, int id);
+void *dma_buf_meta_fetch(struct dma_buf *dmabuf, int id);
+void dma_buf_meta_release(struct dma_buf *dmabuf);
+
int dma_buf_debugfs_create_file(const char *name,
int (*write)(struct seq_file *));
#endif /* __DMA_BUF_H__ */
--
1.7.0.4
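To make the intended use of the proposed API concrete, here is a hedged importer-side sketch built only from the signatures added above; the id value, the cached payload and the idea of caching an sg_table are illustrative, not part of the patch:

#include <linux/dma-buf.h>
#include <linux/slab.h>

#define MY_DRV_META_ID  0x1234          /* driver-chosen id, illustrative */

struct my_cached_mapping {
    struct sg_table *sgt;               /* e.g. a reusable dma descriptor */
};

static int my_cached_mapping_release(void *pdata)
{
    /* a real driver would also unmap/free the cached mapping here */
    kfree(pdata);
    return 0;
}

static int my_drv_cache_mapping(struct dma_buf *dmabuf, struct sg_table *sgt)
{
    struct my_cached_mapping *m;

    /* reuse the cached mapping if a previous user already attached one */
    m = dma_buf_meta_fetch(dmabuf, MY_DRV_META_ID);
    if (m)
        return 0;

    m = kzalloc(sizeof(*m), GFP_KERNEL);
    if (!m)
        return -ENOMEM;
    m->sgt = sgt;

    /* released by dma_buf_meta_release() when the buffer is freed */
    return dma_buf_meta_attach(dmabuf, MY_DRV_META_ID, m,
                               my_cached_mapping_release);
}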
On Thu, 13 Mar 2014 14:51:56 -0700, Kevin Hilman <khilman(a)linaro.org> wrote:
> Josh Cartwright <joshc(a)codeaurora.org> writes:
>
> > On Thu, Mar 13, 2014 at 01:46:50PM -0700, Kevin Hilman wrote:
> >> On Fri, Feb 21, 2014 at 4:25 AM, Marek Szyprowski
> >> <m.szyprowski(a)samsung.com> wrote:
> >> > Enable reserved memory initialization from device tree.
> >> >
> >> > Signed-off-by: Marek Szyprowski <m.szyprowski(a)samsung.com>
> >>
> >> This patch has hit -next and several legacy (non-DT) boot failures
> >> were detected and bisected down to this patch. A quick scan looks
> >> like there needs to be some sanity checking whether a DT is even
> >> present.
> >
> > Hmm. Yes, the code unconditionally calls of_scan_flat_dt(), which will
> > gladly touch initial_boot_params, even though it may be uninitialized.
> > The below patch should allow these boards to boot...
> >
> > However, I'm wondering if there is a good reason why we don't parse the
> > /reserved-memory nodes right after we parse the /memory nodes as
> > part of early_init_dt_scan()...
> >
> > Thanks,
> > Josh
> >
> > --8<--
> > Subject: [PATCH] drivers: of: only scan for reserved mem when fdt present
> >
> > Reported-by: Kevin Hilman <khilman(a)linaro.org>
> > Signed-off-by: Josh Cartwright <joshc(a)codeaurora.org>
>
> This gets legacy boot working again. Thanks.
>
> Tested-by: Kevin Hilman <khilman(a)linaro.org>
Applied and confirmed on non-DT qemu boot. Thanks. It will be pushed out
shortly.
g.
On Thu, Mar 13, 2014 at 01:46:50PM -0700, Kevin Hilman wrote:
> On Fri, Feb 21, 2014 at 4:25 AM, Marek Szyprowski
> <m.szyprowski(a)samsung.com> wrote:
> > Enable reserved memory initialization from device tree.
> >
> > Signed-off-by: Marek Szyprowski <m.szyprowski(a)samsung.com>
>
> This patch has hit -next and several legacy (non-DT) boot failures
> were detected and bisected down to this patch. A quick scan looks
> like there needs to be some sanity checking whether a DT is even
> present.
Hmm. Yes, the code unconditionally calls of_scan_flat_dt(), which will
gladly touch initial_boot_params, even though it may be uninitialized.
The below patch should allow these boards to boot...
However, I'm wondering if there is a good reason why we don't parse the
/reserved-memory nodes right after we parse the /memory nodes as
part of early_init_dt_scan()...
Thanks,
Josh
--8<--
Subject: [PATCH] drivers: of: only scan for reserved mem when fdt present
Reported-by: Kevin Hilman <khilman(a)linaro.org>
Signed-off-by: Josh Cartwright <joshc(a)codeaurora.org>
---
drivers/of/fdt.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 510c0d8..501bc83 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -557,6 +557,9 @@ static int __init __fdt_scan_reserved_mem(unsigned long node, const char *uname,
*/
void __init early_init_fdt_scan_reserved_mem(void)
{
+ if (!initial_boot_params)
+ return;
+
of_scan_flat_dt(__fdt_scan_reserved_mem, NULL);
fdt_init_reserved_mem();
}
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation