On Thu, May 14, 2020 at 02:38:38PM +0300, Oded Gabbay wrote:
> On Tue, May 12, 2020 at 9:12 AM Daniel Vetter <daniel.vetter(a)ffwll.ch> wrote:
> >
> > On Tue, May 12, 2020 at 4:14 AM Dave Airlie <airlied(a)gmail.com> wrote:
> > >
> > > On Mon, 11 May 2020 at 19:37, Oded Gabbay <oded.gabbay(a)gmail.com> wrote:
> > > >
> > > > On Mon, May 11, 2020 at 12:11 PM Daniel Vetter <daniel.vetter(a)ffwll.ch> wrote:
> > > > >
> > > > > It's the default.
> > > > Thanks for catching that.
> > > >
> > > > >
> > > > > Also so much for "we're not going to tell the graphics people how to
> > > > > review their code"; dma_fence is a pretty core piece of gpu driver
> > > > > infrastructure. And it's very much uapi relevant, including piles of
> > > > > corresponding userspace protocols and libraries for how to pass these
> > > > > around.
> > > > >
> > > > > Would be great if habanalabs would not use this (from a quick look
> > > > > it's not needed at all), since open sourcing the userspace and playing
> > > > > by the usual rules isn't on the table. If that's not possible (because
> > > > > it's actually using the uapi part of dma_fence to interact with gpu
> > > > > drivers) then we have exactly what everyone promised we'd want to
> > > > > avoid.
> > > >
> > > > We don't use the uapi parts; we are currently only using the fencing and
> > > > signaling ability of this module inside our kernel code. But maybe I
> > > > didn't understand what you are requesting. You want us *not* to use this
> > > > well-written piece of kernel code because it is only used by graphics
> > > > drivers?
> > > > I'm sorry, but I don't get this argument, if this is indeed what you meant.
> > >
> > > We would rather that drivers using a feature which depends on correct
> > > userspace implementations of that feature have a userspace that is open
> > > source and auditable.
> > >
> > > Fencing is tricky, cross-device fencing is really tricky, and having
> > > the ability for a closed userspace component to mess up other people's
> > > drivers (think i915 sharing fences with a closed habana userspace)
> > > decreases the ability to debug things.
> > >
> > > Ideally we wouldn't offer users known untested/broken scenarios, so
> > > yes we'd prefer that drivers that intend to expose a userspace fencing
> > > api around dma-fence would adhere to the rules of the gpu drivers.
> > >
> > > I'm not saying you have to drop using dma-fence, but if you move towards
> > > cross-device stuff I believe other drivers would be correct in
> > > refusing to interact with fences from here.
> >
> > The flip side is if you only used dma-fence.c "because it's there",
> > and not because it comes with an uapi attached and a cross-driver
> > kernel internal contract for how to interact with gpu drivers, then
> > there's really not much point in using it. It's a custom-rolled
> > wait_queue/event thing, that's all. Without the gpu uapi and gpu
> > cross-driver contract it would be much cleaner to just use wait_queue
> > directly, and that's a construct all kernel developers understand, not
> > just gpu folks. From a quick look at least habanalabs doesn't use any
> > of these uapi/cross-driver/gpu bits.
> > -Daniel
>
> Hi Daniel,
> I want to say explicitly that we don't use the dma-buf uapi parts, nor
> do we intend to use them to communicate with any GPU device. We only use
> it as a simple completion mechanism because it was convenient to use.
> I do understand that I can exchange that mechanism for a simpler one, and
> I will add an internal task to do it (albeit not at a very high
> priority) and upstream it; it's just that it is part of our data path,
> so we need to thoroughly validate it first.
Sounds good.
Wrt merging this patch here, can you include that in one of your next
pulls? Or should I toss it entirely, waiting for you to remove dma_fence
outright?
Thanks, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
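A minimal sketch of the kind of plain completion-based mechanism discussed above, i.e. signalling command-submission completion without dma_fence (hypothetical code, not the actual habanalabs driver; all names are made up):
#include <linux/completion.h>
#include <linux/jiffies.h>
#include <linux/types.h>
/* Hypothetical sketch: a plain completion that the IRQ handler signals
 * when a command submission finishes. No cross-driver contract, no uapi. */
struct hl_cs_done {
	struct completion done;
	int error;
};
static inline void hl_cs_done_init(struct hl_cs_done *cs)
{
	init_completion(&cs->done);
	cs->error = 0;
}
/* called from the completion interrupt handler */
static inline void hl_cs_done_signal(struct hl_cs_done *cs, int error)
{
	cs->error = error;
	complete_all(&cs->done);
}
/* called from the wait ioctl; returns remaining jiffies, 0 on timeout,
 * or -ERESTARTSYS if interrupted by a signal */
static inline long hl_cs_done_wait(struct hl_cs_done *cs, u64 timeout_us)
{
	return wait_for_completion_interruptible_timeout(&cs->done,
					usecs_to_jiffies(timeout_us));
}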
Do it unconditionally; there's a separate peek function,
dma_fence_is_signaled(), which can be called from atomic context.
v2: Consensus calls for an unconditional might_sleep (Chris,
Christian)
Full audit:
- dma-fence.h: Uses MAX_SCHEDULE_TIMEOUT, good chance this sleeps
- dma-resv.c: Timeout always at least 1
- st-dma-fence.c: Safe to sleep in testcases
- amdgpu_cs.c: Both callers are for variants of the wait ioctl
- amdgpu_device.c: Two callers in vram recover code, both right next
to mutex_lock.
- amdgpu_vm.c: Use in the vm_wait ioctl, next to _reserve/unreserve
- remaining functions in amdgpu: All for test_ib implementations for
various engines, caller for that looks all safe (debugfs, driver
load, reset)
- etnaviv: another wait ioctl
- habanalabs: another wait ioctl
- nouveau_fence.c: hardcoded 15*HZ ... glorious
- nouveau_gem.c: hardcoded 2*HZ ... so not even super consistent, but
this one does have a WARN_ON :-/ At least this one is only a
fallback path for when kmalloc fails. Maybe this should be put onto
some worker list instead of a work item per unmap ...
- i915/selftests: Hardcoded HZ / 4 or HZ / 8
- i915/gt/selftests: Going up the callchain looks safe looking at
nearby callers
- i915/gt/intel_gt_requests.c: Wrapped in a mutex_lock
- i915/gem/i915_gem_wait.c: The i915-version which is called instead
for i915 fences already has a might_sleep() annotation, so all good
Cc: Alex Deucher <alexander.deucher(a)amd.com>
Cc: Lucas Stach <l.stach(a)pengutronix.de>
Cc: Jani Nikula <jani.nikula(a)linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen(a)linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Cc: Ben Skeggs <bskeggs(a)redhat.com>
Cc: "VMware Graphics" <linux-graphics-maintainer(a)vmware.com>
Cc: Oded Gabbay <oded.gabbay(a)gmail.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Cc: linux-rdma(a)vger.kernel.org
Cc: amd-gfx(a)lists.freedesktop.org
Cc: intel-gfx(a)lists.freedesktop.org
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Christian König <christian.koenig(a)amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
---
drivers/dma-buf/dma-fence.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 90edf2b281b0..656e9ac2d028 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -208,6 +208,8 @@ dma_fence_wait_timeout(struct dma_fence *fence, bool intr, signed long timeout)
if (WARN_ON(timeout < 0))
return -EINVAL;
+ might_sleep();
+
trace_dma_fence_wait_start(fence);
if (fence->ops->wait)
ret = fence->ops->wait(fence, intr, timeout);
--
2.26.2
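For context, with CONFIG_DEBUG_ATOMIC_SLEEP enabled the new annotation turns a caller like the following hypothetical one into a loud warning, even when the fence is already signalled and the wait would not actually block:
#include <linux/dma-fence.h>
#include <linux/jiffies.h>
#include <linux/spinlock.h>
/* Hypothetical driver structure, only for illustration */
struct foo_device {
	spinlock_t fence_lock;
};
/* Buggy: waiting under a spinlock is never allowed, but without the
 * might_sleep() annotation this only splats when the wait really blocks. */
static void foo_wait_buggy(struct foo_device *fdev, struct dma_fence *fence)
{
	spin_lock(&fdev->fence_lock);
	dma_fence_wait_timeout(fence, false, msecs_to_jiffies(100));
	spin_unlock(&fdev->fence_lock);
}
/* Atomic context should use the non-blocking peek instead */
static bool foo_fence_done(struct dma_fence *fence)
{
	return dma_fence_is_signaled(fence);
}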
On Fri, May 15, 2020 at 02:07:06PM +0900, David Stevens wrote:
> On Thu, May 14, 2020 at 9:30 PM Daniel Vetter <daniel(a)ffwll.ch> wrote:
> > On Thu, May 14, 2020 at 05:19:40PM +0900, David Stevens wrote:
> > > Sorry for the duplicate reply, didn't notice this until now.
> > >
> > > > Just storing
> > > > the uuid should be doable (assuming this doesn't change during the
> > > > lifetime of the buffer), so no need for a callback.
> > >
> > > Directly storing the uuid doesn't work that well because of
> > > synchronization issues. The uuid needs to be shared between multiple
> > > virtio devices with independent command streams, so to prevent races
> > > between importing and exporting, the exporting driver can't share the
> > > uuid with other drivers until it knows that the device has finished
> > > registering the uuid. That requires a round trip to the device and
> > > back. Using a callback allows the latency from that round-trip
> > > registration to be hidden.
> >
> > Uh, that means you actually do something and there's locking involved.
> > That makes stuff more complicated; invariant attributes are generally a
> > lot easier. Would registering that uuid unconditionally, and blocking
> > when you're exporting, not work?
>
> Registering the id at creation and blocking in gem export is feasible,
> but it doesn't work well for systems with a centralized buffer
> allocator that doesn't support batch allocations (e.g. gralloc). In
> such a system, the round trip latency would almost certainly be
> included in the buffer allocation time. At least on the system I'm
> working on, I suspect that would add tens of milliseconds of startup
> latency to video pipelines (although I haven't benchmarked the
> difference). Doing the blocking as late as possible means most or all
> of the latency can be hidden behind other pipeline setup work.
>
> In terms of complexity, I think the synchronization would be basically
> the same in either approach, just in different locations. All it would
> do is alleviate the need for a callback to fetch the UUID.
Hm ok. I guess if we go with the older patch, where this is all a lot more
just code in virtio, adding an extra function to allocate the uuid sounds
fine. Then synchronization is entirely up to the virtio subsystem and not
a dma-buf problem (and hence not mine). You can use dma_resv_lock or so,
but there's no need to. But with callbacks potentially going both ways things
always get a bit interesting wrt locking - this is what makes peer2peer
dma-buf so painful right now. Hence I'd like to avoid that if possible, at
least at the dma-buf level. As for the virtio code, I don't mind what you do there :-)
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
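A rough exporter-side sketch of the deferral David describes, with the registration round trip kicked off at export time and only waited for in the get_uuid() callback (all names here are hypothetical, not the proposed patches):
#include <linux/completion.h>
#include <linux/dma-buf.h>
#include <linux/uuid.h>
/* Hypothetical exporter state: the uuid registration command is queued at
 * export time; the virtqueue ack handler calls complete_all(). */
struct virtgpu_dmabuf_priv {
	uuid_t uuid;
	struct completion uuid_registered;
};
static int virtgpu_dmabuf_get_uuid(struct dma_buf *buf, uuid_t *uuid)
{
	struct virtgpu_dmabuf_priv *priv = buf->priv;
	/* block until the device has acked the registration */
	wait_for_completion(&priv->uuid_registered);
	uuid_copy(uuid, &priv->uuid);
	return 0;
}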
On Wed, Mar 11, 2020 at 12:20 PM David Stevens <stevensd(a)chromium.org> wrote:
>
> This change adds a new dma-buf operation that allows dma-bufs to be used
> by virtio drivers to share exported objects. The new operation allows
> the importing driver to query the exporting driver for the UUID which
> identifies the underlying exported object.
>
> Signed-off-by: David Stevens <stevensd(a)chromium.org>
Adding Tomasz Figa, I've discussed this with him at ELCE last year, I
think. Just to make sure.
Bunch of things:
- obviously we need the users of this in a few drivers, can't really
review anything stand-alone
- adding very specific ops to the generic interface is rather awkward,
eventually everyone wants that and we end up in a mess. I think the
best solution here would be if we create a struct virtio_dma_buf which
subclasses dma-buf, add a (hopefully safe) runtime upcasting
function, and then a virtio_dma_buf_get_uuid() function. Just storing
the uuid should be doable (assuming this doesn't change during the
lifetime of the buffer), so no need for a callback.
- for the runtime upcasting the usual approach is to check the ->ops
pointer. Which means that would need to be the same for all virtio
dma_bufs, which might get a bit awkward. But I'd really prefer we not
add allocator specific stuff like this to dma-buf.
-Daniel
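A rough sketch of the suggested subclassing with ->ops based upcasting and a stored uuid (hypothetical code, not a finished API):
#include <linux/dma-buf.h>
#include <linux/uuid.h>
/* Hypothetical: every virtio exporter uses the same dma_buf_ops table, so
 * the ->ops pointer doubles as the runtime type marker for the upcast. */
extern const struct dma_buf_ops virtio_dma_buf_ops;
/* stored in dma_buf->priv by the virtio exporter */
struct virtio_dma_buf_info {
	uuid_t uuid;
};
static bool is_virtio_dma_buf(struct dma_buf *buf)
{
	return buf->ops == &virtio_dma_buf_ops;
}
static int virtio_dma_buf_get_uuid(struct dma_buf *buf, uuid_t *uuid)
{
	struct virtio_dma_buf_info *info;
	if (!is_virtio_dma_buf(buf))
		return -EINVAL;
	info = buf->priv;
	uuid_copy(uuid, &info->uuid);
	return 0;
}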
> ---
> drivers/dma-buf/dma-buf.c | 12 ++++++++++++
> include/linux/dma-buf.h | 18 ++++++++++++++++++
> 2 files changed, 30 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index d4097856c86b..fa5210ba6aaa 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -1158,6 +1158,18 @@ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
> }
> EXPORT_SYMBOL_GPL(dma_buf_vunmap);
>
> +int dma_buf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid)
> +{
> + if (WARN_ON(!dmabuf) || !uuid)
> + return -EINVAL;
> +
> + if (!dmabuf->ops->get_uuid)
> + return -ENODEV;
> +
> + return dmabuf->ops->get_uuid(dmabuf, uuid);
> +}
> +EXPORT_SYMBOL_GPL(dma_buf_get_uuid);
> +
> #ifdef CONFIG_DEBUG_FS
> static int dma_buf_debug_show(struct seq_file *s, void *unused)
> {
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index abf5459a5b9d..00758523597d 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -251,6 +251,21 @@ struct dma_buf_ops {
>
> void *(*vmap)(struct dma_buf *);
> void (*vunmap)(struct dma_buf *, void *vaddr);
> +
> + /**
> + * @get_uuid
> + *
> + * This is called by dma_buf_get_uuid to get the UUID which identifies
> + * the buffer to virtio devices.
> + *
> + * This callback is optional.
> + *
> + * Returns:
> + *
> + * 0 on success or a negative error code on failure. On success uuid
> + * will be populated with the buffer's UUID.
> + */
> + int (*get_uuid)(struct dma_buf *dmabuf, uuid_t *uuid);
> };
>
> /**
> @@ -444,4 +459,7 @@ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
> unsigned long);
> void *dma_buf_vmap(struct dma_buf *);
> void dma_buf_vunmap(struct dma_buf *, void *vaddr);
> +
> +int dma_buf_get_uuid(struct dma_buf *dmabuf, uuid_t *uuid);
> +
> #endif /* __DMA_BUF_H__ */
> --
> 2.25.1.481.gfbce0eb801-goog
>
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
On Thu, May 14, 2020 at 05:19:40PM +0900, David Stevens wrote:
> Sorry for the duplicate reply, didn't notice this until now.
>
> > Just storing
> > the uuid should be doable (assuming this doesn't change during the
> > lifetime of the buffer), so no need for a callback.
>
> Directly storing the uuid doesn't work that well because of
> synchronization issues. The uuid needs to be shared between multiple
> virtio devices with independent command streams, so to prevent races
> between importing and exporting, the exporting driver can't share the
> uuid with other drivers until it knows that the device has finished
> registering the uuid. That requires a round trip to the device and
> back. Using a callback allows the latency from that round-trip
> registration to be hidden.
Uh, that means you actually do something and there's locking involved.
That makes stuff more complicated; invariant attributes are generally a
lot easier. Would registering that uuid unconditionally, and blocking
when you're exporting, not work?
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Thu, May 14, 2020 at 11:08:52AM +0900, David Stevens wrote:
> On Thu, May 14, 2020 at 12:45 AM Daniel Vetter <daniel(a)ffwll.ch> wrote:
> > On Wed, Mar 11, 2020 at 12:20 PM David Stevens <stevensd(a)chromium.org> wrote:
> > >
> > > This change adds a new dma-buf operation that allows dma-bufs to be used
> > > by virtio drivers to share exported objects. The new operation allows
> > > the importing driver to query the exporting driver for the UUID which
> > > identifies the underlying exported object.
> > >
> > > Signed-off-by: David Stevens <stevensd(a)chromium.org>
> >
> > Adding Tomasz Figa, I've discussed this with him at ELCE last year, I
> > think. Just to make sure.
> >
> > Bunch of things:
> > - obviously we need the users of this in a few drivers, can't really
> > review anything stand-alone
>
> Here is a link to the usage of this feature by the virtio-video driver
> that is currently under development:
> https://markmail.org/thread/j4xlqaaim266qpks
>
> > - adding very specific ops to the generic interface is rather awkward,
> > eventually everyone wants that and we end up in a mess. I think the
> > best solution here would be if we create a struct virtio_dma_buf which
> > subclasses dma-buf, add a (hopefully safe) runtime upcasting
> > function, and then a virtio_dma_buf_get_uuid() function. Just storing
> > the uuid should be doable (assuming this doesn't change during the
> > lifetime of the buffer), so no need for a callback.
>
> So you would prefer a solution similar to the original version of this
> patchset? https://markmail.org/message/z7if4u56q5fmaok4
yup.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, May 13, 2020 at 05:40:26PM +0530, Charan Teja Kalla wrote:
>
> Thank you Greg for the comments.
> On 5/12/2020 2:22 PM, Greg KH wrote:
> > On Fri, May 08, 2020 at 12:11:03PM +0530, Charan Teja Reddy wrote:
> >> The following race occurs while accessing the dmabuf object exported as
> >> file:
> >> P1                              P2
> >> dma_buf_release()               dmabuffs_dname()
> >>                                 [say lsof reading /proc/<P1 pid>/fd/<num>]
> >>
> >>                                 read dmabuf stored in dentry->d_fsdata
> >> Free the dmabuf object
> >>                                 Start accessing the dmabuf structure
> >>
> >> In the above description, the dmabuf object freed in P1 is being
> >> accessed from P2, which results in a use-after-free. Below is
> >> the dump stack reported.
> >>
> >> We are reading the dmabuf object stored in dentry->d_fsdata, but
> >> there is no binding between the dentry and the dmabuf, which means that
> >> the dmabuf can be freed while it is being read from ->d_fsdata and
> >> in use. Reviews on the V1 patch say that protecting the in-use dmabuf
> >> with an extra refcount is not a viable solution, as the exported dmabuf
> >> is already under the file's refcount and keeping multiple refcounts on
> >> the same object coordinated is not possible.
> >>
> >> As we are reading the dmabuf in ->d_fsdata just to get the user-passed
> >> name, we can directly store the name in d_fsdata and thus avoid
> >> reading the dmabuf altogether.
> >>
> >> Call Trace:
> >> kasan_report+0x12/0x20
> >> __asan_report_load8_noabort+0x14/0x20
> >> dmabuffs_dname+0x4f4/0x560
> >> tomoyo_realpath_from_path+0x165/0x660
> >> tomoyo_get_realpath
> >> tomoyo_check_open_permission+0x2a3/0x3e0
> >> tomoyo_file_open
> >> tomoyo_file_open+0xa9/0xd0
> >> security_file_open+0x71/0x300
> >> do_dentry_open+0x37a/0x1380
> >> vfs_open+0xa0/0xd0
> >> path_openat+0x12ee/0x3490
> >> do_filp_open+0x192/0x260
> >> do_sys_openat2+0x5eb/0x7e0
> >> do_sys_open+0xf2/0x180
> >>
> >> Fixes: bb2bb9030425 ("dma-buf: add DMA_BUF_SET_NAME ioctls")
> >> Reported-by: syzbot+3643a18836bce555bff6(a)syzkaller.appspotmail.com
> >> Cc: <stable(a)vger.kernel.org> [5.3+]
> >> Signed-off-by: Charan Teja Reddy <charante(a)codeaurora.org>
> >> ---
> >>
> >> Changes in v2:
> >>
> >> - Pass the user passed name in ->d_fsdata instead of dmabuf
> >> - Improve the commit message
> >>
> >> Changes in v1: (https://patchwork.kernel.org/patch/11514063/)
> >>
> >> drivers/dma-buf/dma-buf.c | 17 ++++++++++-------
> >> 1 file changed, 10 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> >> index 01ce125..0071f7d 100644
> >> --- a/drivers/dma-buf/dma-buf.c
> >> +++ b/drivers/dma-buf/dma-buf.c
> >> @@ -25,6 +25,7 @@
> >> #include <linux/mm.h>
> >> #include <linux/mount.h>
> >> #include <linux/pseudo_fs.h>
> >> +#include <linux/dcache.h>
> >>
> >> #include <uapi/linux/dma-buf.h>
> >> #include <uapi/linux/magic.h>
> >> @@ -40,15 +41,13 @@ struct dma_buf_list {
> >>
> >> static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
> >> {
> >> - struct dma_buf *dmabuf;
> >> char name[DMA_BUF_NAME_LEN];
> >> size_t ret = 0;
> >>
> >> - dmabuf = dentry->d_fsdata;
> >> - dma_resv_lock(dmabuf->resv, NULL);
> >> - if (dmabuf->name)
> >> - ret = strlcpy(name, dmabuf->name, DMA_BUF_NAME_LEN);
> >> - dma_resv_unlock(dmabuf->resv);
> >> + spin_lock(&dentry->d_lock);
> >
> > Are you sure this lock always protects d_fsdata?
>
> I think yes. In dma-buf.c I have to make sure that d_fsdata is
> always accessed under d_lock, so it will be protected. (In this posted patch
> there is one place, in dma_buf_set_name(), that is missed; I will update this
> in V3.)
>
> >
> >> + if (dentry->d_fsdata)
> >> + ret = strlcpy(name, dentry->d_fsdata, DMA_BUF_NAME_LEN);
> >> + spin_unlock(&dentry->d_lock);
> >>
> >> return dynamic_dname(dentry, buffer, buflen, "/%s:%s",
> >> dentry->d_name.name, ret > 0 ? name : "");
> >
> > If the above check fails the name will be what? How could d_name.name
> > be valid but d_fsdata not be valid?
>
> If the check fails, an empty string "" is appended to the name by the
> code (ret > 0 ? name : ""; ret is initialized to zero). Thus the name
> string will look like "/dmabuf:".
So multiple objects can have the same "name" if this happens to multiple
ones at once?
> Regarding the validity of d_fsdata, we set the dmabuf's
> dentry->d_fsdata to NULL in dma_buf_release(), so it can become invalid if
> that dmabuf is in the free path.
Why are we allowing the name to be set if the dmabuf is on the free path
at all? Shouldn't that be the real fix here?
> >> @@ -80,12 +79,16 @@ static int dma_buf_fs_init_context(struct fs_context *fc)
> >> static int dma_buf_release(struct inode *inode, struct file *file)
> >> {
> >> struct dma_buf *dmabuf;
> >> + struct dentry *dentry = file->f_path.dentry;
> >>
> >> if (!is_dma_buf_file(file))
> >> return -EINVAL;
> >>
> >> dmabuf = file->private_data;
> >>
> >> + spin_lock(&dentry->d_lock);
> >> + dentry->d_fsdata = NULL;
> >> + spin_unlock(&dentry->d_lock);
> >> BUG_ON(dmabuf->vmapping_counter);
> >>
> >> /*
> >> @@ -343,6 +346,7 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
> >> }
> >> kfree(dmabuf->name);
> >> dmabuf->name = name;
> >> + dmabuf->file->f_path.dentry->d_fsdata = name;
> >
> > You are just changing the use of d_fsdata from being a pointer to the
> > dmabuf to being a pointer to the name string? What's to keep that name
> > string around and not have the same reference counting issues that the
> > dmabuf structure itself has? Who frees that string memory?
> >
>
> Yes, I am just storing the name string in d_fsdata in place of the
> dmabuf, and this gets rid of any extra refcount requirement.
> The user-passed name carried in d_fsdata is copied to a
> local buffer in dmabuffs_dname() under spin_lock(d_lock), and the same
> d_fsdata is set to NULL (under the d_lock only) when that dmabuf is in
> the release path. So, when d_fsdata is NULL, the name string is not accessed
> from dmabuffs_dname(), thus an extra refcount is not required.
>
> The string memory, stored in dmabuf->name, is released from
> dma_buf_release(): it first sets d_fsdata = NULL and
> then frees dmabuf->name.
>
> However, from your comments I have realized that there is a race in this
> patch on the name string between dma_buf_set_name() and
> dmabuffs_dname(). But if the idea of passing the name string in place of
> the dmabuf in d_fsdata looks fine, I can fix this in the next patch.
I'll leave that to the dmabuf authors/maintainers, but it feels odd to
me...
thanks,
greg k-h
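For illustration, the d_lock-protected update in dma_buf_set_name() that is mentioned above for V3 could look roughly like this (a sketch in the context of drivers/dma-buf/dma-buf.c, not the posted patch):
static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf)
{
	/* sketch only: attachment checks and other error handling omitted */
	struct dentry *dentry = dmabuf->file->f_path.dentry;
	char *name = strndup_user(buf, DMA_BUF_NAME_LEN);
	if (IS_ERR(name))
		return PTR_ERR(name);
	dma_resv_lock(dmabuf->resv, NULL);
	/* swap the pointer under d_lock so dmabuffs_dname() never copies
	 * from a string that is about to be freed */
	spin_lock(&dentry->d_lock);
	dentry->d_fsdata = name;
	spin_unlock(&dentry->d_lock);
	kfree(dmabuf->name);
	dmabuf->name = name;
	dma_resv_unlock(dmabuf->resv);
	return 0;
}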
Dear All,
During the Exynos DRM GEM rework and while fixing the issues in the
drm_prime_sg_to_page_addr_arrays() function [1], I've noticed that most
drivers in the DRM framework incorrectly use the nents and orig_nents
entries of struct sg_table.
For most DMA-mapping implementations, exchanging those two entries or
using nents for all loops over the scatterlist is harmless, because they
both have the same value. There exist, however, DMA-mapping
implementations for which such incorrect usage breaks things. The nents
returned by dma_map_sg() might be lower than the nents passed as its
parameter, and this is perfectly fine. The DMA framework or IOMMU is allowed
to join consecutive chunks while mapping if such an operation is supported
by the underlying HW (bus, bridge, IOMMU, etc). An example of a case
where dma_map_sg() might return 1 'DMA' chunk for 4 'physical' pages
is described here [2].
The DMA-mapping framework documentation [3] states that dma_map_sg()
returns the number of entries created in the DMA address space.
However, the subsequent calls to dma_sync_sg_for_{device,cpu} and
dma_unmap_sg must be made with the original number of entries passed to
dma_map_sg. The common pattern in DRM drivers was to assign the
dma_map_sg() return value to sg_table->nents and use that value for
the subsequent calls to the dma_sync_sg_* or dma_unmap_sg functions. The
code also iterated nents times to access the pages stored in the
processed scatterlist, while it should use orig_nents as the number of
page entries.
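In pseudo-driver form, the incorrect pattern described above looks roughly like this (illustrative only):
/* Illustrative only: the broken pattern found in many drivers */
sgt->nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_TO_DEVICE);
/* ... */
for_each_sg(sgt->sgl, sg, sgt->nents, i)	/* wrong: CPU pages need orig_nents */
	pages[i] = sg_page(sg);
/* ... */
dma_unmap_sg(dev, sgt->sgl, sgt->nents, DMA_TO_DEVICE);	/* wrong: must be orig_nents */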
I've tried to identify all such incorrect usage of sg_table->nents, and
this is the result of my research. It looks like the incorrect pattern has
been copied across many drivers, mainly in the DRM subsystem. Too bad that
in most cases it even worked correctly, as long as the system used a simple,
linear DMA-mapping implementation, for which swapping nents and orig_nents
doesn't make any difference. To avoid similar issues in the future, I've
introduced common wrappers for DMA-mapping calls, which operate directly
on sg_table objects. I've also added wrappers for iterating over the
scatterlists stored in sg_table objects and applied them where
possible. This, together with some common DRM prime helpers, allowed me
to almost completely get rid of nents/orig_nents usage in the drivers. I hope
that such a change makes the code more robust, easier to follow and copy/paste
safe.
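With the wrappers introduced in this series, typical driver usage then looks roughly like the following (illustrative only; dev, sgt, pages and the hw_program_entry() helper are made-up driver context):
struct sg_page_iter piter;
struct scatterlist *sg;
unsigned int i, n = 0;
int ret;
ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
if (ret)
	return ret;
/* DMA side: walks the sgt->nents entries created by the mapping */
for_each_sgtable_dma_sg(sgt, sg, i)
	hw_program_entry(sg_dma_address(sg), sg_dma_len(sg));
/* CPU side: walks the pages of the sgt->orig_nents original entries */
for_each_sgtable_page(sgt, &piter, 0)
	pages[n++] = sg_page_iter_page(&piter);
dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);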
The biggest TODO is the DRM/i915 driver, and I don't feel brave enough to fix
it fully. The driver creatively uses sg_table->orig_nents to store the
size of the allocated scatterlist and ignores the number of entries
returned by the dma_map_sg function. In this patchset I only fixed the
sg_table objects exported by dmabuf-related functions. I hope that I
didn't break anything there.
Patches are based on top of Linux next-20200511.
Best regards,
Marek Szyprowski
References:
[1] https://lkml.org/lkml/2020/3/27/555
[2] https://lkml.org/lkml/2020/3/29/65
[3] Documentation/DMA-API-HOWTO.txt
Changelog:
v4:
- added for_each_sgtable_* wrappers and applied where possible
- added drm_prime_get_contiguous_size() and applied where possible
- applied drm_prime_sg_to_page_addr_arrays() where possible to remove page
extraction from sg_table objects
- added documentation for the introduced wrappers
- improved patches description a bit
v3: https://lore.kernel.org/dri-devel/20200505083926.28503-1-m.szyprowski@samsu…
- introduce dma_*_sgtable_* wrappers and use them in all patches
v2: https://lore.kernel.org/linux-iommu/c01c9766-9778-fd1f-f36e-2dc7bd376ba4@ar…
- dropped most of the changes to drm/i915
- added fixes for rcar-du, xen, media and ion
- fixed a few issues pointed by kbuild test robot
- added wide cc: list for each patch
v1: https://lore.kernel.org/linux-iommu/c01c9766-9778-fd1f-f36e-2dc7bd376ba4@ar…
- initial version
Patch summary:
Marek Szyprowski (38):
dma-mapping: add generic helpers for mapping sgtable objects
scatterlist: add generic wrappers for iterating over sgtable objects
iommu: add generic helper for mapping sgtable objects
drm: prime: add common helper to check scatterlist contiguity
drm: prime: use sgtable iterators in
drm_prime_sg_to_page_addr_arrays()
drm: core: fix common struct sg_table related issues
drm: amdgpu: fix common struct sg_table related issues
drm: armada: fix common struct sg_table related issues
drm: etnaviv: fix common struct sg_table related issues
drm: exynos: use common helper for a scatterlist contiguity check
drm: exynos: fix common struct sg_table related issues
drm: i915: fix common struct sg_table related issues
drm: lima: fix common struct sg_table related issues
drm: mediatek: use common helper for a scatterlist contiguity check
drm: mediatek: use common helper for extracting pages array
drm: msm: fix common struct sg_table related issues
drm: omapdrm: use common helper for extracting pages array
drm: omapdrm: fix common struct sg_table related issues
drm: panfrost: fix common struct sg_table related issues
drm: radeon: fix common struct sg_table related issues
drm: rockchip: use common helper for a scatterlist contiguity check
drm: rockchip: fix common struct sg_table related issues
drm: tegra: fix common struct sg_table related issues
drm: v3d: fix common struct sg_table related issues
drm: virtio: fix common struct sg_table related issues
drm: vmwgfx: fix common struct sg_table related issues
xen: gntdev: fix common struct sg_table related issues
drm: host1x: fix common struct sg_table related issues
drm: rcar-du: fix common struct sg_table related issues
dmabuf: fix common struct sg_table related issues
staging: ion: remove dead code
staging: ion: fix common struct sg_table related issues
staging: tegra-vde: fix common struct sg_table related issues
misc: fastrpc: fix common struct sg_table related issues
rapidio: fix common struct sg_table related issues
samples: vfio-mdev/mbochs: fix common struct sg_table related issues
media: pci: fix common ALSA DMA-mapping related codes
videobuf2: use sgtable-based scatterlist wrappers
drivers/dma-buf/heaps/heap-helpers.c | 13 ++--
drivers/dma-buf/udmabuf.c | 7 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 9 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 8 +-
drivers/gpu/drm/armada/armada_gem.c | 12 +--
drivers/gpu/drm/drm_cache.c | 2 +-
drivers/gpu/drm/drm_gem_cma_helper.c | 23 +-----
drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++--
drivers/gpu/drm/drm_prime.c | 86 ++++++++++++----------
drivers/gpu/drm/etnaviv/etnaviv_gem.c | 12 ++-
drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 13 +---
drivers/gpu/drm/exynos/exynos_drm_g2d.c | 10 +--
drivers/gpu/drm/exynos/exynos_drm_gem.c | 23 +-----
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 11 +--
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c | 7 +-
drivers/gpu/drm/lima/lima_gem.c | 11 ++-
drivers/gpu/drm/lima/lima_vm.c | 5 +-
drivers/gpu/drm/mediatek/mtk_drm_gem.c | 34 ++-------
drivers/gpu/drm/msm/msm_gem.c | 13 ++--
drivers/gpu/drm/msm/msm_gpummu.c | 14 ++--
drivers/gpu/drm/msm/msm_iommu.c | 2 +-
drivers/gpu/drm/omapdrm/omap_gem.c | 20 ++---
drivers/gpu/drm/panfrost/panfrost_gem.c | 4 +-
drivers/gpu/drm/panfrost/panfrost_mmu.c | 7 +-
drivers/gpu/drm/radeon/radeon_ttm.c | 11 ++-
drivers/gpu/drm/rcar-du/rcar_du_vsp.c | 3 +-
drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 42 +++--------
drivers/gpu/drm/tegra/gem.c | 27 +++----
drivers/gpu/drm/tegra/plane.c | 15 ++--
drivers/gpu/drm/v3d/v3d_mmu.c | 17 ++---
drivers/gpu/drm/virtio/virtgpu_object.c | 36 +++++----
drivers/gpu/drm/virtio/virtgpu_vq.c | 12 ++-
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 17 +----
drivers/gpu/host1x/job.c | 22 ++----
.../media/common/videobuf2/videobuf2-dma-contig.c | 41 +++++------
drivers/media/common/videobuf2/videobuf2-dma-sg.c | 32 +++-----
drivers/media/common/videobuf2/videobuf2-vmalloc.c | 12 +--
drivers/media/pci/cx23885/cx23885-alsa.c | 2 +-
drivers/media/pci/cx25821/cx25821-alsa.c | 2 +-
drivers/media/pci/cx88/cx88-alsa.c | 2 +-
drivers/media/pci/saa7134/saa7134-alsa.c | 2 +-
drivers/media/platform/vsp1/vsp1_drm.c | 8 +-
drivers/misc/fastrpc.c | 4 +-
drivers/rapidio/devices/rio_mport_cdev.c | 8 +-
drivers/staging/android/ion/ion.c | 25 +++----
drivers/staging/android/ion/ion.h | 1 -
drivers/staging/android/ion/ion_heap.c | 53 ++++---------
drivers/staging/android/ion/ion_system_heap.c | 2 +-
drivers/staging/media/tegra-vde/iommu.c | 4 +-
drivers/xen/gntdev-dmabuf.c | 13 ++--
include/drm/drm_prime.h | 2 +
include/linux/dma-mapping.h | 79 ++++++++++++++++++++
include/linux/iommu.h | 16 ++++
include/linux/scatterlist.h | 50 ++++++++++++-
samples/vfio-mdev/mbochs.c | 3 +-
56 files changed, 452 insertions(+), 477 deletions(-)
--
1.9.1