On Wed, Oct 14, 2020 at 09:16:01AM -0700, Jianxin Xiong wrote:
> The dma-buf API has been used under the assumption that the sg lists
> returned from dma_buf_map_attachment() are fully page aligned. Lots of
> stuff can break otherwise all over the place. Clarify this in the
> documentation and add a check when DMA API debug is enabled.
>
> Signed-off-by: Jianxin Xiong <jianxin.xiong(a)intel.com>
lgtm, thanks for creating this and giving it a spin.
I'll queue this up in drm-misc-next for 5.11, should show up in linux-next
after the merge window is closed.
Cheers, Daniel
> ---
> drivers/dma-buf/dma-buf.c | 21 +++++++++++++++++++++
> include/linux/dma-buf.h | 3 ++-
> 2 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 844967f..7309c83 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -851,6 +851,9 @@ void dma_buf_unpin(struct dma_buf_attachment *attach)
> * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR
> * on error. May return -EINTR if it is interrupted by a signal.
> *
> + * On success, the DMA addresses and lengths in the returned scatterlist are
> + * PAGE_SIZE aligned.
> + *
> * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that
> * the underlying backing storage is pinned for as long as a mapping exists,
> * therefore users/importers should not hold onto a mapping for undue amounts of
> @@ -904,6 +907,24 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach,
> attach->dir = direction;
> }
>
> +#ifdef CONFIG_DMA_API_DEBUG
> + {
> + struct scatterlist *sg;
> + u64 addr;
> + int len;
> + int i;
> +
> + for_each_sgtable_dma_sg(sg_table, sg, i) {
> + addr = sg_dma_address(sg);
> + len = sg_dma_len(sg);
> + if (!PAGE_ALIGNED(addr) || !PAGE_ALIGNED(len)) {
> + pr_debug("%s: addr %llx or len %x is not page aligned!\n",
> + __func__, addr, len);
> + }
> + }
> + }
> +#endif /* CONFIG_DMA_API_DEBUG */
> +
> return sg_table;
> }
> EXPORT_SYMBOL_GPL(dma_buf_map_attachment);
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index a2ca294e..4a5fa70 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -145,7 +145,8 @@ struct dma_buf_ops {
> *
> * A &sg_table scatter list of or the backing storage of the DMA buffer,
> * already mapped into the device address space of the &device attached
> - * with the provided &dma_buf_attachment.
> + * with the provided &dma_buf_attachment. The addresses and lengths in
> + * the scatter list are PAGE_SIZE aligned.
> *
> * On failure, returns a negative error value wrapped into a pointer.
> * May also return -EINTR when a signal was received while being
> --
> 1.8.3.1
>
> _______________________________________________
> dri-devel mailing list
> dri-devel(a)lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
From: Rob Clark <robdclark(a)chromium.org>
This doesn't remove *all* the struct_mutex, but it covers the worst
of it, ie. shrinker/madvise/free/retire. The submit path still uses
struct_mutex, but it still needs *something* to serialize a portion of
the submit path, and lock_stat mostly just shows the lock contention
there being with other submits. And there are a few other bits of
struct_mutex usage in less critical paths (debugfs, etc). But this
seems like a reasonable step in the right direction.
v2: teach lockdep about shrinker locking patterns (danvet) and
convert to obj->resv locking (danvet)
Rob Clark (22):
drm/msm/gem: Add obj->lock wrappers
drm/msm/gem: Rename internal get_iova_locked helper
drm/msm/gem: Move prototypes to msm_gem.h
drm/msm/gem: Add some _locked() helpers
drm/msm/gem: Move locking in shrinker path
drm/msm/submit: Move copy_from_user ahead of locking bos
drm/msm: Do rpm get sooner in the submit path
drm/msm/gem: Switch over to obj->resv for locking
drm/msm: Use correct drm_gem_object_put() in fail case
drm/msm: Drop chatty trace
drm/msm: Move update_fences()
drm/msm: Add priv->mm_lock to protect active/inactive lists
drm/msm: Document and rename preempt_lock
drm/msm: Protect ring->submits with it's own lock
drm/msm: Refcount submits
drm/msm: Remove obj->gpu
drm/msm: Drop struct_mutex from the retire path
drm/msm: Drop struct_mutex in free_object() path
drm/msm: remove msm_gem_free_work
drm/msm: drop struct_mutex in madvise path
drm/msm: Drop struct_mutex in shrinker path
drm/msm: Don't implicit-sync if only a single ring
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 4 +-
drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 12 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 1 +
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 1 +
drivers/gpu/drm/msm/dsi/dsi_host.c | 1 +
drivers/gpu/drm/msm/msm_debugfs.c | 7 +
drivers/gpu/drm/msm/msm_drv.c | 21 +-
drivers/gpu/drm/msm/msm_drv.h | 73 ++-----
drivers/gpu/drm/msm/msm_fbdev.c | 1 +
drivers/gpu/drm/msm/msm_gem.c | 245 ++++++++++------------
drivers/gpu/drm/msm/msm_gem.h | 131 ++++++++++--
drivers/gpu/drm/msm/msm_gem_shrinker.c | 81 +++----
drivers/gpu/drm/msm/msm_gem_submit.c | 154 +++++++++-----
drivers/gpu/drm/msm/msm_gpu.c | 98 +++++----
drivers/gpu/drm/msm/msm_gpu.h | 5 +-
drivers/gpu/drm/msm/msm_ringbuffer.c | 3 +-
drivers/gpu/drm/msm/msm_ringbuffer.h | 13 +-
18 files changed, 459 insertions(+), 396 deletions(-)
--
2.26.2
Am 08.10.20 um 23:49 schrieb John Hubbard:
> On 10/8/20 4:23 AM, Christian König wrote:
>> Add the new vma_set_file() function to allow changing
>> vma->vm_file with the necessary refcount dance.
>>
>> v2: add more users of this.
>>
>> Signed-off-by: Christian König <christian.koenig(a)amd.com>
>> ---
>> drivers/dma-buf/dma-buf.c | 16 +++++-----------
>> drivers/gpu/drm/etnaviv/etnaviv_gem.c | 4 +---
>> drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 3 +--
>> drivers/gpu/drm/i915/gem/i915_gem_mman.c | 4 ++--
>> drivers/gpu/drm/msm/msm_gem.c | 4 +---
>> drivers/gpu/drm/omapdrm/omap_gem.c | 3 +--
>> drivers/gpu/drm/vgem/vgem_drv.c | 3 +--
>> drivers/staging/android/ashmem.c | 5 ++---
>> include/linux/mm.h | 2 ++
>> mm/mmap.c | 16 ++++++++++++++++
>> 10 files changed, 32 insertions(+), 28 deletions(-)
>
> Looks like a nice cleanup. Two comments below.
>
> ...
>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> index 3d69e51f3e4d..c9d5f1a38af3 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> @@ -893,8 +893,8 @@ int i915_gem_mmap(struct file *filp, struct
>> vm_area_struct *vma)
>> * requires avoiding extraneous references to their filp, hence
>> why
>> * we prefer to use an anonymous file for their mmaps.
>> */
>> - fput(vma->vm_file);
>> - vma->vm_file = anon;
>> + vma_set_file(vma, anon);
>> + fput(anon);
>
> That's one fput() too many, isn't it?
No, the other cases were replacing the vm_file with something
pre-allocated and also grabbed a new reference.
But this case here uses the freshly allocated anon file and so
vma_set_file() grabs another extra reference which we need to drop.
The alternative is to just keep it as it is. Opinions?
>
>
> ...
>
>> diff --git a/drivers/staging/android/ashmem.c
>> b/drivers/staging/android/ashmem.c
>> index 10b4be1f3e78..a51dc089896e 100644
>> --- a/drivers/staging/android/ashmem.c
>> +++ b/drivers/staging/android/ashmem.c
>> @@ -450,9 +450,8 @@ static int ashmem_mmap(struct file *file, struct
>> vm_area_struct *vma)
>> vma_set_anonymous(vma);
>> }
>> - if (vma->vm_file)
>> - fput(vma->vm_file);
>> - vma->vm_file = asma->file;
>> + vma_set_file(vma, asma->file);
>> + fput(asma->file);
>
> Same here: that fput() seems wrong, as it was already done within
> vma_set_file().
No, that case is correct as well. The Android code here has the matching
get_file() a few lines up, see the surrounding code.
I didn't want to replace that since it does some strange error
handling here, so the result is that we need to drop the extra reference
here as well.
We could also keep it like it is or maybe better put a TODO comment on it.
Regards,
Christian.
>
>
>
> thanks,
From: Rob Clark <robdclark(a)chromium.org>
This doesn't remove *all* the struct_mutex, but it covers the worst
of it, ie. shrinker/madvise/free/retire. The submit path still uses
struct_mutex, but it still needs *something* to serialize a portion of
the submit path, and lock_stat mostly just shows the lock contention
there being with other submits. And there are a few other bits of
struct_mutex usage in less critical paths (debugfs, etc). But this
seems like a reasonable step in the right direction.
Rob Clark (14):
drm/msm: Use correct drm_gem_object_put() in fail case
drm/msm: Drop chatty trace
drm/msm: Move update_fences()
drm/msm: Add priv->mm_lock to protect active/inactive lists
drm/msm: Document and rename preempt_lock
drm/msm: Protect ring->submits with it's own lock
drm/msm: Refcount submits
drm/msm: Remove obj->gpu
drm/msm: Drop struct_mutex from the retire path
drm/msm: Drop struct_mutex in free_object() path
drm/msm: remove msm_gem_free_work
drm/msm: drop struct_mutex in madvise path
drm/msm: Drop struct_mutex in shrinker path
drm/msm: Don't implicit-sync if only a single ring
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 4 +-
drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 12 +--
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 +-
drivers/gpu/drm/msm/msm_debugfs.c | 7 ++
drivers/gpu/drm/msm/msm_drv.c | 15 +---
drivers/gpu/drm/msm/msm_drv.h | 19 +++--
drivers/gpu/drm/msm/msm_gem.c | 76 ++++++------------
drivers/gpu/drm/msm/msm_gem.h | 53 +++++++++----
drivers/gpu/drm/msm/msm_gem_shrinker.c | 58 ++------------
drivers/gpu/drm/msm/msm_gem_submit.c | 17 ++--
drivers/gpu/drm/msm/msm_gpu.c | 96 ++++++++++++++---------
drivers/gpu/drm/msm/msm_gpu.h | 5 +-
drivers/gpu/drm/msm/msm_ringbuffer.c | 3 +-
drivers/gpu/drm/msm/msm_ringbuffer.h | 13 ++-
14 files changed, 188 insertions(+), 194 deletions(-)
--
2.26.2
On Thu, Oct 01, 2020 at 07:28:27PM +0200, Alexandre Bailon wrote:
> Hi Daniel,
>
> On 10/1/20 10:48 AM, Daniel Vetter wrote:
> > On Wed, Sep 30, 2020 at 01:53:46PM +0200, Alexandre Bailon wrote:
> > > This adds a RPMsg driver that implements communication between the CPU and an
> > > APU.
> > > This uses VirtIO buffers to exchange messages, but for sharing data it uses
> > > a dmabuf, mapped to be shared between the CPU (userspace) and the APU.
> > > The driver is relatively generic, and should work with any SoC implementing a
> > > hardware accelerator for AI if they support remoteproc and VirtIO.
> > >
> > > For the people interested by the firmware or userspace library,
> > > the sources are available here:
> > > https://github.com/BayLibre/open-amp/tree/v2020.01-mtk/apps/examples/apu
> > Since this has open userspace (from a very cursory look), and smells very
> > much like an acceleration driver, and seems to use dma-buf for memory
> > management: Why is this not just a drm driver?
>
> I have never thought of DRM since for me it was only a RPMsg driver.
> I don't know DRM well. Could you tell me how you would do it so I could
> have a look?
Well internally it would still be an rpmsg driver ... I'm assuming that's
kinda similar to how most gpu drivers sit on top of a pci_device or a
platform_device, it's just a means to get at your "device"?
The part I'm talking about here is the userspace api. You're creating an
entirely new chardev interface, which at least from a quick look seems to
be based on dma-buf buffers and used to submit commands to your device to
do some kind of computing/processing. That's exactly what drivers/gpu/drm
does (if you ignore the display/modeset side of things) - at the kernel
level GPUs have nothing to do with graphics, but everything to do with handling buffer
objects and throwing workloads at some kind of accelerator thing.
Of course that's just my guess of what's going on, after scrolling through
your driver and userspace a bit, I might be completely off. But if my
guess is roughly right, then your driver is internally an rpmsg
driver, but towards userspace it should be a drm driver.
Cheers, Daniel
>
> Thanks,
> Alexandre
>
> > -Daniel
> >
> > > Alexandre Bailon (3):
> > > Add a RPMSG driver for the APU in the mt8183
> > > rpmsg: apu_rpmsg: update the way to store IOMMU mapping
> > > rpmsg: apu_rpmsg: Add an IOCTL to request IOMMU mapping
> > >
> > > Julien STEPHAN (1):
> > > rpmsg: apu_rpmsg: Add support for async apu request
> > >
> > > drivers/rpmsg/Kconfig | 9 +
> > > drivers/rpmsg/Makefile | 1 +
> > > drivers/rpmsg/apu_rpmsg.c | 752 +++++++++++++++++++++++++++++++++
> > > drivers/rpmsg/apu_rpmsg.h | 52 +++
> > > include/uapi/linux/apu_rpmsg.h | 47 +++
> > > 5 files changed, 861 insertions(+)
> > > create mode 100644 drivers/rpmsg/apu_rpmsg.c
> > > create mode 100644 drivers/rpmsg/apu_rpmsg.h
> > > create mode 100644 include/uapi/linux/apu_rpmsg.h
> > >
> > > --
> > > 2.26.2
> > >
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, Sep 30, 2020 at 01:53:46PM +0200, Alexandre Bailon wrote:
> This adds a RPMsg driver that implements communication between the CPU and an
> APU.
> This uses VirtIO buffers to exchange messages, but for sharing data it uses
> a dmabuf, mapped to be shared between the CPU (userspace) and the APU.
> The driver is relatively generic, and should work with any SoC implementing a
> hardware accelerator for AI if they support remoteproc and VirtIO.
>
> For the people interested by the firmware or userspace library,
> the sources are available here:
> https://github.com/BayLibre/open-amp/tree/v2020.01-mtk/apps/examples/apu
Since this has open userspace (from a very cursory look), and smells very
much like an acceleration driver, and seems to use dma-buf for memory
management: Why is this not just a drm driver?
-Daniel
>
> Alexandre Bailon (3):
> Add a RPMSG driver for the APU in the mt8183
> rpmsg: apu_rpmsg: update the way to store IOMMU mapping
> rpmsg: apu_rpmsg: Add an IOCTL to request IOMMU mapping
>
> Julien STEPHAN (1):
> rpmsg: apu_rpmsg: Add support for async apu request
>
> drivers/rpmsg/Kconfig | 9 +
> drivers/rpmsg/Makefile | 1 +
> drivers/rpmsg/apu_rpmsg.c | 752 +++++++++++++++++++++++++++++++++
> drivers/rpmsg/apu_rpmsg.h | 52 +++
> include/uapi/linux/apu_rpmsg.h | 47 +++
> 5 files changed, 861 insertions(+)
> create mode 100644 drivers/rpmsg/apu_rpmsg.c
> create mode 100644 drivers/rpmsg/apu_rpmsg.h
> create mode 100644 include/uapi/linux/apu_rpmsg.h
>
> --
> 2.26.2
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Hi Alex,
On 22.09.2020 01:15, Alex Goins wrote:
> Tested-by: Alex Goins <agoins(a)nvidia.com>
>
> This change fixes a regression with drm_prime_sg_to_page_addr_arrays() and
> AMDGPU in v5.9.
Thanks for testing!
> Commit 39913934 similarly revamped AMDGPU to use sgtable helper functions. When
> it changed from dma_map_sg_attrs() to dma_map_sgtable(), as a side effect it
> started correctly updating sgt->nents to the return value of dma_map_sg_attrs().
> However, drm_prime_sg_to_page_addr_arrays() incorrectly uses sgt->nents to
> iterate over pages, rather than sgt->orig_nents, resulting in it now returning
> the incorrect number of pages on AMDGPU.
>
> I had written a patch that changes drm_prime_sg_to_page_addr_arrays() to use
> for_each_sgtable_sg() instead of for_each_sg(), iterating using sgt->orig_nents:
>
> - for_each_sg(sgt->sgl, sg, sgt->nents, count) {
> + for_each_sgtable_sg(sgt, sg, count) {
>
> This patch takes it further, but still has the effect of fixing the number of
> pages that drm_prime_sg_to_page_addr_arrays() returns. Something like this
> should be included in v5.9 to prevent a regression with AMDGPU.
Probably the easiest way to handle a fix for v5.9 would be to simply
merge the latest version of this patch also to v5.9-rcX:
https://lore.kernel.org/dri-devel/20200904131711.12950-3-m.szyprowski@samsu…
This way we would get it fixed and avoid possible conflict in the -next.
Do you have any AMDGPU fixes for v5.9 in the queue? Maybe you can add
that patch to the queue? Dave: would it be okay that way?
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
Am 28.09.20 um 09:37 schrieb Thomas Zimmermann:
> Hi
>
> Am 28.09.20 um 08:50 schrieb Christian König:
>> Am 27.09.20 um 21:16 schrieb Sam Ravnborg:
>>> Hi Thomas.
>>>
>>>>> struct simap {
>>>>> union {
>>>>> void __iomem *vaddr_iomem;
>>>>> void *vaddr;
>>>>> };
>>>>> bool is_iomem;
>>>>> };
>>>>>
>>>>> Where simap is a shorthand for system_iomem_map
>>>>> And it could all be stuffed into an include/linux/simap.h file.
>>>>>
>>>>> Not totally sold on the simap name - but wanted to come up with
>>>>> something.
>>>> Yes. Others, myself included, have suggested to use a name that does not
>>>> imply a connection to the dma-buf framework, but no one has come up with
>>>> a good name.
>>>>
>>>> I strongly dislike simap, as it's entirely non-obvious what it does.
>>>> dma-buf-map is not actually wrong. The structures represents the mapping
>>>> of a dma-able buffer in most cases.
>>>>
>>>>> With this approach users do not have to pull in dma-buf to use it and
>>>>> users will not confuse that this is only for dma-buf usage.
>>>> There's no need to enable dma-buf. It's all in the header file without
>>>> dependencies on dma-buf. It's really just the name.
>>>>
>>>> But there's something else to take into account. The whole issue here is
>>>> that the buffer is disconnected from its originating driver, so we don't
>>>> know which kind of memory ops we have to use. Thinking about it, I
>>>> realized that no one else seemed to have this problem until now.
>>>> Otherwise there would be a solution already. So maybe the dma-buf
>>>> framework *is* the native use case for this data structure.
>>> We have at least:
>>> linux/fb.h:
>>> union {
>>> char __iomem *screen_base; /* Virtual address */
>>> char *screen_buffer;
>>> };
>>>
>>> Which solve more or less the same problem.
> I thought this was for convenience. The important is_iomem bit is missing.
>
>> I also already noted that in TTM we have exactly the same problem and a
>> whole bunch of helpers to allow operations on those pointers.
> What do you call this within TTM?
ttm_bus_placement, but I really don't like that name.
>
> The data structure represents a pointer to either system or I/O memory,
> but not necessarily device memory. It contains raw data. That would
> give something like
>
> struct databuf_map
> struct databuf_ptr
> struct dbuf_map
> struct dbuf_ptr
>
> My favorite would be dbuf_ptr. It's short and the API names would make
> sense: dbuf_ptr_clear() for clearing, dbuf_ptr_set_vaddr() to set an
> address, dbuf_ptr_incr() to increment, etc. Also, the _ptr indicates
> that it's a single address; not an offset with length.
Puh, no idea. None of that sounds like it 100% hits the underlying
meaning of the structure.
Christian.
>
> Best regards
> Thomas
>
>> Christian.
>>
>>>
>>>> Anyway, if a better name than dma-buf-map comes in, I'm willing to
>>>> rename the thing. Otherwise I intend to merge the patchset by the end of
>>>> the week.
>>> Well, the main thing is that I think this shoud be moved away from
>>> dma-buf. But if indeed dma-buf is the only relevant user in drm then
>>> I am totally fine with the current naming.
>>>
>>> One alternative named that popped up in my head: struct sys_io_map {}
>>> But again, if this is kept in dma-buf then I am fine with the current
>>> naming.
>>>
>>> In other words, if you continue to think this is mostly a dma-buf
>>> thing all three patches are:
>>> Acked-by: Sam Ravnborg <sam(a)ravnborg.org>
>>>
>>> Sam