Although there is usually no such limitation (and when there is, it is
often only because the driver forgot to raise the very small default),
it is still correct here to break scatterlist elements into chunks no
larger than the device's maximum DMA segment size
(dma_get_max_seg_size()).
This might cause issues for users with misbehaving drivers. If
bisecting has landed you on this commit, make sure your drivers both set
dma_set_max_seg_size() and check for contiguity correctly.
Signed-off-by: Andrew Davis <afd(a)ti.com>
---
drivers/dma-buf/heaps/cma_heap.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 28fb04eccdd0..cacc84cb5ece 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -58,10 +58,11 @@ static int cma_heap_attach(struct dma_buf *dmabuf,
if (!a)
return -ENOMEM;
- ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
- buffer->pagecount, 0,
- buffer->pagecount << PAGE_SHIFT,
- GFP_KERNEL);
+ size_t max_segment = dma_get_max_seg_size(attachment->dev);
+ ret = sg_alloc_table_from_pages_segment(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ max_segment, GFP_KERNEL);
if (ret) {
kfree(a);
return ret;
--
2.36.1
The purpose of these patches is to add a new dma-buf heap: linaro,secure-heap
The Linaro OP-TEE OS Secure Data Path feature relies on a reserved memory
region defined at both the Linux kernel level and the OP-TEE OS level.
On the Linux kernel side, heap management uses the dma-buf heaps interface.
Olivier Masse (3):
dma-buf: heaps: add Linaro secure dmabuf heap support
dt-bindings: reserved-memory: add linaro,secure-heap
plat-hikey: Add linaro,secure-heap compatible
.../reserved-memory/linaro,secure-heap.yaml | 56 +++
.../arm64/boot/dts/hisilicon/hi6220-hikey.dts | 11 +
arch/arm64/configs/defconfig | 2 +
drivers/dma-buf/heaps/Kconfig | 9 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/secure_heap.c | 357 ++++++++++++++++++
6 files changed, 436 insertions(+)
create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,secure-heap.yaml
create mode 100644 drivers/dma-buf/heaps/secure_heap.c
--
2.25.0
> On Aug 1, 2022, at 5:46 PM, Tomasz Figa <tfiga(a)chromium.org> wrote:
>
>
>
>> On Mon, Aug 1, 2022 at 3:44 PM Hsia-Jun Li <Randy.Li(a)synaptics.com> wrote:
>>> On 8/1/22 14:19, Tomasz Figa wrote:
>> Hello Tomasz
>>> Hi Randy,
>>> On Mon, Aug 1, 2022 at 5:21 AM <ayaka(a)soulik.info> wrote:
>>>> From: Randy Li <ayaka(a)soulik.info>
>>>> This module is still at an early stage; I wrote it to show what
>>>> APIs we need here.
>>>> Let me explain why we need such a module.
>>>> If you never allocate buffers from a V4L2 M2M device, this module
>>>> may not be very useful. I am sure most users won't know that a
>>>> device would require them to allocate buffers from a DMA-heap and
>>>> then import those buffers into a V4L2 queue.
>>>> Then the question goes back to why DMA-heap at all. From Android's
>>>> description, we know it is about copy-protection DRM.
>>>> When we allocate a buffer from a DMA-heap, the heap may register
>>>> that buffer with the trusted execution environment, so that
>>>> firmware which runs there, or which can only be accessed from
>>>> there, can use the buffer later.
>>>> The above leads to another thing not done in this version: the DMA
>>>> mapping. On some platforms a DMA-heap also fronts an IOMMU device,
>>>> but for the general case we are better off assuming the mapping
>>>> should be done by each device itself. The problem is that these
>>>> DMA-buf methods only know alloc_devs (which are DMA-heaps in my
>>>> design); the device from the queue is not enough, since a plane may
>>>> require a different IOMMU device or table for its mapping.
>>>> Signed-off-by: Randy Li <ayaka(a)soulik.info>
>>>> ---
>>>> drivers/media/common/videobuf2/Kconfig | 6 +
>>>> drivers/media/common/videobuf2/Makefile | 1 +
>>>> .../common/videobuf2/videobuf2-dma-heap.c | 350 ++++++++++++++++++
>>>> include/media/videobuf2-dma-heap.h | 30 ++
>>>> 4 files changed, 387 insertions(+)
>>>> create mode 100644 drivers/media/common/videobuf2/videobuf2-dma-heap.c
>>>> create mode 100644 include/media/videobuf2-dma-heap.h
>>> First of all, thanks for the series.
>>> Possibly a stupid question, but why not just allocate the DMA-bufs
>>> directly from the DMA-buf heap device in the userspace and just import
>>> the buffers to the V4L2 device using V4L2_MEMORY_DMABUF?
>> Sometimes the allocation policy can be very complex. Suppose a
>> multi-plane pixel format with frame buffer compression enabled:
>> its luma and chroma data could be allocated from a pool dedicated to
>> large buffers, while its metadata comes from a pool that many users
>> take a few small slices from (like a system pool).
>> Then, when we have a new user who knows nothing about this platform,
>> if we just configure the alloc_devs in each queue correctly, the
>> user won't need to know those complex rules.
>> The real situation can be even more complex. Samsung MFC's left and
>> right banks could be regarded as two pools, and many devices would
>> benefit from such a split, whether in allocation time or in secure
>> buffer policy.
>> In our design, when we need to do secure decoding (DRM video),
>> codec2 allocates buffers from the pool dedicated to that, while for
>> non-DRM video users don't have to care about this.
>
> I'm a little bit surprised about this, because on Android all the
> graphics buffers are allocated from the system IAllocator and imported
> to the specific devices.
In the non-tunnel mode, yes, it is. The tunnel mode, however, is completely vendor-defined: neither HWC nor codec2 cares where the buffers come from, so you can do whatever you want.
Besides, there is DRM video on non-Android Linux platforms; I heard WebKit has made a huge effort here, and PlayReady is one system that works on non-Android Linux.
> Would it make sense to instead extend the UAPI to expose enough
> information about the allocation requirements to the userspace, so it
> can allocate correctly?
Yes, it could, but as I said, it would require users to do more work.
> My reasoning here is that it's not a driver's decision to allocate
> from a DMA-buf heap (and which one) or not. It's the userspace which
> knows that, based on the specific use case that it wants to fulfill.
Although I would like to let users decide that, on some platforms users simply can't without violating the security rules.
For example, when only the video codec and the display device may access a region of memory, and no other device or trusted app may touch it, users have to allocate the buffer from the pool the vendor chose.
So why not offer a quick path, so users don't need trial and error?
> Also, FWIW, dma_heap_ioctl_allocate() is a static function not exposed
> to other kernel modules:
> https://urldefense.proofpoint.com/v2/url?u=https-3A__elixir.bootlin.com_lin…
I may have forgotten to mention that you need two extra patches from Linaro that export those APIs (the original version is out of date by now). Besides, the Android kernel does have the two kAPIs I need here.
Actually, I need more APIs from DMA-heap to achieve the items on the TODO list.
> By the way, the MFC left/right port requirement was gone long ago, it
> was only one of the earliest Exynos SoCs which required that.
Yes, MFC v5 or v6, right. I just want to mention that anything is possible out there; a vendor always has its own reasons.
> Best regards,
> Tomasz
>
>>> Best regards,
>>> Tomasz
>>>> diff --git a/drivers/media/common/videobuf2/Kconfig b/drivers/media/common/videobuf2/Kconfig
>>>> index d2223a12c95f..02235077f07e 100644
>>>> --- a/drivers/media/common/videobuf2/Kconfig
>>>> +++ b/drivers/media/common/videobuf2/Kconfig
>>>> @@ -30,3 +30,9 @@ config VIDEOBUF2_DMA_SG
>>>> config VIDEOBUF2_DVB
>>>> tristate
>>>> select VIDEOBUF2_CORE
>>>> +
>>>> +config VIDEOBUF2_DMA_HEAP
>>>> + tristate
>>>> + select VIDEOBUF2_CORE
>>>> + select VIDEOBUF2_MEMOPS
>>>> + select DMABUF_HEAPS
>>>> diff --git a/drivers/media/common/videobuf2/Makefile b/drivers/media/common/videobuf2/Makefile
>>>> index a6fe3f304685..7fe65f93117f 100644
>>>> --- a/drivers/media/common/videobuf2/Makefile
>>>> +++ b/drivers/media/common/videobuf2/Makefile
>>>> @@ -10,6 +10,7 @@ endif
>>>> # (e. g. LC_ALL=C sort Makefile)
>>>> obj-$(CONFIG_VIDEOBUF2_CORE) += videobuf2-common.o
>>>> obj-$(CONFIG_VIDEOBUF2_DMA_CONTIG) += videobuf2-dma-contig.o
>>>> +obj-$(CONFIG_VIDEOBUF2_DMA_HEAP) += videobuf2-dma-heap.o
>>>> obj-$(CONFIG_VIDEOBUF2_DMA_SG) += videobuf2-dma-sg.o
>>>> obj-$(CONFIG_VIDEOBUF2_DVB) += videobuf2-dvb.o
>>>> obj-$(CONFIG_VIDEOBUF2_MEMOPS) += videobuf2-memops.o
>>>> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-heap.c b/drivers/media/common/videobuf2/videobuf2-dma-heap.c
>>>> new file mode 100644
>>>> index 000000000000..377b82ab8f5a
>>>> --- /dev/null
>>>> +++ b/drivers/media/common/videobuf2/videobuf2-dma-heap.c
>>>> @@ -0,0 +1,350 @@
>>>> +/*
>>>> + * Copyright (C) 2022 Randy Li <ayaka(a)soulik.info>
>>>> + *
>>>> + * This software is licensed under the terms of the GNU General Public
>>>> + * License version 2, as published by the Free Software Foundation, and
>>>> + * may be copied, distributed, and modified under those terms.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + */
>>>> +
>>>> +#include <linux/dma-buf.h>
>>>> +#include <linux/dma-heap.h>
>>>> +#include <linux/refcount.h>
>>>> +#include <linux/scatterlist.h>
>>>> +#include <linux/sched.h>
>>>> +#include <linux/slab.h>
>>>> +#include <linux/dma-mapping.h>
>>>> +
>>>> +#include <media/videobuf2-v4l2.h>
>>>> +#include <media/videobuf2-memops.h>
>>>> +#include <media/videobuf2-dma-heap.h>
>>>> +
>>>> +struct vb2_dmaheap_buf {
>>>> + struct device *dev;
>>>> + void *vaddr;
>>>> + unsigned long size;
>>>> + struct dma_buf *dmabuf;
>>>> + dma_addr_t dma_addr;
>>>> + unsigned long attrs;
>>>> + enum dma_data_direction dma_dir;
>>>> + struct sg_table *dma_sgt;
>>>> +
>>>> + /* MMAP related */
>>>> + struct vb2_vmarea_handler handler;
>>>> + refcount_t refcount;
>>>> +
>>>> + /* DMABUF related */
>>>> + struct dma_buf_attachment *db_attach;
>>>> +};
>>>> +
>>>> +/*********************************************/
>>>> +/* callbacks for all buffers */
>>>> +/*********************************************/
>>>> +
>>>> +void *vb2_dmaheap_cookie(struct vb2_buffer *vb, void *buf_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = buf_priv;
>>>> +
>>>> + return &buf->dma_addr;
>>>> +}
>>>> +
>>>> +static void *vb2_dmaheap_vaddr(struct vb2_buffer *vb, void *buf_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = buf_priv;
>>>> + struct iosys_map map;
>>>> +
>>>> + if (buf->vaddr)
>>>> + return buf->vaddr;
>>>> +
>>>> + if (buf->db_attach) {
>>>> + if (!dma_buf_vmap(buf->db_attach->dmabuf, &map))
>>>> + buf->vaddr = map.vaddr;
>>>> + }
>>>> +
>>>> + return buf->vaddr;
>>>> +}
>>>> +
>>>> +static unsigned int vb2_dmaheap_num_users(void *buf_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = buf_priv;
>>>> +
>>>> + return refcount_read(&buf->refcount);
>>>> +}
>>>> +
>>>> +static void vb2_dmaheap_prepare(void *buf_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = buf_priv;
>>>> +
>>>> + /* TODO: DMABUF exporter will flush the cache for us */
>>>> + if (buf->db_attach)
>>>> + return;
>>>> +
>>>> + dma_buf_end_cpu_access(buf->dmabuf, buf->dma_dir);
>>>> +}
>>>> +
>>>> +static void vb2_dmaheap_finish(void *buf_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = buf_priv;
>>>> +
>>>> + /* TODO: DMABUF exporter will flush the cache for us */
>>>> + if (buf->db_attach)
>>>> + return;
>>>> +
>>>> + dma_buf_begin_cpu_access(buf->dmabuf, buf->dma_dir);
>>>> +}
>>>> +
>>>> +/*********************************************/
>>>> +/* callbacks for MMAP buffers */
>>>> +/*********************************************/
>>>> +
>>>> +void vb2_dmaheap_put(void *buf_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = buf_priv;
>>>> +
>>>> + if (!refcount_dec_and_test(&buf->refcount))
>>>> + return;
>>>> +
>>>> + dma_buf_put(buf->dmabuf);
>>>> +
>>>> + put_device(buf->dev);
>>>> + kfree(buf);
>>>> +}
>>>> +
>>>> +static void *vb2_dmaheap_alloc(struct vb2_buffer *vb,
>>>> + struct device *dev,
>>>> + unsigned long size)
>>>> +{
>>>> + struct vb2_queue *q = vb->vb2_queue;
>>>> + struct dma_heap *heap;
>>>> + struct vb2_dmaheap_buf *buf;
>>>> + const char *heap_name;
>>>> + int ret;
>>>> +
>>>> + if (WARN_ON(!dev))
>>>> + return ERR_PTR(-EINVAL);
>>>> +
>>>> + heap_name = dev_name(dev);
>>>> + if (!heap_name)
>>>> + return ERR_PTR(-EINVAL);
>>>> +
>>>> + heap = dma_heap_find(heap_name);
>>>> + if (!heap) {
>>>> + dev_err(dev, "is not a DMA-heap device\n");
>>>> + return ERR_PTR(-EINVAL);
>>>> + }
>>>> +
>>>> + buf = kzalloc(sizeof *buf, GFP_KERNEL);
>>>> + if (!buf)
>>>> + return ERR_PTR(-ENOMEM);
>>>> +
>>>> + /* Prevent the device from being released while the buffer is used */
>>>> + buf->dev = get_device(dev);
>>>> + buf->attrs = vb->vb2_queue->dma_attrs;
>>>> + buf->dma_dir = vb->vb2_queue->dma_dir;
>>>> +
>>>> + /* TODO: heap flags */
>>>> + ret = dma_heap_buffer_alloc(heap, size, 0, 0);
>>>> + if (ret < 0) {
>>>> + dev_err(dev, "is not a DMA-heap device\n");
>>>> + put_device(buf->dev);
>>>> + kfree(buf);
>>>> + return ERR_PTR(ret);
>>>> + }
>>>> + buf->dmabuf = dma_buf_get(ret);
>>>> +
>>>> + /* FIXME */
>>>> + buf->dma_addr = 0;
>>>> +
>>>> + if ((q->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING) == 0)
>>>> + buf->vaddr = buf->dmabuf;
>>>> +
>>>> + buf->handler.refcount = &buf->refcount;
>>>> + buf->handler.put = vb2_dmaheap_put;
>>>> + buf->handler.arg = buf;
>>>> +
>>>> + refcount_set(&buf->refcount, 1);
>>>> +
>>>> + return buf;
>>>> +}
>>>> +
>>>> +static int vb2_dmaheap_mmap(void *buf_priv, struct vm_area_struct *vma)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = buf_priv;
>>>> + int ret;
>>>> +
>>>> + if (!buf) {
>>>> + printk(KERN_ERR "No buffer to map\n");
>>>> + return -EINVAL;
>>>> + }
>>>> +
>>>> + vma->vm_flags &= ~VM_PFNMAP;
>>>> +
>>>> + ret = dma_buf_mmap(buf->dmabuf, vma, 0);
>>>> + if (ret) {
>>>> + pr_err("Remapping memory failed, error: %d\n", ret);
>>>> + return ret;
>>>> + }
>>>> + vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
>>>> + vma->vm_private_data = &buf->handler;
>>>> + vma->vm_ops = &vb2_common_vm_ops;
>>>> +
>>>> + vma->vm_ops->open(vma);
>>>> +
>>>> + pr_debug("%s: mapped memid 0x%08lx at 0x%08lx, size %ld\n",
>>>> + __func__, (unsigned long)buf->dma_addr, vma->vm_start,
>>>> + buf->size);
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +/*********************************************/
>>>> +/* DMABUF ops for exporters */
>>>> +/*********************************************/
>>>> +
>>>> +static struct dma_buf *vb2_dmaheap_get_dmabuf(struct vb2_buffer *vb,
>>>> + void *buf_priv,
>>>> + unsigned long flags)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = buf_priv;
>>>> + struct dma_buf *dbuf;
>>>> +
>>>> + dbuf = buf->dmabuf;
>>>> +
>>>> + return dbuf;
>>>> +}
>>>> +
>>>> +/*********************************************/
>>>> +/* callbacks for DMABUF buffers */
>>>> +/*********************************************/
>>>> +
>>>> +static int vb2_dmaheap_map_dmabuf(void *mem_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = mem_priv;
>>>> + struct sg_table *sgt;
>>>> +
>>>> + if (WARN_ON(!buf->db_attach)) {
>>>> + pr_err("trying to pin a non attached buffer\n");
>>>> + return -EINVAL;
>>>> + }
>>>> +
>>>> + if (WARN_ON(buf->dma_sgt)) {
>>>> + pr_err("dmabuf buffer is already pinned\n");
>>>> + return 0;
>>>> + }
>>>> +
>>>> + /* get the associated scatterlist for this buffer */
>>>> + sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir);
>>>> + if (IS_ERR(sgt)) {
>>>> + pr_err("Error getting dmabuf scatterlist\n");
>>>> + return -EINVAL;
>>>> + }
>>>> +
>>>> + buf->dma_addr = sg_dma_address(sgt->sgl);
>>>> + buf->dma_sgt = sgt;
>>>> + buf->vaddr = NULL;
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static void vb2_dmaheap_unmap_dmabuf(void *mem_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = mem_priv;
>>>> + struct sg_table *sgt = buf->dma_sgt;
>>>> + struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr);
>>>> +
>>>> + if (WARN_ON(!buf->db_attach)) {
>>>> + pr_err("trying to unpin a not attached buffer\n");
>>>> + return;
>>>> + }
>>>> +
>>>> + if (WARN_ON(!sgt)) {
>>>> + pr_err("dmabuf buffer is already unpinned\n");
>>>> + return;
>>>> + }
>>>> +
>>>> + if (buf->vaddr) {
>>>> + dma_buf_vunmap(buf->db_attach->dmabuf, &map);
>>>> + buf->vaddr = NULL;
>>>> + }
>>>> + dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
>>>> +
>>>> + buf->dma_addr = 0;
>>>> + buf->dma_sgt = NULL;
>>>> +}
>>>> +
>>>> +static void vb2_dmaheap_detach_dmabuf(void *mem_priv)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf = mem_priv;
>>>> +
>>>> + /* if vb2 works correctly you should never detach mapped buffer */
>>>> + if (WARN_ON(buf->dma_addr))
>>>> + vb2_dmaheap_unmap_dmabuf(buf);
>>>> +
>>>> + /* detach this attachment */
>>>> + dma_buf_detach(buf->db_attach->dmabuf, buf->db_attach);
>>>> + kfree(buf);
>>>> +}
>>>> +
>>>> +static void *vb2_dmaheap_attach_dmabuf(struct vb2_buffer *vb, struct device *dev,
>>>> + struct dma_buf *dbuf, unsigned long size)
>>>> +{
>>>> + struct vb2_dmaheap_buf *buf;
>>>> + struct dma_buf_attachment *dba;
>>>> +
>>>> + if (dbuf->size < size)
>>>> + return ERR_PTR(-EFAULT);
>>>> +
>>>> + if (WARN_ON(!dev))
>>>> + return ERR_PTR(-EINVAL);
>>>> + /*
>>>> + * TODO: A better way to check whether the buffer is coming
>>>> + * from this heap or this heap could accept this buffer
>>>> + */
>>>> + if (strcmp(dbuf->exp_name, dev_name(dev)))
>>>> + return ERR_PTR(-EINVAL);
>>>> +
>>>> + buf = kzalloc(sizeof(*buf), GFP_KERNEL);
>>>> + if (!buf)
>>>> + return ERR_PTR(-ENOMEM);
>>>> +
>>>> + buf->dev = dev;
>>>> + /* create attachment for the dmabuf with the user device */
>>>> + dba = dma_buf_attach(dbuf, buf->dev);
>>>> + if (IS_ERR(dba)) {
>>>> + pr_err("failed to attach dmabuf\n");
>>>> + kfree(buf);
>>>> + return dba;
>>>> + }
>>>> +
>>>> + buf->dma_dir = vb->vb2_queue->dma_dir;
>>>> + buf->size = size;
>>>> + buf->db_attach = dba;
>>>> +
>>>> + return buf;
>>>> +}
>>>> +
>>>> +const struct vb2_mem_ops vb2_dmaheap_memops = {
>>>> + .alloc = vb2_dmaheap_alloc,
>>>> + .put = vb2_dmaheap_put,
>>>> + .get_dmabuf = vb2_dmaheap_get_dmabuf,
>>>> + .cookie = vb2_dmaheap_cookie,
>>>> + .vaddr = vb2_dmaheap_vaddr,
>>>> + .prepare = vb2_dmaheap_prepare,
>>>> + .finish = vb2_dmaheap_finish,
>>>> + .map_dmabuf = vb2_dmaheap_map_dmabuf,
>>>> + .unmap_dmabuf = vb2_dmaheap_unmap_dmabuf,
>>>> + .attach_dmabuf = vb2_dmaheap_attach_dmabuf,
>>>> + .detach_dmabuf = vb2_dmaheap_detach_dmabuf,
>>>> + .num_users = vb2_dmaheap_num_users,
>>>> + .mmap = vb2_dmaheap_mmap,
>>>> +};
>>>> +
>>>> +MODULE_DESCRIPTION("DMA-Heap memory handling routines for videobuf2");
>>>> +MODULE_AUTHOR("Randy Li <ayaka(a)soulik.info>");
>>>> +MODULE_LICENSE("GPL");
>>>> +MODULE_IMPORT_NS(DMA_BUF);
>>>> diff --git a/include/media/videobuf2-dma-heap.h b/include/media/videobuf2-dma-heap.h
>>>> new file mode 100644
>>>> index 000000000000..fa057f67d6e9
>>>> --- /dev/null
>>>> +++ b/include/media/videobuf2-dma-heap.h
>>>> @@ -0,0 +1,30 @@
>>>> +/*
>>>> + * Copyright (C) 2022 Randy Li <ayaka(a)soulik.info>
>>>> + *
>>>> + * This software is licensed under the terms of the GNU General Public
>>>> + * License version 2, as published by the Free Software Foundation, and
>>>> + * may be copied, distributed, and modified under those terms.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + */
>>>> +
>>>> +#ifndef _MEDIA_VIDEOBUF2_DMA_HEAP_H
>>>> +#define _MEDIA_VIDEOBUF2_DMA_HEAP_H
>>>> +
>>>> +#include <media/videobuf2-v4l2.h>
>>>> +#include <linux/dma-mapping.h>
>>>> +
>>>> +static inline dma_addr_t
>>>> +vb2_dmaheap_plane_dma_addr(struct vb2_buffer *vb, unsigned int plane_no)
>>>> +{
>>>> + dma_addr_t *addr = vb2_plane_cookie(vb, plane_no);
>>>> +
>>>> + return *addr;
>>>> +}
>>>> +
>>>> +extern const struct vb2_mem_ops vb2_dmaheap_memops;
>>>> +#endif
>>>> --
>>>> 2.17.1
>> --
>> Hsia-Jun(Randy) Li
ib_umem_dmabuf_map_pages() returns 0 on success and a negative errno on
failure. dma_resv_wait_timeout() uses a different scheme:
 * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
 * greater than zero on success.
This leaves ib_umem_dmabuf_map_pages() non-functional, as drivers will
understand a positive return to be an error.
Fixes: f30bceab16d1 ("RDMA: use dma_resv_wait() instead of extracting the fence")
Cc: stable(a)kernel.org
Tested-by: Maor Gottlieb <maorg(a)nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg(a)nvidia.com>
---
drivers/infiniband/core/umem_dmabuf.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
Oded, I assume the Habana driver will hit this as well - does this mean you
are not testing upstream kernels?
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index fce80a4a5147cd..04c04e6d24c358 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -18,6 +18,7 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
struct scatterlist *sg;
unsigned long start, end, cur = 0;
unsigned int nmap = 0;
+ long ret;
int i;
dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
@@ -67,9 +68,14 @@ int ib_umem_dmabuf_map_pages(struct ib_umem_dmabuf *umem_dmabuf)
* may be not up-to-date. Wait for the exporter to finish
* the migration.
*/
- return dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv,
+ ret = dma_resv_wait_timeout(umem_dmabuf->attach->dmabuf->resv,
DMA_RESV_USAGE_KERNEL,
false, MAX_SCHEDULE_TIMEOUT);
+ if (ret < 0)
+ return ret;
+ if (ret == 0)
+ return -ETIMEDOUT;
+ return 0;
}
EXPORT_SYMBOL(ib_umem_dmabuf_map_pages);
base-commit: 568035b01cfb107af8d2e4bd2fb9aea22cf5b868
--
2.37.2