On 18.08.21 at 14:17, Sa, Nuno wrote:
>> From: Christian König <christian.koenig(a)amd.com>
>> Sent: Wednesday, August 18, 2021 2:10 PM
>> To: Sa, Nuno <Nuno.Sa(a)analog.com>; linaro-mm-sig(a)lists.linaro.org;
>> dri-devel(a)lists.freedesktop.org; linux-media(a)vger.kernel.org
>> Cc: Rob Clark <rob(a)ti.com>; Sumit Semwal
>> <sumit.semwal(a)linaro.org>
>> Subject: Re: [PATCH] dma-buf: return -EINVAL if dmabuf object is
>> NULL
>>
>> [External]
>>
>> To be honest I think the if (WARN_ON(!dmabuf)) return -EINVAL
>> handling here is misleading in the first place.
>>
>> Returning -EINVAL on a hard coding error is not good practice and
>> should probably be removed from the DMA-buf subsystem in general.
> Would you say to just return 0 then? I don't think having the
> dereference is good either...
No, just run into the dereference.
Passing NULL as the core object you are working on is a hard coding
error and not something we should bubble up as a recoverable error.
> I used -EINVAL to be coherent with the rest of the code.
I'd rather suggest removing the check elsewhere as well.
Christian.
>
> - Nuno Sá
>
>> Christian.
>>
>> On 18.08.21 at 13:58, Nuno Sá wrote:
>>> On top of warning about a NULL object, we also want to return with a
>>> proper error code (as done in 'dma_buf_begin_cpu_access()'). Otherwise,
>>> we will get a NULL pointer dereference.
>>>
>>> Fixes: fc13020e086b ("dma-buf: add support for kernel cpu access")
>>> Signed-off-by: Nuno Sá <nuno.sa(a)analog.com>
>>> ---
>>> drivers/dma-buf/dma-buf.c | 3 ++-
>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index 63d32261b63f..8ec7876dd523 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -1231,7 +1231,8 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>>>  {
>>>  	int ret = 0;
>>>
>>> -	WARN_ON(!dmabuf);
>>> +	if (WARN_ON(!dmabuf))
>>> +		return -EINVAL;
>>>
>>>  	might_lock(&dmabuf->resv->lock.base);
>>>
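For reference, the two entry-check styles under discussion look roughly
like this. This is a sketch with illustrative helper names, not actual
dma-buf code; only WARN_ON, might_lock and the dmabuf->resv->lock.base
access mirror the function being patched.

#include <linux/dma-buf.h>
#include <linux/errno.h>

/* Style the patch proposes: warn, then bail out with a recoverable
 * error code, mirroring dma_buf_begin_cpu_access(). */
static int cpu_access_entry_checked(struct dma_buf *dmabuf)
{
	if (WARN_ON(!dmabuf))
		return -EINVAL;

	might_lock(&dmabuf->resv->lock.base);
	return 0;
}

/* Style Christian argues for: a NULL core object is a hard coding
 * error, so don't pretend it is recoverable; run into the NULL
 * dereference and let the crash point at the actual bug. */
static int cpu_access_entry_unchecked(struct dma_buf *dmabuf)
{
	might_lock(&dmabuf->resv->lock.base);
	return 0;
}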
On Wed, Aug 18, 2021 at 03:38:23PM +0800, Desmond Cheong Zhi Xi wrote:
> The task_work_add function is needed to prevent userspace races with
> DRM modesetting rights.
>
> Some DRM ioctls can change modesetting permissions while other
> concurrent users are performing modesetting. To prevent races with
> userspace, such functions should flush readers of old permissions
> before returning to user mode. As the function that changes
> permissions might itself be a reader of the old permissions, we intend
> to schedule this flush using task_work_add.
>
> However, when DRM is compiled as a loadable kernel module and
> task_work_add is not exported, we get the following compilation error:
>
> ERROR: modpost: "task_work_add" [drivers/gpu/drm/drm.ko] undefined!
>
> Reported-by: kernel test robot <lkp(a)intel.com>
> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
Just realized another benefit of pushing the dev->master_rwsem write
locks down into the ioctls that need them: we wouldn't need this function
exported for use in drm. But I'm also not sure that works any better than
the design in your current patch set ...
-Daniel
> ---
> kernel/task_work.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/kernel/task_work.c b/kernel/task_work.c
> index 1698fbe6f0e1..90000404af2b 100644
> --- a/kernel/task_work.c
> +++ b/kernel/task_work.c
> @@ -60,6 +60,7 @@ int task_work_add(struct task_struct *task, struct callback_head *work,
>
> return 0;
> }
> +EXPORT_SYMBOL(task_work_add);
>
> /**
> * task_work_cancel_match - cancel a pending work added by task_work_add()
> --
> 2.25.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
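As background, the pattern the commit message describes looks roughly
like this. The callback and its contents are hypothetical; only
init_task_work(), task_work_add() and TWA_RESUME are real kernel API.

#include <linux/sched.h>
#include <linux/task_work.h>

/* Hypothetical callback: runs once the current task returns to user
 * mode, i.e. after the ioctl that changed master permissions (itself
 * possibly a reader of the old permissions) has fully unwound. */
static void drm_flush_old_permissions(struct callback_head *work)
{
	/* e.g. briefly take dev->master_rwsem in write mode so that
	 * all readers of the old permissions are flushed out */
}

static int drm_schedule_permission_flush(struct callback_head *work)
{
	init_task_work(work, drm_flush_old_permissions);
	/* TWA_RESUME: run the callback on the next return to user
	 * mode; fails with -ESRCH if the task is already exiting. */
	return task_work_add(current, work, TWA_RESUME);
}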
On Wed, Aug 18, 2021 at 03:38:22PM +0800, Desmond Cheong Zhi Xi wrote:
> In a future patch, a read lock on drm_device.master_rwsem is
> held in the ioctl handler before the check for ioctl
> permissions. However, this produces the following lockdep splat:
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 5.14.0-rc6-CI-Patchwork_20831+ #1 Tainted: G U
> ------------------------------------------------------
> kms_lease/1752 is trying to acquire lock:
> ffffffff827bad88 (drm_global_mutex){+.+.}-{3:3}, at: drm_open+0x64/0x280
>
> but task is already holding lock:
> ffff88812e350108 (&dev->master_rwsem){++++}-{3:3}, at: drm_ioctl_kernel+0xfb/0x1a0
>
> which lock already depends on the new lock.
>
> the existing dependency chain (in reverse order) is:
>
> -> #2 (&dev->master_rwsem){++++}-{3:3}:
> lock_acquire+0xd3/0x310
> down_read+0x3b/0x140
> drm_master_internal_acquire+0x1d/0x60
> drm_client_modeset_commit+0x10/0x40
> __drm_fb_helper_restore_fbdev_mode_unlocked+0x88/0xb0
> drm_fb_helper_set_par+0x34/0x40
> intel_fbdev_set_par+0x11/0x40 [i915]
> fbcon_init+0x270/0x4f0
> visual_init+0xc6/0x130
> do_bind_con_driver+0x1de/0x2c0
> do_take_over_console+0x10e/0x180
> do_fbcon_takeover+0x53/0xb0
> register_framebuffer+0x22d/0x310
> __drm_fb_helper_initial_config_and_unlock+0x36c/0x540
> intel_fbdev_initial_config+0xf/0x20 [i915]
> async_run_entry_fn+0x28/0x130
> process_one_work+0x26d/0x5c0
> worker_thread+0x37/0x390
> kthread+0x13b/0x170
> ret_from_fork+0x1f/0x30
>
> -> #1 (&helper->lock){+.+.}-{3:3}:
> lock_acquire+0xd3/0x310
> __mutex_lock+0xa8/0x930
> __drm_fb_helper_restore_fbdev_mode_unlocked+0x44/0xb0
> intel_fbdev_restore_mode+0x2b/0x50 [i915]
> drm_lastclose+0x27/0x50
> drm_release_noglobal+0x42/0x60
> __fput+0x9e/0x250
> task_work_run+0x6b/0xb0
> exit_to_user_mode_prepare+0x1c5/0x1d0
> syscall_exit_to_user_mode+0x19/0x50
> do_syscall_64+0x46/0xb0
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
> -> #0 (drm_global_mutex){+.+.}-{3:3}:
> validate_chain+0xb39/0x1e70
> __lock_acquire+0x5a1/0xb70
> lock_acquire+0xd3/0x310
> __mutex_lock+0xa8/0x930
> drm_open+0x64/0x280
> drm_stub_open+0x9f/0x100
> chrdev_open+0x9f/0x1d0
> do_dentry_open+0x14a/0x3a0
> dentry_open+0x53/0x70
> drm_mode_create_lease_ioctl+0x3cb/0x970
> drm_ioctl_kernel+0xc9/0x1a0
> drm_ioctl+0x201/0x3d0
> __x64_sys_ioctl+0x6a/0xa0
> do_syscall_64+0x37/0xb0
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
> other info that might help us debug this:
> Chain exists of:
> drm_global_mutex --> &helper->lock --> &dev->master_rwsem
> Possible unsafe locking scenario:
>        CPU0                    CPU1
>        ----                    ----
>   lock(&dev->master_rwsem);
>                                lock(&helper->lock);
>                                lock(&dev->master_rwsem);
>   lock(drm_global_mutex);
>
> *** DEADLOCK ***
>
> The lock hierarchy inversion happens because we grab the
> drm_global_mutex while already holding on to master_rwsem. To avoid
> this, we do some prep work to grab the drm_global_mutex before
> checking for ioctl permissions.
>
> At the same time, we update the check for the global mutex to use the
> drm_dev_needs_global_mutex helper function.
This is intentional: essentially we force all non-legacy drivers to have
unlocked ioctls (otherwise everyone forgets to set that flag).
For non-legacy drivers the global lock only ensures ordering between
drm_open and lastclose (I think at least), and between
drm_dev_register/unregister and the backwards ->load/unload callbacks
(which are called in the wrong place, but we cannot fix that for legacy
drivers).
->load/unload should be completely unused (maybe radeon still uses it),
and ->lastclose is also on the decline.
Maybe we should update the comment of drm_global_mutex to explain what it
protects and why.
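For context, drm_dev_needs_global_mutex() gates the BKL-style lock on
exactly these legacy cases; it looks roughly like this (paraphrased from
drm_file.c, so the exact upstream version may differ):

#include <drm/drm_device.h>
#include <drm/drm_drv.h>

bool drm_dev_needs_global_mutex(struct drm_device *dev)
{
	/* Legacy drivers rely on all kinds of BKL locking semantics,
	 * including for their ioctls, so better safe than sorry. */
	if (drm_core_check_feature(dev, DRIVER_LEGACY))
		return true;

	/* The deprecated ->load/->unload callbacks are called at the
	 * wrong point relative to register/unregister, so such drivers
	 * rely on the global lock to keep opens from racing setup. */
	if (dev->driver->load || dev->driver->unload)
		return true;

	/* ->lastclose assumes it is synchronized against concurrent
	 * opens, which again needs the global lock. */
	if (dev->driver->lastclose)
		return true;

	return false;
}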
I'm also confused how this patch connects to the splat, since for i915 we
shouldn't be taking the drm_global_mutex here at all. The problem seems to
be drm_open_helper when we create a new lease, which is an entirely
different can of worms.
I'm honestly not sure how best to do that, but we should be able to create
a file and then call drm_open_helper directly, or rather a version of it
that never takes the drm_global_mutex, because that lock is not needed for
nested drm_file opening:
- legacy drivers never go down this path because leases are only supported
with modesetting, and modesetting is only supported for non-legacy
drivers
- the races against dev->open_count due to lastclose or ->load callbacks
don't matter, because for the entire ioctl we already have an open
drm_file and that won't disappear.
So this approach should work, but I'm not entirely sure of the cleanest
way to implement it.
-Daniel
> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
> ---
> drivers/gpu/drm/drm_ioctl.c | 18 +++++++++---------
> 1 file changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> index 880fc565d599..2cb57378a787 100644
> --- a/drivers/gpu/drm/drm_ioctl.c
> +++ b/drivers/gpu/drm/drm_ioctl.c
> @@ -779,19 +779,19 @@ long drm_ioctl_kernel(struct file *file, drm_ioctl_t *func, void *kdata,
>  	if (drm_dev_is_unplugged(dev))
>  		return -ENODEV;
>  
> +	/* Enforce sane locking for modern driver ioctls. */
> +	if (unlikely(drm_dev_needs_global_mutex(dev)) && !(flags & DRM_UNLOCKED))
> +		mutex_lock(&drm_global_mutex);
> +
>  	retcode = drm_ioctl_permit(flags, file_priv);
>  	if (unlikely(retcode))
> -		return retcode;
> +		goto out;
>  
> -	/* Enforce sane locking for modern driver ioctls. */
> -	if (likely(!drm_core_check_feature(dev, DRIVER_LEGACY)) ||
> -	    (flags & DRM_UNLOCKED))
> -		retcode = func(dev, kdata, file_priv);
> -	else {
> -		mutex_lock(&drm_global_mutex);
> -		retcode = func(dev, kdata, file_priv);
> +	retcode = func(dev, kdata, file_priv);
> +
> +out:
> +	if (unlikely(drm_dev_needs_global_mutex(dev)) && !(flags & DRM_UNLOCKED))
>  		mutex_unlock(&drm_global_mutex);
> -	}
>  	return retcode;
>  }
> EXPORT_SYMBOL(drm_ioctl_kernel);
> --
> 2.25.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, Aug 18, 2021 at 03:38:19PM +0800, Desmond Cheong Zhi Xi wrote:
> There are three areas where we dereference struct drm_master without
> checking if the pointer is non-NULL.
>
> 1. drm_getmagic is called from the ioctl_handler. Since
> DRM_IOCTL_GET_MAGIC has no ioctl flags, drm_getmagic is run without
> any check that drm_file.master has been set.
>
> 2. Similarly, drm_getunique is called from the ioctl_handler, but
> DRM_IOCTL_GET_UNIQUE has no ioctl flags. So there is no guarantee that
> drm_file.master has been set.
I think the above two are impossible, due to the refcounting rules for
struct file.
> 3. drm_master_release can also be called without having a
> drm_file.master set. Here is one error path:
> drm_open():
>   drm_open_helper():
>     drm_master_open():
>       drm_new_set_master(); <--- returns -ENOMEM,
>                                  drm_file.master not set
>   drm_file_free():
>     drm_master_release(); <--- NULL ptr dereference
>                                (file_priv->master->magic_map)
>
> Fix these by checking if the master pointers are NULL before use.
>
> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
> ---
> drivers/gpu/drm/drm_auth.c | 16 ++++++++++++++--
> drivers/gpu/drm/drm_ioctl.c | 5 +++++
> 2 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
> index f9267b21556e..b7230604496b 100644
> --- a/drivers/gpu/drm/drm_auth.c
> +++ b/drivers/gpu/drm/drm_auth.c
> @@ -95,11 +95,18 @@ EXPORT_SYMBOL(drm_is_current_master);
>  int drm_getmagic(struct drm_device *dev, void *data, struct drm_file *file_priv)
>  {
>  	struct drm_auth *auth = data;
> +	struct drm_master *master;
>  	int ret = 0;
>  
>  	mutex_lock(&dev->master_mutex);
> +	master = file_priv->master;
> +	if (!master) {
> +		mutex_unlock(&dev->master_mutex);
> +		return -EINVAL;
> +	}
> +
>  	if (!file_priv->magic) {
> -		ret = idr_alloc(&file_priv->master->magic_map, file_priv,
> +		ret = idr_alloc(&master->magic_map, file_priv,
>  				1, 0, GFP_KERNEL);
>  		if (ret >= 0)
>  			file_priv->magic = ret;
> @@ -355,8 +362,12 @@ void drm_master_release(struct drm_file *file_priv)
>  
>  	mutex_lock(&dev->master_mutex);
>  	master = file_priv->master;
> +
> +	if (!master)
> +		goto unlock;
This is a bit convoluted: since we're in the single-threaded release path,
we don't need any locking for file_priv related things. Therefore we can
pull the master check out and just return directly.
But since it's a bit surprising, maybe add a comment that this can happen
when drm_master_open in drm_open_helper fails?
Another option, and maybe cleaner, would be to move the drm_master_release
from drm_file_free into drm_close_helper. That would be fully symmetrical
and should also fix the bug here?
-Daniel
> +
>  	if (file_priv->magic)
> -		idr_remove(&file_priv->master->magic_map, file_priv->magic);
> +		idr_remove(&master->magic_map, file_priv->magic);
>  
>  	if (!drm_is_current_master_locked(file_priv))
>  		goto out;
> @@ -379,6 +390,7 @@ void drm_master_release(struct drm_file *file_priv)
>  		drm_master_put(&file_priv->master);
>  		spin_unlock(&dev->master_lookup_lock);
>  	}
> +unlock:
>  	mutex_unlock(&dev->master_mutex);
>  }
>
> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> index 26f3a9ede8fe..4d029d3061d9 100644
> --- a/drivers/gpu/drm/drm_ioctl.c
> +++ b/drivers/gpu/drm/drm_ioctl.c
> @@ -121,6 +121,11 @@ int drm_getunique(struct drm_device *dev, void *data,
>  
>  	mutex_lock(&dev->master_mutex);
>  	master = file_priv->master;
> +	if (!master) {
> +		mutex_unlock(&dev->master_mutex);
> +		return -EINVAL;
> +	}
> +
>  	if (u->unique_len >= master->unique_len) {
>  		if (copy_to_user(u->unique, master->unique, master->unique_len)) {
>  			mutex_unlock(&dev->master_mutex);
> --
> 2.25.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
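The early-return shape suggested above would look something like this (a
sketch of the structure only, with the bulk of the release logic elided):

void drm_master_release(struct drm_file *file_priv)
{
	struct drm_device *dev = file_priv->minor->dev;
	struct drm_master *master = file_priv->master;

	/* master == NULL can happen when drm_master_open() in
	 * drm_open_helper() fails. We are in the single-threaded
	 * release path, so reading file_priv->master needs no lock. */
	if (!master)
		return;

	mutex_lock(&dev->master_mutex);
	if (file_priv->magic)
		idr_remove(&master->magic_map, file_priv->magic);
	/* ... drop master/lease state as in the current function ... */
	mutex_unlock(&dev->master_mutex);
}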
On Wed, Aug 18, 2021 at 03:38:17PM +0800, Desmond Cheong Zhi Xi wrote:
> When drm_file.master changes value, the corresponding
> drm_device.master_lookup_lock should be held.
>
> In drm_master_release, a call to drm_master_put sets the
> file_priv->master to NULL, so we protect this section with
> drm_device.master_lookup_lock.
>
> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
At this point all refcounts to drm_file have disappeared, so yeah, this is
a lockless access, but also no one can observe it anymore. See also the
next patch.
Hence I think the current code is fine.
-Daniel
> ---
> drivers/gpu/drm/drm_auth.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
> index 8efb58aa7d95..8c0e0dba1611 100644
> --- a/drivers/gpu/drm/drm_auth.c
> +++ b/drivers/gpu/drm/drm_auth.c
> @@ -373,8 +373,11 @@ void drm_master_release(struct drm_file *file_priv)
>  	}
>  
>  	/* drop the master reference held by the file priv */
> -	if (file_priv->master)
> +	if (file_priv->master) {
> +		spin_lock(&dev->master_lookup_lock);
>  		drm_master_put(&file_priv->master);
> +		spin_unlock(&dev->master_lookup_lock);
> +	}
>  	mutex_unlock(&dev->master_mutex);
>  }
>
> --
> 2.25.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, Aug 18, 2021 at 03:38:18PM +0800, Desmond Cheong Zhi Xi wrote:
> There is a window after calling drm_master_release, and before a file
> is freed, where drm_file can have is_master set to true, but both the
> drm_file and drm_device have no master.
>
> This could result in wrongly approving permissions in
> drm_is_current_master_locked. Add a check that fpriv->master is
> non-NULL to guard against this scenario.
>
> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx(a)gmail.com>
This should be impossible: drm_master_release is only called when the
struct file is released, which means all ioctls and anything else have
finished (they hold a temporary reference).
fpriv->master can change (if the drm_file becomes a newly minted master
through the setmaster ioctl and wasn't one before), but it cannot become
NULL before it's completely gone from the system.
-Daniel
> ---
> drivers/gpu/drm/drm_auth.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
> index 8c0e0dba1611..f9267b21556e 100644
> --- a/drivers/gpu/drm/drm_auth.c
> +++ b/drivers/gpu/drm/drm_auth.c
> @@ -66,7 +66,8 @@ static bool drm_is_current_master_locked(struct drm_file *fpriv)
>  	lockdep_assert_once(lockdep_is_held(&fpriv->minor->dev->master_lookup_lock) ||
>  			    lockdep_is_held(&fpriv->minor->dev->master_mutex));
>  
> -	return fpriv->is_master && drm_lease_owner(fpriv->master) == fpriv->minor->dev->master;
> +	return (fpriv->is_master && fpriv->master &&
> +		drm_lease_owner(fpriv->master) == fpriv->minor->dev->master);
>  }
>
> /**
> --
> 2.25.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
From: Rob Clark <robdclark(a)chromium.org>
This is based on discussion from a previous series[1] to add a "boost"
mechanism when, for example, vblank deadlines are missed. Instead of a
boost callback, this approach adds a way to set a deadline on the fence,
by which the waiter would like to see the fence signalled.
I've not yet had a chance to re-work the drm/msm part of this, but
wanted to send this out as an RFC in case I don't have a chance to
finish the drm/msm part this week.
Original description:
In some cases, like double-buffered rendering, missing vblanks can
trick the GPU into running at a lower frequency, when really we
want to be running at a higher frequency to not miss the vblanks
in the first place.
This is partially inspired by a trick i915 does, but implemented
via dma-fence for a couple of reasons:
1) To continue to be able to use the atomic helpers
2) To support cases where display and gpu are different drivers
[1] https://patchwork.freedesktop.org/series/90331/
v1: https://patchwork.freedesktop.org/series/93035/
v2: Move filtering out of later deadlines to fence implementation
to avoid increasing the size of dma_fence
Rob Clark (5):
dma-fence: Add deadline awareness
drm/vblank: Add helper to get next vblank time
drm/atomic-helper: Set fence deadline for vblank
drm/scheduler: Add fence deadline support
drm/msm: Add deadline based boost support
drivers/dma-buf/dma-fence.c | 20 +++++++
drivers/gpu/drm/drm_atomic_helper.c | 36 ++++++++++++
drivers/gpu/drm/drm_vblank.c | 31 ++++++++++
drivers/gpu/drm/msm/msm_fence.c | 76 +++++++++++++++++++++++++
drivers/gpu/drm/msm/msm_fence.h | 20 +++++++
drivers/gpu/drm/msm/msm_gpu.h | 1 +
drivers/gpu/drm/msm/msm_gpu_devfreq.c | 20 +++++++
drivers/gpu/drm/scheduler/sched_fence.c | 25 ++++++++
drivers/gpu/drm/scheduler/sched_main.c | 3 +
include/drm/drm_vblank.h | 1 +
include/drm/gpu_scheduler.h | 6 ++
include/linux/dma-fence.h | 16 ++++++
12 files changed, 255 insertions(+)
--
2.31.1
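The core of the idea is a single hint entry point plus an optional fence
op, roughly the following shape (paraphrased from the series description,
so details may differ from the actual patches; the set_deadline op is the
proposed addition, and with v2 each fence implementation is responsible
for filtering out deadlines later than one it already has):

#include <linux/dma-fence.h>
#include <linux/ktime.h>

void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
{
	/* A no-op for fences that don't opt into deadline support. */
	if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
		fence->ops->set_deadline(fence, deadline);
}

The atomic-helper patch in the series then calls this with the next
vblank timestamp for each in-fence a commit waits on, so a fence
implementation like msm's can boost GPU frequency when a deadline is at
risk.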
On Fri, Aug 13, 2021 at 01:41:26PM +0200, Paul Cercueil wrote:
> Hi,
>
> A few months ago we (ADI) tried to upstream the interface we use with our
> high-speed ADCs and DACs. It is a system with custom ioctls on the iio
> device node to dequeue and enqueue buffers (allocated with
> dma_alloc_coherent), which can then be mmap'd by userspace applications.
> Anyway, it was ultimately denied entry [1]; this API was okay in ~2014 when
> it was designed but it feels like re-inventing the wheel in 2021.
>
> Back to the drawing table, and we'd like to design something that we can
> actually upstream. This high-speed interface looks awfully similar to
> DMABUF, so we may try to implement a DMABUF interface for IIO, unless
> someone has a better idea.
To me this does sound a lot like a dma buf use case. The interesting
question to me is how to signal arrival of new data, or readiness to
consume more data. I suspect that the people who are actually using
dmabuf heavily at the moment (dri/media folks) might be able to chime
in a little more on that.
> Our first use case: we want userspace applications to be able to dequeue
> buffers of samples (from ADCs), and/or enqueue buffers of samples (for
> DACs), and to be able to manipulate them (mmapped buffers). With a DMABUF
> interface, I guess the userspace application would dequeue a dma buffer
> from the driver, mmap it, read/write the data, unmap it, then enqueue it to
> the IIO driver again so that it can be disposed of. Does that sound sane?
>
> Our second use case is - and that's where things get tricky - to be able to
> stream the samples to another computer for processing, over Ethernet or
> USB. Our typical setup is a high-speed ADC/DAC on a dev board with an FPGA
> and a weak soft-core or low-power CPU; processing the data in-situ is not
> an option. Copying the data from one buffer to another is not an option
> either (way too slow), so we absolutely want zero-copy.
>
> Usual userspace zero-copy techniques (vmsplice+splice, MSG_ZEROCOPY etc)
> don't really work with mmapped kernel buffers allocated for DMA [2] and/or
> have a huge overhead, so the way I see it, we would also need DMABUF
> support in both the Ethernet stack and USB (functionfs) stack. However, as
> far as I understood, DMABUF is mostly a DRM/V4L2 thing, so I am really not
> sure we have the right idea here.
>
> And finally, there is the new kid in town, io_uring. I am not very literate
> about the topic, but it does not seem to be able to handle DMA buffers
> (yet?). The idea that we could dequeue a buffer of samples from the IIO
> device and send it over the network in one single syscall is appealing,
> though.
Think of io_uring really just as an async syscall layer. It doesn't
replace DMA buffers, but can be used as a different, and for some
workloads more efficient, way to dispatch syscalls.
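To make the first use case concrete, the userspace side of a dmabuf-style
IIO interface could look something like this. Every ioctl name here is
made up for illustration, since the interface is exactly what this thread
is trying to design:

#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical ioctls on the iio device node; not a real kernel ABI. */
#define IIO_DEQUEUE_DMABUF _IOR('i', 0x90, int) /* yields a dmabuf fd */
#define IIO_ENQUEUE_DMABUF _IOW('i', 0x91, int) /* hands the fd back */

static int process_one_block(int iio_fd, size_t block_size)
{
	int buf_fd;
	void *samples;

	/* Dequeue a filled buffer of ADC samples as a dmabuf fd. */
	if (ioctl(iio_fd, IIO_DEQUEUE_DMABUF, &buf_fd) < 0)
		return -1;

	/* Zero-copy access: map the dmabuf and read the samples. */
	samples = mmap(NULL, block_size, PROT_READ, MAP_SHARED, buf_fd, 0);
	if (samples == MAP_FAILED) {
		close(buf_fd);
		return -1;
	}

	/* ... consume samples, or pass buf_fd on to another subsystem
	 * (network, USB) for zero-copy streaming ... */

	munmap(samples, block_size);

	/* Return the buffer to the driver so it can be refilled. */
	ioctl(iio_fd, IIO_ENQUEUE_DMABUF, &buf_fd);
	close(buf_fd);
	return 0;
}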