On 19.11.2021 17:48, Andy Shevchenko wrote:
>> + if (i2c_dev->nclocks == 0)
>> + return;
> Why? Make clocks optional.
This check shouldn't be needed because both clk_disable() and
clk_bulk_unprepare() should handle NULL/zero clocks without problems.
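
For illustration, a minimal sketch of the teardown path without the guard (assuming the clocks live in a clk_bulk_data array; the struct and function names here are made up, not taken from the driver):

#include <linux/clk.h>

struct foo_i2c_dev {			/* illustrative only */
	struct clk_bulk_data *clocks;
	unsigned int nclocks;
};

static void foo_i2c_release_clocks(struct foo_i2c_dev *i2c_dev)
{
	/*
	 * clk_bulk_disable() and clk_bulk_unprepare() simply iterate over
	 * nclocks, so a zero count falls through as a no-op, and the
	 * single-clock clk_disable()/clk_unprepare() helpers accept NULL.
	 * An "if (i2c_dev->nclocks == 0) return;" guard adds nothing.
	 */
	clk_bulk_disable(i2c_dev->nclocks, i2c_dev->clocks);
	clk_bulk_unprepare(i2c_dev->nclocks, i2c_dev->clocks);
}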
On 11/21/21 6:52 PM, Paul Cercueil wrote:
> Hi Lars,
>
> On Sun, Nov 21 2021 at 17:23:35 +0100, Lars-Peter Clausen
> <lars(a)metafoo.de> wrote:
>> On 11/15/21 3:19 PM, Paul Cercueil wrote:
>>> The buffer-dma code was using two queues, incoming and outgoing, to
>>> manage the state of the blocks in use.
>>>
>>> While this totally works, it adds some complexity to the code,
>>> especially since the code only manages 2 blocks. It is much easier to
>>> just check each block's state manually, and keep a counter for the next
>>> block to dequeue.
>>>
>>> Since the new DMABUF based API wouldn't use these incoming and outgoing
>>> queues anyway, getting rid of them now makes the upcoming changes
>>> simpler.
>>>
>>> Signed-off-by: Paul Cercueil <paul(a)crapouillou.net>
>> The outgoing queue is going to be replaced by fences, but I think we
>> need to keep the incoming queue.
>
> Blocks are always accessed in sequential order, so we now have a
> "queue->next_dequeue" that cycles between the buffers allocated for
> fileio.
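
For reference, the counter-based dequeue boils down to something like the sketch below (the next_dequeue field name is taken from the description above; the rest is assumed, not copied from the patch):

#include <linux/iio/buffer-dma.h>

static struct iio_dma_buffer_block *
iio_dma_buffer_next_block(struct iio_dma_buffer_queue *queue)
{
	unsigned int idx = queue->fileio.next_dequeue;
	struct iio_dma_buffer_block *block = queue->fileio.blocks[idx];

	if (block->state != IIO_BLOCK_STATE_DONE)
		return NULL;	/* next block in sequence not ready yet */

	/* Blocks complete in order, so a cyclic cursor replaces the list. */
	queue->fileio.next_dequeue =
		(idx + 1) % ARRAY_SIZE(queue->fileio.blocks);

	return block;
}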
>
>>> [...]
>>> @@ -442,28 +435,33 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_disable);
>>> static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
>>> struct iio_dma_buffer_block *block)
>>> {
>>> - if (block->state == IIO_BLOCK_STATE_DEAD) {
>>> + if (block->state == IIO_BLOCK_STATE_DEAD)
>>> iio_buffer_block_put(block);
>>> - } else if (queue->active) {
>>> + else if (queue->active)
>>> iio_dma_buffer_submit_block(queue, block);
>>> - } else {
>>> + else
>>> block->state = IIO_BLOCK_STATE_QUEUED;
>>> - list_add_tail(&block->head, &queue->incoming);
>> If iio_dma_buffer_enqueue() is called with a dmabuf and the buffer is
>> not active, it will be marked as queued, but we don't actually keep a
>> reference to it anywhere. It will never be submitted to the DMA, and
>> it will never be signaled as completed.
>
> We do keep a reference to the buffers, in the queue->fileio.blocks
> array. When the buffer is enabled, all the blocks in that array that
> are in the "queued" state will be submitted to the DMA.
>
But not when used in combination with the DMA buf changes later in this
series.
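
To make that concrete, the enable path only walks the small fileio block array, roughly along these lines (a sketch; names assumed from industrialio-buffer-dma.c). A block attached through the DMABUF interface is not in that array, so marking it QUEUED by itself would never get it submitted:

#include <linux/iio/buffer-dma.h>

static void iio_dma_buffer_submit_queued(struct iio_dma_buffer_queue *queue)
{
	struct iio_dma_buffer_block *block;
	unsigned int i;

	/* Only the blocks allocated for fileio are known here. */
	for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
		block = queue->fileio.blocks[i];
		if (block && block->state == IIO_BLOCK_STATE_QUEUED)
			iio_dma_buffer_submit_block(queue, block);
	}
}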
On 11/15/21 3:19 PM, Paul Cercueil wrote:
> The buffer-dma code was using two queues, incoming and outgoing, to
> manage the state of the blocks in use.
>
> While this totally works, it adds some complexity to the code,
> especially since the code only manages 2 blocks. It is much easier to
> just check each block's state manually, and keep a counter for the next
> block to dequeue.
>
> Since the new DMABUF based API wouldn't use these incoming and outgoing
> queues anyway, getting rid of them now makes the upcoming changes
> simpler.
>
> Signed-off-by: Paul Cercueil <paul(a)crapouillou.net>
The outgoing queue is going to be replaced by fences, but I think we
need to keep the incoming queue.
> [...]
> @@ -442,28 +435,33 @@ EXPORT_SYMBOL_GPL(iio_dma_buffer_disable);
> static void iio_dma_buffer_enqueue(struct iio_dma_buffer_queue *queue,
> struct iio_dma_buffer_block *block)
> {
> - if (block->state == IIO_BLOCK_STATE_DEAD) {
> + if (block->state == IIO_BLOCK_STATE_DEAD)
> iio_buffer_block_put(block);
> - } else if (queue->active) {
> + else if (queue->active)
> iio_dma_buffer_submit_block(queue, block);
> - } else {
> + else
> block->state = IIO_BLOCK_STATE_QUEUED;
> - list_add_tail(&block->head, &queue->incoming);
If iio_dma_buffer_enqueue() is called with a dmabuf and the buffer is
not active, it will be marked as queued, but we don't actually keep a
reference to it anywhere. It will never be submitted to the DMA, and it
will never be signaled as completed.
On 16.11.21 at 17:31, Laurent Pinchart wrote:
> On Tue, Nov 16, 2021 at 05:02:25PM +0100, Daniel Vetter wrote:
>> On Mon, Nov 15, 2021 at 02:57:37PM +0000, Paul Cercueil wrote:
>>> On Mon, Nov 15 2021 at 15:37:16 +0100, Daniel Vetter wrote:
>>>> On Mon, Nov 15, 2021 at 02:19:10PM +0000, Paul Cercueil wrote:
>>>>> Hi Jonathan,
>>>>>
>>>>> This patchset introduces a new userspace interface based on DMABUF
>>>>> objects, to complement the existing fileio based API.
>>>>>
>>>>> The advantage of this DMABUF based interface vs. the fileio
>>>>> interface is that it avoids an extra copy of the data between the
>>>>> kernel and userspace. This is particularly useful for high-speed
>>>>> devices which produce several megabytes or even gigabytes of data per
>>>>> second.
>>>>>
>>>>> The first few patches [01/15] to [03/15] are not really related, but
>>>>> help reduce the size of the patches that introduce the new API.
>>>>>
>>>>> Patches [04/15] to [06/15] enable write() support in the buffer-dma
>>>>> implementation of the buffer API, to continue the work done by
>>>>> Mihail Chindris.
>>>>>
>>>>> Patches [07/15] to [12/15] introduce the new DMABUF based API.
>>>>>
>>>>> Patches [13/15] and [14/15] add support for cyclic buffers, only through
>>>>> the new API. A cyclic buffer will be repeated on the output until the
>>>>> buffer is disabled.
>>>>>
>>>>> Patch [15/15] adds documentation about the new API.
>>>>>
>>>>> For now, the API allows you to alloc DMABUF objects and mmap() them to
>>>>> read or write the samples. It does not yet allow importing DMABUFs
>>>>> parented to other subsystems, but that should eventually be possible
>>>>> once it's wired up.
>>>>>
>>>>> This patchset is inspired by the "mmap interface" that was previously
>>>>> submitted by Alexandru Ardelean and Lars-Peter Clausen, so it would be
>>>>> great if I could get a review from you guys. Alexandru's commit was
>>>>> signed with his @analog.com address but he doesn't work at ADI anymore,
>>>>> so I believe I'll need him to sign with a new email.
>>>> Why dma-buf? dma-buf looks like something super generic and useful, until
>>>> you realize that there's a metric ton of gpu/accelerator baggage piled in.
>>>> So unless buffer sharing with a gpu/video/accel/whatever device is the
> And cameras (maybe they're included in "whatever" ?).
>
>>>> goal here, and it's just for a convenient way to get at buffer handles,
>>>> this doesn't sound like a good idea.
>>> Good question. The first reason is that a somewhat similar API was intended
>>> before[1], but was refused upstream as it was kind of re-inventing the wheel.
>>>
>>> The second reason is that we want to be able to share buffers too, not with
>>> gpu/video but with the network (zctap) and in the future with USB
>>> (functionFS) too.
>>>
>>> [1]: https://lore.kernel.org/linux-iio/20210217073638.21681-1-alexandru.ardelean…
>> Hm, is that code already merged upstream?
>>
>> I know that dma-buf looks really generic, but like I said if there's no
>> need ever to interface with any of the gpu buffer sharing it might be
>> better to use something else (like get_user_pages maybe, would that work?).
> Not GUP please. That brings another set of issues, especially when
> dealing with DMA; we've suffered enough from the USERPTR support in V4L2
> to want to avoid repeating this. Let's make dma-buf better instead.
Yeah, when comparing GUP and DMA-buf, the latter is clearly the lesser evil.
DMA-buf indeed has some design issues we need to work on, especially
around the async operations and synchronization. But I still think those
are solvable.
GUP on the other hand has some hard fundamental problems which you can
only solve completely if the hardware is capable of fast and reliable
recoverable page faults.
Regards,
Christian.
On Mon, Nov 15, 2021 at 02:57:37PM +0000, Paul Cercueil wrote:
> Hi Daniel,
>
> On Mon, Nov 15 2021 at 15:37:16 +0100, Daniel Vetter <daniel(a)ffwll.ch>
> wrote:
> > On Mon, Nov 15, 2021 at 02:19:10PM +0000, Paul Cercueil wrote:
> > > Hi Jonathan,
> > >
> > > This patchset introduces a new userspace interface based on DMABUF
> > > objects, to complement the existing fileio based API.
> > >
> > > The advantage of this DMABUF based interface vs. the fileio
> > > interface is that it avoids an extra copy of the data between the
> > > kernel and userspace. This is particularly useful for high-speed
> > > devices which produce several megabytes or even gigabytes of data
> > > per second.
> > >
> > > The first few patches [01/15] to [03/15] are not really related, but
> > > help reduce the size of the patches that introduce the new API.
> > >
> > > Patches [04/15] to [06/15] enable write() support in the buffer-dma
> > > implementation of the buffer API, to continue the work done by
> > > Mihail Chindris.
> > >
> > > Patches [07/15] to [12/15] introduce the new DMABUF based API.
> > >
> > > Patches [13/15] and [14/15] add support for cyclic buffers, only
> > > through the new API. A cyclic buffer will be repeated on the output
> > > until the buffer is disabled.
> > >
> > > Patch [15/15] adds documentation about the new API.
> > >
> > > For now, the API allows you to alloc DMABUF objects and mmap() them
> > > to read or write the samples. It does not yet allow importing DMABUFs
> > > parented to other subsystems, but that should eventually be possible
> > > once it's wired up.
> > >
> > > This patchset is inspired by the "mmap interface" that was previously
> > > submitted by Alexandru Ardelean and Lars-Peter Clausen, so it would be
> > > great if I could get a review from you guys. Alexandru's commit was
> > > signed with his @analog.com address but he doesn't work at ADI anymore,
> > > so I believe I'll need him to sign with a new email.
> >
> > Why dma-buf? dma-buf looks like something super generic and useful, until
> > you realize that there's a metric ton of gpu/accelerator baggage piled in.
> > So unless buffer sharing with a gpu/video/accel/whatever device is the
> > goal here, and it's just for a convenient way to get at buffer handles,
> > this doesn't sound like a good idea.
>
> Good question. The first reason is that a somewhat similar API was intended
> before[1], but was refused upstream as it was kind of re-inventing the wheel.
>
> The second reason is that we want to be able to share buffers too, not with
> gpu/video but with the network (zctap) and in the future with USB
> (functionFS) too.
>
> [1]: https://lore.kernel.org/linux-iio/20210217073638.21681-1-alexandru.ardelean…
Hm, is that code already merged upstream?
I know that dma-buf looks really generic, but like I said if there's no
need ever to interface with any of the gpu buffer sharing it might be
better to use something else (like get_user_pages maybe, would that work?).
> > Also if the idea is to do this with gpus/accelerators then I'd really
> > like to see the full thing, since most likely at that point you also
> > want dma_fence. And once we talk dma_fence things get truly horrible
> > from a locking pov :-( Or well, just highly constrained and I get to
> > review what iio is doing with these buffers to make sure it all fits.
>
> There is some dma_fence action in patch #10, which is enough for the
> userspace apps to use the API.
>
> What "horribleness" are we talking about here? It doesn't look that scary to
> me, but I certainly don't have the complete picture.
You need to annotate all the code involved in signalling that dma_fence
using dma_fence_begin/end_signalling, and then enable full lockdep and
everything.
You can safely assume you'll find bugs, because we even have bugs about
this in gpu drivers (where that annotation isn't fully rolled out yet).
The tldr is that you can't allocate memory in there. And a pile of other
restrictions, but not being able to allocate memory (well, GFP_ATOMIC is
ok, but that can fail) is a very serious restriction.
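
Concretely, the annotation looks roughly like this; dma_fence_begin_signalling()/dma_fence_end_signalling() are the real helpers from <linux/dma-fence.h>, while the surrounding callback and struct are made up for illustration:

#include <linux/dma-fence.h>

struct iio_xfer {			/* illustrative only */
	struct dma_fence fence;
};

/* Hypothetical DMA completion callback that signals the block's fence. */
static void iio_xfer_done(struct iio_xfer *xfer)
{
	bool cookie = dma_fence_begin_signalling();

	/*
	 * Everything up to dma_fence_end_signalling() is part of the fence
	 * signalling critical section: with full lockdep enabled, GFP_KERNEL
	 * allocations, or locks that are ever held around such allocations,
	 * get flagged, because they could deadlock against a fence wait.
	 */
	dma_fence_signal(&xfer->fence);

	dma_fence_end_signalling(cookie);
}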
-Daniel
>
> Cheers,
> -Paul
>
> > Cheers, Daniel
> >
> > >
> > > Cheers,
> > > -Paul
> > >
> > > Alexandru Ardelean (1):
> > > iio: buffer-dma: split iio_dma_buffer_fileio_free() function
> > >
> > > Paul Cercueil (14):
> > > iio: buffer-dma: Get rid of incoming/outgoing queues
> > > iio: buffer-dma: Remove unused iio_buffer_block struct
> > > iio: buffer-dma: Use round_down() instead of rounddown()
> > > iio: buffer-dma: Enable buffer write support
> > > iio: buffer-dmaengine: Support specifying buffer direction
> > > iio: buffer-dmaengine: Enable write support
> > > iio: core: Add new DMABUF interface infrastructure
> > > iio: buffer-dma: Use DMABUFs instead of custom solution
> > > iio: buffer-dma: Implement new DMABUF based userspace API
> > > iio: buffer-dma: Boost performance using write-combine cache setting
> > > iio: buffer-dmaengine: Support new DMABUF based userspace API
> > > iio: core: Add support for cyclic buffers
> > > iio: buffer-dmaengine: Add support for cyclic buffers
> > > Documentation: iio: Document high-speed DMABUF based API
> > >
> > > Documentation/driver-api/dma-buf.rst | 2 +
> > > Documentation/iio/dmabuf_api.rst | 94 +++
> > > Documentation/iio/index.rst | 2 +
> > > drivers/iio/adc/adi-axi-adc.c | 3 +-
> > > drivers/iio/buffer/industrialio-buffer-dma.c | 670 ++++++++++++++----
> > > .../buffer/industrialio-buffer-dmaengine.c | 42 +-
> > > drivers/iio/industrialio-buffer.c | 49 ++
> > > include/linux/iio/buffer-dma.h | 43 +-
> > > include/linux/iio/buffer-dmaengine.h | 5 +-
> > > include/linux/iio/buffer_impl.h | 8 +
> > > include/uapi/linux/iio/buffer.h | 30 +
> > > 11 files changed, 783 insertions(+), 165 deletions(-)
> > > create mode 100644 Documentation/iio/dmabuf_api.rst
> > >
> > > --
> > > 2.33.0
> > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
>
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Mon, Nov 15, 2021 at 6:43 AM Akhil P Oommen <akhilpo(a)codeaurora.org> wrote:
>
> On 11/12/2021 12:54 AM, Rob Clark wrote:
> > From: Rob Clark <robdclark(a)chromium.org>
> >
> > When converting to use an idr to map userspace fence seqno values back
> > to a dma_fence, we lost the error return when userspace passes a seqno
> > that is larger than the last submitted fence. Restore this check.
> >
> > Reported-by: Akhil P Oommen <akhilpo(a)codeaurora.org>
> > Fixes: a61acbbe9cf8 ("drm/msm: Track "seqno" fences by idr")
> > Signed-off-by: Rob Clark <robdclark(a)chromium.org>
> > ---
> > Note: I will rebase "drm/msm: Handle fence rollover" on top of this,
> > to simplify backporting this patch to stable kernels
> >
> > drivers/gpu/drm/msm/msm_drv.c | 6 ++++++
> > drivers/gpu/drm/msm/msm_gem_submit.c | 1 +
> > drivers/gpu/drm/msm/msm_gpu.h | 3 +++
> > 3 files changed, 10 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> > index cb14d997c174..56500eb5219e 100644
> > --- a/drivers/gpu/drm/msm/msm_drv.c
> > +++ b/drivers/gpu/drm/msm/msm_drv.c
> > @@ -967,6 +967,12 @@ static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id,
> > struct dma_fence *fence;
> > int ret;
> >
> > + if (fence_id > queue->last_fence) {
>
> But fence_id can wrap around and then this check won't be valid.
That is correct, but see my note about rebasing "drm/msm: Handle fence
rollover" on top of this patch, so that this patch can be more easily
cherry-picked to stable/LTS branches.
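
For reference, the wraparound-safe version of that check is the usual trick of comparing the difference as a signed value, along these lines (helper name illustrative; the actual rollover patch may differ):

/* True if seqno a is after b, even across a 32-bit wraparound. */
static inline bool fence_after(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

	/* wait_fence() would then use: */
	if (fence_after(fence_id, queue->last_fence)) {
		DRM_ERROR_RATELIMITED("waiting on invalid fence: %u (of %u)\n",
				      fence_id, queue->last_fence);
		return -EINVAL;
	}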
BR,
-R
> -Akhil.
>
> > + DRM_ERROR_RATELIMITED("waiting on invalid fence: %u (of %u)\n",
> > + fence_id, queue->last_fence);
> > + return -EINVAL;
> > + }
> > +
> > /*
> > * Map submitqueue scoped "seqno" (which is actually an idr key)
> > * back to underlying dma-fence
> > diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> > index 151d19e4453c..a38f23be497d 100644
> > --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> > +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> > @@ -911,6 +911,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
> > drm_sched_entity_push_job(&submit->base, queue->entity);
> >
> > args->fence = submit->fence_id;
> > + queue->last_fence = submit->fence_id;
> >
> > msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs);
> > msm_process_post_deps(post_deps, args->nr_out_syncobjs,
> > diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> > index bd4e0024033e..e73a5bb03544 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu.h
> > +++ b/drivers/gpu/drm/msm/msm_gpu.h
> > @@ -376,6 +376,8 @@ static inline int msm_gpu_convert_priority(struct msm_gpu *gpu, int prio,
> > * @ring_nr: the ringbuffer used by this submitqueue, which is determined
> > * by the submitqueue's priority
> > * @faults: the number of GPU hangs associated with this submitqueue
> > + * @last_fence: the sequence number of the last allocated fence (for error
> > + * checking)
> > * @ctx: the per-drm_file context associated with the submitqueue (ie.
> > * which set of pgtables do submits jobs associated with the
> > * submitqueue use)
> > @@ -391,6 +393,7 @@ struct msm_gpu_submitqueue {
> > u32 flags;
> > u32 ring_nr;
> > int faults;
> > + uint32_t last_fence;
> > struct msm_file_private *ctx;
> > struct list_head node;
> > struct idr fence_idr;
> >
>
On Mon, Nov 15, 2021 at 02:19:10PM +0000, Paul Cercueil wrote:
> Hi Jonathan,
>
> This patchset introduces a new userspace interface based on DMABUF
> objects, to complement the existing fileio based API.
>
> The advantage of this DMABUF based interface vs. the fileio
> interface is that it avoids an extra copy of the data between the
> kernel and userspace. This is particularly useful for high-speed
> devices which produce several megabytes or even gigabytes of data per
> second.
>
> The first few patches [01/15] to [03/15] are not really related, but
> help reduce the size of the patches that introduce the new API.
>
> Patches [04/15] to [06/15] enable write() support in the buffer-dma
> implementation of the buffer API, to continue the work done by
> Mihail Chindris.
>
> Patches [07/15] to [12/15] introduce the new DMABUF based API.
>
> Patches [13/15] and [14/15] add support for cyclic buffers, only through
> the new API. A cyclic buffer will be repeated on the output until the
> buffer is disabled.
>
> Patch [15/15] adds documentation about the new API.
>
> For now, the API allows you to alloc DMABUF objects and mmap() them to
> read or write the samples. It does not yet allow importing DMABUFs
> parented to other subsystems, but that should eventually be possible
> once it's wired up.
>
> This patchset is inspired by the "mmap interface" that was previously
> submitted by Alexandru Ardelean and Lars-Peter Clausen, so it would be
> great if I could get a review from you guys. Alexandru's commit was
> signed with his @analog.com address but he doesn't work at ADI anymore,
> so I believe I'll need him to sign with a new email.
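
As a rough illustration of the flow the cover letter describes, userspace would allocate a buffer, mmap() it, and let the DMA engine read or write those pages directly. The ioctl name and request struct below are placeholders standing in for whatever the series actually defines in include/uapi/linux/iio/buffer.h; check the real UAPI header before relying on them:

/* Sketch only: IIO_BUFFER_DMABUF_ALLOC_IOCTL and struct iio_dmabuf_alloc_req
 * are assumed names, not verified against the series. */
#include <linux/iio/buffer.h>	/* uapi header added by the series */
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <stddef.h>

static void *map_one_block(int buf_fd, size_t size)
{
	struct iio_dmabuf_alloc_req req = { .size = size };
	int dmabuf_fd;

	/* Ask the IIO core to allocate a DMABUF for one block; the ioctl
	 * returns a dmabuf file descriptor on success. */
	dmabuf_fd = ioctl(buf_fd, IIO_BUFFER_DMABUF_ALLOC_IOCTL, &req);
	if (dmabuf_fd < 0)
		return NULL;

	/* mmap() the DMABUF: the DMA engine reads/writes these pages
	 * directly, so there is no extra kernel/user copy as with
	 * read()/write() on the fileio interface. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    dmabuf_fd, 0);
}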
Why dma-buf? dma-buf looks like something super generic and useful, until
you realize that there's a metric ton of gpu/accelerator baggage piled in.
So unless buffer sharing with a gpu/video/accel/whatever device is the
goal here, and it's just for a convenient way to get at buffer handles,
this doesn't sound like a good idea.
Also if the idea is to do this with gpus/accelerators then I'd really like to
see the full thing, since most likely at that point you also want
dma_fence. And once we talk dma_fence things get truly horrible from a
locking pov :-( Or well, just highly constrained and I get to review what
iio is doing with these buffers to make sure it all fits.
Cheers, Daniel
>
> Cheers,
> -Paul
>
> Alexandru Ardelean (1):
> iio: buffer-dma: split iio_dma_buffer_fileio_free() function
>
> Paul Cercueil (14):
> iio: buffer-dma: Get rid of incoming/outgoing queues
> iio: buffer-dma: Remove unused iio_buffer_block struct
> iio: buffer-dma: Use round_down() instead of rounddown()
> iio: buffer-dma: Enable buffer write support
> iio: buffer-dmaengine: Support specifying buffer direction
> iio: buffer-dmaengine: Enable write support
> iio: core: Add new DMABUF interface infrastructure
> iio: buffer-dma: Use DMABUFs instead of custom solution
> iio: buffer-dma: Implement new DMABUF based userspace API
> iio: buffer-dma: Boost performance using write-combine cache setting
> iio: buffer-dmaengine: Support new DMABUF based userspace API
> iio: core: Add support for cyclic buffers
> iio: buffer-dmaengine: Add support for cyclic buffers
> Documentation: iio: Document high-speed DMABUF based API
>
> Documentation/driver-api/dma-buf.rst | 2 +
> Documentation/iio/dmabuf_api.rst | 94 +++
> Documentation/iio/index.rst | 2 +
> drivers/iio/adc/adi-axi-adc.c | 3 +-
> drivers/iio/buffer/industrialio-buffer-dma.c | 670 ++++++++++++++----
> .../buffer/industrialio-buffer-dmaengine.c | 42 +-
> drivers/iio/industrialio-buffer.c | 49 ++
> include/linux/iio/buffer-dma.h | 43 +-
> include/linux/iio/buffer-dmaengine.h | 5 +-
> include/linux/iio/buffer_impl.h | 8 +
> include/uapi/linux/iio/buffer.h | 30 +
> 11 files changed, 783 insertions(+), 165 deletions(-)
> create mode 100644 Documentation/iio/dmabuf_api.rst
>
> --
> 2.33.0
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On 11/8/21 11:54 PM, Pavel Machek wrote:
> Hi!
>
> This series is truncated... I only got the first patches. Similarly, the
> 5.10 series is truncated; [PATCH AUTOSEL 5.10 035/101] media: s5p-mfc: Add
> checking to s5p_mfc_probe... is the last one I got.
>
> I got all the patches before that, so I believe it is not a problem on
> my side, but I'd not mind someone confirming they are seeing the same
> problem...
Yes, several of the patch series were incomplete for me also...
--
~Randy