Hi Jonathan,
This is v2 of my patchset, which introduces a new userspace interface
based on DMABUF objects to complement the fileio API, and adds write()
support to the existing fileio API.
Changes since v1:
- The patches that were merged in v1 have been (obviously) dropped from
  this patchset.
- The patch that set the write-combine cache attribute has been dropped
  as well, as it was simply not useful.
- [01/12]:
  * Only remove the outgoing queue, and keep the incoming queue, as we
    want the buffer to start streaming data as soon as it is enabled
    (see the block life cycle sketch after this list).
  * Remove IIO_BLOCK_STATE_DEQUEUED, since it is now functionally the
    same as IIO_BLOCK_STATE_DONE.
- [02/12]:
  * Fix block->state not being reset in
    iio_dma_buffer_request_update() for output buffers (see the sketch
    after this list).
  * Only update block->bytes_used once, and add a comment explaining
    why we update it.
  * Add a comment explaining why we set a different state for output
    buffers in iio_dma_buffer_request_update().
  * Remove the useless cast to bool (!!) in iio_dma_buffer_io().
- [05/12]:
  * Only allow the new IOCTLs on the buffer FD created with
    IIO_BUFFER_GET_FD_IOCTL() (see the sketch after this list).
- [12/12]:
  * Explicitly state that the new interface is optional and is not
    implemented by all drivers.
  * The IOCTLs can now only be called on the buffer FD returned by
    IIO_BUFFER_GET_FD_IOCTL().
  * Move the page up a bit in the index, since it documents core
    functionality and is not driver-specific.
The patches not listed here have not been modified since v1.
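
For reference, here is a rough sketch of the block life cycle that
results from [01/12] and [02/12]. This is illustrative only, based on
the enum in include/linux/iio/buffer-dma.h; the exact code may differ:

    /* With IIO_BLOCK_STATE_DEQUEUED gone, a block is either waiting
     * in the incoming queue, owned by the DMA controller, completed,
     * or dead. */
    enum iio_block_state {
            IIO_BLOCK_STATE_QUEUED,
            IIO_BLOCK_STATE_ACTIVE,
            IIO_BLOCK_STATE_DONE,
            IIO_BLOCK_STATE_DEAD,
    };

    /* In iio_dma_buffer_request_update(), input blocks are queued
     * right away, so that capture starts as soon as the buffer is
     * enabled; output blocks are marked DONE, so that userspace has
     * to fill and enqueue them first. */
    if (queue->buffer.direction == IIO_BUFFER_DIRECTION_IN)
            block->state = IIO_BLOCK_STATE_QUEUED;
    else
            block->state = IIO_BLOCK_STATE_DONE;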
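
Likewise, the gist of the [05/12] change, sketched with an assumed
handler name (not necessarily what the patch uses): the new ioctls are
dispatched from the file_operations of the buffer FD obtained through
IIO_BUFFER_GET_FD_IOCTL(), never from the legacy character device:

    static long iio_buffer_chrdev_ioctl(struct file *filp,
                                        unsigned int cmd,
                                        unsigned long arg)
    {
            switch (cmd) {
            /* the new DMABUF ioctls are handled here, and only
             * here, so calling them on the main chardev fails */
            default:
                    return -EINVAL;
            }
    }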
Cheers,
-Paul
Alexandru Ardelean (1):
iio: buffer-dma: split iio_dma_buffer_fileio_free() function
Paul Cercueil (11):
iio: buffer-dma: Get rid of outgoing queue
iio: buffer-dma: Enable buffer write support
iio: buffer-dmaengine: Support specifying buffer direction
iio: buffer-dmaengine: Enable write support
iio: core: Add new DMABUF interface infrastructure
iio: buffer-dma: Use DMABUFs instead of custom solution
iio: buffer-dma: Implement new DMABUF based userspace API
iio: buffer-dmaengine: Support new DMABUF based userspace API
iio: core: Add support for cyclic buffers
iio: buffer-dmaengine: Add support for cyclic buffers
Documentation: iio: Document high-speed DMABUF based API
Documentation/driver-api/dma-buf.rst | 2 +
Documentation/iio/dmabuf_api.rst | 94 +++
Documentation/iio/index.rst | 2 +
drivers/iio/adc/adi-axi-adc.c | 3 +-
drivers/iio/buffer/industrialio-buffer-dma.c | 610 ++++++++++++++----
.../buffer/industrialio-buffer-dmaengine.c | 42 +-
drivers/iio/industrialio-buffer.c | 60 ++
include/linux/iio/buffer-dma.h | 38 +-
include/linux/iio/buffer-dmaengine.h | 5 +-
include/linux/iio/buffer_impl.h | 8 +
include/uapi/linux/iio/buffer.h | 30 +
11 files changed, 749 insertions(+), 145 deletions(-)
create mode 100644 Documentation/iio/dmabuf_api.rst
--
2.34.1
On Mon, Mar 21, 2022 at 04:54:26PM -0700, "T.J. Mercier"
<tjmercier@google.com> wrote:
> Since the charge is duplicated in two cgroups for a short period
> before it is uncharged from the source cgroup I guess the situation
> you're thinking about is a global (or common ancestor) limit?
The common-ancestor case was on my mind (after the shortcut for
self-transfers).
> I can see how that would be a problem for transfers done this way and
> an alternative would be to swap the order of the charge operations:
> first uncharge, then try_charge. To be certain the uncharge is
> reversible if the try_charge fails, I think I'd need either a mutex
> used at all gpucg_*charge call sites or access to the gpucg_mutex,
Yes, that would provide safe conditions for such operations, although
I'm not sure these special types of memory can afford a global lock on
their fast paths.
> which implies adding transfer support to gpu.c as part of the gpucg_*
> API itself and calling it here. Am I following correctly here?
My idea was to provide a special API (apart from
gpucg_{try_charge,uncharge}) to facilitate transfers...
> This series doesn't actually add limit support just accounting, but
> I'd like to get it right here.
...which could be implemented (or changed) depending on how the charging
is realized internally.
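
To make that concrete, a hypothetical sketch of such a transfer helper;
the gpucg_* names follow this series, but the helper, its signature and
the locking are assumptions rather than actual kernel API:

    /* Uncharge the source and charge the destination as one atomic
     * step: assuming gpucg_mutex is held by all (un)charge paths,
     * nothing can consume the released headroom in between, a common-
     * ancestor limit never sees the amount double-counted, and the
     * uncharge can be rolled back if charging the destination fails. */
    int gpucg_transfer_charge(struct gpucg *src, struct gpucg *dst,
                              struct gpucg_device *dev, u64 size)
    {
            int ret;

            mutex_lock(&gpucg_mutex);
            gpucg_uncharge(src, dev, size);
            ret = gpucg_try_charge(dst, dev, size);
            if (ret)
                    /* roll back; this should fit, as we just
                     * released the same amount */
                    gpucg_try_charge(src, dev, size);
            mutex_unlock(&gpucg_mutex);

            return ret;
    }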
Michal