On Thu, Oct 24, 2024 at 03:23:06PM +0100, Pavel Begunkov wrote:
> That's not what this series does. It adds the new memory_provider_ops set of hooks, with one implementation for dmabufs and one for io_uring zero copy.
> First, it's not a _new_ abstraction over a buffer as you called it before; the abstraction (net_iov) is already merged.
Umm, it is a new ops vector.
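[For context, the kind of ops vector under discussion looks roughly like this; the hook names and signatures below are illustrative assumptions, not the exact in-tree interface:]

        /* Illustrative sketch only: an allocation-hook vector the page
         * pool calls instead of allocating plain pages itself.  Names
         * and signatures are assumptions, not the merged interface. */
        struct memory_provider_ops {
                int        (*init)(struct page_pool *pool);
                void       (*destroy)(struct page_pool *pool);
                netmem_ref (*alloc_netmems)(struct page_pool *pool, gfp_t gfp);
                bool       (*release_netmem)(struct page_pool *pool,
                                             netmem_ref netmem);
        };

[The dmabuf-backed devmem provider and the io_uring zero copy provider would then be two implementations of the same vector.]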
> Second, you mention devmem TCP, and it's not just a page pool with "dmabufs": it's a user API to use it, plus other memory-agnostic allocation logic. And yes, dmabufs are the least technically important part there. Just having a dmabuf handle solves absolutely nothing.
It solves a lot, because it provides a proper abstraction.
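[For reference, the user-facing half of devmem TCP that Pavel refers to: payload stays in the bound dmabuf and the kernel returns frag descriptors via cmsg. A minimal sketch, assuming a TCP socket whose RX queue was already bound to a dmabuf through the netdev netlink API; treat the details as an illustration of the merged series, not as authoritative:]

        /* Sketch: drain one devmem TCP receive.  MSG_SOCK_DEVMEM,
         * SCM_DEVMEM_DMABUF and struct dmabuf_cmsg come from recent
         * kernel uapi headers. */
        #include <sys/socket.h>
        #include <sys/uio.h>
        #include <linux/uio.h>          /* struct dmabuf_cmsg */

        static void drain_devmem(int sock)
        {
                char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg)) * 16];
                char linear[4096];
                struct iovec iov = { .iov_base = linear,
                                     .iov_len = sizeof(linear) };
                struct msghdr msg = {
                        .msg_iov        = &iov,
                        .msg_iovlen     = 1,
                        .msg_control    = ctrl,
                        .msg_controllen = sizeof(ctrl),
                };
                size_t total = 0;

                if (recvmsg(sock, &msg, MSG_SOCK_DEVMEM) < 0)
                        return;

                for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm;
                     cm = CMSG_NXTHDR(&msg, cm)) {
                        if (cm->cmsg_level != SOL_SOCKET ||
                            cm->cmsg_type != SCM_DEVMEM_DMABUF)
                                continue;
                        struct dmabuf_cmsg *dc = (void *)CMSG_DATA(cm);
                        /* frag_offset/frag_size locate the payload inside
                         * the bound dmabuf; frag_token is handed back via
                         * SO_DEVMEM_DONTNEED once the data is consumed. */
                        total += dc->frag_size;
                }
        }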
>> So you are precluding zero copy RX into anything but your magic io_uring buffers, and using an odd abstraction for that.
> Right, the io_uring zero copy RX API expects the transfer to happen into io_uring controlled buffers, and that's the entire idea. The buffers are based on an existing network-specific abstraction (net_iov), which is not restricted to pages or anything specific in the long run, but whose flow from the net stack to the user and back is controlled by io_uring. If you worry about abuse: io_uring can't even sanely initialise those buffers itself, and therefore asks the page pool code to do that.
No, I worry about trying to use io_uring for no good reason. This precludes in-kernel uses, which would be extremely useful for network storage drivers, and it precludes device memory of all kinds.
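[To make the buffer flow described above concrete, a conceptual sketch of a provider allocation hook that hands the page pool only buffers the application has recycled through an io_uring refill ring. Everything named iou_zcrx_* is hypothetical; only net_iov_to_netmem() and the mp_priv field come from the existing net_iov/page pool code:]

        static netmem_ref iou_zcrx_alloc_netmems(struct page_pool *pool,
                                                 gfp_t gfp)
        {
                /* hypothetical per-ring state set up at registration */
                struct iou_zcrx_area *area = pool->mp_priv;
                struct net_iov *niov;

                /* only hand out buffers the application has returned via
                 * the refill ring; nothing to give if none were recycled */
                niov = iou_zcrx_refill_pop(area);
                if (!niov)
                        return 0;

                return net_iov_to_netmem(niov);
        }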
> I'm even more confused how that would help. The user API has to be implemented, and adding a new dmabuf gives nothing, not to mention that it's not clear what the semantics of that beast are supposed to be.
The dma-buf maintainers already explained to you last time that there is absolutely no need to use the dmabuf UAPI; you can use dma-bufs through in-kernel interfaces just fine.
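[For the record, the in-kernel dma-buf interfaces referred to here look along these lines. dma_buf_attach(), dma_buf_map_attachment() and friends are the real kernel API; the surrounding driver context is invented for illustration:]

        /* Sketch: a kernel driver consuming a dma-buf with no UAPI
         * involvement.  The dma_buf handle can come from an in-kernel
         * exporter; importer_dev and the error handling are illustrative. */
        #include <linux/dma-buf.h>

        static int import_dmabuf(struct device *importer_dev,
                                 struct dma_buf *dmabuf)
        {
                struct dma_buf_attachment *attach;
                struct sg_table *sgt;

                attach = dma_buf_attach(dmabuf, importer_dev);
                if (IS_ERR(attach))
                        return PTR_ERR(attach);

                sgt = dma_buf_map_attachment(attach, DMA_FROM_DEVICE);
                if (IS_ERR(sgt)) {
                        dma_buf_detach(dmabuf, attach);
                        return PTR_ERR(sgt);
                }

                /* ... program the device with the sgt entries ... */

                dma_buf_unmap_attachment(attach, sgt, DMA_FROM_DEVICE);
                dma_buf_detach(dmabuf, attach);
                return 0;
        }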