On Sun, Mar 24, 2024 at 4:37 PM Christoph Hellwig <hch@infradead.org> wrote:
> On Fri, Mar 22, 2024 at 10:54:54AM -0700, Mina Almasry wrote:
> > Sorry, I don't mean to argue, but as David mentioned, there are some plans in the works (and others not yet in the works) to extend this to other memory types. David mentioned io_uring and Jakub's huge page use cases, which may want to reuse this design. I have an additional one in mind: extending devmem TCP to storage devices. Currently storage devices do not support dmabuf, and my understanding is that it is very hard to add; NVMe uses pci_p2pdma instead. I wonder whether devmem TCP could be extended in the future to work with pci_p2pdma, so that NVMe devices could be supported.
> The block layer needs to support dmabuf for this kind of I/O. Any special netdev to block side channel will be NAKed before you can even send it out.
Thanks. A few questions, if you have time, to help me understand the potential of extending this to storage devices.
Are you envisioning that dmabuf support would be added to the block layer itself (which I understand is common kernel code sitting below the filesystems, not driver specific), or to a specific storage driver (nvme, for example)? If we can add dmabuf support to the block layer itself, that sounds awesome: we may then be able to do devmem TCP on all or most storage devices without having to modify each individual driver.
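For context, and purely as a sketch of my mental model rather than anything this series does, I think of the generic dma-buf importer flow that a block-layer or per-driver consumer would need roughly like the below. The function name and the consuming struct device are placeholders, and error handling is trimmed:

/*
 * Hypothetical sketch of the generic dma-buf importer flow a block-layer
 * (or per-driver) consumer would need to wire up. Names are placeholders.
 */
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static struct sg_table *import_dmabuf_for_io(int dmabuf_fd, struct device *dev,
					      struct dma_buf_attachment **out_attach)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(dmabuf_fd);		/* take a reference on the dma-buf */
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);		/* attach the importing device */
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	/* map for DMA and get an sg_table describing the buffer */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return ERR_CAST(sgt);
	}

	*out_attach = attach;
	return sgt;
}

The part I can't picture yet is where in the block stack such an attachment and the resulting sg_table would be plumbed, which is why I'm asking whether you see this living in common block code.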
In your estimation, is adding dmabuf support to the block layer technically feasible and acceptable upstream? Since you suggested it, I'm guessing yes to both, but I thought I'd confirm.
Worth noting: this all pertains to potential follow-up use cases; nothing in this particular proposal attempts any of it yet.