On Mon, Jun 10, 2024 at 11:26 PM Christoph Hellwig <hch@infradead.org> wrote:
> On Mon, Jun 10, 2024 at 09:16:43AM -0600, David Ahern wrote:
> > exactly. io_uring, page_pool, dmabuf - all kernel building blocks for solutions. This is why I was pushing for Mina's set not to be using the name `devmem` - it is but one type of memory, and with dmabuf it should not matter if it is gpu or host (or something else later on - cxl?).
>
> While not really related to the rest of the discussion, I agree. It really is dmabuf integration now, so let's call it that?
My mental model is that the feature folks care about is the ability to use TCP with device memory; dmabuf is an implementation detail, the format the device memory happens to be packaged in. Although it seems unlikely given this discussion, in theory we could extend devmem TCP to support p2pdma for nvme, or some other format if a new one arises in device drivers. I also think 'devmem TCP' makes it more obvious to an end user what the feature does than 'dmabuf TCP' would, especially if the user is not a kernel developer familiar with dmabuf.
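
To make that concrete, here is a rough sketch of the receive path as I understand it from the current series - the dmabuf packaging only shows up in the cmsg format and the token-release uapi, nothing in the flow cares what backs the buffer. Names and values below are taken from the in-flight patches and may still change before merge:

#include <stdio.h>
#include <sys/socket.h>
#include <linux/uio.h>   /* struct dmabuf_cmsg, struct dmabuf_token (from the series) */

/* Values from the in-flight series; not yet in released uapi headers. */
#ifndef MSG_SOCK_DEVMEM
#define MSG_SOCK_DEVMEM    0x2000000
#endif
#ifndef SCM_DEVMEM_DMABUF
#define SCM_DEVMEM_DMABUF  79
#define SO_DEVMEM_DONTNEED 80
#endif

/* Receive one batch of devmem frags on a connected TCP socket whose RX
 * queue has already been bound to a dmabuf via the netdev netlink family.
 */
static int recv_devmem_once(int fd)
{
	char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg)) * 16];
	char linear[4096];   /* header/linear data still lands in the iov */
	struct iovec iov = { .iov_base = linear, .iov_len = sizeof(linear) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm;

	if (recvmsg(fd, &msg, MSG_SOCK_DEVMEM) < 0)
		return -1;

	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		struct dmabuf_cmsg *dc;
		struct dmabuf_token tok;

		if (cm->cmsg_level != SOL_SOCKET ||
		    cm->cmsg_type != SCM_DEVMEM_DMABUF)
			continue;

		dc = (struct dmabuf_cmsg *)CMSG_DATA(cm);

		/* Payload sits at dc->frag_offset inside the bound dmabuf;
		 * the application accesses it through its own mapping of
		 * that buffer - gpu, host, or whatever else backs it. */
		printf("frag: dmabuf_id=%u off=%llu len=%u token=%u\n",
		       dc->dmabuf_id, (unsigned long long)dc->frag_offset,
		       dc->frag_size, dc->frag_token);

		/* Hand the frag back to the kernel once consumed. */
		tok.token_start = dc->frag_token;
		tok.token_count = 1;
		setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
			   &tok, sizeof(tok));
	}
	return 0;
}

The point being: userspace only ever sees an (id, offset, length, token) description of frags landing in some device memory, which is why 'devmem' still reads right to me as the user-facing name.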