Changes in RFC v3:
------------------
1. Pulled in the memory-provider dependency from Jakub's RFC[1] to make the series reviewable and mergeable.
2. Implemented multi-rx-queue binding which was a todo in v2.
3. Fix to cmsg handling.
The sticking point in RFC v2[2] was the device reset required to refill the device rx-queues after the dmabuf bind/unbind. As I understand it, the suggested solution is a subset of the per-queue management ops Jakub proposed, or something similar:
https://lore.kernel.org/netdev/20230815171638.4c057dcd@kernel.org/
This is not addressed in this revision, because:
1. This point was discussed at netconf & netdev and there is openness to using the current approach of requiring a device reset.
2. Implementing individual queue resetting seems to be difficult for my test bed with GVE. My prototype to test this ran into issues with the rx-queues not coming back up properly if reset individually. At the moment I'm unsure if it's a mistake in the POC or a genuine issue in the virtualization stack behind GVE, which currently doesn't test individual rx-queue restart.
3. Our use cases are not bothered by requiring a device reset to refill the buffer queues, and we'd like to support NICs that have this limitation and cannot reset individual queues.
My thought is that drivers that have trouble with per-queue configs can use the support in this series, while drivers that support new netdev ops to reset individual queues can automatically reset the queue as part of the dma-buf bind/unbind.
The same approach with device resets is presented again for consideration with other sticking points addressed.
Only the RX devmem path is proposed for merging in this series. For a snapshot of my entire tree, which includes the GVE POC page pool support & device memory support:
https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3
[1] https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.c...
[2] https://lore.kernel.org/netdev/CAHS8izOVJGJH5WF68OsRWFKJid1_huzzUK+hpKbLcL4p...
Cc: Shakeel Butt shakeelb@google.com
Cc: Jeroen de Borst jeroendb@google.com
Cc: Praveen Kaligineedi pkaligineedi@google.com
Changes in RFC v2:
------------------
The sticking point in RFC v1[1] was the dma-buf pages approach we used to deliver the device memory to the TCP stack. RFC v2 is a proof-of-concept that attempts to resolve this by implementing scatterlist support in the networking stack, such that we can import the dma-buf scatterlist directly. This is the approach proposed at a high level here[2].
Detailed changes:
1. Replaced the dma-buf pages approach with importing the scatterlist into the page pool.
2. Replaced the dma-buf pages centric API with a netlink API.
3. Removed the TX path implementation - there is no issue with implementing the TX path with the scatterlist approach, but leaving out the TX path makes it easier to review.
4. Functionality is tested with this proposal, but I have not conducted perf testing yet. I'm not sure there are regressions, but I removed perf claims from the cover letter until they can be re-confirmed.
5. Added Signed-off-by: contributors to the implementation.
6. Fixed some bugs with the RX path since RFC v1.
Any feedback is welcome, but the biggest pending questions needing feedback, IMO, are:
1. Feedback on the scatterlist-based approach in general.
2. Netlink API (Patch 1 & 2).
3. Approach to handle all the drivers that expect to receive pages from the page pool (Patch 6).
[1] https://lore.kernel.org/netdev/dfe4bae7-13a0-3c5d-d671-f61b375cb0b4@gmail.co...
[2] https://lore.kernel.org/netdev/CAHS8izPm6XRS54LdCDZVd0C75tA1zHSu6jLVO8nzTLXC...
----------------------
* TL;DR:
Device memory TCP (devmem TCP) is a proposal for transferring data to and/or from device memory efficiently, without bouncing the data to a host memory buffer.
* Problem:
A large number of data transfers have device memory as the source and/or destination. Accelerators have drastically increased the volume of such transfers. Some examples include:
- ML accelerators transferring large amounts of training data from storage into GPU/TPU memory. In some cases ML training setup time can be as long as 50% of TPU compute time; improving data transfer throughput & efficiency can help improve GPU/TPU utilization.
- Distributed training, where ML accelerators, such as GPUs on different hosts, exchange data among them.
- Distributed raw block storage applications transfer large amounts of data to/from remote SSDs; much of this data does not require host processing.
Today, the majority of Device-to-Device data transfers over the network are implemented as the following low-level operations: Device-to-Host copy, Host-to-Host network transfer, and Host-to-Device copy.
The implementation is suboptimal, especially for bulk data transfers, and can put significant strains on system resources, such as host memory bandwidth, PCIe bandwidth, etc. One important reason behind the current state is the kernel’s lack of semantics to express device to network transfers.
* Proposal:
In this patch series we attempt to optimize this use case by implementing socket APIs that enable the user to:
1. send device memory across the network directly, and
2. receive incoming network packets directly into device memory.
Packet _payloads_ go directly from the NIC to device memory for receive and from device memory to NIC for transmit. Packet _headers_ go to/from host memory and are processed by the TCP/IP stack normally. The NIC _must_ support header split to achieve this.
Advantages:
- Alleviate host memory bandwidth pressure, compared to existing network-transfer + device-copy semantics.
- Alleviate PCIe BW pressure, by limiting data transfer to the lowest level of the PCIe tree, compared to the traditional path, which sends data through the root complex.
* Patch overview:
** Part 1: netlink API
Gives the user the ability to bind a dma-buf to an RX queue.
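For illustration, here is a minimal user-space sketch of driving this API via the generated code added in patch 1 (netdev-user.h). ynl_sock_create(), ynl_sock_destroy() and ynl_netdev_family are assumed to come from the ynl C library (not shown in this series); include paths and error handling are simplified, so treat this as a sketch rather than tested code:

#include <stdlib.h>
#include <string.h>
#include <linux/types.h>

#include "ynl.h"
#include "netdev-user.h"

static int bind_rx(int ifindex, int dmabuf_fd,
		   const __u32 *queues, unsigned int n_queues)
{
	struct netdev_bind_rx_req *req;
	struct ynl_error yerr;
	struct ynl_sock *ys;
	__u32 *q;
	int ret;

	ys = ynl_sock_create(&ynl_netdev_family, &yerr);
	if (!ys)
		return -1;

	req = netdev_bind_rx_req_alloc();
	netdev_bind_rx_req_set_ifindex(req, ifindex);
	netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd);

	/* the req takes ownership of the queues array and frees it */
	q = malloc(n_queues * sizeof(*q));
	memcpy(q, queues, n_queues * sizeof(*q));
	__netdev_bind_rx_req_set_queues(req, q, n_queues);

	ret = netdev_bind_rx(ys, req);	/* 0 on success, -1 on failure */

	netdev_bind_rx_req_free(req);
	ynl_sock_destroy(ys);
	return ret;
}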
** Part 2: scatterlist support
Currently the standard for device memory sharing is DMABUF, which doesn't generate struct pages. On the other hand, the networking stack (skbs, drivers, and the page pool) operates on struct pages. We have 2 options:
1. Generate struct pages for dmabuf device memory, or
2. Modify the networking stack to process scatterlists.
Approach #1 was attempted in RFC v1. RFC v2 implements approach #2.
** Part 3: page pool support
We piggyback on the page pool memory providers proposal: https://github.com/kuba-moo/linux/tree/pp-providers
It allows the page pool to define a memory provider that provides the page allocation and freeing. It helps abstract most of the device memory TCP changes from the driver.
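To make the hook points concrete, here is an illustrative-only provider wired to the ops this part introduces (struct pp_memory_provider_ops). None of this is from the series; a real provider (like the dmabuf devmem one later in the series) hands out its own memory and sets up the pp metadata, rather than falling back to the system allocator:

static int trivial_mp_init(struct page_pool *pool)
{
	/* a real provider would validate and stash its state in
	 * pool->mp_priv here
	 */
	return 0;
}

static void trivial_mp_destroy(struct page_pool *pool)
{
}

static struct page *trivial_mp_alloc_pages(struct page_pool *pool, gfp_t gfp)
{
	/* purely for illustration: fall back to the system allocator. A real
	 * provider returns its own buffers (e.g. page_pool_iovs carved out
	 * of the dmabuf) and sets the pp info / dma address on them.
	 */
	return alloc_pages_node(pool->p.nid, gfp, pool->p.order);
}

static bool trivial_mp_release_page(struct page_pool *pool, struct page *page)
{
	/* returning true tells page_pool_return_page() to clear the pp info
	 * and put_page(); returning false means the provider keeps the
	 * reference and recycles the buffer itself.
	 */
	return true;
}

static const struct pp_memory_provider_ops trivial_mp_ops = {
	.init		= trivial_mp_init,
	.destroy	= trivial_mp_destroy,
	.alloc_pages	= trivial_mp_alloc_pages,
	.release_page	= trivial_mp_release_page,
};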
** Part 4: support for unreadable skb frags
Page pool iovs are not accessible by the host; we implement changes throughout the networking stack to correctly handle skbs with unreadable frags.
** Part 5: recvmsg() APIs
We define user APIs for the user to send and receive device memory.
Not included with this RFC is the GVE devmem TCP support, just to simplify the review. Code available here if desired: https://github.com/mina/linux/tree/tcpdevmem
This RFC is built on top of net-next with Jakub's pp-providers changes cherry-picked.
* NIC dependencies:
1. (strict) Devmem TCP requires the NIC to support header split, i.e. the capability to split incoming packets into a header + payload and to put each into a separate buffer. Devmem TCP works by using device memory for the packet payload, and host memory for the packet headers.
2. (optional) Devmem TCP works better with flow steering & RSS support, i.e. the NIC's ability to steer flows into certain rx queues. This allows the sysadmin to enable devmem TCP on a subset of the rx queues, and steer devmem TCP traffic onto those queues and non-devmem TCP traffic elsewhere.
The NIC I have access to with these properties is the GVE with DQO support running in Google Cloud, but any NIC that supports these features would suffice. I may be able to help reviewers bring up devmem TCP on their NICs.
* Testing:
The series includes a udmabuf kselftest that shows a simple use case of devmem TCP and validates the entire data path end to end, without a dependency on a specific dmabuf provider.
** Test Setup
Kernel: net-next with this RFC and memory provider API cherry-picked locally.
Hardware: Google Cloud A3 VMs.
NIC: GVE with header split & RSS & flow steering support.
Jakub Kicinski (2):
  net: page_pool: factor out releasing DMA from releasing the page
  net: page_pool: create hooks for custom page providers
Mina Almasry (10):
  net: netdev netlink api to bind dma-buf to a net device
  netdev: support binding dma-buf to netdevice
  netdev: netdevice devmem allocator
  memory-provider: dmabuf devmem memory provider
  page-pool: device memory support
  net: support non paged skb frags
  net: add support for skbs with unreadable frags
  tcp: RX path for devmem TCP
  net: add SO_DEVMEM_DONTNEED setsockopt to release RX pages
  selftests: add ncdevmem, netcat for devmem TCP
 Documentation/netlink/specs/netdev.yaml |  28 ++
 include/linux/netdevice.h               |  93 ++++
 include/linux/skbuff.h                  |  56 ++-
 include/linux/socket.h                  |   1 +
 include/net/netdev_rx_queue.h           |   1 +
 include/net/page_pool/helpers.h         | 151 ++++++-
 include/net/page_pool/types.h           |  55 +++
 include/net/sock.h                      |   2 +
 include/net/tcp.h                       |   5 +-
 include/uapi/asm-generic/socket.h       |   6 +
 include/uapi/linux/netdev.h             |  10 +
 include/uapi/linux/uio.h                |  10 +
 net/core/datagram.c                     |   6 +
 net/core/dev.c                          | 240 +++++++++++
 net/core/gro.c                          |   7 +-
 net/core/netdev-genl-gen.c              |  14 +
 net/core/netdev-genl-gen.h              |   1 +
 net/core/netdev-genl.c                  | 118 +++++
 net/core/page_pool.c                    | 209 +++++++--
 net/core/skbuff.c                       |  80 +++-
 net/core/sock.c                         |  36 ++
 net/ipv4/tcp.c                          | 205 ++++++++-
 net/ipv4/tcp_input.c                    |  13 +-
 net/ipv4/tcp_ipv4.c                     |   7 +
 net/ipv4/tcp_output.c                   |   5 +-
 net/packet/af_packet.c                  |   4 +-
 tools/include/uapi/linux/netdev.h       |  10 +
 tools/net/ynl/generated/netdev-user.c   |  42 ++
 tools/net/ynl/generated/netdev-user.h   |  47 ++
 tools/testing/selftests/net/.gitignore  |   1 +
 tools/testing/selftests/net/Makefile    |   5 +
 tools/testing/selftests/net/ncdevmem.c  | 546 ++++++++++++++++++++++++
 32 files changed, 1950 insertions(+), 64 deletions(-)
 create mode 100644 tools/testing/selftests/net/ncdevmem.c
From: Jakub Kicinski kuba@kernel.org
Releasing the DMA mapping will be useful for other types of pages, so factor it out. Make sure the compiler inlines it, to avoid any regressions.
Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Mina Almasry almasrymina@google.com
---
This is implemented by Jakub in his RFC:
https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.c...
I take no credit for the idea or implementation. This is a critical dependency of device memory TCP and thus I'm pulling it into this series to make it reviewable and mergeable.
---
 net/core/page_pool.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 5e409b98aba0..578b6f2eeb46 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -514,21 +514,16 @@ static s32 page_pool_inflight(struct page_pool *pool) return inflight; }
-/* Disconnects a page (from a page_pool). API users can have a need - * to disconnect a page (from a page_pool), to allow it to be used as - * a regular page (that will eventually be returned to the normal - * page-allocator via put_page). - */ -static void page_pool_return_page(struct page_pool *pool, struct page *page) +static __always_inline +void __page_pool_release_page_dma(struct page_pool *pool, struct page *page) { dma_addr_t dma; - int count;
if (!(pool->p.flags & PP_FLAG_DMA_MAP)) /* Always account for inflight pages, even if we didn't * map them */ - goto skip_dma_unmap; + return;
dma = page_pool_get_dma_addr(page);
@@ -537,7 +532,19 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page) PAGE_SIZE << pool->p.order, pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING); page_pool_set_dma_addr(page, 0); -skip_dma_unmap: +} + +/* Disconnects a page (from a page_pool). API users can have a need + * to disconnect a page (from a page_pool), to allow it to be used as + * a regular page (that will eventually be returned to the normal + * page-allocator via put_page). + */ +void page_pool_return_page(struct page_pool *pool, struct page *page) +{ + int count; + + __page_pool_release_page_dma(pool, page); + page_pool_clear_pp_info(page);
/* This may be the last page returned, releasing the pool, so
From: Jakub Kicinski kuba@kernel.org
The page providers which try to reuse the same pages will need to hold onto the ref, even if page gets released from the pool - as in releasing the page from the pp just transfers the "ownership" reference from pp to the provider, and provider will wait for other references to be gone before feeding this page back into the pool.
Signed-off-by: Jakub Kicinski kuba@kernel.org Signed-off-by: Mina Almasry almasrymina@google.com
---
This is implemented by Jakub in his RFC: https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.c...
I take no credit for the idea or implementation; I only added minor edits to make this workable with device memory TCP, and removed some hacky test code. This is a critical dependency of device memory TCP and thus I'm pulling it into this series to make it reviewable and mergeable.
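As a rough illustration of how a driver would opt in with the two fields this patch adds (memory_provider and mp_priv); ring_size, netdev and napi are placeholders, and the discussion below questions whether these belong in page_pool_params at all:

	struct page_pool_params pp_params = {
		.order		 = 0,
		.pool_size	 = ring_size,
		.nid		 = NUMA_NO_NODE,
		.dev		 = netdev->dev.parent,
		.napi		 = napi,
		.dma_dir	 = DMA_FROM_DEVICE,
		.memory_provider = __PP_MP_NONE,  /* only provider defined so far */
		.mp_priv	 = NULL,	  /* provider-specific state, if any */
	};
	struct page_pool *pool = page_pool_create(&pp_params);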
---
 include/net/page_pool/types.h | 18 +++++++++++++
 net/core/page_pool.c          | 51 +++++++++++++++++++++++++++++----
 2 files changed, 64 insertions(+), 5 deletions(-)
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 6fc5134095ed..d4bea053bb7e 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -60,6 +60,8 @@ struct page_pool_params { int nid; struct device *dev; struct napi_struct *napi; + u8 memory_provider; + void *mp_priv; enum dma_data_direction dma_dir; unsigned int max_len; unsigned int offset; @@ -118,6 +120,19 @@ struct page_pool_stats { }; #endif
+struct mem_provider; + +enum pp_memory_provider_type { + __PP_MP_NONE, /* Use system allocator directly */ +}; + +struct pp_memory_provider_ops { + int (*init)(struct page_pool *pool); + void (*destroy)(struct page_pool *pool); + struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp); + bool (*release_page)(struct page_pool *pool, struct page *page); +}; + struct page_pool { struct page_pool_params p;
@@ -165,6 +180,9 @@ struct page_pool { */ struct ptr_ring ring;
+ const struct pp_memory_provider_ops *mp_ops; + void *mp_priv; + #ifdef CONFIG_PAGE_POOL_STATS /* recycle stats are per-cpu to avoid locking */ struct page_pool_recycle_stats __percpu *recycle_stats; diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 578b6f2eeb46..7ea1f4682479 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -23,6 +23,8 @@
#include <trace/events/page_pool.h>
+static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers); + #define DEFER_TIME (msecs_to_jiffies(1000)) #define DEFER_WARN_INTERVAL (60 * HZ)
@@ -172,6 +174,7 @@ static int page_pool_init(struct page_pool *pool, const struct page_pool_params *params) { unsigned int ring_qsize = 1024; /* Default */ + int err;
memcpy(&pool->p, params, sizeof(pool->p));
@@ -225,10 +228,34 @@ static int page_pool_init(struct page_pool *pool, /* Driver calling page_pool_create() also call page_pool_destroy() */ refcount_set(&pool->user_cnt, 1);
+ switch (pool->p.memory_provider) { + case __PP_MP_NONE: + break; + default: + err = -EINVAL; + goto free_ptr_ring; + } + + pool->mp_priv = pool->p.mp_priv; + if (pool->mp_ops) { + err = pool->mp_ops->init(pool); + if (err) { + pr_warn("%s() mem-provider init failed %d\n", + __func__, err); + goto free_ptr_ring; + } + + static_branch_inc(&page_pool_mem_providers); + } + if (pool->p.flags & PP_FLAG_DMA_MAP) get_device(pool->p.dev);
return 0; + +free_ptr_ring: + ptr_ring_cleanup(&pool->ring, NULL); + return err; }
/** @@ -490,7 +517,10 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) return page;
/* Slow-path: cache empty, do real allocation */ - page = __page_pool_alloc_pages_slow(pool, gfp); + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops) + page = pool->mp_ops->alloc_pages(pool, gfp); + else + page = __page_pool_alloc_pages_slow(pool, gfp); return page; } EXPORT_SYMBOL(page_pool_alloc_pages); @@ -542,10 +572,13 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page) void page_pool_return_page(struct page_pool *pool, struct page *page) { int count; + bool put;
- __page_pool_release_page_dma(pool, page); - - page_pool_clear_pp_info(page); + put = true; + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops) + put = pool->mp_ops->release_page(pool, page); + else + __page_pool_release_page_dma(pool, page);
/* This may be the last page returned, releasing the pool, so * it is not safe to reference pool afterwards. @@ -553,7 +586,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page) count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt); trace_page_pool_state_release(pool, page, count);
- put_page(page); + if (put) { + page_pool_clear_pp_info(page); + put_page(page); + } /* An optimization would be to call __free_pages(page, pool->p.order) * knowing page is not part of page-cache (thus avoiding a * __page_cache_release() call). @@ -821,6 +857,11 @@ static void __page_pool_destroy(struct page_pool *pool) if (pool->disconnect) pool->disconnect(pool);
+ if (pool->mp_ops) { + pool->mp_ops->destroy(pool); + static_branch_dec(&page_pool_mem_providers); + } + ptr_ring_cleanup(&pool->ring, NULL);
if (pool->p.flags & PP_FLAG_DMA_MAP)
On 2023/11/6 10:44, Mina Almasry wrote:
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 6fc5134095ed..d4bea053bb7e 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -60,6 +60,8 @@ struct page_pool_params { int nid; struct device *dev; struct napi_struct *napi;
- u8 memory_provider;
- void *mp_priv; enum dma_data_direction dma_dir; unsigned int max_len; unsigned int offset;
@@ -118,6 +120,19 @@ struct page_pool_stats { }; #endif +struct mem_provider;
The above doesn't seem to be used?
+enum pp_memory_provider_type {
- __PP_MP_NONE, /* Use system allocator directly */
+};
+struct pp_memory_provider_ops {
Is it better to rename the above to pp_memory_provider and put the above memory_provider and mp_priv here, so that all the fields related to pp_memory_provider are in one place?
It is probably better to provide a register function for drivers to implement their own pp_memory_provider in the future.
- int (*init)(struct page_pool *pool);
- void (*destroy)(struct page_pool *pool);
- struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
- bool (*release_page)(struct page_pool *pool, struct page *page);
+};
On Sun, 2023-11-05 at 18:44 -0800, Mina Almasry wrote:
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 6fc5134095ed..d4bea053bb7e 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -60,6 +60,8 @@ struct page_pool_params { int nid; struct device *dev; struct napi_struct *napi;
- u8 memory_provider;
- void *mp_priv;
Minor nit: swapping the above 2 fields should make the struct smaller.
enum dma_data_direction dma_dir; unsigned int max_len; unsigned int offset; @@ -118,6 +120,19 @@ struct page_pool_stats { }; #endif +struct mem_provider;
+enum pp_memory_provider_type {
- __PP_MP_NONE, /* Use system allocator directly */
+};
+struct pp_memory_provider_ops {
- int (*init)(struct page_pool *pool);
- void (*destroy)(struct page_pool *pool);
- struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp);
- bool (*release_page)(struct page_pool *pool, struct page *page);
+};
struct page_pool { struct page_pool_params p; @@ -165,6 +180,9 @@ struct page_pool { */ struct ptr_ring ring;
- const struct pp_memory_provider_ops *mp_ops;
- void *mp_priv;
Why are the mp_ops not part of page_pool_params? And why is mp_priv duplicated here?
Cheers,
Paolo
On Sun, 5 Nov 2023 18:44:01 -0800 Mina Almasry wrote:
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 6fc5134095ed..d4bea053bb7e 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -60,6 +60,8 @@ struct page_pool_params { int nid; struct device *dev; struct napi_struct *napi;
- u8 memory_provider;
- void *mp_priv; enum dma_data_direction dma_dir; unsigned int max_len; unsigned int offset;
you should rebase on top of net-next
More importantly, I was expecting those fields to be gone from params. The fact that the page pool is configured to a specific provider should be fully transparent to the driver; the driver should just tell the core what queue it's creating the pool from, and if there's a dmabuf bound for that queue - out pops a pp backed by the dmabuf.
My RFC had the page pool params fields here as a hack.
On Fri, Nov 10, 2023 at 3:19 PM Jakub Kicinski kuba@kernel.org wrote:
On Sun, 5 Nov 2023 18:44:01 -0800 Mina Almasry wrote:
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 6fc5134095ed..d4bea053bb7e 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -60,6 +60,8 @@ struct page_pool_params { int nid; struct device *dev; struct napi_struct *napi;
u8 memory_provider;
void *mp_priv; enum dma_data_direction dma_dir; unsigned int max_len; unsigned int offset;
you should rebase on top of net-next
More importantly I was expecting those fields to be gone from params. The fact that the page pool is configured to a specific provider should be fully transparent to the driver, driver should just tell the core what queue its creating the pool from and if there's a dmabuf bound for that queue - out pops a pp backed by the dmabuf.
My issue with this is that if the driver doesn't support dmabuf then the driver will accidentally use the pp backed by the dmabuf, allocate a page from it, then call page_address() on it or something, and crash.
Currently I avoid that by having the driver be responsible for picking up the dmabuf from the netdev_rx_queue and giving it to the page pool. What would be the appropriate way to check for driver support in the netlink API? Perhaps adding something to ndo_features_check?
On Sun, 12 Nov 2023 19:28:52 -0800 Mina Almasry wrote:
My issue with this is that if the driver doesn't support dmabuf then the driver will accidentally use the pp backed by the dmabuf, allocate a page from it, then call page_address() on it or something, and crash.
Currently I avoid that by having the driver be responsible for picking up the dmabuf from the netdev_rx_queue and giving it to the page pool. What would be the appropriate way to check for driver support in the netlink API? Perhaps adding something to ndo_features_check?
We need some form of capabilities. I was expecting to add that as part of the queue API. Either a new field in struct net_device or in ndos. I tend to put static driver caps of this nature into ops. See for instance .supported_ring_params in ethtool ops.
API takes the dma-buf fd as input, and binds it to the netdevice. The user can specify the rx queues to bind the dma-buf to.
Suggested-by: Stanislav Fomichev sdf@google.com Signed-off-by: Mina Almasry almasrymina@google.com
---
Changes in v3:
- Support binding multiple rx-queues
---
 Documentation/netlink/specs/netdev.yaml | 28 +++++++++++++++
 include/uapi/linux/netdev.h             | 10 ++++++
 net/core/netdev-genl-gen.c              | 14 ++++++++
 net/core/netdev-genl-gen.h              |  1 +
 net/core/netdev-genl.c                  |  6 ++++
 tools/include/uapi/linux/netdev.h       | 10 ++++++
 tools/net/ynl/generated/netdev-user.c   | 42 ++++++++++++++++++++++
 tools/net/ynl/generated/netdev-user.h   | 47 +++++++++++++++++++++++++
 8 files changed, 158 insertions(+)
diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml index 14511b13f305..2141c5f5c33e 100644 --- a/Documentation/netlink/specs/netdev.yaml +++ b/Documentation/netlink/specs/netdev.yaml @@ -86,6 +86,24 @@ attribute-sets: See Documentation/networking/xdp-rx-metadata.rst for more details. type: u64 enum: xdp-rx-metadata + - + name: bind-dmabuf + attributes: + - + name: ifindex + doc: netdev ifindex to bind the dma-buf to. + type: u32 + checks: + min: 1 + - + name: queues + doc: receive queues to bind the dma-buf to. + type: u32 + multi-attr: true + - + name: dmabuf-fd + doc: dmabuf file descriptor to bind. + type: u32
operations: list: @@ -120,6 +138,16 @@ operations: doc: Notification about device configuration being changed. notify: dev-get mcgrp: mgmt + - + name: bind-rx + doc: Bind dmabuf to netdev + attribute-set: bind-dmabuf + do: + request: + attributes: + - ifindex + - dmabuf-fd + - queues
mcast-groups: list: diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h index 2943a151d4f1..2cd367c498c7 100644 --- a/include/uapi/linux/netdev.h +++ b/include/uapi/linux/netdev.h @@ -64,11 +64,21 @@ enum { NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1) };
+enum { + NETDEV_A_BIND_DMABUF_IFINDEX = 1, + NETDEV_A_BIND_DMABUF_QUEUES, + NETDEV_A_BIND_DMABUF_DMABUF_FD, + + __NETDEV_A_BIND_DMABUF_MAX, + NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1) +}; + enum { NETDEV_CMD_DEV_GET = 1, NETDEV_CMD_DEV_ADD_NTF, NETDEV_CMD_DEV_DEL_NTF, NETDEV_CMD_DEV_CHANGE_NTF, + NETDEV_CMD_BIND_RX,
__NETDEV_CMD_MAX, NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1) diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c index ea9231378aa6..58300efaf4e5 100644 --- a/net/core/netdev-genl-gen.c +++ b/net/core/netdev-genl-gen.c @@ -15,6 +15,13 @@ static const struct nla_policy netdev_dev_get_nl_policy[NETDEV_A_DEV_IFINDEX + 1 [NETDEV_A_DEV_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1), };
+/* NETDEV_CMD_BIND_RX - do */ +static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_BIND_DMABUF_DMABUF_FD + 1] = { + [NETDEV_A_BIND_DMABUF_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1), + [NETDEV_A_BIND_DMABUF_DMABUF_FD] = { .type = NLA_U32, }, + [NETDEV_A_BIND_DMABUF_QUEUES] = { .type = NLA_U32, }, +}; + /* Ops table for netdev */ static const struct genl_split_ops netdev_nl_ops[] = { { @@ -29,6 +36,13 @@ static const struct genl_split_ops netdev_nl_ops[] = { .dumpit = netdev_nl_dev_get_dumpit, .flags = GENL_CMD_CAP_DUMP, }, + { + .cmd = NETDEV_CMD_BIND_RX, + .doit = netdev_nl_bind_rx_doit, + .policy = netdev_bind_rx_nl_policy, + .maxattr = NETDEV_A_BIND_DMABUF_DMABUF_FD, + .flags = GENL_CMD_CAP_DO, + }, };
static const struct genl_multicast_group netdev_nl_mcgrps[] = { diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h index 7b370c073e7d..5aaeb435ec08 100644 --- a/net/core/netdev-genl-gen.h +++ b/net/core/netdev-genl-gen.h @@ -13,6 +13,7 @@
int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info); int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb); +int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info);
enum { NETDEV_NLGRP_MGMT, diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index fe61f85bcf33..59d3d512d9cc 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -129,6 +129,12 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; }
+/* Stub */ +int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info) +{ + return 0; +} + static int netdev_genl_netdevice_event(struct notifier_block *nb, unsigned long event, void *ptr) { diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h index 2943a151d4f1..2cd367c498c7 100644 --- a/tools/include/uapi/linux/netdev.h +++ b/tools/include/uapi/linux/netdev.h @@ -64,11 +64,21 @@ enum { NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1) };
+enum { + NETDEV_A_BIND_DMABUF_IFINDEX = 1, + NETDEV_A_BIND_DMABUF_QUEUES, + NETDEV_A_BIND_DMABUF_DMABUF_FD, + + __NETDEV_A_BIND_DMABUF_MAX, + NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1) +}; + enum { NETDEV_CMD_DEV_GET = 1, NETDEV_CMD_DEV_ADD_NTF, NETDEV_CMD_DEV_DEL_NTF, NETDEV_CMD_DEV_CHANGE_NTF, + NETDEV_CMD_BIND_RX,
__NETDEV_CMD_MAX, NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1) diff --git a/tools/net/ynl/generated/netdev-user.c b/tools/net/ynl/generated/netdev-user.c index b5ffe8cd1144..d5f4c6d4c2b2 100644 --- a/tools/net/ynl/generated/netdev-user.c +++ b/tools/net/ynl/generated/netdev-user.c @@ -18,6 +18,7 @@ static const char * const netdev_op_strmap[] = { [NETDEV_CMD_DEV_ADD_NTF] = "dev-add-ntf", [NETDEV_CMD_DEV_DEL_NTF] = "dev-del-ntf", [NETDEV_CMD_DEV_CHANGE_NTF] = "dev-change-ntf", + [NETDEV_CMD_BIND_RX] = "bind-rx", };
const char *netdev_op_str(int op) @@ -72,6 +73,17 @@ struct ynl_policy_nest netdev_dev_nest = { .table = netdev_dev_policy, };
+struct ynl_policy_attr netdev_bind_dmabuf_policy[NETDEV_A_BIND_DMABUF_MAX + 1] = { + [NETDEV_A_BIND_DMABUF_IFINDEX] = { .name = "ifindex", .type = YNL_PT_U32, }, + [NETDEV_A_BIND_DMABUF_QUEUES] = { .name = "queues", .type = YNL_PT_U32, }, + [NETDEV_A_BIND_DMABUF_DMABUF_FD] = { .name = "dmabuf-fd", .type = YNL_PT_U32, }, +}; + +struct ynl_policy_nest netdev_bind_dmabuf_nest = { + .max_attr = NETDEV_A_BIND_DMABUF_MAX, + .table = netdev_bind_dmabuf_policy, +}; + /* Common nested types */ /* ============== NETDEV_CMD_DEV_GET ============== */ /* NETDEV_CMD_DEV_GET - do */ @@ -197,6 +209,36 @@ void netdev_dev_get_ntf_free(struct netdev_dev_get_ntf *rsp) free(rsp); }
+/* ============== NETDEV_CMD_BIND_RX ============== */ +/* NETDEV_CMD_BIND_RX - do */ +void netdev_bind_rx_req_free(struct netdev_bind_rx_req *req) +{ + free(req->queues); + free(req); +} + +int netdev_bind_rx(struct ynl_sock *ys, struct netdev_bind_rx_req *req) +{ + struct nlmsghdr *nlh; + int err; + + nlh = ynl_gemsg_start_req(ys, ys->family_id, NETDEV_CMD_BIND_RX, 1); + ys->req_policy = &netdev_bind_dmabuf_nest; + + if (req->_present.ifindex) + mnl_attr_put_u32(nlh, NETDEV_A_BIND_DMABUF_IFINDEX, req->ifindex); + if (req->_present.dmabuf_fd) + mnl_attr_put_u32(nlh, NETDEV_A_BIND_DMABUF_DMABUF_FD, req->dmabuf_fd); + for (unsigned int i = 0; i < req->n_queues; i++) + mnl_attr_put_u32(nlh, NETDEV_A_BIND_DMABUF_QUEUES, req->queues[i]); + + err = ynl_exec(ys, nlh, NULL); + if (err < 0) + return -1; + + return 0; +} + static const struct ynl_ntf_info netdev_ntf_info[] = { [NETDEV_CMD_DEV_ADD_NTF] = { .alloc_sz = sizeof(struct netdev_dev_get_ntf), diff --git a/tools/net/ynl/generated/netdev-user.h b/tools/net/ynl/generated/netdev-user.h index 4fafac879df3..3cf9096d733a 100644 --- a/tools/net/ynl/generated/netdev-user.h +++ b/tools/net/ynl/generated/netdev-user.h @@ -87,4 +87,51 @@ struct netdev_dev_get_ntf {
void netdev_dev_get_ntf_free(struct netdev_dev_get_ntf *rsp);
+/* ============== NETDEV_CMD_BIND_RX ============== */ +/* NETDEV_CMD_BIND_RX - do */ +struct netdev_bind_rx_req { + struct { + __u32 ifindex:1; + __u32 dmabuf_fd:1; + } _present; + + __u32 ifindex; + __u32 dmabuf_fd; + unsigned int n_queues; + __u32 *queues; +}; + +static inline struct netdev_bind_rx_req *netdev_bind_rx_req_alloc(void) +{ + return calloc(1, sizeof(struct netdev_bind_rx_req)); +} +void netdev_bind_rx_req_free(struct netdev_bind_rx_req *req); + +static inline void +netdev_bind_rx_req_set_ifindex(struct netdev_bind_rx_req *req, __u32 ifindex) +{ + req->_present.ifindex = 1; + req->ifindex = ifindex; +} +static inline void +netdev_bind_rx_req_set_dmabuf_fd(struct netdev_bind_rx_req *req, + __u32 dmabuf_fd) +{ + req->_present.dmabuf_fd = 1; + req->dmabuf_fd = dmabuf_fd; +} +static inline void +__netdev_bind_rx_req_set_queues(struct netdev_bind_rx_req *req, __u32 *queues, + unsigned int n_queues) +{ + free(req->queues); + req->queues = queues; + req->n_queues = n_queues; +} + +/* + * Bind dmabuf to netdev + */ +int netdev_bind_rx(struct ynl_sock *ys, struct netdev_bind_rx_req *req); + #endif /* _LINUX_NETDEV_GEN_H */
On Sun, 5 Nov 2023 18:44:02 -0800 Mina Almasry wrote:
-
name: queues
doc: receive queues to bind the dma-buf to.
type: u32
multi-attr: true
I think that you should throw in the queue type. I know you made the op imply RX:
name: bind-rx
but if we decide to create a separate "type" for some magic queue type one day we'll have to ponder how to extend this API
IMHO queue should be identified by a <type, id> tuple, always.
Add a netdev_dmabuf_binding struct which represents the dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to rx queues on the netdevice. On the binding, the dma_buf_attach & dma_buf_map_attachment will occur. The entries in the sg_table from mapping will be inserted into a genpool to make it ready for allocation.
The chunks in the genpool are owned by a dmabuf_chunk_owner struct which holds the dma-buf offset of the base of the chunk and the dma_addr of the chunk. Both are needed to use allocations that come from this chunk.
We create a new type that represents an allocation from the genpool: page_pool_iov. We setup the page_pool_iov allocation size in the genpool to PAGE_SIZE for simplicity: to match the PAGE_SIZE normally allocated by the page pool and given to the drivers.
The user can unbind the dmabuf from the netdevice by closing the netlink socket that established the binding. We do this so that the binding is automatically unbound even if the userspace process crashes.
The binding and unbinding leave an indicator in struct netdev_rx_queue that the given queue is bound, but the binding doesn't take effect until the driver actually reconfigures its queues and re-initializes its page pool.
The netdev_dmabuf_binding struct is refcounted, and releases its resources only when all the refs are released.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
---
RFC v3:
- Support multi rx-queue binding
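Not part of the patch, but to illustrate how the chunk-owner metadata described above is intended to be used, hypothetical helpers that recover an iov's dma address and dma-buf offset from its owner might look like:

static inline unsigned long ppiov_idx(const struct page_pool_iov *ppiov)
{
	return ppiov - ppiov->owner->ppiovs;	/* index within the chunk */
}

static inline dma_addr_t ppiov_dma_addr(const struct page_pool_iov *ppiov)
{
	return ppiov->owner->base_dma_addr +
	       ((dma_addr_t)ppiov_idx(ppiov) << PAGE_SHIFT);
}

static inline unsigned long
ppiov_dmabuf_offset(const struct page_pool_iov *ppiov)
{
	return ppiov->owner->base_virtual + (ppiov_idx(ppiov) << PAGE_SHIFT);
}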
---
 include/linux/netdevice.h     |  80 ++++++++++++++
 include/net/netdev_rx_queue.h |   1 +
 include/net/page_pool/types.h |  27 +++++
 net/core/dev.c                | 203 ++++++++++++++++++++++++++++++++++
 net/core/netdev-genl.c        | 116 ++++++++++++++++++-
 5 files changed, 425 insertions(+), 2 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index b8bf669212cc..eeeda849115c 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -52,6 +52,8 @@ #include <net/net_trackers.h> #include <net/net_debug.h> #include <net/dropreason-core.h> +#include <linux/xarray.h> +#include <linux/refcount.h>
struct netpoll_info; struct device; @@ -808,6 +810,84 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id, #endif #endif /* CONFIG_RPS */
+struct netdev_dmabuf_binding { + struct dma_buf *dmabuf; + struct dma_buf_attachment *attachment; + struct sg_table *sgt; + struct net_device *dev; + struct gen_pool *chunk_pool; + + /* The user holds a ref (via the netlink API) for as long as they want + * the binding to remain alive. Each page pool using this binding holds + * a ref to keep the binding alive. Each allocated page_pool_iov holds a + * ref. + * + * The binding undos itself and unmaps the underlying dmabuf once all + * those refs are dropped and the binding is no longer desired or in + * use. + */ + refcount_t ref; + + /* The portid of the user that owns this binding. Used for netlink to + * notify us of the user dropping the bind. + */ + u32 owner_nlportid; + + /* The list of bindings currently active. Used for netlink to notify us + * of the user dropping the bind. + */ + struct list_head list; + + /* rxq's this binding is active on. */ + struct xarray bound_rxq_list; +}; + +#ifdef CONFIG_DMA_SHARED_BUFFER +void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out); +void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding); +#else +static inline void +__netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{ +} + +static inline int netdev_bind_dmabuf(struct net_device *dev, + unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out) +{ + return -EOPNOTSUPP; +} +static inline void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{ +} + +static inline int +netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding) +{ + return -EOPNOTSUPP; +} +#endif + +static inline void +netdev_devmem_binding_get(struct netdev_dmabuf_binding *binding) +{ + refcount_inc(&binding->ref); +} + +static inline void +netdev_devmem_binding_put(struct netdev_dmabuf_binding *binding) +{ + if (!refcount_dec_and_test(&binding->ref)) + return; + + __netdev_devmem_binding_free(binding); +} + /* XPS map type and offset of the xps map within net_device->xps_maps[]. */ enum xps_map_type { XPS_CPUS = 0, diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h index cdcafb30d437..1bfcf60a145d 100644 --- a/include/net/netdev_rx_queue.h +++ b/include/net/netdev_rx_queue.h @@ -21,6 +21,7 @@ struct netdev_rx_queue { #ifdef CONFIG_XDP_SOCKETS struct xsk_buff_pool *pool; #endif + struct netdev_dmabuf_binding *binding; } ____cacheline_aligned_in_smp;
/* diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index d4bea053bb7e..64386325d965 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -133,6 +133,33 @@ struct pp_memory_provider_ops { bool (*release_page)(struct page_pool *pool, struct page *page); };
+/* page_pool_iov support */ + +/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist + * entry from the dmabuf is inserted into the genpool as a chunk, and needs + * this owner struct to keep track of some metadata necessary to create + * allocations from this chunk. + */ +struct dmabuf_genpool_chunk_owner { + /* Offset into the dma-buf where this chunk starts. */ + unsigned long base_virtual; + + /* dma_addr of the start of the chunk. */ + dma_addr_t base_dma_addr; + + /* Array of page_pool_iovs for this chunk. */ + struct page_pool_iov *ppiovs; + size_t num_ppiovs; + + struct netdev_dmabuf_binding *binding; +}; + +struct page_pool_iov { + struct dmabuf_genpool_chunk_owner *owner; + + refcount_t refcount; +}; + struct page_pool { struct page_pool_params p;
diff --git a/net/core/dev.c b/net/core/dev.c index a37a932a3e14..c8c3709d42c8 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -153,6 +153,9 @@ #include <linux/prandom.h> #include <linux/once_lite.h> #include <net/netdev_rx_queue.h> +#include <linux/genalloc.h> +#include <linux/dma-buf.h> +#include <net/page_pool/types.h>
#include "dev.h" #include "net-sysfs.h" @@ -2040,6 +2043,206 @@ static int call_netdevice_notifiers_mtu(unsigned long val, return call_netdevice_notifiers_info(val, &info.info); }
+/* Device memory support */ + +#ifdef CONFIG_DMA_SHARED_BUFFER +static void netdev_devmem_free_chunk_owner(struct gen_pool *genpool, + struct gen_pool_chunk *chunk, + void *not_used) +{ + struct dmabuf_genpool_chunk_owner *owner = chunk->owner; + + kvfree(owner->ppiovs); + kfree(owner); +} + +void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{ + size_t size, avail; + + gen_pool_for_each_chunk(binding->chunk_pool, + netdev_devmem_free_chunk_owner, NULL); + + size = gen_pool_size(binding->chunk_pool); + avail = gen_pool_avail(binding->chunk_pool); + + if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu", + size, avail)) + gen_pool_destroy(binding->chunk_pool); + + dma_buf_unmap_attachment(binding->attachment, binding->sgt, + DMA_BIDIRECTIONAL); + dma_buf_detach(binding->dmabuf, binding->attachment); + dma_buf_put(binding->dmabuf); + kfree(binding); +} + +void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{ + struct netdev_rx_queue *rxq; + unsigned long xa_idx; + + if (!binding) + return; + + list_del_rcu(&binding->list); + + xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) + if (rxq->binding == binding) + /* We hold the rtnl_lock while binding/unbinding + * dma-buf, so we can't race with another thread that + * is also modifying this value. However, the driver + * may read this config while it's creating its + * rx-queues. WRITE_ONCE() here to match the + * READ_ONCE() in the driver. + */ + WRITE_ONCE(rxq->binding, NULL); + + netdev_devmem_binding_put(binding); +} + +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding) +{ + struct netdev_rx_queue *rxq; + u32 xa_idx; + int err; + + rxq = __netif_get_rx_queue(dev, rxq_idx); + + if (rxq->binding) + return -EEXIST; + + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b, + GFP_KERNEL); + if (err) + return err; + + /*We hold the rtnl_lock while binding/unbinding dma-buf, so we can't + * race with another thread that is also modifying this value. However, + * the driver may read this config while it's creating its * rx-queues. + * WRITE_ONCE() here to match the READ_ONCE() in the driver. + */ + WRITE_ONCE(rxq->binding, binding); + + return 0; +} + +int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out) +{ + struct netdev_dmabuf_binding *binding; + struct scatterlist *sg; + struct dma_buf *dmabuf; + unsigned int sg_idx, i; + unsigned long virtual; + int err; + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + dmabuf = dma_buf_get(dmabuf_fd); + if (IS_ERR_OR_NULL(dmabuf)) + return -EBADFD; + + binding = kzalloc_node(sizeof(*binding), GFP_KERNEL, + dev_to_node(&dev->dev)); + if (!binding) { + err = -ENOMEM; + goto err_put_dmabuf; + } + + xa_init_flags(&binding->bound_rxq_list, XA_FLAGS_ALLOC); + + refcount_set(&binding->ref, 1); + + binding->dmabuf = dmabuf; + + binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent); + if (IS_ERR(binding->attachment)) { + err = PTR_ERR(binding->attachment); + goto err_free_binding; + } + + binding->sgt = dma_buf_map_attachment(binding->attachment, + DMA_BIDIRECTIONAL); + if (IS_ERR(binding->sgt)) { + err = PTR_ERR(binding->sgt); + goto err_detach; + } + + /* For simplicity we expect to make PAGE_SIZE allocations, but the + * binding can be much more flexible than that. We may be able to + * allocate MTU sized chunks here. Leave that for future work... 
+ */ + binding->chunk_pool = gen_pool_create(PAGE_SHIFT, + dev_to_node(&dev->dev)); + if (!binding->chunk_pool) { + err = -ENOMEM; + goto err_unmap; + } + + virtual = 0; + for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) { + dma_addr_t dma_addr = sg_dma_address(sg); + struct dmabuf_genpool_chunk_owner *owner; + size_t len = sg_dma_len(sg); + struct page_pool_iov *ppiov; + + owner = kzalloc_node(sizeof(*owner), GFP_KERNEL, + dev_to_node(&dev->dev)); + owner->base_virtual = virtual; + owner->base_dma_addr = dma_addr; + owner->num_ppiovs = len / PAGE_SIZE; + owner->binding = binding; + + err = gen_pool_add_owner(binding->chunk_pool, dma_addr, + dma_addr, len, dev_to_node(&dev->dev), + owner); + if (err) { + err = -EINVAL; + goto err_free_chunks; + } + + owner->ppiovs = kvmalloc_array(owner->num_ppiovs, + sizeof(*owner->ppiovs), + GFP_KERNEL); + if (!owner->ppiovs) { + err = -ENOMEM; + goto err_free_chunks; + } + + for (i = 0; i < owner->num_ppiovs; i++) { + ppiov = &owner->ppiovs[i]; + ppiov->owner = owner; + refcount_set(&ppiov->refcount, 1); + } + + dma_addr += len; + virtual += len; + } + + *out = binding; + + return 0; + +err_free_chunks: + gen_pool_for_each_chunk(binding->chunk_pool, + netdev_devmem_free_chunk_owner, NULL); + gen_pool_destroy(binding->chunk_pool); +err_unmap: + dma_buf_unmap_attachment(binding->attachment, binding->sgt, + DMA_BIDIRECTIONAL); +err_detach: + dma_buf_detach(dmabuf, binding->attachment); +err_free_binding: + kfree(binding); +err_put_dmabuf: + dma_buf_put(dmabuf); + return err; +} +#endif + #ifdef CONFIG_NET_INGRESS static DEFINE_STATIC_KEY_FALSE(ingress_needed_key);
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index 59d3d512d9cc..2c2a62593217 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -129,10 +129,89 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; }
-/* Stub */ +static LIST_HEAD(netdev_rbinding_list); + int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info) { - return 0; + struct netdev_dmabuf_binding *out_binding; + u32 ifindex, dmabuf_fd, rxq_idx; + struct net_device *netdev; + struct sk_buff *rsp; + int rem, err = 0; + void *hdr; + struct nlattr *attr; + + if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) || + GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_DMABUF_FD) || + GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_QUEUES)) + return -EINVAL; + + ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]); + dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_BIND_DMABUF_DMABUF_FD]); + + rtnl_lock(); + + netdev = __dev_get_by_index(genl_info_net(info), ifindex); + if (!netdev) { + err = -ENODEV; + goto err_unlock; + } + + err = netdev_bind_dmabuf(netdev, dmabuf_fd, &out_binding); + if (err) + goto err_unlock; + + nla_for_each_attr(attr, genlmsg_data(info->genlhdr), + genlmsg_len(info->genlhdr), rem) { + switch (nla_type(attr)) { + case NETDEV_A_BIND_DMABUF_QUEUES: + rxq_idx = nla_get_u32(attr); + + if (rxq_idx >= netdev->num_rx_queues) { + err = -ERANGE; + goto err_unbind; + } + + err = netdev_bind_dmabuf_to_queue(netdev, rxq_idx, + out_binding); + if (err) + goto err_unbind; + + break; + default: + break; + } + } + + out_binding->owner_nlportid = info->snd_portid; + list_add_rcu(&out_binding->list, &netdev_rbinding_list); + + rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!rsp) { + err = -ENOMEM; + goto err_unbind; + } + + hdr = genlmsg_put(rsp, info->snd_portid, info->snd_seq, + &netdev_nl_family, 0, info->genlhdr->cmd); + if (!hdr) { + err = -EMSGSIZE; + goto err_genlmsg_free; + } + + genlmsg_end(rsp, hdr); + + rtnl_unlock(); + + return genlmsg_reply(rsp, info); + +err_genlmsg_free: + nlmsg_free(rsp); +err_unbind: + netdev_unbind_dmabuf(out_binding); +err_unlock: + rtnl_unlock(); + return err; }
static int netdev_genl_netdevice_event(struct notifier_block *nb, @@ -155,10 +234,37 @@ static int netdev_genl_netdevice_event(struct notifier_block *nb, return NOTIFY_OK; }
+static int netdev_netlink_notify(struct notifier_block *nb, unsigned long state, + void *_notify) +{ + struct netlink_notify *notify = _notify; + struct netdev_dmabuf_binding *rbinding; + + if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC) + return NOTIFY_DONE; + + rcu_read_lock(); + + list_for_each_entry_rcu(rbinding, &netdev_rbinding_list, list) { + if (rbinding->owner_nlportid == notify->portid) { + netdev_unbind_dmabuf(rbinding); + break; + } + } + + rcu_read_unlock(); + + return NOTIFY_OK; +} + static struct notifier_block netdev_genl_nb = { .notifier_call = netdev_genl_netdevice_event, };
+static struct notifier_block netdev_netlink_notifier = { + .notifier_call = netdev_netlink_notify, +}; + static int __init netdev_genl_init(void) { int err; @@ -171,8 +277,14 @@ static int __init netdev_genl_init(void) if (err) goto err_unreg_ntf;
+ err = netlink_register_notifier(&netdev_netlink_notifier); + if (err) + goto err_unreg_family; + return 0;
+err_unreg_family: + genl_unregister_family(&netdev_nl_family); err_unreg_ntf: unregister_netdevice_notifier(&netdev_genl_nb); return err;
On 2023/11/6 10:44, Mina Almasry wrote:
+void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{
- size_t size, avail;
- gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
- size = gen_pool_size(binding->chunk_pool);
- avail = gen_pool_avail(binding->chunk_pool);
- if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
size, avail))
gen_pool_destroy(binding->chunk_pool);
Is there any other place calling gen_pool_destroy() when the above warning is triggered? Do we have a leak of binding->chunk_pool?
- dma_buf_unmap_attachment(binding->attachment, binding->sgt,
DMA_BIDIRECTIONAL);
- dma_buf_detach(binding->dmabuf, binding->attachment);
- dma_buf_put(binding->dmabuf);
- kfree(binding);
+}
On Mon, Nov 6, 2023 at 11:46 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
+void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{
size_t size, avail;
gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
size = gen_pool_size(binding->chunk_pool);
avail = gen_pool_avail(binding->chunk_pool);
if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
size, avail))
gen_pool_destroy(binding->chunk_pool);
Is there any other place calling the gen_pool_destroy() when the above warning is triggered? Do we have a leaking for binding->chunk_pool?
gen_pool_destroy BUG_ON() if it's not empty at the time of destroying. Technically that should never happen, because __netdev_devmem_binding_free() should only be called when the refcount hits 0, so all the chunks have been freed back to the gen_pool. But, just in case, I don't want to crash the server just because I'm leaking a chunk... this is a bit of defensive programming that is typically frowned upon, but the behavior of gen_pool is so severe I think the WARN() + check is warranted here.
dma_buf_unmap_attachment(binding->attachment, binding->sgt,
DMA_BIDIRECTIONAL);
dma_buf_detach(binding->dmabuf, binding->attachment);
dma_buf_put(binding->dmabuf);
kfree(binding);
+}
On 2023/11/8 5:59, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:46 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
+void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{
size_t size, avail;
gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
size = gen_pool_size(binding->chunk_pool);
avail = gen_pool_avail(binding->chunk_pool);
if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
size, avail))
gen_pool_destroy(binding->chunk_pool);
Is there any other place calling the gen_pool_destroy() when the above warning is triggered? Do we have a leaking for binding->chunk_pool?
gen_pool_destroy BUG_ON() if it's not empty at the time of destroying. Technically that should never happen, because __netdev_devmem_binding_free() should only be called when the refcount hits 0, so all the chunks have been freed back to the gen_pool. But, just in case, I don't want to crash the server just because I'm leaking a chunk... this is a bit of defensive programming that is typically frowned upon, but the behavior of gen_pool is so severe I think the WARN() + check is warranted here.
It seems it is pretty normal for the above to happen nowadays because of retransmit timeouts, NAPI defer schemes, etc., as mentioned below:
https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgi...
And currently page pool core handles that by using a workqueue.
On Tue, Nov 7, 2023 at 7:40 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/8 5:59, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:46 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
+void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{
size_t size, avail;
gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
size = gen_pool_size(binding->chunk_pool);
avail = gen_pool_avail(binding->chunk_pool);
if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
size, avail))
gen_pool_destroy(binding->chunk_pool);
Is there any other place calling the gen_pool_destroy() when the above warning is triggered? Do we have a leaking for binding->chunk_pool?
gen_pool_destroy BUG_ON() if it's not empty at the time of destroying. Technically that should never happen, because __netdev_devmem_binding_free() should only be called when the refcount hits 0, so all the chunks have been freed back to the gen_pool. But, just in case, I don't want to crash the server just because I'm leaking a chunk... this is a bit of defensive programming that is typically frowned upon, but the behavior of gen_pool is so severe I think the WARN() + check is warranted here.
It seems it is pretty normal for the above to happen nowadays because of retransmits timeouts, NAPI defer schemes mentioned below:
https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgi...
And currently page pool core handles that by using a workqueue.
Forgive me but I'm not understanding the concern here.
__netdev_devmem_binding_free() is called when binding->ref hits 0.
binding->ref is incremented when an iov slice of the dma-buf is allocated, and decremented when an iov is freed. So, __netdev_devmem_binding_free() can't really be called unless all the iovs have been freed, and gen_pool_size() == gen_pool_avail(), regardless of what's happening on the page_pool side of things, right?
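For illustration, the lifetime rule above in code form (a sketch only; devmem_iov_put() is a made-up name, not a function from the series):

static void devmem_iov_put(struct page_pool_iov *ppiov)
{
	if (!refcount_dec_and_test(&ppiov->refcount))
		return;

	/* Last ref on this iov: its PAGE_SIZE region would be returned to
	 * the genpool here (gen_pool_free()), then the iov's ref on the
	 * binding is dropped. Only once every outstanding iov has done this
	 * does netdev_devmem_binding_put() see the final ref and call
	 * __netdev_devmem_binding_free(), at which point
	 * gen_pool_size() == gen_pool_avail() by construction.
	 */
	netdev_devmem_binding_put(ppiov->owner->binding);
}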
On 2023/11/9 10:22, Mina Almasry wrote:
On Tue, Nov 7, 2023 at 7:40 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/8 5:59, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:46 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
+void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{
size_t size, avail;
gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
size = gen_pool_size(binding->chunk_pool);
avail = gen_pool_avail(binding->chunk_pool);
if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
size, avail))
gen_pool_destroy(binding->chunk_pool);
Is there any other place calling the gen_pool_destroy() when the above warning is triggered? Do we have a leaking for binding->chunk_pool?
gen_pool_destroy BUG_ON() if it's not empty at the time of destroying. Technically that should never happen, because __netdev_devmem_binding_free() should only be called when the refcount hits 0, so all the chunks have been freed back to the gen_pool. But, just in case, I don't want to crash the server just because I'm leaking a chunk... this is a bit of defensive programming that is typically frowned upon, but the behavior of gen_pool is so severe I think the WARN() + check is warranted here.
It seems it is pretty normal for the above to happen nowadays because of retransmits timeouts, NAPI defer schemes mentioned below:
https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgi...
And currently page pool core handles that by using a workqueue.
Forgive me but I'm not understanding the concern here.
__netdev_devmem_binding_free() is called when binding->ref hits 0.
binding->ref is incremented when an iov slice of the dma-buf is allocated, and decremented when an iov is freed. So, __netdev_devmem_binding_free() can't really be called unless all the iovs have been freed, and gen_pool_size() == gen_pool_avail(), regardless of what's happening on the page_pool side of things, right?
It seems I misunderstood it. In that case, it seems to be about defensive programming, like the other checks.
Looking at it more closely, it seems napi_frag_unref() calls page_pool_page_put_many() directly, which means devmem seems to be bypassing the napi_safe optimization.
Can napi_frag_unref() reuse napi_pp_put_page() in order to reuse the napi_safe optimization?
On 2023-11-05 18:44, Mina Almasry wrote:
Add a netdev_dmabuf_binding struct which represents the dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to rx queues on the netdevice. On the binding, the dma_buf_attach & dma_buf_map_attachment will occur. The entries in the sg_table from mapping will be inserted into a genpool to make it ready for allocation.
The chunks in the genpool are owned by a dmabuf_chunk_owner struct which holds the dma-buf offset of the base of the chunk and the dma_addr of the chunk. Both are needed to use allocations that come from this chunk.
We create a new type that represents an allocation from the genpool: page_pool_iov. We setup the page_pool_iov allocation size in the genpool to PAGE_SIZE for simplicity: to match the PAGE_SIZE normally allocated by the page pool and given to the drivers.
The user can unbind the dmabuf from the netdevice by closing the netlink socket that established the binding. We do this so that the binding is automatically unbound even if the userspace process crashes.
The binding and unbinding leaves an indicator in struct netdev_rx_queue that the given queue is bound, but the binding doesn't take effect until the driver actually reconfigures its queues, and re-initializes its page pool.
The netdev_dmabuf_binding struct is refcounted, and releases its resources only when all the refs are released.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
RFC v3:
- Support multi rx-queue binding
include/linux/netdevice.h | 80 ++++++++++++++ include/net/netdev_rx_queue.h | 1 + include/net/page_pool/types.h | 27 +++++ net/core/dev.c | 203 ++++++++++++++++++++++++++++++++++ net/core/netdev-genl.c | 116 ++++++++++++++++++- 5 files changed, 425 insertions(+), 2 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index b8bf669212cc..eeeda849115c 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -52,6 +52,8 @@ #include <net/net_trackers.h> #include <net/net_debug.h> #include <net/dropreason-core.h> +#include <linux/xarray.h> +#include <linux/refcount.h> struct netpoll_info; struct device; @@ -808,6 +810,84 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id, #endif #endif /* CONFIG_RPS */ +struct netdev_dmabuf_binding {
- struct dma_buf *dmabuf;
- struct dma_buf_attachment *attachment;
- struct sg_table *sgt;
- struct net_device *dev;
- struct gen_pool *chunk_pool;
- /* The user holds a ref (via the netlink API) for as long as they want
* the binding to remain alive. Each page pool using this binding holds
* a ref to keep the binding alive. Each allocated page_pool_iov holds a
* ref.
*
* The binding undos itself and unmaps the underlying dmabuf once all
* those refs are dropped and the binding is no longer desired or in
* use.
*/
- refcount_t ref;
- /* The portid of the user that owns this binding. Used for netlink to
* notify us of the user dropping the bind.
*/
- u32 owner_nlportid;
- /* The list of bindings currently active. Used for netlink to notify us
* of the user dropping the bind.
*/
- struct list_head list;
- /* rxq's this binding is active on. */
- struct xarray bound_rxq_list;
+};
+#ifdef CONFIG_DMA_SHARED_BUFFER +void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
struct netdev_dmabuf_binding **out);
+void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
struct netdev_dmabuf_binding *binding);
+#else +static inline void +__netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{ +}
+static inline int netdev_bind_dmabuf(struct net_device *dev,
unsigned int dmabuf_fd,
struct netdev_dmabuf_binding **out)
+{
- return -EOPNOTSUPP;
+} +static inline void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{ +}
+static inline int +netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
struct netdev_dmabuf_binding *binding)
+{
- return -EOPNOTSUPP;
+} +#endif
+static inline void +netdev_devmem_binding_get(struct netdev_dmabuf_binding *binding) +{
- refcount_inc(&binding->ref);
+}
+static inline void +netdev_devmem_binding_put(struct netdev_dmabuf_binding *binding) +{
- if (!refcount_dec_and_test(&binding->ref))
return;
- __netdev_devmem_binding_free(binding);
+}
/* XPS map type and offset of the xps map within net_device->xps_maps[]. */ enum xps_map_type { XPS_CPUS = 0, diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h index cdcafb30d437..1bfcf60a145d 100644 --- a/include/net/netdev_rx_queue.h +++ b/include/net/netdev_rx_queue.h @@ -21,6 +21,7 @@ struct netdev_rx_queue { #ifdef CONFIG_XDP_SOCKETS struct xsk_buff_pool *pool; #endif
- struct netdev_dmabuf_binding *binding;
@Pavel - They are using struct netdev_rx_queue to hold the binding, which is an object that holds the state and is mapped 1:1 to an rxq. This object is similar to our "interface queue". I wonder if we should re-visit using this generic struct, instead of driver specific structs e.g. bnxt_rx_ring_info?
} ____cacheline_aligned_in_smp; /* diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index d4bea053bb7e..64386325d965 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -133,6 +133,33 @@ struct pp_memory_provider_ops { bool (*release_page)(struct page_pool *pool, struct page *page); }; +/* page_pool_iov support */
+/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist
- entry from the dmabuf is inserted into the genpool as a chunk, and needs
- this owner struct to keep track of some metadata necessary to create
- allocations from this chunk.
- */
+struct dmabuf_genpool_chunk_owner {
- /* Offset into the dma-buf where this chunk starts. */
- unsigned long base_virtual;
- /* dma_addr of the start of the chunk. */
- dma_addr_t base_dma_addr;
- /* Array of page_pool_iovs for this chunk. */
- struct page_pool_iov *ppiovs;
- size_t num_ppiovs;
- struct netdev_dmabuf_binding *binding;
+};
+struct page_pool_iov {
- struct dmabuf_genpool_chunk_owner *owner;
- refcount_t refcount;
+};
struct page_pool { struct page_pool_params p; diff --git a/net/core/dev.c b/net/core/dev.c index a37a932a3e14..c8c3709d42c8 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -153,6 +153,9 @@ #include <linux/prandom.h> #include <linux/once_lite.h> #include <net/netdev_rx_queue.h> +#include <linux/genalloc.h> +#include <linux/dma-buf.h> +#include <net/page_pool/types.h> #include "dev.h" #include "net-sysfs.h" @@ -2040,6 +2043,206 @@ static int call_netdevice_notifiers_mtu(unsigned long val, return call_netdevice_notifiers_info(val, &info.info); } +/* Device memory support */
+#ifdef CONFIG_DMA_SHARED_BUFFER +static void netdev_devmem_free_chunk_owner(struct gen_pool *genpool,
struct gen_pool_chunk *chunk,
void *not_used)
+{
- struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
- kvfree(owner->ppiovs);
- kfree(owner);
+}
+void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{
- size_t size, avail;
- gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
- size = gen_pool_size(binding->chunk_pool);
- avail = gen_pool_avail(binding->chunk_pool);
- if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
size, avail))
gen_pool_destroy(binding->chunk_pool);
- dma_buf_unmap_attachment(binding->attachment, binding->sgt,
DMA_BIDIRECTIONAL);
- dma_buf_detach(binding->dmabuf, binding->attachment);
- dma_buf_put(binding->dmabuf);
- kfree(binding);
+}
+void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{
- struct netdev_rx_queue *rxq;
- unsigned long xa_idx;
- if (!binding)
return;
- list_del_rcu(&binding->list);
- xa_for_each(&binding->bound_rxq_list, xa_idx, rxq)
if (rxq->binding == binding)
/* We hold the rtnl_lock while binding/unbinding
* dma-buf, so we can't race with another thread that
* is also modifying this value. However, the driver
* may read this config while it's creating its
* rx-queues. WRITE_ONCE() here to match the
* READ_ONCE() in the driver.
*/
WRITE_ONCE(rxq->binding, NULL);
- netdev_devmem_binding_put(binding);
+}
+int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
struct netdev_dmabuf_binding *binding)
+{
- struct netdev_rx_queue *rxq;
- u32 xa_idx;
- int err;
- rxq = __netif_get_rx_queue(dev, rxq_idx);
- if (rxq->binding)
return -EEXIST;
- err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
GFP_KERNEL);
- if (err)
return err;
- /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
* race with another thread that is also modifying this value. However,
* the driver may read this config while it's creating its rx-queues.
* WRITE_ONCE() here to match the READ_ONCE() in the driver.
*/
- WRITE_ONCE(rxq->binding, binding);
- return 0;
+}
+int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
struct netdev_dmabuf_binding **out)
I'm not entirely familiar with the Netlink API. Mina, do you know if we can call into netdev_bind_dmabuf or netdev_nl_bind_rx_doit directly, without needing to call send/recv on a Netlink socket? We likely want io_uring to do the registration of a dmabuf fd and keep ownership over it.
+{
- struct netdev_dmabuf_binding *binding;
- struct scatterlist *sg;
- struct dma_buf *dmabuf;
- unsigned int sg_idx, i;
- unsigned long virtual;
- int err;
- if (!capable(CAP_NET_ADMIN))
return -EPERM;
- dmabuf = dma_buf_get(dmabuf_fd);
- if (IS_ERR_OR_NULL(dmabuf))
return -EBADFD;
- binding = kzalloc_node(sizeof(*binding), GFP_KERNEL,
dev_to_node(&dev->dev));
- if (!binding) {
err = -ENOMEM;
goto err_put_dmabuf;
- }
- xa_init_flags(&binding->bound_rxq_list, XA_FLAGS_ALLOC);
- refcount_set(&binding->ref, 1);
- binding->dmabuf = dmabuf;
- binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
- if (IS_ERR(binding->attachment)) {
err = PTR_ERR(binding->attachment);
goto err_free_binding;
- }
- binding->sgt = dma_buf_map_attachment(binding->attachment,
DMA_BIDIRECTIONAL);
- if (IS_ERR(binding->sgt)) {
err = PTR_ERR(binding->sgt);
goto err_detach;
- }
- /* For simplicity we expect to make PAGE_SIZE allocations, but the
* binding can be much more flexible than that. We may be able to
* allocate MTU sized chunks here. Leave that for future work...
*/
- binding->chunk_pool = gen_pool_create(PAGE_SHIFT,
dev_to_node(&dev->dev));
- if (!binding->chunk_pool) {
err = -ENOMEM;
goto err_unmap;
- }
- virtual = 0;
- for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
dma_addr_t dma_addr = sg_dma_address(sg);
struct dmabuf_genpool_chunk_owner *owner;
size_t len = sg_dma_len(sg);
struct page_pool_iov *ppiov;
owner = kzalloc_node(sizeof(*owner), GFP_KERNEL,
dev_to_node(&dev->dev));
owner->base_virtual = virtual;
owner->base_dma_addr = dma_addr;
owner->num_ppiovs = len / PAGE_SIZE;
owner->binding = binding;
err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
dma_addr, len, dev_to_node(&dev->dev),
owner);
if (err) {
err = -EINVAL;
goto err_free_chunks;
}
owner->ppiovs = kvmalloc_array(owner->num_ppiovs,
sizeof(*owner->ppiovs),
GFP_KERNEL);
if (!owner->ppiovs) {
err = -ENOMEM;
goto err_free_chunks;
}
for (i = 0; i < owner->num_ppiovs; i++) {
ppiov = &owner->ppiovs[i];
ppiov->owner = owner;
refcount_set(&ppiov->refcount, 1);
}
dma_addr += len;
virtual += len;
- }
- *out = binding;
- return 0;
+err_free_chunks:
- gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
- gen_pool_destroy(binding->chunk_pool);
+err_unmap:
- dma_buf_unmap_attachment(binding->attachment, binding->sgt,
DMA_BIDIRECTIONAL);
+err_detach:
- dma_buf_detach(dmabuf, binding->attachment);
+err_free_binding:
- kfree(binding);
+err_put_dmabuf:
- dma_buf_put(dmabuf);
- return err;
+} +#endif
#ifdef CONFIG_NET_INGRESS static DEFINE_STATIC_KEY_FALSE(ingress_needed_key); diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index 59d3d512d9cc..2c2a62593217 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -129,10 +129,89 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; } -/* Stub */ +static LIST_HEAD(netdev_rbinding_list);
int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info) {
- return 0;
- struct netdev_dmabuf_binding *out_binding;
- u32 ifindex, dmabuf_fd, rxq_idx;
- struct net_device *netdev;
- struct sk_buff *rsp;
- int rem, err = 0;
- void *hdr;
- struct nlattr *attr;
- if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) ||
GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_DMABUF_FD) ||
GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_QUEUES))
return -EINVAL;
- ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
- dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_BIND_DMABUF_DMABUF_FD]);
- rtnl_lock();
- netdev = __dev_get_by_index(genl_info_net(info), ifindex);
- if (!netdev) {
err = -ENODEV;
goto err_unlock;
- }
- err = netdev_bind_dmabuf(netdev, dmabuf_fd, &out_binding);
- if (err)
goto err_unlock;
- nla_for_each_attr(attr, genlmsg_data(info->genlhdr),
genlmsg_len(info->genlhdr), rem) {
switch (nla_type(attr)) {
case NETDEV_A_BIND_DMABUF_QUEUES:
rxq_idx = nla_get_u32(attr);
if (rxq_idx >= netdev->num_rx_queues) {
err = -ERANGE;
goto err_unbind;
}
err = netdev_bind_dmabuf_to_queue(netdev, rxq_idx,
out_binding);
if (err)
goto err_unbind;
break;
default:
break;
}
- }
- out_binding->owner_nlportid = info->snd_portid;
- list_add_rcu(&out_binding->list, &netdev_rbinding_list);
- rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
- if (!rsp) {
err = -ENOMEM;
goto err_unbind;
- }
- hdr = genlmsg_put(rsp, info->snd_portid, info->snd_seq,
&netdev_nl_family, 0, info->genlhdr->cmd);
- if (!hdr) {
err = -EMSGSIZE;
goto err_genlmsg_free;
- }
- genlmsg_end(rsp, hdr);
- rtnl_unlock();
- return genlmsg_reply(rsp, info);
+err_genlmsg_free:
- nlmsg_free(rsp);
+err_unbind:
- netdev_unbind_dmabuf(out_binding);
+err_unlock:
- rtnl_unlock();
- return err;
} static int netdev_genl_netdevice_event(struct notifier_block *nb, @@ -155,10 +234,37 @@ static int netdev_genl_netdevice_event(struct notifier_block *nb, return NOTIFY_OK; } +static int netdev_netlink_notify(struct notifier_block *nb, unsigned long state,
void *_notify)
+{
- struct netlink_notify *notify = _notify;
- struct netdev_dmabuf_binding *rbinding;
- if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC)
return NOTIFY_DONE;
- rcu_read_lock();
- list_for_each_entry_rcu(rbinding, &netdev_rbinding_list, list) {
if (rbinding->owner_nlportid == notify->portid) {
netdev_unbind_dmabuf(rbinding);
break;
}
- }
- rcu_read_unlock();
- return NOTIFY_OK;
+}
static struct notifier_block netdev_genl_nb = { .notifier_call = netdev_genl_netdevice_event, }; +static struct notifier_block netdev_netlink_notifier = {
- .notifier_call = netdev_netlink_notify,
+};
Is this mechanism what cleans up TCP devmem in case userspace crashes and the associated Netlink socket is closed?
static int __init netdev_genl_init(void) { int err; @@ -171,8 +277,14 @@ static int __init netdev_genl_init(void) if (err) goto err_unreg_ntf;
- err = netlink_register_notifier(&netdev_netlink_notifier);
- if (err)
goto err_unreg_family;
- return 0;
+err_unreg_family:
- genl_unregister_family(&netdev_nl_family);
err_unreg_ntf: unregister_netdevice_notifier(&netdev_genl_nb); return err;
On Wed, Nov 8, 2023 at 3:47 PM David Wei dw@davidwei.uk wrote:
On 2023-11-05 18:44, Mina Almasry wrote:
Add a netdev_dmabuf_binding struct which represents the dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to rx queues on the netdevice. On the binding, the dma_buf_attach & dma_buf_map_attachment will occur. The entries in the sg_table from mapping will be inserted into a genpool to make it ready for allocation.
The chunks in the genpool are owned by a dmabuf_chunk_owner struct which holds the dma-buf offset of the base of the chunk and the dma_addr of the chunk. Both are needed to use allocations that come from this chunk.
We create a new type that represents an allocation from the genpool: page_pool_iov. We setup the page_pool_iov allocation size in the genpool to PAGE_SIZE for simplicity: to match the PAGE_SIZE normally allocated by the page pool and given to the drivers.
The user can unbind the dmabuf from the netdevice by closing the netlink socket that established the binding. We do this so that the binding is automatically unbound even if the userspace process crashes.
The binding and unbinding leaves an indicator in struct netdev_rx_queue that the given queue is bound, but the binding doesn't take effect until the driver actually reconfigures its queues, and re-initializes its page pool.
The netdev_dmabuf_binding struct is refcounted, and releases its resources only when all the refs are released.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
RFC v3:
- Support multi rx-queue binding
include/linux/netdevice.h | 80 ++++++++++++++ include/net/netdev_rx_queue.h | 1 + include/net/page_pool/types.h | 27 +++++ net/core/dev.c | 203 ++++++++++++++++++++++++++++++++++ net/core/netdev-genl.c | 116 ++++++++++++++++++- 5 files changed, 425 insertions(+), 2 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index b8bf669212cc..eeeda849115c 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -52,6 +52,8 @@ #include <net/net_trackers.h> #include <net/net_debug.h> #include <net/dropreason-core.h> +#include <linux/xarray.h> +#include <linux/refcount.h>
struct netpoll_info; struct device; @@ -808,6 +810,84 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id, #endif #endif /* CONFIG_RPS */
+struct netdev_dmabuf_binding {
struct dma_buf *dmabuf;
struct dma_buf_attachment *attachment;
struct sg_table *sgt;
struct net_device *dev;
struct gen_pool *chunk_pool;
/* The user holds a ref (via the netlink API) for as long as they want
* the binding to remain alive. Each page pool using this binding holds
* a ref to keep the binding alive. Each allocated page_pool_iov holds a
* ref.
*
* The binding undos itself and unmaps the underlying dmabuf once all
* those refs are dropped and the binding is no longer desired or in
* use.
*/
refcount_t ref;
/* The portid of the user that owns this binding. Used for netlink to
* notify us of the user dropping the bind.
*/
u32 owner_nlportid;
/* The list of bindings currently active. Used for netlink to notify us
* of the user dropping the bind.
*/
struct list_head list;
/* rxq's this binding is active on. */
struct xarray bound_rxq_list;
+};
+#ifdef CONFIG_DMA_SHARED_BUFFER +void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
struct netdev_dmabuf_binding **out);
+void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
struct netdev_dmabuf_binding *binding);
+#else +static inline void +__netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{ +}
+static inline int netdev_bind_dmabuf(struct net_device *dev,
unsigned int dmabuf_fd,
struct netdev_dmabuf_binding **out)
+{
return -EOPNOTSUPP;
+} +static inline void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{ +}
+static inline int +netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
struct netdev_dmabuf_binding *binding)
+{
return -EOPNOTSUPP;
+} +#endif
+static inline void +netdev_devmem_binding_get(struct netdev_dmabuf_binding *binding) +{
refcount_inc(&binding->ref);
+}
+static inline void +netdev_devmem_binding_put(struct netdev_dmabuf_binding *binding) +{
if (!refcount_dec_and_test(&binding->ref))
return;
__netdev_devmem_binding_free(binding);
+}
/* XPS map type and offset of the xps map within net_device->xps_maps[]. */ enum xps_map_type { XPS_CPUS = 0, diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h index cdcafb30d437..1bfcf60a145d 100644 --- a/include/net/netdev_rx_queue.h +++ b/include/net/netdev_rx_queue.h @@ -21,6 +21,7 @@ struct netdev_rx_queue { #ifdef CONFIG_XDP_SOCKETS struct xsk_buff_pool *pool; #endif
struct netdev_dmabuf_binding *binding;
@Pavel - They are using struct netdev_rx_queue to hold the binding, which is an object that holds the state and is mapped 1:1 to an rxq. This object is similar to our "interface queue". I wonder if we should re-visit using this generic struct, instead of driver specific structs e.g. bnxt_rx_ring_info?
} ____cacheline_aligned_in_smp;
/* diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index d4bea053bb7e..64386325d965 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -133,6 +133,33 @@ struct pp_memory_provider_ops { bool (*release_page)(struct page_pool *pool, struct page *page); };
+/* page_pool_iov support */
+/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist
- entry from the dmabuf is inserted into the genpool as a chunk, and needs
- this owner struct to keep track of some metadata necessary to create
- allocations from this chunk.
- */
+struct dmabuf_genpool_chunk_owner {
/* Offset into the dma-buf where this chunk starts. */
unsigned long base_virtual;
/* dma_addr of the start of the chunk. */
dma_addr_t base_dma_addr;
/* Array of page_pool_iovs for this chunk. */
struct page_pool_iov *ppiovs;
size_t num_ppiovs;
struct netdev_dmabuf_binding *binding;
+};
+struct page_pool_iov {
struct dmabuf_genpool_chunk_owner *owner;
refcount_t refcount;
+};
struct page_pool { struct page_pool_params p;
diff --git a/net/core/dev.c b/net/core/dev.c index a37a932a3e14..c8c3709d42c8 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -153,6 +153,9 @@ #include <linux/prandom.h> #include <linux/once_lite.h> #include <net/netdev_rx_queue.h> +#include <linux/genalloc.h> +#include <linux/dma-buf.h> +#include <net/page_pool/types.h>
#include "dev.h" #include "net-sysfs.h" @@ -2040,6 +2043,206 @@ static int call_netdevice_notifiers_mtu(unsigned long val, return call_netdevice_notifiers_info(val, &info.info); }
+/* Device memory support */
+#ifdef CONFIG_DMA_SHARED_BUFFER +static void netdev_devmem_free_chunk_owner(struct gen_pool *genpool,
struct gen_pool_chunk *chunk,
void *not_used)
+{
struct dmabuf_genpool_chunk_owner *owner = chunk->owner;
kvfree(owner->ppiovs);
kfree(owner);
+}
+void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{
size_t size, avail;
gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
size = gen_pool_size(binding->chunk_pool);
avail = gen_pool_avail(binding->chunk_pool);
if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu",
size, avail))
gen_pool_destroy(binding->chunk_pool);
dma_buf_unmap_attachment(binding->attachment, binding->sgt,
DMA_BIDIRECTIONAL);
dma_buf_detach(binding->dmabuf, binding->attachment);
dma_buf_put(binding->dmabuf);
kfree(binding);
+}
+void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{
struct netdev_rx_queue *rxq;
unsigned long xa_idx;
if (!binding)
return;
list_del_rcu(&binding->list);
xa_for_each(&binding->bound_rxq_list, xa_idx, rxq)
if (rxq->binding == binding)
/* We hold the rtnl_lock while binding/unbinding
* dma-buf, so we can't race with another thread that
* is also modifying this value. However, the driver
* may read this config while it's creating its
* rx-queues. WRITE_ONCE() here to match the
* READ_ONCE() in the driver.
*/
WRITE_ONCE(rxq->binding, NULL);
netdev_devmem_binding_put(binding);
+}
+int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
struct netdev_dmabuf_binding *binding)
+{
struct netdev_rx_queue *rxq;
u32 xa_idx;
int err;
rxq = __netif_get_rx_queue(dev, rxq_idx);
if (rxq->binding)
return -EEXIST;
err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
GFP_KERNEL);
if (err)
return err;
/* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
* race with another thread that is also modifying this value. However,
* the driver may read this config while it's creating its rx-queues.
* WRITE_ONCE() here to match the READ_ONCE() in the driver.
*/
WRITE_ONCE(rxq->binding, binding);
return 0;
+}
+int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
struct netdev_dmabuf_binding **out)
I'm not entirely familiar with the Netlink API. Mina, do you know if we can call into netdev_bind_dmabuf or netdev_nl_bind_rx_doit directly, without needing to call send/recv on a Netlink socket? We likely want io_uring to do the registration of a dmabuf fd and keep ownership over it.
You can likely call into netdev_bind_dmabuf(), but not netdev_nl_bind_rx_doit. The latter is very netlink specific.
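For illustration, a rough sketch of what such an in-kernel caller (e.g. an io_uring registration path) might do with the helpers from this patch, assuming the caller takes rtnl_lock itself and keeps its own reference to the binding for teardown; the wrapper name is made up, and the error path simply mirrors the err_unbind path of the netlink doit:

static int io_zcrx_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
			       u32 rxq_idx,
			       struct netdev_dmabuf_binding **out)
{
	struct netdev_dmabuf_binding *binding;
	int err;

	/* The netlink doit holds rtnl_lock around bind/unbind; an in-kernel
	 * caller would have to do the same.
	 */
	rtnl_lock();
	err = netdev_bind_dmabuf(dev, dmabuf_fd, &binding);
	if (err)
		goto out_unlock;

	err = netdev_bind_dmabuf_to_queue(dev, rxq_idx, binding);
	if (err) {
		netdev_unbind_dmabuf(binding);
		goto out_unlock;
	}

	/* Caller owns a ref on the binding and must call
	 * netdev_unbind_dmabuf() (under rtnl_lock) on teardown.
	 */
	*out = binding;
out_unlock:
	rtnl_unlock();
	return err;
}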
+{
struct netdev_dmabuf_binding *binding;
struct scatterlist *sg;
struct dma_buf *dmabuf;
unsigned int sg_idx, i;
unsigned long virtual;
int err;
if (!capable(CAP_NET_ADMIN))
return -EPERM;
dmabuf = dma_buf_get(dmabuf_fd);
if (IS_ERR_OR_NULL(dmabuf))
return -EBADFD;
binding = kzalloc_node(sizeof(*binding), GFP_KERNEL,
dev_to_node(&dev->dev));
if (!binding) {
err = -ENOMEM;
goto err_put_dmabuf;
}
xa_init_flags(&binding->bound_rxq_list, XA_FLAGS_ALLOC);
refcount_set(&binding->ref, 1);
binding->dmabuf = dmabuf;
binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
if (IS_ERR(binding->attachment)) {
err = PTR_ERR(binding->attachment);
goto err_free_binding;
}
binding->sgt = dma_buf_map_attachment(binding->attachment,
DMA_BIDIRECTIONAL);
if (IS_ERR(binding->sgt)) {
err = PTR_ERR(binding->sgt);
goto err_detach;
}
/* For simplicity we expect to make PAGE_SIZE allocations, but the
* binding can be much more flexible than that. We may be able to
* allocate MTU sized chunks here. Leave that for future work...
*/
binding->chunk_pool = gen_pool_create(PAGE_SHIFT,
dev_to_node(&dev->dev));
if (!binding->chunk_pool) {
err = -ENOMEM;
goto err_unmap;
}
virtual = 0;
for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
dma_addr_t dma_addr = sg_dma_address(sg);
struct dmabuf_genpool_chunk_owner *owner;
size_t len = sg_dma_len(sg);
struct page_pool_iov *ppiov;
owner = kzalloc_node(sizeof(*owner), GFP_KERNEL,
dev_to_node(&dev->dev));
owner->base_virtual = virtual;
owner->base_dma_addr = dma_addr;
owner->num_ppiovs = len / PAGE_SIZE;
owner->binding = binding;
err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
dma_addr, len, dev_to_node(&dev->dev),
owner);
if (err) {
err = -EINVAL;
goto err_free_chunks;
}
owner->ppiovs = kvmalloc_array(owner->num_ppiovs,
sizeof(*owner->ppiovs),
GFP_KERNEL);
if (!owner->ppiovs) {
err = -ENOMEM;
goto err_free_chunks;
}
for (i = 0; i < owner->num_ppiovs; i++) {
ppiov = &owner->ppiovs[i];
ppiov->owner = owner;
refcount_set(&ppiov->refcount, 1);
}
dma_addr += len;
virtual += len;
}
*out = binding;
return 0;
+err_free_chunks:
gen_pool_for_each_chunk(binding->chunk_pool,
netdev_devmem_free_chunk_owner, NULL);
gen_pool_destroy(binding->chunk_pool);
+err_unmap:
dma_buf_unmap_attachment(binding->attachment, binding->sgt,
DMA_BIDIRECTIONAL);
+err_detach:
dma_buf_detach(dmabuf, binding->attachment);
+err_free_binding:
kfree(binding);
+err_put_dmabuf:
dma_buf_put(dmabuf);
return err;
+} +#endif
#ifdef CONFIG_NET_INGRESS static DEFINE_STATIC_KEY_FALSE(ingress_needed_key);
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index 59d3d512d9cc..2c2a62593217 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -129,10 +129,89 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; }
-/* Stub */ +static LIST_HEAD(netdev_rbinding_list);
int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info) {
return 0;
struct netdev_dmabuf_binding *out_binding;
u32 ifindex, dmabuf_fd, rxq_idx;
struct net_device *netdev;
struct sk_buff *rsp;
int rem, err = 0;
void *hdr;
struct nlattr *attr;
if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) ||
GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_DMABUF_FD) ||
GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_QUEUES))
return -EINVAL;
ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]);
dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_BIND_DMABUF_DMABUF_FD]);
rtnl_lock();
netdev = __dev_get_by_index(genl_info_net(info), ifindex);
if (!netdev) {
err = -ENODEV;
goto err_unlock;
}
err = netdev_bind_dmabuf(netdev, dmabuf_fd, &out_binding);
if (err)
goto err_unlock;
nla_for_each_attr(attr, genlmsg_data(info->genlhdr),
genlmsg_len(info->genlhdr), rem) {
switch (nla_type(attr)) {
case NETDEV_A_BIND_DMABUF_QUEUES:
rxq_idx = nla_get_u32(attr);
if (rxq_idx >= netdev->num_rx_queues) {
err = -ERANGE;
goto err_unbind;
}
err = netdev_bind_dmabuf_to_queue(netdev, rxq_idx,
out_binding);
if (err)
goto err_unbind;
break;
default:
break;
}
}
out_binding->owner_nlportid = info->snd_portid;
list_add_rcu(&out_binding->list, &netdev_rbinding_list);
rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL);
if (!rsp) {
err = -ENOMEM;
goto err_unbind;
}
hdr = genlmsg_put(rsp, info->snd_portid, info->snd_seq,
&netdev_nl_family, 0, info->genlhdr->cmd);
if (!hdr) {
err = -EMSGSIZE;
goto err_genlmsg_free;
}
genlmsg_end(rsp, hdr);
rtnl_unlock();
return genlmsg_reply(rsp, info);
+err_genlmsg_free:
nlmsg_free(rsp);
+err_unbind:
netdev_unbind_dmabuf(out_binding);
+err_unlock:
rtnl_unlock();
return err;
}
static int netdev_genl_netdevice_event(struct notifier_block *nb, @@ -155,10 +234,37 @@ static int netdev_genl_netdevice_event(struct notifier_block *nb, return NOTIFY_OK; }
+static int netdev_netlink_notify(struct notifier_block *nb, unsigned long state,
void *_notify)
+{
struct netlink_notify *notify = _notify;
struct netdev_dmabuf_binding *rbinding;
if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC)
return NOTIFY_DONE;
rcu_read_lock();
list_for_each_entry_rcu(rbinding, &netdev_rbinding_list, list) {
if (rbinding->owner_nlportid == notify->portid) {
netdev_unbind_dmabuf(rbinding);
break;
}
}
rcu_read_unlock();
return NOTIFY_OK;
+}
static struct notifier_block netdev_genl_nb = { .notifier_call = netdev_genl_netdevice_event, };
+static struct notifier_block netdev_netlink_notifier = {
.notifier_call = netdev_netlink_notify,
+};
Is this mechanism what cleans up TCP devmem in case userspace crashes and the associated Netlink socket is closed?
Correct.
static int __init netdev_genl_init(void) { int err; @@ -171,8 +277,14 @@ static int __init netdev_genl_init(void) if (err) goto err_unreg_ntf;
err = netlink_register_notifier(&netdev_netlink_notifier);
if (err)
goto err_unreg_family;
return 0;
+err_unreg_family:
genl_unregister_family(&netdev_nl_family);
err_unreg_ntf: unregister_netdevice_notifier(&netdev_genl_nb); return err;
On Sun, 2023-11-05 at 18:44 -0800, Mina Almasry wrote: [...]
+int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd,
struct netdev_dmabuf_binding **out)
+{
- struct netdev_dmabuf_binding *binding;
- struct scatterlist *sg;
- struct dma_buf *dmabuf;
- unsigned int sg_idx, i;
- unsigned long virtual;
- int err;
- if (!capable(CAP_NET_ADMIN))
return -EPERM;
- dmabuf = dma_buf_get(dmabuf_fd);
- if (IS_ERR_OR_NULL(dmabuf))
return -EBADFD;
- binding = kzalloc_node(sizeof(*binding), GFP_KERNEL,
dev_to_node(&dev->dev));
- if (!binding) {
err = -ENOMEM;
goto err_put_dmabuf;
- }
- xa_init_flags(&binding->bound_rxq_list, XA_FLAGS_ALLOC);
- refcount_set(&binding->ref, 1);
- binding->dmabuf = dmabuf;
- binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent);
- if (IS_ERR(binding->attachment)) {
err = PTR_ERR(binding->attachment);
goto err_free_binding;
- }
- binding->sgt = dma_buf_map_attachment(binding->attachment,
DMA_BIDIRECTIONAL);
- if (IS_ERR(binding->sgt)) {
err = PTR_ERR(binding->sgt);
goto err_detach;
- }
- /* For simplicity we expect to make PAGE_SIZE allocations, but the
* binding can be much more flexible than that. We may be able to
* allocate MTU sized chunks here. Leave that for future work...
*/
- binding->chunk_pool = gen_pool_create(PAGE_SHIFT,
dev_to_node(&dev->dev));
- if (!binding->chunk_pool) {
err = -ENOMEM;
goto err_unmap;
- }
- virtual = 0;
- for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) {
dma_addr_t dma_addr = sg_dma_address(sg);
struct dmabuf_genpool_chunk_owner *owner;
size_t len = sg_dma_len(sg);
struct page_pool_iov *ppiov;
owner = kzalloc_node(sizeof(*owner), GFP_KERNEL,
dev_to_node(&dev->dev));
owner->base_virtual = virtual;
owner->base_dma_addr = dma_addr;
owner->num_ppiovs = len / PAGE_SIZE;
owner->binding = binding;
err = gen_pool_add_owner(binding->chunk_pool, dma_addr,
dma_addr, len, dev_to_node(&dev->dev),
owner);
if (err) {
err = -EINVAL;
goto err_free_chunks;
}
owner->ppiovs = kvmalloc_array(owner->num_ppiovs,
sizeof(*owner->ppiovs),
GFP_KERNEL);
if (!owner->ppiovs) {
err = -ENOMEM;
goto err_free_chunks;
}
for (i = 0; i < owner->num_ppiovs; i++) {
ppiov = &owner->ppiovs[i];
ppiov->owner = owner;
refcount_set(&ppiov->refcount, 1);
}
dma_addr += len;
I'm trying to wrap my head around the whole infra... the above line is confusing. Why do you increment dma_addr? It will be re-initialized in the next iteration.
Cheers,
Paolo
On Thu, Nov 9, 2023 at 12:30 AM Paolo Abeni pabeni@redhat.com wrote:
I'm trying to wrap my head around the whole infra... the above line is confusing. Why do you increment dma_addr? It will be re-initialized in the next iteration.
That is just a mistake, sorry. Will remove this increment.
On Thu, Nov 9, 2023 at 1:29 AM Yunsheng Lin linyunsheng@huawei.com wrote:
gen_pool_destroy() BUG_ON()s if the pool is not empty at the time of destruction. Technically that should never happen, because __netdev_devmem_binding_free() should only be called when the refcount hits 0, so all the chunks have been freed back to the gen_pool. But, just in case, I don't want to crash the server just because I'm leaking a chunk... this is a bit of defensive programming that is typically frowned upon, but the behavior of gen_pool is so severe that I think the WARN() + check is warranted here.
It seems pretty normal for the above to happen nowadays because of the retransmit timeouts and NAPI defer schemes mentioned below:
https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgi...
And currently page pool core handles that by using a workqueue.
Forgive me but I'm not understanding the concern here.
__netdev_devmem_binding_free() is called when binding->ref hits 0.
binding->ref is incremented when an iov slice of the dma-buf is allocated, and decremented when an iov is freed. So, __netdev_devmem_binding_free() can't really be called unless all the iovs have been freed, and gen_pool_size() == gen_pool_avail(), regardless of what's happening on the page_pool side of things, right?
I seem to have misunderstood it. In that case, it seems to be defensive programming, like the other checks.
Looking at it more closely, it seems napi_frag_unref() calls page_pool_page_put_many() directly, which means devmem seems to be bypassing the napi_safe optimization.
Can napi_frag_unref() reuse napi_pp_put_page() in order to reuse the napi_safe optimization?
I think it already does. page_pool_page_put_many() is only called if !recycle or !napi_pp_put_page(). In that case page_pool_page_put_many() is just a replacement for put_page(), because this 'page' may be an iov.
On 2023/11/10 10:59, Mina Almasry wrote:
On Thu, Nov 9, 2023 at 12:30 AM Paolo Abeni pabeni@redhat.com wrote:
I'm trying to wrap my head around the whole infra... the above line is confusing. Why do you increment dma_addr? It will be re-initialized in the next iteration.
That is just a mistake, sorry. Will remove this increment.
You seem to be combining comments from different threads and replying in one thread. I am not sure that is good practice, and I almost missed the reply below as I don't seem to be cc'ed.
On Thu, Nov 9, 2023 at 1:29 AM Yunsheng Lin linyunsheng@huawei.com wrote:
gen_pool_destroy() BUG_ON()s if the pool is not empty at the time of destruction. Technically that should never happen, because __netdev_devmem_binding_free() should only be called when the refcount hits 0, so all the chunks have been freed back to the gen_pool. But, just in case, I don't want to crash the server just because I'm leaking a chunk... this is a bit of defensive programming that is typically frowned upon, but the behavior of gen_pool is so severe that I think the WARN() + check is warranted here.
It seems pretty normal for the above to happen nowadays because of the retransmit timeouts and NAPI defer schemes mentioned below:
https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgi...
And currently page pool core handles that by using a workqueue.
Forgive me but I'm not understanding the concern here.
__netdev_devmem_binding_free() is called when binding->ref hits 0.
binding->ref is incremented when an iov slice of the dma-buf is allocated, and decremented when an iov is freed. So, __netdev_devmem_binding_free() can't really be called unless all the iovs have been freed, and gen_pool_size() == gen_pool_avail(), regardless of what's happening on the page_pool side of things, right?
I seem to have misunderstood it. In that case, it seems to be defensive programming, like the other checks.
Looking at it more closely, it seems napi_frag_unref() calls page_pool_page_put_many() directly, which means devmem seems to be bypassing the napi_safe optimization.
Can napi_frag_unref() reuse napi_pp_put_page() in order to reuse the napi_safe optimization?
I think it already does. page_pool_page_put_many() is only called if !recycle or !napi_pp_put_page(). In that case page_pool_page_put_many() is just a replacement for put_page(), because this 'page' may be an iov.
Is there a reason not to call napi_pp_put_page() for devmem too, instead of calling page_pool_page_put_many()? The mem provider has a 'release_page' op, and calling page_pool_page_put_many() directly here seems to bypass the 'release_page' op, which means devmem is bypassing most of the main features of the page pool.
As far as I can tell, the main features of page pool are:
1. Allow lockless allocation and freeing in the pool->alloc cache by utilizing the NAPI non-concurrent context.
2. Allow concurrent allocation and freeing in the pool->ring cache by utilizing ptr_ring.
3. Allow dma map/unmap and cache sync optimization.
4. Allow detailed stats logging and tracing.
5. Allow some bulk allocation and freeing.
6. Support both skb packets and xdp frames.
I am wondering which of these main features devmem is actually utilizing by integrating into the page pool?
It seems the driver could just call netdev_alloc_devmem(), and napi_frag_unref() could call netdev_free_devmem() directly, without integrating into the page pool, and it should just work too?
Maybe we should consider creating a new thin layer to demux to page pool, devmem, or other memory types, if my suggestion does not work out?
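Roughly, the kind of thin demux layer being suggested here might look like the sketch below. The wrapper names and the is_devmem flag are made up for illustration; netdev_alloc_devmem()/netdev_free_devmem() and page_pool_dev_alloc_pages() are the helpers already discussed in this series:

/* Hypothetical demux layer: one alloc/free pair for the driver, dispatching
 * to the page pool or to the devmem genpool depending on what the rx-queue
 * is bound to.
 */
static void *netmem_alloc(struct page_pool *pool,
			  struct netdev_dmabuf_binding *binding)
{
	if (binding)
		return netdev_alloc_devmem(binding);

	return page_pool_dev_alloc_pages(pool);
}

static void netmem_free(void *netmem, bool is_devmem)
{
	if (is_devmem)
		netdev_free_devmem((struct page_pool_iov *)netmem);
	else
		put_page((struct page *)netmem);
}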
On Thu, Nov 9, 2023 at 11:38 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/10 10:59, Mina Almasry wrote:
On Thu, Nov 9, 2023 at 12:30 AM Paolo Abeni pabeni@redhat.com wrote:
I'm trying to wrap my head around the whole infra... the above line is confusing. Why do you increment dma_addr? It will be re-initialized in the next iteration.
That is just a mistake, sorry. Will remove this increment.
You seem to be combining comments from different threads and replying in one thread. I am not sure that is good practice, and I almost missed the reply below as I don't seem to be cc'ed.
Sorry about that.
On Thu, Nov 9, 2023 at 1:29 AM Yunsheng Lin linyunsheng@huawei.com wrote:
gen_pool_destroy() BUG_ON()s if the pool is not empty at the time of destruction. Technically that should never happen, because __netdev_devmem_binding_free() should only be called when the refcount hits 0, so all the chunks have been freed back to the gen_pool. But, just in case, I don't want to crash the server just because I'm leaking a chunk... this is a bit of defensive programming that is typically frowned upon, but the behavior of gen_pool is so severe that I think the WARN() + check is warranted here.
It seems pretty normal for the above to happen nowadays because of the retransmit timeouts and NAPI defer schemes mentioned below:
https://lkml.kernel.org/netdev/168269854650.2191653.8465259808498269815.stgi...
And currently page pool core handles that by using a workqueue.
Forgive me but I'm not understanding the concern here.
__netdev_devmem_binding_free() is called when binding->ref hits 0.
binding->ref is incremented when an iov slice of the dma-buf is allocated, and decremented when an iov is freed. So, __netdev_devmem_binding_free() can't really be called unless all the iovs have been freed, and gen_pool_size() == gen_pool_avail(), regardless of what's happening on the page_pool side of things, right?
I seem to have misunderstood it. In that case, it seems to be defensive programming, like the other checks.
Looking at it more closely, it seems napi_frag_unref() calls page_pool_page_put_many() directly, which means devmem seems to be bypassing the napi_safe optimization.
Can napi_frag_unref() reuse napi_pp_put_page() in order to reuse the napi_safe optimization?
I think it already does. page_pool_page_put_many() is only called if !recycle or !napi_pp_put_page(). In that case page_pool_page_put_many() is just a replacement for put_page(), because this 'page' may be an iov.
Is there a reason not to call napi_pp_put_page() for devmem too, instead of calling page_pool_page_put_many()? The mem provider has a 'release_page' op, and calling page_pool_page_put_many() directly here seems to bypass the 'release_page' op, which means devmem is bypassing most of the main features of the page pool.
I think we're still calling napi_pp_put_page() as normal:
/**
@@ -3441,13 +3466,13 @@ bool napi_pp_put_page(struct page *page, bool napi_safe);
 static inline void napi_frag_unref(skb_frag_t *frag, bool recycle,
				    bool napi_safe)
 {
-	struct page *page = skb_frag_page(frag);
-
 #ifdef CONFIG_PAGE_POOL
-	if (recycle && napi_pp_put_page(page, napi_safe))
+	if (recycle && napi_pp_put_page(frag->bv_page, napi_safe))
		return;
+	page_pool_page_put_many(frag->bv_page, 1);
+#else
+	put_page(skb_frag_page(frag));
 #endif
-	put_page(page);
 }
The only code change here is to replace put_page() with page_pool_page_put_many(), because bv_page may be a page_pool_iov, so we need to use page_pool_page_put_many(), which handles page_pool_iov correctly. I did not change whether or not napi_pp_put_page() is called; it's still called if recycle==true.
As far as I can tell, the main features of page pool:
- Allow lockless allocation and freeing in pool->alloc cache by utilizing NAPI non-concurrent context.
- Allow concurrent allocation and freeing in pool->ring cache by utilizing ptr_ring.
- Allow dma map/unmap and cache sync optimization.
- Allow detailed stats logging and tracing.
- Allow some bulk allocation and freeing.
- support both skb packet and xdp frame.
I am wondering which of these main features devmem is actually utilizing by integrating into the page pool?
It seems the driver could just call netdev_alloc_devmem(), and napi_frag_unref() could call netdev_free_devmem() directly, without integrating into the page pool, and it should just work too?
Maybe we should consider creating a new thin layer to demux to page pool, devmem, or other memory types, if my suggestion does not work out?
I went through this discussion with Jesper on RFC v2 in this thread:
https://lore.kernel.org/netdev/CAHS8izOVJGJH5WF68OsRWFKJid1_huzzUK+hpKbLcL4p...
which culminates with that email, where he seems on board with the change from a performance POV and on board with hiding the memory-type implementation from the drivers. That thread fully goes over the tradeoffs of integrating with the page pool versus creating something new. Integrating with the page pool abstracts most of the devmem implementation (and other memory types) from the driver. It reuses page pool features like page recycling, for example.
On Sun, 5 Nov 2023 18:44:03 -0800 Mina Almasry wrote:
--- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -52,6 +52,8 @@ #include <net/net_trackers.h> #include <net/net_debug.h> #include <net/dropreason-core.h> +#include <linux/xarray.h> +#include <linux/refcount.h> struct netpoll_info; struct device; @@ -808,6 +810,84 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id, #endif #endif /* CONFIG_RPS */ +struct netdev_dmabuf_binding {
Similar nitpick to the skbuff.h comment. Take this somewhere else, please, it doesn't need to be included in netdevice.h
- struct netdev_dmabuf_binding *rbinding;
the 'r' in rbinding stands for rx? 🤔️
On Fri, Nov 10, 2023 at 3:20 PM Jakub Kicinski kuba@kernel.org wrote:
On Sun, 5 Nov 2023 18:44:03 -0800 Mina Almasry wrote:
--- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -52,6 +52,8 @@ #include <net/net_trackers.h> #include <net/net_debug.h> #include <net/dropreason-core.h> +#include <linux/xarray.h> +#include <linux/refcount.h>
struct netpoll_info; struct device; @@ -808,6 +810,84 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id, #endif #endif /* CONFIG_RPS */
+struct netdev_dmabuf_binding {
Similar nitpick to the skbuff.h comment. Take this somewhere else, please, it doesn't need to be included in netdevice.h
struct netdev_dmabuf_binding *rbinding;
the 'r' in rbinding stands for rx? 🤔️
reverse binding. As in usually it's netdev->binding, but the reverse map holds the bindings themselves so we can unbind them from the netdev.
Implement netdev devmem allocator. The allocator takes a given struct netdev_dmabuf_binding as input and allocates page_pool_iov from that binding.
The allocation simply delegates to the binding's genpool for the allocation logic and wraps the returned memory region in a page_pool_iov struct.
page_pool_iov are refcounted and are freed back to the binding when the refcount drops to 0.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
--- include/linux/netdevice.h | 13 ++++++++++++ include/net/page_pool/helpers.h | 28 +++++++++++++++++++++++++ net/core/dev.c | 37 +++++++++++++++++++++++++++++++++ 3 files changed, 78 insertions(+)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index eeeda849115c..1c351c138a5b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding { };
#ifdef CONFIG_DMA_SHARED_BUFFER +struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding); +void netdev_free_devmem(struct page_pool_iov *ppiov); void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding); int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, struct netdev_dmabuf_binding **out); @@ -850,6 +853,16 @@ void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding); int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, struct netdev_dmabuf_binding *binding); #else +static inline struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding) +{ + return NULL; +} + +static inline void netdev_free_devmem(struct page_pool_iov *ppiov) +{ +} + static inline void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) { diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 4ebd544ae977..78cbb040af94 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -83,6 +83,34 @@ static inline u64 *page_pool_ethtool_stats_get(u64 *data, void *stats) } #endif
+/* page_pool_iov support */ + +static inline struct dmabuf_genpool_chunk_owner * +page_pool_iov_owner(const struct page_pool_iov *ppiov) +{ + return ppiov->owner; +} + +static inline unsigned int page_pool_iov_idx(const struct page_pool_iov *ppiov) +{ + return ppiov - page_pool_iov_owner(ppiov)->ppiovs; +} + +static inline dma_addr_t +page_pool_iov_dma_addr(const struct page_pool_iov *ppiov) +{ + struct dmabuf_genpool_chunk_owner *owner = page_pool_iov_owner(ppiov); + + return owner->base_dma_addr + + ((dma_addr_t)page_pool_iov_idx(ppiov) << PAGE_SHIFT); +} + +static inline struct netdev_dmabuf_binding * +page_pool_iov_binding(const struct page_pool_iov *ppiov) +{ + return page_pool_iov_owner(ppiov)->binding; +} + /** * page_pool_dev_alloc_pages() - allocate a page. * @pool: pool from which to allocate diff --git a/net/core/dev.c b/net/core/dev.c index c8c3709d42c8..2315bbc03ec8 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -156,6 +156,7 @@ #include <linux/genalloc.h> #include <linux/dma-buf.h> #include <net/page_pool/types.h> +#include <net/page_pool/helpers.h>
#include "dev.h" #include "net-sysfs.h" @@ -2077,6 +2078,42 @@ void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) kfree(binding); }
+struct page_pool_iov *netdev_alloc_devmem(struct netdev_dmabuf_binding *binding)
+{
+	struct dmabuf_genpool_chunk_owner *owner;
+	struct page_pool_iov *ppiov;
+	unsigned long dma_addr;
+	ssize_t offset;
+	ssize_t index;
+
+	dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE,
+					(void **)&owner);
+	if (!dma_addr)
+		return NULL;
+
+	offset = dma_addr - owner->base_dma_addr;
+	index = offset / PAGE_SIZE;
+	ppiov = &owner->ppiovs[index];
+
+	netdev_devmem_binding_get(binding);
+
+	return ppiov;
+}
+
+void netdev_free_devmem(struct page_pool_iov *ppiov)
+{
+	struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
+
+	refcount_set(&ppiov->refcount, 1);
+
+	if (gen_pool_has_addr(binding->chunk_pool,
+			      page_pool_iov_dma_addr(ppiov), PAGE_SIZE))
+		gen_pool_free(binding->chunk_pool,
+			      page_pool_iov_dma_addr(ppiov), PAGE_SIZE);
+
+	netdev_devmem_binding_put(binding);
+}
+
 void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding)
 {
	struct netdev_rx_queue *rxq;
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index eeeda849115c..1c351c138a5b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding { }; #ifdef CONFIG_DMA_SHARED_BUFFER +struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding); +void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index eeeda849115c..1c351c138a5b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding { };
#ifdef CONFIG_DMA_SHARED_BUFFER +struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding); +void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
On Mon, Nov 6, 2023 at 11:45 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
+void netdev_free_devmem(struct page_pool_iov *ppiov) +{
struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
refcount_set(&ppiov->refcount, 1);
if (gen_pool_has_addr(binding->chunk_pool,
page_pool_iov_dma_addr(ppiov), PAGE_SIZE))
When gen_pool_has_addr() returns false, does it mean something has gone really wrong here?
Yes, good eye. gen_pool_has_addr() should never return false, but then again, gen_pool_free() BUG_ON()s if it doesn't find the address, which is an extremely severe reaction to what can be a minor bug in the accounting. I prefer to leak rather than crash the machine. It's a bit of defensive programming that is normally frowned upon, but I feel like in this case it's maybe warranted due to the very severe reaction (BUG_ON).
On 11/7/23 3:10 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index eeeda849115c..1c351c138a5b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding { };
#ifdef CONFIG_DMA_SHARED_BUFFER +struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding); +void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
config UDMABUF
        bool "userspace dmabuf misc driver"
        default n
        depends on DMA_SHARED_BUFFER
        depends on MEMFD_CREATE || COMPILE_TEST
        help
          A driver to let userspace turn memfd regions into dma-bufs.
          Qemu can use this to create host dmabufs for guest framebuffers.
Qemu is just a userspace process; it is in no way a special one.
Treating host memory as a dmabuf should radically simplify the io_uring extension of this set. That the io_uring set needs to dive into page_pools is just wrong - complicating the design and code and pushing io_uring into a realm it does not need to be involved in.
Most (all?) of this patch set can work with any memory; only device memory is unreadable.
On Tue, Nov 7, 2023 at 2:55 PM David Ahern dsahern@kernel.org wrote:
On 11/7/23 3:10 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index eeeda849115c..1c351c138a5b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding { };
#ifdef CONFIG_DMA_SHARED_BUFFER +struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding); +void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
config UDMABUF
        bool "userspace dmabuf misc driver"
        default n
        depends on DMA_SHARED_BUFFER
        depends on MEMFD_CREATE || COMPILE_TEST
        help
          A driver to let userspace turn memfd regions into dma-bufs.
          Qemu can use this to create host dmabufs for guest framebuffers.
Qemu is just a userspace process; it is in no way a special one.
Treating host memory as a dmabuf should radically simplify the io_uring extension of this set.
I agree actually, and I was about to make that comment to David Wei's series once I have the time.
David, your io_uring RX zerocopy proposal actually works with devmem TCP, if you're inclined to do that instead, what you'd do roughly is (I think):
- Allocate a memfd,
- Use CONFIG_UDMABUF to create a dma-buf out of that memfd.
- Bind the dma-buf to the NIC using the netlink API in this RFC.
- Your io_uring extensions and io_uring uapi should work as-is almost on top of this series, I think.
If you do this the incoming packets should land into your memfd, which may or may not work for you. In the future if you feel inclined to use device memory, this approach that I'm describing here would be more extensible to device memory, because you'd already be using dma-bufs for your user memory; you'd just replace one kind of dma-buf (UDMABUF) with another.
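For concreteness, a rough userspace sketch of that flow, using the existing udmabuf ioctl interface (UDMABUF_CREATE on /dev/udmabuf); the netlink bind step is elided since it goes through the NETDEV_A_BIND_DMABUF_* attributes from this RFC, and error handling is trimmed:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/udmabuf.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Turn a memfd into a dma-buf fd that can then be bound to rx-queues via
 * the netlink API from this series (NETDEV_A_BIND_DMABUF_DMABUF_FD).
 */
static int create_host_dmabuf(size_t size)
{
	struct udmabuf_create create = { 0 };
	int memfd, devfd, dmabuf_fd;

	memfd = memfd_create("rx-buffer", MFD_ALLOW_SEALING);
	ftruncate(memfd, size);
	/* udmabuf requires the memfd to be sealed against shrinking. */
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	devfd = open("/dev/udmabuf", O_RDWR);
	create.memfd  = memfd;
	create.offset = 0;
	create.size   = size;
	dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);

	close(devfd);
	return dmabuf_fd;
}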
That the io_uring set needs to dive into page_pools is just wrong - complicating the design and code and pushing io_uring into a realm it does not need to be involved in.
Most (all?) of this patch set can work with any memory; only device memory is unreadable.
On 2023-11-07 15:03, Mina Almasry wrote:
On Tue, Nov 7, 2023 at 2:55 PM David Ahern dsahern@kernel.org wrote:
On 11/7/23 3:10 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index eeeda849115c..1c351c138a5b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding { };
#ifdef CONFIG_DMA_SHARED_BUFFER +struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding); +void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
config UDMABUF
        bool "userspace dmabuf misc driver"
        default n
        depends on DMA_SHARED_BUFFER
        depends on MEMFD_CREATE || COMPILE_TEST
        help
          A driver to let userspace turn memfd regions into dma-bufs.
          Qemu can use this to create host dmabufs for guest framebuffers.
Qemu is just a userspace process; it is in no way a special one.
Treating host memory as a dmabuf should radically simplify the io_uring extension of this set.
I agree actually, and I was about to make that comment to David Wei's series once I have the time.
David, your io_uring RX zerocopy proposal actually works with devmem TCP, if you're inclined to do that instead, what you'd do roughly is (I think):
- Allocate a memfd,
- Use CONFIG_UDMABUF to create a dma-buf out of that memfd.
- Bind the dma-buf to the NIC using the netlink API in this RFC.
- Your io_uring extensions and io_uring uapi should work as-is almost
on top of this series, I think.
If you do this the incoming packets should land into your memfd, which may or may not work for you. In the future if you feel inclined to use device memory, this approach that I'm describing here would be more extensible to device memory, because you'd already be using dma-bufs for your user memory; you'd just replace one kind of dma-buf (UDMABUF) with another.
How would TCP devmem change if we no longer assume that dmabuf is device memory? Pavel will know more on the perf side, but I wouldn't want to put any if/else on the hot path if we can avoid it. I could be wrong, but right now in my mind using different memory providers solves this neatly and the driver/networking stack doesn't need to care.
Mina, I believe you said at NetDev conf that you already had an udmabuf implementation for testing. I would like to see this (you can send privately) to see how TCP devmem would handle both user memory and device memory.
That the io_uring set needs to dive into page_pools is just wrong - complicating the design and code and pushing io_uring into a realm it does not need to be involved in.
Most (all?) of this patch set can work with any memory; only device memory is unreadable.
On 11/7/23 23:03, Mina Almasry wrote:
On Tue, Nov 7, 2023 at 2:55 PM David Ahern dsahern@kernel.org wrote:
On 11/7/23 3:10 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index eeeda849115c..1c351c138a5b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -843,6 +843,9 @@ struct netdev_dmabuf_binding {
 };

 #ifdef CONFIG_DMA_SHARED_BUFFER
+struct page_pool_iov *
+netdev_alloc_devmem(struct netdev_dmabuf_binding *binding);
+void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
config UDMABUF
	bool "userspace dmabuf misc driver"
	default n
	depends on DMA_SHARED_BUFFER
	depends on MEMFD_CREATE || COMPILE_TEST
	help
	  A driver to let userspace turn memfd regions into dma-bufs.
	  Qemu can use this to create host dmabufs for guest framebuffers.
Qemu is just a userspace process; it is in no way a special one.
Treating host memory as a dmabuf should radically simplify the io_uring extension of this set.
I agree actually, and I was about to make that comment to David Wei's series once I have the time.
David, your io_uring RX zerocopy proposal actually works with devmem TCP, if you're inclined to do that instead, what you'd do roughly is (I think):
That would be a Frankenstein's monster api with no good reason for it. You bind memory via netlink because you don't have a proper context to work with otherwise; io_uring serves as that context, with a separate and precise abstraction around queues. Same with dmabufs: it totally makes sense for device memory, but wrapping host memory into a file just to immediately unwrap it again, with no particular benefit from doing so, doesn't seem like a good uapi. Currently, the difference will be hidden by io_uring.
And we'd still need a hook in pp's get-page path to grab buffers from the buffer ring instead of refilling via SO_DEVMEM_DONTNEED, plus a callback for when skbs are dropped. It's just that instead of a new set of pp ops it would be a branch in the devmem path. io_uring might want to use the added iov format for device memory in the future, or even before that; io_uring doesn't really care whether the buffers are pages or not.
My other big concern is how many optimisations it would fence us off from. With the current io_uring RFC I can get rid of all per-buffer atomic refcounting and replace it with a single percpu count per skb. Hopefully that will still be doable once we place it on top of pp providers.
- Allocate a memfd,
- Use CONFIG_UDMABUF to create a dma-buf out of that memfd.
- Bind the dma-buf to the NIC using the netlink API in this RFC.
- Your io_uring extensions and io_uring uapi should work as-is almost
on top of this series, I think.
If you do this the incoming packets should land into your memfd, which may or may not work for you. In the future if you feel inclined to use device memory, this approach that I'm describing here would be more extensible to device memory, because you'd already be using dma-bufs for your user memory; you'd just replace one kind of dma-buf (UDMABUF) with another.
That the io_uring set needs to dive into page_pools is just wrong - complicating the design and code and pushing io_uring into a realm it does not need to be involved in.
I disagree. How does it complicate it? io_uring will be just yet another provider implementing the callbacks of the API created for such use cases, without changing common pp/net bits. The rest of the code lives in io_uring, implementing the interaction with userspace and other usability features; some amount of code is unavoidable anyway if we want a convenient and performant api via io_uring.
Most (all?) of this patch set can work with any memory; only device memory is unreadable.
On 11/10/23 7:26 AM, Pavel Begunkov wrote:
On 11/7/23 23:03, Mina Almasry wrote:
On Tue, Nov 7, 2023 at 2:55 PM David Ahern dsahern@kernel.org wrote:
On 11/7/23 3:10 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index eeeda849115c..1c351c138a5b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -843,6 +843,9 @@ struct netdev_dmabuf_binding {
 };

 #ifdef CONFIG_DMA_SHARED_BUFFER
+struct page_pool_iov *
+netdev_alloc_devmem(struct netdev_dmabuf_binding *binding);
+void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
config UDMABUF
	bool "userspace dmabuf misc driver"
	default n
	depends on DMA_SHARED_BUFFER
	depends on MEMFD_CREATE || COMPILE_TEST
	help
	  A driver to let userspace turn memfd regions into dma-bufs.
	  Qemu can use this to create host dmabufs for guest framebuffers.
Qemu is just a userspace process; it is in no way a special one.
Treating host memory as a dmabuf should radically simplify the io_uring extension of this set.
I agree actually, and I was about to make that comment to David Wei's series once I have the time.
David, your io_uring RX zerocopy proposal actually works with devmem TCP, if you're inclined to do that instead, what you'd do roughly is (I think):
That would be a Frankenstein's monster api with no good reason for it.
It brings a consistent API from a networking perspective.
io_uring should not need to be in the page pool and memory management business. Have you or David coded up the re-use of the socket APIs with dmabuf to see how much smaller it makes the io_uring change - or even walked through from a theoretical perspective?
On 11/11/23 17:19, David Ahern wrote:
On 11/10/23 7:26 AM, Pavel Begunkov wrote:
On 11/7/23 23:03, Mina Almasry wrote:
On Tue, Nov 7, 2023 at 2:55 PM David Ahern dsahern@kernel.org wrote:
On 11/7/23 3:10 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index eeeda849115c..1c351c138a5b 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding {
>  };
>
>  #ifdef CONFIG_DMA_SHARED_BUFFER
> +struct page_pool_iov *
> +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding);
> +void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
config UDMABUF
	bool "userspace dmabuf misc driver"
	default n
	depends on DMA_SHARED_BUFFER
	depends on MEMFD_CREATE || COMPILE_TEST
	help
	  A driver to let userspace turn memfd regions into dma-bufs.
	  Qemu can use this to create host dmabufs for guest framebuffers.
Qemu is just a userspace process; it is in no way a special one.
Treating host memory as a dmabuf should radically simplify the io_uring extension of this set.
I agree actually, and I was about to make that comment to David Wei's series once I have the time.
David, your io_uring RX zerocopy proposal actually works with devmem TCP, if you're inclined to do that instead, what you'd do roughly is (I think):
That would be a Frankenstein's monster api with no good reason for it.
It brings a consistent API from a networking perspective.
io_uring should not need to be in the page pool and memory management business. Have you or David coded up the re-use of the socket APIs with dmabuf to see how much smaller it makes the io_uring change - or even walked through from a theoretical perspective?
Yes, we did the mental exercise, which is why we're converting to pp. I don't see many opportunities for reuse for the main data path, potentially apart from using the iov format instead of pages.
If the goal is to minimise the amount of code, it can mimic the tcp devmem api with netlink, ioctl-ish buffer return, but that'd be a pretty bad api for io_uring, overly complicated and limiting optimisation options. If not, then we have to do some buffer management in io_uring, and I don't see anything wrong with that. It shouldn't be a burden for networking if all that extra code is contained in io_uring and only exposed via pp ops and following the rules.
On 2023-11-07 14:55, David Ahern wrote:
On 11/7/23 3:10 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index eeeda849115c..1c351c138a5b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -843,6 +843,9 @@ struct netdev_dmabuf_binding {
 };

 #ifdef CONFIG_DMA_SHARED_BUFFER
+struct page_pool_iov *
+netdev_alloc_devmem(struct netdev_dmabuf_binding *binding);
+void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
config UDMABUF
	bool "userspace dmabuf misc driver"
	default n
	depends on DMA_SHARED_BUFFER
	depends on MEMFD_CREATE || COMPILE_TEST
	help
	  A driver to let userspace turn memfd regions into dma-bufs.
	  Qemu can use this to create host dmabufs for guest framebuffers.
Qemu is just a userspace process; it is in no way a special one.
Treating host memory as a dmabuf should radically simplify the io_uring extension of this set. That the io_uring set needs to dive into page_pools is just wrong - complicating the design and code and pushing io_uring into a realm it does not need to be involved in.
I think our io_uring proposal will already be vastly simplified once we rebase onto Kuba's page pool memory provider API. Using udmabuf means depending on a driver designed for testing, vs io_uring's registered buffers API that's been tried and tested.
I don't have an intuitive understanding of the trade offs yet, and would need to try out udmabuf and compare vs say using our own page pool memory provider.
Most (all?) of this patch set can work with any memory; only device memory is unreadable.
On 2023/11/8 6:10, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index eeeda849115c..1c351c138a5b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -843,6 +843,9 @@ struct netdev_dmabuf_binding {
 };

 #ifdef CONFIG_DMA_SHARED_BUFFER
+struct page_pool_iov *
+netdev_alloc_devmem(struct netdev_dmabuf_binding *binding);
+void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
On Mon, Nov 6, 2023 at 11:45 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
+void netdev_free_devmem(struct page_pool_iov *ppiov) +{
struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
refcount_set(&ppiov->refcount, 1);
if (gen_pool_has_addr(binding->chunk_pool,
page_pool_iov_dma_addr(ppiov), PAGE_SIZE))
When gen_pool_has_addr() returns false, does it mean something has gone really wrong here?
Yes, good eye. gen_pool_has_addr() should never return false, but then again, gen_pool_free() BUG_ON()s if it doesn't find the address, which is an extremely severe reaction to what can be a minor bug in the accounting. I prefer to leak rather than crash the machine. It's a bit of defensive programming that is normally frowned upon, but I feel like in this case it's maybe warranted due to the very severe reaction (BUG_ON).
I would then ask why the above defensive programming is not done in the gen_pool core :)
On Mon, Nov 6, 2023 at 11:45 PM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
+void netdev_free_devmem(struct page_pool_iov *ppiov) +{
struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
refcount_set(&ppiov->refcount, 1);
if (gen_pool_has_addr(binding->chunk_pool,
page_pool_iov_dma_addr(ppiov), PAGE_SIZE))
When gen_pool_has_addr() returns false, does it mean something has gone really wrong here?
Yes, good eye. gen_pool_has_addr() should never return false, but then again, gen_pool_free() BUG_ON()s if it doesn't find the address, which is an extremely severe reaction to what can be a minor bug in the accounting. I prefer to leak rather than crash the machine. It's a bit of defensive programming that is normally frowned upon, but I feel like in this case it's maybe warranted due to the very severe reaction (BUG_ON).
I would then ask why the above defensive programming is not done in the gen_pool core :)
I think gen_pool is not really that new, and removing the BUG_ONs has likely been proposed before and rejected. I'll try to do some research and maybe suggest downgrading the BUG_ON to WARN_ON, but my guess is there is some reason the maintainer wants it to be a BUG_ON.
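For illustration, a minimal sketch of the call-site pattern being discussed, with the leak made loud rather than silent (not part of the posted series):

#include <linux/bug.h>
#include <linux/genalloc.h>

static void devmem_chunk_free(struct gen_pool *chunk_pool,
			      unsigned long addr, size_t size)
{
	/* gen_pool_free() BUG_ON()s when the address is not found, so check
	 * first and prefer a warned-about leak over crashing the machine.
	 */
	if (WARN_ON_ONCE(!gen_pool_has_addr(chunk_pool, addr, size)))
		return;

	gen_pool_free(chunk_pool, addr, size);
}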
On Wed, Nov 8, 2023 at 5:00 PM David Wei dw@davidwei.uk wrote:
On 2023-11-07 14:55, David Ahern wrote:
On 11/7/23 3:10 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 3:44 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index eeeda849115c..1c351c138a5b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -843,6 +843,9 @@ struct netdev_dmabuf_binding {
 };

 #ifdef CONFIG_DMA_SHARED_BUFFER
+struct page_pool_iov *
+netdev_alloc_devmem(struct netdev_dmabuf_binding *binding);
+void netdev_free_devmem(struct page_pool_iov *ppiov);
netdev_{alloc,free}_dmabuf?
Can do.
I say that because a dmabuf can be host memory, at least I am not aware of a restriction that a dmabuf is device memory.
In my limited experience dma-buf is generally device memory, and that's really its use case. CONFIG_UDMABUF is a driver that mocks dma-buf with a memfd which I think is used for testing. But I can do the rename, it's more clear anyway, I think.
config UDMABUF
	bool "userspace dmabuf misc driver"
	default n
	depends on DMA_SHARED_BUFFER
	depends on MEMFD_CREATE || COMPILE_TEST
	help
	  A driver to let userspace turn memfd regions into dma-bufs.
	  Qemu can use this to create host dmabufs for guest framebuffers.
Qemu is just a userspace process; it is in no way a special one.
Treating host memory as a dmabuf should radically simplify the io_uring extension of this set. That the io_uring set needs to dive into page_pools is just wrong - complicating the design and code and pushing io_uring into a realm it does not need to be involved in.
I think our io_uring proposal will already be vastly simplified once we rebase onto Kuba's page pool memory provider API. Using udmabuf means depending on a driver designed for testing, vs io_uring's registered buffers API that's been tried and tested.
FWIW I also get the impression that udmabuf mostly targets testing, but I'm not aware of any deficiency that makes it concretely unsuitable for you. You be the judge.
The only quirk of udmabuf I'm aware of is that it seems to cap the max dma-buf size to 16000 pages. Not sure if that's due to a genuine technical limitation or just convenience.
I don't have an intuitive understanding of the trade offs yet, and would need to try out udmabuf and compare vs say using our own page pool memory provider.
On Wed, Nov 8, 2023 at 5:15 PM David Wei dw@davidwei.uk wrote:
How would TCP devmem change if we no longer assume that dmabuf is device memory?
It wouldn't. The code already never assumes that dmabuf is device memory; any dma-buf should work, as far as I can tell. I'm also quite confident udmabuf works; I use it for testing.
(Jason Gunthorpe is much more of an expert and may chime in to say 'some dma-bufs will not work'. My primitive understanding is that we're using dma-bufs without any quirks and any dma-buf should work. I of course haven't tested all dma-bufs :D)
Pavel will know more on the perf side, but I wouldn't want to put any if/else on the hot path if we can avoid it. I could be wrong, but right now in my mind using different memory providers solves this neatly and the driver/networking stack doesn't need to care.
Mina, I believe you said at NetDev conf that you already had an udmabuf implementation for testing. I would like to see this (you can send privately) to see how TCP devmem would handle both user memory and device memory.
There is nothing to send privately. The patch series you're looking at supports udma-buf as-is, and the kselftest included with the series demonstrates devmem TCP working with udmabuf.
The only thing missing from this series is the driver support. You can see the GVE driver support for devmem TCP here:
https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3
You may need to implement devmem TCP for your driver before you can reproduce udmabuf working for yourself, though.
On 2023/11/6 10:44, Mina Almasry wrote:
+void netdev_free_devmem(struct page_pool_iov *ppiov) +{
- struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
- refcount_set(&ppiov->refcount, 1);
- if (gen_pool_has_addr(binding->chunk_pool,
page_pool_iov_dma_addr(ppiov), PAGE_SIZE))
When gen_pool_has_addr() returns false, does it mean something has gone really wrong here?
gen_pool_free(binding->chunk_pool,
page_pool_iov_dma_addr(ppiov), PAGE_SIZE);
- netdev_devmem_binding_put(binding);
+}
void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) { struct netdev_rx_queue *rxq;
On Sun, 2023-11-05 at 18:44 -0800, Mina Almasry wrote: [...]
+void netdev_free_devmem(struct page_pool_iov *ppiov) +{
- struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
- refcount_set(&ppiov->refcount, 1);
- if (gen_pool_has_addr(binding->chunk_pool,
page_pool_iov_dma_addr(ppiov), PAGE_SIZE))
gen_pool_free(binding->chunk_pool,
page_pool_iov_dma_addr(ppiov), PAGE_SIZE);
Minor nit: what about caching the dma_addr value to make the above more readable?
Cheers,
Paolo
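A sketch of the reshuffle Paolo is suggesting, against the function quoted above (same behavior, with the dma address computed once):

void netdev_free_devmem(struct page_pool_iov *ppiov)
{
	struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov);
	dma_addr_t dma_addr = page_pool_iov_dma_addr(ppiov);

	refcount_set(&ppiov->refcount, 1);

	if (gen_pool_has_addr(binding->chunk_pool, dma_addr, PAGE_SIZE))
		gen_pool_free(binding->chunk_pool, dma_addr, PAGE_SIZE);

	netdev_devmem_binding_put(binding);
}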
Implement a memory provider that allocates dmabuf devmem page_pool_iovs.
Support of PP_FLAG_DMA_MAP and PP_FLAG_DMA_SYNC_DEV is omitted for simplicity.
The provider receives a reference to the struct netdev_dmabuf_binding via the pool->mp_priv pointer. The driver needs to set this pointer for the provider in the page_pool_params.
The provider obtains a reference on the netdev_dmabuf_binding, which guarantees the binding and the underlying mapping remain alive until the provider is destroyed.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
--- include/net/page_pool/helpers.h | 40 +++++++++++++++++ include/net/page_pool/types.h | 10 +++++ net/core/page_pool.c | 76 +++++++++++++++++++++++++++++++++ 3 files changed, 126 insertions(+)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 78cbb040af94..b93243c2a640 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -53,6 +53,7 @@ #define _NET_PAGE_POOL_HELPERS_H
#include <net/page_pool/types.h> +#include <net/net_debug.h>
#ifdef CONFIG_PAGE_POOL_STATS int page_pool_ethtool_stats_get_count(void); @@ -111,6 +112,45 @@ page_pool_iov_binding(const struct page_pool_iov *ppiov) return page_pool_iov_owner(ppiov)->binding; }
+static inline int page_pool_iov_refcount(const struct page_pool_iov *ppiov) +{ + return refcount_read(&ppiov->refcount); +} + +static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov, + unsigned int count) +{ + refcount_add(count, &ppiov->refcount); +} + +void __page_pool_iov_free(struct page_pool_iov *ppiov); + +static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov, + unsigned int count) +{ + if (!refcount_sub_and_test(count, &ppiov->refcount)) + return; + + __page_pool_iov_free(ppiov); +} + +/* page pool mm helpers */ + +static inline bool page_is_page_pool_iov(const struct page *page) +{ + return (unsigned long)page & PP_DEVMEM; +} + +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page) +{ + if (page_is_page_pool_iov(page)) + return (struct page_pool_iov *)((unsigned long)page & + ~PP_DEVMEM); + + DEBUG_NET_WARN_ON_ONCE(true); + return NULL; +} + /** * page_pool_dev_alloc_pages() - allocate a page. * @pool: pool from which to allocate diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 64386325d965..1e67f9466250 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -124,6 +124,7 @@ struct mem_provider;
enum pp_memory_provider_type { __PP_MP_NONE, /* Use system allocator directly */ + PP_MP_DMABUF_DEVMEM, /* dmabuf devmem provider */ };
struct pp_memory_provider_ops { @@ -133,8 +134,15 @@ struct pp_memory_provider_ops { bool (*release_page)(struct page_pool *pool, struct page *page); };
+extern const struct pp_memory_provider_ops dmabuf_devmem_ops; + /* page_pool_iov support */
+/* We overload the LSB of the struct page pointer to indicate whether it's + * a page or page_pool_iov. + */ +#define PP_DEVMEM 0x01UL + /* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist * entry from the dmabuf is inserted into the genpool as a chunk, and needs * this owner struct to keep track of some metadata necessary to create @@ -158,6 +166,8 @@ struct page_pool_iov { struct dmabuf_genpool_chunk_owner *owner;
refcount_t refcount; + + struct page_pool *pp; };
struct page_pool { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 7ea1f4682479..138ddea0b28f 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -20,6 +20,7 @@ #include <linux/poison.h> #include <linux/ethtool.h> #include <linux/netdevice.h> +#include <linux/genalloc.h>
#include <trace/events/page_pool.h>
@@ -231,6 +232,9 @@ static int page_pool_init(struct page_pool *pool, switch (pool->p.memory_provider) { case __PP_MP_NONE: break; + case PP_MP_DMABUF_DEVMEM: + pool->mp_ops = &dmabuf_devmem_ops; + break; default: err = -EINVAL; goto free_ptr_ring; @@ -996,3 +1000,75 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid) } } EXPORT_SYMBOL(page_pool_update_nid); + +void __page_pool_iov_free(struct page_pool_iov *ppiov) +{ + if (ppiov->pp->mp_ops != &dmabuf_devmem_ops) + return; + + netdev_free_devmem(ppiov); +} +EXPORT_SYMBOL_GPL(__page_pool_iov_free); + +/*** "Dmabuf devmem memory provider" ***/ + +static int mp_dmabuf_devmem_init(struct page_pool *pool) +{ + struct netdev_dmabuf_binding *binding = pool->mp_priv; + + if (!binding) + return -EINVAL; + + if (pool->p.flags & PP_FLAG_DMA_MAP || + pool->p.flags & PP_FLAG_DMA_SYNC_DEV) + return -EOPNOTSUPP; + + netdev_devmem_binding_get(binding); + return 0; +} + +static struct page *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool, + gfp_t gfp) +{ + struct netdev_dmabuf_binding *binding = pool->mp_priv; + struct page_pool_iov *ppiov; + + ppiov = netdev_alloc_devmem(binding); + if (!ppiov) + return NULL; + + ppiov->pp = pool; + pool->pages_state_hold_cnt++; + trace_page_pool_state_hold(pool, (struct page *)ppiov, + pool->pages_state_hold_cnt); + return (struct page *)((unsigned long)ppiov | PP_DEVMEM); +} + +static void mp_dmabuf_devmem_destroy(struct page_pool *pool) +{ + struct netdev_dmabuf_binding *binding = pool->mp_priv; + + netdev_devmem_binding_put(binding); +} + +static bool mp_dmabuf_devmem_release_page(struct page_pool *pool, + struct page *page) +{ + struct page_pool_iov *ppiov; + + if (WARN_ON_ONCE(!page_is_page_pool_iov(page))) + return false; + + ppiov = page_to_page_pool_iov(page); + page_pool_iov_put_many(ppiov, 1); + /* We don't want the page pool put_page()ing our page_pool_iovs. */ + return false; +} + +const struct pp_memory_provider_ops dmabuf_devmem_ops = { + .init = mp_dmabuf_devmem_init, + .destroy = mp_dmabuf_devmem_destroy, + .alloc_pages = mp_dmabuf_devmem_alloc_pages, + .release_page = mp_dmabuf_devmem_release_page, +}; +EXPORT_SYMBOL(dmabuf_devmem_ops);
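As a usability note, the driver side implied by the commit message above might look roughly like the sketch below. This is only an illustration: the placement of the memory_provider/mp_priv fields follows the memory-provider dependency pulled into this RFC, the pool size and function name are made up, and the GVE tree linked elsewhere in the thread is the authoritative example.

static struct page_pool *
rxq_create_devmem_pool(struct device *dev,
		       struct netdev_dmabuf_binding *binding)
{
	struct page_pool_params pp_params = {
		.order		  = 0,
		.pool_size	  = 1024,	/* illustrative */
		.nid		  = NUMA_NO_NODE,
		.dev		  = dev,
		.memory_provider  = PP_MP_DMABUF_DEVMEM,
		.mp_priv	  = binding,	/* provider takes a ref in ->init() */
	};

	return page_pool_create(&pp_params);
}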
On 11/05, Mina Almasry wrote:
Implement a memory provider that allocates dmabuf devmem page_pool_iovs.
Support of PP_FLAG_DMA_MAP and PP_FLAG_DMA_SYNC_DEV is omitted for simplicity.
The provider receives a reference to the struct netdev_dmabuf_binding via the pool->mp_priv pointer. The driver needs to set this pointer for the provider in the page_pool_params.
The provider obtains a reference on the netdev_dmabuf_binding which guarantees the binding and the underlying mapping remains alive until the provider is destroyed.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
include/net/page_pool/helpers.h | 40 +++++++++++++++++ include/net/page_pool/types.h | 10 +++++ net/core/page_pool.c | 76 +++++++++++++++++++++++++++++++++ 3 files changed, 126 insertions(+)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 78cbb040af94..b93243c2a640 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -53,6 +53,7 @@ #define _NET_PAGE_POOL_HELPERS_H #include <net/page_pool/types.h> +#include <net/net_debug.h> #ifdef CONFIG_PAGE_POOL_STATS int page_pool_ethtool_stats_get_count(void); @@ -111,6 +112,45 @@ page_pool_iov_binding(const struct page_pool_iov *ppiov) return page_pool_iov_owner(ppiov)->binding; } +static inline int page_pool_iov_refcount(const struct page_pool_iov *ppiov) +{
- return refcount_read(&ppiov->refcount);
+}
+static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov,
unsigned int count)
+{
- refcount_add(count, &ppiov->refcount);
+}
+void __page_pool_iov_free(struct page_pool_iov *ppiov);
+static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov,
unsigned int count)
+{
- if (!refcount_sub_and_test(count, &ppiov->refcount))
return;
- __page_pool_iov_free(ppiov);
+}
+/* page pool mm helpers */
+static inline bool page_is_page_pool_iov(const struct page *page) +{
- return (unsigned long)page & PP_DEVMEM;
+}
Speaking of bpf: one thing that might be problematic with this PP_DEVMEM bit is that it will make debugging with bpftrace a bit (more) complicated. If somebody were trying to get to that page_pool_iov from the frags, they will have to do the equivalent of page_is_page_pool_iov, but probably not a big deal? (thinking out loud)
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 78cbb040af94..b93243c2a640 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -111,6 +112,45 @@ page_pool_iov_binding(const struct page_pool_iov *ppiov) return page_pool_iov_owner(ppiov)->binding; } +static inline int page_pool_iov_refcount(const struct page_pool_iov *ppiov) +{
- return refcount_read(&ppiov->refcount);
+}
+static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov,
unsigned int count)
+{
- refcount_add(count, &ppiov->refcount);
+}
+void __page_pool_iov_free(struct page_pool_iov *ppiov);
+static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov,
unsigned int count)
+{
- if (!refcount_sub_and_test(count, &ppiov->refcount))
return;
- __page_pool_iov_free(ppiov);
+}
+/* page pool mm helpers */
+static inline bool page_is_page_pool_iov(const struct page *page) +{
- return (unsigned long)page & PP_DEVMEM;
This is another one where the code can be more generic so it does not force a lot of changes later, e.g., PP_CUSTOM or PP_NO_PAGE. Then the io_uring use case with host memory can leverage the iov pool in a similar manner.
That does mean skb->devmem needs to be a flag on the page pool and not just assume iov == device memory.
On Mon, Nov 6, 2023 at 1:02 PM Stanislav Fomichev sdf@google.com wrote:
On 11/05, Mina Almasry wrote:
+static inline bool page_is_page_pool_iov(const struct page *page) +{
return (unsigned long)page & PP_DEVMEM;
+}
Speaking of bpf: one thing that might be problematic with this PP_DEVMEM bit is that it will make debugging with bpftrace a bit (more) complicated. If somebody were trying to get to that page_pool_iov from the frags, they will have to do the equivalent of page_is_page_pool_iov, but probably not a big deal? (thinking out loud)
Good point, but I think that doesn't only apply to bpf. I'm guessing even a debugger like drgn accessing the bv_page in the frag will have trouble if it's actually accessing an iov with the LSB set.
But this is not specific to this use of the LSB pointer trick; I think all code that currently uses the LSB pointer trick will have similar trouble. In this context my humble vote is that the upside from reducing code churn is big enough that it's reasonable to tolerate such side effects.
I could alleviate some of the issues by teaching drgn to do the right thing for devmem/iovs... time permitting.
On Mon, Nov 6, 2023 at 3:49 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 78cbb040af94..b93243c2a640 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -111,6 +112,45 @@ page_pool_iov_binding(const struct page_pool_iov *ppiov) return page_pool_iov_owner(ppiov)->binding; }
+static inline int page_pool_iov_refcount(const struct page_pool_iov *ppiov) +{
return refcount_read(&ppiov->refcount);
+}
+static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov,
unsigned int count)
+{
refcount_add(count, &ppiov->refcount);
+}
+void __page_pool_iov_free(struct page_pool_iov *ppiov);
+static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov,
unsigned int count)
+{
if (!refcount_sub_and_test(count, &ppiov->refcount))
return;
__page_pool_iov_free(ppiov);
+}
+/* page pool mm helpers */
+static inline bool page_is_page_pool_iov(const struct page *page) +{
return (unsigned long)page & PP_DEVMEM;
This is another one where the code can be more generic so it does not force a lot of changes later, e.g., PP_CUSTOM or PP_NO_PAGE. Then the io_uring use case with host memory can leverage the iov pool in a similar manner.
That does mean skb->devmem needs to be a flag on the page pool and not just assume iov == device memory.
On 11/7/23 5:02 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:02 PM Stanislav Fomichev sdf@google.com wrote:
On 11/05, Mina Almasry wrote:
+static inline bool page_is_page_pool_iov(const struct page *page) +{
return (unsigned long)page & PP_DEVMEM;
+}
Speaking of bpf: one thing that might be problematic with this PP_DEVMEM bit is that it will make debugging with bpftrace a bit (more) complicated. If somebody were trying to get to that page_pool_iov from the frags, they will have to do the equivalent of page_is_page_pool_iov, but probably not a big deal? (thinking out loud)
Good point, but I think that doesn't only apply to bpf. I'm guessing even a debugger like drgn accessing the bv_page in the frag will have trouble if it's actually accessing an iov with the LSB set.
But this is not specific to this use of the LSB pointer trick; I think all code that currently uses the LSB pointer trick will have similar trouble. In this context my humble vote is that the upside from reducing code churn is big enough that it's reasonable to tolerate such side effects.
+1
I could alleviate some of the issues by teaching drgn to do the right thing for devmem/iovs... time permitting.
Tools like drgn and crash have to know when the LSB trick is used - e.g., dst_entry - and handle it when dereferencing pointers.
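For reference, a sketch of the masking such tools, or any other out-of-band reader of skb frags, would have to mirror. The frag_* helpers here are made up for illustration; only PP_DEVMEM and struct page_pool_iov come from this series:

#include <linux/skbuff.h>
#include <net/page_pool/types.h>

/* frag->bv_page may really be a page_pool_iov pointer with the LSB set. */
static inline bool frag_is_pp_iov(const skb_frag_t *frag)
{
	return (unsigned long)skb_frag_page(frag) & PP_DEVMEM;
}

static inline struct page_pool_iov *frag_to_pp_iov(const skb_frag_t *frag)
{
	return (struct page_pool_iov *)
		((unsigned long)skb_frag_page(frag) & ~PP_DEVMEM);
}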
On Sun, 5 Nov 2023 18:44:05 -0800 Mina Almasry wrote:
+static int mp_dmabuf_devmem_init(struct page_pool *pool) +{
- struct netdev_dmabuf_binding *binding = pool->mp_priv;
- if (!binding)
return -EINVAL;
- if (pool->p.flags & PP_FLAG_DMA_MAP ||
pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
return -EOPNOTSUPP;
This looks backwards, we should _force_ the driver to use the dma mapping built into the page pool APIs, to isolate the driver from how the DMA addr actually gets obtained. Right?
Maybe seeing driver patches would illuminate.
On Fri, Nov 10, 2023 at 3:16 PM Jakub Kicinski kuba@kernel.org wrote:
On Sun, 5 Nov 2023 18:44:05 -0800 Mina Almasry wrote:
+static int mp_dmabuf_devmem_init(struct page_pool *pool) +{
struct netdev_dmabuf_binding *binding = pool->mp_priv;
if (!binding)
return -EINVAL;
if (pool->p.flags & PP_FLAG_DMA_MAP ||
pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
return -EOPNOTSUPP;
This looks backwards, we should _force_ the driver to use the dma mapping built into the page pool APIs, to isolate the driver from how the DMA addr actually gets obtained. Right?
Maybe seeing driver patches would illuminate.
The full tree with driver patches is here:
https://github.com/torvalds/linux/compare/master...mina:linux:tcpdevmem-v3
This is probably the most relevant patch, it implements POC page-pool support into GVE + devmem support:
https://github.com/torvalds/linux/commit/3c27aa21eb3374f2f1677ece6258f046da2...
But, to answer your question, yes, this is a mistake. devmem doesn't need to be mapped again, which is why I disabled the flag. What should actually happen is what you said: we should enforce that PP_FLAG_DMA_MAP is on, and have it be a no-op, so the driver doesn't try to map the devmem on its own.
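Concretely, the fixed-up check might look something like this sketch of the direction agreed on above (not code from the posted series):

static int mp_dmabuf_devmem_init(struct page_pool *pool)
{
	struct netdev_dmabuf_binding *binding = pool->mp_priv;

	if (!binding)
		return -EINVAL;

	/* The dma-buf is mapped once at bind time; require PP_FLAG_DMA_MAP
	 * so the driver never tries to map devmem itself, and treat the
	 * mapping as a no-op inside this provider.
	 */
	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
		return -EOPNOTSUPP;

	/* Syncing stays unsupported, as in the RFC. */
	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
		return -EOPNOTSUPP;

	netdev_devmem_binding_get(binding);
	return 0;
}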
Overload the LSB of struct page* to indicate that it's a page_pool_iov.
Refactor mm calls on struct page* into helpers, and add page_pool_iov handling on those helpers. Modify callers of these mm APIs with calls to these helpers instead.
In areas where struct page* is dereferenced, add a check for special handling of page_pool_iov.
Signed-off-by: Mina Almasry almasrymina@google.com
--- include/net/page_pool/helpers.h | 74 ++++++++++++++++++++++++++++++++- net/core/page_pool.c | 63 ++++++++++++++++++++-------- 2 files changed, 118 insertions(+), 19 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index b93243c2a640..08f1a2cc70d2 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -151,6 +151,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page) return NULL; }
+static inline int page_pool_page_ref_count(struct page *page) +{ + if (page_is_page_pool_iov(page)) + return page_pool_iov_refcount(page_to_page_pool_iov(page)); + + return page_ref_count(page); +} + +static inline void page_pool_page_get_many(struct page *page, + unsigned int count) +{ + if (page_is_page_pool_iov(page)) + return page_pool_iov_get_many(page_to_page_pool_iov(page), + count); + + return page_ref_add(page, count); +} + +static inline void page_pool_page_put_many(struct page *page, + unsigned int count) +{ + if (page_is_page_pool_iov(page)) + return page_pool_iov_put_many(page_to_page_pool_iov(page), + count); + + if (count > 1) + page_ref_sub(page, count - 1); + + put_page(page); +} + +static inline bool page_pool_page_is_pfmemalloc(struct page *page) +{ + if (page_is_page_pool_iov(page)) + return false; + + return page_is_pfmemalloc(page); +} + +static inline bool page_pool_page_is_pref_nid(struct page *page, int pref_nid) +{ + /* Assume page_pool_iov are on the preferred node without actually + * checking... + * + * This check is only used to check for recycling memory in the page + * pool's fast paths. Currently the only implementation of page_pool_iov + * is dmabuf device memory. It's a deliberate decision by the user to + * bind a certain dmabuf to a certain netdev, and the netdev rx queue + * would not be able to reallocate memory from another dmabuf that + * exists on the preferred node, so, this check doesn't make much sense + * in this case. Assume all page_pool_iovs can be recycled for now. + */ + if (page_is_page_pool_iov(page)) + return true; + + return page_to_nid(page) == pref_nid; +} + /** * page_pool_dev_alloc_pages() - allocate a page. * @pool: pool from which to allocate @@ -301,6 +359,9 @@ static inline long page_pool_defrag_page(struct page *page, long nr) { long ret;
+ if (page_is_page_pool_iov(page)) + return -EINVAL; + /* If nr == pp_frag_count then we have cleared all remaining * references to the page: * 1. 'n == 1': no need to actually overwrite it. @@ -431,7 +492,12 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va, */ static inline dma_addr_t page_pool_get_dma_addr(struct page *page) { - dma_addr_t ret = page->dma_addr; + dma_addr_t ret; + + if (page_is_page_pool_iov(page)) + return page_pool_iov_dma_addr(page_to_page_pool_iov(page)); + + ret = page->dma_addr;
if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) ret <<= PAGE_SHIFT; @@ -441,6 +507,12 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr) { + /* page_pool_iovs are mapped and their dma-addr can't be modified. */ + if (page_is_page_pool_iov(page)) { + DEBUG_NET_WARN_ON_ONCE(true); + return false; + } + if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) { page->dma_addr = addr >> PAGE_SHIFT;
diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 138ddea0b28f..d211996d423b 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -317,7 +317,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) if (unlikely(!page)) break;
- if (likely(page_to_nid(page) == pref_nid)) { + if (likely(page_pool_page_is_pref_nid(page, pref_nid))) { pool->alloc.cache[pool->alloc.count++] = page; } else { /* NUMA mismatch; @@ -362,7 +362,15 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool, struct page *page, unsigned int dma_sync_size) { - dma_addr_t dma_addr = page_pool_get_dma_addr(page); + dma_addr_t dma_addr; + + /* page_pool_iov memory provider do not support PP_FLAG_DMA_SYNC_DEV */ + if (page_is_page_pool_iov(page)) { + DEBUG_NET_WARN_ON_ONCE(true); + return; + } + + dma_addr = page_pool_get_dma_addr(page);
dma_sync_size = min(dma_sync_size, pool->p.max_len); dma_sync_single_range_for_device(pool->p.dev, dma_addr, @@ -374,6 +382,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page) { dma_addr_t dma;
+ if (page_is_page_pool_iov(page)) { + /* page_pool_iovs are already mapped */ + DEBUG_NET_WARN_ON_ONCE(true); + return true; + } + /* Setup DMA mapping: use 'struct page' area for storing DMA-addr * since dma_addr_t can be either 32 or 64 bits and does not always fit * into page private data (i.e 32bit cpu with 64bit DMA caps) @@ -405,22 +419,33 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page) static void page_pool_set_pp_info(struct page_pool *pool, struct page *page) { - page->pp = pool; - page->pp_magic |= PP_SIGNATURE; - - /* Ensuring all pages have been split into one fragment initially: - * page_pool_set_pp_info() is only called once for every page when it - * is allocated from the page allocator and page_pool_fragment_page() - * is dirtying the same cache line as the page->pp_magic above, so - * the overhead is negligible. - */ - page_pool_fragment_page(page, 1); + if (!page_is_page_pool_iov(page)) { + page->pp = pool; + page->pp_magic |= PP_SIGNATURE; + + /* Ensuring all pages have been split into one fragment + * initially: + * page_pool_set_pp_info() is only called once for every page + * when it is allocated from the page allocator and + * page_pool_fragment_page() is dirtying the same cache line as + * the page->pp_magic above, so * the overhead is negligible. + */ + page_pool_fragment_page(page, 1); + } else { + page_to_page_pool_iov(page)->pp = pool; + } + if (pool->p.init_callback) pool->p.init_callback(page, pool->p.init_arg); }
static void page_pool_clear_pp_info(struct page *page) { + if (page_is_page_pool_iov(page)) { + page_to_page_pool_iov(page)->pp = NULL; + return; + } + page->pp_magic = 0; page->pp = NULL; } @@ -630,7 +655,7 @@ static bool page_pool_recycle_in_cache(struct page *page, return false; }
- /* Caller MUST have verified/know (page_ref_count(page) == 1) */ + /* Caller MUST have verified/know (page_pool_page_ref_count(page) == 1) */ pool->alloc.cache[pool->alloc.count++] = page; recycle_stat_inc(pool, cached); return true; @@ -655,9 +680,10 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, * refcnt == 1 means page_pool owns page, and can recycle it. * * page is NOT reusable when allocated when system is under - * some pressure. (page_is_pfmemalloc) + * some pressure. (page_pool_page_is_pfmemalloc) */ - if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) { + if (likely(page_pool_page_ref_count(page) == 1 && + !page_pool_page_is_pfmemalloc(page))) { /* Read barrier done in page_ref_count / READ_ONCE */
if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) @@ -772,7 +798,8 @@ static struct page *page_pool_drain_frag(struct page_pool *pool, if (likely(page_pool_defrag_page(page, drain_count))) return NULL;
- if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) { + if (page_pool_page_ref_count(page) == 1 && + !page_pool_page_is_pfmemalloc(page)) { if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) page_pool_dma_sync_for_device(pool, page, -1);
@@ -848,9 +875,9 @@ static void page_pool_empty_ring(struct page_pool *pool) /* Empty recycle ring */ while ((page = ptr_ring_consume_bh(&pool->ring))) { /* Verify the refcnt invariant of cached pages */ - if (!(page_ref_count(page) == 1)) + if (!(page_pool_page_ref_count(page) == 1)) pr_crit("%s() page_pool refcnt %d violation\n", - __func__, page_ref_count(page)); + __func__, page_pool_page_ref_count(page));
page_pool_return_page(pool, page); }
On 2023/11/6 10:44, Mina Almasry wrote:
Overload the LSB of struct page* to indicate that it's a page_pool_iov.
Refactor mm calls on struct page* into helpers, and add page_pool_iov handling on those helpers. Modify callers of these mm APIs with calls to these helpers instead.
In areas where struct page* is dereferenced, add a check for special handling of page_pool_iov.
Signed-off-by: Mina Almasry almasrymina@google.com
include/net/page_pool/helpers.h | 74 ++++++++++++++++++++++++++++++++- net/core/page_pool.c | 63 ++++++++++++++++++++-------- 2 files changed, 118 insertions(+), 19 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index b93243c2a640..08f1a2cc70d2 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -151,6 +151,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page) return NULL; } +static inline int page_pool_page_ref_count(struct page *page) +{
- if (page_is_page_pool_iov(page))
return page_pool_iov_refcount(page_to_page_pool_iov(page));
We have added a lot of 'if's for the devmem case; it would be better to make this more generic so that we have more unified metadata handling for normal pages and devmem. If we add another memory type here, do we need yet another 'if'? That is part of the reason I suggested using more unified metadata for all the types of memory chunks used by page_pool.
On Tue, Nov 7, 2023 at 12:00 AM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
Overload the LSB of struct page* to indicate that it's a page_pool_iov.
Refactor mm calls on struct page* into helpers, and add page_pool_iov handling on those helpers. Modify callers of these mm APIs with calls to these helpers instead.
In areas where struct page* is dereferenced, add a check for special handling of page_pool_iov.
Signed-off-by: Mina Almasry almasrymina@google.com
include/net/page_pool/helpers.h | 74 ++++++++++++++++++++++++++++++++- net/core/page_pool.c | 63 ++++++++++++++++++++-------- 2 files changed, 118 insertions(+), 19 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index b93243c2a640..08f1a2cc70d2 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -151,6 +151,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page) return NULL; }
+static inline int page_pool_page_ref_count(struct page *page) +{
if (page_is_page_pool_iov(page))
return page_pool_iov_refcount(page_to_page_pool_iov(page));
We have added a lot of 'if' for the devmem case, it would be better to make it more generic so that we can have more unified metadata handling for normal page and devmem. If we add another memory type here, do we need another 'if' here?
Maybe, not sure. I'm guessing new memory types will either be pages or iovs, so maybe no new if statements needed.
That is part of the reason I suggested using a more unified metadata for all the types of memory chunks used by page_pool.
I think your suggestion was to use struct pages for devmem. That was thoroughly considered and intensely argued about in the initial conversations regarding devmem and the initial RFC, and from the conclusions there it's extremely clear to me that devmem struct pages are categorically a no-go.
-- Thanks, Mina
On 2023/11/8 5:56, Mina Almasry wrote:
On Tue, Nov 7, 2023 at 12:00 AM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
Overload the LSB of struct page* to indicate that it's a page_pool_iov.
Refactor mm calls on struct page* into helpers, and add page_pool_iov handling on those helpers. Modify callers of these mm APIs with calls to these helpers instead.
In areas where struct page* is dereferenced, add a check for special handling of page_pool_iov.
Signed-off-by: Mina Almasry almasrymina@google.com
include/net/page_pool/helpers.h | 74 ++++++++++++++++++++++++++++++++- net/core/page_pool.c | 63 ++++++++++++++++++++-------- 2 files changed, 118 insertions(+), 19 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index b93243c2a640..08f1a2cc70d2 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -151,6 +151,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page) return NULL; }
+static inline int page_pool_page_ref_count(struct page *page) +{
if (page_is_page_pool_iov(page))
return page_pool_iov_refcount(page_to_page_pool_iov(page));
We have added a lot of 'if' for the devmem case, it would be better to make it more generic so that we can have more unified metadata handling for normal page and devmem. If we add another memory type here, do we need another 'if' here?
Maybe, not sure. I'm guessing new memory types will either be pages or iovs, so maybe no new if statements needed.
That is part of the reason I suggested using a more unified metadata for all the types of memory chunks used by page_pool.
I think your suggestion was to use struct pages for devmem. That was thoroughly considered and intensely argued about in the initial conversations regarding devmem and the initial RFC, and from the conclusions there it's extremely clear to me that devmem struct pages are categorically a no-go.
Not exactly. I was wondering if adding a more abstract structure specifically for the page pool makes any sense, where each mem type can add its own specific fields: the net stack only sees and handles the common fields so that it does not care about the specific mem type, and each provider mostly sees and handles only the specific fields belonging to it.
Ideally something like below:
struct netmem {
	/* common fields */
	refcount_t refcount;
	struct page_pool *pp;
	......

	union {
		struct devmem {
			struct dmabuf_genpool_chunk_owner *owner;
		};

		struct other_mem {
			...
			...
		};
	};
};
But until we completely decouple 'struct page' from the net stack, the above seems undoable in the near term. We might be able to do something like what folio is doing now, though: the mm subsystem still sees 'struct folio/page', but another subsystem like slab uses 'struct slab', and there are still some common fields shared between 'struct folio' and 'struct slab'.
As in the netmem patchset, is devmem able to reuse the below 'struct netmem' and rename it to 'struct page_pool_iov', so that 'struct page' for normal memory and 'struct page_pool_iov' for devmem share the common fields used by the page pool and net stack? We might also be able to reuse 'flags', '_pp_mapping_pad' and '_mapcount' for a specific mem provider, which is enough for devmem since it only needs a single pointer to its owner.
https://lkml.kernel.org/netdev/20230105214631.3939268-2-willy@infradead.org/
+/**
+ * struct netmem - A memory allocation from a &struct page_pool.
+ * @flags: The same as the page flags. Do not use directly.
+ * @pp_magic: Magic value to avoid recycling non page_pool allocated pages.
+ * @pp: The page pool this netmem was allocated from.
+ * @dma_addr: Call netmem_get_dma_addr() to read this value.
+ * @dma_addr_upper: Might need to be 64-bit on 32-bit architectures.
+ * @pp_frag_count: For frag page support, not supported in 32-bit
+ *   architectures with 64-bit DMA.
+ * @_mapcount: Do not access this member directly.
+ * @_refcount: Do not access this member directly. Read it using
+ *   netmem_ref_count() and manipulate it with netmem_get() and netmem_put().
+ *
+ * This struct overlays struct page for now. Do not modify without a
+ * good understanding of the issues.
+ */
+struct netmem {
+	unsigned long flags;
+	unsigned long pp_magic;
+	struct page_pool *pp;
+	/* private: no need to document this padding */
+	unsigned long _pp_mapping_pad;	/* aliases with folio->mapping */
+	/* public: */
+	unsigned long dma_addr;
+	union {
+		unsigned long dma_addr_upper;
+		atomic_long_t pp_frag_count;
+	};
+	atomic_t _mapcount;
+	atomic_t _refcount;
+};
If we do that, it seems we might be able to let the net stack and page pool see the metadata for a devmem chunk as 'struct page', and avoid most of the 'if' checking in the net stack and page pool?
-- Thanks, Mina
.
On Wed, Nov 8, 2023 at 2:56 AM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/8 5:56, Mina Almasry wrote:
On Tue, Nov 7, 2023 at 12:00 AM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
Overload the LSB of struct page* to indicate that it's a page_pool_iov.
Refactor mm calls on struct page* into helpers, and add page_pool_iov handling on those helpers. Modify callers of these mm APIs with calls to these helpers instead.
In areas where struct page* is dereferenced, add a check for special handling of page_pool_iov.
Signed-off-by: Mina Almasry almasrymina@google.com
include/net/page_pool/helpers.h | 74 ++++++++++++++++++++++++++++++++- net/core/page_pool.c | 63 ++++++++++++++++++++-------- 2 files changed, 118 insertions(+), 19 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index b93243c2a640..08f1a2cc70d2 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -151,6 +151,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page) return NULL; }
+static inline int page_pool_page_ref_count(struct page *page) +{
if (page_is_page_pool_iov(page))
return page_pool_iov_refcount(page_to_page_pool_iov(page));
We have added a lot of 'if' for the devmem case, it would be better to make it more generic so that we can have more unified metadata handling for normal page and devmem. If we add another memory type here, do we need another 'if' here?
Maybe, not sure. I'm guessing new memory types will either be pages or iovs, so maybe no new if statements needed.
That is part of the reason I suggested using a more unified metadata for all the types of memory chunks used by page_pool.
I think your suggestion was to use struct pages for devmem. That was thoroughly considered and intensely argued about in the initial conversations regarding devmem and the initial RFC, and from the conclusions there it's extremely clear to me that devmem struct pages are categorically a no-go.
Not exactly. I was wondering if adding a more abstract structure specifically for the page pool makes any sense, where each mem type can add its own specific fields: the net stack only sees and handles the common fields so that it does not care about the specific mem type, and each provider mostly sees and handles only the specific fields belonging to it.
Ideally something like below:
struct netmem {
	/* common fields */
	refcount_t refcount;
	struct page_pool *pp;
	......

	union {
		struct devmem {
			struct dmabuf_genpool_chunk_owner *owner;
		};

		struct other_mem {
			...
			...
		};
	};
};
But until we completely decouple 'struct page' from the net stack, the above seems undoable in the near term.
Agreed everything above is undoable.
But we might be able to do something like what folio is doing now: the mm subsystem still sees 'struct folio/page', but another subsystem like slab uses 'struct slab', and there are still some common fields shared between 'struct folio' and 'struct slab'.
In my eyes this is almost exactly what I suggested in RFC v1 and got immediately nacked with no room to negotiate. What we did for v1 is to allocate struct pages for dma-buf to make dma-bufs look like struct page to mm subsystem. Almost exactly what you're describing above. It's a no-go. I don't think renaming struct page to netmem is going to move the needle (it also re-introduces code-churn). What I feel like I learnt is that dma-bufs are not struct pages and can't be made to look like one, I think.
As in the netmem patchset, is devmem able to reuse the below 'struct netmem' and rename it to 'struct page_pool_iov'?
I don't think so. For the reasons above, but also practically it immediately falls apart. Consider this field in netmem:
+ * @flags: The same as the page flags. Do not use directly.
dma-bufs don't have or support page flags, and making dma-bufs look like they support page flags or any page-like features (other than dma_addr) seems extremely unacceptable to mm folks.
So that 'struct page' for normal memory and 'struct page_pool_iov' for devmem share the common fields used by page pool and net stack?
Are you suggesting that we'd cast a netmem* to a page* and call core mm APIs on it? It's basically what was happening with RFC v1, where things that are not struct pages were made to look like struct pages.
Also, there isn't much upside for what you're suggesting, I think. For example I can align the refcount variable in struct page_pool_iov with the refcount in struct page so that this works:
put_page((struct page*)ppiov);
but it's a disaster. Because put_page() will call __put_page() if the page is freed, and __put_page() will try to return the page to the buddy allocator!
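Spelling that out, the release path behind put_page() is roughly the below (paraphrased, not exact upstream code), and there is no point where a page_pool_iov could intercept it:

/* Rough sketch of what put_page() ends up doing: once the refcount hits
 * zero the memory goes back to the page allocator, which a page_pool_iov
 * was never allocated from.
 */
static inline void folio_put_sketch(struct folio *folio)
{
	if (folio_put_testzero(folio))
		__folio_put(folio);	/* frees back to the buddy allocator */
}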
And we might be able to reuse 'flags', '_pp_mapping_pad' and '_mapcount' for a specific mem provider, which is enough for devmem, which only requires a single pointer to point to its owner?
All the above seems quite similar to RFC v1 again, using netmem instead of struct page. In RFC v1 we re-used zone_device_data() for the dma-buf owner equivalent.
https://lkml.kernel.org/netdev/20230105214631.3939268-2-willy@infradead.org/
+/**
+ * struct netmem - A memory allocation from a &struct page_pool.
+ * @flags: The same as the page flags.  Do not use directly.
+ * @pp_magic: Magic value to avoid recycling non page_pool allocated pages.
+ * @pp: The page pool this netmem was allocated from.
+ * @dma_addr: Call netmem_get_dma_addr() to read this value.
+ * @dma_addr_upper: Might need to be 64-bit on 32-bit architectures.
+ * @pp_frag_count: For frag page support, not supported in 32-bit
+ *   architectures with 64-bit DMA.
+ * @_mapcount: Do not access this member directly.
+ * @_refcount: Do not access this member directly.  Read it using
+ *   netmem_ref_count() and manipulate it with netmem_get() and netmem_put().
+ *
+ * This struct overlays struct page for now.  Do not modify without a
+ * good understanding of the issues.
+ */
+struct netmem {
+	unsigned long flags;
+	unsigned long pp_magic;
+	struct page_pool *pp;
+	/* private: no need to document this padding */
+	unsigned long _pp_mapping_pad;	/* aliases with folio->mapping */
+	/* public: */
+	unsigned long dma_addr;
+	union {
+		unsigned long dma_addr_upper;
+		atomic_long_t pp_frag_count;
+	};
+	atomic_t _mapcount;
+	atomic_t _refcount;
+};
If we do that, it seems we might be able to allow the net stack and page pool to see the metadata for a devmem chunk as 'struct page', and may be able to avoid most of the 'if' checking in the net stack and page pool?
-- Thanks, Mina
On 2023/11/9 11:20, Mina Almasry wrote:
On Wed, Nov 8, 2023 at 2:56 AM Yunsheng Lin linyunsheng@huawei.com wrote:
Agreed everything above is undoable.
But we might be able to do something as folio is doing now, mm subsystem is still seeing 'struct folio/page', but other subsystem like slab is using 'struct slab', and there is still some common fields shared between 'struct folio' and 'struct slab'.
In my eyes this is almost exactly what I suggested in RFC v1 and got immediately nacked with no room to negotiate. What we did for v1 is to allocate struct pages for dma-buf to make dma-bufs look like struct page to mm subsystem. Almost exactly what you're describing above.
Maybe the above is where we have disagreement: do we still need to make dma-bufs look like struct page to the mm subsystem? IMHO, the answer is no. We might only need to make dma-bufs look like struct page to the net stack and page pool subsystem. I think that is already what this patchset is trying to do; what I am suggesting is just to make it more like 'struct page' to the net stack and page pool subsystem, in order to try to avoid most of the 'if' checking in the net stack and page pool subsystem.
It's a no-go. I don't think renaming struct page to netmem is going to move the needle (it also re-introduces code-churn). What I feel like I learnt is that dma-bufs are not struct pages and can't be made to look like one, I think.
As the netmem patchset, is devmem able to reuse the below 'struct netmem' and rename it to 'struct page_pool_iov'?
I don't think so. For the reasons above, but also practically it immediately falls apart. Consider this field in netmem:
- @flags: The same as the page flags. Do not use directly.
dma-buf don't have or support page-flags, and making dma-buf looks like they support page flags or any page-like features (other than dma_addr) seems extremely unacceptable to mm folks.
As far as I can tell, as we limit the devmem usage in the netstack, the below are the related mm function calls for 'struct page' for devmem:
page_ref_*(): page->_refcount does not need changing.
page_is_pfmemalloc(): corresponds to page->pp_magic, and the devmem provider can set/unset it in its 'alloc_pages' ops.
page_to_nid(): we may need to handle it differently, somewhat like this patch does, as page_to_nid() may have a different implementation based on different configurations.
page_pool_iov_put_many(): as mentioned in the other thread, if the net stack is not calling page_pool_page_put_many() directly, we can reuse napi_pp_put_page() for devmem too, and handle the special case for devmem in the 'release_page' ops.
So that 'struct page' for normal memory and 'struct page_pool_iov' for devmem share the common fields used by page pool and net stack?
Are you suggesting that we'd cast a netmem* to a page* and call core mm APIs on it? It's basically what was happening with RFC v1, where things that are not struct pages were made to look like struct pages.
Also, there isn't much upside for what you're suggesting, I think. For example I can align the refcount variable in struct page_pool_iov with the refcount in struct page so that this works:
put_page((struct page*)ppiov);
but it's a disaster. Because put_page() will call __put_page() if the page is freed, and __put_page() will try to return the page to the buddy allocator!
As I suggested above, can we handle this in the devmem provider's 'release_page' ops instead of calling put_page() directly for devmem?
And we might be able to reuse 'flags', '_pp_mapping_pad' and '_mapcount' for a specific mem provider, which is enough for devmem, which only requires a single pointer to point to its owner?
All the above seems quite similar to RFC v1 again, using netmem instead of struct page. In RFC v1 we re-used zone_device_data() for the dma-buf owner equivalent.
As we have added a few checks to limit 'struct page' for devmem to be used only in the net stack, we can decouple 'struct page' for devmem from the mm subsystem; zone_device_data() is not really needed, right?
If we can decouple 'struct page' for normal memory from mm subsystem through the folio work in the future, then we may define a more abstract structure for page pool and net stack instead of reusing 'struct page' from mm.
On Thu, Nov 9, 2023 at 1:30 AM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/9 11:20, Mina Almasry wrote:
On Wed, Nov 8, 2023 at 2:56 AM Yunsheng Lin linyunsheng@huawei.com wrote:
Agreed everything above is undoable.
But we might be able to do something as folio is doing now, mm subsystem is still seeing 'struct folio/page', but other subsystem like slab is using 'struct slab', and there is still some common fields shared between 'struct folio' and 'struct slab'.
In my eyes this is almost exactly what I suggested in RFC v1 and got immediately nacked with no room to negotiate. What we did for v1 is to allocate struct pages for dma-buf to make dma-bufs look like struct page to mm subsystem. Almost exactly what you're describing above.
Maybe the above is where we have disagreement: do we still need to make dma-bufs look like struct page to the mm subsystem? IMHO, the answer is no. We might only need to make dma-bufs look like struct page to the net stack and page pool subsystem. I think that is already what this patchset is trying to do; what I am suggesting is just to make it more like 'struct page' to the net stack and page pool subsystem, in order to try to avoid most of the 'if' checking in the net stack and page pool subsystem.
First, most of the checking in the net stack is skb_frags_not_readable(). dma-bufs are fundamentally not kmap()able and not readable. So we can't remove those, no matter what we do, I think. Can we agree on that? If so, let's discuss removing most of the ifs in the page pool only.
It's a no-go. I don't think renaming struct page to netmem is going to move the needle (it also re-introduces code-churn). What I feel like I learnt is that dma-bufs are not struct pages and can't be made to look like one, I think.
As the netmem patchset, is devmem able to reuse the below 'struct netmem' and rename it to 'struct page_pool_iov'?
I don't think so. For the reasons above, but also practically it immediately falls apart. Consider this field in netmem:
- @flags: The same as the page flags. Do not use directly.
dma-buf don't have or support page-flags, and making dma-buf looks like they support page flags or any page-like features (other than dma_addr) seems extremely unacceptable to mm folks.
As far as I can tell, as we limit the devmem usage in the netstack, the below are the related mm function calls for 'struct page' for devmem: page_ref_*(): page->_refcount does not need changing.
Sorry, I don't understand. Are you suggesting we call page_ref_add() & page_ref_sub() on page_pool_iov? That is basically making page_pool_iov look like struct page to the mm stack, since page_ref_* are mm calls, which you say above we don't need to do. We will still need to special case this, no?
page_is_pfmemalloc(): which is corresponding to page->pp_magic, and devmem provider can set/unset it in it's 'alloc_pages' ops.
page_is_pfmemalloc() has nothing to do with page->pp_magic. It checks page->lru.next to figure out if this is a pfmemalloc. page_pool_iov has no page->lru.next. Still need to special case this?
page_to_nid(): we may need to handle it differently, somewhat like this patch does, as page_to_nid() may have a different implementation based on different configurations.
So you're saying we need to handle page_to_nid() differently for devmem? So we're not going to be able to avoid the if statement.
page_pool_iov_put_many(): as mentioned in other thread, if net stack is not calling page_pool_page_put_many() directly, we can reuse napi_pp_put_page() for devmem too, and handle the special case for devmem in 'release_page' ops.
page_pool_iov_put_many()/page_pool_iov_get_many() are called to do refcounting before the page is released back to the provider. I'm not seeing how we can handle the special case inside of 'release_page' - that's too late, as far as I can tell.
The only way to remove the if statements in the page pool is to implement what you said was not feasible in an earlier email. We would define this struct:
struct netmem {
	/* common fields */
	refcount_t refcount;
	bool is_pfmemalloc;
	int nid;
	......

	union {
		struct devmem {
			struct dmabuf_genpool_chunk_owner *owner;
		};

		struct page *page;
	};
};
Then, we would require all memory providers to allocate struct netmem for the memory and set the common fields, including ones that have struct pages. For devmem, netmem->page will be NULL, because netmem has no page.
If we do that, the page pool can ignore whether the underlying memory is page or devmem, because it can use the common fields, example:
/* page_ref_count replacement */
netmem_ref_count(struct netmem *netmem)
{
	return netmem->refcount;
}

/* page_ref_add replacement */
netmem_ref_add(struct netmem *netmem)
{
	atomic_inc(netmem->refcount);
}

/* page_to_nid replacement */
netmem_nid(struct netmem *netmem)
{
	return netmem->nid;
}

/* page_is_pfmemalloc() replacement */
netmem_is_pfmemalloc(struct netmem *netmem)
{
	return netmem->is_pfmemalloc;
}

/* page_ref_sub replacement */
netmem_ref_sub(struct netmem *netmem)
{
	atomic_sub(netmem->refcount);
	if (netmem->refcount == 0) {
		/* Release memory to the provider.
		 * A struct page memory provider will do put_page(),
		 * devmem will do something else.
		 */
	}
}
I think this MAY BE technically feasible, but I'm not sure it's better:
1. It is a huge refactor to the page pool, lots of code churn. While the page pool currently uses page*, it needs to be completely refactored to use netmem*.
2. It causes extra memory usage. struct netmem needs to be allocated for every struct page.
3. It has minimal perf upside. The page_is_page_pool_iov() checks currently have minimal perf impact, and I demonstrated that to Jesper in RFC v2.
4. It also may not be technically feasible. I'm not sure how netmem interacts with skb_frag_t. I guess we replace struct page* bv_page with struct netmem* bv_page, and add changes there.
5. Drivers need to be refactored to use netmem* instead of page*, unless we cast netmem* to page* before returning to the driver.
Possibly other downsides, these are what I could immediately think of.
If I'm still misunderstanding your suggestion, it may be time to send me a concrete code snippet of what you have in mind. I'm a bit confused at the moment because the only avenue I see to remove the if statements in the page pool is to define the struct that we agreed is not feasible in earlier emails.
So that 'struct page' for normal memory and 'struct page_pool_iov' for devmem share the common fields used by page pool and net stack?
Are you suggesting that we'd cast a netmem* to a page* and call core mm APIs on it? It's basically what was happening with RFC v1, where things that are not struct pages were made to look like struct pages.
Also, there isn't much upside for what you're suggesting, I think. For example I can align the refcount variable in struct page_pool_iov with the refcount in struct page so that this works:
put_page((struct page*)ppiov);
but it's a disaster. Because put_page() will call __put_page() if the page is freed, and __put_page() will try to return the page to the buddy allocator!
As I suggested above, can we handle this in the devmem provider's 'release_page' ops instead of calling put_page() directly for devmem?
And we might be able to reuse 'flags', '_pp_mapping_pad' and '_mapcount' for a specific mem provider, which is enough for devmem, which only requires a single pointer to point to its owner?
All the above seems quite similar to RFC v1 again, using netmem instead of struct page. In RFC v1 we re-used zone_device_data() for the dma-buf owner equivalent.
As we have added a few checks to limit 'struct page' for devmem to be used only in the net stack, we can decouple 'struct page' for devmem from the mm subsystem; zone_device_data() is not really needed, right?
If we can decouple 'struct page' for normal memory from mm subsystem through the folio work in the future, then we may define a more abstract structure for page pool and net stack instead of reusing 'struct page' from mm.
-- Thanks, Mina
On 2023/11/9 20:20, Mina Almasry wrote:
On Thu, Nov 9, 2023 at 1:30 AM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/9 11:20, Mina Almasry wrote:
On Wed, Nov 8, 2023 at 2:56 AM Yunsheng Lin linyunsheng@huawei.com wrote:
Agreed everything above is undoable.
But we might be able to do something as folio is doing now, mm subsystem is still seeing 'struct folio/page', but other subsystem like slab is using 'struct slab', and there is still some common fields shared between 'struct folio' and 'struct slab'.
In my eyes this is almost exactly what I suggested in RFC v1 and got immediately nacked with no room to negotiate. What we did for v1 is to allocate struct pages for dma-buf to make dma-bufs look like struct page to mm subsystem. Almost exactly what you're describing above.
Maybe the above is where we have disagreement: do we still need to make dma-bufs look like struct page to the mm subsystem? IMHO, the answer is no. We might only need to make dma-bufs look like struct page to the net stack and page pool subsystem. I think that is already what this patchset is trying to do; what I am suggesting is just to make it more like 'struct page' to the net stack and page pool subsystem, in order to try to avoid most of the 'if' checking in the net stack and page pool subsystem.
First, most of the checking in the net stack is skb_frags_not_readable(). dma-bufs are fundamentally not kmap()able and not readable. So we can't remove those, no matter what we do, I think. Can we agree on that? If so, let's discuss removing most of the ifs in the page pool only.
Agreed on the 'not kmap()able and not readable' checking part.
It's a no-go. I don't think renaming struct page to netmem is going to move the needle (it also re-introduces code-churn). What I feel like I learnt is that dma-bufs are not struct pages and can't be made to look like one, I think.
As the netmem patchset, is devmem able to reuse the below 'struct netmem' and rename it to 'struct page_pool_iov'?
I don't think so. For the reasons above, but also practically it immediately falls apart. Consider this field in netmem:
- @flags: The same as the page flags. Do not use directly.
dma-buf don't have or support page-flags, and making dma-buf looks like they support page flags or any page-like features (other than dma_addr) seems extremely unacceptable to mm folks.
As far as I can tell, as we limit the devmem usage in the netstack, the below are the related mm function calls for 'struct page' for devmem: page_ref_*(): page->_refcount does not need changing.
Sorry, I don't understand. Are you suggesting we call page_ref_add() & page_ref_sub() on page_pool_iov? That is basically making page_pool_iov look like struct page to the mm stack, since page_ref_* are mm calls, which you say above we don't need to do. We will still need to special case this, no?
As we are reusing 'struct page' for devmem, page->_refcount for devmem and page->_refcount for normal memory should be the same, right? We may need to ensure that 'struct page' for devmem always looks like a head page of a compound page, or a base page, to the net stack, as we use get_page() in __skb_frag_ref().
We can choose not to call page_ref_sub() for pages from devmem; we can call napi_pp_put_page() instead, and we may be able to special-case pages from devmem in the devmem provider's 'release_page' op from napi_pp_put_page().
page_is_pfmemalloc(): which is corresponding to page->pp_magic, and devmem provider can set/unset it in it's 'alloc_pages' ops.
page_is_pfmemalloc() has nothing to do with page->pp_magic. It checks page->lru.next to figure out if this is a pfmemalloc. page_pool_iov has no page->lru.next. Still need to special case this?
See the comment in napi_pp_put_page():
/* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
 * in order to preserve any existing bits, such as bit 0 for the
 * head page of compound page and bit 1 for pfmemalloc page, so
 * mask those bits for freeing side when doing below checking,
 * and page_is_pfmemalloc() is checked in __page_pool_put_page()
 * to avoid recycling the pfmemalloc page.
 */
There is a union in struct page; page->lru.next and page->pp_magic actually point to the same storage, as I understand it.
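Paraphrasing include/linux/mm.h (treat the exact bodies as an approximation), the pfmemalloc state is bit 1 of page->lru.next, which is the same word as pp_magic, hence the masking in that comment:

/* Paraphrased from include/linux/mm.h: bit 1 of page->lru.next carries
 * the pfmemalloc state, and lru.next shares storage with pp_magic.
 */
static inline bool page_is_pfmemalloc_sketch(const struct page *page)
{
	return (uintptr_t)page->lru.next & BIT(1);
}

static inline void set_page_pfmemalloc_sketch(struct page *page)
{
	page->lru.next = (void *)BIT(1);
}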
page_to_nid(): we may need to handle it differently, somewhat like this patch does, as page_to_nid() may have a different implementation based on different configurations.
So you're saying we need to handle page_to_nid() differently for devmem? So we're not going to be able to avoid the if statement.
Yes, it seems to be the only place that might need special handling I see so far.
page_pool_iov_put_many(): as mentioned in other thread, if net stack is not calling page_pool_page_put_many() directly, we can reuse napi_pp_put_page() for devmem too, and handle the special case for devmem in 'release_page' ops.
page_pool_iov_put_many()/page_pool_iov_get_many() are called to do
Can we remove the page_pool_iov_put_many()/page_pool_iov_get_many() calls?
refcounting before the page is released back to the provider. I'm not seeing how we can handle the special case inside of 'release_page' - that's too late, as far as I can tell.
And handle the special case in page_pool_return_page(), mainly replacing put_page() with the 'release_page' op for devmem pages? https://elixir.free-electrons.com/linux/v6.6-rc1/source/net/core/page_pool.c...
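Something like the below is what I have in mind (the mp_ops/'release_page' naming follows the memory provider patches this series pulls in; the exact hook point is only an assumption, and DMA unmap/stats handling is omitted):

/* Sketch: page_pool_return_page() is the single place memory leaves the
 * pool, so the devmem special case could live here instead of in every
 * generic helper.
 */
static void page_pool_return_page(struct page_pool *pool, struct page *page)
{
	if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops)
		pool->mp_ops->release_page(pool, page);
	else
		put_page(page);
}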
The only way to remove the if statements in the page pool is to implement what you said was not feasible in an earlier email. We would define this struct:
struct netmem {
	/* common fields */
	refcount_t refcount;
	bool is_pfmemalloc;
	int nid;
	......

	union {
		struct devmem {
			struct dmabuf_genpool_chunk_owner *owner;
		};

		struct page *page;
	};
};
Then, we would require all memory providers to allocate struct netmem for the memory and set the common fields, including ones that have struct pages. For devmem, netmem->page will be NULL, because netmem has no page.
That is not what I have in mind.
If we do that, the page pool can ignore whether the underlying memory is page or devmem, because it can use the common fields, example:
/* page_ref_count replacement */
netmem_ref_count(struct netmem *netmem)
{
	return netmem->refcount;
}

/* page_ref_add replacement */
netmem_ref_add(struct netmem *netmem)
{
	atomic_inc(netmem->refcount);
}

/* page_to_nid replacement */
netmem_nid(struct netmem *netmem)
{
	return netmem->nid;
}

/* page_is_pfmemalloc() replacement */
netmem_is_pfmemalloc(struct netmem *netmem)
{
	return netmem->is_pfmemalloc;
}

/* page_ref_sub replacement */
netmem_ref_sub(struct netmem *netmem)
{
	atomic_sub(netmem->refcount);
	if (netmem->refcount == 0) {
		/* Release memory to the provider.
		 * A struct page memory provider will do put_page(),
		 * devmem will do something else.
		 */
	}
}
I think this MAY BE technically feasible, but I'm not sure it's better:
1. It is a huge refactor to the page pool, lots of code churn. While the page pool currently uses page*, it needs to be completely refactored to use netmem*.
2. It causes extra memory usage. struct netmem needs to be allocated for every struct page.
3. It has minimal perf upside. The page_is_page_pool_iov() checks currently have minimal perf impact, and I demonstrated that to Jesper in RFC v2.
4. It also may not be technically feasible. I'm not sure how netmem interacts with skb_frag_t. I guess we replace struct page* bv_page with struct netmem* bv_page, and add changes there.
5. Drivers need to be refactored to use netmem* instead of page*, unless we cast netmem* to page* before returning to the driver.
Possibly other downsides, these are what I could immediately think of.
If I'm still misunderstanding your suggestion, it may be time to send me a concrete code snippet of what you have in mind. I'm a bit confused at the moment because the only avenue I see to remove the if statements in the page pool is to define the struct that we agreed is not feasible in earlier emails.
I might be able to do it at the weekend if it is still not making any sense to you.
-- Thanks, Mina
On Sun, 2023-11-05 at 18:44 -0800, Mina Almasry wrote:
Overload the LSB of struct page* to indicate that it's a page_pool_iov.
Refactor mm calls on struct page* into helpers, and add page_pool_iov handling on those helpers. Modify callers of these mm APIs with calls to these helpers instead.
In areas where struct page* is dereferenced, add a check for special handling of page_pool_iov.
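Concretely, the LSB tagging boils down to helpers roughly like the below (the tag macro name here is illustrative; the full definitions live in the page pool helpers patch):

/* Rough shape of the LSB tagging: a pointer with bit 0 set is a
 * page_pool_iov, anything else is a real struct page.
 */
#define PP_IOV 0x01UL

static inline bool page_is_page_pool_iov(const struct page *page)
{
	return (unsigned long)page & PP_IOV;
}

static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
{
	if (page_is_page_pool_iov(page))
		return (struct page_pool_iov *)((unsigned long)page & ~PP_IOV);

	DEBUG_NET_WARN_ON_ONCE(true);
	return NULL;
}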
Signed-off-by: Mina Almasry almasrymina@google.com
 include/net/page_pool/helpers.h | 74 ++++++++++++++++++++++++++++++++-
 net/core/page_pool.c            | 63 ++++++++++++++++++++--------
 2 files changed, 118 insertions(+), 19 deletions(-)
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index b93243c2a640..08f1a2cc70d2 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -151,6 +151,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
 	return NULL;
 }
+
+static inline int page_pool_page_ref_count(struct page *page)
+{
+	if (page_is_page_pool_iov(page))
+		return page_pool_iov_refcount(page_to_page_pool_iov(page));
+
+	return page_ref_count(page);
+}
+
+static inline void page_pool_page_get_many(struct page *page,
+					   unsigned int count)
+{
+	if (page_is_page_pool_iov(page))
+		return page_pool_iov_get_many(page_to_page_pool_iov(page),
+					      count);
+
+	return page_ref_add(page, count);
+}
+
+static inline void page_pool_page_put_many(struct page *page,
+					   unsigned int count)
+{
+	if (page_is_page_pool_iov(page))
+		return page_pool_iov_put_many(page_to_page_pool_iov(page),
+					      count);
+
+	if (count > 1)
+		page_ref_sub(page, count - 1);
+
+	put_page(page);
+}
+
+static inline bool page_pool_page_is_pfmemalloc(struct page *page)
+{
+	if (page_is_page_pool_iov(page))
+		return false;
+
+	return page_is_pfmemalloc(page);
+}
+
+static inline bool page_pool_page_is_pref_nid(struct page *page, int pref_nid)
+{
+	/* Assume page_pool_iov are on the preferred node without actually
+	 * checking...
+	 *
+	 * This check is only used to check for recycling memory in the page
+	 * pool's fast paths. Currently the only implementation of page_pool_iov
+	 * is dmabuf device memory. It's a deliberate decision by the user to
+	 * bind a certain dmabuf to a certain netdev, and the netdev rx queue
+	 * would not be able to reallocate memory from another dmabuf that
+	 * exists on the preferred node, so, this check doesn't make much sense
+	 * in this case. Assume all page_pool_iovs can be recycled for now.
+	 */
+	if (page_is_page_pool_iov(page))
+		return true;
+
+	return page_to_nid(page) == pref_nid;
+}
+
 /**
  * page_pool_dev_alloc_pages() - allocate a page.
  * @pool: pool from which to allocate
@@ -301,6 +359,9 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
 {
 	long ret;

+	if (page_is_page_pool_iov(page))
+		return -EINVAL;
+
 	/* If nr == pp_frag_count then we have cleared all remaining
 	 * references to the page:
 	 * 'n == 1': no need to actually overwrite it.
@@ -431,7 +492,12 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va,
  */
 static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
 {
-	dma_addr_t ret = page->dma_addr;
+	dma_addr_t ret;
+
+	if (page_is_page_pool_iov(page))
+		return page_pool_iov_dma_addr(page_to_page_pool_iov(page));
Should the above conditional be guarded by the page_pool_mem_providers static key? This looks like a fast path. Same question for the refcount helper above.
Minor nit: possibly cache 'page_is_page_pool_iov(page)' to make the code more readable.
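Something along these lines is what I mean (page_pool_mem_providers being the existing static key; the exact guard is only an illustration):

static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
{
	dma_addr_t ret;

	/* Only take the devmem branch when a memory provider is in use. */
	if (static_branch_unlikely(&page_pool_mem_providers) &&
	    page_is_page_pool_iov(page))
		return page_pool_iov_dma_addr(page_to_page_pool_iov(page));

	ret = page->dma_addr;

	if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA)
		ret <<= PAGE_SHIFT;

	return ret;
}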
+	ret = page->dma_addr;

 	if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA)
 		ret <<= PAGE_SHIFT;
@@ -441,6 +507,12 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)

 static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
 {
+	/* page_pool_iovs are mapped and their dma-addr can't be modified. */
+	if (page_is_page_pool_iov(page)) {
+		DEBUG_NET_WARN_ON_ONCE(true);
+		return false;
+	}
+
Quickly skimming over the page_pool code, it looks like page_pool_set_dma_addr() usage is guarded by the PP_FLAG_DMA_MAP page pool flag. Could the device mem provider enforce such a flag being cleared on the page pool?
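i.e. something like the below in the provider's init hook (the function name and hook placement are only illustrative):

/* Illustrative: have the dmabuf provider refuse pool flags it cannot
 * honour, since the dmabuf binding already did the DMA mapping.
 */
static int mp_dmabuf_devmem_init(struct page_pool *pool)
{
	if (pool->p.flags & (PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV))
		return -EOPNOTSUPP;

	return 0;
}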
 	if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) {
 		page->dma_addr = addr >> PAGE_SHIFT;

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 138ddea0b28f..d211996d423b 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -317,7 +317,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
 		if (unlikely(!page))
 			break;

-		if (likely(page_to_nid(page) == pref_nid)) {
+		if (likely(page_pool_page_is_pref_nid(page, pref_nid))) {
 			pool->alloc.cache[pool->alloc.count++] = page;
 		} else {
 			/* NUMA mismatch;
@@ -362,7 +362,15 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
 					  struct page *page,
 					  unsigned int dma_sync_size)
 {
-	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+	dma_addr_t dma_addr;
+
+	/* page_pool_iov memory provider do not support PP_FLAG_DMA_SYNC_DEV */
+	if (page_is_page_pool_iov(page)) {
+		DEBUG_NET_WARN_ON_ONCE(true);
+		return;
+	}
+
Similar to the above point, mutatis mutandis.
+	dma_addr = page_pool_get_dma_addr(page);

 	dma_sync_size = min(dma_sync_size, pool->p.max_len);
 	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
@@ -374,6 +382,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 {
 	dma_addr_t dma;

+	if (page_is_page_pool_iov(page)) {
+		/* page_pool_iovs are already mapped */
+		DEBUG_NET_WARN_ON_ONCE(true);
+		return true;
+	}
+
Ditto.
Cheers,
Paolo
Make skb_frag_page() fail in the case where the frag is not backed by a page, and fix its relevant callers to handle this case.
Correctly handle skb_frag refcounting in the page_pool_iovs case.
Signed-off-by: Mina Almasry almasrymina@google.com
--- include/linux/skbuff.h | 42 +++++++++++++++++++++++++++++++++++------- net/core/gro.c | 2 +- net/core/skbuff.c | 3 +++ net/ipv4/tcp.c | 10 +++++++++- 4 files changed, 48 insertions(+), 9 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 97bfef071255..1fae276c1353 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -37,6 +37,8 @@ #endif #include <net/net_debug.h> #include <net/dropreason-core.h> +#include <net/page_pool/types.h> +#include <net/page_pool/helpers.h>
/** * DOC: skb checksums @@ -3402,15 +3404,38 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto, fragto->bv_offset = fragfrom->bv_offset; }
+/* Returns true if the skb_frag contains a page_pool_iov. */ +static inline bool skb_frag_is_page_pool_iov(const skb_frag_t *frag) +{ + return page_is_page_pool_iov(frag->bv_page); +} + /** * skb_frag_page - retrieve the page referred to by a paged fragment * @frag: the paged fragment * - * Returns the &struct page associated with @frag. + * Returns the &struct page associated with @frag. Returns NULL if this frag + * has no associated page. */ static inline struct page *skb_frag_page(const skb_frag_t *frag) { - return frag->bv_page; + if (!page_is_page_pool_iov(frag->bv_page)) + return frag->bv_page; + + return NULL; +} + +/** + * skb_frag_page_pool_iov - retrieve the page_pool_iov referred to by fragment + * @frag: the fragment + * + * Returns the &struct page_pool_iov associated with @frag. Returns NULL if this + * frag has no associated page_pool_iov. + */ +static inline struct page_pool_iov * +skb_frag_page_pool_iov(const skb_frag_t *frag) +{ + return page_to_page_pool_iov(frag->bv_page); }
/** @@ -3421,7 +3446,7 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag) */ static inline void __skb_frag_ref(skb_frag_t *frag) { - get_page(skb_frag_page(frag)); + page_pool_page_get_many(frag->bv_page, 1); }
/** @@ -3441,13 +3466,13 @@ bool napi_pp_put_page(struct page *page, bool napi_safe); static inline void napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe) { - struct page *page = skb_frag_page(frag); - #ifdef CONFIG_PAGE_POOL - if (recycle && napi_pp_put_page(page, napi_safe)) + if (recycle && napi_pp_put_page(frag->bv_page, napi_safe)) return; + page_pool_page_put_many(frag->bv_page, 1); +#else + put_page(skb_frag_page(frag)); #endif - put_page(page); }
/** @@ -3487,6 +3512,9 @@ static inline void skb_frag_unref(struct sk_buff *skb, int f) */ static inline void *skb_frag_address(const skb_frag_t *frag) { + if (!skb_frag_page(frag)) + return NULL; + return page_address(skb_frag_page(frag)) + skb_frag_off(frag); }
diff --git a/net/core/gro.c b/net/core/gro.c index 0759277dc14e..42d7f6755f32 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -376,7 +376,7 @@ static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff) NAPI_GRO_CB(skb)->frag0 = NULL; NAPI_GRO_CB(skb)->frag0_len = 0;
- if (!skb_headlen(skb) && pinfo->nr_frags && + if (!skb_headlen(skb) && pinfo->nr_frags && skb_frag_page(frag0) && !PageHighMem(skb_frag_page(frag0)) && (!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) { NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index c52ddd6891d9..13eca4fd25e1 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -2994,6 +2994,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe, for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) { const skb_frag_t *f = &skb_shinfo(skb)->frags[seg];
+ if (WARN_ON_ONCE(!skb_frag_page(f))) + return false; + if (__splice_segment(skb_frag_page(f), skb_frag_off(f), skb_frag_size(f), offset, len, spd, false, sk, pipe)) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index a86d8200a1e8..23b29dc49271 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2155,6 +2155,9 @@ static int tcp_zerocopy_receive(struct sock *sk, break; } page = skb_frag_page(frags); + if (WARN_ON_ONCE(!page)) + break; + prefetchw(page); pages[pages_to_map++] = page; length += PAGE_SIZE; @@ -4411,7 +4414,12 @@ int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, for (i = 0; i < shi->nr_frags; ++i) { const skb_frag_t *f = &shi->frags[i]; unsigned int offset = skb_frag_off(f); - struct page *page = skb_frag_page(f) + (offset >> PAGE_SHIFT); + struct page *page = skb_frag_page(f); + + if (WARN_ON_ONCE(!page)) + return 1; + + page += offset >> PAGE_SHIFT;
sg_set_page(&sg, page, skb_frag_size(f), offset_in_page(offset));
On 2023/11/6 10:44, Mina Almasry wrote:
Make skb_frag_page() fail in the case where the frag is not backed by a page, and fix its relevant callers to handle this case.
Correctly handle skb_frag refcounting in the page_pool_iovs case.
Signed-off-by: Mina Almasry almasrymina@google.com
...
 /**
  * skb_frag_page - retrieve the page referred to by a paged fragment
  * @frag: the paged fragment
  *
- * Returns the &struct page associated with @frag.
+ * Returns the &struct page associated with @frag. Returns NULL if this frag
+ * has no associated page.
  */
 static inline struct page *skb_frag_page(const skb_frag_t *frag)
 {
-	return frag->bv_page;
+	if (!page_is_page_pool_iov(frag->bv_page))
+		return frag->bv_page;
+
+	return NULL;
It seems most callers don't expect a NULL return from skb_frag_page(), and this patch only changes a few relevant callers to handle the NULL case.
It may make more sense to add a new helper to do the above checking, and add a warning in skb_frag_page() to catch any caller missing the NULL check, something like below?
 static inline struct page *skb_frag_page(const skb_frag_t *frag)
 {
-	return frag->bv_page;
+	struct page *page = frag->bv_page;
+
+	BUG_ON(page_is_page_pool_iov(page));
+
+	return page;
+}
+
+static inline struct page *skb_frag_readable_page(const skb_frag_t *frag)
+{
+	struct page *page = frag->bv_page;
+
+	if (!page_is_page_pool_iov(page))
+		return page;
+
+	return NULL;
 }
On Tue, Nov 7, 2023 at 1:00 AM Yunsheng Lin linyunsheng@huawei.com wrote:
On 2023/11/6 10:44, Mina Almasry wrote:
Make skb_frag_page() fail in the case where the frag is not backed by a page, and fix its relevant callers to handle this case.
Correctly handle skb_frag refcounting in the page_pool_iovs case.
Signed-off-by: Mina Almasry almasrymina@google.com
...
 /**
  * skb_frag_page - retrieve the page referred to by a paged fragment
  * @frag: the paged fragment
  *
- * Returns the &struct page associated with @frag.
+ * Returns the &struct page associated with @frag. Returns NULL if this frag
+ * has no associated page.
  */
 static inline struct page *skb_frag_page(const skb_frag_t *frag)
 {
-	return frag->bv_page;
+	if (!page_is_page_pool_iov(frag->bv_page))
+		return frag->bv_page;
+
+	return NULL;
It seems most callers don't expect a NULL return from skb_frag_page(), and this patch only changes a few relevant callers to handle the NULL case.
Yes, I did not change code that I guessed was not likely to be affected or enable the devmem TCP case. Here is my breakdown:
➜  cos-kernel git:(tcpdevmem) ✗ ack -i "skb_frag_page(" --ignore-dir=drivers -t cc -l
net/core/dev.c
net/core/datagram.c
net/core/xdp.c
net/core/skbuff.c
net/core/filter.c
net/core/gro.c
net/appletalk/ddp.c
net/wireless/util.c
net/tls/tls_device.c
net/tls/tls_device_fallback.c
net/ipv4/tcp.c
net/ipv4/tcp_output.c
net/bpf/test_run.c
include/linux/skbuff.h
I'm ignoring any skb_frag_page() calls in drivers because drivers need to add support for devmem TCP and handle these calls at the time of adding support; I think that's reasonable.
net/core/dev.c: I think I missed illegal_highdma()
net/core/datagram.c: __skb_datagram_iter() protected by not_readable(skb) check.
net/core/skbuff.c: protected by not_readable(skb) check.
net/core/filter.c: bpf_xdp_frags_shrink_tail seems like xdp specific, not sure it's relevant here.
net/core/gro.c: skb_gro_reset_offset: protected by NULL check
net/ipv4/tcp.c: tcp_zerocopy_receive protected by NULL check.
net/ipv4/tcp_output.c: tcp_clone_payload: handles NULL return fine.
net/bpf/test_run.c: seems xdp specific and not sure if it can run into devmem issues.
include/linux/skbuff.h: I think the multiple calls here are being handled correctly, but let me know if not.
All the calls in these files, I think, are on code paths that can't hit devmem TCP with the current support:
net/core/xdp.c
net/appletalk/ddp.c
net/wireless/util.c
net/tls/tls_device.c
net/tls/tls_device_fallback.c
All in all, I think I missed illegal_highdma(). I'll fix it in the next iteration.
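For the record, the fix there will probably look something like the below (illustrative only, based on the existing frag walk in net/core/dev.c):

/* Illustrative fix: a pageless (devmem) frag cannot be touched by the
 * device here either, so treat it like a highmem page the device cannot
 * reach.
 */
static int illegal_highdma(struct net_device *dev, struct sk_buff *skb)
{
	int i;

	if (!(dev->features & NETIF_F_HIGHDMA)) {
		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
			struct page *page =
				skb_frag_page(&skb_shinfo(skb)->frags[i]);

			if (!page || PageHighMem(page))
				return 1;
		}
	}

	return 0;
}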
It may make more sense to add a new helper to do the above checking, and add a warning in skb_frag_page() to catch any caller missing the NULL check, something like below?
 static inline struct page *skb_frag_page(const skb_frag_t *frag)
 {
-	return frag->bv_page;
+	struct page *page = frag->bv_page;
+
+	BUG_ON(page_is_page_pool_iov(page));
+
+	return page;
+}
+
+static inline struct page *skb_frag_readable_page(const skb_frag_t *frag)
+{
+	struct page *page = frag->bv_page;
+
+	if (!page_is_page_pool_iov(page))
+		return page;
+
+	return NULL;
 }
My personal immediate reaction is that this may just introduce code churn without significant benefit. If an unsuspecting caller calls skb_frag_page() on a devmem frag and doesn't correctly handle the NULL return, it will crash or error out anyway, and likely in some obvious way, so maybe the BUG_ON() isn't so useful that it's worth changing all the call sites. But if there is consensus on adding a change like you propose, I have no problem adding it.
On 2023/11/8 5:19, Mina Almasry wrote:
My personal immediate reaction is that this may just introduce code churn without significant benefit. If an unsuspecting caller calls skb_frag_page() on a devmem frag and doesn't correctly handle the NULL return, it will crash or error out anyway, and likely in some obvious way, so maybe the BUG_ON() isn't so useful that it's worth changing
If it will always crash or error out, then I agree that BUG_ON() is unnecessary.
all the call sites. But if there is consensus on adding a change like you propose, I have no problem adding it.
One obvious benefit I forgot to mention is that it gives clearer semantics about whether a caller needs to check the return value:
1. For the old helper, the semantics are not to do the check, provided the caller has ensured that it passes a readable frag to skb_frag_page(); this avoids adding overhead for drivers that don't support devmem.
2. For the new helper, the semantics are to do the check, and we may add the compiler's '__must_check' function attribute to make sure the caller does it (see the sketch below).
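A minimal sketch of that attribute on the new helper (names as proposed above, everything else illustrative):

/* With __must_check the compiler warns when a caller ignores the return
 * value, nudging it to handle the NULL (devmem) case.
 */
static inline struct page * __must_check
skb_frag_readable_page(const skb_frag_t *frag)
{
	struct page *page = frag->bv_page;

	if (!page_is_page_pool_iov(page))
		return page;

	return NULL;
}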
On Sun, 2023-11-05 at 18:44 -0800, Mina Almasry wrote: [...]
@@ -3421,7 +3446,7 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
  */
 static inline void __skb_frag_ref(skb_frag_t *frag)
 {
-	get_page(skb_frag_page(frag));
+	page_pool_page_get_many(frag->bv_page, 1);
I guess the above needs #ifdef CONFIG_PAGE_POOL guards and explicit skb_frag_is_page_pool_iov() check ?
Cheers,
Paolo
On Thu, Nov 9, 2023 at 1:15 AM Paolo Abeni pabeni@redhat.com wrote:
On Sun, 2023-11-05 at 18:44 -0800, Mina Almasry wrote: [...]
@@ -3421,7 +3446,7 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
  */
 static inline void __skb_frag_ref(skb_frag_t *frag)
 {
-	get_page(skb_frag_page(frag));
+	page_pool_page_get_many(frag->bv_page, 1);
I guess the above needs #ifdef CONFIG_PAGE_POOL guards and explicit skb_frag_is_page_pool_iov() check ?
It doesn't actually. page_pool_page_* helpers are compiled in regardless of CONFIG_PAGE_POOL, and handle both page_pool_iov* & page* just fine (the checking happens inside the function).
You may yell at me that it's too confusing... I somewhat agree, but I'm unsure of what is a better name or location for the helpers. The helpers handle (page_pool_iov* || page*) gracefully, so they seem to belong in the page pool for me, but it is indeed surprising/confusing that these helpers are available even if !CONFIG_PAGE_POOL.
Cheers,
Paolo
On Sun, 5 Nov 2023 18:44:07 -0800 Mina Almasry wrote:
 #include <net/net_debug.h>
 #include <net/dropreason-core.h>
+#include <net/page_pool/types.h>
+#include <net/page_pool/helpers.h>

 /**
  * DOC: skb checksums
@@ -3402,15 +3404,38 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto,
 	fragto->bv_offset = fragfrom->bv_offset;
 }

+/* Returns true if the skb_frag contains a page_pool_iov. */
+static inline bool skb_frag_is_page_pool_iov(const skb_frag_t *frag)
+{
+	return page_is_page_pool_iov(frag->bv_page);
+}
Maybe we can create a new header? For skb + page pool.
skbuff.h is included by 1/4th of the kernel objects, we should not be adding dependencies to this header, it really slows down incremental builds.
On Fri, Nov 10, 2023 at 3:19 PM Jakub Kicinski kuba@kernel.org wrote:
On Sun, 5 Nov 2023 18:44:07 -0800 Mina Almasry wrote:
 #include <net/net_debug.h>
 #include <net/dropreason-core.h>
+#include <net/page_pool/types.h>
+#include <net/page_pool/helpers.h>

 /**
  * DOC: skb checksums
@@ -3402,15 +3404,38 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto,
 	fragto->bv_offset = fragfrom->bv_offset;
 }

+/* Returns true if the skb_frag contains a page_pool_iov. */
+static inline bool skb_frag_is_page_pool_iov(const skb_frag_t *frag)
+{
+	return page_is_page_pool_iov(frag->bv_page);
+}
Maybe we can create a new header? For skb + page pool.
skbuff.h is included by 1/4th of the kernel objects, we should not be adding dependencies to this header, it really slows down incremental builds.
My issue here is that all these skb helpers call each other so I end up having to move a lot of the unrelated skb helpers to this new header (maybe that is acceptable but it feels weird).
What I could do here is move all the page_pool_page|iov_* helpers to a minimal header, and include only that one from skbuff.h, rather than including all of net/page_pool/helpers.h
On Sun, 12 Nov 2023 22:05:30 -0800 Mina Almasry wrote:
My issue here is that all these skb helpers call each other so I end up having to move a lot of the unrelated skb helpers to this new header (maybe that is acceptable but it feels weird).
Splitting pp headers again is not an option, we already split it into types and helpers.
The series doesn't apply and it's a bit hard for me to, in between LPC talks, figure out how to extract your patches from the GH UI to take a proper look :( We can defer this for now, and hopefully v4 will apply to net-next. But it will need to get solved.
For device memory TCP, we expect the skb headers to be available in host memory for access, and we expect the skb frags to be in device memory and inaccessible to the host. We expect there to be no mixing and matching of device memory frags (inaccessible) with host memory frags (accessible) in the same skb.
Add a skb->devmem flag which indicates whether the frags in this skb are device memory frags or not.
__skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs, and marks the skb as skb->devmem accordingly.
Add checks through the network stack to avoid accessing the frags of devmem skbs and avoid coalescing devmem skbs with non devmem skbs.
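The shape of those checks is essentially the guard below (shown standalone; the exact action on the not-readable path - error, skip, or fall back to the linear part - depends on the call site):

/* Illustrative only: the hunks below add a bail-out like this in front of
 * any code that would kmap or memcpy frag payloads.
 */
static int example_touch_frags(struct sk_buff *skb)
{
	if (skb_frags_not_readable(skb))
		return -EFAULT;

	/* ... safe to dereference skb_frag_page() et al. from here ... */
	return 0;
}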
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
--- include/linux/skbuff.h | 14 +++++++- include/net/tcp.h | 5 +-- net/core/datagram.c | 6 ++++ net/core/gro.c | 5 ++- net/core/skbuff.c | 77 ++++++++++++++++++++++++++++++++++++------ net/ipv4/tcp.c | 6 ++++ net/ipv4/tcp_input.c | 13 +++++-- net/ipv4/tcp_output.c | 5 ++- net/packet/af_packet.c | 4 +-- 9 files changed, 115 insertions(+), 20 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 1fae276c1353..8fb468ff8115 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -805,6 +805,8 @@ typedef unsigned char *sk_buff_data_t; * @csum_level: indicates the number of consecutive checksums found in * the packet minus one that have been verified as * CHECKSUM_UNNECESSARY (max 3) + * @devmem: indicates that all the fragments in this skb are backed by + * device memory. * @dst_pending_confirm: need to confirm neighbour * @decrypted: Decrypted SKB * @slow_gro: state present at GRO time, slower prepare step required @@ -991,7 +993,7 @@ struct sk_buff { #if IS_ENABLED(CONFIG_IP_SCTP) __u8 csum_not_inet:1; #endif - + __u8 devmem:1; #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS) __u16 tc_index; /* traffic control index */ #endif @@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb) __skb_zcopy_downgrade_managed(skb); }
+/* Return true if frags in this skb are not readable by the host. */ +static inline bool skb_frags_not_readable(const struct sk_buff *skb) +{ + return skb->devmem; +} + static inline void skb_mark_not_on_list(struct sk_buff *skb) { skb->next = NULL; @@ -2468,6 +2476,10 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i, struct page *page, int off, int size) { __skb_fill_page_desc_noacc(skb_shinfo(skb), i, page, off, size); + if (page_is_page_pool_iov(page)) { + skb->devmem = true; + return; + }
/* Propagate page pfmemalloc to the skb if we can. The problem is * that not all callers have unique ownership of the page but rely diff --git a/include/net/tcp.h b/include/net/tcp.h index 39b731c900dd..1ae62d1e284b 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1012,7 +1012,7 @@ static inline int tcp_skb_mss(const struct sk_buff *skb)
static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb) { - return likely(!TCP_SKB_CB(skb)->eor); + return likely(!TCP_SKB_CB(skb)->eor && !skb_frags_not_readable(skb)); }
static inline bool tcp_skb_can_collapse(const struct sk_buff *to, @@ -1020,7 +1020,8 @@ static inline bool tcp_skb_can_collapse(const struct sk_buff *to, { return likely(tcp_skb_can_collapse_to(to) && mptcp_skb_can_collapse(to, from) && - skb_pure_zcopy_same(to, from)); + skb_pure_zcopy_same(to, from) && + skb_frags_not_readable(to) == skb_frags_not_readable(from)); }
/* Events passed to congestion control interface */ diff --git a/net/core/datagram.c b/net/core/datagram.c index 176eb5834746..cdd4fb129968 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -425,6 +425,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset, return 0; }
+ if (skb_frags_not_readable(skb)) + goto short_copy; + /* Copy paged appendix. Hmm... why does this look so complicated? */ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; @@ -616,6 +619,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk, { int frag;
+ if (skb_frags_not_readable(skb)) + return -EFAULT; + if (msg && msg->msg_ubuf && msg->sg_from_iter) return msg->sg_from_iter(sk, skb, from, length);
diff --git a/net/core/gro.c b/net/core/gro.c index 42d7f6755f32..56046d65386a 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -390,6 +390,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow) { struct skb_shared_info *pinfo = skb_shinfo(skb);
+ if (WARN_ON_ONCE(skb_frags_not_readable(skb))) + return; + BUG_ON(skb->end - skb->tail < grow);
memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow); @@ -411,7 +414,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb) { int grow = skb_gro_offset(skb) - skb_headlen(skb);
- if (grow > 0) + if (grow > 0 && !skb_frags_not_readable(skb)) gro_pull_from_frag0(skb, grow); }
diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 13eca4fd25e1..f01673ed2eff 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1230,6 +1230,14 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt) struct page *p; u8 *vaddr;
+ if (skb_frag_is_page_pool_iov(frag)) { + printk("%sskb frag %d: not readable\n", level, i); + len -= frag->bv_len; + if (!len) + break; + continue; + } + skb_frag_foreach_page(frag, skb_frag_off(frag), skb_frag_size(frag), p, p_off, p_len, copied) { @@ -1807,6 +1815,9 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) if (skb_shared(skb) || skb_unclone(skb, gfp_mask)) return -EINVAL;
+ if (skb_frags_not_readable(skb)) + return -EFAULT; + if (!num_frags) goto release;
@@ -1977,8 +1988,12 @@ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask) { int headerlen = skb_headroom(skb); unsigned int size = skb_end_offset(skb) + skb->data_len; - struct sk_buff *n = __alloc_skb(size, gfp_mask, - skb_alloc_rx_flag(skb), NUMA_NO_NODE); + struct sk_buff *n; + + if (skb_frags_not_readable(skb)) + return NULL; + + n = __alloc_skb(size, gfp_mask, skb_alloc_rx_flag(skb), NUMA_NO_NODE);
if (!n) return NULL; @@ -2304,14 +2319,16 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb, int newheadroom, int newtailroom, gfp_t gfp_mask) { - /* - * Allocate the copy buffer - */ - struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom, - gfp_mask, skb_alloc_rx_flag(skb), - NUMA_NO_NODE); int oldheadroom = skb_headroom(skb); int head_copy_len, head_copy_off; + struct sk_buff *n; + + if (skb_frags_not_readable(skb)) + return NULL; + + /* Allocate the copy buffer */ + n = __alloc_skb(newheadroom + skb->len + newtailroom, gfp_mask, + skb_alloc_rx_flag(skb), NUMA_NO_NODE);
if (!n) return NULL; @@ -2650,6 +2667,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta) */ int i, k, eat = (skb->tail + delta) - skb->end;
+ if (skb_frags_not_readable(skb)) + return NULL; + if (eat > 0 || skb_cloned(skb)) { if (pskb_expand_head(skb, 0, eat > 0 ? eat + 128 : 0, GFP_ATOMIC)) @@ -2803,6 +2823,9 @@ int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len) to += copy; }
+ if (skb_frags_not_readable(skb)) + goto fault; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; skb_frag_t *f = &skb_shinfo(skb)->frags[i]; @@ -2991,6 +3014,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe, /* * then map the fragments */ + if (skb_frags_not_readable(skb)) + return false; + for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) { const skb_frag_t *f = &skb_shinfo(skb)->frags[seg];
@@ -3214,6 +3240,9 @@ int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len) from += copy; }
+ if (skb_frags_not_readable(skb)) + goto fault; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; int end; @@ -3293,6 +3322,9 @@ __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len, pos = copy; }
+ if (skb_frags_not_readable(skb)) + return 0; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; @@ -3393,6 +3425,9 @@ __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset, pos = copy; }
+ if (skb_frags_not_readable(skb)) + return 0; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end;
@@ -3883,7 +3918,9 @@ static inline void skb_split_inside_header(struct sk_buff *skb, skb_shinfo(skb1)->frags[i] = skb_shinfo(skb)->frags[i];
skb_shinfo(skb1)->nr_frags = skb_shinfo(skb)->nr_frags; + skb1->devmem = skb->devmem; skb_shinfo(skb)->nr_frags = 0; + skb->devmem = 0; skb1->data_len = skb->data_len; skb1->len += skb1->data_len; skb->data_len = 0; @@ -3897,6 +3934,7 @@ static inline void skb_split_no_header(struct sk_buff *skb, { int i, k = 0; const int nfrags = skb_shinfo(skb)->nr_frags; + const int devmem = skb->devmem;
skb_shinfo(skb)->nr_frags = 0; skb1->len = skb1->data_len = skb->len - len; @@ -3930,6 +3968,16 @@ static inline void skb_split_no_header(struct sk_buff *skb, pos += size; } skb_shinfo(skb1)->nr_frags = k; + + if (skb_shinfo(skb)->nr_frags) + skb->devmem = devmem; + else + skb->devmem = 0; + + if (skb_shinfo(skb1)->nr_frags) + skb1->devmem = devmem; + else + skb1->devmem = 0; }
/** @@ -4165,6 +4213,9 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data, return block_limit - abs_offset; }
+ if (skb_frags_not_readable(st->cur_skb)) + return 0; + if (st->frag_idx == 0 && !st->frag_data) st->stepped_offset += skb_headlen(st->cur_skb);
@@ -5779,7 +5830,10 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from, (from->pp_recycle && skb_cloned(from))) return false;
- if (len <= skb_tailroom(to)) { + if (skb_frags_not_readable(from) != skb_frags_not_readable(to)) + return false; + + if (len <= skb_tailroom(to) && !skb_frags_not_readable(from)) { if (len) BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len)); *delta_truesize = 0; @@ -5954,6 +6008,9 @@ int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len) if (!pskb_may_pull(skb, write_len)) return -ENOMEM;
+ if (skb_frags_not_readable(skb)) + return -EFAULT; + if (!skb_cloned(skb) || skb_clone_writable(skb, write_len)) return 0;
@@ -6608,7 +6665,7 @@ void skb_condense(struct sk_buff *skb) { if (skb->data_len) { if (skb->data_len > skb->end - skb->tail || - skb_cloned(skb)) + skb_cloned(skb) || skb_frags_not_readable(skb)) return;
/* Nice, we can free page frag(s) right now */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 23b29dc49271..5c6fed52ed0e 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2138,6 +2138,9 @@ static int tcp_zerocopy_receive(struct sock *sk, skb = tcp_recv_skb(sk, seq, &offset); }
+ if (skb_frags_not_readable(skb)) + break; + if (TCP_SKB_CB(skb)->has_rxtstamp) { tcp_update_recv_tstamps(skb, tss); zc->msg_flags |= TCP_CMSG_TS; @@ -4411,6 +4414,9 @@ int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, if (crypto_ahash_update(req)) return 1;
+ if (skb_frags_not_readable(skb)) + return 1; + for (i = 0; i < shi->nr_frags; ++i) { const skb_frag_t *f = &shi->frags[i]; unsigned int offset = skb_frag_off(f); diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 18b858597af4..64643dad5e1a 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5264,6 +5264,9 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, for (end_of_skbs = true; skb != NULL && skb != tail; skb = n) { n = tcp_skb_next(skb, list);
+ if (skb_frags_not_readable(skb)) + goto skip_this; + /* No new bits? It is possible on ofo queue. */ if (!before(start, TCP_SKB_CB(skb)->end_seq)) { skb = tcp_collapse_one(sk, skb, list, root); @@ -5284,17 +5287,20 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, break; }
- if (n && n != tail && mptcp_skb_can_collapse(skb, n) && + if (n && n != tail && !skb_frags_not_readable(n) && + mptcp_skb_can_collapse(skb, n) && TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(n)->seq) { end_of_skbs = false; break; }
+skip_this: /* Decided to skip this, advance start seq. */ start = TCP_SKB_CB(skb)->end_seq; } if (end_of_skbs || - (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) + (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) || + skb_frags_not_readable(skb)) return;
__skb_queue_head_init(&tmp); @@ -5338,7 +5344,8 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, if (!skb || skb == tail || !mptcp_skb_can_collapse(nskb, skb) || - (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) + (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) || + skb_frags_not_readable(skb)) goto end; #ifdef CONFIG_TLS_DEVICE if (skb->decrypted != nskb->decrypted) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 2866ccbccde0..60df27f6c649 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2309,7 +2309,8 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)
if (unlikely(TCP_SKB_CB(skb)->eor) || tcp_has_tx_tstamp(skb) || - !skb_pure_zcopy_same(skb, next)) + !skb_pure_zcopy_same(skb, next) || + skb_frags_not_readable(skb) != skb_frags_not_readable(next)) return false;
len -= skb->len; @@ -3193,6 +3194,8 @@ static bool tcp_can_collapse(const struct sock *sk, const struct sk_buff *skb) return false; if (skb_cloned(skb)) return false; + if (skb_frags_not_readable(skb)) + return false; /* Some heuristics for collapsing over SACK'd could be invented */ if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) return false; diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index a84e00b5904b..8f6cca683939 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2156,7 +2156,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev, } }
- snaplen = skb->len; + snaplen = skb_frags_not_readable(skb) ? skb_headlen(skb) : skb->len;
res = run_filter(skb, sk, snaplen); if (!res) @@ -2279,7 +2279,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, } }
- snaplen = skb->len; + snaplen = skb_frags_not_readable(skb) ? skb_headlen(skb) : skb->len;
res = run_filter(skb, sk, snaplen); if (!res)
On 11/05, Mina Almasry wrote:
For device memory TCP, we expect the skb headers to be available in host memory for access, and we expect the skb frags to be in device memory and inaccessible to the host. We expect there to be no mixing and matching of device memory frags (inaccessible) with host memory frags (accessible) in the same skb.
Add a skb->devmem flag which indicates whether the frags in this skb are device memory frags or not.
__skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs, and marks the skb as skb->devmem accordingly.
Add checks through the network stack to avoid accessing the frags of devmem skbs and avoid coalescing devmem skbs with non devmem skbs.
Signed-off-by: Willem de Bruijn willemb@google.com
Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com
Signed-off-by: Mina Almasry almasrymina@google.com
 include/linux/skbuff.h | 14 +++++++-
 include/net/tcp.h      |  5 +--
 net/core/datagram.c    |  6 ++++
 net/core/gro.c         |  5 ++-
 net/core/skbuff.c      | 77 ++++++++++++++++++++++++++++++++++++------
 net/ipv4/tcp.c         |  6 ++++
 net/ipv4/tcp_input.c   | 13 +++++--
 net/ipv4/tcp_output.c  |  5 ++-
 net/packet/af_packet.c |  4 +--
 9 files changed, 115 insertions(+), 20 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 1fae276c1353..8fb468ff8115 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -805,6 +805,8 @@ typedef unsigned char *sk_buff_data_t;
  *	@csum_level: indicates the number of consecutive checksums found in
  *		the packet minus one that have been verified as
  *		CHECKSUM_UNNECESSARY (max 3)
+ *	@devmem: indicates that all the fragments in this skb are backed by
+ *		device memory.
  *	@dst_pending_confirm: need to confirm neighbour
  *	@decrypted: Decrypted SKB
  *	@slow_gro: state present at GRO time, slower prepare step required
@@ -991,7 +993,7 @@ struct sk_buff {
 #if IS_ENABLED(CONFIG_IP_SCTP)
 	__u8	csum_not_inet:1;
 #endif
-
+	__u8	devmem:1;
 #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS)
 	__u16	tc_index;	/* traffic control index */
 #endif
@@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb)
 		__skb_zcopy_downgrade_managed(skb);
 }

+/* Return true if frags in this skb are not readable by the host. */
+static inline bool skb_frags_not_readable(const struct sk_buff *skb)
+{
+	return skb->devmem;
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote:
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
The flag indicates that the skb contains all devmem dma-buf memory specifically, not generic 'not_readable' frags as the comment says:
+ * @devmem: indicates that all the fragments in this skb are backed by + * device memory.
The reason it's not a generic 'not_readable' flag is because handing off a generic not_readable skb to the userspace is semantically not what we're doing. recvmsg() is augmented in this patch series to return a devmem skb to the user via a cmsg_devmem struct which refers specifically to the memory in the dma-buf. recvmsg() in this patch series is not augmented to give any 'not_readable' skb to the userspace.
IMHO skb->devmem + an skb_frags_not_readable() as implemented is correct. If a new type of unreadable skbs are introduced to the stack, I imagine the stack would implement:
1. new header flag: skb->newmem 2.
static inline bool skb_frags_not_readable(const struct sk_buff *skb)
{
	return skb->devmem || skb->newmem;
}
3. tcp_recvmsg_devmem() would handle skb->devmem skbs as in this patch series, but tcp_recvmsg_newmem() would handle skb->newmem skbs (a rough sketch of this dispatch follows below).
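As a rough sketch of that dispatch (illustrative only: skb->newmem, tcp_recvmsg_newmem(), the helper name and the argument lists below are made up, and the real tcp_recvmsg_devmem() in this series may have a different signature):

/* Hypothetical sketch, not code from this series. */
static int tcp_recvmsg_unreadable(struct sock *sk, struct sk_buff *skb,
				  struct msghdr *msg, int flags)
{
	if (skb->devmem)	/* handled as in this series */
		return tcp_recvmsg_devmem(sk, skb, msg, flags);
	if (skb->newmem)	/* hypothetical future memory type */
		return tcp_recvmsg_newmem(sk, skb, msg, flags);
	return -EINVAL;
}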
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote:
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
The flag indicates that the skb contains all devmem dma-buf memory specifically, not generic 'not_readable' frags as the comment says:
@devmem: indicates that all the fragments in this skb are backed by
device memory.
The reason it's not a generic 'not_readable' flag is because handing off a generic not_readable skb to the userspace is semantically not what we're doing. recvmsg() is augmented in this patch series to return a devmem skb to the user via a cmsg_devmem struct which refers specifically to the memory in the dma-buf. recvmsg() in this patch series is not augmented to give any 'not_readable' skb to the userspace.
IMHO skb->devmem + an skb_frags_not_readable() as implemented is correct. If a new type of unreadable skbs are introduced to the stack, I imagine the stack would implement:
- new header flag: skb->newmem
static inline bool skb_frags_not_readable(const struct skb_buff *skb) { return skb->devmem || skb->newmem; }
- tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch
series, but tcp_recvmsg_newmem() would handle skb->newmem skbs.
You copy it to the userspace in a special way because your frags are page_is_page_pool_iov(). I agree with David, the skb bit is just an optimization.
For most of the core stack, it doesn't matter why your skb is not readable. For a few places where it matters (recvmsg?), you can double-check your frags (all or some) with page_is_page_pool_iov.
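As a sketch of that frag-level check (illustrative only; the helper name is made up, and page_is_page_pool_iov() is the predicate this series introduces for dma-buf backed frags):

/* Sketch: walk the frags and report whether any of them is backed by a
 * page_pool_iov, i.e. is not readable by the host.
 */
static bool skb_any_frag_unreadable(const struct sk_buff *skb)
{
	const struct skb_shared_info *shinfo = skb_shinfo(skb);
	int i;

	for (i = 0; i < shinfo->nr_frags; i++)
		if (page_is_page_pool_iov(skb_frag_page(&shinfo->frags[i])))
			return true;

	return false;
}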
Unrelated: we probably need socket to dmabuf association as well (via netlink or something). We are fundamentally receiving into and sending from a dmabuf (devmem == dmabuf). And once you have this association, recvmsg shouldn't need any new special flags.
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote:
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
The flag indicates that the skb contains all devmem dma-buf memory specifically, not generic 'not_readable' frags as the comment says:
@devmem: indicates that all the fragments in this skb are backed by
device memory.
The reason it's not a generic 'not_readable' flag is because handing off a generic not_readable skb to the userspace is semantically not what we're doing. recvmsg() is augmented in this patch series to return a devmem skb to the user via a cmsg_devmem struct which refers specifically to the memory in the dma-buf. recvmsg() in this patch series is not augmented to give any 'not_readable' skb to the userspace.
IMHO skb->devmem + an skb_frags_not_readable() as implemented is correct. If a new type of unreadable skbs are introduced to the stack, I imagine the stack would implement:
- new header flag: skb->newmem
static inline bool skb_frags_not_readable(const struct skb_buff *skb) { return skb->devmem || skb->newmem; }
- tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch
series, but tcp_recvmsg_newmem() would handle skb->newmem skbs.
You copy it to the userspace in a special way because your frags are page_is_page_pool_iov(). I agree with David, the skb bit is just and optimization.
For most of the core stack, it doesn't matter why your skb is not readable. For a few places where it matters (recvmsg?), you can double-check your frags (all or some) with page_is_page_pool_iov.
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because devmem is currently the only type of unreadable frag.
Unrelated: we probably need socket to dmabuf association as well (via netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
We are fundamentally receiving into and sending from a dmabuf (devmem == dmabuf). And once you have this association, recvmsg shouldn't need any new special flags.
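To make the queue-binding association described above concrete, a hedged userspace sketch (bind_rx_queue_to_dmabuf() is a made-up placeholder for the netlink bind proposed in this series; flow steering is only described in comments):

/* Made-up placeholder for the series' netlink dma-buf bind; it only
 * marks where that step would happen.
 */
int bind_rx_queue_to_dmabuf(int ifindex, int queue_idx, int dmabuf_fd)
{
	(void)ifindex; (void)queue_idx; (void)dmabuf_fd;
	return 0;
}

int setup_devmem_rx(int ifindex, int queue_idx, int dmabuf_fd)
{
	/* 1. Bind the dma-buf to an rx-queue of the NIC. */
	if (bind_rx_queue_to_dmabuf(ifindex, queue_idx, dmabuf_fd) < 0)
		return -1;

	/* 2. Steer the flows of interest to that rx-queue out of band,
	 *    e.g. with an ethtool ntuple rule matching the connection's
	 *    4-tuple and directing it to queue_idx.
	 *
	 * 3. Any TCP socket whose packets land on that queue then sees
	 *    devmem frags: the socket-to-dma-buf association falls out
	 *    of queue binding plus flow steering; it is not a property
	 *    set on the socket itself.
	 */
	return 0;
}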
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote:
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
The flag indicates that the skb contains all devmem dma-buf memory specifically, not generic 'not_readable' frags as the comment says:
@devmem: indicates that all the fragments in this skb are backed by
device memory.
The reason it's not a generic 'not_readable' flag is because handing off a generic not_readable skb to the userspace is semantically not what we're doing. recvmsg() is augmented in this patch series to return a devmem skb to the user via a cmsg_devmem struct which refers specifically to the memory in the dma-buf. recvmsg() in this patch series is not augmented to give any 'not_readable' skb to the userspace.
IMHO skb->devmem + an skb_frags_not_readable() as implemented is correct. If a new type of unreadable skbs are introduced to the stack, I imagine the stack would implement:
- new header flag: skb->newmem
static inline bool skb_frags_not_readable(const struct skb_buff *skb) { return skb->devmem || skb->newmem; }
- tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch
series, but tcp_recvmsg_newmem() would handle skb->newmem skbs.
You copy it to the userspace in a special way because your frags are page_is_page_pool_iov(). I agree with David, the skb bit is just and optimization.
For most of the core stack, it doesn't matter why your skb is not readable. For a few places where it matters (recvmsg?), you can double-check your frags (all or some) with page_is_page_pool_iov.
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because currently devmem is the only type of unreadable frag currently.
Unrelated: we probably need socket to dmabuf association as well (via netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
But there is still always 1 dmabuf to 1 socket association (on rx), right? Because otherwise, there is no way currently to tell, at recvmsg, which dmabuf the received token belongs to.
So why not have a separate control channel action to say: this socket fd is supposed to receive into this dmabuf fd? This action would put the socket into permanent 'MSG_SOCK_DEVMEM' mode. Maybe you can also put some checks at the lower level to enforce this dmabuf association (to avoid any potential issues with flow steering).
We'll still have dmabuf to rx-queue association because of various reasons..
On Mon, Nov 6, 2023 at 2:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote:
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
The flag indicates that the skb contains all devmem dma-buf memory specifically, not generic 'not_readable' frags as the comment says:
@devmem: indicates that all the fragments in this skb are backed by
device memory.
The reason it's not a generic 'not_readable' flag is because handing off a generic not_readable skb to the userspace is semantically not what we're doing. recvmsg() is augmented in this patch series to return a devmem skb to the user via a cmsg_devmem struct which refers specifically to the memory in the dma-buf. recvmsg() in this patch series is not augmented to give any 'not_readable' skb to the userspace.
IMHO skb->devmem + an skb_frags_not_readable() as implemented is correct. If a new type of unreadable skbs are introduced to the stack, I imagine the stack would implement:
- new header flag: skb->newmem
static inline bool skb_frags_not_readable(const struct skb_buff *skb) { return skb->devmem || skb->newmem; }
- tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch
series, but tcp_recvmsg_newmem() would handle skb->newmem skbs.
You copy it to the userspace in a special way because your frags are page_is_page_pool_iov(). I agree with David, the skb bit is just and optimization.
For most of the core stack, it doesn't matter why your skb is not readable. For a few places where it matters (recvmsg?), you can double-check your frags (all or some) with page_is_page_pool_iov.
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because currently devmem is the only type of unreadable frag currently.
Unrelated: we probably need socket to dmabuf association as well (via netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
But there is still always 1 dmabuf to 1 socket association (on rx), right? Because otherwise, there is no way currently to tell, at recvmsg, which dmabuf the received token belongs to.
Yes, but this 1 dma-buf to 1 socket association happens because the user binds the dma-buf to an rx-queue and configures flow steering of the socket to that rx-queue.
So why not have a separate control channel action to say: this socket fd is supposed to receive into this dmabuf fd? This action would put the socket into permanent 'MSG_SOCK_DEVMEM' mode. Maybe you can also put some checks at the lower level to to enforce this dmabuf association. (to avoid any potential issues with flow steering)
setsockopt(SO_DEVMEM_ASSERT_DMA_BUF, dmabuf_fd)? Sounds interesting, but maybe a bit of a weird API to me. Because the API can't enforce the socket to receive packets on a dma-buf (rx-queue binding + flow steering does that), but the API can assert that incoming packets are received on said dma-buf. I guess it would check packets before they are acked and would drop packets that landed on the wrong queue.
I'm a bit unsure about defensively programming features (and uapi no less) to 'avoid any potential issues with flow steering'. Flow steering is supposed to work.
Also if we wanted to defensively program something to avoid flow steering issues, then I'd suggest adding to cmsg_devmem the dma-buf fd that the data is on, not this setsockopt() that asserts. IMO it's a weird API for the userspace to ask the kernel to assert some condition (at least I haven't seen it before or commonly).
But again, in general, I'm a bit unsure about defensively designing uapi around a feature like flow steering that's supposed to work.
We'll still have dmabuf to rx-queue association because of various reasons..
--
Thanks,
Mina
On Mon, Nov 6, 2023 at 3:27 PM Mina Almasry almasrymina@google.com wrote:
On Mon, Nov 6, 2023 at 2:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote:
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
The flag indicates that the skb contains all devmem dma-buf memory specifically, not generic 'not_readable' frags as the comment says:
@devmem: indicates that all the fragments in this skb are backed by
device memory.
The reason it's not a generic 'not_readable' flag is because handing off a generic not_readable skb to the userspace is semantically not what we're doing. recvmsg() is augmented in this patch series to return a devmem skb to the user via a cmsg_devmem struct which refers specifically to the memory in the dma-buf. recvmsg() in this patch series is not augmented to give any 'not_readable' skb to the userspace.
IMHO skb->devmem + an skb_frags_not_readable() as implemented is correct. If a new type of unreadable skbs are introduced to the stack, I imagine the stack would implement:
- new header flag: skb->newmem
static inline bool skb_frags_not_readable(const struct skb_buff *skb) { return skb->devmem || skb->newmem; }
- tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch
series, but tcp_recvmsg_newmem() would handle skb->newmem skbs.
You copy it to the userspace in a special way because your frags are page_is_page_pool_iov(). I agree with David, the skb bit is just and optimization.
For most of the core stack, it doesn't matter why your skb is not readable. For a few places where it matters (recvmsg?), you can double-check your frags (all or some) with page_is_page_pool_iov.
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because currently devmem is the only type of unreadable frag currently.
Unrelated: we probably need socket to dmabuf association as well (via netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
But there is still always 1 dmabuf to 1 socket association (on rx), right? Because otherwise, there is no way currently to tell, at recvmsg, which dmabuf the received token belongs to.
Yes, but this 1 dma-buf to 1 socket association happens because the user binds the dma-buf to an rx-queue and configures flow steering of the socket to that rx-queue.
It's still fixed and won't change during the socket lifetime, right? And the socket has to know this association; otherwise those tokens are useless since they don't carry anything to identify the dmabuf.
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call. If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
So why not have a separate control channel action to say: this socket fd is supposed to receive into this dmabuf fd? This action would put the socket into permanent 'MSG_SOCK_DEVMEM' mode. Maybe you can also put some checks at the lower level to to enforce this dmabuf association. (to avoid any potential issues with flow steering)
setsockopt(SO_DEVMEM_ASSERT_DMA_BUF, dmabuf_fd)? Sounds interesting, but maybe a bit of a weird API to me. Because the API can't enforce the socket to receive packets on a dma-buf (rx-queue binding + flow steering does that), but the API can assert that incoming packets are received on said dma-buf. I guess it would check packets before they are acked and would drop packets that landed on the wrong queue.
I'm a bit unsure about defensively programming features (and uapi no less) to 'avoid any potential issues with flow steering'. Flow steering is supposed to work.
Also if we wanted to defensively program something to avoid flow steering issues, then I'd suggest adding to cmsg_devmem the dma-buf fd that the data is on, not this setsockopt() that asserts. IMO it's a weird API for the userspace to ask the kernel to assert some condition (at least I haven't seen it before or commonly).
But again, in general, I'm a bit unsure about defensively designing uapi around a feature like flow steering that's supposed to work.
On Mon, Nov 6, 2023 at 3:55 PM Stanislav Fomichev sdf@google.com wrote:
On Mon, Nov 6, 2023 at 3:27 PM Mina Almasry almasrymina@google.com wrote:
On Mon, Nov 6, 2023 at 2:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote:
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
The flag indicates that the skb contains all devmem dma-buf memory specifically, not generic 'not_readable' frags as the comment says:
@devmem: indicates that all the fragments in this skb are backed by
device memory.
The reason it's not a generic 'not_readable' flag is because handing off a generic not_readable skb to the userspace is semantically not what we're doing. recvmsg() is augmented in this patch series to return a devmem skb to the user via a cmsg_devmem struct which refers specifically to the memory in the dma-buf. recvmsg() in this patch series is not augmented to give any 'not_readable' skb to the userspace.
IMHO skb->devmem + an skb_frags_not_readable() as implemented is correct. If a new type of unreadable skbs are introduced to the stack, I imagine the stack would implement:
- new header flag: skb->newmem
static inline bool skb_frags_not_readable(const struct skb_buff *skb) { return skb->devmem || skb->newmem; }
- tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch
series, but tcp_recvmsg_newmem() would handle skb->newmem skbs.
You copy it to the userspace in a special way because your frags are page_is_page_pool_iov(). I agree with David, the skb bit is just and optimization.
For most of the core stack, it doesn't matter why your skb is not readable. For a few places where it matters (recvmsg?), you can double-check your frags (all or some) with page_is_page_pool_iov.
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because currently devmem is the only type of unreadable frag currently.
Unrelated: we probably need socket to dmabuf association as well (via netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
But there is still always 1 dmabuf to 1 socket association (on rx), right? Because otherwise, there is no way currently to tell, at recvmsg, which dmabuf the received token belongs to.
Yes, but this 1 dma-buf to 1 socket association happens because the user binds the dma-buf to an rx-queue and configures flow steering of the socket to that rx-queue.
It's still fixed and won't change during the socket lifetime, right? And the socket has to know this association; otherwise those tokens are useless since they don't carry anything to identify the dmabuf.
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call. If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
I think it should then be a setsockopt called before any data is exchanged, with no chance of modifying the mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
On 11/06, Willem de Bruijn wrote:
On Mon, Nov 6, 2023 at 3:55 PM Stanislav Fomichev sdf@google.com wrote:
On Mon, Nov 6, 2023 at 3:27 PM Mina Almasry almasrymina@google.com wrote:
On Mon, Nov 6, 2023 at 2:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote: > On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote: > > > > On 11/6/23 11:47 AM, Stanislav Fomichev wrote: > > > On 11/05, Mina Almasry wrote: > > >> For device memory TCP, we expect the skb headers to be available in host > > >> memory for access, and we expect the skb frags to be in device memory > > >> and unaccessible to the host. We expect there to be no mixing and > > >> matching of device memory frags (unaccessible) with host memory frags > > >> (accessible) in the same skb. > > >> > > >> Add a skb->devmem flag which indicates whether the frags in this skb > > >> are device memory frags or not. > > >> > > >> __skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs, > > >> and marks the skb as skb->devmem accordingly. > > >> > > >> Add checks through the network stack to avoid accessing the frags of > > >> devmem skbs and avoid coalescing devmem skbs with non devmem skbs. > > >> > > >> Signed-off-by: Willem de Bruijn willemb@google.com > > >> Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com > > >> Signed-off-by: Mina Almasry almasrymina@google.com > > >> > > >> --- > > >> include/linux/skbuff.h | 14 +++++++- > > >> include/net/tcp.h | 5 +-- > > >> net/core/datagram.c | 6 ++++ > > >> net/core/gro.c | 5 ++- > > >> net/core/skbuff.c | 77 ++++++++++++++++++++++++++++++++++++------ > > >> net/ipv4/tcp.c | 6 ++++ > > >> net/ipv4/tcp_input.c | 13 +++++-- > > >> net/ipv4/tcp_output.c | 5 ++- > > >> net/packet/af_packet.c | 4 +-- > > >> 9 files changed, 115 insertions(+), 20 deletions(-) > > >> > > >> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h > > >> index 1fae276c1353..8fb468ff8115 100644 > > >> --- a/include/linux/skbuff.h > > >> +++ b/include/linux/skbuff.h > > >> @@ -805,6 +805,8 @@ typedef unsigned char *sk_buff_data_t; > > >> * @csum_level: indicates the number of consecutive checksums found in > > >> * the packet minus one that have been verified as > > >> * CHECKSUM_UNNECESSARY (max 3) > > >> + * @devmem: indicates that all the fragments in this skb are backed by > > >> + * device memory. > > >> * @dst_pending_confirm: need to confirm neighbour > > >> * @decrypted: Decrypted SKB > > >> * @slow_gro: state present at GRO time, slower prepare step required > > >> @@ -991,7 +993,7 @@ struct sk_buff { > > >> #if IS_ENABLED(CONFIG_IP_SCTP) > > >> __u8 csum_not_inet:1; > > >> #endif > > >> - > > >> + __u8 devmem:1; > > >> #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS) > > >> __u16 tc_index; /* traffic control index */ > > >> #endif > > >> @@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb) > > >> __skb_zcopy_downgrade_managed(skb); > > >> } > > >> > > >> +/* Return true if frags in this skb are not readable by the host. */ > > >> +static inline bool skb_frags_not_readable(const struct sk_buff *skb) > > >> +{ > > >> + return skb->devmem; > > > > > > bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? > > > It better communicates the fact that the stack shouldn't dereference the > > > frags (because it has 'devmem' fragments or for some other potential > > > future reason). > > > > +1. > > > > Also, the flag on the skb is an optimization - a high level signal that > > one or more frags is in unreadable memory. There is no requirement that > > all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
> The flag indicates that the skb contains all devmem dma-buf memory > specifically, not generic 'not_readable' frags as the comment says: > > + * @devmem: indicates that all the fragments in this skb are backed by > + * device memory. > > The reason it's not a generic 'not_readable' flag is because handing > off a generic not_readable skb to the userspace is semantically not > what we're doing. recvmsg() is augmented in this patch series to > return a devmem skb to the user via a cmsg_devmem struct which refers > specifically to the memory in the dma-buf. recvmsg() in this patch > series is not augmented to give any 'not_readable' skb to the > userspace. > > IMHO skb->devmem + an skb_frags_not_readable() as implemented is > correct. If a new type of unreadable skbs are introduced to the stack, > I imagine the stack would implement: > > 1. new header flag: skb->newmem > 2. > > static inline bool skb_frags_not_readable(const struct skb_buff *skb) > { > return skb->devmem || skb->newmem; > } > > 3. tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch > series, but tcp_recvmsg_newmem() would handle skb->newmem skbs.
You copy it to the userspace in a special way because your frags are page_is_page_pool_iov(). I agree with David, the skb bit is just and optimization.
For most of the core stack, it doesn't matter why your skb is not readable. For a few places where it matters (recvmsg?), you can double-check your frags (all or some) with page_is_page_pool_iov.
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because currently devmem is the only type of unreadable frag currently.
Unrelated: we probably need socket to dmabuf association as well (via netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
But there is still always 1 dmabuf to 1 socket association (on rx), right? Because otherwise, there is no way currently to tell, at recvmsg, which dmabuf the received token belongs to.
Yes, but this 1 dma-buf to 1 socket association happens because the user binds the dma-buf to an rx-queue and configures flow steering of the socket to that rx-queue.
It's still fixed and won't change during the socket lifetime, right? And the socket has to know this association; otherwise those tokens are useless since they don't carry anything to identify the dmabuf.
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call. If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
I think it should then be a setsockopt called before any data is exchanged, with no change of modifying mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
I was thinking this is similar to SOCK_CLOEXEC or SOCK_NONBLOCK? There are plenty of bits we can grab. But setsockopt works as well!
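For comparison, a hedged sketch of the two opt-in styles being discussed; SOCK_DEVMEM and SO_DEVMEM_MODE below are imagined names used only for illustration, not existing uapi:

#include <sys/socket.h>

#ifndef SOCK_DEVMEM
#define SOCK_DEVMEM	0x8000000	/* imagined type flag, a la SOCK_NONBLOCK */
#endif
#define SO_DEVMEM_MODE	99		/* imagined socket option */

/* Opt in at socket creation, like SOCK_CLOEXEC/SOCK_NONBLOCK. */
int open_devmem_socket_flag(void)
{
	return socket(AF_INET6, SOCK_STREAM | SOCK_DEVMEM, 0);
}

/* Opt in with a setsockopt issued before any data is exchanged; the mode
 * is then fixed for the socket's lifetime, and accepted child sockets
 * could opt in individually.
 */
int open_devmem_socket_setsockopt(void)
{
	int one = 1;
	int fd = socket(AF_INET6, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	/* Would fail on a real kernel: SO_DEVMEM_MODE is made up. */
	setsockopt(fd, SOL_SOCKET, SO_DEVMEM_MODE, &one, sizeof(one));
	return fd;
}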
On 11/06, Stanislav Fomichev wrote:
On 11/06, Willem de Bruijn wrote:
On Mon, Nov 6, 2023 at 3:55 PM Stanislav Fomichev sdf@google.com wrote:
On Mon, Nov 6, 2023 at 3:27 PM Mina Almasry almasrymina@google.com wrote:
On Mon, Nov 6, 2023 at 2:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote:
On 11/6/23 11:47 AM, Stanislav Fomichev wrote:
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
> > The flag indicates that the skb contains all devmem dma-buf memory > > specifically, not generic 'not_readable' frags as the comment says: > > > > + * @devmem: indicates that all the fragments in this skb are backed by > > + * device memory. > > > > The reason it's not a generic 'not_readable' flag is because handing > > off a generic not_readable skb to the userspace is semantically not > > what we're doing. recvmsg() is augmented in this patch series to > > return a devmem skb to the user via a cmsg_devmem struct which refers > > specifically to the memory in the dma-buf. recvmsg() in this patch > > series is not augmented to give any 'not_readable' skb to the > > userspace. > > > > IMHO skb->devmem + an skb_frags_not_readable() as implemented is > > correct. If a new type of unreadable skbs are introduced to the stack, > > I imagine the stack would implement: > > > > 1. new header flag: skb->newmem > > 2. > > > > static inline bool skb_frags_not_readable(const struct skb_buff *skb) > > { > > return skb->devmem || skb->newmem; > > } > > > > 3. tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch > > series, but tcp_recvmsg_newmem() would handle skb->newmem skbs. > > You copy it to the userspace in a special way because your frags > are page_is_page_pool_iov(). I agree with David, the skb bit is > just and optimization. > > For most of the core stack, it doesn't matter why your skb is not > readable. For a few places where it matters (recvmsg?), you can > double-check your frags (all or some) with page_is_page_pool_iov. >
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide whether to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because devmem is currently the only type of unreadable frag.
> Unrelated: we probably need socket to dmabuf association as well (via > netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
But there is still always 1 dmabuf to 1 socket association (on rx), right? Because otherwise, there is no way currently to tell, at recvmsg, which dmabuf the received token belongs to.
Yes, but this 1 dma-buf to 1 socket association happens because the user binds the dma-buf to an rx-queue and configures flow steering of the socket to that rx-queue.
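For illustration, the flow-steering half of that setup can be done with the standard ethtool ntuple ioctl interface; a minimal sketch follows. The interface name, port and queue index are example values, and the dma-buf-to-queue binding itself is done via the netlink API added earlier in this series (not shown here).

#include <arpa/inet.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Steer TCP traffic destined to 'dport' on 'ifname' to rx-queue 'queue'. */
static int steer_to_queue(const char *ifname, uint16_t dport, uint32_t queue)
{
        struct ethtool_rxnfc nfc;
        struct ifreq ifr;
        int fd, ret;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
                return -1;

        memset(&nfc, 0, sizeof(nfc));
        nfc.cmd = ETHTOOL_SRXCLSRLINS;
        nfc.fs.flow_type = TCP_V4_FLOW;
        nfc.fs.h_u.tcp_ip4_spec.pdst = htons(dport);
        nfc.fs.m_u.tcp_ip4_spec.pdst = 0xffff;  /* match dst port exactly */
        nfc.fs.ring_cookie = queue;             /* land on this rx-queue */
        nfc.fs.location = RX_CLS_LOC_ANY;       /* let the driver pick a slot, if supported */

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&nfc;

        ret = ioctl(fd, SIOCETHTOOL, &ifr);
        close(fd);
        return ret;
}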
It's still fixed and won't change during the socket lifetime, right? And the socket has to know this association; otherwise those tokens are useless since they don't carry anything to identify the dmabuf.
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call. If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
I think it should then be a setsockopt called before any data is exchanged, with no chance of modifying the mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
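A minimal sketch of that setsockopt-style alternative, assuming a hypothetical SO_DEVMEM boolean option (the name and value are placeholders, not something defined by this series):

#include <sys/socket.h>

#ifndef SO_DEVMEM
#define SO_DEVMEM 100   /* placeholder value, for illustration only */
#endif

/* Called once, before any data flows; the mode would then be fixed for the
 * lifetime of the socket.
 */
static int sock_enable_devmem(int fd)
{
        int one = 1;

        return setsockopt(fd, SOL_SOCKET, SO_DEVMEM, &one, sizeof(one));
}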
I was thinking this is similar to SOCK_CLOEXEC or SOCK_NONBLOCK? There are plenty of bits we can grab. But setsockopt works as well!
To follow up: if we have this flag on a socket, not on a per-message basis, can we also use recvmsg for the recycling part maybe?
while (true) {
	memset(msg, 0, ...);

	/* receive the tokens */
	ret = recvmsg(fd, &msg, 0);

	/* recycle the tokens from the above recvmsg() */
	ret = recvmsg(fd, &msg, MSG_RECYCLE);
}
recvmsg + MSG_RECYCLE can parse the same format that regular recvmsg exports (SO_DEVMEM_OFFSET) and we can also add extra cmsg option to recycle a range.
Will this be more straightforward than a setsockopt(SO_DEVMEM_DONTNEED)? Or is it more confusing?
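To make that proposal concrete, a sketch of what such a recycling call might look like. MSG_RECYCLE and its value are hypothetical; SO_DEVMEM_OFFSET and struct cmsg_devmem are the ones added later in this series (copied here since libc headers do not carry them); whether the frag_token alone is enough to recycle is also an assumption.

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define MSG_RECYCLE 0x10000000  /* hypothetical flag, for illustration only */
#define SO_DEVMEM_OFFSET 99     /* from this series */

struct cmsg_devmem {            /* from this series' include/uapi/linux/uio.h */
        uint64_t frag_offset;
        uint32_t frag_size;
        uint32_t frag_token;
};

static ssize_t recycle_token(int fd, uint32_t token)
{
        struct cmsg_devmem cd = { .frag_token = token };
        char cbuf[CMSG_SPACE(sizeof(cd))];
        struct msghdr msg = { .msg_control = cbuf, .msg_controllen = sizeof(cbuf) };
        struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type = SO_DEVMEM_OFFSET;
        cm->cmsg_len = CMSG_LEN(sizeof(cd));
        memcpy(CMSG_DATA(cm), &cd, sizeof(cd));

        /* hand the token back to the kernel on the next receive call */
        return recvmsg(fd, &msg, MSG_RECYCLE);
}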
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call. If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
I think it should then be a setsockopt called before any data is exchanged, with no change of modifying mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
I was thinking this is similar to SOCK_CLOEXEC or SOCK_NONBLOCK? There are plenty of bits we can grab. But setsockopt works as well!
To follow up: if we have this flag on a socket, not on a per-message basis, can we also use recvmsg for the recycling part maybe?
while (true) { memset(msg, 0, ...);
/* receive the tokens */ ret = recvmsg(fd, &msg, 0); /* recycle the tokens from the above recvmsg() */ ret = recvmsg(fd, &msg, MSG_RECYCLE);
}
recvmsg + MSG_RECYCLE can parse the same format that regular recvmsg exports (SO_DEVMEM_OFFSET) and we can also add extra cmsg option to recycle a range.
Will this be more straightforward than a setsockopt(SO_DEVMEM_DONTNEED)? Or is it more confusing?
It would have to be sendmsg, as recvmsg is a copy_to_user operation.
I am not aware of any precedent in multiplexing the data stream and a control operation stream in this manner. It would also require adding a branch in the sendmsg hot path.
The memory is associated with the socket, freed when the socket is closed as well as on SO_DEVMEM_DONTNEED. Fundamentally it is a socket state operation, for which setsockopt is the socket interface.
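For comparison, the setsockopt-based release path would look roughly like the following from userspace. The option name comes from this series, but the option value and the optval layout here are assumptions: the sketch assumes a flat array of frag_token values taken from earlier SO_DEVMEM_OFFSET cmsgs.

#include <stdint.h>
#include <sys/socket.h>

#ifndef SO_DEVMEM_DONTNEED
#define SO_DEVMEM_DONTNEED 97   /* placeholder value, for illustration only */
#endif

/* Return a batch of frag tokens to the kernel so the referenced dma-buf
 * pages can be recycled back into the page pool.
 */
static int devmem_dontneed(int fd, const uint32_t *tokens, size_t n)
{
        return setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
                          tokens, n * sizeof(*tokens));
}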
Is your request purely a dislike, or is there some technical concern with BPF and setsockopt?
On 11/06, Willem de Bruijn wrote:
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call. If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
I think it should then be a setsockopt called before any data is exchanged, with no change of modifying mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
I was thinking this is similar to SOCK_CLOEXEC or SOCK_NONBLOCK? There are plenty of bits we can grab. But setsockopt works as well!
To follow up: if we have this flag on a socket, not on a per-message basis, can we also use recvmsg for the recycling part maybe?
while (true) { memset(msg, 0, ...);
/* receive the tokens */ ret = recvmsg(fd, &msg, 0); /* recycle the tokens from the above recvmsg() */ ret = recvmsg(fd, &msg, MSG_RECYCLE);
}
recvmsg + MSG_RECYCLE can parse the same format that regular recvmsg exports (SO_DEVMEM_OFFSET) and we can also add extra cmsg option to recycle a range.
Will this be more straightforward than a setsockopt(SO_DEVMEM_DONTNEED)? Or is it more confusing?
It would have to be sendmsg, as recvmsg is a copy_to_user operation.
I am not aware of any precedent in multiplexing the data stream and a control operation stream in this manner. It would also require adding a branch in the sendmsg hot path.
Is it too much plumbing to copy_from_user msg_control deep in recvmsg stack where we need it? Mixing in sendmsg is indeed ugly :-(
Regarding the hot path: aren't we already doing copy_to_user for the tokens in this hot path? Having one extra condition shouldn't hurt too much.
The memory is associated with the socket, freed when the socket is closed as well as on SO_DEVMEM_DONTNEED. Fundamentally it is a socket state operation, for which setsockopt is the socket interface.
Is your request purely a dislike, or is there some technical concern with BPF and setsockopt?
It's mostly because I've been bitten too much by custom socket options that are not really on/off/update-value operations:
29ebbba7d461 - bpf: Don't EFAULT for {g,s}setsockopt with wrong optlen
00e74ae08638 - bpf: Don't EFAULT for getsockopt with optval=NULL
9cacf81f8161 - bpf: Remove extra lock_sock for TCP_ZEROCOPY_RECEIVE
d8fe449a9c51 - bpf: Don't return EINVAL from {get,set}sockopt when optlen > PAGE_SIZE
I do agree that this particular case of SO_DEVMEM_DONTNEED seems ok, but things tend to evolve and change.
On Tue, Nov 7, 2023 at 12:44 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Willem de Bruijn wrote:
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call. If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
I think it should then be a setsockopt called before any data is exchanged, with no change of modifying mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
I was thinking this is similar to SOCK_CLOEXEC or SOCK_NONBLOCK? There are plenty of bits we can grab. But setsockopt works as well!
To follow up: if we have this flag on a socket, not on a per-message basis, can we also use recvmsg for the recycling part maybe?
while (true) { memset(msg, 0, ...);
/* receive the tokens */ ret = recvmsg(fd, &msg, 0); /* recycle the tokens from the above recvmsg() */ ret = recvmsg(fd, &msg, MSG_RECYCLE);
}
recvmsg + MSG_RECYCLE can parse the same format that regular recvmsg exports (SO_DEVMEM_OFFSET) and we can also add extra cmsg option to recycle a range.
Will this be more straightforward than a setsockopt(SO_DEVMEM_DONTNEED)? Or is it more confusing?
It would have to be sendmsg, as recvmsg is a copy_to_user operation.
I am not aware of any precedent in multiplexing the data stream and a control operation stream in this manner. It would also require adding a branch in the sendmsg hot path.
Is it too much plumbing to copy_from_user msg_control deep in recvmsg stack where we need it? Mixing in sendmsg is indeed ugly :-(
I tried exactly the inverse of that when originally adding MSG_ZEROCOPY: to allow piggy-backing zerocopy completion notifications on sendmsg calls by writing to sendmsg msg_control on return to user. It required significant code churn, which the performance gains did not warrant. Doing so also breaks the simple rule that recv is for reading and send is for writing.
Regarding the hot path: aren't we already doing copy_to_user for the tokens in this hot path? Having one extra condition shouldn't hurt too much.
We're doing that in the optional cmsg handling of recvmsg, which is already a slow path (compared to the data read() itself).
The memory is associated with the socket, freed when the socket is closed as well as on SO_DEVMEM_DONTNEED. Fundamentally it is a socket state operation, for which setsockopt is the socket interface.
Is your request purely a dislike, or is there some technical concern with BPF and setsockopt?
It's mostly because I've been bitten too much by custom socket options that are not really on/off/update-value operations:
29ebbba7d461 - bpf: Don't EFAULT for {g,s}setsockopt with wrong optlen
00e74ae08638 - bpf: Don't EFAULT for getsockopt with optval=NULL
9cacf81f8161 - bpf: Remove extra lock_sock for TCP_ZEROCOPY_RECEIVE
d8fe449a9c51 - bpf: Don't return EINVAL from {get,set}sockopt when optlen > PAGE_SIZE
I do agree that this particular case of SO_DEVMEM_DONTNEED seems ok, but things tend to evolve and change.
I see. I'm a bit concerned if we start limiting what we can do in sockets because of dependencies that BPF processing places on them. The use case for BPF [gs]etsockopt is limited to specific control mode calls. Would it make sense to just exclude calls like SO_DEVMEM_DONTNEED from this interpositioning?
At a high level what we really want is a high rate metadata path from user to kernel. And there are no perfect solutions. From kernel to user we use the socket error queue for this. That was never intended for high event rate itself, dealing with ICMP errors and the like before timestamps and zerocopy notifications were added.
If I squint hard enough I can see some prior art in mixing data and high rate state changes within the same channel in NIC descriptor queues, where some devices do this, e.g., { "insert encryption key", "send packet" }. But fundamentally I think we should keep the socket queues for data only.
On 11/07, Willem de Bruijn wrote:
On Tue, Nov 7, 2023 at 12:44 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Willem de Bruijn wrote:
> I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that > it somehow implies that I have an option of passing or not passing it > for an individual system call. > If we know that we're going to use dmabuf with the socket, maybe we > should move this flag to the socket() syscall? > > fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM); > > ?
I think it should then be a setsockopt called before any data is exchanged, with no change of modifying mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
I was thinking this is similar to SOCK_CLOEXEC or SOCK_NONBLOCK? There are plenty of bits we can grab. But setsockopt works as well!
To follow up: if we have this flag on a socket, not on a per-message basis, can we also use recvmsg for the recycling part maybe?
while (true) { memset(msg, 0, ...);
/* receive the tokens */ ret = recvmsg(fd, &msg, 0); /* recycle the tokens from the above recvmsg() */ ret = recvmsg(fd, &msg, MSG_RECYCLE);
}
recvmsg + MSG_RECYCLE can parse the same format that regular recvmsg exports (SO_DEVMEM_OFFSET) and we can also add extra cmsg option to recycle a range.
Will this be more straightforward than a setsockopt(SO_DEVMEM_DONTNEED)? Or is it more confusing?
It would have to be sendmsg, as recvmsg is a copy_to_user operation.
I am not aware of any precedent in multiplexing the data stream and a control operation stream in this manner. It would also require adding a branch in the sendmsg hot path.
Is it too much plumbing to copy_from_user msg_control deep in recvmsg stack where we need it? Mixing in sendmsg is indeed ugly :-(
I tried exactly the inverse of that when originally adding MSG_ZEROCOPY: to allow piggy-backing zerocopy completion notifications on sendmsg calls by writing to sendmsg msg_control on return to user. It required significant code churn, which the performance gains did not warrant. Doing so also breaks the simple rule that recv is for reading and send is for writing.
We're breaking so many rules here, so not sure we should be super constrained :-D
Regarding the hot path: aren't we already doing copy_to_user for the tokens in this hot path? Having one extra condition shouldn't hurt too much.
We're doing that in the optional cmsg handling of recvmsg, which is already a slow path (compared to the data read() itself).
The memory is associated with the socket, freed when the socket is closed as well as on SO_DEVMEM_DONTNEED. Fundamentally it is a socket state operation, for which setsockopt is the socket interface.
Is your request purely a dislike, or is there some technical concern with BPF and setsockopt?
It's mostly because I've been bitten too much by custom socket options that are not really on/off/update-value operations:
29ebbba7d461 - bpf: Don't EFAULT for {g,s}setsockopt with wrong optlen
00e74ae08638 - bpf: Don't EFAULT for getsockopt with optval=NULL
9cacf81f8161 - bpf: Remove extra lock_sock for TCP_ZEROCOPY_RECEIVE
d8fe449a9c51 - bpf: Don't return EINVAL from {get,set}sockopt when optlen > PAGE_SIZE
I do agree that this particular case of SO_DEVMEM_DONTNEED seems ok, but things tend to evolve and change.
I see. I'm a bit concerned if we start limiting what we can do in sockets because of dependencies that BPF processing places on them. The use case for BPF [gs]etsockopt is limited to specific control mode calls. Would it make sense to just exclude calls like SO_DEVMEM_DONTNEED from this interpositioning?
Yup, that's why I'm asking. We already have ->bpf_bypass_getsockopt() to special-case tcp zerocopy. We might add another bpf_bypass_setsockopt to special case SO_DEVMEM_DONTNEED. That's why I'm trying to see if there is a better alternative.
At a high level what we really want is a high rate metadata path from user to kernel. And there are no perfect solutions. From kernel to user we use the socket error queue for this. That was never intended for high event rate itself, dealing with ICMP errors and the like before timestamps and zerocopy notifications were added.
If I squint hard enough I can see some prior art in mixing data and high rate state changes within the same channel in NIC descriptor queues, where some devices do this, e.g., { "insert encryption key", "send packet" }. But fundamentally I think we should keep the socket queues for data only.
+1, we keep taking an easy route with using sockopt for this :-(
Anyway, let's see if any better suggestions pop up. Worst case - we stick with a socket option and will add a bypass on the bpf side.
On Mon, Nov 6, 2023 at 4:08 PM Willem de Bruijn willemdebruijn.kernel@gmail.com wrote:
On Mon, Nov 6, 2023 at 3:55 PM Stanislav Fomichev sdf@google.com wrote:
On Mon, Nov 6, 2023 at 3:27 PM Mina Almasry almasrymina@google.com wrote:
On Mon, Nov 6, 2023 at 2:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote: > On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote: > > > > On 11/6/23 11:47 AM, Stanislav Fomichev wrote: > > > On 11/05, Mina Almasry wrote: > > >> For device memory TCP, we expect the skb headers to be available in host > > >> memory for access, and we expect the skb frags to be in device memory > > >> and unaccessible to the host. We expect there to be no mixing and > > >> matching of device memory frags (unaccessible) with host memory frags > > >> (accessible) in the same skb. > > >> > > >> Add a skb->devmem flag which indicates whether the frags in this skb > > >> are device memory frags or not. > > >> > > >> __skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs, > > >> and marks the skb as skb->devmem accordingly. > > >> > > >> Add checks through the network stack to avoid accessing the frags of > > >> devmem skbs and avoid coalescing devmem skbs with non devmem skbs. > > >> > > >> Signed-off-by: Willem de Bruijn willemb@google.com > > >> Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com > > >> Signed-off-by: Mina Almasry almasrymina@google.com > > >> > > >> --- > > >> include/linux/skbuff.h | 14 +++++++- > > >> include/net/tcp.h | 5 +-- > > >> net/core/datagram.c | 6 ++++ > > >> net/core/gro.c | 5 ++- > > >> net/core/skbuff.c | 77 ++++++++++++++++++++++++++++++++++++------ > > >> net/ipv4/tcp.c | 6 ++++ > > >> net/ipv4/tcp_input.c | 13 +++++-- > > >> net/ipv4/tcp_output.c | 5 ++- > > >> net/packet/af_packet.c | 4 +-- > > >> 9 files changed, 115 insertions(+), 20 deletions(-) > > >> > > >> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h > > >> index 1fae276c1353..8fb468ff8115 100644 > > >> --- a/include/linux/skbuff.h > > >> +++ b/include/linux/skbuff.h > > >> @@ -805,6 +805,8 @@ typedef unsigned char *sk_buff_data_t; > > >> * @csum_level: indicates the number of consecutive checksums found in > > >> * the packet minus one that have been verified as > > >> * CHECKSUM_UNNECESSARY (max 3) > > >> + * @devmem: indicates that all the fragments in this skb are backed by > > >> + * device memory. > > >> * @dst_pending_confirm: need to confirm neighbour > > >> * @decrypted: Decrypted SKB > > >> * @slow_gro: state present at GRO time, slower prepare step required > > >> @@ -991,7 +993,7 @@ struct sk_buff { > > >> #if IS_ENABLED(CONFIG_IP_SCTP) > > >> __u8 csum_not_inet:1; > > >> #endif > > >> - > > >> + __u8 devmem:1; > > >> #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS) > > >> __u16 tc_index; /* traffic control index */ > > >> #endif > > >> @@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb) > > >> __skb_zcopy_downgrade_managed(skb); > > >> } > > >> > > >> +/* Return true if frags in this skb are not readable by the host. */ > > >> +static inline bool skb_frags_not_readable(const struct sk_buff *skb) > > >> +{ > > >> + return skb->devmem; > > > > > > bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? > > > It better communicates the fact that the stack shouldn't dereference the > > > frags (because it has 'devmem' fragments or for some other potential > > > future reason). > > > > +1. > > > > Also, the flag on the skb is an optimization - a high level signal that > > one or more frags is in unreadable memory. There is no requirement that > > all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
> The flag indicates that the skb contains all devmem dma-buf memory > specifically, not generic 'not_readable' frags as the comment says: > > + * @devmem: indicates that all the fragments in this skb are backed by > + * device memory. > > The reason it's not a generic 'not_readable' flag is because handing > off a generic not_readable skb to the userspace is semantically not > what we're doing. recvmsg() is augmented in this patch series to > return a devmem skb to the user via a cmsg_devmem struct which refers > specifically to the memory in the dma-buf. recvmsg() in this patch > series is not augmented to give any 'not_readable' skb to the > userspace. > > IMHO skb->devmem + an skb_frags_not_readable() as implemented is > correct. If a new type of unreadable skbs are introduced to the stack, > I imagine the stack would implement: > > 1. new header flag: skb->newmem > 2. > > static inline bool skb_frags_not_readable(const struct skb_buff *skb) > { > return skb->devmem || skb->newmem; > } > > 3. tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch > series, but tcp_recvmsg_newmem() would handle skb->newmem skbs.
You copy it to the userspace in a special way because your frags are page_is_page_pool_iov(). I agree with David, the skb bit is just an optimization.
For most of the core stack, it doesn't matter why your skb is not readable. For a few places where it matters (recvmsg?), you can double-check your frags (all or some) with page_is_page_pool_iov.
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide whether to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because devmem is currently the only type of unreadable frag.
Unrelated: we probably need socket to dmabuf association as well (via netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
But there is still always 1 dmabuf to 1 socket association (on rx), right? Because otherwise, there is no way currently to tell, at recvmsg, which dmabuf the received token belongs to.
Yes, but this 1 dma-buf to 1 socket association happens because the user binds the dma-buf to an rx-queue and configures flow steering of the socket to that rx-queue.
It's still fixed and won't change during the socket lifetime, right?
Technically, no.
The user is free to modify or delete flow steering rules outside of the lifetime of the socket. Technically it's possible for the user to reconfigure flow steering while the socket is simultaneously receiving, and the result will be packets switching from devmem to non-devmem. A reasonably configured application would probably steer 1 flow to 1 dma-buf and never change it, but this is not something we enforce; the user orchestrates it. In theory someone could find a use case for configuring and unconfiguring flow steering during a connection.
And the socket has to know this association; otherwise those tokens are useless since they don't carry anything to identify the dmabuf.
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call.
You do have the option of passing it or not passing it per system call. The MSG_SOCK_DEVMEM says the application is willing to receive devmem cmsgs - that's all. The application doesn't get to decide whether it's actually going to receive a devmem cmsg or not, because that's dictated by the type of skb that is present in the receive queue, and not up to the application. I should explain this in the commit message...
If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
I think it should then be a setsockopt called before any data is exchanged, with no change of modifying mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
We can definitely move SOCK_DEVMEM to a setsockopt(). Seems more than reasonable.
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 4:08 PM Willem de Bruijn willemdebruijn.kernel@gmail.com wrote:
On Mon, Nov 6, 2023 at 3:55 PM Stanislav Fomichev sdf@google.com wrote:
On Mon, Nov 6, 2023 at 3:27 PM Mina Almasry almasrymina@google.com wrote:
On Mon, Nov 6, 2023 at 2:59 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 1:59 PM Stanislav Fomichev sdf@google.com wrote: > > On 11/06, Mina Almasry wrote: > > On Mon, Nov 6, 2023 at 11:34 AM David Ahern dsahern@kernel.org wrote: > > > > > > On 11/6/23 11:47 AM, Stanislav Fomichev wrote: > > > > On 11/05, Mina Almasry wrote: > > > >> For device memory TCP, we expect the skb headers to be available in host > > > >> memory for access, and we expect the skb frags to be in device memory > > > >> and unaccessible to the host. We expect there to be no mixing and > > > >> matching of device memory frags (unaccessible) with host memory frags > > > >> (accessible) in the same skb. > > > >> > > > >> Add a skb->devmem flag which indicates whether the frags in this skb > > > >> are device memory frags or not. > > > >> > > > >> __skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs, > > > >> and marks the skb as skb->devmem accordingly. > > > >> > > > >> Add checks through the network stack to avoid accessing the frags of > > > >> devmem skbs and avoid coalescing devmem skbs with non devmem skbs. > > > >> > > > >> Signed-off-by: Willem de Bruijn willemb@google.com > > > >> Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com > > > >> Signed-off-by: Mina Almasry almasrymina@google.com > > > >> > > > >> --- > > > >> include/linux/skbuff.h | 14 +++++++- > > > >> include/net/tcp.h | 5 +-- > > > >> net/core/datagram.c | 6 ++++ > > > >> net/core/gro.c | 5 ++- > > > >> net/core/skbuff.c | 77 ++++++++++++++++++++++++++++++++++++------ > > > >> net/ipv4/tcp.c | 6 ++++ > > > >> net/ipv4/tcp_input.c | 13 +++++-- > > > >> net/ipv4/tcp_output.c | 5 ++- > > > >> net/packet/af_packet.c | 4 +-- > > > >> 9 files changed, 115 insertions(+), 20 deletions(-) > > > >> > > > >> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h > > > >> index 1fae276c1353..8fb468ff8115 100644 > > > >> --- a/include/linux/skbuff.h > > > >> +++ b/include/linux/skbuff.h > > > >> @@ -805,6 +805,8 @@ typedef unsigned char *sk_buff_data_t; > > > >> * @csum_level: indicates the number of consecutive checksums found in > > > >> * the packet minus one that have been verified as > > > >> * CHECKSUM_UNNECESSARY (max 3) > > > >> + * @devmem: indicates that all the fragments in this skb are backed by > > > >> + * device memory. > > > >> * @dst_pending_confirm: need to confirm neighbour > > > >> * @decrypted: Decrypted SKB > > > >> * @slow_gro: state present at GRO time, slower prepare step required > > > >> @@ -991,7 +993,7 @@ struct sk_buff { > > > >> #if IS_ENABLED(CONFIG_IP_SCTP) > > > >> __u8 csum_not_inet:1; > > > >> #endif > > > >> - > > > >> + __u8 devmem:1; > > > >> #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS) > > > >> __u16 tc_index; /* traffic control index */ > > > >> #endif > > > >> @@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb) > > > >> __skb_zcopy_downgrade_managed(skb); > > > >> } > > > >> > > > >> +/* Return true if frags in this skb are not readable by the host. */ > > > >> +static inline bool skb_frags_not_readable(const struct sk_buff *skb) > > > >> +{ > > > >> + return skb->devmem; > > > > > > > > bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? > > > > It better communicates the fact that the stack shouldn't dereference the > > > > frags (because it has 'devmem' fragments or for some other potential > > > > future reason). > > > > > > +1. > > > > > > Also, the flag on the skb is an optimization - a high level signal that > > > one or more frags is in unreadable memory. 
There is no requirement that > > > all of the frags are in the same memory type. > > David: maybe there should be such a requirement (that they all are > unreadable)? Might be easier to support initially; we can relax later > on. >
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
> > The flag indicates that the skb contains all devmem dma-buf memory > > specifically, not generic 'not_readable' frags as the comment says: > > > > + * @devmem: indicates that all the fragments in this skb are backed by > > + * device memory. > > > > The reason it's not a generic 'not_readable' flag is because handing > > off a generic not_readable skb to the userspace is semantically not > > what we're doing. recvmsg() is augmented in this patch series to > > return a devmem skb to the user via a cmsg_devmem struct which refers > > specifically to the memory in the dma-buf. recvmsg() in this patch > > series is not augmented to give any 'not_readable' skb to the > > userspace. > > > > IMHO skb->devmem + an skb_frags_not_readable() as implemented is > > correct. If a new type of unreadable skbs are introduced to the stack, > > I imagine the stack would implement: > > > > 1. new header flag: skb->newmem > > 2. > > > > static inline bool skb_frags_not_readable(const struct skb_buff *skb) > > { > > return skb->devmem || skb->newmem; > > } > > > > 3. tcp_recvmsg_devmem() would handle skb->devmem skbs is in this patch > > series, but tcp_recvmsg_newmem() would handle skb->newmem skbs. > > You copy it to the userspace in a special way because your frags > are page_is_page_pool_iov(). I agree with David, the skb bit is > just and optimization. > > For most of the core stack, it doesn't matter why your skb is not > readable. For a few places where it matters (recvmsg?), you can > double-check your frags (all or some) with page_is_page_pool_iov. >
I see, we can do that then. I.e. make the header flag 'not_readable' and check the frags to decide whether to delegate to tcp_recvmsg_devmem() or something else. We can even assume not_readable == devmem because devmem is currently the only type of unreadable frag.
> Unrelated: we probably need socket to dmabuf association as well (via > netlink or something).
Not sure this is possible. The dma-buf is bound to the rx-queue, and any packets that land on that rx-queue are bound to that dma-buf, regardless of which socket that packet belongs to. So the association IMO must be rx-queue to dma-buf, not socket to dma-buf.
But there is still always 1 dmabuf to 1 socket association (on rx), right? Because otherwise, there is no way currently to tell, at recvmsg, which dmabuf the received token belongs to.
Yes, but this 1 dma-buf to 1 socket association happens because the user binds the dma-buf to an rx-queue and configures flow steering of the socket to that rx-queue.
It's still fixed and won't change during the socket lifetime, right?
Technically, no.
The user is free to modify or delete flow steering rules outside of the lifetime of the socket. Technically it's possible for the user to reconfigure flow steering while the socket is simultaneously receiving, and the result will be packets switching from devmem to non-devmem. A reasonably configured application would probably steer 1 flow to 1 dma-buf and never change it, but this is not something we enforce; the user orchestrates it. In theory someone could find a use case for configuring and unconfiguring flow steering during a connection.
If we do want to support this flexible configuration then we also should export some dmabuf id along with the token?
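To make that suggestion concrete, one hypothetical way to do it would be to extend the cmsg struct from this series with a binding identifier; the dmabuf_id field below is not part of the series, and adding it would of course change the uapi struct layout.

struct cmsg_devmem {
	__u64 frag_offset;
	__u32 frag_size;
	__u32 frag_token;
	__u32 dmabuf_id;	/* hypothetical: which bound dma-buf this frag lives in */
};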
And the socket has to know this association; otherwise those tokens are useless since they don't carry anything to identify the dmabuf.
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call.
You do have the option of passing it or not passing it per system call. The MSG_SOCK_DEVMEM says the application is willing to receive devmem cmsgs - that's all. The application doesn't get to decide whether it's actually going to receive a devmem cmsg or not, because that's dictated by the type of skb that is present in the receive queue, and not up to the application. I should explain this in the commit message...
What would be the case of passing it or not passing it? Some fallback to host memory after a flow steering update? Yeah, it would be useful to document those constraints. I'd lean toward starting stricter and relaxing those conditions if we find the use-cases.
If we know that we're going to use dmabuf with the socket, maybe we should move this flag to the socket() syscall?
fd = socket(AF_INET6, SOCK_STREAM, SOCK_DEVMEM);
?
I think it should then be a setsockopt called before any data is exchanged, with no change of modifying mode later. We generally use setsockopts for the mode of a socket. This use of the protocol field in socket() for setting a mode would be novel. Also, it might miss passively opened connections, or be overly restrictive: one approach for all accepted child sockets.
We can definitely move SOCK_DEVMEM to a setsockopt(). Seems more than reasonable.
SG, added another suggestion for SO_DEVMEM_DONTNEED on another thread with Willem. LMK what you think.
On Mon, Nov 6, 2023 at 5:06 PM Stanislav Fomichev sdf@google.com wrote: [..]
And the socket has to know this association; otherwise those tokens are useless since they don't carry anything to identify the dmabuf.
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call.
You do have the option of passing it or not passing it per system call. The MSG_SOCK_DEVMEM says the application is willing to receive devmem cmsgs - that's all. The application doesn't get to decide whether it's actually going to receive a devmem cmsg or not, because that's dictated by the type of skb that is present in the receive queue, and not up to the application. I should explain this in the commit message...
What would be the case of passing it or not passing it? Some fallback to host memory after a flow steering update? Yeah, it would be useful to document those constraints. I'd lean toward starting stricter and relaxing those conditions if we find the use-cases.
MSG_SOCK_DEVMEM (or its replacement SOCK_DEVMEM or SO_SOCK_DEVMEM) just says that the application is able to receive devmem cmsgs and will parse them. The use case for not setting that flag is existing applications that are not aware of devmem cmsgs. I don't want those applications to think they're receiving data in the linear buffer only to find out that the data is in devmem and they ignored the devmem cmsg.
So, what happens:
- MSG_SOCK_DEVMEM provided and next skb in the queue is devmem: application receives cmsgs.
- MSG_SOCK_DEVMEM provided and next skb in the queue is non-devmem: application receives in the linear buffer.
- MSG_SOCK_DEVMEM not provided and next skb is devmem: application receives EFAULT.
- MSG_SOCK_DEVMEM not provided and next skb is non-devmem: application receives in the linear buffer.
My bad on not including some docs about this. The next version should have the commit message beefed up to explain all this, or a docs patch.
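For reference, a minimal receive loop built on the uapi pieces shown later in this series (MSG_SOCK_DEVMEM, SO_DEVMEM_HEADER/SO_DEVMEM_OFFSET and struct cmsg_devmem). The constants and struct are copied locally since libc headers do not carry them yet; error handling and token release are omitted.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#ifndef MSG_SOCK_DEVMEM
#define MSG_SOCK_DEVMEM 0x2000000       /* from this series */
#endif
#ifndef SO_DEVMEM_HEADER
#define SO_DEVMEM_HEADER 98             /* from this series */
#define SO_DEVMEM_OFFSET 99
#endif

struct cmsg_devmem {                    /* from this series' include/uapi/linux/uio.h */
        uint64_t frag_offset;
        uint32_t frag_size;
        uint32_t frag_token;
};

static ssize_t recv_devmem(int fd, char *linear, size_t len)
{
        char ctrl[CMSG_SPACE(sizeof(struct cmsg_devmem)) * 16];
        struct iovec iov = { .iov_base = linear, .iov_len = len };
        struct msghdr msg = {
                .msg_iov = &iov, .msg_iovlen = 1,
                .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
        };
        struct cmsghdr *cm;
        ssize_t ret;

        ret = recvmsg(fd, &msg, MSG_SOCK_DEVMEM);
        if (ret < 0)
                return ret;

        for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
                struct cmsg_devmem cd;

                if (cm->cmsg_level != SOL_SOCKET)
                        continue;

                memcpy(&cd, CMSG_DATA(cm), sizeof(cd));
                if (cm->cmsg_type == SO_DEVMEM_HEADER) {
                        /* cd.frag_size bytes of header landed in 'linear' */
                        printf("linear: %u bytes\n", cd.frag_size);
                } else if (cm->cmsg_type == SO_DEVMEM_OFFSET) {
                        /* payload is in the dma-buf at cd.frag_offset;
                         * cd.frag_token must eventually be returned to the
                         * kernel so the page can be recycled
                         */
                        printf("devmem: off=%llu size=%u token=%u\n",
                               (unsigned long long)cd.frag_offset,
                               cd.frag_size, cd.frag_token);
                }
        }
        return ret;
}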
On 11/07, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 5:06 PM Stanislav Fomichev sdf@google.com wrote: [..]
And the socket has to know this association; otherwise those tokens are useless since they don't carry anything to identify the dmabuf.
I think my other issue with MSG_SOCK_DEVMEM being on recvmsg is that it somehow implies that I have an option of passing or not passing it for an individual system call.
You do have the option of passing it or not passing it per system call. The MSG_SOCK_DEVMEM says the application is willing to receive devmem cmsgs - that's all. The application doesn't get to decide whether it's actually going to receive a devmem cmsg or not, because that's dictated by the type of skb that is present in the receive queue, and not up to the application. I should explain this in the commit message...
What would be the case of passing it or not passing it? Some fallback to the host memory after flow steering update? Yeah, would be useful to document those constrains. I'd lean on starting stricter and relaxing those conditions if we find the use-cases.
MSG_SOCK_DEVMEM (or its replacement SOCK_DEVMEM or SO_SOCK_DEVMEM), just says that the application is able to receive devmem cmsgs and will parse them. The use case for not setting that flag is existing applications that are not aware of devmem cmsgs. I don't want those applications to think they're receiving data in the linear buffer only to find out that the data is in devmem and they ignored the devmem cmsg.
So, what happens:
- MSG_SOCK_DEVMEM provided and next skb in the queue is devmem:
application receives cmsgs.
- MSG_SOCK_DEVMEM provided and next skb in the queue is non-devmem:
application receives in the linear buffer.
- MSG_SOCK_DEVMEM not provided and next skb is devmem: application
receives EFAULT.
- MSG_SOCK_DEVMEM not provided and next skb is non-devmem: application
receives in the linear buffer.
My bad on not including some docs about this. The next version should have the commit message beefed up to explain all this, or a docs patch.
I don't understand. We require an elaborate setup to receive devmem cmsgs, why would some random application receive those?
On Tue, Nov 7, 2023 at 10:05 PM Stanislav Fomichev sdf@google.com wrote:
I don't understand. We require an elaborate setup to receive devmem cmsgs, why would some random application receive those?
A TCP socket can receive 'valid TCP packets' from many different sources, especially with BPF hooks...
Think of a bonding setup, packets being mirrored by some switches or even from tc.
Better double check than be sorry.
We have not added a 5th component in the 4-tuple lookups, being "is this socket a devmem one".
A mix of regular/devmem skb is supported.
On 11/07, Eric Dumazet wrote:
On Tue, Nov 7, 2023 at 10:05 PM Stanislav Fomichev sdf@google.com wrote:
I don't understand. We require an elaborate setup to receive devmem cmsgs, why would some random application receive those?
A TCP socket can receive 'valid TCP packets' from many different sources, especially with BPF hooks...
Think of a bonding setup, packets being mirrored by some switches or even from tc.
Better double check than be sorry.
We have not added a 5th component in the 4-tuple lookups, being "is this socket a devmem one".
A mix of regular/devmem skb is supported.
Can we mark a socket as devmem-only? Do we have any use-case for those hybrid setups? Or, let me put it that way: do we expect API callers to handle both linear and non-linear cases correctly? As a consumer of the previous versions of these apis internally, I find all those corner cases confusing :-( Hence trying to understand whether we can make it a bit more rigid and properly defined upstream.
But going back to that MSG_SOCK_DEVMEM flag: if the application is supposed to handle both linear and devmem chunks, why do we need this extra MSG_SOCK_DEVMEM opt-in to signal that it's able to process them? From Mina's reply, it seemed like MSG_SOCK_DEVMEM is there to protect random applications that get a misrouted devmem skb. I don't see how returning EFAULT helps in that case.
On Tue, 7 Nov 2023 14:23:20 -0800 Stanislav Fomichev wrote:
Can we mark a socket as devmem-only? Do we have any use-case for those hybrid setups? Or, let me put it that way: do we expect API callers to handle both linear and non-linear cases correctly? As a consumer of the previous versions of these apis internally, I find all those corner cases confusing :-( Hence trying to understand whether we can make it a bit more rigid and properly defined upstream.
FWIW I'd also prefer to allow mixing. "Some NICs" can decide HDS very flexibly, incl. landing full jumbo frames into the "headers".
There's no sender API today to signal how to mark the data for selective landing, but if Mina already has the rx side written to allow that...
On Tue, 7 Nov 2023 11:53:22 -0800 Mina Almasry wrote:
My bad on not including some docs about this. The next version should have the commit message beefed up to explain all this, or a docs patch.
Yes, please. Would be great to have the user facing interface well explained under Documentation/
On 11/6/23 5:20 PM, Mina Almasry wrote:
The user is free to modify or delete flow steering rules outside of the lifetime of the socket. Technically it's possible for the user to reconfigure flow steering while the socket is simultaneously receiving, and the result will be packets switching from devmem to non-devmem.
Generically, from one page pool to another (i.e., the devmem piece of that statement is not relevant).
On 11/6/23 3:18 PM, Mina Almasry wrote:
@@ -991,7 +993,7 @@ struct sk_buff {
 #if IS_ENABLED(CONFIG_IP_SCTP)
 	__u8			csum_not_inet:1;
 #endif
-
+	__u8			devmem:1;
 #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS)
 	__u16			tc_index;	/* traffic control index */
 #endif
@@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb)
 		__skb_zcopy_downgrade_managed(skb);
 }

+/* Return true if frags in this skb are not readable by the host. */
+static inline bool skb_frags_not_readable(const struct sk_buff *skb)
+{
+	return skb->devmem;
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
What requires that restriction? In all of the uses of skb->devmem and skb_frags_not_readable(), what matters is whether any frag is not readable; if so, the frag list walk or collapse is avoided.
On Mon, Nov 6, 2023 at 3:37 PM David Ahern dsahern@kernel.org wrote:
On 11/6/23 3:18 PM, Mina Almasry wrote:
> @@ -991,7 +993,7 @@ struct sk_buff { > #if IS_ENABLED(CONFIG_IP_SCTP) > __u8 csum_not_inet:1; > #endif > - > + __u8 devmem:1; > #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS) > __u16 tc_index; /* traffic control index */ > #endif > @@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb) > __skb_zcopy_downgrade_managed(skb); > } > > +/* Return true if frags in this skb are not readable by the host. */ > +static inline bool skb_frags_not_readable(const struct sk_buff *skb) > +{ > + return skb->devmem;
bikeshedding: should we also rename 'devmem' sk_buff flag to 'not_readable'? It better communicates the fact that the stack shouldn't dereference the frags (because it has 'devmem' fragments or for some other potential future reason).
+1.
Also, the flag on the skb is an optimization - a high level signal that one or more frags is in unreadable memory. There is no requirement that all of the frags are in the same memory type.
David: maybe there should be such a requirement (that they all are unreadable)? Might be easier to support initially; we can relax later on.
Currently devmem == not_readable, and the restriction is that all the frags in the same skb must be either all readable or all unreadable (all devmem or all non-devmem).
What requires that restriction? In all of the uses of skb->devmem and skb_frags_not_readable() what matters is if any frag is not readable, then frag list walk or collapse is avoided.
Currently only tcp_recvmsg_devmem(), I think. tcp_recvmsg_locked() delegates to tcp_recvmsg_devmem() if skb->devmem, and tcp_recvmsg_devmem() errors out (with a net_err log) if it finds a non-iov frag in the skb. This is done for simplicity, because iovs are given to the user via cmsg, while pages are copied into the linear buffer. I think it would be confusing for the user if we simultaneously copied some data to the linear buffer and gave them devmem cmsgs in the same recvmsg() call.
So, the simplification is:

1. In a single skb, all frags must be devmem or non-devmem, no mixing.
2. In a single recvmsg() call, we only process devmem or non-devmem skbs, no mixing.
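Putting those two rules together, the dispatch amounts to roughly the following sketch at the top of the per-skb handling in tcp_recvmsg_locked() (a paraphrase of the behaviour described above, not a literal hunk from the series):

	if (skb_frags_not_readable(skb)) {
		/* rule 2: devmem skbs are only handled by tcp_recvmsg_devmem(),
		 * and only if the caller opted in with MSG_SOCK_DEVMEM
		 */
		if (!(flags & MSG_SOCK_DEVMEM))
			return -EFAULT;
		return tcp_recvmsg_devmem(sk, skb, offset, msg, len);
	}

	/* otherwise fall through to the regular copy of linear data and
	 * host-memory frags
	 */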
On 11/05, Mina Almasry wrote:
For device memory TCP, we expect the skb headers to be available in host memory for access, and we expect the skb frags to be in device memory and unaccessible to the host. We expect there to be no mixing and matching of device memory frags (unaccessible) with host memory frags (accessible) in the same skb.
Add a skb->devmem flag which indicates whether the frags in this skb are device memory frags or not.
__skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs, and marks the skb as skb->devmem accordingly.
Add checks through the network stack to avoid accessing the frags of devmem skbs and avoid coalescing devmem skbs with non devmem skbs.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
[..]
-	snaplen = skb->len;
+	snaplen = skb_frags_not_readable(skb) ? skb_headlen(skb) : skb->len;

 	res = run_filter(skb, sk, snaplen);
 	if (!res)
@@ -2279,7 +2279,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev,
 		}
 	}

-	snaplen = skb->len;
+	snaplen = skb_frags_not_readable(skb) ? skb_headlen(skb) : skb->len;

 	res = run_filter(skb, sk, snaplen);
 	if (!res)
Not sure it covers 100% of bpf. We might need to double-check bpf_xdp_copy_buf, which has its own non-skb shinfo and frags. And in general, XDP can reference those shinfo frags early... (the XDP part happens before we create an skb with the devmem association.)
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/net/core/datagram.c b/net/core/datagram.c index 176eb5834746..cdd4fb129968 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -425,6 +425,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset, return 0; }
- if (skb_frags_not_readable(skb))
goto short_copy;
- /* Copy paged appendix. Hmm... why does this look so complicated? */ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end;
@@ -616,6 +619,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk, { int frag;
- if (skb_frags_not_readable(skb))
return -EFAULT;
This check ....
- if (msg && msg->msg_ubuf && msg->sg_from_iter) return msg->sg_from_iter(sk, skb, from, length);
... should go here. That allows a custom sg_from_iter to have access to the skb. What matters is not expecting struct page (e.g., refcounting); if the custom iter does not do that then all is well. io_uring's iter does not look at the pages, so all good.
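A sketch of that suggested ordering (current __zerocopy_sg_from_iter() with the check simply moved below the custom-iterator dispatch; an illustration, not a replacement hunk):

int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk,
			    struct sk_buff *skb, struct iov_iter *from,
			    size_t length)
{
	int frag;

	/* let a custom sg_from_iter (e.g. io_uring's) see the skb first;
	 * it may not need readable frags at all
	 */
	if (msg && msg->msg_ubuf && msg->sg_from_iter)
		return msg->sg_from_iter(sk, skb, from, length);

	/* only the generic page-based path below requires readable frags */
	if (skb_frags_not_readable(skb))
		return -EFAULT;

	frag = skb_shinfo(skb)->nr_frags;
	...
}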
diff --git a/net/core/gro.c b/net/core/gro.c index 42d7f6755f32..56046d65386a 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -390,6 +390,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow) { struct skb_shared_info *pinfo = skb_shinfo(skb);
- if (WARN_ON_ONCE(skb_frags_not_readable(skb)))
return;
- BUG_ON(skb->end - skb->tail < grow);
memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow); @@ -411,7 +414,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb) { int grow = skb_gro_offset(skb) - skb_headlen(skb);
- if (grow > 0)
- if (grow > 0 && !skb_frags_not_readable(skb)) gro_pull_from_frag0(skb, grow);
} diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 13eca4fd25e1..f01673ed2eff 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1230,6 +1230,14 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt) struct page *p; u8 *vaddr;
if (skb_frag_is_page_pool_iov(frag)) {
Why skb_frag_is_page_pool_iov here vs skb_frags_not_readable?
On Mon, Nov 6, 2023 at 4:16 PM David Ahern dsahern@kernel.org wrote:
On 11/5/23 7:44 PM, Mina Almasry wrote:
diff --git a/net/core/datagram.c b/net/core/datagram.c index 176eb5834746..cdd4fb129968 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -425,6 +425,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset, return 0; }
if (skb_frags_not_readable(skb))
goto short_copy;
/* Copy paged appendix. Hmm... why does this look so complicated? */ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end;
@@ -616,6 +619,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk, { int frag;
if (skb_frags_not_readable(skb))
return -EFAULT;
This check ....
if (msg && msg->msg_ubuf && msg->sg_from_iter) return msg->sg_from_iter(sk, skb, from, length);
... should go here. That allows a custom sg_from_iter to have access to the skb. What matters is not expecting struct page (e.g., refcounting); if the custom iter does not do that then all is well. io_uring's iter does not look at the pages, so all good.
diff --git a/net/core/gro.c b/net/core/gro.c index 42d7f6755f32..56046d65386a 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -390,6 +390,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow) { struct skb_shared_info *pinfo = skb_shinfo(skb);
if (WARN_ON_ONCE(skb_frags_not_readable(skb)))
return;
BUG_ON(skb->end - skb->tail < grow); memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow);
@@ -411,7 +414,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb) { int grow = skb_gro_offset(skb) - skb_headlen(skb);
if (grow > 0)
if (grow > 0 && !skb_frags_not_readable(skb)) gro_pull_from_frag0(skb, grow);
}
diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 13eca4fd25e1..f01673ed2eff 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1230,6 +1230,14 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt) struct page *p; u8 *vaddr;
if (skb_frag_is_page_pool_iov(frag)) {
Why skb_frag_is_page_pool_iov here vs skb_frags_not_readable?
Seems like a silly choice on my end. I should probably check skb_frags_not_readable() and not kmap any frags in that case. Will do.
From: Mina Almasry
Sent: 06 November 2023 02:44
For device memory TCP, we expect the skb headers to be available in host memory for access, and we expect the skb frags to be in device memory and unaccessible to the host. We expect there to be no mixing and matching of device memory frags (unaccessible) with host memory frags (accessible) in the same skb.
Add a skb->devmem flag which indicates whether the frags in this skb are device memory frags or not.
...
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 1fae276c1353..8fb468ff8115 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -805,6 +805,8 @@ typedef unsigned char *sk_buff_data_t;
  *	@csum_level: indicates the number of consecutive checksums found in
  *		the packet minus one that have been verified as
  *		CHECKSUM_UNNECESSARY (max 3)
+ *	@devmem: indicates that all the fragments in this skb are backed by
+ *		device memory.
  *	@dst_pending_confirm: need to confirm neighbour
  *	@decrypted: Decrypted SKB
  *	@slow_gro: state present at GRO time, slower prepare step required
@@ -991,7 +993,7 @@ struct sk_buff {
 #if IS_ENABLED(CONFIG_IP_SCTP)
 	__u8			csum_not_inet:1;
 #endif
-
+	__u8			devmem:1;
 #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS)
 	__u16			tc_index;	/* traffic control index */
 #endif
@@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb)
 		__skb_zcopy_downgrade_managed(skb);
 }
Doesn't that bloat struct sk_buff? I'm not sure there are any spare bits available. Although CONFIG_NET_SWITCHDEV and CONFIG_NET_SCHED seem to already add padding.
David
In tcp_recvmsg_locked(), detect if the skb being received by the user is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM flag - pass it to tcp_recvmsg_devmem() for custom handling.
tcp_recvmsg_devmem() copies any data in the skb header to the linear buffer, and returns a cmsg to the user indicating the number of bytes returned in the linear buffer.
tcp_recvmsg_devmem() then loops over the unaccessible devmem skb frags, and returns to the user a cmsg_devmem indicating the location of the data in the dmabuf device memory. cmsg_devmem contains this information:
1. the offset into the dmabuf where the payload starts. 'frag_offset'.
2. the size of the frag. 'frag_size'.
3. an opaque token 'frag_token' to return to the kernel when the buffer is to be released.
The pages awaiting freeing are stored in the newly added sk->sk_user_pages, and each page passed to userspace is get_page()'d. This reference is dropped once the userspace indicates that it is done reading this page. All pages are released when the socket is destroyed.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
---
RFC v3: - Fixed issue with put_cmsg() failing silently.
--- include/linux/socket.h | 1 + include/net/page_pool/helpers.h | 9 ++ include/net/sock.h | 2 + include/uapi/asm-generic/socket.h | 5 + include/uapi/linux/uio.h | 6 + net/ipv4/tcp.c | 189 +++++++++++++++++++++++++++++- net/ipv4/tcp_ipv4.c | 7 ++ 7 files changed, 214 insertions(+), 5 deletions(-)
diff --git a/include/linux/socket.h b/include/linux/socket.h index cfcb7e2c3813..fe2b9e2081bb 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -326,6 +326,7 @@ struct ucred { * plain text and require encryption */
+#define MSG_SOCK_DEVMEM 0x2000000 /* Receive devmem skbs as cmsg */ #define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */ #define MSG_SPLICE_PAGES 0x8000000 /* Splice the pages from the iterator in sendmsg() */ #define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */ diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 08f1a2cc70d2..95f4d579cbc4 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -106,6 +106,15 @@ page_pool_iov_dma_addr(const struct page_pool_iov *ppiov) ((dma_addr_t)page_pool_iov_idx(ppiov) << PAGE_SHIFT); }
+static inline unsigned long +page_pool_iov_virtual_addr(const struct page_pool_iov *ppiov) +{ + struct dmabuf_genpool_chunk_owner *owner = page_pool_iov_owner(ppiov); + + return owner->base_virtual + + ((unsigned long)page_pool_iov_idx(ppiov) << PAGE_SHIFT); +} + static inline struct netdev_dmabuf_binding * page_pool_iov_binding(const struct page_pool_iov *ppiov) { diff --git a/include/net/sock.h b/include/net/sock.h index 242590308d64..986d9da6e062 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -353,6 +353,7 @@ struct sk_filter; * @sk_txtime_unused: unused txtime flags * @ns_tracker: tracker for netns reference * @sk_bind2_node: bind node in the bhash2 table + * @sk_user_pages: xarray of pages the user is holding a reference on. */ struct sock { /* @@ -545,6 +546,7 @@ struct sock { struct rcu_head sk_rcu; netns_tracker ns_tracker; struct hlist_node sk_bind2_node; + struct xarray sk_user_pages; };
enum sk_pacing { diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h index 8ce8a39a1e5f..aacb97f16b78 100644 --- a/include/uapi/asm-generic/socket.h +++ b/include/uapi/asm-generic/socket.h @@ -135,6 +135,11 @@ #define SO_PASSPIDFD 76 #define SO_PEERPIDFD 77
+#define SO_DEVMEM_HEADER 98 +#define SCM_DEVMEM_HEADER SO_DEVMEM_HEADER +#define SO_DEVMEM_OFFSET 99 +#define SCM_DEVMEM_OFFSET SO_DEVMEM_OFFSET + #if !defined(__KERNEL__)
#if __BITS_PER_LONG == 64 || (defined(__x86_64__) && defined(__ILP32__)) diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h index 059b1a9147f4..ae94763b1963 100644 --- a/include/uapi/linux/uio.h +++ b/include/uapi/linux/uio.h @@ -20,6 +20,12 @@ struct iovec __kernel_size_t iov_len; /* Must be size_t (1003.1g) */ };
+struct cmsg_devmem { + __u64 frag_offset; + __u32 frag_size; + __u32 frag_token; +}; + /* * UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1) */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 5c6fed52ed0e..fd7f6d7e7671 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -461,6 +461,7 @@ void tcp_init_sock(struct sock *sk)
set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags); sk_sockets_allocated_inc(sk); + xa_init_flags(&sk->sk_user_pages, XA_FLAGS_ALLOC1); } EXPORT_SYMBOL(tcp_init_sock);
@@ -2301,6 +2302,154 @@ static int tcp_inq_hint(struct sock *sk) return inq; }
+/* On error, returns the -errno. On success, returns number of bytes sent to the + * user. May not consume all of @remaining_len. + */ +static int tcp_recvmsg_devmem(const struct sock *sk, const struct sk_buff *skb, + unsigned int offset, struct msghdr *msg, + int remaining_len) +{ + struct cmsg_devmem cmsg_devmem = { 0 }; + unsigned int start; + int i, copy, n; + int sent = 0; + int err = 0; + + do { + start = skb_headlen(skb); + + if (!skb_frags_not_readable(skb)) { + err = -ENODEV; + goto out; + } + + /* Copy header. */ + copy = start - offset; + if (copy > 0) { + copy = min(copy, remaining_len); + + n = copy_to_iter(skb->data + offset, copy, + &msg->msg_iter); + if (n != copy) { + err = -EFAULT; + goto out; + } + + offset += copy; + remaining_len -= copy; + + /* First a cmsg_devmem for # bytes copied to user + * buffer. + */ + memset(&cmsg_devmem, 0, sizeof(cmsg_devmem)); + cmsg_devmem.frag_size = copy; + err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_HEADER, + sizeof(cmsg_devmem), &cmsg_devmem); + if (err || msg->msg_flags & MSG_CTRUNC) { + msg->msg_flags &= ~MSG_CTRUNC; + if (!err) + err = -ETOOSMALL; + goto out; + } + + sent += copy; + + if (remaining_len == 0) + goto out; + } + + /* after that, send information of devmem pages through a + * sequence of cmsg + */ + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + struct page_pool_iov *ppiov; + u64 frag_offset; + u32 user_token; + int end; + + /* skb_frags_not_readable() should indicate that ALL the + * frags in this skb are unreadable page_pool_iovs. + * We're checking for that flag above, but also check + * individual pages here. If the tcp stack is not + * setting skb->devmem correctly, we still don't want to + * crash here when accessing pgmap or priv below. + */ + if (!skb_frag_page_pool_iov(frag)) { + net_err_ratelimited("Found non-devmem skb with page_pool_iov"); + err = -ENODEV; + goto out; + } + + ppiov = skb_frag_page_pool_iov(frag); + end = start + skb_frag_size(frag); + copy = end - offset; + + if (copy > 0) { + copy = min(copy, remaining_len); + + frag_offset = page_pool_iov_virtual_addr(ppiov) + + skb_frag_off(frag) + offset - + start; + cmsg_devmem.frag_offset = frag_offset; + cmsg_devmem.frag_size = copy; + err = xa_alloc((struct xarray *)&sk->sk_user_pages, + &user_token, frag->bv_page, + xa_limit_31b, GFP_KERNEL); + if (err) + goto out; + + cmsg_devmem.frag_token = user_token; + + offset += copy; + remaining_len -= copy; + + err = put_cmsg(msg, SOL_SOCKET, + SO_DEVMEM_OFFSET, + sizeof(cmsg_devmem), + &cmsg_devmem); + if (err || msg->msg_flags & MSG_CTRUNC) { + msg->msg_flags &= ~MSG_CTRUNC; + xa_erase((struct xarray *)&sk->sk_user_pages, + user_token); + if (!err) + err = -ETOOSMALL; + goto out; + } + + page_pool_iov_get_many(ppiov, 1); + + sent += copy; + + if (remaining_len == 0) + goto out; + } + start = end; + } + + if (!remaining_len) + goto out; + + /* if remaining_len is not satisfied yet, we need to go to the + * next frag in the frag_list to satisfy remaining_len. + */ + skb = skb_shinfo(skb)->frag_list ?: skb->next; + + offset = offset - start; + } while (skb); + + if (remaining_len) { + err = -EFAULT; + goto out; + } + +out: + if (!sent) + sent = err; + + return sent; +} + /* * This routine copies from a sock struct into the user buffer. 
* @@ -2314,6 +2463,7 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len, int *cmsg_flags) { struct tcp_sock *tp = tcp_sk(sk); + int last_copied_devmem = -1; /* uninitialized */ int copied = 0; u32 peek_seq; u32 *seq; @@ -2491,15 +2641,44 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len, }
if (!(flags & MSG_TRUNC)) { - err = skb_copy_datagram_msg(skb, offset, msg, used); - if (err) { - /* Exception. Bailout! */ - if (!copied) - copied = -EFAULT; + if (last_copied_devmem != -1 && + last_copied_devmem != skb->devmem) break; + + if (!skb->devmem) { + err = skb_copy_datagram_msg(skb, offset, msg, + used); + if (err) { + /* Exception. Bailout! */ + if (!copied) + copied = -EFAULT; + break; + } + } else { + if (!(flags & MSG_SOCK_DEVMEM)) { + /* skb->devmem skbs can only be received + * with the MSG_SOCK_DEVMEM flag. + */ + if (!copied) + copied = -EFAULT; + + break; + } + + err = tcp_recvmsg_devmem(sk, skb, offset, msg, + used); + if (err <= 0) { + if (!copied) + copied = -EFAULT; + + break; + } + used = err; } }
+ last_copied_devmem = skb->devmem; + WRITE_ONCE(*seq, *seq + used); copied += used; len -= used; diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 7583d4e34c8c..4cc8be892f05 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -2299,6 +2299,13 @@ static int tcp_v4_init_sock(struct sock *sk) void tcp_v4_destroy_sock(struct sock *sk) { struct tcp_sock *tp = tcp_sk(sk); + struct page *page; + unsigned long index; + + xa_for_each(&sk->sk_user_pages, index, page) + page_pool_page_put_many(page, 1); + + xa_destroy(&sk->sk_user_pages);
trace_tcp_destroy_sock(sk);
On 11/05, Mina Almasry wrote:
In tcp_recvmsg_locked(), detect if the skb being received by the user is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM flag - pass it to tcp_recvmsg_devmem() for custom handling.
tcp_recvmsg_devmem() copies any data in the skb header to the linear buffer, and returns a cmsg to the user indicating the number of bytes returned in the linear buffer.
tcp_recvmsg_devmem() then loops over the inaccessible devmem skb frags, and returns to the user a cmsg_devmem indicating the location of the data in the dmabuf device memory. cmsg_devmem contains this information:
- the offset into the dmabuf where the payload starts. 'frag_offset'.
- the size of the frag. 'frag_size'.
- an opaque token 'frag_token' to return to the kernel when the buffer
is to be released.
The pages awaiting freeing are stored in the newly added sk->sk_user_pages, and each page passed to userspace is get_page()'d. This reference is dropped once the userspace indicates that it is done reading this page. All pages are released when the socket is destroyed.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
RFC v3:
- Fixed issue with put_cmsg() failing silently.
include/linux/socket.h | 1 + include/net/page_pool/helpers.h | 9 ++ include/net/sock.h | 2 + include/uapi/asm-generic/socket.h | 5 + include/uapi/linux/uio.h | 6 + net/ipv4/tcp.c | 189 +++++++++++++++++++++++++++++- net/ipv4/tcp_ipv4.c | 7 ++ 7 files changed, 214 insertions(+), 5 deletions(-)
diff --git a/include/linux/socket.h b/include/linux/socket.h index cfcb7e2c3813..fe2b9e2081bb 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -326,6 +326,7 @@ struct ucred { * plain text and require encryption */ +#define MSG_SOCK_DEVMEM 0x2000000 /* Receive devmem skbs as cmsg */
Sharing the feedback that I've been providing internally on the public list:
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?) or bite the bullet and switch to io_uring.
I was also suggesting to do it via netlink initially, but it's probably a bit slow for this purpose, idk.
On Mon, Nov 6, 2023 at 10:44 AM Stanislav Fomichev sdf@google.com wrote:
On 11/05, Mina Almasry wrote:
In tcp_recvmsg_locked(), detect if the skb being received by the user is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM flag - pass it to tcp_recvmsg_devmem() for custom handling.
tcp_recvmsg_devmem() copies any data in the skb header to the linear buffer, and returns a cmsg to the user indicating the number of bytes returned in the linear buffer.
tcp_recvmsg_devmem() then loops over the inaccessible devmem skb frags, and returns to the user a cmsg_devmem indicating the location of the data in the dmabuf device memory. cmsg_devmem contains this information:
- the offset into the dmabuf where the payload starts. 'frag_offset'.
- the size of the frag. 'frag_size'.
- an opaque token 'frag_token' to return to the kernel when the buffer
is to be released.
The pages awaiting freeing are stored in the newly added sk->sk_user_pages, and each page passed to userspace is get_page()'d. This reference is dropped once the userspace indicates that it is done reading this page. All pages are released when the socket is destroyed.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
RFC v3:
- Fixed issue with put_cmsg() failing silently.
include/linux/socket.h | 1 + include/net/page_pool/helpers.h | 9 ++ include/net/sock.h | 2 + include/uapi/asm-generic/socket.h | 5 + include/uapi/linux/uio.h | 6 + net/ipv4/tcp.c | 189 +++++++++++++++++++++++++++++- net/ipv4/tcp_ipv4.c | 7 ++ 7 files changed, 214 insertions(+), 5 deletions(-)
diff --git a/include/linux/socket.h b/include/linux/socket.h index cfcb7e2c3813..fe2b9e2081bb 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -326,6 +326,7 @@ struct ucred { * plain text and require encryption */
+#define MSG_SOCK_DEVMEM 0x2000000 /* Receive devmem skbs as cmsg */
Sharing the feedback that I've been providing internally on the public list:
There may have been a miscommunication. I don't recall hearing this specific feedback from you, at least in the last few months. Sorry if it seemed like I'm ignoring feedback :)
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other. As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
I was also suggesting to do it via netlink initially, but it's probably a bit slow for this purpose, idk.
Yeah, I hear netlink is reserved for control paths and is inappropriate for data path, but I'll let folks correct me if wrong.
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other. As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
I also disagree that we need to replace a standard socket interface with something "faster", in quotes.
This interface is not the bottleneck to the target workload.
Replacing the synchronous sockets interface with something more performant for workloads where it is, is an orthogonal challenge. However we do that, I think that traditional sockets should continue to be supported.
The feature may already even work with io_uring, as both recvmsg with cmsg and setsockopt have io_uring support now.
On 11/06, Willem de Bruijn wrote:
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other. As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
I also disagree that we need to replace a standard socket interface with something "faster", in quotes.
This interface is not the bottleneck to the target workload.
Replacing the synchronous sockets interface with something more performant for workloads where it is, is an orthogonal challenge. However we do that, I think that traditional sockets should continue to be supported.
The feature may already even work with io_uring, as both recvmsg with cmsg and setsockopt have io_uring support now.
I'm not really concerned with faster. I would prefer something cleaner :-)
Or maybe we should just have it documented. With some kind of path towards beautiful world where we can create dynamic queues..
On Mon, Nov 6, 2023 at 2:34 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Willem de Bruijn wrote:
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other. As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
I also disagree that we need to replace a standard socket interface with something "faster", in quotes.
This interface is not the bottleneck to the target workload.
Replacing the synchronous sockets interface with something more performant for workloads where it is, is an orthogonal challenge. However we do that, I think that traditional sockets should continue to be supported.
The feature may already even work with io_uring, as both recvmsg with cmsg and setsockopt have io_uring support now.
I'm not really concerned with faster. I would prefer something cleaner :-)
Or maybe we should just have it documented. With some kind of path towards beautiful world where we can create dynamic queues..
I suppose we just disagree on the elegance of the API.
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
This is analogous to the MSG_ZEROCOPY notification mechanism from kernel to user.
The synchronous socket syscall interface can be replaced by something asynchronous like io_uring. This already works today? Whatever asynchronous ring-based API would be selected, io_uring or otherwise, I think the concise notification encoding would remain as is.
Since this is an operation on a socket, I find a setsockopt the fitting interface.
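For comparison, the MSG_ZEROCOPY notification mechanism referred to above already reports completions as [lower, upper] ranges read off the socket error queue. A minimal IPv4 reader, shown only to illustrate that range encoding (error handling omitted, SOL_IP aliased to IPPROTO_IP in case the libc headers lack it):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/errqueue.h>

#ifndef SOL_IP
#define SOL_IP IPPROTO_IP
#endif

static void read_zerocopy_completions(int fd)
{
        char ctrl[256];
        struct msghdr msg = {
                .msg_control = ctrl,
                .msg_controllen = sizeof(ctrl),
        };
        struct sock_extended_err serr;
        struct cmsghdr *cm;

        if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
                return;

        for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
                if (cm->cmsg_level != SOL_IP || cm->cmsg_type != IP_RECVERR)
                        continue;
                memcpy(&serr, CMSG_DATA(cm), sizeof(serr));
                if (serr.ee_origin != SO_EE_ORIGIN_ZEROCOPY)
                        continue;
                /* Buffers for sends ee_info..ee_data may be reused; the
                 * devmem token ranges play the same role in the opposite
                 * (user -> kernel) direction.
                 */
                printf("completed sends %u..%u\n", serr.ee_info, serr.ee_data);
        }
}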
On Mon, Nov 6, 2023 at 2:56 PM Willem de Bruijn willemdebruijn.kernel@gmail.com wrote:
On Mon, Nov 6, 2023 at 2:34 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Willem de Bruijn wrote:
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other. As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
I also disagree that we need to replace a standard socket interface with something "faster", in quotes.
This interface is not the bottleneck to the target workload.
Replacing the synchronous sockets interface with something more performant for workloads where it is, is an orthogonal challenge. However we do that, I think that traditional sockets should continue to be supported.
The feature may already even work with io_uring, as both recvmsg with cmsg and setsockopt have io_uring support now.
I'm not really concerned with faster. I would prefer something cleaner :-)
Or maybe we should just have it documented. With some kind of path towards beautiful world where we can create dynamic queues..
I suppose we just disagree on the elegance of the API.
Yeah, I might be overly sensitive to the apis that use get/setsockopt for something more involved than setting a flag. Probably because I know that bpf will (unnecessarily) trigger on these :-D I had to implement that bpf "bypass" (or fastpath) for TCP_ZEROCOPY_RECEIVE and it looks like this token recycle might also benefit from something similar.
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
Tangential: should tokens be u64? Otherwise we can't have more than 4gb unacknowledged. Or that's a reasonable constraint?
This is analogous to the MSG_ZEROCOPY notification mechanism from kernel to user.
The synchronous socket syscall interface can be replaced by something asynchronous like io_uring. This already works today? Whatever asynchronous ring-based API would be selected, io_uring or otherwise, I think the concise notification encoding would remain as is.
Since this is an operation on a socket, I find a setsockopt the fitting interface.
On 11/6/23 4:32 PM, Stanislav Fomichev wrote:
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
Tangential: should tokens be u64? Otherwise we can't have more than 4gb unacknowledged. Or that's a reasonable constraint?
Was thinking the same and with bits reserved for a dmabuf id to allow multiple dmabufs in a single rx queue (future extension, but build the capability in now). e.g., something like a 37b offset (128GB dmabuf size), 19b length (large GRO), 8b dmabuf id (lots of dmabufs to a queue).
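As a purely illustrative sketch of the bit budget suggested here (none of these names or widths are part of the series), a 64-bit token split 37/19/8 could be packed and unpacked like this:

#include <stdint.h>

#define TOKEN_OFFSET_BITS 37    /* up to 128GB of dmabuf offset */
#define TOKEN_LEN_BITS    19    /* up to 512KB frag length for large GRO */
#define TOKEN_ID_BITS      8    /* up to 256 dmabufs per rx queue */

static inline uint64_t token_pack(uint64_t off, uint32_t len, uint8_t id)
{
        return (off & ((1ULL << TOKEN_OFFSET_BITS) - 1)) |
               ((uint64_t)(len & ((1U << TOKEN_LEN_BITS) - 1))
                        << TOKEN_OFFSET_BITS) |
               ((uint64_t)id << (TOKEN_OFFSET_BITS + TOKEN_LEN_BITS));
}

static inline uint64_t token_offset(uint64_t t)
{
        return t & ((1ULL << TOKEN_OFFSET_BITS) - 1);
}

static inline uint32_t token_len(uint64_t t)
{
        return (t >> TOKEN_OFFSET_BITS) & ((1U << TOKEN_LEN_BITS) - 1);
}

static inline uint8_t token_id(uint64_t t)
{
        return t >> (TOKEN_OFFSET_BITS + TOKEN_LEN_BITS);
}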
On Mon, Nov 6, 2023 at 3:55 PM David Ahern dsahern@kernel.org wrote:
On 11/6/23 4:32 PM, Stanislav Fomichev wrote:
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
Tangential: should tokens be u64? Otherwise we can't have more than 4gb unacknowledged. Or that's a reasonable constraint?
Was thinking the same and with bits reserved for a dmabuf id to allow multiple dmabufs in a single rx queue (future extension, but build the capability in now). e.g., something like a 37b offset (128GB dmabuf size), 19b length (large GRO), 8b dmabuf id (lots of dmabufs to a queue).
Agreed. Converting to 64b now sounds like a good forward looking revision.
On Mon, Nov 6, 2023 at 4:03 PM Willem de Bruijn willemdebruijn.kernel@gmail.com wrote:
On Mon, Nov 6, 2023 at 3:55 PM David Ahern dsahern@kernel.org wrote:
On 11/6/23 4:32 PM, Stanislav Fomichev wrote:
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
Tangential: should tokens be u64? Otherwise we can't have more than 4gb unacknowledged. Or that's a reasonable constraint?
Was thinking the same and with bits reserved for a dmabuf id to allow multiple dmabufs in a single rx queue (future extension, but build the capability in now). e.g., something like a 37b offset (128GB dmabuf size), 19b length (large GRO), 8b dmabuf id (lots of dmabufs to a queue).
Agreed. Converting to 64b now sounds like a good forward looking revision.
The concept of IDing a dma-buf came up in a couple of different contexts. First, in the context of us giving the dma-buf ID to the user on recvmsg() to tell the user the data is in this specific dma-buf. The second context is here, to bind dma-bufs with multiple user-visible IDs to an rx queue.
My issue here is that I don't see anything in the struct dma_buf that can practically serve as an ID:
https://elixir.bootlin.com/linux/v6.6-rc7/source/include/linux/dma-buf.h#L30...
Actually, from the userspace, only the name of the dma-buf seems queryable. That's only unique if the user sets it as such. The dmabuf FD can't serve as an ID. For our use case we need to support 1 process doing the dma-buf bind via netlink, sharing the dma-buf FD to another process, and that process receives the data. In this case the FDs shown by the 2 processes may be different. Converting to 64b is a trivial change I can make now, but I'm not sure how to ID these dma-bufs. Suggestions welcome. I'm not sure the dma-buf guys will allow adding a new ID + APIs to query said dma-buf ID.
-- Thanks, Mina
On 11/7/23 4:55 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 4:03 PM Willem de Bruijn willemdebruijn.kernel@gmail.com wrote:
On Mon, Nov 6, 2023 at 3:55 PM David Ahern dsahern@kernel.org wrote:
On 11/6/23 4:32 PM, Stanislav Fomichev wrote:
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
Tangential: should tokens be u64? Otherwise we can't have more than 4gb unacknowledged. Or that's a reasonable constraint?
Was thinking the same and with bits reserved for a dmabuf id to allow multiple dmabufs in a single rx queue (future extension, but build the capability in now). e.g., something like a 37b offset (128GB dmabuf size), 19b length (large GRO), 8b dmabuf id (lots of dmabufs to a queue).
Agreed. Converting to 64b now sounds like a good forward looking revision.
The concept of IDing a dma-buf came up in a couple of different contexts. First, in the context of us giving the dma-buf ID to the user on recvmsg() to tell the user the data is in this specific dma-buf. The second context is here, to bind dma-bufs with multiple user-visible IDs to an rx queue.
My issue here is that I don't see anything in the struct dma_buf that can practically serve as an ID:
https://elixir.bootlin.com/linux/v6.6-rc7/source/include/linux/dma-buf.h#L30...
Actually, from the userspace, only the name of the dma-buf seems queryable. That's only unique if the user sets it as such. The dmabuf FD can't serve as an ID. For our use case we need to support 1 process doing the dma-buf bind via netlink, sharing the dma-buf FD to another process, and that process receives the data. In this case the FDs shown by the 2 processes may be different. Converting to 64b is a trivial change I can make now, but I'm not sure how to ID these dma-bufs. Suggestions welcome. I'm not sure the dma-buf guys will allow adding a new ID + APIs to query said dma-buf ID.
The API can be unique to this usage: e.g., add a dmabuf id to the netlink API. Userspace manages the ids (tells the kernel what value to use with an instance); the kernel validates that no two dmabufs have the same id and then returns the value here.
On Tue, Nov 7, 2023 at 4:01 PM David Ahern dsahern@kernel.org wrote:
On 11/7/23 4:55 PM, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 4:03 PM Willem de Bruijn willemdebruijn.kernel@gmail.com wrote:
On Mon, Nov 6, 2023 at 3:55 PM David Ahern dsahern@kernel.org wrote:
On 11/6/23 4:32 PM, Stanislav Fomichev wrote:
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
Tangential: should tokens be u64? Otherwise we can't have more than 4gb unacknowledged. Or that's a reasonable constraint?
Was thinking the same and with bits reserved for a dmabuf id to allow multiple dmabufs in a single rx queue (future extension, but build the capability in now). e.g., something like a 37b offset (128GB dmabuf size), 19b length (large GRO), 8b dmabuf id (lots of dmabufs to a queue).
Agreed. Converting to 64b now sounds like a good forward looking revision.
The concept of IDing a dma-buf came up in a couple of different contexts. First, in the context of us giving the dma-buf ID to the user on recvmsg() to tell the user the data is in this specific dma-buf. The second context is here, to bind dma-bufs with multiple user-visible IDs to an rx queue.
My issue here is that I don't see anything in the struct dma_buf that can practically serve as an ID:
https://elixir.bootlin.com/linux/v6.6-rc7/source/include/linux/dma-buf.h#L30...
Actually, from the userspace, only the name of the dma-buf seems queryable. That's only unique if the user sets it as such. The dmabuf FD can't serve as an ID. For our use case we need to support 1 process doing the dma-buf bind via netlink, sharing the dma-buf FD to another process, and that process receives the data. In this case the FDs shown by the 2 processes may be different. Converting to 64b is a trivial change I can make now, but I'm not sure how to ID these dma-bufs. Suggestions welcome. I'm not sure the dma-buf guys will allow adding a new ID + APIs to query said dma-buf ID.
The API can be unique to this usage: e.g., add a dmabuf id to the netlink API. Userspace manages the ids (tells the kernel what value to use with an instance); the kernel validates that no two dmabufs have the same id and then returns the value here.
Seems reasonable, will do.
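A rough kernel-side sketch of the scheme described above, with hypothetical names (netdev_bind_dmabuf(), bound_dmabufs) used only for illustration; the only real dependency is xa_insert() returning -EBUSY for a duplicate index:

#include <linux/xarray.h>
#include <linux/gfp.h>
#include <linux/types.h>

static DEFINE_XARRAY(bound_dmabufs);    /* user-chosen id -> binding */

/* Userspace picks 'user_id' in the netlink bind request; the kernel only
 * has to enforce that no two live bindings share an id.
 */
static int netdev_bind_dmabuf(u32 user_id, void *binding)
{
        return xa_insert(&bound_dmabufs, user_id, binding, GFP_KERNEL);
}

static void netdev_unbind_dmabuf(u32 user_id)
{
        xa_erase(&bound_dmabufs, user_id);
}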
On Wed, Nov 8, 2023 at 7:36 AM Edward Cree ecree.xilinx@gmail.com wrote:
On 06/11/2023 21:17, Stanislav Fomichev wrote:
I guess I'm just wondering whether other people have any suggestions here. Not sure Jonathan's way was better, but we fundamentally have two queues between the kernel and the userspace:
- userspace receiving tokens (recvmsg + magical flag)
- userspace refilling tokens (setsockopt + magical flag)
So having some kind of shared memory producer-consumer queue feels natural. And using 'classic' socket api here feels like a stretch, idk.
Do 'refilled tokens' (returned memory areas) get used for anything other than subsequent RX?
Hi Ed!
Not really, it's only the subsequent RX.
If not then surely the way to return a memory area in an io_uring idiom is just to post a new read sqe ('RX descriptor') pointing into it, rather than explicitly returning it with setsockopt.
We're interested in using this with regular TCP sockets, not necessarily io_uring. The io_uring interface to devmem TCP may very well use what you suggest and can drop the setsockopt.
(Being async means you can post lots of these, unlike recvmsg(), so you don't need any kernel management to keep the RX queue filled; it can just be all handled by the userland thus simplifying APIs overall.) Or I'm misunderstanding something?
-e
-- Thanks, Mina
On 09/11/2023 02:39, Mina Almasry wrote:
On Wed, Nov 8, 2023 at 7:36 AM Edward Cree ecree.xilinx@gmail.com wrote:
If not then surely the way to return a memory area in an io_uring idiom is just to post a new read sqe ('RX descriptor') pointing into it, rather than explicitly returning it with setsockopt.
We're interested in using this with regular TCP sockets, not necessarily io_uring.
Fair. I just wanted to push against the suggestion upthread that "oh, since io_uring supports setsockopt() we can just ignore it and it'll all magically work later" (paraphrased). If you can keep the "allocate buffers out of a devmem region" and "post RX descriptors built on those buffers" APIs separate (inside the kernel; obviously both triggered by a single call to the setsockopt() uAPI) that'll likely make things simpler for the io_uring interface I describe, which will only want the latter.
-ed
PS: Here's a crazy idea that I haven't thought through at all: what if you allow device memory to be mmap()ed into process address space (obviously with none of r/w/x because it's unreachable), so that your various uAPIs can just operate on pointers (e.g. the setsockopt becomes the madvise it's named after; recvmsg just uses or populates the iovec rather than needing a cmsg). Then if future devices have their memory CXL accessible that can potentially be enabled with no change to the uAPI (userland just starts being able to access the region without faulting). And you can maybe add a semantic flag to recvmsg saying "if you don't use all the buffers in my iovec, keep hold of the rest of them for future incoming traffic, and if I post new buffers with my next recvmsg, add those to the tail of the RXQ rather than replacing the ones you've got". That way you can still have the "userland directly fills the RX ring" behaviour even with TCP sockets.
On 11/9/23 16:07, Edward Cree wrote:
On 09/11/2023 02:39, Mina Almasry wrote:
On Wed, Nov 8, 2023 at 7:36 AM Edward Cree ecree.xilinx@gmail.com wrote:
If not then surely the way to return a memory area in an io_uring idiom is just to post a new read sqe ('RX descriptor') pointing into it, rather than explicitly returning it with setsockopt.
We're interested in using this with regular TCP sockets, not necessarily io_uring.
Fair. I just wanted to push against the suggestion upthread that "oh, since io_uring supports setsockopt() we can just ignore it and it'll all magically work later" (paraphrased).
IMHO, that'd be horrible, but that's why there are io_uring zc rx patches, and we'll be sending an update soon
https://lore.kernel.org/all/20231107214045.2172393-1-dw@davidwei.uk/
If you can keep the "allocate buffers out of a devmem region" and "post RX descriptors built on those buffers" APIs separate (inside the kernel; obviously both triggered by a single call to the setsockopt() uAPI) that'll likely make things simpler for the io_uring interface I describe, which will only want the latter. PS: Here's a crazy idea that I haven't thought through at all: what if you allow device memory to be mmap()ed into process address space (obviously with none of r/w/x because it's unreachable), so that your various uAPIs can just operate on pointers (e.g. the setsockopt becomes the madvise it's named after; recvmsg just uses or populates the iovec rather than needing a cmsg). Then if future devices have their memory CXL accessible that can potentially be enabled with no change to the uAPI (userland just starts being able to access the region without faulting). And you can maybe add a semantic flag to recvmsg saying "if you don't use all the buffers in my iovec, keep hold of the rest of them for future incoming traffic, and if I post new buffers with my next recvmsg, add those to the tail of the RXQ rather than replacing the ones you've got". That way you can still have the "userland directly fills the RX ring" behaviour even with TCP sockets.
On Mon, 2023-11-06 at 14:55 -0800, Willem de Bruijn wrote:
On Mon, Nov 6, 2023 at 2:34 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Willem de Bruijn wrote:
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other. As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
I also disagree that we need to replace a standard socket interface with something "faster", in quotes.
This interface is not the bottleneck to the target workload.
Replacing the synchronous sockets interface with something more performant for workloads where it is, is an orthogonal challenge. However we do that, I think that traditional sockets should continue to be supported.
The feature may already even work with io_uring, as both recvmsg with cmsg and setsockopt have io_uring support now.
I'm not really concerned with faster. I would prefer something cleaner :-)
Or maybe we should just have it documented. With some kind of path towards beautiful world where we can create dynamic queues..
I suppose we just disagree on the elegance of the API.
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
This is analogous to the MSG_ZEROCOPY notification mechanism from kernel to user.
The synchronous socket syscall interface can be replaced by something asynchronous like io_uring. This already works today? Whatever asynchronous ring-based API would be selected, io_uring or otherwise, I think the concise notification encoding would remain as is.
Since this is an operation on a socket, I find a setsockopt the fitting interface.
FWIW, I think sockopt + cmsg is the right API. It would deserve some explicit addition to the documentation, both in the kernel and in the man-pages.
Cheers,
Paolo
On 11/6/23 22:55, Willem de Bruijn wrote:
On Mon, Nov 6, 2023 at 2:34 PM Stanislav Fomichev sdf@google.com wrote:
On 11/06, Willem de Bruijn wrote:
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other. As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
I also disagree that we need to replace a standard socket interface with something "faster", in quotes.
This interface is not the bottleneck to the target workload.
Replacing the synchronous sockets interface with something more performant for workloads where it is, is an orthogonal challenge. However we do that, I think that traditional sockets should continue to be supported.
The feature may already even work with io_uring, as both recvmsg with cmsg and setsockopt have io_uring support now.
I'm not really concerned with faster. I would prefer something cleaner :-)
Or maybe we should just have it documented. With some kind of path towards beautiful world where we can create dynamic queues..
I suppose we just disagree on the elegance of the API.
The concise notification API returns tokens as a range for compression, encoding as two 32-bit unsigned integers start + length. It allows for even further batching by returning multiple such ranges in a single call.
FWIW, nothing prevents io_uring from compressing ranges. The io_uring zc RFC returns {offset, size} as well, though at the moment they would lie in the same page.
This is analogous to the MSG_ZEROCOPY notification mechanism from kernel to user.
The synchronous socket syscall interface can be replaced by something asynchronous like io_uring. This already works today? Whatever
If you mean async io_uring recv, it does work. In short, internally it polls the socket and then calls sock_recvmsg(). There is also a feature that would make it return to polling after sock_recvmsg() and loop like this.
asynchronous ring-based API would be selected, io_uring or otherwise, I think the concise notification encoding would remain as is.
Since this is an operation on a socket, I find a setsockopt the fitting interface.
On 11/6/23 22:34, Stanislav Fomichev wrote:
On 11/06, Willem de Bruijn wrote:
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
Oops, missed the discussion. IMHO shared rings are more elegant here. With that, the app -> kernel buffer return path doesn't need setsockopt(), which would otherwise have to figure out how to return buffers to the pp efficiently, and then potentially some sync on the pp allocation side. It just grabs entries from the ring in the napi context on allocation when necessary. But then you basically get the io_uring zc rx... just saying
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other.
They don't preclude each other, but I wouldn't say they're orthogonal. Similar approaches, some different details. FWIW, we'll be posting a next iteration on top of the pp providers patches soon.
As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
Well, not exactly support of devmem, but true, we definitely want to have io_uring zerocopy, considering all the API differences (while at the same time not duplicating net bits).
I also disagree that we need to replace a standard socket interface with something "faster", in quotes.
This interface is not the bottleneck to the target workload.
Replacing the synchronous sockets interface with something more performant for workloads where it is, is an orthogonal challenge. However we do that, I think that traditional sockets should continue to be supported.
The feature may already even work with io_uring, as both recvmsg with cmsg and setsockopt have io_uring support now.
It should, in theory, but the api wouldn't suit io_uring, internals wouldn't be properly optimised, and we can't use it with some important features like multishot recv because of cmsg.
I'm not really concerned with faster. I would prefer something cleaner :-)
Or maybe we should just have it documented. With some kind of path towards beautiful world where we can create dynamic queues..
On 11/06, Mina Almasry wrote:
On Mon, Nov 6, 2023 at 10:44 AM Stanislav Fomichev sdf@google.com wrote:
On 11/05, Mina Almasry wrote:
In tcp_recvmsg_locked(), detect if the skb being received by the user is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM flag - pass it to tcp_recvmsg_devmem() for custom handling.
tcp_recvmsg_devmem() copies any data in the skb header to the linear buffer, and returns a cmsg to the user indicating the number of bytes returned in the linear buffer.
tcp_recvmsg_devmem() then loops over the inaccessible devmem skb frags, and returns to the user a cmsg_devmem indicating the location of the data in the dmabuf device memory. cmsg_devmem contains this information:
- the offset into the dmabuf where the payload starts. 'frag_offset'.
- the size of the frag. 'frag_size'.
- an opaque token 'frag_token' to return to the kernel when the buffer
is to be released.
The pages awaiting freeing are stored in the newly added sk->sk_user_pages, and each page passed to userspace is get_page()'d. This reference is dropped once the userspace indicates that it is done reading this page. All pages are released when the socket is destroyed.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
RFC v3:
- Fixed issue with put_cmsg() failing silently.
include/linux/socket.h | 1 + include/net/page_pool/helpers.h | 9 ++ include/net/sock.h | 2 + include/uapi/asm-generic/socket.h | 5 + include/uapi/linux/uio.h | 6 + net/ipv4/tcp.c | 189 +++++++++++++++++++++++++++++- net/ipv4/tcp_ipv4.c | 7 ++ 7 files changed, 214 insertions(+), 5 deletions(-)
diff --git a/include/linux/socket.h b/include/linux/socket.h index cfcb7e2c3813..fe2b9e2081bb 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -326,6 +326,7 @@ struct ucred { * plain text and require encryption */
+#define MSG_SOCK_DEVMEM 0x2000000 /* Receive devmem skbs as cmsg */
Sharing the feedback that I've been providing internally on the public list:
There may have been a miscommunication. I don't recall hearing this specific feedback from you, at least in the last few months. Sorry if it seemed like I'm ignoring feedback :)
No worries, there was a thread a long time ago about this whole token interface and whether it should support out-of-order refills, etc.
IMHO, we need a better UAPI to receive the tokens and give them back to the kernel. CMSG + setsockopt(SO_DEVMEM_DONTNEED) get the job done, but look dated and hacky :-(
We should either do some kind of user/kernel shared memory queue to receive/return the tokens (similar to what Jonathan was doing in his proposal?)
I'll take a look at Jonathan's proposal, sorry, I'm not immediately familiar but I wanted to respond :-) But is the suggestion here to build a new kernel-user communication channel primitive for the purpose of passing the information in the devmem cmsg? IMHO that seems like an overkill. Why add 100-200 lines of code to the kernel to add something that can already be done with existing primitives? I don't see anything concretely wrong with cmsg & setsockopt approach, and if we switch to something I'd prefer to switch to an existing primitive for simplicity?
The only other existing primitive to pass data outside of the linear buffer is the MSG_ERRQUEUE that is used for zerocopy. Is that preferred? Any other suggestions or existing primitives I'm not aware of?
I guess I'm just wondering whether other people have any suggestions here. Not sure Jonathan's way was better, but we fundamentally have two queues between the kernel and the userspace: - userspace receiving tokens (recvmsg + magical flag) - userspace refilling tokens (setsockopt + magical flag)
So having some kind of shared memory producer-consumer queue feels natural. And using 'classic' socket api here feels like a stretch, idk.
But maybe I'm overthinking and overcomplicating :-)
or bite the bullet and switch to io_uring.
IMO io_uring & socket support are orthogonal, and one doesn't preclude the other. As you know we like to use sockets and I believe there are issues with io_uring adoption at Google that I'm not familiar with (and could be wrong). I'm interested in exploring io_uring support as a follow up but I think David Wei will be interested in io_uring support as well anyway.
Ack, might be one more reason on our side to adopt io_uring :-p
I was also suggesting to do it via netlink initially, but it's probably a bit slow for this purpose, idk.
Yeah, I hear netlink is reserved for control paths and is inappropriate for data path, but I'll let folks correct me if wrong.
-- Thanks, Mina
On 06/11/2023 21:17, Stanislav Fomichev wrote:
I guess I'm just wondering whether other people have any suggestions here. Not sure Jonathan's way was better, but we fundamentally have two queues between the kernel and the userspace:
- userspace receiving tokens (recvmsg + magical flag)
- userspace refilling tokens (setsockopt + magical flag)
So having some kind of shared memory producer-consumer queue feels natural. And using 'classic' socket api here feels like a stretch, idk.
Do 'refilled tokens' (returned memory areas) get used for anything other than subsequent RX? If not then surely the way to return a memory area in an io_uring idiom is just to post a new read sqe ('RX descriptor') pointing into it, rather than explicitly returning it with setsockopt. (Being async means you can post lots of these, unlike recvmsg(), so you don't need any kernel management to keep the RX queue filled; it can just be all handled by the userland thus simplifying APIs overall.) Or I'm misunderstanding something?
-e
On Sun, 2023-11-05 at 18:44 -0800, Mina Almasry wrote: [...]
+/* On error, returns the -errno. On success, returns number of bytes sent to the
- user. May not consume all of @remaining_len.
- */
+static int tcp_recvmsg_devmem(const struct sock *sk, const struct sk_buff *skb,
unsigned int offset, struct msghdr *msg,
int remaining_len)
+{
- struct cmsg_devmem cmsg_devmem = { 0 };
- unsigned int start;
- int i, copy, n;
- int sent = 0;
- int err = 0;
- do {
start = skb_headlen(skb);
if (!skb_frags_not_readable(skb)) {
As 'skb_frags_not_readable()' is intended to be a possibly wider-scope test than skb->devmem, should the above explicitly test skb->devmem?
err = -ENODEV;
goto out;
}
/* Copy header. */
copy = start - offset;
if (copy > 0) {
copy = min(copy, remaining_len);
n = copy_to_iter(skb->data + offset, copy,
&msg->msg_iter);
if (n != copy) {
err = -EFAULT;
goto out;
}
offset += copy;
remaining_len -= copy;
/* First a cmsg_devmem for # bytes copied to user
* buffer.
*/
memset(&cmsg_devmem, 0, sizeof(cmsg_devmem));
cmsg_devmem.frag_size = copy;
err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_HEADER,
sizeof(cmsg_devmem), &cmsg_devmem);
if (err || msg->msg_flags & MSG_CTRUNC) {
msg->msg_flags &= ~MSG_CTRUNC;
if (!err)
err = -ETOOSMALL;
goto out;
}
sent += copy;
if (remaining_len == 0)
goto out;
}
/* after that, send information of devmem pages through a
* sequence of cmsg
*/
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
struct page_pool_iov *ppiov;
u64 frag_offset;
u32 user_token;
int end;
/* skb_frags_not_readable() should indicate that ALL the
* frags in this skb are unreadable page_pool_iovs.
* We're checking for that flag above, but also check
* individual pages here. If the tcp stack is not
* setting skb->devmem correctly, we still don't want to
* crash here when accessing pgmap or priv below.
*/
if (!skb_frag_page_pool_iov(frag)) {
net_err_ratelimited("Found non-devmem skb with page_pool_iov");
err = -ENODEV;
goto out;
}
ppiov = skb_frag_page_pool_iov(frag);
end = start + skb_frag_size(frag);
copy = end - offset;
if (copy > 0) {
copy = min(copy, remaining_len);
frag_offset = page_pool_iov_virtual_addr(ppiov) +
skb_frag_off(frag) + offset -
start;
cmsg_devmem.frag_offset = frag_offset;
cmsg_devmem.frag_size = copy;
err = xa_alloc((struct xarray *)&sk->sk_user_pages,
&user_token, frag->bv_page,
xa_limit_31b, GFP_KERNEL);
if (err)
goto out;
cmsg_devmem.frag_token = user_token;
offset += copy;
remaining_len -= copy;
err = put_cmsg(msg, SOL_SOCKET,
SO_DEVMEM_OFFSET,
sizeof(cmsg_devmem),
&cmsg_devmem);
if (err || msg->msg_flags & MSG_CTRUNC) {
msg->msg_flags &= ~MSG_CTRUNC;
xa_erase((struct xarray *)&sk->sk_user_pages,
user_token);
if (!err)
err = -ETOOSMALL;
goto out;
}
page_pool_iov_get_many(ppiov, 1);
sent += copy;
if (remaining_len == 0)
goto out;
}
start = end;
}
if (!remaining_len)
goto out;
/* if remaining_len is not satisfied yet, we need to go to the
* next frag in the frag_list to satisfy remaining_len.
*/
skb = skb_shinfo(skb)->frag_list ?: skb->next;
I think at this point the 'skb' is still on the sk receive queue. The above will possibly walk the queue.
Later on, only the current queue tail could possibly be consumed by tcp_recvmsg_locked(). This feels confusing to me?!? Why not limit the loop to only the 'current' skb and its frags?
offset = offset - start;
- } while (skb);
- if (remaining_len) {
err = -EFAULT;
goto out;
- }
+out:
- if (!sent)
sent = err;
- return sent;
+}
/*
 * This routine copies from a sock struct into the user buffer.
@@ -2314,6 +2463,7 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
			      int *cmsg_flags)
{
	struct tcp_sock *tp = tcp_sk(sk);
	int last_copied_devmem = -1; /* uninitialized */
	int copied = 0;
	u32 peek_seq;
	u32 *seq;

@@ -2491,15 +2641,44 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len,
		}

		if (!(flags & MSG_TRUNC)) {
err = skb_copy_datagram_msg(skb, offset, msg, used);
if (err) {
/* Exception. Bailout! */
if (!copied)
copied = -EFAULT;
if (last_copied_devmem != -1 &&
    last_copied_devmem != skb->devmem)
	break;
if (!skb->devmem) {
err = skb_copy_datagram_msg(skb, offset, msg,
used);
if (err) {
/* Exception. Bailout! */
if (!copied)
copied = -EFAULT;
break;
}
} else {
if (!(flags & MSG_SOCK_DEVMEM)) {
/* skb->devmem skbs can only be received
* with the MSG_SOCK_DEVMEM flag.
*/
if (!copied)
copied = -EFAULT;
break;
}
err = tcp_recvmsg_devmem(sk, skb, offset, msg,
used);
if (err <= 0) {
if (!copied)
copied = -EFAULT;
break;
}
used = err;
Minor nit: I personally would find the above more readable with this whole chunk placed in a single helper (e.g. the current tcp_recvmsg_devmem(), renamed to something more appropriate).
Cheers,
Paolo
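[Editorial note: purely as an illustration of the refactor Paolo suggests above, a hedged sketch of what such a helper could look like; the name and exact shape are made up here and are not taken from the series.]

static int tcp_recvmsg_dmabuf_or_copy(struct sock *sk, struct sk_buff *skb,
				      unsigned int offset, struct msghdr *msg,
				      int used, int flags)
{
	/* Non-devmem skbs take the regular copy path. */
	if (!skb->devmem)
		return skb_copy_datagram_msg(skb, offset, msg, used) ?: used;

	/* devmem skbs can only be received with MSG_SOCK_DEVMEM. */
	if (!(flags & MSG_SOCK_DEVMEM))
		return -EFAULT;

	return tcp_recvmsg_devmem(sk, skb, offset, msg, used);
}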
Add an interface for the user to notify the kernel that it is done reading the NET_RX dmabuf pages returned as cmsg. The kernel will drop the reference on the NET_RX pages to make them available for re-use.
Signed-off-by: Willem de Bruijn willemb@google.com Signed-off-by: Kaiyuan Zhang kaiyuanz@google.com Signed-off-by: Mina Almasry almasrymina@google.com
--- include/uapi/asm-generic/socket.h | 1 + include/uapi/linux/uio.h | 4 ++++ net/core/sock.c | 36 +++++++++++++++++++++++++++++++ 3 files changed, 41 insertions(+)
diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h
index aacb97f16b78..eb93b43394d4 100644
--- a/include/uapi/asm-generic/socket.h
+++ b/include/uapi/asm-generic/socket.h
@@ -135,6 +135,7 @@
#define SO_PASSPIDFD 76
#define SO_PEERPIDFD 77

#define SO_DEVMEM_DONTNEED 97
#define SO_DEVMEM_HEADER 98
#define SCM_DEVMEM_HEADER SO_DEVMEM_HEADER
#define SO_DEVMEM_OFFSET 99

diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h
index ae94763b1963..71314bf41590 100644
--- a/include/uapi/linux/uio.h
+++ b/include/uapi/linux/uio.h
@@ -26,6 +26,10 @@ struct cmsg_devmem {
	__u32 frag_token;
};

struct devmemtoken {
	__u32 token_start;
	__u32 token_count;
};

/*
 * UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1)
 */

diff --git a/net/core/sock.c b/net/core/sock.c
index 1d28e3e87970..4ddc6b11d915 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1051,6 +1051,39 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
	return 0;
}

static noinline_for_stack int
sock_devmem_dontneed(struct sock *sk, sockptr_t optval, unsigned int optlen)
{
	struct devmemtoken tokens[128];
	unsigned int num_tokens, i, j;
	int ret;

	if (sk->sk_type != SOCK_STREAM || sk->sk_protocol != IPPROTO_TCP)
		return -EBADF;

	if (optlen % sizeof(struct devmemtoken) || optlen > sizeof(tokens))
		return -EINVAL;

	num_tokens = optlen / sizeof(struct devmemtoken);
	if (copy_from_sockptr(tokens, optval, optlen))
		return -EFAULT;

	ret = 0;
	for (i = 0; i < num_tokens; i++) {
		for (j = 0; j < tokens[i].token_count; j++) {
			struct page *page = xa_erase(&sk->sk_user_pages,
						     tokens[i].token_start + j);

			if (page) {
				page_pool_page_put_many(page, 1);
				ret++;
			}
		}
	}

	return ret;
}

void sockopt_lock_sock(struct sock *sk)
{
	/* When current->bpf_ctx is set, the setsockopt is called from

@@ -1538,6 +1571,9 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
		break;
	}

	case SO_DEVMEM_DONTNEED:
		ret = sock_devmem_dontneed(sk, optval, optlen);
		break;
	default:
		ret = -ENOPROTOOPT;
		break;
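[Editorial note: a minimal sketch of how a receiver is expected to use this option, going by the uapi above and the way the ncdevmem selftest below exercises it; error handling elided.]

/* frag_token comes from a SO_DEVMEM_OFFSET cmsg returned by recvmsg(). */
struct devmemtoken tok = {
	.token_start = frag_token,
	.token_count = 1,
};

/* Per the selftest below, returns the number of frags actually freed
 * (1 here on success).
 */
int freed = setsockopt(sockfd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
		       &tok, sizeof(tok));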
ncdevmem is a devmem TCP netcat. It works similarly to netcat, but it sends and receives data using the devmem TCP APIs. It uses udmabuf as the dmabuf provider. It is compatible with a regular netcat running on a peer, or a ncdevmem running on a peer.
In addition to normal netcat support, ncdevmem has a validation mode, where it sends a specific pattern and validates this pattern on the receiver side to ensure data integrity.
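[Editorial note: as a purely illustrative aside, a sender-side counterpart of that validation could look like the sketch below; the helper is hypothetical and simply mirrors the wrap-around counter that validate_buffer() in the selftest checks for.]

/* Fill 'buf' with the rolling byte pattern the receiver validates against.
 * 'wrap' corresponds to the -v argument passed to the receiving ncdevmem.
 */
static void fill_validation_pattern(unsigned char *buf, size_t len, size_t wrap)
{
	static unsigned char seed = 1;
	size_t i;

	for (i = 0; i < len; i++) {
		buf[i] = seed++;
		if (seed == wrap)
			seed = 0;
	}
}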
Suggested-by: Stanislav Fomichev sdf@google.com Signed-off-by: Mina Almasry almasrymina@google.com
---
RFC v2: - General cleanups (Willem).
--- tools/testing/selftests/net/.gitignore | 1 + tools/testing/selftests/net/Makefile | 5 + tools/testing/selftests/net/ncdevmem.c | 546 +++++++++++++++++++++++++ 3 files changed, 552 insertions(+) create mode 100644 tools/testing/selftests/net/ncdevmem.c
diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore index 2f9d378edec3..b644dbae58b7 100644 --- a/tools/testing/selftests/net/.gitignore +++ b/tools/testing/selftests/net/.gitignore @@ -17,6 +17,7 @@ ipv6_flowlabel ipv6_flowlabel_mgr log.txt msg_zerocopy +ncdevmem nettest psock_fanout psock_snd diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile index b9804ceb9494..6c6e53c70e99 100644 --- a/tools/testing/selftests/net/Makefile +++ b/tools/testing/selftests/net/Makefile @@ -5,6 +5,10 @@ CFLAGS = -Wall -Wl,--no-as-needed -O2 -g CFLAGS += -I../../../../usr/include/ $(KHDR_INCLUDES) # Additional include paths needed by kselftest.h CFLAGS += -I../ +CFLAGS += -I../../../net/ynl/generated/ +CFLAGS += -I../../../net/ynl/lib/ + +LDLIBS += ../../../net/ynl/lib/ynl.a ../../../net/ynl/generated/protos.a
TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh \ rtnetlink.sh xfrm_policy.sh test_blackhole_dev.sh @@ -91,6 +95,7 @@ TEST_PROGS += test_bridge_neigh_suppress.sh TEST_PROGS += test_vxlan_nolocalbypass.sh TEST_PROGS += test_bridge_backup_port.sh TEST_PROGS += fdb_flush.sh +TEST_GEN_FILES += ncdevmem
TEST_FILES := settings
diff --git a/tools/testing/selftests/net/ncdevmem.c b/tools/testing/selftests/net/ncdevmem.c new file mode 100644 index 000000000000..78bc3ad767ca --- /dev/null +++ b/tools/testing/selftests/net/ncdevmem.c @@ -0,0 +1,546 @@ +// SPDX-License-Identifier: GPL-2.0 +#define _GNU_SOURCE +#define __EXPORTED_HEADERS__ + +#include <linux/uio.h> +#include <stdio.h> +#include <stdlib.h> +#include <unistd.h> +#include <stdbool.h> +#include <string.h> +#include <errno.h> +#define __iovec_defined +#include <fcntl.h> +#include <malloc.h> + +#include <arpa/inet.h> +#include <sys/socket.h> +#include <sys/mman.h> +#include <sys/ioctl.h> +#include <sys/syscall.h> + +#include <linux/memfd.h> +#include <linux/if.h> +#include <linux/dma-buf.h> +#include <linux/udmabuf.h> +#include <libmnl/libmnl.h> +#include <linux/types.h> +#include <linux/netlink.h> +#include <linux/genetlink.h> +#include <linux/netdev.h> +#include <time.h> + +#include "netdev-user.h" +#include <ynl.h> + +#define PAGE_SHIFT 12 +#define TEST_PREFIX "ncdevmem" +#define NUM_PAGES 16000 + +#ifndef MSG_SOCK_DEVMEM +#define MSG_SOCK_DEVMEM 0x2000000 +#endif + +/* + * tcpdevmem netcat. Works similarly to netcat but does device memory TCP + * instead of regular TCP. Uses udmabuf to mock a dmabuf provider. + * + * Usage: + * + * * Without validation: + * + * On server: + * ncdevmem -s <server IP> -c <client IP> -f eth1 -n 0000:06:00.0 -l \ + * -p 5201 + * + * On client: + * ncdevmem -s <server IP> -c <client IP> -f eth1 -n 0000:06:00.0 -p 5201 + * + * * With Validation: + * On server: + * ncdevmem -s <server IP> -c <client IP> -l -f eth1 -n 0000:06:00.0 \ + * -p 5202 -v 1 + * + * On client: + * ncdevmem -s <server IP> -c <client IP> -f eth1 -n 0000:06:00.0 -p 5202 \ + * -v 100000 + * + * Note this is compatible with regular netcat. i.e. the sender or receiver can + * be replaced with regular netcat to test the RX or TX path in isolation. 
+ */ + +static char *server_ip = "192.168.1.4"; +static char *client_ip = "192.168.1.2"; +static char *port = "5201"; +static size_t do_validation; +static int queue_num = 15; +static char *ifname = "eth1"; +static char *nic_pci_addr = "0000:06:00.0"; +static unsigned int iterations; + +void print_bytes(void *ptr, size_t size) +{ + unsigned char *p = ptr; + int i; + + for (i = 0; i < size; i++) { + printf("%02hhX ", p[i]); + } + printf("\n"); +} + +void print_nonzero_bytes(void *ptr, size_t size) +{ + unsigned char *p = ptr; + unsigned int i; + + for (i = 0; i < size; i++) + putchar(p[i]); + printf("\n"); +} + +void validate_buffer(void *line, size_t size) +{ + static unsigned char seed = 1; + unsigned char *ptr = line; + int errors = 0; + size_t i; + + for (i = 0; i < size; i++) { + if (ptr[i] != seed) { + fprintf(stderr, + "Failed validation: expected=%u, actual=%u, index=%lu\n", + seed, ptr[i], i); + errors++; + if (errors > 20) + exit(1); + } + seed++; + if (seed == do_validation) + seed = 0; + } + + fprintf(stdout, "Validated buffer\n"); +} + +static void reset_flow_steering(void) +{ + char command[256]; + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple off", + "eth1"); + system(command); + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple on", + "eth1"); + system(command); +} + +static void configure_flow_steering(void) +{ + char command[256]; + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), + "sudo ethtool -N %s flow-type tcp4 src-ip %s dst-ip %s src-port %s dst-port %s queue %d", + ifname, client_ip, server_ip, port, port, queue_num); + system(command); +} + +/* Triggers a driver reset... + * + * The proper way to do this is probably 'ethtool --reset', but I don't have + * that supported on my current test bed. I resort to changing this + * configuration in the driver which also causes a driver reset... 
+ */ +static void trigger_device_reset(void) +{ + char command[256]; + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), + "sudo ethtool --set-priv-flags %s enable-header-split off", + ifname); + system(command); + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), + "sudo ethtool --set-priv-flags %s enable-header-split on", + ifname); + system(command); +} + +static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd, + __u32 *queue_idx, unsigned int n_queue_index, + struct ynl_sock **ys) +{ + struct netdev_bind_rx_req *req = NULL; + struct ynl_error yerr; + int ret = 0; + + *ys = ynl_sock_create(&ynl_netdev_family, &yerr); + if (!*ys) { + fprintf(stderr, "YNL: %s\n", yerr.msg); + return -1; + } + + if (ynl_subscribe(*ys, "mgmt")) + goto err_close; + + req = netdev_bind_rx_req_alloc(); + netdev_bind_rx_req_set_ifindex(req, ifindex); + netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd); + __netdev_bind_rx_req_set_queues(req, queue_idx, n_queue_index); + + ret = netdev_bind_rx(*ys, req); + if (!ret) { + perror("netdev_bind_rx"); + goto err_close; + } + + netdev_bind_rx_req_free(req); + + return 0; + +err_close: + fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg); + netdev_bind_rx_req_free(req); + ynl_sock_destroy(*ys); + return -1; +} + +static void create_udmabuf(int *devfd, int *memfd, int *buf, size_t dmabuf_size) +{ + struct udmabuf_create create; + int ret; + + *devfd = open("/dev/udmabuf", O_RDWR); + if (*devfd < 0) { + fprintf(stderr, + "%s: [skip,no-udmabuf: Unable to access DMA " + "buffer device file]\n", + TEST_PREFIX); + exit(70); + } + + *memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING); + if (*memfd < 0) { + printf("%s: [skip,no-memfd]\n", TEST_PREFIX); + exit(72); + } + + ret = fcntl(*memfd, F_ADD_SEALS, F_SEAL_SHRINK); + if (ret < 0) { + printf("%s: [skip,fcntl-add-seals]\n", TEST_PREFIX); + exit(73); + } + + ret = ftruncate(*memfd, dmabuf_size); + if (ret == -1) { + printf("%s: [FAIL,memfd-truncate]\n", TEST_PREFIX); + exit(74); + } + + memset(&create, 0, sizeof(create)); + + create.memfd = *memfd; + create.offset = 0; + create.size = dmabuf_size; + *buf = ioctl(*devfd, UDMABUF_CREATE, &create); + if (*buf < 0) { + printf("%s: [FAIL, create udmabuf]\n", TEST_PREFIX); + exit(75); + } +} + +int do_server(void) +{ + char ctrl_data[sizeof(int) * 20000]; + size_t non_page_aligned_frags = 0; + struct sockaddr_in client_addr; + struct sockaddr_in server_sin; + size_t page_aligned_frags = 0; + int devfd, memfd, buf, ret; + size_t total_received = 0; + bool is_devmem = false; + char *buf_mem = NULL; + struct ynl_sock *ys; + size_t dmabuf_size; + char iobuf[819200]; + char buffer[256]; + int socket_fd; + int client_fd; + size_t i = 0; + int opt = 1; + + dmabuf_size = getpagesize() * NUM_PAGES; + + create_udmabuf(&devfd, &memfd, &buf, dmabuf_size); + + __u32 *queue_idx = malloc(sizeof(__u32) * 2); + + queue_idx[0] = 14; + queue_idx[1] = 15; + if (bind_rx_queue(3 /* index for eth1 */, buf, queue_idx, 2, &ys)) { + fprintf(stderr, "Failed to bind\n"); + exit(1); + } + + buf_mem = mmap(NULL, dmabuf_size, PROT_READ | PROT_WRITE, MAP_SHARED, + buf, 0); + if (buf_mem == MAP_FAILED) { + perror("mmap()"); + exit(1); + } + + /* Need to trigger the NIC to reallocate its RX pages, otherwise the + * bind doesn't take effect. 
+ */ + trigger_device_reset(); + + sleep(1); + + reset_flow_steering(); + configure_flow_steering(); + + server_sin.sin_family = AF_INET; + server_sin.sin_port = htons(atoi(port)); + + ret = inet_pton(server_sin.sin_family, server_ip, &server_sin.sin_addr); + if (socket < 0) { + printf("%s: [FAIL, create socket]\n", TEST_PREFIX); + exit(79); + } + + socket_fd = socket(server_sin.sin_family, SOCK_STREAM, 0); + if (socket < 0) { + printf("%s: [FAIL, create socket]\n", TEST_PREFIX); + exit(76); + } + + ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &opt, + sizeof(opt)); + if (ret) { + printf("%s: [FAIL, set sock opt]: %s\n", TEST_PREFIX, + strerror(errno)); + exit(76); + } + ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &opt, + sizeof(opt)); + if (ret) { + printf("%s: [FAIL, set sock opt]: %s\n", TEST_PREFIX, + strerror(errno)); + exit(76); + } + ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, + sizeof(opt)); + if (ret) { + printf("%s: [FAIL, set sock opt]: %s\n", TEST_PREFIX, + strerror(errno)); + exit(76); + } + + printf("binding to address %s:%d\n", server_ip, + ntohs(server_sin.sin_port)); + + ret = bind(socket_fd, &server_sin, sizeof(server_sin)); + if (ret) { + printf("%s: [FAIL, bind]: %s\n", TEST_PREFIX, strerror(errno)); + exit(76); + } + + ret = listen(socket_fd, 1); + if (ret) { + printf("%s: [FAIL, listen]: %s\n", TEST_PREFIX, + strerror(errno)); + exit(76); + } + + socklen_t client_addr_len = sizeof(client_addr); + + inet_ntop(server_sin.sin_family, &server_sin.sin_addr, buffer, + sizeof(buffer)); + printf("Waiting or connection on %s:%d\n", buffer, + ntohs(server_sin.sin_port)); + client_fd = accept(socket_fd, &client_addr, &client_addr_len); + + inet_ntop(client_addr.sin_family, &client_addr.sin_addr, buffer, + sizeof(buffer)); + printf("Got connection from %s:%d\n", buffer, + ntohs(client_addr.sin_port)); + + while (1) { + struct iovec iov = { .iov_base = iobuf, + .iov_len = sizeof(iobuf) }; + struct cmsg_devmem *cmsg_devmem = NULL; + struct dma_buf_sync sync = { 0 }; + struct cmsghdr *cm = NULL; + struct msghdr msg = { 0 }; + struct devmemtoken token; + ssize_t ret; + + is_devmem = false; + printf("\n\n"); + + msg.msg_iov = &iov; + msg.msg_iovlen = 1; + msg.msg_control = ctrl_data; + msg.msg_controllen = sizeof(ctrl_data); + ret = recvmsg(client_fd, &msg, MSG_SOCK_DEVMEM); + printf("recvmsg ret=%ld\n", ret); + if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) { + continue; + } + if (ret < 0) { + perror("recvmsg"); + continue; + } + if (ret == 0) { + printf("client exited\n"); + goto cleanup; + } + + i++; + for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) { + if (cm->cmsg_level != SOL_SOCKET || + (cm->cmsg_type != SCM_DEVMEM_OFFSET && + cm->cmsg_type != SCM_DEVMEM_HEADER)) { + fprintf(stdout, "skipping non-devmem cmsg\n"); + continue; + } + + cmsg_devmem = (struct cmsg_devmem *)CMSG_DATA(cm); + is_devmem = true; + + if (cm->cmsg_type == SCM_DEVMEM_HEADER) { + /* TODO: process data copied from skb's linear + * buffer. + */ + fprintf(stdout, + "SCM_DEVMEM_HEADER. 
" + "cmsg_devmem->frag_size=%u\n", + cmsg_devmem->frag_size); + + continue; + } + + token.token_start = cmsg_devmem->frag_token; + token.token_count = 1; + + total_received += cmsg_devmem->frag_size; + printf("received frag_page=%llu, in_page_offset=%llu," + " frag_offset=%llu, frag_size=%u, token=%u" + " total_received=%lu\n", + cmsg_devmem->frag_offset >> PAGE_SHIFT, + cmsg_devmem->frag_offset % getpagesize(), + cmsg_devmem->frag_offset, cmsg_devmem->frag_size, + cmsg_devmem->frag_token, total_received); + + if (cmsg_devmem->frag_size % getpagesize()) + non_page_aligned_frags++; + else + page_aligned_frags++; + + sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_START; + ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync); + + if (do_validation) + validate_buffer( + ((unsigned char *)buf_mem) + + cmsg_devmem->frag_offset, + cmsg_devmem->frag_size); + else + print_nonzero_bytes( + ((unsigned char *)buf_mem) + + cmsg_devmem->frag_offset, + cmsg_devmem->frag_size); + + sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_END; + ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync); + + ret = setsockopt(client_fd, SOL_SOCKET, + SO_DEVMEM_DONTNEED, &token, + sizeof(token)); + if (ret != 1) { + perror("SO_DEVMEM_DONTNEED not enough tokens"); + exit(1); + } + } + if (!is_devmem) + printf("flow steering error\n"); + + printf("total_received=%lu\n", total_received); + } + + fprintf(stdout, "%s: ok\n", TEST_PREFIX); + + fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n", + page_aligned_frags, non_page_aligned_frags); + + fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n", + page_aligned_frags, non_page_aligned_frags); + +cleanup: + + munmap(buf_mem, dmabuf_size); + close(client_fd); + close(socket_fd); + close(buf); + close(memfd); + close(devfd); + ynl_sock_destroy(ys); + trigger_device_reset(); + + return 0; +} + +int main(int argc, char *argv[]) +{ + int is_server = 0, opt; + + while ((opt = getopt(argc, argv, "ls:c:p:v:q:f:n:i:")) != -1) { + switch (opt) { + case 'l': + is_server = 1; + break; + case 's': + server_ip = optarg; + break; + case 'c': + client_ip = optarg; + break; + case 'p': + port = optarg; + break; + case 'v': + do_validation = atoll(optarg); + break; + case 'q': + queue_num = atoi(optarg); + break; + case 'f': + ifname = optarg; + break; + case 'n': + nic_pci_addr = optarg; + break; + case 'i': + iterations = atoll(optarg); + break; + case '?': + printf("unknown option: %c\n", optopt); + break; + } + } + + for (; optind < argc; optind++) { + printf("extra arguments: %s\n", argv[optind]); + } + + if (is_server) + return do_server(); + + return 0; +}
On Sun, 2023-11-05 at 18:44 -0800, Mina Almasry wrote:
@@ -91,6 +95,7 @@ TEST_PROGS += test_bridge_neigh_suppress.sh TEST_PROGS += test_vxlan_nolocalbypass.sh TEST_PROGS += test_bridge_backup_port.sh TEST_PROGS += fdb_flush.sh +TEST_GEN_FILES += ncdevmem
I guess we want something added to TEST_PROGS, too ;)
TEST_FILES := settings

diff --git a/tools/testing/selftests/net/ncdevmem.c b/tools/testing/selftests/net/ncdevmem.c
new file mode 100644
index 000000000000..78bc3ad767ca
--- /dev/null
+++ b/tools/testing/selftests/net/ncdevmem.c
@@ -0,0 +1,546 @@
// SPDX-License-Identifier: GPL-2.0
#define _GNU_SOURCE
#define __EXPORTED_HEADERS__

#include <linux/uio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdbool.h>
#include <string.h>
#include <errno.h>
#define __iovec_defined
#include <fcntl.h>
#include <malloc.h>

#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>

#include <linux/memfd.h>
#include <linux/if.h>
#include <linux/dma-buf.h>
#include <linux/udmabuf.h>
#include <libmnl/libmnl.h>
#include <linux/types.h>
#include <linux/netlink.h>
#include <linux/genetlink.h>
#include <linux/netdev.h>
#include <time.h>

#include "netdev-user.h"
#include <ynl.h>

#define PAGE_SHIFT 12
#define TEST_PREFIX "ncdevmem"
#define NUM_PAGES 16000

#ifndef MSG_SOCK_DEVMEM
#define MSG_SOCK_DEVMEM 0x2000000
#endif
/*
 * tcpdevmem netcat. Works similarly to netcat but does device memory TCP
 * instead of regular TCP. Uses udmabuf to mock a dmabuf provider.
 *
 * Usage:
 *
 *	Without validation:
 *
 *	On server:
 *	ncdevmem -s <server IP> -c <client IP> -f eth1 -n 0000:06:00.0 -l \
 *		 -p 5201
 *
 *	On client:
 *	ncdevmem -s <server IP> -c <client IP> -f eth1 -n 0000:06:00.0 -p 5201
 *
 *	With Validation:
 *	On server:
 *	ncdevmem -s <server IP> -c <client IP> -l -f eth1 -n 0000:06:00.0 \
 *		 -p 5202 -v 1
 *
 *	On client:
 *	ncdevmem -s <server IP> -c <client IP> -f eth1 -n 0000:06:00.0 -p 5202 \
 *		 -v 100000
 *
 * Note this is compatible with regular netcat. i.e. the sender or receiver can
 * be replaced with regular netcat to test the RX or TX path in isolation.
 */
static char *server_ip = "192.168.1.4";
static char *client_ip = "192.168.1.2";
static char *port = "5201";
static size_t do_validation;
static int queue_num = 15;
static char *ifname = "eth1";
static char *nic_pci_addr = "0000:06:00.0";
static unsigned int iterations;
+void print_bytes(void *ptr, size_t size) +{
- unsigned char *p = ptr;
- int i;
- for (i = 0; i < size; i++) {
printf("%02hhX ", p[i]);
- }
- printf("\n");
+}
+void print_nonzero_bytes(void *ptr, size_t size) +{
- unsigned char *p = ptr;
- unsigned int i;
- for (i = 0; i < size; i++)
putchar(p[i]);
- printf("\n");
+}
+void validate_buffer(void *line, size_t size) +{
- static unsigned char seed = 1;
- unsigned char *ptr = line;
- int errors = 0;
- size_t i;
- for (i = 0; i < size; i++) {
if (ptr[i] != seed) {
fprintf(stderr,
"Failed validation: expected=%u, actual=%u, index=%lu\n",
seed, ptr[i], i);
errors++;
if (errors > 20)
exit(1);
}
seed++;
if (seed == do_validation)
seed = 0;
- }
- fprintf(stdout, "Validated buffer\n");
+}
+static void reset_flow_steering(void) +{
- char command[256];
- memset(command, 0, sizeof(command));
- snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple off",
"eth1");
- system(command);
- memset(command, 0, sizeof(command));
- snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple on",
"eth1");
- system(command);
+}
+static void configure_flow_steering(void) +{
- char command[256];
- memset(command, 0, sizeof(command));
- snprintf(command, sizeof(command),
"sudo ethtool -N %s flow-type tcp4 src-ip %s dst-ip %s src-port %s dst-port %s queue %d",
ifname, client_ip, server_ip, port, port, queue_num);
- system(command);
+}
/* Triggers a driver reset...
 *
 * The proper way to do this is probably 'ethtool --reset', but I don't have
 * that supported on my current test bed. I resort to changing this
 * configuration in the driver which also causes a driver reset...
 */
+static void trigger_device_reset(void) +{
- char command[256];
- memset(command, 0, sizeof(command));
- snprintf(command, sizeof(command),
"sudo ethtool --set-priv-flags %s enable-header-split off",
ifname);
- system(command);
- memset(command, 0, sizeof(command));
- snprintf(command, sizeof(command),
"sudo ethtool --set-priv-flags %s enable-header-split on",
ifname);
- system(command);
+}
+static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd,
__u32 *queue_idx, unsigned int n_queue_index,
struct ynl_sock **ys)
+{
- struct netdev_bind_rx_req *req = NULL;
- struct ynl_error yerr;
- int ret = 0;
- *ys = ynl_sock_create(&ynl_netdev_family, &yerr);
- if (!*ys) {
fprintf(stderr, "YNL: %s\n", yerr.msg);
return -1;
- }
- if (ynl_subscribe(*ys, "mgmt"))
goto err_close;
- req = netdev_bind_rx_req_alloc();
- netdev_bind_rx_req_set_ifindex(req, ifindex);
- netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd);
- __netdev_bind_rx_req_set_queues(req, queue_idx, n_queue_index);
- ret = netdev_bind_rx(*ys, req);
- if (!ret) {
perror("netdev_bind_rx");
goto err_close;
- }
- netdev_bind_rx_req_free(req);
- return 0;
+err_close:
- fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg);
- netdev_bind_rx_req_free(req);
- ynl_sock_destroy(*ys);
- return -1;
+}
+static void create_udmabuf(int *devfd, int *memfd, int *buf, size_t dmabuf_size) +{
- struct udmabuf_create create;
- int ret;
- *devfd = open("/dev/udmabuf", O_RDWR);
- if (*devfd < 0) {
fprintf(stderr,
"%s: [skip,no-udmabuf: Unable to access DMA "
"buffer device file]\n",
TEST_PREFIX);
exit(70);
- }
- *memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
- if (*memfd < 0) {
printf("%s: [skip,no-memfd]\n", TEST_PREFIX);
exit(72);
- }
- ret = fcntl(*memfd, F_ADD_SEALS, F_SEAL_SHRINK);
- if (ret < 0) {
printf("%s: [skip,fcntl-add-seals]\n", TEST_PREFIX);
exit(73);
- }
- ret = ftruncate(*memfd, dmabuf_size);
- if (ret == -1) {
printf("%s: [FAIL,memfd-truncate]\n", TEST_PREFIX);
exit(74);
- }
- memset(&create, 0, sizeof(create));
- create.memfd = *memfd;
- create.offset = 0;
- create.size = dmabuf_size;
- *buf = ioctl(*devfd, UDMABUF_CREATE, &create);
- if (*buf < 0) {
printf("%s: [FAIL, create udmabuf]\n", TEST_PREFIX);
exit(75);
- }
+}
+int do_server(void) +{
- char ctrl_data[sizeof(int) * 20000];
- size_t non_page_aligned_frags = 0;
- struct sockaddr_in client_addr;
- struct sockaddr_in server_sin;
- size_t page_aligned_frags = 0;
- int devfd, memfd, buf, ret;
- size_t total_received = 0;
- bool is_devmem = false;
- char *buf_mem = NULL;
- struct ynl_sock *ys;
- size_t dmabuf_size;
- char iobuf[819200];
- char buffer[256];
- int socket_fd;
- int client_fd;
- size_t i = 0;
- int opt = 1;
- dmabuf_size = getpagesize() * NUM_PAGES;
- create_udmabuf(&devfd, &memfd, &buf, dmabuf_size);
- __u32 *queue_idx = malloc(sizeof(__u32) * 2);
- queue_idx[0] = 14;
- queue_idx[1] = 15;
- if (bind_rx_queue(3 /* index for eth1 */, buf, queue_idx, 2, &ys)) {
^^^^^^^^^^^^^^^^^^^ I guess we want to explicitly fetch the "ifname" index.
Side note: I'm wondering if we could extend some kind of virtual device to allow single host self-tests? e.g. veth, if that would not cause excessive bloat in the device driver?
Cheers,
Paolo
My brain is slightly fried after trying to catch up on the thread for close to 2h. So forgive me if I'm missing something. This applies to all emails I'm about to send :)
On Sun, 5 Nov 2023 18:44:11 -0800 Mina Almasry wrote:
- trigger_device_reset();
The user space must not be responsible for the reset. We can add some temporary "recreate page pools" ndo until the queue API is ready.
But it should not be visible to the user in any way.
And then the kernel can issue the same reset when the netlink socket dies to flush device free lists.
Maybe we should also add an "allow device/all-queues reload" flag to the netlink API to differentiate drivers which can't implement the full queue API later on. We want to make sure the defaults work well in our "target design", rather than at the first stage. And the target design will reload queues one by one.
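[Editorial note: a hedged sketch of how the "reset when the netlink socket dies" part could be wired up, using the generic netlink release notification; netdev_devmem_unbind_by_portid() is a hypothetical helper standing in for whatever the series ends up calling.]

static int devmem_netlink_notify(struct notifier_block *nb,
				 unsigned long state, void *_notify)
{
	struct netlink_notify *notify = _notify;

	if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC)
		return NOTIFY_DONE;

	/* Drop any dmabuf bindings owned by this socket; the driver then
	 * recreates its page pools (or, later, restarts individual queues).
	 */
	netdev_devmem_unbind_by_portid(notify->net, notify->portid);	/* hypothetical */
	return NOTIFY_OK;
}

static struct notifier_block devmem_netlink_nb = {
	.notifier_call = devmem_netlink_notify,
};

/* registered once at init time: netlink_register_notifier(&devmem_netlink_nb); */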
On Fri, Nov 10, 2023 at 3:13 PM Jakub Kicinski kuba@kernel.org wrote:
My brain is slightly fried after trying to catch up on the thread for close to 2h. So forgive me if I'm missing something. This applies to all emails I'm about to send :)
On Sun, 5 Nov 2023 18:44:11 -0800 Mina Almasry wrote:
trigger_device_reset();
The user space must not be responsible for the reset. We can add some temporary "recreate page pools" ndo until the queue API is ready.
Thanks for the clear requirement. I clearly had something different in mind.
Might be dumb suggestions, but instead of creating a new ndo that we maybe end up wanting to deprecate once the queue API is ready, how about we use either of those existing APIs?
void netdev_reset(struct net_device *dev)
{
	int flags = ETH_RESET_ALL;
	int err;

#if 1
	__dev_close(dev);
	err = __dev_open(dev, NULL);
#else
	err = dev->ethtool_ops->reset(dev, &flags);
#endif
}
I've tested both of these to work with GVE on both bind via the netlink API and unbind via the netlink socket close, but I'm not enough of an expert to tell if there is some bad side effect that can happen or something.
But it should not be visible to the user in any way.
And then the kernel can issue the same reset when the netlink socket dies to flush device free lists.
Sure thing, I can do that.
Maybe we should also add an "allow device/all-queues reload" flag to the netlink API to differentiate drivers which can't implement the full queue API later on. We want to make sure the defaults work well in our "target design", rather than at the first stage. And the target design will reload queues one by one.
I can add a flag, yes.
On Fri, 10 Nov 2023 18:27:08 -0800 Mina Almasry wrote:
Thanks for the clear requirement. I clearly had something different in mind.
Might be dumb suggestions, but instead of creating a new ndo that we maybe end up wanting to deprecate once the queue API is ready, how about we use either of those existing APIs?
void netdev_reset(struct net_device *dev)
{
	int flags = ETH_RESET_ALL;
	int err;

#if 1
	__dev_close(dev);
	err = __dev_open(dev, NULL);
#else
	err = dev->ethtool_ops->reset(dev, &flags);
#endif
}
I've tested both of these to work with GVE on both bind via the netlink API and unbind via the netlink socket close, but I'm not enough of an expert to tell if there is some bad side effect that can happen or something.
We generally don't accept drivers doing device reconfiguration with full close() + open() because if the open() fails your machine may be cut off.
There are drivers which do it, but they are either old... or weren't reviewed hard enough.
The driver should allocate memory and whatever else it can without stopping the queues first. Once it has all those, stop the queues, reconfigure with already allocated resources, start queues, free old.
Even without the queue API in place, good drivers do full device reconfig this way. Hence my mind goes towards a new (temporary?) ndo. It will be replaced by the queue API, but whoever implements it for now has to follow this careful reconfig strategy...
j
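[Editorial note: to make the ordering above concrete, a hedged sketch in pseudo-driver C; every type and function name here is hypothetical, it is not any real driver's API.]

struct my_rings;				/* hypothetical ring state */
struct my_priv { struct my_rings *rings; };	/* hypothetical driver private data */

struct my_rings *my_alloc_rings(struct my_priv *priv);
void my_free_rings(struct my_priv *priv, struct my_rings *rings);
void my_stop_queues(struct my_priv *priv);
void my_start_queues(struct my_priv *priv);

static int careful_reconfig(struct my_priv *priv)
{
	struct my_rings *old = priv->rings;
	struct my_rings *new;

	/* 1. Allocate everything we can while the queues are still running. */
	new = my_alloc_rings(priv);
	if (!new)
		return -ENOMEM;		/* failure leaves the device up */

	/* 2. Only now stop the queues and swap in the preallocated resources. */
	my_stop_queues(priv);
	priv->rings = new;
	my_start_queues(priv);

	/* 3. Free the old resources last. */
	my_free_rings(priv, old);
	return 0;
}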
On Fri, Nov 10, 2023 at 6:36 PM Jakub Kicinski kuba@kernel.org wrote:
On Fri, 10 Nov 2023 18:27:08 -0800 Mina Almasry wrote:
Thanks for the clear requirement. I clearly had something different in mind.
Might be dumb suggestions, but instead of creating a new ndo that we maybe end up wanting to deprecate once the queue API is ready, how about we use either of those existing APIs?
void netdev_reset(struct net_device *dev)
{
	int flags = ETH_RESET_ALL;
	int err;

#if 1
	__dev_close(dev);
	err = __dev_open(dev, NULL);
#else
	err = dev->ethtool_ops->reset(dev, &flags);
#endif
}
I've tested both of these to work with GVE on both bind via the netlink API and unbind via the netlink socket close, but I'm not enough of an expert to tell if there is some bad side effect that can happen or something.
We generally don't accept drivers doing device reconfiguration with full close() + open() because if the open() fails your machine may be cut off.
There are drivers which do it, but they are either old... or weren't reviewed hard enough.
The driver should allocate memory and whatever else it can without stopping the queues first. Once it has all those, stop the queues, reconfigure with already allocated resources, start queues, free old.
Even without the queue API in place, good drivers do full device reconfig this way. Hence my mind goes towards a new (temporary?) ndo. It will be replaced by the queue API, but whoever implements it for now has to follow this careful reconfig strategy...
OK, thanks. I managed to get a POC (but only a POC) of the queue API working with GVE. I still need to test it more thoroughly and get a review before I can conclude it's actually a viable path, but it doesn't seem as grim as I originally thought:
https://github.com/torvalds/linux/commit/21b8e108fa88d90870eef53be9320f136b9...
So, seems there are 2 paths forward:
(a) implement a new 'reconfig' ndo carefully as you described above.
(b) implement a minimal version of the queue API as you described here:
    https://lore.kernel.org/netdev/20230815171638.4c057dcd@kernel.org/
Some questions, sorry if basic:
1. For (b), would it be OK to implement a very minimal version of queue_[stop|start]/queue_mem_[alloc|free], which I use for the sole purpose of reposting buffers to an individual queue, and then later whoever picks up your queue API effort (maybe me) extends the implementation to do the rest of the things you described in your email? If not, what is the minimal queue API I can implement and use for devmem TCP?
2. Since this is adding ndo, do I need to implement the ndo for 2 drivers or is GVE sufficient?
-- Thanks, Mina
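[Editorial note: for concreteness, a hedged sketch of what such a minimal per-queue op set might look like, going only by the names used in this thread (queue_[stop|start]/queue_mem_[alloc|free]); the real queue API may well end up shaped differently.]

/* All names and signatures below are speculative. */
struct netdev_queue_mgmt_ops_sketch {
	/* Allocate/free the memory a queue needs, without touching the
	 * running queue; opaque to the core.
	 */
	void *	(*ndo_queue_mem_alloc)(struct net_device *dev, int idx);
	void	(*ndo_queue_mem_free)(struct net_device *dev, void *mem);

	/* Stop/start a single queue; stop hands back the old memory so it
	 * can be freed after the queue is restarted with the new memory.
	 */
	int	(*ndo_queue_stop)(struct net_device *dev, int idx, void **old_mem);
	int	(*ndo_queue_start)(struct net_device *dev, int idx, void *new_mem);
};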
On Sun, 12 Nov 2023 20:08:10 -0800 Mina Almasry wrote:
- For (b), would it be OK to implement a very minimal version of
queue_[stop|start]/queue_mem_[alloc|free], which I use for the sole purpose of reposting buffers to an individual queue, and then later whoever picks up your queue API effort (maybe me) extends the implementation to do the rest of the things you described in your email? If not, what is the minimal queue API I can implement and use for devmem TCP?
Any form of queue API is better than a temporary ndo. IIUC it will not bubble up into uAPI in any way so we can extend/change it later as needed.
- Since this is adding ndo, do I need to implement the ndo for 2
drivers or is GVE sufficient?
One driver is fine, especially if we're doing this instead of the reset hack.
Is there a policy about cc'ing moderated lists on patch sets? I thought there was, but I'm not finding anything under Documentation/. Getting a 'needs moderator approval' response on every message is rather annoying.
linux-kselftest-mirror@lists.linaro.org