On Sat, Dec 9, 2023 at 3:29 PM David Ahern <dsahern@kernel.org> wrote:
On 12/8/23 12:22 PM, Mina Almasry wrote:
On Fri, Dec 8, 2023 at 9:48 AM David Ahern <dsahern@kernel.org> wrote:
On 12/7/23 5:52 PM, Mina Almasry wrote:
...
	xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
		if (rxq->binding == binding) {
			/* We hold the rtnl_lock while binding/unbinding
			 * dma-buf, so we can't race with another thread that
			 * is also modifying this value. However, the driver
			 * may read this config while it's creating its
			 * rx-queues. WRITE_ONCE() here to match the
			 * READ_ONCE() in the driver.
			 */
			WRITE_ONCE(rxq->binding, NULL);
			rxq_idx = get_netdev_rx_queue_index(rxq);
			netdev_restart_rx_queue(binding->dev, rxq_idx);
Blindly restarting a queue when unbinding a dmabuf is heavy handed. If the dmabuf has no outstanding references (i.e., no references in the RxQ), then no restart is needed.
I think I need to stop the queue while binding to a dmabuf for the sake of concurrency, no? I.e. the softirq thread may be delivering a packet, and in parallel a separate thread holds rtnl_lock and tries to bind the dma-buf. At that point the page_pool recreation will race with the driver doing page_pool_alloc_page(). I don't think I can insert a lock into the rx fast path to handle this, no?
I think it depends on the details of how entries are added and removed from the pool. I am behind on the pp details at this point, so I do need to do some homework.
I think it also depends on the details of how to invalidate buffers posted to the rx queue of a particular driver. For GVE, as far as I understand, when the queue is started it allocates a bunch of buffers and posts them to the rx queue. Then it processes the completion descriptors from the hardware and posts new buffers to replace the ones consumed, so any started queue will have posted buffers in it.
As far as I know we also don't support invalidating posted buffers without first stopping the queue, replacing the buffers, and starting again. But I don't think these limitations are specific to GVE; I believe non-RDMA NICs work similarly?
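To make that concrete, here is a pseudocode-level sketch of the buffer life cycle I'm describing for a typical non-RDMA NIC. All of the rxq_* helpers and structs below are made-up names for illustration, not GVE's actual functions:

/* Hypothetical helpers; in a real driver these wrap page_pool_alloc_page(),
 * descriptor ring writes, skb construction + napi_gro_receive(), etc.
 */
struct rx_queue;
struct rx_buffer;

struct rx_buffer *rxq_alloc_buffer(struct rx_queue *rxq);
void rxq_post_buffer(struct rx_queue *rxq, struct rx_buffer *buf);
int rxq_completion_pending(struct rx_queue *rxq);
struct rx_buffer *rxq_take_completed(struct rx_queue *rxq);
void rxq_deliver(struct rx_queue *rxq, struct rx_buffer *buf);

/* Queue start: fill the ring. Once posted, a buffer is owned by the hardware
 * until it writes a completion for it, which is why posted buffers can't be
 * swapped for dmabuf-backed ones in place without stopping the queue.
 */
void rxq_fill(struct rx_queue *rxq, int count)
{
	while (count--) {
		struct rx_buffer *buf = rxq_alloc_buffer(rxq);

		if (!buf)
			break;
		rxq_post_buffer(rxq, buf);
	}
}

/* NAPI poll: consume completions and immediately post replacements. */
int rxq_poll(struct rx_queue *rxq, int budget)
{
	int done = 0;

	while (done < budget && rxq_completion_pending(rxq)) {
		rxq_deliver(rxq, rxq_take_completed(rxq));
		rxq_fill(rxq, 1);
		done++;
	}
	return done;
}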
But I'd stress that what I'm proposing here should be extensible to the capabilities of specific drivers. If someone has a driver that can invalidate posted buffers on the fly, I imagine they can extend the queue API to declare that support to the netstack in a generic way, and the net stack can invalidate buffers from the previous page pool and supply the new one.
Also, this sounds like it requires (a lot) more changes. The page_pool + driver need to report how many pending references there are (with locking so we don't race with incoming packets), and have them reported via an ndo so that we can skip restarting the queue. Implementing the changes isn't a huge issue, but handling the concurrency may be a genuine blocker. Not sure it's worth the upside of not restarting the single rx queue?
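For reference, the kind of check I understand you're suggesting would look roughly like the below in the unbind loop. Note that rxq->page_pool and page_pool_binding_refcount() are hypothetical, they don't exist today, and making that count stable against concurrent rx is exactly the hard part:

		WRITE_ONCE(rxq->binding, NULL);

		/* Hypothetical: only flush the queue if the old binding still
		 * has buffers posted or otherwise in flight.
		 */
		if (page_pool_binding_refcount(rxq->page_pool, binding) > 0) {
			rxq_idx = get_netdev_rx_queue_index(rxq);
			netdev_restart_rx_queue(binding->dev, rxq_idx);
		}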
It has to do with the usability of this overall solution. As I mentioned most ML use cases can (and will want to) use many memory allocations for receiving packets - e.g., allocations per message and receiving multiple messages per socket connection.
We support that by flow steering different flows to different RX queues. Our NICs don't support smart choosing of which page_pool to place the packet in (based on an ntuple rule or what not). So flows that must land in a given dmabuf are flow steered to the queues bound to that dmabuf, and flows that need to land in host memory are not flow steered and are RSS'd to the non-dmabuf-bound queues. This should also be extensible by folks that have NICs with the appropriate support.
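As a concrete userspace illustration of the steering side (not part of this series), something like this inserts an ntuple rule that lands a TCP flow on a specific rx queue; the interface name, port, and queue index are of course just examples:

#include <arpa/inet.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Steer TCP traffic with destination port 'dport' arriving on 'ifname' to rx
 * queue 'queue' (e.g. one that has been bound to a dmabuf). Returns 0 on
 * success.
 */
static int steer_flow_to_queue(const char *ifname, uint16_t dport, uint32_t queue)
{
	struct ethtool_rxnfc nfc = { 0 };
	struct ifreq ifr = { 0 };
	int fd, err;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	nfc.cmd = ETHTOOL_SRXCLSRLINS;
	nfc.fs.flow_type = TCP_V4_FLOW;
	nfc.fs.h_u.tcp_ip4_spec.pdst = htons(dport);
	nfc.fs.m_u.tcp_ip4_spec.pdst = 0xffff;	/* exact match on dst port */
	nfc.fs.ring_cookie = queue;		/* deliver to this rx queue */
	nfc.fs.location = 0;			/* rule slot; valid range is driver-specific */

	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&nfc;

	err = ioctl(fd, SIOCETHTOOL, &ifr);
	close(fd);
	return err;
}

This is roughly what "ethtool -N <dev> flow-type tcp4 dst-port <port> action <queue>" does under the hood.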
		}
	}

	xa_erase(&netdev_dmabuf_bindings, binding->id);
	netdev_dmabuf_binding_put(binding);
}

int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
				struct netdev_dmabuf_binding *binding)
{
	struct netdev_rx_queue *rxq;
	u32 xa_idx;
	int err;

	rxq = __netif_get_rx_queue(dev, rxq_idx);

	if (rxq->binding)
		return -EEXIST;

	err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
		       GFP_KERNEL);
	if (err)
		return err;

	/* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
	 * race with another thread that is also modifying this value. However,
	 * the driver may read this config while it's creating its rx-queues.
	 * WRITE_ONCE() here to match the READ_ONCE() in the driver.
	 */
	WRITE_ONCE(rxq->binding, binding);

	err = netdev_restart_rx_queue(dev, rxq_idx);
Similarly here, when binding a dmabuf to a queue: I was expecting the dmabuf binding to add entries to the page pool for the queue.
To be honest, I think maybe there's a slight disconnect between how you think the page_pool works, and my primitive understanding of how it works. Today, I see a 1:1 mapping between rx-queue and page_pool in the code. I don't see 1:many or many:1 mappings.
I am not referring to 1:N or N:1 for page pool and queues. I am referring to entries within a single page pool for a single Rx queue.
Thanks, glad to hear that. I was afraid there is a miscommunication here.
In theory mapping 1 rx-queue to n page_pools is trivial: the driver can call page_pool_create() multiple times to create n page_pools and decide for incoming packets which one to use.
However, mapping n rx-queues to 1 page_pool seems like a can of worms. I see code in the page_pool that looks to me (and Willem) like it's safe only because the page_pool is used from the same napi context; with an n rx-queues : 1 page_pool mapping, that is no longer true, no? As I understand it there is a long tail of issues to resolve to be able to map 1 page_pool to n queues, and even if they were resolved I'm not sure the maintainers are interested in taking that code.
So, per my humble understanding there is no such thing as "add entries to the page pool for the (specific) queue", the page_pool is always used by 1 queue.
Note that even though this limitation exists, we still support binding 1 dma-buf to multiple queues, because multiple page pools can use the same netdev_dmabuf_binding. I should add that to the docs.
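To illustrate what I mean (this is not code from the series): a driver re-creating its rx queue after netdev_restart_rx_queue() could read the binding with READ_ONCE(), pairing with the WRITE_ONCE() above, and back that queue's page_pool with it. The drv_rx_ring struct and the dmabuf_binding page_pool param below are made up for the sketch; several queues can end up pointing at the same binding, each with its own page_pool:

/* Hypothetical driver-side sketch; struct drv_rx_ring and the
 * pp_params.dmabuf_binding field are illustrative only.
 */
struct drv_rx_ring {
	struct napi_struct napi;
	struct page_pool *page_pool;
	unsigned int size;
};

static int drv_create_rx_page_pool(struct net_device *dev, u32 rxq_idx,
				   struct drv_rx_ring *ring)
{
	struct netdev_rx_queue *rxq = __netif_get_rx_queue(dev, rxq_idx);
	struct netdev_dmabuf_binding *binding = READ_ONCE(rxq->binding);
	struct page_pool_params pp_params = {
		.order		= 0,
		.pool_size	= ring->size,
		.dev		= dev->dev.parent,
		.napi		= &ring->napi,
		/* .dma_dir, .offset, ... as usual */
	};

	if (binding)
		pp_params.dmabuf_binding = binding;	/* hypothetical field */

	ring->page_pool = page_pool_create(&pp_params);
	return PTR_ERR_OR_ZERO(ring->page_pool);
}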
If the pool was previously empty, then maybe the queue needs to be "started" in the sense of creating with h/w or just pushing buffers into the queue and moving the pidx.
I don't think it's enough to add buffers to the page_pool, no? The existing buffers in the page_pool (host mem) must be purged. I think the queue also needs to be stopped so that we don't race with incoming packets and end up with skbs holding both devmem and non-devmem frags (unless you're thinking it becomes a requirement to support that; things are complicated as-is and I think this is a good simplification). And once we already purge the existing buffers & restart the queue, it's little effort to migrate this to line up with Jakub's queue API, which he also wants to use for per-queue configuration & ndo_stop/open.
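Roughly, I picture the restart path ending up something like the sketch below on top of the per-queue stop/start ops Jakub has been proposing; the op names are placeholders, not a merged API, but stop/start gives a natural point to tear down the old host-memory page_pool and create the dmabuf-backed one:

int netdev_restart_rx_queue(struct net_device *dev, int rxq_idx)
{
	int err;

	ASSERT_RTNL();

	/* Quiesce the queue: no more rx on it, posted buffers returned and
	 * the old page_pool destroyed.
	 */
	err = dev->queue_mgmt_ops->ndo_queue_stop(dev, rxq_idx);	/* placeholder op */
	if (err)
		return err;

	/* Re-create the queue; the driver re-reads rxq->binding here and
	 * builds its new page_pool accordingly.
	 */
	return dev->queue_mgmt_ops->ndo_queue_start(dev, rxq_idx);	/* placeholder op */
}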