On 2024-05-10 16:21, Mina Almasry wrote:
> +void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
> +{
> +        struct netdev_rx_queue *rxq;
> +        unsigned long xa_idx;
> +        unsigned int rxq_idx;
> +
> +        if (!binding)
> +                return;
> +
> +        if (binding->list.next)
> +                list_del(&binding->list);
> +
> +        xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
> +                if (rxq->mp_params.mp_priv == binding) {
> +                        /* We hold the rtnl_lock while binding/unbinding
> +                         * dma-buf, so we can't race with another thread that
> +                         * is also modifying this value. However, the page_pool
> +                         * may read this config while it's creating its
> +                         * rx-queues. WRITE_ONCE() here to match the
> +                         * READ_ONCE() in the page_pool.
> +                         */
> +                        WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> +                        WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> +
> +                        rxq_idx = get_netdev_rx_queue_index(rxq);
> +
> +                        netdev_rx_queue_restart(binding->dev, rxq_idx);
What if netdev_rx_queue_restart() fails? Depending on where it failed, the queue might still be being filled from the struct net_devmem_dmabuf_binding we are unbinding. This is one downside of the current situation, where netdev_rx_queue_restart() needs to do allocations on every restart and can therefore fail this late in the teardown path.

Perhaps fall back to a full reset if an individual queue restart fails? Rough sketch below.
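Something like this (completely untested; dev_close()/dev_open() are only a stand-in for whatever "full reset" primitive fits here, both are callable since we already hold rtnl, and the added "err" local is assumed):

        int err;
        ...
        xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) {
                if (rxq->mp_params.mp_priv != binding)
                        continue;

                WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
                WRITE_ONCE(rxq->mp_params.mp_priv, NULL);

                rxq_idx = get_netdev_rx_queue_index(rxq);

                err = netdev_rx_queue_restart(binding->dev, rxq_idx);
                if (err) {
                        /* The queue may be left half torn down and
                         * still filling from the binding; take the
                         * big hammer so nothing references the
                         * dma-buf once unbind returns.
                         */
                        dev_close(binding->dev);
                        dev_open(binding->dev, NULL);
                        break;
                }
        }

Heavier than a per-queue restart, but it keeps the failure path from leaving a queue attached to a binding we are about to release.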