From: Jakub Kicinski <kuba@kernel.org>
Releasing the DMA mapping will be useful for other types of pages, so factor it out. Make sure the compiler inlines it, to avoid any regressions.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Mina Almasry <almasrymina@google.com>
---
This is implemented by Jakub in his RFC:
https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.c...
I take no credit for the idea or implementation. This is a critical dependency of device memory TCP and thus I'm pulling it into this series to make it reviewable and mergeable.
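For reference, here is a minimal, self-contained userspace sketch of the pattern this patch applies: the DMA-release step is split into a forced-inline helper so the hot return path generates the same code as before, while the helper becomes reusable for other page types. All names in the sketch (fake_pool, fake_page, __release_page_dma_stub, return_page_stub) are hypothetical stand-ins for illustration only; this is not the page_pool API itself.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for struct page_pool / struct page, illustration only. */
struct fake_pool {
	bool dma_mapped;	/* analogous to pool->p.flags & PP_FLAG_DMA_MAP */
};

struct fake_page {
	unsigned long dma_addr;	/* analogous to page_pool_get_dma_addr(page) */
	int pp_info;		/* analogous to the pp metadata cleared on return */
};

/* The factored-out step: forced inline so the hot return path is unchanged,
 * yet the same helper can be reused for other page types later.
 */
static __attribute__((always_inline)) inline
void __release_page_dma_stub(struct fake_pool *pool, struct fake_page *page)
{
	if (!pool->dma_mapped)
		return;		/* was "goto skip_dma_unmap" before the refactor */

	printf("unmapping dma addr 0x%lx\n", page->dma_addr);
	page->dma_addr = 0;
}

/* The original entry point now simply calls the helper first. */
static void return_page_stub(struct fake_pool *pool, struct fake_page *page)
{
	__release_page_dma_stub(pool, page);
	page->pp_info = 0;	/* analogous to page_pool_clear_pp_info(page) */
}

int main(void)
{
	struct fake_pool pool = { .dma_mapped = true };
	struct fake_page page = { .dma_addr = 0x1000, .pp_info = 1 };

	return_page_stub(&pool, &page);
	return 0;
}

The kernel patch relies on __always_inline for the same reason: the helper is only a code-movement refactor, and inlining keeps the fast path of page_pool_return_page() identical to what it was before.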
---
 net/core/page_pool.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index c2e7c9a6efbe..ca1b3b65c9b5 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -548,21 +548,16 @@ s32 page_pool_inflight(const struct page_pool *pool, bool strict)
 	return inflight;
 }
 
-/* Disconnects a page (from a page_pool). API users can have a need
- * to disconnect a page (from a page_pool), to allow it to be used as
- * a regular page (that will eventually be returned to the normal
- * page-allocator via put_page).
- */
-static void page_pool_return_page(struct page_pool *pool, struct page *page)
+static __always_inline
+void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
 {
 	dma_addr_t dma;
-	int count;
 
 	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
 		/* Always account for inflight pages, even if we didn't
 		 * map them
 		 */
-		goto skip_dma_unmap;
+		return;
 
 	dma = page_pool_get_dma_addr(page);
 
@@ -571,7 +566,19 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
 	page_pool_set_dma_addr(page, 0);
-skip_dma_unmap:
+}
+
+/* Disconnects a page (from a page_pool). API users can have a need
+ * to disconnect a page (from a page_pool), to allow it to be used as
+ * a regular page (that will eventually be returned to the normal
+ * page-allocator via put_page).
+ */
+void page_pool_return_page(struct page_pool *pool, struct page *page)
+{
+	int count;
+
+	__page_pool_release_page_dma(pool, page);
+
 	page_pool_clear_pp_info(page);
 
 	/* This may be the last page returned, releasing the pool, so