6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Pavel Begunkov <asml.silence@gmail.com>

commit 3a3c6d61577dbb23c09df3e21f6f9eda1ecd634b upstream.
There is no guaranteed alignment for user pointers; however, the calculation of the offset of the first page into a folio after coalescing uses some weird bit mask logic. Get rid of it.
Cc: stable@vger.kernel.org
Reported-by: David Hildenbrand <david@redhat.com>
Fixes: a8edbb424b139 ("io_uring/rsrc: enable multi-hugepage buffer coalescing")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/io-uring/e387b4c78b33f231105a601d84eefd8301f57954.17...
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 io_uring/rsrc.c | 5 ++++-
 io_uring/rsrc.h | 1 +
 2 files changed, 5 insertions(+), 1 deletion(-)
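For reviewers, a minimal sketch of the offset calculation before and after this change (illustrative variable names, not the kernel's; "first_idx" stands for the new first_folio_page_idx field):

	/* old: assumes the folio is mapped folio-size aligned in userspace */
	off = vaddr & ((1UL << folio_shift) - 1);

	/* new: relies only on page-level alignment, which is guaranteed */
	off = (vaddr & ~PAGE_MASK) + (first_idx << PAGE_SHIFT);

The old mask is only correct when the low folio_shift bits of the user address happen to equal the buffer's offset inside the folio; the new form derives that offset from the page index within the folio plus the in-page remainder.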
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -918,6 +918,7 @@ static bool io_try_coalesce_buffer(struc
 		return false;
 
 	data->folio_shift = folio_shift(folio);
+	data->first_folio_page_idx = folio_page_idx(folio, page_array[0]);
 	/*
	 * Check if pages are contiguous inside a folio, and all folios have
	 * the same page count except for the head and tail.
@@ -998,7 +999,9 @@ static int io_sqe_buffer_register(struct
 	if (coalesced)
 		imu->folio_shift = data.folio_shift;
 	refcount_set(&imu->refs, 1);
-	off = (unsigned long) iov->iov_base & ((1UL << imu->folio_shift) - 1);
+	off = (unsigned long)iov->iov_base & ~PAGE_MASK;
+	if (coalesced)
+		off += data.first_folio_page_idx << PAGE_SHIFT;
 	*pimu = imu;
 	ret = 0;
 
--- a/io_uring/rsrc.h
+++ b/io_uring/rsrc.h
@@ -56,6 +56,7 @@ struct io_imu_folio_data {
 	/* For non-head/tail folios, has to be fully included */
 	unsigned int	nr_pages_mid;
 	unsigned int	folio_shift;
+	unsigned long	first_folio_page_idx;
 };
 
 void io_rsrc_node_ref_zero(struct io_rsrc_node *node);
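
As a sanity check, here is a small self-contained user-space sketch (hypothetical addresses and sizes, standard C only, not part of the patch) showing the two formulas diverging when a 2 MiB folio is mapped page-aligned but not folio-aligned:

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		const unsigned long page_shift = 12;    /* 4 KiB pages */
		const unsigned long folio_shift = 21;   /* 2 MiB folio */
		const unsigned long page_size = 1UL << page_shift;

		/*
		 * Folio mapped page-aligned but NOT folio-aligned: its first
		 * page sits at offset 0x1fe000 within a 2 MiB-aligned window.
		 */
		unsigned long map_base = 0x7f00001fe000UL;

		/* Buffer starts 3 pages plus 0x10 bytes into the folio. */
		unsigned long first_folio_page_idx = 3;
		unsigned long vaddr = map_base +
				      (first_folio_page_idx << page_shift) + 0x10;

		unsigned long old_off = vaddr & ((1UL << folio_shift) - 1);
		unsigned long new_off = (vaddr & (page_size - 1)) +
					(first_folio_page_idx << page_shift);

		printf("old off = %#lx, new off = %#lx\n", old_off, new_off);
		assert(new_off == 0x3010);   /* actual offset into the folio */
		assert(old_off != new_off);  /* old mask logic gets it wrong */
		return 0;
	}

With these example values the old mask yields 0x1010 while the buffer really starts 0x3010 bytes into the folio, which is what the page-index-based calculation returns regardless of how the folio happens to be mapped.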