The patch titled
     Subject: mm/gup: stop leaking pinned pages in low memory conditions
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-gup-stop-leaking-pinned-pages-in-low-memory-conditions.patch
This patch will shortly appear at https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches...
This patch will later appear in the mm-hotfixes-unstable branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated there every 2-3 working days
------------------------------------------------------
From: John Hubbard <jhubbard@nvidia.com>
Subject: mm/gup: stop leaking pinned pages in low memory conditions
Date: Wed, 16 Oct 2024 13:22:42 -0700
If a driver tries to call any of the pin_user_pages*(FOLL_LONGTERM) family of functions, and requests "too many" pages, then the call will erroneously leave pages pinned. This is visible in user space as an actual memory leak.
Repro is trivial: just make enough pin_user_pages(FOLL_LONGTERM) calls to exhaust memory.
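To make the failure mode concrete, here is a minimal sketch of the kind of driver path that hits it (the function and its surroundings are hypothetical, for illustration only; this is not the actual reproducer). The driver's error path reasonably assumes that a failed pin_user_pages() call left nothing pinned, so the partial pins from each failed call were simply lost:

	/*
	 * Hypothetical driver path, for illustration only: pin a large
	 * longterm buffer, use it, then release it. Note that the error
	 * path does not (and should not have to) unpin anything.
	 */
	static int pin_big_buffer(unsigned long uaddr, unsigned long nr_pages)
	{
		struct page **pages;
		long pinned;

		pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return -ENOMEM;

		pinned = pin_user_pages(uaddr, nr_pages,
					FOLL_WRITE | FOLL_LONGTERM, pages);
		if (pinned < 0) {
			/*
			 * Before the fix, an error such as -ENOMEM could
			 * arrive here with pages still pinned internally,
			 * which the caller has no way to release.
			 */
			kvfree(pages);
			return pinned;
		}

		/* ... hand the pinned buffer to the device ... */

		unpin_user_pages(pages, pinned);
		kvfree(pages);
		return 0;
	}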
The root cause of the problem is this sequence, within __gup_longterm_locked():
    __get_user_pages_locked()
    rc = check_and_migrate_movable_pages()
...which gets retried in a loop. The loop error handling is incomplete, clearly due to a somewhat unusual and complicated tri-state error API. But anyway, if -ENOMEM, or in fact, any unexpected error is returned from check_and_migrate_movable_pages(), then __gup_longterm_locked() happily returns the error, while leaving the pages pinned.
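For reference, here is the pre-patch shape of that loop, condensed from __gup_longterm_locked() in mm/gup.c (declarations and the non-FOLL_LONGTERM early return elided). The tri-state contract is: 0 means done, -EAGAIN means retry, anything else is a hard error; it is that last case which escapes the loop with the pages still pinned:

	flags = memalloc_pin_save();
	do {
		nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages,
							  pages, locked,
							  gup_flags);
		if (nr_pinned_pages <= 0) {
			rc = nr_pinned_pages;
			break;
		}

		/* FOLL_LONGTERM implies FOLL_PIN */
		rc = check_and_migrate_movable_pages(nr_pinned_pages, pages);
		/*
		 * Any rc other than 0 or -EAGAIN falls out of the loop
		 * right here, with nr_pinned_pages pages still pinned.
		 */
	} while (rc == -EAGAIN);
	memalloc_pin_restore(flags);

	return rc ? rc : nr_pinned_pages;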
In the failed case, which is an app that requests (via a device driver) 30720000000 bytes to be pinned, and then exits, I see this:
$ grep foll /proc/vmstat
    nr_foll_pin_acquired 7502048
    nr_foll_pin_released 2048
And after applying this patch, it returns to balanced pins:
$ grep foll /proc/vmstat
    nr_foll_pin_acquired 7502048
    nr_foll_pin_released 7502048
Fix this by unpinning the pages that __get_user_pages_locked() has pinned, in such error cases.
Link: https://lkml.kernel.org/r/20241016202242.456953-1-jhubbard@nvidia.com
Fixes: 24a95998e9ba ("mm/gup.c: simplify and fix check_and_migrate_movable_pages() return codes")
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Shigeru Yoshida <syoshida@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/gup.c |   11 +++++++++++
 1 file changed, 11 insertions(+)
--- a/mm/gup.c~mm-gup-stop-leaking-pinned-pages-in-low-memory-conditions
+++ a/mm/gup.c
@@ -2492,6 +2492,17 @@ static long __gup_longterm_locked(struct
 		/* FOLL_LONGTERM implies FOLL_PIN */
 		rc = check_and_migrate_movable_pages(nr_pinned_pages, pages);
+
+		/*
+		 * The __get_user_pages_locked() call happens before we know
+		 * whether it's possible to successfully complete the whole
+		 * operation. To compensate for this, if we get an unexpected
+		 * error (such as -ENOMEM) then we must unpin everything, before
+		 * erroring out.
+		 */
+		if (rc != -EAGAIN && rc != 0)
+			unpin_user_pages(pages, nr_pinned_pages);
+
 	} while (rc == -EAGAIN);
 	memalloc_pin_restore(flags);
 
 	return rc ? rc : nr_pinned_pages;
_
Patches currently in -mm which might be from jhubbard@nvidia.com are
mm-gup-stop-leaking-pinned-pages-in-low-memory-conditions.patch
kaslr-rename-physmem_end-and-physmem_end-to-direct_map_physmem_end.patch