The quilt patch titled
     Subject: mm/readahead: limit page cache size in page_cache_ra_order()
has been removed from the -mm tree.  Its filename was
     mm-readahead-limit-page-cache-size-in-page_cache_ra_order.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Gavin Shan <gshan@redhat.com>
Subject: mm/readahead: limit page cache size in page_cache_ra_order()
Date: Thu, 27 Jun 2024 10:39:50 +1000
In page_cache_ra_order(), the maximal order of the page cache to be allocated shouldn't be larger than MAX_PAGECACHE_ORDER.  Otherwise, it's possible the large page cache can't be supported by xarray when the corresponding xarray entry is split.  Before this change, the clamps on new_order were applied only when the incoming order was already below MAX_PAGECACHE_ORDER, so a caller-supplied order at or above that limit passed through unclamped.

For example, HPAGE_PMD_ORDER is 13 on ARM64 when the base page size is 64KB.  A PMD-sized page cache can't be supported by xarray.
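As a back-of-the-envelope check, the order arithmetic can be reproduced in plain userspace C (a standalone sketch; PAGE_SHIFT and PMD_SHIFT below are the arm64 64KB-page values written out by hand, not pulled from kernel headers):

#include <stdio.h>

/*
 * Illustrative constants for arm64 with 64KB base pages (assumed for
 * this sketch): PAGE_SHIFT = 16, and one PMD maps 512MB, so
 * PMD_SHIFT = 29.
 */
#define PAGE_SHIFT	16
#define PMD_SHIFT	29
#define HPAGE_PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)

int main(void)
{
	/* 29 - 16 = 13, the order the changelog refers to */
	printf("HPAGE_PMD_ORDER = %d\n", HPAGE_PMD_ORDER);

	/* a folio of that order spans 2^13 * 64KB = 512MB */
	printf("PMD-sized folio = %lu MB\n",
	       1UL << (HPAGE_PMD_ORDER + PAGE_SHIFT - 20));
	return 0;
}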
Link: https://lkml.kernel.org/r/20240627003953.1262512-3-gshan@redhat.com
Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Darrick J. Wong <djwong@kernel.org>
Cc: Don Dutile <ddutile@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Zhenyu Zhang <zhenyzha@redhat.com>
Cc: <stable@vger.kernel.org>	[5.18+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/readahead.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
--- a/mm/readahead.c~mm-readahead-limit-page-cache-size-in-page_cache_ra_order
+++ a/mm/readahead.c
@@ -503,11 +503,11 @@ void page_cache_ra_order(struct readahea
 
 	limit = min(limit, index + ra->size - 1);
 
-	if (new_order < MAX_PAGECACHE_ORDER) {
+	if (new_order < MAX_PAGECACHE_ORDER)
 		new_order += 2;
-		new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
-		new_order = min_t(unsigned int, new_order, ilog2(ra->size));
-	}
+
+	new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
+	new_order = min_t(unsigned int, new_order, ilog2(ra->size));
 
 	/* See comment in page_cache_ra_unbounded() */
 	nofs = memalloc_nofs_save();
_
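To make the behavioural change concrete, here is a hedged userspace sketch of the clamping logic before and after the hunk above.  min_t() and ilog2() are stand-ins for the kernel helpers, and the limit value 11 is only an assumed example of a MAX_PAGECACHE_ORDER that sits below HPAGE_PMD_ORDER (13), as the companion patch in this series arranges:

#include <stdio.h>

/* userspace stand-ins for the kernel's min_t() and ilog2() helpers */
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))
static unsigned int ilog2(unsigned long n)
{
	unsigned int bit = 0;

	while (n >>= 1)
		bit++;
	return bit;
}

/*
 * Assumed example value below HPAGE_PMD_ORDER (13); the real definition
 * lives in the kernel headers and is adjusted by a companion patch.
 */
#define MAX_PAGECACHE_ORDER	11

/* pre-patch logic: clamps run only when already below the limit */
static unsigned int order_before(unsigned int new_order, unsigned long ra_size)
{
	if (new_order < MAX_PAGECACHE_ORDER) {
		new_order += 2;
		new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
		new_order = min_t(unsigned int, new_order, ilog2(ra_size));
	}
	return new_order;
}

/* post-patch logic: clamps run unconditionally */
static unsigned int order_after(unsigned int new_order, unsigned long ra_size)
{
	if (new_order < MAX_PAGECACHE_ORDER)
		new_order += 2;

	new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
	new_order = min_t(unsigned int, new_order, ilog2(ra_size));
	return new_order;
}

int main(void)
{
	/*
	 * A caller-supplied order of 13 with an 8192-page readahead
	 * window: the old code let 13 through untouched, the new code
	 * caps it at the limit.
	 */
	printf("before: %u\n", order_before(13, 8192));	/* prints 13 */
	printf("after:  %u\n", order_after(13, 8192));	/* prints 11 */
	return 0;
}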
Patches currently in -mm which might be from gshan@redhat.com are