The quilt patch titled
     Subject: mm: shmem: fix incorrect index alignment for within_size policy
has been removed from the -mm tree.  Its filename was
     mm-shmem-fix-incorrect-index-alignment-for-within_size-policy.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Baolin Wang <baolin.wang@linux.alibaba.com>
Subject: mm: shmem: fix incorrect index alignment for within_size policy
Date: Thu, 19 Dec 2024 15:30:08 +0800
When the shmem per-size within_size policy is enabled, using the raw
'order' value to round_up() the index can lead to incorrect i_size
checks, resulting in inappropriately large orders being returned.

Change to use '1 << order' to round_up() the index to fix this issue.
Additionally, add an 'aligned_index' variable to avoid affecting the
index checks.
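To illustrate the arithmetic (a minimal standalone userspace sketch, not
part of the patch; the round_up() macro below is a simplified stand-in
for the kernel's, and the index/order values are made up for the
example):

	#include <stdio.h>

	/* Simplified round_up(); 'align' must be a power of two. */
	#define round_up(x, align) (((x) + (align) - 1) & ~((align) - 1))

	int main(void)
	{
		unsigned long index = 10; /* hypothetical fault index, in pages */
		int order = 4;            /* a 16-page mTHP order */

		/* Buggy: aligns to 'order' pages (4), not the folio size. */
		printf("%lu\n", round_up(index + 1, (unsigned long)order)); /* 12 */

		/* Fixed: aligns to '1 << order' pages (16), the folio size. */
		printf("%lu\n", round_up(index + 1, 1UL << order));         /* 16 */
		return 0;
	}

With the buggy alignment, an order-4 folio faulted at index 10 would pass
the i_size check as soon as i_size covers 12 pages, even though the folio
actually extends to page 16, past i_size.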
Link: https://lkml.kernel.org/r/77d8ef76a7d3d646e9225e9af88a76549a68aab1.173459315...
Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/shmem.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--- a/mm/shmem.c~mm-shmem-fix-incorrect-index-alignment-for-within_size-policy
+++ a/mm/shmem.c
@@ -1689,6 +1689,7 @@ unsigned long shmem_allowable_huge_order
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma ? vma->vm_flags : 0;
+	pgoff_t aligned_index;
 	bool global_huge;
 	loff_t i_size;
 	int order;
@@ -1723,9 +1724,9 @@ unsigned long shmem_allowable_huge_order
 	/* Allow mTHP that will be fully within i_size. */
 	order = highest_order(within_size_orders);
 	while (within_size_orders) {
-		index = round_up(index + 1, order);
+		aligned_index = round_up(index + 1, 1 << order);
 		i_size = round_up(i_size_read(inode), PAGE_SIZE);
-		if (i_size >> PAGE_SHIFT >= index) {
+		if (i_size >> PAGE_SHIFT >= aligned_index) {
 			mask |= within_size_orders;
 			break;
 		}
_
Patches currently in -mm which might be from baolin.wang@linux.alibaba.com are
mm-factor-out-the-order-calculation-into-a-new-helper.patch
mm-shmem-change-shmem_huge_global_enabled-to-return-huge-order-bitmap.patch
mm-shmem-add-large-folio-support-for-tmpfs.patch
mm-shmem-add-a-kernel-command-line-to-change-the-default-huge-policy-for-tmpfs.patch
docs-tmpfs-drop-fadvise-from-the-documentation.patch