The patch below does not apply to the 4.9-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
Possible dependencies:
12df140f0bdf ("mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages")
db71ef79b59b ("hugetlb: make free_huge_page irq safe")
10c6ec49802b ("hugetlb: change free_pool_huge_page to remove_pool_huge_page")
1121828a0c21 ("hugetlb: call update_and_free_page without hugetlb_lock")
6eb4e88a6d27 ("hugetlb: create remove_hugetlb_page() to separate functionality")
2938396771c8 ("hugetlb: add per-hstate mutex to synchronize user adjustments")
5c8ecb131a65 ("mm/hugetlb_cgroup: remove unnecessary VM_BUG_ON_PAGE in hugetlb_cgroup_migrate()")
5af1ab1d24e0 ("mm/hugetlb: optimize the surplus state transfer code in move_hugetlb_state()")
6c0371490140 ("hugetlb: convert PageHugeFreed to HPageFreed flag")
9157c31186c3 ("hugetlb: convert PageHugeTemporary() to HPageTemporary flag")
8f251a3d5ce3 ("hugetlb: convert page_huge_active() HPageMigratable flag")
d6995da31122 ("hugetlb: use page.private for hugetlb specific page flags")
dbfee5aee7e5 ("hugetlb: fix update_and_free_page contig page struct assumption")
3f1b0162f6f6 ("mm/hugetlb: remove unnecessary VM_BUG_ON_PAGE on putback_active_hugepage()")
1d88433bb008 ("mm/hugetlb: fix use after free when subpool max_hpages accounting is not enabled")
0aa7f3544aaa ("mm/hugetlb: avoid unnecessary hugetlb_acct_memory() call")
ecbf4724e606 ("mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active")
0eb2df2b5629 ("mm: hugetlb: fix a race between isolating and freeing page")
7ffddd499ba6 ("mm: hugetlb: fix a race between freeing and dissolving the page")
585fc0d2871c ("mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 12df140f0bdfae5dcfc81800970dd7f6f632e00c Mon Sep 17 00:00:00 2001
From: Rik van Riel <riel@surriel.com>
Date: Mon, 17 Oct 2022 20:25:05 -0400
Subject: [PATCH] mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages
The h->*_huge_pages counters are protected by the hugetlb_lock, but alloc_huge_page has a corner case where it can decrement the counter outside of the lock.
This could lead to a corrupted value of h->resv_huge_pages, which we have observed on our systems.
Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid a potential race.
Link: https://lkml.kernel.org/r/20221017202505.0e6a4fcd@imladris.surriel.com
Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
Signed-off-by: Rik van Riel <riel@surriel.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Glen McCready <gkmccready@meta.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b586cdd75930..dede0337c07c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2924,11 +2924,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
 	if (!page)
 		goto out_uncharge_cgroup;
+	spin_lock_irq(&hugetlb_lock);
 	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
 		SetHPageRestoreReserve(page);
 		h->resv_huge_pages--;
 	}
-	spin_lock_irq(&hugetlb_lock);
 	list_add(&page->lru, &h->hugepage_activelist);
 	set_page_refcounted(page);
 	/* Fall through */
On 10/26/22 17:07, gregkh@linuxfoundation.org wrote:
From c90324c242ddfabe1cc328bdeda1e2cbf4b77d61 Mon Sep 17 00:00:00 2001
From: Rik van Riel <riel@surriel.com>
Date: Wed, 26 Oct 2022 13:28:04 -0700
Subject: [PATCH] mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages
commit 12df140f0bdfae5dcfc81800970dd7f6f632e00c upstream.
The h->*_huge_pages counters are protected by the hugetlb_lock, but alloc_huge_page has a corner case where it can decrement the counter outside of the lock.
This could lead to a corrupted value of h->resv_huge_pages, which we have observed on our systems.
Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid a potential race.
Link: https://lkml.kernel.org/r/20221017202505.0e6a4fcd@imladris.surriel.com
Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
Signed-off-by: Rik van Riel <riel@surriel.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Glen McCready <gkmccready@meta.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6bed5da45f8f..b9128eaafffe 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2104,11 +2104,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	page = __alloc_buddy_huge_page_with_mpol(h, vma, addr);
 	if (!page)
 		goto out_uncharge_cgroup;
+	spin_lock(&hugetlb_lock);
 	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
 		SetPagePrivate(page);
 		h->resv_huge_pages--;
 	}
-	spin_lock(&hugetlb_lock);
 	list_move(&page->lru, &h->hugepage_activelist);
 	/* Fall through */
 }