From: Miaohe Lin <linmiaohe@huawei.com>
[ Upstream commit 1e57ffb6e3fd9583268c6462c4e3853575b21701 ]
Consider the following scenario:
CPU1					CPU2
memunmap_pages
  percpu_ref_exit
    __percpu_ref_exit
      free_percpu(percpu_count);
      /* percpu_count is freed here! */
					get_dev_pagemap
					  xa_load(&pgmap_array, PHYS_PFN(phys))
					  /* pgmap still in the pgmap_array */
					  percpu_ref_tryget_live(&pgmap->ref)
					    if __ref_is_percpu
					      /* __PERCPU_REF_ATOMIC_DEAD not set yet */
					      this_cpu_inc(*percpu_count)
					      /* access freed percpu_count here! */
      ref->percpu_count_ptr = __PERCPU_REF_ATOMIC_DEAD;
      /* too late... */
  pageunmap_range
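For reference, the percpu fast path taken by CPU2 looks roughly like the sketch below (simplified from include/linux/percpu-refcount.h; exact details differ between kernel versions). Once __ref_is_percpu() observes that __PERCPU_REF_ATOMIC_DEAD has not been set yet, the ref is bumped with this_cpu_inc() on the percpu counter, which in the scene above CPU1 has already handed back via free_percpu():

/* Simplified sketch of the fast path, not verbatim kernel source. */
static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
{
	unsigned long __percpu *percpu_count;
	bool ret = false;

	rcu_read_lock();
	if (__ref_is_percpu(ref, &percpu_count)) {
		/* __PERCPU_REF_ATOMIC_DEAD not set yet: take the percpu path... */
		this_cpu_inc(*percpu_count);	/* ...on memory CPU1 may already have freed */
		ret = true;
	} else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
		ret = atomic_long_inc_not_zero(&ref->data->count);
	}
	rcu_read_unlock();

	return ret;
}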
To fix the issue, do percpu_ref_exit() after the pgmap_array is emptied, so we never do percpu_ref_tryget_live() against a percpu_ref that is being freed.
Link: https://lkml.kernel.org/r/20220609121305.2508-1-linmiaohe@huawei.com
Fixes: b7b3c01b1915 ("mm/memremap_pages: support multiple ranges per invocation")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/memremap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memremap.c b/mm/memremap.c
index a638a27d89f5..8d743cbc2964 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -148,10 +148,10 @@ void memunmap_pages(struct dev_pagemap *pgmap)
 	for_each_device_pfn(pfn, pgmap, i)
 		put_page(pfn_to_page(pfn));
 	wait_for_completion(&pgmap->done);
-	percpu_ref_exit(&pgmap->ref);
 
 	for (i = 0; i < pgmap->nr_range; i++)
 		pageunmap_range(pgmap, i);
+	percpu_ref_exit(&pgmap->ref);
 
 	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
 	devmap_managed_enable_put(pgmap);