The patch below does not apply to the 6.1-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.1.y
git checkout FETCH_HEAD
git cherry-pick -x 35e351780fa9d8240dd6f7e4f245f9ea37e96c19
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable@vger.kernel.org>' --in-reply-to '2024042320-angled-goldmine-2cd7@gregkh' --subject-prefix 'PATCH 6.1.y' HEAD^..
Possible dependencies:
35e351780fa9 ("fork: defer linking file vma until vma is fully initialized")
d24062914837 ("fork: use __mt_dup() to duplicate maple tree in dup_mmap()")
2820b0f09be9 ("hugetlbfs: close race between MADV_DONTNEED and page fault")
b5df09226450 ("mm: set up vma iterator for vma_iter_prealloc() calls")
f72cf24a8686 ("mm: use vma_iter_clear_gfp() in nommu")
da0892547b10 ("maple_tree: re-introduce entry to mas_preallocate() arguments")
fd892593d44d ("mm: change do_vmi_align_munmap() tracking of VMAs to remove")
5502ea44f5ad ("mm/hugetlb: add page_mask for hugetlb_follow_page_mask()")
dd767aaa2fc8 ("mm/hugetlb: handle FOLL_DUMP well in follow_page_mask()")
1279aa0656bb ("mm: make show_free_areas() static")
408579cd627a ("mm: Update do_vmi_align_munmap() return semantics")
e4bd84c069f2 ("mm: Always downgrade mmap_lock if requested")
43ec8a620b38 ("Merge tag 'unmap-fix-20230629' of git://git.infradead.org/users/dwmw2/linux")
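If a full backport is attempted, one possible (untested) approach is to walk the list above bottom-up, cherry-picking the prerequisites oldest-first before retrying the target commit, e.g.:

git checkout FETCH_HEAD
# untested: 43ec8a620b38 is a merge commit and needs an explicit mainline
# parent (-m 1), or manual handling, if it is picked at all
git cherry-pick -x -m 1 43ec8a620b38
git cherry-pick -x e4bd84c069f2
git cherry-pick -x 408579cd627a
# ... continue up the list through d24062914837 ...
git cherry-pick -x 35e351780fa9d8240dd6f7e4f245f9ea37e96c19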
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 35e351780fa9d8240dd6f7e4f245f9ea37e96c19 Mon Sep 17 00:00:00 2001
From: Miaohe Lin <linmiaohe@huawei.com>
Date: Wed, 10 Apr 2024 17:14:41 +0800
Subject: [PATCH] fork: defer linking file vma until vma is fully initialized
Thorvald reported a WARNING [1]. The root cause is the race below:
 CPU 1                                  CPU 2
 fork                                   hugetlbfs_fallocate
  dup_mmap                               hugetlbfs_punch_hole
   i_mmap_lock_write(mapping);
   vma_interval_tree_insert_after -- Child vma is visible through i_mmap tree.
   i_mmap_unlock_write(mapping);
   hugetlb_dup_vma_private -- Clear vma_lock outside i_mmap_rwsem!
                                         i_mmap_lock_write(mapping);
                                         hugetlb_vmdelete_list
                                          vma_interval_tree_foreach
                                           hugetlb_vma_trylock_write -- Vma_lock is cleared.
    tmp->vm_ops->open -- Alloc new vma_lock outside i_mmap_rwsem!
                                           hugetlb_vma_unlock_write -- Vma_lock is assigned!!!
                                         i_mmap_unlock_write(mapping);
hugetlb_dup_vma_private() and hugetlb_vm_op_open() are called outside the i_mmap_rwsem lock while the vma lock can be used at the same time. Fix this by deferring the linking of the file vma until the vma is fully initialized: vmas must be fully initialized before they can be used.
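For context, the window above can be stressed from userspace by racing fork() against hole punching on a mapped hugetlbfs file. The sketch below is hypothetical (not the reproducer from the report) and assumes a 2 MB hugepage size and a hugetlbfs mount at /dev/hugepages; build with gcc -pthread:

/*
 * Hypothetical stress sketch of the race described above: one thread
 * punches holes in a mapped hugetlbfs file (hugetlbfs_punch_hole) while
 * the main thread forks repeatedly (dup_mmap). Assumes a 2 MB hugepage
 * size and a hugetlbfs mount at /dev/hugepages.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL << 20)

static int fd;

static void *punch_loop(void *arg)
{
	for (;;)	/* keep hugetlbfs_punch_hole busy */
		fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			  0, HPAGE_SIZE);
	return NULL;
}

int main(void)
{
	pthread_t t;

	fd = open("/dev/hugepages/race-test", O_CREAT | O_RDWR, 0600);
	if (fd < 0 || ftruncate(fd, HPAGE_SIZE) < 0)
		return 1;
	/* a hugetlb vma in the mm is all dup_mmap() needs to walk */
	if (mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		 fd, 0) == MAP_FAILED)
		return 1;

	pthread_create(&t, NULL, punch_loop, NULL);
	for (;;) {	/* each fork duplicates the hugetlb vma */
		pid_t pid = fork();

		if (!pid)
			_exit(0);
		if (pid > 0)
			waitpid(pid, NULL, 0);
	}
}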
Link: https://lkml.kernel.org/r/20240410091441.3539905-1-linmiaohe@huawei.com
Fixes: 8d9bfb260814 ("hugetlb: add vma based lock for pmd sharing")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reported-by: Thorvald Natvig <thorvald@google.com>
Closes: https://lore.kernel.org/linux-mm/20240129161735.6gmjsswx62o4pbja@revolver/T/ [1]
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peng Zhang <zhangpeng.00@bytedance.com>
Cc: Tycho Andersen <tandersen@netflix.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/kernel/fork.c b/kernel/fork.c
index 39a5046c2f0b..aebb3e6c96dc 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -714,6 +714,23 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		} else if (anon_vma_fork(tmp, mpnt))
 			goto fail_nomem_anon_vma_fork;
 		vm_flags_clear(tmp, VM_LOCKED_MASK);
+		/*
+		 * Copy/update hugetlb private vma information.
+		 */
+		if (is_vm_hugetlb_page(tmp))
+			hugetlb_dup_vma_private(tmp);
+
+		/*
+		 * Link the vma into the MT. After using __mt_dup(), memory
+		 * allocation is not necessary here, so it cannot fail.
+		 */
+		vma_iter_bulk_store(&vmi, tmp);
+
+		mm->map_count++;
+
+		if (tmp->vm_ops && tmp->vm_ops->open)
+			tmp->vm_ops->open(tmp);
+
 		file = tmp->vm_file;
 		if (file) {
 			struct address_space *mapping = file->f_mapping;
@@ -730,25 +747,9 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 			i_mmap_unlock_write(mapping);
 		}
 
-		/*
-		 * Copy/update hugetlb private vma information.
-		 */
-		if (is_vm_hugetlb_page(tmp))
-			hugetlb_dup_vma_private(tmp);
-
-		/*
-		 * Link the vma into the MT. After using __mt_dup(), memory
-		 * allocation is not necessary here, so it cannot fail.
-		 */
-		vma_iter_bulk_store(&vmi, tmp);
-
-		mm->map_count++;
 		if (!(tmp->vm_flags & VM_WIPEONFORK))
 			retval = copy_page_range(tmp, mpnt);
 
-		if (tmp->vm_ops && tmp->vm_ops->open)
-			tmp->vm_ops->open(tmp);
-
 		if (retval) {
 			mpnt = vma_next(&vmi);
 			goto loop_out;