On 08.10.24 09:58, Qi Zheng wrote:
On 2024/10/8 15:52, David Hildenbrand wrote:
On 08.10.24 05:53, Qi Zheng wrote:
Hi Jann,
On 2024/10/8 05:42, Jann Horn wrote:
[...]
diff --git a/mm/mremap.c b/mm/mremap.c
index 24712f8dbb6b..dda09e957a5d 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -238,6 +238,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 {
 	spinlock_t *old_ptl, *new_ptl;
 	struct mm_struct *mm = vma->vm_mm;
+	bool res = false;
 	pmd_t pmd;
 
 	if (!arch_supports_page_table_move())
@@ -277,19 +278,25 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	if (new_ptl != old_ptl)
 		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
 
-	/* Clear the pmd */
 	pmd = *old_pmd;
+	/* Racing with collapse? */
+	if (unlikely(!pmd_present(pmd) || pmd_leaf(pmd)))
Since we already hold the exclusive mmap lock, once a race with collapse has occurred, the pmd entry cannot be refilled with new content by a page fault. So maybe we only need to recheck pmd_none(pmd) here?
My thinking was that it is cheap and more future-proof to check that we really still have a page table here. For example, what if the collapse code is ever changed to replace the page table with the collapsed PMD?
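To spell out the difference, here is a sketch against the hunk above (not the actual patch; it assumes the early-bail path, an out_unlock label that drops the PTLs and returns res, which the rest of the patch presumably adds):

	/* Both PTLs are held here, so *old_pmd can no longer change under us. */
	pmd = *old_pmd;

	/*
	 * Minimal recheck: only catches khugepaged's retract_page_tables()
	 * having cleared the entry out from under us.
	 */
	if (unlikely(pmd_none(pmd)))
		goto out_unlock;

	/*
	 * Recheck as done in the patch: additionally bails out if the entry
	 * is a leaf (e.g. should collapse ever start installing the huge PMD
	 * in place of the page table directly) or is in some other
	 * non-present state.
	 */
	if (unlikely(!pmd_present(pmd) || pmd_leaf(pmd)))
		goto out_unlock;

Either check is correct for today's collapse code; the second one just keeps move_normal_pmd() from silently moving something that is not a page table if that assumption ever changes.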
Ah, makes sense.
Acked-by: Qi Zheng <zhengqi.arch@bytedance.com>
Thanks a lot for your review!