On 9/27/23 18:07, Liam R. Howlett wrote:
> When merging of the previous VMA fails after the vma iterator has been
> moved to the previous entry, the vma iterator must be advanced to
> ensure the caller takes the correct action on the next vma iterator
> event.  Fix this by adding a vma_next() call to the error path.
>
> Users may experience higher CPU usage, most likely in very low memory
> situations.
Maybe we could say explicitly that before this fix, vma_merge() would be called twice on the same vma, which to the best of our knowledge does not cause anything worse than some wasted cycles (because vma == prev), but it's fragile?
> Link: https://lore.kernel.org/linux-mm/CAG48ez12VN1JAOtTNMY+Y2YnsU45yL5giS-Qn=ejti...
> Closes: https://lore.kernel.org/linux-mm/CAG48ez12VN1JAOtTNMY+Y2YnsU45yL5giS-Qn=ejti...
> Fixes: 18b098af2890 ("vma_merge: set vma iterator to correct position.")
> Cc: stable@vger.kernel.org
> Cc: Jann Horn <jannh@google.com>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
>  mm/mmap.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b56a7f0c9f85..b5bc4ca9bdc4 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -968,14 +968,14 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
>  			vma_pgoff = curr->vm_pgoff;
>  			vma_start_write(curr);
>  			remove = curr;
> -			err = dup_anon_vma(next, curr);
> +			err = dup_anon_vma(next, curr, &anon_dup);
>  		}
>  	}
>
>  	/* Error in anon_vma clone. */
>  	if (err)
> -		return NULL;
> +		goto anon_vma_fail;
>
>  	if (vma_start < vma->vm_start || vma_end > vma->vm_end)
>  		vma_expanded = true;
Are the vma_iter_config() actions done in this part something we don't need to undo?
> @@ -988,7 +988,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
>  	}
>
>  	if (vma_iter_prealloc(vmi, vma))
> -		return NULL;
> +		goto prealloc_fail;
>
>  	init_multi_vma_prep(&vp, vma, adjust, remove, remove2);
>  	VM_WARN_ON(vp.anon_vma && adjust && adjust->anon_vma &&
> @@ -1016,6 +1016,12 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
>  	vma_complete(&vp, vmi, mm);
>  	khugepaged_enter_vma(res, vm_flags);
>  	return res;
> +
> +prealloc_fail:
> +anon_vma_fail:
> +	if (merge_prev)
> +		vma_next(vmi);
> +	return NULL;
>  }
>
>  /*