Currently, unmapped_area() may fail to find an available area even though a suitable gap exists.
For example, we have a vma range like below (each hex digit represents one page of 0x1000):

    0123456789abcdef
      m  m  A m  m
Let's assume start_gap is 0x2000 and stack_guard_gap is 0x1000, and we are now looking for a free area of size 0x1000 within [0x2000, 0xd000].
unmapped_area_topdown() finds the address 0x8000, while unmapped_area() fails.
In the original code, before commit 3499a13168da ("mm/mmap: use maple tree for unmapped_area{_topdown}"), the logic was:

  * find a gap with the total length, including alignment
  * adjust the start address for alignment
What we do now is:

  * find a gap with the total length, including alignment
  * adjust the start address for alignment
  * then compare the remaining range with the total length
It is not necessary to compare with the total length again after the start address has been adjusted; comparing with info->length is sufficient.

Also, it is not correct to subtract 1 here. This may not trigger an issue in the real world, since addresses are usually page aligned.
Fixes: 58c5d0d6d522 ("mm/mmap: regression fix for unmapped_area{_topdown}")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
CC: Liam R. Howlett <Liam.Howlett@Oracle.com>
CC: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
CC: Vlastimil Babka <vbabka@suse.cz>
CC: Jann Horn <jannh@google.com>
CC: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: stable@vger.kernel.org
---
 mm/vma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vma.c b/mm/vma.c
index 3f45a245e31b..d82fdbc710b0 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2668,7 +2668,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 	gap += (info->align_offset - gap) & info->align_mask;
 	tmp = vma_next(&vmi);
 	if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
-		if (vm_start_gap(tmp) < gap + length - 1) {
+		if (vm_start_gap(tmp) < gap + info->length) {
 			low_limit = tmp->vm_end;
 			vma_iter_reset(&vmi);
 			goto retry;