pr_debug("%s(): memory range at pfn 0x%lx %p is busy, retrying\n",
__func__, pfn, pfn_to_page(pfn));
__func__, pfn, page);
trace_cma_alloc_busy_retry(cma->name, pfn, pfn_to_page(pfn),
Nitpick: I think you already have the page here.
Indeed, forgot to clean that up as well.
count, align);
/* try again with a bit different memory target */
} out:start = bitmap_no + mask + 1;
- *pagep = page;
- if (!ret)
return ret; }*pagep = page;
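Presumably the cleanup is simply to pass the page that is already at hand to the tracepoint as well, i.e. something along the lines of:

 		trace_cma_alloc_busy_retry(cma->name, pfn, page,
 					   count, align);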
@@ -882,7 +892,7 @@ static struct page *__cma_alloc(struct cma *cma, unsigned long count,
 	 */
 	if (page) {
 		for (i = 0; i < count; i++)
-			page_kasan_tag_reset(nth_page(page, i));
+			page_kasan_tag_reset(page + i);
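For context (going from memory on the mm.h definition), nth_page() only differs from plain page + n on SPARSEMEM without SPARSEMEM_VMEMMAP, where it takes a detour through the pfn, roughly:

#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
#else
#define nth_page(page, n)	((page) + (n))
#endif

So dropping it here relies on the allocated range never having non-contiguous struct pages, which is what the discussion below is about.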
Had a look at it; I'm not very familiar with CMA, but the changes look equivalent to what was there before. Not sure that's worth a Reviewed-by tag, but here it is in case you want to add it:
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Thanks!
Just so I can better understand the problem being fixed: I guess you can end up with two consecutive pfns whose associated struct pages are not consecutive when the physical memory region spans two adjacent memory sections, is that correct?
Exactly. Essentially on SPARSEMEM without SPARSEMEM_VMEMMAP it is not guaranteed that
pfn_to_page(pfn + 1) == pfn_to_page(pfn) + 1
when we cross memory section boundaries.
It can be the case for early boot memory, if we happened to allocate consecutive areas from memblock when allocating the per-section memmap (struct pages), but it's not guaranteed.
So we rule out that case.
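If it helps to make that concrete, here is a toy userspace sketch (not kernel code; all names in it are invented for the illustration) of why separately allocated per-section memmaps break the pointer arithmetic at a section boundary:

#include <stdio.h>
#include <stdlib.h>

#define PAGES_PER_SECTION 8	/* tiny on purpose; the real value is much larger */

struct toy_page { unsigned long flags; };

/* one separately allocated memmap per "memory section" */
static struct toy_page *section_memmap[2];

static struct toy_page *toy_pfn_to_page(unsigned long pfn)
{
	return &section_memmap[pfn / PAGES_PER_SECTION][pfn % PAGES_PER_SECTION];
}

int main(void)
{
	unsigned long pfn = PAGES_PER_SECTION - 1;	/* last pfn of "section" 0 */

	/* two separate allocations; nothing forces them to be adjacent */
	section_memmap[0] = calloc(PAGES_PER_SECTION, sizeof(struct toy_page));
	section_memmap[1] = calloc(PAGES_PER_SECTION, sizeof(struct toy_page));

	printf("toy_pfn_to_page(pfn) + 1 = %p\n", (void *)(toy_pfn_to_page(pfn) + 1));
	printf("toy_pfn_to_page(pfn + 1) = %p\n", (void *)toy_pfn_to_page(pfn + 1));

	/*
	 * The two pointers will typically differ, because pfn + 1 lives in the
	 * other section's separately allocated memmap.
	 */
	return 0;
}

With such ranges ruled out, iterating with plain page + i as in the __cma_alloc() hunk above is fine.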