On 06.08.25 04:20, Zi Yan wrote:
Current behavior is to move to the next PAGE_SIZE and split, but that makes it hard to check after-split folio orders. This is a preparation patch to allow a more precise split_huge_page_test check in an upcoming commit.
split_folio_to_order() part is not changed, since split_pte_mapped_thp test relies on its current behavior.
Signed-off-by: Zi Yan <ziy@nvidia.com>
[...]
+                nr_pages = folio_nr_pages(folio);
+
                 if (!folio_test_anon(folio)) {
                         mapping = folio->mapping;
                         target_order = max(new_order,
@@ -4385,15 +4388,16 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
                 if (!folio_test_anon(folio) && folio->mapping != mapping)
                         goto unlock;
 
-                if (in_folio_offset < 0 ||
-                                in_folio_offset >= folio_nr_pages(folio)) {
+                if (in_folio_offset < 0 || in_folio_offset >= nr_pages) {
                         if (!split_folio_to_order(folio, target_order))
                                 split++;
                 } else {
-                        struct page *split_at = folio_page(folio,
-                                        in_folio_offset);
-                        if (!folio_split(folio, target_order, split_at, NULL))
+                        struct page *split_at =
+                                folio_page(folio, in_folio_offset);
Can we add an empty line here, and just have this on a single line, please (feel free to exceed 80 chars if it makes the code look less ugly).
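I.e., something along these lines (just a sketch of the requested layout, whitespace approximate):

                        struct page *split_at = folio_page(folio, in_folio_offset);

                        if (!folio_split(folio, target_order, split_at, NULL)) {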
+                        if (!folio_split(folio, target_order, split_at, NULL)) {
                                 split++;
+                                addr += PAGE_SIZE * nr_pages;
Hm, but won't we do another "addr += PAGE_SIZE" in the for loop?
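To spell out the arithmetic, here is a quick userspace model (hypothetical numbers: 4096-byte pages, 512-page folios, every split assumed to succeed; this is not the kernel code, it only mimics the loop header plus the new "addr += PAGE_SIZE * nr_pages"):

#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
        /* Hypothetical numbers: a range of four 512-page (PMD-sized) folios. */
        unsigned long nr_pages = 512;
        unsigned long vaddr_start = 0;
        unsigned long vaddr_end = 4 * nr_pages * PAGE_SIZE;
        unsigned long addr;

        for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
                /* Which page of its folio does this iteration land on? */
                printf("visiting addr %#010lx (page %lu of its folio)\n",
                       addr, (addr / PAGE_SIZE) % nr_pages);

                /* Pretend the split succeeded and mimic the hunk above. */
                addr += PAGE_SIZE * nr_pages;
        }
        return 0;
}

Each successful split leaves the next iteration one page further into the following folio, because the loop header adds its own PAGE_SIZE on top. Presumably something like "addr += PAGE_SIZE * (nr_pages - 1)" (or restructuring the advance) is needed so the combined increment lands exactly on the next folio.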