This patchset uses kpageflags to read after-split folio orders so that split_huge_page_test can check split results more precisely[1]. The added gather_folio_orders() scans through a virtual address range and counts the folios at each order. check_folio_orders() compares the result of gather_folio_orders() against a given list of expected per-order counts.
This patchset also adds new_order and in-folio offset to the split huge page debugfs interface's pr_debug() output.
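As an illustration of the intended usage, here is a minimal sketch (not code from the series): it assumes x86-64 with 4KB base pages and a single 2MB PMD THP mapped at addr, and that the caller has already opened pagemap_fd on /proc/self/pagemap and kpageflags_fd on /proc/kpageflags; the helper names and signature come from patch 2.

#include "vm_util.h"
#include "../kselftest.h"

#define EXAMPLE_NR_ORDERS 10	/* orders 0..9 for a 2MB PMD THP */

static void check_split_to_order2(char *addr, size_t pmd_size,
				  int pagemap_fd, int kpageflags_fd)
{
	int expected[EXAMPLE_NR_ORDERS] = { 0 };

	/* 2MB / (4KB << 2) = 128 order-2 folios expected, nothing else */
	expected[2] = pmd_size / (4096UL << 2);

	if (check_folio_orders(addr, pmd_size, pagemap_fd, kpageflags_fd,
			       expected, EXAMPLE_NR_ORDERS))
		ksft_exit_fail_msg("Unexpected after-split folio orders\n");
}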
Changelog
===
From V1[2]:
1. Dropped the split_huge_pages_pid() for-loop step change to avoid interfering with PTE-mapped THP handling. split_huge_page_test.c now performs the split on the [addr, addr + pagesize) range to limit folio_split() to one call per folio.
2. Moved the pr_debug changes from Patch 2 to Patch 1.
3. Moved KPF_* to vm_util.h and used PAGEMAP_PFN instead of the local PFN_MASK.
4. Used the pagemap_get_pfn() helper.
5. Used char *vaddr and size_t len as inputs to gather_folio_orders() and check_folio_orders() instead of vpn and nr_pages.
6. Removed variable-length arrays and used malloc() instead.
[1] https://lore.kernel.org/linux-mm/e2f32bdb-e4a4-447c-867c-31405cbba151@redhat...
[2] https://lore.kernel.org/linux-mm/20250806022045.342824-1-ziy@nvidia.com/
Zi Yan (3):
  mm/huge_memory: add new_order and offset to split_huge_pages*() pr_debug.
  selftests/mm: add check_folio_orders() helper.
  selftests/mm: check after-split folio orders in split_huge_page_test.
 mm/huge_memory.c                              |   8 +-
 .../selftests/mm/split_huge_page_test.c       | 102 ++++++++++----
 tools/testing/selftests/mm/vm_util.c          | 133 ++++++++++++++++++
 tools/testing/selftests/mm/vm_util.h          |   7 +
 4 files changed, 217 insertions(+), 33 deletions(-)
This information is useful for debugging split huge page tests.
Signed-off-by: Zi Yan ziy@nvidia.com
---
 mm/huge_memory.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4ea5a2ce7d..ebf875928bac 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4327,8 +4327,8 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 		goto out;
 	}
 
-	pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx]\n",
-		 pid, vaddr_start, vaddr_end);
+	pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
+		 pid, vaddr_start, vaddr_end, new_order, in_folio_offset);
 
 	mmap_read_lock(mm);
 	/*
@@ -4438,8 +4438,8 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 	if (IS_ERR(candidate))
 		goto out;
 
-	pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx]\n",
-		 file_path, off_start, off_end);
+	pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
+		 file_path, off_start, off_end, new_order, in_folio_offset);
 
 	mapping = candidate->f_mapping;
 	min_order = mapping_min_folio_order(mapping);
On Fri, Aug 08, 2025 at 03:01:42PM -0400, Zi Yan wrote:
They are useful information for debugging split huge page tests.
Signed-off-by: Zi Yan ziy@nvidia.com
Reviewed-by: Wei Yang richard.weiyang@gmail.com
On 8/9/25 12:31 AM, Zi Yan wrote:
They are useful information for debugging split huge page tests.
Signed-off-by: Zi Yan ziy@nvidia.com
mm/huge_memory.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 2b4ea5a2ce7d..ebf875928bac 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -4327,8 +4327,8 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start, goto out; }
- pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx]\n",
pid, vaddr_start, vaddr_end);
- pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
pid, vaddr_start, vaddr_end, new_order, in_folio_offset);
mmap_read_lock(mm); /* @@ -4438,8 +4438,8 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start, if (IS_ERR(candidate)) goto out;
- pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx]\n",
file_path, off_start, off_end);
- pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
file_path, off_start, off_end, new_order, in_folio_offset);
LGTM
Reviewed-by: Donet Tom donettom@linux.ibm.com
mapping = candidate->f_mapping; min_order = mapping_min_folio_order(mapping);
They are useful information for debugging split huge page tests.
Signed-off-by: Zi Yan ziy@nvidia.com
Yes. LGTM. Reviewed-by: wang lian lianux.mm@gmail.com
Best regards, wang lian
On 2025/8/9 03:01, Zi Yan wrote:
They are useful information for debugging split huge page tests.
Signed-off-by: Zi Yan ziy@nvidia.com
LGTM. Reviewed-by: Baolin Wang baolin.wang@linux.alibaba.com
mm/huge_memory.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 2b4ea5a2ce7d..ebf875928bac 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -4327,8 +4327,8 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start, goto out; }
- pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx]\n",
pid, vaddr_start, vaddr_end);
- pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
pid, vaddr_start, vaddr_end, new_order, in_folio_offset);
mmap_read_lock(mm); /* @@ -4438,8 +4438,8 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start, if (IS_ERR(candidate)) goto out;
- pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx]\n",
file_path, off_start, off_end);
- pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
file_path, off_start, off_end, new_order, in_folio_offset);
mapping = candidate->f_mapping; min_order = mapping_min_folio_order(mapping);
On Sat, Aug 9, 2025 at 3:02 AM Zi Yan ziy@nvidia.com wrote:
They are useful information for debugging split huge page tests.
Signed-off-by: Zi Yan ziy@nvidia.com
LGTM. Reviewed-by: Barry Song baohua@kernel.org
mm/huge_memory.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 2b4ea5a2ce7d..ebf875928bac 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -4327,8 +4327,8 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start, goto out; }
pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx]\n",
pid, vaddr_start, vaddr_end);
pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
pid, vaddr_start, vaddr_end, new_order, in_folio_offset); mmap_read_lock(mm); /*
@@ -4438,8 +4438,8 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start, if (IS_ERR(candidate)) goto out;
pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx]\n",
file_path, off_start, off_end);
pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
file_path, off_start, off_end, new_order, in_folio_offset); mapping = candidate->f_mapping; min_order = mapping_min_folio_order(mapping);
-- 2.47.2
Thanks Barry
On 08.08.25 21:01, Zi Yan wrote:
They are useful information for debugging split huge page tests.
Signed-off-by: Zi Yan ziy@nvidia.com
mm/huge_memory.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 2b4ea5a2ce7d..ebf875928bac 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -4327,8 +4327,8 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start, goto out; }
- pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx]\n",
pid, vaddr_start, vaddr_end);
- pr_debug("Split huge pages in pid: %d, vaddr: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
pid, vaddr_start, vaddr_end, new_order, in_folio_offset);
mmap_read_lock(mm); /* @@ -4438,8 +4438,8 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start, if (IS_ERR(candidate)) goto out;
- pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx]\n",
file_path, off_start, off_end);
- pr_debug("split file-backed THPs in file: %s, page offset: [0x%lx - 0x%lx], new_order: %u, in_folio_offset: %ld\n",
file_path, off_start, off_end, new_order, in_folio_offset);
mapping = candidate->f_mapping; min_order = mapping_min_folio_order(mapping);
Acked-by: David Hildenbrand david@redhat.com
The helper gathers folio order statistics for the folios within a virtual address range and checks them against a given order list. It aims to provide a more precise folio order check instead of just checking for the existence of PMD folios.
Signed-off-by: Zi Yan ziy@nvidia.com
---
 .../selftests/mm/split_huge_page_test.c |   4 +-
 tools/testing/selftests/mm/vm_util.c    | 133 ++++++++++++++++++
 tools/testing/selftests/mm/vm_util.h    |   7 +
 3 files changed, 141 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index cb364c5670c6..5ab488fab1cd 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -34,8 +34,6 @@ uint64_t pmd_pagesize;
 #define PID_FMT_OFFSET "%d,0x%lx,0x%lx,%d,%d"
 #define PATH_FMT "%s,0x%lx,0x%lx,%d"
 
-#define PFN_MASK ((1UL<<55)-1)
-#define KPF_THP (1UL<<22)
 #define GET_ORDER(nr_pages) (31 - __builtin_clz(nr_pages))
 
 int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file)
@@ -49,7 +47,7 @@ int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file)
 
 	if (kpageflags_file) {
 		pread(kpageflags_file, &page_flags, sizeof(page_flags),
-		      (paddr & PFN_MASK) * sizeof(page_flags));
+		      PAGEMAP_PFN(paddr) * sizeof(page_flags));
 
 		return !!(page_flags & KPF_THP);
 	}
diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c
index 6a239aa413e2..41d50b74b2f6 100644
--- a/tools/testing/selftests/mm/vm_util.c
+++ b/tools/testing/selftests/mm/vm_util.c
@@ -338,6 +338,139 @@ int detect_hugetlb_page_sizes(size_t sizes[], int max)
 	return count;
 }
 
+static int get_page_flags(char *vaddr, int pagemap_file, int kpageflags_file,
+			  uint64_t *flags)
+{
+	unsigned long pfn;
+	size_t count;
+
+	pfn = pagemap_get_pfn(pagemap_file, vaddr);
+	/*
+	 * Treat non-present page as a page without any flag, so that
+	 * gather_folio_orders() just record the current folio order.
+	 */
+	if (pfn == -1UL) {
+		*flags = 0;
+		return 0;
+	}
+
+	count = pread(kpageflags_file, flags, sizeof(*flags),
+		      pfn * sizeof(*flags));
+
+	if (count != sizeof(*flags))
+		return -1;
+
+	return 0;
+}
+
+static int gather_folio_orders(char *vaddr_start, size_t len,
+			       int pagemap_file, int kpageflags_file,
+			       int orders[], int nr_orders)
+{
+	uint64_t page_flags = 0;
+	int cur_order = -1;
+	char *vaddr;
+
+	if (!pagemap_file || !kpageflags_file)
+		return -1;
+	if (nr_orders <= 0)
+		return -1;
+
+	for (vaddr = vaddr_start; vaddr < vaddr_start + len; ) {
+		char *next_folio_vaddr;
+		int status;
+
+		if (get_page_flags(vaddr, pagemap_file, kpageflags_file, &page_flags))
+			return -1;
+
+		/* all order-0 pages with possible false postive (non folio) */
+		if (!(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
+			orders[0]++;
+			vaddr += psize();
+			continue;
+		}
+
+		/* skip non thp compound pages */
+		if (!(page_flags & KPF_THP)) {
+			vaddr += psize();
+			continue;
+		}
+
+		/* vpn points to part of a THP at this point */
+		if (page_flags & KPF_COMPOUND_HEAD)
+			cur_order = 1;
+		else {
+			/* not a head nor a tail in a THP? */
+			if (!(page_flags & KPF_COMPOUND_TAIL))
+				return -1;
+			continue;
+		}
+
+		next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
+
+		if (next_folio_vaddr >= vaddr_start + len)
+			break;
+
+		while (!(status = get_page_flags(next_folio_vaddr, pagemap_file,
+						 kpageflags_file,
+						 &page_flags))) {
+			/* next compound head page or order-0 page */
+			if ((page_flags & KPF_COMPOUND_HEAD) ||
+			    !(page_flags & (KPF_COMPOUND_HEAD |
+					    KPF_COMPOUND_TAIL))) {
+				if (cur_order < nr_orders) {
+					orders[cur_order]++;
+					cur_order = -1;
+					vaddr = next_folio_vaddr;
+				}
+				break;
+			}
+
+			/* not a head nor a tail in a THP? */
+			if (!(page_flags & KPF_COMPOUND_TAIL))
+				return -1;
+
+			cur_order++;
+			next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
+		}
+
+		if (status)
+			return status;
+	}
+	if (cur_order > 0 && cur_order < nr_orders)
+		orders[cur_order]++;
+	return 0;
+}
+
+int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
+		       int kpageflags_file, int orders[], int nr_orders)
+{
+	int *vaddr_orders;
+	int status;
+	int i;
+
+	vaddr_orders = (int *)malloc(sizeof(int) * nr_orders);
+
+	if (!vaddr_orders)
+		ksft_exit_fail_msg("Cannot allocate memory for vaddr_orders");
+
+	memset(vaddr_orders, 0, sizeof(int) * nr_orders);
+	status = gather_folio_orders(vaddr_start, len, pagemap_file,
+				     kpageflags_file, vaddr_orders, nr_orders);
+	if (status)
+		return status;
+
+	status = 0;
+	for (i = 0; i < nr_orders; i++)
+		if (vaddr_orders[i] != orders[i]) {
+			ksft_print_msg("order %d: expected: %d got %d\n", i,
+				       orders[i], vaddr_orders[i]);
+			status = -1;
+		}
+
+	return status;
+}
+
 /* If `ioctls' non-NULL, the allowed ioctls will be returned into the var */
 int uffd_register_with_ioctls(int uffd, void *addr, uint64_t len,
 			      bool miss, bool wp, bool minor, uint64_t *ioctls)
diff --git a/tools/testing/selftests/mm/vm_util.h b/tools/testing/selftests/mm/vm_util.h
index 1843ad48d32b..02e3f1e7065b 100644
--- a/tools/testing/selftests/mm/vm_util.h
+++ b/tools/testing/selftests/mm/vm_util.h
@@ -18,6 +18,11 @@
 #define PM_SWAP BIT_ULL(62)
 #define PM_PRESENT BIT_ULL(63)
 
+#define KPF_COMPOUND_HEAD BIT_ULL(15)
+#define KPF_COMPOUND_TAIL BIT_ULL(16)
+#define KPF_THP BIT_ULL(22)
+
+
 /*
  * Ignore the checkpatch warning, we must read from x but don't want to do
  * anything with it in order to trigger a read page fault. We therefore must use
@@ -85,6 +90,8 @@ bool check_huge_shmem(void *addr, int nr_hpages, uint64_t hpage_size);
 int64_t allocate_transhuge(void *ptr, int pagemap_fd);
 unsigned long default_huge_page_size(void);
 int detect_hugetlb_page_sizes(size_t sizes[], int max);
+int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
+		       int kpageflags_file, int orders[], int nr_orders);
 
 int uffd_register(int uffd, void *addr, uint64_t len,
 		  bool miss, bool wp, bool minor);
On Fri, Aug 08, 2025 at 03:01:43PM -0400, Zi Yan wrote:
The helper gathers an folio order statistics of folios within a virtual address range and checks it against a given order list. It aims to provide a more precise folio order check instead of just checking the existence of PMD folios.
Signed-off-by: Zi Yan ziy@nvidia.com
.../selftests/mm/split_huge_page_test.c | 4 +- tools/testing/selftests/mm/vm_util.c | 133 ++++++++++++++++++ tools/testing/selftests/mm/vm_util.h | 7 + 3 files changed, 141 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c index cb364c5670c6..5ab488fab1cd 100644 --- a/tools/testing/selftests/mm/split_huge_page_test.c +++ b/tools/testing/selftests/mm/split_huge_page_test.c @@ -34,8 +34,6 @@ uint64_t pmd_pagesize; #define PID_FMT_OFFSET "%d,0x%lx,0x%lx,%d,%d" #define PATH_FMT "%s,0x%lx,0x%lx,%d"
-#define PFN_MASK ((1UL<<55)-1) -#define KPF_THP (1UL<<22) #define GET_ORDER(nr_pages) (31 - __builtin_clz(nr_pages))
int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file) @@ -49,7 +47,7 @@ int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file)
if (kpageflags_file) { pread(kpageflags_file, &page_flags, sizeof(page_flags),
(paddr & PFN_MASK) * sizeof(page_flags));
PAGEMAP_PFN(paddr) * sizeof(page_flags));
is_backed_by_thp() shares similar logic with get_page_flags(); I am thinking we can leverage get_page_flags() here.
return !!(page_flags & KPF_THP); }
diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c index 6a239aa413e2..41d50b74b2f6 100644 --- a/tools/testing/selftests/mm/vm_util.c +++ b/tools/testing/selftests/mm/vm_util.c @@ -338,6 +338,139 @@ int detect_hugetlb_page_sizes(size_t sizes[], int max) return count; }
+static int get_page_flags(char *vaddr, int pagemap_file, int kpageflags_file,
uint64_t *flags)
+{
Nit.
In vm_util.c, we usually name the file descriptor as xxx_fd.
- unsigned long pfn;
- size_t count;
- pfn = pagemap_get_pfn(pagemap_file, vaddr);
- /*
* Treat non-present page as a page without any flag, so that
* gather_folio_orders() just record the current folio order.
*/
- if (pfn == -1UL) {
*flags = 0;
return 0;
- }
- count = pread(kpageflags_file, flags, sizeof(*flags),
pfn * sizeof(*flags));
- if (count != sizeof(*flags))
return -1;
- return 0;
+}
Maybe a brief comment documenting this function would be helpful.
+static int gather_folio_orders(char *vaddr_start, size_t len,
int pagemap_file, int kpageflags_file,
int orders[], int nr_orders)
+{
- uint64_t page_flags = 0;
- int cur_order = -1;
- char *vaddr;
- if (!pagemap_file || !kpageflags_file)
return -1;
- if (nr_orders <= 0)
return -1;
- for (vaddr = vaddr_start; vaddr < vaddr_start + len; ) {
char *next_folio_vaddr;
int status;
if (get_page_flags(vaddr, pagemap_file, kpageflags_file, &page_flags))
return -1;
/* all order-0 pages with possible false postive (non folio) */
if (!(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
orders[0]++;
vaddr += psize();
continue;
}
/* skip non thp compound pages */
if (!(page_flags & KPF_THP)) {
vaddr += psize();
continue;
}
/* vpn points to part of a THP at this point */
if (page_flags & KPF_COMPOUND_HEAD)
cur_order = 1;
else {
/* not a head nor a tail in a THP? */
if (!(page_flags & KPF_COMPOUND_TAIL))
return -1;
continue;
}
next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
if (next_folio_vaddr >= vaddr_start + len)
break;
Would we skip order 1 folio at the last position?
For example, vaddr_start is 0x2000, len is 0x2000 and the folio at vaddr_start is an order 1 folio, whose size is exactly 0x2000.
Then we will get next_folio_vaddr == vaddr_start + len.
Could that happen?
while (!(status = get_page_flags(next_folio_vaddr, pagemap_file,
kpageflags_file,
&page_flags))) {
/* next compound head page or order-0 page */
if ((page_flags & KPF_COMPOUND_HEAD) ||
!(page_flags & (KPF_COMPOUND_HEAD |
KPF_COMPOUND_TAIL))) {
Maybe we can put them into one line.
if (cur_order < nr_orders) {
orders[cur_order]++;
cur_order = -1;
vaddr = next_folio_vaddr;
}
break;
}
/* not a head nor a tail in a THP? */
if (!(page_flags & KPF_COMPOUND_TAIL))
return -1;
cur_order++;
next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
}
The while loop shares similar logic with the outer for loop. Is it possible to reduce some of the duplication?
if (status)
return status;
- }
- if (cur_order > 0 && cur_order < nr_orders)
orders[cur_order]++;
- return 0;
+}
+int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders)
+{
- int *vaddr_orders;
- int status;
- int i;
- vaddr_orders = (int *)malloc(sizeof(int) * nr_orders);
I took a look at thp_settings.h, which defines an array with NR_ORDERS elements, where NR_ORDERS is 20. Maybe we can leverage it here, since we don't expect the order to be larger.
- if (!vaddr_orders)
ksft_exit_fail_msg("Cannot allocate memory for vaddr_orders");
- memset(vaddr_orders, 0, sizeof(int) * nr_orders);
- status = gather_folio_orders(vaddr_start, len, pagemap_file,
kpageflags_file, vaddr_orders, nr_orders);
- if (status)
return status;
- status = 0;
- for (i = 0; i < nr_orders; i++)
if (vaddr_orders[i] != orders[i]) {
ksft_print_msg("order %d: expected: %d got %d\n", i,
orders[i], vaddr_orders[i]);
status = -1;
}
- return status;
+}
/* If `ioctls' non-NULL, the allowed ioctls will be returned into the var */ int uffd_register_with_ioctls(int uffd, void *addr, uint64_t len, bool miss, bool wp, bool minor, uint64_t *ioctls) diff --git a/tools/testing/selftests/mm/vm_util.h b/tools/testing/selftests/mm/vm_util.h index 1843ad48d32b..02e3f1e7065b 100644 --- a/tools/testing/selftests/mm/vm_util.h +++ b/tools/testing/selftests/mm/vm_util.h @@ -18,6 +18,11 @@ #define PM_SWAP BIT_ULL(62) #define PM_PRESENT BIT_ULL(63)
+#define KPF_COMPOUND_HEAD BIT_ULL(15) +#define KPF_COMPOUND_TAIL BIT_ULL(16) +#define KPF_THP BIT_ULL(22)
/*
- Ignore the checkpatch warning, we must read from x but don't want to do
- anything with it in order to trigger a read page fault. We therefore must use
@@ -85,6 +90,8 @@ bool check_huge_shmem(void *addr, int nr_hpages, uint64_t hpage_size); int64_t allocate_transhuge(void *ptr, int pagemap_fd); unsigned long default_huge_page_size(void); int detect_hugetlb_page_sizes(size_t sizes[], int max); +int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders);
int uffd_register(int uffd, void *addr, uint64_t len, bool miss, bool wp, bool minor); -- 2.47.2
On 9 Aug 2025, at 16:18, Wei Yang wrote:
On Fri, Aug 08, 2025 at 03:01:43PM -0400, Zi Yan wrote:
The helper gathers an folio order statistics of folios within a virtual address range and checks it against a given order list. It aims to provide a more precise folio order check instead of just checking the existence of PMD folios.
Signed-off-by: Zi Yan ziy@nvidia.com
.../selftests/mm/split_huge_page_test.c | 4 +- tools/testing/selftests/mm/vm_util.c | 133 ++++++++++++++++++ tools/testing/selftests/mm/vm_util.h | 7 + 3 files changed, 141 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c index cb364c5670c6..5ab488fab1cd 100644 --- a/tools/testing/selftests/mm/split_huge_page_test.c +++ b/tools/testing/selftests/mm/split_huge_page_test.c @@ -34,8 +34,6 @@ uint64_t pmd_pagesize; #define PID_FMT_OFFSET "%d,0x%lx,0x%lx,%d,%d" #define PATH_FMT "%s,0x%lx,0x%lx,%d"
-#define PFN_MASK ((1UL<<55)-1) -#define KPF_THP (1UL<<22) #define GET_ORDER(nr_pages) (31 - __builtin_clz(nr_pages))
int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file) @@ -49,7 +47,7 @@ int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file)
if (kpageflags_file) { pread(kpageflags_file, &page_flags, sizeof(page_flags),
(paddr & PFN_MASK) * sizeof(page_flags));
PAGEMAP_PFN(paddr) * sizeof(page_flags));
is_backed_by_thp() shares similar logic as get_page_flags(), I am thinking we can leverage get_page_flags() here.
I was lazy for this one. I will use check_folio_orders() in the next version.
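For illustration only, this is roughly what the reviewer's suggestion would look like (not the rework the author says he will do with check_folio_orders(), and it assumes get_page_flags() were exported from vm_util.c rather than kept static; KPF_THP comes from vm_util.h per this patch):

int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file)
{
	uint64_t page_flags;

	if (!pagemap_file || !kpageflags_file)
		return 0;

	/* translate vaddr to a PFN and read its kpageflags entry */
	if (get_page_flags(vaddr, pagemap_file, kpageflags_file, &page_flags))
		return 0;

	return !!(page_flags & KPF_THP);
}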
return !!(page_flags & KPF_THP); }
diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c index 6a239aa413e2..41d50b74b2f6 100644 --- a/tools/testing/selftests/mm/vm_util.c +++ b/tools/testing/selftests/mm/vm_util.c @@ -338,6 +338,139 @@ int detect_hugetlb_page_sizes(size_t sizes[], int max) return count; }
+static int get_page_flags(char *vaddr, int pagemap_file, int kpageflags_file,
uint64_t *flags)
+{
Nit.
In vm_util.c, we usually name the file descriptor as xxx_fd.
OK. I can rename them.
- unsigned long pfn;
- size_t count;
- pfn = pagemap_get_pfn(pagemap_file, vaddr);
- /*
* Treat non-present page as a page without any flag, so that
* gather_folio_orders() just record the current folio order.
*/
- if (pfn == -1UL) {
*flags = 0;
return 0;
- }
- count = pread(kpageflags_file, flags, sizeof(*flags),
pfn * sizeof(*flags));
- if (count != sizeof(*flags))
return -1;
- return 0;
+}
Maybe a simple document here would be helpful.
Will do.
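Something along these lines might do (wording is illustrative only, not the comment that will actually be added):

/*
 * get_page_flags() - read the /proc/kpageflags entry for the page backing
 * @vaddr, using @pagemap_file to translate the virtual address to a PFN.
 * A non-present page is reported as having no flags. Returns 0 on success
 * with the flags stored in *@flags, or -1 if the kpageflags read fails.
 */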
+static int gather_folio_orders(char *vaddr_start, size_t len,
int pagemap_file, int kpageflags_file,
int orders[], int nr_orders)
+{
- uint64_t page_flags = 0;
- int cur_order = -1;
- char *vaddr;
- if (!pagemap_file || !kpageflags_file)
return -1;
- if (nr_orders <= 0)
return -1;
- for (vaddr = vaddr_start; vaddr < vaddr_start + len; ) {
char *next_folio_vaddr;
int status;
if (get_page_flags(vaddr, pagemap_file, kpageflags_file, &page_flags))
return -1;
/* all order-0 pages with possible false postive (non folio) */
if (!(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
orders[0]++;
vaddr += psize();
continue;
}
/* skip non thp compound pages */
if (!(page_flags & KPF_THP)) {
vaddr += psize();
continue;
}
/* vpn points to part of a THP at this point */
if (page_flags & KPF_COMPOUND_HEAD)
cur_order = 1;
else {
/* not a head nor a tail in a THP? */
if (!(page_flags & KPF_COMPOUND_TAIL))
return -1;
continue;
}
next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
if (next_folio_vaddr >= vaddr_start + len)
break;
Would we skip order 1 folio at the last position?
For example, vaddr_start is 0x2000, len is 0x2000 and the folio at vaddr_start is an order 1 folio, whose size is exactly 0x2000.
Then we will get next_folio_vaddr == vaddr_start + len.
Could that happen?
No. After the loop, there is code checking cur_order and updating orders[].
while (!(status = get_page_flags(next_folio_vaddr, pagemap_file,
kpageflags_file,
&page_flags))) {
/* next compound head page or order-0 page */
if ((page_flags & KPF_COMPOUND_HEAD) ||
!(page_flags & (KPF_COMPOUND_HEAD |
KPF_COMPOUND_TAIL))) {
Maybe we can put them into one line.
Sure.
if (cur_order < nr_orders) {
orders[cur_order]++;
cur_order = -1;
vaddr = next_folio_vaddr;
}
break;
}
/* not a head nor a tail in a THP? */
if (!(page_flags & KPF_COMPOUND_TAIL))
return -1;
cur_order++;
next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
}
The while loop share similar logic as the outer for loop. Is it possible reduce some duplication?
The outer loop filters out order-0 and non-head pages, and the while loop finds the current THP/mTHP order. It would be messy to combine them. But feel free to provide ideas if you see a way.
if (status)
return status;
- }
- if (cur_order > 0 && cur_order < nr_orders)
orders[cur_order]++;
- return 0;
+}
+int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders)
+{
- int *vaddr_orders;
- int status;
- int i;
- vaddr_orders = (int *)malloc(sizeof(int) * nr_orders);
I took a look into thp_setting.h, where defines an array with NR_ORDERS element which is 20. Maybe we can leverage it here, since we don't expect the order to be larger.
20 is too large for current use. We can revisit this when the function gets more users.
- if (!vaddr_orders)
ksft_exit_fail_msg("Cannot allocate memory for vaddr_orders");
- memset(vaddr_orders, 0, sizeof(int) * nr_orders);
- status = gather_folio_orders(vaddr_start, len, pagemap_file,
kpageflags_file, vaddr_orders, nr_orders);
- if (status)
return status;
- status = 0;
- for (i = 0; i < nr_orders; i++)
if (vaddr_orders[i] != orders[i]) {
ksft_print_msg("order %d: expected: %d got %d\n", i,
orders[i], vaddr_orders[i]);
status = -1;
}
- return status;
+}
/* If `ioctls' non-NULL, the allowed ioctls will be returned into the var */ int uffd_register_with_ioctls(int uffd, void *addr, uint64_t len, bool miss, bool wp, bool minor, uint64_t *ioctls) diff --git a/tools/testing/selftests/mm/vm_util.h b/tools/testing/selftests/mm/vm_util.h index 1843ad48d32b..02e3f1e7065b 100644 --- a/tools/testing/selftests/mm/vm_util.h +++ b/tools/testing/selftests/mm/vm_util.h @@ -18,6 +18,11 @@ #define PM_SWAP BIT_ULL(62) #define PM_PRESENT BIT_ULL(63)
+#define KPF_COMPOUND_HEAD BIT_ULL(15) +#define KPF_COMPOUND_TAIL BIT_ULL(16) +#define KPF_THP BIT_ULL(22)
/*
- Ignore the checkpatch warning, we must read from x but don't want to do
- anything with it in order to trigger a read page fault. We therefore must use
@@ -85,6 +90,8 @@ bool check_huge_shmem(void *addr, int nr_hpages, uint64_t hpage_size); int64_t allocate_transhuge(void *ptr, int pagemap_fd); unsigned long default_huge_page_size(void); int detect_hugetlb_page_sizes(size_t sizes[], int max); +int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders);
int uffd_register(int uffd, void *addr, uint64_t len, bool miss, bool wp, bool minor); -- 2.47.2
--
Wei Yang
Help you, Help me
Best Regards, Yan, Zi
On Mon, Aug 11, 2025 at 02:39:08PM -0400, Zi Yan wrote: [...]
+static int gather_folio_orders(char *vaddr_start, size_t len,
int pagemap_file, int kpageflags_file,
int orders[], int nr_orders)
+{
- uint64_t page_flags = 0;
- int cur_order = -1;
- char *vaddr;
- if (!pagemap_file || !kpageflags_file)
return -1;
- if (nr_orders <= 0)
return -1;
- for (vaddr = vaddr_start; vaddr < vaddr_start + len; ) {
char *next_folio_vaddr;
int status;
if (get_page_flags(vaddr, pagemap_file, kpageflags_file, &page_flags))
return -1;
/* all order-0 pages with possible false postive (non folio) */
if (!(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
orders[0]++;
vaddr += psize();
continue;
}
/* skip non thp compound pages */
if (!(page_flags & KPF_THP)) {
vaddr += psize();
continue;
}
/* vpn points to part of a THP at this point */
if (page_flags & KPF_COMPOUND_HEAD)
cur_order = 1;
else {
/* not a head nor a tail in a THP? */
if (!(page_flags & KPF_COMPOUND_TAIL))
return -1;
continue;
}
next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
if (next_folio_vaddr >= vaddr_start + len)
break;
Would we skip order 1 folio at the last position?
For example, vaddr_start is 0x2000, len is 0x2000 and the folio at vaddr_start is an order 1 folio, whose size is exactly 0x2000.
Then we will get next_folio_vaddr == vaddr_start + len.
Could that happen?
No. After the loop, there is code checking cur_order and updating orders[].
Oh, I missed this.
On 8/9/25 12:31 AM, Zi Yan wrote:
The helper gathers an folio order statistics of folios within a virtual address range and checks it against a given order list. It aims to provide a more precise folio order check instead of just checking the existence of PMD folios.
Signed-off-by: Zi Yan ziy@nvidia.com
.../selftests/mm/split_huge_page_test.c | 4 +- tools/testing/selftests/mm/vm_util.c | 133 ++++++++++++++++++ tools/testing/selftests/mm/vm_util.h | 7 + 3 files changed, 141 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c index cb364c5670c6..5ab488fab1cd 100644 --- a/tools/testing/selftests/mm/split_huge_page_test.c +++ b/tools/testing/selftests/mm/split_huge_page_test.c @@ -34,8 +34,6 @@ uint64_t pmd_pagesize; #define PID_FMT_OFFSET "%d,0x%lx,0x%lx,%d,%d" #define PATH_FMT "%s,0x%lx,0x%lx,%d" -#define PFN_MASK ((1UL<<55)-1) -#define KPF_THP (1UL<<22) #define GET_ORDER(nr_pages) (31 - __builtin_clz(nr_pages)) int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file) @@ -49,7 +47,7 @@ int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file) if (kpageflags_file) { pread(kpageflags_file, &page_flags, sizeof(page_flags),
(paddr & PFN_MASK) * sizeof(page_flags));
PAGEMAP_PFN(paddr) * sizeof(page_flags));
return !!(page_flags & KPF_THP); } diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c index 6a239aa413e2..41d50b74b2f6 100644 --- a/tools/testing/selftests/mm/vm_util.c +++ b/tools/testing/selftests/mm/vm_util.c @@ -338,6 +338,139 @@ int detect_hugetlb_page_sizes(size_t sizes[], int max) return count; } +static int get_page_flags(char *vaddr, int pagemap_file, int kpageflags_file,
uint64_t *flags)
+{
- unsigned long pfn;
- size_t count;
- pfn = pagemap_get_pfn(pagemap_file, vaddr);
- /*
* Treat non-present page as a page without any flag, so that
* gather_folio_orders() just record the current folio order.
*/
- if (pfn == -1UL) {
*flags = 0;
return 0;
- }
- count = pread(kpageflags_file, flags, sizeof(*flags),
pfn * sizeof(*flags));
- if (count != sizeof(*flags))
return -1;
- return 0;
+}
+static int gather_folio_orders(char *vaddr_start, size_t len,
int pagemap_file, int kpageflags_file,
int orders[], int nr_orders)
+{
- uint64_t page_flags = 0;
- int cur_order = -1;
- char *vaddr;
- if (!pagemap_file || !kpageflags_file)
return -1;
- if (nr_orders <= 0)
return -1;
- for (vaddr = vaddr_start; vaddr < vaddr_start + len; ) {
char *next_folio_vaddr;
int status;
if (get_page_flags(vaddr, pagemap_file, kpageflags_file, &page_flags))
return -1;
/* all order-0 pages with possible false postive (non folio) */
if (!(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
orders[0]++;
vaddr += psize();
continue;
}
/* skip non thp compound pages */
if (!(page_flags & KPF_THP)) {
vaddr += psize();
continue;
}
/* vpn points to part of a THP at this point */
if (page_flags & KPF_COMPOUND_HEAD)
cur_order = 1;
else {
/* not a head nor a tail in a THP? */
if (!(page_flags & KPF_COMPOUND_TAIL))
return -1;
continue;
If KPF_COMPOUND_TAIL is set, do we use the same vaddr, or should we advance to the next vaddr before continuing?
}
next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
if (next_folio_vaddr >= vaddr_start + len)
break;
while (!(status = get_page_flags(next_folio_vaddr, pagemap_file,
kpageflags_file,
&page_flags))) {
/* next compound head page or order-0 page */
if ((page_flags & KPF_COMPOUND_HEAD) ||
!(page_flags & (KPF_COMPOUND_HEAD |
KPF_COMPOUND_TAIL))) {
if (cur_order < nr_orders) {
orders[cur_order]++;
cur_order = -1;
vaddr = next_folio_vaddr;
}
break;
}
/* not a head nor a tail in a THP? */
if (!(page_flags & KPF_COMPOUND_TAIL))
return -1;
cur_order++;
next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
}
if (status)
return status;
- }
- if (cur_order > 0 && cur_order < nr_orders)
orders[cur_order]++;
- return 0;
+}
+int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders)
+{
- int *vaddr_orders;
- int status;
- int i;
- vaddr_orders = (int *)malloc(sizeof(int) * nr_orders);
- if (!vaddr_orders)
ksft_exit_fail_msg("Cannot allocate memory for vaddr_orders");
- memset(vaddr_orders, 0, sizeof(int) * nr_orders);
- status = gather_folio_orders(vaddr_start, len, pagemap_file,
kpageflags_file, vaddr_orders, nr_orders);
- if (status)
return status;
- status = 0;
- for (i = 0; i < nr_orders; i++)
if (vaddr_orders[i] != orders[i]) {
ksft_print_msg("order %d: expected: %d got %d\n", i,
orders[i], vaddr_orders[i]);
status = -1;
}
- return status;
+}
- /* If `ioctls' non-NULL, the allowed ioctls will be returned into the var */ int uffd_register_with_ioctls(int uffd, void *addr, uint64_t len, bool miss, bool wp, bool minor, uint64_t *ioctls)
diff --git a/tools/testing/selftests/mm/vm_util.h b/tools/testing/selftests/mm/vm_util.h index 1843ad48d32b..02e3f1e7065b 100644 --- a/tools/testing/selftests/mm/vm_util.h +++ b/tools/testing/selftests/mm/vm_util.h @@ -18,6 +18,11 @@ #define PM_SWAP BIT_ULL(62) #define PM_PRESENT BIT_ULL(63) +#define KPF_COMPOUND_HEAD BIT_ULL(15) +#define KPF_COMPOUND_TAIL BIT_ULL(16) +#define KPF_THP BIT_ULL(22)
- /*
- Ignore the checkpatch warning, we must read from x but don't want to do
- anything with it in order to trigger a read page fault. We therefore must use
@@ -85,6 +90,8 @@ bool check_huge_shmem(void *addr, int nr_hpages, uint64_t hpage_size); int64_t allocate_transhuge(void *ptr, int pagemap_fd); unsigned long default_huge_page_size(void); int detect_hugetlb_page_sizes(size_t sizes[], int max); +int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders);
int uffd_register(int uffd, void *addr, uint64_t len, bool miss, bool wp, bool minor);
On 10 Aug 2025, at 12:49, Donet Tom wrote:
On 8/9/25 12:31 AM, Zi Yan wrote:
The helper gathers an folio order statistics of folios within a virtual address range and checks it against a given order list. It aims to provide a more precise folio order check instead of just checking the existence of PMD folios.
Signed-off-by: Zi Yan ziy@nvidia.com
.../selftests/mm/split_huge_page_test.c | 4 +- tools/testing/selftests/mm/vm_util.c | 133 ++++++++++++++++++ tools/testing/selftests/mm/vm_util.h | 7 + 3 files changed, 141 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c index cb364c5670c6..5ab488fab1cd 100644 --- a/tools/testing/selftests/mm/split_huge_page_test.c +++ b/tools/testing/selftests/mm/split_huge_page_test.c @@ -34,8 +34,6 @@ uint64_t pmd_pagesize; #define PID_FMT_OFFSET "%d,0x%lx,0x%lx,%d,%d" #define PATH_FMT "%s,0x%lx,0x%lx,%d" -#define PFN_MASK ((1UL<<55)-1) -#define KPF_THP (1UL<<22) #define GET_ORDER(nr_pages) (31 - __builtin_clz(nr_pages)) int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file) @@ -49,7 +47,7 @@ int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file) if (kpageflags_file) { pread(kpageflags_file, &page_flags, sizeof(page_flags),
(paddr & PFN_MASK) * sizeof(page_flags));
}PAGEMAP_PFN(paddr) * sizeof(page_flags)); return !!(page_flags & KPF_THP);
diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c index 6a239aa413e2..41d50b74b2f6 100644 --- a/tools/testing/selftests/mm/vm_util.c +++ b/tools/testing/selftests/mm/vm_util.c @@ -338,6 +338,139 @@ int detect_hugetlb_page_sizes(size_t sizes[], int max) return count; } +static int get_page_flags(char *vaddr, int pagemap_file, int kpageflags_file,
uint64_t *flags)
+{
- unsigned long pfn;
- size_t count;
- pfn = pagemap_get_pfn(pagemap_file, vaddr);
- /*
* Treat non-present page as a page without any flag, so that
* gather_folio_orders() just record the current folio order.
*/
- if (pfn == -1UL) {
*flags = 0;
return 0;
- }
- count = pread(kpageflags_file, flags, sizeof(*flags),
pfn * sizeof(*flags));
- if (count != sizeof(*flags))
return -1;
- return 0;
+}
+static int gather_folio_orders(char *vaddr_start, size_t len,
int pagemap_file, int kpageflags_file,
int orders[], int nr_orders)
+{
- uint64_t page_flags = 0;
- int cur_order = -1;
- char *vaddr;
- if (!pagemap_file || !kpageflags_file)
return -1;
- if (nr_orders <= 0)
return -1;
- for (vaddr = vaddr_start; vaddr < vaddr_start + len; ) {
char *next_folio_vaddr;
int status;
if (get_page_flags(vaddr, pagemap_file, kpageflags_file, &page_flags))
return -1;
/* all order-0 pages with possible false postive (non folio) */
if (!(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
orders[0]++;
vaddr += psize();
continue;
}
/* skip non thp compound pages */
if (!(page_flags & KPF_THP)) {
vaddr += psize();
continue;
}
/* vpn points to part of a THP at this point */
if (page_flags & KPF_COMPOUND_HEAD)
cur_order = 1;
else {
/* not a head nor a tail in a THP? */
if (!(page_flags & KPF_COMPOUND_TAIL))
return -1;
continue;
If KPF_COMPOUND_TAIL is set, do we use the same vaddr, or should we advance to the next vaddr before continuing?
Yeah, I missed a vaddr += psize() here. Thank you for pointing this out.
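The fix being discussed would presumably end up looking something like this (illustrative fragment only, not a posted follow-up patch):

 		else {
 			/* not a head nor a tail in a THP? */
 			if (!(page_flags & KPF_COMPOUND_TAIL))
 				return -1;
+			vaddr += psize();
 			continue;
 		}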
Best Regards, Yan, Zi
On 2025/8/9 03:01, Zi Yan wrote:
The helper gathers an folio order statistics of folios within a virtual address range and checks it against a given order list. It aims to provide a more precise folio order check instead of just checking the existence of PMD folios.
Signed-off-by: Zi Yan ziy@nvidia.com
.../selftests/mm/split_huge_page_test.c | 4 +- tools/testing/selftests/mm/vm_util.c | 133 ++++++++++++++++++ tools/testing/selftests/mm/vm_util.h | 7 + 3 files changed, 141 insertions(+), 3 deletions(-)
[snip]
+int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders)
+{
- int *vaddr_orders;
- int status;
- int i;
- vaddr_orders = (int *)malloc(sizeof(int) * nr_orders);
- if (!vaddr_orders)
ksft_exit_fail_msg("Cannot allocate memory for vaddr_orders");
- memset(vaddr_orders, 0, sizeof(int) * nr_orders);
- status = gather_folio_orders(vaddr_start, len, pagemap_file,
kpageflags_file, vaddr_orders, nr_orders);
- if (status)
Missed calling free(vaddr_orders) before returning.
return status;
- status = 0;
- for (i = 0; i < nr_orders; i++)
if (vaddr_orders[i] != orders[i]) {
ksft_print_msg("order %d: expected: %d got %d\n", i,
orders[i], vaddr_orders[i]);
status = -1;
}
Ditto.
- return status;
+}
- /* If `ioctls' non-NULL, the allowed ioctls will be returned into the var */ int uffd_register_with_ioctls(int uffd, void *addr, uint64_t len, bool miss, bool wp, bool minor, uint64_t *ioctls)
diff --git a/tools/testing/selftests/mm/vm_util.h b/tools/testing/selftests/mm/vm_util.h index 1843ad48d32b..02e3f1e7065b 100644 --- a/tools/testing/selftests/mm/vm_util.h +++ b/tools/testing/selftests/mm/vm_util.h @@ -18,6 +18,11 @@ #define PM_SWAP BIT_ULL(62) #define PM_PRESENT BIT_ULL(63) +#define KPF_COMPOUND_HEAD BIT_ULL(15) +#define KPF_COMPOUND_TAIL BIT_ULL(16) +#define KPF_THP BIT_ULL(22)
- /*
- Ignore the checkpatch warning, we must read from x but don't want to do
- anything with it in order to trigger a read page fault. We therefore must use
@@ -85,6 +90,8 @@ bool check_huge_shmem(void *addr, int nr_hpages, uint64_t hpage_size); int64_t allocate_transhuge(void *ptr, int pagemap_fd); unsigned long default_huge_page_size(void); int detect_hugetlb_page_sizes(size_t sizes[], int max); +int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders);
int uffd_register(int uffd, void *addr, uint64_t len, bool miss, bool wp, bool minor);
On 11 Aug 2025, at 3:52, Baolin Wang wrote:
On 2025/8/9 03:01, Zi Yan wrote:
The helper gathers an folio order statistics of folios within a virtual address range and checks it against a given order list. It aims to provide a more precise folio order check instead of just checking the existence of PMD folios.
Signed-off-by: Zi Yan ziy@nvidia.com
.../selftests/mm/split_huge_page_test.c | 4 +- tools/testing/selftests/mm/vm_util.c | 133 ++++++++++++++++++ tools/testing/selftests/mm/vm_util.h | 7 + 3 files changed, 141 insertions(+), 3 deletions(-)
[snip]
+int check_folio_orders(char *vaddr_start, size_t len, int pagemap_file,
int kpageflags_file, int orders[], int nr_orders)
+{
- int *vaddr_orders;
- int status;
- int i;
- vaddr_orders = (int *)malloc(sizeof(int) * nr_orders);
- if (!vaddr_orders)
ksft_exit_fail_msg("Cannot allocate memory for vaddr_orders");
- memset(vaddr_orders, 0, sizeof(int) * nr_orders);
- status = gather_folio_orders(vaddr_start, len, pagemap_file,
kpageflags_file, vaddr_orders, nr_orders);
- if (status)
Missed calling free(vaddr_orders) before returning.
return status;
- status = 0;
- for (i = 0; i < nr_orders; i++)
if (vaddr_orders[i] != orders[i]) {
ksft_print_msg("order %d: expected: %d got %d\n", i,
orders[i], vaddr_orders[i]);
status = -1;
}
Ditto.
- return status;
+}
Will add free() in the above two locations. Thank you for spotting them.
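One possible shape for that change, purely as a sketch of where the two free() calls could go (the actual v3 may of course differ):

 	status = gather_folio_orders(vaddr_start, len, pagemap_file,
 				     kpageflags_file, vaddr_orders, nr_orders);
 	if (status)
-		return status;
+		goto out;
 
 	status = 0;
 	for (i = 0; i < nr_orders; i++)
 		if (vaddr_orders[i] != orders[i]) {
 			ksft_print_msg("order %d: expected: %d got %d\n", i,
 				       orders[i], vaddr_orders[i]);
 			status = -1;
 		}
 
+out:
+	free(vaddr_orders);
 	return status;
 }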
Best Regards, Yan, Zi
Instead of just checking the existence of PMD folios before and after folio split tests, use check_folio_orders() to check after-split folio orders.
The following tests are not changed:
1. split_pte_mapped_thp: the test already uses kpageflags to check;
2. split_file_backed_thp: no vaddr available.
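As a worked example of the expected-order bookkeeping added below (assuming 4KB base pages and 2MB PMD THPs, so pmd_order = 9, and fd_size = 2 * pmd_pagesize, so times = 2): splitting each PMD folio uniformly to order 2 should leave fd_size / (pagesize << 2) = 256 order-2 folios; splitting each PMD folio to order 2 at a fixed in-folio offset should instead leave, per PMD folio, one folio of each order from 3 to 8 plus two order-2 folios, i.e. orders[2] = 4 and orders[3..8] = 2.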
Signed-off-by: Zi Yan ziy@nvidia.com
---
 .../selftests/mm/split_huge_page_test.c | 98 ++++++++++++++-----
 1 file changed, 72 insertions(+), 26 deletions(-)

diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index 5ab488fab1cd..161108717f1c 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -25,6 +25,10 @@ uint64_t pagesize;
 unsigned int pageshift;
 uint64_t pmd_pagesize;
+unsigned int pmd_order;
+unsigned int max_order;
+
+#define NR_ORDERS (max_order + 1)
 
 #define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages"
 #define SMAP_PATH "/proc/self/smaps"
@@ -36,6 +40,11 @@ uint64_t pmd_pagesize;
 
 #define GET_ORDER(nr_pages) (31 - __builtin_clz(nr_pages))
 
+const char *pagemap_proc = "/proc/self/pagemap";
+const char *kpageflags_proc = "/proc/kpageflags";
+int pagemap_fd;
+int kpageflags_fd;
+
 int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file)
 {
 	uint64_t paddr;
@@ -151,6 +160,11 @@ void split_pmd_thp_to_order(int order)
 	char *one_page;
 	size_t len = 4 * pmd_pagesize;
 	size_t i;
+	int *orders;
+
+	orders = (int *)malloc(sizeof(int) * NR_ORDERS);
+	if (!orders)
+		ksft_exit_fail_msg("Fail to allocate memory: %s\n", strerror(errno));
 
 	one_page = memalign(pmd_pagesize, len);
 	if (!one_page)
@@ -172,12 +186,20 @@ void split_pmd_thp_to_order(int order)
 		if (one_page[i] != (char)i)
 			ksft_exit_fail_msg("%ld byte corrupted\n", i);
 
+	memset(orders, 0, sizeof(int) * NR_ORDERS);
+	/* set expected orders */
+	orders[order] = 4 << (pmd_order - order);
+
+	if (check_folio_orders(one_page, len, pagemap_fd, kpageflags_fd,
+			       orders, NR_ORDERS))
+		ksft_exit_fail_msg("Unexpected THP split\n");
 
 	if (!check_huge_anon(one_page, 0, pmd_pagesize))
 		ksft_exit_fail_msg("Still AnonHugePages not split\n");
 
 	ksft_test_result_pass("Split huge pages to order %d successful\n", order);
 	free(one_page);
+	free(orders);
 }
 
 void split_pte_mapped_thp(void)
@@ -186,22 +208,6 @@ void split_pte_mapped_thp(void)
 	size_t len = 4 * pmd_pagesize;
 	uint64_t thp_size;
 	size_t i;
-	const char *pagemap_template = "/proc/%d/pagemap";
-	const char *kpageflags_proc = "/proc/kpageflags";
-	char pagemap_proc[255];
-	int pagemap_fd;
-	int kpageflags_fd;
-
-	if (snprintf(pagemap_proc, 255, pagemap_template, getpid()) < 0)
-		ksft_exit_fail_msg("get pagemap proc error: %s\n", strerror(errno));
-
-	pagemap_fd = open(pagemap_proc, O_RDONLY);
-	if (pagemap_fd == -1)
-		ksft_exit_fail_msg("read pagemap: %s\n", strerror(errno));
-
-	kpageflags_fd = open(kpageflags_proc, O_RDONLY);
-	if (kpageflags_fd == -1)
-		ksft_exit_fail_msg("read kpageflags: %s\n", strerror(errno));
 
 	one_page = mmap((void *)(1UL << 30), len, PROT_READ | PROT_WRITE,
 			MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
@@ -259,8 +265,6 @@ void split_pte_mapped_thp(void)
 
 	ksft_test_result_pass("Split PTE-mapped huge pages successful\n");
 	munmap(one_page, len);
-	close(pagemap_fd);
-	close(kpageflags_fd);
 }
 
 void split_file_backed_thp(int order)
@@ -463,10 +467,16 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc,
 					int order, int offset)
 {
 	int fd;
+	char *split_addr;
 	char *addr;
 	size_t i;
 	char testfile[INPUT_MAX];
 	int err = 0;
+	int *orders;
+
+	orders = (int *)malloc(sizeof(int) * NR_ORDERS);
+	if (!orders)
+		ksft_exit_fail_msg("Fail to allocate memory: %s\n", strerror(errno));
 
 	err = snprintf(testfile, INPUT_MAX, "%s/test", fs_loc);
@@ -474,16 +484,32 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc,
 		ksft_exit_fail_msg("cannot generate right test file name\n");
 
 	err = create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
-	if (err)
+	if (err) {
+		free(orders);
 		return;
+	}
 	err = 0;
 
-	if (offset == -1)
-		write_debugfs(PID_FMT, getpid(), (uint64_t)addr,
-			      (uint64_t)addr + fd_size, order);
-	else
-		write_debugfs(PID_FMT_OFFSET, getpid(), (uint64_t)addr,
-			      (uint64_t)addr + fd_size, order, offset);
+	memset(orders, 0, sizeof(int) * NR_ORDERS);
+	if (offset == -1) {
+		for (split_addr = addr; split_addr < addr + fd_size; split_addr += pmd_pagesize)
+			write_debugfs(PID_FMT, getpid(), (uint64_t)split_addr,
+				      (uint64_t)split_addr + pagesize, order);
+
+		/* set expected orders */
+		orders[order] = fd_size / (pagesize << order);
+	} else {
+		int times = fd_size / pmd_pagesize;
+
+		for (split_addr = addr; split_addr < addr + fd_size; split_addr += pmd_pagesize)
+			write_debugfs(PID_FMT_OFFSET, getpid(), (uint64_t)split_addr,
+				      (uint64_t)split_addr + pagesize, order, offset);
+
+		/* set expected orders */
+		for (i = order + 1; i < pmd_order; i++)
+			orders[i] = times;
+		orders[order] = 2 * times;
+	}
 
 	for (i = 0; i < fd_size; i++)
 		if (*(addr + i) != (char)i) {
@@ -492,6 +518,14 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc,
 			goto out;
 		}
 
+	if (check_folio_orders(addr, fd_size, pagemap_fd, kpageflags_fd, orders,
+			       NR_ORDERS)) {
+		ksft_print_msg("Unexpected THP split\n");
+		err = 1;
+		goto out;
+	}
+
+
 	if (!check_huge_file(addr, 0, pmd_pagesize)) {
 		ksft_print_msg("Still FilePmdMapped not split\n");
 		err = EXIT_FAILURE;
@@ -499,6 +533,7 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc,
 	}
 
 out:
+	free(orders);
 	munmap(addr, fd_size);
 	close(fd);
 	unlink(testfile);
@@ -522,7 +557,6 @@ int main(int argc, char **argv)
 	const char *fs_loc;
 	bool created_tmp;
 	int offset;
-	unsigned int max_order;
 	unsigned int nr_pages;
 	unsigned int tests;
 
@@ -539,6 +573,7 @@ int main(int argc, char **argv)
 	pagesize = getpagesize();
 	pageshift = ffs(pagesize) - 1;
 	pmd_pagesize = read_pmd_pagesize();
+	pmd_order = GET_ORDER(pmd_pagesize / pagesize);
 	if (!pmd_pagesize)
 		ksft_exit_fail_msg("Reading PMD pagesize failed\n");
 
@@ -547,6 +582,14 @@ int main(int argc, char **argv)
 	tests = 2 + (max_order - 1) + (2 * max_order) + (max_order - 1) * 4 + 2;
 	ksft_set_plan(tests);
 
+	pagemap_fd = open(pagemap_proc, O_RDONLY);
+	if (pagemap_fd == -1)
+		ksft_exit_fail_msg("read pagemap: %s\n", strerror(errno));
+
+	kpageflags_fd = open(kpageflags_proc, O_RDONLY);
+	if (kpageflags_fd == -1)
+		ksft_exit_fail_msg("read kpageflags: %s\n", strerror(errno));
+
 	fd_size = 2 * pmd_pagesize;
 
 	split_pmd_zero_pages();
@@ -571,6 +614,9 @@ int main(int argc, char **argv)
 			split_thp_in_pagecache_to_order_at(fd_size, fs_loc, i, offset);
 
 	cleanup_thp_fs(fs_loc, created_tmp);
 
+	close(pagemap_fd);
+	close(kpageflags_fd);
+
 	ksft_finished();
 
 	return 0;
On 8/9/25 12:31 AM, Zi Yan wrote:
Instead of just checking the existence of PMD folios before and after folio split tests, use check_folio_orders() to check after-split folio orders.
The following tests are not changed:
- split_pte_mapped_thp: the test already uses kpageflags to check;
- split_file_backed_thp: no vaddr available.
Signed-off-by: Zi Yan ziy@nvidia.com
.../selftests/mm/split_huge_page_test.c | 98 ++++++++++++++----- 1 file changed, 72 insertions(+), 26 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c index 5ab488fab1cd..161108717f1c 100644 --- a/tools/testing/selftests/mm/split_huge_page_test.c +++ b/tools/testing/selftests/mm/split_huge_page_test.c @@ -25,6 +25,10 @@ uint64_t pagesize; unsigned int pageshift; uint64_t pmd_pagesize; +unsigned int pmd_order; +unsigned int max_order;
+#define NR_ORDERS (max_order + 1) #define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages" #define SMAP_PATH "/proc/self/smaps" @@ -36,6 +40,11 @@ uint64_t pmd_pagesize; #define GET_ORDER(nr_pages) (31 - __builtin_clz(nr_pages)) +const char *pagemap_proc = "/proc/self/pagemap"; +const char *kpageflags_proc = "/proc/kpageflags"; +int pagemap_fd; +int kpageflags_fd;
- int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file) { uint64_t paddr;
@@ -151,6 +160,11 @@ void split_pmd_thp_to_order(int order) char *one_page; size_t len = 4 * pmd_pagesize; size_t i;
- int *orders;
- orders = (int *)malloc(sizeof(int) * NR_ORDERS);
- if (!orders)
ksft_exit_fail_msg("Fail to allocate memory: %s\n", strerror(errno));
one_page = memalign(pmd_pagesize, len); if (!one_page) @@ -172,12 +186,20 @@ void split_pmd_thp_to_order(int order) if (one_page[i] != (char)i) ksft_exit_fail_msg("%ld byte corrupted\n", i);
- memset(orders, 0, sizeof(int) * NR_ORDERS);
- /* set expected orders */
- orders[order] = 4 << (pmd_order - order);
- if (check_folio_orders(one_page, len, pagemap_fd, kpageflags_fd,
orders, NR_ORDERS))
ksft_exit_fail_msg("Unexpected THP split\n");
if (!check_huge_anon(one_page, 0, pmd_pagesize)) ksft_exit_fail_msg("Still AnonHugePages not split\n"); ksft_test_result_pass("Split huge pages to order %d successful\n", order); free(one_page);
- free(orders); }
void split_pte_mapped_thp(void) @@ -186,22 +208,6 @@ void split_pte_mapped_thp(void) size_t len = 4 * pmd_pagesize; uint64_t thp_size; size_t i;
- const char *pagemap_template = "/proc/%d/pagemap";
- const char *kpageflags_proc = "/proc/kpageflags";
- char pagemap_proc[255];
- int pagemap_fd;
- int kpageflags_fd;
- if (snprintf(pagemap_proc, 255, pagemap_template, getpid()) < 0)
ksft_exit_fail_msg("get pagemap proc error: %s\n", strerror(errno));
- pagemap_fd = open(pagemap_proc, O_RDONLY);
- if (pagemap_fd == -1)
ksft_exit_fail_msg("read pagemap: %s\n", strerror(errno));
- kpageflags_fd = open(kpageflags_proc, O_RDONLY);
- if (kpageflags_fd == -1)
ksft_exit_fail_msg("read kpageflags: %s\n", strerror(errno));
one_page = mmap((void *)(1UL << 30), len, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); @@ -259,8 +265,6 @@ void split_pte_mapped_thp(void) ksft_test_result_pass("Split PTE-mapped huge pages successful\n"); munmap(one_page, len);
- close(pagemap_fd);
- close(kpageflags_fd); }
void split_file_backed_thp(int order) @@ -463,10 +467,16 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, int order, int offset) { int fd;
- char *split_addr; char *addr; size_t i; char testfile[INPUT_MAX]; int err = 0;
- int *orders;
- orders = (int *)malloc(sizeof(int) * NR_ORDERS);
- if (!orders)
ksft_exit_fail_msg("Fail to allocate memory: %s\n", strerror(errno));
err = snprintf(testfile, INPUT_MAX, "%s/test", fs_loc); @@ -474,16 +484,32 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, ksft_exit_fail_msg("cannot generate right test file name\n"); err = create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
- if (err)
- if (err) {
return;free(orders);
- } err = 0;
- if (offset == -1)
write_debugfs(PID_FMT, getpid(), (uint64_t)addr,
(uint64_t)addr + fd_size, order);
- else
write_debugfs(PID_FMT_OFFSET, getpid(), (uint64_t)addr,
(uint64_t)addr + fd_size, order, offset);
- memset(orders, 0, sizeof(int) * NR_ORDERS);
- if (offset == -1) {
for (split_addr = addr; split_addr < addr + fd_size; split_addr += pmd_pagesize)
write_debugfs(PID_FMT, getpid(), (uint64_t)split_addr,
(uint64_t)split_addr + pagesize, order);
/* set expected orders */
orders[order] = fd_size / (pagesize << order);
- } else {
int times = fd_size / pmd_pagesize;
for (split_addr = addr; split_addr < addr + fd_size; split_addr += pmd_pagesize)
write_debugfs(PID_FMT_OFFSET, getpid(), (uint64_t)split_addr,
(uint64_t)split_addr + pagesize, order, offset);
/* set expected orders */
for (i = order + 1; i < pmd_order; i++)
orders[i] = times;
orders[order] = 2 * times;
- }
for (i = 0; i < fd_size; i++) if (*(addr + i) != (char)i) { @@ -492,6 +518,14 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, goto out; }
- if (check_folio_orders(addr, fd_size, pagemap_fd, kpageflags_fd, orders,
NR_ORDERS)) {
ksft_print_msg("Unexpected THP split\n");
err = 1;
goto out;
- }
- if (!check_huge_file(addr, 0, pmd_pagesize)) { ksft_print_msg("Still FilePmdMapped not split\n"); err = EXIT_FAILURE;
@@ -499,6 +533,7 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, } out:
- free(orders); munmap(addr, fd_size); close(fd); unlink(testfile);
@@ -522,7 +557,6 @@ int main(int argc, char **argv) const char *fs_loc; bool created_tmp; int offset;
- unsigned int max_order; unsigned int nr_pages; unsigned int tests;
@@ -539,6 +573,7 @@ int main(int argc, char **argv) pagesize = getpagesize(); pageshift = ffs(pagesize) - 1; pmd_pagesize = read_pmd_pagesize();
- pmd_order = GET_ORDER(pmd_pagesize / pagesize);
I think max_order is also the same as pmd_order:
nr_pages = pmd_pagesize / pagesize; max_order = GET_ORDER(nr_pages);
Can we use one?
if (!pmd_pagesize) ksft_exit_fail_msg("Reading PMD pagesize failed\n"); @@ -547,6 +582,14 @@ int main(int argc, char **argv) tests = 2 + (max_order - 1) + (2 * max_order) + (max_order - 1) * 4 + 2; ksft_set_plan(tests);
- pagemap_fd = open(pagemap_proc, O_RDONLY);
- if (pagemap_fd == -1)
ksft_exit_fail_msg("read pagemap: %s\n", strerror(errno));
- kpageflags_fd = open(kpageflags_proc, O_RDONLY);
- if (kpageflags_fd == -1)
ksft_exit_fail_msg("read kpageflags: %s\n", strerror(errno));
- fd_size = 2 * pmd_pagesize;
split_pmd_zero_pages(); @@ -571,6 +614,9 @@ int main(int argc, char **argv) split_thp_in_pagecache_to_order_at(fd_size, fs_loc, i, offset); cleanup_thp_fs(fs_loc, created_tmp);
- close(pagemap_fd);
- close(kpageflags_fd);
- ksft_finished();
return 0;
On 10 Aug 2025, at 12:53, Donet Tom wrote:
On 8/9/25 12:31 AM, Zi Yan wrote:
Instead of just checking the existence of PMD folios before and after folio split tests, use check_folio_orders() to check after-split folio orders.
The following tests are not changed:
- split_pte_mapped_thp: the test already uses kpageflags to check;
- split_file_backed_thp: no vaddr available.
Signed-off-by: Zi Yan ziy@nvidia.com
.../selftests/mm/split_huge_page_test.c | 98 ++++++++++++++----- 1 file changed, 72 insertions(+), 26 deletions(-)
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c index 5ab488fab1cd..161108717f1c 100644 --- a/tools/testing/selftests/mm/split_huge_page_test.c +++ b/tools/testing/selftests/mm/split_huge_page_test.c @@ -25,6 +25,10 @@ uint64_t pagesize; unsigned int pageshift; uint64_t pmd_pagesize; +unsigned int pmd_order; +unsigned int max_order;
+#define NR_ORDERS (max_order + 1) #define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages" #define SMAP_PATH "/proc/self/smaps" @@ -36,6 +40,11 @@ uint64_t pmd_pagesize; #define GET_ORDER(nr_pages) (31 - __builtin_clz(nr_pages)) +const char *pagemap_proc = "/proc/self/pagemap"; +const char *kpageflags_proc = "/proc/kpageflags"; +int pagemap_fd; +int kpageflags_fd;
- int is_backed_by_thp(char *vaddr, int pagemap_file, int kpageflags_file) { uint64_t paddr;
@@ -151,6 +160,11 @@ void split_pmd_thp_to_order(int order) char *one_page; size_t len = 4 * pmd_pagesize; size_t i;
- int *orders;
- orders = (int *)malloc(sizeof(int) * NR_ORDERS);
- if (!orders)
one_page = memalign(pmd_pagesize, len); if (!one_page)ksft_exit_fail_msg("Fail to allocate memory: %s\n", strerror(errno));
@@ -172,12 +186,20 @@ void split_pmd_thp_to_order(int order) if (one_page[i] != (char)i) ksft_exit_fail_msg("%ld byte corrupted\n", i);
- memset(orders, 0, sizeof(int) * NR_ORDERS);
- /* set expected orders */
- orders[order] = 4 << (pmd_order - order);
- if (check_folio_orders(one_page, len, pagemap_fd, kpageflags_fd,
orders, NR_ORDERS))
if (!check_huge_anon(one_page, 0, pmd_pagesize)) ksft_exit_fail_msg("Still AnonHugePages not split\n"); ksft_test_result_pass("Split huge pages to order %d successful\n", order); free(one_page);ksft_exit_fail_msg("Unexpected THP split\n");
- free(orders); } void split_pte_mapped_thp(void)
@@ -186,22 +208,6 @@ void split_pte_mapped_thp(void) size_t len = 4 * pmd_pagesize; uint64_t thp_size; size_t i;
- const char *pagemap_template = "/proc/%d/pagemap";
- const char *kpageflags_proc = "/proc/kpageflags";
- char pagemap_proc[255];
- int pagemap_fd;
- int kpageflags_fd;
- if (snprintf(pagemap_proc, 255, pagemap_template, getpid()) < 0)
ksft_exit_fail_msg("get pagemap proc error: %s\n", strerror(errno));
- pagemap_fd = open(pagemap_proc, O_RDONLY);
- if (pagemap_fd == -1)
ksft_exit_fail_msg("read pagemap: %s\n", strerror(errno));
- kpageflags_fd = open(kpageflags_proc, O_RDONLY);
- if (kpageflags_fd == -1)
one_page = mmap((void *)(1UL << 30), len, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);ksft_exit_fail_msg("read kpageflags: %s\n", strerror(errno));
@@ -259,8 +265,6 @@ void split_pte_mapped_thp(void) ksft_test_result_pass("Split PTE-mapped huge pages successful\n"); munmap(one_page, len);
- close(pagemap_fd);
- close(kpageflags_fd); } void split_file_backed_thp(int order)
@@ -463,10 +467,16 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, int order, int offset) { int fd;
- char *split_addr; char *addr; size_t i; char testfile[INPUT_MAX]; int err = 0;
- int *orders;
- orders = (int *)malloc(sizeof(int) * NR_ORDERS);
- if (!orders)
err = snprintf(testfile, INPUT_MAX, "%s/test", fs_loc);ksft_exit_fail_msg("Fail to allocate memory: %s\n", strerror(errno));
@@ -474,16 +484,32 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, ksft_exit_fail_msg("cannot generate right test file name\n"); err = create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr);
- if (err)
- if (err) {
return;free(orders);
- } err = 0;
- if (offset == -1)
write_debugfs(PID_FMT, getpid(), (uint64_t)addr,
(uint64_t)addr + fd_size, order);
- else
write_debugfs(PID_FMT_OFFSET, getpid(), (uint64_t)addr,
(uint64_t)addr + fd_size, order, offset);
- memset(orders, 0, sizeof(int) * NR_ORDERS);
- if (offset == -1) {
for (split_addr = addr; split_addr < addr + fd_size; split_addr += pmd_pagesize)
write_debugfs(PID_FMT, getpid(), (uint64_t)split_addr,
(uint64_t)split_addr + pagesize, order);
/* set expected orders */
orders[order] = fd_size / (pagesize << order);
- } else {
int times = fd_size / pmd_pagesize;
for (split_addr = addr; split_addr < addr + fd_size; split_addr += pmd_pagesize)
write_debugfs(PID_FMT_OFFSET, getpid(), (uint64_t)split_addr,
(uint64_t)split_addr + pagesize, order, offset);
/* set expected orders */
for (i = order + 1; i < pmd_order; i++)
orders[i] = times;
orders[order] = 2 * times;
- } for (i = 0; i < fd_size; i++) if (*(addr + i) != (char)i) {
@@ -492,6 +518,14 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, goto out; }
- if (check_folio_orders(addr, fd_size, pagemap_fd, kpageflags_fd, orders,
NR_ORDERS)) {
ksft_print_msg("Unexpected THP split\n");
err = 1;
goto out;
- }
- if (!check_huge_file(addr, 0, pmd_pagesize)) { ksft_print_msg("Still FilePmdMapped not split\n"); err = EXIT_FAILURE;
@@ -499,6 +533,7 @@ void split_thp_in_pagecache_to_order_at(size_t fd_size, const char *fs_loc, } out:
- free(orders); munmap(addr, fd_size); close(fd); unlink(testfile);
@@ -522,7 +557,6 @@ int main(int argc, char **argv) const char *fs_loc; bool created_tmp; int offset;
- unsigned int max_order; unsigned int nr_pages; unsigned int tests;
@@ -539,6 +573,7 @@ int main(int argc, char **argv) pagesize = getpagesize(); pageshift = ffs(pagesize) - 1; pmd_pagesize = read_pmd_pagesize();
- pmd_order = GET_ORDER(pmd_pagesize / pagesize);
I think max_order is also same as pmd_order
nr_pages = pmd_pagesize / pagesize; max_order = GET_ORDER(nr_pages);
Can we use one?
Sure. Will rename max_order to pmd_order. Thanks.
if (!pmd_pagesize) ksft_exit_fail_msg("Reading PMD pagesize failed\n"); @@ -547,6 +582,14 @@ int main(int argc, char **argv) tests = 2 + (max_order - 1) + (2 * max_order) + (max_order - 1) * 4 + 2; ksft_set_plan(tests);
- pagemap_fd = open(pagemap_proc, O_RDONLY);
- if (pagemap_fd == -1)
ksft_exit_fail_msg("read pagemap: %s\n", strerror(errno));
- kpageflags_fd = open(kpageflags_proc, O_RDONLY);
- if (kpageflags_fd == -1)
ksft_exit_fail_msg("read kpageflags: %s\n", strerror(errno));
- fd_size = 2 * pmd_pagesize; split_pmd_zero_pages();
@@ -571,6 +614,9 @@ int main(int argc, char **argv) split_thp_in_pagecache_to_order_at(fd_size, fs_loc, i, offset); cleanup_thp_fs(fs_loc, created_tmp);
- close(pagemap_fd);
- close(kpageflags_fd);
- ksft_finished(); return 0;
Best Regards, Yan, Zi