This series fixes a few issues in the hugetlb cgroup charging selftests (write_to_hugetlbfs.c + charge_reserved_hugetlb.sh): they show up on systems with large hugepages (e.g. 512MB), and a failure makes the test wait indefinitely instead of exiting.
On an aarch64 64k page kernel with 512MB hugepages, the test consistently fails in write_to_hugetlbfs with ENOMEM and then hangs waiting for the expected usage values. The root cause is that charge_reserved_hugetlb.sh mounts hugetlbfs with a fixed size=256M; hugetlbfs rounds the size down to a whole number of hugepages, so with a 512MB hugepage the mount ends up with size=0, i.e. no capacity at all.
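Patch 3 addresses this by sizing the mount based on the hugepage size rather than a fixed 256M. A minimal sketch of one way to do that, with an illustrative page count, variable names, and mount point (not the exact change in the patch):

    # Illustrative: derive the hugetlbfs mount size from the system's
    # hugepage size so the mount can hold the pages the test needs,
    # instead of hard-coding 256M.
    hugepage_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)
    needed_pages=10                      # hypothetical per-test demand
    mount_size_kb=$((hugepage_kb * needed_pages))

    # /mnt/huge is a placeholder mount point and must already exist.
    mount -t hugetlbfs \
          -o pagesize=${hugepage_kb}K,size=${mount_size_kb}K \
          none /mnt/huge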
In addition, write_to_hugetlbfs previously parsed -s via atoi() into an int, which overflows for sizes above INT_MAX and can report negative sizes.
Reproducer / environment:
  - Kernel: 6.12.0-xxx.el10.aarch64+64k
  - Hugepagesize: 524288 kB (512MB)
  - ./charge_reserved_hugetlb.sh -cgroup-v2
  - Observed mount: pagesize=512M,size=0 before this series
After applying the series, the test completes successfully on the above setup.
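For the hang, patch 2 bounds each wait with a timeout helper so a missing or wrong counter value fails the test instead of blocking it forever. A rough sketch of such a helper, with an illustrative name, arguments, and default timeout (not the exact code added by the patch):

    # Illustrative: poll a file until it holds the expected value, but
    # give up after a timeout instead of looping forever.
    wait_for_file_value() {
        local file="$1" expected="$2" timeout="${3:-60}"
        local elapsed=0

        while [ "$(cat "$file" 2>/dev/null)" != "$expected" ]; do
            if [ "$elapsed" -ge "$timeout" ]; then
                echo "timed out waiting for $file == $expected" >&2
                return 1
            fi
            sleep 1
            elapsed=$((elapsed + 1))
        done
    }

    # Example use (path and value are illustrative):
    # wait_for_file_value "$CGROUP/hugetlb.512MB.current" "$expected_bytes" 30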
Li Wang (3):
  selftests/mm/write_to_hugetlbfs: parse -s with strtoull and use size_t
  selftests/mm/charge_reserved_hugetlb.sh: add waits with timeout helper
  selftests/mm/charge_reserved_hugetlb: fix hugetlbfs mount size for large hugepages
 .../selftests/mm/charge_reserved_hugetlb.sh   | 49 ++++++++++---------
 .../testing/selftests/mm/write_to_hugetlbfs.c | 19 +++++--
 2 files changed, 43 insertions(+), 25 deletions(-)