On Tue, 17 Dec 2019, Mina Almasry wrote:
These counters will track hugetlb reservations rather than hugetlb memory faulted in. This patch only adds the counter; the following patches add the charging and uncharging of the counter.
This is patch 1 of an 8-patch series.
Problem: Currently, tasks attempting to allocate more hugetlb memory than is available get a failure at mmap/shmget time. This is thanks to Hugetlbfs Reservations [1]. However, if a task attempts to allocate hugetlb memory that exceeds only its hugetlb_cgroup limit, the kernel will allow the mmap/shmget call, but will SIGBUS the task when it attempts to fault the memory in.
I think it's subtle, but the use of the word "allocate" instead of "reserve" might be confusing here. Might want to reword it.
We have developers interested in using hugetlb_cgroups, and they have expressed dissatisfaction regarding this behavior. We'd like to improve this behavior such that tasks violating the hugetlb_cgroup limits get an error at mmap/shmget time, rather than getting SIGBUS'd when they try to fault the excess memory in.
I'm not sure the developers are interested in being restricted by hugetlb_cgroups :) I think users get constrained by hugetlb_cgroup, so the developers are more interested in the failure mode: do we want to SIGBUS at fault time with no opportunity to influence that (today), or do we want to fall back to non-hugetlbfs memory and just keep going (tomorrow, after your patchset)?
The underlying problem is that today's hugetlb_cgroup accounting happens at hugetlb memory *fault* time, rather than at *reservation* time. Thus, enforcing the hugetlb_cgroup limit only happens at fault time, and the offending task gets SIGBUS'd.
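For reference, today's charge happens roughly like this on the fault path (a simplified sketch of the allocation code, not an exact excerpt; h, h_cg and ret come from the surrounding allocation function):

	/*
	 * The existing hugetlb_cgroup counter is only charged here, during
	 * the page fault, long after the mmap()/shmget() reservation has
	 * already succeeded.
	 */
	ret = hugetlb_cgroup_charge_cgroup(hstate_index(h),
					   pages_per_huge_page(h), &h_cg);
	if (ret)
		return ERR_PTR(-ENOSPC);	/* the fault path turns this into SIGBUS */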
Proposed Solution: A new page counter named hugetlb.xMB.reservation_[limit|usage]_in_bytes. This counter has slightly different semantics from hugetlb.xMB.[limit|usage]_in_bytes (a rough sketch follows the list below):
Is there a max_usage_in_bytes equivalent? It's a page_counter, so I assume it's easy to support.
I'll defer the naming to Mike here, "rsvd" seems to be the hugetlb way of saying "reserved".
- While usage_in_bytes tracks all *faulted* hugetlb memory,
reservation_usage_in_bytes tracks all *reserved* hugetlb memory and hugetlb memory faulted in without a prior reservation.
- If a task attempts to reserve more memory than limit_in_bytes allows, the kernel will allow it to do so. But if a task attempts to reserve more memory than reservation_limit_in_bytes allows, the kernel will fail the reservation.
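To make the semantics above concrete, here is a minimal sketch of how the new counter could sit next to the existing one, and how a reservation-time charge could fail with an error instead of a later SIGBUS. Field and helper names here are illustrative, not necessarily the ones the patches use:

	struct hugetlb_cgroup {
		struct cgroup_subsys_state css;
		/* existing counter, charged when pages are faulted in */
		struct page_counter hugepage[HUGE_MAX_HSTATE];
		/* new counter, charged when pages are reserved */
		struct page_counter reserved_hugepage[HUGE_MAX_HSTATE];
	};

	/*
	 * Called from the reservation path (mmap/shmget); a failure here is
	 * returned to userspace as an error rather than as a SIGBUS at fault
	 * time.
	 */
	static int hugetlb_cgroup_charge_reservation(int idx, unsigned long nr_pages,
						     struct hugetlb_cgroup *h_cg)
	{
		struct page_counter *counter_over_limit;

		if (!page_counter_try_charge(&h_cg->reserved_hugepage[idx],
					     nr_pages, &counter_over_limit))
			return -ENOMEM;
		return 0;
	}

The existing hugepage[] counter keeps its fault-time behavior, so the two limits can be enforced independently (see "Alternatives considered" below).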
This proposal is implemented in this patch series, with tests to verify functionality and show usage. We also added cgroup-v2 support to hugetlb_cgroup so that the new use cases can be extended to v2.
Alternatives considered:
A new cgroup, instead of only a new page_counter attached to the existing hugetlb_cgroup. Adding a new cgroup seemed like a lot of code duplication with hugetlb_cgroup, and keeping hugetlb-related page counters under hugetlb_cgroup seemed cleaner as well.
Instead of adding a new counter, we considered adding a sysctl that modifies the behavior of hugetlb.xMB.[limit|usage]_in_bytes to do accounting at reservation time rather than fault time. Adding a new page_counter seems better, as userspace could, if it wants, choose to enforce different cgroups differently: one via limit_in_bytes, and another via reservation_limit_in_bytes. This could be very useful if you're transitioning how hugetlb memory is partitioned on your system one cgroup at a time, for example. Also, someone may find a use for both limit_in_bytes and reservation_limit_in_bytes concurrently, and this approach gives them the option to do so.
Testing:
- The added tests pass.
- Used libhugetlbfs for regression testing.
Signed-off-by: Mina Almasry <almasrymina@google.com>
Acked-by: Hillf Danton <hdanton@sina.com>