On Tue, Jul 16, 2024 at 12:44 PM Tejun Heo <tj@kernel.org> wrote:
Hello,
On Tue, Jul 16, 2024 at 03:48:17PM +0200, Michal Hocko wrote: ...
This behavior is particularly useful for work scheduling systems that need to track the memory usage of worker processes/cgroups per work item. Since memory can't be squeezed the way CPU can (the OOM-killer has opinions), these systems need to track peak memory usage to compute system/container fullness when bin-packing work items.
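As a minimal sketch of the pattern described above: a scheduler reads a worker cgroup's memory.peak after each work item and uses the recorded peaks to estimate fullness. The cgroup path, the fullness heuristic, and the numbers are illustrative assumptions, not part of the patch.

```python
import os

def read_peak(cgroup_dir: str) -> int:
    """Read the high watermark, in bytes, from a cgroup2 memory.peak file.

    Requires a live cgroup2 hierarchy, e.g. cgroup_dir = "/sys/fs/cgroup/worker"
    (an assumed path).
    """
    with open(os.path.join(cgroup_dir, "memory.peak")) as f:
        return int(f.read().strip())

def fullness(item_peaks: list[int], capacity_bytes: int) -> float:
    """Estimate container fullness from per-item peaks (a naive bin-packing
    heuristic: sum of peaks over capacity)."""
    return sum(item_peaks) / capacity_bytes

# Illustrative numbers: two work items that peaked at 512 MiB and 768 MiB
# inside a 4 GiB container.
print(fullness([512 * 2**20, 768 * 2**20], 4 * 2**30))  # -> 0.3125
```

Without a reset, read_peak() keeps returning the lifetime maximum of the cgroup, which is why per-work-item measurement motivates the patch.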
Swap still has a bad rep, but there's nothing drastically worse about it than page cache. i.e., if you're under memory pressure, you get thrashing one way or another. If there's no swap, the system is effectively memlocking anon memory even when it is a lot colder than page cache, so I'm skeptical that no swap + mostly anon + kernel OOM kills is a good strategy in general, especially given that system behavior is not very predictable under OOM conditions.
The reason we need peak memory information is to let us schedule work in a way that generally avoids OOM conditions. For the workloads I work on, we usually have very little in the page cache, since the data mostly isn't stored locally but is streamed from other storage/database systems. For those cases, demand paging would cause large variations in servicing time, and we'd rather restart the process than have unpredictable latency. The same is true for the batch/queue-work system I wrote this patch to support: we keep very little data on the local disk, so the page cache is relatively small.
As mentioned down the email thread, I consider the usefulness of the peak value rather limited; it is misleading once memory has been reclaimed. But fundamentally I do not oppose unifying the write behavior to reset values.
The removal of resets was intentional. The problem was that it wasn't clear who owned those counters, and there was no way of telling who reset what, or when. It was easy to accidentally end up with multiple entities each thinking they could get timed measurements by resetting.
So, in general, I don't think this is a great idea. memory.peak has the shortcoming that its meaningfulness quickly declines over time. This is expected; the rationale for adding memory.peak, IIRC, was that it was difficult to tell the memory usage of a short-lived cgroup.
If we want to allow peak measurement over time periods, I wonder whether we could do something similar to pressure triggers - i.e., let users register watchers so that each user can define their own watch periods. This is more involved, but more useful and less error-prone than adding reset to a single shared counter.
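For context on the watcher model being suggested, here is a sketch of how existing PSI pressure triggers are registered from userspace: each watcher opens memory.pressure itself, writes its own threshold/window, and polls its own fd, so no state is shared between users. Per-user peak watermarks would need analogous kernel support; the threshold, window, and cgroup path below are assumptions for illustration.

```python
import os
import select

def trigger_spec(kind: str, stall_us: int, window_us: int) -> bytes:
    """Build a PSI trigger string: total stall time over a window."""
    return f"{kind} {stall_us} {window_us}\0".encode()

def register_memory_trigger(cgroup_dir: str,
                            stall_us: int = 150_000,
                            window_us: int = 1_000_000) -> int:
    """Arm a 'some' trigger on the cgroup's memory.pressure file.

    Each caller gets its own fd, so watch periods never collide between
    users - the property Tejun is pointing at.
    """
    fd = os.open(os.path.join(cgroup_dir, "memory.pressure"),
                 os.O_RDWR | os.O_NONBLOCK)
    os.write(fd, trigger_spec("some", stall_us, window_us))
    return fd

def wait_for_pressure(fd: int, timeout_ms: int) -> bool:
    """Return True when the kernel signals the trigger (POLLPRI) in time."""
    poller = select.poll()
    poller.register(fd, select.POLLPRI)
    return bool(poller.poll(timeout_ms))
```

Registering and polling require a live cgroup2 hierarchy with PSI enabled, so the snippet is only runnable on such a system.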
I appreciate the ownership issues with the current resetting interface in the other locations. However, this peak RSS data is not used by all that many applications (as evidenced by the fact that the memory.peak file was only added a little over a year ago). I think there are enough cases where ownership is enforced externally that mirroring the existing interface in cgroup2 is sufficient.
I do think a more stateful interface would be nice, but I don't know whether I have enough knowledge of memcg to implement that in a reasonable amount of time.
Ownership aside, I think being able to reset the high watermark makes memory.peak significantly more useful; creating new cgroups and moving processes around is a much heavier-weight alternative.
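Under the write-to-reset behavior being proposed (mirroring cgroup1's memory.max_usage_in_bytes reset), per-work-item measurement becomes a reset-then-read loop rather than cgroup churn. A minimal sketch, with the caveat that the exact reset semantics are what this thread is debating and the demo below runs against a stand-in file, since a real memory.peak needs a live cgroup2 hierarchy:

```python
import tempfile

def reset_peak(path: str) -> None:
    """Under the proposed write-to-reset behavior, writing any string to
    memory.peak resets the watermark."""
    with open(path, "w") as f:
        f.write("reset\n")

def peak_since_reset(path: str) -> int:
    """Read the watermark accumulated since the last reset, in bytes."""
    with open(path) as f:
        return int(f.read().strip())

# Stand-in file holding a watermark of 1 MiB, as memory.peak would report it:
with tempfile.NamedTemporaryFile("w", suffix=".peak", delete=False) as tf:
    tf.write("1048576\n")
print(peak_since_reset(tf.name))  # -> 1048576
```

The alternative being compared - create a fresh cgroup per work item, run the work, read memory.peak, then tear the cgroup down - is what the "heavier-weight" remark refers to.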
Thanks,
Johannes, what do you think?
Thanks.
-- tejun