v5: - Use mem_cgroup_usage() as originally suggested by Johannes.
v4: - Add "#ifdef CONFIG_MEMCG" directives around shrink_node_memcgs() to avoid compilation problem with !CONFIG_MEMCG configs.
The test_memcontrol selftest consistently fails its test_memcg_low sub-test and sporadically fails its test_memcg_min sub-test. This patchset fixes the test_memcg_min and test_memcg_low failures by skipping the !usage case in shrink_node_memcgs() and by adjusting the test_memcontrol selftest to address the other causes of the test failures.
Waiman Long (2):
  mm/vmscan: Skip memcg with !usage in shrink_node_memcgs()
  selftests: memcg: Increase error tolerance of child memory.current check in test_memcg_protection()
 mm/internal.h                                    |  9 +++++++++
 mm/memcontrol-v1.h                               |  2 --
 mm/vmscan.c                                      |  4 ++++
 tools/testing/selftests/cgroup/test_memcontrol.c | 11 ++++++++---
 4 files changed, 21 insertions(+), 5 deletions(-)
The test_memcontrol selftest consistently fails its test_memcg_low sub-test because two of its test child cgroups, which have a memory.low of 0 or an effective memory.low of 0, still have low events generated for them, since mem_cgroup_below_low() uses the ">=" operator when comparing to elow (see the sketch after the two cases below).

The two failing use cases are as follows:
1) memory.low is set to 0, but low events can still be triggered, so the cgroup may have a non-zero low event count. Users are unlikely to expect that, as they didn't set memory.low at all.

2) memory.low is set to a non-zero value, but the cgroup has no task in it, so its effective low value is 0. Again, it may have a non-zero low event count if memory reclaim happens. This is probably not a result users expect, and it is really doubtful that users checking an empty cgroup with no task in it would expect non-zero event counts.
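Both cases trace back to the same comparison. For reference, it looks roughly like the following (a simplified sketch of mem_cgroup_below_low(), not the exact code of any particular kernel version):

	static inline bool mem_cgroup_below_low(struct mem_cgroup *target,
						struct mem_cgroup *memcg)
	{
		if (mem_cgroup_unprotected(target, memcg))
			return false;

		/*
		 * With usage == 0 and elow == 0, "0 >= 0" is true, so an
		 * empty cgroup with no protection is still treated as
		 * being below its low limit and a low event is counted
		 * for it when it gets scanned anyway.
		 */
		return READ_ONCE(memcg->memory.elow) >=
			page_counter_read(&memcg->memory);
	}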
In the first case, even though memory.low isn't set, the cgroup may still have some low protection if memory.low is set in its parent, so low events may still be recorded. The test_memcontrol.c test has to be modified to account for that.

For the second case, it really doesn't make sense to record a low event when the cgroup has zero usage, so we need to skip this corner case in shrink_node_memcgs() using mem_cgroup_usage(). The mem_cgroup_usage() function declaration is moved from mm/memcontrol-v1.h to mm/internal.h, with the !CONFIG_MEMCG case defined as always true.

With this patch applied, the test_memcg_low sub-test finishes successfully without failure in most cases, though both the test_memcg_low and test_memcg_min sub-tests may still fail occasionally if the memory.current values fall outside of the expected ranges.
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Waiman Long <longman@redhat.com>
---
 mm/internal.h                                    | 9 +++++++++
 mm/memcontrol-v1.h                               | 2 --
 mm/vmscan.c                                      | 4 ++++
 tools/testing/selftests/cgroup/test_memcontrol.c | 7 ++++++-
 4 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 50c2f590b2d0..c06fb0e8d75c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1535,6 +1535,15 @@ void __meminit __init_page_from_nid(unsigned long pfn, int nid);
 unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
			   int priority);
 
+#ifdef CONFIG_MEMCG
+unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap);
+#else
+static inline unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
+{
+	return 1UL;
+}
+#endif
+
 #ifdef CONFIG_SHRINKER_DEBUG
 static inline __printf(2, 0) int shrinker_debugfs_name_alloc(
			struct shrinker *shrinker, const char *fmt, va_list ap)
diff --git a/mm/memcontrol-v1.h b/mm/memcontrol-v1.h
index 6358464bb416..e92b21af92b1 100644
--- a/mm/memcontrol-v1.h
+++ b/mm/memcontrol-v1.h
@@ -22,8 +22,6 @@
		iter != NULL; \
		iter = mem_cgroup_iter(NULL, iter, NULL))
 
-unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap);
-
 void drain_all_stock(struct mem_cgroup *root_memcg);
 
 unsigned long memcg_events(struct mem_cgroup *memcg, int event);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b620d74b0f66..a771a0145a12 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5963,6 +5963,10 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
		mem_cgroup_calculate_protection(target_memcg, memcg);
 
+		/* Skip memcg with no usage */
+		if (!mem_cgroup_usage(memcg, false))
+			continue;
+
		if (mem_cgroup_below_min(target_memcg, memcg)) {
			/*
			 * Hard protection.
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index 16f5d74ae762..bab826b6b7b0 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -525,8 +525,13 @@ static int test_memcg_protection(const char *root, bool min)
		goto cleanup;
	}
 
+	/*
+	 * Child 2 has memory.low=0, but some low protection is still being
+	 * distributed down from its parent with memory.low=50M. So the low
+	 * event count will be non-zero.
+	 */
	for (i = 0; i < ARRAY_SIZE(children); i++) {
-		int no_low_events_index = 1;
+		int no_low_events_index = 2;
		long low, oom;
 
		oom = cg_read_key_long(children[i], "memory.events", "oom ");
Hello.
On Mon, Apr 07, 2025 at 12:23:15PM -0400, Waiman Long <longman@redhat.com> wrote:
> --- a/mm/memcontrol-v1.h
> +++ b/mm/memcontrol-v1.h
> @@ -22,8 +22,6 @@
>  		iter != NULL; \
>  		iter = mem_cgroup_iter(NULL, iter, NULL))
> 
> -unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap);
Hm, maybe keep it for v1 only where mem_cgroup_usage has meaning for memsw (i.e. do the opposite and move the function definition to -v1.c).
> -
>  void drain_all_stock(struct mem_cgroup *root_memcg);
> 
>  unsigned long memcg_events(struct mem_cgroup *memcg, int event);
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b620d74b0f66..a771a0145a12 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -5963,6 +5963,10 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> 
>  		mem_cgroup_calculate_protection(target_memcg, memcg);
> 
> +		/* Skip memcg with no usage */
> +		if (!mem_cgroup_usage(memcg, false))
> +			continue;
(Not only for v2), there is mem_cgroup_size() for this purpose (already used in mm/vmscan.c).
> +
>  		if (mem_cgroup_below_min(target_memcg, memcg)) {
>  			/*
>  			 * Hard protection.
> 
> diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
> index 16f5d74ae762..bab826b6b7b0 100644
> --- a/tools/testing/selftests/cgroup/test_memcontrol.c
> +++ b/tools/testing/selftests/cgroup/test_memcontrol.c
> @@ -525,8 +525,13 @@ static int test_memcg_protection(const char *root, bool min)
>  		goto cleanup;
>  	}
> 
> +	/*
> +	 * Child 2 has memory.low=0, but some low protection is still being
> +	 * distributed down from its parent with memory.low=50M. So the low
> +	 * event count will be non-zero.
> +	 */
>  	for (i = 0; i < ARRAY_SIZE(children); i++) {
> -		int no_low_events_index = 1;
> +		int no_low_events_index = 2;
See suggestion in https://lore.kernel.org/lkml/awgbdn6gwnj4kfaezsorvopgsdyoty3yahdeanqvoxstz2w...
HTH,
Michal
On 4/11/25 1:11 PM, Michal Koutný wrote:
> Hello.
> 
> On Mon, Apr 07, 2025 at 12:23:15PM -0400, Waiman Long <longman@redhat.com> wrote:
>> --- a/mm/memcontrol-v1.h
>> +++ b/mm/memcontrol-v1.h
>> @@ -22,8 +22,6 @@
>>  		iter != NULL; \
>>  		iter = mem_cgroup_iter(NULL, iter, NULL))
>> 
>> -unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap);
> 
> Hm, maybe keep it for v1 only where mem_cgroup_usage has meaning for memsw (i.e. do the opposite and move the function definition to -v1.c).
memcontrol-v1.c also includes mm/internal.h. That is why I can remove the declaration from here.
>> -
>>  void drain_all_stock(struct mem_cgroup *root_memcg);
>> 
>>  unsigned long memcg_events(struct mem_cgroup *memcg, int event);
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index b620d74b0f66..a771a0145a12 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -5963,6 +5963,10 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
>> 
>>  		mem_cgroup_calculate_protection(target_memcg, memcg);
>> 
>> +		/* Skip memcg with no usage */
>> +		if (!mem_cgroup_usage(memcg, false))
>> +			continue;
> 
> (Not only for v2), there is mem_cgroup_size() for this purpose (already used in mm/vmscan.c).
My understanding is that mem_cgroup_usage() is for both v1 and v2, while mem_cgroup_size() is for v2 only.
>> +
>>  		if (mem_cgroup_below_min(target_memcg, memcg)) {
>>  			/*
>>  			 * Hard protection.
>> 
>> diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
>> index 16f5d74ae762..bab826b6b7b0 100644
>> --- a/tools/testing/selftests/cgroup/test_memcontrol.c
>> +++ b/tools/testing/selftests/cgroup/test_memcontrol.c
>> @@ -525,8 +525,13 @@ static int test_memcg_protection(const char *root, bool min)
>>  		goto cleanup;
>>  	}
>> 
>> +	/*
>> +	 * Child 2 has memory.low=0, but some low protection is still being
>> +	 * distributed down from its parent with memory.low=50M. So the low
>> +	 * event count will be non-zero.
>> +	 */
>>  	for (i = 0; i < ARRAY_SIZE(children); i++) {
>> -		int no_low_events_index = 1;
>> +		int no_low_events_index = 2;
> 
> See suggestion in https://lore.kernel.org/lkml/awgbdn6gwnj4kfaezsorvopgsdyoty3yahdeanqvoxstz2w...
I have just replied to your suggestion.
Cheers,
Longman
> HTH,
> Michal
The test_memcg_protection() function is used for the test_memcg_min and test_memcg_low sub-tests. This function generates a set of parent/child cgroups like:
  parent:  memory.min/low = 50M
  child 0: memory.min/low = 75M, memory.current = 50M
  child 1: memory.min/low = 25M, memory.current = 50M
  child 2: memory.min/low = 0,   memory.current = 50M
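In terms of the selftest's cgroup_util.h helpers, the setup is roughly the following (a sketch only; the real test builds the cgroup paths dynamically and fills memory.current with child worker processes):

	/* min selects memory.min vs memory.low, as in test_memcg_protection() */
	const char *attr = min ? "memory.min" : "memory.low";

	cg_create(parent);
	cg_write(parent, "cgroup.subtree_control", "+memory");
	cg_write(parent, attr, "50M");

	cg_create(children[0]);
	cg_write(children[0], attr, "75M");	/* over-committed protection */
	cg_create(children[1]);
	cg_write(children[1], attr, "25M");
	cg_create(children[2]);
	cg_write(children[2], attr, "0");	/* unprotected child */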
After applying memory pressure, the function expects the following actual memory usages:
  parent:  memory.current ~= 50M
  child 0: memory.current ~= 29M
  child 1: memory.current ~= 21M
  child 2: memory.current ~= 0
In reality, the actual memory usages can differ quite a bit from the expected values. The test uses an error tolerance of 10% via the values_close() helper.

Both the test_memcg_min and test_memcg_low sub-tests can fail sporadically because the actual memory usage exceeds the 10% error tolerance. Below is a sample of the usage data from failing test runs:
  Child  Actual usage  Expected usage   %err
  -----  ------------  --------------   ----
    1      16990208       22020096     -12.9%
    1      17252352       22020096     -12.1%
    0      37699584       30408704     +10.7%
    1      14368768       22020096     -21.0%
    1      16871424       22020096     -13.2%
The current 10% error tolerance may have been appropriate when test_memcontrol.c was first introduced in the v4.18 kernel, but memory reclaim has certainly evolved quite a bit since then, which may result in a bit more run-to-run variation than previously expected.
Increase the error tolerance to 15% for child 0 and 20% for child 1 to minimize the chance of this type of failure. The tolerance is bigger for child 1 because an upswing in child 0 corresponds to a smaller %err than a similar downswing in child 1, due to the way the error is computed in values_close() (see the sketch below).
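For context, values_close() is defined in the selftest's cgroup_util.h essentially as follows (paraphrased):

	/*
	 * The error band scales with the sum of both values, so the same
	 * absolute deviation yields a smaller %err against the larger
	 * expected value (child 0, ~29M) than against the smaller one
	 * (child 1, ~21M), hence the wider tolerance for child 1.
	 */
	static inline int values_close(long a, long b, int err)
	{
		return labs(a - b) <= (a + b) / 100 * err;
	}

Taking the first failing sample above: |16990208 - 22020096| = 5029888, while (16990208 + 22020096) / 100 * 10 = 3901030, so the check fails at the old 10% tolerance but passes at the new 20% one (2 * 3901030 = 7802060 >= 5029888).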
Before this patch, 100 test runs of test_memcontrol produced the following failure counts:
  17  not ok 1 test_memcg_min
  22  not ok 2 test_memcg_low
After applying this patch, there were no test failures for test_memcg_min and test_memcg_low in 100 test runs.
Signed-off-by: Waiman Long <longman@redhat.com>
---
 tools/testing/selftests/cgroup/test_memcontrol.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index bab826b6b7b0..8f4f2479650e 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -495,10 +495,10 @@ static int test_memcg_protection(const char *root, bool min)
	for (i = 0; i < ARRAY_SIZE(children); i++)
		c[i] = cg_read_long(children[i], "memory.current");
 
-	if (!values_close(c[0], MB(29), 10))
+	if (!values_close(c[0], MB(29), 15))
		goto cleanup;
 
-	if (!values_close(c[1], MB(21), 10))
+	if (!values_close(c[1], MB(21), 20))
		goto cleanup;
 
	if (c[3] != 0)
On Mon, Apr 07, 2025 at 12:23:16PM -0400, Waiman Long <longman@redhat.com> wrote:
>   Child  Actual usage  Expected usage   %err
>   -----  ------------  --------------   ----
>     1      16990208       22020096     -12.9%
>     1      17252352       22020096     -12.1%
>     0      37699584       30408704     +10.7%
>     1      14368768       22020096     -21.0%
>     1      16871424       22020096     -13.2%
> 
> The current 10% error tolerance may have been appropriate when test_memcontrol.c was first introduced in the v4.18 kernel, but memory reclaim has certainly evolved quite a bit since then, which may result in a bit more run-to-run variation than previously expected.
I like Roman's suggestion of nr_cpus dependence but I assume your variations were still on the same system, weren't they? Is it fair to say that reclaim is chaotic [1]? I wonder what may cause variations between separate runs of the test.
Would it help to `echo 3 >drop_caches` before each run to have more stable initial conditions? (Not sure if it's OK in selftests.)
<del>Or sleep 0.5s to settle rstat flushing?</del> No, page_counters don't suffer from that, but they do stock up to MEMCG_CHARGE_BATCH pages in percpu stocks. So maybe drain the stock so that the counters are precise after the test? (Either by executing a dummy memcg on each CPU or via some debugging API.)
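Something like the following user-space sketch, perhaps (untested; the cgroup mount path, the required privileges and the exact per-CPU stock displacement behavior are all assumptions):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/stat.h>
	#include <sys/wait.h>
	#include <unistd.h>

	/*
	 * Fork a child pinned to @cpu, move it into @cg and fault one page
	 * there. Charging a different memcg should displace the percpu
	 * stock cached for the test cgroups on that CPU.
	 */
	static void drain_cpu_stock(int cpu, const char *cg)
	{
		cpu_set_t set;
		char path[256];
		FILE *f;
		char *p;
		pid_t pid = fork();

		if (pid != 0) {
			waitpid(pid, NULL, 0);
			return;
		}
		CPU_ZERO(&set);
		CPU_SET(cpu, &set);
		sched_setaffinity(0, sizeof(set), &set);

		snprintf(path, sizeof(path), "%s/cgroup.procs", cg);
		f = fopen(path, "w");
		if (f) {
			fprintf(f, "%d\n", getpid());
			fclose(f);
		}
		/* fresh anonymous page -> forces a new memcg charge */
		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p != MAP_FAILED)
			p[0] = 1;
		_exit(0);
	}

	int main(void)
	{
		/* hypothetical throwaway cgroup used only for draining */
		const char *dummy = "/sys/fs/cgroup/stock-drain";
		int cpu, ncpus = sysconf(_SC_NPROCESSORS_ONLN);

		mkdir(dummy, 0755);
		for (cpu = 0; cpu < ncpus; cpu++)
			drain_cpu_stock(cpu, dummy);
		rmdir(dummy);
		return 0;
	}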
Michal
[1] https://en.wikipedia.org/wiki/Chaos_theory#Chaotic_dynamics
On 4/11/25 1:22 PM, Michal Koutný wrote:
> On Mon, Apr 07, 2025 at 12:23:16PM -0400, Waiman Long <longman@redhat.com> wrote:
>>   Child  Actual usage  Expected usage   %err
>>   -----  ------------  --------------   ----
>>     1      16990208       22020096     -12.9%
>>     1      17252352       22020096     -12.1%
>>     0      37699584       30408704     +10.7%
>>     1      14368768       22020096     -21.0%
>>     1      16871424       22020096     -13.2%
>> 
>> The current 10% error tolerance may have been appropriate when test_memcontrol.c was first introduced in the v4.18 kernel, but memory reclaim has certainly evolved quite a bit since then, which may result in a bit more run-to-run variation than previously expected.
> 
> I like Roman's suggestion of nr_cpus dependence but I assume your variations were still on the same system, weren't they? Is it fair to say that reclaim is chaotic [1]? I wonder what may cause variations between separate runs of the test.
Yes, the variation I saw was on the same system with multiple runs. The memory.current values are read by the time the parent cgroup memory usage reaches near the target 50M, but how much memory remains in each child varies from run to run. You can say that it is somewhat chaotic.
> Would it help to `echo 3 >drop_caches` before each run to have more stable initial conditions? (Not sure if it's OK in selftests.)
I don't know; we may have to try it out. However, I doubt it will have much of an effect.
> <del>Or sleep 0.5s to settle rstat flushing?</del> No, page_counters don't suffer from that, but they do stock up to MEMCG_CHARGE_BATCH pages in percpu stocks. So maybe drain the stock so that the counters are precise after the test? (Either by executing a dummy memcg on each CPU or via some debugging API.)
The test itself already sleeps up to 5 times at 1s intervals to wait until the parent memory usage settles down.
Cheers,
Longman