On Sat, Mar 21, 2020 at 9:31 PM Andrew Morton <akpm@linux-foundation.org> wrote:
On Sat, 21 Mar 2020 22:03:26 -0400 Rafael Aquini <aquini@redhat.com> wrote:
+/*
+ * In order to sort out that race, and get the after fault checks consistent,
+ * the "quick and dirty" trick below is required in order to force a call to
+ * lru_add_drain_all() to get the recently MLOCK_ONFAULT pages moved to
+ * the unevictable LRU, as expected by the checks in this selftest.
+ */
+static void force_lru_add_drain_all(void)
+{
+	sched_yield();
+	system("echo 1 > /proc/sys/vm/compact_memory");
+}
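For context, a minimal standalone sketch of how such a helper would slot into the test flow; the mmap/mlock2 scaffolding here is illustrative, not the actual selftest code (mlock2() needs glibc 2.27+ or a raw syscall):

#define _GNU_SOURCE
#include <sched.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static void force_lru_add_drain_all(void)
{
	sched_yield();
	system("echo 1 > /proc/sys/vm/compact_memory");
}

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	char *map = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (map == MAP_FAILED)
		return 1;

	mlock2(map, pagesize, MLOCK_ONFAULT);	/* nothing faulted in yet */
	map[0] = 'a';	/* first touch: page lands on a per-cpu pagevec */

	/*
	 * Without a drain here, the page may still sit on the pagevec
	 * rather than the unevictable LRU, so a flag check racing with
	 * that would report a false negative.
	 */
	force_lru_add_drain_all();

	/* ... the pagemap/kpageflags checks would run here ... */
	munmap(map, pagesize);
	return 0;
}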
What is the sched_yield() for?
Mostly it's there to provide a sleeping gap after the fault, without adding an arbitrary delay via usleep().
It's not a hard requirement, but in some of the tests I performed without that sleeping gap I would still see around a 1% chance of hitting the false negative. After adding it, I could not hit the issue anymore.
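For reference, the kind of probe that can race with the pagevecs looks roughly like this; a hedged sketch, with the entry layouts per Documentation/admin-guide/mm/pagemap.rst (reading /proc/kpageflags needs root):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define PM_PFN_MASK	((1ULL << 55) - 1)	/* pagemap bits 0-54: PFN */
#define KPF_UNEVICTABLE	18			/* bit index in kpageflags */

static int page_is_unevictable(void *addr)
{
	uint64_t entry = 0, flags = 0;
	long pagesize = sysconf(_SC_PAGESIZE);
	int pm = open("/proc/self/pagemap", O_RDONLY);
	int kp = open("/proc/kpageflags", O_RDONLY);
	int ret = -1;

	if (pm < 0 || kp < 0)
		goto out;
	/* pagemap: one u64 per virtual page, indexed by virtual page number */
	if (pread(pm, &entry, sizeof(entry),
		  ((uintptr_t)addr / pagesize) * sizeof(entry)) != sizeof(entry))
		goto out;
	/* kpageflags: one u64 per physical page, indexed by PFN */
	if (pread(kp, &flags, sizeof(flags),
		  (entry & PM_PFN_MASK) * sizeof(flags)) != sizeof(flags))
		goto out;
	ret = !!(flags & (1ULL << KPF_UNEVICTABLE));
out:
	if (pm >= 0)
		close(pm);
	if (kp >= 0)
		close(kp);
	return ret;
}

If the faulted page is still sitting on a per-cpu pagevec when this runs, the KPF_UNEVICTABLE bit is not set yet, which is the false negative described above.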
It's concerning that such deep machinery as pagevec draining is visible to userspace.
We already have other examples, like the memcg stats, where optimizations such as batched per-cpu stats collection expose differences to userspace. I would not be that worried here.
I suppose that for consistency and correctness we should perform a drain prior to each read from /proc/*/pagemap. Presumably this would be far too expensive.
Is there any other way? One option might be to make the MLOCK_ONFAULT pages bypass the lru_add_pvecs?
I would rather have something similar to /proc/sys/vm/stat_refresh that drains the pagevecs.
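A knob like that would reduce the selftest helper to a single write; a sketch, with a purely made-up sysctl name (stat_refresh exists today, a drain counterpart does not, and only compact_memory works as an indirect drain trigger):

#include <stdlib.h>

static void force_lru_add_drain_all(void)
{
	/* hypothetical sysctl modeled on /proc/sys/vm/stat_refresh */
	system("echo 1 > /proc/sys/vm/drain_pagevecs");
}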