On 02/03/23 14:40, Michal Hocko wrote:
On Wed 01-02-23 13:05:35, Mike Kravetz wrote:
On 02/01/23 08:47, Michal Hocko wrote:
process.
- At page fault time, check if we are adding a new entry to a shared PMD. If yes, add 'num_of_sharing_processes - 1' to the ref and map counts.
In each of the above operations, we are holding the PTL lock (which is really the split/PMD lock) so synchronization should not be an issue.
Although I mention processes sharing the PMD above, it is really mappings/vmas sharing the PMD. You could have two mappings of the same object in the same process sharing PMDs.
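To make the fault-time piece concrete, a rough sketch of the kind of adjustment described above could look like the following. This is illustrative only: huge_pmd_shared_count() and add_hugetlb_mapcount() are hypothetical placeholders standing in for "number of mappings sharing this PMD page" and "bump the page's map count"; neither is an existing kernel interface.

/*
 * Illustrative sketch only, not the actual patch.
 * huge_pmd_shared_count() and add_hugetlb_mapcount() are hypothetical.
 */
static void account_fault_on_shared_pmd(struct vm_area_struct *vma,
					struct mm_struct *mm, pte_t *ptep,
					struct page *page)
{
	spinlock_t *ptl;
	int extra;

	/* the PTL here is really the split PMD lock, so counts stay consistent */
	ptl = huge_pte_lock(hstate_vma(vma), mm, ptep);

	/*
	 * The new entry becomes visible to every mapping sharing the PMD
	 * at once, so add 'sharers - 1' on top of the usual single
	 * ref/map count increment done by the fault path.
	 */
	extra = huge_pmd_shared_count(mm, ptep) - 1;
	if (extra > 0) {
		page_ref_add(page, extra);
		add_hugetlb_mapcount(page, extra);	/* hypothetical helper */
	}

	spin_unlock(ptl);
}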
I'll code this up and see how it looks.
Thanks!
However, unless you have an objection I would prefer the simple patches move forward, especially for stable backports.
Yes, the current patch is much simpler and more suitable for stable backports. If the explicit map count modifications are not all that terrible then this would sound like a more appropriate long term plan though.
The approach mentioned above seems to be simple enough. Patch is below.
I 'tested' with the same method and tests used to measure fault scalability when developing vma based locking [1]. I figured this would be a good stress of the share, unshare and fault paths. With the patch, we are doing more with the page table lock held, so I expected to see a small difference in scalability, but not as much as was actually measured:
next-20230131
test              instances    unmodified    patched
Combined faults       24         61888.4     58314.8
Combined forks        24           157.3       130.1
So faults are 6% slower while forks are hit by 18%. This is quite a lot and more than I expected. pmd sharing shouldn't really be a common operation AFAICS. It should only happen with the first mapping in the pud (so roughly once every 512 faults, i.e. 1GB/2MB, for fully populated mappings).
Just want to be perfectly clear on what is being measured in these tests:
- faults
  The program does the following in a loop:
  . mmap 250GB hugetlb file PUD_SIZE aligned
  . read fault each hugetlb page (so all on shared PMDs)
  . unmap file
  The measurement is how many times this map/read/unmap loop is completed.
- fork
  The program does the following:
  . mmap 250GB hugetlb file PUD_SIZE aligned
  . read fault 3 pages on different PUDs
  . fork
  . child write faults 3 pages on different PUDs
  . child exits
  The measurement is how many children can be created sequentially.
For the results above, 24 instances of the fault program are run in parallel with 24 instances of the fork program.
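Roughly, each fault instance does something like the sketch below. The hugetlbfs path, the 2MB huge page size, and the reporting are illustrative assumptions, not the exact test program:

/* Illustrative sketch of the fault loop; not the exact test program. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN		(250UL << 30)	/* 250GB hugetlbfs file */
#define HPAGE_SIZE	(2UL << 20)	/* assumes 2MB huge pages */

int main(void)
{
	unsigned long completed = 0;

	for (;;) {
		/* assumed: preallocated file on a hugetlbfs mount */
		int fd = open("/mnt/hugetlbfs/testfile", O_RDWR);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/*
		 * The real test maps the file PUD_SIZE aligned so every
		 * huge page lands on a shareable PMD; the alignment fixup
		 * is omitted here for brevity.
		 */
		char *addr = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
				  MAP_SHARED, fd, 0);
		if (addr == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* read fault each huge page */
		for (unsigned long off = 0; off < MAP_LEN; off += HPAGE_SIZE)
			(void)*(volatile char *)(addr + off);

		munmap(addr, MAP_LEN);
		close(fd);

		/* measurement: map/read/unmap loops completed */
		printf("loops completed: %lu\n", ++completed);
	}
	return 0;
}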
It would be good to know whether this is purely lock contention based or the additional work in each #pf and unmapping makes a big impact as well.
I did not do a deep dive into the exact cause of the slowdown. Do note how much we are executing the PMD sharing paths in these tests. I did not really plan it that way, but was trying to simulate what might be happening in a customer environment.
These tests could seem a bit like a micro-benchmark targeting these code paths. However, I put them together based on the description of a customer workload that prompted the vma based locking work. And, performance of these tests seems to reflect performance of their workloads.
This extra overhead is the cost needed to make shared PMD map counts accurate and in line with what is normal and expected. I think it is worth the cost. Other opinions? Of course, the patch below may have issues, so please take a look.
If 18% slowdown really reflects a real workload then this might just be too expensive I am afraid.
On second thought, I tend to agree. The fixes already done cover all known exposures from inaccurate counts due to PMD sharing. If we want to move forward with the approach here, I would like to:
- Do more analysis in order to explain exactly why this is happening.
- Try to run the proposed patch in a more accurate simulation of the customer environment to determine if the slowdown is actually visible. I do not have access to such an environment, so this will require cooperation from an external vendor/customer.
Unless someone thinks we should move forward, I will not push the code for this approach now. It will also be interesting to see if this is impacted at all by the outcome of discussions to perhaps redesign mapcount.