On Tue, Sep 23, 2025 at 02:00:01PM +0200, David Hildenbrand wrote:
On 23.09.25 13:52, Catalin Marinas wrote:
I just realised that on arm64 with MTE we won't get any merging with the zero page even if the user page isn't mapped with PROT_MTE. In cpu_enable_mte() we zero the tags in the zero page and set PG_mte_tagged. The reason is that we want to use the zero page with PROT_MTE mappings (until tag setting causes CoW). Hmm, the arm64 memcmp_pages() messed up KSM merging with the zero page even before this patch.
The MTE tag setting has evolved a bit over time, with some locking using PG_* flags to avoid a race between set_pte_at() callers trying to initialise the tags on the same page. We also moved the swap restoring to arch_swap_restore() rather than the set_pte_at() path. So it is now safe to merge with the zero page if the other page isn't tagged: a subsequent set_pte_at() attempting to clear the tags would notice that the zero page is already tagged.
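For reference, this is roughly the pattern cpu_enable_mte() uses for the zero page (a simplified sketch, error handling elided):

	/*
	 * Only the first caller to claim PG_mte_lock via
	 * try_page_mte_tagging() initialises the tags; a later
	 * set_pte_at() sees page_mte_tagged() == true and backs
	 * off instead of touching the tags again.
	 */
	if (try_page_mte_tagging(page)) {
		mte_clear_page_tags(page_address(page));
		set_page_mte_tagged(page);
	}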
We could go a step further and add tag comparison (I had some code around) but I think the quick fix is to just not treat the zero page as tagged.
I assume any tag changes would result in CoW.
Yes.
It would be interesting to know if there are use cases with VMs or other workloads where that could be beneficial with KSM.
With VMs, if MTE is allowed in the guest, we currently treat any VM page as tagged. In the initial version of the MTE spec, we did not have any fine-grained control at the stage 2 page table over whether MTE is in use by the guest (just a big knob in a control register). We later got FEAT_MTE_PERM, which allows stage 2 to trap tag accesses in a VM on a page-by-page basis, though we haven't added KVM support for it yet.
If we add full tag comparison, VMs may be able to share more pages. For example, code pages are never tagged in a VM, but the hypervisor doesn't know this, so it just avoids sharing them. I posted tag comparison some years ago but eventually dropped it to keep things simple:
https://lore.kernel.org/all/20200421142603.3894-9-catalin.marinas@arm.com/
However, it needs a bit of tidying up since at the time we assumed everything was tagged. I can respin the above (on top of the fix below), though I don't see many vendors rushing to deploy MTE in a multi-VM scenario (Android + pKVM maybe but not sure they enable KSM due to power constraints).
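For illustration, full tag comparison would be something along these lines (a rough sketch from memory, not the code from the link above; mte_load_tag() is a hypothetical helper wrapping the LDG instruction, while MTE_GRANULE_SIZE is the real 16-byte granule size from asm/mte-def.h):

static bool mte_page_tags_equal(struct page *page1, struct page *page2)
{
	unsigned long addr1 = (unsigned long)page_address(page1);
	unsigned long addr2 = (unsigned long)page_address(page2);
	unsigned long off;

	/* allocation tags cover 16-byte granules */
	for (off = 0; off < PAGE_SIZE; off += MTE_GRANULE_SIZE)
		if (mte_load_tag(addr1 + off) != mte_load_tag(addr2 + off))
			return false;

	return true;
}

Anyway, the quick fix first: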
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index e5e773844889..72a1dfc54659 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -73,6 +73,8 @@ int memcmp_pages(struct page *page1, struct page *page2)
 {
 	char *addr1, *addr2;
 	int ret;
+	bool page1_tagged = page_mte_tagged(page1) && !is_zero_page(page1);
+	bool page2_tagged = page_mte_tagged(page2) && !is_zero_page(page2);
 
 	addr1 = page_address(page1);
 	addr2 = page_address(page2);
@@ -83,11 +85,10 @@ int memcmp_pages(struct page *page1, struct page *page2)
 
 	/*
 	 * If the page content is identical but at least one of the pages is
-	 * tagged, return non-zero to avoid KSM merging. If only one of the
-	 * pages is tagged, __set_ptes() may zero or change the tags of the
-	 * other page via mte_sync_tags().
+	 * tagged, return non-zero to avoid KSM merging. Ignore the zero page
+	 * since it is always tagged with the tags cleared.
 	 */
-	if (page_mte_tagged(page1) || page_mte_tagged(page2))
+	if (page1_tagged || page2_tagged)
 		return addr1 != addr2;
That looks reasonable to me.
@Lance, as you had a test setup, could you give this a try with KSM shared zeropage deduplication enabled and check whether it now works as expected?
Thanks!
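In case it helps, a minimal user-space trigger could look something like this (a sketch, assuming KSM is running with /sys/kernel/mm/ksm/use_zero_pages set to 1; the effect should show up in the ksm_zero_pages counter):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 256UL << 20;	/* 256M of anonymous memory */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, len);		/* write-fault in zero-filled pages */
	if (madvise(buf, len, MADV_MERGEABLE)) {
		perror("madvise");	/* hand the range to ksmd */
		return 1;
	}
	pause();	/* keep the mapping alive while ksmd scans it */
	return 0;
}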
Then, this should likely be an independent fix.
Yes, I'll add a proper commit message. We could Cc stable, though it's more of an optimisation.