Cc: RISC-V folks
On 2025/9/22 10:36, Zi Yan wrote:
On 21 Sep 2025, at 22:14, Lance Yang wrote:
From: Lance Yang <lance.yang@linux.dev>
When both THP and MTE are enabled, splitting a THP and replacing its zero-filled subpages with the shared zeropage can cause MTE tag mismatch faults in userspace.
Remapping zero-filled subpages to the shared zeropage is unsafe, as the zeropage has a fixed tag of zero, which may not match the tag expected by the userspace pointer.
KSM already avoids this problem by using memcmp_pages(), which on arm64 intentionally reports MTE-tagged pages as non-identical to prevent unsafe merging.
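For reference, the arm64 override is conceptually along these lines (a simplified sketch, not the verbatim arch/arm64 code):

```c
/* Simplified sketch of the arm64 behaviour, not the exact in-tree code. */
int memcmp_pages(struct page *page1, struct page *page2)
{
	char *addr1 = page_address(page1);
	char *addr2 = page_address(page2);
	int ret = memcmp(addr1, addr2, PAGE_SIZE);

	if (!system_supports_mte() || ret)
		return ret;

	/*
	 * The data is identical, but if either page carries MTE tags,
	 * report the pages as different: merging them (or mapping the
	 * zeropage instead) could lose or mismatch the tags.
	 */
	if (page_mte_tagged(page1) || page_mte_tagged(page2))
		return 1;

	return 0;
}
```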
As suggested by David[1], this patch adopts the same pattern, replacing the memchr_inv() byte-level check with a call to pages_identical(). This leverages existing architecture-specific logic to determine if a page is truly identical to the shared zeropage.
Having both the THP shrinker and KSM rely on pages_identical() makes the design more future-proof, IMO. Instead of handling quirks in generic code, we just let the architecture decide what makes two pages identical.
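Conceptually, the helpers involved look roughly like this (a simplified sketch, not their exact in-tree definitions):

```c
/* Sketch: pages_identical() just defers to the arch-overridable hook. */
static inline bool pages_identical(struct page *page1, struct page *page2)
{
	return !memcmp_pages(page1, page2);
}

/*
 * Sketch of the generic fallback: a plain byte comparison. arm64
 * overrides it with its MTE-aware version, so any caller of
 * pages_identical() automatically gets the arch-specific semantics.
 */
int __weak memcmp_pages(struct page *page1, struct page *page2)
{
	char *addr1 = kmap_local_page(page1);
	char *addr2 = kmap_local_page(page2);
	int ret = memcmp(addr1, addr2, PAGE_SIZE);

	kunmap_local(addr2);
	kunmap_local(addr1);
	return ret;
}
```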
[1] https://lore.kernel.org/all/ca2106a3-4bb2-4457-81af-301fd99fbef4@redhat.com
Cc: stable@vger.kernel.org
Reported-by: Qun-wei Lin <Qun-wei.Lin@mediatek.com>
Closes: https://lore.kernel.org/all/a7944523fcc3634607691c35311a5d59d1a3f8d4.camel@m...
Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
Tested on x86_64 and on QEMU for arm64 (with and without MTE support), and the fix works as expected.
From [1], I see you mentioned RISC-V also has the address masking feature. Is it affected by this? And memcmp_pages() is only implemented by ARM64 for MTE. Should any arch with address masking always implement it to avoid the same issue?
Yeah, I'm new to RISC-V, but it does seem to have a similar feature, described in Documentation/arch/riscv/uabi.rst: the Supm extension (pointer masking for user mode).
```
Pointer masking
---------------

Support for pointer masking in userspace (the Supm extension) is provided via
the ``PR_SET_TAGGED_ADDR_CTRL`` and ``PR_GET_TAGGED_ADDR_CTRL`` ``prctl()``
operations. Pointer masking is disabled by default. To enable it, userspace
must call ``PR_SET_TAGGED_ADDR_CTRL`` with the ``PR_PMLEN`` field set to the
number of mask/tag bits needed by the application. ``PR_PMLEN`` is interpreted
as a lower bound; if the kernel is unable to satisfy the request, the
``PR_SET_TAGGED_ADDR_CTRL`` operation will fail. The actual number of tag bits
is returned in ``PR_PMLEN`` by the ``PR_GET_TAGGED_ADDR_CTRL`` operation.
```
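For illustration, enabling it from userspace would look roughly like this (a sketch assuming the PR_PMLEN_SHIFT/PR_PMLEN_MASK macros from <linux/prctl.h> and a kernel built with Supm support):

```c
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	/* Request at least 7 mask/tag bits; the kernel may grant more. */
	unsigned long ctrl = (7UL << PR_PMLEN_SHIFT) & PR_PMLEN_MASK;

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0)) {
		perror("PR_SET_TAGGED_ADDR_CTRL");
		return 1;
	}

	/* Read back how many tag bits were actually granted. */
	long cur = prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0);
	printf("PMLEN: %ld\n", (cur & PR_PMLEN_MASK) >> PR_PMLEN_SHIFT);
	return 0;
}
```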
But, IIUC, Supm by itself only ensures that the upper bits are ignored on memory access :)
So, RISC-V today would likely not be affected. However, once it implements full hardware tag checking, it will face the exact same zero-page problem.
Anyway, any architecture with a feature like MTE in the future will need its own memcmp_pages() to prevent unsafe merges ;)
 mm/huge_memory.c | 15 +++------------
 mm/migrate.c     |  8 +-------
 2 files changed, 4 insertions(+), 19 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 32e0ec2dde36..28d4b02a1aa5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4104,29 +4104,20 @@ static unsigned long deferred_split_count(struct shrinker *shrink,
 static bool thp_underused(struct folio *folio)
 {
 	int num_zero_pages = 0, num_filled_pages = 0;
-	void *kaddr;
 	int i;
 
 	for (i = 0; i < folio_nr_pages(folio); i++) {
-		kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
-		if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
-			num_zero_pages++;
-			if (num_zero_pages > khugepaged_max_ptes_none) {
-				kunmap_local(kaddr);
+		if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
+			if (++num_zero_pages > khugepaged_max_ptes_none)
 				return true;
-			}
 		} else {
 			/*
 			 * Another path for early exit once the number
 			 * of non-zero filled pages exceeds threshold.
 			 */
-			num_filled_pages++;
-			if (num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none) {
-				kunmap_local(kaddr);
+			if (++num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none)
 				return false;
-			}
 		}
-		kunmap_local(kaddr);
 	}
 	return false;
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index aee61a980374..ce83c2c3c287 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -300,9 +300,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 					  unsigned long idx)
 {
 	struct page *page = folio_page(folio, idx);
-	bool contains_data;
 	pte_t newpte;
-	void *addr;
 
 	if (PageCompound(page))
 		return false;
@@ -319,11 +317,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
 	 * this subpage has been non present. If the subpage is only zero-filled
 	 * then map it to the shared zeropage.
 	 */
-	addr = kmap_local_page(page);
-	contains_data = memchr_inv(addr, 0, PAGE_SIZE);
-	kunmap_local(addr);
-
-	if (contains_data)
+	if (!pages_identical(page, ZERO_PAGE(0)))
 		return false;
 
 	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
-- 
2.49.0
The changes look good to me. Thanks.

Acked-by: Zi Yan <ziy@nvidia.com>
Cheers!