On 30/04/2025 14:04, Petr Vaněk wrote:
On Tue, Apr 29, 2025 at 04:02:10PM +0100, Ryan Roberts wrote:
On 29/04/2025 15:46, David Hildenbrand wrote:
On 29.04.25 16:41, Ryan Roberts wrote:
On 29/04/2025 15:29, David Hildenbrand wrote:
On 29.04.25 16:22, Petr Vaněk wrote:
folio_pte_batch() could overcount the number of contiguous PTEs when pte_advance_pfn() returns a zero-valued PTE and the following PTE in memory also happens to be zero. The loop doesn't break in such a case because pte_same() returns true, and the batch size is advanced by one more than it should be.
To fix this, bail out early if a non-present PTE is encountered, preventing the invalid comparison.
This issue started to appear after commit 10ebac4f95e7 ("mm/memory: optimize unmap/zap with PTE-mapped THP") and was discovered via git bisect.
Fixes: 10ebac4f95e7 ("mm/memory: optimize unmap/zap with PTE-mapped THP")
Cc: stable@vger.kernel.org
Signed-off-by: Petr Vaněk <arkamar@atlas.cz>
 mm/internal.h | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/mm/internal.h b/mm/internal.h
index e9695baa5922..c181fe2bac9d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -279,6 +279,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			dirty = !!pte_dirty(pte);
 		pte = __pte_batch_clear_ignored(pte, flags);
 
+		if (!pte_present(pte))
+			break;
 		if (!pte_same(pte, expected_pte))
 			break;
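To make the failure mode concrete, here is a minimal user-space sketch of the comparison the patch guards against. It is only a toy model, not kernel code: the uint64_t pte_t, the PTE_PRESENT bit, the advance() helper (a stand-in for pte_advance_pfn() producing a zero-valued PTE past the mapped range), the 4-present/4-zero table layout and the "fixed" flag are all invented for illustration.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pte_t;			/* toy PTE: bit 0 = present */

#define PTE_PRESENT	0x1ULL

static bool pte_present(pte_t pte) { return pte & PTE_PRESENT; }
static bool pte_same(pte_t a, pte_t b) { return a == b; }

/* toy stand-in for pte_advance_pfn(): yields a zero-valued PTE past the mapped range */
static pte_t advance(pte_t pte, int i, int mapped)
{
	return (i < mapped) ? pte + (1ULL << 12) : 0;
}

int main(void)
{
	pte_t ptes[8] = { 0 };		/* only the first 4 slots are present */
	int mapped = 4, nr = 1;
	bool fixed = false;		/* set true to apply the patch's check */

	for (int i = 0; i < mapped; i++)
		ptes[i] = ((uint64_t)i << 12) | PTE_PRESENT;

	pte_t expected = advance(ptes[0], nr, mapped);

	while (nr < 8) {
		pte_t pte = ptes[nr];

		/* without the present check, a zero expected PTE compares
		 * equal to a zero in-memory PTE and nr keeps growing */
		if (fixed && !pte_present(pte))
			break;
		if (!pte_same(pte, expected))
			break;
		expected = advance(pte, ++nr, mapped);
	}
	printf("batched %d PTEs (expected 4)\n", nr);
	return 0;
}

With fixed = false it reports batching all 8 slots; flipping fixed to true stops it at the 4 present PTEs, which is the effect of the added pte_present() check.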
How could pte_same() suddenly match on a present and a non-present PTE?
Something with XEN is really problematic here.
We are inside a lazy MMU region (arch_enter_lazy_mmu_mode()) at this point, which I believe XEN uses. If a PTE was written and then read back while in lazy mode, you could get a stale value.
See https://lore.kernel.org/all/912c7a32-b39c-494f-a29c-4865cd92aeba@agordeev.lo... for an example bug.
So if we cannot trust the ptep_get() output, then ... how could we trust anything here and ever possibly batch?
The point is that for a write followed by a read to the same PTE, the read may not return what was written. It could return the value of the PTE at the point of entry into the lazy mmu mode.
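To illustrate that write-then-read hazard, here is a toy user-space model of a lazy MMU region. The pending-update queue, set_pte(), ptep_get() and the arch_enter_lazy()/arch_leave_lazy() helpers below are invented stand-ins, not the paravirt implementation; the only point is that a deferred update makes a read-back of the same slot return the value from before the region was entered.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t pte_t;

static pte_t page_table[4];		/* "hardware" view of the PTEs */

struct pending { pte_t *ptep; pte_t val; };
static struct pending queue[16];
static int nqueued;
static bool lazy;

static void arch_enter_lazy(void) { lazy = true; }

static void arch_leave_lazy(void)
{
	for (int i = 0; i < nqueued; i++)	/* flush queued updates */
		*queue[i].ptep = queue[i].val;
	nqueued = 0;
	lazy = false;
}

static void set_pte(pte_t *ptep, pte_t val)
{
	if (lazy)				/* defer the hardware update */
		queue[nqueued++] = (struct pending){ ptep, val };
	else
		*ptep = val;
}

static pte_t ptep_get(pte_t *ptep) { return *ptep; }	/* reads the hardware view */

int main(void)
{
	page_table[0] = 0x1001;

	arch_enter_lazy();
	set_pte(&page_table[0], 0x2001);
	printf("read-back inside lazy region: %#llx (stale)\n",
	       (unsigned long long)ptep_get(&page_table[0]));
	arch_leave_lazy();
	printf("after leaving lazy region:    %#llx\n",
	       (unsigned long long)ptep_get(&page_table[0]));
	return 0;
}

Running it prints the stale 0x1001 inside the region, and 0x2001 only after the queued update is flushed on leave.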
I guess one quick way to test is to hack out lazy mmu support. Something like this? (totally untested):
I (blindly) applied the suggested change but I am still seeing the same issue.
Thanks for trying; it was just something that came to mind as a possibility, knowing it was XEN and inside a lazy mmu region. I think your other discussion has concluded that the x86 implementation of pte_advance_pfn() is not correct when XEN is in use? (I was just scanning, perhaps I came to the wrong conclusion.)
Thanks, Ryan
Petr
----8<----
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index c4c23190925c..1f0a1a713072 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -541,22 +541,6 @@ static inline void arch_end_context_switch(struct task_struct *next)
 	PVOP_VCALL1(cpu.end_context_switch, next);
 }
 
-#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
-static inline void arch_enter_lazy_mmu_mode(void)
-{
-	PVOP_VCALL0(mmu.lazy_mode.enter);
-}
-
-static inline void arch_leave_lazy_mmu_mode(void)
-{
-	PVOP_VCALL0(mmu.lazy_mode.leave);
-}
-
-static inline void arch_flush_lazy_mmu_mode(void)
-{
-	PVOP_VCALL0(mmu.lazy_mode.flush);
-}
-
 static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 				phys_addr_t phys, pgprot_t flags)
 {
----8<----