On Wed, Apr 30, 2025 at 11:25:56PM +0200, David Hildenbrand wrote:
On 30.04.25 18:00, Petr Vaněk wrote:
On Wed, Apr 30, 2025 at 04:37:21PM +0200, David Hildenbrand wrote:
On 30.04.25 13:52, Petr Vaněk wrote:
On Tue, Apr 29, 2025 at 08:56:03PM +0200, David Hildenbrand wrote:
On 29.04.25 20:33, Petr Vaněk wrote:
On Tue, Apr 29, 2025 at 05:45:53PM +0200, David Hildenbrand wrote:
> On 29.04.25 16:52, David Hildenbrand wrote:
>> On 29.04.25 16:45, Petr Vaněk wrote:
>>> On Tue, Apr 29, 2025 at 04:29:30PM +0200, David Hildenbrand wrote:
>>>> On 29.04.25 16:22, Petr Vaněk wrote:
>>>>> folio_pte_batch() could overcount the number of contiguous PTEs when
>>>>> pte_advance_pfn() returns a zero-valued PTE and the following PTE in
>>>>> memory also happens to be zero. The loop doesn't break in such a case
>>>>> because pte_same() returns true, and the batch size is advanced by one
>>>>> more than it should be.
>>>>>
>>>>> To fix this, bail out early if a non-present PTE is encountered,
>>>>> preventing the invalid comparison.
>>>>>
>>>>> This issue started to appear after commit 10ebac4f95e7 ("mm/memory:
>>>>> optimize unmap/zap with PTE-mapped THP") and was discovered via git
>>>>> bisect.
>>>>>
>>>>> Fixes: 10ebac4f95e7 ("mm/memory: optimize unmap/zap with PTE-mapped THP")
>>>>> Cc: stable@vger.kernel.org
>>>>> Signed-off-by: Petr Vaněk <arkamar@atlas.cz>
>>>>> ---
>>>>>  mm/internal.h | 2 ++
>>>>>  1 file changed, 2 insertions(+)
>>>>>
>>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>>> index e9695baa5922..c181fe2bac9d 100644
>>>>> --- a/mm/internal.h
>>>>> +++ b/mm/internal.h
>>>>> @@ -279,6 +279,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>>>>  		dirty = !!pte_dirty(pte);
>>>>>  		pte = __pte_batch_clear_ignored(pte, flags);
>>>>>
>>>>> +		if (!pte_present(pte))
>>>>> +			break;
>>>>>  		if (!pte_same(pte, expected_pte))
>>>>>  			break;
>>>>
>>>> How could pte_same() suddenly match on a present and non-present PTE.
>>>
>>> In the problematic case pte.pte == 0 and expected_pte.pte == 0 as well.
>>> pte_same() returns a.pte == b.pte -> 0 == 0. Both are non-present PTEs.
>>
>> Observe that folio_pte_batch() was called *with a present pte*.
>>
>> do_zap_pte_range()
>>   if (pte_present(ptent))
>>     zap_present_ptes()
>>       folio_pte_batch()
>>
>> How can we end up with an expected_pte that is !present, if it is based
>> on the provided pte that *is present* and we only used pte_advance_pfn()
>> to advance the pfn?
>
> I've been staring at the code for too long and don't see the issue.
>
> We even have
>
> 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
>
> So the initial pteval we got is present.
>
> I don't see how
>
> 	nr = pte_batch_hint(start_ptep, pte);
> 	expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
>
> would suddenly result in !pte_present(expected_pte).
The issue is not happening in __pte_batch_clear_ignored() but later, in the following line:
expected_pte = pte_advance_pfn(expected_pte, nr);
The issue seems to be in the __pte() function, which converts the PTE value to pte_t in pte_advance_pfn(), because the warnings disappear when I change the line to
expected_pte = (pte_t){ .pte = pte_val(expected_pte) + (nr << PFN_PTE_SHIFT) };
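(For reference, the generic helper in include/linux/pgtable.h looks like this, if I am reading it correctly; the only non-trivial step is the final __pte() conversion:)

	static inline pte_t pte_advance_pfn(pte_t pte, unsigned long nr)
	{
		return __pte(pte_val(pte) + (nr << PFN_PTE_SHIFT));
	}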
The kernel probably uses the __pte() implementation from arch/x86/include/asm/paravirt.h because it is configured with CONFIG_PARAVIRT=y:
static inline pte_t __pte(pteval_t val)
{
	return (pte_t) { PVOP_ALT_CALLEE1(pteval_t, mmu.make_pte, val,
					  "mov %%rdi, %%rax", ALT_NOT_XEN) };
}
I guess it might cause this weird magic, but I need more time to understand what it does :)
I understand it slightly better now. __pte() uses xen_make_pte(), which calls pte_pfn_to_mfn(); however, the mfn for this pfn contains the INVALID_P2M_ENTRY value, therefore pte_pfn_to_mfn() returns 0, see [1].
I guess that the mfn was invalidated by the xen-balloon driver?
[1] https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/...
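(For reference, the relevant code in arch/x86/xen/mmu_pv.c [1], lightly abbreviated:)

	static pteval_t pte_pfn_to_mfn(pteval_t val)
	{
		if (val & _PAGE_PRESENT) {
			unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
			pteval_t flags = val & PTE_FLAGS_MASK;
			unsigned long mfn;

			mfn = __pfn_to_mfn(pfn);

			/* No mfn for the pfn: create an empty non-present pte. */
			if (unlikely(mfn == INVALID_P2M_ENTRY)) {
				mfn = 0;
				flags = 0;
			} else
				mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
			val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
		}

		return val;
	}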
What XEN does with basic primitives that convert between pteval and pte_t is beyond horrible.
How come set_ptes() that uses pte_next_pfn()->pte_advance_pfn() does not run into this?
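(For context, the generic set_ptes() in include/linux/pgtable.h advances the pte the same way; abbreviated sketch, with page_table_check and lazy-MMU details omitted. pte_next_pfn() boils down to pte_advance_pfn(pte, 1), so it goes through __pte() on every iteration as well:)

	static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
			pte_t *ptep, pte_t pte, unsigned int nr)
	{
		for (;;) {
			set_pte(ptep, pte);
			if (--nr == 0)
				break;
			ptep++;
			pte = pte_next_pfn(pte);
		}
	}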
I don't know, but I guess it is somehow related to pfn->mfn translation.
Is it only a problem if we exceed a certain pfn?
No, it is a problem if the corresponding mfn for the given pfn is invalid.
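(A sketch of the failure mode with hypothetical values, assuming XEN PV and an invalid mfn for the next pfn:)

	/* expected_pte went through xen_make_pte() and collapsed to 0 */
	expected_pte = pte_advance_pfn(expected_pte, nr);	/* pte_val() == 0 */
	pte = ptep_get(ptep);					/* unmapped entry: also 0 */

	/* pte_same() is a raw value comparison, 0 == 0, so no break */
	if (!pte_same(pte, expected_pte))
		break;

and the batch keeps growing past the point where it should have stopped.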
I am not sure if my original patch is a good fix.
No :)
Maybe it would be better to have some sort of native_pte_advance_pfn() which would use native_make_pte() rather than __pte(). Or do you think the issue is on the Xen side?
I think what's happening is that -- under XEN only -- we might get garbage when calling pte_advance_pfn() and the next PFN would no longer fall into the folio. And the current code cannot deal with that XEN garbage.
But still not 100% sure.
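(As far as I can tell, the folio-end check in the current loop cannot catch this either, because it looks at the pfn of the pte read from memory, and for an all-zero entry pte_pfn() is 0:)

	/*
	 * For a zeroed in-memory entry, pte_pfn(pte) == 0, which is
	 * never >= folio_end_pfn, so the loop does not stop here.
	 */
	if (pte_pfn(pte) >= folio_end_pfn)
		break;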
The following is completely untested, could you give that a try?
Yes, it solves the issue for me.
Cool!
However, maybe it would be better to solve it with the following patch. pte_pfn_to_mfn() already returns the value unchanged for non-present PTEs. I suggest returning the original PTE with the _PAGE_PRESENT bit cleared when the mfn is INVALID_P2M_ENTRY, rather than an empty non-present PTE. That way we do not lose the information about the original pfn, but it is still clear that the page is not present.
From e84781f9ec4fb7275d5e7629cf7e222466caf759 Mon Sep 17 00:00:00 2001
From: Petr Vaněk <arkamar@atlas.cz>
Date: Wed, 30 Apr 2025 17:08:41 +0200
Subject: [PATCH] x86/mm: Reset pte _PAGE_PRESENT bit for invalid mfn
Signed-off-by: Petr Vaněk <arkamar@atlas.cz>
---
 arch/x86/xen/mmu_pv.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 38971c6dcd4b..92a6a9af0c65 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -392,28 +392,25 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
 	if (val & _PAGE_PRESENT) {
 		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
 		pteval_t flags = val & PTE_FLAGS_MASK;
 		unsigned long mfn;
 
 		mfn = __pfn_to_mfn(pfn);
 
 		/*
-		 * If there's no mfn for the pfn, then just create an
-		 * empty non-present pte.  Unfortunately this loses
-		 * information about the original pfn, so
-		 * pte_mfn_to_pfn is asymmetric.
+		 * If there's no mfn for the pfn, then just reset present pte bit.
 		 */
 		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
-			mfn = 0;
-			flags = 0;
+			mfn = pfn;
+			flags &= ~_PAGE_PRESENT;
 		} else
 			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
 		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
 	}
 
 	return val;
 }
 
 __visible pteval_t xen_pte_val(pte_t pte)
 {
That might do as well.
I assume the following would also work? (removing the early ==1 check)
Yes, it also works in my case and the removal makes sense to me.
It has the general benefit of removing the pte_pfn() call from the loop body, which is why I like that fix. (almost looks like a cleanup)
Indeed, it looks like a cleanup to me as well :)
I am still considering whether it would make sense to send both patches. I am not sure if resetting the _PAGE_PRESENT flag is enough, because of swapping or other areas which I am not aware of.
From 75948778b586d4759a480bf412fd4682067b12ea Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Wed, 30 Apr 2025 16:35:12 +0200
Subject: [PATCH] tmp
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/internal.h | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index e9695baa59226..25a29872c634b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -248,11 +248,9 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
 		bool *any_writable, bool *any_young, bool *any_dirty)
 {
-	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
-	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
 	bool writable, young, dirty;
-	int nr;
+	int nr, cur_nr;
 
 	if (any_writable)
 		*any_writable = false;
@@ -265,11 +263,15 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
 	VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
 
+	/* Limit max_nr to the actual remaining PFNs in the folio we could batch. */
+	max_nr = min_t(unsigned long, max_nr,
+		       folio_pfn(folio) + folio_nr_pages(folio) - pte_pfn(pte));
+
 	nr = pte_batch_hint(start_ptep, pte);
 	expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
 	ptep = start_ptep + nr;
 
-	while (ptep < end_ptep) {
+	while (nr < max_nr) {
 		pte = ptep_get(ptep);
 		if (any_writable)
 			writable = !!pte_write(pte);
@@ -282,14 +284,6 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		if (!pte_same(pte, expected_pte))
 			break;
 
-		/*
-		 * Stop immediately once we reached the end of the folio. In
-		 * corner cases the next PFN might fall into a different
-		 * folio.
-		 */
-		if (pte_pfn(pte) >= folio_end_pfn)
-			break;
-
 		if (any_writable)
 			*any_writable |= writable;
 		if (any_young)
@@ -297,12 +291,13 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		if (any_dirty)
 			*any_dirty |= dirty;
 
-		nr = pte_batch_hint(ptep, pte);
-		expected_pte = pte_advance_pfn(expected_pte, nr);
-		ptep += nr;
+		cur_nr = pte_batch_hint(ptep, pte);
+		expected_pte = pte_advance_pfn(expected_pte, cur_nr);
+		ptep += cur_nr;
+		nr += cur_nr;
 	}
 
-	return min(ptep - start_ptep, max_nr);
+	return min(nr, max_nr);
 }
 
 /**
-- 
2.49.0
-- 
Cheers,
David / dhildenb