On Tue, Apr 20, 2021 at 03:19:56PM +0200, David Hildenbrand wrote:
> On 20.04.21 15:16, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > The check in gup_pte_range() whether a page belongs to a secretmem
> > mapping is performed before grabbing the page reference. To avoid a
> > potential race, move the check after try_grab_compound_head().
> > 
> > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > ---
> >  mm/gup.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> > diff --git a/mm/gup.c b/mm/gup.c
> > index c3a17b189064..4b58c016e949 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -2080,13 +2080,13 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> >  		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
> >  		page = pte_page(pte);
> >  
> > -		if (page_is_secretmem(page))
> > -			goto pte_unmap;
> > -
> >  		head = try_grab_compound_head(page, 1, flags);
> >  		if (!head)
> >  			goto pte_unmap;
> >  
> > +		if (page_is_secretmem(page))
> > +			goto pte_unmap;
> > +
> Looking at the hunk below, I wonder if you're missing a put_compound_head().
Hmm, yes.
> (also, I'd do if unlikely(page_is_secretmem()) but that's a different discussion)
I don't mind, actually. I don't think there would be massive secretmem usage soon.
> >  		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> >  			put_compound_head(head, 1, flags);
> >  			goto pte_unmap;
> -- 
> Thanks,
> 
> David / dhildenb