From: Miaohe Lin <linmiaohe@huawei.com>
[ Upstream commit a44f89dc6c5f8ba70240b81a570260d29d04bcb0 ]
It's preferable to use the helper function migration_entry_to_page() to get the page via a migration entry. We also get the PageLocked() check there for free.
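For reference, a rough sketch of what the helper does, based on its definition in include/linux/swapops.h around this kernel version (paraphrased, not quoted verbatim from upstream): it performs the same pfn_to_page(swp_offset(entry)) conversion, plus a sanity check that the target page is locked.

	/* Approximate shape of the helper (illustrative sketch only). */
	static inline struct page *migration_entry_to_page(swp_entry_t entry)
	{
		struct page *p = pfn_to_page(swp_offset(entry));

		/*
		 * Migration entries are only valid while the corresponding
		 * page is locked, so callers get that check for free.
		 */
		BUG_ON(!PageLocked(compound_head(p)));
		return p;
	}

Open-coding pfn_to_page(swp_offset(entry)) at the call sites skips that check, which is why switching to the helper is an improvement beyond mere readability.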
Link: https://lkml.kernel.org/r/20210318122722.13135-7-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: William Kucharski <william.kucharski@oracle.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: yuleixzhang <yulei.kernel@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/huge_memory.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 594368f6134f1..cb7b0aead7096 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1691,7 +1691,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
 			entry = pmd_to_swp_entry(orig_pmd);
-			page = pfn_to_page(swp_offset(entry));
+			page = migration_entry_to_page(entry);
 			flush_needed = 0;
 		} else
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
@@ -2110,7 +2110,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		swp_entry_t entry;
 
 		entry = pmd_to_swp_entry(old_pmd);
-		page = pfn_to_page(swp_offset(entry));
+		page = migration_entry_to_page(entry);
 		write = is_write_migration_entry(entry);
 		young = false;
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);