From: David Hildenbrand <david@redhat.com>
[ Upstream commit 78cb1a13c42a6d843e21389f74d1edb90ed07288 ]
Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the folio_test_anon() test only.
... but staring at the users, this function should never even have been called on movable_ops pages. E.g.,

* __buffer_migrate_folio() does not make sense for them
* folio_migrate_mapping() does not make sense for them
* migrate_huge_page_move_mapping() does not make sense for them
* __migrate_folio() does not make sense for them
* ... and khugepaged should never stumble over them
Let's simply refuse typed pages (which includes slab) except hugetlb, and WARN.
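For reference, a sketch of how folio_expected_ref_count() reads with this change applied. Only the two conditions changed by the hunk below come from this patch; the PG_private accounting and the final folio_mapcount() sum are assumed unchanged from the existing function body:

static inline int folio_expected_ref_count(struct folio *folio)
{
	const int order = folio_order(folio);
	int ref_count = 0;

	/* Refuse typed pages (which includes slab); hugetlb is the exception. */
	if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio)))
		return 0;

	if (folio_test_anon(folio)) {
		/* One reference per page from the swapcache. */
		ref_count += folio_test_swapcache(folio) << order;
	} else {
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}

	/* One reference per page table mapping. */
	return ref_count + folio_mapcount(folio);
}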
Link: https://lkml.kernel.org/r/20250704102524.326966-26-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Brendan Jackman <jackmanb@google.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Eugenio Pérez <eperezma@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Gregory Price <gourry@gourry.net>
Cc: "Huang, Ying" <ying.huang@linux.alibaba.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Jerrin Shaji George <jerrin.shaji-george@broadcom.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Mathew Brost <matthew.brost@intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Peter Xu <peterx@redhat.com>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: xu xin <xu.xin16@zte.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Stable-dep-of: f183663901f2 ("mm: consider non-anon swap cache folios in folio_expected_ref_count()")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 include/linux/mm.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 44381ffaf34b..61efff50eb67 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1820,13 +1820,13 @@ static inline int folio_expected_ref_count(struct folio *folio)
 	const int order = folio_order(folio);
 	int ref_count = 0;
 
-	if (WARN_ON_ONCE(folio_test_slab(folio)))
+	if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio)))
 		return 0;
 
 	if (folio_test_anon(folio)) {
 		/* One reference per page from the swapcache. */
 		ref_count += folio_test_swapcache(folio) << order;
-	} else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
+	} else {
 		/* One reference per page from the pagecache. */
 		ref_count += !!folio->mapping << order;
 		/* One reference from PG_private. */
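
As an aside on intended usage (illustrative only, not part of this patch): migration- and compaction-style callers compare the folio's actual reference count against the expected count to detect unexpected pins. A minimal sketch, where folio_has_unexpected_refs() is a hypothetical helper name and the caller is assumed to hold one extra reference of its own:

/* Hypothetical helper; the caller is assumed to hold one extra reference. */
static inline bool folio_has_unexpected_refs(struct folio *folio)
{
	return folio_ref_count(folio) != folio_expected_ref_count(folio) + 1;
}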