On Wed, Feb 26, 2025 at 04:00:25PM -0500, Zi Yan wrote:
+static int __split_unmapped_folio(struct folio *folio, int new_order,
+		struct page *split_at, struct page *lock_at,
+		struct list_head *list, pgoff_t end,
+		struct xa_state *xas, struct address_space *mapping,
+		bool uniform_split)
+{
[...]
+		/* complete memcg works before add pages to LRU */
+		split_page_memcg(&folio->page, old_order, split_order);
+		split_page_owner(&folio->page, old_order, split_order);
+		pgalloc_tag_split(folio, old_order, split_order);
At least split_page_memcg() needs to become aware of 'uniform_split'.
	if (folio_memcg_kmem(folio))
		obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
If we're doing uniform_split, that calculation should be old_order - new_order - 1.