The patch below does not apply to the 5.4-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.4.y
git checkout FETCH_HEAD
git cherry-pick -x 694d6b99923eb05a8fd188be44e26077d19f0e21
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2025072811-ethanol-arbitrary-a664@gregkh' --subject-prefix 'PATCH 5.4.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 694d6b99923eb05a8fd188be44e26077d19f0e21 Mon Sep 17 00:00:00 2001
From: Harry Yoo <harry.yoo@oracle.com>
Date: Fri, 4 Jul 2025 19:30:53 +0900
Subject: [PATCH] mm/zsmalloc: do not pass __GFP_MOVABLE if CONFIG_COMPACTION=n
Commit 48b4800a1c6a ("zsmalloc: page migration support") added support for migrating zsmalloc pages using the movable_operations migration framework. However, the commit did not take into account that zsmalloc supports migration only when CONFIG_COMPACTION is enabled. Tracing shows that zsmalloc was still passing the __GFP_MOVABLE flag even when compaction is not supported.
This can result in unmovable pages being allocated from movable page blocks (even without stealing page blocks), ZONE_MOVABLE and CMA area.
Possible user visible effects:
- Some ZONE_MOVABLE memory can be not actually movable
- CMA allocation can fail because of this
- Increased memory fragmentation due to ignoring the page mobility grouping feature

I'm not really sure who uses kernels without compaction support, though :(
To fix this, clear the __GFP_MOVABLE flag when !IS_ENABLED(CONFIG_COMPACTION).
Link: https://lkml.kernel.org/r/20250704103053.6913-1-harry.yoo@oracle.com
Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 999b513c7fdf..f3e2215f95eb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1043,6 +1043,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	if (!zspage)
 		return NULL;
 
+	if (!IS_ENABLED(CONFIG_COMPACTION))
+		gfp &= ~__GFP_MOVABLE;
+
 	zspage->magic = ZSPAGE_MAGIC;
 	zspage->pool = pool;
 	zspage->class = class->index;
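(Note, not part of the patch: IS_ENABLED(CONFIG_COMPACTION) expands to a compile-time constant 1 or 0, so on kernels built without compaction the compiler reduces the new check to an unconditional mask. A rough sketch of the equivalent preprocessor form, for illustration only:

	#ifndef CONFIG_COMPACTION
		gfp &= ~__GFP_MOVABLE;
	#endif

On CONFIG_COMPACTION=y builds the check is constant-false and compiled out, so configurations that do support zspage migration see no behaviour change.)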
From: Miaohe Lin <linmiaohe@huawei.com>
[ Upstream commit f0231305acd53375c6cf736971bf5711105dd6bb ]
We always memset the zspage allocated via cache_alloc_zspage. So it's more convenient to use kmem_cache_zalloc in cache_alloc_zspage than have the caller do it manually.
Link: https://lkml.kernel.org/r/20210114120032.25885-1-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Stable-dep-of: 694d6b99923e ("mm/zsmalloc: do not pass __GFP_MOVABLE if CONFIG_COMPACTION=n")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/zsmalloc.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 6b100f02ee43..eae16c6b6fc6 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -357,7 +357,7 @@ static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
 
 static struct zspage *cache_alloc_zspage(struct zs_pool *pool, gfp_t flags)
 {
-	return kmem_cache_alloc(pool->zspage_cachep,
+	return kmem_cache_zalloc(pool->zspage_cachep,
 			flags & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
 }
 
@@ -1067,7 +1067,6 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	if (!zspage)
 		return NULL;
 
-	memset(zspage, 0, sizeof(struct zspage));
 	zspage->magic = ZSPAGE_MAGIC;
 	migrate_lock_init(zspage);
 
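(Note, not part of the patch: kmem_cache_zalloc() is a thin wrapper that ORs __GFP_ZERO into the flags, which is why the explicit memset() becomes redundant. Its definition in include/linux/slab.h is essentially:

	static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags)
	{
		return kmem_cache_alloc(k, flags | __GFP_ZERO);
	}

so cache_alloc_zspage() still returns a zeroed struct zspage after this change.)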
From: Harry Yoo <harry.yoo@oracle.com>
[ Upstream commit 694d6b99923eb05a8fd188be44e26077d19f0e21 ]
Commit 48b4800a1c6a ("zsmalloc: page migration support") added support for migrating zsmalloc pages using the movable_operations migration framework. However, the commit did not take into account that zsmalloc supports migration only when CONFIG_COMPACTION is enabled. Tracing shows that zsmalloc was still passing the __GFP_MOVABLE flag even when compaction is not supported.
This can result in unmovable pages being allocated from movable page blocks (even without stealing page blocks), ZONE_MOVABLE and CMA area.
Possible user visible effects:
- Some ZONE_MOVABLE memory can be not actually movable
- CMA allocation can fail because of this
- Increased memory fragmentation due to ignoring the page mobility grouping feature

I'm not really sure who uses kernels without compaction support, though :(
To fix this, clear the __GFP_MOVABLE flag when !IS_ENABLED(CONFIG_COMPACTION).
Link: https://lkml.kernel.org/r/20250704103053.6913-1-harry.yoo@oracle.com
Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/zsmalloc.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index eae16c6b6fc6..b379deb0a10c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1067,6 +1067,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	if (!zspage)
 		return NULL;
 
+	if (!IS_ENABLED(CONFIG_COMPACTION))
+		gfp &= ~__GFP_MOVABLE;
+
 	zspage->magic = ZSPAGE_MAGIC;
 	migrate_lock_init(zspage);