6.12-stable review patch. If anyone has any objections, please let me know.
------------------
From: Vlastimil Babka <vbabka@suse.cz>
commit e2d18cbf178775ad377ad88ee55e6e183c38d262 upstream.
The slab allocator observes the task's NUMA policy in various places, such as when allocating slab pages. Large kmalloc() allocations used to do so as well, until an unintended change by c4cab557521a ("mm/slab_common: cleanup kmalloc_large()") caused them to ignore the mempolicy and simply prefer the local node. Restore the NUMA policy support.
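[Note for reviewers less familiar with the page allocator entry points: the distinction the fix relies on is roughly the following. This is a simplified sketch paraphrased from include/linux/gfp.h, not part of the patch, and the *_sketch name is illustrative only.]

/*
 * Simplified sketch, not the exact upstream code. The node-targeted
 * entry point (used unconditionally since c4cab557521a) resolves
 * NUMA_NO_NODE to the local node up front, so the task's mempolicy
 * is never consulted:
 */
static inline struct page *alloc_pages_node_sketch(int nid, gfp_t gfp,
						   unsigned int order)
{
	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();	/* always fall back to the local node */

	return __alloc_pages_noprof(gfp, order, nid, NULL);
}

/*
 * The plain alloc_pages() entry point, by contrast, goes through
 * mm/mempolicy.c, where current->mempolicy (MPOL_INTERLEAVE,
 * MPOL_BIND, MPOL_PREFERRED, ...) selects the preferred node and
 * nodemask before the core allocator runs. That is why the patch
 * keeps the node-targeted call only for an explicit node and uses
 * alloc_pages_noprof() when node == NUMA_NO_NODE.
 */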
Fixes: c4cab557521a ("mm/slab_common: cleanup kmalloc_large()")
Cc: stable@vger.kernel.org
Acked-by: Christoph Lameter (Ampere) <cl@gentwo.org>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 mm/slub.c |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4225,7 +4225,12 @@ static void *___kmalloc_large_node(size_
 		flags = kmalloc_fix_flags(flags);
 
 	flags |= __GFP_COMP;
-	folio = (struct folio *)alloc_pages_node_noprof(node, flags, order);
+
+	if (node == NUMA_NO_NODE)
+		folio = (struct folio *)alloc_pages_noprof(flags, order);
+	else
+		folio = (struct folio *)__alloc_pages_noprof(flags, order, node, NULL);
+
 	if (folio) {
 		ptr = folio_address(folio);
 		lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B,