On 11/4/24 19:00, Matthew Wilcox wrote:
On Tue, Nov 05, 2024 at 12:08:37AM +0900, Koichiro Den wrote:
Commit b035f5a6d852 ("mm: slab: reduce the kmalloc() minimum alignment if DMA bouncing possible") reduced ARCH_KMALLOC_MINALIGN to 8 on arm64. However, with KASAN_HW_TAGS enabled, arch_slab_minalign() becomes 16. This causes kmalloc_caches[*][8] to be aliased to kmalloc_caches[*][16], resulting in kmem_buckets_create() attempting to create a kmem_cache for size 16 twice. This duplication triggers warnings on boot:
Wouldn't this be easier?
They wanted it to depend on the actual HW capability / kernel parameter; see d949a8155d13 ("mm: make minimum slab alignment a runtime property").
Also, Catalin's commit referenced above was part of the series that made the alignment more dynamic for other cases, IIRC, so I doubt we can simply reduce it back to a build-time constant.
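For context, the runtime hook that series introduced looks roughly like this on arm64 (quoted from memory, so double-check against the tree; treat it as a sketch rather than the exact definition):

/* arch/arm64/include/asm/cache.h, approximately */
#define arch_slab_minalign()						\
	(kasan_hw_tags_enabled() ? MTE_GRANULE_SIZE :			\
				   __alignof__(unsigned long long))

kasan_hw_tags_enabled() is a runtime check (it reflects whether the CPU actually has MTE and how the kasan= parameters were set), so the effective minimum alignment is only known at boot. That is why a compile-time #ifdef on CONFIG_KASAN_HW_TAGS, as in the hunk below, wouldn't be equivalent.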
+++ b/arch/arm64/include/asm/cache.h
@@ -33,7 +33,11 @@
  * the CPU.
  */
 #define ARCH_DMA_MINALIGN	(128)
+#ifdef CONFIG_KASAN_HW_TAGS
+#define ARCH_KMALLOC_MINALIGN	(16)
+#else
 #define ARCH_KMALLOC_MINALIGN	(8)
+#endif
 
 #ifndef __ASSEMBLY__
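To make the collision from the quoted report concrete, here is a grossly simplified user-space sketch (plain C, not kernel code; effective_size() is a made-up helper standing in for the real bucket/cache lookup, and the 16 mirrors what arch_slab_minalign() returns under KASAN_HW_TAGS):

#include <stdio.h>

/* Hypothetical stand-in for the runtime minimum slab alignment. */
static const unsigned int minalign = 16;

/* Round a requested bucket size up to the minimum alignment,
 * the way small kmalloc buckets end up sharing one cache. */
static unsigned int effective_size(unsigned int size)
{
	return (size + minalign - 1) & ~(minalign - 1);
}

int main(void)
{
	/* Both the 8-byte and the 16-byte bucket resolve to a 16-byte
	 * cache, so creating one kmem_cache per bucket tries to create
	 * the size-16 cache twice, hence the duplicate-name warning. */
	printf("bucket 8  -> cache size %u\n", effective_size(8));
	printf("bucket 16 -> cache size %u\n", effective_size(16));
	return 0;
}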