Kees Cook <kees@kernel.org> writes:
When KCOV is enabled, all functions get instrumented, unless the __no_sanitize_coverage attribute is used. To prepare for __no_sanitize_coverage being applied to __init functions, we have to handle differences in how GCC's inline optimizations get resolved. For powerpc this requires forcing a couple of functions to be inline with __always_inline.
Signed-off-by: Kees Cook <kees@kernel.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/mm/book3s64/hash_utils.c    | 2 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 5158aefe4873..93f1e1eb5ea6 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -409,7 +409,7 @@ static DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);
 
 static phys_addr_t kfence_pool;
 
-static inline void hash_kfence_alloc_pool(void)
+static __always_inline void hash_kfence_alloc_pool(void)
 {
 	if (!kfence_early_init_enabled())
 		goto err;
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 9f764bc42b8c..3238e9ed46b5 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -363,7 +363,7 @@ static int __meminit create_physical_mapping(unsigned long start,
 }
 
 #ifdef CONFIG_KFENCE
-static inline phys_addr_t alloc_kfence_pool(void)
+static __always_inline phys_addr_t alloc_kfence_pool(void)
 {
 	phys_addr_t kfence_pool;
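(To make the interplay above concrete, here is a rough, self-contained C sketch of the inline-vs-coverage-attribute problem. The attribute macros are simplified stand-ins for the real definitions in <linux/init.h> and <linux/compiler_types.h>, and every function name below is made up for illustration.)

  /* Simplified stand-ins for the real kernel attribute macros. */
  #define __init          __attribute__((__section__(".init.text")))
  #define __always_inline inline __attribute__((__always_inline__))

  /* Imagine an __init-only allocator, discarded after boot. */
  static int __init early_alloc(void) { return 0; }

  /*
   * "static inline" is only a hint. If KCOV instruments this helper
   * while its __init caller carries __no_sanitize_coverage, GCC can
   * decline to inline it (attribute mismatch) and instead emit an
   * out-of-line, instrumented copy in .text, leaving a
   * .text -> .init.text reference behind.
   */
  static inline int helper(void)
  {
          return early_alloc();
  }

  /*
   * __always_inline removes GCC's discretion: the body is always
   * folded into the caller, so no stray copy can land in .text.
   */
  static __always_inline int helper_forced(void)
  {
          return early_alloc();
  }

  static int __init example_init(void)  /* think htab_initialize() */
  {
          return helper() + helper_forced();
  }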
I remember seeing a warning message about the .init.text section. Let me dig that up...
... Here it is: https://lore.kernel.org/oe-kbuild-all/202504190552.mnFGs5sj-lkp@intel.com/
I am not sure why it only complains about hash_debug_pagealloc_alloc_slots(); I believe there should be more functions to mark with __init here. Anyway, here is the patch of what I had in mind. I am not a compiler expert, so please let me know your thoughts on this.
-ritesh
From 59d64dc0014ccb4ae13ed08ab596738628ee23b1 Mon Sep 17 00:00:00 2001
Message-Id: <59d64dc0014ccb4ae13ed08ab596738628ee23b1.1748084756.git.ritesh.list@gmail.com>
From: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com>
Date: Sat, 24 May 2025 16:14:08 +0530
Subject: [RFC] powerpc/mm/book3s64: Move a few kfence & debug_pagealloc
 related calls to __init section
Move a few kfence and debug_pagealloc related functions in hash_utils.c and radix_pgtable.c to the __init section, since these are only invoked once, by an __init function, during system initialization.
i.e.
- hash_debug_pagealloc_alloc_slots()
- hash_kfence_alloc_pool()
- hash_kfence_map_pool()
The above 3 functions only get called by __init htab_initialize().
- alloc_kfence_pool()
- map_kfence_pool()
The above 2 functions only get called by __init radix_init_pgtable().
This should also help fix warnings like:
WARNING: modpost: vmlinux: section mismatch in reference:
hash_debug_pagealloc_alloc_slots+0xb0 (section: .text) -> memblock_alloc_try_nid (section: .init.text)
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504190552.mnFGs5sj-lkp@intel.com/
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 arch/powerpc/mm/book3s64/hash_utils.c    | 6 +++---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 5158aefe4873..4693c464fc5a 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -343,7 +343,7 @@ static inline bool hash_supports_debug_pagealloc(void)
 static u8 *linear_map_hash_slots;
 static unsigned long linear_map_hash_count;
 static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
-static void hash_debug_pagealloc_alloc_slots(void)
+static __init void hash_debug_pagealloc_alloc_slots(void)
 {
 	if (!hash_supports_debug_pagealloc())
 		return;
@@ -409,7 +409,7 @@ static DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);
 
 static phys_addr_t kfence_pool;
 
-static inline void hash_kfence_alloc_pool(void)
+static __init void hash_kfence_alloc_pool(void)
 {
 	if (!kfence_early_init_enabled())
 		goto err;
@@ -445,7 +445,7 @@ static inline void hash_kfence_alloc_pool(void)
 	disable_kfence();
 }
 
-static inline void hash_kfence_map_pool(void)
+static __init void hash_kfence_map_pool(void)
 {
 	unsigned long kfence_pool_start, kfence_pool_end;
 	unsigned long prot = pgprot_val(PAGE_KERNEL);
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 311e2112d782..ed226ee1569a 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -363,7 +363,7 @@ static int __meminit create_physical_mapping(unsigned long start,
 }
 
 #ifdef CONFIG_KFENCE
-static inline phys_addr_t alloc_kfence_pool(void)
+static __init phys_addr_t alloc_kfence_pool(void)
 {
 	phys_addr_t kfence_pool;
 
@@ -393,7 +393,7 @@ static inline phys_addr_t alloc_kfence_pool(void)
 	return 0;
 }
 
-static inline void map_kfence_pool(phys_addr_t kfence_pool)
+static __init void map_kfence_pool(phys_addr_t kfence_pool)
 {
 	if (!kfence_pool)
 		return;
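(As a closing note for anyone meeting this warning class for the first time: modpost flags any reference from a section that survives boot, such as .text, into one that is discarded after init, such as .init.text. A minimal before/after sketch follows; bad_alloc_slots(), good_alloc_slots(), and early_alloc() are illustrative names, not the actual hash_utils.c code.)

  #define __init __attribute__((__section__(".init.text")))

  /* Stand-in for memblock_alloc_try_nid(), which itself lives in
   * .init.text and is freed once boot completes. */
  static void * __init early_alloc(void) { return 0; }

  /* Before: placed in .text, so the call into .init.text triggers
   * "WARNING: modpost: section mismatch" -- and would jump into freed
   * memory if anything called it after boot. */
  static void bad_alloc_slots(void)
  {
          (void)early_alloc();
  }

  /* After: marked __init, the function moves to .init.text next to
   * its only caller, so the reference is discarded with the code. */
  static void __init good_alloc_slots(void)
  {
          (void)early_alloc();
  }

The conversion is only safe when every caller is itself __init, which is exactly the condition the commit message establishes for htab_initialize() and radix_init_pgtable().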