6.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Charles Mirabile <cmirabil@redhat.com>
[ Upstream commit 8d90d9872edae7e78c3a12b98e239bfaa66f3639 ]
The `__runtime_fixup_32` function does not correctly handle the case where `val` is zero (as can occur when patching a nommu kernel and referring to a physical address below the 4GiB boundary, whose upper 32 bits are all zero), because nothing in the existing logic prevents the code from taking the `else` branch of both nop checks and emitting two `nop` instructions.
This leaves random garbage, instead of zero, in the register that is supposed to receive the upper 32 bits of the pointer; combined with the value for the lower 32 bits, this yields an invalid pointer and causes a kernel panic when that pointer is eventually accessed.
The author clearly considered that if the `lui` is converted into a `nop`, the second instruction needs to be adjusted to become an `li` instead of an `addi` (hence the `addi_insn_mask` variable), but did not follow that logic through to the case where the `else` branch executes. To fix it, adjust the logic so that the second `else` branch is not taken if the first instruction has already been patched to a `nop`.
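For illustration only (not part of the patch), the following minimal user-space sketch of the branch structure shows why `val == 0` previously produced two `nop`s and why additionally checking for an already-nopped `lui` fixes it. The helper name, the `apply_fix` flag, and the placeholder `lui` encoding are made up for the example; only the immediate arithmetic and the branch shape loosely mirror the kernel code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RISCV_INSN_NOP4 0x00000013u	/* canonical RISC-V nop: addi x0, x0, 0 */

/*
 * Returns true if the instruction pair still writes the destination
 * register after patching; false means both slots became nops.
 */
static bool upper_half_register_written(uint32_t val, bool apply_fix)
{
	uint32_t lui_insn = 0x000005b7;		/* placeholder lui encoding */
	uint32_t upper_immediate = val + 0x800;
	uint32_t lower_immediate = val - (upper_immediate & 0xfffff000);

	if (upper_immediate & 0xfffff000) {
		/* the kernel patches the upper 20 bits into the lui here */
	} else {
		/* upper bits all zero: the lui is replaced with a nop */
		lui_insn = RISCV_INSN_NOP4;
	}

	if ((lower_immediate & 0x00000fff) ||
	    (apply_fix && lui_insn == RISCV_INSN_NOP4)) {
		/* the kernel patches the low 12 bits into the addi/li here */
		return true;
	}

	/* without the fix, val == 0 falls through and the addi is nopped too */
	return false;
}

int main(void)
{
	printf("val=0, before fix: register written? %d\n",
	       upper_half_register_written(0, false));
	printf("val=0, after fix:  register written? %d\n",
	       upper_half_register_written(0, true));
	return 0;
}

With `val == 0`, the sketch reports that the register is written only when the extra `lui_insn == RISCV_INSN_NOP4` check is applied, matching the behaviour change made by the one-line fix below.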
Fixes: a44fb5722199 ("riscv: Add runtime constant support")
Signed-off-by: Charles Mirabile <cmirabil@redhat.com>
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Tested-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20250530211422.784415-2-cmirabil@redhat.com
Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/riscv/include/asm/runtime-const.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/include/asm/runtime-const.h b/arch/riscv/include/asm/runtime-const.h
index 451fd76b88115..d766e2b9e6df1 100644
--- a/arch/riscv/include/asm/runtime-const.h
+++ b/arch/riscv/include/asm/runtime-const.h
@@ -206,7 +206,7 @@ static inline void __runtime_fixup_32(__le16 *lui_parcel, __le16 *addi_parcel, u
 		addi_insn_mask &= 0x07fff;
 	}
 
-	if (lower_immediate & 0x00000fff) {
+	if (lower_immediate & 0x00000fff || lui_insn == RISCV_INSN_NOP4) {
 		/* replace upper 12 bits of addi with lower 12 bits of val */
 		addi_insn &= addi_insn_mask;
 		addi_insn |= (lower_immediate & 0x00000fff) << 20;