Sorry, forgot to respond to the rest of your comments on this thread.
On Thu, Apr 10, 2025 at 01:04:39PM +0200, Radim Krčmář wrote:
> 2025-03-14T14:39:24-07:00, Deepak Gupta <debug@rivosinc.com>:
> > diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
> > @@ -62,6 +62,9 @@ struct thread_info {
> >  	long			user_sp;	/* User stack pointer */
> >  	int			cpu;
> >  	unsigned long		syscall_work;	/* SYSCALL_WORK_ flags */
> > +#ifdef CONFIG_RISCV_USER_CFI
> > +	struct cfi_status	user_cfi_state;
> > +#endif
<... snipped ...>
> > diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> > @@ -147,6 +147,20 @@ SYM_CODE_START(handle_exception)
> >  	REG_L s0, TASK_TI_USER_SP(tp)
> >  	csrrc s1, CSR_STATUS, t0
> > +	/*
> > +	 * If previous mode was U, capture shadow stack pointer and save it away
> > +	 * Zero CSR_SSP at the same time for sanitization.
> > +	 */
> > +	ALTERNATIVE("nop; nop; nop; nop",
> > +		    __stringify(			\
> > +		    andi s2, s1, SR_SPP;		\
> > +		    bnez s2, skip_ssp_save;		\
> > +		    csrrw s2, CSR_SSP, x0;		\
> > +		    REG_S s2, TASK_TI_USER_SSP(tp);	\
> > +		    skip_ssp_save:),
> > +		    0,
> > +		    RISCV_ISA_EXT_ZICFISS,
> > +		    CONFIG_RISCV_USER_CFI)
> (I'd prefer this closer to the user_sp and kernel_sp swap, it's breaking
> the flow here. We also already know if we've returned from userspace or
> not even without SR_SPP, but reusing the information might tangle the
> logic.)
If CSR_SCRATCH was 0, we are coming from the kernel; otherwise flow goes to `.Lsave_context`. If we were coming from kernel mode, flow eventually merges into `.Lsave_context` as well.

So we would be saving CSR_SSP on every kernel --> kernel trap. That would be unnecessary. IIRC, not touching CSR_SSP unnecessarily was one of the first review comments on the early RFC rounds of this patch series.

We could avoid that by branching, once we determine that we are coming from user mode, to something like `.Lsave_ssp`, which eventually merges into `.Lsave_context`; when coming from the kernel we would branch straight to `.Lsave_context` and thus skip the SSP save logic. But the number of branches this introduces in early exception handling is equivalent to what the current patches do, so I don't see any value in doing that.
Let me know if I am missing something.
> >  	csrr s2, CSR_EPC
> >  	csrr s3, CSR_TVAL
> >  	csrr s4, CSR_CAUSE
> > @@ -236,6 +250,18 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
> >  	csrw CSR_SCRATCH, tp
> > +	/*
> > +	 * Going back to U mode, restore shadow stack pointer
> > +	 */
I can remove my comment because it's obvious.
> Are we? I think we can be just as well returning back to kernel-space.
> Similar to how we can enter the exception handler from kernel-space.
Yes we are. See this excerpt from `ret_from_exception` in `entry.S`:
""" SYM_CODE_START_NOALIGN(ret_from_exception) REG_L s0, PT_STATUS(sp) #ifdef CONFIG_RISCV_M_MODE /* the MPP value is too large to be used as an immediate arg for addi */ li t0, SR_MPP and s0, s0, t0 #else andi s0, s0, SR_SPP #endif bnez s0, 1f
<... snipped ...>
	/*
	 * Going back to U mode, restore shadow stack pointer
	 */
	ALTERNATIVE("nops(2)",
		    __stringify(			\
		    REG_L s3, TASK_TI_USER_SSP(tp);	\
		    csrw CSR_SSP, s3),
		    0,
		    RISCV_ISA_EXT_ZICFISS,
		    CONFIG_RISCV_USER_CFI)
1:
#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
	move	a0, sp
	call	riscv_v_context_nesting_end
<... snipped ...>
"""
- ALTERNATIVE("nop; nop",
__stringify( \
REG_L s3, TASK_TI_USER_SSP(tp); \
csrw CSR_SSP, s3),
0,
RISCV_ISA_EXT_ZICFISS,
CONFIG_RISCV_USER_CFI)
Thanks.