When all CPUs in the system implement the SSBS instructions, we advertise this to userspace via an HWCAP and allow EL0 to toggle the SSBS field in PSTATE directly. Consequently, the state of the mitigation is no longer tracked accurately by the TIF_SSBD thread flag; the PSTATE value is authoritative.

Avoid forcing the SSBS field on context switch for such a system and simply rely on the PSTATE register instead.
Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Srinivas Ramana <sramana@codeaurora.org>
Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/process.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index d54586d5b031..45e867f40a7a 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
 	if (unlikely(next->flags & PF_KTHREAD))
 		return;
 
+	/*
+	 * If all CPUs implement the SSBS instructions, then we just
+	 * need to context-switch the PSTATE field.
+	 */
+	if (cpu_have_feature(cpu_feature(SSBS)))
+		return;
+
 	/* If the mitigation is enabled, then we leave SSBS clear. */
 	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
 	    test_tsk_thread_flag(next, TIF_SSBD))