When all CPUs in the system implement the SSBS instructions, we advertise this via an HWCAP and allow EL0 to toggle the SSBS field in PSTATE directly. Consequently, the state of the mitigation is not accurately tracked by the TIF_SSBD thread flag and the PSTATE value is authoritative.
Avoid forcing the SSBS field in context-switch on such a system, and simply rely on the PSTATE register instead.
Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Srinivas Ramana <sramana@codeaurora.org>
Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/process.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index d54586d5b031..45e867f40a7a 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
 	if (unlikely(next->flags & PF_KTHREAD))
 		return;
 
+	/*
+	 * If all CPUs implement the SSBS instructions, then we just
+	 * need to context-switch the PSTATE field.
+	 */
+	if (cpu_have_feature(cpu_feature(SSBS)))
+		return;
+
 	/* If the mitigation is enabled, then we leave SSBS clear. */
 	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
 	    test_tsk_thread_flag(next, TIF_SSBD))
On 2020-02-06 11:34, Will Deacon wrote:
> When all CPUs in the system implement the SSBS instructions, we advertise this via an HWCAP and allow EL0 to toggle the SSBS field in PSTATE directly. Consequently, the state of the mitigation is not accurately tracked by the TIF_SSBD thread flag and the PSTATE value is authoritative.
>
> Avoid forcing the SSBS field in context-switch on such a system, and simply rely on the PSTATE register instead.
>
> Cc: <stable@vger.kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Srinivas Ramana <sramana@codeaurora.org>
> Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kernel/process.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index d54586d5b031..45e867f40a7a 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct task_struct *next)
>  	if (unlikely(next->flags & PF_KTHREAD))
>  		return;
>  
> +	/*
> +	 * If all CPUs implement the SSBS instructions, then we just
> +	 * need to context-switch the PSTATE field.
> +	 */
> +	if (cpu_have_feature(cpu_feature(SSBS)))
> +		return;
> +
>  	/* If the mitigation is enabled, then we leave SSBS clear. */
>  	if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
>  	    test_tsk_thread_flag(next, TIF_SSBD))
Looks good to me.
Reviewed-by: Marc Zyngier maz@kernel.org
M.
On Thu, Feb 06, 2020 at 11:49:31AM +0000, Marc Zyngier wrote:
> On 2020-02-06 11:34, Will Deacon wrote:
> > When all CPUs in the system implement the SSBS instructions, we advertise this via an HWCAP and allow EL0 to toggle the SSBS field in PSTATE directly. Consequently, the state of the mitigation is not accurately tracked by the TIF_SSBD thread flag and the PSTATE value is authoritative.
> >
> > Avoid forcing the SSBS field in context-switch on such a system, and simply rely on the PSTATE register instead.
> >
> > [...]
>
> Looks good to me.

Ja!

> Reviewed-by: Marc Zyngier <maz@kernel.org>
Cheers. It occurs to me that, although the patch is correct, the comment and the commit message need tweaking because we're actually predicating this on the presence of SSBS in any form, so the instructions may not be implemented. That's fine because the prctl() updates pstate, so it remains authoritative and can't be lost by one of the CPUs treating it as RAZ/WI.
I'll spin a v2 later on.
Will
On 2020-02-06 12:20, Will Deacon wrote:
> On Thu, Feb 06, 2020 at 11:49:31AM +0000, Marc Zyngier wrote:
> > On 2020-02-06 11:34, Will Deacon wrote:
> > > When all CPUs in the system implement the SSBS instructions, we advertise this via an HWCAP and allow EL0 to toggle the SSBS field in PSTATE directly. Consequently, the state of the mitigation is not accurately tracked by the TIF_SSBD thread flag and the PSTATE value is authoritative.
> > >
> > > [...]
> >
> > Looks good to me.
>
> Ja!

Ach...

> > Reviewed-by: Marc Zyngier <maz@kernel.org>
>
> Cheers. It occurs to me that, although the patch is correct, the comment and the commit message need tweaking because we're actually predicating this on the presence of SSBS in any form, so the instructions may not be implemented. That's fine because the prctl() updates pstate, so it remains authoritative and can't be lost by one of the CPUs treating it as RAZ/WI.

True. It is the PSTATE bit that actually matters, not the presence of the control instruction.

> I'll spin a v2 later on.
Thanks,
M.