On Wed, Oct 05, 2022 at 04:24:54PM -0700, Jim Mattson wrote:
On Wed, Oct 5, 2022 at 3:03 PM Suraj Jitindar Singh <surajjs@amazon.com> wrote:
tl;dr: The existing mitigation for eIBRS PBRSB predictions uses an INT3 to ensure a CALL instruction retires before a following unbalanced RET. Replace this with a serialising WRMSR instruction, which has a lower performance penalty.
== Background ==
eIBRS (enhanced indirect branch restricted speculation) is used to prevent predictor addresses trained in one privilege domain from being used for prediction in a higher privilege domain.
== Problem ==
On processors with eIBRS protections there can be a case where, upon VM exit, a guest address may be used as an RSB prediction for an unbalanced RET if a CALL instruction has not yet retired. This is termed PBRSB (Post-Barrier Return Stack Buffer).
A mitigation for this was introduced in commit 2b1299322016731d56807aa49254a5ea3080b6b3 ("x86/speculation: Add RSB VM Exit protections").
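For context, the RSB_VMEXIT_LITE path added by that commit boils down to a single CALL (which pushes one RSB entry), an INT3 speculation trap at the pushed return address, a stack fixup, and an LFENCE so the CALL retires before the first unbalanced RET. A minimal sketch of that sequence, written as freestanding inline asm for illustration only (assuming x86-64; the function name is made up, and the real code is the __FILL_RETURN_SLOT/__FILL_ONE_RETURN helpers in nospec-branch.h, not quoted verbatim here):

static inline void pbrsb_lite_barrier_sketch(void)
{
	asm volatile("call 1f\n\t"		/* push one return address into the RSB */
		     "int3\n\t"			/* trap speculative returns to the pushed address */
		     "1: add $8, %%rsp\n\t"	/* drop the return address the CALL pushed */
		     "lfence"			/* ensure the CALL has retired before any later RET */
		     ::: "memory");
}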
This mitigation [1] has a ~1% performance impact on VM exit compared to without it [2].
== Solution ==
The WRMSR instruction can be used as a speculation barrier and a serialising instruction. Use this on the VM exit path instead to ensure that a CALL instruction (in this case the call to vmx_spec_ctrl_restore_host) has retired before the prediction of a following unbalanced RET.
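Concretely, the CALL into vmx_spec_ctrl_restore_host() then doubles as the instruction whose retirement the WRMSR guarantees. An annotated, simplified rendering of the resulting check (the comments here are illustrative and not part of the patch; the real hunk is in vmx.c below):

	if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) ||
	    cpu_feature_enabled(X86_FEATURE_RSB_VMEXIT_LITE) ||	/* PBRSB-affected eIBRS parts */
	    vmx->spec_ctrl != hostval)
		/*
		 * Per the description above, the WRMSR acts as the speculation
		 * barrier / serialising instruction: by the time it completes,
		 * the CALL that entered this function has retired, so the
		 * following unbalanced RET in __vmx_vcpu_run() no longer meets
		 * the PBRSB conditions.
		 */
		native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval);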
This mitigation [3] has a negligible performance impact.
== Testing ==
Ran the outl_to_kernel kvm-unit-tests test, which counts the cycles for an exit to kernel mode, 200 times per configuration.
[1] With existing mitigation:  Average: 2026 cycles
[2] With no mitigation:        Average: 2008 cycles
[3] With proposed mitigation:  Average: 2008 cycles
Signed-off-by: Suraj Jitindar Singh <surajjs@amazon.com>
Cc: stable@vger.kernel.org
 arch/x86/include/asm/nospec-branch.h | 7 +++----
 arch/x86/kvm/vmx/vmenter.S           | 3 +--
 arch/x86/kvm/vmx/vmx.c               | 5 +++++
 3 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c936ce9f0c47..e5723e024b47 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -159,10 +159,9 @@
  * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
  * monstrosity above, manually.
  */
-.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2=ALT_NOT(X86_FEATURE_ALWAYS)
-	ALTERNATIVE_2 "jmp .Lskip_rsb_\@", \
-		__stringify(__FILL_RETURN_BUFFER(\reg,\nr)), \ftr, \
-		__stringify(__FILL_ONE_RETURN), \ftr2
+.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
+	ALTERNATIVE "jmp .Lskip_rsb_\@", \
+		__stringify(__FILL_RETURN_BUFFER(\reg,\nr)), \ftr
 .Lskip_rsb_\@:
 .endm

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 6de96b943804..eb82797bd7bf 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -231,8 +231,7 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL)
 	 * single call to retire, before the first unbalanced RET.
 	 */
-	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
-			   X86_FEATURE_RSB_VMEXIT_LITE
+	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT

 	pop %_ASM_ARG2	/* @flags */
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c9b49a09e6b5..fdcd8e10c2ab 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7049,8 +7049,13 @@ void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
 	 * For legacy IBRS, the IBRS bit always needs to be written after
 	 * transitioning from a less privileged predictor mode, regardless of
 	 * whether the guest/host values differ.
+	 *
+	 * For eIBRS affected by Post Barrier RSB Predictions a serialising
+	 * instruction (wrmsr) must be executed to ensure a call instruction has
+	 * retired before the prediction of a following unbalanced ret.
 	 */
 	if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) ||
+	    cpu_feature_enabled(X86_FEATURE_RSB_VMEXIT_LITE) ||
 	    vmx->spec_ctrl != hostval)
 		native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval);
Okay. I see how this almost meets the requirements. But this WRMSR is conditional, which means that there's a speculative path through this code that ends up at the unbalanced RET without executing the WRMSR.
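To spell out the hazard (an illustrative sketch, not code from the thread): the guard is an ordinary conditional branch, so the CPU can speculate down the not-taken path, skip the WRMSR, return from vmx_spec_ctrl_restore_host() and reach the unbalanced RET with no serialising instruction having executed:

	if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) ||
	    cpu_feature_enabled(X86_FEATURE_RSB_VMEXIT_LITE) ||
	    vmx->spec_ctrl != hostval)			/* may be (mis)predicted as not-taken... */
		native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval);	/* ...so the serialising WRMSR is speculatively skipped */
	/* ...and speculation continues out of this function to the unbalanced RET in __vmx_vcpu_run(). */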
Agree. I was just about to post this.