In preparation for moving vmload/vmsave to __svm_vcpu_run, keep the pointer to the struct vcpu_svm in %rdi. This way it is possible to load svm->vmcb01.pa in %rax without clobbering the pointer to svm itself.
No functional change intended.
Cc: stable@vger.kernel.org
Fixes: a149180fbcf3 ("x86: Add magic AMD return-thunk")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/svm/vmenter.S | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
index f0ff41103e4c..531510ab6072 100644
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -54,29 +54,29 @@ SYM_FUNC_START(__svm_vcpu_run)
 	/* Save @vmcb. */
 	push %_ASM_ARG1
 
-	/* Move @svm to RAX. */
-	mov %_ASM_ARG2, %_ASM_AX
+	/* Move @svm to RDI. */
+	mov %_ASM_ARG2, %_ASM_DI
+
+	/* "POP" @vmcb to RAX. */
+	pop %_ASM_AX
 
 	/* Load guest registers. */
-	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
-	mov VCPU_RDX(%_ASM_AX), %_ASM_DX
-	mov VCPU_RBX(%_ASM_AX), %_ASM_BX
-	mov VCPU_RBP(%_ASM_AX), %_ASM_BP
-	mov VCPU_RSI(%_ASM_AX), %_ASM_SI
-	mov VCPU_RDI(%_ASM_AX), %_ASM_DI
+	mov VCPU_RCX(%_ASM_DI), %_ASM_CX
+	mov VCPU_RDX(%_ASM_DI), %_ASM_DX
+	mov VCPU_RBX(%_ASM_DI), %_ASM_BX
+	mov VCPU_RBP(%_ASM_DI), %_ASM_BP
+	mov VCPU_RSI(%_ASM_DI), %_ASM_SI
 #ifdef CONFIG_X86_64
-	mov VCPU_R8 (%_ASM_AX), %r8
-	mov VCPU_R9 (%_ASM_AX), %r9
-	mov VCPU_R10(%_ASM_AX), %r10
-	mov VCPU_R11(%_ASM_AX), %r11
-	mov VCPU_R12(%_ASM_AX), %r12
-	mov VCPU_R13(%_ASM_AX), %r13
-	mov VCPU_R14(%_ASM_AX), %r14
-	mov VCPU_R15(%_ASM_AX), %r15
+	mov VCPU_R8 (%_ASM_DI), %r8
+	mov VCPU_R9 (%_ASM_DI), %r9
+	mov VCPU_R10(%_ASM_DI), %r10
+	mov VCPU_R11(%_ASM_DI), %r11
+	mov VCPU_R12(%_ASM_DI), %r12
+	mov VCPU_R13(%_ASM_DI), %r13
+	mov VCPU_R14(%_ASM_DI), %r14
+	mov VCPU_R15(%_ASM_DI), %r15
 #endif
-
-	/* "POP" @vmcb to RAX. */
-	pop %_ASM_AX
+	mov VCPU_RDI(%_ASM_DI), %_ASM_DI
 
 	/* Enter guest mode */
 	sti
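For context, a minimal sketch of the follow-up this enables (the
SVM_vmcb01_pa offset symbol is assumed here for illustration, it is not
defined by this patch): with @svm parked in RDI, RAX is free to hold the
VMCB01 physical address, which is the implicit operand of vmload/vmsave.

	/* Sketch only; SVM_vmcb01_pa is an assumed asm-offsets symbol. */
	mov SVM_vmcb01_pa(%_ASM_DI), %_ASM_AX	/* svm->vmcb01.pa; svm stays in RDI */
	vmload %_ASM_AX				/* vmload/vmsave take the VMCB address in RAX */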
On Wed, Nov 09, 2022, Paolo Bonzini wrote:
> In preparation for moving vmload/vmsave to __svm_vcpu_run,

__svm_vcpu_run()

> keep the pointer to the struct vcpu_svm in %rdi. This way it is
> possible to load svm->vmcb01.pa in %rax without clobbering the pointer
> to svm itself.
If you feel like doing fixup before pushing, add a note to call out that avoiding RAX will also be "necessary" to play nice with loading/storing MSR_SPEC_CTRL? When fiddling with this code, I found the RDMSR/WRMSR clobbers to be far more problematic than the VMCB pointers.
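To spell out the clobbers mentioned above (a sketch, not code from this
series): RDMSR and WRMSR implicitly use ECX for the MSR index and
EDX:EAX for the value, so a pointer staged in RAX would not survive a
MSR_SPEC_CTRL read or write.

	mov $MSR_IA32_SPEC_CTRL, %ecx	/* MSR index is implicitly taken from ECX */
	rdmsr				/* value returned in EDX:EAX; RAX is clobbered */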
> No functional change intended.
>
> Cc: stable@vger.kernel.org
> Fixes: a149180fbcf3 ("x86: Add magic AMD return-thunk")
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Reviewed-by: Sean Christopherson <seanjc@google.com>
On 11/9/22 16:20, Sean Christopherson wrote:
>> keep the pointer to the struct vcpu_svm in %rdi. This way it is
>> possible to load svm->vmcb01.pa in %rax without clobbering the
>> pointer to svm itself.
>
> If you feel like doing fixup before pushing, add a note to call out
> that avoiding RAX will also be "necessary" to play nice with
> loading/storing MSR_SPEC_CTRL? When fiddling with this code, I found
> the RDMSR/WRMSR clobbers to be far more problematic than the VMCB
> pointers.
I tried to avoid the "in the future" phrasing, which some people
dislike, but I'll rephrase anyway. Even leaving
RDMSR/WRMSR/VMLOAD/VMRUN/VMSAVE aside, avoiding the potential clash
with the 32-bit register arguments is reason enough for the patch.
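Concretely, the 32-bit clash (a sketch, assuming the regparm(3) mapping
from asm.h): _ASM_ARG1/_ASM_ARG2/_ASM_ARG3 map to EAX/EDX/ECX, so @vmcb
arrives in the very register the old code used as scratch.

	/* 32-bit regparm(3): @vmcb arrives in EAX, @svm in EDX. */
	push %_ASM_ARG1			/* @vmcb must be saved first... */
	mov %_ASM_ARG2, %_ASM_AX	/* ...because this mov overwrites it */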
Paolo