From: Jim Mattson <jmattson@google.com>
Sent: Thursday, January 6, 2022 6:31 AM
On Wed, Jan 5, 2022 at 2:22 PM Sean Christopherson <seanjc@google.com> wrote:
On Wed, Jan 05, 2022, Yang Zhong wrote:
@@ -6399,6 +6424,26 @@ static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu,
 	kvm_after_interrupt(vcpu);
 }

+static void handle_nm_fault_irqoff(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Save xfd_err to guest_fpu before interrupt is enabled, so the
+	 * MSR value is not clobbered by the host activity before the guest
+	 * has chance to consume it.
+	 *
+	 * We should not blindly read xfd_err here, since this exception
Nit, avoid "we", and explain what KVM does (or doesn't) do, not what KVM
"should"
do, e.g. just say
* Do not blindly read ...
+	 * might be caused by L1 interception on a platform which doesn't
+	 * support xfd at all.
+	 *
+	 * Do it conditionally upon guest_fpu::xfd. xfd_err matters
+	 * only when xfd contains a non-zero value.
+	 *
+	 * Queuing exception is done in vmx_handle_exit. See comment there.
Another nit, it's worth explaining why XFD_ERR needs to be read here regardless of is_guest_mode(). E.g.

	 * Injecting the #NM back into the guest is handled in the standard path
	 * as an #NM in L2 may be reflected into L1 as a VM-Exit.  Read XFD_ERR
	 * even if the #NM is from L2, as L1 may have exposed XFD to L2.
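(For context, the rest of the new function, trimmed from the quote above, does the conditional MSR read; roughly the below, with field names taken from the guest_fpu rework earlier in this series:)

+	if (vcpu->arch.guest_fpu.fpstate->xfd)
+		rdmsrl(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
+}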
Do we have tests of L1 passing through XFD to L2?
As Sean mentioned, passing through XFD to L2 still needs one more change in nested_vmx_prepare_msr_bitmap(). This will be done in a follow-up patch.
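Roughly, that follow-up would merge the XFD MSRs into the vmcs02 bitmap built by nested_vmx_prepare_msr_bitmap(); an untested sketch, assuming the existing nested_vmx_set_intercept_for_msr() helper and with the exact MSR_TYPE flags still to be settled in that patch:

	/*
	 * Leave XFD/XFD_ERR unintercepted for L2 only if both L0 and L1
	 * already leave them unintercepted.
	 */
	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
					 MSR_IA32_XFD, MSR_TYPE_RW);
	nested_vmx_set_intercept_for_msr(vmx, msr_bitmap_l1, msr_bitmap_l0,
					 MSR_IA32_XFD_ERR, MSR_TYPE_RW);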
btw we did verify having L1 emulate XFD for L2, by running the amx selftest in L1. But the overall nested configuration will need more testing and polish after this series.
Thanks
Kevin