Hi Paolo,
Thanks for the patch.
On Tue, 29 Sep 2020 at 20:17, Paolo Bonzini <pbonzini@redhat.com> wrote:
On 29/09/20 15:39, Qian Cai wrote:
On Tue, 2020-09-29 at 14:26 +0200, Paolo Bonzini wrote:
On 29/09/20 13:59, Qian Cai wrote:
WARN_ON_ONCE(!allow_smaller_maxphyaddr);
I noticed the original patch did not have this WARN_ON_ONCE(), but the mainline commit b96e6506c2ea ("KVM: x86: VMX: Make smaller physical guest address space support user-configurable") does have it for some reason.
Because that part of the code should not be reached. The exception bitmap is set up with
	if (!vmx_need_pf_intercept(vcpu))
		eb &= ~(1u << PF_VECTOR);
where
static inline bool vmx_need_pf_intercept(struct kvm_vcpu *vcpu)
{
	if (!enable_ept)
		return true;

	return allow_smaller_maxphyaddr &&
	       cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits;
}
We shouldn't get here if "enable_ept && !allow_smaller_maxphyaddr", which implies vmx_need_pf_intercept(vcpu) == false. So the warning is genuine; I've sent a patch.
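To make that implication concrete, here is a minimal user-space model of the logic above. This is not KVM code; need_pf_intercept() and the harness are hypothetical stand-ins, with the guest/host MAXPHYADDR comparison folded into a boolean:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define PF_VECTOR 14

/* Hypothetical stand-ins for the module parameter and EPT setting. */
static bool enable_ept = true;
static bool allow_smaller_maxphyaddr = false;

/* Mirrors the vmx_need_pf_intercept() logic quoted above. */
static bool need_pf_intercept(bool guest_maxphyaddr_smaller)
{
	if (!enable_ept)
		return true;
	return allow_smaller_maxphyaddr && guest_maxphyaddr_smaller;
}

int main(void)
{
	unsigned int eb = ~0u;	/* start with everything intercepted */

	if (!need_pf_intercept(true))
		eb &= ~(1u << PF_VECTOR);

	/* With enable_ept && !allow_smaller_maxphyaddr the #PF bit is
	 * always clear, so a #PF vmexit "can't happen" -- which is
	 * exactly what the WARN_ON_ONCE asserts. */
	assert(!(eb & (1u << PF_VECTOR)));
	printf("#PF intercept bit clear: a #PF vmexit would trip the WARN\n");
	return 0;
}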
Care to provide a link to the patch? Just curious.
Ok, I haven't sent it yet. :) But here it is:
commit 608e2791d7353e7d777bf32038ca3e7d548155a4 (HEAD -> kvm-master)
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Tue Sep 29 08:31:32 2020 -0400
    KVM: VMX: update PFEC_MASK/PFEC_MATCH together with PF intercept

    The PFEC_MASK and PFEC_MATCH fields in the VMCS reverse the meaning of
    the #PF intercept bit in the exception bitmap when they do not match.
    This means that, if PFEC_MASK and/or PFEC_MATCH are set, the hypervisor
    can get a vmexit for #PF exceptions even when the corresponding bit is
    clear in the exception bitmap.  This is unexpected and is promptly
    reported as a WARN_ON_ONCE.  To fix it, reset PFEC_MASK and PFEC_MATCH
    when the #PF intercept is disabled (as is common with enable_ept &&
    !allow_smaller_maxphyaddr).
I have tested this patch on an x86_64 machine and the reported issue is gone.
    Reported-by: Qian Cai <cai@redhat.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f0384e93548a..f4e9c310032a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -794,6 +794,18 @@ void update_exception_bitmap(struct kvm_vcpu *vcpu)
 	 */
 	if (is_guest_mode(vcpu))
 		eb |= get_vmcs12(vcpu)->exception_bitmap;
+	else {
+		/*
+		 * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched
+		 * between guest and host. In that case we only care about present
+		 * faults. For vmcs02, however, PFEC_MASK and PFEC_MATCH are set in
+		 * prepare_vmcs02_rare.
+		 */
+		bool selective_pf_trap = enable_ept && (eb & (1u << PF_VECTOR));
+		int mask = selective_pf_trap ? PFERR_PRESENT_MASK : 0;
+		vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, mask);
+		vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, mask);
+	}
 
 	vmcs_write32(EXCEPTION_BITMAP, eb);
 }
@@ -4355,16 +4367,6 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 		vmx->pt_desc.guest.output_mask = 0x7F;
 		vmcs_write64(GUEST_IA32_RTIT_CTL, 0);
 	}
-	/*
-	 * If EPT is enabled, #PF is only trapped if MAXPHYADDR is mismatched
-	 * between guest and host. In that case we only care about present
-	 * faults.
-	 */
-	if (enable_ept) {
-		vmcs_write32(PAGE_FAULT_ERROR_CODE_MASK, PFERR_PRESENT_MASK);
-		vmcs_write32(PAGE_FAULT_ERROR_CODE_MATCH, PFERR_PRESENT_MASK);
-	}
 }
 
 static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
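For reference, the hardware rule the commit message paraphrases fits in a few lines of C. This is a hypothetical model, not kernel code (pf_causes_vmexit() does not exist in KVM): per the Intel SDM, if (PFEC & PFEC_MASK) == PFEC_MATCH, bit 14 of the exception bitmap is honored as-is; otherwise its meaning is reversed.

#include <stdbool.h>
#include <stdio.h>

/* Model of whether a guest #PF causes a VM exit: on a mask/match hit,
 * bit 14 of the exception bitmap applies directly; on a miss, the bit's
 * meaning is inverted. */
static bool pf_causes_vmexit(bool eb_bit14, unsigned int pfec,
			     unsigned int pfec_mask, unsigned int pfec_match)
{
	bool hit = (pfec & pfec_mask) == pfec_match;
	return hit ? eb_bit14 : !eb_bit14;
}

int main(void)
{
	/* Stale values left over from init_vmcs (PFERR_PRESENT_MASK is
	 * bit 0 of the error code). */
	unsigned int mask = 1, match = 1;

	/* Bit 14 clear, not-present fault (PFEC bit 0 clear): the
	 * comparison misses, so the exit happens anyway -> prints 1. */
	printf("stale mask/match: exit=%d\n",
	       pf_causes_vmexit(false, 0, mask, match));

	/* Mask/match reset to 0: the comparison always hits and bit 14
	 * keeps its normal meaning -> prints 0. */
	printf("reset mask/match: exit=%d\n",
	       pf_causes_vmexit(false, 0, 0, 0));
	return 0;
}

With the stale mask/match values, a not-present fault exits even though bit 14 is clear, which is exactly the vmexit that trips the WARN_ON_ONCE; resetting both fields to 0, as the patch does when the intercept is disabled, restores bit 14's normal meaning.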
Test log: https://lkft.validation.linaro.org/scheduler/job/1813223
- Naresh