kvm_cpu_accept_dm_intr and kvm_vcpu_ready_for_interrupt_injection are a hodge-podge of conditions, hacked together to get something that more or less works. But what is actually needed is much simpler; in both cases the fundamental question is, do we have a place to stash an interrupt if userspace does KVM_INTERRUPT?
In userspace irqchip mode, that is !vcpu->arch.interrupt.injected. Currently kvm_event_needs_reinjection(vcpu) covers it, but it is unnecessarily restrictive.
In split irqchip mode it's a bit more complicated: we need to check kvm_apic_accept_pic_intr(vcpu) (the IRQ window exit is basically an INTACK cycle and thus requires ExtINTs not to be masked) as well as !pending_userspace_extint(vcpu). However, there is no need to check kvm_event_needs_reinjection(vcpu), since split irqchip keeps pending ExtINT state separate from event injection state, and checking kvm_cpu_has_interrupt(vcpu) is wrong too since ExtINT has higher priority than APIC interrupts. In fact the latter fixes a bug: when userspace requests an IRQ window vmexit, an interrupt in the local APIC can cause kvm_cpu_has_interrupt() to be true and thus kvm_vcpu_ready_for_interrupt_injection() to return false. When this happens, vcpu_run does not exit to userspace but the interrupt window vmexits keep occurring. The VM loops without any hope of making progress.
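To make that livelock concrete, here is a toy model in plain userspace C (editor's illustration, not KVM code; the variable and helper names are simplified stand-ins for dm_request_for_irq_injection(), kvm_cpu_has_interrupt() and the pre-patch kvm_vcpu_ready_for_interrupt_injection()):

    #include <stdbool.h>
    #include <stdio.h>

    static bool apic_irq_pending = true;           /* stand-in for kvm_cpu_has_interrupt()        */
    static bool userspace_wants_irq_window = true; /* stand-in for dm_request_for_irq_injection() */

    /* pre-patch check: refuses while *any* interrupt is queued, even an APIC one */
    static bool old_ready_for_interrupt_injection(void)
    {
        return !apic_irq_pending;
    }

    int main(void)
    {
        for (int exits = 0; exits < 3; exits++) {
            /* an interrupt-window vmexit has just happened */
            if (userspace_wants_irq_window &&
                old_ready_for_interrupt_injection()) {
                puts("KVM_EXIT_IRQ_WINDOW_OPEN -> return to userspace");
                return 0;
            }
            puts("IRQ-window vmexit handled in kernel, window re-requested");
        }
        puts("(in the real bug this repeats forever)");
        return 0;
    }

The pending APIC interrupt keeps the old readiness check false, so the "return to userspace" branch is never taken while the interrupt window keeps being re-requested.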
Once we try to fix these with something like
        return kvm_arch_interrupt_allowed(vcpu) &&
-               !kvm_cpu_has_interrupt(vcpu) &&
-               !kvm_event_needs_reinjection(vcpu) &&
-               kvm_cpu_accept_dm_intr(vcpu);
+               (!lapic_in_kernel(vcpu)
+                   ? !vcpu->arch.interrupt.injected
+                   : (kvm_apic_accept_pic_intr(vcpu)
+                      && !pending_userspace_extint(v)));
we realize two things. First, thanks to the previous patch the complex conditional can reuse !kvm_cpu_has_extint(vcpu). Second, the interrupt window request in vcpu_enter_guest()
bool req_int_win = dm_request_for_irq_injection(vcpu) && kvm_cpu_accept_dm_intr(vcpu);
should be kept in sync with kvm_vcpu_ready_for_interrupt_injection(): it is unnecessary to ask the processor for an interrupt window if we would not be able to return to userspace. Therefore, the complex conditional is really the correct implementation of kvm_cpu_accept_dm_intr(vcpu). It all makes sense:
- we can accept an interrupt from userspace if there is a place to stash it (and, for irqchip split, ExtINTs are not masked). Interrupts from userspace _can_ be accepted even if right now EFLAGS.IF=0.
- in order to tell userspace we will inject its interrupt ("IRQ window open" i.e. kvm_vcpu_ready_for_interrupt_injection), both KVM and the vCPU need to be ready to accept the interrupt.
... and this is what the patch implements.
Reported-by: David Woodhouse dwmw@amazon.co.uk
Analyzed-by: David Woodhouse dwmw@amazon.co.uk
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini pbonzini@redhat.com
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/irq.c              |  2 +-
 arch/x86/kvm/x86.c              | 17 +++++++----------
 3 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index d44858b69353..ddaf3e01a854 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1655,6 +1655,7 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v);
 int kvm_cpu_has_interrupt(struct kvm_vcpu *vcpu);
+int kvm_cpu_has_extint(struct kvm_vcpu *v);
 int kvm_arch_interrupt_allowed(struct kvm_vcpu *vcpu);
 int kvm_cpu_get_interrupt(struct kvm_vcpu *v);
 void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index e2d49a506e7f..fa01f07e449e 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -40,7 +40,7 @@ static int pending_userspace_extint(struct kvm_vcpu *v)
  * check if there is pending interrupt from
  * non-APIC source without intack.
  */
-static int kvm_cpu_has_extint(struct kvm_vcpu *v)
+int kvm_cpu_has_extint(struct kvm_vcpu *v)
 {
 	/*
 	 * FIXME: interrupt.injected represents an interrupt that it's
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 447edc0d1d5a..54124b6211df 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4051,21 +4051,22 @@ static int kvm_vcpu_ioctl_set_lapic(struct kvm_vcpu *vcpu,
 static int kvm_cpu_accept_dm_intr(struct kvm_vcpu *vcpu)
 {
-	return (!lapic_in_kernel(vcpu) ||
-		kvm_apic_accept_pic_intr(vcpu));
+	/*
+	 * We can accept userspace's request for interrupt injection
+	 * as long as we have a place to store the interrupt number.
+	 * The actual injection will happen when the CPU is able to
+	 * deliver the interrupt.
+	 */
+	if (kvm_cpu_has_extint(vcpu))
+		return false;
+
+	/* Acknowledging ExtINT does not happen if LINT0 is masked. */
+	return !(lapic_in_kernel(vcpu) && !kvm_apic_accept_pic_intr(vcpu));
 }
-/*
- * if userspace requested an interrupt window, check that the
- * interrupt window is open.
- *
- * No need to exit to userspace if we already have an interrupt queued.
- */
 static int kvm_vcpu_ready_for_interrupt_injection(struct kvm_vcpu *vcpu)
 {
 	return kvm_arch_interrupt_allowed(vcpu) &&
-		!kvm_cpu_has_interrupt(vcpu) &&
-		!kvm_event_needs_reinjection(vcpu) &&
 		kvm_cpu_accept_dm_intr(vcpu);
 }
On 11/27/20 12:21 PM, Paolo Bonzini wrote:
Reviewed-by: Filippo Sironi sironi@amazon.de
On Fri, Nov 27, 2020 at 7:26 PM Paolo Bonzini pbonzini@redhat.com wrote:
- we can accept an interrupt from userspace if there is a place to stash it (and, for irqchip split, ExtINTs are not masked). Interrupts from userspace _can_ be accepted even if right now EFLAGS.IF=0.
Hello, Paolo
If userspace does KVM_INTERRUPT, vcpu->arch.interrupt.injected is set immediately, and in inject_pending_event(), we have
	else if (!vcpu->arch.exception.pending) {
		if (vcpu->arch.nmi_injected) {
			kvm_x86_ops.set_nmi(vcpu);
			can_inject = false;
		} else if (vcpu->arch.interrupt.injected) {
			kvm_x86_ops.set_irq(vcpu);
			can_inject = false;
		}
	}
I'm curious: can kvm_x86_ops.set_irq() here queue the irq while EFLAGS.IF=0? If not, which code prevents it?
I'm asking about this because I just noticed that an interrupt can be queued while an exception is pending, and this patch relaxes it even more.
Note: an interrupt could NOT be queued while an exception was pending until 664f8e26b00c7 ("KVM: X86: Fix loss of exception which has not yet been injected"), a change which I think is dangerous.
Thanks Lai
On Fri, Apr 09, 2021, Lai Jiangshan wrote:
If userspace does KVM_INTERRUPT, vcpu->arch.interrupt.injected is set immediately, and in inject_pending_event(), we have
	else if (!vcpu->arch.exception.pending) {
		if (vcpu->arch.nmi_injected) {
			kvm_x86_ops.set_nmi(vcpu);
			can_inject = false;
		} else if (vcpu->arch.interrupt.injected) {
			kvm_x86_ops.set_irq(vcpu);
			can_inject = false;
		}
	}
I'm curious: can kvm_x86_ops.set_irq() here queue the irq while EFLAGS.IF=0? If not, which code prevents it?
The interrupt is only directly injected if the local APIC is _not_ in-kernel. If userspace is managing the local APIC, my understanding is that userspace is also responsible for honoring EFLAGS.IF, though KVM aids userspace by updating vcpu->run->ready_for_interrupt_injection when exiting to userspace. When userspace is modeling the local APIC, that resolves to kvm_vcpu_ready_for_interrupt_injection():
return kvm_arch_interrupt_allowed(vcpu) && kvm_cpu_accept_dm_intr(vcpu);
where kvm_arch_interrupt_allowed() checks EFLAGS.IF (and an edge case related to nested virtualization). KVM also captures EFLAGS.IF in vcpu->run->if_flag. For whatever reason, QEMU checks both vcpu->run flags before injecting an IRQ, maybe to handle a case where QEMU itself clears EFLAGS.IF?
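As an illustration of that userspace contract, here is a minimal sketch of what such an injection path could look like (the function name and error handling are hypothetical; only the kvm_run fields and the KVM_INTERRUPT ioctl come from the KVM API discussed in this thread):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdbool.h>

    /* Try to hand one interrupt vector to KVM; return false if it must stay pending. */
    static bool try_inject_irq(int vcpu_fd, struct kvm_run *run, unsigned int vector)
    {
        /*
         * ready_for_interrupt_injection already folds in EFLAGS.IF via
         * kvm_arch_interrupt_allowed(); also checking run->if_flag is the
         * belt-and-suspenders approach QEMU takes.
         */
        if (!run->ready_for_interrupt_injection || !run->if_flag) {
            /* ask KVM to exit as soon as injection becomes possible */
            run->request_interrupt_window = 1;
            return false;
        }

        struct kvm_interrupt irq = { .irq = vector };
        if (ioctl(vcpu_fd, KVM_INTERRUPT, &irq) < 0)
            return false;

        run->request_interrupt_window = 0;
        return true;
    }

A VMM would call something like this before each KVM_RUN whenever it has a pending vector for a vCPU.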
On Tue, Apr 13, 2021 at 5:43 AM Sean Christopherson seanjc@google.com wrote:
The interrupt is only directly injected if the local APIC is _not_ in-kernel. If userspace is managing the local APIC, my understanding is that userspace is also responsible for honoring EFLAGS.IF, though KVM aids userspace by updating vcpu->run->ready_for_interrupt_injection when exiting to userspace. When userspace is modeling the local APIC, that resolves to kvm_vcpu_ready_for_interrupt_injection():
return kvm_arch_interrupt_allowed(vcpu) && kvm_cpu_accept_dm_intr(vcpu);
where kvm_arch_interrupt_allowed() checks EFLAGS.IF (and an edge case related to nested virtualization). KVM also captures EFLAGS.IF in vcpu->run->if_flag. For whatever reason, QEMU checks both vcpu->run flags before injecting an IRQ, maybe to handle a case where QEMU itself clears EFLAGS.IF?
If userspace is managing the local APIC, the user VMM would inject an IRQ when kvm_run->ready_for_interrupt_injection=1, since this flag implied EFLAGS.IF before this patch (for example, gVisor checks only this flag instead of kvm_run->if_flag). This patch claims that it has a place to stash the IRQ when EFLAGS.IF=0, but inject_pending_event() seems to ignore EFLAGS.IF and queues the IRQ to the guest directly in the first branch that uses "kvm_x86_ops.set_irq(vcpu)".
I have encountered a problem, but I failed to dissect it exactly because some internal code is involved.
It is somewhat possible that it resulted from Li Wanpeng's patch (I replied to this patch because it relaxes the condition even more, without explaining how the IRQ is suppressed/stashed instead of being delivered to the guest).
When a guest userspace application hits an exception, the vCPU vmexits and returns to the user VMM (gVisor) under the right combination of conditions, and the user VMM wants to queue an IRQ to it: at that point EFLAGS.IF=1 and ready_for_interrupt_injection=1, so the user VMM happily queues the IRQ. In inject_pending_event(), the IRQ has lower priority, so the earlier exception is queued to the guest first. But the IRQ cannot be suppressed for long: it is queued at the beginning of the exception handler, where EFLAGS.IF=0. (Before Li Wanpeng's patch, ready_for_interrupt_injection was 0, since there was an exception pending.)
All of the above is just my guess, but I want to find more clues. And this patch says:
: we can accept an interrupt from userspace if there is a place
: to stash it (and, for irqchip split, ExtINTs are not masked).
: Interrupts from userspace _can_ be accepted even if right now
: EFLAGS.IF=0.
So it would help my analysis if I knew how this behavior is achieved, since inject_pending_event() doesn't check EFLAGS.IF before the first use of "kvm_x86_ops.set_irq(vcpu)".
Thanks
Lai.
On 13/04/21 13:03, Lai Jiangshan wrote:
This patch claims that it has a place to stash the IRQ when EFLAGS.IF=0, but inject_pending_event() seems to ignore EFLAGS.IF and queues the IRQ to the guest directly in the first branch that uses "kvm_x86_ops.set_irq(vcpu)".
This is only true for pure-userspace irqchip. For split irqchip, the "place to stash" the interrupt is vcpu->arch.pending_external_vector.
For pure-userspace irqchip, KVM_INTERRUPT only cares about being able to stash the interrupt in vcpu->arch.interrupt.injected. It is indeed wrong for userspace to call KVM_INTERRUPT if the vCPU is not ready for interrupt injection, but KVM_INTERRUPT does not return an error.
Ignoring the fact that this would be incorrect use of the API, are you saying that the incorrect injection was not possible before this patch?
Paolo
On Tue, Apr 13, 2021 at 8:15 PM Paolo Bonzini pbonzini@redhat.com wrote:
This is only true for pure-userspace irqchip. For split irqchip, the "place to stash" the interrupt is vcpu->arch.pending_external_vector.
For pure-userspace irqchip, KVM_INTERRUPT only cares about being able to stash the interrupt in vcpu->arch.interrupt.injected. It is indeed wrong for userspace to call KVM_INTERRUPT if the vCPU is not ready for interrupt injection, but KVM_INTERRUPT does not return an error.
Thanks for the reply.
May I ask what the correct/practical way is to use the KVM_INTERRUPT ABI with a pure-userspace irqchip?
gVisor is indeed a pure-userspace irqchip; it calls KVM_INTERRUPT when kvm_run->ready_for_interrupt_injection=1 (along with other conditions unrelated to our discussion).
https://github.com/google/gvisor/blob/a9441aea2780da8c93da1c73da860219f98438...
If kvm_run->ready_for_interrupt_injection=1 while an exception is pending or EFLAGS.IF=0, it would be unexpected for gVisor.
Thanks Lai
On 14/04/21 04:28, Lai Jiangshan wrote:
If kvm_run->ready_for_interrupt_injection=1 while an exception is pending or EFLAGS.IF=0, it would be unexpected for gVisor.
Not with EFLAGS.IF=0. For pending exception, there is code to handle it in inject_pending_event:
	...
	if (!vcpu->arch.exception.pending) {
		if (vcpu->arch.nmi_injected) {
			static_call(kvm_x86_set_nmi)(vcpu);
			can_inject = false;
		} else if (vcpu->arch.interrupt.injected) {
			static_call(kvm_x86_set_irq)(vcpu);
			can_inject = false;
		}
	}
	...
	if (vcpu->arch.exception.pending) {
		...
		can_inject = false;
	}
	// this is vcpu->arch.interrupt.injected for userspace LAPIC
	if (kvm_cpu_has_injectable_intr(vcpu)) {
		r = can_inject ? static_call(kvm_x86_interrupt_allowed)(vcpu, true) : -EBUSY;
		if (r < 0)
			goto busy;
		...
	}
so what happens is:
- the interrupt will not be injected before the exception
- KVM will schedule an immediate vmexit to inject the interrupt as well
- if (as is likely) the exception has turned off interrupts, the next call to inject_pending_event will reach static_call(kvm_x86_enable_irq_window) and the interrupt will only be injected when IF becomes 1 again.
Paolo
On Thu, Apr 15, 2021 at 12:58 AM Paolo Bonzini pbonzini@redhat.com wrote:
Not with EFLAGS.IF=0. For pending exception, there is code to handle it in inject_pending_event:
Thanks for the reply. (I have rearranged your summary here.)
so what happens is:
- the interrupt will not be injected before the exception
- KVM will schedule an immediate vmexit to inject the interrupt as well
- if (as is likely) the exception has turned off interrupts, the next call to inject_pending_event will reach static_call(kvm_x86_enable_irq_window) and the interrupt will only be injected when IF becomes 1 again.
The next call to inject_pending_event() will reach here AT FIRST with vcpu->arch.exception.injected==false and vcpu->arch.exception.pending==false
	...
	if (!vcpu->arch.exception.pending) {
		if (vcpu->arch.nmi_injected) {
			static_call(kvm_x86_set_nmi)(vcpu);
			can_inject = false;
		} else if (vcpu->arch.interrupt.injected) {
			static_call(kvm_x86_set_irq)(vcpu);
			can_inject = false;
And it comes here with vcpu->arch.interrupt.injected true, because there is an interrupt queued by KVM_INTERRUPT for the pure-userspace irqchip. It then injects the interrupt without checking EFLAGS.IF.
My question is: what stops the next call to inject_pending_event() from reaching here when KVM_INTERRUPT is called with an exception pending?
Or, what makes kvm_run->ready_for_interrupt_injection zero while an exception is pending, so that userspace is not allowed to call KVM_INTERRUPT?
		}
	}
	...
	if (vcpu->arch.exception.pending) {
		...
		can_inject = false;
	}
	// this is vcpu->arch.interrupt.injected for userspace LAPIC
	if (kvm_cpu_has_injectable_intr(vcpu)) {
		r = can_inject ? static_call(kvm_x86_interrupt_allowed)(vcpu, true) : -EBUSY;
		if (r < 0)
			goto busy;
		...
	}
Paolo
On 15/04/21 02:59, Lai Jiangshan wrote:
The next call to inject_pending_event() will reach here AT FIRST with vcpu->arch.exception.injected==false and vcpu->arch.exception.pending==false
	...
	if (!vcpu->arch.exception.pending) {
		if (vcpu->arch.nmi_injected) {
			static_call(kvm_x86_set_nmi)(vcpu);
			can_inject = false;
		} else if (vcpu->arch.interrupt.injected) {
			static_call(kvm_x86_set_irq)(vcpu);
			can_inject = false;
And it comes here with vcpu->arch.interrupt.injected true, because there is an interrupt queued by KVM_INTERRUPT for the pure-userspace irqchip. It then injects the interrupt without checking EFLAGS.IF.
Ok, understood now. Yeah, that could be a problem for userspace irqchip so we should switch it to use pending_external_vector instead. Are you going to write the patch or should I?
Thanks!
Paolo
On Thu, Apr 15, 2021 at 2:07 PM Paolo Bonzini pbonzini@redhat.com wrote:
Ok, understood now. Yeah, that could be a problem for userspace irqchip so we should switch it to use pending_external_vector instead. Are you going to write the patch or should I?
I wish you would do it. I haven't figured out how to write a clean test for it and confirm it upstream, but I will backport your patch and test it.
My fix changes the behavior back to before 664f8e26b00c7, where arch.exception.pending=true would prevent ready_for_interrupt_injection from being non-zero, so that KVM_INTERRUPT keeps the original behavior of injecting the IRQ into the guest immediately. (Userspace may regret an IRQ injected at the wrong moment, since it has no way to revise or cancel it.) But your fix will unify the behavior of all kinds of irqchips.
Thanks Lai
On 12/04/21 23:43, Sean Christopherson wrote:
where kvm_arch_interrupt_allowed() checks EFLAGS.IF (and an edge case related to nested virtualization). KVM also captures EFLAGS.IF in vcpu->run->if_flag. For whatever reason, QEMU checks both vcpu->run flags before injecting an IRQ, maybe to handle a case where QEMU itself clears EFLAGS.IF?
It's mostly obsolete code (that will be deprecated in the next version and removed in about a year), so I wouldn't read much into it. if_flag itself is obsolete; it is not provided by SEV-ES, and it is a useless subset of ready_for_interrupt_injection.
Paolo