4.16-stable review patch. If anyone has any objections, please let me know.
------------------
From: Andre Przywara <andre.przywara@arm.com>
[ Upstream commit bf9a41377d14f565764022470e14aae72559589a ]
When vgic_prune_ap_list() finds an interrupt that needs to be migrated to a new VCPU, we should notify this VCPU of the pending interrupt, since it requires immediate action. Kick this VCPU once we have added the new IRQ to the list, but only after dropping the locks.
Reported-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 virt/kvm/arm/vgic/vgic.c | 8 ++++++++
 1 file changed, 8 insertions(+)
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -599,6 +599,7 @@ retry:
 
 	list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) {
 		struct kvm_vcpu *target_vcpu, *vcpuA, *vcpuB;
+		bool target_vcpu_needs_kick = false;
 
 		spin_lock(&irq->irq_lock);
 
@@ -669,11 +670,18 @@ retry:
 			list_del(&irq->ap_list);
 			irq->vcpu = target_vcpu;
 			list_add_tail(&irq->ap_list, &new_cpu->ap_list_head);
+			target_vcpu_needs_kick = true;
 		}
 
 		spin_unlock(&irq->irq_lock);
 		spin_unlock(&vcpuB->arch.vgic_cpu.ap_list_lock);
 		spin_unlock_irqrestore(&vcpuA->arch.vgic_cpu.ap_list_lock, flags);
+
+		if (target_vcpu_needs_kick) {
+			kvm_make_request(KVM_REQ_IRQ_PENDING, target_vcpu);
+			kvm_vcpu_kick(target_vcpu);
+		}
+
 		goto retry;
 	}
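
For anyone reviewing the locking change in isolation, below is a minimal, compilable userspace sketch of the same defer-the-kick pattern. The names here (fake_vcpu, kick_vcpu, migrate_irq) are illustrative stand-ins, not the real KVM/vgic API: the point is only that the "needs kick" decision is recorded while the ap_list locks are held, and the notification itself happens only after both locks have been released.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_vcpu {
	int id;
	pthread_mutex_t ap_list_lock;
};

/*
 * Stand-in for kvm_make_request() + kvm_vcpu_kick(); in this pattern it
 * must only be called once no ap_list_lock is held.
 */
static void kick_vcpu(struct fake_vcpu *vcpu)
{
	printf("kicking vcpu %d\n", vcpu->id);
}

static void migrate_irq(struct fake_vcpu *src, struct fake_vcpu *dst)
{
	bool target_vcpu_needs_kick = false;

	pthread_mutex_lock(&src->ap_list_lock);
	pthread_mutex_lock(&dst->ap_list_lock);

	/* ...move the interrupt from src's ap_list to dst's ap_list... */

	/* Remember that dst needs a kick, but do not kick it yet. */
	target_vcpu_needs_kick = true;

	pthread_mutex_unlock(&dst->ap_list_lock);
	pthread_mutex_unlock(&src->ap_list_lock);

	/* Both locks are dropped, so it is now safe to notify dst. */
	if (target_vcpu_needs_kick)
		kick_vcpu(dst);
}

int main(void)
{
	struct fake_vcpu vcpu_a = { 0, PTHREAD_MUTEX_INITIALIZER };
	struct fake_vcpu vcpu_b = { 1, PTHREAD_MUTEX_INITIALIZER };

	migrate_irq(&vcpu_a, &vcpu_b);
	return 0;
}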