On Sun, 08 Sep 2024 14:36:37 +0100, Sasha Levin sashal@kernel.org wrote:
This is a note to let you know that I've just added the patch titled
irqchip/gic-v4: Make sure a VPE is locked when VMAPP is issued
to the 6.6-stable tree which can be found at: http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git%3Ba=su...
The filename of the patch is: irqchip-gic-v4-make-sure-a-vpe-is-locked-when-vmapp-.patch and it can be found in the queue-6.6 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree, please let stable@vger.kernel.org know about it.
commit 1a232324773145ff7ce59b6a1b52b3247223f9d4
Author: Marc Zyngier <maz@kernel.org>
Date:   Fri Jul 5 10:31:55 2024 +0100

    irqchip/gic-v4: Make sure a VPE is locked when VMAPP is issued

[ Upstream commit a84a07fa3100d7ad46a3d6882af25a3df9c9e7e3 ]

In order to make sure that vpe->col_idx is correctly sampled when a VMAPP command is issued, the vpe_lock must be held for the VPE. This is now possible since the introduction of the per-VM vmapp_lock, which can be taken before vpe_lock in the correct locking order.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nianyao Tang <tangnianyao@huawei.com>
Link: https://lore.kernel.org/r/20240705093155.871070-4-maz@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index e25dea0e50c7..1e0f0e1bf481 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1804,7 +1804,9 @@ static void its_map_vm(struct its_node *its, struct its_vm *vm)
 		for (i = 0; i < vm->nr_vpes; i++) {
 			struct its_vpe *vpe = vm->vpes[i];
 
-			its_send_vmapp(its, vpe, true);
+			scoped_guard(raw_spinlock, &vpe->vpe_lock)
+				its_send_vmapp(its, vpe, true);
+
 			its_send_vinvall(its, vpe);
 		}
 	}
@@ -1825,8 +1827,10 @@ static void its_unmap_vm(struct its_node *its, struct its_vm *vm)
 	if (!--vm->vlpi_count[its->list_nr]) {
 		int i;
 
-		for (i = 0; i < vm->nr_vpes; i++)
+		for (i = 0; i < vm->nr_vpes; i++) {
+			guard(raw_spinlock)(&vm->vpes[i]->vpe_lock);
 			its_send_vmapp(its, vm->vpes[i], false);
+		}
 	}
 
 	raw_spin_unlock_irqrestore(&vmovp_lock, flags);
No please.
Not only are you missing the essential part of the series (the patch introducing the per-VM lock that this change relies on), you are also missing the fixes that followed.
So please drop this patch from the 6.6 and 6.1 queues.
M.
On Sun, Sep 08, 2024 at 04:27:10PM +0100, Marc Zyngier wrote:
No please.
Not only are you missing the essential part of the series (the patch introducing the per-VM lock that this change relies on), you are also missing the fixes that followed.
So please drop this patch from the 6.6 and 6.1 queues.
Will do, thanks!