Hi Greg,
Please consider these two patches for the 6.6 kernel. These patches are unmodified versions of the corresponding upstream commits.
Thank you,
Bart.
David Stevens (1): genirq/cpuhotplug: Skip suspended interrupts when restoring affinity
Dongli Zhang (1): genirq/cpuhotplug: Retry with cpu_online_mask when migration fails
 kernel/irq/cpuhotplug.c | 27 +++++++++++++++++++++++---
 kernel/irq/manage.c     | 12 ++++++++----
 2 files changed, 32 insertions(+), 7 deletions(-)
From: David Stevens <stevensd@chromium.org>
commit a60dd06af674d3bb76b40da5d722e4a0ecefe650 upstream.
irq_restore_affinity_of_irq() restarts managed interrupts unconditionally when the first CPU in the affinity mask comes online. That's correct during normal hotplug operations, but not when resuming from S3 because the drivers are not resumed yet and interrupt delivery is not expected by them.
Skip the startup of suspended interrupts and let resume_device_irqs() deal with restoring them. This ensures that irqs are not delivered to drivers during the noirq phase of resuming from S3, after non-boot CPUs are brought back online.
Signed-off-by: David Stevens <stevensd@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20240424090341.72236-1-stevensd@chromium.org
---
 kernel/irq/cpuhotplug.c | 11 +++++++---
 kernel/irq/manage.c     | 12 ++++++++----
 2 files changed, 16 insertions(+), 7 deletions(-)
diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 5ecd072a34fe..367e15a2f570 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -195,10 +195,15 @@ static void irq_restore_affinity_of_irq(struct irq_desc *desc, unsigned int cpu)
 	    !irq_data_get_irq_chip(data) || !cpumask_test_cpu(cpu, affinity))
 		return;
 
-	if (irqd_is_managed_and_shutdown(data)) {
-		irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
+	/*
+	 * Don't restore suspended interrupts here when a system comes back
+	 * from S3. They are reenabled via resume_device_irqs().
+	 */
+	if (desc->istate & IRQS_SUSPENDED)
 		return;
-	}
+
+	if (irqd_is_managed_and_shutdown(data))
+		irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
 
 	/*
 	 * If the interrupt can only be directed to a single target
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index a054cd5ec08b..8a936c1ffad3 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -796,10 +796,14 @@ void __enable_irq(struct irq_desc *desc)
 		irq_settings_set_noprobe(desc);
 		/*
 		 * Call irq_startup() not irq_enable() here because the
-		 * interrupt might be marked NOAUTOEN. So irq_startup()
-		 * needs to be invoked when it gets enabled the first
-		 * time. If it was already started up, then irq_startup()
-		 * will invoke irq_enable() under the hood.
+		 * interrupt might be marked NOAUTOEN so irq_startup()
+		 * needs to be invoked when it gets enabled the first time.
+		 * This is also required when __enable_irq() is invoked for
+		 * a managed and shutdown interrupt from the S3 resume
+		 * path.
+		 *
+		 * If it was already started up, then irq_startup() will
+		 * invoke irq_enable() under the hood.
 		 */
 		irq_startup(desc, IRQ_RESEND, IRQ_START_FORCE);
 		break;
On Wed, Aug 14, 2024 at 11:28:25AM -0700, Bart Van Assche wrote:
> From: David Stevens <stevensd@chromium.org>
>
> commit a60dd06af674d3bb76b40da5d722e4a0ecefe650 upstream.
>
> irq_restore_affinity_of_irq() restarts managed interrupts unconditionally
> when the first CPU in the affinity mask comes online. That's correct during
> normal hotplug operations, but not when resuming from S3 because the
> drivers are not resumed yet and interrupt delivery is not expected by them.
>
> Skip the startup of suspended interrupts and let resume_device_irqs() deal
> with restoring them. This ensures that irqs are not delivered to drivers
> during the noirq phase of resuming from S3, after non-boot CPUs are
> brought back online.
>
> Signed-off-by: David Stevens <stevensd@chromium.org>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Link: https://lore.kernel.org/r/20240424090341.72236-1-stevensd@chromium.org
> ---
>  kernel/irq/cpuhotplug.c | 11 +++++++---
>  kernel/irq/manage.c     | 12 ++++++++----
>  2 files changed, 16 insertions(+), 7 deletions(-)
When forwarding patches on from others, you always have to sign off on them :(
thanks,
greg k-h
On 8/15/24 1:35 AM, Greg Kroah-Hartman wrote:
> When forwarding patches on from others, you always have to sign off on them :(
I will keep this in mind for the future. If the patch can still be modified, feel free to add my signed-off-by to both patches in this series:
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Thanks,
Bart.
From: Dongli Zhang <dongli.zhang@oracle.com>
commit 88d724e2301a69c1ab805cd74fc27aa36ae529e0 upstream.
When a CPU goes offline, the interrupts affine to that CPU are re-configured.
Managed interrupts undergo either migration to other CPUs or shutdown if all CPUs listed in the affinity are offline. The migration of managed interrupts is guaranteed on x86 because there are interrupt vectors reserved.
Regular interrupts are migrated to a still online CPU in the affinity mask or, if the mask contains no online CPU, to any online CPU.
This works as long as the still online CPUs in the affinity mask have interrupt vectors available, but in case that none of those CPUs has a vector available the migration fails and the device interrupt becomes stale.
This is not any different from the case where the affinity mask does not contain any online CPU, but there is no fallback operation for this.
Instead of giving up, retry the migration attempt with the online CPU mask if the interrupt is not managed, as managed interrupts cannot be affected by this problem.
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20240423073413.79625-1-dongli.zhang@oracle.com
---
 kernel/irq/cpuhotplug.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c
index 367e15a2f570..eb8628390156 100644
--- a/kernel/irq/cpuhotplug.c
+++ b/kernel/irq/cpuhotplug.c
@@ -130,6 +130,22 @@ static bool migrate_one_irq(struct irq_desc *desc)
 	 * CPU.
 	 */
 	err = irq_do_set_affinity(d, affinity, false);
+
+	/*
+	 * If there are online CPUs in the affinity mask, but they have no
+	 * vectors left to make the migration work, try to break the
+	 * affinity by migrating to any online CPU.
+	 */
+	if (err == -ENOSPC && !irqd_affinity_is_managed(d) && affinity != cpu_online_mask) {
+		pr_debug("IRQ%u: set affinity failed for %*pbl, re-try with online CPUs\n",
+			 d->irq, cpumask_pr_args(affinity));
+
+		affinity = cpu_online_mask;
+		brokeaff = true;
+
+		err = irq_do_set_affinity(d, affinity, false);
+	}
+
 	if (err) {
 		pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n",
 				    d->irq, err);
On Wed, Aug 14, 2024 at 11:28:24AM -0700, Bart Van Assche wrote:
> Hi Greg,
>
> Please consider these two patches for the 6.6 kernel. These patches are
> unmodified versions of the corresponding upstream commits.
Now applied, but what about older kernels as well?
thanks,
greg k-h
On 8/15/24 1:37 AM, Greg Kroah-Hartman wrote:
> On Wed, Aug 14, 2024 at 11:28:24AM -0700, Bart Van Assche wrote:
> > Hi Greg,
> >
> > Please consider these two patches for the 6.6 kernel. These patches are
> > unmodified versions of the corresponding upstream commits.
>
> Now applied, but what about older kernels as well?
Is anyone using suspend/resume in combination with managed interrupts on older stable kernels? I submitted this series for the 6.6 stable kernel because I want these patches to be merged into the Android 6.6 kernel branches. I'm not aware of any use cases for managed interrupts on older Android kernels.
Thanks,
Bart.