On Mon, Jun 4, 2018 at 8:33 AM, Thomas Gleixner <tglx(a)linutronix.de> wrote:
> The generic pending interrupt mechanism moves interrupts from the interrupt
> handler on the original target CPU to the new destination CPU. This is
> required for x86 and ia64 due to the way interrupt delivery and
> acknowledgment work when the interrupts are not remapped.
>
> However that update can fail for various reasons. Some of them are valid
> reasons to discard the pending update, but the case when the previous move
> has not been fully cleaned up is not a legitimate reason to fail.
>
> Check the return value of irq_do_set_affinity() for -EBUSY, which indicates
> a pending cleanup, and rearm the pending move in the irq descriptor so it's
> tried again when the next interrupt arrives.
>
> Fixes: 996c591227d9 ("x86/irq: Plug vector cleanup race")
> Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
> Cc: stable(a)vger.kernel.org
Tested-by: Song Liu <songliubraving(a)fb.com>
> ---
> kernel/irq/migration.c | 24 ++++++++++++++++++------
> 1 file changed, 18 insertions(+), 6 deletions(-)
>
> --- a/kernel/irq/migration.c
> +++ b/kernel/irq/migration.c
> @@ -38,17 +38,18 @@ bool irq_fixup_move_pending(struct irq_d
> void irq_move_masked_irq(struct irq_data *idata)
> {
> struct irq_desc *desc = irq_data_to_desc(idata);
> - struct irq_chip *chip = desc->irq_data.chip;
> + struct irq_data *data = &desc->irq_data;
> + struct irq_chip *chip = data->chip;
>
> - if (likely(!irqd_is_setaffinity_pending(&desc->irq_data)))
> + if (likely(!irqd_is_setaffinity_pending(data)))
> return;
>
> - irqd_clr_move_pending(&desc->irq_data);
> + irqd_clr_move_pending(data);
>
> /*
> * Paranoia: cpu-local interrupts shouldn't be calling in here anyway.
> */
> - if (irqd_is_per_cpu(&desc->irq_data)) {
> + if (irqd_is_per_cpu(data)) {
> WARN_ON(1);
> return;
> }
> @@ -73,9 +74,20 @@ void irq_move_masked_irq(struct irq_data
> * For correct operation this depends on the caller
> * masking the irqs.
> */
> - if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids)
> - irq_do_set_affinity(&desc->irq_data, desc->pending_mask, false);
> + if (cpumask_any_and(desc->pending_mask, cpu_online_mask) < nr_cpu_ids) {
> + int ret;
>
> + ret = irq_do_set_affinity(data, desc->pending_mask, false);
> + /*
> +                 * If there is a cleanup pending in the underlying
> + * vector management, reschedule the move for the next
> + * interrupt. Leave desc->pending_mask intact.
> + */
> + if (ret == -EBUSY) {
> + irqd_set_move_pending(data);
> + return;
> + }
> + }
> cpumask_clear(desc->pending_mask);
> }
>
>
>
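For context, a condensed sketch of the consumer side, i.e. how a rearmed
pending move is picked up on the next interrupt. This is modeled on
irq_move_irq() in kernel/irq/migration.c but simplified for illustration
(the disabled-interrupt checks are omitted), so treat it as a sketch rather
than the exact kernel code:

/*
 * Sketch: runs from the interrupt path on the old target CPU and
 * performs the deferred affinity change with the line masked, so
 * the device cannot raise the interrupt mid-update.
 */
void irq_move_irq_sketch(struct irq_data *idata)
{
	bool masked;

	if (likely(!irqd_is_setaffinity_pending(idata)))
		return;

	masked = irqd_irq_masked(idata);
	if (!masked)
		idata->chip->irq_mask(idata);

	irq_move_masked_irq(idata);	/* may rearm itself on -EBUSY */

	if (!masked)
		idata->chip->irq_unmask(idata);
}

With the patch above, an -EBUSY from irq_do_set_affinity() re-sets the
pending flag, so this path simply retries the move on a subsequent
interrupt instead of silently dropping it.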
On Mon, Jun 4, 2018 at 8:33 AM, Thomas Gleixner <tglx(a)linutronix.de> wrote:
> Several people observed the WARN_ON() in irq_matrix_free() which triggers
> when the caller tries to free a vector which is not in the allocation
> range. Song provided the trace information which made it possible to decode
> the root cause.
>
> The rework of the vector allocation mechanism failed to preserve a sanity
> check, which prevents setting a new target vector/CPU when the previous
> affinity change has not fully completed.
>
> As a result a half-finished affinity change can be overwritten, which can
> cause the leak of an irq descriptor pointer on the previous target CPU and
> a double enqueue of the hlist head into the cleanup lists of two or more
> CPUs. After one CPU has cleaned up its vector, the next CPU will invoke the
> cleanup handler with vector 0, which triggers the out of range warning in
> the matrix allocator.
>
> Prevent this by checking the apic_data of the interrupt: the
> move_in_progress flag must be false and the hlist node must be unhashed.
> Return -EBUSY otherwise.
>
> This prevents the damage and restores the behaviour before the vector
> allocation rework, but due to other changes in that area it also increases
> the chance that user space can observe -EBUSY. In theory this should be fine,
> but in practice not all user space tools handle -EBUSY correctly. Addressing
> that is not part of this fix; it will be addressed in follow-up patches.
>
> Fixes: 69cde0004a4b ("x86/vector: Use matrix allocator for vector assignment")
> Reported-by: Dmitry Safonov <0x7f454c46(a)gmail.com>
> Reported-by: Tariq Toukan <tariqt(a)mellanox.com>
> Reported-by: Song Liu <liu.song.a23(a)gmail.com>
> Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
> Cc: stable(a)vger.kernel.org
Thanks Thomas!
This patch alone fixes my test: ethtool -L in a loop.
I also ran the same test with the full set, and it works well.
Tested-by: Song Liu <songliubraving(a)fb.com>
> ---
> arch/x86/kernel/apic/vector.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> --- a/arch/x86/kernel/apic/vector.c
> +++ b/arch/x86/kernel/apic/vector.c
> @@ -235,6 +235,15 @@ static int allocate_vector(struct irq_da
> if (vector && cpu_online(cpu) && cpumask_test_cpu(cpu, dest))
> return 0;
>
> + /*
> + * Careful here. @apicd might either have move_in_progress set or
> + * be enqueued for cleanup. Assigning a new vector would either
> +        * leave a stale vector around on some CPU or, in case of a pending
> +        * cleanup, corrupt the hlist.
> + */
> + if (apicd->move_in_progress || !hlist_unhashed(&apicd->clist))
> + return -EBUSY;
> +
> vector = irq_matrix_alloc(vector_matrix, dest, resvd, &cpu);
> if (vector > 0)
> apic_update_vector(irqd, vector, cpu);
>
>
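To make the failure mode concrete, here is a rough sketch of the per-CPU
cleanup side (a hypothetical simplification of the vector cleanup handler
in arch/x86/kernel/apic/vector.c, not the exact code). Each CPU walks its
own cleanup hlist and releases the previous vector; if the same clist node
is enqueued on two CPUs' lists, the first walker frees prev_vector and
zeroes it, so the second walker hands vector 0 to irq_matrix_free():

static void vector_cleanup_sketch(struct hlist_head *cleanup_list)
{
	struct apic_chip_data *apicd;
	struct hlist_node *tmp;

	hlist_for_each_entry_safe(apicd, tmp, cleanup_list, clist) {
		unsigned int vector = apicd->prev_vector;

		hlist_del_init(&apicd->clist);
		/* A second walker sees vector == 0 here, which is out
		 * of range and trips the WARN_ON() in the allocator */
		irq_matrix_free(vector_matrix, smp_processor_id(),
				vector, false);
		apicd->prev_vector = 0;
	}
}

The -EBUSY guard above keeps allocate_vector() from installing a new vector
while such a cleanup is still queued.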
I decided to add Enric's commit, since it is also a bug fix, instead of
modifying Chris's commit.
Chris Chiu (1):
tpm: self test failure should not cause suspend to fail
Enric Balletbo i Serra (1):
tpm: do not suspend/resume if power stays on
drivers/char/tpm/tpm-chip.c | 12 ++++++++++++
drivers/char/tpm/tpm-interface.c | 7 +++++++
drivers/char/tpm/tpm.h | 1 +
3 files changed, 20 insertions(+)
--
v2: moved the check from tpm_of.c to tpm-chip.c, as in v4.4 the chip is
unreachable otherwise. I have now compile-tested with BuildRoot for the
Power architecture.
2.17.0
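For reference, a hedged sketch of how the two pieces fit together. The
property and flag names follow the mainline version of Enric's patch
("powered-while-suspended" and TPM_CHIP_FLAG_ALWAYS_POWERED); treat them as
illustrative for this v4.4 backport:

/* tpm-chip.c: at chip registration, record whether the platform
 * keeps the TPM powered across suspend (np assumed to be the
 * parent device's DT node) */
struct device_node *np = chip->dev.parent->of_node;

if (of_property_read_bool(np, "powered-while-suspended"))
	chip->flags |= TPM_CHIP_FLAG_ALWAYS_POWERED;

/* tpm-interface.c: tpm_pm_suspend() can then return early instead
 * of sending TPM_SaveState to a TPM that never loses power */
if (chip->flags & TPM_CHIP_FLAG_ALWAYS_POWERED)
	return 0;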
The patch titled
     Subject: mm/huge_memory.c: __split_huge_page() use atomic ClearPageDirty()
has been removed from the -mm tree.  Its filename was
     mm-huge_memoryc-__split_huge_page-use-atomic-clearpagedirty.patch

This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Hugh Dickins <hughd(a)google.com>
Subject: mm/huge_memory.c: __split_huge_page() use atomic ClearPageDirty()

Swapping load on huge=always tmpfs (with khugepaged tuned up to be very
eager, but I'm not sure that is relevant) soon hung uninterruptibly,
waiting for page lock in shmem_getpage_gfp()'s find_lock_entry(), most
often when "cp -a" was trying to write to a smallish file. Debug showed
that the page in question was not locked, and page->mapping NULL by now,
but page->index consistent with having been in a huge page before.
Reproduced in minutes on a 4.15 kernel, even with 4.17's 605ca5ede764
("mm/huge_memory.c: reorder operations in __split_huge_page_tail()") added
in; but it took hours to reproduce on a 4.17 kernel (no idea why).

The culprit proved to be the __ClearPageDirty() on tails beyond i_size in
__split_huge_page(): the non-atomic bit operation may have been safe when
4.8's baa355fd3314 ("thp: file pages support for split_huge_page()")
introduced it, but is liable to erase PageWaiters after 4.10's 62906027091f
("mm: add PageWaiters indicating tasks are waiting for a page bit").
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1805291841070.3197@eggly.anvils
Fixes: 62906027091f ("mm: add PageWaiters indicating tasks are waiting for a page bit")
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Cc: Konstantin Khlebnikov <khlebnikov(a)yandex-team.ru>
Cc: Nicholas Piggin <npiggin(a)gmail.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/huge_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -puN mm/huge_memory.c~mm-huge_memoryc-__split_huge_page-use-atomic-clearpagedirty mm/huge_memory.c
--- a/mm/huge_memory.c~mm-huge_memoryc-__split_huge_page-use-atomic-clearpagedirty
+++ a/mm/huge_memory.c
@@ -2431,7 +2431,7 @@ static void __split_huge_page(struct pag
__split_huge_page_tail(head, i, lruvec, list);
/* Some pages can be beyond i_size: drop them from page cache */
if (head[i].index >= end) {
- __ClearPageDirty(head + i);
+ ClearPageDirty(head + i);
__delete_from_page_cache(head + i, NULL);
if (IS_ENABLED(CONFIG_SHMEM) && PageSwapBacked(head))
shmem_uncharge(head->mapping->host, 1);
_
Patches currently in -mm which might be from hughd(a)google.com are