The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id, to <stable@vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 7abab1605139bc41442864c18f9573440f7ca105 Mon Sep 17 00:00:00 2001
From: Anssi Hannula <anssi.hannula@bitwise.fi>
Date: Fri, 15 Feb 2019 18:45:08 +0200
Subject: [PATCH] serial: uartps: Fix stuck ISR if RX disabled with non-empty
FIFO
If RX is disabled while there are still unprocessed bytes in the RX
FIFO, cdns_uart_handle_rx() called from the interrupt handler will get
stuck in the receive loop, as read bytes will not be removed from the
RX FIFO and the CDNS_UART_SR_RXEMPTY bit will never get set.
Avoid the stuck handler by checking first if RX is disabled. port->lock
protects against race with RX-disabling functions.
This HW behavior was mentioned by Nathan Rossi in 43e98facc4a3 ("tty:
xuartps: Fix RX hang, and TX corruption in termios call") which fixed a
similar issue in cdns_uart_set_termios().
The behavior can also be easily verified by e.g. setting
CDNS_UART_CR_RX_DIS at the beginning of cdns_uart_handle_rx() - the
following loop will then get stuck.
Resetting the FIFO using RXRST would not set RXEMPTY either so simply
issuing a reset after RX-disable would not work.
I observe this frequently on a ZynqMP board during heavy RX load at 1M
baudrate when the reader process exits and thus RX gets disabled.
Fixes: 61ec9016988f ("tty/serial: add support for Xilinx PS UART")
Signed-off-by: Anssi Hannula <anssi.hannula@bitwise.fi>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
index 094f2958cb2b..ee9f18c52d29 100644
--- a/drivers/tty/serial/xilinx_uartps.c
+++ b/drivers/tty/serial/xilinx_uartps.c
@@ -364,7 +364,13 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
cdns_uart_handle_tx(dev_id);
isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
}
- if (isrstatus & CDNS_UART_IXR_RXMASK)
+
+ /*
+ * Skip RX processing if RX is disabled as RXEMPTY will never be set
+ * as read bytes will not be removed from the FIFO.
+ */
+ if (isrstatus & CDNS_UART_IXR_RXMASK &&
+ !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS))
cdns_uart_handle_rx(dev_id, isrstatus);
spin_unlock(&port->lock);
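To make the failure mode concrete, here is a minimal userspace C model
of the hazard the patch closes. The register bits, the fake_uart
struct, and the helper names below are illustrative stand-ins, not the
real cdns_uart layout: the point is only that a drain loop keyed on an
"RX empty" flag spins forever once reads stop removing bytes, and that
checking the disable bit first avoids entering the loop at all.

/*
 * Hedged model of the stuck-ISR hazard; not driver code.
 * Build: cc -std=c99 -o uart_model uart_model.c
 */
#include <stdio.h>

#define SR_RXEMPTY	(1u << 1)	/* made-up bit positions */
#define CR_RX_DIS	(1u << 3)

struct fake_uart {
	unsigned int cr;	/* control register */
	unsigned int sr;	/* status register */
	int fifo_level;		/* bytes pending in the RX FIFO */
};

/* Model: a FIFO read only drains bytes while RX is enabled. */
static void fifo_read(struct fake_uart *u)
{
	if (!(u->cr & CR_RX_DIS) && u->fifo_level > 0)
		u->fifo_level--;
	if (u->fifo_level == 0)
		u->sr |= SR_RXEMPTY;
}

static void handle_rx(struct fake_uart *u)
{
	int guard = 0;

	/* Without an RX_DIS check, this loop never terminates. */
	while (!(u->sr & SR_RXEMPTY)) {
		fifo_read(u);
		if (++guard > 1000) {	/* stand-in for the stuck ISR */
			puts("stuck: RXEMPTY never set");
			return;
		}
	}
	puts("FIFO drained");
}

int main(void)
{
	struct fake_uart u = { .cr = CR_RX_DIS, .sr = 0, .fifo_level = 4 };

	/* The fix's pattern: skip RX processing while RX is disabled. */
	if (!(u.cr & CR_RX_DIS))
		handle_rx(&u);
	else
		puts("RX disabled: skipping RX handling");
	return 0;
}

Dropping the guard and calling handle_rx() unconditionally reproduces
the endless loop, mirroring the verification trick described in the
commit message.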
This is a note to let you know that I've just added the patch titled
binder: fix race between munmap() and direct reclaim
to my char-misc git tree which can be found at
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
in the char-misc-linus branch.
The patch will show up in the next release of the linux-next tree
(usually sometime within the next 24 hours during the week.)
The patch will hopefully also be merged in Linus's tree for the
next -rc kernel release.
If you have any questions about this process, please let me know.
From 5cec2d2e5839f9c0fec319c523a911e0a7fd299f Mon Sep 17 00:00:00 2001
From: Todd Kjos <tkjos@android.com>
Date: Fri, 1 Mar 2019 15:06:06 -0800
Subject: binder: fix race between munmap() and direct reclaim
An munmap() on a binder device causes binder_vma_close() to be called,
which clears the alloc->vma pointer.
If direct reclaim causes binder_alloc_free_page() to be called, there
is a race where alloc->vma is read into a local vma pointer and then
used later after the mm->mmap_sem is acquired. This can result in
calling zap_page_range() with an invalid vma which manifests as a
use-after-free in zap_page_range().
The fix is to check alloc->vma after acquiring the mmap_sem (which we
were acquiring anyway) and skip zap_page_range() if it has changed
to NULL.
Signed-off-by: Todd Kjos <tkjos@google.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
drivers/android/binder_alloc.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 6389467670a0..195f120c4e8c 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -927,14 +927,13 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
index = page - alloc->pages;
page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
+
+ mm = alloc->vma_vm_mm;
+ if (!mmget_not_zero(mm))
+ goto err_mmget;
+ if (!down_write_trylock(&mm->mmap_sem))
+ goto err_down_write_mmap_sem_failed;
vma = binder_alloc_get_vma(alloc);
- if (vma) {
- if (!mmget_not_zero(alloc->vma_vm_mm))
- goto err_mmget;
- mm = alloc->vma_vm_mm;
- if (!down_read_trylock(&mm->mmap_sem))
- goto err_down_write_mmap_sem_failed;
- }
list_lru_isolate(lru, item);
spin_unlock(lock);
@@ -945,10 +944,9 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
zap_page_range(vma, page_addr, PAGE_SIZE);
trace_binder_unmap_user_end(alloc, index);
-
- up_read(&mm->mmap_sem);
- mmput(mm);
}
+ up_write(&mm->mmap_sem);
+ mmput(mm);
trace_binder_unmap_kernel_start(alloc, index);
--
2.21.0
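The shape of the fix above is a general one: take the lock first, then
re-read the shared pointer under it, and skip the work if it has become
NULL. A minimal pthreads sketch of that pattern follows; all names here
are illustrative, and this is not binder code.

/*
 * Hedged sketch: check a shared pointer only after taking the lock
 * that the clearing path also holds.
 * Build: cc -std=c99 -pthread -o reclaim_model reclaim_model.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
static int *mapping;			/* stands in for alloc->vma */

/* munmap() path analogue: clears the pointer under the lock. */
static void close_mapping(void)
{
	pthread_mutex_lock(&map_lock);
	free(mapping);
	mapping = NULL;
	pthread_mutex_unlock(&map_lock);
}

/* reclaim path analogue: take the lock FIRST, then re-read. */
static void reclaim_page(void)
{
	pthread_mutex_lock(&map_lock);
	if (mapping)			/* may have been cleared */
		*mapping = 0;		/* safe: lock still held */
	else
		puts("mapping gone, skipping zap");
	pthread_mutex_unlock(&map_lock);
}

int main(void)
{
	mapping = malloc(sizeof(*mapping));
	reclaim_page();		/* pointer still valid: work proceeds */
	close_mapping();
	reclaim_page();		/* pointer cleared: work is skipped */
	return 0;
}

Reading the pointer before taking the lock, as the old code did with
alloc->vma, leaves a window in which the clearing path can free the
object between the check and the use.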
From: Eric Biggers <ebiggers@google.com>
commit 12455e320e19e9cc7ad97f4ab89c280fe297387c upstream.
The arm64 NEON bit-sliced implementation of AES-CTR fails the improved
skcipher tests because it sometimes produces the wrong ciphertext. The
bug is that the final keystream block isn't returned from the assembly
code when the number of non-final blocks is zero. This can happen if
the input data ends a few bytes after a page boundary. In this case the
last bytes get "encrypted" by XOR'ing them with uninitialized memory.
Fix the assembly code to return the final keystream block when needed.
Fixes: 88a3f582bea9 ("crypto: arm64/aes - don't use IV buffer to return final keystream block")
Cc: <stable@vger.kernel.org> # v4.11+
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
Please apply to 4.14-stable. This backport resolves conflicts caused by
"crypto: arm64/aes-bs - yield NEON after every block of input" not
being present in 4.14; that commit has other dependencies, so it is not
being backported itself.
Tested using the crypto self-tests from v5.1-rc1 backported to 4.14.
"rfc3686(ctr-aes-neonbs)" now passes the tests.
arch/arm64/crypto/aes-neonbs-core.S | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
index ca0472500433..3b18e3e79531 100644
--- a/arch/arm64/crypto/aes-neonbs-core.S
+++ b/arch/arm64/crypto/aes-neonbs-core.S
@@ -940,7 +940,7 @@ CPU_LE( rev x8, x8 )
8: next_ctr v0
cbnz x4, 99b
-0: st1 {v0.16b}, [x5]
+ st1 {v0.16b}, [x5]
ldp x29, x30, [sp], #16
ret
@@ -948,6 +948,9 @@ CPU_LE( rev x8, x8 )
* If we are handling the tail of the input (x6 != NULL), return the
* final keystream block back to the caller.
*/
+0: cbz x6, 8b
+ st1 {v0.16b}, [x6]
+ b 8b
1: cbz x6, 8b
st1 {v1.16b}, [x6]
b 8b
--
2.21.0
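Why a missing final keystream block corrupts the tail can be shown with
a toy CTR implementation. The keystream generator below is a made-up
stand-in for AES, and the emit_final_block flag models whether the
assembly writes the last keystream block back to the caller; nothing
here is the actual kernel code.

/*
 * Hedged illustration: in CTR mode the final partial block is
 * encrypted by XOR'ing it with the last keystream block. If that
 * block is never produced, the tail gets XOR'ed with stale bytes.
 * Build: cc -std=c99 -o ctr_model ctr_model.c
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLK 16

/* Toy keystream for block n; NOT cryptographic, NOT AES. */
static void keystream_block(uint64_t n, uint8_t out[BLK])
{
	for (int i = 0; i < BLK; i++)
		out[i] = (uint8_t)(n * 131 + i * 7 + 0x5a);
}

static void ctr_encrypt(const uint8_t *in, uint8_t *out, size_t len,
			int emit_final_block)
{
	uint8_t ks[BLK];
	uint64_t n = 0;
	size_t off = 0;

	while (len - off >= BLK) {		/* full blocks */
		keystream_block(n++, ks);
		for (int i = 0; i < BLK; i++)
			out[off + i] = in[off + i] ^ ks[i];
		off += BLK;
	}
	if (off < len) {			/* partial final block */
		if (emit_final_block)
			keystream_block(n, ks);
		else
			memset(ks, 0xee, BLK);	/* "uninitialized" */
		for (size_t i = 0; off + i < len; i++)
			out[off + i] = in[off + i] ^ ks[i];
	}
}

int main(void)
{
	const uint8_t msg[5] = "tail";	/* shorter than one block */
	uint8_t good[5], bad[5];

	ctr_encrypt(msg, good, sizeof(msg), 1);
	ctr_encrypt(msg, bad, sizeof(msg), 0);
	printf("tails differ: %s\n",
	       memcmp(good, bad, sizeof(msg)) ? "yes (corrupted)" : "no");
	return 0;
}

With zero non-final blocks, as in the bug, everything hinges on that
one final keystream block; the fixed assembly stores it via the new
"0:" label before branching back.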
The patch titled
Subject: mm/slab.c: protect cache_reap() against CPU and memory hot plug operations
has been removed from the -mm tree. Its filename was
mm-slab-protect-cache_reap-against-cpu-and-memory-hot-plug-operations.patch
This patch was dropped because it was withdrawn
------------------------------------------------------
From: Laurent Dufour <ldufour@linux.ibm.com>
Subject: mm/slab.c: protect cache_reap() against CPU and memory hot plug operations
Commit 95402b382901 ("cpu-hotplug: replace per-subsystem mutexes with
get_online_cpus()") removed the CPU_LOCK_ACQUIRE operation, which was
used to grab the cache_chain_mutex lock protecting cache_reap() against
CPU hotplug operations.
Later, commit 18004c5d4084 ("mm, sl[aou]b: Use a common mutex
definition") renamed cache_chain_mutex to slab_mutex, but this did not
restore the missing cache_reap() protection against CPU hotplug
operations.
Here we stop the per-CPU worker while holding the slab_mutex, to ensure
that cache_reap() is not running behind our back and will not be
triggered again for this CPU.
This patch fixes the race that leads to SLAB data corruption when CPU
hotplug is triggered. We hit it while doing partition migration on
PowerVM, which reconfigures CPUs through the CPU hotplug mechanism.
This fix covers kernels containing commit 6731d4f12315 ("slab: Convert
to hotplug state machine"), i.e. v4.9.1 and later; earlier kernels need
a slightly different patch.
Link: http://lkml.kernel.org/r/20190311191701.24325-1-ldufour@linux.ibm.com
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
--- a/mm/slab.c~mm-slab-protect-cache_reap-against-cpu-and-memory-hot-plug-operations
+++ a/mm/slab.c
@@ -1103,6 +1103,7 @@ static int slab_online_cpu(unsigned int
static int slab_offline_cpu(unsigned int cpu)
{
+ mutex_lock(&slab_mutex);
/*
* Shutdown cache reaper. Note that the slab_mutex is held so
* that if cache_reap() is invoked it cannot do anything
@@ -1112,6 +1113,7 @@ static int slab_offline_cpu(unsigned int
cancel_delayed_work_sync(&per_cpu(slab_reap_work, cpu));
/* Now the cache_reaper is guaranteed to be not running. */
per_cpu(slab_reap_work, cpu).work.func = NULL;
+ mutex_unlock(&slab_mutex);
return 0;
}
_
Patches currently in -mm which might be from ldufour@linux.ibm.com are
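For reference, the locking pattern behind the slab fix above can be
modeled in userspace: the periodic reaper only proceeds if it can
trylock the mutex, so an offline path that holds that mutex while
shutting the reaper down knows no new reap can start behind its back.
Below is a hedged sketch with pthreads standing in for the kernel's
workqueue and slab_mutex; none of these names are the kernel's.

/*
 * Hedged model of "worker trylocks, hot-unplug path holds the lock".
 * Build: cc -std=c99 -pthread -o reap_model reap_model.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t reap_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool reap_enabled = true;

/* cache_reap() analogue: back off if the mutex is contended. */
static void cache_reap_model(void)
{
	if (pthread_mutex_trylock(&reap_mutex) != 0) {
		puts("reap: mutex held elsewhere, backing off");
		return;
	}
	if (reap_enabled)
		puts("reap: scanning caches");
	else
		puts("reap: disabled, nothing to do");
	pthread_mutex_unlock(&reap_mutex);
}

/* slab_offline_cpu() analogue: disable reaping under the mutex. */
static void offline_cpu_model(void)
{
	pthread_mutex_lock(&reap_mutex);
	/* With the lock held, the reaper cannot make progress. */
	reap_enabled = false;
	cache_reap_model();	/* demonstrates the trylock backing off */
	pthread_mutex_unlock(&reap_mutex);
}

int main(void)
{
	cache_reap_model();	/* normal periodic reap */
	offline_cpu_model();	/* hotplug: reaping safely fenced off */
	cache_reap_model();	/* reaping now disabled */
	return 0;
}

This only works because the real cache_reap() takes slab_mutex with
mutex_trylock() and gives up on contention; a blocking lock in the
worker plus cancel_delayed_work_sync() under the same mutex could
deadlock.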