The TLB flush optimisation (a46cc7a90f: "powerpc/mm/radix: Improve TLB/PWC flushes") can result in random memory corruption: any concurrent page-table walk could end up with a use-after-free. This can be a problem even on UP, since mmu_gather is preemptible these days; an interrupt or a preempted task accessing user pages can stumble into the freed page if the hardware caches page directories.
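To make the failure mode concrete, below is a minimal userspace sketch (not kernel code; the names pgd and walker are purely illustrative) of the hazard: one thread frees a simulated page directory while a concurrent walker may still be dereferencing it. Without deferred freeing (what HAVE_RCU_TABLE_FREE provides in the kernel), the walker can touch freed memory.

	/* Illustrative race only: intentionally unsynchronised. */
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define ENTRIES 4

	static unsigned long *pgd;	/* simulated page directory */

	static void *walker(void *arg)
	{
		/* Concurrent "page-table walk": reads pgd unprotected. */
		unsigned long sum = 0;
		for (int i = 0; i < ENTRIES; i++)
			sum += pgd[i];	/* use-after-free if pgd was freed */
		printf("walker read sum=%lu\n", sum);
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pgd = calloc(ENTRIES, sizeof(*pgd));
		pthread_create(&t, NULL, walker, NULL);

		/*
		 * Teardown path: frees the directory immediately. The
		 * kernel fix instead defers the free until no walker can
		 * still see the table, and flushes the TLB/PWC first.
		 */
		free(pgd);

		pthread_join(&t, NULL);
		return 0;
	}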
This series is a backport of the fix sent upstream by Peter Zijlstra [1].
The first three patches are dependencies of the last patch ("asm-generic/tlb: avoid potential double flush"). If the performance impact of the double flush is considered trivial, the first three patches and the last patch can be dropped.
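As a rough sketch of what "avoid potential double flush" does upstream (simplified; field names taken from the upstream mmu_gather and they depend on the earlier patches in this series, so verify against the actual v4.19 backport): tlb_flush_mmu_tlbonly() returns early when nothing has been cleared or freed since the last flush, so an empty mmu_gather no longer triggers a redundant TLB invalidation.

	static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		/*
		 * Anything calling __tlb_adjust_range() also sets at
		 * least one of these bits, so an all-clear state means no
		 * new work was queued since the last flush: skip the
		 * redundant invalidation.
		 */
		if (!(tlb->freed_tables || tlb->cleared_ptes ||
		      tlb->cleared_pmds || tlb->cleared_puds ||
		      tlb->cleared_p4ds))
			return;

		tlb_flush(tlb);
		__tlb_reset_range(tlb);
	}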
This is only for v4.19 stable.
[1] https://patchwork.kernel.org/cover/11284843/
Aneesh Kumar K.V (1):
  powerpc/mmu_gather: enable RCU_TABLE_FREE even for !SMP case

Peter Zijlstra (4):
  asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather
  asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE
  mm/mmu_gather: invalidate TLB correctly on batch allocation failure and flush
  asm-generic/tlb: avoid potential double flush

Will Deacon (1):
  asm-generic/tlb: Track which levels of the page tables have been cleared
 arch/Kconfig                                 |   3 -
 arch/powerpc/Kconfig                         |   2 +-
 arch/powerpc/include/asm/book3s/32/pgalloc.h |   8 --
 arch/powerpc/include/asm/book3s/64/pgalloc.h |   2 -
 arch/powerpc/include/asm/tlb.h               |  11 ++
 arch/powerpc/mm/pgtable-book3s64.c           |   7 --
 arch/sparc/include/asm/tlb_64.h              |   9 ++
 arch/x86/Kconfig                             |   1 -
 include/asm-generic/tlb.h                    | 103 ++++++++++++++++---
 mm/memory.c                                  |  20 ++--
 10 files changed, 122 insertions(+), 44 deletions(-)