From: John David Anglin <dave.anglin@bell.net>
[ Upstream commit 38860b2c8bb1b92f61396eb06a63adff916fc31d ]
For years, there have been random segmentation faults in userspace on SMP PA-RISC machines. It occurred to me that this might be a problem in set_pte_at(). MIPS and some other architectures do cache flushes when installing PTEs with the present bit set.
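The rough shape of that approach, as a simplified sketch rather than the actual MIPS code (set_pte_at_flushing() is a made-up name for illustration only), is to flush the page's cache lines before the new PTE becomes visible:

	/* Illustrative sketch only, not code from any tree: flush the
	 * page before a present PTE is installed, so stale cache lines
	 * cannot be observed through the new mapping.
	 */
	static inline void set_pte_at_flushing(struct mm_struct *mm,
			unsigned long addr, pte_t *ptep, pte_t pteval)
	{
		if (pte_present(pteval))
			flush_dcache_page(pfn_to_page(pte_pfn(pteval)));
		set_pte(ptep, pteval);
	}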
Here I have adapted the code in update_mmu_cache() to flush the kernel mapping when the kernel flush is deferred, or when the kernel mapping may alias with the user mapping. This simplifies calls to update_mmu_cache().
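The flush decision being described looks roughly like the sketch below. This assumes the parisc helpers page_mapping_file(), flush_kernel_dcache_page_addr() and parisc_requires_coherency() behave as their names suggest; the authoritative body is the one in cache.c in the diff below:

	/* Sketch of the flush decision, not the verbatim function. */
	void __update_cache(pte_t pte)
	{
		unsigned long pfn = pte_pfn(pte);
		struct page *page;

		if (!pfn_valid(pfn))
			return;
		page = pfn_to_page(pfn);
		if (page_mapping_file(page) &&
		    test_bit(PG_dcache_dirty, &page->flags)) {
			/* Kernel flush was deferred at fault time. */
			flush_kernel_dcache_page_addr(pfn_va(pfn));
			clear_bit(PG_dcache_dirty, &page->flags);
		} else if (parisc_requires_coherency()) {
			/* Kernel mapping may alias the user mapping. */
			flush_kernel_dcache_page_addr(pfn_va(pfn));
		}
	}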
I also changed the barrier in set_pte() from a compiler barrier to a full memory barrier. I know this change is not sufficient to fix the problem. It might not be needed.
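For context, barrier() only constrains the compiler, while mb() also emits a hardware barrier; on PA-RISC the two expand to roughly the following:

	/* Compiler-only fence: no ordering guarantee between CPUs. */
	#define barrier()	__asm__ __volatile__("" : : : "memory")

	/* Full barrier: compiler fence plus a "sync" instruction, so
	 * the PTE store is ordered for other processors as well.
	 */
	#define mb()		__asm__ __volatile__("sync" : : : "memory")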
I have had a few days of operation with 5.14.16 to 5.15.1 and haven't seen any random segmentation faults on rp3440 or c8000 so far.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@kernel.org # 5.12+
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/parisc/include/asm/pgtable.h | 10 ++++++++--
 arch/parisc/kernel/cache.c        |  4 ++--
 2 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index 39017210dbf0..8964798b8274 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -76,6 +76,8 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
 	purge_tlb_end(flags);
 }
 
+extern void __update_cache(pte_t pte);
+
 /* Certain architectures need to do special things when PTEs
  * within a page table are directly modified.  Thus, the following
  * hook is made available.
@@ -83,11 +85,14 @@ static inline void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
 #define set_pte(pteptr, pteval)			\
 	do {					\
 		*(pteptr) = (pteval);		\
-		barrier();			\
+		mb();				\
 	} while(0)
 
 #define set_pte_at(mm, addr, pteptr, pteval)	\
 	do {					\
+		if (pte_present(pteval) &&	\
+		    pte_user(pteval))		\
+			__update_cache(pteval);	\
 		*(pteptr) = (pteval);		\
 		purge_tlb_entries(mm, addr);	\
 	} while (0)
@@ -305,6 +310,7 @@ extern unsigned long *empty_zero_page;
 
 #define pte_none(x)	(pte_val(x) == 0)
 #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
+#define pte_user(x)	(pte_val(x) & _PAGE_USER)
 #define pte_clear(mm, addr, xp)	set_pte_at(mm, addr, xp, __pte(0))
 
 #define pmd_flag(x)	(pmd_val(x) & PxD_FLAG_MASK)
@@ -412,7 +418,7 @@ extern void paging_init (void);
 
 #define PG_dcache_dirty	PG_arch_1
 
-extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *);
+#define update_mmu_cache(vms,addr,ptep) __update_cache(*ptep)
 
 /* Encode and de-code a swap entry */
 
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 86a1a63563fd..c81ab0cb8925 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -83,9 +83,9 @@ EXPORT_SYMBOL(flush_cache_all_local);
 #define pfn_va(pfn)	__va(PFN_PHYS(pfn))
 
 void
-update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+__update_cache(pte_t pte)
 {
-	unsigned long pfn = pte_pfn(*ptep);
+	unsigned long pfn = pte_pfn(pte);
 	struct page *page;
 
 	/* We don't have pte special.  As a result, we can be called with