Dave Vasilevsky <dave@vasilevsky.ca> writes:
On 32-bit book3s with hash MMUs, tlb_flush() was a no-op. This went unnoticed because, until recently, all users were unmaps, which are handled by __tlb_remove_tlb_entry().
After commit 4a18419f71cd ("mm/mprotect: use mmu_gather") in kernel 5.19, tlb_gather_mmu() started being used for mprotect as well. This caused mprotect to simply not work on these machines:
    int *ptr = mmap(NULL, 4096, PROT_READ|PROT_WRITE,
                    MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
    *ptr = 1;                      // force HPTE to be created
    mprotect(ptr, 4096, PROT_READ);
    *ptr = 2;                      // should segfault, but succeeds
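[For reference, a self-contained version of this reproducer might look like the sketch below; the includes, error checks, SIGSEGV handler and messages are my additions, not part of the original test program. On an affected kernel the second write silently succeeds; on a fixed kernel the write faults and the handler runs.]

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void segv_handler(int sig)
    {
            /* The write after mprotect(PROT_READ) faulted, as it should. */
            static const char msg[] = "OK: write after mprotect() faulted\n";

            (void)sig;
            write(STDOUT_FILENO, msg, sizeof(msg) - 1);
            _exit(0);
    }

    int main(void)
    {
            signal(SIGSEGV, segv_handler);

            int *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (ptr == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            *ptr = 1;                       /* force HPTE to be created */

            if (mprotect(ptr, 4096, PROT_READ)) {
                    perror("mprotect");
                    return 1;
            }

            *ptr = 2;                       /* should segfault, but succeeds on affected kernels */

            printf("BUG: write after mprotect() did not fault\n");
            return 1;
    }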
Fixed by making tlb_flush() actually flush TLB pages. This finally agrees with the behaviour of book3s64's tlb_flush().
Fixes: 4a18419f71cd ("mm/mprotect: use mmu_gather")
Signed-off-by: Dave Vasilevsky <dave@vasilevsky.ca>
Cc: stable@vger.kernel.org
Changes in v2:
- Flush entire TLB if full mm is requested.
- Link to v1: https://lore.kernel.org/r/20251027-vasi-mprotect-g3-v1-1-3c5187085f9a@vasile...
 arch/powerpc/include/asm/book3s/32/tlbflush.h | 8 ++++++--
 arch/powerpc/mm/book3s32/tlb.c                | 9 +++++++++
 2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/32/tlbflush.h b/arch/powerpc/include/asm/book3s/32/tlbflush.h
index e43534da5207aa3b0cb3c07b78e29b833c141f3f..b8c587ad2ea954f179246a57d6e86e45e91dcfdc 100644
--- a/arch/powerpc/include/asm/book3s/32/tlbflush.h
+++ b/arch/powerpc/include/asm/book3s/32/tlbflush.h
@@ -11,6 +11,7 @@
 void hash__flush_tlb_mm(struct mm_struct *mm);
 void hash__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
 void hash__flush_range(struct mm_struct *mm, unsigned long start, unsigned long end);
+void hash__flush_gather(struct mmu_gather *tlb);
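[The arch/powerpc/mm/book3s32/tlb.c hunk was trimmed from the quote above. Based on the new declaration and the v2 note about flushing the entire TLB when the full mm is requested, the body of hash__flush_gather() might look roughly like the following sketch; this is my guess, not the quoted patch, and details may differ:]

    void hash__flush_gather(struct mmu_gather *tlb)
    {
            if (tlb->fullmm || tlb->need_flush_all)
                    /* Whole address space (or an explicit full flush):
                     * drop every hash PTE / TLB entry for this mm.
                     */
                    hash__flush_tlb_mm(tlb->mm);
            else
                    /* Otherwise flush only the gathered range. */
                    hash__flush_range(tlb->mm, tlb->start, tlb->end);
    }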
I would perhaps have preferred the following naming convention for the hash-specific tlb_flush() w.r.t. mmu_gather, which is also similar to what book3s64 uses ;)
- hash__tlb_flush()
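[For comparison, and from memory rather than quoted from the tree, book3s64's tlb_flush() dispatches roughly like this, which is why a hash__tlb_flush() name on book3s/32 would mirror it:]

    static inline void tlb_flush(struct mmu_gather *tlb)
    {
            if (radix_enabled())
                    radix__tlb_flush(tlb);
            else
                    hash__tlb_flush(tlb);
    }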
But no strong objection to this either. BTW, I ran your test program in QEMU and was able to reproduce the problem; this patch fixes it.
Overall the change looks good to me, so please feel free to add:
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>