"Huang, Ying" ying.huang@intel.com writes:
Alistair Popple apopple@nvidia.com writes:
Lu Baolu baolu.lu@linux.intel.com writes:
Commit 6bbd42e2df8f ("mmu_notifiers: call invalidate_range() when invalidating TLBs") moved the secondary TLB invalidations into the TLB invalidation functions to ensure that all secondary TLB invalidations happen at the same time as the CPU invalidation, and added a flush-all type of secondary TLB invalidation for the batched mode, where a range of [0, -1UL) is used to indicate that the range extends to the end of the address space.
However, using an end address of -1UL caused an overflow in the Intel IOMMU driver, where the end address was rounded up to the next page. As a result, both the IOTLB and device ATC were not invalidated correctly.
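For illustration only (not the driver code itself), here is a minimal userspace sketch of the wrap-around: rounding an end address of -1UL up to the next page boundary overflows to 0, so the computed range collapses to zero pages and nothing gets invalidated.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long start = 0, end = -1UL;

	/* -1UL + PAGE_SIZE - 1 wraps around, so the aligned end becomes 0. */
	unsigned long aligned_end = PAGE_ALIGN(end);
	unsigned long pages = (aligned_end - start) >> PAGE_SHIFT;

	printf("aligned end = %#lx, pages = %lu\n", aligned_end, pages);
	return 0;
}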
Thanks for catching this. The fix looks good, so:
Reviewed-by: Alistair Popple apopple@nvidia.com
However, examining the Fixes patch again, I note that we are calling mmu_notifier_invalidate_range(mm, 0, -1UL) from arch_tlbbatch_add_pending() in arch/x86/include/asm/tlbflush.h.
That seems suboptimal because we would be doing an invalidate all for every page unmap,
Yes. This could be a performance regression for IOMMU TLB flushing. For the CPU, the trade-off is "flush smaller ranges with more IPIs" vs. "flush the whole range with fewer IPIs", and in general the latter wins because of the high overhead of IPIs. But, IIUC, for the IOMMU TLB it becomes just "flush smaller ranges" vs. "flush the whole range", and flushing the whole range is generally bad there.
The "flush smaller ranges" vs. "flush whole range" is equally valid for some architectures, or at least some implementations of SMMU on ARM because flushing the whole range is a single IOMMU command vs. multiple for flushing a range. See for example https://lore.kernel.org/linux-arm-kernel/20230920052257.8615-1-nicolinc@nvid... which switches to a full invalidate depending on the range. I've no idea if that's true more generally though, although a similar situation existed on POWER9.
It may be better to restore the original behavior. Can we just pass the size of the TLB flush in set_tlb_ubc_flush_pending()->arch_tlbbatch_add_pending() and flush the IOMMU TLB for that range?
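Something along these lines, as a rough sketch only (the extra 'uaddr' parameter and the exact body are assumptions for illustration, not the actual x86 code):

/*
 * Sketch of the suggestion: pass the unmapped address down so the
 * secondary TLB notifier sees a real range instead of [0, -1UL).
 */
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
					     struct mm_struct *mm,
					     unsigned long uaddr)
{
	inc_mm_tlb_gen(mm);
	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));

	/* Invalidate only the page being unmapped, not the whole address space. */
	mmu_notifier_invalidate_range(mm, uaddr & PAGE_MASK,
				      (uaddr & PAGE_MASK) + PAGE_SIZE);
}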
Ideally we'd push the notifier call down the stack, closer to where the actual HW TLB invalidate gets called. I think I was just getting lost in all the indirection in the lower-level x86_64 TLB flushing and batching code, though. Will take another look.
and as of db6c1f6f236d ("mm/tlbbatch: introduce arch_flush_tlb_batched_pending()") arch_flush_tlb_batched_pending() calls flush_tlb_mm() anyway. So I think we can probably drop the explicit notifier call from arch_flush_tlb_batched_pending().
arch_flush_tlb_batched_pending() is used when we need to change the page table (e.g., munmap()) in parallel with TLB flush batching (e.g., try_to_unmap()). The actual TLB flushing part for set_tlb_ubc_flush_pending()->arch_tlbbatch_add_pending() is try_to_unmap_flush()->arch_tlbbatch_flush().
Thanks for the pointer. I must have got arch_tlbbatch_flush() and arch_flush_tlb_batched_pending() crossed at some point.
- Alistair
Will put together a patch for that.
- Alistair
Add a flush-all helper function and call it when the invalidation range is from 0 to -1UL, ensuring that the entire IOTLB and device ATC are invalidated correctly.
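As an illustrative sketch only (every name below is hypothetical, not the actual driver code), the notifier callback can special-case the [0, -1UL) range and take a dedicated flush-all path instead of doing the per-page rounding that overflows:

static void example_invalidate_range(struct mmu_notifier *mn,
				     struct mm_struct *mm,
				     unsigned long start, unsigned long end)
{
	if (start == 0 && end == -1UL) {
		/* Flush the entire IOTLB and device ATC for this mm. */
		example_flush_all(mm);
		return;
	}

	/* Bounded range: safe to convert to a page count. */
	example_flush_range(mm, start, (end - start) >> PAGE_SHIFT);
}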
[snip]