On 7/10/25 23:26, Dave Hansen wrote:
> On 7/10/25 06:22, Jason Gunthorpe wrote:
>>> Why does this matter? We flush the CPU TLB in a bunch of different ways, _especially_ when it's being done for kernel mappings. For example, __flush_tlb_all() is a non-ranged kernel flush whose implementation is completely parallel to flush_tlb_kernel_range(). Call sites that use _it_ are unaffected by the patch here.
>>> Basically, if we're only worried about vmalloc/vfree freeing page tables, then this patch is OK. If the problem is bigger than that, then we need a more comprehensive patch.
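Just to make sure I understand the contrast: the two paths look roughly like this (heavily simplified, not the literal arch/x86 code):

/*
 * Rough sketch of the two parallel kernel-flush paths; simplified
 * for illustration only.
 */

/* Non-ranged flush: acts on the local CPU via CR4/CR3 writes. */
static inline void __flush_tlb_all(void)
{
	if (cpu_feature_enabled(X86_FEATURE_PGE))
		__flush_tlb_global();	/* toggles CR4.PGE, hits global entries */
	else
		flush_tlb_local();	/* CR3 reload */
}

/* Ranged kernel flush: IPIs every CPU. */
void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	if (end == TLB_FLUSH_ALL ||
	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
		on_each_cpu(do_flush_tlb_all, NULL, 1);
	} else {
		struct flush_tlb_info *info;

		preempt_disable();
		info = get_flush_tlb_info(NULL, start, end, 0, false,
					  TLB_GENERATION_INVALID);
		on_each_cpu(do_kernel_range_flush, info, 1);
		put_flush_tlb_info();
		preempt_enable();
	}
}

So a hook placed only in flush_tlb_kernel_range() never sees the __flush_tlb_all() users.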
>> I think we are worried about any place that frees page tables.
> The two places that come to mind are the remove_memory() code and __change_page_attr().
> The remove_memory() gunk is in arch/x86/mm/init_64.c. It has a few sites that do flush_tlb_all(). Now that I'm looking at it, there look to be some races between freeing page table pages and flushing the TLB. But, basically, if you stick to the sites in there that do flush_tlb_all() after free_pagetable(), you should be good.
> As for the __change_page_attr() code, I think the only spot you need to hit is cpa_collapse_large_pages() and maybe the one in __split_large_page() as well.
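If I am reading init_64.c correctly, the ordering there is roughly the following (paraphrased, not the literal code), which also shows the free-before-flush window you mention:

/* Paraphrased shape of the hotremove path in arch/x86/mm/init_64.c. */
static void free_pte_table(pte_t *pte_start, pmd_t *pmd)
{
	/* ... all PTEs already checked to be pte_none() ... */

	free_pagetable(pmd_page(*pmd), 0);	/* page-table page freed here... */
	spin_lock(&init_mm.page_table_lock);
	pmd_clear(pmd);
	spin_unlock(&init_mm.page_table_lock);
}

static void remove_pagetable(unsigned long start, unsigned long end, ...)
{
	/* ... walks the p4d/pud/pmd levels via the free_*_table() helpers ... */

	flush_tlb_all();	/* ...but the TLB is only flushed here */
}

Between the free and the flush, another CPU can still walk through the stale entry into a freed page, which I assume is the race you are seeing.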
Thank you for the guidance. It appears that all the paths mentioned above eventually call flush_tlb_all() after changing the page tables. So can I simply put a call site in flush_tlb_all()? Something like this:
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index a41499dfdc3f..3b85e7d3ba44 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -1479,6 +1479,8 @@ void flush_tlb_all(void)
 	else
 		/* Fall back to the IPI-based invalidation. */
 		on_each_cpu(do_flush_tlb_all, NULL, 1);
+
+	iommu_sva_invalidate_kva_range(0, TLB_FLUSH_ALL);
 }
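For reference, the helper being called is the one added earlier in this series; in its current form it is roughly the following (subject to change in later revisions):

/*
 * Rough shape of iommu_sva_invalidate_kva_range() as added earlier in
 * this series: bail out fast when no SVA domains exist, otherwise ask
 * every mm with an SVA domain to invalidate its secondary (IOMMU) TLBs.
 */
void iommu_sva_invalidate_kva_range(unsigned long start, unsigned long end)
{
	struct iommu_mm_data *iommu_mm;

	if (!static_branch_unlikely(&iommu_sva_present))
		return;

	guard(mutex)(&iommu_sva_lock);
	list_for_each_entry(iommu_mm, &iommu_sva_mms, mm_list_elm)
		mmu_notifier_arch_invalidate_secondary_tlbs(iommu_mm->mm,
							    start, end);
}

That way a single call site in flush_tlb_all() would cover the remove_memory() and __change_page_attr() paths above in one place.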
> This is all disturbingly ad-hoc, though. The remove_memory() code needs fixing, and I'll probably go try to bring some order to the chaos in the process of fixing it up. But that's a separate problem from this IOMMU fun.
Yes. Please.
Thanks,
baolu