On Wed, Jul 16, 2025 at 02:34:04PM +0800, Baolu Lu wrote:
@@ -654,6 +656,9 @@ struct iommu_ops {
 	int (*def_domain_type)(struct device *dev);
+	void (*paging_cache_invalidate)(struct iommu_device *dev,
+					unsigned long start, unsigned long end);
How would you even implement this in a driver?
You either flush the whole iommu, in which case who needs a range, or the driver has to iterate over the PASID list, in which case it doesn't really improve the situation.
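(For illustration, a rough sketch of what the iterate-over-the-PASID-list option would plausibly look like; every type and helper below is an invented stand-in, not an existing driver API. The point is that it still ends up issuing one queued invalidation per attached PASID, which is what the per-mm notifier path already does.)

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* All types and helpers here are hypothetical stand-ins. */
struct example_iommu {
	spinlock_t sva_lock;
	struct list_head sva_domains;	/* list of struct example_sva_domain */
};

struct example_sva_domain {
	struct list_head link;
	u32 pasid;
};

/* Hypothetical HW-specific ranged flush for a single PASID. */
void example_flush_pasid_range(struct example_iommu *iommu, u32 pasid,
			       unsigned long start, unsigned long end);

static void example_paging_cache_invalidate(struct example_iommu *iommu,
					    unsigned long start,
					    unsigned long end)
{
	struct example_sva_domain *d;

	spin_lock(&iommu->sva_lock);
	/* Still one queued invalidation per attached PASID. */
	list_for_each_entry(d, &iommu->sva_domains, link)
		example_flush_pasid_range(iommu, d->pasid, start, end);
	spin_unlock(&iommu->sva_lock);
}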
The Intel iommu driver supports flushing all SVA PASIDs with a single request in the invalidation queue.
How? All PASIDs != 0? The HW has no notion of an SVA PASID vs a non-SVA one. This is just flushing almost everything.
If this is a concern, I think the better answer is to do a deferred free like the mm can sometimes do: thread the page tables onto a linked list, flush the CPU cache, and push it all into a work which will do the iommu flush before actually freeing the memory.
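(A minimal sketch of that deferred-free scheme, assuming invented names throughout; iommu_sva_flush_kva_all() in particular is a placeholder for whatever wide flush primitive the driver would provide. A lock-free llist collects the freed page-table pages, and a single work item does one IOMMU flush before freeing the whole batch.)

#include <linux/llist.h>
#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/mm.h>

/* Hypothetical placeholder for the driver's "flush everything" op. */
void iommu_sva_flush_kva_all(void);

/* Carrier for one page-table page awaiting the deferred flush. */
struct kva_deferred_pt {
	struct llist_node node;
	struct page *page;
};

static LLIST_HEAD(kva_deferred_list);

static void kva_deferred_free_fn(struct work_struct *work)
{
	struct llist_node *batch = llist_del_all(&kva_deferred_list);
	struct kva_deferred_pt *pt, *next;

	/* One wide IOMMU flush covers every page queued so far. */
	iommu_sva_flush_kva_all();

	/* No IOTLB entry can reference these pages anymore; free them. */
	llist_for_each_entry_safe(pt, next, batch, node) {
		__free_page(pt->page);
		kfree(pt);
	}
}

static DECLARE_WORK(kva_deferred_free_work, kva_deferred_free_fn);

/* Called in place of freeing a kernel page-table page directly. */
static void kva_defer_free_pgtable(struct page *page)
{
	struct kva_deferred_pt *pt = kmalloc(sizeof(*pt), GFP_ATOMIC);

	if (!pt)
		return;		/* fall back to a synchronous flush + free */

	pt->page = page;
	/* llist_add() is lock-free, so no spinlock is needed here. */
	llist_add(&pt->node, &kva_deferred_list);
	schedule_work(&kva_deferred_free_work);
}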
Is it a workable solution to use schedule_work() to queue the KVA cache invalidation as a work item in the system workqueue? By doing so, we wouldn't need the spinlock to protect the list anymore.
Maybe.
MM is also more careful to pull the invalidation out of some of the locks; I don't know what the KVA side is like.
Jason