From: Lu Baolu <baolu.lu@linux.intel.com>
Sent: Thursday, July 3, 2025 11:16 AM
The iotlb_sync_map iommu_ops callback allows drivers to perform the necessary cache flushes when new mappings are established. For the Intel iommu driver, this callback specifically serves two purposes:
- To flush caches when a second-stage page table is attached to a device whose iommu is operating in caching mode (CAP_REG.CM==1).
- To explicitly flush internal write buffers to ensure updates to memory-resident remapping structures are visible to hardware (CAP_REG.RWBF==1).
However, in scenarios where neither caching mode nor the RWBF flag is active, the cache_tag_flush_range_np() helper, which is called in the iotlb_sync_map path, effectively becomes a no-op.
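For illustration, the attach-time check could look roughly like the sketch below. The helper name is taken from the hunk quoted later in this mail, but the body is only an assumption inferred from the two conditions above (caching mode with a second-stage table, or RWBF), so the actual patch may differ:

static bool domain_need_iotlb_sync_map(struct dmar_domain *domain,
				       struct intel_iommu *iommu)
{
	/* Second-stage page table on an IOMMU in caching mode (CAP_REG.CM==1) */
	if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
		return true;

	/* Write buffer flushing required (CAP_REG.RWBF==1) */
	if (cap_rwbf(iommu->cap))
		return true;

	return false;
}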
Despite being a no-op, cache_tag_flush_range_np() still iterates over all cache tags of the IOMMUs attached to the domain, protected by a spinlock. This unnecessary execution path introduces overhead, leading to a measurable I/O performance regression. On systems with NVMe disks under the same bridge, performance was observed to drop from approximately 6150 MiB/s down to approximately 4985 MiB/s.
So for the same-bridge case, the two NVMe disks are likely in the same iommu group and share a domain. There is then contention on the spinlock between the two parallel threads driving the two disks. When the disks sit behind different bridges, they are attached to different domains, hence no contention.

Is that a correct description of the difference between the same-bridge and different-bridge cases?
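For reference, my understanding of the contended path is roughly the simplified sketch below; field names like domain->cache_lock and domain->cache_tags follow the cache-tag code, but the body is abbreviated and not the exact upstream function. The point is that every map operation takes the per-domain spinlock and walks the tag list even when every iteration is a no-op:

void cache_tag_flush_range_np(struct dmar_domain *domain,
			      unsigned long start, unsigned long end)
{
	struct cache_tag *tag;
	unsigned long flags;

	spin_lock_irqsave(&domain->cache_lock, flags);
	list_for_each_entry(tag, &domain->cache_tags, node) {
		struct intel_iommu *iommu = tag->iommu;

		/* No-op iteration when neither CM nor RWBF applies */
		if (!cap_caching_mode(iommu->cap) && !cap_rwbf(iommu->cap))
			continue;

		/* ... flush IOTLB and/or write buffer as needed ... */
	}
	spin_unlock_irqrestore(&domain->cache_lock, flags);
}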
@@ -1833,6 +1845,8 @@ static int dmar_domain_attach_device(struct dmar_domain *domain,
 	if (ret)
 		goto out_block_translation;

+	domain->iotlb_sync_map |= domain_need_iotlb_sync_map(domain,
+							      iommu);

 	return 0;
The flag also needs to be updated upon detach.
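One way to do that, purely as a sketch, would be to recompute the flag from whatever remains attached when a device detaches; domain_update_iotlb_sync_map() is a hypothetical name here and the list/locking details are assumptions:

/* Hypothetical helper: caller is assumed to hold domain->lock */
static void domain_update_iotlb_sync_map(struct dmar_domain *domain)
{
	struct device_domain_info *info;
	bool need = false;

	list_for_each_entry(info, &domain->devices, link)
		need |= domain_need_iotlb_sync_map(domain, info->iommu);

	domain->iotlb_sync_map = need;
}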
With that addressed:
Reviewed-by: Kevin Tian <kevin.tian@intel.com>