On 2023-02-03 23:04, Jacob Pan wrote:
The Intel IOMMU driver implements its IOTLB flush queue with domain-selective or PASID-selective invalidations. In that case there is no need to track the IOVA page range and sync IOTLBs, which can cause a significant performance hit.
This patch adds a check to avoid IOVA gather-page tracking and IOTLB sync on the lazy path.
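To illustrate the logic outside the kernel, here is a minimal standalone C model of the check (the struct and function names below only mimic the kernel's iommu_iotlb_gather API; this is a sketch under that assumption, not kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Standalone model of struct iommu_iotlb_gather: accumulates the IOVA
 * range for a later page-selective invalidation. */
struct iotlb_gather {
	unsigned long start;
	unsigned long end;
	bool queued;	/* true when a flush queue will invalidate by
			 * domain/PASID rather than by page range */
};

static void gather_add_page(struct iotlb_gather *g,
			    unsigned long iova, size_t size)
{
	if (iova < g->start)
		g->start = iova;
	if (iova + size > g->end)
		g->end = iova + size;
}

static size_t unmap(struct iotlb_gather *g, unsigned long iova, size_t size)
{
	/* The point of the patch: on the lazy (queued) path the flush
	 * queue issues a domain- or PASID-selective invalidation, so
	 * per-page range tracking here is pure overhead and is skipped. */
	if (!g->queued)
		gather_add_page(g, iova, size);
	return size;
}

int main(void)
{
	struct iotlb_gather strict = { .start = ~0UL, .queued = false };
	struct iotlb_gather lazy   = { .start = ~0UL, .queued = true };

	unmap(&strict, 0x1000, 0x1000);
	unmap(&lazy,   0x1000, 0x1000);

	printf("strict gathered: [%#lx, %#lx)\n", strict.start, strict.end);
	printf("lazy gathered nothing: start=%#lx end=%#lx\n",
	       lazy.start, lazy.end);
	return 0;
}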
On Sapphire Rapids with a 100Gb NIC, iperf send throughput improves as follows:
w/o this fix: ~48 Gbits/s
with this fix: ~54 Gbits/s
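The exact iperf invocation isn't given above; as a hypothetical reproduction recipe, a send-side run could look like this (server address and duration are placeholders):

	iperf3 -c 192.168.0.2 -t 30

with the NIC's DMA mappings going through the IOMMU in lazy (flush queue) mode.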
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Cc: stable@vger.kernel.org
Tested-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
---
 drivers/iommu/intel/iommu.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index b4878c7ac008..705a1c66691a 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4352,7 +4352,13 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
 	if (dmar_domain->max_addr == iova + size)
 		dmar_domain->max_addr = iova;
 
-	iommu_iotlb_gather_add_page(domain, gather, iova, size);
+	/*
+	 * We do not use page-selective IOTLB invalidation in the flush
+	 * queue, so there is no need to track pages and sync the IOTLB.
+	 * Domain-selective or PASID-selective invalidation is used instead.
+	 */
+	if (!gather->queued)
+		iommu_iotlb_gather_add_page(domain, gather, iova, size);
 
 	return size;
 }
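For context on when gather->queued gets set (my reading of the mainline dma-iommu code, paraphrased from memory and not part of this patch): the DMA API layer marks the gather as queued only for flush-queue domains, roughly

	/* drivers/iommu/dma-iommu.c, __iommu_dma_unmap() (paraphrased) */
	iommu_iotlb_gather_init(&iotlb_gather);
	iotlb_gather.queued = READ_ONCE(cookie->fq_domain);

so strict-mode domains still take the page-selective path above.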