On 1/27/21 8:26 AM, Lu Baolu wrote:
+{
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	struct intel_iommu *iommu = domain_get_iommu(dmar_domain);
+
+	if (intel_iommu_strict)
+		return 0;
+
+	/*
+	 * The flush queue implementation does not perform page-selective
+	 * invalidations that are required for efficient TLB flushes in virtual
+	 * environments. The benefit of batching is likely to be much lower than
+	 * the overhead of synchronizing the virtual and physical IOMMU
+	 * page-tables.
+	 */
+	if (iommu && cap_caching_mode(iommu->cap)) {
+		pr_warn_once("IOMMU batching is partially disabled due to virtualization");
+		return 0;
+	}
domain_get_iommu() only returns the first iommu, and could return NULL when this is called before the domain is attached to any device. A better choice would be to check caching mode globally and return false if caching mode is supported on any iommu:
	struct dmar_drhd_unit *drhd;
	struct intel_iommu *iommu;

	rcu_read_lock();
	for_each_active_iommu(iommu, drhd) {
		if (cap_caching_mode(iommu->cap)) {
			rcu_read_unlock();
			return false;
		}
	}
	rcu_read_unlock();

(Note that rcu must be unlocked before returning from inside the loop. Sorry for missing that in the first version!)
Best regards, baolu