On Wed, Oct 01, 2025 at 09:19:09PM +0000, Zhichuang Sun wrote:
> iommu/amd: fix amd iotlb flush range in unmap

Shouldn't this be the subject line?

> This was fixed in mainline in 6b080c4e815ceba3c08ffa980c858595c07e786a, but we do not backport the full refactor.

Why not?

> Targeting LTS branch linux-5.15.y.

Why just this one? Why not also 5.10.y and 5.4.y?

> The AMD IOMMU driver supports power-of-two page sizes: 4K, 8K, 16K, and so on. So when the VFIO driver asks the AMD IOMMU driver to unmap an IOVA with a page_size of 4K, the driver may actually unmap 8K, depending on the page size used during mapping. However, the iotlb gather function uses the requested page_size as the flush range instead of the real unmapped size r returned by the unmap operation.
>
> This miscalculation of the iotlb flush range leaves the not-yet-flushed IOTLB entry stale. It triggered a hard-to-debug silent data corruption issue, as a DMA engine that used the stale IOTLB entry would DMA into the unmapped memory region.
>
> The upstream commit changes the API from map/unmap_page() to map/unmap_pages() and changes the gather range calculation along with it; this incidentally fixed the bug in mainline since 6.1. This backport does not take the API change and only ports the gather range calculation to fix the bug.
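>
> The change boils down to one line in amd_iommu_unmap(): gather the size that was actually unmapped rather than the size the caller asked for. A minimal sketch against 5.15 (illustrative only, not the exact diff; function and field names are assumed from that tree):
>
>         static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
>                                       size_t page_size,
>                                       struct iommu_iotlb_gather *gather)
>         {
>                 struct protection_domain *domain = to_pdomain(dom);
>                 struct io_pgtable_ops *ops = &domain->iop.iop.ops;
>                 size_t r;
>
>                 if (domain->iop.mode == PAGE_MODE_NONE)
>                         return 0;
>
>                 /* The io-pgtable code may unmap a larger block than page_size. */
>                 r = (ops->unmap) ? ops->unmap(ops, iova, page_size, gather) : 0;
>
>                 /*
>                  * Gather the range that was actually unmapped (r), not the
>                  * requested page_size, so the IOTLB flush covers all of it.
>                  */
>                 amd_iommu_iotlb_gather_add_page(dom, gather, iova, r);
>
>                 return r;
>         }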
>
> Cc: Nadav Amit <namit@vmware.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Will Deacon <will@kernel.org>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Lu Baolu <baolu.lu@linux.intel.com>
> Cc: iommu@lists.linux-foundation.org
> Fixes: fc65d0acaf23179b94de399c204328fa259acb90

Please use the proper format as documented:
Fixes: fc65d0acaf23 ("iommu/amd: Selective flush on unmap")
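
Something like this should produce it (a sketch following the alias suggested in Documentation/process/submitting-patches.rst):

        git log -1 --abbrev=12 --format='Fixes: %h ("%s")' fc65d0acaf23
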
thanks,
greg k-h