The error-path unmap loop in gk20a_instobj_ctor_iommu() walks back in fixed
PAGE_SIZE steps, but the IOVA stride actually used by the map loop is
determined by imem->iommu_pgshift, not PAGE_SHIFT. If iommu_pgshift >
PAGE_SHIFT, the rollback computes mismatched offsets and iommu_unmap()
targets the wrong addresses, potentially leaving mappings in place or
corrupting IOMMU state.
Fix this by recomputing the offset per index using the same logic as in the map loop, ensuring symmetry and correctness.
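For illustration only (the page sizes, shift and offsets below are
hypothetical values chosen to show the arithmetic; this is not code from the
tree), the divergence between the IOVAs the map loop uses and the addresses
the old rollback unmaps can be reproduced with a small standalone program:

	#include <stdio.h>

	int main(void)
	{
		const unsigned long page_size = 4096;	/* assumed CPU PAGE_SIZE (4K) */
		const unsigned int iommu_pgshift = 16;	/* assumed IOMMU page shift (64K) */
		const unsigned long r_offset = 8;	/* hypothetical r->offset */
		unsigned long i = 4;			/* pretend iommu_map() failed at i == 4 */
		unsigned long offset = (r_offset + i) << iommu_pgshift;

		while (i-- > 0) {
			/* IOVA the map loop actually used for page i (and what
			 * the patched rollback now passes to iommu_unmap()) */
			unsigned long mapped = (r_offset + i) << iommu_pgshift;

			offset -= page_size;	/* address the old rollback unmapped */
			printf("i=%lu: mapped 0x%08lx, old code unmaps 0x%08lx%s\n",
			       i, mapped, offset,
			       mapped == offset ? "" : "  <-- mismatch");
		}
		return 0;
	}

With iommu_pgshift equal to PAGE_SHIFT the two columns coincide, which is why
the problem stays hidden on configurations where the IOMMU and CPU page sizes
match.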
Found by Linux Verification Center (linuxtesting.org) with SVACE.

Cc: stable@vger.kernel.org # v4.3+
Fixes: a7f6da6e758c ("drm/nouveau/instmem/gk20a: add IOMMU support")
Signed-off-by: Alexey Nepomnyashih <sdl@nppct.ru>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
index 17a0e1a46211..f58e0d4fb2b1 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c
@@ -481,8 +481,9 @@ gk20a_instobj_ctor_iommu(struct gk20a_instmem *imem, u32 npages, u32 align,
 			nvkm_error(subdev, "IOMMU mapping failure: %d\n", ret);
 
 			while (i-- > 0) {
-				offset -= PAGE_SIZE;
-				iommu_unmap(imem->domain, offset, PAGE_SIZE);
+				iommu_unmap(imem->domain,
+					    ((unsigned long)r->offset + i) << imem->iommu_pgshift,
+					    PAGE_SIZE);
 			}
 			goto release_area;
 		}