On Mon, Nov 28, 2022 at 04:48:58PM +0100, Eric Auger wrote:
> > +/**
> > + * iommufd_access_notify_unmap - Notify users of an iopt to stop using it
> > + * @iopt: iopt to work on
> > + * @iova: Starting iova in the iopt
> > + * @length: Number of bytes
> > + *
> > + * After this function returns there should be no users attached to the pages
> > + * linked to this iopt that intersect with iova,length. Anyone that has attached
> > + * a user through iopt_access_pages() needs to detatch it through
> detach
> > + * iommufd_access_unpin_pages() before this function returns.
> > + *
> > + * The unmap callback may not call or wait for a iommufd_access_destroy() to
> > + * complete. Once iommufd_access_destroy() returns no ops are running and no
> > + * future ops will be called.
> I don't understand the above sentence. Is that related to the
>
>     if (!iommufd_lock_obj(&access->obj))
>         continue;
>
> where is the unmap() called in that case?
It is basically saying a driver cannot write this:
 unmap():
    mutex_lock(lock)
    iommufd_access_unpin_pages(access)
    mutex_unlock(lock)

 driver_close:
    mutex_lock(lock)
    iommufd_access_destroy(access)
    mutex_unlock(lock)
Or any other equivalent thing.

How about:

 * iommufd_access_destroy() will wait for any outstanding unmap callback to
 * complete. Once iommufd_access_destroy() returns no unmap ops are running or
 * will run in the future. Due to this a driver must not create locking that
 * prevents unmap from completing while iommufd_access_destroy() is running.
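
Spelled out as driver code, the deadlock that wording is trying to forbid
would look roughly like this (the demo_* driver is entirely made up, just to
illustrate the pattern):

#include <linux/mutex.h>
#include <linux/iommufd.h>

/* Hypothetical driver state, purely for illustration */
struct demo_driver {
	struct mutex lock;
	struct iommufd_access *access;
};

/* Registered as the access's unmap callback */
static void demo_unmap(void *data, unsigned long iova, unsigned long length)
{
	struct demo_driver *demo = data;

	mutex_lock(&demo->lock);	/* blocks while demo_close() holds the lock */
	iommufd_access_unpin_pages(demo->access, iova, length);
	mutex_unlock(&demo->lock);
}

static void demo_close(struct demo_driver *demo)
{
	mutex_lock(&demo->lock);
	/* waits for a running demo_unmap(), which waits for the lock: deadlock */
	iommufd_access_destroy(demo->access);
	mutex_unlock(&demo->lock);
}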
And I should really add a lockdep map here, which I will add as a followup patch:
diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
index de1babd56af156..d2b8e33ffaa0d7 100644
--- a/drivers/iommu/iommufd/device.c
+++ b/drivers/iommu/iommufd/device.c
@@ -5,6 +5,7 @@
 #include <linux/slab.h>
 #include <linux/iommu.h>
 #include <linux/irqdomain.h>
+#include <linux/lockdep.h>
 
 #include "io_pagetable.h"
 #include "iommufd_private.h"
@@ -501,6 +502,15 @@ void iommufd_access_destroy(struct iommufd_access *access)
 {
 	bool was_destroyed;
 
+	/*
+	 * Alert lockdep that this cannot become entangled with an unmap
+	 * callback, or we will have deadlock.
+	 */
+#ifdef CONFIG_LOCKDEP
+	lock_acquire_exclusive(&access->ioas->iopt.unmap_map, 0, 0, NULL, _RET_IP_);
+	lock_release(&access->ioas->iopt.unmap_map, _RET_IP_);
+#endif
+
 	was_destroyed = iommufd_object_destroy_user(access->ictx, &access->obj);
 	WARN_ON(!was_destroyed);
 }
diff --git a/drivers/iommu/iommufd/io_pagetable.c b/drivers/iommu/iommufd/io_pagetable.c
index 3467cea795684c..d858cc7f241fd0 100644
--- a/drivers/iommu/iommufd/io_pagetable.c
+++ b/drivers/iommu/iommufd/io_pagetable.c
@@ -460,6 +460,9 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
 	unsigned long unmapped_bytes = 0;
 	int rc = -ENOENT;
 
+#ifdef CONFIG_LOCKDEP
+	lock_acquire_exclusive(&iopt->unmap_map, 0, 0, NULL, _RET_IP_);
+#endif
 	/*
 	 * The domains_rwsem must be held in read mode any time any area->pages
 	 * is NULL. This prevents domain attach/detatch from running
@@ -521,6 +524,10 @@ static int iopt_unmap_iova_range(struct io_pagetable *iopt, unsigned long start,
 	up_read(&iopt->domains_rwsem);
 	if (unmapped)
 		*unmapped = unmapped_bytes;
+
+#ifdef CONFIG_LOCKDEP
+	lock_release(&iopt->unmap_map, _RET_IP_);
+#endif
 	return rc;
 }
 
@@ -643,6 +650,14 @@ void iopt_init_table(struct io_pagetable *iopt)
 	 * restriction.
 	 */
 	iopt->iova_alignment = 1;
+
+#ifdef CONFIG_LOCKDEP
+	{
+		static struct lock_class_key key;
+
+		lockdep_init_map(&iopt->unmap_map, "access_unmap", &key, 0);
+	}
+#endif
 }
 
 void iopt_destroy_table(struct io_pagetable *iopt)
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index 222e86591f8ac9..8fb8e53ee0d3d3 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -45,6 +45,10 @@ struct io_pagetable {
 	struct rb_root_cached reserved_itree;
 	u8 disable_large_pages;
 	unsigned long iova_alignment;
+
+#ifdef CONFIG_LOCKDEP
+	struct lockdep_map unmap_map;
+#endif
 };
 
 void iopt_init_table(struct io_pagetable *iopt);
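
The idea is that the fake unmap_map is "held" across the whole unmap path,
including any driver unmap callback, while iommufd_access_destroy() does a
momentary acquire/release of the same map. So if a driver takes one of its own
locks inside unmap and also holds that lock around iommufd_access_destroy(),
lockdep sees both orderings and reports the cycle even if the real deadlock
never fires at runtime.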
> > +/**
> > + * iommufd_access_pin_pages() - Return a list of pages under the iova
> > + * @access: IOAS access to act on
> > + * @iova: Starting IOVA
> > + * @length: Number of bytes to access
> > + * @out_pages: Output page list
> > + * @flags: IOPMMUFD_ACCESS_RW_* flags
> > + *
> > + * Reads @length bytes starting at iova and returns the struct page * pointers.
> > + * These can be kmap'd by the caller for CPU access.
> > + *
> > + * The caller must perform iopt_unaccess_pages() when done to balance this.
> this function does not exist
> iommufd_access_unpin_pages()
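
For reference, the intended driver-side use of the (corrected) pair is roughly
the sketch below; demo_read_page() is made up and I'm assuming a page aligned
iova just to keep it short:

#include <linux/highmem.h>
#include <linux/iommufd.h>
#include <linux/string.h>

/* Hypothetical caller: pin one page, read it from the CPU, unpin it */
static int demo_read_page(struct iommufd_access *access, unsigned long iova,
			  void *dst)
{
	struct page *page;
	void *vaddr;
	int rc;

	/* iova assumed page aligned here; 0 means read-only access */
	rc = iommufd_access_pin_pages(access, iova, PAGE_SIZE, &page, 0);
	if (rc)
		return rc;

	/* The returned struct page can be kmap'd for CPU access */
	vaddr = kmap_local_page(page);
	memcpy(dst, vaddr, PAGE_SIZE);
	kunmap_local(vaddr);

	/* Balance the pin with iommufd_access_unpin_pages(), as noted above */
	iommufd_access_unpin_pages(access, iova, PAGE_SIZE);
	return 0;
}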
Thanks,
Jason