On Tue, Feb 07, 2023 at 12:32:50AM +0000, Tian, Kevin wrote:
From: Jason Gunthorpe jgg@nvidia.com Sent: Monday, February 6, 2023 9:25 PM
On Mon, Feb 06, 2023 at 06:57:35AM +0000, Tian, Kevin wrote:
From: Jason Gunthorpe jgg@nvidia.com Sent: Friday, February 3, 2023 11:03 PM
On Fri, Feb 03, 2023 at 08:26:44AM +0000, Tian, Kevin wrote:
From: Nicolin Chen nicolinc@nvidia.com Sent: Thursday, February 2, 2023 3:05 PM
All drivers are already required to support changing between active UNMANAGED domains when using their attach_dev ops.
All drivers which don't have *broken* UNMANAGED domain?
No, all drivers.. It has always been used by VFIO.
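To make that contract concrete: a driver's attach_dev must tolerate the device still being attached to another live UNMANAGED domain and simply switch the translation over, rather than failing. A minimal sketch follows; every mydrv_* name is hypothetical, only the attach_dev signature comes from struct iommu_ops:

/*
 * Hypothetical driver callback, for illustration only.  The point is
 * that attach_dev must work even while the device is attached to a
 * different live UNMANAGED domain: it overwrites the translation
 * instead of returning an error.
 */
static int mydrv_attach_dev(struct iommu_domain *domain, struct device *dev)
{
	struct mydrv_domain *new = container_of(domain, struct mydrv_domain,
						 domain);
	struct mydrv_dev *mdev = dev_iommu_priv_get(dev);

	/* Install the new page table regardless of what was there before. */
	mydrv_set_page_table(mdev, new->pgd);
	mdev->domain = new;
	return 0;
}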
existing iommu_attach_group() doesn't support changing between two UNMANAGED domains, only from default->unmanaged or blocking->unmanaged.
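The restriction being described comes from the check at the top of the group attach path; roughly this, paraphrased from the iommu core of that era rather than quoted exactly:

static int __iommu_attach_group(struct iommu_domain *domain,
				struct iommu_group *group)
{
	int ret;

	/* Only allow replacing the default or blocking domain. */
	if (group->domain && group->domain != group->default_domain &&
	    group->domain != group->blocking_domain)
		return -EBUSY;

	ret = __iommu_group_for_each_dev(group, domain,
					 iommu_group_do_attach_device);
	if (ret == 0)
		group->domain = domain;

	return ret;
}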
Yes, but before we added the blocking domains VFIO was changing between unmanaged domains. Blocking domains are so new that no driver could have suddenly started to depend on this.
In legacy VFIO the unmanaged domain was 1:1 associated with a vfio container. I didn't see how a group can switch between two containers w/o going through a transition to/from the default domain, i.e. detach from the 1st container and then attach to the 2nd.
Yes, in the past we went through the default domain which is basically another unmanaged domain type. So unmanaged -> unmanaged is OK.
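So the legacy container switch was, in effect, the sequence below; an illustrative helper only, where old_dom/new_dom stand for the two containers' UNMANAGED domains:

/*
 * Illustrative only.  Detaching drops the group back onto its default
 * domain, so the switch is really unmanaged -> default -> unmanaged.
 */
static int switch_container(struct iommu_group *group,
			    struct iommu_domain *old_dom,
			    struct iommu_domain *new_dom)
{
	iommu_detach_group(old_dom, group);	/* group now on default domain */
	return iommu_attach_group(new_dom, group);
}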
Inside the driver, it can keep track of the domain pointer if attach_dev succeeds.
Are you referring to the lack of error unwinding in __iommu_group_for_each_dev(), i.e. if it fails partway some devices may already have had attach_dev succeed, so simply restoring group->domain in __iommu_attach_group() is insufficient?
Yes
Jason
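The failure mode being confirmed is that the per-device attach loop stops at the first attach_dev failure without detaching the devices that already moved, so restoring group->domain alone leaves the group in a mixed state. A paraphrased sketch of that loop; attach_all is a made-up name, not the real core function:

/*
 * Paraphrased sketch, not the exact core code.  If attach_dev fails
 * for the third device, the first two are already on new_domain;
 * resetting group->domain to the old value does not move them back,
 * so the recorded domain no longer matches the hardware state.
 */
static int attach_all(struct iommu_group *group,
		      struct iommu_domain *new_domain)
{
	struct group_device *gdev;
	int ret;

	list_for_each_entry(gdev, &group->devices, list) {
		ret = __iommu_attach_device(new_domain, gdev->dev);
		if (ret)
			return ret;	/* no unwinding of earlier devices */
	}
	group->domain = new_domain;
	return 0;
}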