On 2023/5/17 20:43, Jason Gunthorpe wrote:
On Wed, May 17, 2023 at 06:33:30AM +0000, Tian, Kevin wrote:
From: Baolu Lu <baolu.lu@linux.intel.com> Sent: Wednesday, May 17, 2023 12:15 PM
On 5/16/23 8:27 PM, Jason Gunthorpe wrote:
On Tue, May 16, 2023 at 11:00:16AM +0800, Baolu Lu wrote:
On 5/15/23 10:00 PM, Jason Gunthorpe wrote:
The devices list was used as a simple way to avoid having per-group information. Now that this seems to be unavoidable, just commit to per-group information fully and remove the devices list from the HWPT.
The iommufd_group stores the currently assigned HWPT for the entire group, and we can manage the per-device attach/detach with a list in the iommufd_group.
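For orientation, the per-group bookkeeping could look roughly like the sketch below; the field names (ref, lock, device_list, hwpt) follow the description above and the helper quoted further down, so treat it as an illustration rather than the exact patch.

	struct iommufd_group {
		struct kref ref;
		struct mutex lock;			/* protects device_list and hwpt */
		struct iommufd_ctx *ictx;
		struct iommu_group *group;
		struct iommufd_hw_pagetable *hwpt;	/* currently attached HWPT */
		struct list_head device_list;		/* of iommufd_device->group_item */
	};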
I am preparing the patches to route I/O page faults to user space through iommufd. The iommufd page fault handler knows the hwpt and the device pointer, but it needs to convert the device pointer into its iommufd object id and pass that id to user space.
It's fine that we remove the hwpt->devices here, but perhaps I need to add a context pointer to the ioas later:
	struct iommufd_ioas {
		struct io_pagetable iopt;
		struct mutex mutex;
		struct list_head hwpt_list;
+		struct iommufd_ctx *ictx;
	};
and use the below helper to look up the device id:
	u32 iommufd_get_device_id(struct iommufd_ctx *ictx, struct device *dev)
	{
		struct iommu_group *group = iommu_group_get(dev);
		u32 dev_id = IOMMUFD_INVALID_OBJ_ID;
		struct iommufd_group *igroup;
		struct iommufd_device *cur;
		unsigned int id;

		if (!group)
			return IOMMUFD_INVALID_OBJ_ID;

		id = iommu_group_id(group);

		xa_lock(&ictx->groups);
		igroup = xa_load(&ictx->groups, id);
		if (!iommufd_group_try_get(igroup, group)) {
			xa_unlock(&ictx->groups);
			iommu_group_put(group);
			return IOMMUFD_INVALID_OBJ_ID;
		}
		xa_unlock(&ictx->groups);

		mutex_lock(&igroup->lock);
		list_for_each_entry(cur, &igroup->device_list, group_item) {
			if (cur->dev == dev) {
				dev_id = cur->obj.id;
				break;
			}
		}
		mutex_unlock(&igroup->lock);

		/* drop the references taken above */
		iommufd_put_group(igroup);
		iommu_group_put(group);

		return dev_id;
	}
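As a hedged usage sketch only (the surrounding fault-reporting path and the iommufd_report_fault() name are hypothetical), the helper would be called roughly like this when forwarding a fault:

	static void iommufd_report_fault(struct iommufd_ctx *ictx,
					 struct device *dev,
					 struct iommu_fault *fault)
	{
		u32 dev_id = iommufd_get_device_id(ictx, dev);

		if (dev_id == IOMMUFD_INVALID_OBJ_ID)
			return;
		/* fill dev_id into the fault record queued to user space */
	}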
I dislike how slow this is on something resembling a fastish path :\
Yes, agreed.
Maybe we should stash something in the dev_iommu instead?
Or can the PRI stuff provide a cookie per-device?
We already have a per-device fault cookie:
	/**
	 * struct iommu_fault_param - per-device IOMMU fault data
	 * @handler: Callback function to handle IOMMU faults at device level
	 * @data: handler private data
	 * @faults: holds the pending faults which needs response
	 * @lock: protect pending faults list
	 */
	struct iommu_fault_param {
		iommu_dev_fault_handler_t handler;
		void *data;
		struct list_head faults;
		struct mutex lock;
	};
Perhaps we can add a @dev_id member here?
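Roughly something like this (an illustration only, not a posted patch):

	struct iommu_fault_param {
		iommu_dev_fault_handler_t handler;
		void *data;
+		u32 dev_id;	/* iommufd object id of this device */
		struct list_head faults;
		struct mutex lock;
	};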
What about SIOV? There is only one cookie per parent device.
It doesn't make any sense to store a struct like that in dev_iommu.
The fault handler should come from the domain and we should be able to have a unique 'void *data' cookie linked to the (dev,PASID) to go along with the fault handler.
If I get your point correctly, the iommu core should provide a place for iommufd to store a cookie for each {device, pasid} pair, and provide interfaces to manage it. For example:
	void iommu_set_device_fault_cookie(struct device *dev, ioasid_t pasid,
					   void *fault_cookie);
	void *iommu_get_device_fault_cookie(struct device *dev, ioasid_t pasid);
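A minimal sketch of how the iommu core could back such interfaces, assuming dev_iommu gains a 'fault_cookies' xarray keyed by PASID (that field is an assumption, and the setter is shown returning an error code unlike the void prototype above):

	/*
	 * Sketch only: 'fault_cookies' does not exist in struct dev_iommu
	 * today and is assumed here for illustration.
	 */
	int iommu_set_device_fault_cookie(struct device *dev, ioasid_t pasid,
					  void *fault_cookie)
	{
		struct dev_iommu *param = dev->iommu;

		if (!param)
			return -ENODEV;

		return xa_err(xa_store(&param->fault_cookies, pasid,
				       fault_cookie, GFP_KERNEL));
	}

	void *iommu_get_device_fault_cookie(struct device *dev, ioasid_t pasid)
	{
		struct dev_iommu *param = dev->iommu;

		return param ? xa_load(&param->fault_cookies, pasid) : NULL;
	}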
If so, perhaps we need some special treatment for ARM, as a user hwpt actually represents the PASID table of the device and the guest's setting of a PASID table entry will not be propagated to the host. In that case, the @pasid in the above interfaces is meaningless.
This is all going to need some revising before we can expose it to iommufd
Yes, agreed. I will post a preparation series to do this. Besides the fault cookie, I want to do at least the following preparations:
1) Move the iommu fault uapi from uapi/linux/iommu.h to uapi/linux/iommufd.h and remove the former.
2) Add a device id in the iommu_fault structure.

	struct iommu_fault {
		__u32 type;
-		__u32 padding;
+		__u32 dev_id;
		union {
			struct iommu_fault_unrecoverable event;
			struct iommu_fault_page_request prm;
3) Add the device pointer to the parameters of the domain fault handler (see the sketch after this list).
4) Decouple I/O page fault handling from IOMMU_SVA in the iommu core and the drivers.
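A rough illustration of item 3); the exact shape is up for discussion, and this assumes the domain-level handler is the iopf_handler member currently carried by struct iommu_domain:

	struct iommu_domain {
		...
-		enum iommu_page_response_code (*iopf_handler)(struct iommu_fault *fault,
-							      void *data);
+		enum iommu_page_response_code (*iopf_handler)(struct iommu_fault *fault,
+							      struct device *dev,
+							      void *data);
		void *fault_data;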
Best regards,
baolu