From: Liu, Yi L <yi.l.liu@intel.com>
Sent: Monday, July 24, 2023 7:13 PM
This adds the data structure for flushing the IOTLB for nested domains allocated with the IOMMU_HWPT_TYPE_VTD_S1 type.

The cache invalidation path is a performance path, so it's better to avoid memory allocation there. To achieve this, the path reuses the ucmd_buffer to copy user data, so the new data structures are added to the ucmd_buffer union to avoid overflowing it.
This patch has nothing to do with ucmd_buffer.
+/**
+ * struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
+ *                                       (IOMMU_HWPT_TYPE_VTD_S1)
+ * @flags: Must be 0
+ * @entry_size: Size in bytes of each cache invalidation request
+ * @entry_nr_uptr: User pointer to the number of invalidation requests.
+ *                 Kernel reads it to get the number of requests and
+ *                 updates the buffer with the number of requests that
+ *                 have been processed successfully. This pointer must
+ *                 point to a __u32 type of memory location.
+ * @inv_data_uptr: Pointer to the cache invalidation requests
+ *
+ * The Intel VT-d specific invalidation data for a set of cache invalidation
+ * requests. Kernel loops the requests one-by-one and stops when failure
+ * is encountered. The number of handled requests is reported to user by
+ * writing the buffer pointed by @entry_nr_uptr.
+ */
+struct iommu_hwpt_vtd_s1_invalidate {
+	__u32 flags;
+	__u32 entry_size;
+	__aligned_u64 entry_nr_uptr;
+	__aligned_u64 inv_data_uptr;
+};
I wonder whether this array can be defined directly in the common struct iommu_hwpt_invalidate, so the underlying iommu driver doesn't need to deal with user buffers itself, including the various minsz/backward-compat checks.

SMMU may not require it since it uses a native queue format, but that could be treated as a special case of a 1-entry array. With careful coding the added cost should be negligible.

Jason, your thoughts?
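A rough sketch of what that alternative could look like, purely for discussion: every name and the fixed bound below are invented and exist in no kernel tree. The core would copy the whole command (entries included) from userspace once, so drivers only ever see kernel memory.

```c
#include <stddef.h>
#include <stdint.h>

typedef uint32_t __u32;
typedef uint64_t __aligned_u64;

/* Made-up fixed bound for illustration only. */
#define IOMMU_INV_ENTRIES_MAX 32

/* Hypothetical alternative: the common invalidate command carries the
 * entry array inline, so the core copies it from userspace once and
 * drivers never parse user pointers or repeat minsz checks. SMMU's
 * native queue format becomes the 1-entry special case. */
struct iommu_hwpt_invalidate_inline {
	__u32 hwpt_id;
	__u32 entry_size;	/* bytes per driver-specific entry */
	__u32 entry_nr;		/* in: number queued; out: number handled */
	__u32 __reserved;	/* must be 0 */
	__aligned_u64 entries[IOMMU_INV_ENTRIES_MAX];	/* opaque payload */
};
```

The trade-off is a fixed upper bound on entries per ioctl versus the pointer-chasing flexibility of the per-driver layout above.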