From: Nicolin Chen <nicolinc@nvidia.com>
Sent: Wednesday, May 7, 2025 3:18 AM
On Tue, May 06, 2025 at 09:36:38AM +0000, Tian, Kevin wrote:
From: Nicolin Chen <nicolinc@nvidia.com>
Sent: Saturday, April 26, 2025 1:58 PM
The new vCMDQ object will be added for HW to access the guest memory for a HW-accelerated virtualization feature. It needs to ensure that the guest memory pages are pinned when HW accesses them and that they are contiguous in physical address space.
This is very similar to the existing iommufd_access_pin_pages(), which outputs the pinned page list for the caller to test its contiguity.
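For reference, such a caller-side contiguity test over the pinned page list can be as simple as the sketch below (the helper name is illustrative, not from the patch):

#include <linux/mm.h>	/* page_to_pfn() */

/* Minimal sketch: check that pinned pages are physically contiguous. */
static bool pages_are_contiguous(struct page **pages, unsigned long npages)
{
	unsigned long i;

	for (i = 1; i < npages; i++)
		if (page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1)
			return false;
	return true;
}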
Move that code out of iommufd_access_pin/unpin_pages() and related functions into a pair of iopt helpers that can be shared with the vCMDQ allocator. Since the vCMDQ allocator will be a user-space-triggered ioctl function, WARN_ON would not be a good fit in the new iopt_unpin_pages(), so change those to use WARN_ON_ONCE instead.
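Based on this description, the shape of the shared helpers would roughly be (a sketch with assumed signatures; the exact ones are defined in the patch itself):

/*
 * Sketch of the shared iopt helpers described above; signatures are
 * assumed from the description, not copied from the patch. The unpin
 * path uses WARN_ON_ONCE because userspace can now reach it through
 * the vCMDQ ioctl.
 */
int iopt_pin_pages(struct io_pagetable *iopt, unsigned long iova,
		   unsigned long length, struct page **out_pages,
		   unsigned int flags);
void iopt_unpin_pages(struct io_pagetable *iopt, unsigned long iova,
		      unsigned long length);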
Rename check_area_prot() to align with the existing iopt_area helpers, and inline it into the header since iommufd_access_rw() still uses it.
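The inlined header helper would presumably mirror the existing check_area_prot() logic, along these lines (the new name here is an assumption, since the mail does not spell it out):

/* Sketch: the renamed helper as a header inline, reusing the logic of
 * the current check_area_prot() in iommufd; the iopt_area_* name is
 * assumed to match the existing iopt_area helper convention. */
static inline bool iopt_area_check_prot(struct iopt_area *area,
					unsigned int flags)
{
	if (flags & IOMMUFD_ACCESS_RW_WRITE)
		return area->iommu_prot & IOMMU_WRITE;
	return area->iommu_prot & IOMMU_READ;
}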
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Any reason why this cannot be done by the core? All types of vCMDQs need to pin the guest buffer pages, no matter whether the IOMMU accesses GPA or HPA.
Jason made a similar comment earlier [1].
The check of contiguity is still done by the driver if HPA is being accessed.
[1] https://lore.kernel.org/all/20250424134049.GP1648741@nvidia.com/
But I am doing it in the core. I have iopt_pin_pages() called in the core ioctl handler iommufd_vqueue_alloc_ioctl():
https://lore.kernel.org/linux-iommu/1ef2e242ee1d844f823581a5365823d78c67ec6a.1746139811.git.nicolinc@nvidia.com/
IMHO we just want to keep the pin logic in the core while leaving the check of PFN contiguity to the driver (or have a way for the driver to communicate such a need to the core).
It's possible to have an implementation where the IOMMU accesses the GPA of the queue, which further goes through the stage-2 translation. In such a case it's fine to have disjoint PFNs.
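One hypothetical way for the driver to communicate that need to the core, with all names here invented purely for illustration:

/*
 * Hypothetical sketch: the driver tells the core whether its HW walks
 * the queue by HPA (contiguous PFNs required) or by GPA through the
 * stage-2 translation (disjoint PFNs are fine); the core pins the
 * pages either way and only enforces contiguity when asked to.
 */
#include <linux/errno.h>	/* -EINVAL */

struct vqueue_driver_info {
	bool accesses_hpa;	/* true if HW dereferences the queue by HPA */
};

static int core_check_vqueue_pages(struct vqueue_driver_info *info,
				   struct page **pages, unsigned long npages)
{
	if (!info->accesses_hpa)
		return 0;	/* GPA via stage-2: no contiguity needed */
	/* reuse a contiguity test like the one sketched earlier */
	return pages_are_contiguous(pages, npages) ? 0 : -EINVAL;
}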