On 11/9/24 17:08, Nicolin Chen wrote:
On Wed, Sep 11, 2024 at 06:12:21AM +0000, Tian, Kevin wrote:
From: Nicolin Chen <nicolinc@nvidia.com> Sent: Wednesday, August 28, 2024 1:00 AM
[...]
On a multi-IOMMU system, the VIOMMU object can be instantiated once per vIOMMU in a guest VM, with all instances holding the same parent HWPT to share the
Is there a restriction that multiple vIOMMU objects can only be created on a multi-IOMMU system?
I think it should generally be restricted to the number of pIOMMUs, although we could likely (not 100% sure) do multiple vIOMMUs on a single-pIOMMU system. Any reason for doing that?
Just to clarify the terminology here - what are pIOMMU and vIOMMU exactly?
On AMD, the IOMMU is a pretend-PCIe device, one per root port; it manages a DT (device table) with one entry per BDFn, and each entry owns a queue. A slice of that can be passed to a VM (== the queues are mapped directly into the VM, and such an IOMMU appears in the VM as a pretend-PCIe device too). So what are [pv]IOMMU here? Thanks,
stage-2 IO pagetable. Each VIOMMU then just needs to allocate its own VMID to attach the shared stage-2 IO pagetable to the physical IOMMU:
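To make the above concrete, here is a minimal userspace sketch, assuming the IOMMU_VIOMMU_ALLOC uAPI roughly as proposed in this series (struct layout and names may still change before merge): the stage-2 nesting-parent HWPT is allocated once (with IOMMU_HWPT_ALLOC_NEST_PARENT), and then one vIOMMU object is allocated per physical IOMMU against that same HWPT.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

/*
 * Illustrative sketch only, based on the proposed uAPI: allocate one
 * vIOMMU object per physical IOMMU, all sharing one nesting-parent
 * (stage-2) HWPT.
 */
static int viommu_alloc_one(int iommufd, uint32_t dev_id, uint32_t s2_hwpt_id,
			    uint32_t *out_viommu_id)
{
	struct iommu_viommu_alloc cmd = {
		.size = sizeof(cmd),
		.type = IOMMU_VIOMMU_TYPE_DEFAULT, /* or a vendor type */
		.dev_id = dev_id,      /* a device behind the target pIOMMU */
		.hwpt_id = s2_hwpt_id, /* the shared stage-2 parent HWPT */
	};

	if (ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &cmd))
		return -1;
	*out_viommu_id = cmd.out_viommu_id;
	return 0;
}

Each returned vIOMMU object then carries its own VMID in the kernel, while all of them reference the same stage-2 IO page table.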
This reads as if 'VMID' were a virtual ID allocated by the vIOMMU. But from the entire context it actually means the physical 'VMID' allocated on the associated physical IOMMU, correct?
Quoting Jason's narrative, a VMID is a "security namespace for guest-owned IDs". The allocation, using SMMU as an example, should be part of the vIOMMU instance allocation in the host SMMU driver. Then, this VMID will be used to tag the cache entries. So it is still a software-allocated ID, though the HW uses it too.
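A rough kernel-side sketch of that idea, with entirely hypothetical names (this is not the actual arm-smmu-v3 code): the driver allocates a VMID when the vIOMMU instance is created, records which physical SMMU and shared stage-2 domain it belongs to, and that VMID is later used to tag and invalidate cached entries for this guest.

#include <linux/idr.h>
#include <linux/gfp.h>
#include <linux/types.h>

/* Hypothetical types, for illustration only. */
struct my_viommu {
	struct my_smmu_device *smmu;      /* the physical IOMMU instance */
	struct my_smmu_domain *s2_parent; /* shared stage-2 IO page table */
	u16 vmid;                         /* guest-owned security namespace ID */
};

static int my_viommu_init(struct my_viommu *viommu,
			  struct my_smmu_device *smmu,
			  struct my_smmu_domain *s2_parent)
{
	int vmid;

	/* Software picks the ID; the HW then uses it to tag cached entries. */
	vmid = ida_alloc_range(&smmu->vmid_map, 1, smmu->max_vmid, GFP_KERNEL);
	if (vmid < 0)
		return vmid;

	viommu->smmu = smmu;
	viommu->s2_parent = s2_parent;
	viommu->vmid = vmid;
	return 0;
}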
Thanks
Nicolin