On Wed, Sep 18, 2024 at 08:10:52AM +0000, Tian, Kevin wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Saturday, September 14, 2024 10:51 PM
> > On Fri, Sep 13, 2024 at 02:33:59AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Thursday, September 12, 2024 7:08 AM
> > > > On Wed, Sep 11, 2024 at 08:13:01AM +0000, Tian, Kevin wrote:
> > > > > Probably there is a good reason, e.g. simplification or better
> > > > > alignment with the hw accel stuff, but it's not explained
> > > > > clearly so far.
> > > > Probably the most concrete thing is that if you have a direct
> > > > assignment invalidation queue (ie one DMA'd directly by HW) then it
> > > > only applies to a single pIOMMU, and invalidation commands placed
> > > > there are unavoidably limited in scope.
> > > >
> > > > This creates a representation problem: if we have a vIOMMU that
> > > > spans many pIOMMUs but invalidations only reach some subset, how do
> > > > we model that? Just saying the vIOMMU is linked to one pIOMMU
> > > > solves this nicely.
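To make the linkage concrete, a minimal sketch assuming the
IOMMU_VIOMMU_ALLOC uAPI proposed in this series (the helper name is
made up and error handling is elided): the vIOMMU object is pinned to
one pIOMMU through dev_id, so a VM whose devices sit behind two
pIOMMUs simply gets two vIOMMU objects.

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  /* Allocate one vIOMMU object bound to the pIOMMU that dev_id sits
   * behind; hwpt_id must be a nesting parent HWPT on that same
   * pIOMMU. Returns the new vIOMMU object ID or -1. */
  static int viommu_alloc(int iommufd, uint32_t dev_id, uint32_t hwpt_id)
  {
          struct iommu_viommu_alloc cmd = {
                  .size = sizeof(cmd),
                  .type = IOMMU_VIOMMU_TYPE_ARM_SMMUV3,
                  .dev_id = dev_id,
                  .hwpt_id = hwpt_id,
          };

          if (ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &cmd))
                  return -1;
          return cmd.out_viommu_id;
  }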
> > > Yes, that is a good reason.
> > >
> > > btw do we expect the VMM to try-and-fail when deciding whether a new
> > > vIOMMU object is required when creating a new vdev?
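For reference, the try-and-fail flow being asked about would look
roughly like this, a sketch only, assuming the IOMMU_VDEVICE_ALLOC
uAPI from this series (the helper name and the exact failure errno
are assumptions):

  /* Try to place the vdev on an existing vIOMMU; if the kernel
   * refuses, e.g. because the device is behind a different pIOMMU,
   * the caller falls back to allocating a new vIOMMU and retrying. */
  static int vdev_try_place(int iommufd, uint32_t viommu_id,
                            uint32_t dev_id, uint64_t virt_id)
  {
          struct iommu_vdevice_alloc cmd = {
                  .size = sizeof(cmd),
                  .viommu_id = viommu_id,
                  .dev_id = dev_id,
                  .virt_id = virt_id,     /* e.g. the vSID on SMMUv3 */
          };

          if (!ioctl(iommufd, IOMMU_VDEVICE_ALLOC, &cmd))
                  return cmd.out_vdevice_id;
          return -1;      /* mismatched pIOMMU, among other failures */
  }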
> > I think there was some suggestion that getinfo could return this, but
> > I also think qemu needs to have a command line that matches the
> > physical setup, so maybe it needs some sysfs?
> My impression was that Qemu is moving away from directly accessing
> sysfs (e.g. that is the reason behind allowing Libvirt to pass an
> opened cdev fd to Qemu). So probably getinfo makes more sense...
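For context, the getinfo path here is the existing IOMMU_GET_HW_INFO
ioctl, roughly as in the sketch below (buffer sizing simplified). Today
it returns IOMMU-type-specific register data, so reporting which
pIOMMU instance a device sits behind would be an extension of it:

  /* Query hw info for a device already bound to iommufd;
   * out_data_type identifies the struct written to data_uptr,
   * e.g. IOMMU_HW_INFO_TYPE_ARM_SMMUV3. */
  struct iommu_hw_info_arm_smmuv3 blob = {};
  struct iommu_hw_info info = {
          .size = sizeof(info),
          .dev_id = dev_id,
          .data_len = sizeof(blob),
          .data_uptr = (uintptr_t)&blob,
  };

  if (ioctl(iommufd, IOMMU_GET_HW_INFO, &info))
          return -1;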
Yes, but I think libvirt needs this information before it invokes
qemu...

The physical and virtual iommus need to sort of match; something
should figure this out automatically, I would guess.
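One way the layer above qemu could figure it out: each device exposes
an "iommu" symlink in sysfs pointing at its pIOMMU, so devices that
resolve to the same target can share one vIOMMU on the generated
command line. A rough sketch (PCI-only, error handling elided):

  #include <stdio.h>
  #include <unistd.h>
  #include <limits.h>

  /* Resolve the pIOMMU sysfs node behind a PCI device, e.g. "dmar0"
   * on VT-d; devices with the same result can share one vIOMMU.
   * readlink() does not NUL-terminate, the caller must. */
  static ssize_t pci_dev_piommu(const char *bdf, char *buf, size_t len)
  {
          char path[PATH_MAX];

          snprintf(path, sizeof(path),
                   "/sys/bus/pci/devices/%s/iommu", bdf);
          return readlink(path, buf, len - 1);
  }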
Jason