This RFC builds on T.J. Mercier's earlier series [1] which added
a memory.stat counter for exported dma-bufs and a binder-backed
mechanism to transfer charges between cgroups.
The first commit is taken almost verbatim from TJ's series:
it introduces MEMCG_DMABUF as a dedicated per-cgroup stat, so that
the total exported dma-buf footprint is visible both system-wide
(via the root cgroup) and per-application (via per-process cgroups).
This avoids the overhead of DMABUF_SYSFS_STATS and integrates
naturally into the existing cgroup memory hierarchy.
The rest of the series departs from TJ's approach. The export-time
charging that the first commit adds in dma_buf_export() is
superseded: we charge at dma_heap_ioctl_allocate() time instead,
using a new charge_pid_fd field in struct dma_heap_allocation_data.
The allocator opens a pidfd for its client (e.g., from binder's
sender_pid), passes it to the ioctl, and the kernel charges the
buffer directly to the client's cgroup at allocation time, so no
transfer step is needed.
This decouples the accounting path from binder entirely:
any allocator that knows its client's PID can use the pid_fd
mechanism regardless of the IPC transport in use.
The cross-cgroup charging capability requires access control.
Patches #3 and #4 add a generic LSM hook (security_dma_heap_alloc)
and an SELinux implementation based on a new dma_heap object class
with a charge_to permission, so policy authors can express which
domains are allowed to charge memory to another domain's cgroup.
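For illustration, a policy rule granting such cross-charging could look
like the fragment below. The dma_heap class and charge_to permission
are the ones this series introduces; the domain names are hypothetical
examples, not part of any shipped policy:

```
# Hypothetical: allow a graphics allocator domain to charge dma-buf
# memory to an application domain's cgroup via charge_pid_fd.
allow hal_graphics_allocator appdomain:dma_heap charge_to;
```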
The last patch adds tests that exercise the new charge_pid_fd field.
We are sending this as an RFC to spark broader discussion. It may or
may not be the right path forward, and we welcome feedback on the
trade-offs.
Collision note: Eric Chanudet's series [2] adds __GFP_ACCOUNT to
system_heap page allocations as an opt-in module parameter. That
approach charges pages to the allocator's own kmem, which overlaps with
MEMCG_DMABUF. This series explicitly removes __GFP_ACCOUNT from system
heap allocations and routes all accounting through the MEMCG_DMABUF
path to avoid double-counting.
[1] https://lore.kernel.org/cgroups/20230109213809.418135-1-tjmercier@google.co…
[2] https://lore.kernel.org/r/20260113-dmabuf-heap-system-memcg-v2-0-e85722cc2f…
Signed-off-by: Albert Esteve <aesteve(a)redhat.com>
---
Albert Esteve (4):
dma-heap: charge dma-buf memory via explicit memcg
security: dma-heap: Add dma_heap_alloc LSM hook
selinux: Restrict cross-cgroup dma-heap charging
selftests/dmabuf-heaps: Add dma-buf memcg accounting tests
T.J. Mercier (1):
memcg: Track exported dma-buffers
Documentation/admin-guide/cgroup-v2.rst | 5 +
drivers/dma-buf/dma-buf.c | 7 +
drivers/dma-buf/dma-heap.c | 54 +++++-
drivers/dma-buf/heaps/system_heap.c | 2 -
include/linux/dma-buf.h | 4 +
include/linux/lsm_hook_defs.h | 1 +
include/linux/memcontrol.h | 37 ++++
include/linux/security.h | 7 +
include/uapi/linux/dma-heap.h | 6 +
mm/memcontrol.c | 19 ++
security/security.c | 16 ++
security/selinux/hooks.c | 7 +
security/selinux/include/classmap.h | 1 +
tools/testing/selftests/cgroup/Makefile | 2 +-
tools/testing/selftests/cgroup/test_memcontrol.c | 143 +++++++++++++-
tools/testing/selftests/dmabuf-heaps/config | 1 +
tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 126 ++++++++++++-
tools/testing/selftests/dmabuf-heaps/vmtest.sh | 205 +++++++++++++++++++++
18 files changed, 633 insertions(+), 10 deletions(-)
---
base-commit: 74fe02ce122a6103f207d29fafc8b3a53de6abaf
change-id: 20260508-v2_20230123_tjmercier_google_com-f44fcfb16530
Best regards,
--
Albert Esteve <aesteve(a)redhat.com>
What tree is this against? I can't apply it against the usual
candidates, even accounting for the time lag in getting to it.
Can you provide a git tree?
On 5/12/26 05:07, Deepanshu Kartikey wrote:
> virtio_gpu_cursor_plane_update() allocates a virtio_gpu_object_array,
> locks its dma_resv, and queues a fenced transfer to the host. The
> lock acquisition can fail in two ways:
>
> - dma_resv_lock_interruptible() returns -EINTR when a signal is
> delivered while waiting for the reservation lock.
> - dma_resv_reserve_fences() returns -ENOMEM if it fails to allocate
> a fence slot; in this case lock_resv unlocks before returning.
>
> The return value was ignored, so the cursor path could proceed with
> the resv lock not held. The queue path then walks the object array
> and calls dma_resv_add_fence(), which requires the lock; with lockdep
> enabled this trips dma_resv_assert_held():
>
> WARNING: drivers/dma-buf/dma-resv.c:296 at dma_resv_add_fence+0x71e/0x840
> Call Trace:
> virtio_gpu_array_add_fence
> virtio_gpu_queue_ctrl_sgs
> virtio_gpu_queue_fenced_ctrl_buffer
> virtio_gpu_cursor_plane_update
> drm_atomic_helper_commit_planes
> drm_atomic_helper_commit_tail
> commit_tail
> drm_atomic_helper_commit
> drm_atomic_commit
> drm_atomic_helper_update_plane
> __setplane_atomic
> drm_mode_cursor_universal
> drm_mode_cursor_common
> drm_mode_cursor_ioctl
> drm_ioctl
> __x64_sys_ioctl
>
> Beyond the WARN, mutating the dma_resv fence list without the lock
> races with concurrent readers/writers and can corrupt the list.
>
> The DRM atomic helpers do not allow .atomic_update to fail: by the
> time it runs, the commit has been signed off to userspace and there
> is no clean rollback path. Move the fallible work -- objs allocation,
> dma_resv locking, and fence slot reservation -- into
> virtio_gpu_plane_prepare_fb, which is the designated callback for
> resource acquisition and may return errors that the framework
> handles by rolling back the commit. Stash the prepared object array
> on virtio_gpu_plane_state so the update step can consume it.
>
> Make virtio_gpu_plane_cleanup_fb release the objs if the commit was
> rolled back before update ran (i.e., objs not consumed). The queue
> path already unlocks the resv after attaching the fence (vq.c:411)
> and frees the array via put_free_delayed after host completion
> (vq.c:271), so the update step only needs to clear vgplane_st->objs
> to transfer ownership.
>
> Simplify virtio_gpu_cursor_plane_update to a no-fail queue submission
> that hands the prepared, locked objs to the queue path.
>
> The bug was reported by syzbot, triggered via fault injection
> (fail_nth) on the DRM_IOCTL_MODE_CURSOR path, which forces the
> -ENOMEM branch in dma_resv_reserve_fences().
>
> Reported-by: syzbot+72bd3dd3a5d5f39a0271(a)syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=72bd3dd3a5d5f39a0271
> Fixes: 5cfd31c5b3a3 ("drm/virtio: fix virtio_gpu_cursor_plane_update().")
> Cc: stable(a)vger.kernel.org
> Link: https://lore.kernel.org/all/20260510053025.100224-1-kartikey406@gmail.com/T/ [v1]
> Signed-off-by: Deepanshu Kartikey <kartikey406(a)gmail.com>
> ---
> v2: Move resv lock acquisition from .atomic_update (which must not
> fail) to .prepare_fb (which may), per maintainer review of v1.
> The previous approach of silently skipping the cursor update on
> lock failure violated the atomic-commit contract with userspace.
> ---
> drivers/gpu/drm/virtio/virtgpu_drv.h | 1 +
> drivers/gpu/drm/virtio/virtgpu_plane.c | 38 ++++++++++++++++++++------
> 2 files changed, 30 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index f17660a71a3e..e51f959dce46 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -198,6 +198,7 @@ struct virtio_gpu_framebuffer {
> struct virtio_gpu_plane_state {
> struct drm_plane_state base;
> struct virtio_gpu_fence *fence;
> + struct virtio_gpu_object_array *objs;
> };
> #define to_virtio_gpu_plane_state(x) \
> container_of(x, struct virtio_gpu_plane_state, base)
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index a126d1b25f46..b0511ace89e6 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -381,6 +381,23 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
> goto err_fence;
> }
>
> + if (plane->type == DRM_PLANE_TYPE_CURSOR && bo->dumb) {
> + struct virtio_gpu_object_array *objs;
> +
> + objs = virtio_gpu_array_alloc(1);
> + if (!objs) {
> + ret = -ENOMEM;
> + goto err_fence;
> + }
> + virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
> + ret = virtio_gpu_array_lock_resv(objs);
> + if (ret) {
> + virtio_gpu_array_put_free(objs);
> + goto err_fence;
> + }
> + vgplane_st->objs = objs;
> + }
> +
> return 0;
>
> err_fence:
> @@ -417,6 +434,12 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
> vgplane_st->fence = NULL;
> }
>
> + if (vgplane_st->objs) {
> + virtio_gpu_array_unlock_resv(vgplane_st->objs);
> + virtio_gpu_array_put_free(vgplane_st->objs);
> + vgplane_st->objs = NULL;
> + }
> +
> obj = state->fb->obj[0];
> if (drm_gem_is_imported(obj))
> virtio_gpu_cleanup_imported_obj(obj);
> @@ -452,21 +475,18 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
> }
>
> if (bo && bo->dumb && (plane->state->fb != old_state->fb)) {
> - /* new cursor -- update & wait */
> - struct virtio_gpu_object_array *objs;
> -
> - objs = virtio_gpu_array_alloc(1);
> - if (!objs)
> - return;
> - virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
> - virtio_gpu_array_lock_resv(objs);
> + /* objs and fence were prepared in virtio_gpu_plane_prepare_fb;
> + * the resv is already locked. The queue path takes ownership
> + * of objs and unlocks the resv after attaching the fence.
> + */
> virtio_gpu_cmd_transfer_to_host_2d
> (vgdev, 0,
> plane->state->crtc_w,
> plane->state->crtc_h,
> - 0, 0, objs, vgplane_st->fence);
> + 0, 0, vgplane_st->objs, vgplane_st->fence);
> virtio_gpu_notify(vgdev);
> dma_fence_wait(&vgplane_st->fence->f, true);
> + vgplane_st->objs = NULL;
> }
>
> if (plane->state->fb != old_state->fb) {
I'm getting a lockup with this patch applied, and now see that
virtio_gpu_resource_flush() also locks the BO.
Easiest option might be to add uninterruptible variant of
virtio_gpu_array_lock_resv(). Could you please try it for v3?
--
Best regards,
Dmitry
On Tue, May 12, 2026 at 09:42:01AM +1000, Alexey Kardashevskiy wrote:
> > true but either way dmabuf slicing will be directed by QEMU's msix-table
> > emulation MR and this slicing needs to match the TDISP report so I'll
> > have to teach QEMU these reports, right?
>
> Or TDISP devices are going to align MSIX BARs to 4K, and QEMU will
> do the same and it should "just work", and if it does not - the host
> won't crash. Can this work? Thanks,
Host crashing is a different issue; I think the plan was to revoke
the entire MMIO space from userspace and remove it from the kernel
mapping. Entire, because we don't want to parse the TDISP report to
figure out something narrower.
Therefore there is no way the host can crash.
When qemu constructs the VM memory map it already has a scheme to
insert a hole for a SW emulated page for MSI. That will keep working
exactly as it is.
When the VM validates the MMIO the hole has to fall within a T=0 space
of the TDISP report or the VM will reject it.
This means devices need to have a T=0 hole around their MSI-X/etc
suitable for a 64K page size OS.
This is already the case; if a device mixes MSI-X with other things,
qemu will work but becomes horribly slow and a little broken.
Jason
On Sat, May 09, 2026 at 01:31:56PM +0800, Xu Yilun wrote:
> > Would you be open to an in-between? The exporter and importer both
> > have information that should not leak into each other's drivers.
> >
> > What if the dmabuf mapping type core code was the only thing that had
> > access to *BOTH*? The exporter provides the address data, the importer
> > provides the iommu_domain. The core code, and only the core code, has
> > both and does the required operation?
>
> I think that may not work for KVM. On IOMMU side, IOMMUFD acts as the
> address space (iova) manager and dma_api/IOMMU driver acts as the
> actual page table mapper. But for KVM, it is both. KVM doesn't allow
> another component to provide an unknown address space (GPA space) and
> say "map it", so doesn't expose to other components about "KVM domain".
>
> Even if we expose "KVM domain", KVM still acts as the importer and the
> mapper, is it weird to say we trust KVM-the-mapper, but don't trust
> KVM-the-as-manager?
>
> Is it also weird that we trust IOMMU-the-mapper, but don't trust
> IOMMUFD-the-as-manager? There are more IOMMU drivers than IOMMUFD...
Yeah, it doesn't work well for kvm, and yes it is really weird and
worse than phys in every way.
Jason
On Mon, 11 May 2026 16:30:39 +0100
Matt Evans <mattev(a)meta.com> wrote:
> Hi Alex, Leon,
>
> On 27/04/2026 15:36, Alex Williamson wrote:
> >
> > On Sun, 26 Apr 2026 13:52:15 +0300
> > Leon Romanovsky <leon(a)kernel.org> wrote:
> >
> >> On Fri, Apr 24, 2026 at 03:31:53PM -0300, Jason Gunthorpe wrote:
> >>> On Thu, Apr 16, 2026 at 06:17:52AM -0700, Matt Evans wrote:
> >>>> A new field is reserved in vfio_device_feature_dma_buf.flags to
> >>>> request CPU-facing memory type attributes for mmap()s of the buffer.
> >>>> Add a flag VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_WC, which results in WC
> >>>> PTEs for the DMABUF's BAR region.
> >>>>
> >>>> Signed-off-by: Matt Evans <mattev(a)meta.com>
> >>>> ---
> >>>> drivers/vfio/pci/vfio_pci_dmabuf.c | 15 +++++++++++++--
> >>>> drivers/vfio/pci/vfio_pci_priv.h | 1 +
> >>>> include/uapi/linux/vfio.h | 12 +++++++++---
> >>>> 3 files changed, 23 insertions(+), 5 deletions(-)
> >>>
> >>> Nice and simple
> >>>
> >>> Reviewed-by: Jason Gunthorpe <jgg(a)nvidia.com>
> >>>
> >>>> @@ -1549,8 +1551,12 @@ struct vfio_region_dma_range {
> >>>> struct vfio_device_feature_dma_buf {
> >>>> __u32 region_index;
> >>>> __u32 open_flags;
> >>>> - __u32 flags;
> >>>> - __u32 nr_ranges;
> >>>> + __u32 flags;
> >>>> + /* Flags sub-field reserved for attribute enum */
> >>>> +#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_MASK (0xfU << 28)
> >>>> +#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_UC (0 << 28)
> >>>> +#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_WC (1 << 28)
> >>>> + __u32 nr_ranges;
> >>
> >> Alex,
> >>
> >> The TPH proposal extends the flags field in a similar way, but I suggested
> >> a different approach to conserve bits. At the moment, we spend three bits
> >> on a single feature, which feels wasteful.
> >>
> >> What do you think?
> >> https://lore.kernel.org/all/20260409120415.GF86584@unreal/
> >
> > I already proposed a very different interface for TPH that decouples
> > the dma-buf creation from setting the TPH values:
> >
> > https://lore.kernel.org/all/20260423132016.4a25e074@shazbot.org/
> >
> > This is overall less intrusive than the TPH change proposed, but it
> > could still make sense to align this as an operation on the dma-buf,
> > that can be probed as a separate feature. Thanks,
>
> I'll add a VFIO_DEVICE_FEATURE_DMA_BUF_ATTRS in a v2 instead to get in
> line with the TPH work, no worries.
>
> For the benefit of future hackers, how would you describe the criteria
> for adding flags to this existing field? What hypothetical feature
> characteristics would be appropriate? (Maybe it's that these attrs &
> TPH add scalar fields in several bits rather than a simple boolean.)
> Two of us have independently added something that's turned out to be
> inappropriate so some guidance would be good.
I think the question of how we actually expand an arbitrary grab bag of
"ATTRS" is the central question in whether we should implement the
interface. If we follow the direction I suggested for TPH, maybe this
is just a VFIO_DEVICE_FEATURE_DMA_BUF_WC, where it supports only PROBE
and SET, with SET taking only the dma-buf fd to implement the one-way
promotion from UC -> WC.
If we support a generic SET ATTRS feature, we really need to map out how
flag bits are indicated as supported and how a user untangles failures
from trying to set various attributes. If we end up with a feature
indicating each ATTR is available, we might as well have just
implemented a feature for each attribute. Thanks,
Alex
On Mon, May 11, 2026 at 04:30:39PM +0100, Matt Evans wrote:
> Hi Alex, Leon,
>
> On 27/04/2026 15:36, Alex Williamson wrote:
> >
> > On Sun, 26 Apr 2026 13:52:15 +0300
> > Leon Romanovsky <leon(a)kernel.org> wrote:
> >
> > > On Fri, Apr 24, 2026 at 03:31:53PM -0300, Jason Gunthorpe wrote:
> > > > On Thu, Apr 16, 2026 at 06:17:52AM -0700, Matt Evans wrote:
> > > > > A new field is reserved in vfio_device_feature_dma_buf.flags to
> > > > > request CPU-facing memory type attributes for mmap()s of the buffer.
> > > > > Add a flag VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_WC, which results in WC
> > > > > PTEs for the DMABUF's BAR region.
> > > > >
> > > > > Signed-off-by: Matt Evans <mattev(a)meta.com>
> > > > > ---
> > > > > drivers/vfio/pci/vfio_pci_dmabuf.c | 15 +++++++++++++--
> > > > > drivers/vfio/pci/vfio_pci_priv.h | 1 +
> > > > > include/uapi/linux/vfio.h | 12 +++++++++---
> > > > > 3 files changed, 23 insertions(+), 5 deletions(-)
> > > >
> > > > Nice and simple
> > > >
> > > > Reviewed-by: Jason Gunthorpe <jgg(a)nvidia.com>
> > > > > @@ -1549,8 +1551,12 @@ struct vfio_region_dma_range {
> > > > > struct vfio_device_feature_dma_buf {
> > > > > __u32 region_index;
> > > > > __u32 open_flags;
> > > > > - __u32 flags;
> > > > > - __u32 nr_ranges;
> > > > > + __u32 flags;
> > > > > + /* Flags sub-field reserved for attribute enum */
> > > > > +#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_MASK (0xfU << 28)
> > > > > +#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_UC (0 << 28)
> > > > > +#define VFIO_DEVICE_FEATURE_DMA_BUF_ATTR_WC (1 << 28)
> > > > > + __u32 nr_ranges;
> > >
> > > Alex,
> > >
> > > The TPH proposal extends the flags field in a similar way, but I suggested
> > > a different approach to conserve bits. At the moment, we spend three bits
> > > on a single feature, which feels wasteful.
> > >
> > > What do you think?
> > > https://lore.kernel.org/all/20260409120415.GF86584@unreal/
> >
> > I already proposed a very different interface for TPH that decouples
> > the dma-buf creation from setting the TPH values:
> >
> > https://lore.kernel.org/all/20260423132016.4a25e074@shazbot.org/
> >
> > This is overall less intrusive than the TPH change proposed, but it
> > could still make sense to align this as an operation on the dma-buf,
> > that can be probed as a separate feature. Thanks,
>
> I'll add a VFIO_DEVICE_FEATURE_DMA_BUF_ATTRS in a v2 instead to get in line
> with the TPH work, no worries.
>
> For the benefit of future hackers, how would you describe the criteria for
> adding flags to this existing field?
One bit per-feature.
> What hypothetical feature characteristics would be appropriate? (Maybe it's that these attrs & TPH
> add scalar fields in several bits rather than a simple boolean.) Two of us
> have independently added something that's turned out to be inappropriate so
> some guidance would be good.
Both of you intertwined the signaling bit with the actual data, and that is
what led me to prefer a different approach.
Thanks
>
> Thanks!
>
>
> Matt
On Thu, May 07, 2026 at 05:16:56PM +1000, Alexey Kardashevskiy wrote:
> true but either way dmabuf slicing will be directed by QEMU's
> msix-table emulation MR and this slicing needs to match the TDISP
> report so I'll have to teach QEMU these reports, right? I am worried
> if I miss something obvious, again. Thanks,
I don't think so.. It just needs to slice it into the MSI page
blindly. When the VM goes to validate the TDISP report against the
mappings it will fail to accept the device if there is a mismatch.
The only thing qemu could do is fail sooner, but I don't know that is
worth the complexity as we do expect all devices to have their MSI
range unprotected.
Jason