On Thu, Jun 05, 2025 at 09:47:01PM +0530, Aneesh Kumar K.V wrote:
> Jason Gunthorpe <jgg(a)nvidia.com> writes:
>
> > On Thu, Jun 05, 2025 at 05:33:52PM +0530, Aneesh Kumar K.V wrote:
> >
> >> > +
> >> > + /* To ensure no host side MMIO access is possible */
> >> > + ret = pci_request_regions_exclusive(pdev, "vfio-pci-tsm");
> >> > + if (ret)
> >> > + goto out_unlock;
> >> > +
> >> >
> >>
> >> I am hitting failures here with similar changes. Can you share the QEMU
> >> changes needed to make this pci_request_regions_exclusive call succeed?
> >> Also after the TDI is unbound, we want the region ownership back to
> >> "vfio-pci" so that things continue to work as a non-secure device. I don't
> >> see us doing that. I could add a pci_bar_deactivate/pci_bar_activate in
> >> userspace which will result in vfio_unmap()/vfio_map(). But that doesn't
> >> release the region ownership.
> >
> > Again, IMHO, we should not be doing this dynamically. VFIO should do
> > pci_request_regions_exclusive() once at the very start and it should
> > stay that way.
> >
> > There is no reason to change it dynamically.
> >
> > The only decision to make is if all vfio should switch to exclusive
> > mode or if we need to make it optional for userspace.
>
> We only need the exclusive mode when the device is operating in secure
> mode, correct? That suggests we’ll need to dynamically toggle this
> setting based on the device’s security state.
No, if the decision is that VFIO should allow this to be controlled by
userspace then userspace will tell iommufd to run in regions_exclusive
mode prior to opening the vfio cdev and VFIO will still do it once at
open time and never change it.
The only thing request_regions does is block other drivers outside
vfio from using this memory space. There is no reason at all to change
this dynamically. A CC VMM using VFIO will never use a driver outside
VFIO to touch the VFIO controlled memory.
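The claim-once pattern can be sketched as below. This is a compile-checkable illustration only: `pci_request_regions_exclusive_stub`, `vfio_open` and `vfio_close` are stand-in names, not the real kernel API, and the real `pci_request_regions_exclusive()` of course does much more than track an owner string.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the PCI core; the real
 * pci_request_regions_exclusive()/pci_release_regions() live in the kernel. */
struct pci_dev {
	const char *region_owner;
};

static int pci_request_regions_exclusive_stub(struct pci_dev *pdev,
					      const char *name)
{
	if (pdev->region_owner)
		return -16;	/* -EBUSY: another driver already owns the BARs */
	pdev->region_owner = name;
	return 0;
}

static void pci_release_regions_stub(struct pci_dev *pdev)
{
	pdev->region_owner = NULL;
}

/* The pattern described above: claim the regions exclusively once when the
 * VFIO device is opened, hold them for the whole session, release only at
 * close.  No toggling on TDI bind/unbind. */
static int vfio_open(struct pci_dev *pdev)
{
	return pci_request_regions_exclusive_stub(pdev, "vfio-pci");
}

static void vfio_close(struct pci_dev *pdev)
{
	pci_release_regions_stub(pdev);
}
```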
Jason
On Thu, Jun 05, 2025 at 05:41:17PM +0800, Xu Yilun wrote:
> No, this is not a device side TDISP requirement. It is a host side
> requirement to fix the DMA silent-drop issue. TDX enforces that the CPU S2
> PT is shared with the IOMMU S2 PT (does ARM do the same?), so unmapping the
> CPU S2 PT in KVM equals unmapping the IOMMU S2 PT.
>
> If we allow the IOMMU S2 PT to be unmapped while the TDI is running, the
> host could fool the guest by just unmapping some PT entry and suppressing
> the fault event. The guest thinks a DMA write succeeded when it did not,
> which may cause data integrity issues.
So, TDX prevents *any* unmap, even of normal memory, from the S2 while
a guest is running? Seems extreme?
MMIO isn't special, if you have a rule like that for such a security
reason it should cover all of the S2.
> This is not a TDX-specific problem, but different vendors have different
> mechanisms for it. For TDX, the firmware fails the MMIO unmap for S2. For
> AMD, it will trigger some HW protection called an "ASID fence" [1]. Not
> sure how ARM handles this?
This seems even more extreme, if the guest gets a bad DMA address into
the device then the entire device gets killed? No chance to debug it?
Jason
[ Since Daniel made me look... ]
On 2025-06-04 8:57 am, Tomeu Vizoso wrote:
[...]
> diff --git a/drivers/accel/rocket/Kconfig b/drivers/accel/rocket/Kconfig
> new file mode 100644
> index 0000000000000000000000000000000000000000..9a59c6c61bf4d6460d8008b16331f001c97de67d
> --- /dev/null
> +++ b/drivers/accel/rocket/Kconfig
> @@ -0,0 +1,25 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +
> +config DRM_ACCEL_ROCKET
> + tristate "Rocket (support for Rockchip NPUs)"
> + depends on DRM
> + depends on ARM64 || COMPILE_TEST
Best make that "(ARCH_ROCKCHIP && ARM64) || COMPILE_TEST" now before
someone else inevitably does. Or perhaps just a pre-emptive
"ARCH_ROCKCHIP || COMPILE_TEST" if this is the same NPU that's in RV1126
etc.
> + depends on MMU
> + select DRM_SCHED
> + select IOMMU_SUPPORT
Selecting user-visible symbols is often considered bad form, but this
one isn't even functional - all you're doing here is forcing the
top-level availability of all the IOMMU driver/API options.
If you really want to nanny the user and dissuade them from building a
config which is unlikely to be useful in practice, then at best maybe
"depends on ROCKCHIP_IOMMU || COMPILE_TEST", but TBH I wouldn't even
bother with that. Even if you want to rely on using the IOMMU client API
unconditionally, it'll fail decisively enough at runtime if there's no
IOMMU present (or the API is stubbed out entirely).
> + select IOMMU_IO_PGTABLE_LPAE
And I have no idea what this might think it's here for :/
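Taken together, the dependency block might end up looking something like this (just a sketch, assuming the driver keeps the ARM64-only restriction and drops both IOMMU selects):

```kconfig
config DRM_ACCEL_ROCKET
	tristate "Rocket (support for Rockchip NPUs)"
	depends on DRM
	depends on (ARCH_ROCKCHIP && ARM64) || COMPILE_TEST
	depends on MMU
	select DRM_SCHED
	select DRM_GEM_SHMEM_HELPER
```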
Thanks,
Robin.
> + select DRM_GEM_SHMEM_HELPER
> + help
> + Choose this option if you have a Rockchip SoC that contains a
> + compatible Neural Processing Unit (NPU), such as the RK3588. Called by
> + Rockchip either RKNN or RKNPU, it accelerates inference of neural
> + networks.
> +
> + The interface exposed to userspace is described in
> + include/uapi/drm/rocket_accel.h and is used by the Rocket userspace
> + driver in Mesa3D.
> +
> + If unsure, say N.
> +
> + To compile this driver as a module, choose M here: the
> + module will be called rocket.
On Wed, Jun 04, 2025 at 02:10:43PM +0530, Aneesh Kumar K.V wrote:
> Jason Gunthorpe <jgg(a)nvidia.com> writes:
>
> > On Tue, Jun 03, 2025 at 02:20:51PM +0800, Xu Yilun wrote:
> >> > Wouldn’t it be simpler to skip the reference count increment altogether
> >> > and just call tsm_unbind in the virtual device’s destroy callback?
> >> > (iommufd_vdevice_destroy())
> >>
> >> The vdevice refcount is the main concern; there is also an IOMMU_DESTROY
> >> ioctl. Without a refcount, the user could just free the vdevice instance
> >> while VFIO is still in the bound state. That does not seem to be the
> >> correct free order.
> >
> > Freeing the vdevice should automatically unbind it..
> >
>
> One challenge I ran into during implementation was the dependency of
> vfio on iommufd_device. When vfio needs to perform a tsm_unbind,
> it only has access to an iommufd_device.
VFIO should never do that except by destroying the idevice..
> However, TSM operations like binding and unbinding are handled at the
> iommufd_vdevice level. The issue? There’s no direct link from
> iommufd_device back to iommufd_vdevice.
Yes.
> To address this, I modified the following structures:
>
> modified drivers/iommu/iommufd/iommufd_private.h
> @@ -428,6 +428,7 @@ struct iommufd_device {
> /* protect iopf_enabled counter */
> struct mutex iopf_lock;
> unsigned int iopf_enabled;
> + struct iommufd_vdevice *vdev;
> };
Locking will be painful:
> Updating vdevice->idev requires holding vdev->mutex (vdev_lock).
> Updating device->vdev requires idev->igroup->lock (idev_lock).
I wonder if that can work on the destroy paths..
You also have to prevent more than one vdevice from being created for
an idevice, I don't think we do that today.
> tsm_unbind in vdevice_destroy:
>
> vdevice_destroy() ends up calling tsm_unbind() while holding only the
> vdev_lock. At first glance, this seems unsafe. But in practice, it's
> fine because the corresponding iommufd_device has already been destroyed
> when the VFIO device file descriptor was closed, triggering
> vfio_df_iommufd_unbind().
This needs some kind of fixing: the idevice should destroy the vdevices
during idevice destruction so we don't get this out-of-order case where
the idevice is destroyed before the vdevice.
This should be a separate patch as it is an immediate bug fix..
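Both points can be sketched together: reject a second vdevice for an idevice that already has one, and tear the vdevice down when the idevice goes away. This is a hypothetical, compile-checkable illustration only; the `_check` function names are made up, the structs are reduced to the fields discussed above, and the locking (vdev_lock / idev->igroup->lock) is deliberately elided.

```c
#include <assert.h>
#include <stddef.h>

struct iommufd_vdevice;

/* Reduced to the fields under discussion; the real structs live in
 * drivers/iommu/iommufd/iommufd_private.h. */
struct iommufd_device {
	struct iommufd_vdevice *vdev;	/* back-pointer proposed in the patch */
};

struct iommufd_vdevice {
	struct iommufd_device *idev;
};

/* Hypothetical sketch of the missing check: only one vdevice per idevice.
 * The real code would do this under idev->igroup->lock. */
static int iommufd_vdevice_alloc_check(struct iommufd_device *idev,
				       struct iommufd_vdevice *vdev)
{
	if (idev->vdev)
		return -17;	/* -EEXIST: a vdevice already exists */
	idev->vdev = vdev;
	vdev->idev = idev;
	return 0;
}

/* On idevice destruction, tear down the vdevice first so tsm_unbind never
 * runs against an already-freed idevice. */
static void iommufd_device_destroy_check(struct iommufd_device *idev)
{
	if (idev->vdev) {
		idev->vdev->idev = NULL;	/* vdevice unbind/teardown here */
		idev->vdev = NULL;
	}
}
```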
Jason
On Tue, May 20, 2025 at 12:27:00PM +0200, Tomeu Vizoso wrote:
> Using the DRM GPU scheduler infrastructure, with a scheduler for each
> core.
>
> Userspace can decide for a series of tasks to be executed sequentially
> in the same core, so SRAM locality can be taken advantage of.
>
> The job submission code was initially based on Panfrost.
>
> v2:
> - Remove hardcoded number of cores
> - Misc. style fixes (Jeffrey Hugo)
> - Repack IOCTL struct (Jeffrey Hugo)
>
> v3:
> - Adapt to a split of the register block in the DT bindings (Nicolas
> Frattaroli)
> - Make use of GPL-2.0-only for the copyright notice (Jeff Hugo)
> - Use drm_* logging functions (Thomas Zimmermann)
> - Rename reg i/o macros (Thomas Zimmermann)
> - Add padding to ioctls and check for zero (Jeff Hugo)
> - Improve error handling (Nicolas Frattaroli)
>
> Signed-off-by: Tomeu Vizoso <tomeu(a)tomeuvizoso.net>
> diff --git a/drivers/accel/rocket/rocket_job.c b/drivers/accel/rocket/rocket_job.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..aee6ebdb2bd227439449fdfcab3ce7d1e39cd4c4
> --- /dev/null
> +++ b/drivers/accel/rocket/rocket_job.c
> @@ -0,0 +1,723 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright 2019 Linaro, Ltd, Rob Herring <robh(a)kernel.org> */
> +/* Copyright 2019 Collabora ltd. */
> +/* Copyright 2024-2025 Tomeu Vizoso <tomeu(a)tomeuvizoso.net> */
> +
> +#include <drm/drm_print.h>
> +#include <drm/drm_file.h>
> +#include <drm/drm_gem.h>
> +#include <drm/rocket_accel.h>
> +#include <linux/interrupt.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +
> +#include "rocket_core.h"
> +#include "rocket_device.h"
> +#include "rocket_drv.h"
> +#include "rocket_job.h"
> +#include "rocket_registers.h"
> +
> +#define JOB_TIMEOUT_MS 500
> +
> +static struct rocket_job *
> +to_rocket_job(struct drm_sched_job *sched_job)
> +{
> + return container_of(sched_job, struct rocket_job, base);
> +}
> +
> +struct rocket_fence {
> + struct dma_fence base;
> + struct drm_device *dev;
> + /* rocket seqno for signaled() test */
> + u64 seqno;
> + int queue;
AFAICT, you are not using any of the elements here. So you can just drop
rocket_fence and use dma_fence.
Rob
On 6/3/25 17:00, Tvrtko Ursulin wrote:
>
> On 03/06/2025 14:13, Maxime Ripard wrote:
>> Hi,
>>
>> On Mon, Jun 02, 2025 at 04:42:27PM +0200, Christian König wrote:
>>> On 6/2/25 15:05, Tvrtko Ursulin wrote:
>>>> On 15/05/2025 14:15, Christian König wrote:
>>>>> Hey drm-misc maintainers,
>>>>>
>>>>> can you guys please backmerge drm-next into drm-misc-next?
>>>>>
>>>>> I want to push this patch here but it depends on changes which are partially in drm-next and partially in drm-misc-next.
>>>>
>>>> Looks like the backmerge is still pending?
>>>
>>> Yes, @Maarten, @Maxime and @Thomas ping on this.
>>
>> It's done
>
> Thanks Maxime!
>
> Christian, I can merge 2-5 to take some load off you if you want?
Sure, go ahead.
Then I can call it a day for today :)
Cheers,
Christian.
>
> Regards,
>
> Tvrtko
>