Fence signaling must be enabled for dma_fence_is_signaled() to ever
return true. Since drivers and implementations sometimes get this
wrong, this series enforces the correct behaviour when
DEBUG_WW_MUTEX_SLOWPATH is enabled during debugging. This should make
any implementation bugs that leave fences unsignaled much more
obvious.
Arvind Yadav (6):
[PATCH v3 1/6] dma-buf: Remove the signaled bit status check
[PATCH v3 2/6] dma-buf: set signaling bit for the stub fence
[PATCH v3 3/6] dma-buf: Enable signaling on fence for selftests
[PATCH v3 4/6] drm/amdgpu: Enable signaling on fence.
[PATCH v3 5/6] drm/sched: Use parent fence instead of finished
[PATCH v3 6/6] dma-buf: Check status of enable-signaling bit on debug
drivers/dma-buf/dma-fence.c | 7 ++++---
drivers/dma-buf/st-dma-fence-chain.c | 4 ++++
drivers/dma-buf/st-dma-fence-unwrap.c | 22 ++++++++++++++++++++++
drivers/dma-buf/st-dma-fence.c | 16 ++++++++++++++++
drivers/dma-buf/st-dma-resv.c | 10 ++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 2 ++
drivers/gpu/drm/scheduler/sched_main.c | 4 ++--
include/linux/dma-fence.h | 5 +++++
8 files changed, 65 insertions(+), 5 deletions(-)
--
2.25.1
TTM, GEM, DRM and the core DMA-buf framework all need to enable
software signaling before a fence can be reported as signaled.
However, framework code can forget to call
dma_fence_enable_sw_signaling() before calling
dma_fence_is_signaled(). To catch this on a debug kernel, check the
DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT bit status before checking the
DMA_FENCE_FLAG_SIGNALED_BIT bit status, which confirms that software
signaling has been enabled.
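A hedged sketch of the calling convention this check enforces is
below; my_framework_check_fence() is a hypothetical wrapper used only
for illustration, while dma_fence_enable_sw_signaling() and
dma_fence_is_signaled() are the existing dma-fence APIs:

/*
 * Sketch: the caller-side ordering the debug check expects. The
 * wrapper name is made up for this example.
 */
static bool my_framework_check_fence(struct dma_fence *fence)
{
	/* Arm the provider's interrupt/callback path first ... */
	dma_fence_enable_sw_signaling(fence);

	/* ... only then is polling the signaled state meaningful. */
	return dma_fence_is_signaled(fence);
}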
Arvind Yadav (4):
[PATCH v2 1/4] drm/sched: Enable signaling for finished fence
[PATCH v2 2/4] dma-buf: enable signaling for the stub fence on debug
[PATCH v2 3/4] dma-buf: enable signaling for selftest fence on debug
[PATCH v2 4/4] dma-buf: Check status of enable-signaling bit on debug
drivers/dma-buf/dma-fence.c | 7 ++++
drivers/dma-buf/st-dma-fence-chain.c | 8 +++++
drivers/dma-buf/st-dma-fence-unwrap.c | 44 ++++++++++++++++++++++++++
drivers/dma-buf/st-dma-fence.c | 25 ++++++++++++++-
drivers/dma-buf/st-dma-resv.c | 20 ++++++++++++
drivers/gpu/drm/scheduler/sched_main.c | 2 ++
include/linux/dma-fence.h | 5 +++
7 files changed, 110 insertions(+), 1 deletion(-)
--
2.25.1
dma-buf has become a way to safely acquire a handle to non-struct-page
memory whose lifetime can still be controlled by the exporter.
Notably, RDMA can now import dma-buf FDs and build them into MRs,
which allows for PCI P2P operations. Extend this to allow vfio-pci to
export MMIO memory from PCI device BARs.
This series supports a use case for SPDK where an NVMe device is owned
by SPDK through VFIO while interacting with an RDMA device. The RDMA
device may directly access the NVMe CMB or directly manipulate the
NVMe device's doorbell using PCI P2P.
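To make the P2P flow concrete, here is a hedged userspace sketch of
importing such a dma-buf into an RDMA MR. Obtaining the dma-buf fd
from the VFIO device through the new ioctl added by this series is
assumed and not shown; ibv_reg_dmabuf_mr() is the existing rdma-core
entry point, and map_bar_for_rdma() is a made-up wrapper name:

/*
 * Sketch only: register a dma-buf fd (assumed to have been exported
 * from a vfio-pci BAR by this series) as an RDMA MR so the NIC can do
 * P2P DMA to the device BAR. Error handling trimmed.
 */
#include <infiniband/verbs.h>

static struct ibv_mr *map_bar_for_rdma(struct ibv_pd *pd, int dmabuf_fd,
				       size_t len)
{
	/* offset 0 into the dma-buf; iova chosen to match the offset */
	return ibv_reg_dmabuf_mr(pd, 0, len, 0, dmabuf_fd,
				 IBV_ACCESS_LOCAL_WRITE |
				 IBV_ACCESS_REMOTE_READ |
				 IBV_ACCESS_REMOTE_WRITE);
}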
However, as a general mechanism, it can support many other scenarios with
VFIO. I imagine this dmabuf approach to be usable by iommufd as well for
generic and safe P2P mappings.
This series goes after the "Break up ioctl dispatch functions to one
function per ioctl" series.
This is on github: https://github.com/jgunthorpe/linux/commits/vfio_dma_buf
v2:
- Name the new file dma_buf.c
- Restore orig_nents before freeing
- Fix reversed logic around priv->revoked
- Set priv->index
- Rebased on v2 "Break up ioctl dispatch functions"
v1: https://lore.kernel.org/r/0-v1-9e6e1739ed95+5fa-vfio_dma_buf_jgg@nvidia.com
Cc: linux-rdma@vger.kernel.org
Cc: Oded Gabbay <ogabbay@kernel.org>
Cc: Christian König <christian.koenig@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Maor Gottlieb <maorg@nvidia.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Jason Gunthorpe (4):
dma-buf: Add dma_buf_try_get()
vfio: Add vfio_device_get()
vfio_pci: Do not open code pci_try_reset_function()
vfio/pci: Allow MMIO regions to be exported through dma-buf
drivers/vfio/pci/Makefile | 1 +
drivers/vfio/pci/dma_buf.c | 269 +++++++++++++++++++++++++++++
drivers/vfio/pci/vfio_pci_config.c | 22 ++-
drivers/vfio/pci/vfio_pci_core.c | 33 +++-
drivers/vfio/pci/vfio_pci_priv.h | 24 +++
drivers/vfio/vfio_main.c | 3 +-
include/linux/dma-buf.h | 13 ++
include/linux/vfio.h | 6 +
include/linux/vfio_pci_core.h | 1 +
include/uapi/linux/vfio.h | 18 ++
10 files changed, 368 insertions(+), 22 deletions(-)
create mode 100644 drivers/vfio/pci/dma_buf.c
base-commit: 285fef0ff7f1a97d8acd380971c061985d8dafb5
--
2.37.2
TTM, GEM, DRM and the core DMA-buf framework all need to enable
software signaling before a fence can be reported as signaled.
However, framework code can forget to call
dma_fence_enable_sw_signaling() before calling
dma_fence_is_signaled(). To catch this on a debug kernel, check the
DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT bit status before checking the
DMA_FENCE_FLAG_SIGNALED_BIT bit status, which confirms that software
signaling has been enabled.
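Since this revision also wires up enable_signaling callbacks, a
minimal provider-side sketch is shown below; the ops structure and all
my_fence_* names are illustrative placeholders, not code taken from
the patches:

/*
 * Sketch: a minimal dma_fence_ops whose .enable_signaling callback
 * arms the provider's completion path. All my_fence_* names are
 * hypothetical and used only for illustration.
 */
#include <linux/dma-fence.h>

static const char *my_fence_driver_name(struct dma_fence *fence)
{
	return "my-driver";
}

static const char *my_fence_timeline_name(struct dma_fence *fence)
{
	return "my-timeline";
}

static bool my_fence_enable_signaling(struct dma_fence *fence)
{
	/*
	 * Arm whatever interrupt or timer will eventually call
	 * dma_fence_signal() on this fence. Returning true means the
	 * fence is still pending and signaling is now enabled.
	 */
	return true;
}

static const struct dma_fence_ops my_fence_ops = {
	.get_driver_name = my_fence_driver_name,
	.get_timeline_name = my_fence_timeline_name,
	.enable_signaling = my_fence_enable_signaling,
};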
Arvind Yadav (4):
dma-buf: Check status of enable-signaling bit on debug
drm/sched: Add callback and enable signaling on debug
dma-buf: Add callback and enable signaling on debug
dma-buf: Add callback and enable signaling on debug
drivers/dma-buf/dma-fence.c | 17 ++++++++
drivers/dma-buf/st-dma-fence-chain.c | 17 ++++++++
drivers/dma-buf/st-dma-fence-unwrap.c | 54 +++++++++++++++++++++++++
drivers/dma-buf/st-dma-fence.c | 34 +++++++++++++++-
drivers/dma-buf/st-dma-resv.c | 30 ++++++++++++++
drivers/gpu/drm/scheduler/sched_fence.c | 12 ++++++
drivers/gpu/drm/scheduler/sched_main.c | 4 +-
include/linux/dma-fence.h | 5 +++
8 files changed, 171 insertions(+), 2 deletions(-)
--
2.25.1