From: Matthew Auld <matthew.auld(a)intel.com>
[ Upstream commit edb1745fc618ba8ef63a45ce3ae60de1bdf29231 ]
Since the dma-resv is shared we don't need to reserve a fence slot and
add the fence twice, and there is no need to loop through the dependencies twice.
Signed-off-by: Matthew Auld <matthew.auld(a)intel.com>
Cc: Thomas Hellström <thomas.hellstrom(a)linux.intel.com>
Cc: Matthew Brost <matthew.brost(a)intel.com>
Reviewed-by: Jonathan Cavitt <jonathan.cavitt(a)intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom(a)linux.intel.com>
Link: https://lore.kernel.org/r/20250829164715.720735-2-matthew.auld@intel.com
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
LLM-generated explanation (may be completely bogus):
YES
Explanation
- What it fixes
- Removes redundant dma-resv operations when a backup BO shares the
same reservation object as the original BO, preventing the same
fence from being reserved/added twice to the same `dma_resv`.
- Avoids scanning the same dependency set twice when source and
destination BOs share the same `dma_resv`.
- Why the change is correct
- The backup object is created to share the parent’s reservation
object, so a single reserve/add is sufficient:
- The backup BO is initialized with the parent’s resv:
`drivers/gpu/drm/xe/xe_bo.c:1309` (`xe_bo_init_locked(...,
bo->ttm.base.resv, ...)`), ensuring `bo->ttm.base.resv ==
backup->ttm.base.resv`.
- The patch adds an explicit invariant check to document and enforce
this: `drivers/gpu/drm/xe/xe_bo.c:1225` (`xe_assert(xe,
bo->ttm.base.resv == backup->ttm.base.resv)`).
- With shared `dma_resv`, adding the same fence twice is at best
   redundant (wasting fence slots and memory) and at worst error-prone.
   Reserving a fence slot once and adding the fence once is the correct
   behavior; a minimal sketch of the resulting pattern follows the
   conclusion below.
- Specific code changes and effects
- Evict path (GPU migration copy case):
- Before: reserves and adds fence on both `bo->ttm.base.resv` and
`backup->ttm.base.resv`.
- After: reserves and adds exactly once, guarded by the shared-resv
assertion.
- See single reserve and add: `drivers/gpu/drm/xe/xe_bo.c:1226`
     (reserve) and `drivers/gpu/drm/xe/xe_bo.c:1237` (add fence). This
     is the core fix; the second reserve/add on the backup object was
     the redundant part and is now removed.
- Restore path (migration copy back):
- Same simplification: reserve once, add once on the shared
`dma_resv`.
- See single reserve and add: `drivers/gpu/drm/xe/xe_bo.c:1375`
(reserve) and `drivers/gpu/drm/xe/xe_bo.c:1387` (add fence).
- Dependency handling in migrate:
- Before: added deps for both src and dst based only on `src_bo !=
dst_bo`.
- After: only add dst deps if the resv objects differ, avoiding
double-walking the same `dma_resv`.
- See updated condition: `drivers/gpu/drm/xe/xe_migrate.c:932`
(`src_bo->ttm.base.resv != dst_bo->ttm.base.resv`).
- User-visible impact without the patch
- Duplicate `dma_resv_add_fence()` calls on the same reservation
object can:
- Consume extra shared-fence slots and memory.
- Inflate dependency lists, causing unnecessary scheduler waits and
overhead.
- Increase failure likelihood of `dma_resv_reserve_fences()` under
memory pressure.
- These paths are exercised during suspend/resume flows of pinned VRAM
BOs (evict/restore), so reliability and performance in power
transitions can be affected.
- Scope and risk
- Small, focused changes localized to the Intel Xe driver
migration/evict/restore paths:
- Files: `drivers/gpu/drm/xe/xe_bo.c`,
`drivers/gpu/drm/xe/xe_migrate.c`.
- No API changes or architectural refactors; logic strictly reduces
redundant operations.
- The `xe_assert` acts as a safety net to catch unexpected non-shared
`resv` usage; normal runtime behavior is unchanged when the
invariant holds.
- The CPU copy fallback paths are untouched.
- Stable backport considerations
- This is a clear correctness and robustness fix, not a feature.
- Low regression risk if the stable branch also creates the backup BO
with the parent’s `dma_resv` (as shown by the use of
`xe_bo_init_locked(..., bo->ttm.base.resv, ...)` in
`drivers/gpu/drm/xe/xe_bo.c:1309`).
- If a stable branch diverges and the backup BO does not share the
resv, this patch would need adjustment (i.e., keep dual reserve/add
in that case). The added `xe_assert` helps surface such mismatches
during testing.
Conclusion: This commit fixes a real bug (duplicate fence reserve/add
and duplicate dependency scanning on a shared `dma_resv`) with a
minimal, well-scoped change. It aligns with stable rules (important
bugfix, low risk, contained), so it should be backported.
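To make the fixed pattern concrete, here is a minimal sketch of the
evict-path logic after the patch (a sketch only, reusing the Xe names
cited above; fence creation via xe_migrate_copy() and error unwinding
are elided):

/*
 * Minimal sketch, not the literal driver code: because the backup BO
 * was created with the parent's reservation object, a single reserve
 * and a single add cover both objects.
 */
static int evict_fence_once(struct xe_device *xe, struct xe_bo *bo,
			    struct xe_bo *backup, struct dma_fence *fence)
{
	int ret;

	/* Invariant the patch now asserts: the resv is shared. */
	xe_assert(xe, bo->ttm.base.resv == backup->ttm.base.resv);

	/* Reserve one fence slot on the shared reservation object... */
	ret = dma_resv_reserve_fences(bo->ttm.base.resv, 1);
	if (ret)
		return ret;

	/* ...and publish the migration fence exactly once. */
	dma_resv_add_fence(bo->ttm.base.resv, fence, DMA_RESV_USAGE_KERNEL);
	return 0;
}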
drivers/gpu/drm/xe/xe_bo.c | 13 +------------
drivers/gpu/drm/xe/xe_migrate.c | 2 +-
2 files changed, 2 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index d07e23eb1a54d..5a61441d68af5 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1242,14 +1242,11 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
else
migrate = mem_type_to_migrate(xe, bo->ttm.resource->mem_type);
+ xe_assert(xe, bo->ttm.base.resv == backup->ttm.base.resv);
ret = dma_resv_reserve_fences(bo->ttm.base.resv, 1);
if (ret)
goto out_backup;
- ret = dma_resv_reserve_fences(backup->ttm.base.resv, 1);
- if (ret)
- goto out_backup;
-
fence = xe_migrate_copy(migrate, bo, backup, bo->ttm.resource,
backup->ttm.resource, false);
if (IS_ERR(fence)) {
@@ -1259,8 +1256,6 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
dma_resv_add_fence(bo->ttm.base.resv, fence,
DMA_RESV_USAGE_KERNEL);
- dma_resv_add_fence(backup->ttm.base.resv, fence,
- DMA_RESV_USAGE_KERNEL);
dma_fence_put(fence);
} else {
ret = xe_bo_vmap(backup);
@@ -1338,10 +1333,6 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
if (ret)
goto out_unlock_bo;
- ret = dma_resv_reserve_fences(backup->ttm.base.resv, 1);
- if (ret)
- goto out_unlock_bo;
-
fence = xe_migrate_copy(migrate, backup, bo,
backup->ttm.resource, bo->ttm.resource,
false);
@@ -1352,8 +1343,6 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
dma_resv_add_fence(bo->ttm.base.resv, fence,
DMA_RESV_USAGE_KERNEL);
- dma_resv_add_fence(backup->ttm.base.resv, fence,
- DMA_RESV_USAGE_KERNEL);
dma_fence_put(fence);
} else {
ret = xe_bo_vmap(backup);
diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index 2a627ed64b8f8..ba9b8590eccb2 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -901,7 +901,7 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
if (!fence) {
err = xe_sched_job_add_deps(job, src_bo->ttm.base.resv,
DMA_RESV_USAGE_BOOKKEEP);
- if (!err && src_bo != dst_bo)
+ if (!err && src_bo->ttm.base.resv != dst_bo->ttm.base.resv)
err = xe_sched_job_add_deps(job, dst_bo->ttm.base.resv,
DMA_RESV_USAGE_BOOKKEEP);
if (err)
--
2.51.0
On Tue, 21 Oct 2025 17:20:22 +1300, Barry Song wrote:
> From: Barry Song <v-songbaohua(a)oppo.com>
>
> We can allocate high-order pages, but mapping them one by
> one is inefficient. This patch changes the code to map
> as large a chunk as possible. The code looks somewhat
>
> [ ... ]
Reviewed-by: Maxime Ripard <mripard(a)kernel.org>
Thanks!
Maxime
Add a helper for retrieving a pointer to the struct dma_resv for a given
GEM object. We introduce it in a new trait, BaseObjectPrivate, which we
automatically implement for all GEM objects and don't expose to users
outside of the crate.
Signed-off-by: Lyude Paul <lyude(a)redhat.com>
---
rust/kernel/drm/gem/mod.rs | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/rust/kernel/drm/gem/mod.rs b/rust/kernel/drm/gem/mod.rs
index 32bff2e8463f4..67813cfb0db42 100644
--- a/rust/kernel/drm/gem/mod.rs
+++ b/rust/kernel/drm/gem/mod.rs
@@ -200,6 +200,18 @@ fn create_mmap_offset(&self) -> Result<u64> {
impl<T: IntoGEMObject> BaseObject for T {}
+/// Crate-private base operations shared by all GEM object classes.
+#[expect(unused)]
+pub(crate) trait BaseObjectPrivate: IntoGEMObject {
+ /// Return a pointer to this object's dma_resv.
+ fn raw_dma_resv(&self) -> *mut bindings::dma_resv {
+ // SAFETY: `as_raw()` always returns a valid pointer to the base DRM gem object
+ unsafe { (*self.as_raw()).resv }
+ }
+}
+
+impl<T: IntoGEMObject> BaseObjectPrivate for T {}
+
/// A base GEM object.
///
/// Invariants
--
2.51.0
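As background for the accessor above: on the C side it simply reads the
base GEM object's reservation pointer. Abridged from mainline
include/drm/drm_gem.h (comments paraphrased):

/* The field the new raw_dma_resv() accessor reads. */
struct drm_gem_object {
	/* ... */

	/* Pointer to the reservation object in use; normally points at
	 * the embedded _resv below, unless the driver shares another
	 * object's reservation (as the Xe backup BO does above). */
	struct dma_resv *resv;

	/* The embedded default reservation object. */
	struct dma_resv _resv;

	/* ... */
};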
On Tue, Oct 21, 2025 at 4:43 PM Matthew Brost <matthew.brost(a)intel.com> wrote:
>
> On Sat, Oct 18, 2025 at 12:42:30AM -0700, Matthew Brost wrote:
> > On Fri, Oct 17, 2025 at 11:43:51PM -0700, Matthew Brost wrote:
> > > On Fri, Oct 17, 2025 at 10:37:46AM -0500, Rob Herring wrote:
> > > > On Thu, Oct 16, 2025 at 11:25:34PM -0700, Matthew Brost wrote:
> > > > > On Thu, Oct 16, 2025 at 04:06:05PM -0500, Rob Herring (Arm) wrote:
> > > > > > Add a driver for Arm Ethos-U65/U85 NPUs. The Ethos-U NPU has a
> > > > > > relatively simple interface with single command stream to describe
> > > > > > buffers, operation settings, and network operations. It supports up to 8
> > > > > > memory regions (though no h/w bounds on a region). The Ethos NPUs
> > > > > > are designed to use an SRAM for scratch memory. Region 2 is reserved
> > > > > > for SRAM (like the downstream driver stack and compiler). Userspace
> > > > > > doesn't need access to the SRAM.
> > > >
> > > > Thanks for the review.
> > > >
> > > > [...]
> > > >
> > > > > > +static struct dma_fence *ethosu_job_run(struct drm_sched_job *sched_job)
> > > > > > +{
> > > > > > + struct ethosu_job *job = to_ethosu_job(sched_job);
> > > > > > + struct ethosu_device *dev = job->dev;
> > > > > > + struct dma_fence *fence = NULL;
> > > > > > + int ret;
> > > > > > +
> > > > > > + if (unlikely(job->base.s_fence->finished.error))
> > > > > > + return NULL;
> > > > > > +
> > > > > > + fence = ethosu_fence_create(dev);
> > > > >
> > > > > Another reclaim issue: ethosu_fence_create allocates memory using
> > > > > GFP_KERNEL. Since we're already in the DMA fence signaling path
> > > > > (reclaim), this can lead to a deadlock.
> > > > >
> > > > > Without too much thought, you likely want to move this allocation to
> > > > > ethosu_job_do_push, but before taking dev->sched_lock or calling
> > > > > drm_sched_job_arm.
> > > > >
> > > > > We really should fix the DRM scheduler work queue to be tainted with
> > > > > reclaim. If I recall correctly, we'd need to update the work queue
> > > > > layer. Let me look into that—I've seen this type of bug several times,
> > > > > and lockdep should be able to catch it.
> > > >
> > > > Likely the rocket driver suffers from the same issues...
> > > >
> > >
> > > I am not surprised by this statement.
> > >
> > > > >
> > > > > > + if (IS_ERR(fence))
> > > > > > + return fence;
> > > > > > +
> > > > > > + if (job->done_fence)
> > > > > > + dma_fence_put(job->done_fence);
> > > > > > + job->done_fence = dma_fence_get(fence);
> > > > > > +
> > > > > > + ret = pm_runtime_get_sync(dev->base.dev);
> > > > >
> > > > > I haven't looked at your PM design, but this generally looks quite
> > > > > dangerous with respect to reclaim. For example, if your PM resume paths
> > > > > allocate memory or take locks that allocate memory underneath, you're
> > > > > likely to run into issues.
> > > > >
> > > > > A better approach would be to attach a PM reference to your job upon
> > > > > creation and release it upon job destruction. That would be safer and
> > > > > save you headaches in the long run.
> > > >
> > > > Our PM is nothing more than clock enable/disable and register init.
> > > >
> > > > If the runtime PM API doesn't work and needs special driver wrappers,
> > > > then I'm inclined to just not use it and manage clocks directly (as
> > > > that's all it is doing).
> > > >
> > >
> > > Yes, then you’re probably fine. More complex drivers can do all sorts of
> > > things during a PM wake, which is why PM wakes should generally be the
> > > outermost layer. I still suggest, to future-proof your code, that you
> > > move the PM reference to an outer layer.
> > >
> >
> > Also, taking a PM reference in a function call — as opposed to tying it
> > to an object's lifetime — is risky. It can quickly lead to imbalances in
> > PM references if things go sideways or function calls become unbalanced.
> > Depending on how your driver uses the DRM scheduler, this seems like a
> > real possibility.
> >
> > Matt
> >
> > > > >
> > > > > This is what we do in Xe [1] [2].
> > > > >
> > > > > Also, in general, this driver has been reviewed (RB’d), but it's not
> > > > > great that I spotted numerous issues within just five minutes. I suggest
> > > > > taking a step back and thoroughly evaluating everything this driver is
> > > > > doing.
> > > >
> > > > Well, if it is hard to get simple drivers right, then it's a problem
> > > > with the subsystem APIs IMO.
> > > >
> > >
> > > Yes, agreed. We should have assertions and lockdep annotations in place
> > > to catch driver-side misuses. This is the second driver I’ve randomly
> > > looked at over the past year that has broken DMA fencing and reclaim
> > > rules. I’ll take an action item to fix this in the DRM scheduler, but
> > > I’m afraid I’ll likely break multiple drivers in the process as misuse
> > > / lockdep will complain.
>
> I've posted a series [1] for the DRM scheduler which will complain about the
> things I've pointed out here.
Thanks. I ran v6 with them and no lockdep splats.
Rob
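To make the two suggestions from this thread concrete, here is a hedged
sketch of the submit-side pattern (hypothetical ethosu_* helper names,
modeled on the quoted code, not the actual v6 driver): allocate the
done-fence and take the runtime-PM reference at push time, before
drm_sched_job_arm() and outside the fence-signaling path, and release
the PM reference only when the job is freed.

/*
 * Hedged sketch (hypothetical names): do sleeping/allocating work at
 * submit time, never in run_job(), and tie the runtime-PM reference
 * to the job's lifetime so the refcount stays balanced on all paths.
 */
static int ethosu_job_push(struct ethosu_job *job)
{
	int ret;

	/* A GFP_KERNEL allocation is safe here: we are not yet inside
	 * the dma-fence signaling (reclaim-tainted) path. */
	job->done_fence = ethosu_fence_create(job->dev);
	if (IS_ERR(job->done_fence))
		return PTR_ERR(job->done_fence);

	/* Take the PM reference together with the job... */
	ret = pm_runtime_resume_and_get(job->dev->base.dev);
	if (ret < 0) {
		dma_fence_put(job->done_fence);
		return ret;
	}

	drm_sched_job_arm(&job->base);
	drm_sched_entity_push_job(&job->base);
	return 0;
}

static void ethosu_job_free(struct drm_sched_job *sched_job)
{
	struct ethosu_job *job = to_ethosu_job(sched_job);

	/* ...and drop it only when the job is destroyed. */
	pm_runtime_put_autosuspend(job->dev->base.dev);
}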
Changelog:
v5:
* Rebased on top of v6.18-rc1.
* Added more validation logic to make sure that DMA-BUF length doesn't
overflow in various scenarios.
* Hide the kernel config option from users.
* Fixed type conversion issue. DMA ranges are exposed with u64 length,
but DMA-BUF uses "unsigned int" as a length for SG entries.
* Added a check so that VFIO drivers which report a BAR size different
from the one PCI reports do not use the DMA-BUF functionality.
v4: https://lore.kernel.org/all/cover.1759070796.git.leon@kernel.org
* Split pcim_p2pdma_provider() to two functions, one that initializes
array of providers and another to return right provider pointer.
v3: https://lore.kernel.org/all/cover.1758804980.git.leon@kernel.org
* Changed pcim_p2pdma_enable() to be pcim_p2pdma_provider().
* Cache provider in vfio_pci_dma_buf struct instead of BAR index.
* Removed misleading comment from pcim_p2pdma_provider().
* Moved MMIO check to be in pcim_p2pdma_provider().
v2: https://lore.kernel.org/all/cover.1757589589.git.leon@kernel.org/
* Added extra patch which adds new CONFIG, so next patches can reuse
it.
* Squashed "PCI/P2PDMA: Remove redundant bus_offset from map state"
into the other patch.
* Fixed revoke calls to be aligned with true->false semantics.
* Extended p2pdma_providers to be per-BAR and not global to whole
device.
* Fixed possible race between dmabuf states and revoke.
* Moved revoke to PCI BAR zap block.
v1: https://lore.kernel.org/all/cover.1754311439.git.leon@kernel.org
* Changed commit messages.
* Reused DMA_ATTR_MMIO attribute.
* Returned support for multiple DMA ranges per-dMABUF.
v0: https://lore.kernel.org/all/cover.1753274085.git.leonro@nvidia.com
---------------------------------------------------------------------------
Based on "[PATCH v6 00/16] dma-mapping: migrate to physical address-based API"
https://lore.kernel.org/all/cover.1757423202.git.leonro@nvidia.com/ series.
---------------------------------------------------------------------------
This series extends the VFIO PCI subsystem to support exporting MMIO
regions from PCI device BARs as dma-buf objects, enabling safe sharing of
non-struct page memory with controlled lifetime management. This allows RDMA
and other subsystems to import dma-buf FDs and build them into memory regions
for PCI P2P operations.
The series supports a use case for SPDK where an NVMe device is owned
by SPDK through VFIO while interacting with an RDMA device. The RDMA
device may directly access the NVMe CMB or directly manipulate the NVMe
device's doorbell using PCI P2P.
However, as a general mechanism, it can support many other scenarios
with VFIO. This dma-buf approach is also usable by iommufd for generic
and safe P2P mappings.
In addition to the SPDK use-case mentioned above, the capability added
in this patch series can also be useful when a buffer (located in device
memory such as VRAM) needs to be shared between any two dGPU devices or
instances (assuming one of them is bound to VFIO PCI) as long as they
are P2P DMA compatible.
The implementation provides a revocable attachment mechanism using dma-buf
move operations. MMIO regions are normally pinned as BARs don't change
physical addresses, but access is revoked when the VFIO device is closed
or a PCI reset is issued. This ensures kernel self-defense against
potentially hostile userspace.
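As a hedged illustration of that revoke mechanism (hypothetical
field/function names; the series' actual implementation lives in
drivers/vfio/pci/vfio_pci_dmabuf.c), the exporter flips a revoked flag
under the reservation lock and sends a move notification so importers
drop their mappings:

/*
 * Sketch of the revoke path, called on VFIO device close or PCI
 * reset: importers receive move_notify() and must unmap, and
 * re-mapping fails while the export is revoked.
 */
static void vfio_pci_dma_buf_revoke(struct dma_buf *dmabuf, bool revoked)
{
	struct vfio_pci_dma_buf *priv = dmabuf->priv;

	dma_resv_lock(dmabuf->resv, NULL);
	priv->revoked = revoked;
	dma_buf_move_notify(dmabuf);
	dma_resv_unlock(dmabuf->resv);
}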
The series includes significant refactoring of the PCI P2PDMA subsystem
to separate core P2P functionality from memory allocation features,
making it more modular and suitable for VFIO use cases that don't need
struct page support.
-----------------------------------------------------------------------
The series is based originally on
https://lore.kernel.org/all/20250307052248.405803-1-vivek.kasireddy@intel.c…
but heavily rewritten to be based on DMA physical API.
-----------------------------------------------------------------------
The WIP branch can be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=…
Thanks
Leon Romanovsky (7):
PCI/P2PDMA: Separate the mmap() support from the core logic
PCI/P2PDMA: Simplify bus address mapping API
PCI/P2PDMA: Refactor to separate core P2P functionality from memory
allocation
PCI/P2PDMA: Export pci_p2pdma_map_type() function
types: move phys_vec definition to common header
vfio/pci: Enable peer-to-peer DMA transactions by default
vfio/pci: Add dma-buf export support for MMIO regions
Vivek Kasireddy (2):
vfio: Export vfio device get and put registration helpers
vfio/pci: Share the core device pointer while invoking feature
functions
block/blk-mq-dma.c | 7 +-
drivers/iommu/dma-iommu.c | 4 +-
drivers/pci/p2pdma.c | 175 ++++++++---
drivers/vfio/pci/Kconfig | 3 +
drivers/vfio/pci/Makefile | 2 +
drivers/vfio/pci/vfio_pci_config.c | 22 +-
drivers/vfio/pci/vfio_pci_core.c | 63 ++--
drivers/vfio/pci/vfio_pci_dmabuf.c | 446 +++++++++++++++++++++++++++++
drivers/vfio/pci/vfio_pci_priv.h | 23 ++
drivers/vfio/vfio_main.c | 2 +
include/linux/pci-p2pdma.h | 120 +++++---
include/linux/types.h | 5 +
include/linux/vfio.h | 2 +
include/linux/vfio_pci_core.h | 1 +
include/uapi/linux/vfio.h | 25 ++
kernel/dma/direct.c | 4 +-
mm/hmm.c | 2 +-
17 files changed, 785 insertions(+), 121 deletions(-)
create mode 100644 drivers/vfio/pci/vfio_pci_dmabuf.c
--
2.51.0
On Wed, 22 Oct 2025, Biancaa Ramesh <biancaa2210329(a)ssn.edu.in> wrote:
> --
> ::DISCLAIMER::
> ---------------------------------------------------------------------
> The contents of this e-mail and any attachment(s) are confidential and
> intended for the named recipient(s) only. Views or opinions, if any,
> presented in this email are solely those of the author and may not
> necessarily reflect the views or opinions of SSN Institutions (SSN) or
> its affiliates. Any form of reproduction, dissemination, copying,
> disclosure, modification, distribution and / or publication of this
> message without the prior written consent of authorized representative
> of SSN is strictly prohibited. If you have received this email in error
> please delete it and notify the sender immediately.
There are some obvious issues in the patch itself, but please first
figure out how to send patches, and list email generally, without
disclaimers like this. Or use the b4 web submission endpoint [1].
BR,
Jani.
[1] https://b4.docs.kernel.org/en/latest/contributor/send.html
--
Jani Nikula, Intel
On Tue, 14 Oct 2025 16:26:06 +0530 Meghana Malladi wrote:
> This series adds AF_XDP zero copy support to the icssg driver.
>
> Tests were performed on AM64x-EVM with the xdpsock application [1].
>
> A clear improvement is seen in transmit (txonly) and receive (rxdrop)
> for 64 byte packets. The 1500 byte test seems to be limited by line
> rate (1G link), so no improvement is seen there in packet rate.
>
> Having some issue with l2fwd, as the benchmarking numbers show 0
> for 64 byte packets after forwarding the first batch of packets, and I
> am currently looking into it.
This series stopped applying, could you please respin?
--
pw-bot: cr
The Arm Ethos-U65/85 NPUs are designed for edge AI inference
applications[0].
The driver works with Mesa Teflon. The Ethos support was merged on
10/15. The UAPI should also be compatible with the downstream (open
source) driver stack[2] and the Vela compiler, though that has not been
implemented.
Testing so far has been on i.MX93 boards with Ethos-U65 and a FVP model
with Ethos-U85. More work is needed in mesa for handling U85 command
stream differences, but that doesn't affect the UAPI.
A git tree is here[3].
Rob
[0] https://www.arm.com/products/silicon-ip-cpu?families=ethos%20npus
[2] https://gitlab.arm.com/artificial-intelligence/ethos-u/
[3] git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git ethos-v6
Signed-off-by: Rob Herring (Arm) <robh(a)kernel.org>
---
Changes in v6:
- Rework job submit to avoid potential deadlocks with allocations/reclaim
in the fence signaling paths. ethosu_acquire_object_fences() and the job
done_fence allocation are moved earlier. The runtime-PM resume now
happens before the job is pushed, and autosuspend is done when the job
is freed.
- Drop unused ethosu_job_is_idle()
- Link to v5: https://lore.kernel.org/r/20251016-ethos-v5-0-ba0aece0a006@kernel.org
Changes in v5:
- Rework Runtime PM init in probe
- Use __free() cleanups where possible
- Use devm_mutex_init()
- Handle U85 NPU_SET_WEIGHT2_BASE and NPU_SET_WEIGHT2_LENGTH
- Link to v4: https://lore.kernel.org/r/20251015-ethos-v4-0-81025a3dcbf3@kernel.org
Changes in v4:
- Use bulk clk API
- Various whitespace fixes mostly due to ethos->ethosu rename
- Drop error check on dma_set_mask_and_coherent()
- Drop unnecessary pm_runtime_mark_last_busy() call
- Move variable declarations out of switch (a riscv/clang build failure)
- Use lowercase hex in all defines
- Drop unused ethosu_device.coherent member
- Add comments on all locks
- Link to v3: https://lore.kernel.org/r/20250926-ethos-v3-0-6bd24373e4f5@kernel.org
Changes in v3:
- Rework and improve job submit validation
- Rename ethos to ethosu. There was an Ethos-Nxx that's unrelated.
- Add missing init for sched_lock mutex
- Drop some prints to debug level
- Fix i.MX93 SRAM accesses (AXI config)
- Add U85 AXI configuration and test on FVP with U85
- Print the current cmd value on timeout
- Link to v2: https://lore.kernel.org/r/20250811-ethos-v2-0-a219fc52a95b@kernel.org
Changes in v2:
- Rebase on v6.17-rc1 adapting to scheduler changes
- scheduler: Drop the reset workqueue. According to the scheduler docs,
we don't need it since we have a single h/w queue.
- scheduler: Rework the timeout handling to continue running if we are
making progress. Fixes timeouts on larger jobs.
- Reset the NPU on resume so it's in a known state
- Add error handling on clk_get() calls
- Fix drm_mm splat on module unload. We were missing a put on the
cmdstream BO in the scheduler clean-up.
- Fix 0-day report needing explicit bitfield.h include
- Link to v1: https://lore.kernel.org/r/20250722-ethos-v1-0-cc1c5a0cbbfb@kernel.org
---
Rob Herring (Arm) (2):
dt-bindings: npu: Add Arm Ethos-U65/U85
accel: Add Arm Ethos-U NPU driver
.../devicetree/bindings/npu/arm,ethos.yaml | 79 +++
MAINTAINERS | 9 +
drivers/accel/Kconfig | 1 +
drivers/accel/Makefile | 1 +
drivers/accel/ethosu/Kconfig | 10 +
drivers/accel/ethosu/Makefile | 4 +
drivers/accel/ethosu/ethosu_device.h | 195 ++++++
drivers/accel/ethosu/ethosu_drv.c | 403 ++++++++++++
drivers/accel/ethosu/ethosu_drv.h | 15 +
drivers/accel/ethosu/ethosu_gem.c | 704 +++++++++++++++++++++
drivers/accel/ethosu/ethosu_gem.h | 46 ++
drivers/accel/ethosu/ethosu_job.c | 496 +++++++++++++++
drivers/accel/ethosu/ethosu_job.h | 40 ++
include/uapi/drm/ethosu_accel.h | 261 ++++++++
14 files changed, 2264 insertions(+)
---
base-commit: 3a8660878839faadb4f1a6dd72c3179c1df56787
change-id: 20250715-ethos-3fdd39ef6f19
Best regards,
--
Rob Herring (Arm) <robh(a)kernel.org>