On Fri, 16 May 2025 18:53:15 +0200, Tomeu Vizoso wrote:
> Add the bindings for the Neural Processing Unit IP from Rockchip.
>
> v2:
> - Adapt to new node structure (one node per core, each with its own
> IOMMU)
> - Several misc. fixes from Sebastian Reichel
>
> v3:
> - Split register block in its constituent subblocks, and only require
> the ones that the kernel would ever use (Nicolas Frattaroli)
> - Group supplies (Rob Herring)
> - Explain the way in which the top core is special (Rob Herring)
>
> Signed-off-by: Tomeu Vizoso <tomeu(a)tomeuvizoso.net>
> Signed-off-by: Sebastian Reichel <sebastian.reichel(a)collabora.com>
> ---
> .../bindings/npu/rockchip,rknn-core.yaml | 162 +++++++++++++++++++++
> 1 file changed, 162 insertions(+)
>
My bot found errors running 'make dt_binding_check' on your patch:
yamllint warnings/errors:
dtschema/dtc warnings/errors:
/builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/npu/rockchip,rknn-core.yaml: properties:reg-names: 'oneOf' conditional failed, one must be fixed:
[{'const': 'pc'}, {'const': 'cna'}, {'const': 'core'}] is too long
[{'const': 'pc'}, {'const': 'cna'}, {'const': 'core'}] is too short
False schema does not allow 3
1 was expected
3 is greater than the maximum of 2
hint: "minItems" is only needed if less than the "items" list length
from schema $id: http://devicetree.org/meta-schemas/items.yaml#
/builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/npu/rockchip,rknn-core.example.dtb: npu-core@fdab0000 (rockchip,rk3588-rknn-core-top): compatible: 'oneOf' conditional failed, one must be fixed:
['rockchip,rk3588-rknn-core-top', 'rockchip,rknn-core-top'] is too long
'rockchip,rk3588-rknn-core-top' is not one of ['rockchip,rk3588-rknn-core']
from schema $id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
/builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/npu/rockchip,rknn-core.example.dtb: npu-core@fdab0000 (rockchip,rk3588-rknn-core-top): reg: [[0, 4255842304, 0, 36864]] is too short
from schema $id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
/builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/npu/rockchip,rknn-core.example.dtb: npu-core@fdac0000 (rockchip,rk3588-rknn-core): compatible: 'oneOf' conditional failed, one must be fixed:
['rockchip,rk3588-rknn-core', 'rockchip,rknn-core'] is too long
'rockchip,rk3588-rknn-core' is not one of ['rockchip,rk3588-rknn-core-top']
from schema $id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
/builds/robherring/dt-review-ci/linux/Documentation/devicetree/bindings/npu/rockchip,rknn-core.example.dtb: npu-core@fdac0000 (rockchip,rk3588-rknn-core): reg: [[0, 4255907840, 0, 36864]] is too short
from schema $id: http://devicetree.org/schemas/npu/rockchip,rknn-core.yaml#
doc reference errors (make refcheckdocs):
See https://patchwork.ozlabs.org/project/devicetree-bindings/patch/20250516-6-1…
The base for the series is generally the latest rc1. A different dependency
should be noted in *this* patch.
If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:
pip3 install dtschema --upgrade
Please check and re-submit after running the above command yourself. Note
that DT_SCHEMA_FILES can be set to your schema file to speed up checking
your schema. However, it must be unset to test all examples with your schema.
Hi,
This patch set allocates the protected DMA-bufs from a DMA-heap
instantiated from the TEE subsystem.
The TEE subsystem handles the DMA-buf allocations since it is the TEE
(OP-TEE, AMD-TEE, TS-TEE, or perhaps a future QTEE) which sets up the
protection for the memory used for the DMA-bufs.
The DMA-heap uses a protected memory pool provided by the backend TEE
driver, allowing it to choose how to allocate the protected physical
memory.
The allocated DMA-bufs must be imported with the new TEE_IOC_SHM_REGISTER_FD ioctl
before they can be passed as arguments when requesting services from the
secure world.
Three use-cases (Secure Video Playback, Trusted UI, and Secure Video
Recording) have been identified so far to serve as examples of what can be
expected. The use-cases have predefined DMA-heap names,
"protected,secure-video", "protected,trusted-ui", and
"protected,secure-video-record". The backend driver registers protected
memory pools for the use-cases it supports.
Each use-case has its own protected memory pool since different use-cases
require isolation from different parts of the system. A protected memory
pool can be based on a static carveout instantiated while probing the TEE
backend driver, or dynamically allocated from CMA (dma_alloc_pages()) and
made protected as needed by the TEE.
This can be tested on a RockPi 4B+ with the following steps:
repo init -u https://github.com/jenswi-linaro/manifest.git -m rockpi4.xml \
-b prototype/sdp-v9
repo sync -j8
cd build
make toolchains -j$(nproc)
make all -j$(nproc)
# Copy ../out/rockpi4.img to an SD card and boot the RockPi from that
# Connect a monitor to the RockPi
# login and at the prompt:
gst-launch-1.0 videotestsrc ! \
aesenc key=1f9423681beb9a79215820f6bda73d0f \
iv=e9aa8e834d8d70b7e0d254ff670dd718 serialize-iv=true ! \
aesdec key=1f9423681beb9a79215820f6bda73d0f ! \
kmssink
The aesdec module has been hacked to use an OP-TEE TA to decrypt the stream
into protected DMA-bufs which are consumed by the kmssink.
The primitive QEMU tests from previous patch sets can be tested on RockPi
in the same way using:
xtest --sdp-basic
The primitive tests are tested on QEMU with the following steps:
repo init -u https://github.com/jenswi-linaro/manifest.git -m qemu_v8.xml \
-b prototype/sdp-v9
repo sync -j8
cd build
make toolchains -j$(nproc)
make SPMC_AT_EL=1 all -j$(nproc)
make SPMC_AT_EL=1 run-only
# login and at the prompt:
xtest --sdp-basic
The SPMC_AT_EL=1 parameter configures the build with FF-A and an SPMC at
S-EL1 inside OP-TEE. The parameter can be changed to SPMC_AT_EL=n to test
without FF-A using the original SMC ABI instead. Please remember to do
make arm-tf-clean
for TF-A to be rebuilt properly using the new configuration.
https://optee.readthedocs.io/en/latest/building/prerequisites.html
lists the dependencies required to build the above.
The primitive tests are pretty basic, mostly checking that a Trusted
Application in the secure world can access and manipulate the memory. There
are also some negative tests for out of bounds buffers, etc.
Thanks,
Jens
Changes since V8:
* Using dma_alloc_pages() instead of cma_alloc() so the direct dependency on
CMA can be removed together with the patches
"cma: export cma_alloc() and cma_release()" and
"dma-contiguous: export dma_contiguous_default_area".
* Renaming the patch "tee: add tee_shm_alloc_cma_phys_mem()" to
"tee: add tee_shm_alloc_dma_mem()"
* Setting DMA mask for the OP-TEE TEE device based on input from the secure
world instead of relying on the parent device so following patches are
removed: "tee: tee_device_alloc(): copy dma_mask from parent device" and
"optee: pass parent device to tee_device_alloc()".
* Adding Sumit Garg's R-B to "tee: refactor params_from_user()"
* In the patch "tee: implement protected DMA-heap", map the physical memory
passed to tee_protmem_static_pool_alloc().
Changes since V7:
* Adding "dma-buf: dma-heap: export declared functions",
"cma: export cma_alloc() and cma_release()", and
"dma-contiguous: export dma_contiguous_default_area" to export the symbols
needed to keep the TEE subsystem as a load module.
* Removing CONFIG_TEE_DMABUF_HEAP and CONFIG_TEE_CMA since they aren't
needed any longer.
* Addressing review comments in "optee: sync secure world ABI headers"
* Better align protected memory pool initialization between the smc-abi and
ffa-abi parts of the optee driver.
* Removing the patch "optee: account for direction while converting parameters"
Changes since V6:
* Restricted memory is now known as protected memory, to use the same
term as https://docs.vulkan.org/guide/latest/protected.html. Updated all
patches to consistently use protected memory.
* In "tee: implement protected DMA-heap" add the hidden config option
TEE_DMABUF_HEAP to tell if the DMABUF_HEAPS functions are available
for the TEE subsystem
* Adding "tee: refactor params_from_user()", broken out from the patch
"tee: new ioctl to a register tee_shm from a dmabuf file descriptor"
* For "tee: new ioctl to a register tee_shm from a dmabuf file descriptor":
- Update commit message to mention protected memory
- Remove and open code tee_shm_get_parent_shm() in param_from_user_memref()
* In "tee: add tee_shm_alloc_cma_phys_mem" add the hidden config option
TEE_CMA to tell if the CMA functions are available for the TEE subsystem
* For "tee: tee_device_alloc(): copy dma_mask from parent device" and
"optee: pass parent device to tee_device_alloc", added
Reviewed-by: Sumit Garg <sumit.garg(a)kernel.org>
Changes since V5:
* Removing "tee: add restricted memory allocation" and
"tee: add TEE_IOC_RSTMEM_FD_INFO"
* Adding "tee: implement restricted DMA-heap",
"tee: new ioctl to a register tee_shm from a dmabuf file descriptor",
"tee: add tee_shm_alloc_cma_phys_mem()",
"optee: pass parent device to tee_device_alloc()", and
"tee: tee_device_alloc(): copy dma_mask from parent device"
* The two TEE driver OPs "rstmem_alloc()" and "rstmem_free()" are replaced
with a struct tee_rstmem_pool abstraction.
* Replaced the TEE_IOC_RSTMEM_ALLOC user space API with the DMA-heap API
Changes since V4:
* Adding the patch "tee: add TEE_IOC_RSTMEM_FD_INFO" needed by the
GStreamer demo
* Removing the dummy CPU access and mmap functions from the dma_buf_ops
* Fixing a compile error in "optee: FF-A: dynamic restricted memory allocation"
reported by kernel test robot <lkp(a)intel.com>
Changes since V3:
* Make the use_case and flags field in struct tee_shm u32's instead of
u16's
* Add more description for TEE_IOC_RSTMEM_ALLOC in the header file
* Import namespace DMA_BUF in module tee, reported by lkp(a)intel.com
* Added a note in the commit message for "optee: account for direction
while converting parameters" why it's needed
* Factor out dynamic restricted memory allocation from
"optee: support restricted memory allocation" into two new commits
"optee: FF-A: dynamic restricted memory allocation" and
"optee: smc abi: dynamic restricted memory allocation"
* Guard CMA usage with #ifdef CONFIG_CMA, effectively disabling dynamic
restricted memory allocation if CMA isn't configured
Changes since the V2 RFC:
* Based on v6.12
* Replaced the flags for SVP and Trusted UI memory with a u32 field
holding a unique id for each use case
* Added dynamic allocation of restricted memory pools
* Added OP-TEE ABI both with and without FF-A for dynamic restricted memory
* Added support for FF-A with FFA_LEND
Changes since the V1 RFC:
* Based on v6.11
* Complete rewrite, replacing the restricted heap with TEE_IOC_RSTMEM_ALLOC
Changes since Olivier's post [2]:
* Based on Yong Wu's post [1] where much of dma-buf handling is done in
the generic restricted heap
* Simplifications and cleanup
* New commit message for "dma-buf: heaps: add Linaro restricted dmabuf heap
support"
* Replaced the word "secure" with "restricted" where applicable
Etienne Carriere (1):
tee: new ioctl to a register tee_shm from a dmabuf file descriptor
Jens Wiklander (8):
optee: sync secure world ABI headers
dma-buf: dma-heap: export declared functions
tee: implement protected DMA-heap
tee: refactor params_from_user()
tee: add tee_shm_alloc_dma_mem()
optee: support protected memory allocation
optee: FF-A: dynamic protected memory allocation
optee: smc abi: dynamic protected memory allocation
drivers/dma-buf/dma-heap.c | 3 +
drivers/tee/Makefile | 1 +
drivers/tee/optee/Makefile | 1 +
drivers/tee/optee/core.c | 10 +
drivers/tee/optee/ffa_abi.c | 147 ++++++++-
drivers/tee/optee/optee_ffa.h | 27 +-
drivers/tee/optee/optee_msg.h | 84 +++++-
drivers/tee/optee/optee_private.h | 15 +-
drivers/tee/optee/optee_smc.h | 37 ++-
drivers/tee/optee/protmem.c | 332 ++++++++++++++++++++
drivers/tee/optee/smc_abi.c | 113 ++++++-
drivers/tee/tee_core.c | 155 +++++++---
drivers/tee/tee_heap.c | 487 ++++++++++++++++++++++++++++++
drivers/tee/tee_private.h | 16 +
drivers/tee/tee_shm.c | 183 ++++++++++-
include/linux/tee_core.h | 71 +++++
include/linux/tee_drv.h | 10 +
include/uapi/linux/tee.h | 31 ++
18 files changed, 1655 insertions(+), 68 deletions(-)
create mode 100644 drivers/tee/optee/protmem.c
create mode 100644 drivers/tee/tee_heap.c
base-commit: b4432656b36e5cc1d50a1f2dc15357543add530e
--
2.43.0
On 5/22/25 10:02, wangtao wrote:
>> -----Original Message-----
>> From: Christian König <christian.koenig(a)amd.com>
>> Sent: Wednesday, May 21, 2025 7:57 PM
>> To: wangtao <tao.wangtao(a)honor.com>; T.J. Mercier
>> <tjmercier(a)google.com>
>> Cc: sumit.semwal(a)linaro.org; benjamin.gaignard(a)collabora.com;
>> Brian.Starkey(a)arm.com; jstultz(a)google.com; linux-media(a)vger.kernel.org;
>> dri-devel(a)lists.freedesktop.org; linaro-mm-sig(a)lists.linaro.org; linux-
>> kernel(a)vger.kernel.org; wangbintian(BintianWang)
>> <bintian.wang(a)honor.com>; yipengxiang <yipengxiang(a)honor.com>; liulu
>> 00013167 <liulu.liu(a)honor.com>; hanfeng 00012985 <feng.han(a)honor.com>;
>> amir73il(a)gmail.com
>> Subject: Re: [PATCH 2/2] dmabuf/heaps: implement
>> DMA_BUF_IOCTL_RW_FILE for system_heap
>>
>> On 5/21/25 12:25, wangtao wrote:
>>> [wangtao] I previously explained that
>>> read/sendfile/splice/copy_file_range
>>> syscalls can't achieve dmabuf direct IO zero-copy.
>>
>> And why can't you work on improving those syscalls instead of creating a new
>> IOCTL?
>>
> [wangtao] As I mentioned in previous emails, these syscalls cannot
> achieve dmabuf zero-copy due to technical constraints.
Yeah, and why can't you work on removing those technical constraints?
What is blocking you from improving the sendfile system call or proposing a patch to remove the copy_file_range restrictions?
Regards,
Christian.
> Could you
> specify the technical points, code, or principles that need
> optimization?
>
> Let me explain again why these syscalls can't work:
> 1. read() syscall
> - dmabuf fops lacks read callback implementation. Even if implemented,
> file_fd info cannot be transferred
> - read(file_fd, dmabuf_ptr, len) with remap_pfn_range-based mmap
> cannot access dmabuf_buf pages, forcing buffer-mode reads
>
> 2. sendfile() syscall
> - Requires CPU copy from page cache to memory file(tmpfs/shmem):
> [DISK] --DMA--> [page cache] --CPU copy--> [MEMORY file]
> - CPU overhead (both buffer/direct modes involve copies):
> 55.08% do_sendfile
> |- 55.08% do_splice_direct
> |-|- 55.08% splice_direct_to_actor
> |-|-|- 22.51% copy_splice_read
> |-|-|-|- 16.57% f2fs_file_read_iter
> |-|-|-|-|- 15.12% __iomap_dio_rw
> |-|-|- 32.33% direct_splice_actor
> |-|-|-|- 32.11% iter_file_splice_write
> |-|-|-|-|- 28.42% vfs_iter_write
> |-|-|-|-|-|- 28.42% do_iter_write
> |-|-|-|-|-|-|- 28.39% shmem_file_write_iter
> |-|-|-|-|-|-|-|- 24.62% generic_perform_write
> |-|-|-|-|-|-|-|-|- 18.75% __pi_memmove
>
> 3. splice() requires one end to be a pipe, incompatible with regular files or dmabuf.
>
> 4. copy_file_range()
> - Blocked by cross-FS restrictions (Amir's commit 868f9f2f8e00)
> - Even without this restriction, implementing the copy_file_range
> callback in dmabuf fops would only allow dmabuf reads from regular
> files. This is because copy_file_range relies on
> file_out->f_op->copy_file_range, which cannot support dmabuf write
> operations to regular files.
>
> Test results confirm these limitations:
> T.J. Mercier's 1G from ext4 on 6.12.20 | read/sendfile (ms) w/ 3 > drop_caches
> ------------------------|-------------------
> udmabuf buffer read | 1210
> udmabuf direct read | 671
> udmabuf buffer sendfile | 1096
> udmabuf direct sendfile | 2340
>
> My 3GHz CPU tests (cache cleared):
> Method | alloc | read | vs. (%)
> -----------------------------------------------
> udmabuf buffer read | 135 | 546 | 180%
> udmabuf direct read | 159 | 300 | 99%
> udmabuf buffer sendfile | 134 | 303 | 100%
> udmabuf direct sendfile | 141 | 912 | 301%
> dmabuf buffer read | 22 | 362 | 119%
> my patch direct read | 29 | 265 | 87%
>
> My 1GHz CPU tests (cache cleared):
> Method | alloc | read | vs. (%)
> -----------------------------------------------
> udmabuf buffer read | 552 | 2067 | 198%
> udmabuf direct read | 540 | 627 | 60%
> udmabuf buffer sendfile | 497 | 1045 | 100%
> udmabuf direct sendfile | 527 | 2330 | 223%
> dmabuf buffer read | 40 | 1111 | 106%
> patch direct read | 44 | 310 | 30%
>
> Test observations align with expectations:
> 1. dmabuf buffer read requires slow CPU copies
> 2. udmabuf direct read achieves zero-copy but has page retrieval
> latency from vaddr
> 3. udmabuf buffer sendfile suffers CPU copy overhead
> 4. udmabuf direct sendfile combines CPU copies with frequent DMA
> operations due to small pipe buffers
> 5. dmabuf buffer read also requires CPU copies
> 6. My direct read patch enables zero-copy with better performance
> on low-power CPUs
> 7. udmabuf creation time remains problematic (as you’ve noted).
>
>>> My focus is enabling dmabuf direct I/O for [regular file] <--DMA-->
>>> [dmabuf] zero-copy.
>>
>> Yeah and that focus is wrong. You need to work on a general solution to the
>> issue and not specific to your problem.
>>
>>> Any API achieving this would work. Are there other uAPIs you think
>>> could help? Could you recommend experts who might offer suggestions?
>>
>> Well once more: Either work on sendfile or copy_file_range or eventually
>> splice to make it what you want to do.
>>
>> When that is done we can discuss with the VFS people if that approach is
>> feasible.
>>
>> But just bypassing the VFS review by implementing a DMA-buf specific IOCTL
>> is a NO-GO. That is clearly not something you can do in any way.
> [wangtao] The issue is that only dmabuf lacks Direct I/O zero-copy support. Tmpfs/shmem
> already work with Direct I/O zero-copy. As explained, existing syscalls or
> generic methods can't enable dmabuf direct I/O zero-copy, which is why I
> propose adding an IOCTL command.
>
> I respect your perspective. Could you clarify specific technical aspects,
> code requirements, or implementation principles for modifying sendfile()
> or copy_file_range()? This would help advance our discussion.
>
> Thank you for engaging in this dialogue.
>
>>
>> Regards,
>> Christian.
This is the next version of the shmem backed GEM objects series
originally from Asahi, previously posted by Daniel Almeida. Along with
bindings for shmem backed GEM objects, it also adds a few features that
various users like Tyr and Asahi are interested in:
* The ability to pass custom arguments to new GEM objects (needed by
Tyr)
* OpaqueObject (to enable the use of custom private GEM objects, which I
believe asahi wanted)
And replaces some of the hand-rolled API bindings (sg_table mainly) with
some of the WIP patch series for adding kernel-wide bindings. It also
addresses the comments from the code review of the last version of this
patch series.
This currently doesn't apply on an upstream branch, but should very soon,
as all of the dependencies of this series are already on a mailing list.
The current branch this can be applied on top of is here:
https://gitlab.freedesktop.org/lyudess/linux/-/commits/rust%2Fgem-shmem-base
Which is based on top of nova/nova-next with the following patch series
applied:
* My (hopefully final) gem bindings cleanup:
https://lkml.org/lkml/2025/5/20/1541
* Benno's derive Zeroable series:
https://lkml.org/lkml/2025/5/20/1446
* Abdiel's sg_table series:
https://lwn.net/Articles/1020986/
Also, there is one FIXES patch on top of Abdiel's work to fix some
iterator bugs. These fixes have already been mentioned on the mailing
list and should not be needed for their V2 version.
Asahi Lina (3):
rust: helpers: Add bindings/wrappers for dma_resv_lock
rust: drm: gem: shmem: Add DRM shmem helper abstraction
rust: drm: gem: shmem: Add share_dma_resv to ObjectConfig
Lyude Paul (9):
rust: drm: gem: Add raw_dma_resv() function
drm/gem/shmem: Extract drm_gem_shmem_init() from
drm_gem_shmem_create()
drm/gem/shmem: Extract drm_gem_shmem_release() from
drm_gem_shmem_free()
rust: gem: Introduce BaseDriverObject::Args
rust: drm: gem: Add OpaqueObject
rust: drm: gem: Introduce OwnedSGTable
rust: Add dma_buf stub bindings
rust: drm: gem: Add export() callback
rust: drm: gem: Add BaseObject::prime_export()
drivers/gpu/drm/drm_gem_shmem_helper.c | 98 +++++--
drivers/gpu/drm/nova/gem.rs | 6 +-
include/drm/drm_gem_shmem_helper.h | 2 +
rust/bindings/bindings_helper.h | 4 +
rust/helpers/dma-resv.c | 13 +
rust/helpers/drm.c | 48 +++-
rust/helpers/helpers.c | 1 +
rust/kernel/dma_buf.rs | 39 +++
rust/kernel/drm/gem/mod.rs | 187 ++++++++++++-
rust/kernel/drm/gem/shmem.rs | 370 +++++++++++++++++++++++++
rust/kernel/lib.rs | 1 +
11 files changed, 727 insertions(+), 42 deletions(-)
create mode 100644 rust/helpers/dma-resv.c
create mode 100644 rust/kernel/dma_buf.rs
create mode 100644 rust/kernel/drm/gem/shmem.rs
--
2.49.0
Hi Jared,
On Thu, Apr 24, 2025 at 09:11:24AM -0700, Jared Kangas wrote:
> > > struct cma_heap {
> > > struct dma_heap *heap;
> > > @@ -394,15 +395,26 @@ static int __init __add_cma_heap(struct cma *cma, const char *name)
> > > static int __init add_default_cma_heap(void)
> > > {
> > > struct cma *default_cma = dev_get_cma_area(NULL);
> > > + const char *legacy_cma_name;
> > > int ret;
> > >
> > > if (!default_cma)
> > > return 0;
> > >
> > > - ret = __add_cma_heap(default_cma, cma_get_name(default_cma));
> > > + ret = __add_cma_heap(default_cma, DEFAULT_CMA_NAME);
> > > if (ret)
> > > return ret;
> > >
> > > + legacy_cma_name = cma_get_name(default_cma);
> > > +
> > > + if (IS_ENABLED(CONFIG_DMABUF_HEAPS_CMA_LEGACY) &&
> > > + strcmp(legacy_cma_name, DEFAULT_CMA_NAME)) {
> > > + ret = __add_cma_heap(default_cma, legacy_cma_name);
> > > + if (ret)
> > > + pr_warn("cma_heap: failed to add legacy heap: %pe\n",
> > > + ERR_PTR(-ret));
> > > + }
> > > +
> >
> > It would also simplify this part, since you would always create the legacy heap.
>
> By "always", do you mean removing the strcmp? I added this to guard
> against cases where the devicetree node's name clashed with the default
> name, given that the DT name isn't necessarily restricted to one of the
> current names in use ("linux,cma" or "default-pool"). It seems like the
> strcmp would be relevant regardless of the naming choice, but if this is
> overly cautious, I can remove it in v3.
That's not overly cautious, that's something I overlooked :)
You're totally right that we should check for that. We should probably
add a more specific error message in that case, though.
Maxime
On 5/21/25 12:25, wangtao wrote:
> [wangtao] I previously explained that read/sendfile/splice/copy_file_range
> syscalls can't achieve dmabuf direct IO zero-copy.
And why can't you work on improving those syscalls instead of creating a new IOCTL?
> My focus is enabling dmabuf direct I/O for [regular file] <--DMA--> [dmabuf]
> zero-copy.
Yeah and that focus is wrong. You need to work on a general solution to the issue and not specific to your problem.
> Any API achieving this would work. Are there other uAPIs you think
> could help? Could you recommend experts who might offer suggestions?
Well once more: Either work on sendfile or copy_file_range or eventually splice to make it what you want to do.
When that is done we can discuss with the VFS people if that approach is feasible.
But just bypassing the VFS review by implementing a DMA-buf specific IOCTL is a NO-GO. That is clearly not something you can do in any way.
Regards,
Christian.
On 5/21/25 06:17, wangtao wrote:
>>> Reducing CPU overhead/power consumption is critical for mobile devices.
>>> We need simpler and more efficient dmabuf direct I/O support.
>>>
>>> As Christian evaluated sendfile performance based on your data, could
>>> you confirm whether the cache was cleared? If not, please share the
>>> post-cache-clearing test data. Thank you for your support.
>>
>> Yes sorry, I was out yesterday riding motorcycles. I did not clear the cache for
>> the buffered reads, I didn't realize you had. The IO plus the copy certainly
>> explains the difference.
>>
>> Your point about the unlikelihood of any of that data being in the cache also
>> makes sense.
> [wangtao] Thank you for testing and clarifying.
>
>>
>> I'm not sure it changes anything about the ioctl approach though.
>> Another way to do this would be to move the (optional) support for direct IO
>> into the exporter via dma_buf_fops and dma_buf_ops. Then normal read()
>> syscalls would just work for buffers that support them.
>> I know that's more complicated, but at least it doesn't require inventing new
>> uapi to do it.
>>
> [wangtao] Thank you for the discussion. I fully support any method that enables
> dmabuf direct I/O.
>
> I understand using sendfile/splice with regular files for dmabuf
> adds an extra CPU copy, preventing zero-copy. For example:
> sendfile path: [DISK] → DMA → [page cache] → CPU copy → [memory file].
Yeah, but why can't you work on improving that?
> The read() syscall can't pass regular file fd parameters, so I added
> an ioctl command.
> While copy_file_range() supports two fds (fd_in/fd_out), it blocks cross-fs use.
> Even without this restriction, file_out->f_op->copy_file_range
> only enables dmabuf direct reads from regular files, not writes.
>
> Since dmabuf's direct I/O limitation comes from its unique
> attachment/map/fence model and lacks suitable syscalls, adding
> an ioctl seems necessary.
I absolutely don't see that. Both splice and sendfile can take two regular file descriptors.
That the underlying fops currently can't do that is not a valid argument for adding new uAPI. It just means that you need to work on improving those fops.
As long as nobody proves to me that the existing uAPI isn't sufficient for this use case I will systematically reject any approach to adding new one.
Regards,
Christian.
> When system exporters return a duplicated sg_table via map_dma_buf
> (used exclusively like a pages array), they should retain control
> over it.
>
> I welcome all solutions to achieve dmabuf direct I/O! Your feedback
> is greatly appreciated.
>
>> 1G from ext4 on 6.12.20 | read/sendfile (ms) w/ 3 > drop_caches
>> ------------------------|-------------------
>> udmabuf buffer read | 1210
>> udmabuf direct read | 671
>> udmabuf buffer sendfile | 1096
>> udmabuf direct sendfile | 2340
>>
>>
>>
>>>
>>>>>
>>>>>>> dmabuf buffer read | 51 | 1068 | 1118
>>>>>>> dmabuf direct read | 52 | 297 | 349
>>>>>>>
>>>>>>> udmabuf sendfile test steps:
>>>>>>> 1. Open data file(1024MB), get back_fd 2. Create memfd(32MB) #
>>>>>>> Loop steps 2-6 3. Allocate udmabuf with memfd 4. Call
>>>>>>> sendfile(memfd,
>>>>>>> back_fd) 5. Close memfd after sendfile 6. Close udmabuf 7.
>>>>>>> Close back_fd
>>>>>>>
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Christian.
>>>>>>>
>>>>>>
>>>
On Tue, May 20, 2025 at 12:26:54PM GMT, Tomeu Vizoso wrote:
> Add the bindings for the Neural Processing Unit IP from Rockchip.
>
> v2:
> - Adapt to new node structure (one node per core, each with its own
> IOMMU)
> - Several misc. fixes from Sebastian Reichel
>
> v3:
> - Split register block in its constituent subblocks, and only require
> the ones that the kernel would ever use (Nicolas Frattaroli)
> - Group supplies (Rob Herring)
> - Explain the way in which the top core is special (Rob Herring)
>
> v4:
> - Change required node name to npu@ (Rob Herring and Krzysztof Kozlowski)
> - Remove unneeded items: (Krzysztof Kozlowski)
> - Fix use of minItems/maxItems (Krzysztof Kozlowski)
> - Add reg-names to list of required properties (Krzysztof Kozlowski)
> - Fix example (Krzysztof Kozlowski)
>
> v5:
> - Rename file to rockchip,rk3588-rknn-core.yaml (Krzysztof Kozlowski)
> - Streamline compatible property (Krzysztof Kozlowski)
>
This is a big patchset, so please slow down and do not send it every day
but allow people to actually review the version you post.
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski(a)linaro.org>
Best regards,
Krzysztof
We've discussed a number of times how some heap names are bad, but
not really what makes a good heap name.
Let's document what we expect the heap names to look like.
Signed-off-by: Maxime Ripard <mripard(a)kernel.org>
---
Documentation/userspace-api/dma-buf-heaps.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/Documentation/userspace-api/dma-buf-heaps.rst b/Documentation/userspace-api/dma-buf-heaps.rst
index 535f49047ce6450796bf4380c989e109355efc05..b24618e360a9a9ba0bd85135d8c1760776f1a37f 100644
--- a/Documentation/userspace-api/dma-buf-heaps.rst
+++ b/Documentation/userspace-api/dma-buf-heaps.rst
@@ -21,5 +21,24 @@ following heaps:
usually created either through the kernel commandline through the
`cma` parameter, a memory region Device-Tree node with the
`linux,cma-default` property set, or through the `CMA_SIZE_MBYTES` or
`CMA_SIZE_PERCENTAGE` Kconfig options. Depending on the platform, it
might be called ``reserved``, ``linux,cma``, or ``default-pool``.
+
+Naming Convention
+=================
+
+A good heap name is a name that:
+
+- Is stable, and won't change from one version to the other;
+
+- Describes the memory region the heap will allocate from, and will
+ uniquely identify it in a given platform;
+
+- Doesn't use implementation details, such as the allocator;
+
+- Can describe intended usage.
+
+For example, assuming a platform with a reserved memory region located
+at the RAM address 0x42000000, intended to allocate video framebuffers,
+and backed by the CMA kernel allocator. Good names would be
+`memory@42000000` or `video@42000000`, but `cma-video` wouldn't.
---
base-commit: 92a09c47464d040866cf2b4cd052bc60555185fb
change-id: 20250520-dma-buf-heap-names-doc-31261aa0cfe6
Best regards,
--
Maxime Ripard <mripard(a)kernel.org>