On Tue, Apr 26, 2022 at 02:08:01PM +0800, Cai Huoqing wrote:
The NVIDIA Deep Learning Accelerator (NVDLA) is an open-source IP block integrated into the NVIDIA Jetson AGX Xavier, so add the UAPI for this driver.
Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
v1->v2:
 *Rename nvdla_drm.[ch] to nvdla_drv.[ch] and rename nvdla_ioctl.h to
  nvdla_drm.h, move it to uapi.
  comments link: https://lore.kernel.org/lkml/20bac605-97e6-e5cd-c4e4-83a8121645d8@amd.com/
 include/uapi/drm/nvdla_drm.h | 99 ++++++++++++++++++++++++++++++++++++
 1 file changed, 99 insertions(+)
 create mode 100644 include/uapi/drm/nvdla_drm.h
diff --git a/include/uapi/drm/nvdla_drm.h b/include/uapi/drm/nvdla_drm.h
new file mode 100644
index 000000000000..984635285525
--- /dev/null
+++ b/include/uapi/drm/nvdla_drm.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+/*
+ * Copyright (C) 2017-2018 NVIDIA CORPORATION.
+ * Copyright (C) 2022 Cai Huoqing
+ */
+#ifndef __LINUX_NVDLA_IOCTL_H
+#define __LINUX_NVDLA_IOCTL_H
+#include <linux/ioctl.h>
+#include <linux/types.h>
+#if !defined(__KERNEL__)
+#define __user
+#endif
+/**
+ * struct nvdla_mem_handle structure for memory handles
+ * @handle		handle to DMA buffer allocated in userspace
+ * @reserved		Reserved for padding
+ * @offset		offset in bytes from start address of buffer
+ */
+struct nvdla_mem_handle {
+	__u32 handle;
+	__u32 reserved;
+	__u64 offset;
+};
+/**
+ * struct nvdla_ioctl_submit_task structure for single task information
+ * @num_addresses	total number of entries in address_list
+ * @reserved		Reserved for padding
+ * @address_list	pointer to array of struct nvdla_mem_handle
+ */
+struct nvdla_ioctl_submit_task {
+#define NVDLA_MAX_BUFFERS_PER_TASK (6144)
This is an odd number. Can you clarify where this limitation comes from? I say "limitation" here because, again, I'm no expert on DLA and I don't know what a typical workload would look like. 6144 is a lot of buffers, but are these tasks typically using a few large buffers or many small buffers?
+	__u32 num_addresses;
+#define NVDLA_NO_TIMEOUT (0xffffffff)
+	__u32 timeout;
+	__u64 address_list;
+};
So if a task is basically just a collection of DMA buffers, is userspace supposed to fill some of those buffers with metadata to determine what the task is about? If so, is this something that the DLA firmware/hardware knows how to parse?
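To make the question concrete, here is my rough reading of how a task would be filled in from userspace (a sketch of my assumptions, not something the patch documents); the address list only carries a GEM handle and a byte offset per buffer, so any description of the actual workload would have to live inside the buffers themselves:

#include <stdint.h>
#include <string.h>

#include "nvdla_drm.h"	/* the header proposed in this patch; include path is a guess */

/*
 * Hypothetical sketch: describe one task as 'count' GEM buffers.
 * Nothing in struct nvdla_mem_handle says what a given buffer is for.
 */
static void fill_task(struct nvdla_ioctl_submit_task *task,
		      struct nvdla_mem_handle *list,
		      const uint32_t *gem_handles, uint32_t count)
{
	uint32_t i;

	for (i = 0; i < count; i++) {
		list[i].handle = gem_handles[i];	/* from DRM_IOCTL_NVDLA_GEM_CREATE */
		list[i].reserved = 0;
		list[i].offset = 0;			/* byte offset into that buffer */
	}

	memset(task, 0, sizeof(*task));
	task->num_addresses = count;		/* must not exceed NVDLA_MAX_BUFFERS_PER_TASK */
	task->timeout = NVDLA_NO_TIMEOUT;
	task->address_list = (uintptr_t)list;	/* user pointer handed over as __u64 */
}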
+/**
+ * struct nvdla_submit_args structure for task submit
+ * @tasks		pointer to array of struct nvdla_ioctl_submit_task
+ * @num_tasks		number of entries in tasks
+ * @flags		flags for task submit, no flags defined yet
+ * @version		version of task structure
+ */
+struct nvdla_submit_args {
+	__u64 tasks;
+	__u16 num_tasks;
+#define NVDLA_MAX_TASKS_PER_SUBMIT 24
Perhaps worth clarifying if this is a hardware restriction or an arbitrary software limit. Is this perhaps worth parameterizing somehow if this can potentially change in newer versions of DLA?
+#define NVDLA_SUBMIT_FLAGS_ATOMIC (1 << 0)
What exactly does atomicity imply here? Should this be described in a comment?
Thierry
+	__u16 flags;
+	__u32 version;
+};
+/**
+ * struct nvdla_gem_create_args for allocating DMA buffer through GEM
+ * @handle		handle updated by kernel after allocation
+ * @flags		implementation specific flags
+ * @size		size of buffer to allocate
+ */
+struct nvdla_gem_create_args {
+	__u32 handle;
+	__u32 flags;
+	__u64 size;
+};
+/**
+ * struct nvdla_gem_map_offset_args for mapping DMA buffer
+ * @handle		handle of the buffer
+ * @reserved		reserved for padding
+ * @offset		offset updated by kernel after mapping
+ */
+struct nvdla_gem_map_offset_args {
+	__u32 handle;
+	__u32 reserved;
+	__u64 offset;
+};
+#define DRM_NVDLA_SUBMIT	0x00
+#define DRM_NVDLA_GEM_CREATE	0x01
+#define DRM_NVDLA_GEM_MMAP	0x02
+#define DRM_IOCTL_NVDLA_SUBMIT		DRM_IOWR(DRM_COMMAND_BASE + DRM_NVDLA_SUBMIT, struct nvdla_submit_args)
+#define DRM_IOCTL_NVDLA_GEM_CREATE	DRM_IOWR(DRM_COMMAND_BASE + DRM_NVDLA_GEM_CREATE, struct nvdla_gem_create_args)
+#define DRM_IOCTL_NVDLA_GEM_MMAP	DRM_IOWR(DRM_COMMAND_BASE + DRM_NVDLA_GEM_MMAP, struct nvdla_gem_map_offset_args)
+#endif
2.25.1
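For completeness, this is the end-to-end userspace sequence I imagine this uAPI is aiming for (a hypothetical sketch: the render node path, buffer size and the flags/version values are guesses on my part, since the patch does not document them):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include <drm/drm.h>		/* DRM_COMMAND_BASE, DRM_IOWR */
#include "nvdla_drm.h"		/* the header proposed in this patch */

int main(void)
{
	/* Device node path is an assumption; error handling omitted. */
	int fd = open("/dev/dri/renderD128", O_RDWR);

	/* Allocate a DMA buffer through GEM. */
	struct nvdla_gem_create_args create = { .size = 4096 };
	ioctl(fd, DRM_IOCTL_NVDLA_GEM_CREATE, &create);

	/* Look up the mmap offset and map the buffer for CPU access. */
	struct nvdla_gem_map_offset_args map = { .handle = create.handle };
	ioctl(fd, DRM_IOCTL_NVDLA_GEM_MMAP, &map);
	void *cpu = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
			 fd, map.offset);

	/* Submit a single task referencing that one buffer. */
	struct nvdla_mem_handle addr = { .handle = create.handle };
	struct nvdla_ioctl_submit_task task = {
		.num_addresses = 1,
		.timeout = NVDLA_NO_TIMEOUT,
		.address_list = (uintptr_t)&addr,
	};
	struct nvdla_submit_args submit = {
		.tasks = (uintptr_t)&task,
		.num_tasks = 1,
		.flags = 0,	/* or NVDLA_SUBMIT_FLAGS_ATOMIC? semantics unclear */
		.version = 0,	/* no versions documented yet */
	};
	ioctl(fd, DRM_IOCTL_NVDLA_SUBMIT, &submit);

	munmap(cpu, 4096);
	close(fd);
	return 0;
}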