The purpose of this patchset is for MediaTek secure video playback, and also to enable other potential uses of this in the future. The 'restricted dma-heap' will be used to allocate dma_buf objects that reference memory in the secure world that is inaccessible/unmappable by the non-secure (i.e. kernel/userspace) world. That memory will be used by the secure/trusted world to store secure information (i.e. decrypted media content). The dma_bufs allocated from the kernel will be passed to V4L2 for video decoding (as input and output). They will also be used by the DRM system for rendering of the content.
This patchset adds two MediaTek restricted heaps, which will be used in v4l2[1] and drm[2]:
1) restricted_mtk_cm: secure chunk memory for MediaTek SVP (Secure Video Path). The buffer is reserved for the secure world after bootup and is used for the vcodec's ES/working buffer.
2) restricted_mtk_cma: secure CMA memory for MediaTek SVP. This buffer is dynamically reserved for the secure world when secure video playback starts. Once secure video playback is complete, the CMA region is released. This heap is used for the vcodec's frame buffer.
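For context, allocation from these heaps goes through the existing dma-heap UAPI; nothing new is added on that side. A minimal userspace sketch, assuming the first heap is exposed as /dev/dma_heap/restricted_mtk_cm (the name registered later in this series); the returned fd is a dma-buf that can be handed to V4L2/DRM but never CPU-mapped:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

/* Returns a dma-buf fd on success, -1 on error. */
static int alloc_restricted_buf(size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd, ret;

	heap_fd = open("/dev/dma_heap/restricted_mtk_cm", O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	if (ret < 0)
		return -1;

	/* data.fd references restricted memory; pass it to V4L2/DRM only. */
	return data.fd;
}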
[1] https://lore.kernel.org/linux-mediatek/20231206081538.17056-1-yunfei.dong@me... [2] https://lore.kernel.org/all/20231223182932.27683-1-jason-jh.lin@mediatek.com...
Change note:
v4: 1) Rename the heap name from "secure" to "restricted", as suggested by Simon/Pekka. There are still several "secure" strings in the MTK file since we use the ARM platform, where this is called the "secure world"/"secure command".
v3: https://lore.kernel.org/linux-mediatek/20231212024607.3681-1-yong.wu@mediate...
1) Split the secure heap into a common file (secure_heap.c) and a MediaTek-specific file (secure_heap_mtk.c), and put all the TEE-related code into our specific file.
2) For the dt-binding, add the "mediatek," prefix since this is a MediaTek TEE firmware definition.
3) Remove the normal CMA heap, which was a draft for qcom.
Rebase on v6.7-rc1.
v2: https://lore.kernel.org/linux-mediatek/20231111111559.8218-1-yong.wu@mediate...
1) Move John's patches into the vcodec patchset since they use the new dma-heap interface directly. https://lore.kernel.org/linux-mediatek/20231106120423.23364-1-yunfei.dong@me...
2) Reword the dt-binding description.
3) Rename the heap from mtk_svp to secure_mtk_cm. This means the current vcodec/DRM upstream code doesn't match this.
4) Add a normal CMA heap; currently it is a draft version.
5) Regarding the UUID, I still hard-code it, but put it in private data so that others can set their own UUID. Moreover, the UUID is necessary for the session with the TEE. If we don't have it, we can't communicate with the TEE at all, so even the get_uuid interface, which tries to make the UUID more generic, does not work. If there is another way to make the UUID more general, please feel free to tell me.
v1: https://lore.kernel.org/linux-mediatek/20230911023038.30649-1-yong.wu@mediat...
Based on v6.6-rc1.
Yong Wu (7):
  dt-bindings: reserved-memory: Add mediatek,dynamic-restricted-region
  dma-buf: heaps: Initialize a restricted heap
  dma-buf: heaps: restricted_heap: Add private heap ops
  dma-buf: heaps: restricted_heap: Add dma_ops
  dma-buf: heaps: restricted_heap: Add MediaTek restricted heap and heap_init
  dma-buf: heaps: restricted_heap_mtk: Add TEE memory service call
  dma_buf: heaps: restricted_heap_mtk: Add a new CMA heap
 .../mediatek,dynamic-restricted-region.yaml |  43 +++
 drivers/dma-buf/heaps/Kconfig               |  16 +
 drivers/dma-buf/heaps/Makefile              |   4 +-
 drivers/dma-buf/heaps/restricted_heap.c     | 237 +++++++++++++
 drivers/dma-buf/heaps/restricted_heap.h     |  43 +++
 drivers/dma-buf/heaps/restricted_heap_mtk.c | 322 ++++++++++++++++++
 6 files changed, 664 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml
 create mode 100644 drivers/dma-buf/heaps/restricted_heap.c
 create mode 100644 drivers/dma-buf/heaps/restricted_heap.h
 create mode 100644 drivers/dma-buf/heaps/restricted_heap_mtk.c
Add a binding for describing the dynamic restricted reserved memory range. The memory range is also defined in the TEE firmware, meaning the TEE will be configured with the same address/size that is set in this DT node. For the detailed TEE commands, see MTK_TZCMD_SECMEM_ZALLOC and MTK_TZCMD_SECMEM_FREE.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
 .../mediatek,dynamic-restricted-region.yaml | 43 +++++++++++++++++++
 1 file changed, 43 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml
diff --git a/Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml b/Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml
new file mode 100644
index 000000000000..5cbe3a5637fa
--- /dev/null
+++ b/Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml
@@ -0,0 +1,43 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reserved-memory/mediatek,dynamic-restricted-re...
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: MediaTek Dynamic Reserved Region
+
+description:
+  A memory region that can dynamically transition as a whole between
+  secure and non-secure states. This memory will be protected by OP-TEE
+  when allocations are active and unprotected otherwise.
+
+maintainers:
+  - Yong Wu <yong.wu@mediatek.com>
+
+allOf:
+  - $ref: reserved-memory.yaml
+
+properties:
+  compatible:
+    const: mediatek,dynamic-restricted-region
+
+required:
+  - compatible
+  - reg
+  - reusable
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <1>;
+        #size-cells = <1>;
+        ranges;
+
+        reserved-memory@80000000 {
+            compatible = "mediatek,dynamic-restricted-region";
+            reg = <0x80000000 0x18000000>;
+            reusable;
+        };
+    };
Initialize a restricted heap. Currently this just adds a null heap; it prepares for the later patches.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
 drivers/dma-buf/heaps/Kconfig           |  9 ++++
 drivers/dma-buf/heaps/Makefile          |  3 +-
 drivers/dma-buf/heaps/restricted_heap.c | 67 +++++++++++++++++++++++++
 drivers/dma-buf/heaps/restricted_heap.h | 22 ++++++++
 4 files changed, 100 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma-buf/heaps/restricted_heap.c
 create mode 100644 drivers/dma-buf/heaps/restricted_heap.h
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..e54506f480ea 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA
 	  Choose this option to enable dma-buf CMA heap. This heap is backed
 	  by the Contiguous Memory Allocator (CMA). If your system has these
 	  regions, you should say Y here.
+
+config DMABUF_HEAPS_RESTRICTED
+	bool "DMA-BUF Restricted Heap"
+	depends on DMABUF_HEAPS
+	help
+	  Choose this option to enable dma-buf restricted heap. The purpose of this
+	  heap is to manage buffers that are inaccessible to the kernel and user space.
+	  There may be several ways to restrict it, for example it may be encrypted or
+	  protected by a TEE or hypervisor. If in doubt, say N.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..a2437c1817e2 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED)	+= restricted_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
new file mode 100644
index 000000000000..fd7c82abd42e
--- /dev/null
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF restricted heap exporter
+ *
+ * Copyright (C) 2024 MediaTek Inc.
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+
+#include "restricted_heap.h"
+
+static struct dma_buf *
+restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
+			 unsigned long fd_flags, unsigned long heap_flags)
+{
+	struct restricted_buffer *restricted_buf;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	int ret;
+
+	restricted_buf = kzalloc(sizeof(*restricted_buf), GFP_KERNEL);
+	if (!restricted_buf)
+		return ERR_PTR(-ENOMEM);
+
+	restricted_buf->size = ALIGN(size, PAGE_SIZE);
+	restricted_buf->heap = heap;
+
+	exp_info.exp_name = dma_heap_get_name(heap);
+	exp_info.size = restricted_buf->size;
+	exp_info.flags = fd_flags;
+	exp_info.priv = restricted_buf;
+
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err_free_buf;
+	}
+
+	return dmabuf;
+
+err_free_buf:
+	kfree(restricted_buf);
+	return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops restricted_heap_ops = {
+	.allocate = restricted_heap_allocate,
+};
+
+int restricted_heap_add(struct restricted_heap *rstrd_heap)
+{
+	struct dma_heap_export_info exp_info;
+	struct dma_heap *heap;
+
+	exp_info.name = rstrd_heap->name;
+	exp_info.ops = &restricted_heap_ops;
+	exp_info.priv = (void *)rstrd_heap;
+
+	heap = dma_heap_add(&exp_info);
+	if (IS_ERR(heap))
+		return PTR_ERR(heap);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(restricted_heap_add);
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
new file mode 100644
index 000000000000..443028f6ba3b
--- /dev/null
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Restricted heap Header.
+ *
+ * Copyright (C) 2024 MediaTek, Inc.
+ */
+
+#ifndef _DMABUF_RESTRICTED_HEAP_H_
+#define _DMABUF_RESTRICTED_HEAP_H_
+
+struct restricted_buffer {
+	struct dma_heap		*heap;
+	size_t			size;
+};
+
+struct restricted_heap {
+	const char		*name;
+};
+
+int restricted_heap_add(struct restricted_heap *rstrd_heap);
+
+#endif
Add "struct restricted_heap_ops". For the restricted memory, totally there are two steps: a) memory_alloc: Allocate the buffer in kernel; b) memory_restrict: Restrict/Protect/Secure that buffer. The memory_alloc is mandatory while memory_restrict is optinal since it may be part of memory_alloc.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
 drivers/dma-buf/heaps/restricted_heap.c | 41 ++++++++++++++++++++++++-
 drivers/dma-buf/heaps/restricted_heap.h | 12 ++++++++
 2 files changed, 52 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
index fd7c82abd42e..8c266a0f6192 100644
--- a/drivers/dma-buf/heaps/restricted_heap.c
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -12,10 +12,44 @@

 #include "restricted_heap.h"

+static int
+restricted_heap_memory_allocate(struct restricted_heap *heap, struct restricted_buffer *buf)
+{
+	const struct restricted_heap_ops *ops = heap->ops;
+	int ret;
+
+	ret = ops->memory_alloc(heap, buf);
+	if (ret)
+		return ret;
+
+	if (ops->memory_restrict) {
+		ret = ops->memory_restrict(heap, buf);
+		if (ret)
+			goto memory_free;
+	}
+	return 0;
+
+memory_free:
+	ops->memory_free(heap, buf);
+	return ret;
+}
+
+static void
+restricted_heap_memory_free(struct restricted_heap *heap, struct restricted_buffer *buf)
+{
+	const struct restricted_heap_ops *ops = heap->ops;
+
+	if (ops->memory_unrestrict)
+		ops->memory_unrestrict(heap, buf);
+
+	ops->memory_free(heap, buf);
+}
+
 static struct dma_buf *
 restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 			 unsigned long fd_flags, unsigned long heap_flags)
 {
+	struct restricted_heap *restricted_heap = dma_heap_get_drvdata(heap);
 	struct restricted_buffer *restricted_buf;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	struct dma_buf *dmabuf;
@@ -28,6 +62,9 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 	restricted_buf->size = ALIGN(size, PAGE_SIZE);
 	restricted_buf->heap = heap;

+	ret = restricted_heap_memory_allocate(restricted_heap, restricted_buf);
+	if (ret)
+		goto err_free_buf;
 	exp_info.exp_name = dma_heap_get_name(heap);
 	exp_info.size = restricted_buf->size;
 	exp_info.flags = fd_flags;
@@ -36,11 +73,13 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 	dmabuf = dma_buf_export(&exp_info);
 	if (IS_ERR(dmabuf)) {
 		ret = PTR_ERR(dmabuf);
-		goto err_free_buf;
+		goto err_free_restricted_mem;
 	}

 	return dmabuf;

+err_free_restricted_mem:
+	restricted_heap_memory_free(restricted_heap, restricted_buf);
 err_free_buf:
 	kfree(restricted_buf);
 	return ERR_PTR(ret);
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
index 443028f6ba3b..ddeaf9805708 100644
--- a/drivers/dma-buf/heaps/restricted_heap.h
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -15,6 +15,18 @@ struct restricted_buffer {

 struct restricted_heap {
 	const char		*name;
+
+	const struct restricted_heap_ops *ops;
+};
+
+struct restricted_heap_ops {
+	int (*heap_init)(struct restricted_heap *heap);
+
+	int (*memory_alloc)(struct restricted_heap *heap, struct restricted_buffer *buf);
+	void (*memory_free)(struct restricted_heap *heap, struct restricted_buffer *buf);
+
+	int (*memory_restrict)(struct restricted_heap *heap, struct restricted_buffer *buf);
+	void (*memory_unrestrict)(struct restricted_heap *heap, struct restricted_buffer *buf);
 };

 int restricted_heap_add(struct restricted_heap *rstrd_heap);
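To make the contract concrete, here is a hypothetical backend sketch (not part of this series) of what a vendor might plug into these ops, following the pattern where the secure world owns the actual allocation. example_fw_protect()/example_fw_unprotect() and the "restricted_example" heap are made-up stand-ins, not a real firmware API:

#include <linux/types.h>

#include "restricted_heap.h"

/* Hypothetical firmware hooks; placeholders for a real TEE/hypervisor call. */
int example_fw_protect(size_t size);
void example_fw_unprotect(size_t size);

static int example_memory_alloc(struct restricted_heap *heap,
				struct restricted_buffer *buf)
{
	/* The secure world owns the pages, so nothing to do kernel-side. */
	return 0;
}

static void example_memory_free(struct restricted_heap *heap,
				struct restricted_buffer *buf)
{
}

static int example_memory_restrict(struct restricted_heap *heap,
				   struct restricted_buffer *buf)
{
	/* Carve out and protect the range via the (hypothetical) firmware. */
	return example_fw_protect(buf->size);
}

static void example_memory_unrestrict(struct restricted_heap *heap,
				      struct restricted_buffer *buf)
{
	example_fw_unprotect(buf->size);
}

static const struct restricted_heap_ops example_ops = {
	.memory_alloc		= example_memory_alloc,
	.memory_free		= example_memory_free,
	.memory_restrict	= example_memory_restrict,
	.memory_unrestrict	= example_memory_unrestrict,
};

static struct restricted_heap example_heap = {
	.name	= "restricted_example",
	.ops	= &example_ops,
};

/* Registration would then be: restricted_heap_add(&example_heap); */

Note that memory_restrict may also be omitted entirely when the restriction happens inside memory_alloc, which is why the framework treats it as optional.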
Add the dma_ops for this restricted heap. For restricted buffers, cache ops/mmap are not allowed, thus EPERM is returned for them.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
 drivers/dma-buf/heaps/restricted_heap.c | 103 ++++++++++++++++++++++++
 1 file changed, 103 insertions(+)
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
index 8c266a0f6192..ec4c63d2112d 100644
--- a/drivers/dma-buf/heaps/restricted_heap.c
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -12,6 +12,10 @@

 #include "restricted_heap.h"

+struct restricted_heap_attachment {
+	struct sg_table			*table;
+};
+
 static int
 restricted_heap_memory_allocate(struct restricted_heap *heap, struct restricted_buffer *buf)
 {
@@ -45,6 +49,104 @@ restricted_heap_memory_free(struct restricted_heap *heap, struct restricted_buff
 	ops->memory_free(heap, buf);
 }

+static int restricted_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct restricted_buffer *restricted_buf = dmabuf->priv;
+	struct restricted_heap_attachment *a;
+	struct sg_table *table;
+	int ret;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = kzalloc(sizeof(*table), GFP_KERNEL);
+	if (!table) {
+		ret = -ENOMEM;
+		goto err_free_attach;
+	}
+
+	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+	if (ret)
+		goto err_free_sgt;
+	sg_set_page(table->sgl, NULL, restricted_buf->size, 0);
+
+	a->table = table;
+	attachment->priv = a;
+
+	return 0;
+
+err_free_sgt:
+	kfree(table);
+err_free_attach:
+	kfree(a);
+	return ret;
+}
+
+static void restricted_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *
+restricted_heap_map_dma_buf(struct dma_buf_attachment *attachment, enum dma_data_direction direct)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+
+	return table;
+}
+
+static void
+restricted_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *table,
+			      enum dma_data_direction direction)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+
+	WARN_ON(a->table != table);
+}
+
+static int
+restricted_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction)
+{
+	return -EPERM;
+}
+
+static int
+restricted_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction)
+{
+	return -EPERM;
+}
+
+static int restricted_heap_dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	return -EPERM;
+}
+
+static void restricted_heap_free(struct dma_buf *dmabuf)
+{
+	struct restricted_buffer *restricted_buf = dmabuf->priv;
+	struct restricted_heap *heap = dma_heap_get_drvdata(restricted_buf->heap);
+
+	restricted_heap_memory_free(heap, restricted_buf);
+	kfree(restricted_buf);
+}
+
+static const struct dma_buf_ops restricted_heap_buf_ops = {
+	.attach = restricted_heap_attach,
+	.detach = restricted_heap_detach,
+	.map_dma_buf = restricted_heap_map_dma_buf,
+	.unmap_dma_buf = restricted_heap_unmap_dma_buf,
+	.begin_cpu_access = restricted_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = restricted_heap_dma_buf_end_cpu_access,
+	.mmap = restricted_heap_dma_buf_mmap,
+	.release = restricted_heap_free,
+};
+
 static struct dma_buf *
 restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 			 unsigned long fd_flags, unsigned long heap_flags)
@@ -66,6 +168,7 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 	if (ret)
 		goto err_free_buf;
 	exp_info.exp_name = dma_heap_get_name(heap);
+	exp_info.ops = &restricted_heap_buf_ops;
 	exp_info.size = restricted_buf->size;
 	exp_info.flags = fd_flags;
 	exp_info.priv = restricted_buf;
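The intended effect, seen from userspace, is that every CPU access path on the returned dma-buf fd fails. A quick sketch of what a test could check (DMA_BUF_IOCTL_SYNC is the existing UAPI that lands in begin_cpu_access/end_cpu_access; assert_no_cpu_access is just an illustrative helper name):

#include <assert.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

static void assert_no_cpu_access(int dmabuf_fd, size_t len)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ,
	};

	/* mmap must be rejected by the heap's .mmap hook. */
	assert(mmap(NULL, len, PROT_READ, MAP_SHARED, dmabuf_fd, 0) == MAP_FAILED &&
	       errno == EPERM);

	/* Cache/CPU-access ops must be rejected as well. */
	assert(ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync) < 0 && errno == EPERM);
}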
Add a MediaTek restricted heap which uses a TEE service call to restrict the buffer. Currently this restricted heap is a no-op; it prepares for the later patch. There are mainly two changes: a) add a heap_init ops, since the TEE probes later than the restricted heap, so the heap is initialized when the first buffer is requested; b) add a priv_data for each heap, so MediaTek-specific data (such as the "TEE session") can be placed in priv_data.
Currently our heap depends on CMA, which can only be bool (built-in), thus we depend on "TEE=y".
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
 drivers/dma-buf/heaps/Kconfig               |   7 ++
 drivers/dma-buf/heaps/Makefile              |   1 +
 drivers/dma-buf/heaps/restricted_heap.c     |  11 ++
 drivers/dma-buf/heaps/restricted_heap.h     |   2 +
 drivers/dma-buf/heaps/restricted_heap_mtk.c | 113 ++++++++++++++++++++
 5 files changed, 134 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/restricted_heap_mtk.c
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index e54506f480ea..84f748fb2856 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -21,3 +21,10 @@ config DMABUF_HEAPS_RESTRICTED
 	  heap is to manage buffers that are inaccessible to the kernel and user space.
 	  There may be several ways to restrict it, for example it may be encrypted or
 	  protected by a TEE or hypervisor. If in doubt, say N.
+
+config DMABUF_HEAPS_RESTRICTED_MTK
+	bool "MediaTek DMA-BUF Restricted Heap"
+	depends on DMABUF_HEAPS_RESTRICTED && TEE=y
+	help
+	  Enable restricted dma-buf heaps for MediaTek platform. This heap is backed by
+	  TEE client interfaces. If in doubt, say N.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index a2437c1817e2..0028aa9d875f 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,4 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED)	+= restricted_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_MTK) += restricted_heap_mtk.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
index ec4c63d2112d..4e9869ab4a85 100644
--- a/drivers/dma-buf/heaps/restricted_heap.c
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -152,11 +152,22 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 			 unsigned long fd_flags, unsigned long heap_flags)
 {
 	struct restricted_heap *restricted_heap = dma_heap_get_drvdata(heap);
+	const struct restricted_heap_ops *ops = restricted_heap->ops;
 	struct restricted_buffer *restricted_buf;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	struct dma_buf *dmabuf;
 	int ret;

+	/*
+	 * In some implementations, a TEE is required to protect the buffer. However the
+	 * TEE may probe late, thus heap_init is performed when the first buffer is requested.
+	 */
+	if (ops->heap_init) {
+		ret = ops->heap_init(restricted_heap);
+		if (ret)
+			return ERR_PTR(ret);
+	}
+
 	restricted_buf = kzalloc(sizeof(*restricted_buf), GFP_KERNEL);
 	if (!restricted_buf)
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
index ddeaf9805708..cf5865f829fc 100644
--- a/drivers/dma-buf/heaps/restricted_heap.h
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -17,6 +17,8 @@ struct restricted_heap {
 	const char		*name;

 	const struct restricted_heap_ops *ops;
+
+	void			*priv_data;
 };

 struct restricted_heap_ops {
diff --git a/drivers/dma-buf/heaps/restricted_heap_mtk.c b/drivers/dma-buf/heaps/restricted_heap_mtk.c
new file mode 100644
index 000000000000..a5f5365059cd
--- /dev/null
+++ b/drivers/dma-buf/heaps/restricted_heap_mtk.c
@@ -0,0 +1,113 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF restricted heap exporter for MediaTek
+ *
+ * Copyright (C) 2024 MediaTek Inc.
+ */
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/tee_drv.h>
+#include <linux/uuid.h>
+
+#include "restricted_heap.h"
+
+#define TZ_TA_MEM_UUID_MTK		"4477588a-8476-11e2-ad15-e41f1390d676"
+
+#define TEE_PARAM_NUM			4
+
+enum mtk_secure_mem_type {
+	/*
+	 * MediaTek static chunk memory carved out for TrustZone. The memory
+	 * management is inside the TEE.
+	 */
+	MTK_SECURE_MEMORY_TYPE_CM_TZ	= 1,
+};
+
+struct mtk_restricted_heap_data {
+	struct tee_context	*tee_ctx;
+	u32			tee_session;
+
+	const enum mtk_secure_mem_type mem_type;
+};
+
+static int mtk_tee_ctx_match(struct tee_ioctl_version_data *ver, const void *data)
+{
+	return ver->impl_id == TEE_IMPL_ID_OPTEE;
+}
+
+static int mtk_tee_session_init(struct mtk_restricted_heap_data *data)
+{
+	struct tee_param t_param[TEE_PARAM_NUM] = {0};
+	struct tee_ioctl_open_session_arg arg = {0};
+	uuid_t ta_mem_uuid;
+	int ret;
+
+	data->tee_ctx = tee_client_open_context(NULL, mtk_tee_ctx_match, NULL, NULL);
+	if (IS_ERR(data->tee_ctx)) {
+		pr_err_once("%s: open context failed, ret=%ld\n", __func__,
+			    PTR_ERR(data->tee_ctx));
+		return -ENODEV;
+	}
+
+	arg.num_params = TEE_PARAM_NUM;
+	arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
+	ret = uuid_parse(TZ_TA_MEM_UUID_MTK, &ta_mem_uuid);
+	if (ret)
+		goto close_context;
+	memcpy(&arg.uuid, &ta_mem_uuid.b, sizeof(ta_mem_uuid));
+
+	ret = tee_client_open_session(data->tee_ctx, &arg, t_param);
+	if (ret < 0 || arg.ret) {
+		pr_err_once("%s: open session failed, ret=%d:%d\n",
+			    __func__, ret, arg.ret);
+		ret = -EINVAL;
+		goto close_context;
+	}
+	data->tee_session = arg.session;
+	return 0;

+close_context:
+	tee_client_close_context(data->tee_ctx);
+	return ret;
+}
+
+static int mtk_restricted_heap_init(struct restricted_heap *heap)
+{
+	struct mtk_restricted_heap_data *data = heap->priv_data;
+
+	if (!data->tee_ctx)
+		return mtk_tee_session_init(data);
+	return 0;
+}
+
+static const struct restricted_heap_ops mtk_restricted_heap_ops = {
+	.heap_init		= mtk_restricted_heap_init,
+};
+
+static struct mtk_restricted_heap_data mtk_restricted_heap_data = {
+	.mem_type		= MTK_SECURE_MEMORY_TYPE_CM_TZ,
+};
+
+static struct restricted_heap mtk_restricted_heaps[] = {
+	{
+		.name		= "restricted_mtk_cm",
+		.ops		= &mtk_restricted_heap_ops,
+		.priv_data	= &mtk_restricted_heap_data,
+	},
+};
+
+static int mtk_restricted_heap_initialize(void)
+{
+	struct restricted_heap *rstrd_heap = mtk_restricted_heaps;
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(mtk_restricted_heaps); i++, rstrd_heap++)
+		restricted_heap_add(rstrd_heap);
+	return 0;
+}
+module_init(mtk_restricted_heap_initialize);
+MODULE_DESCRIPTION("MediaTek Restricted Heap Driver");
+MODULE_LICENSE("GPL");
Add a TEE service call for the MediaTek heap. We have a limited number of hardware entries to protect memory, therefore we cannot protect memory arbitrarily, and our secure memory management actually lives inside OP-TEE.
The kernel just tells the TEE what size it wants, and the TEE returns a "secure handle"/"secure address". To make the name more general, we call it "restricted_addr" here. The restricted_addr is a reference to a secure buffer within the TEE. We put it in sg_dma_address; please see the comment in the code.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
 drivers/dma-buf/heaps/restricted_heap.c     | 17 ++++
 drivers/dma-buf/heaps/restricted_heap.h     |  3 +
 drivers/dma-buf/heaps/restricted_heap_mtk.c | 93 +++++++++++++++++++++
 3 files changed, 113 insertions(+)
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
index 4e9869ab4a85..148dbf5662c2 100644
--- a/drivers/dma-buf/heaps/restricted_heap.c
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -96,8 +96,23 @@ static struct sg_table *
 restricted_heap_map_dma_buf(struct dma_buf_attachment *attachment, enum dma_data_direction direct)
 {
 	struct restricted_heap_attachment *a = attachment->priv;
+	struct dma_buf *dmabuf = attachment->dmabuf;
+	struct restricted_buffer *restricted_buf = dmabuf->priv;
 	struct sg_table *table = a->table;

+	/*
+	 * Technically dma_address refers to the address used by HW, but for a restricted
+	 * buffer we don't know its dma_address in the kernel. Instead, we may know its
+	 * restricted address, which refers to the real buffer in the trusted or secure
+	 * world. Here we use this property to save the restricted address, and the user
+	 * will use it to obtain the real address in the trusted or secure world.
+	 *
+	 * Note: CONFIG_DMA_API_DEBUG requires this to be aligned with PAGE_SIZE.
+	 */
+	if (restricted_buf->restricted_addr) {
+		sg_dma_address(table->sgl) = restricted_buf->restricted_addr;
+		sg_dma_len(table->sgl) = restricted_buf->size;
+	}
 	return table;
 }

@@ -108,6 +123,8 @@ restricted_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_t
 	struct restricted_heap_attachment *a = attachment->priv;

 	WARN_ON(a->table != table);
+	sg_dma_address(table->sgl) = 0;
+	sg_dma_len(table->sgl) = 0;
 }

 static int
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
index cf5865f829fc..6c93f6d257dc 100644
--- a/drivers/dma-buf/heaps/restricted_heap.h
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -11,6 +11,9 @@
 struct restricted_buffer {
 	struct dma_heap		*heap;
 	size_t			size;
+
+	/* A reference to a buffer in the trusted or secure world. */
+	u64			restricted_addr;
 };

 struct restricted_heap {
diff --git a/drivers/dma-buf/heaps/restricted_heap_mtk.c b/drivers/dma-buf/heaps/restricted_heap_mtk.c
index a5f5365059cd..902add95bb7e 100644
--- a/drivers/dma-buf/heaps/restricted_heap_mtk.c
+++ b/drivers/dma-buf/heaps/restricted_heap_mtk.c
@@ -25,6 +25,27 @@ enum mtk_secure_mem_type {
 	MTK_SECURE_MEMORY_TYPE_CM_TZ	= 1,
 };

+enum mtk_secure_buffer_tee_cmd {
+	/*
+	 * Allocate the zeroed secure memory from TEE.
+	 *
+	 * [in]  value[0].a: The buffer size.
+	 *       value[0].b: alignment.
+	 * [in]  value[1].a: enum mtk_secure_mem_type.
+	 * [out] value[3].a: The secure handle.
+	 */
+	MTK_TZCMD_SECMEM_ZALLOC	= 0x10000, /* MTK TEE Command ID Base */
+
+	/*
+	 * Free secure memory.
+	 *
+	 * [in]  value[0].a: The secure handle of this buffer; it's value[3].a of
+	 *                   MTK_TZCMD_SECMEM_ZALLOC.
+	 * [out] value[1].a: return value, 0 means successful, otherwise fail.
+	 */
+	MTK_TZCMD_SECMEM_FREE	= 0x10001,
+};
+
 struct mtk_restricted_heap_data {
 	struct tee_context	*tee_ctx;
 	u32			tee_session;
@@ -74,6 +95,74 @@ static int mtk_tee_session_init(struct mtk_restricted_heap_data *data)
 	return ret;
 }

+static int mtk_tee_service_call(struct tee_context *tee_ctx, u32 session,
+				unsigned int command, struct tee_param *params)
+{
+	struct tee_ioctl_invoke_arg arg = {0};
+	int ret;
+
+	arg.num_params = TEE_PARAM_NUM;
+	arg.session = session;
+	arg.func = command;
+
+	ret = tee_client_invoke_func(tee_ctx, &arg, params);
+	if (ret < 0 || arg.ret) {
+		pr_err("%s: cmd %d ret %d:%x.\n", __func__, command, ret, arg.ret);
+		ret = -EOPNOTSUPP;
+	}
+	return ret;
+}
+
+static int mtk_tee_restrict_memory(struct restricted_heap *heap, struct restricted_buffer *buf)
+{
+	struct mtk_restricted_heap_data *data = heap->priv_data;
+	struct tee_param params[TEE_PARAM_NUM] = {0};
+	int ret;
+
+	params[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
+	params[0].u.value.a = buf->size;
+	params[0].u.value.b = PAGE_SIZE;
+	params[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
+	params[1].u.value.a = data->mem_type;
+	params[2].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
+	params[3].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT;
+	ret = mtk_tee_service_call(data->tee_ctx, data->tee_session,
+				   MTK_TZCMD_SECMEM_ZALLOC, params);
+	if (ret)
+		return -ENOMEM;
+
+	buf->restricted_addr = params[3].u.value.a;
+	return 0;
+}
+
+static void mtk_tee_unrestrict_memory(struct restricted_heap *heap, struct restricted_buffer *buf)
+{
+	struct mtk_restricted_heap_data *data = heap->priv_data;
+	struct tee_param params[TEE_PARAM_NUM] = {0};
+
+	params[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
+	params[0].u.value.a = buf->restricted_addr;
+	params[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT;
+
+	mtk_tee_service_call(data->tee_ctx, data->tee_session,
+			     MTK_TZCMD_SECMEM_FREE, params);
+	if (params[1].u.value.a)
+		pr_err("%s, Unrestrict buffer(0x%llx) fail(%lld) from TEE.\n",
+		       heap->name, buf->restricted_addr, params[1].u.value.a);
+}
+
+static int
+mtk_restricted_memory_allocate(struct restricted_heap *heap, struct restricted_buffer *buf)
+{
+	/* The memory allocating is within the TEE. */
+	return 0;
+}
+
+static void
+mtk_restricted_memory_free(struct restricted_heap *heap, struct restricted_buffer *buf)
+{
+}
+
 static int mtk_restricted_heap_init(struct restricted_heap *heap)
 {
 	struct mtk_restricted_heap_data *data = heap->priv_data;
@@ -85,6 +174,10 @@ static int mtk_restricted_heap_init(struct restricted_heap *heap)

 static const struct restricted_heap_ops mtk_restricted_heap_ops = {
 	.heap_init		= mtk_restricted_heap_init,
+	.memory_alloc		= mtk_restricted_memory_allocate,
+	.memory_free		= mtk_restricted_memory_free,
+	.memory_restrict	= mtk_tee_restrict_memory,
+	.memory_unrestrict	= mtk_tee_unrestrict_memory,
 };

 static struct mtk_restricted_heap_data mtk_restricted_heap_data = {
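For illustration, a sketch of how a kernel importer (e.g. the vcodec driver) could recover that handle through the standard dma-buf attach/map path; example_get_restricted_addr is an illustrative name, "dev" is the importing device, and error handling is omitted:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/scatterlist.h>

static u64 example_get_restricted_addr(struct device *dev, int fd)
{
	struct dma_buf *dmabuf = dma_buf_get(fd);
	struct dma_buf_attachment *attach = dma_buf_attach(dmabuf, dev);
	struct sg_table *sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE);
	u64 restricted_addr;

	/* Not a real DMA address: a reference the secure world can resolve. */
	restricted_addr = sg_dma_address(sgt->sgl);

	dma_buf_unmap_attachment(attach, sgt, DMA_TO_DEVICE);
	dma_buf_detach(dmabuf, attach);
	dma_buf_put(dmabuf);
	return restricted_addr;
}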
Create a new MediaTek CMA heap from the CMA reserved buffer.
In this heap, when allocating a buffer for the first time, cma_alloc claims the whole CMA range, whose base and size are then sent to the TEE to protect and manage. Later allocations just increase cma_used_size.
When SVP is done, cma_release releases the buffer, and the kernel may reuse it.
For the "CMA" restricted heap, "struct cma *cma" is a common property, not just for MediaTek, so put it into "struct restricted_heap" instead of our private data.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
 drivers/dma-buf/heaps/Kconfig               |   2 +-
 drivers/dma-buf/heaps/restricted_heap.h     |   4 +
 drivers/dma-buf/heaps/restricted_heap_mtk.c | 120 +++++++++++++++++++-
 3 files changed, 123 insertions(+), 3 deletions(-)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 84f748fb2856..58903bc62ac8 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -24,7 +24,7 @@ config DMABUF_HEAPS_RESTRICTED

 config DMABUF_HEAPS_RESTRICTED_MTK
 	bool "MediaTek DMA-BUF Restricted Heap"
-	depends on DMABUF_HEAPS_RESTRICTED && TEE=y
+	depends on DMABUF_HEAPS_RESTRICTED && DMA_CMA && TEE=y
 	help
 	  Enable restricted dma-buf heaps for MediaTek platform. This heap is backed by
 	  TEE client interfaces. If in doubt, say N.
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
index 6c93f6d257dc..8d17c39b65de 100644
--- a/drivers/dma-buf/heaps/restricted_heap.h
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -21,6 +21,10 @@ struct restricted_heap {

 	const struct restricted_heap_ops *ops;

+	struct cma		*cma;
+	unsigned long		cma_paddr;
+	unsigned long		cma_size;
+
 	void			*priv_data;
 };

diff --git a/drivers/dma-buf/heaps/restricted_heap_mtk.c b/drivers/dma-buf/heaps/restricted_heap_mtk.c
index 902add95bb7e..0180d04076e2 100644
--- a/drivers/dma-buf/heaps/restricted_heap_mtk.c
+++ b/drivers/dma-buf/heaps/restricted_heap_mtk.c
@@ -4,9 +4,11 @@
  *
  * Copyright (C) 2024 MediaTek Inc.
  */
+#include <linux/cma.h>
 #include <linux/dma-buf.h>
 #include <linux/err.h>
 #include <linux/module.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/slab.h>
 #include <linux/tee_drv.h>
 #include <linux/uuid.h>
@@ -23,6 +25,13 @@ enum mtk_secure_mem_type {
 	 * management is inside the TEE.
	 */
 	MTK_SECURE_MEMORY_TYPE_CM_TZ	= 1,
+	/*
+	 * MediaTek dynamic chunk memory carved out from CMA.
+	 * In the normal case, the CMA can be used in the kernel. When SVP starts, we
+	 * allocate the whole CMA and pass the whole CMA PA and size into the TEE to
+	 * protect it; the detailed memory management is then also inside the TEE.
+	 */
+	MTK_SECURE_MEMORY_TYPE_CM_CMA	= 2,
 };

 enum mtk_secure_buffer_tee_cmd {
@@ -32,6 +41,8 @@ enum mtk_secure_buffer_tee_cmd {
 	 * [in]  value[0].a: The buffer size.
 	 *       value[0].b: alignment.
 	 * [in]  value[1].a: enum mtk_secure_mem_type.
+	 * [in]  value[2].a: pa base in cma case.
+	 *       value[2].b: The buffer size in cma case.
 	 * [out] value[3].a: The secure handle.
 	 */
 	MTK_TZCMD_SECMEM_ZALLOC	= 0x10000, /* MTK TEE Command ID Base */
@@ -52,6 +63,9 @@ struct mtk_restricted_heap_data {

 	const enum mtk_secure_mem_type mem_type;

+	struct page		*cma_page;
+	unsigned long		cma_used_size;
+	struct mutex		lock; /* lock for cma_used_size */
 };

 static int mtk_tee_ctx_match(struct tee_ioctl_version_data *ver, const void *data)
@@ -125,6 +139,10 @@ static int mtk_tee_restrict_memory(struct restricted_heap *heap, struct restrict
 	params[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
 	params[1].u.value.a = data->mem_type;
 	params[2].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
+	if (heap->cma && data->mem_type == MTK_SECURE_MEMORY_TYPE_CM_CMA) {
+		params[2].u.value.a = heap->cma_paddr;
+		params[2].u.value.b = heap->cma_size;
+	}
 	params[3].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT;
 	ret = mtk_tee_service_call(data->tee_ctx, data->tee_session,
 				   MTK_TZCMD_SECMEM_ZALLOC, params);
@@ -163,6 +181,48 @@ mtk_restricted_memory_free(struct restricted_heap *heap, struct restricted_buffe
 {
 }

+static int mtk_restricted_memory_cma_allocate(struct restricted_heap *heap,
+					      struct restricted_buffer *buf)
+{
+	struct mtk_restricted_heap_data *data = heap->priv_data;
+	int ret = 0;
+	/*
+	 * Allocate the CMA only when allocating a buffer for the first time, and just
+	 * increase cma_used_size the other times. The real memory allocating is
+	 * within the TEE.
+	 */
+	mutex_lock(&data->lock);
+	if (!data->cma_used_size) {
+		data->cma_page = cma_alloc(heap->cma, heap->cma_size >> PAGE_SHIFT,
+					   get_order(PAGE_SIZE), false);
+		if (!data->cma_page) {
+			ret = -ENOMEM;
+			goto out_unlock;
+		}
+	} else if (data->cma_used_size + buf->size > heap->cma_size) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+	data->cma_used_size += buf->size;
+
+out_unlock:
+	mutex_unlock(&data->lock);
+	return ret;
+}
+
+static void mtk_restricted_memory_cma_free(struct restricted_heap *heap,
+					   struct restricted_buffer *buf)
+{
+	struct mtk_restricted_heap_data *data = heap->priv_data;
+
+	mutex_lock(&data->lock);
+	data->cma_used_size -= buf->size;
+	if (!data->cma_used_size)
+		cma_release(heap->cma, data->cma_page,
+			    heap->cma_size >> PAGE_SHIFT);
+	mutex_unlock(&data->lock);
+}
+
 static int mtk_restricted_heap_init(struct restricted_heap *heap)
 {
 	struct mtk_restricted_heap_data *data = heap->priv_data;
@@ -184,21 +244,77 @@ static struct mtk_restricted_heap_data mtk_restricted_heap_data = {
 	.mem_type		= MTK_SECURE_MEMORY_TYPE_CM_TZ,
 };

+static const struct restricted_heap_ops mtk_restricted_heap_ops_cma = {
+	.heap_init		= mtk_restricted_heap_init,
+	.memory_alloc		= mtk_restricted_memory_cma_allocate,
+	.memory_free		= mtk_restricted_memory_cma_free,
+	.memory_restrict	= mtk_tee_restrict_memory,
+	.memory_unrestrict	= mtk_tee_unrestrict_memory,
+};
+
+static struct mtk_restricted_heap_data mtk_restricted_heap_data_cma = {
+	.mem_type		= MTK_SECURE_MEMORY_TYPE_CM_CMA,
+};
+
 static struct restricted_heap mtk_restricted_heaps[] = {
 	{
 		.name		= "restricted_mtk_cm",
 		.ops		= &mtk_restricted_heap_ops,
 		.priv_data	= &mtk_restricted_heap_data,
 	},
+	{
+		.name		= "restricted_mtk_cma",
+		.ops		= &mtk_restricted_heap_ops_cma,
+		.priv_data	= &mtk_restricted_heap_data_cma,
+	},
 };

+static int __init mtk_restricted_cma_init(struct reserved_mem *rmem)
+{
+	struct restricted_heap *rstrd_heap = mtk_restricted_heaps, *rstrd_heap_cma = NULL;
+	struct mtk_restricted_heap_data *data;
+	struct cma *cma;
+	int ret, i;
+
+	for (i = 0; i < ARRAY_SIZE(mtk_restricted_heaps); i++, rstrd_heap++) {
+		data = rstrd_heap->priv_data;
+		if (data->mem_type == MTK_SECURE_MEMORY_TYPE_CM_CMA) {
+			rstrd_heap_cma = rstrd_heap;
+			break;
+		}
+	}
+	if (!rstrd_heap_cma)
+		return -EINVAL;
+
+	ret = cma_init_reserved_mem(rmem->base, rmem->size, 0, rmem->name,
+				    &cma);
+	if (ret) {
+		pr_err("%s: %s set up CMA fail\n", __func__, rmem->name);
+		return ret;
+	}
+
+	rstrd_heap_cma->cma = cma;
+	rstrd_heap_cma->cma_paddr = rmem->base;
+	rstrd_heap_cma->cma_size = rmem->size;
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(restricted_cma, "mediatek,dynamic-restricted-region",
+		       mtk_restricted_cma_init);
+
 static int mtk_restricted_heap_initialize(void)
 {
 	struct restricted_heap *rstrd_heap = mtk_restricted_heaps;
+	struct mtk_restricted_heap_data *data;
 	unsigned int i;

-	for (i = 0; i < ARRAY_SIZE(mtk_restricted_heaps); i++, rstrd_heap++)
-		restricted_heap_add(rstrd_heap);
+	for (i = 0; i < ARRAY_SIZE(mtk_restricted_heaps); i++, rstrd_heap++) {
+		data = rstrd_heap->priv_data;
+		if (data->mem_type == MTK_SECURE_MEMORY_TYPE_CM_CMA && !rstrd_heap->cma)
+			continue;
+		if (!restricted_heap_add(rstrd_heap))
+			mutex_init(&data->lock);
+	}
 	return 0;
 }
 module_init(mtk_restricted_heap_initialize);
On Fri, Jan 12, 2024 at 05:20:11PM +0800, Yong Wu wrote:
> Add the dma_ops for this restricted heap. For restricted buffer,
> cache_ops/mmap are not allowed, thus return EPERM for them.
>
> Signed-off-by: Yong Wu <yong.wu@mediatek.com>
> ---
>  drivers/dma-buf/heaps/restricted_heap.c | 103 ++++++++++++++++++++++++
>  1 file changed, 103 insertions(+)
>
> diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
> index 8c266a0f6192..ec4c63d2112d 100644
> --- a/drivers/dma-buf/heaps/restricted_heap.c
> +++ b/drivers/dma-buf/heaps/restricted_heap.c
> @@ -12,6 +12,10 @@
>
>  #include "restricted_heap.h"
>
> +struct restricted_heap_attachment {
> +	struct sg_table			*table;
> +};
> +
>  static int
>  restricted_heap_memory_allocate(struct restricted_heap *heap, struct restricted_buffer *buf)
>  {
> @@ -45,6 +49,104 @@ restricted_heap_memory_free(struct restricted_heap *heap, struct restricted_buff
>  	ops->memory_free(heap, buf);
>  }
>
> +static int restricted_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
> +{
> +	struct restricted_buffer *restricted_buf = dmabuf->priv;
> +	struct restricted_heap_attachment *a;
> +	struct sg_table *table;
> +	int ret;
> +
> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
> +	if (!a)
> +		return -ENOMEM;
> +
> +	table = kzalloc(sizeof(*table), GFP_KERNEL);
> +	if (!table) {
> +		ret = -ENOMEM;
> +		goto err_free_attach;
> +	}
> +
> +	ret = sg_alloc_table(table, 1, GFP_KERNEL);
> +	if (ret)
> +		goto err_free_sgt;
> +	sg_set_page(table->sgl, NULL, restricted_buf->size, 0);
So this is definitely broken and violating the dma-buf api rules. You cannot let attach succeed and supply a dummy/invalid sg table.
Two options:
- Reject ->attach for all these buffers with -EBUSY and provide instead a private api for these secure buffers, similar to how virtio_dma_buf has private virtio-specific apis. This interface would need to be standardized across all arm TEE users, so that we don't have a disastrous proliferation of apis.
- Allow ->attach, but _only_ for drivers/devices which can access the secure buffer correctly, and only if you can put the right secure buffer address into the sg table directly. If dma to a secure buffer for a given struct device * will not work correctly (i.e. without data corruption), you _must_ reject the attach attempt with -EBUSY.
The 2nd approach would be my preferred one, if it's technically possible.
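As a rough sketch of what that second option might look like in the heap's attach hook, hedged heavily: device_can_access_restricted_mem() is a placeholder for whatever per-device capability check a platform would need, not an existing kernel API, and the sg-table population is left abstract:

/* Hypothetical capability check, to be provided per platform. */
bool device_can_access_restricted_mem(struct device *dev);

static int restricted_heap_attach_checked(struct dma_buf *dmabuf,
					  struct dma_buf_attachment *attachment)
{
	/* Reject importers that cannot safely access the restricted buffer. */
	if (!device_can_access_restricted_mem(attachment->dev))
		return -EBUSY;

	/* ...then build an sg table carrying the real restricted address... */
	return 0;
}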
Also my understanding is that arm TEE is standardized, so I think we'll at least want some acks from other soc people whether this will work for them too.
Finally the usual drill:
- this also needs the driver side support, if there's any changes needed. Just the new heap isn't enough.
- and for drm you need open userspace for this. Doesn't have to be the full content protection decode pipeline; the drivers in drm that landed secure buffer support thus far enabled it using the EGL_EXT_protected_content extension using gl, which side steps all the complications around content decryption keys and support.
Cheers, Sima
On Fri, Jan 12, 2024 at 10:41:14AM +0100, Daniel Vetter wrote:
> On Fri, Jan 12, 2024 at 05:20:11PM +0800, Yong Wu wrote:
> > Add the dma_ops for this restricted heap. For restricted buffer,
> > cache_ops/mmap are not allowed, thus return EPERM for them.
> [snip]
> Finally the usual drill:
> - this also needs the driver side support, if there's any changes
>   needed. Just the new heap isn't enough.
Ok I quickly scrolled through your drm patches and that confirms that the current dma-buf interface you're implementing is just completely breaking the api. And you need to paper over that with all kinds of very icky special-casing.
So definitely need to rethink the overall design between dma-buf heaps and drivers here. -Sima
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Fri, 12 Jan 2024 17:20:07 +0800 Yong Wu yong.wu@mediatek.com wrote:
> [snip]
> Change note:
> v4: 1) Rename the heap name from "secure" to "restricted", as suggested
> by Simon/Pekka. There are still several "secure" strings in the MTK file
> since we use the ARM platform, where this is called the "secure world"/
> "secure command".
Hi,
I am really happy about this name change, thank you.
It is unfortunate that ARM specifications use the word "secure", but so be it. When referring to specs, it's good to use the spec wording.
In everything that is not a direct reference to some spec though it would be nice to use the "restricted" terminology if possible. I presume there are other vendors who use words other than what ARM uses for similar concepts. A common vocabulary would be nice.
Thanks, pq
On Fri, Jan 12, 2024 at 1:21 AM Yong Wu yong.wu@mediatek.com wrote:
Add "struct restricted_heap_ops". For the restricted memory, totally there are two steps: a) memory_alloc: Allocate the buffer in kernel; b) memory_restrict: Restrict/Protect/Secure that buffer. The memory_alloc is mandatory while memory_restrict is optinal since it may be part of memory_alloc.
Signed-off-by: Yong Wu yong.wu@mediatek.com
drivers/dma-buf/heaps/restricted_heap.c | 41 ++++++++++++++++++++++++- drivers/dma-buf/heaps/restricted_heap.h | 12 ++++++++ 2 files changed, 52 insertions(+), 1 deletion(-)
Thanks for sending this out! A thought below.
> diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
> index 443028f6ba3b..ddeaf9805708 100644
> --- a/drivers/dma-buf/heaps/restricted_heap.h
> +++ b/drivers/dma-buf/heaps/restricted_heap.h
> @@ -15,6 +15,18 @@ struct restricted_buffer {
>
>  struct restricted_heap {
>  	const char		*name;
> +
> +	const struct restricted_heap_ops *ops;
> +};
> +
> +struct restricted_heap_ops {
> +	int (*heap_init)(struct restricted_heap *heap);
> +
> +	int (*memory_alloc)(struct restricted_heap *heap, struct restricted_buffer *buf);
> +	void (*memory_free)(struct restricted_heap *heap, struct restricted_buffer *buf);
> +
> +	int (*memory_restrict)(struct restricted_heap *heap, struct restricted_buffer *buf);
> +	void (*memory_unrestrict)(struct restricted_heap *heap, struct restricted_buffer *buf);
> +};
>
>  int restricted_heap_add(struct restricted_heap *rstrd_heap);
So, I'm a little worried here, because you're basically turning the restricted_heap dma-buf heap driver into a framework itself, where this patch is creating a subdriver framework.
Part of my hesitancy is that you're introducing this under the dma-buf heaps. For things like CMA, that's more of a core subsystem that has multiple users, and exporting cma buffers via dmabuf heaps is just an additional interface. What I like about that is the core kernel has to define the semantics for the memory type and then the dmabuf heap is just exporting that well understood type of buffer.
But with these restricted buffers, I'm not sure there's yet a well understood set of semantics nor a central abstraction for that which other drivers use directly.
I know part of this effort here is to start to centralize all these vendor specific restricted buffer implementations, which I think is great, but I just worry that in doing it in the dmabuf heap interface, it is a bit too user-facing. The likelihood of encoding a particular semantic to the singular "restricted_heap" name seems high.
Similarly we might run into systems with multiple types of restricted buffers (imagine a discrete gpu having one type along with TEE protected buffers also being used on the same system).
So the one question I have: Why not just have a mediatek specific restricted_heap dmabuf heap driver? Since there's already been some talk of slight semantic differences in various restricted buffer implementations, should we just start with separately named dmabuf heaps for each? Maybe consolidating to a common name as more drivers arrive and we gain a better understanding of the variations of semantics folks are using?
thanks -john
On Fri, Jan 12, 2024 at 2:52 PM John Stultz jstultz@google.com wrote:
I know part of this effort here is to start to centralize all these vendor specific restricted buffer implementations, which I think is great, but I just worry that in doing it in the dmabuf heap interface, it is a bit too user-facing. The likelihood of encoding a particular semantic to the singular "restricted_heap" name seems high.
In patch #5 it has the actual driver implementation for the MTK heap that includes the heap name of "restricted_mtk_cm", so there shouldn't be a heap named "restricted_heap" that's actually getting exposed. The restricted_heap code is a framework, and not a driver itself. Unless I'm missing something in this patchset...but that's the way it's *supposed* to be.
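To make that split concrete, here is a minimal sketch of how a vendor driver sits on top of the framework as described. Only restricted_heap_add() and the two structs come from this series; the mtk_cm_* callback bodies are invented placeholders:

#include <linux/module.h>
#include "restricted_heap.h"

/* Placeholder: allocate from the vendor's reserved pool. */
static int mtk_cm_alloc(struct restricted_heap *heap, struct restricted_buffer *buf)
{
	return 0;
}

/* Placeholder: return the memory to the pool. */
static void mtk_cm_free(struct restricted_heap *heap, struct restricted_buffer *buf)
{
}

static const struct restricted_heap_ops mtk_cm_heap_ops = {
	.memory_alloc = mtk_cm_alloc,
	.memory_free  = mtk_cm_free,
	/* .memory_restrict is optional; it may be implied by memory_alloc */
};

static struct restricted_heap mtk_cm_heap = {
	/* userspace sees this name, not "restricted_heap" */
	.name = "restricted_mtk_cm",
	.ops  = &mtk_cm_heap_ops,
};

static int __init mtk_cm_heap_init(void)
{
	return restricted_heap_add(&mtk_cm_heap);
}
module_init(mtk_cm_heap_init);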
On Fri, Jan 12, 2024 at 3:27 PM Jeffrey Kardatzke jkardatzke@google.com wrote:
In patch #5 it has the actual driver implementation for the MTK heap that includes the heap name of "restricted_mtk_cm", so there shouldn't be a heap named "restricted_heap" that's actually getting exposed. The
Ah, I appreciate that clarification! Indeed, you're right the name is passed through. Apologies for missing that detail.
restricted_heap code is a framework, and not a driver itself. Unless I'm missing something in this patchset...but that's the way it's *supposed* to be.
So I guess I'm not sure I understand the benefit of the extra indirection. What then does the restricted_heap.c logic itself provide? The dmabuf heaps framework already provides a way to add heap implementations.
thanks -john
On Fri, Jan 12, 2024 at 3:51 PM John Stultz jstultz@google.com wrote:
So I guess I'm not sure I understand the benefit of the extra indirection. What then does the restricted_heap.c logic itself provide? The dmabuf heaps framework already provides a way to add heap implementations.
So in the v1 patchset, it was done with just a Mediatek specific heap with no framework or abstractions for another vendor to build on top of. The feedback was to make this more generic since Mediatek won't be the only vendor who wants a restricted heap..which is how it ended up here. There was more code in the framework before relating to TEE calls, but then that was moved to the vendor specific code since not all restricted heaps are allocated through a TEE.
This was also desirable for the V4L2 pieces since there's going to be a V4L2 flag set when using restricted dma_bufs (and it wants to validate that)....so in order to keep that more generic, there should be a higher level concept of restricted dma_bufs that isn't specific to a single vendor. One other thing that would ideally come out of this is a cleaner way to check that a dma_buf is restricted or not. The current V4L2 patchset just attaches the dma_buf and then checks if the page table is empty....and if so, it's restricted. But now I see there's other feedback indicating attaching a restricted dma_buf shouldn't even be allowed, so we'll need another strategy for detecting them. Ideally there is some function/macro like is_dma_buf_restricted(struct dma_buf*) that can indicate that...but we haven't come up with a good way to do that yet which doesn't involve adding another field to dma_buf or to dma_buf_ops (and if such a thing would be fine, then OK...but I had assumed we would get pushback on modifying either of those structs).
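For illustration, the helper Jeffrey describes could be as small as the sketch below, but note that it assumes a new "restricted" flag in dma_buf_ops which does not exist upstream; it is exactly the kind of core-structure change being debated:

#include <linux/dma-buf.h>

static inline bool is_dma_buf_restricted(struct dma_buf *dmabuf)
{
	/* hypothetical flag; would require extending struct dma_buf_ops */
	return dmabuf->ops && dmabuf->ops->restricted;
}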
On Fri, Jan 12, 2024 at 4:13 PM Jeffrey Kardatzke jkardatzke@google.com wrote:
So in the v1 patchset, it was done with just a Mediatek specific heap with no framework or abstractions for another vendor to build on top of. The feedback was to make this more generic since Mediatek won't be the only vendor who wants a restricted heap..which is how it ended up here. There was more code in the framework before relating to TEE calls, but then that was moved to the vendor specific code since not all restricted heaps are allocated through a TEE.
Yeah. I apologize, as I know how frustrating the contradictory feedback can be. I don't mean to demotivate. :(
I think folks would very much like to see consolidation around the various implementations, and I agree! I just worry that creating the common framework for this concept in a dmabuf heaps driver is maybe too peripheral/close to userland.
This was also desirable for the V4L2 pieces since there's going to be a V4L2 flag set when using restricted dma_bufs (and it wants to validate that)....so in order to keep that more generic, there should be a higher level concept of restricted dma_bufs that isn't specific to a single vendor. One other thing that would ideally come out of this is a cleaner way to check that a dma_buf is restricted or not.
Yeah. If there is a clear meaning to "restricted" here, I think having a query method on the dmabuf is reasonable. My only fret is if the meaning is too vague and userland starts depending on it meaning what it meant for vendor1, but doesn't mean for vendor2.
So we need some clarity in what "restricted" really means. For instance, it being not cpu mappable vs other potential variations like being cpu mappable, but not cpu accessible. Or not cpu mappable, but only mappable between a set of 3 devices (Which 3 devices?! How can we tell?).
And if there is variation, maybe we need to enumerate the types of "restricted" buffers so we can be specific when it's queried.
That's where maybe having the framework for this be more central or closer to the kernel mm code and not just a sub-type of a dmabuf heap driver might be better?
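As a rough illustration of that enumeration idea (all names invented here; nothing like this exists upstream):

enum restricted_buf_type {
	RESTRICTED_NO_CPU_MAP,		/* not CPU mappable at all */
	RESTRICTED_CPU_MAP_NO_ACCESS,	/* mappable, but CPU access faults */
	RESTRICTED_DEVICE_ALLOWLIST,	/* usable only by a fixed set of devices */
};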
The current V4L2 patchset just attaches the dma_buf and then checks if the page table is empty....and if so, it's restricted. But now I see there's other feedback indicating attaching a restricted dma_buf shouldn't even be allowed, so we'll need another strategy for detecting them. Ideally there is some function/macro like is_dma_buf_restricted(struct dma_buf*) that can indicate that...but we haven't come up with a good way to do that yet which doesn't involve adding another field to dma_buf or to dma_buf_ops (and if such a thing would be fine, then OK...but I had assumed we would get pushback on modifying either of those structs).
If there's a need and the best place to put something is in the dma_buf or dma_buf_ops, that's where it should go. Folks may reasonably disagree if it's the best place (there may be yet better spots for the state to sit in the abstractions), but for stuff going upstream, there's no reason to try to hack around things or smuggle state just to avoid changing core structures. Especially if core structures are internal only and have no ABI implications.
Sima's suggestion that attachments should fail if the device cannot properly map the restricted buffer makes sense to me. Though I don't quite see why all attachments should fail, and I don't really like the idea of a private api, but I need to look more at the suggested virtio example (but even they said that wasn't their preferred route).
My sense of attach was only that it was supposed to connect a device's interest in the buffer, allowing lazy allocation to satisfy various device constraints before first mapping - a design feature that I don't think anyone ever implemented. So my sense was it didn't have much meaning otherwise (but was a requirement to call before map). But that may have evolved since the early days.
And I'm sure the method to figure out if the attachment can work with the device may be complicated/difficult, so it sounding reasonable can be far from it being reasonable to implement.
And again, I don't mean to frustrate or demotivate here. I'm really excited to see this effort being pushed upstream!
thanks -john
On Fri, Jan 12, 2024 at 05:20:09PM +0800, Yong Wu wrote:
Initialize a restricted heap. Currently just add a null heap; prepare for the later patches.
Signed-off-by: Yong Wu yong.wu@mediatek.com
 drivers/dma-buf/heaps/Kconfig           |  9 ++++
 drivers/dma-buf/heaps/Makefile          |  3 +-
 drivers/dma-buf/heaps/restricted_heap.c | 67 +++++++++++++++++++++++++
 drivers/dma-buf/heaps/restricted_heap.h | 22 ++++++++
 4 files changed, 100 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma-buf/heaps/restricted_heap.c
 create mode 100644 drivers/dma-buf/heaps/restricted_heap.h
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..e54506f480ea 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA
 	  Choose this option to enable dma-buf CMA heap. This heap is backed
 	  by the Contiguous Memory Allocator (CMA). If your system has these
 	  regions, you should say Y here.
+
+config DMABUF_HEAPS_RESTRICTED
+	bool "DMA-BUF Restricted Heap"
+	depends on DMABUF_HEAPS
+	help
+	  Choose this option to enable dma-buf restricted heap. The purpose of this
+	  heap is to manage buffers that are inaccessible to the kernel and user space.
+	  There may be several ways to restrict it, for example it may be encrypted or
+	  protected by a TEE or hypervisor. If in doubt, say N.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..a2437c1817e2 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED) += restricted_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
new file mode 100644
index 000000000000..fd7c82abd42e
--- /dev/null
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF restricted heap exporter
+ *
+ * Copyright (C) 2024 MediaTek Inc.
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+
+#include "restricted_heap.h"
+
+static struct dma_buf *
+restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
+			 unsigned long fd_flags, unsigned long heap_flags)
+{
+	struct restricted_buffer *restricted_buf;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	int ret;
+
+	restricted_buf = kzalloc(sizeof(*restricted_buf), GFP_KERNEL);
+	if (!restricted_buf)
+		return ERR_PTR(-ENOMEM);
+
+	restricted_buf->size = ALIGN(size, PAGE_SIZE);
+	restricted_buf->heap = heap;
+
+	exp_info.exp_name = dma_heap_get_name(heap);
+	exp_info.size = restricted_buf->size;
+	exp_info.flags = fd_flags;
+	exp_info.priv = restricted_buf;
+
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err_free_buf;
+	}
+
+	return dmabuf;
+
+err_free_buf:
+	kfree(restricted_buf);
+	return ERR_PTR(ret);
+}
+
+static const struct dma_heap_ops restricted_heap_ops = {
+	.allocate = restricted_heap_allocate,
+};
+
+int restricted_heap_add(struct restricted_heap *rstrd_heap)
Nothing wrong, but what about shortening rstrd_heap throughout the patch set to "rheap"? I would find that easier to read.
+{
+	struct dma_heap_export_info exp_info;
+	struct dma_heap *heap;
+
+	exp_info.name = rstrd_heap->name;
+	exp_info.ops = &restricted_heap_ops;
+	exp_info.priv = (void *)rstrd_heap;
+
+	heap = dma_heap_add(&exp_info);
+	if (IS_ERR(heap))
+		return PTR_ERR(heap);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(restricted_heap_add);
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
new file mode 100644
index 000000000000..443028f6ba3b
--- /dev/null
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Restricted heap Header.
+ *
+ * Copyright (C) 2024 MediaTek, Inc.
+ */
+
+#ifndef _DMABUF_RESTRICTED_HEAP_H_
+#define _DMABUF_RESTRICTED_HEAP_H_
+
+struct restricted_buffer {
+	struct dma_heap		*heap;
+	size_t			size;
+};
+
+struct restricted_heap {
+	const char		*name;
+};
+
+int restricted_heap_add(struct restricted_heap *rstrd_heap);
+
+#endif
--
2.25.1
On Fri, Jan 12, 2024 at 05:20:10PM +0800, Yong Wu wrote:
Add "struct restricted_heap_ops". For the restricted memory, totally there are two steps: a) memory_alloc: Allocate the buffer in kernel; b) memory_restrict: Restrict/Protect/Secure that buffer. The memory_alloc is mandatory while memory_restrict is optinal since it may
s/optinal/optional/
be part of memory_alloc.
Signed-off-by: Yong Wu yong.wu@mediatek.com
 drivers/dma-buf/heaps/restricted_heap.c | 41 ++++++++++++++++++++++++-
 drivers/dma-buf/heaps/restricted_heap.h | 12 ++++++++
 2 files changed, 52 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
index fd7c82abd42e..8c266a0f6192 100644
--- a/drivers/dma-buf/heaps/restricted_heap.c
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -12,10 +12,44 @@

 #include "restricted_heap.h"

+static int
+restricted_heap_memory_allocate(struct restricted_heap *heap, struct restricted_buffer *buf)
+{
+	const struct restricted_heap_ops *ops = heap->ops;
+	int ret;
+
+	ret = ops->memory_alloc(heap, buf);
+	if (ret)
+		return ret;
+
+	if (ops->memory_restrict) {
+		ret = ops->memory_restrict(heap, buf);
+		if (ret)
+			goto memory_free;
+	}
+	return 0;
+
+memory_free:
+	ops->memory_free(heap, buf);
+	return ret;
+}
+
+static void
+restricted_heap_memory_free(struct restricted_heap *heap, struct restricted_buffer *buf)
+{
+	const struct restricted_heap_ops *ops = heap->ops;
+
+	if (ops->memory_unrestrict)
+		ops->memory_unrestrict(heap, buf);
+
+	ops->memory_free(heap, buf);
+}
+
 static struct dma_buf *
 restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 			 unsigned long fd_flags, unsigned long heap_flags)
 {
+	struct restricted_heap *restricted_heap = dma_heap_get_drvdata(heap);
 	struct restricted_buffer *restricted_buf;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	struct dma_buf *dmabuf;
@@ -28,6 +62,9 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 	restricted_buf->size = ALIGN(size, PAGE_SIZE);
 	restricted_buf->heap = heap;

+	ret = restricted_heap_memory_allocate(restricted_heap, restricted_buf);
Can we guarantee that "restricted_heap" here isn't NULL (i.e., heap->priv)? If not, perhaps we should consider adding a check for NULL in the restricted_heap_memory_allocate() function?
+	if (ret)
+		goto err_free_buf;
+
 	exp_info.exp_name = dma_heap_get_name(heap);
 	exp_info.size = restricted_buf->size;
 	exp_info.flags = fd_flags;
@@ -36,11 +73,13 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 	dmabuf = dma_buf_export(&exp_info);
 	if (IS_ERR(dmabuf)) {
 		ret = PTR_ERR(dmabuf);
-		goto err_free_buf;
+		goto err_free_restricted_mem;
 	}

 	return dmabuf;

+err_free_restricted_mem:
+	restricted_heap_memory_free(restricted_heap, restricted_buf);
 err_free_buf:
 	kfree(restricted_buf);
 	return ERR_PTR(ret);
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
index 443028f6ba3b..ddeaf9805708 100644
--- a/drivers/dma-buf/heaps/restricted_heap.h
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -15,6 +15,18 @@ struct restricted_buffer {

 struct restricted_heap {
 	const char	*name;
+
+	const struct restricted_heap_ops *ops;
+};
+
+struct restricted_heap_ops {
This has the same name as used for the dma_heap_ops in the file restricted_heap.c, which might be a little bit confusing, or?
+	int	(*heap_init)(struct restricted_heap *heap);
+
+	int	(*memory_alloc)(struct restricted_heap *heap, struct restricted_buffer *buf);
+	void	(*memory_free)(struct restricted_heap *heap, struct restricted_buffer *buf);
+
+	int	(*memory_restrict)(struct restricted_heap *heap, struct restricted_buffer *buf);
+	void	(*memory_unrestrict)(struct restricted_heap *heap, struct restricted_buffer *buf);
Is the prefix "memory_" superfluous here in these ops?
Also related to a comment on the prior patch: the name here is "heap" for restricted_heap, but below you use rstrd_heap. It's the same struct, so I would advise using the same name to avoid confusion when reading the code. As mentioned before, I think the name "rheap" would be a good choice.
 };

 int restricted_heap_add(struct restricted_heap *rstrd_heap);
--
2.25.1
On Fri, Jan 12, 2024 at 05:23:07PM -0800, John Stultz wrote:
So we need some clarity in what "restricted" really means. For instance, it being not cpu mappable vs other potential variations like being cpu mappable, but not cpu accessible. Or not cpu mappable, but only mappable between a set of 3 devices (Which 3 devices?! How can we tell?).
Can we flip things around? I.e., instead of saying which devices are allowed to use the restricted buffer, can we instead say where it's not allowed to be used? My view has been that by default the contents of the types of buffers we're talking about here are only accessible to things running on the secure side, i.e., typically S-EL3, S-EL1 and a specific Trusted Application running in S-EL0. I guess that serves as some kind of baseline.
From there, things turn to a more dynamic nature, where firewalls etc. can be configured to give access to various IPs, blocks and runtimes.
I understand that it's nice to be able to know all this from the Linux kernel point of view, but does it have to be aware of this? What's the major drawback if Linux doesn't know about it?
On Wed, Jan 31, 2024 at 6:15 AM Joakim Bech joakim.bech@linaro.org wrote:
Can we flip things around? I.e., instead of saying which devices are allowed to use the restricted buffer, can we instead say where it's not allowed to be used? My view has been that by default the contents of the types of buffers we're talking about here are only accessible to things running on the secure side, i.e., typically S-EL3, S-EL1 and a specific Trusted Application running in S-EL0. I guess that serves as some kind of baseline.
This seems like you're suggesting enumerating badness? I'm not sure I understand the benefit of that.
From there, things turn to a more dynamic nature, where firewalls etc. can be configured to give access to various IPs, blocks and runtimes.
I understand that it's nice to be able to know all this from the Linux kernel point of view, but does it have to be aware of this? What's the major drawback if Linux doesn't know about it?
Indeed, it doesn't necessarily. The idea with DMABUF heaps is it provides a name to abstract/wrap a type of constraint. So you can then allocate buffers that satisfy that constraint.
Admittedly the downside with DMABUF heaps is that it has a bit of a gap in the abstraction in that we don't have a mapping of device constraints, so in Android gralloc provides a device specific usage/pipeline -> heap mapping. (Note: This I don't think is very problematic - I often use the example of fstab as device-specific config everyone is comfortable with - but I know folks would like to have something more generic)
I believe Christian has previously proposed to have the devices provide something like symlinks from their sysfs nodes to the heaps the device supports, which is an interesting idea to mostly close that issue. Applications could then scan the devices in a pipeline and find the type they all support, and the specific names wouldn't matter.
However, I'd expect the same hardware pipeline might support both restricted and unrestricted playback, so there would need to be some way to differentiate for the use case, so I'm not sure you can get away from some heap name to functionality mapping.
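For reference, the name-based lookup userspace does today is plain directory enumeration, since every registered heap shows up as a character device under /dev/dma_heap (minimal sketch):

#include <dirent.h>
#include <stdio.h>

int main(void)
{
	DIR *d = opendir("/dev/dma_heap");
	struct dirent *e;

	if (!d)
		return 1;
	while ((e = readdir(d)))
		if (e->d_name[0] != '.')
			printf("heap: %s\n", e->d_name);	/* e.g. "restricted_mtk_cm" */
	closedir(d);
	return 0;
}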
My main concern with this patch series is that it seems to want to bundle all the different types of "restricted" buffers that might be possible under a single "restricted" heap name.
Since we likely have devices with different security domains, thus different types of restrictions. So we may need to be able to differentiate between "secure video playback" uses and "protected device firmware" uses on the same machine. Thus, I'm not sure it's a good idea to bundle all of these under the same heap name.
thanks -john
On Fri, 2024-01-12 at 10:49 +0100, Daniel Vetter wrote:
On Fri, Jan 12, 2024 at 10:41:14AM +0100, Daniel Vetter wrote:
On Fri, Jan 12, 2024 at 05:20:11PM +0800, Yong Wu wrote:
Add the dma_ops for this restricted heap. For restricted buffer, cache_ops/mmap are not allowed, thus return EPERM for them.
Signed-off-by: Yong Wu yong.wu@mediatek.com
 drivers/dma-buf/heaps/restricted_heap.c | 103 ++++++++++++++++++++++++
 1 file changed, 103 insertions(+)

diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c
index 8c266a0f6192..ec4c63d2112d 100644
--- a/drivers/dma-buf/heaps/restricted_heap.c
+++ b/drivers/dma-buf/heaps/restricted_heap.c
@@ -12,6 +12,10 @@

 #include "restricted_heap.h"

+struct restricted_heap_attachment {
+	struct sg_table			*table;
+};
+
 static int
 restricted_heap_memory_allocate(struct restricted_heap *heap, struct restricted_buffer *buf)
 {
@@ -45,6 +49,104 @@ restricted_heap_memory_free(struct restricted_heap *heap, struct restricted_buff
 	ops->memory_free(heap, buf);
 }

+static int restricted_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct restricted_buffer *restricted_buf = dmabuf->priv;
+	struct restricted_heap_attachment *a;
+	struct sg_table *table;
+	int ret;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = kzalloc(sizeof(*table), GFP_KERNEL);
+	if (!table) {
+		ret = -ENOMEM;
+		goto err_free_attach;
+	}
+
+	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+	if (ret)
+		goto err_free_sgt;
+	sg_set_page(table->sgl, NULL, restricted_buf->size, 0);
So this is definitely broken and violating the dma-buf api rules. You cannot let attach succeed and supply a dummy/invalid sg table.
Two options:

- Reject ->attach for all these buffers with -EBUSY and provide instead a private api for these secure buffers, similar to how virtio_dma_buf has private virtio-specific apis. This interface would need to be standardized across all arm TEE users, so that we don't have a disastrous proliferation of apis.

- Allow ->attach, but _only_ for drivers/devices which can access the secure buffer correctly, and only if you can put the right secure buffer address into the sg table directly. If dma to a secure buffer for a given struct device * will not work correctly (i.e. without data corruption), you _must_ reject the attach attempt with -EBUSY.
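A sketch of what that second option could look like in the heap's attach hook, assuming the platform can answer the access question; device_can_access_restricted() is a made-up placeholder for whatever firewall/TEE query that would be:

static int restricted_heap_attach(struct dma_buf *dmabuf,
				  struct dma_buf_attachment *attachment)
{
	struct restricted_buffer *restricted_buf = dmabuf->priv;

	if (!device_can_access_restricted(attachment->dev, restricted_buf))
		return -EBUSY;	/* DMA from this device would corrupt data */

	/* ...only now build an sg_table carrying the real secure address... */
	return 0;
}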
The 2nd approach would be my preferred one, if it's technically possible.

Also my understanding is that arm TEE is standardized, so I think we'll at least want some acks from other soc people whether this will work for them too.

Finally the usual drill:

- this also needs the driver side support, if there's any changes needed. Just the new heap isn't enough.
Ok I quickly scrolled through your drm patches and that confirms that the current dma-buf interface you're implementing is just completely breaking the api. And you need to paper over that with all kinds of very icky special-casing.

So definitely need to rethink the overall design between dma-buf heaps and drivers here.

-Sima

Hi,

Thanks very much for the review, and sorry for replying so late. We reconstructed our TEE commands so that the kernel can obtain the valid PA/pages, and then the sg operations can run normally.

I will send the next version. Thanks.
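A sketch of the reworked flow Yong describes, under the assumption that the TEE command now hands back a valid physical address; tee_get_restricted_pa() is a made-up placeholder for that MediaTek TEE call:

static int restricted_heap_init_sgt(struct restricted_buffer *buf,
				    struct sg_table *table)
{
	phys_addr_t pa = tee_get_restricted_pa(buf);	/* hypothetical */
	int ret;

	ret = sg_alloc_table(table, 1, GFP_KERNEL);
	if (ret)
		return ret;

	/* a real page instead of NULL, so sg/dma operations behave normally */
	sg_set_page(table->sgl, pfn_to_page(PHYS_PFN(pa)), buf->size, 0);
	return 0;
}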
- and for drm you need open userspace for this. Doesn't have to be the full content protection decode pipeline; the drivers in drm that landed secure buffer support thus far enabled it using the EGL_EXT_protected_content extension using gl, which side steps all the complications around content decryption keys and support.

Cheers, Sima
+	a->table = table;
+	attachment->priv = a;
+
+	return 0;
+
+err_free_sgt:
+	kfree(table);
+err_free_attach:
+	kfree(a);
+	return ret;
+}
+
+static void restricted_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *
+restricted_heap_map_dma_buf(struct dma_buf_attachment *attachment, enum dma_data_direction direct)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+
+	return table;
+}
+
+static void
+restricted_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *table,
+			      enum dma_data_direction direction)
+{
+	struct restricted_heap_attachment *a = attachment->priv;
+
+	WARN_ON(a->table != table);
+}
+
+static int
+restricted_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction)
+{
+	return -EPERM;
+}
+
+static int
+restricted_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction)
+{
+	return -EPERM;
+}
+
+static int restricted_heap_dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	return -EPERM;
+}
+
+static void restricted_heap_free(struct dma_buf *dmabuf)
+{
+	struct restricted_buffer *restricted_buf = dmabuf->priv;
+	struct restricted_heap *heap = dma_heap_get_drvdata(restricted_buf->heap);
+
+	restricted_heap_memory_free(heap, restricted_buf);
+	kfree(restricted_buf);
+}
+
+static const struct dma_buf_ops restricted_heap_buf_ops = {
+	.attach		= restricted_heap_attach,
+	.detach		= restricted_heap_detach,
+	.map_dma_buf	= restricted_heap_map_dma_buf,
+	.unmap_dma_buf	= restricted_heap_unmap_dma_buf,
+	.begin_cpu_access = restricted_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access	= restricted_heap_dma_buf_end_cpu_access,
+	.mmap		= restricted_heap_dma_buf_mmap,
+	.release	= restricted_heap_free,
+};
+
 static struct dma_buf *
 restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 			 unsigned long fd_flags, unsigned long heap_flags)
@@ -66,6 +168,7 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 	if (ret)
 		goto err_free_buf;
 	exp_info.exp_name = dma_heap_get_name(heap);
+	exp_info.ops = &restricted_heap_buf_ops;
 	exp_info.size = restricted_buf->size;
 	exp_info.flags = fd_flags;
 	exp_info.priv = restricted_buf;
--
2.25.1
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Hi Joakim,
Sorry for replying so late.
On Wed, 2024-01-31 at 14:53 +0100, Joakim Bech wrote:
On Fri, Jan 12, 2024 at 05:20:10PM +0800, Yong Wu wrote:
Add "struct restricted_heap_ops". For the restricted memory,
totally there
are two steps: a) memory_alloc: Allocate the buffer in kernel; b) memory_restrict: Restrict/Protect/Secure that buffer. The memory_alloc is mandatory while memory_restrict is optinal
since it may
s/optinal/optional/
Will Fix.
be part of memory_alloc.
Signed-off-by: Yong Wu yong.wu@mediatek.com
 drivers/dma-buf/heaps/restricted_heap.c | 41 ++++++++++++++++++++++++-
 drivers/dma-buf/heaps/restricted_heap.h | 12 ++++++++
 2 files changed, 52 insertions(+), 1 deletion(-)
[...]

+	ret = restricted_heap_memory_allocate(restricted_heap, restricted_buf);
Can we guarantee that "restricted_heap" here isn't NULL (i.e., heap->priv)? If not, perhaps we should consider adding a check for NULL in the restricted_heap_memory_allocate() function?
heap->priv is always set when dma_heap_add is called. If heap->priv were NULL, the KE (kernel exception) would already have occurred in restricted_heap_add. I don't think it can be NULL here; is that right?
[...]

+struct restricted_heap_ops {
This has the same name as used for the dma_heap_ops in the file restricted_heap.c, which might be a little bit confusing, or?
Thanks very much. I really didn't notice this. I will rename it.
+	int	(*heap_init)(struct restricted_heap *heap);
+	int	(*memory_alloc)(struct restricted_heap *heap, struct restricted_buffer *buf);
+	void	(*memory_free)(struct restricted_heap *heap, struct restricted_buffer *buf);
+	int	(*memory_restrict)(struct restricted_heap *heap, struct restricted_buffer *buf);
+	void	(*memory_unrestrict)(struct restricted_heap *heap, struct restricted_buffer *buf);
Is the prefix "memory_" superfluous here in these ops?
I will remove "memory_". But it looks like the "restrict" is a keyword, I can't use it or it will build fail. Therefore I plan to use alloc/free/restrict_buf/unrestrict_buf in next version.
Also related to a comment on the prior patch: the name here is "heap" for restricted_heap, but below you use rstrd_heap. It's the same struct, so I would advise using the same name to avoid confusion when reading the code. As mentioned before, I think the name "rheap" would be a good choice.
I will use "rheap" to replace everywhere.
Thanks.
 };

 int restricted_heap_add(struct restricted_heap *rstrd_heap);
--
2.25.1
--
// Regards
Joakim