From: Jason-jh Lin jason-jh.lin@mediatek.corp-partner.google.com
Memory Definitions:

secure memory  - Memory allocated in the TEE (Trusted Execution
                 Environment) which is inaccessible in the REE (Rich
                 Execution Environment, i.e. linux kernel/userspace).
secure handle  - Integer value which acts as a reference to 'secure
                 memory'. Used in communication between the TEE and REE
                 to reference 'secure memory'.
secure buffer  - 'secure memory' that is used to store decrypted,
                 compressed video or for other general purposes in the
                 TEE.
secure surface - 'secure memory' that is used to store graphic buffers.
Memory Usage in SVP:
The overall flow of SVP starts with encrypted video coming in from an
outside source into the REE:
1. The REE allocates a 'secure buffer' and sends the corresponding
   'secure handle', along with the encrypted, compressed video data, to
   the TEE.
2. The TEE decrypts the video and stores the result in the 'secure
   buffer'.
3. The REE allocates a 'secure surface' and passes the 'secure handles'
   for both the 'secure buffer' and the 'secure surface' into the TEE
   for video decoding.
4. The video decoder HW decodes the contents of the 'secure buffer' and
   places the result in the 'secure surface'.
5. The REE attaches the 'secure surface' to the overlay plane for
   rendering of the video.
Everything relating to ensuring security of the actual contents of the 'secure buffer' and 'secure surface' is out of scope for the REE and is the responsibility of the TEE.
The DRM driver handles the allocation of GEM objects that are backed by
a 'secure surface' and the display of a 'secure surface' on the overlay
plane. This introduces a new flag for object creation called
DRM_MTK_GEM_CREATE_RESTRICTED which indicates it should be a 'secure
surface'. All changes here are in MediaTek specific code.
---
TODO:
1) Drop MTK_DRM_IOCTL_GEM_CREATE and use DMA_HEAP_IOCTL_ALLOC in userspace
2) Make the DRM driver use the secure mailbox channel to handle both the
   normal and secure flows
3) Implement setting the mmsys routing table in the secure world series
---
Based on 3 series:
[1] v3 dma-buf: heaps: Add MediaTek secure heap
    - https://patchwork.kernel.org/project/linux-mediatek/list/?series=809023
[2] v6 media: mediatek: add driver to support secure video decoder
    - https://patchwork.kernel.org/project/linux-mediatek/list/?series=853689
[3] v6 Add CMDQ secure driver for SVP
    - https://patchwork.kernel.org/project/linux-mediatek/list/?series=855884
---
Changes in v6:
1. Rebase onto linux-next, then onto the 3 series [1][2][3]
2. Remove some parameters and security settings in the normal world
3. Drop the secure port related patches
4. Fix some build errors and warnings
Changes in v5:
1. Sync the local changes
Changes in v4:
1. Rebase on mediatek-drm-next (278640d4d74cd) and fix the conflicts
2. This series is based on 20240129063025.29251-1-yunfei.dong@mediatek.com
3. This series is based on 20240322052829.9893-1-shawn.sung@mediatek.com
4. This series is based on 20240403065603.21920-1-shawn.sung@mediatek.com
Changes in v3:
1. Fix kerneldoc problems
2. Fix typo in title and commit message
3. Adjust naming for the secure variable
4. Add the missing part of the is_secure plane implementation
5. Use the BIT_ULL macro to replace bit shifting
6. Move the modification of the ovl_adaptor part to the correct patch
7. Add TODO list in the commit message
8. Add commit message for using shared memory to store the execute count
Changes in v2:
1. Remove the DRIVER_RENDER flag for mtk_drm_ioctl
2. Move cmdq_insert_backup_cookie into the client driver
3. Move the secure GCE node definition from mt8195-cherry.dtsi to mt8195.dtsi
---
CK Hu (1):
  drm/mediatek: Add interface to allocate MediaTek GEM buffer

Jason-JH.Lin (6):
  drm/mediatek/uapi: Add DRM_MTK_GEM_CREATE_RESTRICTED flag
  drm/mediatek: Add secure buffer control flow to mtk_drm_gem
  drm/mediatek: Add secure identify flag and function to mtk_drm_plane
  drm/mediatek: Add mtk_ddp_sec_write() to config secure buffer info
  drm/mediatek: Add secure flow support to mediatek-drm
  drm/mediatek: Add cmdq_insert_backup_cookie before secure pkt finalize
 drivers/gpu/drm/mediatek/mtk_crtc.c           | 263 +++++++++++++++++-
 drivers/gpu/drm/mediatek/mtk_crtc.h           |   1 +
 drivers/gpu/drm/mediatek/mtk_ddp_comp.c       |  14 +
 drivers/gpu/drm/mediatek/mtk_ddp_comp.h       |   5 +
 drivers/gpu/drm/mediatek/mtk_disp_ovl.c       |   9 +-
 .../gpu/drm/mediatek/mtk_disp_ovl_adaptor.c   |   1 +
 drivers/gpu/drm/mediatek/mtk_drm_drv.c        |  13 +-
 drivers/gpu/drm/mediatek/mtk_gem.c            | 114 ++++++++
 drivers/gpu/drm/mediatek/mtk_gem.h            |  12 +
 drivers/gpu/drm/mediatek/mtk_mdp_rdma.c       |   8 +-
 drivers/gpu/drm/mediatek/mtk_mdp_rdma.h       |   1 +
 drivers/gpu/drm/mediatek/mtk_plane.c          |  25 ++
 drivers/gpu/drm/mediatek/mtk_plane.h          |   2 +
 include/uapi/drm/mediatek_drm.h               |  47 ++++
 14 files changed, 500 insertions(+), 15 deletions(-)
 create mode 100644 include/uapi/drm/mediatek_drm.h
From: CK Hu ck.hu@mediatek.com
Add an interface to allocate MediaTek GEM buffers and allow the IOCTL
to be used by render nodes. This patch also sets the DRIVER_RENDER
driver feature.
TODO: Drop this patch after we change all the usages of this ioctl to
DMA_HEAP_IOCTL_ALLOC in the user space.
Signed-off-by: CK Hu ck.hu@mediatek.com
Signed-off-by: Hsiao Chien Sung shawn.sung@mediatek.corp-partner.google.com
---
 drivers/gpu/drm/mediatek/mtk_drm_drv.c | 13 +++++++-
 drivers/gpu/drm/mediatek/mtk_gem.c     | 31 +++++++++++++++++
 drivers/gpu/drm/mediatek/mtk_gem.h     |  8 +++++
 include/uapi/drm/mediatek_drm.h        | 46 ++++++++++++++++++++++++++
 4 files changed, 97 insertions(+), 1 deletion(-)
 create mode 100644 include/uapi/drm/mediatek_drm.h
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
index b5f605751b0a..11e1555e9aa4 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
@@ -23,6 +23,7 @@
 #include <drm/drm_of.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_vblank.h>
+#include <drm/mediatek_drm.h>

 #include "mtk_crtc.h"
 #include "mtk_ddp_comp.h"
@@ -570,6 +571,11 @@ static void mtk_drm_kms_deinit(struct drm_device *drm)
 	component_unbind_all(drm->dev, drm);
 }

+static const struct drm_ioctl_desc mtk_ioctls[] = {
+	DRM_IOCTL_DEF_DRV(MTK_GEM_CREATE, mtk_gem_create_ioctl,
+			  DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW),
+};
+
 DEFINE_DRM_GEM_FOPS(mtk_drm_fops);

 /*
@@ -585,12 +591,17 @@ static struct drm_gem_object *mtk_gem_prime_import(struct drm_device *dev,
 }

 static const struct drm_driver mtk_drm_driver = {
-	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC,
+	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC |
+			   DRIVER_RENDER,

 	.dumb_create = mtk_gem_dumb_create,

 	.gem_prime_import = mtk_gem_prime_import,
 	.gem_prime_import_sg_table = mtk_gem_prime_import_sg_table,
+
+	.ioctls = mtk_ioctls,
+	.num_ioctls = ARRAY_SIZE(mtk_ioctls),
+
 	.fops = &mtk_drm_fops,

 	.name = DRIVER_NAME,
diff --git a/drivers/gpu/drm/mediatek/mtk_gem.c b/drivers/gpu/drm/mediatek/mtk_gem.c
index a172456d1d7b..91f6cfa3f1b7 100644
--- a/drivers/gpu/drm/mediatek/mtk_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_gem.c
@@ -5,9 +5,11 @@

 #include <linux/dma-buf.h>
 #include <linux/vmalloc.h>
+#include <drm/mediatek_drm.h>

 #include <drm/drm.h>
 #include <drm/drm_device.h>
+#include <drm/drm_drv.h>
 #include <drm/drm_gem.h>
 #include <drm/drm_gem_dma_helper.h>
 #include <drm/drm_prime.h>
@@ -286,3 +288,32 @@ void mtk_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 	mtk_gem->kvaddr = NULL;
 	kfree(mtk_gem->pages);
 }
+
+int mtk_gem_create_ioctl(struct drm_device *dev, void *data,
+			 struct drm_file *file_priv)
+{
+	struct mtk_gem_obj *mtk_gem;
+	struct drm_mtk_gem_create *args = data;
+	int ret;
+
+	mtk_gem = mtk_gem_create(dev, args->size, false);
+	if (IS_ERR(mtk_gem))
+		return PTR_ERR(mtk_gem);
+
+	/*
+	 * Allocate an id from the idr table where the obj is registered;
+	 * the handle returned to userspace holds this id.
+	 */
+	ret = drm_gem_handle_create(file_priv, &mtk_gem->base, &args->handle);
+	if (ret)
+		goto err_handle_create;
+
+	/* drop reference from allocate - handle holds it now. */
+	drm_gem_object_put(&mtk_gem->base);
+
+	return 0;
+
+err_handle_create:
+	mtk_gem_free_object(&mtk_gem->base);
+	return ret;
+}
diff --git a/drivers/gpu/drm/mediatek/mtk_gem.h b/drivers/gpu/drm/mediatek/mtk_gem.h
index 66e5f154f698..b71a7e7b405a 100644
--- a/drivers/gpu/drm/mediatek/mtk_gem.h
+++ b/drivers/gpu/drm/mediatek/mtk_gem.h
@@ -45,4 +45,12 @@ struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
 int mtk_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
 void mtk_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);

+/*
+ * Request gem object creation and buffer allocation with the size
+ * calculated from framebuffer information such as width, height and
+ * bpp.
+ */
+int mtk_gem_create_ioctl(struct drm_device *dev, void *data,
+			 struct drm_file *file_priv);
+
 #endif
diff --git a/include/uapi/drm/mediatek_drm.h b/include/uapi/drm/mediatek_drm.h
new file mode 100644
index 000000000000..821f7a732365
--- /dev/null
+++ b/include/uapi/drm/mediatek_drm.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ * Copyright (c) 2015 MediaTek Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _UAPI_MEDIATEK_DRM_H
+#define _UAPI_MEDIATEK_DRM_H
+
+#include <drm/drm.h>
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+
+/**
+ * User-desired buffer creation information structure.
+ *
+ * @size: user-desired memory allocation size.
+ *	- this size value would be page-aligned internally.
+ * @flags: user request for setting memory type or cache attributes.
+ * @handle: returned handle for the created gem object.
+ *	- this handle will be set by the gem module of the kernel side.
+ */
+struct drm_mtk_gem_create {
+	uint64_t size;
+	uint32_t flags;
+	uint32_t handle;
+};
+
+#define DRM_MTK_GEM_CREATE		0x00
+
+#define DRM_IOCTL_MTK_GEM_CREATE	DRM_IOWR(DRM_COMMAND_BASE + \
+		DRM_MTK_GEM_CREATE, struct drm_mtk_gem_create)
+
+#endif /* _UAPI_MEDIATEK_DRM_H */
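For context, here is a minimal userspace sketch of driving the new
ioctl through a render node. This is not part of the patch; the render
node path (/dev/dri/renderD128) and the libdrm include setup are
assumptions, and error handling is kept minimal.

/*
 * Hedged userspace sketch (not part of the series): allocate a GEM
 * buffer via the new ioctl on a render node.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/mediatek_drm.h>

int main(void)
{
	struct drm_mtk_gem_create args;
	int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);

	if (fd < 0)
		return 1;

	memset(&args, 0, sizeof(args));
	args.size = 4096;	/* page-aligned internally by the driver */

	if (ioctl(fd, DRM_IOCTL_MTK_GEM_CREATE, &args))
		return 1;

	printf("created GEM handle %u\n", args.handle);
	return 0;
}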
Add the DRM_MTK_GEM_CREATE_RESTRICTED flag for userspace to tell the
kernel that this DMA buffer is restricted. Userspace can store this
flag in the private data of the DMA buffer after allocating it.
A restricted buffer is used to store secure video content to support
the secure video path feature.
Signed-off-by: Jason-JH.Lin jason-jh.lin@mediatek.com
Signed-off-by: Hsiao Chien Sung shawn.sung@mediatek.com
---
 include/uapi/drm/mediatek_drm.h | 1 +
 1 file changed, 1 insertion(+)
diff --git a/include/uapi/drm/mediatek_drm.h b/include/uapi/drm/mediatek_drm.h
index 821f7a732365..dee581718d70 100644
--- a/include/uapi/drm/mediatek_drm.h
+++ b/include/uapi/drm/mediatek_drm.h
@@ -39,6 +39,7 @@ struct drm_mtk_gem_create {
 };

 #define DRM_MTK_GEM_CREATE		0x00
+#define DRM_MTK_GEM_CREATE_RESTRICTED	0x01

 #define DRM_IOCTL_MTK_GEM_CREATE	DRM_IOWR(DRM_COMMAND_BASE + \
 		DRM_MTK_GEM_CREATE, struct drm_mtk_gem_create)
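A hedged sketch of how userspace might request a restricted buffer with
this flag, reusing the includes from the previous sketch plus
<stdint.h>; mtk_alloc_secure_surface() is a hypothetical helper name:

static int mtk_alloc_secure_surface(int fd, uint64_t size, uint32_t *handle)
{
	struct drm_mtk_gem_create args = {
		.size = size,
		.flags = DRM_MTK_GEM_CREATE_RESTRICTED,
	};

	/* the kernel backs this GEM object with secure-heap memory */
	if (ioctl(fd, DRM_IOCTL_MTK_GEM_CREATE, &args))
		return -1;

	/*
	 * The handle refers to a 'secure surface'; its contents are
	 * never CPU-accessible from the REE.
	 */
	*handle = args.handle;
	return 0;
}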
Add secure buffer control flow to mtk_drm_gem.
When userspace passes the DRM_MTK_GEM_CREATE_RESTRICTED flag and a size
to create a mtk_drm_gem object, mtk_drm_gem will allocate a DMA buffer
of matching size from the secure dma-heap and bind it to the
mtk_drm_gem object.
TODO:
1. Drop mtk_gem_create_from_heap() after we change the ioctl to
   DMA_HEAP_IOCTL_ALLOC (see the userspace sketch after the diff below).
2. Check the private data of the dmabuf instead of doing strncmp() on
   the exp_name.
Signed-off-by: Jason-JH.Lin jason-jh.lin@mediatek.com
Signed-off-by: Hsiao Chien Sung shawn.sung@mediatek.com
---
 drivers/gpu/drm/mediatek/mtk_drm_drv.c |  2 +-
 drivers/gpu/drm/mediatek/mtk_gem.c     | 85 +++++++++++++++++++++++++-
 drivers/gpu/drm/mediatek/mtk_gem.h     |  4 ++
 3 files changed, 89 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
index 11e1555e9aa4..ff40ca5dd2a0 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
@@ -573,7 +573,7 @@ static void mtk_drm_kms_deinit(struct drm_device *drm)

 static const struct drm_ioctl_desc mtk_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(MTK_GEM_CREATE, mtk_gem_create_ioctl,
-			  DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW),
+			  DRM_AUTH | DRM_RENDER_ALLOW),
 };

 DEFINE_DRM_GEM_FOPS(mtk_drm_fops);
diff --git a/drivers/gpu/drm/mediatek/mtk_gem.c b/drivers/gpu/drm/mediatek/mtk_gem.c
index 91f6cfa3f1b7..118ea7f0a71c 100644
--- a/drivers/gpu/drm/mediatek/mtk_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_gem.c
@@ -5,6 +5,8 @@

 #include <linux/dma-buf.h>
 #include <linux/vmalloc.h>
+#include <linux/dma-heap.h>
+#include <uapi/linux/dma-heap.h>
 #include <drm/mediatek_drm.h>

 #include <drm/drm.h>
@@ -103,6 +105,81 @@ struct mtk_gem_obj *mtk_gem_create(struct drm_device *dev,
 	return ERR_PTR(ret);
 }

+struct mtk_gem_obj *mtk_gem_create_from_heap(struct drm_device *dev,
+					     const char *heap, size_t size)
+{
+	struct mtk_drm_private *priv = dev->dev_private;
+	struct mtk_gem_obj *mtk_gem;
+	struct drm_gem_object *obj;
+	struct dma_heap *dma_heap;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attach;
+	struct sg_table *sgt;
+	struct iosys_map map = {};
+	int ret;
+
+	mtk_gem = mtk_gem_init(dev, size);
+	if (IS_ERR(mtk_gem))
+		return ERR_CAST(mtk_gem);
+
+	obj = &mtk_gem->base;
+
+	dma_heap = dma_heap_find(heap);
+	if (!dma_heap) {
+		DRM_ERROR("heap find fail\n");
+		goto err_gem_free;
+	}
+	dma_buf = dma_heap_buffer_alloc(dma_heap, size,
+					O_RDWR | O_CLOEXEC, DMA_HEAP_VALID_HEAP_FLAGS);
+	if (IS_ERR(dma_buf)) {
+		DRM_ERROR("buffer alloc fail\n");
+		dma_heap_put(dma_heap);
+		goto err_gem_free;
+	}
+	dma_heap_put(dma_heap);
+
+	attach = dma_buf_attach(dma_buf, priv->dma_dev);
+	if (IS_ERR(attach)) {
+		DRM_ERROR("attach fail, return\n");
+		dma_buf_put(dma_buf);
+		goto err_gem_free;
+	}
+
+	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(sgt)) {
+		DRM_ERROR("map failed, detach and return\n");
+		dma_buf_detach(dma_buf, attach);
+		dma_buf_put(dma_buf);
+		goto err_gem_free;
+	}
+	obj->import_attach = attach;
+	mtk_gem->dma_addr = sg_dma_address(sgt->sgl);
+	mtk_gem->sg = sgt;
+	mtk_gem->size = dma_buf->size;
+
+	if (!strcmp(heap, "restricted_mtk_cm") || !strcmp(heap, "restricted_mtk_cma")) {
+		/* secure buffer can not be mapped */
+		mtk_gem->secure = true;
+	} else {
+		ret = dma_buf_vmap(dma_buf, &map);
+		mtk_gem->kvaddr = map.vaddr;
+		if (ret) {
+			DRM_ERROR("map failed, ret=%d\n", ret);
+			dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+			dma_buf_detach(dma_buf, attach);
+			dma_buf_put(dma_buf);
+			mtk_gem->kvaddr = NULL;
+		}
+	}
+
+	return mtk_gem;
+
+err_gem_free:
+	drm_gem_object_release(obj);
+	kfree(mtk_gem);
+	return ERR_PTR(-ENOMEM);
+}
+
 void mtk_gem_free_object(struct drm_gem_object *obj)
 {
 	struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj);
@@ -230,7 +307,9 @@ struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
 	if (IS_ERR(mtk_gem))
 		return ERR_CAST(mtk_gem);

+	mtk_gem->secure = (!strncmp(attach->dmabuf->exp_name, "restricted", 10));
 	mtk_gem->dma_addr = sg_dma_address(sg->sgl);
+	mtk_gem->size = attach->dmabuf->size;
 	mtk_gem->sg = sg;

 	return &mtk_gem->base;
@@ -296,7 +375,11 @@ int mtk_gem_create_ioctl(struct drm_device *dev, void *data,
 	struct drm_mtk_gem_create *args = data;
 	int ret;

-	mtk_gem = mtk_gem_create(dev, args->size, false);
+	if (args->flags & DRM_MTK_GEM_CREATE_RESTRICTED)
+		mtk_gem = mtk_gem_create_from_heap(dev, "restricted_mtk_cma", args->size);
+	else
+		mtk_gem = mtk_gem_create(dev, args->size, false);
+
 	if (IS_ERR(mtk_gem))
 		return PTR_ERR(mtk_gem);

diff --git a/drivers/gpu/drm/mediatek/mtk_gem.h b/drivers/gpu/drm/mediatek/mtk_gem.h
index b71a7e7b405a..7f6b23b9875a 100644
--- a/drivers/gpu/drm/mediatek/mtk_gem.h
+++ b/drivers/gpu/drm/mediatek/mtk_gem.h
@@ -27,9 +27,11 @@ struct mtk_gem_obj {
 	void			*cookie;
 	void			*kvaddr;
 	dma_addr_t		dma_addr;
+	size_t			size;
 	unsigned long		dma_attrs;
 	struct sg_table		*sg;
 	struct page		**pages;
+	bool			secure;
 };

 #define to_mtk_gem_obj(x)	container_of(x, struct mtk_gem_obj, base)
@@ -37,6 +39,8 @@ struct mtk_gem_obj {
 void mtk_gem_free_object(struct drm_gem_object *gem);
 struct mtk_gem_obj *mtk_gem_create(struct drm_device *dev, size_t size,
 				   bool alloc_kmap);
+struct mtk_gem_obj *mtk_gem_create_from_heap(struct drm_device *dev,
+					     const char *heap, size_t size);
 int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
 			struct drm_mode_create_dumb *args);
 struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj);
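Referring back to TODO item 1 above, a hedged sketch of the planned
userspace replacement: allocate from the secure dma-heap directly and
import the dma-buf into the DRM driver via PRIME. The heap name
restricted_mtk_cma comes from this patch; the /dev/dma_heap device path
and the helper name are assumptions.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>
#include <xf86drm.h>	/* drmPrimeFDToHandle() from libdrm */

static int alloc_restricted_gem(int drm_fd, uint64_t size, uint32_t *handle)
{
	struct dma_heap_allocation_data alloc = {
		.len = size,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int ret;
	int heap_fd = open("/dev/dma_heap/restricted_mtk_cma",
			   O_RDONLY | O_CLOEXEC);

	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);
	close(heap_fd);
	if (ret < 0)
		return -1;

	/*
	 * mtk_gem_prime_import_sg_table() marks the resulting GEM object
	 * as secure based on the exporter name of this dma-buf.
	 */
	ret = drmPrimeFDToHandle(drm_fd, alloc.fd, handle);
	close(alloc.fd);	/* the GEM object keeps its own reference */
	return ret;
}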
Add an is_secure flag to identify whether the current mtk_drm_plane is
secure. Add mtk_plane_fb_is_secure() to check whether the current
drm_framebuffer is secure.
Signed-off-by: Jason-JH.Lin jason-jh.lin@mediatek.com
Signed-off-by: Hsiao Chien Sung shawn.sung@mediatek.com
---
 drivers/gpu/drm/mediatek/mtk_plane.c | 18 ++++++++++++++++++
 drivers/gpu/drm/mediatek/mtk_plane.h |  2 ++
 2 files changed, 20 insertions(+)
diff --git a/drivers/gpu/drm/mediatek/mtk_plane.c b/drivers/gpu/drm/mediatek/mtk_plane.c
index 4625deb21d40..478206f21fd0 100644
--- a/drivers/gpu/drm/mediatek/mtk_plane.c
+++ b/drivers/gpu/drm/mediatek/mtk_plane.c
@@ -210,6 +210,7 @@ static void mtk_plane_update_new_state(struct drm_plane_state *new_state,
 	mtk_plane_state->pending.height = drm_rect_height(&new_state->dst);
 	mtk_plane_state->pending.rotation = new_state->rotation;
 	mtk_plane_state->pending.color_encoding = new_state->color_encoding;
+	mtk_plane_state->pending.is_secure = mtk_plane_fb_is_secure(fb);
 }

 static void mtk_plane_atomic_async_update(struct drm_plane *plane,
@@ -348,3 +349,20 @@ int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,

 	return 0;
 }
+
+bool mtk_plane_fb_is_secure(struct drm_framebuffer *fb)
+{
+	struct drm_gem_object *gem = NULL;
+	struct mtk_gem_obj *mtk_gem = NULL;
+
+	if (!fb)
+		return false;
+
+	gem = fb->obj[0];
+	if (!gem)
+		return false;
+
+	mtk_gem = to_mtk_gem_obj(gem);
+
+	return mtk_gem->secure;
+}
diff --git a/drivers/gpu/drm/mediatek/mtk_plane.h b/drivers/gpu/drm/mediatek/mtk_plane.h
index 231bb7aac947..a7779a91f0a2 100644
--- a/drivers/gpu/drm/mediatek/mtk_plane.h
+++ b/drivers/gpu/drm/mediatek/mtk_plane.h
@@ -33,6 +33,7 @@ struct mtk_plane_pending_state {
 	bool				async_dirty;
 	bool				async_config;
 	enum drm_color_encoding		color_encoding;
+	bool				is_secure;
 };

 struct mtk_plane_state {
@@ -46,6 +47,7 @@ to_mtk_plane_state(struct drm_plane_state *state)
 	return container_of(state, struct mtk_plane_state, base);
 }

+bool mtk_plane_fb_is_secure(struct drm_framebuffer *fb);
 int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
 		   unsigned long possible_crtcs, enum drm_plane_type type,
 		   unsigned int supported_rotations, const u32 *formats,
Add mtk_ddp_sec_write() to configure secure buffer information into the
cmdq secure packet data and send it to the secure world.
OVL and OVL_ADAPTOR need to use mtk_ddp_sec_write() instead of the
original mtk_ddp_write() because the address in the plane is a secure
handle, not the real buffer address.
The secure buffer information will be used to translate the secure
handle to the corresponding secure buffer address, and then the secure
handle in the instruction generated by OVL or OVL_ADAPTOR will be
replaced with the real address in the secure world. (A hypothetical
sketch of this metadata follows the diff below.)
Signed-off-by: Jason-JH.Lin jason-jh.lin@mediatek.com
Signed-off-by: Hsiao Chien Sung shawn.sung@mediatek.com
---
 drivers/gpu/drm/mediatek/mtk_ddp_comp.c         | 14 ++++++++++++++
 drivers/gpu/drm/mediatek/mtk_ddp_comp.h         |  5 +++++
 drivers/gpu/drm/mediatek/mtk_disp_ovl.c         |  9 +++++++--
 drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c |  1 +
 drivers/gpu/drm/mediatek/mtk_mdp_rdma.c         |  8 ++++++--
 drivers/gpu/drm/mediatek/mtk_mdp_rdma.h         |  1 +
 6 files changed, 34 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/mediatek/mtk_ddp_comp.c b/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
index 17b036411292..dc2b36a8bdd6 100644
--- a/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
+++ b/drivers/gpu/drm/mediatek/mtk_ddp_comp.c
@@ -111,6 +111,20 @@ void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt, unsigned int value,
 #endif
 }

+void mtk_ddp_sec_write(struct cmdq_pkt *cmdq_pkt,
+		       enum cmdq_iwc_addr_metadata_type type,
+		       unsigned int base, unsigned int base_offset,
+		       struct cmdq_client_reg *cmdq_reg, unsigned int offset)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+	if (!cmdq_pkt)
+		return;
+
+	cmdq_sec_pkt_write(cmdq_pkt, cmdq_reg->subsys, cmdq_reg->offset + offset,
+			   type, base, base_offset);
+#endif
+}
+
 static int mtk_ddp_clk_enable(struct device *dev)
 {
 	struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev);
diff --git a/drivers/gpu/drm/mediatek/mtk_ddp_comp.h b/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
index 26236691ce4c..792fd1b004ee 100644
--- a/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
+++ b/drivers/gpu/drm/mediatek/mtk_ddp_comp.h
@@ -7,6 +7,7 @@
 #define MTK_DDP_COMP_H

 #include <linux/io.h>
+#include <linux/mailbox/mtk-cmdq-sec-mailbox.h>
 #include <linux/pm_runtime.h>
 #include <linux/soc/mediatek/mtk-cmdq.h>
 #include <linux/soc/mediatek/mtk-mmsys.h>
@@ -339,4 +340,8 @@ void mtk_ddp_write_relaxed(struct cmdq_pkt *cmdq_pkt, unsigned int value,
 void mtk_ddp_write_mask(struct cmdq_pkt *cmdq_pkt, unsigned int value,
 			struct cmdq_client_reg *cmdq_reg, void __iomem *regs,
 			unsigned int offset, unsigned int mask);
+void mtk_ddp_sec_write(struct cmdq_pkt *cmdq_pkt,
+		       enum cmdq_iwc_addr_metadata_type type,
+		       unsigned int base, unsigned int base_offset,
+		       struct cmdq_client_reg *cmdq_reg, unsigned int offset);
 #endif /* MTK_DDP_COMP_H */
diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
index b552a02d7eae..5f518c9c63dc 100644
--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
@@ -449,8 +449,13 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
 			  DISP_REG_OVL_SRC_SIZE(idx));
 	mtk_ddp_write_relaxed(cmdq_pkt, offset, &ovl->cmdq_reg, ovl->regs,
 			      DISP_REG_OVL_OFFSET(idx));
-	mtk_ddp_write_relaxed(cmdq_pkt, addr, &ovl->cmdq_reg, ovl->regs,
-			      DISP_REG_OVL_ADDR(ovl, idx));
+
+	if (pending->is_secure)
+		mtk_ddp_sec_write(cmdq_pkt, CMDQ_IWC_H_2_MVA, pending->addr, 0,
+				  &ovl->cmdq_reg, DISP_REG_OVL_ADDR(ovl, idx));
+	else
+		mtk_ddp_write_relaxed(cmdq_pkt, addr, &ovl->cmdq_reg, ovl->regs,
+				      DISP_REG_OVL_ADDR(ovl, idx));

 	if (is_afbc) {
 		mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs,
diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
index 02dd7dcdfedb..5db8711f21c2 100644
--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
+++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl_adaptor.c
@@ -188,6 +188,7 @@ void mtk_ovl_adaptor_layer_config(struct device *dev, unsigned int idx,
 	rdma_config.pitch = pending->pitch;
 	rdma_config.fmt = pending->format;
 	rdma_config.color_encoding = pending->color_encoding;
+	rdma_config.is_secure = state->pending.is_secure;
 	mtk_mdp_rdma_config(rdma_l, &rdma_config, cmdq_pkt);

 	if (use_dual_pipe) {
diff --git a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
index 925cbb7471ec..961189e16aab 100644
--- a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
+++ b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.c
@@ -199,8 +199,12 @@ void mtk_mdp_rdma_config(struct device *dev, struct mtk_mdp_rdma_cfg *cfg,
 	mtk_ddp_write_mask(cmdq_pkt, 0, &priv->cmdq_reg, priv->regs,
 			   MDP_RDMA_SRC_CON, FLD_OUTPUT_ARGB);

-	mtk_ddp_write_mask(cmdq_pkt, cfg->addr0, &priv->cmdq_reg, priv->regs,
-			   MDP_RDMA_SRC_BASE_0, FLD_SRC_BASE_0);
+	if (cfg->is_secure)
+		mtk_ddp_sec_write(cmdq_pkt, CMDQ_IWC_H_2_MVA, cfg->addr0, 0,
+				  &priv->cmdq_reg, MDP_RDMA_SRC_BASE_0);
+	else
+		mtk_ddp_write_mask(cmdq_pkt, cfg->addr0, &priv->cmdq_reg, priv->regs,
+				   MDP_RDMA_SRC_BASE_0, FLD_SRC_BASE_0);

 	mtk_ddp_write_mask(cmdq_pkt, src_pitch_y, &priv->cmdq_reg, priv->regs,
 			   MDP_RDMA_MF_BKGD_SIZE_IN_BYTE, FLD_MF_BKGD_WB);
diff --git a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.h b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.h
index 9943ee3aac31..fcd9b3a934d0 100644
--- a/drivers/gpu/drm/mediatek/mtk_mdp_rdma.h
+++ b/drivers/gpu/drm/mediatek/mtk_mdp_rdma.h
@@ -15,6 +15,7 @@ struct mtk_mdp_rdma_cfg {
 	unsigned int	y_top;
 	int		fmt;
 	int		color_encoding;
+	bool		is_secure;
 };

 #endif // __MTK_MDP_RDMA_H__
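To illustrate the translation described in the commit message: each
mtk_ddp_sec_write() call records one piece of address metadata in the
secure packet rather than writing a register value immediately. Below
is a hypothetical sketch of such a metadata entry; the field names are
illustrative only, and the real layout belongs to the CMDQ secure
driver from series [3].

/*
 * Hypothetical metadata entry, for illustration only. The secure world
 * walks entries like this, translates base_handle to the real secure
 * buffer address, and patches the matching write instruction before
 * the GCE thread executes the packet.
 */
struct cmdq_sec_addr_metadata_sketch {
	u32 instr_index;	/* which instruction in the packet to patch */
	u32 type;		/* e.g. CMDQ_IWC_H_2_MVA: handle to MVA */
	u64 base_handle;	/* the secure handle passed as 'base' */
	u32 offset;		/* 'base_offset' within the secure buffer */
};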
To add secure flow support to mediatek-drm, each CRTC has to create a
secure cmdq mailbox channel. Cmdq packets with the display HW
configuration will then be sent to the secure cmdq mailbox channel and
configured in the secure world.
Each CRTC has to use the secure cmdq interface to configure some secure
settings for the display HW before sending cmdq packets to the secure
cmdq mailbox channel.
If any of the plane fbs obtained from the current drm_atomic_state is
secure, the CRTC switches to the secure flow to configure the display
HW. If none of the plane fbs in the current drm_atomic_state is secure,
the CRTC switches back to the normal flow.
TODO:
1. Try to use the secure mailbox channel to handle both the normal and
   secure flows.
Signed-off-by: Jason-JH.Lin jason-jh.lin@mediatek.com
Signed-off-by: Hsiao Chien Sung shawn.sung@mediatek.com
---
 drivers/gpu/drm/mediatek/mtk_crtc.c  | 260 +++++++++++++++++++++++++--
 drivers/gpu/drm/mediatek/mtk_crtc.h  |   1 +
 drivers/gpu/drm/mediatek/mtk_plane.c |   7 +
 3 files changed, 258 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
index 6f34f573e127..59f6263ae806 100644
--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
+++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
@@ -56,6 +56,11 @@ struct mtk_crtc {
 	u32 cmdq_event;
 	u32 cmdq_vblank_cnt;
 	wait_queue_head_t cb_blocking_queue;
+
+	struct cmdq_client sec_cmdq_client;
+	struct cmdq_pkt sec_cmdq_handle;
+	bool sec_cmdq_working;
+	wait_queue_head_t sec_cb_blocking_queue;
 #endif

 	struct device *mmsys_dev;
@@ -69,6 +74,7 @@ struct mtk_crtc {
 	/* lock for display hardware access */
 	struct mutex hw_lock;
 	bool config_updating;
+	bool sec_on;
 };

 struct mtk_crtc_state {
@@ -113,6 +119,144 @@ static void mtk_drm_finish_page_flip(struct mtk_crtc *mtk_crtc)
 	}
 }

+void mtk_crtc_disable_secure_state(struct drm_crtc *crtc)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+	enum cmdq_sec_scenario sec_scn = CMDQ_SEC_SCNR_MAX;
+	int i;
+	struct mtk_ddp_comp *ddp_first_comp;
+	struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+
+	mutex_lock(&mtk_crtc->hw_lock);
+
+	if (!mtk_crtc->sec_cmdq_client.chan) {
+		pr_err("crtc-%d secure mbox channel is NULL\n", drm_crtc_index(crtc));
+		goto err;
+	}
+
+	if (!mtk_crtc->sec_on) {
+		pr_debug("crtc-%d is already disabled!\n", drm_crtc_index(crtc));
+		goto err;
+	}
+
+	mbox_flush(mtk_crtc->sec_cmdq_client.chan, 0);
+	mtk_crtc->sec_cmdq_handle.cmd_buf_size = 0;
+
+	if (mtk_crtc->sec_cmdq_handle.sec_data) {
+		struct cmdq_sec_data *sec_data;
+
+		sec_data = mtk_crtc->sec_cmdq_handle.sec_data;
+		sec_data->addr_metadata_cnt = 0;
+		sec_data->addr_metadatas = (uintptr_t)NULL;
+	}
+
+	/*
+	 * Secure path only support DL mode, so we just wait
+	 * the first path frame done here
+	 */
+	cmdq_pkt_wfe(&mtk_crtc->sec_cmdq_handle, mtk_crtc->cmdq_event, false);
+
+	ddp_first_comp = mtk_crtc->ddp_comp[0];
+	for (i = 0; i < mtk_crtc->layer_nr; i++) {
+		struct drm_plane *plane = &mtk_crtc->planes[i];
+
+		/* make sure secure layer off before switching secure state */
+		if (!mtk_plane_fb_is_secure(plane->state->fb)) {
+			struct mtk_plane_state *plane_state = to_mtk_plane_state(plane->state);
+
+			plane_state->pending.enable = false;
+			mtk_ddp_comp_layer_config(ddp_first_comp, i, plane_state,
+						  &mtk_crtc->sec_cmdq_handle);
+		}
+	}
+
+	/* Disable secure path */
+	if (drm_crtc_index(crtc) == 0)
+		sec_scn = CMDQ_SEC_SCNR_PRIMARY_DISP_DISABLE;
+	else if (drm_crtc_index(crtc) == 1)
+		sec_scn = CMDQ_SEC_SCNR_SUB_DISP_DISABLE;
+
+	cmdq_sec_pkt_set_data(&mtk_crtc->sec_cmdq_handle, sec_scn);
+
+	cmdq_pkt_finalize(&mtk_crtc->sec_cmdq_handle);
+	dma_sync_single_for_device(mtk_crtc->sec_cmdq_client.chan->mbox->dev,
+				   mtk_crtc->sec_cmdq_handle.pa_base,
+				   mtk_crtc->sec_cmdq_handle.cmd_buf_size,
+				   DMA_TO_DEVICE);
+
+	mtk_crtc->sec_cmdq_working = true;
+	mbox_send_message(mtk_crtc->sec_cmdq_client.chan, &mtk_crtc->sec_cmdq_handle);
+	mbox_client_txdone(mtk_crtc->sec_cmdq_client.chan, 0);
+
+	// Wait for sec state to be disabled by cmdq
+	wait_event_timeout(mtk_crtc->sec_cb_blocking_queue,
+			   !mtk_crtc->sec_cmdq_working,
+			   msecs_to_jiffies(500));
+
+	mtk_crtc->sec_on = false;
+	pr_debug("crtc-%d disable secure plane!\n", drm_crtc_index(crtc));
+
+err:
+	mutex_unlock(&mtk_crtc->hw_lock);
+#endif
+}
+
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+static void mtk_crtc_enable_secure_state(struct drm_crtc *crtc)
+{
+	enum cmdq_sec_scenario sec_scn = CMDQ_SEC_SCNR_MAX;
+	struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+
+	if (drm_crtc_index(crtc) == 0)
+		sec_scn = CMDQ_SEC_SCNR_PRIMARY_DISP;
+	else if (drm_crtc_index(crtc) == 1)
+		sec_scn = CMDQ_SEC_SCNR_SUB_DISP;
+
+	cmdq_sec_pkt_set_data(&mtk_crtc->sec_cmdq_handle, sec_scn);
+
+	pr_debug("crtc-%d enable secure plane!\n", drm_crtc_index(crtc));
+}
+#endif
+
+static void mtk_crtc_plane_switch_sec_state(struct drm_crtc *crtc,
+					    struct drm_atomic_state *state)
+{
+#if IS_REACHABLE(CONFIG_MTK_CMDQ)
+	bool sec_on[MAX_CRTC] = {0};
+	int i;
+	struct drm_crtc_state *crtc_state;
+	struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+	struct drm_plane *plane;
+	struct drm_plane_state *old_plane_state;
+
+	for_each_old_plane_in_state(state, plane, old_plane_state, i) {
+		if (!old_plane_state->crtc || !plane->state->crtc)
+			continue;
+
+		if (plane->state->fb &&
+		    mtk_plane_fb_is_secure(plane->state->fb) &&
+		    mtk_crtc->sec_cmdq_client.chan)
+			sec_on[drm_crtc_index(plane->state->crtc)] = true;
+	}
+
+	for_each_old_crtc_in_state(state, crtc, crtc_state, i) {
+		if (!crtc_state->active)
+			continue;
+
+		mtk_crtc = to_mtk_crtc(crtc);
+
+		if (!sec_on[i]) {
+			mtk_crtc_disable_secure_state(crtc);
+			continue;
+		}
+
+		mutex_lock(&mtk_crtc->hw_lock);
+		mtk_crtc->sec_on = true;
+		mutex_unlock(&mtk_crtc->hw_lock);
+	}
+#endif
+}
+
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
 static int mtk_drm_cmdq_pkt_create(struct cmdq_client *client, struct cmdq_pkt *pkt,
 				   size_t size)
@@ -148,22 +292,33 @@ static void mtk_drm_cmdq_pkt_destroy(struct cmdq_pkt *pkt)
 	dma_unmap_single(client->chan->mbox->dev, pkt->pa_base, pkt->buf_size,
 			 DMA_TO_DEVICE);
 	kfree(pkt->va_base);
+	kfree(pkt->sec_data);
 }
 #endif

 static void mtk_crtc_destroy(struct drm_crtc *crtc)
 {
 	struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
+	struct mtk_drm_private *priv = crtc->dev->dev_private;
 	int i;

+	priv = priv->all_drm_private[drm_crtc_index(crtc)];
+
 	mtk_mutex_put(mtk_crtc->mutex);
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
 	mtk_drm_cmdq_pkt_destroy(&mtk_crtc->cmdq_handle);
+	mtk_drm_cmdq_pkt_destroy(&mtk_crtc->sec_cmdq_handle);

 	if (mtk_crtc->cmdq_client.chan) {
 		mbox_free_channel(mtk_crtc->cmdq_client.chan);
 		mtk_crtc->cmdq_client.chan = NULL;
 	}
+
+	if (mtk_crtc->sec_cmdq_client.chan) {
+		device_link_remove(priv->dev, mtk_crtc->sec_cmdq_client.chan->mbox->dev);
+		mbox_free_channel(mtk_crtc->sec_cmdq_client.chan);
+		mtk_crtc->sec_cmdq_client.chan = NULL;
+	}
 #endif

 	for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
@@ -312,6 +467,11 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
 	if (data->sta < 0)
 		return;

+	if (!data->pkt || !data->pkt->sec_data)
+		mtk_crtc = container_of(cmdq_cl, struct mtk_crtc, cmdq_client);
+	else
+		mtk_crtc = container_of(cmdq_cl, struct mtk_crtc, sec_cmdq_client);
+
 	state = to_mtk_crtc_state(mtk_crtc->base.state);

 	state->pending_config = false;
@@ -340,6 +500,11 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
 		mtk_crtc->pending_async_planes = false;
 	}

+	if (mtk_crtc->sec_cmdq_working) {
+		mtk_crtc->sec_cmdq_working = false;
+		wake_up(&mtk_crtc->sec_cb_blocking_queue);
+	}
+
 	mtk_crtc->cmdq_vblank_cnt = 0;
 	wake_up(&mtk_crtc->cb_blocking_queue);
 }
@@ -563,7 +728,8 @@ static void mtk_crtc_ddp_config(struct drm_crtc *crtc,
 static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
 {
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
-	struct cmdq_pkt *cmdq_handle = &mtk_crtc->cmdq_handle;
+	struct cmdq_client cmdq_client;
+	struct cmdq_pkt *cmdq_handle;
 #endif
 	struct drm_crtc *crtc = &mtk_crtc->base;
 	struct mtk_drm_private *priv = crtc->dev->dev_private;
@@ -601,14 +767,36 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
 		mtk_mutex_release(mtk_crtc->mutex);
 	}
 #if IS_REACHABLE(CONFIG_MTK_CMDQ)
-	if (mtk_crtc->cmdq_client.chan) {
+	if (mtk_crtc->sec_on) {
+		mbox_flush(mtk_crtc->sec_cmdq_client.chan, 0);
+		mtk_crtc->sec_cmdq_handle.cmd_buf_size = 0;
+
+		if (mtk_crtc->sec_cmdq_handle.sec_data) {
+			struct cmdq_sec_data *sec_data = mtk_crtc->sec_cmdq_handle.sec_data;
+
+			memset((void *)sec_data->addr_metadatas, 0,
+			       sec_data->addr_metadata_cnt * sizeof(u64));
+			sec_data->addr_metadata_cnt = 0;
+		}
+
+		mtk_crtc_enable_secure_state(crtc);
+
+		cmdq_client = mtk_crtc->sec_cmdq_client;
+		cmdq_handle = &mtk_crtc->sec_cmdq_handle;
+	} else if (mtk_crtc->cmdq_client.chan) {
 		mbox_flush(mtk_crtc->cmdq_client.chan, 2000);
-		cmdq_handle->cmd_buf_size = 0;
+		mtk_crtc->cmdq_handle.cmd_buf_size = 0;
+
+		cmdq_client = mtk_crtc->cmdq_client;
+		cmdq_handle = &mtk_crtc->cmdq_handle;
+	}
+
+	if (cmdq_client.chan) {
 		cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
 		cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
 		mtk_crtc_ddp_config(crtc, cmdq_handle);
 		cmdq_pkt_finalize(cmdq_handle);
-		dma_sync_single_for_device(mtk_crtc->cmdq_client.chan->mbox->dev,
+		dma_sync_single_for_device(cmdq_client.chan->mbox->dev,
 					   cmdq_handle->pa_base,
 					   cmdq_handle->cmd_buf_size,
 					   DMA_TO_DEVICE);
@@ -621,8 +809,8 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
 		 */
 		mtk_crtc->cmdq_vblank_cnt = 3;

-		mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
-		mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
+		mbox_send_message(cmdq_client.chan, cmdq_handle);
+		mbox_client_txdone(cmdq_client.chan, 0);
 	}
 #endif
 	mtk_crtc->config_updating = false;
@@ -764,6 +952,8 @@ static void mtk_crtc_atomic_disable(struct drm_crtc *crtc,
 	if (!mtk_crtc->enabled)
 		return;

+	mtk_crtc_disable_secure_state(crtc);
+
 	/* Set all pending plane state to disabled */
 	for (i = 0; i < mtk_crtc->layer_nr; i++) {
 		struct drm_plane *plane = &mtk_crtc->planes[i];
@@ -802,6 +992,8 @@ static void mtk_crtc_atomic_begin(struct drm_crtc *crtc,
 	struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
 	unsigned long flags;

+	mtk_crtc_plane_switch_sec_state(crtc, state);
+
 	if (mtk_crtc->event && mtk_crtc_state->base.event)
 		DRM_ERROR("new event while there is still a pending event\n");

@@ -1091,8 +1283,7 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
 		if (ret) {
 			dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
 				drm_crtc_index(&mtk_crtc->base));
-			mbox_free_channel(mtk_crtc->cmdq_client.chan);
-			mtk_crtc->cmdq_client.chan = NULL;
+			goto cmdq_err;
 		} else {
 			ret = mtk_drm_cmdq_pkt_create(&mtk_crtc->cmdq_client,
 						      &mtk_crtc->cmdq_handle,
@@ -1100,14 +1291,63 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
 			if (ret) {
 				dev_dbg(dev, "mtk_crtc %d failed to create cmdq packet\n",
 					drm_crtc_index(&mtk_crtc->base));
-				mbox_free_channel(mtk_crtc->cmdq_client.chan);
-				mtk_crtc->cmdq_client.chan = NULL;
+				goto cmdq_err;
 			}
 		}

 		/* for sending blocking cmd in crtc disable */
 		init_waitqueue_head(&mtk_crtc->cb_blocking_queue);
 	}
+
+	mtk_crtc->sec_cmdq_client.client.dev = mtk_crtc->mmsys_dev;
+	mtk_crtc->sec_cmdq_client.client.tx_block = false;
+	mtk_crtc->sec_cmdq_client.client.knows_txdone = true;
+	mtk_crtc->sec_cmdq_client.client.rx_callback = ddp_cmdq_cb;
+	mtk_crtc->sec_cmdq_client.chan =
+			mbox_request_channel(&mtk_crtc->sec_cmdq_client.client, i + 1);
+	if (IS_ERR(mtk_crtc->sec_cmdq_client.chan)) {
+		dev_err(dev, "mtk_crtc %d failed to create sec mailbox client\n",
+			drm_crtc_index(&mtk_crtc->base));
+		mtk_crtc->sec_cmdq_client.chan = NULL;
+	}
+
+	if (mtk_crtc->sec_cmdq_client.chan) {
+		struct device_link *link;
+
+		/* add devlink to cmdq dev to make sure suspend/resume order is correct */
+		link = device_link_add(priv->dev, mtk_crtc->sec_cmdq_client.chan->mbox->dev,
+				       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+		if (!link) {
+			dev_err(priv->dev, "Unable to link dev=%s\n",
+				dev_name(mtk_crtc->sec_cmdq_client.chan->mbox->dev));
+			ret = -ENODEV;
+			goto cmdq_err;
+		}
+
+		ret = mtk_drm_cmdq_pkt_create(&mtk_crtc->sec_cmdq_client,
+					      &mtk_crtc->sec_cmdq_handle,
+					      PAGE_SIZE);
+		if (ret) {
+			dev_dbg(dev, "mtk_crtc %d failed to create cmdq secure packet\n",
+				drm_crtc_index(&mtk_crtc->base));
+			goto cmdq_err;
+		}
+
+		/* for sending blocking cmd in crtc disable */
+		init_waitqueue_head(&mtk_crtc->sec_cb_blocking_queue);
+	}
+
+cmdq_err:
+	if (ret) {
+		if (mtk_crtc->cmdq_client.chan) {
+			mbox_free_channel(mtk_crtc->cmdq_client.chan);
+			mtk_crtc->cmdq_client.chan = NULL;
+		}
+		if (mtk_crtc->sec_cmdq_client.chan) {
+			mbox_free_channel(mtk_crtc->sec_cmdq_client.chan);
+			mtk_crtc->sec_cmdq_client.chan = NULL;
+		}
+	}
 #endif

 	if (conn_routes) {
diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.h b/drivers/gpu/drm/mediatek/mtk_crtc.h
index 388e900b6f4d..0b0be01c25f2 100644
--- a/drivers/gpu/drm/mediatek/mtk_crtc.h
+++ b/drivers/gpu/drm/mediatek/mtk_crtc.h
@@ -19,6 +19,7 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
 		    unsigned int path_len, int priv_data_index,
 		    const struct mtk_drm_route *conn_routes,
 		    unsigned int num_conn_routes);
+void mtk_crtc_disable_secure_state(struct drm_crtc *crtc);
 int mtk_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane,
 			 struct mtk_plane_state *state);
 void mtk_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane,
diff --git a/drivers/gpu/drm/mediatek/mtk_plane.c b/drivers/gpu/drm/mediatek/mtk_plane.c
index 478206f21fd0..95e1b17091f0 100644
--- a/drivers/gpu/drm/mediatek/mtk_plane.c
+++ b/drivers/gpu/drm/mediatek/mtk_plane.c
@@ -287,6 +287,13 @@ static void mtk_plane_atomic_disable(struct drm_plane *plane,
 	mtk_plane_state->pending.enable = false;
 	wmb(); /* Make sure the above parameter is set before update */
 	mtk_plane_state->pending.dirty = true;
+
+	if (mtk_plane_state->pending.is_secure) {
+		struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, plane);
+
+		if (old_state->crtc)
+			mtk_crtc_disable_secure_state(old_state->crtc);
+	}
 }

 static void mtk_plane_atomic_update(struct drm_plane *plane,
Add cmdq_insert_backup_cookie() to append some commands before EOC:
1. Get the GCE HW thread execute count from the GCE HW register.
2. Add 1 to the execute count and then store it into a shared memory.
3. Set a software event signal as the secure IRQ to the GCE HW.
(A hedged sketch of these three steps follows the diff below.)
Since the value of the execute count + 1 is stored in a shared memory,
the CMDQ driver in the normal world can use it to handle task
completion in the IRQ handler, and the CMDQ driver in the secure world
can use it to schedule the task slot for each secure thread.
Signed-off-by: Jason-JH.Lin jason-jh.lin@mediatek.com
Signed-off-by: Hsiao Chien Sung shawn.sung@mediatek.com
---
 drivers/gpu/drm/mediatek/mtk_crtc.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
index 59f6263ae806..0df9bf695f65 100644
--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
+++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
@@ -178,6 +178,7 @@ void mtk_crtc_disable_secure_state(struct drm_crtc *crtc)

 	cmdq_sec_pkt_set_data(&mtk_crtc->sec_cmdq_handle, sec_scn);

+	cmdq_sec_insert_backup_cookie(&mtk_crtc->sec_cmdq_handle);
 	cmdq_pkt_finalize(&mtk_crtc->sec_cmdq_handle);
 	dma_sync_single_for_device(mtk_crtc->sec_cmdq_client.chan->mbox->dev,
 				   mtk_crtc->sec_cmdq_handle.pa_base,
 				   mtk_crtc->sec_cmdq_handle.cmd_buf_size,
@@ -795,6 +796,8 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
 		cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event);
 		cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event, false);
 		mtk_crtc_ddp_config(crtc, cmdq_handle);
+		if (cmdq_handle->sec_data)
+			cmdq_sec_insert_backup_cookie(cmdq_handle);
 		cmdq_pkt_finalize(cmdq_handle);
 		dma_sync_single_for_device(cmdq_client.chan->mbox->dev,
 					   cmdq_handle->pa_base,
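For readers without the CMDQ secure series at hand, here is a hedged
sketch of what the three steps above could look like in terms of the
mainline cmdq helper API. The execute-count register address, the
cookie address and the event number are placeholders, and this is not
the actual implementation (which lives in series [3]).

#include <linux/soc/mediatek/mtk-cmdq.h>

/* Hedged sketch only; real code is in the CMDQ secure driver [3]. */
static void backup_cookie_sketch(struct cmdq_pkt *pkt, dma_addr_t exec_cnt_pa,
				 dma_addr_t cookie_pa, u16 sec_irq_event)
{
	struct cmdq_operand lop = { .reg = true, .idx = CMDQ_THR_SPR_IDX1 };
	struct cmdq_operand rop = { .reg = false, .value = 1 };

	/* 1. read the GCE HW thread execute count into a spare register */
	cmdq_pkt_assign(pkt, CMDQ_THR_SPR_IDX0, CMDQ_ADDR_HIGH(exec_cnt_pa));
	cmdq_pkt_read_s(pkt, CMDQ_THR_SPR_IDX0, CMDQ_ADDR_LOW(exec_cnt_pa),
			CMDQ_THR_SPR_IDX1);

	/* 2. add 1 and store the result into the shared-memory slot */
	cmdq_pkt_logic_command(pkt, CMDQ_THR_SPR_IDX1, &lop, CMDQ_LOGIC_ADD, &rop);
	cmdq_pkt_assign(pkt, CMDQ_THR_SPR_IDX0, CMDQ_ADDR_HIGH(cookie_pa));
	cmdq_pkt_write_s(pkt, CMDQ_THR_SPR_IDX0, CMDQ_ADDR_LOW(cookie_pa),
			 CMDQ_THR_SPR_IDX1);

	/* 3. raise the software event that acts as the secure IRQ */
	cmdq_pkt_set_event(pkt, sec_irq_event);
}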
Hi,
On Sun, May 26, 2024 at 07:29:21AM GMT, Jason-JH.Lin wrote:
> From: Jason-jh Lin jason-jh.lin@mediatek.corp-partner.google.com
>
> [...]
>
> TODO:
> - Drop MTK_DRM_IOCTL_GEM_CREATE and use DMA_HEAP_IOCTL_ALLOC in userspace
> - DRM driver use secure mailbox channel to handle normal and secure flow
> - Implement setting mmsys routing table in the secure world series
I'm not sure what you mean here. Why are you trying to upstream something that still needs to be removed from your patch series?
Also, I made some comments on the previous version that have been entirely ignored and still apply to this version: https://lore.kernel.org/dri-devel/20240415-guppy-of-perpetual-current-3a7974...
Maxime