This patchset enables MediaTek secure video playback, and is also intended to support other potential uses in the future. The "restricted dma-heap" is used to allocate dma_buf objects that reference memory in the secure world, which is inaccessible/unmappable by the non-secure (i.e. kernel/userspace) world. That memory is used by the secure/trusted world to store secure information (i.e. decrypted media content). The dma_bufs allocated from the kernel are passed to V4L2 for video decoding (as input and output), and are also used by the DRM subsystem for rendering the content.
This patchset adds two MediaTek restricted heaps, which will be used by v4l2[1] and drm[2]. 1) restricted_mtk_cm: secure chunk memory for MediaTek SVP (Secure Video Path). This buffer is reserved for the secure world at boot and is used for the vcodec's ES/working buffers. 2) restricted_mtk_cma: secure CMA memory for MediaTek SVP. This buffer is reserved for the secure world dynamically: it is obtained when secure video playback starts, and the CMA memory is released once playback completes. This heap is used for the vcodec's frame buffers.
[1] https://lore.kernel.org/linux-mediatek/20240412090851.24999-1-yunfei.dong@me... [2] https://lore.kernel.org/linux-mediatek/20240403102701.369-1-shawn.sung@media...
Change note: v5: 1) Reconstruct the TEE commands to allow the kernel to obtain the PA of the TEE buffer in order to initialize a valid sg table. 2) Previously the PA was hidden from the kernel, and the kernel checked whether a buffer was restricted with "if (sg_page(sg) == NULL)". In this version, we add a new explicit interface (sg_dma_is_restricted) for users to determine whether a buffer is restricted. 3) Some wording improvements, such as using "rheap". Rebased on v6.9-rc7.
v4: https://lore.kernel.org/linux-mediatek/20240112092014.23999-1-yong.wu@mediat... 1) Rename the heap from "secure" to "restricted", as suggested by Simon/Pekka. Several "secure" strings remain in the MTK files since we use the ARM platform, where this is called the "secure world"/"secure command".
v3: https://lore.kernel.org/linux-mediatek/20231212024607.3681-1-yong.wu@mediate... 1) Split the secure heap into a common file (secure_heap.c) and a MediaTek-specific file (secure_heap_mtk.c), and put all the TEE-related code into the MediaTek file. 2) For the dt-binding, add the "mediatek," prefix since this is a MediaTek TEE firmware definition. 3) Remove the normal CMA heap, which was a draft for qcom. Rebased on v6.7-rc1.
v2: https://lore.kernel.org/linux-mediatek/20231111111559.8218-1-yong.wu@mediate... 1) Move John's patches into the vcodec patchset since they use the new dma-heap interface directly. https://lore.kernel.org/linux-mediatek/20231106120423.23364-1-yunfei.dong@me... 2) Reword the dt-binding description. 3) Rename the heap from mtk_svp to secure_mtk_cm, which means the current vcodec/DRM upstream code no longer matches this. 4) Add a normal CMA heap; currently this is a draft. 5) Regarding the UUID, it is still hard-coded, but it is placed in private data so that others can set their own UUID. Moreover, the UUID is necessary to open a session with the TEE: without it we cannot communicate with the TEE at all, so even a get_uuid interface that tries to make the UUID more generic would not work. If there is another way to make the UUID more general, please feel free to tell me.
v1: https://lore.kernel.org/linux-mediatek/20230911023038.30649-1-yong.wu@mediat... Based on v6.6-rc1.
Yong Wu (9): dt-bindings: reserved-memory: Add mediatek,dynamic-restricted-region scatterlist: Add a flag for the restricted memory lib/scatterlist: Add sg_dup_table dma-buf: heaps: Initialize a restricted heap dma-buf: heaps: restricted_heap: Add private heap ops dma-buf: heaps: restricted_heap: Add dma_ops dma-buf: heaps: restricted_heap: Add MediaTek restricted heap and heap_init dma-buf: heaps: restricted_heap_mtk: Add TEE memory service call dma_buf: heaps: restricted_heap_mtk: Add a new CMA heap
.../mediatek,dynamic-restricted-region.yaml | 43 ++ drivers/dma-buf/heaps/Kconfig | 16 + drivers/dma-buf/heaps/Makefile | 4 +- drivers/dma-buf/heaps/restricted_heap.c | 219 +++++++++ drivers/dma-buf/heaps/restricted_heap.h | 45 ++ drivers/dma-buf/heaps/restricted_heap_mtk.c | 423 ++++++++++++++++++ drivers/dma-buf/heaps/system_heap.c | 27 +- include/linux/scatterlist.h | 36 ++ lib/scatterlist.c | 26 ++ 9 files changed, 812 insertions(+), 27 deletions(-) create mode 100644 Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml create mode 100644 drivers/dma-buf/heaps/restricted_heap.c create mode 100644 drivers/dma-buf/heaps/restricted_heap.h create mode 100644 drivers/dma-buf/heaps/restricted_heap_mtk.c
Add a binding describing the dynamic restricted reserved memory range. The memory range is also defined in the TEE firmware, meaning the TEE is configured with the same address/size as this DT node. For details of the TEE commands, see MTK_TZCMD_SECMEM_ZALLOC and MTK_TZCMD_SECMEM_FREE.
Signed-off-by: Yong Wu yong.wu@mediatek.com --- .../mediatek,dynamic-restricted-region.yaml | 43 +++++++++++++++++++ 1 file changed, 43 insertions(+) create mode 100644 Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml
diff --git a/Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml b/Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml new file mode 100644 index 000000000000..5cbe3a5637fa --- /dev/null +++ b/Documentation/devicetree/bindings/reserved-memory/mediatek,dynamic-restricted-region.yaml @@ -0,0 +1,43 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/reserved-memory/mediatek,dynamic-restricted-re... +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: MediaTek Dynamic Reserved Region + +description: + A memory region that can dynamically transition as a whole between + secure and non-secure states. This memory will be protected by OP-TEE + when allocations are active and unprotected otherwise. + +maintainers: + - Yong Wu yong.wu@mediatek.com + +allOf: + - $ref: reserved-memory.yaml + +properties: + compatible: + const: mediatek,dynamic-restricted-region + +required: + - compatible + - reg + - reusable + +unevaluatedProperties: false + +examples: + - | + reserved-memory { + #address-cells = <1>; + #size-cells = <1>; + ranges; + + reserved-memory@80000000 { + compatible = "mediatek,dynamic-restricted-region"; + reg = <0x80000000 0x18000000>; + reusable; + }; + };
Introduce a flag for restricted memory, meaning the memory is protected by a TEE or hypervisor and is therefore inaccessible to the kernel.
Since sg_dma_unmark_restricted is currently unused, that interface has not been added.
Signed-off-by: Yong Wu yong.wu@mediatek.com --- include/linux/scatterlist.h | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+)
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h index 77df3d7b18a6..a6ad9018eca0 100644 --- a/include/linux/scatterlist.h +++ b/include/linux/scatterlist.h @@ -282,6 +282,7 @@ static inline void sg_unmark_end(struct scatterlist *sg)
#define SG_DMA_BUS_ADDRESS (1 << 0) #define SG_DMA_SWIOTLB (1 << 1) +#define SG_DMA_RESTRICTED (2 << 1)
/** * sg_dma_is_bus_address - Return whether a given segment was marked @@ -352,6 +353,31 @@ static inline void sg_dma_mark_swiotlb(struct scatterlist *sg) sg->dma_flags |= SG_DMA_SWIOTLB; }
+/** + * sg_dma_mark_restricted - Mark the scatterlist for restricted buffer. + * @sg: SG entry + * + * Description: + * Marks a scatterlist as a restricted buffer that may be inaccessible + * to the kernel if it is protected. + */ +static inline void sg_dma_mark_restricted(struct scatterlist *sg) +{ + sg->dma_flags |= SG_DMA_RESTRICTED; +} + +/** + * sg_dma_is_restricted - Return whether the scatterlist was marked as restricted + * buffer. + * @sg: SG entry + * + * Description: + * Returns true if the scatterlist was marked as a restricted buffer. + */ +static inline bool sg_dma_is_restricted(struct scatterlist *sg) +{ + return sg->dma_flags & SG_DMA_RESTRICTED; +} #else
static inline bool sg_dma_is_bus_address(struct scatterlist *sg) @@ -372,6 +398,14 @@ static inline void sg_dma_mark_swiotlb(struct scatterlist *sg) { }
+static inline bool sg_dma_is_restricted(struct scatterlist *sg) +{ + return false; +} + +static inline void sg_dma_mark_restricted(struct scatterlist *sg) +{ +} #endif /* CONFIG_NEED_SG_DMA_FLAGS */
/**
On 15.05.24 at 13:23, Yong Wu wrote:
Introduce a FLAG for the restricted memory which means the memory is protected by TEE or hypervisor, then it's inaccessiable for kernel.
Currently we don't use sg_dma_unmark_restricted, thus this interface has not been added.
Why should that be part of the scatterlist? It doesn't seem to affect any of its functionality.
As far as I can see the scatterlist shouldn't be the transport of this kind of information.
Regards, Christian.
On Thu, 2024-05-16 at 10:17 +0200, Christian König wrote:
On 15.05.24 at 13:23, Yong Wu wrote:
Introduce a FLAG for the restricted memory which means the memory is protected by TEE or hypervisor, then it's inaccessiable for kernel.
Currently we don't use sg_dma_unmark_restricted, thus this interface has not been added.
Why should that be part of the scatterlist? It doesn't seem to affect any of it's functionality.
As far as I can see the scatterlist shouldn't be the transport of this kind of information.
Thanks for the review. I will remove this.
In our use case, DRM imports these buffers and checks whether a buffer is restricted. If it is, the secure GCE takes over.
If this check is not suitable for the scatterlist, would it be acceptable to keep it inside dma-buf, by adding an interface such as:
static bool dma_buf_is_restricted(struct dma_buf *dmabuf)
{
	return !strncmp(dmabuf->exp_name, "restricted", 10);
}
Thanks.
On 20.05.24 at 09:58, Yong Wu (吴勇) wrote:
On Thu, 2024-05-16 at 10:17 +0200, Christian König wrote:
On 15.05.24 at 13:23, Yong Wu wrote:
Introduce a FLAG for the restricted memory which means the memory is protected by TEE or hypervisor, then it's inaccessiable for kernel.
Currently we don't use sg_dma_unmark_restricted, thus this interface has not been added.
Why should that be part of the scatterlist? It doesn't seem to affect any of it's functionality.
As far as I can see the scatterlist shouldn't be the transport of this kind of information.
Thanks for the review. I will remove this.
In our user scenario, DRM will import these buffers and check if this is a restricted buffer. If yes, it will use secure GCE takes over.
If this judgment is not suitable to be placed in scatterlist. I don't know if it is ok to limit this inside dma-buf. Adding such an interface:
static bool dma_buf_is_restricted(struct dma_buf *dmabuf) { return !strncmp(dmabuf->exp_name, "restricted", 10); }
No, usually stuff like that doesn't belong into DMA buf either.
Question here really is who controls the security status of the memory backing the buffer?
In other words who tells the exporter that it should allocate and fill a buffer with encrypted data?
If that is userspace then that is part of the format information and it is also userspace who should tell the importer that it needs to work with encrypted data.
The kernel is intentionally not involved in stuff like that.
Regards, Christian.
On 15/05/24 13:23, Yong Wu wrote:
Introduce a FLAG for the restricted memory which means the memory is protected by TEE or hypervisor, then it's inaccessiable for kernel.
Currently we don't use sg_dma_unmark_restricted, thus this interface has not been added.
Signed-off-by: Yong Wu yong.wu@mediatek.com
include/linux/scatterlist.h | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+)
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h index 77df3d7b18a6..a6ad9018eca0 100644 --- a/include/linux/scatterlist.h +++ b/include/linux/scatterlist.h @@ -282,6 +282,7 @@ static inline void sg_unmark_end(struct scatterlist *sg) #define SG_DMA_BUS_ADDRESS (1 << 0) #define SG_DMA_SWIOTLB (1 << 1) +#define SG_DMA_RESTRICTED (2 << 1)
I think you wanted to write (1 << 2) here :-)
Cheers, Angelo
On Thu, 2024-05-16 at 11:59 +0200, AngeloGioacchino Del Regno wrote:
On 15/05/24 13:23, Yong Wu wrote:
Introduce a FLAG for the restricted memory which means the memory is protected by TEE or hypervisor, then it's inaccessiable for kernel.
Currently we don't use sg_dma_unmark_restricted, thus this interface has not been added.
Signed-off-by: Yong Wu yong.wu@mediatek.com
include/linux/scatterlist.h | 34 ++++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+)
diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h index 77df3d7b18a6..a6ad9018eca0 100644 --- a/include/linux/scatterlist.h +++ b/include/linux/scatterlist.h @@ -282,6 +282,7 @@ static inline void sg_unmark_end(struct scatterlist *sg) #define SG_DMA_BUS_ADDRESS (1 << 0) #define SG_DMA_SWIOTLB (1 << 1) +#define SG_DMA_RESTRICTED (2 << 1)
I think you wanted to write (1 << 2) here :-)
Apparently, you are right:)
Thanks.
To prepare for reuse by the restricted heap, move dup_sg_table() out of system_heap.c. For naming consistency with the other scatterlist helpers, rename it to sg_dup_table.
Cc: Andrew Morton akpm@linux-foundation.org Signed-off-by: Yong Wu yong.wu@mediatek.com --- drivers/dma-buf/heaps/system_heap.c | 27 +-------------------------- include/linux/scatterlist.h | 2 ++ lib/scatterlist.c | 26 ++++++++++++++++++++++++++ 3 files changed, 29 insertions(+), 26 deletions(-)
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c index 9076d47ed2ef..204e55f92330 100644 --- a/drivers/dma-buf/heaps/system_heap.c +++ b/drivers/dma-buf/heaps/system_heap.c @@ -54,31 +54,6 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP}; static const unsigned int orders[] = {8, 4, 0}; #define NUM_ORDERS ARRAY_SIZE(orders)
-static struct sg_table *dup_sg_table(struct sg_table *table) -{ - struct sg_table *new_table; - int ret, i; - struct scatterlist *sg, *new_sg; - - new_table = kzalloc(sizeof(*new_table), GFP_KERNEL); - if (!new_table) - return ERR_PTR(-ENOMEM); - - ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL); - if (ret) { - kfree(new_table); - return ERR_PTR(-ENOMEM); - } - - new_sg = new_table->sgl; - for_each_sgtable_sg(table, sg, i) { - sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset); - new_sg = sg_next(new_sg); - } - - return new_table; -} - static int system_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment) { @@ -90,7 +65,7 @@ static int system_heap_attach(struct dma_buf *dmabuf, if (!a) return -ENOMEM;
- table = dup_sg_table(&buffer->sg_table); + table = sg_dup_table(&buffer->sg_table); if (IS_ERR(table)) { kfree(a); return -ENOMEM; diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h index a6ad9018eca0..53a4cdc11f4f 100644 --- a/include/linux/scatterlist.h +++ b/include/linux/scatterlist.h @@ -538,6 +538,8 @@ size_t sg_pcopy_to_buffer(struct scatterlist *sgl, unsigned int nents, size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents, size_t buflen, off_t skip);
+struct sg_table *sg_dup_table(struct sg_table *table); + /* * Maximum number of entries that will be allocated in one piece, if * a list larger than this is required then chaining will be utilized. diff --git a/lib/scatterlist.c b/lib/scatterlist.c index 7bc2220fea80..3efcf728c13b 100644 --- a/lib/scatterlist.c +++ b/lib/scatterlist.c @@ -1100,6 +1100,32 @@ size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents, } EXPORT_SYMBOL(sg_zero_buffer);
+struct sg_table *sg_dup_table(struct sg_table *table) +{ + struct sg_table *new_table; + int ret, i; + struct scatterlist *sg, *new_sg; + + new_table = kzalloc(sizeof(*new_table), GFP_KERNEL); + if (!new_table) + return ERR_PTR(-ENOMEM); + + ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL); + if (ret) { + kfree(new_table); + return ERR_PTR(-ENOMEM); + } + + new_sg = new_table->sgl; + for_each_sgtable_sg(table, sg, i) { + sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset); + new_sg = sg_next(new_sg); + } + + return new_table; +} +EXPORT_SYMBOL(sg_dup_table); + /* * Extract and pin a list of up to sg_max pages from UBUF- or IOVEC-class * iterators, and add them to the scatterlist.
Initialize a restricted heap. For now this only adds a skeleton heap, in preparation for the later patches.
Signed-off-by: Yong Wu yong.wu@mediatek.com --- drivers/dma-buf/heaps/Kconfig | 9 ++++ drivers/dma-buf/heaps/Makefile | 3 +- drivers/dma-buf/heaps/restricted_heap.c | 67 +++++++++++++++++++++++++ drivers/dma-buf/heaps/restricted_heap.h | 22 ++++++++ 4 files changed, 100 insertions(+), 1 deletion(-) create mode 100644 drivers/dma-buf/heaps/restricted_heap.c create mode 100644 drivers/dma-buf/heaps/restricted_heap.h
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index a5eef06c4226..e54506f480ea 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -12,3 +12,12 @@ config DMABUF_HEAPS_CMA Choose this option to enable dma-buf CMA heap. This heap is backed by the Contiguous Memory Allocator (CMA). If your system has these regions, you should say Y here. + +config DMABUF_HEAPS_RESTRICTED + bool "DMA-BUF Restricted Heap" + depends on DMABUF_HEAPS + help + Choose this option to enable dma-buf restricted heap. The purpose of this + heap is to manage buffers that are inaccessible to the kernel and user space. + There may be several ways to restrict it, for example it may be encrypted or + protected by a TEE or hypervisor. If in doubt, say N. diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index 974467791032..a2437c1817e2 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,3 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 -obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o +obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED) += restricted_heap.o +obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c new file mode 100644 index 000000000000..c2ae19ba7d7e --- /dev/null +++ b/drivers/dma-buf/heaps/restricted_heap.c @@ -0,0 +1,67 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMABUF restricted heap exporter + * + * Copyright (C) 2024 MediaTek Inc. 
+ */ + +#include <linux/dma-buf.h> +#include <linux/dma-heap.h> +#include <linux/err.h> +#include <linux/slab.h> + +#include "restricted_heap.h" + +static struct dma_buf * +restricted_heap_allocate(struct dma_heap *heap, unsigned long size, + unsigned long fd_flags, unsigned long heap_flags) +{ + struct restricted_buffer *restricted_buf; + DEFINE_DMA_BUF_EXPORT_INFO(exp_info); + struct dma_buf *dmabuf; + int ret; + + restricted_buf = kzalloc(sizeof(*restricted_buf), GFP_KERNEL); + if (!restricted_buf) + return ERR_PTR(-ENOMEM); + + restricted_buf->size = ALIGN(size, PAGE_SIZE); + restricted_buf->heap = heap; + + exp_info.exp_name = dma_heap_get_name(heap); + exp_info.size = restricted_buf->size; + exp_info.flags = fd_flags; + exp_info.priv = restricted_buf; + + dmabuf = dma_buf_export(&exp_info); + if (IS_ERR(dmabuf)) { + ret = PTR_ERR(dmabuf); + goto err_free_buf; + } + + return dmabuf; + +err_free_buf: + kfree(restricted_buf); + return ERR_PTR(ret); +} + +static const struct dma_heap_ops rheap_ops = { + .allocate = restricted_heap_allocate, +}; + +int restricted_heap_add(struct restricted_heap *rheap) +{ + struct dma_heap_export_info exp_info; + struct dma_heap *heap; + + exp_info.name = rheap->name; + exp_info.ops = &rheap_ops; + exp_info.priv = (void *)rheap; + + heap = dma_heap_add(&exp_info); + if (IS_ERR(heap)) + return PTR_ERR(heap); + return 0; +} +EXPORT_SYMBOL_GPL(restricted_heap_add); diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h new file mode 100644 index 000000000000..b448f77616ac --- /dev/null +++ b/drivers/dma-buf/heaps/restricted_heap.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Restricted heap Header. + * + * Copyright (C) 2024 MediaTek, Inc. 
+ */ + +#ifndef _DMABUF_RESTRICTED_HEAP_H_ +#define _DMABUF_RESTRICTED_HEAP_H_ + +struct restricted_buffer { + struct dma_heap *heap; + size_t size; +}; + +struct restricted_heap { + const char *name; +}; + +int restricted_heap_add(struct restricted_heap *rheap); + +#endif
Add "struct restricted_heap_ops". For restricted memory there are two steps in total: a) alloc: allocate the buffer in the kernel; b) restrict_buf: restrict/protect/secure that buffer. "alloc" is mandatory, while "restrict_buf" is optional since it may be part of "alloc".
Signed-off-by: Yong Wu yong.wu@mediatek.com --- drivers/dma-buf/heaps/restricted_heap.c | 41 ++++++++++++++++++++++++- drivers/dma-buf/heaps/restricted_heap.h | 12 ++++++++ 2 files changed, 52 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c index c2ae19ba7d7e..8bb3c1876a69 100644 --- a/drivers/dma-buf/heaps/restricted_heap.c +++ b/drivers/dma-buf/heaps/restricted_heap.c @@ -12,10 +12,44 @@
#include "restricted_heap.h"
+static int +restricted_heap_memory_allocate(struct restricted_heap *rheap, struct restricted_buffer *buf) +{ + const struct restricted_heap_ops *ops = rheap->ops; + int ret; + + ret = ops->alloc(rheap, buf); + if (ret) + return ret; + + if (ops->restrict_buf) { + ret = ops->restrict_buf(rheap, buf); + if (ret) + goto buf_free; + } + return 0; + +buf_free: + ops->free(rheap, buf); + return ret; +} + +static void +restricted_heap_memory_free(struct restricted_heap *rheap, struct restricted_buffer *buf) +{ + const struct restricted_heap_ops *ops = rheap->ops; + + if (ops->unrestrict_buf) + ops->unrestrict_buf(rheap, buf); + + ops->free(rheap, buf); +} + static struct dma_buf * restricted_heap_allocate(struct dma_heap *heap, unsigned long size, unsigned long fd_flags, unsigned long heap_flags) { + struct restricted_heap *rheap = dma_heap_get_drvdata(heap); struct restricted_buffer *restricted_buf; DEFINE_DMA_BUF_EXPORT_INFO(exp_info); struct dma_buf *dmabuf; @@ -28,6 +62,9 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size, restricted_buf->size = ALIGN(size, PAGE_SIZE); restricted_buf->heap = heap;
+ ret = restricted_heap_memory_allocate(rheap, restricted_buf); + if (ret) + goto err_free_buf; exp_info.exp_name = dma_heap_get_name(heap); exp_info.size = restricted_buf->size; exp_info.flags = fd_flags; @@ -36,11 +73,13 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size, dmabuf = dma_buf_export(&exp_info); if (IS_ERR(dmabuf)) { ret = PTR_ERR(dmabuf); - goto err_free_buf; + goto err_free_rstrd_mem; }
return dmabuf;
+err_free_rstrd_mem: + restricted_heap_memory_free(rheap, restricted_buf); err_free_buf: kfree(restricted_buf); return ERR_PTR(ret); diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h index b448f77616ac..5783275d5714 100644 --- a/drivers/dma-buf/heaps/restricted_heap.h +++ b/drivers/dma-buf/heaps/restricted_heap.h @@ -15,6 +15,18 @@ struct restricted_buffer {
struct restricted_heap { const char *name; + + const struct restricted_heap_ops *ops; +}; + +struct restricted_heap_ops { + int (*heap_init)(struct restricted_heap *rheap); + + int (*alloc)(struct restricted_heap *rheap, struct restricted_buffer *buf); + void (*free)(struct restricted_heap *rheap, struct restricted_buffer *buf); + + int (*restrict_buf)(struct restricted_heap *rheap, struct restricted_buffer *buf); + void (*unrestrict_buf)(struct restricted_heap *rheap, struct restricted_buffer *buf); };
int restricted_heap_add(struct restricted_heap *rheap);
On Wed, May 15, 2024 at 07:23:04PM GMT, Yong Wu wrote:
Add "struct restricted_heap_ops". For the restricted memory, totally there are two steps: a) alloc: Allocate the buffer in kernel; b) restrict_buf: Restrict/Protect/Secure that buffer. The "alloc" is mandatory while "restrict_buf" is optional since it may be part of "alloc".
Signed-off-by: Yong Wu yong.wu@mediatek.com
drivers/dma-buf/heaps/restricted_heap.c | 41 ++++++++++++++++++++++++- drivers/dma-buf/heaps/restricted_heap.h | 12 ++++++++ 2 files changed, 52 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c index c2ae19ba7d7e..8bb3c1876a69 100644 --- a/drivers/dma-buf/heaps/restricted_heap.c +++ b/drivers/dma-buf/heaps/restricted_heap.c @@ -12,10 +12,44 @@ #include "restricted_heap.h" +static int +restricted_heap_memory_allocate(struct restricted_heap *rheap, struct restricted_buffer *buf) +{
+	const struct restricted_heap_ops *ops = rheap->ops;
+	int ret;
+
+	ret = ops->alloc(rheap, buf);
+	if (ret)
+		return ret;
+
+	if (ops->restrict_buf) {
+		ret = ops->restrict_buf(rheap, buf);
+		if (ret)
+			goto buf_free;
+	}
+	return 0;
+
+buf_free:
+	ops->free(rheap, buf);
+	return ret;
+}
+
+static void
+restricted_heap_memory_free(struct restricted_heap *rheap, struct restricted_buffer *buf)
+{
+	const struct restricted_heap_ops *ops = rheap->ops;
+
+	if (ops->unrestrict_buf)
+		ops->unrestrict_buf(rheap, buf);
+	ops->free(rheap, buf);
+}
static struct dma_buf * restricted_heap_allocate(struct dma_heap *heap, unsigned long size, unsigned long fd_flags, unsigned long heap_flags) {
+	struct restricted_heap *rheap = dma_heap_get_drvdata(heap);
 	struct restricted_buffer *restricted_buf;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	struct dma_buf *dmabuf;
@@ -28,6 +62,9 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size, restricted_buf->size = ALIGN(size, PAGE_SIZE); restricted_buf->heap = heap;
+	ret = restricted_heap_memory_allocate(rheap, restricted_buf);
+	if (ret)
+		goto err_free_buf;
+
 	exp_info.exp_name = dma_heap_get_name(heap);
 	exp_info.size = restricted_buf->size;
 	exp_info.flags = fd_flags;
@@ -36,11 +73,13 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size,
 	dmabuf = dma_buf_export(&exp_info);
 	if (IS_ERR(dmabuf)) {
 		ret = PTR_ERR(dmabuf);
-		goto err_free_buf;
+		goto err_free_rstrd_mem;
 	}
 
 	return dmabuf;
 
+err_free_rstrd_mem:
+	restricted_heap_memory_free(rheap, restricted_buf);
err_free_buf: kfree(restricted_buf); return ERR_PTR(ret); diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h index b448f77616ac..5783275d5714 100644 --- a/drivers/dma-buf/heaps/restricted_heap.h +++ b/drivers/dma-buf/heaps/restricted_heap.h @@ -15,6 +15,18 @@ struct restricted_buffer { struct restricted_heap { const char *name;
+
+	const struct restricted_heap_ops *ops;
+};
+
+struct restricted_heap_ops {
+	int (*heap_init)(struct restricted_heap *rheap);
It might be worth moving this to a later patch when it's actually getting used.
Thierry
Add the dma_buf ops for this restricted heap. For a restricted buffer: 1) cache ops/mmap are not allowed, thus return -EPERM for them; 2) in map_dma_buf, use DMA_ATTR_SKIP_CPU_SYNC to skip cache sync since the buffer is protected. Buffers of this type are marked by sg_dma_mark_restricted, and users can check whether a buffer is restricted via sg_dma_is_restricted.
Signed-off-by: Yong Wu yong.wu@mediatek.com --- drivers/dma-buf/heaps/restricted_heap.c | 102 ++++++++++++++++++++++++ drivers/dma-buf/heaps/restricted_heap.h | 2 + 2 files changed, 104 insertions(+)
diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c index 8bb3c1876a69..4e45d46a6467 100644 --- a/drivers/dma-buf/heaps/restricted_heap.c +++ b/drivers/dma-buf/heaps/restricted_heap.c @@ -8,10 +8,16 @@ #include <linux/dma-buf.h> #include <linux/dma-heap.h> #include <linux/err.h> +#include <linux/scatterlist.h> #include <linux/slab.h>
#include "restricted_heap.h"
+struct restricted_heap_attachment { + struct sg_table *table; + struct device *dev; +}; + static int restricted_heap_memory_allocate(struct restricted_heap *rheap, struct restricted_buffer *buf) { @@ -45,6 +51,101 @@ restricted_heap_memory_free(struct restricted_heap *rheap, struct restricted_buf ops->free(rheap, buf); }
+static int restricted_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment) +{ + struct restricted_buffer *restricted_buf = dmabuf->priv; + struct restricted_heap_attachment *a; + struct sg_table *table; + + a = kzalloc(sizeof(*a), GFP_KERNEL); + if (!a) + return -ENOMEM; + + table = sg_dup_table(&restricted_buf->sg_table); + if (!table) { + kfree(a); + return -ENOMEM; + } + + sg_dma_mark_restricted(table->sgl); + a->table = table; + a->dev = attachment->dev; + attachment->priv = a; + + return 0; +} + +static void restricted_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment) +{ + struct restricted_heap_attachment *a = attachment->priv; + + sg_free_table(a->table); + kfree(a->table); + kfree(a); +} + +static struct sg_table * +restricted_heap_map_dma_buf(struct dma_buf_attachment *attachment, + enum dma_data_direction direction) +{ + struct restricted_heap_attachment *a = attachment->priv; + struct sg_table *table = a->table; + int ret; + + ret = dma_map_sgtable(attachment->dev, table, direction, DMA_ATTR_SKIP_CPU_SYNC); + if (ret) + return ERR_PTR(ret); + return table; +} + +static void +restricted_heap_unmap_dma_buf(struct dma_buf_attachment *attachment, struct sg_table *table, + enum dma_data_direction direction) +{ + struct restricted_heap_attachment *a = attachment->priv; + + WARN_ON(a->table != table); + + dma_unmap_sgtable(attachment->dev, table, direction, DMA_ATTR_SKIP_CPU_SYNC); +} + +static int +restricted_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction) +{ + return -EPERM; +} + +static int +restricted_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf, enum dma_data_direction direction) +{ + return -EPERM; +} + +static int restricted_heap_dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) +{ + return -EPERM; +} + +static void restricted_heap_free(struct dma_buf *dmabuf) +{ + struct restricted_buffer *restricted_buf = dmabuf->priv; + struct restricted_heap 
*rheap = dma_heap_get_drvdata(restricted_buf->heap); + + restricted_heap_memory_free(rheap, restricted_buf); + kfree(restricted_buf); +} + +static const struct dma_buf_ops restricted_heap_buf_ops = { + .attach = restricted_heap_attach, + .detach = restricted_heap_detach, + .map_dma_buf = restricted_heap_map_dma_buf, + .unmap_dma_buf = restricted_heap_unmap_dma_buf, + .begin_cpu_access = restricted_heap_dma_buf_begin_cpu_access, + .end_cpu_access = restricted_heap_dma_buf_end_cpu_access, + .mmap = restricted_heap_dma_buf_mmap, + .release = restricted_heap_free, +}; + static struct dma_buf * restricted_heap_allocate(struct dma_heap *heap, unsigned long size, unsigned long fd_flags, unsigned long heap_flags) @@ -66,6 +167,7 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size, if (ret) goto err_free_buf; exp_info.exp_name = dma_heap_get_name(heap); + exp_info.ops = &restricted_heap_buf_ops; exp_info.size = restricted_buf->size; exp_info.flags = fd_flags; exp_info.priv = restricted_buf; diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h index 5783275d5714..6d9599a4a34e 100644 --- a/drivers/dma-buf/heaps/restricted_heap.h +++ b/drivers/dma-buf/heaps/restricted_heap.h @@ -11,6 +11,8 @@ struct restricted_buffer { struct dma_heap *heap; size_t size; + + struct sg_table sg_table; };
struct restricted_heap {
Add a MediaTek restricted heap which uses a TEE service call to restrict the buffer. Currently this restricted heap is only a skeleton, in preparation for the later patches. There are mainly two changes: a) add a heap_init ops, since the TEE probes later than the restricted heap, so the heap is initialized when a buffer is requested for the first time; b) add a priv_data for each heap, so that MTK-specific data (such as the "TEE session") can be placed in priv_data.
Currently our heap depends on CMA, which can only be built-in (bool), thus we depend on "TEE=y".
Signed-off-by: Yong Wu yong.wu@mediatek.com --- drivers/dma-buf/heaps/Kconfig | 7 ++ drivers/dma-buf/heaps/Makefile | 1 + drivers/dma-buf/heaps/restricted_heap.c | 11 ++ drivers/dma-buf/heaps/restricted_heap.h | 2 + drivers/dma-buf/heaps/restricted_heap_mtk.c | 115 ++++++++++++++++++++ 5 files changed, 136 insertions(+) create mode 100644 drivers/dma-buf/heaps/restricted_heap_mtk.c
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig index e54506f480ea..84f748fb2856 100644 --- a/drivers/dma-buf/heaps/Kconfig +++ b/drivers/dma-buf/heaps/Kconfig @@ -21,3 +21,10 @@ config DMABUF_HEAPS_RESTRICTED heap is to manage buffers that are inaccessible to the kernel and user space. There may be several ways to restrict it, for example it may be encrypted or protected by a TEE or hypervisor. If in doubt, say N. + +config DMABUF_HEAPS_RESTRICTED_MTK + bool "MediaTek DMA-BUF Restricted Heap" + depends on DMABUF_HEAPS_RESTRICTED && TEE=y + help + Enable restricted dma-buf heaps for MediaTek platform. This heap is backed by + TEE client interfaces. If in doubt, say N. diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index a2437c1817e2..0028aa9d875f 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,4 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED) += restricted_heap.o +obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_MTK) += restricted_heap_mtk.o obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c index 4e45d46a6467..8bc8a5e3f969 100644 --- a/drivers/dma-buf/heaps/restricted_heap.c +++ b/drivers/dma-buf/heaps/restricted_heap.c @@ -151,11 +151,22 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size, unsigned long fd_flags, unsigned long heap_flags) { struct restricted_heap *rheap = dma_heap_get_drvdata(heap); + const struct restricted_heap_ops *ops = rheap->ops; struct restricted_buffer *restricted_buf; DEFINE_DMA_BUF_EXPORT_INFO(exp_info); struct dma_buf *dmabuf; int ret;
+	/*
+	 * In some implementations, a TEE is required to protect the buffer. However the TEE
+	 * may probe late, thus heap_init is performed when the first buffer is requested.
+	 */
+	if (ops->heap_init) {
+		ret = ops->heap_init(rheap);
+		if (ret)
+			return ERR_PTR(ret);
+	}
+
 	restricted_buf = kzalloc(sizeof(*restricted_buf), GFP_KERNEL);
 	if (!restricted_buf)
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
index 6d9599a4a34e..2a33a1c7a48b 100644
--- a/drivers/dma-buf/heaps/restricted_heap.h
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -19,6 +19,8 @@ struct restricted_heap {
 	const char *name;
const struct restricted_heap_ops *ops; + + void *priv_data; };
struct restricted_heap_ops { diff --git a/drivers/dma-buf/heaps/restricted_heap_mtk.c b/drivers/dma-buf/heaps/restricted_heap_mtk.c new file mode 100644 index 000000000000..52e805eb9858 --- /dev/null +++ b/drivers/dma-buf/heaps/restricted_heap_mtk.c @@ -0,0 +1,115 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMABUF restricted heap exporter for MediaTek + * + * Copyright (C) 2024 MediaTek Inc. + */ +#define pr_fmt(fmt) "rheap_mtk: " fmt + +#include <linux/dma-buf.h> +#include <linux/err.h> +#include <linux/module.h> +#include <linux/slab.h> +#include <linux/tee_drv.h> +#include <linux/uuid.h> + +#include "restricted_heap.h" + +#define TZ_TA_MEM_UUID_MTK "4477588a-8476-11e2-ad15-e41f1390d676" + +#define TEE_PARAM_NUM 4 + +enum mtk_secure_mem_type { + /* + * MediaTek static chunk memory carved out for TrustZone. The memory + * management is inside the TEE. + */ + MTK_SECURE_MEMORY_TYPE_CM_TZ = 1, +}; + +struct mtk_restricted_heap_data { + struct tee_context *tee_ctx; + u32 tee_session; + + const enum mtk_secure_mem_type mem_type; + +}; + +static int mtk_tee_ctx_match(struct tee_ioctl_version_data *ver, const void *data) +{ + return ver->impl_id == TEE_IMPL_ID_OPTEE; +} + +static int mtk_tee_session_init(struct mtk_restricted_heap_data *data) +{ + struct tee_param t_param[TEE_PARAM_NUM] = {0}; + struct tee_ioctl_open_session_arg arg = {0}; + uuid_t ta_mem_uuid; + int ret; + + data->tee_ctx = tee_client_open_context(NULL, mtk_tee_ctx_match, NULL, NULL); + if (IS_ERR(data->tee_ctx)) { + pr_err_once("%s: open context failed, ret=%ld\n", __func__, + PTR_ERR(data->tee_ctx)); + return -ENODEV; + } + + arg.num_params = TEE_PARAM_NUM; + arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC; + ret = uuid_parse(TZ_TA_MEM_UUID_MTK, &ta_mem_uuid); + if (ret) + goto close_context; + memcpy(&arg.uuid, &ta_mem_uuid.b, sizeof(ta_mem_uuid)); + + ret = tee_client_open_session(data->tee_ctx, &arg, t_param); + if (ret < 0 || arg.ret) { + pr_err_once("%s: open session failed, ret=%d:%d\n", + 
__func__, ret, arg.ret); + ret = -EINVAL; + goto close_context; + } + data->tee_session = arg.session; + return 0; + +close_context: + tee_client_close_context(data->tee_ctx); + return ret; +} + +static int mtk_restricted_heap_init(struct restricted_heap *rheap) +{ + struct mtk_restricted_heap_data *data = rheap->priv_data; + + if (!data->tee_ctx) + return mtk_tee_session_init(data); + return 0; +} + +static const struct restricted_heap_ops mtk_restricted_heap_ops = { + .heap_init = mtk_restricted_heap_init, +}; + +static struct mtk_restricted_heap_data mtk_restricted_heap_data = { + .mem_type = MTK_SECURE_MEMORY_TYPE_CM_TZ, +}; + +static struct restricted_heap mtk_restricted_heaps[] = { + { + .name = "restricted_mtk_cm", + .ops = &mtk_restricted_heap_ops, + .priv_data = &mtk_restricted_heap_data, + }, +}; + +static int mtk_restricted_heap_initialize(void) +{ + struct restricted_heap *rheap = mtk_restricted_heaps; + unsigned int i; + + for (i = 0; i < ARRAY_SIZE(mtk_restricted_heaps); i++, rheap++) + restricted_heap_add(rheap); + return 0; +} +module_init(mtk_restricted_heap_initialize); +MODULE_DESCRIPTION("MediaTek Restricted Heap Driver"); +MODULE_LICENSE("GPL");
On Wed, May 15, 2024 at 07:23:06PM GMT, Yong Wu wrote:
[...]
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile index a2437c1817e2..0028aa9d875f 100644 --- a/drivers/dma-buf/heaps/Makefile +++ b/drivers/dma-buf/heaps/Makefile @@ -1,4 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED) += restricted_heap.o +obj-$(CONFIG_DMABUF_HEAPS_RESTRICTED_MTK) += restricted_heap_mtk.o obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o diff --git a/drivers/dma-buf/heaps/restricted_heap.c b/drivers/dma-buf/heaps/restricted_heap.c index 4e45d46a6467..8bc8a5e3f969 100644 --- a/drivers/dma-buf/heaps/restricted_heap.c +++ b/drivers/dma-buf/heaps/restricted_heap.c @@ -151,11 +151,22 @@ restricted_heap_allocate(struct dma_heap *heap, unsigned long size, unsigned long fd_flags, unsigned long heap_flags) { struct restricted_heap *rheap = dma_heap_get_drvdata(heap);
+	const struct restricted_heap_ops *ops = rheap->ops;
 	struct restricted_buffer *restricted_buf;
 	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
 	struct dma_buf *dmabuf;
 	int ret;
 
+	/*
+	 * In some implementations, a TEE is required to protect the buffer. However the TEE
+	 * may probe late, thus heap_init is performed when the first buffer is requested.
+	 */
+	if (ops->heap_init) {
+		ret = ops->heap_init(rheap);
+		if (ret)
+			return ERR_PTR(ret);
+	}
I wonder if we should make this parameterized rather than the default. Perhaps we can add a "init_on_demand" (or whatever other name) flag to struct restricted_heap_ops and then call this from heap initialization if possible and defer initialization depending on the restricted heap provider?
+
 	restricted_buf = kzalloc(sizeof(*restricted_buf), GFP_KERNEL);
 	if (!restricted_buf)
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h index 6d9599a4a34e..2a33a1c7a48b 100644 --- a/drivers/dma-buf/heaps/restricted_heap.h +++ b/drivers/dma-buf/heaps/restricted_heap.h @@ -19,6 +19,8 @@ struct restricted_heap { const char *name; const struct restricted_heap_ops *ops;
+
+	void *priv_data;
Honestly, I would just get rid of any of this extra padding/indentation in these structures. There's really no benefit to this, except maybe if you *really* like things to be aligned, in which case the above is now probably worse than if you didn't try to align in the first place.
Thierry
On Wed, May 15, 2024 at 1:25 PM Yong Wu yong.wu@mediatek.com wrote:
[...]
+close_context:
+	tee_client_close_context(data->tee_ctx);
There's a data->tee_ctx = NULL; missing here.
Cheers, Jens
+	return ret;
+}
[...]
Add TEE service call for MediaTek heap. We have a limited number of hardware entries to protect memory, therefore we cannot protect memory arbitrarily, and our secure memory management is actually inside OPTEE.
There are 3 commands in total: 1) MTK_TZCMD_SECMEM_ZALLOC: the kernel tells the TEE what size it wants and the TEE returns a "secure handle"/"secure address". To make the name more general, we call it "restricted_addr" here. The restricted_addr is a reference to the secure buffer within the TEE. 2) MTK_TZCMD_SECMEM_FREE: free the buffer; this matches the ZALLOC command above. 3) MTK_TZCMD_SECMEM_RETRIEVE_SG: if the TEE buffer is discrete, this command retrieves the detailed PA list from the TEE, with which the kernel initializes the sg table. If the TEE buffer is contiguous, the PA is obtained directly from MTK_TZCMD_SECMEM_ZALLOC.
Signed-off-by: Yong Wu yong.wu@mediatek.com --- drivers/dma-buf/heaps/restricted_heap.h | 3 + drivers/dma-buf/heaps/restricted_heap_mtk.c | 193 ++++++++++++++++++++ 2 files changed, 196 insertions(+)
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h index 2a33a1c7a48b..8cb9211093c5 100644 --- a/drivers/dma-buf/heaps/restricted_heap.h +++ b/drivers/dma-buf/heaps/restricted_heap.h @@ -13,6 +13,9 @@ struct restricted_buffer { size_t size;
struct sg_table sg_table; + + /* A reference to a buffer in the trusted or secure world. */ + u64 restricted_addr; };
struct restricted_heap { diff --git a/drivers/dma-buf/heaps/restricted_heap_mtk.c b/drivers/dma-buf/heaps/restricted_heap_mtk.c index 52e805eb9858..e571eae719e0 100644 --- a/drivers/dma-buf/heaps/restricted_heap_mtk.c +++ b/drivers/dma-buf/heaps/restricted_heap_mtk.c @@ -27,6 +27,46 @@ enum mtk_secure_mem_type { MTK_SECURE_MEMORY_TYPE_CM_TZ = 1, };
+/* This structure also is synchronized with tee, thus not use the phys_addr_t */ +struct mtk_tee_scatterlist { + u64 pa; + u32 length; +} __packed; + +enum mtk_secure_buffer_tee_cmd { + /* + * Allocate the zeroed secure memory from TEE. + * + * [in] value[0].a: The buffer size. + * value[0].b: alignment. + * [in] value[1].a: enum mtk_secure_mem_type. + * [inout] + * [out] value[2].a: entry number of memory block. + * If this is 1, it means the memory is continuous. + * value[2].b: buffer PA base. + * [out] value[3].a: The secure handle. + */ + MTK_TZCMD_SECMEM_ZALLOC = 0x10000, /* MTK TEE Command ID Base */ + + /* + * Free secure memory. + * + * [in] value[0].a: The secure handle of this buffer, It's value[3].a of + * MTK_TZCMD_SECMEM_ZALLOC. + * [out] value[1].a: return value, 0 means successful, otherwise fail. + */ + MTK_TZCMD_SECMEM_FREE = 0x10001, + + /* + * Get secure memory sg-list. + * + * [in] value[0].a: The secure handle of this buffer, It's value[3].a of + * MTK_TZCMD_SECMEM_ZALLOC. + * [out] value[1].a: The array of sg items (struct mtk_tee_scatterlist). + */ + MTK_TZCMD_SECMEM_RETRIEVE_SG = 0x10002, +}; + struct mtk_restricted_heap_data { struct tee_context *tee_ctx; u32 tee_session; @@ -76,6 +116,155 @@ static int mtk_tee_session_init(struct mtk_restricted_heap_data *data) return ret; }
+static int mtk_tee_service_call(struct tee_context *tee_ctx, u32 session, + unsigned int command, struct tee_param *params) +{ + struct tee_ioctl_invoke_arg arg = {0}; + int ret; + + arg.num_params = TEE_PARAM_NUM; + arg.session = session; + arg.func = command; + + ret = tee_client_invoke_func(tee_ctx, &arg, params); + if (ret < 0 || arg.ret) { + pr_err("%s: cmd 0x%x ret %d:%x.\n", __func__, command, ret, arg.ret); + ret = -EOPNOTSUPP; + } + return ret; +} + +static int mtk_tee_secmem_free(struct restricted_heap *rheap, u64 restricted_addr) +{ + struct mtk_restricted_heap_data *data = rheap->priv_data; + struct tee_param params[TEE_PARAM_NUM] = {0}; + + params[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; + params[0].u.value.a = restricted_addr; + params[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT; + + mtk_tee_service_call(data->tee_ctx, data->tee_session, + MTK_TZCMD_SECMEM_FREE, params); + if (params[1].u.value.a) { + pr_err("%s, SECMEM_FREE buffer(0x%llx) fail(%lld) from TEE.\n", + rheap->name, restricted_addr, params[1].u.value.a); + return -EINVAL; + } + return 0; +} + +static int mtk_tee_restrict_memory(struct restricted_heap *rheap, struct restricted_buffer *buf) +{ + struct mtk_restricted_heap_data *data = rheap->priv_data; + struct tee_param params[TEE_PARAM_NUM] = {0}; + struct mtk_tee_scatterlist *tee_sg_item; + struct mtk_tee_scatterlist *tee_sg_buf; + unsigned int sg_num, size, i; + struct tee_shm *sg_shm; + struct scatterlist *sg; + phys_addr_t pa_tee; + u64 r_addr; + int ret; + + /* Alloc the secure buffer and get the sg-list number from TEE */ + params[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; + params[0].u.value.a = buf->size; + params[0].u.value.b = PAGE_SIZE; + params[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; + params[1].u.value.a = data->mem_type; + params[2].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT; + params[3].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT; + ret = mtk_tee_service_call(data->tee_ctx, 
data->tee_session, + MTK_TZCMD_SECMEM_ZALLOC, params); + if (ret) + return -ENOMEM; + + sg_num = params[2].u.value.a; + r_addr = params[3].u.value.a; + + /* If there is only one entry, It means the buffer is continuous, Get the PA directly. */ + if (sg_num == 1) { + pa_tee = params[2].u.value.b; + if (!pa_tee) + goto tee_secmem_free; + if (sg_alloc_table(&buf->sg_table, 1, GFP_KERNEL)) + goto tee_secmem_free; + sg_set_page(buf->sg_table.sgl, phys_to_page(pa_tee), buf->size, 0); + buf->restricted_addr = r_addr; + return 0; + } + + /* + * If the buffer inside TEE are discontinuous, Use sharemem to retrieve + * the detail sg list from TEE. + */ + tee_sg_buf = kmalloc_array(sg_num, sizeof(*tee_sg_item), GFP_KERNEL); + if (!tee_sg_buf) { + ret = -ENOMEM; + goto tee_secmem_free; + } + + size = sg_num * sizeof(*tee_sg_item); + sg_shm = tee_shm_register_kernel_buf(data->tee_ctx, tee_sg_buf, size); + if (!sg_shm) + goto free_tee_sg_buf; + + memset(params, 0, sizeof(params)); + params[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT; + params[0].u.value.a = r_addr; + params[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT; + params[1].u.memref.shm = sg_shm; + params[1].u.memref.size = size; + ret = mtk_tee_service_call(data->tee_ctx, data->tee_session, + MTK_TZCMD_SECMEM_RETRIEVE_SG, params); + if (ret) + goto put_shm; + + if (sg_alloc_table(&buf->sg_table, sg_num, GFP_KERNEL)) + goto put_shm; + + for_each_sgtable_sg(&buf->sg_table, sg, i) { + tee_sg_item = tee_sg_buf + i; + if (!tee_sg_item->pa) + goto free_buf_sg; + sg_set_page(sg, phys_to_page(tee_sg_item->pa), + tee_sg_item->length, 0); + } + + tee_shm_put(sg_shm); + kfree(tee_sg_buf); + buf->restricted_addr = r_addr; + return 0; + +free_buf_sg: + sg_free_table(&buf->sg_table); +put_shm: + tee_shm_put(sg_shm); +free_tee_sg_buf: + kfree(tee_sg_buf); +tee_secmem_free: + mtk_tee_secmem_free(rheap, r_addr); + return ret; +} + +static void mtk_tee_unrestrict_memory(struct restricted_heap *rheap, struct restricted_buffer *buf) 
+{ + sg_free_table(&buf->sg_table); + mtk_tee_secmem_free(rheap, buf->restricted_addr); +} + +static int +mtk_restricted_memory_allocate(struct restricted_heap *rheap, struct restricted_buffer *buf) +{ + /* The memory allocating is within the TEE. */ + return 0; +} + +static void +mtk_restricted_memory_free(struct restricted_heap *rheap, struct restricted_buffer *buf) +{ +} + static int mtk_restricted_heap_init(struct restricted_heap *rheap) { struct mtk_restricted_heap_data *data = rheap->priv_data; @@ -87,6 +276,10 @@ static int mtk_restricted_heap_init(struct restricted_heap *rheap)
static const struct restricted_heap_ops mtk_restricted_heap_ops = { .heap_init = mtk_restricted_heap_init, + .alloc = mtk_restricted_memory_allocate, + .free = mtk_restricted_memory_free, + .restrict_buf = mtk_tee_restrict_memory, + .unrestrict_buf = mtk_tee_unrestrict_memory, };
static struct mtk_restricted_heap_data mtk_restricted_heap_data = {
Create a new MediaTek CMA heap from the CMA reserved buffer.
In this heap, when the first buffer is allocated, use cma_alloc to claim the whole CMA range, then send that range to the TEE to protect and manage. For later allocations, we just add to cma_used_size.
When SVP is done, cma_release releases the buffer, and the kernel may reuse the range.
For the "CMA" restricted heap, "struct cma *cma" is a common property, not just for MediaTek, so put it into "struct restricted_heap" instead of our private data.
Signed-off-by: Yong Wu <yong.wu@mediatek.com>
---
 drivers/dma-buf/heaps/Kconfig               |   2 +-
 drivers/dma-buf/heaps/restricted_heap.h     |   4 +
 drivers/dma-buf/heaps/restricted_heap_mtk.c | 121 +++++++++++++++++++-
 3 files changed, 123 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 84f748fb2856..58903bc62ac8 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -24,7 +24,7 @@ config DMABUF_HEAPS_RESTRICTED
 config DMABUF_HEAPS_RESTRICTED_MTK
 	bool "MediaTek DMA-BUF Restricted Heap"
-	depends on DMABUF_HEAPS_RESTRICTED && TEE=y
+	depends on DMABUF_HEAPS_RESTRICTED && DMA_CMA && TEE=y
 	help
 	  Enable restricted dma-buf heaps for MediaTek platform. This heap is backed by
 	  TEE client interfaces. If in doubt, say N.
 
diff --git a/drivers/dma-buf/heaps/restricted_heap.h b/drivers/dma-buf/heaps/restricted_heap.h
index 8cb9211093c5..7dec4b8a471b 100644
--- a/drivers/dma-buf/heaps/restricted_heap.h
+++ b/drivers/dma-buf/heaps/restricted_heap.h
@@ -23,6 +23,10 @@ struct restricted_heap {
const struct restricted_heap_ops *ops;
+	struct cma *cma;
+	unsigned long cma_paddr;
+	unsigned long cma_size;
+
 	void *priv_data;
 };
diff --git a/drivers/dma-buf/heaps/restricted_heap_mtk.c b/drivers/dma-buf/heaps/restricted_heap_mtk.c
index e571eae719e0..6d8119828485 100644
--- a/drivers/dma-buf/heaps/restricted_heap_mtk.c
+++ b/drivers/dma-buf/heaps/restricted_heap_mtk.c
@@ -6,9 +6,11 @@
  */
 #define pr_fmt(fmt) "rheap_mtk: " fmt
 
+#include <linux/cma.h>
 #include <linux/dma-buf.h>
 #include <linux/err.h>
 #include <linux/module.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/slab.h>
 #include <linux/tee_drv.h>
 #include <linux/uuid.h>
@@ -25,6 +27,13 @@ enum mtk_secure_mem_type {
 	 * management is inside the TEE.
 	 */
 	MTK_SECURE_MEMORY_TYPE_CM_TZ = 1,
+	/*
+	 * MediaTek dynamic chunk memory carved out from CMA.
+	 * Normally the CMA can be used by the kernel; when SVP starts, we
+	 * allocate this whole CMA area and pass its PA and size into the TEE
+	 * to protect it, then the detailed memory management is also inside the TEE.
+	 */
+	MTK_SECURE_MEMORY_TYPE_CM_CMA = 2,
 };
 /* This structure also is synchronized with tee, thus not use the phys_addr_t */
@@ -40,7 +49,8 @@ enum mtk_secure_buffer_tee_cmd {
 	 * [in]  value[0].a: The buffer size.
 	 *       value[0].b: alignment.
 	 * [in]  value[1].a: enum mtk_secure_mem_type.
-	 * [inout]
+	 * [inout] [in] value[2].a: PA base in the CMA case.
+	 *              value[2].b: The buffer size in the CMA case.
 	 *	 [out] value[2].a: entry number of memory block.
 	 *		       If this is 1, it means the memory is continuous.
 	 *	       value[2].b: buffer PA base.
@@ -73,6 +83,9 @@ struct mtk_restricted_heap_data {
const enum mtk_secure_mem_type mem_type;
+	struct page *cma_page;
+	unsigned long cma_used_size;
+	struct mutex lock; /* lock for cma_used_size */
 };
 static int mtk_tee_ctx_match(struct tee_ioctl_version_data *ver, const void *data)
@@ -173,6 +186,10 @@ static int mtk_tee_restrict_memory(struct restricted_heap *rheap, struct restric
 	params[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
 	params[1].u.value.a = data->mem_type;
 	params[2].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INOUT;
+	if (rheap->cma && data->mem_type == MTK_SECURE_MEMORY_TYPE_CM_CMA) {
+		params[2].u.value.a = rheap->cma_paddr;
+		params[2].u.value.b = rheap->cma_size;
+	}
 	params[3].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT;
 	ret = mtk_tee_service_call(data->tee_ctx, data->tee_session,
 				   MTK_TZCMD_SECMEM_ZALLOC, params);
@@ -265,6 +282,48 @@ mtk_restricted_memory_free(struct restricted_heap *rheap, struct restricted_buff
 {
 }
+static int mtk_restricted_memory_cma_allocate(struct restricted_heap *rheap,
+					      struct restricted_buffer *buf)
+{
+	struct mtk_restricted_heap_data *data = rheap->priv_data;
+	int ret = 0;
+
+	/*
+	 * Allocate the CMA area only when allocating a buffer for the first
+	 * time; for later buffers just increase cma_used_size. The actual
+	 * memory allocation is done within the TEE.
+	 */
+	mutex_lock(&data->lock);
+	if (!data->cma_used_size) {
+		data->cma_page = cma_alloc(rheap->cma, rheap->cma_size >> PAGE_SHIFT,
+					   get_order(PAGE_SIZE), false);
+		if (!data->cma_page) {
+			ret = -ENOMEM;
+			goto out_unlock;
+		}
+	} else if (data->cma_used_size + buf->size > rheap->cma_size) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
+	data->cma_used_size += buf->size;
+
+out_unlock:
+	mutex_unlock(&data->lock);
+	return ret;
+}
+
+static void mtk_restricted_memory_cma_free(struct restricted_heap *rheap,
+					   struct restricted_buffer *buf)
+{
+	struct mtk_restricted_heap_data *data = rheap->priv_data;
+
+	mutex_lock(&data->lock);
+	data->cma_used_size -= buf->size;
+	if (!data->cma_used_size)
+		cma_release(rheap->cma, data->cma_page,
+			    rheap->cma_size >> PAGE_SHIFT);
+	mutex_unlock(&data->lock);
+}
+
 static int mtk_restricted_heap_init(struct restricted_heap *rheap)
 {
 	struct mtk_restricted_heap_data *data = rheap->priv_data;
@@ -286,21 +345,77 @@ static struct mtk_restricted_heap_data mtk_restricted_heap_data = {
 	.mem_type = MTK_SECURE_MEMORY_TYPE_CM_TZ,
 };
+static const struct restricted_heap_ops mtk_restricted_heap_ops_cma = {
+	.heap_init = mtk_restricted_heap_init,
+	.alloc = mtk_restricted_memory_cma_allocate,
+	.free = mtk_restricted_memory_cma_free,
+	.restrict_buf = mtk_tee_restrict_memory,
+	.unrestrict_buf = mtk_tee_unrestrict_memory,
+};
+
+static struct mtk_restricted_heap_data mtk_restricted_heap_data_cma = {
+	.mem_type = MTK_SECURE_MEMORY_TYPE_CM_CMA,
+};
+
 static struct restricted_heap mtk_restricted_heaps[] = {
 	{
 		.name = "restricted_mtk_cm",
 		.ops = &mtk_restricted_heap_ops,
 		.priv_data = &mtk_restricted_heap_data,
 	},
+	{
+		.name = "restricted_mtk_cma",
+		.ops = &mtk_restricted_heap_ops_cma,
+		.priv_data = &mtk_restricted_heap_data_cma,
+	},
 };
+static int __init mtk_restricted_cma_init(struct reserved_mem *rmem)
+{
+	struct restricted_heap *rheap = mtk_restricted_heaps, *rheap_cma = NULL;
+	struct mtk_restricted_heap_data *data;
+	struct cma *cma;
+	int ret, i;
+
+	for (i = 0; i < ARRAY_SIZE(mtk_restricted_heaps); i++, rheap++) {
+		data = rheap->priv_data;
+		if (data->mem_type == MTK_SECURE_MEMORY_TYPE_CM_CMA) {
+			rheap_cma = rheap;
+			break;
+		}
+	}
+	if (!rheap_cma)
+		return -EINVAL;
+
+	ret = cma_init_reserved_mem(rmem->base, rmem->size, 0, rmem->name,
+				    &cma);
+	if (ret) {
+		pr_err("%s: %s failed to set up CMA, ret %d.\n", __func__, rmem->name, ret);
+		return ret;
+	}
+
+	rheap_cma->cma = cma;
+	rheap_cma->cma_paddr = rmem->base;
+	rheap_cma->cma_size = rmem->size;
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(restricted_cma, "mediatek,dynamic-restricted-region",
+		       mtk_restricted_cma_init);
+
 static int mtk_restricted_heap_initialize(void)
 {
 	struct restricted_heap *rheap = mtk_restricted_heaps;
+	struct mtk_restricted_heap_data *data;
 	unsigned int i;
-	for (i = 0; i < ARRAY_SIZE(mtk_restricted_heaps); i++, rheap++)
-		restricted_heap_add(rheap);
+	for (i = 0; i < ARRAY_SIZE(mtk_restricted_heaps); i++, rheap++) {
+		data = rheap->priv_data;
+		if (data->mem_type == MTK_SECURE_MEMORY_TYPE_CM_CMA && !rheap->cma)
+			continue;
+		if (!restricted_heap_add(rheap))
+			mutex_init(&data->lock);
+	}
 	return 0;
 }
 module_init(mtk_restricted_heap_initialize);