strlcpy() reads the entire source buffer first. This read may exceed
the destination size limit. This is both inefficient and can lead
to linear read overflows if a source string is not NUL-terminated[1].
Additionally, it returns the size of the source string, not the
resulting size of the destination string. In an effort to remove strlcpy()
completely[2], replace strlcpy() here with strscpy().
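A quick sketch of the difference in return semantics (the buffer and string
below are arbitrary examples):

```c
char name[8];

/* strlcpy() reads all of src (even beyond 8 bytes) and returns strlen(src). */
size_t n = strlcpy(name, "0123456789", sizeof(name));	/* n == 10, truncated */

/* strscpy() stops at the destination bound and reports the truncation. */
ssize_t r = strscpy(name, "0123456789", sizeof(name));	/* r == -E2BIG */
```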
Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strlcpy [1]
Link: https://github.com/KSPP/linux/issues/89 [2]
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: Azeem Shaikh <azeemshaikh38(a)gmail.com>
Cc: linux-media(a)vger.kernel.org
Cc: dri-devel(a)lists.freedesktop.org
Cc: linaro-mm-sig(a)lists.linaro.org
Signed-off-by: Kees Cook <keescook(a)chromium.org>
---
drivers/dma-buf/dma-buf.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 21916bba77d5..8fe5aa67b167 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -46,12 +46,12 @@ static char *dmabuffs_dname(struct dentry *dentry, char *buffer, int buflen)
{
struct dma_buf *dmabuf;
char name[DMA_BUF_NAME_LEN];
- size_t ret = 0;
+ ssize_t ret = 0;
dmabuf = dentry->d_fsdata;
spin_lock(&dmabuf->name_lock);
if (dmabuf->name)
- ret = strlcpy(name, dmabuf->name, DMA_BUF_NAME_LEN);
+ ret = strscpy(name, dmabuf->name, sizeof(name));
spin_unlock(&dmabuf->name_lock);
return dynamic_dname(buffer, buflen, "/%s:%s",
--
2.34.1
This patchset adds three secure heaps:
1) secure_mtk_cm: secure chunk memory for MediaTek SVP (Secure Video Path).
The buffer is reserved for the secure world after bootup and it is used
for vcodec's ES/working buffer;
2) secure_mtk_cma: secure CMA memory for MediaTek SVP. This buffer is
dynamically reserved for the secure world and is obtained when we start
playing secure videos. Once secure video playback is complete, the CMA
region is released. This heap is used for the vcodec's frame buffers.
3) secure_cma: Use the kernel CMA ops as the allocation ops.
Currently it is a draft version for Vijay and Jaskaran.
The first two MediaTek heaps will be used by both v4l2 [1] and drm [2], so we
cannot put them into either v4l2 or drm; instead we create a common heap for
them. Meanwhile, we have a limited number of hardware entries for protecting
memory and cannot protect memory arbitrarily, so the secure memory management
actually lives inside OPTEE. The kernel just tells the TEE what size it wants,
and the TEE returns a "secure handle".
[1] https://lore.kernel.org/linux-mediatek/20231106120423.23364-1-yunfei.dong@m…
[2] https://lore.kernel.org/linux-mediatek/20231023044549.21412-1-jason-jh.lin@…
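For illustration, a rough sketch of that size-in / handle-out exchange through
the kernel TEE client API; the command ID and the parameter layout are
assumptions here, the real protocol is defined by the trusted application:

```c
#include <linux/errno.h>
#include <linux/tee_drv.h>

#define TZCMD_SECMEM_ALLOC	0x1	/* assumed command ID */

static int secure_heap_tee_alloc(struct tee_context *ctx, u32 session,
				 size_t size, u32 *sec_handle)
{
	struct tee_ioctl_invoke_arg arg = {
		.func = TZCMD_SECMEM_ALLOC,
		.session = session,
		.num_params = 2,
	};
	struct tee_param params[2] = {
		[0] = {
			.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT,
			.u.value.a = size,	/* requested buffer size */
		},
		[1] = {
			.attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT,
		},
	};
	int ret;

	ret = tee_client_invoke_func(ctx, &arg, params);
	if (ret || arg.ret)
		return ret ? ret : -EINVAL;

	/* The TEE hands back an opaque "secure handle" for the buffer. */
	*sec_handle = params[1].u.value.a;
	return 0;
}
```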
Change note:
v2: 1) Move John's patches into the vcodec patchset since they use the new
dma heap interface directly.
https://lore.kernel.org/linux-mediatek/20231106120423.23364-1-yunfei.dong@m…
2) Reword the dt-binding description.
3) Rename the heap name from mtk_svp to secure_mtk_cm.
This means the current vcodec/DRM upstream code doesn't match this.
4) Add a normal CMA heap. Currently it is a draft version.
5) Regarding the UUID, I still hard-code it, but I put it in private data
so that others can set their own UUID. Moreover, the UUID is necessary for
opening a session with the TEE: without it we cannot communicate with the
TEE at all, so even the get_uuid interface, which tries to make the UUID
more generic, would not work. If there is another way to make the UUID more
general, please feel free to tell me. (A rough sketch of the session setup
follows these notes.)
v1: https://lore.kernel.org/linux-mediatek/20230911023038.30649-1-yong.wu@media…
Based on v6.6-rc1.
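A rough sketch of the session setup with a per-heap UUID (the private-data
struct and the function names are invented; the TEE client calls themselves
are the existing kernel API):

```c
#include <linux/err.h>
#include <linux/tee_drv.h>
#include <linux/uuid.h>

/* Hypothetical per-heap private data carrying the TA UUID. */
struct secure_heap_priv {
	uuid_t ta_uuid;
};

static int optee_ctx_match(struct tee_ioctl_version_data *ver, const void *data)
{
	return ver->impl_id == TEE_IMPL_ID_OPTEE;
}

static int secure_heap_open_session(struct secure_heap_priv *priv,
				    struct tee_context **ctx, u32 *session)
{
	struct tee_ioctl_open_session_arg arg = { };
	int ret;

	*ctx = tee_client_open_context(NULL, optee_ctx_match, NULL, NULL);
	if (IS_ERR(*ctx))
		return PTR_ERR(*ctx);

	/* The UUID comes from the heap's private data, not a global constant. */
	export_uuid(arg.uuid, &priv->ta_uuid);
	arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;

	ret = tee_client_open_session(*ctx, &arg, NULL);
	if (ret || arg.ret) {
		tee_client_close_context(*ctx);
		return ret ? ret : -EINVAL;
	}

	*session = arg.session;
	return 0;
}
```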
Yong Wu (8):
dma-buf: heaps: Initialize a secure heap
dma-buf: heaps: secure_heap: Add private heap ops
dma-buf: heaps: secure_heap: Initialize tee session
dma-buf: heaps: secure_heap: Add tee memory service call
dma-buf: heaps: secure_heap: Add dma_ops
dt-bindings: reserved-memory: Add secure CMA reserved memory range
dma_buf: heaps: secure_heap: Add a new MediaTek CMA heap
dma-buf: heaps: secure_heap: Add normal CMA heap
.../reserved-memory/secure_cma_region.yaml | 44 ++
drivers/dma-buf/heaps/Kconfig | 7 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/secure_heap.c | 602 ++++++++++++++++++
4 files changed, 654 insertions(+)
create mode 100644 Documentation/devicetree/bindings/reserved-memory/secure_cma_region.yaml
create mode 100644 drivers/dma-buf/heaps/secure_heap.c
--
2.25.1
The main goal is secure video playback, and also to enable other potential
uses of this in the future. The 'secure dma-heap' will be
used to allocate dma_buf objects that reference memory in the secure
world that is inaccessible/unmappable by the non-secure (i.e.
kernel/userspace) world. That memory will be used by the secure world
to store secure information (e.g. decrypted media content). The
dma_bufs allocated from the kernel will be passed to V4L2 for video
decoding (as input and output). They will also be used by the drm
system for rendering of the content.
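To make that concrete, a minimal userspace sketch of the allocation step,
assuming the heap shows up as /dev/dma_heap/secure_mtk_cm (name taken from
the cover letter); error handling is trimmed:

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

static int alloc_secure_dmabuf(size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd, ret;

	heap_fd = open("/dev/dma_heap/secure_mtk_cm", O_RDWR | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	if (ret < 0)
		return -1;

	/*
	 * data.fd is a dma-buf fd; the CPU never maps it, it is only passed
	 * on to V4L2 (V4L2_MEMORY_DMABUF) or imported into DRM as a PRIME fd.
	 */
	return data.fd;
}
```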
Hope that helps.
Cheers,
Jeff
On Mon, Nov 13, 2023 at 3:38 AM Pavel Machek <pavel(a)ucw.cz> wrote:
>
> Hi!
>
> > This patchset adds three secure heaps:
> > 1) secure_mtk_cm: secure chunk memory for MediaTek SVP (Secure Video Path).
> > The buffer is reserved for the secure world after bootup and it is used
> > for vcodec's ES/working buffer;
> > 2) secure_mtk_cma: secure CMA memory for MediaTek SVP. This buffer is
> > dynamically reserved for the secure world and is obtained when we start
> > playing secure videos. Once secure video playback is complete, the CMA
> > region is released. This heap is used for the vcodec's frame buffers.
> > 3) secure_cma: Use the kernel CMA ops as the allocation ops.
> > Currently it is a draft version for Vijay and Jaskaran.
>
> Is there high-level description of what the security goals here are,
> somewhere?
>
> BR,
> Pavel
> --
> People of Russia, stop Putin before his war on Ukraine escalates.
Hi Yuran,
On 10/26/23 19:03, Yuran Pereira wrote:
> There are instances where the "args" argument passed to
> nouveau_uvmm_sm_prepare() is NULL.
>
> I.e. when nouveau_uvmm_sm_prepare() is called from
> nouveau_uvmm_sm_unmap_prepare()
>
> ```
> static int
> nouveau_uvmm_sm_unmap_prepare(struct nouveau_uvmm *uvmm,
> ...
> {
> return nouveau_uvmm_sm_prepare(uvmm, new, ops, NULL);
> }
> ```
>
> The problem is that op_map_prepare() which nouveau_uvmm_sm_prepare
> calls, dereferences this value, which can lead to a NULL pointer
> dereference.
op_map_prepare() can't be called with `args` being NULL, since when called
through nouveau_uvmm_sm_unmap_prepare() we can't hit the DRM_GPUVA_OP_MAP
case at all.
Unmapping something never leads to a new mapping being created; it can lead
to remaps, though.
>
> ```
> static int
> op_map_prepare(struct nouveau_uvmm *uvmm,
> ...
> {
> ...
> uvma->region = args->region; <-- Dereferencing of possibly NULL pointer
> uvma->kind = args->kind; <--
> ...
> }
> ```
>
> ```
> static int
> nouveau_uvmm_sm_prepare(struct nouveau_uvmm *uvmm,
> ...
> struct uvmm_map_args *args)
> {
> struct drm_gpuva_op *op;
> u64 vmm_get_start = args ? args->addr : 0;
> u64 vmm_get_end = args ? args->addr + args->range : 0;
> int ret;
>
> drm_gpuva_for_each_op(op, ops) {
> switch (op->op) {
> case DRM_GPUVA_OP_MAP: {
> u64 vmm_get_range = vmm_get_end - vmm_get_start;
>
> ret = op_map_prepare(uvmm, &new->map, &op->map, args); <---
> if (ret)
> goto unwind;
>
> if (args && vmm_get_range) {
> ret = nouveau_uvmm_vmm_get(uvmm, vmm_get_start,
> vmm_get_range);
> if (ret) {
> op_map_prepare_unwind(new->map);
> goto unwind;
> }
> }
> ...
> ```
>
> Since the switch "case DRM_GPUVA_OP_MAP", also NULL checks "args"
This check is not required for the reason given above. If you like, you
can change this patch up to remove the args check and add a comment like:
/* args can't be NULL when called for a map operation. */
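Roughly like this (untested, trimmed to the hunk in question):

```c
	case DRM_GPUVA_OP_MAP: {
		u64 vmm_get_range = vmm_get_end - vmm_get_start;

		/* args can't be NULL when called for a map operation. */
		ret = op_map_prepare(uvmm, &new->map, &op->map, args);
		if (ret)
			goto unwind;

		if (vmm_get_range) {
			ret = nouveau_uvmm_vmm_get(uvmm, vmm_get_start,
						   vmm_get_range);
			if (ret) {
				op_map_prepare_unwind(new->map);
				goto unwind;
			}
		}
		break;
	}
```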
> after the call to op_map_prepare(), my guess is that we should
> probably relocate this check to a point before op_map_prepare()
> is called.
Yeah, I see how this unnecessary check made you think so.
- Danilo
>
> This patch ensures that the value of args is checked before
> calling op_map_prepare()
>
> Addresses-Coverity-ID: 1544574 ("Dereference after null check")
> Signed-off-by: Yuran Pereira <yuran.pereira(a)hotmail.com>
> ---
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> index aae780e4a4aa..6baa481eb2c8 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> @@ -620,11 +620,14 @@ nouveau_uvmm_sm_prepare(struct nouveau_uvmm *uvmm,
> case DRM_GPUVA_OP_MAP: {
> u64 vmm_get_range = vmm_get_end - vmm_get_start;
>
> + if (!args)
> + goto unwind;
> +
> ret = op_map_prepare(uvmm, &new->map, &op->map, args);
> if (ret)
> goto unwind;
>
> - if (args && vmm_get_range) {
> + if (vmm_get_range) {
> ret = nouveau_uvmm_vmm_get(uvmm, vmm_get_start,
> vmm_get_range);
> if (ret) {
On Sun, 12 Nov 2023 20:08:10 -0800 Mina Almasry wrote:
> 1. For (b), would it be OK to implement a very minimal version of
> queue_[stop|start]/queue_mem_[alloc|free], which I use for the sole
> purpose of reposting buffers to an individual queue, and then later
> whoever picks up your queue API effort (maybe me) extends the
> implementation to do the rest of the things you described in your
> email? If not, what is the minimal queue API I can implement and use
> for devmem TCP?
Any form of queue API is better than a temporary ndo.
IIUC it will not bubble up into uAPI in any way so we can extend/change
it later as needed.
> 2. Since this is adding ndo, do I need to implement the ndo for 2
> drivers or is GVE sufficient?
One driver is fine, especially if we're doing this instead of the reset
hack.
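For concreteness, a purely hypothetical sketch of what such a minimal set of
ops could look like; every name below is invented, the point is only the
queue_[stop|start]/queue_mem_[alloc|free] split:

```c
#include <linux/netdevice.h>

struct netdev_queue_mgmt_ops {
	/* Allocate (but don't install) fresh memory for RX queue @idx. */
	void *	(*ndo_queue_mem_alloc)(struct net_device *dev, int idx);
	/* Free memory returned by ndo_queue_mem_alloc() or ndo_queue_stop(). */
	void	(*ndo_queue_mem_free)(struct net_device *dev, void *mem);
	/* Stop RX queue @idx and hand back its current memory via @old_mem. */
	int	(*ndo_queue_stop)(struct net_device *dev, int idx, void **old_mem);
	/* (Re)start RX queue @idx using @mem. */
	int	(*ndo_queue_start)(struct net_device *dev, int idx, void *mem);
};
```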
On Sun, 12 Nov 2023 22:05:30 -0800 Mina Almasry wrote:
> My issue here is that all these skb helpers call each other so I end
> up having to move a lot of the unrelated skb helpers to this new
> header (maybe that is acceptable but it feels weird).
Splitting pp headers again is not an option, we already split it into
types and helpers.
The series doesn't apply and it's a bit hard for me to, in between LPC
talks, figure out how to extract your patches from the GH UI to take a
proper look :(
We can defer this for now, and hopefully v4 will apply to net-next.
But it will need to get solved.
On Sun, 12 Nov 2023 19:28:52 -0800 Mina Almasry wrote:
> My issue with this is that if the driver doesn't support dmabuf then
> the driver will accidentally use the pp backed by the dmabuf, allocate
> a page from it, then call page_address() on it or something, and
> crash.
>
> Currently I avoid that by having the driver be responsible for picking
> up the dmabuf from the netdev_rx_queue and giving it to the page pool.
> What would be the appropriate way to check for driver support in the
> netlink API? Perhaps adding something to ndo_features_check?
We need some form of capabilities. I was expecting to add that as part
of the queue API. Either a new field in struct net_device or in ndos.
I tend to put static driver caps of this nature into ops.
See for instance .supported_ring_params in ethtool ops.
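To sketch the "static caps in ops" pattern: only .supported_ring_params in
ethtool_ops is a real precedent here, every other name is invented:

```c
#include <linux/bits.h>
#include <linux/types.h>

#define QUEUE_MGMT_CAP_DMABUF	BIT(0)	/* invented capability bit */

/* Stand-in for static caps living in the ops struct, the way
 * .supported_ring_params lives in ethtool_ops. */
struct example_queue_ops {
	u32	supported_caps;
	/* ... the queue start/stop/mem ops would live here ... */
};

/* Core-side check before binding a dmabuf pool to a driver's queue. */
static bool example_supports_devmem(const struct example_queue_ops *ops)
{
	return ops && (ops->supported_caps & QUEUE_MGMT_CAP_DMABUF);
}
```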