tldr; DMA buffers aren't normal memory; expecting that you can use
them like normal memory (e.g. that get_user_pages works, or that
they're accounted like any other normal memory) cannot be guaranteed.
Since some userspace only runs on integrated devices, where all
buffers are resident system memory, there's a huge temptation to
assume that a struct page is always present and usable, as for any
pagecache-backed mmap. This has the potential to result in a uapi
nightmare.
To close this gap, require that DMA buffer mmaps are VM_PFNMAP, which
blocks get_user_pages and all the other struct page based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.
Motivated by a recent patch which wanted to switch the system dma-buf
heap to vm_insert_page instead of vm_insert_pfn.
v2:
Jason brought up that we also want to guarantee that all ptes have the
pte_special flag set, to catch fast get_user_pages (on architectures
that support this). Allowing VM_MIXEDMAP (like VM_SPECIAL does) would
still allow vm_insert_page, but limiting to VM_PFNMAP will catch that.
From auditing the various functions that insert pfn pte entries
(vm_insert_pfn_prot, remap_pfn_range and all its callers like
dma_mmap_wc), it looks like VM_PFNMAP is already required anyway, so
this should be the correct flag to check for.
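For illustration, a minimal exporter sketch (hypothetical exporter and
buffer struct, not from this patch): remap_pfn_range() sets VM_PFNMAP
on the vma and inserts special ptes, so an implementation like this
passes the new check, while a vm_insert_page() based one would need
VM_MIXEDMAP and trip the warning.

static int example_dmabuf_mmap(struct dma_buf *dmabuf,
			       struct vm_area_struct *vma)
{
	/* example_buffer and base_pfn are hypothetical names */
	struct example_buffer *buf = dmabuf->priv;

	/* remap_pfn_range() sets VM_PFNMAP and uses special ptes */
	return remap_pfn_range(vma, vma->vm_start,
			       buf->base_pfn + vma->vm_pgoff,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}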
v3: Change to WARN_ON_ONCE (Thomas Zimmermann)
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-Wb…
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
--
Resending this so I can test the next patches for vgem/shmem in
intel-gfx-ci.
No immediate plans to merge this patch here since ttm isn't addressed
yet (and there we have the hugepte issue, for which I don't think we
have a clear consensus yet).
-Daniel
---
drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 63d32261b63f..d19b1cf6c34f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -130,6 +130,7 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
+ int ret;
if (!is_dma_buf_file(file))
return -EINVAL;
@@ -145,7 +146,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON_ONCE(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1260,6 +1265,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
+ int ret;
+
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1280,7 +1287,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON_ONCE(!(vma->vm_flags & VM_PFNMAP));
+
+ return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_mmap);
--
2.32.0
On Wed, Aug 11, 2021 at 08:50:52PM +0300, Pavel Skripkin wrote:
> Syzbot reported a general protection fault in udmabuf_create. The
> problem was incorrect error handling.
>
> In commit 16c243e99d33 ("udmabuf: Add support for mapping hugepages (v4)")
> the shmem_read_mapping_page() call was replaced with find_get_page_flags(),
> but find_get_page_flags() returns NULL on failure instead of PTR_ERR().
>
> The wrong error check was causing a GPF in get_page(), since the
> passed page was NULL. Fix it by changing if (IS_ERR(hpage)) to
> if (!hpage).
>
> Reported-by: syzbot+e9cd3122a37c5d6c51e8@syzkaller.appspotmail.com
> Fixes: 16c243e99d33 ("udmabuf: Add support for mapping hugepages (v4)")
> Signed-off-by: Pavel Skripkin <paskripkin@gmail.com>
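For context, a sketch of the corrected pattern (names and error value
approximated from the udmabuf code, not a verbatim quote):

	hpage = find_get_page_flags(mapping, pgoff, FGP_ACCESSED);
	if (!hpage) {		/* NULL on failure, not an ERR_PTR */
		ret = -EINVAL;
		goto err;
	}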
Pushed to drm-misc-next.
thanks,
Gerd
This patch limits the total memory that can be requested in a single
allocation from the system heap. This prevents a buggy or malicious
client from depleting system memory by requesting an extremely large
allocation, which might destabilize the system.
The limit is set to half the size of the device's total RAM, which is
the same limit the deprecated ION system heap used.
Signed-off-by: Hridya Valsaraju <hridya@google.com>
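As an illustrative aside (not part of the patch), a userspace sketch
using the standard dma-heap uapi; with this change an allocation
request larger than half of RAM fails at the ioctl:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

/* Hypothetical helper; returns a dma-buf fd, or -1 on failure. */
static int system_heap_alloc(__u64 len)
{
	struct dma_heap_allocation_data data = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
	int ret;

	if (heap_fd < 0)
		return -1;
	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	if (ret < 0)
		return -1;	/* oversized requests now fail here */
	return data.fd;
}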
---
drivers/dma-buf/heaps/system_heap.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index b7fbce66bcc0..099f5a8304b4 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -371,6 +371,12 @@ static struct dma_buf *system_heap_do_allocate(struct dma_heap *heap,
struct page *page, *tmp_page;
int i, ret = -ENOMEM;
+ if (len / PAGE_SIZE > totalram_pages() / 2) {
+ pr_err("pid %d requested too large an allocation (size %lu) from system heap\n",
+ current->pid, len);
+ return ERR_PTR(ret);
+ }
+
buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
if (!buffer)
return ERR_PTR(-ENOMEM);
--
2.32.0.432.gabb21c7263-goog
On 8/9/21 5:22 AM, Gal Pressman wrote:
> Fix a few typos in the documentation:
> - Remove an extraneous 'or'
> - 'unpins' -> 'unpin'
> - 'braket' -> 'bracket'
> - 'mappinsg' -> 'mappings'
> - 'fullfills' -> 'fulfills'
>
> Signed-off-by: Gal Pressman <galpress@amazon.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Thanks.
> ---
> include/linux/dma-buf.h | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index efdc56b9d95f..772403352767 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -54,7 +54,7 @@ struct dma_buf_ops {
> * device), and otherwise need to fail the attach operation.
> *
> * The exporter should also in general check whether the current
> - * allocation fullfills the DMA constraints of the new device. If this
> + * allocation fulfills the DMA constraints of the new device. If this
> * is not the case, and the allocation cannot be moved, it should also
> * fail the attach operation.
> *
> @@ -146,7 +146,7 @@ struct dma_buf_ops {
> *
> * Returns:
> *
> - * A &sg_table scatter list of or the backing storage of the DMA buffer,
> + * A &sg_table scatter list of the backing storage of the DMA buffer,
> * already mapped into the device address space of the &device attached
> * with the provided &dma_buf_attachment. The addresses and lengths in
> * the scatter list are PAGE_SIZE aligned.
> @@ -168,7 +168,7 @@ struct dma_buf_ops {
> *
> * This is called by dma_buf_unmap_attachment() and should unmap and
> * release the &sg_table allocated in @map_dma_buf, and it is mandatory.
> - * For static dma_buf handling this might also unpins the backing
> + * For static dma_buf handling this might also unpin the backing
> * storage if this is the last mapping of the DMA buffer.
> */
> void (*unmap_dma_buf)(struct dma_buf_attachment *,
> @@ -237,7 +237,7 @@ struct dma_buf_ops {
> * This callback is used by the dma_buf_mmap() function
> *
> * Note that the mapping needs to be incoherent, userspace is expected
> - * to braket CPU access using the DMA_BUF_IOCTL_SYNC interface.
> + * to bracket CPU access using the DMA_BUF_IOCTL_SYNC interface.
> *
> * Because dma-buf buffers have invariant size over their lifetime, the
> * dma-buf core checks whether a vma is too large and rejects such
> @@ -464,7 +464,7 @@ static inline bool dma_buf_is_dynamic(struct dma_buf *dmabuf)
>
> /**
> * dma_buf_attachment_is_dynamic - check if a DMA-buf attachment uses dynamic
> - * mappinsg
> + * mappings
> * @attach: the DMA-buf attachment to check
> *
> * Returns true if a DMA-buf importer wants to call the map/unmap functions with
>
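Since the "bracket CPU access" rule fixed up above trips people up,
here is a hedged userspace sketch of what it means (standard uapi from
<linux/dma-buf.h>; the mmap'ed pointer map is assumed to be set up
elsewhere):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Bracket CPU writes to a dma-buf mmap with DMA_BUF_IOCTL_SYNC. */
static void cpu_fill(int dmabuf_fd, void *map, size_t len)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
	};

	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* begin CPU access */
	memset(map, 0, len);				/* CPU writes */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);	/* end CPU access */
}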
--
~Randy
On Mon, Aug 02, 2021 at 06:59:57PM +0800, Desmond Cheong Zhi Xi wrote:
> In drm_is_current_master_locked, accessing drm_file.master should be
> protected by either drm_file.master_lookup_lock or
> drm_device.master_mutex. This was previously awkward to assert with
> lockdep.
>
> Following the patch ("locking/lockdep: Provide lockdep_assert{,_once}()
> helpers"), this assertion is now convenient. So we add the
> assertion and explain this lock design in the kerneldoc.
>
> Signed-off-by: Desmond Cheong Zhi Xi <desmondcheongzx@gmail.com>
> Acked-by: Boqun Feng <boqun.feng@gmail.com>
> Acked-by: Waiman Long <longman@redhat.com>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Both patches pushed to drm-misc-next, thanks.
-Daniel
> ---
> drivers/gpu/drm/drm_auth.c | 6 +++---
> include/drm/drm_file.h | 4 ++++
> 2 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
> index 9c24b8cc8e36..6f4d7ff23c80 100644
> --- a/drivers/gpu/drm/drm_auth.c
> +++ b/drivers/gpu/drm/drm_auth.c
> @@ -63,9 +63,9 @@
>
> static bool drm_is_current_master_locked(struct drm_file *fpriv)
> {
> - /* Either drm_device.master_mutex or drm_file.master_lookup_lock
> - * should be held here.
> - */
> + lockdep_assert_once(lockdep_is_held(&fpriv->master_lookup_lock) ||
> + lockdep_is_held(&fpriv->minor->dev->master_mutex));
> +
> return fpriv->is_master && drm_lease_owner(fpriv->master) == fpriv->minor->dev->master;
> }
>
> diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
> index 726cfe0ff5f5..a3acb7ac3550 100644
> --- a/include/drm/drm_file.h
> +++ b/include/drm/drm_file.h
> @@ -233,6 +233,10 @@ struct drm_file {
> * this only matches &drm_device.master if the master is the currently
> * active one.
> *
> + * To update @master, both &drm_device.master_mutex and
> + * @master_lookup_lock need to be held, therefore holding either of
> + * them is safe and enough for the read side.
> + *
> * When dereferencing this pointer, either hold struct
> * &drm_device.master_mutex for the duration of the pointer's use, or
> * use drm_file_get_master() if struct &drm_device.master_mutex is not
> --
> 2.25.1
>
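The locking rule documented above generalizes nicely; a minimal sketch
with a hypothetical structure (not DRM code): a field written only
with both locks held may be read with either one held.

#include <linux/lockdep.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>

struct obj {
	struct mutex big_lock;		/* cf. drm_device.master_mutex */
	spinlock_t lookup_lock;		/* cf. drm_file.master_lookup_lock */
	void *ptr;			/* writers hold both locks */
};

static void *obj_read_ptr(struct obj *o)
{
	/* either lock is enough on the read side */
	lockdep_assert_once(lockdep_is_held(&o->big_lock) ||
			    lockdep_is_held(&o->lookup_lock));

	return o->ptr;
}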
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch