This patchset introduces a new DMA heap, "chunk-heap", that makes it
easy to perform bulk allocation of high-order pages.
It was created to help optimize 4K/8K HDR video playback with secure
DRM HW that protects contents in memory. The HW needs physically
contiguous memory chunks (e.g., 64K) adding up to several hundred MB
of memory.
To make such big high-order bulk allocations work, chunk-heap uses a
CMA area. To avoid long CMA allocation stalls on blocking pages (e.g.,
page writeback and/or page locking), it uses the failfast mode of the
CMA API (i.e., __GFP_NORETRY), so it continues to look for easily
migratable pages in other pageblocks without stalling. As a last
resort, it allows blocking only if it could not find available memory
otherwise.
The first two patches introduce the failfast mode as __GFP_NORETRY in
alloc_contig_range and allow its use from the CMA API.
The third patch introduces device tree syntax for binding a specific
CMA area to chunk-heap.
Finally, the last patch implements chunk-heap as a dma-buf heap.
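The intended allocation pattern looks roughly like this (a minimal
sketch assuming the gfp-based cma_alloc() signature the first patch
introduces; cma, nr_pages and align are placeholders for the heap's
CMA area and the request):

	struct page *page;

	/*
	 * Failfast pass: skip pages under writeback or lock, keep
	 * scanning other pageblocks for easily migratable pages.
	 */
	page = cma_alloc(cma, nr_pages, align,
			 GFP_KERNEL | __GFP_NORETRY);

	/* Last resort: retry once, this time allowing blocking. */
	if (!page)
		page = cma_alloc(cma, nr_pages, align, GFP_KERNEL);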
* since v3 - https://lore.kernel.org/linux-mm/20210113012143.1201105-1-minchan@kernel.or…
* use prefix for chunk-name - John
* fix yamllint error - Rob
* add reviewed-by - Suren
* since v2 - https://lore.kernel.org/linux-mm/20201201175144.3996569-1-minchan@kernel.or…
* introduce gfp_mask with __GFP_NORETRY on cma_alloc - Michal
* do not export CMA APIs - Christoph
* use compatible string for DT instead of dma-heap specific property - Hridya
* Since v1 - https://lore.kernel.org/linux-mm/20201117181935.3613581-1-minchan@kernel.or…
* introduce alloc_contig_mode - David
* use default CMA instead of device tree - John
Hyesoo Yu (2):
dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
dma-buf: heaps: add chunk heap to dmabuf heaps
Minchan Kim (2):
mm: cma: introduce gfp flag in cma_alloc instead of no_warn
mm: failfast mode with __GFP_NORETRY in alloc_contig_range
.../reserved-memory/dma_heap_chunk.yaml | 56 ++
drivers/dma-buf/heaps/Kconfig | 8 +
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/chunk_heap.c | 492 ++++++++++++++++++
drivers/dma-buf/heaps/cma_heap.c | 2 +-
drivers/s390/char/vmcp.c | 2 +-
include/linux/cma.h | 2 +-
kernel/dma/contiguous.c | 3 +-
mm/cma.c | 12 +-
mm/cma_debug.c | 2 +-
mm/hugetlb.c | 6 +-
mm/page_alloc.c | 8 +-
mm/secretmem.c | 3 +-
13 files changed, 581 insertions(+), 16 deletions(-)
create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
create mode 100644 drivers/dma-buf/heaps/chunk_heap.c
--
2.30.0.296.g2bfb1c46d8-goog
tldr; DMA buffers aren't normal memory; expecting that you can use
them like that (like calling get_user_pages works, or that they're
accounted like any other normal memory) cannot be guaranteed.
Since some userspace only runs on integrated devices, where all
buffers are actually resident system memory, there's a huge temptation
to assume that a struct page is always present and usable like for any
other pagecache-backed mmap. This has the potential to result in a
uapi nightmare.
To stop this gap, require that DMA buffer mmaps are VM_SPECIAL, which
blocks get_user_pages and all the other struct-page-based
infrastructure for everyone. In spirit this is the uapi counterpart to
the kernel-internal CONFIG_DMABUF_DEBUG.
Motivated by a recent patch which wanted to switch the system dma-buf
heap to vm_insert_page instead of vm_insert_pfn.
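For exporters the practical rule is that ->mmap must leave at least
one VM_SPECIAL bit (VM_IO, VM_PFNMAP, VM_MIXEDMAP or VM_DONTEXPAND)
set on the vma. A minimal hypothetical sketch that keeps the new
WARN_ON quiet (example_heap_pfn() is a made-up helper):

	static int example_heap_mmap(struct dma_buf *dmabuf,
				     struct vm_area_struct *vma)
	{
		/*
		 * remap_pfn_range() itself marks the vma VM_IO |
		 * VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP, all of
		 * which are covered by VM_SPECIAL.
		 */
		return remap_pfn_range(vma, vma->vm_start,
				       example_heap_pfn(dmabuf),
				       vma->vm_end - vma->vm_start,
				       vma->vm_page_prot);
	}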
References: https://lore.kernel.org/lkml/CAKMK7uHi+mG0z0HUmNt13QCCvutuRVjpcR0NjRL12k-Wb…
Cc: Jason Gunthorpe <jgg(a)ziepe.ca>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: John Stultz <john.stultz(a)linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/dma-buf.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f264b70c383e..d3081fc07056 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -127,6 +127,7 @@ static struct file_system_type dma_buf_fs_type = {
static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
{
struct dma_buf *dmabuf;
+ int ret;
if (!is_dma_buf_file(file))
return -EINVAL;
@@ -142,7 +143,11 @@ static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
dmabuf->size >> PAGE_SHIFT)
return -EINVAL;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_SPECIAL));
+
+ return ret;
}
static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence)
@@ -1244,6 +1249,8 @@ EXPORT_SYMBOL_GPL(dma_buf_end_cpu_access);
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
unsigned long pgoff)
{
+ int ret;
+
if (WARN_ON(!dmabuf || !vma))
return -EINVAL;
@@ -1264,7 +1271,11 @@ int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
vma_set_file(vma, dmabuf->file);
vma->vm_pgoff = pgoff;
- return dmabuf->ops->mmap(dmabuf, vma);
+ ret = dmabuf->ops->mmap(dmabuf, vma);
+
+ WARN_ON(!(vma->vm_flags & VM_SPECIAL));
+
+ return ret;
}
EXPORT_SYMBOL_GPL(dma_buf_mmap);
--
2.30.0
Replace BUG_ON(vma->vm_flags & VM_PFNMAP) in vm_insert_page with
WARN_ON_ONCE and return an error. This ensures users of vm_insert_page
that set VM_PFNMAP are notified of the wrong flag usage and get an
indication of an error without panicking the kernel.
This will help identify drivers that need to clear VM_PFNMAP before
using the dmabuf system heap, which is moving to use vm_insert_page.
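An affected driver would drop the flag in its mmap path before
inserting pages, along these lines (a sketch mirroring the system heap
change later in this series; addr and page come from the driver's own
mapping loop):

	/* Pages here are backed by struct page, so VM_PFNMAP must go. */
	vma->vm_flags &= ~VM_PFNMAP;
	ret = vm_insert_page(vma, addr, page);
	if (ret)
		return ret;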
Suggested-by: Christoph Hellwig <hch(a)infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb(a)google.com>
---
mm/memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index feff48e1465a..e503c9801cd9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1827,7 +1827,8 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
return -EINVAL;
if (!(vma->vm_flags & VM_MIXEDMAP)) {
BUG_ON(mmap_read_trylock(vma->vm_mm));
- BUG_ON(vma->vm_flags & VM_PFNMAP);
+ if (WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP))
+ return -EINVAL;
vma->vm_flags |= VM_MIXEDMAP;
}
return insert_page(vma, addr, page, vma->vm_page_prot);
--
2.30.0.365.g02bc693789-goog
Android captures per-process system memory state when certain low
memory events (e.g. a foreground app kill) occur, to identify
potential memory hoggers. In order to measure how much memory a
process actually consumes, it is necessary to include the DMA buffer
sizes for that process in the memory accounting. Since the handles to
DMA buffers are raw FDs, it is important to be able to identify which
processes have FD references to a DMA buffer.
Currently, DMA buffer FDs can be accounted using /proc/<pid>/fd/* and
/proc/<pid>/fdinfo -- both are only readable by the process owner,
as follows:
1. Do a readlink on each FD.
2. If the target path begins with "/dmabuf", then the FD is a dmabuf FD.
3. stat the file to get the dmabuf inode number.
4. Read /proc/<pid>/fdinfo/<fd> to get the DMA buffer size.
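In C, the walk over one FD looks roughly like this (an illustrative
userspace sketch of the four steps; account_fd() is a hypothetical
helper and error handling is trimmed):

	#include <limits.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/stat.h>
	#include <sys/types.h>
	#include <unistd.h>

	static void account_fd(pid_t pid, int fd)
	{
		char path[64], target[PATH_MAX];
		struct stat st;
		ssize_t n;

		snprintf(path, sizeof(path), "/proc/%d/fd/%d", pid, fd);
		n = readlink(path, target, sizeof(target) - 1);	/* step 1 */
		if (n <= 0)
			return;
		target[n] = '\0';
		if (strncmp(target, "/dmabuf", 7))		/* step 2 */
			return;
		if (!stat(path, &st))				/* step 3 */
			printf("dmabuf inode: %lu\n",
			       (unsigned long)st.st_ino);
		snprintf(path, sizeof(path), "/proc/%d/fdinfo/%d", pid, fd);
		/* step 4: parse the "size:" field from this file */
	}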
Accessing other processes’ fdinfo requires root privileges. This limits
the use of the interface to debugging environments and is not suitable
for production builds. Granting root privileges even to a system process
increases the attack surface and is highly undesirable.
Since fdinfo doesn't permit reading process memory or manipulating
process state, allow accessing it under PTRACE_MODE_READ_FSCREDS.
Suggested-by: Jann Horn <jannh(a)google.com>
Signed-off-by: Kalesh Singh <kaleshsingh(a)google.com>
---
Changes in v2:
- Update patch description
fs/proc/base.c | 4 ++--
fs/proc/fd.c | 15 ++++++++++++++-
2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/fs/proc/base.c b/fs/proc/base.c
index b3422cda2a91..a37f9de7103f 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -3160,7 +3160,7 @@ static const struct pid_entry tgid_base_stuff[] = {
DIR("task", S_IRUGO|S_IXUGO, proc_task_inode_operations, proc_task_operations),
DIR("fd", S_IRUSR|S_IXUSR, proc_fd_inode_operations, proc_fd_operations),
DIR("map_files", S_IRUSR|S_IXUSR, proc_map_files_inode_operations, proc_map_files_operations),
- DIR("fdinfo", S_IRUSR|S_IXUSR, proc_fdinfo_inode_operations, proc_fdinfo_operations),
+ DIR("fdinfo", S_IRUGO|S_IXUGO, proc_fdinfo_inode_operations, proc_fdinfo_operations),
DIR("ns", S_IRUSR|S_IXUGO, proc_ns_dir_inode_operations, proc_ns_dir_operations),
#ifdef CONFIG_NET
DIR("net", S_IRUGO|S_IXUGO, proc_net_inode_operations, proc_net_operations),
@@ -3504,7 +3504,7 @@ static const struct inode_operations proc_tid_comm_inode_operations = {
*/
static const struct pid_entry tid_base_stuff[] = {
DIR("fd", S_IRUSR|S_IXUSR, proc_fd_inode_operations, proc_fd_operations),
- DIR("fdinfo", S_IRUSR|S_IXUSR, proc_fdinfo_inode_operations, proc_fdinfo_operations),
+ DIR("fdinfo", S_IRUGO|S_IXUGO, proc_fdinfo_inode_operations, proc_fdinfo_operations),
DIR("ns", S_IRUSR|S_IXUGO, proc_ns_dir_inode_operations, proc_ns_dir_operations),
#ifdef CONFIG_NET
DIR("net", S_IRUGO|S_IXUGO, proc_net_inode_operations, proc_net_operations),
diff --git a/fs/proc/fd.c b/fs/proc/fd.c
index cb51763ed554..585e213301f9 100644
--- a/fs/proc/fd.c
+++ b/fs/proc/fd.c
@@ -6,6 +6,7 @@
#include <linux/fdtable.h>
#include <linux/namei.h>
#include <linux/pid.h>
+#include <linux/ptrace.h>
#include <linux/security.h>
#include <linux/file.h>
#include <linux/seq_file.h>
@@ -72,6 +73,18 @@ static int seq_show(struct seq_file *m, void *v)
static int seq_fdinfo_open(struct inode *inode, struct file *file)
{
+ bool allowed = false;
+ struct task_struct *task = get_proc_task(inode);
+
+ if (!task)
+ return -ESRCH;
+
+ allowed = ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS);
+ put_task_struct(task);
+
+ if (!allowed)
+ return -EACCES;
+
return single_open(file, seq_show, inode);
}
@@ -307,7 +320,7 @@ static struct dentry *proc_fdinfo_instantiate(struct dentry *dentry,
struct proc_inode *ei;
struct inode *inode;
- inode = proc_pid_make_inode(dentry->d_sb, task, S_IFREG | S_IRUSR);
+ inode = proc_pid_make_inode(dentry->d_sb, task, S_IFREG | S_IRUGO);
if (!inode)
return ERR_PTR(-ENOENT);
--
2.30.0.365.g02bc693789-goog
Android captures per-process system memory state when certain low
memory events (e.g. a foreground app kill) occur, to identify
potential memory hoggers. In order to measure how much memory a
process actually consumes, it is necessary to include the DMA buffer
sizes for that process in the memory accounting. Since the handles to
DMA buffers are raw FDs, it is important to be able to identify which
processes have FD references to a DMA buffer.
Currently, DMA buffer FDs can be accounted using /proc/<pid>/fd/* and
/proc/<pid>/fdinfo -- both are only readable by the process owner,
as follows:
1. Do a readlink on each FD.
2. If the target path begins with "/dmabuf", then the FD is a dmabuf FD.
3. stat the file to get the dmabuf inode number.
4. Read /proc/<pid>/fdinfo/<fd> to get the DMA buffer size.
Accessing other processes’ fdinfo requires root privileges. This limits
the use of the interface to debugging environments and is not suitable
for production builds. Granting root privileges even to a system process
increases the attack surface and is highly undesirable.
This series proposes relaxing the requirement for reading fdinfo to
PTRACE_MODE_READ.
Kalesh Singh (2):
procfs: Allow reading fdinfo with PTRACE_MODE_READ
dmabuf: Add dmabuf inode no to fdinfo
drivers/dma-buf/dma-buf.c | 1 +
fs/proc/base.c | 4 ++--
fs/proc/fd.c | 15 ++++++++++++++-
3 files changed, 17 insertions(+), 3 deletions(-)
--
2.30.0.365.g02bc693789-goog
On Fri, Jan 22, 2021 at 03:06:44PM +0100, Thomas Zimmermann wrote:
> Hi
>
> Am 22.01.21 um 14:36 schrieb Daniel Vetter:
> > Requested by Thomas. I think it justifies a new level, since I tried
> > to make some forward progress on this last summer, and gave up (for
> > now). This is very tricky.
>
> Adding it to the TODO list is a first step. :)
>
> Acked-by: Thomas Zimmermann <tzimmermann(a)suse.de>
Applied.
-Daniel
>
> >
> > Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
> > Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
> > Cc: Maxime Ripard <mripard(a)kernel.org>
> > Cc: Thomas Zimmermann <tzimmermann(a)suse.de>
> > Cc: David Airlie <airlied(a)linux.ie>
> > Cc: Daniel Vetter <daniel(a)ffwll.ch>
> > Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
> > Cc: "Christian König" <christian.koenig(a)amd.com>
> > Cc: linux-media(a)vger.kernel.org
> > Cc: linaro-mm-sig(a)lists.linaro.org
> > ---
> > Documentation/gpu/todo.rst | 19 +++++++++++++++++++
> > 1 file changed, 19 insertions(+)
> >
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index dea9082c0e5f..f872d3d33218 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -23,6 +23,9 @@ Advanced: Tricky tasks that need fairly good understanding of the DRM subsystem
> > and graphics topics. Generally need the relevant hardware for development and
> > testing.
> > +Expert: Only attempt these if you've successfully completed some tricky
> > +refactorings already and are an expert in the specific area
> > +
> > Subsystem-wide refactorings
> > ===========================
> > @@ -168,6 +171,22 @@ Contact: Daniel Vetter, respective driver maintainers
> > Level: Advanced
> > +Move Buffer Object Locking to dma_resv_lock()
> > +---------------------------------------------
> > +
> > +Many drivers have their own per-object locking scheme, usually using
> > +mutex_lock(). This causes all kinds of trouble for buffer sharing, since
> > +depending which driver is the exporter and importer, the locking hierarchy is
> > +reversed.
> > +
> > +To solve this we need one standard per-object locking mechanism, which is
> > +dma_resv_lock(). This lock needs to be called as the outermost lock, with all
> > +other driver specific per-object locks removed. The problem is that rolling out
> > +the actual change to the locking contract is a flag day, due to struct dma_buf
> > +buffer sharing.
> > +
> > +Level: Expert
> > +
> > Convert logging to drm_* functions with drm_device paramater
> > ------------------------------------------------------------
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Currently the system heap maps its buffers with the VM_PFNMAP flag
using remap_pfn_range. This results in such buffers not being
accounted for in PSS calculations, because the vm treats this memory
as having no page structs. Without page structs there are no counters
representing how many processes are mapping a page, and therefore PSS
calculation is impossible.
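(As a worked example: a 4K page shared by two processes should
contribute 2K to each process's PSS; with VM_PFNMAP there is no
mapcount to divide by, so such a buffer contributes nothing at all.)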
Historically, the ION driver used to map its buffers as VM_PFNMAP
areas due to memory carveouts that did not have page structs [1]. That
is not the case anymore, and it seems there was a desire to move away
from remap_pfn_range [2].
The dmabuf system heap design inherits this ION behavior and maps its
pages using remap_pfn_range even though the allocated pages are backed
by page structs.
Clear the VM_PFNMAP flag when mapping memory allocated by the system
heap and replace remap_pfn_range with vm_insert_page, following
Laura's suggestion in [1]. This allows correct PSS calculation for
dmabufs.
[1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-m…
[2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-Octo…
(sorry, could not find lore links for these discussions)
Suggested-by: Laura Abbott <labbott(a)kernel.org>
Signed-off-by: Suren Baghdasaryan <surenb(a)google.com>
---
drivers/dma-buf/heaps/system_heap.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 17e0e9a68baf..0e92e42b2251 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -200,11 +200,13 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
struct sg_page_iter piter;
int ret;
+ /* All pages are backed by a "struct page" */
+ vma->vm_flags &= ~VM_PFNMAP;
+
for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
struct page *page = sg_page_iter_page(&piter);
- ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
- vma->vm_page_prot);
+ ret = vm_insert_page(vma, addr, page);
if (ret)
return ret;
addr += PAGE_SIZE;
--
2.30.0.280.ga3ce27912f-goog