On Fri, Apr 24, 2020 at 3:18 PM Andrew F. Davis <afd(a)ti.com> wrote:
>
> Some clients of DMA-Heaps probe earlier than subsys_initcall(), this
> can cause issues when these clients call dma_heap_add() before the
> core DMA-Heaps framework has initialized. DMA-Heaps should initialize
> during core startup to get ahead of all users.
>
> Signed-off-by: Andrew F. Davis <afd(a)ti.com>
No objection from me right off.
Acked-by: John Stultz <john.stultz(a)linaro.org>
thanks
-john
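For reference, the change being acked boils down to moving the heap core's initcall earlier. A sketch, assuming the init function is dma_heap_init() and that core_initcall() is the level chosen (the exact level isn't quoted in this thread; initcalls run in the order core, postcore, arch, subsys, fs, device, late):

	-subsys_initcall(dma_heap_init);
	+core_initcall(dma_heap_init);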
Hi all,
Peter noticed that with some dumb luck you can toast the kernel address
space with exported vmalloc symbols.
I used this as an opportunity to decruft the vmalloc.c API and make it
much more systematic. This also removes any chance for modules to create
vmalloc mappings outside the designated areas or with executable
permissions. Besides that, it removes more than 300 lines of code.
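As a concrete example of what this means for modules (a sketch based on my reading of the series' direction, not quoted from it): the pgprot argument is removed from __vmalloc(), so out-of-tree code can no longer ask for executable mappings that way.

	/* Old API: any pgprot, including PAGE_KERNEL_EXEC, was accepted. */
	buf = __vmalloc(size, GFP_KERNEL, PAGE_KERNEL_EXEC);

	/* New API: the mapping is always PAGE_KERNEL (non-executable). */
	buf = __vmalloc(size, GFP_KERNEL);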
A git tree is also available here:
git://git.infradead.org/users/hch/misc.git sanitize-vmalloc-api.2
Gitweb:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/sanitize-vm…
Changes since v1:
- implement pgprot_nx for arm64 (Mark Rutland)
- fix a patch description
- properly pass pgprot to vmap in ion
- add a new patch to fix vmap() API misuse
- fix a vmap argument in x86
- two more vmalloc cleanups
- cleanup use of the unmap_kernel_range API
- rename ioremap_pbh to ioremap_phb
Only sync the sg-list of an Ion dma-buf attachment when the attachment
is actually mapped on the device.
dma-bufs may be synced at any time. Syncs can be triggered from user
space via DMA_BUF_IOCTL_SYNC, so there are no guarantees from callers on
when syncs may be attempted, and dma_buf_end_cpu_access() and
dma_buf_begin_cpu_access() may not be paired.
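For instance, any process holding a dma-buf fd can request a sync at an arbitrary point (a minimal userspace sketch using the standard UAPI from <linux/dma-buf.h>):

	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};
	/* Nothing guarantees the buffer has been mapped on a device yet. */
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);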
Since the sg_list's dma_address isn't set up until the buffer is used on
the device and dma_map_sg() is called on it, the dma_address will be
NULL if a sync is attempted on the dma-buf before it is mapped on a
device.
Before v5.0 (commit 55897af63091 ("dma-direct: merge swiotlb_dma_ops
into the dma_direct code")) this was a problem, as the DMA API (at least
swiotlb_dma_ops on arm64) would use the potentially invalid dma_address.
How that failed depended on how the device handled physical address 0:
if 0 was a valid physical RAM address, that page would get flushed
repeatedly while the actual pages in the buffer were not synced
correctly; if 0 was an invalid physical address, it could fault and
crash.
In v5.0 this was incidentally fixed by commit 55897af63091 ("dma-direct:
merge swiotlb_dma_ops into the dma_direct code"), as this moved the
dma-api to use the page pointer in the sg_list, and (for Ion buffers at
least) this will always be valid if the sg_list exists at all.
But this issue was re-introduced in v5.3:
commit 449fa54d6815 ("dma-direct: correct the physical addr in
dma_direct_sync_sg_for_cpu/device") moved the DMA API back to the old
behaviour, picking up a dma_address that may be invalid.
dma-buf core doesn't ensure that the buffer is mapped on the device, and
thus has a valid sg_list, before calling the exporter's
begin_cpu_access.
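Put differently (a minimal sketch, not taken from the patch):
sg_dma_address() only becomes meaningful after dma_map_sg() has run on
the attachment's table.

	/* Unsafe: before mapping, the table's dma_address is still 0. */
	dma_sync_sg_for_cpu(dev, sgl, nents, dir);	/* may "sync" paddr 0 */

	/* Safe ordering: map first, which fills in the DMA addresses. */
	nents = dma_map_sg(dev, sgl, nents, dir);
	dma_sync_sg_for_cpu(dev, sgl, nents, dir);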
Signed-off-by: Ørjan Eide <orjan.eide(a)arm.com>
---
drivers/staging/android/ion/ion.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
This seems to be part of a bigger issue where dma-buf exporters assume
that their dma-buf begin_cpu_access and end_cpu_access callbacks have a
certain guaranteed behavior, which isn't ensured by dma-buf core.
This patch fixes this in ion only, but it also needs to be fixed for
other exporters, either handled like this in each exporter, or in
dma-buf core before calling into the exporters.
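A core-level guard could take roughly this shape (purely hypothetical sketch; the mapped field below is invented for illustration, mirroring what this patch tracks inside ion):

	/* Hypothetical: if dma-buf core recorded mapping state per
	 * attachment, it could skip CPU-access syncs for attachments that
	 * were never mapped, instead of every exporter doing it. */
	static bool dma_buf_any_attachment_mapped(struct dma_buf *dmabuf)
	{
		struct dma_buf_attachment *attach;

		list_for_each_entry(attach, &dmabuf->attachments, node)
			if (attach->mapped)	/* invented field */
				return true;
		return false;
	}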
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 38b51eace4f9..7b752ba0cb6d 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -173,6 +173,7 @@ struct ion_dma_buf_attachment {
 	struct device *dev;
 	struct sg_table *table;
 	struct list_head list;
+	bool mapped:1;
 };
 
 static int ion_dma_buf_attach(struct dma_buf *dmabuf,
@@ -195,6 +196,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
 	a->table = table;
 	a->dev = attachment->dev;
 	INIT_LIST_HEAD(&a->list);
+	a->mapped = false;
 
 	attachment->priv = a;
@@ -231,6 +233,8 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 			direction))
 		return ERR_PTR(-ENOMEM);
 
+	a->mapped = true;
+
 	return table;
 }
@@ -238,6 +242,10 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
+	struct ion_dma_buf_attachment *a = attachment->priv;
+
+	a->mapped = false;
+
 	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
 }
@@ -297,6 +305,8 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
 		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
 				    direction);
 	}
@@ -320,6 +330,8 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
 		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
 				       direction);
 	}
--
2.17.1
Hi Sergey,
On Fri, Apr 10, 2020 at 11:38:45AM +0900, Sergey Senozhatsky wrote:
> On (20/04/09 10:08), Minchan Kim wrote:
> > > > Even though I don't know how many usecase we have using zsmalloc as
> > > > module(I heard only once by dumb reason), it could affect existing
> > > > users. Thus, please include concrete explanation in the patch to
> > > > justify when the complain occurs.
> > >
> > > The justification is 'we can unexport functions that have no sane reason
> > > of being exported in the first place'.
> > >
> > > The Changelog pretty much says that.
> >
> > Okay, I hope there is no affected user since this patch.
> > If there are someone, they need to provide sane reason why they want
> > to have zsmalloc as module.
>
> I'm one of those who use zsmalloc as a module - mainly because I use zram
> as a compressing general purpose block device, not as a swap device.
> I create zram0, mkfs, mount, checkout and compile code, once done -
> umount, rmmod. This reduces the number of writes to SSD. Some people use
> tmpfs, but zram device(-s) can be much larger in size. That's a niche use
> case and I'm not against the patch.
It doesn't mean we can't use zsmalloc as a module any longer. It means
we can't use zsmalloc as a module together with pgtable mapping, which
was a little faster in microbenchmarks on some architectures (though I'm
usually tempted to remove it, since it has had several problems). We can
still use zsmalloc as a module via the copy path instead of pgtable
mapping. Thus, if someone really wants to bring the feature back, they
should give a solid reason why the copy path doesn't work for them. "A
little faster" wouldn't be enough to export deep mm internals to a
module.
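For the curious, the copy path amounts to assembling an object that straddles two pages into a linear per-cpu buffer rather than stitching the pages together in the page tables. A simplified sketch (not the actual zsmalloc code, which uses kmap_atomic() and per-cpu mapping areas; this assumes the object really crosses the page boundary):

	/* Copy an object spanning two pages into a bounce buffer. */
	static void *zs_obj_copy(struct page *p1, struct page *p2,
				 unsigned int off, unsigned int size, char *buf)
	{
		unsigned int first = PAGE_SIZE - off;	/* bytes in page 1 */

		memcpy(buf, page_address(p1) + off, first);
		memcpy(buf + first, page_address(p2), size - first);
		return buf;
	}

The pgtable-mapping variant avoided these copies by mapping the two pages contiguously, but that is exactly what needed the deep vmalloc internals that are no longer exported.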
Thanks.
Hi all,
Peter noticed that with some dumb luck you can toast the kernel address
space with exported vmalloc symbols.
I used this as an opportunity to decruft the vmalloc.c API and make it
much more systematic. This also removes any chance for modules to create
vmalloc mappings outside the designated areas or with executable
permissions. Besides that, it removes more than 300 lines of code.
A git tree is also available here:
git://git.infradead.org/users/hch/misc.git sanitize-vmalloc-api
Gitweb:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/sanitize-vm…
From: Randy Dunlap <rdunlap(a)infradead.org>
Fix documentation warnings in dma-buf.[hc]:
../drivers/dma-buf/dma-buf.c:678: warning: Function parameter or member 'importer_ops' not described in 'dma_buf_dynamic_attach'
../drivers/dma-buf/dma-buf.c:678: warning: Function parameter or member 'importer_priv' not described in 'dma_buf_dynamic_attach'
../include/linux/dma-buf.h:339: warning: Incorrect use of kernel-doc format: * @move_notify
Signed-off-by: Randy Dunlap <rdunlap(a)infradead.org>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: linux-media(a)vger.kernel.org
Cc: dri-devel(a)lists.freedesktop.org
Cc: linaro-mm-sig(a)lists.linaro.org
---
drivers/dma-buf/dma-buf.c | 4 ++--
include/linux/dma-buf.h | 3 +--
2 files changed, 3 insertions(+), 4 deletions(-)
--- linux-next-20200407.orig/drivers/dma-buf/dma-buf.c
+++ linux-next-20200407/drivers/dma-buf/dma-buf.c
@@ -655,8 +655,8 @@ EXPORT_SYMBOL_GPL(dma_buf_put);
  * calls attach() of dma_buf_ops to allow device-specific attach functionality
  * @dmabuf: [in] buffer to attach device to.
  * @dev: [in] device to be attached.
- * @importer_ops [in] importer operations for the attachment
- * @importer_priv [in] importer private pointer for the attachment
+ * @importer_ops: [in] importer operations for the attachment
+ * @importer_priv: [in] importer private pointer for the attachment
  *
  * Returns struct dma_buf_attachment pointer for this attachment. Attachments
  * must be cleaned up by calling dma_buf_detach().
--- linux-next-20200407.orig/include/linux/dma-buf.h
+++ linux-next-20200407/include/linux/dma-buf.h
@@ -329,13 +329,12 @@ struct dma_buf {
 /**
  * struct dma_buf_attach_ops - importer operations for an attachment
- * @move_notify: [optional] notification that the DMA-buf is moving
  *
  * Attachment operations implemented by the importer.
  */
 struct dma_buf_attach_ops {
 	/**
-	 * @move_notify
+	 * @move_notify: [optional] notification that the DMA-buf is moving
 	 *
 	 * If this callback is provided the framework can avoid pinning the
 	 * backing store while mappings exists.
On Wed, Apr 08, 2020 at 08:15:19AM -0700, Matthew Wilcox wrote:
> > > > config ZSMALLOC_PGTABLE_MAPPING
> > > > bool "Use page table mapping to access object in zsmalloc"
> > > > - depends on ZSMALLOC
> > > > + depends on ZSMALLOC=y
> > >
> > > It's a bool so this shouldn't matter... not needed.
> >
> > My mm/Kconfig has:
> >
> > config ZSMALLOC
> > tristate "Memory allocator for compressed pages"
> > depends on MMU
> >
> > which I think means it can be modular, no?
>
> Randy means that ZSMALLOC_PGTABLE_MAPPING is a bool, so I think hch's patch
> is wrong ... if ZSMALLOC is 'm' then ZSMALLOC_PGTABLE_MAPPING would become
> 'n' instead of 'y'.
In Linus' tree you can select PGTABLE_MAPPING=y with ZSMALLOC=m,
and that fits my understanding of the kbuild language. With this
patch I can't anymore.
On 4/8/20 8:15 AM, Matthew Wilcox wrote:
> On Wed, Apr 08, 2020 at 05:12:03PM +0200, Peter Zijlstra wrote:
>> On Wed, Apr 08, 2020 at 08:01:00AM -0700, Randy Dunlap wrote:
>>> Hi,
>>>
>>> On 4/8/20 4:59 AM, Christoph Hellwig wrote:
>>>> diff --git a/mm/Kconfig b/mm/Kconfig
>>>> index 36949a9425b8..614cc786b519 100644
>>>> --- a/mm/Kconfig
>>>> +++ b/mm/Kconfig
>>>> @@ -702,7 +702,7 @@ config ZSMALLOC
>>>>
>>>> config ZSMALLOC_PGTABLE_MAPPING
>>>> bool "Use page table mapping to access object in zsmalloc"
>>>> - depends on ZSMALLOC
>>>> + depends on ZSMALLOC=y
>>>
>>> It's a bool so this shouldn't matter... not needed.
>>
>> My mm/Kconfig has:
>>
>> config ZSMALLOC
>> tristate "Memory allocator for compressed pages"
>> depends on MMU
>>
>> which I think means it can be modular, no?
ack. I misread it.
> Randy means that ZSMALLOC_PGTABLE_MAPPING is a bool, so I think hch's patch
> is wrong ... if ZSMALLOC is 'm' then ZSMALLOC_PGTABLE_MAPPING would become
> 'n' instead of 'y'.
sigh, I wish that I had meant that. :)
thanks.
--
~Randy
On Wed, Apr 08, 2020 at 01:38:36PM +0100, Mark Rutland wrote:
> > +static inline pgprot_t pgprot_nx(pgprot_t prot)
> > +{
> > + return __pgprot(pgprot_val(prot) | _PAGE_NX);
> > +}
> > +#define pgprot_nx pgprot_nx
> > +
> > #ifdef CONFIG_X86_PAE
>
> I reckon for arm64 we can do similar in our <asm/pgtable.h>:
>
> #define pgprot_nx(prot) \
> __pgprot_modify(prot, 0, PTE_PXN)
>
> ... matching the style of our existing pgprot_*() modifier helpers.
I've added that for the next version with attribution to you.
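For completeness, per-arch helpers like these are usually paired with a no-op generic fallback so common code can call pgprot_nx() unconditionally; a sketch of the usual kernel pattern (its exact placement in the series is my assumption):

	/* Architectures that don't define pgprot_nx() get a no-op fallback. */
	#ifndef pgprot_nx
	#define pgprot_nx(prot)	(prot)
	#endif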