This series contains some stability changes to improve ION robustness, and a performance-related change to make it easier for clients to avoid unnecessary cache maintenance, such as when buffers are clean and haven't had any CPU access.
Liam Mark (4):
  staging: android: ion: Support cpu access during dma_buf_detach
  staging: android: ion: Restrict cache maintenance to dma mapped memory
  dma-buf: add support for mapping with dma mapping attributes
  staging: android: ion: Support for mapping with dma mapping attributes
 drivers/staging/android/ion/ion.c | 33 +++++++++++++++++++++++++--------
 include/linux/dma-buf.h           |  3 +++
 2 files changed, 28 insertions(+), 8 deletions(-)
Often userspace doesn't know when the kernel will be calling dma_buf_detach on the buffer. If userspace starts its CPU access at the same time as the sg list is being freed, it could end up accessing the sg list after it has been freed.
Thread A                                Thread B
- DMA_BUF_IOCTL_SYNC ioctl
 - ion_dma_buf_begin_cpu_access
  - list_for_each_entry
                                        - ion_dma_buf_detatch
                                         - free_duped_table
  - dma_sync_sg_for_cpu
Fix this by getting the ion_buffer lock before freeing the sg table memory.
Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
Signed-off-by: Liam Mark <lmark@codeaurora.org>
---
 drivers/staging/android/ion/ion.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index a0802de8c3a1..6f5afab7c1a1 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -248,10 +248,10 @@ static void ion_dma_buf_detatch(struct dma_buf *dmabuf,
 	struct ion_dma_buf_attachment *a = attachment->priv;
 	struct ion_buffer *buffer = dmabuf->priv;
 
-	free_duped_table(a->table);
 	mutex_lock(&buffer->lock);
 	list_del(&a->list);
 	mutex_unlock(&buffer->lock);
+	free_duped_table(a->table);
 
 	kfree(a);
 }
On 1/18/19 10:37 AM, Liam Mark wrote:
> Often userspace doesn't know when the kernel will be calling
> dma_buf_detach on the buffer. If userspace starts its CPU access at the
> same time as the sg list is being freed, it could end up accessing the
> sg list after it has been freed.
>
> Thread A                                Thread B
> - DMA_BUF_IOCTL_SYNC ioctl
>  - ion_dma_buf_begin_cpu_access
>   - list_for_each_entry
>                                         - ion_dma_buf_detatch
>                                          - free_duped_table
>   - dma_sync_sg_for_cpu
>
> Fix this by getting the ion_buffer lock before freeing the sg table
> memory.
>
> Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
> Signed-off-by: Liam Mark <lmark@codeaurora.org>
> ---
>  drivers/staging/android/ion/ion.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
> index a0802de8c3a1..6f5afab7c1a1 100644
> --- a/drivers/staging/android/ion/ion.c
> +++ b/drivers/staging/android/ion/ion.c
> @@ -248,10 +248,10 @@ static void ion_dma_buf_detatch(struct dma_buf *dmabuf,
>  	struct ion_dma_buf_attachment *a = attachment->priv;
>  	struct ion_buffer *buffer = dmabuf->priv;
>  
> -	free_duped_table(a->table);
>  	mutex_lock(&buffer->lock);
>  	list_del(&a->list);
>  	mutex_unlock(&buffer->lock);
> +	free_duped_table(a->table);
>  
>  	kfree(a);
>  }

Acked-by: Laura Abbott <labbott@redhat.com>
The ION begin_cpu_access and end_cpu_access functions use the dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache maintenance.
Currently it is possible to apply cache maintenance, via the begin_cpu_access and end_cpu_access APIs, to ION buffers which are not dma mapped.
The dma sync sg APIs should not be called on sg lists which have not been dma mapped, as this can result in cache maintenance being applied to the wrong address. If an sg list has not been dma mapped then its dma_address field has not been populated, and some dma ops, such as swiotlb_dma_ops, use the dma_address field to calculate the address on which to apply cache maintenance.
Also, I don't think we want CMOs to be applied to a buffer which is not dma mapped, as the memory should already be coherent for access from the CPU. Any CMOs required for device access are taken care of in the dma_buf_map_attachment and dma_buf_unmap_attachment calls, so it really only makes sense for begin_cpu_access and end_cpu_access to apply CMOs if the buffer is dma mapped.
Fix the ION begin_cpu_access and end_cpu_access functions to only apply cache maintenance to buffers which are dma mapped.
Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
Signed-off-by: Liam Mark <lmark@codeaurora.org>
---
 drivers/staging/android/ion/ion.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 6f5afab7c1a1..1fe633a7fdba 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
 	struct device *dev;
 	struct sg_table *table;
 	struct list_head list;
+	bool dma_mapped;
 };
 
 static int ion_dma_buf_attach(struct dma_buf *dmabuf,
@@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
 
 	a->table = table;
 	a->dev = attachment->dev;
+	a->dma_mapped = false;
 	INIT_LIST_HEAD(&a->list);
 
 	attachment->priv = a;
@@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 {
 	struct ion_dma_buf_attachment *a = attachment->priv;
 	struct sg_table *table;
+	struct ion_buffer *buffer = attachment->dmabuf->priv;
 
 	table = a->table;
 
+	mutex_lock(&buffer->lock);
 	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
-			direction))
+			direction)) {
+		mutex_unlock(&buffer->lock);
 		return ERR_PTR(-ENOMEM);
+	}
+	a->dma_mapped = true;
+	mutex_unlock(&buffer->lock);
 
 	return table;
 }
@@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
+	struct ion_dma_buf_attachment *a = attachment->priv;
+	struct ion_buffer *buffer = attachment->dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
 	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+	a->dma_mapped = false;
+	mutex_unlock(&buffer->lock);
 }
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
-				    direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
+					    a->table->nents, direction);
 	}
 
 unlock:
@@ -369,8 +384,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
-				       direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_device(a->dev, a->table->sgl,
+					       a->table->nents, direction);
 	}
 	mutex_unlock(&buffer->lock);
Add support for configuring dma mapping attributes when mapping and unmapping memory through dma_buf_map_attachment and dma_buf_unmap_attachment.
Signed-off-by: Liam Mark <lmark@codeaurora.org>
---
 include/linux/dma-buf.h | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 58725f890b5b..59bf33e09e2d 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -308,6 +308,8 @@ struct dma_buf {
  * @dev: device attached to the buffer.
  * @node: list of dma_buf_attachment.
  * @priv: exporter specific attachment data.
+ * @dma_map_attrs: DMA mapping attributes to be used in
+ *		   dma_buf_map_attachment() and dma_buf_unmap_attachment().
  *
  * This structure holds the attachment information between the dma_buf buffer
  * and its user device(s). The list contains one attachment struct per device
@@ -323,6 +325,7 @@ struct dma_buf_attachment {
 	struct device *dev;
 	struct list_head node;
 	void *priv;
+	unsigned long dma_map_attrs;
 };
 
 /**
On 1/18/19 10:37 AM, Liam Mark wrote:
> Add support for configuring dma mapping attributes when mapping
> and unmapping memory through dma_buf_map_attachment and
> dma_buf_unmap_attachment.
>
> Signed-off-by: Liam Mark <lmark@codeaurora.org>
> ---
>  include/linux/dma-buf.h | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index 58725f890b5b..59bf33e09e2d 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -308,6 +308,8 @@ struct dma_buf {
>   * @dev: device attached to the buffer.
>   * @node: list of dma_buf_attachment.
>   * @priv: exporter specific attachment data.
> + * @dma_map_attrs: DMA mapping attributes to be used in
> + *		   dma_buf_map_attachment() and dma_buf_unmap_attachment().
>   *
>   * This structure holds the attachment information between the dma_buf buffer
>   * and its user device(s). The list contains one attachment struct per device
> @@ -323,6 +325,7 @@ struct dma_buf_attachment {
>  	struct device *dev;
>  	struct list_head node;
>  	void *priv;
> +	unsigned long dma_map_attrs;
>  };
>  
>  /**
Did you miss part of this patch? This only adds it to the structure but doesn't add it to any API. The same comment applies to the follow-up patch; I don't quite see how it's being used.
Thanks, Laura
On Fri, 18 Jan 2019, Laura Abbott wrote:
> On 1/18/19 10:37 AM, Liam Mark wrote:
> > Add support for configuring dma mapping attributes when mapping
> > and unmapping memory through dma_buf_map_attachment and
> > dma_buf_unmap_attachment.
> >
> > Signed-off-by: Liam Mark <lmark@codeaurora.org>
> > ---
> >  include/linux/dma-buf.h | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > [...]
>
> Did you miss part of this patch? This only adds it to the structure but
> doesn't add it to any API. The same comment applies to the follow-up
> patch; I don't quite see how it's being used.
Were you asking for a cleaner DMA-buf API to set this field or were you asking for a change to an upstream client to make use of this field?
I have clients set the dma_map_attrs field directly on their dma_buf_attachment struct before calling dma_buf_map_attachment (if they need this functionality). Of course this is all being used in Android for out of tree drivers, but I assume it is just as useful to everyone else who has cached ION buffers which aren't always accessed by the CPU.
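Roughly, a client that knows its buffer is clean would do something like the sketch below before mapping. This is only an illustration, not part of the patches: the helper name and error handling are made up, and the only new piece this series adds is the dma_map_attrs field (DMA_ATTR_SKIP_CPU_SYNC is the existing DMA API attribute).

/*
 * Illustrative sketch only: assumes a device "dev" and an imported
 * dma-buf "dmabuf" owned by a hypothetical client driver, and that the
 * buffer is clean and will not see CPU access while mapped.
 */
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static struct sg_table *client_map_clean_buffer(struct device *dev,
						struct dma_buf *dmabuf)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	/* Ask the exporter (ION here) to skip cache maintenance. */
	attach->dma_map_attrs = DMA_ATTR_SKIP_CPU_SYNC;

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt))
		dma_buf_detach(dmabuf, attach);

	return sgt;
}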
My understanding is that AOSP Android on Hikey 960 is also currently suffering from too many CMOs due to dma_buf_map_attachment always applying CMOs, so this support should help them avoid that.
> Thanks,
> Laura
On 1/18/19 1:32 PM, Liam Mark wrote:
> On Fri, 18 Jan 2019, Laura Abbott wrote:
>
> > On 1/18/19 10:37 AM, Liam Mark wrote:
> > > Add support for configuring dma mapping attributes when mapping
> > > and unmapping memory through dma_buf_map_attachment and
> > > dma_buf_unmap_attachment.
> > >
> > > Signed-off-by: Liam Mark <lmark@codeaurora.org>
> > >
> > > [...]
> >
> > Did you miss part of this patch? This only adds it to the structure but
> > doesn't add it to any API. The same comment applies to the follow-up
> > patch; I don't quite see how it's being used.
>
> Were you asking for a cleaner DMA-buf API to set this field or were you
> asking for a change to an upstream client to make use of this field?
>
> I have clients set the dma_map_attrs field directly on their
> dma_buf_attachment struct before calling dma_buf_map_attachment (if they
> need this functionality). Of course this is all being used in Android for
> out of tree drivers, but I assume it is just as useful to everyone else
> who has cached ION buffers which aren't always accessed by the CPU.
>
> My understanding is that AOSP Android on Hikey 960 is also currently
> suffering from too many CMOs due to dma_buf_map_attachment always
> applying CMOs, so this support should help them avoid that.
Ahhhh I see how you intend this to be used now! I was missing that clients would do attachment->dma_map_attrs = blah and that was how it would be stored as opposed to passing it in at the top level for dma_buf_map. I'll give this some more thought but I think it could work if Sumit is okay with the approach.
Thanks, Laura
On Fri, Jan 18, 2019 at 10:37:46AM -0800, Liam Mark wrote:
> Add support for configuring dma mapping attributes when mapping
> and unmapping memory through dma_buf_map_attachment and
> dma_buf_unmap_attachment.
>
> Signed-off-by: Liam Mark <lmark@codeaurora.org>
And who is going to decide which ones to pass? And who documents which ones are safe?
I'd much rather have explicit, well documented dma-buf flags that might get translated to the DMA API flags, which are not error checked, not very well documented and way too easy to get wrong.
On 1/19/19 2:25 AM, Christoph Hellwig wrote:
> On Fri, Jan 18, 2019 at 10:37:46AM -0800, Liam Mark wrote:
> > Add support for configuring dma mapping attributes when mapping
> > and unmapping memory through dma_buf_map_attachment and
> > dma_buf_unmap_attachment.
> >
> > Signed-off-by: Liam Mark <lmark@codeaurora.org>
>
> And who is going to decide which ones to pass? And who documents which
> ones are safe?
>
> I'd much rather have explicit, well documented dma-buf flags that might
> get translated to the DMA API flags, which are not error checked, not
> very well documented and way too easy to get wrong.
I'm not sure having flags in dma-buf really solves anything given drivers can use the attributes directly with dma_map anyway, which is what we're looking to do. The intention is for the driver creating the dma_buf attachment to have the knowledge of which flags to use.
Thanks, Laura
On Sat, Jan 19, 2019 at 08:50:41AM -0800, Laura Abbott wrote:
> > And who is going to decide which ones to pass? And who documents which
> > ones are safe?
> >
> > I'd much rather have explicit, well documented dma-buf flags that might
> > get translated to the DMA API flags, which are not error checked, not
> > very well documented and way too easy to get wrong.
>
> I'm not sure having flags in dma-buf really solves anything given
> drivers can use the attributes directly with dma_map anyway, which is
> what we're looking to do. The intention is for the driver creating the
> dma_buf attachment to have the knowledge of which flags to use.
Well, there are very few flags that you can simply use for all calls of dma_map*. And given how badly these flags are defined I just don't want people to add more places where they indirectly use these flags, as it will be more than enough work to clean up the current mess.
What flag(s) do you want to pass this way, btw? Maybe that is where the problem is.
On Mon, 21 Jan 2019, Christoph Hellwig wrote:
> On Sat, Jan 19, 2019 at 08:50:41AM -0800, Laura Abbott wrote:
> > > And who is going to decide which ones to pass? And who documents which
> > > ones are safe?
> > >
> > > I'd much rather have explicit, well documented dma-buf flags that might
> > > get translated to the DMA API flags, which are not error checked, not
> > > very well documented and way too easy to get wrong.
> >
> > I'm not sure having flags in dma-buf really solves anything given
> > drivers can use the attributes directly with dma_map anyway, which is
> > what we're looking to do. The intention is for the driver creating the
> > dma_buf attachment to have the knowledge of which flags to use.
>
> Well, there are very few flags that you can simply use for all calls of
> dma_map*. And given how badly these flags are defined I just don't want
> people to add more places where they indirectly use these flags, as it
> will be more than enough work to clean up the current mess.
>
> What flag(s) do you want to pass this way, btw? Maybe that is where the
> problem is.
The main use case is for allowing clients to pass in DMA_ATTR_SKIP_CPU_SYNC in order to skip the default cache maintenance which happens in dma_buf_map_attachment and dma_buf_unmap_attachment. In ION the buffers aren't usually accessed from the CPU so this allows clients to often avoid doing unnecessary cache maintenance.
On Mon, Jan 21, 2019 at 11:44:10AM -0800, Liam Mark wrote:
> The main use case is for allowing clients to pass in DMA_ATTR_SKIP_CPU_SYNC
> in order to skip the default cache maintenance which happens in
> dma_buf_map_attachment and dma_buf_unmap_attachment. In ION the buffers
> aren't usually accessed from the CPU so this allows clients to often
> avoid doing unnecessary cache maintenance.
This can't work. The cpu can still easily speculate into this area. Moreover in general these operations should be cheap if the addresses aren't cached.
On Mon, 21 Jan 2019, Christoph Hellwig wrote:
> On Mon, Jan 21, 2019 at 11:44:10AM -0800, Liam Mark wrote:
> > The main use case is for allowing clients to pass in DMA_ATTR_SKIP_CPU_SYNC
> > in order to skip the default cache maintenance which happens in
> > dma_buf_map_attachment and dma_buf_unmap_attachment. In ION the buffers
> > aren't usually accessed from the CPU so this allows clients to often
> > avoid doing unnecessary cache maintenance.
>
> This can't work. The cpu can still easily speculate into this area.
Can you provide more detail on your concern here? The use case I am thinking about is a cached buffer which is accessed by a non-IO-coherent device (quite a common use case for ION).
Guessing at your concern: the speculative access can be an issue if you are going to access the buffer from the CPU after the device has written to it. However, if you know you aren't going to do any CPU access before the buffer is again returned to the device, then I don't think the speculative access is a concern.
> Moreover in general these operations should be cheap if the addresses
> aren't cached.
I am thinking of use cases with cached buffers here, so CMO isn't cheap.
Add support for configuring dma mapping attributes when mapping and unmapping memory through dma_buf_map_attachment and dma_buf_unmap_attachment.
For example this will allow ION clients to skip cache maintenance, by using DMA_ATTR_SKIP_CPU_SYNC, for buffers which are clean and haven't been accessed by the CPU.
Signed-off-by: Liam Mark <lmark@codeaurora.org>
---
 drivers/staging/android/ion/ion.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 1fe633a7fdba..0aae845b20ba 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -268,8 +268,8 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 	table = a->table;
 
 	mutex_lock(&buffer->lock);
-	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
-			direction)) {
+	if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
+			      direction, attachment->dma_map_attrs)) {
 		mutex_unlock(&buffer->lock);
 		return ERR_PTR(-ENOMEM);
 	}
@@ -287,7 +287,8 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 	struct ion_buffer *buffer = attachment->dmabuf->priv;
 
 	mutex_lock(&buffer->lock);
-	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+	dma_unmap_sg_attrs(attachment->dev, table->sgl, table->nents, direction,
+			   attachment->dma_map_attrs);
 	a->dma_mapped = false;
 	mutex_unlock(&buffer->lock);
 }
Hi Liam,
On Fri, Jan 18, 2019 at 10:37:47AM -0800, Liam Mark wrote:
> Add support for configuring dma mapping attributes when mapping
> and unmapping memory through dma_buf_map_attachment and
> dma_buf_unmap_attachment.
>
> For example this will allow ION clients to skip cache maintenance, by
> using DMA_ATTR_SKIP_CPU_SYNC, for buffers which are clean and haven't
> been accessed by the CPU.
How can a client know that the buffer won't be accessed by the CPU in the future though?
I don't think we can push this decision to clients, because they are lacking information about what else is going on with the buffer. It needs to be done by the exporter, IMO.
Thanks, -Brian
> Signed-off-by: Liam Mark <lmark@codeaurora.org>
> ---
>  drivers/staging/android/ion/ion.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> [...]
On Mon, 21 Jan 2019, Brian Starkey wrote:
> Hi Liam,
>
> On Fri, Jan 18, 2019 at 10:37:47AM -0800, Liam Mark wrote:
> > Add support for configuring dma mapping attributes when mapping
> > and unmapping memory through dma_buf_map_attachment and
> > dma_buf_unmap_attachment.
> >
> > For example this will allow ION clients to skip cache maintenance, by
> > using DMA_ATTR_SKIP_CPU_SYNC, for buffers which are clean and haven't
> > been accessed by the CPU.
>
> How can a client know that the buffer won't be accessed by the CPU in
> the future though?
Yes, for use cases where you don't know if it will be accessed in the future you would only use it to optimize the dma map path, but as I mentioned in the other thread there are cases (such as in our camera) where we have complete ownership of buffers and do know if they will be accessed in the future.
> I don't think we can push this decision to clients, because they are
> lacking information about what else is going on with the buffer. It
> needs to be done by the exporter, IMO.
I do agree it would be better to handle this in the exporter, but in a pipelining use case where there might not be any devices attached, that doesn't seem very doable.
> Thanks,
> -Brian
> > Signed-off-by: Liam Mark <lmark@codeaurora.org>
> > ---
> >  drivers/staging/android/ion/ion.c | 7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > [...]