I've been collecting these typo fixes for a while and it feels like time to send them in.
Signed-off-by: T.J. Mercier <tjmercier@google.com>
---
 drivers/dma-buf/dma-buf.c | 14 +++++++-------
 include/linux/dma-buf.h   |  6 +++---
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index dd0f83ee505b..614ccd208af4 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1141,7 +1141,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF);
  *
  * @dmabuf:	[in]	buffer which is moving
  *
- * Informs all attachmenst that they need to destroy and recreated all their
+ * Informs all attachments that they need to destroy and recreate all their
  * mappings.
  */
 void dma_buf_move_notify(struct dma_buf *dmabuf)
@@ -1159,11 +1159,11 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, DMA_BUF);
 /**
  * DOC: cpu access
  *
- * There are mutliple reasons for supporting CPU access to a dma buffer object:
+ * There are multiple reasons for supporting CPU access to a dma buffer object:
  *
  * - Fallback operations in the kernel, for example when a device is connected
  *   over USB and the kernel needs to shuffle the data around first before
- *   sending it away. Cache coherency is handled by braketing any transactions
+ *   sending it away. Cache coherency is handled by bracketing any transactions
  *   with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access()
  *   access.
  *
@@ -1190,7 +1190,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, DMA_BUF);
  *   replace ION buffers mmap support was needed.
  *
  *   There is no special interfaces, userspace simply calls mmap on the dma-buf
- *   fd. But like for CPU access there's a need to braket the actual access,
+ *   fd. But like for CPU access there's a need to bracket the actual access,
  *   which is handled by the ioctl (DMA_BUF_IOCTL_SYNC). Note that
  *   DMA_BUF_IOCTL_SYNC can fail with -EAGAIN or -EINTR, in which case it must
  *   be restarted.
@@ -1264,10 +1264,10 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
  * preparations. Coherency is only guaranteed in the specified range for the
  * specified access direction.
  * @dmabuf:	[in]	buffer to prepare cpu access for.
- * @direction:	[in]	length of range for cpu access.
+ * @direction:	[in]	direction of access.
  *
  * After the cpu access is complete the caller should call
- * dma_buf_end_cpu_access(). Only when cpu access is braketed by both calls is
+ * dma_buf_end_cpu_access(). Only when cpu access is bracketed by both calls is
  * it guaranteed to be coherent with other DMA access.
  *
  * This function will also wait for any DMA transactions tracked through
@@ -1307,7 +1307,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_begin_cpu_access, DMA_BUF);
  * actions. Coherency is only guaranteed in the specified range for the
  * specified access direction.
  * @dmabuf:	[in]	buffer to complete cpu access for.
- * @direction:	[in]	length of range for cpu access.
+ * @direction:	[in]	direction of access.
  *
  * This terminates CPU access started with dma_buf_begin_cpu_access().
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 71731796c8c3..1d61a4f6db35 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -330,7 +330,7 @@ struct dma_buf {
	 * @lock:
	 *
	 * Used internally to serialize list manipulation, attach/detach and
-	 * vmap/unmap. Note that in many cases this is superseeded by
+	 * vmap/unmap. Note that in many cases this is superseded by
	 * dma_resv_lock() on @resv.
	 */
	struct mutex lock;
@@ -365,7 +365,7 @@ struct dma_buf {
	 */
	const char *name;

-	/** @name_lock: Spinlock to protect name acces for read access. */
+	/** @name_lock: Spinlock to protect name access for read access. */
	spinlock_t name_lock;

	/**
@@ -402,7 +402,7 @@ struct dma_buf {
	 * anything the userspace API considers write access.
	 *
	 * - Drivers may just always add a write fence, since that only
-	 *   causes unecessarily synchronization, but no correctness issues.
+	 *   causes unnecessary synchronization, but no correctness issues.
	 *
	 * - Some drivers only expose a synchronous userspace API with no
	 *   pipelining across drivers. These do not set any fences for their
On 11/23/22 11:35, T.J. Mercier wrote:
I've been collecting these typo fixes for a while and it feels like time to send them in.
Signed-off-by: T.J. Mercier tjmercier@google.com
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>

Thanks.
(although I would prefer to see CPU instead of cpu, but that's no reason to hold up this patch)
Am 23.11.22 um 20:35 schrieb T.J. Mercier:
I've been collecting these typo fixes for a while and it feels like time to send them in.
Signed-off-by: T.J. Mercier tjmercier@google.com
Acked-by: Christian König <christian.koenig@amd.com>
On Thu, Nov 24, 2022 at 08:03:09AM +0100, Christian König wrote:
Am 23.11.22 um 20:35 schrieb T.J. Mercier:
I've been collecting these typo fixes for a while and it feels like time to send them in.
Signed-off-by: T.J. Mercier tjmercier@google.com
Acked-by: Christian König christian.koenig@amd.com
Will you also push this? I think tj doesn't have commit rights yet, and I somehow can't see the patch locally (I guess it's stuck in moderation).

-Daniel
Am 24.11.22 um 10:05 schrieb Daniel Vetter:
On Thu, Nov 24, 2022 at 08:03:09AM +0100, Christian König wrote:
Am 23.11.22 um 20:35 schrieb T.J. Mercier:
I've been collecting these typo fixes for a while and it feels like time to send them in.
Signed-off-by: T.J. Mercier tjmercier@google.com
Acked-by: Christian König christian.koenig@amd.com
Will you also push this? I think tj doesn't have commit rights yet, and I somehow can't see the patch locally (I guess it's stuck in moderation).
I was just about to complain that this doesn't apply cleanly to drm-misc-next.
Trivial problem: one of the typos was already removed by Dmitry a few weeks ago.
I've fixed that up locally and pushed the result, but nevertheless please make sure that DMA-buf patches are based on the drm branches.
Thanks, Christian.
On Thu, Nov 24, 2022 at 1:43 AM Christian König ckoenig.leichtzumerken@gmail.com wrote:
Am 24.11.22 um 10:05 schrieb Daniel Vetter:
On Thu, Nov 24, 2022 at 08:03:09AM +0100, Christian König wrote:
Am 23.11.22 um 20:35 schrieb T.J. Mercier:
I've been collecting these typo fixes for a while and it feels like time to send them in.
Signed-off-by: T.J. Mercier tjmercier@google.com
Acked-by: Christian König christian.koenig@amd.com
Will you also push this? I think tj doesn't have commit rights yet, and I somehow can't see the patch locally (I guess it's stuck in moderation).
I was just about to complain that this doesn't apply cleanly to drm-misc-next.
Trivial problem, one of the typos was just removed by Dimitry a few weeks ago.
I've fixed that up locally and pushed the result, but nevertheless please make sure that DMA-buf patches are based on the drm branches.
I'm sorry, this was on top of a random spot in Linus's 6.1-rc5. (84368d882b96 Merge tag 'soc-fixes-6.1-3') I'm not sure why I did that, but I suspect it was after a fresh git pull. I have too many repos.
Thanks all for the reviews.
Hi T.J.,
On Wed, Nov 23, 2022 at 07:35:18PM +0000, T.J. Mercier wrote:
I've been collecting these typo fixes for a while and it feels like time to send them in.
Signed-off-by: T.J. Mercier tjmercier@google.com
Looks good to me.
Reviewed-by: Tommaso Merciai <tommaso.merciai@amarulasolutions.com>
Thanks & Regards, Tommaso