This is the third and final part of my series to start supporting P2P with DMA-buf.
The implementation is straightforward: apart from a helper to aid constructing scatterlists without struct pages, we only add a new flag indicating that a DMA-buf importer can handle peer2peer.
The exporter can then check whether P2P is possible at all using the pci_p2pdma_distance_many() function and, if necessary, clear the flag again.
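To make the flow concrete, here is a minimal sketch of the two sides; the my_* names are placeholders, everything else comes from the patches below:

	/* Importer: opt in via the new attach ops flag. */
	static const struct dma_buf_attach_ops my_importer_attach_ops = {
		.allow_peer2peer = true,
		.move_notify = my_importer_move_notify,
	};

	/* Exporter: veto P2P for this attachment if the bus can't do it. */
	static int my_exporter_attach(struct dma_buf *dmabuf,
				      struct dma_buf_attachment *attach)
	{
		struct my_device *mydev = dmabuf->priv;

		if (pci_p2pdma_distance_many(mydev->pdev, &attach->dev,
					     1, true) < 0)
			attach->peer2peer = false;

		return 0;
	}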
The rest is an example of how to implement the necessary functionality in the amdgpu driver to set up scatterlists pointing to device memory.
Please review and comment, Christian.
This can be used by drivers to set up P2P DMA between device memory which is not backed by struct pages.
The drivers of the involved devices are responsible for setting up and tearing down DMA addresses as necessary using dma_map_resource().
The page pointer is set to NULL and only the DMA address, length and offset values are valid.
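As an illustration (not part of this patch), an exporter would map a chunk of its BAR with dma_map_resource() and then fill the entry; dev, bar_phys, size and dir stand in for whatever the driver has at hand:

	dma_addr_t addr;
	int r;

	addr = dma_map_resource(dev, bar_phys, size, dir,
				DMA_ATTR_SKIP_CPU_SYNC);
	r = dma_mapping_error(dev, addr);
	if (r)
		return r;

	sg_set_dma_addr(sg, addr, size, 0);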
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 include/linux/scatterlist.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 6eec50fb36c8..28a477bf0bdf 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -145,6 +145,29 @@ static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
 	sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
 }
 
+/**
+ * sg_set_dma_addr - Set sg entry to point at specified dma address
+ * @sg:		 SG entry
+ * @address:	 DMA address to set
+ * @len:	 Length of data
+ * @offset:	 Offset into page
+ *
+ * Description:
+ *   Use this function to set an sg entry to point to device resources mapped
+ *   using dma_map_resource(). The page pointer is set to NULL and only the DMA
+ *   address, length and offset values are valid.
+ *
+ **/
+static inline void sg_set_dma_addr(struct scatterlist *sg, dma_addr_t address,
+				   unsigned int len, unsigned int offset)
+{
+	sg_set_page(sg, NULL, len, offset);
+	sg->dma_address = address;
+#ifdef CONFIG_NEED_SG_DMA_LENGTH
+	sg->dma_length = len;
+#endif
+}
+
 /*
  * Loop over each sg element, following the pointer to a new list if necessary
  */
On Wed, Mar 11, 2020 at 02:51:53PM +0100, Christian König wrote:
This can be used by drivers to set up P2P DMA between device memory which is not backed by struct pages.
The drivers of the involved devices are responsible for setting up and tearing down DMA addresses as necessary using dma_map_resource().
The page pointer is set to NULL and only the DMA address, length and offset values are valid.
NAK. The only valid way to fill DMA addresses in a scatterlist is dma_map_sg / dma_map_sg_attrs.
On 11.03.20 at 16:28, Christoph Hellwig wrote:
On Wed, Mar 11, 2020 at 02:51:53PM +0100, Christian König wrote:
This can be used by drivers to set up P2P DMA between device memory which is not backed by struct pages.
The drivers of the involved devices are responsible for setting up and tearing down DMA addresses as necessary using dma_map_resource().
The page pointer is set to NULL and only the DMA address, length and offset values are valid.
NAK. The only valid way to fill DMA addresses in a scatterlist is dma_map_sg / dma_map_sg_attrs.
How can we then map PCIe BARs into a scatterlist when they are not backed by struct pages?
Regards, Christian.
On Thu, Mar 12, 2020 at 11:14:22AM +0100, Christian König wrote:
The page pointer is set to NULL and only the DMA address, length and offset values are valid.
NAK. The only valid way to fill DMA addresses in a scatterlist is dma_map_sg / dma_map_sg_attrs.
How can we then map PCIe BARs into a scatterlist when they are not backed by struct pages?
You can't. Scatterlists by definition map memory backed by a struct page. If you want to map something else, struct scatterlist is the wrong structure and you need to use something else (which you should anyway, as struct scatterlist is a bad design pattern, and the above is only one of the many issues with it).
On 12.03.20 at 11:19, Christoph Hellwig wrote:
On Thu, Mar 12, 2020 at 11:14:22AM +0100, Christian König wrote:
The page pointer is set to NULL and only the DMA address, length and offset values are valid.
NAK. The only valid way to fill DMA addresses in a scatterlist is dma_map_sg / dma_map_sg_attrs.
How can we then map PCIe BARs into a scatterlist when they are not backed by struct pages?
You can't. Scatterlists by definition map memory backed by a struct page. If you want to map something else, struct scatterlist is the wrong structure and you need to use something else (which you should anyway, as struct scatterlist is a bad design pattern, and the above is only one of the many issues with it).
But how should we then deal with all the existing interfaces which already take a scatterlist/sg_table?
The whole DMA-buf design and a lot of drivers are built around scatterlist/sg_table, and to me that actually makes quite a lot of sense.
For TTM I've also been trying for quite a while to nuke the manual dma_address arrays we have and switch over to scatterlist/sg_table.
I mean we could come up with a new structure for this, but to me that just looks like reinventing the wheel, especially since drivers need to be able to handle both I/O to system memory and I/O to PCIe BARs.
Regards, Christian.
On Thu, Mar 12, 2020 at 11:31:35AM +0100, Christian König wrote:
But how should we then deal with all the existing interfaces which already take a scatterlist/sg_table?
The whole DMA-buf design and a lot of drivers are built around scatterlist/sg_table, and to me that actually makes quite a lot of sense.
Replace them with a saner interface that doesn't take a scatterlist, at the very least for new functionality like peer-to-peer DMA; but code like this would especially benefit from a general move away from the scatterlist.
For TTM I've also been trying for quite a while to nuke the manual dma_address arrays we have and switch over to scatterlist/sg_table.
Which is a move in the wrong direction.
I mean we could come up with a new structure for this, but to me that just looks like reinventing the wheel, especially since drivers need to be able to handle both I/O to system memory and I/O to PCIe BARs.
The structure for holding the struct page side of the scatterlist is called struct bio_vec, so far mostly used by the block and networking code. The structure for holding DMA addresses doesn't really exist in a generic form, but would be an array of these structures:

struct dma_sg {
	dma_addr_t	addr;
	u32		len;
};

Keeping them separate is important, as most IOMMU drivers will return fewer entries than you can feed them. E.g. if your input boundaries are 4k aligned you will usually just get a single IOVA entry back. I will soon also have a DMA mapping interface that will take advantage of that fact.
On 12.03.20 at 11:47, Christoph Hellwig wrote:
On Thu, Mar 12, 2020 at 11:31:35AM +0100, Christian König wrote: [SNIP]
I mean we could come up with a new structure for this, but to me that just looks like reinventing the wheel, especially since drivers need to be able to handle both I/O to system memory and I/O to PCIe BARs.
The structure for holding the struct page side of the scatterlist is called struct bio_vec, so far mostly used by the block and networking code.
Yeah, I'm aware of this.
The structure for holding DMA addresses doesn't really exist in a generic form, but would be an array of these structures:

struct dma_sg {
	dma_addr_t	addr;
	u32		len;
};
So the whole idea is to nuke scatterlist/sg_table in the long term and switch over to using bio_vec as input and dma_sg as output for a DMA mapping operation.
Is that correct? If so, I could live with that, but it makes my patch set much more complicated.
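Just to make sure I understood you correctly, something like this hypothetical sketch (the dma_map_bvecs() name and signature are made up by me; only struct dma_sg is from your mail)?

struct dma_sg {
	dma_addr_t	addr;
	u32		len;
};

/*
 * Hypothetical: map nr_vecs bio_vecs for dev and write the resulting
 * IOVA ranges into out. Returns the number of dma_sg entries actually
 * used, which can be fewer than nr_vecs when the IOMMU merges ranges.
 */
int dma_map_bvecs(struct device *dev, struct bio_vec *vecs, int nr_vecs,
		  struct dma_sg *out, enum dma_data_direction dir);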
Keeping them separate is important, as most IOMMU drivers will return fewer entries than you can feed them. E.g. if your input boundaries are 4k aligned you will usually just get a single IOVA entry back. I will soon also have a DMA mapping interface that will take advantage of that fact.
Yeah, I noticed as well that this is not really well handled.
Thanks for the feedback, Christian.
Add a peer2peer flag indicating that the importer can deal with device resources which are not backed by pages.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/dma-buf/dma-buf.c |  2 ++
 include/linux/dma-buf.h   | 10 ++++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f4ace9af2191..f9220928ec90 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -689,6 +689,8 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
 
 	attach->dev = dev;
 	attach->dmabuf = dmabuf;
+	if (importer_ops)
+		attach->peer2peer = importer_ops->allow_peer2peer;
 	attach->importer_ops = importer_ops;
 	attach->importer_priv = importer_priv;
 
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 1ade486fc2bb..82e0a4a64601 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -334,6 +334,14 @@ struct dma_buf {
  * Attachment operations implemented by the importer.
  */
 struct dma_buf_attach_ops {
+	/**
+	 * @allow_peer2peer:
+	 *
+	 * If this is set to true the importer must be able to handle peer
+	 * resources without struct pages.
+	 */
+	bool allow_peer2peer;
+
 	/**
 	 * @move_notify
 	 *
@@ -362,6 +370,7 @@ struct dma_buf_attach_ops {
  * @node: list of dma_buf_attachment, protected by dma_resv lock of the dmabuf.
  * @sgt: cached mapping.
  * @dir: direction of cached mapping.
+ * @peer2peer: true if the importer can handle peer resources without pages.
  * @priv: exporter specific attachment data.
  * @importer_ops: importer operations for this attachment, if provided
  * dma_buf_map/unmap_attachment() must be called with the dma_resv lock held.
@@ -382,6 +391,7 @@ struct dma_buf_attachment {
 	struct list_head node;
 	struct sg_table *sgt;
 	enum dma_data_direction dir;
+	bool peer2peer;
 	const struct dma_buf_attach_ops *importer_ops;
 	void *importer_priv;
 	void *priv;
Importing should work out of the box.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index ffeb20f11c07..aef12ee2f1e3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -514,6 +514,7 @@ amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
 }
 
 static const struct dma_buf_attach_ops amdgpu_dma_buf_attach_ops = {
+	.allow_peer2peer = true,
 	.move_notify = amdgpu_dma_buf_move_notify
 };
Check if we can do peer2peer on the PCIe bus.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index aef12ee2f1e3..bbf67800c8a6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -38,6 +38,7 @@
 #include <drm/amdgpu_drm.h>
 #include <linux/dma-buf.h>
 #include <linux/dma-fence-array.h>
+#include <linux/pci-p2pdma.h>
 
 /**
  * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
@@ -179,6 +180,9 @@ static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	int r;
 
+	if (pci_p2pdma_distance_many(adev->pdev, &attach->dev, 1, true) < 0)
+		attach->peer2peer = false;
+
 	if (attach->dev->driver == adev->dev->driver)
 		return 0;
We should be able to do this now after checking all the prerequisites.
v2: fix entry count in the sgt
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c  | 56 ++++++++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h      | 12 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 97 ++++++++++++++++++++
 3 files changed, 151 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index bbf67800c8a6..43d8ed7dbd00 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -276,14 +276,21 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
 	struct dma_buf *dma_buf = attach->dmabuf;
 	struct drm_gem_object *obj = dma_buf->priv;
 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	struct sg_table *sgt;
 	long r;
 
 	if (!bo->pin_count) {
-		/* move buffer into GTT */
+		/* move buffer into GTT or VRAM */
 		struct ttm_operation_ctx ctx = { false, false };
+		unsigned domains = AMDGPU_GEM_DOMAIN_GTT;
 
-		amdgpu_bo_placement_from_domain(bo, AMDGPU_GEM_DOMAIN_GTT);
+		if (bo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM &&
+		    attach->peer2peer) {
+			bo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED;
+			domains |= AMDGPU_GEM_DOMAIN_VRAM;
+		}
+		amdgpu_bo_placement_from_domain(bo, domains);
 		r = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx);
 		if (r)
 			return ERR_PTR(r);
@@ -293,20 +300,34 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
 		return ERR_PTR(-EBUSY);
 	}
 
-	sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages, bo->tbo.num_pages);
-	if (IS_ERR(sgt))
-		return sgt;
-
-	if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-			      DMA_ATTR_SKIP_CPU_SYNC))
-		goto error_free;
+	switch (bo->tbo.mem.mem_type) {
+	case TTM_PL_TT:
+		sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
+					    bo->tbo.num_pages);
+		if (IS_ERR(sgt))
+			return sgt;
+
+		if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
+				      DMA_ATTR_SKIP_CPU_SYNC))
+			goto error_free;
+		break;
+
+	case TTM_PL_VRAM:
+		r = amdgpu_vram_mgr_alloc_sgt(adev, &bo->tbo.mem, attach->dev,
+					      dir, &sgt);
+		if (r)
+			return ERR_PTR(r);
+		break;
+	default:
+		return ERR_PTR(-EINVAL);
+	}
 
 	return sgt;
 
 error_free:
 	sg_free_table(sgt);
 	kfree(sgt);
-	return ERR_PTR(-ENOMEM);
+	return ERR_PTR(-EBUSY);
 }
 
 /**
@@ -322,9 +343,18 @@ static void amdgpu_dma_buf_unmap(struct dma_buf_attachment *attach,
 				 struct sg_table *sgt,
 				 enum dma_data_direction dir)
 {
-	dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
-	sg_free_table(sgt);
-	kfree(sgt);
+	struct dma_buf *dma_buf = attach->dmabuf;
+	struct drm_gem_object *obj = dma_buf->priv;
+	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+
+	if (sgt->sgl->page_link) {
+		dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
+		sg_free_table(sgt);
+		kfree(sgt);
+	} else {
+		amdgpu_vram_mgr_free_sgt(adev, attach->dev, dir, sgt);
+	}
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 7551f3729445..a99d813b23a5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -24,8 +24,9 @@
 #ifndef __AMDGPU_TTM_H__
 #define __AMDGPU_TTM_H__
 
-#include "amdgpu.h"
+#include <linux/dma-direction.h>
 #include <drm/gpu_scheduler.h>
+#include "amdgpu.h"
 
 #define AMDGPU_PL_GDS		(TTM_PL_PRIV + 0)
 #define AMDGPU_PL_GWS		(TTM_PL_PRIV + 1)
@@ -74,6 +75,15 @@ uint64_t amdgpu_gtt_mgr_usage(struct ttm_mem_type_manager *man);
 int amdgpu_gtt_mgr_recover(struct ttm_mem_type_manager *man);
 
 u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo);
+int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
+			      struct ttm_mem_reg *mem,
+			      struct device *dev,
+			      enum dma_data_direction dir,
+			      struct sg_table **sgt);
+void amdgpu_vram_mgr_free_sgt(struct amdgpu_device *adev,
+			      struct device *dev,
+			      enum dma_data_direction dir,
+			      struct sg_table *sgt);
 uint64_t amdgpu_vram_mgr_usage(struct ttm_mem_type_manager *man);
 uint64_t amdgpu_vram_mgr_vis_usage(struct ttm_mem_type_manager *man);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index 82a3299e53c0..c6e7f00c5b21 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -22,6 +22,7 @@
  * Authors: Christian König
  */
 
+#include <linux/dma-mapping.h>
 #include "amdgpu.h"
 #include "amdgpu_vm.h"
 #include "amdgpu_atomfirmware.h"
@@ -458,6 +459,102 @@ static void amdgpu_vram_mgr_del(struct ttm_mem_type_manager *man,
 	mem->mm_node = NULL;
 }
 
+/**
+ * amdgpu_vram_mgr_alloc_sgt - allocate and fill a sg table
+ *
+ * @adev: amdgpu device pointer
+ * @mem: TTM memory object
+ * @dev: the other device
+ * @dir: dma direction
+ * @sgt: resulting sg table
+ *
+ * Allocate and fill a sg table from a VRAM allocation.
+ */
+int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
+			      struct ttm_mem_reg *mem,
+			      struct device *dev,
+			      enum dma_data_direction dir,
+			      struct sg_table **sgt)
+{
+	struct drm_mm_node *node;
+	struct scatterlist *sg;
+	int num_entries = 0;
+	unsigned int pages;
+	int i, r;
+
+	*sgt = kmalloc(sizeof(**sgt), GFP_KERNEL);
+	if (!*sgt)
+		return -ENOMEM;
+
+	for (pages = mem->num_pages, node = mem->mm_node;
+	     pages; pages -= node->size, ++node)
+		++num_entries;
+
+	r = sg_alloc_table(*sgt, num_entries, GFP_KERNEL);
+	if (r)
+		goto error_free;
+
+	for_each_sg((*sgt)->sgl, sg, num_entries, i)
+		sg->length = 0;
+
+	node = mem->mm_node;
+	for_each_sg((*sgt)->sgl, sg, num_entries, i) {
+		phys_addr_t phys = (node->start << PAGE_SHIFT) +
+			adev->gmc.aper_base;
+		size_t size = node->size << PAGE_SHIFT;
+		dma_addr_t addr;
+
+		++node;
+		addr = dma_map_resource(dev, phys, size, dir,
+					DMA_ATTR_SKIP_CPU_SYNC);
+		r = dma_mapping_error(dev, addr);
+		if (r)
+			goto error_unmap;
+
+		sg_set_dma_addr(sg, addr, size, 0);
+	}
+	return 0;
+
+error_unmap:
+	for_each_sg((*sgt)->sgl, sg, num_entries, i) {
+		if (!sg->length)
+			continue;
+
+		dma_unmap_resource(dev, sg->dma_address,
+				   sg->length, dir,
+				   DMA_ATTR_SKIP_CPU_SYNC);
+	}
+	sg_free_table(*sgt);
+
+error_free:
+	kfree(*sgt);
+	return r;
+}
+
+/**
+ * amdgpu_vram_mgr_alloc_sgt - allocate and fill a sg table
+ *
+ * @adev: amdgpu device pointer
+ * @sgt: sg table to free
+ *
+ * Free a previously allocated sg table.
+ */
+void amdgpu_vram_mgr_free_sgt(struct amdgpu_device *adev,
+			      struct device *dev,
+			      enum dma_data_direction dir,
+			      struct sg_table *sgt)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgt->sgl, sg, sgt->nents, i)
+		dma_unmap_resource(dev, sg->dma_address,
+				   sg->length, dir,
+				   DMA_ATTR_SKIP_CPU_SYNC);
+	sg_free_table(sgt);
+	kfree(sgt);
+}
+
 /**
  * amdgpu_vram_mgr_usage - how many bytes are used in this domain
  *
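One thing worth spelling out (illustrative, not part of the patch): an importer consuming such an sg_table must stick to the DMA accessors, since sg_page() is NULL for these entries. A minimal sketch, with my_dev_program_dma() as a made-up stand-in for the device-specific part:

	struct scatterlist *sg;
	int i;

	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		/* only the DMA address and length are valid here */
		my_dev_program_dma(dev, sg_dma_address(sg), sg_dma_len(sg));
	}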
On Wed, Mar 11, 2020 at 9:52 AM Christian König <ckoenig.leichtzumerken@gmail.com> wrote:
We should be able to do this now after checking all the prerequisites.
v2: fix entry count in the sgt
Signed-off-by: Christian König <christian.koenig@amd.com>
[SNIP]
+/**
+ * amdgpu_vram_mgr_alloc_sgt - allocate and fill a sg table

This should be: amdgpu_vram_mgr_free_sgt - unmap and free an sg table

+ * @adev: amdgpu device pointer
+ * @sgt: sg table to free
+ *
+ * Free a previously allocated sg table.
+ */
[SNIP]
Note if a buffer was imported using peer2peer.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 4277125a79ee..e42608115c99 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -29,6 +29,7 @@
 #include <linux/module.h>
 #include <linux/pagemap.h>
 #include <linux/pci.h>
+#include <linux/dma-buf.h>
 
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_debugfs.h>
@@ -854,7 +855,8 @@ static int amdgpu_debugfs_gem_bo_info(int id, void *ptr, void *data)
 	attachment = READ_ONCE(bo->tbo.base.import_attach);
 
 	if (attachment)
-		seq_printf(m, " imported from %p", dma_buf);
+		seq_printf(m, " imported from %p%s", dma_buf,
+			   attachment->peer2peer ? " P2P" : "");
 	else if (dma_buf)
 		seq_printf(m, " exported as %p", dma_buf);