GPU drivers need this in their shrinkers, to be able to throw out
mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers,
but that potential loop is resolved by only trylocking there.
So the full hierarchy is now (ignoring some of the other branches we
have already primed):
mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write
I hope that's not inconsistent with anything mm or fs does; adding
relevant people.
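(For context, a hedged sketch of the trylock pattern in a GPU shrinker
that resolves the dma_resv loop mentioned above; dma_resv_trylock() and
dma_resv_unlock() are the real APIs, the bo list and eviction helper are
hypothetical:)

	static unsigned long gpu_shrink_scan(struct shrinker *shrink,
					     struct shrink_control *sc)
	{
		struct gpu_bo *bo;		/* hypothetical driver type */
		unsigned long freed = 0;

		list_for_each_entry(bo, &gpu_lru_list, lru) {
			/* Trylock breaks the shrinker -> dma_resv loop: if
			 * the reservation is contended, skip instead of
			 * blocking under reclaim. */
			if (!dma_resv_trylock(bo->base.resv))
				continue;
			/* Eviction may call unmap_mapping_range(), which
			 * takes i_mmap_lock_write - the edge this patch
			 * primes into lockdep. */
			freed += gpu_bo_evict(bo);	/* hypothetical */
			dma_resv_unlock(bo->base.resv);
		}
		return freed;
	}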
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Cc: Sumit Semwal <sumit.semwal(a)linaro.org>
Cc: "Christian König" <christian.koenig(a)amd.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Cc: Dave Chinner <david(a)fromorbit.com>
Cc: Qian Cai <cai(a)lca.pw>
Cc: linux-xfs(a)vger.kernel.org
Cc: linux-fsdevel(a)vger.kernel.org
Cc: Thomas Hellström (Intel) <thomas_os(a)shipmail.org>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Jason Gunthorpe <jgg(a)mellanox.com>
Cc: linux-mm(a)kvack.org
Cc: linux-rdma(a)vger.kernel.org
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
---
drivers/dma-buf/dma-resv.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 0e6675ec1d11..9678162a4ac5 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -104,12 +104,14 @@ static int __init dma_resv_lockdep(void)
struct mm_struct *mm = mm_alloc();
struct ww_acquire_ctx ctx;
struct dma_resv obj;
+ struct address_space mapping;
int ret;
if (!mm)
return -ENOMEM;
dma_resv_init(&obj);
+ address_space_init_once(&mapping);
mmap_read_lock(mm);
ww_acquire_init(&ctx, &reservation_ww_class);
@@ -117,6 +119,9 @@ static int __init dma_resv_lockdep(void)
if (ret == -EDEADLK)
dma_resv_lock_slow(&obj, &ctx);
fs_reclaim_acquire(GFP_KERNEL);
+ /* for unmap_mapping_range on trylocked buffer objects in shrinkers */
+ i_mmap_lock_write(&mapping);
+ i_mmap_unlock_write(&mapping);
#ifdef CONFIG_MMU_NOTIFIER
lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
__dma_fence_might_wait();
--
2.27.0
Unless there are any remaining objections to these patches, what are
the next steps towards getting these merged? Sorry, I'm not familiar
with the workflow for contributing patches to Linux.
Thanks,
David
On Tue, Jun 9, 2020 at 6:53 PM Michael S. Tsirkin <mst(a)redhat.com> wrote:
>
> On Tue, Jun 09, 2020 at 10:25:15AM +0900, David Stevens wrote:
> > This patchset implements the current proposal for virtio cross-device
> > resource sharing [1]. It will be used to import virtio resources into
> > the virtio-video driver currently under discussion [2]. The patch
> > under consideration to add support in the virtio-video driver is [3].
> > It uses the APIs from v3 of this series, but the changes to update it
> > are relatively minor.
> >
> > This patchset adds a new flavor of dma-bufs that supports querying the
> > underlying virtio object UUID, as well as adding support for exporting
> > resources from virtgpu.
>
> Gerd, David, if possible, please test this in a configuration with
> virtual VT-d enabled but with iommu_platform=off,
> to make sure we didn't break this config.
>
>
> Besides that, for virtio parts:
>
> Acked-by: Michael S. Tsirkin <mst(a)redhat.com>
>
>
> > [1] https://markmail.org/thread/2ypjt5cfeu3m6lxu
> > [2] https://markmail.org/thread/p5d3k566srtdtute
> > [3] https://markmail.org/thread/j4xlqaaim266qpks
> >
> > v4 -> v5 changes:
> > - Remove virtio_dma_buf_export_info.
> >
> > David Stevens (3):
> > virtio: add dma-buf support for exported objects
> > virtio-gpu: add VIRTIO_GPU_F_RESOURCE_UUID feature
> > drm/virtio: Support virtgpu exported resources
> >
> > drivers/gpu/drm/virtio/virtgpu_drv.c | 3 +
> > drivers/gpu/drm/virtio/virtgpu_drv.h | 20 ++++++
> > drivers/gpu/drm/virtio/virtgpu_kms.c | 4 ++
> > drivers/gpu/drm/virtio/virtgpu_prime.c | 96 +++++++++++++++++++++++++-
> > drivers/gpu/drm/virtio/virtgpu_vq.c | 55 +++++++++++++++
> > drivers/virtio/Makefile | 2 +-
> > drivers/virtio/virtio.c | 6 ++
> > drivers/virtio/virtio_dma_buf.c | 82 ++++++++++++++++++++++
> > include/linux/virtio.h | 1 +
> > include/linux/virtio_dma_buf.h | 37 ++++++++++
> > include/uapi/linux/virtio_gpu.h | 19 +++++
> > 11 files changed, 321 insertions(+), 4 deletions(-)
> > create mode 100644 drivers/virtio/virtio_dma_buf.c
> > create mode 100644 include/linux/virtio_dma_buf.h
> >
> > --
> > 2.27.0.278.ge193c7cf3a9-goog
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: virtio-dev-unsubscribe(a)lists.oasis-open.org
> For additional commands, e-mail: virtio-dev-help(a)lists.oasis-open.org
>
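(To make the quoted description concrete, a hedged sketch of an importer
querying the virtio object UUID; virtio_dma_buf_get_uuid() is the
accessor the series adds, but treat the exact usage as an assumption:)

	#include <linux/virtio_dma_buf.h>

	static int import_virtio_resource(struct dma_buf *buf)
	{
		uuid_t uuid;
		int ret;

		/* Only succeeds for dma-bufs exported by virtgpu with the
		 * VIRTIO_GPU_F_RESOURCE_UUID feature negotiated. */
		ret = virtio_dma_buf_get_uuid(buf, &uuid);
		if (ret)
			return ret;

		/* The UUID identifies the resource to the host, e.g. in a
		 * virtio-video command (hypothetical consumer). */
		return 0;
	}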
On Thu, Jul 16, 2020 at 6:10 PM Kunihiko Hayashi
<hayashi.kunihiko(a)socionext.com> wrote:
>
> Currently, dma-buf heaps can handle only the default CMA area. This
> introduces a dma_heap_add_cma() function to attach CMA heaps that
> belong to a device.
>
> First, the driver calls of_reserved_mem_device_init() to assign to the
> device the memory-region property associated with a reserved-memory
> node defined as CMA. Then, when the driver calls dma_heap_add_cma(),
> that CMA area is added to the dma-buf heaps.
>
> For example, prepare a CMA node named "linux,cma@10000000" and specify
> that node in the memory-region property. After the above calls in the
> driver, a device file "/dev/dma_heap/linux,cma@10000000" associated
> with the CMA area becomes available as a dma-buf heap.
>
> Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko(a)socionext.com>
> ---
> drivers/dma-buf/heaps/cma_heap.c | 12 ++++++++++++
> include/linux/dma-heap.h | 9 +++++++++
> 2 files changed, 21 insertions(+)
Hey! Sorry for the slow response on this! I just realized I never replied!
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> index 626cf7f..5d2442e 100644
> --- a/drivers/dma-buf/heaps/cma_heap.c
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -162,6 +162,18 @@ static int __add_cma_heap(struct cma *cma, void *data)
> return 0;
> }
>
> +/* add device CMA heap to dmabuf heaps */
> +int dma_heap_add_cma(struct device *dev)
> +{
> + struct cma *cma = dev_get_cma_area(dev);
> +
> + if (!cma)
> + return -ENOMEM;
> +
> + return __add_cma_heap(cma, NULL);
> +}
> +EXPORT_SYMBOL_GPL(dma_heap_add_cma);
> +
> static int add_default_cma_heap(void)
> {
> struct cma *default_cma = dev_get_cma_area(NULL);
> diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
> index 454e354..16bec7d 100644
> --- a/include/linux/dma-heap.h
> +++ b/include/linux/dma-heap.h
> @@ -56,4 +56,13 @@ void *dma_heap_get_drvdata(struct dma_heap *heap);
> */
> struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
>
> +#ifdef CONFIG_DMABUF_HEAPS_CMA
> +/**
> + * dma_heap_add_cma - adds a device CMA heap to dmabuf heaps
> + * @dev: device with a CMA heap to register
> + */
> +int dma_heap_add_cma(struct device *dev);
> +
> +#endif /* CONFIG_DMABUF_HEAPS_CMA */
> +
> #endif /* _DMA_HEAPS_H */
> --
> 2.7.4
Looks sane to me. Being able to expose multiple CMA heaps is needed,
and I agree this way (as opposed to my earlier dts approach) is
probably the best approach. The only bit I'm so-so on is adding the
CMA-heap-specific call in dma-heap.h, but at the same time I can't
justify adding a whole new header for a single function.
Do you have an upstream driver that you plan to have make use of this
new call? We want in-tree users for newly added code.
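(For illustration, a hedged sketch of how such a driver might wire this
up in its probe path; only of_reserved_mem_device_init() and
dma_heap_add_cma() come from the patch, the rest is hypothetical:)

	static int foo_probe(struct platform_device *pdev)
	{
		int ret;

		/* Bind the reserved-memory region (the CMA node referenced
		 * by the device's memory-region property) to this device. */
		ret = of_reserved_mem_device_init(&pdev->dev);
		if (ret)
			return ret;

		/* Expose the per-device CMA area as a dma-buf heap, e.g.
		 * /dev/dma_heap/linux,cma@10000000 */
		return dma_heap_add_cma(&pdev->dev);
	}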
But if so, feel free to add my:
Acked-by: John Stultz <john.stultz(a)linaro.org>
To this patch when you submit the driver changes.
thanks
-john
Trying to grab dma_resv_lock while in commit_tail, before we've run
all the code that leads to the eventual signalling of the vblank event
(which can be a dma_fence), is deadlock-y. Don't do that.
Here the solution is easy because just grabbing locks to read
something races anyway. We don't need to bother; READ_ONCE is
equivalent and avoids the locking issue.
v2: Also take into account tmz_surface boolean, plus just delete the
old code.
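(As a standalone sketch of the trade-off, assuming the fields are
written under the reservation lock elsewhere:)

	u64 tiling_flags;

	/* Taking dma_resv_lock here would only serialize the load itself;
	 * the value can change the instant we unlock, so the read is racy
	 * either way. READ_ONCE() gives the same single, tear-free load
	 * without the lock - and without pulling dma_resv_lock into the
	 * commit_tail / fence-signalling path. */
	tiling_flags = READ_ONCE(abo->tiling_flags);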
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Cc: linux-rdma(a)vger.kernel.org
Cc: amd-gfx(a)lists.freedesktop.org
Cc: intel-gfx(a)lists.freedesktop.org
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Christian König <christian.koenig(a)amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
---
DC-folks, I think this split out patch from my series here
https://lore.kernel.org/dri-devel/20200707201229.472834-1-daniel.vetter@ffw…
should be ready for review/merging. I fixed it up a bit so that it's not
just a gross hack :-)
Cheers, Daniel
---
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 21ec64fe5527..a20b62b1f2ef 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -6959,20 +6959,13 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
DRM_ERROR("Waiting for fences timed out!");
/*
- * TODO This might fail and hence better not used, wait
- * explicitly on fences instead
- * and in general should be called for
- * blocking commit to as per framework helpers
+ * We cannot reserve buffers here, which means the normal flag
+ * access functions don't work. Paper over this with READ_ONCE,
+ * but maybe the flags are invariant enough that not even that
+ * would be needed.
*/
- r = amdgpu_bo_reserve(abo, true);
- if (unlikely(r != 0))
- DRM_ERROR("failed to reserve buffer before flip\n");
-
- amdgpu_bo_get_tiling_flags(abo, &tiling_flags);
-
- tmz_surface = amdgpu_bo_encrypted(abo);
-
- amdgpu_bo_unreserve(abo);
+ tiling_flags = READ_ONCE(abo->tiling_flags);
+ tmz_surface = READ_ONCE(abo->flags) & AMDGPU_GEM_CREATE_ENCRYPTED;
fill_dc_plane_info_and_addr(
dm->adev, new_plane_state, tiling_flags,
--
2.27.0
Fix W=1 compile warnings (invalid kerneldoc):
drivers/dma-buf/dma-buf.c:328: warning: Function parameter or member 'dmabuf' not described in 'dma_buf_set_name'
Signed-off-by: Krzysztof Kozlowski <krzk(a)kernel.org>
---
drivers/dma-buf/dma-buf.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 1699a8e309ef..58564d82a3a2 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -316,9 +316,9 @@ static __poll_t dma_buf_poll(struct file *file, poll_table *poll)
* name of the dma-buf if the same piece of memory is used for multiple
* purpose between different devices.
*
- * @dmabuf [in] dmabuf buffer that will be renamed.
- * @buf: [in] A piece of userspace memory that contains the name of
- * the dma-buf.
+ * @dmabuf: [in] dmabuf buffer that will be renamed.
+ * @buf: [in] A piece of userspace memory that contains the name of
+ * the dma-buf.
*
* Returns 0 on success. If the dma-buf buffer is already attached to
* devices, return -EBUSY.
--
2.17.1
On 22.07.2020 18:04, Wei Yongjun <weiyongjun1(a)huawei.com> wrote:
The sparse tool complains as follows:
drivers/dma-buf/dma-fence.c:249:25: warning:
symbol 'dma_fence_lockdep_map' was not declared. Should it be static?
This variable is not used outside of dma-fence.c, so this commit
marks it static.
Fixes: 5fbff813a4a3 ("dma-fence: basic lockdep annotations")
Reported-by: Hulk Robot <hulkci(a)huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1(a)huawei.com>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
---
drivers/dma-buf/dma-fence.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index af1d8ea926b3..43624b4ee13d 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -246,7 +246,7 @@ EXPORT_SYMBOL(dma_fence_context_alloc);
* concerned.
*/
#ifdef CONFIG_LOCKDEP
-struct lockdep_map dma_fence_lockdep_map = {
+static struct lockdep_map dma_fence_lockdep_map = {
.name = "dma_fence_map"
};
This comes up every few years and gets somewhat tedious to discuss, so
let's write it down once and for all.
What I'm not sure about is whether the text should be more explicit in
flat-out mandating the amdkfd eviction fences for long-running compute
workloads or workloads where userspace fencing is allowed.
v2: Now with dot graph!
Cc: Jesse Natalie <jenatali(a)microsoft.com>
Cc: Steve Pronovost <spronovo(a)microsoft.com>
Cc: Jason Ekstrand <jason(a)jlekstrand.net>
Cc: Felix Kuehling <Felix.Kuehling(a)amd.com>
Cc: Mika Kuoppala <mika.kuoppala(a)intel.com>
Cc: Thomas Hellstrom <thomas.hellstrom(a)intel.com>
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Cc: linux-rdma(a)vger.kernel.org
Cc: amd-gfx(a)lists.freedesktop.org
Cc: intel-gfx(a)lists.freedesktop.org
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Christian König <christian.koenig(a)amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
---
Documentation/driver-api/dma-buf.rst | 70 ++++++++++++++++++++++++
drivers/gpu/drm/virtio/virtgpu_display.c | 20 -------
2 files changed, 70 insertions(+), 20 deletions(-)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index f8f6decde359..037ba0078bb4 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -178,3 +178,73 @@ DMA Fence uABI/Sync File
.. kernel-doc:: include/linux/sync_file.h
:internal:
+Indefinite DMA Fences
+~~~~~~~~~~~~~~~~~~~~~
+
+At various times, &dma_fence objects with an indefinite time until
+dma_fence_wait() finishes have been proposed. Examples include:
+
+* Future fences, used in HWC1 to signal when a buffer isn't used by the display
+ any longer, and created with the screen update that makes the buffer visible.
+ The time this fence completes is entirely under userspace's control.
+
+* Proxy fences, proposed to handle &drm_syncobj for which the fence has not yet
+ been set. Used to asynchronously delay command submission.
+
+* Userspace fences or gpu futexes, fine-grained locking within a command buffer
+ that userspace uses for synchronization across engines or with the CPU, which
+ are then imported as a DMA fence for integration into existing winsys
+ protocols.
+
+* Long-running compute command buffers, while still using traditional end of
+ batch DMA fences for memory management instead of context preemption DMA
+ fences which get reattached when the compute job is rescheduled.
+
+Common to all these schemes is that userspace controls the dependencies of these
+fences and controls when they fire. Mixing indefinite fences with normal
+in-kernel DMA fences does not work, even when a fallback timeout is included to
+protect against malicious userspace:
+
+* Only the kernel knows about all DMA fence dependencies, userspace is not aware
+ of dependencies injected due to memory management or scheduler decisions.
+
+* Only userspace knows about all dependencies in indefinite fences and when
+ exactly they will complete, the kernel has no visibility.
+
+Furthermore, the kernel has to be able to hold up userspace command submission
+for memory management needs, which means we must support indefinite fences being
+dependent upon DMA fences. If the kernel also supports indefinite fences as
+in-kernel DMA fences, as any of the above proposals would, there is the
+potential for deadlocks.
+
+.. kernel-render:: DOT
+ :alt: Indefinite Fencing Dependency Cycle
+ :caption: Indefinite Fencing Dependency Cycle
+
+ digraph "Fencing Cycle" {
+ node [shape=box fillcolor=grey style=filled]
+ kernel [label="Kernel DMA Fences"]
+ userspace [label="userspace controlled fences"]
+ kernel -> userspace [label="memory management"]
+ userspace -> kernel [label="Future fence, fence proxy, ..."]
+
+ { rank=same; kernel userspace }
+ }
+
+This means that the kernel might accidentally create deadlocks through memory
+management dependencies which userspace is unaware of, randomly hanging
+workloads until the timeout kicks in - workloads which, from userspace's
+perspective, do not contain a deadlock. In such a mixed fencing architecture
+there is no single entity with knowledge of all dependencies, and therefore
+preventing such deadlocks from within the kernel is not possible.
+
+The only solution to avoid dependency loops is to not allow indefinite
+fences in the kernel. This means:
+
+* No future fences, proxy fences or userspace fences imported as DMA fences,
+ with or without a timeout.
+
+* No DMA fences that signal end of batchbuffer for command submission where
+ userspace is allowed to use userspace fencing or long running compute
+ workloads. This also means no implicit fencing for shared buffers in these
+ cases.
diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
index f3ce49c5a34c..af55b334be2f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_display.c
+++ b/drivers/gpu/drm/virtio/virtgpu_display.c
@@ -314,25 +314,6 @@ virtio_gpu_user_framebuffer_create(struct drm_device *dev,
return &virtio_gpu_fb->base;
}
-static void vgdev_atomic_commit_tail(struct drm_atomic_state *state)
-{
- struct drm_device *dev = state->dev;
-
- drm_atomic_helper_commit_modeset_disables(dev, state);
- drm_atomic_helper_commit_modeset_enables(dev, state);
- drm_atomic_helper_commit_planes(dev, state, 0);
-
- drm_atomic_helper_fake_vblank(state);
- drm_atomic_helper_commit_hw_done(state);
-
- drm_atomic_helper_wait_for_vblanks(dev, state);
- drm_atomic_helper_cleanup_planes(dev, state);
-}
-
-static const struct drm_mode_config_helper_funcs virtio_mode_config_helpers = {
- .atomic_commit_tail = vgdev_atomic_commit_tail,
-};
-
static const struct drm_mode_config_funcs virtio_gpu_mode_funcs = {
.fb_create = virtio_gpu_user_framebuffer_create,
.atomic_check = drm_atomic_helper_check,
@@ -346,7 +327,6 @@ void virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev)
drm_mode_config_init(vgdev->ddev);
vgdev->ddev->mode_config.quirk_addfb_prefer_host_byte_order = true;
vgdev->ddev->mode_config.funcs = &virtio_gpu_mode_funcs;
- vgdev->ddev->mode_config.helper_private = &virtio_mode_config_helpers;
/* modes will be validated against the framebuffer size */
vgdev->ddev->mode_config.min_width = XRES_MIN;
--
2.27.0
My dma-fence lockdep annotations caught an inversion because we
allocate memory where we really shouldn't:
kmem_cache_alloc+0x2b/0x6d0
amdgpu_fence_emit+0x30/0x330 [amdgpu]
amdgpu_ib_schedule+0x306/0x550 [amdgpu]
amdgpu_job_run+0x10f/0x260 [amdgpu]
drm_sched_main+0x1b9/0x490 [gpu_sched]
kthread+0x12e/0x150
Trouble right now is that lockdep only validates against GFP_FS, which
would be good enough for shrinkers. But for mmu_notifiers we actually
need !GFP_ATOMIC, since they can be called from any page laundering,
even if GFP_NOFS or GFP_NOIO are set.
I guess we should improve the lockdep annotations for
fs_reclaim_acquire/release.
Of course the real fix is to properly preallocate this fence and stuff
it into the amdgpu job structure, but GFP_ATOMIC gets the lockdep splat
out of the way.
v2: Two more allocations in scheduler paths.
First one:
__kmalloc+0x58/0x720
amdgpu_vmid_grab+0x100/0xca0 [amdgpu]
amdgpu_job_dependency+0xf9/0x120 [amdgpu]
drm_sched_entity_pop_job+0x3f/0x440 [gpu_sched]
drm_sched_main+0xf9/0x490 [gpu_sched]
Second one:
kmem_cache_alloc+0x2b/0x6d0
amdgpu_sync_fence+0x7e/0x110 [amdgpu]
amdgpu_vmid_grab+0x86b/0xca0 [amdgpu]
amdgpu_job_dependency+0xf9/0x120 [amdgpu]
drm_sched_entity_pop_job+0x3f/0x440 [gpu_sched]
drm_sched_main+0xf9/0x490 [gpu_sched]
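(For reference, a hedged sketch of the proper preallocation fix
mentioned above; the prepare-time hook and the job field are
assumptions, not the actual amdgpu implementation:)

	/* Allocate the fence at job-creation time, where GFP_KERNEL is
	 * safe, instead of in the run_job/emit path, which runs inside
	 * the dma-fence signalling critical section. */
	static int amdgpu_job_prealloc_fence(struct amdgpu_job *job)
	{
		job->fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_KERNEL);
		if (!job->fence)
			return -ENOMEM;
		return 0;
	}

	/* amdgpu_fence_emit() would then consume job->fence instead of
	 * allocating, and GFP_ATOMIC would no longer be needed. */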
Cc: linux-media(a)vger.kernel.org
Cc: linaro-mm-sig(a)lists.linaro.org
Cc: linux-rdma(a)vger.kernel.org
Cc: amd-gfx(a)lists.freedesktop.org
Cc: intel-gfx(a)lists.freedesktop.org
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Maarten Lankhorst <maarten.lankhorst(a)linux.intel.com>
Cc: Christian König <christian.koenig(a)amd.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 8d84975885cd..a089a827fdfe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -143,7 +143,7 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
uint32_t seq;
int r;
- fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_KERNEL);
+ fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
if (fence == NULL)
return -ENOMEM;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index 267fa45ddb66..a333ca2d4ddd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -208,7 +208,7 @@ static int amdgpu_vmid_grab_idle(struct amdgpu_vm *vm,
if (ring->vmid_wait && !dma_fence_is_signaled(ring->vmid_wait))
return amdgpu_sync_fence(sync, ring->vmid_wait);
- fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_KERNEL);
+ fences = kmalloc_array(sizeof(void *), id_mgr->num_ids, GFP_ATOMIC);
if (!fences)
return -ENOMEM;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
index 8ea6c49529e7..af22b526cec9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
@@ -160,7 +160,7 @@ int amdgpu_sync_fence(struct amdgpu_sync *sync, struct dma_fence *f)
if (amdgpu_sync_add_later(sync, f))
return 0;
- e = kmem_cache_alloc(amdgpu_sync_slab, GFP_KERNEL);
+ e = kmem_cache_alloc(amdgpu_sync_slab, GFP_ATOMIC);
if (!e)
return -ENOMEM;
--
2.27.0