Hi,
This series fixes the issue introduced in cf586021642d80 ("drm/i915/gt: Pipelined page migration") where, as reported by Matt, an error in a chain of requests is reported only if it happens in the last request.
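To illustrate the problem with a simplified sketch (hypothetical helper name, not the actual driver code): before this series the migration loop only kept a reference to the most recent request, so an error signalled on an earlier link of the chain was never seen by the caller, who waits only on *out.

/*
 * Simplified sketch of the old behaviour: each iteration drops the
 * previous request, so only the last one, and only its error, survives.
 */
static void keep_only_last(struct i915_request **out, struct i915_request *rq)
{
	if (*out)
		i915_request_put(*out);	/* earlier request (and its error) is dropped */
	*out = i915_request_get(rq);
	i915_request_add(rq);
}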
However, Chris noticed that without ensuring exclusive locking we might end up in a deadlock. That's why patch 1 throttles for ring space, in order to make sure that no one is holding on to it.
Version 1 of this series was reviewed by Matt, and this version adds Chris' exclusive locking.
Thanks, Chris, for this work.
Andi
Changelog
=========
v3 -> v4
 - In v3 the timeline was being locked, but I forgot that request_create() and request_add() lock the timeline as well: the former takes the lock, the latter releases it. In order to avoid this extra lock/unlock, we need the "_locked" versions of those functions.

v2 -> v3
 - Really lock the timeline before generating all the requests, up until the last one.

v1 -> v2
 - Add patch 1 to ensure exclusive locking of the timeline.
 - Reword the commit message of patch 2.
Andi Shyti (4):
  drm/i915/gt: Add intel_context_timeline_is_locked helper
  drm/i915: Create the locked version of the request create
  drm/i915: Create the locked version of the request add
  drm/i915/gt: Make sure that errors are propagated through request chains

Chris Wilson (1):
  drm/i915: Throttle for ringspace prior to taking the timeline mutex

 drivers/gpu/drm/i915/gt/intel_context.c | 41 +++++++++++++++++++
 drivers/gpu/drm/i915/gt/intel_context.h |  8 ++++
 drivers/gpu/drm/i915/gt/intel_migrate.c | 41 ++++++++++++++-----
 drivers/gpu/drm/i915/i915_request.c     | 54 ++++++++++++++++++-------
 drivers/gpu/drm/i915/i915_request.h     |  3 ++
 5 files changed, 122 insertions(+), 25 deletions(-)
From: Chris Wilson chris@chris-wilson.co.uk
Before taking exclusive ownership of the ring for emitting the request, wait for space in the ring to become available. This allows others to take the timeline->mutex to make forward progress while userspace is blocked.

In particular, this allows regular clients to issue requests on the kernel context, potentially filling the ring, while still allowing the higher-priority heartbeats and pulses to be submitted without being blocked by the less critical work.
Signed-off-by: Chris Wilson chris.p.wilson@linux.intel.com
Cc: Maciej Patelczyk maciej.patelczyk@intel.com
Cc: stable@vger.kernel.org
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
---
 drivers/gpu/drm/i915/gt/intel_context.c | 41 +++++++++++++++++++++++++
 drivers/gpu/drm/i915/gt/intel_context.h |  2 ++
 drivers/gpu/drm/i915/i915_request.c     |  3 ++
 3 files changed, 46 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
index 2aa63ec521b89..59cd612a23561 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.c
+++ b/drivers/gpu/drm/i915/gt/intel_context.c
@@ -626,6 +626,47 @@ bool intel_context_revoke(struct intel_context *ce)
 	return ret;
 }
 
+int intel_context_throttle(const struct intel_context *ce)
+{
+	const struct intel_ring *ring = ce->ring;
+	const struct intel_timeline *tl = ce->timeline;
+	struct i915_request *rq;
+	int err = 0;
+
+	if (READ_ONCE(ring->space) >= SZ_1K)
+		return 0;
+
+	rcu_read_lock();
+	list_for_each_entry_reverse(rq, &tl->requests, link) {
+		if (__i915_request_is_complete(rq))
+			break;
+
+		if (rq->ring != ring)
+			continue;
+
+		/* Wait until there will be enough space following that rq */
+		if (__intel_ring_space(rq->postfix,
+				       ring->emit,
+				       ring->size) < ring->size / 2) {
+			if (i915_request_get_rcu(rq)) {
+				rcu_read_unlock();
+
+				if (i915_request_wait(rq,
+						      I915_WAIT_INTERRUPTIBLE,
+						      MAX_SCHEDULE_TIMEOUT) < 0)
+					err = -EINTR;
+
+				rcu_read_lock();
+				i915_request_put(rq);
+			}
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return err;
+}
+
 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
 #include "selftest_context.c"
 #endif
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index 0a8d553da3f43..f919a66cebf5b 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -226,6 +226,8 @@ static inline void intel_context_exit(struct intel_context *ce)
 		ce->ops->exit(ce);
 }
 
+int intel_context_throttle(const struct intel_context *ce);
+
 static inline struct intel_context *intel_context_get(struct intel_context *ce)
 {
 	kref_get(&ce->ref);
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 630a732aaecca..72aed544f8714 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1034,6 +1034,9 @@ i915_request_create(struct intel_context *ce)
 	struct i915_request *rq;
 	struct intel_timeline *tl;
 
+	if (intel_context_throttle(ce))
+		return ERR_PTR(-EINTR);
+
 	tl = intel_context_timeline_lock(ce);
 	if (IS_ERR(tl))
 		return ERR_CAST(tl);
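As a usage note, the caller-visible contract of i915_request_create() stays the same; only the wait moves in front of the lock. A minimal illustration (the helper below is hypothetical, not part of the patch):

/*
 * Hypothetical caller: the throttle may now return -EINTR before the
 * timeline mutex is ever taken, which the caller sees as an error pointer.
 */
static int emit_empty_request(struct intel_context *ce)
{
	struct i915_request *rq;

	rq = i915_request_create(ce);	/* throttles for ring space, then locks the timeline */
	if (IS_ERR(rq))
		return PTR_ERR(rq);	/* e.g. -EINTR if the wait was interrupted */

	i915_request_add(rq);		/* queues the request and unlocks the timeline */
	return 0;
}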
On 08.03.2023 10:41, Andi Shyti wrote:
From: Chris Wilson chris@chris-wilson.co.uk
Before taking exclusive ownership of the ring for emitting the request, wait for space in the ring to become available. This allows others to take the timeline->mutex to make forward progresses while userspace is blocked.
In particular, this allows regular clients to issue requests on the kernel context, potentially filling the ring, but allow the higher priority heartbeats and pulses to still be submitted without being blocked by the less critical work.
Signed-off-by: Chris Wilson chris.p.wilson@linux.intel.com Cc: Maciej Patelczyk maciej.patelczyk@intel.com Cc: stable@vger.kernel.org Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
Reviewed-by: Andrzej Hajda andrzej.hajda@intel.com
Regards Andrzej
We have:
- intel_context_timeline_lock()
- intel_context_timeline_unlock()
In the next patches we will also need:
- intel_context_timeline_is_locked()
Add it.
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/gt/intel_context.h | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
index f919a66cebf5b..87d5e2d60b6db 100644
--- a/drivers/gpu/drm/i915/gt/intel_context.h
+++ b/drivers/gpu/drm/i915/gt/intel_context.h
@@ -265,6 +265,12 @@ static inline void intel_context_timeline_unlock(struct intel_timeline *tl)
 	mutex_unlock(&tl->mutex);
 }
 
+static inline void intel_context_assert_timeline_is_locked(struct intel_timeline *tl)
+	__must_hold(&tl->mutex)
+{
+	lockdep_assert_held(&tl->mutex);
+}
+
 int intel_context_prepare_remote_request(struct intel_context *ce,
 					 struct i915_request *rq);
On 3/8/2023 10:41 AM, Andi Shyti wrote:
We have:
- intel_context_timeline_lock()
- intel_context_timeline_unlock()
In the next patches we will also need:
- intel_context_timeline_is_locked()
Add it.
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org
Reviewed-by: Nirmoy Das nirmoy.das@intel.com
Add a locked version of the request creation function: one that does not take the timeline lock itself but expects the caller to already hold it.
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/i915_request.c | 43 +++++++++++++++++++----------
 drivers/gpu/drm/i915/i915_request.h |  2 ++
 2 files changed, 31 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 72aed544f8714..5ddb0e02b06b7 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1028,18 +1028,11 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	return ERR_PTR(ret);
 }
 
-struct i915_request *
-i915_request_create(struct intel_context *ce)
+static struct i915_request *
+__i915_request_create_locked(struct intel_context *ce)
 {
 	struct i915_request *rq;
-	struct intel_timeline *tl;
-
-	if (intel_context_throttle(ce))
-		return ERR_PTR(-EINTR);
-
-	tl = intel_context_timeline_lock(ce);
-	if (IS_ERR(tl))
-		return ERR_CAST(tl);
+	struct intel_timeline *tl = ce->timeline;
 
 	/* Move our oldest request to the slab-cache (if not in use!) */
 	rq = list_first_entry(&tl->requests, typeof(*rq), link);
@@ -1049,16 +1042,38 @@ i915_request_create(struct intel_context *ce)
 	intel_context_enter(ce);
 	rq = __i915_request_create(ce, GFP_KERNEL);
 	intel_context_exit(ce); /* active reference transferred to request */
-	if (IS_ERR(rq))
-		goto err_unlock;
 
 	/* Check that we do not interrupt ourselves with a new request */
 	rq->cookie = lockdep_pin_lock(&tl->mutex);
 
 	return rq;
+}
+
+struct i915_request *
+i915_request_create_locked(struct intel_context *ce)
+{
+	intel_context_assert_timeline_is_locked(ce->timeline);
+
+	if (intel_context_throttle(ce))
+		return ERR_PTR(-EINTR);
+
+	return __i915_request_create_locked(ce);
+}
+
+struct i915_request *
+i915_request_create(struct intel_context *ce)
+{
+	struct i915_request *rq;
+	struct intel_timeline *tl;
+
+	tl = intel_context_timeline_lock(ce);
+	if (IS_ERR(tl))
+		return ERR_CAST(tl);
+
+	rq = __i915_request_create_locked(ce);
+	if (IS_ERR(rq))
+		intel_context_timeline_unlock(tl);
 
-err_unlock:
-	intel_context_timeline_unlock(tl);
 	return rq;
 }
 
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index f5e1bb5e857aa..bb48bd4605c03 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -374,6 +374,8 @@ struct i915_request * __must_check
 __i915_request_create(struct intel_context *ce, gfp_t gfp);
 struct i915_request * __must_check
 i915_request_create(struct intel_context *ce);
+struct i915_request * __must_check
+i915_request_create_locked(struct intel_context *ce);
 
 void __i915_request_skip(struct i915_request *rq);
 bool i915_request_set_error_once(struct i915_request *rq, int error);
On 3/8/2023 10:41 AM, Andi Shyti wrote:
Make version of the request creation that doesn't hold any lock.
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org
Reviewed-by: Nirmoy Das nirmoy.das@intel.com
i915_request_add() assumes that the timeline is locked when the function is called and releases the lock before exiting. But in the next commit we have a case where releasing the timeline mutex is not desired.
Make a new i915_request_add_locked() version of the function where the lock is not released.
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
Cc: stable@vger.kernel.org
---
 drivers/gpu/drm/i915/i915_request.c | 14 +++++++++++---
 drivers/gpu/drm/i915/i915_request.h |  1 +
 2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 5ddb0e02b06b7..a4af16e25d966 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1852,13 +1852,13 @@ void __i915_request_queue(struct i915_request *rq,
 	local_bh_enable(); /* kick tasklets */
 }
 
-void i915_request_add(struct i915_request *rq)
+void i915_request_add_locked(struct i915_request *rq)
 {
 	struct intel_timeline * const tl = i915_request_timeline(rq);
 	struct i915_sched_attr attr = {};
 	struct i915_gem_context *ctx;
 
-	lockdep_assert_held(&tl->mutex);
+	intel_context_assert_timeline_is_locked(tl);
 	lockdep_unpin_lock(&tl->mutex, rq->cookie);
 
 	trace_i915_request_add(rq);
@@ -1873,7 +1873,15 @@ void i915_request_add(struct i915_request *rq)
 
 	__i915_request_queue(rq, &attr);
 
-	mutex_unlock(&tl->mutex);
+}
+
+void i915_request_add(struct i915_request *rq)
+{
+	struct intel_timeline * const tl = i915_request_timeline(rq);
+
+	i915_request_add_locked(rq);
+
+	intel_context_timeline_unlock(tl);
 }
 
 static unsigned long local_clock_ns(unsigned int *cpu)
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index bb48bd4605c03..29e3a37c300a7 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -425,6 +425,7 @@ int i915_request_await_deps(struct i915_request *rq, const struct i915_deps *dep
 int i915_request_await_execution(struct i915_request *rq,
 				 struct dma_fence *fence);
 
+void i915_request_add_locked(struct i915_request *rq);
 void i915_request_add(struct i915_request *rq);
 
 bool __i915_request_submit(struct i915_request *request);
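Taken together with the previous patches, the intended usage of the two _locked helpers is to build a whole chain of requests under a single hold of the timeline mutex. A rough sketch, assuming the helpers introduced so far in the series (hypothetical function name, simplified error handling):

/* Hypothetical sketch: build n requests while holding the timeline mutex once. */
static int build_chain_locked(struct intel_context *ce, int n)
{
	struct i915_request *rq;
	int i, err = 0;

	mutex_lock(&ce->timeline->mutex);

	for (i = 0; i < n; i++) {
		/* Asserts the timeline mutex is held and throttles for ring space. */
		rq = i915_request_create_locked(ce);
		if (IS_ERR(rq)) {
			err = PTR_ERR(rq);
			break;
		}

		/* ... emit the payload for this link of the chain here ... */

		/* Queues the request but leaves the timeline mutex held. */
		i915_request_add_locked(rq);
	}

	mutex_unlock(&ce->timeline->mutex);
	return err;
}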
Currently, when we perform operations such as clearing or copying large blocks of memory, we generate multiple requests that are executed in a chain.
However, if one of these requests fails, we may not realize it unless it happens to be the last request in the chain. This is because errors are not properly propagated.
To address this issue, we need to ensure that the chain of fence notifications is always propagated so that we can reach the final fence associated with the last request. By doing so, we will be able to detect any memory operation failures and determine whether the memory is still invalid.

On copy and clear migration, signal fences upon request completion to ensure that we have a reliable propagation of the operation outcome.
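To make the hand-over concrete, the pattern used in the hunks below can be distilled as follows (a sketch with a hypothetical helper name; the real code is open-coded in intel_context_migrate_copy() and intel_context_migrate_clear()):

/*
 * Hold the new request's submit fence, queue it, then release the hold on
 * the previous request's submit fence. This keeps the submit fences chained
 * from one request to the next, so an error on any link can propagate down
 * to the final fence left in *out (whose hold is dropped after the loop).
 */
static void chain_and_hand_over(struct i915_request **out, struct i915_request *rq)
{
	i915_sw_fence_await(&rq->submit);	/* extra hold on rq's submit fence */
	i915_request_get(rq);
	i915_request_add_locked(rq);
	if (*out) {
		i915_sw_fence_complete(&(*out)->submit);	/* release the previous link */
		i915_request_put(*out);
	}
	*out = rq;
}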
Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration")
Reported-by: Matthew Auld matthew.auld@intel.com
Suggested-by: Chris Wilson chris@chris-wilson.co.uk
Signed-off-by: Andi Shyti andi.shyti@linux.intel.com
Cc: stable@vger.kernel.org
Reviewed-by: Matthew Auld matthew.auld@intel.com
---
 drivers/gpu/drm/i915/gt/intel_migrate.c | 41 ++++++++++++++++++-------
 1 file changed, 30 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index 3f638f1987968..0031e7b1b4704 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -742,13 +742,19 @@ intel_context_migrate_copy(struct intel_context *ce,
 		dst_offset = 2 * CHUNK_SZ;
 	}
 
+	/*
+	 * While building the chain of requests, we need to ensure
+	 * that no one can sneak into the timeline unnoticed.
+	 */
+	mutex_lock(&ce->timeline->mutex);
+
 	do {
 		int len;
 
-		rq = i915_request_create(ce);
+		rq = i915_request_create_locked(ce);
 		if (IS_ERR(rq)) {
 			err = PTR_ERR(rq);
-			goto out_ce;
+			break;
 		}
 
 		if (deps) {
@@ -878,10 +884,14 @@ intel_context_migrate_copy(struct intel_context *ce,
 
 		/* Arbitration is re-enabled between requests. */
 out_rq:
-		if (*out)
+		i915_sw_fence_await(&rq->submit);
+		i915_request_get(rq);
+		i915_request_add_locked(rq);
+		if (*out) {
+			i915_sw_fence_complete(&(*out)->submit);
 			i915_request_put(*out);
-		*out = i915_request_get(rq);
-		i915_request_add(rq);
+		}
+		*out = rq;
 
 		if (err)
 			break;
@@ -905,7 +915,10 @@ intel_context_migrate_copy(struct intel_context *ce,
 		cond_resched();
 	} while (1);
 
-out_ce:
+	mutex_unlock(&ce->timeline->mutex);
+
+	if (*out)
+		i915_sw_fence_complete(&(*out)->submit);
 	return err;
 }
 
@@ -1005,7 +1018,7 @@ intel_context_migrate_clear(struct intel_context *ce,
 		rq = i915_request_create(ce);
 		if (IS_ERR(rq)) {
 			err = PTR_ERR(rq);
-			goto out_ce;
+			break;
 		}
 
 		if (deps) {
@@ -1056,17 +1069,23 @@ intel_context_migrate_clear(struct intel_context *ce,
 
 		/* Arbitration is re-enabled between requests. */
 out_rq:
-		if (*out)
-			i915_request_put(*out);
-		*out = i915_request_get(rq);
+		i915_sw_fence_await(&rq->submit);
+		i915_request_get(rq);
 		i915_request_add(rq);
+		if (*out) {
+			i915_sw_fence_complete(&(*out)->submit);
+			i915_request_put(*out);
+		}
+		*out = rq;
+
 		if (err || !it.sg || !sg_dma_len(it.sg))
 			break;
 
 		cond_resched();
 	} while (1);
 
-out_ce:
+	if (*out)
+		i915_sw_fence_complete(&(*out)->submit);
 	return err;
 }
On 08/03/2023 09:41, Andi Shyti wrote:
Currently, when we perform operations such as clearing or copying large blocks of memory, we generate multiple requests that are executed in a chain.
However, if one of these requests fails, we may not realize it unless it happens to be the last request in the chain. This is because errors are not properly propagated.
For this we need to keep propagating the chain of fence notification in order to always reach the final fence associated to the final request.
To address this issue, we need to ensure that the chain of fence notifications is always propagated so that we can reach the final fence associated with the last request. By doing so, we will be able to detect any memory operation failures and determine whether the memory is still invalid.
On copy and clear migration signal fences upon completion.
On copy and clear migration, signal fences upon request completion to ensure that we have a reliable perpetuation of the operation outcome.
Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration") Reported-by: Matthew Auld matthew.auld@intel.com Suggested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org Reviewed-by: Matthew Auld matthew.auld@intel.com
drivers/gpu/drm/i915/gt/intel_migrate.c | 41 ++++++++++++++++++------- 1 file changed, 30 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index 3f638f1987968..0031e7b1b4704 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -742,13 +742,19 @@ intel_context_migrate_copy(struct intel_context *ce,
 		dst_offset = 2 * CHUNK_SZ;
 	}
 
+	/*
+	 * While building the chain of requests, we need to ensure
+	 * that no one can sneak into the timeline unnoticed.
+	 */
+	mutex_lock(&ce->timeline->mutex);
+
Hmm, this looks different/new from the previous version. Why do we only do this for the copy and not the clear btw? Both should be conceptually the same. Sorry if I'm misunderstanding something here.
On 3/8/2023 10:41 AM, Andi Shyti wrote:
Currently, when we perform operations such as clearing or copying large blocks of memory, we generate multiple requests that are executed in a chain.
However, if one of these requests fails, we may not realize it unless it happens to be the last request in the chain. This is because errors are not properly propagated.
For this we need to keep propagating the chain of fence notification in order to always reach the final fence associated to the final request.
To address this issue, we need to ensure that the chain of fence notifications is always propagated so that we can reach the final fence associated with the last request. By doing so, we will be able to detect any memory operation failures and determine whether the memory is still invalid.
On copy and clear migration signal fences upon completion.
On copy and clear migration, signal fences upon request completion to ensure that we have a reliable perpetuation of the operation outcome.
Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration") Reported-by: Matthew Auld matthew.auld@intel.com Suggested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org Reviewed-by: Matthew Auld matthew.auld@intel.com
With Matt's comment regarding missing lock in intel_context_migrate_clear addressed, this is:
Acked-by: Nirmoy Das nirmoy.das@intel.com
On Tue, Apr 11, 2023 at 08:39:00AM +0200, Das, Nirmoy wrote:
On 3/8/2023 10:41 AM, Andi Shyti wrote:
Currently, when we perform operations such as clearing or copying large blocks of memory, we generate multiple requests that are executed in a chain.
However, if one of these requests fails, we may not realize it unless it happens to be the last request in the chain. This is because errors are not properly propagated.
For this we need to keep propagating the chain of fence notification in order to always reach the final fence associated to the final request.
To address this issue, we need to ensure that the chain of fence notifications is always propagated so that we can reach the final fence associated with the last request. By doing so, we will be able to detect any memory operation failures and determine whether the memory is still invalid.
On copy and clear migration signal fences upon completion.
On copy and clear migration, signal fences upon request completion to ensure that we have a reliable perpetuation of the operation outcome.
Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration") Reported-by: Matthew Auld matthew.auld@intel.com Suggested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org Reviewed-by: Matthew Auld matthew.auld@intel.com
With Matt's comment regarding missing lock in intel_context_migrate_clear addressed, this is:
Acked-by: Nirmoy Das nirmoy.das@intel.com
Nack!
Please get some ack from Joonas or Tvrtko before merging this series.
It is a big series targeting stable o.O where the revisions in the cover letter are not helping me to be confident that this is the right approach instead of simply reverting the original offending commit:
cf586021642d ("drm/i915/gt: Pipelined page migration")
It looks to me that we are adding magic on top of magic to work around the deadlocks, but then adding more waits inside locks... And with the hang checks vs heartbeats, is this really an issue on current upstream code, or was it only on DII?
Where was the bug report to start with?
Hi Rodrigo,
Currently, when we perform operations such as clearing or copying large blocks of memory, we generate multiple requests that are executed in a chain.
However, if one of these requests fails, we may not realize it unless it happens to be the last request in the chain. This is because errors are not properly propagated.
For this we need to keep propagating the chain of fence notification in order to always reach the final fence associated to the final request.
To address this issue, we need to ensure that the chain of fence notifications is always propagated so that we can reach the final fence associated with the last request. By doing so, we will be able to detect any memory operation failures and determine whether the memory is still invalid.
On copy and clear migration signal fences upon completion.
On copy and clear migration, signal fences upon request completion to ensure that we have a reliable perpetuation of the operation outcome.
Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration") Reported-by: Matthew Auld matthew.auld@intel.com Suggested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org Reviewed-by: Matthew Auld matthew.auld@intel.com
With Matt's comment regarding missing lock in intel_context_migrate_clear addressed, this is:
Acked-by: Nirmoy Das nirmoy.das@intel.com
Nack!
Please get some ack from Joonas or Tvrtko before merging this series.
There is no architectural change... of course, Joonas and Tvrtko are more than welcome (and actually invited) to look into this patch.
And, btw, there are still some discussions ongoing on this whole series, so I'm not going to merge it any time soon. I'm just happy to revive the discussion.
It is a big series targeting stable o.O where the revisions in the cover letter are not helping me to be confident that this is the right approach instead of simply reverting the original offending commit:
cf586021642d ("drm/i915/gt: Pipelined page migration")
Why should we remove all the migration completely? What about the copy?
It looks to me that we are adding magic on top of magic to workaround the deadlocks, but then adding more waits inside locks... And this with the hang checks vs heartbeats, is this really an issue on current upstream code? or was only on DII?
There is no real magic happening here. It's just that the error was not reaching the end of the operation chain, while this patch passes it along.
Where was the bug report to start with?
Matt has reported this; I will give you the necessary links offline.
Thanks for looking into this, Andi
On Wed, Apr 12, 2023 at 12:56:26PM +0200, Andi Shyti wrote:
Hi Rodrigo,
Currently, when we perform operations such as clearing or copying large blocks of memory, we generate multiple requests that are executed in a chain.
However, if one of these requests fails, we may not realize it unless it happens to be the last request in the chain. This is because errors are not properly propagated.
For this we need to keep propagating the chain of fence notification in order to always reach the final fence associated to the final request.
To address this issue, we need to ensure that the chain of fence notifications is always propagated so that we can reach the final fence associated with the last request. By doing so, we will be able to detect any memory operation failures and determine whether the memory is still invalid.
On copy and clear migration signal fences upon completion.
On copy and clear migration, signal fences upon request completion to ensure that we have a reliable perpetuation of the operation outcome.
Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration") Reported-by: Matthew Auld matthew.auld@intel.com Suggested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org Reviewed-by: Matthew Auld matthew.auld@intel.com
With Matt's comment regarding missing lock in intel_context_migrate_clear addressed, this is:
Acked-by: Nirmoy Das nirmoy.das@intel.com
Nack!
Please get some ack from Joonas or Tvrtko before merging this series.
There is no architectural change... of course, Joonas and Tvrtko are more than welcome (and actually invited) to look into this patch.
And, btw, there are still some discussions ongoing on this whole series, so that I'm not going to merge it any time soon. I'm just happy to revive the discussion.
It is a big series targeting stable o.O where the revisions in the cover letter are not helping me to be confident that this is the right approach instead of simply reverting the original offending commit:
cf586021642d ("drm/i915/gt: Pipelined page migration")
Why should we remove all the migration completely? What about the copy?
Is there any other alternative that doesn't hurt the Linux stable rules?
I honestly fail to see that this one here is "obviously correct and tested", and it looks to me that it has more than "100 lines, with context".
Does this series really "fix only one thing" with 5 patches?
It looks to me that we are adding magic on top of magic to workaround the deadlocks, but then adding more waits inside locks... And this with the hang checks vs heartbeats, is this really an issue on current upstream code? or was only on DII?
There is no real magic happening here. It's just that the error message was not reaching the end of the operation while this patch is passing it over.
Where was the bug report to start with?
Matt has reported this, I will give to you the necessary links to it offline.
It would be really good to have a report to see if this is a "real bug that bothers people (not a, “This could be a problem…” type thing)".
All quotes above are from: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
Thanks for looking into this, Andi
On 12/04/2023 14:10, Rodrigo Vivi wrote:
On Wed, Apr 12, 2023 at 12:56:26PM +0200, Andi Shyti wrote:
Hi Rodrigo,
Currently, when we perform operations such as clearing or copying large blocks of memory, we generate multiple requests that are executed in a chain.
However, if one of these requests fails, we may not realize it unless it happens to be the last request in the chain. This is because errors are not properly propagated.
For this we need to keep propagating the chain of fence notification in order to always reach the final fence associated to the final request.
To address this issue, we need to ensure that the chain of fence notifications is always propagated so that we can reach the final fence associated with the last request. By doing so, we will be able to detect any memory operation failures and determine whether the memory is still invalid.
On copy and clear migration signal fences upon completion.
On copy and clear migration, signal fences upon request completion to ensure that we have a reliable perpetuation of the operation outcome.
Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration") Reported-by: Matthew Auld matthew.auld@intel.com Suggested-by: Chris Wilson chris@chris-wilson.co.uk Signed-off-by: Andi Shyti andi.shyti@linux.intel.com Cc: stable@vger.kernel.org
Try to find from which kernel version this needs to go in. For instance, if we look at cf586021642d80 it would be v5.15+, but actually in that commit there are no users apart from selftests. So I think we should find the first user which can be user-facing and mark the appropriate kernel version in the stable tag.
Reviewed-by: Matthew Auld matthew.auld@intel.com
With Matt's comment regarding missing lock in intel_context_migrate_clear addressed, this is:
Acked-by: Nirmoy Das nirmoy.das@intel.com
Nack!
Please get some ack from Joonas or Tvrtko before merging this series.
There is no architectural change... of course, Joonas and Tvrtko are more than welcome (and actually invited) to look into this patch.
And, btw, there are still some discussions ongoing on this whole series, so that I'm not going to merge it any time soon. I'm just happy to revive the discussion.
It is a big series targeting stable o.O where the revisions in the cover letter are not helping me to be confident that this is the right approach instead of simply reverting the original offending commit:
cf586021642d ("drm/i915/gt: Pipelined page migration")
Why should we remove all the migration completely? What about the copy?
Is there any other alternative that doesn't hurt the Linux stable rules?
I honestly fail to see this one here is "obviously corrected and tested" and it looks to me that it has more "than 100 lines, with context".
Does this series really "fix only one thing" with 5 patches?
This is challenging.
A fix looks needed to me at the high level (I haven't read the patch details yet), but sending a series to stable can go quite badly; we had such a problem very recently with only a two-patch series. And this one is indeed also quite large.

Reverting cf586021642d80 definitely isn't an option, because other code depends on what it added and would need an alternative implementation. We would lose async clear/migrate, which would be bad, and the alternative could also be a large patch to implement.

So, since I think we are indeed stuck with fixing this: would it be better to squash it all into one patch for easier backporting?
We can also look if there are ways to make the diff smaller.
Regards,
Tvrtko