Hi Rodrigo,
Currently, when we perform operations such as clearing or copying large blocks of memory, we generate multiple requests that are executed in a chain.
However, if one of these requests fails, we may not realize it unless it happens to be the last request in the chain. This is because errors are not properly propagated.
To address this issue, we need to keep propagating the chain of fence notifications so that we always reach the final fence associated with the last request. By doing so, we can detect any memory operation failure and determine whether the memory is still invalid.
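To make the idea concrete, here is a minimal user-space sketch of the pattern described above. It is not i915 code: every type and helper in it (mock_fence, mock_fence_set_error_once, mock_fence_signal_and_forward) is hypothetical. Each request's fence forwards its error to the next fence in the chain when it is signalled, so a waiter that only looks at the final fence still observes a failure that happened earlier in the chain.

#include <errno.h>
#include <stdio.h>

struct mock_fence {
        int error;              /* 0 on success, negative errno on failure */
        int signalled;
};

/* Record an error only once, mirroring the "set error once" idea. */
static void mock_fence_set_error_once(struct mock_fence *f, int error)
{
        if (!f->error && error)
                f->error = error;
}

/* Signal a fence and forward its error to the next fence in the chain. */
static void mock_fence_signal_and_forward(struct mock_fence *f,
                                          struct mock_fence *next)
{
        f->signalled = 1;
        if (next)
                mock_fence_set_error_once(next, f->error);
}

int main(void)
{
        struct mock_fence chain[4] = { { 0, 0 } };
        size_t i, n = sizeof(chain) / sizeof(chain[0]);

        /* Pretend an intermediate request (not the last one) failed. */
        chain[1].error = -EIO;

        /* Retire the chain in order, forwarding errors as we go. */
        for (i = 0; i < n; i++)
                mock_fence_signal_and_forward(&chain[i],
                                              i + 1 < n ? &chain[i + 1] : NULL);

        /* A waiter that only checks the final fence still sees the failure. */
        printf("final fence error: %d\n", chain[n - 1].error);
        return 0;
}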
On copy and clear migration, signal fences upon request completion to ensure that the outcome of the operation is reliably propagated.
Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration")
Reported-by: Matthew Auld <matthew.auld@intel.com>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
With Matt's comment regarding missing lock in intel_context_migrate_clear addressed, this is:
Acked-by: Nirmoy Das <nirmoy.das@intel.com>
Nack!
Please get some ack from Joonas or Tvrtko before merging this series.
There is no architectural change... of course, Joonas and Tvrtko are more than welcome (and actually invited) to look into this patch.
And, btw, there are still some discussions ongoing on this whole series, so I'm not going to merge it any time soon. I'm just happy to revive the discussion.
It is a big series targeting stable o.O, and the revision notes in the cover letter are not helping me to be confident that this is the right approach instead of simply reverting the original offending commit:
cf586021642d ("drm/i915/gt: Pipelined page migration")
Why should we remove all the migration completely? What about the copy?
It looks to me that we are adding magic on top of magic to work around the deadlocks, but then adding more waits inside locks... And with the hang checks vs heartbeats, is this really an issue in the current upstream code, or was it only in DII?
There is no real magic happening here. It's just that the error was not reaching the end of the operation chain; this patch passes it along.
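For contrast, here is a tiny sketch of the failure mode being described (again a hypothetical user-space model, not i915 code): if each fence is signalled without forwarding its error, a waiter that only checks the last fence in the chain sees success even though an intermediate request failed.

#include <errno.h>
#include <stdio.h>

/* Same hypothetical model as in the earlier sketch; nothing here is kernel API. */
struct mock_fence {
        int error;
        int signalled;
};

int main(void)
{
        struct mock_fence chain[4] = { { 0, 0 } };
        size_t i;

        chain[1].error = -EIO;  /* an intermediate request failed */

        /* Signal every fence, but never forward the error down the chain... */
        for (i = 0; i < 4; i++)
                chain[i].signalled = 1;

        /* ...so a waiter looking only at the last fence sees success (0). */
        printf("final fence error: %d\n", chain[3].error);
        return 0;
}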
Where was the bug report to start with?
Matt reported this; I will send you the necessary links offline.
Thanks for looking into this,
Andi