On Thu, 9 Jul 2020 at 09:05, Daniel Vetter <daniel@ffwll.ch> wrote:
On Thu, Jul 09, 2020 at 08:36:43AM +0100, Daniel Stone wrote:
On Tue, 7 Jul 2020 at 21:13, Daniel Vetter <daniel.vetter@ffwll.ch> wrote:
Comes up every few years, gets somewhat tedious to discuss, let's write this down once and for all.
Thanks for writing this up! I wonder if any of the notes from my reply to the previous-version thread would be helpful to more explicitly encode the carrot of dma-fence's positive guarantees, rather than just the stick of 'don't do this'. ;) Either way, this is:
I think the carrot should go into the intro section for dma-fence, this section here is very much just the "don't do this" part. The previous patches have an attempt at encoding this a bit, maybe see whether there's a place for your reply (or parts of it) to fit?
Sounds good to me.
Acked-by: Daniel Stone <daniels@collabora.com>
What I'm not sure about is whether the text should be more explicit in flat-out mandating the amdkfd eviction fences for long-running compute workloads or workloads where userspace fencing is allowed.
... or whether we just say that you can never use dma-fence in conjunction with userptr.
Uh, userptr is an entirely different thing. That one is ok. It's userspace fences or gpu futexes or future fences or whatever we want to call them. Or is there some other confusion here?
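(Roughly, the distinction, as a simplified and hypothetical kernel-style sketch with made-up function names rather than real driver code: a dma_fence wait is bounded because only the kernel or the hardware ever signals it, whereas a userspace fence is just memory that userspace may write whenever it likes, or never.)

    #include <linux/dma-fence.h>
    #include <linux/jiffies.h>
    #include <linux/sched.h>
    #include <linux/types.h>

    /* dma_fence: signalled by the kernel/hardware in bounded time. */
    long wait_dma_fence(struct dma_fence *fence)
    {
            return dma_fence_wait_timeout(fence, true,
                                          msecs_to_jiffies(10000));
    }

    /* "Userspace fence": plain memory that may never be written. */
    void wait_userspace_fence(u64 *seqno, u64 wanted)
    {
            /* Unbounded: spins forever if userspace never signals. */
            while (READ_ONCE(*seqno) < wanted)
                    cpu_relax();
    }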
I mean generating a dma_fence from a batch which will try to page in userptr. Given that userptr could be backed by absolutely anything at all, it doesn't seem smart to allow fences to rely on a pointer to an mmap'ed NFS file. So it seems like batches should be mutually exclusive between arbitrary SVM userptr and generating a dma-fence?
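As a strawman for what that mutual exclusion could look like at submit time (every name and flag below is made up for illustration, this is not any real driver's uAPI):

    #include <linux/errno.h>
    #include <linux/types.h>

    #define FOO_SUBMIT_OUT_FENCE   (1 << 0) /* batch produces a dma_fence */
    #define FOO_SUBMIT_SVM_USERPTR (1 << 1) /* batch may fault in SVM userptr */

    struct foo_submit_args {
            u32 flags;
            /* ... */
    };

    /*
     * Paging in arbitrary userptr (an mmap'ed NFS file, say) must never
     * sit under a dma_fence, so refuse the combination outright.
     */
    static int foo_submit_check(const struct foo_submit_args *args)
    {
            if ((args->flags & FOO_SUBMIT_OUT_FENCE) &&
                (args->flags & FOO_SUBMIT_SVM_USERPTR))
                    return -EINVAL;

            return 0;
    }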
Speaking of entirely different things ... the virtio-gpu bit really doesn't belong in this patch.
Cheers,
Daniel