On 4/15/26 10:33, Tvrtko Ursulin wrote:
>
> On 15/04/2026 09:13, Christian König wrote:
>> On 4/15/26 09:58, Tvrtko Ursulin wrote:
>>>
>>> On 14/04/2026 19:30, Christian König wrote:
>>>> On 4/14/26 17:49, Tvrtko Ursulin wrote:
>>>>> trace_dma_fence_signaled, trace_dma_fence_wait_end and
>>>>> trace_dma_fence_destroy can all currently dereference a NULL fence->ops
>>>>> pointer after it has been reset on fence signalling.
>>>>>
>>>>> Let's use the safe string getters for most tracepoints to avoid this class
>>>>> of problem, while for the signal tracepoint we move it to before ops are
>>>>> cleared to avoid losing the driver and timeline name information. Apart
>>>>> from moving it we also need to add a new tracepoint class to bypass the
>>>>> safe name getters, since the signaled bit is already set.
>>>>>
>>>>> For dma_fence_init we also need to use the new tracepoint class since the
>>>>> RCU read lock is not held there, and we can do the same for enable
>>>>> signaling since there we are certain the fence cannot be signaled while
>>>>> we are holding the lock and have even validated fence->ops.
>>>>>
>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>>>>> Fixes: 541c8f2468b9 ("dma-buf: detach fence ops on signal v3")
>>>>> Cc: Christian König <christian.koenig(a)amd.com>
>>>>> Cc: Philipp Stanner <phasta(a)kernel.org>
>>>>> Cc: Boris Brezillon <boris.brezillon(a)collabora.com>
>>>>> Cc: linux-media(a)vger.kernel.org
>>>>> Cc: linaro-mm-sig(a)lists.linaro.org
>>>>> ---
>>>>>   drivers/dma-buf/dma-fence.c     | 3 ++-
>>>>>   include/trace/events/dma_fence.h | 33 ++++++++++++++++++++++++++++----
>>>>>   2 files changed, 31 insertions(+), 5 deletions(-)
>>>>>
>>>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>>>> index a2aa82f4eedd..b3bfa6943a8e 100644
>>>>> --- a/drivers/dma-buf/dma-fence.c
>>>>> +++ b/drivers/dma-buf/dma-fence.c
>>>>> @@ -363,6 +363,8 @@ void dma_fence_signal_timestamp_locked(struct dma_fence *fence,
>>>>>                          &fence->flags)))
>>>>>         return;
>>>>>
>>>>> +    trace_dma_fence_signaled(fence);
>>>>> +
>>>>>     /*
>>>>>      * When neither a release nor a wait operation is specified set the ops
>>>>>      * pointer to NULL to allow the fence structure to become independent
>>>>> @@ -377,7 +379,6 @@ void dma_fence_signal_timestamp_locked(struct dma_fence *fence,
>>>>>     fence->timestamp = timestamp;
>>>>>     set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
>>>>> -    trace_dma_fence_signaled(fence);
>>>>
>>>> I think this part here should be a separate patch.
>>>
>>> I had that in https://lore.kernel.org/dri-devel/20260330133623.17704-1-tvrtko.ursulin@iga… but the discussion fizzled out before an rb.
>>>
>>>>
>>>>>     list_for_each_entry_safe(cur, tmp, &cb_list, node) {
>>>>>         INIT_LIST_HEAD(&cur->node);
>>>>> diff --git a/include/trace/events/dma_fence.h b/include/trace/events/dma_fence.h
>>>>> index 3abba45c0601..9e0cb9ce2388 100644
>>>>> --- a/include/trace/events/dma_fence.h
>>>>> +++ b/include/trace/events/dma_fence.h
>>>>> @@ -9,12 +9,37 @@
>>>>>   struct dma_fence;
>>>>>
>>>>> +DECLARE_EVENT_CLASS(dma_fence,
>>>>> +
>>>>> +    TP_PROTO(struct dma_fence *fence),
>>>>> +
>>>>> +    TP_ARGS(fence),
>>>>> +
>>>>> +    TP_STRUCT__entry(
>>>>> +        __string(driver, dma_fence_driver_name(fence))
>>>>> +        __string(timeline, dma_fence_timeline_name(fence))
>>>>> +        __field(unsigned int, context)
>>>>> +        __field(unsigned int, seqno)
>>>>> +    ),
>>>>> +
>>>>> +    TP_fast_assign(
>>>>> +        __assign_str(driver);
>>>>> +        __assign_str(timeline);
>>>>> +        __entry->context = fence->context;
>>>>> +        __entry->seqno = fence->seqno;
>>>>> +    ),
>>>>> +
>>>>> +    TP_printk("driver=%s timeline=%s context=%u seqno=%u",
>>>>> +          __get_str(driver), __get_str(timeline), __entry->context,
>>>>> +          __entry->seqno)
>>>>> +);
>>>>> +
>>>>
>>>> Mhm, I'm strongly in favor of just using this approach for all trace points.
>>>>
>>>> The minimal extra overhead shouldn't really matter at all.
>>>
>>> Yeah, I am a bit on the fence. It would require a bit of an ugly rcu_read_lock around trace_dma_fence_signal_init
>>
>> I think as long as we only grab the RCU read side lock when the tracepoint is actually enabled then that shouldn't matter.
>>
>> I do remember patches flying by which optimized this use case for the whole trace subsystem but I didn't take a closer look at how to do that now.
>>
>>> and trace_dma_fence_signaled would lose the driver/timeline info _unless_ name helpers would also be changed to look at fence->ops instead of "is signaled". Those have no memory barriers so I am not sure I want to think about raciness and how to solve it.
>>
>> Mhm, that is a bit more problematic.
>>
>> ops is only set to NULL when neither free nor wait is specified, so checking is signaled is still the right thing to do for drivers which use those callbacks but still want to have the RCU protection of the returned strings.
>
> Hm yes, that too.
>
>
>> Ok, feel free to go ahead with this approach for now but please add a /* TODO: clean that up when most drivers switched to independent fences */.
>
> Thank you, I've sent an updated version with a comment to this effect placed to the event class definition. I put your r-b so please double check if you are happy with that version.
Yeah works for me, feel free to push to drm-misc-next.
Thanks,
Christian.
>
> Regards,
>
> Tvrtko
>>>>>   /*
>>>>>    * Safe only for call sites which are guaranteed to not race with fence
>>>>>    * signaling, holding the fence->lock and having checked for not signaled, or the
>>>>>    * signaling path itself.
>>>>>    */
>>>>> -DECLARE_EVENT_CLASS(dma_fence,
>>>>> +DECLARE_EVENT_CLASS(dma_fence_ops,
>>>>>
>>>>>     TP_PROTO(struct dma_fence *fence),
>>>>>
>>>>> @@ -46,7 +71,7 @@ DEFINE_EVENT(dma_fence, dma_fence_emit,
>>>>>     TP_ARGS(fence)
>>>>>   );
>>>>>
>>>>> -DEFINE_EVENT(dma_fence, dma_fence_init,
>>>>> +DEFINE_EVENT(dma_fence_ops, dma_fence_init,
>>>>>
>>>>>     TP_PROTO(struct dma_fence *fence),
>>>>>
>>>>> @@ -60,14 +85,14 @@ DEFINE_EVENT(dma_fence, dma_fence_destroy,
>>>>>     TP_ARGS(fence)
>>>>>   );
>>>>>
>>>>> -DEFINE_EVENT(dma_fence, dma_fence_enable_signal,
>>>>> +DEFINE_EVENT(dma_fence_ops, dma_fence_enable_signal,
>>>>>
>>>>>     TP_PROTO(struct dma_fence *fence),
>>>>>
>>>>>     TP_ARGS(fence)
>>>>>   );
>>>>>
>>>>> -DEFINE_EVENT(dma_fence, dma_fence_signaled,
>>>>> +DEFINE_EVENT(dma_fence_ops, dma_fence_signaled,
>>>>>
>>>>>     TP_PROTO(struct dma_fence *fence),
>>>>>
>>>>
>>>
>>
>
On 4/15/26 09:58, Tvrtko Ursulin wrote:
>
> On 14/04/2026 19:30, Christian König wrote:
>> On 4/14/26 17:49, Tvrtko Ursulin wrote:
>>> trace_dma_fence_signaled, trace_dma_fence_wait_end and
>>> trace_dma_fence_destroy can all currently dereference a NULL fence->ops
>>> pointer after it has been reset on fence signalling.
>>>
>>> Let's use the safe string getters for most tracepoints to avoid this class
>>> of problem, while for the signal tracepoint we move it to before ops are
>>> cleared to avoid losing the driver and timeline name information. Apart
>>> from moving it we also need to add a new tracepoint class to bypass the
>>> safe name getters, since the signaled bit is already set.
>>>
>>> For dma_fence_init we also need to use the new tracepoint class since the
>>> RCU read lock is not held there, and we can do the same for enable
>>> signaling since there we are certain the fence cannot be signaled while
>>> we are holding the lock and have even validated fence->ops.
>>>
>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
>>> Fixes: 541c8f2468b9 ("dma-buf: detach fence ops on signal v3")
>>> Cc: Christian König <christian.koenig(a)amd.com>
>>> Cc: Philipp Stanner <phasta(a)kernel.org>
>>> Cc: Boris Brezillon <boris.brezillon(a)collabora.com>
>>> Cc: linux-media(a)vger.kernel.org
>>> Cc: linaro-mm-sig(a)lists.linaro.org
>>> ---
>>>  drivers/dma-buf/dma-fence.c     | 3 ++-
>>>  include/trace/events/dma_fence.h | 33 ++++++++++++++++++++++++++++----
>>>  2 files changed, 31 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
>>> index a2aa82f4eedd..b3bfa6943a8e 100644
>>> --- a/drivers/dma-buf/dma-fence.c
>>> +++ b/drivers/dma-buf/dma-fence.c
>>> @@ -363,6 +363,8 @@ void dma_fence_signal_timestamp_locked(struct dma_fence *fence,
>>>                      &fence->flags)))
>>>         return;
>>>
>>> +    trace_dma_fence_signaled(fence);
>>> +
>>>     /*
>>>      * When neither a release nor a wait operation is specified set the ops
>>>      * pointer to NULL to allow the fence structure to become independent
>>> @@ -377,7 +379,6 @@ void dma_fence_signal_timestamp_locked(struct dma_fence *fence,
>>>     fence->timestamp = timestamp;
>>>     set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
>>> -    trace_dma_fence_signaled(fence);
>>
>> I think this part here should be a separate patch.
>
> I had that in https://lore.kernel.org/dri-devel/20260330133623.17704-1-tvrtko.ursulin@iga… but the discussion fizzled out before an rb.
>
>>
>>>     list_for_each_entry_safe(cur, tmp, &cb_list, node) {
>>>         INIT_LIST_HEAD(&cur->node);
>>> diff --git a/include/trace/events/dma_fence.h b/include/trace/events/dma_fence.h
>>> index 3abba45c0601..9e0cb9ce2388 100644
>>> --- a/include/trace/events/dma_fence.h
>>> +++ b/include/trace/events/dma_fence.h
>>> @@ -9,12 +9,37 @@
>>>   struct dma_fence;
>>>
>>> +DECLARE_EVENT_CLASS(dma_fence,
>>> +
>>> +    TP_PROTO(struct dma_fence *fence),
>>> +
>>> +    TP_ARGS(fence),
>>> +
>>> +    TP_STRUCT__entry(
>>> +        __string(driver, dma_fence_driver_name(fence))
>>> +        __string(timeline, dma_fence_timeline_name(fence))
>>> +        __field(unsigned int, context)
>>> +        __field(unsigned int, seqno)
>>> +    ),
>>> +
>>> +    TP_fast_assign(
>>> +        __assign_str(driver);
>>> +        __assign_str(timeline);
>>> +        __entry->context = fence->context;
>>> +        __entry->seqno = fence->seqno;
>>> +    ),
>>> +
>>> +    TP_printk("driver=%s timeline=%s context=%u seqno=%u",
>>> +          __get_str(driver), __get_str(timeline), __entry->context,
>>> +          __entry->seqno)
>>> +);
>>> +
>>
>> Mhm, I'm strongly in favor of just using this approach for all trace points.
>>
>> The minimal extra overhead shouldn't really matter at all.
>
> Yeah, I am a bit on the fence. It would require a bit of an ugly rcu_read_lock around trace_dma_fence_signal_init
I think as long as we only grab the RCU read side lock when the tracepoint is actually enabled then that shouldn't matter.
I do remember patches flying by which optimized this use case for the whole trace subsystem but I didn't take a closer look at how to do that now.
> and trace_dma_fence_signaled would lose the driver/timeline info _unless_ name helpers would also be changed to look at fence->ops instead of "is signaled". Those have no memory barriers so I am not sure I want to think about raciness and how to solve it.
Mhm, that is a bit more problematic.
ops is only set to NULL when neither free nor wait is specified, so checking is signaled is still the right thing to do for drivers which use those callbacks but still want to have the RCU protection of the returned strings.
Ok, feel free to go ahead with this approach for now but please add a /* TODO: clean that up when most drivers switched to independent fences */.
Thanks,
Christian.
>
> Regards,
>
> Tvrtko
>
>>
>> Regards,
>> Christian.
>>
>>>   /*
>>>    * Safe only for call sites which are guaranteed to not race with fence
>>>    * signaling, holding the fence->lock and having checked for not signaled, or the
>>>    * signaling path itself.
>>>    */
>>> -DECLARE_EVENT_CLASS(dma_fence,
>>> +DECLARE_EVENT_CLASS(dma_fence_ops,
>>>
>>>     TP_PROTO(struct dma_fence *fence),
>>>
>>> @@ -46,7 +71,7 @@ DEFINE_EVENT(dma_fence, dma_fence_emit,
>>>     TP_ARGS(fence)
>>>   );
>>>
>>> -DEFINE_EVENT(dma_fence, dma_fence_init,
>>> +DEFINE_EVENT(dma_fence_ops, dma_fence_init,
>>>
>>>     TP_PROTO(struct dma_fence *fence),
>>>
>>> @@ -60,14 +85,14 @@ DEFINE_EVENT(dma_fence, dma_fence_destroy,
>>>     TP_ARGS(fence)
>>>   );
>>>
>>> -DEFINE_EVENT(dma_fence, dma_fence_enable_signal,
>>> +DEFINE_EVENT(dma_fence_ops, dma_fence_enable_signal,
>>>
>>>     TP_PROTO(struct dma_fence *fence),
>>>
>>>     TP_ARGS(fence)
>>>   );
>>>
>>> -DEFINE_EVENT(dma_fence, dma_fence_signaled,
>>> +DEFINE_EVENT(dma_fence_ops, dma_fence_signaled,
>>>
>>>     TP_PROTO(struct dma_fence *fence),
>>>
>>
>
On 4/15/26 03:08, Zack Rusin wrote:
> On Tue, Apr 14, 2026 at 9:25 AM Christian König
> <christian.koenig(a)amd.com> wrote:
>>
>> On 4/14/26 12:55, popov.nkv(a)gmail.com wrote:
>>> From: Vladimir Popov <popov.nkv(a)gmail.com>
>>>
>>> If vmw_execbuf_fence_commands() call fails in
>>> vmw_kms_helper_validation_finish(), it sets *p_fence = NULL. If
>>> ctx->bo_list is not empty, the caller, vmw_kms_helper_validation_finish(),
>>> passes the fence through a chain of functions to dma_fence_is_array(),
>>> which causes a NULL pointer dereference in dma_fence_is_array():
>>>
>>> vmw_kms_helper_validation_finish() // pass NULL fence
>>> vmw_validation_done()
>>> vmw_validation_bo_fence()
>>> ttm_eu_fence_buffer_objects() // pass NULL fence
>>> dma_resv_add_fence()
>>> dma_fence_is_container()
>>> dma_fence_is_array() // NULL deref
>>
>> Well good catch, but that is clearly not the right fix.
>>
>> I'm not an expert on the vmwgfx code, but in case of an error vmw_validation_revert() should be called and not vmw_kms_helper_validation_finish().
>
> To me the patch looks correct. This path is explicitly for submission
> failure and does BO backoff plus vmw_validation_res_unreserve(ctx,
> true). The backoff=true branch skips committing dirty-state /
> backup-MOB changes, which is only correct if commands were not
> committed. Here the commands have already been submitted; only fence
> creation failed. So I think unlocking BO reservations without
> attaching a fence, then letting vmw_validation_done() keep taking the
> success path for resources is correct.
Ah! I would just avoid adding more TTM exec code dependencies.
We also have the always-signaled stub fence for such use cases. How about this change here:
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
index e1f18020170a..8dcb8cd19e29 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
@@ -3843,7 +3843,7 @@ int vmw_execbuf_fence_commands(struct drm_file *file_priv,
if (unlikely(ret != 0 && !synced)) {
(void) vmw_fallback_wait(dev_priv, false, false, sequence,
false, VMW_FENCE_WAIT_TIMEOUT);
- *p_fence = NULL;
+ *p_fence = dma_fence_get_stub();
}
return ret;
> iirc the same helper is used by execbuf, and the shared-helper fix
> correctly covers both paths so this is probably not only a kms issue.
>
> Untangling this code would make sense because it's confusing, but
> that's not something I'd expect Vladimir to do :)
Yeah, the fence memory allocation should definitely be moved before submitting the commands.
But that is clearly more work.
Thanks,
Christian.
>
> z
On 4/14/26 17:49, Tvrtko Ursulin wrote:
> trace_dma_fence_signaled, trace_dma_fence_wait_end and
> trace_dma_fence_destroy can all currently dereference a NULL fence->ops
> pointer after it has been reset on fence signalling.
>
> Let's use the safe string getters for most tracepoints to avoid this class
> of problem, while for the signal tracepoint we move it to before ops are
> cleared to avoid losing the driver and timeline name information. Apart
> from moving it we also need to add a new tracepoint class to bypass the
> safe name getters, since the signaled bit is already set.
>
> For dma_fence_init we also need to use the new tracepoint class since the
> RCU read lock is not held there, and we can do the same for enable
> signaling since there we are certain the fence cannot be signaled while
> we are holding the lock and have even validated fence->ops.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin(a)igalia.com>
> Fixes: 541c8f2468b9 ("dma-buf: detach fence ops on signal v3")
> Cc: Christian König <christian.koenig(a)amd.com>
> Cc: Philipp Stanner <phasta(a)kernel.org>
> Cc: Boris Brezillon <boris.brezillon(a)collabora.com>
> Cc: linux-media(a)vger.kernel.org
> Cc: linaro-mm-sig(a)lists.linaro.org
> ---
> drivers/dma-buf/dma-fence.c | 3 ++-
> include/trace/events/dma_fence.h | 33 ++++++++++++++++++++++++++++----
> 2 files changed, 31 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> index a2aa82f4eedd..b3bfa6943a8e 100644
> --- a/drivers/dma-buf/dma-fence.c
> +++ b/drivers/dma-buf/dma-fence.c
> @@ -363,6 +363,8 @@ void dma_fence_signal_timestamp_locked(struct dma_fence *fence,
> &fence->flags)))
> return;
>
> + trace_dma_fence_signaled(fence);
> +
> /*
> * When neither a release nor a wait operation is specified set the ops
> * pointer to NULL to allow the fence structure to become independent
> @@ -377,7 +379,6 @@ void dma_fence_signal_timestamp_locked(struct dma_fence *fence,
>
> fence->timestamp = timestamp;
> set_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT, &fence->flags);
> - trace_dma_fence_signaled(fence);
I think this part here should be a separate patch.
>
> list_for_each_entry_safe(cur, tmp, &cb_list, node) {
> INIT_LIST_HEAD(&cur->node);
> diff --git a/include/trace/events/dma_fence.h b/include/trace/events/dma_fence.h
> index 3abba45c0601..9e0cb9ce2388 100644
> --- a/include/trace/events/dma_fence.h
> +++ b/include/trace/events/dma_fence.h
> @@ -9,12 +9,37 @@
>
> struct dma_fence;
>
> +DECLARE_EVENT_CLASS(dma_fence,
> +
> + TP_PROTO(struct dma_fence *fence),
> +
> + TP_ARGS(fence),
> +
> + TP_STRUCT__entry(
> + __string(driver, dma_fence_driver_name(fence))
> + __string(timeline, dma_fence_timeline_name(fence))
> + __field(unsigned int, context)
> + __field(unsigned int, seqno)
> + ),
> +
> + TP_fast_assign(
> + __assign_str(driver);
> + __assign_str(timeline);
> + __entry->context = fence->context;
> + __entry->seqno = fence->seqno;
> + ),
> +
> + TP_printk("driver=%s timeline=%s context=%u seqno=%u",
> + __get_str(driver), __get_str(timeline), __entry->context,
> + __entry->seqno)
> +);
> +
Mhm, I'm strongly in favor of just using this approach for all trace points.
The minimal extra overhead shouldn't really matter at all.
Regards,
Christian.
> /*
> * Safe only for call sites which are guaranteed to not race with fence
> * signaling, holding the fence->lock and having checked for not signaled, or the
> * signaling path itself.
> */
> -DECLARE_EVENT_CLASS(dma_fence,
> +DECLARE_EVENT_CLASS(dma_fence_ops,
>
> TP_PROTO(struct dma_fence *fence),
>
> @@ -46,7 +71,7 @@ DEFINE_EVENT(dma_fence, dma_fence_emit,
> TP_ARGS(fence)
> );
>
> -DEFINE_EVENT(dma_fence, dma_fence_init,
> +DEFINE_EVENT(dma_fence_ops, dma_fence_init,
>
> TP_PROTO(struct dma_fence *fence),
>
> @@ -60,14 +85,14 @@ DEFINE_EVENT(dma_fence, dma_fence_destroy,
> TP_ARGS(fence)
> );
>
> -DEFINE_EVENT(dma_fence, dma_fence_enable_signal,
> +DEFINE_EVENT(dma_fence_ops, dma_fence_enable_signal,
>
> TP_PROTO(struct dma_fence *fence),
>
> TP_ARGS(fence)
> );
>
> -DEFINE_EVENT(dma_fence, dma_fence_signaled,
> +DEFINE_EVENT(dma_fence_ops, dma_fence_signaled,
>
> TP_PROTO(struct dma_fence *fence),
>
On 4/14/26 12:55, popov.nkv(a)gmail.com wrote:
> From: Vladimir Popov <popov.nkv(a)gmail.com>
>
> If vmw_execbuf_fence_commands() call fails in
> vmw_kms_helper_validation_finish(), it sets *p_fence = NULL. If
> ctx->bo_list is not empty, the caller, vmw_kms_helper_validation_finish(),
> passes the fence through a chain of functions to dma_fence_is_array(),
> which causes a NULL pointer dereference in dma_fence_is_array():
>
> vmw_kms_helper_validation_finish() // pass NULL fence
> vmw_validation_done()
> vmw_validation_bo_fence()
> ttm_eu_fence_buffer_objects() // pass NULL fence
> dma_resv_add_fence()
> dma_fence_is_container()
> dma_fence_is_array() // NULL deref
Well good catch, but that is clearly not the right fix.
I'm not an expert on the vmwgfx code, but in case of an error vmw_validation_revert() should be called and not vmw_kms_helper_validation_finish().
Regards,
Christian.
>
> Fix this by adding a NULL check in vmw_validation_bo_fence(): if the fence
> is NULL, fall back to ttm_eu_backoff_reservation()to safely release
> the buffer object reservations without attempting to add a NULL fence to
> dma_resv. This is safe because when fence is NULL, vmw_fallback_wait()
> has already been called inside vmw_execbuf_fence_commands() to synchronize
> the GPU.
>
> Found by Linux Verification Center (linuxtesting.org) with SVACE.
>
> Fixes: 038ecc503236 ("drm/vmwgfx: Add a validation module v2")
> Cc: stable(a)vger.kernel.org
> Signed-off-by: Vladimir Popov <popov.nkv(a)gmail.com>
> ---
> drivers/gpu/drm/vmwgfx/vmwgfx_validation.h | 13 ++++++++++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h
> index 353d837907d8..fc04555ca505 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_validation.h
> @@ -127,16 +127,23 @@ vmw_validation_bo_reserve(struct vmw_validation_context *ctx,
> * vmw_validation_bo_fence - Unreserve and fence buffer objects registered
> * with a validation context
> * @ctx: The validation context
> + * @fence: Fence with which to fence all buffer objects taking part in the
> + * command submission.
> *
> * This function unreserves the buffer objects previously reserved using
> - * vmw_validation_bo_reserve, and fences them with a fence object.
> + * vmw_validation_bo_reserve, and fences them with a fence object if the
> + * given fence object is not NULL.
> */
> static inline void
> vmw_validation_bo_fence(struct vmw_validation_context *ctx,
> struct vmw_fence_obj *fence)
> {
> - ttm_eu_fence_buffer_objects(&ctx->ticket, &ctx->bo_list,
> - (void *) fence);
> + /* fence can be NULL if vmw_execbuf_fence_commands() fails */
> + if (fence)
> + ttm_eu_fence_buffer_objects(&ctx->ticket, &ctx->bo_list,
> + (void *)fence);
> + else
> + ttm_eu_backoff_reservation(&ctx->ticket, &ctx->bo_list);
> }
>
> /**
> --
> 2.43.0
>