From: Rob Clark robdclark@chromium.org
This series adds a deadline hint to fences, so that realtime deadlines such as vblank can be communicated to the fence signaller for power/frequency management decisions.
This is partially inspired by a trick i915 does, but implemented via dma-fence for a couple of reasons:
1) To continue to be able to use the atomic helpers
2) To support cases where display and gpu are different drivers
This iteration adds a dma-fence ioctl to set a deadline (both to support igt-tests, and compositors which delay decisions about which client buffer to display), and a sw_sync ioctl to read back the deadline. IGT tests utilizing these can be found at:
https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadli...
v1: https://patchwork.freedesktop.org/series/93035/
v2: Move filtering out of later deadlines to fence implementation to avoid increasing the size of dma_fence
v3: Add support in fence-array and fence-chain; add some uabi to support igt tests and userspace compositors.
v4: Rebase, address various comments, add syncobj deadline support, and sync_file EPOLLPRI based on experience with perf/freq issues with clvk compute workloads on i915 (anv)
v5: Clarify that this is a hint as opposed to a hard deadline guarantee, switch to using u64 ns values in UABI (still absolute CLOCK_MONOTONIC values), drop syncobj related cap and driver feature flag in favor of allowing count_handles==0 for probing kernel support.
v6: Re-work vblank helper to calculate time of _start_ of vblank, and work correctly if the last vblank event was more than a frame ago.  Add (mostly unrelated) drm/msm patch which also uses the vblank helper.  Use dma_fence_chain_contained().  More verbose syncobj UABI comments.  Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT.
v7: Fix kbuild complaints about vblank helper.  Add more docs.
v8: Add patch to surface sync_file UAPI, and more docs updates.
v9: Drop (E)POLLPRI support.  I still like it, but it is not essential and can always be revived later.  Fix doc build warning.
v10: Update 11/15 to handle multiple CRTCs
Rob Clark (15):
  dma-buf/dma-fence: Add deadline awareness
  dma-buf/fence-array: Add fence deadline support
  dma-buf/fence-chain: Add fence deadline support
  dma-buf/dma-resv: Add a way to set fence deadline
  dma-buf/sync_file: Surface sync-file uABI
  dma-buf/sync_file: Add SET_DEADLINE ioctl
  dma-buf/sw_sync: Add fence deadline support
  drm/scheduler: Add fence deadline support
  drm/syncobj: Add deadline support for syncobj waits
  drm/vblank: Add helper to get next vblank time
  drm/atomic-helper: Set fence deadline for vblank
  drm/msm: Add deadline based boost support
  drm/msm: Add wait-boost support
  drm/msm/atomic: Switch to vblank_start helper
  drm/i915: Add deadline based boost support
 Documentation/driver-api/dma-buf.rst    | 16 ++++-
 drivers/dma-buf/dma-fence-array.c       | 11 ++++
 drivers/dma-buf/dma-fence-chain.c       | 12 ++++
 drivers/dma-buf/dma-fence.c             | 60 ++++++++++++++++++
 drivers/dma-buf/dma-resv.c              | 22 +++++++
 drivers/dma-buf/sw_sync.c               | 81 +++++++++++++++++++++++++
 drivers/dma-buf/sync_debug.h            |  2 +
 drivers/dma-buf/sync_file.c             | 19 ++++++
 drivers/gpu/drm/drm_atomic_helper.c     | 37 +++++++++++
 drivers/gpu/drm/drm_syncobj.c           | 64 +++++++++++++++----
 drivers/gpu/drm/drm_vblank.c            | 53 +++++++++++++---
 drivers/gpu/drm/i915/i915_request.c     | 20 ++++++
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 15 -----
 drivers/gpu/drm/msm/msm_atomic.c        |  8 ++-
 drivers/gpu/drm/msm/msm_drv.c           | 12 ++--
 drivers/gpu/drm/msm/msm_fence.c         | 74 ++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_fence.h         | 20 ++++++
 drivers/gpu/drm/msm/msm_gem.c           |  5 ++
 drivers/gpu/drm/msm/msm_kms.h           |  8 ---
 drivers/gpu/drm/scheduler/sched_fence.c | 46 ++++++++++++++
 drivers/gpu/drm/scheduler/sched_main.c  |  2 +-
 include/drm/drm_vblank.h                |  1 +
 include/drm/gpu_scheduler.h             | 17 ++++++
 include/linux/dma-fence.h               | 22 +++++++
 include/linux/dma-resv.h                |  2 +
 include/uapi/drm/drm.h                  | 17 ++++++
 include/uapi/drm/msm_drm.h              | 14 ++++-
 include/uapi/linux/sync_file.h          | 59 +++++++++++-------
 28 files changed, 640 insertions(+), 79 deletions(-)
From: Rob Clark robdclark@chromium.org
Add a way to hint to the fence signaler of an upcoming deadline, such as vblank, which the fence waiter would prefer not to miss.  This is to aid the fence signaler in making power management decisions, like boosting frequency as the deadline approaches, and to give it awareness of missed deadlines so that it can be factored into the frequency scaling.
v2: Drop dma_fence::deadline and related logic to filter duplicate deadlines, to avoid increasing dma_fence size.  The fence-context implementation will need similar logic to track deadlines of all the fences on the same timeline. [ckoenig]
v3: Clarify locking wrt. set_deadline callback
v4: Clarify in docs comment that this is a hint
v5: Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT.
v6: More docs
v7: Fix typo, clarify past deadlines
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Christian König christian.koenig@amd.com
Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
Reviewed-by: Bagas Sanjaya bagasdotme@gmail.com
---
 Documentation/driver-api/dma-buf.rst |  6 +++
 drivers/dma-buf/dma-fence.c          | 59 ++++++++++++++++++++++++++++
 include/linux/dma-fence.h            | 22 +++++++++++
 3 files changed, 87 insertions(+)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 622b8156d212..183e480d8cea 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -164,6 +164,12 @@ DMA Fence Signalling Annotations
 .. kernel-doc:: drivers/dma-buf/dma-fence.c
    :doc: fence signalling annotation
+DMA Fence Deadline Hints
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: drivers/dma-buf/dma-fence.c
+   :doc: deadline hints
+
 DMA Fences Functions Reference
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 0de0482cd36e..f177c56269bb 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -912,6 +912,65 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 }
 EXPORT_SYMBOL(dma_fence_wait_any_timeout);
+/**
+ * DOC: deadline hints
+ *
+ * In an ideal world, it would be possible to pipeline a workload sufficiently
+ * that a utilization based device frequency governor could arrive at a minimum
+ * frequency that meets the requirements of the use-case, in order to minimize
+ * power consumption.  But in the real world there are many workloads which
+ * defy this ideal.  For example, but not limited to:
+ *
+ * * Workloads that ping-pong between device and CPU, with alternating periods
+ *   of CPU waiting for device, and device waiting on CPU.  This can result in
+ *   devfreq and cpufreq seeing idle time in their respective domains and as a
+ *   result reduce frequency.
+ *
+ * * Workloads that interact with a periodic time based deadline, such as double
+ *   buffered GPU rendering vs vblank sync'd page flipping.  In this scenario,
+ *   missing a vblank deadline results in an *increase* in idle time on the GPU
+ *   (since it has to wait an additional vblank period), sending a signal to
+ *   the GPU's devfreq to reduce frequency, when in fact the opposite is what is
+ *   needed.
+ *
+ * To this end, deadline hint(s) can be set on a &dma_fence via
+ * &dma_fence_set_deadline.  The deadline hint provides a way for the waiting
+ * driver, or userspace, to convey an appropriate sense of urgency to the
+ * signaling driver.
+ *
+ * A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace
+ * facing APIs).  The time could either be some point in the future (such as
+ * the vblank based deadline for page-flipping, or the start of a compositor's
+ * composition cycle), or the current time to indicate an immediate deadline
+ * hint (i.e. forward progress cannot be made until this fence is signaled).
+ *
+ * Multiple deadlines may be set on a given fence, even in parallel.  See the
+ * documentation for &dma_fence_ops.set_deadline.
+ *
+ * The deadline hint is just that, a hint.  The driver that created the fence
+ * may react by increasing frequency, making different scheduling choices, etc.
+ * Or doing nothing at all.
+ */
+
+/**
+ * dma_fence_set_deadline - set desired fence-wait deadline hint
+ * @fence:    the fence that is to be waited on
+ * @deadline: the time by which the waiter hopes for the fence to be
+ *            signaled
+ *
+ * Give the fence signaler a hint about an upcoming deadline, such as
+ * vblank, by which point the waiter would prefer the fence to be
+ * signaled.  This is intended to give feedback to the fence signaler
+ * to aid in power management decisions, such as boosting GPU frequency
+ * if a periodic vblank deadline is approaching but the fence is not
+ * yet signaled.
+ */
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
+{
+	if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
+		fence->ops->set_deadline(fence, deadline);
+}
+EXPORT_SYMBOL(dma_fence_set_deadline);
+
 /**
  * dma_fence_describe - Dump fence description into seq_file
  * @fence: the fence to describe
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 775cdc0b4f24..d54b595a0fe0 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -257,6 +257,26 @@ struct dma_fence_ops {
 	 */
 	void (*timeline_value_str)(struct dma_fence *fence,
 				   char *str, int size);
+
+	/**
+	 * @set_deadline:
+	 *
+	 * Callback to allow a fence waiter to inform the fence signaler of
+	 * an upcoming deadline, such as vblank, by which point the waiter
+	 * would prefer the fence to be signaled.  This is intended to
+	 * give feedback to the fence signaler to aid in power management
+	 * decisions, such as boosting GPU frequency.
+	 *
+	 * This is called without &dma_fence.lock held, it can be called
+	 * multiple times and from any context.  Locking is up to the callee
+	 * if it has some state to manage.  If multiple deadlines are set,
+	 * the expectation is to track the soonest one.  If the deadline is
+	 * before the current time, it should be interpreted as an immediate
+	 * deadline.
+	 *
+	 * This callback is optional.
+	 */
+	void (*set_deadline)(struct dma_fence *fence, ktime_t deadline);
 };
 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
@@ -583,6 +603,8 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
 	return ret < 0 ? ret : 0;
 }
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline);
+
 struct dma_fence *dma_fence_get_stub(void);
 struct dma_fence *dma_fence_allocate_private_stub(void);
 u64 dma_fence_context_alloc(unsigned num);
On Wed, Mar 08, 2023 at 07:52:52AM -0800, Rob Clark wrote:
From: Rob Clark robdclark@chromium.org
Add a way to hint to the fence signaler of an upcoming deadline, such as vblank, which the fence waiter would prefer not to miss.  This is to aid the fence signaler in making power management decisions, like boosting frequency as the deadline approaches, and to give it awareness of missed deadlines so that it can be factored into the frequency scaling.
v2: Drop dma_fence::deadline and related logic to filter duplicate deadlines, to avoid increasing dma_fence size.  The fence-context implementation will need similar logic to track deadlines of all the fences on the same timeline. [ckoenig]
v3: Clarify locking wrt. set_deadline callback
v4: Clarify in docs comment that this is a hint
v5: Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT.
v6: More docs
v7: Fix typo, clarify past deadlines
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Christian König christian.koenig@amd.com
Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
Reviewed-by: Bagas Sanjaya bagasdotme@gmail.com
Hi Rob!
 Documentation/driver-api/dma-buf.rst |  6 +++
 drivers/dma-buf/dma-fence.c          | 59 ++++++++++++++++++++++++++++
 include/linux/dma-fence.h            | 22 +++++++++++
 3 files changed, 87 insertions(+)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 622b8156d212..183e480d8cea 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -164,6 +164,12 @@ DMA Fence Signalling Annotations
 .. kernel-doc:: drivers/dma-buf/dma-fence.c
    :doc: fence signalling annotation
+
+DMA Fence Deadline Hints
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: drivers/dma-buf/dma-fence.c
+   :doc: deadline hints
+
 DMA Fences Functions Reference
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 0de0482cd36e..f177c56269bb 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -912,6 +912,65 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 }
 EXPORT_SYMBOL(dma_fence_wait_any_timeout);
+
+/**
+ * DOC: deadline hints
+ *
+ * In an ideal world, it would be possible to pipeline a workload sufficiently
+ * that a utilization based device frequency governor could arrive at a minimum
+ * frequency that meets the requirements of the use-case, in order to minimize
+ * power consumption.  But in the real world there are many workloads which
+ * defy this ideal.  For example, but not limited to:
+ *
+ * * Workloads that ping-pong between device and CPU, with alternating periods
+ *   of CPU waiting for device, and device waiting on CPU.  This can result in
+ *   devfreq and cpufreq seeing idle time in their respective domains and as a
+ *   result reduce frequency.
+ *
+ * * Workloads that interact with a periodic time based deadline, such as double
+ *   buffered GPU rendering vs vblank sync'd page flipping.  In this scenario,
+ *   missing a vblank deadline results in an *increase* in idle time on the GPU
+ *   (since it has to wait an additional vblank period), sending a signal to
+ *   the GPU's devfreq to reduce frequency, when in fact the opposite is what is
+ *   needed.
This is the use case I'd like to get some better understanding about how this series intends to work, as the problematic scheduling behavior triggered by missed deadlines has plagued compositing display servers for a long time.
I apologize, I'm not a GPU driver developer, nor an OpenGL driver developer, so I will need some hand holding when it comes to understanding exactly what piece of software is responsible for communicating what piece of information.
+ *
+ * To this end, deadline hint(s) can be set on a &dma_fence via
+ * &dma_fence_set_deadline.  The deadline hint provides a way for the waiting
+ * driver, or userspace, to convey an appropriate sense of urgency to the
+ * signaling driver.
+ *
+ * A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace
+ * facing APIs).  The time could either be some point in the future (such as
+ * the vblank based deadline for page-flipping, or the start of a compositor's
+ * composition cycle), or the current time to indicate an immediate deadline
+ * hint (i.e. forward progress cannot be made until this fence is signaled).
Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline?  I have some memories of seeing something about vblank evasion while browsing driver code, which I might have misunderstood, but I have yet to find out whether this is something userspace can actually rely on.
Can userspace set a deadline that targets the next vblank deadline before GPU work has been flushed, e.g. at the start of a paint cycle, and still be sure that the kernel has the information it needs to ramp up its clocks in time for when the work is actually flushed?  Or does this deadline need to be set at the end?
What I'm more or less trying to ask is, will a mode setting compositor be able to tell the kernel to boost its clocks at the time it knows is best, and how will it in practice achieve this?
For example, relying on the atomic mode setting commit setting the deadline is fundamentally flawed, since userspace will at times want to purposefully delay committing until as late as possible, without that causing an increased risk of missing the deadline due to the kernel not speeding up clocks at the right time for GPU work that was flushed long ago.
Relying on commits also has no effect on GPU work queued by a compositor drawing only to dma-bufs that are never intended to be presented using mode setting. How can we make sure a compositor can provide hints that the kernel will know to respect despite the compositor not being drm master?
Jonas
+ *
+ * Multiple deadlines may be set on a given fence, even in parallel.  See the
+ * documentation for &dma_fence_ops.set_deadline.
+ *
+ * The deadline hint is just that, a hint.  The driver that created the fence
+ * may react by increasing frequency, making different scheduling choices, etc.
+ * Or doing nothing at all.
+ */
+/**
+ * dma_fence_set_deadline - set desired fence-wait deadline hint
+ * @fence:    the fence that is to be waited on
+ * @deadline: the time by which the waiter hopes for the fence to be
+ *            signaled
+ *
+ * Give the fence signaler a hint about an upcoming deadline, such as
+ * vblank, by which point the waiter would prefer the fence to be
+ * signaled.  This is intended to give feedback to the fence signaler
+ * to aid in power management decisions, such as boosting GPU frequency
+ * if a periodic vblank deadline is approaching but the fence is not
+ * yet signaled.
+ */
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
+{
+	if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
+		fence->ops->set_deadline(fence, deadline);
+}
+EXPORT_SYMBOL(dma_fence_set_deadline);
 /**
  * dma_fence_describe - Dump fence description into seq_file
  * @fence: the fence to describe
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 775cdc0b4f24..d54b595a0fe0 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -257,6 +257,26 @@ struct dma_fence_ops {
 	 */
 	void (*timeline_value_str)(struct dma_fence *fence,
 				   char *str, int size);
+
+	/**
+	 * @set_deadline:
+	 *
+	 * Callback to allow a fence waiter to inform the fence signaler of
+	 * an upcoming deadline, such as vblank, by which point the waiter
+	 * would prefer the fence to be signaled.  This is intended to
+	 * give feedback to the fence signaler to aid in power management
+	 * decisions, such as boosting GPU frequency.
+	 *
+	 * This is called without &dma_fence.lock held, it can be called
+	 * multiple times and from any context.  Locking is up to the callee
+	 * if it has some state to manage.  If multiple deadlines are set,
+	 * the expectation is to track the soonest one.  If the deadline is
+	 * before the current time, it should be interpreted as an immediate
+	 * deadline.
+	 *
+	 * This callback is optional.
+	 */
+	void (*set_deadline)(struct dma_fence *fence, ktime_t deadline);
 };

 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
@@ -583,6 +603,8 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
 	return ret < 0 ? ret : 0;
 }

+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline);
 struct dma_fence *dma_fence_get_stub(void);
 struct dma_fence *dma_fence_allocate_private_stub(void);
 u64 dma_fence_context_alloc(unsigned num);
--
2.39.2
On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 08, 2023 at 07:52:52AM -0800, Rob Clark wrote:
From: Rob Clark robdclark@chromium.org
Add a way to hint to the fence signaler of an upcoming deadline, such as vblank, which the fence waiter would prefer not to miss.  This is to aid the fence signaler in making power management decisions, like boosting frequency as the deadline approaches, and to give it awareness of missed deadlines so that it can be factored into the frequency scaling.
v2: Drop dma_fence::deadline and related logic to filter duplicate deadlines, to avoid increasing dma_fence size.  The fence-context implementation will need similar logic to track deadlines of all the fences on the same timeline. [ckoenig]
v3: Clarify locking wrt. set_deadline callback
v4: Clarify in docs comment that this is a hint
v5: Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT.
v6: More docs
v7: Fix typo, clarify past deadlines
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Christian König christian.koenig@amd.com
Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
Reviewed-by: Bagas Sanjaya bagasdotme@gmail.com
Hi Rob!
 Documentation/driver-api/dma-buf.rst |  6 +++
 drivers/dma-buf/dma-fence.c          | 59 ++++++++++++++++++++++++++++
 include/linux/dma-fence.h            | 22 +++++++++++
 3 files changed, 87 insertions(+)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 622b8156d212..183e480d8cea 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -164,6 +164,12 @@ DMA Fence Signalling Annotations
 .. kernel-doc:: drivers/dma-buf/dma-fence.c
    :doc: fence signalling annotation
+
+DMA Fence Deadline Hints
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: drivers/dma-buf/dma-fence.c
+   :doc: deadline hints
+
 DMA Fences Functions Reference
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 0de0482cd36e..f177c56269bb 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -912,6 +912,65 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 }
 EXPORT_SYMBOL(dma_fence_wait_any_timeout);
+
+/**
+ * DOC: deadline hints
+ *
+ * In an ideal world, it would be possible to pipeline a workload sufficiently
+ * that a utilization based device frequency governor could arrive at a minimum
+ * frequency that meets the requirements of the use-case, in order to minimize
+ * power consumption.  But in the real world there are many workloads which
+ * defy this ideal.  For example, but not limited to:
+ *
+ * * Workloads that ping-pong between device and CPU, with alternating periods
+ *   of CPU waiting for device, and device waiting on CPU.  This can result in
+ *   devfreq and cpufreq seeing idle time in their respective domains and as a
+ *   result reduce frequency.
+ *
+ * * Workloads that interact with a periodic time based deadline, such as double
+ *   buffered GPU rendering vs vblank sync'd page flipping.  In this scenario,
+ *   missing a vblank deadline results in an *increase* in idle time on the GPU
+ *   (since it has to wait an additional vblank period), sending a signal to
+ *   the GPU's devfreq to reduce frequency, when in fact the opposite is what is
+ *   needed.
This is the use case I'd like to get some better understanding about how this series intends to work, as the problematic scheduling behavior triggered by missed deadlines has plagued compositing display servers for a long time.
I apologize, I'm not a GPU driver developer, nor an OpenGL driver developer, so I will need some hand holding when it comes to understanding exactly what piece of software is responsible for communicating what piece of information.
+ *
+ * To this end, deadline hint(s) can be set on a &dma_fence via
+ * &dma_fence_set_deadline.  The deadline hint provides a way for the waiting
+ * driver, or userspace, to convey an appropriate sense of urgency to the
+ * signaling driver.
+ *
+ * A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace
+ * facing APIs).  The time could either be some point in the future (such as
+ * the vblank based deadline for page-flipping, or the start of a compositor's
+ * composition cycle), or the current time to indicate an immediate deadline
+ * hint (i.e. forward progress cannot be made until this fence is signaled).
Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline?  I have some memories of seeing something about vblank evasion while browsing driver code, which I might have misunderstood, but I have yet to find out whether this is something userspace can actually rely on.
I guess you mean s/GPU driver/display driver/ ? It makes things more clear if we talk about them separately even if they happen to be the same device.
Assuming that is what you mean, nothing strongly defines what the deadline is.  In practice there is probably some buffering in the display controller.  For example, with block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout.  So you probably need to latch some time before you start sending pixel data to the display.  But details like this are heavily implementation dependent.  I think the most reasonable thing to target is the start of vblank.
Also, keep in mind the deadline hint is just that. It won't magically make the GPU finish by that deadline, but it gives the GPU driver information about lateness so it can realize if it needs to clock up.
Can userspace set a deadline that targets the next vblank deadline before GPU work has been flushed, e.g. at the start of a paint cycle, and still be sure that the kernel has the information it needs to ramp up its clocks in time for when the work is actually flushed?  Or does this deadline need to be set at the end?
You need a fence to set the deadline, and for that the work needs to be flushed.  But you can't associate a deadline with work that the kernel is unaware of anyway.
What I'm more or less trying to ask is, will a mode setting compositor be able to tell the kernel to boost its clocks at the time it knows is best, and how will it in practice achieve this?
The anticipated usage for a compositor is that, when you receive a <buf, fence> pair from an app, you immediately set a deadline for upcoming start-of-vblank on the fence fd passed from the app. (Or for implicit sync you can use DMA_BUF_IOCTL_EXPORT_SYNC_FILE). For the composite step, no need to set a deadline as this is already done on the kernel side in drm_atomic_helper_wait_for_fences().
For example, relying on the atomic mode setting commit setting the deadline is fundamentally flawed, since userspace will at times want to purposefully delay committing until as late as possible, without that causing an increased risk of missing the deadline due to the kernel not speeding up clocks at the right time for GPU work that was flushed long ago.
Right, this is the point for exposing the ioctl to userspace.
Relying on commits also has no effect on GPU work queued by a compositor drawing only to dma-bufs that are never intended to be presented using mode setting. How can we make sure a compositor can provide hints that the kernel will know to respect despite the compositor not being drm master?
It doesn't matter if there are indirect dependencies. Even if the compositor completely ignores deadline hints and fancy tricks like delaying composite decisions, the indirect dependency (app rendering) will delay the direct dependency (compositor rendering) of the page flip. So the driver will still see whether it is late or early compared to the deadline, allowing it to adjust freq in the appropriate direction for the next frame.
BR, -R
Jonas
+ *
+ * Multiple deadlines may be set on a given fence, even in parallel.  See the
+ * documentation for &dma_fence_ops.set_deadline.
+ *
+ * The deadline hint is just that, a hint.  The driver that created the fence
+ * may react by increasing frequency, making different scheduling choices, etc.
+ * Or doing nothing at all.
+ */
+/**
+ * dma_fence_set_deadline - set desired fence-wait deadline hint
+ * @fence:    the fence that is to be waited on
+ * @deadline: the time by which the waiter hopes for the fence to be
+ *            signaled
+ *
+ * Give the fence signaler a hint about an upcoming deadline, such as
+ * vblank, by which point the waiter would prefer the fence to be
+ * signaled.  This is intended to give feedback to the fence signaler
+ * to aid in power management decisions, such as boosting GPU frequency
+ * if a periodic vblank deadline is approaching but the fence is not
+ * yet signaled.
+ */
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline)
+{
+	if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
+		fence->ops->set_deadline(fence, deadline);
+}
+EXPORT_SYMBOL(dma_fence_set_deadline);
 /**
  * dma_fence_describe - Dump fence description into seq_file
  * @fence: the fence to describe
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 775cdc0b4f24..d54b595a0fe0 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -257,6 +257,26 @@ struct dma_fence_ops {
 	 */
 	void (*timeline_value_str)(struct dma_fence *fence,
 				   char *str, int size);
+
+	/**
+	 * @set_deadline:
+	 *
+	 * Callback to allow a fence waiter to inform the fence signaler of
+	 * an upcoming deadline, such as vblank, by which point the waiter
+	 * would prefer the fence to be signaled.  This is intended to
+	 * give feedback to the fence signaler to aid in power management
+	 * decisions, such as boosting GPU frequency.
+	 *
+	 * This is called without &dma_fence.lock held, it can be called
+	 * multiple times and from any context.  Locking is up to the callee
+	 * if it has some state to manage.  If multiple deadlines are set,
+	 * the expectation is to track the soonest one.  If the deadline is
+	 * before the current time, it should be interpreted as an immediate
+	 * deadline.
+	 *
+	 * This callback is optional.
+	 */
+	void (*set_deadline)(struct dma_fence *fence, ktime_t deadline);
 };
 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
@@ -583,6 +603,8 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr)
 	return ret < 0 ? ret : 0;
 }
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline);
+
 struct dma_fence *dma_fence_get_stub(void);
 struct dma_fence *dma_fence_allocate_private_stub(void);
 u64 dma_fence_context_alloc(unsigned num);
--
2.39.2
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 08, 2023 at 07:52:52AM -0800, Rob Clark wrote:
From: Rob Clark robdclark@chromium.org
Add a way to hint to the fence signaler of an upcoming deadline, such as vblank, which the fence waiter would prefer not to miss.  This is to aid the fence signaler in making power management decisions, like boosting frequency as the deadline approaches, and to give it awareness of missed deadlines so that it can be factored into the frequency scaling.
v2: Drop dma_fence::deadline and related logic to filter duplicate deadlines, to avoid increasing dma_fence size.  The fence-context implementation will need similar logic to track deadlines of all the fences on the same timeline. [ckoenig]
v3: Clarify locking wrt. set_deadline callback
v4: Clarify in docs comment that this is a hint
v5: Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT.
v6: More docs
v7: Fix typo, clarify past deadlines
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Christian König christian.koenig@amd.com
Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
Reviewed-by: Bagas Sanjaya bagasdotme@gmail.com
Hi Rob!
 Documentation/driver-api/dma-buf.rst |  6 +++
 drivers/dma-buf/dma-fence.c          | 59 ++++++++++++++++++++++++++++
 include/linux/dma-fence.h            | 22 +++++++++++
 3 files changed, 87 insertions(+)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 622b8156d212..183e480d8cea 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -164,6 +164,12 @@ DMA Fence Signalling Annotations
 .. kernel-doc:: drivers/dma-buf/dma-fence.c
    :doc: fence signalling annotation
+
+DMA Fence Deadline Hints
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: drivers/dma-buf/dma-fence.c
+   :doc: deadline hints
+
 DMA Fences Functions Reference
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c index 0de0482cd36e..f177c56269bb 100644 --- a/drivers/dma-buf/dma-fence.c +++ b/drivers/dma-buf/dma-fence.c @@ -912,6 +912,65 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count, } EXPORT_SYMBOL(dma_fence_wait_any_timeout); +/** + * DOC: deadline hints + * + * In an ideal world, it would be possible to pipeline a workload sufficiently + * that a utilization based device frequency governor could arrive at a minimum + * frequency that meets the requirements of the use-case, in order to minimize + * power consumption. But in the real world there are many workloads which + * defy this ideal. For example, but not limited to: + * + * * Workloads that ping-pong between device and CPU, with alternating periods + * of CPU waiting for device, and device waiting on CPU. This can result in + * devfreq and cpufreq seeing idle time in their respective domains and in + * result reduce frequency. + * + * * Workloads that interact with a periodic time based deadline, such as double + * buffered GPU rendering vs vblank sync'd page flipping. In this scenario, + * missing a vblank deadline results in an *increase* in idle time on the GPU + * (since it has to wait an additional vblank period), sending a signal to + * the GPU's devfreq to reduce frequency, when in fact the opposite is what is + * needed.
This is the use case I'd like to get some better understanding about how this series intends to work, as the problematic scheduling behavior triggered by missed deadlines has plagued compositing display servers for a long time.
I apologize, I'm not a GPU driver developer, nor an OpenGL driver developer, so I will need some hand holding when it comes to understanding exactly what piece of software is responsible for communicating what piece of information.
- To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline.
- The deadline hint provides a way for the waiting driver, or userspace, to
- convey an appropriate sense of urgency to the signaling driver.
- A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace
- facing APIs). The time could either be some point in the future (such as
- the vblank based deadline for page-flipping, or the start of a compositor's
- composition cycle), or the current time to indicate an immediate deadline
- hint (Ie. forward progress cannot be made until this fence is signaled).
Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline? I have some memories of seeing something about vblank evasion while browsing driver code, which I might have misunderstood, but I have yet to find out whether this is something userspace can actually rely on.
I guess you mean s/GPU driver/display driver/ ? It makes things more clear if we talk about them separately even if they happen to be the same device.
Sure, sorry about being unclear about that.
Assuming that is what you mean, nothing strongly defines what the deadline is. In practice there is probably some buffering in the display controller. For ex, block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout. So you probably need to latch some time before you start sending pixel data to the display. But details like this are heavily implementation dependent. I think the most reasonable thing to target is start of vblank.
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
Also, keep in mind the deadline hint is just that. It won't magically make the GPU finish by that deadline, but it gives the GPU driver information about lateness so it can realize if it needs to clock up.
Sure, even if the GPU ramped up clocks to the max, if the job queue is too large, it won't magically invent more cycles to squeeze in.
Can userspace set a deadline that targets the next vblank before GPU work has been flushed, e.g. at the start of a paint cycle, and still be sure that the kernel has the information it needs to know it should ramp up its clocks in time for when the actual work is flushed? Or is it necessary that this deadline is set at the end?
You need a fence to set the deadline, and for that work needs to be flushed. But you can't associate a deadline with work that the kernel is unaware of anyways.
That makes sense, but it might also be a bit inadequate as the only way to tell the kernel it should speed things up. Even with the trick i915 does, with GNOME Shell we still end up with the feedback loop this series aims to mitigate. Triple buffering, i.e. delaying or dropping the first frame, is so far the best workaround, short of other tricks that make the kernel ramp up its clock. Having to choose between latency and frame drops is a tradeoff that should ideally not have to be made.
What I'm more or less trying to ask is, will a mode setting compositor be able to tell the kernel to boost its clocks at the time it knows is best, and how will it in practice achieve this?
The anticipated usage for a compositor is that, when you receive a <buf, fence> pair from an app, you immediately set a deadline for upcoming start-of-vblank on the fence fd passed from the app. (Or for implicit sync you can use DMA_BUF_IOCTL_EXPORT_SYNC_FILE). For the composite step, no need to set a deadline as this is already done on the kernel side in drm_atomic_helper_wait_for_fences().
So it sounds like the new uapi will help compositors that do not draw with the intention of page flipping anything, and compositors that deliberately delay the commit. I suppose with proper target presentation time integration EGL/Vulkan WSI can set deadlines as well. All that sounds like a welcomed improvement, but I'm not convinced it's enough to solve the problems we currently face.
For example relying on the atomic mode setting commit setting the deadline is fundamentally flawed, since user space will at times want to purposefully delay committing until as late as possible, without doing so causing an increased risk of missing the deadline due to the kernel not speeding up clocks at the right time for GPU work that has already been flushed long ago.
Right, this is the point for exposing the ioctl to userspace.
Relying on commits also has no effect on GPU work queued by a compositor drawing only to dma-bufs that are never intended to be presented using mode setting. How can we make sure a compositor can provide hints that the kernel will know to respect despite the compositor not being drm master?
It doesn't matter if there are indirect dependencies. Even if the compositor completely ignores deadline hints and fancy tricks like delaying composite decisions, the indirect dependency (app rendering) will delay the direct dependency (compositor rendering) of the page flip. So the driver will still see whether it is late or early compared to the deadline, allowing it to adjust freq in the appropriate direction for the next frame.
Is it expected that WSIs will set their own deadlines, or should that be the job of the compositor? For example by compositors using DMA_BUF_IOCTL_EXPORT_SYNC_FILE, as you mentioned, to set a deadline matching the vsync at which the buffer will most ideally be committed?
Jonas
BR, -R
Jonas
- Multiple deadlines may be set on a given fence, even in parallel. See the
- documentation for &dma_fence_ops.set_deadline.
- The deadline hint is just that, a hint. The driver that created the fence
- may react by increasing frequency, making different scheduling choices, etc.
- Or doing nothing at all.
- */
+/**
- dma_fence_set_deadline - set desired fence-wait deadline hint
- @fence: the fence that is to be waited on
- @deadline: the time by which the waiter hopes for the fence to be
signaled
- Give the fence signaler a hint about an upcoming deadline, such as
- vblank, by which point the waiter would prefer the fence to be
- signaled by. This is intended to give feedback to the fence signaler
- to aid in power management decisions, such as boosting GPU frequency
- if a periodic vblank deadline is approaching but the fence is not
- yet signaled.
- */
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline) +{
if (fence->ops->set_deadline && !dma_fence_is_signaled(fence))
fence->ops->set_deadline(fence, deadline);
+} +EXPORT_SYMBOL(dma_fence_set_deadline);
/**
- dma_fence_describe - Dump fence description into seq_file
- @fence: the fence to describe
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h index 775cdc0b4f24..d54b595a0fe0 100644 --- a/include/linux/dma-fence.h +++ b/include/linux/dma-fence.h @@ -257,6 +257,26 @@ struct dma_fence_ops { */ void (*timeline_value_str)(struct dma_fence *fence, char *str, int size);
/**
* @set_deadline:
*
* Callback to allow a fence waiter to inform the fence signaler of
* an upcoming deadline, such as vblank, by which point the waiter
* would prefer the fence to be signaled by. This is intended to
* give feedback to the fence signaler to aid in power management
* decisions, such as boosting GPU frequency.
*
* This is called without &dma_fence.lock held, it can be called
* multiple times and from any context. Locking is up to the callee
* if it has some state to manage. If multiple deadlines are set,
* the expectation is to track the soonest one. If the deadline is
* before the current time, it should be interpreted as an immediate
* deadline.
*
* This callback is optional.
*/
void (*set_deadline)(struct dma_fence *fence, ktime_t deadline);
};
void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops, @@ -583,6 +603,8 @@ static inline signed long dma_fence_wait(struct dma_fence *fence, bool intr) return ret < 0 ? ret : 0; }
+void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline);
struct dma_fence *dma_fence_get_stub(void); struct dma_fence *dma_fence_allocate_private_stub(void); u64 dma_fence_context_alloc(unsigned num); -- 2.39.2
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
8-< *snip* 8-<
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
8-< *snip* 8-<
That makes sense, but it might also be a bit inadequate as the only way to tell the kernel it should speed things up. Even with the trick i915 does, with GNOME Shell we still end up with the feedback loop this series aims to mitigate. Triple buffering, i.e. delaying or dropping the first frame, is so far the best workaround, short of other tricks that make the kernel ramp up its clock. Having to choose between latency and frame drops is a tradeoff that should ideally not have to be made.
Before you have a fence, the thing you want to be speeding up is the CPU, not the GPU. There are existing mechanisms for that.
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
8-< *snip* 8-<
So it sounds like the new uapi will help compositors that do not draw with the intention of page flipping anything, and compositors that deliberately delay the commit. I suppose with proper target presentation time integration EGL/Vulkan WSI can set deadlines as well. All that sounds like a welcomed improvement, but I'm not convinced it's enough to solve the problems we currently face.
Yeah, like I mentioned this addresses one issue, giving the GPU kernel driver better information for freq mgmt. But there probably isn't one single solution for everything.
8-< *snip* 8-<
Is it expected that WSIs will set their own deadlines, or should that be the job of the compositor? For example by compositors using DMA_BUF_IOCTL_EXPORT_SYNC_FILE, as you mentioned, to set a deadline matching the vsync at which the buffer will most ideally be committed?
I'm kind of assuming compositors, but if the WSI somehow has more information about ideal presentation time, then I suppose it could be in the WSI? I'll defer to folks who spend more time on WSI and compositors to hash out the details ;-)
BR, -R
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
8-< *snip* 8-<
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real time scheduling, and knowing how long it usually takes to create and commit and for the kernel to process.
8-< *snip* 8-<
Before you have a fence, the thing you want to be speeding up is the CPU, not the GPU. There are existing mechanisms for that.
Is there no benefit to letting the GPU know earlier that it should speed up, so that when the job queue arrives, it's already up to speed?
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
Perhaps; but I believe it's a bit of a back channel of intent; the piece of the puzzle that has the information to know whether there is a need to actually speed up is the compositor, not the kernel.
For example, pressing 'p' while a terminal is focused does not need high frequency clocks, it just needs the terminal emulator to draw a 'p' and the compositor to composite that update. Pressing <Super> may however trigger a non-trivial animation moving a lot of stuff around on screen, maybe triggering Wayland clients to draw and what not; the compositor should most arguably have the ability to "warn" the kernel about the upcoming flood of work before it is already knocking on its doorstep.
8-< *snip* 8-<
Is it expected that WSIs will set their own deadlines, or should that be the job of the compositor? For example by compositors using DMA_BUF_IOCTL_EXPORT_SYNC_FILE, as you mentioned, to set a deadline matching the vsync at which the buffer will most ideally be committed?
I'm kind of assuming compositors, but if the WSI somehow has more information about ideal presentation time, then I suppose it could be in the WSI? I'll defer to folks who spend more time on WSI and compositors to hash out the details ;-)
With my compositor developer hat on, it might be best to let it be up to the compositor, it's the one that knows if a client's content will actually end up anywhere visible.
Jonas
BR, -R
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 08, 2023 at 07:52:52AM -0800, Rob Clark wrote:
From: Rob Clark robdclark@chromium.org
Add a way to hint to the fence signaler of an upcoming deadline, such as vblank, which the fence waiter would prefer not to miss. This is to aid the fence signaler in making power management decisions, like boosting frequency as the deadline approaches, and awareness of missing deadlines so that it can be factored into the frequency scaling.
v2: Drop dma_fence::deadline and related logic to filter duplicate
    deadlines, to avoid increasing dma_fence size. The fence-context
    implementation will need similar logic to track deadlines of all
    the fences on the same timeline. [ckoenig]
v3: Clarify locking wrt. set_deadline callback
v4: Clarify in docs comment that this is a hint
v5: Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT.
v6: More docs
v7: Fix typo, clarify past deadlines
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Christian König christian.koenig@amd.com
Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
Reviewed-by: Bagas Sanjaya bagasdotme@gmail.com
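To illustrate the v2 note about moving deadline filtering into the fence-context: the signaler only needs to remember the earliest deadline requested on a timeline, and can ignore later (less urgent) hints. A minimal sketch with invented names, not code from the series or any driver:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical per-timeline deadline tracking, as described in the v2
 * changelog: dma_fence itself no longer stores a deadline, so the
 * fence-context keeps only the earliest deadline seen.  Using 0 to mean
 * "no deadline" is safe since deadlines are absolute CLOCK_MONOTONIC
 * times, which are never zero in practice.
 */
struct timeline_ctx {
	uint64_t deadline_ns;	/* earliest deadline seen; 0 = none */
};

/* Would be called from a driver's &dma_fence_ops.set_deadline hook. */
static bool timeline_set_deadline(struct timeline_ctx *ctx, uint64_t deadline_ns)
{
	/* Filter out deadlines later than one already requested */
	if (ctx->deadline_ns && ctx->deadline_ns <= deadline_ns)
		return false;
	ctx->deadline_ns = deadline_ns;
	return true;		/* caller should re-evaluate its freq boost */
}
```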
Hi Rob!
 Documentation/driver-api/dma-buf.rst |  6 +++
 drivers/dma-buf/dma-fence.c          | 59 ++++++++++++++++++++++++++++
 include/linux/dma-fence.h            | 22 +++++++++++
 3 files changed, 87 insertions(+)

diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 622b8156d212..183e480d8cea 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -164,6 +164,12 @@ DMA Fence Signalling Annotations
 .. kernel-doc:: drivers/dma-buf/dma-fence.c
    :doc: fence signalling annotation
 
+DMA Fence Deadline Hints
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: drivers/dma-buf/dma-fence.c
+   :doc: deadline hints
+
 DMA Fences Functions Reference
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 0de0482cd36e..f177c56269bb 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -912,6 +912,65 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
 }
 EXPORT_SYMBOL(dma_fence_wait_any_timeout);
 
+/**
+ * DOC: deadline hints
+ *
+ * In an ideal world, it would be possible to pipeline a workload sufficiently
+ * that a utilization based device frequency governor could arrive at a minimum
+ * frequency that meets the requirements of the use-case, in order to minimize
+ * power consumption. But in the real world there are many workloads which
+ * defy this ideal. For example, but not limited to:
+ *
+ * * Workloads that ping-pong between device and CPU, with alternating periods
+ *   of CPU waiting for device, and device waiting on CPU. This can result in
+ *   devfreq and cpufreq seeing idle time in their respective domains and in
+ *   result reduce frequency.
+ *
+ * * Workloads that interact with a periodic time based deadline, such as double
+ *   buffered GPU rendering vs vblank sync'd page flipping. In this scenario,
+ *   missing a vblank deadline results in an *increase* in idle time on the GPU
+ *   (since it has to wait an additional vblank period), sending a signal to
+ *   the GPU's devfreq to reduce frequency, when in fact the opposite is what is
+ *   needed.
This is the use case I'd like to understand better in terms of how this series intends to address it, as the problematic scheduling behavior triggered by missed deadlines has plagued compositing display servers for a long time.
I apologize, I'm not a GPU driver developer, nor an OpenGL driver developer, so I will need some hand holding when it comes to understanding exactly what piece of software is responsible for communicating what piece of information.
+ *
+ * To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline.
+ * The deadline hint provides a way for the waiting driver, or userspace, to
+ * convey an appropriate sense of urgency to the signaling driver.
+ *
+ * A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace
+ * facing APIs). The time could either be some point in the future (such as
+ * the vblank based deadline for page-flipping, or the start of a compositor's
+ * composition cycle), or the current time to indicate an immediate deadline
+ * hint (Ie. forward progress cannot be made until this fence is signaled).
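For a concrete picture of the "vblank based deadline" case, here is a hypothetical userspace sketch: compute the next start-of-vblank from a past vblank timestamp and the frame period, then attach it to a fence fd via the sync_file deadline ioctl this series adds. The struct layout and ioctl number below reflect my reading of the series' UABI (a u64 absolute CLOCK_MONOTONIC value, in ns) and are redefined locally for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <sys/ioctl.h>

/*
 * Illustrative local copy of the sync_file deadline UABI from this
 * series; treat the ioctl number and layout as assumptions, the real
 * definitions live in <linux/sync_file.h> once the series lands.
 */
struct sync_set_deadline {
	uint64_t deadline_ns;	/* absolute CLOCK_MONOTONIC value, in ns */
	uint64_t pad;
};
#define SYNC_IOC_MAGIC		'>'
#define SYNC_IOC_SET_DEADLINE	_IOW(SYNC_IOC_MAGIC, 5, struct sync_set_deadline)

/*
 * Next start-of-vblank strictly after @now_ns, given a past vblank
 * timestamp and the frame period; works even if the last observed
 * vblank was more than one frame ago.
 */
static uint64_t next_vblank_ns(uint64_t last_vblank_ns, uint64_t period_ns,
			       uint64_t now_ns)
{
	uint64_t frames = (now_ns - last_vblank_ns) / period_ns + 1;

	return last_vblank_ns + frames * period_ns;
}

static int set_fence_deadline(int fence_fd, uint64_t deadline_ns)
{
	struct sync_set_deadline args = { .deadline_ns = deadline_ns };

	return ioctl(fence_fd, SYNC_IOC_SET_DEADLINE, &args);
}
```

A compositor would feed `next_vblank_ns()` the timestamp from its last page-flip event plus the mode's frame duration, and pass the result to `set_fence_deadline()` on the client buffer's fence.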
Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline? I have some memories of seeing something about vblank evasion browsing driver code, which I might have misunderstood, but I have yet to find whether this is something userspace can actually rely on.
I guess you mean s/GPU driver/display driver/ ? It makes things more clear if we talk about them separately even if they happen to be the same device.
Sure, sorry about being unclear about that.
Assuming that is what you mean, nothing strongly defines what the deadline is. In practice there is probably some buffering in the display controller. For ex, block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout. So you probably need to latch some time before you start sending pixel data to the display. But details like this are heavily implementation dependent. I think the most reasonable thing to target is start of vblank.
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real time scheduling, and knowing how long it usually takes to create and commit and for the kernel to process.
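The commit scheduling described here usually reduces to waking up at the deadline minus a measured estimate of how long the commit path takes. A hypothetical sketch (all names invented, not from any real compositor):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical deadline-relative commit scheduling: track an
 * exponentially weighted moving average of observed commit durations,
 * and wake up that far (plus a safety margin) before the latch deadline.
 */
struct commit_sched {
	uint64_t ewma_ns;	/* running estimate of commit duration; 0 = no samples */
};

static void commit_sched_observe(struct commit_sched *cs, uint64_t sample_ns)
{
	if (!cs->ewma_ns)
		cs->ewma_ns = sample_ns;
	else		/* 1/8 weight on the new sample */
		cs->ewma_ns = cs->ewma_ns - cs->ewma_ns / 8 + sample_ns / 8;
}

/* When to wake up for a commit that must latch by @deadline_ns. */
static uint64_t commit_wakeup_ns(const struct commit_sched *cs,
				 uint64_t deadline_ns, uint64_t margin_ns)
{
	return deadline_ns - cs->ewma_ns - margin_ns;
}
```

The margin term is where the real-time scheduling and kernel processing overheads mentioned above would have to be accounted for.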
8-< *snip* 8-<
You need a fence to set the deadline, and for that work needs to be flushed. But you can't associate a deadline with work that the kernel is unaware of anyways.
That makes sense, but it might also be a bit inadequate as the only way to tell the kernel it should speed things up. Even with the trick i915 does, with GNOME Shell, we still end up with the feedback loop this series aims to mitigate. Doing triple buffering, i.e. delaying or dropping the first frame, is so far the best workaround, short of other tricks that make the kernel ramp up its clock. Having to choose between latency and frame drops is a trade-off that should ideally not have to be made.
Before you have a fence, the thing you want to be speeding up is the CPU, not the GPU. There are existing mechanisms for that.
Is there no benefit to letting the GPU know earlier that it should speed up, so that when the job queue arrives, it's already up to speed?
Downstream we have an input notifier that resumes the GPU, so we can pipeline the 1-2ms it takes to boot up the GPU with what userspace is doing. But we wait to boost freq until we have cmdstream to submit, since that doesn't take as long. What needs help initially after input is all the stuff that happens on the CPU before the GPU can start to do anything ;-)
Btw, I guess I haven't made this clear, dma-fence deadline is trying to help the steady-state situation, rather than the input-latency situation. It might take a frame or two of missed deadlines for gpufreq to arrive at a good steady-state freq.
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
Perhaps; but I believe it's a bit of a back channel of intent; the piece of the puzzle that has the information to know whether there is actually a need to speed up is the compositor, not the kernel.
For example, pressing 'p' while a terminal is focused does not need high frequency clocks, it just needs the terminal emulator to draw a 'p' and the compositor to composite that update. Pressing <Super> may however trigger a non-trivial animation moving a lot of stuff around on screen, maybe triggering Wayland clients to draw and what not, and should most arguably have the ability to "warn" the kernel about the upcoming flood of work before it is already knocking on its door step.
The super key is problematic, but not for the reason you think. It is because it is a case where we should boost on key-up instead of key-down.. and the subsequent key-up event comes after the cpu-boost is already in its cool-down period. But even if suboptimal in cases like this, it is still useful for touch/stylus cases where the slightest of lag is much more perceptible.
This is getting off topic but I kinda favor coming up with some sort of static definition that userspace could give the kernel to let the kernel know what input to boost on. Or maybe something could be done with BPF?
8-< *snip* 8-<
Is it expected that WSIs will set their own deadlines, or should that be the job of the compositor? For example by compositors using the DMA_BUF_IOCTL_EXPORT_SYNC_FILE that you mentioned, using it to set a deadline matching the vsync the buffer would most ideally be committed for?
I'm kind of assuming compositors, but if the WSI somehow has more information about ideal presentation time, then I suppose it could be in the WSI? I'll defer to folks who spend more time on WSI and compositors to hash out the details ;-)
With my compositor developer hat on, it might be best to let it be up to the compositor, it's the one that knows if a client's content will actually end up anywhere visible.
wfm
BR, -R
Jonas
BR, -R
On Thu, Mar 16, 2023 at 5:29 PM Rob Clark robdclark@gmail.com wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 08, 2023 at 07:52:52AM -0800, Rob Clark wrote:
8-< *snip* 8-<
Hi Rob!
8-< *snip* 8-<
This is the use case I'd like to understand better in terms of how this series intends to address it, as the problematic scheduling behavior triggered by missed deadlines has plagued compositing display servers for a long time.
I apologize, I'm not a GPU driver developer, nor an OpenGL driver developer, so I will need some hand holding when it comes to understanding exactly what piece of software is responsible for communicating what piece of information.
8-< *snip* 8-<
Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline? I have some memories of seeing something about vblank evasion browsing driver code, which I might have misunderstood, but I have yet to find whether this is something userspace can actually rely on.
I guess you mean s/GPU driver/display driver/ ? It makes things more clear if we talk about them separately even if they happen to be the same device.
Sure, sorry about being unclear about that.
Assuming that is what you mean, nothing strongly defines what the deadline is. In practice there is probably some buffering in the display controller. For ex, block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout. So you probably need to latch some time before you start sending pixel data to the display. But details like this are heavily implementation dependent. I think the most reasonable thing to target is start of vblank.
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real time scheduling, and knowing how long it usually takes to create and commit and for the kernel to process.
Vblank can be really long, especially with VRR where the additional time you get to finish the frame comes from making vblank longer. Using the start of vblank as a deadline makes VRR useless. It really would be nice to have some feedback about the actual deadline from the kernel, maybe in `struct drm_event_vblank`.
But yes, sorry, off topic...
8-< *snip* 8-<
You need a fence to set the deadline, and for that work needs to be flushed. But you can't associate a deadline with work that the kernel is unaware of anyways.
That makes sense, but it might also be a bit inadequate as the only way to tell the kernel it should speed things up. Even with the trick i915 does, with GNOME Shell, we still end up with the feedback loop this series aims to mitigate. Doing triple buffering, i.e. delaying or dropping the first frame, is so far the best workaround, short of other tricks that make the kernel ramp up its clock. Having to choose between latency and frame drops is a trade-off that should ideally not have to be made.
Before you have a fence, the thing you want to be speeding up is the CPU, not the GPU. There are existing mechanisms for that.
Is there no benefit to letting the GPU know earlier that it should speed up, so that when the job queue arrives, it's already up to speed?
Downstream we have an input notifier that resumes the GPU, so we can pipeline the 1-2ms it takes to boot up the GPU with what userspace is doing. But we wait to boost freq until we have cmdstream to submit, since that doesn't take as long. What needs help initially after input is all the stuff that happens on the CPU before the GPU can start to do anything ;-)
Btw, I guess I haven't made this clear, dma-fence deadline is trying to help the steady-state situation, rather than the input-latency situation. It might take a frame or two of missed deadlines for gpufreq to arrive at a good steady-state freq.
The mutter issue also is about a suboptimal steady-state.
Truth be told, I'm not sure if this fence deadline idea fixes the issue we're seeing or at least helps sometimes. It might, it might not. What annoys me is that the compositor *knows* before any work is submitted that some work will be submitted and when it has to finish. We could maximize the chances to get everything right but having to wait for a fence to materialize in the compositor to do anything about it is suboptimal.
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
Perhaps; but I believe it's a bit of a back channel of intent; the piece of the puzzle that has the information to know whether there is actually a need to speed up is the compositor, not the kernel.
For example, pressing 'p' while a terminal is focused does not need high frequency clocks, it just needs the terminal emulator to draw a 'p' and the compositor to composite that update. Pressing <Super> may however trigger a non-trivial animation moving a lot of stuff around on screen, maybe triggering Wayland clients to draw and what not, and should most arguably have the ability to "warn" the kernel about the upcoming flood of work before it is already knocking on its door step.
The super key is problematic, but not for the reason you think. It is because it is a case where we should boost on key-up instead of key-down.. and the subsequent key-up event comes after the cpu-boost is already in its cool-down period. But even if suboptimal in cases like this, it is still useful for touch/stylus cases where the slightest of lag is much more perceptible.
This is getting off topic but I kinda favor coming up with some sort of static definition that userspace could give the kernel to let the kernel know what input to boost on. Or maybe something could be done with BPF?
Why? Do you think user space is so slow that it can't process the input events and then do a syscall? We need to have all input devices that can affect the system open anyway, and we know more about how they affect behavior than the kernel can ever know.
8-< *snip* 8-<
Is it expected that WSIs will set their own deadlines, or should that be the job of the compositor? For example by compositors using the DMA_BUF_IOCTL_EXPORT_SYNC_FILE that you mentioned, using it to set a deadline matching the vsync the buffer would most ideally be committed for?
I'm kind of assuming compositors, but if the WSI somehow has more information about ideal presentation time, then I suppose it could be in the WSI? I'll defer to folks who spend more time on WSI and compositors to hash out the details ;-)
With my compositor developer hat on, it might be best to let it be up to the compositor, it's the one that knows if a client's content will actually end up anywhere visible.
wfm
BR, -R
Jonas
BR, -R
On Thu, Mar 16, 2023 at 3:22 PM Sebastian Wick sebastian.wick@redhat.com wrote:
On Thu, Mar 16, 2023 at 5:29 PM Rob Clark robdclark@gmail.com wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:
8-< *snip* 8-<
Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline? I have some memories of seeing something about vblank evasion browsing driver code, which I might have misunderstood, but I have yet to find whether this is something userspace can actually rely on.
I guess you mean s/GPU driver/display driver/ ? It makes things more clear if we talk about them separately even if they happen to be the same device.
Sure, sorry about being unclear about that.
Assuming that is what you mean, nothing strongly defines what the deadline is. In practice there is probably some buffering in the display controller. For ex, block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout. So you probably need to latch some time before you start sending pixel data to the display. But details like this are heavily implementation dependent. I think the most reasonable thing to target is start of vblank.
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real time scheduling, and knowing how long it usually takes to create and commit and for the kernel to process.
Vblank can be really long, especially with VRR where the additional time you get to finish the frame comes from making vblank longer. Using the start of vblank as a deadline makes VRR useless. It really would be nice to have some feedback about the actual deadline from the kernel, maybe in `struct drm_event_vblank`.
note that here we are only talking about the difference between start/end of vblank and the deadline for the hw to latch a change for the next frame. (Which I _expect_ generally amounts to however long it takes to slurp in a row of tiles)
But yes, sorry, off topic...
8-< *snip* 8-<
You need a fence to set the deadline, and for that work needs to be flushed. But you can't associate a deadline with work that the kernel is unaware of anyways.
That makes sense, but it might also be a bit inadequate as the only way to tell the kernel it should speed things up. Even with the trick i915 does, with GNOME Shell, we still end up with the feedback loop this series aims to mitigate. Doing triple buffering, i.e. delaying or dropping the first frame, is so far the best workaround, short of other tricks that make the kernel ramp up its clock. Having to choose between latency and frame drops is a trade-off that should ideally not have to be made.
Before you have a fence, the thing you want to be speeding up is the CPU, not the GPU. There are existing mechanisms for that.
Is there no benefit to letting the GPU know earlier that it should speed up, so that when the job queue arrives, it's already up to speed?
Downstream we have an input notifier that resumes the GPU, so we can pipeline the 1-2ms it takes to boot up the GPU with what userspace is doing. But we wait to boost freq until we have cmdstream to submit, since that doesn't take as long. What needs help initially after input is all the stuff that happens on the CPU before the GPU can start to do anything ;-)
Btw, I guess I haven't made this clear, dma-fence deadline is trying to help the steady-state situation, rather than the input-latency situation. It might take a frame or two of missed deadlines for gpufreq to arrive at a good steady-state freq.
The mutter issue also is about a suboptimal steady-state.
Truth be told, I'm not sure if this fence deadline idea fixes the issue we're seeing or at least helps sometimes. It might, it might not. What annoys me is that the compositor *knows* before any work is submitted that some work will be submitted and when it has to finish. We could maximize the chances to get everything right but having to wait for a fence to materialize in the compositor to do anything about it is suboptimal.
Why would the app not immediately send the fence+buf to the compositor as soon as it is submitted to the kernel on the client process side?
At any rate, it really doesn't matter how early the kernel finds out about the deadline, since the point is to let the kernel driver know if it is missing the deadline so that it doesn't mis-interpret stall time waiting for the _next_ vblank after the one we wanted.
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
Perhaps; but I believe it's a bit of a back channel of intent; the piece of the puzzle that has the information to know whether there is actually a need to speed up is the compositor, not the kernel.
For example, pressing 'p' while a terminal is focused does not need high frequency clocks, it just needs the terminal emulator to draw a 'p' and the compositor to composite that update. Pressing <Super> may however trigger a non-trivial animation moving a lot of stuff around on screen, maybe triggering Wayland clients to draw and what not, and should most arguably have the ability to "warn" the kernel about the upcoming flood of work before it is already knocking on its door step.
The super key is problematic, but not for the reason you think. It is because it is a case where we should boost on key-up instead of key-down.. and the second key-up event comes after the cpu-boost is already in its cool-down period. But even if suboptimal in cases like this, it is still useful for touch/stylus cases where the slightest of lag is much more perceptible.
This is getting off topic but I kinda favor coming up with some sort of static definition that userspace could give the kernel to let the kernel know what input to boost on. Or maybe something could be done with BPF?
Why? Do you think user space is so slow that it can't process the input events and then do a syscall? We need to have all input devices open anyway that can affect the system and know more about how they affect behavior than the kernel can ever know.
Again this is getting off into a different topic. But my gut feel is that the shorter the path to input cpu freq boost, the better.. since however many extra cycles you add, they will be cycles with cpu (and probably ddr) at lowest freq
BR, -R
8-< *snip* 8-<
Is it expected that WSIs will set their own deadlines, or should that be the job of the compositor? For example, compositors could use the DMA_BUF_IOCTL_EXPORT_SYNC_FILE approach you mentioned, setting a deadline matching the vsync the buffer will most ideally be committed for?
I'm kind of assuming compositors, but if the WSI somehow has more information about ideal presentation time, then I suppose it could be in the WSI? I'll defer to folks who spend more time on WSI and compositors to hash out the details ;-)
With my compositor developer hat on, it might be best to let it be up to the compositor, it's the one that knows if a client's content will actually end up anywhere visible.
wfm
BR, -R
Jonas
BR, -R
On Thu, Mar 16, 2023 at 11:59 PM Rob Clark robdclark@gmail.com wrote:
On Thu, Mar 16, 2023 at 3:22 PM Sebastian Wick sebastian.wick@redhat.com wrote:
On Thu, Mar 16, 2023 at 5:29 PM Rob Clark robdclark@gmail.com wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
> On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:
> >
> > On Wed, Mar 08, 2023 at 07:52:52AM -0800, Rob Clark wrote:
> > > From: Rob Clark robdclark@chromium.org
> > >
> > > Add a way to hint to the fence signaler of an upcoming deadline, such as
> > > vblank, which the fence waiter would prefer not to miss. This is to aid
> > > the fence signaler in making power management decisions, like boosting
> > > frequency as the deadline approaches and awareness of missing deadlines
> > > so that can be factored in to the frequency scaling.
> > >
> > > v2: Drop dma_fence::deadline and related logic to filter duplicate
> > >     deadlines, to avoid increasing dma_fence size. The fence-context
> > >     implementation will need similar logic to track deadlines of all
> > >     the fences on the same timeline. [ckoenig]
> > > v3: Clarify locking wrt. set_deadline callback
> > > v4: Clarify in docs comment that this is a hint
> > > v5: Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT.
> > > v6: More docs
> > > v7: Fix typo, clarify past deadlines
> > >
> > > Signed-off-by: Rob Clark robdclark@chromium.org
> > > Reviewed-by: Christian König christian.koenig@amd.com
> > > Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
> > > Reviewed-by: Bagas Sanjaya bagasdotme@gmail.com
> > > ---
> >
> > Hi Rob!
> >
> > >  Documentation/driver-api/dma-buf.rst |  6 +++
> > >  drivers/dma-buf/dma-fence.c          | 59 ++++++++++++++++++++++++++++
> > >  include/linux/dma-fence.h            | 22 +++++++++++
> > >  3 files changed, 87 insertions(+)
> > >
> > > diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> > > index 622b8156d212..183e480d8cea 100644
> > > --- a/Documentation/driver-api/dma-buf.rst
> > > +++ b/Documentation/driver-api/dma-buf.rst
> > > @@ -164,6 +164,12 @@ DMA Fence Signalling Annotations
> > >  .. kernel-doc:: drivers/dma-buf/dma-fence.c
> > >     :doc: fence signalling annotation
> > >
> > > +DMA Fence Deadline Hints
> > > +~~~~~~~~~~~~~~~~~~~~~~~~
> > > +
> > > +.. kernel-doc:: drivers/dma-buf/dma-fence.c
> > > +   :doc: deadline hints
> > > +
> > >  DMA Fences Functions Reference
> > >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > >
> > > diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> > > index 0de0482cd36e..f177c56269bb 100644
> > > --- a/drivers/dma-buf/dma-fence.c
> > > +++ b/drivers/dma-buf/dma-fence.c
> > > @@ -912,6 +912,65 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
> > >  }
> > >  EXPORT_SYMBOL(dma_fence_wait_any_timeout);
> > >
> > > +/**
> > > + * DOC: deadline hints
> > > + *
> > > + * In an ideal world, it would be possible to pipeline a workload sufficiently
> > > + * that a utilization based device frequency governor could arrive at a minimum
> > > + * frequency that meets the requirements of the use-case, in order to minimize
> > > + * power consumption. But in the real world there are many workloads which
> > > + * defy this ideal. For example, but not limited to:
> > > + *
> > > + * * Workloads that ping-pong between device and CPU, with alternating periods
> > > + *   of CPU waiting for device, and device waiting on CPU. This can result in
> > > + *   devfreq and cpufreq seeing idle time in their respective domains and in
> > > + *   result reduce frequency.
> > > + *
> > > + * * Workloads that interact with a periodic time based deadline, such as double
> > > + *   buffered GPU rendering vs vblank sync'd page flipping. In this scenario,
> > > + *   missing a vblank deadline results in an *increase* in idle time on the GPU
> > > + *   (since it has to wait an additional vblank period), sending a signal to
> > > + *   the GPU's devfreq to reduce frequency, when in fact the opposite is what is
> > > + *   needed.
> >
> > This is the use case I'd like to get some better understanding about how
> > this series intends to work, as the problematic scheduling behavior
> > triggered by missed deadlines has plagued compositing display servers
> > for a long time.
> >
> > I apologize, I'm not a GPU driver developer, nor an OpenGL driver
> > developer, so I will need some hand holding when it comes to
> > understanding exactly what piece of software is responsible for
> > communicating what piece of information.
> >
> > > + *
> > > + * To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline.
> > > + * The deadline hint provides a way for the waiting driver, or userspace, to
> > > + * convey an appropriate sense of urgency to the signaling driver.
> > > + *
> > > + * A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace
> > > + * facing APIs). The time could either be some point in the future (such as
> > > + * the vblank based deadline for page-flipping, or the start of a compositor's
> > > + * composition cycle), or the current time to indicate an immediate deadline
> > > + * hint (Ie. forward progress cannot be made until this fence is signaled).
> >
> > Is it guaranteed that a GPU driver will use the actual start of the
> > vblank as the effective deadline? I have some memories of seing
> > something about vblank evasion browsing driver code, which I might have
> > misunderstood, but I have yet to find whether this is something
> > userspace can actually expect to be something it can rely on.
>
> I guess you mean s/GPU driver/display driver/ ?  It makes things more
> clear if we talk about them separately even if they happen to be the
> same device.
Sure, sorry about being unclear about that.
Assuming that is what you mean, nothing strongly defines what the deadline is. In practice there is probably some buffering in the display controller. For ex, block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout. So you probably need to latch some time before you start sending pixel data to the display. But details like this are heavily implementation dependent. I think the most reasonable thing to target is start of vblank.
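The v6 changelog mentions reworking the vblank helper to compute the time of the _start_ of the next vblank, and to behave correctly when the last vblank event is more than a frame old. A minimal userspace-style sketch of that calculation (function name and inputs are illustrative, not the series' actual helper signature):

```c
#include <stdint.h>

/* Given the timestamp of the last observed vblank and the frame
 * period, compute the start of the *next* vblank to use as a fence
 * deadline.  Rounds the elapsed time up to a whole number of periods
 * so the result is always in the future, even if the last vblank
 * event is several frames old.  All values are CLOCK_MONOTONIC ns. */
static uint64_t next_vblank_start_ns(uint64_t now_ns,
                                     uint64_t last_vblank_ns,
                                     uint64_t frame_period_ns)
{
    uint64_t elapsed = now_ns - last_vblank_ns;
    uint64_t periods = elapsed / frame_period_ns + 1;

    return last_vblank_ns + periods * frame_period_ns;
}
```

With a 60 Hz period (16666667 ns), a "now" three frames after the last event still yields a deadline strictly in the future, which is the case the v6 rework addresses.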
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real time scheduling, and knowing how long it usually takes to create and commit a frame, and for the kernel to process it.
Vblank can be really long, especially with VRR where the additional time you get to finish the frame comes from making vblank longer. Using the start of vblank as a deadline makes VRR useless. It really would be nice to have some feedback about the actual deadline from the kernel, maybe in `struct drm_event_vblank`.
note that here we are only talking about the difference between start/end of vblank and the deadline for the hw to latch a change for the next frame. (Which I _expect_ generally amounts to however long it takes to slurp in a row of tiles)
But yes, sorry, off topic...
8-< *snip* 8-<
You need a fence to set the deadline, and for that work needs to be flushed. But you can't associate a deadline with work that the kernel is unaware of anyways.
That makes sense, but it might also be a bit inadequate as the only way to tell the kernel it should speed things up. Even with the trick i915 does, with GNOME Shell, we still end up with the feedback loop this series aims to mitigate. Triple buffering, i.e. delaying or dropping the first frame, is so far the best workaround, short of doing other tricks that make the kernel ramp up its clock. Ideally we shouldn't have to choose between latency and frame drops.
Before you have a fence, the thing you want to be speeding up is the CPU, not the GPU. There are existing mechanisms for that.
Is there no benefit to letting the GPU know earlier that it should speed up, so that by the time the job queue arrives it's already up to speed?
Downstream we have an input notifier that resumes the GPU, so we can pipeline the 1-2ms it takes to boot up the GPU with the userspace work. But we wait to boost freq until we have a cmdstream to submit, since that doesn't take as long. What needs help initially after input is all the stuff that happens on the CPU before the GPU can start to do anything ;-)
Btw, I guess I haven't made this clear, dma-fence deadline is trying to help the steady-state situation, rather than the input-latency situation. It might take a frame or two of missed deadlines for gpufreq to arrive at a good steady-state freq.
The mutter issue also is about a suboptimal steady-state.
Truth be told, I'm not sure whether this fence deadline idea fixes the issue we're seeing, or at least helps sometimes. It might, it might not. What annoys me is that the compositor *knows*, before any work is submitted, that some work will be submitted and when it has to finish. We could maximize the chances of getting everything right, but having to wait for a fence to materialize in the compositor before doing anything about it is suboptimal.
Why would the app not immediately send the fence+buf to the compositor as soon as it is submitted to the kernel on client process side?
Some apps just are not good at this. Reading back work from the GPU, taking a lot of CPU time to create the GPU work, etc.
The other obvious offender: frame callbacks. Committing a buffer only happens after receiving a frame callback in FIFO/vsync mode which we try to schedule as close to the deadline as possible.
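The frame-callback scheduling described here boils down to a small subtraction: dispatch the callback as late as possible while still leaving room for the client's work, the compositor's own work, and a safety margin. A sketch with hypothetical inputs (the estimators themselves are the hard part and are just parameters here):

```c
#include <stdint.h>

/* When to dispatch a frame callback so the resulting commit lands as
 * close to the deadline as is safe.  All inputs are estimates the
 * compositor has to maintain itself (this is illustrative, not any
 * particular compositor's scheduler).  Values in CLOCK_MONOTONIC ns. */
static uint64_t frame_cb_dispatch_ns(uint64_t deadline_ns,      /* e.g. next vblank start */
                                     uint64_t est_client_ns,    /* client render estimate */
                                     uint64_t est_compose_ns,   /* compositor work estimate */
                                     uint64_t margin_ns)        /* safety slack */
{
    return deadline_ns - est_client_ns - est_compose_ns - margin_ns;
}
```

The tighter the margin, the lower the latency but the higher the risk of the miss/idle feedback loop this thread is about.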
The idea that clients are able to submit all GPU work early, then immediately commit so the buffer shows up in the compositor well before the deadline, is very idealized. We're trying to get there, but we also only have control over the WSI, so bad apps will still be bad apps.
At any rate, it really doesn't matter how early the kernel finds out about the deadline, since the point is to let the kernel driver know when it is missing the deadline, so that it doesn't misinterpret the stall time spent waiting for the _next_ vblank after the one we wanted.
That's a good point! Let's see how well this works in practice and how we can improve on that in the future.
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
Perhaps; but I believe it's a bit of a back channel of intent; the piece of the puzzle that has the information to know whether there is actually a need to speed up is the compositor, not the kernel.
For example, pressing 'p' while a terminal is focused does not need high frequency clocks, it just needs the terminal emulator to draw a 'p' and the compositor to composite that update. Pressing <Super> may however trigger a non-trivial animation moving a lot of stuff around on screen, maybe triggering Wayland clients to draw and what not, and should most arguably have the ability to "warn" the kernel about the upcoming flood of work before it is already knocking on its door step.
The super key is problematic, but not for the reason you think. It is because it is a case where we should boost on key-up instead of key-down.. and the second key-up event comes after the cpu-boost is already in its cool-down period. But even if suboptimal in cases like this, it is still useful for touch/stylus cases where the slightest of lag is much more perceptible.
This is getting off topic but I kinda favor coming up with some sort of static definition that userspace could give the kernel to let the kernel know what input to boost on. Or maybe something could be done with BPF?
Why? Do you think user space is so slow that it can't process the input events and then do a syscall? We need to have all input devices open anyway that can affect the system and know more about how they affect behavior than the kernel can ever know.
Again this is getting off into a different topic. But my gut feel is that the shorter the path to input cpu freq boost, the better.. since however many extra cycles you add, they will be cycles with cpu (and probably ddr) at lowest freq
On the one hand, sure, that makes sense in theory. On the other hand, we won't know for sure until we try it and I suspect a RT thread in user space will be fast enough.
BR, -R
8-< *snip* 8-<
Is it expected that WSIs will set their own deadlines, or should that be the job of the compositor? For example, compositors could use the DMA_BUF_IOCTL_EXPORT_SYNC_FILE approach you mentioned, setting a deadline matching the vsync the buffer will most ideally be committed for?
I'm kind of assuming compositors, but if the WSI somehow has more information about ideal presentation time, then I suppose it could be in the WSI? I'll defer to folks who spend more time on WSI and compositors to hash out the details ;-)
With my compositor developer hat on, it might be best to let it be up to the compositor, it's the one that knows if a client's content will actually end up anywhere visible.
wfm
BR, -R
Jonas
BR, -R
On Thu, 16 Mar 2023 23:22:24 +0100 Sebastian Wick sebastian.wick@redhat.com wrote:
On Thu, Mar 16, 2023 at 5:29 PM Rob Clark robdclark@gmail.com wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:

8-< *snip* 8-<

Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline? I have some memories of seing something about vblank evasion browsing driver code, which I might have misunderstood, but I have yet to find whether this is something userspace can actually expect to be something it can rely on.
I guess you mean s/GPU driver/display driver/ ? It makes things more clear if we talk about them separately even if they happen to be the same device.
Sure, sorry about being unclear about that.
Assuming that is what you mean, nothing strongly defines what the deadline is. In practice there is probably some buffering in the display controller. For ex, block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout. So you probably need to latch some time before you start sending pixel data to the display. But details like this are heavily implementation dependent. I think the most reasonable thing to target is start of vblank.
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real time scheduling, and knowing how long it usually takes to create and commit a frame, and for the kernel to process it.
Vblank can be really long, especially with VRR where the additional time you get to finish the frame comes from making vblank longer. Using the start of vblank as a deadline makes VRR useless. It really would be nice to have some feedback about the actual deadline from the kernel, maybe in `struct drm_event_vblank`.
Hi Sebastian,
why would a deadline at the beginning of vblank not be exactly what you want with VRR as well?
Let's say the frame misses that deadline on a VRR system, but the frame still completes and the flip makes it to the intended scanout cycle, thanks to VRR giving it more time. Why should that miss not be classified as a miss?
If it is classified as a miss, then GPU freq and whatnot will be increased, which will increase the effective update rate of VRR.
If it is not classified as a miss, then GPU might not speed up, and you end up with low update rate, even though you still don't skip scanout cycles thanks to VRR.
I guess if you actually *want* VRR to run at a low rate in order to keep the GPU in a lower power demand, you could compute your "late deadline" from the minimum rate VRR timings, subtracting the vblank period from the estimated next flip completion timestamp?
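That "late deadline" suggestion reduces to simple arithmetic. A sketch, where every input is an assumption the compositor would supply (the kernel does not expose any of these as a "VRR deadline" today):

```c
#include <stdint.h>

/* Pekka's suggested "late deadline" for VRR when a low update rate is
 * actually desired: estimate the next flip completion from the
 * minimum-rate (i.e. maximum-period) VRR timings, then back off the
 * vblank duration.  Purely illustrative; all inputs are compositor
 * estimates in CLOCK_MONOTONIC ns. */
static uint64_t vrr_late_deadline_ns(uint64_t last_flip_ns,
                                     uint64_t max_period_ns,    /* 1 / min refresh rate */
                                     uint64_t vblank_period_ns) /* fixed-timing vblank length */
{
    uint64_t est_next_flip_ns = last_flip_ns + max_period_ns;

    return est_next_flip_ns - vblank_period_ns;
}
```

A deadline at the next start-of-vblank instead would drive VRR toward its maximum rate, which is the trade-off discussed above.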
Thanks, pq
On Fri, 17 Mar 2023 11:09:21 +0200 Pekka Paalanen ppaalanen@gmail.com wrote:
On Thu, 16 Mar 2023 23:22:24 +0100 Sebastian Wick sebastian.wick@redhat.com wrote:
Vblank can be really long, especially with VRR where the additional time you get to finish the frame comes from making vblank longer.
Btw. VRR extends front porch, not vblank.
Thanks, pq
On Fri, 17 Mar 2023 11:17:37 +0200 Pekka Paalanen ppaalanen@gmail.com wrote:
On Fri, 17 Mar 2023 11:09:21 +0200 Pekka Paalanen ppaalanen@gmail.com wrote:
On Thu, 16 Mar 2023 23:22:24 +0100 Sebastian Wick sebastian.wick@redhat.com wrote:
Vblank can be really long, especially with VRR where the additional time you get to finish the frame comes from making vblank longer.
Btw. VRR extends front porch, not vblank.
Need to correct myself too. vblank includes front porch, vsync does not.
https://electronics.stackexchange.com/questions/166681/how-exactly-does-a-vg...
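The vblank/vsync/front-porch distinction can be made concrete from conventional mode timings. The struct below mirrors the relevant `drmModeModeInfo` fields (redeclared locally to stay self-contained); the numbers used in the test are the common CEA 1920x1080@60 mode, as an example only.

```c
#include <stdint.h>

/* Durations derived from standard display mode timings.  vblank spans
 * everything from the end of active video to vtotal (front porch +
 * sync pulse + back porch); vsync is only the sync pulse; VRR works
 * by stretching the front porch. */
struct mode {
    uint32_t clock_khz;  /* pixel clock in kHz */
    uint16_t htotal;
    uint16_t vdisplay, vsync_start, vsync_end, vtotal;
};

static uint64_t line_time_ns(const struct mode *m)
{
    /* htotal pixel clocks per scanline */
    return (uint64_t)m->htotal * 1000000ull / m->clock_khz;
}

static uint64_t vblank_ns(const struct mode *m)
{
    return (uint64_t)(m->vtotal - m->vdisplay) * line_time_ns(m);
}

static uint64_t front_porch_ns(const struct mode *m)
{
    return (uint64_t)(m->vsync_start - m->vdisplay) * line_time_ns(m);
}

static uint64_t vsync_ns(const struct mode *m)
{
    return (uint64_t)(m->vsync_end - m->vsync_start) * line_time_ns(m);
}
```

For 1080p60 (148.5 MHz clock, htotal 2200, vtotal 1125) the whole vblank is only ~0.67 ms of the ~16.7 ms frame, which is why the "extra time" VRR grants has to come from stretching the front porch rather than the fixed-timing vblank.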
Thanks, pq
On 3/16/23 23:22, Sebastian Wick wrote:
On Thu, Mar 16, 2023 at 5:29 PM Rob Clark robdclark@gmail.com wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:

8-< *snip* 8-<

Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline? I have some memories of seing something about vblank evasion browsing driver code, which I might have misunderstood, but I have yet to find whether this is something userspace can actually expect to be something it can rely on.
I guess you mean s/GPU driver/display driver/ ? It makes things more clear if we talk about them separately even if they happen to be the same device.
Sure, sorry about being unclear about that.
Assuming that is what you mean, nothing strongly defines what the deadline is. In practice there is probably some buffering in the display controller. For ex, block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout. So you probably need to latch some time before you start sending pixel data to the display. But details like this are heavily implementation dependent. I think the most reasonable thing to target is start of vblank.
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real time scheduling, and knowing how long it usually takes to create and commit a frame, and for the kernel to process it.
Vblank can be really long, especially with VRR where the additional time you get to finish the frame comes from making vblank longer. Using the start of vblank as a deadline makes VRR useless.
Not really. We normally still want to aim for start of vblank with VRR, which would result in the maximum refresh rate. Missing that target just incurs less of a penalty than with fixed refresh rate.
On Thu, Mar 16, 2023 at 09:28:55AM -0700, Rob Clark wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 08, 2023 at 07:52:52AM -0800, Rob Clark wrote:

8-< *snip* 8-<
Hi Rob!
8-< *snip* 8-<

> + * * Workloads that interact with a periodic time based deadline, such as double
> + *   buffered GPU rendering vs vblank sync'd page flipping. In this scenario,
> + *   missing a vblank deadline results in an *increase* in idle time on the GPU
> + *   (since it has to wait an additional vblank period), sending a signal to
> + *   the GPU's devfreq to reduce frequency, when in fact the opposite is what is
> + *   needed.
This is the use case I'd like to get some better understanding about how this series intends to work, as the problematic scheduling behavior triggered by missed deadlines has plagued compositing display servers for a long time.
I apologize, I'm not a GPU driver developer, nor an OpenGL driver developer, so I will need some hand holding when it comes to understanding exactly what piece of software is responsible for communicating what piece of information.
> + * > + * To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline. > + * The deadline hint provides a way for the waiting driver, or userspace, to > + * convey an appropriate sense of urgency to the signaling driver. > + * > + * A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace > + * facing APIs). The time could either be some point in the future (such as > + * the vblank based deadline for page-flipping, or the start of a compositor's > + * composition cycle), or the current time to indicate an immediate deadline > + * hint (Ie. forward progress cannot be made until this fence is signaled).
Is it guaranteed that a GPU driver will use the actual start of the vblank as the effective deadline? I have some memories of seeing something about vblank evasion while browsing driver code, which I might have misunderstood, but I have yet to find out whether this is something userspace can actually rely on.
I guess you mean s/GPU driver/display driver/ ? It makes things more clear if we talk about them separately even if they happen to be the same device.
Sure, sorry about being unclear about that.
Assuming that is what you mean, nothing strongly defines what the deadline is. In practice there is probably some buffering in the display controller. For ex, block based (including bandwidth compressed) formats, you need to buffer up a row of blocks to efficiently linearize for scanout. So you probably need to latch some time before you start sending pixel data to the display. But details like this are heavily implementation dependent. I think the most reasonable thing to target is start of vblank.
The driver exposing those details would be quite useful for userspace though, so that it can delay committing updates until late, but not too late. Setting a deadline to be the vblank seems easy enough, but it isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real time scheduling, and knowing how long it usually takes to create and commit and for the kernel to process.
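As a concrete illustration of the "target the start of vblank" idea above, here is a minimal sketch of computing the expected start of the next vblank from the last vblank timestamp, including the case where the last recorded vblank was more than a frame ago. `next_vblank_start_ns` is a hypothetical name for illustration; the actual vblank helper in this series lives on the kernel side and differs in detail:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper, mirroring the idea behind the series' vblank
 * helper: given the timestamp of the last vblank (ns), the frame
 * interval (ns), and the current time (ns, assuming now >= last
 * vblank), return the expected start of the next vblank.  Works even
 * if the last recorded vblank was more than one frame in the past.
 */
static uint64_t next_vblank_start_ns(uint64_t last_vblank_ns,
				     uint64_t frame_interval_ns,
				     uint64_t now_ns)
{
	/* whole frames elapsed since the last recorded vblank */
	uint64_t frames = (now_ns - last_vblank_ns) / frame_interval_ns;

	return last_vblank_ns + (frames + 1) * frame_interval_ns;
}
```

The resulting timestamp is what one would pass as the deadline hint; any earlier latch point the display controller needs (per the buffering caveats above) remains implementation dependent.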
8-< *snip* 8-<
You need a fence to set the deadline, and for that work needs to be flushed. But you can't associate a deadline with work that the kernel is unaware of anyways.
That makes sense, but it might also be a bit inadequate as the only way to tell the kernel it should speed things up. Even with the trick i915 does, with GNOME Shell, we still end up with the feedback loop this series aims to mitigate. Doing triple buffering, i.e. delaying or dropping the first frame, is so far the work around that works best, short of other tricks that make the kernel ramp up its clock. Ideally one shouldn't have to choose between latency and frame drops.
Before you have a fence, the thing you want to be speeding up is the CPU, not the GPU. There are existing mechanisms for that.
Is there no benefit in letting the GPU know earlier that it should speed up, so that when the job queue arrives, it's already up to speed?
Downstream we have input notifier that resumes the GPU so we can pipeline the 1-2ms it takes to boot up the GPU with userspace. But we wait to boost freq until we have cmdstream to submit, since that doesn't take as long. What needs help initially after input is all the stuff that happens on the CPU before the GPU can start to do anything ;-)
How do you deal with boosting CPU speeds downstream? Does the input notifier do that too?
Btw, I guess I haven't made this clear, dma-fence deadline is trying to help the steady-state situation, rather than the input-latency situation. It might take a frame or two of missed deadlines for gpufreq to arrive at a good steady-state freq.
I'm just not sure it will help. Missed deadlines set at commit haven't been enough in the past to let the kernel understand it should speed things up before the next frame (which will be a whole frame late without any triple buffering, which should be a last resort), so I don't see how adding a userspace hook to do the same thing will help.
I think input latency and steady state target frequency here are tightly linked; what we should aim for is to provide enough information at the right time so that it does *not* take a frame or two of missed deadlines to arrive at the target frequency, as those missed deadlines mean stuttering and/or lag.
That it helps with deliberately late commits I do understand, but we don't do that yet; we intend to once there is kernel uapi that lets us do so without negative consequences.
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
Perhaps; but I believe it's a bit of a back channel of intent; the piece of the puzzle that has the information to know whether there is an actual need to speed up is the compositor, not the kernel.
For example, pressing 'p' while a terminal is focused does not need high frequency clocks, it just needs the terminal emulator to draw a 'p' and the compositor to composite that update. Pressing <Super> may however trigger a non-trivial animation moving a lot of stuff around on screen, maybe triggering Wayland clients to draw and what not, and should most arguably have the ability to "warn" the kernel about the upcoming flood of work before it is already knocking on its door step.
The super key is problematic, but not for the reason you think. It is because it is a case where we should boost on key-up instead of key-down.. and the second key-up event comes after the cpu-boost is already in its cool-down period. But even if suboptimal in cases like this, it is still useful for touch/stylus cases where the slightest of lag is much more perceptible.
Other keys are even more problematic. Alt, for example, does nothing, Alt + Tab does some light rendering, but Alt + KeyAboveTab will, depending on the current active applications, suddenly trigger N Wayland surfaces to start rendering at the same time.
This is getting off topic but I kinda favor coming up with some sort of static definition that userspace could give the kernel to let the kernel know what input to boost on. Or maybe something could be done with BPF?
I have a hard time seeing how any static information can be enough; it depends too much on context what is expected to happen. And can a BPF program really help? Unless BPF programs pull some internal kernel strings to speed things up whenever userspace wants, I don't see how it is that much better.
I don't think userspace is necessarily too slow to actively participate in providing direct scheduling hints either. Input processing can, for example, be off-loaded to a real time scheduled thread, and plumbing any hints about future expectations from rendering, windowing and layout subsystems will be significantly easier to do towards a real time input thread than translating them into static information or BPF programs.
Jonas
On Fri, Mar 17, 2023 at 3:23 AM Jonas Ådahl jadahl@gmail.com wrote:
On Thu, Mar 16, 2023 at 09:28:55AM -0700, Rob Clark wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
8-< *snip* 8-<
> How do you deal with boosting CPU speeds downstream? Does the input notifier do that too?
Yes.. actually currently downstream (depending on device) we have 1 to 3 input notifiers, one for CPU boost, one for early-PSR-exit, and one to get a head start on booting up the GPU.
> > Btw, I guess I haven't made this clear, dma-fence deadline is trying to help the steady-state situation, rather than the input-latency situation. It might take a frame or two of missed deadlines for gpufreq to arrive at a good steady-state freq.
>
> I'm just not sure it will help. Missed deadlines set at commit haven't been enough in the past to let the kernel understand it should speed things up before the next frame (which will be a whole frame late without any triple buffering, which should be a last resort), so I don't see how adding a userspace hook to do the same thing will help.
So deadline is just a superset of "right now" and "sometime in the future".. and this has been useful enough for i915 that they have both forms, when waiting on GPU via i915 specific ioctls and when pageflip (assuming userspace isn't deferring composition decision and instead just pushing it all down to the kernel). But this breaks down in a few cases:
1) non pageflip (for ex. ping-ponging between cpu and gpu) use cases, when you wait via polling on fence fd or wait via drm_syncobj instead of DRM_IOCTL_I915_GEM_WAIT
2) when userspace decides late in frame to not pageflip because the app fence isn't signaled yet
And this is all done in a way that doesn't help for situations where you have separate kms and render devices, or where the kms driver doesn't bypass atomic helpers (ie. uses drm_atomic_helper_wait_for_fences()). So the technique has already proven to be useful; this series just extends it beyond driver specific primitives (ie. dma_fence/drm_syncobj).
> I think input latency and steady state target frequency here are tightly linked; what we should aim for is to provide enough information at the right time so that it does *not* take a frame or two of missed deadlines to arrive at the target frequency, as those missed deadlines mean stuttering and/or lag.
If you have some magic way for a gl/vk driver to accurately predict how many cycles it will take to execute a sequence of draws, I'm all ears.
Realistically, the best solution on sudden input is to overshoot and let freqs settle back down.
But there is a lot more to input latency than GPU freq. In UI workloads, even fullscreen animation, I don't really see the GPU going above the 2nd lowest OPP even on relatively small things like a618. UI input latency (touch scrolling, on-screen stylus / low-latency-ink, animations) are a separate issue from what this series addresses, and aren't too much to do with GPU freq.
8-< *snip* 8-<
> I don't think userspace is necessarily too slow to actively participate in providing direct scheduling hints either. Input processing can, for example, be off-loaded to a real time scheduled thread, and plumbing any hints about future expectations from rendering, windowing and layout subsystems will be significantly easier to do towards a real time input thread than translating them into static information or BPF programs.
I mean, the kernel side input handler is called from irq context long before even the scheduler gets involved..
But I think you are over-thinking the Alt + SomeOtherKey case. The important thing isn't what the other key is, it is just to know that Alt is a modifier key (ie. handle it on key-up instead of key-down). No need to over-complicate things. It's probably enough to give the kernel a list of modifier+key combos that do _something_..
And like I've said before, keyboard input is the least problematic in terms of latency. It is a _lot_ easier to notice lag with touch scrolling or stylus (on screen). (The latter case, I think wayland has some catching up to do compared to CrOS or android.. you really need a way to allow the app to do front buffer rendering to an overlay for the stylus case, because even just 16ms delay is _very_ noticeable.)
BR, -R
On Fri, Mar 17, 2023 at 08:59:48AM -0700, Rob Clark wrote:
On Fri, Mar 17, 2023 at 3:23 AM Jonas Ådahl jadahl@gmail.com wrote:
On Thu, Mar 16, 2023 at 09:28:55AM -0700, Rob Clark wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
8-< *snip* 8-<
> Yes.. actually currently downstream (depending on device) we have 1 to 3 input notifiers, one for CPU boost, one for early-PSR-exit, and one to get a head start on booting up the GPU.
Would be really nice to upstream these, one way or the other, be it actually input event based, or via some uapi to just poke the kernel. I realize it's not related to this thread, so this is just me wishing things into the void.
8-< *snip* 8-<
So deadline is just a superset of "right now" and "sometime in the future".. and this has been useful enough for i915 that they have both forms, when waiting on GPU via i915 specific ioctls and when pageflip (assuming userspace isn't deferring composition decision and instead just pushing it all down to the kernel). But this breaks down in a few cases:
- non pageflip (for ex. ping-ponging between cpu and gpu) use cases
when you wait via polling on fence fd or wait via drm_syncobj instead of DRM_IOCTL_I915_GEM_WAIT 2) when userspace decides late in frame to not pageflip because app fence isn't signaled yet
It breaks down in practice today, because we do entering the low-freq feedback loop that triple buffering today effectively works around. That is even with non-delayed page flipping, and a single pipeline source (compositor only rendering) or only using already signaled ready client buffers when compositing.
Anyway, I don't doubt its usefulness, just a bit pessimistic.
And this is all done in a way that doesn't help for situations where you have separate kms and render devices. Or the kms driver doesn't bypass atomic helpers (ie. uses drm_atomic_helper_wait_for_fences()). So the technique has already proven to be useful. This series just extends it beyond driver specific primitives (ie. dma_fence/drm_syncojb)
I think input latency and steady state target frequency here is tightly linked; what we should aim for is to provide enough information at the right time so that it does *not* take a frame or two to of missed deadlines to arrive at the target frequency, as those missed deadlines either means either stuttering and/or lag.
If you have some magic way for a gl/vk driver to accurately predict how many cycles it will take to execute a sequence of draws, I'm all ears.
Realistically, the best solution on sudden input is to overshoot and let freqs settle back down.
But there is a lot more to input latency than GPU freq. In UI workloads, even fullscreen animation, I don't really see the GPU going above the 2nd lowest OPP even on relatively small things like a618. UI input latency (touch scrolling, on-screen stylus / low-latency-ink, animations) are a separate issue from what this series addresses, and aren't too much to do with GPU freq.
That it helps with the deliberately late commit I do understand, but we don't do that yet, but intend to when there is kernel uapi to lets us do so without negative consequences.
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
Perhaps; but I believe it's a bit of a back channel of intent; the piece of the puzzle that has the information to know whether there is need actually speed up is the compositor, not the kernel.
For example, pressing 'p' while a terminal is focused does not need high frequency clocks, it just needs the terminal emulator to draw a 'p' and the compositor to composite that update. Pressing <Super> may however trigger a non-trivial animation moving a lot of stuff around on screen, maybe triggering Wayland clients to draw and what not, and should most arguably have the ability to "warn" the kernel about the upcoming flood of work before it is already knocking on its door step.
The super key is problematic, but not for the reason you think. It is because it is a case where we should boost on key-up instead of key-down.. and the second key-up event comes after the cpu-boost is already in it's cool-down period. But even if suboptimal in cases like this, it is still useful for touch/stylus cases where the slightest of lag is much more perceptible.
Other keys are even more problematic. Alt, for example, does nothing, Alt + Tab does some light rendering, but Alt + KeyAboveTab will, depending on the current active applications, suddenly trigger N Wayland surfaces to start rendering at the same time.
This is getting off topic but I kinda favor coming up with some sort of static definition that userspace could give the kernel to let the kernel know what input to boost on. Or maybe something could be done with BPF?
I have hard time seeing any static information can be enough, it's depends too much on context what is expected to happen. And can a BPF program really help? Unless BPF programs that pulls some internal kernel strings to speed things up whenever userspace wants I don't see how it is that much better.
I don't think userspace is necessarily too slow to actively particitpate in providing direct scheduling hints either. Input processing can, for example, be off loaded to a real time scheduled thread, and plumbing any hints about future expectations from rendering, windowing and layout subsystems will be significantly easier to plumb to a real time input thread than translated into static informations or BPF programs.
I mean, the kernel side input handler is called from irq context long before even the scheduler gets involved..
But I think you are over-thinking the Alt + SomeOtherKey case. The important thing isn't what the other key is, it is just to know that Alt is a modifier key (ie. handle it on key-up instead of key-down). No need to over-complicate things. It's probably enough to give the kernel a list of modifier+key combo's that do _something_..
Perhaps I'm over thinking it, it just seems all so unnecessary to complicate the kernel so that it's able to predict when GUI animations will happen instead of the GUI itself doing it when it is actually beneficial. All it'd take (naively) is uapi for the three kind of boosts downstream now does automatically from input events.
And like I've said before, keyboard input is the least problematic in terms of latency. It is a _lot_ easier to notice lag with touch scrolling or stylus (on screen). (The latter case, I think wayland has some catching up to do compared to CrOS or android.. you really need a way to allow the app to do front buffer rendering to an overlay for the stylus case, because even just 16ms delay is _very_ noticeable.)
Sure, but here too userpsace (rt thread in the compositor) is probably a good enough place to predict when to boost since it will be the one proxies e.g. the stylus input events to the application.
Front buffering on the other hand is a very different topic ;)
Jonas
BR, -R
On Tue, Mar 21, 2023 at 6:24 AM Jonas Ådahl jadahl@gmail.com wrote:
On Fri, Mar 17, 2023 at 08:59:48AM -0700, Rob Clark wrote:
On Fri, Mar 17, 2023 at 3:23 AM Jonas Ådahl jadahl@gmail.com wrote:
On Thu, Mar 16, 2023 at 09:28:55AM -0700, Rob Clark wrote:
On Thu, Mar 16, 2023 at 2:26 AM Jonas Ådahl jadahl@gmail.com wrote:
On Wed, Mar 15, 2023 at 09:19:49AM -0700, Rob Clark wrote:
On Wed, Mar 15, 2023 at 6:53 AM Jonas Ådahl jadahl@gmail.com wrote:
>
> On Fri, Mar 10, 2023 at 09:38:18AM -0800, Rob Clark wrote:
> > On Fri, Mar 10, 2023 at 7:45 AM Jonas Ådahl jadahl@gmail.com wrote:
> > >
> > > On Wed, Mar 08, 2023 at 07:52:52AM -0800, Rob Clark wrote:
> > > > From: Rob Clark robdclark@chromium.org
> > > >
> > > > Add a way to hint to the fence signaler of an upcoming deadline, such as
> > > > vblank, which the fence waiter would prefer not to miss.  This is to aid
> > > > the fence signaler in making power management decisions, like boosting
> > > > frequency as the deadline approaches and awareness of missing deadlines
> > > > so that can be factored in to the frequency scaling.
> > > >
> > > > v2: Drop dma_fence::deadline and related logic to filter duplicate
> > > >     deadlines, to avoid increasing dma_fence size.  The fence-context
> > > >     implementation will need similar logic to track deadlines of all
> > > >     the fences on the same timeline.  [ckoenig]
> > > > v3: Clarify locking wrt. set_deadline callback
> > > > v4: Clarify in docs comment that this is a hint
> > > > v5: Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT.
> > > > v6: More docs
> > > > v7: Fix typo, clarify past deadlines
> > > >
> > > > Signed-off-by: Rob Clark robdclark@chromium.org
> > > > Reviewed-by: Christian König christian.koenig@amd.com
> > > > Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
> > > > Reviewed-by: Bagas Sanjaya bagasdotme@gmail.com
> > > > ---
> > >
> > > Hi Rob!
> > >
> > > >  Documentation/driver-api/dma-buf.rst |  6 +++
> > > >  drivers/dma-buf/dma-fence.c          | 59 ++++++++++++++++++++++++++++
> > > >  include/linux/dma-fence.h            | 22 +++++++++++
> > > >  3 files changed, 87 insertions(+)
> > > >
> > > > diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
> > > > index 622b8156d212..183e480d8cea 100644
> > > > --- a/Documentation/driver-api/dma-buf.rst
> > > > +++ b/Documentation/driver-api/dma-buf.rst
> > > > @@ -164,6 +164,12 @@ DMA Fence Signalling Annotations
> > > >  .. kernel-doc:: drivers/dma-buf/dma-fence.c
> > > >     :doc: fence signalling annotation
> > > >
> > > > +DMA Fence Deadline Hints
> > > > +~~~~~~~~~~~~~~~~~~~~~~~~
> > > > +
> > > > +.. kernel-doc:: drivers/dma-buf/dma-fence.c
> > > > +   :doc: deadline hints
> > > > +
> > > >  DMA Fences Functions Reference
> > > >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >
> > > > diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> > > > index 0de0482cd36e..f177c56269bb 100644
> > > > --- a/drivers/dma-buf/dma-fence.c
> > > > +++ b/drivers/dma-buf/dma-fence.c
> > > > @@ -912,6 +912,65 @@ dma_fence_wait_any_timeout(struct dma_fence **fences, uint32_t count,
> > > >  }
> > > >  EXPORT_SYMBOL(dma_fence_wait_any_timeout);
> > > >
> > > > +/**
> > > > + * DOC: deadline hints
> > > > + *
> > > > + * In an ideal world, it would be possible to pipeline a workload sufficiently
> > > > + * that a utilization based device frequency governor could arrive at a minimum
> > > > + * frequency that meets the requirements of the use-case, in order to minimize
> > > > + * power consumption.  But in the real world there are many workloads which
> > > > + * defy this ideal.  For example, but not limited to:
> > > > + *
> > > > + * * Workloads that ping-pong between device and CPU, with alternating periods
> > > > + *   of CPU waiting for device, and device waiting on CPU.  This can result in
> > > > + *   devfreq and cpufreq seeing idle time in their respective domains and in
> > > > + *   result reduce frequency.
> > > > + *
> > > > + * * Workloads that interact with a periodic time based deadline, such as double
> > > > + *   buffered GPU rendering vs vblank sync'd page flipping.  In this scenario,
> > > > + *   missing a vblank deadline results in an *increase* in idle time on the GPU
> > > > + *   (since it has to wait an additional vblank period), sending a signal to
> > > > + *   the GPU's devfreq to reduce frequency, when in fact the opposite is what is
> > > > + *   needed.
> > >
> > > This is the use case I'd like to get some better understanding about how
> > > this series intends to work, as the problematic scheduling behavior
> > > triggered by missed deadlines has plagued compositing display servers
> > > for a long time.
> > >
> > > I apologize, I'm not a GPU driver developer, nor an OpenGL driver
> > > developer, so I will need some hand holding when it comes to
> > > understanding exactly what piece of software is responsible for
> > > communicating what piece of information.
> > >
> > > > + *
> > > > + * To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline.
> > > > + * The deadline hint provides a way for the waiting driver, or userspace, to
> > > > + * convey an appropriate sense of urgency to the signaling driver.
> > > > + *
> > > > + * A deadline hint is given in absolute ktime (CLOCK_MONOTONIC for userspace
> > > > + * facing APIs).  The time could either be some point in the future (such as
> > > > + * the vblank based deadline for page-flipping, or the start of a compositor's
> > > > + * composition cycle), or the current time to indicate an immediate deadline
> > > > + * hint (Ie. forward progress cannot be made until this fence is signaled).
> > >
> > > Is it guaranteed that a GPU driver will use the actual start of the
> > > vblank as the effective deadline?  I have some memories of seeing
> > > something about vblank evasion browsing driver code, which I might have
> > > misunderstood, but I have yet to find whether this is something
> > > userspace can actually expect to be something it can rely on.
> >
> > I guess you mean s/GPU driver/display driver/ ?  It makes things more
> > clear if we talk about them separately even if they happen to be the
> > same device.
>
> Sure, sorry about being unclear about that.
>
> > Assuming that is what you mean, nothing strongly defines what the
> > deadline is.  In practice there is probably some buffering in the
> > display controller.  For ex, block based (including bandwidth
> > compressed) formats, you need to buffer up a row of blocks to
> > efficiently linearize for scanout.  So you probably need to latch some
> > time before you start sending pixel data to the display.  But details
> > like this are heavily implementation dependent.  I think the most
> > reasonable thing to target is start of vblank.
>
> The driver exposing those details would be quite useful for userspace
> though, so that it can delay committing updates to late, but not too
> late.  Setting a deadline to be the vblank seems easy enough, but it
> isn't enough for scheduling the actual commit.
I'm not entirely sure how that would even work.. but OTOH I think you are talking about something on the order of 100us? But that is a bit of another topic.
Yes, something like that. But yea, it's not really related. Scheduling commits closer to the deadline has more complex behavior than that too, e.g. the need for real-time scheduling, and knowing how long it usually takes to create and commit a frame and for the kernel to process it.
8-< *snip* 8-<
> > You need a fence to set the deadline, and for that work needs to be
> > flushed. But you can't associate a deadline with work that the kernel
> > is unaware of anyways.
>
> That makes sense, but it might also be a bit inadequate to have it as
> the only way to tell the kernel it should speed things up. Even with
> the trick i915 does, with GNOME Shell, we still end up with the
> feedback loop this series aims to mitigate. Doing triple buffering,
> i.e. delaying or dropping the first frame, is so far the best
> workaround that works, short of other tricks that make the kernel
> ramp up its clock. Ideally one should not have to choose between
> latency and frame drops.
Before you have a fence, the thing you want to be speeding up is the CPU, not the GPU. There are existing mechanisms for that.
Is there no benefit to let the GPU know earlier that it should speed up, so that when the job queue arrives, it's already up to speed?
Downstream we have input notifier that resumes the GPU so we can pipeline the 1-2ms it takes to boot up the GPU with userspace. But we wait to boost freq until we have cmdstream to submit, since that doesn't take as long. What needs help initially after input is all the stuff that happens on the CPU before the GPU can start to do anything ;-)
How do you deal with boosting CPU speeds downstream? Does the input notifier do that too?
Yes.. actually currently downstream (depending on device) we have 1 to 3 input notifiers, one for CPU boost, one for early-PSR-exit, and one to get a head start on booting up the GPU.
Would be really nice to upstream these, one way or the other, be it actually input event based, or via some uapi to just poke the kernel. I realize it's not related to this thread, so this is just me wishing things into the void.
There was a drm/input_helper proposed maybe a year or so back, mainly for the early-PSR-exit but I was planning to build on that for early GPU wake-up. I guess we should revisit it. Might not be the right place for cpu boost, but it solves some problems so it's a start.
As far as uapi, I think sysfs already gives you everything or at least most everything you need. For ex, /sys/devices/system/cpu/cpufreq/policy*/scaling_min_freq .. on the gpu side, for drivers using devfreq (ie. panfrost/msm/etc) there is similar sysfs. I'm not sure what sort of knobs are avail on intel/amd.
BR, -R
Btw, I guess I haven't made this clear, dma-fence deadline is trying to help the steady-state situation, rather than the input-latency situation. It might take a frame or two of missed deadlines for gpufreq to arrive at a good steady-state freq.
I'm just not sure it will help. Missed deadlines set at commit haven't been enough in the past to let the kernel understand it should speed things up before the next frame (which will be a whole frame late without any triple buffering, which should be a last resort), so I don't see how adding a userspace hook to do the same thing will help.
So deadline is just a superset of "right now" and "sometime in the future".. and this has been useful enough for i915 that they have both forms, when waiting on GPU via i915 specific ioctls and when pageflip (assuming userspace isn't deferring composition decision and instead just pushing it all down to the kernel). But this breaks down in a few cases:
1) non-pageflip (for ex. ping-ponging between cpu and gpu) use cases, when you wait via polling on a fence fd or wait via drm_syncobj instead of DRM_IOCTL_I915_GEM_WAIT
2) when userspace decides late in frame to not pageflip because app fence isn't signaled yet
It breaks down in practice today, because we do end up entering the low-freq feedback loop that triple buffering today effectively works around. That is even with non-delayed page flipping and a single pipeline source (compositor-only rendering), or when only using already-signaled client buffers when compositing.
Anyway, I don't doubt its usefulness, just a bit pessimistic.
And this is all done in a way that doesn't help for situations where you have separate kms and render devices, or where the kms driver doesn't bypass the atomic helpers (ie. uses drm_atomic_helper_wait_for_fences()). So the technique has already proven to be useful; this series just extends it beyond driver-specific primitives (ie. to dma_fence/drm_syncobj).
I think input latency and steady-state target frequency here are tightly linked; what we should aim for is to provide enough information at the right time so that it does *not* take a frame or two of missed deadlines to arrive at the target frequency, as those missed deadlines mean stuttering and/or lag.
If you have some magic way for a gl/vk driver to accurately predict how many cycles it will take to execute a sequence of draws, I'm all ears.
Realistically, the best solution on sudden input is to overshoot and let freqs settle back down.
But there is a lot more to input latency than GPU freq. In UI workloads, even fullscreen animation, I don't really see the GPU going above the 2nd lowest OPP even on relatively small things like a618. UI input latency (touch scrolling, on-screen stylus / low-latency-ink, animations) is a separate issue from what this series addresses, and doesn't have much to do with GPU freq.
That it helps with the deliberately late commit I do understand, but we don't do that yet; we intend to once there is kernel uapi that lets us do so without negative consequences.
TBF I'm of the belief that there is still a need for input based cpu boost (and early wake-up trigger for GPU).. we have something like this in CrOS kernel. That is a bit of a different topic, but my point is that fence deadlines are just one of several things we need to optimize power/perf and responsiveness, rather than the single thing that solves every problem under the sun ;-)
Perhaps; but I believe it's a bit of a back channel of intent; the piece of the puzzle that has the information to know whether there is a need to actually speed up is the compositor, not the kernel.
For example, pressing 'p' while a terminal is focused does not need high frequency clocks, it just needs the terminal emulator to draw a 'p' and the compositor to composite that update. Pressing <Super> may however trigger a non-trivial animation moving a lot of stuff around on screen, maybe triggering Wayland clients to draw and what not, and should most arguably have the ability to "warn" the kernel about the upcoming flood of work before it is already knocking on its door step.
The super key is problematic, but not for the reason you think. It is because it is a case where we should boost on key-up instead of key-down.. and the second key-up event comes after the cpu-boost is already in its cool-down period. But even if suboptimal in cases like this, it is still useful for touch/stylus cases where the slightest of lag is much more perceptible.
Other keys are even more problematic. Alt, for example, does nothing, Alt + Tab does some light rendering, but Alt + KeyAboveTab will, depending on the current active applications, suddenly trigger N Wayland surfaces to start rendering at the same time.
This is getting off topic but I kinda favor coming up with some sort of static definition that userspace could give the kernel to let the kernel know what input to boost on. Or maybe something could be done with BPF?
I have a hard time seeing how any static information could be enough; it depends too much on context what is expected to happen. And can a BPF program really help? Unless a BPF program can pull some internal kernel strings to speed things up whenever userspace wants, I don't see how it is that much better.
I don't think userspace is necessarily too slow to actively participate in providing direct scheduling hints either. Input processing can, for example, be offloaded to a real-time scheduled thread, and any hints about future expectations from the rendering, windowing and layout subsystems will be significantly easier to plumb to a real-time input thread than to translate into static information or BPF programs.
I mean, the kernel side input handler is called from irq context long before even the scheduler gets involved..
But I think you are over-thinking the Alt + SomeOtherKey case. The important thing isn't what the other key is, it is just to know that Alt is a modifier key (ie. handle it on key-up instead of key-down). No need to over-complicate things. It's probably enough to give the kernel a list of modifier+key combos that do _something_..
Perhaps I'm overthinking it; it just seems so unnecessary to complicate the kernel so that it's able to predict when GUI animations will happen, instead of the GUI itself doing it when it is actually beneficial. All it'd take (naively) is uapi for the three kinds of boosts downstream now does automatically from input events.
And like I've said before, keyboard input is the least problematic in terms of latency. It is a _lot_ easier to notice lag with touch scrolling or stylus (on screen). (The latter case, I think wayland has some catching up to do compared to CrOS or android.. you really need a way to allow the app to do front buffer rendering to an overlay for the stylus case, because even just 16ms delay is _very_ noticeable.)
Sure, but here too userspace (an rt thread in the compositor) is probably a good enough place to predict when to boost, since it will be the one proxying e.g. the stylus input events to the application.
Front buffering on the other hand is a very different topic ;)
Jonas
BR, -R
From: Rob Clark robdclark@chromium.org
Propagate the deadline to all the fences in the array.
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Christian König christian.koenig@amd.com
---
 drivers/dma-buf/dma-fence-array.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index 5c8a7084577b..9b3ce8948351 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -123,12 +123,23 @@ static void dma_fence_array_release(struct dma_fence *fence)
 	dma_fence_free(fence);
 }
+static void dma_fence_array_set_deadline(struct dma_fence *fence,
+					 ktime_t deadline)
+{
+	struct dma_fence_array *array = to_dma_fence_array(fence);
+	unsigned i;
+
+	for (i = 0; i < array->num_fences; ++i)
+		dma_fence_set_deadline(array->fences[i], deadline);
+}
+
 const struct dma_fence_ops dma_fence_array_ops = {
 	.get_driver_name = dma_fence_array_get_driver_name,
 	.get_timeline_name = dma_fence_array_get_timeline_name,
 	.enable_signaling = dma_fence_array_enable_signaling,
 	.signaled = dma_fence_array_signaled,
 	.release = dma_fence_array_release,
+	.set_deadline = dma_fence_array_set_deadline,
 };
 EXPORT_SYMBOL(dma_fence_array_ops);
From: Rob Clark robdclark@chromium.org
Propagate the deadline to all the fences in the chain.
v2: Use dma_fence_chain_contained [Tvrtko]
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Christian König christian.koenig@amd.com for this one.
---
 drivers/dma-buf/dma-fence-chain.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
index a0d920576ba6..9663ba1bb6ac 100644
--- a/drivers/dma-buf/dma-fence-chain.c
+++ b/drivers/dma-buf/dma-fence-chain.c
@@ -206,6 +206,17 @@ static void dma_fence_chain_release(struct dma_fence *fence)
 	dma_fence_free(fence);
 }
+
+static void dma_fence_chain_set_deadline(struct dma_fence *fence,
+					 ktime_t deadline)
+{
+	dma_fence_chain_for_each(fence, fence) {
+		struct dma_fence *f = dma_fence_chain_contained(fence);
+
+		dma_fence_set_deadline(f, deadline);
+	}
+}
+
 const struct dma_fence_ops dma_fence_chain_ops = {
 	.use_64bit_seqno = true,
 	.get_driver_name = dma_fence_chain_get_driver_name,
@@ -213,6 +224,7 @@ const struct dma_fence_ops dma_fence_chain_ops = {
 	.enable_signaling = dma_fence_chain_enable_signaling,
 	.signaled = dma_fence_chain_signaled,
 	.release = dma_fence_chain_release,
+	.set_deadline = dma_fence_chain_set_deadline,
 };
 EXPORT_SYMBOL(dma_fence_chain_ops);
From: Rob Clark robdclark@chromium.org
Add a way to set a deadline on remaining resv fences according to the requested usage.
Signed-off-by: Rob Clark robdclark@chromium.org
Reviewed-by: Christian König christian.koenig@amd.com
---
 drivers/dma-buf/dma-resv.c | 22 ++++++++++++++++++++++
 include/linux/dma-resv.h   |  2 ++
 2 files changed, 24 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 1c76aed8e262..2a594b754af1 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -684,6 +684,28 @@ long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
 }
 EXPORT_SYMBOL_GPL(dma_resv_wait_timeout);
+/**
+ * dma_resv_set_deadline - Set a deadline on reservation's objects fences
+ * @obj: the reservation object
+ * @usage: controls which fences to include, see enum dma_resv_usage.
+ * @deadline: the requested deadline (MONOTONIC)
+ *
+ * May be called without holding the dma_resv lock.  Sets @deadline on
+ * all fences filtered by @usage.
+ */
+void dma_resv_set_deadline(struct dma_resv *obj, enum dma_resv_usage usage,
+			   ktime_t deadline)
+{
+	struct dma_resv_iter cursor;
+	struct dma_fence *fence;
+
+	dma_resv_iter_begin(&cursor, obj, usage);
+	dma_resv_for_each_fence_unlocked(&cursor, fence) {
+		dma_fence_set_deadline(fence, deadline);
+	}
+	dma_resv_iter_end(&cursor);
+}
+EXPORT_SYMBOL_GPL(dma_resv_set_deadline);
 /**
  * dma_resv_test_signaled - Test if a reservation object's fences have been
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 0637659a702c..8d0e34dad446 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -479,6 +479,8 @@ int dma_resv_get_singleton(struct dma_resv *obj, enum dma_resv_usage usage,
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
 			   bool intr, unsigned long timeout);
+void dma_resv_set_deadline(struct dma_resv *obj, enum dma_resv_usage usage,
+			   ktime_t deadline);
 bool dma_resv_test_signaled(struct dma_resv *obj, enum dma_resv_usage usage);
 void dma_resv_describe(struct dma_resv *obj, struct seq_file *seq);
From: Rob Clark robdclark@chromium.org
We had all of the internal driver APIs, but not the all-important userspace uABI, in the dma-buf doc. Fix that. And re-arrange the comments slightly, as otherwise the comments for the ioctl nr defines would not show up.
v2: Fix docs build warning coming from newly including the uabi header in the docs build
Signed-off-by: Rob Clark robdclark@chromium.org
Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
---
 Documentation/driver-api/dma-buf.rst | 10 ++++++--
 include/uapi/linux/sync_file.h       | 37 +++++++++++-----------------
 2 files changed, 23 insertions(+), 24 deletions(-)
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index 183e480d8cea..ff3f8da296af 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -203,8 +203,8 @@ DMA Fence unwrap
 .. kernel-doc:: include/linux/dma-fence-unwrap.h
    :internal:
-DMA Fence uABI/Sync File
-~~~~~~~~~~~~~~~~~~~~~~~~
+DMA Fence Sync File
+~~~~~~~~~~~~~~~~~~~
 .. kernel-doc:: drivers/dma-buf/sync_file.c
    :export:
@@ -212,6 +212,12 @@ DMA Fence uABI/Sync File
 .. kernel-doc:: include/linux/sync_file.h
    :internal:
+DMA Fence Sync File uABI
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. kernel-doc:: include/uapi/linux/sync_file.h
+   :internal:
+
 Indefinite DMA Fences
 ~~~~~~~~~~~~~~~~~~~~~
diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h
index ee2dcfb3d660..7e42a5b7558b 100644
--- a/include/uapi/linux/sync_file.h
+++ b/include/uapi/linux/sync_file.h
@@ -16,12 +16,16 @@
 #include <linux/types.h>
 /**
- * struct sync_merge_data - data passed to merge ioctl
+ * struct sync_merge_data - SYNC_IOC_MERGE: merge two fences
  * @name:	name of new fence
  * @fd2:	file descriptor of second fence
  * @fence:	returns the fd of the new fence to userspace
  * @flags:	merge_data flags
  * @pad:	padding for 64-bit alignment, should always be zero
+ *
+ * Creates a new fence containing copies of the sync_pts in both
+ * the calling fd and sync_merge_data.fd2.  Returns the new fence's
+ * fd in sync_merge_data.fence
  */
 struct sync_merge_data {
 	char	name[32];
@@ -34,8 +38,8 @@ struct sync_merge_data {
 /**
  * struct sync_fence_info - detailed fence information
  * @obj_name:		name of parent sync_timeline
-* @driver_name:	name of driver implementing the parent
-* @status:		status of the fence 0:active 1:signaled <0:error
+ * @driver_name:	name of driver implementing the parent
+ * @status:		status of the fence 0:active 1:signaled <0:error
  * @flags:		fence_info flags
  * @timestamp_ns:	timestamp of status change in nanoseconds
  */
@@ -48,14 +52,19 @@ struct sync_fence_info {
 };
 /**
- * struct sync_file_info - data returned from fence info ioctl
+ * struct sync_file_info - SYNC_IOC_FILE_INFO: get detailed information on a sync_file
  * @name:	name of fence
  * @status:	status of fence. 1: signaled 0:active <0:error
  * @flags:	sync_file_info flags
  * @num_fences	number of fences in the sync_file
  * @pad:	padding for 64-bit alignment, should always be zero
- * @sync_fence_info: pointer to array of structs sync_fence_info with all
+ * @sync_fence_info: pointer to array of struct &sync_fence_info with all
  *		 fences in the sync_file
+ *
+ * Takes a struct sync_file_info. If num_fences is 0, the field is updated
+ * with the actual number of fences. If num_fences is > 0, the system will
+ * use the pointer provided on sync_fence_info to return up to num_fences of
+ * struct sync_fence_info, with detailed fence information.
  */
 struct sync_file_info {
 	char	name[32];
@@ -69,30 +78,14 @@ struct sync_file_info {
#define SYNC_IOC_MAGIC '>'
-/**
+/*
  * Opcodes 0, 1 and 2 were burned during a API change to avoid users of the
  * old API to get weird errors when trying to handling sync_files. The API
  * change happened during the de-stage of the Sync Framework when there was
  * no upstream users available.
  */
-/** - * DOC: SYNC_IOC_MERGE - merge two fences - * - * Takes a struct sync_merge_data. Creates a new fence containing copies of - * the sync_pts in both the calling fd and sync_merge_data.fd2. Returns the - * new fence's fd in sync_merge_data.fence - */ #define SYNC_IOC_MERGE _IOWR(SYNC_IOC_MAGIC, 3, struct sync_merge_data) - -/** - * DOC: SYNC_IOC_FILE_INFO - get detailed information on a sync_file - * - * Takes a struct sync_file_info. If num_fences is 0, the field is updated - * with the actual number of fences. If num_fences is > 0, the system will - * use the pointer provided on sync_fence_info to return up to num_fences of - * struct sync_fence_info, with detailed fence information. - */ #define SYNC_IOC_FILE_INFO _IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info)
#endif /* _UAPI_LINUX_SYNC_H */
From: Rob Clark robdclark@chromium.org
The initial purpose is for igt tests, but this would also be useful for compositors that wait until close to vblank deadline to make decisions about which frame to show.
The igt tests can be found at:
https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadli...
v2: Clarify the timebase, add link to igt tests v3: Use u64 value in ns to express deadline. v4: More doc
Signed-off-by: Rob Clark robdclark@chromium.org Acked-by: Pekka Paalanen pekka.paalanen@collabora.com --- drivers/dma-buf/dma-fence.c | 3 ++- drivers/dma-buf/sync_file.c | 19 +++++++++++++++++++ include/uapi/linux/sync_file.h | 22 ++++++++++++++++++++++ 3 files changed, 43 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c index f177c56269bb..74e36f6d05b0 100644 --- a/drivers/dma-buf/dma-fence.c +++ b/drivers/dma-buf/dma-fence.c @@ -933,7 +933,8 @@ EXPORT_SYMBOL(dma_fence_wait_any_timeout); * the GPU's devfreq to reduce frequency, when in fact the opposite is what is * needed. * - * To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline. + * To this end, deadline hint(s) can be set on a &dma_fence via &dma_fence_set_deadline + * (or indirectly via userspace facing ioctls like &sync_set_deadline). * The deadline hint provides a way for the waiting driver, or userspace, to * convey an appropriate sense of urgency to the signaling driver. * diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c index af57799c86ce..418021cfb87c 100644 --- a/drivers/dma-buf/sync_file.c +++ b/drivers/dma-buf/sync_file.c @@ -350,6 +350,22 @@ static long sync_file_ioctl_fence_info(struct sync_file *sync_file, return ret; }
+static int sync_file_ioctl_set_deadline(struct sync_file *sync_file, + unsigned long arg) +{ + struct sync_set_deadline ts; + + if (copy_from_user(&ts, (void __user *)arg, sizeof(ts))) + return -EFAULT; + + if (ts.pad) + return -EINVAL; + + dma_fence_set_deadline(sync_file->fence, ns_to_ktime(ts.deadline_ns)); + + return 0; +} + static long sync_file_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -362,6 +378,9 @@ static long sync_file_ioctl(struct file *file, unsigned int cmd, case SYNC_IOC_FILE_INFO: return sync_file_ioctl_fence_info(sync_file, arg);
+ case SYNC_IOC_SET_DEADLINE: + return sync_file_ioctl_set_deadline(sync_file, arg); + default: return -ENOTTY; } diff --git a/include/uapi/linux/sync_file.h b/include/uapi/linux/sync_file.h index 7e42a5b7558b..d61752dca4c6 100644 --- a/include/uapi/linux/sync_file.h +++ b/include/uapi/linux/sync_file.h @@ -76,6 +76,27 @@ struct sync_file_info { __u64 sync_fence_info; };
+/** + * struct sync_set_deadline - SYNC_IOC_SET_DEADLINE - set a deadline hint on a fence + * @deadline_ns: absolute time of the deadline + * @pad: must be zero + * + * Allows userspace to set a deadline on a fence, see &dma_fence_set_deadline + * + * The timebase for the deadline is CLOCK_MONOTONIC (same as vblank). For + * example + * + * clock_gettime(CLOCK_MONOTONIC, &t); + * deadline_ns = (t.tv_sec * 1000000000L) + t.tv_nsec + ns_until_deadline + */ +struct sync_set_deadline { + __u64 deadline_ns; + /* Not strictly needed for alignment but gives some possibility + * for future extension: + */ + __u64 pad; +}; + #define SYNC_IOC_MAGIC '>'
/* @@ -87,5 +108,6 @@ struct sync_file_info {
#define SYNC_IOC_MERGE _IOWR(SYNC_IOC_MAGIC, 3, struct sync_merge_data) #define SYNC_IOC_FILE_INFO _IOWR(SYNC_IOC_MAGIC, 4, struct sync_file_info) +#define SYNC_IOC_SET_DEADLINE _IOW(SYNC_IOC_MAGIC, 5, struct sync_set_deadline)
#endif /* _UAPI_LINUX_SYNC_H */
From: Rob Clark robdclark@chromium.org
This consists of simply storing the soonest deadline, and adding an ioctl to retrieve it. This can be used in conjunction with the SET_DEADLINE ioctl on a fence fd for testing: create various sw_sync fences, merge them into a fence-array, set a deadline on the fence-array, and confirm that it is propagated properly to each fence.
v2: Switch UABI to express deadline as u64 v3: More verbose UAPI docs, show how to convert from timespec v4: Better comments, track the soonest deadline, as a normal fence implementation would, return an error if no deadline set.
Signed-off-by: Rob Clark robdclark@chromium.org Reviewed-by: Christian König christian.koenig@amd.com Acked-by: Pekka Paalanen pekka.paalanen@collabora.com --- drivers/dma-buf/sw_sync.c | 81 ++++++++++++++++++++++++++++++++++++ drivers/dma-buf/sync_debug.h | 2 + 2 files changed, 83 insertions(+)
diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c index 348b3a9170fa..f53071bca3af 100644 --- a/drivers/dma-buf/sw_sync.c +++ b/drivers/dma-buf/sw_sync.c @@ -52,12 +52,33 @@ struct sw_sync_create_fence_data { __s32 fence; /* fd of new fence */ };
+/** + * struct sw_sync_get_deadline - get the deadline hint of a sw_sync fence + * @deadline_ns: absolute time of the deadline + * @pad: must be zero + * @fence_fd: the sw_sync fence fd (in) + * + * Return the earliest deadline set on the fence. The timebase for the + * deadline is CLOCK_MONOTONIC (same as vblank). If there is no deadline + * set on the fence, this ioctl will return -ENOENT. + */ +struct sw_sync_get_deadline { + __u64 deadline_ns; + __u32 pad; + __s32 fence_fd; +}; + #define SW_SYNC_IOC_MAGIC 'W'
#define SW_SYNC_IOC_CREATE_FENCE _IOWR(SW_SYNC_IOC_MAGIC, 0,\ struct sw_sync_create_fence_data)
#define SW_SYNC_IOC_INC _IOW(SW_SYNC_IOC_MAGIC, 1, __u32) +#define SW_SYNC_GET_DEADLINE _IOWR(SW_SYNC_IOC_MAGIC, 2, \ + struct sw_sync_get_deadline) + + +#define SW_SYNC_HAS_DEADLINE_BIT DMA_FENCE_FLAG_USER_BITS
static const struct dma_fence_ops timeline_fence_ops;
@@ -171,6 +192,22 @@ static void timeline_fence_timeline_value_str(struct dma_fence *fence, snprintf(str, size, "%d", parent->value); }
+static void timeline_fence_set_deadline(struct dma_fence *fence, ktime_t deadline) +{ + struct sync_pt *pt = dma_fence_to_sync_pt(fence); + unsigned long flags; + + spin_lock_irqsave(fence->lock, flags); + if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) { + if (ktime_before(deadline, pt->deadline)) + pt->deadline = deadline; + } else { + pt->deadline = deadline; + set_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags); + } + spin_unlock_irqrestore(fence->lock, flags); +} + static const struct dma_fence_ops timeline_fence_ops = { .get_driver_name = timeline_fence_get_driver_name, .get_timeline_name = timeline_fence_get_timeline_name, @@ -179,6 +216,7 @@ static const struct dma_fence_ops timeline_fence_ops = { .release = timeline_fence_release, .fence_value_str = timeline_fence_value_str, .timeline_value_str = timeline_fence_timeline_value_str, + .set_deadline = timeline_fence_set_deadline, };
/** @@ -387,6 +425,46 @@ static long sw_sync_ioctl_inc(struct sync_timeline *obj, unsigned long arg) return 0; }
+static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long arg) +{ + struct sw_sync_get_deadline data; + struct dma_fence *fence; + struct sync_pt *pt; + int ret = 0; + + if (copy_from_user(&data, (void __user *)arg, sizeof(data))) + return -EFAULT; + + if (data.deadline_ns || data.pad) + return -EINVAL; + + fence = sync_file_get_fence(data.fence_fd); + if (!fence) + return -EINVAL; + + pt = dma_fence_to_sync_pt(fence); + if (!pt) + return -EINVAL; + + spin_lock(fence->lock); + if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) { + data.deadline_ns = ktime_to_ns(pt->deadline); + } else { + ret = -ENOENT; + } + spin_unlock(fence->lock); + + dma_fence_put(fence); + + if (ret) + return ret; + + if (copy_to_user((void __user *)arg, &data, sizeof(data))) + return -EFAULT; + + return 0; +} + static long sw_sync_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -399,6 +477,9 @@ static long sw_sync_ioctl(struct file *file, unsigned int cmd, case SW_SYNC_IOC_INC: return sw_sync_ioctl_inc(obj, arg);
+ case SW_SYNC_GET_DEADLINE: + return sw_sync_ioctl_get_deadline(obj, arg); + default: return -ENOTTY; } diff --git a/drivers/dma-buf/sync_debug.h b/drivers/dma-buf/sync_debug.h index 6176e52ba2d7..a1bdd62efccd 100644 --- a/drivers/dma-buf/sync_debug.h +++ b/drivers/dma-buf/sync_debug.h @@ -55,11 +55,13 @@ static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence) * @base: base fence object * @link: link on the sync timeline's list * @node: node in the sync timeline's tree + * @deadline: the earliest fence deadline hint */ struct sync_pt { struct dma_fence base; struct list_head link; struct rb_node node; + ktime_t deadline; };
extern const struct file_operations sw_sync_debugfs_fops;
On 08/03/2023 15:52, Rob Clark wrote:
From: Rob Clark robdclark@chromium.org
This consists of simply storing the soonest deadline, and adding an ioctl to retrieve it. This can be used in conjunction with the SET_DEADLINE ioctl on a fence fd for testing: create various sw_sync fences, merge them into a fence-array, set a deadline on the fence-array, and confirm that it is propagated properly to each fence.
v2: Switch UABI to express deadline as u64 v3: More verbose UAPI docs, show how to convert from timespec v4: Better comments, track the soonest deadline, as a normal fence implementation would, return an error if no deadline set.
Signed-off-by: Rob Clark robdclark@chromium.org Reviewed-by: Christian König christian.koenig@amd.com Acked-by: Pekka Paalanen pekka.paalanen@collabora.com
drivers/dma-buf/sw_sync.c | 81 ++++++++++++++++++++++++++++++++++++ drivers/dma-buf/sync_debug.h | 2 + 2 files changed, 83 insertions(+)
diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c index 348b3a9170fa..f53071bca3af 100644 --- a/drivers/dma-buf/sw_sync.c +++ b/drivers/dma-buf/sw_sync.c @@ -52,12 +52,33 @@ struct sw_sync_create_fence_data { __s32 fence; /* fd of new fence */ }; +/**
+ * struct sw_sync_get_deadline - get the deadline hint of a sw_sync fence
+ * @deadline_ns: absolute time of the deadline
+ * @pad: must be zero
+ * @fence_fd: the sw_sync fence fd (in)
+ *
+ * Return the earliest deadline set on the fence. The timebase for the
+ * deadline is CLOCK_MONOTONIC (same as vblank). If there is no deadline
Mentioning vblank reads odd since this is drivers/dma-buf/. Dunno.
+ * set on the fence, this ioctl will return -ENOENT.
+ */
+struct sw_sync_get_deadline {
+	__u64 deadline_ns;
+	__u32 pad;
+	__s32 fence_fd;
+};
 #define SW_SYNC_IOC_MAGIC 'W'
#define SW_SYNC_IOC_CREATE_FENCE _IOWR(SW_SYNC_IOC_MAGIC, 0,\ struct sw_sync_create_fence_data) #define SW_SYNC_IOC_INC _IOW(SW_SYNC_IOC_MAGIC, 1, __u32) +#define SW_SYNC_GET_DEADLINE _IOWR(SW_SYNC_IOC_MAGIC, 2, \
struct sw_sync_get_deadline)
+#define SW_SYNC_HAS_DEADLINE_BIT DMA_FENCE_FLAG_USER_BITS static const struct dma_fence_ops timeline_fence_ops; @@ -171,6 +192,22 @@ static void timeline_fence_timeline_value_str(struct dma_fence *fence, snprintf(str, size, "%d", parent->value); } +static void timeline_fence_set_deadline(struct dma_fence *fence, ktime_t deadline) +{
+	struct sync_pt *pt = dma_fence_to_sync_pt(fence);
+	unsigned long flags;
+
+	spin_lock_irqsave(fence->lock, flags);
+	if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
+		if (ktime_before(deadline, pt->deadline))
+			pt->deadline = deadline;
+	} else {
+		pt->deadline = deadline;
+		set_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags);
FWIW could use __set_bit to avoid needless atomic under spinlock.
+	}
+	spin_unlock_irqrestore(fence->lock, flags);
+}
 static const struct dma_fence_ops timeline_fence_ops = { .get_driver_name = timeline_fence_get_driver_name, .get_timeline_name = timeline_fence_get_timeline_name,
@@ -179,6 +216,7 @@ static const struct dma_fence_ops timeline_fence_ops = { .release = timeline_fence_release, .fence_value_str = timeline_fence_value_str, .timeline_value_str = timeline_fence_timeline_value_str,
+	.set_deadline = timeline_fence_set_deadline,
 };
/** @@ -387,6 +425,46 @@ static long sw_sync_ioctl_inc(struct sync_timeline *obj, unsigned long arg) return 0; } +static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long arg) +{
+	struct sw_sync_get_deadline data;
+	struct dma_fence *fence;
+	struct sync_pt *pt;
+	int ret = 0;
+
+	if (copy_from_user(&data, (void __user *)arg, sizeof(data)))
+		return -EFAULT;
+
+	if (data.deadline_ns || data.pad)
+		return -EINVAL;
+
+	fence = sync_file_get_fence(data.fence_fd);
+	if (!fence)
+		return -EINVAL;
+
+	pt = dma_fence_to_sync_pt(fence);
+	if (!pt)
+		return -EINVAL;
+
+	spin_lock(fence->lock);
This may need to be _irq.
+	if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
+		data.deadline_ns = ktime_to_ns(pt->deadline);
+	} else {
+		ret = -ENOENT;
+	}
+	spin_unlock(fence->lock);
+
+	dma_fence_put(fence);
+
+	if (ret)
+		return ret;
+
+	if (copy_to_user((void __user *)arg, &data, sizeof(data)))
+		return -EFAULT;
+
+	return 0;
+}
 static long sw_sync_ioctl(struct file *file, unsigned int cmd, unsigned long arg) {
@@ -399,6 +477,9 @@ static long sw_sync_ioctl(struct file *file, unsigned int cmd, case SW_SYNC_IOC_INC: return sw_sync_ioctl_inc(obj, arg);
+	case SW_SYNC_GET_DEADLINE:
+		return sw_sync_ioctl_get_deadline(obj, arg);
+
 	default: return -ENOTTY; }
diff --git a/drivers/dma-buf/sync_debug.h b/drivers/dma-buf/sync_debug.h index 6176e52ba2d7..a1bdd62efccd 100644 --- a/drivers/dma-buf/sync_debug.h +++ b/drivers/dma-buf/sync_debug.h @@ -55,11 +55,13 @@ static inline struct sync_timeline *dma_fence_parent(struct dma_fence *fence)
  * @base: base fence object
  * @link: link on the sync timeline's list
  * @node: node in the sync timeline's tree
+ * @deadline: the earliest fence deadline hint
  */
 struct sync_pt {
 	struct dma_fence base;
 	struct list_head link;
 	struct rb_node node;
+	ktime_t deadline;
 };
extern const struct file_operations sw_sync_debugfs_fops;
Regards,
Tvrtko
As the finished fence is the one that is exposed to userspace, and therefore the one that other operations, like atomic update, would block on, we need to propagate the deadline from the finished fence to the actual hw fence.
v2: Split into drm_sched_fence_set_parent() (ckoenig) v3: Ensure a thread calling drm_sched_fence_set_deadline_finished() sees fence->parent set before drm_sched_fence_set_parent() does this test_bit(DMA_FENCE_FLAG_HAS_DEADLINE_BIT).
Signed-off-by: Rob Clark robdclark@chromium.org Acked-by: Luben Tuikov luben.tuikov@amd.com --- drivers/gpu/drm/scheduler/sched_fence.c | 46 +++++++++++++++++++++++++ drivers/gpu/drm/scheduler/sched_main.c | 2 +- include/drm/gpu_scheduler.h | 17 +++++++++ 3 files changed, 64 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c index 7fd869520ef2..fe9c6468e440 100644 --- a/drivers/gpu/drm/scheduler/sched_fence.c +++ b/drivers/gpu/drm/scheduler/sched_fence.c @@ -123,6 +123,37 @@ static void drm_sched_fence_release_finished(struct dma_fence *f) dma_fence_put(&fence->scheduled); }
+static void drm_sched_fence_set_deadline_finished(struct dma_fence *f, + ktime_t deadline) +{ + struct drm_sched_fence *fence = to_drm_sched_fence(f); + struct dma_fence *parent; + unsigned long flags; + + spin_lock_irqsave(&fence->lock, flags); + + /* If we already have an earlier deadline, keep it: */ + if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) && + ktime_before(fence->deadline, deadline)) { + spin_unlock_irqrestore(&fence->lock, flags); + return; + } + + fence->deadline = deadline; + set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags); + + spin_unlock_irqrestore(&fence->lock, flags); + + /* + * smp_load_acquire() to ensure that if we are racing another + * thread calling drm_sched_fence_set_parent(), we see + * the parent set before it calls test_bit(HAS_DEADLINE_BIT) + */ + parent = smp_load_acquire(&fence->parent); + if (parent) + dma_fence_set_deadline(parent, deadline); +} + static const struct dma_fence_ops drm_sched_fence_ops_scheduled = { .get_driver_name = drm_sched_fence_get_driver_name, .get_timeline_name = drm_sched_fence_get_timeline_name, @@ -133,6 +164,7 @@ static const struct dma_fence_ops drm_sched_fence_ops_finished = { .get_driver_name = drm_sched_fence_get_driver_name, .get_timeline_name = drm_sched_fence_get_timeline_name, .release = drm_sched_fence_release_finished, + .set_deadline = drm_sched_fence_set_deadline_finished, };
struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f) @@ -147,6 +179,20 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f) } EXPORT_SYMBOL(to_drm_sched_fence);
+void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence, + struct dma_fence *fence) +{ + /* + * smp_store_release() to ensure another thread racing us + * in drm_sched_fence_set_deadline_finished() sees the + * fence's parent set before test_bit() + */ + smp_store_release(&s_fence->parent, dma_fence_get(fence)); + if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, + &s_fence->finished.flags)) + dma_fence_set_deadline(fence, s_fence->deadline); +} + struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity, void *owner) { diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 4e6ad6e122bc..007f98c48f8d 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -1019,7 +1019,7 @@ static int drm_sched_main(void *param) drm_sched_fence_scheduled(s_fence);
if (!IS_ERR_OR_NULL(fence)) { - s_fence->parent = dma_fence_get(fence); + drm_sched_fence_set_parent(s_fence, fence); /* Drop for original kref_init of the fence */ dma_fence_put(fence);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 9db9e5e504ee..99584e457153 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -41,6 +41,15 @@ */ #define DRM_SCHED_FENCE_DONT_PIPELINE DMA_FENCE_FLAG_USER_BITS
+/** + * DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT - A fence deadline hint has been set + * + * Because a deadline hint can be set before the backing hw + * fence is created, we need to keep track of whether a deadline has already + * been set. + */ +#define DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT (DMA_FENCE_FLAG_USER_BITS + 1) + enum dma_resv_usage; struct dma_resv; struct drm_gem_object; @@ -280,6 +289,12 @@ struct drm_sched_fence { */ struct dma_fence finished;
+ /** + * @deadline: deadline set on &drm_sched_fence.finished which + * potentially needs to be propagated to &drm_sched_fence.parent + */ + ktime_t deadline; + /** * @parent: the fence returned by &drm_sched_backend_ops.run_job * when scheduling the job on hardware. We signal the @@ -568,6 +583,8 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity, enum drm_sched_priority priority); bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
+void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence, + struct dma_fence *fence); struct drm_sched_fence *drm_sched_fence_alloc( struct drm_sched_entity *s_entity, void *owner); void drm_sched_fence_init(struct drm_sched_fence *fence,
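The deadline filtering in this patch (and in the sw_sync one) reduces to the same keep-the-earliest rule that runs under the fence lock. A plain-C sketch, illustrative only and not kernel code:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the earliest-deadline filter that both
 * timeline_fence_set_deadline() and
 * drm_sched_fence_set_deadline_finished() perform under the fence lock.
 * Returns true when the stored deadline changed, i.e. when the hint
 * still needs to be propagated (to the parent fence, in the scheduler
 * case).
 */
bool track_earliest_deadline(int64_t *stored_ns, bool *has_deadline,
			     int64_t new_ns)
{
	/* Keep an already-set earlier (or equal) deadline. */
	if (*has_deadline && *stored_ns <= new_ns)
		return false;

	*stored_ns = new_ns;
	*has_deadline = true;
	return true;
}
```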
From: Rob Clark robdclark@chromium.org
Track the nearest deadline on a fence timeline and set a timer to expire shortly before the deadline, triggering a boost if the fence has not yet been signaled.
v2: rebase
Signed-off-by: Rob Clark robdclark@chromium.org --- drivers/gpu/drm/msm/msm_fence.c | 74 +++++++++++++++++++++++++++++++++ drivers/gpu/drm/msm/msm_fence.h | 20 +++++++++ 2 files changed, 94 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c index 56641408ea74..51b461f32103 100644 --- a/drivers/gpu/drm/msm/msm_fence.c +++ b/drivers/gpu/drm/msm/msm_fence.c @@ -8,6 +8,35 @@
#include "msm_drv.h" #include "msm_fence.h" +#include "msm_gpu.h" + +static struct msm_gpu *fctx2gpu(struct msm_fence_context *fctx) +{ + struct msm_drm_private *priv = fctx->dev->dev_private; + return priv->gpu; +} + +static enum hrtimer_restart deadline_timer(struct hrtimer *t) +{ + struct msm_fence_context *fctx = container_of(t, + struct msm_fence_context, deadline_timer); + + kthread_queue_work(fctx2gpu(fctx)->worker, &fctx->deadline_work); + + return HRTIMER_NORESTART; +} + +static void deadline_work(struct kthread_work *work) +{ + struct msm_fence_context *fctx = container_of(work, + struct msm_fence_context, deadline_work); + + /* If deadline fence has already passed, nothing to do: */ + if (msm_fence_completed(fctx, fctx->next_deadline_fence)) + return; + + msm_devfreq_boost(fctx2gpu(fctx), 2); +}
struct msm_fence_context * @@ -36,6 +65,13 @@ msm_fence_context_alloc(struct drm_device *dev, volatile uint32_t *fenceptr, fctx->completed_fence = fctx->last_fence; *fctx->fenceptr = fctx->last_fence;
+ hrtimer_init(&fctx->deadline_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS); + fctx->deadline_timer.function = deadline_timer; + + kthread_init_work(&fctx->deadline_work, deadline_work); + + fctx->next_deadline = ktime_get(); + return fctx; }
@@ -62,6 +98,8 @@ void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence) spin_lock_irqsave(&fctx->spinlock, flags); if (fence_after(fence, fctx->completed_fence)) fctx->completed_fence = fence; + if (msm_fence_completed(fctx, fctx->next_deadline_fence)) + hrtimer_cancel(&fctx->deadline_timer); spin_unlock_irqrestore(&fctx->spinlock, flags); }
@@ -92,10 +130,46 @@ static bool msm_fence_signaled(struct dma_fence *fence) return msm_fence_completed(f->fctx, f->base.seqno); }
+static void msm_fence_set_deadline(struct dma_fence *fence, ktime_t deadline) +{ + struct msm_fence *f = to_msm_fence(fence); + struct msm_fence_context *fctx = f->fctx; + unsigned long flags; + ktime_t now; + + spin_lock_irqsave(&fctx->spinlock, flags); + now = ktime_get(); + + if (ktime_after(now, fctx->next_deadline) || + ktime_before(deadline, fctx->next_deadline)) { + fctx->next_deadline = deadline; + fctx->next_deadline_fence = + max(fctx->next_deadline_fence, (uint32_t)fence->seqno); + + /* + * Set timer to trigger boost 3ms before deadline, or + * if we are already less than 3ms before the deadline + * schedule boost work immediately. + */ + deadline = ktime_sub(deadline, ms_to_ktime(3)); + + if (ktime_after(now, deadline)) { + kthread_queue_work(fctx2gpu(fctx)->worker, + &fctx->deadline_work); + } else { + hrtimer_start(&fctx->deadline_timer, deadline, + HRTIMER_MODE_ABS); + } + } + + spin_unlock_irqrestore(&fctx->spinlock, flags); +} + static const struct dma_fence_ops msm_fence_ops = { .get_driver_name = msm_fence_get_driver_name, .get_timeline_name = msm_fence_get_timeline_name, .signaled = msm_fence_signaled, + .set_deadline = msm_fence_set_deadline, };
struct dma_fence * diff --git a/drivers/gpu/drm/msm/msm_fence.h b/drivers/gpu/drm/msm/msm_fence.h index 7f1798c54cd1..cdaebfb94f5c 100644 --- a/drivers/gpu/drm/msm/msm_fence.h +++ b/drivers/gpu/drm/msm/msm_fence.h @@ -52,6 +52,26 @@ struct msm_fence_context { volatile uint32_t *fenceptr;
spinlock_t spinlock; + + /* + * TODO this doesn't really deal with multiple deadlines, like + * if userspace got multiple frames ahead.. OTOH atomic updates + * don't queue, so maybe that is ok + */ + + /** next_deadline: Time of next deadline */ + ktime_t next_deadline; + + /** + * next_deadline_fence: + * + * Fence value for next pending deadline. The deadline timer is + * canceled when this fence is signaled. + */ + uint32_t next_deadline_fence; + + struct hrtimer deadline_timer; + struct kthread_work deadline_work; };
struct msm_fence_context * msm_fence_context_alloc(struct drm_device *dev,
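The timer-arming decision in msm_fence_set_deadline() boils down to a small piece of arithmetic: boost immediately if we are already within the lead time of the deadline, otherwise arm the hrtimer for deadline minus lead time. A plain-C sketch (using the 3ms lead from the patch; not kernel code):

```c
#include <stdbool.h>
#include <stdint.h>

#define BOOST_LEAD_NS 3000000ll	/* boost 3ms before the deadline */

/*
 * Mirrors the now/deadline comparison in msm_fence_set_deadline().
 * Returns true if the boost work should be queued right away;
 * otherwise *timer_expiry_ns is the absolute time to arm the
 * hrtimer for (HRTIMER_MODE_ABS in the driver).
 */
bool boost_now(int64_t now_ns, int64_t deadline_ns, int64_t *timer_expiry_ns)
{
	int64_t expiry = deadline_ns - BOOST_LEAD_NS;

	if (now_ns > expiry)
		return true;	/* already within 3ms of the deadline */

	*timer_expiry_ns = expiry;
	return false;
}
```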
From: Rob Clark robdclark@chromium.org
I expect this patch to be replaced by someone who knows i915 better.
Signed-off-by: Rob Clark robdclark@chromium.org --- drivers/gpu/drm/i915/i915_request.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c index 7503dcb9043b..44491e7e214c 100644 --- a/drivers/gpu/drm/i915/i915_request.c +++ b/drivers/gpu/drm/i915/i915_request.c @@ -97,6 +97,25 @@ static bool i915_fence_enable_signaling(struct dma_fence *fence) return i915_request_enable_breadcrumb(to_request(fence)); }
+static void i915_fence_set_deadline(struct dma_fence *fence, ktime_t deadline) +{ + struct i915_request *rq = to_request(fence); + + if (i915_request_completed(rq)) + return; + + if (i915_request_started(rq)) + return; + + /* + * TODO something more clever for deadlines that are in the + * future. I think probably track the nearest deadline in + * rq->timeline and set timer to trigger boost accordingly? + */ + + intel_rps_boost(rq); +} + static signed long i915_fence_wait(struct dma_fence *fence, bool interruptible, signed long timeout) @@ -182,6 +201,7 @@ const struct dma_fence_ops i915_fence_ops = { .signaled = i915_fence_signaled, .wait = i915_fence_wait, .release = i915_fence_release, + .set_deadline = i915_fence_set_deadline, };
static void irq_execute_cb(struct irq_work *wrk)
On Wed, 8 Mar 2023 07:52:51 -0800 Rob Clark robdclark@gmail.com wrote:
From: Rob Clark robdclark@chromium.org
This series adds a deadline hint to fences, so realtime deadlines such as vblank can be communicated to the fence signaller for power/ frequency management decisions.
This is partially inspired by a trick i915 does, but implemented via dma-fence for a couple of reasons:
- To continue to be able to use the atomic helpers
- To support cases where display and gpu are different drivers
This iteration adds a dma-fence ioctl to set a deadline (both to support igt-tests, and compositors which delay decisions about which client buffer to display), and a sw_sync ioctl to read back the deadline. IGT tests utilizing these can be found at:
https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadli...
v1: https://patchwork.freedesktop.org/series/93035/ v2: Move filtering out of later deadlines to fence implementation to avoid increasing the size of dma_fence v3: Add support in fence-array and fence-chain; Add some uabi to support igt tests and userspace compositors. v4: Rebase, address various comments, and add syncobj deadline support, and sync_file EPOLLPRI based on experience with perf/ freq issues with clvk compute workloads on i915 (anv) v5: Clarify that this is a hint as opposed to a more hard deadline guarantee, switch to using u64 ns values in UABI (still absolute CLOCK_MONOTONIC values), drop syncobj related cap and driver feature flag in favor of allowing count_handles==0 for probing kernel support. v6: Re-work vblank helper to calculate time of _start_ of vblank, and work correctly if the last vblank event was more than a frame ago. Add (mostly unrelated) drm/msm patch which also uses the vblank helper. Use dma_fence_chain_contained(). More verbose syncobj UABI comments. Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT. v7: Fix kbuild complaints about vblank helper. Add more docs. v8: Add patch to surface sync_file UAPI, and more docs updates. v9: Drop (E)POLLPRI support.. I still like it, but not essential and it can always be revived later. Fix doc build warning. v10: Update 11/15 to handle multiple CRTCs
Hi Rob,
It is very nice to keep revision numbers and list the changes in each patch. But if I looked at series v8 last, and I now see series v10, and I look at a patch that lists changes done in v7, how do I know if that change was made between series v8 and v10, or earlier?
At least in some previous revision, series might have been v8 and a patch have new changes listed as v5 (because it was the 5th time that one patch was changed) instead of v8.
Am I expected to keep track of vN of each individual patch independently?
Thanks, pq
On Wed, Mar 8, 2023 at 7:53 AM Rob Clark robdclark@gmail.com wrote:
From: Rob Clark robdclark@chromium.org
This series adds a deadline hint to fences, so realtime deadlines such as vblank can be communicated to the fence signaller for power/ frequency management decisions.
This is partially inspired by a trick i915 does, but implemented via dma-fence for a couple of reasons:
- To continue to be able to use the atomic helpers
- To support cases where display and gpu are different drivers
This iteration adds a dma-fence ioctl to set a deadline (both to support igt-tests, and compositors which delay decisions about which client buffer to display), and a sw_sync ioctl to read back the deadline. IGT tests utilizing these can be found at:
https://gitlab.freedesktop.org/robclark/igt-gpu-tools/-/commits/fence-deadli...
jfwiw, mesa side of this:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/21973
BR, -R
v1: https://patchwork.freedesktop.org/series/93035/ v2: Move filtering out of later deadlines to fence implementation to avoid increasing the size of dma_fence v3: Add support in fence-array and fence-chain; Add some uabi to support igt tests and userspace compositors. v4: Rebase, address various comments, and add syncobj deadline support, and sync_file EPOLLPRI based on experience with perf/ freq issues with clvk compute workloads on i915 (anv) v5: Clarify that this is a hint as opposed to a more hard deadline guarantee, switch to using u64 ns values in UABI (still absolute CLOCK_MONOTONIC values), drop syncobj related cap and driver feature flag in favor of allowing count_handles==0 for probing kernel support. v6: Re-work vblank helper to calculate time of _start_ of vblank, and work correctly if the last vblank event was more than a frame ago. Add (mostly unrelated) drm/msm patch which also uses the vblank helper. Use dma_fence_chain_contained(). More verbose syncobj UABI comments. Drop DMA_FENCE_FLAG_HAS_DEADLINE_BIT. v7: Fix kbuild complaints about vblank helper. Add more docs. v8: Add patch to surface sync_file UAPI, and more docs updates. v9: Drop (E)POLLPRI support.. I still like it, but not essential and it can always be revived later. Fix doc build warning. v10: Update 11/15 to handle multiple CRTCs
Rob Clark (15): dma-buf/dma-fence: Add deadline awareness dma-buf/fence-array: Add fence deadline support dma-buf/fence-chain: Add fence deadline support dma-buf/dma-resv: Add a way to set fence deadline dma-buf/sync_file: Surface sync-file uABI dma-buf/sync_file: Add SET_DEADLINE ioctl dma-buf/sw_sync: Add fence deadline support drm/scheduler: Add fence deadline support drm/syncobj: Add deadline support for syncobj waits drm/vblank: Add helper to get next vblank time drm/atomic-helper: Set fence deadline for vblank drm/msm: Add deadline based boost support drm/msm: Add wait-boost support drm/msm/atomic: Switch to vblank_start helper drm/i915: Add deadline based boost support
Documentation/driver-api/dma-buf.rst | 16 ++++- drivers/dma-buf/dma-fence-array.c | 11 ++++ drivers/dma-buf/dma-fence-chain.c | 12 ++++ drivers/dma-buf/dma-fence.c | 60 ++++++++++++++++++ drivers/dma-buf/dma-resv.c | 22 +++++++ drivers/dma-buf/sw_sync.c | 81 +++++++++++++++++++++++++ drivers/dma-buf/sync_debug.h | 2 + drivers/dma-buf/sync_file.c | 19 ++++++ drivers/gpu/drm/drm_atomic_helper.c | 37 +++++++++++ drivers/gpu/drm/drm_syncobj.c | 64 +++++++++++++++---- drivers/gpu/drm/drm_vblank.c | 53 +++++++++++++--- drivers/gpu/drm/i915/i915_request.c | 20 ++++++ drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 15 ----- drivers/gpu/drm/msm/msm_atomic.c | 8 ++- drivers/gpu/drm/msm/msm_drv.c | 12 ++-- drivers/gpu/drm/msm/msm_fence.c | 74 ++++++++++++++++++++++ drivers/gpu/drm/msm/msm_fence.h | 20 ++++++ drivers/gpu/drm/msm/msm_gem.c | 5 ++ drivers/gpu/drm/msm/msm_kms.h | 8 --- drivers/gpu/drm/scheduler/sched_fence.c | 46 ++++++++++++++ drivers/gpu/drm/scheduler/sched_main.c | 2 +- include/drm/drm_vblank.h | 1 + include/drm/gpu_scheduler.h | 17 ++++++ include/linux/dma-fence.h | 22 +++++++ include/linux/dma-resv.h | 2 + include/uapi/drm/drm.h | 17 ++++++ include/uapi/drm/msm_drm.h | 14 ++++- include/uapi/linux/sync_file.h | 59 +++++++++++------- 28 files changed, 640 insertions(+), 79 deletions(-)
-- 2.39.2
On Wed, Mar 8, 2023 at 10:53 AM Rob Clark robdclark@gmail.com wrote:
From: Rob Clark robdclark@chromium.org
This series adds a deadline hint to fences, so realtime deadlines such as vblank can be communicated to the fence signaller for power/ frequency management decisions.
This is partially inspired by a trick i915 does, but implemented via dma-fence for a couple of reasons:
- To continue to be able to use the atomic helpers
- To support cases where display and gpu are different drivers
This iteration adds a dma-fence ioctl to set a deadline (both to support igt-tests, and compositors which delay decisions about which client buffer to display), and a sw_sync ioctl to read back the deadline. IGT tests utilizing these can be found at:
I read through the series and didn't spot anything. Have a rather weak
Reviewed-by: Matt Turner mattst88@gmail.com
Thanks!
linaro-mm-sig@lists.linaro.org