Hello Joshua.
On Fri, Aug 30, 2024 at 07:19:37AM GMT, Joshua Hahn <joshua.hahnjy@gmail.com> wrote:
> Exposing this metric will allow users to accurately probe the niced
> CPU metric for each workload, and make more informed decisions when
> directing higher priority tasks.
I'm afraid I still can't appreciate exposing this value:
- It makes (some) sense only on leaf cgroups (where variously nice'd tasks compete against each other), not so much on inner node cgroups (where it's a mere sum, but sibling cgroups could have different weights, so the absolute times would contribute differently).
- When all tasks have nice > 0 (or all have nice <= 0), it loses any information it could have had (the sum degenerates to the cgroup's total, or zero, CPU time regardless of the actual nice values).
(Thus I don't know whether to commit to exposing that value via cgroups.)
I wonder, wouldn't your use case be equally served by some post-processing [1] of /sys/kernel/debug/sched/debug info which is already available?
Regards,
Michal
[1] Your metric is supposed to represent
      Σ_i^tasks ∫_t is_nice(i, t) dt
    If I try to address the second remark by looking at
      Σ_i^tasks ∫_t nice(i, t) dt
    instead, that resembles (with nice=0 ~ weight=1024)
      Σ_i^tasks ∫_t weight(i, t) dt
    and swapping the sum and the integral gives
      ∫_t Σ_i^tasks weight(i, t) dt
    where the inner sum
      Σ_i^tasks weight(i, t)
    can be taken from /sys/kernel/debug/sched/debug:cfs_rq[0].load_avg.
    The above covers only CPU nr=0, so the post-processing would mean
    sampling that file over all CPUs and over time.
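    FWIW, a rough sketch of what such post-processing could look like
    (untested; the exact layout of the debug file, the 1 s sampling
    interval, and taking the cgroup path from the command line are all
    my assumptions, not anything settled):

    #!/usr/bin/env python3
    # Sample Σ_i^tasks weight(i, t) (approximated by the cfs_rq[N]
    # .load_avg of one cgroup, summed over all CPUs) and integrate it
    # over time, i.e. estimate ∫_t Σ_i^tasks weight(i, t) dt.
    import re
    import sys
    import time

    DEBUG_FILE = "/sys/kernel/debug/sched/debug"
    INTERVAL = 1.0  # seconds between samples; the choice is arbitrary

    def sample_load_avg(cgroup_path):
        """Sum .load_avg over every CPU's cfs_rq section for cgroup_path."""
        total = 0
        in_section = False
        with open(DEBUG_FILE) as f:
            for line in f:
                hdr = re.match(r"cfs_rq\[\d+\]:(\S*)", line)
                if hdr:
                    # Entering a cfs_rq[N]:<path> section; check the path.
                    in_section = hdr.group(1) == cgroup_path
                elif line[:1] and not line[:1].isspace():
                    in_section = False  # some other section started
                elif in_section:
                    field = re.match(r"\s*\.load_avg\s*:\s*(\d+)", line)
                    if field:
                        total += int(field.group(1))
        return total

    def main():
        cgroup_path = sys.argv[1] if len(sys.argv) > 1 else "/"
        integral = 0.0  # accumulated weight*seconds
        last = time.monotonic()
        while True:
            time.sleep(INTERVAL)
            now = time.monotonic()
            integral += sample_load_avg(cgroup_path) * (now - last)
            last = now
            print(f"{integral:.0f} weight*seconds")

    if __name__ == "__main__":
        main()

    One would run it with the cgroup path as it appears in the
    cfs_rq[N]: headers (e.g. /system.slice) and compare the accumulated
    weight*seconds between workloads.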