6.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: John Stultz <jstultz@google.com>
[ Upstream commit c2ae8b0df2d1bb7a063f9e356e4e9a06cd4afe11 ]
Currently, if the sleep flag is set, psi_dequeue() doesn't change any of the psi_flags.
This is because psi_task_switch() will clear TSK_ONCPU as well as other potential flags (TSK_RUNNING), and the assumption is that a voluntary sleep always consists of the task being dequeued, followed shortly thereafter by a psi_sched_switch() call.
Proxy Execution changes this expectation: mutex-blocked tasks that would normally sleep stay on the runqueue. But when the mutex-owning task goes to sleep, or the owner is on a remote cpu, we then deactivate the blocked task shortly afterwards.
In that situation, the mutex-blocked task will have had its TSK_ONCPU cleared when it was switched off the cpu, but it stays TSK_RUNNING. Then if we later dequeue it (as currently done when we hit a case find_proxy_task() can't yet handle, such as the owner being on another rq or a sleeping owner), psi_dequeue() won't change any state (leaving it TSK_RUNNING), as it incorrectly expects a psi_task_switch() call to immediately follow.
Later on, when the task gets woken/re-enqueued and its psi_flags are set for TSK_RUNNING, we hit an error because the task is already TSK_RUNNING:
psi: inconsistent task state! task=188:kworker/28:0 cpu=28 psi_flags=4 clear=0 set=4
To resolve this, extend the logic in psi_dequeue() so that if the sleep flag is set, we also check whether psi_flags has TSK_ONCPU set (meaning the psi_task_switch() call is imminent) before taking the shortcut return.
If TSK_ONCPU is not set, that means we have already switched away, and this psi_dequeue() call needs to clear the flags itself.
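
To make the flag bookkeeping above easier to follow, here is a small standalone userspace model of the sequence. This is not kernel code: the model_*() helpers and the bit values are illustrative (TSK_RUNNING chosen as 4 to match the psi_flags=4 message above); the real definitions live in include/linux/psi_types.h and kernel/sched/.

/* Userspace sketch of the psi flag transitions described above. */
#include <assert.h>
#include <stdio.h>

#define TSK_RUNNING	(1 << 2)	/* illustrative, matches psi_flags=4 above */
#define TSK_ONCPU	(1 << 4)	/* illustrative */

struct task { unsigned int psi_flags; };

/* Wakeup/enqueue: setting TSK_RUNNING while it is already set is the
 * "psi: inconsistent task state!" condition reported above. */
static void model_enqueue(struct task *p)
{
	assert(!(p->psi_flags & TSK_RUNNING));
	p->psi_flags |= TSK_RUNNING;
}

/* Switching the task off the cpu: always drops TSK_ONCPU, and only a
 * voluntary sleep also drops TSK_RUNNING. */
static void model_switch_out(struct task *p, int sleep)
{
	p->psi_flags &= ~TSK_ONCPU;
	if (sleep)
		p->psi_flags &= ~TSK_RUNNING;
}

/* Sleeping dequeue with the fixed condition from the hunk below: only
 * skip the cleanup while TSK_ONCPU is still set, i.e. while a task
 * switch is imminent and will do the clearing for us. */
static void model_dequeue_sleep(struct task *p)
{
	if (p->psi_flags & TSK_ONCPU)
		return;
	p->psi_flags &= ~TSK_RUNNING;
}

int main(void)
{
	struct task blocked = { .psi_flags = TSK_RUNNING | TSK_ONCPU };

	model_switch_out(&blocked, 0);	/* mutex-blocked: stays TSK_RUNNING */
	model_dequeue_sleep(&blocked);	/* deactivated later, no switch follows */
	model_enqueue(&blocked);	/* wakeup: passes only with the TSK_ONCPU check */

	printf("psi_flags=%#x\n", blocked.psi_flags);
	return 0;
}

With the pre-fix shortcut (returning on any sleeping dequeue), the model leaves TSK_RUNNING set after the dequeue and trips the assert on the final enqueue, which corresponds to the warning quoted above.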
Fixes: be41bde4c3a8 ("sched: Add an initial sketch of the find_proxy_task() function")
Reported-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: John Stultz <jstultz@google.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Tested-by: Haiyue Wang <haiyuewa@163.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Link: https://patch.msgid.link/20251205012721.756394-1-jstultz@google.com
Closes: https://lore.kernel.org/lkml/20251117185550.365156-1-kprateek.nayak@amd.com/
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/sched/stats.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index 26f3fd4d34cea..73bd6bca4d310 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -180,8 +180,13 @@ static inline void psi_dequeue(struct task_struct *p, int flags)
 	 * avoid walking all ancestors twice, psi_task_switch() handles
 	 * TSK_RUNNING and TSK_IOWAIT for us when it moves TSK_ONCPU.
 	 * Do nothing here.
+	 *
+	 * In the SCHED_PROXY_EXECUTION case we may do sleeping
+	 * dequeues that are not followed by a task switch, so check
+	 * TSK_ONCPU is set to ensure the task switch is imminent.
+	 * Otherwise clear the flags as usual.
 	 */
-	if (flags & DEQUEUE_SLEEP)
+	if ((flags & DEQUEUE_SLEEP) && (p->psi_flags & TSK_ONCPU))
 		return;
 
 	/*