On 08/24/2017 10:55 AM, Dietmar Eggemann wrote:
Hi Leo,
On 08/23/2017 10:00 AM, Leo Yan wrote:
Hi Leo, can you please try this patch and let me know if it helps? There does indeed seem to be a missing change in the google tree I'm on, since it was reverted there. The tree Dietmar gave never had this change. You will still need the ERRATUM 858921 fix to avoid the panic (curr_run_sum < 0).
Hi Dietmar, do you have https://android-review.googlesource.com/#/c/kernel/common/+/426442/? If not, the bug that 426442 aims to fix can mask this problem, so it will be really hard to hit cr_avg < 0.
Thanks, Joonwoo
From 2730faecb26de0969df5fc39c4014bc5000a5ca8 Mon Sep 17 00:00:00 2001
From: Olav Haugan <ohaugan@codeaurora.org>
Date: Wed, 5 Aug 2015 08:45:21 -0700
Subject: [PATCH] sched: Update task->on_rq when tasks are moving between
 runqueues
Task->on_rq has three states:

 0                        - Task is not on a runqueue (rq)
 1 (TASK_ON_RQ_QUEUED)    - Task is on an rq
 2 (TASK_ON_RQ_MIGRATING) - Task is on an rq but in the process of being
                            migrated to another rq
When a task is moving between rqs, task->on_rq should be in the
TASK_ON_RQ_MIGRATING state so that WALT can account the rq's cumulative
runnable average correctly. Without this state marking in all the
scheduling classes, WALT's update_history() would try to fix up a task
demand that was never contributed to any CPU during the migration.
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[joonwoop: Reinforced changelog to explain why this is needed by WALT.
 Fixed conflicts in deadline.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
---
 kernel/sched/core.c     | 2 ++
 kernel/sched/deadline.c | 4 ++++
 kernel/sched/rt.c       | 4 ++++
 3 files changed, 10 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 312f894..843e440 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1332,7 +1332,9 @@ static void __migrate_swap_task(struct task_struct *p, int cpu)
 		dst_rq = cpu_rq(cpu);
 
 		deactivate_task(src_rq, p, 0);
+		p->on_rq = TASK_ON_RQ_MIGRATING;
 		set_task_cpu(p, cpu);
+		p->on_rq = TASK_ON_RQ_QUEUED;
 		activate_task(dst_rq, p, 0);
 		check_preempt_curr(dst_rq, p, 0);
 	} else {
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 2aae8b8..ab1a9a9 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1587,7 +1587,9 @@ retry:
 	deactivate_task(rq, next_task, 0);
 	clear_average_bw(&next_task->dl, &rq->dl);
+	next_task->on_rq = TASK_ON_RQ_MIGRATING;
 	set_task_cpu(next_task, later_rq->cpu);
+	next_task->on_rq = TASK_ON_RQ_QUEUED;
 	add_average_bw(&next_task->dl, &later_rq->dl);
 	activate_task(later_rq, next_task, 0);
 	ret = 1;
@@ -1677,7 +1679,9 @@ static void pull_dl_task(struct rq *this_rq)
 
 			deactivate_task(src_rq, p, 0);
 			clear_average_bw(&p->dl, &src_rq->dl);
+			p->on_rq = TASK_ON_RQ_MIGRATING;
 			set_task_cpu(p, this_cpu);
+			p->on_rq = TASK_ON_RQ_QUEUED;
 			add_average_bw(&p->dl, &this_rq->dl);
 			activate_task(this_rq, p, 0);
 			dmin = p->dl.deadline;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 25499a3..963d6d6 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2000,7 +2000,9 @@ retry:
 	}
 
 	deactivate_task(rq, next_task, 0);
+	next_task->on_rq = TASK_ON_RQ_MIGRATING;
 	set_task_cpu(next_task, lowest_rq->cpu);
+	next_task->on_rq = TASK_ON_RQ_QUEUED;
 	activate_task(lowest_rq, next_task, 0);
 	ret = 1;
@@ -2254,7 +2256,9 @@ static void pull_rt_task(struct rq *this_rq)
 			resched = true;
 
 			deactivate_task(src_rq, p, 0);
+			p->on_rq = TASK_ON_RQ_MIGRATING;
 			set_task_cpu(p, this_cpu);
+			p->on_rq = TASK_ON_RQ_QUEUED;
 			activate_task(this_rq, p, 0);
 			/*
 			 * We continue with the search, just in
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation