On 04/09/2015 05:32 AM, Kevin Hilman wrote:
From: Kevin Hilman <khilman@linaro.org>
Commit cd5c2cc93d3d ("hmp: Remove potential for task_struct access race") introduced a put_task_struct() to prevent races, but in doing so introduced potential spinlock recursion. (This change was further consolidated in commit 0baa5811bacf ("sched: hmp: unify active migration code").)
Unfortunately, the put_task_struct() is done while the runqueue spinlock is held, but put_task_struct() can also trigger a reschedule, which causes the runqueue lock to be acquired recursively.
To fix, move the put_task_struct() outside the runqueue spinlock.
Reported-by: Victor Lixin <victor.lixin@hisilicon.com>
Cc: Jorge Ramirez-Ortiz <jorge.ramirez-ortiz@linaro.org>
Cc: Chris Redpath <chris.redpath@arm.com>
Cc: Liviu Dudau <Liviu.Dudau@arm.com>
Cc: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Alex Shi
Reported in LDTS Ticket: https://support.linaro.org/tickets/1349
Chris, Tixy, any objections to this HMP fix for LSK-v3.10?
 kernel/sched/fair.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 41d0cbda605d..1baf6413a882 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6489,10 +6489,10 @@ static int __do_active_load_balance_cpu_stop(void *data, bool check_sd_lb_flag)
 	rcu_read_unlock();
 	double_unlock_balance(busiest_rq, target_rq);
 out_unlock:
-	if (!check_sd_lb_flag)
-		put_task_struct(p);
 	busiest_rq->active_balance = 0;
 	raw_spin_unlock_irq(&busiest_rq->lock);
+	if (!check_sd_lb_flag)
+		put_task_struct(p);
 	return 0;
 }