5.15-stable review patch. If anyone has any objections, please let me know.
------------------
From: Chengming Zhou <zhouchengming@bytedance.com>
commit dc6e0818bc9a0336d9accf3ea35d146d72aa7a18 upstream.
Since cpuacct_charge() is called from the scheduler's update_curr(), the rq lock must already be held, so the RCU read lock can be optimized away.
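Roughly, the call path looks like this (a simplified sketch paraphrasing kernel/sched/fair.c, not the verbatim source):

	static void update_curr(struct cfs_rq *cfs_rq)	/* runs with the rq lock held */
	{
		struct sched_entity *curr = cfs_rq->curr;
		u64 delta_exec;

		/* ... delta_exec = runtime consumed by curr since the last update ... */

		if (entity_is_task(curr)) {
			struct task_struct *curtask = task_of(curr);

			/* wrapper below, which in turn calls cpuacct_charge() */
			cgroup_account_cputime(curtask, delta_exec);
		}
	}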
Do the same thing in its wrapper cgroup_account_cputime(), but we can't use lockdep_assert_rq_held() there, since it is defined in kernel/sched/sched.h.
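For reference, lockdep_assert_rq_held() is essentially a thin lockdep assertion on the rq's lock, defined in the scheduler-private header (approximately, not verbatim):

	/* kernel/sched/sched.h */
	static inline void lockdep_assert_rq_held(struct rq *rq)
	{
		lockdep_assert_held(__rq_lockp(rq));
	}

Since include/linux/cgroup.h cannot see that header, cgroup_account_cputime() simply drops the rcu_read_lock()/rcu_read_unlock() pair without adding an assertion.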
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20220220051426.5274-2-zhouchengming@bytedance.com
Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 include/linux/cgroup.h | 2 --
 kernel/sched/cpuacct.c | 4 +---
 2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 45cdb12243e3f..f425389ce4bb2 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -792,11 +792,9 @@ static inline void cgroup_account_cputime(struct task_struct *task,
 
 	cpuacct_charge(task, delta_exec);
 
-	rcu_read_lock();
 	cgrp = task_dfl_cgroup(task);
 	if (cgroup_parent(cgrp))
 		__cgroup_account_cputime(cgrp, delta_exec);
-	rcu_read_unlock();
 }
 
 static inline void cgroup_account_cputime_field(struct task_struct *task,
diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
index cacc2076ad214..f0af0fecde9d9 100644
--- a/kernel/sched/cpuacct.c
+++ b/kernel/sched/cpuacct.c
@@ -331,12 +331,10 @@ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
 	unsigned int cpu = task_cpu(tsk);
 	struct cpuacct *ca;
 
-	rcu_read_lock();
+	lockdep_assert_rq_held(cpu_rq(cpu));
 
 	for (ca = task_ca(tsk); ca; ca = parent_ca(ca))
 		*per_cpu_ptr(ca->cpuusage, cpu) += cputime;
-
-	rcu_read_unlock();
 }
 
 /*