From: Frederic Weisbecker <frederic@kernel.org>
commit 3e61e95e2d095e308616cba4ffb640f95a480e01 upstream.
The callback processing time limit makes sure we do not exceed a given amount of time executing the queue.
However, its "continue" clause bypasses the cond_resched() call on rcuc and NOCB kthreads, delaying it until we reach the limit, which can take a very long time...
Make sure the scheduler has a higher priority than the time limit.
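For illustration only, here is a minimal user-space C sketch of the reordering in the per-callback loop (this is not kernel code; in_serving_softirq_stub(), should_yield(), time_limit_hit() and yield_point() are hypothetical stand-ins for in_serving_softirq(), need_resched(), the tlimit check and cond_resched_tasks_rcu_qs()). The point is that, outside softirq, the reschedule point is now taken on every iteration, and only afterwards is the time limit evaluated, so the limit's early exit can no longer delay the scheduler:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel primitives used in rcu_do_batch(). */
static bool in_serving_softirq_stub(void) { return false; }        /* rcuc/NOCB kthread context */
static bool should_yield(void)            { return true; }          /* stands in for need_resched() */
static bool time_limit_hit(int count)     { return count > 100; }   /* stands in for the tlimit check */
static void yield_point(void)             { puts("cond_resched()"); } /* scheduler gets a chance */

static void process_queue(int ncbs)
{
	for (int count = 0; count < ncbs; count++) {
		/* invoke one callback here ... */

		if (in_serving_softirq_stub()) {
			/* softirq: only the batch-limit / need_resched test applies */
			if (count >= 10 && should_yield())
				break;
		} else {
			/* kthread: always offer a reschedule point first ... */
			yield_point();
		}

		/* ... and only then consider the time limit. */
		if (time_limit_hit(count))
			break;
	}
}

int main(void)
{
	process_queue(5);
	return 0;
}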
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Tested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
[UR: backport to 5.15-stable + commit update]
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/rcu/tree.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2513,10 +2513,22 @@ static void rcu_do_batch(struct rcu_data
 		/*
 		 * Stop only if limit reached and CPU has something to do.
 		 */
-		if (count >= bl && !offloaded &&
-		    (need_resched() ||
-		     (!is_idle_task(current) && !rcu_is_callbacks_kthread())))
-			break;
+		if (in_serving_softirq()) {
+			if (count >= bl && (need_resched() ||
+					(!is_idle_task(current) && !rcu_is_callbacks_kthread())))
+				break;
+		} else {
+			local_bh_enable();
+			lockdep_assert_irqs_enabled();
+			cond_resched_tasks_rcu_qs();
+			lockdep_assert_irqs_enabled();
+			local_bh_disable();
+		}
+
+		/*
+		 * Make sure we don't spend too much time here and deprive other
+		 * softirq vectors of CPU cycles.
+		 */
 		if (unlikely(tlimit)) {
 			/* only call local_clock() every 32 callbacks */
 			if (likely((count & 31) || local_clock() < tlimit))
@@ -2524,13 +2536,6 @@ static void rcu_do_batch(struct rcu_data
 			/* Exceeded the time limit, so leave. */
 			break;
 		}
-		if (!in_serving_softirq()) {
-			local_bh_enable();
-			lockdep_assert_irqs_enabled();
-			cond_resched_tasks_rcu_qs();
-			lockdep_assert_irqs_enabled();
-			local_bh_disable();
-		}
 	}
 
 	local_irq_save(flags);