On 14/09/20 21:42, Thomas Gleixner wrote:
> CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
> removed. Cleanup the leftovers before doing so.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Juri Lelli <juri.lelli@redhat.com>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Small nit below:
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
>  kernel/sched/core.c | 6 +-----
>  lib/Kconfig.debug   | 1 -
>  2 files changed, 1 insertion(+), 6 deletions(-)
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3706,8 +3706,7 @@ asmlinkage __visible void schedule_tail(
>  	 * finish_task_switch() for details.
>  	 *
>  	 * finish_task_switch() will drop rq->lock() and lower preempt_count
> -	 * and the preempt_enable() will end up enabling preemption (on
> -	 * PREEMPT_COUNT kernels).
I suppose this wanted to be s/PREEMPT_COUNT/PREEMPT/ in the first place,
which ought to still be relevant.
> +	 * and the preempt_enable() will end up enabling preemption.
>  	 */
>  	rq = finish_task_switch(prev);
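
For anyone reading this hunk without the file open, here is a rough
sketch of the function the comment lives in, paraphrased from memory
rather than copied from the tree, just to show where the
preempt_enable() in question sits (the trailing housekeeping is elided
and not part of the point):

	/*
	 * Sketch, not the upstream source: a freshly forked task enters
	 * here with its preempt count raised; finish_task_switch() drops
	 * rq->lock and lowers preempt_count, and the preempt_enable()
	 * below is what actually re-enables preemption.
	 */
	asmlinkage __visible void schedule_tail(struct task_struct *prev)
		__releases(rq->lock)
	{
		struct rq *rq;

		rq = finish_task_switch(prev);	/* drops rq->lock, lowers preempt_count */
		preempt_enable();		/* preemption possible again from here on */

		/* ... child tid / signal housekeeping elided ... */
	}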