On Tue, Dec 17, 2013 at 01:23:08PM -0800, Kevin Hilman wrote:
The conversion of the max deferment time from usecs to nsecs can easily overflow on platforms where a long is 32 bits. To fix, cast the usecs value to u64 before multiplying by NSEC_PER_USEC.
This was discovered on a 32-bit ARM platform when extending the max deferment value.
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4b1fe3e69fe4..3d7c80e1c4d9 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2203,7 +2203,7 @@ u64 scheduler_tick_max_deferment(void)
 	if (time_before_eq(next, now))
 		return 0;
-	return jiffies_to_usecs(next - now) * NSEC_PER_USEC;
+	return (u64)jiffies_to_usecs(next - now) * NSEC_PER_USEC;
Just to be sure I understand the issue. The problem is that jiffies_to_usecs() returns an unsigned int, which is then multiplied by NSEC_PER_USEC. If the result of the multiplication is too big to be stored in an unsigned int, we overflow and may lose the high part of the result. Right?
 }
 
 static __init int sched_nohz_full_init_debug(void)
-- 
1.8.3