When reading /proc/<pid>/stat, do_task_stat() calls into cputime_adjust(), with a call stack like this:
[17179954.674326] BookE Watchdog detected hard LOCKUP on cpu 0
[17179954.674331] CPU: 0 PID: 1262 Comm: TICK Tainted: P W O 4.4.176 #1
[17179954.674339] task: dc9d7040 task.stack: d3cb4000
[17179954.674344] NIP: c001b1a8 LR: c006a7ac CTR: 00000000
[17179954.674349] REGS: e6fe1f10 TRAP: 3202 Tainted: P W O (4.4.176)
[17179954.674355] MSR: 00021002 <CE,ME> CR: 28002224 XER: 00000000
[17179954.674364] GPR00: 00000016 d3cb5cb0 dc9d7040 d3cb5cc0 00000000 0000025d ffe15b24 ffffffff
GPR08: de86aead 00000000 000003ff ffffffff 28002222 0084d1c0 00000000 ffffffff
GPR16: b5929ca0 b4bb7a48 c0863c08 0000048d 00000062 00000062 00000000 0000000f
GPR24: 00000000 d3cb5d08 d3cb5d60 d3cb5d64 00029002 d3e9c214 fffff30e d3e9c20c
[17179954.674410] NIP [c001b1a8] __div64_32+0x60/0xa0
[17179954.674422] LR [c006a7ac] cputime_adjust+0x124/0x138
[17179954.674434] Call Trace:
[17179961.832695] [d3cb5cb0] [c006a6dc] cputime_adjust+0x54/0x138 (unreliable)
[17179961.832705] [d3cb5cf0] [c006a818] task_cputime_adjusted+0x58/0x80
[17179961.832713] [d3cb5d20] [c01dab44] do_task_stat+0x298/0x870
[17179961.832720] [d3cb5de0] [c01d4948] proc_single_show+0x60/0xa4
[17179961.832728] [d3cb5e10] [c01963d8] seq_read+0x2d8/0x52c
[17179961.832736] [d3cb5e80] [c01702fc] __vfs_read+0x40/0x114
[17179961.832744] [d3cb5ef0] [c0170b1c] vfs_read+0x9c/0x10c
[17179961.832751] [d3cb5f10] [c0171440] SyS_read+0x68/0xc4
[17179961.832759] [d3cb5f40] [c0010a40] ret_from_syscall+0x0/0x3c
do_task_stat -> task_cputime_adjusted -> cputime_adjust -> scale_stime -> div_u64 -> div_u64_rem -> do_div -> __div64_32
In some corner cases, stime + utime can overflow to 0. Even in the v5.8.2 kernel, where cputime has been changed from unsigned long to the u64 data type, after about 200 days of accumulated CPU time the lower 32 bits become 0x00000000. Because the divisor passed to __div64_32 is an unsigned long, which is 32 bits on 32-bit powerpc, the bug still exists.
So there is also a bug in cputime_adjust, which does not check whether stime + utime is 0:
time = scale_stime((__force u64)stime, (__force u64)rtime, (__force u64)(stime + utime));
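To make the truncation concrete, here is a minimal userspace sketch (an illustration only, not kernel code; the HZ=250 tick rate and the variable names are assumptions):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* At HZ=250, 2^32 ticks of accumulated cputime take about
		 * 2^32 / 250 / 86400 ~= 199 days, at which point the low
		 * word of the 64-bit sum is zero. */
		uint64_t total = (uint64_t)1 << 32;	/* stime + utime */

		/* On 32-bit powerpc the do_div() path hands __div64_32
		 * only the low 32 bits of the divisor. */
		uint32_t divisor = (uint32_t)total;

		/* prints: total=4294967296 divisor=0 */
		printf("total=%llu divisor=%u\n",
		       (unsigned long long)total, divisor);
		return 0;
	}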
Commit 3dc167ba5729 ("sched/cputime: Improve cputime_adjust()") in the mainline kernel may have fixed this case. But it is still better to check for a zero divisor in __div64_32, for other situations.
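For comparison, a rough sketch of the mainline approach (reconstructed from memory of commit 3dc167ba5729, not verbatim kernel source; the function name here is made up): the scaling keeps a full 64-bit divisor via mul_u64_u64_div_u64(), so a wrapped low word alone can no longer produce a zero divisor.

	/* Sketch only: the divide keeps all 64 bits of stime + utime
	 * instead of truncating the divisor to 32 bits via do_div(). */
	static u64 scale_stime_sketch(u64 stime, u64 utime, u64 rtime)
	{
		if (stime == 0)		/* treat all runtime as user time */
			return 0;
		if (utime == 0)		/* treat all runtime as system time */
			return rtime;

		return mul_u64_u64_div_u64(stime, rtime, stime + utime);
	}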
Signed-off-by: Guohua Zhong <zhongguohua1@huawei.com>
Fixes: 14cf11af6cf6 ("powerpc: Merge enough to start building in arch/powerpc.")
Fixes: 94b212c29f68 ("powerpc: Move ppc64 boot wrapper code over to arch/powerpc")
Cc: stable@vger.kernel.org # v2.6.15+
---
 arch/powerpc/boot/div64.S | 4 ++++
 arch/powerpc/lib/div64.S  | 4 ++++
 2 files changed, 8 insertions(+)
diff --git a/arch/powerpc/boot/div64.S b/arch/powerpc/boot/div64.S
index 4354928ed62e..39a25b9712d1 100644
--- a/arch/powerpc/boot/div64.S
+++ b/arch/powerpc/boot/div64.S
@@ -13,6 +13,9 @@
 	.globl __div64_32
 __div64_32:
+	li	r9,0
+	cmplw	r4,r9	# check if divisor r4 is zero
+	beq	5f	# jump to label 5 if r4(divisor) is zero
 	lwz	r5,0(r3)	# get the dividend into r5/r6
 	lwz	r6,4(r3)
 	cmplw	r5,r4
@@ -52,6 +55,7 @@ __div64_32:
 4:	stw	r7,0(r3)	# return the quotient in *r3
 	stw	r8,4(r3)
 	mr	r3,r6		# return the remainder in r3
+5:	# return if divisor r4 is zero
 	blr

 /*
diff --git a/arch/powerpc/lib/div64.S b/arch/powerpc/lib/div64.S
index 3d5426e7dcc4..1cc9bcabf678 100644
--- a/arch/powerpc/lib/div64.S
+++ b/arch/powerpc/lib/div64.S
@@ -13,6 +13,9 @@
 #include <asm/processor.h>

 _GLOBAL(__div64_32)
+	li	r9,0
+	cmplw	r4,r9	# check if divisor r4 is zero
+	beq	5f	# jump to label 5 if r4(divisor) is zero
 	lwz	r5,0(r3)	# get the dividend into r5/r6
 	lwz	r6,4(r3)
 	cmplw	r5,r4
@@ -52,4 +55,5 @@ _GLOBAL(__div64_32)
 4:	stw	r7,0(r3)	# return the quotient in *r3
 	stw	r8,4(r3)
 	mr	r3,r6		# return the remainder in r3
+5:	# return if divisor r4 is zero
 	blr
On 20/08/2020 at 15:10, Guohua Zhong wrote:
> [...]
> _GLOBAL(__div64_32)
> +	li	r9,0
You don't need to load r9 with 0, use cmplwi instead.
> +	cmplw	r4,r9	# check if divisor r4 is zero
> +	beq	5f	# jump to label 5 if r4(divisor) is zero
You should leave space between the compare and the branch (i.e. have other instructions in between when possible), so that the processor can prepare the branching and make a good prediction. Same for the compare below: you can see that there are two other instructions between the cmplw and the blt. You can eventually use another cr field than cr0 in order to nest several test/branch pairs. This also matters because on recent powerpc32, instructions are fetched and executed two by two.
Christophe
>> +	li	r9,0
> You don't need to load r9 with 0, use cmplwi instead.
I will change cmplw to cmplwi in the next patch, as you suggested. Thanks.
>> +	cmplw	r4,r9	# check if divisor r4 is zero
>> +	beq	5f	# jump to label 5 if r4(divisor) is zero
> You should leave space between the compare and the branch (i.e. have other instructions in between when possible), so that the processor can prepare the branching and make a good prediction. [...]
Good advice!
OK, let's put the two lwz instructions between the compare and the branch, as below:
diff --git a/arch/powerpc/lib/div64.S b/arch/powerpc/lib/div64.S
index 3d5426e7dcc4..570774d9782d 100644
--- a/arch/powerpc/lib/div64.S
+++ b/arch/powerpc/lib/div64.S
@@ -13,8 +13,10 @@
 #include <asm/processor.h>

 _GLOBAL(__div64_32)
+	cmplwi	r4,0		# check if divisor r4 is zero
 	lwz	r5,0(r3)	# get the dividend into r5/r6
 	lwz	r6,4(r3)
+	beq	5f		# jump to label 5 if r4(divisor) is zero
 	cmplw	r5,r4
 	li	r7,0
 	li	r8,0
@@ -52,4 +54,4 @@ _GLOBAL(__div64_32)
 4:	stw	r7,0(r3)	# return the quotient in *r3
 	stw	r8,4(r3)
 	mr	r3,r6		# return the remainder in r3
-	blr
+5:	blr			# return if divisor r4 is zero
Guohua
On 20/08/2020 at 15:10, Guohua Zhong wrote:
> [...]
> _GLOBAL(__div64_32)
> +	li	r9,0
> +	cmplw	r4,r9	# check if divisor r4 is zero
> +	beq	5f	# jump to label 5 if r4(divisor) is zero
In the generic version in lib/math/div64.c, there is no check of 'base' either. Do we really want to add this check in the powerpc version only?
The only user of __div64_32() is do_div() in include/asm-generic/div64.h. Wouldn't it be better to do the check there?
Christophe
> In the generic version in lib/math/div64.c, there is no check of 'base' either. Do we really want to add this check in the powerpc version only?
>
> The only user of __div64_32() is do_div() in include/asm-generic/div64.h. Wouldn't it be better to do the check there?
Yes, I have noticed that there is no check of 'base' in these functions either. But I am not sure what the best way to check is. As we know, the result is undefined when the divisor is zero. It may be good to print an error and dump the stack, and to let the process know that the divisor is zero by sending SIGFPE.
diff --git a/include/asm-generic/div64.h b/include/asm-generic/div64.h
index a3b98c86f077..161c656ee3ee 100644
--- a/include/asm-generic/div64.h
+++ b/include/asm-generic/div64.h
@@ -43,6 +43,11 @@
 # define do_div(n,base) ({				\
 	uint32_t __base = (base);			\
 	uint32_t __rem;					\
+	if (unlikely(base == 0)) {			\
+		pr_err("do_div base=%d\n", base);	\
+		dump_stack();				\
+		force_sig(SIGFPE);			\
+	}						\
Then we would also need to add this check in div64_s64(), div64_u64(), div64_u64_rem(), div_s64_rem() and div_u64_rem() in include/linux/math64.h:
+	if (unlikely(divisor == 0)) {
+		pr_err("%s divisor=0\n", __func__);
+		dump_stack();
+		force_sig(SIGFPE);
+	}
Guohua
On Sun, Aug 23, 2020 at 12:54:33AM +0800, Guohua Zhong wrote:
> As we know, the result is undefined when the divisor is zero. It may be good to print an error and dump the stack, and to let the process know that the divisor is zero by sending SIGFPE.
>
> +	if (unlikely(base == 0)) {			\
> +		pr_err("do_div base=%d\n", base);	\
> +		dump_stack();				\
> +		force_sig(SIGFPE);			\
> +	}						\
I suspect this will generate a strong reaction. SIGFPE is for a user-space instruction attempting a division by zero. A division by zero in the kernel is a kernel bug, period, and you don't want to kill a user process for this reason.
If it happens in an interrupt, the context of the kernel may not even be related to the current process.
Many other architectures (x86 for example) already trigger an exception on a division by zero but the handler will find that the exception happened in kernel context and generate an Oops, not raise a signal in a (possibly innocent) userland process.
Gabriel
> I suspect this will generate a strong reaction. SIGFPE is for a user-space instruction attempting a division by zero. A division by zero in the kernel is a kernel bug, period, and you don't want to kill a user process for this reason.
>
> Many other architectures (x86 for example) already trigger an exception on a division by zero but the handler will find that the exception happened in kernel context and generate an Oops, not raise a signal in a (possibly innocent) userland process.
OK, so let's not touch the do_div functions in include/asm-generic/div64.h. But powerpc cannot trigger an exception when the divisor is 0 in __div64_32.
So the patch below is still useful for powerpc. If it looks good to you, please help review it; I will send the new patch later.
Thanks for your reply.
diff --git a/arch/powerpc/boot/div64.S b/arch/powerpc/boot/div64.S
index 4354928ed62e..1d3561cf16fa 100644
--- a/arch/powerpc/boot/div64.S
+++ b/arch/powerpc/boot/div64.S
@@ -13,8 +13,10 @@
 	.globl __div64_32
 __div64_32:
+	cmplwi	r4,0		# check if divisor r4 is zero
 	lwz	r5,0(r3)	# get the dividend into r5/r6
 	lwz	r6,4(r3)
+	beq	5f		# jump to label 5 if r4(divisor) is zero
 	cmplw	r5,r4
 	li	r7,0
 	li	r8,0
@@ -52,7 +54,7 @@ __div64_32:
 4:	stw	r7,0(r3)	# return the quotient in *r3
 	stw	r8,4(r3)
 	mr	r3,r6		# return the remainder in r3
-	blr
+5:	blr			# return if divisor r4 is zero

 /*
  * Extended precision shifts.
diff --git a/arch/powerpc/lib/div64.S b/arch/powerpc/lib/div64.S
index 3d5426e7dcc4..570774d9782d 100644
--- a/arch/powerpc/lib/div64.S
+++ b/arch/powerpc/lib/div64.S
@@ -13,8 +13,10 @@
 #include <asm/processor.h>

 _GLOBAL(__div64_32)
+	cmplwi	r4,0		# check if divisor r4 is zero
 	lwz	r5,0(r3)	# get the dividend into r5/r6
 	lwz	r6,4(r3)
+	beq	5f		# jump to label 5 if r4(divisor) is zero
 	cmplw	r5,r4
 	li	r7,0
 	li	r8,0
@@ -52,4 +54,4 @@ _GLOBAL(__div64_32)
 4:	stw	r7,0(r3)	# return the quotient in *r3
 	stw	r8,4(r3)
 	mr	r3,r6		# return the remainder in r3
-	blr
+5:	blr			# return if divisor r4 is zero
Guohua
From: Guohua Zhong
Sent: 24 August 2020 14:26
> As we know, the result is undefined when the divisor is zero. It may be good to print an error and dump the stack.
I thought that the onus was put on the caller to avoid divide by zero.
On x86, divide by zero causes an exception which (I'm pretty sure) leads to an oops/panic.
David
On Sun, Aug 23, 2020 at 12:54:33AM +0800, Guohua Zhong wrote:
> As we know, the result is undefined when the divisor is zero. It may be good to print an error and dump the stack, and to let the process know that the divisor is zero by sending SIGFPE.
That is not what the PowerPC integer divide insns do: they just leave the result undefined (and they can set the overflow flag then, but no one uses that).
Segher
> That is not what the PowerPC integer divide insns do: they just leave the result undefined (and they can set the overflow flag then, but no one uses that).
OK, so let's keep the patch as below. If it looks good to you, please help review it; I will send the new patch later.
Thanks for your reply.
diff --git a/arch/powerpc/boot/div64.S b/arch/powerpc/boot/div64.S
index 4354928ed62e..1d3561cf16fa 100644
--- a/arch/powerpc/boot/div64.S
+++ b/arch/powerpc/boot/div64.S
@@ -13,8 +13,10 @@
 	.globl __div64_32
 __div64_32:
+	cmplwi	r4,0		# check if divisor r4 is zero
 	lwz	r5,0(r3)	# get the dividend into r5/r6
 	lwz	r6,4(r3)
+	beq	5f		# jump to label 5 if r4(divisor) is zero
 	cmplw	r5,r4
 	li	r7,0
 	li	r8,0
@@ -52,7 +54,7 @@ __div64_32:
 4:	stw	r7,0(r3)	# return the quotient in *r3
 	stw	r8,4(r3)
 	mr	r3,r6		# return the remainder in r3
-	blr
+5:	blr			# return if divisor r4 is zero

 /*
  * Extended precision shifts.
diff --git a/arch/powerpc/lib/div64.S b/arch/powerpc/lib/div64.S
index 3d5426e7dcc4..570774d9782d 100644
--- a/arch/powerpc/lib/div64.S
+++ b/arch/powerpc/lib/div64.S
@@ -13,8 +13,10 @@
 #include <asm/processor.h>

 _GLOBAL(__div64_32)
+	cmplwi	r4,0		# check if divisor r4 is zero
 	lwz	r5,0(r3)	# get the dividend into r5/r6
 	lwz	r6,4(r3)
+	beq	5f		# jump to label 5 if r4(divisor) is zero
 	cmplw	r5,r4
 	li	r7,0
 	li	r8,0
@@ -52,4 +54,4 @@ _GLOBAL(__div64_32)
 4:	stw	r7,0(r3)	# return the quotient in *r3
 	stw	r8,4(r3)
 	mr	r3,r6		# return the remainder in r3
-	blr
+5:	blr			# return if divisor r4 is zero
Guohua
On Mon, Aug 24, 2020 at 07:54:07PM +0800, Guohua Zhong wrote:
> OK, so let's keep the patch as below. If it looks good to you, please help review it; I will send the new patch later.
> [...]
> __div64_32:
> +	cmplwi	r4,0		# check if divisor r4 is zero
> 	lwz	r5,0(r3)	# get the dividend into r5/r6
> 	lwz	r6,4(r3)
> +	beq	5f		# jump to label 5 if r4(divisor) is zero
Just "beqlr".
This instruction scheduling hurts all CPUs that aren't 8xx, fwiw (but likely only in the case where r4 *is* zero, so who cares :-) )
So... What is the *goal* of this patch? It looks like the routine would not get into a loop if r4 is 0, just return the wrong result? But, it *always* will, there *is* no right result?
No caller should call it with zero as divisor ever, so in that sense, checking for it in the division routine is just pure wasted work.
Segher