The patch titled
     Subject: mm: /proc/pid/smaps_rollup: fix NULL pointer deref in smaps_pte_range()
has been added to the -mm tree.  Its filename is
     mm-proc-pid-smaps_rollup-fix-null-pointer-deref-in-smaps_pte_range.patch
This patch should soon appear at
   http://ozlabs.org/~akpm/mmots/broken-out/mm-proc-pid-smaps_rollup-fix-null-p...
and later at
   http://ozlabs.org/~akpm/mmotm/broken-out/mm-proc-pid-smaps_rollup-fix-null-p...
Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated there every 3-4 working days
------------------------------------------------------
From: Vlastimil Babka <vbabka@suse.cz>
Subject: mm: /proc/pid/smaps_rollup: fix NULL pointer deref in smaps_pte_range()
Leonardo reports an apparent regression in 4.19-rc7:
 BUG: unable to handle kernel NULL pointer dereference at 00000000000000f0
 PGD 0 P4D 0
 Oops: 0000 [#1] PREEMPT SMP PTI
 CPU: 3 PID: 6032 Comm: python Not tainted 4.19.0-041900rc7-lowlatency #201810071631
 Hardware name: LENOVO 80UG/Toronto 4A2, BIOS 0XCN45WW 08/09/2018
 RIP: 0010:smaps_pte_range+0x32d/0x540
 Code: 80 00 00 00 00 74 a9 48 89 de 41 f6 40 52 40 0f 85 04 02 00 00 49 2b 30 48 c1 ee 0c 49 03 b0 98 00 00 00 49 8b 80 a0 00 00 00 <48> 8b b8 f0 00 00 00 e8 b7 ef ec ff 48 85 c0 0f 84 71 ff ff ff a8
 RSP: 0018:ffffb0cbc484fb88 EFLAGS: 00010202
 RAX: 0000000000000000 RBX: 0000560ddb9e9000 RCX: 0000000000000000
 RDX: 0000000000000000 RSI: 0000000560ddb9e9 RDI: 0000000000000001
 RBP: ffffb0cbc484fbc0 R08: ffff94a5a227a578 R09: ffff94a5a227a578
 R10: 0000000000000000 R11: 0000560ddbbe7000 R12: ffffe903098ba728
 R13: ffffb0cbc484fc78 R14: ffffb0cbc484fcf8 R15: ffff94a5a2e9cf48
 FS:  00007f6dfb683740(0000) GS:ffff94a5aaf80000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 00000000000000f0 CR3: 000000011c118001 CR4: 00000000003606e0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
  __walk_page_range+0x3c2/0x6f0
  walk_page_vma+0x42/0x60
  smap_gather_stats+0x79/0xe0
  ? gather_pte_stats+0x320/0x320
  ? gather_hugetlb_stats+0x70/0x70
  show_smaps_rollup+0xcd/0x1c0
  seq_read+0x157/0x400
  __vfs_read+0x3a/0x180
  ? security_file_permission+0x93/0xc0
  ? security_file_permission+0x93/0xc0
  vfs_read+0x8f/0x140
  ksys_read+0x55/0xc0
  __x64_sys_read+0x1a/0x20
  do_syscall_64+0x5a/0x110
  entry_SYSCALL_64_after_hwframe+0x44/0xa9
The decoded code, matched against a local compilation and disassembly, points to smaps_pte_entry():
	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
						&& pte_none(*pte))) {
		page = find_get_entry(vma->vm_file->f_mapping,
					linear_page_index(vma, addr));
Here, vma->vm_file is NULL. mss->check_shmem_swap should be false in that case; however, for smaps_rollup, smap_gather_stats() can set the flag true for one vma and leave it true for subsequent vmas, where it should be false.
To fix this, reset the check_shmem_swap flag to false. There is also a related bug where mss->swap is assigned shmem_swapped with '=', which in the smaps_rollup context overwrites any value accumulated from previous vmas instead of adding to it. Fix that as well.
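To make the failure mode concrete, here is a minimal, self-contained userspace model of the rollup accumulation pattern. This is an illustration only, not kernel code: struct vma_model, struct mss_model and gather_stats() below are names invented for this sketch, and the branching is reduced to the bare essentials.

	/* Model of smaps_rollup: one stats struct reused for every vma. */
	#include <stdbool.h>
	#include <stdio.h>

	struct vma_model {
		bool is_shmem;
		bool shared_or_readonly;
		unsigned long shmem_swapped;
	};

	struct mss_model {
		unsigned long swap;
		bool check_shmem_swap;
	};

	static void gather_stats(const struct vma_model *vma, struct mss_model *mss)
	{
		/* The fix: clear per-vma state left over from the previous vma. */
		mss->check_shmem_swap = false;
		if (!vma->is_shmem)
			return;
		if (vma->shmem_swapped == 0 || vma->shared_or_readonly) {
			/* The fix: accumulate with +=, do not overwrite with =. */
			mss->swap += vma->shmem_swapped;
		} else {
			/* Private writable shmem with swap: defer to the page walk. */
			mss->check_shmem_swap = true;
		}
	}

	int main(void)
	{
		struct vma_model vmas[] = {
			{ .is_shmem = true,  .shared_or_readonly = true,  .shmem_swapped = 4096 },
			{ .is_shmem = true,  .shared_or_readonly = true,  .shmem_swapped = 8192 },
			{ .is_shmem = true,  .shared_or_readonly = false, .shmem_swapped = 2048 },
			{ .is_shmem = false },	/* anonymous vma: vm_file would be NULL */
		};
		struct mss_model mss = { 0 };	/* zeroed once for the whole rollup */

		for (unsigned int i = 0; i < sizeof(vmas) / sizeof(vmas[0]); i++)
			gather_stats(&vmas[i], &mss);

		printf("swap=%lu check_shmem_swap=%d\n", mss.swap, (int)mss.check_shmem_swap);
		return 0;
	}

With both fixes in place the run prints swap=12288 check_shmem_swap=0; using '=' instead of '+=' would keep only the last accumulated vma's 8192, and skipping the reset would leave the flag set by the third vma still true while the anonymous fourth vma is walked, which is the miniature version of the NULL vm_file dereference above.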
Note that the report suggests a regression between 4.17.19 and 4.19-rc7, which makes the 4.19 series ending with commit 258f669e7e88 ("mm: /proc/pid/smaps_rollup: convert to single value seq_file") suspicious. But the mss was reused for rollup since 493b0e9d945f ("mm: add /proc/pid/smaps_rollup") so let's play it safe with the stable backport.
Link: http://lkml.kernel.org/r/555fbd1f-4ac9-0b58-dcd4-5dc4380ff7ca@suse.cz
Link: https://bugzilla.kernel.org/show_bug.cgi?id=201377
Fixes: 493b0e9d945f ("mm: add /proc/pid/smaps_rollup")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Leonardo Soares Mller <leozinho29_eu@hotmail.com>
Tested-by: Leonardo Soares Mller <leozinho29_eu@hotmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Daniel Colascione <dancol@google.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 fs/proc/task_mmu.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--- a/fs/proc/task_mmu.c~mm-proc-pid-smaps_rollup-fix-null-pointer-deref-in-smaps_pte_range
+++ a/fs/proc/task_mmu.c
@@ -713,6 +713,8 @@ static void smap_gather_stats(struct vm_
 	smaps_walk.private = mss;
 
 #ifdef CONFIG_SHMEM
+	/* In case of smaps_rollup, reset the value from previous vma */
+	mss->check_shmem_swap = false;
 	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
 		/*
 		 * For shared or readonly shmem mappings we know that all
@@ -728,7 +730,7 @@ static void smap_gather_stats(struct vm_
 
 		if (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
 					!(vma->vm_flags & VM_WRITE)) {
-			mss->swap = shmem_swapped;
+			mss->swap += shmem_swapped;
 		} else {
 			mss->check_shmem_swap = true;
 			smaps_walk.pte_hole = smaps_pte_hole;
_
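Purely as an illustration (a sketch of my own, not part of the patch or the original report): the kind of mapping layout that exercises the affected path is a private writable shmem mapping followed by an untouched anonymous mapping, with /proc/self/smaps_rollup read afterwards. This alone does not reproduce the oops, since that additionally requires the shmem object to have pages swapped out, which the program below does not arrange; memfd_create() is assumed to be exposed by glibc (2.27+).

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		const size_t len = 1 << 20;

		int memfd = memfd_create("smaps_rollup_test", 0);
		if (memfd < 0 || ftruncate(memfd, (off_t)len) != 0)
			return 1;

		/* Private writable shmem mapping: the case that can set check_shmem_swap. */
		char *shm = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, memfd, 0);
		/*
		 * Anonymous mapping (vma->vm_file == NULL); if the walk visits it after
		 * a shmem vma left the stale flag set, this is where the oops occurred.
		 * Note the vma walk order is by address, not by mmap() call order.
		 */
		char *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (shm == MAP_FAILED || anon == MAP_FAILED)
			return 1;

		memset(shm, 1, len);	/* instantiate the shmem pages */
		/* anon is left untouched so its ptes stay pte_none during the walk */

		int fd = open("/proc/self/smaps_rollup", O_RDONLY);
		if (fd < 0)
			return 1;

		char buf[4096];
		ssize_t n;
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			fwrite(buf, 1, (size_t)n, stdout);

		close(fd);
		return 0;
	}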
Patches currently in -mm which might be from vbabka@suse.cz are
mm-proc-pid-smaps_rollup-fix-null-pointer-deref-in-smaps_pte_range.patch
mm-slab-combine-kmalloc_caches-and-kmalloc_dma_caches.patch
mm-slab-slub-introduce-kmalloc-reclaimable-caches.patch
dcache-allocate-external-names-from-reclaimable-kmalloc-caches.patch
mm-rename-and-change-semantics-of-nr_indirectly_reclaimable_bytes.patch
mm-proc-add-kreclaimable-to-proc-meminfo.patch
mm-slab-shorten-kmalloc-cache-names-for-large-sizes.patch
On 10/16/18 1:18 AM, akpm@linux-foundation.org wrote:
> From: Vlastimil Babka <vbabka@suse.cz>
> Subject: mm: /proc/pid/smaps_rollup: fix NULL pointer deref in smaps_pte_range()
>
> [...]
>
> Note that the report suggests a regression between 4.17.19 and 4.19-rc7,
> which makes the 4.19 series ending with commit 258f669e7e88 ("mm:
> /proc/pid/smaps_rollup: convert to single value seq_file") suspicious.
> But the mss was reused for rollup since 493b0e9d945f ("mm: add
> /proc/pid/smaps_rollup") so let's play it safe with the stable backport.
The reporter also verified that at least 4.18-rc4 also triggers the bug, confirming it's indeed older than the 4.19 rewrite.
> Link: http://lkml.kernel.org/r/555fbd1f-4ac9-0b58-dcd4-5dc4380ff7ca@suse.cz
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=201377
> Fixes: 493b0e9d945f ("mm: add /proc/pid/smaps_rollup")
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> Reported-by: Leonardo Soares Mller <leozinho29_eu@hotmail.com>
> Tested-by: Leonardo Soares Mller <leozinho29_eu@hotmail.com>
Looks like some utf8 snafu ate the umlaut u's above.