> As already explained in the previous mail, there is a fixup for this in
> commit 81b6c9998979 ('scsi: core: check for device state in
> __scsi_remove_target()').
> Please check if this is applied, too.
I tested commit 81b6c9998979 cherry-picked on top of 4.14.20 and it
indeed solves the problem.
Can it be backported to 4.14 LTS, please?
This is a note to let you know that I've just added the patch titled
vfs: don't do RCU lookup of empty pathnames
to the 3.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
vfs-don-t-do-rcu-lookup-of-empty-pathnames.patch
and it can be found in the queue-3.18 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From c0eb027e5aef70b71e5a38ee3e264dc0b497f343 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds(a)linux-foundation.org>
Date: Sun, 2 Apr 2017 17:10:08 -0700
Subject: vfs: don't do RCU lookup of empty pathnames
From: Linus Torvalds <torvalds(a)linux-foundation.org>
commit c0eb027e5aef70b71e5a38ee3e264dc0b497f343 upstream.
Normal pathname lookup doesn't allow empty pathnames, but using
AT_EMPTY_PATH (with name_to_handle_at() or fstatat(), for example) you
can trigger an empty pathname lookup.
And not only is the RCU lookup in that case entirely unnecessary
(because we'll obviously immediately finalize the end result), it is
actively wrong.
Why? An empty path is a special case that will return the original
'dirfd' dentry - and that dentry may not actually be RCU-free'd,
resulting in a potential use-after-free if we were to initialize the
path lazily under the RCU read lock and depend on complete_walk()
finalizing the dentry.
Found by syzkaller and KASAN.
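For readers following along, here is a minimal userspace sketch of the
kind of call that reaches the empty-path case (illustrative only; this
is not the syzkaller reproducer):

    /* Illustrative only -- not the syzkaller reproducer.  With
     * AT_EMPTY_PATH an empty pathname resolves to the dirfd object
     * itself, which is the lookup the patch excludes from RCU walk. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <stdio.h>

    int main(void)
    {
            struct stat st;
            int fd = open("/tmp", O_RDONLY);

            if (fd < 0)
                    return 1;
            /* Empty pathname: stats the object referred to by fd. */
            if (fstatat(fd, "", &st, AT_EMPTY_PATH) == 0)
                    printf("inode %llu\n", (unsigned long long)st.st_ino);
            return 0;
    }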
Reported-by: Dmitry Vyukov <dvyukov(a)google.com>
Reported-by: Vegard Nossum <vegard.nossum(a)gmail.com>
Acked-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Eric Biggers <ebiggers3(a)gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/namei.c | 3 +++
1 file changed, 3 insertions(+)
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1851,6 +1851,9 @@ static int path_init(int dfd, const char
{
int retval = 0;
+ if (!*s)
+ flags &= ~LOOKUP_RCU;
+
nd->last_type = LAST_ROOT; /* if there are only slashes... */
nd->flags = flags | LOOKUP_JUMPED;
nd->depth = 0;
Patches currently in stable-queue which might be from torvalds(a)linux-foundation.org are
queue-3.18/vfs-don-t-do-rcu-lookup-of-empty-pathnames.patch
Hi Greg, here's another one. When you have a chance can you please apply commit
c0eb027e5aef ("vfs: don't do RCU lookup of empty pathnames") to the stable
trees? I can reproduce the use-after-free on 4.4-stable and 4.9-stable, and it
is fixed by the patch. And I wasn't able to check 3.18 because KASAN isn't
available there, but I think the bug is there as well. Thanks,
Eric
I ran into a 4.9 build regression in randconfig testing, starting with the
KAISER patches:
arch/x86/mm/kaiser.c: In function 'kaiser_init':
arch/x86/mm/kaiser.c:347:8: error: 'vsyscall_pgprot' undeclared (first use in this function); did you mean 'massage_pgprot'?
This is easy enough to fix: we just need to make the declaration visible
outside of the #ifdef. This works because the code using it is optimized
away when vsyscall_enabled() returns false at compile time.
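As a standalone illustration of why an extern declaration without a
definition is harmless here, consider this sketch (names mirror the
kernel's, but this is not the actual kaiser.c code; it assumes the
compiler folds the constant at -O2):

    /* Sketch only; build with: gcc -O2 -c sketch.c */
    extern unsigned long vsyscall_pgprot;   /* declared, never defined here */

    static inline int vsyscall_enabled(void) { return 0; }  /* stub config */

    unsigned long kaiser_init_sketch(void)
    {
            unsigned long prot = 0;

            if (vsyscall_enabled())          /* constant false */
                    prot = vsyscall_pgprot;  /* dead code: the reference is
                                              * optimized away, so no
                                              * definition is ever needed */
            return prot;
    }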
Fixes: 9a0be5afbfbb ("vsyscall: Fix permissions for emulate mode with KAISER/PTI")
Signed-off-by: Arnd Bergmann <arnd(a)arndb.de>
---
arch/x86/include/asm/vsyscall.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/vsyscall.h b/arch/x86/include/asm/vsyscall.h
index 9ee85066f407..c98c21b7f4cd 100644
--- a/arch/x86/include/asm/vsyscall.h
+++ b/arch/x86/include/asm/vsyscall.h
@@ -13,7 +13,6 @@ extern void map_vsyscall(void);
*/
extern bool emulate_vsyscall(struct pt_regs *regs, unsigned long address);
extern bool vsyscall_enabled(void);
-extern unsigned long vsyscall_pgprot;
#else
static inline void map_vsyscall(void) {}
static inline bool emulate_vsyscall(struct pt_regs *regs, unsigned long address)
@@ -23,4 +22,6 @@ static inline bool emulate_vsyscall(struct pt_regs *regs, unsigned long address)
static inline bool vsyscall_enabled(void) { return false; }
#endif
+extern unsigned long vsyscall_pgprot;
+
#endif /* _ASM_X86_VSYSCALL_H */
--
2.9.0
This is a note to let you know that I've just added the patch titled
vfs: don't do RCU lookup of empty pathnames
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
vfs-don-t-do-rcu-lookup-of-empty-pathnames.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From c0eb027e5aef70b71e5a38ee3e264dc0b497f343 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds(a)linux-foundation.org>
Date: Sun, 2 Apr 2017 17:10:08 -0700
Subject: vfs: don't do RCU lookup of empty pathnames
From: Linus Torvalds <torvalds(a)linux-foundation.org>
commit c0eb027e5aef70b71e5a38ee3e264dc0b497f343 upstream.
Normal pathname lookup doesn't allow empty pathnames, but using
AT_EMPTY_PATH (with name_to_handle_at() or fstatat(), for example) you
can trigger an empty pathname lookup.
And not only is the RCU lookup in that case entirely unnecessary
(because we'll obviously immediately finalize the end result), it is
actively wrong.
Why? An empty path is a special case that will return the original
'dirfd' dentry - and that dentry may not actually be RCU-free'd,
resulting in a potential use-after-free if we were to initialize the
path lazily under the RCU read lock and depend on complete_walk()
finalizing the dentry.
Found by syzkaller and KASAN.
Reported-by: Dmitry Vyukov <dvyukov(a)google.com>
Reported-by: Vegard Nossum <vegard.nossum(a)gmail.com>
Acked-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Eric Biggers <ebiggers3(a)gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/namei.c | 3 +++
1 file changed, 3 insertions(+)
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -2138,6 +2138,9 @@ static const char *path_init(struct name
int retval = 0;
const char *s = nd->name->name;
+ if (!*s)
+ flags &= ~LOOKUP_RCU;
+
nd->last_type = LAST_ROOT; /* if there are only slashes... */
nd->flags = flags | LOOKUP_JUMPED | LOOKUP_PARENT;
nd->depth = 0;
Patches currently in stable-queue which might be from torvalds(a)linux-foundation.org are
queue-4.9/x86-spectre-fix-an-error-message.patch
queue-4.9/nospec-move-array_index_nospec-parameter-checking-into-separate-macro.patch
queue-4.9/x86-nvmx-properly-set-spec_ctrl-and-pred_cmd-before-merging-msrs.patch
queue-4.9/x86-speculation-add-asm-msr-index.h-dependency.patch
queue-4.9/x86-cpu-rename-cpu_data.x86_mask-to-cpu_data.x86_stepping.patch
queue-4.9/x86-cpu-change-type-of-x86_cache_size-variable-to-unsigned-int.patch
queue-4.9/x86-speculation-update-speculation-control-microcode-blacklist.patch
queue-4.9/x86-speculation-correct-speculation-control-microcode-blacklist-again.patch
queue-4.9/ocfs2-try-a-blocking-lock-before-return-aop_truncated_page.patch
queue-4.9/selftests-x86-do-not-rely-on-int-0x80-in-single_step_syscall.c.patch
queue-4.9/selftests-x86-mpx-fix-incorrect-bounds-with-old-_sigfault.patch
queue-4.9/mm-hide-a-warning-for-compile_test.patch
queue-4.9/selftests-x86-pkeys-remove-unused-functions.patch
queue-4.9/vfs-don-t-do-rcu-lookup-of-empty-pathnames.patch
queue-4.9/x86-speculation-fix-up-array_index_nospec_mask-asm-constraint.patch
queue-4.9/kvm-x86-reduce-retpoline-performance-impact-in-slot_handle_level_range-by-always-inlining-iterator-helper-methods.patch
queue-4.9/selftests-x86-do-not-rely-on-int-0x80-in-test_mremap_vdso.c.patch
queue-4.9/x86-speculation-clean-up-various-spectre-related-details.patch
queue-4.9/x86-entry-64-compat-clear-registers-for-compat-syscalls-to-reduce-speculation-attack-surface.patch
This is a note to let you know that I've just added the patch titled
dm: correctly handle chained bios in dec_pending()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-correctly-handle-chained-bios-in-dec_pending.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 8dd601fa8317243be887458c49f6c29c2f3d719f Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb(a)suse.com>
Date: Thu, 15 Feb 2018 20:00:15 +1100
Subject: dm: correctly handle chained bios in dec_pending()
From: NeilBrown <neilb(a)suse.com>
commit 8dd601fa8317243be887458c49f6c29c2f3d719f upstream.
dec_pending() is given an error status (possibly 0) to be recorded
against a bio. It can be called several times on the same 'struct
dm_io', and it is careful to only assign a non-zero error to
io->status. However, when it then assigns io->status to bio->bi_status,
it is not careful and could overwrite a genuine error status with 0.
This can happen when chained bios are in use. If a bio is chained
beneath the bio that this dm_io is handling, the child bio might
complete and set bio->bi_status before the dm_io completes.
This has been possible since chained bios were introduced in 3.14, and
has become a lot easier to trigger with commit 18a25da84354 ("dm: ensure
bio submission follows a depth-first tree walk") as that commit caused
dm to start using chained bios itself.
A particular failure mode is that if a bio spans an 'error' target and a
working target, the 'error' fragment will complete instantly and set the
->bi_status, and the other fragment will normally complete a little
later, and will clear ->bi_status.
The fix is simply to only assign io_error to bio->bi_status when
io_error is not zero.
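Schematically, the race looks like this (a sketch of the ordering, not
the actual drivers/md/dm.c code; note the 4.9/4.4 backports use
->bi_error where mainline uses ->bi_status):

    /* Sketch of the ordering, not the actual dm code. */
    struct bio_sketch { int bi_error; };

    static void child_fragment_fails(struct bio_sketch *parent)
    {
            parent->bi_error = -EIO;  /* 'error' target completes first */
    }

    static void dec_pending_sketch(struct bio_sketch *bio, int io_error)
    {
            /* The working fragment completes last with io_error == 0.
             * An unconditional store here would wipe out the -EIO the
             * chained child recorded above; hence the fix only stores
             * a non-zero io_error. */
            if (io_error)
                    bio->bi_error = io_error;
    }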
Reported-and-tested-by: Milan Broz <gmazyland(a)gmail.com>
Cc: stable(a)vger.kernel.org (v3.14+)
Signed-off-by: NeilBrown <neilb(a)suse.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -809,7 +809,8 @@ static void dec_pending(struct dm_io *io
} else {
/* done with normal IO or empty flush */
trace_block_bio_complete(md->queue, bio, io_error);
- bio->bi_error = io_error;
+ if (io_error)
+ bio->bi_error = io_error;
bio_endio(bio);
}
}
Patches currently in stable-queue which might be from neilb(a)suse.com are
queue-4.9/dm-correctly-handle-chained-bios-in-dec_pending.patch
This is a note to let you know that I've just added the patch titled
vfs: don't do RCU lookup of empty pathnames
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
vfs-don-t-do-rcu-lookup-of-empty-pathnames.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From c0eb027e5aef70b71e5a38ee3e264dc0b497f343 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds(a)linux-foundation.org>
Date: Sun, 2 Apr 2017 17:10:08 -0700
Subject: vfs: don't do RCU lookup of empty pathnames
From: Linus Torvalds <torvalds(a)linux-foundation.org>
commit c0eb027e5aef70b71e5a38ee3e264dc0b497f343 upstream.
Normal pathname lookup doesn't allow empty pathnames, but using
AT_EMPTY_PATH (with name_to_handle_at() or fstatat(), for example) you
can trigger an empty pathname lookup.
And not only is the RCU lookup in that case entirely unnecessary
(because we'll obviously immediately finalize the end result), it is
actively wrong.
Why? An empty path is a special case that will return the original
'dirfd' dentry - and that dentry may not actually be RCU-free'd,
resulting in a potential use-after-free if we were to initialize the
path lazily under the RCU read lock and depend on complete_walk()
finalizing the dentry.
Found by syzkaller and KASAN.
Reported-by: Dmitry Vyukov <dvyukov(a)google.com>
Reported-by: Vegard Nossum <vegard.nossum(a)gmail.com>
Acked-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Eric Biggers <ebiggers3(a)gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/namei.c | 3 +++
1 file changed, 3 insertions(+)
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -2000,6 +2000,9 @@ static const char *path_init(struct name
int retval = 0;
const char *s = nd->name->name;
+ if (!*s)
+ flags &= ~LOOKUP_RCU;
+
nd->last_type = LAST_ROOT; /* if there are only slashes... */
nd->flags = flags | LOOKUP_JUMPED | LOOKUP_PARENT;
nd->depth = 0;
Patches currently in stable-queue which might be from torvalds(a)linux-foundation.org are
queue-4.4/x86-cpu-change-type-of-x86_cache_size-variable-to-unsigned-int.patch
queue-4.4/mm-hide-a-warning-for-compile_test.patch
queue-4.4/vfs-don-t-do-rcu-lookup-of-empty-pathnames.patch
queue-4.4/kvm-x86-reduce-retpoline-performance-impact-in-slot_handle_level_range-by-always-inlining-iterator-helper-methods.patch
This is a note to let you know that I've just added the patch titled
dm: correctly handle chained bios in dec_pending()
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-correctly-handle-chained-bios-in-dec_pending.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 8dd601fa8317243be887458c49f6c29c2f3d719f Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb(a)suse.com>
Date: Thu, 15 Feb 2018 20:00:15 +1100
Subject: dm: correctly handle chained bios in dec_pending()
From: NeilBrown <neilb(a)suse.com>
commit 8dd601fa8317243be887458c49f6c29c2f3d719f upstream.
dec_pending() is given an error status (possibly 0) to be recorded
against a bio. It can be called several times on the same 'struct
dm_io', and it is careful to only assign a non-zero error to
io->status. However, when it then assigns io->status to bio->bi_status,
it is not careful and could overwrite a genuine error status with 0.
This can happen when chained bios are in use. If a bio is chained
beneath the bio that this dm_io is handling, the child bio might
complete and set bio->bi_status before the dm_io completes.
This has been possible since chained bios were introduced in 3.14, and
has become a lot easier to trigger with commit 18a25da84354 ("dm: ensure
bio submission follows a depth-first tree walk") as that commit caused
dm to start using chained bios itself.
A particular failure mode is that if a bio spans an 'error' target and a
working target, the 'error' fragment will complete instantly and set the
->bi_status, and the other fragment will normally complete a little
later, and will clear ->bi_status.
The fix is simply to only assign io_error to bio->bi_status when
io_error is not zero.
Reported-and-tested-by: Milan Broz <gmazyland(a)gmail.com>
Cc: stable(a)vger.kernel.org (v3.14+)
Signed-off-by: NeilBrown <neilb(a)suse.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -974,7 +974,8 @@ static void dec_pending(struct dm_io *io
} else {
/* done with normal IO or empty flush */
trace_block_bio_complete(md->queue, bio, io_error);
- bio->bi_error = io_error;
+ if (io_error)
+ bio->bi_error = io_error;
bio_endio(bio);
}
}
Patches currently in stable-queue which might be from neilb(a)suse.com are
queue-4.4/dm-correctly-handle-chained-bios-in-dec_pending.patch
This is a note to let you know that I've just added the patch titled
x86/mm, mm/hwpoison: Don't unconditionally unmap kernel 1:1 pages
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-mm-mm-hwpoison-don-t-unconditionally-unmap-kernel-1-1-pages.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From fd0e786d9d09024f67bd71ec094b110237dc3840 Mon Sep 17 00:00:00 2001
From: Tony Luck <tony.luck(a)intel.com>
Date: Thu, 25 Jan 2018 14:23:48 -0800
Subject: x86/mm, mm/hwpoison: Don't unconditionally unmap kernel 1:1 pages
From: Tony Luck <tony.luck(a)intel.com>
commit fd0e786d9d09024f67bd71ec094b110237dc3840 upstream.
In the following commit:
ce0fa3e56ad2 ("x86/mm, mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages")
... we added code to memory_failure() to unmap the page from the
kernel 1:1 virtual address space to avoid speculative access to the
page logging additional errors.
But memory_failure() may not always succeed in taking the page offline,
especially if the page belongs to the kernel. This can happen if
there are too many corrected errors on a page and either mcelog(8)
or drivers/ras/cec.c asks to take a page offline.
Since we remove the 1:1 mapping early in memory_failure(), we can
end up with the page unmapped, but still in use. On the next access
the kernel crashes :-(
There are also various debug paths that call memory_failure() to simulate the
occurrence of an error. Since there is no actual error in memory, we
don't need to map out the page for those cases.
Revert most of the previous attempt and keep the solution local to
arch/x86/kernel/cpu/mcheck/mce.c. Unmap the page only when:
1) there is a real error
2) memory_failure() succeeds.
All of this only applies to 64-bit systems. A 32-bit kernel doesn't map
all of memory into kernel space. It isn't worth adding the code to unmap
the piece that is mapped because nobody would run a 32-bit kernel on a
machine that has recoverable machine checks.
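For context, the decoy-address trick described in the comment inside
the hunk below works roughly like this (a hedged reconstruction; the
exact kernel expression may differ, so consult the tree):

    /* Hedged reconstruction of the trick described in the comment
     * below.  Flipping bit 63 of the 1:1-map virtual address produces
     * a non-canonical alias, so the real (canonical) poison address
     * never sits in a register where it could invite a speculative
     * access while we mark the page not-present. */
    static void mce_unmap_kpfn_sketch(unsigned long pfn)
    {
            unsigned long decoy_addr;

            decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
            if (set_memory_np(decoy_addr, 1))
                    pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n",
                            pfn);
    }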
Signed-off-by: Tony Luck <tony.luck(a)intel.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Borislav Petkov <bp(a)suse.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Dave <dave.hansen(a)intel.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Robert (Persistent Memory) <elliott(a)hpe.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: linux-mm(a)kvack.org
Cc: stable(a)vger.kernel.org #v4.14
Fixes: ce0fa3e56ad2 ("x86/mm, mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages")
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/page_64.h | 4 ----
arch/x86/kernel/cpu/mcheck/mce-internal.h | 15 +++++++++++++++
arch/x86/kernel/cpu/mcheck/mce.c | 17 +++++++++++------
include/linux/mm_inline.h | 6 ------
mm/memory-failure.c | 2 --
5 files changed, 26 insertions(+), 18 deletions(-)
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -52,10 +52,6 @@ static inline void clear_page(void *page
void copy_page(void *to, void *from);
-#ifdef CONFIG_X86_MCE
-#define arch_unmap_kpfn arch_unmap_kpfn
-#endif
-
#endif /* !__ASSEMBLY__ */
#ifdef CONFIG_X86_VSYSCALL_EMULATION
--- a/arch/x86/kernel/cpu/mcheck/mce-internal.h
+++ b/arch/x86/kernel/cpu/mcheck/mce-internal.h
@@ -115,4 +115,19 @@ static inline void mce_unregister_inject
extern struct mca_config mca_cfg;
+#ifndef CONFIG_X86_64
+/*
+ * On 32-bit systems it would be difficult to safely unmap a poison page
+ * from the kernel 1:1 map because there are no non-canonical addresses that
+ * we can use to refer to the address without risking a speculative access.
+ * However, this isn't much of an issue because:
+ * 1) Few unmappable pages are in the 1:1 map. Most are in HIGHMEM which
+ * are only mapped into the kernel as needed
+ * 2) Few people would run a 32-bit kernel on a machine that supports
+ * recoverable errors because they have too much memory to boot 32-bit.
+ */
+static inline void mce_unmap_kpfn(unsigned long pfn) {}
+#define mce_unmap_kpfn mce_unmap_kpfn
+#endif
+
#endif /* __X86_MCE_INTERNAL_H__ */
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -106,6 +106,10 @@ static struct irq_work mce_irq_work;
static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs);
+#ifndef mce_unmap_kpfn
+static void mce_unmap_kpfn(unsigned long pfn);
+#endif
+
/*
* CPU/chipset specific EDAC code can register a notifier call here to print
* MCE errors in a human-readable form.
@@ -582,7 +586,8 @@ static int srao_decode_notifier(struct n
if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
pfn = mce->addr >> PAGE_SHIFT;
- memory_failure(pfn, MCE_VECTOR, 0);
+ if (memory_failure(pfn, MCE_VECTOR, 0))
+ mce_unmap_kpfn(pfn);
}
return NOTIFY_OK;
@@ -1049,12 +1054,13 @@ static int do_memory_failure(struct mce
ret = memory_failure(m->addr >> PAGE_SHIFT, MCE_VECTOR, flags);
if (ret)
pr_err("Memory error not recovered");
+ else
+ mce_unmap_kpfn(m->addr >> PAGE_SHIFT);
return ret;
}
-#if defined(arch_unmap_kpfn) && defined(CONFIG_MEMORY_FAILURE)
-
-void arch_unmap_kpfn(unsigned long pfn)
+#ifndef mce_unmap_kpfn
+static void mce_unmap_kpfn(unsigned long pfn)
{
unsigned long decoy_addr;
@@ -1065,7 +1071,7 @@ void arch_unmap_kpfn(unsigned long pfn)
* We would like to just call:
* set_memory_np((unsigned long)pfn_to_kaddr(pfn), 1);
* but doing that would radically increase the odds of a
- * speculative access to the posion page because we'd have
+ * speculative access to the poison page because we'd have
* the virtual address of the kernel 1:1 mapping sitting
* around in registers.
* Instead we get tricky. We create a non-canonical address
@@ -1090,7 +1096,6 @@ void arch_unmap_kpfn(unsigned long pfn)
if (set_memory_np(decoy_addr, 1))
pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
-
}
#endif
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -127,10 +127,4 @@ static __always_inline enum lru_list pag
#define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
-#ifdef arch_unmap_kpfn
-extern void arch_unmap_kpfn(unsigned long pfn);
-#else
-static __always_inline void arch_unmap_kpfn(unsigned long pfn) { }
-#endif
-
#endif
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1146,8 +1146,6 @@ int memory_failure(unsigned long pfn, in
return 0;
}
- arch_unmap_kpfn(pfn);
-
orig_head = hpage = compound_head(p);
num_poisoned_pages_inc();
Patches currently in stable-queue which might be from tony.luck(a)intel.com are
queue-4.15/x86-cpu-rename-cpu_data.x86_mask-to-cpu_data.x86_stepping.patch
queue-4.15/x86-mm-mm-hwpoison-don-t-unconditionally-unmap-kernel-1-1-pages.patch
This is a note to let you know that I've just added the patch titled
x86/mm, mm/hwpoison: Don't unconditionally unmap kernel 1:1 pages
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-mm-mm-hwpoison-don-t-unconditionally-unmap-kernel-1-1-pages.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From fd0e786d9d09024f67bd71ec094b110237dc3840 Mon Sep 17 00:00:00 2001
From: Tony Luck <tony.luck(a)intel.com>
Date: Thu, 25 Jan 2018 14:23:48 -0800
Subject: x86/mm, mm/hwpoison: Don't unconditionally unmap kernel 1:1 pages
From: Tony Luck <tony.luck(a)intel.com>
commit fd0e786d9d09024f67bd71ec094b110237dc3840 upstream.
In the following commit:
ce0fa3e56ad2 ("x86/mm, mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages")
... we added code to memory_failure() to unmap the page from the
kernel 1:1 virtual address space to avoid speculative access to the
page logging additional errors.
But memory_failure() may not always succeed in taking the page offline,
especially if the page belongs to the kernel. This can happen if
there are too many corrected errors on a page and either mcelog(8)
or drivers/ras/cec.c asks to take a page offline.
Since we remove the 1:1 mapping early in memory_failure(), we can
end up with the page unmapped, but still in use. On the next access
the kernel crashes :-(
There are also various debug paths that call memory_failure() to simulate the
occurrence of an error. Since there is no actual error in memory, we
don't need to map out the page for those cases.
Revert most of the previous attempt and keep the solution local to
arch/x86/kernel/cpu/mcheck/mce.c. Unmap the page only when:
1) there is a real error
2) memory_failure() succeeds.
All of this only applies to 64-bit systems. A 32-bit kernel doesn't map
all of memory into kernel space. It isn't worth adding the code to unmap
the piece that is mapped because nobody would run a 32-bit kernel on a
machine that has recoverable machine checks.
Signed-off-by: Tony Luck <tony.luck(a)intel.com>
Cc: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Borislav Petkov <bp(a)suse.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Dave <dave.hansen(a)intel.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Robert (Persistent Memory) <elliott(a)hpe.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: linux-mm(a)kvack.org
Cc: stable(a)vger.kernel.org #v4.14
Fixes: ce0fa3e56ad2 ("x86/mm, mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages")
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/page_64.h | 4 ----
arch/x86/kernel/cpu/mcheck/mce-internal.h | 15 +++++++++++++++
arch/x86/kernel/cpu/mcheck/mce.c | 17 +++++++++++------
include/linux/mm_inline.h | 6 ------
mm/memory-failure.c | 2 --
5 files changed, 26 insertions(+), 18 deletions(-)
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -52,10 +52,6 @@ static inline void clear_page(void *page
void copy_page(void *to, void *from);
-#ifdef CONFIG_X86_MCE
-#define arch_unmap_kpfn arch_unmap_kpfn
-#endif
-
#endif /* !__ASSEMBLY__ */
#ifdef CONFIG_X86_VSYSCALL_EMULATION
--- a/arch/x86/kernel/cpu/mcheck/mce-internal.h
+++ b/arch/x86/kernel/cpu/mcheck/mce-internal.h
@@ -115,4 +115,19 @@ static inline void mce_unregister_inject
extern struct mca_config mca_cfg;
+#ifndef CONFIG_X86_64
+/*
+ * On 32-bit systems it would be difficult to safely unmap a poison page
+ * from the kernel 1:1 map because there are no non-canonical addresses that
+ * we can use to refer to the address without risking a speculative access.
+ * However, this isn't much of an issue because:
+ * 1) Few unmappable pages are in the 1:1 map. Most are in HIGHMEM which
+ * are only mapped into the kernel as needed
+ * 2) Few people would run a 32-bit kernel on a machine that supports
+ * recoverable errors because they have too much memory to boot 32-bit.
+ */
+static inline void mce_unmap_kpfn(unsigned long pfn) {}
+#define mce_unmap_kpfn mce_unmap_kpfn
+#endif
+
#endif /* __X86_MCE_INTERNAL_H__ */
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -106,6 +106,10 @@ static struct irq_work mce_irq_work;
static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs);
+#ifndef mce_unmap_kpfn
+static void mce_unmap_kpfn(unsigned long pfn);
+#endif
+
/*
* CPU/chipset specific EDAC code can register a notifier call here to print
* MCE errors in a human-readable form.
@@ -582,7 +586,8 @@ static int srao_decode_notifier(struct n
if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
pfn = mce->addr >> PAGE_SHIFT;
- memory_failure(pfn, MCE_VECTOR, 0);
+ if (memory_failure(pfn, MCE_VECTOR, 0))
+ mce_unmap_kpfn(pfn);
}
return NOTIFY_OK;
@@ -1049,12 +1054,13 @@ static int do_memory_failure(struct mce
ret = memory_failure(m->addr >> PAGE_SHIFT, MCE_VECTOR, flags);
if (ret)
pr_err("Memory error not recovered");
+ else
+ mce_unmap_kpfn(m->addr >> PAGE_SHIFT);
return ret;
}
-#if defined(arch_unmap_kpfn) && defined(CONFIG_MEMORY_FAILURE)
-
-void arch_unmap_kpfn(unsigned long pfn)
+#ifndef mce_unmap_kpfn
+static void mce_unmap_kpfn(unsigned long pfn)
{
unsigned long decoy_addr;
@@ -1065,7 +1071,7 @@ void arch_unmap_kpfn(unsigned long pfn)
* We would like to just call:
* set_memory_np((unsigned long)pfn_to_kaddr(pfn), 1);
* but doing that would radically increase the odds of a
- * speculative access to the posion page because we'd have
+ * speculative access to the poison page because we'd have
* the virtual address of the kernel 1:1 mapping sitting
* around in registers.
* Instead we get tricky. We create a non-canonical address
@@ -1090,7 +1096,6 @@ void arch_unmap_kpfn(unsigned long pfn)
if (set_memory_np(decoy_addr, 1))
pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
-
}
#endif
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -127,10 +127,4 @@ static __always_inline enum lru_list pag
#define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
-#ifdef arch_unmap_kpfn
-extern void arch_unmap_kpfn(unsigned long pfn);
-#else
-static __always_inline void arch_unmap_kpfn(unsigned long pfn) { }
-#endif
-
#endif
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1146,8 +1146,6 @@ int memory_failure(unsigned long pfn, in
return 0;
}
- arch_unmap_kpfn(pfn);
-
orig_head = hpage = compound_head(p);
num_poisoned_pages_inc();
Patches currently in stable-queue which might be from tony.luck(a)intel.com are
queue-4.14/x86-cpu-rename-cpu_data.x86_mask-to-cpu_data.x86_stepping.patch
queue-4.14/x86-mm-mm-hwpoison-don-t-unconditionally-unmap-kernel-1-1-pages.patch