This is a note to let you know that I've just added the patch titled
x86/pti: Document fix wrong index
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-pti-document-fix-wrong-index.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 98f0fceec7f84d80bc053e49e596088573086421 Mon Sep 17 00:00:00 2001
From: "zhenwei.pi" <zhenwei.pi(a)youruncloud.com>
Date: Thu, 18 Jan 2018 09:04:52 +0800
Subject: x86/pti: Document fix wrong index
From: zhenwei.pi <zhenwei.pi(a)youruncloud.com>
commit 98f0fceec7f84d80bc053e49e596088573086421 upstream.
In section <2. Runtime Cost>, fix wrong index.
Signed-off-by: zhenwei.pi <zhenwei.pi(a)youruncloud.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: dave.hansen(a)linux.intel.com
Link: https://lkml.kernel.org/r/1516237492-27739-1-git-send-email-zhenwei.pi@your…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/x86/pti.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/Documentation/x86/pti.txt
+++ b/Documentation/x86/pti.txt
@@ -78,7 +78,7 @@ this protection comes at a cost:
non-PTI SYSCALL entry code, so requires mapping fewer
things into the userspace page tables. The downside is
that stacks must be switched at entry time.
- d. Global pages are disabled for all kernel structures not
+ c. Global pages are disabled for all kernel structures not
mapped into both kernel and userspace page tables. This
feature of the MMU allows different processes to share TLB
entries mapping the kernel. Losing the feature means more
Patches currently in stable-queue which might be from zhenwei.pi(a)youruncloud.com are
queue-4.4/x86-pti-document-fix-wrong-index.patch
This is a note to let you know that I've just added the patch titled
retpoline: Introduce start/end markers of indirect thunk
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
retpoline-introduce-start-end-markers-of-indirect-thunk.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 736e80a4213e9bbce40a7c050337047128b472ac Mon Sep 17 00:00:00 2001
From: Masami Hiramatsu <mhiramat(a)kernel.org>
Date: Fri, 19 Jan 2018 01:14:21 +0900
Subject: retpoline: Introduce start/end markers of indirect thunk
From: Masami Hiramatsu <mhiramat(a)kernel.org>
commit 736e80a4213e9bbce40a7c050337047128b472ac upstream.
Introduce start/end markers for the __x86_indirect_thunk_* functions.
To make this easy, consolidate the .text.__x86.indirect_thunk.* sections
into a single .text.__x86.indirect_thunk section, place it at the end of
the kernel text section, and add __indirect_thunk_start/end markers so
that other subsystems (e.g. kprobes) can identify it.
Signed-off-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: David Woodhouse <dwmw(a)amazon.co.uk>
Cc: Andi Kleen <ak(a)linux.intel.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Ananth N Mavinakayanahalli <ananth(a)linux.vnet.ibm.com>
Cc: Arjan van de Ven <arjan(a)linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh(a)linux-foundation.org>
Link: https://lkml.kernel.org/r/151629206178.10241.6828804696410044771.stgit@devb…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/nospec-branch.h | 3 +++
arch/x86/kernel/vmlinux.lds.S | 7 +++++++
arch/x86/lib/retpoline.S | 2 +-
3 files changed, 11 insertions(+), 1 deletion(-)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -171,6 +171,9 @@ enum spectre_v2_mitigation {
SPECTRE_V2_IBRS,
};
+extern char __indirect_thunk_start[];
+extern char __indirect_thunk_end[];
+
/*
* On VMEXIT we must ensure that no RSB predictions learned in the guest
* can be followed in the host, by overwriting the RSB completely. Both
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -104,6 +104,13 @@ SECTIONS
IRQENTRY_TEXT
*(.fixup)
*(.gnu.warning)
+
+#ifdef CONFIG_RETPOLINE
+ __indirect_thunk_start = .;
+ *(.text.__x86.indirect_thunk)
+ __indirect_thunk_end = .;
+#endif
+
/* End of text section */
_etext = .;
} :text = 0x9090
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -9,7 +9,7 @@
#include <asm/nospec-branch.h>
.macro THUNK reg
- .section .text.__x86.indirect_thunk.\reg
+ .section .text.__x86.indirect_thunk
ENTRY(__x86_indirect_thunk_\reg)
CFI_STARTPROC
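For reference, a rough sketch (not part of this patch) of how another
subsystem could use the new markers to test whether an address lies in the
consolidated thunk text; the helper name is made up for illustration:

#include <linux/types.h>

extern char __indirect_thunk_start[];
extern char __indirect_thunk_end[];

/* Illustrative helper: true if addr points into the indirect-thunk text
 * delimited by the new __indirect_thunk_start/end markers. */
static bool addr_in_indirect_thunk(unsigned long addr)
{
	return addr >= (unsigned long)__indirect_thunk_start &&
	       addr < (unsigned long)__indirect_thunk_end;
}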
Patches currently in stable-queue which might be from mhiramat(a)kernel.org are
queue-4.4/kprobes-x86-disable-optimizing-on-the-function-jumps-to-indirect-thunk.patch
queue-4.4/kprobes-x86-blacklist-indirect-thunk-functions-for-kprobes.patch
queue-4.4/retpoline-introduce-start-end-markers-of-indirect-thunk.patch
This is a note to let you know that I've just added the patch titled
x86/mce: Make machine check speculation protected
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-mce-make-machine-check-speculation-protected.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 6f41c34d69eb005e7848716bbcafc979b35037d5 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx(a)linutronix.de>
Date: Thu, 18 Jan 2018 16:28:26 +0100
Subject: x86/mce: Make machine check speculation protected
From: Thomas Gleixner <tglx(a)linutronix.de>
commit 6f41c34d69eb005e7848716bbcafc979b35037d5 upstream.
The machine check idtentry uses an indirect branch directly from the low
level code. This evades the speculation protection.
Replace it by a direct call into C code and issue the indirect call there
so the compiler can apply the proper speculation protection.
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Reviewed-by: Borislav Petkov <bp(a)alien8.de>
Reviewed-by: David Woodhouse <dwmw(a)amazon.co.uk>
Niced-by: Peter Zijlstra <peterz(a)infradead.org>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801181626290.1847@nanos
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 2 +-
arch/x86/include/asm/traps.h | 1 +
arch/x86/kernel/cpu/mcheck/mce.c | 5 +++++
3 files changed, 7 insertions(+), 1 deletion(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1031,7 +1031,7 @@ idtentry async_page_fault do_async_page_
#endif
#ifdef CONFIG_X86_MCE
-idtentry machine_check has_error_code=0 paranoid=1 do_sym=*machine_check_vector(%rip)
+idtentry machine_check do_mce has_error_code=0 paranoid=1
#endif
/*
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -92,6 +92,7 @@ dotraplinkage void do_simd_coprocessor_e
#ifdef CONFIG_X86_32
dotraplinkage void do_iret_error(struct pt_regs *, long);
#endif
+dotraplinkage void do_mce(struct pt_regs *, long);
static inline int get_si_code(unsigned long condition)
{
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1672,6 +1672,11 @@ static void unexpected_machine_check(str
void (*machine_check_vector)(struct pt_regs *, long error_code) =
unexpected_machine_check;
+dotraplinkage void do_mce(struct pt_regs *regs, long error_code)
+{
+ machine_check_vector(regs, error_code);
+}
+
/*
* Called for each booted CPU to set up machine checks.
* Must be called with preempt off:
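As a stand-alone illustration of the pattern (a user-space sketch, not
kernel code, names made up): the entry code now makes a direct call to a C
wrapper, and the indirect call through the vector happens in compiled C,
where the compiler can apply retpoline protection when built with the
appropriate options.

#include <stdio.h>

/* Default handler, analogous to unexpected_machine_check(). */
static void unexpected_handler(long code)
{
	printf("unexpected machine check, code %ld\n", code);
}

/* The function pointer that used to be dispatched straight from the
 * low-level entry code. */
static void (*mc_vector)(long) = unexpected_handler;

/* Directly callable wrapper, analogous to do_mce(): the indirect call
 * now lives in C code. */
static void do_mce_like(long code)
{
	mc_vector(code);
}

int main(void)
{
	do_mce_like(0);
	return 0;
}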
Patches currently in stable-queue which might be from tglx(a)linutronix.de are
queue-4.4/futex-prevent-overflow-by-strengthen-input-validation.patch
queue-4.4/x86-spectre-add-boot-time-option-to-select-spectre-v2-mitigation.patch
queue-4.4/x86-retpoline-irq32-convert-assembler-indirect-jumps.patch
queue-4.4/x86-mce-make-machine-check-speculation-protected.patch
queue-4.4/x86-pti-document-fix-wrong-index.patch
queue-4.4/x86-retpoline-hyperv-convert-assembler-indirect-jumps.patch
queue-4.4/x86-apic-vector-fix-off-by-one-in-error-path.patch
queue-4.4/x86-retpoline-entry-convert-entry-assembler-indirect-jumps.patch
queue-4.4/kprobes-x86-disable-optimizing-on-the-function-jumps-to-indirect-thunk.patch
queue-4.4/x86-asm-use-register-variable-to-get-stack-pointer-value.patch
queue-4.4/x86-cpu-amd-make-lfence-a-serializing-instruction.patch
queue-4.4/x86-retpoline-ftrace-convert-ftrace-assembler-indirect-jumps.patch
queue-4.4/sched-deadline-zero-out-positive-runtime-after-throttling-constrained-tasks.patch
queue-4.4/x86-retpoline-crypto-convert-crypto-assembler-indirect-jumps.patch
queue-4.4/module-add-retpoline-tag-to-vermagic.patch
queue-4.4/kprobes-x86-blacklist-indirect-thunk-functions-for-kprobes.patch
queue-4.4/x86-retpoline-xen-convert-xen-hypercall-indirect-jumps.patch
queue-4.4/x86-retpoline-checksum32-convert-assembler-indirect-jumps.patch
queue-4.4/x86-mm-32-move-setup_clear_cpu_cap-x86_feature_pcid-earlier.patch
queue-4.4/x86-retpoline-fill-return-stack-buffer-on-vmexit.patch
queue-4.4/x86-retpoline-add-lfence-to-the-retpoline-rsb-filling-rsb-macros.patch
queue-4.4/x86-retpoline-optimize-inline-assembler-for-vmexit_fill_rsb.patch
queue-4.4/x86-retpoline-remove-compile-time-warning.patch
queue-4.4/x86-cpu-amd-use-lfence_rdtsc-in-preference-to-mfence_rdtsc.patch
queue-4.4/retpoline-introduce-start-end-markers-of-indirect-thunk.patch
queue-4.4/x86-retpoline-add-initial-retpoline-support.patch
queue-4.4/x86-cpu-x86-pti-do-not-enable-pti-on-amd-processors.patch
queue-4.4/x86-asm-make-asm-alternative.h-safe-from-assembly.patch
This is a note to let you know that I've just added the patch titled
kprobes/x86: Disable optimizing on the function jumps to indirect thunk
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kprobes-x86-disable-optimizing-on-the-function-jumps-to-indirect-thunk.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From c86a32c09f8ced67971a2310e3b0dda4d1749007 Mon Sep 17 00:00:00 2001
From: Masami Hiramatsu <mhiramat(a)kernel.org>
Date: Fri, 19 Jan 2018 01:15:20 +0900
Subject: kprobes/x86: Disable optimizing on the function jumps to indirect thunk
From: Masami Hiramatsu <mhiramat(a)kernel.org>
commit c86a32c09f8ced67971a2310e3b0dda4d1749007 upstream.
Since indirect jump instructions will be replaced by jumps
to __x86_indirect_thunk_*, those jmp instructions must be
treated as indirect jumps. Since optprobe prohibits optimizing
probes in functions which use an indirect jump, it also needs
to find functions which jump to __x86_indirect_thunk_* and
disable optimization for them.
Add a check that the jump target address falls between
__indirect_thunk_start/end when optimizing a kprobe.
Signed-off-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: David Woodhouse <dwmw(a)amazon.co.uk>
Cc: Andi Kleen <ak(a)linux.intel.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Ananth N Mavinakayanahalli <ananth(a)linux.vnet.ibm.com>
Cc: Arjan van de Ven <arjan(a)linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh(a)linux-foundation.org>
Link: https://lkml.kernel.org/r/151629212062.10241.6991266100233002273.stgit@devb…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/kprobes/opt.c | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -36,6 +36,7 @@
#include <asm/alternative.h>
#include <asm/insn.h>
#include <asm/debugreg.h>
+#include <asm/nospec-branch.h>
#include "common.h"
@@ -191,7 +192,7 @@ static int copy_optimized_instructions(u
}
/* Check whether insn is indirect jump */
-static int insn_is_indirect_jump(struct insn *insn)
+static int __insn_is_indirect_jump(struct insn *insn)
{
return ((insn->opcode.bytes[0] == 0xff &&
(X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */
@@ -225,6 +226,26 @@ static int insn_jump_into_range(struct i
return (start <= target && target <= start + len);
}
+static int insn_is_indirect_jump(struct insn *insn)
+{
+ int ret = __insn_is_indirect_jump(insn);
+
+#ifdef CONFIG_RETPOLINE
+ /*
+ * Jump to x86_indirect_thunk_* is treated as an indirect jump.
+ * Note that even with CONFIG_RETPOLINE=y, the kernel compiled with
+ * older gcc may use indirect jump. So we add this check instead of
+ * replace indirect-jump check.
+ */
+ if (!ret)
+ ret = insn_jump_into_range(insn,
+ (unsigned long)__indirect_thunk_start,
+ (unsigned long)__indirect_thunk_end -
+ (unsigned long)__indirect_thunk_start);
+#endif
+ return ret;
+}
+
/* Decode whole function to ensure any instructions don't jump into target */
static int can_optimize(unsigned long paddr)
{
Patches currently in stable-queue which might be from mhiramat(a)kernel.org are
queue-4.4/kprobes-x86-disable-optimizing-on-the-function-jumps-to-indirect-thunk.patch
queue-4.4/kprobes-x86-blacklist-indirect-thunk-functions-for-kprobes.patch
queue-4.4/retpoline-introduce-start-end-markers-of-indirect-thunk.patch
This is a note to let you know that I've just added the patch titled
kprobes/x86: Blacklist indirect thunk functions for kprobes
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kprobes-x86-blacklist-indirect-thunk-functions-for-kprobes.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From c1804a236894ecc942da7dc6c5abe209e56cba93 Mon Sep 17 00:00:00 2001
From: Masami Hiramatsu <mhiramat(a)kernel.org>
Date: Fri, 19 Jan 2018 01:14:51 +0900
Subject: kprobes/x86: Blacklist indirect thunk functions for kprobes
From: Masami Hiramatsu <mhiramat(a)kernel.org>
commit c1804a236894ecc942da7dc6c5abe209e56cba93 upstream.
Mark the __x86_indirect_thunk_* functions as blacklisted for kprobes,
because those functions can be called from anywhere in the kernel,
including kprobes' own blacklisted functions.
Signed-off-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: David Woodhouse <dwmw(a)amazon.co.uk>
Cc: Andi Kleen <ak(a)linux.intel.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Ananth N Mavinakayanahalli <ananth(a)linux.vnet.ibm.com>
Cc: Arjan van de Ven <arjan(a)linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh(a)linux-foundation.org>
Link: https://lkml.kernel.org/r/151629209111.10241.5444852823378068683.stgit@devb…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/lib/retpoline.S | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -25,7 +25,8 @@ ENDPROC(__x86_indirect_thunk_\reg)
* than one per register with the correct names. So we do it
* the simple and nasty way...
*/
-#define EXPORT_THUNK(reg) EXPORT_SYMBOL(__x86_indirect_thunk_ ## reg)
+#define __EXPORT_THUNK(sym) _ASM_NOKPROBE(sym); EXPORT_SYMBOL(sym)
+#define EXPORT_THUNK(reg) __EXPORT_THUNK(__x86_indirect_thunk_ ## reg)
#define GENERATE_THUNK(reg) THUNK reg ; EXPORT_THUNK(reg)
GENERATE_THUNK(_ASM_AX)
Patches currently in stable-queue which might be from mhiramat(a)kernel.org are
queue-4.4/kprobes-x86-disable-optimizing-on-the-function-jumps-to-indirect-thunk.patch
queue-4.4/kprobes-x86-blacklist-indirect-thunk-functions-for-kprobes.patch
queue-4.4/retpoline-introduce-start-end-markers-of-indirect-thunk.patch
This is a note to let you know that I've just added the patch titled
x86/retpoline: Optimize inline assembler for vmexit_fill_RSB
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-retpoline-optimize-inline-assembler-for-vmexit_fill_rsb.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 3f7d875566d8e79c5e0b2c9a413e91b2c29e0854 Mon Sep 17 00:00:00 2001
From: Andi Kleen <ak(a)linux.intel.com>
Date: Wed, 17 Jan 2018 14:53:28 -0800
Subject: x86/retpoline: Optimize inline assembler for vmexit_fill_RSB
From: Andi Kleen <ak(a)linux.intel.com>
commit 3f7d875566d8e79c5e0b2c9a413e91b2c29e0854 upstream.
The generated assembler for the C fill RSB inline asm operations has
several issues:
- The C code sets up the loop register, which is then immediately
overwritten in __FILL_RETURN_BUFFER with the same value again.
- The C code also passes in the iteration count in another register, which
is not used at all.
Remove these two unnecessary operations. Just rely on the single constant
passed to the macro for the iterations.
Signed-off-by: Andi Kleen <ak(a)linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: David Woodhouse <dwmw(a)amazon.co.uk>
Cc: dave.hansen(a)intel.com
Cc: gregkh(a)linuxfoundation.org
Cc: torvalds(a)linux-foundation.org
Cc: arjan(a)linux.intel.com
Link: https://lkml.kernel.org/r/20180117225328.15414-1-andi@firstfloor.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/nospec-branch.h | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -206,16 +206,17 @@ extern char __indirect_thunk_end[];
static inline void vmexit_fill_RSB(void)
{
#ifdef CONFIG_RETPOLINE
- unsigned long loops = RSB_CLEAR_LOOPS / 2;
+ unsigned long loops;
asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
ALTERNATIVE("jmp 910f",
__stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
X86_FEATURE_RETPOLINE)
"910:"
- : "=&r" (loops), ASM_CALL_CONSTRAINT
- : "r" (loops) : "memory" );
+ : "=r" (loops), ASM_CALL_CONSTRAINT
+ : : "memory" );
#endif
}
+
#endif /* __ASSEMBLY__ */
#endif /* __NOSPEC_BRANCH_H__ */
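For context, a minimal user-space sketch (assuming an x86-64 GCC-style
compiler; not kernel code) of the constraint distinction the patch relies
on: an output-only "=r" operand gives the asm template a scratch register
without the compiler pre-loading a value, whereas passing the count as an
input operand would set up a register the template never reads.

#include <stdio.h>

int main(void)
{
	unsigned long scratch;

	/* Output-only operand: the compiler allocates a register for the
	 * asm to use as scratch but does not initialize it beforehand. */
	asm volatile("mov $16, %0" : "=r" (scratch) : : "memory");

	printf("scratch after asm: %lu\n", scratch);
	return 0;
}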
Patches currently in stable-queue which might be from ak(a)linux.intel.com are
queue-4.14/kprobes-x86-disable-optimizing-on-the-function-jumps-to-indirect-thunk.patch
queue-4.14/module-add-retpoline-tag-to-vermagic.patch
queue-4.14/kprobes-x86-blacklist-indirect-thunk-functions-for-kprobes.patch
queue-4.14/x86-idt-mark-idt-tables-__initconst.patch
queue-4.14/x86-retpoline-fill-rsb-on-context-switch-for-affected-cpus.patch
queue-4.14/x86-retpoline-add-lfence-to-the-retpoline-rsb-filling-rsb-macros.patch
queue-4.14/x86-retpoline-optimize-inline-assembler-for-vmexit_fill_rsb.patch
queue-4.14/retpoline-introduce-start-end-markers-of-indirect-thunk.patch
queue-4.14/x86-intel_rdt-cqm-prevent-use-after-free.patch
This is a note to let you know that I've just added the patch titled
x86/pti: Document fix wrong index
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-pti-document-fix-wrong-index.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 98f0fceec7f84d80bc053e49e596088573086421 Mon Sep 17 00:00:00 2001
From: "zhenwei.pi" <zhenwei.pi(a)youruncloud.com>
Date: Thu, 18 Jan 2018 09:04:52 +0800
Subject: x86/pti: Document fix wrong index
From: zhenwei.pi <zhenwei.pi(a)youruncloud.com>
commit 98f0fceec7f84d80bc053e49e596088573086421 upstream.
In section <2. Runtime Cost>, fix wrong index.
Signed-off-by: zhenwei.pi <zhenwei.pi(a)youruncloud.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: dave.hansen(a)linux.intel.com
Link: https://lkml.kernel.org/r/1516237492-27739-1-git-send-email-zhenwei.pi@your…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
Documentation/x86/pti.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/Documentation/x86/pti.txt
+++ b/Documentation/x86/pti.txt
@@ -78,7 +78,7 @@ this protection comes at a cost:
non-PTI SYSCALL entry code, so requires mapping fewer
things into the userspace page tables. The downside is
that stacks must be switched at entry time.
- d. Global pages are disabled for all kernel structures not
+ c. Global pages are disabled for all kernel structures not
mapped into both kernel and userspace page tables. This
feature of the MMU allows different processes to share TLB
entries mapping the kernel. Losing the feature means more
Patches currently in stable-queue which might be from zhenwei.pi(a)youruncloud.com are
queue-4.14/x86-pti-document-fix-wrong-index.patch
This is a note to let you know that I've just added the patch titled
x86/mm: Rework wbinvd, hlt operation in stop_this_cpu()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-mm-rework-wbinvd-hlt-operation-in-stop_this_cpu.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From f23d74f6c66c3697e032550eeef3f640391a3a7d Mon Sep 17 00:00:00 2001
From: Tom Lendacky <thomas.lendacky(a)amd.com>
Date: Wed, 17 Jan 2018 17:41:41 -0600
Subject: x86/mm: Rework wbinvd, hlt operation in stop_this_cpu()
From: Tom Lendacky <thomas.lendacky(a)amd.com>
commit f23d74f6c66c3697e032550eeef3f640391a3a7d upstream.
Some issues have been reported with the for loop in stop_this_cpu() that
issues the 'wbinvd; hlt' sequence. Reverting this sequence to halt()
has been shown to resolve the issue.
However, the wbinvd is needed when running with SME. The reason for the
wbinvd is to prevent cache flush races between encrypted and non-encrypted
entries that have the same physical address. This can occur when
kexec'ing from memory encryption active to inactive or vice-versa. The
important thing is to avoid memory references outside of kernel text
(such as stack usage), so the native_*() functions are needed since these
expand to inline asm sequences. So instead of reverting the change,
rework the sequence.
Move the wbinvd instruction outside of the for loop as native_wbinvd()
and make its execution conditional on X86_FEATURE_SME. In the for loop,
change the asm 'wbinvd; hlt' sequence back to a halt sequence but use
the native_halt() call.
Fixes: bba4ed011a52 ("x86/mm, kexec: Allow kexec to be used with SME")
Reported-by: Dave Young <dyoung(a)redhat.com>
Signed-off-by: Tom Lendacky <thomas.lendacky(a)amd.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Tested-by: Dave Young <dyoung(a)redhat.com>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: Tony Luck <tony.luck(a)intel.com>
Cc: Yu Chen <yu.c.chen(a)intel.com>
Cc: Baoquan He <bhe(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: kexec(a)lists.infradead.org
Cc: ebiederm(a)redhat.com
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Rui Zhang <rui.zhang(a)intel.com>
Cc: Arjan van de Ven <arjan(a)linux.intel.com>
Cc: Boris Ostrovsky <boris.ostrovsky(a)oracle.com>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Link: https://lkml.kernel.org/r/20180117234141.21184.44067.stgit@tlendack-t1.amdo…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kernel/process.c | 25 +++++++++++++++----------
1 file changed, 15 insertions(+), 10 deletions(-)
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -380,19 +380,24 @@ void stop_this_cpu(void *dummy)
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
+ /*
+ * Use wbinvd on processors that support SME. This provides support
+ * for performing a successful kexec when going from SME inactive
+ * to SME active (or vice-versa). The cache must be cleared so that
+ * if there are entries with the same physical address, both with and
+ * without the encryption bit, they don't race each other when flushed
+ * and potentially end up with the wrong entry being committed to
+ * memory.
+ */
+ if (boot_cpu_has(X86_FEATURE_SME))
+ native_wbinvd();
for (;;) {
/*
- * Use wbinvd followed by hlt to stop the processor. This
- * provides support for kexec on a processor that supports
- * SME. With kexec, going from SME inactive to SME active
- * requires clearing cache entries so that addresses without
- * the encryption bit set don't corrupt the same physical
- * address that has the encryption bit set when caches are
- * flushed. To achieve this a wbinvd is performed followed by
- * a hlt. Even if the processor is not in the kexec/SME
- * scenario this only adds a wbinvd to a halting processor.
+ * Use native_halt() so that memory contents don't change
+ * (stack usage and variables) after possibly issuing the
+ * native_wbinvd() above.
*/
- asm volatile("wbinvd; hlt" : : : "memory");
+ native_halt();
}
}
Patches currently in stable-queue which might be from thomas.lendacky(a)amd.com are
queue-4.14/x86-mm-clean-up-register-saving-in-the-__enc_copy-assembly-code.patch
queue-4.14/x86-use-__nostackprotect-for-sme_encrypt_kernel.patch
queue-4.14/x86-mm-use-a-struct-to-reduce-parameters-for-sme-pgd-mapping.patch
queue-4.14/x86-mm-centralize-pmd-flags-in-sme_encrypt_kernel.patch
queue-4.14/x86-retpoline-fill-rsb-on-context-switch-for-affected-cpus.patch
queue-4.14/x86-mm-prepare-sme_encrypt_kernel-for-page-aligned-encryption.patch
queue-4.14/x86-retpoline-add-lfence-to-the-retpoline-rsb-filling-rsb-macros.patch
queue-4.14/x86-mm-rework-wbinvd-hlt-operation-in-stop_this_cpu.patch
queue-4.14/x86-mm-encrypt-the-initrd-earlier-for-bsp-microcode-update.patch
This is a note to let you know that I've just added the patch titled
x86/mce: Make machine check speculation protected
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-mce-make-machine-check-speculation-protected.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 6f41c34d69eb005e7848716bbcafc979b35037d5 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx(a)linutronix.de>
Date: Thu, 18 Jan 2018 16:28:26 +0100
Subject: x86/mce: Make machine check speculation protected
From: Thomas Gleixner <tglx(a)linutronix.de>
commit 6f41c34d69eb005e7848716bbcafc979b35037d5 upstream.
The machine check idtentry uses an indirect branch directly from the low
level code. This evades the speculation protection.
Replace it by a direct call into C code and issue the indirect call there
so the compiler can apply the proper speculation protection.
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Reviewed-by: Borislav Petkov <bp(a)alien8.de>
Reviewed-by: David Woodhouse <dwmw(a)amazon.co.uk>
Niced-by: Peter Zijlstra <peterz(a)infradead.org>
Link: https://lkml.kernel.org/r/alpine.DEB.2.20.1801181626290.1847@nanos
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 2 +-
arch/x86/include/asm/traps.h | 1 +
arch/x86/kernel/cpu/mcheck/mce.c | 5 +++++
3 files changed, 7 insertions(+), 1 deletion(-)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1258,7 +1258,7 @@ idtentry async_page_fault do_async_page_
#endif
#ifdef CONFIG_X86_MCE
-idtentry machine_check has_error_code=0 paranoid=1 do_sym=*machine_check_vector(%rip)
+idtentry machine_check do_mce has_error_code=0 paranoid=1
#endif
/*
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -88,6 +88,7 @@ dotraplinkage void do_simd_coprocessor_e
#ifdef CONFIG_X86_32
dotraplinkage void do_iret_error(struct pt_regs *, long);
#endif
+dotraplinkage void do_mce(struct pt_regs *, long);
static inline int get_si_code(unsigned long condition)
{
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -1788,6 +1788,11 @@ static void unexpected_machine_check(str
void (*machine_check_vector)(struct pt_regs *, long error_code) =
unexpected_machine_check;
+dotraplinkage void do_mce(struct pt_regs *regs, long error_code)
+{
+ machine_check_vector(regs, error_code);
+}
+
/*
* Called for each booted CPU to set up machine checks.
* Must be called with preempt off:
Patches currently in stable-queue which might be from tglx(a)linutronix.de are
queue-4.14/futex-prevent-overflow-by-strengthen-input-validation.patch
queue-4.14/x86-mm-clean-up-register-saving-in-the-__enc_copy-assembly-code.patch
queue-4.14/x86-mce-make-machine-check-speculation-protected.patch
queue-4.14/x86-pti-document-fix-wrong-index.patch
queue-4.14/objtool-fix-clang-enum-conversion-warning.patch
queue-4.14/timers-unconditionally-check-deferrable-base.patch
queue-4.14/x86-apic-vector-fix-off-by-one-in-error-path.patch
queue-4.14/objtool-improve-error-message-for-bad-file-argument.patch
queue-4.14/futex-avoid-violating-the-10th-rule-of-futex.patch
queue-4.14/kprobes-x86-disable-optimizing-on-the-function-jumps-to-indirect-thunk.patch
queue-4.14/objtool-fix-seg-fault-with-gold-linker.patch
queue-4.14/x86-mm-use-a-struct-to-reduce-parameters-for-sme-pgd-mapping.patch
queue-4.14/x86-mm-pkeys-fix-fill_sig_info_pkey.patch
queue-4.14/x86-tsc-fix-erroneous-tsc-rate-on-skylake-xeon.patch
queue-4.14/x86-mm-centralize-pmd-flags-in-sme_encrypt_kernel.patch
queue-4.14/module-add-retpoline-tag-to-vermagic.patch
queue-4.14/x86-kasan-panic-if-there-is-not-enough-memory-to-boot.patch
queue-4.14/kprobes-x86-blacklist-indirect-thunk-functions-for-kprobes.patch
queue-4.14/x86-idt-mark-idt-tables-__initconst.patch
queue-4.14/x86-retpoline-fill-rsb-on-context-switch-for-affected-cpus.patch
queue-4.14/x86-mm-prepare-sme_encrypt_kernel-for-page-aligned-encryption.patch
queue-4.14/delayacct-account-blkio-completion-on-the-correct-task.patch
queue-4.14/x86-tsc-future-proof-native_calibrate_tsc.patch
queue-4.14/objtool-fix-seg-fault-with-clang-compiled-objects.patch
queue-4.14/x86-retpoline-add-lfence-to-the-retpoline-rsb-filling-rsb-macros.patch
queue-4.14/x86-retpoline-optimize-inline-assembler-for-vmexit_fill_rsb.patch
queue-4.14/x86-mm-rework-wbinvd-hlt-operation-in-stop_this_cpu.patch
queue-4.14/x86-mm-encrypt-the-initrd-earlier-for-bsp-microcode-update.patch
queue-4.14/retpoline-introduce-start-end-markers-of-indirect-thunk.patch
queue-4.14/x86-cpufeature-move-processor-tracing-out-of-scattered-features.patch
queue-4.14/x86-intel_rdt-cqm-prevent-use-after-free.patch
queue-4.14/objtool-fix-seg-fault-caused-by-missing-parameter.patch
This is a note to let you know that I've just added the patch titled
net: mvpp2: do not disable GMAC padding
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
net-mvpp2-do-not-disable-gmac-padding.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From e749aca84b10f3987b2ee1f76e0c7d8aacc5653c Mon Sep 17 00:00:00 2001
From: Yan Markman <ymarkman(a)marvell.com>
Date: Tue, 28 Nov 2017 14:19:50 +0100
Subject: net: mvpp2: do not disable GMAC padding
From: Yan Markman <ymarkman(a)marvell.com>
commit e749aca84b10f3987b2ee1f76e0c7d8aacc5653c upstream.
Short fragmented packets may never be sent by the hardware when padding
is disabled. This patch stops modifying the GMAC padding bits, leaving
them at their reset value (disabled).
Fixes: 3919357fb0bb ("net: mvpp2: initialize the GMAC when using a port")
Signed-off-by: Yan Markman <ymarkman(a)marvell.com>
[Antoine: commit message]
Signed-off-by: Antoine Tenart <antoine.tenart(a)free-electrons.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/net/ethernet/marvell/mvpp2.c | 9 ---------
1 file changed, 9 deletions(-)
--- a/drivers/net/ethernet/marvell/mvpp2.c
+++ b/drivers/net/ethernet/marvell/mvpp2.c
@@ -4552,11 +4552,6 @@ static void mvpp2_port_mii_gmac_configur
MVPP22_CTRL4_QSGMII_BYPASS_ACTIVE;
val &= ~MVPP22_CTRL4_EXT_PIN_GMII_SEL;
writel(val, port->base + MVPP22_GMAC_CTRL_4_REG);
-
- val = readl(port->base + MVPP2_GMAC_CTRL_2_REG);
- val |= MVPP2_GMAC_DISABLE_PADDING;
- val &= ~MVPP2_GMAC_FLOW_CTRL_MASK;
- writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
} else if (phy_interface_mode_is_rgmii(port->phy_interface)) {
val = readl(port->base + MVPP22_GMAC_CTRL_4_REG);
val |= MVPP22_CTRL4_EXT_PIN_GMII_SEL |
@@ -4564,10 +4559,6 @@ static void mvpp2_port_mii_gmac_configur
MVPP22_CTRL4_QSGMII_BYPASS_ACTIVE;
val &= ~MVPP22_CTRL4_DP_CLK_SEL;
writel(val, port->base + MVPP22_GMAC_CTRL_4_REG);
-
- val = readl(port->base + MVPP2_GMAC_CTRL_2_REG);
- val &= ~MVPP2_GMAC_DISABLE_PADDING;
- writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
}
/* The port is connected to a copper PHY */
Patches currently in stable-queue which might be from ymarkman(a)marvell.com are
queue-4.14/net-mvpp2-do-not-disable-gmac-padding.patch