This is a note to let you know that I've just added the patch titled
x86/entry/64: Clear extra registers beyond syscall arguments, to reduce speculation attack surface
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-entry-64-clear-extra-registers-beyond-syscall-arguments-to-reduce-speculation-attack-surface.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 8e1eb3fa009aa7c0b944b3c8b26b07de0efb3200 Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Mon, 5 Feb 2018 17:18:05 -0800
Subject: x86/entry/64: Clear extra registers beyond syscall arguments, to reduce speculation attack surface
From: Dan Williams <dan.j.williams(a)intel.com>
commit 8e1eb3fa009aa7c0b944b3c8b26b07de0efb3200 upstream.
At entry userspace may have (maliciously) populated the extra registers
outside the syscall calling convention with arbitrary values that could
be useful in a speculative execution (Spectre style) attack.
Clear these registers to minimize the kernel's attack surface.
Note, this only clears the extra registers and not the unused
registers for syscalls with fewer than 6 arguments, since those registers are
likely to be clobbered well before their values could be put to use
under speculation.
Note, Linus found that the XOR instructions can be executed with
minimized cost if interleaved with the PUSH instructions, and Ingo's
analysis found that R10 and R11 should be included in the register
clearing beyond the typical 'extra' syscall calling convention
registers.
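As an illustration only (not part of this patch, and with hypothetical
array names), the kind of Spectre-v1 style gadget this guards against
looks roughly like the C sketch below; 'idx' stands for any
attacker-controlled register value that survives into kernel code:

    /*
     * Illustrative sketch, not kernel code: a mispredicted bounds check
     * can dereference attacker-controlled data speculatively and leak
     * it through the data cache.
     */
    unsigned char array1[16];
    unsigned char array2[256 * 512];

    unsigned char gadget(unsigned long idx)
    {
            if (idx < 16)                              /* may be bypassed speculatively */
                    return array2[array1[idx] * 512];  /* cache side channel */
            return 0;
    }

Clearing the registers that are not part of the syscall ABI removes one
easy way for userspace to plant such an 'idx' value.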
Suggested-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Reported-by: Andi Kleen <ak(a)linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
Cc: <stable(a)vger.kernel.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Link: http://lkml.kernel.org/r/151787988577.7847.16733592218894189003.stgit@dwill…
[ Made small improvements to the changelog and the code comments. ]
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/entry/entry_64.S | 13 +++++++++++++
1 file changed, 13 insertions(+)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -231,13 +231,26 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
pushq %r8 /* pt_regs->r8 */
pushq %r9 /* pt_regs->r9 */
pushq %r10 /* pt_regs->r10 */
+ /*
+ * Clear extra registers that a speculation attack might
+ * otherwise want to exploit. Interleave XOR with PUSH
+ * for better uop scheduling:
+ */
+ xorq %r10, %r10 /* nospec r10 */
pushq %r11 /* pt_regs->r11 */
+ xorq %r11, %r11 /* nospec r11 */
pushq %rbx /* pt_regs->rbx */
+ xorl %ebx, %ebx /* nospec rbx */
pushq %rbp /* pt_regs->rbp */
+ xorl %ebp, %ebp /* nospec rbp */
pushq %r12 /* pt_regs->r12 */
+ xorq %r12, %r12 /* nospec r12 */
pushq %r13 /* pt_regs->r13 */
+ xorq %r13, %r13 /* nospec r13 */
pushq %r14 /* pt_regs->r14 */
+ xorq %r14, %r14 /* nospec r14 */
pushq %r15 /* pt_regs->r15 */
+ xorq %r15, %r15 /* nospec r15 */
UNWIND_HINT_REGS
TRACE_IRQS_OFF
Patches currently in stable-queue which might be from dan.j.williams(a)intel.com are
queue-4.14/kvm-nvmx-set-the-cpu_based_use_msr_bitmaps-if-we-have-a-valid-l02-msr-bitmap.patch
queue-4.14/x86-nvmx-properly-set-spec_ctrl-and-pred_cmd-before-merging-msrs.patch
queue-4.14/x86-speculation-update-speculation-control-microcode-blacklist.patch
queue-4.14/x86-speculation-correct-speculation-control-microcode-blacklist-again.patch
queue-4.14/x86-entry-64-clear-extra-registers-beyond-syscall-arguments-to-reduce-speculation-attack-surface.patch
queue-4.14/kvm-x86-reduce-retpoline-performance-impact-in-slot_handle_level_range-by-always-inlining-iterator-helper-methods.patch
queue-4.14/x86-mm-pti-fix-pti-comment-in-entry_syscall_64.patch
queue-4.14/x86-speculation-clean-up-various-spectre-related-details.patch
queue-4.14/revert-x86-speculation-simplify-indirect_branch_prediction_barrier.patch
queue-4.14/x86-entry-64-compat-clear-registers-for-compat-syscalls-to-reduce-speculation-attack-surface.patch
This is a note to let you know that I've just added the patch titled
Revert "x86/speculation: Simplify indirect_branch_prediction_barrier()"
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
revert-x86-speculation-simplify-indirect_branch_prediction_barrier.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From f208820a321f9b23d77d7eed89945d862d62a3ed Mon Sep 17 00:00:00 2001
From: David Woodhouse <dwmw(a)amazon.co.uk>
Date: Sat, 10 Feb 2018 23:39:23 +0000
Subject: Revert "x86/speculation: Simplify indirect_branch_prediction_barrier()"
From: David Woodhouse <dwmw(a)amazon.co.uk>
commit f208820a321f9b23d77d7eed89945d862d62a3ed upstream.
This reverts commit 64e16720ea0879f8ab4547e3b9758936d483909b.
We cannot call C functions like that, without marking all the
call-clobbered registers as, well, clobbered. We might have got away
with it for now because the __ibp_barrier() function was *fairly*
unlikely to actually use any other registers. But no. Just no.
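For illustration (a minimal sketch, not the kernel's actual code; the
0x49 and 1 constants stand in for MSR_IA32_PRED_CMD and PRED_CMD_IBPB),
doing the WRMSR inline only requires naming the registers the asm itself
touches, whereas a "call" into C would clobber every call-clobbered
register of the x86-64 SysV ABI (rax, rcx, rdx, rsi, rdi, r8-r11), all
of which would have to be listed:

    static inline void ibpb_sketch(void)
    {
            asm volatile("movl %[msr], %%ecx\n\t"
                         "movl %[val], %%eax\n\t"
                         "movl $0, %%edx\n\t"
                         "wrmsr"
                         : : [msr] "i" (0x49), [val] "i" (1)
                         : "eax", "ecx", "edx", "memory");
    }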
Signed-off-by: David Woodhouse <dwmw(a)amazon.co.uk>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Arjan van de Ven <arjan(a)linux.intel.com>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: David Woodhouse <dwmw2(a)infradead.org>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: arjan.van.de.ven(a)intel.com
Cc: dave.hansen(a)intel.com
Cc: jmattson(a)google.com
Cc: karahmed(a)amazon.de
Cc: kvm(a)vger.kernel.org
Cc: pbonzini(a)redhat.com
Cc: rkrcmar(a)redhat.com
Cc: sironi(a)amazon.de
Link: http://lkml.kernel.org/r/1518305967-31356-3-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/include/asm/nospec-branch.h | 13 +++++++++----
arch/x86/include/asm/processor.h | 3 ---
arch/x86/kernel/cpu/bugs.c | 6 ------
3 files changed, 9 insertions(+), 13 deletions(-)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -164,10 +164,15 @@ static inline void vmexit_fill_RSB(void)
static inline void indirect_branch_prediction_barrier(void)
{
- alternative_input("",
- "call __ibp_barrier",
- X86_FEATURE_USE_IBPB,
- ASM_NO_INPUT_CLOBBER("eax", "ecx", "edx", "memory"));
+ asm volatile(ALTERNATIVE("",
+ "movl %[msr], %%ecx\n\t"
+ "movl %[val], %%eax\n\t"
+ "movl $0, %%edx\n\t"
+ "wrmsr",
+ X86_FEATURE_USE_IBPB)
+ : : [msr] "i" (MSR_IA32_PRED_CMD),
+ [val] "i" (PRED_CMD_IBPB)
+ : "eax", "ecx", "edx", "memory");
}
#endif /* __ASSEMBLY__ */
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -968,7 +968,4 @@ bool xen_set_default_idle(void);
void stop_this_cpu(void *dummy);
void df_debug(struct pt_regs *regs, long error_code);
-
-void __ibp_barrier(void);
-
#endif /* _ASM_X86_PROCESSOR_H */
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -337,9 +337,3 @@ ssize_t cpu_show_spectre_v2(struct devic
spectre_v2_module_string());
}
#endif
-
-void __ibp_barrier(void)
-{
- __wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, 0);
-}
-EXPORT_SYMBOL_GPL(__ibp_barrier);
Patches currently in stable-queue which might be from dwmw(a)amazon.co.uk are
queue-4.14/kvm-nvmx-set-the-cpu_based_use_msr_bitmaps-if-we-have-a-valid-l02-msr-bitmap.patch
queue-4.14/x86-nvmx-properly-set-spec_ctrl-and-pred_cmd-before-merging-msrs.patch
queue-4.14/x86-speculation-update-speculation-control-microcode-blacklist.patch
queue-4.14/x86-speculation-correct-speculation-control-microcode-blacklist-again.patch
queue-4.14/kvm-x86-reduce-retpoline-performance-impact-in-slot_handle_level_range-by-always-inlining-iterator-helper-methods.patch
queue-4.14/x86-speculation-clean-up-various-spectre-related-details.patch
queue-4.14/revert-x86-speculation-simplify-indirect_branch_prediction_barrier.patch
This is a note to let you know that I've just added the patch titled
powerpc/mm/radix: Split linear mapping on hot-unplug
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
powerpc-mm-radix-split-linear-mapping-on-hot-unplug.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 4dd5f8a99e791a8c6500e3592f3ce81ae7edcde1 Mon Sep 17 00:00:00 2001
From: Balbir Singh <bsingharora(a)gmail.com>
Date: Wed, 7 Feb 2018 17:35:51 +1100
Subject: powerpc/mm/radix: Split linear mapping on hot-unplug
From: Balbir Singh <bsingharora(a)gmail.com>
commit 4dd5f8a99e791a8c6500e3592f3ce81ae7edcde1 upstream.
This patch splits the linear mapping if the hot-unplug range is
smaller than the mapping size. The code detects if the mapping needs
to be split into a smaller size and if so, uses the stop machine
infrastructure to clear the existing mapping and then remap the
remaining range using a smaller page size.
The code will skip any region of the mapping that overlaps with kernel
text and warn about it once. We don't want to remove a mapping where
the kernel text and the LMB we intend to remove overlap in the same
TLB mapping as it may affect the currently executing code.
I've tested these changes under a kvm guest with 2 vcpus. From a split
mapping point of view, some of the caveats mentioned above applied to
the testing I did.
Fixes: 4b5d62ca17a1 ("powerpc/mm: add radix__remove_section_mapping()")
Signed-off-by: Balbir Singh <bsingharora(a)gmail.com>
[mpe: Tweak change log to match updated behaviour]
Signed-off-by: Michael Ellerman <mpe(a)ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/powerpc/mm/pgtable-radix.c | 95 +++++++++++++++++++++++++++++++---------
1 file changed, 74 insertions(+), 21 deletions(-)
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -17,6 +17,7 @@
#include <linux/of_fdt.h>
#include <linux/mm.h>
#include <linux/string_helpers.h>
+#include <linux/stop_machine.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
@@ -671,6 +672,30 @@ static void free_pmd_table(pmd_t *pmd_st
pud_clear(pud);
}
+struct change_mapping_params {
+ pte_t *pte;
+ unsigned long start;
+ unsigned long end;
+ unsigned long aligned_start;
+ unsigned long aligned_end;
+};
+
+static int stop_machine_change_mapping(void *data)
+{
+ struct change_mapping_params *params =
+ (struct change_mapping_params *)data;
+
+ if (!data)
+ return -1;
+
+ spin_unlock(&init_mm.page_table_lock);
+ pte_clear(&init_mm, params->aligned_start, params->pte);
+ create_physical_mapping(params->aligned_start, params->start);
+ create_physical_mapping(params->end, params->aligned_end);
+ spin_lock(&init_mm.page_table_lock);
+ return 0;
+}
+
static void remove_pte_table(pte_t *pte_start, unsigned long addr,
unsigned long end)
{
@@ -699,6 +724,52 @@ static void remove_pte_table(pte_t *pte_
}
}
+/*
+ * clear the pte and potentially split the mapping helper
+ */
+static void split_kernel_mapping(unsigned long addr, unsigned long end,
+ unsigned long size, pte_t *pte)
+{
+ unsigned long mask = ~(size - 1);
+ unsigned long aligned_start = addr & mask;
+ unsigned long aligned_end = addr + size;
+ struct change_mapping_params params;
+ bool split_region = false;
+
+ if ((end - addr) < size) {
+ /*
+ * We're going to clear the PTE, but not flushed
+ * the mapping, time to remap and flush. The
+ * effects if visible outside the processor or
+ * if we are running in code close to the
+ * mapping we cleared, we are in trouble.
+ */
+ if (overlaps_kernel_text(aligned_start, addr) ||
+ overlaps_kernel_text(end, aligned_end)) {
+ /*
+ * Hack, just return, don't pte_clear
+ */
+ WARN_ONCE(1, "Linear mapping %lx->%lx overlaps kernel "
+ "text, not splitting\n", addr, end);
+ return;
+ }
+ split_region = true;
+ }
+
+ if (split_region) {
+ params.pte = pte;
+ params.start = addr;
+ params.end = end;
+ params.aligned_start = addr & ~(size - 1);
+ params.aligned_end = min_t(unsigned long, aligned_end,
+ (unsigned long)__va(memblock_end_of_DRAM()));
+ stop_machine(stop_machine_change_mapping, &params, NULL);
+ return;
+ }
+
+ pte_clear(&init_mm, addr, pte);
+}
+
static void remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
unsigned long end)
{
@@ -714,13 +785,7 @@ static void remove_pmd_table(pmd_t *pmd_
continue;
if (pmd_huge(*pmd)) {
- if (!IS_ALIGNED(addr, PMD_SIZE) ||
- !IS_ALIGNED(next, PMD_SIZE)) {
- WARN_ONCE(1, "%s: unaligned range\n", __func__);
- continue;
- }
-
- pte_clear(&init_mm, addr, (pte_t *)pmd);
+ split_kernel_mapping(addr, end, PMD_SIZE, (pte_t *)pmd);
continue;
}
@@ -745,13 +810,7 @@ static void remove_pud_table(pud_t *pud_
continue;
if (pud_huge(*pud)) {
- if (!IS_ALIGNED(addr, PUD_SIZE) ||
- !IS_ALIGNED(next, PUD_SIZE)) {
- WARN_ONCE(1, "%s: unaligned range\n", __func__);
- continue;
- }
-
- pte_clear(&init_mm, addr, (pte_t *)pud);
+ split_kernel_mapping(addr, end, PUD_SIZE, (pte_t *)pud);
continue;
}
@@ -777,13 +836,7 @@ static void remove_pagetable(unsigned lo
continue;
if (pgd_huge(*pgd)) {
- if (!IS_ALIGNED(addr, PGDIR_SIZE) ||
- !IS_ALIGNED(next, PGDIR_SIZE)) {
- WARN_ONCE(1, "%s: unaligned range\n", __func__);
- continue;
- }
-
- pte_clear(&init_mm, addr, (pte_t *)pgd);
+ split_kernel_mapping(addr, end, PGDIR_SIZE, (pte_t *)pgd);
continue;
}
Patches currently in stable-queue which might be from bsingharora(a)gmail.com are
queue-4.14/powerpc-radix-remove-trace_tlbie-call-from-radix__flush_tlb_all.patch
queue-4.14/powerpc-mm-radix-split-linear-mapping-on-hot-unplug.patch
This is a note to let you know that I've just added the patch titled
KVM/x86: Reduce retpoline performance impact in slot_handle_level_range(), by always inlining iterator helper methods
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kvm-x86-reduce-retpoline-performance-impact-in-slot_handle_level_range-by-always-inlining-iterator-helper-methods.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 928a4c39484281f8ca366f53a1db79330d058401 Mon Sep 17 00:00:00 2001
From: David Woodhouse <dwmw(a)amazon.co.uk>
Date: Sat, 10 Feb 2018 23:39:24 +0000
Subject: KVM/x86: Reduce retpoline performance impact in slot_handle_level_range(), by always inlining iterator helper methods
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: David Woodhouse <dwmw(a)amazon.co.uk>
commit 928a4c39484281f8ca366f53a1db79330d058401 upstream.
With retpoline, tight loops of "call this function for every XXX" are
very much pessimised by taking a prediction miss *every* time. This one
is by far the biggest contributor to the guest launch time with retpoline.
By marking the iterator slot_handle_…() functions always_inline, we can
ensure that the indirect function call can be optimised away into a
direct call and it actually generates slightly smaller code because
some of the other conditionals can get optimised away too.
Performance is now pretty close to what we see with nospectre_v2 on
the command line.
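As a rough illustration (simplified, with made-up names rather than the
KVM functions), forcing the wrapper inline lets the compiler resolve the
function-pointer argument at call sites where the callback is a
compile-time constant, turning the retpolined indirect call in the loop
into a direct call:

    #include <stdbool.h>

    typedef bool (*slot_handler_t)(int slot);

    /* Once inlined at a call site such as for_each_slot(flush_slot, n),
     * fn(i) becomes a direct call (or is inlined itself), so no
     * retpoline thunk is taken on every iteration. */
    static inline __attribute__((always_inline))
    bool for_each_slot(slot_handler_t fn, int nslots)
    {
            bool flush = false;
            int i;

            for (i = 0; i < nslots; i++)
                    flush |= fn(i);
            return flush;
    }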
Suggested-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Tested-by: Filippo Sironi <sironi(a)amazon.de>
Signed-off-by: David Woodhouse <dwmw(a)amazon.co.uk>
Reviewed-by: Filippo Sironi <sironi(a)amazon.de>
Acked-by: Paolo Bonzini <pbonzini(a)redhat.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Arjan van de Ven <arjan(a)linux.intel.com>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: David Woodhouse <dwmw2(a)infradead.org>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: arjan.van.de.ven(a)intel.com
Cc: dave.hansen(a)intel.com
Cc: jmattson(a)google.com
Cc: karahmed(a)amazon.de
Cc: kvm(a)vger.kernel.org
Cc: rkrcmar(a)redhat.com
Link: http://lkml.kernel.org/r/1518305967-31356-4-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kvm/mmu.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5063,7 +5063,7 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
typedef bool (*slot_level_handler) (struct kvm *kvm, struct kvm_rmap_head *rmap_head);
/* The caller should hold mmu-lock before calling this function. */
-static bool
+static __always_inline bool
slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
slot_level_handler fn, int start_level, int end_level,
gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb)
@@ -5093,7 +5093,7 @@ slot_handle_level_range(struct kvm *kvm,
return flush;
}
-static bool
+static __always_inline bool
slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
slot_level_handler fn, int start_level, int end_level,
bool lock_flush_tlb)
@@ -5104,7 +5104,7 @@ slot_handle_level(struct kvm *kvm, struc
lock_flush_tlb);
}
-static bool
+static __always_inline bool
slot_handle_all_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
slot_level_handler fn, bool lock_flush_tlb)
{
@@ -5112,7 +5112,7 @@ slot_handle_all_level(struct kvm *kvm, s
PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
}
-static bool
+static __always_inline bool
slot_handle_large_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
slot_level_handler fn, bool lock_flush_tlb)
{
@@ -5120,7 +5120,7 @@ slot_handle_large_level(struct kvm *kvm,
PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
}
-static bool
+static __always_inline bool
slot_handle_leaf(struct kvm *kvm, struct kvm_memory_slot *memslot,
slot_level_handler fn, bool lock_flush_tlb)
{
Patches currently in stable-queue which might be from dwmw(a)amazon.co.uk are
queue-4.14/kvm-nvmx-set-the-cpu_based_use_msr_bitmaps-if-we-have-a-valid-l02-msr-bitmap.patch
queue-4.14/x86-nvmx-properly-set-spec_ctrl-and-pred_cmd-before-merging-msrs.patch
queue-4.14/x86-speculation-update-speculation-control-microcode-blacklist.patch
queue-4.14/x86-speculation-correct-speculation-control-microcode-blacklist-again.patch
queue-4.14/kvm-x86-reduce-retpoline-performance-impact-in-slot_handle_level_range-by-always-inlining-iterator-helper-methods.patch
queue-4.14/x86-speculation-clean-up-various-spectre-related-details.patch
queue-4.14/revert-x86-speculation-simplify-indirect_branch_prediction_barrier.patch
This is a note to let you know that I've just added the patch titled
KVM/nVMX: Set the CPU_BASED_USE_MSR_BITMAPS if we have a valid L02 MSR bitmap
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
kvm-nvmx-set-the-cpu_based_use_msr_bitmaps-if-we-have-a-valid-l02-msr-bitmap.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 3712caeb14dcb33fb4d5114f14c0beef10aca101 Mon Sep 17 00:00:00 2001
From: KarimAllah Ahmed <karahmed(a)amazon.de>
Date: Sat, 10 Feb 2018 23:39:26 +0000
Subject: KVM/nVMX: Set the CPU_BASED_USE_MSR_BITMAPS if we have a valid L02 MSR bitmap
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: KarimAllah Ahmed <karahmed(a)amazon.de>
commit 3712caeb14dcb33fb4d5114f14c0beef10aca101 upstream.
We either clear the CPU_BASED_USE_MSR_BITMAPS and end up intercepting all
MSR accesses or create a valid L02 MSR bitmap and use that. This decision
has to be made every time we evaluate whether we are going to generate the
L02 MSR bitmap.
Before commit:
d28b387fb74d ("KVM/VMX: Allow direct access to MSR_IA32_SPEC_CTRL")
... this was probably OK since the decision was always identical.
This is no longer the case now since the MSR bitmap might actually
change once we decide to not intercept SPEC_CTRL and PRED_CMD.
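Schematically (a simplified sketch, not the actual KVM code; the helper
name is invented, while the vmcs accessors and flag appear in the diff
below), the control bit now has to track the outcome of the bitmap merge
every time it is evaluated:

    static void sync_msr_bitmap_control(bool valid_l02_bitmap)
    {
            if (valid_l02_bitmap)
                    vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
                                  CPU_BASED_USE_MSR_BITMAPS);
            else
                    vmcs_clear_bits(CPU_BASED_VM_EXEC_CONTROL,
                                    CPU_BASED_USE_MSR_BITMAPS);
    }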
Signed-off-by: KarimAllah Ahmed <karahmed(a)amazon.de>
Signed-off-by: David Woodhouse <dwmw(a)amazon.co.uk>
Acked-by: Paolo Bonzini <pbonzini(a)redhat.com>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Arjan van de Ven <arjan(a)linux.intel.com>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: Dave Hansen <dave.hansen(a)linux.intel.com>
Cc: David Woodhouse <dwmw2(a)infradead.org>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Radim Krčmář <rkrcmar(a)redhat.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: arjan.van.de.ven(a)intel.com
Cc: dave.hansen(a)intel.com
Cc: jmattson(a)google.com
Cc: kvm(a)vger.kernel.org
Cc: sironi(a)amazon.de
Link: http://lkml.kernel.org/r/1518305967-31356-6-git-send-email-dwmw@amazon.co.uk
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/kvm/vmx.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10127,7 +10127,8 @@ static void nested_get_vmcs12_pages(stru
if (cpu_has_vmx_msr_bitmap() &&
nested_cpu_has(vmcs12, CPU_BASED_USE_MSR_BITMAPS) &&
nested_vmx_merge_msr_bitmap(vcpu, vmcs12))
- ;
+ vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
+ CPU_BASED_USE_MSR_BITMAPS);
else
vmcs_clear_bits(CPU_BASED_VM_EXEC_CONTROL,
CPU_BASED_USE_MSR_BITMAPS);
Patches currently in stable-queue which might be from karahmed(a)amazon.de are
queue-4.14/kvm-nvmx-set-the-cpu_based_use_msr_bitmaps-if-we-have-a-valid-l02-msr-bitmap.patch
queue-4.14/x86-nvmx-properly-set-spec_ctrl-and-pred_cmd-before-merging-msrs.patch
queue-4.14/x86-speculation-update-speculation-control-microcode-blacklist.patch
queue-4.14/kvm-x86-reduce-retpoline-performance-impact-in-slot_handle_level_range-by-always-inlining-iterator-helper-methods.patch
queue-4.14/revert-x86-speculation-simplify-indirect_branch_prediction_barrier.patch
This is a note to let you know that I've just added the patch titled
crypto: sun4i_ss_prng - fix return value of sun4i_ss_prng_generate
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-sun4i_ss_prng-fix-return-value-of-sun4i_ss_prng_generate.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From dd78c832ffaf86eb6434e56de4bc3bc31f03f771 Mon Sep 17 00:00:00 2001
From: Artem Savkov <artem.savkov(a)gmail.com>
Date: Tue, 6 Feb 2018 22:20:21 +0100
Subject: crypto: sun4i_ss_prng - fix return value of sun4i_ss_prng_generate
From: Artem Savkov <artem.savkov(a)gmail.com>
commit dd78c832ffaf86eb6434e56de4bc3bc31f03f771 upstream.
According to crypto/rng.h, the generate function should return 0 on success
and < 0 on error.
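As a sketch of that contract (illustrative only; the function name is
made up and get_random_bytes stands in for the device's PRNG), a
conforming ->generate() callback fills the destination buffer and
reports success with 0, not with the length:

    static int example_prng_generate(struct crypto_rng *tfm,
                                     const u8 *src, unsigned int slen,
                                     u8 *dst, unsigned int dlen)
    {
            get_random_bytes(dst, dlen);    /* fill the caller's buffer */
            return 0;                       /* 0 on success, never dlen */
    }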
Fixes: b8ae5c7387ad ("crypto: sun4i-ss - support the Security System PRNG")
Signed-off-by: Artem Savkov <artem.savkov(a)gmail.com>
Acked-by: Corentin Labbe <clabbe.montjoie(a)gmail.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/crypto/sunxi-ss/sun4i-ss-prng.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
+++ b/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
@@ -52,5 +52,5 @@ int sun4i_ss_prng_generate(struct crypto
writel(0, ss->base + SS_CTL);
spin_unlock(&ss->slock);
- return dlen;
+ return 0;
}
Patches currently in stable-queue which might be from artem.savkov(a)gmail.com are
queue-4.14/crypto-sun4i_ss_prng-convert-lock-to-_bh-in-sun4i_ss_prng_generate.patch
queue-4.14/crypto-sun4i_ss_prng-fix-return-value-of-sun4i_ss_prng_generate.patch
This is a note to let you know that I've just added the patch titled
crypto: sun4i_ss_prng - convert lock to _bh in sun4i_ss_prng_generate
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
crypto-sun4i_ss_prng-convert-lock-to-_bh-in-sun4i_ss_prng_generate.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 2e7d1d61ea6c0f1c4da5eb82cafac750d55637a7 Mon Sep 17 00:00:00 2001
From: Artem Savkov <artem.savkov(a)gmail.com>
Date: Tue, 6 Feb 2018 22:20:22 +0100
Subject: crypto: sun4i_ss_prng - convert lock to _bh in sun4i_ss_prng_generate
From: Artem Savkov <artem.savkov(a)gmail.com>
commit 2e7d1d61ea6c0f1c4da5eb82cafac750d55637a7 upstream.
Lockdep detects a possible deadlock in sun4i_ss_prng_generate() and
throws an "inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage" warning.
Disable softirqs to fix this.
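The resulting pattern (sketch; ss->slock is the driver's existing lock)
keeps bottom halves disabled on the local CPU for the whole critical
section, so a softirq user of the same lock can no longer interrupt the
holder and deadlock:

    spin_lock_bh(&ss->slock);       /* also disables softirqs locally */
    /* ... program the PRNG hardware ... */
    spin_unlock_bh(&ss->slock);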
Fixes: b8ae5c7387ad ("crypto: sun4i-ss - support the Security System PRNG")
Signed-off-by: Artem Savkov <artem.savkov(a)gmail.com>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/crypto/sunxi-ss/sun4i-ss-prng.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
+++ b/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
@@ -28,7 +28,7 @@ int sun4i_ss_prng_generate(struct crypto
algt = container_of(alg, struct sun4i_ss_alg_template, alg.rng);
ss = algt->ss;
- spin_lock(&ss->slock);
+ spin_lock_bh(&ss->slock);
writel(mode, ss->base + SS_CTL);
@@ -51,6 +51,6 @@ int sun4i_ss_prng_generate(struct crypto
}
writel(0, ss->base + SS_CTL);
- spin_unlock(&ss->slock);
+ spin_unlock_bh(&ss->slock);
return 0;
}
Patches currently in stable-queue which might be from artem.savkov(a)gmail.com are
queue-4.14/crypto-sun4i_ss_prng-convert-lock-to-_bh-in-sun4i_ss_prng_generate.patch
queue-4.14/crypto-sun4i_ss_prng-fix-return-value-of-sun4i_ss_prng_generate.patch
This is a note to let you know that I've just added the patch titled
compiler-gcc.h: __nostackprotector needs gcc-4.4 and up
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
compiler-gcc.h-__nostackprotector-needs-gcc-4.4-and-up.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From d9afaaa4ff7af8b87d4a205e48cb8a6f666d7f01 Mon Sep 17 00:00:00 2001
From: Geert Uytterhoeven <geert(a)linux-m68k.org>
Date: Thu, 1 Feb 2018 11:21:59 +0100
Subject: compiler-gcc.h: __nostackprotector needs gcc-4.4 and up
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Geert Uytterhoeven <geert(a)linux-m68k.org>
commit d9afaaa4ff7af8b87d4a205e48cb8a6f666d7f01 upstream.
Gcc versions before 4.4 do not recognize the __optimize__ compiler
attribute:
warning: ‘__optimize__’ attribute directive ignored
Fixes: 7375ae3a0b79ea07 ("compiler-gcc.h: Introduce __nostackprotector function attribute")
Signed-off-by: Geert Uytterhoeven <geert(a)linux-m68k.org>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/compiler-gcc.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -167,8 +167,6 @@
#if GCC_VERSION >= 40100
# define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
-
-#define __nostackprotector __attribute__((__optimize__("no-stack-protector")))
#endif
#if GCC_VERSION >= 40300
@@ -198,6 +196,7 @@
#if GCC_VERSION >= 40400
#define __optimize(level) __attribute__((__optimize__(level)))
+#define __nostackprotector __optimize("no-stack-protector")
#endif /* GCC_VERSION >= 40400 */
#if GCC_VERSION >= 40500
Patches currently in stable-queue which might be from geert(a)linux-m68k.org are
queue-4.14/compiler-gcc.h-__nostackprotector-needs-gcc-4.4-and-up.patch
queue-4.14/compiler-gcc.h-introduce-__optimize-function-attribute.patch
This is a note to let you know that I've just added the patch titled
compiler-gcc.h: Introduce __optimize function attribute
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
compiler-gcc.h-introduce-__optimize-function-attribute.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From df5d45aa08f848b79caf395211b222790534ccc7 Mon Sep 17 00:00:00 2001
From: Geert Uytterhoeven <geert(a)linux-m68k.org>
Date: Thu, 1 Feb 2018 11:21:58 +0100
Subject: compiler-gcc.h: Introduce __optimize function attribute
From: Geert Uytterhoeven <geert(a)linux-m68k.org>
commit df5d45aa08f848b79caf395211b222790534ccc7 upstream.
Create a new function attribute, __optimize, which allows specifying an
optimization level on a per-function basis.
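A hypothetical use looks like this (the function name is invented; only
the attribute macro itself comes from this patch):

    /* Compile just this function at -O1, regardless of the global level. */
    static void __optimize("O1") fragile_helper(void)
    {
            /* ... */
    }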
Signed-off-by: Geert Uytterhoeven <geert(a)linux-m68k.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
Signed-off-by: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/compiler-gcc.h | 4 ++++
include/linux/compiler.h | 4 ++++
2 files changed, 8 insertions(+)
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -196,6 +196,10 @@
#endif /* __CHECKER__ */
#endif /* GCC_VERSION >= 40300 */
+#if GCC_VERSION >= 40400
+#define __optimize(level) __attribute__((__optimize__(level)))
+#endif /* GCC_VERSION >= 40400 */
+
#if GCC_VERSION >= 40500
#ifndef __CHECKER__
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -266,6 +266,10 @@ static __always_inline void __write_once
#endif /* __ASSEMBLY__ */
+#ifndef __optimize
+# define __optimize(level)
+#endif
+
/* Compile time object size, -1 for unknown */
#ifndef __compiletime_object_size
# define __compiletime_object_size(obj) -1
Patches currently in stable-queue which might be from geert(a)linux-m68k.org are
queue-4.14/compiler-gcc.h-__nostackprotector-needs-gcc-4.4-and-up.patch
queue-4.14/compiler-gcc.h-introduce-__optimize-function-attribute.patch
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 8e1eb3fa009aa7c0b944b3c8b26b07de0efb3200 Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Mon, 5 Feb 2018 17:18:05 -0800
Subject: [PATCH] x86/entry/64: Clear extra registers beyond syscall arguments,
to reduce speculation attack surface
At entry userspace may have (maliciously) populated the extra registers
outside the syscall calling convention with arbitrary values that could
be useful in a speculative execution (Spectre style) attack.
Clear these registers to minimize the kernel's attack surface.
Note, this only clears the extra registers and not the unused
registers for syscalls with fewer than 6 arguments, since those registers are
likely to be clobbered well before their values could be put to use
under speculation.
Note, Linus found that the XOR instructions can be executed with
minimized cost if interleaved with the PUSH instructions, and Ingo's
analysis found that R10 and R11 should be included in the register
clearing beyond the typical 'extra' syscall calling convention
registers.
Suggested-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Reported-by: Andi Kleen <ak(a)linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
Cc: <stable(a)vger.kernel.org>
Cc: Andy Lutomirski <luto(a)kernel.org>
Cc: Borislav Petkov <bp(a)alien8.de>
Cc: Brian Gerst <brgerst(a)gmail.com>
Cc: Denys Vlasenko <dvlasenk(a)redhat.com>
Cc: H. Peter Anvin <hpa(a)zytor.com>
Cc: Josh Poimboeuf <jpoimboe(a)redhat.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Link: http://lkml.kernel.org/r/151787988577.7847.16733592218894189003.stgit@dwill…
[ Made small improvements to the changelog and the code comments. ]
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index c752abe89d80..065a71b90808 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -235,13 +235,26 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
pushq %r8 /* pt_regs->r8 */
pushq %r9 /* pt_regs->r9 */
pushq %r10 /* pt_regs->r10 */
+ /*
+ * Clear extra registers that a speculation attack might
+ * otherwise want to exploit. Interleave XOR with PUSH
+ * for better uop scheduling:
+ */
+ xorq %r10, %r10 /* nospec r10 */
pushq %r11 /* pt_regs->r11 */
+ xorq %r11, %r11 /* nospec r11 */
pushq %rbx /* pt_regs->rbx */
+ xorl %ebx, %ebx /* nospec rbx */
pushq %rbp /* pt_regs->rbp */
+ xorl %ebp, %ebp /* nospec rbp */
pushq %r12 /* pt_regs->r12 */
+ xorq %r12, %r12 /* nospec r12 */
pushq %r13 /* pt_regs->r13 */
+ xorq %r13, %r13 /* nospec r13 */
pushq %r14 /* pt_regs->r14 */
+ xorq %r14, %r14 /* nospec r14 */
pushq %r15 /* pt_regs->r15 */
+ xorq %r15, %r15 /* nospec r15 */
UNWIND_HINT_REGS
TRACE_IRQS_OFF
When support for the A31/A31s CCU was first added, the clock ops for
the CLK_OUT_* clocks were set to the wrong type. The clocks are MP-type,
but the ops were set for div (M) clocks. This went unnoticed until now
because, while they are different clocks, their data structures
aligned in a way that ccu_div_ops would access the second ccu_div_internal
and ccu_mux_internal structures, which happened to hold valid, if incorrect, data.
Furthermore, the use of these CLK_OUT_* was for feeding a precise 32.768
kHz clock signal to the WiFi chip. This was achievable by using the parent
with the same clock rate and no divider. So the incorrect divider setting
did not affect this usage.
Commit 946797aa3f08 ("clk: sunxi-ng: Support fixed post-dividers on MP
style clocks") added a new field to the ccu_mp structure, which broke
the aforementioned alignment. Now the system crashes as div_ops tries
to look up a nonexistent table.
Reported-by: Philipp Rossak <embed3d(a)gmail.com>
Fixes: c6e6c96d8fa6 ("clk: sunxi-ng: Add A31/A31s clocks")
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Chen-Yu Tsai <wens(a)csie.org>
---
Philipp, can you give this a test and report if this fixes thing?
I don't have any A31/A31s boards online to test this.
---
drivers/clk/sunxi-ng/ccu-sun6i-a31.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/clk/sunxi-ng/ccu-sun6i-a31.c b/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
index 72b16ed1012b..3b97f60540ad 100644
--- a/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
+++ b/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
@@ -762,7 +762,7 @@ static struct ccu_mp out_a_clk = {
.features = CCU_FEATURE_FIXED_PREDIV,
.hw.init = CLK_HW_INIT_PARENTS("out-a",
clk_out_parents,
- &ccu_div_ops,
+ &ccu_mp_ops,
0),
},
};
@@ -783,7 +783,7 @@ static struct ccu_mp out_b_clk = {
.features = CCU_FEATURE_FIXED_PREDIV,
.hw.init = CLK_HW_INIT_PARENTS("out-b",
clk_out_parents,
- &ccu_div_ops,
+ &ccu_mp_ops,
0),
},
};
@@ -804,7 +804,7 @@ static struct ccu_mp out_c_clk = {
.features = CCU_FEATURE_FIXED_PREDIV,
.hw.init = CLK_HW_INIT_PARENTS("out-c",
clk_out_parents,
- &ccu_div_ops,
+ &ccu_mp_ops,
0),
},
};
--
2.16.1
Sometimes (firmware bug?) the V5 boost GPIO is not configured as an output
by the BIOS, leading to the 5V boost converter being permanently on.
Explicitly set the direction and drv flags rather than inheriting them
from the firmware to fix this.
Fixes: 585cb239f4de ("extcon: intel-cht-wc: Disable external 5v boost ...")
Cc: stable(a)vger.kernel.org
Reviewed-by: Andy Shevchenko <andy.shevchenko(a)gmail.com>
Signed-off-by: Hans de Goede <hdegoede(a)redhat.com>
---
Changes in v2:
-Add Fixes tag and Cc: stable
---
drivers/extcon/extcon-intel-cht-wc.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/extcon/extcon-intel-cht-wc.c b/drivers/extcon/extcon-intel-cht-wc.c
index 7c4bc8c44c3f..b7e9ea377d70 100644
--- a/drivers/extcon/extcon-intel-cht-wc.c
+++ b/drivers/extcon/extcon-intel-cht-wc.c
@@ -66,6 +66,8 @@
#define CHT_WC_VBUS_GPIO_CTLO 0x6e2d
#define CHT_WC_VBUS_GPIO_CTLO_OUTPUT BIT(0)
+#define CHT_WC_VBUS_GPIO_CTLO_DRV_OD BIT(4)
+#define CHT_WC_VBUS_GPIO_CTLO_DIR_OUT BIT(5)
enum cht_wc_usb_id {
USB_ID_OTG,
@@ -183,14 +185,15 @@ static void cht_wc_extcon_set_5v_boost(struct cht_wc_extcon_data *ext,
{
int ret, val;
- val = enable ? CHT_WC_VBUS_GPIO_CTLO_OUTPUT : 0;
-
/*
* The 5V boost converter is enabled through a gpio on the PMIC, since
* there currently is no gpio driver we access the gpio reg directly.
*/
- ret = regmap_update_bits(ext->regmap, CHT_WC_VBUS_GPIO_CTLO,
- CHT_WC_VBUS_GPIO_CTLO_OUTPUT, val);
+ val = CHT_WC_VBUS_GPIO_CTLO_DRV_OD | CHT_WC_VBUS_GPIO_CTLO_DIR_OUT;
+ if (enable)
+ val |= CHT_WC_VBUS_GPIO_CTLO_OUTPUT;
+
+ ret = regmap_write(ext->regmap, CHT_WC_VBUS_GPIO_CTLO, val);
if (ret)
dev_err(ext->dev, "Error writing Vbus GPIO CTLO: %d\n", ret);
}
--
2.14.3
Commit 61f5acea8737 ("Bluetooth: btusb: Restore QCA Rome suspend/resume fix
with a "rewritten" version") applied the USB_QUIRK_RESET_RESUME to all QCA
btusb devices. But it turns out that the resume problems are not caused by
the QCA Rome chipset, on most platforms it resumes fine. The resume
problems are actually a platform problem (likely the platform cutting all
power when suspended).
The USB_QUIRK_RESET_RESUME quirk also disables runtime suspend, so by
matching on usb-ids, we're causing all boards with these chips to use extra
power, to fix resume problems which only happen on some boards.
This commit fixes this by applying the quirk based on DMI matching instead
of on usb-ids, so that we match the platform and not the chipset.
Fixes: 61f5acea8737 ("Bluetooth: btusb: Restore QCA Rome suspend/resume..")
Cc: stable(a)vger.kernel.org
Cc: Brian Norris <briannorris(a)chromium.org>
Cc: Kai-Heng Feng <kai.heng.feng(a)canonical.com>
Signed-off-by: Hans de Goede <hdegoede(a)redhat.com>
---
drivers/bluetooth/btusb.c | 26 ++++++++++++++++++++------
1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 2a55380ad730..a6023667f3b4 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -21,6 +21,7 @@
*
*/
+#include <linux/dmi.h>
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/usb/quirks.h>
@@ -379,6 +380,22 @@ static const struct usb_device_id blacklist_table[] = {
{ } /* Terminating entry */
};
+/*
+ * The btusb build into some devices needs to be reset on resume, this is a
+ * problem with the platform (likely shutting off all power) not with the
+ * btusb chip itself. So we use a DMI list to match known broken platforms.
+ */
+static const struct dmi_system_id btusb_plat_needs_reset_resume_list[] = {
+ {
+ /* Lenovo yoga 920 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo YOGA 920"),
+ },
+ },
+ {}
+};
+
#define BTUSB_MAX_ISOC_FRAMES 10
#define BTUSB_INTR_RUNNING 0
@@ -2945,6 +2962,9 @@ static int btusb_probe(struct usb_interface *intf,
hdev->send = btusb_send_frame;
hdev->notify = btusb_notify;
+ if (dmi_check_system(btusb_plat_needs_reset_resume_list))
+ interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME;
+
#ifdef CONFIG_PM
err = btusb_config_oob_wake(hdev);
if (err)
@@ -3031,12 +3051,6 @@ static int btusb_probe(struct usb_interface *intf,
if (id->driver_info & BTUSB_QCA_ROME) {
data->setup_on_usb = btusb_setup_qca;
hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
-
- /* QCA Rome devices lose their updated firmware over suspend,
- * but the USB hub doesn't notice any status change.
- * explicitly request a device reset on resume.
- */
- interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME;
}
#ifdef CONFIG_BT_HCIBTUSB_RTL
--
2.14.3
If we fail to unbind the vma (due to a signal on an active buffer that
needs to be moved for the next execbuf), then we need to clear the
persistent tracking state we set up for this execbuf.
Fixes: c7c6e46f913b ("drm/i915: Convert execbuf to use struct-of-array packing for critical fields")
Testcase: igt/gem_fenced_exec_thrash/no-spare-fences-busy*
Signed-off-by: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin(a)intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen(a)linux.intel.com>
Cc: <stable(a)vger.kernel.org> # v4.14+
---
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 51f3c32c64bf..4eb28e84fda4 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -505,6 +505,8 @@ eb_add_vma(struct i915_execbuffer *eb, unsigned int i, struct i915_vma *vma)
list_add_tail(&vma->exec_link, &eb->unbound);
if (drm_mm_node_allocated(&vma->node))
err = i915_vma_unbind(vma);
+ if (unlikely(err))
+ vma->exec_flags = NULL;
}
return err;
}
--
2.16.1
This is a note to let you know that I've just added the patch titled
drm/i915/kbl: Change a KBL pci id to GT2 from GT1.5
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
drm-i915-kbl-change-a-kbl-pci-id-to-gt2-from-gt1.5.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 41693fd5237397d3c61b311af0fda1f6f39297c2 Mon Sep 17 00:00:00 2001
From: Anuj Phogat <anuj.phogat(a)gmail.com>
Date: Wed, 20 Sep 2017 13:31:26 -0700
Subject: drm/i915/kbl: Change a KBL pci id to GT2 from GT1.5
From: Anuj Phogat <anuj.phogat(a)gmail.com>
commit 41693fd5237397d3c61b311af0fda1f6f39297c2 upstream.
See Mesa commit 9c588ff
Cc: Matt Turner <mattst88(a)gmail.com>
Cc: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Signed-off-by: Anuj Phogat <anuj.phogat(a)gmail.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi(a)intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170920203126.1323-1-anuj.ph…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/drm/i915_pciids.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/include/drm/i915_pciids.h
+++ b/include/drm/i915_pciids.h
@@ -339,7 +339,6 @@
#define INTEL_KBL_GT1_IDS(info) \
INTEL_VGA_DEVICE(0x5913, info), /* ULT GT1.5 */ \
INTEL_VGA_DEVICE(0x5915, info), /* ULX GT1.5 */ \
- INTEL_VGA_DEVICE(0x5917, info), /* DT GT1.5 */ \
INTEL_VGA_DEVICE(0x5906, info), /* ULT GT1 */ \
INTEL_VGA_DEVICE(0x590E, info), /* ULX GT1 */ \
INTEL_VGA_DEVICE(0x5902, info), /* DT GT1 */ \
@@ -349,6 +348,7 @@
#define INTEL_KBL_GT2_IDS(info) \
INTEL_VGA_DEVICE(0x5916, info), /* ULT GT2 */ \
+ INTEL_VGA_DEVICE(0x5917, info), /* Mobile GT2 */ \
INTEL_VGA_DEVICE(0x5921, info), /* ULT GT2F */ \
INTEL_VGA_DEVICE(0x591E, info), /* ULX GT2 */ \
INTEL_VGA_DEVICE(0x5912, info), /* DT GT2 */ \
Patches currently in stable-queue which might be from anuj.phogat(a)gmail.com are
queue-4.14/drm-i915-kbl-change-a-kbl-pci-id-to-gt2-from-gt1.5.patch
This is a note to let you know that I've just added the patch titled
mm, memory_hotplug: fix memmap initialization
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
mm-memory_hotplug-fix-memmap-initialization.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 9bb5a391f9a5707e04763cf14298fc4cc29bfecd Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko(a)suse.com>
Date: Wed, 31 Jan 2018 16:21:14 -0800
Subject: mm, memory_hotplug: fix memmap initialization
From: Michal Hocko <mhocko(a)suse.com>
commit 9bb5a391f9a5707e04763cf14298fc4cc29bfecd upstream.
Bharata has noticed that onlining a newly added memory doesn't increase
the total memory, pointing to commit f7f99100d8d9 ("mm: stop zeroing
memory during allocation in vmemmap") as a culprit. This commit has
changed the way the memory for memmaps is initialized and moves it
from the allocation time to the initialization time. This works
properly for the early memmap init path.
It doesn't work for the memory hotplug though because we need to mark
page as reserved when the sparsemem section is created and later
initialize it completely during onlining. memmap_init_zone is called in
the early stage of onlining. With the current code it calls
__init_single_page and as such it clears the whole struct page and
therefore online_pages_range skips those pages.
Fix this by skipping mm_zero_struct_page in __init_single_page for
the memory hotplug path. This is quite ugly, but unifying both early init
and memory hotplug init paths is a large project. Make sure we plug the
regression at least.
Link: http://lkml.kernel.org/r/20180130101141.GW21609@dhcp22.suse.cz
Fixes: f7f99100d8d9 ("mm: stop zeroing memory during allocation in vmemmap")
Signed-off-by: Michal Hocko <mhocko(a)suse.com>
Reported-by: Bharata B Rao <bharata(a)linux.vnet.ibm.com>
Tested-by: Bharata B Rao <bharata(a)linux.vnet.ibm.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin(a)oracle.com>
Cc: Steven Sistare <steven.sistare(a)oracle.com>
Cc: Daniel Jordan <daniel.m.jordan(a)oracle.com>
Cc: Bob Picco <bob.picco(a)oracle.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
mm/page_alloc.c | 22 ++++++++++++++--------
1 file changed, 14 insertions(+), 8 deletions(-)
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1177,9 +1177,10 @@ static void free_one_page(struct zone *z
}
static void __meminit __init_single_page(struct page *page, unsigned long pfn,
- unsigned long zone, int nid)
+ unsigned long zone, int nid, bool zero)
{
- mm_zero_struct_page(page);
+ if (zero)
+ mm_zero_struct_page(page);
set_page_links(page, zone, nid, pfn);
init_page_count(page);
page_mapcount_reset(page);
@@ -1194,9 +1195,9 @@ static void __meminit __init_single_page
}
static void __meminit __init_single_pfn(unsigned long pfn, unsigned long zone,
- int nid)
+ int nid, bool zero)
{
- return __init_single_page(pfn_to_page(pfn), pfn, zone, nid);
+ return __init_single_page(pfn_to_page(pfn), pfn, zone, nid, zero);
}
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@@ -1217,7 +1218,7 @@ static void __meminit init_reserved_page
if (pfn >= zone->zone_start_pfn && pfn < zone_end_pfn(zone))
break;
}
- __init_single_pfn(pfn, zid, nid);
+ __init_single_pfn(pfn, zid, nid, true);
}
#else
static inline void init_reserved_page(unsigned long pfn)
@@ -1514,7 +1515,7 @@ static unsigned long __init deferred_ini
page++;
else
page = pfn_to_page(pfn);
- __init_single_page(page, pfn, zid, nid);
+ __init_single_page(page, pfn, zid, nid, true);
cond_resched();
}
}
@@ -5393,15 +5394,20 @@ not_early:
* can be created for invalid pages (for alignment)
* check here not to call set_pageblock_migratetype() against
* pfn out of zone.
+ *
+ * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+ * because this is done early in sparse_add_one_section
*/
if (!(pfn & (pageblock_nr_pages - 1))) {
struct page *page = pfn_to_page(pfn);
- __init_single_page(page, pfn, zone, nid);
+ __init_single_page(page, pfn, zone, nid,
+ context != MEMMAP_HOTPLUG);
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
cond_resched();
} else {
- __init_single_pfn(pfn, zone, nid);
+ __init_single_pfn(pfn, zone, nid,
+ context != MEMMAP_HOTPLUG);
}
}
}
Patches currently in stable-queue which might be from mhocko(a)suse.com are
queue-4.15/mm-memory_hotplug-fix-memmap-initialization.patch
The patch below does not apply to the 4.15-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 9bb5a391f9a5707e04763cf14298fc4cc29bfecd Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko(a)suse.com>
Date: Wed, 31 Jan 2018 16:21:14 -0800
Subject: [PATCH] mm, memory_hotplug: fix memmap initialization
Bharata has noticed that onlining a newly added memory doesn't increase
the total memory, pointing to commit f7f99100d8d9 ("mm: stop zeroing
memory during allocation in vmemmap") as a culprit. This commit has
changed the way the memory for memmaps is initialized and moves it
from the allocation time to the initialization time. This works
properly for the early memmap init path.
It doesn't work for the memory hotplug though because we need to mark
page as reserved when the sparsemem section is created and later
initialize it completely during onlining. memmap_init_zone is called in
the early stage of onlining. With the current code it calls
__init_single_page and as such it clears the whole struct page and
therefore online_pages_range skips those pages.
Fix this by skipping mm_zero_struct_page in __init_single_page for
the memory hotplug path. This is quite ugly, but unifying both early init
and memory hotplug init paths is a large project. Make sure we plug the
regression at least.
Link: http://lkml.kernel.org/r/20180130101141.GW21609@dhcp22.suse.cz
Fixes: f7f99100d8d9 ("mm: stop zeroing memory during allocation in vmemmap")
Signed-off-by: Michal Hocko <mhocko(a)suse.com>
Reported-by: Bharata B Rao <bharata(a)linux.vnet.ibm.com>
Tested-by: Bharata B Rao <bharata(a)linux.vnet.ibm.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin(a)oracle.com>
Cc: Steven Sistare <steven.sistare(a)oracle.com>
Cc: Daniel Jordan <daniel.m.jordan(a)oracle.com>
Cc: Bob Picco <bob.picco(a)oracle.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a6972750e7c5..c7dd9c86e353 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1177,9 +1177,10 @@ static void free_one_page(struct zone *zone,
}
static void __meminit __init_single_page(struct page *page, unsigned long pfn,
- unsigned long zone, int nid)
+ unsigned long zone, int nid, bool zero)
{
- mm_zero_struct_page(page);
+ if (zero)
+ mm_zero_struct_page(page);
set_page_links(page, zone, nid, pfn);
init_page_count(page);
page_mapcount_reset(page);
@@ -1194,9 +1195,9 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
}
static void __meminit __init_single_pfn(unsigned long pfn, unsigned long zone,
- int nid)
+ int nid, bool zero)
{
- return __init_single_page(pfn_to_page(pfn), pfn, zone, nid);
+ return __init_single_page(pfn_to_page(pfn), pfn, zone, nid, zero);
}
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@@ -1217,7 +1218,7 @@ static void __meminit init_reserved_page(unsigned long pfn)
if (pfn >= zone->zone_start_pfn && pfn < zone_end_pfn(zone))
break;
}
- __init_single_pfn(pfn, zid, nid);
+ __init_single_pfn(pfn, zid, nid, true);
}
#else
static inline void init_reserved_page(unsigned long pfn)
@@ -1534,7 +1535,7 @@ static unsigned long __init deferred_init_pages(int nid, int zid,
} else {
page++;
}
- __init_single_page(page, pfn, zid, nid);
+ __init_single_page(page, pfn, zid, nid, true);
nr_pages++;
}
return (nr_pages);
@@ -5399,15 +5400,20 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
* can be created for invalid pages (for alignment)
* check here not to call set_pageblock_migratetype() against
* pfn out of zone.
+ *
+ * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+ * because this is done early in sparse_add_one_section
*/
if (!(pfn & (pageblock_nr_pages - 1))) {
struct page *page = pfn_to_page(pfn);
- __init_single_page(page, pfn, zone, nid);
+ __init_single_page(page, pfn, zone, nid,
+ context != MEMMAP_HOTPLUG);
set_pageblock_migratetype(page, MIGRATE_MOVABLE);
cond_resched();
} else {
- __init_single_pfn(pfn, zone, nid);
+ __init_single_pfn(pfn, zone, nid,
+ context != MEMMAP_HOTPLUG);
}
}
}
Our field definitions for CTR_EL0 suffer from a number of problems:
- The IDC and DIC fields are missing, which causes us to enable CTR
trapping on CPUs with either of these returning non-zero values.
- The ERG is FTR_LOWER_SAFE, whereas it should be treated like CWG as
FTR_HIGHER_SAFE so that applications can use it to avoid false sharing.
- [nit] A RES1 field is described as "RAO"
This patch updates the CTR_EL0 field definitions to fix these issues.
Cc: <stable(a)vger.kernel.org>
Cc: Shanker Donthineni <shankerd(a)codeaurora.org>
Signed-off-by: Will Deacon <will.deacon(a)arm.com>
---
arch/arm64/kernel/cpufeature.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 29b1f873e337..2985a067fc13 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -199,9 +199,11 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
};
static const struct arm64_ftr_bits ftr_ctr[] = {
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RAO */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 29, 1, 1), /* DIC */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 28, 1, 1), /* IDC */
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, 24, 4, 0), /* CWG */
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0), /* ERG */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, 20, 4, 0), /* ERG */
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 1), /* DminLine */
/*
* Linux can handle differing I-cache policies. Userspace JITs will
--
2.1.4
This is the revised backport of the upstream commit
b3defb791b26ea0683a93a4f49c77ec45ec96f10
We had another backport (e.g. 623e5c8ae32b in 4.4.115), but it applies
the new mutex also to the code paths that are invoked via faked
kernel-to-kernel ioctls. As reported recently, this leads to a
deadlock at suspend (or other scenarios triggering the kernel
sequencer client).
This patch addresses the issue by taking the mutex only in the code
paths invoked by user-space, just like the original fix patch does.
Reported-and-tested-by: Andres Bertens <abertensu(a)yahoo.com>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
---
Tagged as 4.4.x, but should be applied to other older kernels, too.
sound/core/seq/seq_clientmgr.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
index 7bb9fe7a2c8e..dacc62fe5a58 100644
--- a/sound/core/seq/seq_clientmgr.c
+++ b/sound/core/seq/seq_clientmgr.c
@@ -2196,7 +2196,6 @@ static int snd_seq_do_ioctl(struct snd_seq_client *client, unsigned int cmd,
void __user *arg)
{
struct seq_ioctl_table *p;
- int ret;
switch (cmd) {
case SNDRV_SEQ_IOCTL_PVERSION:
@@ -2210,12 +2209,8 @@ static int snd_seq_do_ioctl(struct snd_seq_client *client, unsigned int cmd,
if (! arg)
return -EFAULT;
for (p = ioctl_tables; p->cmd; p++) {
- if (p->cmd == cmd) {
- mutex_lock(&client->ioctl_mutex);
- ret = p->func(client, arg);
- mutex_unlock(&client->ioctl_mutex);
- return ret;
- }
+ if (p->cmd == cmd)
+ return p->func(client, arg);
}
pr_debug("ALSA: seq unknown ioctl() 0x%x (type='%c', number=0x%02x)\n",
cmd, _IOC_TYPE(cmd), _IOC_NR(cmd));
@@ -2226,11 +2221,15 @@ static int snd_seq_do_ioctl(struct snd_seq_client *client, unsigned int cmd,
static long snd_seq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
struct snd_seq_client *client = file->private_data;
+ long ret;
if (snd_BUG_ON(!client))
return -ENXIO;
- return snd_seq_do_ioctl(client, cmd, (void __user *) arg);
+ mutex_lock(&client->ioctl_mutex);
+ ret = snd_seq_do_ioctl(client, cmd, (void __user *) arg);
+ mutex_unlock(&client->ioctl_mutex);
+ return ret;
}
#ifdef CONFIG_COMPAT
--
2.16.1
This is a note to let you know that I've just added the patch titled
s390: fix handling of -1 in set{,fs}[gu]id16 syscalls
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
s390-fix-handling-of-1-in-set-fs-id16-syscalls.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 6dd0d2d22aa363fec075cb2577ba273ac8462e94 Mon Sep 17 00:00:00 2001
From: Eugene Syromiatnikov <esyr(a)redhat.com>
Date: Mon, 15 Jan 2018 20:38:17 +0100
Subject: s390: fix handling of -1 in set{,fs}[gu]id16 syscalls
From: Eugene Syromiatnikov <esyr(a)redhat.com>
commit 6dd0d2d22aa363fec075cb2577ba273ac8462e94 upstream.
For some reason, the implementation of some 16-bit ID system calls
(namely, setuid16/setgid16 and setfsuid16/setfsgid16) used plain type
casts instead of the low2highgid/low2highuid macros for converting
[GU]IDs, which led to incorrect handling of the value -1 (which ought to be considered
invalid).
Discovered by strace test suite.
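For reference, the conversion those macros perform is roughly the
following (a sketch along the lines of include/linux/highuid.h; the
kernel's definitions use the old_uid_t/old_gid_t types):

    #define low2highuid(uid) ((u16)(uid) == (u16)-1 ? (uid_t)-1 : (uid_t)(uid))
    #define low2highgid(gid) ((u16)(gid) == (u16)-1 ? (gid_t)-1 : (gid_t)(gid))

A plain (uid_t)uid cast turns the 16-bit -1 (0xffff) into the valid ID
65535, while the macro maps it to (uid_t)-1 so the kernel can reject it
as invalid.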
Cc: stable(a)vger.kernel.org
Signed-off-by: Eugene Syromiatnikov <esyr(a)redhat.com>
Signed-off-by: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky(a)de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/s390/kernel/compat_linux.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/s390/kernel/compat_linux.c
+++ b/arch/s390/kernel/compat_linux.c
@@ -110,7 +110,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setregid16,
COMPAT_SYSCALL_DEFINE1(s390_setgid16, u16, gid)
{
- return sys_setgid((gid_t)gid);
+ return sys_setgid(low2highgid(gid));
}
COMPAT_SYSCALL_DEFINE2(s390_setreuid16, u16, ruid, u16, euid)
@@ -120,7 +120,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setreuid16,
COMPAT_SYSCALL_DEFINE1(s390_setuid16, u16, uid)
{
- return sys_setuid((uid_t)uid);
+ return sys_setuid(low2highuid(uid));
}
COMPAT_SYSCALL_DEFINE3(s390_setresuid16, u16, ruid, u16, euid, u16, suid)
@@ -173,12 +173,12 @@ COMPAT_SYSCALL_DEFINE3(s390_getresgid16,
COMPAT_SYSCALL_DEFINE1(s390_setfsuid16, u16, uid)
{
- return sys_setfsuid((uid_t)uid);
+ return sys_setfsuid(low2highuid(uid));
}
COMPAT_SYSCALL_DEFINE1(s390_setfsgid16, u16, gid)
{
- return sys_setfsgid((gid_t)gid);
+ return sys_setfsgid(low2highgid(gid));
}
static int groups16_to_user(u16 __user *grouplist, struct group_info *group_info)
Patches currently in stable-queue which might be from esyr(a)redhat.com are
queue-4.9/s390-fix-handling-of-1-in-set-fs-id16-syscalls.patch
This is a note to let you know that I've just added the patch titled
PM / devfreq: Propagate error from devfreq_add_device()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
pm-devfreq-propagate-error-from-devfreq_add_device.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From d1bf2d30728f310f72296b54f0651ecdb09cbb12 Mon Sep 17 00:00:00 2001
From: Bjorn Andersson <bjorn.andersson(a)linaro.org>
Date: Sun, 5 Nov 2017 21:27:41 -0800
Subject: PM / devfreq: Propagate error from devfreq_add_device()
From: Bjorn Andersson <bjorn.andersson(a)linaro.org>
commit d1bf2d30728f310f72296b54f0651ecdb09cbb12 upstream.
Propagate the error of devfreq_add_device() in devm_devfreq_add_device()
rather than statically returning ENOMEM. This makes it slightly faster
to pinpoint the cause of a returned error.
Fixes: 8cd84092d35e ("PM / devfreq: Add resource-managed function for devfreq device")
Cc: stable(a)vger.kernel.org
Acked-by: Chanwoo Choi <cw00.choi(a)samsung.com>
Signed-off-by: Bjorn Andersson <bjorn.andersson(a)linaro.org>
Signed-off-by: MyungJoo Ham <myungjoo.ham(a)samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/devfreq/devfreq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -684,7 +684,7 @@ struct devfreq *devm_devfreq_add_device(
devfreq = devfreq_add_device(dev, profile, governor_name, data);
if (IS_ERR(devfreq)) {
devres_free(ptr);
- return ERR_PTR(-ENOMEM);
+ return devfreq;
}
*ptr = devfreq;
Patches currently in stable-queue which might be from bjorn.andersson(a)linaro.org are
queue-4.9/pm-devfreq-propagate-error-from-devfreq_add_device.patch
queue-4.9/arm64-dts-msm8916-correct-ipc-references-for-smsm.patch