Hi Greg,

Please add the attached patch to the stable kernels 5.4 and below. It is
a backport of upstream commit 4ba50e7c423c29639878c005732, which has
already gone into 5.10 and 5.12.

Juergen
From: Michael Weiser <michael.weiser@gmx.de>
commit 1962682d2b2fbe6cfa995a85c53c069fadda473e upstream.
Stop printing a (ratelimited) kernel message for each instance of an
unimplemented syscall being called. Userland making an unimplemented
syscall is not necessarily misbehaviour and is to be expected with a
current userland running on an older kernel. Also, the current message
looks scary to users but does not actually indicate a real problem, nor
does it help them narrow down the cause. Just rely on sys_ni_syscall()
to return -ENOSYS.
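As a hedged illustration of the expected userland pattern (not part of
the patch; the syscall number is an assumption), a program probes for a
syscall and falls back when the running kernel is too old:

	#define _GNU_SOURCE
	#include <errno.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/*
		 * 9999 is assumed to be an unallocated syscall number; on
		 * arm64 it lands in do_ni_syscall(), which after this patch
		 * silently returns -ENOSYS instead of also logging a
		 * ratelimited kernel message.
		 */
		long ret = syscall(9999);

		if (ret == -1 && errno == ENOSYS)
			printf("syscall unimplemented, using fallback\n");
		return 0;
	}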
Cc: <stable@vger.kernel.org>
Cc: Martin Vajnar <martin.vajnar@gmail.com>
Cc: musl@lists.openwall.com
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Michael Weiser <michael.weiser@gmx.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
This was backported to v4.14 and later, but is missing in v4.4 and
before, apparently because of a trivial merge conflict. This is
a manual backport I did after I saw a report about the issue
by Martin Vajnar on the musl mailing list.
---
arch/arm64/kernel/traps.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 02710f99c137..a8c0fd0574fa 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -381,14 +381,6 @@ asmlinkage long do_ni_syscall(struct pt_regs *regs)
}
#endif
- if (show_unhandled_signals_ratelimited()) {
- pr_info("%s[%d]: syscall %d\n", current->comm,
- task_pid_nr(current), (int)regs->syscallno);
- dump_instr("", regs);
- if (user_mode(regs))
- __show_regs(regs);
- }
-
return sys_ni_syscall();
}
--
2.29.2
From: Cheng Jian <cj.chengjian@huawei.com>
commit 60588bfa223ff675b95f866249f90616613fbe31 upstream.
select_idle_cpu() will scan the LLC domain for idle CPUs, which is
always expensive. So commit:

  1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")

introduced a way to limit how many CPUs we scan.

But some of those 'nr' attempts are consumed by CPUs that are not
allowed for the task, wasting the scan: the function can return
nr_cpumask_bits without ever finding a CPU on which our task is allowed
to run.

The cpumask may be too big to put on the stack, so, similar to
select_idle_core(), use the per-CPU 'select_idle_mask' to prevent stack
overflow.
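As a rough sketch of the resulting pattern (illustrative only, not the
exact upstream code; the helper name is invented and the per-CPU masks
are assumed to be allocated at boot, as the scheduler does), the scan
narrows the candidate set into the per-CPU scratch mask up front, so no
cpumask lands on the stack and disallowed CPUs never consume an attempt:

	static DEFINE_PER_CPU(cpumask_var_t, select_idle_mask);

	static int scan_allowed_idle(struct task_struct *p,
				     struct sched_domain *sd,
				     int target, int nr)
	{
		struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
		int cpu;

		/* Drop disallowed CPUs once, before the scan. */
		cpumask_and(cpus, sched_domain_span(sd), &p->cpus_allowed);

		for_each_cpu_wrap(cpu, cpus, target) {
			if (!--nr)
				return -1;	/* scan budget exhausted */
			if (idle_cpu(cpu))
				return cpu;	/* idle and allowed */
		}
		return -1;
	}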
Fixes: 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")
Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20191213024530.28052-1-cj.chengjian@huawei.com
Signed-off-by: Yang Wei <yang.wei@linux.alibaba.com>
Tested-by: Yang Wei <yang.wei@linux.alibaba.com>
---
kernel/sched/fair.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 81096dd..37ac76d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5779,6 +5779,7 @@ static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd
*/
static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
{
+ struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
struct sched_domain *this_sd;
u64 avg_cost, avg_idle;
u64 time, cost;
@@ -5809,11 +5810,11 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
time = local_clock();
- for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
+ cpumask_and(cpus, sched_domain_span(sd), &p->cpus_allowed);
+
+ for_each_cpu_wrap(cpu, cpus, target) {
if (!--nr)
return -1;
- if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
- continue;
if (idle_cpu(cpu))
break;
}
--
1.8.3.1
Hi there,
I'm running Debian Buster on an old Asus EeePC, and the battery always
shows 100% when unplugged, with an estimated battery life of 4200
hours, no matter how long I've been running without AC power.
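For reference, the reported value can be read directly from sysfs (a
minimal sketch; BAT0 is an assumption, the EeePC may expose a different
supply name):

	#include <stdio.h>

	int main(void)
	{
		char buf[64];
		/* "capacity" is the percentage the battery applet shows. */
		FILE *f = fopen("/sys/class/power_supply/BAT0/capacity", "r");

		if (f && fgets(buf, sizeof(buf), f))
			printf("capacity: %s", buf);
		if (f)
			fclose(f);
		return 0;
	}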
I suspect this is bug #201351 [1], marked as a duplicate of #199981 [2]
and fixed in 5.0-rc1 [3]. Would you please consider backporting the fix
to the 4.19 LTS kernel?
Salvatore Bonaccorso of the Debian Kernel Team wrote to me on
debian-kernel that they follow upstream 4.19.y, so the best chance of
getting the fix into Debian is for you to include the patch.
Many thanks,
Laurențiu
[1] https://bugzilla.kernel.org/show_bug.cgi?id=201351
[2] https://bugzilla.kernel.org/show_bug.cgi?id=199981
[3] https://patchwork.kernel.org/project/linux-acpi/patch/4426745.BlFkQnxG1M@as…
commit 89b158635ad79574bde8e94d45dad33f8cf09549 upstream.
The LZ4 final literal copy can overlap when doing in-place
decompression, so it's unsafe to use an optimized memcpy() there;
memmove() must be used instead. Upstream LZ4 fixed this years ago [1]
(the impact is negligible [2], since only a few bytes remain); this
commit just synchronizes the upstream LZ4 code to the kernel side as
well.

This can be observed as an EROFS in-place decompression failure on
specific files when X86_FEATURE_ERMS is unsupported, since the memcpy()
optimization of commit 59daa706fbec ("x86, mem: Optimize memcpy by
avoiding memory false dependece") is enabled in that case.

Most modern x86 CPUs support ERMS and just use the "rep movsb"
approach, so there is no problem at all. However, the failure can still
be verified by forcibly disabling the ERMS feature:
arch/x86/lib/memcpy_64.S:
ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
- "jmp memcpy_erms", X86_FEATURE_ERMS
+ "jmp memcpy_orig", X86_FEATURE_ERMS
We didn't observe anything strange on arm64/arm/x86 platforms before,
since most memcpy() implementations copy in increasing address order
("copy upwards" [3]), which happens to be the correct order for
in-place decompression. But it really needs to be switched to
memmove(), considering that an overlapping memcpy() is undefined
behavior according to the standard and some unique optimizations
already exist in the kernel.
[1] https://github.com/lz4/lz4/commit/33cb8518ac385835cc17be9a770b27b40cd0e15b
[2] https://github.com/lz4/lz4/pull/717#issuecomment-497818921
[3] https://sourceware.org/bugzilla/show_bug.cgi?id=12518
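As a minimal standalone illustration of the hazard (not kernel code;
the buffer and offsets are invented), an in-place literal copy whose
source and destination windows overlap is undefined behavior with
memcpy() but well-defined with memmove():

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char buf[16] = "abcdefgh";

		/*
		 * Copy 6 bytes forward by 2: regions [0,6) and [2,8)
		 * overlap. memcpy() here is undefined behavior and may
		 * fail depending on copy direction; memmove() is safe.
		 */
		memmove(buf + 2, buf, 6);
		printf("%s\n", buf);	/* prints "ababcdef" */
		return 0;
	}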
Link: https://lkml.kernel.org/r/20201122030749.2698994-1-hsiangkao@redhat.com
Reviewed-by: Nick Terrell <terrelln@fb.com>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Miao Xie <miaoxie@huawei.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Li Guifu <bluce.liguifu@huawei.com>
Cc: Guo Xuenan <guoxuenan@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
---
Hi,

Please kindly consider these backports to the 5.4.y and 5.10.y LTS
kernels; the reason is shown above (the issue can cause lz4 in-place
decompression failure (mainly for EROFS) due to the differently
designed overlapping behavior of memcpy() on x86 when ERMS is
unsupported). The lz4 upstream commit itself was merged two years ago,
and the Linux upstream commit has also been in mainline for months
without any regression.

In principle, it won't have any real impact otherwise, so I think it's
now safe to backport this to the LTS kernels for x86 CPUs without ERMS.

Thanks,
Gao Xiang
lib/lz4/lz4_decompress.c | 6 +++++-
lib/lz4/lz4defs.h | 2 ++
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/lib/lz4/lz4_decompress.c b/lib/lz4/lz4_decompress.c
index 0c9d3ad17e0f..4d0b59fa5550 100644
--- a/lib/lz4/lz4_decompress.c
+++ b/lib/lz4/lz4_decompress.c
@@ -260,7 +260,11 @@ static FORCE_INLINE int LZ4_decompress_generic(
}
}
- memcpy(op, ip, length);
+ /*
+ * supports overlapping memory regions; only matters
+ * for in-place decompression scenarios
+ */
+ LZ4_memmove(op, ip, length);
ip += length;
op += length;
diff --git a/lib/lz4/lz4defs.h b/lib/lz4/lz4defs.h
index 1a7fa9d9170f..369eb181d730 100644
--- a/lib/lz4/lz4defs.h
+++ b/lib/lz4/lz4defs.h
@@ -137,6 +137,8 @@ static FORCE_INLINE void LZ4_writeLE16(void *memPtr, U16 value)
return put_unaligned_le16(value, memPtr);
}
+#define LZ4_memmove(dst, src, size) __builtin_memmove(dst, src, size)
+
static FORCE_INLINE void LZ4_copy8(void *dst, const void *src)
{
#if LZ4_ARCH64
--
1.8.3.1