6.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Han Gao <rabenda.cn(a)gmail.com>
[ Upstream commit 69a8b62a7aa1e54ff7623064f6507fa29c1d0d4e ]
Similar to the ARM64 commit 3505f30fb6a9 ("ARM64 / ACPI: If we chose
to boot from acpi then disable FDT"), let's not do DT hardware probing
if ACPI is enabled in early boot. This avoids errors caused by
repeated driver probing.
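For readers skimming the hunk below, the resulting fragment of
setup_arch() reads roughly as follows (a sketch derived from the hunk;
surrounding code and blank context lines elided):

	/* Parse the ACPI tables for possible boot-time configuration */
	acpi_boot_table_init();

	/* Skip DT unflattening (and hence DT probing) when booting via ACPI */
	if (acpi_disabled) {
#if IS_ENABLED(CONFIG_BUILTIN_DTB)
		unflatten_and_copy_device_tree();
#else
		unflatten_device_tree();
#endif
	}

	misc_mem_init();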
Signed-off-by: Han Gao <rabenda.cn(a)gmail.com>
Link: https://lore.kernel.org/r/20250910112401.552987-1-rabenda.cn@gmail.com
[pjw(a)kernel.org: cleaned up patch description and subject]
Signed-off-by: Paul Walmsley <pjw(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
arch/riscv/kernel/setup.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index f90cce7a3acea..d7ee62837aa4f 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -330,11 +330,14 @@ void __init setup_arch(char **cmdline_p)
/* Parse the ACPI tables for possible boot-time configuration */
acpi_boot_table_init();
+ if (acpi_disabled) {
#if IS_ENABLED(CONFIG_BUILTIN_DTB)
- unflatten_and_copy_device_tree();
+ unflatten_and_copy_device_tree();
#else
- unflatten_device_tree();
+ unflatten_device_tree();
#endif
+ }
+
misc_mem_init();
init_resources();
--
2.51.0
6.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Danil Skrebenkov <danil.skrebenkov(a)cloudbear.ru>
[ Upstream commit ae9e9f3d67dcef7582a4524047b01e33c5185ddb ]
OpenSBI v1.7 adds hart checks for IPI operations. In particular, it
compares the hmask passed as an argument by Linux against its own mask
of online harts (on the OpenSBI side). If the two masks do not match,
an error is returned.

When a CPU goes offline, cpu_online_mask is explicitly cleared in
__cpu_disable(). However, mm_cpumask is not explicitly cleared.
mm_cpumask is used for rfence operations, which call the OpenSBI
RFENCE extension, which in turn sends IPIs to remote harts. If a hart
is offline, an error may occur because the mask supplied by Linux does
not match OpenSBI's mask of online harts.

Explicitly clear mm_cpumask for the offline hart.
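With the change below applied, the hotplug teardown path reads roughly
as follows (a sketch of the relevant fragment of
arch_cpuhp_cleanup_dead_cpu(); surrounding code elided):

	pr_notice("CPU%u: off\n", cpu);

	/*
	 * Drop the dead hart from all mm_cpumask()s so that later rfence
	 * operations do not ask the SBI to send an IPI to an offline hart.
	 */
	clear_tasks_mm_cpumask(cpu);

	/* Verify from the firmware if the cpu is really stopped */
	if (cpu_ops->cpu_is_stopped)
		ret = cpu_ops->cpu_is_stopped(cpu);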
Signed-off-by: Danil Skrebenkov <danil.skrebenkov(a)cloudbear.ru>
Reviewed-by: Andrew Jones <ajones(a)ventanamicro.com>
Link: https://lore.kernel.org/r/20250919132849.31676-1-danil.skrebenkov@cloudbear…
[pjw(a)kernel.org: rewrote subject line for clarity]
Signed-off-by: Paul Walmsley <pjw(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
arch/riscv/kernel/cpu-hotplug.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/riscv/kernel/cpu-hotplug.c b/arch/riscv/kernel/cpu-hotplug.c
index a1e38ecfc8be2..3f50d3dd76c6f 100644
--- a/arch/riscv/kernel/cpu-hotplug.c
+++ b/arch/riscv/kernel/cpu-hotplug.c
@@ -54,6 +54,7 @@ void arch_cpuhp_cleanup_dead_cpu(unsigned int cpu)
pr_notice("CPU%u: off\n", cpu);
+ clear_tasks_mm_cpumask(cpu);
/* Verify from the firmware if the cpu is really stopped*/
if (cpu_ops->cpu_is_stopped)
ret = cpu_ops->cpu_is_stopped(cpu);
--
2.51.0
6.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra <peterz(a)infradead.org>
[ Upstream commit 9818af18db4bfefd320d0fef41390a616365e6f7 ]
Per Nathan, clang catches unused "static inline" functions in C files
since commit 6863f5643dd7 ("kbuild: allow Clang to find unused static
inline functions for W=1 build").
Linus said:
> So I entirely ignore W=1 issues, because I think so many of the extra
> warnings are bogus.
>
> But if this one in particular is causing more problems than most -
> some teams do seem to use W=1 as part of their test builds - it's fine
> to send me a patch that just moves bad warnings to W=2.
>
> And if anybody uses W=2 for their test builds, that's THEIR problem..
Here is the change to bump the warning from W=1 to W=2.
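As a concrete illustration (hypothetical code, not part of this patch),
a helper like the one below in a .c file, with no callers, is what
clang's -Wunused-function can flag once the warning is enabled; after
this change that only happens for W=2 builds instead of W=1:

	/* some_file.c -- unused_helper() is a hypothetical example */
	static inline int unused_helper(int x)
	{
		return x * 2;
	}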
Fixes: 6863f5643dd7 ("kbuild: allow Clang to find unused static inline functions for W=1 build")
Signed-off-by: Peter Zijlstra <peterz(a)infradead.org>
Signed-off-by: Andy Shevchenko <andriy.shevchenko(a)linux.intel.com>
Link: https://patch.msgid.link/20251106105000.2103276-1-andriy.shevchenko@linux.i…
[nathan: Adjust comment as well]
Signed-off-by: Nathan Chancellor <nathan(a)kernel.org>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
include/linux/compiler_types.h | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 16755431fc11e..e768d2c693662 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -250,10 +250,9 @@ struct ftrace_likely_data {
/*
* GCC does not warn about unused static inline functions for -Wunused-function.
* Suppress the warning in clang as well by using __maybe_unused, but enable it
- * for W=1 build. This will allow clang to find unused functions. Remove the
- * __inline_maybe_unused entirely after fixing most of -Wunused-function warnings.
+ * for W=2 build. This will allow clang to find unused functions.
*/
-#ifdef KBUILD_EXTRA_WARN1
+#ifdef KBUILD_EXTRA_WARN2
#define __inline_maybe_unused
#else
#define __inline_maybe_unused __maybe_unused
--
2.51.0
6.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Timur Kristóf <timur.kristof(a)gmail.com>
[ Upstream commit 7bdd91abf0cb3ea78160e2e78fb58b12f6a38d55 ]
Enabling ASPM causes random hangs on Tahiti and Oland on Zen4.
It's unclear if this is a platform-specific or GPU-specific issue.
Disable ASPM on SI for the time being.
Reviewed-by: Alex Deucher <alexander.deucher(a)amd.com>
Signed-off-by: Timur Kristóf <timur.kristof(a)gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index ddd0e7ab82be7..fdaf482c0c8a7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1880,6 +1880,13 @@ static bool amdgpu_device_pcie_dynamic_switching_supported(struct amdgpu_device
static bool amdgpu_device_aspm_support_quirk(struct amdgpu_device *adev)
{
+ /* Enabling ASPM causes randoms hangs on Tahiti and Oland on Zen4.
+ * It's unclear if this is a platform-specific or GPU-specific issue.
+ * Disable ASPM on SI for the time being.
+ */
+ if (adev->family == AMDGPU_FAMILY_SI)
+ return true;
+
#if IS_ENABLED(CONFIG_X86)
struct cpuinfo_x86 *c = &cpu_data(0);
--
2.51.0
6.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra <peterz(a)infradead.org>
[ Upstream commit 4cb5ac2626b5704ed712ac1d46b9d89fdfc12c5d ]
Shrikanth noted that the per-cpu reference counter was still some 10%
slower than the old immutable option (which removes the reference
counting entirely).
Further optimize the per-cpu reference counter by:
- switching from RCU to preempt;
- using __this_cpu_*() since we now have preempt disabled;
- switching from smp_load_acquire() to READ_ONCE().
This is all safe because disabling preemption inhibits the RCU grace
period exactly like rcu_read_lock().
Having preemption disabled allows using __this_cpu_*() provided the
only access to the variable is in task context -- which is the case
here.
Furthermore, since we know changing fph->state to FR_ATOMIC demands a
full RCU grace period, we can rely on the implied smp_mb() from that to
replace the acquire barrier.
This is very similar to the percpu_down_read_internal() fast-path.
The reason this is significant for PowerPC is that it uses the generic
this_cpu_*() implementation, which relies on local_irq_disable() (the
x86 implementation relies on it being a single memop instruction to be
IRQ-safe). Switching to preempt_disable() and __this_cpu_*() avoids
this IRQ state swizzling. Also, PowerPC needs LWSYNC for the ACQUIRE
barrier, so not having to use explicit barriers saves a bunch.

Combined, this reduces the performance gap by half, down to some 5%.
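Concretely, the get-side fast path after this change reads roughly as
follows (a sketch derived from the hunk below; the FR_ATOMIC slow path
is unchanged and elided, and futex_ref_put() mirrors this with
__this_cpu_dec()):

	static bool futex_ref_get(struct futex_private_hash *fph)
	{
		struct mm_struct *mm = fph->mm;

		guard(preempt)();

		if (READ_ONCE(fph->state) == FR_PERCPU) {
			__this_cpu_inc(*mm->futex_ref);
			return true;
		}

		/* FR_ATOMIC: fall back to the atomic refcount (elided) */
	}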
Fixes: 760e6f7befba ("futex: Remove support for IMMUTABLE")
Reported-by: Shrikanth Hegde <sshegde(a)linux.ibm.com>
Tested-by: Shrikanth Hegde <sshegde(a)linux.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy(a)linutronix.de>
Link: https://patch.msgid.link/20251106092929.GR4067720@noisy.programming.kicks-a…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
kernel/futex/core.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 125804fbb5cb1..2e77a6e5c8657 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -1680,10 +1680,10 @@ static bool futex_ref_get(struct futex_private_hash *fph)
{
struct mm_struct *mm = fph->mm;
- guard(rcu)();
+ guard(preempt)();
- if (smp_load_acquire(&fph->state) == FR_PERCPU) {
- this_cpu_inc(*mm->futex_ref);
+ if (READ_ONCE(fph->state) == FR_PERCPU) {
+ __this_cpu_inc(*mm->futex_ref);
return true;
}
@@ -1694,10 +1694,10 @@ static bool futex_ref_put(struct futex_private_hash *fph)
{
struct mm_struct *mm = fph->mm;
- guard(rcu)();
+ guard(preempt)();
- if (smp_load_acquire(&fph->state) == FR_PERCPU) {
- this_cpu_dec(*mm->futex_ref);
+ if (READ_ONCE(fph->state) == FR_PERCPU) {
+ __this_cpu_dec(*mm->futex_ref);
return false;
}
--
2.51.0