The get_user/put_user change didn't spend time in linux-next and
seems a bit too risky to rush. I'm keeping it in my tree
and we'll get it in the next cycle.
The following changes since commit ac3fd01e4c1efce8f2c054cdeb2ddd2fc0fb150d:
Linux 6.18-rc7 (2025-11-23 14:53:16 -0800)
are available in the Git repository at:
https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus
for you to fetch changes up to 205dd7a5d6ad6f4c8e8fcd3c3b95a7c0e7067fee:
virtio_pci: drop kernel.h (2025-11-30 18:02:43 -0500)
----------------------------------------------------------------
virtio,vhost: fixes, cleanups
Just a bunch of fixes and cleanups, mostly very simple. Several
features are merged through net-next this time around.
Signed-off-by: Michael S. Tsirkin <mst(a)redhat.com>
----------------------------------------------------------------
Alok Tiwari (3):
virtio_vdpa: fix misleading return in void function
vdpa/mlx5: Fix incorrect error code reporting in query_virtqueues
vdpa/pds: use %pe for ERR_PTR() in event handler registration
Kriish Sharma (1):
virtio: fix kernel-doc for mapping/free_coherent functions
Marco Crivellari (2):
virtio_balloon: add WQ_PERCPU to alloc_workqueue users
vduse: add WQ_PERCPU to alloc_workqueue users
Miaoqian Lin (1):
virtio: vdpa: Fix reference count leak in octep_sriov_enable()
Michael S. Tsirkin (11):
virtio: fix typo in virtio_device_ready() comment
virtio: fix whitespace in virtio_config_ops
virtio: fix grammar in virtio_queue_info docs
virtio: fix grammar in virtio_map_ops docs
virtio: standardize Returns documentation style
virtio: fix virtqueue_set_affinity() docs
virtio: fix map ops comment
virtio: clean up features qword/dword terms
vhost/test: add test specific macro for features
vhost: switch to arrays of feature bits
virtio_pci: drop kernel.h
Mike Christie (1):
vhost: Fix kthread worker cgroup failure handling
drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +-
drivers/vdpa/octeon_ep/octep_vdpa_main.c | 1 +
drivers/vdpa/pds/vdpa_dev.c | 2 +-
drivers/vdpa/vdpa_user/vduse_dev.c | 3 ++-
drivers/vhost/net.c | 29 +++++++++++-----------
drivers/vhost/scsi.c | 9 ++++---
drivers/vhost/test.c | 10 ++++++--
drivers/vhost/vhost.c | 4 ++-
drivers/vhost/vhost.h | 42 ++++++++++++++++++++++++++------
drivers/vhost/vsock.c | 10 +++++---
drivers/virtio/virtio.c | 12 ++++-----
drivers/virtio/virtio_balloon.c | 3 ++-
drivers/virtio/virtio_debug.c | 10 ++++----
drivers/virtio/virtio_pci_modern_dev.c | 6 ++---
drivers/virtio/virtio_ring.c | 7 +++---
drivers/virtio/virtio_vdpa.c | 2 +-
include/linux/virtio.h | 2 +-
include/linux/virtio_config.h | 24 +++++++++---------
include/linux/virtio_features.h | 29 +++++++++++-----------
include/linux/virtio_pci_modern.h | 8 +++---
include/uapi/linux/virtio_pci.h | 2 +-
21 files changed, 131 insertions(+), 86 deletions(-)
When a VM boots with one virtio-crypto PCI device and the builtin backend,
and an openssl benchmark is run with multiple processes, such as
openssl speed -evp aes-128-cbc -engine afalg -seconds 10 -multi 32
the openssl processes hang and an error like the following is reported:
virtio_crypto virtio0: dataq.0:id 3 is not a head!
It seems that the data virtqueue needs protection when it is handled
for the virtio done notification. With spinlock protection added in
virtcrypto_done_task(), the openssl benchmark with multiple processes
works well.
Fixes: fed93fb62e05 ("crypto: virtio - Handle dataq logic with tasklet")
Cc: stable(a)vger.kernel.org
Signed-off-by: Bibo Mao <maobibo(a)loongson.cn>
---
drivers/crypto/virtio/virtio_crypto_core.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/crypto/virtio/virtio_crypto_core.c b/drivers/crypto/virtio/virtio_crypto_core.c
index 3d241446099c..ccc6b5c1b24b 100644
--- a/drivers/crypto/virtio/virtio_crypto_core.c
+++ b/drivers/crypto/virtio/virtio_crypto_core.c
@@ -75,15 +75,20 @@ static void virtcrypto_done_task(unsigned long data)
struct data_queue *data_vq = (struct data_queue *)data;
struct virtqueue *vq = data_vq->vq;
struct virtio_crypto_request *vc_req;
+ unsigned long flags;
unsigned int len;
+ spin_lock_irqsave(&data_vq->lock, flags);
do {
virtqueue_disable_cb(vq);
while ((vc_req = virtqueue_get_buf(vq, &len)) != NULL) {
+ spin_unlock_irqrestore(&data_vq->lock, flags);
if (vc_req->alg_cb)
vc_req->alg_cb(vc_req, len);
+ spin_lock_irqsave(&data_vq->lock, flags);
}
} while (!virtqueue_enable_cb(vq));
+ spin_unlock_irqrestore(&data_vq->lock, flags);
}
static void virtcrypto_dataq_callback(struct virtqueue *vq)
--
2.39.3
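For context, the submission side of this driver already serializes
virtqueue access with the same per-queue lock, which is why the
previously unlocked completion tasklet could corrupt the ring state.
A hedged sketch of the submission path (simplified; exact names may
differ from the driver source):

```c
/* Process context: the request is queued under data_vq->lock, so the
 * completion tasklet must take the same lock, as the patch above now
 * makes it do. */
spin_lock_irqsave(&data_vq->lock, flags);
err = virtqueue_add_sgs(data_vq->vq, sgs, num_out, num_in,
			vc_req, GFP_ATOMIC);
virtqueue_kick(data_vq->vq);
spin_unlock_irqrestore(&data_vq->lock, flags);
```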
The "No resource for ep" warning below appears when a StartTransfer
command is issued for bulk or interrupt endpoints in
`dwc3_gadget_ep_enable` while a previous StartTransfer on the same
endpoint is still in progress. Gadget function drivers can invoke
`usb_ep_enable` (which triggers a new StartTransfer command) before the
earlier transfer has completed. Because the previous StartTransfer is
still active, `dwc3_gadget_ep_disable` can skip the required
`EndTransfer` due to `DWC3_EP_DELAY_STOP`, leaving the endpoint
resources busy with the previous StartTransfer and producing the "No
resource for ep" warning from the dwc3 driver.
To resolve this, add a check in `dwc3_gadget_ep_enable` that tests the
`DWC3_EP_TRANSFER_STARTED` flag before issuing a new StartTransfer.
Preventing a second StartTransfer on an already busy endpoint
eliminates the resource conflict, removes the warning, and avoids
potential kernel panics caused by `panic_on_warn`.
------------[ cut here ]------------
dwc3 13200000.dwc3: No resource for ep1out
WARNING: CPU: 0 PID: 700 at drivers/usb/dwc3/gadget.c:398 dwc3_send_gadget_ep_cmd+0x2f8/0x76c
Call trace:
dwc3_send_gadget_ep_cmd+0x2f8/0x76c
__dwc3_gadget_ep_enable+0x490/0x7c0
dwc3_gadget_ep_enable+0x6c/0xe4
usb_ep_enable+0x5c/0x15c
mp_eth_stop+0xd4/0x11c
__dev_close_many+0x160/0x1c8
__dev_change_flags+0xfc/0x220
dev_change_flags+0x24/0x70
devinet_ioctl+0x434/0x524
inet_ioctl+0xa8/0x224
sock_do_ioctl+0x74/0x128
sock_ioctl+0x3bc/0x468
__arm64_sys_ioctl+0xa8/0xe4
invoke_syscall+0x58/0x10c
el0_svc_common+0xa8/0xdc
do_el0_svc+0x1c/0x28
el0_svc+0x38/0x88
el0t_64_sync_handler+0x70/0xbc
el0t_64_sync+0x1a8/0x1ac
Fixes: a97ea994605e ("usb: dwc3: gadget: offset Start Transfer latency for bulk EPs")
Cc: stable(a)vger.kernel.org
Signed-off-by: Selvarasu Ganesan <selvarasu.g(a)samsung.com>
---
Changes in v2:
- Removed change-id.
- Updated commit message.
Link to v1: https://lore.kernel.org/linux-usb/20251117152812.622-1-selvarasu.g@samsung.…
---
drivers/usb/dwc3/gadget.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 1f67fb6aead5..8d3caa71ea12 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -963,8 +963,9 @@ static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep, unsigned int action)
* Issue StartTransfer here with no-op TRB so we can always rely on No
* Response Update Transfer command.
*/
- if (usb_endpoint_xfer_bulk(desc) ||
- usb_endpoint_xfer_int(desc)) {
+ if ((usb_endpoint_xfer_bulk(desc) ||
+ usb_endpoint_xfer_int(desc)) &&
+ !(dep->flags & DWC3_EP_TRANSFER_STARTED)) {
struct dwc3_gadget_ep_cmd_params params;
struct dwc3_trb *trb;
dma_addr_t trb_dma;
--
2.34.1
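For illustration, the problematic sequence from a gadget function
driver looks roughly like this (a hedged sketch, not taken from any
specific function driver):

```c
/* A transfer is still in flight on the endpoint when the function
 * driver bounces it. dwc3_gadget_ep_disable() may defer the
 * EndTransfer (DWC3_EP_DELAY_STOP), so the re-enable used to issue
 * a second StartTransfer while the endpoint resources were busy. */
usb_ep_disable(ep);	/* EndTransfer possibly deferred */
usb_ep_enable(ep);	/* previously: new StartTransfer -> warning */
```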
Currently, kvfree_rcu_barrier() flushes RCU sheaves across all slab
caches when a cache is destroyed. This is unnecessary; only the RCU
sheaves belonging to the cache being destroyed need to be flushed.
As suggested by Vlastimil Babka, introduce a weaker form of
kvfree_rcu_barrier() that operates on a specific slab cache.
Factor out flush_rcu_sheaves_on_cache() from flush_all_rcu_sheaves() and
call it from flush_all_rcu_sheaves() and kvfree_rcu_barrier_on_cache().
Call kvfree_rcu_barrier_on_cache() instead of kvfree_rcu_barrier() on
cache destruction.
The performance benefit was evaluated on a 12-core, 24-thread AMD Ryzen
5900X machine (1 socket) by loading the slub_kunit module.
Before:
Total calls: 19
Average latency (us): 18127
Total time (us): 344414
After:
Total calls: 19
Average latency (us): 10066
Total time (us): 191264
Two performance regressions have been reported:
- the stress module loader test's runtime increases by 50-60% (Daniel)
- an internal graphics test's runtime on Tegra23 increases by 35% (Jon)
Both are fixed by this change.
Suggested-by: Vlastimil Babka <vbabka(a)suse.cz>
Fixes: ec66e0d59952 ("slab: add sheaf support for batching kfree_rcu() operations")
Cc: <stable(a)vger.kernel.org>
Link: https://lore.kernel.org/linux-mm/1bda09da-93be-4737-aef0-d47f8c5c9301@suse.…
Reported-and-tested-by: Daniel Gomez <da.gomez(a)samsung.com>
Closes: https://lore.kernel.org/linux-mm/0406562e-2066-4cf8-9902-b2b0616dd742@kerne…
Reported-and-tested-by: Jon Hunter <jonathanh(a)nvidia.com>
Closes: https://lore.kernel.org/linux-mm/e988eff6-1287-425e-a06c-805af5bbf262@nvidi…
Signed-off-by: Harry Yoo <harry.yoo(a)oracle.com>
---
No code change, added proper tags and updated changelog.
include/linux/slab.h | 5 ++++
mm/slab.h | 1 +
mm/slab_common.c | 52 +++++++++++++++++++++++++++++------------
mm/slub.c | 55 ++++++++++++++++++++++++--------------------
4 files changed, 73 insertions(+), 40 deletions(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index cf443f064a66..937c93d44e8c 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -1149,6 +1149,10 @@ static inline void kvfree_rcu_barrier(void)
{
rcu_barrier();
}
+static inline void kvfree_rcu_barrier_on_cache(struct kmem_cache *s)
+{
+ rcu_barrier();
+}
static inline void kfree_rcu_scheduler_running(void) { }
#else
@@ -1156,6 +1160,7 @@ void kvfree_rcu_barrier(void);
void kfree_rcu_scheduler_running(void);
#endif
+void kvfree_rcu_barrier_on_cache(struct kmem_cache *s);
/**
* kmalloc_size_roundup - Report allocation bucket size for the given size
diff --git a/mm/slab.h b/mm/slab.h
index f730e012553c..e767aa7e91b0 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -422,6 +422,7 @@ static inline bool is_kmalloc_normal(struct kmem_cache *s)
bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj);
void flush_all_rcu_sheaves(void);
+void flush_rcu_sheaves_on_cache(struct kmem_cache *s);
#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
SLAB_CACHE_DMA32 | SLAB_PANIC | \
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 84dfff4f7b1f..dd8a49d6f9cc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -492,7 +492,7 @@ void kmem_cache_destroy(struct kmem_cache *s)
return;
/* in-flight kfree_rcu()'s may include objects from our cache */
- kvfree_rcu_barrier();
+ kvfree_rcu_barrier_on_cache(s);
if (IS_ENABLED(CONFIG_SLUB_RCU_DEBUG) &&
(s->flags & SLAB_TYPESAFE_BY_RCU)) {
@@ -2038,25 +2038,13 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
}
EXPORT_SYMBOL_GPL(kvfree_call_rcu);
-/**
- * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete.
- *
- * Note that a single argument of kvfree_rcu() call has a slow path that
- * triggers synchronize_rcu() following by freeing a pointer. It is done
- * before the return from the function. Therefore for any single-argument
- * call that will result in a kfree() to a cache that is to be destroyed
- * during module exit, it is developer's responsibility to ensure that all
- * such calls have returned before the call to kmem_cache_destroy().
- */
-void kvfree_rcu_barrier(void)
+static inline void __kvfree_rcu_barrier(void)
{
struct kfree_rcu_cpu_work *krwp;
struct kfree_rcu_cpu *krcp;
bool queued;
int i, cpu;
- flush_all_rcu_sheaves();
-
/*
* Firstly we detach objects and queue them over an RCU-batch
* for all CPUs. Finally queued works are flushed for each CPU.
@@ -2118,8 +2106,43 @@ void kvfree_rcu_barrier(void)
}
}
}
+
+/**
+ * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete.
+ *
+ * Note that a single argument of kvfree_rcu() call has a slow path that
+ * triggers synchronize_rcu() following by freeing a pointer. It is done
+ * before the return from the function. Therefore for any single-argument
+ * call that will result in a kfree() to a cache that is to be destroyed
+ * during module exit, it is developer's responsibility to ensure that all
+ * such calls have returned before the call to kmem_cache_destroy().
+ */
+void kvfree_rcu_barrier(void)
+{
+ flush_all_rcu_sheaves();
+ __kvfree_rcu_barrier();
+}
EXPORT_SYMBOL_GPL(kvfree_rcu_barrier);
+/**
+ * kvfree_rcu_barrier_on_cache - Wait for in-flight kvfree_rcu() calls on a
+ * specific slab cache.
+ * @s: slab cache to wait for
+ *
+ * See the description of kvfree_rcu_barrier() for details.
+ */
+void kvfree_rcu_barrier_on_cache(struct kmem_cache *s)
+{
+ if (s->cpu_sheaves)
+ flush_rcu_sheaves_on_cache(s);
+ /*
+ * TODO: Introduce a version of __kvfree_rcu_barrier() that works
+ * on a specific slab cache.
+ */
+ __kvfree_rcu_barrier();
+}
+EXPORT_SYMBOL_GPL(kvfree_rcu_barrier_on_cache);
+
static unsigned long
kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
{
@@ -2215,4 +2238,3 @@ void __init kvfree_rcu_init(void)
}
#endif /* CONFIG_KVFREE_RCU_BATCHED */
-
diff --git a/mm/slub.c b/mm/slub.c
index 785e25a14999..7cec2220712b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4118,42 +4118,47 @@ static void flush_rcu_sheaf(struct work_struct *w)
/* needed for kvfree_rcu_barrier() */
-void flush_all_rcu_sheaves(void)
+void flush_rcu_sheaves_on_cache(struct kmem_cache *s)
{
struct slub_flush_work *sfw;
- struct kmem_cache *s;
unsigned int cpu;
- cpus_read_lock();
- mutex_lock(&slab_mutex);
+ mutex_lock(&flush_lock);
- list_for_each_entry(s, &slab_caches, list) {
- if (!s->cpu_sheaves)
- continue;
+ for_each_online_cpu(cpu) {
+ sfw = &per_cpu(slub_flush, cpu);
- mutex_lock(&flush_lock);
+ /*
+ * we don't check if rcu_free sheaf exists - racing
+ * __kfree_rcu_sheaf() might have just removed it.
+ * by executing flush_rcu_sheaf() on the cpu we make
+ * sure the __kfree_rcu_sheaf() finished its call_rcu()
+ */
- for_each_online_cpu(cpu) {
- sfw = &per_cpu(slub_flush, cpu);
+ INIT_WORK(&sfw->work, flush_rcu_sheaf);
+ sfw->s = s;
+ queue_work_on(cpu, flushwq, &sfw->work);
+ }
- /*
- * we don't check if rcu_free sheaf exists - racing
- * __kfree_rcu_sheaf() might have just removed it.
- * by executing flush_rcu_sheaf() on the cpu we make
- * sure the __kfree_rcu_sheaf() finished its call_rcu()
- */
+ for_each_online_cpu(cpu) {
+ sfw = &per_cpu(slub_flush, cpu);
+ flush_work(&sfw->work);
+ }
- INIT_WORK(&sfw->work, flush_rcu_sheaf);
- sfw->s = s;
- queue_work_on(cpu, flushwq, &sfw->work);
- }
+ mutex_unlock(&flush_lock);
+}
- for_each_online_cpu(cpu) {
- sfw = &per_cpu(slub_flush, cpu);
- flush_work(&sfw->work);
- }
+void flush_all_rcu_sheaves(void)
+{
+ struct kmem_cache *s;
+
+ cpus_read_lock();
+ mutex_lock(&slab_mutex);
- mutex_unlock(&flush_lock);
+ list_for_each_entry(s, &slab_caches, list) {
+ if (!s->cpu_sheaves)
+ continue;
+ flush_rcu_sheaves_on_cache(s);
}
mutex_unlock(&slab_mutex);
--
2.43.0
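As a usage summary, the two barriers behave as follows after this
change (a sketch based on the hunks above):

```c
/* Global barrier: flush the RCU sheaves of every cache, then run
 * the batched kvfree_rcu() barrier. External behaviour unchanged. */
kvfree_rcu_barrier();

/* Per-cache barrier: flush only @s's sheaves (if it has any) before
 * the batched barrier; kmem_cache_destroy() now calls this. */
kvfree_rcu_barrier_on_cache(s);
```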
From: Peter Zijlstra <peterz(a)infradead.org>
[ Upstream commit 1fe4002cf7f23d70c79bda429ca2a9423ebcfdfa ]
A KASAN build bloats these single load/store helpers such that
it fails to inline them:
vmlinux.o: error: objtool: irqentry_exit+0x5e8: call to instruction_pointer_set() with UACCESS enabled
Make sure the compiler isn't allowed to do stupid.
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
Link: https://patch.msgid.link/20251031105435.GU4068168@noisy.programming.kicks-a…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
LLM Generated explanations, may be completely bogus:
## Comprehensive Analysis: x86/ptrace: Always inline trivial accessors
### 1. COMMIT MESSAGE ANALYSIS
**Subject:** `x86/ptrace: Always inline trivial accessors`
**Key points from the commit message:**
- A **KASAN build** bloats these single load/store helpers such that
they fail to inline
- The result is an **objtool ERROR**: `vmlinux.o: error: objtool:
irqentry_exit+0x5e8: call to instruction_pointer_set() with UACCESS
enabled`
- The commit ensures "the compiler isn't allowed to do stupid"
**Author:** Peter Zijlstra (Intel) - a highly respected kernel developer
and maintainer
**Committer:** Ingo Molnar - x86 subsystem maintainer
**Missing tags:**
- No `Cc: stable(a)vger.kernel.org` tag
- No `Fixes:` tag
### 2. CODE CHANGE ANALYSIS
The diff changes 8 trivial accessor functions from `static inline` to
`static __always_inline`:
| Function | Purpose | Complexity |
|----------|---------|------------|
| `regs_return_value()` | Returns `regs->ax` | 1 line |
| `regs_set_return_value()` | Sets `regs->ax = rc` | 1 line |
| `kernel_stack_pointer()` | Returns `regs->sp` | 1 line |
| `instruction_pointer()` | Returns `regs->ip` | 1 line |
| `instruction_pointer_set()` | Sets `regs->ip = val` | 1 line |
| `frame_pointer()` | Returns `regs->bp` | 1 line |
| `user_stack_pointer()` | Returns `regs->sp` | 1 line |
| `user_stack_pointer_set()` | Sets `regs->sp = val` | 1 line |
**Technical mechanism of the bug:**
1. KASAN adds memory sanitization instrumentation to functions
2. Even trivial one-liner functions get bloated with KASAN checks
3. The compiler decides the bloated functions are "too big" to inline
4. These functions get called from `irqentry_exit()` in contexts where
UACCESS is enabled (SMAP disabled via STAC)
5. Objtool validates that no unexpected function calls happen with
UACCESS enabled (security/correctness requirement)
6. Result: **BUILD FAILURE** (error, not warning)
**Why `__always_inline` fixes it:**
```c
#define __always_inline inline __attribute__((__always_inline__))
```
This compiler attribute forces inlining regardless of any optimization
settings or instrumentation, ensuring these trivial accessors always
become inline code rather than function calls.
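For comparison, a minimal before/after of the pattern the diff below
applies (hypothetical accessor name, shown only to illustrate the
attribute):

```c
/* Before: under KASAN instrumentation the compiler may outline this,
 * producing a call from UACCESS-enabled regions. */
static inline unsigned long ip_get(struct pt_regs *regs)
{
	return regs->ip;
}

/* After: inlining is forced regardless of instrumentation. */
static __always_inline unsigned long ip_get(struct pt_regs *regs)
{
	return regs->ip;
}
```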
### 3. CLASSIFICATION
- **Category:** BUILD FIX
- **Type:** Fixes compilation error with KASAN on x86
- **Not a feature:** Simply enforces behavior that was intended
(functions should always inline)
- **Not a quirk/device ID/DT:** N/A
### 4. SCOPE AND RISK ASSESSMENT
**Scope:**
- 1 file changed: `arch/x86/include/asm/ptrace.h`
- 10 insertions, 10 deletions (only adding `__always_` prefix)
- Changes are purely compile-time
**Risk: VERY LOW**
- Zero runtime functional change when compiler already inlines
- Only forces the compiler to do what it was supposed to do
- Same pattern already successfully applied to other functions in the
same file:
- `user_mode()` - already `__always_inline`
- `v8086_mode()` - commit b008893b08dcc
- `ip_within_syscall_gap()` - commit c6b01dace2cd7
- `regs_irqs_disabled()` - already `__always_inline`
### 5. USER IMPACT
**Who is affected:**
- Anyone building x86 kernels with `CONFIG_KASAN=y`
- KASAN is used for memory debugging, commonly in development and CI
systems
- Enterprise distributions often enable KASAN in debug/test builds
**Severity:** HIGH (build failure = kernel cannot be compiled)
### 6. STABILITY INDICATORS
- **Reviewed-by:** None explicit, but committed through tip tree
- **Tested-by:** None explicit, but the error message shows it was
reproduced
- **Author credibility:** Peter Zijlstra is a top kernel maintainer
- **Committer credibility:** Ingo Molnar is the x86 maintainer
### 7. DEPENDENCY CHECK
**Dependencies:** NONE
- This is a standalone fix
- Does not depend on any other commits
- The affected code exists unchanged in all stable kernels (5.10, 5.15,
6.1, 6.6)
**Backport applicability verified:**
```
v5.10: static inline void instruction_pointer_set(...) ✓
v5.15: static inline void instruction_pointer_set(...) ✓
v6.1: static inline void instruction_pointer_set(...) ✓
v6.6: static inline void instruction_pointer_set(...) ✓
```
The patch should apply cleanly to all stable trees.
### 8. HISTORICAL CONTEXT
Similar fixes have been applied to this same file and other kernel
files:
| Commit | Description | Pattern |
|--------|-------------|---------|
| c6b01dace2cd7 | x86: Always inline ip_within_syscall_gap() | Same |
| b008893b08dcc | x86/ptrace: Always inline v8086_mode() | Same |
| cb0ca08b326aa | kcov: mark in_softirq_really() as __always_inline | Same (backported to stable) |
The KASAN + objtool UACCESS validation issue is a known pattern that has
been addressed multiple times with `__always_inline`.
### SUMMARY
**Strong YES signals:**
- ✅ Fixes a build failure (compilation error, not warning)
- ✅ Small, surgical fix with clear scope (only adds `__always_inline`)
- ✅ Obviously correct - trivial accessors should always inline
- ✅ Zero functional/runtime change
- ✅ No dependencies, applies cleanly to all stable trees
- ✅ Well-established fix pattern used elsewhere in kernel
- ✅ Authored by highly trusted maintainer (Peter Zijlstra)
- ✅ Committed through the proper channel (tip tree via Ingo Molnar)
**Weak NO signals:**
- ⚠️ No explicit `Cc: stable` tag
- ⚠️ No `Fixes:` tag
The absence of stable tags is not disqualifying for build fixes. The
stable kernel rules explicitly state that "build fixes that prevent
compilation" are acceptable. This is a clear-cut build fix that prevents
KASAN-enabled x86 kernels from compiling.
### CONCLUSION
This commit **should be backported** to stable kernel trees. It's a
textbook example of a build fix:
- Small, contained, obviously correct
- Fixes a real build failure affecting KASAN users
- Zero risk of regression (only forces intended behavior)
- No dependencies, clean backport
- Follows established patterns from similar successful fixes
**YES**
arch/x86/include/asm/ptrace.h | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/ptrace.h b/arch/x86/include/asm/ptrace.h
index 50f75467f73d0..b5dec859bc75a 100644
--- a/arch/x86/include/asm/ptrace.h
+++ b/arch/x86/include/asm/ptrace.h
@@ -187,12 +187,12 @@ convert_ip_to_linear(struct task_struct *child, struct pt_regs *regs);
extern void send_sigtrap(struct pt_regs *regs, int error_code, int si_code);
-static inline unsigned long regs_return_value(struct pt_regs *regs)
+static __always_inline unsigned long regs_return_value(struct pt_regs *regs)
{
return regs->ax;
}
-static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
+static __always_inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
{
regs->ax = rc;
}
@@ -277,34 +277,34 @@ static __always_inline bool ip_within_syscall_gap(struct pt_regs *regs)
}
#endif
-static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+static __always_inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
{
return regs->sp;
}
-static inline unsigned long instruction_pointer(struct pt_regs *regs)
+static __always_inline unsigned long instruction_pointer(struct pt_regs *regs)
{
return regs->ip;
}
-static inline void instruction_pointer_set(struct pt_regs *regs,
- unsigned long val)
+static __always_inline
+void instruction_pointer_set(struct pt_regs *regs, unsigned long val)
{
regs->ip = val;
}
-static inline unsigned long frame_pointer(struct pt_regs *regs)
+static __always_inline unsigned long frame_pointer(struct pt_regs *regs)
{
return regs->bp;
}
-static inline unsigned long user_stack_pointer(struct pt_regs *regs)
+static __always_inline unsigned long user_stack_pointer(struct pt_regs *regs)
{
return regs->sp;
}
-static inline void user_stack_pointer_set(struct pt_regs *regs,
- unsigned long val)
+static __always_inline
+void user_stack_pointer_set(struct pt_regs *regs, unsigned long val)
{
regs->sp = val;
}
--
2.51.0
The recent refactoring of where runtime PM is enabled, done in commit
f1eb4e792bb1 ("spi: spi-cadence-quadspi: Enable pm runtime earlier to
avoid imbalance"), exposed the fact that a pm_runtime_disable() in the
error paths of probe() can trigger a runtime suspend which in turn
results in duplicate clock disables. This is particularly likely to
happen when the DT description for the flashes attached to the
controller is missing or broken.
Early on in the probe function we do a pm_runtime_get_noresume() since
the probe function leaves the device in a powered up state but in the
error path we can't assume that PM is enabled so we also manually
disable everything, including clocks. This means that when runtime PM is
active both it and the probe function release the same reference to the
main clock for the IP, triggering warnings from the clock subsystem:
[ 8.693719] clk:75:7 already disabled
[ 8.693791] WARNING: CPU: 1 PID: 185 at /usr/src/kernel/drivers/clk/clk.c:1188 clk_core_disable+0xa0/0xb
...
[ 8.694261] clk_core_disable+0xa0/0xb4 (P)
[ 8.694272] clk_disable+0x38/0x60
[ 8.694283] cqspi_probe+0x7c8/0xc5c [spi_cadence_quadspi]
[ 8.694309] platform_probe+0x5c/0xa4
Dealing with this issue properly is complicated by the fact that we
don't know if runtime PM is active, so we can't tell if it will disable
the clocks or not. We can, however, sidestep the issue for broken flash
descriptions by moving their parsing to where we parse the controller
properties, which also saves us doing a bunch of setup that can never be
used, so let's do that.
Reported-by: Francesco Dolcini <francesco(a)dolcini.it>
Closes: https://lore.kernel.org/r/20251201072844.GA6785@francesco-nb
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Cc: stable(a)vger.kernel.org
---
Changes in v2:
- Switch to moving the DT parsing earlier so we avoid triggering the
clock referencing problems.
- Link to v1: https://patch.msgid.link/20251202-spi-cadence-qspi-runtime-pm-imbalance-v1-…
---
drivers/spi/spi-cadence-quadspi.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
index af6d050da1c8..bdbeef05cd72 100644
--- a/drivers/spi/spi-cadence-quadspi.c
+++ b/drivers/spi/spi-cadence-quadspi.c
@@ -1845,6 +1845,12 @@ static int cqspi_probe(struct platform_device *pdev)
return -ENODEV;
}
+ ret = cqspi_setup_flash(cqspi);
+ if (ret) {
+ dev_err(dev, "failed to setup flash parameters %d\n", ret);
+ return ret;
+ }
+
/* Obtain QSPI clock. */
cqspi->clk = devm_clk_get(dev, NULL);
if (IS_ERR(cqspi->clk)) {
@@ -1988,12 +1994,6 @@ static int cqspi_probe(struct platform_device *pdev)
pm_runtime_get_noresume(dev);
}
- ret = cqspi_setup_flash(cqspi);
- if (ret) {
- dev_err(dev, "failed to setup flash parameters %d\n", ret);
- goto probe_setup_failed;
- }
-
host->num_chipselect = cqspi->num_chipselect;
if (ddata && (ddata->quirks & CQSPI_SUPPORT_DEVICE_RESET))
---
base-commit: cebdea5fc60642a39a76c237257a7e6662336006
change-id: 20251202-spi-cadence-qspi-runtime-pm-imbalance-657740cf7eae
Best regards,
--
Mark Brown <broonie(a)kernel.org>
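The ordering this establishes can be sketched as follows (a simplified
view of the probe flow, not the full function):

```c
/* Parse flash nodes before taking any clock or runtime PM
 * references, so a broken DT description fails probe while there is
 * nothing to unwind yet. */
ret = cqspi_setup_flash(cqspi);
if (ret) {
	dev_err(dev, "failed to setup flash parameters %d\n", ret);
	return ret;	/* no clocks held, no PM state to undo */
}

/* Only now obtain the QSPI clock and power up the block ... */
cqspi->clk = devm_clk_get(dev, NULL);
```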
The patch titled
Subject: kasan: unpoison vms[area] addresses with a common tag
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
kasan-unpoison-vms-addresses-with-a-common-tag.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Maciej Wieczor-Retman <maciej.wieczor-retman(a)intel.com>
Subject: kasan: unpoison vms[area] addresses with a common tag
Date: Thu, 04 Dec 2025 19:00:11 +0000
A KASAN tag mismatch, possibly causing a kernel panic, can be observed on
systems with tag-based KASAN enabled and multiple NUMA nodes. It
was reported on arm64 and reproduced on x86. It can be explained in the
following points:
1. There can be more than one virtual memory chunk.
2. Chunk's base address has a tag.
3. The base address points at the first chunk and thus inherits
the tag of the first chunk.
4. The subsequent chunks will be accessed with the tag from the
first chunk.
5. Thus, the subsequent chunks need to have their tag set to
match that of the first chunk.
Use the new vmalloc flag that disables random tag assignment in
__kasan_unpoison_vmalloc() - pass the same random tag to all the
vm_structs by tagging the pointers before they go inside
__kasan_unpoison_vmalloc(). Assigning a common tag resolves the pcpu
chunk address mismatch.
Link: https://lkml.kernel.org/r/873821114a9f722ffb5d6702b94782e902883fdf.17648745…
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman(a)intel.com>
Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
Cc: Alexander Potapenko <glider(a)google.com>
Cc: Andrey Konovalov <andreyknvl(a)gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a(a)gmail.com>
Cc: Danilo Krummrich <dakr(a)kernel.org>
Cc: Dmitriy Vyukov <dvyukov(a)google.com>
Cc: Jiayuan Chen <jiayuan.chen(a)linux.dev>
Cc: Kees Cook <kees(a)kernel.org>
Cc: Marco Elver <elver(a)google.com>
Cc: "Uladzislau Rezki (Sony)" <urezki(a)gmail.com>
Cc: Vincenzo Frascino <vincenzo.frascino(a)arm.com>
Cc: <stable(a)vger.kernel.org> [6.1+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/kasan/common.c | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
--- a/mm/kasan/common.c~kasan-unpoison-vms-addresses-with-a-common-tag
+++ a/mm/kasan/common.c
@@ -591,11 +591,28 @@ void __kasan_unpoison_vmap_areas(struct
unsigned long size;
void *addr;
int area;
+ u8 tag;
- for (area = 0 ; area < nr_vms ; area++) {
+ /*
+ * If KASAN_VMALLOC_KEEP_TAG was set at this point, all vms[] pointers
+ * would be unpoisoned with the KASAN_TAG_KERNEL which would disable
+ * KASAN checks down the line.
+ */
+ if (flags & KASAN_VMALLOC_KEEP_TAG) {
+ pr_warn("KASAN_VMALLOC_KEEP_TAG flag shouldn't be already set!\n");
+ return;
+ }
+
+ size = vms[0]->size;
+ addr = vms[0]->addr;
+ vms[0]->addr = __kasan_unpoison_vmalloc(addr, size, flags);
+ tag = get_tag(vms[0]->addr);
+
+ for (area = 1 ; area < nr_vms ; area++) {
size = vms[area]->size;
- addr = vms[area]->addr;
- vms[area]->addr = __kasan_unpoison_vmalloc(addr, size, flags);
+ addr = set_tag(vms[area]->addr, tag);
+ vms[area]->addr =
+ __kasan_unpoison_vmalloc(addr, size, flags | KASAN_VMALLOC_KEEP_TAG);
}
}
#endif
_
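To make the failure mode concrete, a hedged sketch of the mismatch the
patch removes:

```c
/* With per-area random tags, an address derived from the first
 * area's base carries that area's tag, but may land in a later area
 * whose shadow was set from a different random tag, so the tag
 * check on access fails. The patch stamps the first area's tag on
 * all areas instead. */
void *base  = vms[0]->addr;		/* pointer tag A */
void *later = base + vms[0]->size;	/* still tag A ... */
/* ... but the shadow for vms[1] was set from random tag B. */
```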
Patches currently in -mm which might be from maciej.wieczor-retman(a)intel.com are
kasan-refactor-pcpu-kasan-vmalloc-unpoison.patch
kasan-unpoison-vms-addresses-with-a-common-tag.patch
The patch titled
Subject: kasan: refactor pcpu kasan vmalloc unpoison
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
kasan-refactor-pcpu-kasan-vmalloc-unpoison.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Maciej Wieczor-Retman <maciej.wieczor-retman(a)intel.com>
Subject: kasan: refactor pcpu kasan vmalloc unpoison
Date: Thu, 04 Dec 2025 19:00:04 +0000
A KASAN tag mismatch, possibly causing a kernel panic, can be observed
on systems with tag-based KASAN enabled and multiple NUMA nodes.
It was reported on arm64 and reproduced on x86. It can be explained in
the following points:
1. There can be more than one virtual memory chunk.
2. Chunk's base address has a tag.
3. The base address points at the first chunk and thus inherits
the tag of the first chunk.
4. The subsequent chunks will be accessed with the tag from the
first chunk.
5. Thus, the subsequent chunks need to have their tag set to
match that of the first chunk.
Refactor code by reusing __kasan_unpoison_vmalloc in a new helper in
preparation for the actual fix.
Link: https://lkml.kernel.org/r/eb61d93b907e262eefcaa130261a08bcb6c5ce51.17648745…
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman(a)intel.com>
Fixes: 1d96320f8d53 ("kasan, vmalloc: add vmalloc tagging for SW_TAGS")
Cc: Alexander Potapenko <glider(a)google.com>
Cc: Andrey Konovalov <andreyknvl(a)gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a(a)gmail.com>
Cc: Danilo Krummrich <dakr(a)kernel.org>
Cc: Dmitriy Vyukov <dvyukov(a)google.com>
Cc: Jiayuan Chen <jiayuan.chen(a)linux.dev>
Cc: Kees Cook <kees(a)kernel.org>
Cc: Marco Elver <elver(a)google.com>
Cc: "Uladzislau Rezki (Sony)" <urezki(a)gmail.com>
Cc: Vincenzo Frascino <vincenzo.frascino(a)arm.com>
Cc: <stable(a)vger.kernel.org> [6.1+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/kasan.h | 15 +++++++++++++++
mm/kasan/common.c | 17 +++++++++++++++++
mm/vmalloc.c | 4 +---
3 files changed, 33 insertions(+), 3 deletions(-)
--- a/include/linux/kasan.h~kasan-refactor-pcpu-kasan-vmalloc-unpoison
+++ a/include/linux/kasan.h
@@ -615,6 +615,16 @@ static __always_inline void kasan_poison
__kasan_poison_vmalloc(start, size);
}
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
+ kasan_vmalloc_flags_t flags);
+static __always_inline void
+kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
+ kasan_vmalloc_flags_t flags)
+{
+ if (kasan_enabled())
+ __kasan_unpoison_vmap_areas(vms, nr_vms, flags);
+}
+
#else /* CONFIG_KASAN_VMALLOC */
static inline void kasan_populate_early_vm_area_shadow(void *start,
@@ -639,6 +649,11 @@ static inline void *kasan_unpoison_vmall
static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
{ }
+static __always_inline void
+kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
+ kasan_vmalloc_flags_t flags)
+{ }
+
#endif /* CONFIG_KASAN_VMALLOC */
#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
--- a/mm/kasan/common.c~kasan-refactor-pcpu-kasan-vmalloc-unpoison
+++ a/mm/kasan/common.c
@@ -28,6 +28,7 @@
#include <linux/string.h>
#include <linux/types.h>
#include <linux/bug.h>
+#include <linux/vmalloc.h>
#include "kasan.h"
#include "../slab.h"
@@ -582,3 +583,19 @@ bool __kasan_check_byte(const void *addr
}
return true;
}
+
+#ifdef CONFIG_KASAN_VMALLOC
+void __kasan_unpoison_vmap_areas(struct vm_struct **vms, int nr_vms,
+ kasan_vmalloc_flags_t flags)
+{
+ unsigned long size;
+ void *addr;
+ int area;
+
+ for (area = 0 ; area < nr_vms ; area++) {
+ size = vms[area]->size;
+ addr = vms[area]->addr;
+ vms[area]->addr = __kasan_unpoison_vmalloc(addr, size, flags);
+ }
+}
+#endif
--- a/mm/vmalloc.c~kasan-refactor-pcpu-kasan-vmalloc-unpoison
+++ a/mm/vmalloc.c
@@ -4872,9 +4872,7 @@ retry:
* With hardware tag-based KASAN, marking is skipped for
* non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc().
*/
- for (area = 0; area < nr_vms; area++)
- vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
- vms[area]->size, KASAN_VMALLOC_PROT_NORMAL);
+ kasan_unpoison_vmap_areas(vms, nr_vms, KASAN_VMALLOC_PROT_NORMAL);
kfree(vas);
return vms;
_
Patches currently in -mm which might be from maciej.wieczor-retman(a)intel.com are
kasan-refactor-pcpu-kasan-vmalloc-unpoison.patch
kasan-unpoison-vms-addresses-with-a-common-tag.patch
The patch titled
Subject: mm/kasan: fix incorrect unpoisoning in vrealloc for KASAN
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Jiayuan Chen <jiayuan.chen(a)linux.dev>
Subject: mm/kasan: fix incorrect unpoisoning in vrealloc for KASAN
Date: Thu, 04 Dec 2025 18:59:55 +0000
Patch series "kasan: vmalloc: Fixes for the percpu allocator and
vrealloc", v3.
Patches fix two issues related to KASAN and vmalloc.
The first one, a KASAN tag mismatch, possibly resulting in a kernel panic,
can be observed on systems with tag-based KASAN enabled and
multiple NUMA nodes. Initially it was only noticed on x86 [1] but later a
similar issue was also reported on arm64 [2].
Specifically the problem is related to how vm_structs interact with
pcpu_chunks - both when they are allocated, assigned and when pcpu_chunk
addresses are derived.
When vm_structs are allocated they are unpoisoned, each with a different
random tag, if vmalloc support is enabled alongside the KASAN mode.
Later, when the first pcpu chunk is allocated, it gets its 'base_addr'
field set to the first allocated vm_struct's address, and with that it
inherits that vm_struct's tag.
When pcpu_chunk addresses are later derived (by pcpu_chunk_addr(), for
example in pcpu_alloc_noprof()), the base_addr field is used and offsets
are added to it. If the initial conditions are satisfied, some of the
offsets will point into memory allocated with a different vm_struct. So
while the lower bits are accurately derived, the tag bits at the top of
the pointer won't match the shadow memory contents.
The solution (proposed at v2 of the x86 KASAN series [3]) is to unpoison
the vm_structs with the same tag when allocating them for the per cpu
allocator (in pcpu_get_vm_areas()).
The second one, reported by syzkaller [4], is related to vrealloc and
happens because of random tag generation when unpoisoning memory without
allocating new pages. This breaks shadow memory tracking; the fix is to
reuse the existing tag instead of generating a new one. At the same time
an inconsistency in the flags used is corrected.
This patch (of 3):
Syzkaller reported a memory out-of-bounds bug [4]. This patch fixes two
issues:
1. In vrealloc the KASAN_VMALLOC_VM_ALLOC flag is missing when
unpoisoning the extended region. This flag is required to correctly
associate the allocation with KASAN's vmalloc tracking.
Note: In contrast, vzalloc (via __vmalloc_node_range_noprof)
explicitly sets KASAN_VMALLOC_VM_ALLOC and calls
kasan_unpoison_vmalloc() with it. vrealloc must behave consistently --
especially when reusing existing vmalloc regions -- to ensure KASAN can
track allocations correctly.
2. When vrealloc reuses an existing vmalloc region (without allocating
new pages) KASAN generates a new tag, which breaks tag-based memory
access tracking.
Introduce KASAN_VMALLOC_KEEP_TAG, a new KASAN flag that allows reusing the
tag already attached to the pointer, ensuring consistent tag behavior
during reallocation.
Pass KASAN_VMALLOC_KEEP_TAG and KASAN_VMALLOC_VM_ALLOC to the
kasan_unpoison_vmalloc inside vrealloc_node_align_noprof().
Link: https://lkml.kernel.org/r/38dece0a4074c43e48150d1e242f8242c73bf1a5.17648745…
Link: https://lore.kernel.org/all/e7e04692866d02e6d3b32bb43b998e5d17092ba4.173868… [1]
Link: https://lore.kernel.org/all/aMUrW1Znp1GEj7St@MiWiFi-R3L-srv/ [2]
Link: https://lore.kernel.org/all/CAPAsAGxDRv_uFeMYu9TwhBVWHCCtkSxoWY4xmFB_vowMbi… [3]
Link: https://syzkaller.appspot.com/bug?extid=997752115a851cb0cf36 [4]
Signed-off-by: Jiayuan Chen <jiayuan.chen(a)linux.dev>
Co-developed-by: Maciej Wieczor-Retman <maciej.wieczor-retman(a)intel.com>
Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman(a)intel.com>
Fixes: a0309faf1cb0 ("mm: vmalloc: support more granular vrealloc() sizing")
Reported-by: syzbot+997752115a851cb0cf36(a)syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68e243a2.050a0220.1696c6.007d.GAE@google.com/T/
Cc: Alexander Potapenko <glider(a)google.com>
Cc: Andrey Konovalov <andreyknvl(a)gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a(a)gmail.com>
Cc: Danilo Krummrich <dakr(a)kernel.org>
Cc: Dmitriy Vyukov <dvyukov(a)google.com>
Cc: Kees Cook <kees(a)kernel.org>
Cc: Marco Elver <elver(a)google.com>
Cc: "Uladzislau Rezki (Sony)" <urezki(a)gmail.com>
Cc: Vincenzo Frascino <vincenzo.frascino(a)arm.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/kasan.h | 1 +
mm/kasan/hw_tags.c | 2 +-
mm/kasan/shadow.c | 4 +++-
mm/vmalloc.c | 4 +++-
4 files changed, 8 insertions(+), 3 deletions(-)
--- a/include/linux/kasan.h~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
+++ a/include/linux/kasan.h
@@ -28,6 +28,7 @@ typedef unsigned int __bitwise kasan_vma
#define KASAN_VMALLOC_INIT ((__force kasan_vmalloc_flags_t)0x01u)
#define KASAN_VMALLOC_VM_ALLOC ((__force kasan_vmalloc_flags_t)0x02u)
#define KASAN_VMALLOC_PROT_NORMAL ((__force kasan_vmalloc_flags_t)0x04u)
+#define KASAN_VMALLOC_KEEP_TAG ((__force kasan_vmalloc_flags_t)0x08u)
#define KASAN_VMALLOC_PAGE_RANGE 0x1 /* Apply exsiting page range */
#define KASAN_VMALLOC_TLB_FLUSH 0x2 /* TLB flush */
--- a/mm/kasan/hw_tags.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
+++ a/mm/kasan/hw_tags.c
@@ -361,7 +361,7 @@ void *__kasan_unpoison_vmalloc(const voi
return (void *)start;
}
- tag = kasan_random_tag();
+ tag = (flags & KASAN_VMALLOC_KEEP_TAG) ? get_tag(start) : kasan_random_tag();
start = set_tag(start, tag);
/* Unpoison and initialize memory up to size. */
--- a/mm/kasan/shadow.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
+++ a/mm/kasan/shadow.c
@@ -648,7 +648,9 @@ void *__kasan_unpoison_vmalloc(const voi
!(flags & KASAN_VMALLOC_PROT_NORMAL))
return (void *)start;
- start = set_tag(start, kasan_random_tag());
+ if (unlikely(!(flags & KASAN_VMALLOC_KEEP_TAG)))
+ start = set_tag(start, kasan_random_tag());
+
kasan_unpoison(start, size, false);
return (void *)start;
}
--- a/mm/vmalloc.c~mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan
+++ a/mm/vmalloc.c
@@ -4176,7 +4176,9 @@ void *vrealloc_node_align_noprof(const v
*/
if (size <= alloced_size) {
kasan_unpoison_vmalloc(p + old_size, size - old_size,
- KASAN_VMALLOC_PROT_NORMAL);
+ KASAN_VMALLOC_PROT_NORMAL |
+ KASAN_VMALLOC_VM_ALLOC |
+ KASAN_VMALLOC_KEEP_TAG);
/*
* No need to zero memory here, as unused memory will have
* already been zeroed at initial allocation time or during
_
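A hedged sketch of a call pattern that exercises this fix (simplified;
whether the region is reused depends on the originally allocated size):

```c
/* Growing within the already-allocated page reuses the mapping; the
 * newly exposed tail is unpoisoned in place and must keep the
 * pointer's existing tag rather than get a fresh random one. */
void *p = vzalloc(24);			/* rounds up to a full page */
p = vrealloc(p, 200, GFP_KERNEL);	/* size <= alloced_size */
```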
Patches currently in -mm which might be from jiayuan.chen(a)linux.dev are
mm-kasan-fix-incorrect-unpoisoning-in-vrealloc-for-kasan.patch
The patch titled
Subject: Revert "mm: fix MAX_FOLIO_ORDER on powerpc configs with hugetlb"
has been added to the -mm mm-hotfixes-unstable branch. Its filename is
revert-mm-fix-max_folio_order-on-powerpc-configs-with-hugetlb.patch
This patch will shortly appear at
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patche…
This patch will later appear in the mm-hotfixes-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days
------------------------------------------------------
From: Shuah Khan <skhan(a)linuxfoundation.org>
Subject: Revert "mm: fix MAX_FOLIO_ORDER on powerpc configs with hugetlb"
Date: Wed, 3 Dec 2025 19:33:56 -0700
This reverts commit 39231e8d6ba7f794b566fd91ebd88c0834a23b98.
Enabling HAVE_GIGANTIC_FOLIOS broke the kernel build and git clone on two
systems. git fetch-pack fails when cloning large repos, and make hangs or
errors out of Makefile.build with Error: 139. These failures are random,
with git clone failing after fetching 1% of the objects and make hanging
while compiling random files.
Below is one of the git clone failures:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux_6.19
Cloning into 'linux_6.19'...
remote: Enumerating objects: 11173575, done.
remote: Counting objects: 100% (785/785), done.
remote: Compressing objects: 100% (373/373), done.
remote: Total 11173575 (delta 534), reused 505 (delta 411), pack-reused 11172790 (from 1)
Receiving objects: 100% (11173575/11173575), 3.00 GiB | 7.08 MiB/s, done.
Resolving deltas: 100% (9195212/9195212), done.
fatal: did not receive expected object 0002003e951b5057c16de5a39140abcbf6e44e50
fatal: fetch-pack: invalid index-pack output
(akpm: reverting this probably just hides a bug, but let's do that for now
while the bug is being worked on).
Link: https://lkml.kernel.org/r/20251204023358.54107-1-skhan@linuxfoundation.org
Fixes: 39231e8d6ba7 ("mm: fix MAX_FOLIO_ORDER on powerpc configs with hugetlb")
Signed-off-by: Shuah Khan <skhan(a)linuxfoundation.org>
Cc: Christophe Leroy <christophe.leroy(a)csgroup.eu>
Cc: Liam Howlett <liam.howlett(a)oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes(a)oracle.com>
Cc: Madhavan Srinivasan <maddy(a)linux.ibm.com>
Cc: Masahiro Yamada <masahiroy(a)kernel.org>
Cc: Michael Ellerman <mpe(a)ellerman.id.au>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: Mike Rapoport <rppt(a)kernel.org>
Cc: Nicholas Piggin <npiggin(a)gmail.com>
Cc: Suren Baghdasaryan <surenb(a)google.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Linus Torvalds <torvalds(a)linuxfoundation.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
arch/powerpc/Kconfig | 1 -
arch/powerpc/platforms/Kconfig.cputype | 1 +
include/linux/mm.h | 13 +++----------
mm/Kconfig | 7 -------
4 files changed, 4 insertions(+), 18 deletions(-)
--- a/arch/powerpc/Kconfig~revert-mm-fix-max_folio_order-on-powerpc-configs-with-hugetlb
+++ a/arch/powerpc/Kconfig
@@ -137,7 +137,6 @@ config PPC
select ARCH_HAS_DMA_OPS if PPC64
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
- select ARCH_HAS_GIGANTIC_PAGE if ARCH_SUPPORTS_HUGETLBFS
select ARCH_HAS_KCOV
select ARCH_HAS_KERNEL_FPU_SUPPORT if PPC64 && PPC_FPU
select ARCH_HAS_MEMBARRIER_CALLBACKS
--- a/arch/powerpc/platforms/Kconfig.cputype~revert-mm-fix-max_folio_order-on-powerpc-configs-with-hugetlb
+++ a/arch/powerpc/platforms/Kconfig.cputype
@@ -423,6 +423,7 @@ config PPC_64S_HASH_MMU
config PPC_RADIX_MMU
bool "Radix MMU Support"
depends on PPC_BOOK3S_64
+ select ARCH_HAS_GIGANTIC_PAGE
default y
help
Enable support for the Power ISA 3.0 Radix style MMU. Currently this
--- a/include/linux/mm.h~revert-mm-fix-max_folio_order-on-powerpc-configs-with-hugetlb
+++ a/include/linux/mm.h
@@ -2074,7 +2074,7 @@ static inline unsigned long folio_nr_pag
return folio_large_nr_pages(folio);
}
-#if !defined(CONFIG_HAVE_GIGANTIC_FOLIOS)
+#if !defined(CONFIG_ARCH_HAS_GIGANTIC_PAGE)
/*
* We don't expect any folios that exceed buddy sizes (and consequently
* memory sections).
@@ -2087,17 +2087,10 @@ static inline unsigned long folio_nr_pag
* pages are guaranteed to be contiguous.
*/
#define MAX_FOLIO_ORDER PFN_SECTION_SHIFT
-#elif defined(CONFIG_HUGETLB_PAGE)
-/*
- * There is no real limit on the folio size. We limit them to the maximum we
- * currently expect (see CONFIG_HAVE_GIGANTIC_FOLIOS): with hugetlb, we expect
- * no folios larger than 16 GiB on 64bit and 1 GiB on 32bit.
- */
-#define MAX_FOLIO_ORDER get_order(IS_ENABLED(CONFIG_64BIT) ? SZ_16G : SZ_1G)
#else
/*
- * Without hugetlb, gigantic folios that are bigger than a single PUD are
- * currently impossible.
+ * There is no real limit on the folio size. We limit them to the maximum we
+ * currently expect (e.g., hugetlb, dax).
*/
#define MAX_FOLIO_ORDER PUD_ORDER
#endif
--- a/mm/Kconfig~revert-mm-fix-max_folio_order-on-powerpc-configs-with-hugetlb
+++ a/mm/Kconfig
@@ -908,13 +908,6 @@ config PAGE_MAPCOUNT
config PGTABLE_HAS_HUGE_LEAVES
def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
-#
-# We can end up creating gigantic folio.
-#
-config HAVE_GIGANTIC_FOLIOS
- def_bool (HUGETLB_PAGE && ARCH_HAS_GIGANTIC_PAGE) || \
- (ZONE_DEVICE && HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
-
# TODO: Allow to be enabled without THP
config ARCH_SUPPORTS_HUGE_PFNMAP
def_bool n
_
Patches currently in -mm which might be from skhan(a)linuxfoundation.org are
revert-mm-fix-max_folio_order-on-powerpc-configs-with-hugetlb.patch
When cloning with node replacements (IORING_REGISTER_DST_REPLACE),
destination entries after the cloned range are not copied over.
Add logic to copy them over to the new destination table.
Fixes: c1329532d5aa ("io_uring/rsrc: allow cloning with node replacements")
Cc: stable(a)vger.kernel.org
Signed-off-by: Joanne Koong <joannelkoong(a)gmail.com>
---
io_uring/rsrc.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
index 04f56212398a..a63474b331bf 100644
--- a/io_uring/rsrc.c
+++ b/io_uring/rsrc.c
@@ -1205,7 +1205,7 @@ static int io_clone_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx
if (ret)
return ret;
- /* Fill entries in data from dst that won't overlap with src */
+ /* Copy original dst nodes from before the cloned range */
for (i = 0; i < min(arg->dst_off, ctx->buf_table.nr); i++) {
struct io_rsrc_node *node = ctx->buf_table.nodes[i];
@@ -1238,6 +1238,16 @@ static int io_clone_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx
i++;
}
+ /* Copy original dst nodes from after the cloned range */
+ for (i = nbufs; i < ctx->buf_table.nr; i++) {
+ struct io_rsrc_node *node = ctx->buf_table.nodes[i];
+
+ if (node) {
+ data.nodes[i] = node;
+ node->refs++;
+ }
+ }
+
/*
* If asked for replace, put the old table. data->nodes[] holds both
* old and new nodes at this point.
--
2.47.3
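For reference, the destination table layout after a clone with
IORING_REGISTER_DST_REPLACE ends up as follows (a sketch of the index
ranges handled above):

```c
/*
 * dst table after io_clone_buffers() with replacement at dst_off:
 *
 *   [0, dst_off)           original dst nodes, copied with refs++
 *   [dst_off, nbufs)       nodes cloned from the src ring
 *   [nbufs, buf_table.nr)  original dst nodes past the cloned range,
 *                          previously dropped, now copied by this fix
 */
```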
A new warning in Clang 22 [1] complains that @clidr passed to
get_clidr_el1() is an uninitialized const pointer. get_clidr_el1()
doesn't really care since it casts away the const-ness anyway -- it is
a false positive.
| ../arch/arm64/kvm/sys_regs.c:2838:23: warning: variable 'clidr' is uninitialized when passed as a const pointer argument here [-Wuninitialized-const-pointer]
| 2838 | get_clidr_el1(NULL, &clidr); /* Ugly... */
| | ^~~~~
This patch isn't needed for anything past 6.1, as this code section was
reworked in commit 7af0c2534f4c ("KVM: arm64: Normalize cache
configuration"). Since there is no upstream equivalent, this patch just
needs to be applied to 5.15.
Disable this warning for sys_regs.o with an iron fist, as it doesn't make
sense to waste maintainers' time or potentially break builds by
backporting large changelists from 6.2+.
Cc: stable(a)vger.kernel.org
Fixes: 7c8c5e6a9101e ("arm64: KVM: system register handling")
Link: https://github.com/llvm/llvm-project/commit/00dacf8c22f065cb52efb14cd091d44… [1]
Reviewed-by: Nathan Chancellor <nathan(a)kernel.org>
Signed-off-by: Justin Stitt <justinstitt(a)google.com>
---
Resending this with Nathan's RB tag, an updated commit log and better
recipients from checkpatch.pl.
I'm also sending a similar patch resend for 6.1.
---
arch/arm64/kvm/Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 989bb5dad2c8..109cca425d3e 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -25,3 +25,6 @@ kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
vgic/vgic-its.o vgic/vgic-debug.o
kvm-$(CONFIG_HW_PERF_EVENTS) += pmu-emul.o
+
+# Work around a false positive Clang 22 -Wuninitialized-const-pointer warning
+CFLAGS_sys_regs.o := $(call cc-disable-warning, uninitialized-const-pointer)
---
base-commit: 8bb7eca972ad531c9b149c0a51ab43a417385813
change-id: 20250728-b4-stable-disable-uninit-ptr-warn-5-15-c0c9db3df206
Best regards,
--
Justin Stitt <justinstitt(a)google.com>
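The construct the warning fires on looks like this (condensed from the
5.15 sys_regs.c code quoted in the diagnostic; the declaration is a
hedged reconstruction):

```c
struct sys_reg_desc clidr;

/* The pointer parameter is const-qualified, so Clang 22 assumes the
 * callee only reads through it and flags the uninitialized pass;
 * get_clidr_el1() in fact casts the const away and writes through
 * the pointer. */
get_clidr_el1(NULL, &clidr); /* Ugly... */
```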
As reported by Athul upstream in [1], there is a userspace regression caused
by commit 0c58a97f919c ("fuse: remove tmp folio for writebacks and internal rb
tree"): if a bug in a fuse server causes the server to never complete
writeback, wait_sb_inodes() will wait forever, causing sync paths to hang.
This is a resubmission of a patch [2] that was dropped from the original
series because a buggy/malicious server can still hold up sync() / the
system in other ways if it wants to, but the wait_sb_inodes() path is
particularly common and easier to hit for malfunctioning servers.
Thanks,
Joanne
[1] https://lore.kernel.org/regressions/CAJnrk1ZjQ8W8NzojsvJPRXiv9TuYPNdj8Ye7=C…
[2] https://lore.kernel.org/linux-fsdevel/20241122232359.429647-4-joannelkoong@…
Joanne Koong (2):
mm: rename AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM to
AS_WRITEBACK_MAY_HANG
fs/writeback: skip inodes with potential writeback hang in
wait_sb_inodes()
fs/fs-writeback.c | 3 +++
fs/fuse/file.c | 2 +-
include/linux/pagemap.h | 10 +++++-----
mm/vmscan.c | 3 +--
4 files changed, 10 insertions(+), 8 deletions(-)
--
2.47.3
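A minimal sketch of the wait_sb_inodes() change described above
(hedged; the real patch gates on the renamed AS_WRITEBACK_MAY_HANG
mapping flag, and the exact test may differ):

```c
/* Skip inodes whose backing store may never complete writeback
 * (e.g. a stuck fuse server), so sync(2) paths cannot block forever
 * in wait_sb_inodes(). */
if (test_bit(AS_WRITEBACK_MAY_HANG, &inode->i_mapping->flags))
	continue;
```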