This series fixes a memory corruption bug in KHO that occurs when KFENCE is enabled.
The root cause is that KHO metadata allocations, made via kzalloc(), can be randomly serviced by kfence_alloc(). When a kernel boots via KHO, the early memblock allocator is restricted to a "scratch area". This forces the KFENCE pool to be allocated within this scratch area, creating a conflict. If KHO metadata is subsequently placed in this pool, it gets corrupted during the next kexec operation.
Patch 1/3 introduces a debug-only feature (CONFIG_KEXEC_HANDOVER_DEBUG) that adds checks to detect and fail any operation that attempts to place KHO metadata or preserved memory within the scratch area. This serves as a validation and diagnostic tool to confirm the problem without affecting production builds.
Patch 2/3 increases the metadata bitmap size to PAGE_SIZE so that the buddy allocator can be used.
Patch 3/3 provides the fix by modifying KHO to allocate its metadata directly from the buddy allocator instead of slab. This bypasses the KFENCE interception entirely.
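For reference, the scratch-area check that patch 1/3 introduces boils down to a half-open interval intersection test against each scratch region. A minimal standalone sketch (simplified types; the real helper is kho_scratch_overlap() in patch 1/3):

struct region {
        unsigned long addr;
        unsigned long size;
};

/* True if [phys, phys + size) intersects any scratch region: two half-open
 * ranges overlap iff each one starts before the other one ends.
 */
static bool overlaps_scratch(unsigned long phys, unsigned long size,
                             const struct region *scratch, unsigned int cnt)
{
        unsigned int i;

        for (i = 0; i < cnt; i++) {
                unsigned long start = scratch[i].addr;
                unsigned long end = start + scratch[i].size;

                if (phys < end && phys + size > start)
                        return true;
        }
        return false;
}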
Pasha Tatashin (3):
  liveupdate: kho: warn and fail on metadata or preserved memory in scratch area
  liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  liveupdate: kho: allocate metadata directly from the buddy allocator
 include/linux/gfp.h              |  3 ++
 kernel/Kconfig.kexec             |  9 ++++
 kernel/Makefile                  |  1 +
 kernel/kexec_handover.c          | 72 ++++++++++++++++++++------------
 kernel/kexec_handover_debug.c    | 25 +++++++++++
 kernel/kexec_handover_internal.h | 16 +++++++
 6 files changed, 100 insertions(+), 26 deletions(-)
 create mode 100644 kernel/kexec_handover_debug.c
 create mode 100644 kernel/kexec_handover_internal.h
base-commit: 6548d364a3e850326831799d7e3ea2d7bb97ba08
It is invalid for KHO metadata or preserved memory regions to be located within the KHO scratch area, because this area is overwritten when the next kernel is loaded and is used early in boot by the next kernel. This can lead to memory corruption.
Add checks to kho_preserve_*() and KHO's internal metadata allocators (xa_load_or_alloc(), new_chunk()) to verify that the physical address of the memory does not overlap with any defined scratch region. If an overlap is detected, the operation fails and a WARN_ON is triggered. To avoid performance overhead in production kernels, these checks are enabled only when CONFIG_KEXEC_HANDOVER_DEBUG is selected.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 kernel/Kconfig.kexec             |  9 ++++++
 kernel/Makefile                  |  1 +
 kernel/kexec_handover.c          | 53 ++++++++++++++++++++++----------
 kernel/kexec_handover_debug.c    | 25 +++++++++++++++
 kernel/kexec_handover_internal.h | 16 ++++++++++
 5 files changed, 87 insertions(+), 17 deletions(-)
 create mode 100644 kernel/kexec_handover_debug.c
 create mode 100644 kernel/kexec_handover_internal.h
diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec
index 422270d64820..c94d36b5fcd9 100644
--- a/kernel/Kconfig.kexec
+++ b/kernel/Kconfig.kexec
@@ -109,6 +109,15 @@ config KEXEC_HANDOVER
          to keep data or state alive across the kexec. For this to work,
          both source and target kernels need to have this option enabled.
 
+config KEXEC_HANDOVER_DEBUG
+       bool "Enable Kexec Handover debug checks"
+       depends on KEXEC_HANDOVER_DEBUGFS
+       help
+         This option enables extra sanity checks for the Kexec Handover
+         subsystem. Since, KHO performance is crucial in live update
+         scenarios and the extra code might be adding overhead it is
+         only optionally enabled.
+
 config CRASH_DUMP
        bool "kernel crash dumps"
        default ARCH_DEFAULT_CRASH_DUMP
diff --git a/kernel/Makefile b/kernel/Makefile
index df3dd8291bb6..9fe722305c9b 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_KEXEC) += kexec.o
 obj-$(CONFIG_KEXEC_FILE) += kexec_file.o
 obj-$(CONFIG_KEXEC_ELF) += kexec_elf.o
 obj-$(CONFIG_KEXEC_HANDOVER) += kexec_handover.o
+obj-$(CONFIG_KEXEC_HANDOVER_DEBUG) += kexec_handover_debug.o
 obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
 obj-$(CONFIG_COMPAT) += compat.o
 obj-$(CONFIG_CGROUPS) += cgroup/
diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index 76f0940fb485..7b460806ef4f 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -8,6 +8,7 @@
 
 #define pr_fmt(fmt) "KHO: " fmt
 
+#include <linux/cleanup.h>
 #include <linux/cma.h>
 #include <linux/count_zeros.h>
 #include <linux/debugfs.h>
@@ -22,6 +23,7 @@
 
 #include <asm/early_ioremap.h>
 
+#include "kexec_handover_internal.h"
 /*
  * KHO is tightly coupled with mm init and needs access to some of mm
  * internal APIs.
@@ -133,26 +135,26 @@ static struct kho_out kho_out = {
 
 static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
 {
-       void *elm, *res;
+       void *res = xa_load(xa, index);
 
-       elm = xa_load(xa, index);
-       if (elm)
-               return elm;
+       if (res)
+               return res;
+
+       void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
 
-       elm = kzalloc(sz, GFP_KERNEL);
        if (!elm)
                return ERR_PTR(-ENOMEM);
 
+       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
+               return ERR_PTR(-EINVAL);
+
        res = xa_cmpxchg(xa, index, NULL, elm, GFP_KERNEL);
        if (xa_is_err(res))
-               res = ERR_PTR(xa_err(res));
-
-       if (res) {
-               kfree(elm);
+               return ERR_PTR(xa_err(res));
+       else if (res)
                return res;
-       }
 
-       return elm;
+       return no_free_ptr(elm);
 }
 
 static void __kho_unpreserve(struct kho_mem_track *track, unsigned long pfn,
@@ -345,15 +347,19 @@ static_assert(sizeof(struct khoser_mem_chunk) == PAGE_SIZE);
 static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
                                          unsigned long order)
 {
-       struct khoser_mem_chunk *chunk;
+       struct khoser_mem_chunk *chunk __free(kfree) = NULL;
 
        chunk = kzalloc(PAGE_SIZE, GFP_KERNEL);
        if (!chunk)
-               return NULL;
+               return ERR_PTR(-ENOMEM);
+
+       if (WARN_ON(kho_scratch_overlap(virt_to_phys(chunk), PAGE_SIZE)))
+               return ERR_PTR(-EINVAL);
+
        chunk->hdr.order = order;
        if (cur_chunk)
                KHOSER_STORE_PTR(cur_chunk->hdr.next, chunk);
-       return chunk;
+       return no_free_ptr(chunk);
 }
 
 static void kho_mem_ser_free(struct khoser_mem_chunk *first_chunk)
@@ -374,14 +380,17 @@ static int kho_mem_serialize(struct kho_serialization *ser)
        struct khoser_mem_chunk *chunk = NULL;
        struct kho_mem_phys *physxa;
        unsigned long order;
+       int err = -ENOMEM;
 
        xa_for_each(&ser->track.orders, order, physxa) {
                struct kho_mem_phys_bits *bits;
                unsigned long phys;
 
                chunk = new_chunk(chunk, order);
-               if (!chunk)
+               if (IS_ERR(chunk)) {
+                       err = PTR_ERR(chunk);
                        goto err_free;
+               }
 
                if (!first_chunk)
                        first_chunk = chunk;
@@ -391,8 +400,10 @@ static int kho_mem_serialize(struct kho_serialization *ser)
 
                        if (chunk->hdr.num_elms == ARRAY_SIZE(chunk->bitmaps)) {
                                chunk = new_chunk(chunk, order);
-                               if (!chunk)
+                               if (IS_ERR(chunk)) {
+                                       err = PTR_ERR(chunk);
                                        goto err_free;
+                               }
                        }
 
                        elm = &chunk->bitmaps[chunk->hdr.num_elms];
@@ -409,7 +420,7 @@ static int kho_mem_serialize(struct kho_serialization *ser)
 
 err_free:
        kho_mem_ser_free(first_chunk);
-       return -ENOMEM;
+       return err;
 }
 
 static void __init deserialize_bitmap(unsigned int order,
@@ -752,6 +763,9 @@ int kho_preserve_folio(struct folio *folio)
        const unsigned int order = folio_order(folio);
        struct kho_mem_track *track = &kho_out.ser.track;
 
+       if (WARN_ON(kho_scratch_overlap(pfn << PAGE_SHIFT, PAGE_SIZE << order)))
+               return -EINVAL;
+
        return __kho_preserve_order(track, pfn, order);
 }
 EXPORT_SYMBOL_GPL(kho_preserve_folio);
@@ -775,6 +789,11 @@ int kho_preserve_pages(struct page *page, unsigned int nr_pages)
        unsigned long failed_pfn = 0;
        int err = 0;
 
+       if (WARN_ON(kho_scratch_overlap(start_pfn << PAGE_SHIFT,
+                                       nr_pages << PAGE_SHIFT))) {
+               return -EINVAL;
+       }
+
        while (pfn < end_pfn) {
                const unsigned int order =
                        min(count_trailing_zeros(pfn), ilog2(end_pfn - pfn));
diff --git a/kernel/kexec_handover_debug.c b/kernel/kexec_handover_debug.c
new file mode 100644
index 000000000000..6efb696f5426
--- /dev/null
+++ b/kernel/kexec_handover_debug.c
@@ -0,0 +1,25 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * kexec_handover_debug.c - kexec handover optional debug functionality
+ * Copyright (C) 2025 Google LLC, Pasha Tatashin <pasha.tatashin@soleen.com>
+ */
+
+#define pr_fmt(fmt) "KHO: " fmt
+
+#include "kexec_handover_internal.h"
+
+bool kho_scratch_overlap(phys_addr_t phys, size_t size)
+{
+       phys_addr_t scratch_start, scratch_end;
+       unsigned int i;
+
+       for (i = 0; i < kho_scratch_cnt; i++) {
+               scratch_start = kho_scratch[i].addr;
+               scratch_end = kho_scratch[i].addr + kho_scratch[i].size;
+
+               if (phys < scratch_end && (phys + size) > scratch_start)
+                       return true;
+       }
+
+       return false;
+}
diff --git a/kernel/kexec_handover_internal.h b/kernel/kexec_handover_internal.h
new file mode 100644
index 000000000000..05e9720ba7b9
--- /dev/null
+++ b/kernel/kexec_handover_internal.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef LINUX_KEXEC_HANDOVER_INTERNAL_H
+#define LINUX_KEXEC_HANDOVER_INTERNAL_H
+
+#include <linux/types.h>
+
+#ifdef CONFIG_KEXEC_HANDOVER_DEBUG
+bool kho_scratch_overlap(phys_addr_t phys, size_t size);
+#else
+static inline bool kho_scratch_overlap(phys_addr_t phys, size_t size)
+{
+       return false;
+}
+#endif /* CONFIG_KEXEC_HANDOVER_DEBUG */
+
+#endif /* LINUX_KEXEC_HANDOVER_INTERNAL_H */
On Mon, Oct 20 2025, Pasha Tatashin wrote:
It is invalid for KHO metadata or preserved memory regions to be located within the KHO scratch area, as this area is overwritten when the next kernel is loaded, and used early in boot by the next kernel. This can lead to memory corruption.
Adds checks to kho_preserve_* and KHO's internal metadata allocators (xa_load_or_alloc, new_chunk) to verify that the physical address of the memory does not overlap with any defined scratch region. If an overlap is detected, the operation will fail and a WARN_ON is triggered. To avoid performance overhead in production kernels, these checks are enabled only when CONFIG_KEXEC_HANDOVER_DEBUG is selected.
Signed-off-by: Pasha Tatashin pasha.tatashin@soleen.com
[...]
@@ -133,26 +135,26 @@ static struct kho_out kho_out = {
 static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
 {
-       void *elm, *res;
+       void *res = xa_load(xa, index);
 
-       elm = xa_load(xa, index);
-       if (elm)
-               return elm;
+       if (res)
+               return res;
+
+       void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
 
-       elm = kzalloc(sz, GFP_KERNEL);
        if (!elm)
                return ERR_PTR(-ENOMEM);
 
+       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
+               return ERR_PTR(-EINVAL);
+
        res = xa_cmpxchg(xa, index, NULL, elm, GFP_KERNEL);
        if (xa_is_err(res))
-               res = ERR_PTR(xa_err(res));
-
-       if (res) {
-               kfree(elm);
+               return ERR_PTR(xa_err(res));
+       else if (res)
                return res;
-       }
 
-       return elm;
+       return no_free_ptr(elm);
Super small nit: there exists return_ptr(p) which is a tiny bit neater IMO but certainly not worth doing a new revision over. So,
Reviewed-by: Pratyush Yadav pratyush@kernel.org
[...]
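For context on the nit above: with the scope-based cleanup helpers in <linux/cleanup.h>, return_ptr(p) is shorthand for "return no_free_ptr(p);" — both disarm the __free() cleanup and hand the pointer to the caller. A minimal sketch (hypothetical function, not part of the series):

static void *alloc_tracked(size_t sz)
{
        void *p __free(kfree) = kzalloc(sz, GFP_KERNEL);

        if (!p)
                return ERR_PTR(-ENOMEM);

        /* ... initialize p ... */

        return no_free_ptr(p);  /* what the patch uses */
        /* return_ptr(p);          equivalent, slightly shorter spelling */
}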
KHO memory preservation metadata is stored in 512-byte chunks, which requires allocating it from the slab allocator. Slab objects are not safe to use with KHO because of KFENCE, and because partially used slabs may leak into the next kernel. Change the chunk size to PAGE_SIZE.
KFENCE in particular may cause memory corruption: it randomly services slab allocations with objects that can lie within the scratch area. This happens because KFENCE allocates its pool before the KHO scratch area is marked as a CMA region.
While this change could potentially increase metadata overhead on systems with sparsely preserved memory, this is being mitigated by ongoing work to reduce sparseness during preservation via 1G guest pages. Furthermore, this change aligns with future work on a stateless KHO, which will also use page-sized bitmaps for its radix tree metadata.
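For concreteness (assuming PAGE_SIZE = 4096 and one bit per order-sized page, as in __kho_preserve_order()): a page-sized bitmap holds 4096 * 8 = 32768 bits, so a single order-0 bitmap now covers 32768 * 4 KiB = 128 MiB of address space instead of 16 MiB, and preserving 16 GiB of order-0 pages still takes at most 128 bitmaps, i.e. 512 KiB of bitmap memory.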
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 kernel/kexec_handover.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index 7b460806ef4f..e5b91761fbfe 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -69,10 +69,10 @@ early_param("kho", kho_parse_enable);
  * Keep track of memory that is to be preserved across KHO.
  *
  * The serializing side uses two levels of xarrays to manage chunks of per-order
- * 512 byte bitmaps. For instance if PAGE_SIZE = 4096, the entire 1G order of a
- * 1TB system would fit inside a single 512 byte bitmap. For order 0 allocations
- * each bitmap will cover 16M of address space. Thus, for 16G of memory at most
- * 512K of bitmap memory will be needed for order 0.
+ * PAGE_SIZE byte bitmaps. For instance if PAGE_SIZE = 4096, the entire 1G order
+ * of a 8TB system would fit inside a single 4096 byte bitmap. For order 0
+ * allocations each bitmap will cover 128M of address space. Thus, for 16G of
+ * memory at most 512K of bitmap memory will be needed for order 0.
  *
  * This approach is fully incremental, as the serialization progresses folios
  * can continue be aggregated to the tracker. The final step, immediately prior
@@ -80,12 +80,14 @@ early_param("kho", kho_parse_enable);
  * successor kernel to parse.
  */
 
-#define PRESERVE_BITS (512 * 8)
+#define PRESERVE_BITS (PAGE_SIZE * 8)
 
 struct kho_mem_phys_bits {
        DECLARE_BITMAP(preserve, PRESERVE_BITS);
 };
 
+static_assert(sizeof(struct kho_mem_phys_bits) == PAGE_SIZE);
+
 struct kho_mem_phys {
        /*
         * Points to kho_mem_phys_bits, a sparse bitmap array. Each bit is sized
@@ -133,19 +135,19 @@ static struct kho_out kho_out = {
        .finalized = false,
 };
 
-static void *xa_load_or_alloc(struct xarray *xa, unsigned long index, size_t sz)
+static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
 {
        void *res = xa_load(xa, index);
 
        if (res)
                return res;
 
-       void *elm __free(kfree) = kzalloc(sz, GFP_KERNEL);
+       void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
 
        if (!elm)
                return ERR_PTR(-ENOMEM);
 
-       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), sz)))
+       if (WARN_ON(kho_scratch_overlap(virt_to_phys(elm), PAGE_SIZE)))
                return ERR_PTR(-EINVAL);
 
        res = xa_cmpxchg(xa, index, NULL, elm, GFP_KERNEL);
@@ -218,8 +220,7 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
                }
        }
 
-       bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS,
-                               sizeof(*bits));
+       bits = xa_load_or_alloc(&physxa->phys_bits, pfn_high / PRESERVE_BITS);
        if (IS_ERR(bits))
                return PTR_ERR(bits);
On Mon, Oct 20 2025, Pasha Tatashin wrote:
KHO memory preservation metadata is preserved in 512 byte chunks which requires their allocation from slab allocator. Slabs are not safe to be used with KHO because of kfence, and because partial slabs may lead leaks to the next kernel. Change the size to be PAGE_SIZE.
The kfence specifically may cause memory corruption, where it randomly provides slab objects that can be within the scratch area. The reason for that is that kfence allocates its objects prior to KHO scratch is marked as CMA region.
While this change could potentially increase metadata overhead on systems with sparsely preserved memory, this is being mitigated by ongoing work to reduce sparseness during preservation via 1G guest pages. Furthermore, this change aligns with future work on a stateless KHO, which will also use page-sized bitmaps for its radix tree metadata.
Signed-off-by: Pasha Tatashin pasha.tatashin@soleen.com
Reviewed-by: Pratyush Yadav pratyush@kernel.org
[...]
KHO allocates metadata for its preserved memory map using the slab allocator via kzalloc(). This metadata is temporary and is used by the next kernel during early boot to find preserved memory.
A problem arises when KFENCE is enabled. kzalloc() calls can be randomly intercepted by kfence_alloc(), which services the allocation from a dedicated KFENCE memory pool. This pool is allocated early in boot via memblock.
When booting via KHO, the memblock allocator is restricted to a "scratch area", forcing the KFENCE pool to be allocated within it. This creates a conflict, as the scratch area is expected to be ephemeral and overwritable by a subsequent kexec. If KHO metadata is placed in this KFENCE pool, it leads to memory corruption when the next kernel is loaded.
To fix this, modify KHO to allocate its metadata directly from the buddy allocator instead of slab.
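As an illustration of the resulting pattern (a sketch only, with a hypothetical helper name; the real changes are in xa_load_or_alloc() and new_chunk() below): the metadata page comes straight from the page allocator via get_zeroed_page(), which KFENCE does not intercept, is auto-freed on error paths through the DEFINE_FREE(free_page, ...) helper this patch adds to gfp.h, and is handed to the caller with no_free_ptr():

static void *alloc_kho_metadata_page(void)
{
        /* page allocator, never serviced by the KFENCE pool, unlike kzalloc() */
        void *page __free(free_page) = (void *)get_zeroed_page(GFP_KERNEL);

        if (!page)
                return ERR_PTR(-ENOMEM);

        /* ... fill in the metadata ... */

        return no_free_ptr(page);       /* caller now owns the page */
}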
Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
---
 include/linux/gfp.h     | 3 +++
 kernel/kexec_handover.c | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0ceb4e09306c..623bee335383 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -7,6 +7,7 @@
 #include <linux/mmzone.h>
 #include <linux/topology.h>
 #include <linux/alloc_tag.h>
+#include <linux/cleanup.h>
 #include <linux/sched.h>
 
 struct vm_area_struct;
@@ -463,4 +464,6 @@ static inline struct folio *folio_alloc_gigantic_noprof(int order, gfp_t gfp,
 /* This should be paired with folio_put() rather than free_contig_range(). */
 #define folio_alloc_gigantic(...) alloc_hooks(folio_alloc_gigantic_noprof(__VA_ARGS__))
 
+DEFINE_FREE(free_page, void *, free_page((unsigned long)_T))
+
 #endif /* __LINUX_GFP_H */
diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index e5b91761fbfe..de4466b47455 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -142,7 +142,7 @@ static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
        if (res)
                return res;
 
-       void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
+       void *elm __free(free_page) = (void *)get_zeroed_page(GFP_KERNEL);
 
        if (!elm)
                return ERR_PTR(-ENOMEM);
@@ -348,9 +348,9 @@ static_assert(sizeof(struct khoser_mem_chunk) == PAGE_SIZE);
 static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
                                          unsigned long order)
 {
-       struct khoser_mem_chunk *chunk __free(kfree) = NULL;
+       struct khoser_mem_chunk *chunk __free(free_page) = NULL;
 
-       chunk = kzalloc(PAGE_SIZE, GFP_KERNEL);
+       chunk = (void *)get_zeroed_page(GFP_KERNEL);
        if (!chunk)
                return ERR_PTR(-ENOMEM);
On Mon, Oct 20, 2025 at 08:08:49PM -0400, Pasha Tatashin wrote:
This series fixes a memory corruption bug in KHO that occurs when KFENCE is enabled.
The root cause is that KHO metadata, allocated via kzalloc(), can be randomly serviced by kfence_alloc(). When a kernel boots via KHO, the early memblock allocator is restricted to a "scratch area". This forces the KFENCE pool to be allocated within this scratch area, creating a conflict. If KHO metadata is subsequently placed in this pool, it gets corrupted during the next kexec operation.
Patch 1/3 introduces a debug-only feature (CONFIG_KEXEC_HANDOVER_DEBUG) that adds checks to detect and fail any operation that attempts to place KHO metadata or preserved memory within the scratch area. This serves as a validation and diagnostic tool to confirm the problem without affecting production builds.
Patch 2/3 Increases bitmap to PAGE_SIZE, so buddy allocator can be used.
Patch 3/3 Provides the fix by modifying KHO to allocate its metadata directly from the buddy allocator instead of slab. This bypasses the KFENCE interception entirely.
Pasha Tatashin (3):
  liveupdate: kho: warn and fail on metadata or preserved memory in scratch area
  liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  liveupdate: kho: allocate metadata directly from the buddy allocator
With liveupdate: dropped from the subjects
Reviewed-by: Mike Rapoport (Microsoft) rppt@kernel.org
 include/linux/gfp.h              |  3 ++
 kernel/Kconfig.kexec             |  9 ++++
 kernel/Makefile                  |  1 +
 kernel/kexec_handover.c          | 72 ++++++++++++++++++++------------
 kernel/kexec_handover_debug.c    | 25 +++++++++++
 kernel/kexec_handover_internal.h | 16 +++++++
 6 files changed, 100 insertions(+), 26 deletions(-)
 create mode 100644 kernel/kexec_handover_debug.c
 create mode 100644 kernel/kexec_handover_internal.h
base-commit: 6548d364a3e850326831799d7e3ea2d7bb97ba08
2.51.0.869.ge66316f041-goog
On Tue, Oct 21, 2025 at 2:01 AM Mike Rapoport rppt@kernel.org wrote:
On Mon, Oct 20, 2025 at 08:08:49PM -0400, Pasha Tatashin wrote:
This series fixes a memory corruption bug in KHO that occurs when KFENCE is enabled.
The root cause is that KHO metadata, allocated via kzalloc(), can be randomly serviced by kfence_alloc(). When a kernel boots via KHO, the early memblock allocator is restricted to a "scratch area". This forces the KFENCE pool to be allocated within this scratch area, creating a conflict. If KHO metadata is subsequently placed in this pool, it gets corrupted during the next kexec operation.
Patch 1/3 introduces a debug-only feature (CONFIG_KEXEC_HANDOVER_DEBUG) that adds checks to detect and fail any operation that attempts to place KHO metadata or preserved memory within the scratch area. This serves as a validation and diagnostic tool to confirm the problem without affecting production builds.
Patch 2/3 Increases bitmap to PAGE_SIZE, so buddy allocator can be used.
Patch 3/3 Provides the fix by modifying KHO to allocate its metadata directly from the buddy allocator instead of slab. This bypasses the KFENCE interception entirely.
Pasha Tatashin (3):
  liveupdate: kho: warn and fail on metadata or preserved memory in scratch area
  liveupdate: kho: Increase metadata bitmap size to PAGE_SIZE
  liveupdate: kho: allocate metadata directly from the buddy allocator
With liveupdate: dropped from the subjects
I noticed "liveupdate: " subject prefix left over only after sending these patches. Andrew, would you like me to resend them, or could you remove the prefix from these patches?
Reviewed-by: Mike Rapoport (Microsoft) rppt@kernel.org
 include/linux/gfp.h              |  3 ++
 kernel/Kconfig.kexec             |  9 ++++
 kernel/Makefile                  |  1 +
 kernel/kexec_handover.c          | 72 ++++++++++++++++++++------------
 kernel/kexec_handover_debug.c    | 25 +++++++++++
 kernel/kexec_handover_internal.h | 16 +++++++
 6 files changed, 100 insertions(+), 26 deletions(-)
 create mode 100644 kernel/kexec_handover_debug.c
 create mode 100644 kernel/kexec_handover_internal.h
base-commit: 6548d364a3e850326831799d7e3ea2d7bb97ba08
2.51.0.869.ge66316f041-goog
-- Sincerely yours, Mike.
On Tue, 21 Oct 2025 12:04:47 -0400 Pasha Tatashin pasha.tatashin@soleen.com wrote:
With liveupdate: dropped from the subjects
I noticed "liveupdate: " subject prefix left over only after sending these patches. Andrew, would you like me to resend them, or could you remove the prefix from these patches?
No problem.
What should we do about -stable kernels?
It doesn't seem worthwhile to backport a 3-patch series for a pretty obscure bug. Perhaps we could merge a patch which disables this combination in Kconfig, as a 6.18-rcX hotfix with a cc:stable.
Then for 6.19-rc1 we add this series and a fourth patch which undoes that Kconfig change?
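For illustration, the interim hotfix described above could be as small as a one-line dependency (a hypothetical sketch, not a posted patch; the exact entry and placement in kernel/Kconfig.kexec are assumptions):

config KEXEC_HANDOVER
        bool "kexec handover"
        ...
        depends on !KFENCE      # temporary: KHO metadata may land in the KFENCE pool

with the dependency dropped again once this series lands.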
On Tue, Oct 21, 2025 at 4:53 PM Andrew Morton akpm@linux-foundation.org wrote:
On Tue, 21 Oct 2025 12:04:47 -0400 Pasha Tatashin pasha.tatashin@soleen.com wrote:
With liveupdate: dropped from the subjects
I noticed "liveupdate: " subject prefix left over only after sending these patches. Andrew, would you like me to resend them, or could you remove the prefix from these patches?
No problem.
What should we do about -stable kernels?
It doesn't seem worthwhile to backport a 3-patch series for a pretty obscure bug. Perhaps we could merge a patch which disables this
We are using KHO and have had obscure crashes due to this memory corruption, with stacks all over the place. I would prefer this fix to be properly backported to stable so we can also automatically consume it once we switch to the upstream KHO. I do not think disabling kfence in the Google fleet to resolve this problem would work for us, so if it is not going to be part of stable, we would have to backport it manually anyway.
Thanks, Pasha
combination in Kconfig, as a 6.18-rcX hotfix with a cc:stable.
Then for 6.19-rc1 we add this series and a fourth patch which undoes that Kconfig change?
On Tue, Oct 21, 2025 at 08:15:04PM -0400, Pasha Tatashin wrote:
On Tue, Oct 21, 2025 at 4:53 PM Andrew Morton akpm@linux-foundation.org wrote:
On Tue, 21 Oct 2025 12:04:47 -0400 Pasha Tatashin pasha.tatashin@soleen.com wrote:
With liveupdate: dropped from the subjects
I noticed "liveupdate: " subject prefix left over only after sending these patches. Andrew, would you like me to resend them, or could you remove the prefix from these patches?
No problem.
What should we do about -stable kernels?
It doesn't seem worthwhile to backport a 3-patch series for a pretty obscure bug. Perhaps we could merge a patch which disables this
We are using KHO and have had obscure crashes due to this memory corruption, with stacks all over the place. I would prefer this fix to be properly backported to stable so we can also automatically consume it once we switch to the upstream KHO. I do not think disabling kfence in the Google fleet to resolve this problem would work for us, so if it is not going to be part of stable, we would have to backport it manually anyway.
The backport to stable is only relevant to 6.17, which is going to be EOL soon anyway. Do you really think it's worth the effort?
Thanks, Pasha
combination in Kconfig, as a 6.18-rcX hotfix with a cc:stable.
Then for 6.19-rc1 we add this series and a fourth patch which undoes that Kconfig change?
On Wed, 22 Oct 2025 08:48:34 +0300 Mike Rapoport rppt@kernel.org wrote:
We are using KHO and have had obscure crashes due to this memory corruption, with stacks all over the place. I would prefer this fix to be properly backported to stable so we can also automatically consume it once we switch to the upstream KHO. I do not think disabling kfence in the Google fleet to resolve this problem would work for us, so if it is not going to be part of stable, we would have to backport it manually anyway.
The backport to stable is only relevant to 6.17 that's going to be EOL soon anyway. Do you really think it's worth the effort?
If some organization is basing their next kernel on 6.17 then they'd like it.
Do we assume that all organizations follow the LTS schedule? I haven't been doing that.
On Tue, 21 Oct 2025 20:15:04 -0400 Pasha Tatashin pasha.tatashin@soleen.com wrote:
On Tue, Oct 21, 2025 at 4:53 PM Andrew Morton akpm@linux-foundation.org wrote:
On Tue, 21 Oct 2025 12:04:47 -0400 Pasha Tatashin pasha.tatashin@soleen.com wrote:
With liveupdate: dropped from the subjects
I noticed "liveupdate: " subject prefix left over only after sending these patches. Andrew, would you like me to resend them, or could you remove the prefix from these patches?
No problem.
What should we do about -stable kernels?
It doesn't seem worthwhile to backport a 3-patch series for a pretty obscure bug. Perhaps we could merge a patch which disables this
We are using KHO and have had obscure crashes due to this memory corruption, with stacks all over the place. I would prefer this fix to be properly backported to stable so we can also automatically consume it once we switch to the upstream KHO.
Oh.
I added this important info to the [0/N] changelog, added
Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Cc: stable@vger.kernel.org
to all three and moved this into mm.git's mm-hotfixes branch.