Previously, commit ed129ec9057f ("KVM: x86: forcibly leave nested mode
on vCPU reset") addressed an issue where a triple fault occurring in
nested mode could lead to use-after-free scenarios. However, the commit
did not handle the analogous situation for System Management Mode (SMM).
This omission triggers a WARN, via the check in kvm_vcpu_reset(), when a
vCPU reset occurs while the vCPU is still in SMM. The situation was
reproduced with Syzkaller via the sequence below (sketched in code after
the list):
1) Creating a KVM VM and vCPU
2) Sending a KVM_SMI ioctl to explicitly enter SMM
3) Executing invalid instructions causing consecutive exceptions and
eventually a triple fault
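A minimal userspace sketch of that ioctl sequence, assuming only the
standard /dev/kvm interface, might look as follows. This is illustrative
only: a real reproducer also needs guest memory and register setup plus
error handling, all of which are deliberately elided here.

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int main(void)
  {
          int kvm  = open("/dev/kvm", O_RDWR);
          int vm   = ioctl(kvm, KVM_CREATE_VM, 0);   /* 1) create a VM ...  */
          int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);  /*    ... and a vCPU   */

          ioctl(vcpu, KVM_SMI, 0);                   /* 2) force the vCPU into SMM */

          /*
           * 3) Run the vCPU; without sane guest state the guest hits
           * invalid instructions, takes consecutive exceptions and ends
           * up in a triple fault (shutdown) while still in SMM.
           */
          ioctl(vcpu, KVM_RUN, 0);
          return 0;
  }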
The issue manifests as follows:
WARNING: CPU: 0 PID: 25506 at arch/x86/kvm/x86.c:12112
kvm_vcpu_reset+0x1d2/0x1530 arch/x86/kvm/x86.c:12112
Modules linked in:
CPU: 0 PID: 25506 Comm: syz-executor.0 Not tainted
6.1.130-syzkaller-00157-g164fe5dde9b6 #0
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
BIOS 1.12.0-1 04/01/2014
RIP: 0010:kvm_vcpu_reset+0x1d2/0x1530 arch/x86/kvm/x86.c:12112
Call Trace:
<TASK>
shutdown_interception+0x66/0xb0 arch/x86/kvm/svm/svm.c:2136
svm_invoke_exit_handler+0x110/0x530 arch/x86/kvm/svm/svm.c:3395
svm_handle_exit+0x424/0x920 arch/x86/kvm/svm/svm.c:3457
vcpu_enter_guest arch/x86/kvm/x86.c:10959 [inline]
vcpu_run+0x2c43/0x5a90 arch/x86/kvm/x86.c:11062
kvm_arch_vcpu_ioctl_run+0x50f/0x1cf0 arch/x86/kvm/x86.c:11283
kvm_vcpu_ioctl+0x570/0xf00 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4122
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl fs/ioctl.c:856 [inline]
__x64_sys_ioctl+0x19a/0x210 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x35/0x80 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Real hardware responds to a triple fault with a CPU reset, which
inherently takes the CPU out of SMM, so explicitly leave SMM before the
WARN check. Although the subsequent code clears the vCPU hflags,
including the SMM flag, calling kvm_smm_changed() makes the SMM exit
explicit and routes it through the proper handling path, matching
hardware behavior.
Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
Fixes: ed129ec9057f ("KVM: x86: forcibly leave nested mode on vCPU reset")
Cc: stable(a)vger.kernel.org
Signed-off-by: Mikhail Lobanov <m.lobanov(a)rosa.ru>
---
arch/x86/kvm/x86.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4b64ab350bcd..f1c95c21703a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12409,6 +12409,9 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
if (is_guest_mode(vcpu))
kvm_leave_nested(vcpu);
+ if (is_smm(vcpu))
+ kvm_smm_changed(vcpu, false);
+
kvm_lapic_reset(vcpu, init_event);
WARN_ON_ONCE(is_guest_mode(vcpu) || is_smm(vcpu));
--
2.47.2
Hi All,
Changes since v1:
- left fixes only, improvements will be posted separately;
- Fixes: and -stable tags added to patch descriptions;
This series is an attempt to fix the violation of lazy MMU mode context
requirement as described for arch_enter_lazy_mmu_mode():
This mode can only be entered and left under the protection of
the page table locks for all page tables which may be modified.
On s390, if I make arch_enter_lazy_mmu_mode() -> preempt_enable() and
arch_leave_lazy_mmu_mode() -> preempt_disable(), I get this:
[ 553.332108] preempt_count: 1, expected: 0
[ 553.332117] no locks held by multipathd/2116.
[ 553.332128] CPU: 24 PID: 2116 Comm: multipathd Kdump: loaded Tainted:
[ 553.332139] Hardware name: IBM 3931 A01 701 (LPAR)
[ 553.332146] Call Trace:
[ 553.332152] [<00000000158de23a>] dump_stack_lvl+0xfa/0x150
[ 553.332167] [<0000000013e10d12>] __might_resched+0x57a/0x5e8
[ 553.332178] [<00000000144eb6c2>] __alloc_pages+0x2ba/0x7c0
[ 553.332189] [<00000000144d5cdc>] __get_free_pages+0x2c/0x88
[ 553.332198] [<00000000145663f6>] kasan_populate_vmalloc_pte+0x4e/0x110
[ 553.332207] [<000000001447625c>] apply_to_pte_range+0x164/0x3c8
[ 553.332218] [<000000001448125a>] apply_to_pmd_range+0xda/0x318
[ 553.332226] [<000000001448181c>] __apply_to_page_range+0x384/0x768
[ 553.332233] [<0000000014481c28>] apply_to_page_range+0x28/0x38
[ 553.332241] [<00000000145665da>] kasan_populate_vmalloc+0x82/0x98
[ 553.332249] [<00000000144c88d0>] alloc_vmap_area+0x590/0x1c90
[ 553.332257] [<00000000144ca108>] __get_vm_area_node.constprop.0+0x138/0x260
[ 553.332265] [<00000000144d17fc>] __vmalloc_node_range+0x134/0x360
[ 553.332274] [<0000000013d5dbf2>] alloc_thread_stack_node+0x112/0x378
[ 553.332284] [<0000000013d62726>] dup_task_struct+0x66/0x430
[ 553.332293] [<0000000013d63962>] copy_process+0x432/0x4b80
[ 553.332302] [<0000000013d68300>] kernel_clone+0xf0/0x7d0
[ 553.332311] [<0000000013d68bd6>] __do_sys_clone+0xae/0xc8
[ 553.332400] [<0000000013d68dee>] __s390x_sys_clone+0xd6/0x118
[ 553.332410] [<0000000013c9d34c>] do_syscall+0x22c/0x328
[ 553.332419] [<00000000158e7366>] __do_syscall+0xce/0xf0
[ 553.332428] [<0000000015913260>] system_call+0x70/0x98
This exposes a KASAN issue that is fixed with patch 1 and an
apply_to_pte_range() issue that is fixed with patch 3, while patch 2 is a
prerequisite.
Commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu
mode") looks like a powerpc-only fix, yet it does not entirely conform to
the requirement quoted above (the page tables themselves are still not
protected).
If I am not mistaken, xen and sparc are in a similar situation.
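For reference, the locking discipline that requirement describes would
look roughly like this for kernel page tables. This is an illustrative
sketch only, not code from this series, and the helper name is made up:

  /*
   * Illustrative only: clear a run of kernel PTEs while holding
   * init_mm's page table lock around the lazy MMU section.
   */
  static void sketch_clear_kernel_ptes(pte_t *ptep, unsigned long addr,
                                       unsigned long nr)
  {
          spin_lock(&init_mm.page_table_lock);
          arch_enter_lazy_mmu_mode();

          for (; nr--; ptep++, addr += PAGE_SIZE)
                  pte_clear(&init_mm, addr, ptep);

          arch_leave_lazy_mmu_mode();
          spin_unlock(&init_mm.page_table_lock);
  }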
Thanks!
Alexander Gordeev (3):
kasan: Avoid sleepable page allocation from atomic context
mm: Cleanup apply_to_pte_range() routine
mm: Protect kernel pgtables in apply_to_pte_range()
mm/kasan/shadow.c | 9 +++------
mm/memory.c | 33 +++++++++++++++++++++------------
2 files changed, 24 insertions(+), 18 deletions(-)
--
2.45.2
The quilt patch titled
Subject: mm: protect kernel pgtables in apply_to_pte_range()
has been removed from the -mm tree. Its filename was
mm-protect-kernel-pgtables-in-apply_to_pte_range.patch
This patch was dropped because an updated version will be issued
------------------------------------------------------
From: Alexander Gordeev <agordeev(a)linux.ibm.com>
Subject: mm: protect kernel pgtables in apply_to_pte_range()
Date: Tue, 8 Apr 2025 18:07:32 +0200
The lazy MMU mode can only be entered and left under the protection of the
page table locks for all page tables which may be modified. Yet, when it
comes to kernel mappings, apply_to_pte_range() does not take any locks.
That does not conform to the arch_enter|leave_lazy_mmu_mode() semantics
and could potentially lead to rescheduling a process while in lazy MMU
mode or to races on kernel page table updates.
Link: https://lkml.kernel.org/r/ef8f6538b83b7fc3372602f90375348f9b4f3596.17441281…
Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
Signed-off-by: Alexander Gordeev <agordeev(a)linux.ibm.com>
Cc: Andrey Ryabinin <ryabinin.a.a(a)gmail.com>
Cc: Guenter Roeck <linux(a)roeck-us.net>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Jeremy Fitzhardinge <jeremy(a)goop.org>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: Nicholas Piggin <npiggin(a)gmail.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/kasan/shadow.c | 7 ++-----
mm/memory.c | 5 ++++-
2 files changed, 6 insertions(+), 6 deletions(-)
--- a/mm/kasan/shadow.c~mm-protect-kernel-pgtables-in-apply_to_pte_range
+++ a/mm/kasan/shadow.c
@@ -308,14 +308,14 @@ static int kasan_populate_vmalloc_pte(pt
__memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
- spin_lock(&init_mm.page_table_lock);
if (likely(pte_none(ptep_get(ptep)))) {
set_pte_at(&init_mm, addr, ptep, pte);
page = 0;
}
- spin_unlock(&init_mm.page_table_lock);
+
if (page)
free_page(page);
+
return 0;
}
@@ -401,13 +401,10 @@ static int kasan_depopulate_vmalloc_pte(
page = (unsigned long)__va(pte_pfn(ptep_get(ptep)) << PAGE_SHIFT);
- spin_lock(&init_mm.page_table_lock);
-
if (likely(!pte_none(ptep_get(ptep)))) {
pte_clear(&init_mm, addr, ptep);
free_page(page);
}
- spin_unlock(&init_mm.page_table_lock);
return 0;
}
--- a/mm/memory.c~mm-protect-kernel-pgtables-in-apply_to_pte_range
+++ a/mm/memory.c
@@ -2926,6 +2926,7 @@ static int apply_to_pte_range(struct mm_
pte = pte_offset_kernel(pmd, addr);
if (!pte)
return err;
+ spin_lock(&init_mm.page_table_lock);
} else {
if (create)
pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
@@ -2951,7 +2952,9 @@ static int apply_to_pte_range(struct mm_
arch_leave_lazy_mmu_mode();
- if (mm != &init_mm)
+ if (mm == &init_mm)
+ spin_unlock(&init_mm.page_table_lock);
+ else
pte_unmap_unlock(mapped_pte, ptl);
*mask |= PGTBL_PTE_MODIFIED;
_
Patches currently in -mm which might be from agordeev(a)linux.ibm.com are
The quilt patch titled
Subject: kasan: avoid sleepable page allocation from atomic context
has been removed from the -mm tree. Its filename was
kasan-avoid-sleepable-page-allocation-from-atomic-context.patch
This patch was dropped because an updated version will be issued
------------------------------------------------------
From: Alexander Gordeev <agordeev(a)linux.ibm.com>
Subject: kasan: avoid sleepable page allocation from atomic context
Date: Tue, 8 Apr 2025 18:07:30 +0200
Patch series "mm: Fix apply_to_pte_range() vs lazy MMU mode", v2.
This series is an attempt to fix the violation of lazy MMU mode context
requirement as described for arch_enter_lazy_mmu_mode():
This mode can only be entered and left under the protection of
the page table locks for all page tables which may be modified.
On s390, if I make arch_enter_lazy_mmu_mode() -> preempt_enable() and
arch_leave_lazy_mmu_mode() -> preempt_disable(), I get this:
[ 553.332108] preempt_count: 1, expected: 0
[ 553.332117] no locks held by multipathd/2116.
[ 553.332128] CPU: 24 PID: 2116 Comm: multipathd Kdump: loaded Tainted:
[ 553.332139] Hardware name: IBM 3931 A01 701 (LPAR)
[ 553.332146] Call Trace:
[ 553.332152] [<00000000158de23a>] dump_stack_lvl+0xfa/0x150
[ 553.332167] [<0000000013e10d12>] __might_resched+0x57a/0x5e8
[ 553.332178] [<00000000144eb6c2>] __alloc_pages+0x2ba/0x7c0
[ 553.332189] [<00000000144d5cdc>] __get_free_pages+0x2c/0x88
[ 553.332198] [<00000000145663f6>] kasan_populate_vmalloc_pte+0x4e/0x110
[ 553.332207] [<000000001447625c>] apply_to_pte_range+0x164/0x3c8
[ 553.332218] [<000000001448125a>] apply_to_pmd_range+0xda/0x318
[ 553.332226] [<000000001448181c>] __apply_to_page_range+0x384/0x768
[ 553.332233] [<0000000014481c28>] apply_to_page_range+0x28/0x38
[ 553.332241] [<00000000145665da>] kasan_populate_vmalloc+0x82/0x98
[ 553.332249] [<00000000144c88d0>] alloc_vmap_area+0x590/0x1c90
[ 553.332257] [<00000000144ca108>] __get_vm_area_node.constprop.0+0x138/0x260
[ 553.332265] [<00000000144d17fc>] __vmalloc_node_range+0x134/0x360
[ 553.332274] [<0000000013d5dbf2>] alloc_thread_stack_node+0x112/0x378
[ 553.332284] [<0000000013d62726>] dup_task_struct+0x66/0x430
[ 553.332293] [<0000000013d63962>] copy_process+0x432/0x4b80
[ 553.332302] [<0000000013d68300>] kernel_clone+0xf0/0x7d0
[ 553.332311] [<0000000013d68bd6>] __do_sys_clone+0xae/0xc8
[ 553.332400] [<0000000013d68dee>] __s390x_sys_clone+0xd6/0x118
[ 553.332410] [<0000000013c9d34c>] do_syscall+0x22c/0x328
[ 553.332419] [<00000000158e7366>] __do_syscall+0xce/0xf0
[ 553.332428] [<0000000015913260>] system_call+0x70/0x98
This exposes a KASAN issue that is fixed with patch 1 and an
apply_to_pte_range() issue that is fixed with patch 3, while patch 2 is a
prerequisite.
Commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy mmu
mode") looks like a powerpc-only fix, yet it does not entirely conform to
the requirement quoted above (the page tables themselves are still not
protected).
If I am not mistaken, xen and sparc are in a similar situation.
This patch (of 3):
apply_to_page_range() enters lazy MMU mode and then invokes the
kasan_populate_vmalloc_pte() callback on each page table walk iteration.
The lazy MMU mode may only be entered under the protection of the page
table lock. However, the callback can sleep when trying to allocate a
single page.
Change the __get_free_page() allocation mode from GFP_KERNEL to
GFP_ATOMIC to avoid scheduling out while in atomic context.
Link: https://lkml.kernel.org/r/cover.1744128123.git.agordeev@linux.ibm.com
Link: https://lkml.kernel.org/r/2d9f4ac4528701b59d511a379a60107fa608ad30.17441281…
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Signed-off-by: Alexander Gordeev <agordeev(a)linux.ibm.com>
Cc: Andrey Ryabinin <ryabinin.a.a(a)gmail.com>
Cc: Guenter Roeck <linux(a)roeck-us.net>
Cc: Hugh Dickins <hughd(a)google.com>
Cc: Jeremy Fitzhardinge <jeremy(a)goop.org>
Cc: Juergen Gross <jgross(a)suse.com>
Cc: Nicholas Piggin <npiggin(a)gmail.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/kasan/shadow.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/kasan/shadow.c~kasan-avoid-sleepable-page-allocation-from-atomic-context
+++ a/mm/kasan/shadow.c
@@ -301,7 +301,7 @@ static int kasan_populate_vmalloc_pte(pt
if (likely(!pte_none(ptep_get(ptep))))
return 0;
- page = __get_free_page(GFP_KERNEL);
+ page = __get_free_page(GFP_ATOMIC);
if (!page)
return -ENOMEM;
_
Patches currently in -mm which might be from agordeev(a)linux.ibm.com are
mm-cleanup-apply_to_pte_range-routine.patch
mm-protect-kernel-pgtables-in-apply_to_pte_range.patch
Hello,
I'm investigating whether v5.15 and earlier versions are vulnerable to the following CVEs. Could you please help confirm the cases below?
For CVE-2024-36912, the suggested fix is 211f514ebf1e ("Drivers: hv: vmbus: Track decrypted status in vmbus_gpadl") according to https://www.cve.org/CVERecord?id=CVE-2024-36912
It seems 211f514ebf1e is based on d4dccf353db8 ("Drivers: hv: vmbus: Mark vmbus ring buffer visible to host in Isolation VM"), which was introduced in v5.16. In v5.15 and earlier versions, the vmbus ring buffer hadn't been made visible to the host, so there's no need to backport 211f514ebf1e to those versions, right?
For CVE-2024-36913, the suggested fix is 03f5a999adba ("Drivers: hv: vmbus: Leak pages if set_memory_encrypted() fails") according to https://www.cve.org/CVERecord?id=CVE-2024-36913
It seems 03f5a999adba is based on f2f136c05fb6 ("Drivers: hv: vmbus: Add SNP support for VMbus channel initiate message"), which was introduced in v5.16. In v5.15 and earlier versions, the monitor pages hadn't been made visible to the host, so there's no need to backport 03f5a999adba to those versions, right?
Thanks,
Zhe
This series backports some recent fixes for SVE/KVM interactions from
Mark Rutland to v5.15.
Signed-off-by: Mark Brown <broonie(a)kernel.org>
---
Changes in v3:
- Explicitly include "KVM: arm64: Always start with clearing SVE flag on
load", it was included previously as part of a conflcit resolution.
- Link to v2: https://lore.kernel.org/r/20250403-stable-sve-5-15-v2-0-30a36a78a20a@kernel…
Changes in v2:
- Resend with Greg and the stable list added.
- Link to v1: https://lore.kernel.org/r/20250402-stable-sve-5-15-v1-0-84d0e5ff1102@kernel…
---
Fuad Tabba (1):
KVM: arm64: Calculate cptr_el2 traps on activating traps
Marc Zyngier (2):
KVM: arm64: Get rid of host SVE tracking/saving
KVM: arm64: Always start with clearing SVE flag on load
Mark Brown (4):
KVM: arm64: Discard any SVE state when entering KVM guests
arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE
arm64/fpsimd: Have KVM explicitly say which FP registers to save
arm64/fpsimd: Stop using TIF_SVE to manage register saving in KVM
Mark Rutland (4):
KVM: arm64: Unconditionally save+flush host FPSIMD/SVE/SME state
KVM: arm64: Remove host FPSIMD saving for non-protected KVM
KVM: arm64: Remove VHE host restore of CPACR_EL1.ZEN
KVM: arm64: Eagerly switch ZCR_EL{1,2}
arch/arm64/include/asm/fpsimd.h | 4 +-
arch/arm64/include/asm/kvm_host.h | 17 +++--
arch/arm64/include/asm/kvm_hyp.h | 7 ++
arch/arm64/include/asm/processor.h | 7 ++
arch/arm64/kernel/fpsimd.c | 117 +++++++++++++++++++++++---------
arch/arm64/kernel/process.c | 3 +
arch/arm64/kernel/ptrace.c | 3 +
arch/arm64/kernel/signal.c | 3 +
arch/arm64/kvm/arm.c | 1 -
arch/arm64/kvm/fpsimd.c | 72 +++++++++-----------
arch/arm64/kvm/hyp/entry.S | 5 ++
arch/arm64/kvm/hyp/include/hyp/switch.h | 86 +++++++++++++++--------
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 9 ++-
arch/arm64/kvm/hyp/nvhe/switch.c | 52 +++++++++-----
arch/arm64/kvm/hyp/vhe/switch.c | 4 ++
arch/arm64/kvm/reset.c | 3 +
16 files changed, 266 insertions(+), 127 deletions(-)
---
base-commit: 0c935c049b5c196b83b968c72d348ae6fff83ea2
change-id: 20250326-stable-sve-5-15-bfd75482dcfa
Best regards,
--
Mark Brown <broonie(a)kernel.org>
From: Chris Wilson <chris.p.wilson(a)intel.com>
commit 78a033433a5ae4fee85511ee075bc9a48312c79e upstream.
If we abort driver initialisation in the middle of gt/engine discovery,
some engines will be fully setup and some not. Those incompletely setup
engines only have 'engine->release == NULL' and so will leak any of the
common objects allocated.
v2:
- Drop the destroy_pinned_context() helper for now. It's not really
worth it with just a single callsite at the moment. (Janusz)
Signed-off-by: Chris Wilson <chris.p.wilson(a)intel.com>
Cc: Janusz Krzysztofik <janusz.krzysztofik(a)linux.intel.com>
Signed-off-by: Matt Roper <matthew.d.roper(a)intel.com>
Reviewed-by: Janusz Krzysztofik <janusz.krzysztofik(a)linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220915232654.3283095-2-matt…
Signed-off-by: Zhi Yang <Zhi.Yang(a)windriver.com>
Signed-off-by: He Zhe <zhe.he(a)windriver.com>
---
Build test passed.
---
drivers/gpu/drm/i915/gt/intel_engine_cs.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index a19537706ed1..eb6f4d7f1e34 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -904,8 +904,13 @@ int intel_engines_init(struct intel_gt *gt)
return err;
err = setup(engine);
- if (err)
+ if (err) {
+ intel_engine_cleanup_common(engine);
return err;
+ }
+
+ /* The backend should now be responsible for cleanup */
+ GEM_BUG_ON(engine->release == NULL);
err = engine_init_common(engine);
if (err)
--
2.34.1