Linux v5.7 introduced a couple of patches to make walking the stack of the idle tasks reliable when Linux runs as a Xen PV guest. Attached are the backports for the 5.4 LTS series, which are of special interest when using the livepatch subsystem.
Markus
From: Miroslav Benes <mbenes@suse.cz>
upstream commit 2f62f36e62daec43aa7b9633ef7f18e042a80bed
The unwinder reports the boot CPU idle task's stack on XEN PV as unreliable, which affects at least live patching. There are two reasons for this. First, the task does not follow the x86 convention that its stack starts at the offset right below saved pt_regs. That convention allows the unwinder to easily detect the end of the stack and verify it. Second, the startup_xen() function does not store the return address before jumping to xen_start_kernel(), which confuses the unwinder.
Amend both issues by moving the starting point of the initial stack in startup_xen() and by storing the return address before the jump, which is exactly what the call instruction does.
Signed-off-by: Miroslav Benes <mbenes@suse.cz>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Markus Boehme <markubo@amazon.com>
---
 arch/x86/xen/xen-head.S | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index c1d8b90aa4e2..deb6abe5d346 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -35,7 +35,11 @@ ENTRY(startup_xen)
 	rep __ASM_SIZE(stos)
 
 	mov %_ASM_SI, xen_start_info
-	mov $init_thread_union+THREAD_SIZE, %_ASM_SP
+#ifdef CONFIG_X86_64
+	mov initial_stack(%rip), %rsp
+#else
+	mov pa(initial_stack), %esp
+#endif
 
 #ifdef CONFIG_X86_64
 	/* Set up %gs.
@@ -51,7 +55,7 @@ ENTRY(startup_xen)
 	wrmsr
 #endif
 
-	jmp xen_start_kernel
+	call xen_start_kernel
 END(startup_xen)
 	__FINIT
 #endif
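For readers less familiar with the unwinder detail the commit message above relies on: a reliable stack walk must terminate exactly at the offset right below the saved pt_regs at the top of the task stack. The following self-contained C sketch illustrates that end-of-stack check; THREAD_SIZE, the pt_regs layout, and the helper name are simplified stand-ins for illustration, not the kernel's actual code.

/*
 * Illustrative sketch only -- simplified stand-ins, not kernel code.
 */
#include <stdbool.h>
#include <stdint.h>

#define THREAD_SIZE (16 * 1024UL)	/* assumed stack size for the sketch */

/* Stand-in for the register frame saved at kernel entry. */
struct pt_regs { unsigned long regs[21]; };

/*
 * A stack walk is reliable only if it ends exactly at the offset
 * right below the saved pt_regs at the top of the stack, i.e. at
 * stack_base + THREAD_SIZE - sizeof(struct pt_regs).
 */
static bool ends_below_pt_regs(uintptr_t sp, uintptr_t stack_base)
{
	return sp == stack_base + THREAD_SIZE - sizeof(struct pt_regs);
}

Before this patch, the boot CPU idle task's stack on XEN PV did not satisfy a check of this shape, so the unwinder had no way to verify it had walked the whole stack.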
From: Miroslav Benes <mbenes@suse.cz>
upstream commit c3881eb58d56116c79ac4ee4f40fd15ead124c4b
The unwinder reports the secondary CPU idle tasks' stacks on XEN PV as unreliable, which affects at least live patching. cpu_initialize_context() sets up the context of the CPU through the VCPUOP_initialise hypercall. After the CPU is woken up, the idle task starts in the cpu_bringup_and_idle() function and its stack starts at the offset right below pt_regs. The unwinder correctly detects the end of the stack there, but it is confused by the NULL return address in the last frame.
Introduce a wrapper in assembly which just calls cpu_bringup_and_idle(). The return address is thus pushed on the stack, and the wrapper contains the annotation hint for the unwinder regarding the stack state.
Signed-off-by: Miroslav Benes <mbenes@suse.cz>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Markus Boehme <markubo@amazon.com>
---
 arch/x86/xen/smp_pv.c   |  3 ++-
 arch/x86/xen/xen-head.S | 10 ++++++++++
 2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index 64e6ec2c32a7..9d9777ded5f7 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -53,6 +53,7 @@ static DEFINE_PER_CPU(struct xen_common_irq, xen_irq_work) = { .irq = -1 };
 static DEFINE_PER_CPU(struct xen_common_irq, xen_pmu_irq) = { .irq = -1 };
 
 static irqreturn_t xen_irq_work_interrupt(int irq, void *dev_id);
+void asm_cpu_bringup_and_idle(void);
 
 static void cpu_bringup(void)
 {
@@ -310,7 +311,7 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 	 * pointing just below where pt_regs would be if it were a normal
 	 * kernel entry.
 	 */
-	ctxt->user_regs.eip = (unsigned long)cpu_bringup_and_idle;
+	ctxt->user_regs.eip = (unsigned long)asm_cpu_bringup_and_idle;
 	ctxt->flags = VGCF_IN_KERNEL;
 	ctxt->user_regs.eflags = 0x1000; /* IOPL_RING1 */
 	ctxt->user_regs.ds = __USER_DS;
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index deb6abe5d346..591544b4b85a 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -58,6 +58,16 @@ ENTRY(startup_xen)
 	call xen_start_kernel
 END(startup_xen)
 	__FINIT
+
+#ifdef CONFIG_XEN_PV_SMP
+.pushsection .text
+SYM_CODE_START(asm_cpu_bringup_and_idle)
+	UNWIND_HINT_EMPTY
+
+	call cpu_bringup_and_idle
+SYM_CODE_END(asm_cpu_bringup_and_idle)
+.popsection
+#endif
 #endif
 
 .pushsection .text
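To make the effect of the wrapper more concrete, here is a rough C analogy; the function name is hypothetical and the real fix is the assembly wrapper in the diff above. A call instruction pushes its return address before transferring control, so the unwinder finds a verifiable terminating frame instead of a NULL return address.

/*
 * Hypothetical C analogy of the asm_cpu_bringup_and_idle wrapper
 * above. The real wrapper is assembly; this only illustrates why
 * starting the CPU in a function that *calls* the real entry point
 * helps the unwinder.
 */
void cpu_bringup_and_idle(void);	/* real entry point, never returns */

void bringup_trampoline(void)		/* hypothetical name */
{
	/*
	 * The call emitted here pushes a return address onto the stack
	 * before entering cpu_bringup_and_idle(), so the last frame the
	 * unwinder sees is this trampoline rather than NULL.
	 */
	cpu_bringup_and_idle();
}

Note that a compiler may turn such a tail call into a jmp under optimization, which is one reason the kernel writes the wrapper in assembly: it guarantees the call and carries the UNWIND_HINT_EMPTY annotation for the unwinder.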
On Mon, May 16, 2022 at 10:54:37PM +0200, Markus Boehme wrote:
> Linux v5.7 introduced a couple of patches to make walking the stack of the
> idle tasks reliable when Linux runs as a Xen PV guest. Attached are the
> backports for the 5.4 LTS series, which are of special interest when using
> the livepatch subsystem.
Now queued up, thanks.
greg k-h