This patch set addresses the KVM issue described in Geoff's kexec patch set [1]. (The subject was changed from "arm64: kexec: fix kvm issue in kexec.") See "Changes" below.
The basic approach here is to define a kvm tear-down function and add a reboot hook so that the 1st kernel is shut down gracefully. This way, kvm is freed from kexec-specific cleanup, and yet we allow future enhancements, like cpu hotplug and building kvm as a module, to build on the tear-down function. In this sense, patches #1 and #2 (and #5) actually fix the problem, while #3 and #4 are rather informative.
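In outline, the hook added in patch #2 looks like this (simplified excerpt; see patch #2 below for the full context):

	static int kvm_reboot_notify(struct notifier_block *nb,
				     unsigned long val, void *v)
	{
		/* Reset each CPU in EL2 to its initial state. */
		on_each_cpu(kvm_cpu_reset, NULL, 1);
		return NOTIFY_DONE;
	}

	static struct notifier_block kvm_reboot_nb = {
		.notifier_call = kvm_reboot_notify,
	};

	/* registered from kvm_arch_init() */
	register_reboot_notifier(&kvm_reboot_nb);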
I confirmed that the 1st kernel successfully shut down and the 2nd kernel started with the following messages:
  kvm [1]: Using HYP init bounce page @8fa52f000
  kvm [1]: interrupt-controller@2c02f000 IRQ6
  kvm [1]: timer IRQ3
  kvm [1]: Hyp mode initialized successfully
Test target: Base fast model
Version: kernel v4.0-rc4 + Geoff's kexec v8
I still have some concerns about the following points. Please let me know if you have any comments:
1) Call kvm_cpu_reset() on non-boot cpus in the reboot notifier
   We don't have to do so in the kexec-specific case, but the current code runs the function on each cpu for safety, since we use a general reboot hook.
2) Flush D$ in kvm_cpu_reset()
   We currently don't, because all the cpus are just going to shut down, and we actually flush D$ on the boot cpu in Geoff's cpu_reset().
3) Compatibility with the arm implementation
   Frediano [2] is no longer working on this issue on arm since he left his company, but my approach here is based on a generic interface and can be applied to arm in a similar way.
[1] http://lists.infradead.org/pipermail/kexec/2015-March/013432.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-February/322231.h...
Changes from v1:
 * modified the kvm_cpu_reset() implementation:
   - define a macro to translate a va to an address in the trampoline
   - use __hyp_default_vectors instead of kvm_get_hyp_stub_vectors()
   - shuffle the arguments of __cpu_reset_hyp_mode()
   - optimize TLB flush operations
 * renamed patch #2
 * added patch #5 to add stub code for arm
AKASHI Takahiro (5):
  arm64: kvm: add a cpu tear-down function
  arm64: kvm: allow EL2 context to be reset on shutdown
  arm64: kvm: add cpu reset hook for cpu hotplug
  arm64: kvm: add cpu reset at module exit
  arm: kvm: add stub implementation for kvm_cpu_reset()
 arch/arm/include/asm/kvm_asm.h    |  1 +
 arch/arm/include/asm/kvm_host.h   | 13 +++++++++-
 arch/arm/include/asm/kvm_mmu.h    |  5 ++++
 arch/arm/kvm/arm.c                | 51 +++++++++++++++++++++++++++++++++++++
 arch/arm/kvm/init.S               |  6 +++++
 arch/arm/kvm/mmu.c                |  5 ++++
 arch/arm64/include/asm/kvm_asm.h  |  1 +
 arch/arm64/include/asm/kvm_host.h | 12 ++++++++-
 arch/arm64/include/asm/kvm_mmu.h  |  5 ++++
 arch/arm64/include/asm/virt.h     | 11 ++++++++
 arch/arm64/kvm/Kconfig            |  1 -
 arch/arm64/kvm/hyp-init.S         | 35 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp.S              | 16 +++++++++---
 13 files changed, 156 insertions(+), 6 deletions(-)
The cpu must be put back into its initial state, at least in the following cases, in order to shut down the system and/or re-initialize cpus later on:
1) kexec/kdump
2) cpu hotplug (offline)
3) removing kvm as a module
To address those issues in later patches, this patch adds a tear-down function, kvm_cpu_reset(), that disables the D-cache and MMU and restores the vector table to the initial stub at EL2.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 arch/arm/kvm/arm.c                | 15 +++++++++++++++
 arch/arm/kvm/mmu.c                |  5 +++++
 arch/arm64/include/asm/kvm_asm.h  |  1 +
 arch/arm64/include/asm/kvm_host.h | 11 +++++++++++
 arch/arm64/include/asm/kvm_mmu.h  |  5 +++++
 arch/arm64/include/asm/virt.h     | 11 +++++++++++
 arch/arm64/kvm/hyp-init.S         | 35 +++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp.S              | 16 +++++++++++++---
 8 files changed, 96 insertions(+), 3 deletions(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 5560f74..39df694 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -897,6 +897,21 @@ static void cpu_init_hyp_mode(void *dummy)
 	__cpu_init_hyp_mode(boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr);
 }
 
+static void kvm_cpu_reset(void *dummy)
+{
+	phys_addr_t boot_pgd_ptr;
+	phys_addr_t phys_idmap_start;
+
+	if (__hyp_get_vectors() == hyp_default_vectors)
+		return;
+
+	boot_pgd_ptr = kvm_mmu_get_boot_httbr();
+	phys_idmap_start = kvm_get_idmap_start();
+	__cpu_reset_hyp_mode(boot_pgd_ptr, phys_idmap_start,
+			     hyp_default_vectors,
+			     kvm_virt_to_trampoline(__kvm_hyp_reset));
+}
+
 static int hyp_init_cpu_notify(struct notifier_block *self,
 			       unsigned long action, void *cpu)
 {
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 3e6859b..3631a37 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1490,6 +1490,11 @@ phys_addr_t kvm_get_idmap_vector(void)
 	return hyp_idmap_vector;
 }
 
+phys_addr_t kvm_get_idmap_start(void)
+{
+	return hyp_idmap_start;
+}
+
 int kvm_mmu_init(void)
 {
 	int err;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4f7310f..f1c16e2 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -118,6 +118,7 @@ struct kvm_vcpu;
 
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
+extern char __kvm_hyp_reset[];
 
 extern char __kvm_hyp_vector[];
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8ac3c70..6a8da9c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -199,6 +199,8 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
 
 u64 kvm_call_hyp(void *hypfn, ...);
+void kvm_call_reset(phys_addr_t boot_pgd_ptr, phys_addr_t phys_idmap_start,
+		unsigned long stub_vector_ptr, unsigned long reset_func);
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
@@ -223,6 +225,15 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 		       hyp_stack_ptr, vector_ptr);
 }
 
+static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+					phys_addr_t phys_idmap_start,
+					unsigned long stub_vector_ptr,
+					unsigned long reset_func)
+{
+	kvm_call_reset(boot_pgd_ptr, phys_idmap_start, stub_vector_ptr,
+			reset_func);
+}
+
 struct vgic_sr_vectors {
 	void	*save_vgic;
 	void	*restore_vgic;
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6458b53..c191432 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -96,6 +96,7 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_mmu_get_boot_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
+phys_addr_t kvm_get_idmap_start(void);
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);
 
@@ -305,5 +306,9 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
+extern char __hyp_idmap_text_start[];
+#define kvm_virt_to_trampoline(x)	\
+		(TRAMPOLINE_VA + ((x) - __hyp_idmap_text_start))
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 3070096..7fcd087 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -61,6 +61,17 @@
 #define BOOT_CPU_MODE_EL1	(0xe11)
 #define BOOT_CPU_MODE_EL2	(0xe12)
 
+/*
+ * HVC_RESET - Reset cpu in EL2 to initial state.
+ *
+ * @x0: entry address in trampoline code in va
+ * @x1: identical mapping page table in pa
+ * @x2: start address of identical mapping in pa
+ * @x3: initial stub vector in pa
+ */
+
+#define HVC_RESET 5
+
 #ifndef __ASSEMBLY__
 
 /*
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index c319116..d212990 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -115,6 +115,41 @@ target: /* We're now in the trampoline code, switch page tables */
 	eret
 ENDPROC(__kvm_hyp_init)
 
+	/*
+	 * x0: HYP boot pgd
+	 * x1: HYP phys_idmap_start
+	 * x2: HYP stub vectors
+	 */
+ENTRY(__kvm_hyp_reset)
+	/* We're in trampoline code in VA, switch back to boot page tables */
+	msr	ttbr0_el2, x0
+	isb
+
+	/* Invalidate the old TLBs */
+	tlbi	alle2
+	dsb	sy
+
+	/* Branch into PA space */
+	adr	x0, 1f
+	bfi	x1, x0, #0, #PAGE_SHIFT
+	br	x1
+
+	/* We're now in idmap, disable MMU */
+1:	mrs	x0, sctlr_el2
+	and	x1, x0, #SCTLR_EL2_EE
+	orr	x0, x0, x1		// preserve endianness of EL2
+	ldr	x1, =SCTLR_EL2_FLAGS
+	eor	x1, x1, xzr
+	bic	x0, x0, x1		// Clear SCTL_M and etc
+	msr	sctlr_el2, x0
+	isb
+
+	/* Install stub vectors */
+	msr	vbar_el2, x2
+
+	eret
+ENDPROC(__kvm_hyp_reset)
+
 	.ltorg
 
 	.popsection
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index fd085ec..7c3bdee 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -1136,6 +1136,11 @@ ENTRY(kvm_call_hyp)
 	ret
 ENDPROC(kvm_call_hyp)
 
+ENTRY(kvm_call_reset)
+	hvc	#HVC_RESET
+	ret
+ENDPROC(kvm_call_reset)
+
 .macro invalid_vector	label, target
 	.align	2
 \label:
@@ -1179,10 +1184,14 @@ el1_sync:			// Guest trapped into EL2
 	cmp	x18, #HVC_GET_VECTORS
 	b.ne	1f
 	mrs	x0, vbar_el2
-	b	2f
+	b	3f
 
-1:	/* Default to HVC_CALL_HYP. */
+	/* jump into trampoline code */
+1:	cmp	x18, #HVC_RESET
+	b.ne	2f
+	br	x3			// no return
 
+2:	/* Default to HVC_CALL_HYP. */
 	push	lr, xzr
 
 	/*
@@ -1196,7 +1205,8 @@ el1_sync:			// Guest trapped into EL2
 	blr	lr
 
 	pop	lr, xzr
-2:	eret
+
+3:	eret
 
 el1_trap:
 	/*
The current kvm implementation keeps the EL2 vector table installed even when the system is shut down. This prevents kexec from putting a system with kvm back into EL2 when starting a new kernel.
This patch resolves the issue by calling a cpu tear-down function via a reboot notifier, kvm_reboot_notify(), which is invoked by kernel_restart_prepare() in kernel_kexec(). While kvm has a generic hook, kvm_reboot(), we can't use it here because, under the current implementation, the cpu teardown function will not be invoked if no guest vm has ever been created by kvm_create_vm(). Please note that kvm_usage_count is zero in this case.
We'd better, in the future, implement cpu hotplug support and put the arch-specific initialization into kvm_arch_hardware_enable/disable(). This way, we would be able to revert this patch.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 arch/arm/kvm/arm.c     | 21 +++++++++++++++++++++
 arch/arm64/kvm/Kconfig |  1 -
 2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 39df694..f64713e 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -25,6 +25,7 @@
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
 #include <linux/mman.h>
+#include <linux/reboot.h>
 #include <linux/sched.h>
 #include <linux/kvm.h>
 #include <trace/events/kvm.h>
@@ -1100,6 +1101,23 @@ struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
 	return NULL;
 }
 
+static int kvm_reboot_notify(struct notifier_block *nb,
+			     unsigned long val, void *v)
+{
+	/*
+	 * Reset each CPU in EL2 to initial state.
+	 */
+	on_each_cpu(kvm_cpu_reset, NULL, 1);
+
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block kvm_reboot_nb = {
+	.notifier_call = kvm_reboot_notify,
+	.next = NULL,
+	.priority = 0,	/* FIXME */
+};
+
 /**
  * Initialize Hyp-mode and memory mappings on all CPUs.
  */
@@ -1138,6 +1156,9 @@ int kvm_arch_init(void *opaque)
 	hyp_cpu_pm_init();
 
 	kvm_coproc_table_init();
+
+	register_reboot_notifier(&kvm_reboot_nb);
+
 	return 0;
 out_err:
 	cpu_notifier_register_done();
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 30ae7a7..f5590c8 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -18,7 +18,6 @@ if VIRTUALIZATION
 
 config KVM
 	bool "Kernel-based Virtual Machine (KVM) support"
-	depends on !KEXEC
 	select MMU_NOTIFIER
 	select PREEMPT_NOTIFIERS
 	select ANON_INODES
This patch doesn't enable cpu hotplug under kvm, but is a prerequisite for when the feature is implemented. Once kvm_arch_hardware_enable/disable() is properly implemented, the arm64-specific cpu notifier hook, hyp_init_cpu_notify(), can be removed and replaced by the generic kvm_cpu_hotplug().
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 arch/arm/include/asm/kvm_host.h   | 1 -
 arch/arm/kvm/arm.c                | 9 +++++++++
 arch/arm64/include/asm/kvm_host.h | 1 -
 3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 41008cd..ca97764 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -237,7 +237,6 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
 
-static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f64713e..4892974 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -922,6 +922,10 @@ static int hyp_init_cpu_notify(struct notifier_block *self,
 		if (__hyp_get_vectors() == hyp_default_vectors)
 			cpu_init_hyp_mode(NULL);
 		break;
+	case CPU_DYING:
+	case CPU_DYING_FROZEN:
+		kvm_cpu_reset(NULL);
+		break;
 	}
 
 	return NOTIFY_OK;
@@ -1165,6 +1169,11 @@ out_err:
 	return err;
 }
 
+void kvm_arch_hardware_disable(void)
+{
+	kvm_cpu_reset(NULL);
+}
+
 /* NOP: Compiling as a module not supported */
 void kvm_arch_exit(void)
 {
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6a8da9c..831e6a4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -262,7 +262,6 @@ static inline void vgic_arch_setup(const struct vgic_params *vgic)
 	}
 }
 
-static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
This patch doesn't enable kvm to be built as a module, but is a prerequisite for when kvm is made module-capable.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 arch/arm/kvm/arm.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 4892974..85c142b 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -1178,6 +1178,12 @@ void kvm_arch_hardware_disable(void)
 void kvm_arch_exit(void)
 {
 	kvm_perf_teardown();
+
+	unregister_reboot_notifier(&kvm_reboot_nb);
+	/*
+	 * Reset each CPU in EL2 to initial state.
+	 */
+	on_each_cpu(kvm_cpu_reset, NULL, 1);
 }
static int arm_init(void)
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 arch/arm/include/asm/kvm_asm.h  |  1 +
 arch/arm/include/asm/kvm_host.h | 12 ++++++++++++
 arch/arm/include/asm/kvm_mmu.h  |  5 +++++
 arch/arm/kvm/init.S             |  6 ++++++
 4 files changed, 24 insertions(+)
diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 25410b2..462babf 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -85,6 +85,7 @@ struct kvm_vcpu;
 
 extern char __kvm_hyp_init[];
 extern char __kvm_hyp_init_end[];
+extern char __kvm_hyp_reset[];
 
 extern char __kvm_hyp_exit[];
 extern char __kvm_hyp_exit_end[];
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ca97764..6d38134 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -220,6 +220,18 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 	kvm_call_hyp((void*)hyp_stack_ptr, vector_ptr, pgd_ptr);
 }
 
+static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+					phys_addr_t phys_idmap_start,
+					unsigned long stub_vector_ptr,
+					unsigned long reset_func)
+{
+	/*
+	 * TODO
+	 * kvm_call_reset(boot_pgd_ptr, phys_idmap_start, stub_vector_ptr,
+	 *		  reset_func);
+	 */
+}
+
 static inline int kvm_arch_dev_ioctl_check_extension(long ext)
 {
 	return 0;
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index bf0fe99..dc9543b 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -66,6 +66,7 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_mmu_get_boot_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
+phys_addr_t kvm_get_idmap_start(void);
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);
 
@@ -270,6 +271,10 @@ static inline void __kvm_flush_dcache_pud(pud_t pud)
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
 void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
+extern char __hyp_idmap_text_start[];
+#define kvm_virt_to_trampoline(x)	\
+		(TRAMPOLINE_VA + ((x) - __hyp_idmap_text_start))
+
 #endif	/* !__ASSEMBLY__ */
 
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S
index 3988e72..9178c9a 100644
--- a/arch/arm/kvm/init.S
+++ b/arch/arm/kvm/init.S
@@ -156,4 +156,10 @@ target:	@ We're now in the trampoline code, switch page tables
 
 	.globl __kvm_hyp_init_end
 __kvm_hyp_init_end:
 
+	.globl __kvm_hyp_reset
+__kvm_hyp_reset:
+	@ TODO
+
+	eret
+
 	.popsection
On Thu, Mar 26, 2015 at 05:25:21PM +0900, AKASHI Takahiro wrote:
- Call kvm_cpu_reset() on non-boot cpus in reboot notifier We don't have to do so in kexec-specific case. But the current code runs the function on each cpu for safety since we use a general reboot hook.
- Flush D$ in kvm_cpu_reset() Currently doesn't do so because all the cpus are just going to shut down, and we actually flush D$ on boot cpu in Geoff's cpu_reset().
- Compatibility with arm implementation Frediano[2] is no longer working on this issue on arm as he left his company. But my approach here is based on a generic interface and can be applied to arm in a similar way.
i'm hitting this when rebooting with your patchset applied...
Rebooting.
[ 236.260863] Kernel panic - not syncing: HYP panic:
[ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
[ 236.260863] FAR:0000028001000018 HPFAR: (null) PAR: (null)
[ 236.260863] VCPU: (null)
[ 236.260863]
[ 236.284440] CPU: 1 PID: 0 Comm: swapper/1 Tainted: G W 4.0.0-0.rc5.git4.1.fc23.aarch64 #1
[ 236.293734] Hardware name: /Default string, BIOS ROD0074D 01/29/2015
[ 236.300164] Call trace:
[ 236.302610] [<fffffe0000097770>] dump_backtrace+0x0/0x150
[ 236.307999] [<fffffe00000978e0>] show_stack+0x20/0x30
[ 236.313044] [<fffffe0000771828>] dump_stack+0x78/0x94
[ 236.318086] [<fffffe000076fdac>] panic+0xd8/0x234
[ 236.322780] [<fffffe000076fcd0>] __do_kernel_fault.part.1+0x8c/0x90
[ 236.329038] [<fffffe00000af94c>] kvm_cpu_reset+0x24/0x30
[ 236.334342] [<fffffe00001362c8>] flush_smp_call_function_queue+0x88/0x178
[ 236.341120] [<fffffe00001364dc>] generic_smp_call_function_single_interrupt+0x14/0x20
[ 236.348939] [<fffffe000009db5c>] handle_IPI+0x1ac/0x220
[ 236.354153] [<fffffe0000090460>] gic_handle_irq+0x88/0x90
[ 236.359541] Exception stack(0xfffffe03db0f3e20 to 0xfffffe03db0f3f40)
[ 236.365972] 3e20: 00000001 00000000 db0f0000 fffffe03 db0f3f60 fffffe03 000949c0 fffffe00
[ 236.374138] 3e40: 000ff770 fffffe00 00000000 00000000 fe1b0b0c fffffe03 00000000 00000000
[ 236.382304] 3e60: db158048 fffffe03 00000001 00000000 97c9abf3 000e5f34 ffffe700 00000000
[ 236.390470] 3e80: db088f80 fffffe03 ffffe701 00000000 00000000 00000000 00000040 00000000
[ 236.398636] 3ea0: 00448320 00000000 00000000 00000000 00000001 00000000 ffffffff ffffffff
[ 236.406801] 3ec0: 000c5800 fffffe00 90086f40 000003ff 00000000 00000000 00000001 00000000
[ 236.414967] 3ee0: db0f0000 fffffe03 00d406ec fffffe00 00d40650 fffffe00 00de4000 fffffe00
[ 236.423133] 3f00: 007a9b70 fffffe00 00de33a4 fffffe00 009baf20 fffffe00 000906e0 fffffe00
[ 236.431298] 3f20: 01000000 00000280 db0f3f60 fffffe03 000949bc fffffe00 db0f3f60 fffffe03
[ 236.439464] [<fffffe00000934e4>] el1_irq+0x64/0xc0
[ 236.444247] [<fffffe00000ff76c>] cpu_startup_entry+0x17c/0x1d0
[ 236.450068] [<fffffe000009d58c>] secondary_start_kernel+0x104/0x118
[ 236.456325] CPU0: stopping
[ 236.459025] CPU: 0 PID: 978 Comm: reboot Tainted: G W 4.0.0-0.rc5.git4.1.fc23.aarch64 #1
[ 236.468232] Hardware name: /Default string, BIOS ROD0074D 01/29/2015
[ 236.474660] Call trace:
[ 236.477099] [<fffffe0000097770>] dump_backtrace+0x0/0x150
[ 236.482488] [<fffffe00000978e0>] show_stack+0x20/0x30
[ 236.487529] [<fffffe0000771828>] dump_stack+0x78/0x94
[ 236.492570] [<fffffe000009db90>] handle_IPI+0x1e0/0x220
[ 236.497784] [<fffffe0000090460>] gic_handle_irq+0x88/0x90
[ 236.503172] Exception stack(0xfffffe001869bac0 to 0xfffffe001869bbe0)
[ 236.509602] bac0: 00d41788 fffffe00 fe1a4c40 fffffe03 1869bc00 fffffe00 00135e24 fffffe00
[ 236.517768] bae0: 00000001 00000000 00000004 00000000 00000001 00000000 fe1c7678 fffffe03
[ 236.525934] bb00: 00d42880 fffffe00 fe144680 fffffe03 0000000f 00000000 85ff2d66 feff02fe
[ 236.534100] bb20: 00000000 00000000 86514a0f feff02fe ff7f7f7f 7f7f7fff 01010101 01010101
[ 236.542265] bb40: 00000010 00000000 ffffffff ffffffff 00000000 ffffff00 ffffffff ffffffff
[ 236.550431] bb60: 000e2b00 fffffe00 87355050 000003ff 00000000 00000000 00d41788 fffffe00
[ 236.558597] bb80: fe1a4c40 fffffe03 00d42000 fffffe00 00cf4c00 fffffe00 000af928 fffffe00
[ 236.566763] bba0: 00000000 00000000 00000001 00000000 00000004 00000000 00d41000 fffffe00
[ 236.574928] bbc0: 18698000 fffffe00 1869bc00 fffffe00 00135dfc fffffe00 1869bc00 fffffe00
[ 236.583094] [<fffffe00000934e4>] el1_irq+0x64/0xc0
[ 236.587875] [<fffffe0000135f14>] on_each_cpu+0x3c/0x68
[ 236.593003] [<fffffe00000af828>] kvm_reboot_notify+0x20/0x30
[ 236.598655] [<fffffe00000e0b08>] notifier_call_chain+0x58/0xa0
[ 236.604478] [<fffffe00000e0fe4>] __blocking_notifier_call_chain+0x54/0xa0
[ 236.611255] [<fffffe00000e1068>] blocking_notifier_call_chain+0x38/0x50
[ 236.617859] [<fffffe00000e27d0>] kernel_restart_prepare+0x28/0x50
[ 236.623941] [<fffffe00000e28fc>] kernel_restart+0x1c/0x80
[ 236.629329] [<fffffe00000e2c3c>] SyS_reboot+0x13c/0x238
[ 237.542991] SMP: failed to stop secondary CPUs
[ 237.547426] ---[ end Kernel panic - not syncing: HYP panic:
[ 237.547426] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
[ 237.547426] FAR:0000028001000018 HPFAR: (null) PAR: (null)
[ 237.547426] VCPU: (null)
[ 237.547426]
Boot firmware (version built at 12:39:20 on Jan 29 2015)
regards, Kyle
On 27/03/15 15:31, Kyle McMartin wrote:
On Thu, Mar 26, 2015 at 05:25:21PM +0900, AKASHI Takahiro wrote:
- Call kvm_cpu_reset() on non-boot cpus in reboot notifier We don't have to do so in kexec-specific case. But the current code runs the function on each cpu for safety since we use a general reboot hook.
- Flush D$ in kvm_cpu_reset() Currently doesn't do so because all the cpus are just going to shut down, and we actually flush D$ on boot cpu in Geoff's cpu_reset().
- Compatibility with arm implementation Frediano[2] is no longer working on this issue on arm as he left his company. But my approach here is based on a generic interface and can be applied to arm in a similar way.
i'm hitting this when rebooting with your patchset applied...
Rebooting.
[ 236.260863] Kernel panic - not syncing: HYP panic:
[ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
It would be interesting if you could find out what you have at offset 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to help much).
Thanks,
M.
On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
[ 236.260863] Kernel panic - not syncing: HYP panic:
[ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
It would be interesting if you could find out what you have at offset 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to help much).
Given the alignment, i'm going to assume i'm looking at the right thing:
0000000000000820 <__kvm_hyp_reset>:
 820:	d51c2000	msr	ttbr0_el2, x0
 824:	d5033fdf	isb
 828:	d50c871f	tlbi	alle2
 82c:	d5033f9f	dsb	sy
 830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
 834:	b3403c01	bfxil	x1, x0, #0, #16
 838:	d61f0020	br	x1
 83c:	d53c1000	mrs	x0, sctlr_el2
but it seems fairly implausible to be trapping on ADR x0, 1f...
--Kyle
On 27/03/15 17:40, Kyle McMartin wrote:
On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
[ 236.260863] Kernel panic - not syncing: HYP panic:
[ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
It would be interesting if you could find out what you have at offset 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to help much).
Given the alignment, i'm going to assume i'm looking at the right thing:
0000000000000820 <__kvm_hyp_reset>:
 820:	d51c2000	msr	ttbr0_el2, x0
 824:	d5033fdf	isb
 828:	d50c871f	tlbi	alle2
 82c:	d5033f9f	dsb	sy
 830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
 834:	b3403c01	bfxil	x1, x0, #0, #16
 838:	d61f0020	br	x1
 83c:	d53c1000	mrs	x0, sctlr_el2
but it seems fairly implausible to be trapping on ADR x0, 1f...
... unless you've just switched TTBR0_EL2 to something slightly inappropriate, and conveniently flushed the TLBs. Also, having 0 as the exception class is fairly indicative you've fetched some crap, which reinforces my idea that the page tables are pointing to nowhere-land.
I'll try to review the patches next week, maybe I'll spot something by inspection.
Thanks,
M.
On 03/28/2015 02:40 AM, Kyle McMartin wrote:
On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
[ 236.260863] Kernel panic - not syncing: HYP panic:
[ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
It would be interesting if you could find out what you have at offset 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to help much).
Given the alignment, i'm going to assume i'm looking at the right thing:
0000000000000820 <__kvm_hyp_reset>:
 820:	d51c2000	msr	ttbr0_el2, x0
 824:	d5033fdf	isb
 828:	d50c871f	tlbi	alle2
 82c:	d5033f9f	dsb	sy
 830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
 834:	b3403c01	bfxil	x1, x0, #0, #16
 838:	d61f0020	br	x1
 83c:	d53c1000	mrs	x0, sctlr_el2
but it seems fairly implausible to be trapping on ADR x0, 1f...
I've never seen this panic on fast model...
ESR shows that
- Exception class: Data abort taken without a change in Exception level
- Data fault status code: Translation fault at EL2
and FAR seems not to be a proper address.
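(For reference, a rough decode of the reported ESR value, per the ARMv8 ESR_EL2 layout:

    ESR_EL2 = 0x96000006
      EC   [31:26] = 0b100101 -> Data abort taken without a change in Exception level
      IL   [25]    = 1        -> 32-bit instruction
      DFSC [5:0]   = 0b000110 -> Translation fault, level 2
)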
-Takahiro AKASHI
--Kyle
On Mon, 30 Mar 2015 02:39:53 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
On 03/28/2015 02:40 AM, Kyle McMartin wrote:
On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
[ 236.260863] Kernel panic - not syncing: HYP panic:
[ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
It would be interesting if you could find out what you have at offset 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to help much).
Given the alignment, i'm going to assume i'm looking at the right thing:
0000000000000820 <__kvm_hyp_reset>:
 820:	d51c2000	msr	ttbr0_el2, x0
 824:	d5033fdf	isb
 828:	d50c871f	tlbi	alle2
 82c:	d5033f9f	dsb	sy
 830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
 834:	b3403c01	bfxil	x1, x0, #0, #16
 838:	d61f0020	br	x1
 83c:	d53c1000	mrs	x0, sctlr_el2
but it seems fairly implausible to be trapping on ADR x0, 1f...
I've never seen this panic on fast model...
ESR shows that
- Exception class: Data abort taken without a change in Exception level
- Data fault status code: Translation fault at EL2
and FAR seems not to be a proper address.
... which is consistent with what we're seeing here (data fault on something that doesn't generate a load/store). I'm pretty sure the page tables are screwed.
Have you tested it with 64k pages?
Thanks,
M.
On 03/30/2015 04:16 PM, Marc Zyngier wrote:
On Mon, 30 Mar 2015 02:39:53 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
On 03/28/2015 02:40 AM, Kyle McMartin wrote:
On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
[ 236.260863] Kernel panic - not syncing: HYP panic:
[ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
It would be interesting if you could find out what you have at offset 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to help much).
Given the alignment, i'm going to assume i'm looking at the right thing:
0000000000000820 <__kvm_hyp_reset>:
 820:	d51c2000	msr	ttbr0_el2, x0
 824:	d5033fdf	isb
 828:	d50c871f	tlbi	alle2
 82c:	d5033f9f	dsb	sy
 830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
 834:	b3403c01	bfxil	x1, x0, #0, #16
 838:	d61f0020	br	x1
 83c:	d53c1000	mrs	x0, sctlr_el2
but it seems fairly implausible to be trapping on ADR x0, 1f...
I've never seen this panic on fast model...
ESR shows that
- Exception class: Data abort taken without a change in Exception level
- Data fault status code: Translation fault at EL2
and FAR seems not to be a proper address.
... which is consistent with what we're seeing here (data fault on something that doesn't generate a load/store). I'm pretty sure the page tables are screwed.
Have you tested it with 64k pages?
Hmm... It seems that I was able to reproduce the problem with 64k pages enabled.
-Takahiro AKASHI
Thanks,
M.
Marc,
On 03/30/2015 05:54 PM, AKASHI Takahiro wrote:
On 03/30/2015 04:16 PM, Marc Zyngier wrote:
On Mon, 30 Mar 2015 02:39:53 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
On 03/28/2015 02:40 AM, Kyle McMartin wrote:
On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
[ 236.260863] Kernel panic - not syncing: HYP panic:
[ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
It would be interesting if you could find out what you have at offset 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to help much).
Given the alignment, i'm going to assume i'm looking at the right thing:
0000000000000820 <__kvm_hyp_reset>:
 820:	d51c2000	msr	ttbr0_el2, x0
 824:	d5033fdf	isb
 828:	d50c871f	tlbi	alle2
 82c:	d5033f9f	dsb	sy
 830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
 834:	b3403c01	bfxil	x1, x0, #0, #16
 838:	d61f0020	br	x1
 83c:	d53c1000	mrs	x0, sctlr_el2
but it seems fairly implausible to be trapping on ADR x0, 1f...
I've never seen this panic on fast model...
ESR shows that
- Exception class: Data abort taken without a change in Exception level
- Data fault status code: Translation fault at EL2
and FAR seems not to be a proper address.
... which is consistent with what we're seeing here (data fault on something that doesn't generate a load/store). I'm pretty sure the page tables are screwed.
Have you tested it with 64k pages?
Hmm... It seems that I was able to reproduce the problem if 64k pages enabled.
The entry address in the trampoline code calculated by kvm_virt_to_trampoline(__kvm_hyp_reset) seems to be wrong due to improper page alignment in hyp-init.S. The following patch fixed the problem, at least in my environment (fast model). (I don't know why it is PAGE_SHIFT - 1, not PAGE_SHIFT.)
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index d212990..45b8d98 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -24,7 +24,7 @@
 	.text
 	.pushsection	.hyp.idmap.text, "ax"
 
-	.align	11
+	.align	(PAGE_SHIFT - 1)
 
 ENTRY(__kvm_hyp_init)
 	ventry	__invalid		// Synchronous EL2t
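(For reference: .align n aligns to 2^n bytes, so .align 11 is a 2KB alignment, while .align (PAGE_SHIFT - 1) is also 2KB on a 4KB-page kernel (PAGE_SHIFT = 12) but becomes 32KB on a 64KB-page kernel (PAGE_SHIFT = 16).)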
After applying this patch, I got another problem with kexec-tools on a 64k-page kernel, but I've already modified kexec-tools.
Thanks, -Takahiro AKASHI
-Takahiro AKASHI
Thanks,
M.
On Tue, 31 Mar 2015 07:04:44 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
Hi Takahiro,
Marc,
On 03/30/2015 05:54 PM, AKASHI Takahiro wrote:
On 03/30/2015 04:16 PM, Marc Zyngier wrote:
On Mon, 30 Mar 2015 02:39:53 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
On 03/28/2015 02:40 AM, Kyle McMartin wrote:
On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
> [ 236.260863] Kernel panic - not syncing: HYP panic:
> [ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
It would be interesting if you could find out what you have at offset 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to help much).
Given the alignment, i'm going to assume i'm looking at the right thing:
0000000000000820 <__kvm_hyp_reset>:
 820:	d51c2000	msr	ttbr0_el2, x0
 824:	d5033fdf	isb
 828:	d50c871f	tlbi	alle2
 82c:	d5033f9f	dsb	sy
 830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
 834:	b3403c01	bfxil	x1, x0, #0, #16
 838:	d61f0020	br	x1
 83c:	d53c1000	mrs	x0, sctlr_el2
but it seems fairly implausible to be trapping on ADR x0, 1f...
I've never seen this panic on fast model...
ESR shows that
- Exception class: Data abort taken without a change in Exception level
- Data fault status code: Translation fault at EL2
and FAR seems not to be a proper address.
... which is consistent with what we're seeing here (data fault on something that doesn't generate a load/store). I'm pretty sure the page tables are screwed.
Have you tested it with 64k pages?
Hmm... It seems that I was able to reproduce the problem if 64k pages enabled.
The entry address in trampoline code calc'ed by kvm_virt_to_trampoline(__kvm_hyp_reset) seems to be wrong due to improper page-alignment in hyp-init.S. The following patch fixed this problem, at least, in my environment(fast model). (I don't know why it's PAGE_SHIFT - 1, not PAGE_SHIFT.)
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index d212990..45b8d98 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -24,7 +24,7 @@
 	.text
 	.pushsection	.hyp.idmap.text, "ax"
 
-	.align	11
+	.align	(PAGE_SHIFT - 1)
I'm afraid this is wrong. This alignment is for the vectors (which have to be aligned on a 2kB boundary, hence the ".align 11"), not for the code. Aligning it on a 32kB boundary doesn't make any sense, and just hides the bug.
I bet that without this hack, the hyp-init code is spread across two 64kB pages, and the kernel generates a bounce page for this code. By changing the alignment, you just end up having the code to fit in a single page, and no bounce page gets generated.
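For readers following the thread: the bounce page Marc refers to is only generated when the hyp-init/idmap text does not fit inside a single page. A minimal illustration of that boundary-crossing condition (a standalone sketch, not the kernel's exact code) is:

	#include <stdbool.h>
	#include <stdint.h>

	#define PAGE_SHIFT	16	/* 64KB pages in this scenario */
	#define PAGE_MASK	(~((1UL << PAGE_SHIFT) - 1))

	/* true when [start, end) does not fit entirely inside one page */
	static bool crosses_page(uint64_t start, uint64_t end)
	{
		return ((start ^ (end - 1)) & PAGE_MASK) != 0;
	}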
If I'm right about the above, it means that you're computing something against the wrong base. Can you please verify this scenario?
Now, the good news is that Ard is removing the bounce page from the KVM code (for unrelated reasons), and this may just be the saving grace.
ENTRY(__kvm_hyp_init)
	ventry	__invalid		// Synchronous EL2t
After applying this patch, I got another problem with kexec-tools on 64k page kernel, but I've already modified kexec-tools.
The idea that userspace behavior is dependent on the kernel page size is deeply worrying...
Thanks,
M.
Marc
On 03/31/2015 04:31 PM, Marc Zyngier wrote:
On Tue, 31 Mar 2015 07:04:44 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
Hi Takahiro,
Marc,
On 03/30/2015 05:54 PM, AKASHI Takahiro wrote:
On 03/30/2015 04:16 PM, Marc Zyngier wrote:
On Mon, 30 Mar 2015 02:39:53 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
On 03/28/2015 02:40 AM, Kyle McMartin wrote:
On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
>> [ 236.260863] Kernel panic - not syncing: HYP panic:
>> [ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
>
> It would be interesting if you could find out what you have at offset
> 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to
> help much).
>
Given the alignment, i'm going to assume i'm looking at the right thing:
0000000000000820 <__kvm_hyp_reset>:
 820:	d51c2000	msr	ttbr0_el2, x0
 824:	d5033fdf	isb
 828:	d50c871f	tlbi	alle2
 82c:	d5033f9f	dsb	sy
 830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
 834:	b3403c01	bfxil	x1, x0, #0, #16
 838:	d61f0020	br	x1
 83c:	d53c1000	mrs	x0, sctlr_el2
but it seems fairly implausible to be trapping on ADR x0, 1f...
I've never seen this panic on fast model...
ESR shows that
- Exception class: Data abort taken without a change in Exception level
- Data fault status code: Translation fault at EL2
and FAR seems not to be a proper address.
... which is consistent with what we're seeing here (data fault on something that doesn't generate a load/store). I'm pretty sure the page tables are screwed.
Have you tested it with 64k pages?
Hmm... It seems that I was able to reproduce the problem if 64k pages enabled.
The entry address in trampoline code calc'ed by kvm_virt_to_trampoline(__kvm_hyp_reset) seems to be wrong due to improper page-alignment in hyp-init.S. The following patch fixed this problem, at least, in my environment(fast model). (I don't know why it's PAGE_SHIFT - 1, not PAGE_SHIFT.)
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index d212990..45b8d98 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -24,7 +24,7 @@
 	.text
 	.pushsection	.hyp.idmap.text, "ax"
 
-	.align	11
+	.align	(PAGE_SHIFT - 1)
I'm afraid this is wrong. This alignment is for the vectors (which have to be aligned on a 2kB boundary, hence the ".align 11"), not for the code. Aligning it on a 32kB boundary doesn't make any sense, and just hides the bug.
I bet that without this hack, the hyp-init code is spread across two 64kB pages, and the kernel generates a bounce page for this code. By changing the alignment, you just end up having the code to fit in a single page, and no bounce page gets generated.
There seem to be two scenarios that make things go wrong:
1) As you mentioned above, the trampoline code is spread across a page boundary even though its whole size is less than a page.
2) The whole trampoline code fits into a single page, but the physical start address of this region (that is, __hyp_idmap_text_start) is not page-aligned. In this case, the pa of __kvm_hyp_reset should also be offset.
Given any combination of #1 and #2, __kvm_virt_to_trampoline() would get a bit complicated.
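As a concrete example of #2 (hypothetical addresses, 64KB pages): suppose __hyp_idmap_text_start is at PA 0x8000c800 (not 64KB-aligned) and __kvm_hyp_reset is at PA 0x8000d020. The trampoline mapping covers the whole 64KB page containing the idmap text, so the correct entry point is TRAMPOLINE_VA + 0xd020, while the original macro computes TRAMPOLINE_VA + (0x8000d020 - 0x8000c800) = TRAMPOLINE_VA + 0x820, i.e. it drops the page offset of __hyp_idmap_text_start.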
If I'm right about the above, it means that you're computing something against the wrong base. Can you please verify this scenario?
Now, the good news is that Ard is removing the bounce page from the KVM code (for unrelated reasons), and this may just be the saving grace.
Ard's patch will fix #1, but not #2. So I modified __kvm_virt_to_trampoline() as follows, and it seems to work well on both 4k-page and 64k-page kernels (in addition to Ard's patch).
But please note that Ard's patch already makes __hyp_idmap_text_start 4kb-aligned. So why not PAGE_SIZE-aligned as my previous patch does?
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index c191432..facfd6d 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -308,7 +308,9 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
 extern char __hyp_idmap_text_start[];
 #define kvm_virt_to_trampoline(x)	\
-		(TRAMPOLINE_VA + ((x) - __hyp_idmap_text_start))
+		(TRAMPOLINE_VA \
+		 + ((unsigned long)(x) \
+		 - ((unsigned long)__hyp_idmap_text_start & PAGE_MASK)))
 
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
ENTRY(__kvm_hyp_init)
	ventry	__invalid		// Synchronous EL2t
After applying this patch, I got another problem with kexec-tools on 64k page kernel, but I've already modified kexec-tools.
The idea that userspace behavior is dependent on the kernel page size is deeply worrying...
The logic is not directly related to the page size. Kexec-tools tries to allocate several small chunks of memory in a fixed-size region in the last part of main memory. Due to the increased page size, the total size of the chunks overflowed the region.
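(Illustrative numbers only: if kexec-tools rounds each of, say, ten small segments up to one page within a fixed window at the end of memory, 10 x 4KB = 40KB fits easily, while 10 x 64KB = 640KB can exceed the window.)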
Thanks, -Takahiro AKASHI
Thanks,
M.
On 01/04/15 06:06, AKASHI Takahiro wrote:
Marc
On 03/31/2015 04:31 PM, Marc Zyngier wrote:
On Tue, 31 Mar 2015 07:04:44 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
Hi Takahiro,
Marc,
On 03/30/2015 05:54 PM, AKASHI Takahiro wrote:
On 03/30/2015 04:16 PM, Marc Zyngier wrote:
On Mon, 30 Mar 2015 02:39:53 +0100 AKASHI Takahiro takahiro.akashi@linaro.org wrote:
On 03/28/2015 02:40 AM, Kyle McMartin wrote:
> On Fri, Mar 27, 2015 at 03:37:04PM +0000, Marc Zyngier wrote:
>>> [ 236.260863] Kernel panic - not syncing: HYP panic:
>>> [ 236.260863] PS:600003c9 PC:000003ffffff0830 ESR:0000000096000006
>>
>> It would be interesting if you could find out what you have at offset
>> 0x830 of hyp-init.o (the stack trace is for EL1, and is not going to
>> help much).
>>
>
> Given the alignment, i'm going to assume i'm looking at the right thing:
>
> 0000000000000820 <__kvm_hyp_reset>:
>  820:	d51c2000	msr	ttbr0_el2, x0
>  824:	d5033fdf	isb
>  828:	d50c871f	tlbi	alle2
>  82c:	d5033f9f	dsb	sy
>  830:	10000060	adr	x0, 83c <__kvm_hyp_reset+0x1c>
>  834:	b3403c01	bfxil	x1, x0, #0, #16
>  838:	d61f0020	br	x1
>  83c:	d53c1000	mrs	x0, sctlr_el2
>
> but it seems fairly implausible to be trapping on ADR x0, 1f...
I've never seen this panic on fast model...
ESR shows that
- Exception class: Data abort taken without a change in Exception level
- Data fault status code: Translation fault at EL2
and FAR seems not to be a proper address.
... which is consistent with what we're seeing here (data fault on something that doesn't generate a load/store). I'm pretty sure the page tables are screwed.
Have you tested it with 64k pages?
Hmm... It seems that I was able to reproduce the problem if 64k pages enabled.
The entry address in trampoline code calc'ed by kvm_virt_to_trampoline(__kvm_hyp_reset) seems to be wrong due to improper page-alignment in hyp-init.S. The following patch fixed this problem, at least, in my environment(fast model). (I don't know why it's PAGE_SHIFT - 1, not PAGE_SHIFT.)
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index d212990..45b8d98 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -24,7 +24,7 @@
 	.text
 	.pushsection	.hyp.idmap.text, "ax"
 
-	.align	11
+	.align	(PAGE_SHIFT - 1)
I'm afraid this is wrong. This alignment is for the vectors (which have to be aligned on a 2kB boundary, hence the ".align 11"), not for the code. Aligning it on a 32kB boundary doesn't make any sense, and just hides the bug.
I bet that without this hack, the hyp-init code is spread across two 64kB pages, and the kernel generates a bounce page for this code. By changing the alignment, you just end up having the code to fit in a single page, and no bounce page gets generated.
There seem to be two scenarios that make things go wrong:
- As you mentioned above, trampoline code is spread across page boundary even though the whole size is less than a page.
- The whole trampoline code fits into a single page, but the physical start address of this region (that is, __hyp_idmap_text_start) is not page-aligned. In this case, pa of __kvm_hyp_reset should also be offset.
Given any combinations of #1 and #2, __kvm_virt_to_trampoline() would get a bit complicated.
If I'm right about the above, it means that you're computing something against the wrong base. Can you please verify this scenario?
Now, the good news is that Ard is removing the bounce page from the KVM code (for unrelated reasons), and this may just be the saving grace.
Ard's patch will fix #1, but not #2. So I modified __kvm_virt_to_trampoline as followed and it seems to work well both on 4k-page kernel and 64k-page kernel (in addition to Ard's patch).
But please note that Ard's patch already makes __hyp_idmap_text_start 4kb-aligned. So why not PAGE_SIZE-aligned as my previous patch does?
Well, there is a difference between wasting up to 4kB, and wasting up to 64kB. We align it on the smallest page size so we can be sure the code always fits in a single page, but I don't really want to bloat the kernel for no reason.
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index c191432..facfd6d 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -308,7 +308,9 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
 extern char __hyp_idmap_text_start[];
 #define kvm_virt_to_trampoline(x)	\
-		(TRAMPOLINE_VA + ((x) - __hyp_idmap_text_start))
+		(TRAMPOLINE_VA \
+		 + ((unsigned long)(x) \
+		 - ((unsigned long)__hyp_idmap_text_start & PAGE_MASK)))
 
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
This seems like the sensible thing to do.
ENTRY(__kvm_hyp_init)
	ventry	__invalid		// Synchronous EL2t
After applying this patch, I got another problem with kexec-tools on 64k page kernel, but I've already modified kexec-tools.
The idea that userspace behavior is dependent on the kernel page size is deeply worrying...
The logic is not directly related to a page size. Kexec-tools try to allocate several small chunks of memory in a fixed-size region of last part of main memory. Due to increased page size, the total size of chunks were overflowed.
OK.
M.