This patch set was initially intended to address a KVM issue described in Geoff's kexec patch set[1]. The code has since been overhauled and, as of the 4th version, the fix is implemented as kvm cpu hotplug following Marc's comment, so the title of patch#1 now better describes the series as a whole.
I confirmed that it works with kexec under the following scenario:
- boot 1st kernel
- run a guest OS
- (stop a guest OS)
- reboot 2nd kernel by kexec
- run a guest OS
test target: MediaTek MT8173-EVB
version: kernel v4.0-rc1 + Geoff's kexec v8 + Ard's patch[2]
I did not, however, test more complicated scenarios involving cpu hotplug.
On arm, Frediano[3] is no longer working on this issue since he left his company, so patch#1 also adds a stub definition for arm.
Changes from v4:
* restructured the patchset as cpu_init_hyp_mode() and kvm_cpu_reset() were renamed to kvm_arch_hardware_{enable,disable}() respectively
* omitted some obvious arguments from __cpu_reset_hyp_mode()
Changes from v3:
* modified to use kvm cpu hotplug framework directly instead of reboot notifier hook
Changes from v2:
* modified kvm_virt_to_trampoline() macro to fix a page-alignment issue[4]
Changes from v1:
* modified kvm_cpu_reset() implementation:
  - define a macro to translate va to addr in trampoline
  - use __hyp_default_vectors instead of kvm_get_hyp_stub_vectors()
  - shuffle the arguments in __cpu_reset_hyp_mode()
  - optimize TLB flush operations
* changed patch#2's name
* added patch#5 to add stub code for arm
[1] http://lists.infradead.org/pipermail/kexec/2015-April/335533.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-March/334002.html
[3] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-February/322231.h...
[4] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-March/334910.html
AKASHI Takahiro (2):
  arm64: kvm: allows kvm cpu hotplug
  arm64: kvm: remove !KEXEC dependency
 arch/arm/include/asm/kvm_host.h   | 10 ++++++-
 arch/arm/include/asm/kvm_mmu.h    |  1 +
 arch/arm/kvm/arm.c                | 58 ++++++++++++-------------------
 arch/arm/kvm/mmu.c                |  5 ++++
 arch/arm64/include/asm/kvm_host.h | 11 ++++++-
 arch/arm64/include/asm/kvm_mmu.h  |  1 +
 arch/arm64/include/asm/virt.h     |  9 ++++++
 arch/arm64/kvm/Kconfig            |  1 -
 arch/arm64/kvm/hyp-init.S         | 33 +++++++++++++++++++++
 arch/arm64/kvm/hyp.S              | 32 +++++++++++++++++---
 10 files changed, 114 insertions(+), 47 deletions(-)
The current kvm implementation on arm64 does cpu-specific initialization at system boot, and has no way to gracefully shut down a core as far as kvm is concerned. In particular, this prevents kexec from rebooting the system on the boot core in EL2.

This patch adds a cpu tear-down function and also moves the existing cpu-init code into a separate function; these become kvm_arch_hardware_disable() and kvm_arch_hardware_enable() respectively. We no longer need an arm64-specific cpu hotplug hook.

Since this patch modifies code shared between arm and arm64, a stub definition, __cpu_reset_hyp_mode(), is added on the arm side to avoid compile errors.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 arch/arm/include/asm/kvm_host.h   | 10 ++++++-
 arch/arm/include/asm/kvm_mmu.h    |  1 +
 arch/arm/kvm/arm.c                | 58 ++++++++++++-------------------
 arch/arm/kvm/mmu.c                |  5 ++++
 arch/arm64/include/asm/kvm_host.h | 11 ++++++-
 arch/arm64/include/asm/kvm_mmu.h  |  1 +
 arch/arm64/include/asm/virt.h     |  9 ++++++
 arch/arm64/kvm/hyp-init.S         | 33 +++++++++++++++++++++
 arch/arm64/kvm/hyp.S              | 32 +++++++++++++++++---
 9 files changed, 114 insertions(+), 46 deletions(-)
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index d71607c..8b67e11 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -213,6 +213,15 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 	kvm_call_hyp((void*)hyp_stack_ptr, vector_ptr, pgd_ptr);
 }

+static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+					phys_addr_t phys_idmap_start)
+{
+	/*
+	 * TODO
+	 * kvm_call_reset(boot_pgd_ptr, phys_idmap_start);
+	 */
+}
+
 static inline int kvm_arch_dev_ioctl_check_extension(long ext)
 {
 	return 0;
@@ -230,7 +239,6 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);

 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);

-static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 405aa18..dc6fadf 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -66,6 +66,7 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_mmu_get_boot_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
+phys_addr_t kvm_get_idmap_start(void);
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index d9631ec..6833d7c 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -16,7 +16,6 @@
  * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
  */

-#include <linux/cpu.h>
 #include <linux/cpu_pm.h>
 #include <linux/errno.h>
 #include <linux/err.h>
@@ -85,11 +84,6 @@ struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void)
 	return &kvm_arm_running_vcpu;
 }

-int kvm_arch_hardware_enable(void)
-{
-	return 0;
-}
-
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 {
 	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
@@ -891,7 +885,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
 	}
 }

-static void cpu_init_hyp_mode(void *dummy)
+int kvm_arch_hardware_enable(void)
 {
 	phys_addr_t boot_pgd_ptr;
 	phys_addr_t pgd_ptr;
@@ -899,6 +893,9 @@ static void cpu_init_hyp_mode(void *dummy)
 	unsigned long stack_page;
 	unsigned long vector_ptr;

+	if (__hyp_get_vectors() != hyp_default_vectors)
+		return 0;
+
 	/* Switch from the HYP stub to our own HYP init vector */
 	__hyp_set_vectors(kvm_get_idmap_vector());

@@ -909,34 +906,31 @@ static void cpu_init_hyp_mode(void *dummy)
 	vector_ptr = (unsigned long)__kvm_hyp_vector;

 	__cpu_init_hyp_mode(boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr);
+
+	return 0;
 }

-static int hyp_init_cpu_notify(struct notifier_block *self,
-			       unsigned long action, void *cpu)
+void kvm_arch_hardware_disable(void)
 {
-	switch (action) {
-	case CPU_STARTING:
-	case CPU_STARTING_FROZEN:
-		if (__hyp_get_vectors() == hyp_default_vectors)
-			cpu_init_hyp_mode(NULL);
-		break;
-	}
+	phys_addr_t boot_pgd_ptr;
+	phys_addr_t phys_idmap_start;

-	return NOTIFY_OK;
-}
+	if (__hyp_get_vectors() == hyp_default_vectors)
+		return;

-static struct notifier_block hyp_init_cpu_nb = {
-	.notifier_call = hyp_init_cpu_notify,
-};
+	boot_pgd_ptr = kvm_mmu_get_boot_httbr();
+	phys_idmap_start = kvm_get_idmap_start();
+
+	__cpu_reset_hyp_mode(boot_pgd_ptr, phys_idmap_start);
+}

 #ifdef CONFIG_CPU_PM
 static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
 				    unsigned long cmd,
 				    void *v)
 {
-	if (cmd == CPU_PM_EXIT &&
-	    __hyp_get_vectors() == hyp_default_vectors) {
-		cpu_init_hyp_mode(NULL);
+	if (cmd == CPU_PM_EXIT && kvm_arm_get_running_vcpu()) {
+		kvm_arch_hardware_enable();
 		return NOTIFY_OK;
 	}

@@ -1038,11 +1032,6 @@ static int init_hyp_mode(void)
 	}

 	/*
-	 * Execute the init code on each CPU.
-	 */
-	on_each_cpu(cpu_init_hyp_mode, NULL, 1);
-
-	/*
 	 * Init HYP view of VGIC
 	 */
 	err = kvm_vgic_hyp_init();
@@ -1116,26 +1105,15 @@ int kvm_arch_init(void *opaque)
 		}
 	}

-	cpu_notifier_register_begin();
-
 	err = init_hyp_mode();
 	if (err)
 		goto out_err;

-	err = __register_cpu_notifier(&hyp_init_cpu_nb);
-	if (err) {
-		kvm_err("Cannot register HYP init CPU notifier (%d)\n", err);
-		goto out_err;
-	}
-
-	cpu_notifier_register_done();
-
 	hyp_cpu_pm_init();

 	kvm_coproc_table_init();
 	return 0;
 out_err:
-	cpu_notifier_register_done();
 	return err;
 }

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 1d5accb..ce5d0a2 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -1643,6 +1643,11 @@ phys_addr_t kvm_get_idmap_vector(void)
 	return hyp_idmap_vector;
 }

+phys_addr_t kvm_get_idmap_start(void)
+{
+	return hyp_idmap_start;
+}
+
 int kvm_mmu_init(void)
 {
 	int err;
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f0f58c9..42cf627 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -192,6 +192,7 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);

 u64 kvm_call_hyp(void *hypfn, ...);
+void kvm_call_reset(phys_addr_t boot_pgd_ptr, phys_addr_t phys_idmap_start);
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);

@@ -216,6 +217,15 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 			     hyp_stack_ptr, vector_ptr);
 }

+static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+					phys_addr_t phys_idmap_start)
+{
+	/*
+	 * Call reset code, and switch back to stub hyp vectors.
+	 */
+	kvm_call_reset(boot_pgd_ptr, phys_idmap_start);
+}
+
 struct vgic_sr_vectors {
 	void	*save_vgic;
 	void	*restore_vgic;
@@ -244,7 +254,6 @@ static inline void vgic_arch_setup(const struct vgic_params *vgic)
 	}
 }

-static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 6150567..ff5a087 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -98,6 +98,7 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
 phys_addr_t kvm_mmu_get_httbr(void);
 phys_addr_t kvm_mmu_get_boot_httbr(void);
 phys_addr_t kvm_get_idmap_vector(void);
+phys_addr_t kvm_get_idmap_start(void);
 int kvm_mmu_init(void);
 void kvm_clear_hyp_idmap(void);

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 3070096..8f59ca88 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -61,6 +61,15 @@
 #define BOOT_CPU_MODE_EL1	(0xe11)
 #define BOOT_CPU_MODE_EL2	(0xe12)

+/*
+ * HVC_RESET - Reset cpu in EL2 to initial state.
+ *
+ * @x0: entry address in trampoline code in va
+ * @x1: identical mapping page table in pa
+ */
+
+#define HVC_RESET 5
+
 #ifndef __ASSEMBLY__

 /*
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 2e67a48..1925163 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -139,6 +139,39 @@ merged:
 	eret
 ENDPROC(__kvm_hyp_init)

+	/*
+	 * x0: HYP boot pgd
+	 * x1: HYP phys_idmap_start
+	 */
+ENTRY(__kvm_hyp_reset)
+	/* We're in trampoline code in VA, switch back to boot page tables */
+	msr	ttbr0_el2, x0
+	isb
+
+	/* Invalidate the old TLBs */
+	tlbi	alle2
+	dsb	sy
+
+	/* Branch into PA space */
+	adr	x0, 1f
+	bfi	x1, x0, #0, #PAGE_SHIFT
+	br	x1
+
+	/* We're now in idmap, disable MMU */
+1:	mrs	x0, sctlr_el2
+	ldr	x1, =SCTLR_EL2_FLAGS
+	bic	x0, x0, x1		// Clear SCTL_M and etc
+	msr	sctlr_el2, x0
+	isb
+
+	/* Install stub vectors */
+	adrp	x0, __hyp_stub_vectors
+	add	x0, x0, #:lo12:__hyp_stub_vectors
+	msr	vbar_el2, x0
+
+	eret
+ENDPROC(__kvm_hyp_reset)
+	.ltorg

 	.popsection
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index fd085ec..afe6263 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -1136,6 +1136,11 @@ ENTRY(kvm_call_hyp)
 	ret
 ENDPROC(kvm_call_hyp)

+ENTRY(kvm_call_reset)
+	hvc	#HVC_RESET
+	ret
+ENDPROC(kvm_call_reset)
+
 .macro invalid_vector	label, target
 	.align	2
 \label:
@@ -1179,10 +1184,27 @@ el1_sync:			// Guest trapped into EL2
 	cmp	x18, #HVC_GET_VECTORS
 	b.ne	1f
 	mrs	x0, vbar_el2
-	b	2f
-
-1:	/* Default to HVC_CALL_HYP. */
+	b	do_eret
+
+	/* jump into trampoline code */
+1:	cmp	x18, #HVC_RESET
+	b.ne	2f
+	/*
+	 * Entry point is:
+	 *	TRAMPOLINE_VA
+	 *	+ (__kvm_hyp_reset - (__hyp_idmap_text_start & PAGE_MASK))
+	 */
+	adrp	x2, __kvm_hyp_reset
+	add	x2, x2, #:lo12:__kvm_hyp_reset
+	adrp	x3, __hyp_idmap_text_start
+	add	x3, x3, #:lo12:__hyp_idmap_text_start
+	and	x3, x3, PAGE_MASK
+	sub	x2, x2, x3
+	ldr	x3, =TRAMPOLINE_VA
+	add	x2, x2, x3
+	br	x2			// no return
+
+2:	/* Default to HVC_CALL_HYP. */
 	push	lr, xzr

 /*
@@ -1196,7 +1218,9 @@ el1_sync:			// Guest trapped into EL2
 	blr	lr

 	pop	lr, xzr
-2:	eret
+
+do_eret:
+	eret

el1_trap:
 /*
Hi Takahiro,
On 29/05/15 06:38, AKASHI Takahiro wrote:
The current kvm implementation on arm64 does cpu-specific initialization at system boot, and has no way to gracefully shutdown a core in terms of kvm. This prevents, especially, kexec from rebooting the system on a boot core in EL2.
This patch adds a cpu tear-down function and also puts an existing cpu-init code into a separate function, kvm_arch_hardware_disable() and kvm_arch_hardware_enable() respectively. We don't need arm64-specific cpu hotplug hook any more.
Since this patch modifies common part of code between arm and arm64, one stub definition, __cpu_reset_hyp_mode(), is added on arm side to avoid compiling errors.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
This is starting to look good. Any chance you or Geoff could repost the whole series? This would help getting a better view of the whole thing, and hopefully put it on track for 4.3.
Thanks,
M.
On 09/06/15 19:00, Marc Zyngier wrote:
Hi Takahiro,
On 29/05/15 06:38, AKASHI Takahiro wrote:
The current kvm implementation on arm64 does cpu-specific initialization at system boot, and has no way to gracefully shutdown a core in terms of kvm. This prevents, especially, kexec from rebooting the system on a boot core in EL2.
This patch adds a cpu tear-down function and also puts an existing cpu-init code into a separate function, kvm_arch_hardware_disable() and kvm_arch_hardware_enable() respectively. We don't need arm64-specific cpu hotplug hook any more.
Since this patch modifies common part of code between arm and arm64, one stub definition, __cpu_reset_hyp_mode(), is added on arm side to avoid compiling errors.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
This is starting to look good. Any chance you or Geoff could repost the whole series? This would help getting a better view of the whole thing, and hopefully put it on track for 4.3.
So this has now been a whole month since I reviewed this, and I haven't seen any repost of this series. Any hope to see this within a reasonable time frame? Or is it considered dead already?
Thanks,
M.
Hi Marc,
On Mon, 2015-07-06 at 15:53 +0100, Marc Zyngier wrote:
So this has now been a whole month since I reviewed this, and I haven't seen any repost of this series. Any hope to see this within a reasonable time frame? Or is it considered dead already?
I was hoping to do some work on updating the base kexec patches last week, but all my time was taken up by other things. I'll try to start on it this week.
-Geoff
On 07/07/2015 01:57 AM, Geoff Levand wrote:
Hi Marc,
On Mon, 2015-07-06 at 15:53 +0100, Marc Zyngier wrote:
So this has now been a whole month since I reviewed this, and I haven't seen any repost of this series. Any hope to see this within a reasonable time frame? Or is it considered dead already?
I was hoping to do some work on updating the base kexec patches last week, but all my time was taken up by other things. I'll try to start on it this week.
I also have some updates, and so will submit my patchset after Geoff. Pratyush's patch should also be synced.
-Takahiro AKASHI
-Geoff
On 07/07/15 08:43, AKASHI Takahiro wrote:
On 07/07/2015 01:57 AM, Geoff Levand wrote:
Hi Marc,
On Mon, 2015-07-06 at 15:53 +0100, Marc Zyngier wrote:
So this has now been a whole month since I reviewed this, and I haven't seen any repost of this series. Any hope to see this within a reasonable time frame? Or is it considered dead already?
I was hoping to do some work on updating the base kexec patches last week, but all my time was taken up by other things. I'll try to start on it this week.
I also have some updates, and so will submit my patchset after Geoff. Pratyush's patch should also be synced.
Please coordinate between all of you and post a consistent series. It was painful enough to review the KVM stuff independently of the rest of the series, so I'd really really want to see a single series that addresses the issue end to end.
Thanks,
M.
On 29/05/15 06:38, AKASHI Takahiro wrote:
The current kvm implementation on arm64 does cpu-specific initialization at system boot, and has no way to gracefully shutdown a core in terms of kvm. This prevents, especially, kexec from rebooting the system on a boot core in EL2.
This patch adds a cpu tear-down function and also puts an existing cpu-init code into a separate function, kvm_arch_hardware_disable() and kvm_arch_hardware_enable() respectively. We don't need arm64-specific cpu hotplug hook any more.
I think we do... on platforms where cpuidle uses psci to temporarily turn off cores that aren't in use, we lose the el2 state. This hotplug hook restores the state, even if there are no vms running.
This patch prevents me from running vms on such a platform, qemu gives:
kvm [1500]: Unsupported exception type: 6264688KVM internal error.
Suberror: 0
kvmtool goes with a more dramatic:
KVM exit reason: 17 ("KVM_EXIT_INTERNAL_ERROR")
Disabling CONFIG_ARM_CPUIDLE solves this problem.
(Sorry to revive an old thread - I've been using v4 of this patch for the hibernate/suspend-to-disk series).
Since this patch modifies common part of code between arm and arm64, one stub definition, __cpu_reset_hyp_mode(), is added on arm side to avoid compiling errors.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index fd085ec..afe6263 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -1136,6 +1136,11 @@ ENTRY(kvm_call_hyp)
 	ret
 ENDPROC(kvm_call_hyp)

+ENTRY(kvm_call_reset)
+	hvc	#HVC_RESET
+	ret
+ENDPROC(kvm_call_reset)
+
 .macro invalid_vector	label, target
 	.align	2
 \label:
@@ -1179,10 +1184,27 @@ el1_sync:			// Guest trapped into EL2
 	cmp	x18, #HVC_GET_VECTORS
 	b.ne	1f
 	mrs	x0, vbar_el2
-	b	2f
-
-1:	/* Default to HVC_CALL_HYP. */
+	b	do_eret
+
+	/* jump into trampoline code */
+1:	cmp	x18, #HVC_RESET
+	b.ne	2f
+	/*
+	 * Entry point is:
+	 *	TRAMPOLINE_VA
+	 *	+ (__kvm_hyp_reset - (__hyp_idmap_text_start & PAGE_MASK))
+	 */
+	adrp	x2, __kvm_hyp_reset
+	add	x2, x2, #:lo12:__kvm_hyp_reset
+	adrp	x3, __hyp_idmap_text_start
+	add	x3, x3, #:lo12:__hyp_idmap_text_start
+	and	x3, x3, PAGE_MASK
+	sub	x2, x2, x3
+	ldr	x3, =TRAMPOLINE_VA
+	add	x2, x2, x3
+	br	x2			// no return
+
+2:	/* Default to HVC_CALL_HYP. */
What was the reason not to use kvm_call_hyp(__kvm_hyp_reset, ...)? (You mentioned you wanted to at [0] - I can't find the details in the archive)
Thanks,
James
[0] http://lists.infradead.org/pipermail/kexec/2015-April/335533.html
 	push	lr, xzr

 /*
@@ -1196,7 +1218,9 @@ el1_sync:			// Guest trapped into EL2
 	blr	lr

 	pop	lr, xzr
-2:	eret
+
+do_eret:
+	eret

el1_trap:
 /*
On 10/12/2015 10:28 PM, James Morse wrote:
On 29/05/15 06:38, AKASHI Takahiro wrote:
The current kvm implementation on arm64 does cpu-specific initialization at system boot, and has no way to gracefully shutdown a core in terms of kvm. This prevents, especially, kexec from rebooting the system on a boot core in EL2.
This patch adds a cpu tear-down function and also puts an existing cpu-init code into a separate function, kvm_arch_hardware_disable() and kvm_arch_hardware_enable() respectively. We don't need arm64-specific cpu hotplug hook any more.
I think we do... on platforms where cpuidle uses psci to temporarily turn off cores that aren't in use, we lose the el2 state. This hotplug hook restores the state, even if there are no vms running.
If I understand you correctly, with or without my patch, kvm doesn't work under cpuidle anyway. Right?
If so, saving/restoring cpu states (or at least, kicking cpu hotplug hooks) is cpuidle driver's responsibility, isn't it?
-Takahiro AKASHI
This patch prevents me from running vms on such a platform, qemu gives:
kvm [1500]: Unsupported exception type: 6264688KVM internal error.
Suberror: 0
kvmtool goes with a more dramatic:
KVM exit reason: 17 ("KVM_EXIT_INTERNAL_ERROR")
Disabling CONFIG_ARM_CPUIDLE solves this problem.
(Sorry to revive an old thread - I've been using v4 of this patch for the hibernate/suspend-to-disk series).
Since this patch modifies common part of code between arm and arm64, one stub definition, __cpu_reset_hyp_mode(), is added on arm side to avoid compiling errors.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index fd085ec..afe6263 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -1136,6 +1136,11 @@ ENTRY(kvm_call_hyp)
 	ret
 ENDPROC(kvm_call_hyp)

+ENTRY(kvm_call_reset)
+	hvc	#HVC_RESET
+	ret
+ENDPROC(kvm_call_reset)
+
 .macro invalid_vector	label, target
 	.align	2
 \label:
@@ -1179,10 +1184,27 @@ el1_sync:			// Guest trapped into EL2
 	cmp	x18, #HVC_GET_VECTORS
 	b.ne	1f
 	mrs	x0, vbar_el2
-	b	2f
-
-1:	/* Default to HVC_CALL_HYP. */
+	b	do_eret
+
+	/* jump into trampoline code */
+1:	cmp	x18, #HVC_RESET
+	b.ne	2f
+	/*
+	 * Entry point is:
+	 *	TRAMPOLINE_VA
+	 *	+ (__kvm_hyp_reset - (__hyp_idmap_text_start & PAGE_MASK))
+	 */
+	adrp	x2, __kvm_hyp_reset
+	add	x2, x2, #:lo12:__kvm_hyp_reset
+	adrp	x3, __hyp_idmap_text_start
+	add	x3, x3, #:lo12:__hyp_idmap_text_start
+	and	x3, x3, PAGE_MASK
+	sub	x2, x2, x3
+	ldr	x3, =TRAMPOLINE_VA
+	add	x2, x2, x3
+	br	x2			// no return
+
+2:	/* Default to HVC_CALL_HYP. */
What was the reason not to use kvm_call_hyp(__kvm_hyp_reset, ...)? (You mentioned you wanted to at [0] - I can't find the details in the archive)
Thanks,
James
[0] http://lists.infradead.org/pipermail/kexec/2015-April/335533.html
 	push	lr, xzr

 /*
@@ -1196,7 +1218,9 @@ el1_sync:			// Guest trapped into EL2
 	blr	lr

 	pop	lr, xzr
-2:	eret
+
+do_eret:
+	eret

el1_trap:
 /*
Hi,
On 13/10/15 06:38, AKASHI Takahiro wrote:
On 10/12/2015 10:28 PM, James Morse wrote:
On 29/05/15 06:38, AKASHI Takahiro wrote:
The current kvm implementation on arm64 does cpu-specific initialization at system boot, and has no way to gracefully shutdown a core in terms of kvm. This prevents, especially, kexec from rebooting the system on a boot core in EL2.
This patch adds a cpu tear-down function and also puts an existing cpu-init code into a separate function, kvm_arch_hardware_disable() and kvm_arch_hardware_enable() respectively. We don't need arm64-specific cpu hotplug hook any more.
I think we do... on platforms where cpuidle uses psci to temporarily turn off cores that aren't in use, we lose the el2 state. This hotplug hook restores the state, even if there are no vms running.
I've just noticed there are two cpu notifiers - we may be referring to different ones. (hyp_init_cpu_pm_nb and hyp_init_cpu_nb)
If I understand you correctly, with or without my patch, kvm doesn't work under cpuidle anyway. Right?
It works with, and without, v4. This patch v5 causes the problem.
If so, saving/restoring cpu states (or at least, kicking cpu hotplug hooks) is cpuidle driver's responsibility, isn't it?
Yes - but with v5, (at least one of) the hotplug hooks isn't having the same effect as before:
Before v5, cpu_init_hyp_mode() is called via cpu_notify() each time cpu_suspend() suspends/wakes-up the core.
Logically it should be the 'pm' notifier that does this work:
	if (cmd == CPU_PM_EXIT &&
	    __hyp_get_vectors() == hyp_default_vectors) {
		cpu_init_hyp_mode(NULL);
		return NOTIFY_OK;
With v5, kvm_arch_hardware_enable() isn't called each time cpu_suspend() cycles the core.
The problem appears to be this hunk, affecting the above code:
-	if (cmd == CPU_PM_EXIT &&
-	    __hyp_get_vectors() == hyp_default_vectors) {
-		cpu_init_hyp_mode(NULL);
+	if (cmd == CPU_PM_EXIT && kvm_arm_get_running_vcpu()) {
+		kvm_arch_hardware_enable();
Changing this to just rename cpu_init_hyp_mode() to kvm_arch_hardware_enable() solves the problem.
Presumably kvm_arm_get_running_vcpu() evaluates to false before the first vm is started, meaning no vms can be started if pm events occur before starting the first vm.
Sorry I blamed the wrong cpu notifier hook - I didn't realise there were two!
Thanks,
James
This patch prevents me from running vms on such a platform, qemu gives:
kvm [1500]: Unsupported exception type: 6264688KVM internal error.
Suberror: 0
kvmtool goes with a more dramatic:
KVM exit reason: 17 ("KVM_EXIT_INTERNAL_ERROR")
Disabling CONFIG_ARM_CPUIDLE solves this problem.
(Sorry to revive an old thread - I've been using v4 of this patch for the hibernate/suspend-to-disk series).
Since this patch modifies common part of code between arm and arm64, one stub definition, __cpu_reset_hyp_mode(), is added on arm side to avoid compiling errors.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index fd085ec..afe6263 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -1136,6 +1136,11 @@ ENTRY(kvm_call_hyp)
 	ret
 ENDPROC(kvm_call_hyp)

+ENTRY(kvm_call_reset)
+	hvc	#HVC_RESET
+	ret
+ENDPROC(kvm_call_reset)
+
 .macro invalid_vector	label, target
 	.align	2
 \label:
@@ -1179,10 +1184,27 @@ el1_sync:			// Guest trapped into EL2
 	cmp	x18, #HVC_GET_VECTORS
 	b.ne	1f
 	mrs	x0, vbar_el2
-	b	2f
-
-1:	/* Default to HVC_CALL_HYP. */
+	b	do_eret
+
+	/* jump into trampoline code */
+1:	cmp	x18, #HVC_RESET
+	b.ne	2f
+	/*
+	 * Entry point is:
+	 *	TRAMPOLINE_VA
+	 *	+ (__kvm_hyp_reset - (__hyp_idmap_text_start & PAGE_MASK))
+	 */
+	adrp	x2, __kvm_hyp_reset
+	add	x2, x2, #:lo12:__kvm_hyp_reset
+	adrp	x3, __hyp_idmap_text_start
+	add	x3, x3, #:lo12:__hyp_idmap_text_start
+	and	x3, x3, PAGE_MASK
+	sub	x2, x2, x3
+	ldr	x3, =TRAMPOLINE_VA
+	add	x2, x2, x3
+	br	x2			// no return
+
+2:	/* Default to HVC_CALL_HYP. */
What was the reason not to use kvm_call_hyp(__kvm_hyp_reset, ...)? (You mentioned you wanted to at [0] - I can't find the details in the archive)
Thanks,
James
[0] http://lists.infradead.org/pipermail/kexec/2015-April/335533.html
James,
I reproduced the problem on Hikey board, but
On 10/13/2015 07:43 PM, James Morse wrote:
Hi,
On 13/10/15 06:38, AKASHI Takahiro wrote:
On 10/12/2015 10:28 PM, James Morse wrote:
On 29/05/15 06:38, AKASHI Takahiro wrote:
The current kvm implementation on arm64 does cpu-specific initialization at system boot, and has no way to gracefully shutdown a core in terms of kvm. This prevents, especially, kexec from rebooting the system on a boot core in EL2.
This patch adds a cpu tear-down function and also puts an existing cpu-init code into a separate function, kvm_arch_hardware_disable() and kvm_arch_hardware_enable() respectively. We don't need arm64-specific cpu hotplug hook any more.
I think we do... on platforms where cpuidle uses psci to temporarily turn off cores that aren't in use, we lose the el2 state. This hotplug hook restores the state, even if there are no vms running.
I've just noticed there are two cpu notifiers - we may be referring to different ones. (hyp_init_cpu_pm_nb and hyp_init_cpu_nb)
If I understand you correctly, with or without my patch, kvm doesn't work under cpuidle anyway. Right?
It works with, and without, v4. This patch v5 causes the problem.
If so, saving/restoring cpu states (or at least, kicking cpu hotplug hooks) is cpuidle driver's responsibility, isn't it?
Yes - but with v5, (at least one of) the hotplug hooks isn't having the same effect as before:
Before v5, cpu_init_hyp_mode() is called via cpu_notify() each time cpu_suspend() suspends/wakes-up the core.
Logically it should be the 'pm' notifier that does this work:
	if (cmd == CPU_PM_EXIT &&
	    __hyp_get_vectors() == hyp_default_vectors) {
		cpu_init_hyp_mode(NULL);
		return NOTIFY_OK;
With v5, kvm_arch_hardware_enable() isn't called each time cpu_suspend() cycles the core.
Right. I misunderstood kvm_arm_get_running_vcpu().
The problem appears to be this hunk, affecting the above code:
-	if (cmd == CPU_PM_EXIT &&
-	    __hyp_get_vectors() == hyp_default_vectors) {
-		cpu_init_hyp_mode(NULL);
+	if (cmd == CPU_PM_EXIT && kvm_arm_get_running_vcpu()) {
+		kvm_arch_hardware_enable();
Changing this to just rename cpu_init_hyp_mode() to kvm_arch_hardware_enable() solves the problem.
The change that you suggested won't work well because kvm needs to maintain cpu state with 'kvm_usage_count' using kvm_arch_hardware_enable/disable(). With this change applied, you won't be able to do kexec.
I'm going to try more generic PM hook.
Thanks, -Takahiro AKASHI
Presumably kvm_arm_get_running_vcpu() evaluates to false before the first vm is started, meaning no vms can be started if pm events occur before starting the first vm.
Sorry I blamed the wrong cpu notifier hook - I didn't realise there were two!
Thanks,
James
This patch prevents me from running vms on such a platform, qemu gives:
kvm [1500]: Unsupported exception type: 6264688KVM internal error.
Suberror: 0
kvmtool goes with a more dramatic:
KVM exit reason: 17 ("KVM_EXIT_INTERNAL_ERROR")
Disabling CONFIG_ARM_CPUIDLE solves this problem.
(Sorry to revive an old thread - I've been using v4 of this patch for the hibernate/suspend-to-disk series).
Since this patch modifies common part of code between arm and arm64, one stub definition, __cpu_reset_hyp_mode(), is added on arm side to avoid compiling errors.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index fd085ec..afe6263 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -1136,6 +1136,11 @@ ENTRY(kvm_call_hyp)
 	ret
 ENDPROC(kvm_call_hyp)

+ENTRY(kvm_call_reset)
+	hvc	#HVC_RESET
+	ret
+ENDPROC(kvm_call_reset)
+
 .macro invalid_vector	label, target
 	.align	2
 \label:
@@ -1179,10 +1184,27 @@ el1_sync:			// Guest trapped into EL2
 	cmp	x18, #HVC_GET_VECTORS
 	b.ne	1f
 	mrs	x0, vbar_el2
-	b	2f
-
-1:	/* Default to HVC_CALL_HYP. */
+	b	do_eret
+
+	/* jump into trampoline code */
+1:	cmp	x18, #HVC_RESET
+	b.ne	2f
+	/*
+	 * Entry point is:
+	 *	TRAMPOLINE_VA
+	 *	+ (__kvm_hyp_reset - (__hyp_idmap_text_start & PAGE_MASK))
+	 */
+	adrp	x2, __kvm_hyp_reset
+	add	x2, x2, #:lo12:__kvm_hyp_reset
+	adrp	x3, __hyp_idmap_text_start
+	add	x3, x3, #:lo12:__hyp_idmap_text_start
+	and	x3, x3, PAGE_MASK
+	sub	x2, x2, x3
+	ldr	x3, =TRAMPOLINE_VA
+	add	x2, x2, x3
+	br	x2			// no return
+
+2:	/* Default to HVC_CALL_HYP. */
What was the reason not to use kvm_call_hyp(__kvm_hyp_reset, ...)? (You mentioned you wanted to at [0] - I can't find the details in the archive)
Thanks,
James
[0] http://lists.infradead.org/pipermail/kexec/2015-April/335533.html
By introducing kvm cpu hotplug, this dependency is no longer needed.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
---
 arch/arm64/kvm/Kconfig | 1 -
 1 file changed, 1 deletion(-)
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 13ac79e..5105e29 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -19,7 +19,6 @@ if VIRTUALIZATION
 config KVM
 	bool "Kernel-based Virtual Machine (KVM) support"
 	depends on OF
-	depends on !KEXEC
 	select MMU_NOTIFIER
 	select PREEMPT_NOTIFIERS
 	select ANON_INODES