There are several situations where the VMM is involved in handling synchronous external instruction or data aborts, and the VMM often needs to inject external aborts into the guest. Rather than manipulating individual registers with the KVM_SET_ONE_REG API, an easier way is to use the KVM_SET_VCPU_EVENTS API.

This patchset adds two new features to the KVM_SET_VCPU_EVENTS API:

1. Extend KVM_SET_VCPU_EVENTS to support external instruction aborts.
2. Allow userspace to emulate ESR_ELx.ISS by supplying ESR_ELx. This also
   paves the way for userspace to emulate ESR_ELx.ISS2 in the future.
The UAPI change for #1 is straightforward. However, I would appreciate some feedback on the ABI change for #2:
  struct kvm_vcpu_events {
	struct {
		__u8 serror_pending;
		__u8 serror_has_esr;
		__u8 ext_dabt_pending;
		__u8 ext_iabt_pending;
		__u8 ext_abt_has_esr;
		__u8 pad[3];
		__u64 serror_esr;
		__u64 ext_abt_esr;	/* <= +8 bytes */
	} exception;
	__u32 reserved[10];		/* <= -8 bytes */
  };

The offset of kvm_vcpu_events.reserved changes, and the size of the exception struct changes. We can't assume that userspace will never access reserved, or that it will never use sizeof(exception). Theoretically this is an ABI break, so I want to call it out and ask whether a new ABI is needed for feature #2 — for example, whether it is worthwhile to introduce an exception_v2 or kvm_vcpu_events_v2.
Based on commit 7b8346bd9fce6 ("KVM: arm64: Don't attempt vLPI mappings when vPE allocation is disabled")
Jiaqi Yan (3):
  KVM: arm64: Allow userspace to supply ESR when injecting SEA
  KVM: selftests: Test injecting external abort with ISS
  Documentation: kvm: update UAPI for injecting SEA

Raghavendra Rao Ananta (1):
  KVM: arm64: Allow userspace to inject external instruction abort

 Documentation/virt/kvm/api.rst                |  48 +++--
 arch/arm64/include/asm/kvm_emulate.h          |   9 +-
 arch/arm64/include/uapi/asm/kvm.h             |   7 +-
 arch/arm64/kvm/arm.c                          |   1 +
 arch/arm64/kvm/emulate-nested.c               |   6 +-
 arch/arm64/kvm/guest.c                        |  42 ++--
 arch/arm64/kvm/inject_fault.c                 |  16 +-
 include/uapi/linux/kvm.h                      |   1 +
 tools/arch/arm64/include/uapi/asm/kvm.h       |   7 +-
 .../selftests/kvm/arm64/external_aborts.c     | 191 +++++++++++++++---
 .../testing/selftests/kvm/arm64/inject_iabt.c |  98 +++++++++
 11 files changed, 352 insertions(+), 74 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/arm64/inject_iabt.c
From: Raghavendra Rao Ananta <rananta@google.com>

When a guest triggers a synchronous external instruction abort, the VMM may need to inject an instruction abort back into the guest. However, KVM_SET_VCPU_EVENTS currently only allows injecting external data aborts.

Extend the KVM_SET_VCPU_EVENTS ioctl to allow userspace to inject external instruction aborts into the guest.
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 arch/arm64/include/uapi/asm/kvm.h |  3 ++-
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/guest.c            | 15 ++++++++++-----
 include/uapi/linux/kvm.h          |  1 +
 4 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index ed5f3892674c7..643e8c4825451 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -184,8 +184,9 @@ struct kvm_vcpu_events {
 		__u8 serror_pending;
 		__u8 serror_has_esr;
 		__u8 ext_dabt_pending;
+		__u8 ext_iabt_pending;
 		/* Align it to 8 bytes */
-		__u8 pad[5];
+		__u8 pad[4];
 		__u64 serror_esr;
 	} exception;
 	__u32 reserved[12];
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7a1a8210ff918..3d86d0ae7898b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -315,6 +315,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_IRQ_LINE_LAYOUT_2:
 	case KVM_CAP_ARM_NISV_TO_USER:
 	case KVM_CAP_ARM_INJECT_EXT_DABT:
+	case KVM_CAP_ARM_INJECT_EXT_IABT:
 	case KVM_CAP_SET_GUEST_DEBUG:
 	case KVM_CAP_VCPU_ATTRIBUTES:
 	case KVM_CAP_PTP_KVM:
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 16ba5e9ac86c3..d3c7b5015f20e 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -826,9 +826,9 @@ int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 	events->exception.serror_esr = vcpu_get_vsesr(vcpu);

 	/*
-	 * We never return a pending ext_dabt here because we deliver it to
-	 * the virtual CPU directly when setting the event and it's no longer
-	 * 'pending' at this point.
+	 * We never return a pending ext_dabt or ext_iabt here because we
+	 * deliver it to the virtual CPU directly when setting the event
+	 * and it's no longer 'pending' at this point.
 	 */

 	return 0;
@@ -853,16 +853,21 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 	bool serror_pending = events->exception.serror_pending;
 	bool has_esr = events->exception.serror_has_esr;
 	bool ext_dabt_pending = events->exception.ext_dabt_pending;
+	bool ext_iabt_pending = events->exception.ext_iabt_pending;
 	u64 esr = events->exception.serror_esr;
 	int ret = 0;

+	/* DABT and IABT cannot happen at the same time. */
+	if (ext_dabt_pending && ext_iabt_pending)
+		return -EINVAL;
 	/*
 	 * Immediately commit the pending SEA to the vCPU's architectural
 	 * state which is necessary since we do not return a pending SEA
 	 * to userspace via KVM_GET_VCPU_EVENTS.
 	 */
-	if (ext_dabt_pending) {
-		ret = kvm_inject_sea_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+	if (ext_dabt_pending || ext_iabt_pending) {
+		ret = kvm_inject_sea(vcpu, ext_iabt_pending,
+				     kvm_vcpu_get_hfar(vcpu));
 		commit_pending_events(vcpu);
 	}

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index e4e566ff348b0..a7b047f95887c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -957,6 +957,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_ARM_EL2_E2H0 241
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243
+#define KVM_CAP_ARM_INJECT_EXT_IABT 245

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
When the VMM needs to replay a synchronous external abort (SEA) into the guest, it may want to emulate ESR_ELx.ISS and ESR_ELx.ISS2.

Extend the KVM_SET_VCPU_EVENTS ioctl to allow userspace to supply ESR_ELx when injecting an SEA into the guest, similar to what userspace can already do when injecting an SError.
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 arch/arm64/include/asm/kvm_emulate.h |  9 ++++++--
 arch/arm64/include/uapi/asm/kvm.h    |  6 ++++--
 arch/arm64/kvm/emulate-nested.c      |  6 +++---
 arch/arm64/kvm/guest.c               | 31 +++++++++++++++++-----------
 arch/arm64/kvm/inject_fault.c        | 16 +++++++-------
 5 files changed, 41 insertions(+), 27 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index fa8a08a1ccd5c..80315d21cda13 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -46,9 +46,14 @@ void kvm_skip_instr32(struct kvm_vcpu *vcpu);

 void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 int kvm_inject_serror_esr(struct kvm_vcpu *vcpu, u64 esr);
-int kvm_inject_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr);
+int kvm_inject_sea_esr(struct kvm_vcpu *vcpu, bool iabt, u64 addr, u64 esr);
 void kvm_inject_size_fault(struct kvm_vcpu *vcpu);

+static inline int kvm_inject_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr)
+{
+	return kvm_inject_sea_esr(vcpu, iabt, addr, 0);
+}
+
 static inline int kvm_inject_sea_dabt(struct kvm_vcpu *vcpu, u64 addr)
 {
 	return kvm_inject_sea(vcpu, false, addr);
@@ -76,7 +81,7 @@ void kvm_vcpu_wfi(struct kvm_vcpu *vcpu);
 void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu);
 int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2);
 int kvm_inject_nested_irq(struct kvm_vcpu *vcpu);
-int kvm_inject_nested_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr);
+int kvm_inject_nested_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr, u64 esr);
 int kvm_inject_nested_serror(struct kvm_vcpu *vcpu, u64 esr);

 static inline void kvm_inject_nested_sve_trap(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 643e8c4825451..406d6e67df822 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -185,11 +185,13 @@ struct kvm_vcpu_events {
 		__u8 serror_has_esr;
 		__u8 ext_dabt_pending;
 		__u8 ext_iabt_pending;
+		__u8 ext_abt_has_esr;
 		/* Align it to 8 bytes */
-		__u8 pad[4];
+		__u8 pad[3];
 		__u64 serror_esr;
+		__u64 ext_abt_esr;
 	} exception;
-	__u32 reserved[12];
+	__u32 reserved[10];
 };

 struct kvm_arm_copy_mte_tags {
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 90cb4b7ae0ff7..fa5e7fc701bfb 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2827,10 +2827,10 @@ int kvm_inject_nested_irq(struct kvm_vcpu *vcpu)
 	return kvm_inject_nested(vcpu, 0, except_type_irq);
 }

-int kvm_inject_nested_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr)
+int kvm_inject_nested_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr, u64 esr)
 {
-	u64 esr = FIELD_PREP(ESR_ELx_EC_MASK,
-			     iabt ? ESR_ELx_EC_IABT_LOW : ESR_ELx_EC_DABT_LOW);
+	esr |= FIELD_PREP(ESR_ELx_EC_MASK,
+			  iabt ? ESR_ELx_EC_IABT_LOW : ESR_ELx_EC_DABT_LOW);
 	esr |= ESR_ELx_FSC_EXTABT | ESR_ELx_IL;

 	vcpu_write_sys_reg(vcpu, FAR_EL2, addr);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index d3c7b5015f20e..018bf0d5277ec 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -847,27 +847,40 @@ static void commit_pending_events(struct kvm_vcpu *vcpu)
 	kvm_call_hyp(__kvm_adjust_pc, vcpu);
 }

+#define ESR_EXCLUDE_ISS(name) ((name##_has_esr) && ((name##_esr) & ~ESR_ELx_ISS_MASK))
+
 int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 			      struct kvm_vcpu_events *events)
 {
 	bool serror_pending = events->exception.serror_pending;
-	bool has_esr = events->exception.serror_has_esr;
+	bool serror_has_esr = events->exception.serror_has_esr;
 	bool ext_dabt_pending = events->exception.ext_dabt_pending;
 	bool ext_iabt_pending = events->exception.ext_iabt_pending;
-	u64 esr = events->exception.serror_esr;
+	bool ext_abt_has_esr = events->exception.ext_abt_has_esr;
+	u64 serror_esr = events->exception.serror_esr;
+	u64 ext_abt_esr = events->exception.ext_abt_esr;
 	int ret = 0;

+	if (!cpus_have_final_cap(ARM64_HAS_RAS_EXTN) &&
+	    (serror_has_esr || ext_abt_has_esr))
+		return -EINVAL;
+
+	if (ESR_EXCLUDE_ISS(serror) || ESR_EXCLUDE_ISS(ext_abt))
+		return -EINVAL;
+
 	/* DABT and IABT cannot happen at the same time. */
 	if (ext_dabt_pending && ext_iabt_pending)
 		return -EINVAL;
+
 	/*
 	 * Immediately commit the pending SEA to the vCPU's architectural
 	 * state which is necessary since we do not return a pending SEA
 	 * to userspace via KVM_GET_VCPU_EVENTS.
 	 */
 	if (ext_dabt_pending || ext_iabt_pending) {
-		ret = kvm_inject_sea(vcpu, ext_iabt_pending,
-				     kvm_vcpu_get_hfar(vcpu));
+		ret = kvm_inject_sea_esr(vcpu, ext_iabt_pending,
+					 kvm_vcpu_get_hfar(vcpu),
+					 ext_abt_has_esr ? ext_abt_esr : 0);
 		commit_pending_events(vcpu);
 	}

@@ -877,14 +890,8 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 	if (!serror_pending)
 		return 0;

-	if (!cpus_have_final_cap(ARM64_HAS_RAS_EXTN) && has_esr)
-		return -EINVAL;
-
-	if (has_esr && (esr & ~ESR_ELx_ISS_MASK))
-		return -EINVAL;
-
-	if (has_esr)
-		ret = kvm_inject_serror_esr(vcpu, esr);
+	if (serror_has_esr)
+		ret = kvm_inject_serror_esr(vcpu, serror_esr);
 	else
 		ret = kvm_inject_serror(vcpu);

diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 6745f38b64f9c..410b2d6f6ae4c 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -102,11 +102,11 @@ static bool effective_sctlr2_nmea(struct kvm_vcpu *vcpu)
 	return __effective_sctlr2_bit(vcpu, SCTLR2_EL1_NMEA_SHIFT);
 }

-static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
+static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt,
+			 unsigned long addr, u64 esr)
 {
 	unsigned long cpsr = *vcpu_cpsr(vcpu);
 	bool is_aarch32 = vcpu_mode_is_32bit(vcpu);
-	u64 esr = 0;

 	/* This delight is brought to you by FEAT_DoubleFault2. */
 	if (effective_sctlr2_ease(vcpu))
@@ -199,12 +199,12 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt, u32 addr)
 	vcpu_write_sys_reg(vcpu, far, FAR_EL1);
 }

-static void __kvm_inject_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr)
+static void __kvm_inject_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr, u64 esr)
 {
 	if (vcpu_el1_is_32bit(vcpu))
 		inject_abt32(vcpu, iabt, addr);
 	else
-		inject_abt64(vcpu, iabt, addr);
+		inject_abt64(vcpu, iabt, addr, esr);
 }

 static bool kvm_sea_target_is_el2(struct kvm_vcpu *vcpu)
@@ -219,14 +219,14 @@ static bool kvm_sea_target_is_el2(struct kvm_vcpu *vcpu)
 		(__vcpu_sys_reg(vcpu, HCRX_EL2) & HCRX_EL2_TMEA);
 }

-int kvm_inject_sea(struct kvm_vcpu *vcpu, bool iabt, u64 addr)
+int kvm_inject_sea_esr(struct kvm_vcpu *vcpu, bool iabt, u64 addr, u64 esr)
 {
 	lockdep_assert_held(&vcpu->mutex);

 	if (is_nested_ctxt(vcpu) && kvm_sea_target_is_el2(vcpu))
-		return kvm_inject_nested_sea(vcpu, iabt, addr);
+		return kvm_inject_nested_sea(vcpu, iabt, addr, esr);

-	__kvm_inject_sea(vcpu, iabt, addr);
+	__kvm_inject_sea(vcpu, iabt, addr, esr);
 	return 1;
 }

@@ -237,7 +237,7 @@ void kvm_inject_size_fault(struct kvm_vcpu *vcpu)
 	addr = kvm_vcpu_get_fault_ipa(vcpu);
 	addr |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);

-	__kvm_inject_sea(vcpu, kvm_vcpu_trap_is_iabt(vcpu), addr);
+	__kvm_inject_sea(vcpu, kvm_vcpu_trap_is_iabt(vcpu), addr, 0);

 	/*
 	 * If AArch64 or LPAE, set FSC to 0 to indicate an Address
Test that userspace can use KVM_SET_VCPU_EVENTS to inject an external instruction or data abort, with a customized ISS if provided.

The test injects fake external aborts without a real instruction or data abort happening on the VCPU, so only certain ESR_EL1 bits are expected and asserted.
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 tools/arch/arm64/include/uapi/asm/kvm.h       |   7 +-
 .../selftests/kvm/arm64/external_aborts.c     | 191 +++++++++++++++---
 .../testing/selftests/kvm/arm64/inject_iabt.c |  98 +++++++++
 3 files changed, 264 insertions(+), 32 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/arm64/inject_iabt.c
diff --git a/tools/arch/arm64/include/uapi/asm/kvm.h b/tools/arch/arm64/include/uapi/asm/kvm.h
index ed5f3892674c7..406d6e67df822 100644
--- a/tools/arch/arm64/include/uapi/asm/kvm.h
+++ b/tools/arch/arm64/include/uapi/asm/kvm.h
@@ -184,11 +184,14 @@ struct kvm_vcpu_events {
 		__u8 serror_pending;
 		__u8 serror_has_esr;
 		__u8 ext_dabt_pending;
+		__u8 ext_iabt_pending;
+		__u8 ext_abt_has_esr;
 		/* Align it to 8 bytes */
-		__u8 pad[5];
+		__u8 pad[3];
 		__u64 serror_esr;
+		__u64 ext_abt_esr;
 	} exception;
-	__u32 reserved[12];
+	__u32 reserved[10];
 };

 struct kvm_arm_copy_mte_tags {
diff --git a/tools/testing/selftests/kvm/arm64/external_aborts.c b/tools/testing/selftests/kvm/arm64/external_aborts.c
index 062bf84cced13..a6396ff4f84da 100644
--- a/tools/testing/selftests/kvm/arm64/external_aborts.c
+++ b/tools/testing/selftests/kvm/arm64/external_aborts.c
@@ -9,10 +9,14 @@

 #define MMIO_ADDR		0x8000000ULL
 #define EXPECTED_SERROR_ISS	(ESR_ELx_ISV | 0x1d1ed)
+#define FAKE_DABT_ISS		(ESR_ELx_ISV | ESR_ELx_SAS | ESR_ELx_SF | \
+				 ESR_ELx_AR | ESR_ELx_CM | ESR_ELx_FnV | \
+				 ESR_ELx_EA)
+#define FAKE_IABT_ISS		(ESR_ELx_ISV | ESR_ELx_FnV | ESR_ELx_EA)

 static u64 expected_abort_pc;

-static void expect_sea_handler(struct ex_regs *regs)
+static void expect_dabt_handler(struct ex_regs *regs)
 {
 	u64 esr = read_sysreg(esr_el1);

@@ -23,19 +27,60 @@ static void expect_sea_handler(struct ex_regs *regs)
 	GUEST_DONE();
 }

+static void expect_dabt_esr_handler(struct ex_regs *regs)
+{
+	u64 esr = read_sysreg(esr_el1);
+
+	GUEST_PRINTF("Handling guest data abort\n");
+	GUEST_PRINTF("  ESR_EL1=%#lx\n", esr);
+
+	GUEST_ASSERT_EQ(ESR_ELx_EC(esr), ESR_ELx_EC_DABT_CUR);
+	GUEST_ASSERT_EQ(esr & ESR_ELx_FSC_TYPE, ESR_ELx_FSC_EXTABT);
+	GUEST_ASSERT_EQ(esr & FAKE_DABT_ISS, FAKE_DABT_ISS);
+
+	GUEST_DONE();
+}
+
+static void expect_iabt_esr_handler(struct ex_regs *regs)
+{
+	u64 esr = read_sysreg(esr_el1);
+
+	GUEST_PRINTF("Handling guest instruction abort\n");
+	GUEST_PRINTF("  ESR_EL1=%#lx\n", esr);
+
+	GUEST_ASSERT_EQ(ESR_ELx_EC(esr), ESR_ELx_EC_IABT_CUR);
+	GUEST_ASSERT_EQ(esr & ESR_ELx_FSC_TYPE, ESR_ELx_FSC_EXTABT);
+	GUEST_ASSERT_EQ(esr & FAKE_IABT_ISS, FAKE_IABT_ISS);
+
+	GUEST_DONE();
+}
+
 static void unexpected_dabt_handler(struct ex_regs *regs)
 {
 	GUEST_FAIL("Unexpected data abort at PC: %lx\n", regs->pc);
 }

-static struct kvm_vm *vm_create_with_dabt_handler(struct kvm_vcpu **vcpu, void *guest_code,
-						  handler_fn dabt_handler)
+static void unexpected_iabt_handler(struct ex_regs *regs)
+{
+	GUEST_FAIL("Unexpected instruction abort at PC: %lx\n", regs->pc);
+}
+
+static struct kvm_vm *vm_create_with_extabt_handler(struct kvm_vcpu **vcpu,
+						    void *guest_code,
+						    handler_fn dabt_handler,
+						    handler_fn iabt_handler)
 {
 	struct kvm_vm *vm = vm_create_with_one_vcpu(vcpu, guest_code);

 	vm_init_descriptor_tables(vm);
 	vcpu_init_descriptor_tables(*vcpu);
-	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ESR_ELx_EC_DABT_CUR, dabt_handler);
+
+	if (dabt_handler)
+		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
+					ESR_ELx_EC_DABT_CUR, dabt_handler);
+	if (iabt_handler)
+		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
+					ESR_ELx_EC_IABT_CUR, iabt_handler);

 	virt_map(vm, MMIO_ADDR, MMIO_ADDR, 1);

@@ -50,6 +95,26 @@ static void vcpu_inject_sea(struct kvm_vcpu *vcpu)
 	vcpu_events_set(vcpu, &events);
 }

+static void vcpu_inject_dabt_esr(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_events events = {};
+
+	events.exception.ext_dabt_pending = true;
+	events.exception.ext_abt_has_esr = true;
+	events.exception.ext_abt_esr = FAKE_DABT_ISS;
+	vcpu_events_set(vcpu, &events);
+}
+
+static void vcpu_inject_iabt_esr(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_events events = {};
+
+	events.exception.ext_iabt_pending = true;
+	events.exception.ext_abt_has_esr = true;
+	events.exception.ext_abt_esr = FAKE_IABT_ISS;
+	vcpu_events_set(vcpu, &events);
+}
+
 static bool vcpu_has_ras(struct kvm_vcpu *vcpu)
 {
 	u64 pfr0 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
@@ -79,17 +144,24 @@ static void __vcpu_run_expect(struct kvm_vcpu *vcpu, unsigned int cmd)
 {
 	struct ucall uc;

-	vcpu_run(vcpu);
-	switch (get_ucall(vcpu, &uc)) {
-	case UCALL_ABORT:
-		REPORT_GUEST_ASSERT(uc);
-		break;
-	default:
-		if (uc.cmd == cmd)
-			return;
-
-		TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
-	}
+	do {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+			break;
+		case UCALL_PRINTF:
+			ksft_print_msg("From guest: %s", uc.buffer);
+			break;
+		default:
+			if (uc.cmd == cmd) {
+				ksft_print_msg("Expect ucall: %lu\n", uc.cmd);
+				return;
+			}
+
+			TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
+		}
+	} while (true);
 }

 static void vcpu_run_expect_done(struct kvm_vcpu *vcpu)
@@ -122,8 +194,10 @@ static noinline void test_mmio_abort_guest(void)
 static void test_mmio_abort(void)
 {
 	struct kvm_vcpu *vcpu;
-	struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_mmio_abort_guest,
-							expect_sea_handler);
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_mmio_abort_guest,
+							  expect_dabt_handler,
+							  unexpected_iabt_handler);
 	struct kvm_run *run = vcpu->run;

 	vcpu_run(vcpu);
@@ -157,8 +231,10 @@ static void test_mmio_nisv_guest(void)
 static void test_mmio_nisv(void)
 {
 	struct kvm_vcpu *vcpu;
-	struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_mmio_nisv_guest,
-							unexpected_dabt_handler);
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_mmio_nisv_guest,
+							  unexpected_dabt_handler,
+							  unexpected_iabt_handler);

 	TEST_ASSERT(_vcpu_run(vcpu), "Expected nonzero return code from KVM_RUN");
 	TEST_ASSERT_EQ(errno, ENOSYS);
@@ -173,8 +249,10 @@ static void test_mmio_nisv(void)
 static void test_mmio_nisv_abort(void)
 {
 	struct kvm_vcpu *vcpu;
-	struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_mmio_nisv_guest,
-							expect_sea_handler);
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_mmio_nisv_guest,
+							  expect_dabt_handler,
+							  unexpected_iabt_handler);
 	struct kvm_run *run = vcpu->run;

 	vm_enable_cap(vm, KVM_CAP_ARM_NISV_TO_USER, 1);
@@ -205,8 +283,10 @@ static void test_serror_masked_guest(void)
 static void test_serror_masked(void)
 {
 	struct kvm_vcpu *vcpu;
-	struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_serror_masked_guest,
-							unexpected_dabt_handler);
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_serror_masked_guest,
+							  unexpected_dabt_handler,
+							  unexpected_iabt_handler);

 	vm_install_exception_handler(vm, VECTOR_ERROR_CURRENT, unexpected_serror_handler);

@@ -240,8 +320,10 @@ static void test_serror_guest(void)
 static void test_serror(void)
 {
 	struct kvm_vcpu *vcpu;
-	struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_serror_guest,
-							unexpected_dabt_handler);
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_serror_guest,
+							  unexpected_dabt_handler,
+							  unexpected_iabt_handler);

 	vm_install_exception_handler(vm, VECTOR_ERROR_CURRENT, expect_serror_handler);

@@ -264,8 +346,10 @@ static void test_serror_emulated_guest(void)
 static void test_serror_emulated(void)
 {
 	struct kvm_vcpu *vcpu;
-	struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_serror_emulated_guest,
-							unexpected_dabt_handler);
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_serror_emulated_guest,
+							  unexpected_dabt_handler,
+							  unexpected_iabt_handler);

 	vm_install_exception_handler(vm, VECTOR_ERROR_CURRENT, expect_serror_handler);

@@ -290,8 +374,10 @@ static void test_mmio_ease_guest(void)
 static void test_mmio_ease(void)
 {
 	struct kvm_vcpu *vcpu;
-	struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_mmio_ease_guest,
-							unexpected_dabt_handler);
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_mmio_ease_guest,
+							  unexpected_dabt_handler,
+							  unexpected_iabt_handler);
 	struct kvm_run *run = vcpu->run;
 	u64 pfr1;

@@ -305,7 +391,7 @@ static void test_mmio_ease(void)
 	 * SCTLR2_ELx.EASE changes the exception vector to the SError vector but
 	 * doesn't further modify the exception context (e.g. ESR_ELx, FAR_ELx).
 	 */
-	vm_install_exception_handler(vm, VECTOR_ERROR_CURRENT, expect_sea_handler);
+	vm_install_exception_handler(vm, VECTOR_ERROR_CURRENT, expect_dabt_handler);

 	vcpu_run(vcpu);
 	TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_MMIO);
@@ -318,6 +404,49 @@ static void test_mmio_ease(void)
 	kvm_vm_free(vm);
 }

+static void test_ext_abt_guest(void)
+{
+	GUEST_FAIL("Guest should only run (I|D)ABT handler");
+}
+
+static void test_inject_data_abort(void)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_ext_abt_guest,
+							  expect_dabt_esr_handler,
+							  unexpected_iabt_handler);
+	vcpu_inject_dabt_esr(vcpu);
+	vcpu_run_expect_done(vcpu);
+	kvm_vm_free(vm);
+}
+
+static void vcpu_inject_invalid_abt(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_events events = {};
+	int r;
+
+	events.exception.ext_iabt_pending = true;
+	events.exception.ext_dabt_pending = true;
+
+	r = __vcpu_ioctl(vcpu, KVM_SET_VCPU_EVENTS, &events);
+	TEST_ASSERT(r && errno == EINVAL,
+		    KVM_IOCTL_ERROR(KVM_SET_VCPU_EVENTS, r));
+}
+
+static void test_inject_instruction_abort(void)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm = vm_create_with_extabt_handler(&vcpu,
+							  test_ext_abt_guest,
+							  unexpected_dabt_handler,
+							  expect_iabt_esr_handler);
+	vcpu_inject_invalid_abt(vcpu);
+	vcpu_inject_iabt_esr(vcpu);
+	vcpu_run_expect_done(vcpu);
+	kvm_vm_free(vm);
+}
+
 int main(void)
 {
 	test_mmio_abort();
@@ -327,4 +456,6 @@ int main(void)
 	test_serror_masked();
 	test_serror_emulated();
 	test_mmio_ease();
+	test_inject_instruction_abort();
+	test_inject_data_abort();
 }
diff --git a/tools/testing/selftests/kvm/arm64/inject_iabt.c b/tools/testing/selftests/kvm/arm64/inject_iabt.c
new file mode 100644
index 0000000000000..0c7999e5ba5b3
--- /dev/null
+++ b/tools/testing/selftests/kvm/arm64/inject_iabt.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * inject_iabt.c - Tests for injecting instruction aborts into guest.
+ */
+
+#include "processor.h"
+#include "test_util.h"
+
+static void expect_iabt_handler(struct ex_regs *regs)
+{
+	u64 esr = read_sysreg(esr_el1);
+
+	GUEST_PRINTF("Handling Guest SEA\n");
+	GUEST_PRINTF("  ESR_EL1=%#lx\n", esr);
+
+	GUEST_ASSERT_EQ(ESR_ELx_EC(esr), ESR_ELx_EC_IABT_CUR);
+	GUEST_ASSERT_EQ(esr & ESR_ELx_FSC_TYPE, ESR_ELx_FSC_EXTABT);
+
+	GUEST_DONE();
+}
+
+static void guest_code(void)
+{
+	GUEST_FAIL("Guest should only run SEA handler");
+}
+
+static void vcpu_run_expect_done(struct kvm_vcpu *vcpu)
+{
+	struct ucall uc;
+	bool guest_done = false;
+
+	do {
+		vcpu_run(vcpu);
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+			break;
+		case UCALL_PRINTF:
+			ksft_print_msg("From guest: %s", uc.buffer);
+			break;
+		case UCALL_DONE:
+			ksft_print_msg("Guest done gracefully!\n");
+			guest_done = true;
+			break;
+		default:
+			TEST_FAIL("Unexpected ucall: %lu", uc.cmd);
+		}
+	} while (!guest_done);
+}
+
+static void vcpu_inject_ext_iabt(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_events events = {};
+
+	events.exception.ext_iabt_pending = true;
+	vcpu_events_set(vcpu, &events);
+}
+
+static void vcpu_inject_invalid_abt(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_events events = {};
+	int r;
+
+	events.exception.ext_iabt_pending = true;
+	events.exception.ext_dabt_pending = true;
+
+	ksft_print_msg("Injecting invalid external abort events\n");
+	r = __vcpu_ioctl(vcpu, KVM_SET_VCPU_EVENTS, &events);
+	TEST_ASSERT(r && errno == EINVAL,
+		    KVM_IOCTL_ERROR(KVM_SET_VCPU_EVENTS, r));
+}
+
+static void test_inject_iabt(void)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+
+	vm_init_descriptor_tables(vm);
+	vcpu_init_descriptor_tables(vcpu);
+
+	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
+				ESR_ELx_EC_IABT_CUR, expect_iabt_handler);
+
+	vcpu_inject_invalid_abt(vcpu);
+
+	vcpu_inject_ext_iabt(vcpu);
+	vcpu_run_expect_done(vcpu);
+
+	kvm_vm_free(vm);
+}
+
+int main(void)
+{
+	test_inject_iabt();
+	return 0;
+}
Update the KVM UAPI documentation for:

- KVM_CAP_ARM_INJECT_EXT_IABT: userspace can inject an external instruction
  abort into the guest.
- ext_abt_has_esr and ext_abt_esr: userspace can supplement the ISS fields
  while injecting an SEA, for both data and instruction aborts.
Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
 Documentation/virt/kvm/api.rst | 48 +++++++++++++++++++++++++---------
 1 file changed, 36 insertions(+), 12 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 53e0179d52949..8190ffb145c37 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -1236,9 +1236,11 @@ directly to the virtual CPU).
 			__u8 serror_pending;
 			__u8 serror_has_esr;
 			__u8 ext_dabt_pending;
+			__u8 ext_iabt_pending;
+			__u8 ext_abt_has_esr;
 			/* Align it to 8 bytes */
-			__u8 pad[5];
+			__u8 pad[3];
 			__u64 serror_esr;
+			__u64 ext_abt_esr;
 		} exception;
-		__u32 reserved[12];
+		__u32 reserved[10];
 	};
@@ -1292,20 +1294,42 @@ ARM64:
User space may need to inject several types of events to the guest.
+Inject SError
+~~~~~~~~~~~~~
+
 Set the pending SError exception state for this VCPU. It is not possible to
 'cancel' an Serror that has been made pending.
-If the guest performed an access to I/O memory which could not be handled by
-userspace, for example because of missing instruction syndrome decode
-information or because there is no device mapped at the accessed IPA, then
-userspace can ask the kernel to inject an external abort using the address
-from the exiting fault on the VCPU. It is a programming error to set
-ext_dabt_pending after an exit which was not either KVM_EXIT_MMIO or
-KVM_EXIT_ARM_NISV. This feature is only available if the system supports
-KVM_CAP_ARM_INJECT_EXT_DABT. This is a helper which provides commonality in
-how userspace reports accesses for the above cases to guests, across different
-userspace implementations. Nevertheless, userspace can still emulate all Arm
-exceptions by manipulating individual registers using the KVM_SET_ONE_REG API.
+Inject SEA (synchronous external abort)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- If the guest performed an access to I/O memory which could not be handled by
+  userspace, for example because of missing instruction syndrome decode
+  information or because there is no device mapped at the accessed IPA.
+
+- If the guest consumed an uncorrectable memory error on guest-owned memory,
+  and RAS in the Trusted Firmware chooses to notify the PE with an SEA, KVM
+  has to handle it when host APEI is unable to claim the SEA. If userspace has
+  enabled KVM_CAP_ARM_SEA_TO_USER, KVM returns to userspace with
+  KVM_EXIT_ARM_SEA.
+
+For the cases above, userspace can ask the kernel to replay either an external
+data abort (by setting ext_dabt_pending) or an external instruction abort
+(by setting ext_iabt_pending) into the faulting VCPU. Userspace can provide
+the Instruction Specific Syndrome (ISS) in the ext_abt_esr field to supplement
+the ESR register value being injected into the faulting VCPU. KVM will use the
+address from the existing fault on the VCPU. Setting both ext_dabt_pending and
+ext_iabt_pending at the same time will return -EINVAL. Setting anything
+outside the ISS (bits [24:0]) of ext_abt_esr will return -EINVAL.
+
+It is a programming error to set ext_dabt_pending or ext_iabt_pending after an
+exit which was not KVM_EXIT_MMIO, KVM_EXIT_ARM_NISV or KVM_EXIT_ARM_SEA.
+Injecting an SEA for data and instruction aborts is only available if KVM
+supports KVM_CAP_ARM_INJECT_EXT_DABT and KVM_CAP_ARM_INJECT_EXT_IABT
+respectively.
+
+This is a helper which provides commonality in how userspace reports accesses
+for the above cases to guests, across different userspace implementations.
+Nevertheless, userspace can still emulate all Arm exceptions by manipulating
+individual registers using the KVM_SET_ONE_REG API.
See KVM_GET_VCPU_EVENTS for the data structure.