I've removed the RFC tag from this version of the series, but the items that I'm looking for feedback on remain the same:
- The userspace ABI, in particular:
  - The vector length used for the SVE registers, access to the SVE
    registers and access to ZA and (if available) ZT0 depending on the
    current state of PSTATE.{SM,ZA}.
  - The use of a single finalisation for both SVE and SME.
- The addition of a control for enabling fine grained traps in a similar manner to FGU but without the UNDEF. I'm not clear if this is desired at all, and at present this requires symmetric read and write traps like FGU. That seemed like it might be desired from an implementation point of view, but we already have one case where we enable an asymmetric trap (for ARM64_WORKAROUND_AMPERE_AC03_CPU_38) and it seems generally useful to be able to enable traps asymmetrically.
This series implements support for SME use in non-protected KVM guests. Much of this is very similar to SVE; the main additional challenge that SME presents is that it introduces a new vector length, similar to the SVE vector length, and two new controls which change the registers seen by guests:
- PSTATE.ZA enables the ZA matrix register and, if SME2 is supported,
  the ZT0 LUT register.
- PSTATE.SM enables streaming mode, a new floating point mode which
  uses the SVE register set with the separately configured SME vector
  length. In streaming mode implementation of the FFR register is
  optional.
It is also permitted to build systems which support SME without SVE; in this case, when not in streaming mode, no SVE registers or instructions are available. Further, there is no requirement that there be any overlap in the sets of vector lengths supported by SVE and SME in a system; this is expected to be a common situation in practical systems.
Since there is a new vector length to configure we introduce a new feature parallel to the existing SVE one, with a new pseudo register for the streaming mode vector length. Due to the overlap with SVE caused by streaming mode, rather than finalising SME as a separate feature we use the existing SVE finalisation to also finalise SME; a new define KVM_ARM_VCPU_VEC is provided to help make user code clearer. Finalising SVE and SME separately would introduce complications with register access, since finalising SVE makes the SVE registers writeable by userspace and doing multiple finalisations results in an error being reported. Dealing with a state where the SVE registers are writeable due to one of SVE or SME being finalised but may have their VL changed by the other being finalised seems like needless complexity with minimal practical utility; it seems clearer to just express directly in the ABI that only one finalisation can be done.
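For illustration, a minimal sketch of the resulting VMM flow (using the KVM_ARM_VCPU_SME feature bit and KVM_REG_ARM64_SME_VLS pseudo register added by this series; error handling and the usual KVM_ARM_PREFERRED_TARGET query are omitted):

	struct kvm_vcpu_init init = { 0 };
	__u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
	struct kvm_one_reg reg = {
		.id   = KVM_REG_ARM64_SME_VLS,
		.addr = (__u64)vqs,
	};
	int what = KVM_ARM_VCPU_VEC;

	init.features[0] = (1 << KVM_ARM_VCPU_SVE) | (1 << KVM_ARM_VCPU_SME);
	ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);

	/* Defaults to the best set the host supports, restrict if desired */
	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);

	/* A single finalisation covers both the SVE and SME vector lengths */
	ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &what);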
Access to the floating point registers follows the architecture:
- When both SVE and SME are present:
  - If PSTATE.SM == 0 the vector length used for the Z and P registers
    is the SVE vector length.
  - If PSTATE.SM == 1 the vector length used for the Z and P registers
    is the SME vector length.
- If only SME is present:
  - If PSTATE.SM == 0 the Z and P registers are inaccessible and the
    floating point state is accessed via the encodings for the V
    registers.
  - If PSTATE.SM == 1 the vector length used for the Z and P registers
    is the SME vector length.
- The SME specific ZA and ZT0 registers are only accessible if SVCR.ZA
  is 1.
The VMM must understand this; in particular, when loading state SVCR should be configured before other state. It should be noted that while the architecture refers to PSTATE.SM and PSTATE.ZA, these PSTATE bits are not preserved in SPSR_ELx and are only accessible via SVCR.
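As a hedged sketch of that ordering (SVCR is accessed as a system register; the encoding shown is the architectural S3_3_C4_C2_2, and vcpu_fd and saved_svcr are assumed from context):

	/* Restore SVCR first so the correct register view is selected */
	__u64 svcr = saved_svcr;
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 3, 4, 2, 2),	/* SVCR */
		.addr = (__u64)&svcr,
	};

	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	/* ... now load Z/P/FFR and, if SVCR.ZA was set, ZA and ZT0 ... */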
There are a large number of subfeatures for SME, most of which only offer additional instructions but some of which (SME2 and FA64) add architectural state. These are configured via the ID registers as per usual.
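For example, a VMM that wants to hide a subfeature from the guest could do something along these lines (a sketch assuming the writable ID register support; ID_AA64SMFR0_EL1 is encoded as op0=3, op1=0, CRn=0, CRm=4, op2=5 and its FA64 field is bit 63 per the architecture):

	/* Hide FA64 from the guest by clearing ID_AA64SMFR0_EL1.FA64 */
	__u64 smfr0;
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 0, 0, 4, 5),	/* ID_AA64SMFR0_EL1 */
		.addr = (__u64)&smfr0,
	};

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	smfr0 &= ~(1ULL << 63);			/* clear FA64 */
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);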
Protected KVM is supported, with the implementation maintaining the existing restriction that the hypervisor will refuse to run a guest if streaming mode or ZA is enabled. This both simplifies the code and avoids the need to allocate storage for host ZA and ZT0 state; there seems to be little practical use case for supporting this and the memory usage would be non-trivial.
The new KVM_ARM_VCPU_VEC feature and ZA and ZT0 registers have not been added to the get-reg-list selftest, the idea of supporting additional features there without restructuring the program to generate all possible feature combinations has been rejected. I will post a separate series which does that restructuring.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
Changes in v6:
- Rebase onto v6.16-rc3.
- Link to v5: https://lore.kernel.org/r/20250417-kvm-arm64-sme-v5-0-f469a2d5f574@kernel.or...
Changes in v5:
- Rebase onto v6.15-rc2.
- Add pKVM guest support.
- Always restore SVCR.
- Link to v4: https://lore.kernel.org/r/20250214-kvm-arm64-sme-v4-0-d64a681adcc2@kernel.or...
Changes in v4:
- Rebase onto v6.14-rc2 and Mark Rutland's fixes.
- Expose SME to nested guests.
- Additional cleanups and test fixes following on from the rebase.
- Flush register state on VMM PSTATE.{SM,ZA}.
- Link to v3: https://lore.kernel.org/r/20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.or...
Changes in v3:
- Rebase onto v6.12-rc2.
- Link to v2: https://lore.kernel.org/r/20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.or...
Changes in v2:
- Rebase onto v6.7-rc3.
- Configure subfeatures based on host system only.
- Complete nVHE support.
- There was some snafu with sending v1 out, it didn't make it to the lists but in case it hit people's inboxes I'm sending as v2.
---
Mark Brown (28):
      arm64/fpsimd: Update FA64 and ZT0 enables when loading SME state
      arm64/fpsimd: Decide to save ZT0 and streaming mode FFR at bind time
      arm64/fpsimd: Check enable bit for FA64 when saving EFI state
      arm64/fpsimd: Determine maximum virtualisable SME vector length
      KVM: arm64: Introduce non-UNDEF FGT control
      KVM: arm64: Pay attention to FFR parameter in SVE save and load
      KVM: arm64: Pull ctxt_has_ helpers to start of sysreg-sr.h
      KVM: arm64: Move SVE state access macros after feature test macros
      KVM: arm64: Rename SVE finalization constants to be more general
      KVM: arm64: Document the KVM ABI for SME
      KVM: arm64: Define internal features for SME
      KVM: arm64: Rename sve_state_reg_region
      KVM: arm64: Store vector lengths in an array
      KVM: arm64: Implement SME vector length configuration
      KVM: arm64: Support SME control registers
      KVM: arm64: Support TPIDR2_EL0
      KVM: arm64: Support SME identification registers for guests
      KVM: arm64: Support SME priority registers
      KVM: arm64: Provide assembly for SME register access
      KVM: arm64: Support userspace access to streaming mode Z and P registers
      KVM: arm64: Flush register state on writes to SVCR.SM and SVCR.ZA
      KVM: arm64: Expose SME specific state to userspace
      KVM: arm64: Context switch SME state for guests
      KVM: arm64: Handle SME exceptions
      KVM: arm64: Expose SME to nested guests
      KVM: arm64: Provide interface for configuring and enabling SME for guests
      KVM: arm64: selftests: Add SME system registers to get-reg-list
      KVM: arm64: selftests: Add SME to set_id_regs test
 Documentation/virt/kvm/api.rst                   | 117 +++++++----
 arch/arm64/include/asm/fpsimd.h                  |  26 +++
 arch/arm64/include/asm/kvm_emulate.h             |   6 +
 arch/arm64/include/asm/kvm_host.h                | 168 ++++++++++++---
 arch/arm64/include/asm/kvm_hyp.h                 |   5 +-
 arch/arm64/include/asm/kvm_pkvm.h                |   2 +-
 arch/arm64/include/asm/vncr_mapping.h            |   2 +
 arch/arm64/include/uapi/asm/kvm.h                |  33 +++
 arch/arm64/kernel/cpufeature.c                   |   2 -
 arch/arm64/kernel/fpsimd.c                       |  89 ++++----
 arch/arm64/kvm/arm.c                             |  10 +
 arch/arm64/kvm/fpsimd.c                          |  28 ++-
 arch/arm64/kvm/guest.c                           | 252 ++++++++++++++++++++---
 arch/arm64/kvm/handle_exit.c                     |  14 ++
 arch/arm64/kvm/hyp/fpsimd.S                      |  28 ++-
 arch/arm64/kvm/hyp/include/hyp/switch.h          | 175 ++++++++++++++--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h       |  97 +++++----
 arch/arm64/kvm/hyp/nvhe/hyp-main.c               |  86 ++++++--
 arch/arm64/kvm/hyp/nvhe/pkvm.c                   |  81 ++++++--
 arch/arm64/kvm/hyp/nvhe/switch.c                 |   4 +-
 arch/arm64/kvm/hyp/nvhe/sys_regs.c               |   6 +
 arch/arm64/kvm/hyp/vhe/switch.c                  |  17 +-
 arch/arm64/kvm/nested.c                          |   3 +-
 arch/arm64/kvm/reset.c                           | 156 ++++++++++----
 arch/arm64/kvm/sys_regs.c                        | 140 ++++++++++++-
 include/uapi/linux/kvm.h                         |   1 +
 tools/testing/selftests/kvm/arm64/get-reg-list.c |  32 ++-
 tools/testing/selftests/kvm/arm64/set_id_regs.c  |  29 ++-
 28 files changed, 1315 insertions(+), 294 deletions(-)
---
base-commit: 7204503c922cfdb4fcfce4a4ab61f4558a01a73b
change-id: 20230301-kvm-arm64-sme-06a1246d3636
Best regards,
--
Mark Brown <broonie@kernel.org>
Currently we enable EL0 and EL1 access to FA64 and ZT0 at boot and leave them enabled throughout the runtime of the system. When we add KVM support we will need to make this configuration dynamic, since these features may be disabled for some KVM guests. Since the host kernel saves the floating point state for non-protected guests, and we wish to avoid KVM having to reload the floating point state needlessly on guest reentry, let's move the configuration of these enables to the floating point state reload.
We provide a helper which does the configuration as part of a read/modify/write operation along with the configuration of the task VL, then update the floating point state load and the SME access trap to use it. We also remove the setting of the enable bits from the CPU feature identification and resume paths. There will be a small overhead from setting the enables one at a time but this should be negligible in the context of the state load or access trap. In order to avoid compiler warnings due to unused variables in !CONFIG_ARM64_SME cases we avoid storing the vector length in temporary variables.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/fpsimd.h | 14 ++++++++++++
 arch/arm64/kernel/cpufeature.c  |  2 --
 arch/arm64/kernel/fpsimd.c      | 47 +++++++++++------------------------------
 3 files changed, 26 insertions(+), 37 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index b8cf0ea43cc0..b4359f942621 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -428,6 +428,18 @@ static inline size_t sme_state_size(struct task_struct const *task) return __sme_state_size(task_get_sme_vl(task)); }
+#define sme_cond_update_smcr(vl, fa64, zt0, reg)		\
+	do {							\
+		u64 __old = read_sysreg_s((reg));		\
+		u64 __new = vl;					\
+		if (fa64)					\
+			__new |= SMCR_ELx_FA64;			\
+		if (zt0)					\
+			__new |= SMCR_ELx_EZT0;			\
+		if (__old != __new)				\
+			write_sysreg_s(__new, (reg));		\
+	} while (0)
+
 #else
static inline void sme_user_disable(void) { BUILD_BUG(); } @@ -456,6 +468,8 @@ static inline size_t sme_state_size(struct task_struct const *task) return 0; }
+#define sme_cond_update_smcr(val, fa64, zt0, reg) do { } while (0)
+
 #endif /* ! CONFIG_ARM64_SME */
/* For use by EFI runtime services calls only */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index b34044e20128..397ef8693f5f 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2911,7 +2911,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .type = ARM64_CPUCAP_SYSTEM_FEATURE, .capability = ARM64_SME_FA64, .matches = has_cpuid_feature, - .cpu_enable = cpu_enable_fa64, ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP) }, { @@ -2919,7 +2918,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .type = ARM64_CPUCAP_SYSTEM_FEATURE, .capability = ARM64_SME2, .matches = has_cpuid_feature, - .cpu_enable = cpu_enable_sme2, ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2) }, #endif /* CONFIG_ARM64_SME */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index c37f02d7194e..653c0dec6b18 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -392,11 +392,15 @@ static void task_fpsimd_load(void)
/* Restore SME, override SVE register configuration if needed */ if (system_supports_sme()) { - unsigned long sme_vl = task_get_sme_vl(current); - - /* Ensure VL is set up for restoring data */ + /* + * Ensure VL is set up for restoring data. KVM might + * disable subfeatures so we reset them each time. + */ if (test_thread_flag(TIF_SME)) - sme_set_vq(sve_vq_from_vl(sme_vl) - 1); + sme_cond_update_smcr(sve_vq_from_vl(task_get_sme_vl(current)) - 1, + system_supports_fa64(), + system_supports_sme2(), + SYS_SMCR_EL1);
write_sysreg_s(current->thread.svcr, SYS_SVCR);
@@ -1237,26 +1241,6 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p) isb(); }
-void cpu_enable_sme2(const struct arm64_cpu_capabilities *__always_unused p) -{ - /* This must be enabled after SME */ - BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME); - - /* Allow use of ZT0 */ - write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_EZT0_MASK, - SYS_SMCR_EL1); -} - -void cpu_enable_fa64(const struct arm64_cpu_capabilities *__always_unused p) -{ - /* This must be enabled after SME */ - BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME); - - /* Allow use of FA64 */ - write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK, - SYS_SMCR_EL1); -} - void __init sme_setup(void) { struct vl_info *info = &vl_info[ARM64_VEC_SME]; @@ -1300,17 +1284,9 @@ void __init sme_setup(void)
void sme_suspend_exit(void) { - u64 smcr = 0; - if (!system_supports_sme()) return;
- if (system_supports_fa64()) - smcr |= SMCR_ELx_FA64; - if (system_supports_sme2()) - smcr |= SMCR_ELx_EZT0; - - write_sysreg_s(smcr, SYS_SMCR_EL1); write_sysreg_s(0, SYS_SMPRI_EL1); }
@@ -1425,9 +1401,10 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs) WARN_ON(1);
if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) { - unsigned long vq_minus_one = - sve_vq_from_vl(task_get_sme_vl(current)) - 1; - sme_set_vq(vq_minus_one); + sme_cond_update_smcr(sve_vq_from_vl(task_get_sme_vl(current)) - 1, + system_supports_fa64(), + system_supports_sme2(), + SYS_SMCR_EL1);
fpsimd_bind_task_to_cpu(); } else {
Some parts of the SME state are optional, enabled by additional features on top of the base FEAT_SME and controlled with enable bits in SMCR_ELx. We unconditionally enable these for the host but for KVM we will allow the feature set exposed to guests to be restricted by the VMM. These are the FFR register (FEAT_SME_FA64) and ZT0 (FEAT_SME2).
We defer saving of guest floating point state for non-protected guests to the host kernel. We also want to avoid having to reconfigure the guest floating point state if nothing used the floating point state while running the host. If the guest was running with the optional features disabled then traps will be enabled for them, so the host kernel will need to skip accessing that state when saving state for the guest.
Support this by moving the decision about saving this state to the point where we bind floating point state to the CPU, adding a new field to struct cpu_fp_state which uses the enable bits in SMCR_ELx to flag which features are enabled.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/fpsimd.h |  1 +
 arch/arm64/kernel/fpsimd.c      | 10 ++++++++--
 arch/arm64/kvm/fpsimd.c         |  1 +
 3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index b4359f942621..0ecdd7dcf623 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -87,6 +87,7 @@ struct cpu_fp_state { void *sme_state; u64 *svcr; u64 *fpmr; + u64 sme_features; unsigned int sve_vl; unsigned int sme_vl; enum fp_type *fp_type; diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 653c0dec6b18..77f9dfaffe8b 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -477,12 +477,12 @@ static void fpsimd_save_user_state(void)
if (*svcr & SVCR_ZA_MASK) sme_save_state(last->sme_state, - system_supports_sme2()); + last->sme_features & SMCR_ELx_EZT0);
/* If we are in streaming mode override regular SVE. */ if (*svcr & SVCR_SM_MASK) { save_sve_regs = true; - save_ffr = system_supports_fa64(); + save_ffr = last->sme_features & SMCR_ELx_FA64; vl = last->sme_vl; } } @@ -1655,6 +1655,12 @@ static void fpsimd_bind_task_to_cpu(void) last->to_save = FP_STATE_CURRENT; current->thread.fpsimd_cpu = smp_processor_id();
+ last->sme_features = 0; + if (system_supports_fa64()) + last->sme_features |= SMCR_ELx_FA64; + if (system_supports_sme2()) + last->sme_features |= SMCR_ELx_EZT0; + /* * Toggle SVE and SME trapping for userspace if needed, these * are serialsied by ret_to_user(). diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 8f6c8f57c6b9..d67e2002d354 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -106,6 +106,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); fp_state.fp_type = &vcpu->arch.fp_type; + fp_state.sme_features = 0;
if (vcpu_has_sve(vcpu)) fp_state.to_save = FP_STATE_SVE;
Currently when deciding if we need to save FFR when in streaming mode prior to EFI calls we check if FA64 is supported by the system. Since KVM guest support will mean that FA64 might be enabled and disabled at runtime, switch to checking if FA64 is enabled in SMCR_EL1 instead.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/fpsimd.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 77f9dfaffe8b..de2897d6208c 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1908,6 +1908,11 @@ static bool efi_sm_state; * either doing something wrong or you need to propose some refactoring. */
+static bool fa64_enabled(void)
+{
+	return read_sysreg_s(SYS_SMCR_EL1) & SMCR_ELx_FA64;
+}
+
 /*
  * __efi_fpsimd_begin(): prepare FPSIMD for making an EFI runtime services call
  */
@@ -1940,7 +1945,7 @@ void __efi_fpsimd_begin(void)
 				/*
 				 * Unless we have FA64 FFR does not
 				 * exist in streaming mode.
 				 */
-				if (!system_supports_fa64())
+				if (!fa64_enabled())
 					ffr = !(svcr & SVCR_SM_MASK);
 			}
@@ -1988,7 +1993,7 @@ void __efi_fpsimd_end(void) * Unless we have FA64 FFR does not * exist in streaming mode. */ - if (!system_supports_fa64()) + if (!fa64_enabled()) ffr = false; } }
As with SVE we can only virtualise SME vector lengths that are supported by all CPUs in the system, so implement checks similar to those for SVE. Since, unlike SVE, there are no specific vector lengths that are architecturally required, the handling is subtly different: we report a system where this happens with a maximum virtualisable vector length of -1.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/fpsimd.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index de2897d6208c..fbc586813f6a 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1244,7 +1244,8 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p) void __init sme_setup(void) { struct vl_info *info = &vl_info[ARM64_VEC_SME]; - int min_bit, max_bit; + DECLARE_BITMAP(tmp_map, SVE_VQ_MAX); + int min_bit, max_bit, b;
if (!system_supports_sme()) return; @@ -1274,12 +1275,32 @@ void __init sme_setup(void) */ set_sme_default_vl(find_supported_vector_length(ARM64_VEC_SME, 32));
+ bitmap_andnot(tmp_map, info->vq_partial_map, info->vq_map, + SVE_VQ_MAX); + + b = find_last_bit(tmp_map, SVE_VQ_MAX); + if (b >= SVE_VQ_MAX) + /* All VLs virtualisable */ + info->max_virtualisable_vl = SVE_VQ_MAX; + else if (b == SVE_VQ_MAX - 1) + /* No virtualisable VLs */ + info->max_virtualisable_vl = -1; + else + info->max_virtualisable_vl = sve_vl_from_vq(__bit_to_vq(b + 1)); + + if (info->max_virtualisable_vl > info->max_vl) + info->max_virtualisable_vl = info->max_vl; + pr_info("SME: minimum available vector length %u bytes per vector\n", info->min_vl); pr_info("SME: maximum available vector length %u bytes per vector\n", info->max_vl); pr_info("SME: default vector length %u bytes per vector\n", get_sme_default_vl()); + + /* KVM decides whether to support mismatched systems. Just warn here: */ + if (info->max_virtualisable_vl < info->max_vl) + pr_warn("SME: unvirtualisable vector lengths present\n"); }
void sme_suspend_exit(void)
We have support for determining a set of fine grained traps to enable for the guest which is tied to the support for injecting UNDEFs for undefined features. This means that we can't use the mechanism for system registers which should be present but need emulation, such as SMPRI_EL1, which should be accessible when SME is present but whose Priority field should be RAZ if SME priority support is absent.
Add an additional set of fine grained traps, fgt, mirroring the existing fgu array. We use the same format as for FGU, where we always set the bit for the trap in the array. This makes it clear what is being explicitly managed and keeps the code consistent.
We do not convert the handling of ARM64_WORKAROUND_AMPERE_AC03_CPU_38 to this mechanism since it only enables a write trap, while when implementing the existing UNDEFs we assumed that the read and write trap enablement would be shared (this being the overwhelmingly common case).
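As a rough sketch of how a later patch might use this (hypothetical code: the group and bit names are assumptions patterned on the existing FGU users, and the mask/nmask handling in compute_trap_clr_set() deals with nSMPRI_EL1 being an inverted-polarity trap bit):

	/* Hypothetical: request read and write traps of SMPRI_EL1 so that
	 * accesses can be emulated rather than made UNDEF. */
	kvm->arch.fgt[HFGRTR_GROUP] |= HFGRTR_EL2_nSMPRI_EL1;
	kvm->arch.fgt[HFGWTR_GROUP] |= HFGWTR_EL2_nSMPRI_EL1;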
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       | 6 ++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 7 ++++---
 2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index d27079968341..911da41e6bc0 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -302,6 +302,12 @@ struct kvm_arch { */ u64 fgu[__NR_FGT_GROUP_IDS__];
+ /* + * Additional FGTs to enable for the guests, eg. for emulated + * registers, + */ + u64 fgt[__NR_FGT_GROUP_IDS__]; + /* * Stage 2 paging state for VMs with nested S2 using a virtual * VMID. diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 2ad57b117385..987f5c4c5747 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -283,9 +283,9 @@ static inline void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) id; \ })
-#define compute_undef_clr_set(vcpu, kvm, reg, clr, set) \ +#define compute_trap_clr_set(vcpu, kvm, trap, reg, clr, set) \ do { \ - u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)]; \ + u64 hfg = kvm->arch.trap[reg_to_fgt_group_id(reg)]; \ struct fgt_masks *m = reg_to_fgt_masks(reg); \ set |= hfg & m->mask; \ clr |= hfg & m->nmask; \ @@ -301,7 +301,8 @@ static inline void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) \ compute_clr_set(vcpu, reg, c, s); \ \ - compute_undef_clr_set(vcpu, kvm, reg, c, s); \ + compute_trap_clr_set(vcpu, kvm, fgu, reg, c, s); \ + compute_trap_clr_set(vcpu, kvm, fgt, reg, c, s); \ \ val = m->nmask; \ val |= s; \
The hypervisor copies of the SVE save and load functions are prototyped with a third argument specifying whether FFR should be accessed, but the assembly functions overwrite whatever is supplied and unconditionally access FFR. Remove this and use the supplied parameter.
This has no effect currently since FFR is always present for SVE but will be important for SME.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kvm/hyp/fpsimd.S | 2 --
 1 file changed, 2 deletions(-)
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index e950875e31ce..6e16cbfc5df2 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -21,13 +21,11 @@ SYM_FUNC_START(__fpsimd_restore_state) SYM_FUNC_END(__fpsimd_restore_state)
 SYM_FUNC_START(__sve_restore_state)
-	mov	x2, #1
 	sve_load 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_restore_state)
 SYM_FUNC_START(__sve_save_state)
-	mov	x2, #1
 	sve_save 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_save_state)
Rather than add earlier prototypes of specific ctxt_has_ helpers let's just pull all their definitions to the top of sysreg-sr.h so they're all available to all the individual save/restore functions.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 82 +++++++++++++++---------------
 1 file changed, 40 insertions(+), 42 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index 4d0dbea4c56f..223819e95405 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -16,8 +16,6 @@ #include <asm/kvm_hyp.h> #include <asm/kvm_mmu.h>
-static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt); - static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) { struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu; @@ -28,6 +26,46 @@ static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) return vcpu; }
+static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt); + + return kvm_has_mte(kern_hyp_va(vcpu->kvm)); +} + +static inline bool ctxt_has_s1pie(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu; + + if (!cpus_have_final_cap(ARM64_HAS_S1PIE)) + return false; + + vcpu = ctxt_to_vcpu(ctxt); + return kvm_has_s1pie(kern_hyp_va(vcpu->kvm)); +} + +static inline bool ctxt_has_tcrx(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu; + + if (!cpus_have_final_cap(ARM64_HAS_TCR2)) + return false; + + vcpu = ctxt_to_vcpu(ctxt); + return kvm_has_tcr2(kern_hyp_va(vcpu->kvm)); +} + +static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu; + + if (!system_supports_poe()) + return false; + + vcpu = ctxt_to_vcpu(ctxt); + return kvm_has_s1poe(kern_hyp_va(vcpu->kvm)); +} + static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) { return host_data_ptr(host_ctxt) != ctxt; @@ -69,46 +107,6 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); }
-static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt) -{ - struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt); - - return kvm_has_mte(kern_hyp_va(vcpu->kvm)); -} - -static inline bool ctxt_has_s1pie(struct kvm_cpu_context *ctxt) -{ - struct kvm_vcpu *vcpu; - - if (!cpus_have_final_cap(ARM64_HAS_S1PIE)) - return false; - - vcpu = ctxt_to_vcpu(ctxt); - return kvm_has_s1pie(kern_hyp_va(vcpu->kvm)); -} - -static inline bool ctxt_has_tcrx(struct kvm_cpu_context *ctxt) -{ - struct kvm_vcpu *vcpu; - - if (!cpus_have_final_cap(ARM64_HAS_TCR2)) - return false; - - vcpu = ctxt_to_vcpu(ctxt); - return kvm_has_tcr2(kern_hyp_va(vcpu->kvm)); -} - -static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt) -{ - struct kvm_vcpu *vcpu; - - if (!system_supports_poe()) - return false; - - vcpu = ctxt_to_vcpu(ctxt); - return kvm_has_s1poe(kern_hyp_va(vcpu->kvm)); -} - static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, SCTLR_EL1) = read_sysreg_el1(SYS_SCTLR);
In preparation for SME support move the macros used to access SVE state after the feature test macros, since we will need to test for SME subfeatures to determine the size of the SME state.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 50 +++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 911da41e6bc0..49ca2c3b05a2 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1023,31 +1023,6 @@ struct kvm_vcpu_arch { #define IN_NESTED_ERET __vcpu_single_flag(sflags, BIT(7))
-/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ -#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) - -#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) - -#define vcpu_sve_zcr_elx(vcpu) \ - (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) - -#define sve_state_size_from_vl(sve_max_vl) ({ \ - size_t __size_ret; \ - unsigned int __vq; \ - \ - if (WARN_ON(!sve_vl_valid(sve_max_vl))) { \ - __size_ret = 0; \ - } else { \ - __vq = sve_vq_from_vl(sve_max_vl); \ - __size_ret = SVE_SIG_REGS_SIZE(__vq); \ - } \ - \ - __size_ret; \ -}) - -#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_max_vl) - #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \ KVM_GUESTDBG_USE_SW_BP | \ KVM_GUESTDBG_USE_HW | \ @@ -1083,6 +1058,31 @@ struct kvm_vcpu_arch {
#define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs)
+/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ +#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ + sve_ffr_offset((vcpu)->arch.sve_max_vl)) + +#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) + +#define vcpu_sve_zcr_elx(vcpu) \ + (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) + +#define sve_state_size_from_vl(sve_max_vl) ({ \ + size_t __size_ret; \ + unsigned int __vq; \ + \ + if (WARN_ON(!sve_vl_valid(sve_max_vl))) { \ + __size_ret = 0; \ + } else { \ + __vq = sve_vq_from_vl(sve_max_vl); \ + __size_ret = SVE_SIG_REGS_SIZE(__vq); \ + } \ + \ + __size_ret; \ +}) + +#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_max_vl) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently
Due to the overlap between SVE and SME vector length configuration created by streaming mode SVE we will finalize both at once. Rename the existing finalization to use _VEC (vector) naming to avoid confusion.
Since this includes the userspace API we create an alias KVM_ARM_VCPU_VEC for the existing KVM_ARM_VCPU_SVE capability; existing code which does not enable SME will be unaffected, and SME only code will not need to use SVE constants.
No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  8 +++++---
 arch/arm64/include/uapi/asm/kvm.h |  6 ++++++
 arch/arm64/kvm/guest.c            | 10 +++++-----
 arch/arm64/kvm/hyp/nvhe/pkvm.c    |  2 +-
 arch/arm64/kvm/reset.c            | 20 ++++++++++----------
 5 files changed, 27 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 49ca2c3b05a2..bd3bf8043c43 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -965,8 +965,8 @@ struct kvm_vcpu_arch {
/* KVM_ARM_VCPU_INIT completed */ #define VCPU_INITIALIZED __vcpu_single_flag(cflags, BIT(0)) -/* SVE config completed */ -#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1)) +/* Vector config completed */ +#define VCPU_VEC_FINALIZED __vcpu_single_flag(cflags, BIT(1)) /* pKVM VCPU setup completed */ #define VCPU_PKVM_FINALIZED __vcpu_single_flag(cflags, BIT(2))
@@ -1037,6 +1037,8 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif
+#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) + #ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || \ @@ -1536,7 +1538,7 @@ struct kvm *kvm_arch_alloc_vm(void); int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
-#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINALIZED) +#define kvm_arm_vcpu_vec_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_VEC_FINALIZED)
#define kvm_has_mte(kvm) \ (system_supports_mte() && \ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index ed5f3892674c..4d789871bec1 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -107,6 +107,12 @@ struct kvm_regs { #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ #define KVM_ARM_VCPU_HAS_EL2_E2H0 8 /* Limit NV support to E2H RES0 */
+/* + * An alias for _SVE since we finalize VL configuration for both SVE and SME + * simultaneously. + */ +#define KVM_ARM_VCPU_VEC KVM_ARM_VCPU_SVE + struct kvm_vcpu_init { __u32 target; __u32 features[7]; diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 2196979a24a3..73e714133bb6 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -342,7 +342,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT;
- if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; /* too late! */
if (WARN_ON(vcpu->arch.sve_state)) @@ -497,7 +497,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret;
- if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, @@ -523,7 +523,7 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret;
- if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, @@ -657,7 +657,7 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) return 0;
/* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu));
return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */) + 1; /* KVM_REG_ARM64_SVE_VLS */ @@ -675,7 +675,7 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, return 0;
/* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu));
/* * Enumerate this first, so that userspace can save/restore in diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 338505cb0171..a461f192230a 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -420,7 +420,7 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *h int ret = 0;
if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) { - vcpu_clear_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_clear_flag(vcpu, VCPU_VEC_FINALIZED); return 0; }
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 959532422d3a..f7c63e145d54 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -92,7 +92,7 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) * Finalize vcpu's maximum SVE vector length, allocating * vcpu->arch.sve_state as necessary. */ -static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) +static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { void *buf; unsigned int vl; @@ -122,21 +122,21 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) } vcpu->arch.sve_state = buf; - vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; }
int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) { switch (feature) { - case KVM_ARM_VCPU_SVE: - if (!vcpu_has_sve(vcpu)) + case KVM_ARM_VCPU_VEC: + if (!vcpu_has_vec(vcpu)) return -EINVAL;
- if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
- return kvm_vcpu_finalize_sve(vcpu); + return kvm_vcpu_finalize_vec(vcpu); }
return -EINVAL; @@ -144,7 +144,7 @@ int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature)
bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) { - if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu)) + if (vcpu_has_vec(vcpu) && !kvm_arm_vcpu_vec_finalized(vcpu)) return false;
return true; @@ -163,7 +163,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) kfree(vcpu->arch.ccsidr); }
-static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) +static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); @@ -203,11 +203,11 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (loaded) kvm_arch_vcpu_put(vcpu);
- if (!kvm_arm_vcpu_sve_finalized(vcpu)) { + if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); } else { - kvm_vcpu_reset_sve(vcpu); + kvm_vcpu_reset_vec(vcpu); }
if (vcpu_el1_is_32bit(vcpu))
SME, the Scalable Matrix Extension, is an arm64 extension which adds support for matrix operations, with core concepts patterned after SVE.
SVE introduced some complication in the ABI since it adds new vector floating point registers with runtime configurable size, the size being controlled by a parameter called the vector length (VL). To provide control of this to VMMs we offer two phase configuration of SVE: SVE must first be enabled for the vCPU with KVM_ARM_VCPU_INIT(KVM_ARM_VCPU_SVE), after which the vector length may be configured; the configurably sized floating point registers remain inaccessible until the configuration is finalized with a call to KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE), after which they can be accessed.
SME introduces an additional independent configurable vector length which, as well as controlling the size of the new ZA register, also provides an alternative view of the configurably sized SVE registers (known as streaming mode), with the guest able to switch between the two modes as it pleases. There is also a fixed sized register ZT0 introduced in SME2. The guest may enable and disable ZA and (where SME2 is available) ZT0 dynamically, independently of streaming mode. These modes are controlled via the system register SVCR.
We handle the configuration of the vector length for SME in a similar manner to SVE, requiring initialization and finalization of the feature with a pseudo register controlling the available SME vector lengths as for SVE. Further, if the guest has both SVE and SME then finalizing one prevents further configuration of the vector length for the other.
Where both SVE and SME are configured for the guest we always present the SVE registers to userspace as having the larger of the configured maximum SVE and SME vector lengths, discarding extra data at load time and zero padding on read as required if the active vector length is lower. For example, with a 32 byte maximum SVE VL and a 64 byte maximum SME VL the Z registers are presented as 64 bytes each, with reads zero padding the upper 32 bytes while SVCR.SM is 0. Note that this means that enabling or disabling streaming mode while the guest is stopped will not zero Zn or Pn as it will when the guest is running, but it does allow SVCR, Zn and Pn to be read and written in any order.
Userspace access to ZA and (if configured) ZT0 is always available; they will be zeroed when the guest runs if disabled in SVCR, and the value read will be zero if the guest stops with them disabled. This mirrors the behaviour of the architecture, where enabling access causes ZA and ZT0 to be zeroed, while allowing accesses to SVCR, ZA and ZT0 to be performed in any order.
If SME is enabled for a guest without SVE then the FPSIMD Vn registers must be accessed via the low 128 bits of the SVE Zn registers, as is the case when SVE is enabled. This is not ideal but allows access to SVCR and the registers in any order without duplication or ambiguity about which values should take effect. This may be an issue for VMMs that are unaware of SME on systems that implement it without SVE: if they let SME be enabled the lack of access to Vn may surprise them, but such systems seem like an unusual implementation choice.
For SME unaware VMMs on systems with both SVE and SME support the SVE registers may be larger than expected; this should be less disruptive than on a system without SVE as they will simply ignore the high bits of the registers.
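As a hedged sketch of a save path that respects these rules (SVCR's architectural encoding is S3_3_C4_C2_2; save_za_slices() is a hypothetical helper standing in for per-slice KVM_REG_ARM64_SME_ZA() reads):

	__u64 svcr;
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 3, 4, 2, 2),	/* SVCR */
		.addr = (__u64)&svcr,
	};

	ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);

	/* ZA reads as zero if the guest stopped with SVCR.ZA clear, so a
	 * simple VMM can save it unconditionally; one which wants to skip
	 * the potentially large buffer can test the bit first. */
	if (svcr & (1ULL << 1))			/* SVCR.ZA */
		save_za_slices(vcpu_fd);	/* hypothetical */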
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 Documentation/virt/kvm/api.rst | 117 +++++++++++++++++++++++++++++------------
 1 file changed, 82 insertions(+), 35 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 9abf93ee5f65..00195e837b5f 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -406,7 +406,7 @@ Errors: instructions from device memory (arm64) ENOSYS data abort outside memslots with no syndrome info and KVM_CAP_ARM_NISV_TO_USER not enabled (arm64) - EPERM SVE feature set but not finalized (arm64) + EPERM SVE or SME feature set but not finalized (arm64) ======= ==============================================================
This ioctl is used to run a guest virtual cpu. While there are no @@ -2593,12 +2593,12 @@ Specifically: 0x6020 0000 0010 00d5 FPCR 32 fp_regs.fpcr ======================= ========= ===== =======================================
-.. [1] These encodings are not accepted for SVE-enabled vcpus. See - :ref:`KVM_ARM_VCPU_INIT`. +.. [1] These encodings are not accepted for SVE enabled vcpus. See + :ref:`KVM_ARM_VCPU_INIT`. They are also not accepted when SME is + enabled without SVE and the vcpu is in streaming mode.
The equivalent register content can be accessed via bits [127:0] of - the corresponding SVE Zn registers instead for vcpus that have SVE - enabled (see below). + the corresponding SVE Zn registers in these cases (see below).
arm64 CCSIDR registers are demultiplexed by CSSELR value::
@@ -2629,24 +2629,34 @@ arm64 SVE registers have the following bit patterns:: 0x6050 0000 0015 060 slice:5 FFR bits[256*slice + 255 : 256*slice] 0x6060 0000 0015 ffff KVM_REG_ARM64_SVE_VLS pseudo-register
-Access to register IDs where 2048 * slice >= 128 * max_vq will fail with -ENOENT. max_vq is the vcpu's maximum supported vector length in 128-bit -quadwords: see [2]_ below. +arm64 SME registers have the following bit patterns: + + 0x6080 0000 0017 00 <n:5> slice:5 ZA.H[n] bits[2048*slice + 2047 : 2048*slice] + 0x60XX 0000 0017 0100 ZT0 + 0x6060 0000 0017 fffe KVM_REG_ARM64_SME_VLS pseudo-register + +Access to Z, P or ZA register IDs where 2048 * slice >= 128 * max_vq +will fail with ENOENT. max_vq is the vcpu's maximum supported vector +length in 128-bit quadwords: see [2]_ below. + +Access to the ZA and ZT0 registers is only available if SVCR.ZA is set +to 1.
These registers are only accessible on vcpus for which SVE is enabled. See KVM_ARM_VCPU_INIT for details.
-In addition, except for KVM_REG_ARM64_SVE_VLS, these registers are not -accessible until the vcpu's SVE configuration has been finalized -using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). See KVM_ARM_VCPU_INIT -and KVM_ARM_VCPU_FINALIZE for more information about this procedure. +In addition, except for KVM_REG_ARM64_SVE_VLS and +KVM_REG_ARM64_SME_VLS, these registers are not accessible until the +vcpu's SVE and SME configuration has been finalized using +KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC). See KVM_ARM_VCPU_INIT and +KVM_ARM_VCPU_FINALIZE for more information about this procedure.
-KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector -lengths supported by the vcpu to be discovered and configured by -userspace. When transferred to or from user memory via KVM_GET_ONE_REG -or KVM_SET_ONE_REG, the value of this register is of type -__u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the set of vector lengths as -follows:: +KVM_REG_ARM64_SVE_VLS and KVM_ARM64_VCPU_SME_VLS are pseudo-registers +that allows the set of vector lengths supported by the vcpu to be +discovered and configured by userspace. When transferred to or from +user memory via KVM_GET_ONE_REG or KVM_SET_ONE_REG, the value of this +register is of type __u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the +set of vector lengths as follows::
__u64 vector_lengths[KVM_ARM64_SVE_VLS_WORDS];
@@ -2658,19 +2668,25 @@ follows:: /* Vector length vq * 16 bytes not supported */
.. [2] The maximum value vq for which the above condition is true is - max_vq. This is the maximum vector length available to the guest on - this vcpu, and determines which register slices are visible through - this ioctl interface. + max_vq. This is the maximum vector length currently available to + the guest on this vcpu, and determines which register slices are + visible through this ioctl interface. + + If SME is supported then the max_vq used for the Z and P registers + then while SVCR.SM is 1 this vector length will be the maximum SME + vector length available for the guest, otherwise it will be the + maximum SVE vector length available.
(See Documentation/arch/arm64/sve.rst for an explanation of the "vq" nomenclature.)
-KVM_REG_ARM64_SVE_VLS is only accessible after KVM_ARM_VCPU_INIT. -KVM_ARM_VCPU_INIT initialises it to the best set of vector lengths that -the host supports. +KVM_REG_ARM64_SVE_VLS and KVM_REG_ARM_SME_VLS are only accessible +after KVM_ARM_VCPU_INIT. KVM_ARM_VCPU_INIT initialises them to the +best set of vector lengths that the host supports.
-Userspace may subsequently modify it if desired until the vcpu's SVE -configuration is finalized using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). +Userspace may subsequently modify these registers if desired until the +vcpu's SVE and SME configuration is finalized using +KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC).
Apart from simply removing all vector lengths from the host set that exceed some value, support for arbitrarily chosen sets of vector lengths @@ -2678,8 +2694,8 @@ is hardware-dependent and may not be available. Attempting to configure an invalid set of vector lengths via KVM_SET_ONE_REG will fail with EINVAL.
-After the vcpu's SVE configuration is finalized, further attempts to -write this register will fail with EPERM. +After the vcpu's SVE or SME configuration is finalized, further +attempts to write these registers will fail with EPERM.
arm64 bitmap feature firmware pseudo-registers have the following bit pattern::
@@ -3462,6 +3478,7 @@ The initial values are defined as: - General Purpose registers, including PC and SP: set to 0 - FPSIMD/NEON registers: set to 0 - SVE registers: set to 0 + - SME registers: set to 0 - System registers: Reset to their architecturally defined values as for a warm reset to EL1 (resp. SVC) or EL2 (in the case of EL2 being enabled). @@ -3505,7 +3522,7 @@ Possible features:
- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only). Depends on KVM_CAP_ARM_SVE. - Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
* After KVM_ARM_VCPU_INIT:
@@ -3513,7 +3530,7 @@ Possible features: initial value of this pseudo-register indicates the best set of vector lengths possible for a vcpu on this host.
- * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC}):
- KVM_RUN and KVM_GET_REG_LIST are not available;
@@ -3526,11 +3543,40 @@ Possible features: KVM_SET_ONE_REG, to modify the set of vector lengths available for the vcpu.
- * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
- the KVM_REG_ARM64_SVE_VLS pseudo-register is immutable, and can no longer be written using KVM_SET_ONE_REG.
+ - KVM_ARM_VCPU_SME: Enables SME for the CPU (arm64 only). + Depends on KVM_CAP_ARM_SME. + Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): + + * After KVM_ARM_VCPU_INIT: + + - KVM_REG_ARM64_SME_VLS may be read using KVM_GET_ONE_REG: the + initial value of this pseudo-register indicates the best set of + vector lengths possible for a vcpu on this host. + + * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC}): + + - KVM_RUN and KVM_GET_REG_LIST are not available; + + - KVM_GET_ONE_REG and KVM_SET_ONE_REG cannot be used to access + the scalable architectural SVE registers + KVM_REG_ARM64_SVE_ZREG(), KVM_REG_ARM64_SVE_PREG() or + KVM_REG_ARM64_SVE_FFR, the matrix register + KVM_REG_ARM64_SME_ZA() or the LUT register KVM_REG_ARM64_ZT(); + + - KVM_REG_ARM64_SME_VLS may optionally be written using + KVM_SET_ONE_REG, to modify the set of vector lengths available + for the vcpu. + + * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): + + - the KVM_REG_ARM64_SME_VLS pseudo-register is immutable, and can + no longer be written using KVM_SET_ONE_REG. + - KVM_ARM_VCPU_HAS_EL2: Enable Nested Virtualisation support, booting the guest from EL2 instead of EL1. Depends on KVM_CAP_ARM_EL2. @@ -5113,11 +5159,12 @@ Errors:
Recognised values for feature:
- ===== =========================================== - arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE) - ===== =========================================== + ===== ============================================================== + arm64 KVM_ARM_VCPU_VEC (requires KVM_CAP_ARM_SVE or KVM_CAP_ARM_SME) + arm64 KVM_ARM_VCPU_SVE (alias for KVM_ARM_VCPU_VEC) + ===== ==============================================================
-Finalizes the configuration of the specified vcpu feature. +Finalizes the configuration of the specified vcpu features.
The vcpu must already have been initialised, enabling the affected feature, by means of a successful :ref:`KVM_ARM_VCPU_INIT <KVM_ARM_VCPU_INIT>` call with the
In order to simplify interdependencies in the rest of the series define the feature detection for SME and its subfeatures. Due to the need for vector length configuration we define a flag for SME as we do for SVE. We also have two subfeatures which add architectural state, FA64 and SME2, which are configured via the normal ID register scheme.
Also provide helpers which check if the vCPU is in streaming mode or has ZA enabled.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 35 ++++++++++++++++++++++++++++++++++-
 arch/arm64/kvm/sys_regs.c         |  2 +-
 2 files changed, 35 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index bd3bf8043c43..bf7aa52af405 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -355,6 +355,8 @@ struct kvm_arch { #define KVM_ARCH_FLAG_GUEST_HAS_SVE 9 /* MIDR_EL1, REVIDR_EL1, and AIDR_EL1 are writable from userspace */ #define KVM_ARCH_FLAG_WRITABLE_IMP_ID_REGS 10 + /* SME exposed to guest */ +#define KVM_ARCH_FLAG_GUEST_HAS_SME 11 unsigned long flags;
/* VM-wide vCPU feature set */ @@ -1037,7 +1039,16 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif
-#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) +#define kvm_has_sme(kvm) (system_supports_sme() && \ + test_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &(kvm)->arch.flags)) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme(vcpu) kvm_has_sme(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme(vcpu) kvm_has_sme((vcpu)->kvm) +#endif + +#define vcpu_has_vec(vcpu) (vcpu_has_sve(vcpu) || vcpu_has_sme(vcpu))
#ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ @@ -1674,6 +1685,28 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val); #define kvm_has_s1poe(k) \ (kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP))
+#define kvm_has_fa64(k) \ + (system_supports_fa64() && \ + kvm_has_feat((k), ID_AA64SMFR0_EL1, FA64, IMP)) + +#define kvm_has_sme2(k) \ + (system_supports_sme2() && \ + kvm_has_feat((k), ID_AA64PFR1_EL1, SME, SME2)) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme2(vcpu) kvm_has_sme2(kern_hyp_va((vcpu)->kvm)) +#define vcpu_has_fa64(vcpu) kvm_has_fa64(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme2(vcpu) kvm_has_sme2((vcpu)->kvm) +#define vcpu_has_fa64(vcpu) kvm_has_fa64((vcpu)->kvm) +#endif + +#define vcpu_in_streaming_mode(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_SM_MASK) + +#define vcpu_za_enabled(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_ZA_MASK) + static inline bool kvm_arch_has_irq_bypass(void) { return true; diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 76c2f0da821f..7dd4a5ef0e81 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1774,7 +1774,7 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, static unsigned int sme_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { - if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, IMP)) + if (vcpu_has_sme(vcpu)) return 0;
return REG_HIDDEN;
As with SVE we will need to pull parts of dynamically sized registers out of a block of memory for SME, so we will use a similar code pattern. Rename the current struct sve_state_reg_region in preparation for this.
No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kvm/guest.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 73e714133bb6..bb4b91e43923 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -404,9 +404,9 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) */ #define vcpu_sve_slices(vcpu) 1
-/* Bounds of a single SVE register slice within vcpu->arch.sve_state */ -struct sve_state_reg_region { - unsigned int koffset; /* offset into sve_state in kernel memory */ +/* Bounds of a single register slice within vcpu->arch.s[mv]e_state */ +struct vec_state_reg_region { + unsigned int koffset; /* offset into s[mv]e_state in kernel memory */ unsigned int klen; /* length in kernel memory */ unsigned int upad; /* extra trailing padding in user memory */ }; @@ -415,7 +415,7 @@ struct sve_state_reg_region { * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy */ -static int sve_reg_to_region(struct sve_state_reg_region *region, +static int sve_reg_to_region(struct vec_state_reg_region *region, struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { @@ -485,7 +485,7 @@ static int sve_reg_to_region(struct sve_state_reg_region *region, static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; char __user *uptr = (char __user *)reg->addr;
/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ @@ -511,7 +511,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; const char __user *uptr = (const char __user *)reg->addr;
/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
SME adds a second vector length configured in a very similar way to the SVE vector length. In order to facilitate future code sharing for SME, refactor our storage of vector lengths to use an array as the host does. We do not yet take much advantage of this, so the intermediate code is not as clean as it might be.
No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       | 17 +++++++++++------
 arch/arm64/include/asm/kvm_hyp.h        |  2 +-
 arch/arm64/include/asm/kvm_pkvm.h       |  2 +-
 arch/arm64/kvm/fpsimd.c                 |  2 +-
 arch/arm64/kvm/guest.c                  |  6 +++---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      |  6 +++---
 arch/arm64/kvm/hyp/nvhe/pkvm.c          |  7 ++++---
 arch/arm64/kvm/reset.c                  | 22 +++++++++++-----------
 9 files changed, 38 insertions(+), 32 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index bf7aa52af405..f9dcd2530574 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -76,8 +76,10 @@ enum kvm_mode kvm_get_mode(void); static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; }; #endif
-extern unsigned int __ro_after_init kvm_sve_max_vl; -extern unsigned int __ro_after_init kvm_host_sve_max_vl; +extern unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX]; +extern unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; +DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); + int __init kvm_arm_init_sve(void);
u32 __attribute_const__ kvm_target_cpu(void); @@ -805,7 +807,7 @@ struct kvm_vcpu_arch { */ void *sve_state; enum fp_type fp_type; - unsigned int sve_max_vl; + unsigned int max_vl[ARM64_VEC_MAX];
/* Stage 2 paging state used by the hardware on next switch */ struct kvm_s2_mmu *hw_mmu; @@ -1073,9 +1075,12 @@ struct kvm_vcpu_arch {
/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) + sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE])) + +#define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[type]) + +#define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE)
-#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl)
#define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) @@ -1094,7 +1099,7 @@ struct kvm_vcpu_arch { __size_ret; \ })
-#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.sve_max_vl) +#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_vl[ARM64_VEC_SVE])
/* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index e6be1f5d0967..0ad5a66e0d25 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -145,6 +145,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val);
extern unsigned long kvm_nvhe_sym(__icache_flags); extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits); -extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl); +extern unsigned int kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_MAX]);
#endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h index ea58282f59bb..6925606f2263 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -166,7 +166,7 @@ static inline size_t pkvm_host_sve_state_size(void) return 0;
return size_add(sizeof(struct cpu_sve_state), - SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl))); + SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]))); }
struct pkvm_mapping { diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index d67e2002d354..134485b52f51 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -101,7 +101,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) */ fp_state.st = &vcpu->arch.ctxt.fp_regs; fp_state.sve_state = vcpu->arch.sve_state; - fp_state.sve_vl = vcpu->arch.sve_max_vl; + fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; fp_state.sme_state = NULL; fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index bb4b91e43923..e9f17b8a6fba 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -318,7 +318,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT;
- if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) return -EINVAL;
memset(vqs, 0, sizeof(vqs)); @@ -356,7 +356,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (vq_present(vqs, vq)) max_vq = vq;
- if (max_vq > sve_vq_from_vl(kvm_sve_max_vl)) + if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE])) return -EINVAL;
/* @@ -375,7 +375,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return -EINVAL;
/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.sve_max_vl = sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_vl_from_vq(max_vq);
return 0; } diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 987f5c4c5747..899826ea10ea 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -543,8 +543,8 @@ static inline void __hyp_sve_save_host(void) struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); + __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[ARM64_VEC_SVE]), &sve_state->fpsr, true); } @@ -599,7 +599,7 @@ static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) zcr_el2 = vcpu_sve_max_vq(vcpu) - 1; write_sysreg_el2(zcr_el2, SYS_ZCR); } else { - zcr_el2 = sve_vq_from_vl(kvm_host_sve_max_vl) - 1; + zcr_el2 = sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1; write_sysreg_el2(zcr_el2, SYS_ZCR);
zcr_el1 = vcpu_sve_max_vq(vcpu) - 1; diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 3206b2c07f82..76be13efcfcb 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -34,7 +34,7 @@ static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); }
static void __hyp_sve_restore_host(void) @@ -50,8 +50,8 @@ static void __hyp_sve_restore_host(void) * that was discovered, if we wish to use larger VLs this will * need to be revisited. */ - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); + __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[ARM64_VEC_SVE]), &sve_state->fpsr, true); write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR); diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index a461f192230a..65c49a5c7091 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -20,7 +20,7 @@ unsigned long __icache_flags; /* Used by kvm_get_vttbr(). */ unsigned int kvm_arm_vmid_bits;
-unsigned int kvm_host_sve_max_vl; +unsigned int kvm_host_max_vl[ARM64_VEC_MAX];
/* * The currently loaded hyp vCPU for each physical CPU. Used only when @@ -425,7 +425,8 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *h }
/* Limit guest vector length to the maximum supported by the host. */ - sve_max_vl = min(READ_ONCE(host_vcpu->arch.sve_max_vl), kvm_host_sve_max_vl); + sve_max_vl = min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]), + kvm_host_max_vl[ARM64_VEC_SVE]); sve_state_size = sve_state_size_from_vl(sve_max_vl); sve_state = kern_hyp_va(READ_ONCE(host_vcpu->arch.sve_state));
@@ -439,7 +440,7 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *h goto err;
vcpu->arch.sve_state = sve_state; - vcpu->arch.sve_max_vl = sve_max_vl; + vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_max_vl;
return 0; err: diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index f7c63e145d54..a8684a1346ec 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -32,7 +32,7 @@
/* Maximum phys_shift supported for any VM on this host */ static u32 __ro_after_init kvm_ipa_limit; -unsigned int __ro_after_init kvm_host_sve_max_vl; +unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX];
/* * ARMv8 Reset Values @@ -46,14 +46,14 @@ unsigned int __ro_after_init kvm_host_sve_max_vl; #define VCPU_RESET_PSTATE_SVC (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \ PSR_AA32_I_BIT | PSR_AA32_F_BIT)
-unsigned int __ro_after_init kvm_sve_max_vl; +unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX];
int __init kvm_arm_init_sve(void) { if (system_supports_sve()) { - kvm_sve_max_vl = sve_max_virtualisable_vl(); - kvm_host_sve_max_vl = sve_max_vl(); - kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl; + kvm_max_vl[ARM64_VEC_SVE] = sve_max_virtualisable_vl(); + kvm_host_max_vl[ARM64_VEC_SVE] = sve_max_vl(); + kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SVE]) = kvm_host_max_vl[ARM64_VEC_SVE];
/* * The get_sve_reg()/set_sve_reg() ioctl interface will need @@ -61,16 +61,16 @@ int __init kvm_arm_init_sve(void) * order to support vector lengths greater than * VL_ARCH_MAX: */ - if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX)) - kvm_sve_max_vl = VL_ARCH_MAX; + if (WARN_ON(kvm_max_vl[ARM64_VEC_SVE] > VL_ARCH_MAX)) + kvm_max_vl[ARM64_VEC_SVE] = VL_ARCH_MAX;
/* * Don't even try to make use of vector lengths that * aren't available on all CPUs, for now: */ - if (kvm_sve_max_vl < sve_max_vl()) + if (kvm_max_vl[ARM64_VEC_SVE] < sve_max_vl()) pr_warn("KVM: SVE vector length for guests limited to %u bytes\n", - kvm_sve_max_vl); + kvm_max_vl[ARM64_VEC_SVE]); }
return 0; @@ -78,7 +78,7 @@ int __init kvm_arm_init_sve(void)
static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) { - vcpu->arch.sve_max_vl = kvm_sve_max_vl; + vcpu->arch.max_vl[ARM64_VEC_SVE] = kvm_max_vl[ARM64_VEC_SVE];
/* * Userspace can still customize the vector lengths by writing @@ -99,7 +99,7 @@ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) size_t reg_sz; int ret;
- vl = vcpu->arch.sve_max_vl; + vl = vcpu->arch.max_vl[ARM64_VEC_SVE];
/* * Responsibility for these properties is shared between
SME implements a vector length which architecturally looks very similar to that for SVE and is configured in a very similar manner. This controls the vector length used for the ZA matrix register, and for the SVE vector and predicate registers when in streaming mode. The only substantial difference is that, unlike SVE, the architecture does not guarantee that any particular vector length will be implemented.
Configuration of the SME vector lengths is done using a virtual register, as for SVE; hook up the implementation for this virtual register. Since we do not yet have support for any of the new SME registers, stub register access functions are provided that only allow VL configuration. These will be extended as the SME specific registers are implemented, as was done for SVE.
Since vq_available() is currently only defined for CONFIG_ARM64_SVE, add a stub for builds where that is disabled.
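As a rough sketch of the resulting userspace usage (illustrative only, not part of the patch; assumes an already created vcpu fd and the uapi definitions added here, with error handling elided):

	#include <linux/kvm.h>
	#include <string.h>
	#include <sys/ioctl.h>

	/* Request streaming VQs 1 and 2 (128 and 256 bit SVLs). */
	static int configure_sme_vls(int vcpu_fd)
	{
		__u64 vqs[KVM_ARM64_SME_VLS_WORDS];
		struct kvm_one_reg reg = {
			.id   = KVM_REG_ARM64_SME_VLS,
			.addr = (__u64)(unsigned long)vqs,
		};

		memset(vqs, 0, sizeof(vqs));
		vqs[0] = 0x3;	/* bit n set => VQ n + 1 requested */

		/* Must be done before the vcpu is finalized. */
		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	}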
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/fpsimd.h | 1 + arch/arm64/include/asm/kvm_host.h | 24 ++++++++++-- arch/arm64/include/uapi/asm/kvm.h | 9 +++++ arch/arm64/kvm/guest.c | 82 +++++++++++++++++++++++++++++++-------- 4 files changed, 96 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 0ecdd7dcf623..38c24c6485ad 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -340,6 +340,7 @@ static inline int sve_max_vl(void) return -EINVAL; }
+static inline bool vq_available(enum vec_type type, unsigned int vq) { return false; } static inline bool sve_vq_available(unsigned int vq) { return false; }
static inline void sve_user_disable(void) { BUILD_BUG(); } diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f9dcd2530574..a25a5a668d29 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -804,8 +804,15 @@ struct kvm_vcpu_arch { * low 128 bits of the SVE Z registers. When the core * floating point code saves the register state of a task it * records which view it saved in fp_type. + * + * If SME support is also present then it provides an + * alternative view of the SVE registers accessed as for the Z + * registers when PSTATE.SM is 1, plus an additional set of + * SME specific state in the matrix register ZA and LUT + * register ZT0. */ void *sve_state; + void *sme_state; enum fp_type fp_type; unsigned int max_vl[ARM64_VEC_MAX];
@@ -1073,14 +1080,23 @@ struct kvm_vcpu_arch {
#define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs)
-/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ -#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE])) - #define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[type])
#define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE) +#define vcpu_sme_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SME) + +#define vcpu_sve_max_vl(vcpu) ((vcpu)->arch.max_vl[ARM64_VEC_SVE]) +#define vcpu_sme_max_vl(vcpu) ((vcpu)->arch.max_vl[ARM64_VEC_SME])
+#define vcpu_max_vl(vcpu) max(vcpu_sve_max_vl(vcpu), vcpu_sme_max_vl(vcpu)) +#define vcpu_max_vq(vcpu) sve_vq_from_vl(vcpu_max_vl(vcpu)) + +#define vcpu_cur_sve_vl(vcpu) (vcpu_in_streaming_mode(vcpu) ? \ + vcpu_sme_max_vl(vcpu) : vcpu_sve_max_vl(vcpu)) + +/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ +#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ + sve_ffr_offset(vcpu_cur_sve_vl(vcpu)))
#define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 4d789871bec1..5220797361e0 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -354,6 +354,15 @@ struct kvm_arm_counter_offset { #define KVM_ARM64_SVE_VLS_WORDS \ ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
+/* SME registers */ +#define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT) + +/* Vector lengths pseudo-register: */ +#define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ + KVM_REG_SIZE_U512 | 0xffff) +#define KVM_ARM64_SME_VLS_WORDS \ + ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1) + /* Bitmap feature firmware registers */ #define KVM_REG_ARM_FW_FEAT_BMAP (0x0016 << KVM_REG_ARM_COPROC_SHIFT) #define KVM_REG_ARM_FW_FEAT_BMAP_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index e9f17b8a6fba..6460bb21e01d 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -310,22 +310,20 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64) #define vq_present(vqs, vq) (!!((vqs)[vq_word(vq)] & vq_mask(vq)))
-static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +static int get_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { unsigned int max_vq, vq; u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
- if (!vcpu_has_sve(vcpu)) - return -ENOENT; - - if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type]))) return -EINVAL;
memset(vqs, 0, sizeof(vqs));
- max_vq = vcpu_sve_max_vq(vcpu); + max_vq = vcpu_vec_max_vq(vcpu, vec_type); for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) - if (sve_vq_available(vq)) + if (vq_available(vec_type, vq)) vqs[vq_word(vq)] |= vq_mask(vq);
if (copy_to_user((void __user *)reg->addr, vqs, sizeof(vqs))) @@ -334,40 +332,41 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return 0; }
-static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +static int set_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { unsigned int max_vq, vq; u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
- if (!vcpu_has_sve(vcpu)) - return -ENOENT; - if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; /* too late! */
- if (WARN_ON(vcpu->arch.sve_state)) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type]))) return -EINVAL;
if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs))) return -EFAULT;
+ if (WARN_ON(vcpu->arch.sve_state || vcpu->arch.sme_state)) + return -EINVAL; + max_vq = 0; for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; ++vq) if (vq_present(vqs, vq)) max_vq = vq;
- if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE])) + if (max_vq > sve_vq_from_vl(kvm_max_vl[vec_type])) return -EINVAL;
/* * Vector lengths supported by the host can't currently be * hidden from the guest individually: instead we can only set a - * maximum via ZCR_EL2.LEN. So, make sure the available vector + * maximum via xCR_EL2.LEN. So, make sure the available vector * lengths match the set requested exactly up to the requested * maximum: */ for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) - if (vq_present(vqs, vq) != sve_vq_available(vq)) + if (vq_present(vqs, vq) != vq_available(vec_type, vq)) return -EINVAL;
/* Can't run with no vector lengths at all: */ @@ -375,11 +374,27 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return -EINVAL;
/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[vec_type] = sve_vl_from_vq(max_vq);
return 0; }
+static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + return get_vec_vls(ARM64_VEC_SVE, vcpu, reg); +} + +static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + return set_vec_vls(ARM64_VEC_SVE, vcpu, reg); +} + #define SVE_REG_SLICE_SHIFT 0 #define SVE_REG_SLICE_BITS 5 #define SVE_REG_ID_SHIFT (SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS) @@ -533,6 +548,39 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return 0; }
+static int get_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sme(vcpu)) + return -ENOENT; + + return get_vec_vls(ARM64_VEC_SME, vcpu, reg); +} + +static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sme(vcpu)) + return -ENOENT; + + return set_vec_vls(ARM64_VEC_SME, vcpu, reg); +} + +static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ + if (reg->id == KVM_REG_ARM64_SME_VLS) + return get_sme_vls(vcpu, reg); + + return -EINVAL; +} + +static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ + if (reg->id == KVM_REG_ARM64_SME_VLS) + return set_sme_vls(vcpu, reg); + + return -EINVAL; +} int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { return -EINVAL; @@ -775,6 +823,7 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) case KVM_REG_ARM_FW_FEAT_BMAP: return kvm_arm_get_fw_reg(vcpu, reg); case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg); + case KVM_REG_ARM64_SME: return get_sme_reg(vcpu, reg); }
if (is_timer_reg(reg->id)) @@ -795,6 +844,7 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) case KVM_REG_ARM_FW_FEAT_BMAP: return kvm_arm_set_fw_reg(vcpu, reg); case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg); + case KVM_REG_ARM64_SME: return set_sme_reg(vcpu, reg); }
if (is_timer_reg(reg->id))
SME is configured by the system registers SMCR_EL1 and SMCR_EL2; add definitions and userspace access for them. These control the SME vector length in a manner similar to that for SVE and also have feature enable bits for SME2 and FA64. A subsequent patch will add management of them for guests as part of the general floating point context switch, as is done for the equivalent SVE registers.
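For reference, SMCR_ELx.LEN follows the same convention as ZCR_ELx.LEN: the vector length in 128-bit quadwords, minus one. A sketch of the conversion (the helper name is illustrative, not from the patch):

	/* SMCR_ELx.LEN = (SVL in bytes / 16) - 1 */
	static unsigned int smcr_len_to_vl(unsigned int len)
	{
		return (len + 1) * 16;	/* e.g. LEN == 1 -> 32 byte (256 bit) SVL */
	}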
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/include/asm/vncr_mapping.h | 1 + arch/arm64/kvm/sys_regs.c | 37 ++++++++++++++++++++++++++++++++++- 3 files changed, 39 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index a25a5a668d29..14179e1ddb3e 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -506,6 +506,7 @@ enum vcpu_sysreg { CPTR_EL2, /* Architectural Feature Trap Register (EL2) */ HACR_EL2, /* Hypervisor Auxiliary Control Register */ ZCR_EL2, /* SVE Control Register (EL2) */ + SMCR_EL2, /* SME Control Register (EL2) */ TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */ TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */ TCR_EL2, /* Translation Control Register (EL2) */ @@ -543,6 +544,7 @@ enum vcpu_sysreg { VNCR(ACTLR_EL1),/* Auxiliary Control Register */ VNCR(CPACR_EL1),/* Coprocessor Access Control */ VNCR(ZCR_EL1), /* SVE Control */ + VNCR(SMCR_EL1), /* SME Control */ VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */ VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */ VNCR(TCR_EL1), /* Translation Control Register */ diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h index 6f556e993644..aede5d6efad3 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -44,6 +44,7 @@ #define VNCR_HDFGWTR_EL2 0x1D8 #define VNCR_ZCR_EL1 0x1E0 #define VNCR_HAFGRTR_EL2 0x1E8 +#define VNCR_SMCR_EL1 0x1F0 #define VNCR_TTBR0_EL1 0x200 #define VNCR_TTBR1_EL1 0x210 #define VNCR_FAR_EL1 0x220 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 7dd4a5ef0e81..90923edb3355 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -143,6 +143,7 @@ static bool get_el2_to_el1_mapping(unsigned int reg, MAPPED_EL2_SYSREG(ELR_EL2, ELR_EL1, NULL ); MAPPED_EL2_SYSREG(SPSR_EL2, SPSR_EL1, NULL ); MAPPED_EL2_SYSREG(ZCR_EL2, ZCR_EL1, NULL ); + MAPPED_EL2_SYSREG(SMCR_EL2, SMCR_EL1, NULL ); MAPPED_EL2_SYSREG(CONTEXTIDR_EL2, CONTEXTIDR_EL1, NULL ); default: return false; @@ -2558,6 +2559,37 @@ static bool access_gic_elrsr(struct kvm_vcpu *vcpu, return true; }
+static unsigned int sme_el2_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + return __el2_visibility(vcpu, rd, sme_visibility); +} + +static bool access_smcr_el2(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + unsigned int vq; + u64 smcr; + + if (guest_hyp_sve_traps_enabled(vcpu)) { + kvm_inject_nested_sve_trap(vcpu); + return true; + } + + if (!p->is_write) { + p->regval = vcpu_read_sys_reg(vcpu, SMCR_EL2); + return true; + } + + smcr = p->regval; + vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1; + vq = min(vq, vcpu_sme_max_vq(vcpu)); + vcpu_write_sys_reg(vcpu, SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1), + SMCR_EL2); + return true; +} + static unsigned int s1poe_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { @@ -2962,7 +2994,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, { SYS_DESC(SYS_SMPRI_EL1), undef_access }, - { SYS_DESC(SYS_SMCR_EL1), undef_access }, + { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, @@ -3316,6 +3348,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
+ EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0, + sme_el2_visibility), + EL2_REG(TTBR0_EL2, access_rw, reset_val, 0), EL2_REG(TTBR1_EL2, access_rw, reset_val, 0), EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1),
SME adds a new thread ID register, TPIDR2_EL0. This is used in userspace for delayed saving of the ZA state but in terms of the architecture is not really connected to SME other than being part of FEAT_SME. It has an independent fine grained trap and the runtime connection with the rest of SME is purely software defined.
Expose the register as a system register if the guest supports SME, context switching it along with the other EL0 TPIDRs.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 15 +++++++++++++++ arch/arm64/kvm/sys_regs.c | 3 ++- 3 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 14179e1ddb3e..c26099f74648 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -450,6 +450,7 @@ enum vcpu_sysreg { CSSELR_EL1, /* Cache Size Selection Register */ TPIDR_EL0, /* Thread ID, User R/W */ TPIDRRO_EL0, /* Thread ID, User R/O */ + TPIDR2_EL0, /* Thread ID, Register 2 */ TPIDR_EL1, /* Thread ID, Privileged */ CNTKCTL_EL1, /* Timer Control Register (EL1) */ PAR_EL1, /* Physical Address Register */ diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index 223819e95405..efd1e0707f77 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -66,6 +66,17 @@ static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt) return kvm_has_s1poe(kern_hyp_va(vcpu->kvm)); }
+static inline bool ctxt_has_sme(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu; + + if (!system_supports_sme()) + return false; + + vcpu = ctxt_to_vcpu(ctxt); + return kvm_has_sme(kern_hyp_va(vcpu->kvm)); +} + static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) { return host_data_ptr(host_ctxt) != ctxt; @@ -105,6 +116,8 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0); ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); + if (ctxt_has_sme(ctxt)) + ctxt_sys_reg(ctxt, TPIDR2_EL0) = read_sysreg_s(SYS_TPIDR2_EL0); }
static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) @@ -174,6 +187,8 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL0), tpidr_el0); write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0); + if (ctxt_has_sme(ctxt)) + write_sysreg_s(ctxt_sys_reg(ctxt, TPIDR2_EL0), SYS_TPIDR2_EL0); }
static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt, diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 90923edb3355..caa90dae8184 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -3170,7 +3170,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { .visibility = s1poe_visibility }, { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 }, { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 }, - { SYS_DESC(SYS_TPIDR2_EL0), undef_access }, + { SYS_DESC(SYS_TPIDR2_EL0), NULL, reset_unknown, TPIDR2_EL0, + .visibility = sme_visibility},
{ SYS_DESC(SYS_SCXTNUM_EL0), undef_access },
The primary register for identifying SME is ID_AA64PFR1_EL1.SME. This is hidden from guests unless SME is enabled by the VMM. When it is visible it is writable and can be used to control the availability of SME2.
There is also a new register, ID_AA64SMFR0_EL1, which we make writable, forcing all bits to 0 if SME is disabled. This includes the field SMEver giving the SME version; userspace is responsible for ensuring the value is consistent with ID_AA64PFR1_EL1.SME. It also includes FA64, a separately enableable extension which provides the full FPSIMD and SVE instruction set, including FFR, in streaming mode. Userspace can control the availability of FA64 by writing to this field. The other features enumerated there only add new instructions; there are no architectural controls for these.
There is a further identification register, SMIDR_EL1, which provides a basic description of the SME microarchitecture in a manner similar to MIDR_EL1 for the PE. It also describes support for priority management and a basic affinity description for shared SME units, plus some RES0 space. We do not support priority management, and affinity is not meaningful for guests, so we mask out everything except for the microarchitecture description.
As with MIDR_EL1 and REVIDR_EL1, we expose the implementer and revision information to guests with the raw value from the CPU we are running on; this may present issues for asymmetric systems or for migration, as it does for the existing registers.
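To illustrate the writable ID register, a VMM could hide FA64 from a guest with something along these lines (a sketch, not from the patch; ID_AA64SMFR0_EL1 is encoded as op0=3, op1=0, CRn=0, CRm=4, op2=5 and FA64 is bit 63 of that register, includes as in the earlier sketch):

	#define REG_ID_AA64SMFR0_EL1	ARM64_SYS_REG(3, 0, 0, 4, 5)

	static int hide_fa64(int vcpu_fd)
	{
		__u64 val;
		struct kvm_one_reg reg = {
			.id   = REG_ID_AA64SMFR0_EL1,
			.addr = (__u64)(unsigned long)&val,
		};

		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
			return -1;

		val &= ~(1ULL << 63);	/* clear ID_AA64SMFR0_EL1.FA64 */

		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	}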
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/sys_regs.c | 46 +++++++++++++++++++++++++++++++++++---- 2 files changed, 43 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index c26099f74648..29b8697c8144 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -494,6 +494,7 @@ enum vcpu_sysreg { /* FP/SIMD/SVE */ SVCR, FPMR, + SMIDR_EL1, /* Streaming Mode Identification Register */
/* 32bit specific registers. */ DACR32_EL2, /* Domain Access Control Register */ diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index caa90dae8184..b11bb95e9e35 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -774,6 +774,38 @@ static u64 reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) return mpidr; }
+static u64 reset_smidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +{ + u64 smidr = 0; + + if (!system_supports_sme()) + return smidr; + + smidr = read_sysreg_s(SYS_SMIDR_EL1); + + /* + * Mask out everything except for the implementer and revision, + * in particular priority management is not implemented. + */ + smidr &= SMIDR_EL1_IMPLEMENTER_MASK | SMIDR_EL1_REVISION_MASK; + + vcpu_write_sys_reg(vcpu, smidr, SMIDR_EL1); + + return smidr; +} + +static bool access_smidr(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return write_to_read_only(vcpu, p, r); + + p->regval = vcpu_read_sys_reg(vcpu, r->reg); + + return true; +} + static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { @@ -1607,7 +1639,9 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu, val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac); }
- val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + if (!vcpu_has_sme(vcpu)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS); @@ -1723,6 +1757,10 @@ static unsigned int id_visibility(const struct kvm_vcpu *vcpu, if (!vcpu_has_sve(vcpu)) return REG_RAZ; break; + case SYS_ID_AA64SMFR0_EL1: + if (!vcpu_has_sme(vcpu)) + return REG_RAZ; + break; }
return 0; @@ -2905,7 +2943,6 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_AA64PFR1_EL1_MTE_frac | ID_AA64PFR1_EL1_NMI | ID_AA64PFR1_EL1_RNDR_trap | - ID_AA64PFR1_EL1_SME | ID_AA64PFR1_EL1_RES0 | ID_AA64PFR1_EL1_MPAM_frac | ID_AA64PFR1_EL1_RAS_frac | @@ -2913,7 +2950,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR), ID_UNALLOCATED(4,3), ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0), - ID_HIDDEN(ID_AA64SMFR0_EL1), + ID_WRITABLE(ID_AA64SMFR0_EL1, ~ID_AA64SMFR0_EL1_RES0), ID_UNALLOCATED(4,6), ID_WRITABLE(ID_AA64FPFR0_EL1, ~ID_AA64FPFR0_EL1_RES0),
@@ -3112,7 +3149,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1, .set_user = set_clidr, .val = ~CLIDR_EL1_RES0 }, { SYS_DESC(SYS_CCSIDR2_EL1), undef_access }, - { SYS_DESC(SYS_SMIDR_EL1), undef_access }, + { SYS_DESC(SYS_SMIDR_EL1), .access = access_smidr, .reset = reset_smidr, + .reg = SMIDR_EL1, .visibility = sme_visibility }, IMPLEMENTATION_ID(AIDR_EL1, GENMASK_ULL(63, 0)), { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 }, ID_FILTERED(CTR_EL0, ctr_el0,
SME has optional support for configuring the relative priorities of PEs in systems where they share a single SME hardware block, known as an SMCU. Currently we do not have any support for this in Linux and will also hide it from KVM guests, pending experience with practical implementations. The interface for configuring priority support is via two new system registers; these registers are always defined when SME is available.
The register SMPRI_EL1 allows control of SME execution priorities. Since we disable SME priority support for guests this register is RES0; define it as such and enable fine grained traps for SMPRI_EL1 to ensure that guests can't write to it even if the hardware supports priorities. Since the register should be readable with fixed contents we only trap writes, not reads.
There is also an EL2 register SMPRIMAP_EL2 for virtualisation of priorities, this is RES0 when priority configuration is not supported but has no specific traps available.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/include/asm/vncr_mapping.h | 1 + arch/arm64/kvm/sys_regs.c | 23 ++++++++++++++++++++++- 3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 29b8697c8144..5ce9e06324b5 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -495,6 +495,7 @@ enum vcpu_sysreg { SVCR, FPMR, SMIDR_EL1, /* Streaming Mode Identification Register */ + SMPRI_EL1, /* Streaming Mode Priority Register */
/* 32bit specific registers. */ DACR32_EL2, /* Domain Access Control Register */ @@ -547,6 +548,7 @@ enum vcpu_sysreg { VNCR(ACTLR_EL1),/* Auxiliary Control Register */ VNCR(CPACR_EL1),/* Coprocessor Access Control */ VNCR(ZCR_EL1), /* SVE Control */ VNCR(SMCR_EL1), /* SME Control */ +VNCR(SMPRIMAP_EL2), /* Streaming Mode Priority Mapping Register */ VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */ VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */ VNCR(TCR_EL1), /* Translation Control Register */ diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h index aede5d6efad3..454e076b77cb 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -45,6 +45,7 @@ #define VNCR_ZCR_EL1 0x1E0 #define VNCR_HAFGRTR_EL2 0x1E8 #define VNCR_SMCR_EL1 0x1F0 +#define VNCR_SMPRIMAP_EL2 0x1F8 #define VNCR_TTBR0_EL1 0x200 #define VNCR_TTBR1_EL1 0x210 #define VNCR_FAR_EL1 0x220 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index b11bb95e9e35..1fee8e534615 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1828,6 +1828,15 @@ static unsigned int fp8_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) return REG_HIDDEN; }
+static unsigned int sme_raz_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + if (vcpu_has_sme(vcpu)) + return REG_RAZ; + + return REG_HIDDEN; +} + static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val) { if (!vcpu_has_sve(vcpu)) @@ -3030,7 +3039,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, - { SYS_DESC(SYS_SMPRI_EL1), undef_access }, + + /* + * SMPRI_EL1 is UNDEF when SME is disabled; the UNDEF is + * generated via FGU, which is applied without consulting + * this table. + */ + { SYS_DESC(SYS_SMPRI_EL1), trap_raz_wi, .visibility = sme_raz_visibility }, + { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, @@ -3387,6 +3403,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
+ EL2_REG_FILTERED(SMPRIMAP_EL2, trap_raz_wi, reset_val, 0, + sme_el2_visibility), EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0, sme_el2_visibility),
@@ -5306,6 +5324,9 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu) compute_fgu(kvm, HFGITR2_GROUP); compute_fgu(kvm, HDFGRTR2_GROUP);
+ if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP)) + kvm->arch.fgt[HFGWTR_GROUP] |= HFGWTR_EL2_nSMPRI_EL1_MASK; + set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags); out: mutex_unlock(&kvm->arch.config_lock);
Provide versions of the SME state save and restore functions for the hypervisor, allowing it to save and restore ZA and ZT0 for guests.
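The buffer these functions operate on holds ZA followed, when SME2 is implemented, by ZT0. A sketch of the sizing (mirroring the ZA_SIG_REGS_SIZE() and ZT_SIG_REG_SIZE definitions used elsewhere in the series; the helper name is illustrative):

	/* Illustrative only: ZA is (SVL / 8) rows of (SVL / 8) bytes. */
	static size_t sme_buf_size(unsigned int svl_bytes, bool has_sme2)
	{
		size_t size = svl_bytes * svl_bytes;

		if (has_sme2)
			size += 512 / 8;	/* ZT0 */

		return size;
	}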
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_hyp.h | 3 +++ arch/arm64/kvm/hyp/fpsimd.S | 26 ++++++++++++++++++++++++++ 2 files changed, 29 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 0ad5a66e0d25..1c31d8b26aa9 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -115,6 +115,9 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr); void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr); +int __sve_get_vl(void); +void __sme_save_state(void const *state, bool restore_zt); +void __sme_restore_state(void const *state, bool restore_zt);
u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index 6e16cbfc5df2..44a1b0a483da 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -29,3 +29,29 @@ SYM_FUNC_START(__sve_save_state) sve_save 0, x1, x2, 3 ret SYM_FUNC_END(__sve_save_state) + +SYM_FUNC_START(__sve_get_vl) + _sve_rdvl 0, 1 + ret +SYM_FUNC_END(__sve_get_vl) + +SYM_FUNC_START(__sme_save_state) + _sme_rdsvl 2, 1 // x2 = VL/8 + sme_save_za 0, x2, 12 // Leaves x0 pointing to the end of ZA + + cbz x1, 1f + _str_zt 0 +1: + ret +SYM_FUNC_END(__sme_save_state) + +SYM_FUNC_START(__sme_restore_state) + _sme_rdsvl 2, 1 // x2 = VL/8 + sme_load_za 0, x2, 12 // Leaves x0 pointing to end of ZA + + cbz x1, 1f + _ldr_zt 0 + +1: + ret +SYM_FUNC_END(__sme_restore_state)
SME introduces a mode called streaming mode where the Z, P and optionally FFR registers can be accessed using the SVE instructions, but with the SME vector length. Reflect this in the ABI for accessing the guest registers by making the vector length for the vcpu match the vector length that the guest would see were it running: the SME vector length when the guest is configured for streaming mode.
Since SME may be present without SVE we also update the existing checks for access to the Z, P and V registers to check for either SVE or streaming mode. When not in streaming mode the guest floating point state may be accessed via the V registers.
Any VMM that supports SME must be aware of the need this creates to configure streaming mode prior to writing the floating point registers.
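For example, a sketch of the ordering a VMM needs when restoring a guest that was in streaming mode (SVCR is encoded as op0=3, op1=3, CRn=4, CRm=2, op2=2 and SM is bit 0; the macro names are illustrative, includes as in the earlier sketch):

	#define REG_SVCR	ARM64_SYS_REG(3, 3, 4, 2, 2)
	#define SVCR_SM		(1ULL << 0)

	static int enter_streaming_mode(int vcpu_fd)
	{
		__u64 svcr = SVCR_SM;
		struct kvm_one_reg reg = {
			.id   = REG_SVCR,
			.addr = (__u64)(unsigned long)&svcr,
		};

		/*
		 * Set SVCR first: only then do the Z and P registers
		 * take on the SME vector length for subsequent writes.
		 */
		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	}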
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/kvm/guest.c | 38 ++++++++++++++++++++++++++++++++++---- 1 file changed, 34 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 6460bb21e01d..4ba0afa369d5 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -73,6 +73,11 @@ static u64 core_reg_offset_from_id(u64 id) return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE); }
+static bool vcpu_has_sve_regs(const struct kvm_vcpu *vcpu) +{ + return vcpu_has_sve(vcpu) || vcpu_in_streaming_mode(vcpu); +} + static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off) { int size; @@ -110,9 +115,10 @@ static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off) /* * The KVM_REG_ARM64_SVE regs must be used instead of * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on - * SVE-enabled vcpus: + * SVE-enabled vcpus or when a SME enabled vcpu is in + * streaming mode: */ - if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off)) + if (vcpu_has_sve_regs(vcpu) && core_reg_offset_is_vreg(off)) return -EINVAL;
return size; @@ -426,6 +432,24 @@ struct vec_state_reg_region { unsigned int upad; /* extra trailing padding in user memory */ };
+/* + * We represent the Z and P registers to userspace using either the + * SVE or SME vector length, depending on which features the guest has + * and if the guest is in streaming mode. + */ +static unsigned int vcpu_sve_cur_vq(struct kvm_vcpu *vcpu) +{ + unsigned int vq = 0; + + if (vcpu_has_sve(vcpu)) + vq = vcpu_sve_max_vq(vcpu); + + if (vcpu_in_streaming_mode(vcpu)) + vq = vcpu_sme_max_vq(vcpu); + + return vq; +} + /* * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy @@ -466,7 +490,7 @@ static int sve_reg_to_region(struct vec_state_reg_region *region, if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT;
- vq = vcpu_sve_max_vq(vcpu); + vq = vcpu_sve_cur_vq(vcpu);
reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; @@ -476,7 +500,7 @@ static int sve_reg_to_region(struct vec_state_reg_region *region, if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT;
- vq = vcpu_sve_max_vq(vcpu); + vq = vcpu_sve_cur_vq(vcpu);
reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; @@ -515,6 +539,9 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
+ if (!vcpu_has_sve_regs(vcpu)) + return -EBUSY; + if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, region.klen) || clear_user(uptr + region.klen, region.upad)) @@ -541,6 +568,9 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
+ if (!vcpu_has_sve_regs(vcpu)) + return -EBUSY; + if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, region.klen)) return -EFAULT;
Writes to the physical SVCR.SM and SVCR.ZA change the state of PSTATE.SM and PSTATE.ZA, causing other floating point state to be reset. Emulate this behaviour for writes done via the KVM userspace ABI.
Setting PSTATE.ZA to 1 causes ZA and ZT0 to be reset to 0; these are stored in sme_state. Setting PSTATE.ZA to 0 causes ZA and ZT0 to become inaccessible, so no reset is needed.
Any change in PSTATE.SM causes the V, Z, P, FFR and FPMR registers to be reset to 0 and FPSR to be reset to 0x800009f.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 24 ++++++++++++++++++++++++ arch/arm64/kvm/sys_regs.c | 29 ++++++++++++++++++++++++++++- 2 files changed, 52 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 5ce9e06324b5..431e5c0ce119 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1123,6 +1123,30 @@ struct kvm_vcpu_arch {
#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_vl[ARM64_VEC_SVE])
+#define vcpu_sme_state(vcpu) (kern_hyp_va((vcpu)->arch.sme_state)) + +#define sme_state_size_from_vl(vl, sme2) ({ \ + size_t __size_ret; \ + unsigned int __vq; \ + \ + if (WARN_ON(!sve_vl_valid(vl))) { \ + __size_ret = 0; \ + } else { \ + __vq = sve_vq_from_vl(vl); \ + __size_ret = ZA_SIG_REGS_SIZE(__vq); \ + if (sme2) \ + __size_ret += ZT_SIG_REG_SIZE; \ + } \ + \ + __size_ret; \ +}) + +#define vcpu_sme_state_size(vcpu) ({ \ + unsigned long __vl; \ + __vl = (vcpu)->arch.max_vl[ARM64_VEC_SME]; \ + sme_state_size_from_vl(__vl, vcpu_has_sme2(vcpu)); \ +}) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 1fee8e534615..edc61a53ba64 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -806,6 +806,33 @@ static bool access_smidr(struct kvm_vcpu *vcpu, return true; }
+static int set_svcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd, + u64 val) +{ + u64 old = __vcpu_sys_reg(vcpu, rd->reg); + + if (val & SVCR_RES0) + return -EINVAL; + + if ((val & SVCR_ZA) && !(old & SVCR_ZA) && vcpu->arch.sme_state) + memset(vcpu->arch.sme_state, 0, vcpu_sme_state_size(vcpu)); + + if ((val & SVCR_SM) != (old & SVCR_SM)) { + memset(vcpu->arch.ctxt.fp_regs.vregs, 0, + sizeof(vcpu->arch.ctxt.fp_regs.vregs)); + + if (vcpu->arch.sve_state) + memset(vcpu->arch.sve_state, 0, + vcpu_sve_state_size(vcpu)); + + __vcpu_assign_sys_reg(vcpu, FPMR, 0); + vcpu->arch.ctxt.fp_regs.fpsr = 0x800009f; + } + + __vcpu_assign_sys_reg(vcpu, rd->reg, val); + return 0; +} + static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { @@ -3175,7 +3202,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { CTR_EL0_DminLine_MASK | CTR_EL0_L1Ip_MASK | CTR_EL0_IminLine_MASK), - { SYS_DESC(SYS_SVCR), undef_access, reset_val, SVCR, 0, .visibility = sme_visibility }, + { SYS_DESC(SYS_SVCR), undef_access, reset_val, SVCR, 0, .visibility = sme_visibility, .set_user = set_svcr }, { SYS_DESC(SYS_FPMR), undef_access, reset_val, FPMR, 0, .visibility = fp8_visibility },
{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
SME introduces two new registers, the ZA matrix register and the ZT0 LUT register. Both of these registers are only accessible when PSTATE.ZA is set and ZT0 is only present if SME2 is enabled for the guest. Provide support for configuring these from VMMs.
The ZA matrix is a single SVL x SVL bit register which is available when PSTATE.ZA is set. We follow the pattern established by the architecture itself and expose this to userspace as a series of horizontal SVE vectors with the streaming mode vector length, using the format already established for the SVE vectors themselves.
ZT0 is a single register with a refreshingly fixed size of 512 bits which, like ZA, is architecturally accessible only when PSTATE.ZA is set. Add support for it to the userspace API; as with ZA, userspace access requires that PSTATE.ZA be set. The value is reset to 0 whenever PSTATE.ZA changes from 0 to 1, so stale values are never observable by the guest.
While there is currently only one ZT register, the ZT0 naming and the architecture's instruction encodings clearly leave room for future extensions adding more ZT registers. The register ID encoding used here can readily support such an extension if one is introduced.
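A sketch of how a VMM might read back the whole of ZA with this encoding (illustrative only; assumes the vcpu is finalised, PSTATE.ZA is set and svl_bytes holds the streaming VL in bytes; each slice transfer is 2048 bits with anything beyond the configured VL returned as zero padding):

	static int read_za(int vcpu_fd, unsigned int svl_bytes, __u8 za[][256])
	{
		unsigned int n;

		for (n = 0; n < svl_bytes; n++) {
			struct kvm_one_reg reg = {
				.id   = KVM_REG_ARM64_SME_ZAHREG(n, 0),
				.addr = (__u64)(unsigned long)za[n],
			};

			if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
				return -1;
		}

		return 0;
	}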
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/uapi/asm/kvm.h | 17 ++++++ arch/arm64/kvm/guest.c | 114 +++++++++++++++++++++++++++++++++++++- 2 files changed, 129 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 5220797361e0..cf75a830f17a 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -357,6 +357,23 @@ struct kvm_arm_counter_offset { /* SME registers */ #define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_ARM64_SME_VQ_MIN __SVE_VQ_MIN +#define KVM_ARM64_SME_VQ_MAX __SVE_VQ_MAX + +/* ZA and ZTn occupy blocks at the following offsets within this range: */ +#define KVM_REG_ARM64_SME_ZA_BASE 0 +#define KVM_REG_ARM64_SME_ZT_BASE 0x600 + +#define KVM_ARM64_SME_MAX_ZAHREG (__SVE_VQ_BYTES * KVM_ARM64_SME_VQ_MAX) + +#define KVM_REG_ARM64_SME_ZAHREG(n, i) \ + (KVM_REG_ARM64 | KVM_REG_ARM64_SME | KVM_REG_ARM64_SME_ZA_BASE | \ + KVM_REG_SIZE_U2048 | \ + (((n) & (KVM_ARM64_SME_MAX_ZAHREG - 1)) << 5) | \ + ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1))) + +#define KVM_REG_ARM64_SME_ZTREG_SIZE (512 / 8) + /* Vector lengths pseudo-register: */ #define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ KVM_REG_SIZE_U512 | 0xffff) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 4ba0afa369d5..6ed5f4fe6126 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -594,23 +594,133 @@ static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return set_vec_vls(ARM64_VEC_SME, vcpu, reg); }
+/* + * Validate SME register ID and get sanitised bounds for user/kernel SME + * register copy + */ +static int sme_reg_to_region(struct vec_state_reg_region *region, + struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + /* reg ID ranges for ZA.H[n] registers */ + unsigned int vq = vcpu_sme_max_vq(vcpu) - 1; + const u64 za_h_max = vq * __SVE_VQ_BYTES; + const u64 zah_id_min = KVM_REG_ARM64_SME_ZAHREG(0, 0); + const u64 zah_id_max = KVM_REG_ARM64_SME_ZAHREG(za_h_max - 1, + SVE_NUM_SLICES - 1); + unsigned int reg_num; + + unsigned int reqoffset, reqlen; /* User-requested offset and length */ + unsigned int maxlen; /* Maximum permitted length */ + + size_t sme_state_size; + + reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; + + if (reg->id >= zah_id_min && reg->id <= zah_id_max) { + if (!vcpu_has_sme(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + return -ENOENT; + + /* ZA is exposed as SVE vectors ZA.H[n] */ + reqoffset = ZA_SIG_ZAV_OFFSET(vq, reg_num) - + ZA_SIG_REGS_OFFSET; + reqlen = KVM_SVE_ZREG_SIZE; + maxlen = SVE_SIG_ZREG_SIZE(vq); + } else if (reg->id == KVM_REG_ARM64_SME_ZT_BASE) { + if (!kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, SME2) || + (reg->id & SVE_REG_SLICE_MASK) > 0 || + reg_num > 0) + return -ENOENT; + + /* ZT0 is stored after ZA */ + reqoffset = ZA_SIG_REGS_SIZE(vcpu_sme_max_vq(vcpu)); + reqlen = KVM_REG_ARM64_SME_ZTREG_SIZE; + maxlen = KVM_REG_ARM64_SME_ZTREG_SIZE; + } else { + return -EINVAL; + } + + sme_state_size = vcpu_sme_state_size(vcpu); + if (WARN_ON(!sme_state_size)) + return -EINVAL; + + region->koffset = array_index_nospec(reqoffset, sme_state_size); + region->klen = min(maxlen, reqlen); + region->upad = reqlen - region->klen; + + return 0; +} + +/* + * ZA is exposed as an array of horizontal vectors with the same + * format as SVE, mirroring the architecture's LDR ZA[Wv, offs], [Xn] + * instruction. + */ + static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + int ret; + struct vec_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id == KVM_REG_ARM64_SME_VLS) return get_sme_vls(vcpu, reg);
- return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret = sme_reg_to_region(&region, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + /* + * None of the SME specific registers are accessible unless + * PSTATE.ZA is set. + */ + if (!vcpu_za_enabled(vcpu)) + return -EINVAL; + + if (copy_to_user(uptr, vcpu->arch.sme_state + region.koffset, + region.klen) || clear_user(uptr + region.klen, region.upad)) + return -EFAULT; + + return 0; }
static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + int ret; + struct vec_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id == KVM_REG_ARM64_SME_VLS) return set_sme_vls(vcpu, reg);
- return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret = sme_reg_to_region(®ion, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + /* + * None of the SME specific registers are accessible unless + * PSTATE.ZA is set. + */ + if (!vcpu_za_enabled(vcpu)) + return -EINVAL; + + if (copy_from_user(vcpu->arch.sme_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; } + int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { return -EINVAL;
If the guest has SME state we need to context switch that state; provide support for this for normal guests.
SME has three sets of registers: ZA, ZT (only present for SME2) and also streaming SVE, which replaces the standard floating point registers when active. The first two are fairly straightforward; they are accessible only when PSTATE.ZA is set and we can reuse the assembly from the host to save and load them from a single contiguous buffer. When PSTATE.ZA is not set these registers are inaccessible; when the guest enables PSTATE.ZA all bits are set to 0 by that operation, so nothing is required on restore.
Streaming mode is slightly more complicated. When enabled via PSTATE.SM it provides a version of the SVE registers using the SME vector length and may optionally omit the FFR register; SME may also be present without SVE. The register state is stored in sve_state, as for non-streaming SVE mode. We make an initial selection of registers to update based on the guest SVE support and then override this when loading SVCR if streaming mode is enabled.
A further complication is that when the hardware is in streaming mode guest operations that are invalid in streaming mode will generate SME exceptions. There are also subfeature exceptions for SME2, controlled via SMCR, which generate distinct exception codes. In many situations these exceptions are routed directly to the lower ELs with no opportunity for the hypervisor to intercept. So that guests do not see unexpected exception types due to the actual hardware configuration not being what the guest configured, we update the SMCRs and SVCR even if the guest does not own the registers.
Since, in order to avoid duplication with SME, we now restore the register state outside of the SVE specific restore function, we need to move the restore of the effective VL for nested guests into a separate function run after loading the floating point register state, along with the similar handling required for SME.
The selection of which vector length to use is handled by vcpu_sve_pffr().
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/fpsimd.h | 10 +++ arch/arm64/include/asm/kvm_emulate.h | 6 ++ arch/arm64/include/asm/kvm_host.h | 4 + arch/arm64/kvm/fpsimd.c | 25 ++++-- arch/arm64/kvm/hyp/include/hyp/switch.h | 151 ++++++++++++++++++++++++++++++-- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 80 +++++++++++++++-- 6 files changed, 255 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 38c24c6485ad..40b56fba9c54 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -442,6 +442,15 @@ static inline size_t sme_state_size(struct task_struct const *task) write_sysreg_s(__new, (reg)); \ } while (0)
+#define sme_cond_update_smcr_vq(val, reg) \ + do { \ + u64 __smcr = read_sysreg_s((reg)); \ + u64 __new = __smcr & ~SMCR_ELx_LEN_MASK; \ + __new |= (val) & SMCR_ELx_LEN_MASK; \ + if (__smcr != __new) \ + write_sysreg_s(__new, (reg)); \ + } while (0) + #else
static inline void sme_user_disable(void) { BUILD_BUG(); } @@ -471,6 +480,7 @@ static inline size_t sme_state_size(struct task_struct const *task) }
#define sme_cond_update_smcr(val, fa64, zt0, reg) do { } while (0) +#define sme_cond_update_smcr_vq(val, reg) do { } while (0)
#endif /* ! CONFIG_ARM64_SME */
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 0720898f563e..06cabf08da14 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -629,4 +629,10 @@ static inline void vcpu_set_hcrx(struct kvm_vcpu *vcpu) vcpu->arch.hcrx_el2 |= HCRX_EL2_EnFPM; } } + +static inline bool guest_hyp_sme_traps_enabled(const struct kvm_vcpu *vcpu) +{ + return __guest_hyp_cptr_xen_trap_enabled(vcpu, SMEN); +} + #endif /* __ARM64_KVM_EMULATE_H__ */ diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 431e5c0ce119..08004908575a 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -732,6 +732,7 @@ struct kvm_host_data {
/* Used by pKVM only. */ u64 fpmr; + u64 smcr_el1;
/* Ownership of the FP regs */ enum { @@ -1107,6 +1108,9 @@ struct kvm_vcpu_arch { #define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1)
+#define vcpu_sme_smcr_elx(vcpu) \ + (unlikely(is_hyp_ctxt(vcpu)) ? SMCR_EL2 : SMCR_EL1) + #define sve_state_size_from_vl(sve_max_vl) ({ \ size_t __size_ret; \ unsigned int __vq; \ diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 134485b52f51..38ff05348aa3 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -95,19 +95,25 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) WARN_ON_ONCE(!irqs_disabled());
if (guest_owns_fp_regs()) { - /* - * Currently we do not support SME guests so SVCR is - * always 0 and we just need a variable to point to. - */ fp_state.st = &vcpu->arch.ctxt.fp_regs; fp_state.sve_state = vcpu->arch.sve_state; fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; - fp_state.sme_state = NULL; + fp_state.sme_state = vcpu->arch.sme_state; + fp_state.sme_vl = vcpu->arch.max_vl[ARM64_VEC_SME]; fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR); fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR); fp_state.fp_type = &vcpu->arch.fp_type; + fp_state.sme_features = 0; + if (kvm_has_fa64(vcpu->kvm)) + fp_state.sme_features |= SMCR_ELx_FA64; + if (kvm_has_sme2(vcpu->kvm)) + fp_state.sme_features |= SMCR_ELx_EZT0;
+ /* + * For SME-only hosts fpsimd_save() will override the + * state selection if we are in streaming mode. + */ if (vcpu_has_sve(vcpu)) fp_state.to_save = FP_STATE_SVE; else @@ -116,6 +122,15 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fpsimd_bind_state_to_cpu(&fp_state);
clear_thread_flag(TIF_FOREIGN_FPSTATE); + } else { + /* + * We might have enabled SME to configure traps, but we + * insist that SME be disabled when running the hypervisor; + * ensure it is disabled again. + */ + if (system_supports_sme()) { + sme_smstop(); + } } }
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 899826ea10ea..02ff7654fa9d 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -516,6 +516,29 @@ static inline bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code) return true; }
+static inline void __hyp_sme_restore_guest(struct kvm_vcpu *vcpu, + bool *restore_sve, + bool *restore_ffr) +{ + bool has_fa64 = vcpu_has_fa64(vcpu); + bool has_sme2 = vcpu_has_sme2(vcpu); + + sme_cond_update_smcr(vcpu_sme_max_vq(vcpu) - 1, has_fa64, has_sme2, + SYS_SMCR_EL2); + + write_sysreg_el1(__vcpu_sys_reg(vcpu, SMCR_EL1), SYS_SMCR); + + write_sysreg_s(__vcpu_sys_reg(vcpu, SVCR), SYS_SVCR); + + if (vcpu_in_streaming_mode(vcpu)) { + *restore_sve = true; + *restore_ffr = has_fa64; + } + + if (vcpu_za_enabled(vcpu)) + __sme_restore_state(vcpu_sme_state(vcpu), has_sme2); +} + static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) { /* @@ -523,19 +546,25 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) * vCPU. Start off with the max VL so we can load the SVE state. */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); - __sve_restore_state(vcpu_sve_pffr(vcpu), - &vcpu->arch.ctxt.fp_regs.fpsr, - true);
+ write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR); +} + +static inline void __hyp_nv_restore_guest_vls(struct kvm_vcpu *vcpu) +{ /* * The effective VL for a VM could differ from the max VL when running a * nested guest, as the guest hypervisor could select a smaller VL. Slap * that into hardware before wrapping up. */ - if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) + if (!(vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))) + return; + + if (vcpu_has_sve(vcpu)) sve_cond_update_zcr_vq(__vcpu_sys_reg(vcpu, ZCR_EL2), SYS_ZCR_EL2);
- write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR); + if (vcpu_has_sme(vcpu)) + sme_cond_update_smcr_vq(__vcpu_sys_reg(vcpu, SMCR_EL2), SYS_SMCR_EL2); }
static inline void __hyp_sve_save_host(void) @@ -549,10 +578,40 @@ static inline void __hyp_sve_save_host(void) true); }
+static inline void kvm_sme_configure_traps(struct kvm_vcpu *vcpu) +{ + u64 smcr_el1, smcr_el2; + u64 svcr; + + if (!vcpu_has_sme(vcpu)) + return; + + /* A guest hypervisor may restrict the effective max VL. */ + if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) + smcr_el2 = __vcpu_sys_reg(vcpu, SMCR_EL2); + else + smcr_el2 = vcpu_sme_max_vq(vcpu) - 1; + + if (vcpu_has_fa64(vcpu)) + smcr_el2 |= SMCR_ELx_FA64; + if (vcpu_has_sme2(vcpu)) + smcr_el2 |= SMCR_ELx_EZT0; + + write_sysreg_el2(smcr_el2, SYS_SMCR); + + smcr_el1 = __vcpu_sys_reg(vcpu, vcpu_sme_smcr_elx(vcpu)); + write_sysreg_el1(smcr_el1, SYS_SMCR); + + svcr = __vcpu_sys_reg(vcpu, SVCR); + write_sysreg_s(svcr, SYS_SVCR); +} + static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu) { u64 zcr_el1, zcr_el2;
+ kvm_sme_configure_traps(vcpu); + if (!guest_owns_fp_regs()) return;
@@ -572,8 +631,50 @@ static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu)
static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) { + u64 smcr_el1, smcr_el2; u64 zcr_el1, zcr_el2;
+ if (vcpu_has_sme(vcpu)) { + /* + * __deactivate_cptr_traps() disabled traps, but there + * hasn't necessarily been a context synchronization + * event yet. + */ + isb(); + + smcr_el1 = read_sysreg_el1(SYS_SMCR); + __vcpu_assign_sys_reg(vcpu, vcpu_sme_smcr_elx(vcpu), smcr_el1); + + smcr_el2 = 0; + if (system_supports_fa64()) + smcr_el2 |= SMCR_ELx_FA64; + if (system_supports_sme2()) + smcr_el2 |= SMCR_ELx_EZT0; + + /* + * The guest's state is always saved using the guest's max VL. + * Ensure that the host has the guest's max VL active such that + * the host can save the guest's state lazily, but don't + * artificially restrict the host to the guest's max VL. + */ + if (has_vhe()) { + smcr_el2 |= vcpu_sme_max_vq(vcpu) - 1; + write_sysreg_el2(smcr_el2, SYS_SMCR); + } else { + smcr_el1 = smcr_el2; + smcr_el2 |= sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SME]) - 1; + write_sysreg_el2(smcr_el2, SYS_SMCR); + + smcr_el1 |= vcpu_sme_max_vq(vcpu) - 1; + write_sysreg_el1(smcr_el1, SYS_SMCR); + } + + if (guest_owns_fp_regs()) { + u64 svcr = read_sysreg_s(SYS_SVCR); + __vcpu_assign_sys_reg(vcpu, SVCR, svcr); + } + } + if (!guest_owns_fp_regs()) return;
@@ -610,6 +711,16 @@ static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu)
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu) { + /* + * The hypervisor refuses to run if streaming mode or ZA is + * enabled, so we only need to save SMCR_EL1 for SME. For pKVM + * we will restore this, reset SMCR_EL2 to a fixed value and + * disable streaming mode and ZA to avoid any state being + * leaked. + */ + if (system_supports_sme()) + *host_data_ptr(smcr_el1) = read_sysreg_el1(SYS_SMCR); + /* * Non-protected kvm relies on the host restoring its sve state. * Protected kvm restores the host's sve state as not to reveal that @@ -634,14 +745,17 @@ */ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) { - bool sve_guest; - u8 esr_ec; + bool restore_sve, restore_ffr; + bool sve_guest, sme_guest; + u8 esr_ec, esr_iss_smtc;
if (!system_supports_fpsimd()) return false;
sve_guest = vcpu_has_sve(vcpu); + sme_guest = vcpu_has_sme(vcpu); esr_ec = kvm_vcpu_trap_get_class(vcpu); + esr_iss_smtc = ESR_ELx_SME_ISS_SMTC(kvm_vcpu_get_esr(vcpu));
/* Only handle traps the vCPU can support here: */ switch (esr_ec) { @@ -660,6 +774,15 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) if (guest_hyp_sve_traps_enabled(vcpu)) return false; break; + case ESR_ELx_EC_SME: + if (!sme_guest) + return false; + if (guest_hyp_sme_traps_enabled(vcpu)) + return false; + if (!kvm_has_sme2(vcpu->kvm) && + (esr_iss_smtc == ESR_ELx_SME_ISS_SMTC_ZT_DISABLED)) + return false; + break; default: return false; } @@ -675,8 +798,20 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) kvm_hyp_save_fpsimd_host(vcpu);
/* Restore the guest state */ + + /* These may be overridden for an SME guest */ + restore_sve = sve_guest; + restore_ffr = sve_guest; + if (sve_guest) __hyp_sve_restore_guest(vcpu); + if (sme_guest) + __hyp_sme_restore_guest(vcpu, &restore_sve, &restore_ffr); + + if (restore_sve) + __sve_restore_state(vcpu_sve_pffr(vcpu), + &vcpu->arch.ctxt.fp_regs.fpsr, + restore_ffr); else __fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs);
@@ -687,6 +822,8 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) if (!(read_sysreg(hcr_el2) & HCR_RW)) write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
+ __hyp_nv_restore_guest_vls(vcpu); + *host_data_ptr(fp_owner) = FP_STATE_GUEST_OWNED;
/* diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 76be13efcfcb..ccbe0389c0b7 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -26,14 +26,17 @@ void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) { + bool save_ffr = !vcpu_in_streaming_mode(vcpu) || vcpu_has_fa64(vcpu); + __vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR)); + /* * On saving/restoring guest sve state, always use the maximum VL for * the guest. The layout of the data when saving the sve state depends * on the VL, so use a consistent (i.e., the maximum) guest VL. */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); - __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true); + __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, save_ffr); write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); }
@@ -57,9 +60,63 @@ static void __hyp_sve_restore_host(void) write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR); }
-static void fpsimd_sve_flush(void) +static void __hyp_sme_save_guest(struct kvm_vcpu *vcpu) { - *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED; + __vcpu_assign_sys_reg(vcpu, SMCR_EL1, read_sysreg_el1(SYS_SMCR)); + __vcpu_assign_sys_reg(vcpu, SVCR, read_sysreg_s(SYS_SVCR)); + + /* + * On saving/restoring guest SME state, always use the maximum VL for + * the guest. The layout of the data when saving the SME state depends + * on the VL, so use a consistent (i.e., the maximum) guest VL. + * + * We apply the host's FA64 and SME2 enables since we will always + * restore the host configuration afterwards; if the host and guest + * VLs are the same this allows the SMCR_EL2 update on restore to be + * suppressed. + */ + sme_cond_update_smcr(vcpu_sme_max_vq(vcpu) - 1, system_supports_fa64(), + system_supports_sme2(), SYS_SMCR_EL2); + + if (vcpu_za_enabled(vcpu)) + __sme_save_state(vcpu_sme_state(vcpu), vcpu_has_sme2(vcpu)); +} + +static void __hyp_sme_restore_host(void) +{ + /* + * The hypervisor refuses to run if we are in streaming mode + * or have ZA enabled, so there is no SME-specific state to + * restore other than the system registers. + * + * Note that this constrains the PE to the maximum shared VL + * that was discovered; if we wish to use larger VLs this will + * need to be revisited. + */ + sme_cond_update_smcr(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SME]) - 1, + cpus_have_final_cap(ARM64_SME_FA64), + cpus_have_final_cap(ARM64_SME2), SYS_SMCR_EL2); + + write_sysreg_el1(*host_data_ptr(smcr_el1), SYS_SMCR); + + sme_smstop(); +} + +static void fpsimd_sve_flush(struct kvm_vcpu *vcpu) +{ + /* + * If the guest has SME then we need to restore the trap + * controls in SMCR and mode in SVCR in order to ensure that + * traps generated directly to EL1 have the correct types; + * otherwise we can defer until we load the guest state. + */ + if (vcpu_has_sme(vcpu)) { + kvm_hyp_save_fpsimd_host(vcpu); + kvm_sme_configure_traps(vcpu); + + *host_data_ptr(fp_owner) = FP_STATE_FREE; + } else { + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED; + } }
static void fpsimd_sve_sync(struct kvm_vcpu *vcpu) @@ -75,7 +132,10 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu) */ isb();
- if (vcpu_has_sve(vcpu)) + if (vcpu_has_sme(vcpu)) + __hyp_sme_save_guest(vcpu); + + if (vcpu_has_sve(vcpu) || vcpu_in_streaming_mode(vcpu)) __hyp_sve_save_guest(vcpu); else __fpsimd_save_state(&vcpu->arch.ctxt.fp_regs); @@ -84,6 +144,9 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu) if (has_fpmr) __vcpu_assign_sys_reg(vcpu, FPMR, read_sysreg_s(SYS_FPMR));
+ if (system_supports_sme()) + __hyp_sme_restore_host(); + if (system_supports_sve()) __hyp_sve_restore_host(); else @@ -121,7 +184,7 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu) { struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
- fpsimd_sve_flush(); + fpsimd_sve_flush(host_vcpu); flush_debug_state(hyp_vcpu);
hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt; @@ -203,10 +266,9 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) struct pkvm_hyp_vcpu *hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
/* - * KVM (and pKVM) doesn't support SME guests for now, and - * ensures that SME features aren't enabled in pstate when - * loading a vcpu. Therefore, if SME features enabled the host - * is misbehaving. + * KVM (and pKVM) refuses to run if PSTATE.{SM,ZA} are + * enabled. Therefore, if SME features are enabled the + * host is misbehaving. + */ if (unlikely(system_supports_sme() && read_sysreg_s(SYS_SVCR))) { ret = -EINVAL;
The access control for SME follows the same structure as for the base FP and SVE extensions, with control via CPACR_ELx.SMEN and CPTR_EL2.TSM mirroring the equivalent FPSIMD and SVE controls in those registers. Add handling for these controls and exceptions, mirroring the existing handling for FPSIMD and SVE.
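The SMEN field uses the same two-bit encoding as the existing FPEN and ZEN fields, which is what the BIT(0) check in the VHE hunk below relies on. As a minimal illustrative sketch (the field position and encoding are taken from the architecture, not from this patch):

	#include <stdbool.h>
	#include <stdint.h>

	/*
	 * Hedged sketch of the CPACR_EL1.SMEN encoding: SMEN lives at
	 * bits [25:24] and, like FPEN and ZEN, 0b11 disables traps,
	 * 0b01 traps EL0 only, and 0b00/0b10 trap both EL0 and EL1.
	 */
	#define CPACR_EL1_SMEN_SHIFT	24

	static bool smen_traps_el1(uint64_t cpacr)
	{
		/* EL1 accesses trap whenever bit 0 of SMEN is clear */
		return !((cpacr >> CPACR_EL1_SMEN_SHIFT) & 0x1);
	}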
Signed-off-by: Mark Brown <broonie@kernel.org> --- arch/arm64/kvm/handle_exit.c | 14 ++++++++++++++ arch/arm64/kvm/hyp/include/hyp/switch.h | 11 ++++++----- arch/arm64/kvm/hyp/nvhe/switch.c | 4 +++- arch/arm64/kvm/hyp/vhe/switch.c | 17 ++++++++++++----- 4 files changed, 35 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 453266c96481..74e1db2e0458 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -232,6 +232,19 @@ static int handle_sve(struct kvm_vcpu *vcpu) return 1; }
+/* + * Guest access to SME registers should be routed to this handler only + * when the system doesn't support SME. + */ +static int handle_sme(struct kvm_vcpu *vcpu) +{ + if (guest_hyp_sme_traps_enabled(vcpu)) + return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu)); + + kvm_inject_undefined(vcpu); + return 1; +} + /* * Two possibilities to handle a trapping ptrauth instruction: * @@ -391,6 +404,7 @@ static exit_handle_fn arm_exit_handlers[] = { [ESR_ELx_EC_SVC64] = handle_svc, [ESR_ELx_EC_SYS64] = kvm_handle_sys_reg, [ESR_ELx_EC_SVE] = handle_sve, + [ESR_ELx_EC_SME] = handle_sme, [ESR_ELx_EC_ERET] = kvm_handle_eret, [ESR_ELx_EC_IABT_LOW] = kvm_handle_guest_abort, [ESR_ELx_EC_DABT_LOW] = kvm_handle_guest_abort, diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 02ff7654fa9d..9f30667e44d4 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -69,11 +69,8 @@ static inline void __activate_cptr_traps_nvhe(struct kvm_vcpu *vcpu) { u64 val = CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA;
- /* - * Always trap SME since it's not supported in KVM. - * TSM is RES1 if SME isn't implemented. - */ - val |= CPTR_EL2_TSM; + if (!vcpu_has_sme(vcpu) || !guest_owns_fp_regs()) + val |= CPTR_EL2_TSM;
if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) val |= CPTR_EL2_TZ; @@ -101,6 +98,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu) val |= CPACR_EL1_FPEN; if (vcpu_has_sve(vcpu)) val |= CPACR_EL1_ZEN; + if (vcpu_has_sme(vcpu)) + val |= CPACR_EL1_SMEN; }
if (!vcpu_has_nv(vcpu)) @@ -142,6 +141,8 @@ static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu) val &= ~CPACR_EL1_FPEN; if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0))) val &= ~CPACR_EL1_ZEN; + if (!(SYS_FIELD_GET(CPACR_EL1, SMEN, cptr) & BIT(0))) + val &= ~CPACR_EL1_SMEN;
if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP)) val |= cptr & CPACR_EL1_E0POE; diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index 0e752b515d0f..3de22aff33a4 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -175,6 +175,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg, [ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd, + [ESR_ELx_EC_SME] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, @@ -186,7 +187,8 @@ static const exit_handler_fn pvm_exit_handlers[] = { [0 ... ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_SYS64] = kvm_handle_pvm_sys64, [ESR_ELx_EC_SVE] = kvm_handle_pvm_restricted, - [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, + [ESR_ELx_EC_SME] = kvm_handle_pvm_restricted, + [ESR_ELx_EC_FP_ASIMD] = kvm_handle_pvm_restricted, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 477f1580ffea..cb3062d53a5e 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -438,22 +438,28 @@ static bool kvm_hyp_handle_cpacr_el1(struct kvm_vcpu *vcpu, u64 *exit_code) return true; }
-static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code) +static bool kvm_hyp_handle_vec_cr_el2(struct kvm_vcpu *vcpu, u64 *exit_code) { u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
if (!vcpu_has_nv(vcpu)) return false;
- if (sysreg != SYS_ZCR_EL2) + switch (sysreg) { + case SYS_ZCR_EL2: + case SYS_SMCR_EL2: + break; + default: return false; + }
if (guest_owns_fp_regs()) return false;
/* - * ZCR_EL2 traps are handled in the slow path, with the expectation - * that the guest's FP context has already been loaded onto the CPU. + * ZCR_EL2 and SMCR_EL2 traps are handled in the slow path, + * with the expectation that the guest's FP context has + * already been loaded onto the CPU. * * Load the guest's FP context and unconditionally forward to the * slow path for handling (i.e. return false). @@ -473,7 +479,7 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code) if (kvm_hyp_handle_cpacr_el1(vcpu, exit_code)) return true;
- if (kvm_hyp_handle_zcr_el2(vcpu, exit_code)) + if (kvm_hyp_handle_vec_cr_el2(vcpu, exit_code)) return true;
return kvm_hyp_handle_sysreg(vcpu, exit_code); @@ -502,6 +508,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { [0 ... ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg_vhe, + [ESR_ELx_EC_SME] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low,
With support for context switching SME state in place, allow access to SME in nested guests.
The SME floating point state is handled along with all the other floating point state; SME-specific floating point exceptions are directed into the same handlers as other floating point exceptions, with NV-specific handling for the vector lengths already in place.
TPIDR2_EL0 is context switched along with the other TPIDRs as part of the main guest register context switch.
SME priority support is currently masked from all guests including nested ones.
Signed-off-by: Mark Brown <broonie@kernel.org> --- arch/arm64/kvm/nested.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c index 5b191f4dc566..9a03439dab90 100644 --- a/arch/arm64/kvm/nested.c +++ b/arch/arm64/kvm/nested.c @@ -1442,9 +1442,10 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val) break;
case SYS_ID_AA64PFR1_EL1: - /* Only support BTI, SSBS, CSV2_frac */ + /* Only support BTI, SSBS, SME, CSV2_frac */ val &= (ID_AA64PFR1_EL1_BT | ID_AA64PFR1_EL1_SSBS | + ID_AA64PFR1_EL1_SME | ID_AA64PFR1_EL1_CSV2_frac); break;
Since SME requires configuration of a vector length in order to know the size of both the streaming mode SVE state and the ZA array, we implement a capability for it and require that it be enabled and finalised before the SME-specific state can be accessed, similarly to SVE.
Due to the overlap with sizing the SVE state we finalise both SVE and SME with a single finalisation, preventing any further changes to the SVE and SME configuration once KVM_ARM_VCPU_VEC (an alias for _VCPU_SVE) has been finalised. This is not especially elegant, but it ensures that we never have a state where one of SVE or SME is finalised and the other is not, avoiding complexity.
SME is supported for normal and protected guests.
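To make the resulting userspace flow concrete, here is a minimal sketch of how a VMM might enable and finalise SME, assuming the uapi additions in this patch (KVM_ARM_VCPU_SME, the KVM_ARM_VCPU_VEC alias and KVM_REG_ARM64_SME_VLS); error handling is elided and this is illustrative rather than part of the series:

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int vcpu_init_with_sme(int vcpu_fd, struct kvm_vcpu_init *init)
	{
		int feature = KVM_ARM_VCPU_VEC;	/* alias for KVM_ARM_VCPU_SVE */
		int ret;

		/* Request SME; SVE may be requested independently */
		init->features[0] |= 1U << KVM_ARM_VCPU_SME;

		ret = ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);
		if (ret)
			return ret;

		/*
		 * KVM_REG_ARM64_SME_VLS could be written here; the single
		 * finalisation below freezes both the SVE and SME VLs.
		 */
		return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
	}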
Signed-off-by: Mark Brown <broonie@kernel.org> --- arch/arm64/include/asm/kvm_host.h | 12 +++- arch/arm64/include/uapi/asm/kvm.h | 1 + arch/arm64/kvm/arm.c | 10 ++++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 76 +++++++++++++++++++----- arch/arm64/kvm/hyp/nvhe/sys_regs.c | 6 ++ arch/arm64/kvm/reset.c | 116 +++++++++++++++++++++++++++++++------ include/uapi/linux/kvm.h | 1 + 7 files changed, 189 insertions(+), 33 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 08004908575a..ff054b5fbc35 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -39,7 +39,7 @@
#define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
-#define KVM_VCPU_MAX_FEATURES 9 +#define KVM_VCPU_MAX_FEATURES 10 #define KVM_VCPU_VALID_FEATURES (BIT(KVM_VCPU_MAX_FEATURES) - 1)
#define KVM_REQ_SLEEP \ @@ -81,6 +81,7 @@ extern unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
int __init kvm_arm_init_sve(void); +int __init kvm_arm_init_sme(void);
u32 __attribute_const__ kvm_target_cpu(void); void kvm_reset_vcpu(struct kvm_vcpu *vcpu); @@ -1125,7 +1126,14 @@ struct kvm_vcpu_arch { __size_ret; \ })
-#define vcpu_sve_state_size(vcpu) sve_state_size_from_vl((vcpu)->arch.max_vl[ARM64_VEC_SVE]) +#define vcpu_sve_state_size(vcpu) ({ \ + unsigned int __max_vl; \ + \ + __max_vl = max((vcpu)->arch.max_vl[ARM64_VEC_SVE], \ + (vcpu)->arch.max_vl[ARM64_VEC_SME]); \ + \ + sve_state_size_from_vl(__max_vl); \ +})
#define vcpu_sme_state(vcpu) (kern_hyp_va((vcpu)->arch.sme_state))
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index cf75a830f17a..4fb0d7353d54 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -106,6 +106,7 @@ struct kvm_regs { #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ #define KVM_ARM_VCPU_HAS_EL2_E2H0 8 /* Limit NV support to E2H RES0 */ +#define KVM_ARM_VCPU_SME 9 /* enable SME for this CPU */
/* * An alias for _SVE since we finalize VL configuration for both SVE and SME diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 38a91bb5d4c7..ebfbc1a0a92c 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -392,6 +392,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_SVE: r = system_supports_sve(); break; + case KVM_CAP_ARM_SME: + r = system_supports_sme(); + break; case KVM_CAP_ARM_PTRAUTH_ADDRESS: case KVM_CAP_ARM_PTRAUTH_GENERIC: r = kvm_has_full_ptr_auth(); @@ -1434,6 +1437,9 @@ static unsigned long system_supported_vcpu_features(void) if (!system_supports_sve()) clear_bit(KVM_ARM_VCPU_SVE, &features);
+ if (!system_supports_sme()) + clear_bit(KVM_ARM_VCPU_SME, &features); + if (!kvm_has_full_ptr_auth()) { clear_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features); clear_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features); @@ -2837,6 +2843,10 @@ static __init int kvm_arm_init(void) if (err) return err;
+ err = kvm_arm_init_sme(); + if (err) + return err; + err = kvm_arm_vmid_alloc_init(); if (err) { kvm_err("Failed to initialize VMID allocator.\n"); diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 65c49a5c7091..bee7abf95921 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -148,10 +148,6 @@ static int pkvm_check_pvm_cpu_features(struct kvm_vcpu *vcpu) !kvm_has_feat(kvm, ID_AA64PFR0_EL1, AdvSIMD, IMP)) return -EINVAL;
- /* No SME support in KVM right now. Check to catch if it changes. */ - if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP)) - return -EINVAL; - return 0; }
@@ -362,6 +358,11 @@ static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struc kvm->arch.flags |= host_arch_flags & BIT(KVM_ARCH_FLAG_GUEST_HAS_SVE); }
+ if (kvm_pvm_ext_allowed(KVM_CAP_ARM_SME)) { + set_bit(KVM_ARM_VCPU_SME, allowed_features); + kvm->arch.flags |= host_arch_flags & BIT(KVM_ARCH_FLAG_GUEST_HAS_SME); + } + bitmap_and(kvm->arch.vcpu_features, host_kvm->arch.vcpu_features, allowed_features, KVM_VCPU_MAX_FEATURES); } @@ -384,6 +385,18 @@ static void unpin_host_sve_state(struct pkvm_hyp_vcpu *hyp_vcpu) sve_state + vcpu_sve_state_size(&hyp_vcpu->vcpu)); }
+static void unpin_host_sme_state(struct pkvm_hyp_vcpu *hyp_vcpu) +{ + void *sme_state; + + if (!vcpu_has_feature(&hyp_vcpu->vcpu, KVM_ARM_VCPU_SME)) + return; + + sme_state = kern_hyp_va(hyp_vcpu->vcpu.arch.sme_state); + hyp_unpin_shared_mem(sme_state, + sme_state + vcpu_sme_state_size(&hyp_vcpu->vcpu)); +} + static void unpin_host_vcpus(struct pkvm_hyp_vcpu *hyp_vcpus[], unsigned int nr_vcpus) { @@ -397,6 +410,7 @@ static void unpin_host_vcpus(struct pkvm_hyp_vcpu *hyp_vcpus[],
unpin_host_vcpu(hyp_vcpu->host_vcpu); unpin_host_sve_state(hyp_vcpu); + unpin_host_sme_state(hyp_vcpu); } }
@@ -411,23 +425,35 @@ static void init_pkvm_hyp_vm(struct kvm *host_kvm, struct pkvm_hyp_vm *hyp_vm, pkvm_init_features_from_host(hyp_vm, host_kvm); }
-static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *host_vcpu) +static int pkvm_vcpu_init_vec(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *host_vcpu) { struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu; - unsigned int sve_max_vl; - size_t sve_state_size; - void *sve_state; + unsigned int sve_max_vl, sme_max_vl; + size_t sve_state_size, sme_state_size; + void *sve_state, *sme_state; int ret = 0;
- if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) { + if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE) && + !vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) { vcpu_clear_flag(vcpu, VCPU_VEC_FINALIZED); return 0; }
/* Limit guest vector length to the maximum supported by the host. */ - sve_max_vl = min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]), - kvm_host_max_vl[ARM64_VEC_SVE]); - sve_state_size = sve_state_size_from_vl(sve_max_vl); + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) + sve_max_vl = min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SVE]), + kvm_host_max_vl[ARM64_VEC_SVE]); + else + sve_max_vl = 0; + + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) + sme_max_vl = min(READ_ONCE(host_vcpu->arch.max_vl[ARM64_VEC_SME]), + kvm_host_max_vl[ARM64_VEC_SME]); + else + sme_max_vl = 0; + + /* We need SVE storage for the larger of normal or streaming mode */ + sve_state_size = sve_state_size_from_vl(max(sve_max_vl, sme_max_vl)); sve_state = kern_hyp_va(READ_ONCE(host_vcpu->arch.sve_state));
if (!sve_state || !sve_state_size) { @@ -439,12 +465,36 @@ static int pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu *h if (ret) goto err;
+ if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) { + sme_state_size = sme_state_size_from_vl(sme_max_vl, + vcpu_has_sme2(vcpu)); + sme_state = kern_hyp_va(READ_ONCE(host_vcpu->arch.sme_state)); + + if (!sme_state || !sme_state_size) { + ret = -EINVAL; + goto err_sve_mapped; + } + + ret = hyp_pin_shared_mem(sme_state, sme_state + sme_state_size); + if (ret) + goto err_sve_mapped; + } else { + sme_state = NULL; + } + vcpu->arch.sve_state = sve_state; vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_max_vl;
+ vcpu->arch.sme_state = sme_state; + vcpu->arch.max_vl[ARM64_VEC_SME] = sme_max_vl; + return 0; + +err_sve_mapped: + hyp_unpin_shared_mem(sve_state, sve_state + sve_state_size); err: clear_bit(KVM_ARM_VCPU_SVE, vcpu->kvm->arch.vcpu_features); + clear_bit(KVM_ARM_VCPU_SME, vcpu->kvm->arch.vcpu_features); return ret; }
@@ -474,7 +524,7 @@ static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu, if (ret) goto done;
- ret = pkvm_vcpu_init_sve(hyp_vcpu, host_vcpu); + ret = pkvm_vcpu_init_vec(hyp_vcpu, host_vcpu); done: if (ret) unpin_host_vcpu(host_vcpu); diff --git a/arch/arm64/kvm/hyp/nvhe/sys_regs.c b/arch/arm64/kvm/hyp/nvhe/sys_regs.c index 1ddd9ed3cbb3..eed177dfcb96 100644 --- a/arch/arm64/kvm/hyp/nvhe/sys_regs.c +++ b/arch/arm64/kvm/hyp/nvhe/sys_regs.c @@ -66,6 +66,11 @@ static bool vm_has_ptrauth(const struct kvm *kvm) kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC); }
+static bool vm_has_sme(const struct kvm *kvm) +{ + return system_supports_sme() && kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_SME); +} + static bool vm_has_sve(const struct kvm *kvm) { return system_supports_sve() && kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_SVE); @@ -102,6 +107,7 @@ static const struct pvm_ftr_bits pvmid_aa64pfr0[] = { };
static const struct pvm_ftr_bits pvmid_aa64pfr1[] = { + MAX_FEAT_FUNC(ID_AA64PFR1_EL1, SME, SME2, vm_has_sme), MAX_FEAT(ID_AA64PFR1_EL1, BT, IMP), MAX_FEAT(ID_AA64PFR1_EL1, SSBS, SSBS2), MAX_FEAT_ENUM(ID_AA64PFR1_EL1, MTE_frac, NI), diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index a8684a1346ec..e6dc04267cbb 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -76,6 +76,34 @@ int __init kvm_arm_init_sve(void) return 0; }
+int __init kvm_arm_init_sme(void) +{ + if (system_supports_sme()) { + kvm_max_vl[ARM64_VEC_SME] = sme_max_virtualisable_vl(); + kvm_host_max_vl[ARM64_VEC_SME] = sme_max_vl(); + kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SME]) = kvm_host_max_vl[ARM64_VEC_SME]; + + /* + * The get_sve_reg()/set_sve_reg() ioctl interface will need + * to be extended with multiple register slice support in + * order to support vector lengths greater than + * VL_ARCH_MAX: + */ + if (WARN_ON(kvm_max_vl[ARM64_VEC_SME] > VL_ARCH_MAX)) + kvm_max_vl[ARM64_VEC_SME] = VL_ARCH_MAX; + + /* + * Don't even try to make use of vector lengths that + * aren't available on all CPUs, for now: + */ + if (kvm_max_vl[ARM64_VEC_SME] < sme_max_vl()) + pr_warn("KVM: SME vector length for guests limited to %u bytes\n", + kvm_max_vl[ARM64_VEC_SME]); + } + + return 0; +} + static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) { vcpu->arch.max_vl[ARM64_VEC_SVE] = kvm_max_vl[ARM64_VEC_SVE]; @@ -88,42 +116,86 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) set_bit(KVM_ARCH_FLAG_GUEST_HAS_SVE, &vcpu->kvm->arch.flags); }
+static void kvm_vcpu_enable_sme(struct kvm_vcpu *vcpu) +{ + vcpu->arch.max_vl[ARM64_VEC_SME] = kvm_max_vl[ARM64_VEC_SME]; + + /* + * Userspace can still customize the vector lengths by writing + * KVM_REG_ARM64_SME_VLS. Allocation is deferred until + * kvm_arm_vcpu_finalize(), which freezes the configuration. + */ + set_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &vcpu->kvm->arch.flags); +} + /* - * Finalize vcpu's maximum SVE vector length, allocating - * vcpu->arch.sve_state as necessary. + * Finalize vcpu's maximum vector lengths, allocating + * vcpu->arch.sve_state and vcpu->arch.sme_state as necessary. */ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { - void *buf; + void *sve_state, *sme_state; unsigned int vl; - size_t reg_sz; int ret;
- vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; - /* * Responsibility for these properties is shared between * kvm_arm_init_sve(), kvm_vcpu_enable_sve() and * set_sve_vls(). Double-check here just to be sure: */ - if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() || - vl > VL_ARCH_MAX)) - return -EIO; + if (vcpu_has_sve(vcpu)) { + vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; + if (WARN_ON(!sve_vl_valid(vl) || + vl > sve_max_virtualisable_vl() || + vl > VL_ARCH_MAX)) + return -EIO; + }
- reg_sz = vcpu_sve_state_size(vcpu); - buf = kzalloc(reg_sz, GFP_KERNEL_ACCOUNT); - if (!buf) + /* Similarly for SME */ + if (vcpu_has_sme(vcpu)) { + vl = vcpu->arch.max_vl[ARM64_VEC_SME]; + if (WARN_ON(!sve_vl_valid(vl) || + vl > sme_max_virtualisable_vl() || + vl > VL_ARCH_MAX)) + return -EIO; + } + + sve_state = kzalloc(vcpu_sve_state_size(vcpu), GFP_KERNEL_ACCOUNT); + if (!sve_state) return -ENOMEM;
- ret = kvm_share_hyp(buf, buf + reg_sz); - if (ret) { - kfree(buf); - return ret; + ret = kvm_share_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); + if (ret) + goto err_sve_alloc; + + if (vcpu_has_sme(vcpu)) { + sme_state = kzalloc(vcpu_sme_state_size(vcpu), + GFP_KERNEL_ACCOUNT); + if (!sme_state) { + ret = -ENOMEM; + goto err_sve_map; + } + + ret = kvm_share_hyp(sme_state, + sme_state + vcpu_sme_state_size(vcpu)); + if (ret) + goto err_sme_alloc; + } else { + sme_state = NULL; } - - vcpu->arch.sve_state = buf; + + vcpu->arch.sve_state = sve_state; + vcpu->arch.sme_state = sme_state; vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; + +err_sme_alloc: + kfree(sme_state); +err_sve_map: + kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); +err_sve_alloc: + kfree(sve_state); + return ret; }
int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) @@ -153,12 +225,16 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) { void *sve_state = vcpu->arch.sve_state; + void *sme_state = vcpu->arch.sme_state;
kvm_unshare_hyp(vcpu, vcpu + 1); if (sve_state) kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); kfree(sve_state); free_page((unsigned long)vcpu->arch.ctxt.vncr_array); + if (sme_state) + kvm_unshare_hyp(sme_state, sme_state + vcpu_sme_state_size(vcpu)); + kfree(sme_state); kfree(vcpu->arch.vncr_tlb); kfree(vcpu->arch.ccsidr); } @@ -167,6 +243,8 @@ static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); + if (vcpu_has_sme(vcpu)) + memset(vcpu->arch.sme_state, 0, vcpu_sme_state_size(vcpu)); }
/** @@ -206,6 +284,8 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) + kvm_vcpu_enable_sme(vcpu); } else { kvm_vcpu_reset_vec(vcpu); } diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 37891580d05d..931dbe1f3c9b 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -956,6 +956,7 @@ struct kvm_enable_cap { #define KVM_CAP_ARM_EL2 240 #define KVM_CAP_ARM_EL2_E2H0 241 #define KVM_CAP_RISCV_MP_STATE_RESET 242 +#define KVM_CAP_ARM_SME 243
struct kvm_irq_routing_irqchip { __u32 irqchip;
SME adds a number of new system registers; update get-reg-list to check for them based on the visibility of SME.
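For illustration, a VMM can perform the same check as the selftest by scanning KVM_GET_REG_LIST for the new registers; a hedged sketch using the standard two-call sizing pattern, with SMCR_EL1's encoding taken from the table below (vcpu_lists_reg() is a hypothetical helper, not part of this series):

	#include <stdbool.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static bool vcpu_lists_reg(int vcpu_fd, __u64 id)
	{
		struct kvm_reg_list probe = { .n = 0 };
		struct kvm_reg_list *list;
		bool found = false;

		/* First call fails with E2BIG but reports the count */
		ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe);

		list = calloc(1, sizeof(*list) + probe.n * sizeof(__u64));
		if (!list)
			return false;

		list->n = probe.n;
		if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list) == 0)
			for (__u64 i = 0; i < list->n; i++)
				found |= (list->reg[i] == id);

		free(list);
		return found;
	}

	/* e.g. vcpu_lists_reg(fd, ARM64_SYS_REG(3, 0, 1, 2, 6)) for SMCR_EL1 */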
Signed-off-by: Mark Brown <broonie@kernel.org> --- tools/testing/selftests/kvm/arm64/get-reg-list.c | 32 +++++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/arm64/get-reg-list.c b/tools/testing/selftests/kvm/arm64/get-reg-list.c index d01798b6b3b4..920784aa2838 100644 --- a/tools/testing/selftests/kvm/arm64/get-reg-list.c +++ b/tools/testing/selftests/kvm/arm64/get-reg-list.c @@ -23,6 +23,18 @@ struct feature_id_reg { };
static struct feature_id_reg feat_id_regs[] = { + { + ARM64_SYS_REG(3, 0, 1, 2, 4), /* SMPRI_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 0, 1, 2, 6), /* SMCR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, { ARM64_SYS_REG(3, 0, 2, 0, 3), /* TCR2_EL1 */ ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ @@ -52,7 +64,25 @@ static struct feature_id_reg feat_id_regs[] = { ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ 16, 1 - } + }, + { + ARM64_SYS_REG(3, 1, 0, 0, 6), /* SMIDR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 3, 4, 2, 2), /* SVCR */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 3, 13, 0, 5), /* TPIDR2_EL0 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, };
bool filter_reg(__u64 reg)
Add coverage of the SME ID registers to set_id_regs: ID_AA64PFR1_EL1.SME becomes writable and we add ID_AA64SMFR0_EL1 and its subfields.
Signed-off-by: Mark Brown <broonie@kernel.org> --- tools/testing/selftests/kvm/arm64/set_id_regs.c | 29 +++++++++++++++++++++++-- 1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c index 8f422bfdfcb9..f41c4e7da3e8 100644 --- a/tools/testing/selftests/kvm/arm64/set_id_regs.c +++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c @@ -140,6 +140,7 @@ static const struct reg_ftr_bits ftr_id_aa64pfr0_el1[] = {
static const struct reg_ftr_bits ftr_id_aa64pfr1_el1[] = { REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, CSV2_frac, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, SME, 0), REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, SSBS, ID_AA64PFR1_EL1_SSBS_NI), REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, BT, 0), REG_FTR_END, @@ -187,6 +188,28 @@ static const struct reg_ftr_bits ftr_id_aa64mmfr2_el1[] = { REG_FTR_END, };
+static const struct reg_ftr_bits ftr_id_aa64smfr0_el1[] = { + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, FA64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, LUTv2, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SMEver, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I16I64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F64F64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I16I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, B16B16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F16F16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F8F16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F8F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I8I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F16F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, B16F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, BI32I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F32F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8FMA, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8DP4, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8DP2, 0), + REG_FTR_END, +}; + static const struct reg_ftr_bits ftr_id_aa64zfr0_el1[] = { REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F64MM, 0), REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F32MM, 0), @@ -217,6 +240,7 @@ static struct test_feature_reg test_regs[] = { TEST_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0_el1), TEST_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1_el1), TEST_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2_el1), + TEST_REG(SYS_ID_AA64SMFR0_EL1, ftr_id_aa64smfr0_el1), TEST_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0_el1), };
@@ -233,6 +257,7 @@ static void guest_code(void) GUEST_REG_SYNC(SYS_ID_AA64MMFR0_EL1); GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1); GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1); + GUEST_REG_SYNC(SYS_ID_AA64SMFR0_EL1); GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1); GUEST_REG_SYNC(SYS_CTR_EL0); GUEST_REG_SYNC(SYS_MIDR_EL1); @@ -774,8 +799,8 @@ int main(void) ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) + ARRAY_SIZE(ftr_id_aa64pfr1_el1) + ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) + ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + - ARRAY_SIZE(ftr_id_aa64zfr0_el1) - ARRAY_SIZE(test_regs) + 3 + - MPAM_IDREG_TEST + MTE_IDREG_TEST; + ARRAY_SIZE(ftr_id_aa64zfr0_el1) + ARRAY_SIZE(ftr_id_aa64smfr0_el1) - + ARRAY_SIZE(test_regs) + 3 + MPAM_IDREG_TEST + MTE_IDREG_TEST;
ksft_set_plan(test_cnt);