I've removed the RFC tag from this version of the series, but the items that I'm looking for feedback on remain the same:
- The userspace ABI, in particular:
  - The vector length used for the SVE registers, access to the SVE registers and access to ZA and (if available) ZT0 depending on the current state of PSTATE.{SM,ZA}.
  - The use of a single finalisation for both SVE and SME.
- The addition of control for enabling fine grained traps in a similar manner to FGU but without the UNDEF. I'm not clear if this is desired at all, and at present this requires symmetric read and write traps like FGU. That seemed like it might be desirable from an implementation point of view, but we already have one case where we enable an asymmetric trap (for ARM64_WORKAROUND_AMPERE_AC03_CPU_38) and being able to enable traps asymmetrically seems generally useful.
This series implements support for SME use in non-protected KVM guests. Much of this is very similar to SVE; the main additional challenge that SME presents is that it introduces a new vector length, similar to the SVE vector length, and two new controls which change the registers seen by guests:
- PSTATE.ZA enables the ZA matrix register and, if SME2 is supported, the ZT0 LUT register.
- PSTATE.SM enables streaming mode, a new floating point mode which uses the SVE register set with the separately configured SME vector length. In streaming mode implementation of the FFR register is optional.
It is also permitted to build systems which support SME without SVE; in this case no SVE registers or instructions are available when not in streaming mode. Further, there is no requirement that there be any overlap in the sets of vector lengths supported by SVE and SME in a system; having no overlap is expected to be a common situation in practical systems.
Since there is a new vector length to configure we introduce a new feature parallel to the existing SVE one, with a new pseudo register for the streaming mode vector length. Due to the overlap with SVE caused by streaming mode, rather than finalising SME as a separate feature we use the existing SVE finalisation to also finalise SME; a new define KVM_ARM_VCPU_VEC is provided to help make user code clearer. Finalising SVE and SME separately would introduce complications with register access, since finalising SVE makes the SVE registers writeable by userspace and doing multiple finalisations results in an error being reported. Dealing with a state where the SVE registers are writeable due to one of SVE or SME being finalised but may have their VL changed by the other being finalised seems like needless complexity with minimal practical utility; it seems clearer to express directly in the ABI that only one finalisation can be done.
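To make the resulting flow concrete, a VMM enabling both extensions would do something roughly like the sketch below. This is an illustration rather than code taken from the series: set_one_reg() stands in for a KVM_SET_ONE_REG wrapper, KVM_ARM_VCPU_SME and KVM_REG_ARM64_SME_VLS are the identifiers added later in the series, error handling is omitted and the init target is assumed to have been filled in from KVM_ARM_PREFERRED_TARGET as usual.

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static void init_and_finalize_vec(int vcpu_fd, struct kvm_vcpu_init *init,
                                      __u64 *sve_vls, __u64 *sme_vls)
    {
            int feature = KVM_ARM_VCPU_VEC;

            /* Enable both vector extensions before any vector length setup */
            init->features[0] |= 1 << KVM_ARM_VCPU_SVE;
            init->features[0] |= 1 << KVM_ARM_VCPU_SME;
            ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init);

            /* Optionally restrict the vector lengths for each extension */
            set_one_reg(vcpu_fd, KVM_REG_ARM64_SVE_VLS, sve_vls);
            set_one_reg(vcpu_fd, KVM_REG_ARM64_SME_VLS, sme_vls);

            /* A single finalisation covers both SVE and SME */
            ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
    }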
Access to the floating point registers follows the architecture:
- When both SVE and SME are present:
  - If PSTATE.SM == 0 the vector length used for the Z and P registers is the SVE vector length.
  - If PSTATE.SM == 1 the vector length used for the Z and P registers is the SME vector length.
- If only SME is present:
  - If PSTATE.SM == 0 the Z and P registers are inaccessible and the floating point state is accessed via the encodings for the V registers.
  - If PSTATE.SM == 1 the vector length used for the Z and P registers is the SME vector length.
- The SME specific ZA and ZT0 registers are only accessible if SVCR.ZA is 1.
The VMM must understand this; in particular, when loading state SVCR should be configured before other floating point state.
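Continuing the sketch above, when restoring guest state the ordering would look something like the following, with SVCR_ID and ZA_ID as placeholders for the relevant register encodings:

    /* Restore SVCR first so PSTATE.{SM,ZA} select the correct register view */
    set_one_reg(vcpu_fd, SVCR_ID, &svcr);

    /* The Z and P registers are then interpreted for the mode set above */
    for (i = 0; i < 32; i++)
            set_one_reg(vcpu_fd, KVM_REG_ARM64_SVE_ZREG(i, 0), &z_regs[i]);

    /* Finally ZA and, where SME2 is configured, ZT0 */
    set_one_reg(vcpu_fd, ZA_ID, &za);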
There are a large number of subfeatures for SME, most of which only offer additional instructions but some of which (SME2 and FA64) add architectural state. These are configured via the ID registers as per usual.
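For example, a VMM wanting to expose base SME but hide SME2 could, under the usual writable ID register scheme, do something like the following before running the vCPU. This is only a sketch: get_one_reg()/set_one_reg() are hypothetical wrappers around KVM_GET_ONE_REG/KVM_SET_ONE_REG, and ID_AA64PFR1_EL1.SME lives in bits [27:24].

    __u64 pfr1;
    __u64 id = ARM64_SYS_REG(3, 0, 0, 4, 1);        /* ID_AA64PFR1_EL1 */

    get_one_reg(vcpu_fd, id, &pfr1);
    pfr1 &= ~(0xfULL << 24);                        /* clear the SME field */
    pfr1 |= 0x1ULL << 24;                           /* SME = IMP, no SME2 */
    set_one_reg(vcpu_fd, id, &pfr1);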
The new KVM_ARM_VCPU_VEC feature and ZA and ZT0 registers have not been added to the get-reg-list selftest, the idea of supporting additional features there without restructuring the program to generate all possible feature combinations has been rejected. I will post a separate series which does that restructuring.
No support is present for protected guests; this is expected to be added separately, since the series is already rather large and pKVM in general offers a subset of features.
This series is based on Mark Rutland's fix series:
https://lore.kernel.org/r/20250210195226.1215254-1-mark.rutland@arm.com
Signed-off-by: Mark Brown <broonie@kernel.org>
---
Changes in v4:
- Rebase onto v6.14-rc2 and Mark Rutland's fixes.
- Expose SME to nested guests.
- Additional cleanups and test fixes following on from the rebase.
- Link to v3: https://lore.kernel.org/r/20241220-kvm-arm64-sme-v3-0-05b018c1ffeb@kernel.or...

Changes in v3:
- Rebase onto v6.12-rc2.
- Link to v2: https://lore.kernel.org/r/20231222-kvm-arm64-sme-v2-0-da226cb180bb@kernel.or...

Changes in v2:
- Rebase onto v6.7-rc3.
- Configure subfeatures based on host system only.
- Complete nVHE support.
- There was some snafu with sending v1 out, it didn't make it to the lists but in case it hit people's inboxes I'm sending as v2.
---
Mark Brown (27):
      arm64/fpsimd: Update FA64 and ZT0 enables when loading SME state
      arm64/fpsimd: Decide to save ZT0 and streaming mode FFR at bind time
      arm64/fpsimd: Check enable bit for FA64 when saving EFI state
      arm64/fpsimd: Determine maximum virtualisable SME vector length
      KVM: arm64: Introduce non-UNDEF FGT control
      KVM: arm64: Pay attention to FFR parameter in SVE save and load
      KVM: arm64: Pull ctxt_has_ helpers to start of sysreg-sr.h
      KVM: arm64: Move SVE state access macros after feature test macros
      KVM: arm64: Rename SVE finalization constants to be more general
      KVM: arm64: Document the KVM ABI for SME
      KVM: arm64: Define internal features for SME
      KVM: arm64: Rename sve_state_reg_region
      KVM: arm64: Store vector lengths in an array
      KVM: arm64: Implement SME vector length configuration
      KVM: arm64: Support SME control registers
      KVM: arm64: Support TPIDR2_EL0
      KVM: arm64: Support SME identification registers for guests
      KVM: arm64: Support SME priority registers
      KVM: arm64: Provide assembly for SME state restore
      KVM: arm64: Support userspace access to streaming mode Z and P registers
      KVM: arm64: Expose SME specific state to userspace
      KVM: arm64: Context switch SME state for normal guests
      KVM: arm64: Handle SME exceptions
      KVM: arm64: Expose SME to nested guests
      KVM: arm64: Provide interface for configuring and enabling SME for guests
      KVM: arm64: selftests: Add SME system registers to get-reg-list
      KVM: arm64: selftests: Add SME to set_id_regs test
 Documentation/virt/kvm/api.rst                   | 117 +++++++---
 arch/arm64/include/asm/fpsimd.h                  |  22 ++
 arch/arm64/include/asm/kvm_emulate.h             |  12 +-
 arch/arm64/include/asm/kvm_host.h                | 143 ++++++++++---
 arch/arm64/include/asm/kvm_hyp.h                 |   4 +-
 arch/arm64/include/asm/kvm_pkvm.h                |   2 +-
 arch/arm64/include/asm/vncr_mapping.h            |   2 +
 arch/arm64/include/uapi/asm/kvm.h                |  33 +++
 arch/arm64/kernel/cpufeature.c                   |   2 -
 arch/arm64/kernel/fpsimd.c                       |  86 ++++----
 arch/arm64/kvm/arm.c                             |  10 +
 arch/arm64/kvm/fpsimd.c                          |  19 +-
 arch/arm64/kvm/guest.c                           | 262 ++++++++++++++++++++---
 arch/arm64/kvm/handle_exit.c                     |  14 ++
 arch/arm64/kvm/hyp/fpsimd.S                      |  18 +-
 arch/arm64/kvm/hyp/include/hyp/switch.h          | 141 ++++++++++--
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h       |  77 ++++---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c               |   9 +-
 arch/arm64/kvm/hyp/nvhe/pkvm.c                   |   4 +-
 arch/arm64/kvm/hyp/nvhe/switch.c                 |  11 +-
 arch/arm64/kvm/hyp/vhe/switch.c                  |  21 +-
 arch/arm64/kvm/nested.c                          |   3 +-
 arch/arm64/kvm/reset.c                           | 154 +++++++++----
 arch/arm64/kvm/sys_regs.c                        | 118 +++++++++-
 include/uapi/linux/kvm.h                         |   1 +
 tools/testing/selftests/kvm/arm64/get-reg-list.c |  32 ++-
 tools/testing/selftests/kvm/arm64/set_id_regs.c  |  29 ++-
 27 files changed, 1078 insertions(+), 268 deletions(-)
---
base-commit: 6a25088d268ce4c2163142ead7fe1975bb687cb7
change-id: 20230301-kvm-arm64-sme-06a1246d3636
prerequisite-message-id: 20250210195226.1215254-1-mark.rutland@arm.com
prerequisite-patch-id: 615ab9c526e9f6242bd5b8d7188e96fb66fb0e64
prerequisite-patch-id: e5c4f2ff9c9ba01a0f659dd1e8bf6396de46e197
prerequisite-patch-id: 0794d28526755180847841c045a6b7cb3d800c16
prerequisite-patch-id: 079d3a8a680f793b593268eeba000acc55a0b4ec
prerequisite-patch-id: a3428f67a5ee49f2b01208f30b57984d5409d8f5
prerequisite-patch-id: 26393e401e9eae7cff5bb1d3bdb18b4e29ffc8fe
prerequisite-patch-id: 64f9819f751da4a1c73b9d1b292ccee6afda89f6
prerequisite-patch-id: 0355baaa8ceb31dc85d015b56084c33416f78041
Best regards,
Currently we enable EL0 and EL1 access to FA64 and ZT0 at boot and leave them enabled throughout the runtime of the system. When we add KVM support we will need to make this configuration dynamic, since these features may be disabled for some KVM guests. Since the host kernel saves the floating point state for non-protected guests, and we wish to avoid KVM having to needlessly reload the floating point state on guest reentry, let's move the configuration of these enables to the floating point state reload.
We provide a helper which does the configuration as part of a read/modify/write operation along with the configuration of the task VL, then update the floating point state load and SME access trap to use it. We also remove the setting of the enable bits from the CPU feature identification and resume paths. There will be a small overhead from setting the enables one at a time but this should be negligible in the context of the state load or access trap.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/fpsimd.h | 12 +++++++++++
 arch/arm64/kernel/cpufeature.c  |  2 --
 arch/arm64/kernel/fpsimd.c      | 44 ++++++++++++-----------------------------
 3 files changed, 25 insertions(+), 33 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index f2a84efc361858d4deda99faf1967cc7cac386c1..95355892d47b3ec1c77a3ab19ccad0d7f9a8d621 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -429,6 +429,18 @@ static inline size_t sme_state_size(struct task_struct const *task)
#endif /* ! CONFIG_ARM64_SME */
+#define sme_cond_update_smcr(vl, fa64, zt0, reg) \ + do { \ + u64 __old = read_sysreg_s((reg)); \ + u64 __new = vl; \ + if (fa64) \ + __new |= SMCR_ELx_FA64; \ + if (zt0) \ + __new |= SMCR_ELx_EZT0; \ + if (__old != __new) \ + write_sysreg_s(__new, (reg)); \ + } while (0) + /* For use by EFI runtime services calls only */ extern void __efi_fpsimd_begin(void); extern void __efi_fpsimd_end(void); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index f0910f20fbf8c18fbeb63bcee18abf13371b1d5e..7ba2a554243fccfb94d34307821101bfcf6c07bb 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2857,7 +2857,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .type = ARM64_CPUCAP_SYSTEM_FEATURE, .capability = ARM64_SME_FA64, .matches = has_cpuid_feature, - .cpu_enable = cpu_enable_fa64, ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP) }, { @@ -2865,7 +2864,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .type = ARM64_CPUCAP_SYSTEM_FEATURE, .capability = ARM64_SME2, .matches = has_cpuid_feature, - .cpu_enable = cpu_enable_sme2, ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2) }, #endif /* CONFIG_ARM64_SME */ diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 8370d55f035334edf9d4f01fb33e1054bddadf71..e52ec18f0fcab0e34123b7a115e886fca3fae210 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -399,9 +399,15 @@ static void task_fpsimd_load(void) if (system_supports_sme()) { unsigned long sme_vl = task_get_sme_vl(current);
- /* Ensure VL is set up for restoring data */ + /* + * Ensure VL is set up for restoring data. KVM might + * disable subfeatures so we reset them each time. + */ if (test_thread_flag(TIF_SME)) - sme_set_vq(sve_vq_from_vl(sme_vl) - 1); + sme_cond_update_smcr(sve_vq_from_vl(sme_vl) - 1, + system_supports_fa64(), + system_supports_sme2(), + SYS_SMCR_EL1);
write_sysreg_s(current->thread.svcr, SYS_SVCR);
@@ -1267,26 +1273,6 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p) isb(); }
-void cpu_enable_sme2(const struct arm64_cpu_capabilities *__always_unused p) -{ - /* This must be enabled after SME */ - BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME); - - /* Allow use of ZT0 */ - write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_EZT0_MASK, - SYS_SMCR_EL1); -} - -void cpu_enable_fa64(const struct arm64_cpu_capabilities *__always_unused p) -{ - /* This must be enabled after SME */ - BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME); - - /* Allow use of FA64 */ - write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK, - SYS_SMCR_EL1); -} - void __init sme_setup(void) { struct vl_info *info = &vl_info[ARM64_VEC_SME]; @@ -1330,17 +1316,9 @@ void __init sme_setup(void)
void sme_suspend_exit(void) { - u64 smcr = 0; - if (!system_supports_sme()) return;
- if (system_supports_fa64()) - smcr |= SMCR_ELx_FA64; - if (system_supports_sme2()) - smcr |= SMCR_ELx_EZT0; - - write_sysreg_s(smcr, SYS_SMCR_EL1); write_sysreg_s(0, SYS_SMPRI_EL1); }
@@ -1457,7 +1435,11 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs) if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) { unsigned long vq_minus_one = sve_vq_from_vl(task_get_sme_vl(current)) - 1; - sme_set_vq(vq_minus_one); + + sme_cond_update_smcr(vq_minus_one, + system_supports_fa64(), + system_supports_sme2(), + SYS_SMCR_EL1);
fpsimd_bind_task_to_cpu(); }
Some parts of the SME state are optional, enabled by additional features on top of the base FEAT_SME and controlled with enable bits in SMCR_ELx: the FFR register (FEAT_SME_FA64) and ZT0 (FEAT_SME2). We unconditionally enable these features for the host, but for KVM we will allow the feature set exposed to guests to be restricted by the VMM.
We defer saving of guest floating point state for non-protected guests to the host kernel. We also want to avoid having to reconfigure the guest floating point state if nothing used the floating point state while running the host. If the guest was running with the optional features disabled then traps will be enabled for them so the host kernel will need to skip accessing that state when saving state for the guest.
Support this by moving the decision about saving this state to the point where we bind floating point state to the CPU, adding a new variable to the cpu_fp_state which uses the enable bits in SMCR_ELx to flag which features are enabled.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/fpsimd.h |  1 +
 arch/arm64/kernel/fpsimd.c      | 10 ++++++++--
 arch/arm64/kvm/fpsimd.c         |  1 +
 3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 95355892d47b3ec1c77a3ab19ccad0d7f9a8d621..144cc805bfea112341b89c9c6028cf4b2a201c6c 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -88,6 +88,7 @@ struct cpu_fp_state { void *sme_state; u64 *svcr; u64 *fpmr; + u64 sme_features; unsigned int sve_vl; unsigned int sme_vl; enum fp_type *fp_type; diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index e52ec18f0fcab0e34123b7a115e886fca3fae210..446a379d87539bb37a9d4eb7466a73d8819afc56 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -478,12 +478,12 @@ static void fpsimd_save_user_state(void)
if (*svcr & SVCR_ZA_MASK) sme_save_state(last->sme_state, - system_supports_sme2()); + last->sme_features & SMCR_ELx_EZT0);
/* If we are in streaming mode override regular SVE. */ if (*svcr & SVCR_SM_MASK) { save_sve_regs = true; - save_ffr = system_supports_fa64(); + save_ffr = last->sme_features & SMCR_ELx_FA64; vl = last->sme_vl; } } @@ -1697,6 +1697,12 @@ static void fpsimd_bind_task_to_cpu(void) last->to_save = FP_STATE_CURRENT; current->thread.fpsimd_cpu = smp_processor_id();
+ last->sme_features = 0; + if (system_supports_fa64()) + last->sme_features |= SMCR_ELx_FA64; + if (system_supports_sme2()) + last->sme_features |= SMCR_ELx_EZT0; + /* * Toggle SVE and SME trapping for userspace if needed, these * are serialsied by ret_to_user(). diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 3cbb999419af7bb31ce9cec2baafcad00491610a..f324990a400293976fd505442f9eaccc6d01ed8f 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -111,6 +111,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) fp_state.svcr = &__vcpu_sys_reg(vcpu, SVCR); fp_state.fpmr = &__vcpu_sys_reg(vcpu, FPMR); fp_state.fp_type = &vcpu->arch.fp_type; + fp_state.sme_features = 0;
if (vcpu_has_sve(vcpu)) fp_state.to_save = FP_STATE_SVE;
Currently when deciding if we need to save FFR when in streaming mode prior to EFI calls we check if FA64 is supported by the system. Since KVM guest support will mean that FA64 might be enabled and disabled at runtime, switch to checking if FA64 is enabled in SMCR_EL1 instead.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/fpsimd.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 446a379d87539bb37a9d4eb7466a73d8819afc56..a97af9daea472cc57bafc0564fdfe4e327924db1 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1955,6 +1955,11 @@ static DEFINE_PER_CPU(bool, efi_sm_state); * either doing something wrong or you need to propose some refactoring. */
+static bool fa64_enabled(void) +{ + return read_sysreg_s(SYS_SMCR_EL1) & SMCR_ELx_FA64; +} + /* * __efi_fpsimd_begin(): prepare FPSIMD for making an EFI runtime services call */ @@ -1989,7 +1994,7 @@ void __efi_fpsimd_begin(void) * Unless we have FA64 FFR does not * exist in streaming mode. */ - if (!system_supports_fa64()) + if (!fa64_enabled()) ffr = !(svcr & SVCR_SM_MASK); }
@@ -2040,7 +2045,7 @@ void __efi_fpsimd_end(void) * Unless we have FA64 FFR does not * exist in streaming mode. */ - if (!system_supports_fa64()) + if (!fa64_enabled()) ffr = false; } }
As with SVE we can only virtualise SME vector lengths that are supported by all CPUs in the system, so implement similar checks to those for SVE. Since, unlike SVE, there are no specific vector lengths that are architecturally required the handling is subtly different: a system where no vector length can be virtualised is reported with a maximum virtualisable vector length of -1.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/fpsimd.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index a97af9daea472cc57bafc0564fdfe4e327924db1..e3ac522977ce68b2330f7be1f4b5a9e7dfe6fd09 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -1276,7 +1276,8 @@ void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p) void __init sme_setup(void) { struct vl_info *info = &vl_info[ARM64_VEC_SME]; - int min_bit, max_bit; + DECLARE_BITMAP(tmp_map, SVE_VQ_MAX); + int min_bit, max_bit, b;
if (!system_supports_sme()) return; @@ -1306,12 +1307,32 @@ void __init sme_setup(void) */ set_sme_default_vl(find_supported_vector_length(ARM64_VEC_SME, 32));
+ bitmap_andnot(tmp_map, info->vq_partial_map, info->vq_map, + SVE_VQ_MAX); + + b = find_last_bit(tmp_map, SVE_VQ_MAX); + if (b >= SVE_VQ_MAX) + /* All VLs virtualisable */ + info->max_virtualisable_vl = SVE_VQ_MAX; + else if (b == SVE_VQ_MAX - 1) + /* No virtualisable VLs */ + info->max_virtualisable_vl = -1; + else + info->max_virtualisable_vl = sve_vl_from_vq(__bit_to_vq(b + 1)); + + if (info->max_virtualisable_vl > info->max_vl) + info->max_virtualisable_vl = info->max_vl; + pr_info("SME: minimum available vector length %u bytes per vector\n", info->min_vl); pr_info("SME: maximum available vector length %u bytes per vector\n", info->max_vl); pr_info("SME: default vector length %u bytes per vector\n", get_sme_default_vl()); + + /* KVM decides whether to support mismatched systems. Just warn here: */ + if (info->max_virtualisable_vl < info->max_vl) + pr_warn("SME: unvirtualisable vector lengths present\n"); }
void sme_suspend_exit(void)
We have support for determining a set of fine grained traps to enable for the guest, but it is tied to the support for injecting UNDEFs for undefined features. This means that we can't use the mechanism for system registers which should be present but need emulation, such as SMPRI_EL1: it should be accessible when SME is present, but if SME priority support is absent SMPRI_EL1.Priority should be RAZ.
Add an additional set of fine grained traps, fgt, mirroring the existing fgu array. We use the same format as for FGU, where the bit for the trap is always set in the array. This makes it clear what is being explicitly managed and keeps the code consistent.
We do not convert the handling of ARM64_WORKAROUND_AMPERE_AC03_CPU_38 to this mechanism since it only enables a write trap, while the existing UNDEF handling was implemented assuming that read and write trap enablement would be shared (this being the overwhelmingly common case).
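To illustrate the intended use by later patches, enabling an emulation trap for SMPRI_EL1 would then be a matter of setting the corresponding bit in the new array, much as is done for fgu today; the group and field names below follow the existing FGT definitions and are illustrative only:

    /* Trap SMPRI_EL1 accesses so they can be emulated rather than UNDEFed */
    kvm->arch.fgt[HFGxTR_GROUP] |= HFGxTR_EL2_nSMPRI_EL1_MASK;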
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       | 6 ++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 7 ++++---
 2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index c77acc99045764cf5230f84ef5c9f8432c7d983e..64f2885cd4a87e9807fbbbbe6de937ed304ab917 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -289,6 +289,12 @@ struct kvm_arch { */ u64 fgu[__NR_FGT_GROUP_IDS__];
+ /* + * Additional FGTs to enable for the guests, eg. for emulated + * registers, + */ + u64 fgt[__NR_FGT_GROUP_IDS__]; + /* * Stage 2 paging state for VMs with nested S2 using a virtual * VMID. diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index f5e882a358e2d6e6805d112ed646a112455012e8..c24e418cec101433f7484eb4f7af788c474eda3a 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -98,9 +98,9 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) id; \ })
-#define compute_undef_clr_set(vcpu, kvm, reg, clr, set) \ +#define compute_trap_clr_set(vcpu, kvm, trap, reg, clr, set) \ do { \ - u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)]; \ + u64 hfg = kvm->arch.trap[reg_to_fgt_group_id(reg)]; \ set |= hfg & __ ## reg ## _MASK; \ clr |= hfg & __ ## reg ## _nMASK; \ } while(0) @@ -113,7 +113,8 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu) if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) \ compute_clr_set(vcpu, reg, c, s); \ \ - compute_undef_clr_set(vcpu, kvm, reg, c, s); \ + compute_trap_clr_set(vcpu, kvm, fgu, reg, c, s); \ + compute_trap_clr_set(vcpu, kvm, fgt, reg, c, s); \ \ s |= set; \ c |= clr; \
The hypervisor copies of the SVE save and load functions are prototyped with a third argument specifying whether FFR should be accessed, but the assembly functions overwrite whatever is supplied and unconditionally access FFR. Remove this and use the supplied parameter.
This has no effect currently since FFR is always present for SVE but will be important for SME.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kvm/hyp/fpsimd.S | 2 --
 1 file changed, 2 deletions(-)
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S
index e950875e31cee4df58d041519b7584356463c91b..6e16cbfc5df27e63732655dffea60d7039c37332 100644
--- a/arch/arm64/kvm/hyp/fpsimd.S
+++ b/arch/arm64/kvm/hyp/fpsimd.S
@@ -21,13 +21,11 @@ SYM_FUNC_START(__fpsimd_restore_state)
 SYM_FUNC_END(__fpsimd_restore_state)
 
 SYM_FUNC_START(__sve_restore_state)
-	mov	x2, #1
 	sve_load 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_restore_state)
 
 SYM_FUNC_START(__sve_save_state)
-	mov	x2, #1
 	sve_save 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_save_state)
Rather than add earlier prototypes of specific ctxt_has_ helpers let's just pull all their definitions to the top of sysreg-sr.h so they're all available to all the individual save/restore functions.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 62 +++++++++++++++---------------
 1 file changed, 30 insertions(+), 32 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index 76ff095c6b6ebf336aab3018ec1f384a5f19949e..7a4ad8a4c3727e4628b375cbefc5e0d3533687de 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -16,8 +16,6 @@ #include <asm/kvm_hyp.h> #include <asm/kvm_mmu.h>
-static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt); - static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) { struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu; @@ -28,36 +26,6 @@ static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) return vcpu; }
-static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) -{ - return host_data_ptr(host_ctxt) != ctxt; -} - -static inline u64 *ctxt_mdscr_el1(struct kvm_cpu_context *ctxt) -{ - struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt); - - if (ctxt_is_guest(ctxt) && kvm_host_owns_debug_regs(vcpu)) - return &vcpu->arch.external_mdscr_el1; - - return &ctxt_sys_reg(ctxt, MDSCR_EL1); -} - -static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) -{ - *ctxt_mdscr_el1(ctxt) = read_sysreg(mdscr_el1); - - // POR_EL0 can affect uaccess, so must be saved/restored early. - if (ctxt_has_s1poe(ctxt)) - ctxt_sys_reg(ctxt, POR_EL0) = read_sysreg_s(SYS_POR_EL0); -} - -static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) -{ - ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0); - ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); -} - static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt) { struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt); @@ -98,6 +66,36 @@ static inline bool ctxt_has_s1poe(struct kvm_cpu_context *ctxt) return kvm_has_s1poe(kern_hyp_va(vcpu->kvm)); }
+static inline bool ctxt_is_guest(struct kvm_cpu_context *ctxt) +{ + return host_data_ptr(host_ctxt) != ctxt; +} + +static inline u64 *ctxt_mdscr_el1(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt); + + if (ctxt_is_guest(ctxt) && kvm_host_owns_debug_regs(vcpu)) + return &vcpu->arch.external_mdscr_el1; + + return &ctxt_sys_reg(ctxt, MDSCR_EL1); +} + +static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) +{ + *ctxt_mdscr_el1(ctxt) = read_sysreg(mdscr_el1); + + // POR_EL0 can affect uaccess, so must be saved/restored early. + if (ctxt_has_s1poe(ctxt)) + ctxt_sys_reg(ctxt, POR_EL0) = read_sysreg_s(SYS_POR_EL0); +} + +static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) +{ + ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0); + ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); +} + static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, SCTLR_EL1) = read_sysreg_el1(SYS_SCTLR);
In preparation for SME support move the macros used to access SVE state after the feature test macros; we will need to test for SME subfeatures to determine the size of the SME state.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 46 +++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 64f2885cd4a87e9807fbbbbe6de937ed304ab917..cef03309fbe6b7f44ab30357ceb037e7e342fb0c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -927,29 +927,6 @@ struct kvm_vcpu_arch { #define IN_WFI __vcpu_single_flag(sflags, BIT(6))
-/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ -#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) - -#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) - -#define vcpu_sve_zcr_elx(vcpu) \ - (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) - -#define vcpu_sve_state_size(vcpu) ({ \ - size_t __size_ret; \ - unsigned int __vcpu_vq; \ - \ - if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) { \ - __size_ret = 0; \ - } else { \ - __vcpu_vq = vcpu_sve_max_vq(vcpu); \ - __size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq); \ - } \ - \ - __size_ret; \ -}) - #define KVM_GUESTDBG_VALID_MASK (KVM_GUESTDBG_ENABLE | \ KVM_GUESTDBG_USE_SW_BP | \ KVM_GUESTDBG_USE_HW | \ @@ -985,6 +962,29 @@ struct kvm_vcpu_arch {
#define vcpu_gp_regs(v) (&(v)->arch.ctxt.regs)
+/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ +#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ + sve_ffr_offset((vcpu)->arch.sve_max_vl)) + +#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl) + +#define vcpu_sve_zcr_elx(vcpu) \ + (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) + +#define vcpu_sve_state_size(vcpu) ({ \ + size_t __size_ret; \ + unsigned int __vcpu_vq; \ + \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) { \ + __size_ret = 0; \ + } else { \ + __vcpu_vq = vcpu_sve_max_vq(vcpu); \ + __size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq); \ + } \ + \ + __size_ret; \ +}) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently
Due to the overlap between SVE and SME vector length configuration created by streaming mode SVE we will finalize both at once. Rename the existing finalization to use _VEC (vector) naming to avoid confusion.
Since this includes the userspace API, we create an alias KVM_ARM_VCPU_VEC for the existing KVM_ARM_VCPU_SVE capability; existing code which does not enable SME will be unaffected and SME only code will not need to use SVE constants.
No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  6 ++++--
 arch/arm64/include/uapi/asm/kvm.h |  6 ++++++
 arch/arm64/kvm/guest.c            | 10 +++++-----
 arch/arm64/kvm/hyp/nvhe/pkvm.c    |  2 +-
 arch/arm64/kvm/reset.c            | 20 ++++++++++----------
 5 files changed, 26 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index cef03309fbe6b7f44ab30357ceb037e7e342fb0c..f12b13c3886a2b90d0f1e95f9d59f374d3c87398 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -874,7 +874,7 @@ struct kvm_vcpu_arch { /* KVM_ARM_VCPU_INIT completed */ #define VCPU_INITIALIZED __vcpu_single_flag(cflags, BIT(0)) /* SVE config completed */ -#define VCPU_SVE_FINALIZED __vcpu_single_flag(cflags, BIT(1)) +#define VCPU_VEC_FINALIZED __vcpu_single_flag(cflags, BIT(1))
/* Exception pending */ #define PENDING_EXCEPTION __vcpu_single_flag(iflags, BIT(0)) @@ -941,6 +941,8 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif
+#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) + #ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ ((cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH) || \ @@ -1423,7 +1425,7 @@ struct kvm *kvm_arch_alloc_vm(void); int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature); bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
-#define kvm_arm_vcpu_sve_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_SVE_FINALIZED) +#define kvm_arm_vcpu_vec_finalized(vcpu) vcpu_get_flag(vcpu, VCPU_VEC_FINALIZED)
#define kvm_has_mte(kvm) \ (system_supports_mte() && \ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 568bf858f3198e2a6f651eb8ae793b54fd49e67a..ea5aaa4b1cfe82862883678bbe47373d53c005f6 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -106,6 +106,12 @@ struct kvm_regs { #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */
+/* + * An alias for _SVE since we finalize VL configuration for both SVE and SME + * simultaneously. + */ +#define KVM_ARM_VCPU_VEC KVM_ARM_VCPU_SVE + struct kvm_vcpu_init { __u32 target; __u32 features[7]; diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 2196979a24a325311d6111404e4d089287c41bfe..73e714133bb647c1d50fc67d5c8bf0569520a537 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -342,7 +342,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT;
- if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM; /* too late! */
if (WARN_ON(vcpu->arch.sve_state)) @@ -497,7 +497,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret;
- if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, @@ -523,7 +523,7 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (ret) return ret;
- if (!kvm_arm_vcpu_sve_finalized(vcpu)) + if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, @@ -657,7 +657,7 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) return 0;
/* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu));
return slices * (SVE_NUM_PREGS + SVE_NUM_ZREGS + 1 /* FFR */) + 1; /* KVM_REG_ARM64_SVE_VLS */ @@ -675,7 +675,7 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu, return 0;
/* Policed by KVM_GET_REG_LIST: */ - WARN_ON(!kvm_arm_vcpu_sve_finalized(vcpu)); + WARN_ON(!kvm_arm_vcpu_vec_finalized(vcpu));
/* * Enumerate this first, so that userspace can save/restore in diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 3927fe52a3dde8f59ddb996d674dd252630613a3..5896e581c1fac943cf372f1df24311ef43529d62 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -381,7 +381,7 @@ static void pkvm_vcpu_init_sve(struct pkvm_hyp_vcpu *hyp_vcpu, struct kvm_vcpu * struct kvm_vcpu *vcpu = &hyp_vcpu->vcpu;
if (!vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) - vcpu_clear_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_clear_flag(vcpu, VCPU_VEC_FINALIZED); }
static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu, diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 803e11b0dc8f5eb74b07b0ad745b0c4f666713d5..ce726b1d4e8e90cfd4459a6cb9c67b8805424e22 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -92,7 +92,7 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) * Finalize vcpu's maximum SVE vector length, allocating * vcpu->arch.sve_state as necessary. */ -static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) +static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { void *buf; unsigned int vl; @@ -122,21 +122,21 @@ static int kvm_vcpu_finalize_sve(struct kvm_vcpu *vcpu) } vcpu->arch.sve_state = buf; - vcpu_set_flag(vcpu, VCPU_SVE_FINALIZED); + vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; }
int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) { switch (feature) { - case KVM_ARM_VCPU_SVE: - if (!vcpu_has_sve(vcpu)) + case KVM_ARM_VCPU_VEC: + if (!vcpu_has_vec(vcpu)) return -EINVAL;
- if (kvm_arm_vcpu_sve_finalized(vcpu)) + if (kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
- return kvm_vcpu_finalize_sve(vcpu); + return kvm_vcpu_finalize_vec(vcpu); }
return -EINVAL; @@ -144,7 +144,7 @@ int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature)
bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) { - if (vcpu_has_sve(vcpu) && !kvm_arm_vcpu_sve_finalized(vcpu)) + if (vcpu_has_vec(vcpu) && !kvm_arm_vcpu_vec_finalized(vcpu)) return false;
return true; @@ -161,7 +161,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) kfree(vcpu->arch.ccsidr); }
-static void kvm_vcpu_reset_sve(struct kvm_vcpu *vcpu) +static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); @@ -204,11 +204,11 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (loaded) kvm_arch_vcpu_put(vcpu);
- if (!kvm_arm_vcpu_sve_finalized(vcpu)) { + if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); } else { - kvm_vcpu_reset_sve(vcpu); + kvm_vcpu_reset_vec(vcpu); }
if (vcpu_el1_is_32bit(vcpu))
SME, the Scalable Matrix Extension, is an arm64 extension which adds support for matrix operations, with core concepts patterned after SVE.
SVE introduced some complication in the ABI since it adds new vector floating point registers with a runtime configurable size, the size being controlled by a parameter called the vector length (VL). To provide control of this to VMMs we offer two phase configuration of SVE: SVE must first be enabled for the vCPU with KVM_ARM_VCPU_INIT(KVM_ARM_VCPU_SVE), after which the vector length may be configured but the configurably sized floating point registers remain inaccessible until finalized with a call to KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE).
SME introduces an additional, independently configured vector length which, as well as controlling the size of the new ZA register, also provides an alternative view of the configurably sized SVE registers (known as streaming mode), with the guest able to switch between the two modes as it pleases. There is also a fixed size register, ZT0, introduced in SME2. Independently of streaming mode the guest may also enable and disable ZA and (where SME2 is available) ZT0 dynamically. These modes are controlled via the system register SVCR.
We handle the configuration of the vector length for SME in a similar manner to SVE, requiring initialization and finalization of the feature with a pseudo register controlling the available SME vector lengths as for SVE. Further, if the guest has both SVE and SME then finalizing one prevents further configuration of the vector length for the other.
Where both SVE and SME are configured for the guest we always present the SVE registers to userspace as having the larger of the configured maximum SVE and SME vector lengths, discarding extra data at load time and zero padding on read as required if the active vector length is lower. Note that this means that enabling or disabling streaming mode while the guest is stopped will not zero Zn or Pn as it will when the guest is running, but it does allow SVCR, Zn and Pn to be read and written in any order.
Userspace access to ZA and (if configured) ZT0 is always available; they will be zeroed when the guest runs if disabled in SVCR, and the value read will be zero if the guest stops with them disabled. This mirrors the behaviour of the architecture, where enabling access causes ZA and ZT0 to be zeroed, while allowing accesses to SVCR, ZA and ZT0 to be performed in any order.
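As an illustration of the userspace view, once the vcpu is finalized a VMM could save the whole ZA array row by row along the lines of the sketch below. This is an assumption-laden sketch: get_one_reg() is a hypothetical KVM_GET_ONE_REG wrapper, KVM_REG_ARM64_SME_ZA() is assumed to take (row, slice) arguments mirroring KVM_REG_ARM64_SVE_ZREG(), and svl_bytes is the guest's streaming vector length in bytes.

    /* ZA has one horizontal slice per byte of SVL, each svl_bytes long */
    __u8 za[svl_bytes][svl_bytes];

    for (n = 0; n < svl_bytes; n++)
            get_one_reg(vcpu_fd, KVM_REG_ARM64_SME_ZA(n, 0), za[n]);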
If SME is enabled for a guest without SVE then the FPSIMD Vn registers must be accessed via the low 128 bits of the SVE Zn registers, as is the case when SVE is enabled. This is not ideal but allows access to SVCR and the registers in any order without duplication or ambiguity about which values should take effect. This may be an issue for VMMs that are unaware of SME on systems that implement it without SVE: if they let SME be enabled the lack of access to Vn may surprise them, but such systems seem likely to be an unusual implementation choice.
For SME unaware VMMs on systems with both SVE and SME support the SVE registers may be larger than expected; this should be less disruptive than on a system without SVE as they will simply ignore the high bits of the registers.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 Documentation/virt/kvm/api.rst | 117 +++++++++++++++++++++++++++++------------
 1 file changed, 82 insertions(+), 35 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 2b52eb77e29cba146b64489cfb1a267e7bb9b597..8c2cadfc079fc136dbae821121ec4648cfe0350c 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -406,7 +406,7 @@ Errors: instructions from device memory (arm64) ENOSYS data abort outside memslots with no syndrome info and KVM_CAP_ARM_NISV_TO_USER not enabled (arm64) - EPERM SVE feature set but not finalized (arm64) + EPERM SVE or SME feature set but not finalized (arm64) ======= ==============================================================
This ioctl is used to run a guest virtual cpu. While there are no @@ -2586,12 +2586,12 @@ Specifically: 0x6020 0000 0010 00d5 FPCR 32 fp_regs.fpcr ======================= ========= ===== =======================================
-.. [1] These encodings are not accepted for SVE-enabled vcpus. See - :ref:`KVM_ARM_VCPU_INIT`. +.. [1] These encodings are not accepted for SVE enabled vcpus. See + :ref:`KVM_ARM_VCPU_INIT`. They are also not accepted when SME is + enabled without SVE and the vcpu is in streaming mode.
The equivalent register content can be accessed via bits [127:0] of - the corresponding SVE Zn registers instead for vcpus that have SVE - enabled (see below). + the corresponding SVE Zn registers in these cases (see below).
arm64 CCSIDR registers are demultiplexed by CSSELR value::
@@ -2622,24 +2622,34 @@ arm64 SVE registers have the following bit patterns:: 0x6050 0000 0015 060 slice:5 FFR bits[256*slice + 255 : 256*slice] 0x6060 0000 0015 ffff KVM_REG_ARM64_SVE_VLS pseudo-register
-Access to register IDs where 2048 * slice >= 128 * max_vq will fail with -ENOENT. max_vq is the vcpu's maximum supported vector length in 128-bit -quadwords: see [2]_ below. +arm64 SME registers have the following bit patterns: + + 0x6080 0000 0017 00 <n:5> slice:5 ZA.H[n] bits[2048*slice + 2047 : 2048*slice] + 0x60XX 0000 0017 0100 ZT0 + 0x6060 0000 0017 fffe KVM_REG_ARM64_SME_VLS pseudo-register + +Access to Z, P or ZA register IDs where 2048 * slice >= 128 * max_vq +will fail with ENOENT. max_vq is the vcpu's maximum supported vector +length in 128-bit quadwords: see [2]_ below. + +Access to the ZA and ZT0 registers is only available if SVCR.ZA is set +to 1.
These registers are only accessible on vcpus for which SVE is enabled. See KVM_ARM_VCPU_INIT for details.
-In addition, except for KVM_REG_ARM64_SVE_VLS, these registers are not -accessible until the vcpu's SVE configuration has been finalized -using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). See KVM_ARM_VCPU_INIT -and KVM_ARM_VCPU_FINALIZE for more information about this procedure. +In addition, except for KVM_REG_ARM64_SVE_VLS and +KVM_REG_ARM64_SME_VLS, these registers are not accessible until the +vcpu's SVE and SME configuration has been finalized using +KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC). See KVM_ARM_VCPU_INIT and +KVM_ARM_VCPU_FINALIZE for more information about this procedure.
-KVM_REG_ARM64_SVE_VLS is a pseudo-register that allows the set of vector -lengths supported by the vcpu to be discovered and configured by -userspace. When transferred to or from user memory via KVM_GET_ONE_REG -or KVM_SET_ONE_REG, the value of this register is of type -__u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the set of vector lengths as -follows:: +KVM_REG_ARM64_SVE_VLS and KVM_ARM64_VCPU_SME_VLS are pseudo-registers +that allows the set of vector lengths supported by the vcpu to be +discovered and configured by userspace. When transferred to or from +user memory via KVM_GET_ONE_REG or KVM_SET_ONE_REG, the value of this +register is of type __u64[KVM_ARM64_SVE_VLS_WORDS], and encodes the +set of vector lengths as follows::
__u64 vector_lengths[KVM_ARM64_SVE_VLS_WORDS];
@@ -2651,19 +2661,25 @@ follows:: /* Vector length vq * 16 bytes not supported */
.. [2] The maximum value vq for which the above condition is true is - max_vq. This is the maximum vector length available to the guest on - this vcpu, and determines which register slices are visible through - this ioctl interface. + max_vq. This is the maximum vector length currently available to + the guest on this vcpu, and determines which register slices are + visible through this ioctl interface. + + If SME is supported then the max_vq used for the Z and P registers + then while SVCR.SM is 1 this vector length will be the maximum SME + vector length available for the guest, otherwise it will be the + maximum SVE vector length available.
(See Documentation/arch/arm64/sve.rst for an explanation of the "vq" nomenclature.)
-KVM_REG_ARM64_SVE_VLS is only accessible after KVM_ARM_VCPU_INIT. -KVM_ARM_VCPU_INIT initialises it to the best set of vector lengths that -the host supports. +KVM_REG_ARM64_SVE_VLS and KVM_REG_ARM_SME_VLS are only accessible +after KVM_ARM_VCPU_INIT. KVM_ARM_VCPU_INIT initialises them to the +best set of vector lengths that the host supports.
-Userspace may subsequently modify it if desired until the vcpu's SVE -configuration is finalized using KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE). +Userspace may subsequently modify these registers if desired until the +vcpu's SVE and SME configuration is finalized using +KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC).
Apart from simply removing all vector lengths from the host set that exceed some value, support for arbitrarily chosen sets of vector lengths @@ -2671,8 +2687,8 @@ is hardware-dependent and may not be available. Attempting to configure an invalid set of vector lengths via KVM_SET_ONE_REG will fail with EINVAL.
-After the vcpu's SVE configuration is finalized, further attempts to -write this register will fail with EPERM. +After the vcpu's SVE or SME configuration is finalized, further +attempts to write these registers will fail with EPERM.
arm64 bitmap feature firmware pseudo-registers have the following bit pattern::
@@ -3455,6 +3471,7 @@ The initial values are defined as: - General Purpose registers, including PC and SP: set to 0 - FPSIMD/NEON registers: set to 0 - SVE registers: set to 0 + - SME registers: set to 0 - System registers: Reset to their architecturally defined values as for a warm reset to EL1 (resp. SVC)
@@ -3497,7 +3514,7 @@ Possible features:
- KVM_ARM_VCPU_SVE: Enables SVE for the CPU (arm64 only). Depends on KVM_CAP_ARM_SVE. - Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
* After KVM_ARM_VCPU_INIT:
@@ -3505,7 +3522,7 @@ Possible features: initial value of this pseudo-register indicates the best set of vector lengths possible for a vcpu on this host.
- * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC}):
- KVM_RUN and KVM_GET_REG_LIST are not available;
@@ -3518,11 +3535,40 @@ Possible features: KVM_SET_ONE_REG, to modify the set of vector lengths available for the vcpu.
- * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_SVE): + * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC):
- the KVM_REG_ARM64_SVE_VLS pseudo-register is immutable, and can no longer be written using KVM_SET_ONE_REG.
+ - KVM_ARM_VCPU_SME: Enables SME for the CPU (arm64 only). + Depends on KVM_CAP_ARM_SME. + Requires KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): + + * After KVM_ARM_VCPU_INIT: + + - KVM_REG_ARM64_SME_VLS may be read using KVM_GET_ONE_REG: the + initial value of this pseudo-register indicates the best set of + vector lengths possible for a vcpu on this host. + + * Before KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC}): + + - KVM_RUN and KVM_GET_REG_LIST are not available; + + - KVM_GET_ONE_REG and KVM_SET_ONE_REG cannot be used to access + the scalable architectural SVE registers + KVM_REG_ARM64_SVE_ZREG(), KVM_REG_ARM64_SVE_PREG() or + KVM_REG_ARM64_SVE_FFR, the matrix register + KVM_REG_ARM64_SME_ZA() or the LUT register KVM_REG_ARM64_ZT(); + + - KVM_REG_ARM64_SME_VLS may optionally be written using + KVM_SET_ONE_REG, to modify the set of vector lengths available + for the vcpu. + + * After KVM_ARM_VCPU_FINALIZE(KVM_ARM_VCPU_VEC): + + - the KVM_REG_ARM64_SME_VLS pseudo-register is immutable, and can + no longer be written using KVM_SET_ONE_REG. + 4.83 KVM_ARM_PREFERRED_TARGET -----------------------------
@@ -5092,11 +5138,12 @@ Errors:
Recognised values for feature:
- ===== =========================================== - arm64 KVM_ARM_VCPU_SVE (requires KVM_CAP_ARM_SVE) - ===== =========================================== + ===== ============================================================== + arm64 KVM_ARM_VCPU_VEC (requires KVM_CAP_ARM_SVE or KVM_CAP_ARM_SME) + arm64 KVM_ARM_VCPU_SVE (alias for KVM_ARM_VCPU_VEC) + ===== ==============================================================
-Finalizes the configuration of the specified vcpu feature. +Finalizes the configuration of the specified vcpu features.
The vcpu must already have been initialised, enabling the affected feature, by means of a successful :ref:`KVM_ARM_VCPU_INIT <KVM_ARM_VCPU_INIT>` call with the
In order to simplify interdependencies in the rest of the series define the feature detection for SME and its subfeatures. Due to the need for vector length configuration we define a flag for SME as for SVE. We also have two subfeatures which add architectural state, FA64 and SME2, which are configured via the normal ID register scheme.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 31 +++++++++++++++++++++++++++++--
 arch/arm64/kvm/sys_regs.c         |  2 +-
 2 files changed, 30 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f12b13c3886a2b90d0f1e95f9d59f374d3c87398..55ada264b7383925269674a47446b8f659a7e4e6 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -39,7 +39,7 @@
#define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
-#define KVM_VCPU_MAX_FEATURES 7 +#define KVM_VCPU_MAX_FEATURES 9 #define KVM_VCPU_VALID_FEATURES (BIT(KVM_VCPU_MAX_FEATURES) - 1)
#define KVM_REQ_SLEEP \ @@ -340,6 +340,8 @@ struct kvm_arch { #define KVM_ARCH_FLAG_FGU_INITIALIZED 8 /* SVE exposed to guest */ #define KVM_ARCH_FLAG_GUEST_HAS_SVE 9 + /* SME exposed to guest */ +#define KVM_ARCH_FLAG_GUEST_HAS_SME 10 unsigned long flags;
/* VM-wide vCPU feature set */ @@ -941,7 +943,16 @@ struct kvm_vcpu_arch { #define vcpu_has_sve(vcpu) kvm_has_sve((vcpu)->kvm) #endif
-#define vcpu_has_vec(vcpu) vcpu_has_sve(vcpu) +#define kvm_has_sme(kvm) (system_supports_sme() && \ + test_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &(kvm)->arch.flags)) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme(vcpu) kvm_has_sme(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme(vcpu) kvm_has_sme((vcpu)->kvm) +#endif + +#define vcpu_has_vec(vcpu) (vcpu_has_sve(vcpu) || vcpu_has_sme(vcpu))
#ifdef CONFIG_ARM64_PTR_AUTH #define vcpu_has_ptrauth(vcpu) \ @@ -1551,4 +1562,20 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val); #define kvm_has_s1poe(k) \ (kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP))
+#define kvm_has_fa64(k) \ + (system_supports_fa64() && \ + kvm_has_feat((k), ID_AA64SMFR0_EL1, FA64, IMP)) + +#define kvm_has_sme2(k) \ + (system_supports_sme2() && \ + kvm_has_feat((k), ID_AA64PFR1_EL1, SME, SME2)) + +#ifdef __KVM_NVHE_HYPERVISOR__ +#define vcpu_has_sme2(vcpu) kvm_has_sme2(kern_hyp_va((vcpu)->kvm)) +#define vcpu_has_fa64(vcpu) kvm_has_fa64(kern_hyp_va((vcpu)->kvm)) +#else +#define vcpu_has_sme2(vcpu) kvm_has_sme2((vcpu)->kvm) +#define vcpu_has_fa64(vcpu) kvm_has_fa64((vcpu)->kvm) +#endif + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 82430c1e1dd02b1ac24fd2ddcd05a91272997fdb..49b7844af8a19467e7842347c4b05ceb44c4caaf 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1748,7 +1748,7 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu, static unsigned int sme_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { - if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, IMP)) + if (vcpu_has_sme(vcpu)) return 0;
return REG_HIDDEN;
As for SVE, we will need to pull parts of dynamically sized registers out of a block of memory for SME, so we will use a similar code pattern for this. Rename the current struct sve_state_reg_region in preparation for this.
No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kvm/guest.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 73e714133bb647c1d50fc67d5c8bf0569520a537..bb4b91e4392359026a4115468138f16f48a5b850 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -404,9 +404,9 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) */ #define vcpu_sve_slices(vcpu) 1
-/* Bounds of a single SVE register slice within vcpu->arch.sve_state */ -struct sve_state_reg_region { - unsigned int koffset; /* offset into sve_state in kernel memory */ +/* Bounds of a single register slice within vcpu->arch.s[mv]e_state */ +struct vec_state_reg_region { + unsigned int koffset; /* offset into s[mv]e_state in kernel memory */ unsigned int klen; /* length in kernel memory */ unsigned int upad; /* extra trailing padding in user memory */ }; @@ -415,7 +415,7 @@ struct sve_state_reg_region { * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy */ -static int sve_reg_to_region(struct sve_state_reg_region *region, +static int sve_reg_to_region(struct vec_state_reg_region *region, struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { @@ -485,7 +485,7 @@ static int sve_reg_to_region(struct sve_state_reg_region *region, static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; char __user *uptr = (char __user *)reg->addr;
/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */ @@ -511,7 +511,7 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { int ret; - struct sve_state_reg_region region; + struct vec_state_reg_region region; const char __user *uptr = (const char __user *)reg->addr;
/* Handle the KVM_REG_ARM64_SVE_VLS pseudo-reg as a special case: */
SME adds a second vector length configured in a very similar way to the SVE vector length. In order to facilitate future code sharing for SME, refactor our storage of vector lengths to use an array as the host does. We do not yet take much advantage of this so the intermediate code is not as clean as it might be.
No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       | 17 +++++++++++------
 arch/arm64/include/asm/kvm_hyp.h        |  2 +-
 arch/arm64/include/asm/kvm_pkvm.h       |  2 +-
 arch/arm64/kvm/fpsimd.c                 |  2 +-
 arch/arm64/kvm/guest.c                  |  6 +++---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  6 +++---
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      |  9 +++++----
 arch/arm64/kvm/hyp/nvhe/pkvm.c          |  2 +-
 arch/arm64/kvm/reset.c                  | 22 +++++++++++-----------
 9 files changed, 37 insertions(+), 31 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 55ada264b7383925269674a47446b8f659a7e4e6..84aa728718e13b99b4f17f5fdffa81e380834e69 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -74,8 +74,10 @@ enum kvm_mode kvm_get_mode(void); static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; }; #endif
-extern unsigned int __ro_after_init kvm_sve_max_vl; -extern unsigned int __ro_after_init kvm_host_sve_max_vl; +extern unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX]; +extern unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; +DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use); + int __init kvm_arm_init_sve(void);
u32 __attribute_const__ kvm_target_cpu(void); @@ -716,7 +718,7 @@ struct kvm_vcpu_arch { */ void *sve_state; enum fp_type fp_type; - unsigned int sve_max_vl; + unsigned int max_vl[ARM64_VEC_MAX];
/* Stage 2 paging state used by the hardware on next switch */ struct kvm_s2_mmu *hw_mmu; @@ -977,9 +979,12 @@ struct kvm_vcpu_arch {
/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */ #define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \ - sve_ffr_offset((vcpu)->arch.sve_max_vl)) + sve_ffr_offset((vcpu)->arch.max_vl[ARM64_VEC_SVE])) + +#define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[type]) + +#define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE)
-#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl)
#define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) @@ -988,7 +993,7 @@ struct kvm_vcpu_arch { size_t __size_ret; \ unsigned int __vcpu_vq; \ \ - if (WARN_ON(!sve_vl_valid((vcpu)->arch.sve_max_vl))) { \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.max_vl[ARM64_VEC_SVE]))) { \ __size_ret = 0; \ } else { \ __vcpu_vq = vcpu_sve_max_vq(vcpu); \ diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index c838309e4ec47e395d78127a8ee6bad8390c4411..21943cb98542750a1b626a8de6bbc095d7770ccf 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -143,6 +143,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val);
extern unsigned long kvm_nvhe_sym(__icache_flags); extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits); -extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl); +extern unsigned int kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_MAX]);
#endif /* __ARM64_KVM_HYP_H__ */ diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h index eb65f12e81d90939de45f0c6e59f0777a1f3f78f..950aa6a7d2c0fd41ac9ddfc374681a41cca97d4f 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -159,7 +159,7 @@ static inline size_t pkvm_host_sve_state_size(void) return 0;
return size_add(sizeof(struct cpu_sve_state), - SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl))); + SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]))); }
struct pkvm_mapping { diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index f324990a400293976fd505442f9eaccc6d01ed8f..656e02312a5991419bfc3c22193b6b9b4354be64 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -106,7 +106,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) */ fp_state.st = &vcpu->arch.ctxt.fp_regs; fp_state.sve_state = vcpu->arch.sve_state; - fp_state.sve_vl = vcpu->arch.sve_max_vl; + fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; fp_state.sme_state = NULL; fp_state.svcr = &__vcpu_sys_reg(vcpu, SVCR); fp_state.fpmr = &__vcpu_sys_reg(vcpu, FPMR); diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index bb4b91e4392359026a4115468138f16f48a5b850..e9f17b8a6fba476a472d3f09691611228ee9d203 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -318,7 +318,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!vcpu_has_sve(vcpu)) return -ENOENT;
- if (WARN_ON(!sve_vl_valid(vcpu->arch.sve_max_vl))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) return -EINVAL;
memset(vqs, 0, sizeof(vqs)); @@ -356,7 +356,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (vq_present(vqs, vq)) max_vq = vq;
- if (max_vq > sve_vq_from_vl(kvm_sve_max_vl)) + if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE])) return -EINVAL;
/* @@ -375,7 +375,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return -EINVAL;
/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.sve_max_vl = sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_vl_from_vq(max_vq);
return 0; } diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index c24e418cec101433f7484eb4f7af788c474eda3a..2c2e22d86a2353531efa49187fe80acb01d31d20 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -370,8 +370,8 @@ static inline void __hyp_sve_save_host(void) struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); + __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[ARM64_VEC_SVE]), &sve_state->fpsr, true); } @@ -426,7 +426,7 @@ static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) zcr_el2 = vcpu_sve_max_vq(vcpu) - 1; write_sysreg_el2(zcr_el2, SYS_ZCR); } else { - zcr_el2 = sve_vq_from_vl(kvm_host_sve_max_vl) - 1; + zcr_el2 = sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1; write_sysreg_el2(zcr_el2, SYS_ZCR);
zcr_el1 = vcpu_sve_max_vq(vcpu) - 1; diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index 2c37680d954cf2c2aed5abe7c2225b682861869a..a4ba03380d6e1049a5453772fa5291b08c1f70c1 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -34,7 +34,7 @@ static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu) */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true); - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); }
static void __hyp_sve_restore_host(void) @@ -50,8 +50,8 @@ static void __hyp_sve_restore_host(void) * that was discovered, if we wish to use larger VLs this will * need to be revisited. */ - write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2); - __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl), + write_sysreg_s(sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SVE]) - 1, SYS_ZCR_EL2); + __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_max_vl[ARM64_VEC_SVE]), &sve_state->fpsr, true); write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR); @@ -125,7 +125,8 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state); /* Limit guest vector length to the maximum supported by the host. */ - hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl); + hyp_vcpu->vcpu.arch.max_vl[ARM64_VEC_SVE] = min(host_vcpu->arch.max_vl[ARM64_VEC_SVE], + kvm_host_max_vl[ARM64_VEC_SVE]);
hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2; hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWI | HCR_TWE); diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 5896e581c1fac943cf372f1df24311ef43529d62..2a2f932143d5e5b2164162422ba6cf0e6731b13a 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -20,7 +20,7 @@ unsigned long __icache_flags; /* Used by kvm_get_vttbr(). */ unsigned int kvm_arm_vmid_bits;
-unsigned int kvm_host_sve_max_vl; +unsigned int kvm_host_max_vl[ARM64_VEC_MAX];
/* * The currently loaded hyp vCPU for each physical CPU. Used only when diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index ce726b1d4e8e90cfd4459a6cb9c67b8805424e22..3cb91dc6dc3dc5cc484900dbd9f4cdfedb3e2b4a 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -32,7 +32,7 @@
/* Maximum phys_shift supported for any VM on this host */ static u32 __ro_after_init kvm_ipa_limit; -unsigned int __ro_after_init kvm_host_sve_max_vl; +unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX];
/* * ARMv8 Reset Values @@ -46,14 +46,14 @@ unsigned int __ro_after_init kvm_host_sve_max_vl; #define VCPU_RESET_PSTATE_SVC (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT | \ PSR_AA32_I_BIT | PSR_AA32_F_BIT)
-unsigned int __ro_after_init kvm_sve_max_vl; +unsigned int __ro_after_init kvm_max_vl[ARM64_VEC_MAX];
int __init kvm_arm_init_sve(void) { if (system_supports_sve()) { - kvm_sve_max_vl = sve_max_virtualisable_vl(); - kvm_host_sve_max_vl = sve_max_vl(); - kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl; + kvm_max_vl[ARM64_VEC_SVE] = sve_max_virtualisable_vl(); + kvm_host_max_vl[ARM64_VEC_SVE] = sve_max_vl(); + kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SVE]) = kvm_host_max_vl[ARM64_VEC_SVE];
/* * The get_sve_reg()/set_sve_reg() ioctl interface will need @@ -61,16 +61,16 @@ int __init kvm_arm_init_sve(void) * order to support vector lengths greater than * VL_ARCH_MAX: */ - if (WARN_ON(kvm_sve_max_vl > VL_ARCH_MAX)) - kvm_sve_max_vl = VL_ARCH_MAX; + if (WARN_ON(kvm_max_vl[ARM64_VEC_SVE] > VL_ARCH_MAX)) + kvm_max_vl[ARM64_VEC_SVE] = VL_ARCH_MAX;
/* * Don't even try to make use of vector lengths that * aren't available on all CPUs, for now: */ - if (kvm_sve_max_vl < sve_max_vl()) + if (kvm_max_vl[ARM64_VEC_SVE] < sve_max_vl()) pr_warn("KVM: SVE vector length for guests limited to %u bytes\n", - kvm_sve_max_vl); + kvm_max_vl[ARM64_VEC_SVE]); }
return 0; @@ -78,7 +78,7 @@ int __init kvm_arm_init_sve(void)
static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) { - vcpu->arch.sve_max_vl = kvm_sve_max_vl; + vcpu->arch.max_vl[ARM64_VEC_SVE] = kvm_max_vl[ARM64_VEC_SVE];
/* * Userspace can still customize the vector lengths by writing @@ -99,7 +99,7 @@ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) size_t reg_sz; int ret;
- vl = vcpu->arch.sve_max_vl; + vl = vcpu->arch.max_vl[ARM64_VEC_SVE];
/* * Responsibility for these properties is shared between
SME implements a vector length which architecturally looks very similar to the SVE vector length and is configured in much the same way. This controls the vector length used for the ZA matrix register, and for the SVE vector and predicate registers when in streaming mode. The only substantial difference is that unlike SVE the architecture does not guarantee that any particular vector length will be implemented.
Configuration of the SME vector length is done using a virtual register, as for SVE; hook up the implementation for this virtual register. Since we do not yet have support for any of the new SME registers, stub register access functions are provided which only allow VL configuration. These will be extended as support for the SME specific registers is added, as was done for SVE.
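As an example of how a VMM might use this, the available streaming mode vector lengths can be read back and restricted before finalisation in the same way as for SVE. A minimal sketch assuming the uapi additions from this patch; the limit_sme_vl() helper, its error handling and the reuse of the SVE VQ limit as the loop bound are illustrative only:

	#include <err.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Restrict the streaming mode vector lengths to at most max_vq * 128 bits */
	static void limit_sme_vl(int vcpu_fd, unsigned int max_vq)
	{
		__u64 vqs[KVM_ARM64_SME_VLS_WORDS];
		struct kvm_one_reg reg = {
			.id   = KVM_REG_ARM64_SME_VLS,
			.addr = (__u64)(unsigned long)vqs,
		};
		unsigned int vq;

		/* Start from the set of vector lengths the host offers... */
		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
			err(1, "KVM_GET_ONE_REG(SME_VLS)");

		/* ...and clear every length above the requested maximum. */
		for (vq = max_vq + 1; vq <= KVM_ARM64_SVE_VQ_MAX; vq++)
			vqs[(vq - 1) / 64] &= ~(1ULL << ((vq - 1) % 64));

		if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
			err(1, "KVM_SET_ONE_REG(SME_VLS)");
	}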
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 11 +++++ arch/arm64/include/uapi/asm/kvm.h | 9 ++++ arch/arm64/kvm/guest.c | 94 +++++++++++++++++++++++++++++++-------- 3 files changed, 95 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 84aa728718e13b99b4f17f5fdffa81e380834e69..5cc1014120ca6de249093b3e34437a99bc8bf935 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -715,8 +715,15 @@ struct kvm_vcpu_arch { * low 128 bits of the SVE Z registers. When the core * floating point code saves the register state of a task it * records which view it saved in fp_type. + * + * If SME support is also present then it provides an + * alternative view of the SVE registers accessed as for the Z + * registers when PSTATE.SM is 1, plus an additional set of + * SME specific state in the matrix register ZA and LUT + * register ZT0. */ void *sve_state; + void *sme_state; enum fp_type fp_type; unsigned int max_vl[ARM64_VEC_MAX];
@@ -984,7 +991,11 @@ struct kvm_vcpu_arch { #define vcpu_vec_max_vq(vcpu, type) sve_vq_from_vl((vcpu)->arch.max_vl[type])
#define vcpu_sve_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SVE) +#define vcpu_sme_max_vq(vcpu) vcpu_vec_max_vq(vcpu, ARM64_VEC_SME)
+#define vcpu_max_vl(vcpu) max((vcpu)->arch.max_vl[ARM64_VEC_SVE], \ + (vcpu)->arch.max_vl[ARM64_VEC_SME]) +#define vcpu_max_vq(vcpu) sve_vq_from_vl(vcpu_max_vl(vcpu))
#define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1) diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index ea5aaa4b1cfe82862883678bbe47373d53c005f6..ff7c4bcee542befae6961fae9eec2eb9bc8e2eb3 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -353,6 +353,15 @@ struct kvm_arm_counter_offset { #define KVM_ARM64_SVE_VLS_WORDS \ ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
+/* SME registers */ +#define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT) + +/* Vector lengths pseudo-register: */ +#define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ + KVM_REG_SIZE_U512 | 0xffff) +#define KVM_ARM64_SME_VLS_WORDS \ + ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1) + /* Bitmap feature firmware registers */ #define KVM_REG_ARM_FW_FEAT_BMAP (0x0016 << KVM_REG_ARM_COPROC_SHIFT) #define KVM_REG_ARM_FW_FEAT_BMAP_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index e9f17b8a6fba476a472d3f09691611228ee9d203..62df7830d11e6e7d01d175566e4b6a771ac2ec6d 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -310,22 +310,20 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64) #define vq_present(vqs, vq) (!!((vqs)[vq_word(vq)] & vq_mask(vq)))
-static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +static int get_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { unsigned int max_vq, vq; u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
- if (!vcpu_has_sve(vcpu)) - return -ENOENT; - - if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[ARM64_VEC_SVE]))) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type]))) return -EINVAL;
memset(vqs, 0, sizeof(vqs));
- max_vq = vcpu_sve_max_vq(vcpu); + max_vq = vcpu_vec_max_vq(vcpu, vec_type); for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) - if (sve_vq_available(vq)) + if (vq_available(vec_type, vq)) vqs[vq_word(vq)] |= vq_mask(vq);
if (copy_to_user((void __user *)reg->addr, vqs, sizeof(vqs))) @@ -334,18 +332,13 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return 0; }
-static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +static int set_vec_vls(enum vec_type vec_type, struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) { unsigned int max_vq, vq; u64 vqs[KVM_ARM64_SVE_VLS_WORDS];
- if (!vcpu_has_sve(vcpu)) - return -ENOENT; - - if (kvm_arm_vcpu_vec_finalized(vcpu)) - return -EPERM; /* too late! */ - - if (WARN_ON(vcpu->arch.sve_state)) + if (WARN_ON(!sve_vl_valid(vcpu->arch.max_vl[vec_type]))) return -EINVAL;
if (copy_from_user(vqs, (const void __user *)reg->addr, sizeof(vqs))) @@ -356,18 +349,18 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (vq_present(vqs, vq)) max_vq = vq;
- if (max_vq > sve_vq_from_vl(kvm_max_vl[ARM64_VEC_SVE])) + if (max_vq > sve_vq_from_vl(kvm_max_vl[vec_type])) return -EINVAL;
/* * Vector lengths supported by the host can't currently be * hidden from the guest individually: instead we can only set a - * maximum via ZCR_EL2.LEN. So, make sure the available vector + * maximum via xCR_EL2.LEN. So, make sure the available vector * lengths match the set requested exactly up to the requested * maximum: */ for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq) - if (vq_present(vqs, vq) != sve_vq_available(vq)) + if (vq_present(vqs, vq) != vq_available(vec_type, vq)) return -EINVAL;
/* Can't run with no vector lengths at all: */ @@ -375,11 +368,33 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return -EINVAL;
/* vcpu->arch.sve_state will be alloc'd by kvm_vcpu_finalize_sve() */ - vcpu->arch.max_vl[ARM64_VEC_SVE] = sve_vl_from_vq(max_vq); + vcpu->arch.max_vl[vec_type] = sve_vl_from_vq(max_vq);
return 0; }
+static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + return get_vec_vls(ARM64_VEC_SVE, vcpu, reg); +} + +static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sve(vcpu)) + return -ENOENT; + + if (kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; /* too late! */ + + if (WARN_ON(vcpu->arch.sve_state)) + return -EINVAL; + + return set_vec_vls(ARM64_VEC_SVE, vcpu, reg); +} + #define SVE_REG_SLICE_SHIFT 0 #define SVE_REG_SLICE_BITS 5 #define SVE_REG_ID_SHIFT (SVE_REG_SLICE_SHIFT + SVE_REG_SLICE_BITS) @@ -533,6 +548,45 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return 0; }
+static int get_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sme(vcpu)) + return -ENOENT; + + return get_vec_vls(ARM64_VEC_SME, vcpu, reg); +} + +static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + if (!vcpu_has_sme(vcpu)) + return -ENOENT; + + if (kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; /* too late! */ + + if (WARN_ON(vcpu->arch.sme_state)) + return -EINVAL; + + return set_vec_vls(ARM64_VEC_SME, vcpu, reg); +} + +static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ + if (reg->id == KVM_REG_ARM64_SME_VLS) + return get_sme_vls(vcpu, reg); + + return -EINVAL; +} + +static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) +{ + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ + if (reg->id == KVM_REG_ARM64_SME_VLS) + return set_sme_vls(vcpu, reg); + + return -EINVAL; +} int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { return -EINVAL; @@ -775,6 +829,7 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) case KVM_REG_ARM_FW_FEAT_BMAP: return kvm_arm_get_fw_reg(vcpu, reg); case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg); + case KVM_REG_ARM64_SME: return get_sme_reg(vcpu, reg); }
if (is_timer_reg(reg->id)) @@ -795,6 +850,7 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) case KVM_REG_ARM_FW_FEAT_BMAP: return kvm_arm_set_fw_reg(vcpu, reg); case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg); + case KVM_REG_ARM64_SME: return set_sme_reg(vcpu, reg); }
if (is_timer_reg(reg->id))
SME is configured by the system registers SMCR_EL1 and SMCR_EL2; add definitions and userspace access for them. These control the SME vector length in a manner similar to that for SVE and also have feature enable bits for SME2 and FA64. A subsequent patch will add management of them for guests as part of the general floating point context switch, as is done for the equivalent SVE registers.
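As with ZCR, the LEN field of these registers encodes the vector length in 128 bit granules minus one, so the clamping applied when a guest hypervisor writes SMCR_EL2 (see access_smcr_el2() below) is just a conversion to VQ, a min() against the vcpu's maximum and a conversion back:

	vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1;	/* requested VQ */
	vq = min(vq, vcpu_sme_max_vq(vcpu));		/* clamp to the vcpu limit */
	smcr = SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1);	/* back to a LEN value */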
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/include/asm/vncr_mapping.h | 1 + arch/arm64/kvm/sys_regs.c | 37 ++++++++++++++++++++++++++++++++++- 3 files changed, 39 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 5cc1014120ca6de249093b3e34437a99bc8bf935..f987698f88acf7b01e08e44b46a0982e36cced95 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -485,6 +485,7 @@ enum vcpu_sysreg { CPTR_EL2, /* Architectural Feature Trap Register (EL2) */ HACR_EL2, /* Hypervisor Auxiliary Control Register */ ZCR_EL2, /* SVE Control Register (EL2) */ + SMCR_EL2, /* SME Control Register (EL2) */ TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */ TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */ TCR_EL2, /* Translation Control Register (EL2) */ @@ -522,6 +523,7 @@ enum vcpu_sysreg { VNCR(ACTLR_EL1),/* Auxiliary Control Register */ VNCR(CPACR_EL1),/* Coprocessor Access Control */ VNCR(ZCR_EL1), /* SVE Control */ + VNCR(SMCR_EL1), /* SME Control */ VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */ VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */ VNCR(TCR_EL1), /* Translation Control Register */ diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h index 4f9bbd4d6c2671753124599475e5138bf6b9c749..74fc7400efbc7de6b8dd81a485f1e9d545baf7a9 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -42,6 +42,7 @@ #define VNCR_HDFGWTR_EL2 0x1D8 #define VNCR_ZCR_EL1 0x1E0 #define VNCR_HAFGRTR_EL2 0x1E8 +#define VNCR_SMCR_EL1 0x1F0 #define VNCR_TTBR0_EL1 0x200 #define VNCR_TTBR1_EL1 0x210 #define VNCR_FAR_EL1 0x220 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 49b7844af8a19467e7842347c4b05ceb44c4caaf..597d6a33826d001268d53174581ef8e61e7dd946 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -142,6 +142,7 @@ static bool get_el2_to_el1_mapping(unsigned int reg, MAPPED_EL2_SYSREG(ELR_EL2, ELR_EL1, NULL ); MAPPED_EL2_SYSREG(SPSR_EL2, SPSR_EL1, NULL ); MAPPED_EL2_SYSREG(ZCR_EL2, ZCR_EL1, NULL ); + MAPPED_EL2_SYSREG(SMCR_EL2, SMCR_EL1, NULL ); MAPPED_EL2_SYSREG(CONTEXTIDR_EL2, CONTEXTIDR_EL1, NULL ); default: return false; @@ -2429,6 +2430,37 @@ static bool access_zcr_el2(struct kvm_vcpu *vcpu, return true; }
+static unsigned int sme_el2_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + return __el2_visibility(vcpu, rd, sme_visibility); +} + +static bool access_smcr_el2(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + unsigned int vq; + u64 smcr; + + if (guest_hyp_sve_traps_enabled(vcpu)) { + kvm_inject_nested_sve_trap(vcpu); + return true; + } + + if (!p->is_write) { + p->regval = vcpu_read_sys_reg(vcpu, SMCR_EL2); + return true; + } + + smcr = p->regval; + vq = SYS_FIELD_GET(SMCR_ELx, LEN, smcr) + 1; + vq = min(vq, vcpu_sme_max_vq(vcpu)); + vcpu_write_sys_reg(vcpu, SYS_FIELD_PREP(SMCR_ELx, LEN, vq - 1), + SMCR_EL2); + return true; +} + static unsigned int s1poe_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd) { @@ -2695,7 +2727,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, { SYS_DESC(SYS_SMPRI_EL1), undef_access }, - { SYS_DESC(SYS_SMCR_EL1), undef_access }, + { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, { SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 }, @@ -3047,6 +3079,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
+ EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0, + sme_el2_visibility), + EL2_REG(TTBR0_EL2, access_rw, reset_val, 0), EL2_REG(TTBR1_EL2, access_rw, reset_val, 0), EL2_REG(TCR_EL2, access_rw, reset_val, TCR_EL2_RES1),
SME adds a new thread ID register, TPIDR2_EL0. This is used in userspace for delayed saving of the ZA state but architecturally it is not connected to SME beyond being part of FEAT_SME. It has an independent fine grained trap and its runtime connection with the rest of SME is purely software defined.
Expose the register as a system register if the guest supports SME, context switching it along with the other EL0 TPIDRs.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 15 +++++++++++++++ arch/arm64/kvm/sys_regs.c | 9 ++++++--- 3 files changed, 22 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index f987698f88acf7b01e08e44b46a0982e36cced95..c770ed6138164fcd3e11b8517ef4120b4f4486b9 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -429,6 +429,7 @@ enum vcpu_sysreg { CSSELR_EL1, /* Cache Size Selection Register */ TPIDR_EL0, /* Thread ID, User R/W */ TPIDRRO_EL0, /* Thread ID, User R/O */ + TPIDR2_EL0, /* Thread ID, Register 2 */ TPIDR_EL1, /* Thread ID, Privileged */ CNTKCTL_EL1, /* Timer Control Register (EL1) */ PAR_EL1, /* Physical Address Register */ diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index 7a4ad8a4c3727e4628b375cbefc5e0d3533687de..8141e4785161d0f610f1aa161d8448ca2f50312c 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -81,6 +81,17 @@ static inline u64 *ctxt_mdscr_el1(struct kvm_cpu_context *ctxt) return &ctxt_sys_reg(ctxt, MDSCR_EL1); }
+static inline bool ctxt_has_sme(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu; + + if (!system_supports_sme()) + return false; + + vcpu = ctxt_to_vcpu(ctxt); + return kvm_has_sme(kern_hyp_va(vcpu->kvm)); +} + static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) { *ctxt_mdscr_el1(ctxt) = read_sysreg(mdscr_el1); @@ -94,6 +105,8 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0); ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); + if (ctxt_has_sme(ctxt)) + ctxt_sys_reg(ctxt, TPIDR2_EL0) = read_sysreg_s(SYS_TPIDR2_EL0); }
static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) @@ -163,6 +176,8 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL0), tpidr_el0); write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0); + if (ctxt_has_sme(ctxt)) + write_sysreg_s(ctxt_sys_reg(ctxt, TPIDR2_EL0), SYS_TPIDR2_EL0); }
static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt, diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 597d6a33826d001268d53174581ef8e61e7dd946..eece67141480b8d4bbd2bac0f02f9208c7f86f8b 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2901,7 +2901,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { .visibility = s1poe_visibility }, { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 }, { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 }, - { SYS_DESC(SYS_TPIDR2_EL0), undef_access }, + { SYS_DESC(SYS_TPIDR2_EL0), NULL, reset_unknown, TPIDR2_EL0, + .visibility = sme_visibility},
{ SYS_DESC(SYS_SCXTNUM_EL0), undef_access },
@@ -5033,8 +5034,7 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu) HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nS2POR_EL1 | HFGxTR_EL2_nACCDATA_EL1 | - HFGxTR_EL2_nSMPRI_EL1_MASK | - HFGxTR_EL2_nTPIDR2_EL0_MASK); + HFGxTR_EL2_nSMPRI_EL1_MASK);
if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS)) kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS| @@ -5089,6 +5089,9 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu) HFGITR_EL2_nBRBIALL); }
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP)) + kvm->arch.fgu[HFGxTR_GROUP] |= HFGxTR_EL2_nTPIDR2_EL0; + set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags); out: mutex_unlock(&kvm->arch.config_lock);
The primary register for identifying SME is ID_AA64PFR1_EL1.SME. This is hidden from guests unless SME is enabled by the VMM. When it is visible it is writable and can be used to control the availability of SME2.
There is also a new register ID_AA64SMFR0_EL1 which we make writable, forcing it to all bits 0 if SME is disabled. This includes the field SMEver giving the SME version; userspace is responsible for ensuring the value is consistent with ID_AA64PFR1_EL1.SME. It also includes FA64, a separately enableable extension which provides the full FPSIMD and SVE instruction set, including FFR, in streaming mode. Userspace can control the availability of FA64 by writing to this field. The other features enumerated there only add new instructions; there are no architectural controls for them.
There is a further identification register SMIDR_EL1 which provides a basic description of the SME microarchitecture, in a manner similar to MIDR_EL1 for the PE. It also reports support for priority management and provides a basic affinity description for shared SME units, plus some RES0 space. We do not support priority management, and affinity is not meaningful for guests, so we mask out everything except the microarchitecture description.
As for MIDR_EL1 and REVIDR_EL1 we expose the implementer and revision information to guests using the raw value from the CPU we are running on; this may present the same issues for asymmetric systems or for migration as it does for the existing registers.
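For example, to make use of the FA64 control this provides, a VMM could read ID_AA64SMFR0_EL1, clear the FA64 field and write the result back before running the vcpu. A sketch only: the sysreg encoding (op0=3, op1=0, CRn=0, CRm=4, op2=5) and the FA64 bit position (bit 63) are taken from the architecture rather than anything added here, and error handling is minimal:

	__u64 val;
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 0, 0, 4, 5),	/* ID_AA64SMFR0_EL1 */
		.addr = (__u64)(unsigned long)&val,
	};

	if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
		err(1, "KVM_GET_ONE_REG(ID_AA64SMFR0_EL1)");

	val &= ~(1ULL << 63);		/* hide FA64 from the guest */

	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
		err(1, "KVM_SET_ONE_REG(ID_AA64SMFR0_EL1)");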
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/kvm/sys_regs.c | 46 +++++++++++++++++++++++++++++++++++---- 2 files changed, 43 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index c770ed6138164fcd3e11b8517ef4120b4f4486b9..14615216845668bf1a614e9fbdb6112190fec054 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -473,6 +473,7 @@ enum vcpu_sysreg { /* FP/SIMD/SVE */ SVCR, FPMR, + SMIDR_EL1, /* Streaming Mode Identification Register */
/* 32bit specific registers. */ DACR32_EL2, /* Domain Access Control Register */ diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index eece67141480b8d4bbd2bac0f02f9208c7f86f8b..b515dfa1f27d0c744de996f6dcdff272aa184385 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -767,6 +767,38 @@ static u64 reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) return mpidr; }
+static u64 reset_smidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) +{ + u64 smidr = 0; + + if (!system_supports_sme()) + return smidr; + + smidr = read_sysreg_s(SYS_SMIDR_EL1); + + /* + * Mask out everything except for the implementer and revision, + * in particular priority management is not implemented. + */ + smidr &= SMIDR_EL1_IMPLEMENTER_MASK | SMIDR_EL1_REVISION_MASK; + + vcpu_write_sys_reg(vcpu, smidr, SMIDR_EL1); + + return smidr; +} + +static bool access_smidr(struct kvm_vcpu *vcpu, + struct sys_reg_params *p, + const struct sys_reg_desc *r) +{ + if (p->is_write) + return write_to_read_only(vcpu, p, r); + + p->regval = vcpu_read_sys_reg(vcpu, r->reg); + + return true; +} + static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) { @@ -1593,7 +1625,9 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu, if (!kvm_has_mte(vcpu->kvm)) val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
- val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + if (!vcpu_has_sme(vcpu)) + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME); + val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI); val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac); @@ -1697,6 +1731,10 @@ static unsigned int id_visibility(const struct kvm_vcpu *vcpu, if (!vcpu_has_sve(vcpu)) return REG_RAZ; break; + case SYS_ID_AA64SMFR0_EL1: + if (!vcpu_has_sme(vcpu)) + return REG_RAZ; + break; }
return 0; @@ -2637,7 +2675,6 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_AA64PFR1_EL1_MTE_frac | ID_AA64PFR1_EL1_NMI | ID_AA64PFR1_EL1_RNDR_trap | - ID_AA64PFR1_EL1_SME | ID_AA64PFR1_EL1_RES0 | ID_AA64PFR1_EL1_MPAM_frac | ID_AA64PFR1_EL1_RAS_frac | @@ -2645,7 +2682,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR), ID_UNALLOCATED(4,3), ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0), - ID_HIDDEN(ID_AA64SMFR0_EL1), + ID_WRITABLE(ID_AA64SMFR0_EL1, ~ID_AA64SMFR0_EL1_RES0), ID_UNALLOCATED(4,6), ID_WRITABLE(ID_AA64FPFR0_EL1, ~ID_AA64FPFR0_EL1_RES0),
@@ -2845,7 +2882,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1, .set_user = set_clidr, .val = ~CLIDR_EL1_RES0 }, { SYS_DESC(SYS_CCSIDR2_EL1), undef_access }, - { SYS_DESC(SYS_SMIDR_EL1), undef_access }, + { SYS_DESC(SYS_SMIDR_EL1), .access = access_smidr, .reset = reset_smidr, + .reg = SMIDR_EL1, .visibility = sme_visibility }, { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 }, ID_FILTERED(CTR_EL0, ctr_el0, CTR_EL0_DIC_MASK |
SME has optional support for configuring the relative priorities of PEs in systems where they share a single SME hardware block, known as an SMCU. Currently we do not have any support for this in Linux and will also hide it from KVM guests, pending experience with practical implementations. The interface for configuring priority support is via two new system registers; these registers are always defined when SME is available.
The register SMPRI_EL1 allows control of SME execution priorities. Since we disable SME priority support for guests this register is RES0; define it as such and enable fine grained traps for SMPRI_EL1 to ensure that guests can't write to it even if the hardware supports priorities. We could, with some adjustment to the generic FGT code, allow untrapped reads, but since we don't currently advertise priority support to guests there should be no reason for frequent accesses.
There is also an EL2 register SMPRIMAP_EL2 for virtualisation of priorities; this is RES0 when priority configuration is not supported but has no specific traps available.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 2 ++ arch/arm64/include/asm/vncr_mapping.h | 1 + arch/arm64/kvm/sys_regs.c | 30 +++++++++++++++++++++++++----- 3 files changed, 28 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 14615216845668bf1a614e9fbdb6112190fec054..acd243a31ccb77cce3d95829e22456a71e87725c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -474,6 +474,7 @@ enum vcpu_sysreg { SVCR, FPMR, SMIDR_EL1, /* Streaming Mode Identification Register */ + SMPRI_EL1, /* Streaming Mode Priority Register */
/* 32bit specific registers. */ DACR32_EL2, /* Domain Access Control Register */ @@ -526,6 +527,7 @@ enum vcpu_sysreg { VNCR(CPACR_EL1),/* Coprocessor Access Control */ VNCR(ZCR_EL1), /* SVE Control */ VNCR(SMCR_EL1), /* SME Control */ + VNCR(SMPRIMAP_EL2), /* Streaming Mode Priority Mapping Register */ VNCR(TTBR0_EL1),/* Translation Table Base Register 0 */ VNCR(TTBR1_EL1),/* Translation Table Base Register 1 */ VNCR(TCR_EL1), /* Translation Control Register */ diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h index 74fc7400efbc7de6b8dd81a485f1e9d545baf7a9..1685df741294b68e5ae4a4503258c3ee2667dda9 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -43,6 +43,7 @@ #define VNCR_ZCR_EL1 0x1E0 #define VNCR_HAFGRTR_EL2 0x1E8 #define VNCR_SMCR_EL1 0x1F0 +#define VNCR_SMPRIMAP_EL2 0x1F0 #define VNCR_TTBR0_EL1 0x200 #define VNCR_TTBR1_EL1 0x210 #define VNCR_FAR_EL1 0x220 diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index b515dfa1f27d0c744de996f6dcdff272aa184385..d22bd4df9e93213d6974fdfba0175c67716a1e95 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1802,6 +1802,15 @@ static unsigned int fp8_visibility(const struct kvm_vcpu *vcpu, return REG_HIDDEN; }
+static unsigned int sme_raz_visibility(const struct kvm_vcpu *vcpu, + const struct sys_reg_desc *rd) +{ + if (vcpu_has_sme(vcpu)) + return REG_RAZ; + + return REG_HIDDEN; +} + static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val) { if (!vcpu_has_sve(vcpu)) @@ -2763,7 +2772,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_ZCR_EL1), NULL, reset_val, ZCR_EL1, 0, .visibility = sve_visibility }, { SYS_DESC(SYS_TRFCR_EL1), undef_access }, - { SYS_DESC(SYS_SMPRI_EL1), undef_access }, + + /* + * SMPRI_EL1 is UNDEF when SME is disabled, the UNDEF is + * handled via FGU which is handled without consulting this + * table. + */ + { SYS_DESC(SYS_SMPRI_EL1), trap_raz_wi, .visibility = sme_raz_visibility }, + { SYS_DESC(SYS_SMCR_EL1), NULL, reset_val, SMCR_EL1, 0, .visibility = sme_visibility }, { SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 }, { SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 }, @@ -3118,6 +3134,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
+ EL2_REG_FILTERED(SMPRIMAP_EL2, trap_raz_wi, reset_val, 0, + sme_el2_visibility), EL2_REG_FILTERED(SMCR_EL2, access_smcr_el2, reset_val, 0, sme_el2_visibility),
@@ -5071,8 +5089,7 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu) kvm->arch.fgu[HFGxTR_GROUP] = (HFGxTR_EL2_nAMAIR2_EL1 | HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nS2POR_EL1 | - HFGxTR_EL2_nACCDATA_EL1 | - HFGxTR_EL2_nSMPRI_EL1_MASK); + HFGxTR_EL2_nACCDATA_EL1);
if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS)) kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS| @@ -5127,8 +5144,11 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu) HFGITR_EL2_nBRBIALL); }
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP)) - kvm->arch.fgu[HFGxTR_GROUP] |= HFGxTR_EL2_nTPIDR2_EL0; + if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP)) + kvm->arch.fgt[HFGxTR_GROUP] |= HFGxTR_EL2_nSMPRI_EL1_MASK; + else + kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nTPIDR2_EL0 | + HFGxTR_EL2_nSMPRI_EL1_MASK);
set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags); out:
Provide a __sme_restore_state() for the hypervisor to allow it to restore ZA and ZT for guests.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_hyp.h | 2 ++ arch/arm64/kvm/hyp/fpsimd.S | 16 ++++++++++++++++ 2 files changed, 18 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h index 21943cb98542750a1b626a8de6bbc095d7770ccf..5a1f8e4be18624efa6b887f09c36f0e8ad318c40 100644 --- a/arch/arm64/include/asm/kvm_hyp.h +++ b/arch/arm64/include/asm/kvm_hyp.h @@ -113,6 +113,8 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr); void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr); +int __sve_get_vl(void); +void __sme_restore_state(void const *state, bool restore_zt);
u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S index 6e16cbfc5df27e63732655dffea60d7039c37332..b21809adfba78522410da053fd4eccbb66484532 100644 --- a/arch/arm64/kvm/hyp/fpsimd.S +++ b/arch/arm64/kvm/hyp/fpsimd.S @@ -29,3 +29,19 @@ SYM_FUNC_START(__sve_save_state) sve_save 0, x1, x2, 3 ret SYM_FUNC_END(__sve_save_state) + +SYM_FUNC_START(__sve_get_vl) + _sve_rdvl 0, 1 + ret +SYM_FUNC_END(__sve_get_vl) + +SYM_FUNC_START(__sme_restore_state) + _sme_rdsvl 2, 1 // x2 = VL/8 + sme_load_za 0, x2, 12 // Leaves x0 pointing to end of ZA + + cbz x1, 1f + _ldr_zt 0 + +1: + ret +SYM_FUNC_END(__sme_restore_state)
SME introduces a mode called streaming mode where the Z, P and optionally FFR registers can be accessed using the SVE instructions but with the SME vector length. Reflect this in the ABI for accessing the guest registers by presenting them with the vector length that the guest would see were it running: the SME vector length when the guest is configured for streaming mode and the SVE vector length otherwise.
Since SME may be present without SVE we also update the existing checks for access to the Z, P and V registers to check for either SVE or streaming mode. When not in streaming mode the guest floating point state may be accessed via the V registers.
Any VMM that supports SME must be aware of the requirement this creates to configure streaming mode prior to writing the floating point registers.
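A minimal sketch of that ordering from the VMM side, assuming SVCR is exposed to userspace via its architectural sysreg encoding (op0=3, op1=3, CRn=4, CRm=2, op2=2) and omitting error handling and the rest of the register state:

	__u64 svcr = 0x1;		/* SVCR.SM */
	__u64 z0[2048 / 64] = { 0 };	/* one 2048 bit Z register slice */
	struct kvm_one_reg svcr_reg = {
		.id   = ARM64_SYS_REG(3, 3, 4, 2, 2),	/* SVCR, assumed encoding */
		.addr = (__u64)(unsigned long)&svcr,
	};
	struct kvm_one_reg z0_reg = {
		.id   = KVM_REG_ARM64_SVE_ZREG(0, 0),
		.addr = (__u64)(unsigned long)z0,
	};

	/* Set streaming mode first so Z0 is interpreted with the SME VL... */
	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &svcr_reg) < 0)
		err(1, "KVM_SET_ONE_REG(SVCR)");

	/* ...then load the vector state. */
	if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &z0_reg) < 0)
		err(1, "KVM_SET_ONE_REG(Z0)");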
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 3 +++ arch/arm64/kvm/guest.c | 38 ++++++++++++++++++++++++++++++++++---- 2 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index acd243a31ccb77cce3d95829e22456a71e87725c..67ac3e37fde31901513c2e692693d3ba1967b0bd 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1600,4 +1600,7 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val); #define vcpu_has_fa64(vcpu) kvm_has_fa64((vcpu)->kvm) #endif
+#define vcpu_in_streaming_mode(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_SM_MASK) + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 62df7830d11e6e7d01d175566e4b6a771ac2ec6d..87a0a027e1e7e2487eff703dee2958137c9640ef 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -73,6 +73,11 @@ static u64 core_reg_offset_from_id(u64 id) return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE); }
+static bool vcpu_has_sve_regs(const struct kvm_vcpu *vcpu) +{ + return vcpu_has_sve(vcpu) || vcpu_in_streaming_mode(vcpu); +} + static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off) { int size; @@ -110,9 +115,10 @@ static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off) /* * The KVM_REG_ARM64_SVE regs must be used instead of * KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on - * SVE-enabled vcpus: + * SVE-enabled vcpus or when a SME enabled vcpu is in + * streaming mode: */ - if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off)) + if (vcpu_has_sve_regs(vcpu) && core_reg_offset_is_vreg(off)) return -EINVAL;
return size; @@ -426,6 +432,24 @@ struct vec_state_reg_region { unsigned int upad; /* extra trailing padding in user memory */ };
+/* + * We represent the Z and P registers to userspace using either the + * SVE or SME vector length, depending on which features the guest has + * and if the guest is in streaming mode. + */ +static unsigned int vcpu_sve_cur_vq(struct kvm_vcpu *vcpu) +{ + unsigned int vq = 0; + + if (vcpu_has_sve(vcpu)) + vq = vcpu_sve_max_vq(vcpu); + + if (vcpu_in_streaming_mode(vcpu)) + vq = vcpu_sme_max_vq(vcpu); + + return vq; +} + /* * Validate SVE register ID and get sanitised bounds for user/kernel SVE * register copy @@ -466,7 +490,7 @@ static int sve_reg_to_region(struct vec_state_reg_region *region, if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT;
- vq = vcpu_sve_max_vq(vcpu); + vq = vcpu_sve_cur_vq(vcpu);
reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; @@ -476,7 +500,7 @@ static int sve_reg_to_region(struct vec_state_reg_region *region, if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) return -ENOENT;
- vq = vcpu_sve_max_vq(vcpu); + vq = vcpu_sve_cur_vq(vcpu);
reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) - SVE_SIG_REGS_OFFSET; @@ -515,6 +539,9 @@ static int get_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
+ if (!vcpu_has_sve_regs(vcpu)) + return -EBUSY; + if (copy_to_user(uptr, vcpu->arch.sve_state + region.koffset, region.klen) || clear_user(uptr + region.klen, region.upad)) @@ -541,6 +568,9 @@ static int set_sve_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) if (!kvm_arm_vcpu_vec_finalized(vcpu)) return -EPERM;
+ if (!vcpu_has_sve_regs(vcpu)) + return -EBUSY; + if (copy_from_user(vcpu->arch.sve_state + region.koffset, uptr, region.klen)) return -EFAULT;
SME introduces two new registers, the ZA matrix register and the ZT0 LUT register. Both of these registers are only accessible when PSTATE.ZA is set and ZT0 is only present if SME2 is enabled for the guest. Provide support for configuring these from VMMs.
The ZA matrix is a single SVL*SVL register which is available when PSTATE.ZA is set. We follow the pattern established by the architecture itself and expose this to userspace as a series of horizontal SVE vectors with the streaming mode vector length, using the format already established for the SVE vectors themselves.
ZT0 is a single register with a refreshingly fixed size of 512 bits which, like ZA, is accessible only when PSTATE.ZA is set. Add support for it to the userspace API; as with ZA we allow the register to be read or written regardless of the state of PSTATE.ZA in order to simplify userspace usage. The value will be reset to 0 whenever PSTATE.ZA changes from 0 to 1; userspace can read stale values but these are not observable by the guest without manipulation of PSTATE.ZA by userspace.
While there is currently only one ZT register, the naming as ZT0 and the instruction encoding clearly leave room for future extensions adding more ZT registers. The encoding used here can readily support such an extension if one is introduced.
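For instance, once the vcpu is finalised a VMM could save ZA by looping over the horizontal vectors, reading each ZA.H[n] as a fixed 2048 bit block of which only the leading SVL bytes hold live data. A sketch only: svl_bytes (the configured streaming vector length in bytes) and za_buf (a buffer of svl_bytes * 256 bytes) are illustrative, and ZT0 is transferred in the same way using an ID built from KVM_REG_ARM64_SME_ZT_BASE:

	unsigned int n;

	for (n = 0; n < svl_bytes; n++) {
		struct kvm_one_reg reg = {
			.id   = KVM_REG_ARM64_SME_ZAHREG(n, 0),
			.addr = (__u64)(unsigned long)(za_buf + (size_t)n * 256),
		};

		/*
		 * Each slice transfers a full 2048 bit (256 byte) block,
		 * only the first svl_bytes of which are live ZA data.
		 */
		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
			err(1, "KVM_GET_ONE_REG(ZA.H[%u])", n);
	}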
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 21 +++++++ arch/arm64/include/uapi/asm/kvm.h | 17 ++++++ arch/arm64/kvm/guest.c | 114 +++++++++++++++++++++++++++++++++++++- 3 files changed, 150 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 67ac3e37fde31901513c2e692693d3ba1967b0bd..0b00b1e42b668367b89d7ffee2839fc53c14e753 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1020,6 +1020,24 @@ struct kvm_vcpu_arch { __size_ret; \ })
+#define vcpu_sme_state(vcpu) (kern_hyp_va((vcpu)->arch.sme_state)) + +#define vcpu_sme_state_size(vcpu) ({ \ + size_t __size_ret; \ + unsigned int __vcpu_vq; \ + \ + if (WARN_ON(!sve_vl_valid((vcpu)->arch.max_vl[ARM64_VEC_SME]))) { \ + __size_ret = 0; \ + } else { \ + __vcpu_vq = vcpu_sme_max_vq(vcpu); \ + __size_ret = ZA_SIG_REGS_SIZE(__vcpu_vq); \ + if (system_supports_sme2()) \ + __size_ret += ZT_SIG_REG_SIZE; \ + } \ + \ + __size_ret; \ +}) + /* * Only use __vcpu_sys_reg/ctxt_sys_reg if you know you want the * memory backed version of a register, and not the one most recently @@ -1603,4 +1621,7 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val); #define vcpu_in_streaming_mode(vcpu) \ (__vcpu_sys_reg(vcpu, SVCR) & SVCR_SM_MASK)
+#define vcpu_za_enabled(vcpu) \ + (__vcpu_sys_reg(vcpu, SVCR) & SVCR_ZA_MASK) + #endif /* __ARM64_KVM_HOST_H__ */ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index ff7c4bcee542befae6961fae9eec2eb9bc8e2eb3..1f75c9938f313e70a82d9792cb3c89caec2076a1 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -356,6 +356,23 @@ struct kvm_arm_counter_offset { /* SME registers */ #define KVM_REG_ARM64_SME (0x17 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_ARM64_SME_VQ_MIN __SVE_VQ_MIN +#define KVM_ARM64_SME_VQ_MAX __SVE_VQ_MAX + +/* ZA and ZTn occupy blocks at the following offsets within this range: */ +#define KVM_REG_ARM64_SME_ZA_BASE 0 +#define KVM_REG_ARM64_SME_ZT_BASE 0x600 + +#define KVM_ARM64_SME_MAX_ZAHREG (__SVE_VQ_BYTES * KVM_ARM64_SME_VQ_MAX) + +#define KVM_REG_ARM64_SME_ZAHREG(n, i) \ + (KVM_REG_ARM64 | KVM_REG_ARM64_SME | KVM_REG_ARM64_SME_ZA_BASE | \ + KVM_REG_SIZE_U2048 | \ + (((n) & (KVM_ARM64_SME_MAX_ZAHREG - 1)) << 5) | \ + ((i) & (KVM_ARM64_SVE_MAX_SLICES - 1))) + +#define KVM_REG_ARM64_SME_ZTREG_SIZE (512 / 8) + /* Vector lengths pseudo-register: */ #define KVM_REG_ARM64_SME_VLS (KVM_REG_ARM64 | KVM_REG_ARM64_SME | \ KVM_REG_SIZE_U512 | 0xffff) diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 87a0a027e1e7e2487eff703dee2958137c9640ef..55459d1e7586291f8384e8073b5de2d6a8f5f2d9 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -600,23 +600,133 @@ static int set_sme_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) return set_vec_vls(ARM64_VEC_SME, vcpu, reg); }
+/* + * Validate SME register ID and get sanitised bounds for user/kernel SME + * register copy + */ +static int sme_reg_to_region(struct vec_state_reg_region *region, + struct kvm_vcpu *vcpu, + const struct kvm_one_reg *reg) +{ + /* reg ID ranges for ZA.H[n] registers */ + unsigned int vq = vcpu_sme_max_vq(vcpu) - 1; + const u64 za_h_max = vq * __SVE_VQ_BYTES; + const u64 zah_id_min = KVM_REG_ARM64_SME_ZAHREG(0, 0); + const u64 zah_id_max = KVM_REG_ARM64_SME_ZAHREG(za_h_max - 1, + SVE_NUM_SLICES - 1); + unsigned int reg_num; + + unsigned int reqoffset, reqlen; /* User-requested offset and length */ + unsigned int maxlen; /* Maximum permitted length */ + + size_t sme_state_size; + + reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT; + + if (reg->id >= zah_id_min && reg->id <= zah_id_max) { + if (!vcpu_has_sme(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0) + return -ENOENT; + + /* ZA is exposed as SVE vectors ZA.H[n] */ + reqoffset = ZA_SIG_ZAV_OFFSET(vq, reg_num) - + ZA_SIG_REGS_OFFSET; + reqlen = KVM_SVE_ZREG_SIZE; + maxlen = SVE_SIG_ZREG_SIZE(vq); + } else if (reg->id == KVM_REG_ARM64_SME_ZT_BASE) { + /* ZT0 is a single fixed size 512 bit register */ + if (!kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, SME, SME2) || + (reg->id & SVE_REG_SLICE_MASK) > 0 || + reg_num > 0) + return -ENOENT; + + /* ZT0 is stored after ZA */ + reqoffset = ZA_SIG_REGS_SIZE(vcpu_sme_max_vq(vcpu)); + reqlen = KVM_REG_ARM64_SME_ZTREG_SIZE; + maxlen = KVM_REG_ARM64_SME_ZTREG_SIZE; + } else { + return -EINVAL; + } + + sme_state_size = vcpu_sme_state_size(vcpu); + if (WARN_ON(!sme_state_size)) + return -EINVAL; + + region->koffset = array_index_nospec(reqoffset, sme_state_size); + region->klen = min(maxlen, reqlen); + region->upad = reqlen - region->klen; + + return 0; +} + +/* + * ZA is exposed as an array of horizontal vectors with the same + * format as SVE, mirroring the architecture's LDR ZA[Wv, offs], [Xn] + * instruction. + */ + static int get_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + int ret; + struct vec_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id == KVM_REG_ARM64_SME_VLS) return get_sme_vls(vcpu, reg);
- return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret = sme_reg_to_region(&region, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + /* + * None of the SME specific registers are accessible unless + * PSTATE.ZA is set. + */ + if (!vcpu_za_enabled(vcpu)) + return -EINVAL; + + if (copy_to_user(uptr, vcpu->arch.sme_state + region.koffset, + region.klen) || + clear_user(uptr + region.klen, region.upad)) + return -EFAULT; + + return 0; }
static int set_sme_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) { + int ret; + struct vec_state_reg_region region; + char __user *uptr = (char __user *)reg->addr; + /* Handle the KVM_REG_ARM64_SME_VLS pseudo-reg as a special case: */ if (reg->id == KVM_REG_ARM64_SME_VLS) return set_sme_vls(vcpu, reg);
- return -EINVAL; + /* Try to interpret reg ID as an architectural SME register... */ + ret = sme_reg_to_region(&region, vcpu, reg); + if (ret) + return ret; + + if (!kvm_arm_vcpu_vec_finalized(vcpu)) + return -EPERM; + + /* + * None of the SME specific registers are accessible unless + * PSTATE.ZA is set. + */ + if (!vcpu_za_enabled(vcpu)) + return -EINVAL; + + if (copy_from_user(vcpu->arch.sme_state + region.koffset, uptr, + region.klen)) + return -EFAULT; + + return 0; } + int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) { return -EINVAL;
If the guest has SME state we need to context switch that state; provide support for this for normal (non-protected) guests.
SME has three sets of registers: ZA, ZT (only present for SME2) and the streaming mode SVE registers which replace the standard floating point registers when streaming mode is active. The first two are fairly straightforward: they are accessible only when PSTATE.ZA is set and we can reuse the assembly from the host to save and load them from a single contiguous buffer. When PSTATE.ZA is not set these registers are inaccessible; when the guest enables PSTATE.ZA all of their bits are set to 0 by that change so nothing is required on restore.
Streaming mode is slightly more complicated: when enabled via PSTATE.SM it provides a version of the SVE registers using the SME vector length and may optionally omit the FFR register. SME may also be present without SVE. The register state is stored in sve_state as for non-streaming SVE; we make an initial selection of registers to update based on the guest's SVE support and then override this when loading SVCR if streaming mode is enabled.
Since, in order to avoid duplication with SME, we now restore the register state outside of the SVE specific restore function, the restore of the effective VL for nested guests needs to move to a separate restore function which runs after the floating point register state has been loaded, along with the similar handling required for SME.
The selection of which vector length to use is handled by vcpu_sve_pffr().
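Presumably that ends up looking something along these lines (a sketch only, using a hypothetical vcpu_cur_vl() helper rather than whatever the series actually defines; the point is simply that the FFR offset follows the active vector length):

	#define vcpu_cur_vl(vcpu)					\
		(vcpu_in_streaming_mode(vcpu) ?				\
		 (vcpu)->arch.max_vl[ARM64_VEC_SME] :			\
		 (vcpu)->arch.max_vl[ARM64_VEC_SVE])

	#define vcpu_sve_pffr(vcpu) (kern_hyp_va((vcpu)->arch.sve_state) + \
				     sve_ffr_offset(vcpu_cur_vl(vcpu)))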
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/fpsimd.h | 9 +++ arch/arm64/include/asm/kvm_emulate.h | 12 +-- arch/arm64/include/asm/kvm_host.h | 3 + arch/arm64/kvm/fpsimd.c | 16 ++-- arch/arm64/kvm/hyp/include/hyp/switch.h | 128 +++++++++++++++++++++++++++++--- 5 files changed, 147 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 144cc805bfea112341b89c9c6028cf4b2a201c6c..f517b371e0132271a9bd693349a828e2b824ff07 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -442,6 +442,15 @@ static inline size_t sme_state_size(struct task_struct const *task) write_sysreg_s(__new, (reg)); \ } while (0)
+#define sme_cond_update_smcr_vq(val, reg) \ + do { \ + u64 __smcr = read_sysreg_s((reg)); \ + u64 __new = __smcr & ~SMCR_ELx_LEN_MASK; \ + __new |= (val) & SMCR_ELx_LEN_MASK; \ + if (__smcr != __new) \ + write_sysreg_s(__new, (reg)); \ + } while (0) + /* For use by EFI runtime services calls only */ extern void __efi_fpsimd_begin(void); extern void __efi_fpsimd_end(void); diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 78ec1ef2cfe82ac31b19cc26ece0d94cea19b46c..19a0faae5ca02a3c8a21ffd2f8da57d56eec0c65 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -590,12 +590,6 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu) do { \ BUILD_BUG_ON((set) & CPTR_VHE_EL2_RES0); \ BUILD_BUG_ON((clr) & CPACR_EL1_E0POE); \ - __build_check_all_or_none((clr), CPACR_EL1_FPEN); \ - __build_check_all_or_none((set), CPACR_EL1_FPEN); \ - __build_check_all_or_none((clr), CPACR_EL1_ZEN); \ - __build_check_all_or_none((set), CPACR_EL1_ZEN); \ - __build_check_all_or_none((clr), CPACR_EL1_SMEN); \ - __build_check_all_or_none((set), CPACR_EL1_SMEN); \ \ if (has_vhe() || has_hvhe()) \ sysreg_clear_set(cpacr_el1, clr, set); \ @@ -649,4 +643,10 @@ static inline bool guest_hyp_sve_traps_enabled(const struct kvm_vcpu *vcpu) { return __guest_hyp_cptr_xen_trap_enabled(vcpu, ZEN); } + +static inline bool guest_hyp_sme_traps_enabled(const struct kvm_vcpu *vcpu) +{ + return __guest_hyp_cptr_xen_trap_enabled(vcpu, SMEN); +} + #endif /* __ARM64_KVM_EMULATE_H__ */ diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 0b00b1e42b668367b89d7ffee2839fc53c14e753..bbab22688a465dc0b5a4463cfaf061612b550fbd 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -1006,6 +1006,9 @@ struct kvm_vcpu_arch { #define vcpu_sve_zcr_elx(vcpu) \ (unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1)
+#define vcpu_sme_smcr_elx(vcpu) \ + (unlikely(is_hyp_ctxt(vcpu)) ? SMCR_EL2 : SMCR_EL1) + #define vcpu_sve_state_size(vcpu) ({ \ size_t __size_ret; \ unsigned int __vcpu_vq; \ diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 656e02312a5991419bfc3c22193b6b9b4354be64..ed3a17becc489c06c9e2daa3122ec26f1e240e7c 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -100,19 +100,25 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) WARN_ON_ONCE(!irqs_disabled());
if (guest_owns_fp_regs()) { - /* - * Currently we do not support SME guests so SVCR is - * always 0 and we just need a variable to point to. - */ fp_state.st = &vcpu->arch.ctxt.fp_regs; fp_state.sve_state = vcpu->arch.sve_state; fp_state.sve_vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; - fp_state.sme_state = NULL; + fp_state.sme_vl = vcpu->arch.max_vl[ARM64_VEC_SME]; + fp_state.sme_state = vcpu->arch.sme_state; fp_state.svcr = &__vcpu_sys_reg(vcpu, SVCR); fp_state.fpmr = &__vcpu_sys_reg(vcpu, FPMR); fp_state.fp_type = &vcpu->arch.fp_type; + fp_state.sme_features = 0; + if (kvm_has_fa64(vcpu->kvm)) + fp_state.sme_features |= SMCR_ELx_FA64; + if (kvm_has_sme2(vcpu->kvm)) + fp_state.sme_features |= SMCR_ELx_EZT0;
+ /* + * For SME only hosts fpsimd_save() will override the + * state selection if we are in streaming mode. + */ if (vcpu_has_sve(vcpu)) fp_state.to_save = FP_STATE_SVE; else diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h index 2c2e22d86a2353531efa49187fe80acb01d31d20..48f6188040fb6a59441bcb22937875de69d7ae34 100644 --- a/arch/arm64/kvm/hyp/include/hyp/switch.h +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h @@ -343,6 +343,29 @@ static inline bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code) return true; }
+static inline void __hyp_sme_restore_guest(struct kvm_vcpu *vcpu, + bool *restore_sve, + bool *restore_ffr) +{ + bool has_fa64 = vcpu_has_fa64(vcpu); + bool has_sme2 = vcpu_has_sme2(vcpu); + + sme_cond_update_smcr(vcpu_sme_max_vq(vcpu) - 1, has_fa64, has_sme2, + SYS_SMCR_EL2); + + write_sysreg_el1(__vcpu_sys_reg(vcpu, SMCR_EL1), SYS_SMCR); + + write_sysreg_s(__vcpu_sys_reg(vcpu, SVCR), SYS_SVCR); + + if (vcpu_in_streaming_mode(vcpu)) { + *restore_sve = true; + *restore_ffr = has_fa64; + } + + if (vcpu_za_enabled(vcpu)) + __sme_restore_state(vcpu_sme_state(vcpu), has_sme2); +} + static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) { /* @@ -350,19 +373,25 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu) * vCPU. Start off with the max VL so we can load the SVE state. */ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2); - __sve_restore_state(vcpu_sve_pffr(vcpu), - &vcpu->arch.ctxt.fp_regs.fpsr, - true);
+ write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR); +} + +static inline void __hyp_nv_restore_guest_vls(struct kvm_vcpu *vcpu) +{ /* * The effective VL for a VM could differ from the max VL when running a * nested guest, as the guest hypervisor could select a smaller VL. Slap * that into hardware before wrapping up. */ - if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) + if (!(vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))) + return; + + if (vcpu_has_sve(vcpu)) sve_cond_update_zcr_vq(__vcpu_sys_reg(vcpu, ZCR_EL2), SYS_ZCR_EL2);
- write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR); + if (vcpu_has_sme(vcpu)) + sme_cond_update_smcr_vq(__vcpu_sys_reg(vcpu, SMCR_EL2), SYS_SMCR_EL2); }
static inline void __hyp_sve_save_host(void) @@ -378,6 +407,7 @@ static inline void __hyp_sve_save_host(void)
static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu) { + u64 smcr_el1, smcr_el2; u64 zcr_el1, zcr_el2;
if (!guest_owns_fp_regs()) @@ -395,10 +425,29 @@ static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu) zcr_el1 = __vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)); write_sysreg_el1(zcr_el1, SYS_ZCR); } + + if (vcpu_has_sme(vcpu)) { + /* A guest hypervisor may restrict the effective max VL. */ + if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) + smcr_el2 = __vcpu_sys_reg(vcpu, SMCR_EL2); + else + smcr_el2 = vcpu_sme_max_vq(vcpu) - 1; + + if (vcpu_has_fa64(vcpu)) + smcr_el2 |= SMCR_ELx_FA64; + if (vcpu_has_sme2(vcpu)) + smcr_el2 |= SMCR_ELx_EZT0; + + write_sysreg_el2(smcr_el2, SYS_SMCR); + + smcr_el1 = __vcpu_sys_reg(vcpu, vcpu_sme_smcr_elx(vcpu)); + write_sysreg_el1(smcr_el1, SYS_SMCR); + } }
static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) { + u64 smcr_el1, smcr_el2; u64 zcr_el1, zcr_el2;
if (!guest_owns_fp_regs()) @@ -433,6 +482,36 @@ static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu) write_sysreg_el1(zcr_el1, SYS_ZCR); } } + + if (vcpu_has_sme(vcpu)) { + smcr_el1 = read_sysreg_el1(SYS_SMCR); + __vcpu_sys_reg(vcpu, vcpu_sme_smcr_elx(vcpu)) = smcr_el1; + + /* + * The guest's state is always saved using the guest's max VL. + * Ensure that the host has the guest's max VL active such that + * the host can save the guest's state lazily, but don't + * artificially restrict the host to the guest's max VL. + */ + if (has_vhe()) { + smcr_el2 = vcpu_sme_max_vq(vcpu) - 1; + if (vcpu_has_fa64(vcpu)) + smcr_el2 |= SMCR_ELx_FA64; + if (vcpu_has_sme2(vcpu)) + smcr_el2 |= SMCR_ELx_EZT0; + write_sysreg_el2(smcr_el2, SYS_SMCR); + } else { + smcr_el2 = sve_vq_from_vl(kvm_host_max_vl[ARM64_VEC_SME]) - 1; + if (system_supports_fa64()) + smcr_el2 |= SMCR_ELx_FA64; + if (system_supports_sme2()) + smcr_el2 |= SMCR_ELx_EZT0; + write_sysreg_el2(smcr_el2, SYS_SMCR); + + smcr_el1 = vcpu_sve_max_vq(vcpu) - 1; + write_sysreg_el1(smcr_el1, SYS_SMCR); + } + } }
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu) @@ -466,14 +545,18 @@ static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu) */ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) { - bool sve_guest; - u8 esr_ec; + u64 cpacr; + bool restore_sve, restore_ffr; + bool sve_guest, sme_guest; + u8 esr_ec, esr_iss;
if (!system_supports_fpsimd()) return false;
sve_guest = vcpu_has_sve(vcpu); + sme_guest = vcpu_has_sme(vcpu); esr_ec = kvm_vcpu_trap_get_class(vcpu); + esr_iss = ESR_ELx_ISS(kvm_vcpu_get_esr(vcpu));
/* Only handle traps the vCPU can support here: */ switch (esr_ec) { @@ -492,6 +575,15 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) if (guest_hyp_sve_traps_enabled(vcpu)) return false; break; + case ESR_ELx_EC_SME: + if (!sme_guest) + return false; + if (guest_hyp_sme_traps_enabled(vcpu)) + return false; + if (!kvm_has_sme2(vcpu->kvm) && + (esr_iss == ESR_ELx_SME_ISS_ZT_DISABLED)) + return false; + break; default: return false; } @@ -499,10 +591,12 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) /* Valid trap. Switch the context: */
/* First disable enough traps to allow us to update the registers */ + cpacr = CPACR_EL1_FPEN; if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve())) - cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN); - else - cpacr_clear_set(0, CPACR_EL1_FPEN); + cpacr |= CPACR_EL1_ZEN; + if (sme_guest) + cpacr |= CPACR_EL1_SMEN; + cpacr_clear_set(0, cpacr); isb();
/* Write out the host state if it's in the registers */ @@ -510,8 +604,20 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) kvm_hyp_save_fpsimd_host(vcpu);
/* Restore the guest state */ + + /* These may be overridden for a SME guest */ + restore_sve = sve_guest; + restore_ffr = sve_guest; + if (sve_guest) __hyp_sve_restore_guest(vcpu); + if (sme_guest) + __hyp_sme_restore_guest(vcpu, &restore_sve, &restore_ffr); + + if (restore_sve) + __sve_restore_state(vcpu_sve_pffr(vcpu), + &vcpu->arch.ctxt.fp_regs.fpsr, + restore_ffr); else __fpsimd_restore_state(&vcpu->arch.ctxt.fp_regs);
@@ -522,6 +628,8 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code) if (!(read_sysreg(hcr_el2) & HCR_RW)) write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2);
+ __hyp_nv_restore_guest_vls(vcpu); + *host_data_ptr(fp_owner) = FP_STATE_GUEST_OWNED;
return true;
The access control for SME follows the same structure as for the base FP and SVE extensions, with control via CPACR_ELx.SMEN and CPTR_EL2.TSM, mirroring the equivalent FPSIMD and SVE controls in those registers. Add handling for these controls and exceptions, mirroring the existing handling for FPSIMD and SVE.
When the hardware is in streaming mode, guest operations that are invalid in streaming mode will generate SME exceptions. Since these exceptions may be routed to EL1 with no opportunity for the hypervisor to intercept them, we already have code which ensures that we exit streaming mode before running the guest. This ensures that guests do not receive unexpected SME exceptions.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/kvm/handle_exit.c | 14 ++++++++++++++ arch/arm64/kvm/hyp/nvhe/switch.c | 11 ++++++----- arch/arm64/kvm/hyp/vhe/switch.c | 21 ++++++++++++++++----- 3 files changed, 36 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c index 512d152233ff20eb6c65abbf731b4bcb543a7d07..036e722a8f3ce03f0bd756a84a6fbffeae9bf359 100644 --- a/arch/arm64/kvm/handle_exit.c +++ b/arch/arm64/kvm/handle_exit.c @@ -227,6 +227,19 @@ static int handle_sve(struct kvm_vcpu *vcpu) return 1; }
+/* + * Guest access to SME registers should be routed to this handler only + * when the system doesn't support SME. + */ +static int handle_sme(struct kvm_vcpu *vcpu) +{ + if (guest_hyp_sme_traps_enabled(vcpu)) + return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu)); + + kvm_inject_undefined(vcpu); + return 1; +} + /* * Two possibilities to handle a trapping ptrauth instruction: * @@ -310,6 +323,7 @@ static exit_handle_fn arm_exit_handlers[] = { [ESR_ELx_EC_SVC64] = handle_svc, [ESR_ELx_EC_SYS64] = kvm_handle_sys_reg, [ESR_ELx_EC_SVE] = handle_sve, + [ESR_ELx_EC_SME] = handle_sme, [ESR_ELx_EC_ERET] = kvm_handle_eret, [ESR_ELx_EC_IABT_LOW] = kvm_handle_guest_abort, [ESR_ELx_EC_DABT_LOW] = kvm_handle_guest_abort, diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c index eaeda9c8a1aa62f8f7eed36dbedc74513a453e4e..1f1883bc6f0cc270dd1ad38f96b9a0048e238e72 100644 --- a/arch/arm64/kvm/hyp/nvhe/switch.c +++ b/arch/arm64/kvm/hyp/nvhe/switch.c @@ -49,17 +49,16 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu) val |= CPACR_EL1_FPEN; if (vcpu_has_sve(vcpu)) val |= CPACR_EL1_ZEN; + if (vcpu_has_sme(vcpu)) + val |= CPACR_EL1_SMEN; }
write_sysreg(val, cpacr_el1); } else { val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
- /* - * Always trap SME since it's not supported in KVM. - * TSM is RES1 if SME isn't implemented. - */ - val |= CPTR_EL2_TSM; + if (!vcpu_has_sme(vcpu) || !guest_owns_fp_regs()) + val |= CPTR_EL2_TSM;
if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) val |= CPTR_EL2_TZ; @@ -222,6 +221,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg, [ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd, + [ESR_ELx_EC_SME] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, @@ -233,6 +233,7 @@ static const exit_handler_fn pvm_exit_handlers[] = { [0 ... ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_SYS64] = kvm_handle_pvm_sys64, [ESR_ELx_EC_SVE] = kvm_handle_pvm_restricted, + [ESR_ELx_EC_SME] = kvm_handle_pvm_restricted, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low, [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c index 647737d6e8d0b5f41b2a8d25a06265e4703126b3..44e3f0577ceea2a1574038c97fd769fef30d7098 100644 --- a/arch/arm64/kvm/hyp/vhe/switch.c +++ b/arch/arm64/kvm/hyp/vhe/switch.c @@ -83,6 +83,8 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu) val |= CPACR_EL1_FPEN; if (vcpu_has_sve(vcpu)) val |= CPACR_EL1_ZEN; + if (vcpu_has_sme(vcpu)) + val |= CPACR_EL1_SMEN; } else { __activate_traps_fpsimd32(vcpu); } @@ -126,6 +128,8 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu) val &= ~CPACR_EL1_FPEN; if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0))) val &= ~CPACR_EL1_ZEN; + if (!(SYS_FIELD_GET(CPACR_EL1, SMEN, cptr) & BIT(0))) + val &= ~CPACR_EL1_SMEN;
if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP)) val |= cptr & CPACR_EL1_E0POE; @@ -486,22 +490,28 @@ static bool kvm_hyp_handle_cpacr_el1(struct kvm_vcpu *vcpu, u64 *exit_code) return true; }
-static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code) +static bool kvm_hyp_handle_vec_cr_el2(struct kvm_vcpu *vcpu, u64 *exit_code) { u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
if (!vcpu_has_nv(vcpu)) return false;
- if (sysreg != SYS_ZCR_EL2) + switch (sysreg) { + case SYS_ZCR_EL2: + case SYS_SMCR_EL2: + break; + default: return false; + }
if (guest_owns_fp_regs()) return false;
/* - * ZCR_EL2 traps are handled in the slow path, with the expectation - * that the guest's FP context has already been loaded onto the CPU. + * ZCR_EL2 and SMCR_EL2 traps are handled in the slow path, + * with the expectation that the guest's FP context has + * already been loaded onto the CPU. * * Load the guest's FP context and unconditionally forward to the * slow path for handling (i.e. return false). @@ -521,7 +531,7 @@ static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code) if (kvm_hyp_handle_cpacr_el1(vcpu, exit_code)) return true;
- if (kvm_hyp_handle_zcr_el2(vcpu, exit_code)) + if (kvm_hyp_handle_vec_cr_el2(vcpu, exit_code)) return true;
return kvm_hyp_handle_sysreg(vcpu, exit_code); @@ -531,6 +541,7 @@ static const exit_handler_fn hyp_exit_handlers[] = { [0 ... ESR_ELx_EC_MAX] = NULL, [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32, [ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg_vhe, + [ESR_ELx_EC_SME] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd, [ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low,
With support for context switching SME state in place, allow access to SME in nested guests.
The SME floating point state is handled along with all the other floating point state. SME specific floating point exceptions are directed into the same handlers as other floating point exceptions, with NV specific handling for the vector lengths already in place.
TPIDR2_EL0 is context switched along with the other TPIDRs as part of the main guest register context switch.
SME priority support is currently masked from all guests including nested ones.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/kvm/nested.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c index 0c9387d2f50708565b5aac1fc0f86fefffd94ea1..58324806c8d588b0a77c835bafc89236a1ad52d1 100644 --- a/arch/arm64/kvm/nested.c +++ b/arch/arm64/kvm/nested.c @@ -840,10 +840,11 @@ static void limit_nv_id_regs(struct kvm *kvm) val |= FIELD_PREP(NV_FTR(PFR0, EL3), 0b0001); kvm_set_vm_id_reg(kvm, SYS_ID_AA64PFR0_EL1, val);
- /* Only support BTI, SSBS, CSV2_frac */ + /* Only support BTI, SSBS, CSV2_frac and SME */ val = kvm_read_vm_id_reg(kvm, SYS_ID_AA64PFR1_EL1); val &= (NV_FTR(PFR1, BT) | NV_FTR(PFR1, SSBS) | + NV_FTR(PFR1, SME) | NV_FTR(PFR1, CSV2_frac)); kvm_set_vm_id_reg(kvm, SYS_ID_AA64PFR1_EL1, val);
Since SME requires configuration of a vector length in order to know the size of both the streaming mode SVE state and the ZA array, we implement a capability for it and require that it be enabled and finalised before the SME specific state can be accessed, similarly to SVE.
Due to the overlap with sizing the SVE state we finalise both SVE and SME with a single finalisation, preventing any further changes to the SVE and SME configuration once KVM_ARM_VCPU_VEC (an alias for _VCPU_SVE) has been finalised. This is not especially elegant, but it ensures that we never have a state where one of SVE or SME is finalised and the other is not, avoiding needless complexity.
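For illustration, the resulting VMM-side flow looks roughly like the sketch below. This is only a sketch: error reporting is trimmed, the optional write to the KVM_REG_ARM64_SME_VLS pseudo register is omitted, and KVM_ARM_VCPU_VEC is the existing alias for KVM_ARM_VCPU_SVE.

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int vcpu_enable_sme(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;
	int feature = KVM_ARM_VCPU_VEC;	/* alias for KVM_ARM_VCPU_SVE */

	if (!ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_SME))
		return -1;		/* SME not supported by this host/KVM */

	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init))
		return -1;
	/* KVM_ARM_VCPU_SVE can be set here too if SVE is also wanted. */
	init.features[0] |= 1U << KVM_ARM_VCPU_SME;
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init))
		return -1;

	/* Optionally adjust the vector lengths, then freeze the configuration: */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
}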
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 5 +- arch/arm64/include/uapi/asm/kvm.h | 1 + arch/arm64/kvm/arm.c | 10 ++++ arch/arm64/kvm/reset.c | 114 ++++++++++++++++++++++++++++++++------ include/uapi/linux/kvm.h | 1 + 5 files changed, 111 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index bbab22688a465dc0b5a4463cfaf061612b550fbd..ab7f0cddfcd2b57b742fd5193ebe44cbb8ca8677 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -79,6 +79,7 @@ extern unsigned int __ro_after_init kvm_host_max_vl[ARM64_VEC_MAX]; DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
int __init kvm_arm_init_sve(void); +int __init kvm_arm_init_sme(void);
u32 __attribute_const__ kvm_target_cpu(void); void kvm_reset_vcpu(struct kvm_vcpu *vcpu); @@ -1013,10 +1014,10 @@ struct kvm_vcpu_arch { size_t __size_ret; \ unsigned int __vcpu_vq; \ \ - if (WARN_ON(!sve_vl_valid((vcpu)->arch.max_vl[ARM64_VEC_SVE]))) { \ + if (WARN_ON(!sve_vl_valid(vcpu_max_vl(vcpu)))) { \ __size_ret = 0; \ } else { \ - __vcpu_vq = vcpu_sve_max_vq(vcpu); \ + __vcpu_vq = sve_vl_from_vq(vcpu_max_vl(vcpu)); \ __size_ret = SVE_SIG_REGS_SIZE(__vcpu_vq); \ } \ \ diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 1f75c9938f313e70a82d9792cb3c89caec2076a1..80a7d3119ec8421ca5c7050ed996fc9ba09ad7cf 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -105,6 +105,7 @@ struct kvm_regs { #define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */ #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */ #define KVM_ARM_VCPU_HAS_EL2 7 /* Support nested virtualization */ +#define KVM_ARM_VCPU_SME 8 /* enable SME for this CPU */
/* * An alias for _SVE since we finalize VL configuration for both SVE and SME diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index b8e55a441282f57cb2d4bc55a43b41d5b774dfdd..5f44fe80af67369764d979e3cacd5bedacc5f7cb 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -377,6 +377,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_SVE: r = system_supports_sve(); break; + case KVM_CAP_ARM_SME: + r = system_supports_sme(); + break; case KVM_CAP_ARM_PTRAUTH_ADDRESS: case KVM_CAP_ARM_PTRAUTH_GENERIC: r = kvm_has_full_ptr_auth(); @@ -1394,6 +1397,9 @@ static unsigned long system_supported_vcpu_features(void) if (!system_supports_sve()) clear_bit(KVM_ARM_VCPU_SVE, &features);
+ if (!system_supports_sme()) + clear_bit(KVM_ARM_VCPU_SME, &features); + if (!kvm_has_full_ptr_auth()) { clear_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features); clear_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features); @@ -2784,6 +2790,10 @@ static __init int kvm_arm_init(void) if (err) return err;
+ err = kvm_arm_init_sme(); + if (err) + return err; + err = kvm_arm_vmid_alloc_init(); if (err) { kvm_err("Failed to initialize VMID allocator.\n"); diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index 3cb91dc6dc3dc5cc484900dbd9f4cdfedb3e2b4a..78b5aa7976dbbfec35d83f5e373ef87636c2e3e4 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -76,6 +76,34 @@ int __init kvm_arm_init_sve(void) return 0; }
+int __init kvm_arm_init_sme(void) +{ + if (system_supports_sme()) { + kvm_max_vl[ARM64_VEC_SME] = sme_max_virtualisable_vl(); + kvm_host_max_vl[ARM64_VEC_SME] = sme_max_vl(); + kvm_nvhe_sym(kvm_host_max_vl[ARM64_VEC_SME]) = kvm_host_max_vl[ARM64_VEC_SME]; + + /* + * The get_sve_reg()/set_sve_reg() ioctl interface will need + * to be extended with multiple register slice support in + * order to support vector lengths greater than + * VL_ARCH_MAX: + */ + if (WARN_ON(kvm_max_vl[ARM64_VEC_SME] > VL_ARCH_MAX)) + kvm_max_vl[ARM64_VEC_SME] = VL_ARCH_MAX; + + /* + * Don't even try to make use of vector lengths that + * aren't available on all CPUs, for now: + */ + if (kvm_max_vl[ARM64_VEC_SME] < sme_max_vl()) + pr_warn("KVM: SME vector length for guests limited to %u bytes\n", + kvm_max_vl[ARM64_VEC_SME]); + } + + return 0; +} + static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) { vcpu->arch.max_vl[ARM64_VEC_SVE] = kvm_max_vl[ARM64_VEC_SVE]; @@ -88,42 +116,84 @@ static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) set_bit(KVM_ARCH_FLAG_GUEST_HAS_SVE, &vcpu->kvm->arch.flags); }
+static void kvm_vcpu_enable_sme(struct kvm_vcpu *vcpu) +{ + vcpu->arch.max_vl[ARM64_VEC_SME] = kvm_max_vl[ARM64_VEC_SME]; + + /* + * Userspace can still customize the vector lengths by writing + * KVM_REG_ARM64_SME_VLS. Allocation is deferred until + * kvm_arm_vcpu_finalize(), which freezes the configuration. + */ + set_bit(KVM_ARCH_FLAG_GUEST_HAS_SME, &vcpu->kvm->arch.flags); +} + /* - * Finalize vcpu's maximum SVE vector length, allocating - * vcpu->arch.sve_state as necessary. + * Finalize vcpu's maximum vector lengths, allocating + * vcpu->arch.sve_state and vcpu->arch.sme_state as necessary. */ static int kvm_vcpu_finalize_vec(struct kvm_vcpu *vcpu) { - void *buf; + void *sve_state, *sme_state; unsigned int vl; - size_t reg_sz; int ret;
- vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; - /* * Responsibility for these properties is shared between * kvm_arm_init_sve(), kvm_vcpu_enable_sve() and * set_sve_vls(). Double-check here just to be sure: */ - if (WARN_ON(!sve_vl_valid(vl) || vl > sve_max_virtualisable_vl() || - vl > VL_ARCH_MAX)) - return -EIO; + if (vcpu_has_sve(vcpu)) { + vl = vcpu->arch.max_vl[ARM64_VEC_SVE]; + if (WARN_ON(!sve_vl_valid(vl) || + vl > sve_max_virtualisable_vl() || + vl > VL_ARCH_MAX)) + return -EIO; + }
- reg_sz = vcpu_sve_state_size(vcpu); - buf = kzalloc(reg_sz, GFP_KERNEL_ACCOUNT); - if (!buf) + /* Similarly for SME */ + if (vcpu_has_sme(vcpu)) { + vl = vcpu->arch.max_vl[ARM64_VEC_SME]; + if (WARN_ON(!sve_vl_valid(vl) || + vl > sme_max_virtualisable_vl() || + vl > VL_ARCH_MAX)) + return -EIO; + } + + sve_state = kzalloc(vcpu_sve_state_size(vcpu), GFP_KERNEL_ACCOUNT); + if (!sve_state) return -ENOMEM;
- ret = kvm_share_hyp(buf, buf + reg_sz); - if (ret) { - kfree(buf); - return ret; + ret = kvm_share_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); + if (ret) + goto err_sve_alloc; + + if (vcpu_has_sme(vcpu)) { + sme_state = kzalloc(vcpu_sme_state_size(vcpu), + GFP_KERNEL_ACCOUNT); + if (!sme_state) { + ret = -ENOMEM; + goto err_sve_map; + } + + ret = kvm_share_hyp(sme_state, + sme_state + vcpu_sme_state_size(vcpu)); + if (ret) + goto err_sve_map; + } else { + sme_state = NULL; } - - vcpu->arch.sve_state = buf; + + vcpu->arch.sve_state = sve_state; + vcpu->arch.sme_state = sme_state; vcpu_set_flag(vcpu, VCPU_VEC_FINALIZED); return 0; + +err_sve_map: + kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); +err_sve_alloc: + kfree(sve_state); + return ret; }
int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu, int feature) @@ -153,11 +223,15 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu) void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu) { void *sve_state = vcpu->arch.sve_state; + void *sme_state = vcpu->arch.sme_state;
kvm_unshare_hyp(vcpu, vcpu + 1); if (sve_state) kvm_unshare_hyp(sve_state, sve_state + vcpu_sve_state_size(vcpu)); kfree(sve_state); + if (sme_state) + kvm_unshare_hyp(sme_state, sme_state + vcpu_sme_state_size(vcpu)); + kfree(sme_state); kfree(vcpu->arch.ccsidr); }
@@ -165,6 +239,8 @@ static void kvm_vcpu_reset_vec(struct kvm_vcpu *vcpu) { if (vcpu_has_sve(vcpu)) memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); + if (vcpu_has_sme(vcpu)) + memset(vcpu->arch.sme_state, 0, vcpu_sme_state_size(vcpu)); }
/** @@ -207,6 +283,8 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu) if (!kvm_arm_vcpu_vec_finalized(vcpu)) { if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) kvm_vcpu_enable_sve(vcpu); + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SME)) + kvm_vcpu_enable_sme(vcpu); } else { kvm_vcpu_reset_vec(vcpu); } diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 45e6d8fca9b9969622dc15a6af924221782ac2e0..b1d9bd08e0c4643e5000dda5d2c5540e528fce7b 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -929,6 +929,7 @@ struct kvm_enable_cap { #define KVM_CAP_PRE_FAULT_MEMORY 236 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237 #define KVM_CAP_X86_GUEST_MODE 238 +#define KVM_CAP_ARM_SME 239
struct kvm_irq_routing_irqchip { __u32 irqchip;
SME adds a number of new system registers; update get-reg-list to check for them based on the visibility of SME.
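The new feat_id_regs entries below follow the existing convention: each listed register is only expected when the named ID register field, given by its shift and minimum value, is present - here ID_AA64PFR1_EL1.SME at bits [27:24] being at least 1. As a standalone illustration (not the selftest's actual helper), the check amounts to:

#include <stdbool.h>
#include <stdint.h>

/* Only expect the SME registers when ID_AA64PFR1_EL1.SME (bits [27:24]) >= 1 */
static bool sme_regs_expected(uint64_t id_aa64pfr1)
{
	return ((id_aa64pfr1 >> 24) & 0xf) >= 1;
}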
Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/kvm/arm64/get-reg-list.c | 32 +++++++++++++++++++++++- 1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/arm64/get-reg-list.c b/tools/testing/selftests/kvm/arm64/get-reg-list.c index d43fb3f49050ba3de950d19d56b45beefec9dbeb..3e9c19c4a0d658f349a7d476a90b877882815709 100644 --- a/tools/testing/selftests/kvm/arm64/get-reg-list.c +++ b/tools/testing/selftests/kvm/arm64/get-reg-list.c @@ -23,6 +23,18 @@ struct feature_id_reg { };
static struct feature_id_reg feat_id_regs[] = { + { + ARM64_SYS_REG(3, 0, 1, 2, 4), /* SMPRI_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 0, 1, 2, 6), /* SMCR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, { ARM64_SYS_REG(3, 0, 2, 0, 3), /* TCR2_EL1 */ ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ @@ -52,7 +64,25 @@ static struct feature_id_reg feat_id_regs[] = { ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ 16, 1 - } + }, + { + ARM64_SYS_REG(3, 1, 0, 0, 6), /* SMIDR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 3, 4, 2, 2), /* SVCR */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, + { + ARM64_SYS_REG(3, 3, 13, 0, 5), /* TPIDR2_EL0 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 24, + 1 + }, };
bool filter_reg(__u64 reg)
Add coverage of the SME ID registers to set_id_regs: ID_AA64PFR1_EL1.SME becomes writable and we add ID_AA64SMFR0_EL1 and its subfields.
Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/kvm/arm64/set_id_regs.c | 29 +++++++++++++++++++++++-- 1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c index 217541fe653601a82697134c8d773e901a205cfe..16a4cd84f45472451479c6a5502c67adb4d64490 100644 --- a/tools/testing/selftests/kvm/arm64/set_id_regs.c +++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c @@ -138,6 +138,7 @@ static const struct reg_ftr_bits ftr_id_aa64pfr0_el1[] = {
static const struct reg_ftr_bits ftr_id_aa64pfr1_el1[] = { REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, CSV2_frac, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, SME, 0), REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, SSBS, ID_AA64PFR1_EL1_SSBS_NI), REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR1_EL1, BT, 0), REG_FTR_END, @@ -182,6 +183,28 @@ static const struct reg_ftr_bits ftr_id_aa64mmfr2_el1[] = { REG_FTR_END, };
+static const struct reg_ftr_bits ftr_id_aa64smfr0_el1[] = { + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, FA64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, LUTv2, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SMEver, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I16I64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F64F64, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I16I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, B16B16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F16F16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F8F16, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F8F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, I8I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F16F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, B16F32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, BI32I32, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, F32F32, 0) +, REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8FMA, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8DP4, 0), + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64SMFR0_EL1, SF8DP2, 0), + REG_FTR_END, +}; + static const struct reg_ftr_bits ftr_id_aa64zfr0_el1[] = { REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F64MM, 0), REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F32MM, 0), @@ -212,6 +235,7 @@ static struct test_feature_reg test_regs[] = { TEST_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0_el1), TEST_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1_el1), TEST_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2_el1), + TEST_REG(SYS_ID_AA64SMFR0_EL1, ftr_id_aa64smfr0_el1), TEST_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0_el1), };
@@ -228,6 +252,7 @@ static void guest_code(void) GUEST_REG_SYNC(SYS_ID_AA64MMFR0_EL1); GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1); GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1); + GUEST_REG_SYNC(SYS_ID_AA64SMFR0_EL1); GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1); GUEST_REG_SYNC(SYS_CTR_EL0);
@@ -675,8 +700,8 @@ int main(void) ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) + ARRAY_SIZE(ftr_id_aa64pfr1_el1) + ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) + ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + - ARRAY_SIZE(ftr_id_aa64zfr0_el1) - ARRAY_SIZE(test_regs) + 2 + - MPAM_IDREG_TEST; + ARRAY_SIZE(ftr_id_aa64zfr0_el1) + ARRAY_SIZE(ftr_id_aa64smfr0_el1) - ARRAY_SIZE(test_regs) + + 2 + MPAM_IDREG_TEST;
ksft_set_plan(test_cnt);
On Fri, 14 Feb 2025 01:57:43 +0000, Mark Brown broonie@kernel.org wrote:
I've removed the RFC tag from this version of the series, but the items that I'm looking for feedback on remains the same:
The userspace ABI, in particular:
The vector length used for the SVE registers, access to the SVE registers and access to ZA and (if available) ZT0 depending on the current state of PSTATE.{SM,ZA}.
The use of a single finalisation for both SVE and SME.
The addition of control for enabling fine grained traps in a similar manner to FGU but without the UNDEF, I'm not clear if this is desired at all and at present this requires symmetric read and write traps like FGU. That seemed like it might be desired from an implementation point of view but we already have one case where we enable an asymmetric trap (for ARM64_WORKAROUND_AMPERE_AC03_CPU_38) and it seems generally useful to enable asymmetrically.
This series implements support for SME use in non-protected KVM guests.
[...]
Just to be clear: I do not intend to review a series that doesn't cover the full gamut of KVM from day 1. Protected mode is an absolute requirement. It is the largest KVM deployment, and Android phones the only commonly available platform with SME. If CCA gets merged prior to SME support, supporting it will also be a requirement.
Much of this is very similar to SVE, the main additional challenge that SME presents is that it introduces a new vector length similar to the SVE vector length and two new controls which change the registers seen by guests:
- PSTATE.ZA enables the ZA matrix register and, if SME2 is supported, the ZT0 LUT register.
- PSTATE.SM enables streaming mode, a new floating point mode which uses the SVE register set with the separately configured SME vector length. In streaming mode implementation of the FFR register is optional.
It is also permitted to build systems which support SME without SVE, in this case when not in streaming mode no SVE registers or instructions are available. Further, there is no requirement that there be any overlap in the set of vector lengths supported by SVE and SME in a system, this is expected to be a common situation in practical systems.
Since there is a new vector length to configure we introduce a new feature parallel to the existing SVE one with a new pseudo register for the streaming mode vector length. Due to the overlap with SVE caused by streaming mode rather than finalising SME as a separate feature we use the existing SVE finalisation to also finalise SME, a new define KVM_ARM_VCPU_VEC is provided to help make user code clearer. Finalising SVE and SME separately would introduce complication with register access since finalising SVE makes the SVE regsiters writeable by userspace and doing multiple finalisations results in an error being reported. Dealing with a state where the SVE registers are writeable due to one of SVE or SME being finalised but may have their VL changed by the other being finalised seems like needless complexity with minimal practical utility, it seems clearer to just express directly that only one finalisation can be done in the ABI.
Access to the floating point registers follows the architecture:
- When both SVE and SME are present:
- If PSTATE.SM == 0 the vector length used for the Z and P registers is the SVE vector length.
- If PSTATE.SM == 1 the vector length used for the Z and P registers is the SME vector length.
- If only SME is present:
- If PSTATE.SM == 0 the Z and P registers are inaccessible and the floating point state accessed via the encodings for the V registers.
- If PSTATE.SM == 1 the vector length used for the Z and P registers
- The SME specific ZA and ZT0 registers are only accessible if SVCR.ZA is 1.
The VMM must understand this, in particular when loading state SVCR should be configured before other state.
Why SVCR? This isn't a register, just an architected accessor to PSTATE.{ZA,SM}. Userspace already has direct access to PSTATE, so I don't understand this requirement.
Isn't it that there is simply a dependency between restoring PSTATE and any of the vector stuff? Also, how do you preserve the current ABI that does not have this requirement?
M.
On Fri, Feb 14, 2025 at 09:24:03AM +0000, Marc Zyngier wrote:
Mark Brown broonie@kernel.org wrote:
Just to be clear: I do not intend to review a series that doesn't cover the full gamut of KVM from day 1. Protected mode is an absolute requirement. It is the largest KVM deployment, and Android phones the only commonly available platform with SME. If CCA gets merged prior to SME support, supporting it will also be a requirement.
OK, no problem. This is a new requirement and I'd been trying to balance the concerns people have with the size of series like this against the need to get everything in; my plan had been to follow up as soon as possible with pKVM.
Access to the floating point registers follows the architecture:
- When both SVE and SME are present:
- If PSTATE.SM == 0 the vector length used for the Z and P registers is the SVE vector length.
- If PSTATE.SM == 1 the vector length used for the Z and P registers is the SME vector length.
- If only SME is present:
- If PSTATE.SM == 0 the Z and P registers are inaccessible and the floating point state accessed via the encodings for the V registers.
- If PSTATE.SM == 1 the vector length used for the Z and P registers
- The SME specific ZA and ZT0 registers are only accessible if SVCR.ZA is 1.
The VMM must understand this, in particular when loading state SVCR should be configured before other state.
Why SVCR? This isn't a register, just an architected accessor to PSTATE.{ZA,SM}. Userspace already has direct access to PSTATE, so I don't understand this requirement.
Could you be more explicit as to what you mean by direct access to PSTATE here? The direct access to these PSTATE fields is in the form of SVCR register accesses, or writes via SMSTART or SMSTOP instructions when executing code - is there another access mechanism I'm not aware of here? They don't appear in SPSR. Or is this a terminology issue with referring to SVCR as the mechanism for configuring PSTATE.{SM,ZA} without explicitly calling out that that's what's happening?
Isn't it that there is simply a dependency between restoring PSTATE and any of the vector stuff? Also, how do you preserve the current ABI that does not have this requirement?
Yes, that's the dependency - I'm spelling out explicitly what changes in the register view when PSTATE.{SM,ZA} are restored. This ABI is what you appeared to be asking for the last time we discussed this. Previously I had also proposed either:
- Exposing the streaming mode view of the register state as separate registers, selecting between the standard and streaming views for load/save based on the mode when the guest runs and clearing the inactive registers on userspace access.
- Always presenting userspace with the largest available vector length, zero padding when userspace reads and discarding unused high bits when loading into the registers for the guest. This ends up requiring rewriting between VLs, or to/from FPSIMD format, around periods of userspace access since when normally executing and context switching the guest we want to store the data in the native format for the current PSTATE.SM for performance.
both of which largely avoid the ordering requirements but add complexity to the implementation, and memory overhead in the first case. I'd originally implemented the second case, which seems the best of the available options to me. You weren't happy with these options and said that we should not translate register formats and always use the current mode for the vCPU, but given that changing PSTATE.SM changes the register sizes that ends up creating an ordering requirement. You seemed to agree that it was reasonable to have an ordering requirement with PSTATE.SM so long as it only came when SME had been explicitly enabled.
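To make the ordering concrete, the restore side under the proposed ABI would look roughly like the sketch below. This is illustrative only: ARM64_SYS_REG() is the sysreg encoding helper used by the KVM selftests, the SVCR encoding matches what get-reg-list now expects, and the actual Z/P/ZA payload transfers are elided.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Restore PSTATE.{SM,ZA} via SVCR before any vector state so that the size
 * and validity of the Z/P/ZA registers are already correct when written. */
static void restore_fp_state(int vcpu_fd, __u64 saved_svcr)
{
	struct kvm_one_reg reg = {
		.id   = ARM64_SYS_REG(3, 3, 4, 2, 2),	/* SVCR */
		.addr = (__u64)&saved_svcr,
	};

	ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);

	/* ...then load Z/P (sized by the SME VL if SM == 1), then ZA/ZT0. */
}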
Would you prefer:
- Changing the register view based on the current value of PSTATE.SM.
- Exposing streaming mode Z and P as separate registers.
- Exposing the existing Z and P registers with the maximum S?E VL.
or some other option?
On Fri, 14 Feb 2025 15:13:52 +0000, Mark Brown broonie@kernel.org wrote:
On Fri, Feb 14, 2025 at 09:24:03AM +0000, Marc Zyngier wrote:
Mark Brown broonie@kernel.org wrote:
Just to be clear: I do not intend to review a series that doesn't cover the full gamut of KVM from day 1. Protected mode is an absolute requirement. It is the largest KVM deployment, and Android phones the only commonly available platform with SME. If CCA gets merged prior to SME support, supporting it will also be a requirement.
OK, no problem. This is a new requirement and I'd been trying to balance the concerns people have with the size of series like this against the need to get everything in; my plan had been to follow up as soon as possible with pKVM.
If size is an issue, split the UAPI from the core code. But don't fragment the overall world switch and exception handling, because that's a sure way to end up with the same sort of problems we ended up fixing in SVE. pKVM has a direct influence on what you track, where you track it, and implementing it as an afterthought is a very bad idea.
Access to the floating point registers follows the architecture:
- When both SVE and SME are present:
- If PSTATE.SM == 0 the vector length used for the Z and P registers is the SVE vector length.
- If PSTATE.SM == 1 the vector length used for the Z and P registers is the SME vector length.
- If only SME is present:
- If PSTATE.SM == 0 the Z and P registers are inaccessible and the floating point state accessed via the encodings for the V registers.
- If PSTATE.SM == 1 the vector length used for the Z and P registers
- The SME specific ZA and ZT0 registers are only accessible if SVCR.ZA is 1.
The VMM must understand this, in particular when loading state SVCR should be configured before other state.
Why SVCR? This isn't a register, just an architected accessor to PSTATE.{ZA,SM}. Userspace already has direct access to PSTATE, so I don't understand this requirement.
Could you be more explicit as to what you mean by direct access to PSTATE here? The direct access to these PSTATE fields is in the form of SVCR register accesses, or writes via SMSTART or SMSTOP instructions when executing code - is there another access mechanism I'm not aware of here? They don't appear in SPSR. Or is this a terminology issue with referring to SVCR as the mechanism for configuring PSTATE.{SM,ZA} without explicitly calling out that that's what's happening?
I'm painfully aware of the architecture limitations.
However, I don't get your mention of SPSR here. The architecture is quite clear that PSTATE is where these bits are held, that they are not propagated anywhere else, and that's where userspace should expect to find them.
The fact that SW must use SVCR to alter PSTATE.{ZA,SM} doesn't matter. We save/restore registers, not accessors. If this means we need to play a dance when the VMM accesses PSTATE to reconciliate KVM's internal view with the userspace view, so be it.
It probably means you need to obtain a clarification of the architecture to define *where* these bits are stored in PSTATE, because that's not currently defined.
Isn't it that there is simply a dependency between restoring PSTATE and any of the vector stuff? Also, how do you preserve the current ABI that does not have this requirement?
Yes, that's the dependency - I'm spelling out explicitly what changes in the register view when PSTATE.{SM,ZA} are restored. This ABI is what you appeared to be asking for the last time we discussed this. Previously I had also proposed either:
Exposing the streaming mode view of the register state as separate registers, selecting between the standard and streaming views for load/save based on the mode when the guest runs and clearing the inactive registers on userspace access.
Always presenting userspace with the largest available vector length, zero padding when userspace reads and discarding unused high bits when loading into the registers for the guest. This ends up requiring rewriting between VLs, or to/from FPSIMD format, around periods of userspace access since when normally executing and context switching the guest we want to store the data in the native format for the current PSTATE.SM for performance.
both of which largely avoid the ordering requirements but add complexity to the implementation, and memory overhead in the first case. I'd originally implemented the second case, which seems the best of the available options to me. You weren't happy with these options and said that we should not translate register formats and always use the current mode for the vCPU, but given that changing PSTATE.SM changes the register sizes that ends up creating an ordering requirement. You seemed to agree that it was reasonable to have an ordering requirement with PSTATE.SM so long as it only came when SME had been explicitly enabled.
Would you prefer:
- Changing the register view based on the current value of PSTATE.SM.
- Exposing streaming mode Z and P as separate registers.
- Exposing the existing Z and P registers with the maximum S?E VL.
or some other option?
My take on this hasn't changed. I want to see something that behaves *exactly* like the architecture defines the expected behaviour of a CPU.
But you still haven't answered my question: How is the *current* ABI preserved? Does it require *not* selecting SME? Does it require anything else? I'm expecting simple answers to simple questions, not a wall of text describing something that is not emulating the architecture.
M.
On Mon, Feb 17, 2025 at 09:37:26AM +0000, Marc Zyngier wrote:
Mark Brown broonie@kernel.org wrote:
On Fri, Feb 14, 2025 at 09:24:03AM +0000, Marc Zyngier wrote:
Why SVCR? This isn't a register, just an architected accessor to PSTATE.{ZA,SM}. Userspace already has direct access to PSTATE, so I don't understand this requirement.
Could you be more explicit as to what you mean by direct access to PSTATE here? The direct access to these PSTATE fields is in the form of
I'm painfully aware of the architecture limitations.
However, I don't get your mention of SPSR here. The architecture is quite clear that PSTATE is where these bits are held, that they are not propagated anywhere else, and that's where userspace should expect to find them.
The fact that SW must use SVCR to alter PSTATE.{ZA,SM} doesn't matter. We save/restore registers, not accessors. If this means we need to play a dance when the VMM accesses PSTATE to reconciliate KVM's internal view with the userspace view, so be it.
Could you please clarify what you're referring to as the VMM accessing PSTATE here? The KVM API documentation defines its concept of PSTATE as:
0x6030 0000 0010 0042 PSTATE 64 regs.pstate
in api.rst but does not further elaborate. Looking at the code, the values that appear there seem to be mapped fairly directly to SPSR values, which is why I was talking about them above.
It's not clear to me that PSTATE in the architecture is a register exactly. DDI0487 L.a D1.4 defines the PSTATE bits as being stored in a ProcState pseudocode structure (ProcState is defined in J1.3.3.457, the pseudocode maps PSTATE in J1.3.3.454) but has an explicit comment that "There is no significance in the field order". I can't seem to locate an architectural definition of the layout of PSTATE as a whole. I also can't find any direct observability of PSTATE as a whole which would require a layout definition for PSTATE itself from the architecture.
There *is* a statement in R_DQXFW that "The contents of PSTATE immediately before the exception was taken are written to SPSR_ELx" which is somewhat in conflict with the absence of SM and ZA fields in any of SPSR_ELx. There's also R_BWCFK similarly for exception return, though that also already has some additional text for PSTATE.{IT,T} for returns to AArch32. If everything in PSTATE ends up getting written to SPSR then we can use SPSR as our representation of PSTATE.
It probably means you need to obtain a clarification of the architecture to define *where* these bits are stored in PSTATE, because that's not currently defined.
I will raise the issue with R_DQXFW and friends with the architects but I'm not convinced that at this point the clarification wouldn't be an adjustment to those rules rather than the addition of fields for SM and ZA in the SPSRs. Without any additions the only access we have is via SVCR.
Isn't it that there is simply a dependency between restoring PSTATE and any of the vector stuff? Also, how do you preserve the current ABI that do not have this requirement?
Yes, that's the dependency - I'm spelling out explicitly what changes in the register view when PSTATE.{SM,ZA} are restored. This ABI is what you appeared to be asking for the last time we discussed this.
...
Would you prefer:
- Changing the register view based on the current value of PSTATE.SM.
- Exposing streaming mode Z and P as separate registers.
- Exposing the existing Z and P registers with the maximum S?E VL.
or some other option?
My take on this hasn't changed. I want to see something that behaves *exactly* like the architecture defines the expected behaviour of a CPU.
But you still haven't answered my question: How is the *current* ABI preserved? Does it require *not* selecting SME? Does it require anything else? I'm expecting simple answers to simple questions, not a
Yes, it requires not selecting SME and only that. If the VMM does not enable SME then it should see no change.
wall of text describing something that is not emulating the architecture.
I'm not clear what you're referring to as not emulating the architecture here; I *think* it's the issues with PSTATE.{SM,ZA} not appearing in SPSR_ELx and hence KVM's pstate?
Your general style of interaction means that it is not always altogether clear when your intent is to just ask a simple question or when it is to point out some problem you have seen.