We have support for determining a set of fine-grained traps to enable for the guest, but it is tied to the support for injecting UNDEFs for undefined features. This means we can't use that mechanism for system registers which should be present but need emulation, such as SMPRI_EL1: it should be accessible when SME is present, but if SME priority support is absent then SMPRI_EL1.Priority should be RAZ.
Add an additional set of fine-grained traps, fgt, mirroring the existing fgu array. We use the same format as for FGU, where the bit for the trap is always set in the array; this makes it clear what is being explicitly managed and keeps the code consistent.
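As a rough usage sketch (not part of this patch), a feature that needs to emulate a register could then request its traps via the new array along the lines below. The helper name and the exact group/mask symbols (HFGRTR_GROUP, HFGWTR_GROUP, HFGRTR_EL2_nSMPRI_EL1_MASK, HFGWTR_EL2_nSMPRI_EL1_MASK) are assumptions based on the existing FGU handling rather than something defined by this patch:

	/* Hypothetical example, mirroring how fgu is populated today. */
	static void example_trap_smpri_el1(struct kvm *kvm)
	{
		/*
		 * As with fgu, set the bit for the trap in the array; the
		 * hyp switch code resolves positive vs negative trap bit
		 * polarity via the register's fgt_masks.
		 */
		kvm->arch.fgt[HFGRTR_GROUP] |= HFGRTR_EL2_nSMPRI_EL1_MASK;
		kvm->arch.fgt[HFGWTR_GROUP] |= HFGWTR_EL2_nSMPRI_EL1_MASK;
	}

With something like that in place, reads and writes of SMPRI_EL1 trap to the hypervisor, where SMPRI_EL1.Priority can be emulated as RAZ when priority support is absent.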
We do not convert the handling of ARM64_WORKAROUND_AMPERE_AC03_CPU_38 to this mechanism since it only enables a write trap, and when implementing the existing UNDEF support we assumed that read and write trap enablement would be shared (this being the overwhelmingly common case).
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h       | 6 ++++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 7 ++++---
 2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index d27079968341..911da41e6bc0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -302,6 +302,12 @@ struct kvm_arch {
 	 */
 	u64 fgu[__NR_FGT_GROUP_IDS__];
 
+	/*
+	 * Additional FGTs to enable for the guests, e.g. for emulated
+	 * registers.
+	 */
+	u64 fgt[__NR_FGT_GROUP_IDS__];
+
 	/*
 	 * Stage 2 paging state for VMs with nested S2 using a virtual
 	 * VMID.
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 2ad57b117385..987f5c4c5747 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -283,9 +283,9 @@ static inline void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
 		id;						\
 	})
 
-#define compute_undef_clr_set(vcpu, kvm, reg, clr, set)			\
+#define compute_trap_clr_set(vcpu, kvm, trap, reg, clr, set)		\
 	do {								\
-		u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)];	\
+		u64 hfg = kvm->arch.trap[reg_to_fgt_group_id(reg)];	\
 		struct fgt_masks *m = reg_to_fgt_masks(reg);		\
 		set |= hfg & m->mask;					\
 		clr |= hfg & m->nmask;					\
@@ -301,7 +301,8 @@ static inline void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
 		if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))		\
 			compute_clr_set(vcpu, reg, c, s);		\
 									\
-		compute_undef_clr_set(vcpu, kvm, reg, c, s);		\
+		compute_trap_clr_set(vcpu, kvm, fgu, reg, c, s);	\
+		compute_trap_clr_set(vcpu, kvm, fgt, reg, c, s);	\
 									\
 		val = m->nmask;						\
 		val |= s;						\
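
For reference, a minimal sketch of how the accumulated clr/set bits turn into the final trap register value in the macro touched by the second hunk; the final "val &= ~c" step is an assumption about the code that follows the context shown above:

	/*
	 * Sketch only: start from the register's negative-polarity mask
	 * (a set nXXX bit means "do not trap"), then apply the traps
	 * accumulated from the fgu and fgt arrays above.
	 */
	val = m->nmask;		/* default: negative-polarity traps disabled */
	val |= s;		/* enable requested positive-polarity traps */
	val &= ~c;		/* clear nXXX bits for requested negative-polarity traps */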