On Mon, Feb 10, 2025 at 04:12:43PM +0000, Will Deacon wrote:
On Thu, Feb 06, 2025 at 02:10:56PM +0000, Mark Rutland wrote:
Now that the host eagerly saves its own FPSIMD/SVE/SME state, non-protected KVM never needs to save the host FPSIMD/SVE/SME state, and the code to do this is never used. Protected KVM still needs to save/restore the host FPSIMD/SVE state to avoid leaking guest state to the host (and to avoid revealing to the host whether the guest used FPSIMD/SVE/SME), and that code needs to be retained.
Remove the unused code and data structures.
To avoid the need for a stub copy of kvm_hyp_save_fpsimd_host() in the VHE hyp code, the nVHE/hVHE version is moved into the shared switch header, where it is only invoked when KVM is in protected mode.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: Will Deacon <will@kernel.org>
 arch/arm64/include/asm/kvm_host.h       | 20 +++++-------------
 arch/arm64/kvm/arm.c                    |  8 -------
 arch/arm64/kvm/fpsimd.c                 |  2 --
 arch/arm64/kvm/hyp/include/hyp/switch.h | 25 ++++++++++++++++++++--
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      |  2 +-
 arch/arm64/kvm/hyp/nvhe/switch.c        | 28 -------------------------
 arch/arm64/kvm/hyp/vhe/switch.c         |  8 -------
 7 files changed, 29 insertions(+), 64 deletions(-)
[...]
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index f838a45665f26..c5b8a11ac4f50 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -375,7 +375,28 @@ static inline void __hyp_sve_save_host(void)
 		     true);
 }
 
-static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Non-protected kvm relies on the host restoring its sve state.
+	 * Protected kvm restores the host's sve state as not to reveal that
+	 * fpsimd was used by a guest nor leak upper sve bits.
+	 */
+	if (system_supports_sve()) {
+		__hyp_sve_save_host();
+
+		/* Re-enable SVE traps if not supported for the guest vcpu. */
+		if (!vcpu_has_sve(vcpu))
+			cpacr_clear_set(CPACR_EL1_ZEN, 0);
+	} else {
+		__fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs));
+	}
+
+	if (kvm_has_fpmr(kern_hyp_va(vcpu->kvm)))
+		*host_data_ptr(fpmr) = read_sysreg_s(SYS_FPMR);
+}
 
 /*
  * We trap the first access to the FP/SIMD to save the host context and
@@ -425,7 +446,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	isb();
 
 	/* Write out the host state if it's in the registers */
-	if (host_owns_fp_regs())
+	if (is_protected_kvm_enabled() && host_owns_fp_regs())
 		kvm_hyp_save_fpsimd_host(vcpu);
I wondered briefly whether this would allow us to clean up the CPACR handling a little and avoid the conditional SVE trap re-enabling inside kvm_hyp_save_fpsimd_host(), but I couldn't come up with a clean way to do it without an additional ISB. Hrm.
Anyway, as far as the patch goes:
Acked-by: Will Deacon <will@kernel.org>
Thanks!
FWIW, I'd also considered that, and I'd concluded that if anything we could do a subsequent simplification by pulling that out of kvm_hyp_save_fpsimd_host() and having kvm_hyp_handle_fpsimd() do something like:
| static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
| {
| 	...
| 
| 	/* Valid trap */
| 
| 	/*
| 	 * Enable everything EL2 might need to save/restore state.
| 	 * Maybe each of the bits should depend on system_has_xxx()
| 	 */
| 	cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN | CPACR_EL1_SMEN);
| 	isb();
| 
| 	...
| 
| 	/* Write out the host state if it's in the registers */
| 	if (is_protected_kvm_enabled() && host_owns_fp_regs())
| 		kvm_hyp_save_fpsimd_host(vcpu);
| 
| 	/* Restore guest state */
| 
| 	...
| 
| 	/*
| 	 * Enable traps for the VCPU. The ERET will cause the traps to
| 	 * take effect in the guest, so no ISB is necessary.
| 	 */
| 	cpacr_guest = CPACR_EL1_FPEN;
| 	if (vcpu_has_sve(vcpu))
| 		cpacr_guest |= CPACR_EL1_ZEN;
| 	if (vcpu_has_sme(vcpu))		// whenever we add this
| 		cpacr_guest |= CPACR_EL1_SMEN;
| 	cpacr_clear_set(CPACR_EL1_FPEN | CPACR_EL1_ZEN | CPACR_EL1_SMEN,
| 			cpacr_guest);
| 
| 	return true;
| }
... where we'd still have the CPACR write to re-enable traps, but it'd be unconditional, and wouldn't need an extra ISB.
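For anyone skimming the thread: cpacr_clear_set() is the helper the patch and the sketch above both lean on. Roughly, it's a read-modify-write of the EL2 trap controls (CPACR_EL1 with VHE, the equivalent CPTR_EL2 bits without), and it leaves any synchronization to the caller. A loose, VHE-only sketch, with a made-up name so it isn't mistaken for the real helper:

| /*
|  * Illustrative only: the real cpacr_clear_set() also handles the
|  * nVHE CPTR_EL2 encoding. Assumes <asm/sysreg.h> for the accessors.
|  */
| static inline void cpacr_clear_set_sketch(u64 clr, u64 set)
| {
| 	u64 cpacr = read_sysreg(cpacr_el1);
| 
| 	cpacr &= ~clr;
| 	cpacr |= set;
| 	write_sysreg(cpacr, cpacr_el1);
| 
| 	/* No ISB here; callers rely on an explicit ISB or on the ERET. */
| }

Since the helper doesn't synchronize on its own, the final trap write in the sketch above can rely on the ERET alone.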
If that makes sense to you, I can go spin that as a subsequent cleanup atop this series.
Mark.