The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is actually true of SME and SEV. Just use CPUID[0x8000_0008].EAX[7:0] unconditionally, it is simplest and works even if memory is not encrypted.
Cc: stable@vger.kernel.org
Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f92b40d798c..8b8edfbdbaef 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -538,15 +538,11 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 static u8 kvm_get_shadow_phys_bits(void)
 {
 	/*
-	 * boot_cpu_data.x86_phys_bits is reduced when MKTME is detected
-	 * in CPU detection code, but MKTME treats those reduced bits as
-	 * 'keyID' thus they are not reserved bits.  Therefore for MKTME
-	 * we should still return physical address bits reported by CPUID.
+	 * boot_cpu_data.x86_phys_bits is reduced when MKTME or SME are detected
+	 * in CPU detection code, but the processor treats those reduced bits as
+	 * 'keyID' thus they are not reserved bits.  Therefore KVM needs to look at
+	 * the physical address bits reported by CPUID.
 	 */
-	if (!boot_cpu_has(X86_FEATURE_TME) ||
-	    WARN_ON_ONCE(boot_cpu_data.extended_cpuid_level < 0x80000008))
-		return boot_cpu_data.x86_phys_bits;
-
 	return cpuid_eax(0x80000008) & 0xff;
 }
On Wed, Dec 04, 2019 at 03:51:00PM +0100, Paolo Bonzini wrote:
> The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is
> actually true of SME and SEV.  Just use CPUID[0x8000_0008].EAX[7:0]
> unconditionally, it is simplest and works even if memory is not encrypted.
>
> Cc: stable@vger.kernel.org
> Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6f92b40d798c..8b8edfbdbaef 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -538,15 +538,11 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
>  static u8 kvm_get_shadow_phys_bits(void)
>  {
>  	/*
> -	 * boot_cpu_data.x86_phys_bits is reduced when MKTME is detected
> -	 * in CPU detection code, but MKTME treats those reduced bits as
> -	 * 'keyID' thus they are not reserved bits.  Therefore for MKTME
> -	 * we should still return physical address bits reported by CPUID.
> +	 * boot_cpu_data.x86_phys_bits is reduced when MKTME or SME are detected
> +	 * in CPU detection code, but the processor treats those reduced bits as
> +	 * 'keyID' thus they are not reserved bits.  Therefore KVM needs to look at
> +	 * the physical address bits reported by CPUID.
>  	 */
> -	if (!boot_cpu_has(X86_FEATURE_TME) ||
> -	    WARN_ON_ONCE(boot_cpu_data.extended_cpuid_level < 0x80000008))
> -		return boot_cpu_data.x86_phys_bits;
Removing this entirely will break CPUs that don't support leaf 0x80000008.
From a VMX perspective, I'm pretty sure all Intel hardware that supports
VMX is guaranteed to support 0x80000008, but I've no idea about SVM or any
non-Intel CPU, and not supporting 0x80000008 in a virtual machine is
technically legal/possible.  The CPUID read was conditioned on TME because
TME is reported as supported only if leaf 0x80000008 exists.
The extra bit of paranoia doesn't cost much, so play it safe? E.g.:
	if (unlikely(boot_cpu_data.extended_cpuid_level < 0x80000008)) {
		WARN_ON_ONCE(boot_cpu_has(X86_FEATURE_TME) || SME?);
		return boot_cpu_data.x86_phys_bits;
	}

	return cpuid_eax(0x80000008) & 0xff;
> -
>  	return cpuid_eax(0x80000008) & 0xff;
>  }
> --
> 1.8.3.1
On 04/12/19 16:29, Sean Christopherson wrote:
> The extra bit of paranoia doesn't cost much, so play it safe?  E.g.:
>
> 	if (unlikely(boot_cpu_data.extended_cpuid_level < 0x80000008)) {
> 		WARN_ON_ONCE(boot_cpu_has(X86_FEATURE_TME) || SME?);
> 		return boot_cpu_data.x86_phys_bits;
> 	}
>
> 	return cpuid_eax(0x80000008) & 0xff;
Sounds good. I wouldn't bother with the WARN even.
Paolo
Hi,
[This is an automated email]
This commit has been processed because it contains a -stable tag. The stable tag indicates that it's relevant for the following trees: all
The bot has tested the following trees: v5.4.2, v5.3.15, v4.19.88, v4.14.158, v4.9.206, v4.4.206.
v5.4.2: Build OK!
v5.3.15: Failed to apply! Possible dependencies:
    Unable to calculate
v4.19.88: Failed to apply! Possible dependencies:
    7b6f8a06e482 ("kvm: x86: Move kvm_set_mmio_spte_mask() from x86.c to mmu.c")
    f3ecb59dd49f ("kvm: x86: Fix reserved bits related calculation errors caused by MKTME")

v4.14.158: Failed to apply! Possible dependencies:
    7b6f8a06e482 ("kvm: x86: Move kvm_set_mmio_spte_mask() from x86.c to mmu.c")
    f3ecb59dd49f ("kvm: x86: Fix reserved bits related calculation errors caused by MKTME")

v4.9.206: Failed to apply! Possible dependencies:
    114df303a7ee ("kvm: x86: reduce collisions in mmu_page_hash")
    28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
    312b616b30d8 ("kvm: x86: mmu: Set SPTE_SPECIAL_MASK within mmu.c")
    37f0e8fe6b10 ("kvm: x86: mmu: Do not use bit 63 for tracking special SPTEs")
    66d73e12f278 ("KVM: X86: MMU: no mmu_notifier_seq++ in kvm_age_hva")
    83ef6c8155c0 ("kvm: x86: mmu: Refactor accessed/dirty checks in mmu_spte_update/clear")
    97dceba29a6a ("kvm: x86: mmu: Fast Page Fault path retries")
    daa07cbc9ae3 ("KVM: x86: fix L1TF's MMIO GFN calculation")
    dcdca5fed5f6 ("x86: kvm: mmu: make spte mmio mask more explicit")
    ea4114bcd3a8 ("kvm: x86: mmu: Rename spte_is_locklessly_modifiable()")
    f160c7b7bb32 ("kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.")
    f3ecb59dd49f ("kvm: x86: Fix reserved bits related calculation errors caused by MKTME")

v4.4.206: Failed to apply! Possible dependencies:
    018aabb56d61 ("KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new struct")
    0e3d0648bd90 ("KVM: x86: MMU: always set accessed bit in shadow PTEs")
    114df303a7ee ("kvm: x86: reduce collisions in mmu_page_hash")
    14f4760562e4 ("kvm: set page dirty only if page has been writable")
    28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
    37f0e8fe6b10 ("kvm: x86: mmu: Do not use bit 63 for tracking special SPTEs")
    83ef6c8155c0 ("kvm: x86: mmu: Refactor accessed/dirty checks in mmu_spte_update/clear")
    8d5cf1610da5 ("kvm: mmu: extend the is_present check to 32 bits")
    daa07cbc9ae3 ("KVM: x86: fix L1TF's MMIO GFN calculation")
    ea4114bcd3a8 ("kvm: x86: mmu: Rename spte_is_locklessly_modifiable()")
    f160c7b7bb32 ("kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.")
    f3ecb59dd49f ("kvm: x86: Fix reserved bits related calculation errors caused by MKTME")
    ffb128c89b77 ("kvm: mmu: don't set the present bit unconditionally")
NOTE: The patch will not be queued to stable trees until it is upstream.
How should we proceed with this patch?
linux-stable-mirror@lists.linaro.org