On Wed, Jan 27, 2021, Peter Gonda wrote:
> Grab kvm->lock before pinning memory when registering an encrypted
> region; sev_pin_memory() relies on kvm->lock being held to ensure
> correctness when checking and updating the number of pinned pages.
>
> Add a lockdep assertion to help prevent future regressions.
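
For context, the accounting that needs kvm->lock is roughly the
following; this is a simplified sketch of sev_pin_memory()'s limit
check, not the verbatim sev.c code:

	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
	unsigned long locked, lock_limit;

	/*
	 * Unsynchronized read-modify-write of sev->pages_locked: without
	 * kvm->lock, two ioctls pinning memory concurrently can both pass
	 * the limit check and/or lose an update to the count.
	 */
	locked = sev->pages_locked + npages;
	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
	if (locked > lock_limit && !capable(CAP_IPC_LOCK))
		return ERR_PTR(-ENOMEM);

	/* ... pin the pages ... */

	sev->pages_locked = locked;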
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Brijesh Singh <brijesh.singh@amd.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: stable@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
> Signed-off-by: Peter Gonda <pgonda@google.com>
>
> V2
>  - Fix up patch description
>  - Correct file paths svm.c -> sev.c
>  - Add unlock of kvm->lock on sev_pin_memory error
>
> V1
Put version info, and anything else that shouldn't be in the final commit, below the three dashes. AFAIK that requires manually editing the patch file before sending it.
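
Something like this, assuming the usual format-patch/send-email flow
(file name illustrative):

	$ git format-patch -1
	$ $EDITOR 0001-KVM-SVM-*.patch    # add version info below the '---'
	$ git send-email 0001-KVM-SVM-*.patch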
> ---

Version info goes here.
>  arch/x86/kvm/svm/sev.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index c8ffdbc81709..b80e9bf0a31b 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -342,6 +342,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
>  	unsigned long first, last;
>  	int ret;
>  
> +	lockdep_assert_held(&kvm->lock);
> +
>  	if (ulen == 0 || uaddr + ulen < uaddr)
>  		return ERR_PTR(-EINVAL);
>  
> @@ -1119,12 +1121,20 @@ int svm_register_enc_region(struct kvm *kvm,
>  	if (!region)
>  		return -ENOMEM;
>  
> +	mutex_lock(&kvm->lock);
>  	region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
>  	if (IS_ERR(region->pages)) {
>  		ret = PTR_ERR(region->pages);
> +		mutex_unlock(&kvm->lock);
>  		goto e_free;
>  	}
>  
> +	region->uaddr = range->addr;
> +	region->size = range->size;
> +	list_add_tail(&region->list, &sev->regions_list);
> +	mutex_unlock(&kvm->lock);
> +
>  	/*
>  	 * The guest may change the memory encryption attribute from C=0 -> C=1
>  	 * or vice versa for this memory range. Lets make sure caches are
> @@ -1133,13 +1143,6 @@ int svm_register_enc_region(struct kvm *kvm,
>  	 */
>  	sev_clflush_pages(region->pages, region->npages);
I don't think it actually matters, but it feels like the flush should be done before adding the region to the list. That would also make this sequence consistent with the other flows.
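
E.g. a rough sketch of the suggested ordering (not tested):

	/*
	 * Flush before the region is visible to the rest of KVM, i.e.
	 * before it is added to sev->regions_list.
	 */
	sev_clflush_pages(region->pages, region->npages);

	region->uaddr = range->addr;
	region->size = range->size;
	list_add_tail(&region->list, &sev->regions_list);
	mutex_unlock(&kvm->lock);

	return ret;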
Tom, any thoughts?
> -	region->uaddr = range->addr;
> -	region->size = range->size;
> -	mutex_lock(&kvm->lock);
> -	list_add_tail(&region->list, &sev->regions_list);
> -	mutex_unlock(&kvm->lock);
> -
>  	return ret;
>  
>  e_free:
> -- 
> 2.30.0.280.ga3ce27912f-goog