sev_pin_memory assumes that callers hold the kvm->lock. This was true for all callers except svm_register_enc_region since it does not originate from svm_mem_enc_op. Also added lockdep annotation to help prevent future regressions.
Tested: Booted SEV enabled VM on host.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Fixes: 116a2214c5173 ("KVM: SVM: Pin guest memory when SEV is active")
Signed-off-by: Peter Gonda <pgonda@google.com>
---
 arch/x86/kvm/svm.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index afdc5b44fe9f..9884e57f3d0f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1699,6 +1699,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 	struct page **pages;
 	unsigned long first, last;
 
+	lockdep_assert_held(&kvm->lock);
+
 	if (ulen == 0 || uaddr + ulen < uaddr)
 		return NULL;
 
@@ -7228,12 +7230,19 @@ static int svm_register_enc_region(struct kvm *kvm,
 	if (!region)
 		return -ENOMEM;
 
+	mutex_lock(&kvm->lock);
 	region->pages = sev_pin_memory(kvm, range->addr, range->size,
 				       &region->npages, 1);
 	if (!region->pages) {
 		ret = -ENOMEM;
 		goto e_free;
 	}
 
+	region->uaddr = range->addr;
+	region->size = range->size;
+
+	list_add_tail(&region->list, &sev->regions_list);
+	mutex_unlock(&kvm->lock);
+
 	/*
 	 * The guest may change the memory encryption attribute from C=0 -> C=1
 	 * or vice versa for this memory range. Lets make sure caches are
@@ -7242,13 +7251,6 @@ static int svm_register_enc_region(struct kvm *kvm,
 	 */
 	sev_clflush_pages(region->pages, region->npages);
 
-	region->uaddr = range->addr;
-	region->size = range->size;
-
-	mutex_lock(&kvm->lock);
-	list_add_tail(&region->list, &sev->regions_list);
-	mutex_unlock(&kvm->lock);
-
 	return ret;

e_free:
On Tue, Jan 26, 2021, Peter Gonda wrote:
sev_pin_memory assumes that callers hold the kvm->lock. This was true for all callers except svm_register_enc_region since it does not originate
This doesn't actually state what change is being made, it only describes the problem. I'd also reword these sentences to avoid talking about assumptions, and instead use stronger language.
from svm_mem_enc_op. Also added lockdep annotation to help prevent
s/Also added/Add, i.e. describe what the patch is doing, not what you did in the past.
E.g.
Grab kvm->lock before pinning memory when registering an encrypted region; sev_pin_memory() relies on kvm->lock being held to ensure correctness when checking and updating the number of pinned pages.
Add a lockdep assertion to help prevent future regressions.
future regressions.
Tested: Booted SEV enabled VM on host.
Personally I'd just leave this out. Unless stated otherwise, it's implied that you've tested the patch.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Fixes: 116a2214c5173 ("KVM: SVM: Pin guest memory when SEV is active")
Signed-off-by: Peter Gonda <pgonda@google.com>
 arch/x86/kvm/svm.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index afdc5b44fe9f..9884e57f3d0f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1699,6 +1699,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 	struct page **pages;
 	unsigned long first, last;
 
+	lockdep_assert_held(&kvm->lock);
+
 	if (ulen == 0 || uaddr + ulen < uaddr)
 		return NULL;
 
@@ -7228,12 +7230,19 @@ static int svm_register_enc_region(struct kvm *kvm,
 	if (!region)
 		return -ENOMEM;
 
+	mutex_lock(&kvm->lock);
 	region->pages = sev_pin_memory(kvm, range->addr, range->size,
 				       &region->npages, 1);
 	if (!region->pages) {
 		ret = -ENOMEM;
 		goto e_free;
This error path needs to do mutex_unlock().
 	}
 
+	region->uaddr = range->addr;
+	region->size = range->size;
+
+	list_add_tail(&region->list, &sev->regions_list);
+	mutex_unlock(&kvm->lock);
+
 	/*
 	 * The guest may change the memory encryption attribute from C=0 -> C=1
 	 * or vice versa for this memory range. Lets make sure caches are
@@ -7242,13 +7251,6 @@ static int svm_register_enc_region(struct kvm *kvm,
 	 */
 	sev_clflush_pages(region->pages, region->npages);
 
-	region->uaddr = range->addr;
-	region->size = range->size;
-
-	mutex_lock(&kvm->lock);
-	list_add_tail(&region->list, &sev->regions_list);
-	mutex_unlock(&kvm->lock);
-
 	return ret;

e_free:
--
2.30.0.280.ga3ce27912f-goog
On 1/26/21 12:54 PM, Peter Gonda wrote:
sev_pin_memory assumes that callers hold the kvm->lock. This was true for all callers except svm_register_enc_region since it does not originate from svm_mem_enc_op. Also added lockdep annotation to help prevent future regressions.
I'm not exactly sure what the problem is that you're fixing. What is the symptom that you're seeing?
Tested: Booted SEV enabled VM on host.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Fixes: 116a2214c5173 ("KVM: SVM: Pin guest memory when SEV is active")
I can't find this commit. The Linux and KVM trees have it as:
1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
Signed-off-by: Peter Gonda <pgonda@google.com>
arch/x86/kvm/svm.c | 16 +++++++++-------
This patch won't apply, as it has already been a few releases since svm.c was moved to the arch/x86/kvm/svm directory and this function now lives in sev.c.
Thanks, Tom
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index afdc5b44fe9f..9884e57f3d0f 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1699,6 +1699,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
 	struct page **pages;
 	unsigned long first, last;
 
+	lockdep_assert_held(&kvm->lock);
+
 	if (ulen == 0 || uaddr + ulen < uaddr)
 		return NULL;
 
@@ -7228,12 +7230,19 @@ static int svm_register_enc_region(struct kvm *kvm,
 	if (!region)
 		return -ENOMEM;
 
+	mutex_lock(&kvm->lock);
 	region->pages = sev_pin_memory(kvm, range->addr, range->size,
 				       &region->npages, 1);
 	if (!region->pages) {
 		ret = -ENOMEM;
 		goto e_free;
 	}
 
+	region->uaddr = range->addr;
+	region->size = range->size;
+
+	list_add_tail(&region->list, &sev->regions_list);
+	mutex_unlock(&kvm->lock);
+
 	/*
 	 * The guest may change the memory encryption attribute from C=0 -> C=1
 	 * or vice versa for this memory range. Lets make sure caches are
@@ -7242,13 +7251,6 @@ static int svm_register_enc_region(struct kvm *kvm,
 	 */
 	sev_clflush_pages(region->pages, region->npages);
 
-	region->uaddr = range->addr;
-	region->size = range->size;
-
-	mutex_lock(&kvm->lock);
-	list_add_tail(&region->list, &sev->regions_list);
-	mutex_unlock(&kvm->lock);
-
 	return ret;

e_free:
On Tue, Jan 26, 2021, Tom Lendacky wrote:
On 1/26/21 12:54 PM, Peter Gonda wrote:
sev_pin_memory assumes that callers hold the kvm->lock. This was true for all callers except svm_register_enc_region since it does not originate from svm_mem_enc_op. Also added lockdep annotation to help prevent future regressions.
I'm not exactly sure what the problem is that your fixing? What is the symptom that you're seeing?
svm_register_enc_region() calls sev_pin_memory() without holding kvm->lock. If userspace does multiple KVM_MEMORY_ENCRYPT_REG_REGION in parallel, it could circumvent the rlimit(RLIMIT_MEMLOCK) check.