The patch below does not apply to the 6.1-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id, to <stable@vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.1.y
git checkout FETCH_HEAD
git cherry-pick -x 8c2e8ac8ad4be68409e806ce1cc78fc7a04539f3
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2023040350-shrouded-scribble-3c39@gregkh' --subject-prefix 'PATCH 6.1.y' HEAD^..
Possible dependencies:
8c2e8ac8ad4b ("KVM: arm64: Check for kvm_vma_mte_allowed in the critical section") 13ec9308a857 ("KVM: arm64: Retry fault if vma_lookup() results become invalid") b0803ba72b55 ("KVM: arm64: Convert FSC_* over to ESR_ELx_FSC_*") 382b5b87a97d ("Merge branch kvm-arm64/mte-map-shared into kvmarm-master/next")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 8c2e8ac8ad4be68409e806ce1cc78fc7a04539f3 Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@kernel.org>
Date: Thu, 16 Mar 2023 17:45:46 +0000
Subject: [PATCH] KVM: arm64: Check for kvm_vma_mte_allowed in the critical section
On page fault, we find out about the VMA that backs the page fault early on, and quickly release the mmap_read_lock. However, using the VMA pointer after the critical section is pretty dangerous, as a teardown may happen in the meantime and the VMA may be long gone.
Move the sampling of the MTE permission early, and NULL-ify the VMA pointer after that, just to be on the safe side.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230316174546.3777507-3-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cd819725193b..3b9d4d24c361 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1218,7 +1218,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 {
 	int ret = 0;
 	bool write_fault, writable, force_pte = false;
-	bool exec_fault;
+	bool exec_fault, mte_allowed;
 	bool device = false;
 	unsigned long mmu_seq;
 	struct kvm *kvm = vcpu->kvm;
@@ -1309,6 +1309,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		fault_ipa &= ~(vma_pagesize - 1);
 
 	gfn = fault_ipa >> PAGE_SHIFT;
+	mte_allowed = kvm_vma_mte_allowed(vma);
+
+	/* Don't use the VMA after the unlock -- it may have vanished */
+	vma = NULL;
 
 	/*
 	 * Read mmu_invalidate_seq so that KVM can detect if the results of
@@ -1379,7 +1383,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	if (fault_status != ESR_ELx_FSC_PERM && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new disallowed VMA */
-		if (kvm_vma_mte_allowed(vma)) {
+		if (mte_allowed) {
 			sanitise_mte_tags(kvm, pfn, vma_pagesize);
 		} else {
 			ret = -EFAULT;
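In essence, the fix enforces a "sample under the lock, then drop the pointer" discipline. The userspace C sketch below models that pattern with a pthread read-write lock standing in for mmap_read_lock(); the fake_vma structure and helper names are illustrative stand-ins, not kernel APIs:

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in for a VMA; not the kernel's struct vm_area_struct. */
struct fake_vma {
	bool mte_allowed;
};

static pthread_rwlock_t fake_mmap_lock = PTHREAD_RWLOCK_INITIALIZER;
static struct fake_vma the_vma = { .mte_allowed = true };

/* Stand-in for vma_lookup(); assumes the caller holds the lock. */
static struct fake_vma *lookup_vma(void)
{
	return &the_vma;
}

static int handle_fault(void)
{
	struct fake_vma *vma;
	bool mte_allowed;

	pthread_rwlock_rdlock(&fake_mmap_lock);	/* mmap_read_lock() */
	vma = lookup_vma();
	if (!vma) {
		pthread_rwlock_unlock(&fake_mmap_lock);
		return -1;
	}

	/* Sample everything needed from the VMA while it is pinned... */
	mte_allowed = vma->mte_allowed;

	/* ...and kill the pointer so it cannot be misused later. */
	vma = NULL;

	pthread_rwlock_unlock(&fake_mmap_lock);	/* mmap_read_unlock() */

	/*
	 * Past this point another thread may tear the VMA down; only the
	 * value sampled inside the critical section is safe to use.
	 */
	if (mte_allowed)
		printf("MTE tags would be sanitised here\n");

	return 0;
}

int main(void)
{
	return handle_fault();
}

The point of the discipline is that after the unlock, only values captured inside the critical section are consulted; the pointer itself is deliberately dead, which is exactly what the NULL-ification in the patch guarantees.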