Commit f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated") introduced a change that results in a circular locking dependency when a Secure Execution guest configured with crypto devices is started. The problem arose because that patch moved the setting of the guest's AP masks under the protection of matrix_dev->lock when the vfio_ap driver is notified that the KVM pointer has been set. Since it is not critical that the guest's AP masks be set or cleared at the exact moment the driver is notified, the masks will no longer be updated under matrix_dev->lock. The lock is still necessary for the setting/unsetting of the KVM pointer, however, so that will remain in place.
The dependency chain for the circular lockdep resolved by this patch is:
#2: vfio_ap_mdev_group_notifier:  kvm->lock --> matrix_dev->lock
#1: handle_pqap:                  matrix_dev->lock --> kvm_vcpu_ioctl: vcpu->mutex
#0: kvm_s390_cpus_to_pv:          vcpu->mutex --> kvm_vm_ioctl: kvm->lock
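Read as ordering edges ("while holding FROM, TO was acquired"), the three traces above form a closed loop. The following toy user-space checker over those edges (illustrative strings only, not kernel code) is a minimal sketch of why lockdep reports this as a potential deadlock:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The three orderings lockdep recorded: "while holding FROM, TO was acquired". */
struct edge { const char *from, *to; };

static const struct edge edges[] = {
    { "kvm->lock",        "matrix_dev->lock" }, /* #2 vfio_ap_mdev_group_notifier */
    { "matrix_dev->lock", "vcpu->mutex"      }, /* #1 handle_pqap                 */
    { "vcpu->mutex",      "kvm->lock"        }, /* #0 kvm_s390_cpus_to_pv         */
};

/* Walk the ordering graph from a starting lock; returning to the start
 * means the recorded orderings form a cycle, i.e. a possible deadlock. */
static int has_cycle(const char *start)
{
    const char *cur = start;

    for (int steps = 0; steps < 3; steps++) {
        const char *next = NULL;

        for (size_t i = 0; i < sizeof(edges) / sizeof(edges[0]); i++)
            if (strcmp(edges[i].from, cur) == 0)
                next = edges[i].to;
        if (!next)
            return 0;   /* dead end: no ordering recorded from cur */
        cur = next;
        if (strcmp(cur, start) == 0)
            return 1;   /* walked back to where we began */
    }
    return 0;
}
```

This is a drastic simplification of lockdep's class-based dependency graph, but it captures the shape of the report: each of the three locks reaches itself through the other two.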
Tony Krowiak (1): s390/vfio-ap: fix circular lockdep when setting/clearing crypto masks
 drivers/s390/crypto/vfio_ap_ops.c | 75 ++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 30 deletions(-)
This patch fixes a circular locking dependency in the CI introduced by commit f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated"). The lockdep only occurs when starting a Secure Execution guest. Crypto virtualization (vfio_ap) is not yet supported for SE guests; however, in order to avoid CI errors, this fix is being provided.
The circular lockdep was introduced when the masks in the guest's APCB were taken under the matrix_dev->lock. While the lock is definitely needed to protect the setting/unsetting of the KVM pointer, it is not necessarily critical for setting the masks, so this will not be done under protection of the matrix_dev->lock.
Fixes: f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated")
Cc: stable@vger.kernel.org
Signed-off-by: Tony Krowiak akrowiak@linux.ibm.com
---
 drivers/s390/crypto/vfio_ap_ops.c | 75 ++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 30 deletions(-)
diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
index 41fc2e4135fe..f4e19aa2acb9 100644
--- a/drivers/s390/crypto/vfio_ap_ops.c
+++ b/drivers/s390/crypto/vfio_ap_ops.c
@@ -322,6 +322,20 @@ static void vfio_ap_matrix_init(struct ap_config_info *info,
 	matrix->adm_max = info->apxa ? info->Nd : 15;
 }
+static bool vfio_ap_mdev_has_crycb(struct ap_matrix_mdev *matrix_mdev)
+{
+	return (matrix_mdev->kvm && matrix_mdev->kvm->arch.crypto.crycbd);
+}
+
+static void vfio_ap_mdev_commit_apcb(struct ap_matrix_mdev *matrix_mdev)
+{
+	if (vfio_ap_mdev_has_crycb(matrix_mdev))
+		kvm_arch_crypto_set_masks(matrix_mdev->kvm,
+					  matrix_mdev->matrix.apm,
+					  matrix_mdev->matrix.aqm,
+					  matrix_mdev->matrix.adm);
+}
+
 static int vfio_ap_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
 {
 	struct ap_matrix_mdev *matrix_mdev;
@@ -1028,7 +1042,9 @@ static const struct attribute_group *vfio_ap_mdev_attr_groups[] = {
  * @kvm: reference to KVM instance
  *
  * Verifies no other mediated matrix device has @kvm and sets a reference to
- * it in @matrix_mdev->kvm.
+ * it in @matrix_mdev->kvm. The matrix_dev->lock must not be taken prior to
+ * calling this function; doing so may result in a circular lock dependency
+ * when the kvm->lock is taken to set masks in the guest's APCB.
  *
  * Return 0 if no other mediated matrix device has a reference to @kvm;
  * otherwise, returns an -EPERM.
@@ -1038,6 +1054,8 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
 {
 	struct ap_matrix_mdev *m;
+	mutex_lock(&matrix_dev->lock);
+
 	list_for_each_entry(m, &matrix_dev->mdev_list, node) {
 		if ((m != matrix_mdev) && (m->kvm == kvm))
 			return -EPERM;
 	}
@@ -1046,6 +1064,8 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
 	matrix_mdev->kvm = kvm;
 	kvm_get_kvm(kvm);
 	kvm->arch.crypto.pqap_hook = &matrix_mdev->pqap_hook;
+	mutex_unlock(&matrix_dev->lock);
+	vfio_ap_mdev_commit_apcb(matrix_mdev);
 	return 0;
 }
@@ -1079,13 +1099,27 @@ static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
 	return NOTIFY_DONE;
 }
+/**
+ * vfio_ap_mdev_unset_kvm
+ *
+ * @matrix_mdev: a matrix mediated device
+ *
+ * Clears the masks in the guest's APCB as well as the reference to KVM from
+ * @matrix_mdev. The matrix_dev->lock must not be taken prior to calling this
+ * function; doing so may result in a circular lock dependency when the
+ * kvm->lock is taken to clear the masks in the guest's APCB.
+ */
 static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
 {
-	kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
-	matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
-	vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
-	kvm_put_kvm(matrix_mdev->kvm);
-	matrix_mdev->kvm = NULL;
+	if (matrix_mdev->kvm) {
+		kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
+		mutex_lock(&matrix_dev->lock);
+		matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
+		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
+		kvm_put_kvm(matrix_mdev->kvm);
+		matrix_mdev->kvm = NULL;
+		mutex_unlock(&matrix_dev->lock);
+	}
 }
 static int vfio_ap_mdev_group_notifier(struct notifier_block *nb,
@@ -1098,32 +1132,15 @@ static int vfio_ap_mdev_group_notifier(struct notifier_block *nb,
 		return NOTIFY_OK;
 	matrix_mdev = container_of(nb, struct ap_matrix_mdev, group_notifier);
-	mutex_lock(&matrix_dev->lock);
-
-	if (!data) {
-		if (matrix_mdev->kvm)
-			vfio_ap_mdev_unset_kvm(matrix_mdev);
-		goto notify_done;
-	}
-	ret = vfio_ap_mdev_set_kvm(matrix_mdev, data);
-	if (ret) {
-		notify_rc = NOTIFY_DONE;
-		goto notify_done;
-	}
+	if (!data)
+		vfio_ap_mdev_unset_kvm(matrix_mdev);
+	else
+		ret = vfio_ap_mdev_set_kvm(matrix_mdev, data);
-	/* If there is no CRYCB pointer, then we can't copy the masks */
-	if (!matrix_mdev->kvm->arch.crypto.crycbd) {
+	if (ret)
 		notify_rc = NOTIFY_DONE;
-		goto notify_done;
-	}
-
-	kvm_arch_crypto_set_masks(matrix_mdev->kvm, matrix_mdev->matrix.apm,
-				  matrix_mdev->matrix.aqm,
-				  matrix_mdev->matrix.adm);
-notify_done:
-	mutex_unlock(&matrix_dev->lock);
 	return notify_rc;
 }
@@ -1257,10 +1274,8 @@ static void vfio_ap_mdev_release(struct mdev_device *mdev)
 {
 	struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
-	mutex_lock(&matrix_dev->lock);
 	if (matrix_mdev->kvm)
 		vfio_ap_mdev_unset_kvm(matrix_mdev);
-	mutex_unlock(&matrix_dev->lock);
 	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
 				 &matrix_mdev->iommu_notifier);
On Tue, 9 Feb 2021 14:48:30 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
This patch fixes a circular locking dependency in the CI introduced by commit f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated"). The lockdep only occurs when starting a Secure Execution guest. Crypto virtualization (vfio_ap) is not yet supported for SE guests; however, in order to avoid CI errors, this fix is being provided.
The circular lockdep was introduced when the masks in the guest's APCB were taken under the matrix_dev->lock. While the lock is definitely needed to protect the setting/unsetting of the KVM pointer, it is not necessarily critical for setting the masks, so this will not be done under protection of the matrix_dev->lock.
Fixes: f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated")
Cc: stable@vger.kernel.org
Signed-off-by: Tony Krowiak akrowiak@linux.ibm.com
 drivers/s390/crypto/vfio_ap_ops.c | 75 ++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 30 deletions(-)
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
- kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
- matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
- vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
- kvm_put_kvm(matrix_mdev->kvm);
- matrix_mdev->kvm = NULL;
- if (matrix_mdev->kvm) {
If you're doing setting/unsetting under matrix_dev->lock, is it possible that matrix_mdev->kvm gets unset between here and the next line, as you don't hold the lock?
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
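A user-space sketch of that three-step suggestion, with stub types and a pthread mutex standing in for the real kernel objects (the re-check of matrix_mdev->kvm after relocking is an added assumption to tolerate racing unsets, not something the suggestion spells out):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Stub types modeling the kernel objects involved (hypothetical). */
struct kvm { int refcount; int masks_set; };
struct matrix_mdev { struct kvm *kvm; };

static pthread_mutex_t matrix_dev_lock = PTHREAD_MUTEX_INITIALIZER;

static void kvm_get(struct kvm *k)     { k->refcount++; }
static void kvm_put(struct kvm *k)     { k->refcount--; }
static void clear_masks(struct kvm *k) { k->masks_set = 0; } /* takes kvm->lock in the real code */

/* Pin kvm under the lock, clear masks unlocked, then relock to drop
 * the reference and finish the teardown. */
static void unset_kvm(struct matrix_mdev *m)
{
    struct kvm *kvm = NULL;

    pthread_mutex_lock(&matrix_dev_lock);
    if (m->kvm) {
        kvm = m->kvm;
        kvm_get(kvm);            /* extra reference keeps kvm alive */
    }
    pthread_mutex_unlock(&matrix_dev_lock);

    if (!kvm)
        return;

    clear_masks(kvm);            /* no matrix_dev lock held here */

    pthread_mutex_lock(&matrix_dev_lock);
    if (m->kvm == kvm) {         /* nobody raced us to the unset */
        kvm_put(m->kvm);         /* drop the mdev's own reference */
        m->kvm = NULL;
    }
    kvm_put(kvm);                /* drop our pin */
    pthread_mutex_unlock(&matrix_dev_lock);
}
```

The point of the extra reference is that kvm->lock is never taken while matrix_dev->lock is held, yet the kvm object cannot disappear during the unlocked window.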
kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
mutex_lock(&matrix_dev->lock);
matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
kvm_put_kvm(matrix_mdev->kvm);
matrix_mdev->kvm = NULL;
mutex_unlock(&matrix_dev->lock);
- }
}
On Wed, 10 Feb 2021 11:53:34 +0100 Cornelia Huck cohuck@redhat.com wrote:
On Tue, 9 Feb 2021 14:48:30 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
This patch fixes a circular locking dependency in the CI introduced by commit f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated"). The lockdep only occurs when starting a Secure Execution guest. Crypto virtualization (vfio_ap) is not yet supported for SE guests; however, in order to avoid CI errors, this fix is being provided.
The circular lockdep was introduced when the masks in the guest's APCB were taken under the matrix_dev->lock. While the lock is definitely needed to protect the setting/unsetting of the KVM pointer, it is not necessarily critical for setting the masks, so this will not be done under protection of the matrix_dev->lock.
Fixes: f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated") Cc: stable@vger.kernel.org Signed-off-by: Tony Krowiak akrowiak@linux.ibm.com
drivers/s390/crypto/vfio_ap_ops.c | 75 ++++++++++++++++++------------- 1 file changed, 45 insertions(+), 30 deletions(-)
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
- kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
- matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
- vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
- kvm_put_kvm(matrix_mdev->kvm);
- matrix_mdev->kvm = NULL;
- if (matrix_mdev->kvm) {
If you're doing setting/unsetting under matrix_dev->lock, is it possible that matrix_mdev->kvm gets unset between here and the next line, as you don't hold the lock?
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
I agree, matrix_mdev->kvm can go NULL any time and we are risking a null pointer dereference here.
Another idea would be to do
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;

	mutex_lock(&matrix_dev->lock);
	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		matrix_mdev->kvm = NULL;
		mutex_unlock(&matrix_dev->lock);
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		kvm_put_kvm(kvm);
	}
	mutex_unlock(&matrix_dev->lock);
}
That way only one unset would actually do the unset and cleanup and every other invocation would bail out with only checking matrix_mdev->kvm.
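The "only one unset wins" property can be sanity-checked with a small user-space model (stub types and a pthread mutex stand in for the kernel objects; the s/matrix_mdev->kvm/kvm/ correction raised later in the thread is applied here so the model does not dereference the cleared pointer):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Stub types modeling the kernel objects involved (hypothetical). */
struct kvm { int refcount; int masks_set; int hook_set; };
struct matrix_mdev { struct kvm *kvm; int resets; };

static pthread_mutex_t matrix_dev_lock = PTHREAD_MUTEX_INITIALIZER;

static void unset_kvm(struct matrix_mdev *m)
{
    struct kvm *kvm;

    pthread_mutex_lock(&matrix_dev_lock);
    if (m->kvm) {
        kvm = m->kvm;
        m->kvm = NULL;      /* later callers bail out on this check */
        pthread_mutex_unlock(&matrix_dev_lock);
        kvm->masks_set = 0; /* kvm_arch_crypto_clear_masks() */
        pthread_mutex_lock(&matrix_dev_lock);
        kvm->hook_set = 0;  /* pqap_hook = NULL */
        m->resets++;        /* vfio_ap_mdev_reset_queues() */
        kvm->refcount--;    /* kvm_put_kvm() */
    }
    pthread_mutex_unlock(&matrix_dev_lock);
}
```

Note this models only the idempotence property (a second call is a no-op); the thread goes on to point out a race with unassign_adapter_store() that a single-threaded check like this cannot exercise.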
kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
mutex_lock(&matrix_dev->lock);
matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
kvm_put_kvm(matrix_mdev->kvm);
matrix_mdev->kvm = NULL;
mutex_unlock(&matrix_dev->lock);
- }
}
On Wed, 10 Feb 2021 16:24:29 +0100 Halil Pasic pasic@linux.ibm.com wrote:
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
I agree, matrix_mdev->kvm can go NULL any time and we are risking a null pointer dereference here.
Another idea would be to do
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;

	mutex_lock(&matrix_dev->lock);
	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		matrix_mdev->kvm = NULL;
		mutex_unlock(&matrix_dev->lock);
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
s/matrix_mdev->kvm/kvm
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		kvm_put_kvm(kvm);
	}
	mutex_unlock(&matrix_dev->lock);
}
That way only one unset would actually do the unset and cleanup and every other invocation would bail out with only checking matrix_mdev->kvm.
But the problem with that is that we enable the assign/unassign prematurely, which could interfere with reset_queues(). Forget about it.
On 2/10/21 10:32 AM, Halil Pasic wrote:
On Wed, 10 Feb 2021 16:24:29 +0100 Halil Pasic pasic@linux.ibm.com wrote:
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
I agree, matrix_mdev->kvm can go NULL any time and we are risking a null pointer dereference here.
Another idea would be to do
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;

	mutex_lock(&matrix_dev->lock);
	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		matrix_mdev->kvm = NULL;
		mutex_unlock(&matrix_dev->lock);
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
s/matrix_mdev->kvm/kvm
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		kvm_put_kvm(kvm);
	}
	mutex_unlock(&matrix_dev->lock);
}
That way only one unset would actually do the unset and cleanup and every other invocation would bail out with only checking matrix_mdev->kvm.
But the problem with that is that we enable the assign/unassign prematurely, which could interfere with reset_queues(). Forget about it.
Not sure what you mean by this.
On Wed, 10 Feb 2021 17:05:48 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
On 2/10/21 10:32 AM, Halil Pasic wrote:
On Wed, 10 Feb 2021 16:24:29 +0100 Halil Pasic pasic@linux.ibm.com wrote:
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
I agree, matrix_mdev->kvm can go NULL any time and we are risking a null pointer dereference here.
Another idea would be to do
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;

	mutex_lock(&matrix_dev->lock);
	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		matrix_mdev->kvm = NULL;
		mutex_unlock(&matrix_dev->lock);
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
s/matrix_mdev->kvm/kvm
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		kvm_put_kvm(kvm);
	}
	mutex_unlock(&matrix_dev->lock);
}
That way only one unset would actually do the unset and cleanup and every other invocation would bail out with only checking matrix_mdev->kvm.
But the problem with that is that we enable the assign/unassign prematurely, which could interfere with reset_queues(). Forget about it.
Not sure what you mean by this.
I mean because above I first do

(1) matrix_mdev->kvm = NULL;

and then do

(2) vfio_ap_mdev_reset_queues(matrix_mdev->mdev);

another thread could do

static ssize_t unassign_adapter_store(struct device *dev,
				      struct device_attribute *attr,
				      const char *buf, size_t count)
{
	int ret;
	unsigned long apid;
	struct mdev_device *mdev = mdev_from_dev(dev);
	struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);

	/* If the guest is running, disallow un-assignment of adapter */
	if (matrix_mdev->kvm)
		return -EBUSY;
	...
}

between (1) and (2), and we would not bail out with -EBUSY because !!kvm is false because of (1). That means we would change matrix_mdev->matrix and we would not reset the queues that correspond to the apid that was just removed, because by the time we do the reset_queues, the queues are not in the matrix_mdev->matrix any more.
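That window can be shown with a single-threaded model of just the guard at the top of unassign_adapter_store() (stub types, illustrative only; the real function goes on to modify the matrix):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stub types modeling the kernel objects involved (hypothetical). */
struct kvm { int unused; };
struct matrix_mdev { struct kvm *kvm; };

/* Models only the -EBUSY guard at the top of unassign_adapter_store(). */
static int try_unassign(struct matrix_mdev *m)
{
    if (m->kvm)
        return -EBUSY;  /* guest running: refuse the unassign */
    return 0;           /* would proceed to modify matrix_mdev->matrix */
}
```

Once step (1) has cleared the pointer but step (2) has not yet reset the queues, the guard no longer fires, which is exactly the interference with reset_queues() described above.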
Does that make sense?
On 2/10/21 5:46 PM, Halil Pasic wrote:
On Wed, 10 Feb 2021 17:05:48 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
On 2/10/21 10:32 AM, Halil Pasic wrote:
On Wed, 10 Feb 2021 16:24:29 +0100 Halil Pasic pasic@linux.ibm.com wrote:
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
I agree, matrix_mdev->kvm can go NULL any time and we are risking a null pointer dereference here.
Another idea would be to do
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;

	mutex_lock(&matrix_dev->lock);
	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		matrix_mdev->kvm = NULL;
		mutex_unlock(&matrix_dev->lock);
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
s/matrix_mdev->kvm/kvm
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		kvm_put_kvm(kvm);
	}
	mutex_unlock(&matrix_dev->lock);
}
That way only one unset would actually do the unset and cleanup and every other invocation would bail out with only checking matrix_mdev->kvm.
But the problem with that is that we enable the assign/unassign prematurely, which could interfere with reset_queues(). Forget about it.
Not sure what you mean by this.
I mean because above I first do

(1) matrix_mdev->kvm = NULL;

and then do

(2) vfio_ap_mdev_reset_queues(matrix_mdev->mdev);

another thread could do

static ssize_t unassign_adapter_store(struct device *dev,
				      struct device_attribute *attr,
				      const char *buf, size_t count)
{
	int ret;
	unsigned long apid;
	struct mdev_device *mdev = mdev_from_dev(dev);
	struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);

	/* If the guest is running, disallow un-assignment of adapter */
	if (matrix_mdev->kvm)
		return -EBUSY;
	...
}

between (1) and (2), and we would not bail out with -EBUSY because !!kvm is false because of (1). That means we would change matrix_mdev->matrix and we would not reset the queues that correspond to the apid that was just removed, because by the time we do the reset_queues, the queues are not in the matrix_mdev->matrix any more.
Does that make sense?
Yes, it makes sense. I guess I didn't look closely at your suggestion when I said it was exactly what I implemented after agreeing with Connie. I had a slight difference in my implementation:
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;

	mutex_lock(&matrix_dev->lock);

	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		mutex_unlock(&matrix_dev->lock);
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		kvm->arch.crypto.pqap_hook = NULL;
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		matrix_mdev->kvm = NULL;
		kvm_put_kvm(kvm);
	}

	mutex_unlock(&matrix_dev->lock);
}
In your scenario, the unassignment would fail with -EBUSY because the matrix_mdev->kvm pointer would not have yet been cleared. The other problem with your implementation is that IRQ resources would not get cleared after the reset because the matrix_mdev->kvm pointer would be NULL at that time.
On Thu, 11 Feb 2021 09:21:26 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
Yes, it makes sense. I guess I didn't look closely at your suggestion when I said it was exactly what I implemented after agreeing with Connie. I had a slight difference in my implementation:
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;
mutex_lock(&matrix_dev->lock);
	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		mutex_unlock(&matrix_dev->lock);
The problem with this one is that as soon as we drop the lock here, another thread can in theory execute the critical section below, which drops our reference to kvm via kvm_put_kvm(kvm). Thus when we enter kvm_arch_crypto_clear_mask(), even if we are guaranteed to have a non-null pointer, the pointee is not guaranteed to be around. So like Connie suggested, you better take another reference to kvm in the first critical section.
Regards, Halil
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		kvm->arch.crypto.pqap_hook = NULL;
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		matrix_mdev->kvm = NULL;
		kvm_put_kvm(kvm);
	}
	mutex_unlock(&matrix_dev->lock);
}
On 2/11/21 11:47 AM, Halil Pasic wrote:
On Thu, 11 Feb 2021 09:21:26 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
Yes, it makes sense. I guess I didn't look closely at your suggestion when I said it was exactly what I implemented after agreeing with Connie. I had a slight difference in my implementation:
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;
mutex_lock(&matrix_dev->lock);
	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		mutex_unlock(&matrix_dev->lock);
The problem with this one is that as soon as we drop the lock here, another thread can in theory execute the critical section below, which drops our reference to kvm via kvm_put_kvm(kvm). Thus when we enter kvm_arch_crypto_clear_mask(), even if we are guaranteed to have a non-null pointer, the pointee is not guaranteed to be around. So like Connie suggested, you better take another reference to kvm in the first critical section.
Sure.
Regards, Halil
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		kvm->arch.crypto.pqap_hook = NULL;
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		matrix_mdev->kvm = NULL;
		kvm_put_kvm(kvm);
	}
	mutex_unlock(&matrix_dev->lock);
}
On 2/10/21 10:24 AM, Halil Pasic wrote:
On Wed, 10 Feb 2021 11:53:34 +0100 Cornelia Huck cohuck@redhat.com wrote:
On Tue, 9 Feb 2021 14:48:30 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
This patch fixes a circular locking dependency in the CI introduced by commit f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated"). The lockdep only occurs when starting a Secure Execution guest. Crypto virtualization (vfio_ap) is not yet supported for SE guests; however, in order to avoid CI errors, this fix is being provided.
The circular lockdep was introduced when the masks in the guest's APCB were taken under the matrix_dev->lock. While the lock is definitely needed to protect the setting/unsetting of the KVM pointer, it is not necessarily critical for setting the masks, so this will not be done under protection of the matrix_dev->lock.
Fixes: f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated")
Cc: stable@vger.kernel.org
Signed-off-by: Tony Krowiak akrowiak@linux.ibm.com
 drivers/s390/crypto/vfio_ap_ops.c | 75 ++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 30 deletions(-)

static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
- kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
- matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
- vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
- kvm_put_kvm(matrix_mdev->kvm);
- matrix_mdev->kvm = NULL;
- if (matrix_mdev->kvm) {
If you're doing setting/unsetting under matrix_dev->lock, is it possible that matrix_mdev->kvm gets unset between here and the next line, as you don't hold the lock?
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
I agree, matrix_mdev->kvm can go NULL any time and we are risking a null pointer dereference here.
Another idea would be to do
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
	struct kvm *kvm;

	mutex_lock(&matrix_dev->lock);
	if (matrix_mdev->kvm) {
		kvm = matrix_mdev->kvm;
		matrix_mdev->kvm = NULL;
		mutex_unlock(&matrix_dev->lock);
		kvm_arch_crypto_clear_masks(kvm);
		mutex_lock(&matrix_dev->lock);
		matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
		vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
		kvm_put_kvm(kvm);
	}
	mutex_unlock(&matrix_dev->lock);
}
That way only one unset would actually do the unset and cleanup and every other invocation would bail out with only checking matrix_mdev->kvm.
How ironic, that is exactly what I did after agreeing with Connie.
kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
mutex_lock(&matrix_dev->lock);
matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
kvm_put_kvm(matrix_mdev->kvm);
matrix_mdev->kvm = NULL;
mutex_unlock(&matrix_dev->lock);
- }
}
On 2/10/21 5:53 AM, Cornelia Huck wrote:
On Tue, 9 Feb 2021 14:48:30 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
This patch fixes a circular locking dependency in the CI introduced by commit f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated"). The lockdep only occurs when starting a Secure Execution guest. Crypto virtualization (vfio_ap) is not yet supported for SE guests; however, in order to avoid CI errors, this fix is being provided.
The circular lockdep was introduced when the masks in the guest's APCB were taken under the matrix_dev->lock. While the lock is definitely needed to protect the setting/unsetting of the KVM pointer, it is not necessarily critical for setting the masks, so this will not be done under protection of the matrix_dev->lock.
Fixes: f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated")
Cc: stable@vger.kernel.org
Signed-off-by: Tony Krowiak akrowiak@linux.ibm.com
 drivers/s390/crypto/vfio_ap_ops.c | 75 ++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 30 deletions(-)
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
- kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
- matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
- vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
- kvm_put_kvm(matrix_mdev->kvm);
- matrix_mdev->kvm = NULL;
- if (matrix_mdev->kvm) {
If you're doing setting/unsetting under matrix_dev->lock, is it possible that matrix_mdev->kvm gets unset between here and the next line, as you don't hold the lock?
That is highly unlikely because the only place the matrix_mdev->kvm pointer is cleared is in this function which is called from only two places: the notifier that handles the VFIO_GROUP_NOTIFY_SET_KVM notification when the KVM pointer is cleared; the vfio_ap_mdev_release() function which is called when the mdev fd is closed (i.e., when the guest is shut down). The fact is, with the only end-to-end implementation currently available, the notifier callback is never invoked to clear the KVM pointer because the vfio_ap_mdev_release callback is invoked first and it unregisters the notifier callback.
Having said that, I suppose there is no guarantee that there will not be different userspace clients in the future that do things in a different order. At the very least, it wouldn't hurt to protect against that as you suggest below.
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
mutex_lock(&matrix_dev->lock);
matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
kvm_put_kvm(matrix_mdev->kvm);
matrix_mdev->kvm = NULL;
mutex_unlock(&matrix_dev->lock);
- }
}
On Wed, 10 Feb 2021 15:34:24 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
On 2/10/21 5:53 AM, Cornelia Huck wrote:
On Tue, 9 Feb 2021 14:48:30 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
This patch fixes a circular locking dependency in the CI introduced by commit f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated"). The lockdep only occurs when starting a Secure Execution guest. Crypto virtualization (vfio_ap) is not yet supported for SE guests; however, in order to avoid CI errors, this fix is being provided.
The circular lockdep was introduced when the masks in the guest's APCB were taken under the matrix_dev->lock. While the lock is definitely needed to protect the setting/unsetting of the KVM pointer, it is not necessarily critical for setting the masks, so this will not be done under protection of the matrix_dev->lock.
Fixes: f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated") Cc: stable@vger.kernel.org Signed-off-by: Tony Krowiak akrowiak@linux.ibm.com
drivers/s390/crypto/vfio_ap_ops.c | 75 ++++++++++++++++++------------- 1 file changed, 45 insertions(+), 30 deletions(-)
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
- kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
- matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
- vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
- kvm_put_kvm(matrix_mdev->kvm);
- matrix_mdev->kvm = NULL;
- if (matrix_mdev->kvm) {
If you're doing setting/unsetting under matrix_dev->lock, is it possible that matrix_mdev->kvm gets unset between here and the next line, as you don't hold the lock?
That is highly unlikely because the only place the matrix_mdev->kvm pointer is cleared is in this function which is called from only two places: the notifier that handles the VFIO_GROUP_NOTIFY_SET_KVM notification when the KVM pointer is cleared; the vfio_ap_mdev_release() function which is called when the mdev fd is closed (i.e., when the guest is shut down). The fact is, with the only end-to-end implementation currently available, the notifier callback is never invoked to clear the KVM pointer because the vfio_ap_mdev_release callback is invoked first and it unregisters the notifier callback.
Having said that, I suppose there is no guarantee that there will not be different userspace clients in the future that do things in a different order. At the very least, it wouldn't hurt to protect against that as you suggest below.
Yes, if userspace is able to use the interfaces in a certain way, we should always make sure that nothing bad happens if it does so, even if known userspace applications are well-behaved.
[Can we make an 'evil userspace' test program, maybe? The hardware dependency makes this hard to run, though.]
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
mutex_lock(&matrix_dev->lock);
matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
kvm_put_kvm(matrix_mdev->kvm);
matrix_mdev->kvm = NULL;
mutex_unlock(&matrix_dev->lock);
- }
}
On 2/11/21 7:23 AM, Cornelia Huck wrote:
On Wed, 10 Feb 2021 15:34:24 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
On 2/10/21 5:53 AM, Cornelia Huck wrote:
On Tue, 9 Feb 2021 14:48:30 -0500 Tony Krowiak akrowiak@linux.ibm.com wrote:
This patch fixes a circular locking dependency in the CI introduced by commit f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated"). The lockdep only occurs when starting a Secure Execution guest. Crypto virtualization (vfio_ap) is not yet supported for SE guests; however, in order to avoid CI errors, this fix is being provided.
The circular lockdep was introduced when the masks in the guest's APCB were taken under the matrix_dev->lock. While the lock is definitely needed to protect the setting/unsetting of the KVM pointer, it is not necessarily critical for setting the masks, so this will not be done under protection of the matrix_dev->lock.
Fixes: f21916ec4826 ("s390/vfio-ap: clean up vfio_ap resources when KVM pointer invalidated") Cc: stable@vger.kernel.org Signed-off-by: Tony Krowiak akrowiak@linux.ibm.com
drivers/s390/crypto/vfio_ap_ops.c | 75 ++++++++++++++++++------------- 1 file changed, 45 insertions(+), 30 deletions(-)
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
- kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
- matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
- vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
- kvm_put_kvm(matrix_mdev->kvm);
- matrix_mdev->kvm = NULL;
- if (matrix_mdev->kvm) {
If you're doing setting/unsetting under matrix_dev->lock, is it possible that matrix_mdev->kvm gets unset between here and the next line, as you don't hold the lock?
That is highly unlikely because the only place the matrix_mdev->kvm pointer is cleared is in this function which is called from only two places: the notifier that handles the VFIO_GROUP_NOTIFY_SET_KVM notification when the KVM pointer is cleared; the vfio_ap_mdev_release() function which is called when the mdev fd is closed (i.e., when the guest is shut down). The fact is, with the only end-to-end implementation currently available, the notifier callback is never invoked to clear the KVM pointer because the vfio_ap_mdev_release callback is invoked first and it unregisters the notifier callback.
Having said that, I suppose there is no guarantee that there will not be different userspace clients in the future that do things in a different order. At the very least, it wouldn't hurt to protect against that as you suggest below.
Yes, if userspace is able to use the interfaces in a certain way, we should always make sure that nothing bad happens if it does so, even if known userspace applications are well-behaved.
[Can we make an 'evil userspace' test program, maybe? The hardware dependency makes this hard to run, though.]
Of course it is possible to create such a test program, but off the top of my head, I can't come up with an algorithm that would result in the scenario you have laid out. I haven't dabbled in the QEMU space in quite some time; so, there would also be a bit of a re-learning curve. I'm not sure it would be worth the effort to take this on given how unlikely it is this scenario can happen, but I will take it into consideration as it is a good idea.
Maybe you could
- grab a reference to kvm while holding the lock
- call the mask handling functions with that kvm reference
- lock again, drop the reference, and do the rest of the processing?
kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
mutex_lock(&matrix_dev->lock);
matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
kvm_put_kvm(matrix_mdev->kvm);
matrix_mdev->kvm = NULL;
mutex_unlock(&matrix_dev->lock);
- }
}