From: Lai Jiangshan <laijs@linux.alibaba.com>
In kvm_mmu_notifier_invalidate_range_start(), tlbs_dirty is used as:

	need_tlb_flush |= kvm->tlbs_dirty;

with need_tlb_flush's type being int and tlbs_dirty's type being long. This means tlbs_dirty is always truncated to int and its higher 32 bits are discarded. We need to check tlbs_dirty correctly, so this change checks it directly instead of propagating it into need_tlb_flush.

need_tlb_flush is also changed to a bool, because it is used as a boolean and its name starts with "need".

Note: it is _extremely_ unlikely that discarding the higher 32 bits can cause problems in practice. It would require tlbs_dirty to land exactly on a multiple of 2^32 (about 4 billion), and KVM would need to be using shadow paging or be running a nested guest.
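[Editorial illustration, not part of Lai's changelog.] To see the truncation in isolation, here is a minimal userspace C sketch. It assumes an LP64 target where long is 64 bits and int is 32; the variable names merely mimic the kernel's, and the out-of-range conversion to int shown here is implementation-defined by the C standard but is a plain truncation with GCC and Clang.

#include <stdio.h>

int main(void)
{
	/* Mimics kvm->tlbs_dirty sitting exactly on a multiple of 2^32. */
	long tlbs_dirty = 1L << 32;	/* non-zero, so a flush IS needed */
	int need_tlb_flush = 0;

	/*
	 * The buggy pattern: the result of |= is converted back to int,
	 * which drops the upper 32 bits of the long.
	 */
	need_tlb_flush |= tlbs_dirty;
	printf("after |=: need_tlb_flush = %d\n", need_tlb_flush);	/* 0 */

	/* The fixed pattern: test the long directly, nothing is dropped. */
	if (need_tlb_flush || tlbs_dirty)
		printf("checked directly: the flush happens\n");

	return 0;
}

Compiled on x86-64, the first printf reports 0 even though tlbs_dirty is non-zero; that missed flush is exactly what the patch closes.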
Cc: stable@vger.kernel.org
Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
Changed from V1:
	Update the patch and the changelog as Sean Christopherson suggested.
 virt/kvm/kvm_main.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2541a17ff1c4..1c17f3d073cb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -470,7 +470,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
 {
 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
-	int need_tlb_flush = 0, idx;
+	int idx;
+	bool need_tlb_flush;
 
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
@@ -480,11 +481,10 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	 * count is also read inside the mmu_lock critical section.
 	 */
 	kvm->mmu_notifier_count++;
-	need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
-					     range->flags);
-	need_tlb_flush |= kvm->tlbs_dirty;
+	need_tlb_flush = !!kvm_unmap_hva_range(kvm, range->start, range->end,
+					       range->flags);
 	/* we've to flush the tlb before the pages can be freed */
-	if (need_tlb_flush)
+	if (need_tlb_flush || kvm->tlbs_dirty)
 		kvm_flush_remote_tlbs(kvm);
 
 	spin_unlock(&kvm->mmu_lock);
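[Editorial aside, not from the thread.] A note on the !! in the hunk above: it collapses any non-zero integer to 1. Assigning an int to a C99 bool already normalizes the value, so the !! is belt-and-suspenders, but it is common kernel style for making an int-to-bool conversion explicit. A self-contained sketch:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	int ret = 42;	/* e.g. a non-zero "pages were unmapped" result */

	bool via_bang = !!ret;	/* !! maps any non-zero value to exactly 1 */
	bool via_conv = ret;	/* the C99 bool conversion does the same */

	printf("%d %d\n", (int)via_bang, (int)via_conv);	/* prints "1 1" */
	return 0;
}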
Note, you don't actually need to Cc stable@vger.kernel.org when sending the patch. If/when the patch is merged to Linus' tree, the stable tree maintainers, or more accurately their scripts, will automatically pick up the patch and apply it to the relevant stable trees.
You'll probably be getting the following letter from Greg KH any time now :-)
<formletter>
This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
</formletter>
On Tue, Dec 15, 2020, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@linux.alibaba.com>
>
> In kvm_mmu_notifier_invalidate_range_start(), tlbs_dirty is used as:
>
> 	need_tlb_flush |= kvm->tlbs_dirty;
>
> with need_tlb_flush's type being int and tlbs_dirty's type being long. This means tlbs_dirty is always truncated to int and its higher 32 bits are discarded. We need to check tlbs_dirty correctly, so this change checks it directly instead of propagating it into need_tlb_flush.
>
> need_tlb_flush is also changed to a bool, because it is used as a boolean and its name starts with "need".
>
> Note: it is _extremely_ unlikely that discarding the higher 32 bits can cause problems in practice. It would require tlbs_dirty to land exactly on a multiple of 2^32 (about 4 billion), and KVM would need to be using shadow paging or be running a nested guest.
>
> Cc: stable@vger.kernel.org
> Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
> Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
> ---
> Changed from V1:
> 	Update the patch and the changelog as Sean Christopherson suggested.
>
>  virt/kvm/kvm_main.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 2541a17ff1c4..1c17f3d073cb 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -470,7 +470,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  					const struct mmu_notifier_range *range)
>  {
>  	struct kvm *kvm = mmu_notifier_to_kvm(mn);
> -	int need_tlb_flush = 0, idx;
> +	int idx;
> +	bool need_tlb_flush;
It's a bit silly given how small this patch already is, but I do think this should be split into two patches so that the kvm->tlbs_dirty bug fix is fully isolated for backporting. I.e. patch 1/2 would simply be:
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3abcb2ce5b7d..19dae28904f7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -485,9 +485,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	kvm->mmu_notifier_count++;
 	need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
 					     range->flags);
-	need_tlb_flush |= kvm->tlbs_dirty;
 	/* we've to flush the tlb before the pages can be freed */
-	if (need_tlb_flush)
+	if (need_tlb_flush || kvm->tlbs_dirty)
 		kvm_flush_remote_tlbs(kvm);
 
 	spin_unlock(&kvm->mmu_lock);
>  	idx = srcu_read_lock(&kvm->srcu);
>  	spin_lock(&kvm->mmu_lock);
> @@ -480,11 +481,10 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  	 * count is also read inside the mmu_lock critical section.
>  	 */
>  	kvm->mmu_notifier_count++;
> -	need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
> -					     range->flags);
> -	need_tlb_flush |= kvm->tlbs_dirty;
> +	need_tlb_flush = !!kvm_unmap_hva_range(kvm, range->start, range->end,
> +					       range->flags);
>  	/* we've to flush the tlb before the pages can be freed */
> -	if (need_tlb_flush)
> +	if (need_tlb_flush || kvm->tlbs_dirty)
>  		kvm_flush_remote_tlbs(kvm);
>
>  	spin_unlock(&kvm->mmu_lock);
> --
> 2.19.1.6.gb485710b
On Tue, Dec 15, 2020 at 10:44:18AM -0800, Sean Christopherson wrote:
> Note, you don't actually need to Cc stable@vger.kernel.org when sending the patch. If/when the patch is merged to Linus' tree, the stable tree maintainers, or more accurately their scripts, will automatically pick up the patch and apply it to the relevant stable trees.
>
> You'll probably be getting the following letter from Greg KH any time now :-)
Nope, this was fine; I don't mind seeing the patch before it hits Linus's tree (or any tree) at all. It lets me know what to look out for; nothing was done incorrectly here from what I can tell.
thanks,
greg k-h
On 15/12/20 19:44, Sean Christopherson wrote:
> Note, you don't actually need to Cc stable@vger.kernel.org when sending the patch. If/when the patch is merged to Linus' tree, the stable tree maintainers, or more accurately their scripts, will automatically pick up the patch and apply it to the relevant stable trees.
>
> You'll probably be getting the following letter from Greg KH any time now :-)
>
> <formletter>
>
> This is not the correct way to submit patches for inclusion in the stable kernel tree. Please read: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html for how to do this properly.
>
> </formletter>
As far as I know, as long as he sees LKML or something else in Cc he assumes it's an indication _for the maintainer_. Don't underestimate the power of Greg's mailing list filters. :)
Paolo
From: Lai Jiangshan <laijs@linux.alibaba.com>
In kvm_mmu_notifier_invalidate_range_start(), tlbs_dirty is used as:

	need_tlb_flush |= kvm->tlbs_dirty;

with need_tlb_flush's type being int and tlbs_dirty's type being long. This means tlbs_dirty is always truncated to int and its higher 32 bits are discarded. We need to check tlbs_dirty correctly, so this change checks it directly instead of propagating it into need_tlb_flush.

Note: it is _extremely_ unlikely that discarding the higher 32 bits can cause problems in practice. It would require tlbs_dirty to land exactly on a multiple of 2^32 (about 4 billion), and KVM would need to be using shadow paging or be running a nested guest.
Cc: stable@vger.kernel.org
Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
---
Changed from V1:
	Update the patch and the changelog as Sean Christopherson suggested.

Changed from v2:
	don't change the type of need_tlb_flush
 virt/kvm/kvm_main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2541a17ff1c4..3083fb53861d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -482,9 +482,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	kvm->mmu_notifier_count++;
 	need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
 					     range->flags);
-	need_tlb_flush |= kvm->tlbs_dirty;
 	/* we've to flush the tlb before the pages can be freed */
-	if (need_tlb_flush)
+	if (need_tlb_flush || kvm->tlbs_dirty)
 		kvm_flush_remote_tlbs(kvm);
 
 	spin_unlock(&kvm->mmu_lock);
On 17/12/20 16:41, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@linux.alibaba.com>
>
> In kvm_mmu_notifier_invalidate_range_start(), tlbs_dirty is used as:
>
> 	need_tlb_flush |= kvm->tlbs_dirty;
>
> with need_tlb_flush's type being int and tlbs_dirty's type being long. This means tlbs_dirty is always truncated to int and its higher 32 bits are discarded. We need to check tlbs_dirty correctly, so this change checks it directly instead of propagating it into need_tlb_flush.
>
> Note: it is _extremely_ unlikely that discarding the higher 32 bits can cause problems in practice. It would require tlbs_dirty to land exactly on a multiple of 2^32 (about 4 billion), and KVM would need to be using shadow paging or be running a nested guest.
>
> Cc: stable@vger.kernel.org
> Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
> Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
> ---
> Changed from V1:
> 	Update the patch and the changelog as Sean Christopherson suggested.
>
> Changed from v2:
> 	don't change the type of need_tlb_flush
>
>  virt/kvm/kvm_main.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 2541a17ff1c4..3083fb53861d 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -482,9 +482,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>  	kvm->mmu_notifier_count++;
>  	need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end,
>  					     range->flags);
> -	need_tlb_flush |= kvm->tlbs_dirty;
>  	/* we've to flush the tlb before the pages can be freed */
> -	if (need_tlb_flush)
> +	if (need_tlb_flush || kvm->tlbs_dirty)
>  		kvm_flush_remote_tlbs(kvm);
>
>  	spin_unlock(&kvm->mmu_lock);
Queued, thanks.
Paolo