On Tue, 26 Sep 2023 00:43:21 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> On Sun, Sep 24, 2023 at 11:12:30AM +0100, Marc Zyngier wrote:
> > On Sat, 23 Sep 2023 00:08:21 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> > > On Fri, Sep 22, 2023 at 10:32:29PM +0000, Oliver Upton wrote:
> > > > It is possible for multiple vCPUs to fault on the same IPA and attempt to resolve the fault. One of the page table walks will actually update the PTE and the rest will return -EAGAIN per our race detection scheme. KVM elides the TLB invalidation on the racing threads as the return value is nonzero.
> > > >
> > > > Before commit a12ab1378a88 ("KVM: arm64: Use local TLBI on permission relaxation") KVM always used broadcast TLB invalidations when handling permission faults, which had the convenient property of making the stage-2 updates visible to all CPUs in the system. However, now that we do a local invalidation, the TLBI elision can leave vCPUs stuck in a permission fault loop. Remember that the architecture permits the TLB to cache translations that precipitate a permission fault.
> > > The effects of this are slightly overstated (got ahead of myself). -EAGAIN only crops up if the cmpxchg() fails; we return 0 if the PTE didn't need to be updated.
> > >
> > > On the subsequent permission fault we'll do the right thing and invalidate the TLB, so this change is purely an optimization rather than a correctness fix.
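
To make the elision concrete, the pre-fix path has roughly the following shape. This is only a simplified, self-contained sketch: the real code lives in arch/arm64/kvm/hyp/pgtable.c, and the helper names below are made up for illustration.

#include <errno.h>
#include <stdatomic.h>
#include <stdint.h>

typedef _Atomic uint64_t pte_t;

/* Hypothetical stand-in for the local (non-broadcast) TLBI by IPA. */
static void tlbi_local_ipa(uint64_t ipa) { (void)ipa; }

/*
 * Relax permissions on a stage-2 PTE. -EAGAIN means another vCPU updated
 * the PTE first (the race detection scheme); 0 means the PTE was updated
 * or already had the required permissions.
 */
static int stage2_relax_perms(pte_t *ptep, uint64_t old, uint64_t new)
{
	if (atomic_load(ptep) == new)
		return 0;

	if (!atomic_compare_exchange_strong(ptep, &old, new))
		return -EAGAIN;

	return 0;
}

static void handle_perm_fault(pte_t *ptep, uint64_t old, uint64_t new,
			      uint64_t ipa)
{
	int ret = stage2_relax_perms(ptep, old, new);

	/*
	 * The TLBI is elided for any nonzero return, including the
	 * -EAGAIN "lost the race" case. With a broadcast TLBI that was
	 * harmless; with a local TLBI the losing vCPU can go back to the
	 * guest with the stale, permission-faulting translation still
	 * cached and take the fault again.
	 */
	if (!ret)
		tlbi_local_ipa(ipa);
}
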
> > Can you measure the actual effect of this change? In my (limited) experience, I had to actually trick the guest into doing this, and opportunistically invalidating TLBs didn't have any significant benefit.
> Sure. We were debugging some issues of vCPU hangs during post-copy migration, but that's more likely to be an issue with our VMM + out-of-tree code.
>
> Marginal improvements be damned, I'm still somewhat keen on doing the TLB invalidation upon race detection anyway. Going back to the guest is pointless, since in all likelihood we will hit the TLB entry that led to the permission fault in the first place.
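
For the sake of discussion, the change being argued for would look something like this (again only a sketch, reusing the hypothetical helpers from the snippet above):

static void handle_perm_fault_proposed(pte_t *ptep, uint64_t old,
				       uint64_t new, uint64_t ipa)
{
	int ret = stage2_relax_perms(ptep, old, new);

	/*
	 * Also invalidate when we lost the update race: the losing vCPU
	 * has almost certainly got the offending translation cached
	 * locally, so returning to the guest without a TLBI just buys
	 * another trip through the permission fault handler.
	 */
	if (!ret || ret == -EAGAIN)
		tlbi_local_ipa(ipa);
}
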
I guess it completely depends on the size of the TLB. The machines I deal with have a relatively small number of entries, and it doesn't take much to fully evict them.
Now, all of that is probably irrelevant, as there should be little cost to performing the invalidation as long as it is local (unless you trap, but that's another problem).
FWIW:
Acked-by: Marc Zyngier <maz@kernel.org>
M.