On 8/23/22 22:35, Marc Zyngier wrote:
Heh, yeah, I need to get that out the door. I'll also note that Gavin's changes are still relevant without that series, as we do write-unprotect in parallel at PTE granularity after commit f783ef1c0e82 ("KVM: arm64: Add fast path to handle permission relaxation during dirty logging").
Ah, true. Now if only someone could explain how the whole producer-consumer thing works without a trace of a barrier, that'd be great...
Do you mean this?
void kvm_dirty_ring_push(struct kvm_dirty_ring *ring, u32 slot, u64 offset)
{
	struct kvm_dirty_gfn *entry;

	/* It should never get full */
	WARN_ON_ONCE(kvm_dirty_ring_full(ring));

	entry = &ring->dirty_gfns[ring->dirty_index & (ring->size - 1)];

	entry->slot = slot;
	entry->offset = offset;
	/*
	 * Make sure the data is filled in before we publish this to
	 * the userspace program.  There's no paired kernel-side reader.
	 */
	smp_wmb();
	kvm_dirty_gfn_set_dirtied(entry);
	ring->dirty_index++;
	trace_kvm_dirty_ring_push(ring, slot, offset);
}
The matching smp_rmb() is in userspace.
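For completeness, the consumer pairs with that smp_wmb() by reading the entry's flags with acquire ordering before it looks at slot/offset. What follows is only a sketch of such a consumer, loosely modeled on the harvesting loop in tools/testing/selftests/kvm/dirty_log_test.c; the helper name, its parameters, and the use of GCC atomic builtins are illustrative assumptions, not the actual selftest or QEMU code:

/*
 * Sketch of a userspace harvester for one dirty ring.  The acquire load
 * of ->flags is the read-side barrier that pairs with the kernel's
 * smp_wmb() before kvm_dirty_gfn_set_dirtied().
 */
#include <stdint.h>
#include <linux/kvm.h>	/* struct kvm_dirty_gfn, KVM_DIRTY_GFN_F_* */

static uint32_t harvest_dirty_ring(struct kvm_dirty_gfn *gfns,
				   uint32_t ring_size,	/* power of two */
				   uint32_t *fetch_index)
{
	uint32_t count = 0;

	for (;;) {
		struct kvm_dirty_gfn *cur =
			&gfns[*fetch_index & (ring_size - 1)];

		/*
		 * Acquire: don't read slot/offset until the DIRTY flag
		 * published by the kernel is observed.
		 */
		if (!(__atomic_load_n(&cur->flags, __ATOMIC_ACQUIRE) &
		      KVM_DIRTY_GFN_F_DIRTY))
			break;

		/* cur->slot and cur->offset are stable here; consume them. */

		/*
		 * Release: hand the entry back (to be reclaimed via
		 * KVM_RESET_DIRTY_RINGS) only after we are done with it.
		 */
		__atomic_store_n(&cur->flags, KVM_DIRTY_GFN_F_RESET,
				 __ATOMIC_RELEASE);
		(*fetch_index)++;
		count++;
	}

	return count;
}

The selftests express the same pairing with their own smp_load_acquire()/smp_store_release() wrappers; acquire/release on the flags field covers the smp_rmb() mentioned above.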
Paolo