On Mon, Feb 05, 2024 at 06:05:02PM +0800, Peter Xu wrote:
Shaoqin, Sean,
Apologies for the late comment. I'm trying to remember what I wrote..
On Fri, Feb 02, 2024 at 01:43:32AM -0500, Shaoqin Huang wrote:
Why can sem_vcpu_cont and sem_vcpu_stop have non-zero values? Because dirty_ring_before_vcpu_join() executes sem_post(&sem_vcpu_cont) at the end of each dirty-ring test. That can lead to two cases:
As a possible alternative, would it work if we simply reset all the sems for each run? Then we don't care about the leftovers. E.g. sem_destroy() at the end of run_test(), then always init to 0 at entry.
One more thing when I was reading the code again: I had a feeling that I missed one call to vcpu_handle_sync_stop() for the dirty ring case:
======
@@ -395,8 +395,7 @@ static void dirty_ring_after_vcpu_run(struct kvm_vcpu *vcpu, int ret, int err)
 	/* A ucall-sync or ring-full event is allowed */
 	if (get_ucall(vcpu, NULL) == UCALL_SYNC) {
-		/* We should allow this to continue */
-		;
+		vcpu_handle_sync_stop();
 	} else if (run->exit_reason == KVM_EXIT_DIRTY_RING_FULL ||
 		   (ret == -1 && err == EINTR)) {
 		/* Update the flag first before pause */
======
Otherwise it'll be meaningless for run_test() to set vcpu_sync_stop_requested for the ring test, if the ring test never reads it..
Without the above change, the test will still work (and I assume that's why nobody noticed, including myself..), but IIUC the vcpu can stop later than intended, e.g. not until the ring is full, or when there's a leftover SIGUSR1 around.
With this change, the vcpu can stop earlier, as soon as the main thread requested a stop, which should be what the code wanted to do.
Shaoqin, feel free to have a look there too if you're working on the test.
Thanks,