On Tue, 2013-04-23 at 09:52 +0200, Vincent Guittot wrote:
static inline unsigned long *rq_nohz_flags(int cpu)
{
	return &rcu_dereference(cpu_rq(cpu)->sd)->nohz_flags;
}

	if (!test_bit(0, rq_nohz_flags(cpu)))
		return;
	clear_bit(0, rq_nohz_flags(cpu));
AFAICT, if we use different rcu_dereference() calls for modifying nohz_flags and for updating nr_busy_cpus, we open a window where the two can get out of sync, don't we?
Oh right, we need to call rq_nohz_flags() once and use the pointer twice.
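Something like this, as a rough sketch (same helper and bit number as in the snippet above; untested):

	unsigned long *flags = rq_nohz_flags(cpu);	/* single rcu_dereference() */

	if (!test_bit(0, flags))
		return;
	clear_bit(0, flags);				/* same sched_domain as the test */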
+unlock:
 	rcu_read_unlock();
 }
@@ -5409,14 +5416,21 @@ void set_cpu_sd_state_idle(void)
 {
 	struct sched_domain *sd;
 	int cpu = smp_processor_id();
 
-	if (test_bit(NOHZ_IDLE, nohz_flags(cpu)))
-		return;
-	set_bit(NOHZ_IDLE, nohz_flags(cpu));
+	int first_nohz_idle = 1;
 
 	rcu_read_lock();
-	for_each_domain(cpu, sd)
+	for_each_domain(cpu, sd) {
+		if (first_nohz_idle) {
+			if (test_bit(NOHZ_IDLE, &sd->nohz_flags))
+				goto unlock;
+
+			set_bit(NOHZ_IDLE, &sd->nohz_flags);
+			first_nohz_idle = 0;
+		}
+
 		atomic_dec(&sd->groups->sgp->nr_busy_cpus);
+	}
+unlock:
Same here, .. why on earth do it for every sched_domain for that cpu?
The update of rq_nohz_flags is done only once, on the top-level sched_domain, while nr_busy_cpus is updated at each sched_domain level so that we have quick access to the number of busy CPUs when we check whether an idle load balance needs to be kicked.
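For reference, the consumer side is roughly the following (a sketch modeled on the existing kick path in nohz_kick_needed(); the other kick conditions are left out and the local names are only illustrative):

	int kick = 0;

	rcu_read_lock();
	for_each_domain(cpu, sd) {
		int nr_busy = atomic_read(&sd->groups->sgp->nr_busy_cpus);

		/*
		 * A shared-cache level with more than one busy cpu is
		 * enough to decide, without walking the runqueues of
		 * the whole domain.
		 */
		if ((sd->flags & SD_SHARE_PKG_RESOURCES) && nr_busy > 1) {
			kick = 1;
			break;
		}
	}
	rcu_read_unlock();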
Ah, I didn't see the whole first_nohz_idle thing.. but why did you place it inside the loop in the first place? Wouldn't GCC be able to avoid the double cpu_rq(cpu)->sd dereference using CSE? Argh no,.. it's got an ACCESS_ONCE() in it that defeats GCC.
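For the record, the reason CSE cannot help: rcu_dereference() does its pointer load through ACCESS_ONCE(), and GCC must not merge volatile accesses. Roughly:

/* ACCESS_ONCE(), from <linux/compiler.h>, is just a volatile cast: */
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

/*
 * rcu_dereference() loads the pointer through ACCESS_ONCE(), so each
 * rq_nohz_flags(cpu) call above is a separate volatile load of
 * cpu_rq(cpu)->sd; caching the result in a local variable is the only
 * way to get a single dereference.
 */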
Bothersome..
I'd almost write it like:
	struct sched_domain *sd;

	rcu_read_lock();
	sd = rcu_dereference_check_sched_domain(cpu_rq(cpu)->sd);

	if (sd->nohz_idle)
		goto unlock;
	xchg(&sd->nohz_idle, 1); /* do we actually need atomics here? */

	for (; sd; sd = sd->parent)
		atomic_dec(&sd->groups->sgp->nr_busy_cpus);
unlock:
	rcu_read_unlock();
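And the busy side (the snippet at the top of the mail) would then mirror it; a rough sketch, assuming an int nohz_idle field in struct sched_domain and adding a NULL check since cpu_rq(cpu)->sd can be swapped out under us, not an actual patch:

void set_cpu_sd_state_busy(void)
{
	struct sched_domain *sd;
	int cpu = smp_processor_id();

	rcu_read_lock();
	sd = rcu_dereference_check_sched_domain(cpu_rq(cpu)->sd);

	if (!sd || !sd->nohz_idle)
		goto unlock;
	xchg(&sd->nohz_idle, 0);	/* or a plain store, if atomics turn out unnecessary */

	for (; sd; sd = sd->parent)
		atomic_inc(&sd->groups->sgp->nr_busy_cpus);
unlock:
	rcu_read_unlock();
}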