On 10/17/2016 10:14 AM, Vincent Guittot wrote:
When a task moves from/to a cfs_rq, we set a flag which is then used to propagate the change at parent level (sched_entity and cfs_rq) during next update. If the cfs_rq is throttled, the flag will stay pending until the cfs_rw is unthrottled.
minor nit:
s/cfs_rw/cfs_rq
[...]
@@ -8704,6 +8867,22 @@ static void detach_task_cfs_rq(struct task_struct *p)
 	update_load_avg(se, 0);
 	detach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	/*
+	 * Propagate the detach across the tg tree to make it visible to the
+	 * root
+	 */
+	se = se->parent;
+	for_each_sched_entity(se) {
+		cfs_rq = cfs_rq_of(se);
+
+		if (cfs_rq_throttled(cfs_rq))
+			break;
+
+		update_load_avg(se, UPDATE_TG);
+	}
+#endif
 }
 
 static void attach_entity_cfs_rq(struct sched_entity *se)
@@ -8722,6 +8901,22 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
 	update_load_avg(se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
 	attach_entity_load_avg(cfs_rq, se);
 	update_tg_load_avg(cfs_rq, false);
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	/*
+	 * Propagate the attach across the tg tree to make it visible to the
+	 * root
+	 */
+	se = se->parent;
+	for_each_sched_entity(se) {
+		cfs_rq = cfs_rq_of(se);
+
+		if (cfs_rq_throttled(cfs_rq))
+			break;
+
+		update_load_avg(se, UPDATE_TG);
+	}
+#endif
 }
The 'detach across' and 'attach across' blocks in detach_task_cfs_rq() and attach_entity_cfs_rq() do the same thing, so couldn't you create a function propagate_foo() for them? That would avoid the #ifdef as well.
You could further create in your '[PATCH 1/6 v5] sched: factorize attach entity':
detach_entity_cfs_rq()
{
	update_load_avg()
	detach_entity_load_avg()
	update_tg_load_avg()
	propagate_load_avg()
}
and then we would have:
attach_task_cfs_rq() -> attach_entity_cfs_rq() -> propagate_foo()
detach_task_cfs_rq() -> detach_entity_cfs_rq() -> propagate_foo()
I guess you didn't because it would only be called once per path, but this symmetric approach is easier to remember (at least for me).
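To make the suggestion concrete, a minimal sketch of such a helper, hoisted out of both call sites (the name propagate_entity_cfs_rq is hypothetical; the body is just the common part of the two quoted hunks, so it relies on the usual kernel-internal definitions and is not a standalone program):

```c
#ifdef CONFIG_FAIR_GROUP_SCHED
/*
 * Walk up the tg tree starting from se's parent and make the
 * attach/detach visible to the root, stopping at a throttled
 * cfs_rq exactly as the open-coded loops do today.
 */
static void propagate_entity_cfs_rq(struct sched_entity *se)
{
	struct cfs_rq *cfs_rq;

	se = se->parent;

	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);

		if (cfs_rq_throttled(cfs_rq))
			break;

		update_load_avg(se, UPDATE_TG);
	}
}
#else
static void propagate_entity_cfs_rq(struct sched_entity *se) { }
#endif
```

With an empty !CONFIG_FAIR_GROUP_SCHED stub, both detach_task_cfs_rq() and attach_entity_cfs_rq() could call it unconditionally and the #ifdef would disappear from the callers.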
[...]