From: Alex Shi <alex.shi@intel.com>
For power aware balancing, we care about the sched domain's and sched group's utilization, so add sd_lb_stats.sd_util and sg_lb_stats.group_util.

We also want to know which group is the busiest while still having the capacity to handle more tasks, so add sd_lb_stats.group_leader.
Signed-off-by: Alex Shi <alex.shi@intel.com>
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
---
 kernel/sched/fair.c | 9 +++++++++
 1 file changed, 9 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 681ad06..3d6d081 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5593,6 +5593,9 @@ struct sg_lb_stats {
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
 #endif
+#ifdef CONFIG_SCHED_POWER
+	unsigned int group_util;	/* sum utilization of group */
+#endif
 };
 
 /*
@@ -5608,6 +5611,12 @@ struct sd_lb_stats {
 	struct sg_lb_stats busiest_stat;/* Statistics of the busiest group */
 	struct sg_lb_stats local_stat;	/* Statistics of the local group */
+
+#ifdef CONFIG_SCHED_POWER
+	/* Variables of power aware scheduling */
+	unsigned int sd_util;		/* sum utilization of this domain */
+	struct sched_group *group_leader; /* Group which relieves group_min */
+#endif
 };
 
 static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
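For reference, the new group_util field is only declared here; the accumulation itself is expected from a later patch in the series. Below is a minimal stand-alone sketch of the intended bookkeeping, using illustrative struct and function names (cpu_stat, group_stat, update_group_util) rather than the kernel's real ones:

```c
/* Illustrative stand-in for per-CPU utilization data. */
struct cpu_stat {
	unsigned int util;		/* utilization of one CPU */
};

/* Illustrative stand-in for sg_lb_stats. */
struct group_stat {
	unsigned int group_util;	/* sum utilization of the group */
};

/*
 * Sum per-CPU utilization into the group total, analogous to how
 * update_sg_lb_stats() walks the CPUs of a group to build load sums.
 */
static void update_group_util(struct group_stat *sgs,
			      const struct cpu_stat *cpus, int nr)
{
	int i;

	sgs->group_util = 0;
	for (i = 0; i < nr; i++)
		sgs->group_util += cpus[i].util;
}
```

The domain-level sd_util would then be the analogous sum of group_util over the domain's groups, and group_leader would be chosen among groups whose summed utilization still leaves spare capacity.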