From: Alex Shi <alex.shi@intel.com>
The current scheduler behavior only considers overall system performance, so it tries to spread tasks across more CPU sockets and cores.

To add power awareness, this patchset introduces a powersaving scheduler policy that uses runnable load utilization in scheduler balancing. The current scheduling behavior is treated as the performance policy.

performance: the current scheduling behaviour; tries to spread tasks across more CPU sockets or cores. Performance oriented.
powersaving: packs tasks into few sched groups until every LCPU in the group is full. Power oriented.
The following patches enable powersaving scheduling in CFS.
Signed-off-by: Alex Shi <alex.shi@intel.com>
[Added CONFIG_SCHED_POWER switch to enable this patch]
Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
---
 kernel/sched/fair.c  | 5 +++++
 kernel/sched/sched.h | 7 +++++++
 2 files changed, 12 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bfa3c86..77da534 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7800,6 +7800,11 @@ static unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task
 	return rr_interval;
 }
 
+#ifdef CONFIG_SCHED_POWER
+/* The default scheduler policy is 'performance'. */
+int __read_mostly sched_balance_policy = SCHED_POLICY_PERFORMANCE;
+#endif
+
 /*
  * All the scheduling class methods:
  */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 579712f..95fc013 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -23,6 +23,13 @@ extern atomic_long_t calc_load_tasks;
 extern long calc_load_fold_active(struct rq *this_rq);
 extern void update_cpu_load_active(struct rq *this_rq);
 
+#ifdef CONFIG_SCHED_POWER
+#define SCHED_POLICY_PERFORMANCE	(0x1)
+#define SCHED_POLICY_POWERSAVING	(0x2)
+
+extern int __read_mostly sched_balance_policy;
+#endif
+
 /*
  * Helpers for converting nanosecond timing to jiffy resolution
  */