On Tue, 2014-03-18 at 16:55 +0000, Jon Medhurst (Tixy) wrote:
On Tue, 2014-03-18 at 15:32 +0000, Chris Redpath wrote:
The HMP active migration code is functionally identical to the CFS active migration code apart from one flag check. Share the code and make the flag check optional.
Two wrapper functions let callers choose whether the flag check is performed.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
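(For reference, the shared-code scheme appears to boil down to two thin wrappers around a common helper, roughly as sketched below. The helper's signature matches the diff further down; the wrapper names and the exact meaning of the flag are my reading of the patch, not quoted from it.)

/* Common body of CFS and HMP active migration; the flag selects whether
 * the sched_domain flag check done by plain CFS is applied. */
static int __do_active_load_balance_cpu_stop(void *data, bool check_sd_lb_flag);

/* CFS active migration: keep the flag check. */
static int active_load_balance_cpu_stop(void *data)
{
	return __do_active_load_balance_cpu_stop(data, true);
}

/* HMP active migration (illustrative name): skip the flag check. */
static int hmp_active_task_migration_cpu_stop(void *data)
{
	return __do_active_load_balance_cpu_stop(data, false);
}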
This gets build errors if SCHED_HMP is not configured.
The changes below fix the build errors. Chris, are you OK for me to squash that into your patch?
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4e3686b..3431533 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6211,10 +6211,17 @@ out_one_pinned:
 out:
 	return ld_moved;
 }
+
 #ifdef CONFIG_SCHED_HMP
 static unsigned int hmp_idle_pull(int this_cpu);
-#endif
 static int move_specific_task(struct lb_env *env, struct task_struct *pm);
+#else
+inline int move_specific_task(struct lb_env *env, struct task_struct *pm)
+{
+	return 0;
+}
+#endif
+
 /*
  * idle_balance is called by schedule() if this_cpu is about to become
  * idle. Attempts to pull tasks from other CPUs.
@@ -6281,11 +6288,12 @@ static int __do_active_load_balance_cpu_stop(void *data, bool check_sd_lb_flag)
 	int target_cpu = busiest_rq->push_cpu;
 	struct rq *target_rq = cpu_rq(target_cpu);
 	struct sched_domain *sd;
-	struct task_struct *p;
+	struct task_struct *p = NULL;
 
 	raw_spin_lock_irq(&busiest_rq->lock);
+#ifdef CONFIG_SCHED_HMP
 	p = busiest_rq->migrate_task;
-
+#endif
 	/* make sure the requested cpu hasn't gone down in the meantime */
 	if (unlikely(busiest_cpu != smp_processor_id() ||
 		     !busiest_rq->active_balance))