Hi Namhyung,
I have rewritten the patch as follows. hackbench/aim9 doesn't show a clean performance change; we do get some benefit, but it is very slight. :) BTW, it still needs another patch before this one can be applied. This is just to show the logic.
===========
From 145ff27744c8ac04eda056739fe5aa907a00877e Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@intel.com>
Date: Fri, 11 Jan 2013 16:49:03 +0800
Subject: [PATCH 3/7] sched: select_idle_sibling optimization
The current logic in this function insists on waking the task up in a totally idle group; otherwise it falls back to the previous cpu.
Or the current cpu, depending on the result of wake_affine(), right?
The new logic tries to wake the task up on any idle cpu in the same cpu socket (the same sd_llc), with an idle cpu in a smaller domain taking priority.
But what about the SMT domain?
The previous approach also descended to the SMT domain. Here we start from the SMT domain.
You could check /proc/schedstat to see which domains a cpu is part of; the SMT domain happens to be domain0. As far as I know, for_each_lower_domain will descend down to domain0.
I mean, it seems that the code prefers running a task on an idle cpu that is a sibling thread in the same core rather than running it on an idle cpu in another idle core. I guess we didn't do that before.
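To make the search order concrete, here is a small user-space sketch of the idea (plain C; the cpu layout, the domain table and the helper names are all made up for illustration, this is not the kernel code). Because the SMT level is scanned first, an idle sibling thread is picked before an idle cpu in another core:

#include <stdbool.h>
#include <stdio.h>

/* Toy topology: cpus 0/1 are SMT siblings of one core, 2/3 of another. */
static bool cpu_idle[4] = { false, true, false, true };

struct toy_domain {
        const char *name;
        int cpus[5];            /* cpus covered at this level, -1 terminated */
};

/* Domains containing cpu 0, smallest (SMT) first, widening to the LLC. */
static struct toy_domain domains_of_cpu0[] = {
        { "SMT", { 0, 1, -1 } },
        { "LLC", { 0, 1, 2, 3, -1 } },
};

/* Bottom-up scan: an idle SMT sibling is found before an idle cpu
 * in another core, because the SMT level is tried first. */
static int pick_idle_cpu(int prev_cpu)
{
        for (int lvl = 0; lvl < 2; lvl++) {
                const struct toy_domain *d = &domains_of_cpu0[lvl];

                for (int i = 0; d->cpus[i] >= 0; i++)
                        if (cpu_idle[d->cpus[i]]) {
                                printf("found idle cpu %d at %s level\n",
                                       d->cpus[i], d->name);
                                return d->cpus[i];
                        }
        }
        return prev_cpu;        /* nothing idle anywhere: keep prev_cpu */
}

int main(void)
{
        printf("chosen cpu: %d\n", pick_idle_cpu(0));   /* cpu 1, not 3 */
        return 0;
}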
It should help on burst wakeup benchmarks like aim7.
Original-patch-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
 kernel/sched/fair.c | 40 +++++++++++++++++++---------------------
 1 files changed, 19 insertions(+), 21 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e116215..fa40e49 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3253,13 +3253,13 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 /*
  * Try and locate an idle CPU in the sched_domain.
  */
-static int select_idle_sibling(struct task_struct *p)
+static int select_idle_sibling(struct task_struct *p,
+                               struct sched_domain *affine_sd, int sync)
Where are these arguments used?
 {
        int cpu = smp_processor_id();
        int prev_cpu = task_cpu(p);
        struct sched_domain *sd;
        struct sched_group *sg;
-       int i;
        /*
         * If the task is going to be woken-up on this cpu and if it is
@@ -3281,27 +3281,25 @@ static int select_idle_sibling(struct task_struct *p)
        /*
         * Otherwise, iterate the domains and find an elegible idle cpu.
         */
-       sd = rcu_dereference(per_cpu(sd_llc, prev_cpu));
-       for_each_lower_domain(sd) {
+       for_each_domain(prev_cpu, sd) {
Always start from the prev_cpu?
                sg = sd->groups;
                do {
                        if (!cpumask_intersects(sched_group_cpus(sg),
                                                tsk_cpus_allowed(p)))
                                goto next;
                        for_each_cpu(i, sched_group_cpus(sg)) {
                                if (!idle_cpu(i))
                                        goto next;
                        }
                        prev_cpu = cpumask_first_and(sched_group_cpus(sg),
                                        tsk_cpus_allowed(p));
                        goto done;
-next:
                        sg = sg->next;
                } while (sg != sd->groups);
                        int nr_busy = atomic_read(&sg->sgp->nr_busy_cpus);
                        int i;

                        /* no idle cpu in the group */
                        if (nr_busy == sg->group_weight)
                                continue;
Maybe we can skip the local group: since it's a bottom-up search, we already know from the prior iteration that there's no idle cpu in the lower domain.
We could have done this with the current logic, because it checks for an *idle* group; the local group would definitely fail that test. But here we need to check the local group as well, because we are looking for an idle cpu.
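As a rough illustration of the difference (a plain C toy; the group layout and helper names below are invented, this is not the kernel code): the old test is "the whole group is idle", which the local group can never pass once the lower level has already failed, while the new test is "the group holds at least one idle cpu", which the local group may still pass.

#include <stdbool.h>
#include <stdio.h>

/* Toy group of 2 SMT siblings: cpu 0 is busy, cpu 1 is idle. */
static bool cpu_idle[2] = { false, true };

/* Old-style test: only accept a group whose cpus are all idle. */
static bool group_fully_idle(void)
{
        for (int i = 0; i < 2; i++)
                if (!cpu_idle[i])
                        return false;
        return true;
}

/* New-style test: accept the group if it holds at least one idle cpu. */
static int first_idle_cpu(void)
{
        for (int i = 0; i < 2; i++)
                if (cpu_idle[i])
                        return i;
        return -1;
}

int main(void)
{
        /* The local group fails the "fully idle" test but still yields cpu 1. */
        printf("fully idle: %d, first idle cpu: %d\n",
               group_fully_idle(), first_idle_cpu());
        return 0;
}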
Regards
Preeti U Murthy