Hi Peter/Ingo,
This set contains a few more minor fixes that I could find for the code responsible for creating sched domains. They are rebased on top of my earlier fixes:
Part 1: https://lkml.org/lkml/2013/6/4/253
Part 2: https://lkml.org/lkml/2013/6/10/141
They should be applied in this order to avoid conflicts.
My study of "How scheduling domains are created" is almost over now, so this is probably my last patchset of fixes related to scheduling domains.

Sorry for the three separate sets; I sent each one out as soon as I had a few patches sitting in my tree.
Viresh Kumar (3):
  sched: Use cached value of span instead of calling sched_domain_span()
  sched: don't call get_group() for covered cpus
  sched: remove WARN_ON(!sd) from init_sched_groups_power()
 kernel/sched/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
At the beginning of build_sched_groups() we call sched_domain_span() and cache its return value in span. A few statements later we call it again to get the same pointer.

Let's use the cached value instead, as it hasn't changed in between.
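For reference, the relevant portion of the function currently reads roughly as follows (a paraphrased sketch from memory of this era's kernel/sched/core.c; details may differ slightly):

	static int
	build_sched_groups(struct sched_domain *sd, int cpu)
	{
		struct sched_group *first = NULL, *last = NULL;
		struct sd_data *sdd = sd->private;
		const struct cpumask *span = sched_domain_span(sd);	/* cached here */
		struct cpumask *covered;
		int i;

		get_group(cpu, sdd, &sd->groups);
		atomic_inc(&sd->groups->ref);

		/* ... the same value is fetched again a few statements later: */
		if (cpu != cpumask_first(sched_domain_span(sd)))
			return 0;
		/* ... */
	}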
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e585e10..4abc743 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5334,7 +5334,7 @@ build_sched_groups(struct sched_domain *sd, int cpu)
 	get_group(cpu, sdd, &sd->groups);
 	atomic_inc(&sd->groups->ref);
 
-	if (cpu != cpumask_first(sched_domain_span(sd)))
+	if (cpu != cpumask_first(span))
 		return 0;
 
 	lockdep_assert_held(&sched_domains_mutex);
In build_sched_groups() we don't need to call get_group() for cpus that are already covered by previous iterations. So call get_group() only after checking whether the cpu is covered.
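For reference, get_group() currently reads roughly as follows (a paraphrased sketch from memory of this era's kernel/sched/core.c; details may differ slightly). The atomic_set() in the sg branch is the side effect that matters for the discussion below:

	static int get_group(int cpu, struct sd_data *sdd, struct sched_group **sg)
	{
		struct sched_domain *sd = *per_cpu_ptr(sdd->sd, cpu);
		struct sched_domain *child = sd->child;

		/* groups of sd are spans of sd's children */
		if (child)
			cpu = cpumask_first(sched_domain_span(child));

		if (sg) {
			*sg = *per_cpu_ptr(sdd->sg, cpu);
			(*sg)->sgp = *per_cpu_ptr(sdd->sgp, cpu);

			/* side effect: marks the group power struct as claimed */
			atomic_set(&(*sg)->sgp->ref, 1);
		}

		return cpu;
	}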
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 kernel/sched/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4abc743..27842f5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5344,12 +5344,12 @@ build_sched_groups(struct sched_domain *sd, int cpu)
 
 	for_each_cpu(i, span) {
 		struct sched_group *sg;
-		int group = get_group(i, sdd, &sg);
-		int j;
+		int group, j;
 
 		if (cpumask_test_cpu(i, covered))
 			continue;
 
+		group = get_group(i, sdd, &sg);
 		cpumask_clear(sched_group_cpus(sg));
 		sg->sgp->power = 0;
 		cpumask_setall(sched_group_mask(sg));
On Tue, Jun 11, 2013 at 04:32:44PM +0530, Viresh Kumar wrote:
> In build_sched_groups() we don't need to call get_group() for cpus that
> are already covered by previous iterations. So call get_group() only
> after checking whether the cpu is covered.
Aside from not needing it; doing it would mark the group used and eventually leak it, since we wouldn't connect it and so couldn't find it again to free it, right?

So effectively this fixes a memory leak?
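For reference, the "mark used" bookkeeping Peter refers to lives in claim_allocations(), sketched below (paraphrased from memory of this era's kernel/sched/core.c; details may differ slightly). A nonzero ref makes it drop the per-cpu pointer, so an allocation that was claimed but never linked into a group list has no remaining owner to free it:

	static void claim_allocations(int cpu, struct sched_domain *sd)
	{
		struct sd_data *sdd = sd->private;

		WARN_ON_ONCE(*per_cpu_ptr(sdd->sd, cpu) != sd);
		*per_cpu_ptr(sdd->sd, cpu) = NULL;

		/*
		 * A nonzero ref means the allocation was claimed: the per-cpu
		 * pointer is dropped so __sdt_free() won't free it, and the
		 * group list becomes the only path back to the memory.
		 */
		if (atomic_read(&(*per_cpu_ptr(sdd->sg, cpu))->ref))
			*per_cpu_ptr(sdd->sg, cpu) = NULL;

		if (atomic_read(&(*per_cpu_ptr(sdd->sgp, cpu))->ref))
			*per_cpu_ptr(sdd->sgp, cpu) = NULL;
	}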
On 18 June 2013 15:33, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, Jun 11, 2013 at 04:32:44PM +0530, Viresh Kumar wrote:
>> In build_sched_groups() we don't need to call get_group() for cpus that
>> are already covered by previous iterations. So call get_group() only
>> after checking whether the cpu is covered.
>
> Aside from not needing it; doing it would mark the group used and
> eventually leak it, since we wouldn't connect it and so couldn't find it
> again to free it, right?
>
> So effectively this fixes a memory leak?
Exactly. To be precise: in cases where sg->cpumask contained more than one cpu (for any topology level), this patch helps free sg's memory for all cpus except the group leader.
May I update the logs again?
On Tue, Jun 18, 2013 at 03:45:15PM +0530, Viresh Kumar wrote:
> Exactly. To be precise: in cases where sg->cpumask contained more than
> one cpu (for any topology level), this patch helps free sg's memory for
> all cpus except the group leader.
>
> May I update the logs again?
Sure. Just send a new patch and I'll substitute.
On 18 June 2013 20:49, Peter Zijlstra <peterz@infradead.org> wrote:
> Sure. Just send a new patch and I'll substitute.
This is the new log (patch is attached):
sched: Fix memory leakage in build_sched_groups()
In build_sched_groups() we don't need to call get_group() for cpus that are already covered by previous iterations. Calling get_group() would mark the group used and eventually leak it, since we would never connect it and so never find it again to free it.

This can only happen in cases where sg->cpumask contains more than one cpu (for any topology level). With this patch, sg's memory is freed for all cpus except the group leader, as the group is no longer marked used.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 kernel/sched/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
On Tue, Jun 18, 2013 at 08:59:39PM +0530, Viresh Kumar wrote:
> This is the new log (patch is attached):
>
> sched: Fix memory leakage in build_sched_groups()
Thanks!
sd can't be NULL in init_sched_groups_power(), so checking it for NULL isn't useful. And if the check were needed, the code would have to be rearranged anyway: we have already dereferenced the (supposedly invalid) sd pointer to get sg before the check runs: sg = sd->groups.
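If the check were ever to be kept, the dereference would have to move below it, i.e. something like the following (a hypothetical rearrangement shown for illustration only, not what this patch does; note that WARN_ON() returns the condition it tested):

	static void init_sched_groups_power(int cpu, struct sched_domain *sd)
	{
		struct sched_group *sg;

		/* hypothetical: bail out before touching sd */
		if (WARN_ON(!sd))
			return;

		sg = sd->groups;
		WARN_ON(!sg);
		/* ... */
	}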
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 27842f5..c949f6d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5387,7 +5387,7 @@ static void init_sched_groups_power(int cpu, struct sched_domain *sd)
 {
 	struct sched_group *sg = sd->groups;
 
-	WARN_ON(!sd || !sg);
+	WARN_ON(!sg);
 
 	do {
 		sg->group_weight = cpumask_weight(sched_group_cpus(sg));
On Tue, Jun 11, 2013 at 04:32:45PM +0530, Viresh Kumar wrote:
> sd can't be NULL in init_sched_groups_power(), so checking it for NULL
> isn't useful. And if the check were needed, the code would have to be
> rearranged anyway: we have already dereferenced the (supposedly invalid)
> sd pointer to get sg before the check runs: sg = sd->groups.
Err on the side of paranoia :-)