- A few trivial renames requested by Catalin
- Pass cluster_id around by value outside of parse_cluster
arm64: topology: Implement basic CPU topology support
arm64: topology: Add support for topology DT bindings
arm64: topology: Tell the scheduler about the relative
arm64: topology: Provide relative power numbers for cores
From: Mark Brown <broonie@linaro.org>
Add basic CPU topology support to arm64, based on the existing pre-v8 code and some work done by Mark Hambleton. This patch does not implement any topology discovery support since that should be based on information from firmware, it merely implements the scaffolding for integration of topology support in the architecture.
The goal is to separate the architecture hookup for providing topology information from the DT parsing in order to ease review and avoid blocking the architecture code (which will be built on by other work) with the DT code review by providing something something simple and basic.
A following patch will implement support for parsing the DT topology bindings for ARM, similar patches will be needed for ACPI.
Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/Kconfig                | 24 +++++++++++
 arch/arm64/include/asm/topology.h | 39 +++++++++++++++++
 arch/arm64/kernel/Makefile        |  1 +
 arch/arm64/kernel/smp.c           | 12 ++++++
 arch/arm64/kernel/topology.c      | 91 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 167 insertions(+)
 create mode 100644 arch/arm64/include/asm/topology.h
 create mode 100644 arch/arm64/kernel/topology.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index dd4327f09ba4..02309c3eec33 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -163,6 +163,30 @@ config SMP
If you don't know what to do here, say N.
+config CPU_TOPOLOGY
+	bool "Support CPU topology definition"
+	depends on SMP
+	default y
+	help
+	  Support CPU topology definition, based on configuration
+	  provided by the firmware.
+
+config SCHED_MC
+	bool "Multi-core scheduler support"
+	depends on CPU_TOPOLOGY
+	help
+	  Multi-core scheduler support improves the CPU scheduler's decision
+	  making when dealing with multi-core CPU chips at a cost of slightly
+	  increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+	bool "SMT scheduler support"
+	depends on CPU_TOPOLOGY
+	help
+	  Improves the CPU scheduler's decision making when dealing with
+	  MultiThreading at a cost of slightly increased overhead in some
+	  places. If unsure say N here.
+
 config NR_CPUS
 	int "Maximum number of CPUs (2-32)"
 	range 2 32
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
new file mode 100644
index 000000000000..6f5270c65a6c
--- /dev/null
+++ b/arch/arm64/include/asm/topology.h
@@ -0,0 +1,39 @@
+#ifndef __ASM_TOPOLOGY_H
+#define __ASM_TOPOLOGY_H
+
+#ifdef CONFIG_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cpu_topology {
+	int thread_id;
+	int core_id;
+	int socket_id;
+	cpumask_t thread_sibling;
+	cpumask_t core_sibling;
+};
+
+extern struct cpu_topology cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu)	(cpu_topology[cpu].socket_id)
+#define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
+#define topology_thread_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+
+#define mc_capable()	(cpu_topology[0].socket_id != -1)
+#define smt_capable()	(cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { }
+static inline void store_cpu_topology(unsigned int cpuid) { }
+
+#endif
+
+#include <asm-generic/topology.h>
+
+#endif /* __ASM_TOPOLOGY_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2d4554b13410..252b62181532 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -20,6 +20,7 @@ arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
 arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND)	+= sleep.o suspend.o
 arm64-obj-$(CONFIG_JUMP_LABEL)		+= jump_label.o
+arm64-obj-$(CONFIG_CPU_TOPOLOGY)	+= topology.o
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 1b7617ab499b..40e20efc13e6 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -114,6 +114,11 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	return ret;
 }
+static void __cpuinit smp_store_cpu_info(unsigned int cpuid)
+{
+	store_cpu_topology(cpuid);
+}
+
 /*
  * This is the secondary CPU boot entry. We're using this CPUs
  * idle thread stack, but a set of temporary page tables.
@@ -152,6 +157,8 @@ asmlinkage void secondary_start_kernel(void)
 	 */
 	notify_cpu_starting(cpu);
+	smp_store_cpu_info(cpu);
+
 	/*
 	 * OK, now it's safe to let the boot CPU continue. Wait for
 	 * the CPU migration code to notice that the CPU is online
@@ -391,6 +398,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	int err;
 	unsigned int cpu, ncores = num_possible_cpus();
+	init_cpu_topology();
+
+	smp_store_cpu_info(smp_processor_id());
+
 	/*
 	 * are we trying to boot more cores than exist?
 	 */
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
new file mode 100644
index 000000000000..980019fefeff
--- /dev/null
+++ b/arch/arm64/kernel/topology.c
@@ -0,0 +1,91 @@
+/*
+ * arch/arm64/kernel/topology.c
+ *
+ * Copyright (C) 2011,2013 Linaro Limited.
+ *
+ * Based on the arm32 version written by Vincent Guittot in turn based on
+ * arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+
+#include <asm/topology.h>
+
+/*
+ * cpu topology table
+ */
+struct cpu_topology cpu_topology[NR_CPUS];
+EXPORT_SYMBOL_GPL(cpu_topology);
+
+const struct cpumask *cpu_coregroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].core_sibling;
+}
+
+static void update_siblings_masks(unsigned int cpuid)
+{
+	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+	int cpu;
+
+	/* update core and thread sibling masks */
+	for_each_possible_cpu(cpu) {
+		cpu_topo = &cpu_topology[cpu];
+
+		if (cpuid_topo->socket_id != cpu_topo->socket_id)
+			continue;
+
+		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+		if (cpu != cpuid)
+			cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+		if (cpuid_topo->core_id != cpu_topo->core_id)
+			continue;
+
+		cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
+		if (cpu != cpuid)
+			cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
+	}
+	smp_wmb();
+}
+
+void store_cpu_topology(unsigned int cpuid)
+{
+	struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
+
+	/* DT should have been parsed by the time we get here */
+	if (cpuid_topo->core_id == -1)
+		pr_info("CPU%u: No topology information configured\n", cpuid);
+	else
+		update_siblings_masks(cpuid);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevent simultaneous write access to cpu_topology array
+ */
+void __init init_cpu_topology(void)
+{
+	unsigned int cpu;
+
+	/* init core mask and power*/
+	for_each_possible_cpu(cpu) {
+		struct cpu_topology *cpu_topo = &(cpu_topology[cpu]);
+
+		cpu_topo->thread_id = -1;
+		cpu_topo->core_id = -1;
+		cpu_topo->socket_id = -1;
+		cpumask_clear(&cpu_topo->core_sibling);
+		cpumask_clear(&cpu_topo->thread_sibling);
+	}
+	smp_wmb();
+}
[adding Vincent in CC, questions related to SCHED MC macros]
Minor comments below.
On Sun, Jan 12, 2014 at 07:20:38PM +0000, Mark Brown wrote:
"something something", one something is enough.
[...]
Is there any reason why we can't rename socket_id to cluster_id ? It won't change our lives but at least we kind of know what it means in ARM world.
Are the two macros above still required in the kernel ? I can't see any usage at present.
Vincent, do you know why they were not removed in commit:
8e7fbcbc22c12414bcc9dfdd683637f58fb32759
I am certainly missing something.
__cpuinit has been (is being) removed from the kernel and probably should be removed from this definition too.
Too many empty lines, one is enough.
I have already commented on this. If this patchset is merged in its entirety that is one thing; if it is not, we are adding include files for nothing. If you have time (and there should be, given that the set missed the merge window) it would be nice to split the includes. I will not nitpick any longer though, so it is up to you.
[...]
You do not need brackets, &cpu_topology[cpu] will do.
Lorenzo
On Mon, Jan 13, 2014 at 04:10:59PM +0000, Lorenzo Pieralisi wrote:
On Sun, Jan 12, 2014 at 07:20:38PM +0000, Mark Brown wrote:
Is there any reason why we can't rename socket_id to cluster_id ? It won't change our lives but at least we kind of know what it means in ARM world.
I really don't care, whatever you guys want.
+#define mc_capable()	(cpu_topology[0].socket_id != -1)
+#define smt_capable()	(cpu_topology[0].thread_id != -1)
They're defined by a bunch of other architectures (including x86). If I had to guess I'd say the architectures are still providing the information so we don't need to go round adding it again if someone comes up with a use for it in the core.
On Mon, Jan 13, 2014 at 04:30:45PM +0000, Mark Brown wrote:
s/socket_id/cluster_id
unless we have a compelling reason to keep the socket_id naming, I do not see it, given that cpu_topology is arch specific anyway.
socket_id means really nothing in ARM world.
Again, Vincent if you see a compelling reason to keep socket_id as in arm32 that I am missing please shout.
Yes, let's keep the macros, just wanted to make sure I got it right.
Lorenzo
On 13 January 2014 18:44, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote:
I don't have any compelling reason, I just used the same name as the other platforms.
Vincent
On 13 January 2014 17:10, Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> wrote:
I think it was not planned to be used only by the scheduler, but since commit 8e7fbcbc22c12414bcc9dfdd683637f58fb32759 we have reached a situation where nobody uses them for the moment.
From: Mark Brown <broonie@linaro.org>
Add support for parsing the explicit topology bindings to discover the topology of the system.
Since it is not currently clear how to map multi-level clusters for the scheduler all leaf clusters are presented to the scheduler at the same level. This should be enough to provide good support for current systems.
Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/kernel/topology.c | 143 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 980019fefeff..7ef0d783ffff 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -17,10 +17,150 @@
 #include <linux/percpu.h>
 #include <linux/node.h>
 #include <linux/nodemask.h>
+#include <linux/of.h>
 #include <linux/sched.h>
#include <asm/topology.h>
+#ifdef CONFIG_OF
+static int __init get_cpu_for_node(struct device_node *node)
+{
+	struct device_node *cpu_node;
+	int cpu;
+
+	cpu_node = of_parse_phandle(node, "cpu", 0);
+	if (!cpu_node)
+		return -1;
+
+	for_each_possible_cpu(cpu) {
+		if (of_get_cpu_node(cpu, NULL) == cpu_node)
+			return cpu;
+	}
+
+	pr_crit("Unable to find CPU node for %s\n", cpu_node->full_name);
+	return -1;
+}
+
+static void __init parse_core(struct device_node *core, int cluster_id,
+			      int core_id)
+{
+	char name[10];
+	bool leaf = true;
+	int i, cpu;
+	struct device_node *t;
+
+	i = 0;
+	do {
+		snprintf(name, sizeof(name), "thread%d", i);
+		t = of_get_child_by_name(core, name);
+		if (t) {
+			leaf = false;
+			cpu = get_cpu_for_node(t);
+			if (cpu >= 0) {
+				cpu_topology[cpu].socket_id = cluster_id;
+				cpu_topology[cpu].core_id = core_id;
+				cpu_topology[cpu].thread_id = i;
+			} else {
+				pr_err("%s: Can't get CPU for thread\n",
+				       t->full_name);
+			}
+		}
+		i++;
+	} while (t);
+
+	cpu = get_cpu_for_node(core);
+	if (cpu >= 0) {
+		if (!leaf) {
+			pr_err("%s: Core has both threads and CPU\n",
+			       core->full_name);
+			return;
+		}
+
+		cpu_topology[cpu].socket_id = cluster_id;
+		cpu_topology[cpu].core_id = core_id;
+	} else if (leaf) {
+		pr_err("%s: Can't get CPU for leaf core\n", core->full_name);
+	}
+}
+
+static void __init parse_cluster(struct device_node *cluster, int depth)
+{
+	char name[10];
+	bool leaf = true;
+	bool has_cores = false;
+	struct device_node *c;
+	static int cluster_id = 0;
+	int core_id = 0;
+	int i;
+
+	/*
+	 * First check for child clusters; we currently ignore any
+	 * information about the nesting of clusters and present the
+	 * scheduler with a flat list of them.
+	 */
+	i = 0;
+	do {
+		snprintf(name, sizeof(name), "cluster%d", i);
+		c = of_get_child_by_name(cluster, name);
+		if (c) {
+			parse_cluster(c, depth + 1);
+			leaf = false;
+		}
+		i++;
+	} while (c);
+
+	/* Now check for cores */
+	i = 0;
+	do {
+		snprintf(name, sizeof(name), "core%d", i);
+		c = of_get_child_by_name(cluster, name);
+		if (c) {
+			has_cores = true;
+
+			if (depth == 0)
+				pr_err("%s: cpu-map children should be clusters\n",
+				       c->full_name);
+
+			if (leaf)
+				parse_core(c, cluster_id, core_id++);
+			else
+				pr_err("%s: Non-leaf cluster with core %s\n",
+				       cluster->full_name, name);
+		}
+		i++;
+	} while (c);
+
+	if (leaf && !has_cores)
+		pr_warn("%s: empty cluster\n", cluster->full_name);
+
+	if (leaf)
+		cluster_id++;
+}
+
+static void __init parse_dt_topology(void)
+{
+	struct device_node *cn;
+
+	cn = of_find_node_by_path("/cpus");
+	if (!cn) {
+		pr_err("No CPU information found in DT\n");
+		return;
+	}
+
+	/*
+	 * If topology is provided as a cpu-map it is essentially a
+	 * root cluster.
+	 */
+	cn = of_find_node_by_name(cn, "cpu-map");
+	if (!cn)
+		return;
+	parse_cluster(cn, 0);
+}
+
+#else
+static inline void parse_dt_topology(void) {}
+#endif
+
 /*
  * cpu topology table
  */
@@ -87,5 +227,8 @@ void __init init_cpu_topology(void)
 		cpumask_clear(&cpu_topo->core_sibling);
 		cpumask_clear(&cpu_topo->thread_sibling);
 	}
+
+	parse_dt_topology();
+
 	smp_wmb();
 }
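For reference, parse_cluster()/parse_core() above walk a cpu-map shaped like the following hypothetical fragment (labels, unit addresses and compatible strings are illustrative, not from the patch); the two leaf clusters would be assigned cluster_id 0 and 1:

```dts
cpus {
	#address-cells = <2>;
	#size-cells = <0>;

	cpu-map {
		cluster0 {
			core0 { cpu = <&CPU0>; };
			core1 { cpu = <&CPU1>; };
		};
		cluster1 {
			core0 { cpu = <&CPU2>; };
			core1 { cpu = <&CPU3>; };
		};
	};

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x0>;
	};

	/* CPU1-CPU3 defined similarly */
};
```

A core with SMT would instead contain thread0/thread1 children, each with its own cpu phandle, which is what the thread%d loop in parse_core() picks up.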
Hi Mark,
apart from a couple of minor nits and a question, it looks fine to me.
On Sun, Jan 12, 2014 at 07:20:39PM +0000, Mark Brown wrote:
You could initialize i at declaration. I can understand why you are doing that explicitly in parse_cluster (two loops, to make the code clearer), but here it does not make much sense to add a line for that.
If we wanted to be very picky, you need to copy "thread" just once (same goes for other strings), but we'd better leave code as is IMHO.
Should we check the MT bit in MPIDR_EL1 before validating threads as well ?
I do not like the idea because this means reliance on MPIDR_EL1 for MT and DT for topology bits, but it might be a worthwhile check.
It is certainly odd to have a DT with threads and an MPIDR_EL1 with the MT bit clear.
static int __initdata cluster_id;
[...]
This comment is a bit misleading, because as you know, (1) topology can only be provided with cpu-map, (2) cpu-map is not a root cluster.
With the changes/comments above pending:
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
On Tue, Jan 14, 2014 at 11:43:37AM +0000, Lorenzo Pieralisi wrote:
On Sun, Jan 12, 2014 at 07:20:39PM +0000, Mark Brown wrote:
I still find it clearer for do { } while loops to have the start condition required for the loop to function right next to the loop. Yes, you can save a line of code but that's about it.
If we wanted to be very picky, you need to copy "thread" just once (same goes for other strings), but we'd better leave code as is IMHO.
That would just make the code more complex, we need to handle tens of cores so just doing i + '0' won't cut it.
Should we check the MT bit in MPIDR_EL1 before validating threads as well ?
I do not like the idea because this means reliance on MPIDR_EL1 for MT and DT for topology bits, but it might be a worthwhile check.
It is certainly odd to have a DT with threads and an MPIDR_EL1 with the MT bit clear.
Checking seems counter to the idea of forcing everyone to provide this information from the firmware in the first place - checking that one bit and ignoring the rest of the information even if it's good would seem perverse.
From: Mark Brown <broonie@linaro.org>
In heterogeneous systems like big.LITTLE systems the scheduler will be able to make better use of the available cores if we provide power numbers to it indicating their relative performance. Do this by parsing the CPU nodes in the DT.
This code currently has no effect as no information on the relative performance of the cores is provided.
Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/kernel/topology.c | 144 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 143 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 7ef0d783ffff..2748b252d4e7 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -19,9 +19,33 @@
 #include <linux/nodemask.h>
 #include <linux/of.h>
 #include <linux/sched.h>
+#include <linux/slab.h>
#include <asm/topology.h>
+/*
+ * cpu power table
+ * This per cpu data structure describes the relative capacity of each core.
+ * On a heteregenous system, cores don't have the same computation capacity
+ * and we reflect that difference in the cpu_power field so the scheduler can
+ * take this difference into account during load balance. A per cpu structure
+ * is preferred because each CPU updates its own cpu_power field during the
+ * load balance except for idle cores. One idle core is selected to run the
+ * rebalance_domains for all idle cores and the cpu_power can be updated
+ * during this sequence.
+ */
+static DEFINE_PER_CPU(unsigned long, cpu_scale);
+
+unsigned long arch_scale_freq_power(struct sched_domain *sd, int cpu)
+{
+	return per_cpu(cpu_scale, cpu);
+}
+
+static void set_power_scale(unsigned int cpu, unsigned long power)
+{
+	per_cpu(cpu_scale, cpu) = power;
+}
+
 #ifdef CONFIG_OF
 static int __init get_cpu_for_node(struct device_node *node)
 {
@@ -89,7 +113,7 @@ static void __init parse_cluster(struct device_node *cluster, int depth)
 	bool leaf = true;
 	bool has_cores = false;
 	struct device_node *c;
-	static int cluster_id = 0;
+	static int cluster_id;
 	int core_id = 0;
 	int i;
@@ -137,9 +161,49 @@ static void __init parse_cluster(struct device_node *cluster, int depth)
 		cluster_id++;
 }
+struct cpu_efficiency {
+	const char *compatible;
+	unsigned long efficiency;
+};
+
+/*
+ * Table of relative efficiency of each processors
+ * The efficiency value must fit in 20bit and the final
+ * cpu_scale value must be in the range
+ *   0 < cpu_scale < 3*SCHED_POWER_SCALE/2
+ * in order to return at most 1 when DIV_ROUND_CLOSEST
+ * is used to compute the capacity of a CPU.
+ * Processors that are not defined in the table,
+ * use the default SCHED_POWER_SCALE value for cpu_scale.
+ */
+static const struct cpu_efficiency table_efficiency[] = {
+	{ NULL, },
+};
+
+static unsigned long *__cpu_capacity;
+#define cpu_capacity(cpu)	__cpu_capacity[cpu]
+
+static unsigned long middle_capacity = 1;
+
+/*
+ * Iterate all CPUs' descriptor in DT and compute the efficiency
+ * (as per table_efficiency). Also calculate a middle efficiency
+ * as close as possible to (max{eff_i} - min{eff_i}) / 2
+ * This is later used to scale the cpu_power field such that an
+ * 'average' CPU is of middle power. Also see the comments near
+ * table_efficiency[] and update_cpu_power().
+ */
 static void __init parse_dt_topology(void)
 {
+	const struct cpu_efficiency *cpu_eff;
 	struct device_node *cn;
+	unsigned long min_capacity = ULONG_MAX;
+	unsigned long max_capacity = 0;
+	unsigned long capacity = 0;
+	int cpu;
+
+	__cpu_capacity = kcalloc(nr_cpu_ids, sizeof(*__cpu_capacity),
+				 GFP_NOWAIT);
 	cn = of_find_node_by_path("/cpus");
 	if (!cn) {
@@ -155,10 +219,84 @@ static void __init parse_dt_topology(void)
 	if (!cn)
 		return;
 	parse_cluster(cn, 0);
+
+	for_each_possible_cpu(cpu) {
+		const u32 *rate;
+		int len;
+
+		/* Too early to use cpu->of_node */
+		cn = of_get_cpu_node(cpu, NULL);
+		if (!cn) {
+			pr_err("Missing device node for CPU %d\n", cpu);
+			continue;
+		}
+
+		for (cpu_eff = table_efficiency; cpu_eff->compatible; cpu_eff++)
+			if (of_device_is_compatible(cn, cpu_eff->compatible))
+				break;
+
+		if (cpu_eff->compatible == NULL) {
+			pr_warn("%s: Unknown CPU type\n", cn->full_name);
+			continue;
+		}
+
+		rate = of_get_property(cn, "clock-frequency", &len);
+		if (!rate || len != 4) {
+			pr_err("%s: Missing clock-frequency property\n",
+			       cn->full_name);
+			continue;
+		}
+
+		capacity = ((be32_to_cpup(rate)) >> 20) * cpu_eff->efficiency;
+
+		/* Save min capacity of the system */
+		if (capacity < min_capacity)
+			min_capacity = capacity;
+
+		/* Save max capacity of the system */
+		if (capacity > max_capacity)
+			max_capacity = capacity;
+
+		cpu_capacity(cpu) = capacity;
+	}
+
+	/* If min and max capacities are equal we bypass the update of the
+	 * cpu_scale because all CPUs have the same capacity. Otherwise, we
+	 * compute a middle_capacity factor that will ensure that the capacity
+	 * of an 'average' CPU of the system will be as close as possible to
+	 * SCHED_POWER_SCALE, which is the default value, but with the
+	 * constraint explained near table_efficiency[].
+	 */
+	if (min_capacity == max_capacity)
+		return;
+	else if (4 * max_capacity < (3 * (max_capacity + min_capacity)))
+		middle_capacity = (min_capacity + max_capacity)
+				>> (SCHED_POWER_SHIFT+1);
+	else
+		middle_capacity = ((max_capacity / 3)
+				>> (SCHED_POWER_SHIFT-1)) + 1;
+
+}
+
+/*
+ * Look for a customed capacity of a CPU in the cpu_topo_data table during the
+ * boot. The update of all CPUs is in O(n^2) for heteregeneous system but the
+ * function returns directly for SMP system.
+ */
+static void update_cpu_power(unsigned int cpu)
+{
+	if (!cpu_capacity(cpu))
+		return;
+
+	set_power_scale(cpu, cpu_capacity(cpu) / middle_capacity);
+
+	pr_info("CPU%u: update cpu_power %lu\n",
+		cpu, arch_scale_freq_power(NULL, cpu));
 }
 #else
 static inline void parse_dt_topology(void) {}
+static inline void update_cpu_power(unsigned int cpuid) {}
 #endif
 /*
@@ -207,6 +345,8 @@ void store_cpu_topology(unsigned int cpuid)
 		pr_info("CPU%u: No topology information configured\n", cpuid);
 	else
 		update_siblings_masks(cpuid);
+
+	update_cpu_power(cpuid);
 }
 /*
@@ -226,6 +366,8 @@ void __init init_cpu_topology(void)
 		cpu_topo->socket_id = -1;
 		cpumask_clear(&cpu_topo->core_sibling);
 		cpumask_clear(&cpu_topo->thread_sibling);
+
+		set_power_scale(cpu, SCHED_POWER_SCALE);
 	}
parse_dt_topology();
On Sun, Jan 12, 2014 at 07:20:40PM +0000, Mark Brown wrote:
[...]
It has to be __initdata, and the line change above does not belong in this patch but patch 1.
[...]
I am wondering why we spit an error for a property that in practice is optional. Either we make it required, or we drop the error output.
Actually this is not defined anywhere apart from the ePAPR, which defines this property as required, but following your attempt to standardize it for ARM, I gather it should be considered optional.
If it is optional, should we really print an error ? (I know it is the same on arm32, I am questioning that code too).
Lorenzo
On Mon, Jan 13, 2014 at 04:40:21PM +0000, Lorenzo Pieralisi wrote:
On Sun, Jan 12, 2014 at 07:20:40PM +0000, Mark Brown wrote:
It has to be __initdata, and the line change above does not belong in this patch but patch 1.
I would really have expected static data from a function marked init to end up marked appropriately, but whatever.
I am wondering why we spit an error for a property that in practice is optional. Either we make it required, or we drop the error output.
It's already standard in the spec we claim to be following...
If it is optional, should we really print an error ? (I know it is the same on arm32, I am questioning that code too).
For big.LITTLE systems with the current implementation this information is required in order to scale the relative performance of the cores - in such a system the maximum frequencies of the cores may vary as well as their type (or indeed theoretically even only their maximum frequency). We could at some future time end up evolving the code so that this information is acquired from cpufreq or somewhere, but that's something that should probably happen kernel wide as part of the scheduler work rather than going off and doing something custom.
On Mon, Jan 13, 2014 at 05:01:56PM +0000, Mark Brown wrote:
I would not expect that.
So is it required ?
I was just referring to this thread, whose outcome is unclear to me.
http://archive.arm.linux.org.uk/lurker/message/20131206.115707.24b095f4.en.h...
I am not questioning why it is needed, I am just asking whether it is optional or not. If it is, getting error messages in the kernel log does not seem correct.
Lorenzo
On Tue, Jan 14, 2014 at 10:12:36AM +0000, Lorenzo Pieralisi wrote:
On Mon, Jan 13, 2014 at 05:01:56PM +0000, Mark Brown wrote:
I would really have expected static data from a function marked init to end up marked appropriately, but whatever.
I would not expect that.
Really? If something is local to a function marked init it seems like the __init ought to carry over to it.
It's already standard in the spec we claim to be following...
So is it required ?
That's what ePAPR says. If that's good decision making on the part of ePAPR or not is a separate question.
I was just referring to this thread, whose outcome is unclear to me.
http://archive.arm.linux.org.uk/lurker/message/20131206.115707.24b095f4.en.h...
At present we don't really have a better way to get the information so we're relying on it; until the scheduler is able to talk to cpufreq not providing this information means we won't be able to provide a relative performance estimate to the scheduler. This means that we probably ought to be telling the user if we couldn't figure out the top frequency for the core.
On Tue, Jan 14, 2014 at 12:13:43PM +0000, Mark Brown wrote:
Yes, for the same reason as static variables declared in a function do not end up in the .text section. You want the variable to be in the .init.data section and the compiler initialize it to 0 for you. If it was not declared as __initdata it would be added to the .bss section and zeroed by the kernel.
It is not the top frequency, it is the current frequency. Leaving the log is fine by me, but actually implies that the ePAPR must be followed, ie the property is "required".
Lorenzo
On Tue, Jan 14, 2014 at 01:23:19PM +0000, Lorenzo Pieralisi wrote:
I would really have expected static data from a function marked init to end up marked appropriately, but whatever.
I would not expect that.
Really? If something is local to a function marked init it seems like the __init ought to carry over to it.
I understand why it might happen from an implementation point of view but still not what I would expect to happen - I'd have expected that annotations applied to a function would be able to automatically do the right thing with their data.
Tweaking the semantics there was half the point of my patch (since having the current frequency makes no sense in the context of FDT or anything else without a running firmware), the other bit was just making this more discoverable since while we say we're following ePAPR I don't think anyone actually looks at it.
From: Mark Brown <broonie@linaro.org>
Provide performance numbers to the scheduler to help it fill the cores in the system on big.LITTLE systems. With the current scheduler this may perform poorly for applications that try to do OpenMP style work over all cores but should help for more common workloads. The current 32 bit ARM implementation provides a similar estimate so this helps ensure that work to improve big.LITTLE systems on ARMv7 systems performs similarly on ARMv8 systems.
The power numbers are the same as for ARMv7 since it seems that the expected differential between the big and little cores is very similar on both ARMv7 and ARMv8. These numbers are just an initial and basic approximation for use with the current scheduler, it is likely that both experience with silicon and ongoing work on improving the scheduler will lead to further tuning. In both ARMv7 and ARMv8 cases the numbers were based on the published DMIPS numbers.
Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/kernel/topology.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 2748b252d4e7..99347a7659c3 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -177,6 +177,8 @@ struct cpu_efficiency {
  * use the default SCHED_POWER_SCALE value for cpu_scale.
  */
 static const struct cpu_efficiency table_efficiency[] = {
+	{ "arm,cortex-a57", 3891 },
+	{ "arm,cortex-a53", 2048 },
 	{ NULL, },
 };
From: Mark Brown broonie@linaro.org
Add basic CPU topology support to arm64, based on the existing pre-v8 code and some work done by Mark Hambleton. This patch does not implement any topology discovery support since that should be based on information from firmware, it merely implements the scaffolding for integration of topology support in the architecture.
The goal is to separate the architecture hookup for providing topology information from the DT parsing in order to ease review and avoid blocking the architecture code (which will be built on by other work) with the DT code review by providing something something simple and basic.
A following patch will implement support for parsing the DT topology bindings for ARM, similar patches will be needed for ACPI.
Signed-off-by: Mark Brown broonie@linaro.org --- arch/arm64/Kconfig | 24 +++++++++++ arch/arm64/include/asm/topology.h | 39 +++++++++++++++++ arch/arm64/kernel/Makefile | 1 + arch/arm64/kernel/smp.c | 12 ++++++ arch/arm64/kernel/topology.c | 91 +++++++++++++++++++++++++++++++++++++++ 5 files changed, 167 insertions(+) create mode 100644 arch/arm64/include/asm/topology.h create mode 100644 arch/arm64/kernel/topology.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index dd4327f09ba4..02309c3eec33 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -163,6 +163,30 @@ config SMP
If you don't know what to do here, say N.
+config CPU_TOPOLOGY
+	bool "Support CPU topology definition"
+	depends on SMP
+	default y
+	help
+	  Support CPU topology definition, based on configuration
+	  provided by the firmware.
+
+config SCHED_MC
+	bool "Multi-core scheduler support"
+	depends on CPU_TOPOLOGY
+	help
+	  Multi-core scheduler support improves the CPU scheduler's decision
+	  making when dealing with multi-core CPU chips at a cost of slightly
+	  increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+	bool "SMT scheduler support"
+	depends on CPU_TOPOLOGY
+	help
+	  Improves the CPU scheduler's decision making when dealing with
+	  MultiThreading at a cost of slightly increased overhead in some
+	  places. If unsure say N here.
+
 config NR_CPUS
 	int "Maximum number of CPUs (2-32)"
 	range 2 32
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
new file mode 100644
index 000000000000..6f5270c65a6c
--- /dev/null
+++ b/arch/arm64/include/asm/topology.h
@@ -0,0 +1,39 @@
+#ifndef __ASM_TOPOLOGY_H
+#define __ASM_TOPOLOGY_H
+
+#ifdef CONFIG_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cpu_topology {
+	int thread_id;
+	int core_id;
+	int socket_id;
+	cpumask_t thread_sibling;
+	cpumask_t core_sibling;
+};
+
+extern struct cpu_topology cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu)	(cpu_topology[cpu].socket_id)
+#define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
+#define topology_thread_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+
+#define mc_capable()	(cpu_topology[0].socket_id != -1)
+#define smt_capable()	(cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { }
+static inline void store_cpu_topology(unsigned int cpuid) { }
+
+#endif
+
+#include <asm-generic/topology.h>
+
+#endif /* _ASM_ARM_TOPOLOGY_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2d4554b13410..252b62181532 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -20,6 +20,7 @@ arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
 arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND)	+= sleep.o suspend.o
 arm64-obj-$(CONFIG_JUMP_LABEL)		+= jump_label.o
+arm64-obj-$(CONFIG_CPU_TOPOLOGY)	+= topology.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 1b7617ab499b..40e20efc13e6 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -114,6 +114,11 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	return ret;
 }
 
+static void __cpuinit smp_store_cpu_info(unsigned int cpuid)
+{
+	store_cpu_topology(cpuid);
+}
+
 /*
  * This is the secondary CPU boot entry. We're using this CPUs
  * idle thread stack, but a set of temporary page tables.
@@ -152,6 +157,8 @@ asmlinkage void secondary_start_kernel(void)
 	 */
 	notify_cpu_starting(cpu);
 
+	smp_store_cpu_info(cpu);
+
 	/*
 	 * OK, now it's safe to let the boot CPU continue.  Wait for
 	 * the CPU migration code to notice that the CPU is online
@@ -391,6 +398,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	int err;
 	unsigned int cpu, ncores = num_possible_cpus();
 
+	init_cpu_topology();
+
+	smp_store_cpu_info(smp_processor_id());
+
+
 	/*
 	 * are we trying to boot more cores than exist?
 	 */
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
new file mode 100644
index 000000000000..980019fefeff
--- /dev/null
+++ b/arch/arm64/kernel/topology.c
@@ -0,0 +1,91 @@
+/*
+ * arch/arm64/kernel/topology.c
+ *
+ * Copyright (C) 2011,2013 Linaro Limited.
+ *
+ * Based on the arm32 version written by Vincent Guittot in turn based on
+ * arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+
+#include <asm/topology.h>
+
+/*
+ * cpu topology table
+ */
+struct cpu_topology cpu_topology[NR_CPUS];
+EXPORT_SYMBOL_GPL(cpu_topology);
+
+const struct cpumask *cpu_coregroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].core_sibling;
+}
+
+static void update_siblings_masks(unsigned int cpuid)
+{
+	struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+	int cpu;
+
+	/* update core and thread sibling masks */
+	for_each_possible_cpu(cpu) {
+		cpu_topo = &cpu_topology[cpu];
+
+		if (cpuid_topo->socket_id != cpu_topo->socket_id)
+			continue;
+
+		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+		if (cpu != cpuid)
+			cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+		if (cpuid_topo->core_id != cpu_topo->core_id)
+			continue;
+
+		cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
+		if (cpu != cpuid)
+			cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
+	}
+	smp_wmb();
+}
+
+void store_cpu_topology(unsigned int cpuid)
+{
+	struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
+
+	/* DT should have been parsed by the time we get here */
+	if (cpuid_topo->core_id == -1)
+		pr_info("CPU%u: No topology information configured\n", cpuid);
+	else
+		update_siblings_masks(cpuid);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevent simultaneous write access to cpu_topology array
+ */
+void __init init_cpu_topology(void)
+{
+	unsigned int cpu;
+
+	/* init core mask and power*/
+	for_each_possible_cpu(cpu) {
+		struct cpu_topology *cpu_topo = &(cpu_topology[cpu]);
+
+		cpu_topo->thread_id = -1;
+		cpu_topo->core_id = -1;
+		cpu_topo->socket_id = -1;
+		cpumask_clear(&cpu_topo->core_sibling);
+		cpumask_clear(&cpu_topo->thread_sibling);
+	}
+	smp_wmb();
+}
[adding Vincent in CC, questions related to SCHED MC macros]
Minor comments below.
On Sun, Jan 12, 2014 at 07:20:38PM +0000, Mark Brown wrote:
"something something", one something is enough.
[...]
Is there any reason why we can't rename socket_id to cluster_id? It won't change our lives, but at least we would kind of know what it means in the ARM world.
Are the two macros above still required in the kernel? I can't see any usage at present.
Vincent, do you know why they were not removed in commit:
8e7fbcbc22c12414bcc9dfdd683637f58fb32759
I am certainly missing something.
__cpuinit has been (is being) removed from the kernel and probably should be removed from this definition too.
Too many empty lines, one is enough.
I have already commented on this. If this patchset is completely merged, that is one thing; if it is not, we are adding include files for nothing. If you have time, and there should be some given that the set missed the merge window, it would be nice to split the includes. I will not nitpick any longer though, so it is up to you.
[...]
You do not need brackets, &cpu_topology[cpu] will do.
Lorenzo
On Mon, Jan 13, 2014 at 04:10:59PM +0000, Lorenzo Pieralisi wrote:
On Sun, Jan 12, 2014 at 07:20:38PM +0000, Mark Brown wrote:
Is there any reason why we can't rename socket_id to cluster_id ? It won't change our lives but at least we kind of know what it means in ARM world.
I really don't care, whatever you guys want.
+#define mc_capable() (cpu_topology[0].socket_id != -1) +#define smt_capable() (cpu_topology[0].thread_id != -1)
They're defined by a bunch of other architectures (including x86). If I had to guess I'd say the architectures are still providing the information so we don't need to go round adding it again if someone comes up with a use for it in the core.
On Mon, Jan 13, 2014 at 04:30:45PM +0000, Mark Brown wrote:
s/socket_id/cluster_id
Unless we have a compelling reason to keep the socket_id naming, and I do not see one, let's rename it, given that cpu_topology is arch specific anyway.
socket_id really means nothing in the ARM world.
Again, Vincent if you see a compelling reason to keep socket_id as in arm32 that I am missing please shout.
Yes, let's keep the macros, just wanted to make sure I got it right.
Lorenzo
On 13 January 2014 18:44, Lorenzo Pieralisi lorenzo.pieralisi@arm.com wrote:
I don't have any compelling reason, I have just used the same name as other platforms.
Vincent
On 13 January 2014 17:10, Lorenzo Pieralisi lorenzo.pieralisi@arm.com wrote:
I think it was not planned to be used only by the scheduler, but since commit 8e7fbcbc22c12414bcc9dfdd683637f58fb32759 we have reached a situation where nobody uses them for the moment.