From: Mark Brown <broonie@linaro.org>
Another spin of the arm64 topology work. This should incorporate most of the feedback from Lorenzo; a few things were still under discussion, the main ones being:
- Should we have a smp_store_cpu_info()? As I said, I like the errors it generates for omitted cores and the reuse of the SMP enumeration code (and the cross-check with that, I guess - making sure we don't get confused about which CPUs are being enabled).
- Should we update the binding to allow cores in the root cpu-map node (since it's less effort in code and not a meaningful semantic difference), warn if we find cores in the cpu-map node, or actively reject such DTs?
In both cases I don't much mind, but I think what's there is reasonable, so I've left the code as-is pending further feedback. I also didn't update the code to get more reuse of the iteration code; as I said, I did look at that when writing the code but couldn't find anything that actually made things more pleasant - but if someone has some ideas...
I think everything else raised has been addressed.
Mark Brown (4):
  arm64: topology: Implement basic CPU topology support
  arm64: topology: Add support for topology DT bindings
  arm64: topology: Tell the scheduler about the relative power of cores
  arm64: topology: Provide relative power numbers for cores
 arch/arm64/Kconfig                |  24 +++
 arch/arm64/include/asm/topology.h |  39 ++++
 arch/arm64/kernel/Makefile        |   1 +
 arch/arm64/kernel/smp.c           |  12 ++
 arch/arm64/kernel/topology.c      | 384 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 460 insertions(+)
 create mode 100644 arch/arm64/include/asm/topology.h
 create mode 100644 arch/arm64/kernel/topology.c
From: Mark Brown <broonie@linaro.org>
Add basic CPU topology support to arm64, based on the existing pre-v8 code and some work done by Mark Hambleton. This patch does not implement any topology discovery support since that should be based on information from firmware, it merely implements the scaffolding for integration of topology support in the architecture.
The goal is to separate the architecture hookup for providing topology information from the DT parsing, in order to ease review and to avoid blocking the architecture code (which will be built on by other work) on the DT code review, by providing something simple and basic.

A following patch will implement support for parsing the DT topology bindings for ARM; similar patches will be needed for ACPI.
Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/Kconfig                | 24 ++++++++++
 arch/arm64/include/asm/topology.h | 39 +++++++++++++++++
 arch/arm64/kernel/Makefile        |  1 +
 arch/arm64/kernel/smp.c           | 12 +++++
 arch/arm64/kernel/topology.c      | 92 +++++++++++++++++++++++++++++++++++++++
 5 files changed, 168 insertions(+)
 create mode 100644 arch/arm64/include/asm/topology.h
 create mode 100644 arch/arm64/kernel/topology.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d4dd22ee4b7..00fcd490b3be 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -154,6 +154,30 @@ config SMP
If you don't know what to do here, say N.
+config ARM_CPU_TOPOLOGY
+	bool "Support CPU topology definition"
+	depends on SMP
+	default y
+	help
+	  Support CPU topology definition, based on configuration
+	  provided by the firmware.
+
+config SCHED_MC
+	bool "Multi-core scheduler support"
+	depends on ARM_CPU_TOPOLOGY
+	help
+	  Multi-core scheduler support improves the CPU scheduler's decision
+	  making when dealing with multi-core CPU chips at a cost of slightly
+	  increased overhead in some places. If unsure say N here.
+
+config SCHED_SMT
+	bool "SMT scheduler support"
+	depends on ARM_CPU_TOPOLOGY
+	help
+	  Improves the CPU scheduler's decision making when dealing with
+	  MultiThreading at a cost of slightly increased overhead in some
+	  places. If unsure say N here.
+
 config NR_CPUS
 	int "Maximum number of CPUs (2-32)"
 	range 2 32
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
new file mode 100644
index 000000000000..58b8b84adcd2
--- /dev/null
+++ b/arch/arm64/include/asm/topology.h
@@ -0,0 +1,39 @@
+#ifndef _ASM_ARM_TOPOLOGY_H
+#define _ASM_ARM_TOPOLOGY_H
+
+#ifdef CONFIG_ARM_CPU_TOPOLOGY
+
+#include <linux/cpumask.h>
+
+struct cputopo_arm {
+	int thread_id;
+	int core_id;
+	int socket_id;
+	cpumask_t thread_sibling;
+	cpumask_t core_sibling;
+};
+
+extern struct cputopo_arm cpu_topology[NR_CPUS];
+
+#define topology_physical_package_id(cpu)	(cpu_topology[cpu].socket_id)
+#define topology_core_id(cpu)		(cpu_topology[cpu].core_id)
+#define topology_core_cpumask(cpu)	(&cpu_topology[cpu].core_sibling)
+#define topology_thread_cpumask(cpu)	(&cpu_topology[cpu].thread_sibling)
+
+#define mc_capable()	(cpu_topology[0].socket_id != -1)
+#define smt_capable()	(cpu_topology[0].thread_id != -1)
+
+void init_cpu_topology(void);
+void store_cpu_topology(unsigned int cpuid);
+const struct cpumask *cpu_coregroup_mask(int cpu);
+
+#else
+
+static inline void init_cpu_topology(void) { }
+static inline void store_cpu_topology(unsigned int cpuid) { }
+
+#endif
+
+#include <asm-generic/topology.h>
+
+#endif /* _ASM_ARM_TOPOLOGY_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 5ba2fd43a75b..2d145e38ad49 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)		+= smp.o smp_spin_table.o
 arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
 arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
 arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
+arm64-obj-$(CONFIG_ARM_CPU_TOPOLOGY)	+= topology.o
 obj-y				+= $(arm64-obj-y) vdso/
 obj-m				+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index a0c2ca602cf8..0271fbde5363 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -113,6 +113,11 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
 	return ret;
 }
+static void __cpuinit smp_store_cpu_info(unsigned int cpuid)
+{
+	store_cpu_topology(cpuid);
+}
+
 /*
  * This is the secondary CPU boot entry. We're using this CPUs
  * idle thread stack, but a set of temporary page tables.
@@ -150,6 +155,8 @@ asmlinkage void secondary_start_kernel(void)
 	 */
 	notify_cpu_starting(cpu);
+	smp_store_cpu_info(cpu);
+
 	/*
 	 * OK, now it's safe to let the boot CPU continue. Wait for
 	 * the CPU migration code to notice that the CPU is online
@@ -388,6 +395,11 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
 	int err;
 	unsigned int cpu, ncores = num_possible_cpus();
+	init_cpu_topology();
+
+	smp_store_cpu_info(smp_processor_id());
+
 	/*
 	 * are we trying to boot more cores than exist?
 	 */
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
new file mode 100644
index 000000000000..853544f30a8b
--- /dev/null
+++ b/arch/arm64/kernel/topology.c
@@ -0,0 +1,92 @@
+/*
+ * arch/arm64/kernel/topology.c
+ *
+ * Copyright (C) 2011,2013 Linaro Limited.
+ *
+ * Based on the arm32 version written by Vincent Guittot in turn based on
+ * arch/sh/kernel/topology.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/node.h>
+#include <linux/nodemask.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+#include <asm/topology.h>
+
+/*
+ * cpu topology table
+ */
+struct cputopo_arm cpu_topology[NR_CPUS];
+EXPORT_SYMBOL_GPL(cpu_topology);
+
+const struct cpumask *cpu_coregroup_mask(int cpu)
+{
+	return &cpu_topology[cpu].core_sibling;
+}
+
+static void update_siblings_masks(unsigned int cpuid)
+{
+	struct cputopo_arm *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+	int cpu;
+
+	/* update core and thread sibling masks */
+	for_each_possible_cpu(cpu) {
+		cpu_topo = &cpu_topology[cpu];
+
+		if (cpuid_topo->socket_id != cpu_topo->socket_id)
+			continue;
+
+		cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+		if (cpu != cpuid)
+			cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+		if (cpuid_topo->core_id != cpu_topo->core_id)
+			continue;
+
+		cpumask_set_cpu(cpuid, &cpu_topo->thread_sibling);
+		if (cpu != cpuid)
+			cpumask_set_cpu(cpu, &cpuid_topo->thread_sibling);
+	}
+	smp_wmb();
+}
+
+void store_cpu_topology(unsigned int cpuid)
+{
+	struct cputopo_arm *cpuid_topo = &cpu_topology[cpuid];
+
+	/* DT should have been parsed by the time we get here */
+	if (cpuid_topo->core_id == -1)
+		pr_info("CPU%u: No topology information configured\n", cpuid);
+	else
+		update_siblings_masks(cpuid);
+}
+
+/*
+ * init_cpu_topology is called at boot when only one cpu is running
+ * which prevent simultaneous write access to cpu_topology array
+ */
+void __init init_cpu_topology(void)
+{
+	unsigned int cpu;
+
+	/* init core mask and power*/
+	for_each_possible_cpu(cpu) {
+		struct cputopo_arm *cpu_topo = &(cpu_topology[cpu]);
+
+		cpu_topo->thread_id = -1;
+		cpu_topo->core_id = -1;
+		cpu_topo->socket_id = -1;
+		cpumask_clear(&cpu_topo->core_sibling);
+		cpumask_clear(&cpu_topo->thread_sibling);
+	}
+	smp_wmb();
+}
From: Mark Brown <broonie@linaro.org>
Add support for parsing the explicit topology bindings to discover the topology of the system.
Since it is not currently clear how to map multi-level clusters for the scheduler, all leaf clusters are presented to the scheduler at the same level. This should be enough to provide good support for current systems.
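As an illustration of the binding this parser consumes (the node names follow the cluster%d/core%d scheme the code iterates over; the phandle labels and the two-cluster shape are hypothetical), a system would describe its topology roughly as:

```dts
cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	cpu-map {
		cluster0 {
			core0 { cpu = <&cpu0>; };
			core1 { cpu = <&cpu1>; };
		};

		cluster1 {
			core0 { cpu = <&cpu2>; };
			core1 { cpu = <&cpu3>; };
		};
	};

	/* ... the cpu@N nodes referenced by the phandles above ... */
};
```

Here both cluster0 and cluster1 are leaf clusters, so their cores end up presented to the scheduler at the same level as described above.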
Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/kernel/topology.c | 145 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 145 insertions(+)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 853544f30a8b..5a2724b3d4b7 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -17,11 +17,153 @@
 #include <linux/percpu.h>
 #include <linux/node.h>
 #include <linux/nodemask.h>
+#include <linux/of.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
#include <asm/topology.h>
+#ifdef CONFIG_OF
+static int cluster_id;
+
+static int __init get_cpu_for_node(struct device_node *node)
+{
+	struct device_node *cpu_node;
+	int cpu;
+
+	cpu_node = of_parse_phandle(node, "cpu", 0);
+	if (!cpu_node) {
+		pr_crit("%s: Unable to parse CPU phandle\n", node->full_name);
+		return -1;
+	}
+
+	for_each_possible_cpu(cpu) {
+		if (of_get_cpu_node(cpu, NULL) == cpu_node)
+			return cpu;
+	}
+
+	pr_crit("Unable to find CPU node for %s\n", cpu_node->full_name);
+	return -1;
+}
+
+static void __init parse_core(struct device_node *core, int core_id)
+{
+	char name[10];
+	bool leaf = true;
+	int i, cpu;
+	struct device_node *t;
+
+	i = 0;
+	do {
+		snprintf(name, sizeof(name), "thread%d", i);
+		t = of_get_child_by_name(core, name);
+		if (t) {
+			leaf = false;
+			cpu = get_cpu_for_node(t);
+			if (cpu >= 0) {
+				pr_info("CPU%d: socket %d core %d thread %d\n",
+					cpu, cluster_id, core_id, i);
+				cpu_topology[cpu].socket_id = cluster_id;
+				cpu_topology[cpu].core_id = core_id;
+				cpu_topology[cpu].thread_id = i;
+			} else {
+				pr_err("%s: Can't get CPU for thread\n",
+				       t->full_name);
+			}
+		}
+		i++;
+	} while (t);
+
+	cpu = get_cpu_for_node(core);
+	if (cpu >= 0) {
+		if (!leaf) {
+			pr_err("%s: Core has both threads and CPU\n",
+			       core->full_name);
+			return;
+		}
+
+		pr_info("CPU%d: socket %d core %d\n",
+			cpu, cluster_id, core_id);
+		cpu_topology[cpu].socket_id = cluster_id;
+		cpu_topology[cpu].core_id = core_id;
+	} else if (leaf) {
+		pr_err("%s: Can't get CPU for leaf core\n", core->full_name);
+	}
+}
+
+static void __init parse_cluster(struct device_node *cluster)
+{
+	char name[10];
+	bool leaf = true;
+	bool has_cores = false;
+	struct device_node *c;
+	int core_id = 0;
+	int i;
+
+	/*
+	 * First check for child clusters; we currently ignore any
+	 * information about the nesting of clusters and present the
+	 * scheduler with a flat list of them.
+	 */
+	i = 0;
+	do {
+		snprintf(name, sizeof(name), "cluster%d", i);
+		c = of_get_child_by_name(cluster, name);
+		if (c) {
+			parse_cluster(c);
+			leaf = false;
+		}
+		i++;
+	} while (c);
+
+	/* Now check for cores */
+	i = 0;
+	do {
+		snprintf(name, sizeof(name), "core%d", i);
+		c = of_get_child_by_name(cluster, name);
+		if (c) {
+			has_cores = true;
+
+			if (leaf)
+				parse_core(c, core_id++);
+			else
+				pr_err("%s: Non-leaf cluster with core %s\n",
+				       cluster->full_name, name);
+		}
+		i++;
+	} while (c);
+
+	if (leaf && !has_cores)
+		pr_warn("%s: empty cluster\n", cluster->full_name);
+
+	if (leaf)
+		cluster_id++;
+}
+
+static void __init parse_dt_topology(void)
+{
+	struct device_node *cn;
+
+	cn = of_find_node_by_path("/cpus");
+	if (!cn) {
+		pr_err("No CPU information found in DT\n");
+		return;
+	}
+
+	/*
+	 * If topology is provided as a cpu-map it is essentially a
+	 * root cluster.
+	 */
+	cn = of_find_node_by_name(cn, "cpu-map");
+	if (!cn)
+		return;
+	parse_cluster(cn);
+}
+
+#else
+static inline void parse_dt_topology(void) {}
+#endif
+
 /*
  * cpu topology table
  */
@@ -88,5 +230,8 @@ void __init init_cpu_topology(void)
 		cpumask_clear(&cpu_topo->core_sibling);
 		cpumask_clear(&cpu_topo->thread_sibling);
 	}
+
+	parse_dt_topology();
+
 	smp_wmb();
 }
From: Mark Brown <broonie@linaro.org>
In heterogeneous systems like big.LITTLE systems the scheduler will be able to make better use of the available cores if we provide power numbers to it indicating their relative performance. Do this by parsing the CPU nodes in the DT.
This code currently has no effect as no information on the relative performance of the cores is provided.
Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/kernel/topology.c | 145 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 145 insertions(+)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 5a2724b3d4b7..68ccf4f4f258 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -23,6 +23,29 @@
#include <asm/topology.h>
+/*
+ * cpu power table
+ * This per cpu data structure describes the relative capacity of each core.
+ * On a heterogeneous system, cores don't have the same computation capacity
+ * and we reflect that difference in the cpu_power field so the scheduler can
+ * take this difference into account during load balance. A per cpu structure
+ * is preferred because each CPU updates its own cpu_power field during the
+ * load balance except for idle cores. One idle core is selected to run the
+ * rebalance_domains for all idle cores and the cpu_power can be updated
+ * during this sequence.
+ */
+static DEFINE_PER_CPU(unsigned long, cpu_scale);
+
+unsigned long arch_scale_freq_power(struct sched_domain *sd, int cpu)
+{
+	return per_cpu(cpu_scale, cpu);
+}
+
+static void set_power_scale(unsigned int cpu, unsigned long power)
+{
+	per_cpu(cpu_scale, cpu) = power;
+}
+
 #ifdef CONFIG_OF
 static int cluster_id;
@@ -140,9 +163,49 @@ static void __init parse_cluster(struct device_node *cluster)
 		cluster_id++;
 }
+struct cpu_efficiency {
+	const char *compatible;
+	unsigned long efficiency;
+};
+
+/*
+ * Table of relative efficiency of each processor
+ * The efficiency value must fit in 20 bits and the final
+ * cpu_scale value must be in the range
+ *   0 < cpu_scale < 3*SCHED_POWER_SCALE/2
+ * in order to return at most 1 when DIV_ROUND_CLOSEST
+ * is used to compute the capacity of a CPU.
+ * Processors that are not defined in the table
+ * use the default SCHED_POWER_SCALE value for cpu_scale.
+ */
+static const struct cpu_efficiency table_efficiency[] = {
+	{ NULL, },
+};
+
+static unsigned long *__cpu_capacity;
+#define cpu_capacity(cpu)	__cpu_capacity[cpu]
+
+static unsigned long middle_capacity = 1;
+
+/*
+ * Iterate all CPUs' descriptors in DT and compute the efficiency
+ * (as per table_efficiency). Also calculate a middle efficiency
+ * as close as possible to (max{eff_i} - min{eff_i}) / 2
+ * This is later used to scale the cpu_power field such that an
+ * 'average' CPU is of middle power. Also see the comments near
+ * table_efficiency[] and update_cpu_power().
+ */
 static void __init parse_dt_topology(void)
 {
+	const struct cpu_efficiency *cpu_eff;
 	struct device_node *cn;
+	unsigned long min_capacity = (unsigned long)(-1);
+	unsigned long max_capacity = 0;
+	unsigned long capacity = 0;
+	int alloc_size, cpu;
+
+	alloc_size = nr_cpu_ids * sizeof(*__cpu_capacity);
+	__cpu_capacity = kzalloc(alloc_size, GFP_NOWAIT);
 	cn = of_find_node_by_path("/cpus");
 	if (!cn) {
@@ -158,10 +221,88 @@ static void __init parse_dt_topology(void)
 	if (!cn)
 		return;
 	parse_cluster(cn);
+
+	for_each_possible_cpu(cpu) {
+		const u32 *rate;
+		int len;
+
+		/* Too early to use cpu->of_node */
+		cn = of_get_cpu_node(cpu, NULL);
+		if (!cn) {
+			pr_err("Missing device node for CPU %d\n", cpu);
+			continue;
+		}
+
+		/* check if the cpu is marked as "disabled", if so ignore */
+		if (!of_device_is_available(cn))
+			continue;
+
+		for (cpu_eff = table_efficiency; cpu_eff->compatible; cpu_eff++)
+			if (of_device_is_compatible(cn, cpu_eff->compatible))
+				break;
+
+		if (cpu_eff->compatible == NULL) {
+			pr_warn("%s: Unknown CPU type\n", cn->full_name);
+			continue;
+		}
+
+		rate = of_get_property(cn, "clock-frequency", &len);
+		if (!rate || len != 4) {
+			pr_err("%s: Missing clock-frequency property\n",
+			       cn->full_name);
+			continue;
+		}
+
+		capacity = ((be32_to_cpup(rate)) >> 20) * cpu_eff->efficiency;
+
+		/* Save min capacity of the system */
+		if (capacity < min_capacity)
+			min_capacity = capacity;
+
+		/* Save max capacity of the system */
+		if (capacity > max_capacity)
+			max_capacity = capacity;
+
+		cpu_capacity(cpu) = capacity;
+	}
+
+	/* If min and max capacities are equal we bypass the update of the
+	 * cpu_scale because all CPUs have the same capacity. Otherwise, we
+	 * compute a middle_capacity factor that will ensure that the capacity
+	 * of an 'average' CPU of the system will be as close as possible to
+	 * SCHED_POWER_SCALE, which is the default value, but with the
+	 * constraint explained near table_efficiency[].
+	 */
+	if (min_capacity == max_capacity)
+		return;
+	else if (4 * max_capacity < (3 * (max_capacity + min_capacity)))
+		middle_capacity = (min_capacity + max_capacity)
+				>> (SCHED_POWER_SHIFT+1);
+	else
+		middle_capacity = ((max_capacity / 3)
+				>> (SCHED_POWER_SHIFT-1)) + 1;
+}
+
+/*
+ * Look for a custom capacity of a CPU in the cpu_topo_data table during
+ * boot. The update of all CPUs is in O(n^2) for heterogeneous systems but
+ * the function returns directly for SMP systems.
+ */
+static void update_cpu_power(unsigned int cpu)
+{
+	if (!cpu_capacity(cpu))
+		return;
+
+	set_power_scale(cpu, cpu_capacity(cpu) / middle_capacity);
+
+	pr_info("CPU%u: update cpu_power %lu\n",
+		cpu, arch_scale_freq_power(NULL, cpu));
 }
 #else
 static inline void parse_dt_topology(void) {}
+static inline void update_cpu_power(unsigned int cpuid) {}
 #endif
 /*
@@ -210,6 +351,8 @@ void store_cpu_topology(unsigned int cpuid)
 		pr_info("CPU%u: No topology information configured\n", cpuid);
 	else
 		update_siblings_masks(cpuid);
+
+	update_cpu_power(cpuid);
 }
 /*
@@ -229,6 +372,8 @@ void __init init_cpu_topology(void)
 		cpu_topo->socket_id = -1;
 		cpumask_clear(&cpu_topo->core_sibling);
 		cpumask_clear(&cpu_topo->thread_sibling);
+
+		set_power_scale(cpu, SCHED_POWER_SCALE);
 	}
parse_dt_topology();
On Thu, Dec 19, 2013 at 08:06:14PM +0000, Mark Brown wrote:
From: Mark Brown broonie@linaro.org
In heterogeneous systems like big.LITTLE systems the scheduler will be able to make better use of the available cores if we provide power numbers to it indicating their relative performance. Do this by parsing the CPU nodes in the DT.
This code currently has no effect as no information on the relative performance of the cores is provided.
Signed-off-by: Mark Brown broonie@linaro.org
arch/arm64/kernel/topology.c | 145 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 145 insertions(+)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
[...]
+/*
+ * Iterate all CPUs' descriptors in DT and compute the efficiency
+ * (as per table_efficiency). Also calculate a middle efficiency
+ * as close as possible to (max{eff_i} - min{eff_i}) / 2
+ * This is later used to scale the cpu_power field such that an
+ * 'average' CPU is of middle power. Also see the comments near
+ * table_efficiency[] and update_cpu_power().
+ */
 static void __init parse_dt_topology(void)
 {
+	const struct cpu_efficiency *cpu_eff;
 	struct device_node *cn;
+	unsigned long min_capacity = (unsigned long)(-1);
ULONG_MAX ?
+	unsigned long max_capacity = 0;
+	unsigned long capacity = 0;
+	int alloc_size, cpu;
+
+	alloc_size = nr_cpu_ids * sizeof(*__cpu_capacity);
+	__cpu_capacity = kzalloc(alloc_size, GFP_NOWAIT);
kcalloc ? BTW this patch should include slab.h not the previous ones, because that's the only patch where memory allocation takes place unless I am missing something.
 	cn = of_find_node_by_path("/cpus");
 	if (!cn) {
@@ -158,10 +221,88 @@ static void __init parse_dt_topology(void)
 	if (!cn)
 		return;
 	parse_cluster(cn);
+	for_each_possible_cpu(cpu) {
+		const u32 *rate;
+		int len;
+
+		/* Too early to use cpu->of_node */
+		cn = of_get_cpu_node(cpu, NULL);
+		if (!cn) {
+			pr_err("Missing device node for CPU %d\n", cpu);
+			continue;
+		}
+
+		/* check if the cpu is marked as "disabled", if so ignore */
+		if (!of_device_is_available(cn))
+			continue;
It is time we defined what a "disabled" CPU means in ARM world, I need to have a proper look into this since this topic has been brought up before.
Lorenzo
On Tue, Jan 07, 2014 at 01:05:40PM +0000, Lorenzo Pieralisi wrote:
On Thu, Dec 19, 2013 at 08:06:14PM +0000, Mark Brown wrote:
/* check if the cpu is marked as "disabled", if so ignore */
if (!of_device_is_available(cn))
continue;
It is time we defined what a "disabled" CPU means in ARM world, I need to have a proper look into this since this topic has been brought up before.
What is the confusion here - why would there be something architecture specific going on?
On Tue, Jan 07, 2014 at 01:38:29PM +0000, Mark Brown wrote:
On Tue, Jan 07, 2014 at 01:05:40PM +0000, Lorenzo Pieralisi wrote:
On Thu, Dec 19, 2013 at 08:06:14PM +0000, Mark Brown wrote:
/* check if the cpu is marked as "disabled", if so ignore */
if (!of_device_is_available(cn))
continue;
It is time we defined what a "disabled" CPU means in ARM world, I need to have a proper look into this since this topic has been brought up before.
What is the confusion here - why would there be something architecture specific going on?
I think this check was added following this thread discussion:
http://lkml.indiana.edu/hypermail/linux/kernel/1306.0/03663.html
So my question is: what does "disabled" mean ? A CPU present in HW that can't/must not be booted ?
ePAPR v1.1 page 43:
"disabled". The CPU is in a quiescent state. A quiescent CPU is in a state where it cannot interfere with the normal operation of other CPUs, nor can its state be affected by the normal operation of other running CPUs, except by an explicit method for enabling or reenabling the quiescent CPU (see the enable-method property).
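(For reference, the property being debated is the standard status property on a cpu node. The following fragment is purely illustrative - the compatible string, reg value and enable-method are hypothetical:)

```dts
cpu@1 {
	device_type = "cpu";
	compatible = "arm,cortex-a53";
	reg = <0x1>;
	enable-method = "psci";
	/* "quiescent" per ePAPR; the exact ARM semantics are under discussion */
	status = "disabled";
};
```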
This means that a "disabled" CPU can be booted with eg PSCI but that is not what the thread in the link above wants to achieve. Furthermore, if we add the check in topology.c, the check must be executed also when building the cpu_logical_map, otherwise a "disabled" cpu would be marked possible and then booted, am I wrong ?
Lorenzo
On Tue, Jan 07, 2014 at 02:29:29PM +0000, Lorenzo Pieralisi wrote:
On Tue, Jan 07, 2014 at 01:38:29PM +0000, Mark Brown wrote:
On Tue, Jan 07, 2014 at 01:05:40PM +0000, Lorenzo Pieralisi wrote:
It is time we defined what a "disabled" CPU means in ARM world, I need to have a proper look into this since this topic has been brought up before.
What is the confusion here - why would there be something architecture specific going on?
I think this check was added following this thread discussion:
http://lkml.indiana.edu/hypermail/linux/kernel/1306.0/03663.html
So my question is: what does "disabled" mean ? A CPU present in HW that can't/must not be booted ?
Yes, that would seem to be the obvious meaning and consistent with ePAPR (in so far as we're paying a blind bit of notice to ePAPR, see other threads...).
ePAPR v1.1 page 43:
"disabled". The CPU is in a quiescent state. A quiescent CPU is in a state where it cannot interfere with the normal operation of other CPUs, nor can its state be affected by the normal operation of other running CPUs, except by an explicit method for enabling or reenabling the quiescent CPU (see the enable-method property).
This means that a "disabled" CPU can be booted with eg PSCI but that is not what the thread in the link above wants to achieve. Furthermore, if
I think that's just another bit of ill considered wording from ePAPR that doesn't really reflect reality; it seems like what they're trying to shoot for there is administratively down.
At the very least that means hot unplugged, and it seems reasonable to read it as being stronger than that. The current ARM implementation is more conservative, since it doesn't provide any way to bring the core online, but it does seem more likely to match what a system integrator would be trying to achieve, and it also matches the standard meaning of disabled.
we add the check in topology.c, the check must be executed also when building the cpu_logical_map, otherwise a "disabled" cpu would be marked possible and then booted, am I wrong ?
Right, this is a separate issue in the SMP enumeration code - it should be paying attention to the property and at the very least defaulting the core to being unplugged, though like I say I do find the ARM meaning more sane.
In any case I don't vastly care, I guess I'll drop this for now.
On Tue, Jan 07, 2014 at 03:06:31PM +0000, Mark Brown wrote:
On Tue, Jan 07, 2014 at 02:29:29PM +0000, Lorenzo Pieralisi wrote:
On Tue, Jan 07, 2014 at 01:38:29PM +0000, Mark Brown wrote:
On Tue, Jan 07, 2014 at 01:05:40PM +0000, Lorenzo Pieralisi wrote:
It is time we defined what a "disabled" CPU means in ARM world, I need to have a proper look into this since this topic has been brought up before.
What is the confusion here - why would there be something architecture specific going on?
I think this check was added following this thread discussion:
http://lkml.indiana.edu/hypermail/linux/kernel/1306.0/03663.html
So my question is: what does "disabled" mean ? A CPU present in HW that can't/must not be booted ?
Yes, that would seem to be the obvious meaning and consistent with ePAPR (in so far as we're paying a blind bit of notice to ePAPR, see other threads...).
Just playing devil's advocate and trying to reuse ePAPR bindings as much as possible, provided they define what we need on ARM. In this case it seems they do not.
ePAPR v1.1 page 43:
"disabled". The CPU is in a quiescent state. A quiescent CPU is in a state where it cannot interfere with the normal operation of other CPUs, nor can its state be affected by the normal operation of other running CPUs, except by an explicit method for enabling or reenabling the quiescent CPU (see the enable-method property).
This means that a "disabled" CPU can be booted with eg PSCI but that is not what the thread in the link above wants to achieve. Furthermore, if
I think that's just another bit of ill considered wording from ePAPR that doesn't really reflect reality; it seems like what they're trying to shoot for there is administratively down.
At the very least that means hot unplugged, and it seems reasonable to read that as being stronger than that. The current ARM implementation is more conservative since it doesn't provide any way to put the core on line but it does seem more likely to match what a system integrator would be trying to achieve and it also matches the standard meaning of disabled.
What do you mean by ARM implementation ? The status property is currently ignored on ARM. I'd agree with what you are saying but that should be specified in DT bindings.
we add the check in topology.c, the check must be executed also when building the cpu_logical_map, otherwise a "disabled" cpu would be marked possible and then booted, am I wrong ?
Right, this is a separate issue in the SMP enumeration code - it should be paying attention to the property and at the very least defaulting the core to being unplugged, though like I say I do find the ARM meaning more sane.
Again, I tend to agree, since this means that the CPU is there but simply is not a "possible" one. To be debated.
In any case I don't vastly care, I guess I'll drop this for now.
Yes, I think dropping the check is fine for now, we can add it if/when we achieve consensus, that should not be a big deal.
Lorenzo
On Tue, Jan 07, 2014 at 05:56:42PM +0000, Lorenzo Pieralisi wrote:
On Tue, Jan 07, 2014 at 03:06:31PM +0000, Mark Brown wrote:
At the very least that means hot unplugged, and it seems reasonable to read that as being stronger than that. The current ARM implementation is more conservative since it doesn't provide any way to put the core on line but it does seem more likely to match what a system integrator would be trying to achieve and it also matches the standard meaning of disabled.
What do you mean by ARM implementation ? The status property is currently ignored on ARM. I'd agree with what you are saying but that should be specified in DT bindings.
The 32 bit ARM implementation. That code was supposed to be just cut'n'pasted from there, though I see now it wasn't...
From: Mark Brown <broonie@linaro.org>
Provide performance numbers to the scheduler to help it fill the cores in the system on big.LITTLE systems. With the current scheduler this may perform poorly for applications that try to do OpenMP style work over all cores but should help for more common workloads.
The power numbers are the same as for ARMv7, since it seems that the expected differential between the big and little cores is very similar on both ARMv7 and ARMv8. These numbers are just an initial and basic approximation for use with the current scheduler; it is likely that both experience with silicon and ongoing work on improving the scheduler will lead to further tuning. In both the ARMv7 and ARMv8 cases the numbers were based on the published DMIPS numbers.
Signed-off-by: Mark Brown <broonie@linaro.org>
---
 arch/arm64/kernel/topology.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 68ccf4f4f258..67df4639d2b1 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -179,6 +179,8 @@ struct cpu_efficiency {
  * use the default SCHED_POWER_SCALE value for cpu_scale.
  */
 static const struct cpu_efficiency table_efficiency[] = {
+	{ "arm,cortex-a57", 3891 },
+	{ "arm,cortex-a53", 2048 },
 	{ NULL, },
 };
Hi Mark,
On Thu, Dec 19, 2013 at 08:06:11PM +0000, Mark Brown wrote:
Another spin of the arm64 topology work - this should incorporate most of the feedback from Lorenzo, there's a few things that were still under discussion the main ones being:
We'll have to leave this patchset for early January; hopefully we'll have a bit of time to review it before the merge window. Will, Lorenzo and I are on holiday until around the 6th of January.
Have a Good Christmas!
On Thu, Dec 19, 2013 at 08:06:11PM +0000, Mark Brown wrote:
From: Mark Brown <broonie@linaro.org>
Another spin of the arm64 topology work - this should incorporate most of the feedback from Lorenzo, there's a few things that were still under discussion the main ones being:
- Should we have a smp_store_cpu_info(); like I say I like the errors it generates for omitted cores and the reuse of the SMP enumeration code (and cross-check with that I guess - make sure we don't get confused about which CPUs are getting enabled).
I agree, we should keep it, it makes sense.
- Should we update the binding to allow cores in the root cpu_map node (since it's less effort in code and not a meaningful difference semantically), warn if we find cores in the cpu_map node or actively reject such DTs?
I think cpu-map must only contain cluster nodes as descendant children. This is to prevent creative DTs with cluster and core nodes at the top topology level. Overall it makes sense: cores can only exist in a cluster container. It might seem like churn, but at least it's strict.
In both cases I don't much mind, but I think what's there is reasonable, so I've left the code as-is pending further feedback. I also didn't update the code to get more reuse out of the iteration code; like I said, I did look at that when writing the code but couldn't find anything that actually made things more pleasant. If someone has some ideas, though...
I still think that most of the DT parsing code can and should be reused also for other purposes (eg IRQ affinity). Comments on the patches concerned.
Lorenzo
On Tue, Jan 07, 2014 at 06:05:45PM +0000, Lorenzo Pieralisi wrote:
On Thu, Dec 19, 2013 at 08:06:11PM +0000, Mark Brown wrote:
- Should we update the binding to allow cores in the root cpu-map node (since it's less effort in code and not a meaningful difference semantically), warn if we find cores in the cpu-map node, or actively reject such DTs?
I think cpu-map must only contain cluster nodes as descendant children, to prevent creative DTs with cluster and core nodes at the top topology level. Overall it makes sense: cores can only exist in a cluster container. It might seem like churn, but at least that's strict.
That still leaves the question of what you want to happen with such maps.
In both cases I don't much mind, but I think what's there is reasonable, so I've left the code as-is pending further feedback. I also didn't update the code to get more reuse out of the iteration code; like I said, I did look at that when writing the code but couldn't find anything that actually made things more pleasant. If someone has some ideas, though...
I still think that most of the DT parsing code can and should be reused also for other purposes (eg IRQ affinity). Comments on the patches concerned.
I'm not seeing any mails there... Note that most of the code is there because the binding took the decision to build the numbering for the subnodes into the names, which is very unusual for DT and hence not something the tooling works well with. Do these other bindings have the same problem?
get_cpu_for_node() could probably be shifted into a header, perhaps when there's other users though.
On Tue, Jan 07, 2014 at 06:23:55PM +0000, Mark Brown wrote:
On Tue, Jan 07, 2014 at 06:05:45PM +0000, Lorenzo Pieralisi wrote:
I think cpu-map must only contain cluster nodes as descendant children, to prevent creative DTs with cluster and core nodes at the top topology level. Overall it makes sense: cores can only exist in a cluster container. It might seem like churn, but at least that's strict.
That still leaves the question of what you want to happen with such maps.
I've implemented a warning for this; it seems more constructive than rejecting such DTs outright.
On Tue, Jan 07, 2014 at 06:46:58PM +0000, Mark Brown wrote:
On Tue, Jan 07, 2014 at 06:23:55PM +0000, Mark Brown wrote:
On Tue, Jan 07, 2014 at 06:05:45PM +0000, Lorenzo Pieralisi wrote:
I think cpu-map must only contain cluster nodes as descendant children, to prevent creative DTs with cluster and core nodes at the top topology level. Overall it makes sense: cores can only exist in a cluster container. It might seem like churn, but at least that's strict.
That still leaves the question of what you want to happen with such maps.
I've implemented a warning for this; it seems more constructive than rejecting such DTs outright.
At least we should reject the nodes that do not follow the binding rules and warn on them. We should keep the valid node properties and build the topology from the resulting values. This is a slippery slope, since the topology is basically botched, but the kernel spits out a warning about it, so I guess that's acceptable.
Lorenzo
On Tue, Jan 07, 2014 at 06:23:55PM +0000, Mark Brown wrote:
On Tue, Jan 07, 2014 at 06:05:45PM +0000, Lorenzo Pieralisi wrote:
On Thu, Dec 19, 2013 at 08:06:11PM +0000, Mark Brown wrote:
- Should we update the binding to allow cores in the root cpu-map node (since it's less effort in code and not a meaningful difference semantically), warn if we find cores in the cpu-map node, or actively reject such DTs?
I think cpu-map must only contain cluster nodes as descendant children, to prevent creative DTs with cluster and core nodes at the top topology level. Overall it makes sense: cores can only exist in a cluster container. It might seem like churn, but at least that's strict.
That still leaves the question of what you want to happen with such maps.
In both cases I don't much mind, but I think what's there is reasonable, so I've left the code as-is pending further feedback. I also didn't update the code to get more reuse out of the iteration code; like I said, I did look at that when writing the code but couldn't find anything that actually made things more pleasant. If someone has some ideas, though...
I still think that most of the DT parsing code can and should be reused also for other purposes (eg IRQ affinity). Comments on the patches concerned.
I'm not seeing any mails there... Note that most of the code is there because the binding took the decision to build the numbering for the subnodes into the names, which is very unusual for DT and hence not something the tooling works well with. Do these other bindings have the same problem?
Reviewing the parsing code now. I know the numbering is unusual, and the decision was not a simple one to make; it has been discussed, and the reason is simple: I do not want reg properties in topology nodes, because they are meaningless. We defined the topology to remove the dependency on the MPIDR, so if we added reg properties back to e.g. cluster nodes, they might be thought of as cluster identifiers, and that's wrong; only cpu nodes reflect HW MPIDR values.
No, as far as I can tell at present IRQ affinity and C-state affinity will work with phandles to cpu-map nodes, so the parsing you need for the topology should not be reused for that purpose. But it is hard to tell just by reviewing the code; I have to apply the patches and see how they can be adapted for other purposes.
Nothing prevents us from merging the code as it is and consolidating it when needed, provided we actually do that.
get_cpu_for_node() could probably be shifted into a header, perhaps when there's other users though.
Right.
Thanks, Lorenzo
On Wed, Jan 08, 2014 at 10:24:10AM +0000, Lorenzo Pieralisi wrote:
Nothing prevents us from merging the code as it is and consolidating it when needed, provided we actually do that.
OK, that sounds good.
get_cpu_for_node() could probably be shifted into a header, perhaps when there's other users though.
Right.
Actually, there's a bit of an issue with pulling that into core code: at first glance, some of the PowerPC systems use the thread ID, but apparently not in a way that's entirely compatible with what ARM is doing.