We do report a driver's successful {un}registration from the cpufreq core,
but it is done with pr_debug() and so doesn't appear in the boot logs.
Convert these messages to pr_info() to make them visible in the logs.
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
drivers/cpufreq/cpufreq.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 62259d2..63d8f8f 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -2468,7 +2468,7 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
}
register_hotcpu_notifier(&cpufreq_cpu_notifier);
- pr_debug("driver %s up and running\n", driver_data->name);
+ pr_info("driver %s up and running\n", driver_data->name);
return 0;
err_if_unreg:
@@ -2499,7 +2499,7 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver)
if (!cpufreq_driver || (driver != cpufreq_driver))
return -EINVAL;
- pr_debug("unregistering driver %s\n", driver->name);
+ pr_info("unregistering driver %s\n", driver->name);
subsys_interface_unregister(&cpufreq_interface);
if (cpufreq_boost_supported())
--
2.0.0.rc2
Lorenzo and Mark agreed on the following updated patch from Lorenzo:
http://www.spinics.net/lists/arm-kernel/msg336998.html
W.r.t. cluster numbering, we're now back to where we were with the
original patch sent out in April:
https://lkml.org/lkml/2014/4/22/951
Were there any other objections to this approach?
AFAICT, this patch should be good to go for 3.16.
------->8--------
Create cpu topology based on MPIDR. When hardware sets MPIDR to sane
values, this method will always work. Therefore it should also work well
as the fallback method. [1]
When we have multiple processing elements in the system, we create
the cpu topology by mapping each affinity level (from lowest to highest)
to threads (if they exist), cores, and clusters.
[1] http://www.spinics.net/lists/arm-kernel/msg317445.html
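For illustration only (not part of the patch), this is how a hypothetical
MPIDR value decodes under the mapping above. The simplified macros mirror
the MPIDR_AFFINITY_LEVEL definition in arch/arm64/include/asm/cputype.h,
and the MPIDR value itself is made up:

#include <stdio.h>
#include <stdint.h>

#define MPIDR_LEVEL_BITS	8
#define MPIDR_LEVEL_MASK	((1ULL << MPIDR_LEVEL_BITS) - 1)
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
	(((mpidr) >> (MPIDR_LEVEL_BITS * (level))) & MPIDR_LEVEL_MASK)
#define MPIDR_MT_BITMASK	(0x1 << 24)

int main(void)
{
	/* Hypothetical MPIDR: MT bit clear, Aff2 = 0, Aff1 = 1, Aff0 = 2 */
	uint64_t mpidr = 0x0102;

	if (mpidr & MPIDR_MT_BITMASK) {
		/* Multi-threaded cores: Aff0 = thread, Aff1 = core, Aff2 = cluster */
		printf("thread %llu core %llu cluster %llu\n",
		       (unsigned long long)MPIDR_AFFINITY_LEVEL(mpidr, 0),
		       (unsigned long long)MPIDR_AFFINITY_LEVEL(mpidr, 1),
		       (unsigned long long)MPIDR_AFFINITY_LEVEL(mpidr, 2));
	} else {
		/* Single-threaded cores: Aff0 = core, Aff1 = cluster */
		printf("core %llu cluster %llu\n",
		       (unsigned long long)MPIDR_AFFINITY_LEVEL(mpidr, 0),
		       (unsigned long long)MPIDR_AFFINITY_LEVEL(mpidr, 1));
	}
	return 0;
}

This prints "core 2 cluster 1", which is what store_cpu_topology() below
records for such a CPU (with thread_id = -1).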
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com>
Signed-off-by: Zi Shen Lim <zlim(a)broadcom.com>
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
arch/arm64/include/asm/cputype.h | 2 ++
arch/arm64/kernel/topology.c | 47 ++++++++++++++++++++++++++++------------
2 files changed, 35 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index c404fb0..7639e8b 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -18,6 +18,8 @@
#define INVALID_HWID ULONG_MAX
+#define MPIDR_UP_BITMASK (0x1 << 30)
+#define MPIDR_MT_BITMASK (0x1 << 24)
#define MPIDR_HWID_BITMASK 0xff00ffffff
#define MPIDR_LEVEL_BITS_SHIFT 3
diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
index 43514f9..b6ee26b 100644
--- a/arch/arm64/kernel/topology.c
+++ b/arch/arm64/kernel/topology.c
@@ -20,6 +20,7 @@
#include <linux/of.h>
#include <linux/sched.h>
+#include <asm/cputype.h>
#include <asm/topology.h>
static int __init get_cpu_for_node(struct device_node *node)
@@ -188,13 +189,9 @@ static int __init parse_dt_topology(void)
* Check that all cores are in the topology; the SMP code will
* only mark cores described in the DT as possible.
*/
- for_each_possible_cpu(cpu) {
- if (cpu_topology[cpu].cluster_id == -1) {
- pr_err("CPU%d: No topology information specified\n",
- cpu);
+ for_each_possible_cpu(cpu)
+ if (cpu_topology[cpu].cluster_id == -1)
ret = -EINVAL;
- }
- }
out_map:
of_node_put(map);
@@ -219,14 +216,6 @@ static void update_siblings_masks(unsigned int cpuid)
struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
int cpu;
- if (cpuid_topo->cluster_id == -1) {
- /*
- * DT does not contain topology information for this cpu.
- */
- pr_debug("CPU%u: No topology information configured\n", cpuid);
- return;
- }
-
/* update core and thread sibling masks */
for_each_possible_cpu(cpu) {
cpu_topo = &cpu_topology[cpu];
@@ -249,6 +238,36 @@ static void update_siblings_masks(unsigned int cpuid)
void store_cpu_topology(unsigned int cpuid)
{
+ struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
+ u64 mpidr;
+
+ if (cpuid_topo->cluster_id != -1)
+ goto topology_populated;
+
+ mpidr = read_cpuid_mpidr();
+
+ /* Uniprocessor systems can rely on default topology values */
+ if (mpidr & MPIDR_UP_BITMASK)
+ return;
+
+ /* Create cpu topology mapping based on MPIDR. */
+ if (mpidr & MPIDR_MT_BITMASK) {
+ /* Multiprocessor system : Multi-threads per core */
+ cpuid_topo->thread_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+ cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+ cpuid_topo->cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 2);
+ } else {
+ /* Multiprocessor system : Single-thread per core */
+ cpuid_topo->thread_id = -1;
+ cpuid_topo->core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+ cpuid_topo->cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+ }
+
+ pr_debug("CPU%u: cluster %d core %d thread %d mpidr %#016llx\n",
+ cpuid, cpuid_topo->cluster_id, cpuid_topo->core_id,
+ cpuid_topo->thread_id, mpidr);
+
+topology_populated:
update_siblings_masks(cpuid);
}
--
1.8.4
Hi Thomas/Daniel et al,
This isn't about the problem I reported earlier, where you advised
to add ONESHOT_STOPPED mode: https://lkml.org/lkml/2014/5/9/508.
The above problem was about stopping the clock-event device when
it's not used anymore.
This ($subject) problem was initially spotted by Santosh on a 12-core
Ivy Bridge v2 x86 server, and I then reproduced it on a dual-core ARM
Exynos (it isn't as frequent there as it was on x86, though).
Problem: Getting spurious ticks where hrtimer_interrupt() returns
without servicing any hrtimers.
Kernel hack to catch this: http://pastebin.com/bTM7nqDc (Over 3.16-rc3)
X86 boot logs: http://pastebin.com/E6axDnsa (search: hrtimer_interrupt)
/proc/cpuinfo: http://pastebin.com/uQx9TmsA
Here is as far as I could debug it:
- The clockevent device is programmed for time 'x' (verified this by
storing the next-event from within lapic_next_event()).
- The tick fires ~300 us before 'x'.
- Traversing the list of hrtimers doesn't find any expired hrtimer and
we simply return, hence the *spurious* interrupt.
- This happens both when ticks are active and when they are stopped
(search for "tick-stopped" in the logs).
Driver monitored for x86: arch/x86/kernel/apic/apic.c
Similar behavior observed on exynos with arm_arch_timer.c
I couldn't dig any deeper to see what's going on. From the behaviour
it looks like the calculation we do with dev->mult/shift gives a
timeout <= next-event, whereas it should be >=? Not at all sure, though.
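For reference, this is the conversion I'm suspecting, pulled out into a
standalone program: a simplified copy of what clockevents_program_event()
does with dev->mult/shift (the real code also clamps delta between
min_delta_ns and max_delta_ns). The 24 MHz frequency and the mult/shift
values below are made-up examples, not taken from either platform:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Example device: roughly 24 MHz with shift = 32 (values made up) */
	uint32_t mult = 0x624dd2f;
	uint32_t shift = 32;

	int64_t delta_ns = 1000000;	/* want the next event 1 ms from now */

	/* cycles = delta_ns * mult >> shift, as in clockevents_program_event() */
	uint64_t clc = ((uint64_t)delta_ns * mult) >> shift;

	/* convert the programmed cycles back to ns at 24 MHz */
	double back_ns = (double)clc / 24e6 * 1e9;

	printf("requested %lld ns -> programmed %llu cycles (~%.0f ns)\n",
	       (long long)delta_ns, (unsigned long long)clc, back_ns);
	return 0;
}

The shift-down always rounds towards zero, so the programmed event lands
slightly *before* the requested expiry (here by ~40 ns); that's the
direction of error I mean by "timeout <= next-event" above, although
nothing obvious explains a gap as large as ~300 us.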
Reported-by: Santosh Shukla <santosh.shukla(a)linaro.org>
Note: Even the hacky patchset that tried to disable the clockevent
device when it isn't used anymore wasn't able to fix this:
https://lkml.org/lkml/2014/5/9/99
--
viresh
Implement and enable context tracking for arm64 (which is
a prerequisite for FULL_NOHZ support). This patchset
builds upon earlier work by Kevin Hilman and is based on
Will Deacon's tree.
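For anyone not familiar with the mechanism: context tracking only needs
the kernel to be told about every user<->kernel transition. A conceptual
C-level sketch (not taken from the patches; the function names are
hypothetical, and the real patches do this from entry.S via the
ct_user_enter/ct_user_exit assembly macros):

#include <linux/context_tracking.h>

/* Hypothetical C view of an EL0 (userspace) exception entry path */
void hypothetical_el0_entry(void)
{
	/* We have just left userspace: tell the context tracker. */
	user_exit();

	/* ... handle the syscall/exception/interrupt ... */
}

/* Hypothetical C view of the return-to-userspace path */
void hypothetical_ret_to_user(void)
{
	/* About to re-enter userspace: tell the context tracker. */
	user_enter();

	/* ... restore registers and eret ... */
}

Only entries from EL0 should be annotated this way, which is what the
el0_irq/el1_irq fix in v8 below is about.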
Changes v7 to v8:
* Fix bug where el1_irq was calling ct_user_exit rather than el0_irq
Changes v6 to v7:
* Rename parameter of ct_user_exit from restore to syscall
Changes v5 to v6:
* Don't save far_el1 in x26 in el0_dbg path (not needed)
* TIF_NOHZ processes go through the slow path (so no register
save/restore is needed in ct_user_enter)
Changes v4 to v5:
* Improvement to code restoring far_el1 (suggested by Christopher Covington)
* Improvement to register save/restore in ct_user_enter
Changes v3 to v4:
* Rename parameter of ct_user_exit from save to restore
* Rebased patch to Will Deacon's tree (branch remotes/origin/aarch64
of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git)
Changes v2 to v3:
* Save/restore necessary registers in ct_user_enter and ct_user_exit
* Annotate "error paths" out of el0_sync with ct_user_exit
Changes v1 to v2:
* Save far_el1 in x26 temporarily
Larry Bassel (2):
arm64: adjust el0_sync so that a function can be called
arm64: enable context tracking
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/thread_info.h | 4 +++
arch/arm64/kernel/entry.S | 58 +++++++++++++++++++++++++++++++-----
3 files changed, 56 insertions(+), 7 deletions(-)
--
1.8.3.2
(I don't think the discussions below about ptrace() have an impact on
this patchset:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-July/268923.html
)
(Please apply this series after my audit patch in order to avoid a
conflict in arm64/Kconfig.)
This patchset enables secure computing (system call filtering) on arm64.
System calls can be allowed or denied by loaded BPF-style rules.
The architecture-specific part is to run secure_computing() on syscall
entry and check the result; see [2/2].
This code was tested on an ARMv8 fast model using libseccomp v2.1.1 with
modifications for arm64, and verified by its "live" tests, 20, 21 and 24.
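For context, here is a minimal userspace illustration (not part of this
series) of the allow/deny behaviour, written against libseccomp as used
for the testing above; it assumes a kernel with the arch support from
[2/2]:

#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <seccomp.h>

int main(void)
{
	scmp_filter_ctx ctx;

	/* Allow everything by default, but make getppid() fail with EPERM */
	ctx = seccomp_init(SCMP_ACT_ALLOW);
	if (!ctx)
		return 1;
	seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(getppid), 0);
	seccomp_load(ctx);

	errno = 0;
	printf("getppid() = %ld (errno %d)\n", (long)getppid(), errno);

	seccomp_release(ctx);
	return 0;
}

Build with -lseccomp; once the filter is loaded, getppid() returns -1
with errno set to EPERM while other syscalls keep working.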
Changes v3 -> v4:
* removed the following patch and moved it to "arm64: prerequisites for
audit and ftrace" patchset since it is required for audit and ftrace in
case of !COMPAT, too.
"arm64: is_compat_task is defined both in asm/compat.h and linux/compat.h"
Changes v2 -> v3:
* removed unnecessary 'type cast' operations [2/3]
* check for a return value (-1) of secure_computing() explicitly [2/3]
* aligned with the patch, "arm64: split syscall_trace() into separate
functions for enter/exit" [2/3]
* changed default of CONFIG_SECCOMP to n [2/3]
Changes v1 -> v2:
* added generic seccomp.h for arm64 to utilize it [1,2/3]
* changed syscall_trace() to return more meaningful value (-EPERM)
on seccomp failure case [2/3]
* aligned with the change in "arm64: make a single hook to syscall_trace()
for all syscall features" v2 [2/3]
* removed is_compat_task() definition from compat.h [3/3]
AKASHI Takahiro (2):
asm-generic: Add generic seccomp.h for secure computing mode 1
arm64: Add seccomp support
arch/arm64/Kconfig | 14 ++++++++++++++
arch/arm64/include/asm/seccomp.h | 25 +++++++++++++++++++++++++
arch/arm64/include/asm/unistd.h | 3 +++
arch/arm64/kernel/entry.S | 4 ++++
arch/arm64/kernel/ptrace.c | 6 ++++++
include/asm-generic/seccomp.h | 28 ++++++++++++++++++++++++++++
6 files changed, 80 insertions(+)
create mode 100644 arch/arm64/include/asm/seccomp.h
create mode 100644 include/asm-generic/seccomp.h
--
1.7.9.5