When CONFIG_SMP is disabled, the build fails with:
error: implicit declaration of function ‘per_cpu_offset’
per_cpu_offset() is available only if CONFIG_SMP is enabled.
Signed-off-by: Zhizhou.zhang <zhizhou.zh(a)gmail.com>
---
arch/arm64/kernel/suspend.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index 1fa9ce4..3c7dd59 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -91,11 +91,13 @@ int cpu_suspend(unsigned long arg)
cpu_switch_mm(mm->pgd, mm);
flush_tlb_all();
+#ifdef CONFIG_SMP
/*
* Restore per-cpu offset before any kernel
* subsystem relying on it has a chance to run.
*/
set_my_cpu_offset(per_cpu_offset(cpu));
+#endif
/*
* Restore HW breakpoint registers to sane values
--
1.7.9.5
From: Mark Brown <broonie(a)linaro.org>
The per-regulator pdata is optional so we need to check that it's there
before dereferencing it. This wasn't done in "regulator: tps65090: Allow
setting the overcurrent wait time", fix that.
Reported-by: Olof Johansson <olof(a)lixom.net>
Signed-off-by: Mark Brown <broonie(a)linaro.org>
---
drivers/regulator/tps65090-regulator.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/regulator/tps65090-regulator.c b/drivers/regulator/tps65090-regulator.c
index ca04e9f..fbe0bf5 100644
--- a/drivers/regulator/tps65090-regulator.c
+++ b/drivers/regulator/tps65090-regulator.c
@@ -306,8 +306,11 @@ static int tps65090_regulator_probe(struct platform_device *pdev)
ri = &pmic[num];
ri->dev = &pdev->dev;
ri->desc = &tps65090_regulator_desc[num];
- ri->overcurrent_wait_valid = tps_pdata->overcurrent_wait_valid;
- ri->overcurrent_wait = tps_pdata->overcurrent_wait;
+ if (tps_pdata) {
+ ri->overcurrent_wait_valid =
+ tps_pdata->overcurrent_wait_valid;
+ ri->overcurrent_wait = tps_pdata->overcurrent_wait;
+ }
/*
* TPS5090 DCDC support the control from external digital input.
--
1.9.2
Hi Tixy,
In respect of the idle pull issue
(https://bugs.launchpad.net/linaro-stable-kernel/+bug/1301886) I did a
bit of root-cause digging.
I'm not sure why I didn't see this in our testing because as you say
it's pretty easy to trigger. I can trigger it either with hotplug
(sometimes even twice per unplug) or by starting IKS mode. Either our
automated test doesn't do any hotplug testing or we somehow missed
recognising the failure condition. I know we did not run an IKS test on
the revalidated version, but I would expect us to have some hotplug
tests in the MP functional testing.
Basil, can you look into that please?
Ultimately, what happens is that the scheduler will often run __schedule
on a CPU which is in the process of being shut down. It's probably too
costly to try to compute when it shouldn't run, hence the scheduler
makes it safe to run on a mostly-offline CPU.
When there is nothing to schedule in, idle_balance is executed (and
hence hmp_idle_pull). This happens after the relevant rq has been marked
as offline and the sched domains have been rebuilt, but before the tasks
are migrated away.
Vincent refers to this in a paper he wrote back in 2012 about hotplug as
zombie CPUs (
http://www.rdrop.com/users/paulmck/realtime/paper/hotplug-ecrts.2012.06.11a…
- in section 2.3).
It seems to me that we really should not be doing anything in
idle_balance in this situation, and the existing code accomplishes that
because the sched_domains already reflect the new world at that point in
time. The HMP patch doesn't really care about sched_domains, which is
where the problem comes in.
It's trivial to add a check to abort idle_balance altogether if the rq
is offline, but perhaps nobody has added it because it only costs a
small amount of time on a CPU which is about to turn off, while the
conditional would have to be evaluated in every other idle balance.
Changing the BUG to a simple return NULL and fixing up the callers for
this case as you did in your testing patch is functionally correct and
safe. The question for me is whether we should bother to try to optimize
idle_pull behavior during cpu_down - I'm open to opinions :)
Best Regards,
Chris
Compile-time configuration is messy and precludes multi-platform kernels
so remove it. Task packing should instead be tuned at run-time, e.g. in
user-side boot scripts, with something like:
echo 1 > /sys/kernel/hmp/packing_enable
echo 650 > /sys/kernel/hmp/packing_limit
Signed-off-by: Jon Medhurst <tixy(a)linaro.org>
---
Mark and Alex, please don't apply this directly. Assuming that this
change doesn't provoke any dissent, it can be pulled from the usual
place... git://git.linaro.org/arm/big.LITTLE/mp.git for-lsk
kernel/sched/fair.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 43857fe..6610622 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3676,15 +3676,8 @@ unsigned int hmp_next_up_threshold = 4096;
unsigned int hmp_next_down_threshold = 4096;
#ifdef CONFIG_SCHED_HMP_LITTLE_PACKING
-#ifndef CONFIG_ARCH_VEXPRESS_TC2
unsigned int hmp_packing_enabled = 1;
unsigned int hmp_full_threshold = (NICE_0_LOAD * 9) / 8;
-#else
-/* TC2 has a sharp consumption curve @ around 800Mhz, so
- we aim to spread the load around that frequency. */
-unsigned int hmp_packing_enabled;
-unsigned int hmp_full_threshold = 650; /* 80% of the 800Mhz freq * NICE_0_LOAD */
-#endif
#endif
static unsigned int hmp_up_migration(int cpu, int *target_cpu, struct sched_entity *se);
--
1.7.10.4