Hi there,
On Wed, May 08, 2013 at 11:17:49AM +0800, Leo Yan wrote:
hi Nico & all,
After we studied the IKS code, we believe the code is general and clean, and can largely meet our own SoC's requirements; we also have some questions we'd like to confirm with you:
- When the outbound core wakes up the inbound core, the outbound core's
thread will sleep until the inbound core uses MCPM's early poke to send an IPI;
a) It looks like this method exists because the TC2 board has long latencies to power a cluster or core on/off, right? How about using a polling method instead? On our own SoC, the wake-up interval will take _only_ about 10 ~ 20us;
This is correct, TC2 has much longer latencies, especially if a whole cluster needs to be powered up.
b) The inbound core will send an IPI to the outbound core for synchronization, but at this point the inbound core's GIC CPU interface is disabled; so with its CPU interface disabled, can the core still send an SGI to other cores?
SGIs are triggered by writing to the GIC Distributor. I believe that doesn't require the triggering CPU's CPU interface to be enabled.
The destination CPU's CPU interface needs to be enabled in order for the interrupt to be received, though.
Nico can confirm whether this is correct.
I believe this does change for GICv3 though -- it may be necessary to fire up the CPU interface before an SGI can be sent in that case. This isn't an issue for v7 based platforms, but may need addressing in the future.
c) The MCPM patch set merged into mainline has no functions related to the early poke; will the early poke functions be submitted to mainline later?
I'll let Nico comment on that one.
From my side I don't see a strong technical reason why not.
- Now the switch is an asynchronous operation, meaning that after
bL_switch_request() returns, we cannot assume the switch has completed; so we have some concerns about it.
I did write some patches to solve that, by providing a way for the caller to be notified of completion, while keeping the underlying mechanism asynchronous.
Nico: Can you remember whether you had any concerns about that functionality? See "ARM: bL_switcher: Add switch completion callback", posted to you on Mar 15.
I hadn't pushed for merging those at the time because I was hoping to do more testing on them, but I was diverted by other activities.
For example, when switching from an A15 core to an A7 core, we may want to decrease the voltage to save power; if the switch is asynchronous, this may introduce an issue: after returning from bL_switch_request, software will decrease the voltage while the real switch is still ongoing on another pair of cores.
I browsed the git log and learned that at the beginning the switch was synchronized using the kernel's workqueue, and was later changed to use a dedicated kernel thread with the FIFO scheduling policy; do you think it's better to go ahead and add a synchronous method for switching?
Migrating from one cluster to the other has intrinsic power and time costs, caused by the time and effort needed to power CPUs up and down and migrate software and cache state across. This is more than just the time to power up a CPU.
This means that there is a limit on how rapidly it is worth switching between clusters before it leads to a net loss in terms of power and/or performance. Over shorter timescales, fine-grained CPU idling may provide better overall behaviour.
My general expectation is that at reasonable switching speeds, the extra overhead of the asynchronous CPU power-on compared with a synchronous approach may not have a big impact on overall system behaviour.
However, it would certainly be interesting to measure these effects on a faster platform. TC2 is the only hardware we had direct access to for our development work, and on that hardware the asynchronous power-up is a definite advantage.
- After the switcher is enabled, hotplug is disabled.
Actually, the current code could support hotplug with IKS: with IKS, each logical core maps to a corresponding physical core ID and GIC interface ID, so the kernel can track which physical core has been hotplugged out and later hotplug that same physical core back in. So could you give more hints on why IKS needs to disable hotplug?
Hotplug is possible to implement, but adds some complexity. The combination of IKS and CPUidle should allow all CPUs to be powered down automatically when not needed, so enabling hotplug may not be a huge extra benefit, but that doesn't mean the functionality could not be added in the future.
Cheers --Dave