On 26 June 2014 11:13, Chris Redpath <Chris.Redpath@arm.com> wrote:

An alternative might be to re-engineer the solution to provide a low-resume-latency request interface which the scheduler can use instead of a timer hack to keep it alive. The existing QoS interface isn't suitable because it's global and wakes all the CPUs up to bring a new latency requirement into effect. That would not strictly be a big.LITTLE feature, but as it's the only client it would likely need to be carried around in the same patch set.
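
To make that concrete, a per-CPU request interface might take roughly the following shape. This is only a sketch - none of these names exist in the kernel today, and a real design would still need to settle aggregation and locking:

/* Hypothetical per-CPU resume-latency constraint (sketch only).
 * Unlike the global PM QoS interface, a request here would only
 * affect the target CPU's idle-state selection, so no cross-CPU
 * wakeup is needed to bring a new constraint into effect.
 */
struct cpu_latency_req {
	struct list_head node;		  /* per-CPU list of active requests */
	unsigned int max_exit_latency_us; /* worst-case resume latency tolerated */
};

/* Add a constraint for one CPU; the governor would consult the
 * minimum of all active requests when selecting an idle state. */
int cpu_latency_req_add(struct cpu_latency_req *req, int cpu,
			unsigned int max_exit_latency_us);

/* Drop the constraint once the scheduler no longer needs a
 * fast wakeup from this CPU. */
void cpu_latency_req_remove(struct cpu_latency_req *req);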

There is already a suitable per-state disable API in cpuidle, but determining which states to disable still requires reading the exit latency, which is only available in the driver's state information.
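
For illustration, the disabling side could be as simple as the following sketch, using the per-device states_usage disable flag that the governors already honour (the function name is made up, and the exact fields and locking should be checked against the target tree):

#include <linux/cpuidle.h>

/* Sketch: disable every idle state whose exit latency exceeds the
 * scheduler's tolerance. exit_latency is in microseconds and lives
 * in the driver's state table, hence the dependency on driver
 * information mentioned above. */
static void disable_slow_states(struct cpuidle_driver *drv,
				struct cpuidle_device *dev,
				unsigned int max_latency_us)
{
	int i;

	for (i = 0; i < drv->state_count; i++)
		dev->states_usage[i].disable =
			drv->states[i].exit_latency > max_latency_us;
}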

In summary, I don't like this fix, although I accept that it works and will continue to work for TC2 and any other platform where the idle driver cannot be changed at runtime.

What do you think?

From my perspective the request interface seems prettiest, though as you say it's a bit invasive. I don't know if this is an interface that the upstream scheduler work is likely to use (ie, something that would ever get upstreamed); it'd be good if it were, from the point of view of driver authors.

However, as a matter of expediency, just not locking seems viable; I would be surprised if anyone was doing anything that would cause problems with that in practical big.LITTLE applications. Is it worth putting this in as a quick workaround if a proper fix isn't going to be available soon?