On Tuesday, February 26, 2013 03:13:32 PM Viresh Kumar wrote:
Currently MIN_LATENCY_MULTIPLIER is defined as 100, so on a system with a transition latency of 1 ms the minimum sampling time comes out to around 100 ms. That is quite large if you want better performance from your system.
Redefine MIN_LATENCY_MULTIPLIER to 20 so that we can support a 20 ms sampling rate for such platforms.
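Just to restate the arithmetic for anyone following along, the floor falls straight out of the multiplier. Roughly (a standalone sketch of the calculation, not verbatim governor code; the macro name is the one from include/linux/cpufreq.h):

/* Sketch: how the minimum sampling rate follows from the
 * driver-reported transition latency. */
#include <stdio.h>

#define MIN_LATENCY_MULTIPLIER	(100)	/* current value; the patch makes it 20 */

int main(void)
{
	unsigned int transition_latency_ns = 1000 * 1000;	/* 1 ms, as in the changelog */
	unsigned int latency_us = transition_latency_ns / 1000;
	unsigned int min_sampling_rate_us = MIN_LATENCY_MULTIPLIER * latency_us;

	/* 100 * 1000 us = 100000 us = 100 ms; with 20 it becomes 20 ms */
	printf("minimum sampling rate: %u us\n", min_sampling_rate_us);
	return 0;
}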
Redefining MIN_LATENCY_MULTIPLIER shouldn't hurt much, but this looks like a workaround. It only changes the minimum sampling rate that userspace is allowed to set; you would still need to set the desired sampling rate from userspace to actually get it on this platform.
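To make that concrete: even with the lower floor, the governor won't pick 20 ms on its own; something like the following still has to happen from userspace (a sketch assuming the ondemand governor's global sysfs knob, value in microseconds):

#include <stdio.h>

int main(void)
{
	/* Path assumes the global ondemand tunables of that era. */
	FILE *f = fopen("/sys/devices/system/cpu/cpufreq/ondemand/sampling_rate", "w");

	if (!f) {
		perror("sampling_rate");
		return 1;
	}
	fprintf(f, "20000\n");	/* 20 ms */
	fclose(f);
	return 0;
}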
I wonder where the cpufreq driver gets the 1 ms latency from in the first place. Is that value valid? The driver should return the correct latency; then there would be no need for workarounds like this.
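For completeness, the value the governor scales is whatever the driver puts into cpuinfo.transition_latency (in nanoseconds) in its ->init callback; a hypothetical driver would do something like:

#include <linux/cpufreq.h>

/* Hypothetical driver init, for illustration only. */
static int example_cpufreq_init(struct cpufreq_policy *policy)
{
	/*
	 * If the hardware really needs ~1 ms per frequency switch, this is
	 * the right value to report; if it is only a pessimistic guess, the
	 * governor inherits an inflated minimum sampling rate from it.
	 */
	policy->cpuinfo.transition_latency = 1000 * 1000;	/* 1 ms, in ns */

	return 0;
}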
Thomas