On Fri, Sep 12, 2025 at 12:41:14PM +0200, Rafael J. Wysocki wrote:
> On Wed, Sep 10, 2025 at 8:53 AM Shawn Guo <shawnguo2@yeah.net> wrote:
> > From: Shawn Guo <shawnguo@kernel.org>
> > A regression is seen with the 6.6 -> 6.12 kernel upgrade on platforms where the cpufreq-dt driver sets cpuinfo.transition_latency to CPUFREQ_ETERNAL (-1) because the platform's DT does not provide the optional property 'clock-latency-ns'. The dbs sampling_rate was 10000 us on 6.6 and suddenly becomes 6442450 us (4294967295 / 1000 * 1.5) on 6.12 for these platforms, because the 10 ms cap on transition_delay_us was accidentally dropped by the commits below.
> IIRC, this was not accidental.
I could be wrong, but my understanding is that the intention of Qais's commits was to drop 10 ms (and LATENCY_MULTIPLIER) as the *minimum* limit on transition_delay_us, so that a much smaller transition_delay_us can be used on platforms like the M1 Mac mini, where the transition latency is just tens of us. But it breaks platforms where 10 ms used to be the *maximum* limit.
Even if removing 10 ms as both the minimum and maximum limits was intentional, breaking some platforms must not have been intentional, I guess :)
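For reference, the default computed by cpufreq_policy_transition_delay_us() on 6.6 was capped along these lines (paraphrased from memory, not a verbatim quote), which is where the 10 ms maximum came from:

        /* Cap the default transition delay to 10 ms */
        return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);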
> Why do you want to address the issue in the cpufreq core instead of doing that in the cpufreq-dt driver?
My intuition was to fix the regression where it was introduced, by restoring the previous code behavior.
> CPUFREQ_ETERNAL doesn't appear to be a reasonable default for cpuinfo.transition_latency. Maybe just change the default there to 10 ms?
I think cpufreq-dt is doing what it's asked to do, no?
        /*
         * Maximum transition latency is in nanoseconds - if it's unknown,
         * CPUFREQ_ETERNAL shall be used.
         */
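From memory, the relevant fallback in cpufreq-dt's cpufreq_init() looks roughly like this (paraphrased, not a verbatim quote):

        transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev);
        if (!transition_latency)
                transition_latency = CPUFREQ_ETERNAL;

So the driver is simply following that documented contract when the DT/OPP data does not supply a latency.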
Also, 10 ms will then be turned into 15 ms by:
        /* Give a 50% breathing room between updates */
        return latency + (latency >> 1);
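To make the numbers concrete, here is a small userspace sketch (my own illustration, not kernel code; the constants mirror CPUFREQ_ETERNAL as (unsigned int)-1 and NSEC_PER_USEC as 1000) showing how CPUFREQ_ETERNAL turns into 6442450 us, and how a 10 ms default would turn into 15 ms:

#include <stdio.h>

/* Same 50% breathing room formula as the snippet quoted above */
static unsigned int breathing_room(unsigned int latency_us)
{
        return latency_us + (latency_us >> 1);
}

int main(void)
{
        unsigned int eternal_ns = (unsigned int)-1;     /* 4294967295 */
        unsigned int latency_us = eternal_ns / 1000;    /* 4294967 us */

        printf("%u us\n", breathing_room(latency_us));  /* 6442450 us */
        printf("%u us\n", breathing_room(10000));       /* 15000 us = 15 ms */

        return 0;
}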
Shawn