Hi Guys,
This patchset adds APIs to the OPP layer so that OPP transitions can be
handled from within the OPP layer itself. Currently, every OPP user has
to replicate the same code to switch between OPPs, while the OPP core
can handle this easily.

The first 7 patches update the OPP core to introduce the new APIs and
the next 9 patches update cpufreq-dt to use them.
11 out of 17 already carry Stephen's Reviewed-by; only a few are left :)
I hope this is the last version of the series :)
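
For reference, a minimal sketch of what a user like cpufreq-dt can look
like after the conversion (illustrative only; 'struct private_data' and
its 'cpu_dev' field mirror the existing driver, but the details here are
an assumption and may differ from the actual patches):

/*
 * Sketch: with dev_pm_opp_set_rate() the driver no longer open-codes
 * the regulator/clk sequencing for a frequency switch; the OPP core
 * finds the matching OPP, adjusts the supply (if one was registered)
 * and sets the clk rate in the right order.
 */
static int set_target(struct cpufreq_policy *policy, unsigned int index)
{
	struct private_data *priv = policy->driver_data;
	unsigned long freq_hz = policy->freq_table[index].frequency * 1000UL;

	return dev_pm_opp_set_rate(priv->cpu_dev, freq_hz);
}
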
Testing:
- Tested on Exynos 5250 Arndale (dual Cortex-A15)
- Tested with both the old V1 bindings and the new V2 bindings
- Tested with regulator names 'cpu-supply' and 'cpu0-supply'
- Tested with unsupported supply ranges as well, to check the
  opp-disable logic

V2->V3:
- Very minor updates.
- find_supply_name() doesn't return an error value anymore, so its
  callers don't check for one (see the sketch after this list).
- Consequently, 'name' no longer needs to be initialized to NULL.
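
Since the callers no longer check for an error, a helper along these
lines can simply hand back a name or NULL. This is a rough, hypothetical
sketch (assumes <linux/of.h> and <linux/kernel.h>); the candidate names
below only mirror the tested 'cpu0-supply'/'cpu-supply' variants and the
real code in the series may differ:

/* Sketch: return the supply name to pass to the OPP core, or NULL
 * when the device node carries no matching -supply property. */
static const char *find_supply_name(struct device_node *np)
{
	/* Illustrative candidates only. */
	static const char * const names[] = { "cpu0", "cpu" };
	char prop[32];
	int i;

	for (i = 0; i < ARRAY_SIZE(names); i++) {
		snprintf(prop, sizeof(prop), "%s-supply", names[i]);
		if (of_find_property(np, prop, NULL))
			return names[i];
	}

	return NULL;
}
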
Viresh Kumar (16):
PM / OPP: get/put regulators from OPP core
PM / OPP: Disable OPPs that aren't supported by the regulator
PM / OPP: Introduce dev_pm_opp_get_max_volt_latency()
PM / OPP: Introduce dev_pm_opp_get_max_transition_latency()
PM / OPP: Parse clock-latency and voltage-tolerance for v1 bindings
PM / OPP: Manage device clk
PM / OPP: Add dev_pm_opp_set_rate()
cpufreq: dt: Convert few pr_debug/err() calls to dev_dbg/err()
cpufreq: dt: Rename 'need_update' to 'opp_v1'
cpufreq: dt: OPP layers handles clock-latency for V1 bindings as well
cpufreq: dt: Pass regulator name to the OPP core
cpufreq: dt: Unsupported OPPs are already disabled
cpufreq: dt: Reuse dev_pm_opp_get_max_transition_latency()
cpufreq: dt: Use dev_pm_opp_set_rate() to switch frequency
cpufreq: dt: No need to fetch voltage-tolerance
cpufreq: dt: No need to allocate resources anymore
drivers/base/power/opp/core.c | 420 ++++++++++++++++++++++++++++++++++++++++++
drivers/base/power/opp/opp.h | 13 ++
drivers/cpufreq/cpufreq-dt.c | 300 +++++++++++-------------------
include/linux/pm_opp.h | 27 +++
4 files changed, 565 insertions(+), 195 deletions(-)
--
2.7.1.370.gb2aa7f8

Ensure that the move of a sched_entity is reflected in the load and
utilization of the task_group hierarchy.

When a sched_entity moves between groups or CPUs, the load and utilization
of the cfs_rq don't reflect the changes immediately but only converge to
the new values. As a result, the metrics are no longer aligned with the
actual balance of load in the system, and subsequent decisions are based
on a biased view.

This patchset synchronizes the load/utilization of a sched_entity with its
child cfs_rq (se->my_q) only when tasks move to/from the child cfs_rq:
- move between task groups
- migration between CPUs
Otherwise, PELT is updated as usual.
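
As a rough illustration of the idea, here is a simplified, hypothetical
helper showing utilization only (the actual patches also handle load and
runnable_load_avg, and use fair.c internals such as cfs_rq_of()):

/*
 * Sketch: when the child cfs_rq (se->my_q) of a group sched_entity
 * gains or loses a task, sync the group se with its child and push
 * the resulting delta straight up to the parent cfs_rq, instead of
 * waiting for PELT to slowly converge.
 */
static void propagate_entity_util(struct sched_entity *se)
{
	struct cfs_rq *gcfs_rq = se->my_q;	/* child cfs_rq */
	struct cfs_rq *cfs_rq = cfs_rq_of(se);	/* parent cfs_rq */
	long delta;

	if (!gcfs_rq)		/* task entities: nothing to propagate */
		return;

	delta = gcfs_rq->avg.util_avg - se->avg.util_avg;

	se->avg.util_avg = gcfs_rq->avg.util_avg;
	cfs_rq->avg.util_avg += delta;
}
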
This version doesn't include any changes related to the discussions that
started during the review of the previous version about:
- encapsulating the sequence for changing the property of a task
- removing a cfs_rq from the list during update_blocked_averages
These topics don't gain anything from being part of this patchset, as they
are fairly independent and deserve separate patches.

Changes since v3:
- Replaced the 2 arguments of update_load_avg() with 1 flags argument
- Propagated the move in runnable_load_avg when the sched_entity is
  already on_rq
- Ensured that intermediate values will not reach memory when updating
  load and utilization
- Optimized the calculation of load_avg for the sched_entity
- Fixed some typos

Changes since v2:
- Propagated both utilization and load
- Synced sched_entity and se->my_q instead of adding the delta

Changes since v1:
- This patchset needs the patch that fixes an issue with
  rq->leaf_cfs_rq_list, "sched: fix hierarchical order in
  rq->leaf_cfs_rq_list", in order to work correctly. I haven't sent them
  as a single patchset because the fix is independent of this one.
- Merged some functions that are always used together
- During the update of blocked load, ensured that the sched_entity is
  synced with the cfs_rq when applying the changes
- Fixed an issue when a task changes its CPU affinity

Vincent Guittot (7):
sched: factorize attach entity
sched: fix hierarchical order in rq->leaf_cfs_rq_list
sched: factorize PELT update
sched: propagate load during synchronous attach/detach
sched: propagate asynchrous detach
sched: fix task group initialization
sched: fix wrong utilization accounting when switching to fair class
kernel/sched/core.c | 21 +--
kernel/sched/fair.c | 354 +++++++++++++++++++++++++++++++++++++++++----------
kernel/sched/sched.h | 2 +
3 files changed, 300 insertions(+), 77 deletions(-)
--
1.9.1