On 15 September 2016 at 16:43, Peter Zijlstra <peterz@infradead.org> wrote:
> On Mon, Sep 12, 2016 at 09:47:49AM +0200, Vincent Guittot wrote:
>> +static inline void
>> +update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
>> +{
>> +	struct cfs_rq *gcfs_rq = group_cfs_rq(se);
>> +	long delta, load = gcfs_rq->avg.load_avg;
>> +
>> +	/* If the load of group cfs_rq is null, the load of the
>> +	 * sched_entity will also be null so we can skip the formula
>> +	 */
>> +	if (load) {
>> +		long tg_load;
>> +
>> +		/* Get tg's load and ensure tg_load > 0 */
>> +		tg_load = atomic_long_read(&gcfs_rq->tg->load_avg) + 1;
>> +
>> +		/* Ensure tg_load >= load and updated with current load */
>> +		tg_load -= gcfs_rq->tg_load_avg_contrib;
>> +		tg_load += load;
>> +
>> +		/* scale gcfs_rq's load into tg's shares */
>> +		load *= scale_load_down(gcfs_rq->tg->shares);
>> +		load /= tg_load;
>> +
>> +		/*
>> +		 * we need to compute a correction term in the case that the
>> +		 * task group is consuming <1 cpu so that we would contribute
>> +		 * the same load as a task of equal weight.
>> +		 */
>> +		if (tg_load < scale_load_down(gcfs_rq->tg->shares)) {
>> +			load *= tg_load;
>> +			load /= scale_load_down(gcfs_rq->tg->shares);
>> +		}
>> +	}
> Note that you're reversing the exact scaling you just applied.

Yes, indeed.

> That is:
>
>               shares    tg_load
>        load * ------- * ------- == load
>               tg_load    shares
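To make the cancellation concrete, here is a minimal standalone sketch (plain C, not part of the patch; the values are made up and chosen so the integer divisions are exact, standing in for gcfs_rq->avg.load_avg, scale_load_down(tg->shares) and the adjusted tg load):

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical sample values; tg_load < shares, so the
		 * correction branch above would be taken. */
		long load    = 512;	/* gcfs_rq->avg.load_avg */
		long shares  = 1024;	/* scale_load_down(tg->shares) */
		long tg_load = 256;	/* adjusted tg load, always >= 1 */

		long scaled    = load * shares / tg_load;	/* 2048 */
		long corrected = scaled * tg_load / shares;	/* back to 512 */

		/* scaling by shares/tg_load then tg_load/shares cancels out */
		assert(corrected == load);
		printf("load=%ld scaled=%ld corrected=%ld\n",
		       load, scaled, corrected);
		return 0;
	}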
> So something like:
>
>	shares = scale_load_down(gcfs_rq->tg->shares);
>	if (tg_load >= shares) {
>		load *= shares;
>		load /= tg_load;
>	}
>
> should be the same as the above and saves a bunch of math, no?
Yes
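One caveat worth noting: because the original version rounds twice (once per integer division), the two forms can differ by a single unit when tg_load < shares; the simplified form skips the lossy round trip entirely. A quick throwaway harness (plain C, hypothetical helper names scale_orig()/scale_new(), not kernel code) checks that the results never diverge by more than that:

	#include <assert.h>

	/* Original: always scale, then undo it when tg_load < shares. */
	static long scale_orig(long load, long shares, long tg_load)
	{
		load = load * shares / tg_load;
		if (tg_load < shares) {
			load = load * tg_load / shares;
		}
		return load;
	}

	/* Simplified variant: only scale when tg_load >= shares. */
	static long scale_new(long load, long shares, long tg_load)
	{
		if (tg_load >= shares) {
			load = load * shares / tg_load;
		}
		return load;
	}

	int main(void)
	{
		long shares = 1024;

		for (long load = 0; load <= 2048; load += 7) {
			for (long tg_load = 1; tg_load <= 4096; tg_load += 13) {
				long d = scale_new(load, shares, tg_load) -
					 scale_orig(load, shares, tg_load);
				/* Identical when tg_load >= shares; at most one
				 * unit of truncation error otherwise. */
				assert(d == 0 || d == 1);
			}
		}
		return 0;
	}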