Commit b89997aa authored by Vincent Donnefort, committed by Ingo Molnar

sched/pelt: Fix task util_est update filtering



Being called for each dequeue, util_est reduces the number of its updates
by filtering out cases where the EWMA signal differs from the task's
util_avg by less than 1%. This filtering is a problem during a sudden
util_avg ramp-up: because of the decay from a previously high util_avg,
the EWMA might now be close enough to the new util_avg that no update
happens, leaving ue.enqueued with an out-of-date value.

Taking both util_est members, EWMA and enqueued, into consideration for
the filtering ensures an up-to-date value for each of them.
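
The effect of the change can be seen in a small userspace model of the
filter (a hedged sketch: names mirror the patch below, but the
UTIL_EST_FASTUP reset and the UTIL_AVG_UNCHANGED handling are omitted,
within_margin() is simplified to an absolute-value check, and all numbers
are purely illustrative):

/*
 * Userspace model of the patched filter, for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>

#define SCHED_CAPACITY_SCALE	1024
#define UTIL_EST_MARGIN		(SCHED_CAPACITY_SCALE / 100)
#define UTIL_EST_WEIGHT_SHIFT	2

struct util_est { unsigned long enqueued, ewma; };

static int within_margin(long value, unsigned long margin)
{
	return labs(value) < (long)margin;
}

static void util_est_update(struct util_est *est, unsigned long new_util)
{
	struct util_est ue = *est;
	long last_ewma_diff, last_enqueued_diff;

	last_enqueued_diff = ue.enqueued;	/* remember the old sample */
	ue.enqueued = new_util;			/* take the new sample */

	last_ewma_diff = ue.enqueued - ue.ewma;
	last_enqueued_diff -= ue.enqueued;	/* old - new */
	if (within_margin(last_ewma_diff, UTIL_EST_MARGIN)) {
		/* EWMA is close; only write back if 'enqueued' moved. */
		if (!within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
			goto done;
		return;		/* both members close: skip the update */
	}

	/* EWMA step: ewma += (new - ewma) / 4 */
	ue.ewma <<= UTIL_EST_WEIGHT_SHIFT;
	ue.ewma  += last_ewma_diff;
	ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
done:
	*est = ue;	/* models WRITE_ONCE(p->se.avg.util_est, ue) */
}

int main(void)
{
	/* Stale enqueued from before the ramp-up; ewma happens to sit
	 * close to the new utilization because of the decay. */
	struct util_est est = { .enqueued = 100, .ewma = 510 };

	util_est_update(&est, 512);
	printf("enqueued=%lu ewma=%lu\n", est.enqueued, est.ewma);
	return 0;
}

With enqueued=100, ewma=510 and a new sample of 512, last_ewma_diff is 2
(within the 10-unit margin) but last_enqueued_diff is -412, so the model
prints enqueued=512 ewma=510: the fresh sample is written back while the
EWMA update is still skipped. The old filter would have returned early
and kept enqueued at the stale 100.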

For now this is only an issue for the trace probe, which might report the
stale value. Functionally it isn't a problem, since the estimated
utilization is always accessed through max(enqueued, ewma).
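
For reference, the read side can be modelled like this (a plain-C sketch
of the max(enqueued, ewma) access pattern; in the kernel this is the job
of _task_util_est() in fair.c, whose exact form varies across versions):

#include <stdio.h>

static unsigned long task_util_est(unsigned long enqueued, unsigned long ewma)
{
	return enqueued > ewma ? enqueued : ewma;
}

int main(void)
{
	/* A stale enqueued (100) is masked by an up-to-date ewma (510);
	 * only the tracepoint would expose the stale member directly. */
	printf("util_est = %lu\n", task_util_est(100, 510));
	return 0;
}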

This problem has been observed using LISA's UtilConvergence:test_means on
the sd845c board.

No regression was observed with Hackbench on sd845c, nor with perf-bench
sched pipe on hikey/hikey960.

Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210225165820.1377125-1-vincent.donnefort@arm.com
parent 39a2a6eb
kernel/sched/fair.c +12 −3
@@ -3941,6 +3941,8 @@ static inline void util_est_dequeue(struct cfs_rq *cfs_rq,
 	trace_sched_util_est_cfs_tp(cfs_rq);
 }
 
+#define UTIL_EST_MARGIN (SCHED_CAPACITY_SCALE / 100)
+
 /*
  * Check if a (signed) value is within a specified (unsigned) margin,
  * based on the observation that:
@@ -3958,7 +3960,7 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 				   struct task_struct *p,
 				   bool task_sleep)
 {
-	long last_ewma_diff;
+	long last_ewma_diff, last_enqueued_diff;
 	struct util_est ue;
 
 	if (!sched_feat(UTIL_EST))
@@ -3979,6 +3981,8 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	if (ue.enqueued & UTIL_AVG_UNCHANGED)
 		return;
 
+	last_enqueued_diff = ue.enqueued;
+
 	/*
 	 * Reset EWMA on utilization increases, the moving average is used only
 	 * to smooth utilization decreases.
@@ -3992,12 +3996,17 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	}
 
 	/*
-	 * Skip update of task's estimated utilization when its EWMA is
+	 * Skip update of task's estimated utilization when its members are
 	 * already ~1% close to its last activation value.
 	 */
 	last_ewma_diff = ue.enqueued - ue.ewma;
-	if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
+	last_enqueued_diff -= ue.enqueued;
+	if (within_margin(last_ewma_diff, UTIL_EST_MARGIN)) {
+		if (!within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
+			goto done;
+
 		return;
+	}
 
 	/*
 	 * To avoid overestimation of actual task utilization, skip updates if
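
For context on where the new goto lands: the done: label already exists
at the tail of util_est_update() (it was introduced with the
UTIL_EST_FASTUP feature) and performs the write-back. Abridged, the tail
of the function in kernels of that era looks roughly like this (an
excerpt-style sketch; details may differ):

	ue.ewma <<= UTIL_EST_WEIGHT_SHIFT;
	ue.ewma  += last_ewma_diff;
	ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
done:
	WRITE_ONCE(p->se.avg.util_est, ue);

	trace_sched_util_est_se_tp(&p->se);
}

Taking the goto done path therefore stores the refreshed ue.enqueued
while leaving ue.ewma untouched, whereas the plain return keeps the whole
struct, including the stale enqueued member, as it was.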