ANDROID: sched/fair: Attempt to improve throughput for asym cap systems

In some systems the capacity and group weights line up to defeat all the
small imbalance correction conditions in fix_small_imbalance, which can
cause bad task placement. Add a new condition if the existing code can't
see anything to fix:

If we have asymmetric capacity, and there are more tasks than CPUs in
the busiest group *and* fewer tasks than CPUs in the local group, then
we try to pull something. There could be transient small tasks which
prevent this from working, but on the whole it is beneficial for those
systems with inconvenient capacity/cluster size relationships.

Change-Id: Icf81cde215c082a61f816534b7990ccb70aee409
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
Author:    Chris Redpath <chris.redpath@arm.com>
Date:      2018-06-01 20:34:10 +01:00
Committer: Quentin Perret <quentin.perret@arm.com>
Parent:    65b8ddef4e
Commit:    f351885fc7

@@ -8425,7 +8425,22 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 	capa_move /= SCHED_CAPACITY_SCALE;
 
 	/* Move if we gain throughput */
-	if (capa_move > capa_now)
+	if (capa_move > capa_now) {
 		env->imbalance = busiest->load_per_task;
+		return;
+	}
+
+	/* We can't see throughput improvement with the load-based
+	 * method, but it is possible depending upon group size and
+	 * capacity range that there might still be an underutilized
+	 * cpu available in an asymmetric capacity system. Do one last
+	 * check just in case.
+	 */
+	if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
+	    busiest->group_type == group_overloaded &&
+	    busiest->sum_nr_running > busiest->group_weight &&
+	    local->sum_nr_running < local->group_weight &&
+	    local->group_capacity < busiest->group_capacity)
+		env->imbalance = busiest->load_per_task;
 }