sched/fair: Revert sched-domain iteration breakage
Patches c22402a2f ("sched/fair: Let minimally loaded cpu balance the group")
and 0ce90475 ("sched/fair: Add some serialization to the sched_domain
load-balance walk") are horribly broken, so revert them.

The problem is that while it sounds good to have the minimally loaded cpu do
the pulling of more load, the way we walk the domains gives absolutely no
guarantee this cpu will actually get to the domain. In fact it's very likely
it won't. Therefore the higher up the tree we get, the less likely it is
we'll balance at all.

The first-of-mask approach always walks up; while sucky in that it
accumulates load on the first cpu and needs extra passes to spread it out,
it at least guarantees a cpu gets up that far and that load-balancing
happens at all.

Since it's now always the first cpu, and idle cpus should always be able to
balance so they get a task as fast as possible, we can also do away with the
added serialization.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-rpuhs5s56aiv1aw7khv9zkw6@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>

committed by Ingo Molnar
parent 316ad24830
commit 04f733b4af
@@ -927,7 +927,6 @@ struct sched_group_power {
 struct sched_group {
 	struct sched_group *next;	/* Must be a circular list */
 	atomic_t ref;
-	int balance_cpu;
 
 	unsigned int group_weight;
 	struct sched_group_power *sgp;
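
As an aside, below is a minimal userspace sketch, not the kernel's actual
implementation, of the "first cpu of the mask balances" rule that this revert
restores; the toy topology, the bitmap cpumask encoding, and the helper names
(first_cpu, should_balance) are invented here purely for illustration.

/*
 * Illustrative only: at each sched_domain level, only the lowest-numbered
 * cpu in the domain's span runs the load balancer, so exactly one cpu is
 * guaranteed to walk all the way up the domain tree. Cpumasks are modelled
 * as plain unsigned long bitmaps.
 */
#include <stdio.h>

/* Lowest set bit == "first cpu of the mask". */
static int first_cpu(unsigned long span)
{
	return __builtin_ctzl(span);
}

/* May @cpu balance a domain covering @span? */
static int should_balance(int cpu, unsigned long span)
{
	return cpu == first_cpu(span);
}

int main(void)
{
	/* Toy topology: two 2-cpu cores inside one 4-cpu package. */
	unsigned long core_span[4] = { 0x3, 0x3, 0xc, 0xc };
	unsigned long pkg_span = 0xf;

	for (int cpu = 0; cpu < 4; cpu++)
		printf("cpu%d: balances core=%d package=%d\n", cpu,
		       should_balance(cpu, core_span[cpu]),
		       should_balance(cpu, pkg_span));
	return 0;
}

Only cpu0 ends up balancing the package-level domain, which is the guarantee
the commit message relies on: some cpu always gets that far, even though it
may take extra passes to spread the accumulated load back out.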