Peter Zijlstra e210bffd39 sched/fair: Fix and optimize the fork() path
The task_fork_fair() callback already calls __set_task_cpu() and takes
rq->lock.

If we move the sched_class::task_fork callback in sched_fork() under
the existing p->pi_lock, right after its set_task_cpu() call, we can
avoid doing two such calls and omit the IRQ disabling on the rq->lock.
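In sketch form (simplified from the reworked tail of sched_fork() in
kernel/sched/core.c; unrelated setup and error handling omitted), the
new ordering is roughly:

	raw_spin_lock_irqsave(&p->pi_lock, flags);
	/*
	 * We're setting the CPU for the first time; there is no
	 * migration, so __set_task_cpu() is sufficient.
	 */
	__set_task_cpu(p, cpu);
	if (p->sched_class->task_fork)
		p->sched_class->task_fork(p);
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

Because pi_lock is taken with IRQs already disabled here,
task_fork_fair() can take rq->lock with a plain raw_spin_lock().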

Change to __set_task_cpu() to skip the migration bits; this is a new
task, not a migration. Similarly, make wake_up_new_task() use
__set_task_cpu() for the same reason: the task hasn't actually
migrated, since it has never run.
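A minimal sketch of the wake_up_new_task() side (simplified,
SMP-only detail):

	raw_spin_lock_irqsave(&p->pi_lock, flags);
#ifdef CONFIG_SMP
	/*
	 * Fork balancing: use __set_task_cpu() to avoid calling
	 * sched_class::migrate_task_rq -- this is first placement,
	 * not a migration.
	 */
	__set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
#endif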

This cures the problem of calling migrate_task_rq_fair(), which does
remove_entity_load_avg() on tasks that have never been added to the
load avg to begin with.
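For reference, the pre-fix chain that performed the bogus removal
(call-trace sketch, simplified):

	set_task_cpu(p, new_cpu)
	  -> p->sched_class->migrate_task_rq(p)	  /* migrate_task_rq_fair() */
	       -> remove_entity_load_avg(&p->se)  /* subtracts load_avg the
						     new task never added */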

This bug resulted in transiently messed-up load_avg values that
averaged out within a few dozen milliseconds, which is probably why
it went unnoticed for such a long time.

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-06-27 12:17:50 +02:00