sched: Change cfs_rq load avg to unsigned long
Since the 'u64 runnable_load_avg, blocked_load_avg' fields in the cfs_rq struct never exceed the 'unsigned long' cfs_rq->load.weight, we don't need u64 variables to describe them; unsigned long is more efficient and convenient.

Signed-off-by: Alex Shi <alex.shi@intel.com>
Reviewed-by: Paul Turner <pjt@google.com>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1371694737-29336-10-git-send-email-alex.shi@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
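To make the sizing argument concrete, here is a minimal, self-contained sketch (not kernel code; the struct below is an illustrative stand-in for the cfs_rq fields, with only the field names taken from the diff). On 32-bit builds unsigned long is one machine word, so dropping u64 avoids double-word arithmetic for values that are already bounded by the unsigned long load weight:

    #include <stdio.h>

    /* Simplified stand-in for the relevant cfs_rq fields after this
     * change; field names mirror the diff, everything else is
     * illustrative. */
    struct cfs_rq_like {
            unsigned long load_weight;       /* cfs_rq->load.weight */
            unsigned long runnable_load_avg; /* was u64 before this commit */
            unsigned long blocked_load_avg;  /* was u64 before this commit */
    };

    int main(void)
    {
            /* On 32-bit targets unsigned long is 4 bytes, so these
             * fields become single-word loads/stores instead of
             * double-word u64 arithmetic. */
            printf("sizeof(unsigned long)      = %zu\n", sizeof(unsigned long));
            printf("sizeof(unsigned long long) = %zu\n", sizeof(unsigned long long));
            printf("sizeof(struct cfs_rq_like) = %zu\n", sizeof(struct cfs_rq_like));
            return 0;
    }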
@@ -277,7 +277,7 @@ struct cfs_rq {
 	 * This allows for the description of both thread and group usage (in
 	 * the FAIR_GROUP_SCHED case).
 	 */
-	u64 runnable_load_avg, blocked_load_avg;
+	unsigned long runnable_load_avg, blocked_load_avg;
 	atomic64_t decay_counter, removed_load;
 	u64 last_decay;
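For context on why unsigned long range is enough, a hedged sketch of the accumulation pattern, assuming the PELT behaviour of this kernel era: runnable_load_avg only sums per-entity contributions that are each bounded by an unsigned long weight. The helper names and main() harness below are hypothetical illustrations, not code from fair.c:

    #include <stdio.h>

    /* Sketch only: each scheduling entity's contribution is bounded by
     * its load.weight, which is itself an unsigned long, so the running
     * sum fits the same type (the name mirrors the kernel's
     * se->avg.load_avg_contrib, but this code is illustrative). */
    struct sched_entity_like {
            unsigned long load_avg_contrib; /* bounded by se->load.weight */
    };

    /* runnable_load_avg is the sum of the contributions of the entities
     * currently queued, so unsigned long range is sufficient. */
    static unsigned long runnable_load_avg;

    static void enqueue_contrib(const struct sched_entity_like *se)
    {
            runnable_load_avg += se->load_avg_contrib;
    }

    static void dequeue_contrib(const struct sched_entity_like *se)
    {
            runnable_load_avg -= se->load_avg_contrib;
    }

    int main(void)
    {
            struct sched_entity_like a = { .load_avg_contrib = 1024 };
            struct sched_entity_like b = { .load_avg_contrib = 512 };

            enqueue_contrib(&a);
            enqueue_contrib(&b);
            dequeue_contrib(&a);
            printf("runnable_load_avg = %lu\n", runnable_load_avg); /* 512 */
            return 0;
    }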