mm: numa: Revert temporarily disabling of NUMA migration
With the scan rate code working (at least for multi-instance specjbb),
the large hammer that is "sched: Do not migrate memory immediately
after switching node" can be replaced with something smarter. Revert
the temporary disabling of migration and all traces of numa_migrate_seq.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-61-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
committed by Ingo Molnar
parent 930aa174fc
commit 1e3646ffc6
@@ -2404,18 +2404,6 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		last_cpupid = page_cpupid_xchg_last(page, this_cpupid);
 		if (!cpupid_pid_unset(last_cpupid) && cpupid_to_nid(last_cpupid) != thisnid)
 			goto out;
-
-#ifdef CONFIG_NUMA_BALANCING
-		/*
-		 * If the scheduler has just moved us away from our
-		 * preferred node, do not bother migrating pages yet.
-		 * This way a short and temporary process migration will
-		 * not cause excessive memory migration.
-		 */
-		if (thisnid != current->numa_preferred_nid &&
-		    !current->numa_migrate_seq)
-			goto out;
-#endif
 	}
 
 	if (curnid != polnid)