mm, page_alloc: remove MIGRATE_RESERVE
MIGRATE_RESERVE preserves an old property of the buddy allocator that
existed prior to fragmentation avoidance -- min_free_kbytes worth of pages
tended to remain contiguous until the only alternative was to fail the
allocation.  At the time it was discovered that high-order atomic
allocations relied on this property so MIGRATE_RESERVE was introduced.  A
later patch will introduce an alternative MIGRATE_HIGHATOMIC so this patch
deletes MIGRATE_RESERVE and supporting code so it'll be easier to review.
Note that this patch in isolation may look like a false regression if
someone was bisecting high-order atomic allocation failures.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vitaly Wool <vitalywool@gmail.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
committed by Linus Torvalds

parent f77cf4e4cc
commit 974a786e63
@@ -116,7 +116,7 @@ static void set_recommended_min_free_kbytes(void)
 	for_each_populated_zone(zone)
 		nr_zones++;
 
-	/* Make sure at least 2 hugepages are free for MIGRATE_RESERVE */
+	/* Ensure 2 pageblocks are free to assist fragmentation avoidance */
 	recommended_min = pageblock_nr_pages * nr_zones * 2;
 
 	/*