mm, hugetlb, soft_offline: use new_page_nodemask for soft offline migration
new_page is yet another duplication of the migration callback which has to handle hugetlb migration specially. We can safely use the generic new_page_nodemask for the same purpose.

Please note that gigantic hugetlb pages do not need any special handling because alloc_huge_page_nodemask will make sure to check pages in all per-node pools. The reason this was done previously was that alloc_huge_page_node treated NUMA_NO_NODE and a specific node differently, so alloc_huge_page_node(nid) would check only that specific node.

Link: http://lkml.kernel.org/r/20170622193034.28972-4-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Tested-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit ef77ba5ce6
parent 3e59fcb0e8
committed by Linus Torvalds
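
For context, the generic callback this patch switches to, new_page_nodemask(), was added earlier in the same series. Below is a simplified sketch of its shape at the time (details such as the exact gfp mask handling are elided and may differ from mainline):

/*
 * Simplified sketch of new_page_nodemask() as it looked around this
 * series (a static inline in include/linux/migrate.h). Hugetlb pages,
 * gigantic ones included, are dispatched to alloc_huge_page_nodemask(),
 * which checks the per-node pools of all allowed nodes while preferring
 * preferred_nid; everything else goes through the page allocator.
 */
static inline struct page *new_page_nodemask(struct page *page,
		int preferred_nid, nodemask_t *nodemask)
{
	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;

	if (PageHuge(page))
		return alloc_huge_page_nodemask(page_hstate(compound_head(page)),
						preferred_nid, nodemask);

	if (PageHighMem(page) ||
	    (zone_idx(page_zone(page)) == ZONE_MOVABLE))
		gfp_mask |= __GFP_HIGHMEM;

	return __alloc_pages_nodemask(gfp_mask, 0, preferred_nid, nodemask);
}

This is why the gigantic-page special case in new_page() below can go away: the nodemask-based hugetlb allocator already covers every node's pool on its own.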
mm/memory-failure.c
@@ -1484,16 +1484,8 @@ EXPORT_SYMBOL(unpoison_memory);
 static struct page *new_page(struct page *p, unsigned long private, int **x)
 {
 	int nid = page_to_nid(p);
-	if (PageHuge(p)) {
-		struct hstate *hstate = page_hstate(compound_head(p));
 
-		if (hstate_is_gigantic(hstate))
-			return alloc_huge_page_node(hstate, NUMA_NO_NODE);
-
-		return alloc_huge_page_node(hstate, nid);
-	} else {
-		return __alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);
-	}
+	return new_page_nodemask(p, nid, &node_states[N_MEMORY]);
 }
 
 /*
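
For reference, the soft-offline path consumes this callback by handing it to migrate_pages(); roughly as follows (an illustrative excerpt modeled on __soft_offline_page() of that era, not a verbatim quote):

/*
 * Illustrative use: soft offline isolates the poisoned page onto a
 * list and lets migrate_pages() call new_page() to allocate each
 * migration target. The private argument (here MPOL_MF_MOVE_ALL) is
 * passed through to the callback, which ignores it.
 */
LIST_HEAD(pagelist);
int ret;

list_add(&page->lru, &pagelist);
ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
		    MIGRATE_SYNC, MR_MEMORY_FAILURE);
if (ret)
	putback_movable_pages(&pagelist);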