mm: follow_hugetlb_page flags
follow_hugetlb_page() shouldn't be guessing about the coredump case either: pass the foll_flags down to it, instead of just the write bit.

Remove that obscure huge_zeropage_ok() test.  The decision is easy, though unlike the non-huge case, here vm_ops->fault is always set.  But we know that a fault would serve up zeroes, unless there's already a hugetlbfs pagecache page to back the range.

(Alternatively, since hugetlb pages aren't swapped out under pressure, you could save more dump space by arguing that a page not yet faulted into this process cannot be relevant to the dump; but that would be more surprising.)

Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 2a15efc953
parent 8e4b9a6071
committed by Linus Torvalds

mm/memory.c: 14 lines changed
@@ -1260,17 +1260,19 @@ int __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		    !(vm_flags & vma->vm_flags))
 			return i ? : -EFAULT;
 
-		if (is_vm_hugetlb_page(vma)) {
-			i = follow_hugetlb_page(mm, vma, pages, vmas,
-					&start, &nr_pages, i, write);
-			continue;
-		}
-
 		foll_flags = FOLL_TOUCH;
 		if (pages)
 			foll_flags |= FOLL_GET;
+		if (flags & GUP_FLAGS_DUMP)
+			foll_flags |= FOLL_DUMP;
 		if (write)
 			foll_flags |= FOLL_WRITE;
+
+		if (is_vm_hugetlb_page(vma)) {
+			i = follow_hugetlb_page(mm, vma, pages, vmas,
+					&start, &nr_pages, i, foll_flags);
+			continue;
+		}
 
 		do {
 			struct page *page;