thp: do_huge_pmd_wp_page(): handle huge zero page

On write access to the huge zero page we allocate a new huge page and clear it.

On ENOMEM we fall back gracefully: create a new pmd page table and set the
pte covering the fault address to a newly allocated normal (4k) page.  All
other ptes in the pmd are set to the normal zero page.  A sketch of this
fallback path is shown below.
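
The following is a simplified sketch of the fallback path described above,
not the code added by this commit (the real helper in mm/huge_memory.c also
handles page-table locking, memcg charging, mmu notifiers and error
unwinding).  The function name wp_zero_page_fallback_sketch is made up for
illustration; the kernel interfaces used (pte_alloc_one(), pmd_populate(),
my_zero_pfn(), etc.) are the ~3.7-era ones.

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/gfp.h>
#include <linux/rmap.h>
#include <asm/pgalloc.h>

/*
 * Sketch of the ENOMEM fallback: instead of a huge page, populate a
 * regular pmd page table.  The pte covering the faulting address gets a
 * freshly allocated, zeroed 4k page; every other pte in the pmd range is
 * pointed back at the normal (4k) zero page.
 *
 * Locking, memcg charging, mmu notifiers, mm counters and full error
 * unwinding are deliberately omitted.
 */
static int wp_zero_page_fallback_sketch(struct mm_struct *mm,
					struct vm_area_struct *vma,
					unsigned long address, pmd_t *pmd,
					unsigned long haddr)
{
	pgtable_t pgtable;
	struct page *page;
	unsigned long addr;
	int i;

	pgtable = pte_alloc_one(mm, haddr);	/* new pmd page table */
	if (unlikely(!pgtable))
		return VM_FAULT_OOM;

	page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
	if (unlikely(!page)) {
		pte_free(mm, pgtable);
		return VM_FAULT_OOM;
	}
	clear_user_highpage(page, address);

	/* Replace the huge zero pmd with the new page table. */
	pmdp_clear_flush(vma, haddr, pmd);
	pmd_populate(mm, pmd, pgtable);

	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
		pte_t *pte = pte_offset_map(pmd, addr);
		pte_t entry;

		if (addr == (address & PAGE_MASK)) {
			/* Faulting pte: the new, writable 4k page
			 * (this is a write fault, so the vma is writable). */
			entry = mk_pte(page, vma->vm_page_prot);
			entry = pte_mkwrite(pte_mkdirty(entry));
			page_add_new_anon_rmap(page, vma, addr);
		} else {
			/* Everything else: the normal 4k zero page. */
			entry = pfn_pte(my_zero_pfn(addr), vma->vm_page_prot);
			entry = pte_mkspecial(entry);
		}
		set_pte_at(mm, addr, pte, entry);
		pte_unmap(pte);
	}
	return 0;
}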

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@linux.intel.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author:    Kirill A. Shutemov
Date:      2012-12-12 13:50:54 -08:00
Committer: Linus Torvalds
Parent:    fc9fe822f7
Commit:    93b4796ded

3 changed files with 104 additions and 22 deletions

@@ -724,13 +724,6 @@ static inline int is_zero_pfn(unsigned long pfn)
 }
 #endif
 
-#ifndef my_zero_pfn
-static inline unsigned long my_zero_pfn(unsigned long addr)
-{
-	return zero_pfn;
-}
-#endif
-
 /*
  * vm_normal_page -- This function gets the "struct page" associated with a pte.
  *