mm: make compound_head() robust

Hugh has pointed out that a compound_head() call can be unsafe in some
contexts. Here's one example:

	CPU0					CPU1

isolate_migratepages_block()
  page_count()
    compound_head()
      !!PageTail() == true
					put_page()
					  tail->first_page = NULL
      head = tail->first_page
					alloc_pages(__GFP_COMP)
					   prep_compound_page()
					     tail->first_page = head
					     __SetPageTail(p);
      !!PageTail() == true
    <head == NULL dereferencing>

The race is purely theoretical. I don't think it's possible to trigger
it in practice. But who knows.

We can fix the race by changing how we encode PageTail() and
compound_head() within struct page, so that both can be updated in one
shot.

The patch introduces page->compound_head in the third double word
block, in front of compound_dtor and compound_order. Bit 0 encodes
PageTail() and, if it is set, the rest of the bits are a pointer to the
head page.
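
In code, the new encoding boils down to something like the following (a
simplified sketch of the new helpers; exact placement and details in
the patch may differ):

	/* One store updates both the PageTail() bit and the head pointer. */
	static __always_inline void set_compound_head(struct page *page,
						      struct page *head)
	{
		WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
	}

	static inline int PageTail(struct page *page)
	{
		return READ_ONCE(page->compound_head) & 1;
	}

	static inline struct page *compound_head(struct page *page)
	{
		unsigned long head = READ_ONCE(page->compound_head);

		/* Bit 0 set: tail page; the rest is the head pointer. */
		if (unlikely(head & 1))
			return (struct page *)(head - 1);
		return page;
	}

Both the tail flag and the head pointer come from a single READ_ONCE(),
so there is no window where PageTail() is true but the head pointer is
stale, which is what closes the race above.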

The patch moves page->pmd_huge_pte out of that word, just in case an
architecture defines pgtable_t as something that can have bit 0 set
(see the union sketch below).

hugetlb_cgroup uses page->lru.next in the second tail page to store a
pointer to struct hugetlb_cgroup. The patch switches it to
page->private in the second tail page instead. That space is free since
->first_page is removed from the union.
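
As a rough sketch (not the exact hunks of the patch), the hugetlb_cgroup
accessors end up looking something like:

	static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
	{
		if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
			return NULL;
		/* second tail page: ->private instead of ->lru.next */
		return (struct hugetlb_cgroup *)page[2].private;
	}

	static inline int set_hugetlb_cgroup(struct page *page,
					     struct hugetlb_cgroup *h_cg)
	{
		if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
			return -1;
		page[2].private = (unsigned long)h_cg;
		return 0;
	}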

The patch also opens up the possibility of removing the
HUGETLB_CGROUP_MIN_ORDER limitation, since there is now space in the
first tail page to store the struct hugetlb_cgroup pointer. But that's
out of scope for this patch.

That means page->compound_head shares storage space with:

 - page->lru.next;
 - page->next;
 - page->rcu_head.next;

That's too long a list to be absolutely sure, but it looks like nobody
uses bit 0 of the word.
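
Schematically, the third double word union ends up looking something
like this (heavily simplified and member types guessed; see struct page
in include/linux/mm_types.h for the real layout):

	union {
		struct list_head lru;		/* ->lru.next shares the word */
		struct page *next;		/* slub per-cpu partial pages */
		struct rcu_head rcu_head;	/* ->rcu_head.next shares it too */

		/* Tail pages of compound page */
		struct {
			unsigned long compound_head;	/* bit 0 set: tail page */

			/* First tail page only */
			unsigned int compound_dtor;
			unsigned int compound_order;
		};

	#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && USE_SPLIT_PMD_PTLOCKS
		struct {
			unsigned long __pad;	/* keep pmd_huge_pte out of the
						 * word shared with compound_head:
						 * pgtable_t might have bit 0 set */
			pgtable_t pmd_huge_pte;	/* protected by page->ptl */
		};
	#endif
	};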

page->rcu_head.next is guaranteed[1] to have bit 0 clear as long as we
use call_rcu(), call_rcu_bh(), call_rcu_sched(), or call_srcu(). But a
future call_rcu_lazy() is not allowed, as it makes use of that bit and
we could get a false positive PageTail().

[1] http://lkml.kernel.org/g/20150827163634.GD4029@linux.vnet.ibm.com

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

@@ -430,46 +430,6 @@ static inline void compound_unlock_irqrestore(struct page *page,
 #endif
 }
 
-static inline struct page *compound_head_by_tail(struct page *tail)
-{
-	struct page *head = tail->first_page;
-
-	/*
-	 * page->first_page may be a dangling pointer to an old
-	 * compound page, so recheck that it is still a tail
-	 * page before returning.
-	 */
-	smp_rmb();
-	if (likely(PageTail(tail)))
-		return head;
-	return tail;
-}
-
-/*
- * Since either compound page could be dismantled asynchronously in THP
- * or we access asynchronously arbitrary positioned struct page, there
- * would be tail flag race. To handle this race, we should call
- * smp_rmb() before checking tail flag. compound_head_by_tail() did it.
- */
-static inline struct page *compound_head(struct page *page)
-{
-	if (unlikely(PageTail(page)))
-		return compound_head_by_tail(page);
-	return page;
-}
-
-/*
- * If we access compound page synchronously such as access to
- * allocated page, there is no need to handle tail flag race, so we can
- * check tail flag directly without any synchronization primitive.
- */
-static inline struct page *compound_head_fast(struct page *page)
-{
-	if (unlikely(PageTail(page)))
-		return page->first_page;
-	return page;
-}
-
 /*
  * The atomic page->_mapcount, starts from -1: so that transitions
  * both from it and to it can be tracked, using atomic_inc_and_test
@@ -518,7 +478,7 @@ static inline void get_huge_page_tail(struct page *page)
 	VM_BUG_ON_PAGE(!PageTail(page), page);
 	VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 	VM_BUG_ON_PAGE(atomic_read(&page->_count) != 0, page);
-	if (compound_tail_refcounted(page->first_page))
+	if (compound_tail_refcounted(compound_head(page)))
 		atomic_inc(&page->_mapcount);
 }
 
@@ -541,13 +501,7 @@ static inline struct page *virt_to_head_page(const void *x)
 {
 	struct page *page = virt_to_page(x);
 
-	/*
-	 * We don't need to worry about synchronization of tail flag
-	 * when we call virt_to_head_page() since it is only called for
-	 * already allocated page and this page won't be freed until
-	 * this virt_to_head_page() is finished. So use _fast variant.
-	 */
-	return compound_head_fast(page);
+	return compound_head(page);
 }
 
 /*
@@ -1586,8 +1540,7 @@ static inline bool ptlock_init(struct page *page)
 	 * with 0. Make sure nobody took it in use in between.
 	 *
 	 * It can happen if arch try to use slab for page table allocation:
-	 * slab code uses page->slab_cache and page->first_page (for tail
-	 * pages), which share storage with page->ptl.
+	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
 	if (!ptlock_alloc(page))