mm: add_active_or_unevictable into rmap
lru_cache_add_active_or_unevictable() and page_add_new_anon_rmap() always
appear together.  Save some symbol table space and some jumping around by
removing lru_cache_add_active_or_unevictable(), folding its code into
page_add_new_anon_rmap(): like how we add file pages to lru just after
adding them to page cache.

Remove the nearby "TODO: is this safe?" comments (yes, it is safe), and
change page_add_new_anon_rmap()'s address BUG_ON to VM_BUG_ON as
originally intended.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
 mm/swap.c | 19 -------------------
@@ -246,25 +246,6 @@ void add_page_to_unevictable_list(struct page *page)
 	spin_unlock_irq(&zone->lru_lock);
 }
 
-/**
- * lru_cache_add_active_or_unevictable
- * @page:  the page to be added to LRU
- * @vma:   vma in which page is mapped for determining reclaimability
- *
- * place @page on active or unevictable LRU list, depending on
- * page_evictable().  Note that if the page is not evictable,
- * it goes directly back onto it's zone's unevictable list.  It does
- * NOT use a per cpu pagevec.
- */
-void lru_cache_add_active_or_unevictable(struct page *page,
-					struct vm_area_struct *vma)
-{
-	if (page_evictable(page, vma))
-		lru_cache_add_lru(page, LRU_ACTIVE + page_is_file_cache(page));
-	else
-		add_page_to_unevictable_list(page);
-}
-
 /*
  * Drain pages out of the cpu's pagevecs.
  * Either "cpu" is the current CPU, and preemption has already been