mm: remove try_to_munlock from vmscan
An unfortunate feature of the Unevictable LRU work was that reclaiming an anonymous page involved an extra scan through the anon_vma: to check that the page is evictable before allocating swap, because the swap could not be freed reliably soon afterwards.

Now that try_to_free_swap() has replaced remove_exclusive_swap_page(), that's not an issue any more: remove the try_to_munlock() call from shrink_page_list(), leaving it to try_to_unmap() to discover if the page is one to be culled to the unevictable list - in which case then try_to_free_swap().

Update unevictable-lru.txt to remove comments on the try_to_munlock() in shrink_page_list(), and shorten some lines over 80 columns.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Robin Holt <holt@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/vmscan.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -625,15 +625,6 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (PageAnon(page) && !PageSwapCache(page)) {
 			if (!(sc->gfp_mask & __GFP_IO))
 				goto keep_locked;
-			switch (try_to_munlock(page)) {
-			case SWAP_FAIL:		/* shouldn't happen */
-			case SWAP_AGAIN:
-				goto keep_locked;
-			case SWAP_MLOCK:
-				goto cull_mlocked;
-			case SWAP_SUCCESS:
-				; /* fall thru'; add to swap cache */
-			}
 			if (!add_to_swap(page, GFP_ATOMIC))
 				goto activate_locked;
 			may_enter_fs = 1;
@@ -752,6 +743,8 @@ free_it:
 		continue;

 cull_mlocked:
+		if (PageSwapCache(page))
+			try_to_free_swap(page);
 		unlock_page(page);
 		putback_lru_page(page);
 		continue;
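Not part of the commit: below is a standalone userspace sketch of the reclaim flow this patch leaves behind. The types and functions are stubs whose names merely echo mm/vmscan.c; the point is only to illustrate that swap is now allocated up front, and if try_to_unmap() reports SWAP_MLOCK the page is culled to the unevictable list and try_to_free_swap() gives the swap slot back.

/*
 * Sketch only - not kernel code.  Models the post-patch decision for an
 * anonymous page in shrink_page_list() terms, with stubbed helpers.
 */
#include <stdbool.h>
#include <stdio.h>

enum unmap_result { SWAP_SUCCESS, SWAP_AGAIN, SWAP_FAIL, SWAP_MLOCK };

struct page_model {
	bool anon;
	bool in_swap_cache;
	bool mlocked;		/* mapped by some VM_LOCKED vma */
};

/*
 * Stub: before the patch, vmscan called try_to_munlock() separately just
 * to learn this; now the mlock check is left to try_to_unmap() itself.
 */
static enum unmap_result try_to_unmap(struct page_model *page)
{
	return page->mlocked ? SWAP_MLOCK : SWAP_SUCCESS;
}

static bool add_to_swap(struct page_model *page)
{
	page->in_swap_cache = true;	/* pretend swap allocation succeeded */
	return true;
}

static void try_to_free_swap(struct page_model *page)
{
	page->in_swap_cache = false;	/* swap slot reliably freed again */
}

/* Decide what happens to one page, in shrink_page_list() terms. */
static const char *reclaim_one(struct page_model *page)
{
	if (page->anon && !page->in_swap_cache) {
		/* Swap is allocated first; no extra anon_vma pre-scan. */
		if (!add_to_swap(page))
			return "activate_locked";
	}
	switch (try_to_unmap(page)) {
	case SWAP_MLOCK:
		/* cull_mlocked: drop the swap we just allocated. */
		if (page->in_swap_cache)
			try_to_free_swap(page);
		return "putback to unevictable list";
	case SWAP_SUCCESS:
		return "pageout / free";
	default:
		return "keep_locked";
	}
}

int main(void)
{
	struct page_model locked = { .anon = true, .mlocked = true };
	struct page_model plain = { .anon = true };

	printf("mlocked anon page: %s\n", reclaim_one(&locked));
	printf("plain anon page:   %s\n", reclaim_one(&plain));
	return 0;
}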