FROMGIT: mm/migrate: fix race between lock page and clear PG_Isolated

When memory is tight, the system may start compacting memory to satisfy
large contiguous allocation requests.  If a process tries to lock a memory
page that is currently locked and isolated for compaction, it may wait a
long time or even forever.  This is because compaction clears PG_isolated
non-atomically while holding the page lock, which can overwrite the
PG_waiters bit set by a process that failed to acquire the page lock and
added itself to the page's wait queue.

CPU1                            CPU2
lock_page(page); (successful)
                                lock_page(); (failed)
__ClearPageIsolated(page);      SetPageWaiters(page) (may be overwritten)
unlock_page(page);
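
The same lost-update pattern can be modelled outside the kernel.  Below is
a minimal, stand-alone C sketch (illustrative only: it uses C11 atomics and
pthreads instead of the kernel's bitops, and page_flags / FLAG_ISOLATED /
FLAG_WAITERS are made-up stand-ins) showing how a plain read-modify-write
of a shared flags word can erase a bit that another thread sets atomically
in the same window:

/* Illustrative user-space model of the PG_isolated / PG_waiters race. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define FLAG_ISOLATED (1UL << 0)	/* stand-in for PG_isolated */
#define FLAG_WAITERS  (1UL << 1)	/* stand-in for PG_waiters  */

static _Atomic unsigned long page_flags = FLAG_ISOLATED;

static void *waiter(void *arg)
{
	(void)arg;
	/* Like SetPageWaiters(): an atomic OR of a single bit. */
	atomic_fetch_or(&page_flags, FLAG_WAITERS);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, waiter, NULL);

	/*
	 * Like __ClearPageIsolated(): a plain read-modify-write.  If the
	 * waiter's atomic OR lands between the load and the store below,
	 * FLAG_WAITERS is silently lost.
	 */
	unsigned long v = atomic_load_explicit(&page_flags, memory_order_relaxed);
	v &= ~FLAG_ISOLATED;
	atomic_store_explicit(&page_flags, v, memory_order_relaxed);

	pthread_join(t, NULL);
	printf("waiters bit was %s\n",
	       (atomic_load(&page_flags) & FLAG_WAITERS) ? "kept" : "lost");
	return 0;
}

A single run will usually print "kept" because the window is tiny; the
point is that nothing orders the plain load/store against the concurrent
atomic OR, which is exactly what clear_bit()/set_bit() guarantee and
__clear_bit()/__set_bit() do not.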

The solution is to avoid non-atomic operations on page flags while holding
the page lock.
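
For reference, the two macro families differ only in the bit operations
they emit.  A simplified sketch of roughly what they expand to (PF_ANY
policy, with the compound-page/folio plumbing omitted; see
include/linux/page-flags.h for the real definitions):

/* __PAGEFLAG(Isolated, isolated, PF_ANY) -- non-atomic accessors: */
static inline void __SetPageIsolated(struct page *page)
{
	__set_bit(PG_isolated, &page->flags);	/* plain read-modify-write */
}
static inline void __ClearPageIsolated(struct page *page)
{
	__clear_bit(PG_isolated, &page->flags);	/* plain read-modify-write */
}

/* PAGEFLAG(Isolated, isolated, PF_ANY) -- atomic accessors: */
static inline void SetPageIsolated(struct page *page)
{
	set_bit(PG_isolated, &page->flags);	/* atomic RMW */
}
static inline void ClearPageIsolated(struct page *page)
{
	clear_bit(PG_isolated, &page->flags);	/* atomic RMW */
}

Because PG_waiters lives in the same flags word, any non-atomic RMW on that
word is unsafe while other CPUs may set bits concurrently, regardless of
whether the page lock is held.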

Link: https://lkml.kernel.org/r/20220315030515.20263-1-andrew.yang@mediatek.com
Signed-off-by: andrew.yang <andrew.yang@mediatek.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: "Vlastimil Babka" <vbabka@suse.cz>
Cc: David Howells <dhowells@redhat.com>
Cc: "William Kucharski" <william.kucharski@oracle.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Nicholas Tang <nicholas.tang@mediatek.com>
Cc: Kuan-Ying Lee <Kuan-Ying.Lee@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
(cherry picked from commit 48911e41ddee4fe113bf1e4303dda1a413b169c9
 https://github.com/hnaz/linux-mm.git)
Bug: 225086204
Change-Id: I58547aeae29bcbc2071776330f9c31b9eb9f5aa8
---
 include/linux/page-flags.h |  2 +-
 mm/migrate.c               | 14 +++++++-------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -795,7 +795,7 @@ PAGE_TYPE_OPS(Guard, guard)
 
 extern bool is_free_buddy_page(struct page *page);
 
-__PAGEFLAG(Isolated, isolated, PF_ANY);
+PAGEFLAG(Isolated, isolated, PF_ANY);
 
 /*
  * If network-based swap is enabled, sl*b must keep track of whether pages


diff --git a/mm/migrate.c b/mm/migrate.c
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -105,7 +105,7 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
 
 		/* Driver shouldn't use PG_isolated bit of page->flags */
 		WARN_ON_ONCE(PageIsolated(page));
-		__SetPageIsolated(page);
+		SetPageIsolated(page);
 		unlock_page(page);
 
 		return 0;
@@ -129,7 +129,7 @@ void putback_movable_page(struct page *page)
 
 	mapping = page_mapping(page);
 	mapping->a_ops->putback_page(page);
-	__ClearPageIsolated(page);
+	ClearPageIsolated(page);
 }
 
 /*
@@ -162,7 +162,7 @@ void putback_movable_pages(struct list_head *l)
 			if (PageMovable(page))
 				putback_movable_page(page);
 			else
-				__ClearPageIsolated(page);
+				ClearPageIsolated(page);
 			unlock_page(page);
 			put_page(page);
 		} else {
@@ -952,7 +952,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 		VM_BUG_ON_PAGE(!PageIsolated(page), page);
 		if (!PageMovable(page)) {
 			rc = MIGRATEPAGE_SUCCESS;
-			__ClearPageIsolated(page);
+			ClearPageIsolated(page);
 			goto out;
 		}
 
@@ -974,7 +974,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 			 * We clear PG_movable under page_lock so any compactor
 			 * cannot try to migrate this page.
 			 */
-			__ClearPageIsolated(page);
+			ClearPageIsolated(page);
 		}
 
 		/*
@@ -1160,7 +1160,7 @@ static int unmap_and_move(new_page_t get_new_page,
 		if (unlikely(__PageMovable(page))) {
 			lock_page(page);
 			if (!PageMovable(page))
-				__ClearPageIsolated(page);
+				ClearPageIsolated(page);
 			unlock_page(page);
 		}
 		goto out;
@@ -1215,7 +1215,7 @@ out:
 			if (PageMovable(page))
 				putback_movable_page(page);
 			else
-				__ClearPageIsolated(page);
+				ClearPageIsolated(page);
 			unlock_page(page);
 			put_page(page);
 		}