FROMGIT: mm, page_poison: use static key more efficiently

Commit 11c9c7edae ("mm/page_poison.c: replace bool variable with static
key") changed page_poisoning_enabled() to a static key check.  However,
the function is not inlined, so each check still involves a function call
whose overhead is not eliminated when page poisoning is disabled.
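
For illustration, the pre-patch shape was roughly the following sketch (the
key name and exact condition are illustrative, not a verbatim quote of the
old mm/page_poison.c): the static branch sits inside an out-of-line
function, so every caller pays a call even when the branch is patched out:

    /* sketch: out-of-line check, as after commit 11c9c7edae */
    static DEFINE_STATIC_KEY_FALSE(want_page_poisoning);

    bool page_poisoning_enabled(void)
    {
            /* the static branch itself is cheap, but reaching it costs a call */
            return static_branch_unlikely(&want_page_poisoning) ||
                    (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
                     debug_pagealloc_enabled());
    }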

Analogously to how debug_pagealloc is handled, this patch converts
page_poisoning_enabled() back to a boolean check, and introduces
page_poisoning_enabled_static() for fast paths.  Both functions are
inlined.
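
As a rough sketch of the result (assuming the helpers sit with the other mm
debug helpers in include/linux/mm.h; the _page_poisoning_enabled key name
matches the diff below, while the _page_poisoning_enabled_early name is an
assumption of this sketch):

    extern bool _page_poisoning_enabled_early;
    DECLARE_STATIC_KEY_FALSE(_page_poisoning_enabled);

    /* usable early, before the static key has been switched */
    static inline bool page_poisoning_enabled(void)
    {
            return _page_poisoning_enabled_early;
    }

    /* for fast paths, once init_mem_debugging_and_hardening() has run */
    static inline bool page_poisoning_enabled_static(void)
    {
            return static_branch_unlikely(&_page_poisoning_enabled);
    }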

The function kernel_poison_pages() is also called unconditionally and does
the static key check inside.  Remove the check from there and move it to
the callers.  Also split the function into kernel_poison_pages() and
kernel_unpoison_pages() instead of using a confusing bool parameter.
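
Sketched, the callers now go through inline wrappers that do the static key
check and only then call the out-of-line workers (the double-underscore
worker names are an assumption of this sketch):

    extern void __kernel_poison_pages(struct page *page, int numpages);
    extern void __kernel_unpoison_pages(struct page *page, int numpages);

    static inline void kernel_poison_pages(struct page *page, int numpages)
    {
            if (page_poisoning_enabled_static())
                    __kernel_poison_pages(page, numpages);
    }

    static inline void kernel_unpoison_pages(struct page *page, int numpages)
    {
            if (page_poisoning_enabled_static())
                    __kernel_unpoison_pages(page, numpages);
    }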

Also optimize the check that enables page poisoning instead of
debug_pagealloc on architectures without proper debug_pagealloc support.
Move the check to init_mem_debugging_and_hardening() so that a single
static key is enabled there, instead of having two static branches in
page_poisoning_enabled_static().
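
For comparison, keeping that arch fallback in the fast path would have meant
a helper along these hypothetical lines, with two static branches evaluated
on every call (this is what the move avoids, not code the patch adds):

    static inline bool page_poisoning_enabled_static(void)
    {
            return static_branch_unlikely(&_page_poisoning_enabled) ||
                    (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
                     debug_pagealloc_enabled_static());
    }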

Link: https://lkml.kernel.org/r/20201113104033.22907-3-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <labbott@kernel.org>
Cc: Mateusz Nosek <mateusznosek0@gmail.com>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

(cherry picked from commit 8db26a3d47354ce7271a8cab03cd65b9d3d610b9
 https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git akpm)
Bug: 182930667
Signed-off-by: Alexander Potapenko <glider@google.com>
Change-Id: Ifc3fdf5cd58f3b8346bf81480df3836811e7458b
---
 4 files changed, 52 insertions(+), 54 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -813,6 +813,17 @@ void init_mem_debugging_and_hardening(void)
                         static_branch_enable(&init_on_free);
         }
 
+#ifdef CONFIG_PAGE_POISONING
+        /*
+         * Page poisoning is debug page alloc for some arches. If
+         * either of those options are enabled, enable poisoning.
+         */
+        if (page_poisoning_enabled() ||
+             (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
+              debug_pagealloc_enabled()))
+                static_branch_enable(&_page_poisoning_enabled);
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
         if (!debug_pagealloc_enabled())
                 return;
@@ -1302,7 +1313,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
         if (want_init_on_free())
                 kernel_init_free_pages(page, 1 << order);
 
-        kernel_poison_pages(page, 1 << order, 0);
+        kernel_poison_pages(page, 1 << order);
+
         /*
          * With hardware tag-based KASAN, memory tags must be set before the
          * page becomes unavailable via debug_pagealloc or arch_free_page.
@@ -2253,7 +2265,7 @@ static inline int check_new_page(struct page *page)
 static inline bool free_pages_prezeroed(void)
 {
         return (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
-                page_poisoning_enabled()) || want_init_on_free();
+                page_poisoning_enabled_static()) || want_init_on_free();
 }
 
 #ifdef CONFIG_DEBUG_VM
@@ -2316,7 +2328,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
         if (debug_pagealloc_enabled_static())
                 kernel_map_pages(page, 1 << order, 1);
         kasan_alloc_pages(page, order);
-        kernel_poison_pages(page, 1 << order, 1);
+        kernel_unpoison_pages(page, 1 << order);
         set_page_owner(page, order, gfp_flags);
 }
 