Merge branch 'akpm' (patches from Andrew)
Merge updates from Andrew Morton:
 "Am experimenting with splitting MM up into identifiable subsystems
  perhaps with a view to gitifying it in complex ways. Also with more
  verbose "incoming" emails.

  Most of MM is here and a few other trees.

  Subsystems affected by this patch series:
    - hotfixes
    - iommu
    - scripts
    - arch/sh
    - ocfs2
    - mm:slab-generic
    - mm:slub
    - mm:kmemleak
    - mm:kasan
    - mm:cleanups
    - mm:debug
    - mm:pagecache
    - mm:swap
    - mm:memcg
    - mm:gup
    - mm:pagemap
    - mm:infrastructure
    - mm:vmalloc
    - mm:initialization
    - mm:pagealloc
    - mm:vmscan
    - mm:tools
    - mm:proc
    - mm:ras
    - mm:oom-kill

  hotfixes:
      mm: vmscan: scan anonymous pages on file refaults
      mm/nvdimm: add is_ioremap_addr and use that to check ioremap address
      mm/memcontrol: fix wrong statistics in memory.stat
      mm/z3fold.c: lock z3fold page before __SetPageMovable()
      nilfs2: do not use unexported cpu_to_le32()/le32_to_cpu() in uapi header
      MAINTAINERS: nilfs2: update email address

  iommu:
      include/linux/dmar.h: replace single-char identifiers in macros

  scripts:
      scripts/decode_stacktrace: match basepath using shell prefix operator, not regex
      scripts/decode_stacktrace: look for modules with .ko.debug extension
      scripts/spelling.txt: drop "sepc" from the misspelling list
      scripts/spelling.txt: add spelling fix for prohibited
      scripts/decode_stacktrace: Accept dash/underscore in modules
      scripts/spelling.txt: add more spellings to spelling.txt

  arch/sh:
      arch/sh/configs/sdk7786_defconfig: remove CONFIG_LOGFS
      sh: config: remove left-over BACKLIGHT_LCD_SUPPORT
      sh: prevent warnings when using iounmap

  ocfs2:
      fs: ocfs: fix spelling mistake "hearbeating" -> "heartbeat"
      ocfs2/dlm: use struct_size() helper
      ocfs2: add last unlock times in locking_state
      ocfs2: add locking filter debugfs file
      ocfs2: add first lock wait time in locking_state
      ocfs: no need to check return value of debugfs_create functions
      fs/ocfs2/dlmglue.c: unneeded variable: "status"
      ocfs2: use kmemdup rather than duplicating its implementation

  mm:slab-generic:
      Patch series "mm/slab: Improved sanity checking":
        mm/slab: validate cache membership under freelist hardening
        mm/slab: sanity-check page type when looking up cache
        lkdtm/heap: add tests for freelist hardening

  mm:slub:
      mm/slub.c: avoid double string traverse in kmem_cache_flags()
      slub: don't panic for memcg kmem cache creation failure

  mm:kmemleak:
      mm/kmemleak.c: fix check for softirq context
      mm/kmemleak.c: change error at _write when kmemleak is disabled
      docs: kmemleak: add more documentation details

  mm:kasan:
      mm/kasan: print frame description for stack bugs
      Patch series "Bitops instrumentation for KASAN", v5:
        lib/test_kasan: add bitops tests
        x86: use static_cpu_has in uaccess region to avoid instrumentation
        asm-generic, x86: add bitops instrumentation for KASAN
      Patch series "mm/kasan: Add object validation in ksize()", v3:
        mm/kasan: introduce __kasan_check_{read,write}
        mm/kasan: change kasan_check_{read,write} to return boolean
        lib/test_kasan: Add test for double-kzfree detection
        mm/slab: refactor common ksize KASAN logic into slab_common.c
        mm/kasan: add object validation in ksize()

  mm:cleanups:
      include/linux/pfn_t.h: remove pfn_t_to_virt()
      Patch series "remove ARCH_SELECT_MEMORY_MODEL where it has no effect":
        arm: remove ARCH_SELECT_MEMORY_MODEL
        s390: remove ARCH_SELECT_MEMORY_MODEL
        sparc: remove ARCH_SELECT_MEMORY_MODEL
      mm/gup.c: make follow_page_mask() static
      mm/memory.c: trivial clean up in insert_page()
      mm: make !CONFIG_HUGE_PAGE wrappers into static inlines
      include/linux/mm_types.h: ifdef struct vm_area_struct::swap_readahead_info
      mm: remove the account_page_dirtied export
      mm/page_isolation.c: change the prototype of undo_isolate_page_range()
      include/linux/vmpressure.h: use spinlock_t instead of struct spinlock
      mm: remove the exporting of totalram_pages
      include/linux/pagemap.h: document trylock_page() return value

  mm:debug:
      mm/failslab.c: by default, do not fail allocations with direct reclaim only
      Patch series "debug_pagealloc improvements":
        mm, debug_pagelloc: use static keys to enable debugging
        mm, page_alloc: more extensive free page checking with debug_pagealloc
        mm, debug_pagealloc: use a page type instead of page_ext flag

  mm:pagecache:
      Patch series "fix filler_t callback type mismatches", v2:
        mm/filemap.c: fix an overly long line in read_cache_page
        mm/filemap: don't cast ->readpage to filler_t for do_read_cache_page
        jffs2: pass the correct prototype to read_cache_page
        9p: pass the correct prototype to read_cache_page
      mm/filemap.c: correct the comment about VM_FAULT_RETRY

  mm:swap:
      mm, swap: fix race between swapoff and some swap operations
      mm/swap_state.c: simplify total_swapcache_pages() with get_swap_device()
      mm, swap: use rbtree for swap_extent
      mm/mincore.c: fix race between swapoff and mincore

  mm:memcg:
      memcg, oom: no oom-kill for __GFP_RETRY_MAYFAIL
      memcg, fsnotify: no oom-kill for remote memcg charging
      mm, memcg: introduce memory.events.local
      mm: memcontrol: dump memory.stat during cgroup OOM
      Patch series "mm: reparent slab memory on cgroup removal", v7:
        mm: memcg/slab: postpone kmem_cache memcg pointer initialization to memcg_link_cache()
        mm: memcg/slab: rename slab delayed deactivation functions and fields
        mm: memcg/slab: generalize postponed non-root kmem_cache deactivation
        mm: memcg/slab: introduce __memcg_kmem_uncharge_memcg()
        mm: memcg/slab: unify SLAB and SLUB page accounting
        mm: memcg/slab: don't check the dying flag on kmem_cache creation
        mm: memcg/slab: synchronize access to kmem_cache dying flag using a spinlock
        mm: memcg/slab: rework non-root kmem_cache lifecycle management
        mm: memcg/slab: stop setting page->mem_cgroup pointer for slab pages
        mm: memcg/slab: reparent memcg kmem_caches on cgroup removal
        mm, memcg: add a memcg_slabinfo debugfs file

  mm:gup:
      Patch series "switch the remaining architectures to use generic GUP", v4:
        mm: use untagged_addr() for get_user_pages_fast addresses
        mm: simplify gup_fast_permitted
        mm: lift the x86_32 PAE version of gup_get_pte to common code
        MIPS: use the generic get_user_pages_fast code
        sh: add the missing pud_page definition
        sh: use the generic get_user_pages_fast code
        sparc64: add the missing pgd_page definition
        sparc64: define untagged_addr()
        sparc64: use the generic get_user_pages_fast code
        mm: rename CONFIG_HAVE_GENERIC_GUP to CONFIG_HAVE_FAST_GUP
        mm: reorder code blocks in gup.c
        mm: consolidate the get_user_pages* implementations
        mm: validate get_user_pages_fast flags
        mm: move the powerpc hugepd code to mm/gup.c
        mm: switch gup_hugepte to use try_get_compound_head
        mm: mark the page referenced in gup_hugepte
      mm/gup: speed up check_and_migrate_cma_pages() on huge page
      mm/gup.c: remove some BUG_ONs from get_gate_page()
      mm/gup.c: mark undo_dev_pagemap as __maybe_unused

  mm:pagemap:
      asm-generic, x86: introduce generic pte_{alloc,free}_one[_kernel]
      alpha: switch to generic version of pte allocation
      arm: switch to generic version of pte allocation
      arm64: switch to generic version of pte allocation
      csky: switch to generic version of pte allocation
      m68k: sun3: switch to generic version of pte allocation
      mips: switch to generic version of pte allocation
      nds32: switch to generic version of pte allocation
      nios2: switch to generic version of pte allocation
      parisc: switch to generic version of pte allocation
      riscv: switch to generic version of pte allocation
      um: switch to generic version of pte allocation
      unicore32: switch to generic version of pte allocation
      mm/pgtable: drop pgtable_t variable from pte_fn_t functions
      mm/memory.c: fail when offset == num in first check of __vm_map_pages()

  mm:infrastructure:
      mm/mmu_notifier: use hlist_add_head_rcu()

  mm:vmalloc:
      Patch series "Some cleanups for the KVA/vmalloc", v5:
        mm/vmalloc.c: remove "node" argument
        mm/vmalloc.c: preload a CPU with one object for split purpose
        mm/vmalloc.c: get rid of one single unlink_va() when merge
        mm/vmalloc.c: switch to WARN_ON() and move it under unlink_va()
        mm/vmalloc.c: spelling> s/informaion/information/

  mm:initialization:
      mm/large system hash: use vmalloc for size > MAX_ORDER when !hashdist
      mm/large system hash: clear hashdist when only one node with memory is booted

  mm:pagealloc:
      arm64: move jump_label_init() before parse_early_param()
      Patch series "add init_on_alloc/init_on_free boot options", v10:
        mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options
        mm: init: report memory auto-initialization features at boot time

  mm:vmscan:
      mm: vmscan: remove double slab pressure by inc'ing sc->nr_scanned
      mm: vmscan: correct some vmscan counters for THP swapout

  mm:tools:
      tools/vm/slabinfo: order command line options
      tools/vm/slabinfo: add partial slab listing to -X
      tools/vm/slabinfo: add option to sort by partial slabs
      tools/vm/slabinfo: add sorting info to help menu

  mm:proc:
      proc: use down_read_killable mmap_sem for /proc/pid/maps
      proc: use down_read_killable mmap_sem for /proc/pid/smaps_rollup
      proc: use down_read_killable mmap_sem for /proc/pid/pagemap
      proc: use down_read_killable mmap_sem for /proc/pid/clear_refs
      proc: use down_read_killable mmap_sem for /proc/pid/map_files
      mm: use down_read_killable for locking mmap_sem in access_remote_vm
      mm: smaps: split PSS into components
      mm: vmalloc: show number of vmalloc pages in /proc/meminfo

  mm:ras:
      mm/memory-failure.c: clarify error message

  mm:oom-kill:
      mm: memcontrol: use CSS_TASK_ITER_PROCS at mem_cgroup_scan_tasks()
      mm, oom: refactor dump_tasks for memcg OOMs
      mm, oom: remove redundant task_in_mem_cgroup() check
      oom: decouple mems_allowed from oom_unkillable_task
      mm/oom_kill.c: remove redundant OOM score normalization in select_bad_process()"

* akpm: (147 commits)
  mm/oom_kill.c: remove redundant OOM score normalization in select_bad_process()
  oom: decouple mems_allowed from oom_unkillable_task
  mm, oom: remove redundant task_in_mem_cgroup() check
  mm, oom: refactor dump_tasks for memcg OOMs
  mm: memcontrol: use CSS_TASK_ITER_PROCS at mem_cgroup_scan_tasks()
  mm/memory-failure.c: clarify error message
  mm: vmalloc: show number of vmalloc pages in /proc/meminfo
  mm: smaps: split PSS into components
  mm: use down_read_killable for locking mmap_sem in access_remote_vm
  proc: use down_read_killable mmap_sem for /proc/pid/map_files
  proc: use down_read_killable mmap_sem for /proc/pid/clear_refs
  proc: use down_read_killable mmap_sem for /proc/pid/pagemap
  proc: use down_read_killable mmap_sem for /proc/pid/smaps_rollup
  proc: use down_read_killable mmap_sem for /proc/pid/maps
  tools/vm/slabinfo: add sorting info to help menu
  tools/vm/slabinfo: add option to sort by partial slabs
  tools/vm/slabinfo: add partial slab listing to -X
  tools/vm/slabinfo: order command line options
  mm: vmscan: correct some vmscan counters for THP swapout
  mm: vmscan: remove double slab pressure by inc'ing sc->nr_scanned
  ...
--- a/include/linux/dmar.h
+++ b/include/linux/dmar.h
@@ -92,12 +92,14 @@ static inline bool dmar_rcu_check(void)
 
 #define	dmar_rcu_dereference(p)	rcu_dereference_check((p), dmar_rcu_check())
 
-#define	for_each_dev_scope(a, c, p, d)	\
-	for ((p) = 0; ((d) = (p) < (c) ? dmar_rcu_dereference((a)[(p)].dev) : \
-			NULL, (p) < (c)); (p)++)
+#define for_each_dev_scope(devs, cnt, i, tmp)			\
+	for ((i) = 0; ((tmp) = (i) < (cnt) ?			\
+	    dmar_rcu_dereference((devs)[(i)].dev) : NULL, (i) < (cnt)); \
+	    (i)++)
 
-#define	for_each_active_dev_scope(a, c, p, d)	\
-	for_each_dev_scope((a), (c), (p), (d))	if (!(d)) { continue; } else
+#define for_each_active_dev_scope(devs, cnt, i, tmp)		\
+	for_each_dev_scope((devs), (cnt), (i), (tmp))		\
+		if (!(tmp)) { continue; } else
 
 extern int dmar_table_init(void);
 extern int dmar_dev_scope_init(void);
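The renamed macro parameters make call sites read naturally. A minimal
sketch of a typical lookup loop (the atsru structure and its field names
are assumed from drivers/iommu, not part of this hunk):

	struct device *tmp;
	int i;

	/* caller holds dmar_global_lock or an RCU read-side section */
	for_each_active_dev_scope(atsru->devices, atsru->devices_cnt, i, tmp)
		if (tmp == dev)
			return atsru;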
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -16,29 +16,11 @@ struct user_struct;
 struct mmu_gather;
 
 #ifndef is_hugepd
-/*
- * Some architectures requires a hugepage directory format that is
- * required to support multiple hugepage sizes. For example
- * a4fe3ce76 "powerpc/mm: Allow more flexible layouts for hugepage pagetables"
- * introduced the same on powerpc. This allows for a more flexible hugepage
- * pagetable layout.
- */
 typedef struct { unsigned long pd; } hugepd_t;
 #define is_hugepd(hugepd) (0)
 #define __hugepd(x) ((hugepd_t) { (x) })
-
-static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-			      unsigned pdshift, unsigned long end,
-			      int write, struct page **pages, int *nr)
-{
-	return 0;
-}
-#else
-extern int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-		       unsigned pdshift, unsigned long end,
-		       int write, struct page **pages, int *nr);
 #endif
 
 #ifdef CONFIG_HUGETLB_PAGE
 
 #include <linux/mempolicy.h>
@@ -608,22 +590,92 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
-#define alloc_huge_page(v, a, r) NULL
-#define alloc_huge_page_node(h, nid) NULL
-#define alloc_huge_page_nodemask(h, preferred_nid, nmask) NULL
-#define alloc_huge_page_vma(h, vma, address) NULL
-#define alloc_bootmem_huge_page(h) NULL
-#define hstate_file(f) NULL
-#define hstate_sizelog(s) NULL
-#define hstate_vma(v) NULL
-#define hstate_inode(i) NULL
-#define page_hstate(page) NULL
-#define huge_page_size(h) PAGE_SIZE
-#define huge_page_mask(h) PAGE_MASK
-#define vma_kernel_pagesize(v) PAGE_SIZE
-#define vma_mmu_pagesize(v) PAGE_SIZE
-#define huge_page_order(h) 0
-#define huge_page_shift(h) PAGE_SHIFT
+
+static inline struct page *alloc_huge_page(struct vm_area_struct *vma,
+					   unsigned long addr,
+					   int avoid_reserve)
+{
+	return NULL;
+}
+
+static inline struct page *alloc_huge_page_node(struct hstate *h, int nid)
+{
+	return NULL;
+}
+
+static inline struct page *
+alloc_huge_page_nodemask(struct hstate *h, int preferred_nid, nodemask_t *nmask)
+{
+	return NULL;
+}
+
+static inline struct page *alloc_huge_page_vma(struct hstate *h,
+					       struct vm_area_struct *vma,
+					       unsigned long address)
+{
+	return NULL;
+}
+
+static inline int __alloc_bootmem_huge_page(struct hstate *h)
+{
+	return 0;
+}
+
+static inline struct hstate *hstate_file(struct file *f)
+{
+	return NULL;
+}
+
+static inline struct hstate *hstate_sizelog(int page_size_log)
+{
+	return NULL;
+}
+
+static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
+{
+	return NULL;
+}
+
+static inline struct hstate *hstate_inode(struct inode *i)
+{
+	return NULL;
+}
+
+static inline struct hstate *page_hstate(struct page *page)
+{
+	return NULL;
+}
+
+static inline unsigned long huge_page_size(struct hstate *h)
+{
+	return PAGE_SIZE;
+}
+
+static inline unsigned long huge_page_mask(struct hstate *h)
+{
+	return PAGE_MASK;
+}
+
+static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
+{
+	return PAGE_SIZE;
+}
+
+static inline unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
+{
+	return PAGE_SIZE;
+}
+
+static inline unsigned int huge_page_order(struct hstate *h)
+{
+	return 0;
+}
+
+static inline unsigned int huge_page_shift(struct hstate *h)
+{
+	return PAGE_SHIFT;
+}
+
 static inline bool hstate_is_gigantic(struct hstate *h)
 {
 	return false;
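A note on the conversion: unlike the old object-like macros, the static
inline stubs type-check their arguments even when CONFIG_HUGETLB_PAGE=n.
A hedged illustration (not from this patch):

	/* compiled silently with "#define hstate_vma(v) NULL", but is
	 * now rejected: hstate_vma() wants a struct vm_area_struct * */
	struct hstate *h = hstate_vma(file);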
--- a/include/linux/kasan-checks.h
+++ b/include/linux/kasan-checks.h
@@ -2,14 +2,43 @@
 #ifndef _LINUX_KASAN_CHECKS_H
 #define _LINUX_KASAN_CHECKS_H
 
-#if defined(__SANITIZE_ADDRESS__) || defined(__KASAN_INTERNAL)
-void kasan_check_read(const volatile void *p, unsigned int size);
-void kasan_check_write(const volatile void *p, unsigned int size);
+#include <linux/types.h>
+
+/*
+ * __kasan_check_*: Always available when KASAN is enabled. This may be used
+ * even in compilation units that selectively disable KASAN, but must use KASAN
+ * to validate access to an address. Never use these in header files!
+ */
+#ifdef CONFIG_KASAN
+bool __kasan_check_read(const volatile void *p, unsigned int size);
+bool __kasan_check_write(const volatile void *p, unsigned int size);
 #else
-static inline void kasan_check_read(const volatile void *p, unsigned int size)
-{ }
-static inline void kasan_check_write(const volatile void *p, unsigned int size)
-{ }
+static inline bool __kasan_check_read(const volatile void *p, unsigned int size)
+{
+	return true;
+}
+static inline bool __kasan_check_write(const volatile void *p, unsigned int size)
+{
+	return true;
+}
 #endif
 
+/*
+ * kasan_check_*: Only available when the particular compilation unit has KASAN
+ * instrumentation enabled. May be used in header files.
+ */
+#ifdef __SANITIZE_ADDRESS__
+#define kasan_check_read __kasan_check_read
+#define kasan_check_write __kasan_check_write
+#else
+static inline bool kasan_check_read(const volatile void *p, unsigned int size)
+{
+	return true;
+}
+static inline bool kasan_check_write(const volatile void *p, unsigned int size)
+{
+	return true;
+}
+#endif
+
 #endif
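A minimal sketch of the intended split (file and function names here are
hypothetical): code whose compilation unit opts out of KASAN
instrumentation can still validate an access via the always-available
variant, while instrumented code and headers keep using kasan_check_*():

	/* foo.c, built with "KASAN_SANITIZE_foo.o := n" (hypothetical) */
	#include <linux/kasan-checks.h>
	#include <linux/string.h>

	static void copy_checked(void *dst, const void *src, unsigned int len)
	{
		/* explicit KASAN validation despite this unit being
		 * uninstrumented; false means the bug was already reported */
		if (!__kasan_check_read(src, len))
			return;
		memcpy(dst, src, len);
	}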
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -76,8 +76,11 @@ void kasan_free_shadow(const struct vm_struct *vm);
 int kasan_add_zero_shadow(void *start, unsigned long size);
 void kasan_remove_zero_shadow(void *start, unsigned long size);
 
-size_t ksize(const void *);
-static inline void kasan_unpoison_slab(const void *ptr) { ksize(ptr); }
+size_t __ksize(const void *);
+static inline void kasan_unpoison_slab(const void *ptr)
+{
+	kasan_unpoison_shadow(ptr, __ksize(ptr));
+}
 size_t kasan_metadata_size(struct kmem_cache *cache);
 
 bool kasan_save_enable_multi_shot(void);
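The resulting contract, as a hedged sketch: ksize() validates the pointer
(through the new __kasan_check_*() hooks) and unpoisons the whole
allocation so callers may legally use the slack, while __ksize() only
reports the size:

	char *p = kmalloc(13, GFP_KERNEL);	/* usable size may be 16 */
	size_t sz;

	sz = ksize(p);		/* KASAN-checks p, then unpoisons p[0..sz) */
	sz = __ksize(p);	/* raw size lookup; no KASAN side effects */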
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -233,8 +233,9 @@ struct mem_cgroup {
 	/* OOM-Killer disable */
 	int		oom_kill_disable;
 
-	/* memory.events */
+	/* memory.events and memory.events.local */
 	struct cgroup_file events_file;
+	struct cgroup_file events_local_file;
 
 	/* handle for "memory.swap.events" */
 	struct cgroup_file swap_events_file;
@@ -281,6 +282,7 @@ struct mem_cgroup {
 
 	/* memory.events */
 	atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS];
+	atomic_long_t memory_events_local[MEMCG_NR_MEMORY_EVENTS];
 
 	unsigned long		socket_pressure;
@@ -392,7 +394,6 @@ out:
 
 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
 
-bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
@@ -747,6 +748,9 @@ static inline void count_memcg_event_mm(struct mm_struct *mm,
 static inline void memcg_memory_event(struct mem_cgroup *memcg,
 				      enum memcg_memory_event event)
 {
+	atomic_long_inc(&memcg->memory_events_local[event]);
+	cgroup_file_notify(&memcg->events_local_file);
+
 	do {
 		atomic_long_inc(&memcg->memory_events[event]);
 		cgroup_file_notify(&memcg->events_file);
@@ -870,12 +874,6 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 	return true;
 }
 
-static inline bool task_in_mem_cgroup(struct task_struct *task,
-				      const struct mem_cgroup *memcg)
-{
-	return true;
-}
-
 static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
 {
 	return NULL;
@@ -1273,6 +1271,8 @@ int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
 void __memcg_kmem_uncharge(struct page *page, int order);
 int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
 			      struct mem_cgroup *memcg);
+void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
+				 unsigned int nr_pages);
 
 extern struct static_key_false memcg_kmem_enabled_key;
 extern struct workqueue_struct *memcg_kmem_cache_wq;
@@ -1314,6 +1314,14 @@ static inline int memcg_kmem_charge_memcg(struct page *page, gfp_t gfp,
 		return __memcg_kmem_charge_memcg(page, gfp, order, memcg);
 	return 0;
 }
 
+static inline void memcg_kmem_uncharge_memcg(struct page *page, int order,
+					     struct mem_cgroup *memcg)
+{
+	if (memcg_kmem_enabled())
+		__memcg_kmem_uncharge_memcg(memcg, 1 << order);
+}
+
 /*
  * helper for accessing a memcg's index. It will be used as an index in the
  * child cache array in kmem_cache, and also to derive its name. This function
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -633,6 +633,11 @@ static inline bool is_vmalloc_addr(const void *x)
 	return false;
 #endif
 }
 
+#ifndef is_ioremap_addr
+#define is_ioremap_addr(x) is_vmalloc_addr(x)
+#endif
+
 #ifdef CONFIG_MMU
 extern int is_vmalloc_or_module_addr(const void *x);
 #else
@@ -2681,8 +2686,7 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
 	return 0;
 }
 
-typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
-			void *data);
+typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);
 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
 
@@ -2696,11 +2700,42 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
 #endif
 
-extern bool _debug_pagealloc_enabled;
+#ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON
+DECLARE_STATIC_KEY_TRUE(init_on_alloc);
+#else
+DECLARE_STATIC_KEY_FALSE(init_on_alloc);
+#endif
+static inline bool want_init_on_alloc(gfp_t flags)
+{
+	if (static_branch_unlikely(&init_on_alloc) &&
+	    !page_poisoning_enabled())
+		return true;
+	return flags & __GFP_ZERO;
+}
+
+#ifdef CONFIG_INIT_ON_FREE_DEFAULT_ON
+DECLARE_STATIC_KEY_TRUE(init_on_free);
+#else
+DECLARE_STATIC_KEY_FALSE(init_on_free);
+#endif
+static inline bool want_init_on_free(void)
+{
+	return static_branch_unlikely(&init_on_free) &&
+	       !page_poisoning_enabled();
+}
+
+#ifdef CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT
+DECLARE_STATIC_KEY_TRUE(_debug_pagealloc_enabled);
+#else
+DECLARE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);
+#endif
 
 static inline bool debug_pagealloc_enabled(void)
 {
-	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled;
+	if (!IS_ENABLED(CONFIG_DEBUG_PAGEALLOC))
+		return false;
+
+	return static_branch_unlikely(&_debug_pagealloc_enabled);
 }
 
 #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
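Both predicates are static-branch gated, so the default-off configuration
costs only a patched-out jump. A hedged sketch of how an allocation path
consults them (kernel_init_free_pages() is the helper this series adds to
mm/page_alloc.c):

	if (want_init_on_alloc(gfp_flags))
		kernel_init_free_pages(page, 1 << order);	/* zero at alloc */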
@@ -2850,11 +2885,9 @@ extern long copy_huge_page_from_user(struct page *dst_page,
 				bool allow_pagefault);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
 
-extern struct page_ext_operations debug_guardpage_ops;
-
 #ifdef CONFIG_DEBUG_PAGEALLOC
 extern unsigned int _debug_guardpage_minorder;
-extern bool _debug_guardpage_enabled;
+DECLARE_STATIC_KEY_FALSE(_debug_guardpage_enabled);
 
 static inline unsigned int debug_guardpage_minorder(void)
 {
@@ -2863,21 +2896,15 @@ static inline unsigned int debug_guardpage_minorder(void)
 
 static inline bool debug_guardpage_enabled(void)
 {
-	return _debug_guardpage_enabled;
+	return static_branch_unlikely(&_debug_guardpage_enabled);
 }
 
 static inline bool page_is_guard(struct page *page)
 {
-	struct page_ext *page_ext;
-
-	if (!debug_guardpage_enabled())
-		return false;
-
-	page_ext = lookup_page_ext(page);
-	if (unlikely(!page_ext))
-		return false;
-
-	return test_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags);
+	return PageGuard(page);
 }
 #else
 static inline unsigned int debug_guardpage_minorder(void) { return 0; }
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -329,7 +329,9 @@ struct vm_area_struct {
 	struct file * vm_file;		/* File we map to (can be NULL). */
 	void * vm_private_data;		/* was vm_pte (shared mem) */
 
+#ifdef CONFIG_SWAP
 	atomic_long_t swap_readahead_info;
+#endif
 #ifndef CONFIG_MMU
 	struct vm_region *vm_region;	/* NOMMU mapping region */
 #endif
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -108,7 +108,6 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
 bool __oom_reap_task_mm(struct mm_struct *mm);
 
 extern unsigned long oom_badness(struct task_struct *p,
-		struct mem_cgroup *memcg, const nodemask_t *nodemask,
 		unsigned long totalpages);
 
 extern bool out_of_memory(struct oom_control *oc);
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -703,6 +703,7 @@ PAGEFLAG_FALSE(DoubleMap)
 #define PG_offline	0x00000100
 #define PG_kmemcg	0x00000200
 #define PG_table	0x00000400
+#define PG_guard	0x00000800
 
 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
@@ -754,6 +755,11 @@ PAGE_TYPE_OPS(Kmemcg, kmemcg)
  */
 PAGE_TYPE_OPS(Table, table)
 
+/*
+ * Marks guardpages used with debug_pagealloc.
+ */
+PAGE_TYPE_OPS(Guard, guard)
+
 extern bool is_free_buddy_page(struct page *page);
 
 __PAGEFLAG(Isolated, isolated, PF_ANY);
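PAGE_TYPE_OPS(Guard, guard) expands to the usual page_type accessors,
which is what lets page_is_guard() in mm.h above collapse to a single
test. A sketch of the generated interface:

	bool guarded = PageGuard(page);	/* tests page->page_type for PG_guard */
	__SetPageGuard(page);		/* non-atomic: mark a guard page */
	__ClearPageGuard(page);		/* non-atomic: unmark it */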
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -50,7 +50,7 @@ start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
  * Changes MIGRATE_ISOLATE to MIGRATE_MOVABLE.
  * target range is [start_pfn, end_pfn)
  */
-int
+void
 undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			unsigned migratetype);
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -17,7 +17,6 @@ struct page_ext_operations {
 #ifdef CONFIG_PAGE_EXTENSION
 
 enum page_ext_flags {
-	PAGE_EXT_DEBUG_GUARD,
 	PAGE_EXT_OWNER,
 #if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
 	PAGE_EXT_YOUNG,
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -383,8 +383,7 @@ extern int read_cache_pages(struct address_space *mapping,
 static inline struct page *read_mapping_page(struct address_space *mapping,
 				pgoff_t index, void *data)
 {
-	filler_t *filler = (filler_t *)mapping->a_ops->readpage;
-	return read_cache_page(mapping, index, filler, data);
+	return read_cache_page(mapping, index, NULL, data);
 }
 
 /*
@@ -452,6 +451,9 @@ extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
 extern void unlock_page(struct page *page);
 
+/*
+ * Return true if the page was successfully locked
+ */
 static inline int trylock_page(struct page *page)
 {
 	page = compound_head(page);
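The newly documented return value is what the common opportunistic
locking pattern relies on; a hedged sketch (the can_sleep flag stands in
for caller-side context):

	if (!trylock_page(page)) {
		if (!can_sleep)
			return 0;	/* don't block this path */
		lock_page(page);	/* sleep until the lock is ours */
	}
	/* page is locked here */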
--- a/include/linux/pfn_t.h
+++ b/include/linux/pfn_t.h
@@ -66,13 +66,6 @@ static inline phys_addr_t pfn_t_to_phys(pfn_t pfn)
 	return PFN_PHYS(pfn_t_to_pfn(pfn));
 }
 
-static inline void *pfn_t_to_virt(pfn_t pfn)
-{
-	if (pfn_t_has_page(pfn) && !is_device_private_page(pfn_t_to_page(pfn)))
-		return __va(pfn_t_to_phys(pfn));
-	return NULL;
-}
-
 static inline pfn_t page_to_pfn_t(struct page *page)
 {
 	return pfn_to_pfn_t(page_to_pfn(page));
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -16,6 +16,7 @@
 #include <linux/overflow.h>
 #include <linux/types.h>
 #include <linux/workqueue.h>
+#include <linux/percpu-refcount.h>
 
 
 /*
@@ -115,6 +116,10 @@
 /* Objects are reclaimable */
 #define SLAB_RECLAIM_ACCOUNT	((slab_flags_t __force)0x00020000U)
 #define SLAB_TEMPORARY		SLAB_RECLAIM_ACCOUNT	/* Objects are short-lived */
+
+/* Slab deactivation flag */
+#define SLAB_DEACTIVATED	((slab_flags_t __force)0x10000000U)
+
 /*
  * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
  *
@@ -151,8 +156,7 @@ void kmem_cache_destroy(struct kmem_cache *);
 int kmem_cache_shrink(struct kmem_cache *);
 
 void memcg_create_kmem_cache(struct mem_cgroup *, struct kmem_cache *);
-void memcg_deactivate_kmem_caches(struct mem_cgroup *);
-void memcg_destroy_kmem_caches(struct mem_cgroup *);
+void memcg_deactivate_kmem_caches(struct mem_cgroup *, struct mem_cgroup *);
 
 /*
  * Please use this macro to create slab caches. Simply specify the
@@ -184,6 +188,7 @@ void * __must_check __krealloc(const void *, size_t, gfp_t);
 void * __must_check krealloc(const void *, size_t, gfp_t);
 void kfree(const void *);
 void kzfree(const void *);
+size_t __ksize(const void *);
 size_t ksize(const void *);
 
 #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
@@ -641,11 +646,12 @@ struct memcg_cache_params {
 			struct mem_cgroup *memcg;
 			struct list_head children_node;
 			struct list_head kmem_caches_node;
+			struct percpu_ref refcnt;
 
-			void (*deact_fn)(struct kmem_cache *);
+			void (*work_fn)(struct kmem_cache *);
 			union {
-				struct rcu_head deact_rcu_head;
-				struct work_struct deact_work;
+				struct rcu_head rcu_head;
+				struct work_struct work;
 			};
 		};
 	};
 };
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -148,7 +148,7 @@ struct zone;
  * We always assume that blocks are of size PAGE_SIZE.
  */
 struct swap_extent {
-	struct list_head list;
+	struct rb_node rb_node;
 	pgoff_t start_page;
 	pgoff_t nr_pages;
 	sector_t start_block;
@@ -175,8 +175,9 @@ enum {
 	SWP_PAGE_DISCARD = (1 << 10),	/* freed swap page-cluster discards */
 	SWP_STABLE_WRITES = (1 << 11),	/* no overwrite PG_writeback pages */
 	SWP_SYNCHRONOUS_IO = (1 << 12),	/* synchronous IO is efficient */
+	SWP_VALID	= (1 << 13),	/* swap is valid to be operated on? */
 					/* add others here before... */
-	SWP_SCANNING	= (1 << 13),	/* refcount in scan_swap_map */
+	SWP_SCANNING	= (1 << 14),	/* refcount in scan_swap_map */
 };
 
 #define SWAP_CLUSTER_MAX 32UL
@@ -247,8 +248,7 @@ struct swap_info_struct {
 	unsigned int cluster_next;	/* likely index for next allocation */
 	unsigned int cluster_nr;	/* countdown to next cluster search */
 	struct percpu_cluster __percpu *percpu_cluster; /* per cpu's swap location */
-	struct swap_extent *curr_swap_extent;
-	struct swap_extent first_swap_extent;
+	struct rb_root swap_extent_root;/* root of the swap extent rbtree */
 	struct block_device *bdev;	/* swap device or bdev of swap file */
 	struct file *swap_file;		/* seldom referenced */
 	unsigned int old_block_size;	/* seldom referenced */
@@ -460,7 +460,7 @@ extern unsigned int count_swap_pages(int, int);
 extern sector_t map_swap_page(struct page *, struct block_device **);
 extern sector_t swapdev_block(int, pgoff_t);
 extern int page_swapcount(struct page *);
-extern int __swap_count(struct swap_info_struct *si, swp_entry_t entry);
+extern int __swap_count(swp_entry_t entry);
 extern int __swp_swapcount(swp_entry_t entry);
 extern int swp_swapcount(swp_entry_t entry);
 extern struct swap_info_struct *page_swap_info(struct page *);
@@ -470,6 +470,12 @@ extern int try_to_free_swap(struct page *);
 struct backing_dev_info;
 extern int init_swap_address_space(unsigned int type, unsigned long nr_pages);
 extern void exit_swap_address_space(unsigned int type);
+extern struct swap_info_struct *get_swap_device(swp_entry_t entry);
+
+static inline void put_swap_device(struct swap_info_struct *si)
+{
+	rcu_read_unlock();
+}
 
 #else /* CONFIG_SWAP */
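get_swap_device()/put_swap_device() are how readers now fence against a
concurrent swapoff: the getter enters an RCU read-side section and checks
that the device is still valid, and swapoff waits out such readers before
tearing the device down. Usage sketch:

	struct swap_info_struct *si;

	si = get_swap_device(entry);	/* rcu_read_lock() + validity check */
	if (!si)
		return false;		/* raced with swapoff: entry is stale */
	/* ... safe to use si and the entry's swap cache here ... */
	put_swap_device(si);		/* rcu_read_unlock() */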
@@ -576,7 +582,7 @@ static inline int page_swapcount(struct page *page)
 	return 0;
 }
 
-static inline int __swap_count(struct swap_info_struct *si, swp_entry_t entry)
+static inline int __swap_count(swp_entry_t entry)
 {
 	return 0;
 }
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -72,10 +72,12 @@ extern void vm_unmap_aliases(void);
 
 #ifdef CONFIG_MMU
 extern void __init vmalloc_init(void);
+extern unsigned long vmalloc_nr_pages(void);
 #else
 static inline void vmalloc_init(void)
 {
 }
+static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
 extern void *vmalloc(unsigned long size);
--- a/include/linux/vmpressure.h
+++ b/include/linux/vmpressure.h
@@ -17,7 +17,7 @@ struct vmpressure {
 	unsigned long tree_scanned;
 	unsigned long tree_reclaimed;
 	/* The lock is used to keep the scanned/reclaimed above in sync. */
-	struct spinlock sr_lock;
+	spinlock_t sr_lock;
 
 	/* The list of vmpressure_event structs. */
 	struct list_head events;