mm: clean up and clarify lruvec lookup procedure
There is a per-memcg lruvec and a NUMA node lruvec. Which one is being used is somewhat confusing right now, and it's easy to make mistakes - especially when it comes to global reclaim.

How it works: when memory cgroups are enabled, we always use the root_mem_cgroup's per-node lruvecs. When memory cgroups are not compiled in or disabled at runtime, we use pgdat->lruvec. Document that in a comment.

Due to the way the reclaim code is generalized, all lookups use the mem_cgroup_lruvec() helper function, and nobody should have to find the right lruvec manually right now. But to avoid future mistakes, rename the pgdat->lruvec member to pgdat->__lruvec and delete the convenience wrapper that suggests it's a commonly accessed member.

While in this area, swap the mem_cgroup_lruvec() argument order. The name suggests a memcg operation, yet it takes a pgdat first and a memcg second. I have to double take every time I call this. Fix that.

Link: http://lkml.kernel.org/r/20191022144803.302233-3-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
commit 867e5e1de1
parent de3b01506e
committed by Linus Torvalds
@@ -233,7 +233,7 @@ void *workingset_eviction(struct page *page)
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
-	lruvec = mem_cgroup_lruvec(pgdat, memcg);
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	eviction = atomic_long_inc_return(&lruvec->inactive_age);
 	return pack_shadow(memcgid, pgdat, eviction, PageWorkingset(page));
 }
@@ -280,7 +280,7 @@ void workingset_refault(struct page *page, void *shadow)
 	memcg = mem_cgroup_from_id(memcgid);
 	if (!mem_cgroup_disabled() && !memcg)
 		goto out;
-	lruvec = mem_cgroup_lruvec(pgdat, memcg);
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	refault = atomic_long_read(&lruvec->inactive_age);
 	active_file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES);
 
@@ -345,7 +345,7 @@ void workingset_activation(struct page *page)
 	memcg = page_memcg_rcu(page);
 	if (!mem_cgroup_disabled() && !memcg)
 		goto out;
-	lruvec = mem_cgroup_lruvec(page_pgdat(page), memcg);
+	lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
 	atomic_long_inc(&lruvec->inactive_age);
 out:
 	rcu_read_unlock();
@@ -426,7 +426,7 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
 	struct lruvec *lruvec;
 	int i;
 
-	lruvec = mem_cgroup_lruvec(NODE_DATA(sc->nid), sc->memcg);
+	lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
 	for (pages = 0, i = 0; i < NR_LRU_LISTS; i++)
 		pages += lruvec_page_state_local(lruvec,
 						 NR_LRU_BASE + i);