memcg: simple migration handling

Currently, the "charge" is managed under page migration in the following
manner (assume page contents are migrated from oldpage to newpage):

 before
  - "newpage" is charged before migration.
 at success
  - "oldpage" is uncharged somewhere along the way (unmap,
    radix-tree replacement).
 at failure
  - "newpage" is uncharged.
  - "oldpage" is re-charged if necessary (*1)

But (*1) is not reliable: the re-charge must use GFP_ATOMIC and can
therefore fail, as the sketch below illustrates.
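
A simplified caller-side sketch of the old scheme and its weak point --
not the actual migrate.c code; move_page_contents() is a hypothetical
stand-in for the real unmap/copy/remap work:

static int old_migrate_one_page(struct mm_struct *mm,
				struct page *page, struct page *newpage)
{
	int rc;

	/* "newpage" is charged up front, before the move */
	mem_cgroup_prepare_migration(page, newpage);

	rc = move_page_contents(page, newpage);
	if (rc) {
		/* failure: back out newpage's charge ... */
		mem_cgroup_uncharge_page(newpage);
		/*
		 * ... and (*1) re-charge "oldpage".  Only GFP_ATOMIC is
		 * allowed in this context, so the charge itself can fail
		 * and leave "oldpage" unaccounted.
		 */
		mem_cgroup_charge_migrate_fixup(page, mm, GFP_ATOMIC);
	}
	return rc;
}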

This patch changes the behavior as follows, using the charge/commit/cancel
ops (see the caller-side sketch after this list):

 before
  - charge PAGE_SIZE (no target page yet).
 at success
  - commit the charge against "newpage".
 at failure
  - commit the charge against "oldpage".
    (the PCG_USED bit effectively avoids double-counting)
  - if "oldpage" is obsolete, cancel the charge of PAGE_SIZE.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author:     KAMEZAWA Hiroyuki
Date:       2009-01-07 18:07:50 -08:00
Committed:  Linus Torvalds
Commit:     01b1ae63c2 (parent bced0520fe)
3 changed files with 73 additions and 96 deletions


--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -29,8 +29,6 @@ struct mm_struct;
 
 extern int mem_cgroup_newpage_charge(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask);
-extern int mem_cgroup_charge_migrate_fixup(struct page *page,
-				struct mm_struct *mm, gfp_t gfp_mask);
 /* for swap handling */
 extern int mem_cgroup_try_charge(struct mm_struct *mm,
 		gfp_t gfp_mask, struct mem_cgroup **ptr);
@@ -60,8 +58,9 @@ extern struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 	((cgroup) == mem_cgroup_from_task((mm)->owner))
 
 extern int
-mem_cgroup_prepare_migration(struct page *page, struct page *newpage);
-extern void mem_cgroup_end_migration(struct page *page);
+mem_cgroup_prepare_migration(struct page *page, struct mem_cgroup **ptr);
+extern void mem_cgroup_end_migration(struct mem_cgroup *mem,
+	struct page *oldpage, struct page *newpage);
 
 /*
  * For memory reclaim.
@@ -94,12 +93,6 @@ static inline int mem_cgroup_cache_charge(struct page *page,
 	return 0;
 }
 
-static inline int mem_cgroup_charge_migrate_fixup(struct page *page,
-					struct mm_struct *mm, gfp_t gfp_mask)
-{
-	return 0;
-}
-
 static inline int mem_cgroup_try_charge(struct mm_struct *mm,
 			gfp_t gfp_mask, struct mem_cgroup **ptr)
 {
@@ -144,12 +137,14 @@ static inline int task_in_mem_cgroup(struct task_struct *task,
 }
 
 static inline int
-mem_cgroup_prepare_migration(struct page *page, struct page *newpage)
+mem_cgroup_prepare_migration(struct page *page, struct mem_cgroup **ptr)
 {
 	return 0;
 }
 
-static inline void mem_cgroup_end_migration(struct page *page)
+static inline void mem_cgroup_end_migration(struct mem_cgroup *mem,
+					struct page *oldpage,
+					struct page *newpage)
 {
 }
 
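
For reference, a minimal sketch of the decision the new
mem_cgroup_end_migration() has to make at commit time.  It assumes the
migration core clears oldpage->mapping only on success, so a non-NULL
mapping means the migration failed; __commit_charge() is a hypothetical
stand-in for the real commit operation, which tests PCG_USED and
cancels the pending charge instead of double-counting when the target
is already accounted.

static void end_migration_sketch(struct mem_cgroup *mem,
				 struct page *oldpage, struct page *newpage)
{
	struct page *target;

	if (!mem)			/* nothing charged at prepare time */
		return;

	if (oldpage->mapping)		/* failure: oldpage is still live */
		target = oldpage;
	else				/* success: newpage took over */
		target = newpage;

	/*
	 * Commit the PAGE_SIZE charged in mem_cgroup_prepare_migration()
	 * against "target".  If "target" already carries PCG_USED (e.g.
	 * an obsolete oldpage), the commit backs out and the pending
	 * charge is cancelled, so nothing is counted twice.
	 */
	__commit_charge(mem, target);
}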