ANDROID: cma: redirect page allocation to CMA

CMA pages are designed to be used as fallback for movable allocations
and cannot be used for non-movable allocations. If CMA pages are
utilized poorly, non-movable allocations may end up getting starved if
all regular movable pages are allocated and the only pages left are
CMA. Always using CMA pages first creates unacceptable performance
problems. As a midway alternative, use CMA pages for certain
userspace allocations. The userspace pages can be migrated or dropped
quickly, which gives decent CMA utilization.
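
As a rough, userspace-compilable illustration of that policy (a
sketch, not the kernel code), the snippet below models the decision:
only an allocation that is both movable and carries the new CMA hint
is eligible to be served from CMA first. prefer_cma() is a
hypothetical helper; the ___GFP_CMA value mirrors the hunk further
down, while ___GFP_MOVABLE uses the stock gfp.h value.

#include <stdbool.h>
#include <stdio.h>

#define ___GFP_MOVABLE  0x08u      /* stock gfp.h value */
#define ___GFP_CMA      0x800000u  /* new bit added by this patch (CONFIG_CMA=y) */

/* Hypothetical helper: may the allocator try the CMA freelists first? */
static bool prefer_cma(unsigned int gfp_mask)
{
        return (gfp_mask & ___GFP_MOVABLE) && (gfp_mask & ___GFP_CMA);
}

int main(void)
{
        unsigned int anon_fault = ___GFP_MOVABLE | ___GFP_CMA;  /* userspace page */
        unsigned int kernel_obj = 0;                            /* non-movable */

        printf("anon fault -> CMA first: %d\n", prefer_cma(anon_fault));
        printf("kernel obj -> CMA first: %d\n", prefer_cma(kernel_obj));
        return 0;
}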

Additionally, add fallbacks for failed CMA allocations in rmqueue()
and __rmqueue_pcplist() (the latter addition driven by a report from
the kernel test robot); these fallbacks were handled differently in
the original version of the patch, as the rmqueue() call chain has
since changed.
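
The shape of those fallbacks, sketched as standalone C:
try_cma_freelist() and try_normal_freelists() below are stand-in
names for the CMA and regular freelist paths (the real call sites
differ), and the property being illustrated is that a failed CMA
attempt never fails the allocation outright -- it falls through to
the normal path.

#include <stdio.h>

struct page;  /* opaque stand-in */

/* Stubs standing in for the real CMA and regular freelist paths. */
static struct page *try_cma_freelist(unsigned int order)     { (void)order; return NULL; }
static struct page *try_normal_freelists(unsigned int order) { (void)order; return NULL; }

static struct page *rmqueue_sketch(unsigned int order, int cma_first)
{
        struct page *page = NULL;

        if (cma_first)
                page = try_cma_freelist(order);

        /* Fallback: a failed CMA attempt falls through to the normal path. */
        if (!page)
                page = try_normal_freelists(order);

        return page;
}

int main(void)
{
        /* CMA-eligible order-0 request: tries CMA, then the regular freelists. */
        printf("%p\n", (void *)rmqueue_sketch(0, 1));
        return 0;
}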

Bug: 158645321
Link: https://lore.kernel.org/lkml/cover.1604282969.git.cgoldswo@codeaurora.org/
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Heesub Shin <heesub.shin@samsung.com>
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
[cgoldswo@codeaurora.org: Place in bugfixes; remove cma_alloc zone flag]
Signed-off-by: Chris Goldsworthy <cgoldswo@codeaurora.org>
Change-Id: Ibca5eedfc5eacd44542ad483851d741166715f84
Author:    Heesub Shin <heesub.shin@samsung.com>  (2013-01-06 18:10:00 -08:00)
Committer: Chris Goldsworthy <cgoldswo@codeaurora.org>
Parent:    f93a33edc8
Commit:    7ff00a49a2
4 changed files with 73 additions and 31 deletions

include/linux/gfp.h

@@ -39,11 +39,21 @@ struct vm_area_struct;
#define ___GFP_HARDWALL 0x100000u
#define ___GFP_THISNODE 0x200000u
#define ___GFP_ACCOUNT 0x400000u
#ifdef CONFIG_CMA
#define ___GFP_CMA 0x800000u
#else
#define ___GFP_CMA 0
#endif
#ifdef CONFIG_LOCKDEP
#ifdef CONFIG_CMA
#define ___GFP_NOLOCKDEP 0x1000000u
#else
#define ___GFP_NOLOCKDEP 0x800000u
#endif
#else
#define ___GFP_NOLOCKDEP 0
#endif
/* If the above are modified, __GFP_BITS_SHIFT may need updating */
/*
@@ -57,6 +67,7 @@ struct vm_area_struct;
#define __GFP_HIGHMEM ((__force gfp_t)___GFP_HIGHMEM)
#define __GFP_DMA32 ((__force gfp_t)___GFP_DMA32)
#define __GFP_MOVABLE ((__force gfp_t)___GFP_MOVABLE) /* ZONE_MOVABLE allowed */
#define __GFP_CMA ((__force gfp_t)___GFP_CMA)
#define GFP_ZONEMASK (__GFP_DMA|__GFP_HIGHMEM|__GFP_DMA32|__GFP_MOVABLE)
/**
@@ -224,7 +235,11 @@ struct vm_area_struct;
#define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
/* Room for N __GFP_FOO bits */
#ifdef CONFIG_CMA
#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP))
#else
#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
#endif
#define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
/**
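
For reference, the bit accounting in the hunks above can be checked
with a small standalone program (values copied from the patch;
assuming CONFIG_CMA=y and CONFIG_LOCKDEP=y, the widest layout):
___GFP_CMA takes over bit 23, ___GFP_NOLOCKDEP moves up to bit 24,
and __GFP_BITS_SHIFT grows to 25 so the mask still covers every flag.

#include <assert.h>

#define ___GFP_ACCOUNT   0x400000u   /* last pre-existing bit (bit 22) */
#define ___GFP_CMA       0x800000u   /* new bit 23 (CONFIG_CMA=y) */
#define ___GFP_NOLOCKDEP 0x1000000u  /* pushed up to bit 24 (CONFIG_LOCKDEP=y) */
#define __GFP_BITS_SHIFT (24 + 1)    /* 24 + IS_ENABLED(CONFIG_LOCKDEP) */
#define __GFP_BITS_MASK  ((1u << __GFP_BITS_SHIFT) - 1)

static_assert(___GFP_CMA == (1u << 23), "CMA takes the bit NOLOCKDEP used to occupy");
static_assert(___GFP_NOLOCKDEP == (1u << 24), "NOLOCKDEP moves up by one bit");
static_assert((___GFP_CMA & ___GFP_ACCOUNT) == 0, "no overlap with existing flags");
static_assert((___GFP_NOLOCKDEP & __GFP_BITS_MASK) == ___GFP_NOLOCKDEP,
              "the widened shift still covers the highest flag");

int main(void) { return 0; }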