mm: add & use zone_end_pfn() and zone_spans_pfn()
Add 2 helpers (zone_end_pfn() and zone_spans_pfn()) to reduce code
duplication.

This also switches to using them in compaction (where an additional
variable needed to be renamed), page_alloc, vmstat, memory_hotplug, and
kmemleak.

Note that in compaction.c I avoid calling zone_end_pfn() repeatedly
because I expect at some point the synchronization issues with
start_pfn & spanned_pages will need fixing, either by actually using
the seqlock or clever memory barrier usage.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Cc: David Hansen <dave@linux.vnet.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
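For illustration only (not part of the commit): a self-contained userspace
sketch of the caching pattern the compaction note above describes. The
struct zone here is a stand-in reduced to the two fields the helper reads,
and scan_zone() is a hypothetical scan loop, not code from compaction.c.

#include <stdio.h>

/* Stand-in for the kernel's struct zone, reduced to the two fields
 * zone_end_pfn() reads; everything else is omitted. */
struct zone {
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
};

/* The helper as the hunk below adds it. */
static inline unsigned zone_end_pfn(const struct zone *zone)
{
	return zone->zone_start_pfn + zone->spanned_pages;
}

/* Hypothetical scan loop showing the compaction.c pattern: the end
 * pfn is read into a local once, so zone_start_pfn/spanned_pages are
 * sampled a single time and a future seqlock or memory-barrier fix
 * has one obvious place to go. */
static unsigned long scan_zone(const struct zone *zone)
{
	unsigned long pfn, scanned = 0;
	unsigned long end_pfn = zone_end_pfn(zone);	/* cached once */

	for (pfn = zone->zone_start_pfn; pfn < end_pfn; pfn++)
		scanned++;	/* per-pfn work would go here */
	return scanned;
}

int main(void)
{
	struct zone z = { .zone_start_pfn = 4096, .spanned_pages = 1024 };

	printf("scanned %lu pfns\n", scan_zone(&z));	/* prints 1024 */
	return 0;
}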

committed by Linus Torvalds
parent 9127ab4ff9
commit 108bcc96ef

@@ -527,6 +527,16 @@ static inline int zone_is_oom_locked(const struct zone *zone)
 	return test_bit(ZONE_OOM_LOCKED, &zone->flags);
 }
 
+static inline unsigned zone_end_pfn(const struct zone *zone)
+{
+	return zone->zone_start_pfn + zone->spanned_pages;
+}
+
+static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
+{
+	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
+}
+
 /*
  * The "priority" of VM scanning is how much of the queues we will scan in one
  * go. A value of 12 for DEF_PRIORITY implies that we will scan 1/4096th of the
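Again for illustration (not from the commit): a minimal demo of the range
check the hunk adds, showing that zone_spans_pfn() treats the zone as a
half-open interval, with the start pfn inside and the end pfn excluded.
The reduced struct zone is the same stand-in used in the sketch above.

#include <stdbool.h>
#include <stdio.h>

struct zone {			/* reduced stand-in, as above */
	unsigned long zone_start_pfn;
	unsigned long spanned_pages;
};

static inline unsigned zone_end_pfn(const struct zone *zone)
{
	return zone->zone_start_pfn + zone->spanned_pages;
}

static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
{
	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
}

int main(void)
{
	struct zone z = { .zone_start_pfn = 4096, .spanned_pages = 1024 };

	printf("%d %d %d\n",
	       zone_spans_pfn(&z, 4096),	/* 1: start pfn is inside  */
	       zone_spans_pfn(&z, 5119),	/* 1: last spanned pfn     */
	       zone_spans_pfn(&z, 5120));	/* 0: end pfn is excluded  */
	return 0;
}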