From abbc0d53ee9ee6f1bd740ec09adf2b9a199c3598 Mon Sep 17 00:00:00 2001
From: Kalesh Singh
Date: Thu, 4 Apr 2024 22:37:48 -0700
Subject: [PATCH] ANDROID: 16K: Exclude ELF padding for fault around range

Userspace apps often analyze memory consumption using the mm rss_stat
counters -- via the kmem/rss_stat trace event or from /proc/<pid>/statm.
rss_stat counters are only updated when the PTEs are updated. This means
that pages can be present in the page cache from readahead yet not
visible to userspace (not attributed to the app), as there are no
corresponding PTEs in the VMA for the respective page cache pages.

A side effect of the loader now extending ELF LOAD segments to be
contiguously mapped in the virtual address space is that the VMA is
extended to cover the padding pages. When filesystems that implement
vm_ops->map_pages(), such as f2fs and ext4, perform do_fault_around(),
the extent of the fault around is restricted by the area of the
enclosing VMA. Since the loader extends LOAD segment VMAs to be
contiguously mapped, the extent of the fault around is also increased.
As a result, the PTEs corresponding to the padding pages are updated
and reflected in the rss_stat counters.

Userspace application developers are generally not aware of this nuance
in the kernel's memory accounting. To avoid apparent regressions in
memory usage reported to userspace, restrict the fault around range to
only valid data pages (i.e. exclude the padding pages at the end of the
VMA).
Bug: 330117029
Bug: 327600007
Bug: 330767927
Bug: 328266487
Bug: 329803029
Change-Id: I2c7a39ec1b040be2b9fb47801f95042f5dbf869d
Signed-off-by: Kalesh Singh
---
 mm/memory.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 94c5e331e49e..8d558d462832 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -57,6 +57,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <linux/pgsize_migration.h>
 #include <...>
 #include <...>
 #include <...>
@@ -4359,7 +4360,7 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 	end_pgoff = start_pgoff -
 		((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) +
 		PTRS_PER_PTE - 1;
-	end_pgoff = min3(end_pgoff, vma_pages(vmf->vma) + vmf->vma->vm_pgoff - 1,
+	end_pgoff = min3(end_pgoff, vma_data_pages(vmf->vma) + vmf->vma->vm_pgoff - 1,
			 start_pgoff + nr_pages - 1);

	if (!(vmf->flags & FAULT_FLAG_SPECULATIVE) &&