mm: provide a saner PTE walking API for modules

commit 9fd6dad1261a541b3f5fa7dc5b152222306e6702 upstream.

Currently, the follow_pfn function is exported for modules but
follow_pte is not.  However, follow_pfn is very easy to misuse,
because it does not provide protections (so most of its callers
assume the page is writable!) and because it returns after having
already unlocked the page table lock.
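
For reference (not part of the patch text), the existing, easy-to-misuse
helper looks roughly like this; it hands back a bare PFN with the page
table lock already dropped and gives no hint about write permission:

	/*
	 * Old helper: by the time the caller sees the PFN the page
	 * table lock has been released, so the mapping may already be
	 * stale, and nothing says whether the PTE was writable.
	 */
	int follow_pfn(struct vm_area_struct *vma, unsigned long address,
		       unsigned long *pfn);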

Provide instead a simplified version of follow_pte that does
not have the pmdpp and range arguments.  The older version
survives as follow_invalidate_pte() for use by fs/dax.c.
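
As a rough sketch (modeled on the hva_to_pfn_remapped() hunk below, not
lifted verbatim from the patch; vma, addr, write_fault and pfn stand in
for the caller's own variables), the simplified helper and a typical
caller look like this:

	/*
	 * New helper: on success the PTE comes back mapped and locked,
	 * so the caller can inspect it safely and must release it with
	 * pte_unmap_unlock() when done.
	 */
	int follow_pte(struct mm_struct *mm, unsigned long address,
		       pte_t **ptepp, spinlock_t **ptlp);

	pte_t *ptep;
	spinlock_t *ptl;

	if (follow_pte(vma->vm_mm, addr, &ptep, &ptl))
		return -EFAULT;
	if (write_fault && !pte_write(*ptep)) {
		/* read-only mapping: drop the lock and fail */
		pte_unmap_unlock(ptep, ptl);
		return -EFAULT;
	}
	pfn = pte_pfn(*ptep);
	pte_unmap_unlock(ptep, ptl);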

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Author:    Paolo Bonzini <pbonzini@redhat.com>
Date:      2021-02-05 05:07:11 -05:00
Committer: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Parent:    83d42c2586
Commit:    a42150f1c9
4 changed files with 45 additions and 11 deletions

virt/kvm/kvm_main.c

@@ -1893,7 +1893,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	int r;
 
-	r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
+	r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
 	if (r) {
 		/*
 		 * get_user_pages fails for VM_IO and VM_PFNMAP vmas and does
@@ -1908,7 +1908,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		if (r)
 			return r;
 
-		r = follow_pte(vma->vm_mm, addr, NULL, &ptep, NULL, &ptl);
+		r = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
 		if (r)
 			return r;
 	}