[PATCH] mm: unmap_vmas with inner ptlock
Remove the page_table_lock from around the calls to unmap_vmas, and replace
the pte_offset_map in zap_pte_range by pte_offset_map_lock: all callers are
now safe to descend without page_table_lock.

Don't attempt fancy locking for hugepages, just take page_table_lock in
unmap_hugepage_range.  Which makes zap_hugepage_range, and the hugetlb test
in zap_page_range, redundant: unmap_vmas calls unmap_hugepage_range anyway.
Nor does unmap_vmas have much use for its mm arg now.

The tlb_start_vma and tlb_end_vma in unmap_page_range are now called without
page_table_lock: if they're implemented at all, they typically come down to
flush_cache_range (usually done outside page_table_lock) and flush_tlb_range
(which we already audited for the mprotect case).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 8f4f8c164c
commit 508034a32b
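
For context, the pte_offset_map_lock pattern the message refers to comes out
roughly as below. A minimal sketch only, assuming the 2.6-era helpers
pte_offset_map_lock/pte_unmap_unlock from <linux/mm.h>; example_zap_ptes is
a made-up name, not the actual zap_pte_range body:

#include <linux/mm.h>

/* Walk one page table with the inner ptlock: pte_offset_map_lock takes
 * the lock covering just this page table, rather than the caller holding
 * mm->page_table_lock around the whole descent.
 */
static void example_zap_ptes(struct mm_struct *mm, pmd_t *pmd,
			     unsigned long addr, unsigned long end)
{
	spinlock_t *ptl;
	pte_t *pte;

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	do {
		pte_t ptent = *pte;
		if (pte_none(ptent))
			continue;
		/* ... clear and free the entry, as zap_pte_range does ... */
	} while (pte++, addr += PAGE_SIZE, addr != end);
	pte_unmap_unlock(pte - 1, ptl);
}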
@@ -682,7 +682,7 @@ struct zap_details {
 unsigned long zap_page_range(struct vm_area_struct *vma, unsigned long address,
 		unsigned long size, struct zap_details *);
-unsigned long unmap_vmas(struct mmu_gather **tlb, struct mm_struct *mm,
+unsigned long unmap_vmas(struct mmu_gather **tlb,
 		struct vm_area_struct *start_vma, unsigned long start_addr,
 		unsigned long end_addr, unsigned long *nr_accounted,
 		struct zap_details *);
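
With the mm argument dropped, as the hunk above shows, a caller shaped like
zap_page_range now looks roughly like this. Again a hedged sketch against the
2.6-era mmu_gather API (tlb_gather_mmu/tlb_finish_mmu as they were then);
example_zap_range is illustrative, not the verbatim kernel function:

/* Note: no page_table_lock taken around unmap_vmas any more; it (and,
 * for hugepages, unmap_hugepage_range) takes what it needs inside.
 */
unsigned long example_zap_range(struct vm_area_struct *vma,
		unsigned long address, unsigned long size,
		struct zap_details *details)
{
	struct mm_struct *mm = vma->vm_mm;
	struct mmu_gather *tlb;
	unsigned long end = address + size;
	unsigned long nr_accounted = 0;

	lru_add_drain();
	tlb = tlb_gather_mmu(mm, 0);
	end = unmap_vmas(&tlb, vma, address, end, &nr_accounted, details);
	tlb_finish_mmu(tlb, address, end);
	return end;
}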