ANDROID: mm: use raw seqcount variants in vm_write_*

write_seqcount_begin expects to be called from a non-preemptible
context, to avoid being preempted by a read section that can spin
on an odd sequence value. But the readers of vm_sequence never
retry, so its writers need not disable preemption. Use the
non-lockdep (raw) variants, since lockdep checks are now built
into write_seqcount_begin.

Bug: 161210518
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
Change-Id: If4f0cddd7f0a79136495060d4acc1702abb46817
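
For context, this is roughly what the vm_write_* helpers look like after
the change. The include/linux/mm.h side of the patch is not shown in the
hunks below, so this is only a minimal sketch, assuming the vm_sequence
seqcount_t field from the speculative page fault patch set:

static inline void vm_write_begin(struct vm_area_struct *vma)
{
	/*
	 * vm_sequence readers only sample the counter and never spin
	 * waiting for it to turn even, so the writer does not need to
	 * disable preemption. The raw variant also skips the lockdep
	 * checks built into write_seqcount_begin().
	 */
	raw_write_seqcount_begin(&vma->vm_sequence);
}

static inline void vm_write_end(struct vm_area_struct *vma)
{
	raw_write_seqcount_end(&vma->vm_sequence);
}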
Author: Vinayak Menon <vinmenon@codeaurora.org>
Date: 2021-01-15 19:52:40 +05:30
Committed by: Suren Baghdasaryan
parent 531f65ae67
commit c9201630e8
3 changed files with 18 additions and 57 deletions


@@ -788,29 +788,9 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	long adjust_next = 0;
 	int remove_next = 0;
 
-	/*
-	 * Why using vm_raw_write*() functions here to avoid lockdep's warning ?
-	 *
-	 * Locked is complaining about a theoretical lock dependency, involving
-	 * 3 locks:
-	 *   mapping->i_mmap_rwsem --> vma->vm_sequence --> fs_reclaim
-	 *
-	 * Here are the major path leading to this dependency :
-	 *  1. __vma_adjust() mmap_sem -> vm_sequence -> i_mmap_rwsem
-	 *  2. move_vmap() mmap_sem -> vm_sequence -> fs_reclaim
-	 *  3. __alloc_pages_nodemask() fs_reclaim -> i_mmap_rwsem
-	 *  4. unmap_mapping_range() i_mmap_rwsem -> vm_sequence
-	 *
-	 * So there is no way to solve this easily, especially because in
-	 * unmap_mapping_range() the i_mmap_rwsem is grab while the impacted
-	 * VMAs are not yet known.
-	 * However, the way the vm_seq is used is guarantying that we will
-	 * never block on it since we just check for its value and never wait
-	 * for it to move, see vma_has_changed() and handle_speculative_fault().
-	 */
-	vm_raw_write_begin(vma);
+	vm_write_begin(vma);
 	if (next)
-		vm_raw_write_begin(next);
+		vm_write_begin(next);
 
 	if (next && !insert) {
 		struct vm_area_struct *exporter = NULL, *importer = NULL;
@@ -892,8 +872,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			error = anon_vma_clone(importer, exporter);
 			if (error) {
 				if (next && next != vma)
-					vm_raw_write_end(next);
-				vm_raw_write_end(vma);
+					vm_write_end(next);
+				vm_write_end(vma);
 				return error;
 			}
 		}
@@ -1022,7 +1002,7 @@ again:
 		if (next->anon_vma)
 			anon_vma_merge(vma, next);
 		mm->map_count--;
-		vm_raw_write_end(next);
+		vm_write_end(next);
 		put_vma(next);
 		/*
 		 * In mprotect's case 6 (see comments on vma_merge),
@@ -1038,7 +1018,7 @@ again:
 			 */
 			next = vma->vm_next;
 			if (next)
-				vm_raw_write_begin(next);
+				vm_write_begin(next);
 		} else {
 			/*
 			 * For the scope of the comment "next" and
@@ -1086,9 +1066,9 @@ again:
 		uprobe_mmap(insert);
 
 	if (next && next != vma)
-		vm_raw_write_end(next);
+		vm_write_end(next);
 	if (!keep_locked)
-		vm_raw_write_end(vma);
+		vm_write_end(vma);
 
 	validate_mm(mm);
@@ -3469,7 +3449,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 		 * that we protect it right now, and let the caller unprotect
 		 * it once the move is done.
 		 */
-		vm_raw_write_begin(new_vma);
+		vm_write_begin(new_vma);
 		vma_link(mm, new_vma, prev, rb_link, rb_parent);
 		*need_rmap_locks = false;
 	}
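
The claim in the commit message that readers of vm_sequence only compare
values and never spin can be illustrated with a simplified version of
vma_has_changed() from the speculative page fault path. This sketch is an
assumption based on the SPF patch set, not part of this diff:

static inline bool vma_has_changed(struct vm_fault *vmf)
{
	/*
	 * The speculative fault handler samples vm_sequence once when
	 * the fault begins and rechecks it here. On a mismatch it falls
	 * back to the regular fault path instead of retrying, so a
	 * preempted writer can never make a reader spin.
	 */
	return READ_ONCE(vmf->vma->vm_sequence.sequence) != vmf->sequence;
}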