FROMLIST: x86/mm: add speculative pagefault handling

Try a speculative fault before acquiring mmap_sem; if it returns with
VM_FAULT_RETRY, continue with the mmap_sem acquisition and do the
traditional fault.
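
In outline (a condensed sketch of the fault.c hunk below, not the full
diff; handle_speculative_fault() is introduced by earlier patches in
this series), the fast path is:

	if (!(hw_error_code & X86_PF_PK)) {
		fault = handle_speculative_fault(mm, address, flags, &vma);
		if (fault != VM_FAULT_RETRY)
			goto done;	/* handled without mmap_sem */
	}
	/* otherwise fall back to taking mmap_sem and the traditional path */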

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
 handle_speculative_fault()]
[Retry with the usual fault path in the case VM_FAULT_ERROR is returned by
 handle_speculative_fault(). This allows signals to be delivered]
[Don't build SPF call if !CONFIG_SPECULATIVE_PAGE_FAULT]
[Handle memory protection key fault]
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>

Link: https://lore.kernel.org/patchwork/patch/1062684/
Bug: 161210518
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Change-Id: If994d027e8602d8d647dfe560c7ac68b49baf2f5

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1214,7 +1214,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 			unsigned long hw_error_code,
 			unsigned long address)
 {
-	struct vm_area_struct *vma;
+	struct vm_area_struct *vma = NULL;
 	struct task_struct *tsk;
 	struct mm_struct *mm;
 	vm_fault_t fault;
@@ -1298,6 +1298,16 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}
 #endif
 
+	/*
+	 * Do not try to do a speculative page fault if the fault was due to
+	 * protection keys since it can't be resolved.
+	 */
+	if (!(hw_error_code & X86_PF_PK)) {
+		fault = handle_speculative_fault(mm, address, flags, &vma);
+		if (fault != VM_FAULT_RETRY)
+			goto done;
+	}
+
 	/*
 	 * Kernel-mode access to the user address space should only occur
 	 * on well-defined single instructions listed in the exception
@@ -1391,6 +1401,8 @@ good_area:
 	}
 
 	mmap_read_unlock(mm);
+
+done:
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		mm_fault_error(regs, hw_error_code, address, fault);
 		return;