Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Paolo Bonzini:
 "ARM:
   - Move the arch-specific code into arch/arm64/kvm
   - Start the post-32bit cleanup
   - Cherry-pick a few non-invasive pre-NV patches

  x86:
   - Rework of TLB flushing
   - Rework of event injection, especially with respect to nested virtualization
   - Nested AMD event injection facelift, building on the rework of generic code and fixing a lot of corner cases
   - Nested AMD live migration support
   - Optimization for TSC deadline MSR writes and IPIs
   - Various cleanups
   - Asynchronous page fault cleanups (from tglx, common topic branch with tip tree)
   - Interrupt-based delivery of asynchronous "page ready" events (host side)
   - Hyper-V MSRs and hypercalls for guest debugging
   - VMX preemption timer fixes

  s390:
   - Cleanups

  Generic:
   - switch vCPU thread wakeup from swait to rcuwait

  The other architectures, and the guest side of the asynchronous page fault work, will come next week"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (256 commits)
  KVM: selftests: fix rdtsc() for vmx_tsc_adjust_test
  KVM: check userspace_addr for all memslots
  KVM: selftests: update hyperv_cpuid with SynDBG tests
  x86/kvm/hyper-v: Add support for synthetic debugger via hypercalls
  x86/kvm/hyper-v: enable hypercalls regardless of hypercall page
  x86/kvm/hyper-v: Add support for synthetic debugger interface
  x86/hyper-v: Add synthetic debugger definitions
  KVM: selftests: VMX preemption timer migration test
  KVM: nVMX: Fix VMX preemption timer migration
  x86/kvm/hyper-v: Explicitly align hcall param for kvm_hyperv_exit
  KVM: x86/pmu: Support full width counting
  KVM: x86/pmu: Tweak kvm_pmu_get_msr to pass 'struct msr_data' in
  KVM: x86: announce KVM_FEATURE_ASYNC_PF_INT
  KVM: x86: acknowledgment mechanism for async pf page ready notifications
  KVM: x86: interrupt based APF 'page ready' event delivery
  KVM: introduce kvm_read_guest_offset_cached()
  KVM: rename kvm_arch_can_inject_async_page_present() to kvm_arch_can_dequeue_async_page_present()
  KVM: x86: extend struct kvm_vcpu_pv_apf_data with token info
  Revert "KVM: async_pf: Fix #DF due to inject "Page not Present" and "Page Ready" exceptions simultaneously"
  KVM: VMX: Replace zero-length array with flexible-array
  ...
@@ -30,6 +30,7 @@
 #include <asm/desc.h>			/* store_idt(), ... */
 #include <asm/cpu_entry_area.h>		/* exception stack */
 #include <asm/pgtable_areas.h>		/* VMALLOC_START, ... */
+#include <asm/kvm_para.h>		/* kvm_handle_async_pf */

 #define CREATE_TRACE_POINTS
 #include <asm/trace/exceptions.h>
@@ -1359,6 +1360,24 @@ do_page_fault(struct pt_regs *regs, unsigned long hw_error_code,
 		unsigned long address)
 {
 	prefetchw(&current->mm->mmap_sem);
+	/*
+	 * KVM has two types of events that are, logically, interrupts, but
+	 * are unfortunately delivered using the #PF vector. These events are
+	 * "you just accessed valid memory, but the host doesn't have it right
+	 * now, so I'll put you to sleep if you continue" and "that memory
+	 * you tried to access earlier is available now."
+	 *
+	 * We are relying on the interrupted context being sane (valid RSP,
+	 * relevant locks not held, etc.), which is fine as long as the
+	 * interrupted context had IF=1. We are also relying on the KVM
+	 * async pf type field and CR2 being read consistently instead of
+	 * getting values from real and async page faults mixed up.
+	 *
+	 * Fingers crossed.
+	 */
+	if (kvm_handle_async_pf(regs, (u32)address))
+		return;
+
 	trace_page_fault_entries(regs, hw_error_code, address);

 	if (unlikely(kmmio_fault(regs, address)))
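For readers skimming the diff: the comment added above describes a dispatch decision, not a new fault path. Before do_page_fault() does any real work, kvm_handle_async_pf() consults KVM's shared per-vCPU data; if the #PF was really an async page fault event injected by the host, it is consumed there and do_page_fault() returns early, otherwise the normal page-fault path runs. The stand-alone C sketch below models only that decision. It is not the kernel implementation, and every name in it (apf_shared_data, apf_read_and_clear_flags, park_task_on_token, APF_PAGE_NOT_PRESENT) is a hypothetical placeholder rather than a kernel identifier.

/*
 * Illustrative user-space sketch only -- NOT the kernel code.
 * It models the idea from the comment above: the #PF handler first
 * consults a host-shared data area; if the host flagged an async
 * "page not present" event there, the fault is handled by parking
 * the task on the event token instead of running the normal path.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define APF_PAGE_NOT_PRESENT 1u   /* hypothetical flag value */

struct apf_shared_data {          /* stand-in for the guest/host shared area */
	uint32_t flags;           /* written by the host before injecting #PF */
	uint32_t token;           /* identifies the outstanding async fault */
};

static struct apf_shared_data apf_data;   /* would be per-vCPU in a real guest */

/* Read the reason and clear it so it is consumed exactly once. */
static uint32_t apf_read_and_clear_flags(void)
{
	uint32_t flags = apf_data.flags;

	apf_data.flags = 0;
	return flags;
}

static void park_task_on_token(uint32_t token)
{
	printf("async PF: sleeping until host signals token %u ready\n", token);
}

/*
 * Returns true when the #PF was an async-PF event and has been handled,
 * so the caller must NOT run the normal page-fault path.
 */
static bool handle_async_pf_sketch(uint32_t token)
{
	if (apf_read_and_clear_flags() != APF_PAGE_NOT_PRESENT)
		return false;     /* ordinary page fault: let it proceed */

	park_task_on_token(token);
	return true;
}

int main(void)
{
	/* 1) A genuine page fault: shared flags are clear, so fall through. */
	printf("real fault handled normally: %d\n", !handle_async_pf_sketch(0));

	/* 2) Host marks an async "page not present" event, then injects #PF. */
	apf_data.flags = APF_PAGE_NOT_PRESENT;
	apf_data.token = 42;
	printf("async fault intercepted: %d\n",
	       handle_async_pf_sketch(apf_data.token));
	return 0;
}

Consuming the flag exactly once in the sketch mirrors the concern in the comment about reading the async-PF type field and CR2 consistently, so a real fault can never be confused with a stale async-PF event.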