x86/kvm/mmu: introduce guest_mmu
When EPT is used for a nested guest we need to re-init the MMU as a shadow EPT MMU (nested_ept_init_mmu_context() does that). When we return from L2 to L1, kvm_mmu_reset_context() in nested_vmx_load_cr3() resets the MMU back to normal TDP mode. Add a special 'guest_mmu' so we can use separate root caches; the improved hit rate is not very important for single-vCPU performance, but it avoids contention on the mmu_lock for many vCPUs.

On the nested CPUID benchmark with 16 vCPUs, an L2->L1->L2 vmexit goes from 42k to 26k cycles.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
committed by Paolo Bonzini
parent 6a82cd1c7b
commit 14c07ad89f
@@ -548,6 +548,9 @@ struct kvm_vcpu_arch {
 	/* Non-nested MMU for L1 */
 	struct kvm_mmu root_mmu;
 
+	/* L1 MMU when running nested */
+	struct kvm_mmu guest_mmu;
+
 	/*
 	 * Paging state of an L2 guest (used for nested npt)
 	 *
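Not part of the patch hunks shown here: a minimal sketch of how the two contexts are meant to be selected on the VMX nested-EPT path, assuming the pointer-based vcpu->arch.mmu introduced by the parent commit. nested_ept_uninit_mmu_context() and the call details are illustrative placeholders based on the commit description, not quoted from the diff.

static void nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
{
	/* Entering L2 with EPT: point at the dedicated shadow EPT context,
	 * which keeps its own cached roots separate from root_mmu's. */
	vcpu->arch.mmu = &vcpu->arch.guest_mmu;
	/* ... then re-initialize it as a shadow EPT MMU ... */
}

static void nested_ept_uninit_mmu_context(struct kvm_vcpu *vcpu)
{
	/* Back in L1: restore the normal TDP context; its cached roots
	 * were not thrown away while L2 was running. */
	vcpu->arch.mmu = &vcpu->arch.root_mmu;
}

Because only a pointer is swapped, an L2->L1->L2 round trip can reuse both sets of cached roots instead of rebuilding them, which is where the cycle savings in the benchmark above come from.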