KVM: arm/arm64: VGIC/ITS: protect kvm_read_guest() calls with SRCU lock
kvm_read_guest() will eventually look up in kvm_memslots(), which requires
either to hold the kvm->slots_lock or to be inside a kvm->srcu critical
section. In contrast to x86 and s390 we don't take the SRCU lock on every
guest exit, so we have to do it individually for each kvm_read_guest() call.

Provide a wrapper which does that and use that everywhere.

Note that ending the SRCU critical section before returning from the
kvm_read_guest() wrapper is safe, because the data has been *copied*, so we
don't need to rely on valid references to the memslot anymore.

Cc: Stable <stable@vger.kernel.org> # 4.8+
Reported-by: Jan Glauber <jan.glauber@caviumnetworks.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
committed by Paolo Bonzini
parent 9c4188762f
commit bf308242ab
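For background on the constraint the commit message describes: kvm->memslots is an SRCU-protected pointer, so the lookup itself demands a read-side critical section. A simplified sketch of that lookup, condensed from include/linux/kvm_host.h (the real kernel code additionally indexes memslots by address-space id):

/* Simplified sketch, not the verbatim kernel code: readers must be inside
 * an srcu_read_lock() section on kvm->srcu, or hold kvm->slots_lock;
 * otherwise lockdep complains and the memslots the pointer refers to may
 * be freed under us after a memslot update.
 */
static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
{
	return srcu_dereference_check(kvm->memslots, &kvm->srcu,
				      lockdep_is_held(&kvm->slots_lock));
}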
arch/arm/include/asm/kvm_mmu.h:
@@ -309,6 +309,22 @@ static inline unsigned int kvm_get_vmid_bits(void)
 	return 8;
 }
 
+/*
+ * We are not in the kvm->srcu critical section most of the time, so we take
+ * the SRCU read lock here. Since we copy the data from the user page, we
+ * can immediately drop the lock again.
+ */
+static inline int kvm_read_guest_lock(struct kvm *kvm,
+				      gpa_t gpa, void *data, unsigned long len)
+{
+	int srcu_idx = srcu_read_lock(&kvm->srcu);
+	int ret = kvm_read_guest(kvm, gpa, data, len);
+
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+
+	return ret;
+}
+
 static inline void *kvm_get_hyp_vector(void)
 {
 	return kvm_ksym_ref(__kvm_hyp_vector);
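"Use that everywhere" then means converting the VGIC/ITS readers of guest memory to the wrapper. An illustrative (not verbatim) call site, assuming a hypothetical its_read_entry() helper that reads one 8-byte table entry:

/* Hypothetical helper, for illustration only; the real conversions live
 * in virt/kvm/arm/vgic/vgic-its.c.  Going through kvm_read_guest_lock()
 * wraps the memslot lookup in an SRCU read-side critical section, which
 * is safe to drop before returning because the entry has been copied.
 */
static int its_read_entry(struct kvm *kvm, gpa_t entry_gpa, u64 *entry)
{
	return kvm_read_guest_lock(kvm, entry_gpa, entry, sizeof(*entry));
}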