Commit 19979fba authored by Sean Christopherson, committed by Paolo Bonzini

KVM: x86: Remove obsolete disabling of page faults in kvm_arch_vcpu_put()

Remove the disabling of page faults across kvm_steal_time_set_preempted()
as KVM now accesses the steal time struct (shared with the guest) via a
cached mapping (see commit b0431382, "x86/KVM: Make sure
KVM_VCPU_FLUSH_TLB flag is not missed").  The cache lookup is flagged as
atomic, thus it would be a bug if KVM tried to resolve a new pfn, i.e.
we want the splat that would be reached via might_fault().
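
For context, a condensed sketch of the access pattern the message refers
to, based on the kvm_map_gfn()/kvm_unmap_gfn() cached-mapping helpers
that the steal time code moved to in commit b0431382; treat it as
illustrative rather than the verbatim body of
kvm_steal_time_set_preempted():

/*
 * Illustrative sketch, not the exact kernel source.  The final "true"
 * passed to kvm_map_gfn() flags the lookup as atomic: the helper may
 * only reuse the cached pfn and must fail instead of faulting in a
 * new one, so this path never sleeps.
 */
static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
{
	struct kvm_host_map map;
	struct kvm_steal_time *st;

	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
		return;

	/* Hit the cache or bail; never resolve a new pfn here. */
	if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT, &map,
			&vcpu->arch.st.cache, true))
		return;

	st = map.hva +
		offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);

	st->preempted = vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;

	/* Unmap and mark dirty, again with an atomic (cached) lookup. */
	kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, true);
}

With the lookup forced atomic, pagefault_disable() no longer guards
anything; it would only mute the might_fault() splat that should fire
if a sleeping, non-atomic lookup were ever reached from this
preempt-disabled context.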

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210123000334.3123628-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent bd2fae8d
arch/x86/kvm/x86.c  +0 −10
@@ -4040,15 +4040,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	if (vcpu->preempted && !vcpu->arch.guest_state_protected)
 		vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu);
 
-	/*
-	 * Disable page faults because we're in atomic context here.
-	 * kvm_write_guest_offset_cached() would call might_fault()
-	 * that relies on pagefault_disable() to tell if there's a
-	 * bug. NOTE: the write to guest memory may not go through if
-	 * during postcopy live migration or if there's heavy guest
-	 * paging.
-	 */
-	pagefault_disable();
 	/*
 	 * kvm_memslots() will be called by
 	 * kvm_write_guest_offset_cached() so take the srcu lock.
@@ -4056,7 +4047,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	idx = srcu_read_lock(&vcpu->kvm->srcu);
 	kvm_steal_time_set_preempted(vcpu);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
-	pagefault_enable();
 	kvm_x86_ops.vcpu_put(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 	/*