Commit 34ff2139 authored by Rishabh Bhatnagar, committed by sanglipeng

KVM: x86: Remove obsolete disabling of page faults in kvm_arch_vcpu_put()

stable inclusion
from stable-v5.10.180
commit 029662004359364428d6cca688acb0441189af1b
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I8DDFN

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=029662004359364428d6cca688acb0441189af1b

--------------------------------

From: Sean Christopherson <seanjc@google.com>

commit 19979fba upstream.

Remove the disabling of page faults across kvm_steal_time_set_preempted()
as KVM now accesses the steal time struct (shared with the guest) via a
cached mapping (see commit b0431382, "x86/KVM: Make sure
KVM_VCPU_FLUSH_TLB flag is not missed").  The cache lookup is flagged as
atomic, thus it would be a bug if KVM tried to resolve a new pfn, i.e.
we want the splat that would be reached via might_fault().

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210123000334.3123628-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Rishabh Bhatnagar <risbhat@amazon.com>
Tested-by: Allen Pais <apais@linux.microsoft.com>
Acked-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: sanglipeng <sanglipeng1@jd.com>
parent 6858be5f
+0 −10
@@ -4352,15 +4352,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	if (vcpu->preempted)
 		vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu);
 
-	/*
-	 * Disable page faults because we're in atomic context here.
-	 * kvm_write_guest_offset_cached() would call might_fault()
-	 * that relies on pagefault_disable() to tell if there's a
-	 * bug. NOTE: the write to guest memory may not go through if
-	 * during postcopy live migration or if there's heavy guest
-	 * paging.
-	 */
-	pagefault_disable();
 	/*
 	 * kvm_memslots() will be called by
 	 * kvm_write_guest_offset_cached() so take the srcu lock.
@@ -4368,7 +4359,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	idx = srcu_read_lock(&vcpu->kvm->srcu);
 	kvm_steal_time_set_preempted(vcpu);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
-	pagefault_enable();
 	kvm_x86_ops.vcpu_put(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 	/*