Commit 945024d7 authored by Jon Kohler, committed by Paolo Bonzini

KVM: x86: optimize PKU branching in kvm_load_{guest|host}_xsave_state



kvm_load_{guest|host}_xsave_state handles xsave on vm entry and exit,
part of which is managing memory protection key state. The latest
arch.pkru is updated with a rdpkru, and if that doesn't match the base
host_pkru (which happens about 70% of the time), we issue a
__write_pkru.

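For reference, a minimal sketch of the pre-patch host-side restore
path (condensed from the pre-patch side of the diff below):

	if (static_cpu_has(X86_FEATURE_PKU) &&
	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) {
		/* refresh the cached guest PKRU from hardware */
		vcpu->arch.pkru = rdpkru();
		/* restore the host value only when it differs */
		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
			write_pkru(vcpu->arch.host_pkru);
	}
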
To improve performance, implement the following optimizations:
 1. Reorder the if conditions that gate the wrpkru in both
    kvm_load_{guest|host}_xsave_state.

    Flip the ordering of the || condition so that the XFEATURE_MASK_PKRU
    check comes first; when instrumented in our environment it appeared
    to be always true, and it is less overall work than kvm_read_cr4_bits.

    For kvm_load_guest_xsave_state, hoist the arch.pkru != host_pkru
    check ahead one position. When instrumented, it was true roughly
    70% of the time, whereas the other conditions were almost always
    true. With this change, we avoid the 3rd condition check ~30% of
    the time (see the annotated sketch after this list).

 2. Wrap the PKU sections in CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS;
    if the user compiles this feature out, these branches should not
    exist at all.

Signed-off-by: Jon Kohler <jon@nutanix.com>
Message-Id: <20220324004439.6709-1-jon@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent f44509f8
+9 −5
@@ -961,11 +961,13 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
 	}
 
+#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
 	if (static_cpu_has(X86_FEATURE_PKU) &&
-	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
-	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)) &&
-	    vcpu->arch.pkru != vcpu->arch.host_pkru)
+	    vcpu->arch.pkru != vcpu->arch.host_pkru &&
+	    ((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
+	     kvm_read_cr4_bits(vcpu, X86_CR4_PKE)))
 		write_pkru(vcpu->arch.pkru);
+#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
 }
 EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);

@@ -974,13 +976,15 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.guest_state_protected)
 		return;
 
+#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
 	if (static_cpu_has(X86_FEATURE_PKU) &&
-	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
-	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) {
+	    ((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
+	     kvm_read_cr4_bits(vcpu, X86_CR4_PKE))) {
 		vcpu->arch.pkru = rdpkru();
 		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
 			write_pkru(vcpu->arch.host_pkru);
 	}
+#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
 
 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {