Unverified Commit 61798d1a authored by openeuler-ci-bot, committed by Gitee

!456 Backport CVEs and bugfixes

Merge Pull Request from: @zhangjialin11 
 
Pull new CVEs:
CVE-2023-1118
CVE-2023-1073
CVE-2022-27672
CVE-2023-0461
CVE-2023-1075
CVE-2023-22995
CVE-2023-26607
CVE-2023-1078
CVE-2023-1076

net bugfix from Zhang Changzhong
md bugfixes from Yu Kuai
blk-mq bugfix from Yu Kuai
fs bugfixes from Baokun Li and Zhihao Cheng
ring-buffer bugfix from Zheng Yejian 
 
Link: https://gitee.com/openeuler/kernel/pulls/456

 

Reviewed-by: Zheng Zengkai <zhengzengkai@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
parents 6981ca3c 27477362
Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst +91 −0

.. SPDX-License-Identifier: GPL-2.0

Cross-Thread Return Address Predictions
=======================================

Certain AMD and Hygon processors are subject to a cross-thread return address
predictions vulnerability. When running in SMT mode and one sibling thread
transitions out of C0 state, the other sibling thread could use return target
predictions from the sibling thread that transitioned out of C0.

The Spectre v2 mitigations protect the Linux kernel, as it fills the return
address prediction entries with safe targets when context switching to the idle
thread. However, KVM does allow a VMM to prevent exiting guest mode when
transitioning out of C0. This could result in a guest-controlled return target
being consumed by the sibling thread.

Affected processors
-------------------

The following CPUs are vulnerable:

    - AMD Family 17h processors
    - Hygon Family 18h processors

Related CVEs
------------

The following CVE entry is related to this issue:

   ==============  =======================================
   CVE-2022-27672  Cross-Thread Return Address Predictions
   ==============  =======================================

Problem
-------

Affected SMT-capable processors support 1T and 2T modes of execution when SMT
is enabled. In 2T mode, both threads in a core are executing code. For the
processor core to enter 1T mode, it is required that one of the threads
requests to transition out of the C0 state. This can be communicated with the
HLT instruction or with an MWAIT instruction that requests non-C0.
When the thread re-enters the C0 state, the processor transitions back
to 2T mode, assuming the other thread is also still in C0 state.

In affected processors, the return address predictor (RAP) is partitioned
depending on the SMT mode. For instance, in 2T mode each thread uses a private
16-entry RAP, but in 1T mode, the active thread uses a 32-entry RAP. Upon
transition between 1T/2T mode, the RAP contents are not modified but the RAP
pointers (which control the next return target to use for predictions) may
change. This behavior may result in return targets from one SMT thread being
used by RET predictions in the sibling thread following a 1T/2T switch. In
particular, a RET instruction executed immediately after a transition to 1T may
use a return target from the thread that just became idle. In theory, this
could lead to information disclosure if the return targets used do not come
from trustworthy code.

Attack scenarios
----------------

An attack can be mounted on affected processors by performing a series of CALL
instructions with targeted return locations and then transitioning out of C0
state.

Mitigation mechanism
--------------------

Before entering idle state, the kernel context switches to the idle thread. The
context switch fills the RAP entries (referred to as the RSB in Linux) with safe
targets by performing a sequence of CALL instructions.

Prevent a guest VM from directly putting the processor into an idle state by
intercepting HLT and MWAIT instructions.

Both mitigations are required to fully address this issue.

Mitigation control on the kernel command line
---------------------------------------------

Use existing Spectre v2 mitigations that will fill the RSB on context switch.

Mitigation control for KVM - module parameter
---------------------------------------------

By default, the KVM hypervisor mitigates this issue by intercepting guest
attempts to transition out of C0. A VMM can use the KVM_CAP_X86_DISABLE_EXITS
capability to override those interceptions, but since this is not common, the
mitigation that covers this path is not enabled by default.

The mitigation for the KVM_CAP_X86_DISABLE_EXITS capability can be turned on
using the boolean module parameter ``mitigate_smt_rsb``, e.g. ``kvm.mitigate_smt_rsb=1``.
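As a sketch of how the parameter is set and read back (commands assume a distribution where kvm is a loadable module; verify the sysfs path against your kernel):

```shell
# Enable the KVM-side mitigation when the kvm module is loaded.
modprobe kvm mitigate_smt_rsb=1

# Equivalent kernel command line entry when kvm is built in:
#   kvm.mitigate_smt_rsb=1

# The parameter is read-only at runtime (mode 0444 in the module_param()
# declaration).  Read it back to see whether the mitigation is active;
# note it is forced off at init on CPUs that do not have the bug or
# cannot run in SMT mode.
cat /sys/module/kvm/parameters/mitigate_smt_rsb
```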
Documentation/admin-guide/hw-vuln/index.rst +1 −0
@@ -17,3 +17,4 @@ are configurable at compile, boot or run time.
   special-register-buffer-data-sampling.rst
   core-scheduling.rst
   processor_mmio_stale_data.rst
   cross-thread-rsb.rst
arch/x86/include/asm/cpufeatures.h +1 −0
@@ -471,5 +471,6 @@
#define X86_BUG_MMIO_UNKNOWN		X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
#define X86_BUG_RETBLEED		X86_BUG(27) /* CPU is affected by RETBleed */
#define X86_BUG_EIBRS_PBRSB		X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
#define X86_BUG_SMT_RSB			X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */

#endif /* _ASM_X86_CPUFEATURES_H */
arch/x86/kernel/cpu/common.c +7 −2
@@ -1124,6 +1124,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
#define MMIO_SBDS	BIT(2)
/* CPU is affected by RETbleed, speculating where you would not expect it */
#define RETBLEED	BIT(3)
/* CPU is affected by SMT (cross-thread) return predictions */
#define SMT_RSB		BIT(4)

static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
@@ -1155,8 +1157,8 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {

	VULNBL_AMD(0x15, RETBLEED),
	VULNBL_AMD(0x16, RETBLEED),
	VULNBL_AMD(0x17, RETBLEED | SMT_RSB),
	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB),
	{}
};

@@ -1274,6 +1276,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
	    !(ia32_cap & ARCH_CAP_PBRSB_NO))
		setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);

	if (cpu_matches(cpu_vuln_blacklist, SMT_RSB))
		setup_force_cpu_bug(X86_BUG_SMT_RSB);

	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
		return;

arch/x86/kvm/x86.c +32 −11
@@ -174,6 +174,10 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
int __read_mostly pi_inject_timer = -1;
module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);

/* Enable/disable SMT_RSB bug mitigation */
bool __read_mostly mitigate_smt_rsb;
module_param(mitigate_smt_rsb, bool, 0444);

/*
 * Restoring the host value for MSRs that are only consumed when running in
 * usermode, e.g. SYSCALL MSRs and TSC_AUX, can be deferred until the CPU
@@ -3945,10 +3949,15 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
		r = KVM_CLOCK_TSC_STABLE;
		break;
	case KVM_CAP_X86_DISABLE_EXITS:
		r = KVM_X86_DISABLE_EXITS_PAUSE;

		if (!mitigate_smt_rsb) {
			r |= KVM_X86_DISABLE_EXITS_HLT |
			     KVM_X86_DISABLE_EXITS_CSTATE;

			if (kvm_can_mwait_in_guest())
				r |= KVM_X86_DISABLE_EXITS_MWAIT;
		}
		break;
	case KVM_CAP_X86_SMM:
		/* SMBASE is usually relocated above 1M on modern chipsets,
@@ -5491,15 +5500,26 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
		if (cap->args[0] & ~KVM_X86_DISABLE_VALID_EXITS)
			break;

		if (cap->args[0] & KVM_X86_DISABLE_EXITS_PAUSE)
			kvm->arch.pause_in_guest = true;

#define SMT_RSB_MSG "This processor is affected by the Cross-Thread Return Predictions vulnerability. " \
		    "KVM_CAP_X86_DISABLE_EXITS should only be used with SMT disabled or trusted guests."

		if (!mitigate_smt_rsb) {
			if (boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible() &&
			    (cap->args[0] & ~KVM_X86_DISABLE_EXITS_PAUSE))
				pr_warn_once(SMT_RSB_MSG);

			if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
			    kvm_can_mwait_in_guest())
				kvm->arch.mwait_in_guest = true;
			if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
				kvm->arch.hlt_in_guest = true;
			if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
				kvm->arch.cstate_in_guest = true;
		}

		r = 0;
		break;
	case KVM_CAP_MSR_PLATFORM_INFO:
@@ -11698,6 +11718,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_apicv_update_request);
static int __init kvm_x86_init(void)
{
	kvm_mmu_x86_module_init();
	mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
	return 0;
}
module_init(kvm_x86_init);