Documentation/admin-guide/hw-vuln/index.rst (+7 −7)

--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -13,11 +13,11 @@ are configurable at compile, boot or run time.
    l1tf
    mds
    tsx_async_abort
-   multihit.rst
-   special-register-buffer-data-sampling.rst
-   core-scheduling.rst
-   l1d_flush.rst
-   processor_mmio_stale_data.rst
-   cross-thread-rsb.rst
+   multihit
+   special-register-buffer-data-sampling
+   core-scheduling
+   l1d_flush
+   processor_mmio_stale_data
+   cross-thread-rsb
    srso
-   gather_data_sampling.rst
+   gather_data_sampling
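The renamed entries above are Sphinx toctree document references, which are written as document names relative to the index and therefore carry no ``.rst`` extension; a minimal sketch of such a toctree (the ``:maxdepth:`` option is illustrative):

```rst
.. toctree::
   :maxdepth: 1

   srso
   gather_data_sampling
```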
Documentation/admin-guide/hw-vuln/srso.rst (+44 −27)

--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -42,43 +42,60 @@ The sysfs file showing SRSO mitigation status is:
 
 The possible values in this file are:
 
- - 'Not affected'               The processor is not vulnerable
-
- - 'Vulnerable: no microcode'   The processor is vulnerable, no
-                                microcode extending IBPB functionality
-                                to address the vulnerability has been
-                                applied.
-
- - 'Mitigation: microcode'      Extended IBPB functionality microcode
-                                patch has been applied. It does not
-                                address User->Kernel and Guest->Host
-                                transitions protection but it does
-                                address User->User and VM->VM attack
-                                vectors.
-
-                                (spec_rstack_overflow=microcode)
-
- - 'Mitigation: safe RET'       Software-only mitigation. It complements
-                                the extended IBPB microcode patch
-                                functionality by addressing User->Kernel
-                                and Guest->Host transitions protection.
-
-                                Selected by default or by
-                                spec_rstack_overflow=safe-ret
-
- - 'Mitigation: IBPB'           Similar protection as "safe RET" above
-                                but employs an IBPB barrier on privilege
-                                domain crossings (User->Kernel,
-                                Guest->Host).
-
-                                (spec_rstack_overflow=ibpb)
-
- - 'Mitigation: IBPB on VMEXIT' Mitigation addressing the cloud provider
-                                scenario - the Guest->Host transitions
-                                only.
-
-                                (spec_rstack_overflow=ibpb-vmexit)
+ * 'Not affected':
+
+   The processor is not vulnerable
+
+ * 'Vulnerable: no microcode':
+
+   The processor is vulnerable, no microcode extending IBPB
+   functionality to address the vulnerability has been applied.
+
+ * 'Mitigation: microcode':
+
+   Extended IBPB functionality microcode patch has been applied. It does
+   not address User->Kernel and Guest->Host transitions protection but
+   it does address User->User and VM->VM attack vectors.
+
+   Note that User->User mitigation is controlled by how the IBPB aspect in
+   the Spectre v2 mitigation is selected:
+
+     * conditional IBPB:
+
+       where each process can select whether it needs an IBPB issued
+       around it PR_SPEC_DISABLE/_ENABLE etc, see :doc:`spectre`
+
+     * strict:
+
+       i.e., always on - by supplying spectre_v2_user=on on the kernel
+       command line
+
+   (spec_rstack_overflow=microcode)
+
+ * 'Mitigation: safe RET':
+
+   Software-only mitigation. It complements the extended IBPB microcode
+   patch functionality by addressing User->Kernel and Guest->Host
+   transitions protection.
+
+   Selected by default or by spec_rstack_overflow=safe-ret
+
+ * 'Mitigation: IBPB':
+
+   Similar protection as "safe RET" above but employs an IBPB barrier on
+   privilege domain crossings (User->Kernel, Guest->Host).
+
+   (spec_rstack_overflow=ibpb)
+
+ * 'Mitigation: IBPB on VMEXIT':
+
+   Mitigation addressing the cloud provider scenario - the Guest->Host
+   transitions only.
+
+   (spec_rstack_overflow=ibpb-vmexit)
 
 In order to exploit vulnerability, an attacker needs to:
 
  - gain local access on the machine
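The status strings documented above can be checked at runtime by reading the sysfs file the document refers to; a minimal query sketch, assuming the standard ``/sys/devices/system/cpu/vulnerabilities/`` path (the fallback message is my own wording, not kernel output):

```shell
# Print the SRSO mitigation status; exactly one line of output either way.
f=/sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
if [ -r "$f" ]; then
    cat "$f"
else
    echo "spec_rstack_overflow: not reported by this kernel"
fi
```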
arch/x86/include/asm/processor.h (+2 −0)

--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -731,4 +731,6 @@ bool arch_is_platform_page(u64 paddr);
 #define arch_is_platform_page arch_is_platform_page
 #endif
 
+extern bool gds_ucode_mitigated(void);
+
 #endif /* _ASM_X86_PROCESSOR_H */
arch/x86/kernel/vmlinux.lds.S (+9 −3)

--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -529,11 +529,17 @@ INIT_PER_CPU(irq_stack_backing_store);
 
 #ifdef CONFIG_CPU_SRSO
 /*
- * GNU ld cannot do XOR so do: (A | B) - (A & B) in order to compute the XOR
+ * GNU ld cannot do XOR until 2.41.
+ * https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f6f78318fca803c4907fb8d7f6ded8295f1947b1
+ *
+ * LLVM lld cannot do XOR until lld-17.
+ * https://github.com/llvm/llvm-project/commit/fae96104d4378166cbe5c875ef8ed808a356f3fb
+ *
+ * Instead do: (A | B) - (A & B) in order to compute the XOR
  * of the two function addresses:
  */
-. = ASSERT(((srso_untrain_ret_alias | srso_safe_ret_alias) -
-		(srso_untrain_ret_alias & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
+. = ASSERT(((ABSOLUTE(srso_untrain_ret_alias) | srso_safe_ret_alias) -
+		(ABSOLUTE(srso_untrain_ret_alias) & srso_safe_ret_alias)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
 		"SRSO function pair won't alias");
 #endif
arch/x86/kvm/x86.c (+0 −2)

--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -314,8 +314,6 @@ u64 __read_mostly host_xcr0;
 
 static struct kmem_cache *x86_emulator_cache;
 
-extern bool gds_ucode_mitigated(void);
-
 /*
  * When called, it means the previous get/set msr reached an invalid msr.
  * Return true if we want to ignore/silent this failed msr access.