  1. Mar 24, 2018
• x86/efi: Free efi_pgd with free_pages() · 06ace26f
      Waiman Long authored
      The efi_pgd is allocated as PGD_ALLOCATION_ORDER pages and therefore must
      also be freed as PGD_ALLOCATION_ORDER pages with free_pages().
      
Fixes: d9e9a641 ("x86/mm/pti: Allocate a separate user PGD")
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/1521746333-19593-1-git-send-email-longman@redhat.com
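The invariant here is simply that an order-n allocation must be freed as an order-n allocation. A minimal userspace sketch of the pairing (mock names; the real code uses `__get_free_pages()`/`free_pages()` with `PGD_ALLOCATION_ORDER`, and the order value below is illustrative):

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL
#define PGD_ALLOCATION_ORDER 1   /* illustrative order for the user PGD */

/* Userspace stand-ins for __get_free_pages()/free_pages(): an order-n
 * allocation covers 2^n contiguous pages and must be freed with the
 * same order, never as a single page. */
static unsigned long pages_outstanding;

static void *get_free_pages_mock(unsigned int order)
{
    pages_outstanding += 1UL << order;
    return aligned_alloc(PAGE_SIZE, PAGE_SIZE << order);
}

static void free_pages_mock(void *addr, unsigned int order)
{
    pages_outstanding -= 1UL << order;
    free(addr);
}
```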
  2. Mar 20, 2018
• x86/vsyscall/64: Use proper accessor to update P4D entry · 31ad7f8e
      Boris Ostrovsky authored
      Writing to it directly does not work for Xen PV guests.
      
Fixes: 49275fef ("x86/vsyscall/64: Explicitly set _PAGE_USER in the pagetable hierarchy")
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: Andy Lutomirski <luto@kernel.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180319143154.3742-1-boris.ostrovsky@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
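The underlying pattern is that page-table writes must go through an accessor a hypervisor layer can override, rather than a raw store. A minimal mock of that indirection (all names here are illustrative stand-ins for the kernel's `set_p4d()`/paravirt machinery):

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint64_t val; } p4d_t;

/* A raw store to a page-table entry bypasses any paravirt hook; a Xen PV
 * guest needs the write funneled through an accessor it can intercept. */
static int hypervisor_saw_write;

static void xen_set_p4d(p4d_t *p4dp, p4d_t p4d)
{
    hypervisor_saw_write = 1;   /* stand-in for the validating hypercall */
    p4dp->val = p4d.val;
}

/* The proper accessor dispatches through a pointer the hypervisor sets. */
static void (*set_p4d)(p4d_t *p4dp, p4d_t p4d) = xen_set_p4d;
```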
• x86/cpu: Remove the CONFIG_X86_PPRO_FENCE=y quirk · 5927145e
      Christoph Hellwig authored
      
      
There were only a few Pentium Pro multiprocessor systems where this
erratum applied. They are more than 20 years old now, and we've slowly
dropped the places which put the workaround in and discouraged anyone
from enabling it.
      
      Get rid of it for good.
      
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Jon Mason <jdmason@kudzu.us>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Muli Ben-Yehuda <mulix@mulix.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: iommu@lists.linux-foundation.org
      Link: http://lkml.kernel.org/r/20180319103826.12853-2-hch@lst.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
• x86/boot/64: Verify alignment of the LOAD segment · c55b8550
      H.J. Lu authored
      
      
      Since the x86-64 kernel must be aligned to 2MB, refuse to boot the
      kernel if the alignment of the LOAD segment isn't a multiple of 2MB.
      
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/CAMe9rOrR7xSJgUfiCoZLuqWUwymRxXPoGBW38%2BpN%3D9g%2ByKNhZw@mail.gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
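The check reduces to verifying that the ELF `p_align` of the LOAD segment is a multiple of 2MB. A minimal sketch of such a predicate (helper name is illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define MIN_KERNEL_ALIGN 0x200000UL   /* 2MB */

/* Refuse to boot if the ELF LOAD segment's alignment is not a multiple
 * of the x86-64 kernel's required 2MB alignment. */
static int load_segment_alignment_ok(uint64_t p_align)
{
    return p_align % MIN_KERNEL_ALIGN == 0;
}
```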
• x86/build/64: Force the linker to use 2MB page size · e3d03598
      H.J. Lu authored
      
      
      Binutils 2.31 will enable -z separate-code by default for x86 to avoid
      mixing code pages with data to improve cache performance as well as
      security.  To reduce x86-64 executable and shared object sizes, the
maximum page size is reduced from 2MB to 4KB.  But the x86-64 kernel
must be aligned to 2MB.  Pass -z max-page-size=0x200000 to the linker
to force a 2MB page size regardless of the linker's default.
      
      Tested with Linux kernel 4.15.6 on x86-64.
      
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/CAMe9rOp4_%3D_8twdpTyAP2DhONOCeaTOsniJLoppzhoNptL8xzA@mail.gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
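The fix itself amounts to a single linker option. A minimal sketch of how a kbuild-style Makefile might pass it (variable name illustrative; the actual Makefile hunk may differ):

```make
# Keep LOAD segments 2MB-aligned even when the linker's default maximum
# page size is 4KB (binutils >= 2.31 with -z separate-code enabled).
LDFLAGS += -z max-page-size=0x200000
```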
  3. Mar 19, 2018
• selftests/x86/ptrace_syscall: Fix for yet more glibc interference · 4b0b37d4
      Andy Lutomirski authored
      
      
      glibc keeps getting cleverer, and my version now turns raise() into
      more than one syscall.  Since the test relies on ptrace seeing an
      exact set of syscalls, this breaks the test.  Replace raise(SIGSTOP)
      with syscall(SYS_tgkill, ...) to force glibc to get out of our way.
      
Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kselftest@vger.kernel.org
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/bc80338b453afa187bc5f895bd8e2c8d6e264da2.1521300271.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
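The trick is to issue the raw syscall so glibc cannot expand it into several. A minimal Linux-only sketch of the replacement (SIGUSR1 with a handler is used here instead of SIGSTOP so the example can run to completion; the helper name is illustrative):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <signal.h>
#include <sys/syscall.h>
#include <unistd.h>

/* raise() may expand to more than one syscall in newer glibc; issuing
 * tgkill directly guarantees the tracer sees exactly one syscall. */
static volatile sig_atomic_t got_signal;

static void handler(int sig)
{
    (void)sig;
    got_signal = 1;
}

static void raise_via_tgkill(int sig)
{
    pid_t tid = syscall(SYS_gettid);
    syscall(SYS_tgkill, getpid(), tid, sig);
}
```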
  4. Mar 17, 2018
  5. Mar 16, 2018
  6. Mar 14, 2018
• jump_label: Fix sparc64 warning · af1d830b
      Josh Poimboeuf authored
      The kbuild test robot reported the following warning on sparc64:
      
        kernel/jump_label.c: In function '__jump_label_update':
        kernel/jump_label.c:376:51: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
             WARN_ONCE(1, "can't patch jump_label at %pS", (void *)entry->code);
      
      On sparc64, the jump_label entry->code field is of type u32, but
      pointers are 64-bit.  Silence the warning by casting entry->code to an
      unsigned long before casting it to a pointer.  This is also what the
      sparc jump label code does.
      
Fixes: dc1dd184 ("jump_label: Warn on failed jump_label patching attempt")
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: "David S . Miller" <davem@davemloft.net>
      Link: https://lkml.kernel.org/r/c966fed42be6611254a62d46579ec7416548d572.1521041026.git.jpoimboe@redhat.com
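The warning-free idiom is the two-step widening cast. A minimal sketch (the helper name is illustrative; the kernel does this inline in the WARN_ONCE argument):

```c
#include <assert.h>
#include <stdint.h>

/* When the stored code field is 32-bit but pointers are 64-bit (as on
 * sparc64), widen through unsigned long first; a direct (void *) cast
 * of a u32 triggers -Wint-to-pointer-cast. */
static void *entry_code_to_ptr(uint32_t code)
{
    return (void *)(unsigned long)code;
}
```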
• x86/speculation, objtool: Annotate indirect calls/jumps for objtool on 32-bit kernels · a14bff13
      Andy Whitcroft authored
      In the following commit:
      
  9e0e3c51 ("x86/speculation, objtool: Annotate indirect calls/jumps for objtool")
      
      ... we added annotations for CALL_NOSPEC/JMP_NOSPEC on 64-bit x86 kernels,
      but we did not annotate the 32-bit path.
      
      Annotate it similarly.
      
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180314112427.22351-1-apw@canonical.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
• x86/vm86/32: Fix POPF emulation · b5069782
      Andy Lutomirski authored
      
      
      POPF would trap if VIP was set regardless of whether IF was set.  Fix it.
      
Suggested-by: Stas Sergeev <stsp@list.ru>
Reported-by: Bart Oldeman <bartoldeman@gmail.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
Fixes: 5ed92a8a ("x86/vm86: Use the normal pt_regs area for vm86")
      Link: http://lkml.kernel.org/r/ce95f40556e7b2178b6bc06ee9557827ff94bd28.1521003603.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
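The corrected condition can be stated in a few lines: a pending virtual interrupt should only be reported when VIP is set *and* the popped flags enable interrupts. A minimal sketch of that predicate (function name illustrative, not the kernel's):

```c
#include <assert.h>

#define X86_EFLAGS_IF  0x00000200u
#define X86_EFLAGS_VIP 0x00100000u

/* The bug: POPF trapped whenever VIP was set.  The fix: only report a
 * pending virtual interrupt when VIP is set AND the new flags have IF
 * set, since with IF clear no interrupt can be delivered anyway. */
static int popf_should_trap(unsigned int flags)
{
    return (flags & X86_EFLAGS_VIP) && (flags & X86_EFLAGS_IF);
}
```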
• selftests/x86/entry_from_vm86: Add test cases for POPF · 78393fdd
      Andy Lutomirski authored
      
      
      POPF is currently broken -- add tests to catch the error.  This
      results in:
      
         [RUN]	POPF with VIP set and IF clear from vm86 mode
         [INFO]	Exited vm86 mode due to STI
         [FAIL]	Incorrect return reason (started at eip = 0xd, ended at eip = 0xf)
      
      because POPF currently fails to check IF before reporting a pending
      interrupt.
      
      This patch also makes the FAIL message a bit more informative.
      
Reported-by: Bart Oldeman <bartoldeman@gmail.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stas Sergeev <stsp@list.ru>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/a16270b5cfe7832d6d00c479d0f871066cbdb52b.1521003603.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
• selftests/x86/entry_from_vm86: Exit with 1 if we fail · 327d53d0
      Andy Lutomirski authored
      
      
      Fix a logic error that caused the test to exit with 0 even if test
      cases failed.
      
Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stas Sergeev <stsp@list.ru>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: bartoldeman@gmail.com
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/b1cc37144038958a469c8f70a5f47a6a5638636a.1521003603.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. Mar 12, 2018
  8. Mar 09, 2018
• x86/kprobes: Fix kernel crash when probing .entry_trampoline code · c07a8f8b
      Francis Deslauriers authored
      
      
      Disable the kprobe probing of the entry trampoline:
      
      .entry_trampoline is a code area that is used to ensure page table
      isolation between userspace and kernelspace.
      
      At the beginning of the execution of the trampoline, we load the
      kernel's CR3 register. This has the effect of enabling the translation
of kernel virtual addresses to physical addresses. Before this
happens, most kernel addresses cannot be translated because the running
process's CR3 is still in use.
      
If a kprobe is placed on the trampoline code before that change of the
CR3 register happens, the kernel crashes because the int3 handling
pages are not accessible.
      
      To fix this, add the .entry_trampoline section to the kprobe blacklist
      to prohibit the probing of code before all the kernel pages are
      accessible.
      
Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: mathieu.desnoyers@efficios.com
      Cc: mhiramat@kernel.org
      Link: http://lkml.kernel.org/r/1520565492-4637-2-git-send-email-francis.deslauriers@efficios.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
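Conceptually, blacklisting means rejecting any probe whose address falls inside a protected [start, end) region, here the entry trampoline that runs before the kernel CR3 is loaded. A minimal sketch of such a range check (struct and helper names illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* A region of code that must never be instrumented, such as the entry
 * trampoline that executes before kernel page tables are active. */
struct blacklist_range {
    uintptr_t start;
    uintptr_t end;   /* exclusive */
};

/* Reject probe addresses inside the blacklisted region. */
static int addr_is_blacklisted(const struct blacklist_range *r, uintptr_t addr)
{
    return addr >= r->start && addr < r->end;
}
```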
  9. Mar 08, 2018
  10. Mar 07, 2018
  11. Mar 01, 2018
• x86/cpu_entry_area: Sync cpu_entry_area to initial_page_table · 945fd17a
      Thomas Gleixner authored
      The separation of the cpu_entry_area from the fixmap missed the fact that
      on 32bit non-PAE kernels the cpu_entry_area mapping might not be covered in
      initial_page_table by the previous synchronizations.
      
This results in suspend/resume failures because 32-bit utilizes the
initial page table for resume. The absence of the cpu_entry_area
mapping results in a
      triple fault, aka. insta reboot.
      
      With PAE enabled this works by chance because the PGD entry which covers
the fixmap and other parts incidentally provides the cpu_entry_area
      mapping as well.
      
      Synchronize the initial page table after setting up the cpu entry
      area. Instead of adding yet another copy of the same code, move it to a
      function and invoke it from the various places.
      
      It needs to be investigated if the existing calls in setup_arch() and
      setup_per_cpu_areas() can be replaced by the later invocation from
      setup_cpu_entry_areas(), but that's beyond the scope of this fix.
      
Fixes: 92a0f81d ("x86/cpu_entry_area: Move it out of the fixmap")
Reported-by: Woody Suwalski <terraluna977@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Woody Suwalski <terraluna977@gmail.com>
      Cc: William Grant <william.grant@canonical.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1802282137290.1392@nanos.tec.linutronix.de
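The shared helper essentially copies the kernel half of the live page directory into the initial one so late additions like the cpu_entry_area survive. A rough userspace analog of that sync step (all names and the 768 kernel-half boundary are illustrative of a 32-bit non-PAE layout, not the kernel's actual code):

```c
#include <assert.h>
#include <string.h>

#define PTRS_PER_PGD 1024   /* 32-bit non-PAE page directory entries */

/* Copy a range of entries from the live page directory into
 * initial_page_table, so code that runs on the initial table (e.g.
 * resume) also sees mappings added after early boot. */
static void sync_initial_page_table(unsigned long *initial,
                                    const unsigned long *live,
                                    int start, int count)
{
    memcpy(&initial[start], &live[start], count * sizeof(unsigned long));
}
```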
  12. Feb 28, 2018
  13. Feb 23, 2018
• KVM/VMX: Optimize vmx_vcpu_run() and svm_vcpu_run() by marking the RDMSR path as unlikely() · 946fbbc1
      Paolo Bonzini authored
      
      
      vmx_vcpu_run() and svm_vcpu_run() are large functions, and giving
      branch hints to the compiler can actually make a substantial cycle
      difference by keeping the fast path contiguous in memory.
      
      With this optimization, the retpoline-guest/retpoline-host case is
      about 50 cycles faster.
      
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: KarimAllah Ahmed <karahmed@amazon.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kvm@vger.kernel.org
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20180222154318.20361-3-pbonzini@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
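The mechanism is the kernel's `unlikely()` wrapper around `__builtin_expect`, which tells the compiler to lay the cold branch out of line so the hot path stays contiguous. A toy sketch of the pattern (function and variable names illustrative, not the actual vcpu_run code):

```c
#include <assert.h>

/* Same definition the kernel uses for its branch hint. */
#define unlikely(x) __builtin_expect(!!(x), 0)

static int slow_path_hits;

/* Annotating the rare MSR-sync case as unlikely() keeps the common
 * path's instructions together, which measurably helps in tight
 * retpoline-era entry/exit code. */
static int vcpu_run_step(int need_spec_ctrl_sync)
{
    if (unlikely(need_spec_ctrl_sync)) {
        slow_path_hits++;   /* cold path, placed out of line */
        return 1;
    }
    return 0;               /* hot path */
}
```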
• KVM/x86: Remove indirect MSR op calls from SPEC_CTRL · ecb586bd
      Paolo Bonzini authored
      
      
      Having a paravirt indirect call in the IBRS restore path is not a
      good idea, since we are trying to protect from speculative execution
      of bogus indirect branch targets.  It is also slower, so use
      native_wrmsrl() on the vmentry path too.
      
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: KarimAllah Ahmed <karahmed@amazon.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kvm@vger.kernel.org
      Cc: stable@vger.kernel.org
Fixes: d28b387f
      Link: http://lkml.kernel.org/r/20180222154318.20361-2-pbonzini@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
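The distinction being removed is a call through a paravirt ops table (an indirect branch, which retpoline-era code wants off the SPEC_CTRL restore path) versus a direct call to the native helper. A toy mock of the two call shapes (all names are illustrative stand-ins, not KVM's actual code):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t msr_value;

/* Stand-in for native_wrmsrl(): a direct call, no indirect branch. */
static void native_wrmsrl_mock(uint32_t msr, uint64_t val)
{
    (void)msr;
    msr_value = val;   /* models the wrmsr instruction's effect */
}

/* Stand-in for the paravirt path: the same operation reached through a
 * function pointer table, i.e. an indirect call the CPU may speculate
 * through to a bogus target. */
struct pv_cpu_ops_mock {
    void (*write_msr)(uint32_t msr, uint64_t val);
};

static struct pv_cpu_ops_mock pv_ops_mock = {
    .write_msr = native_wrmsrl_mock,
};
```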
  14. Feb 21, 2018