  1. Mar 14, 2018
    • jump_label: Fix sparc64 warning · af1d830b
      Josh Poimboeuf authored
      The kbuild test robot reported the following warning on sparc64:
      
        kernel/jump_label.c: In function '__jump_label_update':
        kernel/jump_label.c:376:51: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
             WARN_ONCE(1, "can't patch jump_label at %pS", (void *)entry->code);
      
      On sparc64, the jump_label entry->code field is of type u32, but
      pointers are 64-bit.  Silence the warning by casting entry->code to an
      unsigned long before casting it to a pointer.  This is also what the
      sparc jump label code does.
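
      A sketch of the fixed call site; the intermediate cast to unsigned long
      is the whole fix:

        /* entry->code is u32 on sparc64; widen it before making a pointer */
        WARN_ONCE(1, "can't patch jump_label at %pS",
                  (void *)(unsigned long)entry->code);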
      
      Fixes: dc1dd184 ("jump_label: Warn on failed jump_label patching attempt")
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: "David S . Miller" <davem@davemloft.net>
      Link: https://lkml.kernel.org/r/c966fed42be6611254a62d46579ec7416548d572.1521041026.git.jpoimboe@redhat.com
    • x86/speculation, objtool: Annotate indirect calls/jumps for objtool on 32-bit kernels · a14bff13
      Andy Whitcroft authored
      In the following commit:
      
        9e0e3c51 ("x86/speculation, objtool: Annotate indirect calls/jumps for objtool")
      
      ... we added annotations for CALL_NOSPEC/JMP_NOSPEC on 64-bit x86 kernels,
      but we did not annotate the 32-bit path.
      
      Annotate it similarly.
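
      A sketch of the shape of the 32-bit change, assuming it mirrors the
      64-bit macro (ANNOTATE_NOSPEC_ALTERNATIVE and ANNOTATE_RETPOLINE_SAFE
      are the markers the 64-bit path already uses; the inline retpoline
      body is elided here):

        /* sketch: tell objtool the alternative and the raw indirect
         * call are deliberate */
        # define CALL_NOSPEC                            \
                ANNOTATE_NOSPEC_ALTERNATIVE             \
                ALTERNATIVE(                            \
                ANNOTATE_RETPOLINE_SAFE                 \
                "call *%[thunk_target]\n",              \
                ... /* inline retpoline, unchanged */   \
                X86_FEATURE_RETPOLINE)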
      
      Signed-off-by: Andy Whitcroft <apw@canonical.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180314112427.22351-1-apw@canonical.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/vm86/32: Fix POPF emulation · b5069782
      Andy Lutomirski authored
      
      POPF would trap if VIP was set regardless of whether IF was set.  Fix it.
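
      A sketch of the corrected condition (variable and helper names here
      are illustrative, not the exact ones in vm86_32.c):

        /* sketch: a pending virtual interrupt (VIP) may only be reported
         * once the popped flags actually enable interrupts (IF) */
        if ((flags & X86_EFLAGS_IF) && (VEFLAGS & X86_EFLAGS_VIP))
                exit_vm86(regs, VM86_STI);    /* hypothetical exit helper */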
      
      Suggested-by: Stas Sergeev <stsp@list.ru>
      Reported-by: Bart Oldeman <bartoldeman@gmail.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Fixes: 5ed92a8a ("x86/vm86: Use the normal pt_regs area for vm86")
      Link: http://lkml.kernel.org/r/ce95f40556e7b2178b6bc06ee9557827ff94bd28.1521003603.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • selftests/x86/entry_from_vm86: Add test cases for POPF · 78393fdd
      Andy Lutomirski authored
      
      POPF is currently broken -- add tests to catch the error.  This
      results in:
      
         [RUN]	POPF with VIP set and IF clear from vm86 mode
         [INFO]	Exited vm86 mode due to STI
         [FAIL]	Incorrect return reason (started at eip = 0xd, ended at eip = 0xf)
      
      because POPF currently fails to check IF before reporting a pending
      interrupt.
      
      This patch also makes the FAIL message a bit more informative.
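
      A sketch of the kind of check behind that message (simplified; nerrs
      is the test's failure counter, and expected_type/expected_arg stand
      for the per-test expectations):

        /* sketch: pass only if vm86 exited for the expected reason */
        if (VM86_TYPE(ret) != expected_type || VM86_ARG(ret) != expected_arg) {
                printf("[FAIL]\tIncorrect return reason (started at eip = 0x%lx, ended at eip = 0x%lx)\n",
                       start_eip, v86.regs.eip);
                nerrs++;
        }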
      
      Reported-by: Bart Oldeman <bartoldeman@gmail.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stas Sergeev <stsp@list.ru>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/a16270b5cfe7832d6d00c479d0f871066cbdb52b.1521003603.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • selftests/x86/entry_from_vm86: Exit with 1 if we fail · 327d53d0
      Andy Lutomirski authored
      
      Fix a logic error that caused the test to exit with 0 even if test
      cases failed.
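
      A minimal sketch, assuming the test's existing nerrs failure counter:

        /* sketch: report failures through the exit status */
        return nerrs ? 1 : 0;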
      
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stas Sergeev <stsp@list.ru>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: bartoldeman@gmail.com
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/b1cc37144038958a469c8f70a5f47a6a5638636a.1521003603.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. Mar 09, 2018
    • x86/kprobes: Fix kernel crash when probing .entry_trampoline code · c07a8f8b
      Francis Deslauriers authored
      
      Disable the kprobe probing of the entry trampoline:
      
      .entry_trampoline is a code area that is used to ensure page table
      isolation between userspace and kernelspace.
      
      At the beginning of the trampoline's execution, the kernel's CR3 value
      is loaded into the CR3 register, which enables translation of kernel
      virtual addresses to physical addresses. Before this happens, most
      kernel addresses cannot be translated, because the running process's
      CR3 is still in use.
      
      If a kprobe is placed on the trampoline code before that CR3 switch
      happens, the kernel crashes because the pages needed for int3 handling
      are not accessible.
      
      To fix this, add the .entry_trampoline section to the kprobe blacklist
      to prohibit the probing of code before all the kernel pages are
      accessible.
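
      A sketch of the blacklist test, assuming __entry_trampoline_start and
      __entry_trampoline_end section bounds exported by the linker script:

        /* sketch: refuse probes in both the kprobes text and the
         * entry trampoline */
        bool arch_within_kprobe_blacklist(unsigned long addr)
        {
                return (addr >= (unsigned long)__kprobes_text_start &&
                        addr < (unsigned long)__kprobes_text_end) ||
                       (addr >= (unsigned long)__entry_trampoline_start &&
                        addr < (unsigned long)__entry_trampoline_end);
        }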
      
      Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: mathieu.desnoyers@efficios.com
      Cc: mhiramat@kernel.org
      Link: http://lkml.kernel.org/r/1520565492-4637-2-git-send-email-francis.deslauriers@efficios.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. Mar 01, 2018
    • x86/cpu_entry_area: Sync cpu_entry_area to initial_page_table · 945fd17a
      Thomas Gleixner authored
      The separation of the cpu_entry_area from the fixmap missed the fact that
      on 32bit non-PAE kernels the cpu_entry_area mapping might not be covered in
      initial_page_table by the previous synchronizations.
      
      This results in suspend/resume failures because 32-bit kernels use the
      initial page table for resume. The absence of the cpu_entry_area mapping
      then causes a triple fault, i.e. an instant reboot.
      
      With PAE enabled this works by chance, because the PGD entry which
      covers the fixmap and other parts incidentally provides the
      cpu_entry_area mapping as well.
      
      Synchronize the initial page table after setting up the cpu entry
      area. Instead of adding yet another copy of the same code, move it to a
      function and invoke it from the various places.
      
      It needs to be investigated if the existing calls in setup_arch() and
      setup_per_cpu_areas() can be replaced by the later invocation from
      setup_cpu_entry_areas(), but that's beyond the scope of this fix.
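
      A minimal sketch of such a helper for the non-PAE case (the real
      function must cover the PAE layout as well):

        /* sketch: copy the kernel half of swapper_pg_dir into
         * initial_page_table so resume sees the cpu_entry_area */
        void sync_initial_page_table(void)
        {
                clone_pgd_range(initial_page_table + KERNEL_PGD_BOUNDARY,
                                swapper_pg_dir    + KERNEL_PGD_BOUNDARY,
                                KERNEL_PGD_PTRS);
        }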
      
      Fixes: 92a0f81d ("x86/cpu_entry_area: Move it out of the fixmap")
      Reported-by: Woody Suwalski <terraluna977@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Woody Suwalski <terraluna977@gmail.com>
      Cc: William Grant <william.grant@canonical.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1802282137290.1392@nanos.tec.linutronix.de
  4. Feb 23, 2018
    • KVM/VMX: Optimize vmx_vcpu_run() and svm_vcpu_run() by marking the RDMSR path as unlikely() · 946fbbc1
      Paolo Bonzini authored
      
      vmx_vcpu_run() and svm_vcpu_run() are large functions, and giving
      branch hints to the compiler can actually make a substantial cycle
      difference by keeping the fast path contiguous in memory.
      
      With this optimization, the retpoline-guest/retpoline-host case is
      about 50 cycles faster.
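
      A sketch of the pattern on the vmexit side (msr_write_intercepted()
      is the existing check that guards the RDMSR):

        /* sketch: keep the common no-SPEC_CTRL case straight-line */
        if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
                vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);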
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: KarimAllah Ahmed <karahmed@amazon.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kvm@vger.kernel.org
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20180222154318.20361-3-pbonzini@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • KVM/x86: Remove indirect MSR op calls from SPEC_CTRL · ecb586bd
      Paolo Bonzini authored
      
      Having a paravirt indirect call in the IBRS restore path is not a
      good idea, since we are trying to protect from speculative execution
      of bogus indirect branch targets.  It is also slower, so use
      native_wrmsrl() on the vmentry path too.
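
      A sketch of the vmentry side:

        /* sketch: direct MSR write -- no paravirt indirect call to
         * speculate through */
        if (vmx->spec_ctrl)
                native_wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);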
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Cc: David Woodhouse <dwmw@amazon.co.uk>
      Cc: KarimAllah Ahmed <karahmed@amazon.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kvm@vger.kernel.org
      Cc: stable@vger.kernel.org
      Fixes: d28b387f
      Link: http://lkml.kernel.org/r/20180222154318.20361-2-pbonzini@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. Feb 21, 2018
    • objtool, retpolines: Integrate objtool with retpoline support more closely · d5028ba8
      Peter Zijlstra authored
      
      Disable retpoline validation in objtool if your compiler sucks, and otherwise
      select the validation stuff for CONFIG_RETPOLINE=y (most builds would already
      have it set due to ORC).
      
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/64: Simplify ENCODE_FRAME_POINTER · 0ca7d5ba
      Josh Poimboeuf authored
      
      On 64-bit, the stack pointer is always aligned on interrupt, so instead
      of setting the LSB of the pt_regs address, we can just add 1 to it.
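
      In C terms, the equivalence being exploited (a sketch):

        /* the stack pointer is aligned on interrupt, so the pt_regs
         * address has bit 0 clear and "| 1" equals "+ 1" -- and the
         * addition can be folded into a single lea */
        unsigned long encoded = (unsigned long)regs + 1;    /* was: | 1 */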
      
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrew Lutomirski <luto@kernel.org>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180221024214.lhl5jfgw33c4vz3m@treble
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • extable: Make init_kernel_text() global · 9fbcc57a
      Josh Poimboeuf authored
      
      Convert init_kernel_text() to a global function and use it in a few
      places instead of manually comparing _sinittext and _einittext.
      
      Note that kallsyms.h has a very similar function called
      is_kernel_inittext(), but its end check is inclusive.  I'm not sure
      whether that's intentional behavior, so I didn't touch it.
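
      A sketch of the helper in its global form, matching the manual
      comparisons it replaces (note the exclusive end check):

        int init_kernel_text(unsigned long addr)
        {
                if (addr >= (unsigned long)_sinittext &&
                    addr < (unsigned long)_einittext)
                        return 1;
                return 0;
        }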
      
      Suggested-by: Jason Baron <jbaron@akamai.com>
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/4335d02be8d45ca7d265d2f174251d0b7ee6c5fd.1519051220.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • jump_label: Warn on failed jump_label patching attempt · dc1dd184
      Josh Poimboeuf authored
      
      Currently when the jump label code encounters an address which isn't
      recognized by kernel_text_address(), it just silently fails.
      
      This can be dangerous because jump labels are used in a variety of
      places, and are generally expected to work.  Convert the silent failure
      to a warning.
      
      This won't warn about attempted writes to tracepoints in __init code
      after initmem has been freed, as those are already guarded by the
      entry->code check.
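
      A sketch of the converted check in __jump_label_update() (the cast is
      the widened form from the sparc64 fix above):

        /* sketch: warn instead of silently skipping an unpatchable entry */
        if (!kernel_text_address(entry->code)) {
                WARN_ONCE(1, "can't patch jump_label at %pS",
                          (void *)(unsigned long)entry->code);
                continue;
        }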
      
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/de3a271c93807adb7ed48f4e946b4f9156617680.1519051220.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • jump_label: Explicitly disable jump labels in __init code · 33352244
      Josh Poimboeuf authored
      
      After initmem has been freed, any jump labels in __init code are
      prevented from being written to by the kernel_text_address() check in
      __jump_label_update().  However, this check is quite broad.  If
      kernel_text_address() were to return false for any other reason, the
      jump label write would fail silently with no warning.
      
      For jump labels in module init code, entry->code is set to zero to
      indicate that the entry is disabled.  Do the same thing for core kernel
      init code.  This makes the behavior more consistent, and will also make
      it more straightforward to detect non-init jump label write failures in
      the next patch.
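
      A sketch of the core-kernel side, assuming the init_kernel_text()
      helper made global earlier in this series:

        /* sketch: zero out entries whose jump site sits in __init text */
        static void jump_label_invalidate_init(void)
        {
                struct jump_entry *iter;

                for (iter = __start___jump_table;
                     iter < __stop___jump_table; iter++) {
                        if (init_kernel_text(iter->code))
                                iter->code = 0;
                }
        }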
      
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/c52825...
    • x86/entry/64: Open-code switch_to_thread_stack() · f3d415ea
      Dominik Brodowski authored
      
      Open-code the two instances which called switch_to_thread_stack(). This
      allows us to remove the wrapper around DO_SWITCH_TO_THREAD_STACK.
      
      While at it, update the UNWIND hint to reflect where the IRET frame is,
      and update the commentary to reflect what we are actually doing here.
      
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-7-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/64: Move ASM_CLAC to interrupt_entry() · b2855d8d
      Dominik Brodowski authored
      
      Moving ASM_CLAC to interrupt_entry() means two instructions (the
      addq/pushq and the call to interrupt_entry) are not covered by it.
      However, it offers a noticeable size reduction (about 260 bytes):
      
         text	   data	    bss	    dec	    hex	filename
        16882	      0	      0	  16882	   41f2	entry_64.o-orig
        16623	      0	      0	  16623	   40ef	entry_64.o
      
      Suggested-by: Brian Gerst <brgerst@gmail.com>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-6-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/entry/64: Remove 'interrupt' macro · 3aa99fc3
      Dominik Brodowski authored
      
      It is now trivial to call interrupt_entry() and then the actual worker.
      Therefore, remove the interrupt macro and open code it all.
      
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: dan.j.williams@intel.com
      Link: http://lkml.kernel.org/r/20180220210113.6725-5-linux@dominikbrodowski.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>