  1. Nov 23, 2014
  2. Nov 08, 2014
  3. Oct 19, 2014
    • sparc64: Do not define thread fpregs save area as zero-length array. · e2653143
      David S. Miller authored
      
      
      This breaks the stack end corruption detection facility.
      
      What that facility does is write a magic value to "end_of_stack()"
      and check to see if it gets overwritten.
      
      "end_of_stack()" is "task_thread_info(p) + 1", which for sparc64 is
      the beginning of the FPU register save area.
      
      So once the user uses the FPU, the magic value is overwritten and the
      debug checks trigger.
      
      Fix this by making the size explicit.
      
      Due to the sizes we use for the fpsaved[], gsr[], and xfsr[] arrays we
      are limited to 7 levels of FPU state saves.  Each FPU register set is
      256 bytes, so allocate 256 * 7 bytes for the fpregs area.
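      As a rough illustration of the shape of the change (field names
      simplified; the real definition lives in sparc64's struct thread_info),
      the zero-length array becomes an explicitly sized one:

        /* Before: zero-length array, so end_of_stack() == task_thread_info(p) + 1
         * points straight at the FPU save area and the magic canary is
         * clobbered on first FPU use. */
        struct thread_info_before {
                /* ... other fields ... */
                unsigned long fpregs[0] __attribute__((aligned(64)));
        };

        /* After: 7 save levels of 256 bytes each are accounted for in
         * sizeof(struct thread_info), so the stack-end canary lands past
         * the save area instead of inside it. */
        struct thread_info_after {
                /* ... other fields ... */
                unsigned long fpregs[(7 * 256) / sizeof(unsigned long)]
                        __attribute__((aligned(64)));
        };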
      
      Reported-by: Meelis Roos <mroos@linux.ee>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sparc64: Fix corrupted thread fault code. · 84bd6d8b
      David S. Miller authored
      Every path that ends up at do_sparc64_fault() must install a valid
      FAULT_CODE_* bitmask in the per-thread fault code byte.
      
      Two paths leading to the label winfix_trampoline (which expects the
      FAULT_CODE_* mask in register %g4) were not doing so:
      
      1) For pre-hypervisor TLB protection violation traps, if we took
         the 'winfix_trampoline' path we wouldn't have %g4 initialized
         with the FAULT_CODE_* value yet, resulting in the TLB_TAG_ACCESS
         register address value being used instead.
      
      2) In the TSB miss path, when we notice that we are going to use a
         hugepage mapping but haven't allocated the hugepage TSB yet, we
         still have to take the window fixup case into consideration, and
         in that particular path we leave %g4 not set up properly.
      
      Errors of this sort were largely invisible previously, but after commit
      4ccb9272 ("sparc64: sun4v TLB error power off events") we now have a
      fault_code mask bit (FAULT_CODE_BAD_RA) that triggers due to this bug.
      
      FAULT_CODE_BAD_RA triggers because this bit is set in TLB_TAG_ACCESS
      (see #1 above) and thus we get seemingly random bus errors triggered
      for user processes.
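      A small standalone sketch of that failure mode, using placeholder bit
      values rather than the real sparc64 FAULT_CODE_* defines: when the
      fault-code slot still holds a TLB_TAG_ACCESS address instead of a small
      bitmask, stray address bits read as fault-code flags.

        #include <stdio.h>

        #define FAULT_CODE_WRITE   0x01u   /* placeholder values, not the */
        #define FAULT_CODE_DTLB    0x02u   /* actual sparc64 definitions  */
        #define FAULT_CODE_BAD_RA  0x20u

        int main(void)
        {
                /* A virtual address left over where a fault code belongs... */
                unsigned long tag_access = 0xfff0001234567820ul;

                /* ...gets interpreted as a bitmask by the fault handler, so
                 * overlapping address bits show up as bogus flags. */
                unsigned int fault_code = (unsigned int)tag_access;

                if (fault_code & FAULT_CODE_BAD_RA)
                        printf("spurious bad-RA fault -> random bus error\n");
                return 0;
        }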
      
      Fixes: 4ccb9272 ("sparc64: sun4v TLB error power off events")
      Reported-by: Meelis Roos <mroos@linux.ee>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • x86,kvm,vmx: Preserve CR4 across VM entry · d974baa3
      Andy Lutomirski authored
      
      
      CR4 isn't constant; at least the TSD and PCE bits can vary.
      
      TBH, treating CR0 and CR3 as constant scares me a bit, too, but it looks
      like it's correct.
      
      This adds a branch and a read from cr4 to each vm entry.  Because it is
      extremely likely that consecutive entries into the same vcpu will have
      the same host cr4 value, this fixes up the vmcs instead of restoring cr4
      after the fact.  A subsequent patch will add a kernel-wide cr4 shadow,
      reducing the overhead in the common case to just two memory reads and a
      branch.
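      A hedged sketch of that fixup, not the exact kvm/vmx patch; it assumes
      the usual kernel helpers read_cr4() and vmcs_writel() plus the VMCS
      HOST_CR4 field, and the cache-field name here is illustrative:

        struct vmx_host_state {
                unsigned long vmcs_host_cr4;   /* last CR4 written to the VMCS */
        };

        static void sync_host_cr4(struct vmx_host_state *hs)
        {
                unsigned long cr4 = read_cr4();     /* host CR4 at VM-entry time */

                /* Consecutive entries on the same vcpu almost always see the
                 * same host CR4, so the write is skipped in the common case. */
                if (unlikely(cr4 != hs->vmcs_host_cr4)) {
                        vmcs_writel(HOST_CR4, cr4); /* fix up the VMCS field */
                        hs->vmcs_host_cr4 = cr4;
                }
        }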
      
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: stable@vger.kernel.org
      Cc: Petr Matousek <pmatouse@redhat.com>
      Cc: Gleb Natapov <gleb@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. Oct 17, 2014
  5. Oct 16, 2014
  6. Oct 15, 2014
    • arm: kvm: STRICT_MM_TYPECHECKS fix for user_mem_abort · 3d08c629
      Steve Capper authored
      Commit b8865767 ("ARM: KVM: user_mem_abort: support stage 2 MMIO page
      mapping") introduced some code in user_mem_abort that failed to compile
      if STRICT_MM_TYPECHECKS was enabled.
      
      This patch fixes up the failing comparison.
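      For context, a minimal standalone sketch of why STRICT_MM_TYPECHECKS
      breaks a plain '==': with the option enabled, types like pgprot_t and
      pte_t become one-member structs (simplified below) and have to be
      compared through their _val() accessors, which is the pattern such
      fixes follow.

        typedef struct { unsigned long pgprot; } pgprot_t; /* struct form under  */
        #define pgprot_val(x)   ((x).pgprot)               /* STRICT_MM_TYPECHECKS */

        static int same_prot(pgprot_t a, pgprot_t b)
        {
                /* return a == b;   <-- does not compile on struct types */
                return pgprot_val(a) == pgprot_val(b);   /* works either way */
        }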
      
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Reviewed-by: Kim Phillips <kim.phillips@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • ARM: sunxi_defconfig: enable CONFIG_REGULATOR · 9a2ad529
      Olof Johansson authored
      Commit 97a13e52 ('net: phy: mdio-sun4i: don't select REGULATOR') removed
      the select of REGULATOR, which means that it now has to be explicitly
      enabled in the defconfig or things won't work very well.
      
      In particular, this fixes a problem with SD/MMC not probing on my A31-based
      board.
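      The defconfig side of that is just the explicit line below (a sketch of
      the relevant fragment of arch/arm/configs/sunxi_defconfig; the actual
      patch may touch related regulator options as well):

        CONFIG_REGULATOR=y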
      
      Cc: Beniamino Galvani <b.galvani@gmail.com>
      Signed-off-by: Olof Johansson <olof@lixom.net>
    • sparc64: Fix FPU register corruption with AES crypto offload. · f4da3628
      David S. Miller authored
      
      
      The AES loops in arch/sparc/crypto/aes_glue.c use a scheme where the
      key material is preloaded into the FPU registers, and then we loop
      over and over doing the crypt operation, reusing those pre-cooked key
      registers.
      
      There are intervening blkcipher*() calls between the crypt operation
      calls.  And those might perform memcpy() and thus also try to use the
      FPU.
      
      The sparc64 kernel FPU usage mechanism is designed to allow such
      recursive uses, but with a catch.
      
      There has to be a trap between the two FPU-using threads of control.
      
      The mechanism works by, when the FPU is already in use by the kernel,
      allocating a slot for FPU saving at trap time.  Then if, within the
      trap handler, we try to use the FPU registers, the pre-trap FPU
      register state is saved into the slot.  Then at trap return time we
      notice this and restore the pre-trap FPU state.
      
      Over the long term there are various more involved ways we can make
      this work, but for a quick fix let's take advantage of the fact that
      the situation where this happens is very limited.
      
      All sparc64 chips that support the crypto instructions also use the
      Niagara4 memcpy routine, and that routine only uses the FPU for large
      copies where we can't get the source aligned properly to a multiple
      of 8 bytes.
      
      We look to see if the FPU is already in use in this context, and if so
      we use the non-large copy path which only uses integer registers.
      
      Furthermore, we also limit this special logic to when we are doing a
      kernel copy, rather than a user copy.
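      Roughly, in C rather than the Niagara4 memcpy assembly where the real
      change lives, and with illustrative helper names, the quick fix looks
      like this:

        static void *ng4_memcpy_sketch(void *dst, const void *src, size_t len)
        {
                /* Large copies with a badly aligned source normally take the
                 * FPU load/store path. */
                if (len >= LARGE_COPY_THRESHOLD && ((unsigned long)src & 7)) {
                        /* Quick fix: for kernel-to-kernel copies, if the kernel
                         * FPU is already in use (e.g. AES key material parked in
                         * the FP registers), fall back to the integer path so the
                         * pre-cooked registers are not silently clobbered. */
                        if (is_kernel_copy(dst, src) && kernel_fpu_in_use())
                                return integer_copy(dst, src, len);
                        return fpu_copy(dst, src, len);
                }
                return integer_copy(dst, src, len);
        }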
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • x86: bpf_jit: fix two bugs in eBPF JIT compiler · e0ee9c12
      Alexei Starovoitov authored
      1.
      The JIT compiler uses a multi-pass approach to converge on the final
      image size, since x86 instructions are variable length.  It starts with
      large gaps between instructions (so some jumps may use imm32 instead of
      imm8) and iterates until the total program size is the same as in the
      previous pass.  This algorithm works only if the program size is
      strictly decreasing.  Programs that use the LD_ABS insn need additional
      code in the prologue, but it was not emitted during the 1st pass, so
      there was a chance that the 2nd pass would adjust the imm32->imm8 jump
      offsets by the same number of bytes as the increase in the prologue,
      which could cause the algorithm to erroneously decide that the size had
      converged.  Fix it by always emitting the largest prologue in the first
      pass, which is detected by the oldproglen==0 check (see the loop sketch
      after point 2 below).  Also change the error check condition
      'proglen != oldproglen' to fail gracefully.
      
      2.
      While staring at the code I realized that a 64-byte buffer may not be
      enough when the 1st insn is large, so increase it to 128 to avoid a
      buffer overflow (theoretical maximum size of prologue+div is 109) and
      add a runtime check.
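      A hedged sketch of the convergence loop from point 1 (simplified from
      arch/x86/net/bpf_jit_comp.c; names and error paths are approximate):

        oldproglen = 0;            /* 0 => first pass: emit the largest prologue */
        for (pass = 0; pass < MAX_PASSES; pass++) {
                proglen = do_jit(prog, addrs, image, oldproglen);
                if (image) {
                        if (proglen != oldproglen) {
                                /* sizes must match once we emit into the final
                                 * image; bail out instead of overrunning it */
                                goto out_error;
                        }
                        break;
                }
                if (proglen == oldproglen) {
                        /* converged: allocate the image, then one final pass
                         * writes the instructions into it */
                        image = alloc_jit_image(proglen);
                        if (!image)
                                goto out_error;
                }
                oldproglen = proglen;
        }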
      
      Fixes: 62258278 ("net: filter: x86: internal BPF JIT")
      Reported-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Tested-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. Oct 14, 2014
  8. Oct 13, 2014