  1. May 10, 2017
    • uapi: export all headers under uapi directories · fcc8487d
      Nicolas Dichtel authored
      
      
      Regularly, when a new header is created in include/uapi/, the developer
      forgets to add it to the corresponding Kbuild file. This error is
      usually detected only after the release is out.

      In fact, all headers under uapi directories should be exported, so
      maintaining an exhaustive list is pointless.
      
      After this patch, the following files, which were not exported, are now
      exported (with make headers_install_all):
      asm-arc/kvm_para.h
      asm-arc/ucontext.h
      asm-blackfin/shmparam.h
      asm-blackfin/ucontext.h
      asm-c6x/shmparam.h
      asm-c6x/ucontext.h
      asm-cris/kvm_para.h
      asm-h8300/shmparam.h
      asm-h8300/ucontext.h
      asm-hexagon/shmparam.h
      asm-m32r/kvm_para.h
      asm-m68k/kvm_para.h
      asm-m68k/shmparam.h
      asm-metag/kvm_para.h
      asm-metag/shmparam.h
      asm-metag/ucontext.h
      asm-mips/hwcap.h
      asm-mips/reg.h
      asm-mips/ucontext.h
      asm-nios2/kvm_para.h
      asm-nios2/ucontext.h
      asm-openrisc/shmparam.h
      asm-parisc/kvm_para.h
      asm-powerpc/perf_regs.h
      asm-sh/kvm_para.h
      asm-sh/ucontext.h
      asm-tile/shmparam.h
      asm-unicore32/shmparam.h
      asm-unicore32/ucontext.h
      asm-x86/hwcap2.h
      asm-xtensa/kvm_para.h
      drm/armada_drm.h
      drm/etnaviv_drm.h
      drm/vgem_drm.h
      linux/aspeed-lpc-ctrl.h
      linux/auto_dev-ioctl.h
      linux/bcache.h
      linux/btrfs_tree.h
      linux/can/vxcan.h
      linux/cifs/cifs_mount.h
      linux/coresight-stm.h
      linux/cryptouser.h
      linux/fsmap.h
      linux/genwqe/genwqe_card.h
      linux/hash_info.h
      linux/kcm.h
      linux/kcov.h
      linux/kfd_ioctl.h
      linux/lightnvm.h
      linux/module.h
      linux/nbd-netlink.h
      linux/nilfs2_api.h
      linux/nilfs2_ondisk.h
      linux/nsfs.h
      linux/pr.h
      linux/qrtr.h
      linux/rpmsg.h
      linux/sched/types.h
      linux/sed-opal.h
      linux/smc.h
      linux/smc_diag.h
      linux/stm.h
      linux/switchtec_ioctl.h
      linux/vfio_ccw.h
      linux/wil6210_uapi.h
      rdma/bnxt_re-abi.h
      
      Note that I have removed from this list the files which are generated in
      every exported directory (like .install or .install.cmd).
      
      Thanks to Julien Floret <julien.floret@6wind.com> for the tip to get all
      subdirs with a pure makefile command.
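
      For reference, a sketch of the kind of pure-make subdirectory listing
      this relies on (GNU make; the variable names here are illustrative,
      not necessarily the ones used in the patch):

        # $(wildcard) returns directories with a trailing '/'; keep only
        # those, then strip the prefix and the trailing slash.
        subdirs := $(patsubst $(srcdir)/%/,%,$(filter %/, $(wildcard $(srcdir)/*/)))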
      
      For the record, note that exported files for asm directories are a mix of
      files listed by:
       - include/uapi/asm-generic/Kbuild.asm;
       - arch/<arch>/include/uapi/asm/Kbuild;
       - arch/<arch>/include/asm/Kbuild.
      
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
      Acked-by: Mark Salter <msalter@redhat.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
    • arm64: uaccess: suppress spurious clang warning · d135b8b5
      Mark Rutland authored
      
      
      Clang tries to warn when there's a mismatch between an operand's size,
      and the size of the register it is held in, as this may indicate a bug.
      Specifically, clang warns when the operand's type is less than 64 bits
      wide, and the register is used unqualified (i.e. %N rather than %xN or
      %wN).
      
      Unfortunately clang can generate these warnings for unreachable code.
      For example, for code like:
      
      do {                                                    \
              typeof(*(ptr)) __v = (v);                       \
              switch (sizeof(*(ptr))) {                       \
              case 1:                                         \
                      /* assume __v is 1 byte wide */         \
                      asm ("{op}b %w0" : : "r" (__v));        \
                      break;                                  \
              case 8:                                         \
                      /* assume __v is 8 bytes wide */        \
                      asm ("{op} %0" : : "r" (__v));          \
                      break;                                  \
              }                                               \
      } while (0)
      
      ... if op() were passed a char value and a pointer to char, clang may
      produce a warning for the unreachable case where sizeof(*(ptr)) is 8.
      
      For the same reasons, clang produces warnings when __put_user_err() is
      used for types that are less than 64 bits wide.
      
      We could avoid this with a cast to a fixed-width type in each of the
      cases. However, GCC will then warn that pointer types are being cast to
      mismatched integer sizes (in unreachable paths).
      
      Another option would be to use the same union trickery as we do for
      __smp_store_release() and __smp_load_acquire(), but this is fairly
      invasive.
      
      Instead, this patch suppresses the clang warning by using an x modifier
      in the assembly for the 8 byte case of __put_user_err(). No additional
      work is necessary as the value has been cast to typeof(*(ptr)), so the
      compiler will have performed any necessary extension for the reachable
      case.
      
      For consistency, __get_user_err() is also updated to use the x modifier
      for its 8 byte case.
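
      As a standalone sketch (outside the kernel's __put_user_err()
      machinery; the function name here is invented for illustration), the
      x modifier spells out the 64-bit register explicitly:

        #include <stdint.h>

        static inline void store64(uint64_t *p, uint64_t v)
        {
                /* "%x0" names the 64-bit x-register alias explicitly;
                 * an unqualified "%0" is what provokes clang's
                 * operand-size warning in unreachable switch arms. */
                asm volatile("str %x0, [%1]"
                             : : "r" (v), "r" (p) : "memory");
        }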
      
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: atomic_lse: match asm register sizes · 8997c934
      Mark Rutland authored
      
      
      The LSE atomic code uses asm register variables to ensure that
      parameters are allocated in specific registers. In the majority of cases
      we specifically ask for an x register when using 64-bit values, but in a
      couple of cases we use a w register for a 64-bit value.
      
      For asm register variables, the compiler only cares about the register
      index, with wN and xN having the same meaning. The compiler determines
      the register size to use based on the type of the variable. Thus, this
      inconsistency is merely confusing, and not harmful to code generation.
      
      For consistency, this patch updates those cases to use the x register
      alias. There should be no functional change as a result of this patch.
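
      For illustration, a self-contained sketch (not the kernel code) of an
      asm register variable, where only the register index matters to the
      compiler:

        #include <stdint.h>

        static inline uint64_t inc_in_x0(uint64_t in)
        {
                /* asm("x0") and asm("w0") would pin 'v' to the same
                 * register; the x spelling matches both the 64-bit type
                 * and the 64-bit use in the asm body. */
                register uint64_t v asm("x0") = in;

                asm volatile("add %x0, %x0, #1" : "+r" (v));
                return v;
        }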
      
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: armv8_deprecated: ensure extension of addr · 55de49f9
      Mark Rutland authored
      Our compat swp emulation holds the compat user address in an unsigned
      int, which it passes to __user_swpX_asm(). When a 32-bit value is passed
      in a register, the upper 32 bits of the register are unknown, and we
      must extend the value to 64 bits before we can use it as a base address.
      
      This patch casts the address to unsigned long to ensure it has been
      suitably extended, avoiding the potential issue, and silencing a related
      warning from clang.
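
      A standalone sketch of the pattern (names invented for illustration):
      without the cast, bits [63:32] of the base register are unspecified.

        #include <stdint.h>

        static uint32_t load32_compat(uint32_t compat_addr)
        {
                uint32_t v;

                /* The cast widens the operand, so the compiler performs
                 * the zero-extension before the full x register is used
                 * as a base address. */
                asm volatile("ldr %w0, [%1]"
                             : "=r" (v)
                             : "r" ((unsigned long)compat_addr));
                return v;
        }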
      
      Fixes: bd35a4ad ("arm64: Port SWP/SWPB emulation support from arm")
      Cc: <stable@vger.kernel.org> # 3.19.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: uaccess: ensure extension of access_ok() addr · a06040d7
      Mark Rutland authored
      Our access_ok() simply hands its arguments over to __range_ok(), which
      implicitly assumes that the addr parameter is 64 bits wide. This isn't
      necessarily true for compat code, which might pass down a 32-bit address
      parameter.
      
      In these cases, we don't have a guarantee that the address has been zero
      extended to 64 bits, and the upper bits of the register may contain
      unknown values, potentially resulting in a spurious failure.
      
      Avoid this by explicitly casting the addr parameter to an unsigned long
      (as is done on other architectures), ensuring that the parameter is
      widened appropriately.
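
      A hedged sketch of the check (the kernel's real __range_ok() differs
      in detail; the names here are invented):

        static inline int range_ok(const void *addr, unsigned long size,
                                   unsigned long limit)
        {
                /* Explicit widening: a 32-bit compat address is zero-
                 * extended before any 64-bit comparison happens. */
                unsigned long a = (unsigned long)addr;

                return size <= limit && a <= limit - size;
        }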
      
      Fixes: 0aea86a2 ("arm64: User access library functions")
      Cc: <stable@vger.kernel.org> # 3.7.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: ensure extension of smp_store_release value · 994870be
      Mark Rutland authored
      When an inline assembly operand's type is narrower than the register it
      is allocated to, the least significant bits of the register (up to the
      operand type's width) are valid, and any other bits are permitted to
      contain any arbitrary value. This aligns with the AAPCS64 parameter
      passing rules.
      
      Our __smp_store_release() implementation does not account for this, and
      implicitly assumes that operands have been zero-extended to the width of
      the type being stored to. Thus, we may store unknown values to memory
      when the value type is narrower than the pointer type (e.g. when storing
      a char to a long).
      
      This patch fixes the issue by casting the value operand to the same
      width as the pointer operand in all cases, which ensures that the value
      is zero-extended as we expect. We use the same union trickery as
      __smp_load_acquire and {READ,WRITE}_ONCE() to avoid GCC complaining that
      pointers are potentially cast to narrower width integers in unreachable
      paths.
      
      A whitespace issue at the top of __smp_store_release() is also
      corrected.
      
      No changes are necessary for __smp_load_acquire(). Load instructions
      implicitly clear any upper bits of the register, and the compiler will
      only consider the least significant bits of the register as valid
      regardless.
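
      A heavily abridged sketch of the 8-byte case (the real macro also
      handles 1, 2 and 4 byte stores and asserts the type is atomic; the
      macro name here is invented):

        #define store_release_64(p, v)                                  \
        do {                                                            \
                union { typeof(*(p)) __val; char __c[1]; } __u =        \
                        { .__val = (typeof(*(p)))(v) };                 \
                asm volatile("stlr %1, %0"                              \
                             : "=Q" (*(p))                              \
                             : "r" (*(unsigned long *)__u.__c)          \
                             : "memory");                               \
        } while (0)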
      
      Fixes: 47933ad4 ("arch: Introduce smp_load_acquire(), smp_store_release()")
      Fixes: 878a84d5 ("arm64: add missing data types in smp_load_acquire/smp_store_release")
      Cc: <stable@vger.kernel.org> # 3.14.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthias Kaehlcke <mka@chromium.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: xchg: hazard against entire exchange variable · fee960be
      Mark Rutland authored
      The inline assembly in __XCHG_CASE() uses a +Q constraint to hazard
      against other accesses to the memory location being exchanged. However,
      the pointer passed to the constraint is a u8 pointer, and thus the
      hazard only applies to the first byte of the location.
      
      GCC can take advantage of this, assuming that other portions of the
      location are unchanged, as demonstrated with the following test case:
      
      union u {
      	unsigned long l;
      	unsigned int i[2];
      };
      
      unsigned long update_char_hazard(union u *u)
      {
      	unsigned int a, b;
      
      	a = u->i[1];
      	asm ("str %1, %0" : "+Q" (*(char *)&u->l) : "r" (0UL));
      	b = u->i[1];
      
      	return a ^ b;
      }
      
      unsigned long update_long_hazard(union u *u)
      {
      	unsigned int a, b;
      
      	a = u->i[1];
      	asm ("str %1, %0" : "+Q" (*(long *)&u->l) : "r" (0UL));
      	b = u->i[1];
      
      	return a ^ b;
      }
      
      The Linaro 15.08 GCC 5.1.1 toolchain compiles the above as follows when
      using -O2 or above:
      
      0000000000000000 <update_char_hazard>:
         0:	d2800001 	mov	x1, #0x0                   	// #0
         4:	f9000001 	str	x1, [x0]
         8:	d2800000 	mov	x0, #0x0                   	// #0
         c:	d65f03c0 	ret
      
      0000000000000010 <update_long_hazard>:
        10:	b9400401 	ldr	w1, [x0,#4]
        14:	d2800002 	mov	x2, #0x0                   	// #0
        18:	f9000002 	str	x2, [x0]
        1c:	b9400400 	ldr	w0, [x0,#4]
        20:	4a000020 	eor	w0, w1, w0
        24:	d65f03c0 	ret
      
      This patch fixes the issue by passing an unsigned long pointer into the
      +Q constraint, as we do for our cmpxchg code. This may hazard against
      more than is necessary, but this is better than missing a necessary
      hazard.
      
      Fixes: 305d454a ("arm64: atomics: implement native {relaxed, acquire, release} atomics")
      Cc: <stable@vger.kernel.org> # 4.4.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: entry: improve data abort handling of tagged pointers · 276e9327
      Kristina Martsenko authored
      When handling a data abort from EL0, we currently zero the top byte of
      the faulting address, as we assume the address is a TTBR0 address, which
      may contain a non-zero address tag. However, the address may be a TTBR1
      address, in which case we should not zero the top byte. This patch fixes
      that. The effect is that the full TTBR1 address is passed to the task's
      signal handler (or printed out in the kernel log).
      
      When handling a data abort from EL1, we leave the faulting address
      intact, as we assume it's either a TTBR1 address or a TTBR0 address with
      tag 0x00. This is true as far as I'm aware: we don't seem to access a
      tagged TTBR0 address anywhere in the kernel. Regardless, it's easy to
      forget about address tags, and code added in the future may not always
      remember to remove tags from addresses before accessing them. So add tag
      handling to the EL1 data abort handler as well. This also makes it
      consistent with the EL0 data abort handler.
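
      In C terms, the tag handling behaves roughly like the sketch below
      (the actual patch adds an assembly macro; bit 55 distinguishes TTBR0
      from TTBR1 addresses):

        #include <stdint.h>

        static inline uint64_t clear_address_tag(uint64_t addr)
        {
                /* TTBR1 addresses (bit 55 set) are left intact; TTBR0
                 * addresses have the tag bits [63:56] cleared. */
                if (!(addr & (1UL << 55)))
                        addr &= ~(0xffUL << 56);
                return addr;
        }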
      
      Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
      Cc: <stable@vger.kernel.org> # 3.12.x-
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: hw_breakpoint: fix watchpoint matching for tagged pointers · 7dcd9dd8
      Kristina Martsenko authored
      When we take a watchpoint exception, the address that triggered the
      watchpoint is found in FAR_EL1. We compare it to the address of each
      configured watchpoint to see which one was hit.
      
      The configured watchpoint addresses are untagged, while the address in
      FAR_EL1 will have an address tag if the data access was done using a
      tagged address. The tag needs to be removed to compare the address to
      the watchpoints.
      
      Currently we don't remove it, and as a result can report the wrong
      watchpoint as being hit (specifically, always either the highest TTBR0
      watchpoint or lowest TTBR1 watchpoint). This patch removes the tag.
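
      A sketch of the tag removal (assuming a sign-extend-from-bit-55
      approach: TTBR0 addresses come back with a zero top byte and TTBR1
      addresses with 0xff, so both compare correctly against the untagged
      watchpoint values):

        #include <stdint.h>

        static inline uint64_t untag_addr(uint64_t addr)
        {
                /* Shifting up and arithmetically back down replicates
                 * bit 55 into bits [63:56], removing any tag. */
                return (uint64_t)((int64_t)(addr << 8) >> 8);
        }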
      
      Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
      Cc: <stable@vger.kernel.org> # 3.12.x-
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: traps: fix userspace cache maintenance emulation on a tagged pointer · 81cddd65
      Kristina Martsenko authored
      When we emulate userspace cache maintenance in the kernel, we can
      currently send the task a SIGSEGV even though the maintenance was done
      on a valid address. This happens if the address has a non-zero address
      tag, and happens to not be mapped in.
      
      When we get the address from a user register, we don't currently remove
      the address tag before performing cache maintenance on it. If the
      maintenance faults, we end up in either __do_page_fault, where find_vma
      can't find the VMA if the address has a tag, or in do_translation_fault,
      where the tagged address will appear to be above TASK_SIZE. In both
      cases, the address is not mapped in, and the task is sent a SIGSEGV.
      
      This patch removes the tag from the address before using it. With this
      patch, the fault is handled correctly, the address gets mapped in, and
      the cache maintenance succeeds.
      
      As a second bug, if cache maintenance (correctly) fails on an invalid
      tagged address, the address gets passed into arm64_notify_segfault,
      where find_vma fails to find the VMA due to the tag, and the wrong
      si_code may be sent as part of the siginfo_t of the segfault. With this
      patch, the correct si_code is sent.
      
      Fixes: 7dd01aef ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
      Cc: <stable@vger.kernel.org> # 4.8.x-
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. May 04, 2017
    • KVM: arm/arm64: Move shared files to virt/kvm/arm · 35d2d5d4
      Christoffer Dall authored
      
      
      For some time now we have had a lot of shared functionality
      between the arm and arm64 KVM support in arch/arm, which not only
      required a horrible inter-arch reference from the Makefile in
      arch/arm64/kvm, but also created confusion for newcomers to the code
      base, as was recently seen on the mailing list.
      
      Further, it causes confusion for things like cscope, which needs special
      attention to index specific shared files for arm64 from the arm tree.
      
      Move the shared files into virt/kvm/arm and move the trace points along
      with them.  When moving the tracepoints we have to modify the way the
      vgic creates definitions of the trace points, so we take the chance to
      give the VGIC tracepoints their own dedicated vgic trace.h file.
      
      Signed-off-by: Christoffer Dall <cdall@linaro.org>
  3. May 03, 2017
    • bpf, arm64: fix jit branch offset related to ldimm64 · ddc665a4
      Daniel Borkmann authored
      When the instruction right before the branch destination is
      a 64 bit load immediate, we currently calculate the wrong
      jump offset in the ctx->offset[] array, as we account for only
      one instruction slot for the 64 bit load immediate although
      it uses two BPF instructions. Fix it up by setting the offset
      into the right slot after we have incremented the index.
      
      Before (ldimm64 test 1):
      
        [...]
        00000020:  52800007  mov w7, #0x0 // #0
        00000024:  d2800060  mov x0, #0x3 // #3
        00000028:  d2800041  mov x1, #0x2 // #2
        0000002c:  eb01001f  cmp x0, x1
        00000030:  54ffff82  b.cs 0x00000020
        00000034:  d29fffe7  mov x7, #0xffff // #65535
        00000038:  f2bfffe7  movk x7, #0xffff, lsl #16
        0000003c:  f2dfffe7  movk x7, #0xffff, lsl #32
        00000040:  f2ffffe7  movk x7, #0xffff, lsl #48
        00000044:  d29dddc7  mov x7, #0xeeee // #61166
        00000048:  f2bdddc7  movk x7, #0xeeee, lsl #16
        0000004c:  f2ddddc7  movk x7, #0xeeee, lsl #32
        00000050:  f2fdddc7  movk x7, #0xeeee, lsl #48
        [...]
      
      After (ldimm64 test 1):
      
        [...]
        00000020:  52800007  mov w7, #0x0 // #0
        00000024:  d2800060  mov x0, #0x3 // #3
        00000028:  d2800041  mov x1, #0x2 // #2
        0000002c:  eb01001f  cmp x0, x1
        00000030:  540000a2  b.cs 0x00000044
        00000034:  d29fffe7  mov x7, #0xffff // #65535
        00000038:  f2bfffe7  movk x7, #0xffff, lsl #16
        0000003c:  f2dfffe7  movk x7, #0xffff, lsl #32
        00000040:  f2ffffe7  movk x7, #0xffff, lsl #48
        00000044:  d29dddc7  mov x7, #0xeeee // #61166
        00000048:  f2bdddc7  movk x7, #0xeeee, lsl #16
        0000004c:  f2ddddc7  movk x7, #0xeeee, lsl #32
        00000050:  f2fdddc7  movk x7, #0xeeee, lsl #48
        [...]
      
      Also, add a couple of test cases to make sure JITs pass
      this test. Tested on Cavium ThunderX ARMv8. The added
      test cases all pass after the fix.
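
      A simplified sketch of the bookkeeping after the fix (names assumed;
      the real loop lives in the arm64 JIT's instruction-building pass):

        struct jit_ctx {
                int idx;        /* next emitted A64 instruction */
                int *offset;    /* per-BPF-insn offset into the image */
        };

        /* build_insn() is assumed to return > 0 when the instruction
         * consumed an extra BPF slot, i.e. the second half of a 64 bit
         * load immediate. */
        for (i = 0; i < prog_len; i++) {
                int ret = build_insn(i, ctx);

                if (ret > 0) {
                        i++;                       /* skip the extra slot */
                        ctx->offset[i] = ctx->idx; /* ...but record it   */
                        continue;
                }
                ctx->offset[i] = ctx->idx;
        }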
      
      Fixes: 8eee539d ("arm64: bpf: fix out-of-bounds read in bpf2a64_offset()")
      Reported-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Xi Wang <xi.wang@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf, arm64: implement jiting of BPF_XADD · 85f68fe8
      Daniel Borkmann authored
      
      
      This work adds BPF_XADD for BPF_W/BPF_DW to the arm64 JIT and thereby
      completes JITing of all BPF instructions: we can remove the 'notyet'
      label and no longer need to fall back to the interpreter when BPF_XADD
      is used in a program!

      This also brings the arm64 JIT in line with x86_64, s390x, ppc64 and
      sparc64, where all current eBPF features are supported.
      
      BPF_W example from test_bpf:
      
        .u.insns_int = {
          BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
          BPF_ST_MEM(BPF_W, R10, -40, 0x10),
          BPF_STX_XADD(BPF_W, R10, R0, -40),
          BPF_LDX_MEM(BPF_W, R0, R10, -40),
          BPF_EXIT_INSN(),
        },
      
        [...]
        00000020:  52800247  mov w7, #0x12 // #18
        00000024:  928004eb  mov x11, #0xffffffffffffffd8 // #-40
        00000028:  d280020a  mov x10, #0x10 // #16
        0000002c:  b82b6b2a  str w10, [x25,x11]
        // start of xadd mapping:
        00000030:  928004ea  mov x10, #0xffffffffffffffd8 // #-40
        00000034:  8b19014a  add x10, x10, x25
        00000038:  f9800151  prfm pstl1strm, [x10]
        0000003c:  885f7d4b  ldxr w11, [x10]
        00000040:  0b07016b  add w11, w11, w7
        00000044:  880b7d4b  stxr w11, w11, [x10]
        00000048:  35ffffab  cbnz w11, 0x0000003c
        // end of xadd mapping:
        [...]
      
      BPF_DW example from test_bpf:
      
        .u.insns_int = {
          BPF_ALU32_IMM(BPF_MOV, R0, 0x12),
          BPF_ST_MEM(BPF_DW, R10, -40, 0x10),
          BPF_STX_XADD(BPF_DW, R10, R0, -40),
          BPF_LDX_MEM(BPF_DW, R0, R10, -40),
          BPF_EXIT_INSN(),
        },
      
        [...]
        00000020:  52800247  mov w7,  #0x12 // #18
        00000024:  928004eb  mov x11, #0xffffffffffffffd8 // #-40
        00000028:  d280020a  mov x10, #0x10 // #16
        0000002c:  f82b6b2a  str x10, [x25,x11]
        // start of xadd mapping:
        00000030:  928004ea  mov x10, #0xffffffffffffffd8 // #-40
        00000034:  8b19014a  add x10, x10, x25
        00000038:  f9800151  prfm pstl1strm, [x10]
        0000003c:  c85f7d4b  ldxr x11, [x10]
        00000040:  8b07016b  add x11, x11, x7
        00000044:  c80b7d4b  stxr w11, x11, [x10]
        00000048:  35ffffab  cbnz w11, 0x0000003c
        // end of xadd mapping:
        [...]
      
      Tested on Cavium ThunderX ARMv8, test suite results after the patch:
      
        No JIT:   [ 3751.855362] test_bpf: Summary: 311 PASSED, 0 FAILED, [0/303 JIT'ed]
        With JIT: [ 3573.759527] test_bpf: Summary: 311 PASSED, 0 FAILED, [303/303 JIT'ed]
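
      For reference, the xadd mapping above corresponds to this C-level
      LL/SC loop (a sketch; the JIT emits the instructions directly rather
      than going through inline asm):

        #include <stdint.h>

        static inline void xadd32(uint32_t *p, uint32_t v)
        {
                uint32_t val, fail;

                asm volatile("   prfm pstl1strm, [%2]\n"
                             "1: ldxr %w0, [%2]\n"
                             "   add  %w0, %w0, %w3\n"
                             "   stxr %w1, %w0, [%2]\n"
                             "   cbnz %w1, 1b"
                             : "=&r" (val), "=&r" (fail)
                             : "r" (p), "r" (v)
                             : "memory");
        }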
      
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. Apr 26, 2017
    • arm64: module: split core and init PLT sections · 24af6c4e
      Ard Biesheuvel authored
      
      
      The arm64 module PLT code allocates all PLT entries in a single core
      section, since the overhead of having a separate init PLT section is
      not justified by the small number of PLT entries usually required for
      init code.
      
      However, the core and init module regions are allocated independently,
      and there is a corner case where the core region may be allocated from
      the VMALLOC region if the dedicated module region is exhausted, but the
      init region, being much smaller, can still be allocated from the module
      region. This leads to relocation failures if the distance between those
      regions exceeds 128 MB. (In fact, this corner case is highly unlikely to
      occur on arm64, but the issue has been observed on ARM, whose module
      region is much smaller).
      
      So split the core and init PLT regions, and name the latter ".init.plt"
      so it gets allocated along with (and sufficiently close to) the .init
      sections that it serves. Also, given that init PLT entries may need to
      be emitted for branches that target the core module, modify the logic
      that disregards defined symbols to only disregard symbols that are
      defined in the same section as the relocated branch instruction.
      
      Since there may now be two PLT entries associated with each entry in
      the symbol table, we can no longer hijack the symbol::st_size fields
      to record the addresses of PLT entries as we emit them for zero-addend
      relocations. So instead, perform an explicit comparison to check for
      duplicate entries.
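
      A hedged sketch of the duplicate check (layout and names invented;
      the real code compares the instruction words of already-emitted PLT
      entries):

        #include <stdint.h>

        struct plts {
                uint64_t dest[128];     /* branch targets emitted so far */
                int count;
        };

        static int plt_index(struct plts *p, uint64_t dest)
        {
                int i;

                /* Scan the entries emitted so far instead of caching the
                 * PLT address in the symbol's st_size field. */
                for (i = 0; i < p->count; i++)
                        if (p->dest[i] == dest)
                                return i;       /* reuse duplicate */

                p->dest[p->count] = dest;       /* emit a new entry */
                return p->count++;
        }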
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>