  4. Mar 26, 2016
    • mm, kasan: stackdepot implementation. Enable stackdepot for SLAB · cd11016e
      Alexander Potapenko authored
      
      
      Implement the stack depot and provide CONFIG_STACKDEPOT.  The stack depot
      will allow KASAN to store allocation/deallocation stack traces for memory
      chunks.  The stack traces are stored in a hash table and referenced by
      handles which reside in the kasan_alloc_meta and kasan_free_meta
      structures in the allocated memory chunks.
      
      IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
      duplication.
      
      Right now stackdepot support is only enabled in the SLAB allocator.  Once
      KASAN features in SLAB are on par with those in SLUB we can switch SLUB
      to stackdepot as well, thus removing the dependency on SLUB stack
      bookkeeping, which wastes a lot of memory.
      
      This patch is based on the "mm: kasan: stack depots" patch originally
      prepared by Dmitry Chernenkov.
      
      Joonsoo has said that he plans to reuse the stackdepot code for the
      mm/page_owner.c debugging facility.
      
      [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t]
      [aryabinin@virtuozzo.com: comment style fixes]
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
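The hashing-and-handle scheme described above can be illustrated with a small user-space sketch. All names and sizes here are hypothetical, not the kernel's: traces are deduplicated in a hash table and callers keep only a small integer handle.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DEPOT_SLOTS 1024
#define MAX_DEPTH   16

typedef uint32_t depot_handle_t;     /* 0 means "nothing stored" */

struct depot_entry {
	int used;
	uint32_t hash;
	int depth;
	uintptr_t pcs[MAX_DEPTH];
};

static struct depot_entry depot[DEPOT_SLOTS];

/* FNV-1a over the raw program counters. */
static uint32_t trace_hash(const uintptr_t *pcs, int depth)
{
	const unsigned char *p = (const unsigned char *)pcs;
	uint32_t h = 2166136261u;
	for (size_t i = 0; i < depth * sizeof(*pcs); i++)
		h = (h ^ p[i]) * 16777619u;
	return h;
}

/* Store a trace and return its handle; an identical trace that is
 * already present yields the same handle (linear probing). */
depot_handle_t depot_save(const uintptr_t *pcs, int depth)
{
	uint32_t h = trace_hash(pcs, depth);
	for (uint32_t i = 0; i < DEPOT_SLOTS; i++) {
		uint32_t slot = (h + i) % DEPOT_SLOTS;
		struct depot_entry *e = &depot[slot];
		if (!e->used) {
			e->used = 1;
			e->hash = h;
			e->depth = depth;
			memcpy(e->pcs, pcs, depth * sizeof(*pcs));
			return slot + 1;
		}
		if (e->hash == h && e->depth == depth &&
		    memcmp(e->pcs, pcs, depth * sizeof(*pcs)) == 0)
			return slot + 1;
	}
	return 0; /* table full */
}
```

With a scheme like this, kasan_alloc_meta and kasan_free_meta need only hold a depot_handle_t rather than a full copy of the trace.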
    • arch, ftrace: for KASAN put hard/soft IRQ entries into separate sections · be7635e7
      Alexander Potapenko authored
      
      
      KASAN needs to know whether the allocation happens in an IRQ handler.
      This lets us strip everything below the IRQ entry point to reduce the
      number of unique stack traces that need to be stored.
      
      Move the definition of __irq_entry to <linux/interrupt.h> so that the
      users don't need to pull in <linux/ftrace.h>.  Also introduce the
      __softirq_entry macro, which is similar to __irq_entry but puts the
      corresponding functions into the .softirqentry.text section.
      
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
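With the IRQ entry code confined to dedicated sections, "cut the trace below the IRQ entry point" reduces to a bounds check per frame. A hypothetical user-space sketch: the section bounds are plain parameters here, whereas in the kernel they come from linker-script symbols.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Does this program counter fall inside the IRQ-entry text section? */
static int in_irqentry_text(uintptr_t pc, uintptr_t start, uintptr_t end)
{
	return pc >= start && pc < end;
}

/* Keep only the frames above (and including) the first IRQ-entry frame,
 * returning the new trace length; frames below it are dropped because
 * they would only duplicate the interrupted task's stack. */
size_t cut_below_irq_entry(const uintptr_t *pcs, size_t n,
			   uintptr_t start, uintptr_t end)
{
	for (size_t i = 0; i < n; i++)
		if (in_irqentry_text(pcs[i], start, end))
			return i + 1;	/* include the entry frame itself */
	return n;			/* no IRQ entry found: keep everything */
}
```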
    • [IA64] Enable preadv2 and pwritev2 syscalls for ia64 · 2d5ae5c2
      Tony Luck authored
      New system calls added in commit f17d8b35 ("vfs: vfs: Define new
      syscalls preadv2,pwritev2").

      Signed-off-by: Tony Luck <tony.luck@intel.com>
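For context, the syscalls being wired up here take an extra flags argument compared to preadv/pwritev. A minimal sketch using the glibc wrapper (assumes Linux with glibc 2.26+; with flags 0 the call behaves like a plain positional scatter read):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Read 2*n bytes from offset 0 into two separate buffers with a single
 * preadv2 call; returns the byte count or -1 on error. */
int read_two_chunks(const char *path, char *a, char *b, size_t n)
{
	int fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	struct iovec iov[2] = {
		{ .iov_base = a, .iov_len = n },
		{ .iov_base = b, .iov_len = n },
	};
	ssize_t r = preadv2(fd, iov, 2, 0 /* offset */, 0 /* flags */);
	close(fd);
	return (int)r;
}
```

The flags argument is what distinguishes preadv2 from preadv; per-call behaviour such as high-priority or non-blocking reads is requested through RWF_* flags instead of per-fd state.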
  5. Mar 25, 2016
    • h8300: switch EARLYCON · 8cad4892
      Yoshinori Sato authored
      
      
      earlyprintk is an architecture-specific option; earlycon is generic and
      has a small footprint.
      
      Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
    • h8300: dts: Rename the serial port clock to fck · d8581616
      Geert Uytterhoeven authored
      
      
      The clock is really the device functional clock, not the interface
      clock. Rename it.
      
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    • arm64: mm: allow preemption in copy_to_user_page · 691b1e2e
      Mark Rutland authored
      
      
      Currently we disable preemption in copy_to_user_page, a behaviour that
      we inherited from the 32-bit arm code. This was necessary for older
      cores without broadcast data cache maintenance, and ensured that cache
      lines were dirtied and cleaned by the same CPU. On these systems dirty
      cache line migration was not possible, so this was sufficient to
      guarantee coherency.
      
      On contemporary systems, cache coherence protocols permit (dirty) cache
      lines to migrate between CPUs as a result of speculation, prefetching,
      and other behaviours. To account for this, in ARMv8 data cache
      maintenance operations are broadcast and affect all data caches in the
      domain associated with the VA (i.e. ISH for kernel and user mappings).
      
      In __switch_to we ensure that tasks can be safely migrated in the middle
      of a maintenance sequence, using a dsb(ish) to ensure prior explicit
      memory accesses are observed and cache maintenance operations are
      completed before a task can be run on another CPU.
      
      Given the above, it is not necessary to disable preemption in
      copy_to_user_page. This patch removes the preempt_{disable,enable}
      calls, permitting preemption.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: consistently use p?d_set_huge · c661cb1c
      Mark Rutland authored
      Commit 324420bf ("arm64: add support for ioremap() block mappings")
      added new p?d_set_huge functions which do the hard work to generate and
      set a correct block entry.
      
      These differ from open-coded huge page creation in the early page table
      code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
      retained by mk_sect_prot() for any valid prot), but are otherwise
      identical (and cannot fail on arm64).
      
      For simplicity and consistency, make use of these in the initial page
      table creation code.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kaslr: use callee saved register to preserve SCTLR across C call · d5e57437
      Ard Biesheuvel authored
      
      
      The KASLR code incorrectly expects the contents of x18 to be preserved
      across a call into C code, and uses it to stash the contents of SCTLR_EL1
      before enabling the MMU. If the MMU needs to be disabled again to create
      the randomized kernel mapping, x18 is written back to SCTLR_EL1, which is
      likely to crash the system if x18 has been clobbered by kasan_early_init()
      or kaslr_early_init(). So use x22 instead, which is not in use so far in
      head.S.
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. Mar 22, 2016
    • KVM: PPC: do not compile in vfio.o unconditionally · 0af574be
      Paolo Bonzini authored
      
      
      Build on 32-bit PPC fails with the following error:
      
       int kvm_vfio_ops_init(void)
            ^
       In file included from arch/powerpc/kvm/../../../virt/kvm/vfio.c:21:0:
       arch/powerpc/kvm/../../../virt/kvm/vfio.h:8:90: note: previous definition of ‘kvm_vfio_ops_init’ was here
       arch/powerpc/kvm/../../../virt/kvm/vfio.c:292:6: error: redefinition of ‘kvm_vfio_ops_exit’
       void kvm_vfio_ops_exit(void)
                   ^
       In file included from arch/powerpc/kvm/../../../virt/kvm/vfio.c:21:0:
       arch/powerpc/kvm/../../../virt/kvm/vfio.h:12:91: note: previous definition of ‘kvm_vfio_ops_exit’ was here
       scripts/Makefile.build:258: recipe for target arch/powerpc/kvm/../../../virt/kvm/vfio.o failed
       make[3]: *** [arch/powerpc/kvm/../../../virt/kvm/vfio.o] Error 1
      
      Check whether CONFIG_KVM_VFIO is set before including vfio.o
      in the build.
      
      Reported-by: Pranith Kumar <bobby.prani@gmail.com>
      Tested-by: Pranith Kumar <bobby.prani@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
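The usual kbuild way to express this is a CONFIG_-keyed object list, so the object is compiled in only when the option is enabled. An illustrative fragment of the idiom (not the literal patch):

```make
# vfio.o is linked into the kvm objects only when CONFIG_KVM_VFIO=y;
# when the option is unset, the variable expansion is empty and the
# file is never built.
kvm-y                  += kvm_main.o
kvm-$(CONFIG_KVM_VFIO) += vfio.o
```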
    • kvm, rt: change async pagefault code locking for PREEMPT_RT · 9db284f3
      Rik van Riel authored
      
      
      The async pagefault wake code can run from the idle task in exception
      context, so everything here needs to be made non-preemptible.
      
      Conversion to a simple wait queue and raw spinlock does the trick.
      
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>