  1. Apr 07, 2014
    • arm64: fix !CONFIG_COMPAT build failures · ff268ff7
      Mark Salter authored
      Recent arm64 builds using CONFIG_ARM64_64K_PAGES are failing with:
      
        arch/arm64/kernel/perf_regs.c: In function ‘perf_reg_abi’:
        arch/arm64/kernel/perf_regs.c:41:2: error: implicit declaration of function ‘is_compat_thread’
      
        arch/arm64/kernel/perf_event.c:1398:2: error: unknown type name ‘compat_uptr_t’
      
      This is due to some recent arm64 perf commits with compat support:
      
        commit 23c7d70d:
          ARM64: perf: add support for frame pointer unwinding in compat mode
      
        commit 2ee0d7fd:
          ARM64: perf: add support for perf registers API
      
      Those patches make the arm64 kernel unbuildable if CONFIG_COMPAT is not
      defined and CONFIG_ARM64_64K_PAGES depends on !CONFIG_COMPAT. This patch
      allows the arm64 kernel to build with and without CONFIG_COMPAT.
      
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. Feb 26, 2014
    • arm64: vdso: clean up vdso_pagelist initialization · 16fb1a9b
      Nathan Lynch authored

      Remove some unnecessary bits that were apparently carried over from
      another architecture's implementation:
      
      - No need to get_page() the vdso text/data - these are part of the
        kernel image.
      - No need for ClearPageReserved on the vdso text.
      - No need to vmap the first text page to check the ELF header - this
        can be done through &vdso_start.
      
      Also some minor cleanup:
      - Use kcalloc for vdso_pagelist array allocation.
      - Don't print on allocation failure, slab/slub will do that for us.
      
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Change misleading function names in dma-mapping · bb10eb7b
      Ritesh Harjani authored

      The arm64_swiotlb_alloc/free_coherent names can be misleading
      now that CMA support has been enabled by this
      patch (c2104debc235b745265b64d610237a6833fd53).

      Change the names to the more generic:
      __dma_alloc/free_coherent
      
      Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
      [catalin.marinas@arm.com: renamed arm64_swiotlb_dma_ops to coherent_swiotlb_dma_ops]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Fix the soft_restart routine · 09024aa6
      Geoff Levand authored

      Change the soft_restart() routine to call cpu_reset() at its identity mapped
      physical address.
      
      The cpu_reset() routine must be called at its identity mapped physical address
      so that when the MMU is turned off the instruction pointer will be at the correct
      location in physical memory.
      
      Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Extend the idmap to the whole kernel image · ea8c2e11
      Catalin Marinas authored

      This patch changes the idmap page table creation during boot to cover
      the whole kernel image, allowing functions like cpu_reset() to be safely
      called with the physical address.
      
      This patch also simplifies the create_block_map asm macro to no longer
      take an idmap argument and always use the phys/virt/end parameters. For
      the idmap case, phys == virt.
      
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Convert asm/tlb.h to generic mmu_gather · 020c1427
      Catalin Marinas authored
      Over the past couple of years, the generic mmu_gather gained range
      tracking - 597e1c35 (mm/mmu_gather: enable tlb flush range in generic
      mmu_gather), 2b047252 (Fix TLB gather virtual address range
      invalidation corner cases) - and tlb_fast_mode() has been removed -
      29eb7782 (arch, mm: Remove tlb_fast_mode()).
      
      The new mmu_gather structure is now suitable for arm64 and this patch
      converts the arch asm/tlb.h to the generic code. One functional
      difference is the shift_arg_pages() case where previously the code was
      flushing the full mm (no tlb_start_vma call) but now it flushes the
      range given to tlb_gather_mmu() (possibly slightly more efficient
      previously).
      
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>