  1. Dec 06, 2019
      arm64: ftrace: fix ifdeffery · 70927d02
      Mark Rutland authored
      
      
      When I tweaked the ftrace entry assembly in commit:
      
        3b23e499 ("arm64: implement ftrace with regs")
      
      ... my ifdeffery tweaks left ftrace_graph_caller undefined for
      CONFIG_DYNAMIC_FTRACE && CONFIG_FUNCTION_GRAPH_TRACER when ftrace is
      based on mcount.
      
      The kbuild test robot reported that this issue is detected at link time:
      
      | arch/arm64/kernel/entry-ftrace.o: In function `skip_ftrace_call':
      | arch/arm64/kernel/entry-ftrace.S:238: undefined reference to `ftrace_graph_caller'
      | arch/arm64/kernel/entry-ftrace.S:238:(.text+0x3c): relocation truncated to fit: R_AARCH64_CONDBR19 against undefined symbol
      | `ftrace_graph_caller'
      | arch/arm64/kernel/entry-ftrace.S:243: undefined reference to `ftrace_graph_caller'
      | arch/arm64/kernel/entry-ftrace.S:243:(.text+0x54): relocation truncated to fit: R_AARCH64_CONDBR19 against undefined symbol
      | `ftrace_graph_caller'
      
      This patch fixes the ifdeffery so that the mcount version of
      ftrace_graph_caller doesn't depend on CONFIG_DYNAMIC_FTRACE. At the same
      time, a redundant #else is removed from the ifdeffery for the
      patchable-function-entry version of ftrace_graph_caller.
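      The corrected guard, roughly (a structural sketch rather than the
      literal diff; the point is that the mcount flavour of
      ftrace_graph_caller now depends only on CONFIG_FUNCTION_GRAPH_TRACER):
      
      | /* entry-ftrace.S, mcount-based ftrace -- structural sketch */
      | #ifdef CONFIG_FUNCTION_GRAPH_TRACER
      | ENTRY(ftrace_graph_caller)	// no longer nested under CONFIG_DYNAMIC_FTRACE
      | 	/* body unchanged */
      | ENDPROC(ftrace_graph_caller)
      | #endif /* CONFIG_FUNCTION_GRAPH_TRACER */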
      
      Fixes: 3b23e499 ("arm64: implement ftrace with regs")
      Reported-by: kbuild test robot <lkp@intel.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Torsten Duwe <duwe@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: KVM: Invoke compute_layout() before alternatives are applied · 0492747c
      Sebastian Andrzej Siewior authored
      
      
      compute_layout() is invoked as part of an alternative fixup under
      stop_machine(). This function invokes get_random_long(), which on -RT
      acquires a sleeping lock that cannot be taken in this context.
      
      Rename compute_layout() to kvm_compute_layout() and invoke it before
      stop_machine() applies the alternatives. Add a __init prefix to
      kvm_compute_layout() because the caller has it, too (and so the code can be
      discarded after boot).
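      A minimal sketch of the resulting call site (illustrative; the exact
      placement within smp.c may differ):
      
      | /* arch/arm64/kernel/smp.c -- sketch */
      | void __init smp_cpus_done(unsigned int max_cpus)
      | {
      | 	pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
      | 	setup_cpu_features();
      | 	hyp_mode_check();
      | 	/* Runs in regular task context: get_random_long() may take a
      | 	 * sleeping lock on -RT, which stop_machine() context forbids. */
      | 	kvm_compute_layout();
      | 	apply_alternatives_all();
      | 	mark_linear_text_alias_ro();
      | }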
      
      Reviewed-by: James Morse <james.morse@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: Validate tagged addresses in access_ok() called from kernel threads · df325e05
      Catalin Marinas authored
      
      
      __range_ok(), invoked from access_ok(), clears the tag of the user
      address only if CONFIG_ARM64_TAGGED_ADDR_ABI is enabled and the thread
      opted in to the relaxed ABI. The latter sets the TIF_TAGGED_ADDR thread
      flag. In the case of asynchronous I/O (e.g. io_submit()), access_ok()
      may be called from a kernel thread. Since kernel threads don't have
      TIF_TAGGED_ADDR set, access_ok() will fail for valid tagged user
      addresses. Example from the ffs_user_copy_worker() thread:
      
      	use_mm(io_data->mm);
      	ret = ffs_copy_to_iter(io_data->buf, ret, &io_data->data);
      	unuse_mm(io_data->mm);
      
      Relax the __range_ok() check to always untag the user address if called
      in the context of a kernel thread. The user pointers would have already
      been checked via aio_setup_rw() -> import_{single_range,iovec}() at the
      time of the asynchronous I/O request.
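      The relaxed check amounts to the following condition inside
      __range_ok() (a sketch, assuming the existing untagged_addr() helper):
      
      | 	if (IS_ENABLED(CONFIG_ARM64_TAGGED_ADDR_ABI) &&
      | 	    (current->flags & PF_KTHREAD ||
      | 	     test_thread_flag(TIF_TAGGED_ADDR)))
      | 		addr = untagged_addr(addr);	/* kernel threads always untag */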
      
      Fixes: 63f0c603 ("arm64: Introduce prctl() options to control the tagged user addresses ABI")
      Cc: <stable@vger.kernel.org> # 5.4.x-
      Cc: Will Deacon <will@kernel.org>
      Reported-by: Evgenii Stepanov <eugenis@google.com>
      Tested-by: Evgenii Stepanov <eugenis@google.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. Dec 04, 2019
      arm64: mm: Fix column alignment for UXN in kernel_page_tables · cba779d8
      Mark Brown authored
      
      
      UXN is the only individual PTE bit other than the PTE_ATTRINDX_MASK ones
      which doesn't have both a set and a clear value provided, meaning that the
      columns in the table won't all be aligned. The PTE_ATTRINDX_MASK values
      are all both mutually exclusive and longer so are listed last to make a
      single final column for those values. Ensure everything is aligned by
      providing a clear value for UXN.
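      In terms of the prot_bits table in arch/arm64/mm/dump.c, the fix is
      just filling in the missing .clear string for the UXN entry (sketch):
      
      | 	{
      | 		.mask	= PTE_UXN,
      | 		.val	= PTE_UXN,
      | 		.set	= "UXN",
      | 		.clear	= "   ",	/* pad so subsequent columns line up */
      | 	},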
      
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: insn: consistently handle exit text · ca2ef4ff
      Mark Rutland authored
      
      
      A kernel built with KASAN && FTRACE_WITH_REGS && !MODULES produces a
      boot-time splat in the bowels of ftrace:
      
      | [    0.000000] ftrace: allocating 32281 entries in 127 pages
      | [    0.000000] ------------[ cut here ]------------
      | [    0.000000] WARNING: CPU: 0 PID: 0 at kernel/trace/ftrace.c:2019 ftrace_bug+0x27c/0x328
      | [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.4.0-rc3-00008-g7f08ae53a7e3 #13
      | [    0.000000] Hardware name: linux,dummy-virt (DT)
      | [    0.000000] pstate: 60000085 (nZCv daIf -PAN -UAO)
      | [    0.000000] pc : ftrace_bug+0x27c/0x328
      | [    0.000000] lr : ftrace_init+0x640/0x6cc
      | [    0.000000] sp : ffffa000120e7e00
      | [    0.000000] x29: ffffa000120e7e00 x28: ffff00006ac01b10
      | [    0.000000] x27: ffff00006ac898c0 x26: dfffa00000000000
      | [    0.000000] x25: ffffa000120ef290 x24: ffffa0001216df40
      | [    0.000000] x23: 000000000000018d x22: ffffa0001244c700
      | [    0.000000] x21: ffffa00011bf393c x20: ffff00006ac898c0
      | [    0.000000] x19: 00000000ffffffff x18: 0000000000001584
      | [    0.000000] x17: 0000000000001540 x16: 0000000000000007
      | [    0.000000] x15: 0000000000000000 x14: ffffa00010432770
      | [    0.000000] x13: ffff940002483519 x12: 1ffff40002483518
      | [    0.000000] x11: 1ffff40002483518 x10: ffff940002483518
      | [    0.000000] x9 : dfffa00000000000 x8 : 0000000000000001
      | [    0.000000] x7 : ffff940002483519 x6 : ffffa0001241a8c0
      | [    0.000000] x5 : ffff940002483519 x4 : ffff940002483519
      | [    0.000000] x3 : ffffa00011780870 x2 : 0000000000000001
      | [    0.000000] x1 : 1fffe0000d591318 x0 : 0000000000000000
      | [    0.000000] Call trace:
      | [    0.000000]  ftrace_bug+0x27c/0x328
      | [    0.000000]  ftrace_init+0x640/0x6cc
      | [    0.000000]  start_kernel+0x27c/0x654
      | [    0.000000] random: get_random_bytes called from print_oops_end_marker+0x30/0x60 with crng_init=0
      | [    0.000000] ---[ end trace 0000000000000000 ]---
      | [    0.000000] ftrace faulted on writing
      | [    0.000000] [<ffffa00011bf393c>] _GLOBAL__sub_D_65535_0___tracepoint_initcall_level+0x4/0x28
      | [    0.000000] Initializing ftrace call sites
      | [    0.000000] ftrace record flags: 0
      | [    0.000000]  (0)
      | [    0.000000]  expected tramp: ffffa000100b3344
      
      This is due to an unfortunate combination of several factors.
      
      Building with KASAN results in the compiler generating anonymous
      functions to register/unregister global variables against the shadow
      memory. These functions are placed in .text.startup/.text.exit, and
      given mangled names like _GLOBAL__sub_{I,D}_65535_0_$OTHER_SYMBOL. The
      kernel linker script places these in .init.text and .exit.text
      respectively, which are both discarded at runtime as part of initmem.
      
      Building with FTRACE_WITH_REGS uses -fpatchable-function-entry=2, which
      also instruments KASAN's anonymous functions. When these are discarded
      with the rest of initmem, ftrace removes dangling references to these
      call sites.
      
      Building without MODULES implicitly disables STRICT_MODULE_RWX, and
      causes arm64's patch_map() function to treat any !core_kernel_text()
      symbol as something that can be modified in-place. As core_kernel_text()
      is only true for .text and .init.text, with the latter depending on
      system_state < SYSTEM_RUNNING, we'll treat .exit.text as something that
      can be patched in-place. However, .exit.text is mapped read-only.
      
      Hence in this configuration the ftrace init code blows up while trying
      to patch one of the functions generated by KASAN.
      
      We could try to filter out the call sites in .exit.text rather than
      initializing them, but this would be inconsistent with how we handle
      .init.text, and requires hooking into core bits of ftrace. The behaviour
      of patch_map() is also inconsistent today, so instead let's clean that
      up and have it consistently handle .exit.text.
      
      This patch teaches patch_map() to handle .exit.text at init time,
      preventing the boot-time splat above. The flow of patch_map() is
      reworked to make the logic clearer and minimize redundant
      conditionality.
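      The reworked logic hinges on two small helpers (a sketch;
      __exittext_begin/__exittext_end are section markers exposed for this
      purpose):
      
      | static bool __kprobes is_exit_text(unsigned long addr)
      | {
      | 	/* .exit.text is discarded along with init memory */
      | 	return system_state < SYSTEM_RUNNING &&
      | 		addr >= (unsigned long)__exittext_begin &&
      | 		addr <  (unsigned long)__exittext_end;
      | }
      |
      | static bool __kprobes is_image_text(unsigned long addr)
      | {
      | 	return core_kernel_text(addr) || is_exit_text(addr);
      | }
      
      patch_map() can then map image text (including .exit.text at init
      time) via the fixmap, only falling back to in-place modification for
      module text when STRICT_MODULE_RWX is disabled.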
      
      Fixes: 3b23e499 ("arm64: implement ftrace with regs")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Torsten Duwe <duwe@suse.de>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: mm: Fix initialisation of DMA zones on non-NUMA systems · 93b90414
      Will Deacon authored
      John reports that the recently merged commit 1a8e1cef ("arm64: use
      both ZONE_DMA and ZONE_DMA32") breaks the boot on his DB845C board:
      
        | Booting Linux on physical CPU 0x0000000000 [0x517f803c]
        | Linux version 5.4.0-mainline-10675-g957a03b9e38f
        | Machine model: Thundercomm Dragonboard 845c
        | [...]
        | Built 1 zonelists, mobility grouping on.  Total pages: -188245
        | Kernel command line: earlycon
        | firmware_class.path=/vendor/firmware/ androidboot.hardware=db845c
        | init=/init androidboot.boot_devices=soc/1d84000.ufshc
        | printk.devkmsg=on buildvariant=userdebug root=/dev/sda2
        | androidboot.bootdevice=1d84000.ufshc androidboot.serialno=c4e1189c
        | androidboot.baseband=sda
        | msm_drm.dsi_display0=dsi_lt9611_1080_video_display:
        | androidboot.slot_suffix=_a skip_initramfs rootwait ro init=/init
        |
        | <hangs indefinitely here>
      
      This is because, when CONFIG_NUMA=n, zone_sizes_init() fails to handle
      memblocks that fall entirely within the ZONE_DMA region and erroneously ends up
      trying to add a negatively-sized region into the following ZONE_DMA32, which is
      later interpreted as a large unsigned region by the core MM code.
      
      Rework the non-NUMA implementation of zone_sizes_init() so that the start
      address of the memblock being processed is adjusted according to the end of the
      previous zone, which is then range-checked before updating the hole information
      of subsequent zones.
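      The reworked per-memblock accounting looks roughly like this (a
      sketch; max_dma and max_dma32 are the zone PFN limits, and
      zhole_size[] accumulates the per-zone holes):
      
      | 	for_each_memblock(memory, reg) {
      | 		unsigned long start = memblock_region_memory_base_pfn(reg);
      | 		unsigned long end = memblock_region_memory_end_pfn(reg);
      |
      | #ifdef CONFIG_ZONE_DMA
      | 		if (start < max_dma) {
      | 			unsigned long dma_end = min(end, max_dma);
      | 			zhole_size[ZONE_DMA] -= dma_end - start;
      | 			start = dma_end;	/* advance past ZONE_DMA */
      | 		}
      | #endif
      | #ifdef CONFIG_ZONE_DMA32
      | 		if (start < max_dma32) {
      | 			unsigned long dma32_end = min(end, max_dma32);
      | 			zhole_size[ZONE_DMA32] -= dma32_end - start;
      | 			start = dma32_end;
      | 		}
      | #endif
      | 		if (start < max) {
      | 			unsigned long normal_end = min(end, max);
      | 			zhole_size[ZONE_NORMAL] -= normal_end - start;
      | 		}
      | 	}
      
      A memblock that lies entirely within ZONE_DMA now leaves start ==
      dma_end, so the ZONE_DMA32 branch is skipped instead of computing a
      negative size.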
      
      Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
      Link: https://lore.kernel.org/lkml/CALAqxLVVcsmFrDKLRGRq7GewcW405yTOxG=KR3csVzQ6bXutkA@mail.gmail.com
      
      
      Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
      Reported-by: John Stultz <john.stultz@linaro.org>
      Tested-by: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. Nov 09, 2019
      Merge branches 'for-next/elf-hwcap-docs', 'for-next/smccc-conduit-cleanup',... · 6be22809
      Catalin Marinas authored
      Merge branches 'for-next/elf-hwcap-docs', 'for-next/smccc-conduit-cleanup', 'for-next/zone-dma', 'for-next/relax-icc_pmr_el1-sync', 'for-next/double-page-fault', 'for-next/misc', 'for-next/kselftest-arm64-signal' and 'for-next/kaslr-diagnostics' into for-next/core
      
      * for-next/elf-hwcap-docs:
        : Update the arm64 ELF HWCAP documentation
        docs/arm64: cpu-feature-registers: Rewrite bitfields that don't follow [e, s]
        docs/arm64: cpu-feature-registers: Documents missing visible fields
        docs/arm64: elf_hwcaps: Document HWCAP_SB
        docs/arm64: elf_hwcaps: sort the HWCAP{, 2} documentation by ascending value
      
      * for-next/smccc-conduit-cleanup:
        : SMC calling convention conduit clean-up
        firmware: arm_sdei: use common SMCCC_CONDUIT_*
        firmware/psci: use common SMCCC_CONDUIT_*
        arm: spectre-v2: use arm_smccc_1_1_get_conduit()
        arm64: errata: use arm_smccc_1_1_get_conduit()
        arm/arm64: smccc/psci: add arm_smccc_1_1_get_conduit()
      
      * for-next/zone-dma:
        : Reintroduction of ZONE_DMA for Raspberry Pi 4 support
        arm64: mm: reserve CMA and crashkernel in ZONE_DMA32
        dma/direct: turn ARCH_ZONE_DMA_BITS into a variable
        arm64: Make arm64_dma32_phys_limit static
        arm64: mm: Fix unused variable warning in zone_sizes_init
        mm: refresh ZONE_DMA and ZONE_DMA32 comments in 'enum zone_type'
        arm64: use both ZONE_DMA and ZONE_DMA32
        arm64: rename variables used to calculate ZONE_DMA32's size
        arm64: mm: use arm64_dma_phys_limit instead of calling max_zone_dma_phys()
      
      * for-next/relax-icc_pmr_el1-sync:
        : Relax ICC_PMR_EL1 (GICv3) accesses when ICC_CTLR_EL1.PMHE is clear
        arm64: Document ICC_CTLR_EL3.PMHE setting requirements
        arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear
      
      * for-next/double-page-fault:
        : Avoid a double page fault in __copy_from_user_inatomic() if hw does not support auto Access Flag
        mm: fix double page fault on arm64 if PTE_AF is cleared
        x86/mm: implement arch_faults_on_old_pte() stub on x86
        arm64: mm: implement arch_faults_on_old_pte() on arm64
        arm64: cpufeature: introduce helper cpu_has_hw_af()
      
      * for-next/misc:
        : Various fixes and clean-ups
        arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist
        arm64: mm: Remove MAX_USER_VA_BITS definition
        arm64: mm: simplify the page end calculation in __create_pgd_mapping()
        arm64: print additional fault message when executing non-exec memory
        arm64: psci: Reduce the waiting time for cpu_psci_cpu_kill()
        arm64: pgtable: Correct typo in comment
        arm64: docs: cpu-feature-registers: Document ID_AA64PFR1_EL1
        arm64: cpufeature: Fix typos in comment
        arm64/mm: Poison initmem while freeing with free_reserved_area()
        arm64: use generic free_initrd_mem()
        arm64: simplify syscall wrapper ifdeffery
      
      * for-next/kselftest-arm64-signal:
        : arm64-specific kselftest support with signal-related test-cases
        kselftest: arm64: fake_sigreturn_misaligned_sp
        kselftest: arm64: fake_sigreturn_bad_size
        kselftest: arm64: fake_sigreturn_duplicated_fpsimd
        kselftest: arm64: fake_sigreturn_missing_fpsimd
        kselftest: arm64: fake_sigreturn_bad_size_for_magic0
        kselftest: arm64: fake_sigreturn_bad_magic
        kselftest: arm64: add helper get_current_context
        kselftest: arm64: extend test_init functionalities
        kselftest: arm64: mangle_pstate_invalid_mode_el[123][ht]
        kselftest: arm64: mangle_pstate_invalid_daif_bits
        kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils
        kselftest: arm64: extend toplevel skeleton Makefile
      
      * for-next/kaslr-diagnostics:
        : Provide diagnostics on boot for KASLR
        arm64: kaslr: Check command line before looking for a seed
        arm64: kaslr: Announce KASLR status on boot
      arm64: kaslr: Check command line before looking for a seed · 2203e1ad
      Mark Brown authored
      
      
      Now that we print diagnostics at boot, the reason why we do not
      initialise KASLR matters. Currently we check for a seed before we
      check whether the user has explicitly disabled KASLR on the command
      line, which results in misleading diagnostics, so reverse the order of
      those checks. We still parse the seed from the DT early so that if the
      user has both provided a seed and disabled KASLR on the command line
      we still mask the seed on the command line.
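      In kaslr_early_init() terms the reordering is roughly (a sketch; the
      nokaslr helper name is illustrative):
      
      | 	seed = get_kaslr_seed(fdt);	/* parse early so it can be masked */
      |
      | 	/* Check for an explicit opt-out first so diagnostics are accurate. */
      | 	if (cmdline_contains_nokaslr(fdt))
      | 		return 0;
      |
      | 	if (!seed)
      | 		return 0;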
      
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: kaslr: Announce KASLR status on boot · 294a9ddd
      Mark Brown authored
      
      
      Currently the KASLR code is silent at boot unless it forces on KPTI,
      in which case a message is printed for that. This can lead users to
      incorrectly believe their system has the feature enabled when it in
      fact does not, and when they do notice, the lack of any diagnostics
      makes the problem harder to understand. Add an initcall which prints a
      message showing the status of KASLR during boot to make the status
      clear.
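      A minimal sketch of such an initcall (names illustrative; the real
      version also records why KASLR was not enabled, e.g. disabled on the
      command line vs. no seed provided):
      
      | static int __init kaslr_report_status(void)
      | {
      | 	if (kaslr_offset() > 0)
      | 		pr_info("KASLR enabled\n");
      | 	else
      | 		pr_info("KASLR disabled\n");
      | 	return 0;
      | }
      | core_initcall(kaslr_report_status);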
      
      This is particularly useful in cases where we don't have a seed. It
      seems to be a relatively common error for system integrators and
      administrators to enable KASLR in their configuration but not provide
      the seed at runtime, often due to seed provisioning breaking at some
      later point after it is initially enabled and verified.
      
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  4. Nov 06, 2019
      arm64: ftrace: minimize ifdeffery · 7f08ae53
      Mark Rutland authored
      
      
      Now that we no longer refer to mod->arch.ftrace_trampolines in the body
      of ftrace_make_call(), we can use IS_ENABLED() rather than ifdeffery,
      and make the code easier to follow. Likewise in ftrace_make_nop().
      
      Let's do so.
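      The pattern is the usual ifdef-to-IS_ENABLED() conversion, e.g.
      (illustrative, with hypothetical helper names, not the literal diff):
      
      | /* before */
      | #ifdef CONFIG_ARM64_MODULE_PLTS
      | 	if (!branch_in_range(pc, addr))
      | 		addr = module_plt_addr(mod, addr);
      | #endif
      |
      | /* after: dead code is compiled out, but still type-checked */
      | 	if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
      | 	    !branch_in_range(pc, addr))
      | 		addr = module_plt_addr(mod, addr);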
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      arm64: implement ftrace with regs · 3b23e499
      Torsten Duwe authored
      
      
      This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
      function's arguments (and some other registers) to be captured into a
      struct pt_regs, allowing these to be inspected and/or modified. This is
      a building block for live-patching, where a function's arguments may be
      forwarded to another function. This is also necessary to enable ftrace
      and in-kernel pointer authentication at the same time, as it allows the
      LR value to be captured and adjusted prior to signing.
      
      Using GCC's -fpatchable-function-entry=N option, we can have the
      compiler insert a configurable number of NOPs between the function entry
      point and the usual prologue. This also ensures functions are AAPCS
      compliant (e.g. disabling inter-procedural register allocation).
      
      For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
      following:
      
      | unsigned long bar(void);
      |
      | unsigned long foo(void)
      | {
      |         return bar() + 1;
      | }
      
      ... to:
      
      | <foo>:
      |         nop
      |         nop
      |         stp     x29, x30, [sp, #-16]!
      |         mov     x29, sp
      |         bl      0 <bar>
      |         add     x0, x0, #0x1
      |         ldp     x29, x30, [sp], #16
      |         ret
      
      This patch builds the kernel with -fpatchable-function-entry=2,
      prefixing each function with two NOPs. To trace a function, we replace
      these NOPs with a sequence that saves the LR into a GPR, then calls an
      ftrace entry assembly function which saves this and other relevant
      registers:
      
      | mov	x9, x30
      | bl	<ftrace-entry>
      
      Since patchable functions are AAPCS compliant (and the kernel does not
      use x18 as a platform register), x9-x18 can be safely clobbered in the
      patched sequence and the ftrace entry code.
      
      There are now two ftrace entry functions, ftrace_regs_entry (which saves
      all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
      allocated for each within modules.
      
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      [Mark: rework asm, comments, PLTs, initialization, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Julien Thierry <jthierry@redhat.com>
      Cc: Will Deacon <will@kernel.org>
      arm64: asm-offsets: add S_FP · 1f377e04
      Mark Rutland authored
      
      
      So that assembly code can more easily manipulate the FP (x29) within a
      pt_regs, add an S_FP asm-offsets definition.
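      The addition is a one-liner in arch/arm64/kernel/asm-offsets.c
      (sketch; the frame pointer lives in pt_regs::regs[29]):
      
      | 	DEFINE(S_FP,	offsetof(struct pt_regs, regs[29]));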
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      arm64: insn: add encoder for MOV (register) · e3bf8a67
      Mark Rutland authored
      
      
      For FTRACE_WITH_REGS, we're going to want to generate a MOV (register)
      instruction as part of the callsite initialization. As MOV (register) is
      an alias for ORR (shifted register), we can generate this with
      aarch64_insn_gen_logical_shifted_reg(), but it's somewhat verbose and
      difficult to read in-context.
      
      Add an aarch64_insn_gen_move_reg() wrapper for this case so that we can
      write callers in a more straightforward way.
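      A sketch of the wrapper, built on the MOV-is-ORR alias:
      
      | u32 aarch64_insn_gen_move_reg(enum aarch64_insn_register dst,
      | 			      enum aarch64_insn_register src,
      | 			      enum aarch64_insn_variant variant)
      | {
      | 	/* MOV <Xd>, <Xm> is an alias of ORR <Xd>, XZR, <Xm> */
      | 	return aarch64_insn_gen_logical_shifted_reg(dst, AARCH64_INSN_REG_ZR,
      | 						    src, 0, variant,
      | 						    AARCH64_INSN_LOGIC_ORR);
      | }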
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      arm64: module/ftrace: initialize PLT at load time · f1a54ae9
      Mark Rutland authored
      
      
      Currently we lazily-initialize a module's ftrace PLT at runtime when we
      install the first ftrace call. To do so we have to apply a number of
      sanity checks, transiently mark the module text as RW, and perform an
      IPI as part of handling Neoverse-N1 erratum #1542419.
      
      We only expect the ftrace trampoline to point at ftrace_caller() (AKA
      FTRACE_ADDR), so let's simplify all of this by initializing the PLT at
      module load time, before the module loader marks the module RO and
      performs the initial I-cache maintenance for the module.
      
      Thus we can rely on the module having been correctly initialized, and can
      simplify the runtime work necessary to install an ftrace call in a
      module. This will also allow for the removal of module_disable_ro().
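      The load-time initialization amounts to something like the following
      (a sketch; it relies on the find_section() helper factored out earlier
      in this series):
      
      | static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
      | 				  const Elf_Shdr *sechdrs,
      | 				  struct module *mod)
      | {
      | #if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
      | 	const Elf_Shdr *s;
      | 	struct plt_entry *plt;
      |
      | 	s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
      | 	if (!s)
      | 		return -ENOEXEC;
      |
      | 	plt = (void *)s->sh_addr;
      | 	*plt = get_plt_entry(FTRACE_ADDR, plt);
      | 	mod->arch.ftrace_trampoline = plt;
      | #endif
      | 	return 0;
      | }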
      
      Tested by forcing ftrace_make_call() to use the module PLT, and then
      loading up a module after setting up ftrace with:
      
      | echo ":mod:<module-name>" > set_ftrace_filter;
      | echo function > current_tracer;
      | modprobe <module-name>
      
      Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is
      selected, we wrap its use along with most of module_init_ftrace_plt()
      with ifdeffery rather than using IS_ENABLED().
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will@kernel.org>
      arm64: module: rework special section handling · bd8b21d3
      Mark Rutland authored
      
      
      When we load a module, we have to perform some special work for a couple
      of named sections. To do this, we iterate over all of the module's
      sections, and perform work for each section we recognize.
      
      To make it easier to handle the unexpected absence of a section, and to
      make the section-specific logic easier to read, let's factor the section
      search into a helper, as sketched below. Similar helpers already exist
      in the core module loader and in other architectures (and ideally we'd
      unify these in future).
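      A sketch of the helper (essentially the same shape as the core module
      loader's section search):
      
      | static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
      | 				    const Elf_Shdr *sechdrs,
      | 				    const char *name)
      | {
      | 	const Elf_Shdr *s, *se;
      | 	const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
      |
      | 	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
      | 		if (strcmp(name, secstrs + s->sh_name) == 0)
      | 			return s;
      | 	}
      |
      | 	return NULL;
      | }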
      
      If we expect a module to have an ftrace trampoline section, but it
      doesn't have one, we'll now reject loading the module. When
      ARM64_MODULE_PLTS is selected, any correctly built module should have
      one (and this is assumed by arm64's ftrace PLT code) and the absence of
      such a section implies something has gone wrong at build time.
      
      Subsequent patches will make use of the new helper.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      module/ftrace: handle patchable-function-entry · a1326b17
      Mark Rutland authored
      
      
      When using patchable-function-entry, the compiler will record the
      callsites into a section named "__patchable_function_entries" rather
      than "__mcount_loc". Let's abstract this difference behind a new
      FTRACE_CALLSITE_SECTION, so that architectures don't have to handle this
      explicitly (e.g. with custom module linker scripts).
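      A sketch of the abstraction (the exact guard may differ):
      
      | /* include/linux/ftrace.h -- sketch */
      | #ifdef CC_USING_PATCHABLE_FUNCTION_ENTRY
      | #define FTRACE_CALLSITE_SECTION	"__patchable_function_entries"
      | #else
      | #define FTRACE_CALLSITE_SECTION	"__mcount_loc"
      | #endif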
      
      As parisc currently handles this explicitly, it is fixed up accordingly,
      with its custom linker script removed. Since FTRACE_CALLSITE_SECTION is
      only defined when DYNAMIC_FTRACE is selected, the parisc module loading
      code is updated to only use the definition in that case. When
      DYNAMIC_FTRACE is not selected, modules shouldn't have this section, so
      this removes some redundant work in that case.
      
      To make sure that this is kept up-to-date for modules and the main
      kernel, a comment is added to vmlinux.lds.h, with the existing ifdeffery
      simplified for legibility.
      
      I built parisc generic-{32,64}bit_defconfig with DYNAMIC_FTRACE enabled,
      and verified that the section made it into the .ko files for modules.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Helge Deller <deller@gmx.de>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Sven Schnelle <svens@stackframe.org>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: linux-parisc@vger.kernel.org
      ftrace: add ftrace_init_nop() · fbf6c73c
      Mark Rutland authored
      
      
      Architectures may need to perform special initialization of ftrace
      callsites, and today they do so by special-casing ftrace_make_nop() when
      the expected branch address is MCOUNT_ADDR. In some cases (e.g. for
      patchable-function-entry), we don't have an mcount-like symbol and don't
      want a synthetic MCOUNT_ADDR, but we may need to perform some
      initialization of callsites.
      
      To make it possible to separate initialization from runtime
      modification, and to handle cases without an mcount-like symbol, this
      patch adds an optional ftrace_init_nop() function that architectures can
      implement, which does not pass a branch address.
      
      Where an architecture does not provide ftrace_init_nop(), we will fall
      back to the existing behaviour of calling ftrace_make_nop() with
      MCOUNT_ADDR.
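      The fallback can be expressed as a default inline in
      include/linux/ftrace.h (sketch):
      
      | #ifndef ftrace_init_nop
      | static inline int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
      | {
      | 	/* default: initializing is just nopping out from MCOUNT_ADDR */
      | 	return ftrace_make_nop(mod, rec, MCOUNT_ADDR);
      | }
      | #endif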
      
      At the same time, ftrace_code_disable() is renamed to
      ftrace_nop_initialize() to make it clearer that it is intended to
      initialize a callsite into a disabled state, and is not for disabling a
      callsite that has been runtime enabled. The kerneldoc description of rec
      arguments is updated to cover non-mcount callsites.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Sven Schnelle <svens@stackframe.org>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist · 918e1946
      Rich Wiley authored
      
      
      NVIDIA Carmel CPUs don't implement ID_AA64PFR0_EL1.CSV3 but
      aren't susceptible to Meltdown, so add Carmel to kpti_safe_list[].
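      In arch/arm64/kernel/cpufeature.c terms this is one more MIDR entry (a
      sketch of the surrounding table; the other entries are shown only for
      context):
      
      | static const struct midr_range kpti_safe_list[] = {
      | 	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
      | 	MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
      | 	MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),	/* new: CSV3 absent but safe */
      | 	{ /* sentinel */ }
      | };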
      
      Signed-off-by: Rich Wiley <rwiley@nvidia.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: mm: Remove MAX_USER_VA_BITS definition · 218564b1
      Bhupesh Sharma authored
      
      
      Commit 9b31cf49 ("arm64: mm: Introduce MAX_USER_VA_BITS definition")
      introduced the MAX_USER_VA_BITS definition, which was used to support
      the arm64 mm use-cases where user-space could use 52-bit virtual
      addresses whereas kernel-space was still limited to a maximum of 48-bit
      virtual addressing.
      
      But, now with commit b6d00d47 ("arm64: mm: Introduce 52-bit Kernel
      VAs"), we removed the 52-bit user/48-bit kernel kconfig option and hence
      there is no longer any scenario where user VA != kernel VA size
      (even with CONFIG_ARM64_FORCE_52BIT enabled, the same is true).
      
      Hence we can do away with the MAX_USER_VA_BITS macro as it is equal to
      VA_BITS (maximum VA space size) in all possible use-cases. Note that
      even though the 'vabits_actual' value would be 48 for arm64 hardware
      which doesn't support the 8.2-LVA extension (even when
      CONFIG_ARM64_VA_BITS_52 is enabled), VA_BITS would still be set to 52.
      Hence this change would be safe in all possible VA address space
      combinations.
      
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: linux-kernel@vger.kernel.org
      Cc: kexec@lists.infradead.org
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: mm: simplify the page end calculation in __create_pgd_mapping() · 32d18708
      Masahiro Yamada authored
      
      
      Calculate the page-aligned end address more simply.
      
      The local variable "length" is unneeded.
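      The simplification amounts to (a sketch; addr is virt rounded down to
      a page boundary):
      
      | 	/* before */
      | 	length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
      | 	end = addr + length;
      |
      | 	/* after */
      | 	end = PAGE_ALIGN(virt + size);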
      
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>