  1. Apr 02, 2020
    • arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature · e16e65a0
      Ard Biesheuvel authored

      When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with
      different permissions (r-x for .text, r-- for .rodata, rw- for .data,
      etc.) are rounded up to 2 MiB so they can be mapped more efficiently.
      In particular, it permits the segments to be mapped using level 2
      block entries when using 4k pages, which is expected to result in less
      TLB pressure.
      
      However, the mappings for the bulk of the kernel will use level 2
      entries anyway, and the misaligned fringes are organized such that they
      can take advantage of the contiguous bit, and use far fewer level 3
      entries than would be needed otherwise.
      
      This makes the value of this feature dubious at best, and since it is not
      enabled in defconfig or in the distro configs, it does not appear to be
      in wide use either. So let's just remove it.
      
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will@kernel.org>
      Acked-by: Laura Abbott <labbott@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Always force a branch protection mode when the compiler has one · b8fdef31
      Mark Brown authored
      Compilers with branch protection support can be configured to enable it
      by default; distributions are likely to do this as part of deploying
      branch protection system wide. Beyond the slight overhead of some extra
      NOPs for unused branch protection features, this can cause more serious
      problems when the kernel provides pointer authentication to userspace
      but is not itself built for pointer authentication. In that case our
      switching of keys for userspace can affect the kernel unexpectedly,
      causing pointer authentication instructions in the kernel to corrupt
      addresses.
      
      To ensure that we get consistent and reliable behaviour always explicitly
      initialise the branch protection mode, ensuring that the kernel is built
      the same way regardless of the compiler defaults.
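      The shape of the fix can be sketched as a Makefile fragment
      (illustrative only; the variable name is hypothetical, and the real
      patch probes compiler support for each mode rather than assuming it):

```make
# Always pass an explicit branch protection mode, so the kernel builds the
# same way regardless of the compiler's configured default.
branch-prot-flags-y := $(call cc-option,-mbranch-protection=none)

ifeq ($(CONFIG_ARM64_PTR_AUTH),y)
# When the kernel itself uses pointer authentication, sign return
# addresses instead of disabling branch protection outright.
branch-prot-flags-y := -mbranch-protection=pac-ret+leaf
endif

KBUILD_CFLAGS += $(branch-prot-flags-y)
```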
      
      Fixes: 75031975 ("arm64: add basic pointer authentication support")
      Reported-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      [catalin.marinas@arm.com: remove Kconfig option in favour of Makefile check]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. Apr 01, 2020
  3. Mar 26, 2020
  4. Mar 25, 2020
    • Merge branch 'for-next/kernel-ptrauth' into for-next/core · 44ca0e00
      Catalin Marinas authored
      * for-next/kernel-ptrauth:
        : Return address signing - in-kernel support
        arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
        lkdtm: arm64: test kernel pointer authentication
        arm64: compile the kernel with ptrauth return address signing
        kconfig: Add support for 'as-option'
        arm64: suspend: restore the kernel ptrauth keys
        arm64: __show_regs: strip PAC from lr in printk
        arm64: unwind: strip PAC from kernel addresses
        arm64: mask PAC bits of __builtin_return_address
        arm64: initialize ptrauth keys for kernel booting task
        arm64: initialize and switch ptrauth kernel keys
        arm64: enable ptrauth earlier
        arm64: cpufeature: handle conflicts based on capability
        arm64: cpufeature: Move cpu capability helpers inside C file
        arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
        arm64: install user ptrauth keys at kernel exit time
        arm64: rename ptrauth key structures to be user-specific
        arm64: cpufeature: add pointer auth meta-capabilities
        arm64: cpufeature: Fix meta-capability cpufeature check
    • Merge branch 'for-next/asm-cleanups' into for-next/core · 806dc825
      Catalin Marinas authored
      * for-next/asm-cleanups:
        : Various asm clean-ups (alignment, mov_q vs ldr, .idmap)
        arm64: move kimage_vaddr to .rodata
        arm64: use mov_q instead of literal ldr
    • Merge branch 'for-next/asm-annotations' into for-next/core · 0829a076
      Catalin Marinas authored
      * for-next/asm-annotations:
        : Modernise arm64 assembly annotations
        arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
        arm64: Mark call_smc_arch_workaround_1 as __maybe_unused
        arm64: entry-ftrace.S: Fix missing argument for CONFIG_FUNCTION_GRAPH_TRACER=y
        arm64: vdso32: Convert to modern assembler annotations
        arm64: vdso: Convert to modern assembler annotations
        arm64: sdei: Annotate SDEI entry points using new style annotations
        arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations
        arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs
        arm64: kvm: Annotate assembly using modern annoations
        arm64: kernel: Convert to modern annotations for assembly data
        arm64: head: Annotate stext and preserve_boot_args as code
        arm64: head.S: Convert to modern annotations for assembly functions
        arm64: ftrace: Modernise annotation of return_to_handler
        arm64: ftrace: Correct annotation of ftrace_caller assembly
        arm64: entry-ftrace.S: Convert to modern annotations for assembly functions
        arm64: entry: Additional annotation conversions for entry.S
        arm64: entry: Annotate ret_from_fork as code
        arm64: entry: Annotate vector table and handlers as code
        arm64: crypto: Modernize names for AES function macros
        arm64: crypto: Modernize some extra assembly annotations
    • Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei',... · da12d273
      Catalin Marinas authored
      Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei', 'for-next/amu', 'for-next/final-cap-helper', 'for-next/cpu_ops-cleanup', 'for-next/misc' and 'for-next/perf' into for-next/core
      
      * for-next/memory-hotremove:
        : Memory hot-remove support for arm64
        arm64/mm: Enable memory hot remove
        arm64/mm: Hold memory hotplug lock while walking for kernel page table dump
      
      * for-next/arm_sdei:
        : SDEI: fix double locking on return from hibernate and clean-up
        firmware: arm_sdei: clean up sdei_event_create()
        firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
        firmware: arm_sdei: fix possible double-lock on hibernate error path
        firmware: arm_sdei: fix double-lock on hibernate with shared events
      
      * for-next/amu:
        : ARMv8.4 Activity Monitors support
        clocksource/drivers/arm_arch_timer: validate arch_timer_rate
        arm64: use activity monitors for frequency invariance
        cpufreq: add function to get the hardware max frequency
        Documentation: arm64: document support for the AMU extension
        arm64/kvm: disable access to AMU registers from kvm guests
        arm64: trap to EL1 accesses to AMU counters from EL0
        arm64: add support for the AMU extension v1
      
      * for-next/final-cap-helper:
        : Introduce cpus_have_final_cap_helper(), migrate arm64 KVM to it
        arm64: kvm: hyp: use cpus_have_final_cap()
        arm64: cpufeature: add cpus_have_final_cap()
      
      * for-next/cpu_ops-cleanup:
        : cpu_ops[] access code clean-up
        arm64: Introduce get_cpu_ops() helper function
        arm64: Rename cpu_read_ops() to init_cpu_ops()
        arm64: Declare ACPI parking protocol CPU operation if needed
      
      * for-next/misc:
        : Various fixes and clean-ups
        arm64: define __alloc_zeroed_user_highpage
        arm64/kernel: Simplify __cpu_up() by bailing out early
        arm64: remove redundant blank for '=' operator
        arm64: kexec_file: Fixed code style.
        arm64: add blank after 'if'
        arm64: fix spelling mistake "ca not" -> "cannot"
        arm64: entry: unmask IRQ in el0_sp()
        arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI)
        arm64: csum: Optimise IPv6 header checksum
        arch/arm64: fix typo in a comment
        arm64: remove gratuitious/stray .ltorg stanzas
        arm64: Update comment for ASID() macro
        arm64: mm: convert cpu_do_switch_mm() to C
        arm64: fix NUMA Kconfig typos
      
      * for-next/perf:
        : arm64 perf updates
        arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
        KVM: arm64: limit PMU version to PMUv3 for ARMv8.1
        arm64: cpufeature: Extract capped perfmon fields
        arm64: perf: Clean up enable/disable calls
        perf: arm-ccn: Use scnprintf() for robustness
        arm64: perf: Support new DT compatibles
        arm64: perf: Refactor PMU init callbacks
        perf: arm_spe: Remove unnecessary zero check on 'nr_pages'
    • arm64: head: Convert install_el2_stub to SYM_INNER_LABEL · d4abd29d
      Mark Brown authored

      New assembly annotations have recently been introduced to make the way
      we describe symbols in assembly more consistent. The arm64 code was
      recently converted to use these, but install_el2_stub was missed.
      
      Signed-off-by: Mark Brown <broonie@kernel.org>
      [catalin.marinas@arm.com: changed to SYM_L_LOCAL]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Introduce get_cpu_ops() helper function · de58ed5e
      Gavin Shan authored

      This introduces get_cpu_ops() to return the CPU operations for a given
      CPU index. For now, it simply returns @cpu_ops[cpu] as before. A helper
      function, __cpu_try_die(), is also introduced and shared by cpu_die()
      and ipi_cpu_crash_stop(). No functional changes are intended.
      
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
    • arm64: Rename cpu_read_ops() to init_cpu_ops() · 6885fb12
      Gavin Shan authored

      This renames cpu_read_ops() to init_cpu_ops(), as the function is only
      called in the initialization phase. Subsequent patches will also
      introduce get_cpu_ops(), which retrieves the CPU operations for a given
      CPU index; with the old name, cpu_read_ops() and get_cpu_ops() would be
      hard to distinguish.
      
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Declare ACPI parking protocol CPU operation if needed · 7fec52bf
      Gavin Shan authored

      There is no need to declare the corresponding CPU operation when
      CONFIG_ARM64_ACPI_PARKING_PROTOCOL is disabled, even though leaving it
      declared does not cause any compile warnings.
      
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. Mar 24, 2020
  6. Mar 20, 2020
  7. Mar 18, 2020