  1. Sep 10, 2021
      arm64: kdump: Skip kmemleak scan reserved memory for kdump · 85f58eb1
      Chen Wandun authored
       Trying to boot with kdump + kmemleak, the following command results in a crash:
      "echo scan > /sys/kernel/debug/kmemleak"
      
      crashkernel reserved: 0x0000000007c00000 - 0x0000000027c00000 (512 MB)
      Kernel command line: BOOT_IMAGE=(hd1,gpt2)/vmlinuz-5.14.0-rc5-next-20210809+ root=/dev/mapper/ao-root ro rd.lvm.lv=ao/root rd.lvm.lv=ao/swap crashkernel=512M
      Unable to handle kernel paging request at virtual address ffff000007c00000
      Mem abort info:
        ESR = 0x96000007
        EC = 0x25: DABT (current EL), IL = 32 bits
        SET = 0, FnV = 0
        EA = 0, S1PTW = 0
        FSC = 0x07: level 3 translation fault
      Data abort info:
        ISV = 0, ISS = 0x00000007
        CM = 0, WnR = 0
      swapper pgtable: 64k pages, 48-bit VAs, pgdp=00002024f0d80000
      [ffff000007c00000] pgd=1800205ffffd0003, p4d=1800205ffffd0003, pud=1800205ffffd0003, pmd=1800205ffffc0003, pte=0068000007c00f06
      Internal error: Oops: 96000007 [#1] SMP
      pstate: 804000c9 (Nzcv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
      pc : scan_block+0x98/0x230
      lr : scan_block+0x94/0x230
      sp : ffff80008d6cfb70
      x29: ffff80008d6cfb70 x28: 0000000000000000 x27: 0000000000000000
      x26: 00000000000000c0 x25: 0000000000000001 x24: 0000000000000000
      x23: ffffa88a6b18b398 x22: ffff000007c00ff9 x21: ffffa88a6ac7fc40
      x20: ffffa88a6af6a830 x19: ffff000007c00000 x18: 0000000000000000
      x17: 0000000000000000 x16: 0000000000000000 x15: ffffffffffffffff
      x14: ffffffff00000000 x13: ffffffffffffffff x12: 0000000000000020
      x11: 0000000000000000 x10: 0000000001080000 x9 : ffffa88a6951c77c
      x8 : ffffa88a6a893988 x7 : ffff203ff6cfb3c0 x6 : ffffa88a6a52b3c0
      x5 : ffff203ff6cfb3c0 x4 : 0000000000000000 x3 : 0000000000000000
      x2 : 0000000000000001 x1 : ffff20226cb56a40 x0 : 0000000000000000
      Call trace:
       scan_block+0x98/0x230
       scan_gray_list+0x120/0x270
       kmemleak_scan+0x3a0/0x648
       kmemleak_write+0x3ac/0x4c8
       full_proxy_write+0x6c/0xa0
       vfs_write+0xc8/0x2b8
       ksys_write+0x70/0xf8
       __arm64_sys_write+0x24/0x30
       invoke_syscall+0x4c/0x110
       el0_svc_common+0x9c/0x190
       do_el0_svc+0x30/0x98
       el0_svc+0x28/0xd8
       el0t_64_sync_handler+0x90/0xb8
       el0t_64_sync+0x180/0x184
      
       The reserved memory for kdump is looked up by kmemleak, but this area
       is set invalid when the kdump service is brought up. That results in a
       crash when kmemleak scans this area.
      
       Fixes: a7259df7 ("memblock: make memblock_find_in_range method private")
       Signed-off-by: Chen Wandun <chenwandun@huawei.com>
       Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
       Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
       Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
       Link: https://lore.kernel.org/r/20210910064844.3827813-1-chenwandun@huawei.com

       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: mm: limit linear region to 51 bits for KVM in nVHE mode · 88053ec8
      Ard Biesheuvel authored
      KVM in nVHE mode divides up its VA space into two equal halves, and
      picks the half that does not conflict with the HYP ID map to map its
      linear region. This worked fine when the kernel's linear map itself was
      guaranteed to cover precisely as many bits of VA space, but this was
      changed by commit f4693c27 ("arm64: mm: extend linear region for
      52-bit VA configurations").
      
      The result is that, depending on the placement of the ID map, kernel-VA
      to hyp-VA translations may produce addresses that either conflict with
      other HYP mappings (including the ID map itself) or generate addresses
      outside of the 52-bit addressable range, neither of which is likely to
      lead to anything useful.
      
      Given that 52-bit capable cores are guaranteed to implement VHE, this
      only affects configurations such as pKVM where we opt into non-VHE mode
      even if the hardware is VHE capable. So just for these configurations,
      let's limit the kernel linear map to 51 bits and work around the
      problem.
      
       Fixes: f4693c27 ("arm64: mm: extend linear region for 52-bit VA configurations")
       Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
       Link: https://lore.kernel.org/r/20210826165613.60774-1-ardb@kernel.org

       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. Aug 31, 2021
      Merge remote-tracking branch 'tip/sched/arm64' into for-next/core · 65266a7c
      Catalin Marinas authored
      * tip/sched/arm64: (785 commits)
        Documentation: arm64: describe asymmetric 32-bit support
        arm64: Remove logic to kill 32-bit tasks on 64-bit-only cores
        arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0
        arm64: Advertise CPUs capable of running 32-bit applications in sysfs
        arm64: Prevent offlining first CPU with 32-bit EL0 on mismatched system
        arm64: exec: Adjust affinity for compat tasks with mismatched 32-bit EL0
        arm64: Implement task_cpu_possible_mask()
        sched: Introduce dl_task_check_affinity() to check proposed affinity
        sched: Allow task CPU affinity to be restricted on asymmetric systems
        sched: Split the guts of sched_setaffinity() into a helper function
        sched: Introduce task_struct::user_cpus_ptr to track requested affinity
        sched: Reject CPU affinity changes based on task_cpu_possible_mask()
        cpuset: Cleanup cpuset_cpus_allowed_fallback() use in select_fallback_rq()
        cpuset: Honour task_cpu_possible_mask() in guarantee_online_cpus()
        cpuset: Don't use the cpu_possible_mask as a last resort for cgroup v1
        sched: Introduce task_cpu_possible_mask() to limit fallback rq selection
        sched: Cgroup SCHED_IDLE support
        sched/topology: Skip updating masks for non-online nodes
        Linux 5.14-rc6
        lib: use PFN_PHYS() in devmem_is_allowed()
        ...
  3. Aug 26, 2021
      Merge branch 'for-next/entry' into for-next/core · 1a7f67e6
      Catalin Marinas authored
      * for-next/entry:
        : More entry.S clean-ups and conversion to C.
        arm64: entry: call exit_to_user_mode() from C
        arm64: entry: move bulk of ret_to_user to C
        arm64: entry: clarify entry/exit helpers
        arm64: entry: consolidate entry/exit helpers
      Merge branches 'for-next/mte', 'for-next/misc' and 'for-next/kselftest',... · 622909e5
      Catalin Marinas authored
      Merge branches 'for-next/mte', 'for-next/misc' and 'for-next/kselftest', remote-tracking branch 'arm64/for-next/perf' into for-next/core
      
      * arm64/for-next/perf:
        arm64/perf: Replace '0xf' instances with ID_AA64DFR0_PMUVER_IMP_DEF
      
      * for-next/mte:
        : Miscellaneous MTE improvements.
        arm64/cpufeature: Optionally disable MTE via command-line
        arm64: kasan: mte: remove redundant mte_report_once logic
        arm64: kasan: mte: use a constant kernel GCR_EL1 value
        arm64: avoid double ISB on kernel entry
        arm64: mte: optimize GCR_EL1 modification on kernel entry/exit
        Documentation: document the preferred tag checking mode feature
        arm64: mte: introduce a per-CPU tag checking mode preference
        arm64: move preemption disablement to prctl handlers
        arm64: mte: change ASYNC and SYNC TCF settings into bitfields
        arm64: mte: rename gcr_user_excl to mte_ctrl
        arm64: mte: avoid TFSRE0_EL1 related operations unless in async mode
      
      * for-next/misc:
        : Miscellaneous updates.
        arm64: Do not trap PMSNEVFR_EL1
        arm64: mm: fix comment typo of pud_offset_phys()
        arm64: signal32: Drop pointless call to sigdelsetmask()
        arm64/sve: Better handle failure to allocate SVE register storage
        arm64: Document the requirement for SCR_EL3.HCE
        arm64: head: avoid over-mapping in map_memory
        arm64/sve: Add a comment documenting the binutils needed for SVE asm
        arm64/sve: Add some comments for sve_save/load_state()
        arm64: replace in_irq() with in_hardirq()
        arm64: mm: Fix TLBI vs ASID rollover
        arm64: entry: Add SYM_CODE annotation for __bad_stack
        arm64: fix typo in a comment
        arm64: move the (z)install rules to arch/arm64/Makefile
        arm64/sve: Make fpsimd_bind_task_to_cpu() static
        arm64: unnecessary end 'return;' in void functions
        arm64/sme: Document boot requirements for SME
        arm64: use __func__ to get function name in pr_err
        arm64: SSBS/DIT: print SSBS and DIT bit when printing PSTATE
        arm64: cpufeature: Use defined macro instead of magic numbers
        arm64/kexec: Test page size support with new TGRAN range values
      
      * for-next/kselftest:
        : Kselftest additions for arm64.
        kselftest/arm64: signal: Add a TODO list for signal handling tests
        kselftest/arm64: signal: Add test case for SVE register state in signals
        kselftest/arm64: signal: Verify that signals can't change the SVE vector length
        kselftest/arm64: signal: Check SVE signal frame shows expected vector length
        kselftest/arm64: signal: Support signal frames with SVE register data
        kselftest/arm64: signal: Add SVE to the set of features we can check for
        kselftest/arm64: pac: Fix skipping of tests on systems without PAC
        kselftest/arm64: mte: Fix misleading output when skipping tests
        kselftest/arm64: Add a TODO list for floating point tests
        kselftest/arm64: Add tests for SVE vector configuration
        kselftest/arm64: Validate vector lengths are set in sve-probe-vls
        kselftest/arm64: Provide a helper binary and "library" for SVE RDVL
        kselftest/arm64: Ignore check_gcr_el1_cswitch binary
      arm64: Do not trap PMSNEVFR_EL1 · 50cb99fa
      Alexandru Elisei authored
      Commit 31c00d2a ("arm64: Disable fine grained traps on boot") zeroed
      the fine grained trap registers to prevent unwanted register traps from
       occurring. However, for the PMSNEVFR_EL1 register, the corresponding
      HDFG{R,W}TR_EL2.nPMSNEVFR_EL1 fields must be 1 to disable trapping. Set
      both fields to 1 if FEAT_SPEv1p2 is detected to disable read and write
      traps.
      
       Fixes: 31c00d2a ("arm64: Disable fine grained traps on boot")
      Cc: <stable@vger.kernel.org> # 5.13.x
       Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
       Reviewed-by: Mark Brown <broonie@kernel.org>
       Acked-by: Marc Zyngier <maz@kernel.org>
       Link: https://lore.kernel.org/r/20210824154523.906270-1-alexandru.elisei@arm.com

       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: mm: fix comment typo of pud_offset_phys() · 5845e703
      Xujun Leng authored
      
      
      Fix a typo in the comment of macro pud_offset_phys().
      
       Signed-off-by: Xujun Leng <lengxujun2007@126.com>
       Link: https://lore.kernel.org/r/20210825150526.12582-1-lengxujun2007@126.com

       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      arm64: signal32: Drop pointless call to sigdelsetmask() · 24de5838
      Will Deacon authored
       Commit 77097ae5 ("most of set_current_blocked() callers want
       SIGKILL/SIGSTOP removed from set") extended set_current_blocked() to
      remove SIGKILL and SIGSTOP from the new signal set and updated all
      callers accordingly.
      
      Unfortunately, this collided with the merge of the arm64 architecture,
      which duly removes these signals when restoring the compat sigframe, as
      this was what was previously done by arch/arm/.
      
      Remove the redundant call to sigdelsetmask() from
      compat_restore_sigframe().
      
       Reported-by: Al Viro <viro@zeniv.linux.org.uk>
       Signed-off-by: Will Deacon <will@kernel.org>
       Link: https://lore.kernel.org/r/20210825093911.24493-1-will@kernel.org

       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>