  1. Nov 29, 2016
    • Merge Will Deacon's for-next/perf branch into for-next/core · 00cc2e07
      Catalin Marinas authored
      * will/for-next/perf:
        selftests: arm64: add test for unaligned/inexact watchpoint handling
        arm64: Allow hw watchpoint of length 3,5,6 and 7
        arm64: hw_breakpoint: Handle inexact watchpoint addresses
        arm64: Allow hw watchpoint at varied offset from base address
        hw_breakpoint: Allow watchpoint of length 3,5,6 and 7
    • arm64: head.S: Fix CNTHCTL_EL2 access on VHE system · 1650ac49
      Jintack authored
      
      
      The bit positions of CNTHCTL_EL2 change depending on the HCR_EL2.E2H
      bit. EL1PCEN and EL1PCTEN are bits 1 and 0 when E2H is not set, but
      bits 11 and 10 respectively when E2H is set. The current code
      unintentionally sets the wrong bits in CNTHCTL_EL2 when E2H is set.
      
      In fact, we don't need to set those two bits, which allow EL1 and
      EL0 to access the physical timer and counter respectively, if E2H
      and TGE are set for the host kernel. They will be configured later
      as necessary. First, we don't need to configure those bits for EL1,
      since the host kernel runs in EL2. It is the hypervisor's
      responsibility to configure them before entering a VM, which runs
      in EL0 and EL1. Second, EL0 accesses are configured at a later
      stage of the boot process.
      
      Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. Nov 22, 2016
    • arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN · ba42822a
      Catalin Marinas authored
      
      
      This patch adds the Kconfig option to enable support for TTBR0 PAN
      emulation. The option is off by default because of a slight performance hit
      when enabled, caused by the additional TTBR0_EL1 switching during user
      access operations or exception entry/exit code.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: xen: Enable user access before a privcmd hvc call · 9cf09d68
      Catalin Marinas authored
      
      
      Privcmd calls are issued by userspace. The kernel needs to enable
      access to TTBR0_EL1 as the hypervisor would issue stage 1 translations
      to user memory via AT instructions. Since AT instructions are not
      affected by the PAN bit (ARMv8.1), we only need the explicit
      uaccess_enable/disable if the TTBR0 PAN option is enabled.
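
      A rough C-level sketch of the resulting shape (the actual change
      wraps the hvc instruction in assembly; do_privcmd_hvc() below is a
      hypothetical stand-in):

        static long privcmd_call_sketch(unsigned int call, unsigned long a1)
        {
                long ret;

                uaccess_enable();               /* restore the task's TTBR0_EL1 */
                ret = do_privcmd_hvc(call, a1); /* hypothetical hvc wrapper */
                uaccess_disable();              /* back to the reserved zero page */

                return ret;
        }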
      
      Reviewed-by: Julien Grall <julien.grall@arm.com>
      Acked-by: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Handle faults caused by inadvertent user access with PAN enabled · 78688963
      Catalin Marinas authored
      
      
      When TTBR0_EL1 is set to the reserved page, an erroneous kernel access
      to user space would generate a translation fault. This patch adds
      checks for the software-set PSR_PAN_BIT to emulate a permission fault
      and report it accordingly.
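
      A minimal sketch of the check (the helper name and exact condition
      are illustrative, not the kernel's):

        static bool is_emulated_pan_fault(struct pt_regs *regs, unsigned long addr)
        {
                /* software PAN: TTBR0_EL1 points at the reserved page and the
                 * saved pstate carries PSR_PAN_BIT for the interrupted context */
                return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
                       (regs->pstate & PSR_PAN_BIT) &&
                       addr < TASK_SIZE;
        }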
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Disable TTBR0_EL1 during normal kernel execution · 39bc88e5
      Catalin Marinas authored
      
      
      When the TTBR0 PAN feature is enabled, the kernel entry points need to
      disable access to TTBR0_EL1. The PAN status of the interrupted context
      is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
      Restoring access to TTBR0_EL1 is done on exception return if returning
      to user or returning to a context where PAN was disabled.
      
      Context switching via switch_mm() must defer the update of TTBR0_EL1
      until a return to user or an explicit uaccess_enable() call.
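
      A sketch of the deferred update (the field and helper shapes are
      approximate):

        static inline void update_saved_ttbr0_sketch(struct task_struct *tsk,
                                                     struct mm_struct *mm)
        {
                /* Stash the new user page table; TTBR0_EL1 itself keeps
                 * pointing at the reserved page until return to user or
                 * uaccess_enable(). */
                task_thread_info(tsk)->ttbr0 = virt_to_phys(mm->pgd);
        }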
      
      Special care needs to be taken for two cases where TTBR0_EL1 is set
      outside the normal kernel context switch operation: EFI run-time
      services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
      Code has been added to avoid deferred TTBR0_EL1 switching as in
      switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
      special TTBR0_EL1.
      
      User cache maintenance (user_cache_maint_handler and
      __flush_cache_user_range) needs TTBR0_EL1 reinstated, since the
      operations are performed on user virtual addresses.
      
      This patch also removes a stale comment on the switch_mm() function.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 · 4b65a5db
      Catalin Marinas authored
      
      
      This patch adds the uaccess macros/functions to disable access to user
      space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
      written to TTBR0_EL1 must be a physical address, for simplicity this
      patch introduces a reserved_ttbr0 page at a constant offset from
      swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
      adjusted by the reserved_ttbr0 offset.
      
      Enabling access to user space is done by restoring TTBR0_EL1 with the value
      from the struct thread_info ttbr0 variable. Interrupts must be disabled
      during the uaccess_ttbr0_enable code to ensure the atomicity of the
      thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
      get_thread_info asm macro from entry.S to assembler.h for reuse in the
      uaccess_ttbr0_* macros.
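
      A C-level sketch of the pair (the kernel implements these as asm
      macros with alternatives; RESERVED_TTBR0_OFFSET is a hypothetical
      name for the constant offset described above):

        static inline void uaccess_ttbr0_disable_sketch(void)
        {
                unsigned long ttbr;

                /* reserved_ttbr0 lies at a constant offset from
                 * swapper_pg_dir, so it can be derived from TTBR1_EL1 */
                ttbr = read_sysreg(ttbr1_el1) + RESERVED_TTBR0_OFFSET;
                write_sysreg(ttbr, ttbr0_el1);
                isb();
        }

        static inline void uaccess_ttbr0_enable_sketch(void)
        {
                unsigned long flags;

                /* irqs off so the thread_info.ttbr0 read and the
                 * TTBR0_EL1 write are atomic w.r.t. context switch */
                local_irq_save(flags);
                write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
                isb();
                local_irq_restore(flags);
        }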
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro · f33bcf03
      Catalin Marinas authored
      
      
      This patch takes the errata workaround code out of cpu_do_switch_mm into
      a dedicated post_ttbr0_update_workaround macro which will be reused in a
      subsequent patch.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Factor out PAN enabling/disabling into separate uaccess_* macros · bd38967d
      Catalin Marinas authored
      
      
      This patch moves the directly coded alternatives for turning PAN on/off
      into separate uaccess_{enable,disable} macros or functions. The asm
      macros take a few arguments which will be used in subsequent patches.
      
      Note that any (unlikely) access that the compiler might generate between
      uaccess_enable() and uaccess_disable(), other than those explicitly
      specified by the user access code, will not be protected by PAN.
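
      The previously open-coded alternatives have roughly this shape (a
      sketch; the real macros also take the extra arguments mentioned
      above):

        static inline void uaccess_enable_sketch(void)
        {
                /* clear PSTATE.PAN, but only on CPUs that implement PAN */
                asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
                                CONFIG_ARM64_PAN));
        }

        static inline void uaccess_disable_sketch(void)
        {
                /* set PSTATE.PAN again once the user access is done */
                asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
                                CONFIG_ARM64_PAN));
        }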
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Update the synchronous external abort fault description · a8ada146
      Catalin Marinas authored
      
      
      This patch updates the description of the synchronous external aborts on
      translation table walks.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. Nov 19, 2016
    • selftests: arm64: add test for unaligned/inexact watchpoint handling · f43365ee
      Pratyush Anand authored
      
      
      ARM64 hardware expects a 64-bit aligned base address for a
      watchpoint. However, it provides a byte-selection mechanism to
      watch any set of 1-8 consecutive bytes within that double word.
      
      This patch adds tests for all such byte-selection options with
      different memory write sizes.
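
      Conceptually, the test sweeps every contiguous byte window in a
      double word (run_test() is a stand-in for the selftest's per-case
      routine):

        int wr_size, len, off;

        for (wr_size = 1; wr_size <= 32; wr_size *= 2)          /* write size */
                for (len = 1; len <= 8; len++)                  /* wp length */
                        for (off = 0; off + len <= 8; off++)    /* byte offset */
                                run_test(base + off, len, wr_size);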
      
      The patch also adds a test for handling the case when the cpu does not
      report an address which exactly matches one of the regions we have
      been watching (which is a situation permitted by the spec if an
      instruction accesses both watched and unwatched regions). The test
      was failing on a MSM8996pro before this patch series and is
      passing now.
      
      Signed-off-by: Pavel Labath <labath@google.com>
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Allow hw watchpoint of length 3,5,6 and 7 · 0ddb8e0b
      Pratyush Anand authored
      
      
      Since arm64 can support a watchpoint at any byte offset within a
      double word, support the other lengths within that range as well.
      
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: hw_breakpoint: Handle inexact watchpoint addresses · fdfeff0f
      Pavel Labath authored
      
      
      Arm64 hardware does not always report a watchpoint hit address that
      matches one of the watchpoints set. It can also report an address
      "near" the watchpoint if a single instruction access both watched and
      unwatched addresses. There is no straight-forward way, short of
      disassembling the offending instruction, to map that address back to
      the watchpoint.
      
      Previously, when the hardware reported a watchpoint hit on an address
      that did not match our watchpoint (this happens in case of instructions
      which access large chunks of memory such as "stp") the process would
      enter a loop where we would be continually resuming it (because we did
      not recognise that watchpoint hit) and it would keep hitting the
      watchpoint again and again. The tracing process would never get
      notified of the watchpoint hit.
      
      This commit fixes the problem by looking at the watchpoints near the
      address reported by the hardware. If the address does not exactly match
      one of the watchpoints we have set, it attributes the hit to the
      nearest watchpoint we have.  This heuristic is a bit dodgy, but I don't
      think we can do much more, given the hardware limitations.
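
      In outline, the handler now does something like the following when
      no exact match is found (names are approximate):

        u64 min_dist = ~0ULL;
        int i, closest = -1;

        for (i = 0; i < core_num_wrps; i++) {
                u64 dist = distance_from_watchpoint(addr, slots[i]);

                if (dist < min_dist) {
                        min_dist = dist;
                        closest = i;    /* attribute the hit to the nearest one */
                }
        }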
      
      Signed-off-by: Pavel Labath <labath@google.com>
      [panand: reworked to rebase on his patches]
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      [will: use __ffs instead of ffs - 1]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Allow hw watchpoint at varied offset from base address · b08fb180
      Pratyush Anand authored
      
      
      ARM64 hardware supports a watchpoint at any double-word-aligned
      address. However, it can select any consecutive bytes from offset
      0 to 7 from that base address. For example, if the base address is
      programmed as 0x420030 and the byte select is 0x1C, then accesses
      to 0x420032, 0x420033 and 0x420034 will generate a watchpoint
      exception.

      Currently, we do not have such modularity. We can only program
      byte, halfword, word and double-word access exceptions from any
      base address.

      This patch adds support to overcome the above limitations.
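
      The byte-address-select (BAS) mask for an arbitrary (addr, len)
      pair can be derived as in the sketch below; for addr 0x420032 and
      len 3 this yields 0b111 << 2 = 0x1C, matching the example above:

        /* len consecutive bits, shifted by the offset within the double word */
        u8 bas = ((1U << len) - 1) << (addr & 0x7);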
      
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • hw_breakpoint: Allow watchpoint of length 3,5,6 and 7 · 651be3cb
      Pratyush Anand authored
      
      
      We only support breakpoints/watchpoints of length 1, 2, 4 and 8.
      If we can support other lengths as well, then a user can watch
      more data with fewer watchpoints (provided the hardware supports
      it). For example, to watch only the 4th, 5th and 6th bytes from a
      64-bit aligned address, we currently have to use two slots: one
      slot watches a halfword at offset 4, the other a byte at offset 6.
      With a watchpoint of length 3 we could watch them with a single
      slot.

      ARM64 hardware does support such functionality; therefore, add
      these new definitions in the generic layer.
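
      The new definitions presumably mirror the existing
      HW_BREAKPOINT_LEN_{1,2,4,8} values in <uapi/linux/hw_breakpoint.h>:

        #define HW_BREAKPOINT_LEN_3     3
        #define HW_BREAKPOINT_LEN_5     5
        #define HW_BREAKPOINT_LEN_6     6
        #define HW_BREAKPOINT_LEN_7     7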
      
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. Nov 12, 2016
    • arm64: split thread_info from task stack · c02433dd
      Mark Rutland authored
      
      
      This patch moves arm64's struct thread_info from the task stack into
      task_struct. This protects thread_info from corruption in the case of
      stack overflows, and makes its address harder to determine if stack
      addresses are leaked, making a number of attacks more difficult. Precise
      detection and handling of overflow is left for subsequent patches.
      
      Largely, this involves changing code to store the task_struct in sp_el0,
      and acquire the thread_info from the task struct. Core code now
      implements current_thread_info(), and as noted in <linux/sched.h> this
      relies on offsetof(task_struct, thread_info) == 0, enforced by core
      code.
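
      The core definition this relies on is the zero-offset cast:

        /* <linux/thread_info.h>, with CONFIG_THREAD_INFO_IN_TASK:
         * thread_info is the first member of task_struct, so the cast
         * is free */
        #define current_thread_info() ((struct thread_info *)current)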
      
      This change means that the 'tsk' register used in entry.S now points to
      a task_struct, rather than a thread_info as it used to. To make this
      clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets
      appropriately updated to account for the structural change.
      
      Userspace clobbers sp_el0, and we can no longer restore this from the
      stack. Instead, the current task is cached in a per-cpu variable that we
      can safely access from early assembly as interrupts are disabled (and we
      are thus not preemptible).
      
      Both secondary entry and idle are updated to stash the sp and task
      pointer separately.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: assembler: introduce ldr_this_cpu · 1b7e2296
      Mark Rutland authored
      
      
      Shortly we will want to load a percpu variable in the return from
      userspace path. We can save an instruction by folding the addition of
      the percpu offset into the load instruction, and this patch adds a new
      helper to do so.
      
      At the same time, we clean up this_cpu_ptr for consistency. As with
      {adr,ldr,str}_l, we change the template to take the destination register
      first, and name this dst. Secondly, we rename the macro to adr_this_cpu,
      following the scheme of adr_l, and matching the newly added
      ldr_this_cpu.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: make cpu number a percpu variable · 57c82954
      Mark Rutland authored
      
      
      In the absence of CONFIG_THREAD_INFO_IN_TASK, core code maintains
      thread_info::cpu, and low-level architecture code can access this to
      build raw_smp_processor_id(). With CONFIG_THREAD_INFO_IN_TASK, core code
      maintains task_struct::cpu, which for reasons of header soup is not
      accessible to low-level arch code.
      
      Instead, we can maintain a percpu variable containing the cpu number.
      
      For both the old and new implementation of raw_smp_processor_id(), we
      read a sysreg into a GPR, add an offset, and load the result. As the
      offset is now larger, it may not be folded into the load, but otherwise
      the assembly shouldn't change much.
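
      A sketch of the new implementation (names as described in the
      series; treat the exact spelling as approximate):

        DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);

        #define raw_smp_processor_id() (*raw_cpu_ptr(&cpu_number))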
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: smp: prepare for smp_processor_id() rework · 580efaa7
      Mark Rutland authored
      
      
      Subsequent patches will make smp_processor_id() use a percpu variable.
      This will make smp_processor_id() dependent on the percpu offset, and
      thus we cannot use smp_processor_id() to figure out what to initialise
      the offset to.
      
      Prepare for this by initialising the percpu offset based on
      current::cpu, which will work regardless of how smp_processor_id() is
      implemented. Also, make this relationship obvious by placing this code
      together at the start of secondary_start_kernel().
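
      In outline, the start of secondary_start_kernel() roughly becomes:

        /* current::cpu rather than smp_processor_id() */
        unsigned int cpu = task_cpu(current);

        set_my_cpu_offset(per_cpu_offset(cpu));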
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: move sp_el0 and tpidr_el1 into cpu_suspend_ctx · 623b476f
      Mark Rutland authored
      
      
      When returning from idle, we rely on the fact that thread_info lives at
      the end of the kernel stack, and restore this by masking the saved stack
      pointer. Subsequent patches will sever the relationship between the
      stack and thread_info, and to cater for this we must save/restore sp_el0
      explicitly, storing it in cpu_suspend_ctx.
      
      As cpu_suspend_ctx must be doubleword aligned, this leaves us with an
      extra slot in cpu_suspend_ctx. We can use this to save/restore tpidr_el1
      in the same way, which simplifies the code, avoiding pointer chasing on
      the restore path (as we no longer need to load thread_info::cpu followed
      by the relevant slot in __per_cpu_offset based on this).
      
      This patch stashes both registers in cpu_suspend_ctx.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: prep stack walkers for THREAD_INFO_IN_TASK · 9bbd4c56
      Mark Rutland authored
      
      
      When CONFIG_THREAD_INFO_IN_TASK is selected, task stacks may be freed
      before a task is destroyed. To account for this, the stacks are
      refcounted, and when manipulating the stack of another task, it is
      necessary to get/put the stack to ensure it isn't freed and/or re-used
      while we do so.
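
      The walkers therefore gain brackets of this shape (a sketch;
      print_entry is a hypothetical callback):

        if (!try_get_task_stack(tsk))
                return;         /* the stack has already been freed */

        walk_stackframe(tsk, &frame, print_entry, NULL);
        put_task_stack(tsk);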
      
      This patch reworks the arm64 stack walking code to account for this.
      When CONFIG_THREAD_INFO_IN_TASK is not selected these perform no
      refcounting, and this should only be a structural change that does not
      affect behaviour.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: unexport walk_stackframe · 2020a5ae
      Mark Rutland authored
      
      
      The walk_stackframe function is architecture-specific, with a varying
      prototype, and common code should not use it directly. None of its
      current users can be built as modules. With THREAD_INFO_IN_TASK, users
      will also need to hold a stack reference before calling it.
      
      There's no reason for it to be exported, and it's very easy to misuse,
      so unexport it for now.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: traps: simplify die() and __die() · 876e7a38
      Mark Rutland authored
      
      
      In arm64's die and __die routines we pass around a thread_info, and
      subsequently use this to determine the relevant task_struct, and the end
      of the thread's stack. Subsequent patches will decouple thread_info from
      the stack, and this approach will no longer work.
      
      To figure out the end of the stack, we can use the new generic
      end_of_stack() helper. As we only call __die() from die(), and die()
      always deals with the current task, we can remove the parameter and have
      both acquire current directly, which also makes it clear that __die
      can't be called for arbitrary tasks.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: factor out current_stack_pointer · a9ea0017
      Mark Rutland authored
      
      
      We define current_stack_pointer in <asm/thread_info.h>, though other
      files and headers relying upon it do not have this necessary include, and
      are thus fragile to changes in the header soup.
      
      Subsequent patches will affect the header soup such that directly
      including <asm/thread_info.h> may result in a circular header include in
      some of these cases, so we can't simply include <asm/thread_info.h>.
      
      Instead, factor current_stack_pointer into its own header, and have all
      existing users include this explicitly.
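
      The definition itself is a named register variable, now in its own
      header:

        /* moved to <asm/stack_pointer.h> */
        register unsigned long current_stack_pointer asm ("sp");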
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: asm-offsets: remove unused definitions · 3fe12da4
      Mark Rutland authored
      
      
      Subsequent patches will move the thread_info::{task,cpu} fields, and the
      current TI_{TASK,CPU} offset definitions are not used anywhere.
      
      This patch removes the redundant definitions.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: thread_info remove stale items · dcbe0285
      Mark Rutland authored
      
      
      We have a comment claiming __switch_to() cares about where cpu_context
      is located relative to cpu_domain in thread_info. However arm64 has
      never had a thread_info::cpu_domain field, and neither __switch_to nor
      cpu_switch_to care where the cpu_context field is relative to others.
      
      Additionally, the init_thread_info alias is never used anywhere in the
      kernel, and will shortly become problematic when thread_info is moved
      into task_struct.
      
      This patch removes both.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • thread_info: include <current.h> for THREAD_INFO_IN_TASK · dc3d2a67
      Mark Rutland authored
      
      
      When CONFIG_THREAD_INFO_IN_TASK is selected, the current_thread_info()
      macro relies on current having been defined prior to its use. However,
      not all users of current_thread_info() include <asm/current.h>, and thus
      current is not guaranteed to be defined.
      
      When CONFIG_THREAD_INFO_IN_TASK is not selected, it's possible that
      get_current() / current are based upon current_thread_info(), and
      <asm/current.h> includes <asm/thread_info.h>. Thus always including
      <asm/current.h> would result in circular dependencies on some platforms.
      
      To ensure both cases work, this patch includes <asm/current.h>, but only
      when CONFIG_THREAD_INFO_IN_TASK is selected.
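
      The resulting shape in <linux/thread_info.h> is roughly:

        #ifdef CONFIG_THREAD_INFO_IN_TASK
        #include <asm/current.h>
        #define current_thread_info() ((struct thread_info *)current)
        #endif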
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • thread_info: factor out restart_block · 53d74d05
      Mark Rutland authored
      Since commit f56141e3 ("all arches, signal: move restart_block
      to struct task_struct"), thread_info and restart_block have been
      logically distinct, yet struct restart_block is still defined in
      <linux/thread_info.h>.
      
      At least one architecture (erroneously) uses restart_block as part of
      its thread_info, and thus the definition of restart_block must come
      before the include of <asm/thread_info.h>. Subsequent patches in this
      series need to shuffle the order of includes and definitions in
      <linux/thread_info.h>, and will make this ordering fragile.
      
      This patch moves the definition of restart_block out to its own header.
      This serves as generic cleanup, logically separating thread_info and
      restart_block, and also makes it easier to avoid fragility.
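
      The moved definition keeps its existing shape, roughly:

        /* now in <linux/restart_block.h> */
        struct restart_block {
                long (*fn)(struct restart_block *);
                /* ... plus a union of syscall-specific state
                 * (futex, nanosleep, poll) ... */
        };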
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>