Commit e5e726f7 authored by Linus Torvalds

Merge tag 'locking-core-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking and atomics updates from Thomas Gleixner:
 "The regular pile:

   - A few improvements to the mutex code

   - Documentation updates for atomics to clarify the difference between
     cmpxchg() and try_cmpxchg() and to explain the forward progress
     expectations.

   - Simplification of the atomics fallback generator

   - The addition of arch_atomic_long*() variants and generic arch_*()
     bitops based on them.

   - Add the missing might_sleep() invocations to the down*() operations
     of semaphores.

  The PREEMPT_RT locking core:

   - Scheduler updates to support the state preserving mechanism for
     'sleeping' spin- and rwlocks on RT.

     This mechanism carefully preserves the state of the task when it
     blocks on a 'sleeping' spin- or rwlock and takes regular wake-ups
     targeted at the same task into account. The preserved or updated
     (via a regular wakeup) state is restored when the lock has been
     acquired.

   - Restructuring of the rtmutex code so it can be utilized and
     extended for the RT specific lock variants.

   - Restructuring of the ww_mutex code to allow sharing of the ww_mutex
     specific functionality for rtmutex based ww_mutexes.

   - Header file disentangling to allow substitution of the regular lock
     implementations with the PREEMPT_RT variants without creating an
     unmaintainable #ifdef mess.

   - Shared base code for the PREEMPT_RT specific rw_semaphore and
     rwlock implementations.

     Unlike the regular rw_semaphores and rwlocks, the PREEMPT_RT
     implementation is writer-unfair, because it is infeasible to do
     priority inheritance for multiple readers. Experience over the years
     has shown that real-time workloads are not typically sensitive to
     writer starvation.

     The alternative solution, allowing only a single reader, has been
     tried and discarded because it is a major bottleneck, especially
     for mmap_sem. Aside from that, many of the writer-starvation
     critical usage sites have been converted to a writer-side
     mutex/spinlock plus RCU read-side protection over the past decade,
     so the issue is less prominent than it used to be.

   - The actual rtmutex based lock substitutions for PREEMPT_RT enabled
     kernels which affect mutex, ww_mutex, rw_semaphore, spinlock_t and
     rwlock_t. The spin/rw_lock*() functions disable migration across
     the critical section to preserve the existing semantics vs per-CPU
     variables.

   - Rework of the futex REQUEUE_PI mechanism to handle the case of
     early wake-ups which interleave with a re-queue operation to
     prevent the situation where a task would be blocked on both the
     rtmutex associated with the outer futex and the rtmutex-based hash
     bucket spinlock.

     While this situation cannot happen on !RT enabled kernels, the
     changes make the underlying concurrency problems easier to
     understand in general. As a result the difference between !RT and
     RT kernels is reduced to the handling of waiting for the critical
     section. !RT kernels simply spin-wait as before and RT kernels
     utilize rcu_wait().

   - The PREEMPT_RT substitution of local_lock with a spinlock which
     protects the critical section while staying preemptible. The CPU
     locality is established by disabling migration.

  The underlying concepts of this code have been in use in PREEMPT_RT for
  well over a decade. The code has been refactored several times over
  the years and this final incarnation has been optimized once again to be
  as non-intrusive as possible, i.e. the RT specific parts are mostly
  isolated.

  It has been extensively tested in the 5.14-rt patch series and it has
  been verified that !RT kernels are not affected by these changes"

* tag 'locking-core-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (92 commits)
  locking/rtmutex: Return success on deadlock for ww_mutex waiters
  locking/rtmutex: Prevent spurious EDEADLK return caused by ww_mutexes
  locking/rtmutex: Dequeue waiter on ww_mutex deadlock
  locking/rtmutex: Dont dereference waiter lockless
  locking/semaphore: Add might_sleep() to down_*() family
  locking/ww_mutex: Initialize waiter.ww_ctx properly
  static_call: Update API documentation
  locking/local_lock: Add PREEMPT_RT support
  locking/spinlock/rt: Prepare for RT local_lock
  locking/rtmutex: Add adaptive spinwait mechanism
  locking/rtmutex: Implement equal priority lock stealing
  preempt: Adjust PREEMPT_LOCK_OFFSET for RT
  locking/rtmutex: Prevent lockdep false positive with PI futexes
  futex: Prevent requeue_pi() lock nesting issue on RT
  futex: Simplify handle_early_requeue_pi_wakeup()
  futex: Reorder sanity checks in futex_requeue()
  futex: Clarify comment in futex_requeue()
  futex: Restructure futex_requeue()
  futex: Correct the number of requeued waiters for PI
  futex: Remove bogus condition for requeue PI
  ...
parents 08403e21 a055fcc1
+94 −0
@@ -271,3 +271,97 @@ WRITE_ONCE. Thus:
			SC *y, t;

is allowed.


CMPXCHG vs TRY_CMPXCHG
----------------------

  int atomic_cmpxchg(atomic_t *ptr, int old, int new);
  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new);

Both provide the same functionality, but try_cmpxchg() can lead to more
compact code. The functions relate as follows:

  bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new)
  {
    int ret, old = *oldp;
    ret = atomic_cmpxchg(ptr, old, new);
    if (ret != old)
      *oldp = ret;
    return ret == old;
  }

and:

  int atomic_cmpxchg(atomic_t *ptr, int old, int new)
  {
    (void)atomic_try_cmpxchg(ptr, &old, new);
    return old;
  }

Usage:

  old = atomic_read(&v);			old = atomic_read(&v);
  for (;;) {					do {
    new = func(old);				  new = func(old);
    tmp = atomic_cmpxchg(&v, old, new);		} while (!atomic_try_cmpxchg(&v, &old, new));
    if (tmp == old)
      break;
    old = tmp;
  }

NB. try_cmpxchg() also generates better code on some platforms (notably x86)
where the function more closely matches the hardware instruction.
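
As a worked example of the try_cmpxchg() form above -- an illustrative
sketch, not part of this patch, with a made-up function name -- consider
a saturating increment:

  static inline void atomic_inc_saturating(atomic_t *v)
  {
    int old = atomic_read(v);

    do {
      if (old == INT_MAX)
        return;		/* already saturated, do not wrap */
    } while (!atomic_try_cmpxchg(v, &old, old + 1));
  }

On failure atomic_try_cmpxchg() refreshes 'old' with the current value of
the variable, so the INT_MAX check is always made against fresh data.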


FORWARD PROGRESS
----------------

In general strong forward progress is expected of all unconditional atomic
operations -- those in the Arithmetic and Bitwise classes and xchg(). However
a fair amount of code also requires forward progress from the conditional
atomic operations.

Specifically 'simple' cmpxchg() loops are expected to not starve one another
indefinitely. However, this is not evident on LL/SC architectures, because
while an LL/SC architecture 'can/should/must' provide forward progress
guarantees between competing LL/SC sections, such a guarantee does not
transfer to cmpxchg() implemented using LL/SC. Consider:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!atomic_try_cmpxchg(&v, &old, new));

which on LL/SC becomes something like:

  old = atomic_read(&v);
  do {
    new = func(old);
  } while (!({
    asm volatile ("1: LL  %[oldval], %[v]\n"
                  "   CMP %[oldval], %[old]\n"
                  "   BNE 2f\n"
                  "   SC  %[new], %[v]\n"
                  "   BNE 1b\n"
                  "2:\n"
                  : [oldval] "=&r" (oldval), [v] "m" (v)
                  : [old] "r" (old), [new] "r" (new)
                  : "memory");
    success = (oldval == old);
    if (!success)
      old = oldval;
    success; }));

However, even the forward branch from the failed compare can cause the LL/SC
to fail on some architectures, let alone whatever the compiler makes of the C
loop body. As a result there is no guarantee whatsoever that the cacheline
containing @v will stay on the local CPU and that progress is made.

Even native CAS architectures can fail to provide forward progress for their
primitive (See Sparc64 for an example).

Such implementations are strongly encouraged to add exponential backoff loops
to a failed CAS in order to ensure some progress. Affected architectures are
also strongly encouraged to inspect/audit the atomic fallbacks, refcount_t and
their locking primitives.
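
As an illustration of such a backoff loop -- a sketch, not code from this
merge; cpu_relax() is the usual spin-wait hint and the cap of 64 is an
arbitrary choice:

  old = atomic_read(&v);
  delay = 1;
  while (!atomic_try_cmpxchg(&v, &old, func(old))) {
    for (i = 0; i < delay; i++)
      cpu_relax();	/* back off before retrying the CAS */
    if (delay < 64)
      delay <<= 1;	/* exponential: 1, 2, 4, ... capped at 64 */
  }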
+2 −2
@@ -1904,8 +1904,8 @@ int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type)
	dev_dbg(isp->dev, "Stop stream on pad %d for asd%d\n",
		atomisp_subdev_source_pad(vdev), asd->index);

-	BUG_ON(!rt_mutex_is_locked(&isp->mutex));
-	BUG_ON(!mutex_is_locked(&isp->streamoff_mutex));
+	lockdep_assert_held(&isp->mutex);
+	lockdep_assert_held(&isp->streamoff_mutex);

	if (type != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
		dev_dbg(isp->dev, "unsupported v4l2 buf type\n");

include/asm-generic/atomic-long.h

deleted file mode 100644
+0 −1014
// SPDX-License-Identifier: GPL-2.0

// Generated by scripts/atomic/gen-atomic-long.sh
// DO NOT MODIFY THIS FILE DIRECTLY

#ifndef _ASM_GENERIC_ATOMIC_LONG_H
#define _ASM_GENERIC_ATOMIC_LONG_H

#include <linux/compiler.h>
#include <asm/types.h>

#ifdef CONFIG_64BIT
typedef atomic64_t atomic_long_t;
#define ATOMIC_LONG_INIT(i)		ATOMIC64_INIT(i)
#define atomic_long_cond_read_acquire	atomic64_cond_read_acquire
#define atomic_long_cond_read_relaxed	atomic64_cond_read_relaxed
#else
typedef atomic_t atomic_long_t;
#define ATOMIC_LONG_INIT(i)		ATOMIC_INIT(i)
#define atomic_long_cond_read_acquire	atomic_cond_read_acquire
#define atomic_long_cond_read_relaxed	atomic_cond_read_relaxed
#endif

#ifdef CONFIG_64BIT

static __always_inline long
atomic_long_read(const atomic_long_t *v)
{
	return atomic64_read(v);
}

static __always_inline long
atomic_long_read_acquire(const atomic_long_t *v)
{
	return atomic64_read_acquire(v);
}

static __always_inline void
atomic_long_set(atomic_long_t *v, long i)
{
	atomic64_set(v, i);
}

static __always_inline void
atomic_long_set_release(atomic_long_t *v, long i)
{
	atomic64_set_release(v, i);
}

static __always_inline void
atomic_long_add(long i, atomic_long_t *v)
{
	atomic64_add(i, v);
}

static __always_inline long
atomic_long_add_return(long i, atomic_long_t *v)
{
	return atomic64_add_return(i, v);
}

static __always_inline long
atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
	return atomic64_add_return_acquire(i, v);
}

static __always_inline long
atomic_long_add_return_release(long i, atomic_long_t *v)
{
	return atomic64_add_return_release(i, v);
}

static __always_inline long
atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
	return atomic64_add_return_relaxed(i, v);
}

static __always_inline long
atomic_long_fetch_add(long i, atomic_long_t *v)
{
	return atomic64_fetch_add(i, v);
}

static __always_inline long
atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
	return atomic64_fetch_add_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
	return atomic64_fetch_add_release(i, v);
}

static __always_inline long
atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
	return atomic64_fetch_add_relaxed(i, v);
}

static __always_inline void
atomic_long_sub(long i, atomic_long_t *v)
{
	atomic64_sub(i, v);
}

static __always_inline long
atomic_long_sub_return(long i, atomic_long_t *v)
{
	return atomic64_sub_return(i, v);
}

static __always_inline long
atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
	return atomic64_sub_return_acquire(i, v);
}

static __always_inline long
atomic_long_sub_return_release(long i, atomic_long_t *v)
{
	return atomic64_sub_return_release(i, v);
}

static __always_inline long
atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
	return atomic64_sub_return_relaxed(i, v);
}

static __always_inline long
atomic_long_fetch_sub(long i, atomic_long_t *v)
{
	return atomic64_fetch_sub(i, v);
}

static __always_inline long
atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
	return atomic64_fetch_sub_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
	return atomic64_fetch_sub_release(i, v);
}

static __always_inline long
atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
	return atomic64_fetch_sub_relaxed(i, v);
}

static __always_inline void
atomic_long_inc(atomic_long_t *v)
{
	atomic64_inc(v);
}

static __always_inline long
atomic_long_inc_return(atomic_long_t *v)
{
	return atomic64_inc_return(v);
}

static __always_inline long
atomic_long_inc_return_acquire(atomic_long_t *v)
{
	return atomic64_inc_return_acquire(v);
}

static __always_inline long
atomic_long_inc_return_release(atomic_long_t *v)
{
	return atomic64_inc_return_release(v);
}

static __always_inline long
atomic_long_inc_return_relaxed(atomic_long_t *v)
{
	return atomic64_inc_return_relaxed(v);
}

static __always_inline long
atomic_long_fetch_inc(atomic_long_t *v)
{
	return atomic64_fetch_inc(v);
}

static __always_inline long
atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
	return atomic64_fetch_inc_acquire(v);
}

static __always_inline long
atomic_long_fetch_inc_release(atomic_long_t *v)
{
	return atomic64_fetch_inc_release(v);
}

static __always_inline long
atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
	return atomic64_fetch_inc_relaxed(v);
}

static __always_inline void
atomic_long_dec(atomic_long_t *v)
{
	atomic64_dec(v);
}

static __always_inline long
atomic_long_dec_return(atomic_long_t *v)
{
	return atomic64_dec_return(v);
}

static __always_inline long
atomic_long_dec_return_acquire(atomic_long_t *v)
{
	return atomic64_dec_return_acquire(v);
}

static __always_inline long
atomic_long_dec_return_release(atomic_long_t *v)
{
	return atomic64_dec_return_release(v);
}

static __always_inline long
atomic_long_dec_return_relaxed(atomic_long_t *v)
{
	return atomic64_dec_return_relaxed(v);
}

static __always_inline long
atomic_long_fetch_dec(atomic_long_t *v)
{
	return atomic64_fetch_dec(v);
}

static __always_inline long
atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
	return atomic64_fetch_dec_acquire(v);
}

static __always_inline long
atomic_long_fetch_dec_release(atomic_long_t *v)
{
	return atomic64_fetch_dec_release(v);
}

static __always_inline long
atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
	return atomic64_fetch_dec_relaxed(v);
}

static __always_inline void
atomic_long_and(long i, atomic_long_t *v)
{
	atomic64_and(i, v);
}

static __always_inline long
atomic_long_fetch_and(long i, atomic_long_t *v)
{
	return atomic64_fetch_and(i, v);
}

static __always_inline long
atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
	return atomic64_fetch_and_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
	return atomic64_fetch_and_release(i, v);
}

static __always_inline long
atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
	return atomic64_fetch_and_relaxed(i, v);
}

static __always_inline void
atomic_long_andnot(long i, atomic_long_t *v)
{
	atomic64_andnot(i, v);
}

static __always_inline long
atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
	return atomic64_fetch_andnot(i, v);
}

static __always_inline long
atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
	return atomic64_fetch_andnot_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
	return atomic64_fetch_andnot_release(i, v);
}

static __always_inline long
atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
	return atomic64_fetch_andnot_relaxed(i, v);
}

static __always_inline void
atomic_long_or(long i, atomic_long_t *v)
{
	atomic64_or(i, v);
}

static __always_inline long
atomic_long_fetch_or(long i, atomic_long_t *v)
{
	return atomic64_fetch_or(i, v);
}

static __always_inline long
atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
	return atomic64_fetch_or_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
	return atomic64_fetch_or_release(i, v);
}

static __always_inline long
atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
	return atomic64_fetch_or_relaxed(i, v);
}

static __always_inline void
atomic_long_xor(long i, atomic_long_t *v)
{
	atomic64_xor(i, v);
}

static __always_inline long
atomic_long_fetch_xor(long i, atomic_long_t *v)
{
	return atomic64_fetch_xor(i, v);
}

static __always_inline long
atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
	return atomic64_fetch_xor_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
	return atomic64_fetch_xor_release(i, v);
}

static __always_inline long
atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
	return atomic64_fetch_xor_relaxed(i, v);
}

static __always_inline long
atomic_long_xchg(atomic_long_t *v, long i)
{
	return atomic64_xchg(v, i);
}

static __always_inline long
atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
	return atomic64_xchg_acquire(v, i);
}

static __always_inline long
atomic_long_xchg_release(atomic_long_t *v, long i)
{
	return atomic64_xchg_release(v, i);
}

static __always_inline long
atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
	return atomic64_xchg_relaxed(v, i);
}

static __always_inline long
atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
	return atomic64_cmpxchg(v, old, new);
}

static __always_inline long
atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
	return atomic64_cmpxchg_acquire(v, old, new);
}

static __always_inline long
atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
	return atomic64_cmpxchg_release(v, old, new);
}

static __always_inline long
atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
	return atomic64_cmpxchg_relaxed(v, old, new);
}

static __always_inline bool
atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
	return atomic64_try_cmpxchg(v, (s64 *)old, new);
}

static __always_inline bool
atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
	return atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
}

static __always_inline bool
atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
	return atomic64_try_cmpxchg_release(v, (s64 *)old, new);
}

static __always_inline bool
atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
	return atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
}

static __always_inline bool
atomic_long_sub_and_test(long i, atomic_long_t *v)
{
	return atomic64_sub_and_test(i, v);
}

static __always_inline bool
atomic_long_dec_and_test(atomic_long_t *v)
{
	return atomic64_dec_and_test(v);
}

static __always_inline bool
atomic_long_inc_and_test(atomic_long_t *v)
{
	return atomic64_inc_and_test(v);
}

static __always_inline bool
atomic_long_add_negative(long i, atomic_long_t *v)
{
	return atomic64_add_negative(i, v);
}

static __always_inline long
atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
	return atomic64_fetch_add_unless(v, a, u);
}

static __always_inline bool
atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
	return atomic64_add_unless(v, a, u);
}

static __always_inline bool
atomic_long_inc_not_zero(atomic_long_t *v)
{
	return atomic64_inc_not_zero(v);
}

static __always_inline bool
atomic_long_inc_unless_negative(atomic_long_t *v)
{
	return atomic64_inc_unless_negative(v);
}

static __always_inline bool
atomic_long_dec_unless_positive(atomic_long_t *v)
{
	return atomic64_dec_unless_positive(v);
}

static __always_inline long
atomic_long_dec_if_positive(atomic_long_t *v)
{
	return atomic64_dec_if_positive(v);
}

#else /* CONFIG_64BIT */

static __always_inline long
atomic_long_read(const atomic_long_t *v)
{
	return atomic_read(v);
}

static __always_inline long
atomic_long_read_acquire(const atomic_long_t *v)
{
	return atomic_read_acquire(v);
}

static __always_inline void
atomic_long_set(atomic_long_t *v, long i)
{
	atomic_set(v, i);
}

static __always_inline void
atomic_long_set_release(atomic_long_t *v, long i)
{
	atomic_set_release(v, i);
}

static __always_inline void
atomic_long_add(long i, atomic_long_t *v)
{
	atomic_add(i, v);
}

static __always_inline long
atomic_long_add_return(long i, atomic_long_t *v)
{
	return atomic_add_return(i, v);
}

static __always_inline long
atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
	return atomic_add_return_acquire(i, v);
}

static __always_inline long
atomic_long_add_return_release(long i, atomic_long_t *v)
{
	return atomic_add_return_release(i, v);
}

static __always_inline long
atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
	return atomic_add_return_relaxed(i, v);
}

static __always_inline long
atomic_long_fetch_add(long i, atomic_long_t *v)
{
	return atomic_fetch_add(i, v);
}

static __always_inline long
atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
	return atomic_fetch_add_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
	return atomic_fetch_add_release(i, v);
}

static __always_inline long
atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
	return atomic_fetch_add_relaxed(i, v);
}

static __always_inline void
atomic_long_sub(long i, atomic_long_t *v)
{
	atomic_sub(i, v);
}

static __always_inline long
atomic_long_sub_return(long i, atomic_long_t *v)
{
	return atomic_sub_return(i, v);
}

static __always_inline long
atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
	return atomic_sub_return_acquire(i, v);
}

static __always_inline long
atomic_long_sub_return_release(long i, atomic_long_t *v)
{
	return atomic_sub_return_release(i, v);
}

static __always_inline long
atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
	return atomic_sub_return_relaxed(i, v);
}

static __always_inline long
atomic_long_fetch_sub(long i, atomic_long_t *v)
{
	return atomic_fetch_sub(i, v);
}

static __always_inline long
atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
	return atomic_fetch_sub_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
	return atomic_fetch_sub_release(i, v);
}

static __always_inline long
atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
	return atomic_fetch_sub_relaxed(i, v);
}

static __always_inline void
atomic_long_inc(atomic_long_t *v)
{
	atomic_inc(v);
}

static __always_inline long
atomic_long_inc_return(atomic_long_t *v)
{
	return atomic_inc_return(v);
}

static __always_inline long
atomic_long_inc_return_acquire(atomic_long_t *v)
{
	return atomic_inc_return_acquire(v);
}

static __always_inline long
atomic_long_inc_return_release(atomic_long_t *v)
{
	return atomic_inc_return_release(v);
}

static __always_inline long
atomic_long_inc_return_relaxed(atomic_long_t *v)
{
	return atomic_inc_return_relaxed(v);
}

static __always_inline long
atomic_long_fetch_inc(atomic_long_t *v)
{
	return atomic_fetch_inc(v);
}

static __always_inline long
atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
	return atomic_fetch_inc_acquire(v);
}

static __always_inline long
atomic_long_fetch_inc_release(atomic_long_t *v)
{
	return atomic_fetch_inc_release(v);
}

static __always_inline long
atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
	return atomic_fetch_inc_relaxed(v);
}

static __always_inline void
atomic_long_dec(atomic_long_t *v)
{
	atomic_dec(v);
}

static __always_inline long
atomic_long_dec_return(atomic_long_t *v)
{
	return atomic_dec_return(v);
}

static __always_inline long
atomic_long_dec_return_acquire(atomic_long_t *v)
{
	return atomic_dec_return_acquire(v);
}

static __always_inline long
atomic_long_dec_return_release(atomic_long_t *v)
{
	return atomic_dec_return_release(v);
}

static __always_inline long
atomic_long_dec_return_relaxed(atomic_long_t *v)
{
	return atomic_dec_return_relaxed(v);
}

static __always_inline long
atomic_long_fetch_dec(atomic_long_t *v)
{
	return atomic_fetch_dec(v);
}

static __always_inline long
atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
	return atomic_fetch_dec_acquire(v);
}

static __always_inline long
atomic_long_fetch_dec_release(atomic_long_t *v)
{
	return atomic_fetch_dec_release(v);
}

static __always_inline long
atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
	return atomic_fetch_dec_relaxed(v);
}

static __always_inline void
atomic_long_and(long i, atomic_long_t *v)
{
	atomic_and(i, v);
}

static __always_inline long
atomic_long_fetch_and(long i, atomic_long_t *v)
{
	return atomic_fetch_and(i, v);
}

static __always_inline long
atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
	return atomic_fetch_and_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
	return atomic_fetch_and_release(i, v);
}

static __always_inline long
atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
	return atomic_fetch_and_relaxed(i, v);
}

static __always_inline void
atomic_long_andnot(long i, atomic_long_t *v)
{
	atomic_andnot(i, v);
}

static __always_inline long
atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
	return atomic_fetch_andnot(i, v);
}

static __always_inline long
atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
	return atomic_fetch_andnot_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
	return atomic_fetch_andnot_release(i, v);
}

static __always_inline long
atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
	return atomic_fetch_andnot_relaxed(i, v);
}

static __always_inline void
atomic_long_or(long i, atomic_long_t *v)
{
	atomic_or(i, v);
}

static __always_inline long
atomic_long_fetch_or(long i, atomic_long_t *v)
{
	return atomic_fetch_or(i, v);
}

static __always_inline long
atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
	return atomic_fetch_or_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
	return atomic_fetch_or_release(i, v);
}

static __always_inline long
atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
	return atomic_fetch_or_relaxed(i, v);
}

static __always_inline void
atomic_long_xor(long i, atomic_long_t *v)
{
	atomic_xor(i, v);
}

static __always_inline long
atomic_long_fetch_xor(long i, atomic_long_t *v)
{
	return atomic_fetch_xor(i, v);
}

static __always_inline long
atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
	return atomic_fetch_xor_acquire(i, v);
}

static __always_inline long
atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
	return atomic_fetch_xor_release(i, v);
}

static __always_inline long
atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
	return atomic_fetch_xor_relaxed(i, v);
}

static __always_inline long
atomic_long_xchg(atomic_long_t *v, long i)
{
	return atomic_xchg(v, i);
}

static __always_inline long
atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
	return atomic_xchg_acquire(v, i);
}

static __always_inline long
atomic_long_xchg_release(atomic_long_t *v, long i)
{
	return atomic_xchg_release(v, i);
}

static __always_inline long
atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
	return atomic_xchg_relaxed(v, i);
}

static __always_inline long
atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
	return atomic_cmpxchg(v, old, new);
}

static __always_inline long
atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
	return atomic_cmpxchg_acquire(v, old, new);
}

static __always_inline long
atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
	return atomic_cmpxchg_release(v, old, new);
}

static __always_inline long
atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
	return atomic_cmpxchg_relaxed(v, old, new);
}

static __always_inline bool
atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
	return atomic_try_cmpxchg(v, (int *)old, new);
}

static __always_inline bool
atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
	return atomic_try_cmpxchg_acquire(v, (int *)old, new);
}

static __always_inline bool
atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
	return atomic_try_cmpxchg_release(v, (int *)old, new);
}

static __always_inline bool
atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
	return atomic_try_cmpxchg_relaxed(v, (int *)old, new);
}

static __always_inline bool
atomic_long_sub_and_test(long i, atomic_long_t *v)
{
	return atomic_sub_and_test(i, v);
}

static __always_inline bool
atomic_long_dec_and_test(atomic_long_t *v)
{
	return atomic_dec_and_test(v);
}

static __always_inline bool
atomic_long_inc_and_test(atomic_long_t *v)
{
	return atomic_inc_and_test(v);
}

static __always_inline bool
atomic_long_add_negative(long i, atomic_long_t *v)
{
	return atomic_add_negative(i, v);
}

static __always_inline long
atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
	return atomic_fetch_add_unless(v, a, u);
}

static __always_inline bool
atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
	return atomic_add_unless(v, a, u);
}

static __always_inline bool
atomic_long_inc_not_zero(atomic_long_t *v)
{
	return atomic_inc_not_zero(v);
}

static __always_inline bool
atomic_long_inc_unless_negative(atomic_long_t *v)
{
	return atomic_inc_unless_negative(v);
}

static __always_inline bool
atomic_long_dec_unless_positive(atomic_long_t *v)
{
	return atomic_dec_unless_positive(v);
}

static __always_inline long
atomic_long_dec_if_positive(atomic_long_t *v)
{
	return atomic_dec_if_positive(v);
}

#endif /* CONFIG_64BIT */
#endif /* _ASM_GENERIC_ATOMIC_LONG_H */
// a624200981f552b2c6be4f32fe44da8289f30d87
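
For reference, a minimal usage sketch of the atomic_long_t API that this
generated header provided (the counter and helper names are made up for
illustration):

  static atomic_long_t nr_events = ATOMIC_LONG_INIT(0);

  static void event_seen(void)
  {
          atomic_long_inc(&nr_events);	/* atomic64_inc() or atomic_inc() */
  }

  static long events_total(void)
  {
          return atomic_long_read(&nr_events);
  }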
Two further files changed (+20 −12 and +21 −18); their previews exceeded
the display size limit and were collapsed.