Commit d0c00640 authored by Uros Bizjak's avatar Uros Bizjak Committed by Peter Zijlstra

jump_label: Use atomic_try_cmpxchg() in static_key_slow_inc_cpuslocked()



Use atomic_try_cmpxchg() instead of the atomic_cmpxchg(*ptr, old, new) ==
old pattern in static_key_slow_inc_cpuslocked().  The x86 CMPXCHG
instruction returns success in the ZF flag, so this change saves a compare
after the cmpxchg (and the related move instruction in front of it).

Also, atomic_try_cmpxchg() implicitly assigns the old *ptr value to "old"
when the cmpxchg fails, enabling further code simplifications.

No functional change intended.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20221019140850.3395-1-ubizjak@gmail.com
parent 247f34f7
+2 −6
@@ -115,8 +115,6 @@ EXPORT_SYMBOL_GPL(static_key_count);
 
 void static_key_slow_inc_cpuslocked(struct static_key *key)
 {
-	int v, v1;
-
 	STATIC_KEY_CHECK_USE(key);
 	lockdep_assert_cpus_held();
 
@@ -132,11 +130,9 @@ void static_key_slow_inc_cpuslocked(struct static_key *key)
 	 * so it counts as "enabled" in jump_label_update().  Note that
 	 * atomic_inc_unless_negative() checks >= 0, so roll our own.
 	 */
-	for (v = atomic_read(&key->enabled); v > 0; v = v1) {
-		v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
-		if (likely(v1 == v))
+	for (int v = atomic_read(&key->enabled); v > 0; )
+		if (likely(atomic_try_cmpxchg(&key->enabled, &v, v + 1)))
 			return;
-	}
 
 	jump_label_lock();
 	if (atomic_read(&key->enabled) == 0) {