 arch/csky/include/asm/spinlock.h       | 12 ++++++++++++
 arch/csky/include/asm/spinlock_types.h |  9 +++++++++
 create mode 100644 arch/csky/include/asm/spinlock.h
 create mode 100644 arch/csky/include/asm/spinlock_types.h
Enable qspinlock, following the requirements laid out in commit a8ad07e5 ("asm-generic: qspinlock: Indicate the use of mixed-size atomics"). C-SKY only has "ldex/stex" for all atomic operations, and it gives a strong forward-progress guarantee for "ldex/stex": once ldex has grabbed the cache line into L1, it blocks other cores from snooping that address for several cycles. Therefore atomic_fetch_add and xchg16 have the same forward-progress guarantee level on C-SKY. Qspinlock also has better code size and fast-path performance.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
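
Given the diffstat above, the two new headers most plausibly just wire C-SKY into the generic queued-lock implementations rather than providing any arch-specific locking code. A minimal sketch of what such headers look like (the exact guard names, comments, and qrwlock wiring here are assumptions, not taken from this page):

```c
/* arch/csky/include/asm/spinlock_types.h -- sketch */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_SPINLOCK_TYPES_H
#define __ASM_CSKY_SPINLOCK_TYPES_H

/* Reuse the generic qspinlock/qrwlock type definitions. */
#include <asm-generic/qspinlock_types.h>
#include <asm-generic/qrwlock_types.h>

#endif /* __ASM_CSKY_SPINLOCK_TYPES_H */

/* arch/csky/include/asm/spinlock.h -- sketch */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_SPINLOCK_H
#define __ASM_CSKY_SPINLOCK_H

/* Pull in the generic queued spinlock and rwlock implementations. */
#include <asm/qspinlock.h>
#include <asm/qrwlock.h>

/* Full barrier after lock acquisition; see include/linux/spinlock.h. */
#define smp_mb__after_spinlock()	smp_mb()

#endif /* __ASM_CSKY_SPINLOCK_H */
```

With this pattern, the architecture opts in by selecting ARCH_USE_QUEUED_SPINLOCKS in its Kconfig, and all slow-path logic stays in kernel/locking/qspinlock.c.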