Commit d8d9bbb0 authored by Dave Chinner, committed by Dave Chinner

xfs: reduce the number of atomics when locking a buffer after lookup

Avoid an extra atomic operation in the non-trylock case by only
doing a trylock if the XBF_TRYLOCK flag is set. This follows the
pattern in the IO path with NOWAIT semantics, where the
"trylock-fail-lock" path showed 5-10% reduced throughput compared to
a single lock call when not under NOWAIT conditions. So make that
same change here, too.

See commit 942491c9 ("xfs: fix AIM7 regression") for details.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
parent 34800080
+3 −2
@@ -534,11 +534,12 @@ xfs_buf_find_lock(
 	struct xfs_buf          *bp,
 	xfs_buf_flags_t		flags)
 {
-	if (!xfs_buf_trylock(bp)) {
-		if (flags & XBF_TRYLOCK) {
+	if (flags & XBF_TRYLOCK) {
+		if (!xfs_buf_trylock(bp)) {
 			XFS_STATS_INC(bp->b_mount, xb_busy_locked);
 			return -EAGAIN;
 		}
+	} else {
 		xfs_buf_lock(bp);
 		XFS_STATS_INC(bp->b_mount, xb_get_locked_waited);
 	}