Commit 2600badf authored by Jakub Kicinski

Merge branch 'net-refcount-address-dst_entry-reference-count-scalability-issues'

Thomas Gleixner says:

====================
net, refcount: Address dst_entry reference count scalability issues

This is version 3 of this series. Version 2 can be found here:

     https://lore.kernel.org/lkml/20230307125358.772287565@linutronix.de

Wangyang and Arjan reported a bottleneck in the networking code related to
struct dst_entry::__refcnt. Performance tanks massively when concurrency on
a dst_entry increases.

This happens when there is a large number of connections to or from the
same IP address. The memtier benchmark, when run on the same host as
memcached, amplifies this massively. But even over real network
connections the issue can be observed, albeit at an obviously smaller
scale (due to the network bandwidth limitations in my setup, i.e. 1Gb).
How to reproduce:

  Run memcached with -t $N and memtier_benchmark with -t $M and --ratio=1:100
  on the same machine. Localhost connections amplify the problem.

  Start with the defaults for $N and $M and increase them. Depending on
  your machine this will tank at some point. But even with reasonably
  small $N and $M the refcount operations and the resulting false sharing
  fallout become visible in perf top. At some point they become the
  dominating issue.

There are two factors which make this reference count a scalability issue:

   1) False sharing

      dst_entry::__refcnt is located at offset 64 of dst_entry, which puts
      it into a separate cache line from the read-mostly members at the
      beginning of the struct.

      That prevents false sharing with the struct members in the first 64
      bytes of the structure, but there is also

      	    dst_entry::lwtstate

      which is located after the reference count and in the same cache
      line. This member is read after a reference count has been acquired.

      The other problem is struct rtable, which embeds a struct dst_entry
      at offset 0. struct dst_entry has a size of 112 bytes, which means
      that the struct members of rtable which follow the dst member share
      the same cache line as dst_entry::__refcnt. Especially

      	  rtable::rt_genid

      is also read by the contexts which have a reference count acquired
      already.

      When dst_entry::__refcnt is incremented or decremented via an atomic
      operation these read accesses stall and contribute to the
      performance problem, as the sketch after this list illustrates.

   2) atomic_inc_not_zero()

      A reference on dst_entry::__refcnt is acquired via
      atomic_inc_not_zero() and released via atomic_dec_return().

      atomic_inc_not_zero() is implemented via an atomic_try_cmpxchg()
      loop, which exposes O(N^2) behaviour under contention with N
      concurrent operations. Scalability degrades with even a small
      number of contenders and gets worse from there.

      Lightweight instrumentation exposed an average of 8 (!) retry loops
      per atomic_inc_not_zero() invocation in an inc()/dec() loop running
      concurrently on 112 CPUs.

      There is nothing which can be done to make atomic_inc_not_zero()
      more scalable; a minimal user-space sketch illustrating both
      factors follows this list.
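
To make both factors concrete, here is a minimal user-space sketch. It is
my illustration, not kernel code: struct fake_dst, its members and
inc_not_zero() are made-up names, and the layout assumes 8-byte pointers
and 64-byte cache lines. The struct places the reference count at offset
64 next to a member which is read with a reference held, and
inc_not_zero() shows the compare-exchange retry loop:

  /* Standalone illustration, C11; not the kernel implementation. */
  #include <assert.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stddef.h>

  struct fake_dst {
  	void		*ops;		/* read mostly, cache line 0 */
  	char		pad[56];	/* assumes 8-byte pointers */
  	atomic_int	refcnt;		/* offset 64, cache line 1 */
  	/*
  	 * Read with a reference held, but shares the cache line
  	 * with refcnt, so refcount updates stall these reads.
  	 */
  	void		*lwtstate;
  };

  static_assert(offsetof(struct fake_dst, refcnt) == 64, "layout");

  /*
   * inc_not_zero() as a compare-exchange loop. Every lost race
   * forces a reload and a retry, which is where the O(N^2)
   * behaviour under contention comes from.
   */
  static bool inc_not_zero(atomic_int *v, unsigned long *retries)
  {
  	int old = atomic_load_explicit(v, memory_order_relaxed);

  	for (;;) {
  		if (old == 0)
  			return false;
  		if (atomic_compare_exchange_weak(v, &old, old + 1))
  			return true;
  		/* CAS failed; 'old' has been reloaded, try again */
  		(*retries)++;
  	}
  }

Hammering inc_not_zero() and a plain atomic decrement from many threads
while printing the retry counter reproduces the retry blow-up described
above.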

The following series addresses these issues:

    1) Reorder and pad struct dst_entry to prevent the false sharing.

    2) Provide and use a reference count implementation which avoids the
       atomic_inc_not_zero() problem.

       It is slightly slower for the final 0 -> -1 transition, but the
       destruction of these objects is a low frequency event. get()/put()
       pairs are in the hotpath and that's what this implementation
       optimizes for.

       The algorithm of this reference count is only suitable for RCU
       managed objects. Therefore it cannot replace the refcount_t
       algorithm, which is also based on atomic_inc_not_zero(), due to a
       subtle race condition related to the 0 -> -1 transition and the final
       verdict to mark the reference count dead. See details in patch 2/3.

       It might be just my lack of imagination which declares this to be
       impossible and I'd be happy to be proven wrong.

       As a bonus the new rcuref implementation provides underflow/overflow
       detection and mitigation while being performance-wise on par with
       open coded atomic_inc_not_zero() / atomic_dec_return() pairs even in
       the non-contended case. A condensed sketch of the get()/put() fast
       path follows this list.
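
To illustrate the idea, here is a condensed user-space sketch. The
sketch_rcuref_* names are mine, and the slow paths (saturation and dead
zones, and the check which confirms the final 0 -> -1 verdict in the real
implementation, see patch 2/3) are omitted. The counter is biased so that
0 means "one reference held"; live counts are therefore non-negative:

  /* Standalone illustration, C11; not the kernel implementation. */
  #include <stdatomic.h>
  #include <stdbool.h>

  typedef struct {
  	atomic_int	cnt;	/* 0 == exactly one reference held */
  } sketch_rcuref_t;

  static bool sketch_rcuref_get(sketch_rcuref_t *ref)
  {
  	/*
  	 * One unconditional atomic add, no compare-exchange retry
  	 * loop. A non-negative result means the object was alive
  	 * and the acquired reference is valid; a negative result
  	 * would be handled by the omitted slow path.
  	 */
  	return atomic_fetch_add_explicit(&ref->cnt, 1,
  					 memory_order_relaxed) + 1 >= 0;
  }

  static bool sketch_rcuref_put(sketch_rcuref_t *ref)
  {
  	/*
  	 * Unconditional decrement. A negative result is the
  	 * 0 -> -1 transition, i.e. the last reference is gone.
  	 * The real implementation confirms this verdict in a
  	 * slow path before the RCU managed object is released.
  	 */
  	return atomic_fetch_sub_explicit(&ref->cnt, 1,
  					 memory_order_release) - 1 < 0;
  }

The hot path is a single atomic add in each direction, which is why this
stays on par with open coded atomic_inc_not_zero()/atomic_dec_return()
pairs in the non-contended case while avoiding the retry loop under
contention.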

The combination of these two changes results in performance gains in micro
benchmarks and also localhost and networked memtier benchmarks talking to
memcached. It's hard to quantify the benchmark results as they depend
heavily on the micro-architecture and the number of concurrent operations.

The overall gain of both changes for localhost memtier ranges from 1.2X to
3.2X, and is in the +2% to +5% range for networked operations on a 1Gb
connection.

A micro benchmark which enforces maximized concurrency shows a gain
between 1.2X and 4.7X (!).

Obviously this is focused on a particular problem and therefore needs to
be discussed in detail. It also requires wider testing outside of the
cases it focuses on.

That said, the false sharing issue is obvious and should be addressed
independently of the more focused reference count changes.

The series is also available from git:

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git rcuref

Changes vs. V2:

  - Rename __refcnt to __rcuref (Linus)

  - Fix comments and changelogs (Mark, Qiuxu)

  - Fixup kernel doc of generated atomic_add_negative() variants

I want to say thanks to Wangyang who analyzed the issue and provided the
initial fix for the false sharing problem. Further thanks go to Arjan,
Peter, Marc, Will and Borislav for valuable input and for providing test
results on machines which I do not have access to, and to Linus, Eric,
Qiuxu and Mark for helpful feedback.
====================

Link: https://lore.kernel.org/r/20230323102649.764958589@linutronix.de


Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents b133fffe bc9d3a9f
include/net/dst.h  +22 −8
@@ -16,6 +16,7 @@
 #include <linux/bug.h>
 #include <linux/jiffies.h>
 #include <linux/refcount.h>
+#include <linux/rcuref.h>
 #include <net/neighbour.h>
 #include <asm/processor.h>
 #include <linux/indirect_call_wrapper.h>
@@ -61,23 +62,36 @@ struct dst_entry {
 	unsigned short		trailer_len;	/* space to reserve at tail */
 
 	/*
-	 * __refcnt wants to be on a different cache line from
+	 * __rcuref wants to be on a different cache line from
	 * input/output/ops or performance tanks badly
 	 */
 #ifdef CONFIG_64BIT
-	atomic_t		__refcnt;	/* 64-bit offset 64 */
+	rcuref_t		__rcuref;	/* 64-bit offset 64 */
 #endif
 	int			__use;
 	unsigned long		lastuse;
-	struct lwtunnel_state   *lwtstate;
 	struct rcu_head		rcu_head;
 	short			error;
 	short			__pad;
 	__u32			tclassid;
 #ifndef CONFIG_64BIT
-	atomic_t		__refcnt;	/* 32-bit offset 64 */
+	struct lwtunnel_state   *lwtstate;
+	rcuref_t		__rcuref;	/* 32-bit offset 64 */
 #endif
 	netdevice_tracker	dev_tracker;
+
+	/*
+	 * Used by rtable and rt6_info. Moves lwtstate into the next cache
+	 * line on 64bit so that lwtstate does not cause false sharing with
+	 * __rcuref under contention of __rcuref. This also puts the
+	 * frequently accessed members of rtable and rt6_info out of the
+	 * __rcuref cache line.
+	 */
+	struct list_head	rt_uncached;
+	struct uncached_list	*rt_uncached_list;
+#ifdef CONFIG_64BIT
+	struct lwtunnel_state   *lwtstate;
+#endif
 };
 
 struct dst_metrics {
@@ -225,10 +239,10 @@ static inline void dst_hold(struct dst_entry *dst)
 {
 	/*
 	 * If your kernel compilation stops here, please check
-	 * the placement of __refcnt in struct dst_entry
+	 * the placement of __rcuref in struct dst_entry
 	 */
-	BUILD_BUG_ON(offsetof(struct dst_entry, __refcnt) & 63);
-	WARN_ON(atomic_inc_not_zero(&dst->__refcnt) == 0);
+	BUILD_BUG_ON(offsetof(struct dst_entry, __rcuref) & 63);
+	WARN_ON(!rcuref_get(&dst->__rcuref));
 }
 
 static inline void dst_use_noref(struct dst_entry *dst, unsigned long time)
@@ -292,7 +306,7 @@ static inline void skb_dst_copy(struct sk_buff *nskb, const struct sk_buff *oskb
  */
 static inline bool dst_hold_safe(struct dst_entry *dst)
 {
-	return atomic_inc_not_zero(&dst->__refcnt);
+	return rcuref_get(&dst->__rcuref);
 }
 
 /**
include/net/ip6_fib.h  +0 −3
@@ -217,9 +217,6 @@ struct rt6_info {
 	struct inet6_dev		*rt6i_idev;
 	u32				rt6i_flags;
 
-	struct list_head		rt6i_uncached;
-	struct uncached_list		*rt6i_uncached_list;
-
 	/* more non-fragment space at head required */
 	unsigned short			rt6i_nfheader_len;
 };
include/net/ip6_route.h  +1 −1
@@ -100,7 +100,7 @@ static inline struct dst_entry *ip6_route_output(struct net *net,
 static inline void ip6_rt_put_flags(struct rt6_info *rt, int flags)
 {
 	if (!(flags & RT6_LOOKUP_F_DST_NOREF) ||
-	    !list_empty(&rt->rt6i_uncached))
+	    !list_empty(&rt->dst.rt_uncached))
 		ip6_rt_put(rt);
 }
 
include/net/route.h  +0 −3
@@ -78,9 +78,6 @@ struct rtable {
 	/* Miscellaneous cached information */
 	u32			rt_mtu_locked:1,
 				rt_pmtu:31;
-
-	struct list_head	rt_uncached;
-	struct uncached_list	*rt_uncached_list;
 };
 
 static inline bool rt_is_input_route(const struct rtable *rt)
include/net/sock.h  +1 −1
@@ -2131,7 +2131,7 @@ sk_dst_get(struct sock *sk)
 
 	rcu_read_lock();
 	dst = rcu_dereference(sk->sk_dst_cache);
-	if (dst && !atomic_inc_not_zero(&dst->__refcnt))
+	if (dst && !rcuref_get(&dst->__rcuref))
 		dst = NULL;
 	rcu_read_unlock();
 	return dst;