Commit d519f350 authored by Eric Dumazet, committed by David S. Miller

tcp: minor optimization in tcp_add_backlog()



If a packet is going to be coalesced, the sk_sndbuf/sk_rcvbuf values
are not used. Defer their access to the point where we need them.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 3ad4b7c8
net/ipv4/tcp_ipv4.c +2 −3
@@ -1800,8 +1800,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
 
 bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
 {
-	u32 limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf);
-	u32 tail_gso_size, tail_gso_segs;
+	u32 limit, tail_gso_size, tail_gso_segs;
 	struct skb_shared_info *shinfo;
 	const struct tcphdr *th;
 	struct tcphdr *thtail;
@@ -1909,7 +1908,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
 	 * to reduce memory overhead, so add a little headroom here.
 	 * Few sockets backlog are possibly concurrently non empty.
 	 */
-	limit += 64*1024;
+	limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024;
 
 	if (unlikely(sk_add_backlog(sk, skb, limit))) {
 		bh_unlock_sock(sk);
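
For context, here is a minimal standalone sketch (not kernel code) of the pattern the patch applies: limit is only consumed on the non-coalesce path, so computing it up front pays two READ_ONCE() loads of sk->sk_rcvbuf and sk->sk_sndbuf even when the skb is coalesced and the value is never used. The struct fake_sock, try_coalesce() and add_backlog_*() names below are hypothetical stand-ins, not the real tcp_add_backlog() internals.

/*
 * Sketch of deferring a computation to the only path that needs it.
 * All names here are hypothetical; build with any C compiler.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_sock {
	uint32_t sk_rcvbuf;
	uint32_t sk_sndbuf;
};

/* Stand-in for the attempt to coalesce the skb into the backlog tail. */
static bool try_coalesce(void)
{
	return true;
}

/* Before: limit is computed even when the coalesce path returns early. */
static bool add_backlog_eager(struct fake_sock *sk)
{
	uint32_t limit = sk->sk_rcvbuf + sk->sk_sndbuf;	/* always paid */

	if (try_coalesce())
		return false;	/* limit was never used on this path */

	limit += 64 * 1024;	/* headroom, as in the kernel code */
	return limit == 0;	/* placeholder use of limit */
}

/* After: the loads are deferred to the one path that consumes them. */
static bool add_backlog_lazy(struct fake_sock *sk)
{
	uint32_t limit;

	if (try_coalesce())
		return false;

	limit = sk->sk_rcvbuf + sk->sk_sndbuf + 64 * 1024;
	return limit == 0;
}

int main(void)
{
	struct fake_sock sk = { .sk_rcvbuf = 131072, .sk_sndbuf = 87380 };

	printf("eager: %d, lazy: %d\n",
	       add_backlog_eager(&sk), add_backlog_lazy(&sk));
	return 0;
}

The transformation is behavior-preserving because nothing between the old and the new assignment points reads limit; the only thing that moves is the pair of loads, which now happen only on the path that actually needs them.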