Commit 9cbfea02 authored by Sieng Piaw Liew, committed by Jakub Kicinski

bcm63xx_enet: batch process rx path



Use netif_receive_skb_list to batch-process rx skbs.
Tested on a BCM6328 at 320 MHz with iperf3 -M 512; throughput improves
by about 12.5%.

Before:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   120 MBytes  33.7 Mbits/sec  277         sender
[  4]   0.00-30.00  sec   120 MBytes  33.5 Mbits/sec            receiver

After:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   136 MBytes  37.9 Mbits/sec  203         sender
[  4]   0.00-30.00  sec   135 MBytes  37.7 Mbits/sec            receiver

Signed-off-by: Sieng Piaw Liew <liew.s.piaw@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent 2e423387
+5 −1
@@ -297,10 +297,12 @@ static void bcm_enet_refill_rx_timer(struct timer_list *t)
 static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 {
 	struct bcm_enet_priv *priv;
+	struct list_head rx_list;
 	struct device *kdev;
 	int processed;
 
 	priv = netdev_priv(dev);
+	INIT_LIST_HEAD(&rx_list);
 	kdev = &priv->pdev->dev;
 	processed = 0;

@@ -391,10 +393,12 @@ static int bcm_enet_receive_queue(struct net_device *dev, int budget)
 		skb->protocol = eth_type_trans(skb, dev);
 		dev->stats.rx_packets++;
 		dev->stats.rx_bytes += len;
-		netif_receive_skb(skb);
+		list_add_tail(&skb->list, &rx_list);
 
 	} while (--budget > 0);
 
+	netif_receive_skb_list(&rx_list);
+
 	if (processed || !priv->rx_desc_count) {
 		bcm_enet_refill_rx(dev);