Commit ca0a53dc authored by David S. Miller

Merge branch 'net-hw-counters-for-soft-devices'

Ido Schimmel says:

====================
HW counters for soft devices

Petr says:

Offloading switch device drivers may be able to collect statistics of the
traffic taking place in the HW datapath that pertains to a certain soft
netdevice, such as a VLAN. In this patch set, add the necessary
infrastructure to allow exposing these statistics to the offloaded
netdevice in question, and add mlxsw offload.

Across HW platforms, the counter itself very likely constitutes a limited
resource, and the act of counting may have a performance impact. Therefore
this patch set makes the HW statistics collection opt-in and togglable from
userspace on a per-netdevice basis.

Additionally, HW devices may have various limiting conditions under which
they can realize the counter. Therefore it is also possible to query
whether the requested counter is realized by any driver. In TC parlance,
which is to a degree reused in this patch set, two values are recognized:
"request" tracks whether the user enabled collecting HW statistics, and
"used" tracks whether any HW statistics are actually collected.

In the past, this author has expressed the opinion that `a typical user
doing "ip -s l sh", including various scripts, wants to see the full
picture and not worry what's going on where'. While that would be nice,
unfortunately it cannot work:

- Packets that trap from the HW datapath to the SW datapath would be
  double counted.

  For a given netdevice, some traffic can be purely a SW artifact, and some
  may flow through the HW object corresponding to the netdevice. But some
  traffic can also get trapped to the SW datapath after bumping the HW
  counter. It is not clear how to make sure double-counting does not occur
  in the SW datapath in that case, while still making sure that possibly
  divergent SW forwarding path gets bumped as appropriate.

  So simply adding HW and SW stats may work roughly, most of the time, but
  there are scenarios where the result is nonsensical.

- HW devices will have limitations as to what type of traffic they can
  count.

  In case of mlxsw, which is part of this patch set, there is no reasonable
  way to count all traffic going through a certain netdevice, such as a
  VLAN netdevice enslaved to a bridge. It is however very simple to count
  traffic flowing through an L3 object, such as a VLAN netdevice with an IP
  address.

  Similarly for physical netdevices, the L3 object at which the counter is
  installed is the subport carrying untagged traffic.

  These are not "just counters". It is important that the user understands
  what is being counted. It would be incorrect to conflate these statistics
  with another existing statistics suite.

To that end, this patch set introduces a statistics suite called "L3
stats". This label should make it easy to understand what is being counted,
and to decide whether a given device can or cannot implement this suite for
some type of netdevice. At the same time, the code is written to make
future extensions easy, should a device pop up that can implement a
different flavor of statistics suite (say L2, or an address-family-specific
suite).

For example, using a work-in-progress iproute2[1], to turn on and then list
the counters on a VLAN netdevice:

    # ip stats set dev swp1.200 l3_stats on
    # ip stats show dev swp1.200 group offload subgroup l3_stats
    56: swp1.200: group offload subgroup l3_stats on used on
	RX:  bytes packets errors dropped  missed   mcast
		0       0      0       0       0       0
	TX:  bytes packets errors dropped carrier collsns
		0       0      0       0       0       0

The patch set progresses as follows:

- Patch #1 is a cleanup.

- In patch #2, remove the assumption that all LINK_OFFLOAD_XSTATS are
  dev-backed.

  The only attribute defined under the nest is currently
  IFLA_OFFLOAD_XSTATS_CPU_HIT. L3_STATS differs from CPU_HIT in that the
  driver that supplies the statistics is not the same as the driver that
  implements the netdevice. Make the code compatible with this in patch #2.

- In patch #3, add the possibility to filter inside nests.

  The filter_mask field of the RTM_GETSTATS header determines which
  top-level attributes should be included in the netlink response. This
  saves processing time by only including the bits that the user cares
  about instead of always dumping everything. This is doubly important
  for HW-backed statistics that would typically require a trip to the
  device to fetch the stats. In this patch, the UAPI is extended to
  allow filtering inside IFLA_STATS_LINK_OFFLOAD_XSTATS in particular,
  but the scheme is easily extensible to other nests as well.

- In patch #4, propagate extack where we need it.
  In patch #5, make it possible to propagate errors from drivers to the
  user.

- In patch #6, add the in-kernel APIs for keeping track of the new stats
  suite, and the notifiers that the core uses to communicate with the
  drivers.

- In patch #7, add UAPI for obtaining the new stats suite.

- In patch #8, add a new UAPI message, RTM_SETSTATS, which will carry
  the message to toggle the newly-added stats suite.
  In patch #9, add the toggle itself.

At this point the core is ready for drivers to add support for the new
stats suite.

- In patches #10, #11 and #12, apply small tweaks to mlxsw code.

- In patch #13, add support for L3 stats, which are realized as RIF
  counters.

- Finally in patch #14, a selftest is added to the net/forwarding
  directory. Technically this is a HW-specific test, in that without a HW
  implementing the counters, it just will not pass. But devices that
  support L3 statistics at all are likely to be able to reuse this
  selftest, so it seems appropriate to put it in the general forwarding
  directory.

We also have a netdevsim implementation, and a corresponding selftest that
verifies specifically some of the core code. We intend to contribute these
later. Interested parties can take a look at the raw code at [2].

[1] https://github.com/pmachata/iproute2/commits/soft_counters
[2] https://github.com/pmachata/linux_mlxsw/commits/petrm_soft_counters_2

v2:
- Patch #3:
    - Do not declare strict_start_type at the new policies, since they are
      used with nla_parse_nested() (sans _deprecated).
    - Use NLA_POLICY_NESTED to declare what the nest contents should be
    - Use NLA_POLICY_MASK instead of BITFIELD32 for the filtering
      attribute.
- Patch #6:
    - s/monotonous/monotonic/ in commit message
    - Use a newly-added struct rtnl_hw_stats64 for stats transfer
- Patch #7:
    - Use a newly-added struct rtnl_hw_stats64 for stats transfer
- Patch #8:
    - Do not declare strict_start_type at the new policies, since they are
      used with nla_parse_nested() (sans _deprecated).
- Patch #13:
    - Use a newly-added struct rtnl_hw_stats64 for stats transfer
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
parents cbcc44db ba95e793
+5 −3
@@ -6784,13 +6784,15 @@ static inline void mlxsw_reg_ritr_counter_pack(char *payload, u32 index,
 		set_type = MLXSW_REG_RITR_COUNTER_SET_TYPE_BASIC;
 	else
 		set_type = MLXSW_REG_RITR_COUNTER_SET_TYPE_NO_COUNT;
-	mlxsw_reg_ritr_egress_counter_set_type_set(payload, set_type);
 
-	if (egress)
+	if (egress) {
+		mlxsw_reg_ritr_egress_counter_set_type_set(payload, set_type);
 		mlxsw_reg_ritr_egress_counter_index_set(payload, index);
-	else
+	} else {
+		mlxsw_reg_ritr_ingress_counter_set_type_set(payload, set_type);
 		mlxsw_reg_ritr_ingress_counter_index_set(payload, index);
+	}
 }
 
 static inline void mlxsw_reg_ritr_rif_pack(char *payload, u16 rif)
 {
+17 −3
@@ -4823,6 +4823,22 @@ static int mlxsw_sp_netdevice_vxlan_event(struct mlxsw_sp *mlxsw_sp,
 	return 0;
 }
 
+static bool mlxsw_sp_netdevice_event_is_router(unsigned long event)
+{
+	switch (event) {
+	case NETDEV_PRE_CHANGEADDR:
+	case NETDEV_CHANGEADDR:
+	case NETDEV_CHANGEMTU:
+	case NETDEV_OFFLOAD_XSTATS_ENABLE:
+	case NETDEV_OFFLOAD_XSTATS_DISABLE:
+	case NETDEV_OFFLOAD_XSTATS_REPORT_USED:
+	case NETDEV_OFFLOAD_XSTATS_REPORT_DELTA:
+		return true;
+	default:
+		return false;
+	}
+}
+
 static int mlxsw_sp_netdevice_event(struct notifier_block *nb,
 				    unsigned long event, void *ptr)
 {
@@ -4847,9 +4863,7 @@ static int mlxsw_sp_netdevice_event(struct notifier_block *nb,
 	else if (mlxsw_sp_netdev_is_ipip_ul(mlxsw_sp, dev))
 		err = mlxsw_sp_netdevice_ipip_ul_event(mlxsw_sp, dev,
 						       event, ptr);
-	else if (event == NETDEV_PRE_CHANGEADDR ||
-		 event == NETDEV_CHANGEADDR ||
-		 event == NETDEV_CHANGEMTU)
+	else if (mlxsw_sp_netdevice_event_is_router(event))
 		err = mlxsw_sp_netdevice_router_port_event(dev, event, ptr);
 	else if (mlxsw_sp_is_vrf_event(event, ptr))
 		err = mlxsw_sp_netdevice_vrf_event(dev, event, ptr);
+2 −2
@@ -266,10 +266,10 @@ static int mlxsw_sp_dpipe_table_erif_counters_update(void *priv, bool enable)
 		if (!rif)
 			continue;
 		if (enable)
-			mlxsw_sp_rif_counter_alloc(mlxsw_sp, rif,
+			mlxsw_sp_rif_counter_alloc(rif,
 						   MLXSW_SP_RIF_COUNTER_EGRESS);
 		else
-			mlxsw_sp_rif_counter_free(mlxsw_sp, rif,
+			mlxsw_sp_rif_counter_free(rif,
 						  MLXSW_SP_RIF_COUNTER_EGRESS);
 	}
 	mutex_unlock(&mlxsw_sp->router->lock);
+295 −10
@@ -225,6 +225,64 @@ int mlxsw_sp_rif_counter_value_get(struct mlxsw_sp *mlxsw_sp,
 	return 0;
 }
 
+struct mlxsw_sp_rif_counter_set_basic {
+	u64 good_unicast_packets;
+	u64 good_multicast_packets;
+	u64 good_broadcast_packets;
+	u64 good_unicast_bytes;
+	u64 good_multicast_bytes;
+	u64 good_broadcast_bytes;
+	u64 error_packets;
+	u64 discard_packets;
+	u64 error_bytes;
+	u64 discard_bytes;
+};
+
+static int
+mlxsw_sp_rif_counter_fetch_clear(struct mlxsw_sp_rif *rif,
+				 enum mlxsw_sp_rif_counter_dir dir,
+				 struct mlxsw_sp_rif_counter_set_basic *set)
+{
+	struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
+	char ricnt_pl[MLXSW_REG_RICNT_LEN];
+	unsigned int *p_counter_index;
+	int err;
+
+	if (!mlxsw_sp_rif_counter_valid_get(rif, dir))
+		return -EINVAL;
+
+	p_counter_index = mlxsw_sp_rif_p_counter_get(rif, dir);
+	if (!p_counter_index)
+		return -EINVAL;
+
+	mlxsw_reg_ricnt_pack(ricnt_pl, *p_counter_index,
+			     MLXSW_REG_RICNT_OPCODE_CLEAR);
+	err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(ricnt), ricnt_pl);
+	if (err)
+		return err;
+
+	if (!set)
+		return 0;
+
+#define MLXSW_SP_RIF_COUNTER_EXTRACT(NAME)				\
+		(set->NAME = mlxsw_reg_ricnt_ ## NAME ## _get(ricnt_pl))
+
+	MLXSW_SP_RIF_COUNTER_EXTRACT(good_unicast_packets);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(good_multicast_packets);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(good_broadcast_packets);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(good_unicast_bytes);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(good_multicast_bytes);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(good_broadcast_bytes);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(error_packets);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(discard_packets);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(error_bytes);
+	MLXSW_SP_RIF_COUNTER_EXTRACT(discard_bytes);
+
+#undef MLXSW_SP_RIF_COUNTER_EXTRACT
+
+	return 0;
+}
+
 static int mlxsw_sp_rif_counter_clear(struct mlxsw_sp *mlxsw_sp,
 				      unsigned int counter_index)
 {
@@ -235,16 +293,20 @@ static int mlxsw_sp_rif_counter_clear(struct mlxsw_sp *mlxsw_sp,
 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ricnt), ricnt_pl);
 }
 
-int mlxsw_sp_rif_counter_alloc(struct mlxsw_sp *mlxsw_sp,
-			       struct mlxsw_sp_rif *rif,
+int mlxsw_sp_rif_counter_alloc(struct mlxsw_sp_rif *rif,
 			       enum mlxsw_sp_rif_counter_dir dir)
 {
+	struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
 	unsigned int *p_counter_index;
 	int err;
 
+	if (mlxsw_sp_rif_counter_valid_get(rif, dir))
+		return 0;
+
 	p_counter_index = mlxsw_sp_rif_p_counter_get(rif, dir);
 	if (!p_counter_index)
 		return -EINVAL;
+
 	err = mlxsw_sp_counter_alloc(mlxsw_sp, MLXSW_SP_COUNTER_SUB_POOL_RIF,
 				     p_counter_index);
 	if (err)
@@ -268,10 +330,10 @@ int mlxsw_sp_rif_counter_alloc(struct mlxsw_sp *mlxsw_sp,
 	return err;
 }
 
-void mlxsw_sp_rif_counter_free(struct mlxsw_sp *mlxsw_sp,
-			       struct mlxsw_sp_rif *rif,
+void mlxsw_sp_rif_counter_free(struct mlxsw_sp_rif *rif,
 			       enum mlxsw_sp_rif_counter_dir dir)
 {
+	struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
 	unsigned int *p_counter_index;
 
 	if (!mlxsw_sp_rif_counter_valid_get(rif, dir))
@@ -296,14 +358,12 @@ static void mlxsw_sp_rif_counters_alloc(struct mlxsw_sp_rif *rif)
 	if (!devlink_dpipe_table_counter_enabled(devlink,
 						 MLXSW_SP_DPIPE_TABLE_NAME_ERIF))
 		return;
-	mlxsw_sp_rif_counter_alloc(mlxsw_sp, rif, MLXSW_SP_RIF_COUNTER_EGRESS);
+	mlxsw_sp_rif_counter_alloc(rif, MLXSW_SP_RIF_COUNTER_EGRESS);
 }
 
 static void mlxsw_sp_rif_counters_free(struct mlxsw_sp_rif *rif)
 {
-	struct mlxsw_sp *mlxsw_sp = rif->mlxsw_sp;
-
-	mlxsw_sp_rif_counter_free(mlxsw_sp, rif, MLXSW_SP_RIF_COUNTER_EGRESS);
+	mlxsw_sp_rif_counter_free(rif, MLXSW_SP_RIF_COUNTER_EGRESS);
 }
 
 #define MLXSW_SP_PREFIX_COUNT (sizeof(struct in6_addr) * BITS_PER_BYTE + 1)
@@ -8148,6 +8208,166 @@ u16 mlxsw_sp_ipip_lb_ul_rif_id(const struct mlxsw_sp_rif_ipip_lb *lb_rif)
 	return lb_rif->ul_rif_id;
 }
 
+static bool
+mlxsw_sp_router_port_l3_stats_enabled(struct mlxsw_sp_rif *rif)
+{
+	return mlxsw_sp_rif_counter_valid_get(rif,
+					      MLXSW_SP_RIF_COUNTER_EGRESS) &&
+	       mlxsw_sp_rif_counter_valid_get(rif,
+					      MLXSW_SP_RIF_COUNTER_INGRESS);
+}
+
+static int
+mlxsw_sp_router_port_l3_stats_enable(struct mlxsw_sp_rif *rif)
+{
+	int err;
+
+	err = mlxsw_sp_rif_counter_alloc(rif, MLXSW_SP_RIF_COUNTER_INGRESS);
+	if (err)
+		return err;
+
+	/* Clear stale data. */
+	err = mlxsw_sp_rif_counter_fetch_clear(rif,
+					       MLXSW_SP_RIF_COUNTER_INGRESS,
+					       NULL);
+	if (err)
+		goto err_clear_ingress;
+
+	err = mlxsw_sp_rif_counter_alloc(rif, MLXSW_SP_RIF_COUNTER_EGRESS);
+	if (err)
+		goto err_alloc_egress;
+
+	/* Clear stale data. */
+	err = mlxsw_sp_rif_counter_fetch_clear(rif,
+					       MLXSW_SP_RIF_COUNTER_EGRESS,
+					       NULL);
+	if (err)
+		goto err_clear_egress;
+
+	return 0;
+
+err_clear_egress:
+	mlxsw_sp_rif_counter_free(rif, MLXSW_SP_RIF_COUNTER_EGRESS);
+err_alloc_egress:
+err_clear_ingress:
+	mlxsw_sp_rif_counter_free(rif, MLXSW_SP_RIF_COUNTER_INGRESS);
+	return err;
+}
+
+static void
+mlxsw_sp_router_port_l3_stats_disable(struct mlxsw_sp_rif *rif)
+{
+	mlxsw_sp_rif_counter_free(rif, MLXSW_SP_RIF_COUNTER_EGRESS);
+	mlxsw_sp_rif_counter_free(rif, MLXSW_SP_RIF_COUNTER_INGRESS);
+}
+
+static void
+mlxsw_sp_router_port_l3_stats_report_used(struct mlxsw_sp_rif *rif,
+					  struct netdev_notifier_offload_xstats_info *info)
+{
+	if (!mlxsw_sp_router_port_l3_stats_enabled(rif))
+		return;
+	netdev_offload_xstats_report_used(info->report_used);
+}
+
+static int
+mlxsw_sp_router_port_l3_stats_fetch(struct mlxsw_sp_rif *rif,
+				    struct rtnl_hw_stats64 *p_stats)
+{
+	struct mlxsw_sp_rif_counter_set_basic ingress;
+	struct mlxsw_sp_rif_counter_set_basic egress;
+	int err;
+
+	err = mlxsw_sp_rif_counter_fetch_clear(rif,
+					       MLXSW_SP_RIF_COUNTER_INGRESS,
+					       &ingress);
+	if (err)
+		return err;
+
+	err = mlxsw_sp_rif_counter_fetch_clear(rif,
+					       MLXSW_SP_RIF_COUNTER_EGRESS,
+					       &egress);
+	if (err)
+		return err;
+
+#define MLXSW_SP_ROUTER_ALL_GOOD(SET, SFX)		\
+		((SET.good_unicast_ ## SFX) +		\
+		 (SET.good_multicast_ ## SFX) +		\
+		 (SET.good_broadcast_ ## SFX))
+
+	p_stats->rx_packets = MLXSW_SP_ROUTER_ALL_GOOD(ingress, packets);
+	p_stats->tx_packets = MLXSW_SP_ROUTER_ALL_GOOD(egress, packets);
+	p_stats->rx_bytes = MLXSW_SP_ROUTER_ALL_GOOD(ingress, bytes);
+	p_stats->tx_bytes = MLXSW_SP_ROUTER_ALL_GOOD(egress, bytes);
+	p_stats->rx_errors = ingress.error_packets;
+	p_stats->tx_errors = egress.error_packets;
+	p_stats->rx_dropped = ingress.discard_packets;
+	p_stats->tx_dropped = egress.discard_packets;
+	p_stats->multicast = ingress.good_multicast_packets +
+			     ingress.good_broadcast_packets;
+
+#undef MLXSW_SP_ROUTER_ALL_GOOD
+
+	return 0;
+}
+
+static int
+mlxsw_sp_router_port_l3_stats_report_delta(struct mlxsw_sp_rif *rif,
+					   struct netdev_notifier_offload_xstats_info *info)
+{
+	struct rtnl_hw_stats64 stats = {};
+	int err;
+
+	if (!mlxsw_sp_router_port_l3_stats_enabled(rif))
+		return 0;
+
+	err = mlxsw_sp_router_port_l3_stats_fetch(rif, &stats);
+	if (err)
+		return err;
+
+	netdev_offload_xstats_report_delta(info->report_delta, &stats);
+	return 0;
+}
+
+struct mlxsw_sp_router_hwstats_notify_work {
+	struct work_struct work;
+	struct net_device *dev;
+};
+
+static void mlxsw_sp_router_hwstats_notify_work(struct work_struct *work)
+{
+	struct mlxsw_sp_router_hwstats_notify_work *hws_work =
+		container_of(work, struct mlxsw_sp_router_hwstats_notify_work,
+			     work);
+
+	rtnl_lock();
+	rtnl_offload_xstats_notify(hws_work->dev);
+	rtnl_unlock();
+	dev_put(hws_work->dev);
+	kfree(hws_work);
+}
+
+static void
+mlxsw_sp_router_hwstats_notify_schedule(struct net_device *dev)
+{
+	struct mlxsw_sp_router_hwstats_notify_work *hws_work;
+
+	/* To collect notification payload, the core ends up sending another
+	 * notifier block message, which would deadlock on the attempt to
+	 * acquire the router lock again. Just postpone the notification until
+	 * later.
+	 */
+
+	hws_work = kzalloc(sizeof(*hws_work), GFP_KERNEL);
+	if (!hws_work)
+		return;
+
+	INIT_WORK(&hws_work->work, mlxsw_sp_router_hwstats_notify_work);
+	dev_hold(dev);
+	hws_work->dev = dev;
+	mlxsw_core_schedule_work(&hws_work->work);
+}
+
 int mlxsw_sp_rif_dev_ifindex(const struct mlxsw_sp_rif *rif)
 {
 	return rif->dev->ifindex;
@@ -8158,6 +8378,16 @@ const struct net_device *mlxsw_sp_rif_dev(const struct mlxsw_sp_rif *rif)
 	return rif->dev;
 }
 
+static void mlxsw_sp_rif_push_l3_stats(struct mlxsw_sp_rif *rif)
+{
+	struct rtnl_hw_stats64 stats = {};
+
+	if (!mlxsw_sp_router_port_l3_stats_fetch(rif, &stats))
+		netdev_offload_xstats_push_delta(rif->dev,
+						 NETDEV_OFFLOAD_XSTATS_TYPE_L3,
+						 &stats);
+}
+
 static struct mlxsw_sp_rif *
 mlxsw_sp_rif_create(struct mlxsw_sp *mlxsw_sp,
 		    const struct mlxsw_sp_rif_params *params,
@@ -8218,10 +8448,19 @@ mlxsw_sp_rif_create(struct mlxsw_sp *mlxsw_sp,
 			goto err_mr_rif_add;
 	}
 
-	mlxsw_sp_rif_counters_alloc(rif);
+	if (netdev_offload_xstats_enabled(rif->dev,
+					  NETDEV_OFFLOAD_XSTATS_TYPE_L3)) {
+		err = mlxsw_sp_router_port_l3_stats_enable(rif);
+		if (err)
+			goto err_stats_enable;
+		mlxsw_sp_router_hwstats_notify_schedule(rif->dev);
+	} else {
+		mlxsw_sp_rif_counters_alloc(rif);
+	}
 
 	return rif;
 
+err_stats_enable:
 err_mr_rif_add:
 	for (i--; i >= 0; i--)
 		mlxsw_sp_mr_rif_del(vr->mr_table[i], rif);
@@ -8251,7 +8490,15 @@ static void mlxsw_sp_rif_destroy(struct mlxsw_sp_rif *rif)
 	mlxsw_sp_router_rif_gone_sync(mlxsw_sp, rif);
 	vr = &mlxsw_sp->router->vrs[rif->vr_id];
 
-	mlxsw_sp_rif_counters_free(rif);
+	if (netdev_offload_xstats_enabled(rif->dev,
+					  NETDEV_OFFLOAD_XSTATS_TYPE_L3)) {
+		mlxsw_sp_rif_push_l3_stats(rif);
+		mlxsw_sp_router_port_l3_stats_disable(rif);
+		mlxsw_sp_router_hwstats_notify_schedule(rif->dev);
+	} else {
+		mlxsw_sp_rif_counters_free(rif);
+	}
+
 	for (i = 0; i < MLXSW_SP_L3_PROTO_MAX; i++)
 		mlxsw_sp_mr_rif_del(vr->mr_table[i], rif);
 	ops->deconfigure(rif);
@@ -9128,6 +9375,35 @@ static int mlxsw_sp_router_port_pre_changeaddr_event(struct mlxsw_sp_rif *rif,
 	return -ENOBUFS;
 }
 
+static int
+mlxsw_sp_router_port_offload_xstats_cmd(struct mlxsw_sp_rif *rif,
+					unsigned long event,
+					struct netdev_notifier_offload_xstats_info *info)
+{
+	switch (info->type) {
+	case NETDEV_OFFLOAD_XSTATS_TYPE_L3:
+		break;
+	default:
+		return 0;
+	}
+
+	switch (event) {
+	case NETDEV_OFFLOAD_XSTATS_ENABLE:
+		return mlxsw_sp_router_port_l3_stats_enable(rif);
+	case NETDEV_OFFLOAD_XSTATS_DISABLE:
+		mlxsw_sp_router_port_l3_stats_disable(rif);
+		return 0;
+	case NETDEV_OFFLOAD_XSTATS_REPORT_USED:
+		mlxsw_sp_router_port_l3_stats_report_used(rif, info);
+		return 0;
+	case NETDEV_OFFLOAD_XSTATS_REPORT_DELTA:
+		return mlxsw_sp_router_port_l3_stats_report_delta(rif, info);
+	}
+
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
 int mlxsw_sp_netdevice_router_port_event(struct net_device *dev,
 					 unsigned long event, void *ptr)
 {
@@ -9153,6 +9429,15 @@ int mlxsw_sp_netdevice_router_port_event(struct net_device *dev,
 	case NETDEV_PRE_CHANGEADDR:
 		err = mlxsw_sp_router_port_pre_changeaddr_event(rif, ptr);
 		break;
+	case NETDEV_OFFLOAD_XSTATS_ENABLE:
+	case NETDEV_OFFLOAD_XSTATS_DISABLE:
+	case NETDEV_OFFLOAD_XSTATS_REPORT_USED:
+	case NETDEV_OFFLOAD_XSTATS_REPORT_DELTA:
+		err = mlxsw_sp_router_port_offload_xstats_cmd(rif, event, ptr);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
 	}
 
 out:
+2 −4
@@ -159,11 +159,9 @@ int mlxsw_sp_rif_counter_value_get(struct mlxsw_sp *mlxsw_sp,
 				   struct mlxsw_sp_rif *rif,
 				   enum mlxsw_sp_rif_counter_dir dir,
 				   u64 *cnt);
-void mlxsw_sp_rif_counter_free(struct mlxsw_sp *mlxsw_sp,
-			       struct mlxsw_sp_rif *rif,
+void mlxsw_sp_rif_counter_free(struct mlxsw_sp_rif *rif,
 			       enum mlxsw_sp_rif_counter_dir dir);
-int mlxsw_sp_rif_counter_alloc(struct mlxsw_sp *mlxsw_sp,
-			       struct mlxsw_sp_rif *rif,
+int mlxsw_sp_rif_counter_alloc(struct mlxsw_sp_rif *rif,
 			       enum mlxsw_sp_rif_counter_dir dir);
 struct mlxsw_sp_neigh_entry *
 mlxsw_sp_rif_neigh_next(struct mlxsw_sp_rif *rif,