Commit 98173633 authored by Jakub Kicinski

Merge tag 'mlx5-updates-2023-08-16' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2023-08-16

1) aRFS ethtool stats

Improve aRFS observability by adding a new set of counters. Each Rx
ring has the set of counters listed below.
These counters are exposed through ethtool -S.

1.1) arfs_add: number of times a new rule has been created.
1.2) arfs_request_in: number of times a rule was requested to move from
   its current Rx ring to a new Rx ring (incremented on the destination
   Rx ring).
1.3) arfs_request_out: number of times a rule was requested to move out
   of its current Rx ring (incremented on the source/current Rx ring).
1.4) arfs_expired: number of times a rule expired in the kernel and was
   removed from HW.
1.5) arfs_err: number of times a rule creation or modification failed.
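Once exposed, these counters can be read with `ethtool -S <iface>` and grouped
per ring. A minimal sketch (the sample output and values below are invented for
illustration) that parses the `rx[i]_arfs_*` lines:

```python
import re
from collections import defaultdict

# Sample `ethtool -S` output fragment (hypothetical values).
SAMPLE = """\
     rx0_arfs_add: 12
     rx0_arfs_request_in: 3
     rx0_arfs_request_out: 1
     rx0_arfs_expired: 2
     rx0_arfs_err: 0
     rx1_arfs_add: 7
     rx1_arfs_err: 1
"""

def parse_arfs_stats(text):
    """Group the per-ring rx<i>_arfs_* counters by ring index."""
    stats = defaultdict(dict)
    for m in re.finditer(r"rx(\d+)_arfs_(\w+):\s*(\d+)", text):
        ring, name, value = int(m.group(1)), m.group(2), int(m.group(3))
        stats[ring][name] = value
    return dict(stats)

per_ring = parse_arfs_stats(SAMPLE)
# per_ring[0] -> {'add': 12, 'request_in': 3, 'request_out': 1,
#                 'expired': 2, 'err': 0}
```

In practice the text would come from running `ethtool -S` on a real interface;
a steadily rising `err` count points at flow-table insertion failures.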

2) Supporting inline WQE when possible in SW steering

3) Misc cleanups and fixups to net-next branch

* tag 'mlx5-updates-2023-08-16' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
  net/mlx5: Devcom, only use devcom after NULL check in mlx5_devcom_send_event()
  net/mlx5: DR, Supporting inline WQE when possible
  net/mlx5: Rename devlink port ops struct for PFs/VFs
  net/mlx5: Remove VPORT_UPLINK handling from devlink_port.c
  net/mlx5: Call mlx5_esw_offloads_rep_load/unload() for uplink port directly
  net/mlx5: Update dead links in Kconfig documentation
  net/mlx5: Remove health syndrome enum duplication
  net/mlx5: DR, Remove unneeded local variable
  net/mlx5: DR, Fix code indentation
  net/mlx5: IRQ, consolidate irq and affinity mask allocation
  net/mlx5e: Fix spelling mistake "Faided" -> "Failed"
  net/mlx5e: aRFS, Introduce ethtool stats
  net/mlx5e: aRFS, Warn if aRFS table does not exist for aRFS rule
  net/mlx5e: aRFS, Prevent repeated kernel rule migrations requests
====================

Link: https://lore.kernel.org/r/20230821175739.81188-1-saeed@kernel.org


Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents 504fc6f4 7d7c6e8c
+18 −5
@@ -346,6 +346,24 @@ the software port.
     - The number of receive packets with CQE compression on ring i [#accel]_.
     - Acceleration

+  * - `rx[i]_arfs_add`
+    - The number of aRFS flow rules added to the device for direct RQ steering
+      on ring i [#accel]_.
+    - Acceleration
+
+  * - `rx[i]_arfs_request_in`
+    - Number of flow rules that have been requested to move into ring i for
+      direct RQ steering [#accel]_.
+    - Acceleration
+
+  * - `rx[i]_arfs_request_out`
+    - Number of flow rules that have been requested to move out of ring i [#accel]_.
+    - Acceleration
+
+  * - `rx[i]_arfs_expired`
+    - Number of flow rules that have been expired and removed [#accel]_.
+    - Acceleration
+
   * - `rx[i]_arfs_err`
     - Number of flow rules that failed to be added to the flow table.
     - Error
@@ -445,11 +463,6 @@ the software port.
       context.
     - Error

-  * - `rx[i]_xsk_arfs_err`
-    - aRFS (accelerated Receive Flow Steering) does not occur in the XSK RQ
-      context, so this counter should never increment.
-    - Error
-
   * - `rx[i]_xdp_tx_xmit`
     - The number of packets forwarded back to the port due to XDP program
       `XDP_TX` action (bouncing). these packets are not counted by other
+7 −7
@@ -36,7 +36,7 @@ Enabling the driver and kconfig options

**CONFIG_MLX5_CORE_EN_DCB=(y/n)**:

-|    Enables `Data Center Bridging (DCB) Support <https://community.mellanox.com/s/article/howto-auto-config-pfc-and-ets-on-connectx-4-via-lldp-dcbx>`_.
+|    Enables `Data Center Bridging (DCB) Support <https://enterprise-support.nvidia.com/s/article/howto-auto-config-pfc-and-ets-on-connectx-4-via-lldp-dcbx>`_.


**CONFIG_MLX5_CORE_IPOIB=(y/n)**
@@ -59,12 +59,12 @@ Enabling the driver and kconfig options
**CONFIG_MLX5_EN_ARFS=(y/n)**

|    Enables Hardware-accelerated receive flow steering (arfs) support, and ntuple filtering.
-|    https://community.mellanox.com/s/article/howto-configure-arfs-on-connectx-4
+|    https://enterprise-support.nvidia.com/s/article/howto-configure-arfs-on-connectx-4


**CONFIG_MLX5_EN_IPSEC=(y/n)**

-|    Enables `IPSec XFRM cryptography-offload acceleration <https://support.mellanox.com/s/article/ConnectX-6DX-Bluefield-2-IPsec-HW-Full-Offload-Configuration-Guide>`_.
+|    Enables :ref:`IPSec XFRM cryptography-offload acceleration <xfrm_device>`.


**CONFIG_MLX5_EN_MACSEC=(y/n)**
@@ -87,8 +87,8 @@ Enabling the driver and kconfig options

|    Ethernet SRIOV E-Switch support in ConnectX NIC. E-Switch provides internal SRIOV packet steering
|    and switching for the enabled VFs and PF in two available modes:
-|           1) `Legacy SRIOV mode (L2 mac vlan steering based) <https://community.mellanox.com/s/article/howto-configure-sr-iov-for-connectx-4-connectx-5-with-kvm--ethernet-x>`_.
-|           2) `Switchdev mode (eswitch offloads) <https://www.mellanox.com/related-docs/prod_software/ASAP2_Hardware_Offloading_for_vSwitches_User_Manual_v4.4.pdf>`_.
+|           1) `Legacy SRIOV mode (L2 mac vlan steering based) <https://enterprise-support.nvidia.com/s/article/HowTo-Configure-SR-IOV-for-ConnectX-4-ConnectX-5-ConnectX-6-with-KVM-Ethernet>`_.
+|           2) :ref:`Switchdev mode (eswitch offloads) <switchdev>`.


**CONFIG_MLX5_FPGA=(y/n)**
@@ -101,13 +101,13 @@ Enabling the driver and kconfig options

**CONFIG_MLX5_INFINIBAND=(y/n/m)** (module mlx5_ib.ko)

-|    Provides low-level InfiniBand/RDMA and `RoCE <https://community.mellanox.com/s/article/recommended-network-configuration-examples-for-roce-deployment>`_ support.
+|    Provides low-level InfiniBand/RDMA and `RoCE <https://enterprise-support.nvidia.com/s/article/recommended-network-configuration-examples-for-roce-deployment>`_ support.


**CONFIG_MLX5_MPFS=(y/n)**

|    Ethernet Multi-Physical Function Switch (MPFS) support in ConnectX NIC.
-|    MPFs is required for when `Multi-Host <http://www.mellanox.com/page/multihost>`_ configuration is enabled to allow passing
+|    MPFs is required for when `Multi-Host <https://www.nvidia.com/en-us/networking/multi-host/>`_ configuration is enabled to allow passing
|    user configured unicast MAC addresses to the requesting PF.


+1 −0
.. SPDX-License-Identifier: GPL-2.0
+.. _xfrm_device:

===============================================
XFRM device - offloading the IPsec computations
+17 −4
@@ -432,8 +432,10 @@ static void arfs_may_expire_flow(struct mlx5e_priv *priv)
	}
	spin_unlock_bh(&arfs->arfs_lock);
	hlist_for_each_entry_safe(arfs_rule, htmp, &del_list, hlist) {
-		if (arfs_rule->rule)
+		if (arfs_rule->rule) {
			mlx5_del_flow_rules(arfs_rule->rule);
+			priv->channel_stats[arfs_rule->rxq]->rq.arfs_expired++;
+		}
		hlist_del(&arfs_rule->hlist);
		kfree(arfs_rule);
	}
@@ -509,6 +511,7 @@ static struct mlx5_flow_handle *arfs_add_rule(struct mlx5e_priv *priv,

	spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
	if (!spec) {
+		priv->channel_stats[arfs_rule->rxq]->rq.arfs_err++;
		err = -ENOMEM;
		goto out;
	}
@@ -519,6 +522,8 @@ static struct mlx5_flow_handle *arfs_add_rule(struct mlx5e_priv *priv,
		 ntohs(tuple->etype));
	arfs_table = arfs_get_table(arfs, tuple->ip_proto, tuple->etype);
	if (!arfs_table) {
+		WARN_ONCE(1, "arfs table does not exist for etype %u and ip_proto %u\n",
+			  tuple->etype, tuple->ip_proto);
		err = -EINVAL;
		goto out;
	}
@@ -600,10 +605,12 @@ static void arfs_modify_rule_rq(struct mlx5e_priv *priv,
	dst.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
	dst.tir_num = mlx5e_rx_res_get_tirn_direct(priv->rx_res, rxq);
	err =  mlx5_modify_rule_destination(rule, &dst, NULL);
-	if (err)
+	if (err) {
+		priv->channel_stats[rxq]->rq.arfs_err++;
		netdev_warn(priv->netdev,
			    "Failed to modify aRFS rule destination to rq=%d\n", rxq);
+	}
}

static void arfs_handle_work(struct work_struct *work)
{
@@ -632,6 +639,7 @@ static void arfs_handle_work(struct work_struct *work)
		if (IS_ERR(rule))
			goto out;
		arfs_rule->rule = rule;
+		priv->channel_stats[arfs_rule->rxq]->rq.arfs_add++;
	} else {
		arfs_modify_rule_rq(priv, arfs_rule->rule,
				    arfs_rule->rxq);
@@ -650,8 +658,10 @@ static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
	struct arfs_tuple *tuple;

	rule = kzalloc(sizeof(*rule), GFP_ATOMIC);
-	if (!rule)
+	if (!rule) {
+		priv->channel_stats[rxq]->rq.arfs_err++;
		return NULL;
+	}

	rule->priv = priv;
	rule->rxq = rxq;
@@ -740,10 +750,13 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
	spin_lock_bh(&arfs->arfs_lock);
	arfs_rule = arfs_find_rule(arfs_t, &fk);
	if (arfs_rule) {
-		if (arfs_rule->rxq == rxq_index) {
+		if (arfs_rule->rxq == rxq_index || work_busy(&arfs_rule->arfs_work)) {
			spin_unlock_bh(&arfs->arfs_lock);
			return arfs_rule->filter_id;
		}
+
+		priv->channel_stats[rxq_index]->rq.arfs_request_in++;
+		priv->channel_stats[arfs_rule->rxq]->rq.arfs_request_out++;
		arfs_rule->rxq = rxq_index;
	} else {
		arfs_rule = arfs_alloc_rule(priv, arfs_t, &fk, rxq_index, flow_id);
+18 −4
@@ -180,7 +180,13 @@ static const struct counter_desc sw_stats_desc[] = {
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_blks) },
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_pkts) },
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_congst_umr) },
+#ifdef CONFIG_MLX5_EN_ARFS
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_add) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_request_in) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_request_out) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_expired) },
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_err) },
+#endif
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_recover) },
#ifdef CONFIG_PAGE_POOL_STATS
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_fast) },
@@ -231,7 +237,6 @@ static const struct counter_desc sw_stats_desc[] = {
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_cqe_compress_blks) },
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_cqe_compress_pkts) },
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_congst_umr) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xsk_arfs_err) },
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_xmit) },
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_mpwqe) },
	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_xsk_inlnw) },
@@ -321,7 +326,6 @@ static void mlx5e_stats_grp_sw_update_stats_xskrq(struct mlx5e_sw_stats *s,
	s->rx_xsk_cqe_compress_blks      += xskrq_stats->cqe_compress_blks;
	s->rx_xsk_cqe_compress_pkts      += xskrq_stats->cqe_compress_pkts;
	s->rx_xsk_congst_umr             += xskrq_stats->congst_umr;
-	s->rx_xsk_arfs_err               += xskrq_stats->arfs_err;
}

static void mlx5e_stats_grp_sw_update_stats_rq_stats(struct mlx5e_sw_stats *s,
@@ -354,7 +358,13 @@ static void mlx5e_stats_grp_sw_update_stats_rq_stats(struct mlx5e_sw_stats *s,
	s->rx_cqe_compress_blks       += rq_stats->cqe_compress_blks;
	s->rx_cqe_compress_pkts       += rq_stats->cqe_compress_pkts;
	s->rx_congst_umr              += rq_stats->congst_umr;
+#ifdef CONFIG_MLX5_EN_ARFS
+	s->rx_arfs_add                += rq_stats->arfs_add;
+	s->rx_arfs_request_in         += rq_stats->arfs_request_in;
+	s->rx_arfs_request_out        += rq_stats->arfs_request_out;
+	s->rx_arfs_expired            += rq_stats->arfs_expired;
	s->rx_arfs_err                += rq_stats->arfs_err;
+#endif
	s->rx_recover                 += rq_stats->recover;
#ifdef CONFIG_PAGE_POOL_STATS
	s->rx_pp_alloc_fast          += rq_stats->pp_alloc_fast;
@@ -1990,7 +2000,13 @@ static const struct counter_desc rq_stats_desc[] = {
	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, congst_umr) },
+#ifdef CONFIG_MLX5_EN_ARFS
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_add) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_request_in) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_request_out) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_expired) },
	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_err) },
+#endif
	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, recover) },
#ifdef CONFIG_PAGE_POOL_STATS
	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_fast) },
@@ -2092,7 +2108,6 @@ static const struct counter_desc xskrq_stats_desc[] = {
	{ MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
	{ MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
	{ MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, congst_umr) },
-	{ MLX5E_DECLARE_XSKRQ_STAT(struct mlx5e_rq_stats, arfs_err) },
};

static const struct counter_desc xsksq_stats_desc[] = {
@@ -2168,7 +2183,6 @@ static const struct counter_desc ptp_rq_stats_desc[] = {
	{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
	{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
	{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, congst_umr) },
-	{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, arfs_err) },
	{ MLX5E_DECLARE_PTP_RQ_STAT(struct mlx5e_rq_stats, recover) },
};
