  Oct 02, 2022
      Merge branch 'tc-bind_class-hook' · 9d435073
      David S. Miller authored
      
      
      Zhengchao Shao says:
      
      ====================
      refactor duplicate codes in bind_class hook function
      
      All the bind_class callbacks duplicate the same logic, so we can
      refactor them. First, ensure the n arg is not empty before calling the
      bind_class hook function. Then, add the tc_cls_bind_class() helper.
      Last, use tc_cls_bind_class() in the filters.
      ====================
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
      net: sched: use tc_cls_bind_class() in filter · cc9039a1
      Zhengchao Shao authored
      
      
      Use tc_cls_bind_class() in the filters.
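
      With the helper (introduced in 402963e3 below), each classifier's
      bind_class callback collapses to a single call. A minimal sketch using
      a u32-style knode; the field names here are illustrative, not a quote
      of the actual patch:

          static void u32_bind_class(void *fh, u32 classid, unsigned long cl,
                                     void *q, unsigned long base)
          {
                  struct tc_u_knode *n = fh;

                  /* the match-and-(un)bind logic now lives in the helper */
                  tc_cls_bind_class(classid, cl, q, &n->res, base);
          }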
      
      Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      net: sched: cls_api: introduce tc_cls_bind_class() helper · 402963e3
      Zhengchao Shao authored
      
      
      All the bind_class callbacks duplicate the same logic; this patch
      introduces the tc_cls_bind_class() helper for common usage.
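
      The shared logic is a classid comparison followed by a bind or unbind.
      A sketch of what such a helper can look like, built on the existing
      __tcf_bind_filter()/__tcf_unbind_filter() primitives; the exact
      signature is an assumption based on the patch description:

          static inline void tc_cls_bind_class(u32 classid, unsigned long cl,
                                               void *q, struct tcf_result *res,
                                               unsigned long base)
          {
                  /* only touch filters whose result points at this class */
                  if (res->classid == classid) {
                          if (cl)
                                  __tcf_bind_filter(q, res, base);
                          else
                                  __tcf_unbind_filter(q, res);
                  }
          }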
      
      Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      net: sched: ensure n arg not empty before call bind_class · 4e6263ec
      Zhengchao Shao authored
      
      
      All bind_class callbacks return directly when the n arg is empty.
      Therefore, bind_class is invoked only when the n arg is not empty.
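
      In other words, the check moves out of the callbacks and into the one
      caller. A sketch of the caller-side check; the function and struct
      names below are assumptions for illustration:

          static int tcf_node_bind(struct tcf_proto *tp, void *n,
                                   struct tcf_walker *arg)
          {
                  struct tcf_bind_args *a = (void *)arg;

                  /* skip empty nodes up front instead of in each callback */
                  if (n && tp->ops->bind_class) {
                          struct Qdisc *q = tcf_block_q(tp->chain->block);

                          sch_tree_lock(q);
                          tp->ops->bind_class(n, a->classid, a->cl, q, a->base);
                          sch_tree_unlock(q);
                  }
                  return 0;
          }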
      
      Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Merge branch 'mlx5-xsk-updates-part3-2022-09-30' · bc37b24e
      Jakub Kicinski authored
      
      
      Saeed Mahameed says:
      
      ====================
      mlx5 xsk updates part3 2022-09-30
      
      The gist of this 4-part series is in this patchset's last patch.
      
      This series contains performance optimizations. XSK starts using the
      batching allocator, and the XSK data path gets separated from the
      regular RX path, allowing us to drop some branches that are not
      relevant for non-XSK use cases. Some minor optimizations for indirect
      calls and need_wakeup are also included.
      
      Other than that, this series adds a few features to the mlx5e
      implementation of XSK:
      
      1. XDP metadata support on XSK RQs.
      
      2. RSS contexts support for XSK RQs.
      
      3. Some other optimizations.
      
      4. Last but not least, change the queuing scheme, so that XSK RQs no longer
      use higher indices, but replace the regular RQs.
      
      Maxim Says:
      ==========
      
      In the initial implementation of XSK in mlx5e, XSK RQs coexisted with
      regular RQs in the same channel. The main idea was to let RSS work the
      same way for regular traffic, without the need to reconfigure RSS to
      exclude XSK queues.
      
      However, this scheme didn't prove to be beneficial, mainly because of
      incompatibility with other vendors. Some tools don't properly support
      using higher indices for XSK queues, some tools get confused by the
      doubled number of RQs exposed in sysfs, and some use cases are purely
      XSK, where allocating the same number of unused regular RQs is a waste
      of resources.
      
      This commit changes the queuing scheme to the standard one, where XSK
      RQs replace regular RQs on the channels where XSK sockets are open. Two
      RQs still exist in the channel to allow a failsafe disable of XSK, but
      only one is exposed at a time. The next commit will achieve the desired
      memory savings by flushing the buffers when the regular RQ is unused.
      
      As a result of this transition:
      
      1. It's possible to use RSS contexts over XSK RQs.
      
      2. It's possible to dedicate all queues to XSK.
      
      3. When XSK RQs coexist with regular RQs, the admin should make sure no
      unwanted traffic goes into the XSK RQs, either by excluding them from
      RSS or by setting up the XDP program to return XDP_PASS for non-XSK
      traffic.
      
      4. When using a mixed fleet of mlx5e devices and other netdevs, the same
      configuration can be applied. If the application supports the fallback
      to copy mode on unsupported drivers, it will work too.
      
      ==========
      
      Part 4 will include some final xsk optimizations and minor improvements.
      
      part 1: https://lore.kernel.org/netdev/20220927203611.244301-1-saeed@kernel.org/
      part 2: https://lore.kernel.org/netdev/20220929072156.93299-1-saeed@kernel.org/
      ====================
      
      Link: https://lore.kernel.org/r/20220930162903.62262-1-saeed@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      net/mlx5e: xsk: Use queue indices starting from 0 for XSK queues · 3db4c85c
      Maxim Mikityanskiy authored
      
      
      In the initial implementation of XSK in mlx5e, XSK RQs coexisted with
      regular RQs in the same channel. The main idea was to let RSS work the
      same way for regular traffic, without the need to reconfigure RSS to
      exclude XSK queues.
      
      However, this scheme didn't prove to be beneficial, mainly because of
      incompatibility with other vendors. Some tools don't properly support
      using higher indices for XSK queues, some tools get confused by the
      doubled number of RQs exposed in sysfs, and some use cases are purely
      XSK, where allocating the same number of unused regular RQs is a waste
      of resources.
      
      This commit changes the queuing scheme to the standard one, where XSK
      RQs replace regular RQs on the channels where XSK sockets are open. Two
      RQs still exist in the channel to allow a failsafe disable of XSK, but
      only one is exposed at a time. The next commit will achieve the desired
      memory savings by flushing the buffers when the regular RQ is unused.
      
      As a result of this transition:
      
      1. It's possible to use RSS contexts over XSK RQs.
      
      2. It's possible to dedicate all queues to XSK.
      
      3. When XSK RQs coexist with regular RQs, the admin should make sure no
      unwanted traffic goes into the XSK RQs, either by excluding them from
      RSS or by setting up the XDP program to return XDP_PASS for non-XSK
      traffic (a sketch of such a program follows this list).
      
      4. When using a mixed fleet of mlx5e devices and other netdevs, the same
      configuration can be applied. If the application supports the fallback
      to copy mode on unsupported drivers, it will work too.
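
      As an illustration of point 3, a minimal XDP program of the usual
      shape; the map name and size are placeholders, not part of this
      series. It redirects only on queues that have an XSK socket bound and
      passes everything else to the regular stack:

          #include <linux/bpf.h>
          #include <bpf/bpf_helpers.h>

          struct {
                  __uint(type, BPF_MAP_TYPE_XSKMAP);
                  __uint(max_entries, 64);        /* placeholder queue count */
                  __type(key, __u32);
                  __type(value, __u32);
          } xsks_map SEC(".maps");

          SEC("xdp")
          int xsk_redirect_or_pass(struct xdp_md *ctx)
          {
                  /* Redirect to the XSK socket bound to this RX queue, if
                   * any; XDP_PASS is the fallback action when the map has
                   * no entry for this queue index. */
                  return bpf_redirect_map(&xsks_map, ctx->rx_queue_index,
                                          XDP_PASS);
          }

          char _license[] SEC("license") = "GPL";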
      
      Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
      Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      net/mlx5e: Introduce the mlx5e_flush_rq function · d9ba64de
      Maxim Mikityanskiy authored
      
      
      Add a function to flush an RQ: clean up descriptors, release pages and
      reset the RQ. This procedure is used by the recovery flow, and it will
      also be used in a following commit to free some memory when switching
      a channel to XSK mode.
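
      From that description, the flush is a drain-and-reset sequence. A
      rough sketch of the shape such a function can take; the helper names
      below are assumptions, not a quote of the actual mlx5e code:

          static int mlx5e_flush_rq(struct mlx5e_rq *rq, int curr_state)
          {
                  /* clean up posted RX descriptors and release their pages */
                  mlx5e_free_rx_descs(rq);

                  /* reset the RQ from its current state back to ready so it
                   * can be reused (by recovery, or when toggling XSK) */
                  return mlx5e_rq_to_ready(rq, curr_state);
          }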
      
      Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
      Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
      Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>