  1. Feb 07, 2024
    • selftests: cmsg_ipv6: repeat the exact packet · 4b00d0c5
      Jakub Kicinski authored
      cmsg_ipv6 test requests tcpdump to capture 4 packets,
      and sends until tcpdump quits. Only the first packet
      is "real", however, and the rest are basic UDP packets.
      So if tcpdump doesn't start in time it will miss
      the real packet and only capture the UDP ones.
      
      This makes the test fail on slow machines (no KVM or with
      debug enabled) 100% of the time, while it passes in fast
      environments.
      
      Repeat the "real" / expected packet.
      
      Fixes: 9657ad09 ("selftests: net: test IPV6_TCLASS")
      Fixes: 05ae83d5 ("selftests: net: test IPV6_HOPLIMIT")
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Simon Horman <horms@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: stmmac: protect updates of 64-bit statistics counters · 38cc3c6d
      Petr Tesarik authored
      As explained by a comment in <linux/u64_stats_sync.h>, write side of struct
      u64_stats_sync must ensure mutual exclusion, or one seqcount update could
      be lost on 32-bit platforms, thus blocking readers forever. Such lockups
      have been observed in real world after stmmac_xmit() on one CPU raced with
      stmmac_napi_poll_tx() on another CPU.
      
      To fix the issue without introducing a new lock, split the statistics into
      three parts:
      
      1. fields updated only under the tx queue lock,
      2. fields updated only during NAPI poll,
      3. fields updated only from interrupt context.
      
      Updates to fields in the first two groups are already serialized through
      other locks. It is sufficient to split the existing struct u64_stats_sync
      so that each group has its own.
      
      Note that tx_set_ic_bit is updated from both contexts. Split this counter
      so that each context gets its own, and calculate their sum to get the total
      value in stmmac_get_ethtool_stats().
      
      For the third group, multiple interrupts may be processed by different CPUs
      at the same time, but interrupts on the same CPU will not nest. Move fields
      from this group to a newly created per-cpu struct stmmac_pcpu_stats.
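
      A minimal sketch of the resulting pattern is below; the structure and
      field names are hypothetical stand-ins, not the actual stmmac
      definitions.  Each write context owns its own struct u64_stats_sync,
      so the serialization it already has also covers the seqcount, the
      IRQ-context counters live in a per-CPU structure, and the two halves
      of tx_set_ic_bit are summed on the read side.

        #include <linux/u64_stats_sync.h>

        /* Hypothetical layout -- a sketch of the pattern, not the stmmac code. */
        struct txq_tx_stats {                   /* written only under the TX queue lock */
                u64_stats_t tx_bytes;
                u64_stats_t tx_set_ic_bit;      /* TX-path half of the split counter */
                struct u64_stats_sync syncp;
        };

        struct txq_napi_stats {                 /* written only from NAPI poll */
                u64_stats_t tx_packets_done;
                u64_stats_t tx_set_ic_bit;      /* NAPI-path half of the split counter */
                struct u64_stats_sync syncp;
        };

        struct pcpu_irq_stats {                 /* written only from IRQ context; one
                                                 * instance per CPU (alloc_percpu()), so
                                                 * concurrent IRQs never share a syncp */
                u64_stats_t irq_events;
                struct u64_stats_sync syncp;
        };

        /* Read side: the total tx_set_ic_bit is the sum of both halves. */
        static u64 read_tx_set_ic_bit(struct txq_tx_stats *tx,
                                      struct txq_napi_stats *napi)
        {
                unsigned int start;
                u64 a, b;

                do {
                        start = u64_stats_fetch_begin(&tx->syncp);
                        a = u64_stats_read(&tx->tx_set_ic_bit);
                } while (u64_stats_fetch_retry(&tx->syncp, start));

                do {
                        start = u64_stats_fetch_begin(&napi->syncp);
                        b = u64_stats_read(&napi->tx_set_ic_bit);
                } while (u64_stats_fetch_retry(&napi->syncp, start));

                return a + b;
        }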
      
      Fixes: 133466c3 ("net: stmmac: use per-queue 64 bit statistics where necessary")
      Link: https://lore.kernel.org/netdev/Za173PhviYg-1qIn@torres.zugschlus.de/t/
      Cc: stable@vger.kernel.org
      Signed-off-by: Petr Tesarik <petr@tesarici.cz>
      Reviewed-by: Jisheng Zhang <jszhang@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ppp_async: limit MRU to 64K · cb88cb53
      Eric Dumazet authored
      syzbot triggered a warning [1] in __alloc_pages():
      
      WARN_ON_ONCE_GFP(order > MAX_PAGE_ORDER, gfp)
      
      Willem fixed a similar issue in commit c0a2a1b0 ("ppp: limit MRU to 64K")
      
      Adopt the same sanity check for ppp_async_ioctl(PPPIOCSMRU)
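
      As a rough sketch (simplified, not the exact ioctl hunk), the sanity
      check amounts to rejecting MRU values above 64K before they can be
      used to size the receive skb allocation:

        #include <linux/errno.h>
        #include <linux/limits.h>

        /* Reject oversized MRU requests so the later receive-buffer
         * allocation (MRU plus PPP overhead) can never ask __alloc_pages()
         * for more than MAX_PAGE_ORDER pages. */
        static int ppp_async_check_mru(int val)
        {
                if (val < 0 || val > U16_MAX)
                        return -EINVAL;
                return 0;
        }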
      
      [1]:
      
       WARNING: CPU: 1 PID: 11 at mm/page_alloc.c:4543 __alloc_pages+0x308/0x698 mm/page_alloc.c:4543
      Modules linked in:
      CPU: 1 PID: 11 Comm: kworker/u4:0 Not tainted 6.8.0-rc2-syzkaller-g41bccc98fb79 #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
      Workqueue: events_unbound flush_to_ldisc
      pstate: 204000c5 (nzCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
       pc : __alloc_pages+0x308/0x698 mm/page_alloc.c:4543
       lr : __alloc_pages+0xc8/0x698 mm/page_alloc.c:4537
      sp : ffff800093967580
      x29: ffff800093967660 x28: ffff8000939675a0 x27: dfff800000000000
      x26: ffff70001272ceb4 x25: 0000000000000000 x24: ffff8000939675c0
      x23: 0000000000000000 x22: 0000000000060820 x21: 1ffff0001272ceb8
      x20: ffff8000939675e0 x19: 0000000000000010 x18: ffff800093967120
      x17: ffff800083bded5c x16: ffff80008ac97500 x15: 0000000000000005
      x14: 1ffff0001272cebc x13: 0000000000000000 x12: 0000000000000000
      x11: ffff70001272cec1 x10: 1ffff0001272cec0 x9 : 0000000000000001
      x8 : ffff800091c91000 x7 : 0000000000000000 x6 : 000000000000003f
      x5 : 00000000ffffffff x4 : 0000000000000000 x3 : 0000000000000020
      x2 : 0000000000000008 x1 : 0000000000000000 x0 : ffff8000939675e0
      Call trace:
        __alloc_pages+0x308/0x698 mm/page_alloc.c:4543
        __alloc_pages_node include/linux/gfp.h:238 [inline]
        alloc_pages_node include/linux/gfp.h:261 [inline]
        __kmalloc_large_node+0xbc/0x1fc mm/slub.c:3926
        __do_kmalloc_node mm/slub.c:3969 [inline]
        __kmalloc_node_track_caller+0x418/0x620 mm/slub.c:4001
        kmalloc_reserve+0x17c/0x23c net/core/skbuff.c:590
        __alloc_skb+0x1c8/0x3d8 net/core/skbuff.c:651
        __netdev_alloc_skb+0xb8/0x3e8 net/core/skbuff.c:715
        netdev_alloc_skb include/linux/skbuff.h:3235 [inline]
        dev_alloc_skb include/linux/skbuff.h:3248 [inline]
        ppp_async_input drivers/net/ppp/ppp_async.c:863 [inline]
        ppp_asynctty_receive+0x588/0x186c drivers/net/ppp/ppp_async.c:341
        tty_ldisc_receive_buf+0x12c/0x15c drivers/tty/tty_buffer.c:390
        tty_port_default_receive_buf+0x74/0xac drivers/tty/tty_port.c:37
        receive_buf drivers/tty/tty_buffer.c:444 [inline]
        flush_to_ldisc+0x284/0x6e4 drivers/tty/tty_buffer.c:494
        process_one_work+0x694/0x1204 kernel/workqueue.c:2633
        process_scheduled_works kernel/workqueue.c:2706 [inline]
        worker_thread+0x938/0xef4 kernel/workqueue.c:2787
        kthread+0x288/0x310 kernel/kthread.c:388
        ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:860
      
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Reported-and-tested-by: syzbot+c5da1f087c9e4ec6c933@syzkaller.appspotmail.com
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Willem de Bruijn <willemb@google.com>
      Link: https://lore.kernel.org/r/20240205171004.1059724-1-edumazet@google.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • devlink: avoid potential loop in devlink_rel_nested_in_notify_work() · 58086721
      Jiri Pirko authored
      In case devlink_rel_nested_in_notify_work() cannot take the devlink
      lock mutex, convert the work to a delayed work and, when rescheduling,
      run it a jiffy later to avoid potential looping.
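
      A minimal sketch of the pattern, with hypothetical names rather than
      the devlink internals: on a failed trylock, the delayed work is
      requeued one jiffy in the future instead of immediately, so the
      worker backs off rather than spinning.

        #include <linux/workqueue.h>
        #include <linux/mutex.h>

        struct rel_work_ctx {
                struct delayed_work dwork;
                struct mutex *devlink_lock;     /* the lock we must take */
        };

        static void rel_nested_in_notify_work(struct work_struct *work)
        {
                struct rel_work_ctx *ctx =
                        container_of(to_delayed_work(work), struct rel_work_ctx, dwork);

                if (!mutex_trylock(ctx->devlink_lock)) {
                        /* Requeue one jiffy later rather than immediately,
                         * which could otherwise loop tightly while the
                         * lock is held by someone else. */
                        schedule_delayed_work(&ctx->dwork, 1);
                        return;
                }
                /* ... deliver the nested-in notification ... */
                mutex_unlock(ctx->devlink_lock);
        }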
      
      Suggested-by: Paolo Abeni <pabeni@redhat.com>
      Fixes: c137743b ("devlink: introduce object and nested devlink relationship infra")
      Signed-off-by: Jiri Pirko <jiri@nvidia.com>
      Link: https://lore.kernel.org/r/20240205171114.338679-1-jiri@resnulli.us
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • af_unix: Call kfree_skb() for dead unix_(sk)->oob_skb in GC. · 1279f9d9
      Kuniyuki Iwashima authored
      syzbot reported a warning [0] in __unix_gc() with a repro, which
      creates a socketpair and sends one socket's fd to itself using the
      peer.
      
        socketpair(AF_UNIX, SOCK_STREAM, 0, [3, 4]) = 0
        sendmsg(4, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\360", iov_len=1}],
                msg_iovlen=1, msg_control=[{cmsg_len=20, cmsg_level=SOL_SOCKET,
                                            cmsg_type=SCM_RIGHTS, cmsg_data=[3]}],
                msg_controllen=24, msg_flags=0}, MSG_OOB|MSG_PROBE|MSG_DONTWAIT|MSG_ZEROCOPY) = 1
      
      This forms a self-cyclic reference that GC should finally untangle
      but does not due to lack of MSG_OOB handling, resulting in a memory
      leak.
      
      Recently, commit 11498715 ("af_unix: Remove io_uring code for
      GC.") removed io_uring's dead code in GC and revealed the problem.
      
      The code was executed at the final stage of GC and unconditionally
      moved all GC candidates from gc_candidates to gc_inflight_list.
      That papered over the reported problem by always making the following
      WARN_ON_ONCE(!list_empty(&gc_candidates)) false.
      
      The problem has been there since commit 2aab4b96 ("af_unix: fix
      struct pid leaks in OOB support") added full scm support for MSG_OOB
      while fixing another bug.
      
      To fix this problem, we must call kfree_skb() for unix_sk(sk)->oob_skb
      if the socket still exists in gc_candidates after purging collected skb.
      
      Then, we need to set oob_skb to NULL before calling kfree_skb() because
      it calls the last fput() and triggers unix_release_sock(), where we would
      otherwise call kfree_skb(u->oob_skb) a second time if it were still set.
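
      A simplified sketch of that ordering (not the exact __unix_gc() hunk):

        #include <net/af_unix.h>

        /* Clear the pointer before freeing, so a re-entry into
         * unix_release_sock() -- triggered by the last fput() inside
         * kfree_skb() -- cannot free the same skb a second time. */
        static void gc_free_dead_oob_skb(struct unix_sock *u)
        {
                struct sk_buff *skb = u->oob_skb;

                if (!skb)
                        return;
                u->oob_skb = NULL;
                kfree_skb(skb);
        }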
      
      Note that the leaked socket remains linked to a global list, so
      kmemleak cannot detect it either.  We need to check /proc/net/protocols
      to notice the unfreed socket.
      
      [0]:
      WARNING: CPU: 0 PID: 2863 at net/unix/garbage.c:345 __unix_gc+0xc74/0xe80 net/unix/garbage.c:345
      Modules linked in:
      CPU: 0 PID: 2863 Comm: kworker/u4:11 Not tainted 6.8.0-rc1-syzkaller-00583-g1701940b1a02 #0
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
      Workqueue: events_unbound __unix_gc
      RIP: 0010:__unix_gc+0xc74/0xe80 net/unix/garbage.c:345
      Code: 8b 5c 24 50 e9 86 f8 ff ff e8 f8 e4 22 f8 31 d2 48 c7 c6 30 6a 69 89 4c 89 ef e8 97 ef ff ff e9 80 f9 ff ff e8 dd e4 22 f8 90 <0f> 0b 90 e9 7b fd ff ff 48 89 df e8 5c e7 7c f8 e9 d3 f8 ff ff e8
      RSP: 0018:ffffc9000b03fba0 EFLAGS: 00010293
      RAX: 0000000000000000 RBX: ffffc9000b03fc10 RCX: ffffffff816c493e
      RDX: ffff88802c02d940 RSI: ffffffff896982f3 RDI: ffffc9000b03fb30
      RBP: ffffc9000b03fce0 R08: 0000000000000001 R09: fffff52001607f66
      R10: 0000000000000003 R11: 0000000000000002 R12: dffffc0000000000
      R13: ffffc9000b03fc10 R14: ffffc9000b03fc10 R15: 0000000000000001
      FS:  0000000000000000(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00005559c8677a60 CR3: 000000000d57a000 CR4: 00000000003506f0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Call Trace:
       <TASK>
       process_one_work+0x889/0x15e0 kernel/workqueue.c:2633
       process_scheduled_works kernel/workqueue.c:2706 [inline]
       worker_thread+0x8b9/0x12a0 kernel/workqueue.c:2787
       kthread+0x2c6/0x3b0 kernel/kthread.c:388
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1b/0x30 arch/x86/entry/entry_64.S:242
       </TASK>
      
      Reported-by: syzbot+fa3ef895554bdbfd1183@syzkaller.appspotmail.com
      Closes: https://syzkaller.appspot.com/bug?extid=fa3ef895554bdbfd1183
      Fixes: 2aab4b96 ("af_unix: fix struct pid leaks in OOB support")
      Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Link: https://lore.kernel.org/r/20240203183149.63573-1-kuniyu@amazon.com
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  2. Feb 05, 2024
    • selftests: net: let big_tcp test cope with slow env · a19747c3
      Paolo Abeni authored
      In very slow environments, most big TCP cases including
      segmentation and reassembly of big TCP packets have a good
      chance to fail: by default the TCP client uses a write size
      well below 64K. If the host is slow enough, autocorking is
      unable to build real big TCP packets.
      
      Address the issue using much larger write operations.
      
      Note that it is hard to observe the issue without an extremely
      slow and/or overloaded environment; reduce the TCP transfer
      time to allow for much easier/faster reproducibility.
      
      Fixes: 6bb382bc ("selftests: add a selftest for big tcp")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'rxrpc-fixes' · 645eb543
      David S. Miller authored
      David Howells says:
      
      ====================
      rxrpc: Miscellaneous fixes
      
      Here are some miscellaneous fixes for AF_RXRPC:
      
       (1) The zero serial number has a special meaning in an ACK packet serial
           reference, so skip it when assigning serial numbers to transmitted
           packets.
      
       (2) Don't set the reference serial number in a delayed ACK as the ACK
           cannot be used for RTT calculation.
      
       (3) Don't emit a DUP ACK response to a PING RESPONSE ACK coming back to a
           call that completed in the meantime.
      
       (4) Fix the counting of acks and nacks in an ACK packet to better drive
           congestion management.  We want to know if there have been new
           acks/nacks since the last ACK packet, not that there are still
           acks/nacks.  This is more complicated as we have to save the old SACK
           table and compare it.
      ====================
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rxrpc: Fix counting of new acks and nacks · 41b7fa15
      David Howells authored
      Fix the counting of new acks and nacks when parsing a packet - something
      that is used in congestion control.
      
      As the code stands, it merely notes if there are any nacks whereas what we
      really should do is compare the previous SACK table to the new one,
      assuming we get two successive ACK packets with nacks in them.  However, we
      really don't want to do that if we can avoid it as the tables might not
      correspond directly as one may be shifted from the other - something that
      will only get harder to deal with once extended ACK tables come into full
      use (with a capacity of up to 8192).
      
      Instead, count the number of nacks shifted out of the old SACK, the number
      of nacks retained in the portion still active and the number of new acks
      and nacks in the new table, then calculate what we need.
      
      Note this ends up a bit of an estimate as the Rx protocol allows acks to be
      withdrawn by the receiver and packets requested to be retransmitted.
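
      The following rough sketch shows that bookkeeping with hypothetical
      helpers and a flat 0/1 SACK representation; the real code also has to
      cope with the ring layout and wrap-around of the tables.

        #include <linux/string.h>
        #include <linux/types.h>

        /* Summarise how a new SACK table differs from the previous one after
         * the window advanced by 'shift' slots (1 = acked, 0 = nacked). */
        struct sack_summary {
                unsigned int nacks_shifted_out; /* nacks that left the window */
                unsigned int nacks_retained;    /* still nacked in the overlap */
                unsigned int new_acks;          /* newly acked slots */
                unsigned int new_nacks;         /* newly nacked slots */
        };

        static void sack_summarise(const u8 *old, unsigned int old_len,
                                   const u8 *new, unsigned int new_len,
                                   unsigned int shift, struct sack_summary *s)
        {
                unsigned int i;

                memset(s, 0, sizeof(*s));

                for (i = 0; i < shift && i < old_len; i++)
                        if (!old[i])
                                s->nacks_shifted_out++;

                for (i = 0; i < new_len; i++) {
                        unsigned int j = i + shift;     /* matching slot in the old table */

                        if (j >= old_len)               /* slot not previously reported */
                                new[i] ? s->new_acks++ : s->new_nacks++;
                        else if (new[i] && !old[j])
                                s->new_acks++;          /* nack -> ack */
                        else if (!new[i] && old[j])
                                s->new_nacks++;         /* ack withdrawn -> nack */
                        else if (!new[i])
                                s->nacks_retained++;    /* still nacked */
                }
        }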
      
      Fixes: d57a3a15 ("rxrpc: Save last ACK's SACK table rather than marking txbufs")
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: Marc Dionne <marc.dionne@auristor.com>
      cc: "David S. Miller" <davem@davemloft.net>
      cc: Eric Dumazet <edumazet@google.com>
      cc: Jakub Kicinski <kuba@kernel.org>
      cc: Paolo Abeni <pabeni@redhat.com>
      cc: linux-afs@lists.infradead.org
      cc: netdev@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rxrpc: Fix response to PING RESPONSE ACKs to a dead call · 6f769f22
      David Howells authored
      Stop rxrpc from sending a DUP ACK in response to a PING RESPONSE ACK on a
      dead call.  We may have initiated the ping but the call may have beaten the
      response to completion.
      
      Fixes: 18bfeba5 ("rxrpc: Perform terminal call ACK/ABORT retransmission from conn processor")
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: Marc Dionne <marc.dionne@auristor.com>
      cc: "David S. Miller" <davem@davemloft.net>
      cc: Eric Dumazet <edumazet@google.com>
      cc: Jakub Kicinski <kuba@kernel.org>
      cc: Paolo Abeni <pabeni@redhat.com>
      cc: linux-afs@lists.infradead.org
      cc: netdev@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rxrpc: Fix delayed ACKs to not set the reference serial number · e7870cf1
      David Howells authored
      Fix the construction of delayed ACKs to not set the reference serial number
      as they can't be used as an RTT reference.
      
      Fixes: 17926a79 ("[AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both")
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: Marc Dionne <marc.dionne@auristor.com>
      cc: "David S. Miller" <davem@davemloft.net>
      cc: Eric Dumazet <edumazet@google.com>
      cc: Jakub Kicinski <kuba@kernel.org>
      cc: Paolo Abeni <pabeni@redhat.com>
      cc: linux-afs@lists.infradead.org
      cc: netdev@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rxrpc: Fix generation of serial numbers to skip zero · f3104141
      David Howells authored
      In the Rx protocol, every packet generated is marked with a per-connection
      monotonically increasing serial number.  This number can be referenced in
      an ACK packet generated in response to an incoming packet - thereby
      allowing the sender to use this for RTT determination, amongst other
      things.
      
      However, if the reference field in the ACK is zero, it doesn't refer to any
      incoming packet (it could be a ping to find out if a packet got lost, for
      example) - so we shouldn't generate zero serial numbers.
      
      Fix the generation of serial numbers to retry if it comes up with a zero.
      
      Furthermore, since the serial numbers are only ever allocated within the
      I/O thread this connection is bound to, there's no need for atomics so
      remove that too.
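
      A tiny sketch of the allocation described above (hypothetical helper;
      the real code works on the connection's own serial field directly):

        #include <linux/types.h>

        /* Allocate the next per-connection serial number, never returning 0,
         * since 0 in an ACK's reference field means "nothing referenced".
         * Called only from the connection's I/O thread, so no atomics. */
        static u32 conn_next_tx_serial(u32 *tx_serial)
        {
                u32 serial = ++(*tx_serial);

                if (!serial)            /* skip 0 on wrap-around */
                        serial = ++(*tx_serial);
                return serial;
        }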
      
      Fixes: 17926a79 ("[AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both")
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: Marc Dionne <marc.dionne@auristor.com>
      cc: "David S. Miller" <davem@davemloft.net>
      cc: Eric Dumazet <edumazet@google.com>
      cc: Jakub Kicinski <kuba@kernel.org>
      cc: Paolo Abeni <pabeni@redhat.com>
      cc: linux-afs@lists.infradead.org
      cc: netdev@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'nfp-fixes' · fdeba0b5
      David S. Miller authored
      Louis Peens says:
      
      ====================
      nfp: a few simple driver fixes
      
      This is combining a few unrelated one-liner fixes which have been
      floating around internally into a single series. I'm not sure what is
      the least amount of overhead for reviewers, this or a separate
      submission per-patch? I guess it probably depends on personal
      preference, but please let me know if there is a strong preference to
      rather split these in the future.
      
      Summary:
      
      Patch1: Fixes an old issue which was hidden because 0 just so happens to
              be the correct value.
      Patch2: Fixes a corner case for flower offloading with bond ports
      Patch3: Re-enables the 'NETDEV_XDP_ACT_REDIRECT', which was accidentally
              disabled after a previous refactor.
      ====================
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • nfp: enable NETDEV_XDP_ACT_REDIRECT feature flag · 0f4d6f01
      James Hershaw authored
      Enable the previously excluded xdp feature flag for NFD3 devices. This
      feature flag is required in order to bind nfp interfaces to an xdp
      socket and the nfp driver does in fact support the feature.
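
      A minimal sketch of what enabling the flag amounts to, assuming (as an
      illustration) the generic xdp_features mechanism; the exact NFD3 call
      site in the driver is not shown here.

        #include <linux/netdevice.h>
        #include <net/xdp.h>

        /* Advertise XDP_REDIRECT support alongside basic XDP so that AF_XDP
         * sockets can be bound to the interface. */
        static void nfp_advertise_xdp_features(struct net_device *netdev)
        {
                xdp_set_features_flag(netdev, NETDEV_XDP_ACT_BASIC |
                                              NETDEV_XDP_ACT_REDIRECT);
        }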
      
      Fixes: 66c0e13a ("drivers: net: turn on XDP features")
      Cc: stable@vger.kernel.org # 6.3+
      Signed-off-by: James Hershaw <james.hershaw@corigine.com>
      Signed-off-by: Louis Peens <louis.peens@corigine.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • nfp: flower: prevent re-adding mac index for bonded port · 1a1c1330
      Daniel de Villiers authored
      When physical ports that are slaved to a Linux bond with a tunnel
      endpoint IP address on the bond device are reset (either through link
      failure or by being manually toggled down and up again), not all
      tunnel packets arriving on the bond port are decapped as expected.
      
      The bond dev assigns the same MAC address to itself and each of its
      slaves. When toggling a slave device, the same MAC address is therefore
      offloaded to the NFP multiple times with different indexes.
      
      The issue only occurs when re-adding the shared mac. The
      nfp_tunnel_add_shared_mac() function has a conditional check early on
      that checks if a mac entry already exists and if that mac entry is
      global: (entry && nfp_tunnel_is_mac_idx_global(entry->index)). In the
      case of a bonded device (for example, br-ex), the mac index is obtained,
      and no new index is assigned.
      
      We therefore modify the conditional in nfp_tunnel_add_shared_mac() to
      check if the port belongs to the LAG along with the existing checks to
      prevent a new global mac index from being re-assigned to the slave port.
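
      A hedged sketch of the shape of the modified condition; the entry
      type, the index helper, and the LAG test below are stand-ins, not the
      exact nfp flower identifiers.

        #include <linux/bits.h>
        #include <linux/types.h>

        /* All names below are illustrative stand-ins. */
        #define TUN_MAC_IDX_GLOBAL_BIT  BIT(30)         /* hypothetical encoding */

        struct mac_offload_entry {
                u32 index;
        };

        static bool mac_idx_is_global(u32 index)
        {
                return index & TUN_MAC_IDX_GLOBAL_BIT;
        }

        /* The fix adds the !port_is_lag_member term: on re-add, a bond/LAG
         * slave must not short-circuit to (and silently reuse) the shared
         * global mac index that was registered for the bond. */
        static bool reuse_existing_global_idx(const struct mac_offload_entry *entry,
                                              bool port_is_lag_member)
        {
                return entry && mac_idx_is_global(entry->index) && !port_is_lag_member;
        }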
      
      Fixes: 20cce886 ("nfp: flower: enable MAC address sharing for offloadable devs")
      CC: stable@vger.kernel.org # 5.1+
      Signed-off-by: Daniel de Villiers <daniel.devilliers@corigine.com>
      Signed-off-by: Louis Peens <louis.peens@corigine.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>