  1. Jan 03, 2020
    • Po Liu's avatar
      enetc: add support time specific departure base on the qos etf · 0d08c9ec
      Po Liu authored
      
      
ENETC implements a time specific departure capability, which enables
the user to specify when a frame can be transmitted. When this
capability is enabled, the device delays the transmission of the
frame so that it is transmitted at the precisely specified time. The
departure time can be up to 0.5 seconds in the future. If the
departure time in the transmit BD has not yet been reached, based
on the current time, the packet will not be transmitted.
      
This capability is driven by the ETF qdisc offload; the user enables it
with tc. Here are example commands:
      
      tc qdisc add dev eth0 root handle 1: mqprio \
      	   num_tc 8 map 0 1 2 3 4 5 6 7 hw 1
      tc qdisc replace dev eth0 parent 1:8 etf \
      	   clockid CLOCK_TAI delta 30000  offload
      
These examples first set up the queue mapping and then configure
queue 7 to dequeue frames 30 us ahead of their departure time.
      
Test frames must then be sent from a socket with the SO_TXTIME option
set (a minimal sketch follows the hardware limitations below).
      
      There are also some limitations for this feature in hardware:
      - Transmit checksum offloads and time specific departure operation
      are mutually exclusive.
      - Time Aware Shaper feature (Qbv) offload and time specific departure
      operation are mutually exclusive.
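
Below is a minimal userspace sketch (not part of the original commit) of how
a test frame could be sent with SO_TXTIME; the destination port, payload and
the 100 us offset are placeholder assumptions, and the clockid must match the
etf qdisc configured above:

#include <stdint.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <linux/net_tstamp.h>	/* struct sock_txtime */

#ifndef SO_TXTIME
#define SO_TXTIME	61	/* fallback for older userspace headers */
#define SCM_TXTIME	SO_TXTIME
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	struct sock_txtime cfg = { .clockid = CLOCK_TAI, .flags = 0 };
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(7788),
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};
	struct timespec now;
	uint64_t txtime;
	char payload[64] = "txtime test";
	struct iovec iov = { .iov_base = payload, .iov_len = sizeof(payload) };
	char cbuf[CMSG_SPACE(sizeof(txtime))];
	struct msghdr msg = {
		.msg_name = &dst, .msg_namelen = sizeof(dst),
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cm;

	/* clockid here must match the etf qdisc clockid (CLOCK_TAI) */
	setsockopt(fd, SOL_SOCKET, SO_TXTIME, &cfg, sizeof(cfg));

	/* departure time: now + 100 us, well inside the 0.5 s hardware window */
	clock_gettime(CLOCK_TAI, &now);
	txtime = (uint64_t)now.tv_sec * 1000000000ULL + now.tv_nsec + 100000;

	cm = CMSG_FIRSTHDR(&msg);
	cm->cmsg_level = SOL_SOCKET;
	cm->cmsg_type = SCM_TXTIME;
	cm->cmsg_len = CMSG_LEN(sizeof(txtime));
	memcpy(CMSG_DATA(cm), &txtime, sizeof(txtime));

	sendmsg(fd, &msg, 0);	/* error handling omitted for brevity */
	close(fd);
	return 0;
}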
      
Signed-off-by: Po Liu <Po.Liu@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      0d08c9ec
    • Julia Lawall's avatar
      fsl/fman: use resource_size · a02158d6
      Julia Lawall authored
      Use resource_size rather than a verbose computation on
      the end and start fields.
      
The semantic patch that makes these changes is as follows
(http://coccinelle.lip6.fr/):
      
      <smpl>
      @@ struct resource ptr; @@
      - (ptr.end + 1 - ptr.start)
      + resource_size(&ptr)
      
      @@ struct resource *ptr; @@
      - (ptr->end + 1 - ptr->start)
      + resource_size(ptr)
      </smpl>
      
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
      a02158d6
    • Julia Lawall's avatar
      ptp: ptp_clockmatrix: constify copied structure · 6485f9ae
      Julia Lawall authored
      
      
      The idtcm_caps structure is only copied into another structure,
      so make it const.
      
      The opportunity for this change was found using Coccinelle.
      
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      6485f9ae
    • Ben Hutchings's avatar
      sfc: Remove unnecessary dependencies on I2C · edf45791
      Ben Hutchings authored
      
      
      Only the SFC4000 code, now moved to sfc-falcon, needed I2C.
      
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      edf45791
    • David S. Miller's avatar
      Merge branch 'tcp-Add-support-for-L3-domains-to-MD5-auth' · 7a8d8a46
      David S. Miller authored
      David Ahern says:
      
      ====================
      tcp: Add support for L3 domains to MD5 auth
      
With VRF, the scope of network addresses is limited to the L3 domain
with which the device is associated. MD5 keys are based on addresses, so
proper VRF support requires an L3 domain to be considered for the lookups.
      
      Leverage the new TCP_MD5SIG_EXT option to add support for a device index
      to MD5 keys. The __tcpm_pad entry in tcp_md5sig is renamed to tcpm_ifindex
      and a new flag, TCP_MD5SIG_FLAG_IFINDEX, in tcpm_flags determines if the
      entry is examined. This follows what was done for MD5 and prefixes with
      commits
         8917a777 ("tcp: md5: add TCP_MD5SIG_EXT socket option to set a key address prefix")
         6797318e
      
       ("tcp: md5: add an address prefix for key lookup")
      
Handling both a device AND an L3 domain is much more complicated for the
response paths. This set focuses only on L3 support, requiring the
device index to be an l3mdev (i.e., a VRF). Support for slave devices can
be added later if desired, much like the progression of support for
sockets bound to a VRF and then bound to a device in a VRF. Kernel
code is set up to explicitly call out that the current lookup is for an L3
index, while the uapi just references a device index, allowing its
meaning to include other devices in the future.
      ====================
      
Signed-off-by: David S. Miller <davem@davemloft.net>
      7a8d8a46
    • David Ahern's avatar
      fcnal-test: Add TCP MD5 tests for VRF · 5cad8bce
      David Ahern authored
      
      
      Add tests for new TCP MD5 API for L3 domains (VRF).
      
A new namespace is added to create a duplicate configuration between
the VRF and the default VRF to verify that overlapping config is handled
properly.
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      5cad8bce
    • David Ahern's avatar
      fcnal-test: Add TCP MD5 tests · f0bee1eb
      David Ahern authored
      
      
      Add tests for existing TCP MD5 APIs - both single address
      config and the new extended API for prefixes.
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      f0bee1eb
    • David Ahern's avatar
      nettest: Add support for TCP_MD5 extensions · eb09cf03
      David Ahern authored
      
      
      Update nettest to implement TCP_MD5SIG_EXT for a prefix and a device.
      
Add a new option, -m, to specify a prefix and length to use with MD5
auth. The device option comes from the existing -d option. If either
is set and MD5 auth is requested, TCP_MD5SIG_EXT is used instead of
TCP_MD5SIG.
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      eb09cf03
    • David Ahern's avatar
      nettest: Return 1 on MD5 failure for server mode · 1bfb45d8
      David Ahern authored
      
      
On failure to set the MD5 password, do_server should return 1 so that the
program exits with 1 rather than 255. This is used for negative testing
when adding MD5 with the device option.
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      1bfb45d8
    • David Ahern's avatar
      net: Add device index to tcp_md5sig · 6b102db5
      David Ahern authored
      
      
      Add support for userspace to specify a device index to limit the scope
      of an entry via the TCP_MD5SIG_EXT setsockopt. The existing __tcpm_pad
      is renamed to tcpm_ifindex and the new field is only checked if the new
      TCP_MD5SIG_FLAG_IFINDEX is set in tcpm_flags. For now, the device index
must point to an L3 master device (e.g., a VRF). The API and error
handling are set up to allow the constraint to be relaxed in the future
to any device index.
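
As a rough illustration (not taken from the commit), a userspace sketch of
scoping an MD5 key to a VRF with the new field; the VRF name "red", the peer
address and the key text are assumed values:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <linux/tcp.h>	/* struct tcp_md5sig, TCP_MD5SIG_EXT */

#ifndef TCP_MD5SIG_FLAG_IFINDEX
#define TCP_MD5SIG_FLAG_IFINDEX	0x2	/* flag added by this patch */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	const char *key = "md5-test-key";
	struct tcp_md5sig md5;
	struct sockaddr_in *peer = (struct sockaddr_in *)&md5.tcpm_addr;

	memset(&md5, 0, sizeof(md5));
	peer->sin_family = AF_INET;
	inet_pton(AF_INET, "10.0.0.2", &peer->sin_addr);

	md5.tcpm_keylen = strlen(key);
	memcpy(md5.tcpm_key, key, md5.tcpm_keylen);

	/* limit the key to the L3 domain of VRF "red" */
	md5.tcpm_ifindex = if_nametoindex("red");
	md5.tcpm_flags = TCP_MD5SIG_FLAG_IFINDEX;

	/* typically done on a listening socket before connections arrive;
	 * error handling omitted for brevity */
	setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG_EXT, &md5, sizeof(md5));
	close(fd);
	return 0;
}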
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      6b102db5
    • David Ahern's avatar
      tcp: Add l3index to tcp_md5sig_key and md5 functions · dea53bb8
      David Ahern authored
      
      
      Add l3index to tcp_md5sig_key to represent the L3 domain of a key, and
      add l3index to tcp_md5_do_add and tcp_md5_do_del to fill in the key.
      
      With the key now based on an l3index, add the new parameter to the
      lookup functions and consider the l3index when looking for a match.
      
      The l3index comes from the skb when processing ingress packets leveraging
      the helpers created for socket lookups, tcp_v4_sdif and inet_iif (and the
      v6 variants). When the sdif index is set it means the packet ingressed a
      device that is part of an L3 domain and inet_iif points to the VRF device.
      For egress, the L3 domain is determined from the socket binding and
      sk_bound_dev_if.
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      dea53bb8
    • David Ahern's avatar
      ipv4/tcp: Pass dif and sdif to tcp_v4_inbound_md5_hash · 534322ca
      David Ahern authored
      
      
      The original ingress device index is saved to the cb space of the skb
      and the cb is moved during tcp processing. Since tcp_v4_inbound_md5_hash
      can be called before and after the cb move, pass dif and sdif to it so
      the caller can save both prior to the cb move. Both are used by a later
      patch.
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      534322ca
    • David Ahern's avatar
      ipv6/tcp: Pass dif and sdif to tcp_v6_inbound_md5_hash · d14c77e0
      David Ahern authored
      
      
      The original ingress device index is saved to the cb space of the skb
      and the cb is moved during tcp processing. Since tcp_v6_inbound_md5_hash
      can be called before and after the cb move, pass dif and sdif to it so
      the caller can save both prior to the cb move. Both are used by a later
      patch.
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      d14c77e0
    • David Ahern's avatar
      ipv4/tcp: Use local variable for tcp_md5_addr · cea97609
      David Ahern authored
      
      
      Extract the typecast to (union tcp_md5_addr *) to a local variable
      rather than the current long, inline declaration with function calls.
      
      No functional change intended.
      
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      cea97609
    • Niu Xilei's avatar
      vxlan: Fix alignment and code style of vxlan.c · 98c81476
      Niu Xilei authored
      
      
Fixed coding function and style issues.
      
Signed-off-by: Niu Xilei <niu_xilei@163.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      98c81476
    • David S. Miller's avatar
      Merge branch 'mlxsw-Allow-setting-default-port-priority' · f5e5d272
      David S. Miller authored
      
      
      Ido Schimmel says:
      
      ====================
      mlxsw: Allow setting default port priority
      
      Petr says:
      
      When LLDP APP TLV selector 1 (EtherType) is used with PID of 0, the
      corresponding entry specifies "default application priority [...] when
      application priority is not otherwise specified."
      
      mlxsw currently supports this type of APP entry, but uses it only as a
      fallback for unspecified DSCP rules. However non-IP traffic is prioritized
      according to port-default priority, not according to the DSCP-to-prio
      tables, and thus it's currently not possible to prioritize such traffic
      correctly.
      
      This patchset extends the use of the abovementioned APP entry to also set
      default port priority (in patches #1 and #2) and then (in patch #3) adds a
      selftest.
      ====================
      
Signed-off-by: David S. Miller <davem@davemloft.net>
      f5e5d272
    • Petr Machata's avatar
      selftests: mlxsw: Add a self-test for port-default priority · c5341bcc
      Petr Machata authored
      
      
      Send non-IP traffic to a port and observe that it gets prioritized
      according to the lldptool app=$prio,1,0 rules.
      
Signed-off-by: Petr Machata <petrm@mellanox.com>
Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      c5341bcc
    • Petr Machata's avatar
      mlxsw: spectrum_dcb: Allow setting default port priority · 379a00dd
      Petr Machata authored
      
      
      When APP TLV selector 1 (EtherType) is used with PID of 0, the
      corresponding entry specifies "default application priority [...] when
      application priority is not otherwise specified."
      
      mlxsw currently supports this type of APP entry, but uses it only as a
      fallback for unspecified DSCP rules. However non-IP traffic is prioritized
      according to port-default priority, not according to the DSCP-to-prio
      tables, and thus it's currently not possible to prioritize such traffic
      correctly.
      
      Extend the use of the abovementioned APP entry to also set default port
      priority.
      
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      379a00dd
    • Petr Machata's avatar
      mlxsw: reg: Add QoS Port DSCP to Priority Mapping Register · d8446884
      Petr Machata authored
      
      
      Add QPDP. This register controls the port default Switch Priority and
      Color. The default Switch Priority and Color are used for frames where the
      trust state uses default values. Currently there are two cases where this
      applies: a port is in trust-PCP state, but a packet arrives untagged; and a
      port is in trust-DSCP state, but a non-IP packet arrives.
      
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      d8446884
    • David S. Miller's avatar
      Merge branch 'page_pool-NUMA-node-handling-fixes' · c9a2069b
      David S. Miller authored
      
      
      Jesper Dangaard Brouer says:
      
      ====================
      page_pool: NUMA node handling fixes
      
The recently added NUMA changes to page_pool (merged for v5.5) both
contain a bug in handling the NUMA_NO_NODE condition and added code to
the fast-path.

This patchset fixes the bug and moves code out of the fast-path. The first
patch contains a fix that should be considered for 5.5. The second
patch reduces code size and overhead in case CONFIG_NUMA is disabled.
      
Currently the NUMA_NO_NODE setting bug only affects the 'ti_cpsw' driver
(drivers/net/ethernet/ti/), but after this patchset, we plan to move
other drivers (netsec and mvneta) to the NUMA_NO_NODE setting.
      ====================
      
Signed-off-by: David S. Miller <davem@davemloft.net>
      c9a2069b
    • Jesper Dangaard Brouer's avatar
      page_pool: help compiler remove code in case CONFIG_NUMA=n · f13fc107
      Jesper Dangaard Brouer authored
      
      
When the kernel is compiled without NUMA support, the page_pool NUMA
config setting (pool->p.nid) doesn't make any practical sense, but the
compiler cannot see that it can remove the code paths.

This patch avoids reading the pool->p.nid setting in case of !CONFIG_NUMA,
in the allocation and NUMA check code, which helps the compiler see the
optimisation potential. It leaves the update code intact to keep the API
the same.
      
       $ ./scripts/bloat-o-meter net/core/page_pool.o-numa-enabled \
                                 net/core/page_pool.o-numa-disabled
       add/remove: 0/0 grow/shrink: 0/3 up/down: 0/-113 (-113)
       Function                                     old     new   delta
       page_pool_create                             401     398      -3
       __page_pool_alloc_pages_slow                 439     426     -13
       page_pool_refill_alloc_cache                 425     328     -97
       Total: Before=3611, After=3498, chg -3.13%
      
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      f13fc107
    • Jesper Dangaard Brouer's avatar
      page_pool: handle page recycle for NUMA_NO_NODE condition · 44768dec
      Jesper Dangaard Brouer authored
      The check in pool_page_reusable (page_to_nid(page) == pool->p.nid) is
      not valid if page_pool was configured with pool->p.nid = NUMA_NO_NODE.
      
The goal of the NUMA changes in commit d5394610 ("page_pool: Don't
recycle non-reusable pages") was to have RX pages that belong to the
same NUMA node as the CPU processing the RX packet during softirq/NAPI,
as illustrated by the performance measurements.
      
This patch moves the NAPI checks out of the fast-path, and at the same time
solves the NUMA_NO_NODE issue.
      
First, realize that alloc_pages_node() with pool->p.nid = NUMA_NO_NODE
will look up the current CPU nid (NUMA id) via numa_mem_id(), which is used
as the preferred nid.  It is only in rare situations, e.g. when a NUMA
zone runs dry, that a page doesn't get allocated from the preferred nid.
The page_pool API allows drivers to control the nid themselves via
controlling pool->p.nid.
      
This patch moves the NAPI check to when the alloc cache is refilled, via
dequeuing/consuming pages from the ptr_ring. Thus, we can allow placing
pages from a remote NUMA node into the ptr_ring, as the dequeue/consume step
will check the NUMA node. All current drivers using page_pool will
alloc/refill the RX ring from the same CPU running the softirq/NAPI process.
      
Drivers that control the nid explicitly also use page_pool_update_nid
when changing the nid at runtime.  To speed up the transition to the new nid,
the alloc cache is now flushed on nid changes.  This forces pages to come
from the ptr_ring, which does the appropriate nid check.
      
For the NUMA_NO_NODE case, when a NIC IRQ is moved to another NUMA
node, we accept that transitioning the alloc cache doesn't happen
immediately. The preferred nid changes at runtime by consulting
numa_mem_id(), based on the CPU processing RX packets.
      
Notice that, to avoid stressing the page buddy allocator and doing too
much work under softirq with preempt disabled, the NUMA check at
ptr_ring dequeue will break the refill cycle when detecting a NUMA
mismatch. This will cause a slower transition, but it is done on purpose.
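
As a plain-C toy illustration of the refill-time decision described above
(integers stand in for pages and the ptr_ring; the kernel code does this with
page_to_nid(), numa_mem_id() and the page_pool alloc cache, so names and
structure here are simplified assumptions, not the upstream code):

#include <stdio.h>

#define NUMA_NO_NODE	(-1)

/* NUMA_NO_NODE means "prefer the node of the CPU running softirq/NAPI",
 * which the kernel obtains via numa_mem_id(). */
static int preferred_nid(int pool_nid, int cpu_nid)
{
	return pool_nid == NUMA_NO_NODE ? cpu_nid : pool_nid;
}

/* Refill sketch: consume queued pages (here just their node IDs) and keep
 * them while the node matches; stop at the first mismatch so no extra
 * work is done under softirq with preemption disabled. */
static int refill_alloc_cache(const int *ring_nids, int ring_len,
			      int pool_nid, int cpu_nid,
			      int *cache, int cache_max)
{
	int want = preferred_nid(pool_nid, cpu_nid);
	int count = 0;

	for (int i = 0; i < ring_len && count < cache_max; i++) {
		if (ring_nids[i] != want)
			break;	/* mismatch: real code returns the page and stops */
		cache[count++] = ring_nids[i];
	}
	return count;
}

int main(void)
{
	int ring[] = { 0, 0, 1, 0 };	/* node of each page waiting in the ring */
	int cache[8];

	/* pool configured with NUMA_NO_NODE, softirq currently on node 0 */
	int n = refill_alloc_cache(ring, 4, NUMA_NO_NODE, 0, cache, 8);

	printf("kept %d page(s) before hitting the node-1 page\n", n);
	return 0;
}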
      
Fixes: d5394610 ("page_pool: Don't recycle non-reusable pages")
Reported-by: Li RongQing <lirongqing@baidu.com>
Reported-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      44768dec
  2. Jan 01, 2020
  3. Dec 31, 2019
    • Taehee Yoo's avatar
      hsr: fix slab-out-of-bounds Read in hsr_debugfs_rename() · 04b69426
      Taehee Yoo authored
      
      
hsr slave interfaces don't have a debugfs directory, so
hsr_debugfs_rename() shouldn't be called when an hsr slave interface's name
is changed.
      
      Test commands:
          ip link add dummy0 type dummy
          ip link add dummy1 type dummy
          ip link add hsr0 type hsr slave1 dummy0 slave2 dummy1
          ip link set dummy0 name ap
      
      Splat looks like:
      [21071.899367][T22666] ap: renamed from dummy0
      [21071.914005][T22666] ==================================================================
      [21071.919008][T22666] BUG: KASAN: slab-out-of-bounds in hsr_debugfs_rename+0xaa/0xb0 [hsr]
      [21071.923640][T22666] Read of size 8 at addr ffff88805febcd98 by task ip/22666
      [21071.926941][T22666]
      [21071.927750][T22666] CPU: 0 PID: 22666 Comm: ip Not tainted 5.5.0-rc2+ #240
      [21071.929919][T22666] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
      [21071.935094][T22666] Call Trace:
      [21071.935867][T22666]  dump_stack+0x96/0xdb
      [21071.936687][T22666]  ? hsr_debugfs_rename+0xaa/0xb0 [hsr]
      [21071.937774][T22666]  print_address_description.constprop.5+0x1be/0x360
      [21071.939019][T22666]  ? hsr_debugfs_rename+0xaa/0xb0 [hsr]
      [21071.940081][T22666]  ? hsr_debugfs_rename+0xaa/0xb0 [hsr]
      [21071.940949][T22666]  __kasan_report+0x12a/0x16f
      [21071.941758][T22666]  ? hsr_debugfs_rename+0xaa/0xb0 [hsr]
      [21071.942674][T22666]  kasan_report+0xe/0x20
      [21071.943325][T22666]  hsr_debugfs_rename+0xaa/0xb0 [hsr]
      [21071.944187][T22666]  hsr_netdev_notify+0x1fe/0x9b0 [hsr]
      [21071.945052][T22666]  ? __module_text_address+0x13/0x140
      [21071.945897][T22666]  notifier_call_chain+0x90/0x160
      [21071.946743][T22666]  dev_change_name+0x419/0x840
      [21071.947496][T22666]  ? __read_once_size_nocheck.constprop.6+0x10/0x10
      [21071.948600][T22666]  ? netdev_adjacent_rename_links+0x280/0x280
      [21071.949577][T22666]  ? __read_once_size_nocheck.constprop.6+0x10/0x10
      [21071.950672][T22666]  ? lock_downgrade+0x6e0/0x6e0
      [21071.951345][T22666]  ? do_setlink+0x811/0x2ef0
      [21071.951991][T22666]  do_setlink+0x811/0x2ef0
      [21071.952613][T22666]  ? is_bpf_text_address+0x81/0xe0
      [ ... ]
      
Reported-by: <syzbot+9328206518f08318a5fd@syzkaller.appspotmail.com>
Fixes: 4c2d5e33 ("hsr: rename debugfs file when interface name is changed")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      04b69426
    • Davide Caratti's avatar
      net/sched: add delete_empty() to filters and use it in cls_flower · a5b72a08
      Davide Caratti authored
      Revert "net/sched: cls_u32: fix refcount leak in the error path of
      u32_change()", and fix the u32 refcount leak in a more generic way that
      preserves the semantic of rule dumping.
On tc filters that don't support lockless insertion/removal, there is no
need to guard against concurrent insertion when a removal is in progress.
Therefore, for most of them we can avoid a full walk() when deleting, and
just decrease the refcount, as was done on older Linux kernels.
This fixes situations where walk() was wrongly detecting a non-empty
filter, as happened with cls_u32 in the error path of change(), thus
leading to failures in the following tdc selftests:
      
       6aa7: (filter, u32) Add/Replace u32 with source match and invalid indev
       6658: (filter, u32) Add/Replace u32 with custom hash table and invalid handle
       74c2: (filter, u32) Add/Replace u32 filter with invalid hash table id
      
On cls_flower, and on (future) lockless filters, this check is necessary:
move all the check_empty() logic into a callback so that each filter
can have its own implementation. For cls_flower, it's sufficient to check
that no IDRs have been allocated.
      
      This reverts commit 275c44aa.
      
      Changes since v1:
       - document the need for delete_empty() when TCF_PROTO_OPS_DOIT_UNLOCKED
         is used, thanks to Vlad Buslov
       - implement delete_empty() without doing fl_walk(), thanks to Vlad Buslov
       - squash revert and new fix in a single patch, to be nice with bisect
         tests that run tdc on u32 filter, thanks to Dave Miller
      
      Fixes: 275c44aa ("net/sched: cls_u32: fix refcount leak in the error path of u32_change()")
Fixes: 6676d5e4 ("net: sched: set dedicated tcf_walker flag when tp is empty")
Suggested-by: Jamal Hadi Salim <jhs@mojatatu.com>
Suggested-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Davide Caratti <dcaratti@redhat.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Tested-by: Jamal Hadi Salim <jhs@mojatatu.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      a5b72a08
    • Vijay Khemka's avatar
      net/ncsi: Fix gma flag setting after response · 9e860947
      Vijay Khemka authored
      
      
gma_flag was set at the time of the GMA command request, but it should
only be set after getting a successful response. Move this flag
setting into the GMA response handler.

This flag is mainly used to avoid repeating the GMA command once the
MAC address has been received.
      
Signed-off-by: Vijay Khemka <vijaykhemka@fb.com>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      9e860947