  Apr 09, 2017
    • Merge branch 'dsa-receive-path-simplifications' · 417d978f
      David S. Miller authored
      
      
      Florian Fainelli says:
      
      ====================
      net: dsa: Receive path simplifications
      
      This patch series factors the code common to all tag implementations
      into dsa_switch_rcv(). The original motivation was to add GRO support,
      but that may be a lot of work with unclear benefits at this point.
      
      Changes in v2:
      - take care of tag_mtk.c in the process
      ====================
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Factor bottom tag receive functions · a86d8bec
      Florian Fainelli authored
      
      
      All DSA tag receive functions do exactly the same thing after they have
      located the originating source port from their tag-specific protocol:
      
      - push ETH_HLEN bytes
      - set pkt_type to PACKET_HOST
      - call eth_type_trans()
      - bump up counters
      - call netif_receive_skb()
      
      Factor all of that into dsa_switch_rcv(). This also makes us return a
      pointer to an sk_buff, which makes us symmetric with the xmit function.
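      
      As a rough illustration (a sketch based on the description above rather
      than the verbatim patch; names such as dev->dsa_ptr and dst->rcv are
      assumed from the DSA code of this period), the factored receive entry
      point could look like:
      
      static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
      			  struct packet_type *pt, struct net_device *orig_dev)
      {
      	struct dsa_switch_tree *dst = dev->dsa_ptr;
      	struct sk_buff *nskb;
      
      	if (unlikely(!dst)) {
      		kfree_skb(skb);
      		return 0;
      	}
      
      	/* Tag parsers now return the mangled skb (or NULL on error)
      	 * instead of calling netif_receive_skb() themselves.
      	 */
      	nskb = dst->rcv(skb, dev, pt, orig_dev);
      	if (!nskb) {
      		kfree_skb(skb);
      		return 0;
      	}
      	skb = nskb;
      
      	/* Common tail previously duplicated in every tag_*.c */
      	skb_push(skb, ETH_HLEN);
      	skb->pkt_type = PACKET_HOST;
      	skb->protocol = eth_type_trans(skb, skb->dev);
      
      	skb->dev->stats.rx_packets++;
      	skb->dev->stats.rx_bytes += skb->len;
      
      	netif_receive_skb(skb);
      
      	return 0;
      }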
      
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Move skb_unshare() to dsa_switch_rcv() · 16c5dcb1
      Florian Fainelli authored
      
      
      All DSA tag receive functions need to unshare the skb before mangling it.
      Move this into the generic dsa_switch_rcv() function, which allows the
      tag receive functions to return their mangled skb without having to care
      about freeing a NULL skb.
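      
      A minimal sketch of where the unshare could sit in the generic entry
      point (again an illustration, not the verbatim diff):
      
      	/* Done once for every tagger: get a writable copy before the tag
      	 * parser mangles the headers. On failure skb_unshare() has already
      	 * freed the original skb, so there is nothing left to clean up.
      	 */
      	skb = skb_unshare(skb, GFP_ATOMIC);
      	if (!skb)
      		return 0;
      
      	nskb = dst->rcv(skb, dev, pt, orig_dev);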
      
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: dsa: Do not check for NULL dst in tag parsers · 9d7f9c4f
      Florian Fainelli authored
      
      
      dsa_switch_rcv() already tests for dst == NULL, so there is no need to duplicate
      the same check within the tag receive functions.
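      
      Taken together, the patches above reduce each tag parser to roughly the
      following shape. This is a hypothetical example: example_tag_rcv(),
      example_tag_decode_port() and EXAMPLE_TAG_LEN are made-up names standing
      in for a real tagger's protocol-specific logic, and the
      dst->ds[0]->ports[] lookup follows the DSA structures of this era.
      
      static struct sk_buff *example_tag_rcv(struct sk_buff *skb,
      				       struct net_device *dev,
      				       struct packet_type *pt,
      				       struct net_device *orig_dev)
      {
      	struct dsa_switch_tree *dst = dev->dsa_ptr;
      	int port;
      
      	/* No dst == NULL test and no skb_unshare() here any more:
      	 * dsa_switch_rcv() has already taken care of both.
      	 */
      	if (unlikely(!pskb_may_pull(skb, EXAMPLE_TAG_LEN)))
      		return NULL;
      
      	/* Tag-specific work only: decode the source port from the
      	 * (hypothetical) tag and strip the tag from the frame.
      	 */
      	port = example_tag_decode_port(skb);
      	if (port < 0)
      		return NULL;
      
      	skb_pull_rcsum(skb, EXAMPLE_TAG_LEN);
      	skb->dev = dst->ds[0]->ports[port].netdev;
      
      	/* eth_type_trans(), stats and netif_receive_skb() are now done by
      	 * the caller; just hand the mangled skb back.
      	 */
      	return skb;
      }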
      
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • skbuff: Extend gso_type to unsigned int. · 7f564528
      Steffen Klassert authored
      
      
      All available gso_type flags are currently in use, so
      extend gso_type from 'unsigned short' to 'unsigned int'
      to be able to add further flags.
      
      We reorder struct skb_shared_info to use two bytes of the
      four-byte hole before dataref. All fields before dataref are
      cleared on allocation, i.e. four bytes more than before the change.
      
      The remaining two-byte hole is moved to the beginning of the
      structure; this protects us from immediate overwrites on
      out-of-bounds writes to the sk_buff head.
      
      Structure layout on x86-64 before the change:
      
      struct skb_shared_info {
      	unsigned char              nr_frags;             /*     0     1 */
      	__u8                       tx_flags;             /*     1     1 */
      	short unsigned int         gso_size;             /*     2     2 */
      	short unsigned int         gso_segs;             /*     4     2 */
      	short unsigned int         gso_type;             /*     6     2 */
      	struct sk_buff *           frag_list;            /*     8     8 */
      	struct skb_shared_hwtstamps hwtstamps;           /*    16     8 */
      	u32                        tskey;                /*    24     4 */
      	__be32                     ip6_frag_id;          /*    28     4 */
      	atomic_t                   dataref;              /*    32     4 */
      
      	/* XXX 4 bytes hole, try to pack */
      
      	void *                     destructor_arg;       /*    40     8 */
      	skb_frag_t                 frags[17];            /*    48   272 */
      	/* --- cacheline 5 boundary (320 bytes) --- */
      
      	/* size: 320, cachelines: 5, members: 12 */
      	/* sum members: 316, holes: 1, sum holes: 4 */
      };
      
      Structure layout on x86-64 after the change:
      
      struct skb_shared_info {
      	short unsigned int         _unused;              /*     0     2 */
      	unsigned char              nr_frags;             /*     2     1 */
      	__u8                       tx_flags;             /*     3     1 */
      	short unsigned int         gso_size;             /*     4     2 */
      	short unsigned int         gso_segs;             /*     6     2 */
      	struct sk_buff *           frag_list;            /*     8     8 */
      	struct skb_shared_hwtstamps hwtstamps;           /*    16     8 */
      	unsigned int               gso_type;             /*    24     4 */
      	u32                        tskey;                /*    28     4 */
      	__be32                     ip6_frag_id;          /*    32     4 */
      	atomic_t                   dataref;              /*    36     4 */
      	void *                     destructor_arg;       /*    40     8 */
      	skb_frag_t                 frags[17];            /*    48   272 */
      	/* --- cacheline 5 boundary (320 bytes) --- */
      
      	/* size: 320, cachelines: 5, members: 13 */
      };
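      
      Read back as a declaration, the layout above corresponds roughly to the
      following field ordering in struct skb_shared_info (a sketch derived
      from the pahole dump; the real skbuff.h carries additional comments and
      sizes frags[] with MAX_SKB_FRAGS, which is 17 in this configuration):
      
      struct skb_shared_info {
      	unsigned short	_unused;	/* the former hole, now explicit */
      	unsigned char	nr_frags;
      	__u8		tx_flags;
      	unsigned short	gso_size;
      	unsigned short	gso_segs;
      	struct sk_buff	*frag_list;
      	struct skb_shared_hwtstamps hwtstamps;
      	unsigned int	gso_type;	/* widened from unsigned short */
      	u32		tskey;
      	__be32		ip6_frag_id;
      	atomic_t	dataref;	/* all fields above are cleared on alloc */
      	void		*destructor_arg;
      	skb_frag_t	frags[MAX_SKB_FRAGS];
      };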
      
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>