Message ID | 446695cb-50b8-4187-bf11-63aedb6e9aed@gmail.com
State | New
Series | net: gro: encapsulation bug fix and flush checks improvements
On Thu, Feb 22, 2024 at 10:05 PM Richard Gobert <richardbgobert@gmail.com> wrote: > > Commits a602456 ("udp: Add GRO functions to UDP socket") and 57c67ff ("udp: > additional GRO support") introduce incorrect usage of {ip,ipv6}_hdr in the > complete phase of gro. The functions always return skb->network_header, > which in the case of encapsulated packets at the gro complete phase, is > always set to the innermost L3 of the packet. That means that calling > {ip,ipv6}_hdr for skbs which completed the GRO receive phase (both in > gro_list and *_gro_complete) when parsing an encapsulated packet's _outer_ > L3/L4 may return an unexpected value. > > This incorrect usage leads to a bug in GRO's UDP socket lookup. > udp{4,6}_lib_lookup_skb functions use ip_hdr/ipv6_hdr respectively. These > *_hdr functions return network_header which will point to the innermost L3, > resulting in the wrong offset being used in __udp{4,6}_lib_lookup with > encapsulated packets. > > Reproduction example: > > Endpoint configuration example (fou + local address bind) > > # ip fou add port 6666 ipproto 4 > # ip link add name tun1 type ipip remote 2.2.2.1 local 2.2.2.2 encap fou encap-dport 5555 encap-sport 6666 mode ipip > # ip link set tun1 up > # ip a add 1.1.1.2/24 dev tun1 > > Netperf TCP_STREAM result on net-next before patch is applied: > > net-next main, GRO enabled: > $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5 > Recv Send Send > Socket Socket Message Elapsed > Size Size Size Time Throughput > bytes bytes bytes secs. 10^6bits/sec > > 131072 16384 16384 5.28 2.37 > > net-next main, GRO disabled: > $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5 > Recv Send Send > Socket Socket Message Elapsed > Size Size Size Time Throughput > bytes bytes bytes secs. 10^6bits/sec > > 131072 16384 16384 5.01 2745.06 > > patch applied, GRO enabled: > $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5 > Recv Send Send > Socket Socket Message Elapsed > Size Size Size Time Throughput > bytes bytes bytes secs. 10^6bits/sec > > 131072 16384 16384 5.01 2877.38 > > This patch fixes this bug and prevents similar future misuse of > network_header by setting network_header and inner_network_header to their > respective values during the receive phase of GRO. This results in > more coherent {inner_,}network_header values for every skb in gro_list, > which also means there's no need to set/fix these values before passing > the packet forward. > > network_header is already set in dev_gro_receive and under encapsulation we > set inner_network_header. *_gro_complete functions use a new helper > function - skb_gro_complete_network_header, which returns the > network_header/inner_network_header offset during the GRO complete phase, > depending on skb->encapsulation. > > Fixes: 57c67ff4bd92 ("udp: additional GRO support") > Signed-off-by: Richard Gobert <richardbgobert@gmail.com> > --- > include/net/gro.h | 14 +++++++++++++- > net/8021q/vlan_core.c | 3 +++ > net/ipv4/af_inet.c | 8 ++++---- > net/ipv4/tcp_offload.c | 2 +- > net/ipv4/udp_offload.c | 2 +- > net/ipv6/ip6_offload.c | 11 +++++------ > net/ipv6/tcpv6_offload.c | 2 +- > net/ipv6/udp_offload.c | 2 +- > 8 files changed, 29 insertions(+), 15 deletions(-) > > diff --git a/include/net/gro.h b/include/net/gro.h > index b435f0ddbf64..89502a7e35ed 100644 > --- a/include/net/gro.h > +++ b/include/net/gro.h > @@ -177,10 +177,22 @@ static inline void *skb_gro_header(struct sk_buff *skb, > return ptr; > } > > +static inline int skb_gro_network_offset(struct sk_buff *skb) > +{ > + return NAPI_GRO_CB(skb)->encap_mark ? 
skb_inner_network_offset(skb) : > + skb_network_offset(skb); > +} > + > static inline void *skb_gro_network_header(struct sk_buff *skb) > { > return (NAPI_GRO_CB(skb)->frag0 ?: skb->data) + > - skb_network_offset(skb); > + skb_gro_network_offset(skb); > +} > + > +static inline void *skb_gro_complete_network_header(struct sk_buff *skb) > +{ > + return skb->encapsulation ? skb_inner_network_header(skb) : > + skb_network_header(skb); > } > > static inline __wsum inet_gro_compute_pseudo(struct sk_buff *skb, int proto) > diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c > index f00158234505..8bc871397e47 100644 > --- a/net/8021q/vlan_core.c > +++ b/net/8021q/vlan_core.c > @@ -478,6 +478,9 @@ static struct sk_buff *vlan_gro_receive(struct list_head *head, > if (unlikely(!vhdr)) > goto out; > > + if (!NAPI_GRO_CB(skb)->encap_mark) > + skb_set_network_header(skb, hlen); > + > type = vhdr->h_vlan_encapsulated_proto; > > ptype = gro_find_receive_by_type(type); > diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c > index 835f4f9d98d2..c0f3c162bf73 100644 > --- a/net/ipv4/af_inet.c > +++ b/net/ipv4/af_inet.c > @@ -1564,7 +1564,9 @@ struct sk_buff *inet_gro_receive(struct list_head *head, struct sk_buff *skb) > > NAPI_GRO_CB(skb)->is_atomic = !!(iph->frag_off & htons(IP_DF)); > NAPI_GRO_CB(skb)->flush |= flush; > - skb_set_network_header(skb, off); > + if (NAPI_GRO_CB(skb)->encap_mark) > + skb_set_inner_network_header(skb, off); > + > /* The above will be needed by the transport layer if there is one > * immediately following this IP hdr. > */ > @@ -1643,10 +1645,8 @@ int inet_gro_complete(struct sk_buff *skb, int nhoff) > int proto = iph->protocol; > int err = -ENOSYS; > > - if (skb->encapsulation) { > + if (skb->encapsulation) > skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IP)); > - skb_set_inner_network_header(skb, nhoff); > - } > > iph_set_totlen(iph, skb->len - nhoff); > csum_replace2(&iph->check, totlen, iph->tot_len); > diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c > index 8311c38267b5..8bbcd3f502ac 100644 > --- a/net/ipv4/tcp_offload.c > +++ b/net/ipv4/tcp_offload.c > @@ -330,7 +330,7 @@ struct sk_buff *tcp4_gro_receive(struct list_head *head, struct sk_buff *skb) > > INDIRECT_CALLABLE_SCOPE int tcp4_gro_complete(struct sk_buff *skb, int thoff) > { > - const struct iphdr *iph = ip_hdr(skb); > + const struct iphdr *iph = skb_gro_complete_network_header(skb); > struct tcphdr *th = tcp_hdr(skb); > > th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr, > diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c > index 6c95d28d0c4a..7f59cede67f5 100644 > --- a/net/ipv4/udp_offload.c > +++ b/net/ipv4/udp_offload.c > @@ -709,7 +709,7 @@ EXPORT_SYMBOL(udp_gro_complete); > > INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff) > { > - const struct iphdr *iph = ip_hdr(skb); > + const struct iphdr *iph = skb_gro_complete_network_header(skb); > struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); > > /* do fraglist only if there is no outer UDP encap (or we already processed it) */ > diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c > index cca64c7809be..db7e3db587b9 100644 > --- a/net/ipv6/ip6_offload.c > +++ b/net/ipv6/ip6_offload.c > @@ -67,7 +67,7 @@ static int ipv6_gro_pull_exthdrs(struct sk_buff *skb, int off, int proto) > off += len; > } > > - skb_gro_pull(skb, off - skb_network_offset(skb)); > + skb_gro_pull(skb, off - skb_gro_network_offset(skb)); > return proto; > } > > @@ -236,7 +236,8 @@ INDIRECT_CALLABLE_SCOPE struct 
sk_buff *ipv6_gro_receive(struct list_head *head, > if (unlikely(!iph)) > goto out; > > - skb_set_network_header(skb, off); > + if (NAPI_GRO_CB(skb)->encap_mark) > + skb_set_inner_network_header(skb, off); > > flush += ntohs(iph->payload_len) != skb->len - hlen; > > @@ -259,7 +260,7 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head, > NAPI_GRO_CB(skb)->proto = proto; > > flush--; > - nlen = skb_network_header_len(skb); > + nlen = skb_gro_offset(skb) - off; > > list_for_each_entry(p, head, list) { > const struct ipv6hdr *iph2; > @@ -353,10 +354,8 @@ INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff) > int err = -ENOSYS; > u32 payload_len; > > - if (skb->encapsulation) { > + if (skb->encapsulation) > skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IPV6)); > - skb_set_inner_network_header(skb, nhoff); > - } > > payload_len = skb->len - nhoff - sizeof(*iph); > if (unlikely(payload_len > IPV6_MAXPLEN)) { > diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c > index bf0c957e4b5e..79eeaced2834 100644 > --- a/net/ipv6/tcpv6_offload.c > +++ b/net/ipv6/tcpv6_offload.c > @@ -29,7 +29,7 @@ struct sk_buff *tcp6_gro_receive(struct list_head *head, struct sk_buff *skb) > > INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int thoff) > { > - const struct ipv6hdr *iph = ipv6_hdr(skb); > + const struct ipv6hdr *iph = skb_gro_complete_network_header(skb); > struct tcphdr *th = tcp_hdr(skb); > > th->check = ~tcp_v6_check(skb->len - thoff, &iph->saddr, > diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c > index 6b95ba241ebe..897caa2e39fb 100644 > --- a/net/ipv6/udp_offload.c > +++ b/net/ipv6/udp_offload.c > @@ -164,7 +164,7 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb) > > INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff) > { > - const struct ipv6hdr *ipv6h = ipv6_hdr(skb); > + const struct ipv6hdr *ipv6h = skb_gro_complete_network_header(skb); > struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); My intuition is that this patch has a high cost for normal GRO processing. SW-GRO is already a bottleneck on ARM cores in smart NICS. I would suggest instead using parameters to give both the nhoff and thoff values this would avoid many conditionals in the fast path. -> INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff, int thoff) { const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)(skb->data + nhoff); struct udphdr *uh = (struct udphdr *)(skb->data + thoff); ... } INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int nhoff, int thoff) { const struct ipv6hdr *iph = (const struct ipv6hdr *)(skb->data + nhoff); struct tcphdr *th = (struct tcphdr *)(skb->data + thoff); Why storing in skb fields things that really could be propagated more efficiently as function parameters ?
Eric Dumazet wrote:
>
> My intuition is that this patch has a high cost for normal GRO processing.
> SW-GRO is already a bottleneck on ARM cores in smart NICS.
>
> I would suggest instead using parameters to give both the nhoff and thoff values
> this would avoid many conditionals in the fast path.
>
> ->
>
> INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int
> nhoff, int thoff)
> {
> const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)(skb->data + nhoff);
> struct udphdr *uh = (struct udphdr *)(skb->data + thoff);
> ...
> }
>
> INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int
> nhoff, int thoff)
> {
> const struct ipv6hdr *iph = (const struct ipv6hdr *)(skb->data + nhoff);
> struct tcphdr *th = (struct tcphdr *)(skb->data + thoff);
>
> Why storing in skb fields things that really could be propagated more
> efficiently as function parameters ?

Hi Eric,
Thanks for the review!

I agree, the conditionals could be a problem and are actually not needed.
The third commit in this patch series introduces an optimisation for
ipv6/ipv4 using the correct {inner_}network_header. We can remove the
conditionals; I thought about multiple ways to do so. First, remove the
conditional in skb_gro_network_offset:

static inline int skb_gro_network_offset(const struct sk_buff *skb)
{
        const u32 mask = NAPI_GRO_CB(skb)->encap_mark - 1;
        return (skb_network_offset(skb) & mask) | (skb_inner_network_offset(skb) & ~mask);
}

And for the conditionals in {inet,ipv6}_gro_receive I thought about two
ideas. The first is to move set_inner_network_header to encapsulation gro
functions like ipip_gro_receive, this way there's one less write (in
comparison to main) in these functions:

static struct sk_buff *ipip_gro_receive(struct list_head *head, struct sk_buff *skb)
{
        ...
        NAPI_GRO_CB(skb)->encap_mark = 1;
        skb_set_inner_network_header(skb, skb_gro_offset(skb));

The second way is to always write to inner_network_header:

INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head, struct sk_buff *skb)
{
        ...
        skb_set_inner_network_header(skb, off);
        ...

What do you think is better? I think the 1st is more beneficial for the
fast path. We could then use the {inner_}network_header separation to
optimise the receive path, such as in the 3rd commit in this patch series.

Regards,
Richard
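A quick way to sanity-check the branchless select proposed above: encap_mark only ever holds 0 or 1 (it is a single-bit field in napi_gro_cb), so encap_mark - 1 evaluates to an all-ones or all-zero mask and the OR of the two masked offsets picks the outer or inner value without a branch. The following standalone userspace snippet is purely illustrative (not kernel code, names invented) and just exercises the same expression:

/* toy_mask.c - standalone check of the mask trick; not kernel code */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t select_offset(uint32_t encap_mark, uint32_t net_off,
                              uint32_t inner_net_off)
{
        /* assumes encap_mark is 0 or 1; 0 - 1 wraps to an all-ones mask */
        const uint32_t mask = encap_mark - 1;

        return (net_off & mask) | (inner_net_off & ~mask);
}

int main(void)
{
        /* illustrative offsets only: outer IPv4 after Ethernet, inner IPv4
         * after the outer IPv4 + UDP (fou) headers
         */
        const uint32_t net_off = 14, inner_net_off = 42;

        assert(select_offset(0, net_off, inner_net_off) == net_off);
        assert(select_offset(1, net_off, inner_net_off) == inner_net_off);
        printf("plain: %u, encapsulated: %u\n",
               select_offset(0, net_off, inner_net_off),
               select_offset(1, net_off, inner_net_off));
        return 0;
}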
On Thu, Feb 29, 2024 at 2:22 PM Richard Gobert <richardbgobert@gmail.com> wrote:
>
>
>
> Eric Dumazet wrote:
> >
> > My intuition is that this patch has a high cost for normal GRO processing.
> > SW-GRO is already a bottleneck on ARM cores in smart NICS.
> >
> > I would suggest instead using parameters to give both the nhoff and thoff values
> > this would avoid many conditionals in the fast path.
> >
> > ->
> >
> > INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int
> > nhoff, int thoff)
> > {
> > const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)(skb->data + nhoff);
> > struct udphdr *uh = (struct udphdr *)(skb->data + thoff);
> > ...
> > }
> >
> > INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int
> > nhoff, int thoff)
> > {
> > const struct ipv6hdr *iph = (const struct ipv6hdr *)(skb->data + nhoff);
> > struct tcphdr *th = (struct tcphdr *)(skb->data + thoff);
> >
> > Why storing in skb fields things that really could be propagated more
> > efficiently as function parameters ?
>
> Hi Eric,
> Thanks for the review!
>
> I agree, the conditionals could be a problem and are actually not needed.
> The third commit in this patch series introduces an optimisation for
> ipv6/ipv4 using the correct {inner_}network_header. We can remove the
> conditionals; I thought about multiple ways to do so. First, remove the
> conditional in skb_gro_network_offset:
>
> static inline int skb_gro_network_offset(const struct sk_buff *skb)
> {
> const u32 mask = NAPI_GRO_CB(skb)->encap_mark - 1;
> return (skb_network_offset(skb) & mask) | (skb_inner_network_offset(skb) & ~mask);
> }

I was trying to say that we do not need all these helpers, storing
state in NAPI_GRO_CB(skb),
dirtying cache lines...

Ideally, the skb network/transport/... headers could be set at the
last stage, in gro_complete(big_gro_skb),
instead of doing this for each segment.

All the gro_receive() could be much faster by using additional
parameters (nhoff, thoff)

skb_gro_offset() could be replaced by the current offset (nhoff or
other name), passed as a parameter.

Here is a WIP for gro_complete() step, this looks large but this is
only adding a 2nd 'offset' parameter

Prior offset (typically network offset), called p_off
Old argument nhoff, (renamed thoff if that makes sense), pointing to
the current offset.
drivers/net/geneve.c | 6 +++--- drivers/net/vxlan/vxlan_core.c | 11 +++++++---- include/linux/etherdevice.h | 2 +- include/linux/netdevice.h | 2 +- include/linux/udp.h | 2 +- include/net/gro.h | 10 +++++----- include/net/inet_common.h | 2 +- include/net/tcp.h | 4 ++-- include/net/udp.h | 8 ++++---- include/net/udp_tunnel.h | 2 +- net/8021q/vlan_core.c | 4 ++-- net/core/gro.c | 2 +- net/ethernet/eth.c | 4 ++-- net/ipv4/af_inet.c | 8 ++++---- net/ipv4/fou_core.c | 9 +++++---- net/ipv4/gre_offload.c | 4 ++-- net/ipv4/tcp_offload.c | 6 +++--- net/ipv4/udp.c | 3 ++- net/ipv4/udp_offload.c | 24 ++++++++++++------------ net/ipv6/ip6_offload.c | 22 ++++++++++++---------- net/ipv6/tcpv6_offload.c | 7 ++++--- net/ipv6/udp.c | 3 ++- net/ipv6/udp_offload.c | 12 ++++++------ 23 files changed, 83 insertions(+), 74 deletions(-) diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c index 6f3f9b446b1d202f6c71a20ce48088691e9120bf..af8dfdd47ffdeb7bdea302c5957e81faf83b19db 100644 --- a/drivers/net/geneve.c +++ b/drivers/net/geneve.c @@ -546,7 +546,7 @@ static struct sk_buff *geneve_gro_receive(struct sock *sk, } static int geneve_gro_complete(struct sock *sk, struct sk_buff *skb, - int nhoff) + int p_off, int nhoff) { struct genevehdr *gh; struct packet_offload *ptype; @@ -560,11 +560,11 @@ static int geneve_gro_complete(struct sock *sk, struct sk_buff *skb, /* since skb->encapsulation is set, eth_gro_complete() sets the inner mac header */ if (likely(type == htons(ETH_P_TEB))) - return eth_gro_complete(skb, nhoff + gh_len); + return eth_gro_complete(skb, p_off, nhoff + gh_len); ptype = gro_find_complete_by_type(type); if (ptype) - err = ptype->callbacks.gro_complete(skb, nhoff + gh_len); + err = ptype->callbacks.gro_complete(skb, p_off, nhoff + gh_len); skb_set_inner_mac_header(skb, nhoff + gh_len); diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c index 386cbe4d33272791e80470bd1378731d0c3b4d3b..84c123405b70f986a40b9f531e826807bcfc880b 100644 --- a/drivers/net/vxlan/vxlan_core.c +++ b/drivers/net/vxlan/vxlan_core.c @@ -767,15 +767,17 @@ static struct sk_buff *vxlan_gpe_gro_receive(struct sock *sk, return pp; } -static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff) +static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, + int p_off, int nhoff) { /* Sets 'skb->inner_mac_header' since we are always called with * 'skb->encapsulation' set. 
*/ - return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr)); + return eth_gro_complete(skb, p_off, nhoff + sizeof(struct vxlanhdr)); } -static int vxlan_gpe_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff) +static int vxlan_gpe_gro_complete(struct sock *sk, struct sk_buff *skb, + int p_off, int nhoff) { struct vxlanhdr *vh = (struct vxlanhdr *)(skb->data + nhoff); const struct packet_offload *ptype; @@ -786,7 +788,8 @@ static int vxlan_gpe_gro_complete(struct sock *sk, struct sk_buff *skb, int nhof return err; ptype = gro_find_complete_by_type(protocol); if (ptype) - err = ptype->callbacks.gro_complete(skb, nhoff + sizeof(struct vxlanhdr)); + err = ptype->callbacks.gro_complete(skb, p_off, nhoff + + sizeof(struct vxlanhdr)); return err; } diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h index 224645f17c333b2311573197a28b41701eb35f92..b081b43d9686a1f3b0ddc9d84e66566e297a2d67 100644 --- a/include/linux/etherdevice.h +++ b/include/linux/etherdevice.h @@ -64,7 +64,7 @@ struct net_device *devm_alloc_etherdev_mqs(struct device *dev, int sizeof_priv, #define devm_alloc_etherdev(dev, sizeof_priv) devm_alloc_etherdev_mqs(dev, sizeof_priv, 1, 1) struct sk_buff *eth_gro_receive(struct list_head *head, struct sk_buff *skb); -int eth_gro_complete(struct sk_buff *skb, int nhoff); +int eth_gro_complete(struct sk_buff *skb, int p_off, int nhoff); /* Reserved Ethernet Addresses per IEEE 802.1Q */ static const u8 eth_reserved_addr_base[ETH_ALEN] __aligned(2) = diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 09023e44db4e2c3a2133afc52ba5a335d6030646..b21745f6233a2b56a64adda9f303d4db22019a37 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -2788,7 +2788,7 @@ struct offload_callbacks { netdev_features_t features); struct sk_buff *(*gro_receive)(struct list_head *head, struct sk_buff *skb); - int (*gro_complete)(struct sk_buff *skb, int nhoff); + int (*gro_complete)(struct sk_buff *skb, int nhoff, int thoff); }; struct packet_offload { diff --git a/include/linux/udp.h b/include/linux/udp.h index 3748e82b627b7044508db66adbf77c54a8e3d612..a04d94a3b42f8f5fb3d85622d2f3e56a94c4ea86 100644 --- a/include/linux/udp.h +++ b/include/linux/udp.h @@ -82,7 +82,7 @@ struct udp_sock { struct sk_buff *skb); int (*gro_complete)(struct sock *sk, struct sk_buff *skb, - int nhoff); + int nhoff, int thoff); /* udp_recvmsg try to use this before splicing sk_receive_queue */ struct sk_buff_head reader_queue ____cacheline_aligned_in_smp; diff --git a/include/net/gro.h b/include/net/gro.h index b435f0ddbf64f7bf740b7e479a1b28bcdef122c6..2856b00b84dfbf122871ef9bedab2097a7c823eb 100644 --- a/include/net/gro.h +++ b/include/net/gro.h @@ -385,18 +385,18 @@ static inline void skb_gro_flush_final_remcsum(struct sk_buff *skb, INDIRECT_CALLABLE_DECLARE(struct sk_buff *ipv6_gro_receive(struct list_head *, struct sk_buff *)); -INDIRECT_CALLABLE_DECLARE(int ipv6_gro_complete(struct sk_buff *, int)); +INDIRECT_CALLABLE_DECLARE(int ipv6_gro_complete(struct sk_buff *, int, int)); INDIRECT_CALLABLE_DECLARE(struct sk_buff *inet_gro_receive(struct list_head *, struct sk_buff *)); -INDIRECT_CALLABLE_DECLARE(int inet_gro_complete(struct sk_buff *, int)); +INDIRECT_CALLABLE_DECLARE(int inet_gro_complete(struct sk_buff *, int, int)); INDIRECT_CALLABLE_DECLARE(struct sk_buff *udp4_gro_receive(struct list_head *, struct sk_buff *)); -INDIRECT_CALLABLE_DECLARE(int udp4_gro_complete(struct sk_buff *, int)); +INDIRECT_CALLABLE_DECLARE(int udp4_gro_complete(struct sk_buff 
*, int, int)); INDIRECT_CALLABLE_DECLARE(struct sk_buff *udp6_gro_receive(struct list_head *, struct sk_buff *)); -INDIRECT_CALLABLE_DECLARE(int udp6_gro_complete(struct sk_buff *, int)); +INDIRECT_CALLABLE_DECLARE(int udp6_gro_complete(struct sk_buff *, int, int)); #define indirect_call_gro_receive_inet(cb, f2, f1, head, skb) \ ({ \ @@ -407,7 +407,7 @@ INDIRECT_CALLABLE_DECLARE(int udp6_gro_complete(struct sk_buff *, int)); struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb, struct udphdr *uh, struct sock *sk); -int udp_gro_complete(struct sk_buff *skb, int nhoff, udp_lookup_t lookup); +int udp_gro_complete(struct sk_buff *skb, int nhoff, int thoff, udp_lookup_t lookup); static inline struct udphdr *udp_gro_udphdr(struct sk_buff *skb) { diff --git a/include/net/inet_common.h b/include/net/inet_common.h index f50a644d87a9871fbed2dfd49d4bde9d3df0fd92..605f917c830c9b1794dfc3bda1bdc9ac24a0def3 100644 --- a/include/net/inet_common.h +++ b/include/net/inet_common.h @@ -64,7 +64,7 @@ int inet_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len); struct sk_buff *inet_gro_receive(struct list_head *head, struct sk_buff *skb); -int inet_gro_complete(struct sk_buff *skb, int nhoff); +int inet_gro_complete(struct sk_buff *skb, int nhoff, int thoff); struct sk_buff *inet_gso_segment(struct sk_buff *skb, netdev_features_t features); diff --git a/include/net/tcp.h b/include/net/tcp.h index 6ae35199d3b3c159ba029ff74b109c56a7c7d2fc..ad8e6efc8bd867ada990820b5badce96ee49c3da 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -2196,9 +2196,9 @@ void tcp_v4_destroy_sock(struct sock *sk); struct sk_buff *tcp_gso_segment(struct sk_buff *skb, netdev_features_t features); struct sk_buff *tcp_gro_receive(struct list_head *head, struct sk_buff *skb); -INDIRECT_CALLABLE_DECLARE(int tcp4_gro_complete(struct sk_buff *skb, int thoff)); +INDIRECT_CALLABLE_DECLARE(int tcp4_gro_complete(struct sk_buff *skb, int nhoff, int thoff)); INDIRECT_CALLABLE_DECLARE(struct sk_buff *tcp4_gro_receive(struct list_head *head, struct sk_buff *skb)); -INDIRECT_CALLABLE_DECLARE(int tcp6_gro_complete(struct sk_buff *skb, int thoff)); +INDIRECT_CALLABLE_DECLARE(int tcp6_gro_complete(struct sk_buff *skb, int nhoff, int thoff)); INDIRECT_CALLABLE_DECLARE(struct sk_buff *tcp6_gro_receive(struct list_head *head, struct sk_buff *skb)); #ifdef CONFIG_INET void tcp_gro_complete(struct sk_buff *skb); diff --git a/include/net/udp.h b/include/net/udp.h index 488a6d2babccf26edfbaecc525f25e03d86b7d62..601d1c3b677a9acd6b6615e64de3f034f940287d 100644 --- a/include/net/udp.h +++ b/include/net/udp.h @@ -166,8 +166,8 @@ static inline void udp_csum_pull_header(struct sk_buff *skb) UDP_SKB_CB(skb)->cscov -= sizeof(struct udphdr); } -typedef struct sock *(*udp_lookup_t)(const struct sk_buff *skb, __be16 sport, - __be16 dport); +typedef struct sock *(*udp_lookup_t)(const struct sk_buff *skb, int nhoff, + __be16 sport, __be16 dport); void udp_v6_early_demux(struct sk_buff *skb); INDIRECT_CALLABLE_DECLARE(int udpv6_rcv(struct sk_buff *)); @@ -301,7 +301,7 @@ struct sock *udp4_lib_lookup(struct net *net, __be32 saddr, __be16 sport, struct sock *__udp4_lib_lookup(struct net *net, __be32 saddr, __be16 sport, __be32 daddr, __be16 dport, int dif, int sdif, struct udp_table *tbl, struct sk_buff *skb); -struct sock *udp4_lib_lookup_skb(const struct sk_buff *skb, +struct sock *udp4_lib_lookup_skb(const struct sk_buff *skb, int nhoff, __be16 sport, __be16 dport); struct sock *udp6_lib_lookup(struct net *net, const struct 
in6_addr *saddr, __be16 sport, @@ -312,7 +312,7 @@ struct sock *__udp6_lib_lookup(struct net *net, const struct in6_addr *daddr, __be16 dport, int dif, int sdif, struct udp_table *tbl, struct sk_buff *skb); -struct sock *udp6_lib_lookup_skb(const struct sk_buff *skb, +struct sock *udp6_lib_lookup_skb(const struct sk_buff *skb, int nhoff, __be16 sport, __be16 dport); int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor); diff --git a/include/net/udp_tunnel.h b/include/net/udp_tunnel.h index d716214fe03df0a56266c22c7e8b42ba650e728b..a641392e70b0aa3d01826430640f3db6557d604b 100644 --- a/include/net/udp_tunnel.h +++ b/include/net/udp_tunnel.h @@ -75,7 +75,7 @@ typedef struct sk_buff *(*udp_tunnel_gro_receive_t)(struct sock *sk, struct list_head *head, struct sk_buff *skb); typedef int (*udp_tunnel_gro_complete_t)(struct sock *sk, struct sk_buff *skb, - int nhoff); + int nhoff, int thoff); struct udp_tunnel_sock_cfg { void *sk_user_data; /* user data used by encap_rcv call back */ diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c index f001582345052f8c26e008058ae5f721f8bc224d..247704cf70aff1279c62507a940581e5049175b9 100644 --- a/net/8021q/vlan_core.c +++ b/net/8021q/vlan_core.c @@ -510,7 +510,7 @@ static struct sk_buff *vlan_gro_receive(struct list_head *head, return pp; } -static int vlan_gro_complete(struct sk_buff *skb, int nhoff) +static int vlan_gro_complete(struct sk_buff *skb, int p_off, int nhoff) { struct vlan_hdr *vhdr = (struct vlan_hdr *)(skb->data + nhoff); __be16 type = vhdr->h_vlan_encapsulated_proto; @@ -521,7 +521,7 @@ static int vlan_gro_complete(struct sk_buff *skb, int nhoff) if (ptype) err = INDIRECT_CALL_INET(ptype->callbacks.gro_complete, ipv6_gro_complete, inet_gro_complete, - skb, nhoff + sizeof(*vhdr)); + skb, p_off, nhoff + sizeof(*vhdr)); return err; } diff --git a/net/core/gro.c b/net/core/gro.c index 0759277dc14ee65d0a5376d48694cc1cccaee959..07768055ecf25d426e7cd01551f32978a70e8379 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -254,7 +254,7 @@ static void napi_gro_complete(struct napi_struct *napi, struct sk_buff *skb) err = INDIRECT_CALL_INET(ptype->callbacks.gro_complete, ipv6_gro_complete, inet_gro_complete, - skb, 0); + skb, 0, 0); break; } rcu_read_unlock(); diff --git a/net/ethernet/eth.c b/net/ethernet/eth.c index 2edc8b796a4e7326aa44128a0618e15b9aa817de..7515e6bcbb7d1e62fa0af2fd477bdea0284bfe40 100644 --- a/net/ethernet/eth.c +++ b/net/ethernet/eth.c @@ -453,7 +453,7 @@ struct sk_buff *eth_gro_receive(struct list_head *head, struct sk_buff *skb) } EXPORT_SYMBOL(eth_gro_receive); -int eth_gro_complete(struct sk_buff *skb, int nhoff) +int eth_gro_complete(struct sk_buff *skb, int p_off, int nhoff) { struct ethhdr *eh = (struct ethhdr *)(skb->data + nhoff); __be16 type = eh->h_proto; @@ -467,7 +467,7 @@ int eth_gro_complete(struct sk_buff *skb, int nhoff) if (ptype != NULL) err = INDIRECT_CALL_INET(ptype->callbacks.gro_complete, ipv6_gro_complete, inet_gro_complete, - skb, nhoff + sizeof(*eh)); + skb, p_off, nhoff + sizeof(*eh)); return err; } diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index 5daebdcbca326aa1fc042e1e1ff1e82a18bd283d..bef9b222f3c90487c7d44015efb23c08168cf164 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -1640,7 +1640,7 @@ int inet_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len) } EXPORT_SYMBOL(inet_recv_error); -int inet_gro_complete(struct sk_buff *skb, int nhoff) +int inet_gro_complete(struct sk_buff *skb, int prior_off, int nhoff) { struct iphdr *iph = (struct iphdr 
*)(skb->data + nhoff); const struct net_offload *ops; @@ -1666,17 +1666,17 @@ int inet_gro_complete(struct sk_buff *skb, int nhoff) */ err = INDIRECT_CALL_2(ops->callbacks.gro_complete, tcp4_gro_complete, udp4_gro_complete, - skb, nhoff + sizeof(*iph)); + skb, nhoff, nhoff + sizeof(*iph)); out: return err; } -static int ipip_gro_complete(struct sk_buff *skb, int nhoff) +static int ipip_gro_complete(struct sk_buff *skb, int prior_off, int nhoff) { skb->encapsulation = 1; skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP4; - return inet_gro_complete(skb, nhoff); + return inet_gro_complete(skb, prior_off, nhoff); } int inet_ctl_sock_create(struct sock **sk, unsigned short family, diff --git a/net/ipv4/fou_core.c b/net/ipv4/fou_core.c index 0c41076e31edadd16f8e55ebc50f84db262a2f0d..ac4a6595d5cdf7e1e4b0eef417173d8d2d4ad76d 100644 --- a/net/ipv4/fou_core.c +++ b/net/ipv4/fou_core.c @@ -260,7 +260,7 @@ static struct sk_buff *fou_gro_receive(struct sock *sk, } static int fou_gro_complete(struct sock *sk, struct sk_buff *skb, - int nhoff) + int p_off, int nhoff) { const struct net_offload __rcu **offloads; u8 proto = fou_from_sock(sk)->protocol; @@ -272,7 +272,7 @@ static int fou_gro_complete(struct sock *sk, struct sk_buff *skb, if (WARN_ON(!ops || !ops->callbacks.gro_complete)) goto out; - err = ops->callbacks.gro_complete(skb, nhoff); + err = ops->callbacks.gro_complete(skb, p_off, nhoff); skb_set_inner_mac_header(skb, nhoff); @@ -445,7 +445,8 @@ static struct sk_buff *gue_gro_receive(struct sock *sk, return pp; } -static int gue_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff) +static int gue_gro_complete(struct sock *sk, struct sk_buff *skb, + int p_off, int nhoff) { struct guehdr *guehdr = (struct guehdr *)(skb->data + nhoff); const struct net_offload __rcu **offloads; @@ -480,7 +481,7 @@ static int gue_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff) if (WARN_ON(!ops || !ops->callbacks.gro_complete)) goto out; - err = ops->callbacks.gro_complete(skb, nhoff + guehlen); + err = ops->callbacks.gro_complete(skb, p_off, nhoff + guehlen); skb_set_inner_mac_header(skb, nhoff + guehlen); diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c index 311e70bfce407a2cadaa33fbef9a3976375711f4..803a8498f3030ec80a0ed41313e314021416a207 100644 --- a/net/ipv4/gre_offload.c +++ b/net/ipv4/gre_offload.c @@ -233,7 +233,7 @@ static struct sk_buff *gre_gro_receive(struct list_head *head, return pp; } -static int gre_gro_complete(struct sk_buff *skb, int nhoff) +static int gre_gro_complete(struct sk_buff *skb, int p_off, int nhoff) { struct gre_base_hdr *greh = (struct gre_base_hdr *)(skb->data + nhoff); struct packet_offload *ptype; @@ -253,7 +253,7 @@ static int gre_gro_complete(struct sk_buff *skb, int nhoff) ptype = gro_find_complete_by_type(type); if (ptype) - err = ptype->callbacks.gro_complete(skb, nhoff + grehlen); + err = ptype->callbacks.gro_complete(skb, p_off, nhoff + grehlen); skb_set_inner_mac_header(skb, nhoff + grehlen); diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c index 8311c38267b55ba97e59924c3c1c5b59f133fdcd..6f126f7d806d60e4d884c4f95b53b8e4bd9fbb8a 100644 --- a/net/ipv4/tcp_offload.c +++ b/net/ipv4/tcp_offload.c @@ -328,10 +328,10 @@ struct sk_buff *tcp4_gro_receive(struct list_head *head, struct sk_buff *skb) return tcp_gro_receive(head, skb); } -INDIRECT_CALLABLE_SCOPE int tcp4_gro_complete(struct sk_buff *skb, int thoff) +INDIRECT_CALLABLE_SCOPE int tcp4_gro_complete(struct sk_buff *skb, int nhoff, int thoff) { - const struct iphdr *iph = ip_hdr(skb); 
- struct tcphdr *th = tcp_hdr(skb); + const struct iphdr *iph = (const struct iphdr *)(skb->data + nhoff); + struct tcphdr *th = (struct tcphdr *)(skb->data + thoff); th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr, iph->daddr, 0); diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index a8acea17b4e5344d022ae8f8eb674d1a36f8035a..70a6d174855f76fb5dd289e3ef222a5158890e4f 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -532,9 +532,10 @@ static inline struct sock *__udp4_lib_lookup_skb(struct sk_buff *skb, } struct sock *udp4_lib_lookup_skb(const struct sk_buff *skb, + int nhoff, __be16 sport, __be16 dport) { - const struct iphdr *iph = ip_hdr(skb); + const struct iphdr *iph = (const struct iphdr *)(skb->data + nhoff); struct net *net = dev_net(skb->dev); int iif, sdif; diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c index 6c95d28d0c4a7e56d587a986113b3711f8de964c..fa82a5a08688086415bae1d425d034f12311b0ab 100644 --- a/net/ipv4/udp_offload.c +++ b/net/ipv4/udp_offload.c @@ -669,18 +669,18 @@ static int udp_gro_complete_segment(struct sk_buff *skb) return 0; } -int udp_gro_complete(struct sk_buff *skb, int nhoff, +int udp_gro_complete(struct sk_buff *skb, int nhoff, int thoff, udp_lookup_t lookup) { - __be16 newlen = htons(skb->len - nhoff); - struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); + struct udphdr *uh = (struct udphdr *)(skb->data + thoff); + __be16 newlen = htons(skb->len - thoff); struct sock *sk; int err; uh->len = newlen; sk = INDIRECT_CALL_INET(lookup, udp6_lib_lookup_skb, - udp4_lib_lookup_skb, skb, uh->source, uh->dest); + udp4_lib_lookup_skb, skb, nhoff, uh->source, uh->dest); if (sk && udp_sk(sk)->gro_complete) { skb_shinfo(skb)->gso_type = uh->check ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL; @@ -694,8 +694,8 @@ int udp_gro_complete(struct sk_buff *skb, int nhoff, * functions to make them set up the inner offsets. 
*/ skb->encapsulation = 1; - err = udp_sk(sk)->gro_complete(sk, skb, - nhoff + sizeof(struct udphdr)); + err = udp_sk(sk)->gro_complete(sk, skb, nhoff, + thoff + sizeof(struct udphdr)); } else { err = udp_gro_complete_segment(skb); } @@ -707,14 +707,14 @@ int udp_gro_complete(struct sk_buff *skb, int nhoff, } EXPORT_SYMBOL(udp_gro_complete); -INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff) +INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff, int thoff) { - const struct iphdr *iph = ip_hdr(skb); - struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); + const struct iphdr *iph = (const struct iphdr *)(skb->data + nhoff); + struct udphdr *uh = (struct udphdr *)(skb->data + thoff); /* do fraglist only if there is no outer UDP encap (or we already processed it) */ if (NAPI_GRO_CB(skb)->is_flist && !NAPI_GRO_CB(skb)->encap_mark) { - uh->len = htons(skb->len - nhoff); + uh->len = htons(skb->len - thoff); skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4); skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count; @@ -731,10 +731,10 @@ INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff) } if (uh->check) - uh->check = ~udp_v4_check(skb->len - nhoff, iph->saddr, + uh->check = ~udp_v4_check(skb->len - thoff, iph->saddr, iph->daddr, 0); - return udp_gro_complete(skb, nhoff, udp4_lib_lookup_skb); + return udp_gro_complete(skb, nhoff, thoff, udp4_lib_lookup_skb); } static const struct net_offload udpv4_offload = { diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c index cca64c7809bee9a0360cbfab6a645d3f8d2ffea3..e3a05b84c76ab9d55afafa94a53130b3b0d6c7f7 100644 --- a/net/ipv6/ip6_offload.c +++ b/net/ipv6/ip6_offload.c @@ -346,12 +346,14 @@ static struct sk_buff *ip4ip6_gro_receive(struct list_head *head, return inet_gro_receive(head, skb); } -INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff) +INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, + int p_off, int nhoff) { const struct net_offload *ops; struct ipv6hdr *iph; int err = -ENOSYS; u32 payload_len; + int nhlen; if (skb->encapsulation) { skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IPV6)); @@ -387,36 +389,36 @@ INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff) iph->payload_len = htons(payload_len); } - nhoff += sizeof(*iph) + ipv6_exthdrs_len(iph, &ops); - if (WARN_ON(!ops || !ops->callbacks.gro_complete)) + nhlen = sizeof(*iph) + ipv6_exthdrs_len(iph, &ops); + if (WARN_ON_ONCE(!ops || !ops->callbacks.gro_complete)) goto out; err = INDIRECT_CALL_L4(ops->callbacks.gro_complete, tcp6_gro_complete, - udp6_gro_complete, skb, nhoff); + udp6_gro_complete, skb, nhoff, nhoff + nhlen); out: return err; } -static int sit_gro_complete(struct sk_buff *skb, int nhoff) +static int sit_gro_complete(struct sk_buff *skb, int p_off, int nhoff) { skb->encapsulation = 1; skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP4; - return ipv6_gro_complete(skb, nhoff); + return ipv6_gro_complete(skb, p_off, nhoff); } -static int ip6ip6_gro_complete(struct sk_buff *skb, int nhoff) +static int ip6ip6_gro_complete(struct sk_buff *skb, int p_off, int nhoff) { skb->encapsulation = 1; skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP6; - return ipv6_gro_complete(skb, nhoff); + return ipv6_gro_complete(skb, p_off, nhoff); } -static int ip4ip6_gro_complete(struct sk_buff *skb, int nhoff) +static int ip4ip6_gro_complete(struct sk_buff *skb, int p_off, int nhoff) { skb->encapsulation = 1; skb_shinfo(skb)->gso_type |= 
SKB_GSO_IPXIP6; - return inet_gro_complete(skb, nhoff); + return inet_gro_complete(skb, p_off, nhoff); } static struct packet_offload ipv6_packet_offload __read_mostly = { diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c index bf0c957e4b5eaaabc0ac3a7e55c7de6608cec156..5043d2ff34eb8a7f9aeb7db6b4347a0f976b8502 100644 --- a/net/ipv6/tcpv6_offload.c +++ b/net/ipv6/tcpv6_offload.c @@ -27,10 +27,11 @@ struct sk_buff *tcp6_gro_receive(struct list_head *head, struct sk_buff *skb) return tcp_gro_receive(head, skb); } -INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int thoff) +INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, + int nhoff, int thoff) { - const struct ipv6hdr *iph = ipv6_hdr(skb); - struct tcphdr *th = tcp_hdr(skb); + const struct ipv6hdr *iph = (const struct ipv6hdr *)(skb->data + nhoff); + struct tcphdr *th = (struct tcphdr *)(skb->data + thoff); th->check = ~tcp_v6_check(skb->len - thoff, &iph->saddr, &iph->daddr, 0); diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index 3f2249b4cd5f6a594dd9768e29f20f0d9a57faed..400243c89d8234ff98b3991c049e83237fa6686f 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -273,9 +273,10 @@ static struct sock *__udp6_lib_lookup_skb(struct sk_buff *skb, } struct sock *udp6_lib_lookup_skb(const struct sk_buff *skb, + int nhoff, __be16 sport, __be16 dport) { - const struct ipv6hdr *iph = ipv6_hdr(skb); + const struct ipv6hdr *iph = (const struct ipv6hdr *)(skb->data + nhoff); struct net *net = dev_net(skb->dev); int iif, sdif; diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c index 6b95ba241ebe2af7e5f2760d8a9c1d78f08579c5..7bf0fb609451586ab9fb1de955d1ae2ea2dbee64 100644 --- a/net/ipv6/udp_offload.c +++ b/net/ipv6/udp_offload.c @@ -162,14 +162,14 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb) return NULL; } -INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff) +INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff, int thoff) { - const struct ipv6hdr *ipv6h = ipv6_hdr(skb); - struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); + const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)(skb->data + nhoff); + struct udphdr *uh = (struct udphdr *)(skb->data + thoff); /* do fraglist only if there is no outer UDP encap (or we already processed it) */ if (NAPI_GRO_CB(skb)->is_flist && !NAPI_GRO_CB(skb)->encap_mark) { - uh->len = htons(skb->len - nhoff); + uh->len = htons(skb->len - thoff); skb_shinfo(skb)->gso_type |= (SKB_GSO_FRAGLIST|SKB_GSO_UDP_L4); skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count; @@ -186,10 +186,10 @@ INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff) } if (uh->check) - uh->check = ~udp_v6_check(skb->len - nhoff, &ipv6h->saddr, + uh->check = ~udp_v6_check(skb->len - thoff, &ipv6h->saddr, &ipv6h->daddr, 0); - return udp_gro_complete(skb, nhoff, udp6_lib_lookup_skb); + return udp_gro_complete(skb, nhoff, thoff, udp6_lib_lookup_skb); } static const struct net_offload udpv6_offload = {
Eric Dumazet wrote:
> On Thu, Feb 29, 2024 at 2:22 PM Richard Gobert <richardbgobert@gmail.com> wrote:
>>
>>
>>
>> Eric Dumazet wrote:
>>>
>>> My intuition is that this patch has a high cost for normal GRO processing.
>>> SW-GRO is already a bottleneck on ARM cores in smart NICS.
>>>
>>> I would suggest instead using parameters to give both the nhoff and thoff values
>>> this would avoid many conditionals in the fast path.
>>>
>>> ->
>>>
>>> INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int
>>> nhoff, int thoff)
>>> {
>>> const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)(skb->data + nhoff);
>>> struct udphdr *uh = (struct udphdr *)(skb->data + thoff);
>>> ...
>>> }
>>>
>>> INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int
>>> nhoff, int thoff)
>>> {
>>> const struct ipv6hdr *iph = (const struct ipv6hdr *)(skb->data + nhoff);
>>> struct tcphdr *th = (struct tcphdr *)(skb->data + thoff);
>>>
>>> Why storing in skb fields things that really could be propagated more
>>> efficiently as function parameters ?
>>
>> Hi Eric,
>> Thanks for the review!
>>
>> I agree, the conditionals could be a problem and are actually not needed.
>> The third commit in this patch series introduces an optimisation for
>> ipv6/ipv4 using the correct {inner_}network_header. We can remove the
>> conditionals; I thought about multiple ways to do so. First, remove the
>> conditional in skb_gro_network_offset:
>>
>> static inline int skb_gro_network_offset(const struct sk_buff *skb)
>> {
>> const u32 mask = NAPI_GRO_CB(skb)->encap_mark - 1;
>> return (skb_network_offset(skb) & mask) | (skb_inner_network_offset(skb) & ~mask);
>> }
>
> I was trying to say that we do not need all these helpers, storing
> state in NAPI_GRO_CB(skb),
> dirtying cache lines...
>
> Ideally, the skb network/transport/... headers could be set at the
> last stage, in gro_complete(big_gro_skb),
> instead of doing this for each segment.
>
> All the gro_receive() could be much faster by using additional
> parameters (nhoff, thoff)
>
> skb_gro_offset() could be replaced by the current offset (nhoff or
> other name), passed as a parameter.
>
> Here is a WIP for gro_complete() step, this looks large but this is
> only adding a 2nd 'offset' parameter
>
> Prior offset (typically network offset), called p_off
> Old argument nhoff, (renamed thoff if that makes sense), pointing to
> the current offset.
>

You're right, it seemed to me like a broad change but it is mainly
cosmetic. I'll finish your version and submit it to fix the bug.

I still believe that setting inner_network_header is a valuable change.
For example, although skb_gro_network_offset is used - setting it in
encapsulation protocol functions (such as ipip_gro_receive) allows us to
remove conditionals from the {ipv6,inet}_gro_receive gro_list loop and
remove flush_id from napi_gro_cb, as written in the 3rd commit. What are
your thoughts about it as a separate patch?
diff --git a/include/net/gro.h b/include/net/gro.h
index b435f0ddbf64..89502a7e35ed 100644
--- a/include/net/gro.h
+++ b/include/net/gro.h
@@ -177,10 +177,22 @@ static inline void *skb_gro_header(struct sk_buff *skb,
 	return ptr;
 }
 
+static inline int skb_gro_network_offset(struct sk_buff *skb)
+{
+	return NAPI_GRO_CB(skb)->encap_mark ? skb_inner_network_offset(skb) :
+					      skb_network_offset(skb);
+}
+
 static inline void *skb_gro_network_header(struct sk_buff *skb)
 {
 	return (NAPI_GRO_CB(skb)->frag0 ?: skb->data) +
-	       skb_network_offset(skb);
+	       skb_gro_network_offset(skb);
+}
+
+static inline void *skb_gro_complete_network_header(struct sk_buff *skb)
+{
+	return skb->encapsulation ? skb_inner_network_header(skb) :
+				    skb_network_header(skb);
 }
 
 static inline __wsum inet_gro_compute_pseudo(struct sk_buff *skb, int proto)
diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
index f00158234505..8bc871397e47 100644
--- a/net/8021q/vlan_core.c
+++ b/net/8021q/vlan_core.c
@@ -478,6 +478,9 @@ static struct sk_buff *vlan_gro_receive(struct list_head *head,
 	if (unlikely(!vhdr))
 		goto out;
 
+	if (!NAPI_GRO_CB(skb)->encap_mark)
+		skb_set_network_header(skb, hlen);
+
 	type = vhdr->h_vlan_encapsulated_proto;
 
 	ptype = gro_find_receive_by_type(type);
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 835f4f9d98d2..c0f3c162bf73 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1564,7 +1564,9 @@ struct sk_buff *inet_gro_receive(struct list_head *head, struct sk_buff *skb)
 
 	NAPI_GRO_CB(skb)->is_atomic = !!(iph->frag_off & htons(IP_DF));
 	NAPI_GRO_CB(skb)->flush |= flush;
-	skb_set_network_header(skb, off);
+	if (NAPI_GRO_CB(skb)->encap_mark)
+		skb_set_inner_network_header(skb, off);
+
 	/* The above will be needed by the transport layer if there is one
 	 * immediately following this IP hdr.
 	 */
@@ -1643,10 +1645,8 @@ int inet_gro_complete(struct sk_buff *skb, int nhoff)
 	int proto = iph->protocol;
 	int err = -ENOSYS;
 
-	if (skb->encapsulation) {
+	if (skb->encapsulation)
 		skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IP));
-		skb_set_inner_network_header(skb, nhoff);
-	}
 
 	iph_set_totlen(iph, skb->len - nhoff);
 	csum_replace2(&iph->check, totlen, iph->tot_len);
diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
index 8311c38267b5..8bbcd3f502ac 100644
--- a/net/ipv4/tcp_offload.c
+++ b/net/ipv4/tcp_offload.c
@@ -330,7 +330,7 @@ struct sk_buff *tcp4_gro_receive(struct list_head *head, struct sk_buff *skb)
 
 INDIRECT_CALLABLE_SCOPE int tcp4_gro_complete(struct sk_buff *skb, int thoff)
 {
-	const struct iphdr *iph = ip_hdr(skb);
+	const struct iphdr *iph = skb_gro_complete_network_header(skb);
 	struct tcphdr *th = tcp_hdr(skb);
 
 	th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr,
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 6c95d28d0c4a..7f59cede67f5 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -709,7 +709,7 @@ EXPORT_SYMBOL(udp_gro_complete);
 
 INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff)
 {
-	const struct iphdr *iph = ip_hdr(skb);
+	const struct iphdr *iph = skb_gro_complete_network_header(skb);
 	struct udphdr *uh = (struct udphdr *)(skb->data + nhoff);
 
 	/* do fraglist only if there is no outer UDP encap (or we already processed it) */
diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
index cca64c7809be..db7e3db587b9 100644
--- a/net/ipv6/ip6_offload.c
+++ b/net/ipv6/ip6_offload.c
@@ -67,7 +67,7 @@ static int ipv6_gro_pull_exthdrs(struct sk_buff *skb, int off, int proto)
 		off += len;
 	}
 
-	skb_gro_pull(skb, off - skb_network_offset(skb));
+	skb_gro_pull(skb, off - skb_gro_network_offset(skb));
 	return proto;
 }
 
@@ -236,7 +236,8 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
 	if (unlikely(!iph))
 		goto out;
 
-	skb_set_network_header(skb, off);
+	if (NAPI_GRO_CB(skb)->encap_mark)
+		skb_set_inner_network_header(skb, off);
 
 	flush += ntohs(iph->payload_len) != skb->len - hlen;
 
@@ -259,7 +260,7 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
 	NAPI_GRO_CB(skb)->proto = proto;
 
 	flush--;
-	nlen = skb_network_header_len(skb);
+	nlen = skb_gro_offset(skb) - off;
 
 	list_for_each_entry(p, head, list) {
 		const struct ipv6hdr *iph2;
@@ -353,10 +354,8 @@ INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff)
 	int err = -ENOSYS;
 	u32 payload_len;
 
-	if (skb->encapsulation) {
+	if (skb->encapsulation)
 		skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IPV6));
-		skb_set_inner_network_header(skb, nhoff);
-	}
 
 	payload_len = skb->len - nhoff - sizeof(*iph);
 	if (unlikely(payload_len > IPV6_MAXPLEN)) {
diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
index bf0c957e4b5e..79eeaced2834 100644
--- a/net/ipv6/tcpv6_offload.c
+++ b/net/ipv6/tcpv6_offload.c
@@ -29,7 +29,7 @@ struct sk_buff *tcp6_gro_receive(struct list_head *head, struct sk_buff *skb)
 
 INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int thoff)
 {
-	const struct ipv6hdr *iph = ipv6_hdr(skb);
+	const struct ipv6hdr *iph = skb_gro_complete_network_header(skb);
 	struct tcphdr *th = tcp_hdr(skb);
 
 	th->check = ~tcp_v6_check(skb->len - thoff, &iph->saddr,
diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
index 6b95ba241ebe..897caa2e39fb 100644
--- a/net/ipv6/udp_offload.c
+++ b/net/ipv6/udp_offload.c
@@ -164,7 +164,7 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb)
 
 INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff)
 {
-	const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+	const struct ipv6hdr *ipv6h = skb_gro_complete_network_header(skb);
 	struct udphdr *uh = (struct udphdr *)(skb->data + nhoff);
 
 	/* do fraglist only if there is no outer UDP encap (or we already processed it) */
Commits a602456 ("udp: Add GRO functions to UDP socket") and 57c67ff ("udp: additional GRO support") introduce incorrect usage of {ip,ipv6}_hdr in the complete phase of gro. The functions always return skb->network_header, which in the case of encapsulated packets at the gro complete phase, is always set to the innermost L3 of the packet. That means that calling {ip,ipv6}_hdr for skbs which completed the GRO receive phase (both in gro_list and *_gro_complete) when parsing an encapsulated packet's _outer_ L3/L4 may return an unexpected value. This incorrect usage leads to a bug in GRO's UDP socket lookup. udp{4,6}_lib_lookup_skb functions use ip_hdr/ipv6_hdr respectively. These *_hdr functions return network_header which will point to the innermost L3, resulting in the wrong offset being used in __udp{4,6}_lib_lookup with encapsulated packets. Reproduction example: Endpoint configuration example (fou + local address bind) # ip fou add port 6666 ipproto 4 # ip link add name tun1 type ipip remote 2.2.2.1 local 2.2.2.2 encap fou encap-dport 5555 encap-sport 6666 mode ipip # ip link set tun1 up # ip a add 1.1.1.2/24 dev tun1 Netperf TCP_STREAM result on net-next before patch is applied: net-next main, GRO enabled: $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5 Recv Send Send Socket Socket Message Elapsed Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec 131072 16384 16384 5.28 2.37 net-next main, GRO disabled: $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5 Recv Send Send Socket Socket Message Elapsed Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec 131072 16384 16384 5.01 2745.06 patch applied, GRO enabled: $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5 Recv Send Send Socket Socket Message Elapsed Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec 131072 16384 16384 5.01 2877.38 This patch fixes this bug and prevents similar future misuse of network_header by setting network_header and inner_network_header to their respective values during the receive phase of GRO. This results in more coherent {inner_,}network_header values for every skb in gro_list, which also means there's no need to set/fix these values before passing the packet forward. network_header is already set in dev_gro_receive and under encapsulation we set inner_network_header. *_gro_complete functions use a new helper function - skb_gro_complete_network_header, which returns the network_header/inner_network_header offset during the GRO complete phase, depending on skb->encapsulation. Fixes: 57c67ff4bd92 ("udp: additional GRO support") Signed-off-by: Richard Gobert <richardbgobert@gmail.com> --- include/net/gro.h | 14 +++++++++++++- net/8021q/vlan_core.c | 3 +++ net/ipv4/af_inet.c | 8 ++++---- net/ipv4/tcp_offload.c | 2 +- net/ipv4/udp_offload.c | 2 +- net/ipv6/ip6_offload.c | 11 +++++------ net/ipv6/tcpv6_offload.c | 2 +- net/ipv6/udp_offload.c | 2 +- 8 files changed, 29 insertions(+), 15 deletions(-)
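To make the failure mode concrete: ip_hdr() resolves the header through the skb's stored network_header offset, so once GRO receive has pointed that offset at the innermost L3 of an encapsulated packet, the outer UDP socket lookup ends up reading the inner addresses. The following self-contained userspace model of that bookkeeping is purely illustrative (not kernel code, all names invented):

/* toy_offsets.c - userspace model of the network_header mix-up; not kernel code */
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct toy_iphdr {
	uint32_t saddr;
	uint32_t daddr;
};

struct toy_skb {
	alignas(uint32_t) uint8_t data[64];
	uint16_t network_header;	/* offset of the L3 header in data[] */
	uint16_t inner_network_header;
};

/* mirrors ip_hdr(): the header is found via the stored offset */
static const struct toy_iphdr *toy_ip_hdr(const struct toy_skb *skb)
{
	return (const struct toy_iphdr *)(skb->data + skb->network_header);
}

int main(void)
{
	/* frame layout after the MAC header: [outer IPv4][8-byte UDP][inner IPv4] */
	struct toy_iphdr outer = { .saddr = 0xaa, .daddr = 0xbb }; /* stands in for 2.2.2.x */
	struct toy_iphdr inner = { .saddr = 0x11, .daddr = 0x22 }; /* stands in for 1.1.1.x */
	struct toy_skb skb = { .network_header = 0,
			       .inner_network_header = sizeof(outer) + 8 };

	memcpy(skb.data, &outer, sizeof(outer));
	memcpy(skb.data + skb.inner_network_header, &inner, sizeof(inner));

	/* what GRO receive effectively does today for encapsulated packets:
	 * network_header ends up pointing at the innermost L3
	 */
	skb.network_header = skb.inner_network_header;

	/* the complete-phase UDP socket lookup wants the *outer* addresses,
	 * but going through toy_ip_hdr() it reads the inner ones
	 */
	printf("lookup uses saddr %#x, outer tunnel saddr is %#x\n",
	       (unsigned int)toy_ip_hdr(&skb)->saddr, (unsigned int)outer.saddr);
	return 0;
}

With the patch, the inner offset is kept in inner_network_header during GRO receive and the complete phase selects between the two via skb->encapsulation, so the outer lookup sees the outer header again.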