6.11-stable review patch. If anyone has any objections, please let me know.
------------------
From: Felix Fietkau <nbd@nbd.name>
commit 17bd3bd82f9f79f3feba15476c2b2c95a9b11ff8 upstream.
Detect tcp gso fraglist skbs with corrupted geometry (see below) and pass these to skb_segment instead of skb_segment_list, as the former can segment them correctly.
Valid SKB_GSO_FRAGLIST skbs
- consist of two or more segments
- the head_skb holds the protocol headers plus first gso_size
- one or more frag_list skbs hold exactly one segment
- all but the last must be gso_size
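For illustration only (this helper is not part of the patch), the invariants listed above could be checked along these lines; fraglist_geometry_valid() and its hlen parameter (the protocol header length carried in the head skb) are made-up names:

/* Illustrative sketch, not upstream code: validate the fraglist
 * geometry described above.  hlen is the length of the protocol
 * headers in the head skb.
 */
static bool fraglist_geometry_valid(const struct sk_buff *skb,
				    unsigned int hlen)
{
	unsigned int gso_size = skb_shinfo(skb)->gso_size;
	const struct sk_buff *seg = skb_shinfo(skb)->frag_list;

	/* two or more segments: at least one frag_list skb */
	if (!seg)
		return false;

	/* head skb holds the headers plus exactly one gso_size of payload
	 * (skb_pagelen() covers linear data and page frags, not frag_list)
	 */
	if (skb_pagelen(skb) - hlen != gso_size)
		return false;

	/* each frag_list skb holds one segment; all but the last must be
	 * exactly gso_size long
	 */
	for (; seg; seg = seg->next)
		if (seg->next && seg->len != gso_size)
			return false;

	return true;
}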
Optional datapath hooks such as NAT and BPF (bpf_skb_pull_data) can modify these skbs, breaking these invariants.
In extreme cases they pull all data into skb linear. For TCP, this causes a NULL ptr deref in __tcpv4_gso_segment_list_csum at tcp_hdr(seg->next).
Detect invalid geometry due to pull, by checking head_skb size. Don't just drop, as this may blackhole a destination. Convert to be able to pass to regular skb_segment.
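That check is what the diff below adds in tcp4_gso_segment() and tcp6_gso_segment(). The IPv4 branch is reproduced here with explanatory comments; the statements come from the patch, the comments are mine:

	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
		struct tcphdr *th = tcp_hdr(skb);

		/* th->doff * 4 is the TCP header length in bytes.  If the
		 * payload held directly in the head skb (linear + page
		 * frags, excluding frag_list) is exactly one gso_size, the
		 * geometry is intact: take the fraglist fast path.
		 */
		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
			return __tcp4_gso_segment_list(skb, features);

		/* Geometry was broken, e.g. by a pull into the linear
		 * area: fall through to regular segmentation.  Resetting
		 * ip_summed makes the !CHECKSUM_PARTIAL branch below (see
		 * the context lines in the diff) set the checksum state up
		 * again before skb_segment runs.
		 */
		skb->ip_summed = CHECKSUM_NONE;
	}

The IPv6 hunk is identical except that it calls __tcp6_gso_segment_list().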
Approach and description based on a patch by Willem de Bruijn.
Link: https://lore.kernel.org/netdev/20240428142913.18666-1-shiming.cheng@mediatek...
Link: https://lore.kernel.org/netdev/20240922150450.3873767-1-willemdebruijn.kerne...
Fixes: bee88cd5bd83 ("net: add support for segmenting TCP fraglist GSO packets")
Cc: stable@vger.kernel.org
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20240926085315.51524-1-nbd@nbd.name
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/ipv4/tcp_offload.c   | 10 ++++++++--
 net/ipv6/tcpv6_offload.c | 10 ++++++++--
 2 files changed, 16 insertions(+), 4 deletions(-)
--- a/net/ipv4/tcp_offload.c
+++ b/net/ipv4/tcp_offload.c
@@ -101,8 +101,14 @@ static struct sk_buff *tcp4_gso_segment(
 	if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
 		return ERR_PTR(-EINVAL);
 
-	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __tcp4_gso_segment_list(skb, features);
+	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
+		struct tcphdr *th = tcp_hdr(skb);
+
+		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
+			return __tcp4_gso_segment_list(skb, features);
+
+		skb->ip_summed = CHECKSUM_NONE;
+	}
 
 	if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
 		const struct iphdr *iph = ip_hdr(skb);
--- a/net/ipv6/tcpv6_offload.c
+++ b/net/ipv6/tcpv6_offload.c
@@ -159,8 +159,14 @@ static struct sk_buff *tcp6_gso_segment(
 	if (!pskb_may_pull(skb, sizeof(*th)))
 		return ERR_PTR(-EINVAL);
 
-	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __tcp6_gso_segment_list(skb, features);
+	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
+		struct tcphdr *th = tcp_hdr(skb);
+
+		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
+			return __tcp6_gso_segment_list(skb, features);
+
+		skb->ip_summed = CHECKSUM_NONE;
+	}
 
 	if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
 		const struct ipv6hdr *ipv6h = ipv6_hdr(skb);