From: Kuniyuki Iwashima <kuniyu@amazon.com>
Date: Mon, 26 Jun 2023 14:08:46 -0700

From: Lorenz Bauer <lmb@isovalent.com>
Date: Mon, 26 Jun 2023 16:09:03 +0100
Currently the bpf_sk_assign helper in tc BPF context refuses SO_REUSEPORT sockets. This means we can't use the helper to steer traffic to Envoy, which configures SO_REUSEPORT on its sockets. In turn, we're blocked from removing TPROXY from our setup.
The reason that bpf_sk_assign refuses such sockets is that the bpf_sk_lookup helpers don't execute SK_REUSEPORT programs. Instead, one of the reuseport sockets is selected by hash. This could cause dispatch to the "wrong" socket:
    sk = bpf_sk_lookup_tcp(...) // select SO_REUSEPORT by hash
    bpf_sk_assign(skb, sk)      // SK_REUSEPORT wasn't executed
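Spelled out a little more, the tc-side pattern looks roughly like the following. This is an illustrative sketch only: the program name, section name and the omitted tuple parsing are assumptions, not taken from any real deployment.

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    /* Illustrative tc (SCHED_CLS) program that steers a flow to a local
     * listener. With SO_REUSEPORT listeners, the socket returned by
     * bpf_sk_lookup_tcp() is picked by hash and no SK_REUSEPORT program
     * runs, so bpf_sk_assign() may pin the flow to the "wrong" socket.
     */
    SEC("tc")
    int steer(struct __sk_buff *skb)
    {
            struct bpf_sock_tuple tuple = {};
            struct bpf_sock *sk;

            /* fill tuple.ipv4 (or tuple.ipv6) from the packet headers here */

            sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
                                   BPF_F_CURRENT_NETNS, 0);
            if (!sk)
                    return TC_ACT_OK;

            bpf_sk_assign(skb, sk, 0);      /* SK_REUSEPORT wasn't executed */
            bpf_sk_release(sk);
            return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";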
Unfortunately, fixing this isn't as simple as invoking SK_REUSEPORT from the lookup helpers. In the tc context, L2 headers are at the start of the skb, while SK_REUSEPORT expects L3 headers instead.
Instead, we execute the SK_REUSEPORT program when the assigned socket is pulled out of the skb, further up the stack. This creates some trickiness with regard to refcounting, as bpf_sk_assign will put both refcounted and RCU-freed sockets in skb->sk. Reuseport sockets are RCU freed. We can infer that the sk_assign'ed socket is RCU freed if the reuseport lookup succeeds, but convincing yourself of this fact isn't straightforward. Therefore we defensively check refcounting on the sk_assign sock even though it's probably not required in practice.
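Concretely, on the receive side the UDPv6 demux might consume the assigned socket roughly as follows. This is a simplified sketch, not the actual net/ipv6/udp.c hunk from this series; the wrapper name udp6_deliver_stolen_sk is made up for illustration.

    /* Illustrative only: how __udp6_lib_rcv() might use the new helper.
     * inet6_steal_sock() runs the SK_REUSEPORT program for a socket that was
     * bpf_sk_assign()ed and reports via 'refcounted' whether the caller now
     * holds a reference it must drop.
     */
    static int udp6_deliver_stolen_sk(struct net *net, struct sk_buff *skb,
                                      struct udphdr *uh)
    {
            bool refcounted;
            struct sock *sk;
            int ret;

            sk = inet6_steal_sock(net, skb, sizeof(struct udphdr),
                                  &ipv6_hdr(skb)->saddr, uh->source,
                                  &ipv6_hdr(skb)->daddr, uh->dest,
                                  &refcounted, udp6_ehashfn);
            if (!sk)
                    return -ENOENT;         /* fall back to the normal lookup */

            ret = udp6_unicast_rcv_skb(sk, skb, uh);
            if (refcounted)
                    sock_put(sk);
            return ret;
    }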
Fixes: 8e368dc72e86 ("bpf: Fix use of sk->sk_reuseport from sk_assign")
Fixes: cf7fbe660f2d ("bpf: Add socket assign support")
Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Lorenz Bauer <lmb@isovalent.com>
Cc: Joe Stringer <joe@cilium.io>
Link: https://lore.kernel.org/bpf/CACAyw98+qycmpQzKupquhkxbvWK4OFyDuuLMBNROnfWMZxU...
 include/net/inet6_hashtables.h | 59 ++++++++++++++++++++++++++++++++++++++----
 include/net/inet_hashtables.h  | 52 +++++++++++++++++++++++++++++++++++--
 include/net/sock.h             |  7 +++--
 include/uapi/linux/bpf.h       |  3 ---
 net/core/filter.c              |  2 --
 net/ipv4/udp.c                 |  8 ++++--
 net/ipv6/udp.c                 | 10 ++++---
 tools/include/uapi/linux/bpf.h |  3 ---
 8 files changed, 122 insertions(+), 22 deletions(-)
diff --git a/include/net/inet6_hashtables.h b/include/net/inet6_hashtables.h
index 4d2a1a3c0be7..4d300af6ccb6 100644
--- a/include/net/inet6_hashtables.h
+++ b/include/net/inet6_hashtables.h
@@ -103,6 +103,49 @@ static inline struct sock *__inet6_lookup(struct net *net,
                                   daddr, hnum, dif, sdif);
 }
 
+static inline
+struct sock *inet6_steal_sock(struct net *net, struct sk_buff *skb, int doff,
+                              const struct in6_addr *saddr, const __be16 sport,
+                              const struct in6_addr *daddr, const __be16 dport,
+                              bool *refcounted, inet6_ehashfn_t ehashfn)
+{
+        struct sock *sk, *reuse_sk;
+        bool prefetched;
+
+        sk = skb_steal_sock(skb, refcounted, &prefetched);
+        if (!sk)
+                return NULL;
+
+        if (!prefetched)
+                return sk;
+
+        if (sk->sk_protocol == IPPROTO_TCP) {
+                if (sk->sk_state != TCP_LISTEN)
+                        return sk;
+        } else if (sk->sk_protocol == IPPROTO_UDP) {
+                if (sk->sk_state != TCP_CLOSE)
+                        return sk;
+        } else {
+                return sk;
+        }
+
+        reuse_sk = inet6_lookup_reuseport(net, sk, skb, doff,
+                                          saddr, sport, daddr, ntohs(dport),
+                                          ehashfn);
+        if (!reuse_sk || reuse_sk == sk)
+                return sk;
+
+        /* We've chosen a new reuseport sock which is never refcounted.
+         * sk might be refcounted however, drop the reference if necessary.
+         */
+        if (*refcounted) {
+                sock_put(sk);
+                *refcounted = false;
+        }
As *refcounted should be false here (TCP_LISTEN and UDP sockets have SOCK_RCU_FREE set, and other sockets do not reach this point), I'd prefer adding WARN_ON_ONCE() to catch a future bug:
        WARN_ON_ONCE(*refcounted);
        sock_put(sk);
Sorry, the sock_put(sk) is not needed here: if *refcounted is really always false at this point, there is no reference to drop.
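IOW, the tail of inet6_steal_sock() could simply become something like this (untested sketch; the trailing return is inferred from context since the quoted hunk stops short):

        /* We've chosen a new reuseport sock which is never refcounted. This
         * implies that sk isn't refcounted either.
         */
        WARN_ON_ONCE(*refcounted);

        return reuse_sk;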