This is a note to let you know that I've just added the patch titled
tuntap: disable preemption during XDP processing
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tuntap-disable-preemption-during-xdp-processing.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Jason Wang <jasowang(a)redhat.com>
Date: Sat, 24 Feb 2018 11:32:25 +0800
Subject: tuntap: disable preemption during XDP processing
From: Jason Wang <jasowang(a)redhat.com>
[ Upstream commit 23e43f07f896f8578318cfcc9466f1e8b8ab21b6 ]
Except for tuntap, all other drivers implement XDP in the NAPI
poll() routine, i.e. in a bh. This guarantees that all XDP operations
are done on the same CPU, which is required by e.g.
BPF_MAP_TYPE_PERCPU_ARRAY. But tuntap does it in process context and
only protects XDP processing with the RCU reader lock. This is
insufficient since CONFIG_PREEMPT_RCU can preempt the RCU read-side
critical section, which breaks the assumption that all XDP processing
happens on the same CPU.
Fix this by simply disabling preemption during XDP processing.
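Condensed from the hunks below, the protected section in tun_build_skb()
ends up following this pattern (a sketch only, with the XDP verdict
handling elided; see the diff for the exact placement):

	preempt_disable();
	rcu_read_lock();

	xdp_prog = rcu_dereference(tun->xdp_prog);
	if (xdp_prog && !*skb_xdp) {
		/* From here on we stay on one CPU, so per-CPU state such
		 * as BPF_MAP_TYPE_PERCPU_ARRAY behaves as on the NAPI path.
		 */
		act = bpf_prog_run_xdp(xdp_prog, &xdp);
		/* ... handle XDP_REDIRECT / XDP_TX / XDP_PASS / drops ... */
	}

	rcu_read_unlock();
	preempt_enable();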
Fixes: 761876c857cb ("tap: XDP support")
Signed-off-by: Jason Wang <jasowang(a)redhat.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/net/tun.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1471,6 +1471,7 @@ static struct sk_buff *tun_build_skb(str
else
*skb_xdp = 0;
+ preempt_disable();
rcu_read_lock();
xdp_prog = rcu_dereference(tun->xdp_prog);
if (xdp_prog && !*skb_xdp) {
@@ -1494,6 +1495,7 @@ static struct sk_buff *tun_build_skb(str
if (err)
goto err_redirect;
rcu_read_unlock();
+ preempt_enable();
return NULL;
case XDP_TX:
xdp_xmit = true;
@@ -1515,6 +1517,7 @@ static struct sk_buff *tun_build_skb(str
skb = build_skb(buf, buflen);
if (!skb) {
rcu_read_unlock();
+ preempt_enable();
return ERR_PTR(-ENOMEM);
}
@@ -1527,10 +1530,12 @@ static struct sk_buff *tun_build_skb(str
skb->dev = tun->dev;
generic_xdp_tx(skb, xdp_prog);
rcu_read_unlock();
+ preempt_enable();
return NULL;
}
rcu_read_unlock();
+ preempt_enable();
return skb;
@@ -1538,6 +1543,7 @@ err_redirect:
put_page(alloc_frag->page);
err_xdp:
rcu_read_unlock();
+ preempt_enable();
this_cpu_inc(tun->pcpu_stats->rx_dropped);
return NULL;
}
Patches currently in stable-queue which might be from jasowang(a)redhat.com are
queue-4.15/tuntap-correctly-add-the-missing-xdp-flush.patch
queue-4.15/virtio-net-disable-napi-only-when-enabled-during-xdp-set.patch
queue-4.15/tuntap-disable-preemption-during-xdp-processing.patch
This is a note to let you know that I've just added the patch titled
tuntap: correctly add the missing XDP flush
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tuntap-correctly-add-the-missing-xdp-flush.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Jason Wang <jasowang(a)redhat.com>
Date: Sat, 24 Feb 2018 11:32:26 +0800
Subject: tuntap: correctly add the missing XDP flush
From: Jason Wang <jasowang(a)redhat.com>
[ Upstream commit 1bb4f2e868a2891ab8bc668b8173d6ccb8c4ce6f ]
We don't flush batched XDP packets through xdp_do_flush_map(), which
causes packets to stall in the TX queue. Since we don't do XDP in a
NAPI poll() routine, the only possible fix is to call
xdp_do_flush_map() immediately after xdp_do_redirect().
Note that this in fact won't batch packets through the devmap; we
could address that in the future.
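For contrast, a NAPI-based XDP driver can batch redirects and flush once
per poll cycle, roughly like this (an illustrative sketch, not taken from
any particular driver):

	/* simplified NAPI-style poll loop, for contrast only */
	while (budget--) {
		act = bpf_prog_run_xdp(xdp_prog, &xdp);
		if (act == XDP_REDIRECT)
			xdp_do_redirect(dev, &xdp, xdp_prog);
		/* ... other verdicts ... */
	}
	xdp_do_flush_map();	/* one flush per poll cycle */

tun_build_skb() has no such per-poll hook, hence the flush directly after
xdp_do_redirect() in the hunk below.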
Reported-by: Christoffer Dall <christoffer.dall(a)linaro.org>
Fixes: 761876c857cb ("tap: XDP support")
Signed-off-by: Jason Wang <jasowang(a)redhat.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/net/tun.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1490,6 +1490,7 @@ static struct sk_buff *tun_build_skb(str
get_page(alloc_frag->page);
alloc_frag->offset += buflen;
err = xdp_do_redirect(tun->dev, &xdp, xdp_prog);
+ xdp_do_flush_map();
if (err)
goto err_redirect;
rcu_read_unlock();
Patches currently in stable-queue which might be from jasowang(a)redhat.com are
queue-4.15/tuntap-correctly-add-the-missing-xdp-flush.patch
queue-4.15/virtio-net-disable-napi-only-when-enabled-during-xdp-set.patch
queue-4.15/tuntap-disable-preemption-during-xdp-processing.patch
This is a note to let you know that I've just added the patch titled
tls: Use correct sk->sk_prot for IPV6
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tls-use-correct-sk-sk_prot-for-ipv6.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Boris Pismenny <borisp(a)mellanox.com>
Date: Tue, 27 Feb 2018 14:18:39 +0200
Subject: tls: Use correct sk->sk_prot for IPV6
From: Boris Pismenny <borisp(a)mellanox.com>
[ Upstream commit c113187d38ff85dc302a1bb55864b203ebb2ba10 ]
The tls ulp overrides sk->sk_prot with new tls-specific proto structs.
These tls-specific structs were previously based on the ipv4-specific
tcp_prot struct.
As a result, attaching the tls ulp to an ipv6 tcp socket replaced
some ipv6 callbacks with their ipv4 equivalents.
This patch adds ipv6 tls proto structs and uses them when
attached to ipv6 sockets.
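Condensing the diff below, the per-family selection plus the lazy,
mutex-protected build of the IPv6 variants amounts to this (a sketch of
the pattern; see the hunks for the exact context):

	int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;

	/* build the IPv6 protos once, keyed on the address of tcpv6_prot */
	if (ip_ver == TLSV6 &&
	    unlikely(sk->sk_prot != smp_load_acquire(&saved_tcpv6_prot))) {
		mutex_lock(&tcpv6_prot_mutex);
		if (sk->sk_prot != saved_tcpv6_prot) {
			build_protos(tls_prots[TLSV6], sk->sk_prot);
			smp_store_release(&saved_tcpv6_prot, sk->sk_prot);
		}
		mutex_unlock(&tcpv6_prot_mutex);
	}

	sk->sk_prot = &tls_prots[ip_ver][ctx->tx_conf];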
Fixes: 3c4d7559159b ('tls: kernel TLS support')
Signed-off-by: Boris Pismenny <borisp(a)mellanox.com>
Signed-off-by: Ilya Lesokhin <ilyal(a)mellanox.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/tls/tls_main.c | 52 +++++++++++++++++++++++++++++++++++++---------------
1 file changed, 37 insertions(+), 15 deletions(-)
--- a/net/tls/tls_main.c
+++ b/net/tls/tls_main.c
@@ -46,16 +46,26 @@ MODULE_DESCRIPTION("Transport Layer Secu
MODULE_LICENSE("Dual BSD/GPL");
enum {
+ TLSV4,
+ TLSV6,
+ TLS_NUM_PROTS,
+};
+
+enum {
TLS_BASE_TX,
TLS_SW_TX,
TLS_NUM_CONFIG,
};
-static struct proto tls_prots[TLS_NUM_CONFIG];
+static struct proto *saved_tcpv6_prot;
+static DEFINE_MUTEX(tcpv6_prot_mutex);
+static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG];
static inline void update_sk_prot(struct sock *sk, struct tls_context *ctx)
{
- sk->sk_prot = &tls_prots[ctx->tx_conf];
+ int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;
+
+ sk->sk_prot = &tls_prots[ip_ver][ctx->tx_conf];
}
int wait_on_pending_writer(struct sock *sk, long *timeo)
@@ -450,8 +460,21 @@ static int tls_setsockopt(struct sock *s
return do_tls_setsockopt(sk, optname, optval, optlen);
}
+static void build_protos(struct proto *prot, struct proto *base)
+{
+ prot[TLS_BASE_TX] = *base;
+ prot[TLS_BASE_TX].setsockopt = tls_setsockopt;
+ prot[TLS_BASE_TX].getsockopt = tls_getsockopt;
+ prot[TLS_BASE_TX].close = tls_sk_proto_close;
+
+ prot[TLS_SW_TX] = prot[TLS_BASE_TX];
+ prot[TLS_SW_TX].sendmsg = tls_sw_sendmsg;
+ prot[TLS_SW_TX].sendpage = tls_sw_sendpage;
+}
+
static int tls_init(struct sock *sk)
{
+ int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;
struct inet_connection_sock *icsk = inet_csk(sk);
struct tls_context *ctx;
int rc = 0;
@@ -476,6 +499,17 @@ static int tls_init(struct sock *sk)
ctx->getsockopt = sk->sk_prot->getsockopt;
ctx->sk_proto_close = sk->sk_prot->close;
+ /* Build IPv6 TLS whenever the address of tcpv6_prot changes */
+ if (ip_ver == TLSV6 &&
+ unlikely(sk->sk_prot != smp_load_acquire(&saved_tcpv6_prot))) {
+ mutex_lock(&tcpv6_prot_mutex);
+ if (likely(sk->sk_prot != saved_tcpv6_prot)) {
+ build_protos(tls_prots[TLSV6], sk->sk_prot);
+ smp_store_release(&saved_tcpv6_prot, sk->sk_prot);
+ }
+ mutex_unlock(&tcpv6_prot_mutex);
+ }
+
ctx->tx_conf = TLS_BASE_TX;
update_sk_prot(sk, ctx);
out:
@@ -488,21 +522,9 @@ static struct tcp_ulp_ops tcp_tls_ulp_op
.init = tls_init,
};
-static void build_protos(struct proto *prot, struct proto *base)
-{
- prot[TLS_BASE_TX] = *base;
- prot[TLS_BASE_TX].setsockopt = tls_setsockopt;
- prot[TLS_BASE_TX].getsockopt = tls_getsockopt;
- prot[TLS_BASE_TX].close = tls_sk_proto_close;
-
- prot[TLS_SW_TX] = prot[TLS_BASE_TX];
- prot[TLS_SW_TX].sendmsg = tls_sw_sendmsg;
- prot[TLS_SW_TX].sendpage = tls_sw_sendpage;
-}
-
static int __init tls_register(void)
{
- build_protos(tls_prots, &tcp_prot);
+ build_protos(tls_prots[TLSV4], &tcp_prot);
tcp_register_ulp(&tcp_tls_ulp_ops);
Patches currently in stable-queue which might be from borisp(a)mellanox.com are
queue-4.15/tls-use-correct-sk-sk_prot-for-ipv6.patch
This is a note to let you know that I've just added the patch titled
tcp_bbr: better deal with suboptimal GSO
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tcp_bbr-better-deal-with-suboptimal-gso.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Eric Dumazet <edumazet(a)google.com>
Date: Wed, 21 Feb 2018 06:43:03 -0800
Subject: tcp_bbr: better deal with suboptimal GSO
From: Eric Dumazet <edumazet(a)google.com>
[ Upstream commit 350c9f484bde93ef229682eedd98cd5f74350f7f ]
BBR uses tcp_tso_autosize() in an attempt to probe what the burst
sizes would be, and adjusts cwnd in bbr_target_cwnd() with the
following formula:
/* Allow enough full-sized skbs in flight to utilize end systems. */
cwnd += 3 * bbr->tso_segs_goal;
But GSO can be lacking or constrained to very small
units (ip link set dev ... gso_max_segs 2).
What we really want is to have enough packets in flight so that both
GSO and GRO are efficient.
So in the case where GSO is off or downgraded, we still want to have
the same number of packets in flight as if GSO/TSO were fully
operational, so that GRO can hopefully work efficiently.
To fix this issue, we make tcp_tso_autosize() unaware of
sk->sk_gso_max_segs.
Only tcp_tso_segs() has to enforce the gso_max_segs limit.
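A rough worked illustration (numbers are illustrative, using the
gso_max_segs 2 example above):

	before: tcp_tso_autosize() = min(max(bytes / mss, min_tso_segs), gso_max_segs) = 2,
		so bbr->tso_segs_goal stays tiny and bbr_target_cwnd() adds only
		3 * 2 = 6 extra packets (the same ballpark as the "cwnd stuck
		around 6" seen in the test below).
	after:	tcp_tso_autosize() still reports the tens of segments a
		full-sized burst would occupy; the gso_max_segs clamp is applied
		only by tcp_tso_segs() when skbs are actually built, so the cwnd
		budget (and receive-side GRO) behaves as if TSO/GSO were fully
		operational.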
Tested:
ethtool -K eth0 tso off gso off
tc qd replace dev eth0 root pfifo_fast
Before patch:
for f in {1..5}; do ./super_netperf 1 -H lpaa24 -- -K bbr; done
691 (ss -temoi shows cwnd is stuck around 6 )
667
651
631
517
After patch :
# for f in {1..5}; do ./super_netperf 1 -H lpaa24 -- -K bbr; done
1733 (ss -temoi shows cwnd is around 386 )
1778
1746
1781
1718
Fixes: 0f8782ea1497 ("tcp_bbr: add BBR congestion control")
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Reported-by: Oleksandr Natalenko <oleksandr(a)natalenko.name>
Acked-by: Neal Cardwell <ncardwell(a)google.com>
Acked-by: Soheil Hassas Yeganeh <soheil(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/ipv4/tcp_output.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1730,7 +1730,7 @@ u32 tcp_tso_autosize(const struct sock *
*/
segs = max_t(u32, bytes / mss_now, min_tso_segs);
- return min_t(u32, segs, sk->sk_gso_max_segs);
+ return segs;
}
EXPORT_SYMBOL(tcp_tso_autosize);
@@ -1742,9 +1742,10 @@ static u32 tcp_tso_segs(struct sock *sk,
const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
u32 tso_segs = ca_ops->tso_segs_goal ? ca_ops->tso_segs_goal(sk) : 0;
- return tso_segs ? :
- tcp_tso_autosize(sk, mss_now,
- sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs);
+ if (!tso_segs)
+ tso_segs = tcp_tso_autosize(sk, mss_now,
+ sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs);
+ return min_t(u32, tso_segs, sk->sk_gso_max_segs);
}
/* Returns the portion of skb which can be sent right away */
Patches currently in stable-queue which might be from edumazet(a)google.com are
queue-4.15/doc-change-the-min-default-value-of-tcp_wmem-tcp_rmem.patch
queue-4.15/tcp-purge-write-queue-upon-rst.patch
queue-4.15/net_sched-gen_estimator-fix-broken-estimators-based-on-percpu-stats.patch
queue-4.15/tcp_bbr-better-deal-with-suboptimal-gso.patch
This is a note to let you know that I've just added the patch titled
tcp: tracepoint: only call trace_tcp_send_reset with full socket
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tcp-tracepoint-only-call-trace_tcp_send_reset-with-full-socket.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Song Liu <songliubraving(a)fb.com>
Date: Tue, 6 Feb 2018 20:50:23 -0800
Subject: tcp: tracepoint: only call trace_tcp_send_reset with full socket
From: Song Liu <songliubraving(a)fb.com>
[ Upstream commit 5c487bb9adddbc1d23433e09d2548759375c2b52 ]
The tcp_send_reset tracepoint requires a full socket to work. However,
it may be called when the socket is in TCP_TIME_WAIT:
case TCP_TW_RST:
tcp_v6_send_reset(sk, skb);
inet_twsk_deschedule_put(inet_twsk(sk));
goto discard_it;
To avoid this problem, this patch checks the socket with sk_fullsock()
before calling trace_tcp_send_reset().
Fixes: c24b14c46bb8 ("tcp: add tracepoint trace_tcp_send_reset")
Signed-off-by: Song Liu <songliubraving(a)fb.com>
Reviewed-by: Lawrence Brakmo <brakmo(a)fb.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/ipv4/tcp_ipv4.c | 3 ++-
net/ipv6/tcp_ipv6.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -705,7 +705,8 @@ static void tcp_v4_send_reset(const stru
*/
if (sk) {
arg.bound_dev_if = sk->sk_bound_dev_if;
- trace_tcp_send_reset(sk, skb);
+ if (sk_fullsock(sk))
+ trace_tcp_send_reset(sk, skb);
}
BUILD_BUG_ON(offsetof(struct sock, sk_bound_dev_if) !=
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -943,7 +943,8 @@ static void tcp_v6_send_reset(const stru
if (sk) {
oif = sk->sk_bound_dev_if;
- trace_tcp_send_reset(sk, skb);
+ if (sk_fullsock(sk))
+ trace_tcp_send_reset(sk, skb);
}
tcp_v6_send_response(sk, skb, seq, ack_seq, 0, 0, 0, oif, key, 1, 0, 0);
Patches currently in stable-queue which might be from songliubraving(a)fb.com are
queue-4.15/tcp-tracepoint-only-call-trace_tcp_send_reset-with-full-socket.patch
This is a note to let you know that I've just added the patch titled
tcp: revert F-RTO middle-box workaround
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tcp-revert-f-rto-middle-box-workaround.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Yuchung Cheng <ycheng(a)google.com>
Date: Tue, 27 Feb 2018 14:15:01 -0800
Subject: tcp: revert F-RTO middle-box workaround
From: Yuchung Cheng <ycheng(a)google.com>
[ Upstream commit d4131f09770d9b7471c9da65e6ecd2477746ac5c ]
This reverts commit cc663f4d4c97b7297fb45135ab23cfd508b35a77. While it
fixed some broken middle-boxes that modify receive window fields, it does
not address middle-boxes that strip off SACK options. The best solution
is to fully revert this patch and the root F-RTO enhancement.
Fixes: cc663f4d4c97 ("tcp: restrict F-RTO to work-around broken middle-boxes")
Reported-by: Teodor Milkov <tm(a)del.bg>
Signed-off-by: Yuchung Cheng <ycheng(a)google.com>
Signed-off-by: Neal Cardwell <ncardwell(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/ipv4/tcp_input.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 45f750e85714..50963f92a67d 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1915,7 +1915,6 @@ void tcp_enter_loss(struct sock *sk)
struct tcp_sock *tp = tcp_sk(sk);
struct net *net = sock_net(sk);
struct sk_buff *skb;
- bool new_recovery = icsk->icsk_ca_state < TCP_CA_Recovery;
bool is_reneg; /* is receiver reneging on SACKs? */
bool mark_lost;
@@ -1974,17 +1973,15 @@ void tcp_enter_loss(struct sock *sk)
tp->high_seq = tp->snd_nxt;
tcp_ecn_queue_cwr(tp);
- /* F-RTO RFC5682 sec 3.1 step 1: retransmit SND.UNA if no previous
- * loss recovery is underway except recurring timeout(s) on
- * the same SND.UNA (sec 3.2). Disable F-RTO on path MTU probing
- *
- * In theory F-RTO can be used repeatedly during loss recovery.
- * In practice this interacts badly with broken middle-boxes that
- * falsely raise the receive window, which results in repeated
- * timeouts and stop-and-go behavior.
+ /* F-RTO RFC5682 sec 3.1 step 1 mandates to disable F-RTO
+ * if a previous recovery is underway, otherwise it may incorrectly
+ * call a timeout spurious if some previously retransmitted packets
+ * are s/acked (sec 3.2). We do not apply that retriction since
+ * retransmitted skbs are permanently tagged with TCPCB_EVER_RETRANS
+ * so FLAG_ORIG_SACK_ACKED is always correct. But we do disable F-RTO
+ * on PTMU discovery to avoid sending new data.
*/
tp->frto = net->ipv4.sysctl_tcp_frto &&
- (new_recovery || icsk->icsk_retransmits) &&
!inet_csk(sk)->icsk_mtup.probe_size;
}
--
2.14.3
Patches currently in stable-queue which might be from ycheng(a)google.com are
queue-4.15/tcp-purge-write-queue-upon-rst.patch
queue-4.15/tcp-revert-f-rto-extension-to-detect-more-spurious-timeouts.patch
queue-4.15/tcp-revert-f-rto-middle-box-workaround.patch
This is a note to let you know that I've just added the patch titled
tcp: purge write queue upon RST
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tcp-purge-write-queue-upon-rst.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Soheil Hassas Yeganeh <soheil(a)google.com>
Date: Tue, 27 Feb 2018 18:32:18 -0500
Subject: tcp: purge write queue upon RST
From: Soheil Hassas Yeganeh <soheil(a)google.com>
[ Upstream commit a27fd7a8ed3856faaf5a2ff1c8c5f00c0667aaa0 ]
When the connection is reset, there is no point in
keeping the packets on the write queue until the connection
is closed.
RFC 793 (page 70) and RFC 793-bis (page 64) both suggest
purging the write queue upon RST:
https://tools.ietf.org/html/draft-ietf-tcpm-rfc793bis-07
Moreover, this is essential for a correct MSG_ZEROCOPY
implementation, because userspace cannot call close(fd)
before receiving zerocopy signals even when the connection
is reset.
Fixes: f214f915e7db ("tcp: enable MSG_ZEROCOPY")
Signed-off-by: Soheil Hassas Yeganeh <soheil(a)google.com>
Reviewed-by: Eric Dumazet <edumazet(a)google.com>
Signed-off-by: Yuchung Cheng <ycheng(a)google.com>
Signed-off-by: Neal Cardwell <ncardwell(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/ipv4/tcp_input.c | 1 +
1 file changed, 1 insertion(+)
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -3988,6 +3988,7 @@ void tcp_reset(struct sock *sk)
/* This barrier is coupled with smp_rmb() in tcp_poll() */
smp_wmb();
+ tcp_write_queue_purge(sk);
tcp_done(sk);
if (!sock_flag(sk, SOCK_DEAD))
Patches currently in stable-queue which might be from soheil(a)google.com are
queue-4.15/tcp-purge-write-queue-upon-rst.patch
queue-4.15/tcp_bbr-better-deal-with-suboptimal-gso.patch
This is a note to let you know that I've just added the patch titled
tcp: revert F-RTO extension to detect more spurious timeouts
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tcp-revert-f-rto-extension-to-detect-more-spurious-timeouts.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Yuchung Cheng <ycheng(a)google.com>
Date: Tue, 27 Feb 2018 14:15:02 -0800
Subject: tcp: revert F-RTO extension to detect more spurious timeouts
From: Yuchung Cheng <ycheng(a)google.com>
[ Upstream commit fc68e171d376c322e6777a3d7ac2f0278b68b17f ]
This reverts commit 89fe18e44f7ee5ab1c90d0dff5835acee7751427.
While the patch could detect more spurious timeouts, it could cause
poor TCP performance on broken middle-boxes that modify TCP packets
(e.g. receive window, SACK options). Since the performance gain is
much smaller than the potential loss, the best solution is to fully
revert the change.
Fixes: 89fe18e44f7e ("tcp: extend F-RTO to catch more spurious timeouts")
Reported-by: Teodor Milkov <tm(a)del.bg>
Signed-off-by: Yuchung Cheng <ycheng(a)google.com>
Signed-off-by: Neal Cardwell <ncardwell(a)google.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/ipv4/tcp_input.c | 30 ++++++++++++------------------
1 file changed, 12 insertions(+), 18 deletions(-)
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1915,6 +1915,7 @@ void tcp_enter_loss(struct sock *sk)
struct tcp_sock *tp = tcp_sk(sk);
struct net *net = sock_net(sk);
struct sk_buff *skb;
+ bool new_recovery = icsk->icsk_ca_state < TCP_CA_Recovery;
bool is_reneg; /* is receiver reneging on SACKs? */
bool mark_lost;
@@ -1973,15 +1974,12 @@ void tcp_enter_loss(struct sock *sk)
tp->high_seq = tp->snd_nxt;
tcp_ecn_queue_cwr(tp);
- /* F-RTO RFC5682 sec 3.1 step 1 mandates to disable F-RTO
- * if a previous recovery is underway, otherwise it may incorrectly
- * call a timeout spurious if some previously retransmitted packets
- * are s/acked (sec 3.2). We do not apply that retriction since
- * retransmitted skbs are permanently tagged with TCPCB_EVER_RETRANS
- * so FLAG_ORIG_SACK_ACKED is always correct. But we do disable F-RTO
- * on PTMU discovery to avoid sending new data.
+ /* F-RTO RFC5682 sec 3.1 step 1: retransmit SND.UNA if no previous
+ * loss recovery is underway except recurring timeout(s) on
+ * the same SND.UNA (sec 3.2). Disable F-RTO on path MTU probing
*/
tp->frto = net->ipv4.sysctl_tcp_frto &&
+ (new_recovery || icsk->icsk_retransmits) &&
!inet_csk(sk)->icsk_mtup.probe_size;
}
@@ -2634,18 +2632,14 @@ static void tcp_process_loss(struct sock
tcp_try_undo_loss(sk, false))
return;
- /* The ACK (s)acks some never-retransmitted data meaning not all
- * the data packets before the timeout were lost. Therefore we
- * undo the congestion window and state. This is essentially
- * the operation in F-RTO (RFC5682 section 3.1 step 3.b). Since
- * a retransmitted skb is permantly marked, we can apply such an
- * operation even if F-RTO was not used.
- */
- if ((flag & FLAG_ORIG_SACK_ACKED) &&
- tcp_try_undo_loss(sk, tp->undo_marker))
- return;
-
if (tp->frto) { /* F-RTO RFC5682 sec 3.1 (sack enhanced version). */
+ /* Step 3.b. A timeout is spurious if not all data are
+ * lost, i.e., never-retransmitted data are (s)acked.
+ */
+ if ((flag & FLAG_ORIG_SACK_ACKED) &&
+ tcp_try_undo_loss(sk, true))
+ return;
+
if (after(tp->snd_nxt, tp->high_seq)) {
if (flag & FLAG_DATA_SACKED || is_dupack)
tp->frto = 0; /* Step 3.a. loss was real */
Patches currently in stable-queue which might be from ycheng(a)google.com are
queue-4.15/tcp-purge-write-queue-upon-rst.patch
queue-4.15/tcp-revert-f-rto-extension-to-detect-more-spurious-timeouts.patch
queue-4.15/tcp-revert-f-rto-middle-box-workaround.patch
This is a note to let you know that I've just added the patch titled
tcp: Honor the eor bit in tcp_mtu_probe
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
tcp-honor-the-eor-bit-in-tcp_mtu_probe.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Ilya Lesokhin <ilyal(a)mellanox.com>
Date: Mon, 12 Feb 2018 12:57:04 +0200
Subject: tcp: Honor the eor bit in tcp_mtu_probe
From: Ilya Lesokhin <ilyal(a)mellanox.com>
[ Upstream commit 808cf9e38cd7923036a99f459ccc8cf2955e47af ]
Avoid SKB coalescing in tcp_mtu_probe() if the eor bit is set in one of
the relevant SKBs.
Fixes: c134ecb87817 ("tcp: Make use of MSG_EOR in tcp_sendmsg")
Signed-off-by: Ilya Lesokhin <ilyal(a)mellanox.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/ipv4/tcp_output.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2026,6 +2026,24 @@ static inline void tcp_mtu_check_reprobe
}
}
+static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)
+{
+ struct sk_buff *skb, *next;
+
+ skb = tcp_send_head(sk);
+ tcp_for_write_queue_from_safe(skb, next, sk) {
+ if (len <= skb->len)
+ break;
+
+ if (unlikely(TCP_SKB_CB(skb)->eor))
+ return false;
+
+ len -= skb->len;
+ }
+
+ return true;
+}
+
/* Create a new MTU probe if we are ready.
* MTU probe is regularly attempting to increase the path MTU by
* deliberately sending larger packets. This discovers routing
@@ -2098,6 +2116,9 @@ static int tcp_mtu_probe(struct sock *sk
return 0;
}
+ if (!tcp_can_coalesce_send_queue_head(sk, probe_size))
+ return -1;
+
/* We're allowed to probe. Build it now. */
nskb = sk_stream_alloc_skb(sk, probe_size, GFP_ATOMIC, false);
if (!nskb)
@@ -2133,6 +2154,10 @@ static int tcp_mtu_probe(struct sock *sk
/* We've eaten all the data from this skb.
* Throw it away. */
TCP_SKB_CB(nskb)->tcp_flags |= TCP_SKB_CB(skb)->tcp_flags;
+ /* If this is the last SKB we copy and eor is set
+ * we need to propagate it to the new skb.
+ */
+ TCP_SKB_CB(nskb)->eor = TCP_SKB_CB(skb)->eor;
tcp_unlink_write_queue(skb, sk);
sk_wmem_free_skb(sk, skb);
} else {
Patches currently in stable-queue which might be from ilyal(a)mellanox.com are
queue-4.15/tls-use-correct-sk-sk_prot-for-ipv6.patch
queue-4.15/tcp-honor-the-eor-bit-in-tcp_mtu_probe.patch
This is a note to let you know that I've just added the patch titled
sctp: verify size of a new chunk in _sctp_make_chunk()
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sctp-verify-size-of-a-new-chunk-in-_sctp_make_chunk.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Alexey Kodanev <alexey.kodanev(a)oracle.com>
Date: Fri, 9 Feb 2018 17:35:23 +0300
Subject: sctp: verify size of a new chunk in _sctp_make_chunk()
From: Alexey Kodanev <alexey.kodanev(a)oracle.com>
[ Upstream commit 07f2c7ab6f8d0a7e7c5764c4e6cc9c52951b9d9c ]
When SCTP builds an INIT or INIT_ACK packet, the total chunk length
can exceed SCTP_MAX_CHUNK_LEN, which leads to a kernel panic when
transmitting these packets, e.g. the crash below on sending INIT_ACK:
[ 597.804948] skbuff: skb_over_panic: text:00000000ffae06e4 len:120168
put:120156 head:000000007aa47635 data:00000000d991c2de
tail:0x1d640 end:0xfec0 dev:<NULL>
...
[ 597.976970] ------------[ cut here ]------------
[ 598.033408] kernel BUG at net/core/skbuff.c:104!
[ 600.314841] Call Trace:
[ 600.345829] <IRQ>
[ 600.371639] ? sctp_packet_transmit+0x2095/0x26d0 [sctp]
[ 600.436934] skb_put+0x16c/0x200
[ 600.477295] sctp_packet_transmit+0x2095/0x26d0 [sctp]
[ 600.540630] ? sctp_packet_config+0x890/0x890 [sctp]
[ 600.601781] ? __sctp_packet_append_chunk+0x3b4/0xd00 [sctp]
[ 600.671356] ? sctp_cmp_addr_exact+0x3f/0x90 [sctp]
[ 600.731482] sctp_outq_flush+0x663/0x30d0 [sctp]
[ 600.788565] ? sctp_make_init+0xbf0/0xbf0 [sctp]
[ 600.845555] ? sctp_check_transmitted+0x18f0/0x18f0 [sctp]
[ 600.912945] ? sctp_outq_tail+0x631/0x9d0 [sctp]
[ 600.969936] sctp_cmd_interpreter.isra.22+0x3be1/0x5cb0 [sctp]
[ 601.041593] ? sctp_sf_do_5_1B_init+0x85f/0xc30 [sctp]
[ 601.104837] ? sctp_generate_t1_cookie_event+0x20/0x20 [sctp]
[ 601.175436] ? sctp_eat_data+0x1710/0x1710 [sctp]
[ 601.233575] sctp_do_sm+0x182/0x560 [sctp]
[ 601.284328] ? sctp_has_association+0x70/0x70 [sctp]
[ 601.345586] ? sctp_rcv+0xef4/0x32f0 [sctp]
[ 601.397478] ? sctp6_rcv+0xa/0x20 [sctp]
...
Here the chunk size for the INIT_ACK packet becomes too big, mostly
because of the state cookie (the INIT packet is large, with many
address parameters), plus additional server parameters.
Later this chunk causes the panic in skb_put_data():
  sctp_packet_transmit()
    sctp_packet_pack()
      skb_put_data(nskb, chunk->skb->data, chunk->skb->len);
'nskb' (the head skb) was previously allocated with packet->size,
taken from the u16 'chunk->chunk_hdr->length'.
As suggested by Marcelo, we should check the chunk's length in
_sctp_make_chunk() before trying to allocate an skb for it, and
discard the chunk if its size is bigger than SCTP_MAX_CHUNK_LEN.
Signed-off-by: Alexey Kodanev <alexey.kodanev(a)oracle.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner(a)gmail.com>
Acked-by: Neil Horman <nhorman(a)tuxdriver.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/sctp/sm_make_chunk.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -1378,9 +1378,14 @@ static struct sctp_chunk *_sctp_make_chu
struct sctp_chunk *retval;
struct sk_buff *skb;
struct sock *sk;
+ int chunklen;
+
+ chunklen = SCTP_PAD4(sizeof(*chunk_hdr) + paylen);
+ if (chunklen > SCTP_MAX_CHUNK_LEN)
+ goto nodata;
/* No need to allocate LL here, as this is only a chunk. */
- skb = alloc_skb(SCTP_PAD4(sizeof(*chunk_hdr) + paylen), gfp);
+ skb = alloc_skb(chunklen, gfp);
if (!skb)
goto nodata;
Patches currently in stable-queue which might be from alexey.kodanev(a)oracle.com are
queue-4.15/sctp-fix-dst-refcnt-leak-in-sctp_v6_get_dst.patch
queue-4.15/udplite-fix-partial-checksum-initialization.patch
queue-4.15/sctp-verify-size-of-a-new-chunk-in-_sctp_make_chunk.patch
This is a note to let you know that I've just added the patch titled
sctp: fix dst refcnt leak in sctp_v6_get_dst()
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sctp-fix-dst-refcnt-leak-in-sctp_v6_get_dst.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Alexey Kodanev <alexey.kodanev(a)oracle.com>
Date: Mon, 5 Feb 2018 15:10:35 +0300
Subject: sctp: fix dst refcnt leak in sctp_v6_get_dst()
From: Alexey Kodanev <alexey.kodanev(a)oracle.com>
[ Upstream commit 957d761cf91cdbb175ad7d8f5472336a4d54dbf2 ]
When going through the bind address list in sctp_v6_get_dst(), if
the previously found address is better ('matchlen > bmatchlen'),
the code continues to the next iteration without releasing the
currently held destination.
Fix it by releasing 'bdst' before continuing to the next iteration, and
instead of introducing one more '!IS_ERR(bdst)' check for dst_release(),
move the already existing one right after ip6_dst_lookup_flow(), i.e. we
shouldn't proceed further if we get an error from the route lookup.
Fixes: dbc2b5e9a09e ("sctp: fix src address selection if using secondary addresses for ipv6")
Signed-off-by: Alexey Kodanev <alexey.kodanev(a)oracle.com>
Acked-by: Neil Horman <nhorman(a)tuxdriver.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/sctp/ipv6.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
--- a/net/sctp/ipv6.c
+++ b/net/sctp/ipv6.c
@@ -326,8 +326,10 @@ static void sctp_v6_get_dst(struct sctp_
final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
bdst = ip6_dst_lookup_flow(sk, fl6, final_p);
- if (!IS_ERR(bdst) &&
- ipv6_chk_addr(dev_net(bdst->dev),
+ if (IS_ERR(bdst))
+ continue;
+
+ if (ipv6_chk_addr(dev_net(bdst->dev),
&laddr->a.v6.sin6_addr, bdst->dev, 1)) {
if (!IS_ERR_OR_NULL(dst))
dst_release(dst);
@@ -336,8 +338,10 @@ static void sctp_v6_get_dst(struct sctp_
}
bmatchlen = sctp_v6_addr_match_len(daddr, &laddr->a);
- if (matchlen > bmatchlen)
+ if (matchlen > bmatchlen) {
+ dst_release(bdst);
continue;
+ }
if (!IS_ERR_OR_NULL(dst))
dst_release(dst);
Patches currently in stable-queue which might be from alexey.kodanev(a)oracle.com are
queue-4.15/sctp-fix-dst-refcnt-leak-in-sctp_v6_get_dst.patch
queue-4.15/udplite-fix-partial-checksum-initialization.patch
queue-4.15/sctp-verify-size-of-a-new-chunk-in-_sctp_make_chunk.patch
This is a note to let you know that I've just added the patch titled
sctp: fix dst refcnt leak in sctp_v4_get_dst
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sctp-fix-dst-refcnt-leak-in-sctp_v4_get_dst.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Tommi Rantala <tommi.t.rantala(a)nokia.com>
Date: Mon, 5 Feb 2018 21:48:14 +0200
Subject: sctp: fix dst refcnt leak in sctp_v4_get_dst
From: Tommi Rantala <tommi.t.rantala(a)nokia.com>
[ Upstream commit 4a31a6b19f9ddf498c81f5c9b089742b7472a6f8 ]
Fix dst reference count leak in sctp_v4_get_dst() introduced in commit
410f03831 ("sctp: add routing output fallback"):
When walking the address_list, successive ip_route_output_key() calls
may return the same rt->dst with the reference incremented on each call.
The code would not decrement the dst refcount when the dst pointer was
identical to the one from the previous iteration, causing a dst refcnt leak.
Testcase:
ip netns add TEST
ip netns exec TEST ip link set lo up
ip link add dummy0 type dummy
ip link add dummy1 type dummy
ip link add dummy2 type dummy
ip link set dev dummy0 netns TEST
ip link set dev dummy1 netns TEST
ip link set dev dummy2 netns TEST
ip netns exec TEST ip addr add 192.168.1.1/24 dev dummy0
ip netns exec TEST ip link set dummy0 up
ip netns exec TEST ip addr add 192.168.1.2/24 dev dummy1
ip netns exec TEST ip link set dummy1 up
ip netns exec TEST ip addr add 192.168.1.3/24 dev dummy2
ip netns exec TEST ip link set dummy2 up
ip netns exec TEST sctp_test -H 192.168.1.2 -P 20002 -h 192.168.1.1 -p 20000 -s -B 192.168.1.3
ip netns del TEST
In 4.4 and 4.9 kernels this results in:
[ 354.179591] unregister_netdevice: waiting for lo to become free. Usage count = 1
[ 364.419674] unregister_netdevice: waiting for lo to become free. Usage count = 1
[ 374.663664] unregister_netdevice: waiting for lo to become free. Usage count = 1
[ 384.903717] unregister_netdevice: waiting for lo to become free. Usage count = 1
[ 395.143724] unregister_netdevice: waiting for lo to become free. Usage count = 1
[ 405.383645] unregister_netdevice: waiting for lo to become free. Usage count = 1
...
Fixes: 410f03831 ("sctp: add routing output fallback")
Fixes: 0ca50d12f ("sctp: fix src address selection if using secondary addresses")
Signed-off-by: Tommi Rantala <tommi.t.rantala(a)nokia.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner(a)gmail.com>
Acked-by: Neil Horman <nhorman(a)tuxdriver.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/sctp/protocol.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -514,22 +514,20 @@ static void sctp_v4_get_dst(struct sctp_
if (IS_ERR(rt))
continue;
- if (!dst)
- dst = &rt->dst;
-
/* Ensure the src address belongs to the output
* interface.
*/
odev = __ip_dev_find(sock_net(sk), laddr->a.v4.sin_addr.s_addr,
false);
if (!odev || odev->ifindex != fl4->flowi4_oif) {
- if (&rt->dst != dst)
+ if (!dst)
+ dst = &rt->dst;
+ else
dst_release(&rt->dst);
continue;
}
- if (dst != &rt->dst)
- dst_release(dst);
+ dst_release(dst);
dst = &rt->dst;
break;
}
Patches currently in stable-queue which might be from tommi.t.rantala(a)nokia.com are
queue-4.15/sctp-fix-dst-refcnt-leak-in-sctp_v4_get_dst.patch
This is a note to let you know that I've just added the patch titled
sctp: do not pr_err for the duplicated node in transport rhlist
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sctp-do-not-pr_err-for-the-duplicated-node-in-transport-rhlist.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Xin Long <lucien.xin(a)gmail.com>
Date: Mon, 12 Feb 2018 18:29:06 +0800
Subject: sctp: do not pr_err for the duplicated node in transport rhlist
From: Xin Long <lucien.xin(a)gmail.com>
[ Upstream commit 27af86bb038d9c8b8066cd17854ddaf2ea92bce1 ]
The pr_err in sctp_hash_transport was supposed to report an sctp bug
in its use of rhashtable/rhlist.
The '-EEXIST' error introduced in commit cd2b70875058 ("sctp: check
duplicate node before inserting a new transport") doesn't belong
to that case.
So just return -EEXIST without printing any kmsg via pr_err.
Fixes: cd2b70875058 ("sctp: check duplicate node before inserting a new transport")
Reported-by: Wei Chen <weichen(a)redhat.com>
Signed-off-by: Xin Long <lucien.xin(a)gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner(a)gmail.com>
Acked-by: Neil Horman <nhorman(a)tuxdriver.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/sctp/input.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
--- a/net/sctp/input.c
+++ b/net/sctp/input.c
@@ -897,15 +897,12 @@ int sctp_hash_transport(struct sctp_tran
rhl_for_each_entry_rcu(transport, tmp, list, node)
if (transport->asoc->ep == t->asoc->ep) {
rcu_read_unlock();
- err = -EEXIST;
- goto out;
+ return -EEXIST;
}
rcu_read_unlock();
err = rhltable_insert_key(&sctp_transport_hashtable, &arg,
&t->node, sctp_hash_params);
-
-out:
if (err)
pr_err_once("insert transport fail, errno %d\n", err);
Patches currently in stable-queue which might be from lucien.xin(a)gmail.com are
queue-4.15/bridge-check-brport-attr-show-in-brport_show.patch
queue-4.15/sctp-do-not-pr_err-for-the-duplicated-node-in-transport-rhlist.patch
This is a note to let you know that I've just added the patch titled
s390/qeth: fix underestimated count of buffer elements
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
s390-qeth-fix-underestimated-count-of-buffer-elements.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:57 PST 2018
From: Ursula Braun <ubraun(a)linux.vnet.ibm.com>
Date: Fri, 9 Feb 2018 11:03:49 +0100
Subject: s390/qeth: fix underestimated count of buffer elements
From: Ursula Braun <ubraun(a)linux.vnet.ibm.com>
[ Upstream commit 89271c65edd599207dd982007900506283c90ae3 ]
For a memory range/skb where the last byte falls onto a page boundary
(i.e. 'end' is of the form xxx...xxx001), the PFN_UP() part of the
calculation currently doesn't round up to the next PFN due to an
off-by-one error.
Thus qeth believes that the skb occupies one page less than it
actually does, and may select an IO buffer that doesn't have enough
spare buffer elements to fit all of the skb's data.
HW detects this as a malformed buffer descriptor, and raises an
exception which then triggers device recovery.
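A worked example with 4K pages (addresses are illustrative): suppose the
data starts at 0x1f00 and its last byte sits at 0x3000, so 'end' is 0x3001.
The range touches PFNs 1, 2 and 3, i.e. three buffer elements are needed:

	old: PFN_UP(end - 1) - PFN_DOWN(start) = PFN_UP(0x3000) - PFN_DOWN(0x1f00) = 3 - 1 = 2
	new: PFN_UP(end)     - PFN_DOWN(start) = PFN_UP(0x3001) - PFN_DOWN(0x1f00) = 4 - 1 = 3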
Fixes: 2863c61334aa ("qeth: refactor calculation of SBALE count")
Signed-off-by: Ursula Braun <ubraun(a)linux.vnet.ibm.com>
Signed-off-by: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/s390/net/qeth_core.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -836,7 +836,7 @@ struct qeth_trap_id {
*/
static inline int qeth_get_elements_for_range(addr_t start, addr_t end)
{
- return PFN_UP(end - 1) - PFN_DOWN(start);
+ return PFN_UP(end) - PFN_DOWN(start);
}
static inline int qeth_get_micros(void)
Patches currently in stable-queue which might be from ubraun(a)linux.vnet.ibm.com are
queue-4.15/s390-qeth-fix-underestimated-count-of-buffer-elements.patch
This is a note to let you know that I've just added the patch titled
s390/qeth: fix SETIP command handling
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
s390-qeth-fix-setip-command-handling.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:57 PST 2018
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Date: Fri, 9 Feb 2018 11:03:50 +0100
Subject: s390/qeth: fix SETIP command handling
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
[ Upstream commit 1c5b2216fbb973a9410e0b06389740b5c1289171 ]
send_control_data() applies some special handling to SETIP v4 IPA
commands. But the current code parses *all* command types for the SETIP
command code. Limit the command code check to IPA commands.
Fixes: 5b54e16f1a54 ("qeth: do not spin for SETIP ip assist command")
Signed-off-by: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/s390/net/qeth_core.h | 5 +++++
drivers/s390/net/qeth_core_main.c | 14 ++++++++------
2 files changed, 13 insertions(+), 6 deletions(-)
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -581,6 +581,11 @@ struct qeth_cmd_buffer {
void (*callback) (struct qeth_channel *, struct qeth_cmd_buffer *);
};
+static inline struct qeth_ipa_cmd *__ipa_cmd(struct qeth_cmd_buffer *iob)
+{
+ return (struct qeth_ipa_cmd *)(iob->data + IPA_PDU_HEADER_SIZE);
+}
+
/**
* definition of a qeth channel, used for read and write
*/
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -2057,7 +2057,7 @@ int qeth_send_control_data(struct qeth_c
unsigned long flags;
struct qeth_reply *reply = NULL;
unsigned long timeout, event_timeout;
- struct qeth_ipa_cmd *cmd;
+ struct qeth_ipa_cmd *cmd = NULL;
QETH_CARD_TEXT(card, 2, "sendctl");
@@ -2083,10 +2083,13 @@ int qeth_send_control_data(struct qeth_c
while (atomic_cmpxchg(&card->write.irq_pending, 0, 1)) ;
qeth_prepare_control_data(card, len, iob);
- if (IS_IPA(iob->data))
+ if (IS_IPA(iob->data)) {
+ cmd = __ipa_cmd(iob);
event_timeout = QETH_IPA_TIMEOUT;
- else
+ } else {
event_timeout = QETH_TIMEOUT;
+ }
+
timeout = jiffies + event_timeout;
QETH_CARD_TEXT(card, 6, "noirqpnd");
@@ -2111,9 +2114,8 @@ int qeth_send_control_data(struct qeth_c
/* we have only one long running ipassist, since we can ensure
process context of this command we can sleep */
- cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE);
- if ((cmd->hdr.command == IPA_CMD_SETIP) &&
- (cmd->hdr.prot_version == QETH_PROT_IPV4)) {
+ if (cmd && cmd->hdr.command == IPA_CMD_SETIP &&
+ cmd->hdr.prot_version == QETH_PROT_IPV4) {
if (!wait_event_timeout(reply->wait_q,
atomic_read(&reply->received), event_timeout))
goto time_err;
Patches currently in stable-queue which might be from jwi(a)linux.vnet.ibm.com are
queue-4.15/s390-qeth-fix-setip-command-handling.patch
queue-4.15/s390-qeth-fix-ip-address-lookup-for-l3-devices.patch
queue-4.15/s390-qeth-fix-ipa-command-submission-race.patch
queue-4.15/revert-s390-qeth-fix-using-of-ref-counter-for-rxip-addresses.patch
queue-4.15/s390-qeth-fix-overestimated-count-of-buffer-elements.patch
queue-4.15/s390-qeth-fix-double-free-on-ip-add-remove-race.patch
queue-4.15/s390-qeth-fix-ip-removal-on-offline-cards.patch
queue-4.15/s390-qeth-fix-underestimated-count-of-buffer-elements.patch
This is a note to let you know that I've just added the patch titled
s390/qeth: fix overestimated count of buffer elements
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
s390-qeth-fix-overestimated-count-of-buffer-elements.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:57 PST 2018
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Date: Tue, 27 Feb 2018 18:58:12 +0100
Subject: s390/qeth: fix overestimated count of buffer elements
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
[ Upstream commit 12472af89632beb1ed8dea29d4efe208ca05b06a ]
qeth_get_elements_for_range() doesn't know how to handle a 0-length
range (i.e. start == end), and returns 1 when it should return 0.
Such ranges occur on TSO skbs, where the L2/L3/L4 headers (and thus all
of the skb's linear data) are skipped when mapping the skb into regular
buffer elements.
This overestimation may cause several performance-related issues:
1. sub-optimal IO buffer selection, where the next buffer gets selected
even though the skb would actually still fit into the current buffer.
2. forced linearization, if the element count for a non-linear skb
exceeds QETH_MAX_BUFFER_ELEMENTS.
Rather than modifying qeth_get_elements_for_range() and adding overhead
to every caller, fix up those callers that are in risk of passing a
0-length range.
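A worked example with 4K pages (the address is illustrative): for a TSO
skb whose linear data is exactly the skipped headers, start == end, say
both 0x2100. qeth_get_elements_for_range() then computes

	PFN_UP(0x2100) - PFN_DOWN(0x2100) = 3 - 2 = 1

element for zero bytes of data, which is why the callers below now guard
the call with 'if (start != end)'.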
Fixes: 2863c61334aa ("qeth: refactor calculation of SBALE count")
Signed-off-by: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/s390/net/qeth_core_main.c | 10 ++++++----
drivers/s390/net/qeth_l3_main.c | 11 ++++++-----
2 files changed, 12 insertions(+), 9 deletions(-)
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -3835,10 +3835,12 @@ EXPORT_SYMBOL_GPL(qeth_get_elements_for_
int qeth_get_elements_no(struct qeth_card *card,
struct sk_buff *skb, int extra_elems, int data_offset)
{
- int elements = qeth_get_elements_for_range(
- (addr_t)skb->data + data_offset,
- (addr_t)skb->data + skb_headlen(skb)) +
- qeth_get_elements_for_frags(skb);
+ addr_t end = (addr_t)skb->data + skb_headlen(skb);
+ int elements = qeth_get_elements_for_frags(skb);
+ addr_t start = (addr_t)skb->data + data_offset;
+
+ if (start != end)
+ elements += qeth_get_elements_for_range(start, end);
if ((elements + extra_elems) > QETH_MAX_BUFFER_ELEMENTS(card)) {
QETH_DBF_MESSAGE(2, "Invalid size of IP packet "
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -2629,11 +2629,12 @@ static void qeth_tso_fill_header(struct
static int qeth_l3_get_elements_no_tso(struct qeth_card *card,
struct sk_buff *skb, int extra_elems)
{
- addr_t tcpdptr = (addr_t)tcp_hdr(skb) + tcp_hdrlen(skb);
- int elements = qeth_get_elements_for_range(
- tcpdptr,
- (addr_t)skb->data + skb_headlen(skb)) +
- qeth_get_elements_for_frags(skb);
+ addr_t start = (addr_t)tcp_hdr(skb) + tcp_hdrlen(skb);
+ addr_t end = (addr_t)skb->data + skb_headlen(skb);
+ int elements = qeth_get_elements_for_frags(skb);
+
+ if (start != end)
+ elements += qeth_get_elements_for_range(start, end);
if ((elements + extra_elems) > QETH_MAX_BUFFER_ELEMENTS(card)) {
QETH_DBF_MESSAGE(2,
Patches currently in stable-queue which might be from jwi(a)linux.vnet.ibm.com are
queue-4.15/s390-qeth-fix-setip-command-handling.patch
queue-4.15/s390-qeth-fix-ip-address-lookup-for-l3-devices.patch
queue-4.15/s390-qeth-fix-ipa-command-submission-race.patch
queue-4.15/revert-s390-qeth-fix-using-of-ref-counter-for-rxip-addresses.patch
queue-4.15/s390-qeth-fix-overestimated-count-of-buffer-elements.patch
queue-4.15/s390-qeth-fix-double-free-on-ip-add-remove-race.patch
queue-4.15/s390-qeth-fix-ip-removal-on-offline-cards.patch
queue-4.15/s390-qeth-fix-underestimated-count-of-buffer-elements.patch
This is a note to let you know that I've just added the patch titled
s390/qeth: fix IPA command submission race
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
s390-qeth-fix-ipa-command-submission-race.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:57 PST 2018
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Date: Tue, 27 Feb 2018 18:58:17 +0100
Subject: s390/qeth: fix IPA command submission race
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
[ Upstream commit d22ffb5a712f9211ffd104c38fc17cbfb1b5e2b0 ]
If multiple IPA commands are built & sent out concurrently,
fill_ipacmd_header() may assign a seqno value to a command that's
different from what send_control_data() later assigns to this command's
reply.
This is due to other commands passing through send_control_data(),
and incrementing card->seqno.ipa along the way.
So one IPA command has no reply that's waiting for its seqno, while some
other IPA command has multiple reply objects waiting for it.
Only one of those waiting replies wins, and the other(s) time out and
trigger a recovery via send_ipa_cmd().
Fix this by making sure that the same seqno value is assigned to
a command and its reply object.
Do so immediately before submitting the command & while holding the
irq_pending "lock", to produce nicely ascending seqnos.
As a side effect, *all* IPA commands now use a reply object that's
waiting for its actual seqno. Previously, early IPA commands that were
submitted while the card was still DOWN used the "catch-all" IDX seqno.
Signed-off-by: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/s390/net/qeth_core_main.c | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -2071,24 +2071,25 @@ int qeth_send_control_data(struct qeth_c
}
reply->callback = reply_cb;
reply->param = reply_param;
- if (card->state == CARD_STATE_DOWN)
- reply->seqno = QETH_IDX_COMMAND_SEQNO;
- else
- reply->seqno = card->seqno.ipa++;
+
init_waitqueue_head(&reply->wait_q);
- spin_lock_irqsave(&card->lock, flags);
- list_add_tail(&reply->list, &card->cmd_waiter_list);
- spin_unlock_irqrestore(&card->lock, flags);
while (atomic_cmpxchg(&card->write.irq_pending, 0, 1)) ;
- qeth_prepare_control_data(card, len, iob);
if (IS_IPA(iob->data)) {
cmd = __ipa_cmd(iob);
+ cmd->hdr.seqno = card->seqno.ipa++;
+ reply->seqno = cmd->hdr.seqno;
event_timeout = QETH_IPA_TIMEOUT;
} else {
+ reply->seqno = QETH_IDX_COMMAND_SEQNO;
event_timeout = QETH_TIMEOUT;
}
+ qeth_prepare_control_data(card, len, iob);
+
+ spin_lock_irqsave(&card->lock, flags);
+ list_add_tail(&reply->list, &card->cmd_waiter_list);
+ spin_unlock_irqrestore(&card->lock, flags);
timeout = jiffies + event_timeout;
@@ -2870,7 +2871,7 @@ static void qeth_fill_ipacmd_header(stru
memset(cmd, 0, sizeof(struct qeth_ipa_cmd));
cmd->hdr.command = command;
cmd->hdr.initiator = IPA_CMD_INITIATOR_HOST;
- cmd->hdr.seqno = card->seqno.ipa;
+ /* cmd->hdr.seqno is set by qeth_send_control_data() */
cmd->hdr.adapter_type = qeth_get_ipa_adp_type(card->info.link_type);
cmd->hdr.rel_adapter_no = (__u8) card->info.portno;
if (card->options.layer2)
Patches currently in stable-queue which might be from jwi(a)linux.vnet.ibm.com are
queue-4.15/s390-qeth-fix-setip-command-handling.patch
queue-4.15/s390-qeth-fix-ip-address-lookup-for-l3-devices.patch
queue-4.15/s390-qeth-fix-ipa-command-submission-race.patch
queue-4.15/revert-s390-qeth-fix-using-of-ref-counter-for-rxip-addresses.patch
queue-4.15/s390-qeth-fix-overestimated-count-of-buffer-elements.patch
queue-4.15/s390-qeth-fix-double-free-on-ip-add-remove-race.patch
queue-4.15/s390-qeth-fix-ip-removal-on-offline-cards.patch
queue-4.15/s390-qeth-fix-underestimated-count-of-buffer-elements.patch
This is a note to let you know that I've just added the patch titled
s390/qeth: fix IP removal on offline cards
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
s390-qeth-fix-ip-removal-on-offline-cards.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:57 PST 2018
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Date: Tue, 27 Feb 2018 18:58:13 +0100
Subject: s390/qeth: fix IP removal on offline cards
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
[ Upstream commit 98d823ab1fbdcb13abc25b420f9bb71bade42056 ]
If the HW is not reachable, then none of the IPs in qeth's internal
table has been registered with the HW yet. So when deleting such an IP,
there's no need to stage it for deregistration - just drop it from
the table.
This fixes the "add-delete-add" scenario on an offline card, where the
second "add" merely increments the IP's use count. But as the IP is
still set to DISP_ADDR_DELETE from the previous "delete" step,
l3_recover_ip() won't register it with the HW when the card goes online.
Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
Signed-off-by: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/s390/net/qeth_l3_main.c | 14 +++-----------
1 file changed, 3 insertions(+), 11 deletions(-)
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -256,12 +256,8 @@ int qeth_l3_delete_ip(struct qeth_card *
if (addr->in_progress)
return -EINPROGRESS;
- if (!qeth_card_hw_is_reachable(card)) {
- addr->disp_flag = QETH_DISP_ADDR_DELETE;
- return 0;
- }
-
- rc = qeth_l3_deregister_addr_entry(card, addr);
+ if (qeth_card_hw_is_reachable(card))
+ rc = qeth_l3_deregister_addr_entry(card, addr);
hash_del(&addr->hnode);
kfree(addr);
@@ -404,11 +400,7 @@ static void qeth_l3_recover_ip(struct qe
spin_lock_bh(&card->ip_lock);
hash_for_each_safe(card->ip_htable, i, tmp, addr, hnode) {
- if (addr->disp_flag == QETH_DISP_ADDR_DELETE) {
- qeth_l3_deregister_addr_entry(card, addr);
- hash_del(&addr->hnode);
- kfree(addr);
- } else if (addr->disp_flag == QETH_DISP_ADDR_ADD) {
+ if (addr->disp_flag == QETH_DISP_ADDR_ADD) {
if (addr->proto == QETH_PROT_IPV4) {
addr->in_progress = 1;
spin_unlock_bh(&card->ip_lock);
Patches currently in stable-queue which might be from jwi(a)linux.vnet.ibm.com are
queue-4.15/s390-qeth-fix-setip-command-handling.patch
queue-4.15/s390-qeth-fix-ip-address-lookup-for-l3-devices.patch
queue-4.15/s390-qeth-fix-ipa-command-submission-race.patch
queue-4.15/revert-s390-qeth-fix-using-of-ref-counter-for-rxip-addresses.patch
queue-4.15/s390-qeth-fix-overestimated-count-of-buffer-elements.patch
queue-4.15/s390-qeth-fix-double-free-on-ip-add-remove-race.patch
queue-4.15/s390-qeth-fix-ip-removal-on-offline-cards.patch
queue-4.15/s390-qeth-fix-underestimated-count-of-buffer-elements.patch
This is a note to let you know that I've just added the patch titled
s390/qeth: fix IP address lookup for L3 devices
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
s390-qeth-fix-ip-address-lookup-for-l3-devices.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:57 PST 2018
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Date: Tue, 27 Feb 2018 18:58:16 +0100
Subject: s390/qeth: fix IP address lookup for L3 devices
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
[ Upstream commit c5c48c58b259bb8f0482398370ee539d7a12df3e ]
Current code ("qeth_l3_ip_from_hash()") matches a queried address object
against objects in the IP table by IP address, Mask/Prefix Length and
MAC address ("qeth_l3_ipaddrs_is_equal()"). But what callers actually
require is either
a) "is this IP address registered" (ie. match by IP address only),
before adding a new address.
b) or "is this address object registered" (ie. match all relevant
attributes), before deleting an address.
Right now
1. the ADD path is too strict in its lookup, and eg. doesn't detect
conflicts between an existing NORMAL address and a new VIPA address
(because the NORMAL address will have mask != 0, while VIPA has
a mask == 0),
2. the DELETE path is not strict enough, and eg. allows del_rxip() to
delete a VIPA address as long as the IP address matches.
Fix all this by adding helpers (_addr_match_ip() and _addr_match_all())
that do the appropriate checking.
Note that the ADD path for NORMAL addresses is special, as qeth keeps
track of how many times such an address is in use (and there is no
immediate way of returning errors to the caller). So when a requested
NORMAL address _fully_ matches an existing one, it's not considered a
conflict and we merely increment the refcount.
Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
Signed-off-by: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/s390/net/qeth_l3.h | 34 ++++++++++++++
drivers/s390/net/qeth_l3_main.c | 91 ++++++++++++++++++----------------------
2 files changed, 74 insertions(+), 51 deletions(-)
--- a/drivers/s390/net/qeth_l3.h
+++ b/drivers/s390/net/qeth_l3.h
@@ -40,8 +40,40 @@ struct qeth_ipaddr {
unsigned int pfxlen;
} a6;
} u;
-
};
+
+static inline bool qeth_l3_addr_match_ip(struct qeth_ipaddr *a1,
+ struct qeth_ipaddr *a2)
+{
+ if (a1->proto != a2->proto)
+ return false;
+ if (a1->proto == QETH_PROT_IPV6)
+ return ipv6_addr_equal(&a1->u.a6.addr, &a2->u.a6.addr);
+ return a1->u.a4.addr == a2->u.a4.addr;
+}
+
+static inline bool qeth_l3_addr_match_all(struct qeth_ipaddr *a1,
+ struct qeth_ipaddr *a2)
+{
+ /* Assumes that the pair was obtained via qeth_l3_addr_find_by_ip(),
+ * so 'proto' and 'addr' match for sure.
+ *
+ * For ucast:
+ * - 'mac' is always 0.
+ * - 'mask'/'pfxlen' for RXIP/VIPA is always 0. For NORMAL, matching
+ * values are required to avoid mixups in takeover eligibility.
+ *
+ * For mcast,
+ * - 'mac' is mapped from the IP, and thus always matches.
+ * - 'mask'/'pfxlen' is always 0.
+ */
+ if (a1->type != a2->type)
+ return false;
+ if (a1->proto == QETH_PROT_IPV6)
+ return a1->u.a6.pfxlen == a2->u.a6.pfxlen;
+ return a1->u.a4.mask == a2->u.a4.mask;
+}
+
static inline u64 qeth_l3_ipaddr_hash(struct qeth_ipaddr *addr)
{
u64 ret = 0;
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -150,6 +150,24 @@ int qeth_l3_string_to_ipaddr(const char
return -EINVAL;
}
+static struct qeth_ipaddr *qeth_l3_find_addr_by_ip(struct qeth_card *card,
+ struct qeth_ipaddr *query)
+{
+ u64 key = qeth_l3_ipaddr_hash(query);
+ struct qeth_ipaddr *addr;
+
+ if (query->is_multicast) {
+ hash_for_each_possible(card->ip_mc_htable, addr, hnode, key)
+ if (qeth_l3_addr_match_ip(addr, query))
+ return addr;
+ } else {
+ hash_for_each_possible(card->ip_htable, addr, hnode, key)
+ if (qeth_l3_addr_match_ip(addr, query))
+ return addr;
+ }
+ return NULL;
+}
+
static void qeth_l3_convert_addr_to_bits(u8 *addr, u8 *bits, int len)
{
int i, j;
@@ -203,34 +221,6 @@ static bool qeth_l3_is_addr_covered_by_i
return rc;
}
-inline int
-qeth_l3_ipaddrs_is_equal(struct qeth_ipaddr *addr1, struct qeth_ipaddr *addr2)
-{
- return addr1->proto == addr2->proto &&
- !memcmp(&addr1->u, &addr2->u, sizeof(addr1->u)) &&
- !memcmp(&addr1->mac, &addr2->mac, sizeof(addr1->mac));
-}
-
-static struct qeth_ipaddr *
-qeth_l3_ip_from_hash(struct qeth_card *card, struct qeth_ipaddr *tmp_addr)
-{
- struct qeth_ipaddr *addr;
-
- if (tmp_addr->is_multicast) {
- hash_for_each_possible(card->ip_mc_htable, addr,
- hnode, qeth_l3_ipaddr_hash(tmp_addr))
- if (qeth_l3_ipaddrs_is_equal(tmp_addr, addr))
- return addr;
- } else {
- hash_for_each_possible(card->ip_htable, addr,
- hnode, qeth_l3_ipaddr_hash(tmp_addr))
- if (qeth_l3_ipaddrs_is_equal(tmp_addr, addr))
- return addr;
- }
-
- return NULL;
-}
-
int qeth_l3_delete_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr)
{
int rc = 0;
@@ -245,8 +235,8 @@ int qeth_l3_delete_ip(struct qeth_card *
QETH_CARD_HEX(card, 4, ((char *)&tmp_addr->u.a6.addr) + 8, 8);
}
- addr = qeth_l3_ip_from_hash(card, tmp_addr);
- if (!addr)
+ addr = qeth_l3_find_addr_by_ip(card, tmp_addr);
+ if (!addr || !qeth_l3_addr_match_all(addr, tmp_addr))
return -ENOENT;
addr->ref_counter--;
@@ -268,6 +258,7 @@ int qeth_l3_add_ip(struct qeth_card *car
{
int rc = 0;
struct qeth_ipaddr *addr;
+ char buf[40];
QETH_CARD_TEXT(card, 4, "addip");
@@ -278,8 +269,20 @@ int qeth_l3_add_ip(struct qeth_card *car
QETH_CARD_HEX(card, 4, ((char *)&tmp_addr->u.a6.addr) + 8, 8);
}
- addr = qeth_l3_ip_from_hash(card, tmp_addr);
- if (!addr) {
+ addr = qeth_l3_find_addr_by_ip(card, tmp_addr);
+ if (addr) {
+ if (tmp_addr->type != QETH_IP_TYPE_NORMAL)
+ return -EADDRINUSE;
+ if (qeth_l3_addr_match_all(addr, tmp_addr)) {
+ addr->ref_counter++;
+ return 0;
+ }
+ qeth_l3_ipaddr_to_string(tmp_addr->proto, (u8 *)&tmp_addr->u,
+ buf);
+ dev_warn(&card->gdev->dev,
+ "Registering IP address %s failed\n", buf);
+ return -EADDRINUSE;
+ } else {
addr = qeth_l3_get_addr_buffer(tmp_addr->proto);
if (!addr)
return -ENOMEM;
@@ -327,11 +330,7 @@ int qeth_l3_add_ip(struct qeth_card *car
hash_del(&addr->hnode);
kfree(addr);
}
- } else {
- if (addr->type == QETH_IP_TYPE_NORMAL)
- addr->ref_counter++;
}
-
return rc;
}
@@ -715,12 +714,7 @@ int qeth_l3_add_vipa(struct qeth_card *c
return -ENOMEM;
spin_lock_bh(&card->ip_lock);
-
- if (qeth_l3_ip_from_hash(card, ipaddr))
- rc = -EEXIST;
- else
- qeth_l3_add_ip(card, ipaddr);
-
+ rc = qeth_l3_add_ip(card, ipaddr);
spin_unlock_bh(&card->ip_lock);
kfree(ipaddr);
@@ -783,12 +777,7 @@ int qeth_l3_add_rxip(struct qeth_card *c
return -ENOMEM;
spin_lock_bh(&card->ip_lock);
-
- if (qeth_l3_ip_from_hash(card, ipaddr))
- rc = -EEXIST;
- else
- qeth_l3_add_ip(card, ipaddr);
-
+ rc = qeth_l3_add_ip(card, ipaddr);
spin_unlock_bh(&card->ip_lock);
kfree(ipaddr);
@@ -1396,8 +1385,9 @@ qeth_l3_add_mc_to_hash(struct qeth_card
memcpy(tmp->mac, buf, sizeof(tmp->mac));
tmp->is_multicast = 1;
- ipm = qeth_l3_ip_from_hash(card, tmp);
+ ipm = qeth_l3_find_addr_by_ip(card, tmp);
if (ipm) {
+ /* for mcast, by-IP match means full match */
ipm->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
} else {
ipm = qeth_l3_get_addr_buffer(QETH_PROT_IPV4);
@@ -1480,8 +1470,9 @@ qeth_l3_add_mc6_to_hash(struct qeth_card
sizeof(struct in6_addr));
tmp->is_multicast = 1;
- ipm = qeth_l3_ip_from_hash(card, tmp);
+ ipm = qeth_l3_find_addr_by_ip(card, tmp);
if (ipm) {
+ /* for mcast, by-IP match means full match */
ipm->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
continue;
}
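The split between a by-IP lookup and a full-attribute check can also be
sketched in plain C. The snippet below is a simplified userspace model with
hypothetical names (model_addr, add_addr and so on), not the qeth
implementation; it only shows the ADD-path decision between bumping a use
count, reporting a conflict, and inserting a new entry.
#include <stdbool.h>
#include <stdio.h>

enum ip_type { TYPE_NORMAL, TYPE_VIPA, TYPE_RXIP };

struct model_addr {
	unsigned int ip;	/* IPv4 address, host order */
	unsigned int mask;	/* 0 for VIPA/RXIP          */
	enum ip_type type;
	int ref;
};

static bool match_ip(const struct model_addr *a, const struct model_addr *b)
{
	return a->ip == b->ip;
}

static bool match_all(const struct model_addr *a, const struct model_addr *b)
{
	return a->type == b->type && a->mask == b->mask;
}

/* Returns 0 on success, -1 on an address conflict. */
static int add_addr(struct model_addr *table, int n, struct model_addr *req)
{
	for (int i = 0; i < n; i++) {
		if (!match_ip(&table[i], req))
			continue;
		if (req->type == TYPE_NORMAL && match_all(&table[i], req)) {
			table[i].ref++;	/* same NORMAL address: just count the user */
			return 0;
		}
		return -1;		/* anything else on the same IP is a conflict */
	}
	/* not found: a real implementation would insert a new entry here */
	return 0;
}

int main(void)
{
	struct model_addr table[] = {
		{ .ip = 0x0a000001, .mask = 0xffffff00, .type = TYPE_NORMAL, .ref = 1 },
	};
	struct model_addr vipa = { .ip = 0x0a000001, .mask = 0, .type = TYPE_VIPA };

	/* With the by-IP lookup first, the VIPA request now collides as intended. */
	printf("add VIPA on used IP: %d\n", add_addr(table, 1, &vipa));
	return 0;
}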
Patches currently in stable-queue which might be from jwi(a)linux.vnet.ibm.com are
queue-4.15/s390-qeth-fix-setip-command-handling.patch
queue-4.15/s390-qeth-fix-ip-address-lookup-for-l3-devices.patch
queue-4.15/s390-qeth-fix-ipa-command-submission-race.patch
queue-4.15/revert-s390-qeth-fix-using-of-ref-counter-for-rxip-addresses.patch
queue-4.15/s390-qeth-fix-overestimated-count-of-buffer-elements.patch
queue-4.15/s390-qeth-fix-double-free-on-ip-add-remove-race.patch
queue-4.15/s390-qeth-fix-ip-removal-on-offline-cards.patch
queue-4.15/s390-qeth-fix-underestimated-count-of-buffer-elements.patch
This is a note to let you know that I've just added the patch titled
s390/qeth: fix double-free on IP add/remove race
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
s390-qeth-fix-double-free-on-ip-add-remove-race.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:57 PST 2018
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Date: Tue, 27 Feb 2018 18:58:14 +0100
Subject: s390/qeth: fix double-free on IP add/remove race
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
[ Upstream commit 14d066c3531a87f727968cacd85bd95c75f59843 ]
Registering an IPv4 address with the HW takes quite a while, so we
temporarily drop the ip_htable lock. Any concurrent add/remove of the
same IP adjusts the IP's use count, and (on remove) is then blocked by
addr->in_progress.
After the register call has completed, we check the use count for
concurrently attempted add/remove calls - and possibly straight-away
deregister the IP again. This happens via l3_delete_ip(), which
1) looks up the queried IP in the htable (getting a reference to the
*same* queried object),
2) deregisters the IP from the HW, and
3) frees the IP object.
The caller in l3_add_ip() then does a second free on the same object.
For this case, skip all the extra checks and lookups in l3_delete_ip()
and just deregister & free the IP object ourselves.
Fixes: 5f78e29ceebf ("qeth: optimize IP handling in rx_mode callback")
Signed-off-by: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/s390/net/qeth_l3_main.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -320,7 +320,8 @@ int qeth_l3_add_ip(struct qeth_card *car
(rc == IPA_RC_LAN_OFFLINE)) {
addr->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
if (addr->ref_counter < 1) {
- qeth_l3_delete_ip(card, addr);
+ qeth_l3_deregister_addr_entry(card, addr);
+ hash_del(&addr->hnode);
kfree(addr);
}
} else {
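The shape of the bug is generic enough to show outside the driver: a
lookup-based delete helper frees the very object its caller is about to free
again. The standalone C sketch below (made-up names, not qeth code) contrasts
that with the fixed teardown, which unlinks and frees the object exactly once.
#include <stdlib.h>
#include <string.h>

struct entry { char name[16]; };

static struct entry *table_slot;	/* stand-in for the hash table */

/* Lookup-based delete: frees whatever object the table currently holds. */
static void delete_by_lookup(const char *name)
{
	if (table_slot && strcmp(table_slot->name, name) == 0) {
		free(table_slot);
		table_slot = NULL;
	}
}

/* Caller-side teardown after a failed registration. */
static void teardown(struct entry *e, int buggy)
{
	if (buggy) {
		delete_by_lookup(e->name);	/* frees e through the table ...   */
		free(e);			/* ... and this is the double free */
	} else {
		table_slot = NULL;		/* fixed: unlink ...               */
		free(e);			/* ... and free exactly once       */
	}
}

int main(void)
{
	struct entry *e = calloc(1, sizeof(*e));

	if (!e)
		return 1;
	strcpy(e->name, "10.0.0.1");
	table_slot = e;
	teardown(e, 0);				/* pass 1 to reproduce the crash */
	return 0;
}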
Patches currently in stable-queue which might be from jwi(a)linux.vnet.ibm.com are
queue-4.15/s390-qeth-fix-setip-command-handling.patch
queue-4.15/s390-qeth-fix-ip-address-lookup-for-l3-devices.patch
queue-4.15/s390-qeth-fix-ipa-command-submission-race.patch
queue-4.15/revert-s390-qeth-fix-using-of-ref-counter-for-rxip-addresses.patch
queue-4.15/s390-qeth-fix-overestimated-count-of-buffer-elements.patch
queue-4.15/s390-qeth-fix-double-free-on-ip-add-remove-race.patch
queue-4.15/s390-qeth-fix-ip-removal-on-offline-cards.patch
queue-4.15/s390-qeth-fix-underestimated-count-of-buffer-elements.patch
This is a note to let you know that I've just added the patch titled
rxrpc: Fix send in rxrpc_send_data_packet()
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rxrpc-fix-send-in-rxrpc_send_data_packet.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: David Howells <dhowells(a)redhat.com>
Date: Thu, 22 Feb 2018 14:38:14 +0000
Subject: rxrpc: Fix send in rxrpc_send_data_packet()
From: David Howells <dhowells(a)redhat.com>
[ Upstream commit 93c62c45ed5fad1b87e3a45835b251cd68de9c46 ]
All the kernel_sendmsg() calls in rxrpc_send_data_packet() need to send
both parts of the iov[] buffer, but one of them does not. Fix it so that
it does.
Without this, short IPv6 rxrpc DATA packets may be seen that have the rxrpc
header included, but no payload.
Fixes: 5a924b8951f8 ("rxrpc: Don't store the rxrpc header in the Tx queue sk_buffs")
Reported-by: Marc Dionne <marc.dionne(a)auristor.com>
Signed-off-by: David Howells <dhowells(a)redhat.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/rxrpc/output.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/net/rxrpc/output.c
+++ b/net/rxrpc/output.c
@@ -445,7 +445,7 @@ send_fragmentable:
(char *)&opt, sizeof(opt));
if (ret == 0) {
ret = kernel_sendmsg(conn->params.local->socket, &msg,
- iov, 1, iov[0].iov_len);
+ iov, 2, len);
opt = IPV6_PMTUDISC_DO;
kernel_setsockopt(conn->params.local->socket,
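The same class of bug is easy to demonstrate with a plain userspace
sendmsg(): when a message is assembled from two iovec entries but the iovec
count only covers the first, the payload silently disappears. The snippet
below is just that analogy, not rxrpc code.
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int main(void)
{
	int sv[2];
	char hdr[4] = "HDR", payload[8] = "PAYLOAD", rx[32];
	struct iovec iov[2] = {
		{ .iov_base = hdr,     .iov_len = sizeof(hdr) },
		{ .iov_base = payload, .iov_len = sizeof(payload) },
	};
	struct msghdr msg = { .msg_iov = iov, .msg_iovlen = 1 };	/* BUG: only part 1 */

	if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv))
		return 1;

	sendmsg(sv[0], &msg, 0);
	printf("short send: %zd bytes received\n", recv(sv[1], rx, sizeof(rx), 0));

	msg.msg_iovlen = 2;		/* fix: cover both parts of iov[] */
	sendmsg(sv[0], &msg, 0);
	printf("full send:  %zd bytes received\n", recv(sv[1], rx, sizeof(rx), 0));
	return 0;
}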
Patches currently in stable-queue which might be from dhowells(a)redhat.com are
queue-4.15/rxrpc-fix-send-in-rxrpc_send_data_packet.patch
This is a note to let you know that I've just added the patch titled
Revert "s390/qeth: fix using of ref counter for rxip addresses"
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
revert-s390-qeth-fix-using-of-ref-counter-for-rxip-addresses.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:57 PST 2018
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Date: Tue, 27 Feb 2018 18:58:15 +0100
Subject: Revert "s390/qeth: fix using of ref counter for rxip addresses"
From: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
[ Upstream commit 4964c66fd49b2e2342da35358f2ff74614bcbaee ]
This reverts commit cb816192d986f7596009dedcf2201fe2e5bc2aa7.
The issue this attempted to fix never actually occurs.
l3_add_rxip() checks (via l3_ip_from_hash()) if the requested address
was previously added to the card. If so, it returns -EEXIST and doesn't
call l3_add_ip().
As a result, the "address exists" path in l3_add_ip() is never taken
for rxip addresses, and this patch had no effect.
Fixes: cb816192d986 ("s390/qeth: fix using of ref counter for rxip addresses")
Signed-off-by: Julian Wiedmann <jwi(a)linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/s390/net/qeth_l3_main.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -250,8 +250,7 @@ int qeth_l3_delete_ip(struct qeth_card *
return -ENOENT;
addr->ref_counter--;
- if (addr->ref_counter > 0 && (addr->type == QETH_IP_TYPE_NORMAL ||
- addr->type == QETH_IP_TYPE_RXIP))
+ if (addr->type == QETH_IP_TYPE_NORMAL && addr->ref_counter > 0)
return rc;
if (addr->in_progress)
return -EINPROGRESS;
@@ -329,9 +328,8 @@ int qeth_l3_add_ip(struct qeth_card *car
kfree(addr);
}
} else {
- if (addr->type == QETH_IP_TYPE_NORMAL ||
- addr->type == QETH_IP_TYPE_RXIP)
- addr->ref_counter++;
+ if (addr->type == QETH_IP_TYPE_NORMAL)
+ addr->ref_counter++;
}
return rc;
Patches currently in stable-queue which might be from jwi(a)linux.vnet.ibm.com are
queue-4.15/s390-qeth-fix-setip-command-handling.patch
queue-4.15/s390-qeth-fix-ip-address-lookup-for-l3-devices.patch
queue-4.15/s390-qeth-fix-ipa-command-submission-race.patch
queue-4.15/revert-s390-qeth-fix-using-of-ref-counter-for-rxip-addresses.patch
queue-4.15/s390-qeth-fix-overestimated-count-of-buffer-elements.patch
queue-4.15/s390-qeth-fix-double-free-on-ip-add-remove-race.patch
queue-4.15/s390-qeth-fix-ip-removal-on-offline-cards.patch
queue-4.15/s390-qeth-fix-underestimated-count-of-buffer-elements.patch
This is a note to let you know that I've just added the patch titled
ppp: prevent unregistered channels from connecting to PPP units
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
ppp-prevent-unregistered-channels-from-connecting-to-ppp-units.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Guillaume Nault <g.nault(a)alphalink.fr>
Date: Fri, 2 Mar 2018 18:41:16 +0100
Subject: ppp: prevent unregistered channels from connecting to PPP units
From: Guillaume Nault <g.nault(a)alphalink.fr>
[ Upstream commit 77f840e3e5f09c6d7d727e85e6e08276dd813d11 ]
PPP units don't hold any reference on the channels connected to them.
It is the channel's responsibility to ensure that it disconnects from
its unit before being destroyed.
In practice, this is ensured by ppp_unregister_channel() disconnecting
the channel from the unit before dropping a reference on the channel.
However, it is possible for an unregistered channel to connect to a PPP
unit: register a channel with ppp_register_net_channel(), attach a
/dev/ppp file to it with ioctl(PPPIOCATTCHAN), unregister the channel
with ppp_unregister_channel() and finally connect the /dev/ppp file to
a PPP unit with ioctl(PPPIOCCONNECT).
Once in this situation, the channel is only held by the /dev/ppp file,
which can be released at anytime and free the channel without letting
the parent PPP unit know. Then the ppp structure ends up with dangling
pointers in its ->channels list.
Prevent this scenario by forbidding unregistered channels from
connecting to PPP units. This maintains the code logic by keeping
ppp_unregister_channel() responsible for disconnecting the channel if
necessary, and avoids modifying the reference counting mechanism.
This issue seems to predate git history (successfully reproduced on
Linux 2.6.26 and earlier PPP commits are unrelated).
Signed-off-by: Guillaume Nault <g.nault(a)alphalink.fr>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/net/ppp/ppp_generic.c | 9 +++++++++
1 file changed, 9 insertions(+)
--- a/drivers/net/ppp/ppp_generic.c
+++ b/drivers/net/ppp/ppp_generic.c
@@ -3161,6 +3161,15 @@ ppp_connect_channel(struct channel *pch,
goto outl;
ppp_lock(ppp);
+ spin_lock_bh(&pch->downl);
+ if (!pch->chan) {
+ /* Don't connect unregistered channels */
+ spin_unlock_bh(&pch->downl);
+ ppp_unlock(ppp);
+ ret = -ENOTCONN;
+ goto outl;
+ }
+ spin_unlock_bh(&pch->downl);
if (pch->file.hdrlen > ppp->file.hdrlen)
ppp->file.hdrlen = pch->file.hdrlen;
hdrlen = pch->file.hdrlen + 2; /* for protocol bytes */
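For reference, the ioctl sequence from the commit message looks roughly like
the sketch below. It is not a complete reproducer: it assumes chan_idx names
a channel that was registered through some other mechanism (for example a
PPPoX socket) and is later unregistered while the /dev/ppp file still holds
it, and those steps are not shown here.
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/ppp-ioctl.h>

int main(void)
{
	int chan_idx = 1;	/* placeholder: index of a registered channel */
	int unit = -1;		/* -1: let the kernel pick a unit number      */
	int chan_fd, unit_fd;

	chan_fd = open("/dev/ppp", O_RDWR);
	unit_fd = open("/dev/ppp", O_RDWR);
	if (chan_fd < 0 || unit_fd < 0)
		return 1;

	/* Attach this file to the channel ... */
	if (ioctl(chan_fd, PPPIOCATTCHAN, &chan_idx) < 0)
		perror("PPPIOCATTCHAN");

	/* ... create a PPP unit on the second file ... */
	if (ioctl(unit_fd, PPPIOCNEWUNIT, &unit) < 0)
		perror("PPPIOCNEWUNIT");

	/* ... and, after the channel has been unregistered elsewhere, try to
	 * connect it to the unit.  With the fix this fails with ENOTCONN
	 * instead of leaving a dangling pointer in the unit's channel list. */
	if (ioctl(chan_fd, PPPIOCCONNECT, &unit) < 0)
		perror("PPPIOCCONNECT");
	return 0;
}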
Patches currently in stable-queue which might be from g.nault(a)alphalink.fr are
queue-4.15/ppp-prevent-unregistered-channels-from-connecting-to-ppp-units.patch
This is a note to let you know that I've just added the patch titled
netlink: ensure to loop over all netns in genlmsg_multicast_allns()
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netlink-ensure-to-loop-over-all-netns-in-genlmsg_multicast_allns.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: Nicolas Dichtel <nicolas.dichtel(a)6wind.com>
Date: Tue, 6 Feb 2018 14:48:32 +0100
Subject: netlink: ensure to loop over all netns in genlmsg_multicast_allns()
From: Nicolas Dichtel <nicolas.dichtel(a)6wind.com>
[ Upstream commit cb9f7a9a5c96a773bbc9c70660dc600cfff82f82 ]
Nowadays, nlmsg_multicast() returns only 0 or -ESRCH, but this was not the
case when commit 134e63756d5f was pushed.
However, there is no reason to stop the loop just because one netns has
no listeners.
Return -ESRCH only if there were no listeners in any netns.
To avoid the same problem in the future, I did not assume that
nlmsg_multicast() returns only 0 or -ESRCH.
Fixes: 134e63756d5f ("genetlink: make netns aware")
CC: Johannes Berg <johannes.berg(a)intel.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel(a)6wind.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netlink/genetlink.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
--- a/net/netlink/genetlink.c
+++ b/net/netlink/genetlink.c
@@ -1081,6 +1081,7 @@ static int genlmsg_mcast(struct sk_buff
{
struct sk_buff *tmp;
struct net *net, *prev = NULL;
+ bool delivered = false;
int err;
for_each_net_rcu(net) {
@@ -1092,14 +1093,21 @@ static int genlmsg_mcast(struct sk_buff
}
err = nlmsg_multicast(prev->genl_sock, tmp,
portid, group, flags);
- if (err)
+ if (!err)
+ delivered = true;
+ else if (err != -ESRCH)
goto error;
}
prev = net;
}
- return nlmsg_multicast(prev->genl_sock, skb, portid, group, flags);
+ err = nlmsg_multicast(prev->genl_sock, skb, portid, group, flags);
+ if (!err)
+ delivered = true;
+ else if (err != -ESRCH)
+ goto error;
+ return delivered ? 0 : -ESRCH;
error:
kfree_skb(skb);
return err;
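The aggregation rule spelled out above (treat -ESRCH from one namespace as
non-fatal, and only report it if no namespace had listeners) can be modelled
as a small standalone function. The code below uses hypothetical names and
canned per-namespace results; it is not the kernel implementation.
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Per-namespace send results for the demo: 0 = delivered, -ESRCH = no listeners. */
static int multicast_to_all(const int *per_ns_result, int n)
{
	bool delivered = false;

	for (int i = 0; i < n; i++) {
		int err = per_ns_result[i];

		if (!err)
			delivered = true;
		else if (err != -ESRCH)
			return err;	/* real error: abort and report it */
	}
	return delivered ? 0 : -ESRCH;
}

int main(void)
{
	int only_middle_ns_listens[] = { -ESRCH, 0, -ESRCH };
	int nobody_listens[]         = { -ESRCH, -ESRCH };

	printf("%d\n", multicast_to_all(only_middle_ns_listens, 3));	/* 0      */
	printf("%d\n", multicast_to_all(nobody_listens, 2));		/* -ESRCH */
	return 0;
}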
Patches currently in stable-queue which might be from nicolas.dichtel(a)6wind.com are
queue-4.15/netlink-ensure-to-loop-over-all-netns-in-genlmsg_multicast_allns.patch
This is a note to let you know that I've just added the patch titled
netlink: put module reference if dump start fails
to the 4.15-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netlink-put-module-reference-if-dump-start-fails.patch
and it can be found in the queue-4.15 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From foo@baz Tue Mar 6 19:02:56 PST 2018
From: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
Date: Wed, 21 Feb 2018 04:41:59 +0100
Subject: netlink: put module reference if dump start fails
From: "Jason A. Donenfeld" <Jason(a)zx2c4.com>
[ Upstream commit b87b6194be631c94785fe93398651e804ed43e28 ]
Before, if cb->start() failed, the module reference would never be put,
because cb->cb_running is intentionally false at this point. Users are
generally annoyed by this because they can no longer unload modules that
leak references. Also, it may be possible to tediously wrap a reference
counter back to zero, especially since module.c still uses atomic_inc
instead of refcount_inc.
This patch expands the error path to simply call module_put if
cb->start() fails.
Fixes: 41c87425a1ac ("netlink: do not set cb_running if dump's start() errs")
Signed-off-by: Jason A. Donenfeld <Jason(a)zx2c4.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netlink/af_netlink.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -2275,7 +2275,7 @@ int __netlink_dump_start(struct sock *ss
if (cb->start) {
ret = cb->start(cb);
if (ret)
- goto error_unlock;
+ goto error_put;
}
nlk->cb_running = true;
@@ -2295,6 +2295,8 @@ int __netlink_dump_start(struct sock *ss
*/
return -EINTR;
+error_put:
+ module_put(control->module);
error_unlock:
sock_put(sk);
mutex_unlock(nlk->cb_mutex);
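The underlying rule is general: a reference taken before a setup step that
can fail needs a matching release on that step's error path. A minimal
standalone sketch of that shape, with hypothetical names rather than the
netlink code:
#include <stdio.h>

static int refcount;

static void get_ref(void) { refcount++; }
static void put_ref(void) { refcount--; }

static int start_fails(void) { return -1; }	/* stand-in for cb->start() */

static int dump_start(void)
{
	get_ref();				/* reference taken up front        */

	if (start_fails()) {
		put_ref();			/* the fix: drop it before bailing */
		return -1;
	}
	/* ... would mark the dump as running and keep the reference ... */
	return 0;
}

int main(void)
{
	dump_start();
	printf("refcount after failed start: %d\n", refcount);	/* 0, not 1 */
	return 0;
}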
Patches currently in stable-queue which might be from Jason(a)zx2c4.com are
queue-4.15/netlink-put-module-reference-if-dump-start-fails.patch