Two tiny patches for the IBRS code. They should go in
through the x86/pti tree and should apply to both 4.9 and 4.14 trees.
Thanks,
Paolo
v1->v2: removed patch 2; the same bug has already been fixed
Paolo Bonzini (2):
KVM: x86: use native MSR ops for SPEC_CTRL
KVM: VMX: mark RDMSR path as unlikely
arch/x86/kvm/svm.c | 9 +++++----
arch/x86/kvm/vmx.c | 9 +++++----
2 files changed, 10 insertions(+), 8 deletions(-)
--
1.8.3.1
Commit a307a1e6bc0d ("cpufreq: s3c: use cpufreq_generic_init()")
accidentally broke cpufreq on s3c2410 and s3c2412. These two platforms
don't have a CPU frequency table, and the driver used to skip calling
cpufreq_table_validate_and_show() for them. But with the above commit,
we started calling it unconditionally, and that eventually fails because
the frequency table pointer is NULL.
Fix this by calling cpufreq_table_validate_and_show() conditionally
again.
Cc: stable(a)vger.kernel.org # v3.13+
Fixes: a307a1e6bc0d ("cpufreq: s3c: use cpufreq_generic_init()")
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
---
drivers/cpufreq/s3c24xx-cpufreq.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/cpufreq/s3c24xx-cpufreq.c b/drivers/cpufreq/s3c24xx-cpufreq.c
index 7b596fa38ad2..6bebc1f9f55a 100644
--- a/drivers/cpufreq/s3c24xx-cpufreq.c
+++ b/drivers/cpufreq/s3c24xx-cpufreq.c
@@ -351,7 +351,13 @@ struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name)
static int s3c_cpufreq_init(struct cpufreq_policy *policy)
{
policy->clk = clk_arm;
- return cpufreq_generic_init(policy, ftab, cpu_cur.info->latency);
+
+ policy->cpuinfo.transition_latency = cpu_cur.info->latency;
+
+ if (ftab)
+ return cpufreq_table_validate_and_show(policy, ftab);
+
+ return 0;
}
static int __init s3c_cpufreq_initclks(void)
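For reference, cpufreq_generic_init() in this era boils down to roughly
the following (a simplified paraphrase based on the description above,
not the exact upstream source), which is why a NULL ftab makes the
unconditional call fail:

static int cpufreq_generic_init_sketch(struct cpufreq_policy *policy,
				       struct cpufreq_frequency_table *table,
				       unsigned int transition_latency)
{
	int ret;

	/* validates and publishes the table; fails when table is NULL */
	ret = cpufreq_table_validate_and_show(policy, table);
	if (ret)
		return ret;

	policy->cpuinfo.transition_latency = transition_latency;
	return 0;
}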
--
2.15.0.194.g9af6a3dea062
Hi James,
Here's a collection of fixes for Linux keyrings, mostly thanks to Eric
Biggers; could you pass them along to Linus? They include:
(1) Fix some PKCS#7 verification issues.
(2) Fix handling of unsupported crypto in X.509.
(3) Fix too-large allocation in big_key.
The patches can be found here also:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/tag/?…
And also on the keys-fixes branch.
David
---
David Howells (1):
KEYS: Use individual pages in big_key for crypto buffers
Eric Biggers (5):
PKCS#7: fix certificate chain verification
PKCS#7: fix certificate blacklisting
PKCS#7: fix direct verification of SignerInfo signature
X.509: fix BUG_ON() when hash algorithm is unsupported
X.509: fix NULL dereference when restricting key with unsupported_sig
crypto/asymmetric_keys/pkcs7_trust.c | 1
crypto/asymmetric_keys/pkcs7_verify.c | 12 ++--
crypto/asymmetric_keys/public_key.c | 4 +
crypto/asymmetric_keys/restrict.c | 21 ++++--
security/keys/big_key.c | 110 ++++++++++++++++++++++++++-------
5 files changed, 111 insertions(+), 37 deletions(-)
This is a note to let you know that I've just added the patch titled
xfrm: skip policies marked as dead while rehashing
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
xfrm-skip-policies-marked-as-dead-while-rehashing.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 862591bf4f519d1b8d859af720fafeaebdd0162a Mon Sep 17 00:00:00 2001
From: Florian Westphal <fw(a)strlen.de>
Date: Wed, 27 Dec 2017 23:25:45 +0100
Subject: xfrm: skip policies marked as dead while rehashing
From: Florian Westphal <fw(a)strlen.de>
commit 862591bf4f519d1b8d859af720fafeaebdd0162a upstream.
syzkaller triggered the following KASAN splat:
BUG: KASAN: slab-out-of-bounds in xfrm_hash_rebuild+0xdbe/0xf00 net/xfrm/xfrm_policy.c:618
read of size 2 at addr ffff8801c8e92fe4 by task kworker/1:1/23 [..]
Workqueue: events xfrm_hash_rebuild [..]
__asan_report_load2_noabort+0x14/0x20 mm/kasan/report.c:428
xfrm_hash_rebuild+0xdbe/0xf00 net/xfrm/xfrm_policy.c:618
process_one_work+0xbbf/0x1b10 kernel/workqueue.c:2112
worker_thread+0x223/0x1990 kernel/workqueue.c:2246 [..]
The reproducer triggers:
1016 if (error) {
1017 list_move_tail(&walk->walk.all, &x->all);
1018 goto out;
1019 }
in xfrm_policy_walk() via pfkey (it sets a tiny receive space, so the
dump callback returns -ENOBUFS).
In this case, *walk is located in the pfkey socket struct, so this socket
becomes visible in the global policy list.
It looks like this is intentional -- the phony walker has walk.dead set to 1
and all other places skip such "policies".
Cc'ing the original authors of the two commits that seem to expose this
issue (the first patch missed the ->dead check, the second adds pfkey
sockets to the policy dumper list).
Fixes: 880a6fab8f6ba5b ("xfrm: configure policy hash table thresholds by netlink")
Fixes: 12a169e7d8f4b1c ("ipsec: Put dumpers on the dump list")
Cc: Herbert Xu <herbert(a)gondor.apana.org.au>
Cc: Timo Teras <timo.teras(a)iki.fi>
Cc: Christophe Gouault <christophe.gouault(a)6wind.com>
Reported-by: syzbot <bot+c028095236fcb6f4348811565b75084c754dc729(a)syzkaller.appspotmail.com>
Signed-off-by: Florian Westphal <fw(a)strlen.de>
Signed-off-by: Steffen Klassert <steffen.klassert(a)secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/xfrm/xfrm_policy.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -610,7 +610,8 @@ static void xfrm_hash_rebuild(struct wor
/* re-insert all policies by order of creation */
list_for_each_entry_reverse(policy, &net->xfrm.policy_all, walk.all) {
- if (xfrm_policy_id2dir(policy->index) >= XFRM_POLICY_MAX) {
+ if (policy->walk.dead ||
+ xfrm_policy_id2dir(policy->index) >= XFRM_POLICY_MAX) {
/* skip socket policies */
continue;
}
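For context, the convention in the rest of xfrm (which the rehash path now
follows too) is simply to skip list entries whose embedded walker is marked
dead; a minimal sketch of that pattern (illustrative, not the exact
upstream code):

	struct xfrm_policy *pol;

	list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
		if (pol->walk.dead)
			continue;	/* phony entry left by a parked dump walker */
		/* ... operate on real policies only ... */
	}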
Patches currently in stable-queue which might be from fw(a)strlen.de are
queue-4.14/xfrm-skip-policies-marked-as-dead-while-rehashing.patch
queue-4.14/netfilter-x_tables-avoid-out-of-bounds-reads-in-xt_request_find_-match-target.patch
queue-4.14/netfilter-on-sockopt-acquire-sock-lock-only-in-the-required-scope.patch
queue-4.14/netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
queue-4.14/xfrm-don-t-call-xfrm_policy_cache_flush-while-holding-spinlock.patch
This is a note to let you know that I've just added the patch titled
xfrm: fix rcu usage in xfrm_get_type_offload
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
xfrm-fix-rcu-usage-in-xfrm_get_type_offload.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 2f10a61cee8fdb9f8da90f5db687e1862b22cf06 Mon Sep 17 00:00:00 2001
From: Sabrina Dubroca <sd(a)queasysnail.net>
Date: Sun, 31 Dec 2017 16:18:56 +0100
Subject: xfrm: fix rcu usage in xfrm_get_type_offload
From: Sabrina Dubroca <sd(a)queasysnail.net>
commit 2f10a61cee8fdb9f8da90f5db687e1862b22cf06 upstream.
request_module can sleep, thus we cannot hold rcu_read_lock() while
calling it. The function also jumps back and takes rcu_read_lock()
again (in xfrm_state_get_afinfo()), resulting in an imbalance.
This codepath is triggered whenever a new offloaded state is created.
Fixes: ffdb5211da1c ("xfrm: Auto-load xfrm offload modules")
Reported-by: syzbot+ca425f44816d749e8eb49755567a75ee48cf4a30(a)syzkaller.appspotmail.com
Signed-off-by: Sabrina Dubroca <sd(a)queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert(a)secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/xfrm/xfrm_state.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -313,13 +313,14 @@ retry:
if ((type && !try_module_get(type->owner)))
type = NULL;
+ rcu_read_unlock();
+
if (!type && try_load) {
request_module("xfrm-offload-%d-%d", family, proto);
try_load = 0;
goto retry;
}
- rcu_read_unlock();
return type;
}
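The shape of the fix is the usual "drop RCU before anything that can sleep,
then retry" pattern; a condensed sketch (lookup_offload_type() is a
hypothetical stand-in for the real lookup, which re-takes rcu_read_lock()
itself on the retry):

retry:
	type = lookup_offload_type(family, proto);	/* hypothetical helper */
	if (type && !try_module_get(type->owner))
		type = NULL;

	rcu_read_unlock();	/* must not hold RCU across request_module() */

	if (!type && try_load) {
		request_module("xfrm-offload-%d-%d", family, proto);
		try_load = 0;
		goto retry;
	}
	return type;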
Patches currently in stable-queue which might be from sd(a)queasysnail.net are
queue-4.14/xfrm-fix-rcu-usage-in-xfrm_get_type_offload.patch
This is a note to let you know that I've just added the patch titled
xfrm: Fix stack-out-of-bounds read on socket policy lookup.
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
xfrm-fix-stack-out-of-bounds-read-on-socket-policy-lookup.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From ddc47e4404b58f03e98345398fb12d38fe291512 Mon Sep 17 00:00:00 2001
From: Steffen Klassert <steffen.klassert(a)secunet.com>
Date: Wed, 29 Nov 2017 06:53:55 +0100
Subject: xfrm: Fix stack-out-of-bounds read on socket policy lookup.
From: Steffen Klassert <steffen.klassert(a)secunet.com>
commit ddc47e4404b58f03e98345398fb12d38fe291512 upstream.
When we do tunnel or beet mode, we pass saddr and daddr from the
template to xfrm_state_find(), which is fine. In transport mode,
we pass the addresses from the flowi, assuming that the IP
addresses (and address family) don't change during transformation.
This assumption is wrong in the IPv4-mapped IPv6 case, where the
packet is IPv4 and the template is IPv6.
Fix this by catching address family mismatches between the policy
and the flow before we do the lookup.
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Steffen Klassert <steffen.klassert(a)secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/xfrm/xfrm_policy.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -1168,9 +1168,15 @@ static struct xfrm_policy *xfrm_sk_polic
again:
pol = rcu_dereference(sk->sk_policy[dir]);
if (pol != NULL) {
- bool match = xfrm_selector_match(&pol->selector, fl, family);
+ bool match;
int err = 0;
+ if (pol->family != family) {
+ pol = NULL;
+ goto out;
+ }
+
+ match = xfrm_selector_match(&pol->selector, fl, family);
if (match) {
if ((sk->sk_mark & pol->mark.m) != pol->mark.v) {
pol = NULL;
Patches currently in stable-queue which might be from steffen.klassert(a)secunet.com are
queue-4.14/xfrm-fix-stack-out-of-bounds-read-on-socket-policy-lookup.patch
queue-4.14/xfrm-skip-policies-marked-as-dead-while-rehashing.patch
queue-4.14/xfrm-fix-rcu-usage-in-xfrm_get_type_offload.patch
queue-4.14/esp-fix-gro-when-the-headers-not-fully-in-the-linear-part-of-the-skb.patch
queue-4.14/xfrm-don-t-call-xfrm_policy_cache_flush-while-holding-spinlock.patch
queue-4.14/xfrm-check-id-proto-in-validate_tmpl.patch
This is a note to let you know that I've just added the patch titled
xfrm: check id proto in validate_tmpl()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
xfrm-check-id-proto-in-validate_tmpl.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 6a53b7593233ab9e4f96873ebacc0f653a55c3e1 Mon Sep 17 00:00:00 2001
From: Cong Wang <xiyou.wangcong(a)gmail.com>
Date: Mon, 27 Nov 2017 11:15:16 -0800
Subject: xfrm: check id proto in validate_tmpl()
From: Cong Wang <xiyou.wangcong(a)gmail.com>
commit 6a53b7593233ab9e4f96873ebacc0f653a55c3e1 upstream.
syzbot reported a kernel warning in xfrm_state_fini(), which
indicates that we have entries left in the list
net->xfrm.state_all whose proto is zero, and
xfrm_id_proto_match() doesn't consider them a match for
IPSEC_PROTO_ANY in this case.
A proto value of 0 is probably not valid; at least
verify_newsa_info() doesn't consider it valid either.
This patch fixes it by checking the proto value in
validate_tmpl() and rejecting invalid ones, like what iproute2
does in xfrm_xfrmproto_getbyname().
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Cc: Steffen Klassert <steffen.klassert(a)secunet.com>
Cc: Herbert Xu <herbert(a)gondor.apana.org.au>
Signed-off-by: Cong Wang <xiyou.wangcong(a)gmail.com>
Signed-off-by: Steffen Klassert <steffen.klassert(a)secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/xfrm/xfrm_user.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -1443,6 +1443,21 @@ static int validate_tmpl(int nr, struct
default:
return -EINVAL;
}
+
+ switch (ut[i].id.proto) {
+ case IPPROTO_AH:
+ case IPPROTO_ESP:
+ case IPPROTO_COMP:
+#if IS_ENABLED(CONFIG_IPV6)
+ case IPPROTO_ROUTING:
+ case IPPROTO_DSTOPTS:
+#endif
+ case IPSEC_PROTO_ANY:
+ break;
+ default:
+ return -EINVAL;
+ }
+
}
return 0;
Patches currently in stable-queue which might be from xiyou.wangcong(a)gmail.com are
queue-4.14/net_sched-gen_estimator-fix-lockdep-splat.patch
queue-4.14/netfilter-xt_cgroup-initialize-info-priv-in-cgroup_mt_check_v1.patch
queue-4.14/netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
queue-4.14/xfrm-check-id-proto-in-validate_tmpl.patch
This is a note to let you know that I've just added the patch titled
xfrm: don't call xfrm_policy_cache_flush while holding spinlock
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
xfrm-don-t-call-xfrm_policy_cache_flush-while-holding-spinlock.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From b1bdcb59b64f806ef08d25a85c39ffb3ad841ce6 Mon Sep 17 00:00:00 2001
From: Florian Westphal <fw(a)strlen.de>
Date: Sat, 6 Jan 2018 01:13:08 +0100
Subject: xfrm: don't call xfrm_policy_cache_flush while holding spinlock
From: Florian Westphal <fw(a)strlen.de>
commit b1bdcb59b64f806ef08d25a85c39ffb3ad841ce6 upstream.
xfrm_policy_cache_flush can sleep, so it cannot be called while holding
a spinlock. We could release the lock first, but I don't see why we need
to invoke this function here in the first place; the packet path won't
reuse an xdst entry unless it's still valid.
While at it, add a might_sleep() annotation to xfrm_policy_cache_flush;
it would probably have caught this bug sooner.
Fixes: ec30d78c14a813 ("xfrm: add xdst pcpu cache")
Reported-by: syzbot+e149f7d1328c26f9c12f(a)syzkaller.appspotmail.com
Signed-off-by: Florian Westphal <fw(a)strlen.de>
Signed-off-by: Steffen Klassert <steffen.klassert(a)secunet.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/xfrm/xfrm_policy.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -975,8 +975,6 @@ int xfrm_policy_flush(struct net *net, u
}
if (!cnt)
err = -ESRCH;
- else
- xfrm_policy_cache_flush();
out:
spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
return err;
@@ -1738,6 +1736,8 @@ void xfrm_policy_cache_flush(void)
bool found = 0;
int cpu;
+ might_sleep();
+
local_bh_disable();
rcu_read_lock();
for_each_possible_cpu(cpu) {
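For reference, might_sleep() is a pure debug annotation; a minimal sketch
of how it flags this class of bug (assuming CONFIG_DEBUG_ATOMIC_SLEEP is
enabled):

static DEFINE_SPINLOCK(example_lock);

static void may_block(void)
{
	might_sleep();	/* no-op in production; warns if the caller is atomic */
	/* ... code that can schedule ... */
}

static void broken_caller(void)
{
	spin_lock_bh(&example_lock);
	may_block();	/* "BUG: sleeping function called from invalid context" */
	spin_unlock_bh(&example_lock);
}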
Patches currently in stable-queue which might be from fw(a)strlen.de are
queue-4.14/xfrm-skip-policies-marked-as-dead-while-rehashing.patch
queue-4.14/netfilter-x_tables-avoid-out-of-bounds-reads-in-xt_request_find_-match-target.patch
queue-4.14/netfilter-on-sockopt-acquire-sock-lock-only-in-the-required-scope.patch
queue-4.14/netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
queue-4.14/xfrm-don-t-call-xfrm_policy_cache_flush-while-holding-spinlock.patch
This is a note to let you know that I've just added the patch titled
usb: core: Add a helper function to check the validity of EP type in URB
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
usb-core-add-a-helper-function-to-check-the-validity-of-ep-type-in-urb.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From e901b9873876ca30a09253731bd3a6b00c44b5b0 Mon Sep 17 00:00:00 2001
From: Takashi Iwai <tiwai(a)suse.de>
Date: Wed, 4 Oct 2017 16:15:59 +0200
Subject: usb: core: Add a helper function to check the validity of EP type in URB
From: Takashi Iwai <tiwai(a)suse.de>
commit e901b9873876ca30a09253731bd3a6b00c44b5b0 upstream.
This patch adds a new helper function to perform a sanity check of the
given URB to see whether it contains a valid endpoint. It's a lightweight
version of what usb_submit_urb() does, but instead of the kernel warning
followed by a stack trace, it just returns an error code.
Especially for a driver that doesn't parse the descriptors but fills
the URB with a fixed endpoint (e.g. some quirks for non-compliant
devices), this kind of check is preferable at probe time, before
actually submitting the URB.
Tested-by: Andrey Konovalov <andreyknvl(a)google.com>
Acked-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/usb/core/urb.c | 30 ++++++++++++++++++++++++++----
include/linux/usb.h | 2 ++
2 files changed, 28 insertions(+), 4 deletions(-)
--- a/drivers/usb/core/urb.c
+++ b/drivers/usb/core/urb.c
@@ -187,6 +187,31 @@ EXPORT_SYMBOL_GPL(usb_unanchor_urb);
/*-------------------------------------------------------------------*/
+static const int pipetypes[4] = {
+ PIPE_CONTROL, PIPE_ISOCHRONOUS, PIPE_BULK, PIPE_INTERRUPT
+};
+
+/**
+ * usb_urb_ep_type_check - sanity check of endpoint in the given urb
+ * @urb: urb to be checked
+ *
+ * This performs a light-weight sanity check for the endpoint in the
+ * given urb. It returns 0 if the urb contains a valid endpoint, otherwise
+ * a negative error code.
+ */
+int usb_urb_ep_type_check(const struct urb *urb)
+{
+ const struct usb_host_endpoint *ep;
+
+ ep = usb_pipe_endpoint(urb->dev, urb->pipe);
+ if (!ep)
+ return -EINVAL;
+ if (usb_pipetype(urb->pipe) != pipetypes[usb_endpoint_type(&ep->desc)])
+ return -EINVAL;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(usb_urb_ep_type_check);
+
/**
* usb_submit_urb - issue an asynchronous transfer request for an endpoint
* @urb: pointer to the urb describing the request
@@ -326,9 +351,6 @@ EXPORT_SYMBOL_GPL(usb_unanchor_urb);
*/
int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
{
- static int pipetypes[4] = {
- PIPE_CONTROL, PIPE_ISOCHRONOUS, PIPE_BULK, PIPE_INTERRUPT
- };
int xfertype, max;
struct usb_device *dev;
struct usb_host_endpoint *ep;
@@ -444,7 +466,7 @@ int usb_submit_urb(struct urb *urb, gfp_
*/
/* Check that the pipe's type matches the endpoint's type */
- if (usb_pipetype(urb->pipe) != pipetypes[xfertype])
+ if (usb_urb_ep_type_check(urb))
dev_WARN(&dev->dev, "BOGUS urb xfer, pipe %x != type %x\n",
usb_pipetype(urb->pipe), pipetypes[xfertype]);
--- a/include/linux/usb.h
+++ b/include/linux/usb.h
@@ -1729,6 +1729,8 @@ static inline int usb_urb_dir_out(struct
return (urb->transfer_flags & URB_DIR_MASK) == URB_DIR_OUT;
}
+int usb_urb_ep_type_check(const struct urb *urb);
+
void *usb_alloc_coherent(struct usb_device *dev, size_t size,
gfp_t mem_flags, dma_addr_t *dma);
void usb_free_coherent(struct usb_device *dev, size_t size,
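As a usage illustration, a driver with a known fixed endpoint could reject
a non-compliant device at probe time roughly like this (a hedged sketch:
the interrupt-IN endpoint 0x81, the buffer handling and the function name
are assumptions, not part of this patch):

static int example_check_fixed_ep(struct usb_interface *intf,
				  struct urb *urb, void *buf, int len)
{
	struct usb_device *udev = interface_to_usbdev(intf);

	/* fill the urb for the endpoint the driver hard-codes */
	usb_fill_int_urb(urb, udev, usb_rcvintpipe(udev, 0x81),
			 buf, len, NULL, NULL, 1);

	/* bail out early instead of warning later in usb_submit_urb() */
	if (usb_urb_ep_type_check(urb))
		return -ENODEV;

	return 0;
}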
Patches currently in stable-queue which might be from tiwai(a)suse.de are
queue-4.14/alsa-bcd2000-add-a-sanity-check-for-invalid-eps.patch
queue-4.14/alsa-caiaq-add-a-sanity-check-for-invalid-eps.patch
queue-4.14/alsa-line6-add-a-sanity-check-for-invalid-eps.patch
queue-4.14/usb-core-add-a-helper-function-to-check-the-validity-of-ep-type-in-urb.patch
This is a note to let you know that I've just added the patch titled
vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
vhost-use-mutex_lock_nested-in-vhost_dev_lock_vqs.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From e9cb4239134c860e5f92c75bf5321bd377bb505b Mon Sep 17 00:00:00 2001
From: Jason Wang <jasowang(a)redhat.com>
Date: Tue, 23 Jan 2018 17:27:25 +0800
Subject: vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
From: Jason Wang <jasowang(a)redhat.com>
commit e9cb4239134c860e5f92c75bf5321bd377bb505b upstream.
We used to call mutex_lock() in vhost_dev_lock_vqs(), which tries to
hold the mutexes of all virtqueues. This may confuse lockdep into
reporting a possible deadlock because it sees multiple locks of the
same class being held. Switch to mutex_lock_nested() to avoid the
false positive.
Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
Reported-by: syzbot+dbb7c1161485e61b0241(a)syzkaller.appspotmail.com
Signed-off-by: Jason Wang <jasowang(a)redhat.com>
Acked-by: Michael S. Tsirkin <mst(a)redhat.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/vhost/vhost.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -904,7 +904,7 @@ static void vhost_dev_lock_vqs(struct vh
{
int i = 0;
for (i = 0; i < d->nvqs; ++i)
- mutex_lock(&d->vqs[i]->mutex);
+ mutex_lock_nested(&d->vqs[i]->mutex, i);
}
static void vhost_dev_unlock_vqs(struct vhost_dev *d)
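For reference, the per-index subclass is what tells lockdep that these
same-class mutexes are always taken in a fixed order; a generic sketch of
the idiom (not code from this patch):

static void lock_all_in_order(struct mutex *locks, int n)
{
	int i;

	/* subclass i encodes "locks[i] is taken after locks[0..i-1]" */
	for (i = 0; i < n; i++)
		mutex_lock_nested(&locks[i], i);
}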
Patches currently in stable-queue which might be from jasowang(a)redhat.com are
queue-4.14/ptr_ring-try-vmalloc-when-kmalloc-fails.patch
queue-4.14/vhost-use-mutex_lock_nested-in-vhost_dev_lock_vqs.patch
queue-4.14/ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch
This is a note to let you know that I've just added the patch titled
staging: android: ion: Switch from WARN to pr_warn
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
staging-android-ion-switch-from-warn-to-pr_warn.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From e4e179a844f52e907e550f887d0a2171f1508af1 Mon Sep 17 00:00:00 2001
From: Laura Abbott <labbott(a)redhat.com>
Date: Fri, 5 Jan 2018 11:14:09 -0800
Subject: staging: android: ion: Switch from WARN to pr_warn
From: Laura Abbott <labbott(a)redhat.com>
commit e4e179a844f52e907e550f887d0a2171f1508af1 upstream.
Syzbot reported a warning with Ion:
WARNING: CPU: 0 PID: 3502 at drivers/staging/android/ion/ion-ioctl.c:73 ion_ioctl+0x2db/0x380 drivers/staging/android/ion/ion-ioctl.c:73
Kernel panic - not syncing: panic_on_warn set ...
This warning means that validation of the ioctl fields failed. It was
deliberately added as a WARN to make it very obvious to developers that
something needed to be fixed. In reality, this is overkill and disturbs
fuzzing. Switch to a pr_warn message instead.
Reported-by: syzbot+fa2d5f63ee5904a0115a(a)syzkaller.appspotmail.com
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Laura Abbott <labbott(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/staging/android/ion/ion-ioctl.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/staging/android/ion/ion-ioctl.c
+++ b/drivers/staging/android/ion/ion-ioctl.c
@@ -71,8 +71,10 @@ long ion_ioctl(struct file *filp, unsign
return -EFAULT;
ret = validate_ioctl_arg(cmd, &data);
- if (WARN_ON_ONCE(ret))
+ if (ret) {
+ pr_warn_once("%s: ioctl validate failed\n", __func__);
return ret;
+ }
if (!(dir & _IOC_WRITE))
memset(&data, 0, sizeof(data));
Patches currently in stable-queue which might be from labbott(a)redhat.com are
queue-4.14/staging-android-ion-switch-from-warn-to-pr_warn.patch
queue-4.14/staging-android-ion-add-__gfp_nowarn-for-system-contig-heap.patch
This is a note to let you know that I've just added the patch titled
staging: android: ion: Add __GFP_NOWARN for system contig heap
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
staging-android-ion-add-__gfp_nowarn-for-system-contig-heap.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 0c75f10312a35b149b2cebb1832316b35c2337ca Mon Sep 17 00:00:00 2001
From: Laura Abbott <labbott(a)redhat.com>
Date: Fri, 5 Jan 2018 11:14:08 -0800
Subject: staging: android: ion: Add __GFP_NOWARN for system contig heap
From: Laura Abbott <labbott(a)redhat.com>
commit 0c75f10312a35b149b2cebb1832316b35c2337ca upstream.
syzbot reported a warning from Ion:
WARNING: CPU: 1 PID: 3485 at mm/page_alloc.c:3926
...
__alloc_pages_nodemask+0x9fb/0xd80 mm/page_alloc.c:4252
alloc_pages_current+0xb6/0x1e0 mm/mempolicy.c:2036
alloc_pages include/linux/gfp.h:492 [inline]
ion_system_contig_heap_allocate+0x40/0x2c0
drivers/staging/android/ion/ion_system_heap.c:374
ion_buffer_create drivers/staging/android/ion/ion.c:93 [inline]
ion_alloc+0x2c1/0x9e0 drivers/staging/android/ion/ion.c:420
ion_ioctl+0x26d/0x380 drivers/staging/android/ion/ion-ioctl.c:84
vfs_ioctl fs/ioctl.c:46 [inline]
do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:686
SYSC_ioctl fs/ioctl.c:701 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:692
This is a warning about an attempt to allocate with order > MAX_ORDER,
coming from a userspace Ion allocation request. Since userspace is
free to request however much memory it wants (and the kernel is free to
deny the allocation), silence the allocation attempt with __GFP_NOWARN
in case it fails.
Reported-by: syzbot+76e7efc4748495855a4d(a)syzkaller.appspotmail.com
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Laura Abbott <labbott(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/staging/android/ion/ion_system_heap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -371,7 +371,7 @@ static int ion_system_contig_heap_alloca
unsigned long i;
int ret;
- page = alloc_pages(low_order_gfp_flags, order);
+ page = alloc_pages(low_order_gfp_flags | __GFP_NOWARN, order);
if (!page)
return -ENOMEM;
Patches currently in stable-queue which might be from labbott(a)redhat.com are
queue-4.14/staging-android-ion-switch-from-warn-to-pr_warn.patch
queue-4.14/staging-android-ion-add-__gfp_nowarn-for-system-contig-heap.patch
This is a note to let you know that I've just added the patch titled
selinux: skip bounded transition processing if the policy isn't loaded
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
selinux-skip-bounded-transition-processing-if-the-policy-isn-t-loaded.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 4b14752ec4e0d87126e636384cf37c8dd9df157c Mon Sep 17 00:00:00 2001
From: Paul Moore <paul(a)paul-moore.com>
Date: Tue, 5 Dec 2017 17:17:43 -0500
Subject: selinux: skip bounded transition processing if the policy isn't loaded
From: Paul Moore <paul(a)paul-moore.com>
commit 4b14752ec4e0d87126e636384cf37c8dd9df157c upstream.
We can't do anything reasonable in security_bounded_transition() if we
don't have a policy loaded, and in fact we could run into problems
with some of the code inside expecting a policy. Fix these problems
like we do many others in security/selinux/ss/services.c by checking
to see if the policy is loaded (ss_initialized) and returning quickly
if it isn't.
Reported-by: syzbot <syzkaller-bugs(a)googlegroups.com>
Signed-off-by: Paul Moore <paul(a)paul-moore.com>
Acked-by: Stephen Smalley <sds(a)tycho.nsa.gov>
Reviewed-by: James Morris <james.l.morris(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/selinux/ss/services.c | 3 +++
1 file changed, 3 insertions(+)
--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -867,6 +867,9 @@ int security_bounded_transition(u32 old_
int index;
int rc;
+ if (!ss_initialized)
+ return 0;
+
read_lock(&policy_rwlock);
rc = -EINVAL;
Patches currently in stable-queue which might be from paul(a)paul-moore.com are
queue-4.14/selinux-skip-bounded-transition-processing-if-the-policy-isn-t-loaded.patch
queue-4.14/selinux-ensure-the-context-is-nul-terminated-in-security_context_to_sid_core.patch
This is a note to let you know that I've just added the patch titled
sctp: set frag_point in sctp_setsockopt_maxseg correctly
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sctp-set-frag_point-in-sctp_setsockopt_maxseg-correctly.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From ecca8f88da5c4260cc2bccfefd2a24976704c366 Mon Sep 17 00:00:00 2001
From: Xin Long <lucien.xin(a)gmail.com>
Date: Fri, 17 Nov 2017 14:11:11 +0800
Subject: sctp: set frag_point in sctp_setsockopt_maxseg correctly
From: Xin Long <lucien.xin(a)gmail.com>
commit ecca8f88da5c4260cc2bccfefd2a24976704c366 upstream.
Now in sctp_setsockopt_maxseg user_frag or frag_point can be set with
val >= 8 and val <= SCTP_MAX_CHUNK_LEN, but both checks are incorrect.
val >= 8 means frag_point can even be less than SCTP_DEFAULT_MINSEGMENT.
Then in sctp_datamsg_from_user(), when its value is greater than the
cookie echo length and we try to bundle with the cookie echo chunk,
first_len will overflow.
The worst case is when its value equals the cookie echo length: first_len
becomes 0 and fragmentation later goes into a dead loop. In Hangbin's
syzkaller testing environment, an OOM was even triggered by the
consecutive memory allocations in that loop.
Besides, SCTP_MAX_CHUNK_LEN is the maximum size of the whole chunk, so the
data header should be deducted for the frag_point or user_frag check.
This patch does a proper check, with SCTP_DEFAULT_MINSEGMENT minus the
sctphdr and datahdr as the lower bound and SCTP_MAX_CHUNK_LEN minus the
datahdr as the upper bound when setting frag_point via the sockopt. It
also cleans up the sctp_setsockopt_maxseg code.
Suggested-by: Marcelo Ricardo Leitner <marcelo.leitner(a)gmail.com>
Reported-by: Hangbin Liu <liuhangbin(a)gmail.com>
Signed-off-by: Xin Long <lucien.xin(a)gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/net/sctp/sctp.h | 3 ++-
net/sctp/socket.c | 29 +++++++++++++++++++----------
2 files changed, 21 insertions(+), 11 deletions(-)
--- a/include/net/sctp/sctp.h
+++ b/include/net/sctp/sctp.h
@@ -444,7 +444,8 @@ static inline int sctp_frag_point(const
if (asoc->user_frag)
frag = min_t(int, frag, asoc->user_frag);
- frag = SCTP_TRUNC4(min_t(int, frag, SCTP_MAX_CHUNK_LEN));
+ frag = SCTP_TRUNC4(min_t(int, frag, SCTP_MAX_CHUNK_LEN -
+ sizeof(struct sctp_data_chunk)));
return frag;
}
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -3136,9 +3136,9 @@ static int sctp_setsockopt_mappedv4(stru
*/
static int sctp_setsockopt_maxseg(struct sock *sk, char __user *optval, unsigned int optlen)
{
+ struct sctp_sock *sp = sctp_sk(sk);
struct sctp_assoc_value params;
struct sctp_association *asoc;
- struct sctp_sock *sp = sctp_sk(sk);
int val;
if (optlen == sizeof(int)) {
@@ -3154,26 +3154,35 @@ static int sctp_setsockopt_maxseg(struct
if (copy_from_user(¶ms, optval, optlen))
return -EFAULT;
val = params.assoc_value;
- } else
+ } else {
return -EINVAL;
+ }
- if ((val != 0) && ((val < 8) || (val > SCTP_MAX_CHUNK_LEN)))
- return -EINVAL;
+ if (val) {
+ int min_len, max_len;
- asoc = sctp_id2assoc(sk, params.assoc_id);
- if (!asoc && params.assoc_id && sctp_style(sk, UDP))
- return -EINVAL;
+ min_len = SCTP_DEFAULT_MINSEGMENT - sp->pf->af->net_header_len;
+ min_len -= sizeof(struct sctphdr) +
+ sizeof(struct sctp_data_chunk);
+
+ max_len = SCTP_MAX_CHUNK_LEN - sizeof(struct sctp_data_chunk);
+ if (val < min_len || val > max_len)
+ return -EINVAL;
+ }
+
+ asoc = sctp_id2assoc(sk, params.assoc_id);
if (asoc) {
if (val == 0) {
- val = asoc->pathmtu;
- val -= sp->pf->af->net_header_len;
+ val = asoc->pathmtu - sp->pf->af->net_header_len;
val -= sizeof(struct sctphdr) +
- sizeof(struct sctp_data_chunk);
+ sizeof(struct sctp_data_chunk);
}
asoc->user_frag = val;
asoc->frag_point = sctp_frag_point(asoc, asoc->pathmtu);
} else {
+ if (params.assoc_id && sctp_style(sk, UDP))
+ return -EINVAL;
sp->user_frag = val;
}
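As a concrete illustration (using the usual values and assuming IPv4 with
no IP options: SCTP_DEFAULT_MINSEGMENT = 512, IPv4 header 20 bytes, SCTP
common header 12 bytes, data chunk header 16 bytes), the new lower bound
works out to 512 - 20 - 12 - 16 = 464 bytes, while the upper bound becomes
SCTP_MAX_CHUNK_LEN - 16.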
Patches currently in stable-queue which might be from lucien.xin(a)gmail.com are
queue-4.14/sctp-set-frag_point-in-sctp_setsockopt_maxseg-correctly.patch
This is a note to let you know that I've just added the patch titled
selinux: ensure the context is NUL terminated in security_context_to_sid_core()
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
selinux-ensure-the-context-is-nul-terminated-in-security_context_to_sid_core.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From ef28df55ac27e1e5cd122e19fa311d886d47a756 Mon Sep 17 00:00:00 2001
From: Paul Moore <paul(a)paul-moore.com>
Date: Tue, 28 Nov 2017 18:51:12 -0500
Subject: selinux: ensure the context is NUL terminated in security_context_to_sid_core()
From: Paul Moore <paul(a)paul-moore.com>
commit ef28df55ac27e1e5cd122e19fa311d886d47a756 upstream.
The syzbot/syzkaller automated tests found a problem in
security_context_to_sid_core() during early boot (before we load the
SELinux policy) where we could potentially feed context strings without
NUL terminators into the strcmp() function.
We already guard against this during normal operation (after the SELinux
policy has been loaded) by making a copy of the context strings and
explicitly adding a NUL terminator to the end. The patch extends this
protection to the early boot case (no loaded policy) by moving the context
copy earlier in security_context_to_sid_core().
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Paul Moore <paul(a)paul-moore.com>
Reviewed-By: William Roberts <william.c.roberts(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/selinux/ss/services.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -1413,27 +1413,25 @@ static int security_context_to_sid_core(
if (!scontext_len)
return -EINVAL;
+ /* Copy the string to allow changes and ensure a NUL terminator */
+ scontext2 = kmemdup_nul(scontext, scontext_len, gfp_flags);
+ if (!scontext2)
+ return -ENOMEM;
+
if (!ss_initialized) {
int i;
for (i = 1; i < SECINITSID_NUM; i++) {
- if (!strcmp(initial_sid_to_string[i], scontext)) {
+ if (!strcmp(initial_sid_to_string[i], scontext2)) {
*sid = i;
- return 0;
+ goto out;
}
}
*sid = SECINITSID_KERNEL;
- return 0;
+ goto out;
}
*sid = SECSID_NULL;
- /* Copy the string so that we can modify the copy as we parse it. */
- scontext2 = kmalloc(scontext_len + 1, gfp_flags);
- if (!scontext2)
- return -ENOMEM;
- memcpy(scontext2, scontext, scontext_len);
- scontext2[scontext_len] = 0;
-
if (force) {
/* Save another copy for storing in uninterpreted form */
rc = -ENOMEM;
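For reference, kmemdup_nul(src, len, gfp) copies len bytes and appends a
NUL terminator, so the copy used above is always safe to hand to strcmp()
and the later parsing; a minimal usage sketch:

	char *ctx;

	ctx = kmemdup_nul(scontext, scontext_len, GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;
	/* ctx is NUL-terminated regardless of the input buffer's contents */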
Patches currently in stable-queue which might be from paul(a)paul-moore.com are
queue-4.14/selinux-skip-bounded-transition-processing-if-the-policy-isn-t-loaded.patch
queue-4.14/selinux-ensure-the-context-is-nul-terminated-in-security_context_to_sid_core.patch
This is a note to let you know that I've just added the patch titled
rds: tcp: correctly sequence cleanup on netns deletion.
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rds-tcp-correctly-sequence-cleanup-on-netns-deletion.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 681648e67d43cf269c5590ecf021ed481f4551fc Mon Sep 17 00:00:00 2001
From: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
Date: Thu, 30 Nov 2017 11:11:28 -0800
Subject: rds: tcp: correctly sequence cleanup on netns deletion.
From: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
commit 681648e67d43cf269c5590ecf021ed481f4551fc upstream.
Commit 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net")
introduces a regression in rds-tcp netns cleanup. cleanup_net()
(and thus the rds_tcp_dev_event notification) is only called from put_net()
when all netns refcounts go to 0, but this cannot happen if the
rds_connection itself is holding a c_net ref that it expects to
release in rds_tcp_kill_sock.
Instead, the rds_tcp_kill_sock callback should make sure to
tear down state carefully, ensuring that the socket teardown
is only done after all data-structures and workqs that depend
on it are quiesced.
The original motivation for commit 8edc3affc077 ("rds: tcp: Take explicit
refcounts on struct net") was to resolve a race condition reported by
syzkaller where workqs for tx/rx/connect were triggered after the
namespace was deleted. Those worker threads should have been
cancelled/flushed before socket tear-down and indeed,
rds_conn_path_destroy() does try to sequence this by doing
/* cancel cp_send_w */
/* cancel cp_recv_w */
/* flush cp_down_w */
/* free data structures */
Here the "flush cp_down_w" will trigger rds_conn_shutdown and thus
invoke rds_tcp_conn_path_shutdown() to close the tcp socket, so that
we ought to have satisfied the requirement that "socket-close is
done after all other dependent state is quiesced". However,
rds_conn_shutdown has a bug in that it *always* triggers the reconnect
workq (and if the connection is successful, we always restart the tx/rx
workqs, so with the right timing we risk the race conditions reported
by syzkaller).
Netns deletion is like module teardown: there is no need to restart a
reconnect in this case. We can use the c_destroy_in_prog bit
to avoid restarting the reconnect.
Fixes: 8edc3affc077 ("rds: tcp: Take explicit refcounts on struct net")
Signed-off-by: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar(a)oracle.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/rds/connection.c | 3 ++-
net/rds/rds.h | 6 +++---
net/rds/tcp.c | 4 ++--
3 files changed, 7 insertions(+), 6 deletions(-)
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -366,6 +366,8 @@ void rds_conn_shutdown(struct rds_conn_p
* to the conn hash, so we never trigger a reconnect on this
* conn - the reconnect is always triggered by the active peer. */
cancel_delayed_work_sync(&cp->cp_conn_w);
+ if (conn->c_destroy_in_prog)
+ return;
rcu_read_lock();
if (!hlist_unhashed(&conn->c_hash_node)) {
rcu_read_unlock();
@@ -445,7 +447,6 @@ void rds_conn_destroy(struct rds_connect
*/
rds_cong_remove_conn(conn);
- put_net(conn->c_net);
kfree(conn->c_path);
kmem_cache_free(rds_conn_slab, conn);
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -150,7 +150,7 @@ struct rds_connection {
/* Protocol version */
unsigned int c_version;
- struct net *c_net;
+ possible_net_t c_net;
struct list_head c_map_item;
unsigned long c_map_queued;
@@ -165,13 +165,13 @@ struct rds_connection {
static inline
struct net *rds_conn_net(struct rds_connection *conn)
{
- return conn->c_net;
+ return read_pnet(&conn->c_net);
}
static inline
void rds_conn_net_set(struct rds_connection *conn, struct net *net)
{
- conn->c_net = get_net(net);
+ write_pnet(&conn->c_net, net);
}
#define RDS_FLAG_CONG_BITMAP 0x01
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c
@@ -527,7 +527,7 @@ static void rds_tcp_kill_sock(struct net
rds_tcp_listen_stop(lsock, &rtn->rds_tcp_accept_w);
spin_lock_irq(&rds_tcp_conn_lock);
list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
- struct net *c_net = tc->t_cpath->cp_conn->c_net;
+ struct net *c_net = read_pnet(&tc->t_cpath->cp_conn->c_net);
if (net != c_net || !tc->t_sock)
continue;
@@ -586,7 +586,7 @@ static void rds_tcp_sysctl_reset(struct
spin_lock_irq(&rds_tcp_conn_lock);
list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
- struct net *c_net = tc->t_cpath->cp_conn->c_net;
+ struct net *c_net = read_pnet(&tc->t_cpath->cp_conn->c_net);
if (net != c_net || !tc->t_sock)
continue;
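For reference, the possible_net_t accessors used above store the namespace
pointer without pinning it (a minimal sketch of the general pattern, not
code from this patch):

static inline void example_set_net(possible_net_t *pnet, struct net *net)
{
	write_pnet(pnet, net);	/* no get_net(): the netns is not pinned */
}

static inline struct net *example_net(const possible_net_t *pnet)
{
	return read_pnet(pnet);	/* plain read; &init_net when CONFIG_NET_NS=n */
}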
Patches currently in stable-queue which might be from sowmini.varadhan(a)oracle.com are
queue-4.14/rds-tcp-atomically-purge-entries-from-rds_tcp_conn_list-during-netns-delete.patch
queue-4.14/rds-tcp-correctly-sequence-cleanup-on-netns-deletion.patch
This is a note to let you know that I've just added the patch titled
rds: tcp: atomically purge entries from rds_tcp_conn_list during netns delete
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rds-tcp-atomically-purge-entries-from-rds_tcp_conn_list-during-netns-delete.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From f10b4cff98c6977668434fbf5dd58695eeca2897 Mon Sep 17 00:00:00 2001
From: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
Date: Thu, 30 Nov 2017 11:11:29 -0800
Subject: rds: tcp: atomically purge entries from rds_tcp_conn_list during netns delete
From: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
commit f10b4cff98c6977668434fbf5dd58695eeca2897 upstream.
The rds_tcp_kill_sock() function parses the rds_tcp_conn_list
to find the rds_connection entries marked for deletion as part
of the netns deletion under the protection of the rds_tcp_conn_lock.
Since the rds_tcp_conn_list tracks rds_tcp_connections (which
have a 1:1 mapping with rds_conn_path), multiple tc entries in
the rds_tcp_conn_list will map to a single rds_connection, and will
be deleted as part of the rds_conn_destroy() operation that is
done outside the rds_tcp_conn_lock.
The rds_tcp_conn_list traversal done under the protection of
rds_tcp_conn_lock should not leave any doomed tc entries in
the list after the rds_tcp_conn_lock is released, otherwise another
concurrently executing netns delete thread (for a different netns)
may trip over these entries.
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar(a)oracle.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/rds/tcp.c | 9 +++++++--
net/rds/tcp.h | 1 +
2 files changed, 8 insertions(+), 2 deletions(-)
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c
@@ -306,7 +306,8 @@ static void rds_tcp_conn_free(void *arg)
rdsdebug("freeing tc %p\n", tc);
spin_lock_irqsave(&rds_tcp_conn_lock, flags);
- list_del(&tc->t_tcp_node);
+ if (!tc->t_tcp_node_detached)
+ list_del(&tc->t_tcp_node);
spin_unlock_irqrestore(&rds_tcp_conn_lock, flags);
kmem_cache_free(rds_tcp_conn_slab, tc);
@@ -531,8 +532,12 @@ static void rds_tcp_kill_sock(struct net
if (net != c_net || !tc->t_sock)
continue;
- if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn))
+ if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn)) {
list_move_tail(&tc->t_tcp_node, &tmp_list);
+ } else {
+ list_del(&tc->t_tcp_node);
+ tc->t_tcp_node_detached = true;
+ }
}
spin_unlock_irq(&rds_tcp_conn_lock);
list_for_each_entry_safe(tc, _tc, &tmp_list, t_tcp_node) {
--- a/net/rds/tcp.h
+++ b/net/rds/tcp.h
@@ -12,6 +12,7 @@ struct rds_tcp_incoming {
struct rds_tcp_connection {
struct list_head t_tcp_node;
+ bool t_tcp_node_detached;
struct rds_conn_path *t_cpath;
/* t_conn_path_lock synchronizes the connection establishment between
* rds_tcp_accept_one and rds_tcp_conn_path_connect
Patches currently in stable-queue which might be from sowmini.varadhan(a)oracle.com are
queue-4.14/rds-tcp-atomically-purge-entries-from-rds_tcp_conn_list-during-netns-delete.patch
queue-4.14/rds-tcp-correctly-sequence-cleanup-on-netns-deletion.patch
This is a note to let you know that I've just added the patch titled
ptr_ring: try vmalloc() when kmalloc() fails
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
ptr_ring-try-vmalloc-when-kmalloc-fails.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 0bf7800f1799b5b1fd7d4f024e9ece53ac489011 Mon Sep 17 00:00:00 2001
From: Jason Wang <jasowang(a)redhat.com>
Date: Fri, 9 Feb 2018 17:45:50 +0800
Subject: ptr_ring: try vmalloc() when kmalloc() fails
From: Jason Wang <jasowang(a)redhat.com>
commit 0bf7800f1799b5b1fd7d4f024e9ece53ac489011 upstream.
This patch switches to kvmalloc_array(), which falls back to vmalloc()
when kmalloc() fails.
Reported-by: syzbot+e4d4f9ddd4295539735d(a)syzkaller.appspotmail.com
Fixes: 2e0ab8ca83c12 ("ptr_ring: array based FIFO for pointers")
Signed-off-by: Jason Wang <jasowang(a)redhat.com>
Acked-by: Michael S. Tsirkin <mst(a)redhat.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/ptr_ring.h | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -445,11 +445,14 @@ static inline int ptr_ring_consume_batch
__PTR_RING_PEEK_CALL_v; \
})
+/* Not all gfp_t flags (besides GFP_KERNEL) are allowed. See
+ * documentation for vmalloc for which of them are legal.
+ */
static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)
{
if (size * sizeof(void *) > KMALLOC_MAX_SIZE)
return NULL;
- return kcalloc(size, sizeof(void *), gfp);
+ return kvmalloc_array(size, sizeof(void *), gfp | __GFP_ZERO);
}
static inline void __ptr_ring_set_size(struct ptr_ring *r, int size)
@@ -582,7 +585,7 @@ static inline int ptr_ring_resize(struct
spin_unlock(&(r)->producer_lock);
spin_unlock_irqrestore(&(r)->consumer_lock, flags);
- kfree(old);
+ kvfree(old);
return 0;
}
@@ -622,7 +625,7 @@ static inline int ptr_ring_resize_multip
}
for (i = 0; i < nrings; ++i)
- kfree(queues[i]);
+ kvfree(queues[i]);
kfree(queues);
@@ -630,7 +633,7 @@ static inline int ptr_ring_resize_multip
nomem:
while (--i >= 0)
- kfree(queues[i]);
+ kvfree(queues[i]);
kfree(queues);
@@ -645,7 +648,7 @@ static inline void ptr_ring_cleanup(stru
if (destroy)
while ((ptr = ptr_ring_consume(r)))
destroy(ptr);
- kfree(r->queue);
+ kvfree(r->queue);
}
#endif /* _LINUX_PTR_RING_H */
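For reference, the kvmalloc family tries kmalloc first and transparently
falls back to vmalloc, and either result must be freed with kvfree(); a
minimal usage sketch (assumes process context with GFP_KERNEL):

static void **alloc_queue(unsigned int size)
{
	return kvmalloc_array(size, sizeof(void *), GFP_KERNEL | __GFP_ZERO);
}

static void free_queue(void **queue)
{
	kvfree(queue);	/* correct for both the kmalloc and vmalloc cases */
}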
Patches currently in stable-queue which might be from jasowang(a)redhat.com are
queue-4.14/ptr_ring-try-vmalloc-when-kmalloc-fails.patch
queue-4.14/vhost-use-mutex_lock_nested-in-vhost_dev_lock_vqs.patch
queue-4.14/ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch
This is a note to let you know that I've just added the patch titled
netfilter: xt_RATEEST: acquire xt_rateest_mutex for hash insert
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 7dc68e98757a8eccf8ca7a53a29b896f1eef1f76 Mon Sep 17 00:00:00 2001
From: Cong Wang <xiyou.wangcong(a)gmail.com>
Date: Mon, 5 Feb 2018 14:41:45 -0800
Subject: netfilter: xt_RATEEST: acquire xt_rateest_mutex for hash insert
From: Cong Wang <xiyou.wangcong(a)gmail.com>
commit 7dc68e98757a8eccf8ca7a53a29b896f1eef1f76 upstream.
rateest_hash is supposed to be protected by xt_rateest_mutex,
and, as suggested by Eric, lookup and insert should be atomic,
so we should acquire the xt_rateest_mutex once for both.
Introduce a non-locking helper for internal use and keep the
locking one for external callers.
Reported-by: <syzbot+5cb189720978275e4c75(a)syzkaller.appspotmail.com>
Fixes: 5859034d7eb8 ("[NETFILTER]: x_tables: add RATEEST target")
Signed-off-by: Cong Wang <xiyou.wangcong(a)gmail.com>
Reviewed-by: Florian Westphal <fw(a)strlen.de>
Reviewed-by: Eric Dumazet <edumazet(a)google.com>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netfilter/xt_RATEEST.c | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
--- a/net/netfilter/xt_RATEEST.c
+++ b/net/netfilter/xt_RATEEST.c
@@ -39,23 +39,31 @@ static void xt_rateest_hash_insert(struc
hlist_add_head(&est->list, &rateest_hash[h]);
}
-struct xt_rateest *xt_rateest_lookup(const char *name)
+static struct xt_rateest *__xt_rateest_lookup(const char *name)
{
struct xt_rateest *est;
unsigned int h;
h = xt_rateest_hash(name);
- mutex_lock(&xt_rateest_mutex);
hlist_for_each_entry(est, &rateest_hash[h], list) {
if (strcmp(est->name, name) == 0) {
est->refcnt++;
- mutex_unlock(&xt_rateest_mutex);
return est;
}
}
- mutex_unlock(&xt_rateest_mutex);
+
return NULL;
}
+
+struct xt_rateest *xt_rateest_lookup(const char *name)
+{
+ struct xt_rateest *est;
+
+ mutex_lock(&xt_rateest_mutex);
+ est = __xt_rateest_lookup(name);
+ mutex_unlock(&xt_rateest_mutex);
+ return est;
+}
EXPORT_SYMBOL_GPL(xt_rateest_lookup);
void xt_rateest_put(struct xt_rateest *est)
@@ -100,8 +108,10 @@ static int xt_rateest_tg_checkentry(cons
net_get_random_once(&jhash_rnd, sizeof(jhash_rnd));
- est = xt_rateest_lookup(info->name);
+ mutex_lock(&xt_rateest_mutex);
+ est = __xt_rateest_lookup(info->name);
if (est) {
+ mutex_unlock(&xt_rateest_mutex);
/*
* If estimator parameters are specified, they must match the
* existing estimator.
@@ -139,11 +149,13 @@ static int xt_rateest_tg_checkentry(cons
info->est = est;
xt_rateest_hash_insert(est);
+ mutex_unlock(&xt_rateest_mutex);
return 0;
err2:
kfree(est);
err1:
+ mutex_unlock(&xt_rateest_mutex);
return ret;
}
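The underlying pattern is the classic lookup-or-create under a single
critical section; a condensed sketch (xt_rateest_alloc() is a hypothetical
placeholder for the allocation done in the real checkentry path):

	mutex_lock(&xt_rateest_mutex);
	est = __xt_rateest_lookup(info->name);
	if (!est) {
		est = xt_rateest_alloc(info);	/* hypothetical helper */
		if (est)
			xt_rateest_hash_insert(est);
	}
	mutex_unlock(&xt_rateest_mutex);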
Patches currently in stable-queue which might be from xiyou.wangcong(a)gmail.com are
queue-4.14/net_sched-gen_estimator-fix-lockdep-splat.patch
queue-4.14/netfilter-xt_cgroup-initialize-info-priv-in-cgroup_mt_check_v1.patch
queue-4.14/netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
queue-4.14/xfrm-check-id-proto-in-validate_tmpl.patch
This is a note to let you know that I've just added the patch titled
ptr_ring: fail early if queue occupies more than KMALLOC_MAX_SIZE
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 6e6e41c3112276288ccaf80c70916779b84bb276 Mon Sep 17 00:00:00 2001
From: Jason Wang <jasowang(a)redhat.com>
Date: Fri, 9 Feb 2018 17:45:49 +0800
Subject: ptr_ring: fail early if queue occupies more than KMALLOC_MAX_SIZE
From: Jason Wang <jasowang(a)redhat.com>
commit 6e6e41c3112276288ccaf80c70916779b84bb276 upstream.
To avoid a slab warning about an excessive allocation size, fail early
if the queue would occupy more than KMALLOC_MAX_SIZE.
Reported-by: syzbot+e4d4f9ddd4295539735d(a)syzkaller.appspotmail.com
Fixes: 2e0ab8ca83c12 ("ptr_ring: array based FIFO for pointers")
Signed-off-by: Jason Wang <jasowang(a)redhat.com>
Acked-by: Michael S. Tsirkin <mst(a)redhat.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/ptr_ring.h | 2 ++
1 file changed, 2 insertions(+)
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -447,6 +447,8 @@ static inline int ptr_ring_consume_batch
static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)
{
+ if (size * sizeof(void *) > KMALLOC_MAX_SIZE)
+ return NULL;
return kcalloc(size, sizeof(void *), gfp);
}
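As a rough illustration (assuming a typical x86-64 configuration where
KMALLOC_MAX_SIZE is 4 MiB and a pointer is 8 bytes), a ring of more than
524288 entries is now rejected up front instead of tripping the slab
warning.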
Patches currently in stable-queue which might be from jasowang(a)redhat.com are
queue-4.14/ptr_ring-try-vmalloc-when-kmalloc-fails.patch
queue-4.14/vhost-use-mutex_lock_nested-in-vhost_dev_lock_vqs.patch
queue-4.14/ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch