This is a note to let you know that I've just added the patch titled
staging: android: ion: Add __GFP_NOWARN for system contig heap
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
staging-android-ion-add-__gfp_nowarn-for-system-contig-heap.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 0c75f10312a35b149b2cebb1832316b35c2337ca Mon Sep 17 00:00:00 2001
From: Laura Abbott <labbott(a)redhat.com>
Date: Fri, 5 Jan 2018 11:14:08 -0800
Subject: staging: android: ion: Add __GFP_NOWARN for system contig heap
From: Laura Abbott <labbott(a)redhat.com>
commit 0c75f10312a35b149b2cebb1832316b35c2337ca upstream.
syzbot reported a warning from Ion:
WARNING: CPU: 1 PID: 3485 at mm/page_alloc.c:3926
...
__alloc_pages_nodemask+0x9fb/0xd80 mm/page_alloc.c:4252
alloc_pages_current+0xb6/0x1e0 mm/mempolicy.c:2036
alloc_pages include/linux/gfp.h:492 [inline]
ion_system_contig_heap_allocate+0x40/0x2c0
drivers/staging/android/ion/ion_system_heap.c:374
ion_buffer_create drivers/staging/android/ion/ion.c:93 [inline]
ion_alloc+0x2c1/0x9e0 drivers/staging/android/ion/ion.c:420
ion_ioctl+0x26d/0x380 drivers/staging/android/ion/ion-ioctl.c:84
vfs_ioctl fs/ioctl.c:46 [inline]
do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:686
SYSC_ioctl fs/ioctl.c:701 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:692
This is a warning about attempting to allocate order > MAX_ORDER. This
is coming from a userspace Ion allocation request. Since userspace is
free to request however much memory it wants (and the kernel is free to
deny its allocation), silence the allocation attempt with __GFP_NOWARN
in case it fails.
Reported-by: syzbot+76e7efc4748495855a4d(a)syzkaller.appspotmail.com
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Laura Abbott <labbott(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/staging/android/ion/ion_system_heap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/staging/android/ion/ion_system_heap.c
+++ b/drivers/staging/android/ion/ion_system_heap.c
@@ -384,7 +384,7 @@ static int ion_system_contig_heap_alloca
if (align > (PAGE_SIZE << order))
return -EINVAL;
- page = alloc_pages(low_order_gfp_flags, order);
+ page = alloc_pages(low_order_gfp_flags | __GFP_NOWARN, order);
if (!page)
return -ENOMEM;
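For a sense of scale, the order that trips the warning comes straight from the
user-supplied length. A rough standalone sketch of that mapping (PAGE_SHIFT = 12
and MAX_ORDER = 11 are assumed here as typical x86 values, not taken from the
patch):

#include <stdio.h>

/* Illustrative only: mimics how a byte length maps to a page order.
 * PAGE_SHIFT = 12 (4 KiB pages) and MAX_ORDER = 11 are assumptions matching
 * common x86 configs, not values quoted from the patch. */
#define PAGE_SHIFT 12
#define MAX_ORDER  11

static int order_for_len(unsigned long len)
{
	int order = 0;

	len = (len - 1) >> PAGE_SHIFT;
	while (len) {
		order++;
		len >>= 1;
	}
	return order;
}

int main(void)
{
	unsigned long len = 1UL << 30;	/* a 1 GiB Ion request from userspace */
	int order = order_for_len(len);

	printf("order %d vs MAX_ORDER %d: %s\n", order, MAX_ORDER,
	       order >= MAX_ORDER ? "would WARN without __GFP_NOWARN"
				  : "allocatable");
	return 0;
}

With __GFP_NOWARN set, such an oversized request simply fails with -ENOMEM
instead of emitting the warning shown above.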
Patches currently in stable-queue which might be from labbott(a)redhat.com are
queue-4.9/staging-android-ion-switch-from-warn-to-pr_warn.patch
queue-4.9/staging-android-ion-add-__gfp_nowarn-for-system-contig-heap.patch
This is a note to let you know that I've just added the patch titled
staging: android: ion: Switch from WARN to pr_warn
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
staging-android-ion-switch-from-warn-to-pr_warn.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From e4e179a844f52e907e550f887d0a2171f1508af1 Mon Sep 17 00:00:00 2001
From: Laura Abbott <labbott(a)redhat.com>
Date: Fri, 5 Jan 2018 11:14:09 -0800
Subject: staging: android: ion: Switch from WARN to pr_warn
From: Laura Abbott <labbott(a)redhat.com>
commit e4e179a844f52e907e550f887d0a2171f1508af1 upstream.
Syzbot reported a warning with Ion:
WARNING: CPU: 0 PID: 3502 at drivers/staging/android/ion/ion-ioctl.c:73 ion_ioctl+0x2db/0x380 drivers/staging/android/ion/ion-ioctl.c:73
Kernel panic - not syncing: panic_on_warn set ...
This is a warning that validation of the ioctl fields failed. This was
deliberately added as a warning to make it very obvious to developers that
something needed to be fixed. In reality, this is overkill and disturbs
fuzzing. Switch to pr_warn for a message instead.
Reported-by: syzbot+fa2d5f63ee5904a0115a(a)syzkaller.appspotmail.com
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Laura Abbott <labbott(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/staging/android/ion/ion-ioctl.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/drivers/staging/android/ion/ion-ioctl.c
+++ b/drivers/staging/android/ion/ion-ioctl.c
@@ -83,8 +83,10 @@ long ion_ioctl(struct file *filp, unsign
return -EFAULT;
ret = validate_ioctl_arg(cmd, &data);
- if (WARN_ON_ONCE(ret))
+ if (ret) {
+ pr_warn_once("%s: ioctl validate failed\n", __func__);
return ret;
+ }
if (!(dir & _IOC_WRITE))
memset(&data, 0, sizeof(data));
Patches currently in stable-queue which might be from labbott(a)redhat.com are
queue-4.9/staging-android-ion-switch-from-warn-to-pr_warn.patch
queue-4.9/staging-android-ion-add-__gfp_nowarn-for-system-contig-heap.patch
This is a note to let you know that I've just added the patch titled
selinux: skip bounded transition processing if the policy isn't loaded
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
selinux-skip-bounded-transition-processing-if-the-policy-isn-t-loaded.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 4b14752ec4e0d87126e636384cf37c8dd9df157c Mon Sep 17 00:00:00 2001
From: Paul Moore <paul(a)paul-moore.com>
Date: Tue, 5 Dec 2017 17:17:43 -0500
Subject: selinux: skip bounded transition processing if the policy isn't loaded
From: Paul Moore <paul(a)paul-moore.com>
commit 4b14752ec4e0d87126e636384cf37c8dd9df157c upstream.
We can't do anything reasonable in security_bounded_transition() if we
don't have a policy loaded, and in fact we could run into problems
with some of the code inside expecting a policy. Fix these problems
like we do many others in security/selinux/ss/services.c by checking
to see if the policy is loaded (ss_initialized) and returning quickly
if it isn't.
Reported-by: syzbot <syzkaller-bugs(a)googlegroups.com>
Signed-off-by: Paul Moore <paul(a)paul-moore.com>
Acked-by: Stephen Smalley <sds(a)tycho.nsa.gov>
Reviewed-by: James Morris <james.l.morris(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/selinux/ss/services.c | 3 +++
1 file changed, 3 insertions(+)
--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -854,6 +854,9 @@ int security_bounded_transition(u32 old_
int index;
int rc;
+ if (!ss_initialized)
+ return 0;
+
read_lock(&policy_rwlock);
rc = -EINVAL;
Patches currently in stable-queue which might be from paul(a)paul-moore.com are
queue-4.9/selinux-skip-bounded-transition-processing-if-the-policy-isn-t-loaded.patch
queue-4.9/selinux-ensure-the-context-is-nul-terminated-in-security_context_to_sid_core.patch
This is a note to let you know that I've just added the patch titled
selinux: ensure the context is NUL terminated in security_context_to_sid_core()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
selinux-ensure-the-context-is-nul-terminated-in-security_context_to_sid_core.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From ef28df55ac27e1e5cd122e19fa311d886d47a756 Mon Sep 17 00:00:00 2001
From: Paul Moore <paul(a)paul-moore.com>
Date: Tue, 28 Nov 2017 18:51:12 -0500
Subject: selinux: ensure the context is NUL terminated in security_context_to_sid_core()
From: Paul Moore <paul(a)paul-moore.com>
commit ef28df55ac27e1e5cd122e19fa311d886d47a756 upstream.
The syzbot/syzkaller automated tests found a problem in
security_context_to_sid_core() during early boot (before we load the
SELinux policy) where we could potentially feed context strings without
NUL terminators into the strcmp() function.
We already guard against this during normal operation (after the SELinux
policy has been loaded) by making a copy of the context strings and
explicitly adding a NUL terminator to the end. The patch extends this
protection to the early boot case (no loaded policy) by moving the context
copy earlier in security_context_to_sid_core().
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Paul Moore <paul(a)paul-moore.com>
Reviewed-By: William Roberts <william.c.roberts(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
security/selinux/ss/services.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -1400,27 +1400,25 @@ static int security_context_to_sid_core(
if (!scontext_len)
return -EINVAL;
+ /* Copy the string to allow changes and ensure a NUL terminator */
+ scontext2 = kmemdup_nul(scontext, scontext_len, gfp_flags);
+ if (!scontext2)
+ return -ENOMEM;
+
if (!ss_initialized) {
int i;
for (i = 1; i < SECINITSID_NUM; i++) {
- if (!strcmp(initial_sid_to_string[i], scontext)) {
+ if (!strcmp(initial_sid_to_string[i], scontext2)) {
*sid = i;
- return 0;
+ goto out;
}
}
*sid = SECINITSID_KERNEL;
- return 0;
+ goto out;
}
*sid = SECSID_NULL;
- /* Copy the string so that we can modify the copy as we parse it. */
- scontext2 = kmalloc(scontext_len + 1, gfp_flags);
- if (!scontext2)
- return -ENOMEM;
- memcpy(scontext2, scontext, scontext_len);
- scontext2[scontext_len] = 0;
-
if (force) {
/* Save another copy for storing in uninterpreted form */
rc = -ENOMEM;
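The hazard being closed is an ordinary strcmp() reading past the end of a
buffer that carries no terminator. A minimal userspace sketch of the same
copy-and-terminate idea (dup_nul() is an illustrative stand-in, not the kernel
helper):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Copy len bytes and append a NUL so that later strcmp()/parsing calls are
 * safe even if the caller's buffer was not terminated. */
static char *dup_nul(const char *s, size_t len)
{
	char *buf = malloc(len + 1);

	if (!buf)
		return NULL;
	memcpy(buf, s, len);
	buf[len] = '\0';
	return buf;
}

int main(void)
{
	char unterminated[7] = { 'k', 'e', 'r', 'n', 'e', 'l', 'X' }; /* no NUL */
	char *safe = dup_nul(unterminated, 6);

	if (safe && strcmp(safe, "kernel") == 0)
		printf("matched without reading past the buffer\n");
	free(safe);
	return 0;
}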
Patches currently in stable-queue which might be from paul(a)paul-moore.com are
queue-4.9/selinux-skip-bounded-transition-processing-if-the-policy-isn-t-loaded.patch
queue-4.9/selinux-ensure-the-context-is-nul-terminated-in-security_context_to_sid_core.patch
This is a note to let you know that I've just added the patch titled
sctp: set frag_point in sctp_setsockopt_maxseg correctly
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
sctp-set-frag_point-in-sctp_setsockopt_maxseg-correctly.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From ecca8f88da5c4260cc2bccfefd2a24976704c366 Mon Sep 17 00:00:00 2001
From: Xin Long <lucien.xin(a)gmail.com>
Date: Fri, 17 Nov 2017 14:11:11 +0800
Subject: sctp: set frag_point in sctp_setsockopt_maxseg correctly
From: Xin Long <lucien.xin(a)gmail.com>
commit ecca8f88da5c4260cc2bccfefd2a24976704c366 upstream.
Now in sctp_setsockopt_maxseg user_frag or frag_point can be set with
val >= 8 and val <= SCTP_MAX_CHUNK_LEN. But both checks are incorrect.
val >= 8 means frag_point can even be less than SCTP_DEFAULT_MINSEGMENT.
Then in sctp_datamsg_from_user(), when its value is greater than the cookie
echo len and the chunk is bundled with the cookie echo chunk, the first_len
will overflow.
The worst case is when its value is equal to the cookie echo len: first_len
becomes 0 and the later fragmentation goes into a dead loop. In Hangbin's
syzkaller testing env, an OOM was even triggered by the consecutive memory
allocations in that loop.
Besides, SCTP_MAX_CHUNK_LEN is the max size of the whole chunk, so the data
header should be deducted for the frag_point or user_frag check.
This patch does a proper check with SCTP_DEFAULT_MINSEGMENT subtracting
the sctphdr and datahdr, SCTP_MAX_CHUNK_LEN subtracting datahdr when
setting frag_point via sockopt. It also improves the
sctp_setsockopt_maxseg code.
Suggested-by: Marcelo Ricardo Leitner <marcelo.leitner(a)gmail.com>
Reported-by: Hangbin Liu <liuhangbin(a)gmail.com>
Signed-off-by: Xin Long <lucien.xin(a)gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/net/sctp/sctp.h | 3 ++-
net/sctp/socket.c | 29 +++++++++++++++++++----------
2 files changed, 21 insertions(+), 11 deletions(-)
--- a/include/net/sctp/sctp.h
+++ b/include/net/sctp/sctp.h
@@ -433,7 +433,8 @@ static inline int sctp_frag_point(const
if (asoc->user_frag)
frag = min_t(int, frag, asoc->user_frag);
- frag = SCTP_TRUNC4(min_t(int, frag, SCTP_MAX_CHUNK_LEN));
+ frag = SCTP_TRUNC4(min_t(int, frag, SCTP_MAX_CHUNK_LEN -
+ sizeof(struct sctp_data_chunk)));
return frag;
}
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -3125,9 +3125,9 @@ static int sctp_setsockopt_mappedv4(stru
*/
static int sctp_setsockopt_maxseg(struct sock *sk, char __user *optval, unsigned int optlen)
{
+ struct sctp_sock *sp = sctp_sk(sk);
struct sctp_assoc_value params;
struct sctp_association *asoc;
- struct sctp_sock *sp = sctp_sk(sk);
int val;
if (optlen == sizeof(int)) {
@@ -3143,26 +3143,35 @@ static int sctp_setsockopt_maxseg(struct
if (copy_from_user(&params, optval, optlen))
return -EFAULT;
val = params.assoc_value;
- } else
+ } else {
return -EINVAL;
+ }
- if ((val != 0) && ((val < 8) || (val > SCTP_MAX_CHUNK_LEN)))
- return -EINVAL;
+ if (val) {
+ int min_len, max_len;
- asoc = sctp_id2assoc(sk, params.assoc_id);
- if (!asoc && params.assoc_id && sctp_style(sk, UDP))
- return -EINVAL;
+ min_len = SCTP_DEFAULT_MINSEGMENT - sp->pf->af->net_header_len;
+ min_len -= sizeof(struct sctphdr) +
+ sizeof(struct sctp_data_chunk);
+
+ max_len = SCTP_MAX_CHUNK_LEN - sizeof(struct sctp_data_chunk);
+ if (val < min_len || val > max_len)
+ return -EINVAL;
+ }
+
+ asoc = sctp_id2assoc(sk, params.assoc_id);
if (asoc) {
if (val == 0) {
- val = asoc->pathmtu;
- val -= sp->pf->af->net_header_len;
+ val = asoc->pathmtu - sp->pf->af->net_header_len;
val -= sizeof(struct sctphdr) +
- sizeof(struct sctp_data_chunk);
+ sizeof(struct sctp_data_chunk);
}
asoc->user_frag = val;
asoc->frag_point = sctp_frag_point(asoc, asoc->pathmtu);
} else {
+ if (params.assoc_id && sctp_style(sk, UDP))
+ return -EINVAL;
sp->user_frag = val;
}
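As a sanity check of the new bounds, the arithmetic for an IPv4 association
works out as below (header sizes of 20 bytes for iphdr, 12 for sctphdr and 16
for sctp_data_chunk, plus SCTP_DEFAULT_MINSEGMENT = 512 and
SCTP_MAX_CHUNK_LEN = 65535, are assumed typical values, not quotes from the
kernel headers):

#include <stdio.h>

int main(void)
{
	int net_header_len = 20;	/* IPv4 header, assumed */
	int sctphdr_len = 12;		/* struct sctphdr, assumed */
	int data_chunk_len = 16;	/* chunk header + data header, assumed */

	int min_len = 512 - net_header_len - sctphdr_len - data_chunk_len;
	int max_len = 65535 - data_chunk_len;

	printf("SCTP_MAXSEG must satisfy %d <= val <= %d\n", min_len, max_len);
	return 0;
}

Under these assumptions a request such as val = 8, previously accepted, is now
rejected instead of producing a frag_point smaller than a cookie echo chunk.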
Patches currently in stable-queue which might be from lucien.xin(a)gmail.com are
queue-4.9/sctp-set-frag_point-in-sctp_setsockopt_maxseg-correctly.patch
This is a note to let you know that I've just added the patch titled
rds: tcp: atomically purge entries from rds_tcp_conn_list during netns delete
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
rds-tcp-atomically-purge-entries-from-rds_tcp_conn_list-during-netns-delete.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From f10b4cff98c6977668434fbf5dd58695eeca2897 Mon Sep 17 00:00:00 2001
From: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
Date: Thu, 30 Nov 2017 11:11:29 -0800
Subject: rds: tcp: atomically purge entries from rds_tcp_conn_list during netns delete
From: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
commit f10b4cff98c6977668434fbf5dd58695eeca2897 upstream.
The rds_tcp_kill_sock() function parses the rds_tcp_conn_list
to find the rds_connection entries marked for deletion as part
of the netns deletion under the protection of the rds_tcp_conn_lock.
Since the rds_tcp_conn_list tracks rds_tcp_connections (which
have a 1:1 mapping with rds_conn_path), multiple tc entries in
the rds_tcp_conn_list will map to a single rds_connection, and will
be deleted as part of the rds_conn_destroy() operation that is
done outside the rds_tcp_conn_lock.
The rds_tcp_conn_list traversal done under the protection of
rds_tcp_conn_lock should not leave any doomed tc entries in
the list after the rds_tcp_conn_lock is released, else another
concurrently executing netns delete (for a different netns) thread
may trip on these entries.
Reported-by: syzbot <syzkaller(a)googlegroups.com>
Signed-off-by: Sowmini Varadhan <sowmini.varadhan(a)oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar(a)oracle.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/rds/tcp.c | 9 +++++++--
net/rds/tcp.h | 1 +
2 files changed, 8 insertions(+), 2 deletions(-)
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c
@@ -303,7 +303,8 @@ static void rds_tcp_conn_free(void *arg)
rdsdebug("freeing tc %p\n", tc);
spin_lock_irqsave(&rds_tcp_conn_lock, flags);
- list_del(&tc->t_tcp_node);
+ if (!tc->t_tcp_node_detached)
+ list_del(&tc->t_tcp_node);
spin_unlock_irqrestore(&rds_tcp_conn_lock, flags);
kmem_cache_free(rds_tcp_conn_slab, tc);
@@ -528,8 +529,12 @@ static void rds_tcp_kill_sock(struct net
if (net != c_net || !tc->t_sock)
continue;
- if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn))
+ if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn)) {
list_move_tail(&tc->t_tcp_node, &tmp_list);
+ } else {
+ list_del(&tc->t_tcp_node);
+ tc->t_tcp_node_detached = true;
+ }
}
spin_unlock_irq(&rds_tcp_conn_lock);
list_for_each_entry_safe(tc, _tc, &tmp_list, t_tcp_node) {
--- a/net/rds/tcp.h
+++ b/net/rds/tcp.h
@@ -11,6 +11,7 @@ struct rds_tcp_incoming {
struct rds_tcp_connection {
struct list_head t_tcp_node;
+ bool t_tcp_node_detached;
struct rds_conn_path *t_cpath;
/* t_conn_path_lock synchronizes the connection establishment between
* rds_tcp_accept_one and rds_tcp_conn_path_connect
Patches currently in stable-queue which might be from sowmini.varadhan(a)oracle.com are
queue-4.9/rds-tcp-atomically-purge-entries-from-rds_tcp_conn_list-during-netns-delete.patch
This is a note to let you know that I've just added the patch titled
ptr_ring: fail early if queue occupies more than KMALLOC_MAX_SIZE
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 6e6e41c3112276288ccaf80c70916779b84bb276 Mon Sep 17 00:00:00 2001
From: Jason Wang <jasowang(a)redhat.com>
Date: Fri, 9 Feb 2018 17:45:49 +0800
Subject: ptr_ring: fail early if queue occupies more than KMALLOC_MAX_SIZE
From: Jason Wang <jasowang(a)redhat.com>
commit 6e6e41c3112276288ccaf80c70916779b84bb276 upstream.
To avoid the slab warning about an excessive allocation size, fail early
if the queue would occupy more than KMALLOC_MAX_SIZE.
Reported-by: syzbot+e4d4f9ddd4295539735d(a)syzkaller.appspotmail.com
Fixes: 2e0ab8ca83c12 ("ptr_ring: array based FIFO for pointers")
Signed-off-by: Jason Wang <jasowang(a)redhat.com>
Acked-by: Michael S. Tsirkin <mst(a)redhat.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/ptr_ring.h | 2 ++
1 file changed, 2 insertions(+)
--- a/include/linux/ptr_ring.h
+++ b/include/linux/ptr_ring.h
@@ -351,6 +351,8 @@ static inline void *ptr_ring_consume_bh(
static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)
{
+ if (size * sizeof(void *) > KMALLOC_MAX_SIZE)
+ return NULL;
return kcalloc(size, sizeof(void *), gfp);
}
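The early return keeps kcalloc() from ever being asked for more than
KMALLOC_MAX_SIZE bytes. A standalone illustration of the bound
(KMALLOC_MAX_SIZE is assumed to be 4 MiB, a common value on 4 KiB-page builds;
the real constant is configuration dependent):

#include <stdio.h>

#define KMALLOC_MAX_SIZE_ASSUMED (4UL << 20)	/* assumption, see above */

int main(void)
{
	unsigned int size = 1U << 20;	/* ring entries requested by userspace */
	unsigned long bytes = (unsigned long)size * sizeof(void *);

	if (bytes > KMALLOC_MAX_SIZE_ASSUMED)
		printf("%u entries -> %lu bytes: refused early, no slab warning\n",
		       size, bytes);
	else
		printf("%u entries -> %lu bytes: small enough to allocate\n",
		       size, bytes);
	return 0;
}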
Patches currently in stable-queue which might be from jasowang(a)redhat.com are
queue-4.9/vhost-use-mutex_lock_nested-in-vhost_dev_lock_vqs.patch
queue-4.9/ptr_ring-fail-early-if-queue-occupies-more-than-kmalloc_max_size.patch
This is a note to let you know that I've just added the patch titled
netfilter: xt_RATEEST: acquire xt_rateest_mutex for hash insert
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 7dc68e98757a8eccf8ca7a53a29b896f1eef1f76 Mon Sep 17 00:00:00 2001
From: Cong Wang <xiyou.wangcong(a)gmail.com>
Date: Mon, 5 Feb 2018 14:41:45 -0800
Subject: netfilter: xt_RATEEST: acquire xt_rateest_mutex for hash insert
From: Cong Wang <xiyou.wangcong(a)gmail.com>
commit 7dc68e98757a8eccf8ca7a53a29b896f1eef1f76 upstream.
rateest_hash is supposed to be protected by xt_rateest_mutex,
and, as suggested by Eric, lookup and insert should be atomic,
so we should acquire the xt_rateest_mutex once for both.
So introduce a non-locking helper for internal use and keep the
locking one for external callers.
Reported-by: <syzbot+5cb189720978275e4c75(a)syzkaller.appspotmail.com>
Fixes: 5859034d7eb8 ("[NETFILTER]: x_tables: add RATEEST target")
Signed-off-by: Cong Wang <xiyou.wangcong(a)gmail.com>
Reviewed-by: Florian Westphal <fw(a)strlen.de>
Reviewed-by: Eric Dumazet <edumazet(a)google.com>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netfilter/xt_RATEEST.c | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
--- a/net/netfilter/xt_RATEEST.c
+++ b/net/netfilter/xt_RATEEST.c
@@ -39,23 +39,31 @@ static void xt_rateest_hash_insert(struc
hlist_add_head(&est->list, &rateest_hash[h]);
}
-struct xt_rateest *xt_rateest_lookup(const char *name)
+static struct xt_rateest *__xt_rateest_lookup(const char *name)
{
struct xt_rateest *est;
unsigned int h;
h = xt_rateest_hash(name);
- mutex_lock(&xt_rateest_mutex);
hlist_for_each_entry(est, &rateest_hash[h], list) {
if (strcmp(est->name, name) == 0) {
est->refcnt++;
- mutex_unlock(&xt_rateest_mutex);
return est;
}
}
- mutex_unlock(&xt_rateest_mutex);
+
return NULL;
}
+
+struct xt_rateest *xt_rateest_lookup(const char *name)
+{
+ struct xt_rateest *est;
+
+ mutex_lock(&xt_rateest_mutex);
+ est = __xt_rateest_lookup(name);
+ mutex_unlock(&xt_rateest_mutex);
+ return est;
+}
EXPORT_SYMBOL_GPL(xt_rateest_lookup);
void xt_rateest_put(struct xt_rateest *est)
@@ -100,8 +108,10 @@ static int xt_rateest_tg_checkentry(cons
net_get_random_once(&jhash_rnd, sizeof(jhash_rnd));
- est = xt_rateest_lookup(info->name);
+ mutex_lock(&xt_rateest_mutex);
+ est = __xt_rateest_lookup(info->name);
if (est) {
+ mutex_unlock(&xt_rateest_mutex);
/*
* If estimator parameters are specified, they must match the
* existing estimator.
@@ -139,11 +149,13 @@ static int xt_rateest_tg_checkentry(cons
info->est = est;
xt_rateest_hash_insert(est);
+ mutex_unlock(&xt_rateest_mutex);
return 0;
err2:
kfree(est);
err1:
+ mutex_unlock(&xt_rateest_mutex);
return ret;
}
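The shape of the fix is the usual unlocked-helper-plus-locked-wrapper split, so
that checkentry can perform lookup and insert inside one critical section. A
generic pthread-based sketch of that pattern (the table and names are
illustrative, not the netfilter API):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static const char *table[16];
static int table_used;

/* Caller must hold table_lock. */
static const char *__lookup(const char *name)
{
	for (int i = 0; i < table_used; i++)
		if (strcmp(table[i], name) == 0)
			return table[i];
	return NULL;
}

/* Locked wrapper kept for external callers. */
static const char *lookup(const char *name)
{
	pthread_mutex_lock(&table_lock);
	const char *e = __lookup(name);

	pthread_mutex_unlock(&table_lock);
	return e;
}

/* Lookup and insert under a single lock acquisition, mirroring what the
 * patch does in xt_rateest_tg_checkentry(). */
static void lookup_or_insert(const char *name)
{
	pthread_mutex_lock(&table_lock);
	if (!__lookup(name) && table_used < 16)
		table[table_used++] = name;
	pthread_mutex_unlock(&table_lock);
}

int main(void)
{
	lookup_or_insert("est0");
	printf("%s\n", lookup("est0") ? "found" : "missing");
	return 0;
}

The double-underscore prefix follows the kernel convention that the caller is
expected to already hold the relevant lock.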
Patches currently in stable-queue which might be from xiyou.wangcong(a)gmail.com are
queue-4.9/netfilter-xt_cgroup-initialize-info-priv-in-cgroup_mt_check_v1.patch
queue-4.9/netfilter-xt_rateest-acquire-xt_rateest_mutex-for-hash-insert.patch
queue-4.9/xfrm-check-id-proto-in-validate_tmpl.patch
This is a note to let you know that I've just added the patch titled
Provide a function to create a NUL-terminated string from unterminated data
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
provide-a-function-to-create-a-nul-terminated-string-from-unterminated-data.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From f35157417215ec138c920320c746fdb3e04ef1d5 Mon Sep 17 00:00:00 2001
From: David Howells <dhowells(a)redhat.com>
Date: Tue, 4 Jul 2017 17:25:02 +0100
Subject: Provide a function to create a NUL-terminated string from unterminated data
From: David Howells <dhowells(a)redhat.com>
commit f35157417215ec138c920320c746fdb3e04ef1d5 upstream.
Provide a function, kmemdup_nul(), that will create a NUL-terminated string
from an unterminated character array where the length is known in advance.
This is better than kstrndup() in situations where we already know the
string length as the strnlen() in kstrndup() is superfluous.
Signed-off-by: David Howells <dhowells(a)redhat.com>
Signed-off-by: Al Viro <viro(a)zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
include/linux/string.h | 1 +
mm/util.c | 24 ++++++++++++++++++++++++
2 files changed, 25 insertions(+)
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -123,6 +123,7 @@ extern char *kstrdup(const char *s, gfp_
extern const char *kstrdup_const(const char *s, gfp_t gfp);
extern char *kstrndup(const char *s, size_t len, gfp_t gfp);
extern void *kmemdup(const void *src, size_t len, gfp_t gfp);
+extern char *kmemdup_nul(const char *s, size_t len, gfp_t gfp);
extern char **argv_split(gfp_t gfp, const char *str, int *argcp);
extern void argv_free(char **argv);
--- a/mm/util.c
+++ b/mm/util.c
@@ -80,6 +80,8 @@ EXPORT_SYMBOL(kstrdup_const);
* @s: the string to duplicate
* @max: read at most @max chars from @s
* @gfp: the GFP mask used in the kmalloc() call when allocating memory
+ *
+ * Note: Use kmemdup_nul() instead if the size is known exactly.
*/
char *kstrndup(const char *s, size_t max, gfp_t gfp)
{
@@ -118,6 +120,28 @@ void *kmemdup(const void *src, size_t le
EXPORT_SYMBOL(kmemdup);
/**
+ * kmemdup_nul - Create a NUL-terminated string from unterminated data
+ * @s: The data to stringify
+ * @len: The size of the data
+ * @gfp: the GFP mask used in the kmalloc() call when allocating memory
+ */
+char *kmemdup_nul(const char *s, size_t len, gfp_t gfp)
+{
+ char *buf;
+
+ if (!s)
+ return NULL;
+
+ buf = kmalloc_track_caller(len + 1, gfp);
+ if (buf) {
+ memcpy(buf, s, len);
+ buf[len] = '\0';
+ }
+ return buf;
+}
+EXPORT_SYMBOL(kmemdup_nul);
+
+/**
* memdup_user - duplicate memory region from user space
*
* @src: source address in user space
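The intended call pattern is the one used by the SELinux fix earlier in this
queue: the caller already knows the length, so kmemdup_nul() skips the
strnlen() pass that kstrndup() would perform. An illustrative fragment of such
a caller (error handling trimmed):

	scontext2 = kmemdup_nul(scontext, scontext_len, gfp_flags);
	if (!scontext2)
		return -ENOMEM;
	/* scontext2 is NUL terminated; strcmp() and parsing are now safe */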
Patches currently in stable-queue which might be from dhowells(a)redhat.com are
queue-4.9/provide-a-function-to-create-a-nul-terminated-string-from-unterminated-data.patch
This is a note to let you know that I've just added the patch titled
netfilter: x_tables: fix int overflow in xt_alloc_table_info()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
netfilter-x_tables-fix-int-overflow-in-xt_alloc_table_info.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 889c604fd0b5f6d3b8694ade229ee44124de1127 Mon Sep 17 00:00:00 2001
From: Dmitry Vyukov <dvyukov(a)google.com>
Date: Thu, 28 Dec 2017 09:48:54 +0100
Subject: netfilter: x_tables: fix int overflow in xt_alloc_table_info()
From: Dmitry Vyukov <dvyukov(a)google.com>
commit 889c604fd0b5f6d3b8694ade229ee44124de1127 upstream.
syzkaller triggered OOM kills by passing ipt_replace.size = -1
to IPT_SO_SET_REPLACE. The root cause is that SMP_ALIGN() in
xt_alloc_table_info() causes int overflow and the size check passes
when it should not. SMP_ALIGN() is a leftover that is no longer needed.
Remove SMP_ALIGN() call in xt_alloc_table_info().
Reported-by: syzbot+4396883fa8c4f64e0175(a)syzkaller.appspotmail.com
Signed-off-by: Dmitry Vyukov <dvyukov(a)google.com>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/netfilter/x_tables.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -39,8 +39,6 @@ MODULE_LICENSE("GPL");
MODULE_AUTHOR("Harald Welte <laforge(a)netfilter.org>");
MODULE_DESCRIPTION("{ip,ip6,arp,eb}_tables backend module");
-#define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1))
-
struct compat_delta {
unsigned int offset; /* offset in kernel */
int delta; /* delta in 32bit user land */
@@ -952,7 +950,7 @@ struct xt_table_info *xt_alloc_table_inf
return NULL;
/* Pedantry: prevent them from hitting BUG() in vmalloc.c --RR */
- if ((SMP_ALIGN(size) >> PAGE_SHIFT) + 2 > totalram_pages)
+ if ((size >> PAGE_SHIFT) + 2 > totalram_pages)
return NULL;
if (sz <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
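The wrap-around is easy to reproduce in isolation: with size = -1 (UINT_MAX)
the old SMP_ALIGN() result collapses to a tiny value and the totalram_pages
test is sailed past. A standalone demonstration (SMP_CACHE_BYTES is assumed to
be 64; only the overflow matters, not the exact value):

#include <stdio.h>

#define SMP_CACHE_BYTES 64	/* assumption; any power of two shows the wrap */
#define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES - 1) & ~(SMP_CACHE_BYTES - 1))
#define PAGE_SHIFT 12

int main(void)
{
	unsigned int size = (unsigned int)-1;	/* ipt_replace.size = -1 */
	unsigned int aligned = SMP_ALIGN(size);	/* wraps around to 0 */

	printf("SMP_ALIGN(%u) = %u\n", size, aligned);
	printf("old check sees %u pages, new check sees %u pages\n",
	       aligned >> PAGE_SHIFT, size >> PAGE_SHIFT);
	return 0;
}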
Patches currently in stable-queue which might be from dvyukov(a)google.com are
queue-4.9/kvm-x86-fix-escape-of-guest-dr6-to-the-host.patch
queue-4.9/blktrace-fix-unlocked-registration-of-tracepoints.patch
queue-4.9/netfilter-x_tables-fix-int-overflow-in-xt_alloc_table_info.patch
queue-4.9/netfilter-ipt_clusterip-fix-out-of-bounds-accesses-in-clusterip_tg_check.patch
queue-4.9/kcov-detect-double-association-with-a-single-task.patch