The patch titled
Subject: khugepaged: fix null-pointer dereference due to race
has been removed from the -mm tree. Its filename was
khugepaged-fix-null-pointer-dereference-due-to-race.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Subject: khugepaged: fix null-pointer dereference due to race
khugepaged has to drop the mmap lock several times while collapsing a
page. The situation can change while the lock is dropped, so we need to
re-validate that the VMA is still in place and the PMD is still suitable
for collapse.
But we miss one corner case: while collapsing anonymous pages, the VMA
could be replaced with a file VMA. If the file VMA doesn't have any
private pages, we get a NULL pointer dereference:
general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
anon_vma_lock_write include/linux/rmap.h:120 [inline]
collapse_huge_page mm/khugepaged.c:1110 [inline]
khugepaged_scan_pmd mm/khugepaged.c:1349 [inline]
khugepaged_scan_mm_slot mm/khugepaged.c:2110 [inline]
khugepaged_do_scan mm/khugepaged.c:2193 [inline]
khugepaged+0x3bba/0x5a10 mm/khugepaged.c:2238
The fix is to make sure that the VMA is anonymous in
hugepage_vma_revalidate(). The helper is only used for collapsing
anonymous pages.
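For reference, the helper named in the trace, anon_vma_lock_write(),
unconditionally dereferences its argument; roughly (condensed from
include/linux/rmap.h):

static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
{
        /* anon_vma == NULL here is the null-ptr-deref reported above. */
        down_write(&anon_vma->root->rwsem);
}

collapse_huge_page() passes vma->anon_vma, which is NULL for a file VMA
with no private pages; with the check added below,
hugepage_vma_revalidate() returns SCAN_VMA_CHECK before that call is
reached.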
Link: http://lkml.kernel.org/r/20200722121439.44328-1-kirill.shutemov@linux.intel…
Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Reported-by: syzbot+ed318e8b790ca72c5ad0(a)syzkaller.appspotmail.com
Reviewed-by: David Hildenbrand <david(a)redhat.com>
Acked-by: Yang Shi <yang.shi(a)linux.alibaba.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/khugepaged.c | 3 +++
1 file changed, 3 insertions(+)
--- a/mm/khugepaged.c~khugepaged-fix-null-pointer-dereference-due-to-race
+++ a/mm/khugepaged.c
@@ -958,6 +958,9 @@ static int hugepage_vma_revalidate(struc
return SCAN_ADDRESS_RANGE;
if (!hugepage_vma_check(vma, vma->vm_flags))
return SCAN_VMA_CHECK;
+ /* Anon VMA expected */
+ if (!vma->anon_vma || vma->vm_ops)
+ return SCAN_VMA_CHECK;
return 0;
}
_
Patches currently in -mm which might be from kirill.shutemov(a)linux.intel.com are
The patch titled
Subject: mm/hugetlb: avoid hardcoding while checking if cma is enabled
has been removed from the -mm tree. Its filename was
mm-hugetlb-avoid-hardcoding-while-checking-if-cma-is-enabled.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Barry Song <song.bao.hua(a)hisilicon.com>
Subject: mm/hugetlb: avoid hardcoding while checking if cma is enabled
hugetlb_cma[0] can be NULL for various reasons, for example, when node 0
has no memory. So a NULL hugetlb_cma[0] doesn't necessarily mean CMA is
not enabled; gigantic pages might have been reserved on other nodes.
This patch fixes a possible double reservation and CMA leak.
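The idea, as a minimal sketch (the helper name is illustrative; the
patch itself open-codes the check, see the last hunk below):

#ifdef CONFIG_CMA
/*
 * One CMA area per node; a memoryless node (e.g. node 0) gets no area,
 * so hugetlb_cma[0] may be NULL even though CMA is in use elsewhere.
 */
static struct cma *hugetlb_cma[MAX_NUMNODES];
#endif
/* Non-zero iff a size was requested via the hugetlb_cma= boot option. */
static unsigned long hugetlb_cma_size __initdata;

static bool __init hugetlb_cma_requested(void)
{
        return hugetlb_cma_size != 0;
}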
[akpm(a)linux-foundation.org: fix CONFIG_CMA=n warning]
[sfr(a)canb.auug.org.au: better checks before using hugetlb_cma]
Link: http://lkml.kernel.org/r/20200721205716.6dbaa56b@canb.auug.org.au
Link: http://lkml.kernel.org/r/20200710005726.36068-1-song.bao.hua@hisilicon.com
Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Barry Song <song.bao.hua(a)hisilicon.com>
Acked-by: Roman Gushchin <guro(a)fb.com>
Reviewed-by: Mike Kravetz <mike.kravetz(a)oracle.com>
Cc: Jonathan Cameron <jonathan.cameron(a)huawei.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/hugetlb.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
--- a/mm/hugetlb.c~mm-hugetlb-avoid-hardcoding-while-checking-if-cma-is-enabled
+++ a/mm/hugetlb.c
@@ -45,7 +45,10 @@ int hugetlb_max_hstate __read_mostly;
unsigned int default_hstate_idx;
struct hstate hstates[HUGE_MAX_HSTATE];
+#ifdef CONFIG_CMA
static struct cma *hugetlb_cma[MAX_NUMNODES];
+#endif
+static unsigned long hugetlb_cma_size __initdata;
/*
* Minimum page order among possible hugepage sizes, set to a proper value
@@ -1235,9 +1238,10 @@ static void free_gigantic_page(struct pa
* If the page isn't allocated using the cma allocator,
* cma_release() returns false.
*/
- if (IS_ENABLED(CONFIG_CMA) &&
- cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
+#ifdef CONFIG_CMA
+ if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
return;
+#endif
free_contig_range(page_to_pfn(page), 1 << order);
}
@@ -1248,7 +1252,8 @@ static struct page *alloc_gigantic_page(
{
unsigned long nr_pages = 1UL << huge_page_order(h);
- if (IS_ENABLED(CONFIG_CMA)) {
+#ifdef CONFIG_CMA
+ {
struct page *page;
int node;
@@ -1262,6 +1267,7 @@ static struct page *alloc_gigantic_page(
return page;
}
}
+#endif
return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
}
@@ -2571,7 +2577,7 @@ static void __init hugetlb_hstate_alloc_
for (i = 0; i < h->max_huge_pages; ++i) {
if (hstate_is_gigantic(h)) {
- if (IS_ENABLED(CONFIG_CMA) && hugetlb_cma[0]) {
+ if (hugetlb_cma_size) {
pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
break;
}
@@ -5654,7 +5660,6 @@ void move_hugetlb_state(struct page *old
}
#ifdef CONFIG_CMA
-static unsigned long hugetlb_cma_size __initdata;
static bool cma_reserve_called __initdata;
static int __init cmdline_parse_hugetlb_cma(char *p)
_
Patches currently in -mm which might be from song.bao.hua(a)hisilicon.com are
mm-hugetlb-split-hugetlb_cma-in-nodes-with-memory.patch
mm-cma-fix-the-name-of-cma-areas.patch
mm-hugetlb-fix-the-name-of-hugetlb-cma.patch
The patch titled
Subject: mm: memcg/slab: fix memory leak at non-root kmem_cache destroy
has been removed from the -mm tree. Its filename was
mm-memcg-slab-fix-memory-leak-at-non-root-kmem_cache-destroy.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Muchun Song <songmuchun(a)bytedance.com>
Subject: mm: memcg/slab: fix memory leak at non-root kmem_cache destroy
If the kmem_cache refcount is greater than one, we should not mark the
root kmem_cache as dying. If we mark the root kmem_cache as dying
incorrectly, the non-root kmem_caches can never be destroyed, which
results in a memory leak when the memcg is destroyed. The issue can be
reproduced with the following steps.
1) Use kmem_cache_create() to create a new kmem_cache named A.
2) Coincidentally, the kmem_cache A is an alias for an existing
kmem_cache B, so only B's refcount is increased.
3) Use kmem_cache_destroy() to destroy the kmem_cache A. This only
decreases B's refcount but still marks B as dying.
4) Create a new memory cgroup and allocate memory from the kmem_cache
B. This creates a non-root kmem_cache for the allocation.
5) When the memory cgroup created in step 4) is destroyed, the
non-root kmem_cache can never be destroyed.
Repeating steps 4) and 5) leaks more memory each time. So only mark the
root kmem_cache as dying when its refcount reaches zero.
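A hypothetical module sketch of steps 1)-3) (all demo_* names are made
up; whether the new cache is actually merged with an existing one
depends on slab merging being enabled and the caches being compatible):

#include <linux/module.h>
#include <linux/slab.h>

struct demo_obj {
        unsigned long payload;
};

static struct kmem_cache *demo_cachep;

static int __init demo_init(void)
{
        /*
         * Step 1): create cache "A".  Step 2): the slab allocator may
         * merge it with a compatible existing cache "B"; then only B's
         * refcount is bumped and demo_cachep points at B.
         */
        demo_cachep = kmem_cache_create("demo_cache", sizeof(struct demo_obj),
                                        0, SLAB_HWCACHE_ALIGN, NULL);
        return demo_cachep ? 0 : -ENOMEM;
}

static void __exit demo_exit(void)
{
        /*
         * Step 3): this only drops B's refcount (still > 0), but before
         * the fix it also marked B as dying, so non-root children of B
         * created later (steps 4) and 5)) could never be destroyed.
         */
        kmem_cache_destroy(demo_cachep);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");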
Link: http://lkml.kernel.org/r/20200716165103.83462-1-songmuchun@bytedance.com
Fixes: 92ee383f6daa ("mm: fix race between kmem_cache destroy, create and deactivate")
Signed-off-by: Muchun Song <songmuchun(a)bytedance.com>
Reviewed-by: Shakeel Butt <shakeelb(a)google.com>
Acked-by: Roman Gushchin <guro(a)fb.com>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Christoph Lameter <cl(a)linux.com>
Cc: Pekka Enberg <penberg(a)kernel.org>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Shakeel Butt <shakeelb(a)google.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/slab_common.c | 35 ++++++++++++++++++++++++++++-------
1 file changed, 28 insertions(+), 7 deletions(-)
--- a/mm/slab_common.c~mm-memcg-slab-fix-memory-leak-at-non-root-kmem_cache-destroy
+++ a/mm/slab_common.c
@@ -326,6 +326,14 @@ int slab_unmergeable(struct kmem_cache *
if (s->refcount < 0)
return 1;
+#ifdef CONFIG_MEMCG_KMEM
+ /*
+ * Skip the dying kmem_cache.
+ */
+ if (s->memcg_params.dying)
+ return 1;
+#endif
+
return 0;
}
@@ -886,12 +894,15 @@ static int shutdown_memcg_caches(struct
return 0;
}
-static void flush_memcg_workqueue(struct kmem_cache *s)
+static void memcg_set_kmem_cache_dying(struct kmem_cache *s)
{
spin_lock_irq(&memcg_kmem_wq_lock);
s->memcg_params.dying = true;
spin_unlock_irq(&memcg_kmem_wq_lock);
+}
+static void flush_memcg_workqueue(struct kmem_cache *s)
+{
/*
* SLAB and SLUB deactivate the kmem_caches through call_rcu. Make
* sure all registered rcu callbacks have been invoked.
@@ -923,10 +934,6 @@ static inline int shutdown_memcg_caches(
{
return 0;
}
-
-static inline void flush_memcg_workqueue(struct kmem_cache *s)
-{
-}
#endif /* CONFIG_MEMCG_KMEM */
void slab_kmem_cache_release(struct kmem_cache *s)
@@ -944,8 +951,6 @@ void kmem_cache_destroy(struct kmem_cach
if (unlikely(!s))
return;
- flush_memcg_workqueue(s);
-
get_online_cpus();
get_online_mems();
@@ -955,6 +960,22 @@ void kmem_cache_destroy(struct kmem_cach
if (s->refcount)
goto out_unlock;
+#ifdef CONFIG_MEMCG_KMEM
+ memcg_set_kmem_cache_dying(s);
+
+ mutex_unlock(&slab_mutex);
+
+ put_online_mems();
+ put_online_cpus();
+
+ flush_memcg_workqueue(s);
+
+ get_online_cpus();
+ get_online_mems();
+
+ mutex_lock(&slab_mutex);
+#endif
+
err = shutdown_memcg_caches(s);
if (!err)
err = shutdown_cache(s);
_
Patches currently in -mm which might be from songmuchun(a)bytedance.com are
mm-page_alloc-skip-setting-nodemask-when-we-are-in-interrupt.patch
The patch titled
Subject: mm/memcg: fix refcount error while moving and swapping
has been removed from the -mm tree. Its filename was
mm-memcg-fix-refcount-error-while-moving-and-swapping.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Hugh Dickins <hughd(a)google.com>
Subject: mm/memcg: fix refcount error while moving and swapping
It was hard to keep a test running, moving tasks between memcgs with
move_charge_at_immigrate, while swapping: mem_cgroup_id_get_many()'s
refcount is discovered to be 0 (supposedly impossible), so it is then
forced to REFCOUNT_SATURATED, and after thousands of warnings in quick
succession, the test is at last put out of misery by being OOM killed.
This is because of the way moved_swap accounting was saved up until the
task move gets completed in __mem_cgroup_clear_mc(), deferred from when
mem_cgroup_move_swap_account() actually exchanged old and new ids.
Concurrent activity can free up swap quicker than the task is scanned,
bringing the id refcount down to 0 (which should only be possible when
offlining).
Just skip that optimization: do that part of the accounting immediately.
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2007071431050.4726@eggly.anvils
Fixes: 615d66c37c75 ("mm: memcontrol: fix memcg id ref counter on swap charge move")
Signed-off-by: Hugh Dickins <hughd(a)google.com>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Alex Shi <alex.shi(a)linux.alibaba.com>
Cc: Shakeel Butt <shakeelb(a)google.com>
Cc: Michal Hocko <mhocko(a)suse.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/memcontrol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/mm/memcontrol.c~mm-memcg-fix-refcount-error-while-moving-and-swapping
+++ a/mm/memcontrol.c
@@ -5669,7 +5669,6 @@ static void __mem_cgroup_clear_mc(void)
if (!mem_cgroup_is_root(mc.to))
page_counter_uncharge(&mc.to->memory, mc.moved_swap);
- mem_cgroup_id_get_many(mc.to, mc.moved_swap);
css_put_many(&mc.to->css, mc.moved_swap);
mc.moved_swap = 0;
@@ -5860,7 +5859,8 @@ put: /* get_mctgt_type() gets the page
ent = target.ent;
if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to)) {
mc.precharge--;
- /* we fixup refcnts and charges later. */
+ mem_cgroup_id_get_many(mc.to, 1);
+ /* we fixup other refcnts and charges later. */
mc.moved_swap++;
}
break;
_
Patches currently in -mm which might be from hughd(a)google.com are
The patch titled
Subject: vfs/xattr: mm/shmem: kernfs: release simple xattr entry in a right way
has been removed from the -mm tree. Its filename was
vfs-xattr-mm-shmem-kernfs-release-simple-xattr-entry-in-a-right-way.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Chengguang Xu <cgxu519(a)mykernel.net>
Subject: vfs/xattr: mm/shmem: kernfs: release simple xattr entry in a right way
After commit fdc85222d58e ("kernfs: kvmalloc xattr value instead of
kmalloc"), the simple xattr entry is allocated with kvmalloc() instead of
kmalloc(), so we should release it with kvfree() instead of kfree().
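The pairing rule the fix restores, as an illustrative sketch (the
demo_* structure is not the kernel's simple_xattr code):

#include <linux/mm.h>           /* kvmalloc(), kvfree() */
#include <linux/slab.h>         /* kfree() */
#include <linux/string.h>       /* kstrdup() */

struct demo_xattr {
        char *name;
        size_t size;
        char value[];
};

static struct demo_xattr *demo_xattr_alloc(const char *name, size_t size)
{
        struct demo_xattr *xattr;

        /* The value can be large, hence kvmalloc(), as in fdc85222d58e. */
        xattr = kvmalloc(sizeof(*xattr) + size, GFP_KERNEL);
        if (!xattr)
                return NULL;
        xattr->size = size;
        xattr->name = kstrdup(name, GFP_KERNEL);        /* kmalloc memory */
        if (!xattr->name) {
                kvfree(xattr);  /* must pair with kvmalloc(), not kfree() */
                return NULL;
        }
        return xattr;
}

static void demo_xattr_free(struct demo_xattr *xattr)
{
        kfree(xattr->name);     /* kstrdup() pairs with kfree() */
        kvfree(xattr);          /* kvmalloc() pairs with kvfree() */
}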
Link: http://lkml.kernel.org/r/20200704051608.15043-1-cgxu519@mykernel.net
Fixes: fdc85222d58e ("kernfs: kvmalloc xattr value instead of kmalloc")
Signed-off-by: Chengguang Xu <cgxu519(a)mykernel.net>
Acked-by: Hugh Dickins <hughd(a)google.com>
Acked-by: Tejun Heo <tj(a)kernel.org>
Cc: Daniel Xu <dxu(a)dxuuu.xyz>
Cc: Chris Down <chris(a)chrisdown.name>
Cc: Andreas Dilger <adilger(a)dilger.ca>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Al Viro <viro(a)zeniv.linux.org.uk>
Cc: <stable(a)vger.kernel.org> [5.7]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
include/linux/xattr.h | 3 ++-
mm/shmem.c | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
--- a/include/linux/xattr.h~vfs-xattr-mm-shmem-kernfs-release-simple-xattr-entry-in-a-right-way
+++ a/include/linux/xattr.h
@@ -15,6 +15,7 @@
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/spinlock.h>
+#include <linux/mm.h>
#include <uapi/linux/xattr.h>
struct inode;
@@ -94,7 +95,7 @@ static inline void simple_xattrs_free(st
list_for_each_entry_safe(xattr, node, &xattrs->head, list) {
kfree(xattr->name);
- kfree(xattr);
+ kvfree(xattr);
}
}
--- a/mm/shmem.c~vfs-xattr-mm-shmem-kernfs-release-simple-xattr-entry-in-a-right-way
+++ a/mm/shmem.c
@@ -3178,7 +3178,7 @@ static int shmem_initxattrs(struct inode
new_xattr->name = kmalloc(XATTR_SECURITY_PREFIX_LEN + len,
GFP_KERNEL);
if (!new_xattr->name) {
- kfree(new_xattr);
+ kvfree(new_xattr);
return -ENOMEM;
}
_
Patches currently in -mm which might be from cgxu519(a)mykernel.net are
The patch titled
Subject: mm/mmap.c: close race between munmap() and expand_upwards()/downwards()
has been removed from the -mm tree. Its filename was
mm-close-race-between-munmap-and-expand_upwards-downwards.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Subject: mm/mmap.c: close race between munmap() and expand_upwards()/downwards()
VMAs with the VM_GROWSDOWN or VM_GROWSUP flag set can change their size
under mmap_read_lock(). This can lead to a race with __do_munmap():
        Thread A                        Thread B
__do_munmap()
  detach_vmas_to_be_unmapped()
  mmap_write_downgrade()
                                expand_downwards()
                                  vma->vm_start = address;
                                  // The VMA now overlaps with
                                  // VMAs detached by Thread A
                                // page fault populates expanded part
                                // of the VMA
  unmap_region()
    // Zaps pagetables partly
    // populated by Thread B
A similar race exists for expand_upwards().
The fix is to avoid downgrading the mmap_lock in __do_munmap() if a
detached VMA is next to a VM_GROWSDOWN or VM_GROWSUP VMA.
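Condensed from the hunks below, the resulting decision in __do_munmap()
is roughly:

        /*
         * Detach vmas from the rbtree; returns false when a neighbouring
         * VM_GROWSDOWN/VM_GROWSUP VMA could grow into the detached range
         * under mmap_read_lock().
         */
        if (!detach_vmas_to_be_unmapped(mm, vma, prev, end))
                downgrade = false;

        if (downgrade)
                mmap_write_downgrade(mm);

        unmap_region(mm, vma, prev, start, end);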
[akpm(a)linux-foundation.org: s/mmap_sem/mmap_lock/ in comment]
Link: http://lkml.kernel.org/r/20200709105309.42495-1-kirill.shutemov@linux.intel…
Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov(a)linux.intel.com>
Reported-by: Jann Horn <jannh(a)google.com>
Acked-by: Vlastimil Babka <vbabka(a)suse.cz>
Reviewed-by: Yang Shi <yang.shi(a)linux.alibaba.com>
Cc: Oleg Nesterov <oleg(a)redhat.com>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: <stable(a)vger.kernel.org> [4.20+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/mmap.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
--- a/mm/mmap.c~mm-close-race-between-munmap-and-expand_upwards-downwards
+++ a/mm/mmap.c
@@ -2620,7 +2620,7 @@ static void unmap_region(struct mm_struc
* Create a list of vma's touched by the unmap, removing them from the mm's
* vma list as we go..
*/
-static void
+static bool
detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
struct vm_area_struct *prev, unsigned long end)
{
@@ -2645,6 +2645,17 @@ detach_vmas_to_be_unmapped(struct mm_str
/* Kill the cache */
vmacache_invalidate(mm);
+
+ /*
+ * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
+ * VM_GROWSUP VMA. Such VMAs can change their size under
+ * down_read(mmap_lock) and collide with the VMA we are about to unmap.
+ */
+ if (vma && (vma->vm_flags & VM_GROWSDOWN))
+ return false;
+ if (prev && (prev->vm_flags & VM_GROWSUP))
+ return false;
+ return true;
}
/*
@@ -2825,7 +2836,8 @@ int __do_munmap(struct mm_struct *mm, un
}
/* Detach vmas from rbtree */
- detach_vmas_to_be_unmapped(mm, vma, prev, end);
+ if (!detach_vmas_to_be_unmapped(mm, vma, prev, end))
+ downgrade = false;
if (downgrade)
mmap_write_downgrade(mm);
_
Patches currently in -mm which might be from kirill.shutemov(a)linux.intel.com are
This is the start of the stable review cycle for the 4.9.148 release.
There are 22 patches in this series, all of which will be posted as
responses to this one. If anyone has any issues with these being
applied, please let me know.
Responses should be made by Sun Dec 30 11:31:00 UTC 2018.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v4.x/stable-review/patch-4.9.148-rc…
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-4.9.y
and the diffstat can be found below.
thanks,
greg k-h
-------------
Pseudo-Shortlog of commits:
Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Linux 4.9.148-rc1
Gustavo A. R. Silva <gustavo(a)embeddedor.com>
drm/ioctl: Fix Spectre v1 vulnerabilities
Ivan Delalande <colona(a)arista.com>
proc/sysctl: don't return ENOMEM on lookup when a table is unregistering
Sergey Senozhatsky <sergey.senozhatsky.work(a)gmail.com>
panic: avoid deadlocks in re-entrant console drivers
Richard Weinberger <richard(a)nod.at>
ubifs: Handle re-linking of inodes correctly while recovery
Sebastian Andrzej Siewior <bigeasy(a)linutronix.de>
x86/fpu: Disable bottom halves while loading FPU registers
Colin Ian King <colin.king(a)canonical.com>
x86/mtrr: Don't copy uninitialized gentry fields back to userspace
Dexuan Cui <decui(a)microsoft.com>
Drivers: hv: vmbus: Return -EINVAL for the sys files for unopened channels
Christophe Leroy <christophe.leroy(a)c-s.fr>
gpio: max7301: fix driver for use with CONFIG_VMAP_STACK
Russell King <rmk+kernel(a)armlinux.org.uk>
mmc: omap_hsmmc: fix DMA API warning
Ulf Hansson <ulf.hansson(a)linaro.org>
mmc: core: Use a minimum 1600ms timeout when enabling CACHE ctrl
Ulf Hansson <ulf.hansson(a)linaro.org>
mmc: core: Allow BKOPS and CACHE ctrl even if no HPI support
Ulf Hansson <ulf.hansson(a)linaro.org>
mmc: core: Reset HPI enabled state during re-init and in case of errors
Jörgen Storvist <jorgen.storvist(a)gmail.com>
USB: serial: option: add Telit LN940 series
Jörgen Storvist <jorgen.storvist(a)gmail.com>
USB: serial: option: add Fibocom NL668 series
Jörgen Storvist <jorgen.storvist(a)gmail.com>
USB: serial: option: add Simcom SIM7500/SIM7600 (MBIM mode)
Tore Anderson <tore(a)fud.no>
USB: serial: option: add HP lt4132
Jörgen Storvist <jorgen.storvist(a)gmail.com>
USB: serial: option: add GosunCn ZTE WeLink ME3630
Mathias Nyman <mathias.nyman(a)linux.intel.com>
xhci: Don't prevent USB2 bus suspend in state check intended for USB3 only
Hui Peng <benquike(a)gmail.com>
USB: hso: Fix OOB memory access in hso_probe/hso_get_config_data
Bart Van Assche <bart.vanassche(a)wdc.com>
ib_srpt: Fix a use-after-free in __srpt_close_all_ch()
Mikulas Patocka <mpatocka(a)redhat.com>
block: fix infinite loop if the device loses discard capability
Jens Axboe <axboe(a)kernel.dk>
block: break discard submissions into the user defined size
-------------
Diffstat:
Makefile | 4 ++--
arch/x86/kernel/cpu/mtrr/if.c | 2 ++
arch/x86/kernel/fpu/signal.c | 4 ++--
block/blk-lib.c | 22 ++++++++++++++++++---
drivers/gpio/gpio-max7301.c | 12 +++---------
drivers/gpu/drm/drm_ioctl.c | 10 ++++++++--
drivers/hv/vmbus_drv.c | 20 +++++++++++++++++++
drivers/infiniband/ulp/srpt/ib_srpt.c | 4 ++--
drivers/mmc/core/mmc.c | 24 ++++++++++++++---------
drivers/mmc/host/omap_hsmmc.c | 12 +++++++++++-
drivers/net/usb/hso.c | 18 +++++++++++++++--
drivers/usb/host/xhci-hub.c | 3 ++-
drivers/usb/serial/option.c | 16 ++++++++++++++-
fs/proc/proc_sysctl.c | 13 ++++++------
fs/ubifs/replay.c | 37 +++++++++++++++++++++++++++++++++++
kernel/panic.c | 6 +++++-
16 files changed, 165 insertions(+), 42 deletions(-)