The patch titled
Subject: mm: migrate: don't rely on PageMovable() of newpage after unlocking it
has been removed from the -mm tree. Its filename was
mm-migrate-dont-rely-on-pagemovable-of-newpage-after-unlocking-it.patch
This patch was dropped because it is obsolete
------------------------------------------------------
From: David Hildenbrand <david(a)redhat.com>
Subject: mm: migrate: don't rely on PageMovable() of newpage after unlocking it
While debugging some crashes related to virtio-balloon deflation that
happened under the old balloon migration code, I stumbled over a race that
still exists today.
What we experienced:
drivers/virtio/virtio_balloon.c:release_pages_balloon():
- WARNING: CPU: 13 PID: 6586 at lib/list_debug.c:59 __list_del_entry+0xa1/0xd0
- list_del corruption. prev->next should be ffffe253961090a0, but was dead000000000100
It turns out that after the page had been added to a local list when dequeuing, it would suddenly be moved to an LRU list before we could free it via the local list, corrupting both lists. So a page we own, and that is !LRU, was moved to an LRU list.
In __unmap_and_move(), we lock the old page and the newpage and perform the migration. In the case of virtio-balloon, the new page will become movable, while the old page will no longer be movable.
However, after unlocking newpage, there is nothing stopping the newpage from getting dequeued and freed by virtio-balloon. This will result in the newpage:
1. No longer having PageMovable()
2. Getting moved to the local list before finally being freed (using page->lru)
Back in the migration thread in __unmap_and_move(), after unlocking the newpage we would suddenly no longer see PageMovable(newpage) and would therefore call putback_lru_page(newpage), modifying page->lru even though that list is still in use by virtio-balloon.
To summarize, we have a race between migrating the newpage and checking
for PageMovable(newpage). Instead of checking PageMovable(newpage), we
can simply rely on is_lru of the original page.
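As a minimal sketch of the idea (an illustration, not the full function), __unmap_and_move() snapshots the LRU state while it still owns the locked old page, so the final putback decision no longer depends on the racy PageMovable(newpage):

static int __unmap_and_move(struct page *page, struct page *newpage,
			    int force, enum migrate_mode mode)
{
	int rc = -EAGAIN;
	/* Snapshot before any unlocking: a movable (non-LRU) old page
	 * implies a non-LRU newpage after a successful migration. */
	bool is_lru = !__PageMovable(page);

	/* ... lock both pages, move_to_new_page(), unlock newpage ... */

	if (rc == MIGRATEPAGE_SUCCESS) {
		if (unlikely(!is_lru))
			put_page(newpage);	/* driver handles putback */
		else
			putback_lru_page(newpage);
	}
	return rc;
}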
This looks like it was introduced by d6d86c0a7f8d ("mm/balloon_compaction: redesign ballooned pages management"), which was backported up to 3.12. The old compaction code used PageBalloon() via __is_movable_balloon_page() instead of PageMovable(), but with the same semantics.
Link: http://lkml.kernel.org/r/20190128160403.16657-1-david@redhat.com
Fixes: d6d86c0a7f8d ("mm/balloon_compaction: redesign ballooned pages management")
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Reported-by: Vratislav Bendel <vbendel(a)redhat.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Rafael Aquini <aquini(a)redhat.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: Jan Kara <jack(a)suse.cz>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Dominik Brodowski <linux(a)dominikbrodowski.net>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: Konstantin Khlebnikov <k.khlebnikov(a)samsung.com>
Cc: Minchan Kim <minchan(a)kernel.org>
Cc: <stable(a)vger.kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/migrate.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- a/mm/migrate.c~mm-migrate-dont-rely-on-pagemovable-of-newpage-after-unlocking-it
+++ a/mm/migrate.c
@@ -1135,10 +1135,12 @@ out:
* If migration is successful, decrease refcount of the newpage
* which will not free the page because new page owner increased
* refcounter. As well, if it is LRU page, add the page to LRU
- * list in here.
+ * list in here. Don't rely on PageMovable(newpage), as that could
+ * already have changed after unlocking newpage (e.g.
+ * virtio-balloon deflation).
*/
if (rc == MIGRATEPAGE_SUCCESS) {
- if (unlikely(__PageMovable(newpage)))
+ if (unlikely(!is_lru))
put_page(newpage);
else
putback_lru_page(newpage);
_
Patches currently in -mm which might be from david(a)redhat.com are
mm-balloon-update-comment-about-isolation-migration-compaction.patch
mm-convert-pg_balloon-to-pg_offline.patch
kexec-export-pg_offline-to-vmcoreinfo.patch
xen-balloon-mark-inflated-pages-pg_offline.patch
hv_balloon-mark-inflated-pages-pg_offline.patch
vmw_balloon-mark-inflated-pages-pg_offline.patch
vmw_balloon-mark-inflated-pages-pg_offline-v2.patch
pm-hibernate-use-pfn_to_online_page.patch
pm-hibernate-exclude-all-pageoffline-pages.patch
pm-hibernate-exclude-all-pageoffline-pages-v2.patch
Hi Greg,
Can you please revert this commit in 4.14?
commit e65cd9a20343ea90f576c24c38ee85ab6e7d5fec
Author: Tycho Andersen <tycho(a)tycho.ws>
Date: Tue Feb 20 19:47:47 2018 -0700
seccomp: add a selftest for get_metadata
[ Upstream commit d057dc4e35e16050befa3dda943876dab39cbf80 ]
Let's test that we get the flags correctly, and that we preserve the filter index across ptrace(PTRACE_SECCOMP_GET_METADATA).
PTRACE_SECCOMP_GET_METADATA was only added in 4.16
(26500475ac1b499d8636ff281311d633909f5d20)
And it's also breaking seccomp_bpf.c compilation for me:
seccomp_bpf.c: In function ‘get_metadata’:
seccomp_bpf.c:2878:26: error: storage size of ‘md’ isn’t known
struct seccomp_metadata md;
^~
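For context, that compile error is expected on 4.14: the test's local variable needs the UAPI struct that was only added alongside the ptrace request in 4.16 (as defined upstream in include/uapi/linux/ptrace.h):

struct seccomp_metadata {
	__u64 filter_off;	/* Input: which filter */
	__u64 flags;		/* Output: filter's flags */
};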
-Tommi
Hi Greg,
Can you please pick these two upstream patches for 4.14?
They fix broken perf unwinding for me.
commit 3d20c6246690219881786de10d2dda93f616d0ac
Author: Martin Vuille <jpmv27(a)aim.com>
Date: Sun Feb 11 16:24:20 2018 -0500
perf unwind: Unwind with libdw doesn't take symfs into account
commit 1fe627da30331024f453faef04d500079b901107
Author: Milian Wolff <milian.wolff(a)kdab.com>
Date: Mon Oct 29 15:16:44 2018 +0100
perf unwind: Take pgoff into account when reporting elf to libdwfl
-Tommi
From: "Gustavo A. R. Silva" <gustavo(a)embeddedor.com>
[ Upstream commit a37805098900a6e73a55b3a43b7d3bcd987bb3f4 ]
idx can be indirectly controlled by user-space, hence leading to a
potential exploitation of the Spectre variant 1 vulnerability.
This issue was detected with the help of Smatch:
drivers/gpu/drm/drm_bufs.c:1420 drm_legacy_freebufs() warn: potential
spectre issue 'dma->buflist' [r] (local cap)
Fix this by sanitizing idx before using it to index dma->buflist.
Notice that given that speculation windows are large, the policy is
to kill the speculation on the first load and not worry if it can be
completed with a dependent load/store [1].
[1] https://marc.info/?l=linux-kernel&m=152449131114778&w=2
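For readers unfamiliar with the helper, a minimal sketch of the linux/nospec.h pattern being applied here (mirroring the hunk below, with the bounds check assumed as in drm_legacy_freebufs()):

	if (idx < 0 || idx >= dma->buf_count)
		return -EINVAL;
	/* Under misspeculation, array_index_nospec() forces idx to 0
	 * instead of letting the out-of-bounds value reach the load. */
	idx = array_index_nospec(idx, dma->buf_count);
	buf = dma->buflist[idx];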
Signed-off-by: Gustavo A. R. Silva <gustavo(a)embeddedor.com>
Signed-off-by: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20181016095549.GA23586@embedd…
Signed-off-by: Sasha Levin <sashal(a)kernel.org>
---
drivers/gpu/drm/drm_bufs.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/drm_bufs.c b/drivers/gpu/drm/drm_bufs.c
index 7412acaf3cde..d7d10cabb9bb 100644
--- a/drivers/gpu/drm/drm_bufs.c
+++ b/drivers/gpu/drm/drm_bufs.c
@@ -36,6 +36,8 @@
#include <drm/drmP.h>
#include "drm_legacy.h"
+#include <linux/nospec.h>
+
static struct drm_map_list *drm_find_matching_map(struct drm_device *dev,
struct drm_local_map *map)
{
@@ -1417,6 +1419,7 @@ int drm_legacy_freebufs(struct drm_device *dev, void *data,
idx, dma->buf_count - 1);
return -EINVAL;
}
+ idx = array_index_nospec(idx, dma->buf_count);
buf = dma->buflist[idx];
if (buf->file_priv != file_priv) {
DRM_ERROR("Process %d freeing buffer not owned\n",
--
2.19.1
The patch titled
Subject: Revert "mm, memory_hotplug: initialize struct pages for the full memory section"
has been added to the -mm tree. Its filename is
revert-mm-memory_hotplug-initialize-struct-pages-for-the-full-memory-section.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/revert-mm-memory_hotplug-initializ…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/revert-mm-memory_hotplug-initializ…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Michal Hocko <mhocko(a)suse.com>
Subject: Revert "mm, memory_hotplug: initialize struct pages for the full memory section"
This reverts commit 2830bf6f05fb3e05b ("mm, memory_hotplug: initialize struct pages for the full memory section").
The underlying assumption that one sparse section belongs to a single NUMA node doesn't really hold. Robert Shteynfeld has reported a boot failure. The boot log was not captured, but his memory layout is as follows:
[ 0.286954] Early memory node ranges
[ 0.286955] node 1: [mem 0x0000000000001000-0x0000000000090fff]
[ 0.286955] node 1: [mem 0x0000000000100000-0x00000000dbdf8fff]
[ 0.286956] node 1: [mem 0x0000000100000000-0x0000001423ffffff]
[ 0.286956] node 0: [mem 0x0000001424000000-0x0000002023ffffff]
This means that node0 starts in the middle of a memory section which also belongs to node1. memmap_init_zone() tries to initialize the padding of a section even when it is outside of the given pfn range, because there are code paths (e.g. memory hotplug) which assume that the full memory section is always initialized. In this particular case, though, such a range is already initialized and most likely already managed by the page allocator. Scribbling over those pages corrupts the internal state and likely blows up when any of those pages gets used.
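To make the overlap concrete, a quick back-of-the-envelope check (assuming x86-64 defaults: SECTION_SIZE_BITS = 27, i.e. 32768 pages of 4K per section):

#define PAGES_PER_SECTION	32768UL	/* 2^27 / 2^12 on x86-64 */

unsigned long node0_start_pfn = 0x1424000000UL >> 12;	/* == 0x1424000 */

/* 0x1424000 % 32768 == 0x4000, so node0 begins part-way into a section
 * whose first 0x4000 pages belong to node1; padding node1's last zone
 * out to the section boundary re-initializes node0's pages, which may
 * already be managed by the page allocator. */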
Link: http://lkml.kernel.org/r/20190125181549.GE20411@dhcp22.suse.cz
Fixes: 2830bf6f05fb ("mm, memory_hotplug: initialize struct pages for the full memory section")
Signed-off-by: Michal Hocko <mhocko(a)suse.com>
Reported-by: Robert Shteynfeld <robert.shteynfeld(a)gmail.com>
Cc: Mikhail Zaslonko <zaslonko(a)linux.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer(a)de.ibm.com>
Cc: Mikhail Gavrilov <mikhail.v.gavrilov(a)gmail.com>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Cc: Alexander Duyck <alexander.h.duyck(a)linux.intel.com>
Cc: Pasha Tatashin <Pavel.Tatashin(a)microsoft.com>
Cc: Martin Schwidefsky <schwidefsky(a)de.ibm.com>
Cc: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/page_alloc.c | 12 ------------
1 file changed, 12 deletions(-)
--- a/mm/page_alloc.c~revert-mm-memory_hotplug-initialize-struct-pages-for-the-full-memory-section
+++ a/mm/page_alloc.c
@@ -5701,18 +5701,6 @@ void __meminit memmap_init_zone(unsigned
cond_resched();
}
}
-#ifdef CONFIG_SPARSEMEM
- /*
- * If the zone does not span the rest of the section then
- * we should at least initialize those pages. Otherwise we
- * could blow up on a poisoned page in some paths which depend
- * on full sections being initialized (e.g. memory hotplug).
- */
- while (end_pfn % PAGES_PER_SECTION) {
- __init_single_page(pfn_to_page(end_pfn), end_pfn, zone, nid);
- end_pfn++;
- }
-#endif
}
#ifdef CONFIG_ZONE_DEVICE
_
Patches currently in -mm which might be from mhocko(a)suse.com are
mm-memory_hotplug-is_mem_section_removable-do-not-pass-the-end-of-a-zone.patch
revert-mm-memory_hotplug-initialize-struct-pages-for-the-full-memory-section.patch
mm-oom-marks-all-killed-tasks-as-oom-victims.patch
memcg-do-not-report-racy-no-eligible-oom-tasks.patch
The patch titled
Subject: mm: migrate: make buffer_migrate_page_norefs() actually succeed
has been added to the -mm tree. Its filename is
mm-migrate-make-buffer_migrate_page_norefs-actually-succeed.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-migrate-make-buffer_migrate_pag…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-migrate-make-buffer_migrate_pag…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Jan Kara <jack(a)suse.cz>
Subject: mm: migrate: make buffer_migrate_page_norefs() actually succeed
Currently, buffer_migrate_page_norefs() is constantly failing because buffer_migrate_lock_buffers() grabs a reference on each buffer. In fact, there's no reason for buffer_migrate_lock_buffers() to grab any buffer references, as the page is locked for the whole operation and thus nobody can reclaim buffers from the page. So remove the grabbing of buffer references, which also makes buffer_migrate_page_norefs() succeed.
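A minimal sketch of the resulting sync-mode locking loop (mirroring the first hunk below): the page lock held across the whole migration keeps the buffers alive, so the walk over the page's circular buffer_head ring takes no extra references:

	struct buffer_head *bh = head;

	do {
		lock_buffer(bh);	/* may block; fine for sync modes */
		bh = bh->b_this_page;
	} while (bh != head);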
Link: http://lkml.kernel.org/r/20190116131217.7226-1-jack@suse.cz
Fixes: 89cb0888ca14 ("mm: migrate: provide buffer_migrate_page_norefs()")
Signed-off-by: Jan Kara <jack(a)suse.cz>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work(a)gmail.com>
Cc: Pavel Machek <pavel(a)ucw.cz>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: Vlastimil Babka <vbabka(a)suse.cz>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Michal Hocko <mhocko(a)kernel.org>
Cc: Zi Yan <zi.yan(a)cs.rutgers.edu>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/migrate.c | 5 -----
1 file changed, 5 deletions(-)
--- a/mm/migrate.c~mm-migrate-make-buffer_migrate_page_norefs-actually-succeed
+++ a/mm/migrate.c
@@ -709,7 +709,6 @@ static bool buffer_migrate_lock_buffers(
/* Simple case, sync compaction */
if (mode != MIGRATE_ASYNC) {
do {
- get_bh(bh);
lock_buffer(bh);
bh = bh->b_this_page;
@@ -720,18 +719,15 @@ static bool buffer_migrate_lock_buffers(
/* async case, we cannot block on lock_buffer so use trylock_buffer */
do {
- get_bh(bh);
if (!trylock_buffer(bh)) {
/*
* We failed to lock the buffer and cannot stall in
* async migration. Release the taken locks
*/
struct buffer_head *failed_bh = bh;
- put_bh(failed_bh);
bh = head;
while (bh != failed_bh) {
unlock_buffer(bh);
- put_bh(bh);
bh = bh->b_this_page;
}
return false;
@@ -818,7 +814,6 @@ unlock_buffers:
bh = head;
do {
unlock_buffer(bh);
- put_bh(bh);
bh = bh->b_this_page;
} while (bh != head);
_
Patches currently in -mm which might be from jack(a)suse.cz are
mm-migrate-make-buffer_migrate_page_norefs-actually-succeed.patch
The patch titled
Subject: mm: migrate: don't rely on PageMovable() of newpage after unlocking it
has been added to the -mm tree. Its filename is
mm-migrate-dont-rely-on-pagemovable-of-newpage-after-unlocking-it.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-migrate-dont-rely-on-pagemovabl…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-migrate-dont-rely-on-pagemovabl…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: David Hildenbrand <david(a)redhat.com>
Subject: mm: migrate: don't rely on PageMovable() of newpage after unlocking it
While debugging some crashes related to virtio-balloon deflation that
happened under the old balloon migration code, I stumbled over a race that
still exists today.
What we experienced:
drivers/virtio/virtio_balloon.c:release_pages_balloon():
- WARNING: CPU: 13 PID: 6586 at lib/list_debug.c:59 __list_del_entry+0xa1/0xd0
- list_del corruption. prev->next should be ffffe253961090a0, but was dead000000000100
It turns out that after the page had been added to a local list when dequeuing, it would suddenly be moved to an LRU list before we could free it via the local list, corrupting both lists. So a page we own, and that is !LRU, was moved to an LRU list.
In __unmap_and_move(), we lock the old page and the newpage and perform the migration. In the case of virtio-balloon, the new page will become movable, while the old page will no longer be movable.
However, after unlocking newpage, there is nothing stopping the newpage from getting dequeued and freed by virtio-balloon. This will result in the newpage:
1. No longer having PageMovable()
2. Getting moved to the local list before finally being freed (using page->lru)
Back in the migration thread in __unmap_and_move(), after unlocking the newpage we would suddenly no longer see PageMovable(newpage) and would therefore call putback_lru_page(newpage), modifying page->lru even though that list is still in use by virtio-balloon.
To summarize, we have a race between migrating the newpage and checking
for PageMovable(newpage). Instead of checking PageMovable(newpage), we
can simply rely on is_lru of the original page.
This looks like it was introduced by d6d86c0a7f8d ("mm/balloon_compaction: redesign ballooned pages management"), which was backported up to 3.12. The old compaction code used PageBalloon() via __is_movable_balloon_page() instead of PageMovable(), but with the same semantics.
Link: http://lkml.kernel.org/r/20190128160403.16657-1-david@redhat.com
Fixes: d6d86c0a7f8d ("mm/balloon_compaction: redesign ballooned pages management")
Signed-off-by: David Hildenbrand <david(a)redhat.com>
Reported-by: Vratislav Bendel <vbendel(a)redhat.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Acked-by: Rafael Aquini <aquini(a)redhat.com>
Cc: Mel Gorman <mgorman(a)techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov(a)linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi(a)ah.jp.nec.com>
Cc: Jan Kara <jack(a)suse.cz>
Cc: Andrea Arcangeli <aarcange(a)redhat.com>
Cc: Dominik Brodowski <linux(a)dominikbrodowski.net>
Cc: Matthew Wilcox <willy(a)infradead.org>
Cc: Konstantin Khlebnikov <k.khlebnikov(a)samsung.com>
Cc: Minchan Kim <minchan(a)kernel.org>
Cc: <stable(a)vger.kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/migrate.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- a/mm/migrate.c~mm-migrate-dont-rely-on-pagemovable-of-newpage-after-unlocking-it
+++ a/mm/migrate.c
@@ -1135,10 +1135,12 @@ out:
* If migration is successful, decrease refcount of the newpage
* which will not free the page because new page owner increased
* refcounter. As well, if it is LRU page, add the page to LRU
- * list in here.
+ * list in here. Don't rely on PageMovable(newpage), as that could
+ * already have changed after unlocking newpage (e.g.
+ * virtio-balloon deflation).
*/
if (rc == MIGRATEPAGE_SUCCESS) {
- if (unlikely(__PageMovable(newpage)))
+ if (unlikely(!is_lru))
put_page(newpage);
else
putback_lru_page(newpage);
_
Patches currently in -mm which might be from david(a)redhat.com are
mm-balloon-update-comment-about-isolation-migration-compaction.patch
mm-convert-pg_balloon-to-pg_offline.patch
kexec-export-pg_offline-to-vmcoreinfo.patch
xen-balloon-mark-inflated-pages-pg_offline.patch
hv_balloon-mark-inflated-pages-pg_offline.patch
vmw_balloon-mark-inflated-pages-pg_offline.patch
vmw_balloon-mark-inflated-pages-pg_offline-v2.patch
pm-hibernate-use-pfn_to_online_page.patch
pm-hibernate-exclude-all-pageoffline-pages.patch
pm-hibernate-exclude-all-pageoffline-pages-v2.patch
mm-migrate-dont-rely-on-pagemovable-of-newpage-after-unlocking-it.patch
The patch below does not apply to the 4.20-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From a1e1cb72d96491277ede8d257ce6b48a381dd336 Mon Sep 17 00:00:00 2001
From: Mike Snitzer <snitzer(a)redhat.com>
Date: Thu, 17 Jan 2019 10:48:01 -0500
Subject: [PATCH] dm: fix redundant IO accounting for bios that need splitting
The risk of redundant IO accounting was not taken into consideration
when commit 18a25da84354 ("dm: ensure bio submission follows a
depth-first tree walk") introduced IO splitting in terms of recursion
via generic_make_request().
Fix this by subtracting the split bio's payload from the IO stats that were already accounted for by start_io_acct() upon dm_make_request() entry. This repeated up-then-down oscillation of the IO accounting isn't ideal, but refactoring DM core's IO splitting to pre-split bios _before_ they are accounted turned out to be an excessive amount of change that will need a full development cycle to refine and verify.
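In code terms (identifiers taken from the hunk below), the compensation sits right where the remainder is re-queued:

	/* The remainder bio re-enters dm_make_request() via
	 * generic_make_request() and is accounted a second time by
	 * start_io_acct(), so subtract its sectors from the stats
	 * before resubmitting it. */
	part_stat_lock();
	__dm_part_stat_sub(&dm_disk(md)->part0,
			   sectors[op_stat_group(bio_op(bio))],
			   ci.sector_count);
	part_stat_unlock();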
Before this fix:
/dev/mapper/stripe_dev is a 4-way stripe using a 32k chunksize, so
bios are split on 32k boundaries.
# fio --name=16M --filename=/dev/mapper/stripe_dev --rw=write --bs=64k --size=16M \
--iodepth=1 --ioengine=libaio --direct=1 --refill_buffers
with debugging added:
[103898.310264] device-mapper: core: start_io_acct: dm-2 WRITE bio->bi_iter.bi_sector=0 len=128
[103898.318704] device-mapper: core: __split_and_process_bio: recursing for following split bio:
[103898.329136] device-mapper: core: start_io_acct: dm-2 WRITE bio->bi_iter.bi_sector=64 len=64
...
16M written yet 136M (278528 * 512b) accounted:
# cat /sys/block/dm-2/stat | awk '{ print $7 }'
278528
After this fix:
16M written and 16M (32768 * 512b) accounted:
# cat /sys/block/dm-2/stat | awk '{ print $7 }'
32768
Fixes: 18a25da84354 ("dm: ensure bio submission follows a depth-first tree walk")
Cc: stable(a)vger.kernel.org # 4.16+
Reported-by: Bryan Gurney <bgurney(a)redhat.com>
Reviewed-by: Ming Lei <ming.lei(a)redhat.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index fcb97b0a5743..fbadda68e23b 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1584,6 +1584,9 @@ static void init_clone_info(struct clone_info *ci, struct mapped_device *md,
ci->sector = bio->bi_iter.bi_sector;
}
+#define __dm_part_stat_sub(part, field, subnd) \
+ (part_stat_get(part, field) -= (subnd))
+
/*
* Entry point to split a bio into clones and submit them to the targets.
*/
@@ -1638,6 +1641,19 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
struct bio *b = bio_split(bio, bio_sectors(bio) - ci.sector_count,
GFP_NOIO, &md->queue->bio_split);
ci.io->orig_bio = b;
+
+ /*
+ * Adjust IO stats for each split, otherwise upon queue
+ * reentry there will be redundant IO accounting.
+ * NOTE: this is a stop-gap fix, a proper fix involves
+ * significant refactoring of DM core's bio splitting
+ * (by eliminating DM's splitting and just using bio_split)
+ */
+ part_stat_lock();
+ __dm_part_stat_sub(&dm_disk(md)->part0,
+ sectors[op_stat_group(bio_op(bio))], ci.sector_count);
+ part_stat_unlock();
+
bio_chain(b, bio);
ret = generic_make_request(bio);
break;