This is a note to let you know that I've just added the patch titled
dm bufio: fix integer overflow when limiting maximum cache size
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-bufio-fix-integer-overflow-when-limiting-maximum-cache-size.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 74d4108d9e681dbbe4a2940ed8fdff1f6868184c Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Wed, 15 Nov 2017 16:38:09 -0800
Subject: dm bufio: fix integer overflow when limiting maximum cache size
From: Eric Biggers <ebiggers(a)google.com>
commit 74d4108d9e681dbbe4a2940ed8fdff1f6868184c upstream.
The default max_cache_size_bytes for dm-bufio is meant to be the lesser
of 25% of the size of the vmalloc area and 2% of the size of lowmem.
However, on 32-bit systems the intermediate result in the expression
(VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100
overflows, causing the wrong result to be computed. For example, on a
32-bit system where the vmalloc area is 520093696 bytes, the result is
1174405 rather than the expected 130023424, which makes the maximum
cache size much too small (far less than 2% of lowmem). This causes
severe performance problems for dm-verity users on affected systems.
Fix this by using mult_frac() to correctly multiply by a percentage. Do
this for all places in dm-bufio that multiply by a percentage. Also
replace (VMALLOC_END - VMALLOC_START) with VMALLOC_TOTAL, which contrary
to the comment is now defined in include/linux/vmalloc.h.
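For reference, a minimal userspace sketch (not the kernel code; mult_frac_ul
mirrors the kernel's mult_frac() rearrangement, and the constant is the
vmalloc-area size quoted above) showing how divide-before-multiply avoids the
32-bit wrap:

#include <stdio.h>

#define DM_BUFIO_VMALLOC_PERCENT 25

/* Same rearrangement as mult_frac(): divide first, then scale, and fold
 * the remainder back in so no precision is lost. */
static unsigned long mult_frac_ul(unsigned long x, unsigned long numer,
				  unsigned long denom)
{
	unsigned long quot = x / denom;
	unsigned long rem  = x % denom;

	return quot * numer + rem * numer / denom;
}

int main(void)
{
	unsigned long vmalloc_size = 520093696UL;	/* example from above */

	/* Multiply-then-divide: wraps on 32-bit and yields 1174405. */
	printf("naive:     %lu\n",
	       vmalloc_size * DM_BUFIO_VMALLOC_PERCENT / 100);

	/* Divide-before-multiply: yields the expected 130023424. */
	printf("mult_frac: %lu\n",
	       mult_frac_ul(vmalloc_size, DM_BUFIO_VMALLOC_PERCENT, 100));
	return 0;
}

Built with -m32 this prints 1174405 and 130023424; on a 64-bit build both
lines print 130023424, since unsigned long is then wide enough.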
Depends-on: 9993bc635 ("sched/x86: Fix overflow in cyc2ns_offset")
Fixes: 95d402f057f2 ("dm: add bufio")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-bufio.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -928,7 +928,8 @@ static void __get_memory_limit(struct dm
buffers = c->minimum_buffers;
*limit_buffers = buffers;
- *threshold_buffers = buffers * DM_BUFIO_WRITEBACK_PERCENT / 100;
+ *threshold_buffers = mult_frac(buffers,
+ DM_BUFIO_WRITEBACK_PERCENT, 100);
}
/*
@@ -1829,19 +1830,15 @@ static int __init dm_bufio_init(void)
memset(&dm_bufio_caches, 0, sizeof dm_bufio_caches);
memset(&dm_bufio_cache_names, 0, sizeof dm_bufio_cache_names);
- mem = (__u64)((totalram_pages - totalhigh_pages) *
- DM_BUFIO_MEMORY_PERCENT / 100) << PAGE_SHIFT;
+ mem = (__u64)mult_frac(totalram_pages - totalhigh_pages,
+ DM_BUFIO_MEMORY_PERCENT, 100) << PAGE_SHIFT;
if (mem > ULONG_MAX)
mem = ULONG_MAX;
#ifdef CONFIG_MMU
- /*
- * Get the size of vmalloc space the same way as VMALLOC_TOTAL
- * in fs/proc/internal.h
- */
- if (mem > (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100)
- mem = (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100;
+ if (mem > mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100))
+ mem = mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100);
#endif
dm_bufio_default_cache_size = mem;
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.4/lib-mpi-call-cond_resched-from-mpi_powm-loop.patch
queue-4.4/dm-bufio-fix-integer-overflow-when-limiting-maximum-cache-size.patch
This is a note to let you know that I've just added the patch titled
ovl: Put upperdentry if ovl_check_origin() fails
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
ovl-put-upperdentry-if-ovl_check_origin-fails.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 5455f92b54e516995a9ca45bbf790d3629c27a93 Mon Sep 17 00:00:00 2001
From: Vivek Goyal <vgoyal(a)redhat.com>
Date: Wed, 1 Nov 2017 15:37:22 -0400
Subject: ovl: Put upperdentry if ovl_check_origin() fails
From: Vivek Goyal <vgoyal(a)redhat.com>
commit 5455f92b54e516995a9ca45bbf790d3629c27a93 upstream.
If ovl_check_origin() fails, we should put upperdentry. We have a reference
on it by now. So goto out_put_upper instead of out.
Fixes: a9d019573e88 ("ovl: lookup non-dir copy-up-origin by file handle")
Signed-off-by: Vivek Goyal <vgoyal(a)redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
fs/overlayfs/namei.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/fs/overlayfs/namei.c
+++ b/fs/overlayfs/namei.c
@@ -630,7 +630,7 @@ struct dentry *ovl_lookup(struct inode *
err = ovl_check_origin(upperdentry, roe->lowerstack,
roe->numlower, &stack, &ctr);
if (err)
- goto out;
+ goto out_put_upper;
}
if (d.redirect) {
Patches currently in stable-queue which might be from vgoyal(a)redhat.com are
queue-4.14/ovl-put-upperdentry-if-ovl_check_origin-fails.patch
This is a note to let you know that I've just added the patch titled
dm zoned: ignore last smaller runt zone
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-zoned-ignore-last-smaller-runt-zone.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 114e025968b5990ad0b57bf60697ea64ee206aac Mon Sep 17 00:00:00 2001
From: Damien Le Moal <damien.lemoal(a)wdc.com>
Date: Sat, 28 Oct 2017 16:39:34 +0900
Subject: dm zoned: ignore last smaller runt zone
From: Damien Le Moal <damien.lemoal(a)wdc.com>
commit 114e025968b5990ad0b57bf60697ea64ee206aac upstream.
The SCSI layer allows ZBC drives to have a smaller last runt zone. For
such a device, specifying the entire capacity for a dm-zoned target
table entry fails because the specified capacity is not aligned on the
device zone size indicated in the request queue structure of the
device.
Fix this problem by ignoring the last runt zone in the entry length
when setting up the dm-zoned target (ctr method) and when iterating table
entries of the target (iterate_devices method). This allows dm-zoned
users to still easily setup a target using the entire device capacity
(as mandated by dm-zoned) or the aligned capacity excluding the last
runt zone.
While at it, replace direct references to the device queue chunk_sectors
limit with calls to the accessor blk_queue_zone_sectors().
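As a hedged illustration of the acceptance check described above (standalone
C with illustrative names, assuming the zone size is a power of two as the
zoned block layer requires):

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* Accept either the full device capacity or the capacity rounded down to
 * a whole number of zones, i.e. with the last runt zone ignored. */
static bool dmz_len_acceptable(sector_t ti_begin, sector_t ti_len,
			       sector_t capacity, sector_t zone_sectors)
{
	sector_t aligned_capacity = capacity & ~(zone_sectors - 1);

	return ti_begin == 0 &&
	       (ti_len == capacity || ti_len == aligned_capacity);
}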
Reported-by: Peter Desnoyers <pjd(a)ccs.neu.edu>
Signed-off-by: Damien Le Moal <damien.lemoal(a)wdc.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-zoned-target.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
--- a/drivers/md/dm-zoned-target.c
+++ b/drivers/md/dm-zoned-target.c
@@ -660,6 +660,7 @@ static int dmz_get_zoned_device(struct d
struct dmz_target *dmz = ti->private;
struct request_queue *q;
struct dmz_dev *dev;
+ sector_t aligned_capacity;
int ret;
/* Get the target device */
@@ -685,15 +686,17 @@ static int dmz_get_zoned_device(struct d
goto err;
}
+ q = bdev_get_queue(dev->bdev);
dev->capacity = i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT;
- if (ti->begin || (ti->len != dev->capacity)) {
+ aligned_capacity = dev->capacity & ~(blk_queue_zone_sectors(q) - 1);
+ if (ti->begin ||
+ ((ti->len != dev->capacity) && (ti->len != aligned_capacity))) {
ti->error = "Partial mapping not supported";
ret = -EINVAL;
goto err;
}
- q = bdev_get_queue(dev->bdev);
- dev->zone_nr_sectors = q->limits.chunk_sectors;
+ dev->zone_nr_sectors = blk_queue_zone_sectors(q);
dev->zone_nr_sectors_shift = ilog2(dev->zone_nr_sectors);
dev->zone_nr_blocks = dmz_sect2blk(dev->zone_nr_sectors);
@@ -929,8 +932,10 @@ static int dmz_iterate_devices(struct dm
iterate_devices_callout_fn fn, void *data)
{
struct dmz_target *dmz = ti->private;
+ struct dmz_dev *dev = dmz->dev;
+ sector_t capacity = dev->capacity & ~(dev->zone_nr_sectors - 1);
- return fn(ti, dmz->ddev, 0, dmz->dev->capacity, data);
+ return fn(ti, dmz->ddev, 0, capacity, data);
}
static struct target_type dmz_type = {
Patches currently in stable-queue which might be from damien.lemoal(a)wdc.com are
queue-4.14/dm-zoned-ignore-last-smaller-runt-zone.patch
This is a note to let you know that I've just added the patch titled
dm mpath: remove annoying message of 'blk_get_request() returned -11'
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-mpath-remove-annoying-message-of-blk_get_request-returned-11.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 9dc112e2daf87b40607fd8d357e2d7de32290d45 Mon Sep 17 00:00:00 2001
From: Ming Lei <ming.lei(a)redhat.com>
Date: Sat, 30 Sep 2017 19:46:48 +0800
Subject: dm mpath: remove annoying message of 'blk_get_request() returned -11'
From: Ming Lei <ming.lei(a)redhat.com>
commit 9dc112e2daf87b40607fd8d357e2d7de32290d45 upstream.
It is very normal to see allocation failures, especially with blk-mq
request_queues, so it's unnecessary to report this error and annoy
people.
In practice this 'blk_get_request() returned -11' error gets logged
quite frequently when a blk-mq DM multipath device sees heavy IO.
This change is marked for stable@ because the annoying message in
question was included in stable@ commit 7083abbbf.
Fixes: 7083abbbf ("dm mpath: avoid that path removal can trigger an infinite loop")
Signed-off-by: Ming Lei <ming.lei(a)redhat.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-mpath.c | 2 --
1 file changed, 2 deletions(-)
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -499,8 +499,6 @@ static int multipath_clone_and_map(struc
if (IS_ERR(clone)) {
/* EBUSY, ENODEV or EWOULDBLOCK: requeue */
bool queue_dying = blk_queue_dying(q);
- DMERR_LIMIT("blk_get_request() returned %ld%s - requeuing",
- PTR_ERR(clone), queue_dying ? " (path offline)" : "");
if (queue_dying) {
atomic_inc(&m->pg_init_in_progress);
activate_or_offline_path(pgpath);
Patches currently in stable-queue which might be from ming.lei(a)redhat.com are
queue-4.14/dm-mpath-remove-annoying-message-of-blk_get_request-returned-11.patch
This is a note to let you know that I've just added the patch titled
dm integrity: allow unaligned bv_offset
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-integrity-allow-unaligned-bv_offset.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 95b1369a9638cfa322ad1c0cde8efbe524059884 Mon Sep 17 00:00:00 2001
From: Mikulas Patocka <mpatocka(a)redhat.com>
Date: Tue, 7 Nov 2017 10:40:40 -0500
Subject: dm integrity: allow unaligned bv_offset
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Mikulas Patocka <mpatocka(a)redhat.com>
commit 95b1369a9638cfa322ad1c0cde8efbe524059884 upstream.
When slub_debug is enabled kmalloc returns unaligned memory. XFS uses
this unaligned memory for its buffers (if an unaligned buffer crosses a
page, XFS frees it and allocates a full page instead - see the function
xfs_buf_allocate_memory).
dm-integrity checks if bv_offset is aligned on the page size, and this check
fails with slub_debug and XFS.
Fix this bug by removing the bv_offset check, leaving only the check for
bv_len.
Fixes: 7eada909bfd7 ("dm: add integrity target")
Reported-by: Bruno Prémont <bonbons(a)sysophe.eu>
Reviewed-by: Milan Broz <gmazyland(a)gmail.com>
Signed-off-by: Mikulas Patocka <mpatocka(a)redhat.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-integrity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -1376,7 +1376,7 @@ static int dm_integrity_map(struct dm_ta
struct bvec_iter iter;
struct bio_vec bv;
bio_for_each_segment(bv, bio, iter) {
- if (unlikely((bv.bv_offset | bv.bv_len) & ((ic->sectors_per_block << SECTOR_SHIFT) - 1))) {
+ if (unlikely(bv.bv_len & ((ic->sectors_per_block << SECTOR_SHIFT) - 1))) {
DMERR("Bio vector (%u,%u) is not aligned on %u-sector boundary",
bv.bv_offset, bv.bv_len, ic->sectors_per_block);
return DM_MAPIO_KILL;
Patches currently in stable-queue which might be from mpatocka(a)redhat.com are
queue-4.14/dm-allocate-struct-mapped_device-with-kvzalloc.patch
queue-4.14/dm-integrity-allow-unaligned-bv_offset.patch
queue-4.14/dm-crypt-allow-unaligned-bv_offset.patch
This is a note to let you know that I've just added the patch titled
dm crypt: allow unaligned bv_offset
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-crypt-allow-unaligned-bv_offset.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 0440d5c0ca9744b92a07aeb6df0a9a75db6f4280 Mon Sep 17 00:00:00 2001
From: Mikulas Patocka <mpatocka(a)redhat.com>
Date: Tue, 7 Nov 2017 10:35:57 -0500
Subject: dm crypt: allow unaligned bv_offset
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: Mikulas Patocka <mpatocka(a)redhat.com>
commit 0440d5c0ca9744b92a07aeb6df0a9a75db6f4280 upstream.
When slub_debug is enabled kmalloc returns unaligned memory. XFS uses
this unaligned memory for its buffers (if an unaligned buffer crosses a
page, XFS frees it and allocates a full page instead - see the function
xfs_buf_allocate_memory).
dm-crypt checks if bv_offset is aligned on page size and these checks
fail with slub_debug and XFS.
Fix this bug by removing the bv_offset checks. Switch to checking if
bv_len is aligned instead of bv_offset (this check should be sufficient
to prevent overruns if a bio with too small bv_len is received).
Fixes: 8f0009a22517 ("dm crypt: optionally support larger encryption sector size")
Reported-by: Bruno Prémont <bonbons(a)sysophe.eu>
Tested-by: Bruno Prémont <bonbons(a)sysophe.eu>
Signed-off-by: Mikulas Patocka <mpatocka(a)redhat.com>
Reviewed-by: Milan Broz <gmazyland(a)gmail.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-crypt.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1075,7 +1075,7 @@ static int crypt_convert_block_aead(stru
BUG_ON(cc->integrity_iv_size && cc->integrity_iv_size != cc->iv_size);
/* Reject unexpected unaligned bio. */
- if (unlikely(bv_in.bv_offset & (cc->sector_size - 1)))
+ if (unlikely(bv_in.bv_len & (cc->sector_size - 1)))
return -EIO;
dmreq = dmreq_of_req(cc, req);
@@ -1168,7 +1168,7 @@ static int crypt_convert_block_skcipher(
int r = 0;
/* Reject unexpected unaligned bio. */
- if (unlikely(bv_in.bv_offset & (cc->sector_size - 1)))
+ if (unlikely(bv_in.bv_len & (cc->sector_size - 1)))
return -EIO;
dmreq = dmreq_of_req(cc, req);
Patches currently in stable-queue which might be from mpatocka(a)redhat.com are
queue-4.14/dm-allocate-struct-mapped_device-with-kvzalloc.patch
queue-4.14/dm-integrity-allow-unaligned-bv_offset.patch
queue-4.14/dm-crypt-allow-unaligned-bv_offset.patch
This is a note to let you know that I've just added the patch titled
dm cache: fix race condition in the writeback mode overwrite_bio optimisation
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-cache-fix-race-condition-in-the-writeback-mode-overwrite_bio-optimisation.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From d1260e2a3f85f4c1010510a15f89597001318b1b Mon Sep 17 00:00:00 2001
From: Joe Thornber <ejt(a)redhat.com>
Date: Fri, 10 Nov 2017 07:53:31 -0500
Subject: dm cache: fix race condition in the writeback mode overwrite_bio optimisation
From: Joe Thornber <ejt(a)redhat.com>
commit d1260e2a3f85f4c1010510a15f89597001318b1b upstream.
When a DM cache in writeback mode moves data between the slow and fast
device it can often avoid a copy if the triggering bio either:
i) covers the whole block (no point copying if we're about to overwrite it)
ii) the migration is a promotion and the origin block is currently discarded
Prior to this fix there was a race with case (ii). The discard status
was checked with a shared lock held (rather than exclusive). This meant
another bio could run in parallel and write data to the origin, removing
the discard state. After the promotion the parallel write would have
been lost.
With this fix the discard status is re-checked once the exclusive lock
has been acquired. If the block is no longer discarded it falls back to
the slower full copy path.
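The locking rule, as a hedged, generic pthreads illustration rather than the
dm-cache code itself: a status observed under a shared lock may be stale by
the time the exclusive lock is taken, so it must be re-checked and the slow
path used if it no longer holds.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static bool discarded = true;		/* stands in for the discard status */

int main(void)
{
	bool fast_path;

	/* First check, shared lock only (the pre-fix behaviour). */
	pthread_rwlock_rdlock(&lock);
	fast_path = discarded;
	pthread_rwlock_unlock(&lock);

	/* Another thread may clear 'discarded' at this point. */

	if (fast_path) {
		/* Re-check under the exclusive lock before optimising. */
		pthread_rwlock_wrlock(&lock);
		if (!discarded)
			fast_path = false;	/* raced: use the full copy */
		pthread_rwlock_unlock(&lock);
	}

	printf("%s path\n", fast_path ? "overwrite" : "full copy");
	return 0;
}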
Fixes: b29d4986d ("dm cache: significant rework to leverage dm-bio-prison-v2")
Signed-off-by: Joe Thornber <ejt(a)redhat.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-cache-target.c | 86 ++++++++++++++++++++++++++-----------------
1 file changed, 53 insertions(+), 33 deletions(-)
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -1201,6 +1201,18 @@ static void background_work_end(struct c
/*----------------------------------------------------------------*/
+static bool bio_writes_complete_block(struct cache *cache, struct bio *bio)
+{
+ return (bio_data_dir(bio) == WRITE) &&
+ (bio->bi_iter.bi_size == (cache->sectors_per_block << SECTOR_SHIFT));
+}
+
+static bool optimisable_bio(struct cache *cache, struct bio *bio, dm_oblock_t block)
+{
+ return writeback_mode(&cache->features) &&
+ (is_discarded_oblock(cache, block) || bio_writes_complete_block(cache, bio));
+}
+
static void quiesce(struct dm_cache_migration *mg,
void (*continuation)(struct work_struct *))
{
@@ -1474,13 +1486,51 @@ static void mg_upgrade_lock(struct work_
}
}
+static void mg_full_copy(struct work_struct *ws)
+{
+ struct dm_cache_migration *mg = ws_to_mg(ws);
+ struct cache *cache = mg->cache;
+ struct policy_work *op = mg->op;
+ bool is_policy_promote = (op->op == POLICY_PROMOTE);
+
+ if ((!is_policy_promote && !is_dirty(cache, op->cblock)) ||
+ is_discarded_oblock(cache, op->oblock)) {
+ mg_upgrade_lock(ws);
+ return;
+ }
+
+ init_continuation(&mg->k, mg_upgrade_lock);
+
+ if (copy(mg, is_policy_promote)) {
+ DMERR_LIMIT("%s: migration copy failed", cache_device_name(cache));
+ mg->k.input = BLK_STS_IOERR;
+ mg_complete(mg, false);
+ }
+}
+
static void mg_copy(struct work_struct *ws)
{
- int r;
struct dm_cache_migration *mg = ws_to_mg(ws);
if (mg->overwrite_bio) {
/*
+ * No exclusive lock was held when we last checked if the bio
+ * was optimisable. So we have to check again in case things
+ * have changed (eg, the block may no longer be discarded).
+ */
+ if (!optimisable_bio(mg->cache, mg->overwrite_bio, mg->op->oblock)) {
+ /*
+ * Fallback to a real full copy after doing some tidying up.
+ */
+ bool rb = bio_detain_shared(mg->cache, mg->op->oblock, mg->overwrite_bio);
+ BUG_ON(rb); /* An exclussive lock must _not_ be held for this block */
+ mg->overwrite_bio = NULL;
+ inc_io_migrations(mg->cache);
+ mg_full_copy(ws);
+ return;
+ }
+
+ /*
* It's safe to do this here, even though it's new data
* because all IO has been locked out of the block.
*
@@ -1489,26 +1539,8 @@ static void mg_copy(struct work_struct *
*/
overwrite(mg, mg_update_metadata_after_copy);
- } else {
- struct cache *cache = mg->cache;
- struct policy_work *op = mg->op;
- bool is_policy_promote = (op->op == POLICY_PROMOTE);
-
- if ((!is_policy_promote && !is_dirty(cache, op->cblock)) ||
- is_discarded_oblock(cache, op->oblock)) {
- mg_upgrade_lock(ws);
- return;
- }
-
- init_continuation(&mg->k, mg_upgrade_lock);
-
- r = copy(mg, is_policy_promote);
- if (r) {
- DMERR_LIMIT("%s: migration copy failed", cache_device_name(cache));
- mg->k.input = BLK_STS_IOERR;
- mg_complete(mg, false);
- }
- }
+ } else
+ mg_full_copy(ws);
}
static int mg_lock_writes(struct dm_cache_migration *mg)
@@ -1748,18 +1780,6 @@ static void inc_miss_counter(struct cach
/*----------------------------------------------------------------*/
-static bool bio_writes_complete_block(struct cache *cache, struct bio *bio)
-{
- return (bio_data_dir(bio) == WRITE) &&
- (bio->bi_iter.bi_size == (cache->sectors_per_block << SECTOR_SHIFT));
-}
-
-static bool optimisable_bio(struct cache *cache, struct bio *bio, dm_oblock_t block)
-{
- return writeback_mode(&cache->features) &&
- (is_discarded_oblock(cache, block) || bio_writes_complete_block(cache, bio));
-}
-
static int map_bio(struct cache *cache, struct bio *bio, dm_oblock_t block,
bool *commit_needed)
{
Patches currently in stable-queue which might be from ejt(a)redhat.com are
queue-4.14/dm-cache-fix-race-condition-in-the-writeback-mode-overwrite_bio-optimisation.patch
This is a note to let you know that I've just added the patch titled
dm bufio: fix integer overflow when limiting maximum cache size
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-bufio-fix-integer-overflow-when-limiting-maximum-cache-size.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 74d4108d9e681dbbe4a2940ed8fdff1f6868184c Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Wed, 15 Nov 2017 16:38:09 -0800
Subject: dm bufio: fix integer overflow when limiting maximum cache size
From: Eric Biggers <ebiggers(a)google.com>
commit 74d4108d9e681dbbe4a2940ed8fdff1f6868184c upstream.
The default max_cache_size_bytes for dm-bufio is meant to be the lesser
of 25% of the size of the vmalloc area and 2% of the size of lowmem.
However, on 32-bit systems the intermediate result in the expression
(VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100
overflows, causing the wrong result to be computed. For example, on a
32-bit system where the vmalloc area is 520093696 bytes, the result is
1174405 rather than the expected 130023424, which makes the maximum
cache size much too small (far less than 2% of lowmem). This causes
severe performance problems for dm-verity users on affected systems.
Fix this by using mult_frac() to correctly multiply by a percentage. Do
this for all places in dm-bufio that multiply by a percentage. Also
replace (VMALLOC_END - VMALLOC_START) with VMALLOC_TOTAL, which contrary
to the comment is now defined in include/linux/vmalloc.h.
Depends-on: 9993bc635 ("sched/x86: Fix overflow in cyc2ns_offset")
Fixes: 95d402f057f2 ("dm: add bufio")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-bufio.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -974,7 +974,8 @@ static void __get_memory_limit(struct dm
buffers = c->minimum_buffers;
*limit_buffers = buffers;
- *threshold_buffers = buffers * DM_BUFIO_WRITEBACK_PERCENT / 100;
+ *threshold_buffers = mult_frac(buffers,
+ DM_BUFIO_WRITEBACK_PERCENT, 100);
}
/*
@@ -1910,19 +1911,15 @@ static int __init dm_bufio_init(void)
memset(&dm_bufio_caches, 0, sizeof dm_bufio_caches);
memset(&dm_bufio_cache_names, 0, sizeof dm_bufio_cache_names);
- mem = (__u64)((totalram_pages - totalhigh_pages) *
- DM_BUFIO_MEMORY_PERCENT / 100) << PAGE_SHIFT;
+ mem = (__u64)mult_frac(totalram_pages - totalhigh_pages,
+ DM_BUFIO_MEMORY_PERCENT, 100) << PAGE_SHIFT;
if (mem > ULONG_MAX)
mem = ULONG_MAX;
#ifdef CONFIG_MMU
- /*
- * Get the size of vmalloc space the same way as VMALLOC_TOTAL
- * in fs/proc/internal.h
- */
- if (mem > (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100)
- mem = (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100;
+ if (mem > mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100))
+ mem = mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100);
#endif
dm_bufio_default_cache_size = mem;
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-4.14/lib-mpi-call-cond_resched-from-mpi_powm-loop.patch
queue-4.14/dm-bufio-fix-integer-overflow-when-limiting-maximum-cache-size.patch
This is a note to let you know that I've just added the patch titled
dm: allocate struct mapped_device with kvzalloc
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-allocate-struct-mapped_device-with-kvzalloc.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 856eb0916d181da6d043cc33e03f54d5c5bbe54a Mon Sep 17 00:00:00 2001
From: Mikulas Patocka <mpatocka(a)redhat.com>
Date: Tue, 31 Oct 2017 19:33:02 -0400
Subject: dm: allocate struct mapped_device with kvzalloc
From: Mikulas Patocka <mpatocka(a)redhat.com>
commit 856eb0916d181da6d043cc33e03f54d5c5bbe54a upstream.
The structure srcu_struct can be very big; its size is proportional to the
value of CONFIG_NR_CPUS. The Fedora kernel has CONFIG_NR_CPUS set to 8192, so
the field io_barrier in the struct mapped_device occupies 84kB in the
debugging kernel and 50kB in the non-debugging kernel. The large size may
result in failure of the function kzalloc_node.
In order to avoid the allocation failure, we use the function kvzalloc_node,
which falls back to vmalloc if a large contiguous chunk of memory is not
available. This patch also moves the field
io_barrier to the last position of struct mapped_device - the reason is
that on many processor architectures, short memory offsets result in
smaller code than long memory offsets - on x86-64 it reduces code size by
320 bytes.
Note to stable kernel maintainers - the kernels 4.11 and older don't have
the function kvzalloc_node, you can use the function vzalloc_node instead.
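A hedged sketch of the allocation pattern being adopted, as a trivial
standalone module rather than the dm code (the struct and function names are
made up for the example):

#include <linux/module.h>
#include <linux/mm.h>
#include <linux/numa.h>
#include <linux/errno.h>

struct big_state {
	char payload[128 * 1024];	/* large enough that kmalloc may fail */
};

static struct big_state *state;

static int __init big_state_init(void)
{
	/* Falls back to vmalloc if a contiguous chunk is unavailable. */
	state = kvzalloc_node(sizeof(*state), GFP_KERNEL, NUMA_NO_NODE);
	if (!state)
		return -ENOMEM;
	return 0;
}

static void __exit big_state_exit(void)
{
	kvfree(state);			/* correct for either allocation path */
}

module_init(big_state_init);
module_exit(big_state_exit);
MODULE_LICENSE("GPL");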
Signed-off-by: Mikulas Patocka <mpatocka(a)redhat.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-core.h | 3 ++-
drivers/md/dm.c | 6 +++---
2 files changed, 5 insertions(+), 4 deletions(-)
--- a/drivers/md/dm-core.h
+++ b/drivers/md/dm-core.h
@@ -29,7 +29,6 @@ struct dm_kobject_holder {
* DM targets must _not_ deference a mapped_device to directly access its members!
*/
struct mapped_device {
- struct srcu_struct io_barrier;
struct mutex suspend_lock;
/*
@@ -127,6 +126,8 @@ struct mapped_device {
struct blk_mq_tag_set *tag_set;
bool use_blk_mq:1;
bool init_tio_pdu:1;
+
+ struct srcu_struct io_barrier;
};
void dm_init_md_queue(struct mapped_device *md);
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1695,7 +1695,7 @@ static struct mapped_device *alloc_dev(i
struct mapped_device *md;
void *old_md;
- md = kzalloc_node(sizeof(*md), GFP_KERNEL, numa_node_id);
+ md = kvzalloc_node(sizeof(*md), GFP_KERNEL, numa_node_id);
if (!md) {
DMWARN("unable to allocate device, out of memory.");
return NULL;
@@ -1795,7 +1795,7 @@ bad_io_barrier:
bad_minor:
module_put(THIS_MODULE);
bad_module_get:
- kfree(md);
+ kvfree(md);
return NULL;
}
@@ -1814,7 +1814,7 @@ static void free_dev(struct mapped_devic
free_minor(minor);
module_put(THIS_MODULE);
- kfree(md);
+ kvfree(md);
}
static void __bind_mempools(struct mapped_device *md, struct dm_table *t)
Patches currently in stable-queue which might be from mpatocka(a)redhat.com are
queue-4.14/dm-allocate-struct-mapped_device-with-kvzalloc.patch
queue-4.14/dm-integrity-allow-unaligned-bv_offset.patch
queue-4.14/dm-crypt-allow-unaligned-bv_offset.patch
This is a note to let you know that I've just added the patch titled
dm bufio: fix integer overflow when limiting maximum cache size
to the 3.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
dm-bufio-fix-integer-overflow-when-limiting-maximum-cache-size.patch
and it can be found in the queue-3.18 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 74d4108d9e681dbbe4a2940ed8fdff1f6868184c Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers(a)google.com>
Date: Wed, 15 Nov 2017 16:38:09 -0800
Subject: dm bufio: fix integer overflow when limiting maximum cache size
From: Eric Biggers <ebiggers(a)google.com>
commit 74d4108d9e681dbbe4a2940ed8fdff1f6868184c upstream.
The default max_cache_size_bytes for dm-bufio is meant to be the lesser
of 25% of the size of the vmalloc area and 2% of the size of lowmem.
However, on 32-bit systems the intermediate result in the expression
(VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100
overflows, causing the wrong result to be computed. For example, on a
32-bit system where the vmalloc area is 520093696 bytes, the result is
1174405 rather than the expected 130023424, which makes the maximum
cache size much too small (far less than 2% of lowmem). This causes
severe performance problems for dm-verity users on affected systems.
Fix this by using mult_frac() to correctly multiply by a percentage. Do
this for all places in dm-bufio that multiply by a percentage. Also
replace (VMALLOC_END - VMALLOC_START) with VMALLOC_TOTAL, which contrary
to the comment is now defined in include/linux/vmalloc.h.
Depends-on: 9993bc635 ("sched/x86: Fix overflow in cyc2ns_offset")
Fixes: 95d402f057f2 ("dm: add bufio")
Signed-off-by: Eric Biggers <ebiggers(a)google.com>
Signed-off-by: Mike Snitzer <snitzer(a)redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/md/dm-bufio.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -876,7 +876,8 @@ static void __get_memory_limit(struct dm
buffers = c->minimum_buffers;
*limit_buffers = buffers;
- *threshold_buffers = buffers * DM_BUFIO_WRITEBACK_PERCENT / 100;
+ *threshold_buffers = mult_frac(buffers,
+ DM_BUFIO_WRITEBACK_PERCENT, 100);
}
/*
@@ -1764,19 +1765,15 @@ static int __init dm_bufio_init(void)
memset(&dm_bufio_caches, 0, sizeof dm_bufio_caches);
memset(&dm_bufio_cache_names, 0, sizeof dm_bufio_cache_names);
- mem = (__u64)((totalram_pages - totalhigh_pages) *
- DM_BUFIO_MEMORY_PERCENT / 100) << PAGE_SHIFT;
+ mem = (__u64)mult_frac(totalram_pages - totalhigh_pages,
+ DM_BUFIO_MEMORY_PERCENT, 100) << PAGE_SHIFT;
if (mem > ULONG_MAX)
mem = ULONG_MAX;
#ifdef CONFIG_MMU
- /*
- * Get the size of vmalloc space the same way as VMALLOC_TOTAL
- * in fs/proc/internal.h
- */
- if (mem > (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100)
- mem = (VMALLOC_END - VMALLOC_START) * DM_BUFIO_VMALLOC_PERCENT / 100;
+ if (mem > mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100))
+ mem = mult_frac(VMALLOC_TOTAL, DM_BUFIO_VMALLOC_PERCENT, 100);
#endif
dm_bufio_default_cache_size = mem;
Patches currently in stable-queue which might be from ebiggers(a)google.com are
queue-3.18/lib-mpi-call-cond_resched-from-mpi_powm-loop.patch
queue-3.18/dm-bufio-fix-integer-overflow-when-limiting-maximum-cache-size.patch
This is a note to let you know that I've just added the patch titled
PCI: Set Cavium ACS capability quirk flags to assert RR/CR/SV/UF
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
pci-set-cavium-acs-capability-quirk-flags-to-assert-rr-cr-sv-uf.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 7f342678634f16795892677204366e835e450dda Mon Sep 17 00:00:00 2001
From: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
Date: Tue, 17 Oct 2017 05:47:38 -0700
Subject: PCI: Set Cavium ACS capability quirk flags to assert RR/CR/SV/UF
From: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
commit 7f342678634f16795892677204366e835e450dda upstream.
The Cavium ThunderX (CN8XXX) family of PCIe Root Ports does not advertise
an ACS capability. However, the RTL internally implements similar
protection as if ACS had Request Redirection, Completion Redirection,
Source Validation, and Upstream Forwarding features enabled.
Change Cavium ACS capabilities quirk flags accordingly.
Fixes: b404bcfbf035 ("PCI: Add ACS quirk for all Cavium devices")
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
[bhelgaas: tidy changelog, comment, stable tag]
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/pci/quirks.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -4088,12 +4088,14 @@ static int pci_quirk_amd_sb_acs(struct p
static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags)
{
/*
- * Cavium devices matching this quirk do not perform peer-to-peer
- * with other functions, allowing masking out these bits as if they
- * were unimplemented in the ACS capability.
+ * Cavium root ports don't advertise an ACS capability. However,
+ * the RTL internally implements similar protection as if ACS had
+ * Request Redirection, Completion Redirection, Source Validation,
+ * and Upstream Forwarding features enabled. Assert that the
+ * hardware implements and enables equivalent ACS functionality for
+ * these flags.
*/
- acs_flags &= ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR |
- PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT);
+ acs_flags &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_SV | PCI_ACS_UF);
return acs_flags ? 0 : 1;
}
Patches currently in stable-queue which might be from Vadim.Lomovtsev(a)cavium.com are
queue-4.9/pci-set-cavium-acs-capability-quirk-flags-to-assert-rr-cr-sv-uf.patch
This is a note to let you know that I've just added the patch titled
ALSA: hda: Add Raven PCI ID
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
alsa-hda-add-raven-pci-id.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 9ceace3c9c18c67676e75141032a65a8e01f9a7a Mon Sep 17 00:00:00 2001
From: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
Date: Thu, 23 Nov 2017 20:07:00 +0530
Subject: ALSA: hda: Add Raven PCI ID
From: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
commit 9ceace3c9c18c67676e75141032a65a8e01f9a7a upstream.
This commit adds the PCI ID for the Raven platform.
Signed-off-by: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
sound/pci/hda/hda_intel.c | 3 +++
1 file changed, 3 insertions(+)
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -2305,6 +2305,9 @@ static const struct pci_device_id azx_id
/* AMD Hudson */
{ PCI_DEVICE(0x1022, 0x780d),
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ /* AMD Raven */
+ { PCI_DEVICE(0x1022, 0x15e3),
+ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
/* ATI HDMI */
{ PCI_DEVICE(0x1002, 0x0002),
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
Patches currently in stable-queue which might be from Vijendar.Mukunda(a)amd.com are
queue-4.9/alsa-hda-add-raven-pci-id.patch
This is a note to let you know that I've just added the patch titled
ALSA: hda: Add Raven PCI ID
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
alsa-hda-add-raven-pci-id.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 9ceace3c9c18c67676e75141032a65a8e01f9a7a Mon Sep 17 00:00:00 2001
From: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
Date: Thu, 23 Nov 2017 20:07:00 +0530
Subject: ALSA: hda: Add Raven PCI ID
From: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
commit 9ceace3c9c18c67676e75141032a65a8e01f9a7a upstream.
This commit adds the PCI ID for the Raven platform.
Signed-off-by: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
sound/pci/hda/hda_intel.c | 3 +++
1 file changed, 3 insertions(+)
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -2316,6 +2316,9 @@ static const struct pci_device_id azx_id
/* AMD Hudson */
{ PCI_DEVICE(0x1022, 0x780d),
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ /* AMD Raven */
+ { PCI_DEVICE(0x1022, 0x15e3),
+ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
/* ATI HDMI */
{ PCI_DEVICE(0x1002, 0x0002),
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
Patches currently in stable-queue which might be from Vijendar.Mukunda(a)amd.com are
queue-4.4/alsa-hda-add-raven-pci-id.patch
This is a note to let you know that I've just added the patch titled
PM / OPP: Add missing of_node_put(np)
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
pm-opp-add-missing-of_node_put-np.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 7978db344719dab1e56d05e6fc04aaaddcde0a5e Mon Sep 17 00:00:00 2001
From: Tobias Jordan <Tobias.Jordan(a)elektrobit.com>
Date: Wed, 4 Oct 2017 11:35:03 +0530
Subject: PM / OPP: Add missing of_node_put(np)
From: Tobias Jordan <Tobias.Jordan(a)elektrobit.com>
commit 7978db344719dab1e56d05e6fc04aaaddcde0a5e upstream.
The for_each_available_child_of_node() loop in _of_add_opp_table_v2()
doesn't drop the reference to "np" on errors. Fix that.
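A hedged sketch of the reference-counting rule behind the fix
(parse_opp_children() and parse_one_child() are hypothetical stand-ins, not
the kernel's functions): for_each_available_child_of_node() holds a reference
on the current child, so leaving the loop early must drop it explicitly.

#include <linux/of.h>

/* Hypothetical per-child parser standing in for the real OPP parsing. */
static int parse_one_child(struct device_node *child)
{
	return 0;
}

static int parse_opp_children(struct device_node *parent)
{
	struct device_node *np;
	int ret;

	for_each_available_child_of_node(parent, np) {
		ret = parse_one_child(np);
		if (ret) {
			of_node_put(np);	/* drop the loop's reference */
			return ret;
		}
	}
	return 0;
}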
Fixes: 274659029c9d (PM / OPP: Add support to parse "operating-points-v2" bindings)
Signed-off-by: Tobias Jordan <Tobias.Jordan(a)elektrobit.com>
[ VK: Improved commit log. ]
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
Reviewed-by: Stephen Boyd <sboyd(a)codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/base/power/opp/of.c | 1 +
1 file changed, 1 insertion(+)
--- a/drivers/base/power/opp/of.c
+++ b/drivers/base/power/opp/of.c
@@ -397,6 +397,7 @@ static int _of_add_opp_table_v2(struct d
dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
ret);
_dev_pm_opp_remove_table(opp_table, dev, false);
+ of_node_put(np);
goto put_opp_table;
}
}
Patches currently in stable-queue which might be from Tobias.Jordan(a)elektrobit.com are
queue-4.14/pm-opp-add-missing-of_node_put-np.patch
This is a note to let you know that I've just added the patch titled
PCI: Set Cavium ACS capability quirk flags to assert RR/CR/SV/UF
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
pci-set-cavium-acs-capability-quirk-flags-to-assert-rr-cr-sv-uf.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 7f342678634f16795892677204366e835e450dda Mon Sep 17 00:00:00 2001
From: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
Date: Tue, 17 Oct 2017 05:47:38 -0700
Subject: PCI: Set Cavium ACS capability quirk flags to assert RR/CR/SV/UF
From: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
commit 7f342678634f16795892677204366e835e450dda upstream.
The Cavium ThunderX (CN8XXX) family of PCIe Root Ports does not advertise
an ACS capability. However, the RTL internally implements similar
protection as if ACS had Request Redirection, Completion Redirection,
Source Validation, and Upstream Forwarding features enabled.
Change Cavium ACS capabilities quirk flags accordingly.
Fixes: b404bcfbf035 ("PCI: Add ACS quirk for all Cavium devices")
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
[bhelgaas: tidy changelog, comment, stable tag]
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/pci/quirks.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -4215,12 +4215,14 @@ static int pci_quirk_amd_sb_acs(struct p
static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags)
{
/*
- * Cavium devices matching this quirk do not perform peer-to-peer
- * with other functions, allowing masking out these bits as if they
- * were unimplemented in the ACS capability.
+ * Cavium root ports don't advertise an ACS capability. However,
+ * the RTL internally implements similar protection as if ACS had
+ * Request Redirection, Completion Redirection, Source Validation,
+ * and Upstream Forwarding features enabled. Assert that the
+ * hardware implements and enables equivalent ACS functionality for
+ * these flags.
*/
- acs_flags &= ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR |
- PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT);
+ acs_flags &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_SV | PCI_ACS_UF);
if (!((dev->device >= 0xa000) && (dev->device <= 0xa0ff)))
return -ENOTTY;
Patches currently in stable-queue which might be from Vadim.Lomovtsev(a)cavium.com are
queue-4.14/pci-set-cavium-acs-capability-quirk-flags-to-assert-rr-cr-sv-uf.patch
queue-4.14/pci-apply-cavium-thunderx-acs-quirk-to-more-root-ports.patch
This is a note to let you know that I've just added the patch titled
PCI: hv: Use effective affinity mask
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
pci-hv-use-effective-affinity-mask.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 79aa801e899417a56863d6713f76c4e108856000 Mon Sep 17 00:00:00 2001
From: Dexuan Cui <decui(a)microsoft.com>
Date: Wed, 1 Nov 2017 20:30:53 +0000
Subject: PCI: hv: Use effective affinity mask
From: Dexuan Cui <decui(a)microsoft.com>
commit 79aa801e899417a56863d6713f76c4e108856000 upstream.
The effective_affinity_mask is always set when an interrupt is assigned in
__assign_irq_vector() -> apic->cpu_mask_to_apicid(), e.g. for struct apic
apic_physflat: -> default_cpu_mask_to_apicid() ->
irq_data_update_effective_affinity(), but it looks like d->common->affinity
remains all-1's before user space or the kernel changes it later.
In the early allocation/initialization phase of an IRQ, we should use the
effective_affinity_mask, otherwise Hyper-V may not deliver the interrupt to
the expected CPU. Without the patch, if we assign 7 Mellanox ConnectX-3
VFs to a 32-vCPU VM, one of the VFs may fail to receive interrupts.
Tested-by: Adrian Suhov <v-adsuho(a)microsoft.com>
Signed-off-by: Dexuan Cui <decui(a)microsoft.com>
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Reviewed-by: Jake Oshins <jakeo(a)microsoft.com>
Cc: Jork Loeser <jloeser(a)microsoft.com>
Cc: Stephen Hemminger <sthemmin(a)microsoft.com>
Cc: K. Y. Srinivasan <kys(a)microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/pci/host/pci-hyperv.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -879,7 +879,7 @@ static void hv_irq_unmask(struct irq_dat
int cpu;
u64 res;
- dest = irq_data_get_affinity_mask(data);
+ dest = irq_data_get_effective_affinity_mask(data);
pdev = msi_desc_to_pci_dev(msi_desc);
pbus = pdev->bus;
hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
@@ -1042,6 +1042,7 @@ static void hv_compose_msi_msg(struct ir
struct hv_pci_dev *hpdev;
struct pci_bus *pbus;
struct pci_dev *pdev;
+ struct cpumask *dest;
struct compose_comp_ctxt comp;
struct tran_int_desc *int_desc;
struct {
@@ -1056,6 +1057,7 @@ static void hv_compose_msi_msg(struct ir
int ret;
pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data));
+ dest = irq_data_get_effective_affinity_mask(data);
pbus = pdev->bus;
hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn));
@@ -1081,14 +1083,14 @@ static void hv_compose_msi_msg(struct ir
switch (pci_protocol_version) {
case PCI_PROTOCOL_VERSION_1_1:
size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1,
- irq_data_get_affinity_mask(data),
+ dest,
hpdev->desc.win_slot.slot,
cfg->vector);
break;
case PCI_PROTOCOL_VERSION_1_2:
size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2,
- irq_data_get_affinity_mask(data),
+ dest,
hpdev->desc.win_slot.slot,
cfg->vector);
break;
Patches currently in stable-queue which might be from decui(a)microsoft.com are
queue-4.14/pci-hv-use-effective-affinity-mask.patch
This is a note to let you know that I've just added the patch titled
PCI/ASPM: Use correct capability pointer to program LTR_L1.2_THRESHOLD
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
pci-aspm-use-correct-capability-pointer-to-program-ltr_l1.2_threshold.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From c00054f540bf81e592e1fee709b0bdbf20f478b5 Mon Sep 17 00:00:00 2001
From: Bjorn Helgaas <bhelgaas(a)google.com>
Date: Mon, 13 Nov 2017 15:05:50 -0600
Subject: PCI/ASPM: Use correct capability pointer to program LTR_L1.2_THRESHOLD
From: Bjorn Helgaas <bhelgaas(a)google.com>
commit c00054f540bf81e592e1fee709b0bdbf20f478b5 upstream.
Previously we programmed the LTR_L1.2_THRESHOLD in the parent (upstream)
device using the capability pointer of the *child* (downstream) device,
which corrupted some random word of the parent's config space.
Use the parent's L1 SS capability pointer to program its
LTR_L1.2_THRESHOLD.
Fixes: aeda9adebab8 ("PCI/ASPM: Configure L1 substate settings")
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Reviewed-by: Vidya Sagar <vidyas(a)nvidia.com>
CC: Rajat Jain <rajatja(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/pci/pcie/aspm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/pci/pcie/aspm.c
+++ b/drivers/pci/pcie/aspm.c
@@ -658,7 +658,7 @@ static void pcie_config_aspm_l1ss(struct
0xFF00, link->l1ss.ctl1);
/* Program LTR L1.2 threshold in both ports */
- pci_clear_and_set_dword(parent, dw_cap_ptr + PCI_L1SS_CTL1,
+ pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
0xE3FF0000, link->l1ss.ctl1);
pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1,
0xE3FF0000, link->l1ss.ctl1);
Patches currently in stable-queue which might be from bhelgaas(a)google.com are
queue-4.14/pci-hv-use-effective-affinity-mask.patch
queue-4.14/pci-set-cavium-acs-capability-quirk-flags-to-assert-rr-cr-sv-uf.patch
queue-4.14/pci-aspm-account-for-downstream-device-s-port-common_mode_restore_time.patch
queue-4.14/pci-aspm-use-correct-capability-pointer-to-program-ltr_l1.2_threshold.patch
queue-4.14/pci-apply-cavium-thunderx-acs-quirk-to-more-root-ports.patch
This is a note to let you know that I've just added the patch titled
PCI/ASPM: Account for downstream device's Port Common_Mode_Restore_Time
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
pci-aspm-account-for-downstream-device-s-port-common_mode_restore_time.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 94ac327e043ee40d7fc57b54541da50507ef4e99 Mon Sep 17 00:00:00 2001
From: Bjorn Helgaas <bhelgaas(a)google.com>
Date: Mon, 13 Nov 2017 08:50:30 -0600
Subject: PCI/ASPM: Account for downstream device's Port Common_Mode_Restore_Time
From: Bjorn Helgaas <bhelgaas(a)google.com>
commit 94ac327e043ee40d7fc57b54541da50507ef4e99 upstream.
Every Port that supports the L1.2 substate advertises its Port
Common_Mode_Restore_Time, i.e., the time the Port requires to re-establish
common mode when exiting L1.2 (see PCIe r3.1, sec 7.33.2).
Per sec 5.5.3.3.1, when exiting L1.2, the Downstream Port (the device at
the upstream end of the link) must send TS1 training sequences for at least
T(COMMONMODE) after it detects electrical idle exit on the Link. We want
this to be long enough for both ends of the Link, so we should set it to
the maximum of the Port Common_Mode_Restore_Time for the upstream and
downstream components on the Link.
Previously we only looked at the Port Common_Mode_Restore_Time of the
upstream device, so if the downstream device required more time, we didn't
program the upstream device's T(COMMONMODE) correctly.
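A hedged sketch of that calculation (standalone C; field layout per PCIe r3.1,
where Port Common_Mode_Restore_Time occupies bits 15:8 of the L1 SS capability
register; names are illustrative, not the kernel's):

#include <stdint.h>

static uint32_t set_common_mode_restore_time(uint32_t up_l1ss_cap,
					     uint32_t dw_l1ss_cap,
					     uint32_t ctl1)
{
	uint32_t val1 = (up_l1ss_cap >> 8) & 0xFF;	/* upstream port */
	uint32_t val2 = (dw_l1ss_cap >> 8) & 0xFF;	/* downstream port */

	/* Program the greater of the two T(COMMONMODE) values. */
	ctl1 |= (val1 > val2 ? val1 : val2) << 8;
	return ctl1;
}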
Fixes: f1f0366dd6be ("PCI/ASPM: Calculate and save the L1.2 timing parameters")
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Reviewed-by: Vidya Sagar <vidyas(a)nvidia.com>
Acked-by: Rajat Jain <rajatja(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/pci/pcie/aspm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/drivers/pci/pcie/aspm.c
+++ b/drivers/pci/pcie/aspm.c
@@ -453,7 +453,7 @@ static void aspm_calc_l1ss_info(struct p
/* Choose the greater of the two T_cmn_mode_rstr_time */
val1 = (upreg->l1ss_cap >> 8) & 0xFF;
- val2 = (upreg->l1ss_cap >> 8) & 0xFF;
+ val2 = (dwreg->l1ss_cap >> 8) & 0xFF;
if (val1 > val2)
link->l1ss.ctl1 |= val1 << 8;
else
Patches currently in stable-queue which might be from bhelgaas(a)google.com are
queue-4.14/pci-hv-use-effective-affinity-mask.patch
queue-4.14/pci-set-cavium-acs-capability-quirk-flags-to-assert-rr-cr-sv-uf.patch
queue-4.14/pci-aspm-account-for-downstream-device-s-port-common_mode_restore_time.patch
queue-4.14/pci-aspm-use-correct-capability-pointer-to-program-ltr_l1.2_threshold.patch
queue-4.14/pci-apply-cavium-thunderx-acs-quirk-to-more-root-ports.patch
This is a note to let you know that I've just added the patch titled
PCI: Apply Cavium ThunderX ACS quirk to more Root Ports
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
pci-apply-cavium-thunderx-acs-quirk-to-more-root-ports.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From f2ddaf8dfd4a5071ad09074d2f95ab85d35c8a1e Mon Sep 17 00:00:00 2001
From: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
Date: Tue, 17 Oct 2017 05:47:39 -0700
Subject: PCI: Apply Cavium ThunderX ACS quirk to more Root Ports
From: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
commit f2ddaf8dfd4a5071ad09074d2f95ab85d35c8a1e upstream.
Extend the Cavium ThunderX ACS quirk to cover more device IDs and restrict
it to only Root Ports.
Signed-off-by: Vadim Lomovtsev <Vadim.Lomovtsev(a)cavium.com>
[bhelgaas: changelog, stable tag]
Signed-off-by: Bjorn Helgaas <bhelgaas(a)google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/pci/quirks.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -4212,6 +4212,19 @@ static int pci_quirk_amd_sb_acs(struct p
#endif
}
+static bool pci_quirk_cavium_acs_match(struct pci_dev *dev)
+{
+ /*
+ * Effectively selects all downstream ports for whole ThunderX 1
+ * family by 0xf800 mask (which represents 8 SoCs), while the lower
+ * bits of device ID are used to indicate which subdevice is used
+ * within the SoC.
+ */
+ return (pci_is_pcie(dev) &&
+ (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) &&
+ ((dev->device & 0xf800) == 0xa000));
+}
+
static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags)
{
/*
@@ -4224,7 +4237,7 @@ static int pci_quirk_cavium_acs(struct p
*/
acs_flags &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_SV | PCI_ACS_UF);
- if (!((dev->device >= 0xa000) && (dev->device <= 0xa0ff)))
+ if (!pci_quirk_cavium_acs_match(dev))
return -ENOTTY;
return acs_flags ? 0 : 1;
Patches currently in stable-queue which might be from Vadim.Lomovtsev(a)cavium.com are
queue-4.14/pci-set-cavium-acs-capability-quirk-flags-to-assert-rr-cr-sv-uf.patch
queue-4.14/pci-apply-cavium-thunderx-acs-quirk-to-more-root-ports.patch
This is a note to let you know that I've just added the patch titled
net: mvneta: fix handling of the Tx descriptor counter
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
net-mvneta-fix-handling-of-the-tx-descriptor-counter.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 0d63785c6b94b5d2f095f90755825f90eea791f5 Mon Sep 17 00:00:00 2001
From: Simon Guinot <simon.guinot(a)sequanux.org>
Date: Mon, 13 Nov 2017 16:27:02 +0100
Subject: net: mvneta: fix handling of the Tx descriptor counter
From: Simon Guinot <simon.guinot(a)sequanux.org>
commit 0d63785c6b94b5d2f095f90755825f90eea791f5 upstream.
The mvneta controller provides an 8-bit register to update the pending
Tx descriptor counter, so a maximum of 255 Tx descriptors can be
added at once. In the current code the mvneta_txq_pend_desc_add function
assumes the caller takes care of this limit, but that is not the case. In
some situations (xmit_more flag), more than 255 descriptors are added.
When this happens, the Tx descriptor counter register is updated with a
wrong value, which breaks the whole Tx queue management.
This patch fixes the issue by allowing the mvneta_txq_pend_desc_add
function to process more than 255 Tx descriptors.
Fixes: 2a90f7e1d5d0 ("net: mvneta: add xmit_more support")
Signed-off-by: Simon Guinot <simon.guinot(a)sequanux.org>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/net/ethernet/marvell/mvneta.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -816,11 +816,14 @@ static void mvneta_txq_pend_desc_add(str
{
u32 val;
- /* Only 255 descriptors can be added at once ; Assume caller
- * process TX desriptors in quanta less than 256
- */
- val = pend_desc + txq->pending;
- mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val);
+ pend_desc += txq->pending;
+
+ /* Only 255 Tx descriptors can be added at once */
+ do {
+ val = min(pend_desc, 255);
+ mvreg_write(pp, MVNETA_TXQ_UPDATE_REG(txq->id), val);
+ pend_desc -= val;
+ } while (pend_desc > 0);
txq->pending = 0;
}
Patches currently in stable-queue which might be from simon.guinot(a)sequanux.org are
queue-4.14/net-mvneta-fix-handling-of-the-tx-descriptor-counter.patch
This is a note to let you know that I've just added the patch titled
nbd: wait uninterruptible for the dead timeout
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
nbd-wait-uninterruptible-for-the-dead-timeout.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From ff57dc94faec023abc267cdc45766fccff497557 Mon Sep 17 00:00:00 2001
From: Josef Bacik <jbacik(a)fb.com>
Date: Mon, 6 Nov 2017 16:11:57 -0500
Subject: nbd: wait uninterruptible for the dead timeout
From: Josef Bacik <jbacik(a)fb.com>
commit ff57dc94faec023abc267cdc45766fccff497557 upstream.
If we have a pending signal or the user kills their application, then
it'll bring down the whole device, which is less than awesome. Instead,
wait uninterruptibly for the dead timeout so we're sure we gave it our
best shot.
Fixes: 560bc4b39952 ("nbd: handle dead connections")
Signed-off-by: Josef Bacik <jbacik(a)fb.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/block/nbd.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -723,9 +723,9 @@ static int wait_for_reconnect(struct nbd
return 0;
if (test_bit(NBD_DISCONNECTED, &config->runtime_flags))
return 0;
- wait_event_interruptible_timeout(config->conn_wait,
- atomic_read(&config->live_connections),
- config->dead_conn_timeout);
+ wait_event_timeout(config->conn_wait,
+ atomic_read(&config->live_connections),
+ config->dead_conn_timeout);
return atomic_read(&config->live_connections);
}
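The reason the interruptible variant misbehaves here: with a signal pending, wait_event_interruptible_timeout() returns -ERESTARTSYS without sleeping, so the dead_conn_timeout is never actually honored and the caller sees no live connections. A sketch of the contrast (hypothetical driver code, not nbd itself; 'struct my_dev' and 'link_up' are illustrative names, and <linux/wait.h> is assumed):

static int wait_for_link(struct my_dev *dev, unsigned long timeout)
{
	/* Old pattern: bails out immediately if the task has a signal
	 * pending, e.g. when the user hits Ctrl-C.
	 *
	 * wait_event_interruptible_timeout(dev->wq, dev->link_up, timeout);
	 */

	/* New pattern: always waits for link_up or the full timeout. */
	wait_event_timeout(dev->wq, dev->link_up, timeout);
	return dev->link_up;
}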
Patches currently in stable-queue which might be from jbacik(a)fb.com are
queue-4.14/nbd-wait-uninterruptible-for-the-dead-timeout.patch
queue-4.14/nbd-don-t-start-req-until-after-the-dead-connection-logic.patch
This is a note to let you know that I've just added the patch titled
nbd: don't start req until after the dead connection logic
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
nbd-don-t-start-req-until-after-the-dead-connection-logic.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 6a468d5990ecd1c2d07dd85f8633bbdd0ba61c40 Mon Sep 17 00:00:00 2001
From: Josef Bacik <jbacik(a)fb.com>
Date: Mon, 6 Nov 2017 16:11:58 -0500
Subject: nbd: don't start req until after the dead connection logic
From: Josef Bacik <jbacik(a)fb.com>
commit 6a468d5990ecd1c2d07dd85f8633bbdd0ba61c40 upstream.
We can end up sleeping for a while waiting for the dead timeout, which
means the per-request timer could fire. We did handle this case, but if
the dead timeout happened right after we submitted, we'd either tear
down the connection or possibly requeue while handling an error, and
race with the endio, which can lead to panics and other hilarity.
Fixes: 560bc4b39952 ("nbd: handle dead connections")
Signed-off-by: Josef Bacik <jbacik(a)fb.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/block/nbd.c | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -288,15 +288,6 @@ static enum blk_eh_timer_return nbd_xmit
cmd->status = BLK_STS_TIMEOUT;
return BLK_EH_HANDLED;
}
-
- /* If we are waiting on our dead timer then we could get timeout
- * callbacks for our request. For this we just want to reset the timer
- * and let the queue side take care of everything.
- */
- if (!completion_done(&cmd->send_complete)) {
- nbd_config_put(nbd);
- return BLK_EH_RESET_TIMER;
- }
config = nbd->config;
if (config->num_connections > 1) {
@@ -740,6 +731,7 @@ static int nbd_handle_cmd(struct nbd_cmd
if (!refcount_inc_not_zero(&nbd->config_refs)) {
dev_err_ratelimited(disk_to_dev(nbd->disk),
"Socks array is empty\n");
+ blk_mq_start_request(req);
return -EINVAL;
}
config = nbd->config;
@@ -748,6 +740,7 @@ static int nbd_handle_cmd(struct nbd_cmd
dev_err_ratelimited(disk_to_dev(nbd->disk),
"Attempted send on invalid socket\n");
nbd_config_put(nbd);
+ blk_mq_start_request(req);
return -EINVAL;
}
cmd->status = BLK_STS_OK;
@@ -771,6 +764,7 @@ again:
*/
sock_shutdown(nbd);
nbd_config_put(nbd);
+ blk_mq_start_request(req);
return -EIO;
}
goto again;
@@ -781,6 +775,7 @@ again:
* here so that it gets put _after_ the request that is already on the
* dispatch list.
*/
+ blk_mq_start_request(req);
if (unlikely(nsock->pending && nsock->pending != req)) {
blk_mq_requeue_request(req, true);
ret = 0;
@@ -793,10 +788,10 @@ again:
ret = nbd_send_cmd(nbd, cmd, index);
if (ret == -EAGAIN) {
dev_err_ratelimited(disk_to_dev(nbd->disk),
- "Request send failed trying another connection\n");
+ "Request send failed, requeueing\n");
nbd_mark_nsock_dead(nbd, nsock, 1);
- mutex_unlock(&nsock->tx_lock);
- goto again;
+ blk_mq_requeue_request(req, true);
+ ret = 0;
}
out:
mutex_unlock(&nsock->tx_lock);
@@ -820,7 +815,6 @@ static blk_status_t nbd_queue_rq(struct
* done sending everything over the wire.
*/
init_completion(&cmd->send_complete);
- blk_mq_start_request(bd->rq);
/* We can be called directly from the user space process, which means we
* could possibly have signals pending so our sendmsg will fail. In
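The resulting ordering in ->queue_rq() (simplified sketch, not the actual nbd code; the helper names are illustrative): the request is started only once we are committed to sending or requeueing it, so its timer cannot fire while we are still blocked waiting on a dead connection, and the early error paths start it explicitly before returning an error.

static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
				const struct blk_mq_queue_data *bd)
{
	struct request *req = bd->rq;

	if (!connection_available())		/* may block in the dead-connection wait */
		goto error;

	blk_mq_start_request(req);		/* arm the per-request timer only now */
	if (send_request(req) == -EAGAIN) {
		blk_mq_requeue_request(req, true);
		return BLK_STS_OK;
	}
	return BLK_STS_OK;

error:
	blk_mq_start_request(req);		/* started before the error completion */
	return BLK_STS_IOERR;
}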
Patches currently in stable-queue which might be from jbacik(a)fb.com are
queue-4.14/nbd-wait-uninterruptible-for-the-dead-timeout.patch
queue-4.14/nbd-don-t-start-req-until-after-the-dead-connection-logic.patch
This is a note to let you know that I've just added the patch titled
ALSA: hda: Add Raven PCI ID
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
alsa-hda-add-raven-pci-id.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 9ceace3c9c18c67676e75141032a65a8e01f9a7a Mon Sep 17 00:00:00 2001
From: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
Date: Thu, 23 Nov 2017 20:07:00 +0530
Subject: ALSA: hda: Add Raven PCI ID
From: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
commit 9ceace3c9c18c67676e75141032a65a8e01f9a7a upstream.
This commit adds the PCI ID for the Raven platform.
Signed-off-by: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
sound/pci/hda/hda_intel.c | 3 +++
1 file changed, 3 insertions(+)
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -2463,6 +2463,9 @@ static const struct pci_device_id azx_id
/* AMD Hudson */
{ PCI_DEVICE(0x1022, 0x780d),
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ /* AMD Raven */
+ { PCI_DEVICE(0x1022, 0x15e3),
+ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
/* ATI HDMI */
{ PCI_DEVICE(0x1002, 0x0002),
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
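For reference, PCI_DEVICE() fills in explicit vendor and device IDs and wildcards the subsystem IDs, so the new entry is roughly equivalent to the following (sketch of the macro expansion, assuming the usual definition from <linux/pci.h>):

	/* AMD Raven */
	{ .vendor = 0x1022, .device = 0x15e3,
	  .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },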
Patches currently in stable-queue which might be from Vijendar.Mukunda(a)amd.com are
queue-4.14/alsa-hda-add-raven-pci-id.patch
This is a note to let you know that I've just added the patch titled
ALSA: hda: Add Raven PCI ID
to the 3.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
alsa-hda-add-raven-pci-id.patch
and it can be found in the queue-3.18 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
>From 9ceace3c9c18c67676e75141032a65a8e01f9a7a Mon Sep 17 00:00:00 2001
From: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
Date: Thu, 23 Nov 2017 20:07:00 +0530
Subject: ALSA: hda: Add Raven PCI ID
From: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
commit 9ceace3c9c18c67676e75141032a65a8e01f9a7a upstream.
This commit adds the PCI ID for the Raven platform.
Signed-off-by: Vijendar Mukunda <Vijendar.Mukunda(a)amd.com>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
sound/pci/hda/hda_intel.c | 3 +++
1 file changed, 3 insertions(+)
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -2092,6 +2092,9 @@ static const struct pci_device_id azx_id
/* AMD Hudson */
{ PCI_DEVICE(0x1022, 0x780d),
.driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ /* AMD Raven */
+ { PCI_DEVICE(0x1022, 0x15e3),
+ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
/* ATI HDMI */
{ PCI_DEVICE(0x1002, 0x0002),
.driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
Patches currently in stable-queue which might be from Vijendar.Mukunda(a)amd.com are
queue-3.18/alsa-hda-add-raven-pci-id.patch
On Mon, Nov 27, 2017 at 12:40:36PM +0000, James Hogan wrote:
> Hi Greg,
>
> On Mon, Nov 27, 2017 at 01:35:46PM +0100, gregkh(a)linuxfoundation.org wrote:
> > The patch below was submitted to be applied to the 4.9-stable tree.
> >
> > I fail to see how this patch meets the stable kernel rules as found at
> > Documentation/process/stable-kernel-rules.rst.
> >
> > I could be totally wrong, and if so, please respond to
> > <stable(a)vger.kernel.org> and let me know why this patch should be
> > applied. Otherwise, it is now dropped from my patch queues, never to be
> > seen again.
>
> I should have adjusted the commit message. KERN_WARN doesn't exist, so it
> actually fixes a build error as well as switching to pr_warn().
What build error? I have not heard of this breaking the build on 4.9
for the past year; is it in some config that no one uses? :)
thanks,
greg k-h
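For context on the build error being discussed: the kernel's log-level prefix is KERN_WARNING, and there is no KERN_WARN, so a printk() using the latter fails to compile, while pr_warn() expands to a printk() with the correct level. A minimal illustration (sketch, not the patch under discussion; the message text is made up):

	/* KERN_WARN is not defined in the kernel headers, so this line is a
	 * compile error rather than just a style problem:
	 *
	 *	printk(KERN_WARN "frobnicator misbehaving\n");
	 */

	/* Either of these builds; pr_warn() is the preferred spelling: */
	printk(KERN_WARNING "frobnicator misbehaving\n");
	pr_warn("frobnicator misbehaving\n");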