The bug is in the memcmp() call in iblock_execute_zero_out(): the second
buffer passed in is a single zero byte, but the comparison length is
cmd->data_length. Whenever cmd->data_length is larger than one byte, the
compare reads past the end of that one-byte buffer, which triggers a
fortify_panic.
The fix is to use memchr_inv() instead of memcmp() to check whether the
payload contains any non-zero byte. memchr_inv() scans only the payload
buffer itself, so there is no second buffer to overflow and no
fortify_panic.
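For illustration only (not part of the patch itself), a minimal sketch of
the two patterns, assuming a payload buffer buf of len bytes:

/* Buggy pattern: the reference buffer is a single byte, but the compare
 * length is cmd->data_length, so FORTIFY_SOURCE detects a read past the
 * end of 'zero' and calls fortify_panic().
 */
unsigned char zero = 0x00, *p = &zero;
int rc = memcmp(buf, p, len);            /* overreads &zero when len > 1 */

/* Fixed pattern: memchr_inv() scans 'buf' itself and returns NULL only
 * if every byte equals 0x00, so no reference buffer is needed at all.
 */
unsigned char *not_zero = memchr_inv(buf, 0x00, len);
if (not_zero)
        return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;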
The bug was found by running a block backstore via LIO.
[ 496.212958] Call Trace:
[ 496.212960] [c0000007e58e3800] [c000000000cbbefc] fortify_panic+0x24/0x38 (unreliable)
[ 496.212965] [c0000007e58e3860] [d00000000f150c28] iblock_execute_write_same+0x3b8/0x3c0 [target_core_iblock]
[ 496.212976] [c0000007e58e3910] [d000000006c737d4] __target_execute_cmd+0x54/0x150 [target_core_mod]
[ 496.212982] [c0000007e58e3940] [d000000006d32ce4] ibmvscsis_write_pending+0x74/0xe0 [ibmvscsis]
[ 496.212991] [c0000007e58e39b0] [d000000006c74fc8] transport_generic_new_cmd+0x318/0x370 [target_core_mod]
[ 496.213001] [c0000007e58e3a30] [d000000006c75084] transport_handle_cdb_direct+0x64/0xd0 [target_core_mod]
[ 496.213011] [c0000007e58e3aa0] [d000000006c75298] target_submit_cmd_map_sgls+0x1a8/0x320 [target_core_mod]
[ 496.213021] [c0000007e58e3b30] [d000000006c75458] target_submit_cmd+0x48/0x60 [target_core_mod]
[ 496.213026] [c0000007e58e3bd0] [d000000006d34c20] ibmvscsis_scheduler+0x370/0x600 [ibmvscsis]
[ 496.213031] [c0000007e58e3c90] [c00000000013135c] process_one_work+0x1ec/0x580
[ 496.213035] [c0000007e58e3d20] [c000000000131798] worker_thread+0xa8/0x600
[ 496.213039] [c0000007e58e3dc0] [c00000000013a468] kthread+0x168/0x1b0
[ 496.213044] [c0000007e58e3e30] [c00000000000b528] ret_from_kernel_thread+0x5c/0xb4
Fixes: 2237498f0b5c ("target/iblock: Convert WRITE_SAME to blkdev_issue_zeroout")
Signed-off-by: Bryant G. Ly <bryantly(a)linux.vnet.ibm.com>
Reviewed-by: Steven Royer <seroyer(a)linux.vnet.ibm.com>
Tested-by: Taylor Jakobson <tjakobs(a)us.ibm.com>
Cc: Christoph Hellwig <hch(a)lst.de>
Cc: Nicholas Bellinger <nab(a)linux-iscsi.org>
Cc: <stable(a)vger.kernel.org>
---
drivers/target/target_core_iblock.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 07c814c..6042901 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -427,8 +427,8 @@ iblock_execute_zero_out(struct block_device *bdev, struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
struct scatterlist *sg = &cmd->t_data_sg[0];
- unsigned char *buf, zero = 0x00, *p = &zero;
- int rc, ret;
+ unsigned char *buf, *not_zero;
+ int ret;
buf = kmap(sg_page(sg)) + sg->offset;
if (!buf)
@@ -437,10 +437,10 @@ iblock_execute_zero_out(struct block_device *bdev, struct se_cmd *cmd)
* Fall back to block_execute_write_same() slow-path if
* incoming WRITE_SAME payload does not contain zeros.
*/
- rc = memcmp(buf, p, cmd->data_length);
+ not_zero = memchr_inv(buf, 0x00, cmd->data_length);
kunmap(sg_page(sg));
- if (rc)
+ if (not_zero)
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
ret = blkdev_issue_zeroout(bdev,
--
2.7.2
From: Matthew Wilcox <mawilcox(a)microsoft.com>
Subject: mm/filemap.c: fix NULL pointer in page_cache_tree_insert()
f2fs specifies the __GFP_ZERO flag for allocating some of its pages.
Unfortunately, the page cache also uses the mapping's GFP flags for
allocating radix tree nodes. It has always masked off the __GFP_HIGHMEM
flag, and masks off __GFP_ZERO in some paths, but not all. That causes
radix tree nodes to be allocated with a zeroed (NULL) list_head, which
causes backtraces like:
[<ffffff80086f4de0>] __list_del_entry+0x30/0xd0
[<ffffff8008362018>] list_lru_del+0xac/0x1ac
[<ffffff800830f04c>] page_cache_tree_insert+0xd8/0x110
The __GFP_DMA and __GFP_DMA32 flags would also be able to sneak through if
they are ever used. Fix them all by using GFP_RECLAIM_MASK at the
innermost location, and remove it from earlier in the callchain.
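As a rough sketch of the masking difference (illustrative only;
GFP_RECLAIM_MASK is the mask defined in mm/internal.h, flag names from
include/linux/gfp.h):

gfp_t mapping_gfp = GFP_KERNEL | __GFP_ZERO;    /* e.g. what f2fs sets */
int err;

/* Old: only __GFP_HIGHMEM is stripped, so __GFP_ZERO leaks through and
 * the radix tree node (including its list_head) comes back zeroed.
 */
err = radix_tree_preload(mapping_gfp & ~__GFP_HIGHMEM);

/* New: GFP_RECLAIM_MASK keeps only the reclaim/behaviour bits, dropping
 * placement and zeroing flags such as __GFP_HIGHMEM, __GFP_ZERO,
 * __GFP_DMA and __GFP_DMA32.
 */
err = radix_tree_preload(mapping_gfp & GFP_RECLAIM_MASK);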
Link: http://lkml.kernel.org/r/20180411060320.14458-2-willy@infradead.org
Fixes: 449dd6984d0e ("mm: keep page cache radix tree nodes in check")
Signed-off-by: Matthew Wilcox <mawilcox(a)microsoft.com>
Reported-by: Chris Fries <cfries(a)google.com>
Debugged-by: Minchan Kim <minchan(a)kernel.org>
Acked-by: Johannes Weiner <hannes(a)cmpxchg.org>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
mm/filemap.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff -puN mm/filemap.c~fix-null-pointer-in-page_cache_tree_insert mm/filemap.c
--- a/mm/filemap.c~fix-null-pointer-in-page_cache_tree_insert
+++ a/mm/filemap.c
@@ -786,7 +786,7 @@ int replace_page_cache_page(struct page
VM_BUG_ON_PAGE(!PageLocked(new), new);
VM_BUG_ON_PAGE(new->mapping, new);
- error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
+ error = radix_tree_preload(gfp_mask & GFP_RECLAIM_MASK);
if (!error) {
struct address_space *mapping = old->mapping;
void (*freepage)(struct page *);
@@ -842,7 +842,7 @@ static int __add_to_page_cache_locked(st
return error;
}
- error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM);
+ error = radix_tree_maybe_preload(gfp_mask & GFP_RECLAIM_MASK);
if (error) {
if (!huge)
mem_cgroup_cancel_charge(page, memcg, false);
@@ -1585,8 +1585,7 @@ no_page:
if (fgp_flags & FGP_ACCESSED)
__SetPageReferenced(page);
- err = add_to_page_cache_lru(page, mapping, offset,
- gfp_mask & GFP_RECLAIM_MASK);
+ err = add_to_page_cache_lru(page, mapping, offset, gfp_mask);
if (unlikely(err)) {
put_page(page);
page = NULL;
@@ -2387,7 +2386,7 @@ static int page_cache_read(struct file *
if (!page)
return -ENOMEM;
- ret = add_to_page_cache_lru(page, mapping, offset, gfp_mask & GFP_KERNEL);
+ ret = add_to_page_cache_lru(page, mapping, offset, gfp_mask);
if (ret == 0)
ret = mapping->a_ops->readpage(file, page);
else if (ret == -EEXIST)
_
From: Ian Kent <raven(a)themaw.net>
Subject: autofs: mount point create should honour passed in mode
The autofs file system mkdir inode operation blindly sets the created
directory mode to S_IFDIR | 0555, ignoring the passed-in mode, which can
cause selinux dac_override denials.
But the function also checks that the caller is the daemon (as no-one else
should be able to do anything here), so there's no point in not honouring
the passed-in mode, which allows the daemon to set an appropriate mode when
required.
Link: http://lkml.kernel.org/r/152361593601.8051.14014139124905996173.stgit@pluto…
Signed-off-by: Ian Kent <raven(a)themaw.net>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
fs/autofs4/root.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -puN fs/autofs4/root.c~autofs-mount-point-create-should-honour-passed-in-mode fs/autofs4/root.c
--- a/fs/autofs4/root.c~autofs-mount-point-create-should-honour-passed-in-mode
+++ a/fs/autofs4/root.c
@@ -749,7 +749,7 @@ static int autofs4_dir_mkdir(struct inod
autofs4_del_active(dentry);
- inode = autofs4_get_inode(dir->i_sb, S_IFDIR | 0555);
+ inode = autofs4_get_inode(dir->i_sb, S_IFDIR | mode);
if (!inode)
return -ENOMEM;
d_add(dentry, inode);
_
From: Ioan Nicu <ioan.nicu.ext(a)nokia.com>
Subject: rapidio: fix rio_dma_transfer error handling
Some of the mport_dma_req structure members were initialized late
inside the do_dma_request() function, just before submitting the
request to the dma engine. But there are some error branches before
that point. In case of such an error, the code would return on the error
path and trigger a call to dma_req_free() with a req structure
which is not completely initialized. This causes a NULL pointer
dereference in dma_req_free().
This patch fixes these error branches by making sure that all
necessary mport_dma_req structure members are initialized in
rio_dma_transfer() immediately after the request structure gets
allocated.
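Condensed from the rio_dma_transfer() hunk in the diff below, the pattern
being applied is: everything the final kref_put()/dma_req_free() path
relies on is set up right after the allocation, before any branch that can
fail (illustrative sketch, error handling trimmed):

req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req)
        return -ENOMEM;

ret = get_dma_channel(priv);
if (ret) {
        kfree(req);
        return ret;
}

/* Fully initialize the request before any further error branch, so a
 * later kref_put() -> dma_req_free() never sees half-initialized state.
 */
kref_init(&req->refcount);
init_completion(&req->req_comp);
req->dir   = dir;
req->filp  = filp;
req->priv  = priv;
req->dmach = priv->dmach;
req->sync  = sync;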
Link: http://lkml.kernel.org/r/20180412150605.GA31409@nokia.com
Fixes: bbd876adb8c72 ("rapidio: use a reference count for struct mport_dma_req")
Signed-off-by: Ioan Nicu <ioan.nicu.ext(a)nokia.com>
Tested-by: Alexander Sverdlin <alexander.sverdlin(a)nokia.com>
Acked-by: Alexandre Bounine <alex.bou9(a)gmail.com>
Cc: Barry Wood <barry.wood(a)idt.com>
Cc: Matt Porter <mporter(a)kernel.crashing.org>
Cc: Christophe JAILLET <christophe.jaillet(a)wanadoo.fr>
Cc: Logan Gunthorpe <logang(a)deltatee.com>
Cc: Chris Wilson <chris(a)chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin(a)intel.com>
Cc: Frank Kunz <frank.kunz(a)nokia.com>
Cc: <stable(a)vger.kernel.org> [4.6+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
drivers/rapidio/devices/rio_mport_cdev.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff -puN drivers/rapidio/devices/rio_mport_cdev.c~rapidio-fix-rio_dma_transfer-error-handling drivers/rapidio/devices/rio_mport_cdev.c
--- a/drivers/rapidio/devices/rio_mport_cdev.c~rapidio-fix-rio_dma_transfer-error-handling
+++ a/drivers/rapidio/devices/rio_mport_cdev.c
@@ -740,10 +740,7 @@ static int do_dma_request(struct mport_d
tx->callback = dma_xfer_callback;
tx->callback_param = req;
- req->dmach = chan;
- req->sync = sync;
req->status = DMA_IN_PROGRESS;
- init_completion(&req->req_comp);
kref_get(&req->refcount);
cookie = dmaengine_submit(tx);
@@ -831,13 +828,20 @@ rio_dma_transfer(struct file *filp, u32
if (!req)
return -ENOMEM;
- kref_init(&req->refcount);
-
ret = get_dma_channel(priv);
if (ret) {
kfree(req);
return ret;
}
+ chan = priv->dmach;
+
+ kref_init(&req->refcount);
+ init_completion(&req->req_comp);
+ req->dir = dir;
+ req->filp = filp;
+ req->priv = priv;
+ req->dmach = chan;
+ req->sync = sync;
/*
* If parameter loc_addr != NULL, we are transferring data from/to
@@ -925,11 +929,6 @@ rio_dma_transfer(struct file *filp, u32
xfer->offset, xfer->length);
}
- req->dir = dir;
- req->filp = filp;
- req->priv = priv;
- chan = priv->dmach;
-
nents = dma_map_sg(chan->device->dev,
req->sgt.sgl, req->sgt.nents, dir);
if (nents == 0) {
_
From: Greg Thelen <gthelen(a)google.com>
Subject: writeback: safer lock nesting
lock_page_memcg()/unlock_page_memcg() use spin_lock_irqsave/restore() if
the page's memcg is undergoing move accounting, which occurs when a
process leaves its memcg for a new one that has
memory.move_charge_at_immigrate set.
unlocked_inode_to_wb_begin,end() use spin_lock_irq/spin_unlock_irq() if
the given inode is switching writeback domains. Switches occur when
enough writes are issued from a new domain.
This existing pattern is thus suspicious:
lock_page_memcg(page);
unlocked_inode_to_wb_begin(inode, &locked);
...
unlocked_inode_to_wb_end(inode, locked);
unlock_page_memcg(page);
If an inode switch and a process memcg migration are both in-flight then
unlocked_inode_to_wb_end() will unconditionally enable interrupts while
still holding the lock_page_memcg() irq spinlock. This suggests the
possibility of deadlock if an interrupt occurs before unlock_page_memcg().
truncate
  __cancel_dirty_page
    lock_page_memcg
    unlocked_inode_to_wb_begin
    unlocked_inode_to_wb_end
      <interrupts mistakenly enabled>
      <interrupt>
        end_page_writeback
          test_clear_page_writeback
            lock_page_memcg
              <deadlock>
    unlock_page_memcg
Due to configuration limitations this deadlock is not currently possible
because we don't mix cgroup writeback (a cgroupv2 feature) and
memory.move_charge_at_immigrate (a cgroupv1 feature).
If the kernel is hacked to always claim inode switching and memcg
moving_account, then this script triggers lockup in less than a minute:
cd /mnt/cgroup/memory
mkdir a b
echo 1 > a/memory.move_charge_at_immigrate
echo 1 > b/memory.move_charge_at_immigrate
(
  echo $BASHPID > a/cgroup.procs
  while true; do
    dd if=/dev/zero of=/mnt/big bs=1M count=256
  done
) &
while true; do
  sync
done &
sleep 1h &
SLEEP=$!
while true; do
  echo $SLEEP > a/cgroup.procs
  echo $SLEEP > b/cgroup.procs
done
The deadlock does not currently seem possible, so it's debatable whether
there's any reason to modify the kernel. I suggest we do, to prevent future
surprises. And Wang Long said "this deadlock occurs three times in our
environment", so there's more reason to apply this, even to stable.
Stable 4.4 has minor conflicts applying this patch. For a clean 4.4 patch
see "[PATCH for-4.4] writeback: safer lock nesting"
https://lkml.org/lkml/2018/4/11/146
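For background, the underlying locking rule can be sketched generically
(illustrative code, not the writeback code itself): a section that may run
with interrupts already disabled by an outer irq-safe lock must use the
irqsave/irqrestore variants so the outer state is preserved.

/* Wrong: may be entered under lock_page_memcg(), i.e. with IRQs already
 * disabled by an outer irq-safe lock.
 */
static void nested_section_wrong(spinlock_t *lock)
{
        spin_lock_irq(lock);
        /* ... */
        spin_unlock_irq(lock);  /* BUG: unconditionally re-enables IRQs
                                 * while the outer lock is still held */
}

/* Right: save and restore whatever IRQ state the caller had. */
static void nested_section_right(spinlock_t *lock)
{
        unsigned long flags;

        spin_lock_irqsave(lock, flags);
        /* ... */
        spin_unlock_irqrestore(lock, flags);
}

The wb_lock_cookie introduced below carries exactly this saved flags value
(plus whether the lock was taken at all) across
unlocked_inode_to_wb_begin()/unlocked_inode_to_wb_end().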
[gthelen(a)google.com: v4]
Link: http://lkml.kernel.org/r/20180411084653.254724-1-gthelen@google.com
[akpm(a)linux-foundation.org: comment tweaks, struct initialization simplification]
Change-Id: Ibb773e8045852978f6207074491d262f1b3fb613
Link: http://lkml.kernel.org/r/20180410005908.167976-1-gthelen@google.com
Fixes: 682aa8e1a6a1 ("writeback: implement unlocked_inode_to_wb transaction and use it for stat updates")
Signed-off-by: Greg Thelen <gthelen(a)google.com>
Reported-by: Wang Long <wanglong19(a)meituan.com>
Acked-by: Wang Long <wanglong19(a)meituan.com>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Johannes Weiner <hannes(a)cmpxchg.org>
Cc: Tejun Heo <tj(a)kernel.org>
Cc: Nicholas Piggin <npiggin(a)gmail.com>
Cc: <stable(a)vger.kernel.org> [v4.2+]
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
fs/fs-writeback.c | 7 +++---
include/linux/backing-dev-defs.h | 5 ++++
include/linux/backing-dev.h | 30 +++++++++++++++--------------
mm/page-writeback.c | 18 ++++++++---------
4 files changed, 34 insertions(+), 26 deletions(-)
diff -puN fs/fs-writeback.c~writeback-safer-lock-nesting fs/fs-writeback.c
--- a/fs/fs-writeback.c~writeback-safer-lock-nesting
+++ a/fs/fs-writeback.c
@@ -745,11 +745,12 @@ int inode_congested(struct inode *inode,
*/
if (inode && inode_to_wb_is_valid(inode)) {
struct bdi_writeback *wb;
- bool locked, congested;
+ struct wb_lock_cookie lock_cookie = {};
+ bool congested;
- wb = unlocked_inode_to_wb_begin(inode, &locked);
+ wb = unlocked_inode_to_wb_begin(inode, &lock_cookie);
congested = wb_congested(wb, cong_bits);
- unlocked_inode_to_wb_end(inode, locked);
+ unlocked_inode_to_wb_end(inode, &lock_cookie);
return congested;
}
diff -puN include/linux/backing-dev-defs.h~writeback-safer-lock-nesting include/linux/backing-dev-defs.h
--- a/include/linux/backing-dev-defs.h~writeback-safer-lock-nesting
+++ a/include/linux/backing-dev-defs.h
@@ -223,6 +223,11 @@ static inline void set_bdi_congested(str
set_wb_congested(bdi->wb.congested, sync);
}
+struct wb_lock_cookie {
+ bool locked;
+ unsigned long flags;
+};
+
#ifdef CONFIG_CGROUP_WRITEBACK
/**
diff -puN include/linux/backing-dev.h~writeback-safer-lock-nesting include/linux/backing-dev.h
--- a/include/linux/backing-dev.h~writeback-safer-lock-nesting
+++ a/include/linux/backing-dev.h
@@ -347,7 +347,7 @@ static inline struct bdi_writeback *inod
/**
* unlocked_inode_to_wb_begin - begin unlocked inode wb access transaction
* @inode: target inode
- * @lockedp: temp bool output param, to be passed to the end function
+ * @cookie: output param, to be passed to the end function
*
* The caller wants to access the wb associated with @inode but isn't
* holding inode->i_lock, the i_pages lock or wb->list_lock. This
@@ -355,12 +355,12 @@ static inline struct bdi_writeback *inod
* association doesn't change until the transaction is finished with
* unlocked_inode_to_wb_end().
*
- * The caller must call unlocked_inode_to_wb_end() with *@lockdep
- * afterwards and can't sleep during transaction. IRQ may or may not be
- * disabled on return.
+ * The caller must call unlocked_inode_to_wb_end() with *@cookie afterwards and
+ * can't sleep during the transaction. IRQs may or may not be disabled on
+ * return.
*/
static inline struct bdi_writeback *
-unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
+unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
{
rcu_read_lock();
@@ -368,10 +368,10 @@ unlocked_inode_to_wb_begin(struct inode
* Paired with store_release in inode_switch_wb_work_fn() and
* ensures that we see the new wb if we see cleared I_WB_SWITCH.
*/
- *lockedp = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
+ cookie->locked = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;
- if (unlikely(*lockedp))
- xa_lock_irq(&inode->i_mapping->i_pages);
+ if (unlikely(cookie->locked))
+ xa_lock_irqsave(&inode->i_mapping->i_pages, cookie->flags);
/*
* Protected by either !I_WB_SWITCH + rcu_read_lock() or the i_pages
@@ -383,12 +383,13 @@ unlocked_inode_to_wb_begin(struct inode
/**
* unlocked_inode_to_wb_end - end inode wb access transaction
* @inode: target inode
- * @locked: *@lockedp from unlocked_inode_to_wb_begin()
+ * @cookie: @cookie from unlocked_inode_to_wb_begin()
*/
-static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
+static inline void unlocked_inode_to_wb_end(struct inode *inode,
+ struct wb_lock_cookie *cookie)
{
- if (unlikely(locked))
- xa_unlock_irq(&inode->i_mapping->i_pages);
+ if (unlikely(cookie->locked))
+ xa_unlock_irqrestore(&inode->i_mapping->i_pages, cookie->flags);
rcu_read_unlock();
}
@@ -435,12 +436,13 @@ static inline struct bdi_writeback *inod
}
static inline struct bdi_writeback *
-unlocked_inode_to_wb_begin(struct inode *inode, bool *lockedp)
+unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
{
return inode_to_wb(inode);
}
-static inline void unlocked_inode_to_wb_end(struct inode *inode, bool locked)
+static inline void unlocked_inode_to_wb_end(struct inode *inode,
+ struct wb_lock_cookie *cookie)
{
}
diff -puN mm/page-writeback.c~writeback-safer-lock-nesting mm/page-writeback.c
--- a/mm/page-writeback.c~writeback-safer-lock-nesting
+++ a/mm/page-writeback.c
@@ -2502,13 +2502,13 @@ void account_page_redirty(struct page *p
if (mapping && mapping_cap_account_dirty(mapping)) {
struct inode *inode = mapping->host;
struct bdi_writeback *wb;
- bool locked;
+ struct wb_lock_cookie cookie = {};
- wb = unlocked_inode_to_wb_begin(inode, &locked);
+ wb = unlocked_inode_to_wb_begin(inode, &cookie);
current->nr_dirtied--;
dec_node_page_state(page, NR_DIRTIED);
dec_wb_stat(wb, WB_DIRTIED);
- unlocked_inode_to_wb_end(inode, locked);
+ unlocked_inode_to_wb_end(inode, &cookie);
}
}
EXPORT_SYMBOL(account_page_redirty);
@@ -2614,15 +2614,15 @@ void __cancel_dirty_page(struct page *pa
if (mapping_cap_account_dirty(mapping)) {
struct inode *inode = mapping->host;
struct bdi_writeback *wb;
- bool locked;
+ struct wb_lock_cookie cookie = {};
lock_page_memcg(page);
- wb = unlocked_inode_to_wb_begin(inode, &locked);
+ wb = unlocked_inode_to_wb_begin(inode, &cookie);
if (TestClearPageDirty(page))
account_page_cleaned(page, mapping, wb);
- unlocked_inode_to_wb_end(inode, locked);
+ unlocked_inode_to_wb_end(inode, &cookie);
unlock_page_memcg(page);
} else {
ClearPageDirty(page);
@@ -2654,7 +2654,7 @@ int clear_page_dirty_for_io(struct page
if (mapping && mapping_cap_account_dirty(mapping)) {
struct inode *inode = mapping->host;
struct bdi_writeback *wb;
- bool locked;
+ struct wb_lock_cookie cookie = {};
/*
* Yes, Virginia, this is indeed insane.
@@ -2691,14 +2691,14 @@ int clear_page_dirty_for_io(struct page
* always locked coming in here, so we get the desired
* exclusion.
*/
- wb = unlocked_inode_to_wb_begin(inode, &locked);
+ wb = unlocked_inode_to_wb_begin(inode, &cookie);
if (TestClearPageDirty(page)) {
dec_lruvec_page_state(page, NR_FILE_DIRTY);
dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
dec_wb_stat(wb, WB_RECLAIMABLE);
ret = 1;
}
- unlocked_inode_to_wb_end(inode, locked);
+ unlocked_inode_to_wb_end(inode, &cookie);
return ret;
}
return TestClearPageDirty(page);
_
The pci-hyperv driver's channel callback hv_pci_onchannelcallback() is not
really a hot path, so we don't need to mark the device as a perf_device;
with this patch all HV_PCIE channels' target_cpu will be CPU0.
Signed-off-by: Dexuan Cui <decui(a)microsoft.com>
Cc: stable(a)vger.kernel.org
Cc: Stephen Hemminger <sthemmin(a)microsoft.com>
Cc: K. Y. Srinivasan <kys(a)microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys(a)microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
(cherry picked from commit 238064f13d057390a8c5e1a6a80f4f0a0ec46499)
Signed-off-by: Mohammed Gamal <mgamal(a)redhat.com>
---
drivers/hv/channel_mgmt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index c6d9d19..ecc2bd2 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -71,7 +71,7 @@ static const struct vmbus_device vmbus_devs[] = {
/* PCIE */
{ .dev_type = HV_PCIE,
HV_PCIE_GUID,
- .perf_device = true,
+ .perf_device = false,
},
/* Synthetic Frame Buffer */
--
1.8.3.1
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 6225f9c64b40bc8a22503e9cda70f55d7a9dd3c6 Mon Sep 17 00:00:00 2001
From: Ryo Kodama <ryo.kodama.vz(a)renesas.com>
Date: Fri, 9 Mar 2018 20:24:21 +0900
Subject: [PATCH] pwm: rcar: Fix a condition to prevent mismatch value setting
to duty
This patch fixes an issue where a mismatched value could be set as the duty
cycle for the R-Car PWM if we input the following commands:
# cd /sys/class/pwm/<pwmchip>/
# echo 0 > export
# cd pwm0
# echo 30 > period
# echo 30 > duty_cycle
# echo 0 > duty_cycle
# cat duty_cycle
0
# echo 1 > enable
--> Then, the actual duty_cycle is 30, not 0.
So, this patch adds a condition into rcar_pwm_config() to fix this
issue.
Signed-off-by: Ryo Kodama <ryo.kodama.vz(a)renesas.com>
[shimoda: revise the commit log and add Fixes and Cc tags]
Fixes: ed6c1476bf7f ("pwm: Add support for R-Car PWM Timer")
Cc: <stable(a)vger.kernel.org> # v4.4+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh(a)renesas.com>
Signed-off-by: Thierry Reding <thierry.reding(a)gmail.com>
diff --git a/drivers/pwm/pwm-rcar.c b/drivers/pwm/pwm-rcar.c
index 1c85ecc9e7ac..0fcf94ffad32 100644
--- a/drivers/pwm/pwm-rcar.c
+++ b/drivers/pwm/pwm-rcar.c
@@ -156,8 +156,12 @@ static int rcar_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
if (div < 0)
return div;
- /* Let the core driver set pwm->period if disabled and duty_ns == 0 */
- if (!pwm_is_enabled(pwm) && !duty_ns)
+ /*
+ * Let the core driver set pwm->period if disabled and duty_ns == 0.
+ * But, this driver should prevent to set the new duty_ns if current
+ * duty_cycle is not set
+ */
+ if (!pwm_is_enabled(pwm) && !duty_ns && !pwm->state.duty_cycle)
return 0;
rcar_pwm_update(rp, RCAR_PWMCR_SYNC, RCAR_PWMCR_SYNC, RCAR_PWMCR);
The patch below does not apply to the 4.16-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 5d447f09b8d8346c64f4c952a67c61f7ce88d3c1 Mon Sep 17 00:00:00 2001
From: Shirish S <shirish.s(a)amd.com>
Date: Wed, 21 Feb 2018 16:10:33 +0530
Subject: [PATCH] drm/amd/display: check for ipp before calling cursor
operations
Currently all cursor related operations are applied to all
pipes that are attached to a particular stream.
This is not applicable to pipes that do not have a cursor plane
initialised, like underlay.
Hence this patch allows cursor related operations on a pipe
only if ipp is available on that particular pipe.
The check is added to set_cursor_position & set_cursor_attribute.
Signed-off-by: Shirish S <shirish.s(a)amd.com>
Reviewed-by: Harry Wentland <harry.wentland(a)amd.com>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
Cc: stable(a)vger.kernel.org
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
index 87a193ac2883..cd5819789d76 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
@@ -198,7 +198,8 @@ bool dc_stream_set_cursor_attributes(
for (i = 0; i < MAX_PIPES; i++) {
struct pipe_ctx *pipe_ctx = &res_ctx->pipe_ctx[i];
- if (pipe_ctx->stream != stream || (!pipe_ctx->plane_res.xfm && !pipe_ctx->plane_res.dpp))
+ if (pipe_ctx->stream != stream || (!pipe_ctx->plane_res.xfm &&
+ !pipe_ctx->plane_res.dpp) || !pipe_ctx->plane_res.ipp)
continue;
if (pipe_ctx->top_pipe && pipe_ctx->plane_state != pipe_ctx->top_pipe->plane_state)
continue;
@@ -237,7 +238,8 @@ bool dc_stream_set_cursor_position(
if (pipe_ctx->stream != stream ||
(!pipe_ctx->plane_res.mi && !pipe_ctx->plane_res.hubp) ||
!pipe_ctx->plane_state ||
- (!pipe_ctx->plane_res.xfm && !pipe_ctx->plane_res.dpp))
+ (!pipe_ctx->plane_res.xfm && !pipe_ctx->plane_res.dpp) ||
+ !pipe_ctx->plane_res.ipp)
continue;
core_dc->hwss.set_cursor_position(pipe_ctx);
The patch below does not apply to the 4.16-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From ea74e15fb547483f9f86088443f2d3c9f518de8b Mon Sep 17 00:00:00 2001
From: Harry Wentland <harry.wentland(a)amd.com>
Date: Tue, 20 Feb 2018 13:36:23 -0500
Subject: [PATCH] drm/amd/display: Default HDMI6G support to true. Log VBIOS
table error.
There have been many reports of Ellesmere and Baffin systems not being
able to drive HDMI 4k60 due to the fact that we check the HDMI_6GB_EN
bit from VBIOS table. Windows seems to not have this issue.
On some systems we fail to read the encoder cap info from VBIOS. In that
case we should default to enabling HDMI6G support.
This was tested by dwagner on
https://bugs.freedesktop.org/show_bug.cgi?id=102820
Signed-off-by: Harry Wentland <harry.wentland(a)amd.com>
Reviewed-by: Roman Li <Roman.Li(a)amd.com>
Reviewed-by: Tony Cheng <Tony.Cheng(a)amd.com>
Acked-by: Harry Wentland <harry.wentland(a)amd.com>
Signed-off-by: Alex Deucher <alexander.deucher(a)amd.com>
Cc: stable(a)vger.kernel.org
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
index f0d63ac7724a..81776e4797ed 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
@@ -678,6 +678,7 @@ void dce110_link_encoder_construct(
{
struct bp_encoder_cap_info bp_cap_info = {0};
const struct dc_vbios_funcs *bp_funcs = init_data->ctx->dc_bios->funcs;
+ enum bp_result result = BP_RESULT_OK;
enc110->base.funcs = &dce110_lnk_enc_funcs;
enc110->base.ctx = init_data->ctx;
@@ -752,15 +753,24 @@ void dce110_link_encoder_construct(
enc110->base.preferred_engine = ENGINE_ID_UNKNOWN;
}
+ /* default to one to mirror Windows behavior */
+ enc110->base.features.flags.bits.HDMI_6GB_EN = 1;
+
+ result = bp_funcs->get_encoder_cap_info(enc110->base.ctx->dc_bios,
+ enc110->base.id, &bp_cap_info);
+
/* Override features with DCE-specific values */
- if (BP_RESULT_OK == bp_funcs->get_encoder_cap_info(
- enc110->base.ctx->dc_bios, enc110->base.id,
- &bp_cap_info)) {
+ if (BP_RESULT_OK == result) {
enc110->base.features.flags.bits.IS_HBR2_CAPABLE =
bp_cap_info.DP_HBR2_EN;
enc110->base.features.flags.bits.IS_HBR3_CAPABLE =
bp_cap_info.DP_HBR3_EN;
enc110->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN;
+ } else {
+ dm_logger_write(enc110->base.ctx->logger, LOG_WARNING,
+ "%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
+ __func__,
+ result);
}
}
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 0731de476a37c33485af82d64041c9d193208df8 Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Wed, 21 Mar 2018 21:22:34 -0700
Subject: [PATCH] nfit: skip region registration for incomplete control regions
Per the ACPI specification the only functional purpose for a DIMM
Control Region to be mapped into the system physical address space, from
an OSPM perspective, is to support block-apertures. However, there are
some BIOSen that publish DIMM Control Region SPA entries for pre-boot
environment consumption. Undo the kernel policy of generating disabled
'ndblk' regions when this configuration is detected.
Cc: <stable(a)vger.kernel.org>
Fixes: 1f7df6f88b92 ("libnvdimm, nfit: regions (block-data-window...)")
Reviewed-by: Toshi Kani <toshi.kani(a)hpe.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index 39ad06143e78..4530d89044db 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -2578,7 +2578,7 @@ static int acpi_nfit_init_mapping(struct acpi_nfit_desc *acpi_desc,
struct acpi_nfit_system_address *spa = nfit_spa->spa;
struct nd_blk_region_desc *ndbr_desc;
struct nfit_mem *nfit_mem;
- int blk_valid = 0, rc;
+ int rc;
if (!nvdimm) {
dev_err(acpi_desc->dev, "spa%d dimm: %#x not found\n",
@@ -2598,15 +2598,14 @@ static int acpi_nfit_init_mapping(struct acpi_nfit_desc *acpi_desc,
if (!nfit_mem || !nfit_mem->bdw) {
dev_dbg(acpi_desc->dev, "spa%d %s missing bdw\n",
spa->range_index, nvdimm_name(nvdimm));
- } else {
- mapping->size = nfit_mem->bdw->capacity;
- mapping->start = nfit_mem->bdw->start_address;
- ndr_desc->num_lanes = nfit_mem->bdw->windows;
- blk_valid = 1;
+ break;
}
+ mapping->size = nfit_mem->bdw->capacity;
+ mapping->start = nfit_mem->bdw->start_address;
+ ndr_desc->num_lanes = nfit_mem->bdw->windows;
ndr_desc->mapping = mapping;
- ndr_desc->num_mappings = blk_valid;
+ ndr_desc->num_mappings = 1;
ndbr_desc = to_blk_region_desc(ndr_desc);
ndbr_desc->enable = acpi_nfit_blk_region_enable;
ndbr_desc->do_io = acpi_desc->blk_do_io;
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 0731de476a37c33485af82d64041c9d193208df8 Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Wed, 21 Mar 2018 21:22:34 -0700
Subject: [PATCH] nfit: skip region registration for incomplete control regions
Per the ACPI specification the only functional purpose for a DIMM
Control Region to be mapped into the system physical address space, from
an OSPM perspective, is to support block-apertures. However, there are
some BIOSen that publish DIMM Control Region SPA entries for pre-boot
environment consumption. Undo the kernel policy of generating disabled
'ndblk' regions when this configuration is detected.
Cc: <stable(a)vger.kernel.org>
Fixes: 1f7df6f88b92 ("libnvdimm, nfit: regions (block-data-window...)")
Reviewed-by: Toshi Kani <toshi.kani(a)hpe.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index 39ad06143e78..4530d89044db 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -2578,7 +2578,7 @@ static int acpi_nfit_init_mapping(struct acpi_nfit_desc *acpi_desc,
struct acpi_nfit_system_address *spa = nfit_spa->spa;
struct nd_blk_region_desc *ndbr_desc;
struct nfit_mem *nfit_mem;
- int blk_valid = 0, rc;
+ int rc;
if (!nvdimm) {
dev_err(acpi_desc->dev, "spa%d dimm: %#x not found\n",
@@ -2598,15 +2598,14 @@ static int acpi_nfit_init_mapping(struct acpi_nfit_desc *acpi_desc,
if (!nfit_mem || !nfit_mem->bdw) {
dev_dbg(acpi_desc->dev, "spa%d %s missing bdw\n",
spa->range_index, nvdimm_name(nvdimm));
- } else {
- mapping->size = nfit_mem->bdw->capacity;
- mapping->start = nfit_mem->bdw->start_address;
- ndr_desc->num_lanes = nfit_mem->bdw->windows;
- blk_valid = 1;
+ break;
}
+ mapping->size = nfit_mem->bdw->capacity;
+ mapping->start = nfit_mem->bdw->start_address;
+ ndr_desc->num_lanes = nfit_mem->bdw->windows;
ndr_desc->mapping = mapping;
- ndr_desc->num_mappings = blk_valid;
+ ndr_desc->num_mappings = 1;
ndbr_desc = to_blk_region_desc(ndr_desc);
ndbr_desc->enable = acpi_nfit_blk_region_enable;
ndbr_desc->do_io = acpi_desc->blk_do_io;
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From c31898c8c711f2bbbcaebe802a55827e288d875a Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Fri, 6 Apr 2018 11:25:38 -0700
Subject: [PATCH] libnvdimm, dimm: fix dpa reservation vs uninitialized label
area
At initialization time the 'dimm' driver caches a copy of the memory
device's label area and reserves address space for each of the
namespaces defined.
However, as can be seen below, the reservation occurs even when the
index blocks are invalid:
nvdimm nmem0: nvdimm_init_config_data: len: 131072 rc: 0
nvdimm nmem0: config data size: 131072
nvdimm nmem0: __nd_label_validate: nsindex0 labelsize 1 invalid
nvdimm nmem0: __nd_label_validate: nsindex1 labelsize 1 invalid
nvdimm nmem0: : pmem-6025e505: 0x1000000000 @ 0xf50000000 reserve <-- bad
Gate dpa reservation on the presence of valid index blocks.
Cc: <stable(a)vger.kernel.org>
Fixes: 4a826c83db4e ("libnvdimm: namespace indices: read and validate")
Reported-by: Krzysztof Rusocki <krzysztof.rusocki(a)intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
diff --git a/drivers/nvdimm/dimm.c b/drivers/nvdimm/dimm.c
index f8913b8124b6..233907889f96 100644
--- a/drivers/nvdimm/dimm.c
+++ b/drivers/nvdimm/dimm.c
@@ -67,9 +67,11 @@ static int nvdimm_probe(struct device *dev)
ndd->ns_next = nd_label_next_nsindex(ndd->ns_current);
nd_label_copy(ndd, to_next_namespace_index(ndd),
to_current_namespace_index(ndd));
- rc = nd_label_reserve_dpa(ndd);
- if (ndd->ns_current >= 0)
- nvdimm_set_aliasing(dev);
+ if (ndd->ns_current >= 0) {
+ rc = nd_label_reserve_dpa(ndd);
+ if (rc == 0)
+ nvdimm_set_aliasing(dev);
+ }
nvdimm_clear_locked(dev);
nvdimm_bus_unlock(dev);
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From c31898c8c711f2bbbcaebe802a55827e288d875a Mon Sep 17 00:00:00 2001
From: Dan Williams <dan.j.williams(a)intel.com>
Date: Fri, 6 Apr 2018 11:25:38 -0700
Subject: [PATCH] libnvdimm, dimm: fix dpa reservation vs uninitialized label
area
At initialization time the 'dimm' driver caches a copy of the memory
device's label area and reserves address space for each of the
namespaces defined.
However, as can be seen below, the reservation occurs even when the
index blocks are invalid:
nvdimm nmem0: nvdimm_init_config_data: len: 131072 rc: 0
nvdimm nmem0: config data size: 131072
nvdimm nmem0: __nd_label_validate: nsindex0 labelsize 1 invalid
nvdimm nmem0: __nd_label_validate: nsindex1 labelsize 1 invalid
nvdimm nmem0: : pmem-6025e505: 0x1000000000 @ 0xf50000000 reserve <-- bad
Gate dpa reservation on the presence of valid index blocks.
Cc: <stable(a)vger.kernel.org>
Fixes: 4a826c83db4e ("libnvdimm: namespace indices: read and validate")
Reported-by: Krzysztof Rusocki <krzysztof.rusocki(a)intel.com>
Signed-off-by: Dan Williams <dan.j.williams(a)intel.com>
diff --git a/drivers/nvdimm/dimm.c b/drivers/nvdimm/dimm.c
index f8913b8124b6..233907889f96 100644
--- a/drivers/nvdimm/dimm.c
+++ b/drivers/nvdimm/dimm.c
@@ -67,9 +67,11 @@ static int nvdimm_probe(struct device *dev)
ndd->ns_next = nd_label_next_nsindex(ndd->ns_current);
nd_label_copy(ndd, to_next_namespace_index(ndd),
to_current_namespace_index(ndd));
- rc = nd_label_reserve_dpa(ndd);
- if (ndd->ns_current >= 0)
- nvdimm_set_aliasing(dev);
+ if (ndd->ns_current >= 0) {
+ rc = nd_label_reserve_dpa(ndd);
+ if (rc == 0)
+ nvdimm_set_aliasing(dev);
+ }
nvdimm_clear_locked(dev);
nvdimm_bus_unlock(dev);
The patch below does not apply to the 4.16-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e2fb992d82c626c43ed0566e07c410e56a087af3 Mon Sep 17 00:00:00 2001
From: James Bottomley <James.Bottomley(a)HansenPartnership.com>
Date: Wed, 21 Mar 2018 11:43:48 -0700
Subject: [PATCH] tpm: add retry logic
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
TPM2 can return TPM2_RC_RETRY to any command and when it does we get
unexpected failures inside the kernel that surprise users (this is
mostly observed in the trusted key handling code). The UEFI 2.6 spec
has advice on how to handle this:
The firmware SHALL not return TPM2_RC_RETRY prior to the completion
of the call to ExitBootServices().
Implementer’s Note: the implementation of this function should check
the return value in the TPM response and, if it is TPM2_RC_RETRY,
resend the command. The implementation may abort if a sufficient
number of retries has been done.
So we follow that advice in our tpm_transmit() code using
TPM2_DURATION_SHORT as the initial wait duration and
TPM2_DURATION_LONG as the maximum wait time. This should fix all the
in-kernel use cases and also means that user space TSS implementations
don't have to have their own retry handling.
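Condensed from the tpm_transmit() wrapper in the diff below, the retry
policy is essentially an exponential back-off capped at TPM2_DURATION_LONG
(illustrative excerpt, not a standalone function):

unsigned int delay_msec = TPM2_DURATION_SHORT;

for (;;) {
        ret = tpm_try_transmit(chip, space, buf, bufsiz, flags);
        if (ret < 0)
                break;
        if (be32_to_cpu(header->return_code) != TPM2_RC_RETRY)
                break;                          /* success or a real error */

        delay_msec *= 2;                        /* back off exponentially */
        if (delay_msec > TPM2_DURATION_LONG) {  /* give up eventually */
                dev_err(&chip->dev, "TPM is in retry loop\n");
                break;
        }
        tpm_msleep(delay_msec);
        memcpy(buf, save, save_size);           /* restore clobbered command */
}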
Signed-off-by: James Bottomley <James.Bottomley(a)HansenPartnership.com>
Cc: stable(a)vger.kernel.org
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen(a)linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen(a)linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen(a)linux.intel.com>
diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 22288ff70a0b..d5379a79274c 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -399,21 +399,10 @@ static void tpm_relinquish_locality(struct tpm_chip *chip)
chip->locality = -1;
}
-/**
- * tpm_transmit - Internal kernel interface to transmit TPM commands.
- *
- * @chip: TPM chip to use
- * @space: tpm space
- * @buf: TPM command buffer
- * @bufsiz: length of the TPM command buffer
- * @flags: tpm transmit flags - bitmap
- *
- * Return:
- * 0 when the operation is successful.
- * A negative number for system errors (errno).
- */
-ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
- u8 *buf, size_t bufsiz, unsigned int flags)
+static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ struct tpm_space *space,
+ u8 *buf, size_t bufsiz,
+ unsigned int flags)
{
struct tpm_output_header *header = (void *)buf;
int rc;
@@ -544,6 +533,62 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
return rc ? rc : len;
}
+/**
+ * tpm_transmit - Internal kernel interface to transmit TPM commands.
+ *
+ * @chip: TPM chip to use
+ * @space: tpm space
+ * @buf: TPM command buffer
+ * @bufsiz: length of the TPM command buffer
+ * @flags: tpm transmit flags - bitmap
+ *
+ * A wrapper around tpm_try_transmit that handles TPM2_RC_RETRY
+ * returns from the TPM and retransmits the command after a delay up
+ * to a maximum wait of TPM2_DURATION_LONG.
+ *
+ * Note: TPM1 never returns TPM2_RC_RETRY so the retry logic is TPM2
+ * only
+ *
+ * Return:
+ * the length of the return when the operation is successful.
+ * A negative number for system errors (errno).
+ */
+ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
+ u8 *buf, size_t bufsiz, unsigned int flags)
+{
+ struct tpm_output_header *header = (struct tpm_output_header *)buf;
+ /* space for header and handles */
+ u8 save[TPM_HEADER_SIZE + 3*sizeof(u32)];
+ unsigned int delay_msec = TPM2_DURATION_SHORT;
+ u32 rc = 0;
+ ssize_t ret;
+ const size_t save_size = min(space ? sizeof(save) : TPM_HEADER_SIZE,
+ bufsiz);
+
+ /*
+ * Subtlety here: if we have a space, the handles will be
+ * transformed, so when we restore the header we also have to
+ * restore the handles.
+ */
+ memcpy(save, buf, save_size);
+
+ for (;;) {
+ ret = tpm_try_transmit(chip, space, buf, bufsiz, flags);
+ if (ret < 0)
+ break;
+ rc = be32_to_cpu(header->return_code);
+ if (rc != TPM2_RC_RETRY)
+ break;
+ delay_msec *= 2;
+ if (delay_msec > TPM2_DURATION_LONG) {
+ dev_err(&chip->dev, "TPM is in retry loop\n");
+ break;
+ }
+ tpm_msleep(delay_msec);
+ memcpy(buf, save, save_size);
+ }
+ return ret;
+}
/**
* tpm_transmit_cmd - send a tpm command to the device
* The function extracts tpm out header return code
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index ab3bcdd4d328..67656a97793a 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -115,6 +115,7 @@ enum tpm2_return_codes {
TPM2_RC_COMMAND_CODE = 0x0143,
TPM2_RC_TESTING = 0x090A, /* RC_WARN */
TPM2_RC_REFERENCE_H0 = 0x0910,
+ TPM2_RC_RETRY = 0x0922,
};
enum tpm2_algorithms {
The patch below does not apply to the 4.16-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 2be8ffed093b91536d52b5cd2c99b52f605c9ba6 Mon Sep 17 00:00:00 2001
From: James Bottomley <James.Bottomley(a)HansenPartnership.com>
Date: Thu, 22 Mar 2018 17:32:20 +0200
Subject: [PATCH] tpm: fix intermittent failure with self tests
My Nuvoton 6xx in a Dell XPS-13 has been intermittently failing to work
(necessitating a reboot). The problem seems to be that the TPM gets into a
state where the partial self-test doesn't return TPM_RC_SUCCESS (meaning
all tests have run to completion), but instead returns TPM_RC_TESTING
(meaning some tests are still running in the background). There are
various theories that resending the self-test command actually causes the
tests to restart and thus triggers more TPM_RC_TESTING returns until the
timeout is exceeded.
There are several issues here. Firstly, we shouldn't slow down the
boot sequence waiting for the self tests to complete once the TPM has
backgrounded them: the TPM will make available all functions that have
passed, and if a test fails it will return TPM_RC_FAILURE to every
subsequent command. So the fix is to kick off the self tests once and,
if they return TPM_RC_TESTING, log that as a backgrounded self test and
continue on. In order to prevent other tpm users from seeing any
TPM_RC_TESTING returns (which they might if they send a command that
needs a TPM subsystem which is still under test), we loop in
tpm_transmit_cmd() until either we time out or we no longer get a
TPM_RC_TESTING return.
Finally, there have been observations of strange returns from a partial
test. One Nuvoton is occasionally returning TPM_RC_COMMAND_CODE, so treat
any unexpected return from a partial self test as an indication we need to
run a full self test.
[jarkko.sakkinen(a)linux.intel.com: cleaned up some klog messages and
dropped tpm_transmit_check() helper function from James' original
commit.]
Fixes: 2482b1bba5122 ("tpm: Trigger only missing TPM 2.0 self tests")
Cc: stable(a)vger.kernel.org
Signed-off-by: James Bottomley <James.Bottomley(a)HansenPartnership.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkine(a)linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkine(a)linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkine(a)linux.intel.com>
diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index d5379a79274c..c43a9e28995e 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -564,6 +564,8 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
ssize_t ret;
const size_t save_size = min(space ? sizeof(save) : TPM_HEADER_SIZE,
bufsiz);
+ /* the command code is where the return code will be */
+ u32 cc = be32_to_cpu(header->return_code);
/*
* Subtlety here: if we have a space, the handles will be
@@ -577,11 +579,21 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
if (ret < 0)
break;
rc = be32_to_cpu(header->return_code);
- if (rc != TPM2_RC_RETRY)
+ if (rc != TPM2_RC_RETRY && rc != TPM2_RC_TESTING)
+ break;
+ /*
+ * return immediately if self test returns test
+ * still running to shorten boot time.
+ */
+ if (rc == TPM2_RC_TESTING && cc == TPM2_CC_SELF_TEST)
break;
delay_msec *= 2;
if (delay_msec > TPM2_DURATION_LONG) {
- dev_err(&chip->dev, "TPM is in retry loop\n");
+ if (rc == TPM2_RC_RETRY)
+ dev_err(&chip->dev, "in retry loop\n");
+ else
+ dev_err(&chip->dev,
+ "self test is still running\n");
break;
}
tpm_msleep(delay_msec);
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index 67656a97793a..7f2d0f489e9c 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -111,6 +111,7 @@ enum tpm2_return_codes {
TPM2_RC_HASH = 0x0083, /* RC_FMT1 */
TPM2_RC_HANDLE = 0x008B,
TPM2_RC_INITIALIZE = 0x0100, /* RC_VER1 */
+ TPM2_RC_FAILURE = 0x0101,
TPM2_RC_DISABLED = 0x0120,
TPM2_RC_COMMAND_CODE = 0x0143,
TPM2_RC_TESTING = 0x090A, /* RC_WARN */
diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
index c1ddbbba406e..96c77c8e7f40 100644
--- a/drivers/char/tpm/tpm2-cmd.c
+++ b/drivers/char/tpm/tpm2-cmd.c
@@ -31,10 +31,6 @@ struct tpm2_startup_in {
__be16 startup_type;
} __packed;
-struct tpm2_self_test_in {
- u8 full_test;
-} __packed;
-
struct tpm2_get_tpm_pt_in {
__be32 cap_id;
__be32 property_id;
@@ -60,7 +56,6 @@ struct tpm2_get_random_out {
union tpm2_cmd_params {
struct tpm2_startup_in startup_in;
- struct tpm2_self_test_in selftest_in;
struct tpm2_get_tpm_pt_in get_tpm_pt_in;
struct tpm2_get_tpm_pt_out get_tpm_pt_out;
struct tpm2_get_random_in getrandom_in;
@@ -829,16 +824,6 @@ unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal)
}
EXPORT_SYMBOL_GPL(tpm2_calc_ordinal_duration);
-#define TPM2_SELF_TEST_IN_SIZE \
- (sizeof(struct tpm_input_header) + \
- sizeof(struct tpm2_self_test_in))
-
-static const struct tpm_input_header tpm2_selftest_header = {
- .tag = cpu_to_be16(TPM2_ST_NO_SESSIONS),
- .length = cpu_to_be32(TPM2_SELF_TEST_IN_SIZE),
- .ordinal = cpu_to_be32(TPM2_CC_SELF_TEST)
-};
-
/**
* tpm2_do_selftest() - ensure that all self tests have passed
*
@@ -854,27 +839,24 @@ static const struct tpm_input_header tpm2_selftest_header = {
*/
static int tpm2_do_selftest(struct tpm_chip *chip)
{
+ struct tpm_buf buf;
+ int full;
int rc;
- unsigned int delay_msec = 10;
- long duration;
- struct tpm2_cmd cmd;
- duration = jiffies_to_msecs(
- tpm2_calc_ordinal_duration(chip, TPM2_CC_SELF_TEST));
-
- while (1) {
- cmd.header.in = tpm2_selftest_header;
- cmd.params.selftest_in.full_test = 0;
-
- rc = tpm_transmit_cmd(chip, NULL, &cmd, TPM2_SELF_TEST_IN_SIZE,
- 0, 0, "continue selftest");
+ for (full = 0; full < 2; full++) {
+ rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_SELF_TEST);
+ if (rc)
+ return rc;
- if (rc != TPM2_RC_TESTING || delay_msec >= duration)
- break;
+ tpm_buf_append_u8(&buf, full);
+ rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 0, 0,
+ "attempting the self test");
+ tpm_buf_destroy(&buf);
- /* wait longer than before */
- delay_msec *= 2;
- tpm_msleep(delay_msec);
+ if (rc == TPM2_RC_TESTING)
+ rc = TPM2_RC_SUCCESS;
+ if (rc == TPM2_RC_INITIALIZE || rc == TPM2_RC_SUCCESS)
+ return rc;
}
return rc;
@@ -1060,10 +1042,8 @@ int tpm2_auto_startup(struct tpm_chip *chip)
goto out;
rc = tpm2_do_selftest(chip);
- if (rc != 0 && rc != TPM2_RC_INITIALIZE) {
- dev_err(&chip->dev, "TPM self test failed\n");
+ if (rc && rc != TPM2_RC_INITIALIZE)
goto out;
- }
if (rc == TPM2_RC_INITIALIZE) {
rc = tpm_startup(chip);
@@ -1071,10 +1051,8 @@ int tpm2_auto_startup(struct tpm_chip *chip)
goto out;
rc = tpm2_do_selftest(chip);
- if (rc) {
- dev_err(&chip->dev, "TPM self test failed\n");
+ if (rc)
goto out;
- }
}
rc = tpm2_get_pcr_allocation(chip);
The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e15dc99dbb9cf99f6432e8e3c0b3a8f7a3403a86 Mon Sep 17 00:00:00 2001
From: Takashi Iwai <tiwai(a)suse.de>
Date: Sat, 7 Apr 2018 11:48:58 +0200
Subject: [PATCH] ALSA: pcm: Fix endless loop for XRUN recovery in OSS
emulation
The commit 02a5d6925cd3 ("ALSA: pcm: Avoid potential races between OSS
ioctls and read/write") split the PCM preparation code into a locked
version, and it added a sanity check of the runtime->oss.prepare flag
along with the change. This led to an endless loop when the stream
gets an XRUN: namely, snd_pcm_oss_write3() and co. call
snd_pcm_oss_prepare() without setting the runtime->oss.prepare flag,
and the loop continues until the PCM state changes to another one.
As the function is supposed to execute the preparation
unconditionally, drop the invalid state check there.
The bug was triggered by syzkaller.
Fixes: 02a5d6925cd3 ("ALSA: pcm: Avoid potential races between OSS ioctls and read/write")
Reported-by: syzbot+150189c103427d31a053(a)syzkaller.appspotmail.com
Reported-by: syzbot+7e3f31a52646f939c052(a)syzkaller.appspotmail.com
Reported-by: syzbot+4f2016cf5185da7759dc(a)syzkaller.appspotmail.com
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
index 481ab0e94ffa..1980f68246cb 100644
--- a/sound/core/oss/pcm_oss.c
+++ b/sound/core/oss/pcm_oss.c
@@ -1128,13 +1128,14 @@ static int snd_pcm_oss_get_active_substream(struct snd_pcm_oss_file *pcm_oss_fil
}
/* call with params_lock held */
+/* NOTE: this always call PREPARE unconditionally no matter whether
+ * runtime->oss.prepare is set or not
+ */
static int snd_pcm_oss_prepare(struct snd_pcm_substream *substream)
{
int err;
struct snd_pcm_runtime *runtime = substream->runtime;
- if (!runtime->oss.prepare)
- return 0;
err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_PREPARE, NULL);
if (err < 0) {
pcm_dbg(substream->pcm,
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e15dc99dbb9cf99f6432e8e3c0b3a8f7a3403a86 Mon Sep 17 00:00:00 2001
From: Takashi Iwai <tiwai(a)suse.de>
Date: Sat, 7 Apr 2018 11:48:58 +0200
Subject: [PATCH] ALSA: pcm: Fix endless loop for XRUN recovery in OSS
emulation
The commit 02a5d6925cd3 ("ALSA: pcm: Avoid potential races between OSS
ioctls and read/write") split the PCM preparation code into a locked
version, and it added a sanity check of the runtime->oss.prepare flag
along with the change. This led to an endless loop when the stream
gets an XRUN: namely, snd_pcm_oss_write3() and co. call
snd_pcm_oss_prepare() without setting the runtime->oss.prepare flag,
and the loop continues until the PCM state changes to another one.
As the function is supposed to execute the preparation
unconditionally, drop the invalid state check there.
The bug was triggered by syzkaller.
Fixes: 02a5d6925cd3 ("ALSA: pcm: Avoid potential races between OSS ioctls and read/write")
Reported-by: syzbot+150189c103427d31a053(a)syzkaller.appspotmail.com
Reported-by: syzbot+7e3f31a52646f939c052(a)syzkaller.appspotmail.com
Reported-by: syzbot+4f2016cf5185da7759dc(a)syzkaller.appspotmail.com
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
index 481ab0e94ffa..1980f68246cb 100644
--- a/sound/core/oss/pcm_oss.c
+++ b/sound/core/oss/pcm_oss.c
@@ -1128,13 +1128,14 @@ static int snd_pcm_oss_get_active_substream(struct snd_pcm_oss_file *pcm_oss_fil
}
/* call with params_lock held */
+/* NOTE: this always call PREPARE unconditionally no matter whether
+ * runtime->oss.prepare is set or not
+ */
static int snd_pcm_oss_prepare(struct snd_pcm_substream *substream)
{
int err;
struct snd_pcm_runtime *runtime = substream->runtime;
- if (!runtime->oss.prepare)
- return 0;
err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_PREPARE, NULL);
if (err < 0) {
pcm_dbg(substream->pcm,
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e15dc99dbb9cf99f6432e8e3c0b3a8f7a3403a86 Mon Sep 17 00:00:00 2001
From: Takashi Iwai <tiwai(a)suse.de>
Date: Sat, 7 Apr 2018 11:48:58 +0200
Subject: [PATCH] ALSA: pcm: Fix endless loop for XRUN recovery in OSS
emulation
The commit 02a5d6925cd3 ("ALSA: pcm: Avoid potential races between OSS
ioctls and read/write") split the PCM preparation code into a locked
version, and it added a sanity check of the runtime->oss.prepare flag
along with the change. This led to an endless loop when the stream
gets an XRUN: namely, snd_pcm_oss_write3() and co. call
snd_pcm_oss_prepare() without setting the runtime->oss.prepare flag,
and the loop continues until the PCM state changes to another one.
As the function is supposed to execute the preparation
unconditionally, drop the invalid state check there.
The bug was triggered by syzkaller.
Fixes: 02a5d6925cd3 ("ALSA: pcm: Avoid potential races between OSS ioctls and read/write")
Reported-by: syzbot+150189c103427d31a053(a)syzkaller.appspotmail.com
Reported-by: syzbot+7e3f31a52646f939c052(a)syzkaller.appspotmail.com
Reported-by: syzbot+4f2016cf5185da7759dc(a)syzkaller.appspotmail.com
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
index 481ab0e94ffa..1980f68246cb 100644
--- a/sound/core/oss/pcm_oss.c
+++ b/sound/core/oss/pcm_oss.c
@@ -1128,13 +1128,14 @@ static int snd_pcm_oss_get_active_substream(struct snd_pcm_oss_file *pcm_oss_fil
}
/* call with params_lock held */
+/* NOTE: this always call PREPARE unconditionally no matter whether
+ * runtime->oss.prepare is set or not
+ */
static int snd_pcm_oss_prepare(struct snd_pcm_substream *substream)
{
int err;
struct snd_pcm_runtime *runtime = substream->runtime;
- if (!runtime->oss.prepare)
- return 0;
err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_PREPARE, NULL);
if (err < 0) {
pcm_dbg(substream->pcm,
The patch below does not apply to the 4.14-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e15dc99dbb9cf99f6432e8e3c0b3a8f7a3403a86 Mon Sep 17 00:00:00 2001
From: Takashi Iwai <tiwai(a)suse.de>
Date: Sat, 7 Apr 2018 11:48:58 +0200
Subject: [PATCH] ALSA: pcm: Fix endless loop for XRUN recovery in OSS
emulation
The commit 02a5d6925cd3 ("ALSA: pcm: Avoid potential races between OSS
ioctls and read/write") split the PCM preparation code into a locked
version, and it added a sanity check of the runtime->oss.prepare flag
along with the change. This led to an endless loop when the stream
gets an XRUN: namely, snd_pcm_oss_write3() and co. call
snd_pcm_oss_prepare() without setting the runtime->oss.prepare flag,
and the loop continues until the PCM state changes to another one.
As the function is supposed to execute the preparation
unconditionally, drop the invalid state check there.
The bug was triggered by syzkaller.
Fixes: 02a5d6925cd3 ("ALSA: pcm: Avoid potential races between OSS ioctls and read/write")
Reported-by: syzbot+150189c103427d31a053(a)syzkaller.appspotmail.com
Reported-by: syzbot+7e3f31a52646f939c052(a)syzkaller.appspotmail.com
Reported-by: syzbot+4f2016cf5185da7759dc(a)syzkaller.appspotmail.com
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai(a)suse.de>
diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
index 481ab0e94ffa..1980f68246cb 100644
--- a/sound/core/oss/pcm_oss.c
+++ b/sound/core/oss/pcm_oss.c
@@ -1128,13 +1128,14 @@ static int snd_pcm_oss_get_active_substream(struct snd_pcm_oss_file *pcm_oss_fil
}
/* call with params_lock held */
+/* NOTE: this always call PREPARE unconditionally no matter whether
+ * runtime->oss.prepare is set or not
+ */
static int snd_pcm_oss_prepare(struct snd_pcm_substream *substream)
{
int err;
struct snd_pcm_runtime *runtime = substream->runtime;
- if (!runtime->oss.prepare)
- return 0;
err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_PREPARE, NULL);
if (err < 0) {
pcm_dbg(substream->pcm,