The patch below does not apply to the 4.14-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b1f382178d150f256c1cf95b9341fda6eb764459 Mon Sep 17 00:00:00 2001
From: Ross Zwisler <zwisler(a)kernel.org>
Date: Tue, 11 Sep 2018 13:31:16 -0400
Subject: [PATCH] ext4: close race between direct IO and ext4_break_layouts()
If the refcount of a page is lowered between the time that it is returned
by dax_busy_page() and when the refcount is again checked in
ext4_break_layouts() => ___wait_var_event(), the waiting function
ext4_wait_dax_page() will never be called. This means that
ext4_break_layouts() will still have 'retry' set to false, so we'll stop
looping and never check the refcount of other pages in this inode.
Instead, always continue looping as long as dax_layout_busy_page() gives us
a page which it found with an elevated refcount.
Signed-off-by: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Signed-off-by: Jan Kara <jack(a)suse.cz>
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)vger.kernel.org
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 694f31364206..723058bfe43b 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4195,9 +4195,8 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
return 0;
}
-static void ext4_wait_dax_page(struct ext4_inode_info *ei, bool *did_unlock)
+static void ext4_wait_dax_page(struct ext4_inode_info *ei)
{
- *did_unlock = true;
up_write(&ei->i_mmap_sem);
schedule();
down_write(&ei->i_mmap_sem);
@@ -4207,14 +4206,12 @@ int ext4_break_layouts(struct inode *inode)
{
struct ext4_inode_info *ei = EXT4_I(inode);
struct page *page;
- bool retry;
int error;
if (WARN_ON_ONCE(!rwsem_is_locked(&ei->i_mmap_sem)))
return -EINVAL;
do {
- retry = false;
page = dax_layout_busy_page(inode->i_mapping);
if (!page)
return 0;
@@ -4222,8 +4219,8 @@ int ext4_break_layouts(struct inode *inode)
error = ___wait_var_event(&page->_refcount,
atomic_read(&page->_refcount) == 1,
TASK_INTERRUPTIBLE, 0, 0,
- ext4_wait_dax_page(ei, &retry));
- } while (error == 0 && retry);
+ ext4_wait_dax_page(ei));
+ } while (error == 0);
return error;
}
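For reference, this is how ext4_break_layouts() reads with the patch applied (a sketch reconstructed from the hunks above, not a verbatim copy of the tree):

int ext4_break_layouts(struct inode *inode)
{
	struct ext4_inode_info *ei = EXT4_I(inode);
	struct page *page;
	int error;

	if (WARN_ON_ONCE(!rwsem_is_locked(&ei->i_mmap_sem)))
		return -EINVAL;

	do {
		page = dax_layout_busy_page(inode->i_mapping);
		if (!page)
			return 0;

		/* Keep looping for as long as a busy page is found; there is
		 * no 'retry' flag that could stay false when
		 * ext4_wait_dax_page() never gets to run. */
		error = ___wait_var_event(&page->_refcount,
				atomic_read(&page->_refcount) == 1,
				TASK_INTERRUPTIBLE, 0, 0,
				ext4_wait_dax_page(ei));
	} while (error == 0);

	return error;
}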
We attempt to get fences earlier in the hopes that everything will
already have fences and no callbacks will be needed. If we do succeed
in getting a fence, getting one a second time will result in a duplicate
ref with no unref. This is causing memory leaks in Vulkan applications
that create a lot of fences; playing for a few hours can, apparently,
bring down the system.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107899
Signed-off-by: Jason Ekstrand <jason(a)jlekstrand.net>
Cc: stable(a)vger.kernel.org
---
drivers/gpu/drm/drm_syncobj.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index adb3cb27d31e..759278fef35a 100644
--- a/drivers/gpu/drm/drm_syncobj.c
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -97,6 +97,8 @@ static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
{
int ret;
+ WARN_ON(*fence);
+
*fence = drm_syncobj_fence_get(syncobj);
if (*fence)
return 1;
@@ -743,6 +745,9 @@ static signed long drm_syncobj_array_wait_timeout(struct drm_syncobj **syncobjs,
if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
for (i = 0; i < count; ++i) {
+ if (entries[i].fence)
+ continue;
+
drm_syncobj_fence_get_or_add_callback(syncobjs[i],
&entries[i].fence,
&entries[i].syncobj_cb,
--
2.17.1
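With both hunks applied, the wait-for-submit pass looks roughly like this (a sketch; the final callback argument is truncated in the diff above, and syncobj_wait_syncobj_func is assumed to be the callback used by this version of the file):

	if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
		for (i = 0; i < count; ++i) {
			/* Skip entries that already got a fence in the first
			 * pass, so we never take a second reference that no
			 * dma_fence_put() would ever drop. */
			if (entries[i].fence)
				continue;

			drm_syncobj_fence_get_or_add_callback(syncobjs[i],
							      &entries[i].fence,
							      &entries[i].syncobj_cb,
							      syncobj_wait_syncobj_func);
		}
	}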
From: Zachary Zhang <zhangzg(a)marvell.com>
commit 91a2968e245d6ba616db37001fa1a043078b1a65 upstream.
The PCIe I/O and MEM resource allocation mechanism is that the root bus
goes through the following steps:
1. Check the PCI bridges' ranges and compute the I/O and MEM base/limits.
2. Sort all subordinate devices' I/O and MEM resource requirements,
allocate the resources, and write/update the subordinate devices'
requirements to the PCI bridges' I/O and MEM base/limit registers.
Currently, the PCI Aardvark driver only handles the second step and skips
the first, so I/O and MEM resource allocation fails when a PCI switch is
used. This commit fixes that by sizing bridges before doing the resource
allocation.
Fixes: 8c39d710363c1 ("PCI: aardvark: Add Aardvark PCI host controller driver")
Signed-off-by: Zachary Zhang <zhangzg(a)marvell.com>
[Thomas: edit commit log.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni(a)bootlin.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com>
Cc: <stable(a)vger.kernel.org>
---
drivers/pci/host/pci-aardvark.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/host/pci-aardvark.c b/drivers/pci/host/pci-aardvark.c
index 9bfc22b5da4b..5f3048e75bec 100644
--- a/drivers/pci/host/pci-aardvark.c
+++ b/drivers/pci/host/pci-aardvark.c
@@ -954,6 +954,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
bus = bridge->bus;
+ pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);
list_for_each_entry(child, &bus->children, node)
--
2.14.4
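In the probe path the fix simply makes the root bus size its bridge windows before resources are assigned; a sketch of the relevant ordering in advk_pcie_probe() (everything else elided):

	bus = bridge->bus;

	/* Step 1: compute the bridges' I/O and MEM base/limits from the
	 * subordinate devices' requirements. */
	pci_bus_size_bridges(bus);

	/* Step 2: assign the resources and program them into the bridges. */
	pci_bus_assign_resources(bus);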
From: Zachary Zhang <zhangzg(a)marvell.com>
commit 91a2968e245d6ba616db37001fa1a043078b1a65 upstream.
The PCIe I/O and MEM resource allocation mechanism is that the root bus
goes through the following steps:
1. Check the PCI bridges' ranges and compute the I/O and MEM base/limits.
2. Sort all subordinate devices' I/O and MEM resource requirements,
allocate the resources, and write/update the subordinate devices'
requirements to the PCI bridges' I/O and MEM base/limit registers.
Currently, the PCI Aardvark driver only handles the second step and skips
the first, so I/O and MEM resource allocation fails when a PCI switch is
used. This commit fixes that by sizing bridges before doing the resource
allocation.
Fixes: 8c39d710363c1 ("PCI: aardvark: Add Aardvark PCI host controller driver")
Signed-off-by: Zachary Zhang <zhangzg(a)marvell.com>
[Thomas: edit commit log.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni(a)bootlin.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi(a)arm.com>
Cc: <stable(a)vger.kernel.org>
---
drivers/pci/host/pci-aardvark.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/pci/host/pci-aardvark.c b/drivers/pci/host/pci-aardvark.c
index 11bad826683e..1dbd09c91a7c 100644
--- a/drivers/pci/host/pci-aardvark.c
+++ b/drivers/pci/host/pci-aardvark.c
@@ -976,6 +976,7 @@ static int advk_pcie_probe(struct platform_device *pdev)
return -ENOMEM;
}
+ pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);
list_for_each_entry(child, &bus->children, node)
--
2.14.4
From: Catalin Marinas <catalin.marinas(a)arm.com>
commit db3899a6477a4dccd26cbfb7f408b6be2cc068e0 upstream.
When a kernel is built with CONFIG_TRACE_IRQFLAGS, the following warning
is produced when entering userspace for the first time:
WARNING: at /work/Linux/linux-2.6-aarch64/kernel/locking/lockdep.c:3519
Modules linked in:
CPU: 1 PID: 1 Comm: systemd Not tainted 4.4.0-rc3+ #639
Hardware name: Juno (DT)
task: ffffffc9768a0000 ti: ffffffc9768a8000 task.ti: ffffffc9768a8000
PC is at check_flags.part.22+0x19c/0x1a8
LR is at check_flags.part.22+0x19c/0x1a8
pc : [<ffffffc0000fba6c>] lr : [<ffffffc0000fba6c>] pstate: 600001c5
sp : ffffffc9768abe10
x29: ffffffc9768abe10 x28: ffffffc9768a8000
x27: 0000000000000000 x26: 0000000000000001
x25: 00000000000000a6 x24: ffffffc00064be6c
x23: ffffffc0009f249e x22: ffffffc9768a0000
x21: ffffffc97fea5480 x20: 00000000000001c0
x19: ffffffc00169a000 x18: 0000005558cc7b58
x17: 0000007fb78e3180 x16: 0000005558d2e238
x15: ffffffffffffffff x14: 0ffffffffffffffd
x13: 0000000000000008 x12: 0101010101010101
x11: 7f7f7f7f7f7f7f7f x10: fefefefefefeff63
x9 : 7f7f7f7f7f7f7f7f x8 : 6e655f7371726964
x7 : 0000000000000001 x6 : ffffffc0001079c4
x5 : 0000000000000000 x4 : 0000000000000001
x3 : ffffffc001698438 x2 : 0000000000000000
x1 : ffffffc9768a0000 x0 : 000000000000002e
Call trace:
[<ffffffc0000fba6c>] check_flags.part.22+0x19c/0x1a8
[<ffffffc0000fc440>] lock_is_held+0x80/0x98
[<ffffffc00064bafc>] __schedule+0x404/0x730
[<ffffffc00064be6c>] schedule+0x44/0xb8
[<ffffffc000085bb0>] ret_to_user+0x0/0x24
possible reason: unannotated irqs-off.
irq event stamp: 502169
hardirqs last enabled at (502169): [<ffffffc000085a98>] el0_irq_naked+0x1c/0x24
hardirqs last disabled at (502167): [<ffffffc0000bb3bc>] __do_softirq+0x17c/0x298
softirqs last enabled at (502168): [<ffffffc0000bb43c>] __do_softirq+0x1fc/0x298
softirqs last disabled at (502143): [<ffffffc0000bb830>] irq_exit+0xa0/0xf0
This happens because we disable interrupts in ret_to_user before calling
schedule() in work_resched. This patch adds the necessary
trace_hardirqs_off annotation.
Signed-off-by: Catalin Marinas <catalin.marinas(a)arm.com>
Reported-by: Mark Rutland <mark.rutland(a)arm.com>
Cc: Will Deacon <will.deacon(a)arm.com>
Signed-off-by: Will Deacon <will.deacon(a)arm.com>
---
Without this patch and CONFIG_TRACE_IRQFLAGS enabled, the mentioned warning
is seen in v3.18.y and v4.4.y. The patch applies cleanly to both releases.
I confirmed that the problem is gone after applying it.
arch/arm64/kernel/entry.S | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 7ed3d75f6304..e5b25389c48f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -634,6 +634,9 @@ work_pending:
bl do_notify_resume
b ret_to_user
work_resched:
+#ifdef CONFIG_TRACE_IRQFLAGS
+ bl trace_hardirqs_off // the IRQs are off here, inform the tracing code
+#endif
bl schedule
/*
--
2.7.4
Please pick up the following commit which has merged to Linus' tree:
commit d0cdb3ce8834332d918fc9c8ff74f8a169ec9abe
Author: Steve Muckle <smuckle(a)google.com>
Date: Fri Aug 31 15:42:17 2018 -0700
sched/fair: Fix vruntime_normalized() for remote non-migration wakeup
This commit is a 3-line diff. It fixes an issue where, in certain
use cases, a task's vruntime may be greatly inflated, causing it not to
execute with the priority that it should.
The commit it fixes went into 4.7 so the fix should go into 4.9, 4.14,
and 4.18. I have verified the commit applies cleanly to 4.9.128,
4.14.71, and 4.18.9.
thanks,
Steve
The patch below does not apply to the 4.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b1f382178d150f256c1cf95b9341fda6eb764459 Mon Sep 17 00:00:00 2001
From: Ross Zwisler <zwisler(a)kernel.org>
Date: Tue, 11 Sep 2018 13:31:16 -0400
Subject: [PATCH] ext4: close race between direct IO and ext4_break_layouts()
If the refcount of a page is lowered between the time that it is returned
by dax_busy_page() and when the refcount is again checked in
ext4_break_layouts() => ___wait_var_event(), the waiting function
ext4_wait_dax_page() will never be called. This means that
ext4_break_layouts() will still have 'retry' set to false, so we'll stop
looping and never check the refcount of other pages in this inode.
Instead, always continue looping as long as dax_layout_busy_page() gives us
a page which it found with an elevated refcount.
Signed-off-by: Ross Zwisler <ross.zwisler(a)linux.intel.com>
Reviewed-by: Jan Kara <jack(a)suse.cz>
Signed-off-by: Jan Kara <jack(a)suse.cz>
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)vger.kernel.org
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 694f31364206..723058bfe43b 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4195,9 +4195,8 @@ int ext4_update_disksize_before_punch(struct inode *inode, loff_t offset,
return 0;
}
-static void ext4_wait_dax_page(struct ext4_inode_info *ei, bool *did_unlock)
+static void ext4_wait_dax_page(struct ext4_inode_info *ei)
{
- *did_unlock = true;
up_write(&ei->i_mmap_sem);
schedule();
down_write(&ei->i_mmap_sem);
@@ -4207,14 +4206,12 @@ int ext4_break_layouts(struct inode *inode)
{
struct ext4_inode_info *ei = EXT4_I(inode);
struct page *page;
- bool retry;
int error;
if (WARN_ON_ONCE(!rwsem_is_locked(&ei->i_mmap_sem)))
return -EINVAL;
do {
- retry = false;
page = dax_layout_busy_page(inode->i_mapping);
if (!page)
return 0;
@@ -4222,8 +4219,8 @@ int ext4_break_layouts(struct inode *inode)
error = ___wait_var_event(&page->_refcount,
atomic_read(&page->_refcount) == 1,
TASK_INTERRUPTIBLE, 0, 0,
- ext4_wait_dax_page(ei, &retry));
- } while (error == 0 && retry);
+ ext4_wait_dax_page(ei));
+ } while (error == 0);
return error;
}
The patch below does not apply to the 4.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From e71cf591876536f7dd5a54ef68d631278ca6faa1 Mon Sep 17 00:00:00 2001
From: Thomas Hellstrom <thellstrom(a)vmware.com>
Date: Fri, 14 Sep 2018 09:24:19 +0200
Subject: [PATCH] drm/vmwgfx: Fix buffer object eviction
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Commit 19be55701071 ("drm/ttm: add operation ctx to ttm_bo_validate v2")
introduced a regression where the vmwgfx driver refused to evict a
buffer that was still busy instead of waiting for it to become idle.
Fix this.
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Thomas Hellstrom <thellstrom(a)vmware.com>
Reviewed-by: Christian König <christian.koenig(a)amd.com>
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
index 1f134570b759..f0ab6b2313bb 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
@@ -3729,7 +3729,7 @@ int vmw_validate_single_buffer(struct vmw_private *dev_priv,
{
struct vmw_buffer_object *vbo =
container_of(bo, struct vmw_buffer_object, base);
- struct ttm_operation_ctx ctx = { interruptible, true };
+ struct ttm_operation_ctx ctx = { interruptible, false };
int ret;
if (vbo->pin_count > 0)
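The second positional initializer here is no_wait_gpu (assuming the 4.18-era layout of struct ttm_operation_ctx); written with designated initializers, the fixed line is equivalent to this sketch:

	struct ttm_operation_ctx ctx = {
		.interruptible = interruptible,
		/* Wait for busy buffers instead of refusing to evict them. */
		.no_wait_gpu = false,
	};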
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 8c39e2699f8acb2e29782a834e56306da24937fe Mon Sep 17 00:00:00 2001
From: Vincent Pelletier <plr.vincent(a)gmail.com>
Date: Sun, 9 Sep 2018 04:09:27 +0000
Subject: [PATCH] scsi: target: iscsi: Use bin2hex instead of a
re-implementation
Signed-off-by: Vincent Pelletier <plr.vincent(a)gmail.com>
Reviewed-by: Mike Christie <mchristi(a)redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
index 6c3b4c022894..4e680d753941 100644
--- a/drivers/target/iscsi/iscsi_target_auth.c
+++ b/drivers/target/iscsi/iscsi_target_auth.c
@@ -26,15 +26,6 @@
#include "iscsi_target_nego.h"
#include "iscsi_target_auth.h"
-static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)
-{
- int i;
-
- for (i = 0; i < src_len; i++) {
- sprintf(&dst[i*2], "%02x", (int) src[i] & 0xff);
- }
-}
-
static int chap_gen_challenge(
struct iscsi_conn *conn,
int caller,
@@ -50,7 +41,7 @@ static int chap_gen_challenge(
ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
if (unlikely(ret))
return ret;
- chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
+ bin2hex(challenge_asciihex, chap->challenge,
CHAP_CHALLENGE_LENGTH);
/*
* Set CHAP_C, and copy the generated challenge into c_str.
@@ -289,7 +280,7 @@ static int chap_server_compute_md5(
goto out;
}
- chap_binaryhex_to_asciihex(response, server_digest, MD5_SIGNATURE_SIZE);
+ bin2hex(response, server_digest, MD5_SIGNATURE_SIZE);
pr_debug("[server] MD5 Server Digest: %s\n", response);
if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) {
@@ -411,7 +402,7 @@ static int chap_server_compute_md5(
/*
* Convert response from binary hex to ascii hext.
*/
- chap_binaryhex_to_asciihex(response, digest, MD5_SIGNATURE_SIZE);
+ bin2hex(response, digest, MD5_SIGNATURE_SIZE);
*nr_out_len += sprintf(nr_out_ptr + *nr_out_len, "CHAP_R=0x%s",
response);
*nr_out_len += 1;
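bin2hex() (lib/hexdump.c) emits two lowercase hex digits per input byte and does not NUL-terminate, which matches what the removed helper did via sprintf() apart from the trailing '\0'. A usage sketch (digest_hex is an illustrative local, not a name from the driver):

	char digest_hex[MD5_SIGNATURE_SIZE * 2 + 1];

	/* Convert the raw MD5 digest to an ASCII hex string. */
	bin2hex(digest_hex, digest, MD5_SIGNATURE_SIZE);
	digest_hex[MD5_SIGNATURE_SIZE * 2] = '\0';
	pr_debug("[server] MD5 Server Digest: %s\n", digest_hex);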
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 8c39e2699f8acb2e29782a834e56306da24937fe Mon Sep 17 00:00:00 2001
From: Vincent Pelletier <plr.vincent(a)gmail.com>
Date: Sun, 9 Sep 2018 04:09:27 +0000
Subject: [PATCH] scsi: target: iscsi: Use bin2hex instead of a
re-implementation
Signed-off-by: Vincent Pelletier <plr.vincent(a)gmail.com>
Reviewed-by: Mike Christie <mchristi(a)redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen(a)oracle.com>
diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
index 6c3b4c022894..4e680d753941 100644
--- a/drivers/target/iscsi/iscsi_target_auth.c
+++ b/drivers/target/iscsi/iscsi_target_auth.c
@@ -26,15 +26,6 @@
#include "iscsi_target_nego.h"
#include "iscsi_target_auth.h"
-static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)
-{
- int i;
-
- for (i = 0; i < src_len; i++) {
- sprintf(&dst[i*2], "%02x", (int) src[i] & 0xff);
- }
-}
-
static int chap_gen_challenge(
struct iscsi_conn *conn,
int caller,
@@ -50,7 +41,7 @@ static int chap_gen_challenge(
ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);
if (unlikely(ret))
return ret;
- chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,
+ bin2hex(challenge_asciihex, chap->challenge,
CHAP_CHALLENGE_LENGTH);
/*
* Set CHAP_C, and copy the generated challenge into c_str.
@@ -289,7 +280,7 @@ static int chap_server_compute_md5(
goto out;
}
- chap_binaryhex_to_asciihex(response, server_digest, MD5_SIGNATURE_SIZE);
+ bin2hex(response, server_digest, MD5_SIGNATURE_SIZE);
pr_debug("[server] MD5 Server Digest: %s\n", response);
if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) {
@@ -411,7 +402,7 @@ static int chap_server_compute_md5(
/*
* Convert response from binary hex to ascii hext.
*/
- chap_binaryhex_to_asciihex(response, digest, MD5_SIGNATURE_SIZE);
+ bin2hex(response, digest, MD5_SIGNATURE_SIZE);
*nr_out_len += sprintf(nr_out_ptr + *nr_out_len, "CHAP_R=0x%s",
response);
*nr_out_len += 1;
Quoting Jason Ekstrand (2018-09-26 10:43:59)
> On Wed, Sep 26, 2018 at 3:18 AM Chris Wilson <chris(a)chris-wilson.co.uk> wrote:
>
> Quoting Jason Ekstrand (2018-09-26 08:17:03)
> > We attempt to get fences earlier in the hopes that everything will
> > already have fences and no callbacks will be needed. If we do succeed
> > in getting a fence, getting one a second time will result in a duplicate
> > ref with no unref. This is causing memory leaks in Vulkan applications
> > that create a lot of fences; playing for a few hours can, apparently,
> > bring down the system.
> >
> > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107899
> > Signed-off-by: Jason Ekstrand <jason(a)jlekstrand.net>
> > Cc: stable(a)vger.kernel.org
> > ---
> > drivers/gpu/drm/drm_syncobj.c | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
> > index adb3cb27d31e..759278fef35a 100644
> > --- a/drivers/gpu/drm/drm_syncobj.c
> > +++ b/drivers/gpu/drm/drm_syncobj.c
> > @@ -97,6 +97,8 @@ static int drm_syncobj_fence_get_or_add_callback(struct drm_syncobj *syncobj,
> > {
> > int ret;
> >
> > + WARN_ON(*fence);
>
> I would have just put if (*fence) return; since the function is tied to
> the array_wait implementation.
>
>
> I considered doing that but marginally liked this better. If you have a
> preference, I'm happy to change it.
Patch works, the difference is insignificant, land the patch and close
the bug.
-Chris
The patch below does not apply to the 4.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 5223c9c1cbfc0cd4d0a1b50758e0949af3290fa1 Mon Sep 17 00:00:00 2001
From: Angelo Dureghello <angelo(a)sysam.it>
Date: Sat, 18 Aug 2018 01:51:58 +0200
Subject: [PATCH] spi: spi-fsl-dspi: fix broken DSPI_EOQ_MODE
This patch fixes the dspi_eoq_write() function used by the ColdFire
mcf5441x family. The 16-bit cmd part must be re-set for each data
transfer.
Also, now that fifo_size variables are used for eoq_read/write, a
proper FIFO size must be set (16 slots for the ColdFire DSPI module
version).
Signed-off-by: Angelo Dureghello <angelo(a)sysam.it>
Acked-by: Esben Haabendal <esben(a)haabendal.dk>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Cc: stable(a)vger.kernel.org
diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
index 7cb3ab0a35a0..3082e72e4f6c 100644
--- a/drivers/spi/spi-fsl-dspi.c
+++ b/drivers/spi/spi-fsl-dspi.c
@@ -30,7 +30,11 @@
#define DRIVER_NAME "fsl-dspi"
+#ifdef CONFIG_M5441x
+#define DSPI_FIFO_SIZE 16
+#else
#define DSPI_FIFO_SIZE 4
+#endif
#define DSPI_DMA_BUFSIZE (DSPI_FIFO_SIZE * 1024)
#define SPI_MCR 0x00
@@ -623,9 +627,11 @@ static void dspi_tcfq_read(struct fsl_dspi *dspi)
static void dspi_eoq_write(struct fsl_dspi *dspi)
{
int fifo_size = DSPI_FIFO_SIZE;
+ u16 xfer_cmd = dspi->tx_cmd;
/* Fill TX FIFO with as many transfers as possible */
while (dspi->len && fifo_size--) {
+ dspi->tx_cmd = xfer_cmd;
/* Request EOQF for last transfer in FIFO */
if (dspi->len == dspi->bytes_per_word || fifo_size == 0)
dspi->tx_cmd |= SPI_PUSHR_CMD_EOQ;
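With the patch, dspi_eoq_write() restores the saved 16-bit command word before building every FIFO entry; the shown portion of the function then reads as follows (a sketch; the rest of the loop body is elided in the hunk above and here as well):

static void dspi_eoq_write(struct fsl_dspi *dspi)
{
	int fifo_size = DSPI_FIFO_SIZE;
	u16 xfer_cmd = dspi->tx_cmd;

	/* Fill TX FIFO with as many transfers as possible */
	while (dspi->len && fifo_size--) {
		/* Restore the 16-bit cmd for each word; the EOQ flag below
		 * may have modified it on the previous iteration. */
		dspi->tx_cmd = xfer_cmd;
		/* Request EOQF for last transfer in FIFO */
		if (dspi->len == dspi->bytes_per_word || fifo_size == 0)
			dspi->tx_cmd |= SPI_PUSHR_CMD_EOQ;
		/* ... push the transfer to the FIFO ... */
	}
}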
The patch below does not apply to the 4.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 85516a9881a31e2c7a8d10f4697f3adcccc7cef1 Mon Sep 17 00:00:00 2001
From: Miquel Raynal <miquel.raynal(a)bootlin.com>
Date: Fri, 7 Sep 2018 16:35:54 +0200
Subject: [PATCH] mtd: partitions: fix unbalanced of_node_get/put()
While mtd_part_of_parse() would at first just call
of_get_child_by_name(), it has been patched to deal with sub-partitions
and will now directly manipulate the node returned by mtd_get_of_node()
if the MTD device is a partition.
An of_node_put() a bit below in the code balanced the
of_get_child_by_name(). However, despite its name, mtd_get_of_node()
does not take a reference on the OF node. It is a simple helper hiding
some pointer logic to retrieve the OF node related to an MTD
device.
The direct effect of such unbalanced reference counting is visible by
rmmod'ing any module that would have added MTD partitions:
OF: ERROR: Bad of_node_put() on <of_path_to_partition>
As it seems normal to get a reference on the OF node during the
of_property_for_each_string() that follows, add a call to
of_node_get() when relevant.
Fixes: 76a832254ab0 ("mtd: partitions: use DT info for parsing partitions with "compatible" prop")
Cc: stable(a)vger.kernel.org
Signed-off-by: Miquel Raynal <miquel.raynal(a)bootlin.com>
Signed-off-by: Boris Brezillon <boris.brezillon(a)bootlin.com>
diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
index 52e2cb35fc79..99c460facd5e 100644
--- a/drivers/mtd/mtdpart.c
+++ b/drivers/mtd/mtdpart.c
@@ -873,8 +873,11 @@ static int mtd_part_of_parse(struct mtd_info *master,
int ret, err = 0;
np = mtd_get_of_node(master);
- if (!mtd_is_partition(master))
+ if (mtd_is_partition(master))
+ of_node_get(np);
+ else
np = of_get_child_by_name(np, "partitions");
+
of_property_for_each_string(np, "compatible", prop, compat) {
parser = mtd_part_get_compatible_parser(compat);
if (!parser)
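With the fix, both branches leave np holding one reference, which the of_node_put() already present further down in mtd_part_of_parse() releases; a sketch of the balanced flow:

	np = mtd_get_of_node(master);	/* plain pointer helper, no ref taken */
	if (mtd_is_partition(master))
		of_node_get(np);	/* take our own reference */
	else
		np = of_get_child_by_name(np, "partitions"); /* returns a reference */

	/* ... of_property_for_each_string(np, "compatible", ...) ... */

	of_node_put(np);		/* drops the single reference in both cases */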
drm_mode_setcrtc() retries modesetting in case one of the functions it
calls returns -EDEADLK. connector_set, mode and fb are freed before
retrying, but they are not set to NULL. This can cause
drm_mode_setcrtc() to reuse those dangling pointers.
For example: On the first try __drm_mode_set_config_internal() returns
-EDEADLK. connector_set, mode and fb are freed. Next retry starts, and
drm_modeset_lock_all_ctx() returns -EDEADLK, and we jump to 'out'. The
code will happily try to release all three again.
This leads to crashes of different kinds, depending on the sequence in
which the EDEADLKs happen.
Fix this by setting the three variables to NULL at the start of the
retry loop.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen(a)ti.com>
Cc: stable(a)vger.kernel.org
---
drivers/gpu/drm/drm_crtc.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index 2f6c877299e4..2ad14593fb23 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -570,9 +570,9 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
struct drm_mode_crtc *crtc_req = data;
struct drm_crtc *crtc;
struct drm_plane *plane;
- struct drm_connector **connector_set = NULL, *connector;
- struct drm_framebuffer *fb = NULL;
- struct drm_display_mode *mode = NULL;
+ struct drm_connector **connector_set, *connector;
+ struct drm_framebuffer *fb;
+ struct drm_display_mode *mode;
struct drm_mode_set set;
uint32_t __user *set_connectors_ptr;
struct drm_modeset_acquire_ctx ctx;
@@ -601,6 +601,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
mutex_lock(&crtc->dev->mode_config.mutex);
drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE);
retry:
+ connector_set = NULL;
+ fb = NULL;
+ mode = NULL;
+
ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx);
if (ret)
goto out;
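The resulting retry flow looks roughly like this (a simplified sketch; the real ioctl has more cleanup in its out path, and the backoff call assumes the usual drm_modeset_backoff() idiom): the pointers are re-initialised on every pass, so a second -EDEADLK can no longer free them twice.

retry:
	connector_set = NULL;
	fb = NULL;
	mode = NULL;

	ret = drm_modeset_lock_all_ctx(crtc->dev, &ctx);
	if (ret)
		goto out;

	/* ... look up fb, mode and connector_set, then apply the mode set ... */

out:
	kfree(connector_set);		/* kfree(NULL) is a no-op */
	drm_mode_destroy(dev, mode);	/* NULL-safe */
	if (fb)
		drm_framebuffer_put(fb);

	if (ret == -EDEADLK) {
		drm_modeset_backoff(&ctx);
		goto retry;
	}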