This is a note to let you know that I've just added the patch titled
bridge: check brport attr show in brport_show
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
bridge-check-brport-attr-show-in-brport_show.patch
and it can be found in the queue-4.14 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From foo@baz Tue Mar 6 19:02:12 PST 2018
From: Xin Long <lucien.xin(a)gmail.com>
Date: Mon, 12 Feb 2018 17:15:40 +0800
Subject: bridge: check brport attr show in brport_show
From: Xin Long <lucien.xin(a)gmail.com>
[ Upstream commit 1b12580af1d0677c3c3a19e35bfe5d59b03f737f ]
The br_sysfs_if file 'flush' doesn't have an attr show handler. Reading
it will cause a kernel panic after users chmod u+r this file.
Xiong found this issue when running the commands:
ip link add br0 type bridge
ip link add type veth
ip link set veth0 master br0
chmod u+r /sys/devices/virtual/net/veth0/brport/flush
timeout 3 cat /sys/devices/virtual/net/veth0/brport/flush
The kernel crashed with a NULL pointer dereference call trace.
This patch fixes it by returning -EINVAL when brport_attr->show
is null, the same as the check for brport_attr->store in
brport_store().
Fixes: 9cf637473c85 ("bridge: add sysfs hook to flush forwarding table")
Reported-by: Xiong Zhou <xzhou(a)redhat.com>
Signed-off-by: Xin Long <lucien.xin(a)gmail.com>
Signed-off-by: David S. Miller <davem(a)davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
net/bridge/br_sysfs_if.c | 3 +++
1 file changed, 3 insertions(+)
--- a/net/bridge/br_sysfs_if.c
+++ b/net/bridge/br_sysfs_if.c
@@ -235,6 +235,9 @@ static ssize_t brport_show(struct kobjec
struct brport_attribute *brport_attr = to_brport_attr(attr);
struct net_bridge_port *p = to_brport(kobj);
+ if (!brport_attr->show)
+ return -EINVAL;
+
return brport_attr->show(p, buf);
}
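For context, here is a minimal sketch (assuming the attribute layout in
net/bridge/br_sysfs_if.c of this era, with a hypothetical variable name) of
how a write-only file like 'flush' ends up with a NULL ->show handler, which
is exactly what the added check guards against:

struct brport_attribute {
	struct attribute attr;
	ssize_t (*show)(struct net_bridge_port *, char *);
	int (*store)(struct net_bridge_port *, unsigned long);
};

/* Hypothetical write-only attribute: mode 0200 and .show left NULL.
 * Once a user does "chmod u+r" on such a file, the old brport_show()
 * would call the NULL ->show pointer and oops.
 * (The real "flush" attribute points .store at store_flush().)
 */
static struct brport_attribute brport_attr_flush_sketch = {
	.attr  = { .name = "flush", .mode = 0200 },
	.show  = NULL,
	.store = NULL,
};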
Patches currently in stable-queue which might be from lucien.xin(a)gmail.com are
queue-4.14/bridge-check-brport-attr-show-in-brport_show.patch
queue-4.14/sctp-do-not-pr_err-for-the-duplicated-node-in-transport-rhlist.patch
On Mon, Mar 05, 2018 at 12:46:38PM +0000, Mark Brown wrote:
> On Fri, Mar 02, 2018 at 05:54:15PM +0100, Greg KH wrote:
> > On Fri, Mar 02, 2018 at 05:14:50PM +0800, Alex Shi wrote:
> > > On 03/01/2018 11:24 PM, Greg KH wrote:
>
> > > > And why are you making this patchset up? What is wrong with the patches
> > > > in the android-common tree for this?
>
> > > We believe the LTS is the base kernel for android/lsk, so the fixes
> > > should go there first and then be merged into the other trees.
>
> > But you know that android-common is already fine here; the needed
> > patches are all integrated there, so no additional work is needed
> > for android devices. So what devices do you expect to use this 4.9
> > backport?
>
> See below...
>
> > What is "lsk"?
>
> The Linaro Stable Kernel, it's LTS plus some feature backports.
>
> > But really, I don't see this need, as all ARM devices that I know of that
> > are stuck on 4.9.y are already using the android-common tree. Same for
> > 4.4.y. Do you know of any that are not, and that cannot just use
> > 4.14.y instead?
>
> There's way more to ARM than just Android systems; assuming that getting
> things into the Android kernel is enough is like assuming that x86 is
> covered because the distros have their own backports - it covers a lot of
> users but not everyone. Off the top of my head there are things like
> routers, NASs, cameras, IoT, radio systems, industrial appliances, set
> top boxes and these days even servers. Most of these segments are just
> as conservative about taking new kernel versions on shipping products as
> the phone vendors are; the practices that make people reluctant to take
> bigger updates in production are general engineering practices common
> across the industry.
I know there is a lot more to ARM than Android, but the huge majority by
quantity is Android.
What I'm saying here is: look at all of the backports that were required
to get this working in the android tree. It was non-trivial by a long
shot, and based on that work, this series feels really "small", and I'm
really worried that it's not actually working or solving the problem here.
There are major features that were backported to the android trees for
ARM, features that the upstream Spectre and Meltdown work built on top of
to get to their solution. Not backporting all of that is a huge risk,
right?
So that's why I keep pointing people at the android trees. Look at what
they did there. There's nothing stopping anyone who is really insistent
on staying on these old kernel versions from pulling from those branches
to get these bugfixes in a known stable and tested implementation.
That's why I point people there[1]. Doing all of the backporting and
adding the new features feels _way_ beyond what I should be taking into the
stable kernels. We didn't do it for x86, so why should we do it for ARM?
Yes, we did a horrid hack for the x86 backports (with the known issues
that it has, which people seem to keep ignoring, which is crazy), and I
would suggest NOT doing that same type of hack for ARM; instead, go grab a
tree that we all know works correctly if you are stuck with these old
kernels!
Or just move to 4.14.y. Seriously, that's probably the safest thing in
the long run for anyone here. And when you realize you can't do that,
go yell at your SoC vendor for forcing you into the nightmare that they
conned you into with the 3+ million lines added to their kernel tree. You
were always living on borrowed time, and it looks like that time is finally
up...
thanks,
greg k-h
[1] It's also why I keep doing the LTS merges into the android-common
trees within days of the upstream LTS release (today being an
exception). That way once you do a pull/merge, you can just keep
always merging to keep a secure device that is always up to date
with the latest LTS releases in a simple way. How much easier can I
make it for the ARM ecosystem here, really?
Oh, I know, get the SoC vendors to merge from the android-common
trees into their trees. Look, that's already happening today for at
least 3 major SoCs! So just go pull the update from your SoC today,
for your chip, and it automatically has all of these fixes in it
already! If you know a SoC that is not pulling these updates
regularly, let me know and I'll work with them to resolve that[2].
[2] I have offered to do that merge myself, from the android-common
trees into any "internal" tree, so that future merges happen cleanly
and automatically, for any company that asks for it. So far only
one company has taken me up on it, and it only took me a week to get
it all up and working properly. It took a ton of "fun" quilt and
git work, but in the end, it all worked, and has worked cleanly
since then, showing it can be done.
Hi Sasha,
Due to improved testing, I see crashes when running sabrelite images in
qemu with v4.1.y and older kernels. Please apply 32cba57ba74be ("net: fec:
introduce fec_ptp_stop and use in probe fail path") to v4.1.y to fix
the problem in this release.
I'll send a backport to v3.18 separately.
Thanks,
Guenter
This is a note to let you know that I've just added the patch titled
x86/xen: Zero MSR_IA32_SPEC_CTRL before suspend
to the 4.4-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-xen-zero-msr_ia32_spec_ctrl-before-suspend.patch
and it can be found in the queue-4.4 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 71c208dd54ab971036d83ff6d9837bae4976e623 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross(a)suse.com>
Date: Mon, 26 Feb 2018 15:08:18 +0100
Subject: x86/xen: Zero MSR_IA32_SPEC_CTRL before suspend
From: Juergen Gross <jgross(a)suse.com>
commit 71c208dd54ab971036d83ff6d9837bae4976e623 upstream.
Older Xen versions (4.5 and before) might have problems migrating pv
guests with MSR_IA32_SPEC_CTRL having a non-zero value. So before
suspending, zero that MSR and restore it after being resumed.
Signed-off-by: Juergen Gross <jgross(a)suse.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Reviewed-by: Jan Beulich <jbeulich(a)suse.com>
Cc: stable(a)vger.kernel.org
Cc: xen-devel(a)lists.xenproject.org
Cc: boris.ostrovsky(a)oracle.com
Link: https://lkml.kernel.org/r/20180226140818.4849-1-jgross@suse.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/xen/suspend.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -1,11 +1,14 @@
#include <linux/types.h>
#include <linux/tick.h>
+#include <linux/percpu-defs.h>
#include <xen/xen.h>
#include <xen/interface/xen.h>
#include <xen/grant_table.h>
#include <xen/events.h>
+#include <asm/cpufeatures.h>
+#include <asm/msr-index.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>
#include <asm/fixmap.h>
@@ -68,6 +71,8 @@ static void xen_pv_post_suspend(int susp
xen_mm_unpin_all();
}
+static DEFINE_PER_CPU(u64, spec_ctrl);
+
void xen_arch_pre_suspend(void)
{
if (xen_pv_domain())
@@ -84,6 +89,9 @@ void xen_arch_post_suspend(int cancelled
static void xen_vcpu_notify_restore(void *data)
{
+ if (xen_pv_domain() && boot_cpu_has(X86_FEATURE_SPEC_CTRL))
+ wrmsrl(MSR_IA32_SPEC_CTRL, this_cpu_read(spec_ctrl));
+
/* Boot processor notified via generic timekeeping_resume() */
if (smp_processor_id() == 0)
return;
@@ -93,7 +101,15 @@ static void xen_vcpu_notify_restore(void
static void xen_vcpu_notify_suspend(void *data)
{
+ u64 tmp;
+
tick_suspend_local();
+
+ if (xen_pv_domain() && boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
+ rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
+ this_cpu_write(spec_ctrl, tmp);
+ wrmsrl(MSR_IA32_SPEC_CTRL, 0);
+ }
}
void xen_arch_resume(void)
Patches currently in stable-queue which might be from jgross(a)suse.com are
queue-4.4/x86-xen-zero-msr_ia32_spec_ctrl-before-suspend.patch
Prior to 25520d55cdb6 ("block: Inline blk_integrity in struct gendisk")
we needed to temporarily add a zero-capacity disk before registering for
blk-integrity. But adding a zero-capacity disk caused the partition
table scanning to bail early, and this resulted in partitions not coming
up after a probe of the BTT or blk namespaces.
We can now register for integrity before the disk has been added, and
this fixes the rescan problems.
Fixes: 25520d55cdb6 ("block: Inline blk_integrity in struct gendisk")
Reported-by: Dariusz Dokupil <dariusz.dokupil(a)intel.com>
Cc: Dan Williams <dan.j.williams(a)intel.com>
Cc: stable(a)vger.kernel.org
Signed-off-by: Vishal Verma <vishal.l.verma(a)intel.com>
---
drivers/nvdimm/blk.c | 3 +--
drivers/nvdimm/btt.c | 3 +--
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/nvdimm/blk.c b/drivers/nvdimm/blk.c
index 345acca576b3..1bd7b3734751 100644
--- a/drivers/nvdimm/blk.c
+++ b/drivers/nvdimm/blk.c
@@ -278,8 +278,6 @@ static int nsblk_attach_disk(struct nd_namespace_blk *nsblk)
disk->queue = q;
disk->flags = GENHD_FL_EXT_DEVT;
nvdimm_namespace_disk_name(&nsblk->common, disk->disk_name);
- set_capacity(disk, 0);
- device_add_disk(dev, disk);
if (devm_add_action_or_reset(dev, nd_blk_release_disk, disk))
return -ENOMEM;
@@ -292,6 +290,7 @@ static int nsblk_attach_disk(struct nd_namespace_blk *nsblk)
}
set_capacity(disk, available_disk_size >> SECTOR_SHIFT);
+ device_add_disk(dev, disk);
revalidate_disk(disk);
return 0;
}
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 2ef544f10ec8..4b95ac513de2 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1545,8 +1545,6 @@ static int btt_blk_init(struct btt *btt)
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, btt->btt_queue);
btt->btt_queue->queuedata = btt;
- set_capacity(btt->btt_disk, 0);
- device_add_disk(&btt->nd_btt->dev, btt->btt_disk);
if (btt_meta_size(btt)) {
int rc = nd_integrity_init(btt->btt_disk, btt_meta_size(btt));
@@ -1558,6 +1556,7 @@ static int btt_blk_init(struct btt *btt)
}
}
set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
+ device_add_disk(&btt->nd_btt->dev, btt->btt_disk);
btt->nd_btt->size = btt->nlba * (u64)btt->sector_size;
revalidate_disk(btt->btt_disk);
--
2.14.3
This is a note to let you know that I've just added the patch titled
x86/xen: Zero MSR_IA32_SPEC_CTRL before suspend
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-xen-zero-msr_ia32_spec_ctrl-before-suspend.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 71c208dd54ab971036d83ff6d9837bae4976e623 Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross(a)suse.com>
Date: Mon, 26 Feb 2018 15:08:18 +0100
Subject: x86/xen: Zero MSR_IA32_SPEC_CTRL before suspend
From: Juergen Gross <jgross(a)suse.com>
commit 71c208dd54ab971036d83ff6d9837bae4976e623 upstream.
Older Xen versions (4.5 and before) might have problems migrating pv
guests with MSR_IA32_SPEC_CTRL having a non-zero value. So before
suspending, zero that MSR and restore it after being resumed.
Signed-off-by: Juergen Gross <jgross(a)suse.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Reviewed-by: Jan Beulich <jbeulich(a)suse.com>
Cc: stable(a)vger.kernel.org
Cc: xen-devel(a)lists.xenproject.org
Cc: boris.ostrovsky(a)oracle.com
Link: https://lkml.kernel.org/r/20180226140818.4849-1-jgross@suse.com
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/xen/suspend.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -1,11 +1,14 @@
#include <linux/types.h>
#include <linux/tick.h>
+#include <linux/percpu-defs.h>
#include <xen/xen.h>
#include <xen/interface/xen.h>
#include <xen/grant_table.h>
#include <xen/events.h>
+#include <asm/cpufeatures.h>
+#include <asm/msr-index.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>
#include <asm/fixmap.h>
@@ -68,6 +71,8 @@ static void xen_pv_post_suspend(int susp
xen_mm_unpin_all();
}
+static DEFINE_PER_CPU(u64, spec_ctrl);
+
void xen_arch_pre_suspend(void)
{
if (xen_pv_domain())
@@ -84,6 +89,9 @@ void xen_arch_post_suspend(int cancelled
static void xen_vcpu_notify_restore(void *data)
{
+ if (xen_pv_domain() && boot_cpu_has(X86_FEATURE_SPEC_CTRL))
+ wrmsrl(MSR_IA32_SPEC_CTRL, this_cpu_read(spec_ctrl));
+
/* Boot processor notified via generic timekeeping_resume() */
if (smp_processor_id() == 0)
return;
@@ -93,7 +101,15 @@ static void xen_vcpu_notify_restore(void
static void xen_vcpu_notify_suspend(void *data)
{
+ u64 tmp;
+
tick_suspend_local();
+
+ if (xen_pv_domain() && boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
+ rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
+ this_cpu_write(spec_ctrl, tmp);
+ wrmsrl(MSR_IA32_SPEC_CTRL, 0);
+ }
}
void xen_arch_resume(void)
Patches currently in stable-queue which might be from jgross(a)suse.com are
queue-4.9/x86-xen-zero-msr_ia32_spec_ctrl-before-suspend.patch
This is a note to let you know that I've just added the patch titled
x86/platform/intel-mid: Handle Intel Edison reboot correctly
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
x86-platform-intel-mid-handle-intel-edison-reboot-correctly.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 028091f82eefd5e84f81cef81a7673016ecbe78b Mon Sep 17 00:00:00 2001
From: Sebastian Panceac <sebastian(a)resin.io>
Date: Wed, 28 Feb 2018 11:40:49 +0200
Subject: x86/platform/intel-mid: Handle Intel Edison reboot correctly
From: Sebastian Panceac <sebastian(a)resin.io>
commit 028091f82eefd5e84f81cef81a7673016ecbe78b upstream.
When the Intel Edison module is powered with 3.3V, the reboot command leaves
the module stuck. If the module is powered at a greater voltage, like 4.4V
(as the Edison Mini Breakout board does), reboot works OK.
The official Intel Edison BSP sends the IPCMSG_COLD_RESET message to the
SCU by default. The IPCMSG_COLD_BOOT message, which is used by the upstream
kernel, is only sent when explicitly selected on the kernel command line.
Use IPCMSG_COLD_RESET unconditionally, which makes reboot work independently
of the power supply voltage.
[ tglx: Massaged changelog ]
Fixes: bda7b072de99 ("x86/platform/intel-mid: Implement power off sequence")
Signed-off-by: Sebastian Panceac <sebastian(a)resin.io>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Acked-by: Andy Shevchenko <andy.shevchenko(a)gmail.com>
Cc: stable(a)vger.kernel.org
Link: https://lkml.kernel.org/r/1519810849-15131-1-git-send-email-sebastian@resin…
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/x86/platform/intel-mid/intel-mid.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/platform/intel-mid/intel-mid.c
+++ b/arch/x86/platform/intel-mid/intel-mid.c
@@ -79,7 +79,7 @@ static void intel_mid_power_off(void)
static void intel_mid_reboot(void)
{
- intel_scu_ipc_simple_command(IPCMSG_COLD_BOOT, 0);
+ intel_scu_ipc_simple_command(IPCMSG_COLD_RESET, 0);
}
static unsigned long __init intel_mid_calibrate_tsc(void)
Patches currently in stable-queue which might be from sebastian(a)resin.io are
queue-4.9/x86-platform-intel-mid-handle-intel-edison-reboot-correctly.patch
This is a note to let you know that I've just added the patch titled
timers: Forward timer base before migrating timers
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
timers-forward-timer-base-before-migrating-timers.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From c52232a49e203a65a6e1a670cd5262f59e9364a0 Mon Sep 17 00:00:00 2001
From: Lingutla Chandrasekhar <clingutla(a)codeaurora.org>
Date: Thu, 18 Jan 2018 17:20:22 +0530
Subject: timers: Forward timer base before migrating timers
From: Lingutla Chandrasekhar <clingutla(a)codeaurora.org>
commit c52232a49e203a65a6e1a670cd5262f59e9364a0 upstream.
On CPU hotunplug the enqueued timers of the unplugged CPU are migrated to a
live CPU. This happens from the control thread which initiated the unplug.
If the CPU on which the control thread runs has just come out of a long idle
period, then the base clock of that CPU might be stale because the control
thread runs prior to any event which forwards the clock.
In such a case the timers from the unplugged CPU are queued on the live CPU
based on the stale clock which can cause large delays due to increased
granularity of the outer timer wheels which are far away from base clock.
But there is a worse problem than that. The following sequence of events
illustrates it:
- CPU0 timer1 is queued expires = 59969 and base->clk = 59131.
The timer is queued at wheel level 2, with resulting expiry time = 60032
(due to level granularity).
- CPU1 enters idle @60007, with next timer expiry @60020.
- CPU0 is hotplugged at @60009
- CPU1 exits idle and runs the control thread which migrates the
timers from CPU0
timer1 is now queued in level 0 for immediate handling in the next
softirq because the requested expiry time 59969 is before CPU1 base->clk
60007
- CPU1 runs code which forwards the base clock, which succeeds because the
next expiring timer, which was collected at idle entry time, is still set
to 60020.
So it forwards beyond 60007 and therefore misses to expire the migrated
timer1. That timer gets expired when the wheel wraps around again, which
takes between 63 and 630ms depending on the HZ setting.
Address both problems by invoking forward_timer_base() for the control CPU's
timer base. All other places which might run into a similar problem
(mod_timer()/add_timer_on()) already invoke forward_timer_base() to avoid
that.
[ tglx: Massaged comment and changelog ]
Fixes: a683f390b93f ("timers: Forward the wheel clock whenever possible")
Co-developed-by: Neeraj Upadhyay <neeraju(a)codeaurora.org>
Signed-off-by: Neeraj Upadhyay <neeraju(a)codeaurora.org>
Signed-off-by: Lingutla Chandrasekhar <clingutla(a)codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Anna-Maria Gleixner <anna-maria(a)linutronix.de>
Cc: linux-arm-msm(a)vger.kernel.org
Cc: stable(a)vger.kernel.org
Link: https://lkml.kernel.org/r/20180118115022.6368-1-clingutla@codeaurora.org
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
kernel/time/timer.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1884,6 +1884,12 @@ int timers_dead_cpu(unsigned int cpu)
spin_lock_irq(&new_base->lock);
spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
+ /*
+ * The current CPUs base clock might be stale. Update it
+ * before moving the timers over.
+ */
+ forward_timer_base(new_base);
+
BUG_ON(old_base->running_timer);
for (i = 0; i < WHEEL_SIZE; i++)
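For context, the level-2 rounding in the scenario above can be checked with a
small userspace sketch (not part of the patch), assuming 64 buckets per wheel
level and an 8x coarser clock per level (LVL_CLK_SHIFT = 3) as in
kernel/time/timer.c:

#include <stdio.h>

int main(void)
{
	unsigned long clk = 59131, expires = 59969;
	unsigned long delta = expires - clk;		/* 838 jiffies */
	unsigned long gran = 1UL << (2 * 3);		/* level 2 granularity: 64 jiffies */
	unsigned long rounded;

	/* A delta of 838 is beyond level 1's reach (up to 511 jiffies), so
	 * the timer lands in level 2 and its expiry is rounded up to the
	 * next 64-jiffy bucket boundary.
	 */
	rounded = (expires + gran - 1) & ~(gran - 1);

	printf("delta=%lu granular expiry=%lu\n", delta, rounded);
	/* prints: delta=838 granular expiry=60032, matching the changelog */
	return 0;
}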
Patches currently in stable-queue which might be from clingutla(a)codeaurora.org are
queue-4.9/timers-forward-timer-base-before-migrating-timers.patch
This is a note to let you know that I've just added the patch titled
parisc: Fix ordering of cache and TLB flushes
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
parisc-fix-ordering-of-cache-and-tlb-flushes.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 0adb24e03a124b79130c9499731936b11ce2677d Mon Sep 17 00:00:00 2001
From: John David Anglin <dave.anglin(a)bell.net>
Date: Tue, 27 Feb 2018 08:16:07 -0500
Subject: parisc: Fix ordering of cache and TLB flushes
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
From: John David Anglin <dave.anglin(a)bell.net>
commit 0adb24e03a124b79130c9499731936b11ce2677d upstream.
The change to flush_kernel_vmap_range() wasn't sufficient to avoid the
SMP stalls. The problem is some drivers call these routines with
interrupts disabled. Interrupts need to be enabled for flush_tlb_all()
and flush_cache_all() to work. This version adds checks to ensure
interrupts are not disabled before calling routines that need IPI
interrupts. When interrupts are disabled, we now drop into slower code.
The attached change fixes the ordering of cache and TLB flushes in
several cases. When we flush the cache using the existing PTE/TLB
entries, we need to flush the TLB after doing the cache flush. We don't
need to do this when we flush the entire instruction and data caches as
these flushes don't use the existing TLB entries. The same is true for
tmpalias region flushes.
The flush_kernel_vmap_range() and invalidate_kernel_vmap_range()
routines have been updated.
Secondly, we added a new purge_kernel_dcache_range_asm() routine to
pacache.S and use it in invalidate_kernel_vmap_range(). Nominally,
purges are faster than flushes as the cache lines don't have to be
written back to memory.
Hopefully, this is sufficient to resolve the remaining problems due to
cache speculation. So far, testing indicates that this is the case. I
did work up a patch using tmpalias flushes, but there is a performance
hit because we need the physical address for each page, and we also need
to sequence access to the tmpalias flush code. This increases the
probability of stalls.
Signed-off-by: John David Anglin <dave.anglin(a)bell.net>
Cc: stable(a)vger.kernel.org # 4.9+
Signed-off-by: Helge Deller <deller(a)gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
arch/parisc/include/asm/cacheflush.h | 1
arch/parisc/kernel/cache.c | 57 +++++++++++++++++++----------------
arch/parisc/kernel/pacache.S | 22 +++++++++++++
3 files changed, 54 insertions(+), 26 deletions(-)
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -25,6 +25,7 @@ void flush_user_icache_range_asm(unsigne
void flush_kernel_icache_range_asm(unsigned long, unsigned long);
void flush_user_dcache_range_asm(unsigned long, unsigned long);
void flush_kernel_dcache_range_asm(unsigned long, unsigned long);
+void purge_kernel_dcache_range_asm(unsigned long, unsigned long);
void flush_kernel_dcache_page_asm(void *);
void flush_kernel_icache_page(void *);
void flush_user_dcache_range(unsigned long, unsigned long);
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -464,10 +464,10 @@ EXPORT_SYMBOL(copy_user_page);
int __flush_tlb_range(unsigned long sid, unsigned long start,
unsigned long end)
{
- unsigned long flags, size;
+ unsigned long flags;
- size = (end - start);
- if (size >= parisc_tlb_flush_threshold) {
+ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
+ end - start >= parisc_tlb_flush_threshold) {
flush_tlb_all();
return 1;
}
@@ -538,13 +538,11 @@ void flush_cache_mm(struct mm_struct *mm
struct vm_area_struct *vma;
pgd_t *pgd;
- /* Flush the TLB to avoid speculation if coherency is required. */
- if (parisc_requires_coherency())
- flush_tlb_all();
-
/* Flushing the whole cache on each cpu takes forever on
rp3440, etc. So, avoid it if the mm isn't too big. */
- if (mm_total_size(mm) >= parisc_cache_flush_threshold) {
+ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
+ mm_total_size(mm) >= parisc_cache_flush_threshold) {
+ flush_tlb_all();
flush_cache_all();
return;
}
@@ -552,9 +550,9 @@ void flush_cache_mm(struct mm_struct *mm
if (mm->context == mfsp(3)) {
for (vma = mm->mmap; vma; vma = vma->vm_next) {
flush_user_dcache_range_asm(vma->vm_start, vma->vm_end);
- if ((vma->vm_flags & VM_EXEC) == 0)
- continue;
- flush_user_icache_range_asm(vma->vm_start, vma->vm_end);
+ if (vma->vm_flags & VM_EXEC)
+ flush_user_icache_range_asm(vma->vm_start, vma->vm_end);
+ flush_tlb_range(vma, vma->vm_start, vma->vm_end);
}
return;
}
@@ -598,14 +596,9 @@ flush_user_icache_range(unsigned long st
void flush_cache_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
- BUG_ON(!vma->vm_mm->context);
-
- /* Flush the TLB to avoid speculation if coherency is required. */
- if (parisc_requires_coherency())
+ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
+ end - start >= parisc_cache_flush_threshold) {
flush_tlb_range(vma, start, end);
-
- if ((end - start) >= parisc_cache_flush_threshold
- || vma->vm_mm->context != mfsp(3)) {
flush_cache_all();
return;
}
@@ -613,6 +606,7 @@ void flush_cache_range(struct vm_area_st
flush_user_dcache_range_asm(start, end);
if (vma->vm_flags & VM_EXEC)
flush_user_icache_range_asm(start, end);
+ flush_tlb_range(vma, start, end);
}
void
@@ -621,8 +615,7 @@ flush_cache_page(struct vm_area_struct *
BUG_ON(!vma->vm_mm->context);
if (pfn_valid(pfn)) {
- if (parisc_requires_coherency())
- flush_tlb_page(vma, vmaddr);
+ flush_tlb_page(vma, vmaddr);
__flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
}
}
@@ -630,21 +623,33 @@ flush_cache_page(struct vm_area_struct *
void flush_kernel_vmap_range(void *vaddr, int size)
{
unsigned long start = (unsigned long)vaddr;
+ unsigned long end = start + size;
- if ((unsigned long)size > parisc_cache_flush_threshold)
+ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
+ (unsigned long)size >= parisc_cache_flush_threshold) {
+ flush_tlb_kernel_range(start, end);
flush_data_cache();
- else
- flush_kernel_dcache_range_asm(start, start + size);
+ return;
+ }
+
+ flush_kernel_dcache_range_asm(start, end);
+ flush_tlb_kernel_range(start, end);
}
EXPORT_SYMBOL(flush_kernel_vmap_range);
void invalidate_kernel_vmap_range(void *vaddr, int size)
{
unsigned long start = (unsigned long)vaddr;
+ unsigned long end = start + size;
- if ((unsigned long)size > parisc_cache_flush_threshold)
+ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
+ (unsigned long)size >= parisc_cache_flush_threshold) {
+ flush_tlb_kernel_range(start, end);
flush_data_cache();
- else
- flush_kernel_dcache_range_asm(start, start + size);
+ return;
+ }
+
+ purge_kernel_dcache_range_asm(start, end);
+ flush_tlb_kernel_range(start, end);
}
EXPORT_SYMBOL(invalidate_kernel_vmap_range);
--- a/arch/parisc/kernel/pacache.S
+++ b/arch/parisc/kernel/pacache.S
@@ -1110,6 +1110,28 @@ ENTRY_CFI(flush_kernel_dcache_range_asm)
.procend
ENDPROC_CFI(flush_kernel_dcache_range_asm)
+ENTRY_CFI(purge_kernel_dcache_range_asm)
+ .proc
+ .callinfo NO_CALLS
+ .entry
+
+ ldil L%dcache_stride, %r1
+ ldw R%dcache_stride(%r1), %r23
+ ldo -1(%r23), %r21
+ ANDCM %r26, %r21, %r26
+
+1: cmpb,COND(<<),n %r26, %r25,1b
+ pdc,m %r23(%r26)
+
+ sync
+ syncdma
+ bv %r0(%r2)
+ nop
+ .exit
+
+ .procend
+ENDPROC_CFI(purge_kernel_dcache_range_asm)
+
ENTRY_CFI(flush_user_icache_range_asm)
.proc
.callinfo NO_CALLS
Patches currently in stable-queue which might be from dave.anglin(a)bell.net are
queue-4.9/parisc-fix-ordering-of-cache-and-tlb-flushes.patch
This is a note to let you know that I've just added the patch titled
cpufreq: s3c24xx: Fix broken s3c_cpufreq_init()
to the 4.9-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=sum…
The filename of the patch is:
cpufreq-s3c24xx-fix-broken-s3c_cpufreq_init.patch
and it can be found in the queue-4.9 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <stable(a)vger.kernel.org> know about it.
From 0373ca74831b0f93cd4cdbf7ad3aec3c33a479a5 Mon Sep 17 00:00:00 2001
From: Viresh Kumar <viresh.kumar(a)linaro.org>
Date: Fri, 23 Feb 2018 09:38:28 +0530
Subject: cpufreq: s3c24xx: Fix broken s3c_cpufreq_init()
From: Viresh Kumar <viresh.kumar(a)linaro.org>
commit 0373ca74831b0f93cd4cdbf7ad3aec3c33a479a5 upstream.
commit a307a1e6bc0d "cpufreq: s3c: use cpufreq_generic_init()"
accidentally broke cpufreq on s3c2410 and s3c2412.
These two platforms don't have a CPU frequency table and used to skip
calling cpufreq_table_validate_and_show() for them. But with the
above commit, we started calling it unconditionally and that will
eventually fail as the frequency table pointer is NULL.
Fix this by calling cpufreq_table_validate_and_show() conditionally
again.
Fixes: a307a1e6bc0d "cpufreq: s3c: use cpufreq_generic_init()"
Cc: 3.13+ <stable(a)vger.kernel.org> # v3.13+
Signed-off-by: Viresh Kumar <viresh.kumar(a)linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
---
drivers/cpufreq/s3c24xx-cpufreq.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
--- a/drivers/cpufreq/s3c24xx-cpufreq.c
+++ b/drivers/cpufreq/s3c24xx-cpufreq.c
@@ -351,7 +351,13 @@ struct clk *s3c_cpufreq_clk_get(struct d
static int s3c_cpufreq_init(struct cpufreq_policy *policy)
{
policy->clk = clk_arm;
- return cpufreq_generic_init(policy, ftab, cpu_cur.info->latency);
+
+ policy->cpuinfo.transition_latency = cpu_cur.info->latency;
+
+ if (ftab)
+ return cpufreq_table_validate_and_show(policy, ftab);
+
+ return 0;
}
static int __init s3c_cpufreq_initclks(void)
Patches currently in stable-queue which might be from viresh.kumar(a)linaro.org are
queue-4.9/cpufreq-s3c24xx-fix-broken-s3c_cpufreq_init.patch