The patch titled
Subject: fork: report pid exhaustion correctly
has been added to the -mm tree. Its filename is
fork-report-pid-exhaustion-correctly.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/fork-report-pid-exhaustion-correct…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/fork-report-pid-exhaustion-correct…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: KJ Tsanaktsidis <ktsanaktsidis(a)zendesk.com>
Subject: fork: report pid exhaustion correctly
Make the clone and fork syscalls return EAGAIN when the limit on the
number of pids (/proc/sys/kernel/pid_max) is exceeded.
Currently, when the pid_max limit is exceeded, the kernel will return
ENOSPC from the fork and clone syscalls. This is contrary to the
documented behaviour, which explicitly calls out the pid_max case as one
where EAGAIN should be returned. It also leads to really confusing error
messages in userspace programs which will complain about a lack of disk
space when they fail to create processes/threads for this reason.
This error is being returned because alloc_pid() uses the IDR API to find
a new pid; when none are available, idr_alloc_cyclic() returns
-ENOSPC, and this is being propagated back to userspace.
This behaviour has been broken before, and was explicitly fixed in
commit 35f71bc0a09a ("fork: report pid reservation failure properly"),
so I think -EAGAIN is definitely the right thing to return in this case.
The current behaviour change dates from commit 95846ecf9dac ("pid:
replace pid bitmap implementation with IDR API") and was, I believe,
unintentional.
This patch has no impact on the case where allocating a pid fails because
the child reaper for the namespace is dead; that case will still return
-ENOMEM.
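For illustration (a hedged userspace sketch, not part of the patch): a
caller can only report pid exhaustion sensibly if fork() sets errno as
documented.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        pid_t pid = fork();

        if (pid < 0) {
                /* with this patch, pid_max exhaustion yields EAGAIN */
                if (errno == EAGAIN)
                        fprintf(stderr, "fork: pid limit reached\n");
                else
                        fprintf(stderr, "fork: %s\n", strerror(errno));
                return 1;
        }
        if (pid == 0)
                _exit(0);
        waitpid(pid, NULL, 0);
        return 0;
}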
Link: http://lkml.kernel.org/r/20180903111016.46461-1-ktsanaktsidis@zendesk.com
Fixes: 95846ecf9dac ("pid: replace pid bitmap implementation with IDR API")
Signed-off-by: KJ Tsanaktsidis <ktsanaktsidis(a)zendesk.com>
Reviewed-by: Andrew Morton <akpm(a)linux-foundation.org>
Cc: Michal Hocko <mhocko(a)suse.cz>
Cc: Gargi Sharma <gs051095(a)gmail.com>
Cc: Rik van Riel <riel(a)redhat.com>
Cc: Oleg Nesterov <oleg(a)redhat.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
diff -puN kernel/pid.c~fork-report-pid-exhaustion-correctly kernel/pid.c
--- a/kernel/pid.c~fork-report-pid-exhaustion-correctly
+++ a/kernel/pid.c
@@ -195,7 +195,7 @@ struct pid *alloc_pid(struct pid_namespa
idr_preload_end();
if (nr < 0) {
- retval = nr;
+ retval = (nr == -ENOSPC) ? -EAGAIN : nr;
goto out_free;
}
_
Patches currently in -mm which might be from ktsanaktsidis(a)zendesk.com are
fork-report-pid-exhaustion-correctly.patch
Avoid growing the file system to the point where the last block group
is too small to hold all of the metadata that must be stored in it.
This problem can be triggered with the following reproducer:
umount /mnt
mke2fs -F -m0 -b 4096 -t ext4 -O resize_inode,^has_journal \
-E resize=1073741824 /tmp/foo.img 128M
mount /tmp/foo.img /mnt
truncate --size 1708M /tmp/foo.img
resize2fs /dev/loop0 295400
umount /mnt
e2fsck -fy /tmp/foo.img
Reported-by: Torsten Hilbrich <torsten.hilbrich(a)secunet.com>
Signed-off-by: Theodore Ts'o <tytso(a)mit.edu>
Cc: stable(a)vger.kernel.org
---
fs/ext4/resize.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index e5fb38451a73..33655a6eff4d 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -1986,6 +1986,26 @@ int ext4_resize_fs(struct super_block *sb, ext4_fsblk_t n_blocks_count)
}
}
+ /*
+ * Make sure the last group has enough space so that it's
+ * guaranteed to have enough space for all metadata blocks
+ * that it might need to hold. (We might not need to store
+ * the inode table blocks in the last block group, but there
+ * will be cases where this might be needed.)
+ */
+ if ((ext4_group_first_block_no(sb, n_group) +
+ ext4_group_overhead_blocks(sb, n_group) + 2 +
+ sbi->s_itb_per_group + sbi->s_cluster_ratio) >= n_blocks_count) {
+ n_blocks_count = ext4_group_first_block_no(sb, n_group);
+ n_group--;
+ n_blocks_count_retry = 0;
+ if (resize_inode) {
+ iput(resize_inode);
+ resize_inode = NULL;
+ }
+ goto retry;
+ }
+
/* extend the last group */
if (n_group == o_group)
add = n_blocks_count - o_blocks_count;
--
2.18.0.rc0
The patch titled
Subject: mm: disable deferred struct page for 32-bit arches
has been added to the -mm tree. Its filename is
mm-disable-deferred-struct-page-for-32-bit-arches.patch
This patch should soon appear at
http://ozlabs.org/~akpm/mmots/broken-out/mm-disable-deferred-struct-page-fo…
and later at
http://ozlabs.org/~akpm/mmotm/broken-out/mm-disable-deferred-struct-page-fo…
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Pasha Tatashin <Pavel.Tatashin(a)microsoft.com>
Subject: mm: disable deferred struct page for 32-bit arches
Deferred struct page init is needed only on systems with a large amount
of physical memory to improve boot performance.  32-bit systems do not
benefit from this feature.
Jiri reported a problem where deferred struct pages do not work well with
x86-32:
[ 0.035162] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
[ 0.035725] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
[ 0.036269] Initializing CPU#0
[ 0.036513] Initializing HighMem for node 0 (00036ffe:0007ffe0)
[ 0.038459] page:f6780000 is uninitialized and poisoned
[ 0.038460] raw: ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff ffffffff
[ 0.039509] page dumped because: VM_BUG_ON_PAGE(1 && PageCompound(page))
[ 0.040038] ------------[ cut here ]------------
[ 0.040399] kernel BUG at include/linux/page-flags.h:293!
[ 0.040823] invalid opcode: 0000 [#1] SMP PTI
[ 0.041166] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.0-rc1_pt_jiri #9
[ 0.041694] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-20171110_100015-anatol 04/01/2014
[ 0.042496] EIP: free_highmem_page+0x64/0x80
[ 0.042839] Code: 13 46 d8 c1 e8 18 5d 83 e0 03 8d 04 c0 c1 e0 06 ff 80 ec 5f 44 d8 c3 8d b4 26 00 00 00 00 ba 08 65 28 d8 89 d8 e8 fc 71 02 00 <0f> 0b 8d 76 00 8d bc 27 00 00 00 00 ba d0 b1 26 d8 89 d8 e8 e4 71
[ 0.044338] EAX: 0000003c EBX: f6780000 ECX: 00000000 EDX: d856cbe8
[ 0.044868] ESI: 0007ffe0 EDI: d838df20 EBP: d838df00 ESP: d838defc
[ 0.045372] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 EFLAGS: 00210086
[ 0.045913] CR0: 80050033 CR2: 00000000 CR3: 18556000 CR4: 00040690
[ 0.046413] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 0.046913] DR6: fffe0ff0 DR7: 00000400
[ 0.047220] Call Trace:
[ 0.047419] add_highpages_with_active_regions+0xbd/0x10d
[ 0.047854] set_highmem_pages_init+0x5b/0x71
[ 0.048202] mem_init+0x2b/0x1e8
[ 0.048460] start_kernel+0x1d2/0x425
[ 0.048757] i386_start_kernel+0x93/0x97
[ 0.049073] startup_32_smp+0x164/0x168
[ 0.049379] Modules linked in:
[ 0.049626] ---[ end trace 337949378db0abbb ]---
We free highmem pages before their struct pages are initialized:
mem_init()
 set_highmem_pages_init()
  add_highpages_with_active_regions()
   free_highmem_page()
    .. Access uninitialized struct page here..
Because there is no reason to have this feature on 32-bit systems, just
disable it.
Link: http://lkml.kernel.org/r/20180831150506.31246-1-pavel.tatashin@microsoft.com
Fixes: 2e3ca40f03bb ("mm: relax deferred struct page requirements")
Signed-off-by: Pavel Tatashin <pavel.tatashin(a)microsoft.com>
Reported-by: Jiri Slaby <jslaby(a)suse.cz>
Acked-by: Michal Hocko <mhocko(a)suse.com>
Cc: <stable(a)vger.kernel.org>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
---
diff -puN mm/Kconfig~mm-disable-deferred-struct-page-for-32-bit-arches mm/Kconfig
--- a/mm/Kconfig~mm-disable-deferred-struct-page-for-32-bit-arches
+++ a/mm/Kconfig
@@ -637,6 +637,7 @@ config DEFERRED_STRUCT_PAGE_INIT
depends on NO_BOOTMEM
depends on SPARSEMEM
depends on !NEED_PER_CPU_KM
+ depends on 64BIT
help
Ordinarily all struct pages are initialised during early boot in a
single thread. On very large machines this can take a considerable
_
Patches currently in -mm which might be from Pavel.Tatashin(a)microsoft.com are
mm-disable-deferred-struct-page-for-32-bit-arches.patch
Hi Greg,
I was trying to compile 4.14.67 using GCC 8.2.0 with gcc-plugins enabled
and ran into this issue [1]. Applying both of the commits mentioned
resolved the problem, so I think they should be included in 4.14.
Thanks!
[1]
https://groups.google.com/forum/#!msg/qubes-devel/Q3cdQKQS4Tk/gKr4F52HAAAJ
--
Lance Albertson
Director
Oregon State University | Open Source Lab
---
From c82b7623edd74a57fffd69dbab25d180c3b16c6f Mon Sep 17 00:00:00 2001
From: "valdis.kletnieks(a)vt.edu" <valdis.kletnieks(a)vt.edu>
Date: Sun, 4 Feb 2018 12:01:43 -0500
Subject: [PATCH 2/2] gcc-plugins: Add include required by GCC release 8
GCC 8 requires another #include to get the gcc-plugins to build cleanly.
Signed-off-by: Valdis Kletnieks <valdis.kletnieks(a)vt.edu>
Signed-off-by: Kees Cook <keescook(a)chromium.org>
---
scripts/gcc-plugins/gcc-common.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/scripts/gcc-plugins/gcc-common.h
b/scripts/gcc-plugins/gcc-common.h
index ffd1dfaa1cc1..f46750053377 100644
--- a/scripts/gcc-plugins/gcc-common.h
+++ b/scripts/gcc-plugins/gcc-common.h
@@ -97,6 +97,10 @@
#include "predict.h"
#include "ipa-utils.h"
+#if BUILDING_GCC_VERSION >= 8000
+#include "stringpool.h"
+#endif
+
#if BUILDING_GCC_VERSION >= 4009
#include "attribs.h"
#include "varasm.h"
--
Lance Albertson
Director
Oregon State University | Open Source Lab
The patch
regulator: Fix 'do-nothing' value for regulators without suspend state
has been applied to the regulator tree at
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git
All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.
You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.
If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark
From 3edd79cf5a44b12dbb13bc320f5788aed6562b36 Mon Sep 17 00:00:00 2001
From: Marek Szyprowski <m.szyprowski(a)samsung.com>
Date: Mon, 3 Sep 2018 16:49:37 +0200
Subject: [PATCH] regulator: Fix 'do-nothing' value for regulators without
suspend state
Some regulators don't have all states defined, and in such cases the
regulator core should not assume anything. However, in the current
implementation of of_get_regulation_constraints() the
DO_NOTHING_IN_SUSPEND enable value was set only for regulators which had
a suspend node defined; otherwise the default value of 0 was used, which
means DISABLE_IN_SUSPEND. This led to broken system suspend/resume on
boards with a simple regulator constraints definition (without suspend
state nodes).
To avoid further mismatches between the default and uninitialized values
of the suspend enabled/disabled states, change their values so that the
default '0' means DO_NOTHING_IN_SUSPEND.
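A hedged illustration (struct name hypothetical) of why the renumbering
makes zero-initialized state safe:

/* old values: DO_NOTHING_IN_SUSPEND (-1), DISABLE_IN_SUSPEND (0),
 * ENABLE_IN_SUSPEND (1); new values: 0, 1, 2 respectively. */
#define DO_NOTHING_IN_SUSPEND   0
#define DISABLE_IN_SUSPEND      1
#define ENABLE_IN_SUSPEND       2

struct suspend_state_sketch {
        int enabled;
};

/* A state that is zero-initialized (kzalloc(), static storage) has
 * enabled == 0.  Under the old numbering that read as
 * DISABLE_IN_SUSPEND and actively turned the regulator off; under the
 * new numbering it reads as DO_NOTHING_IN_SUSPEND, the safe default
 * for constraints without suspend state nodes. */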
Fixes: 72069f9957a1 ("regulator: leave one item to record whether regulator is enabled")
Signed-off-by: Marek Szyprowski <m.szyprowski(a)samsung.com>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Cc: stable(a)vger.kernel.org
---
drivers/regulator/core.c | 2 +-
drivers/regulator/of_regulator.c | 2 --
include/linux/regulator/machine.h | 6 +++---
3 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
index bb1324f93143..90215f57270f 100644
--- a/drivers/regulator/core.c
+++ b/drivers/regulator/core.c
@@ -3161,7 +3161,7 @@ static inline int regulator_suspend_toggle(struct regulator_dev *rdev,
if (!rstate->changeable)
return -EPERM;
- rstate->enabled = en;
+ rstate->enabled = (en) ? ENABLE_IN_SUSPEND : DISABLE_IN_SUSPEND;
return 0;
}
diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
index 638f17d4c848..210fc20f7de7 100644
--- a/drivers/regulator/of_regulator.c
+++ b/drivers/regulator/of_regulator.c
@@ -213,8 +213,6 @@ static void of_get_regulation_constraints(struct device_node *np,
else if (of_property_read_bool(suspend_np,
"regulator-off-in-suspend"))
suspend_state->enabled = DISABLE_IN_SUSPEND;
- else
- suspend_state->enabled = DO_NOTHING_IN_SUSPEND;
if (!of_property_read_u32(np, "regulator-suspend-min-microvolt",
&pval))
diff --git a/include/linux/regulator/machine.h b/include/linux/regulator/machine.h
index 3468703d663a..a459a5e973a7 100644
--- a/include/linux/regulator/machine.h
+++ b/include/linux/regulator/machine.h
@@ -48,9 +48,9 @@ struct regulator;
* DISABLE_IN_SUSPEND - turn off regulator in suspend states
* ENABLE_IN_SUSPEND - keep regulator on in suspend states
*/
-#define DO_NOTHING_IN_SUSPEND (-1)
-#define DISABLE_IN_SUSPEND 0
-#define ENABLE_IN_SUSPEND 1
+#define DO_NOTHING_IN_SUSPEND 0
+#define DISABLE_IN_SUSPEND 1
+#define ENABLE_IN_SUSPEND 2
/* Regulator active discharge flags */
enum regulator_active_discharge {
--
2.19.0.rc1
The patch
spi: tegra20-slink: explicitly enable/disable clock
has been applied to the spi tree at
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git
All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.
You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.
If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark
From 7001cab1dabc0b72b2b672ef58a90ab64f5e2343 Mon Sep 17 00:00:00 2001
From: Marcel Ziswiler <marcel.ziswiler(a)toradex.com>
Date: Wed, 29 Aug 2018 08:47:57 +0200
Subject: [PATCH] spi: tegra20-slink: explicitly enable/disable clock
Depending on the SPI instance, one may get an interrupt storm upon
requesting the respective interrupt unless the clock is explicitly
enabled beforehand. This has been observed while trying to bring up
instance 4 on a T20.
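A hedged sketch of the general pattern applied here (the foo_* names are
hypothetical): enable the clock before requesting the interrupt, since a
gated clock can leave the IRQ line stuck asserted and storming as soon
as it is unmasked.

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t foo_isr(int irq, void *data)
{
        return IRQ_HANDLED;
}

static int foo_probe(struct platform_device *pdev)
{
        struct clk *clk;
        int irq, ret;

        clk = devm_clk_get(&pdev->dev, NULL);
        if (IS_ERR(clk))
                return PTR_ERR(clk);

        ret = clk_prepare_enable(clk);          /* clock on first... */
        if (ret)
                return ret;

        irq = platform_get_irq(pdev, 0);
        if (irq < 0) {
                clk_disable_unprepare(clk);
                return irq;
        }

        ret = devm_request_irq(&pdev->dev, irq, foo_isr, 0, "foo", NULL);
        if (ret)                                /* ...then the IRQ */
                clk_disable_unprepare(clk);
        return ret;
}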
Signed-off-by: Marcel Ziswiler <marcel.ziswiler(a)toradex.com>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Cc: stable(a)vger.kernel.org
---
drivers/spi/spi-tegra20-slink.c | 31 +++++++++++++++++++++++--------
1 file changed, 23 insertions(+), 8 deletions(-)
diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
index 6f7b946b5ced..1427f343b39a 100644
--- a/drivers/spi/spi-tegra20-slink.c
+++ b/drivers/spi/spi-tegra20-slink.c
@@ -1063,6 +1063,24 @@ static int tegra_slink_probe(struct platform_device *pdev)
goto exit_free_master;
}
+ /* disabled clock may cause interrupt storm upon request */
+ tspi->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(tspi->clk)) {
+ ret = PTR_ERR(tspi->clk);
+ dev_err(&pdev->dev, "Can not get clock %d\n", ret);
+ goto exit_free_master;
+ }
+ ret = clk_prepare(tspi->clk);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Clock prepare failed %d\n", ret);
+ goto exit_free_master;
+ }
+ ret = clk_enable(tspi->clk);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Clock enable failed %d\n", ret);
+ goto exit_free_master;
+ }
+
spi_irq = platform_get_irq(pdev, 0);
tspi->irq = spi_irq;
ret = request_threaded_irq(tspi->irq, tegra_slink_isr,
@@ -1071,14 +1089,7 @@ static int tegra_slink_probe(struct platform_device *pdev)
if (ret < 0) {
dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
tspi->irq);
- goto exit_free_master;
- }
-
- tspi->clk = devm_clk_get(&pdev->dev, NULL);
- if (IS_ERR(tspi->clk)) {
- dev_err(&pdev->dev, "can not get clock\n");
- ret = PTR_ERR(tspi->clk);
- goto exit_free_irq;
+ goto exit_clk_disable;
}
tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
@@ -1138,6 +1149,8 @@ static int tegra_slink_probe(struct platform_device *pdev)
tegra_slink_deinit_dma_param(tspi, true);
exit_free_irq:
free_irq(spi_irq, tspi);
+exit_clk_disable:
+ clk_disable(tspi->clk);
exit_free_master:
spi_master_put(master);
return ret;
@@ -1150,6 +1163,8 @@ static int tegra_slink_remove(struct platform_device *pdev)
free_irq(tspi->irq, tspi);
+ clk_disable(tspi->clk);
+
if (tspi->tx_dma_chan)
tegra_slink_deinit_dma_param(tspi, false);
--
2.19.0.rc1
The patch
ASoC: rsnd: fixup not to call clk_get/set under non-atomic
has been applied to the asoc tree at
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git
All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.
You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.
If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.
Please add any relevant lists and maintainers to the CCs when replying
to this mail.
Thanks,
Mark
From 4d230d12710646788af581ba0155d83ab48b955c Mon Sep 17 00:00:00 2001
From: Jiada Wang <jiada_wang(a)mentor.com>
Date: Mon, 3 Sep 2018 07:08:58 +0000
Subject: [PATCH] ASoC: rsnd: fixup not to call clk_get/set under non-atomic
Clocking operations such as clk_get_rate() and clk_set_rate() are
non-atomic, so they must not be called from soc_pcm_trigger(), which is
atomic.
The following issue was found because clk_get_rate() can sleep when
executed from soc_pcm_trigger(), which must not block.
We can reproduce this issue with the following steps:
> enable CONFIG_DEBUG_ATOMIC_SLEEP=y
> compile, and boot
> mount -t debugfs none /sys/kernel/debug
> while true; do cat /sys/kernel/debug/clk/clk_summary > /dev/null; done &
> while true; do aplay xxx; done
This patch adds support for the .prepare callback and moves the
non-atomic clocking operations into it. As .prepare is non-atomic, it is
always called before trigger_start/trigger_stop.
BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620
in_atomic(): 1, irqs_disabled(): 128, pid: 2242, name: aplay
INFO: lockdep is turned off.
irq event stamp: 5964
hardirqs last enabled at (5963): [<ffff200008e59e40>] mutex_lock_nested+0x6e8/0x6f0
hardirqs last disabled at (5964): [<ffff200008e623f0>] _raw_spin_lock_irqsave+0x24/0x68
softirqs last enabled at (5502): [<ffff200008081838>] __do_softirq+0x560/0x10c0
softirqs last disabled at (5495): [<ffff2000080c2e78>] irq_exit+0x160/0x25c
Preemption disabled at:
[   62.904063] [<ffff200008be4d48>] snd_pcm_stream_lock+0xb4/0xc0
CPU: 2 PID: 2242 Comm: aplay Tainted: G B C 4.9.54+ #186
Hardware name: Renesas Salvator-X board based on r8a7795 (DT)
Call trace:
[<ffff20000808fe48>] dump_backtrace+0x0/0x37c
[<ffff2000080901d8>] show_stack+0x14/0x1c
[<ffff2000086f4458>] dump_stack+0xfc/0x154
[<ffff2000081134a0>] ___might_sleep+0x57c/0x58c
[<ffff2000081136b8>] __might_sleep+0x208/0x21c
[<ffff200008e5980c>] mutex_lock_nested+0xb4/0x6f0
[<ffff2000087cac74>] clk_prepare_lock+0xb0/0x184
[<ffff2000087cb094>] clk_core_get_rate+0x14/0x54
[<ffff2000087cb0f4>] clk_get_rate+0x20/0x34
[<ffff20000113aa00>] rsnd_adg_ssi_clk_try_start+0x158/0x4f8 [snd_soc_rcar]
[<ffff20000113da00>] rsnd_ssi_init+0x668/0x7a0 [snd_soc_rcar]
[<ffff200001133ff4>] rsnd_soc_dai_trigger+0x4bc/0xcf8 [snd_soc_rcar]
[<ffff200008c1af24>] soc_pcm_trigger+0x2a4/0x2d4
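As a hedged illustration of the callback split the patch relies on (the
foo_* helpers are hypothetical): ALSA invokes .trigger in atomic context
(stream lock held), while .prepare may sleep, so sleeping clock
operations belong in .prepare.

#include <sound/soc.h>

static int foo_setup_clocks(struct snd_soc_dai *dai);  /* may sleep */
static int foo_start_stop(struct snd_soc_dai *dai, int cmd);

static int foo_dai_prepare(struct snd_pcm_substream *substream,
                           struct snd_soc_dai *dai)
{
        /* non-atomic: clk_get_rate()/clk_set_rate() are safe here */
        return foo_setup_clocks(dai);
}

static int foo_dai_trigger(struct snd_pcm_substream *substream,
                           int cmd, struct snd_soc_dai *dai)
{
        /* atomic: only non-sleeping start/stop work is allowed */
        return foo_start_stop(dai, cmd);
}

static const struct snd_soc_dai_ops foo_dai_ops = {
        .prepare = foo_dai_prepare,
        .trigger = foo_dai_trigger,
};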
Fixes: e7d850dd10f4 ("ASoC: rsnd: use mod base common method on SSI-parent")
Signed-off-by: Jiada Wang <jiada_wang(a)mentor.com>
Signed-off-by: Timo Wischer <twischer(a)de.adit-jv.com>
[Kuninori: tidyup for upstream]
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx(a)renesas.com>
Tested-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx(a)renesas.com>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Cc: stable(a)vger.kernel.org
---
sound/soc/sh/rcar/core.c | 11 +++++++++++
sound/soc/sh/rcar/rsnd.h | 7 +++++++
sound/soc/sh/rcar/ssi.c | 16 ++++++++++------
3 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
index f8425d8b44d2..b35f5509cfe2 100644
--- a/sound/soc/sh/rcar/core.c
+++ b/sound/soc/sh/rcar/core.c
@@ -958,12 +958,23 @@ static void rsnd_soc_dai_shutdown(struct snd_pcm_substream *substream,
rsnd_dai_stream_quit(io);
}
+static int rsnd_soc_dai_prepare(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+{
+ struct rsnd_priv *priv = rsnd_dai_to_priv(dai);
+ struct rsnd_dai *rdai = rsnd_dai_to_rdai(dai);
+ struct rsnd_dai_stream *io = rsnd_rdai_to_io(rdai, substream);
+
+ return rsnd_dai_call(prepare, io, priv);
+}
+
static const struct snd_soc_dai_ops rsnd_soc_dai_ops = {
.startup = rsnd_soc_dai_startup,
.shutdown = rsnd_soc_dai_shutdown,
.trigger = rsnd_soc_dai_trigger,
.set_fmt = rsnd_soc_dai_set_fmt,
.set_tdm_slot = rsnd_soc_set_dai_tdm_slot,
+ .prepare = rsnd_soc_dai_prepare,
};
void rsnd_parse_connect_common(struct rsnd_dai *rdai,
diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
index 96d93330b1e1..8f7a0abfa751 100644
--- a/sound/soc/sh/rcar/rsnd.h
+++ b/sound/soc/sh/rcar/rsnd.h
@@ -280,6 +280,9 @@ struct rsnd_mod_ops {
int (*nolock_stop)(struct rsnd_mod *mod,
struct rsnd_dai_stream *io,
struct rsnd_priv *priv);
+ int (*prepare)(struct rsnd_mod *mod,
+ struct rsnd_dai_stream *io,
+ struct rsnd_priv *priv);
};
struct rsnd_dai_stream;
@@ -309,6 +312,7 @@ struct rsnd_mod {
* H 0: fallback
* H 0: hw_params
* H 0: pointer
+ * H 0: prepare
*/
#define __rsnd_mod_shift_nolock_start 0
#define __rsnd_mod_shift_nolock_stop 0
@@ -323,6 +327,7 @@ struct rsnd_mod {
#define __rsnd_mod_shift_fallback 28 /* always called */
#define __rsnd_mod_shift_hw_params 28 /* always called */
#define __rsnd_mod_shift_pointer 28 /* always called */
+#define __rsnd_mod_shift_prepare 28 /* always called */
#define __rsnd_mod_add_probe 0
#define __rsnd_mod_add_remove 0
@@ -337,6 +342,7 @@ struct rsnd_mod {
#define __rsnd_mod_add_fallback 0
#define __rsnd_mod_add_hw_params 0
#define __rsnd_mod_add_pointer 0
+#define __rsnd_mod_add_prepare 0
#define __rsnd_mod_call_probe 0
#define __rsnd_mod_call_remove 0
@@ -351,6 +357,7 @@ struct rsnd_mod {
#define __rsnd_mod_call_pointer 0
#define __rsnd_mod_call_nolock_start 0
#define __rsnd_mod_call_nolock_stop 1
+#define __rsnd_mod_call_prepare 0
#define rsnd_mod_to_priv(mod) ((mod)->priv)
#define rsnd_mod_name(mod) ((mod)->ops->name)
diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
index 8304e4ec9242..3f880ec66459 100644
--- a/sound/soc/sh/rcar/ssi.c
+++ b/sound/soc/sh/rcar/ssi.c
@@ -283,7 +283,7 @@ static int rsnd_ssi_master_clk_start(struct rsnd_mod *mod,
if (rsnd_ssi_is_multi_slave(mod, io))
return 0;
- if (ssi->usrcnt > 1) {
+ if (ssi->rate) {
if (ssi->rate != rate) {
dev_err(dev, "SSI parent/child should use same rate\n");
return -EINVAL;
@@ -434,7 +434,6 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
struct rsnd_priv *priv)
{
struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);
- int ret;
if (!rsnd_ssi_is_run_mods(mod, io))
return 0;
@@ -443,10 +442,6 @@ static int rsnd_ssi_init(struct rsnd_mod *mod,
rsnd_mod_power_on(mod);
- ret = rsnd_ssi_master_clk_start(mod, io);
- if (ret < 0)
- return ret;
-
rsnd_ssi_config_init(mod, io);
rsnd_ssi_register_setup(mod);
@@ -852,6 +847,13 @@ static int rsnd_ssi_pio_pointer(struct rsnd_mod *mod,
return 0;
}
+static int rsnd_ssi_prepare(struct rsnd_mod *mod,
+ struct rsnd_dai_stream *io,
+ struct rsnd_priv *priv)
+{
+ return rsnd_ssi_master_clk_start(mod, io);
+}
+
static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
.name = SSI_NAME,
.probe = rsnd_ssi_common_probe,
@@ -864,6 +866,7 @@ static struct rsnd_mod_ops rsnd_ssi_pio_ops = {
.pointer = rsnd_ssi_pio_pointer,
.pcm_new = rsnd_ssi_pcm_new,
.hw_params = rsnd_ssi_hw_params,
+ .prepare = rsnd_ssi_prepare,
};
static int rsnd_ssi_dma_probe(struct rsnd_mod *mod,
@@ -940,6 +943,7 @@ static struct rsnd_mod_ops rsnd_ssi_dma_ops = {
.pcm_new = rsnd_ssi_pcm_new,
.fallback = rsnd_ssi_fallback,
.hw_params = rsnd_ssi_hw_params,
+ .prepare = rsnd_ssi_prepare,
};
int rsnd_ssi_is_dma_mode(struct rsnd_mod *mod)
--
2.19.0.rc1
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From cb9d7fd51d9fbb329d182423bd7b92d0f8cb0e01 Mon Sep 17 00:00:00 2001
From: Vincent Whitchurch <vincent.whitchurch(a)axis.com>
Date: Tue, 21 Aug 2018 17:25:07 +0200
Subject: [PATCH] watchdog: Mark watchdog touch functions as notrace
Some architectures need to use stop_machine() to patch functions for
ftrace, and the assumption is that the stopped CPUs do not make function
calls to traceable functions when they are in the stopped state.
Commit ce4f06dcbb5d ("stop_machine: Touch_nmi_watchdog() after
MULTI_STOP_PREPARE") added calls to the watchdog touch functions from
the stopped CPUs and those functions lack notrace annotations. This
leads to crashes when enabling/disabling ftrace on ARM kernels built
with the Thumb-2 instruction set.
Fix it by adding the necessary notrace annotations.
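For reference, a hedged sketch of what the annotation does (paraphrasing
the kernel's compiler headers): notrace maps to the
no_instrument_function attribute, so the compiler emits no mcount/fentry
hook and the function stays safe to run while ftrace patches call sites.

#define notrace __attribute__((__no_instrument_function__))

notrace void touch_example_watchdog(void)
{
        /* no tracing hook is generated for this function body */
}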
Fixes: ce4f06dcbb5d ("stop_machine: Touch_nmi_watchdog() after MULTI_STOP_PREPARE")
Signed-off-by: Vincent Whitchurch <vincent.whitchurch(a)axis.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: oleg(a)redhat.com
Cc: tj(a)kernel.org
Cc: stable(a)vger.kernel.org
Link: https://lkml.kernel.org/r/20180821152507.18313-1-vincent.whitchurch@axis.com
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 5470dce212c0..977918d5d350 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -261,7 +261,7 @@ static void __touch_watchdog(void)
* entering idle state. This should only be used for scheduler events.
* Use touch_softlockup_watchdog() for everything else.
*/
-void touch_softlockup_watchdog_sched(void)
+notrace void touch_softlockup_watchdog_sched(void)
{
/*
* Preemption can be enabled. It doesn't matter which CPU's timestamp
@@ -270,7 +270,7 @@ void touch_softlockup_watchdog_sched(void)
raw_cpu_write(watchdog_touch_ts, 0);
}
-void touch_softlockup_watchdog(void)
+notrace void touch_softlockup_watchdog(void)
{
touch_softlockup_watchdog_sched();
wq_watchdog_touch(raw_smp_processor_id());
diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
index 1f7020d65d0a..71381168dede 100644
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -29,7 +29,7 @@ static struct cpumask dead_events_mask;
static unsigned long hardlockup_allcpu_dumped;
static atomic_t watchdog_cpus = ATOMIC_INIT(0);
-void arch_touch_nmi_watchdog(void)
+notrace void arch_touch_nmi_watchdog(void)
{
/*
* Using __raw here because some code paths have
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 60e80198c3df..0280deac392e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5574,7 +5574,7 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
mod_timer(&wq_watchdog_timer, jiffies + thresh);
}
-void wq_watchdog_touch(int cpu)
+notrace void wq_watchdog_touch(int cpu)
{
if (cpu >= 0)
per_cpu(wq_watchdog_touched_cpu, cpu) = jiffies;
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 932d47448c3caa0fa99e84d7f5bc302aa286efd8 Mon Sep 17 00:00:00 2001
From: "H. Nikolaus Schaller" <hns(a)goldelico.com>
Date: Tue, 26 Jun 2018 15:28:29 +0200
Subject: [PATCH] power: generic-adc-battery: fix out-of-bounds write when
copying channel properties
We did have sporadic problems in the pinctrl framework during boot
where a pin group name unexpectedly became NULL, leading to a NULL
dereference in strcmp().
Detailed analysis of the failing cases revealed that there were
two devm-allocated objects close to each other. The second one was
the affected group_desc in pinmux, and the first one was the
psy_desc->properties buffer of the gab driver.
Review of the gab code showed that the address calculation for
one memcpy() is wrong. It does
properties + sizeof(type) * index
but C is defined to do the index multiplication already for
pointer + integer additions. Hence the factor was applied twice,
and the memcpy() writes outside of the properties buffer.
Sometimes it happened to hit the pinctrl data and triggered the
strcmp(NULL).
Anyway, it is overkill to use a memcpy() here instead of a simple
assignment, which is easier to read and carries less risk of wrong
address calculations. So change the code to a simple assignment.
If we initialize the index to the first free location, we can even
remove the local variable 'properties'.
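A standalone demonstration (not the driver code) of the addressing rule
in question: pointer + integer arithmetic in C already scales by the
element size, so multiplying by sizeof() as well applies the factor
twice.

#include <stdio.h>

int main(void)
{
        int buf[8];
        int *p = buf;
        int index = 2;

        /* correct: element 2, i.e. 2 * sizeof(int) bytes from start */
        printf("%td\n", (char *)(p + index) - (char *)buf);

        /* the bug: 2 * sizeof(int) elements, i.e.
         * 2 * sizeof(int) * sizeof(int) bytes - past the intended
         * slot, and for larger indexes outside the buffer entirely */
        printf("%td\n", (char *)(p + sizeof(*p) * index) - (char *)buf);
        return 0;
}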
This bug seems to have existed right from the beginning, since
commit e60fea794e6e ("power: battery: Generic battery driver using IIO")
in 3.7-rc1.
Signed-off-by: H. Nikolaus Schaller <hns(a)goldelico.com>
Cc: stable(a)vger.kernel.org
Fixes: e60fea794e6e ("power: battery: Generic battery driver using IIO")
Signed-off-by: Sebastian Reichel <sebastian.reichel(a)collabora.co.uk>
diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
index 28dc056eaafa..11f59275bff4 100644
--- a/drivers/power/supply/generic-adc-battery.c
+++ b/drivers/power/supply/generic-adc-battery.c
@@ -241,10 +241,9 @@ static int gab_probe(struct platform_device *pdev)
struct power_supply_desc *psy_desc;
struct power_supply_config psy_cfg = {};
struct gab_platform_data *pdata = pdev->dev.platform_data;
- enum power_supply_property *properties;
int ret = 0;
int chan;
- int index = 0;
+ int index = ARRAY_SIZE(gab_props);
adc_bat = devm_kzalloc(&pdev->dev, sizeof(*adc_bat), GFP_KERNEL);
if (!adc_bat) {
@@ -278,8 +277,6 @@ static int gab_probe(struct platform_device *pdev)
}
memcpy(psy_desc->properties, gab_props, sizeof(gab_props));
- properties = (enum power_supply_property *)
- ((char *)psy_desc->properties + sizeof(gab_props));
/*
* getting channel from iio and copying the battery properties
@@ -293,15 +290,12 @@ static int gab_probe(struct platform_device *pdev)
adc_bat->channel[chan] = NULL;
} else {
/* copying properties for supported channels only */
- memcpy(properties + sizeof(*(psy_desc->properties)) * index,
- &gab_dyn_props[chan],
- sizeof(gab_dyn_props[chan]));
- index++;
+ psy_desc->properties[index++] = gab_dyn_props[chan];
}
}
/* none of the channels are supported so let's bail out */
- if (index == 0) {
+ if (index == ARRAY_SIZE(gab_props)) {
ret = -ENODEV;
goto second_mem_fail;
}
@@ -312,7 +306,7 @@ static int gab_probe(struct platform_device *pdev)
* as come channels may be not be supported by the device.So
* we need to take care of that.
*/
- psy_desc->num_properties = ARRAY_SIZE(gab_props) + index;
+ psy_desc->num_properties = index;
adc_bat->psy = power_supply_register(&pdev->dev, psy_desc, &psy_cfg);
if (IS_ERR(adc_bat->psy)) {
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From a427503edaaed9b75ed9746a654cece7e93e60a8 Mon Sep 17 00:00:00 2001
From: "H. Nikolaus Schaller" <hns(a)goldelico.com>
Date: Tue, 26 Jun 2018 15:28:30 +0200
Subject: [PATCH] power: generic-adc-battery: check for duplicate properties
copied from iio channels
If an iio channel defines a basic property, there are duplicate entries
in /sys/class/power/*/uevent.
So add a check to avoid duplicates. Since all channels may be duplicates,
we have to modify the related error check.
Signed-off-by: H. Nikolaus Schaller <hns(a)goldelico.com>
Cc: stable(a)vger.kernel.org
Fixes: e60fea794e6e ("power: battery: Generic battery driver using IIO")
Signed-off-by: Sebastian Reichel <sebastian.reichel(a)collabora.co.uk>
diff --git a/drivers/power/supply/generic-adc-battery.c b/drivers/power/supply/generic-adc-battery.c
index 11f59275bff4..bc462d1ec963 100644
--- a/drivers/power/supply/generic-adc-battery.c
+++ b/drivers/power/supply/generic-adc-battery.c
@@ -244,6 +244,7 @@ static int gab_probe(struct platform_device *pdev)
int ret = 0;
int chan;
int index = ARRAY_SIZE(gab_props);
+ bool any = false;
adc_bat = devm_kzalloc(&pdev->dev, sizeof(*adc_bat), GFP_KERNEL);
if (!adc_bat) {
@@ -290,12 +291,22 @@ static int gab_probe(struct platform_device *pdev)
adc_bat->channel[chan] = NULL;
} else {
/* copying properties for supported channels only */
- psy_desc->properties[index++] = gab_dyn_props[chan];
+ int index2;
+
+ for (index2 = 0; index2 < index; index2++) {
+ if (psy_desc->properties[index2] ==
+ gab_dyn_props[chan])
+ break; /* already known */
+ }
+ if (index2 == index) /* really new */
+ psy_desc->properties[index++] =
+ gab_dyn_props[chan];
+ any = true;
}
}
/* none of the channels are supported so let's bail out */
- if (index == ARRAY_SIZE(gab_props)) {
+ if (!any) {
ret = -ENODEV;
goto second_mem_fail;
}
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b40b3e9358fbafff6a4ba0f4b9658f6617146f9c Mon Sep 17 00:00:00 2001
From: Dan Carpenter <dan.carpenter(a)oracle.com>
Date: Wed, 11 Jul 2018 15:29:31 +0300
Subject: [PATCH] mei: bus: type promotion bug in mei_nfc_if_version()
We accidentally removed the check for negative returns
without considering the issue of type promotion.
The "if_version_length" variable is of type size_t, so if
__mei_cl_recv() returns a negative value, then "bytes_recv" is
type-promoted to a high positive value and treated as success.
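A standalone demonstration (not the driver code) of the promotion:
comparing a signed ssize_t with an unsigned size_t converts the signed
value to size_t, so a negative error code becomes a huge positive number
and the length check never fires.

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
        ssize_t bytes_recv = -5;        /* stand-in for an error return */
        size_t if_version_length = 48;

        if (bytes_recv < if_version_length) /* -5 promoted to ~SIZE_MAX */
                printf("short read detected\n");    /* never reached */
        else
                printf("treated as success\n");

        /* the fixed check tests the sign before the promotion can bite */
        if (bytes_recv < 0 || (size_t)bytes_recv < if_version_length)
                printf("fixed check catches it\n");
        return 0;
}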
Cc: <stable(a)vger.kernel.org>
Fixes: 582ab27a063a ("mei: bus: fix received data size check in NFC fixup")
Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Tomas Winkler <tomas.winkler(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
index e45fe826d87d..65e28be3c8cc 100644
--- a/drivers/misc/mei/bus-fixup.c
+++ b/drivers/misc/mei/bus-fixup.c
@@ -341,7 +341,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
ret = 0;
bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0, 0);
- if (bytes_recv < if_version_length) {
+ if (bytes_recv < 0 || bytes_recv < if_version_length) {
dev_err(bus->dev, "Could not read IF version\n");
ret = -EIO;
goto err;
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b40b3e9358fbafff6a4ba0f4b9658f6617146f9c Mon Sep 17 00:00:00 2001
From: Dan Carpenter <dan.carpenter(a)oracle.com>
Date: Wed, 11 Jul 2018 15:29:31 +0300
Subject: [PATCH] mei: bus: type promotion bug in mei_nfc_if_version()
We accidentally removed the check for negative returns
without considering the issue of type promotion.
The "if_version_length" variable is of type size_t, so if
__mei_cl_recv() returns a negative value, then "bytes_recv" is
type-promoted to a high positive value and treated as success.
Cc: <stable(a)vger.kernel.org>
Fixes: 582ab27a063a ("mei: bus: fix received data size check in NFC fixup")
Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Tomas Winkler <tomas.winkler(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
index e45fe826d87d..65e28be3c8cc 100644
--- a/drivers/misc/mei/bus-fixup.c
+++ b/drivers/misc/mei/bus-fixup.c
@@ -341,7 +341,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
ret = 0;
bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0, 0);
- if (bytes_recv < if_version_length) {
+ if (bytes_recv < 0 || bytes_recv < if_version_length) {
dev_err(bus->dev, "Could not read IF version\n");
ret = -EIO;
goto err;
The patch below does not apply to the 4.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b40b3e9358fbafff6a4ba0f4b9658f6617146f9c Mon Sep 17 00:00:00 2001
From: Dan Carpenter <dan.carpenter(a)oracle.com>
Date: Wed, 11 Jul 2018 15:29:31 +0300
Subject: [PATCH] mei: bus: type promotion bug in mei_nfc_if_version()
We accidentally removed the check for negative returns
without considering the issue of type promotion.
The "if_version_length" variable is of type size_t, so if
__mei_cl_recv() returns a negative value, then "bytes_recv" is
type-promoted to a high positive value and treated as success.
Cc: <stable(a)vger.kernel.org>
Fixes: 582ab27a063a ("mei: bus: fix received data size check in NFC fixup")
Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Tomas Winkler <tomas.winkler(a)intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
index e45fe826d87d..65e28be3c8cc 100644
--- a/drivers/misc/mei/bus-fixup.c
+++ b/drivers/misc/mei/bus-fixup.c
@@ -341,7 +341,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
ret = 0;
bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0, 0);
- if (bytes_recv < if_version_length) {
+ if (bytes_recv < 0 || bytes_recv < if_version_length) {
dev_err(bus->dev, "Could not read IF version\n");
ret = -EIO;
goto err;
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 8c8d953c28000045e5e823f3398319f04d49a7f1 Mon Sep 17 00:00:00 2001
From: Paul Burton <paul.burton(a)mips.com>
Date: Tue, 19 Dec 2017 15:11:08 -0800
Subject: [PATCH] MIPS: Schedule on CPUs we need to lose FPU for a mode switch
Commit 6b8322576e9d ("MIPS: Force CPUs to lose FP context during mode
switches") ensures that we react to PR_SET_FP_MODE prctl syscalls
quickly by broadcasting an IPI in order to cause CPUs to lose FPU access
when necessary. Whilst it achieves that, unfortunately it causes all
sorts of strange race conditions because:
1) The IPI may arrive at a point where the FPU is in the process of
being enabled, but that process is not yet complete leading to a
state we aren't prepared to handle. For example:
[ 370.215903] do_cpu invoked from kernel context![#1]:
[ 370.221064] CPU: 0 PID: 963 Comm: fp-prctl Not tainted 4.9.0-rc5-00323-g210db32-dirty #226
[ 370.229420] task: a8000000fd672e00 task.stack: a8000000fd630000
[ 370.235399] $ 0 : 0000000000000000 0000000000000001 0000000000000001 a8000000fd630000
[ 370.243882] $ 4 : a8000000fd672e00 0000000000000000 0000000000000453 0000000000000000
[ 370.252317] $ 8 : 0000000000000000 a8000000fd637c28 1000000000000000 0000000000000010
[ 370.260753] $12 : 00000000140084e0 ffffffff80109c00 0000000000000000 0000000000000002
[ 370.269179] $16 : ffffffff8092f080 a8000000fd672e00 ffffffff80107fe8 a8000000fd485000
[ 370.277612] $20 : ffffffff8084d328 ffffffff80940000 0000000000000009 ffffffff80930000
[ 370.286038] $24 : 0000000000000000 900000001612048c
[ 370.294476] $28 : a8000000fd630000 a8000000fd637ac0 ffffffff80937300 ffffffff8010807c
[ 370.302909] Hi : 0000000000000000
[ 370.306595] Lo : 0000000000000200
[ 370.310376] epc : ffffffff80115d38 _save_fp+0x10/0xa0
[ 370.315784] ra : ffffffff8010807c prepare_for_fp_mode_switch+0x94/0x1b0
[ 370.322707] Status: 140084e2 KX SX UX KERNEL EXL
[ 370.327980] Cause : 1080002c (ExcCode 0b)
[ 370.332091] PrId : 0001a428 (MIPS P6600)
[ 370.336179] Modules linked in:
[ 370.339486] Process fp-prctl (pid: 963, threadinfo=a8000000fd630000, task=a8000000fd672e00, tls=00000000756e67d0)
[ 370.349724] Stack : 0000000000000000 a8000000fd557dc0 0000000000000000 ffffffff801ca8e0
[ 370.358161] 0000000000000000 a8000000fd637b9c 0000000000000009 ffffffff80923780
[ 370.366575] ffffffff80850000 ffffffff8011610c 00000000000000b8 ffffffff801a5084
[ 370.374989] ffffffff8084a370 ffffffff8084a388 ffffffff80923780 ffffffff80923828
[ 370.383395] 0000000000010000 ffffffff809237a8 0000000000020000 ffffffff80a40000
[ 370.391817] 000000000000007c 00000000004a0000 00000000756dedd0 ffffffff801a5188
[ 370.400230] a800000002014900 0000000000000001 ffffffff80923780 0000000080923828
[ 370.408644] ffffffff80923780 ffffffff80923780 ffffffff80923828 ffffffff801a521c
[ 370.417066] ffffffff80923780 ffffffff80923828 0000000000010000 ffffffff801a8f84
[ 370.425472] ffffffff80a40000 a8000000fd637c20 ffffffff80a39240 0000000000000001
[ 370.433885] ...
[ 370.436562] Call Trace:
[ 370.439222] [<ffffffff80115d38>] _save_fp+0x10/0xa0
[ 370.444305] [<ffffffff8010807c>] prepare_for_fp_mode_switch+0x94/0x1b0
[ 370.451035] [<ffffffff801ca8e0>] flush_smp_call_function_queue+0xf8/0x230
[ 370.457991] [<ffffffff8011610c>] ipi_call_interrupt+0xc/0x20
[ 370.463814] [<ffffffff801a5084>] __handle_irq_event_percpu+0xc4/0x1a8
[ 370.470404] [<ffffffff801a5188>] handle_irq_event_percpu+0x20/0x68
[ 370.476734] [<ffffffff801a521c>] handle_irq_event+0x4c/0x88
[ 370.482486] [<ffffffff801a8f84>] handle_edge_irq+0x12c/0x210
[ 370.488316] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
[ 370.494280] [<ffffffff804a2dbc>] gic_handle_shared_int+0x194/0x268
[ 370.500616] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
[ 370.506529] [<ffffffff80107e60>] do_IRQ+0x18/0x28
[ 370.511445] [<ffffffff804a1524>] plat_irq_dispatch+0xc4/0x140
[ 370.517339] [<ffffffff80106230>] ret_from_irq+0x0/0x4
[ 370.522583] [<ffffffff8010fad4>] do_ri+0x4fc/0x7e8
[ 370.527546] [<ffffffff80106220>] ret_from_exception+0x0/0x10
2) The IPI may arrive during kernel use of the FPU, since we generally
only disable preemption around use of the FPU & leave interrupts
enabled. This can lead to us unexpectedly losing access to the FPU
in places where it previously had not been possible. For example:
do_cpu invoked from kernel context![#2]:
CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G D 4.7.0-00424-g49b0c82
#2
task: 838e4000 ti: 88d38000 task.ti: 88d38000
$ 0 : 00000000 00000001 ffffffff 88d3fef8
$ 4 : 838e4000 88d38004 00000000 00000001
$ 8 : 3400fc01 801f8020 808e9100 24000000
$12 : dbffffff 807b69d8 807b0000 00000000
$16 : 00000000 80786150 00400fc4 809c0398
$20 : 809c0338 0040273c 88d3ff28 808e9d30
$24 : 808e9d30 00400fb4
$28 : 88d38000 88d3fe88 00000000 8011a2ac
Hi : 0040273c
Lo : 88d3ff28
epc : 80114178 _restore_fp+0x10/0xa0
ra : 8011a2ac mipsr2_decoder+0xd5c/0x1660
Status: 1400fc03 KERNEL EXL IE
Cause : 1080002c (ExcCode 0b)
PrId : 0001a920 (MIPS I6400)
Modules linked in:
Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
...
Call Trace:
[<80114178>] _restore_fp+0x10/0xa0
[<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
[<8010de18>] do_ri+0x90/0x6b8
[<80105c20>] ret_from_exception+0x0/0x10
At first glance a simple fix may seem to be to disable interrupts around
kernel use of the FPU rather than merely preemption, however this would
introduce further overhead outside of the mode switch path & doesn't
solve the third problem:
3) The IPI may arrive whilst the kernel is running code that will lead
to a preempt_disable() call & FPU usage soon. If this happens then
the IPI will be serviced & we'll proceed to enable an FPU whilst the
mode switch is in progress, leading to strange & inconsistent
behaviour.
Further to all of this is a separate but related problem:
4) There are various paths through which we may enable the FPU without
the user having triggered a coprocessor 1 disabled exception. These
paths are those in which we emulate instructions & then enable the
FPU with the expectation that the user might execute an FP
instruction shortly afterwards. However these paths have not
previously checked whether an FP mode switch is underway for the
task, and therefore could enable the FPU whilst such a mode switch
is in progress leading to strange & inconsistent behaviour for user
code.
This patch fixes all of the above by taking a step back & re-examining
our approach to FP mode switches. Up until now we have taken these basic
steps:
a) Prevent any threads that are part of the affected process from being
able to obtain ownership of the FPU.
b) Cause any threads that are part of the affected process and already
have ownership of an FPU to lose it.
c) Set the thread flags for each thread that is part of the affected
process to reflect the new FP mode.
d) Allow threads to obtain ownership of the FPU again.
This approach is however more complex than necessary. All that we really
require is that the mode switch has occurred for all threads that are
part of the affected process before mips_set_process_fp_mode(), and thus
the PR_SET_FP_MODE prctl() syscall, returns. This doesn't require that
we stop threads from owning or using an FPU whilst a mode switch occurs,
only that we force them to relinquish it after the mode switch has
occurred such that they next own an FPU with the correct mode
configured. Our basic steps therefore simplify to:
A) Set the thread flags for each thread that is part of the affected
process to reflect the new FP mode.
B) Cause any threads that are part of the affected process and already
have ownership of an FPU to lose it.
We implement B) by forcing each CPU which might be running a thread
which is part of the affected process to schedule a no-op function,
which causes the affected thread to lose its FPU ownership when it is
descheduled.
The end result is simpler FP mode switching with less overhead in the
FPU enable path (ie. enable_restore_fp_context()) and fewer moving
parts.
Signed-off-by: Paul Burton <paul.burton(a)mips.com>
Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Fixes: 6b8322576e9d ("MIPS: Force CPUs to lose FP context during mode switches")
Cc: James Hogan <jhogan(a)kernel.org>
Cc: Ralf Baechle <ralf(a)linux-mips.org>
Cc: linux-mips(a)linux-mips.org
Cc: stable <stable(a)vger.kernel.org> # v4.0+
diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index da2004cef2d5..b509371a6b0c 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -126,8 +126,6 @@ init_new_context(struct task_struct *tsk, struct mm_struct *mm)
for_each_possible_cpu(i)
cpu_context(i, mm) = 0;
- atomic_set(&mm->context.fp_mode_switching, 0);
-
mm->context.bd_emupage_allocmap = NULL;
spin_lock_init(&mm->context.bd_emupage_lock);
init_waitqueue_head(&mm->context.bd_emupage_queue);
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index 8d85046adcc8..fe6001d748cf 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -29,6 +29,7 @@
#include <linux/kallsyms.h>
#include <linux/random.h>
#include <linux/prctl.h>
+#include <linux/cpu.h>
#include <asm/asm.h>
#include <asm/bootinfo.h>
@@ -691,19 +692,25 @@ int mips_get_process_fp_mode(struct task_struct *task)
return value;
}
-static void prepare_for_fp_mode_switch(void *info)
+static long prepare_for_fp_mode_switch(void *unused)
{
- struct mm_struct *mm = info;
-
- if (current->mm == mm)
- lose_fpu(1);
+ /*
+ * This is icky, but we use this to simply ensure that all CPUs have
+ * context switched, regardless of whether they were previously running
+ * kernel or user code. This ensures that no CPU currently has its FPU
+ * enabled, or is about to attempt to enable it through any path other
+ * than enable_restore_fp_context() which will wait appropriately for
+ * fp_mode_switching to be zero.
+ */
+ return 0;
}
int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
{
const unsigned int known_bits = PR_FP_MODE_FR | PR_FP_MODE_FRE;
struct task_struct *t;
- int max_users;
+ struct cpumask process_cpus;
+ int cpu;
/* If nothing to change, return right away, successfully. */
if (value == mips_get_process_fp_mode(task))
@@ -736,35 +743,7 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
if (!(value & PR_FP_MODE_FR) && raw_cpu_has_fpu && cpu_has_mips_r6)
return -EOPNOTSUPP;
- /* Proceed with the mode switch */
- preempt_disable();
-
- /* Save FP & vector context, then disable FPU & MSA */
- if (task->signal == current->signal)
- lose_fpu(1);
-
- /* Prevent any threads from obtaining live FP context */
- atomic_set(&task->mm->context.fp_mode_switching, 1);
- smp_mb__after_atomic();
-
- /*
- * If there are multiple online CPUs then force any which are running
- * threads in this process to lose their FPU context, which they can't
- * regain until fp_mode_switching is cleared later.
- */
- if (num_online_cpus() > 1) {
- /* No need to send an IPI for the local CPU */
- max_users = (task->mm == current->mm) ? 1 : 0;
-
- if (atomic_read(¤t->mm->mm_users) > max_users)
- smp_call_function(prepare_for_fp_mode_switch,
- (void *)current->mm, 1);
- }
-
- /*
- * There are now no threads of the process with live FP context, so it
- * is safe to proceed with the FP mode switch.
- */
+ /* Indicate the new FP mode in each thread */
for_each_thread(task, t) {
/* Update desired FP register width */
if (value & PR_FP_MODE_FR) {
@@ -781,9 +760,34 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
clear_tsk_thread_flag(t, TIF_HYBRID_FPREGS);
}
- /* Allow threads to use FP again */
- atomic_set(&task->mm->context.fp_mode_switching, 0);
- preempt_enable();
+ /*
+ * We need to ensure that all threads in the process have switched mode
+ * before returning, in order to allow userland to not worry about
+ * races. We can do this by forcing all CPUs that any thread in the
+ * process may be running on to schedule something else - in this case
+ * prepare_for_fp_mode_switch().
+ *
+ * We begin by generating a mask of all CPUs that any thread in the
+ * process may be running on.
+ */
+ cpumask_clear(&process_cpus);
+ for_each_thread(task, t)
+ cpumask_set_cpu(task_cpu(t), &process_cpus);
+
+ /*
+ * Now we schedule prepare_for_fp_mode_switch() on each of those CPUs.
+ *
+ * The CPUs may have rescheduled already since we switched mode or
+ * generated the cpumask, but that doesn't matter. If the task in this
+ * process is scheduled out then our scheduling
+ * prepare_for_fp_mode_switch() will simply be redundant. If it's
+ * scheduled in then it will already have picked up the new FP mode
+ * whilst doing so.
+ */
+ get_online_cpus();
+ for_each_cpu_and(cpu, &process_cpus, cpu_online_mask)
+ work_on_cpu(cpu, prepare_for_fp_mode_switch, NULL);
+ put_online_cpus();
wake_up_var(&task->mm->context.fp_mode_switching);
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index d67fa74622ee..4d9ca9b465ae 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -1220,13 +1220,6 @@ static int enable_restore_fp_context(int msa)
{
int err, was_fpu_owner, prior_msa;
- /*
- * If an FP mode switch is currently underway, wait for it to
- * complete before proceeding.
- */
- wait_var_event(¤t->mm->context.fp_mode_switching,
- !atomic_read(¤t->mm->context.fp_mode_switching));
-
if (!used_math()) {
/* First time FP context user. */
preempt_disable();
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 8c8d953c28000045e5e823f3398319f04d49a7f1 Mon Sep 17 00:00:00 2001
From: Paul Burton <paul.burton(a)mips.com>
Date: Tue, 19 Dec 2017 15:11:08 -0800
Subject: [PATCH] MIPS: Schedule on CPUs we need to lose FPU for a mode switch
Commit 6b8322576e9d ("MIPS: Force CPUs to lose FP context during mode
switches") ensures that we react to PR_SET_FP_MODE prctl syscalls
quickly by broadcasting an IPI in order to cause CPUs to lose FPU access
when necessary. Whilst it achieves that, unfortunately it causes all
sorts of strange race conditions because:
1) The IPI may arrive at a point where the FPU is in the process of
being enabled, but that process is not yet complete leading to a
state we aren't prepared to handle. For example:
[ 370.215903] do_cpu invoked from kernel context![#1]:
[ 370.221064] CPU: 0 PID: 963 Comm: fp-prctl Not tainted 4.9.0-rc5-00323-g210db32-dirty #226
[ 370.229420] task: a8000000fd672e00 task.stack: a8000000fd630000
[ 370.235399] $ 0 : 0000000000000000 0000000000000001 0000000000000001 a8000000fd630000
[ 370.243882] $ 4 : a8000000fd672e00 0000000000000000 0000000000000453 0000000000000000
[ 370.252317] $ 8 : 0000000000000000 a8000000fd637c28 1000000000000000 0000000000000010
[ 370.260753] $12 : 00000000140084e0 ffffffff80109c00 0000000000000000 0000000000000002
[ 370.269179] $16 : ffffffff8092f080 a8000000fd672e00 ffffffff80107fe8 a8000000fd485000
[ 370.277612] $20 : ffffffff8084d328 ffffffff80940000 0000000000000009 ffffffff80930000
[ 370.286038] $24 : 0000000000000000 900000001612048c
[ 370.294476] $28 : a8000000fd630000 a8000000fd637ac0 ffffffff80937300 ffffffff8010807c
[ 370.302909] Hi : 0000000000000000
[ 370.306595] Lo : 0000000000000200
[ 370.310376] epc : ffffffff80115d38 _save_fp+0x10/0xa0
[ 370.315784] ra : ffffffff8010807c prepare_for_fp_mode_switch+0x94/0x1b0
[ 370.322707] Status: 140084e2 KX SX UX KERNEL EXL
[ 370.327980] Cause : 1080002c (ExcCode 0b)
[ 370.332091] PrId : 0001a428 (MIPS P6600)
[ 370.336179] Modules linked in:
[ 370.339486] Process fp-prctl (pid: 963, threadinfo=a8000000fd630000, task=a8000000fd672e00, tls=00000000756e67d0)
[ 370.349724] Stack : 0000000000000000 a8000000fd557dc0 0000000000000000 ffffffff801ca8e0
[ 370.358161] 0000000000000000 a8000000fd637b9c 0000000000000009 ffffffff80923780
[ 370.366575] ffffffff80850000 ffffffff8011610c 00000000000000b8 ffffffff801a5084
[ 370.374989] ffffffff8084a370 ffffffff8084a388 ffffffff80923780 ffffffff80923828
[ 370.383395] 0000000000010000 ffffffff809237a8 0000000000020000 ffffffff80a40000
[ 370.391817] 000000000000007c 00000000004a0000 00000000756dedd0 ffffffff801a5188
[ 370.400230] a800000002014900 0000000000000001 ffffffff80923780 0000000080923828
[ 370.408644] ffffffff80923780 ffffffff80923780 ffffffff80923828 ffffffff801a521c
[ 370.417066] ffffffff80923780 ffffffff80923828 0000000000010000 ffffffff801a8f84
[ 370.425472] ffffffff80a40000 a8000000fd637c20 ffffffff80a39240 0000000000000001
[ 370.433885] ...
[ 370.436562] Call Trace:
[ 370.439222] [<ffffffff80115d38>] _save_fp+0x10/0xa0
[ 370.444305] [<ffffffff8010807c>] prepare_for_fp_mode_switch+0x94/0x1b0
[ 370.451035] [<ffffffff801ca8e0>] flush_smp_call_function_queue+0xf8/0x230
[ 370.457991] [<ffffffff8011610c>] ipi_call_interrupt+0xc/0x20
[ 370.463814] [<ffffffff801a5084>] __handle_irq_event_percpu+0xc4/0x1a8
[ 370.470404] [<ffffffff801a5188>] handle_irq_event_percpu+0x20/0x68
[ 370.476734] [<ffffffff801a521c>] handle_irq_event+0x4c/0x88
[ 370.482486] [<ffffffff801a8f84>] handle_edge_irq+0x12c/0x210
[ 370.488316] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
[ 370.494280] [<ffffffff804a2dbc>] gic_handle_shared_int+0x194/0x268
[ 370.500616] [<ffffffff801a47a0>] generic_handle_irq+0x38/0x48
[ 370.506529] [<ffffffff80107e60>] do_IRQ+0x18/0x28
[ 370.511445] [<ffffffff804a1524>] plat_irq_dispatch+0xc4/0x140
[ 370.517339] [<ffffffff80106230>] ret_from_irq+0x0/0x4
[ 370.522583] [<ffffffff8010fad4>] do_ri+0x4fc/0x7e8
[ 370.527546] [<ffffffff80106220>] ret_from_exception+0x0/0x10
2) The IPI may arrive during kernel use of the FPU, since we generally
only disable preemption around use of the FPU & leave interrupts
enabled. This can lead to us unexpectedly losing access to the FPU
in places where it previously had not been possible. For example:
do_cpu invoked from kernel context![#2]:
CPU: 2 PID: 7338 Comm: fp-prctl Tainted: G D 4.7.0-00424-g49b0c82 #2
task: 838e4000 ti: 88d38000 task.ti: 88d38000
$ 0 : 00000000 00000001 ffffffff 88d3fef8
$ 4 : 838e4000 88d38004 00000000 00000001
$ 8 : 3400fc01 801f8020 808e9100 24000000
$12 : dbffffff 807b69d8 807b0000 00000000
$16 : 00000000 80786150 00400fc4 809c0398
$20 : 809c0338 0040273c 88d3ff28 808e9d30
$24 : 808e9d30 00400fb4
$28 : 88d38000 88d3fe88 00000000 8011a2ac
Hi : 0040273c
Lo : 88d3ff28
epc : 80114178 _restore_fp+0x10/0xa0
ra : 8011a2ac mipsr2_decoder+0xd5c/0x1660
Status: 1400fc03 KERNEL EXL IE
Cause : 1080002c (ExcCode 0b)
PrId : 0001a920 (MIPS I6400)
Modules linked in:
Process fp-prctl (pid: 7338, threadinfo=88d38000, task=838e4000, tls=766527d0)
Stack : 00000000 00000000 00000000 88d3fe98 00000000 00000000 809c0398 809c0338
808e9100 00000000 88d3ff28 00400fc4 00400fc4 0040273c 7fb69e18 004a0000
004a0000 004a0000 7664add0 8010de18 00000000 00000000 88d3fef8 88d3ff28
808e9100 00000000 766527d0 8010e534 000c0000 85755000 8181d580 00000000
00000000 00000000 004a0000 00000000 766527d0 7fb69e18 004a0000 80105c20
...
Call Trace:
[<80114178>] _restore_fp+0x10/0xa0
[<8011a2ac>] mipsr2_decoder+0xd5c/0x1660
[<8010de18>] do_ri+0x90/0x6b8
[<80105c20>] ret_from_exception+0x0/0x10
At first glance a simple fix may seem to be to disable interrupts around
kernel use of the FPU rather than merely preemption, however this would
introduce further overhead outside of the mode switch path & doesn't
solve the third problem:
3) The IPI may arrive whilst the kernel is running code that will lead
to a preempt_disable() call & FPU usage soon. If this happens then
the IPI will be serviced & we'll proceed to enable an FPU whilst the
mode switch is in progress, leading to strange & inconsistent
behaviour.
Further to all of this is a separate but related problem:
4) There are various paths through which we may enable the FPU without
the user having triggered a coprocessor 1 disabled exception. These
paths are those in which we emulate instructions & then enable the
FPU with the expectation that the user might execute an FP
instruction shortly afterwards. However these paths have not
previously checked whether an FP mode switch is underway for the
task, and therefore could enable the FPU whilst such a mode switch
is in progress leading to strange & inconsistent behaviour for user
code.
This patch fixes all of the above by taking a step back & re-examining
our approach to FP mode switches. Up until now we have taken these basic
steps:
a) Prevent any threads that are part of the affected process from being
able to obtain ownership of the FPU.
b) Cause any threads that are part of the affected process and already
have ownership of an FPU to lose it.
c) Set the thread flags for each thread that is part of the affected
process to reflect the new FP mode.
d) Allow threads to obtain ownership of the FPU again.
This approach is however more complex than necessary. All that we really
require is that the mode switch has occurred for all threads that are
part of the affected process before mips_set_process_fp_mode(), and thus
the PR_SET_FP_MODE prctl() syscall, returns. This doesn't require that
we stop threads from owning or using an FPU whilst a mode switch occurs,
only that we force them to relinquish it after the mode switch has
occurred such that they next own an FPU with the correct mode
configured. Our basic steps therefore simplify to:
A) Set the thread flags for each thread that is part of the affected
process to reflect the new FP mode.
B) Cause any threads that are part of the affected process and already
have ownership of an FPU to lose it.
We implement B) by forcing each CPU which might be running a thread
which is part of the affected process to schedule a no-op function,
which causes the affected thread to lose its FPU ownership when it is
descheduled.
The end result is simpler FP mode switching with less overhead in the
FPU enable path (ie. enable_restore_fp_context()) and fewer moving
parts.
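For reference, the interface whose mode switches this commit serialises is
the PR_SET_FP_MODE prctl. Below is a minimal userland sketch of driving it -
not part of the patch; the fallback constants mirror linux/prctl.h in case
older userland headers lack them:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_FP_MODE
# define PR_SET_FP_MODE 45
# define PR_GET_FP_MODE 46
# define PR_FP_MODE_FR  (1 << 0)        /* 64b FP registers */
# define PR_FP_MODE_FRE (1 << 1)        /* 32b compatibility */
#endif

int main(void)
{
        int old = prctl(PR_GET_FP_MODE, 0, 0, 0, 0);

        /*
         * By the time this returns 0, every thread in the process has
         * picked up the new FP mode - the guarantee the commit preserves
         * while dropping the fp_mode_switching wait.
         */
        if (prctl(PR_SET_FP_MODE, PR_FP_MODE_FR, 0, 0, 0) != 0)
                perror("PR_SET_FP_MODE");
        else
                printf("FP mode switched from %d to FR=1\n", old);

        return 0;
}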
Signed-off-by: Paul Burton <paul.burton(a)mips.com>
Fixes: 9791554b45a2 ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Fixes: 6b8322576e9d ("MIPS: Force CPUs to lose FP context during mode switches")
Cc: James Hogan <jhogan(a)kernel.org>
Cc: Ralf Baechle <ralf(a)linux-mips.org>
Cc: linux-mips(a)linux-mips.org
Cc: stable <stable(a)vger.kernel.org> # v4.0+
diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index da2004cef2d5..b509371a6b0c 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -126,8 +126,6 @@ init_new_context(struct task_struct *tsk, struct mm_struct *mm)
for_each_possible_cpu(i)
cpu_context(i, mm) = 0;
- atomic_set(&mm->context.fp_mode_switching, 0);
-
mm->context.bd_emupage_allocmap = NULL;
spin_lock_init(&mm->context.bd_emupage_lock);
init_waitqueue_head(&mm->context.bd_emupage_queue);
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index 8d85046adcc8..fe6001d748cf 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -29,6 +29,7 @@
#include <linux/kallsyms.h>
#include <linux/random.h>
#include <linux/prctl.h>
+#include <linux/cpu.h>
#include <asm/asm.h>
#include <asm/bootinfo.h>
@@ -691,19 +692,25 @@ int mips_get_process_fp_mode(struct task_struct *task)
return value;
}
-static void prepare_for_fp_mode_switch(void *info)
+static long prepare_for_fp_mode_switch(void *unused)
{
- struct mm_struct *mm = info;
-
- if (current->mm == mm)
- lose_fpu(1);
+ /*
+ * This is icky, but we use this to simply ensure that all CPUs have
+ * context switched, regardless of whether they were previously running
+ * kernel or user code. This ensures that no CPU currently has its FPU
+ * enabled, or is about to attempt to enable it through any path other
+ * than enable_restore_fp_context() which will wait appropriately for
+ * fp_mode_switching to be zero.
+ */
+ return 0;
}
int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
{
const unsigned int known_bits = PR_FP_MODE_FR | PR_FP_MODE_FRE;
struct task_struct *t;
- int max_users;
+ struct cpumask process_cpus;
+ int cpu;
/* If nothing to change, return right away, successfully. */
if (value == mips_get_process_fp_mode(task))
@@ -736,35 +743,7 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
if (!(value & PR_FP_MODE_FR) && raw_cpu_has_fpu && cpu_has_mips_r6)
return -EOPNOTSUPP;
- /* Proceed with the mode switch */
- preempt_disable();
-
- /* Save FP & vector context, then disable FPU & MSA */
- if (task->signal == current->signal)
- lose_fpu(1);
-
- /* Prevent any threads from obtaining live FP context */
- atomic_set(&task->mm->context.fp_mode_switching, 1);
- smp_mb__after_atomic();
-
- /*
- * If there are multiple online CPUs then force any which are running
- * threads in this process to lose their FPU context, which they can't
- * regain until fp_mode_switching is cleared later.
- */
- if (num_online_cpus() > 1) {
- /* No need to send an IPI for the local CPU */
- max_users = (task->mm == current->mm) ? 1 : 0;
-
- if (atomic_read(&current->mm->mm_users) > max_users)
- smp_call_function(prepare_for_fp_mode_switch,
- (void *)current->mm, 1);
- }
-
- /*
- * There are now no threads of the process with live FP context, so it
- * is safe to proceed with the FP mode switch.
- */
+ /* Indicate the new FP mode in each thread */
for_each_thread(task, t) {
/* Update desired FP register width */
if (value & PR_FP_MODE_FR) {
@@ -781,9 +760,34 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
clear_tsk_thread_flag(t, TIF_HYBRID_FPREGS);
}
- /* Allow threads to use FP again */
- atomic_set(&task->mm->context.fp_mode_switching, 0);
- preempt_enable();
+ /*
+ * We need to ensure that all threads in the process have switched mode
+ * before returning, in order to allow userland to not worry about
+ * races. We can do this by forcing all CPUs that any thread in the
+ * process may be running on to schedule something else - in this case
+ * prepare_for_fp_mode_switch().
+ *
+ * We begin by generating a mask of all CPUs that any thread in the
+ * process may be running on.
+ */
+ cpumask_clear(&process_cpus);
+ for_each_thread(task, t)
+ cpumask_set_cpu(task_cpu(t), &process_cpus);
+
+ /*
+ * Now we schedule prepare_for_fp_mode_switch() on each of those CPUs.
+ *
+ * The CPUs may have rescheduled already since we switched mode or
+ * generated the cpumask, but that doesn't matter. If the task in this
+ * process is scheduled out then our scheduling
+ * prepare_for_fp_mode_switch() will simply be redundant. If it's
+ * scheduled in then it will already have picked up the new FP mode
+ * whilst doing so.
+ */
+ get_online_cpus();
+ for_each_cpu_and(cpu, &process_cpus, cpu_online_mask)
+ work_on_cpu(cpu, prepare_for_fp_mode_switch, NULL);
+ put_online_cpus();
wake_up_var(&task->mm->context.fp_mode_switching);
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index d67fa74622ee..4d9ca9b465ae 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -1220,13 +1220,6 @@ static int enable_restore_fp_context(int msa)
{
int err, was_fpu_owner, prior_msa;
- /*
- * If an FP mode switch is currently underway, wait for it to
- * complete before proceeding.
- */
- wait_var_event(&current->mm->context.fp_mode_switching,
- !atomic_read(&current->mm->context.fp_mode_switching));
-
if (!used_math()) {
/* First time FP context user. */
preempt_disable();
The patch below does not apply to the 4.4 and 4.9-stable trees.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 344ebf09949c31bcb8818d8458b65add29f1d67b Mon Sep 17 00:00:00 2001
From: Paul Burton <paul.burton(a)mips.com>
Date: Mon, 18 Jun 2018 17:37:59 -0700
Subject: [PATCH] MIPS: Always use -march=<arch>, not -<arch> shortcuts
The VDSO Makefile filters CFLAGS to select a subset which it uses whilst
building the VDSO ELF. One of the flags it allows through is the -march=
flag that selects the architecture/ISA to target.
Unfortunately in cases where CONFIG_CPU_MIPS32_R{1,2}=y and the
toolchain defaults to building for MIPS64, the main MIPS Makefile ends
up using the short-form -<arch> flags in cflags-y. This is because the
calls to cc-option always fail to use the long-form -march=<arch> flag
due to the lack of an -mabi=<abi> flag in KBUILD_CFLAGS at the point
where the cc-option function is executed. The resulting GCC invocation
is something like:
$ mips64-linux-gcc -Werror -march=mips32r2 -c -x c /dev/null -o tmp
cc1: error: '-march=mips32r2' is not compatible with the selected ABI
These short-form -<arch> flags are dropped by the VDSO Makefile's
filtering, and so we attempt to build the VDSO without specifying any
architecture. This results in an attempt to build the VDSO using
whatever the compiler's default architecture is, regardless of whether
that is suitable for the kernel configuration.
One encountered build failure resulting from this mismatch is a
rejection of the sync instruction if the kernel is configured for a
MIPS32 or MIPS64 r1 or r2 target but the toolchain defaults to an older
architecture revision such as MIPS1 which did not include the sync
instruction:
CC arch/mips/vdso/gettimeofday.o
/tmp/ccGQKoOj.s: Assembler messages:
/tmp/ccGQKoOj.s:273: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:329: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:520: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:714: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1009: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1066: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1114: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1279: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1334: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1374: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1459: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1514: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:1814: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:2002: Error: opcode not supported on this processor: mips1 (mips1) `sync'
/tmp/ccGQKoOj.s:2066: Error: opcode not supported on this processor: mips1 (mips1) `sync'
make[2]: *** [scripts/Makefile.build:318: arch/mips/vdso/gettimeofday.o] Error 1
make[1]: *** [scripts/Makefile.build:558: arch/mips/vdso] Error 2
make[1]: *** Waiting for unfinished jobs....
This can be reproduced for example by attempting to build
pistachio_defconfig using Arnd's GCC 8.1.0 mips64 toolchain from
kernel.org:
https://mirrors.edge.kernel.org/pub/tools/crosstool/files/bin/x86_64/8.1.0/…
Resolve this problem by using the long-form -march=<arch> in all cases,
which makes it through the arch/mips/vdso/Makefile's filtering & is thus
consistently used to build both the kernel proper & the VDSO.
The use of cc-option to prefer the long-form & fall back to the
short-form flags makes no sense since the short-form is just an
abbreviation for the also-supported long-form in all GCC versions that
we support building with. This means there is no case in which we have
to use the short-form -<arch> flags, so we can simply remove them.
The manual redefinition of _MIPS_ISA is removed naturally along with the
use of the short-form flags that it accompanied, and whilst here we
remove the separate assembler ISA selection. I suspect that both of
these were only required due to the mips32 vs mips2 mismatch that was
introduced by commit 59b3e8e9aac6 ("[MIPS] Makefile crapectomy.") and
fixed but not cleaned up by commit 9200c0b2a07c ("[MIPS] Fix Makefile
bugs for MIPS32/MIPS64 R1 and R2.").
I've marked this for backport as far as v4.4 where the MIPS VDSO was
introduced. In earlier kernels there should be no ill effect to using
the short-form flags.
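For illustration, the probe that kept failing works roughly as follows - a
simplified paraphrase of cc-option from scripts/Kbuild.include, not its
exact definition:

# cc-option: emit $(1) if the compiler accepts it, else fall back to $(2).
cc-option = $(shell if $(CC) $(KBUILD_CFLAGS) $(1) -c -x c /dev/null \
                -o /dev/null >/dev/null 2>&1; then echo "$(1)"; \
                else echo "$(2)"; fi)

# Before this patch, KBUILD_CFLAGS carried no -mabi= flag at probe time, so
# a MIPS64-defaulting gcc rejected -march=mips32r2 and the short form won:
cflags-$(CONFIG_CPU_MIPS32_R2) += $(call cc-option,-march=mips32r2,-mips32r2)

With the fallback removed there is no probe left to give a wrong answer, and
the -march= flag survives the VDSO Makefile's CFLAGS filtering.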
Signed-off-by: Paul Burton <paul.burton(a)mips.com>
Cc: Ralf Baechle <ralf(a)linux-mips.org>
Cc: linux-mips(a)linux-mips.org
Cc: stable(a)vger.kernel.org # v4.4+
Reviewed-by: James Hogan <jhogan(a)kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/19579/
diff --git a/arch/mips/Makefile b/arch/mips/Makefile
index e2122cca4ae2..1e98d22ec119 100644
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -155,15 +155,11 @@ cflags-$(CONFIG_CPU_R4300) += -march=r4300 -Wa,--trap
cflags-$(CONFIG_CPU_VR41XX) += -march=r4100 -Wa,--trap
cflags-$(CONFIG_CPU_R4X00) += -march=r4600 -Wa,--trap
cflags-$(CONFIG_CPU_TX49XX) += -march=r4600 -Wa,--trap
-cflags-$(CONFIG_CPU_MIPS32_R1) += $(call cc-option,-march=mips32,-mips32 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \
- -Wa,-mips32 -Wa,--trap
-cflags-$(CONFIG_CPU_MIPS32_R2) += $(call cc-option,-march=mips32r2,-mips32r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS32) \
- -Wa,-mips32r2 -Wa,--trap
+cflags-$(CONFIG_CPU_MIPS32_R1) += -march=mips32 -Wa,--trap
+cflags-$(CONFIG_CPU_MIPS32_R2) += -march=mips32r2 -Wa,--trap
cflags-$(CONFIG_CPU_MIPS32_R6) += -march=mips32r6 -Wa,--trap -modd-spreg
-cflags-$(CONFIG_CPU_MIPS64_R1) += $(call cc-option,-march=mips64,-mips64 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \
- -Wa,-mips64 -Wa,--trap
-cflags-$(CONFIG_CPU_MIPS64_R2) += $(call cc-option,-march=mips64r2,-mips64r2 -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64) \
- -Wa,-mips64r2 -Wa,--trap
+cflags-$(CONFIG_CPU_MIPS64_R1) += -march=mips64 -Wa,--trap
+cflags-$(CONFIG_CPU_MIPS64_R2) += -march=mips64r2 -Wa,--trap
cflags-$(CONFIG_CPU_MIPS64_R6) += -march=mips64r6 -Wa,--trap
cflags-$(CONFIG_CPU_R5000) += -march=r5000 -Wa,--trap
cflags-$(CONFIG_CPU_R5432) += $(call cc-option,-march=r5400,-march=r5000) \
The patch below does not apply to the 4.4 and 4.9-stable trees.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b1c03f1ef48d36ff28afb06e8f0c1233ef072f1d Mon Sep 17 00:00:00 2001
From: Matt Redfearn <matt.redfearn(a)mips.com>
Date: Wed, 23 May 2018 14:39:58 +0100
Subject: [PATCH] MIPS: memset.S: Fix byte_fixup for MIPSr6
The __clear_user function is defined to return the number of bytes that
could not be cleared. From the underlying memset / bzero implementation
this means setting register a2 to that number on return. Currently if a
page fault is triggered within the MIPSr6 version of setting of initial
unaligned bytes, the value loaded into a2 on return is meaningless.
During the MIPSr6 version of the initial unaligned bytes block, register
a2 contains the number of bytes to be set beyond the initial unaligned
bytes. The t0 register is initally set to the number of unaligned bytes
- STORSIZE, effectively a negative version of the number of unaligned
bytes. This is then incremented before each byte is saved.
The label .Lbyte_fixup\@ is jumped to on page fault. Currently the value
in a2 is incorrectly replaced by 0 - t0 + 1, effectively the number of
unaligned bytes remaining. This leads to the failures being reported by
the following test code:
static int __init test_clear_user(void)
{
        int j, k;

        pr_info("\n\n\nTesting clear_user\n");
        for (j = 0; j < 512; j++) {
                if ((k = clear_user(NULL+3, j)) != j) {
                        pr_err("clear_user (NULL %d) returned %d\n", j, k);
                }
        }
        return 0;
}
late_initcall(test_clear_user);
Which reports:
[ 3.965439] Testing clear_user
[ 3.973169] clear_user (NULL 8) returned 6
[ 3.976782] clear_user (NULL 9) returned 6
[ 3.980390] clear_user (NULL 10) returned 6
[ 3.984052] clear_user (NULL 11) returned 6
[ 3.987524] clear_user (NULL 12) returned 6
Fix this by subtracting t0 from a2 (rather than $0), effectively giving:
unset_bytes = (#bytes - (#unaligned bytes)) - (-#unaligned bytes remaining + 1) + 1
a2 = a2 - t0 + 1
This fixes the value returned from __clear_user when the number of bytes
to set is > LONGSIZE and the address is invalid and unaligned.
Unfortunately, this breaks the fixup handling for unaligned bytes after
the final long, where register a2 still contains the number of bytes
remaining to be set and the t0 register is set to 0 - the number of
unaligned bytes remaining.
Because t0 is now subtracted from a2 rather than 0, the number of
bytes unset is reported incorrectly:
static int __init test_clear_user(void)
{
        char *test;
        int j, k;

        pr_info("\n\n\nTesting clear_user\n");
        test = vmalloc(PAGE_SIZE);
        for (j = 256; j < 512; j++) {
                if ((k = clear_user(test + PAGE_SIZE - 254, j)) != j - 254) {
                        pr_err("clear_user (%px %d) returned %d\n",
                                test + PAGE_SIZE - 254, j, k);
                }
        }
        return 0;
}
late_initcall(test_clear_user);
[ 3.976775] clear_user (c00000000000df02 256) returned 4
[ 3.981957] clear_user (c00000000000df02 257) returned 6
[ 3.986425] clear_user (c00000000000df02 258) returned 8
[ 3.990850] clear_user (c00000000000df02 259) returned 10
[ 3.995332] clear_user (c00000000000df02 260) returned 12
[ 3.999815] clear_user (c00000000000df02 261) returned 14
Fix this by ensuring that a2 is set to 0 during the set of final
unaligned bytes.
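In C terms the two fixups compare as follows - an informal paraphrase of
the assembly in the diff below, not code from the patch:

/* Old .Lbyte_fixup: reports only the bytes left in the unaligned prefix */
a2 = 0 - t0 + 1;

/*
 * New .Lbyte_fixup: add the prefix shortfall (-t0) to whatever is still
 * pending in a2. The final-bytes path now zeroes a2 up front, so the
 * same expression stays correct there as well.
 */
a2 = a2 - t0 + 1;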
Signed-off-by: Matt Redfearn <matt.redfearn(a)mips.com>
Signed-off-by: Paul Burton <paul.burton(a)mips.com>
Fixes: 8c56208aff77 ("MIPS: lib: memset: Add MIPS R6 support")
Patchwork: https://patchwork.linux-mips.org/patch/19338/
Cc: James Hogan <jhogan(a)kernel.org>
Cc: Ralf Baechle <ralf(a)linux-mips.org>
Cc: linux-mips(a)linux-mips.org
Cc: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org # v4.0+
diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
index 1cc306520a55..fac26ce64b2f 100644
--- a/arch/mips/lib/memset.S
+++ b/arch/mips/lib/memset.S
@@ -195,6 +195,7 @@
#endif
#else
PTR_SUBU t0, $0, a2
+ move a2, zero /* No remaining longs */
PTR_ADDIU t0, 1
STORE_BYTE(0)
STORE_BYTE(1)
@@ -231,7 +232,7 @@
#ifdef CONFIG_CPU_MIPSR6
.Lbyte_fixup\@:
- PTR_SUBU a2, $0, t0
+ PTR_SUBU a2, t0
jr ra
PTR_ADDIU a2, 1
#endif /* CONFIG_CPU_MIPSR6 */
The patch below does not apply to the 4.9-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b1c03f1ef48d36ff28afb06e8f0c1233ef072f1d Mon Sep 17 00:00:00 2001
From: Matt Redfearn <matt.redfearn(a)mips.com>
Date: Wed, 23 May 2018 14:39:58 +0100
Subject: [PATCH] MIPS: memset.S: Fix byte_fixup for MIPSr6
The __clear_user function is defined to return the number of bytes that
could not be cleared. From the underlying memset / bzero implementation
this means setting register a2 to that number on return. Currently if a
page fault is triggered within the MIPSr6 version of setting of initial
unaligned bytes, the value loaded into a2 on return is meaningless.
During the MIPSr6 version of the initial unaligned bytes block, register
a2 contains the number of bytes to be set beyond the initial unaligned
bytes. The t0 register is initially set to the number of unaligned bytes
- STORSIZE, effectively a negative version of the number of unaligned
bytes. This is then incremented before each byte is saved.
The label .Lbyte_fixup\@ is jumped to on page fault. Currently the value
in a2 is incorrectly replaced by 0 - t0 + 1, effectively the number of
unaligned bytes remaining. This leads to the failures being reported by
the following test code:
static int __init test_clear_user(void)
{
int j, k;
pr_info("\n\n\nTesting clear_user\n");
for (j = 0; j < 512; j++) {
if ((k = clear_user(NULL+3, j)) != j) {
pr_err("clear_user (NULL %d) returned %d\n", j, k);
}
}
return 0;
}
late_initcall(test_clear_user);
Which reports:
[ 3.965439] Testing clear_user
[ 3.973169] clear_user (NULL 8) returned 6
[ 3.976782] clear_user (NULL 9) returned 6
[ 3.980390] clear_user (NULL 10) returned 6
[ 3.984052] clear_user (NULL 11) returned 6
[ 3.987524] clear_user (NULL 12) returned 6
Fix this by subtracting t0 from a2 (rather than $0), effectively giving:
unset_bytes = (#bytes - (#unaligned bytes)) - (-#unaligned bytes remaining + 1) + 1
a2 = a2 - t0 + 1
This fixes the value returned from __clear_user when the number of bytes
to set is > LONGSIZE and the address is invalid and unaligned.
Unfortunately, this breaks the fixup handling for unaligned bytes after
the final long, where register a2 still contains the number of bytes
remaining to be set and the t0 register is set to 0 - the number of
unaligned bytes remaining.
Because t0 is now subtracted from a2 rather than 0, the number of
bytes unset is reported incorrectly:
static int __init test_clear_user(void)
{
char *test;
int j, k;
pr_info("\n\n\nTesting clear_user\n");
test = vmalloc(PAGE_SIZE);
for (j = 256; j < 512; j++) {
if ((k = clear_user(test + PAGE_SIZE - 254, j)) != j - 254) {
pr_err("clear_user (%px %d) returned %d\n",
test + PAGE_SIZE - 254, j, k);
}
}
return 0;
}
late_initcall(test_clear_user);
[ 3.976775] clear_user (c00000000000df02 256) returned 4
[ 3.981957] clear_user (c00000000000df02 257) returned 6
[ 3.986425] clear_user (c00000000000df02 258) returned 8
[ 3.990850] clear_user (c00000000000df02 259) returned 10
[ 3.995332] clear_user (c00000000000df02 260) returned 12
[ 3.999815] clear_user (c00000000000df02 261) returned 14
Fix this by ensuring that a2 is set to 0 during the setting of the final
unaligned bytes.
Signed-off-by: Matt Redfearn <matt.redfearn(a)mips.com>
Signed-off-by: Paul Burton <paul.burton(a)mips.com>
Fixes: 8c56208aff77 ("MIPS: lib: memset: Add MIPS R6 support")
Patchwork: https://patchwork.linux-mips.org/patch/19338/
Cc: James Hogan <jhogan(a)kernel.org>
Cc: Ralf Baechle <ralf(a)linux-mips.org>
Cc: linux-mips(a)linux-mips.org
Cc: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org # v4.0+
diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
index 1cc306520a55..fac26ce64b2f 100644
--- a/arch/mips/lib/memset.S
+++ b/arch/mips/lib/memset.S
@@ -195,6 +195,7 @@
#endif
#else
PTR_SUBU t0, $0, a2
+ move a2, zero /* No remaining longs */
PTR_ADDIU t0, 1
STORE_BYTE(0)
STORE_BYTE(1)
@@ -231,7 +232,7 @@
#ifdef CONFIG_CPU_MIPSR6
.Lbyte_fixup\@:
- PTR_SUBU a2, $0, t0
+ PTR_SUBU a2, t0
jr ra
PTR_ADDIU a2, 1
#endif /* CONFIG_CPU_MIPSR6 */
The patch below does not apply to the 4.14-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From b1c03f1ef48d36ff28afb06e8f0c1233ef072f1d Mon Sep 17 00:00:00 2001
From: Matt Redfearn <matt.redfearn(a)mips.com>
Date: Wed, 23 May 2018 14:39:58 +0100
Subject: [PATCH] MIPS: memset.S: Fix byte_fixup for MIPSr6
The __clear_user function is defined to return the number of bytes that
could not be cleared. From the underlying memset / bzero implementation
this means setting register a2 to that number on return. Currently if a
page fault is triggered within the MIPSr6 version of setting of initial
unaligned bytes, the value loaded into a2 on return is meaningless.
During the MIPSr6 version of the initial unaligned bytes block, register
a2 contains the number of bytes to be set beyond the initial unaligned
bytes. The t0 register is initially set to the number of unaligned bytes
- STORSIZE, effectively a negative version of the number of unaligned
bytes. This is then incremented before each byte is saved.
The label .Lbyte_fixup\@ is jumped to on page fault. Currently the value
in a2 is incorrectly replaced by 0 - t0 + 1, effectively the number of
unaligned bytes remaining. This leads to the failures being reported by
the following test code:
static int __init test_clear_user(void)
{
int j, k;
pr_info("\n\n\nTesting clear_user\n");
for (j = 0; j < 512; j++) {
if ((k = clear_user(NULL+3, j)) != j) {
pr_err("clear_user (NULL %d) returned %d\n", j, k);
}
}
return 0;
}
late_initcall(test_clear_user);
Which reports:
[ 3.965439] Testing clear_user
[ 3.973169] clear_user (NULL 8) returned 6
[ 3.976782] clear_user (NULL 9) returned 6
[ 3.980390] clear_user (NULL 10) returned 6
[ 3.984052] clear_user (NULL 11) returned 6
[ 3.987524] clear_user (NULL 12) returned 6
Fix this by subtracting t0 from a2 (rather than $0), effectively giving:
unset_bytes = (#bytes - (#unaligned bytes)) - (-#unaligned bytes remaining + 1) + 1
a2 = a2 - t0 + 1
This fixes the value returned from __clear_user when the number of bytes
to set is > LONGSIZE and the address is invalid and unaligned.
Unfortunately, this breaks the fixup handling for unaligned bytes after
the final long, where register a2 still contains the number of bytes
remaining to be set and the t0 register is set to 0 - the number of
unaligned bytes remaining.
Because t0 is now subtracted from a2 rather than 0, the number of
bytes unset is reported incorrectly:
static int __init test_clear_user(void)
{
char *test;
int j, k;
pr_info("\n\n\nTesting clear_user\n");
test = vmalloc(PAGE_SIZE);
for (j = 256; j < 512; j++) {
if ((k = clear_user(test + PAGE_SIZE - 254, j)) != j - 254) {
pr_err("clear_user (%px %d) returned %d\n",
test + PAGE_SIZE - 254, j, k);
}
}
return 0;
}
late_initcall(test_clear_user);
[ 3.976775] clear_user (c00000000000df02 256) returned 4
[ 3.981957] clear_user (c00000000000df02 257) returned 6
[ 3.986425] clear_user (c00000000000df02 258) returned 8
[ 3.990850] clear_user (c00000000000df02 259) returned 10
[ 3.995332] clear_user (c00000000000df02 260) returned 12
[ 3.999815] clear_user (c00000000000df02 261) returned 14
Fix this by ensuring that a2 is set to 0 during the setting of the final
unaligned bytes.
Signed-off-by: Matt Redfearn <matt.redfearn(a)mips.com>
Signed-off-by: Paul Burton <paul.burton(a)mips.com>
Fixes: 8c56208aff77 ("MIPS: lib: memset: Add MIPS R6 support")
Patchwork: https://patchwork.linux-mips.org/patch/19338/
Cc: James Hogan <jhogan(a)kernel.org>
Cc: Ralf Baechle <ralf(a)linux-mips.org>
Cc: linux-mips(a)linux-mips.org
Cc: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org # v4.0+
diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
index 1cc306520a55..fac26ce64b2f 100644
--- a/arch/mips/lib/memset.S
+++ b/arch/mips/lib/memset.S
@@ -195,6 +195,7 @@
#endif
#else
PTR_SUBU t0, $0, a2
+ move a2, zero /* No remaining longs */
PTR_ADDIU t0, 1
STORE_BYTE(0)
STORE_BYTE(1)
@@ -231,7 +232,7 @@
#ifdef CONFIG_CPU_MIPSR6
.Lbyte_fixup\@:
- PTR_SUBU a2, $0, t0
+ PTR_SUBU a2, t0
jr ra
PTR_ADDIU a2, 1
#endif /* CONFIG_CPU_MIPSR6 */
The patch below does not apply to the 4.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 81365a947de453bcaba2558f8de5dadcffe05bc1 Mon Sep 17 00:00:00 2001
From: Masami Hiramatsu <mhiramat(a)kernel.org>
Date: Sat, 28 Apr 2018 21:36:02 +0900
Subject: [PATCH] kprobes: Show address of kprobes if kallsyms does
Show the probed address in the debugfs kprobes list file the same way
kallsyms does. This information is used to check that kprobes are placed
at the expected addresses, so it should be comparable with the addresses
shown in kallsyms.
Signed-off-by: Masami Hiramatsu <mhiramat(a)kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth(a)in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy(a)intel.com>
Cc: Arnd Bergmann <arnd(a)arndb.de>
Cc: David Howells <dhowells(a)redhat.com>
Cc: David S . Miller <davem(a)davemloft.net>
Cc: Heiko Carstens <heiko.carstens(a)de.ibm.com>
Cc: Jon Medhurst <tixy(a)linaro.org>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Thomas Richter <tmricht(a)linux.ibm.com>
Cc: Tobin C . Harding <me(a)tobin.cc>
Cc: Will Deacon <will.deacon(a)arm.com>
Cc: acme(a)kernel.org
Cc: akpm(a)linux-foundation.org
Cc: brueckner(a)linux.vnet.ibm.com
Cc: linux-arch(a)vger.kernel.org
Cc: rostedt(a)goodmis.org
Cc: schwidefsky(a)de.ibm.com
Cc: stable(a)vger.kernel.org
Link: https://lkml.kernel.org/lkml/152491896256.9916.1583733714492565296.stgit@de…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index ab1bfa3d1d9c..1d6130d2937a 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -2226,19 +2226,23 @@ static void report_probe(struct seq_file *pi, struct kprobe *p,
const char *sym, int offset, char *modname, struct kprobe *pp)
{
char *kprobe_type;
+ void *addr = p->addr;
if (p->pre_handler == pre_handler_kretprobe)
kprobe_type = "r";
else
kprobe_type = "k";
+ if (!kallsyms_show_value())
+ addr = NULL;
+
if (sym)
- seq_printf(pi, "%p %s %s+0x%x %s ",
- p->addr, kprobe_type, sym, offset,
+ seq_printf(pi, "%px %s %s+0x%x %s ",
+ addr, kprobe_type, sym, offset,
(modname ? modname : " "));
- else
- seq_printf(pi, "%p %s %p ",
- p->addr, kprobe_type, p->addr);
+ else /* try to use %pS */
+ seq_printf(pi, "%px %s %pS ",
+ addr, kprobe_type, p->addr);
if (!pp)
pp = p;
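The gating pattern the patch applies can be sketched in userspace C:
print the real address only when the viewer may see kernel pointer
values, otherwise NULL. kallsyms_show_value() is modelled here as a plain
flag; this is an illustration, not the kernel implementation:

#include <stdbool.h>
#include <stdio.h>

static bool show_value; /* stands in for kallsyms_show_value() */

static void report_probe_model(void *real_addr, const char *sym)
{
	void *addr = show_value ? real_addr : NULL;

	/* the kernel uses %px to print the raw value unconditionally */
	printf("%p k %s+0x0\n", addr, sym);
}

int main(void)
{
	int dummy;

	report_probe_model(&dummy, "do_sys_open"); /* hidden: (nil) */
	show_value = true;
	report_probe_model(&dummy, "do_sys_open"); /* real address */
	return 0;
}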
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From cc51e5428ea54f575d49cfcede1d4cb3a72b4ec4 Mon Sep 17 00:00:00 2001
From: Andi Kleen <ak(a)linux.intel.com>
Date: Fri, 24 Aug 2018 10:03:50 -0700
Subject: [PATCH] x86/speculation/l1tf: Increase l1tf memory limit for Nehalem+
On Nehalem and newer core CPUs the CPU cache internally uses 44 bits
physical address space. The L1TF workaround is limited by this internal
cache address width, and needs to have one bit free there for the
mitigation to work.
Older client systems report only 36bit physical address space so the range
check decides that L1TF is not mitigated for a 36bit phys/32GB system with
some memory holes.
But since these actually have the larger internal cache width this warning
is bogus because it would only really be needed if the system had more than
43bits of memory.
Add a new internal x86_cache_bits field. Normally it is the same as the
physical bits field reported by CPUID, but for Nehalem and newer force it to
be at least 44bits.
Change the L1TF memory size warning to use the new cache_bits field to
avoid bogus warnings and remove the bogus comment about memory size.
Fixes: 17dbca119312 ("x86/speculation/l1tf: Add sysfs reporting for l1tf")
Reported-by: George Anchev <studio(a)anchev.net>
Reported-by: Christopher Snowhill <kode54(a)gmail.com>
Signed-off-by: Andi Kleen <ak(a)linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Cc: x86(a)kernel.org
Cc: linux-kernel(a)vger.kernel.org
Cc: Michael Hocko <mhocko(a)suse.com>
Cc: vbabka(a)suse.cz
Cc: stable(a)vger.kernel.org
Link: https://lkml.kernel.org/r/20180824170351.34874-1-andi@firstfloor.org
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index c24297268ebc..d53c54b842da 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -132,6 +132,8 @@ struct cpuinfo_x86 {
/* Index into per_cpu list: */
u16 cpu_index;
u32 microcode;
+ /* Address space bits used by the cache internally */
+ u8 x86_cache_bits;
unsigned initialized : 1;
} __randomize_layout;
@@ -183,7 +185,7 @@ extern void cpu_detect(struct cpuinfo_x86 *c);
static inline unsigned long long l1tf_pfn_limit(void)
{
- return BIT_ULL(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT);
+ return BIT_ULL(boot_cpu_data.x86_cache_bits - 1 - PAGE_SHIFT);
}
extern void early_cpu_init(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4c2313d0b9ca..40bdaea97fe7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -668,6 +668,45 @@ EXPORT_SYMBOL_GPL(l1tf_mitigation);
enum vmx_l1d_flush_state l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_AUTO;
EXPORT_SYMBOL_GPL(l1tf_vmx_mitigation);
+/*
+ * These CPUs all support 44bits physical address space internally in the
+ * cache but CPUID can report a smaller number of physical address bits.
+ *
+ * The L1TF mitigation uses the top most address bit for the inversion of
+ * non present PTEs. When the installed memory reaches into the top most
+ * address bit due to memory holes, which has been observed on machines
+ * which report 36bits physical address bits and have 32G RAM installed,
+ * then the mitigation range check in l1tf_select_mitigation() triggers.
+ * This is a false positive because the mitigation is still possible due to
+ * the fact that the cache uses 44bit internally. Use the cache bits
+ * instead of the reported physical bits and adjust them on the affected
+ * machines to 44bit if the reported bits are less than 44.
+ */
+static void override_cache_bits(struct cpuinfo_x86 *c)
+{
+ if (c->x86 != 6)
+ return;
+
+ switch (c->x86_model) {
+ case INTEL_FAM6_NEHALEM:
+ case INTEL_FAM6_WESTMERE:
+ case INTEL_FAM6_SANDYBRIDGE:
+ case INTEL_FAM6_IVYBRIDGE:
+ case INTEL_FAM6_HASWELL_CORE:
+ case INTEL_FAM6_HASWELL_ULT:
+ case INTEL_FAM6_HASWELL_GT3E:
+ case INTEL_FAM6_BROADWELL_CORE:
+ case INTEL_FAM6_BROADWELL_GT3E:
+ case INTEL_FAM6_SKYLAKE_MOBILE:
+ case INTEL_FAM6_SKYLAKE_DESKTOP:
+ case INTEL_FAM6_KABYLAKE_MOBILE:
+ case INTEL_FAM6_KABYLAKE_DESKTOP:
+ if (c->x86_cache_bits < 44)
+ c->x86_cache_bits = 44;
+ break;
+ }
+}
+
static void __init l1tf_select_mitigation(void)
{
u64 half_pa;
@@ -675,6 +714,8 @@ static void __init l1tf_select_mitigation(void)
if (!boot_cpu_has_bug(X86_BUG_L1TF))
return;
+ override_cache_bits(&boot_cpu_data);
+
switch (l1tf_mitigation) {
case L1TF_MITIGATION_OFF:
case L1TF_MITIGATION_FLUSH_NOWARN:
@@ -694,11 +735,6 @@ static void __init l1tf_select_mitigation(void)
return;
#endif
- /*
- * This is extremely unlikely to happen because almost all
- * systems have far more MAX_PA/2 than RAM can be fit into
- * DIMM slots.
- */
half_pa = (u64)l1tf_pfn_limit() << PAGE_SHIFT;
if (e820__mapped_any(half_pa, ULLONG_MAX - half_pa, E820_TYPE_RAM)) {
pr_warn("System has more than MAX_PA/2 memory. L1TF mitigation not effective.\n");
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 84dee5ab745a..44c4ef3d989b 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -919,6 +919,7 @@ void get_cpu_address_sizes(struct cpuinfo_x86 *c)
else if (cpu_has(c, X86_FEATURE_PAE) || cpu_has(c, X86_FEATURE_PSE36))
c->x86_phys_bits = 36;
#endif
+ c->x86_cache_bits = c->x86_phys_bits;
}
static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
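The effect of using the cache width rather than the reported physical
width can be shown with a small C model of l1tf_pfn_limit(). The CPU
values are illustrative, chosen to match the 36-bit/32GB case from the
changelog:

#include <stdio.h>

#define PAGE_SHIFT 12
#define BIT_ULL(n) (1ULL << (n))

static unsigned long long pfn_limit(unsigned int bits)
{
	/* the top usable address bit must stay free for the PTE inversion */
	return BIT_ULL(bits - 1 - PAGE_SHIFT);
}

int main(void)
{
	unsigned int phys_bits = 36, cache_bits = 44;
	unsigned long long half_pa_old = pfn_limit(phys_bits) << PAGE_SHIFT;
	unsigned long long half_pa_new = pfn_limit(cache_bits) << PAGE_SHIFT;

	printf("limit from phys bits:  %llu GiB\n", half_pa_old >> 30);
	printf("limit from cache bits: %llu GiB\n", half_pa_new >> 30);
	return 0;
}

A 36-bit machine gets a 32 GiB limit, which memory holes can push RAM
past; with the 44-bit cache width the limit becomes 8192 GiB and the
warning no longer fires.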
From: Ulf Hansson <ulf.hansson(a)linaro.org>
In case the PM domain fails to be powered on in genpd_dev_pm_attach(), it
returns -EPROBE_DEFER, but keeps the device attached to its PM domain.
This leads to problems when the attach is re-tried. More
precisely, in that situation an -EEXIST error code is returned, because the
device already has its PM domain pointer assigned, from the first attempt.
Now, because of the sloppy error handling by the existing callers of
dev_pm_domain_attach(), probing is allowed to continue when -EEXIST is
returned. However, in such case there are no guarantees that the PM domain
is powered on by genpd, which may lead to hangs when buses/drivers try to
access their devices.
Let's fix this behaviour, simply by detaching the device when powering on
fails in genpd_dev_pm_attach().
Cc: v4.11+ <stable(a)vger.kernel.org> # v4.11+
Signed-off-by: Ulf Hansson <ulf.hansson(a)linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
(cherry picked from commit 72038df3c580c4c326b83c86149d7ac34007532a)
Cherry-picked to imx_4.9.y with minimal conflicts so that we can
properly handle errors from SCFW pm calls
Signed-off-by: Leonard Crestez <leonard.crestez(a)nxp.com>
---
drivers/base/power/domain.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 9c3e535795a0..d9681d372997 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -1936,10 +1936,13 @@ int genpd_dev_pm_attach(struct device *dev)
dev->pm_domain->sync = genpd_dev_pm_sync;
mutex_lock(&pd->lock);
ret = genpd_poweron(pd, 0);
mutex_unlock(&pd->lock);
+
+ if (ret)
+ genpd_remove_device(pd, dev);
out:
return ret ? -EPROBE_DEFER : 0;
}
EXPORT_SYMBOL_GPL(genpd_dev_pm_attach);
#endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
--
2.17.1
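The retry problem described above can be reduced to a small C model:
without detaching on power-on failure, state left behind by the first
attempt makes the second attempt fail with -EEXIST. Names and error
codes here are illustrative stand-ins, not the genpd API:

#include <stdbool.h>
#include <stdio.h>

struct dev_model { bool attached; };

static int attach(struct dev_model *d)
{
	if (d->attached)
		return -17; /* -EEXIST: leftover from the failed attempt */
	d->attached = true;
	return 0;
}

static int power_on(void) { return -1; /* assume power-on fails */ }

static int pm_attach(struct dev_model *d, bool detach_on_failure)
{
	int ret = attach(d);

	if (ret)
		return ret;
	ret = power_on();
	if (ret && detach_on_failure)
		d->attached = false; /* the fix: leave a clean state */
	return ret ? -517 /* -EPROBE_DEFER */ : 0;
}

int main(void)
{
	struct dev_model d = { false };
	int a, b;

	a = pm_attach(&d, false);
	b = pm_attach(&d, false);
	printf("no detach: 1st=%d 2nd=%d\n", a, b); /* -517 then -17 */
	d.attached = false;
	a = pm_attach(&d, true);
	b = pm_attach(&d, true);
	printf("detach:    1st=%d 2nd=%d\n", a, b); /* -517 then -517 */
	return 0;
}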
The steps taken by usb core to set a new interface are very different from
what is done on the xHC host side.
xHC hardware will do everything in one go. One command is used to set up
new endpoints, free old endpoints, check bandwidth, and run the new
endpoints.
All this is done by xHC when usb core asks the hcd to check for
available bandwidth. At this point usb core has not yet flushed the old
endpoints, which will cause use-after-free issues in xhci driver as
queued URBs are cancelled on a re-allocated endpoint.
To resolve this, add a call to usb_disable_interface(), which will flush
the endpoints before calling usb_hcd_alloc_bandwidth().
Additional checks in the xhci driver will also be implemented to gracefully
handle stale URB cancellation on freed and re-allocated endpoints.
Cc: <stable(a)vger.kernel.org>
Reported-by: Sudip Mukherjee <sudipm.mukherjee(a)gmail.com>
Signed-off-by: Mathias Nyman <mathias.nyman(a)linux.intel.com>
---
drivers/usb/core/message.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
index 228672f..304bef2 100644
--- a/drivers/usb/core/message.c
+++ b/drivers/usb/core/message.c
@@ -1377,6 +1377,13 @@ int usb_set_interface(struct usb_device *dev, int interface, int alternate)
return -EINVAL;
}
+ /*
+ * usb3 hosts configure the interface in usb_hcd_alloc_bandwidth,
+ * including freeing dropped endpoint ring buffers.
+ * Make sure the interface endpoints are flushed before that
+ */
+ usb_disable_interface(dev, iface, false);
+
/* Make sure we have enough bandwidth for this alternate interface.
* Remove the current alt setting and add the new alt setting.
*/
--
2.7.4
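The ordering rule at the heart of the fix - cancel everything queued on
an endpoint before its ring is freed and re-allocated - can be sketched
in plain C. The structures are illustrative, not the usb core or xhci
types:

#include <stdio.h>
#include <stdlib.h>

struct ring { int queued; };

static void cancel_queued(struct ring *r) { r->queued = 0; }

static struct ring *swap_ring(struct ring *old)
{
	free(old);                             /* old ring gone... */
	return calloc(1, sizeof(struct ring)); /* ...new one in its place */
}

int main(void)
{
	struct ring *r = calloc(1, sizeof(*r));

	r->queued = 3;
	cancel_queued(r); /* the fix: flush first (usb_disable_interface) */
	r = swap_ring(r); /* then the bandwidth call replaces the ring */
	printf("queued after switch: %d\n", r->queued);
	free(r);
	return 0;
}

Cancelling after swap_ring() would touch freed memory - the
use-after-free the changelog describes.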
The patch below does not apply to the 4.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From d814a49198eafa6163698bdd93961302f3a877a4 Mon Sep 17 00:00:00 2001
From: Ethan Lien <ethanlien(a)synology.com>
Date: Mon, 2 Jul 2018 15:44:58 +0800
Subject: [PATCH] btrfs: use correct compare function of dirty_metadata_bytes
We use a customized, nodesize batch value to update dirty_metadata_bytes.
We should also use the batch version of the compare function, or we will
easily hit the fast path and get a false result from percpu_counter_compare().
Fixes: e2d845211eda ("Btrfs: use percpu counter for dirty metadata count")
CC: stable(a)vger.kernel.org # 4.4+
Signed-off-by: Ethan Lien <ethanlien(a)synology.com>
Reviewed-by: Nikolay Borisov <nborisov(a)suse.com>
Signed-off-by: David Sterba <dsterba(a)suse.com>
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 6023eed3e805..e3858b2fe014 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -959,8 +959,9 @@ static int btree_writepages(struct address_space *mapping,
fs_info = BTRFS_I(mapping->host)->root->fs_info;
/* this is a bit racy, but that's ok */
- ret = percpu_counter_compare(&fs_info->dirty_metadata_bytes,
- BTRFS_DIRTY_METADATA_THRESH);
+ ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+ BTRFS_DIRTY_METADATA_THRESH,
+ fs_info->dirty_metadata_batch);
if (ret < 0)
return 0;
}
@@ -4134,8 +4135,9 @@ static void __btrfs_btree_balance_dirty(struct btrfs_fs_info *fs_info,
if (flush_delayed)
btrfs_balance_delayed_items(fs_info);
- ret = percpu_counter_compare(&fs_info->dirty_metadata_bytes,
- BTRFS_DIRTY_METADATA_THRESH);
+ ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+ BTRFS_DIRTY_METADATA_THRESH,
+ fs_info->dirty_metadata_batch);
if (ret > 0) {
balance_dirty_pages_ratelimited(fs_info->btree_inode->i_mapping);
}
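Why the batch sizes must match can be modelled in userspace C: a percpu
counter lets each CPU hold up to +/-batch locally before folding into the
shared count, so a compare that assumes a smaller batch trusts a fast
path whose error bound is too tight. Numbers below are illustrative:

#include <stdio.h>

#define NR_CPUS 4

/* Model of __percpu_counter_compare(): trust the approximate global
 * count only when it differs from rhs by more than the worst-case
 * per-cpu error; otherwise a precise sum (slow path) is needed.
 */
static int compare_fast(long global, long rhs, long batch)
{
	long error = batch * NR_CPUS;

	if (global - rhs > error)
		return 1;
	if (rhs - global > error)
		return -1;
	return 0; /* would take the precise slow path */
}

int main(void)
{
	long nodesize_batch = 16384, default_batch = 32;
	long global = 0;   /* updates still parked in per-cpu deltas */
	long thresh = 4096;

	/* default batch: bound is 128, wrongly claims "definitely below" */
	printf("wrong batch: %d\n", compare_fast(global, thresh, default_batch));
	/* matching batch: bound is 65536, falls back to the precise sum */
	printf("right batch: %d\n", compare_fast(global, thresh, nodesize_batch));
	return 0;
}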
commit 4c4a39dd5fe2d13e2d2fa5fceb8ef95d19fc389a upstream
If there is a mismatch in the I/D min line size, we must
always use the system wide safe value both in applications
and in the kernel, while performing cache operations. However,
we have been checking more bits than just the min line sizes,
which triggers false negatives. We may need to trap the user
accesses in such cases, but not necessarily patch the kernel.
This patch fixes the check to do the right thing as advertised.
A new capability will be added to check mismatches in other
fields and ensure we trap the CTR accesses.
Fixes: be68a8aaf925 ("arm64: cpufeature: Fix CTR_EL0 field definitions")
Cc: <stable(a)vger.kernel.org> # v4.14
Cc: Mark Rutland <mark.rutland(a)arm.com>
Cc: Catalin Marinas <catalin.marinas(a)arm.com>
Reported-by: Will Deacon <will.deacon(a)arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose(a)arm.com>
Signed-off-by: Will Deacon <will.deacon(a)arm.com>
---
arch/arm64/include/asm/cache.h | 5 +++++
arch/arm64/kernel/cpu_errata.c | 6 ++++--
arch/arm64/kernel/cpufeature.c | 4 ++--
3 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index ea9bb4e..e40f8a2 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -20,9 +20,14 @@
#define CTR_L1IP_SHIFT 14
#define CTR_L1IP_MASK 3
+#define CTR_DMINLINE_SHIFT 16
+#define CTR_IMINLINE_SHIFT 0
#define CTR_CWG_SHIFT 24
#define CTR_CWG_MASK 15
+#define CTR_CACHE_MINLINE_MASK \
+ (0xf << CTR_DMINLINE_SHIFT | 0xf << CTR_IMINLINE_SHIFT)
+
#define CTR_L1IP(ctr) (((ctr) >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK)
#define ICACHE_POLICY_VPIPT 0
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index eccdb28..3d1569c 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -48,9 +48,11 @@ static bool
has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
int scope)
{
+ u64 mask = CTR_CACHE_MINLINE_MASK;
+
WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
- return (read_cpuid_cachetype() & arm64_ftr_reg_ctrel0.strict_mask) !=
- (arm64_ftr_reg_ctrel0.sys_val & arm64_ftr_reg_ctrel0.strict_mask);
+ return (read_cpuid_cachetype() & mask) !=
+ (arm64_ftr_reg_ctrel0.sys_val & mask);
}
static int cpu_enable_trap_ctr_access(void *__unused)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 376cf12..003dd39 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -180,14 +180,14 @@ static const struct arm64_ftr_bits ftr_ctr[] = {
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 28, 1, 1), /* IDC */
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, 24, 4, 0), /* CWG */
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, 20, 4, 0), /* ERG */
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 1), /* DminLine */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
/*
* Linux can handle differing I-cache policies. Userspace JITs will
* make use of *minLine.
* If we have differing I-cache policies, report it as the weakest - VIPT.
*/
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, 14, 2, ICACHE_POLICY_VIPT), /* L1Ip */
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0), /* IminLine */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
ARM64_FTR_END,
};
--
2.7.4
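The narrowed check can be shown with a small C model: mask both CTR
values down to the DminLine and IminLine fields before comparing, so a
difference in another field (ERG here, as an illustration) no longer
counts as a mismatch. The register values are made up, and the raw
inequality is only a rough stand-in for the old strict_mask comparison:

#include <stdio.h>

#define CTR_DMINLINE_SHIFT	16
#define CTR_IMINLINE_SHIFT	0
#define CTR_CACHE_MINLINE_MASK \
	(0xfULL << CTR_DMINLINE_SHIFT | 0xfULL << CTR_IMINLINE_SHIFT)

int main(void)
{
	unsigned long long sys_val  = 0x8444c004; /* system-wide safe value */
	unsigned long long this_cpu = 0x8454c004; /* differs only in ERG */

	int mismatch_old = this_cpu != sys_val; /* roughly the old, too-wide check */
	int mismatch_new = (this_cpu & CTR_CACHE_MINLINE_MASK) !=
			   (sys_val & CTR_CACHE_MINLINE_MASK);

	printf("old check mismatch: %d\n", mismatch_old); /* 1: spurious */
	printf("new check mismatch: %d\n", mismatch_new); /* 0 */
	return 0;
}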
When a system suffers from dcache aliasing a user program may observe
stale VDSO data from an aliased cache line. Notably this can break the
expectation that clock_gettime(CLOCK_MONOTONIC, ...) is, as its name
suggests, monotonic.
In order to ensure that users observe updates to the VDSO data page as
intended, align the user mappings of the VDSO data page such that their
cache colouring matches that of the virtual address range which the
kernel will use to update the data page - typically its unmapped address
within kseg0.
This ensures that we don't introduce aliasing cache lines for the VDSO
data page, and therefore that userland will observe updates without
requiring cache invalidation.
Signed-off-by: Paul Burton <paul.burton(a)mips.com>
Reported-by: Hauke Mehrtens <hauke(a)hauke-m.de>
Reported-by: Rene Nielsen <rene.nielsen(a)microsemi.com>
Reported-by: Alexandre Belloni <alexandre.belloni(a)bootlin.com>
Fixes: ebb5e78cc634 ("MIPS: Initial implementation of a VDSO")
Cc: James Hogan <jhogan(a)kernel.org>
Cc: linux-mips(a)linux-mips.org
Cc: stable(a)vger.kernel.org # v4.4+
---
Hi Alexandre,
Could you try this out on your Ocelot system? Hopefully it'll solve the
problem just as well as James' patch but doesn't need the questionable
change to arch_get_unmapped_area_common().
Thanks,
Paul
---
arch/mips/kernel/vdso.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
index 019035d7225c..5fb617a42335 100644
--- a/arch/mips/kernel/vdso.c
+++ b/arch/mips/kernel/vdso.c
@@ -13,6 +13,7 @@
#include <linux/err.h>
#include <linux/init.h>
#include <linux/ioport.h>
+#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>
@@ -20,6 +21,7 @@
#include <asm/abi.h>
#include <asm/mips-cps.h>
+#include <asm/page.h>
#include <asm/vdso.h>
/* Kernel-provided data used by the VDSO. */
@@ -128,12 +130,30 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
vvar_size = gic_size + PAGE_SIZE;
size = vvar_size + image->size;
+ /*
+ * Find a region that's large enough for us to perform the
+ * colour-matching alignment below.
+ */
+ if (cpu_has_dc_aliases)
+ size += shm_align_mask + 1;
+
base = get_unmapped_area(NULL, 0, size, 0, 0);
if (IS_ERR_VALUE(base)) {
ret = base;
goto out;
}
+ /*
+ * If we suffer from dcache aliasing, ensure that the VDSO data page is
+ * coloured the same as the kernel's mapping of that memory. This
+ * ensures that when the kernel updates the VDSO data userland will see
+ * it without requiring cache invalidations.
+ */
+ if (cpu_has_dc_aliases) {
+ base = __ALIGN_MASK(base, shm_align_mask);
+ base += ((unsigned long)&vdso_data - gic_size) & shm_align_mask;
+ }
+
data_addr = base + gic_size;
vdso_addr = data_addr + PAGE_SIZE;
--
2.18.0
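The colour-matching arithmetic in the hunk above can be checked with a
small C model: over-allocate by one alias stride, round the base up, then
add the kernel address's colour back in. The addresses and the alias mask
are illustrative:

#include <stdio.h>

#define __ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))

int main(void)
{
	unsigned long shm_align_mask = 0x7fff;     /* 32 KiB alias stride */
	unsigned long vdso_data_addr = 0x8123a000; /* kernel (kseg0) address */
	unsigned long gic_size = 0x4000;
	unsigned long base = 0x77fa3c90;           /* from get_unmapped_area */
	unsigned long data_addr;

	base = __ALIGN_MASK(base, shm_align_mask);
	base += (vdso_data_addr - gic_size) & shm_align_mask;
	data_addr = base + gic_size;

	/* both mappings now share the same low alias bits (cache colour) */
	printf("kernel colour: %#lx\n", vdso_data_addr & shm_align_mask);
	printf("user colour:   %#lx\n", data_addr & shm_align_mask);
	return 0;
}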
The patch below does not apply to the 4.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From ad0eaee6195db1db1749dd46b9e6f4466793d178 Mon Sep 17 00:00:00 2001
From: "Gustavo A. R. Silva" <gustavo(a)embeddedor.com>
Date: Mon, 6 Aug 2018 07:14:51 -0500
Subject: [PATCH] ASoC: wm8994: Fix missing break in switch
Add missing break statement in order to prevent the code from falling
through to the default case.
Addresses-Coverity-ID: 115050 ("Missing break in switch")
Reported-by: Valdis Kletnieks <valdis.kletnieks(a)vt.edu>
Signed-off-by: Gustavo A. R. Silva <gustavo(a)embeddedor.com>
Acked-by: Charles Keepax <ckeepax(a)opensource.cirrus.com>
Signed-off-by: Mark Brown <broonie(a)kernel.org>
Cc: stable(a)vger.kernel.org
diff --git a/sound/soc/codecs/wm8994.c b/sound/soc/codecs/wm8994.c
index 62f8c5b9ba92..14f1b0c0d286 100644
--- a/sound/soc/codecs/wm8994.c
+++ b/sound/soc/codecs/wm8994.c
@@ -2432,7 +2432,7 @@ static int wm8994_set_dai_sysclk(struct snd_soc_dai *dai,
snd_soc_component_update_bits(component, WM8994_POWER_MANAGEMENT_2,
WM8994_OPCLK_ENA, 0);
}
- /* fall through */
+ break;
default:
return -EINVAL;
From: Dan Carpenter <dan.carpenter(a)oracle.com>
[ Upstream commit f019f07ecf6a6b8bd6d7853bce70925d90af02d1 ]
The uio_unregister_device() function assumes that if "info->uio_dev" is
non-NULL that means "info" is fully allocated. Setting info->uio_dev
has to be the last thing in the function.
In the current code, if request_threaded_irq() fails then we return with
info->uio_dev set to non-NULL but info is not fully allocated and it can
lead to double frees.
Fixes: beafc54c4e2f ("UIO: Add the User IO core code")
Signed-off-by: Dan Carpenter <dan.carpenter(a)oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Signed-off-by: Sasha Levin <alexander.levin(a)microsoft.com>
---
drivers/uio/uio.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index 208bc52fc84d..f0a9ea2740df 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -841,8 +841,6 @@ int __uio_register_device(struct module *owner,
if (ret)
goto err_uio_dev_add_attributes;
- info->uio_dev = idev;
-
if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) {
/*
* Note that we deliberately don't use devm_request_irq
@@ -858,6 +856,7 @@ int __uio_register_device(struct module *owner,
goto err_request_irq;
}
+ info->uio_dev = idev;
return 0;
err_request_irq:
--
2.17.1
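The ordering rule the patch restores - publish the handle only after
every setup step has succeeded - can be sketched as follows. The types,
names and error codes are illustrative, not the uio driver's:

#include <stdio.h>
#include <stdlib.h>

struct info_model { void *uio_dev; };

static int request_irq(void) { return -1; /* assume the request fails */ }

static int register_device(struct info_model *info)
{
	void *idev = malloc(16);

	if (!idev)
		return -12; /* -ENOMEM */

	if (request_irq()) {
		free(idev);
		return -1; /* uio_dev still NULL: no double free on cleanup */
	}

	info->uio_dev = idev; /* last step: only now is the device visible */
	return 0;
}

int main(void)
{
	struct info_model info = { NULL };
	int ret = register_device(&info);

	printf("ret=%d uio_dev=%p\n", ret, info.uio_dev); /* ret=-1, (nil) */
	return 0;
}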