commit 98b8e4e5c17bf87c1b18ed929472051dab39878c upstream
Please apply to 4.9.x and later stable kernel trees (this is as far back
as 86d9f48534e8 was applied).
This submission should have followed Option 1, but I failed to Cc:
stable in the original commit.
Calling acpi_wmi_init() at the subsys_initcall() level causes ordering
issues to appear on some systems, and they are difficult to reproduce
because there is no guaranteed ordering between subsys_initcall()
calls, so they may run in different orders on different systems.
In particular, commit 86d9f48534e8 ("mm/slab: fix kmemcg cache
creation delayed issue") exposed one of these issues where genl_init()
and acpi_wmi_init() are both called at the same initcall level, but
the former must run before the latter so as to avoid a NULL pointer
dereference.
For this reason, move the acpi_wmi_init() invocation to the
subsys_initcall_sync() level, which should still be early enough for
things to work correctly in WMI land.
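For reference, a minimal sketch of the ordering this relies on (illustrative
only, not part of the patch, and assuming genl_init() keeps its
subsys_initcall() registration in net/netlink/genetlink.c): *_initcall_sync()
entries of a given level only run after every plain *_initcall() entry of that
level has completed, so moving acpi_wmi_init() down one step is enough to
guarantee that genl_init() has already run:

    /* Before: both registered at level 4, relative order unspecified */
    subsys_initcall(genl_init);
    subsys_initcall(acpi_wmi_init);

    /* After: level 4s, which runs once every level 4 initcall has finished */
    subsys_initcall(genl_init);
    subsys_initcall_sync(acpi_wmi_init);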
Link: https://marc.info/?t=151274596700002&r=1&w=2
Reported-by: Jonathan McDowell <noodles(a)earth.li>
Reported-by: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Tested-by: Jonathan McDowell <noodles(a)earth.li>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
Signed-off-by: Darren Hart (VMware) <dvhart(a)infradead.org>
---
drivers/platform/x86/wmi.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
index 791449a..daa68ac 100644
--- a/drivers/platform/x86/wmi.c
+++ b/drivers/platform/x86/wmi.c
@@ -1458,5 +1458,5 @@ static void __exit acpi_wmi_exit(void)
class_unregister(&wmi_bus_class);
}
-subsys_initcall(acpi_wmi_init);
+subsys_initcall_sync(acpi_wmi_init);
module_exit(acpi_wmi_exit);
--
2.9.4
--
Darren Hart
VMware Open Source Technology Center
From: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
commit 68920c973254c5b71a684645c5f6f82d6732c5d6 upstream.
With upcoming CONFIG_UBSAN the following BUILD_BUG_ON in
net/mac80211/debugfs.c starts to trigger:
BUILD_BUG_ON(hw_flag_names[NUM_IEEE80211_HW_FLAGS] != (void *)0x1);
It seems that the compiler instrumentation causes some code
deoptimizations, and because of that GCC is not able to resolve the
condition in BUILD_BUG_ON() at compile time.
We can leave the size of the hw_flag_names array unspecified and replace
the condition in BUILD_BUG_ON() with the following:
ARRAY_SIZE(hw_flag_names) != NUM_IEEE80211_HW_FLAGS
That has the same effect as before (adding a new flag without updating
the array still triggers a build failure), except it doesn't fail with
CONFIG_UBSAN. As a bonus, this patch slightly decreases the size of the
hw_flag_names array.
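As an illustration of the pattern being switched to, here is a self-contained
reduction with hypothetical flag names (not the mac80211 code): leaving the
array size unspecified lets ARRAY_SIZE() carry the compile-time check, with no
sentinel entry needed:

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
    #define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

    enum { FLAG_A, FLAG_B, FLAG_C, NUM_FLAGS };

    /* Size intentionally left unspecified. */
    static const char *flag_names[] = {
            [FLAG_A] = "A",
            [FLAG_B] = "B",
            [FLAG_C] = "C",
    };

    static inline void check_flag_names(void)
    {
            /* Fails to build if a flag is appended to the enum without a
             * matching entry in flag_names above.
             */
            BUILD_BUG_ON(ARRAY_SIZE(flag_names) != NUM_FLAGS);
    }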
Signed-off-by: Andrey Ryabinin <aryabinin(a)virtuozzo.com>
Cc: Johannes Berg <johannes(a)sipsolutions.net>
Cc: "David S. Miller" <davem(a)davemloft.net>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
[Daniel: backport to 4.4.]
Signed-off-by: Daniel Wagner <daniel.wagner(a)siemens.com>
---
Hi,
The only stable tree still missing this fix is 4.4. 4.1 doesn't have
30686bf7f5b3 ("mac80211: convert HW flags to unsigned long bitmap"),
the commit that makes gcc unhappy with allmodconfig, and 4.9 already
contains the fix.
Thanks,
Daniel
net/mac80211/debugfs.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/net/mac80211/debugfs.c b/net/mac80211/debugfs.c
index 4d2aaebd4f97..e546a987a9d3 100644
--- a/net/mac80211/debugfs.c
+++ b/net/mac80211/debugfs.c
@@ -91,7 +91,7 @@ static const struct file_operations reset_ops = {
};
#endif
-static const char *hw_flag_names[NUM_IEEE80211_HW_FLAGS + 1] = {
+static const char *hw_flag_names[] = {
#define FLAG(F) [IEEE80211_HW_##F] = #F
FLAG(HAS_RATE_CONTROL),
FLAG(RX_INCLUDES_FCS),
@@ -125,9 +125,6 @@ static const char *hw_flag_names[NUM_IEEE80211_HW_FLAGS + 1] = {
FLAG(TDLS_WIDER_BW),
FLAG(SUPPORTS_AMSDU_IN_AMPDU),
FLAG(BEACON_TX_STATUS),
-
- /* keep last for the build bug below */
- (void *)0x1
#undef FLAG
};
@@ -147,7 +144,7 @@ static ssize_t hwflags_read(struct file *file, char __user *user_buf,
/* fail compilation if somebody adds or removes
* a flag without updating the name array above
*/
- BUILD_BUG_ON(hw_flag_names[NUM_IEEE80211_HW_FLAGS] != (void *)0x1);
+ BUILD_BUG_ON(ARRAY_SIZE(hw_flag_names) != NUM_IEEE80211_HW_FLAGS);
for (i = 0; i < NUM_IEEE80211_HW_FLAGS; i++) {
if (test_bit(i, local->hw.flags))
--
2.14.3
From: Peter Zijlstra <peterz(a)infradead.org>
commit 321027c1fe77f892f4ea07846aeae08cefbbb290 upstream.
Di Shen reported a race between two concurrent sys_perf_event_open()
calls where both try and move the same pre-existing software group
into a hardware context.
The problem is exactly that described in commit:
f63a8daa5812 ("perf: Fix event->ctx locking")
... where, while we wait for a ctx->mutex acquisition, the event->ctx
relation can have changed under us.
That very same commit failed to recognise sys_perf_event_open() as an
external access vector to the events and thereby didn't apply the
established locking rules correctly.
So while one sys_perf_event_open() call is stuck waiting on
mutex_lock_double(), the other (which owns said locks) moves the group
about. So by the time the former sys_perf_event_open() acquires the
locks, the context we've acquired is stale (and possibly dead).
Apply the established locking rules as per perf_event_ctx_lock_nested()
to the mutex_lock_double() for the 'move_group' case. This obviously means
we need to validate state after we acquire the locks.
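Roughly, the race window looks like this (an illustrative timeline of the
pre-patch code paths, not a literal trace):

    CPU 0: sys_perf_event_open()          CPU 1: sys_perf_event_open()
    ----------------------------          ----------------------------
    gctx = group_leader->ctx;
                                          mutex_lock_double(&gctx->mutex,
                                                            &its_ctx->mutex);
                                          /* moves the software group into
                                             another (hardware) context */
                                          mutex_unlock(...);  /* both */
    mutex_lock_double(&gctx->mutex,
                      &ctx->mutex);
    /* gctx is stale: group_leader->ctx != gctx, and it may already be dead */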
Reported-by: Di Shen (Keen Lab)
Tested-by: John Dias <joaodias(a)google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Cc: Alexander Shishkin <alexander.shishkin(a)linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme(a)kernel.org>
Cc: Arnaldo Carvalho de Melo <acme(a)redhat.com>
Cc: Jiri Olsa <jolsa(a)redhat.com>
Cc: Kees Cook <keescook(a)chromium.org>
Cc: Linus Torvalds <torvalds(a)linux-foundation.org>
Cc: Min Chong <mchong(a)google.com>
Cc: Peter Zijlstra <peterz(a)infradead.org>
Cc: Stephane Eranian <eranian(a)google.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Vince Weaver <vincent.weaver(a)maine.edu>
Fixes: f63a8daa5812 ("perf: Fix event->ctx locking")
Link: http://lkml.kernel.org/r/20170106131444.GZ3174@twins.programming.kicks-ass.…
Signed-off-by: Ingo Molnar <mingo(a)kernel.org>
[bwh: Backported to 3.16:
- Use ACCESS_ONCE() instead of READ_ONCE()
- Test perf_event::group_flags instead of group_caps
- Add the err_locked cleanup block, which we didn't need before
- Adjust context]
Signed-off-by: Ben Hutchings <ben(a)decadent.org.uk>
Signed-off-by: Suren Baghdasaryan <surenb(a)google.com>
Signed-off-by: Amit Pundir <amit.pundir(a)linaro.org>
---
This upstream patch is featured in a recent Android Security bulletin.
I picked up this backported patch from android-3.18. Build tested on
3.18.91.
kernel/events/core.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 57 insertions(+), 4 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 9b12efcefdf7..de3303aab7d6 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7414,6 +7414,37 @@ static void mutex_lock_double(struct mutex *a, struct mutex *b)
mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
}
+/*
+ * Variation on perf_event_ctx_lock_nested(), except we take two context
+ * mutexes.
+ */
+static struct perf_event_context *
+__perf_event_ctx_lock_double(struct perf_event *group_leader,
+ struct perf_event_context *ctx)
+{
+ struct perf_event_context *gctx;
+
+again:
+ rcu_read_lock();
+ gctx = ACCESS_ONCE(group_leader->ctx);
+ if (!atomic_inc_not_zero(&gctx->refcount)) {
+ rcu_read_unlock();
+ goto again;
+ }
+ rcu_read_unlock();
+
+ mutex_lock_double(&gctx->mutex, &ctx->mutex);
+
+ if (group_leader->ctx != gctx) {
+ mutex_unlock(&ctx->mutex);
+ mutex_unlock(&gctx->mutex);
+ put_ctx(gctx);
+ goto again;
+ }
+
+ return gctx;
+}
+
/**
* sys_perf_event_open - open a performance event, associate it to a task/cpu
*
@@ -7626,14 +7657,31 @@ SYSCALL_DEFINE5(perf_event_open,
}
if (move_group) {
- gctx = group_leader->ctx;
+ gctx = __perf_event_ctx_lock_double(group_leader, ctx);
+
+ /*
+ * Check if we raced against another sys_perf_event_open() call
+ * moving the software group underneath us.
+ */
+ if (!(group_leader->group_flags & PERF_GROUP_SOFTWARE)) {
+ /*
+ * If someone moved the group out from under us, check
+ * if this new event wound up on the same ctx, if so
+ * its the regular !move_group case, otherwise fail.
+ */
+ if (gctx != ctx) {
+ err = -EINVAL;
+ goto err_locked;
+ } else {
+ perf_event_ctx_unlock(group_leader, gctx);
+ move_group = 0;
+ }
+ }
/*
* See perf_event_ctx_lock() for comments on the details
* of swizzling perf_event::ctx.
*/
- mutex_lock_double(&gctx->mutex, &ctx->mutex);
-
perf_remove_from_context(group_leader, false);
/*
@@ -7674,7 +7722,7 @@ SYSCALL_DEFINE5(perf_event_open,
perf_unpin_context(ctx);
if (move_group) {
- mutex_unlock(&gctx->mutex);
+ perf_event_ctx_unlock(group_leader, gctx);
put_ctx(gctx);
}
mutex_unlock(&ctx->mutex);
@@ -7703,6 +7751,11 @@ SYSCALL_DEFINE5(perf_event_open,
fd_install(event_fd, event_file);
return event_fd;
+err_locked:
+ if (move_group)
+ perf_event_ctx_unlock(group_leader, gctx);
+ mutex_unlock(&ctx->mutex);
+ fput(event_file);
err_context:
perf_unpin_context(ctx);
put_ctx(ctx);
--
2.7.4
The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From c8c5a3a24d395b14447a9a89d61586a913840a3b Mon Sep 17 00:00:00 2001
From: "Maciej W. Rozycki" <macro(a)mips.com>
Date: Mon, 11 Dec 2017 22:56:54 +0000
Subject: [PATCH] MIPS: Disallow outsized PTRACE_SETREGSET NT_PRFPREG regset
accesses
Complement commit c23b3d1a5311 ("MIPS: ptrace: Change GP regset to use
correct core dump register layout") and also reject outsized
PTRACE_SETREGSET requests to the NT_PRFPREG regset, like with the
NT_PRSTATUS regset.
Signed-off-by: Maciej W. Rozycki <macro(a)mips.com>
Fixes: c23b3d1a5311 ("MIPS: ptrace: Change GP regset to use correct core dump register layout")
Cc: James Hogan <james.hogan(a)mips.com>
Cc: Paul Burton <Paul.Burton(a)mips.com>
Cc: Alex Smith <alex(a)alex-smith.me.uk>
Cc: Dave Martin <Dave.Martin(a)arm.com>
Cc: linux-mips(a)linux-mips.org
Cc: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org # v3.17+
Patchwork: https://patchwork.linux-mips.org/patch/17930/
Signed-off-by: Ralf Baechle <ralf(a)linux-mips.org>
diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
index 256908951a7c..0b23b1ad99e6 100644
--- a/arch/mips/kernel/ptrace.c
+++ b/arch/mips/kernel/ptrace.c
@@ -550,6 +550,9 @@ static int fpr_set(struct task_struct *target,
BUG_ON(count % sizeof(elf_fpreg_t));
+ if (pos + count > sizeof(elf_fpregset_t))
+ return -EIO;
+
init_fp_ctx(target);
if (sizeof(target->thread.fpu.fpr[0]) == sizeof(elf_fpreg_t))
The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From be07a6a1188372b6d19a3307ec33211fc9c9439d Mon Sep 17 00:00:00 2001
From: "Maciej W. Rozycki" <macro(a)mips.com>
Date: Mon, 11 Dec 2017 22:54:33 +0000
Subject: [PATCH] MIPS: Fix an FCSR access API regression with NT_PRFPREG and
MSA
Fix a public API regression introduced by commit 72b22bbad1e7 ("MIPS:
Don't assume 64-bit FP registers for FP regset") and then activated by
commit 1db1af84d6df ("MIPS: Basic MSA context switching support"), which
caused the FCSR register to be neither read nor written for
CONFIG_CPU_HAS_MSA kernel configurations (regardless of the actual
presence or absence of the MSA feature in a given processor) with
ptrace(2) PTRACE_GETREGSET and PTRACE_SETREGSET requests, nor recorded
in core dumps.
This is because with !CONFIG_CPU_HAS_MSA configurations the whole
`elf_fpregset_t' array is bulk-copied as it is, which includes the FCSR
in one half of the last, 33rd slot, whereas with CONFIG_CPU_HAS_MSA
configurations the array elements are copied individually, and only the
leading 32 FGR slots at that, while the remaining slot is ignored.
Correct the code then such that only FGR slots are copied in the
respective !MSA and MSA helpers, and the FCSR slot is handled
separately in common code. Use `ptrace_setfcr31' to update the FCSR
too, so that the read-only mask is respected.
Retrieving a correct value of FCSR is important in debugging not only
for the human to be able to get the right interpretation of the
situation, but for correct operation of GDB as well. This is because
the condition code bits in FCSR are used by GDB to determine the
location to place a breakpoint at when single-stepping through an FPU
branch instruction. If such a breakpoint is placed incorrectly (i.e.
with the condition reversed), then it will be missed, likely causing the
debuggee to run away from the control of GDB and consequently breaking
the process of investigation.
Fortunately GDB continues using the older PTRACE_GETFPREGS ptrace(2)
request which is unaffected, so the regression only really hits with
post-mortem debug sessions using a core dump file, in which case
execution, and consequently single-stepping through branches is not
possible. Of course core files created by buggy kernels out there will
have the value of FCSR recorded clobbered, but such core files cannot be
corrected and the person using them simply will have to be aware that
the value of FCSR retrieved is not reliable.
Which also means we can likely get away without defining a replacement
API that would ensure either a correct value of FCSR is retrieved, or
none at all.
This is based on previous work by Alex Smith, extensively rewritten.
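For reference, a sketch of the NT_PRFPREG buffer layout being discussed, and
of the FCSR offset the patch computes (illustrative; `fcr31_pos' below matches
the constant added in the diff):

    /*
     * NT_PRFPREG regset: 33 slots of sizeof(elf_fpreg_t) == 8 bytes each.
     *
     *   slots  0..31 : FP general registers 0..31
     *                  (lower 64 bits only in the MSA case)
     *   slot   32    : FCSR, occupying one half of the slot
     */
    const int fcr31_pos = NUM_FPU_REGS * sizeof(elf_fpreg_t);  /* 32 * 8 */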
Signed-off-by: Alex Smith <alex(a)alex-smith.me.uk>
Signed-off-by: James Hogan <james.hogan(a)mips.com>
Signed-off-by: Maciej W. Rozycki <macro(a)mips.com>
Fixes: 72b22bbad1e7 ("MIPS: Don't assume 64-bit FP registers for FP regset")
Cc: Paul Burton <Paul.Burton(a)mips.com>
Cc: Dave Martin <Dave.Martin(a)arm.com>
Cc: linux-mips(a)linux-mips.org
Cc: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org # v3.15+
Patchwork: https://patchwork.linux-mips.org/patch/17928/
Signed-off-by: Ralf Baechle <ralf(a)linux-mips.org>
diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
index 47a01d5f26ea..0a939593ccb7 100644
--- a/arch/mips/kernel/ptrace.c
+++ b/arch/mips/kernel/ptrace.c
@@ -422,7 +422,7 @@ static int gpr64_set(struct task_struct *target,
/*
* Copy the floating-point context to the supplied NT_PRFPREG buffer,
* !CONFIG_CPU_HAS_MSA variant. FP context's general register slots
- * correspond 1:1 to buffer slots.
+ * correspond 1:1 to buffer slots. Only general registers are copied.
*/
static int fpr_get_fpa(struct task_struct *target,
unsigned int *pos, unsigned int *count,
@@ -430,13 +430,14 @@ static int fpr_get_fpa(struct task_struct *target,
{
return user_regset_copyout(pos, count, kbuf, ubuf,
&target->thread.fpu,
- 0, sizeof(elf_fpregset_t));
+ 0, NUM_FPU_REGS * sizeof(elf_fpreg_t));
}
/*
* Copy the floating-point context to the supplied NT_PRFPREG buffer,
* CONFIG_CPU_HAS_MSA variant. Only lower 64 bits of FP context's
- * general register slots are copied to buffer slots.
+ * general register slots are copied to buffer slots. Only general
+ * registers are copied.
*/
static int fpr_get_msa(struct task_struct *target,
unsigned int *pos, unsigned int *count,
@@ -458,20 +459,29 @@ static int fpr_get_msa(struct task_struct *target,
return 0;
}
-/* Copy the floating-point context to the supplied NT_PRFPREG buffer. */
+/*
+ * Copy the floating-point context to the supplied NT_PRFPREG buffer.
+ * Choose the appropriate helper for general registers, and then copy
+ * the FCSR register separately.
+ */
static int fpr_get(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
void *kbuf, void __user *ubuf)
{
+ const int fcr31_pos = NUM_FPU_REGS * sizeof(elf_fpreg_t);
int err;
- /* XXX fcr31 */
-
if (sizeof(target->thread.fpu.fpr[0]) == sizeof(elf_fpreg_t))
err = fpr_get_fpa(target, &pos, &count, &kbuf, &ubuf);
else
err = fpr_get_msa(target, &pos, &count, &kbuf, &ubuf);
+ if (err)
+ return err;
+
+ err = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+ &target->thread.fpu.fcr31,
+ fcr31_pos, fcr31_pos + sizeof(u32));
return err;
}
@@ -479,7 +489,7 @@ static int fpr_get(struct task_struct *target,
/*
* Copy the supplied NT_PRFPREG buffer to the floating-point context,
* !CONFIG_CPU_HAS_MSA variant. Buffer slots correspond 1:1 to FP
- * context's general register slots.
+ * context's general register slots. Only general registers are copied.
*/
static int fpr_set_fpa(struct task_struct *target,
unsigned int *pos, unsigned int *count,
@@ -487,13 +497,14 @@ static int fpr_set_fpa(struct task_struct *target,
{
return user_regset_copyin(pos, count, kbuf, ubuf,
&target->thread.fpu,
- 0, sizeof(elf_fpregset_t));
+ 0, NUM_FPU_REGS * sizeof(elf_fpreg_t));
}
/*
* Copy the supplied NT_PRFPREG buffer to the floating-point context,
* CONFIG_CPU_HAS_MSA variant. Buffer slots are copied to lower 64
- * bits only of FP context's general register slots.
+ * bits only of FP context's general register slots. Only general
+ * registers are copied.
*/
static int fpr_set_msa(struct task_struct *target,
unsigned int *pos, unsigned int *count,
@@ -518,6 +529,8 @@ static int fpr_set_msa(struct task_struct *target,
/*
* Copy the supplied NT_PRFPREG buffer to the floating-point context.
+ * Choose the appropriate helper for general registers, and then copy
+ * the FCSR register separately.
*
* We optimize for the case where `count % sizeof(elf_fpreg_t) == 0',
* which is supposed to have been guaranteed by the kernel before
@@ -530,18 +543,30 @@ static int fpr_set(struct task_struct *target,
unsigned int pos, unsigned int count,
const void *kbuf, const void __user *ubuf)
{
+ const int fcr31_pos = NUM_FPU_REGS * sizeof(elf_fpreg_t);
+ u32 fcr31;
int err;
BUG_ON(count % sizeof(elf_fpreg_t));
- /* XXX fcr31 */
-
init_fp_ctx(target);
if (sizeof(target->thread.fpu.fpr[0]) == sizeof(elf_fpreg_t))
err = fpr_set_fpa(target, &pos, &count, &kbuf, &ubuf);
else
err = fpr_set_msa(target, &pos, &count, &kbuf, &ubuf);
+ if (err)
+ return err;
+
+ if (count > 0) {
+ err = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+ &fcr31,
+ fcr31_pos, fcr31_pos + sizeof(u32));
+ if (err)
+ return err;
+
+ ptrace_setfcr31(target, fcr31);
+ }
return err;
}
The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 80b3ffce0196ea50068885d085ff981e4b8396f4 Mon Sep 17 00:00:00 2001
From: "Maciej W. Rozycki" <macro(a)mips.com>
Date: Mon, 11 Dec 2017 22:53:14 +0000
Subject: [PATCH] MIPS: Consistently handle buffer counter with
PTRACE_SETREGSET
Fix a bug left by commit d614fd58a283 ("mips/ptrace: Preserve previous
registers for short regset write") and consistently consume all data
supplied to `fpr_set_msa' with the ptrace(2) PTRACE_SETREGSET request,
such that a zero data buffer counter is returned where insufficient data
has been given to fill a whole number of FP general registers.
In reality this is not going to happen, as the caller is supposed to
supply only data covering a whole number of registers, and that is
verified in `ptrace_regset' and again asserted in `fpr_set'. However,
structuring the code such that the presence of trailing partial FP
general register data causes `fpr_set_msa' to return with a non-zero
data buffer counter makes it appear that this trailing data will be used
if there are subsequent writes made to FP registers, which is going to
be the case with the FCSR once the missing write to that register has
been fixed.
Fixes: d614fd58a283 ("mips/ptrace: Preserve previous registers for short regset write")
Signed-off-by: Maciej W. Rozycki <macro(a)mips.com>
Cc: James Hogan <james.hogan(a)mips.com>
Cc: Paul Burton <Paul.Burton(a)mips.com>
Cc: Alex Smith <alex(a)alex-smith.me.uk>
Cc: Dave Martin <Dave.Martin(a)arm.com>
Cc: linux-mips(a)linux-mips.org
Cc: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org # v4.11+
Patchwork: https://patchwork.linux-mips.org/patch/17927/
Signed-off-by: Ralf Baechle <ralf(a)linux-mips.org>
diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
index 7fcadaaf330f..47a01d5f26ea 100644
--- a/arch/mips/kernel/ptrace.c
+++ b/arch/mips/kernel/ptrace.c
@@ -504,7 +504,7 @@ static int fpr_set_msa(struct task_struct *target,
int err;
BUILD_BUG_ON(sizeof(fpr_val) != sizeof(elf_fpreg_t));
- for (i = 0; i < NUM_FPU_REGS && *count >= sizeof(elf_fpreg_t); i++) {
+ for (i = 0; i < NUM_FPU_REGS && *count > 0; i++) {
err = user_regset_copyin(pos, count, kbuf, ubuf,
&fpr_val, i * sizeof(elf_fpreg_t),
(i + 1) * sizeof(elf_fpreg_t));
The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable(a)vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From dc24d0edf33c3e15099688b6bbdf7bdc24bf6e91 Mon Sep 17 00:00:00 2001
From: "Maciej W. Rozycki" <macro(a)mips.com>
Date: Mon, 11 Dec 2017 22:52:15 +0000
Subject: [PATCH] MIPS: Guard against any partial write attempt with
PTRACE_SETREGSET
Complement commit d614fd58a283 ("mips/ptrace: Preserve previous
registers for short regset write") and ensure that no partial register
write attempt is made with PTRACE_SETREGSET, as we do not preinitialize
any temporaries used to hold incoming register data and consequently
random data could be written.
It is the responsibility of the caller, such as `ptrace_regset', to
arrange for writes to span whole registers only, so here we only assert
that it has indeed happened.
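The temporaries in question are on-stack values like the `fpr_val' used by
`fpr_set_msa' (visible in the diff context below). A rough sketch of the
failure mode the new assertion guards against, assuming partial data were
ever allowed to reach the copy-in:

    u64 fpr_val;    /* deliberately not preinitialized */

    /* A write that does not cover the whole register would fill only part
     * of fpr_val here ...
     */
    err = user_regset_copyin(pos, count, kbuf, ubuf, &fpr_val,
                             i * sizeof(elf_fpreg_t),
                             (i + 1) * sizeof(elf_fpreg_t));

    /* ... and the remaining random bytes would then be stored into the
     * target's FP context, hence the BUG_ON(count % sizeof(elf_fpreg_t))
     * added by this patch.
     */
    set_fpr64(&target->thread.fpu.fpr[i], 0, fpr_val);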
Signed-off-by: Maciej W. Rozycki <macro(a)mips.com>
Fixes: 72b22bbad1e7 ("MIPS: Don't assume 64-bit FP registers for FP regset")
Cc: James Hogan <james.hogan(a)mips.com>
Cc: Paul Burton <Paul.Burton(a)mips.com>
Cc: Alex Smith <alex(a)alex-smith.me.uk>
Cc: Dave Martin <Dave.Martin(a)arm.com>
Cc: linux-mips(a)linux-mips.org
Cc: linux-kernel(a)vger.kernel.org
Cc: stable(a)vger.kernel.org # v3.15+
Patchwork: https://patchwork.linux-mips.org/patch/17926/
Signed-off-by: Ralf Baechle <ralf(a)linux-mips.org>
diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
index 62e8ffd9370a..7fcadaaf330f 100644
--- a/arch/mips/kernel/ptrace.c
+++ b/arch/mips/kernel/ptrace.c
@@ -516,7 +516,15 @@ static int fpr_set_msa(struct task_struct *target,
return 0;
}
-/* Copy the supplied NT_PRFPREG buffer to the floating-point context. */
+/*
+ * Copy the supplied NT_PRFPREG buffer to the floating-point context.
+ *
+ * We optimize for the case where `count % sizeof(elf_fpreg_t) == 0',
+ * which is supposed to have been guaranteed by the kernel before
+ * calling us, e.g. in `ptrace_regset'. We enforce that requirement,
+ * so that we can safely avoid preinitializing temporaries for
+ * partial register writes.
+ */
static int fpr_set(struct task_struct *target,
const struct user_regset *regset,
unsigned int pos, unsigned int count,
@@ -524,6 +532,8 @@ static int fpr_set(struct task_struct *target,
{
int err;
+ BUG_ON(count % sizeof(elf_fpreg_t));
+
/* XXX fcr31 */
init_fp_ctx(target);