The quilt patch titled
Subject: ARM: prctl: reject PR_SET_MDWE on pre-ARMv6
has been removed from the -mm tree. Its filename was
arm-prctl-reject-pr_set_mdwe-on-pre-armv6.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Zev Weiss <zev@bewilderbeest.net>
Subject: ARM: prctl: reject PR_SET_MDWE on pre-ARMv6
Date: Mon, 26 Feb 2024 17:35:42 -0800
On v5 and lower CPUs we can't provide MDWE protection, so ensure we fail
any attempt to enable it via prctl(PR_SET_MDWE).
Previously such an attempt would misleadingly succeed, leading to any
subsequent mmap(PROT_READ|PROT_WRITE) or execve() failing unconditionally
(the latter somewhat violently via force_fatal_sig(SIGSEGV) due to
READ_IMPLIES_EXEC).
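As a minimal userspace sketch of the fixed behavior (the PR_* constants
are from <linux/prctl.h>; the fallback defines are an assumption for
pre-6.3 headers):

/* Probe PR_SET_MDWE; with this patch, pre-ARMv6 kernels fail the
 * prctl with EINVAL instead of silently breaking later mmap()/execve(). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>

#ifndef PR_SET_MDWE
#define PR_SET_MDWE 65			/* <linux/prctl.h>, Linux 6.3+ */
#endif
#ifndef PR_MDWE_REFUSE_EXEC_GAIN
#define PR_MDWE_REFUSE_EXEC_GAIN (1UL << 0)
#endif

int main(void)
{
	if (prctl(PR_SET_MDWE, PR_MDWE_REFUSE_EXEC_GAIN, 0L, 0L, 0L) == 0)
		puts("MDWE enabled");
	else	/* expected on pre-ARMv6 with this patch: EINVAL */
		printf("PR_SET_MDWE rejected: %s\n", strerror(errno));
	return 0;
}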
Link: https://lkml.kernel.org/r/20240227013546.15769-6-zev@bewilderbeest.net
Signed-off-by: Zev Weiss <zev@bewilderbeest.net>
Cc: <stable@vger.kernel.org> [6.3+]
Cc: Borislav Petkov <bp@alien8.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Florent Revest <revest@chromium.org>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Russell King (Oracle) <linux@armlinux.org.uk>
Cc: Sam James <sam@gentoo.org>
Cc: Stefan Roesch <shr@devkernel.io>
Cc: Yang Shi <yang@os.amperecomputing.com>
Cc: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
arch/arm/include/asm/mman.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)
--- /dev/null
+++ a/arch/arm/include/asm/mman.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_MMAN_H__
+#define __ASM_MMAN_H__
+
+#include <asm/system_info.h>
+#include <uapi/asm/mman.h>
+
+static inline bool arch_memory_deny_write_exec_supported(void)
+{
+ return cpu_architecture() >= CPU_ARCH_ARMv6;
+}
+#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
+
+#endif /* __ASM_MMAN_H__ */
_
Patches currently in -mm which might be from zev@bewilderbeest.net are
The quilt patch titled
Subject: prctl: generalize PR_SET_MDWE support check to be per-arch
has been removed from the -mm tree. Its filename was
prctl-generalize-pr_set_mdwe-support-check-to-be-per-arch.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Zev Weiss <zev@bewilderbeest.net>
Subject: prctl: generalize PR_SET_MDWE support check to be per-arch
Date: Mon, 26 Feb 2024 17:35:41 -0800
Patch series "ARM: prctl: Reject PR_SET_MDWE where not supported".
I noticed after a recent kernel update that my ARM926 system started
segfaulting on any execve() after calling prctl(PR_SET_MDWE). After some
investigation it appears that ARMv5 is incapable of providing the
appropriate protections for MDWE, since any readable memory is also
implicitly executable.
The prctl_set_mdwe() function already had some special-case logic added
disabling it on PARISC (commit 793838138c15, "prctl: Disable
prctl(PR_SET_MDWE) on parisc"); this patch series (1) generalizes that
check to use an arch_*() function, and (2) adds a corresponding override
for ARM to disable MDWE on pre-ARMv6 CPUs.
With the series applied, prctl(PR_SET_MDWE) is rejected on ARMv5 and
subsequent execve() calls (as well as mmap(PROT_READ|PROT_WRITE)) can
succeed instead of unconditionally failing; on ARMv6 the prctl works as it
did previously.
This patch (of 2):
There exist systems other than PARISC where MDWE may not be feasible to
support; rather than cluttering up the generic code with additional
arch-specific logic, let's add a generic function for checking MDWE support
and allow each arch to override it as needed.
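The override hook uses the usual #ifndef/#define header idiom; here is a
standalone sketch, with the two "headers" collapsed into one file purely
for illustration:

#include <stdbool.h>
#include <stdio.h>

/* "arch header": an arch that cannot support MDWE overrides the hook
 * and defines the macro so the generic default below is skipped. */
static inline bool arch_memory_deny_write_exec_supported(void)
{
	return false;
}
#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported

/* "generic header": supplies the default only when no override exists. */
#ifndef arch_memory_deny_write_exec_supported
static inline bool arch_memory_deny_write_exec_supported(void)
{
	return true;
}
#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
#endif

int main(void)
{
	/* Prints 0: the arch override wins over the generic default. */
	printf("MDWE supported: %d\n", arch_memory_deny_write_exec_supported());
	return 0;
}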
Link: https://lkml.kernel.org/r/20240227013546.15769-4-zev@bewilderbeest.net
Link: https://lkml.kernel.org/r/20240227013546.15769-5-zev@bewilderbeest.net
Signed-off-by: Zev Weiss <zev@bewilderbeest.net>
Acked-by: Helge Deller <deller@gmx.de> [parisc]
Cc: Borislav Petkov <bp@alien8.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Florent Revest <revest@chromium.org>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Miguel Ojeda <ojeda@kernel.org>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Russell King (Oracle) <linux@armlinux.org.uk>
Cc: Sam James <sam@gentoo.org>
Cc: Stefan Roesch <shr@devkernel.io>
Cc: Yang Shi <yang@os.amperecomputing.com>
Cc: Yin Fengwei <fengwei.yin@intel.com>
Cc: <stable@vger.kernel.org> [6.3+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
arch/parisc/include/asm/mman.h | 14 ++++++++++++++
include/linux/mman.h | 8 ++++++++
kernel/sys.c | 7 +++++--
3 files changed, 27 insertions(+), 2 deletions(-)
--- /dev/null
+++ a/arch/parisc/include/asm/mman.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_MMAN_H__
+#define __ASM_MMAN_H__
+
+#include <uapi/asm/mman.h>
+
+/* PARISC cannot allow mdwe as it needs writable stacks */
+static inline bool arch_memory_deny_write_exec_supported(void)
+{
+ return false;
+}
+#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
+
+#endif /* __ASM_MMAN_H__ */
--- a/include/linux/mman.h~prctl-generalize-pr_set_mdwe-support-check-to-be-per-arch
+++ a/include/linux/mman.h
@@ -162,6 +162,14 @@ calc_vm_flag_bits(unsigned long flags)
unsigned long vm_commit_limit(void);
+#ifndef arch_memory_deny_write_exec_supported
+static inline bool arch_memory_deny_write_exec_supported(void)
+{
+ return true;
+}
+#define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
+#endif
+
/*
* Denies creating a writable executable mapping or gaining executable permissions.
*
--- a/kernel/sys.c~prctl-generalize-pr_set_mdwe-support-check-to-be-per-arch
+++ a/kernel/sys.c
@@ -2408,8 +2408,11 @@ static inline int prctl_set_mdwe(unsigne
if (bits & PR_MDWE_NO_INHERIT && !(bits & PR_MDWE_REFUSE_EXEC_GAIN))
return -EINVAL;
- /* PARISC cannot allow mdwe as it needs writable stacks */
- if (IS_ENABLED(CONFIG_PARISC))
+ /*
+ * EOPNOTSUPP might be more appropriate here in principle, but
+ * existing userspace depends on EINVAL specifically.
+ */
+ if (!arch_memory_deny_write_exec_supported())
return -EINVAL;
current_bits = get_current_mdwe();
_
Patches currently in -mm which might be from zev@bewilderbeest.net are
The quilt patch titled
Subject: mm: cachestat: fix two shmem bugs
has been removed from the -mm tree. Its filename was
mm-cachestat-fix-two-shmem-bugs.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: cachestat: fix two shmem bugs
Date: Fri, 15 Mar 2024 05:55:56 -0400
When cachestat on shmem races with swapping and invalidation, there
are two possible bugs:
1) A swapin error can have resulted in a poisoned swap entry in the
shmem inode's xarray. Calling get_shadow_from_swap_cache() on it
will result in an out-of-bounds access to swapper_spaces[].
Validate the entry with non_swap_entry() before going further.
2) When we find a valid swap entry in the shmem's inode, the shadow
entry in the swapcache might not exist yet: swap IO is still in
progress and we're before __remove_mapping; swapin, invalidation,
or swapoff have removed the shadow from swapcache after we saw the
shmem swap entry.
This will send a NULL to workingset_test_recent(). The latter
purely operates on pointer bits, so it won't crash - node 0, memcg
ID 0, eviction timestamp 0, etc. are all valid inputs - but it's a
bogus test. In theory that could result in a false "recently
evicted" count.
Such a false positive wouldn't be the end of the world. But for
code clarity and (future) robustness, be explicit about this case.
Bail on get_shadow_from_swap_cache() returning NULL.
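For reference, a minimal userspace sketch of the syscall path being
fixed, run against a shmem (memfd) file; the struct layout mirrors
<linux/mman.h>, and the fallback syscall number 451 is the x86_64 value,
assumed here in case older headers lack __NR_cachestat:

/* Sketch: run cachestat() against a shmem (memfd) file. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_cachestat
#define __NR_cachestat 451		/* x86_64; assumption for old headers */
#endif

struct cachestat_range { uint64_t off, len; };
struct cachestat {
	uint64_t nr_cache, nr_dirty, nr_writeback;
	uint64_t nr_evicted, nr_recently_evicted;
};

int main(void)
{
	int fd = memfd_create("shmem-test", 0);
	char page[4096];

	memset(page, 'x', sizeof(page));
	write(fd, page, sizeof(page));

	struct cachestat_range range = { 0, 0 };	/* len 0 = to EOF */
	struct cachestat cs;

	if (syscall(__NR_cachestat, fd, &range, &cs, 0) == 0)
		printf("cached=%llu recently_evicted=%llu\n",
		       (unsigned long long)cs.nr_cache,
		       (unsigned long long)cs.nr_recently_evicted);
	return 0;
}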
Link: https://lkml.kernel.org/r/20240315095556.GC581298@cmpxchg.org
Fixes: cf264e1329fb ("cachestat: implement cachestat syscall")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Chengming Zhou <chengming.zhou@linux.dev> [Bug #1]
Reported-by: Jann Horn <jannh@google.com> [Bug #2]
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Cc: <stable@vger.kernel.org> [v6.5+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/filemap.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
--- a/mm/filemap.c~mm-cachestat-fix-two-shmem-bugs
+++ a/mm/filemap.c
@@ -4197,7 +4197,23 @@ static void filemap_cachestat(struct add
/* shmem file - in swap cache */
swp_entry_t swp = radix_to_swp_entry(folio);
+ /* swapin error results in poisoned entry */
+ if (non_swap_entry(swp))
+ goto resched;
+
+ /*
+ * Getting a swap entry from the shmem
+ * inode means we beat
+ * shmem_unuse(). rcu_read_lock()
+ * ensures swapoff waits for us before
+ * freeing the swapper space. However,
+ * we can race with swapping and
+ * invalidation, so there might not be
+ * a shadow in the swapcache (yet).
+ */
shadow = get_shadow_from_swap_cache(swp);
+ if (!shadow)
+ goto resched;
}
#endif
if (workingset_test_recent(shadow, true, &workingset))
_
Patches currently in -mm which might be from hannes@cmpxchg.org are
mm-zswap-optimize-zswap-pool-size-tracking.patch
mm-zpool-return-pool-size-in-pages.patch
mm-page_alloc-remove-pcppage-migratetype-caching.patch
mm-page_alloc-optimize-free_unref_folios.patch
mm-page_alloc-fix-up-block-types-when-merging-compatible-blocks.patch
mm-page_alloc-move-free-pages-when-converting-block-during-isolation.patch
mm-page_alloc-fix-move_freepages_block-range-error.patch
mm-page_alloc-fix-freelist-movement-during-block-conversion.patch
mm-page_alloc-close-migratetype-race-between-freeing-and-stealing.patch
mm-page_isolation-prepare-for-hygienic-freelists.patch
mm-page_isolation-prepare-for-hygienic-freelists-fix.patch
mm-page_alloc-consolidate-free-page-accounting.patch
The quilt patch titled
Subject: init: open /initrd.image with O_LARGEFILE
has been removed from the -mm tree. Its filename was
init-open-initrdimage-with-o_largefile.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: John Sperbeck <jsperbeck@google.com>
Subject: init: open /initrd.image with O_LARGEFILE
Date: Sun, 17 Mar 2024 15:15:22 -0700
If the initrd data is larger than 2 GiB, we'll eventually fail to write to
the /initrd.image file when we hit that limit, unless O_LARGEFILE is set.
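The limit in question is the classic non-LFS one: without O_LARGEFILE, a
write that would end past 2^31 - 1 bytes fails with EFBIG. A hypothetical
userspace sketch of the same check, only meaningful when compiled as a
32-bit binary without -D_FILE_OFFSET_BITS=64 (e.g. gcc -m32):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* No O_LARGEFILE, as in the old populate_initrd_image() call. */
	int fd = open("bigfile", O_WRONLY | O_CREAT, 0600);
	char byte = 0;

	/* Ends exactly at offset 2^31 - 1: still permitted. */
	if (pwrite(fd, &byte, 1, 0x7ffffffe) < 0)
		perror("pwrite below limit");
	/* Would end past 2^31 - 1: fails without O_LARGEFILE. */
	if (pwrite(fd, &byte, 1, 0x7fffffff) < 0)
		perror("pwrite at limit");	/* expect: File too large */
	close(fd);
	unlink("bigfile");
	return 0;
}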
Link: https://lkml.kernel.org/r/20240317221522.896040-1-jsperbeck@google.com
Signed-off-by: John Sperbeck <jsperbeck@google.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
init/initramfs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/init/initramfs.c~init-open-initrdimage-with-o_largefile
+++ a/init/initramfs.c
@@ -682,7 +682,7 @@ static void __init populate_initrd_image
printk(KERN_INFO "rootfs image is not initramfs (%s); looks like an initrd\n",
err);
- file = filp_open("/initrd.image", O_WRONLY | O_CREAT, 0700);
+ file = filp_open("/initrd.image", O_WRONLY|O_CREAT|O_LARGEFILE, 0700);
if (IS_ERR(file))
return;
_
Patches currently in -mm which might be from jsperbeck@google.com are
folio_is_secretmem() currently relies on secretmem folios being LRU folios,
to save some cycles.
However, folios might reside in a folio batch without the LRU flag set, or
temporarily have their LRU flag cleared. Consequently, the LRU flag is
unreliable for this purpose.
In particular, this is the case when secretmem_fault() allocates a
fresh page and calls filemap_add_folio()->folio_add_lru(). The folio might
be added to the per-cpu folio batch and won't get the LRU flag set until
the batch is drained using, e.g., lru_add_drain().
Consequently, folio_is_secretmem() might not detect secretmem folios
and GUP-fast can succeed in grabbing a secretmem folio, crashing the
kernel when we would later try reading/writing to the folio, because
the folio has been unmapped from the directmap.
Fix it by removing that unreliable check.
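For context, secretmem folios come from the memfd_secret() syscall, whose
backing pages are removed from the kernel direct map; that is why a stray
GUP-fast success is fatal. A minimal usage sketch (the fallback number
447 is the x86_64 value, and the secretmem.enable=1 boot parameter is
required):

/* Sketch: allocate one page of secret memory via memfd_secret(). */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447		/* x86_64; assumption for old headers */
#endif

int main(void)
{
	int fd = syscall(__NR_memfd_secret, 0);
	if (fd < 0) {
		perror("memfd_secret (needs secretmem.enable=1)");
		return 1;
	}
	ftruncate(fd, 4096);
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* The backing page is now gone from the kernel direct map;
	 * only this user mapping can touch it. */
	strcpy(p, "secret");
	printf("%s\n", p);
	return 0;
}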
Reported-by: xingwei lee <xrivendell7@gmail.com>
Reported-by: yue sun <samsun1006219@gmail.com>
Closes: https://lore.kernel.org/lkml/CABOYnLyevJeravW=QrH0JUPYEcDN160aZFb7kwndm-J2r…
Debugged-by: Miklos Szeredi <miklos@szeredi.hu>
Tested-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 1507f51255c9 ("mm: introduce memfd_secret system call to create "secret" memory areas")
Cc: <stable@vger.kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/secretmem.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 35f3a4a8ceb1..acf7e1a3f3de 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -13,10 +13,10 @@ static inline bool folio_is_secretmem(struct folio *folio)
/*
* Using folio_mapping() is quite slow because of the actual call
* instruction.
- * We know that secretmem pages are not compound and LRU so we can
+ * We know that secretmem pages are not compound, so we can
* save a couple of cycles here.
*/
- if (folio_test_large(folio) || !folio_test_lru(folio))
+ if (folio_test_large(folio))
return false;
mapping = (struct address_space *)
--
2.43.2
When the loop exits, the index has already been incremented once per
copied entry, so it equals the number of PDOs that were updated; assigning
i + 1 therefore overcounts by one.
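A trivial standalone sketch of that invariant, using the same
PDO_MAX_OBJECTS bound as the tcpm code:

/* On loop exit, i equals the number of non-zero entries copied,
 * so "count = i + 1", as the old code did, overcounts by one. */
#include <stdio.h>

#define PDO_MAX_OBJECTS 7	/* same bound as include/linux/usb/pd.h */

int main(void)
{
	unsigned int pdo[PDO_MAX_OBJECTS] = { 0x1a, 0x2b, 0x3c, 0 };
	int i;

	for (i = 0; i < PDO_MAX_OBJECTS && pdo[i]; i++)
		;	/* body copies pdo[i] in the real driver */

	printf("entries = %d\n", i);	/* prints 3, the correct count */
	return 0;
}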
Fixes: cd099cde4ed2 ("usb: typec: tcpm: Support multiple capabilities")
Cc: stable@vger.kernel.org
Signed-off-by: Kyle Tso <kyletso@google.com>
---
drivers/usb/typec/tcpm/tcpm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
index 3d505614bff1..566dad0cb9d3 100644
--- a/drivers/usb/typec/tcpm/tcpm.c
+++ b/drivers/usb/typec/tcpm/tcpm.c
@@ -6852,14 +6852,14 @@ static int tcpm_pd_set(struct typec_port *p, struct usb_power_delivery *pd)
if (data->sink_desc.pdo[0]) {
for (i = 0; i < PDO_MAX_OBJECTS && data->sink_desc.pdo[i]; i++)
port->snk_pdo[i] = data->sink_desc.pdo[i];
- port->nr_snk_pdo = i + 1;
+ port->nr_snk_pdo = i;
port->operating_snk_mw = data->operating_snk_mw;
}
if (data->source_desc.pdo[0]) {
for (i = 0; i < PDO_MAX_OBJECTS && data->source_desc.pdo[i]; i++)
port->snk_pdo[i] = data->source_desc.pdo[i];
- port->nr_src_pdo = i + 1;
+ port->nr_src_pdo = i;
}
switch (port->state) {
--
2.44.0.291.gc1ea87d7ee-goog
Hi,
The current version of the delay reporting code can report incorrect
values when paired with a firmware which enables this feature.
Unfortunately, several smaller issues needed to be addressed to correct
the behavior:
- Wrong information was used for the host side of the counter.
- On MTL/LNL we used an incorrect link-side counter function (incorrect
  in the sense that it had only been verified on MTL).
- The link-side counter needs compensation logic if pause/resume is used.
- The offset values were not refreshed from the firmware.
Finally, and not strictly related, the ALSA buffer size needs to be
constrained to avoid constant xruns from media players (like mpv).
The series applies cleanly to 6.9 and to 6.8.y stable. Older stable
trees would need a manual backport, though it is questionable whether
one is needed, as MTL/LNL support there is missing features.
Mark, can you pick these patches for the 6.9 cycle as fixes?
Regards,
Peter
---
Peter Ujfalusi (17):
ASoC: SOF: Add dsp_max_burst_size_in_ms member to snd_sof_pcm_stream
ASoC: SOF: ipc4-topology: Save the DMA maximum burst size for PCMs
ASoC: SOF: Intel: hda-pcm: Use dsp_max_burst_size_in_ms to place
constraint
ASoC: SOF: Intel: hda: Implement get_stream_position (Linear Link
Position)
ASoC: SOF: Intel: mtl/lnl: Use the generic get_stream_position
callback
ASoC: SOF: Introduce a new callback pair to be used for PCM delay
reporting
ASoC: SOF: Intel: Set the dai/host get frame/byte counter callbacks
ASoC: SOF: ipc4-pcm: Use the snd_sof_pcm_get_dai_frame_counter() for
pcm_delay
ASoC: SOF: Intel: hda-common-ops: Do not set the get_stream_position
callback
ASoC: SOF: Remove the get_stream_position callback
ASoC: SOF: ipc4-pcm: Move struct sof_ipc4_timestamp_info definition
locally
ASoC: SOF: ipc4-pcm: Combine the SOF_IPC4_PIPE_PAUSED cases in
pcm_trigger
ASoC: SOF: ipc4-pcm: Invalidate the stream_start_offset in PAUSED
state
ASoC: SOF: sof-pcm: Add pointer callback to sof_ipc_pcm_ops
ASoC: SOF: ipc4-pcm: Correct the delay calculation
ALSA: hda: Add pplcllpl/u members to hdac_ext_stream
ASoC: SOF: Intel: hda: Compensate LLP in case it is not reset
include/sound/hdaudio_ext.h | 3 +
sound/soc/sof/intel/hda-common-ops.c | 3 +
sound/soc/sof/intel/hda-dai-ops.c | 11 ++
sound/soc/sof/intel/hda-pcm.c | 29 ++++
sound/soc/sof/intel/hda-stream.c | 70 ++++++++++
sound/soc/sof/intel/hda.h | 6 +
sound/soc/sof/intel/lnl.c | 2 -
sound/soc/sof/intel/mtl.c | 14 --
sound/soc/sof/intel/mtl.h | 10 --
sound/soc/sof/ipc4-pcm.c | 191 ++++++++++++++++++++++-----
sound/soc/sof/ipc4-priv.h | 14 --
sound/soc/sof/ipc4-topology.c | 22 ++-
sound/soc/sof/ops.h | 24 +++-
sound/soc/sof/pcm.c | 8 ++
sound/soc/sof/sof-audio.h | 9 +-
sound/soc/sof/sof-priv.h | 24 +++-
16 files changed, 351 insertions(+), 89 deletions(-)
--
2.44.0
folio_is_secretmem() states that secretmem folios cannot be LRU folios:
so we may only exit early if we find an LRU folio. Yet, we exit early if
we find a folio that is not an LRU folio.
Consequently, folio_is_secretmem() fails to detect secretmem folios and,
therefore, we can succeed in grabbing a secretmem folio during GUP-fast,
crashing the kernel when we later try reading/writing to the folio, because
the folio has been unmapped from the directmap.
Reported-by: xingwei lee <xrivendell7@gmail.com>
Reported-by: yue sun <samsun1006219@gmail.com>
Closes: https://lore.kernel.org/lkml/CABOYnLyevJeravW=QrH0JUPYEcDN160aZFb7kwndm-J2r…
Debugged-by: Miklos Szeredi <miklos@szeredi.hu>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
Tested-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 1507f51255c9 ("mm: introduce memfd_secret system call to create "secret" memory areas")
Cc: <stable@vger.kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/secretmem.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 35f3a4a8ceb1..6996f1f53f14 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -16,7 +16,7 @@ static inline bool folio_is_secretmem(struct folio *folio)
* We know that secretmem pages are not compound and LRU so we can
* save a couple of cycles here.
*/
- if (folio_test_large(folio) || !folio_test_lru(folio))
+ if (folio_test_large(folio) || folio_test_lru(folio))
return false;
mapping = (struct address_space *)
--
2.43.2