On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
> diff --git a/lib/string.c b/lib/string.c
> index eb4486ed40d25..b632c71df1a50 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
>  		return -E2BIG;
>  
> +#ifndef CONFIG_DCACHE_WORD_ACCESS
>  #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>  	/*
>  	 * If src is unaligned, don't cross a page boundary,
> @@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	/* If src or dest is unaligned, don't do word-at-a-time. */
>  	if (((long) dest | (long) src) & (sizeof(long) - 1))
>  		max = 0;
> +#endif
>  #endif
>  
>  	/*
> -	 * read_word_at_a_time() below may read uninitialized bytes after the
> -	 * trailing zero and use them in comparisons. Disable this optimization
> -	 * under KMSAN to prevent false positive reports.
> +	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
> +	 * uninitialized bytes after the trailing zero and use them in
> +	 * comparisons. Disable this optimization under KMSAN to prevent
> +	 * false positive reports.
>  	 */
>  	if (IS_ENABLED(CONFIG_KMSAN))
>  		max = 0;
> @@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
>  	while (max >= sizeof(unsigned long)) {
>  		unsigned long c, data;
>  
> +#ifdef CONFIG_DCACHE_WORD_ACCESS
> +		c = load_unaligned_zeropad(src+res);
> +#else
>  		c = read_word_at_a_time(src+res);
> +#endif
>  		if (has_zero(c, &data, &constants)) {
>  			data = prep_zero_mask(c, data, &constants);
>  			data = create_zero_mask(data);
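To spell out what the new path relies on, here is a minimal userspace
model of the load_unaligned_zeropad() semantics: bytes that cannot be
read come back as zeros instead of faulting. zeropad_model() and the
fake "mapped" buffer below are made up for illustration, not kernel API;
only the zero-fill behaviour is the point.

#include <stdio.h>
#include <string.h>

static unsigned long zeropad_model(const char *p, const char *end)
{
        unsigned long word = 0;
        size_t avail = (size_t)(end - p);

        if (avail > sizeof(word))
                avail = sizeof(word);
        memcpy(&word, p, avail);        /* unreadable bytes stay zero */
        return word;
}

int main(void)
{
        /* pretend only these 8 bytes are mapped */
        char page[8] = { 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h' };
        unsigned long w = zeropad_model(page + 5, page + sizeof(page));

        printf("%016lx\n", w);  /* LE: 'f','g','h' in the low bytes, zeros above */
        return 0;
}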
Kees mentioned the scenario where this crosses a page boundary and we pad the source with zeros. It's probably fine, but there are 70+ cases where the strscpy() return value is checked, and I only looked at a couple.
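For context, the contract those call sites depend on looks like this.
strscpy_model() is a userspace stand-in for illustration, not the kernel
implementation; the point is that zero padding at a page boundary would
hand such callers a plausible but shorter length with no error.

#include <errno.h>
#include <stdio.h>
#include <string.h>

static long strscpy_model(char *dst, const char *src, size_t count)
{
        size_t len;

        if (count == 0)
                return -E2BIG;
        len = strnlen(src, count);
        if (len == count) {             /* no NUL within count: truncate */
                memcpy(dst, src, count - 1);
                dst[count - 1] = '\0';
                return -E2BIG;
        }
        memcpy(dst, src, len + 1);
        return (long)len;
}

int main(void)
{
        char buf[8];
        long n = strscpy_model(buf, "hello", sizeof(buf));

        if (n < 0)
                printf("truncated\n");
        else
                printf("copied %ld bytes: %s\n", n, buf);
        return 0;
}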
Could we at least preserve the behaviour with regard to page boundaries and keep the existing 'max' limiting logic? If I read the code correctly, a fallback to reading one byte at a time from an unmapped page would panic. We also get this behaviour if src[0] itself is read from an invalid address, though for arm64 the panic would be in ex_handler_load_unaligned_zeropad() when count >= 8.
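For reference, this is the 'max' limiting logic the patch compiles out
(quoting lib/string.c from memory, so worth double-checking against the
tree):

#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
        /*
         * If src is unaligned, don't cross a page boundary,
         * since we don't know if the next page is mapped.
         */
        if ((long)src & (sizeof(long) - 1)) {
                size_t limit = PAGE_SIZE - ((long)src & (PAGE_SIZE - 1));
                if (limit < max)
                        max = limit;
        }
#else
        /* If src or dest is unaligned, don't do word-at-a-time. */
        if (((long) dest | (long) src) & (sizeof(long) - 1))
                max = 0;
#endif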
Reading across a tag granule (but not across a page boundary) and causing a tag check fault would result in padding, but we can live with this, and only architectures that do MTE-style tag checking would get the new behaviour.
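To be clear about the case I mean: a word load that stays within a page
but straddles a 16-byte tag granule. Roughly the following;
crosses_granule() is only illustrative (MTE_GRANULE_SIZE is 16 on
arm64):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MTE_GRANULE_SIZE        16      /* arm64 MTE tag granule */

/* illustrative helper: does an n-byte load at p straddle a granule? */
static bool crosses_granule(const void *p, size_t n)
{
        unsigned long start = (unsigned long)p;

        return (start / MTE_GRANULE_SIZE) !=
               ((start + n - 1) / MTE_GRANULE_SIZE);
}

int main(void)
{
        /* an 8-byte word load at offset 12 spans granules 0 and 1 */
        printf("%d\n", crosses_granule((void *)12, 8)); /* 1 */
        printf("%d\n", crosses_granule((void *)0, 8));  /* 0 */
        return 0;
}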
What I haven't checked is whether a tag check fault in ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for MTE (it would be a second tag check fault while processing the first). At a quick look it seems OK, but it might be worth checking.
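For completeness, what the fixup does, modelled in C (the real
ex_handler_load_unaligned_zeropad() in arch/arm64/mm/extable.c patches
register state rather than calling anything like this): it re-reads the
aligned word containing the faulting address and shifts, and that
aligned re-read is where a nested tag check fault would land.

#include <stdio.h>

static unsigned long zeropad_fixup_model(const char *addr)
{
        unsigned long offset = (unsigned long)addr & (sizeof(long) - 1);
        const unsigned long *aligned =
                (const unsigned long *)((unsigned long)addr - offset);
        unsigned long data = *aligned;  /* the re-read that could fault again */

        return data >> (8 * offset);    /* little-endian zero-fill of high bytes */
}

int main(void)
{
        unsigned long word = 0x6867666564636261UL;      /* "abcdefgh" on LE */
        const char *p = (const char *)&word + 5;

        printf("%016lx\n", zeropad_fixup_model(p));     /* 0000000000686766 */
        return 0;
}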