On Wed, Apr 02, 2025 at 05:08:51PM -0700, Peter Collingbourne wrote:
On Wed, Apr 2, 2025 at 1:10 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
diff --git a/lib/string.c b/lib/string.c
index eb4486ed40d25..b632c71df1a50 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
 		return -E2BIG;
 
+#ifndef CONFIG_DCACHE_WORD_ACCESS
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	/*
 	 * If src is unaligned, don't cross a page boundary,
@@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	/* If src or dest is unaligned, don't do word-at-a-time. */
 	if (((long) dest | (long) src) & (sizeof(long) - 1))
 		max = 0;
+#endif
 #endif
 
 	/*
-	 * read_word_at_a_time() below may read uninitialized bytes after the
-	 * trailing zero and use them in comparisons. Disable this optimization
-	 * under KMSAN to prevent false positive reports.
+	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
+	 * uninitialized bytes after the trailing zero and use them in
+	 * comparisons. Disable this optimization under KMSAN to prevent
+	 * false positive reports.
 	 */
 	if (IS_ENABLED(CONFIG_KMSAN))
 		max = 0;
@@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	while (max >= sizeof(unsigned long)) {
 		unsigned long c, data;
 
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		c = load_unaligned_zeropad(src+res);
+#else
 		c = read_word_at_a_time(src+res);
+#endif
 		if (has_zero(c, &data, &constants)) {
 			data = prep_zero_mask(c, data, &constants);
 			data = create_zero_mask(data);
Kees mentioned the scenario where this crosses the page boundary and we pad the source with zeros. It's probably fine, but there are 70+ cases where the strscpy() return value is checked; I only looked at a couple.
The return value is the same with and without the patch: it's the number of bytes copied before the null terminator (i.e. not including the extra nulls now written).
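For reference, callers typically use the return value like this (a minimal, hypothetical snippet; 'name' and 'src' are made up):

	char name[16];
	ssize_t len = strscpy(name, src, sizeof(name));

	if (len < 0)		/* -E2BIG: src (plus its NUL) did not fit */
		return len;
	/* len is the number of bytes copied, excluding the trailing NUL */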
I was thinking of the -E2BIG return, but you are right, the patch wouldn't change this. If, for example, you read 8 bytes across a page boundary and it faults, load_unaligned_zeropad() makes strscpy() return fewer characters copied, implying the source was null-terminated. With read_word_at_a_time(), OTOH, it's the subsequent byte-at-a-time loop that faults and panics. But it wouldn't return -E2BIG either way, so it doesn't matter for the caller.
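To make the scenario concrete, here is a hypothetical construction (not the actual test case; the function and variable names are made up, and vmalloc() is used only because its allocations are followed by an unmapped guard page):

	#include <linux/mm.h>
	#include <linux/string.h>
	#include <linux/vmalloc.h>

	static void strscpy_page_boundary_demo(void)
	{
		char dst[8];
		char *page = vmalloc(PAGE_SIZE);	/* followed by an unmapped guard page */
		char *src;
		ssize_t ret;

		if (!page)
			return;

		src = page + PAGE_SIZE - 3;
		memcpy(src, "abc", 3);			/* no NUL before the page boundary */

		ret = strscpy(dst, src, sizeof(dst));
		/*
		 * With the patch: the word read crossing into the guard page is
		 * fixed up by load_unaligned_zeropad() to zero-fill the unmapped
		 * part, so ret == 3 and dst == "abc", as if the source had been
		 * null-terminated.
		 * Before the patch: the word-at-a-time loop is not allowed to
		 * cross the boundary, so the byte-at-a-time fallback dereferences
		 * the unmapped byte and the kernel oopses.
		 */

		vfree(page);
	}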
Could we at least preserve the behaviour with regard to page boundaries and keep the existing 'max' limiting logic? If I read the code correctly, a fallback to reading one byte at a time from an unmapped page would panic. We also get this behaviour if src[0] is read from an invalid address, though for arm64 the panic would be in ex_handler_load_unaligned_zeropad() when count >= 8.
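For reference, the 'max' limiting logic being discussed is the pre-patch code in lib/string.c, roughly the following (quoted from memory, so the exact lines may differ):

	#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
		/*
		 * If src is unaligned, don't cross a page boundary,
		 * since we don't know if the next page is mapped.
		 */
		if ((long)src & (sizeof(long) - 1)) {
			size_t limit = PAGE_SIZE - ((long)src & (PAGE_SIZE - 1));
			if (limit < max)
				max = limit;
		}
	#else
		/* If src or dest is unaligned, don't do word-at-a-time. */
		if (((long) dest | (long) src) & (sizeof(long) - 1))
			max = 0;
	#endif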
So do you think the code should continue to panic if the source string runs unterminated into an unmapped page? I don't have a strong opinion, but maybe that's something we should only do if some error-checking option is turned on?
It's mostly about keeping the current behaviour w.r.t. page boundaries. Not a strong opinion either. The change would be to not read across page boundaries.
Reading across a tag granule (but not across a page boundary) and causing a tag check fault would result in padding, but we can live with this, and only architectures that do MTE-style tag checking would get the new behaviour.
By "padding" do you mean the extra (up to sizeof(unsigned long)) nulls now written to the destination?
No, I meant the padding of the source when a fault occurs. The write to the destination would only be a single '\0' byte. It's the destination safe termination vs. panic trade-off mentioned above.
What I haven't checked is whether a tag check fault in ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for MTE (it would be a second tag check fault while processing the first). At a quick look, it seems ok but it might be worth checking.
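For context, the handler in question in arch/arm64/mm/extable.c looks roughly like this (quoted from memory, so details may differ across kernel versions); the second tag check fault Catalin mentions would presumably be taken on the aligned re-read of the faulting word:

	static bool ex_handler_load_unaligned_zeropad(const struct exception_table_entry *ex,
						      struct pt_regs *regs)
	{
		int reg_data = FIELD_GET(EX_DATA_REG_DATA, ex->data);
		int reg_addr = FIELD_GET(EX_DATA_REG_ADDR, ex->data);
		unsigned long data, addr, offset;

		addr = pt_regs_read_reg(regs, reg_addr);

		offset = addr & 0x7UL;
		addr &= ~0x7UL;

		/* Re-read the whole word from the aligned-down address ... */
		data = *(unsigned long *)addr;
	#ifndef __AARCH64EB__
		data >>= 8 * offset;
	#else
		data <<= 8 * offset;
	#endif
		pt_regs_write_reg(regs, reg_data, data);

		regs->pc = get_ex_fixup(ex);
		return true;
	}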
Yes, that works, and I added a test case for that in v5. The stack trace looks like this:
Thanks for checking.