On Wed, Apr 2, 2025 at 1:10 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
On Fri, Mar 28, 2025 at 05:03:36PM -0700, Peter Collingbourne wrote:
diff --git a/lib/string.c b/lib/string.c
index eb4486ed40d25..b632c71df1a50 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -119,6 +119,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
 		return -E2BIG;
 
+#ifndef CONFIG_DCACHE_WORD_ACCESS
 #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 	/*
 	 * If src is unaligned, don't cross a page boundary,
@@ -133,12 +134,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	/* If src or dest is unaligned, don't do word-at-a-time. */
 	if (((long) dest | (long) src) & (sizeof(long) - 1))
 		max = 0;
+#endif
 #endif
 
 	/*
-	 * read_word_at_a_time() below may read uninitialized bytes after the
-	 * trailing zero and use them in comparisons. Disable this optimization
-	 * under KMSAN to prevent false positive reports.
+	 * load_unaligned_zeropad() or read_word_at_a_time() below may read
+	 * uninitialized bytes after the trailing zero and use them in
+	 * comparisons. Disable this optimization under KMSAN to prevent
+	 * false positive reports.
 	 */
 	if (IS_ENABLED(CONFIG_KMSAN))
 		max = 0;
@@ -146,7 +149,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 	while (max >= sizeof(unsigned long)) {
 		unsigned long c, data;
 
+#ifdef CONFIG_DCACHE_WORD_ACCESS
+		c = load_unaligned_zeropad(src+res);
+#else
 		c = read_word_at_a_time(src+res);
+#endif
 		if (has_zero(c, &data, &constants)) {
 			data = prep_zero_mask(c, data, &constants);
 			data = create_zero_mask(data);
Kees mentioned the scenario where this crosses a page boundary and we pad the source with zeros. It's probably fine, but there are 70+ cases where the strscpy() return value is checked; I only looked at a couple.
The return value is the same with and without the patch: it's the number of bytes copied before the null terminator (i.e. not including the extra nulls now written).
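To make the contract concrete, a minimal sketch (not from the patch; the buffer and strings are made up):

	char buf[16];
	ssize_t n;

	/* Copies "abc" plus its NUL; n == 3 on either read path. */
	n = strscpy(buf, "abc", sizeof(buf));

	/* Source doesn't fit: the copy is truncated and NUL-terminated,
	 * and n == -E2BIG. */
	n = strscpy(buf, "a source string longer than the buffer", sizeof(buf));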
Could we at least preserve the behaviour with regard to page boundaries and keep the existing 'max' limiting logic? If I read the code correctly, falling back to reading one byte at a time from an unmapped page would panic. We also get this behaviour if src[0] is read from an invalid address, though for arm64 the panic would be in ex_handler_load_unaligned_zeropad() when count >= 8.
So do you think that the code should continue to panic if the source string is unterminated because of a page boundary? I don't have a strong opinion but maybe that's something that we should only do if some error checking option is turned on?
Reading across a tag granule (but not across a page boundary) and causing a tag check fault would result in padding, but we can live with this, and only architectures that do MTE-style tag checking would get the new behaviour.
By "padding" do you mean the extra (up to sizeof(unsigned long)) nulls now written to the destination? It seems unlikely that code would deliberately depend on the nulls not being written: the number of nulls written is not part of the documented interface contract, and it already varies depending on how close the source string is to a page boundary. If code is accidentally depending on the nulls not being written, that is almost certainly a bug anyway (because of the page boundary issue), and we should fix it if this change uncovers it.
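To illustrate the existing variability, a sketch (which path is taken depends on alignment, the CONFIG options above and how close src is to a page boundary):

	char dst[16];

	/* Word-at-a-time path: the final 8-byte store writes "abcde", its
	 * NUL, and two more zero bytes (dst[6] and dst[7]).
	 * Byte-at-a-time path: only dst[5] is set to NUL.
	 * The return value is 5 either way. */
	strscpy(dst, "abcde", sizeof(dst));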
What I haven't checked is whether a tag check fault in ex_handler_load_unaligned_zeropad() would confuse the KASAN logic for MTE (it would be a second tag check fault while processing the first). At a quick look, it seems ok but it might be worth checking.
Yes, that works, and I added a test case for that in v5. The stack trace looks like this:
[ 21.969736] Call trace:
[ 21.969739]  show_stack+0x18/0x24 (C)
[ 21.969756]  __dump_stack+0x28/0x38
[ 21.969764]  dump_stack_lvl+0x54/0x6c
[ 21.969770]  print_address_description+0x7c/0x274
[ 21.969780]  print_report+0x90/0xe8
[ 21.969789]  kasan_report+0xf0/0x150
[ 21.969799]  __do_kernel_fault+0x5c/0x1cc
[ 21.969808]  do_bad_area+0x30/0xec
[ 21.969816]  do_tag_check_fault+0x20/0x30
[ 21.969824]  do_mem_abort+0x3c/0x8c
[ 21.969832]  el1_abort+0x3c/0x5c
[ 21.969840]  el1h_64_sync_handler+0x50/0xcc
[ 21.969847]  el1h_64_sync+0x6c/0x70
[ 21.969854]  fixup_exception+0xb0/0xe4 (P)
[ 21.969865]  __do_kernel_fault+0x80/0x1cc
[ 21.969873]  do_bad_area+0x30/0xec
[ 21.969881]  do_tag_check_fault+0x20/0x30
[ 21.969889]  do_mem_abort+0x3c/0x8c
[ 21.969896]  el1_abort+0x3c/0x5c
[ 21.969905]  el1h_64_sync_handler+0x50/0xcc
[ 21.969912]  el1h_64_sync+0x6c/0x70
[ 21.969917]  sized_strscpy+0x30/0x114 (P)
[ 21.969929]  kunit_try_run_case+0x64/0x160
[ 21.969939]  kunit_generic_run_threadfn_adapter+0x28/0x4c
[ 21.969950]  kthread+0x1c4/0x208
[ 21.969956]  ret_from_fork+0x10/0x20
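For reference, a minimal sketch of the kind of test that provokes this, in the style of the KASAN KUnit tests (this is not the actual v5 test; the name and setup are illustrative):

	static void strscpy_granule_overflow_sketch(struct kunit *test)
	{
		char dst[128];
		/* One 16-byte MTE granule with no NUL terminator, so the
		 * word-at-a-time read runs past the allocation into a
		 * differently tagged granule. */
		char *src = kmalloc(16, GFP_KERNEL);

		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, src);
		memset(src, 'A', 16);
		KUNIT_EXPECT_KASAN_FAIL(test, strscpy(dst, src, sizeof(dst)));
		kfree(src);
	}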
Peter