=== Overview
arm64 has a feature called Top Byte Ignore, which allows pointer tags to be embedded in the top byte of each pointer. Userspace programs (such as HWASan, a memory debugging tool [1]) might use this feature and pass tagged user pointers to the kernel through syscalls or other interfaces.
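As a rough illustration (this snippet is not part of the patchset; it just mirrors the SET_TAG macro used by the selftest at the end of the series), a userspace program might tag a heap pointer like this:

  /* Build a tagged pointer: with TBI enabled for userspace, the hardware
   * ignores bits 63:56 of the address on loads and stores (arm64 only). */
  #include <stdint.h>
  #include <stdlib.h>

  #define SHIFT_TAG(tag)       ((uint64_t)(tag) << 56)
  #define SET_TAG(ptr, tag)    (((uint64_t)(ptr) & ~SHIFT_TAG(0xff)) | SHIFT_TAG(tag))

  int main(void)
  {
          int *ptr = malloc(sizeof(*ptr));
          int *tagged_ptr = (int *)SET_TAG(ptr, 0x42);

          *tagged_ptr = 1;   /* the tag in the top byte is ignored by TBI */
          free(ptr);         /* the allocator still gets the original pointer */
          return 0;
  }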
Right now the kernel is already able to handle user faults with tagged pointers, due to these patches:
1. 81cddd65 ("arm64: traps: fix userspace cache maintenance emulation on a tagged pointer")
2. 7dcd9dd8 ("arm64: hw_breakpoint: fix watchpoint matching for tagged pointers")
3. 276e9327 ("arm64: entry: improve data abort handling of tagged pointers")
This patchset extends tagged pointer support to syscall arguments.
As per the proposed ABI change [3], tagged pointers are only allowed to be passed to syscalls when they point to memory ranges obtained by anonymous mmap() or sbrk() (see the patchset [3] for more details).
For non-memory syscalls this is done by untagging user pointers when the kernel performs pointer checks to find out whether the pointer comes from userspace (most notably in access_ok). The untagging is done only while the pointer is being checked; the tag is preserved as the pointer makes its way through the kernel and stays tagged when the kernel dereferences the pointer while performing user memory accesses.
Memory syscalls (mmap, mprotect, etc.) don't do user memory accesses but rather deal with memory ranges, and untagged pointers are better suited to describe memory ranges internally. Thus for memory syscalls we untag pointers completely when they enter the kernel.
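The generic no-op fallback for untagged_addr() is added by the first patch below. The arm64 definition itself is not quoted in this cover letter; as a hedged sketch, it can be thought of as a sign extension from bit 55, which strips the tag from user addresses while leaving kernel addresses (top byte 0xff) intact:

  /* Sketch only: approximates the arm64 untagged_addr() used by this series;
   * see the arm64 uaccess patch for the authoritative definition. */
  #define untagged_addr(addr) \
          ((__typeof__(addr))sign_extend64((u64)(addr), 55))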
=== Other approaches
One alternative approach to untagging that was considered is to strip the pointer tag completely as the pointer enters the kernel, via some kind of syscall wrapper, but that won't work for the many different ioctl calls: with this approach we would need a custom wrapper for each ioctl variation, which doesn't seem practical.
An alternative approach to untagging pointers in memory syscall prologues is to instead allow tagged pointers to be passed to find_vma() (and other vma-related functions) and untag them there. Unfortunately, a lot of find_vma() callers then compare or subtract the returned vma's vm_start and vm_end fields against the pointer that was being searched for, so this approach would still require changing all find_vma() callers.
=== Testing
The following testing approaches have been taken to find potential issues with user pointer untagging:
1. Static testing (with sparse [2] and separately with a custom static analyzer based on Clang) to track casts of __user pointers to integer types to find places where untagging needs to be done.
2. Static testing with grep to find parts of the kernel that call find_vma() (and other similar functions) or directly compare against vm_start/vm_end fields of vma.
3. Static testing with grep to find parts of the kernel that compare user pointers with TASK_SIZE or other similar consts and macros.
4. Dynamic testing: adding BUG_ON(has_tag(addr)) to find_vma() and running a modified syzkaller version that passes tagged pointers to the kernel.
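For illustration, the check from item 4 was conceptually of the following shape (has_tag() is an ad-hoc debug helper used only for this testing, not something added by the patchset):

  /* Debug-only instrumentation for syzkaller runs (sketch): flag any tagged
   * address that reaches a vma lookup. */
  #define has_tag(addr)    (((u64)(addr) >> 56) != 0)

  /* placed at the start of find_vma(mm, addr): */
  BUG_ON(has_tag(addr));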
Based on the results of this testing, the required patches have been added to the patchset.
=== Notes
This patchset is meant to be merged together with "arm64 relaxed ABI" [3].
This patchset is a prerequisite for ARM's memory tagging hardware feature support [4].
This patchset has been merged into the Pixel 2 kernel tree and is now being used to enable testing of Pixel 2 phones with HWASan.
Thanks!
[1] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
[2] https://github.com/lucvoo/sparse-dev/commit/5f960cb10f56ec2017c128ef9d16060e...
[3] https://lkml.org/lkml/2019/3/18/819
[4] https://community.arm.com/processors/b/blog/posts/arm-a-profile-architecture...
Changes in v12:
- Changed untagging in tcp_zerocopy_receive() to also untag zc->address.
- Fixed untagging in prctl_set_mm* to only untag pointers for vma lookups and validity checks, but leave them as is for actual user space accesses.
- Updated the link to the v2 of the "arm64 relaxed ABI" patchset [3].
- Dropped the documentation patch, as the "arm64 relaxed ABI" patchset [3] handles that.
Changes in v11:
- Added "uprobes, arm64: untag user pointers in find_active_uprobe" patch.
- Added "bpf, arm64: untag user pointers in stack_map_get_build_id_offset" patch.
- Fixed "tracing, arm64: untag user pointers in seq_print_user_ip" to correctly perform subtraction with a tagged addr.
- Moved untagged_addr() from SYSCALL_DEFINE3(mprotect) and SYSCALL_DEFINE4(pkey_mprotect) to do_mprotect_pkey().
- Moved untagged_addr() definition for other arches from include/linux/memory.h to include/linux/mm.h.
- Changed untagging in strn*_user() to perform userspace accesses through tagged pointers.
- Updated the documentation to mention that passing tagged pointers to memory syscalls is allowed.
- Updated the test to use malloc'ed memory instead of stack memory.
Changes in v10:
- Added "mm, arm64: untag user pointers passed to memory syscalls" back.
- New patch "fs, arm64: untag user pointers in fs/userfaultfd.c".
- New patch "net, arm64: untag user pointers in tcp_zerocopy_receive".
- New patch "kernel, arm64: untag user pointers in prctl_set_mm*".
- New patch "tracing, arm64: untag user pointers in seq_print_user_ip".

Changes in v9:
- Rebased onto 4.20-rc6.
- Used u64 instead of __u64 in type casts in the untagged_addr macro for arm64.
- Added braces around (addr) in the untagged_addr macro for other arches.

Changes in v8:
- Rebased onto 65102238 (4.20-rc1).
- Added a note to the cover letter on why syscall wrappers/shims that untag user pointers won't work.
- Added a note to the cover letter that this patchset has been merged into the Pixel 2 kernel tree.
- Documentation fixes, in particular added a list of syscalls that don't support tagged user pointers.

Changes in v7:
- Rebased onto 17b57b18 (4.19-rc6).
- Dropped the "arm64: untag user address in __do_user_fault" patch, since the existing patches already handle user faults properly.
- Dropped the "usb, arm64: untag user addresses in devio" patch, since the passed pointer must come from a vma and therefore be untagged.
- Dropped the "arm64: annotate user pointers casts detected by sparse" patch (see the discussion to the replies of the v6 of this patchset).
- Added more context to the cover letter.
- Updated Documentation/arm64/tagged-pointers.txt.

Changes in v6:
- Added annotations for user pointer casts found by sparse.
- Rebased onto 050cdc6c (4.19-rc1+).

Changes in v5:
- Added 3 new patches that add untagging to places found with static analysis.
- Rebased onto 44c929e1 (4.18-rc8).

Changes in v4:
- Added a selftest for checking that passing tagged pointers to the kernel succeeds.
- Rebased onto 81e97f013 (4.18-rc1+).

Changes in v3:
- Rebased onto e5c51f30 (4.17-rc6+).
- Added linux-arch@ to the list of recipients.

Changes in v2:
- Rebased onto 2d618bdf (4.17-rc3+).
- Removed excessive untagging in gup.c.
- Removed untagging pointers returned from __uaccess_mask_ptr.

Changes in v1:
- Rebased onto 4.17-rc1.

Changes in RFC v2:
- Added "#ifndef untagged_addr..." fallback in linux/uaccess.h instead of defining it for each arch individually.
- Updated Documentation/arm64/tagged-pointers.txt.
- Dropped "mm, arm64: untag user addresses in memory syscalls".
- Rebased onto 3eb2ce82 (4.16-rc7).
Signed-off-by: Andrey Konovalov andreyknvl@google.com
Andrey Konovalov (13):
  uaccess: add untagged_addr definition for other arches
  arm64: untag user pointers in access_ok and __uaccess_mask_ptr
  lib, arm64: untag user pointers in strn*_user
  mm, arm64: untag user pointers passed to memory syscalls
  mm, arm64: untag user pointers in mm/gup.c
  fs, arm64: untag user pointers in copy_mount_options
  fs, arm64: untag user pointers in fs/userfaultfd.c
  net, arm64: untag user pointers in tcp_zerocopy_receive
  kernel, arm64: untag user pointers in prctl_set_mm*
  tracing, arm64: untag user pointers in seq_print_user_ip
  uprobes, arm64: untag user pointers in find_active_uprobe
  bpf, arm64: untag user pointers in stack_map_get_build_id_offset
  selftests, arm64: add a selftest for passing tagged pointers to kernel
 arch/arm64/include/asm/uaccess.h | 10 +++--
 fs/namespace.c | 2 +-
 fs/userfaultfd.c | 5 +++
 include/linux/mm.h | 4 ++
 ipc/shm.c | 2 +
 kernel/bpf/stackmap.c | 6 ++-
 kernel/events/uprobes.c | 2 +
 kernel/sys.c | 44 +++++++++++++------
 kernel/trace/trace_output.c | 5 ++-
 lib/strncpy_from_user.c | 3 +-
 lib/strnlen_user.c | 3 +-
 mm/gup.c | 4 ++
 mm/madvise.c | 2 +
 mm/mempolicy.c | 5 +++
 mm/migrate.c | 1 +
 mm/mincore.c | 2 +
 mm/mlock.c | 5 +++
 mm/mmap.c | 7 +++
 mm/mprotect.c | 1 +
 mm/mremap.c | 2 +
 mm/msync.c | 2 +
 net/ipv4/tcp.c | 9 +++-
 tools/testing/selftests/arm64/.gitignore | 1 +
 tools/testing/selftests/arm64/Makefile | 11 +++++
 .../testing/selftests/arm64/run_tags_test.sh | 12 +++++
 tools/testing/selftests/arm64/tags_test.c | 21 +++++++++
 26 files changed, 144 insertions(+), 27 deletions(-)
 create mode 100644 tools/testing/selftests/arm64/.gitignore
 create mode 100644 tools/testing/selftests/arm64/Makefile
 create mode 100755 tools/testing/selftests/arm64/run_tags_test.sh
 create mode 100644 tools/testing/selftests/arm64/tags_test.c
To allow arm64 syscalls to accept tagged pointers from userspace, we must untag them when they are passed to the kernel. Since untagging is done in generic parts of the kernel, the untagged_addr macro needs to be defined for all architectures.
Define it as a noop for architectures other than arm64.
Acked-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Andrey Konovalov andreyknvl@google.com --- include/linux/mm.h | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h index 76769749b5a5..4d674518d392 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -99,6 +99,10 @@ extern int mmap_rnd_compat_bits __read_mostly; #include <asm/pgtable.h> #include <asm/processor.h>
+#ifndef untagged_addr +#define untagged_addr(addr) (addr) +#endif + #ifndef __pa_symbol #define __pa_symbol(x) __pa(RELOC_HIDE((unsigned long)(x), 0)) #endif
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
copy_from_user (and a few other similar functions) are used to copy data from user memory into kernel memory and vice versa. Since a user can provide a tagged pointer to one of the syscalls that use copy_from_user, we need to handle such pointers correctly.
Do this by untagging user pointers in access_ok and in __uaccess_mask_ptr, before performing access validity checks.
Note that this patch only temporarily untags the pointers to perform the checks, and then passes them as is into the kernel internals.
Reviewed-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Andrey Konovalov andreyknvl@google.com --- arch/arm64/include/asm/uaccess.h | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index e5d5f31c6d36..9164ecb5feca 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -94,7 +94,7 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si return ret; }
-#define access_ok(addr, size) __range_ok(addr, size) +#define access_ok(addr, size) __range_ok(untagged_addr(addr), size) #define user_addr_max get_fs
#define _ASM_EXTABLE(from, to) \ @@ -226,7 +226,8 @@ static inline void uaccess_enable_not_uao(void)
/* * Sanitise a uaccess pointer such that it becomes NULL if above the - * current addr_limit. + * current addr_limit. In case the pointer is tagged (has the top byte set), + * untag the pointer before checking. */ #define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr) static inline void __user *__uaccess_mask_ptr(const void __user *ptr) @@ -234,10 +235,11 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr) void __user *safe_ptr;
asm volatile( - " bics xzr, %1, %2\n" + " bics xzr, %3, %2\n" " csel %0, %1, xzr, eq\n" : "=&r" (safe_ptr) - : "r" (ptr), "r" (current_thread_info()->addr_limit) + : "r" (ptr), "r" (current_thread_info()->addr_limit), + "r" (untagged_addr(ptr)) : "cc");
csdb();
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
strncpy_from_user and strnlen_user accept user addresses as arguments, and do not go through the same path as copy_from_user and others, so here we need to handle the case of tagged user addresses separately.
Untag user pointers passed to these functions.
Note that this patch only temporarily untags the pointers to perform validity checks, and then uses them as is to perform user memory accesses.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- lib/strncpy_from_user.c | 3 ++- lib/strnlen_user.c | 3 ++- 2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c index 58eacd41526c..6209bb9507c7 100644 --- a/lib/strncpy_from_user.c +++ b/lib/strncpy_from_user.c @@ -6,6 +6,7 @@ #include <linux/uaccess.h> #include <linux/kernel.h> #include <linux/errno.h> +#include <linux/mm.h>
#include <asm/byteorder.h> #include <asm/word-at-a-time.h> @@ -107,7 +108,7 @@ long strncpy_from_user(char *dst, const char __user *src, long count) return 0;
max_addr = user_addr_max(); - src_addr = (unsigned long)src; + src_addr = (unsigned long)untagged_addr(src); if (likely(src_addr < max_addr)) { unsigned long max = max_addr - src_addr; long retval; diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c index 1c1a1b0e38a5..8ca3d2ac32ec 100644 --- a/lib/strnlen_user.c +++ b/lib/strnlen_user.c @@ -2,6 +2,7 @@ #include <linux/kernel.h> #include <linux/export.h> #include <linux/uaccess.h> +#include <linux/mm.h>
#include <asm/word-at-a-time.h>
@@ -109,7 +110,7 @@ long strnlen_user(const char __user *str, long count) return 0;
max_addr = user_addr_max(); - src_addr = (unsigned long)str; + src_addr = (unsigned long)untagged_addr(str); if (likely(src_addr < max_addr)) { unsigned long max = max_addr - src_addr; long retval;
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
This patch allows tagged pointers to be passed to the following memory syscalls: madvise, mbind, get_mempolicy, mincore, mlock, mlock2, brk, mmap_pgoff, old_mmap, munmap, remap_file_pages, mprotect, pkey_mprotect, mremap, msync and shmdt.
This is done by untagging pointers passed to these syscalls in the prologues of their handlers.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- ipc/shm.c | 2 ++ mm/madvise.c | 2 ++ mm/mempolicy.c | 5 +++++ mm/migrate.c | 1 + mm/mincore.c | 2 ++ mm/mlock.c | 5 +++++ mm/mmap.c | 7 +++++++ mm/mprotect.c | 1 + mm/mremap.c | 2 ++ mm/msync.c | 2 ++ 10 files changed, 29 insertions(+)
diff --git a/ipc/shm.c b/ipc/shm.c index ce1ca9f7c6e9..7af8951e6c41 100644 --- a/ipc/shm.c +++ b/ipc/shm.c @@ -1593,6 +1593,7 @@ SYSCALL_DEFINE3(shmat, int, shmid, char __user *, shmaddr, int, shmflg) unsigned long ret; long err;
+ shmaddr = untagged_addr(shmaddr); err = do_shmat(shmid, shmaddr, shmflg, &ret, SHMLBA); if (err) return err; @@ -1732,6 +1733,7 @@ long ksys_shmdt(char __user *shmaddr)
SYSCALL_DEFINE1(shmdt, char __user *, shmaddr) { + shmaddr = untagged_addr(shmaddr); return ksys_shmdt(shmaddr); }
diff --git a/mm/madvise.c b/mm/madvise.c index 21a7881a2db4..64e6d34a7f9b 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -809,6 +809,8 @@ SYSCALL_DEFINE3(madvise, unsigned long, start, size_t, len_in, int, behavior) size_t len; struct blk_plug plug;
+ start = untagged_addr(start); + if (!madvise_behavior_valid(behavior)) return error;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c index af171ccb56a2..31691737c59c 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1334,6 +1334,7 @@ static long kernel_mbind(unsigned long start, unsigned long len, int err; unsigned short mode_flags;
+ start = untagged_addr(start); mode_flags = mode & MPOL_MODE_FLAGS; mode &= ~MPOL_MODE_FLAGS; if (mode >= MPOL_MAX) @@ -1491,6 +1492,8 @@ static int kernel_get_mempolicy(int __user *policy, int uninitialized_var(pval); nodemask_t nodes;
+ addr = untagged_addr(addr); + if (nmask != NULL && maxnode < nr_node_ids) return -EINVAL;
@@ -1576,6 +1579,8 @@ COMPAT_SYSCALL_DEFINE6(mbind, compat_ulong_t, start, compat_ulong_t, len, unsigned long nr_bits, alloc_size; nodemask_t bm;
+ start = untagged_addr(start); + nr_bits = min_t(unsigned long, maxnode-1, MAX_NUMNODES); alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
diff --git a/mm/migrate.c b/mm/migrate.c index ac6f4939bb59..ecc6dcdefb1f 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -1612,6 +1612,7 @@ static int do_pages_move(struct mm_struct *mm, nodemask_t task_nodes, if (get_user(node, nodes + i)) goto out_flush; addr = (unsigned long)p; + addr = untagged_addr(addr);
err = -ENODEV; if (node < 0 || node >= MAX_NUMNODES) diff --git a/mm/mincore.c b/mm/mincore.c index 218099b5ed31..c4a3f4484b6b 100644 --- a/mm/mincore.c +++ b/mm/mincore.c @@ -228,6 +228,8 @@ SYSCALL_DEFINE3(mincore, unsigned long, start, size_t, len, unsigned long pages; unsigned char *tmp;
+ start = untagged_addr(start); + /* Check the start address: needs to be page-aligned.. */ if (start & ~PAGE_MASK) return -EINVAL; diff --git a/mm/mlock.c b/mm/mlock.c index 080f3b36415b..6934ec92bf39 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -715,6 +715,7 @@ static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t fla
SYSCALL_DEFINE2(mlock, unsigned long, start, size_t, len) { + start = untagged_addr(start); return do_mlock(start, len, VM_LOCKED); }
@@ -722,6 +723,8 @@ SYSCALL_DEFINE3(mlock2, unsigned long, start, size_t, len, int, flags) { vm_flags_t vm_flags = VM_LOCKED;
+ start = untagged_addr(start); + if (flags & ~MLOCK_ONFAULT) return -EINVAL;
@@ -735,6 +738,8 @@ SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len) { int ret;
+ start = untagged_addr(start); + len = PAGE_ALIGN(len + (offset_in_page(start))); start &= PAGE_MASK;
diff --git a/mm/mmap.c b/mm/mmap.c index 41eb48d9b527..512c679c7f33 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -199,6 +199,8 @@ SYSCALL_DEFINE1(brk, unsigned long, brk) bool downgraded = false; LIST_HEAD(uf);
+ brk = untagged_addr(brk); + if (down_write_killable(&mm->mmap_sem)) return -EINTR;
@@ -1571,6 +1573,8 @@ unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len, struct file *file = NULL; unsigned long retval;
+ addr = untagged_addr(addr); + if (!(flags & MAP_ANONYMOUS)) { audit_mmap_fd(fd, flags); file = fget(fd); @@ -2867,6 +2871,7 @@ EXPORT_SYMBOL(vm_munmap);
SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len) { + addr = untagged_addr(addr); profile_munmap(addr); return __vm_munmap(addr, len, true); } @@ -2885,6 +2890,8 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size, unsigned long ret = -EINVAL; struct file *file;
+ start = untagged_addr(start); + pr_warn_once("%s (%d) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.rst.\n", current->comm, current->pid);
diff --git a/mm/mprotect.c b/mm/mprotect.c index 028c724dcb1a..3c2b11629f89 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -468,6 +468,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len, if (grows == (PROT_GROWSDOWN|PROT_GROWSUP)) /* can't be both */ return -EINVAL;
+ start = untagged_addr(start); if (start & ~PAGE_MASK) return -EINVAL; if (!len) diff --git a/mm/mremap.c b/mm/mremap.c index e3edef6b7a12..6422aeee65bb 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -605,6 +605,8 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len, LIST_HEAD(uf_unmap_early); LIST_HEAD(uf_unmap);
+ addr = untagged_addr(addr); + if (flags & ~(MREMAP_FIXED | MREMAP_MAYMOVE)) return ret;
diff --git a/mm/msync.c b/mm/msync.c index ef30a429623a..c3bd3e75f687 100644 --- a/mm/msync.c +++ b/mm/msync.c @@ -37,6 +37,8 @@ SYSCALL_DEFINE3(msync, unsigned long, start, size_t, len, int, flags) int unmapped_error = 0; int error = -EINVAL;
+ start = untagged_addr(start); + if (flags & ~(MS_ASYNC | MS_INVALIDATE | MS_SYNC)) goto out; if (offset_in_page(start))
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
mm/gup.c provides a kernel interface that accepts user addresses and manipulates user pages directly (for example get_user_pages, which is used by the futex syscall). Since a user can provide tagged addresses, we need to handle this case.
Add untagging to gup.c functions that use user addresses for vma lookups.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- mm/gup.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/mm/gup.c b/mm/gup.c index f84e22685aaa..3192741e0b3a 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -686,6 +686,8 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm, if (!nr_pages) return 0;
+ start = untagged_addr(start); + VM_BUG_ON(!!pages != !!(gup_flags & FOLL_GET));
/* @@ -848,6 +850,8 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, struct vm_area_struct *vma; vm_fault_t ret, major = 0;
+ address = untagged_addr(address); + if (unlocked) fault_flags |= FAULT_FLAG_ALLOW_RETRY;
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
In copy_mount_options a user address is subtracted from TASK_SIZE. If the address is lower than TASK_SIZE, the size is calculated so that the exact_copy_from_user() call cannot cross the TASK_SIZE boundary. However, if the address is tagged, the size will be calculated incorrectly.
Untag the address before subtracting.
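To make the failure mode concrete, here is a worked example with made-up values (assuming a 48-bit VA configuration, so TASK_SIZE is 0x0001000000000000; the tag 0x5a is arbitrary):

  /*
   * data (tagged)   = 0x5a00fffffffffc00   (0x400 bytes below TASK_SIZE once untagged)
   * data (untagged) = 0x0000fffffffffc00
   *
   * Without untagging:
   *   size = TASK_SIZE - data wraps around to 0xa600000000000400 and is then
   *   clamped to PAGE_SIZE, so the copy may cross TASK_SIZE.
   *
   * With untagging:
   *   size = TASK_SIZE - untagged_addr(data) = 0x400, as intended.
   */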
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- fs/namespace.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/namespace.c b/fs/namespace.c index c9cab307fa77..c27e5713bf04 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -2825,7 +2825,7 @@ void *copy_mount_options(const void __user * data) * the remainder of the page. */ /* copy_from_user cannot cross TASK_SIZE ! */ - size = TASK_SIZE - (unsigned long)data; + size = TASK_SIZE - (unsigned long)untagged_addr(data); if (size > PAGE_SIZE) size = PAGE_SIZE;
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
userfaultfd_register() and userfaultfd_unregister() use provided user pointers for vma lookups, which can only be done with untagged pointers.
Untag user pointers in these functions.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- fs/userfaultfd.c | 5 +++++ 1 file changed, 5 insertions(+)
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 89800fc7dc9d..a3b70e0d9756 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -1320,6 +1320,9 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx, goto out; }
+ uffdio_register.range.start = + untagged_addr(uffdio_register.range.start); + ret = validate_range(mm, uffdio_register.range.start, uffdio_register.range.len); if (ret) @@ -1507,6 +1510,8 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx, if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister))) goto out;
+ uffdio_unregister.start = untagged_addr(uffdio_unregister.start); + ret = validate_range(mm, uffdio_unregister.start, uffdio_unregister.len); if (ret)
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
tcp_zerocopy_receive() uses provided user pointers for vma lookups, which can only be done with untagged pointers.
Untag user pointers in this function.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- net/ipv4/tcp.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 6baa6dc1b13b..e76beb5ff1ff 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1749,7 +1749,7 @@ EXPORT_SYMBOL(tcp_mmap); static int tcp_zerocopy_receive(struct sock *sk, struct tcp_zerocopy_receive *zc) { - unsigned long address = (unsigned long)zc->address; + unsigned long address; const skb_frag_t *frags = NULL; u32 length = 0, seq, offset; struct vm_area_struct *vma; @@ -1758,7 +1758,12 @@ static int tcp_zerocopy_receive(struct sock *sk, int inq; int ret;
- if (address & (PAGE_SIZE - 1) || address != zc->address) + address = (unsigned long)untagged_addr(zc->address); + + /* The second test in this if detects if the u64->unsigned long + * conversion had any truncated bits. + */ + if (address & (PAGE_SIZE - 1) || address != untagged_addr(zc->address)) return -EINVAL;
if (sk->sk_state == TCP_LISTEN)
On Mon, Mar 18, 2019 at 10:18 AM Andrey Konovalov andreyknvl@google.com wrote:
This patch is a part of a series that extends arm64 kernel ABI to allow to pass tagged user pointers (with the top byte set to something else other than 0x00) as syscall arguments.
tcp_zerocopy_receive() uses provided user pointers for vma lookups, which can only by done with untagged pointers.
Untag user pointers in this function.
Signed-off-by: Andrey Konovalov andreyknvl@google.com
net/ipv4/tcp.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 6baa6dc1b13b..e76beb5ff1ff 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1749,7 +1749,7 @@ EXPORT_SYMBOL(tcp_mmap); static int tcp_zerocopy_receive(struct sock *sk, struct tcp_zerocopy_receive *zc) {
unsigned long address = (unsigned long)zc->address;
unsigned long address; const skb_frag_t *frags = NULL; u32 length = 0, seq, offset; struct vm_area_struct *vma;
@@ -1758,7 +1758,12 @@ static int tcp_zerocopy_receive(struct sock *sk, int inq; int ret;
if (address & (PAGE_SIZE - 1) || address != zc->address)
address = (unsigned long)untagged_addr(zc->address);
/* The second test in this if detects if the u64->unsigned long
* conversion had any truncated bits.
*/
if (address & (PAGE_SIZE - 1) || address != untagged_addr(zc->address)) return -EINVAL; if (sk->sk_state == TCP_LISTEN)
This is quite ugly; the comment does not really help, nor does it belong in this patch.
What about using untagged_addr() only once ?
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 6baa6dc1b13b0b94b1da238668b93e167cf444fe..855a1f68c1ea9b0d07a92bd7f5e7c24840a99d3d 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1761,6 +1761,8 @@ static int tcp_zerocopy_receive(struct sock *sk, if (address & (PAGE_SIZE - 1) || address != zc->address) return -EINVAL;
+ address = untagged_addr(address); + if (sk->sk_state == TCP_LISTEN) return -ENOTCONN;
On Mon, Mar 18, 2019 at 6:35 PM Eric Dumazet edumazet@google.com wrote:
On Mon, Mar 18, 2019 at 10:18 AM Andrey Konovalov andreyknvl@google.com wrote:
This patch is a part of a series that extends arm64 kernel ABI to allow to pass tagged user pointers (with the top byte set to something else other than 0x00) as syscall arguments.
tcp_zerocopy_receive() uses provided user pointers for vma lookups, which can only by done with untagged pointers.
Untag user pointers in this function.
Signed-off-by: Andrey Konovalov andreyknvl@google.com
net/ipv4/tcp.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 6baa6dc1b13b..e76beb5ff1ff 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1749,7 +1749,7 @@ EXPORT_SYMBOL(tcp_mmap); static int tcp_zerocopy_receive(struct sock *sk, struct tcp_zerocopy_receive *zc) {
unsigned long address = (unsigned long)zc->address;
unsigned long address; const skb_frag_t *frags = NULL; u32 length = 0, seq, offset; struct vm_area_struct *vma;
@@ -1758,7 +1758,12 @@ static int tcp_zerocopy_receive(struct sock *sk, int inq; int ret;
if (address & (PAGE_SIZE - 1) || address != zc->address)
address = (unsigned long)untagged_addr(zc->address);
/* The second test in this if detects if the u64->unsigned long
* conversion had any truncated bits.
*/
if (address & (PAGE_SIZE - 1) || address != untagged_addr(zc->address)) return -EINVAL; if (sk->sk_state == TCP_LISTEN)
This is quite ugly, the comment does not really help nor belong to this patch.
What about using untagged_addr() only once ?
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 6baa6dc1b13b0b94b1da238668b93e167cf444fe..855a1f68c1ea9b0d07a92bd7f5e7c24840a99d3d 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1761,6 +1761,8 @@ static int tcp_zerocopy_receive(struct sock *sk, if (address & (PAGE_SIZE - 1) || address != zc->address) return -EINVAL;
address = untagged_addr(address);
if (sk->sk_state == TCP_LISTEN) return -ENOTCONN;
Looks good, will do it like this in the next version. Thanks!
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
prctl_set_mm() and prctl_set_mm_map() use provided user pointers for vma lookups and do some pointer comparisons to perform validation, which can only be done with untagged pointers.
Untag user pointers in these functions for vma lookup and validity checks.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- kernel/sys.c | 44 ++++++++++++++++++++++++++++++-------------- 1 file changed, 30 insertions(+), 14 deletions(-)
diff --git a/kernel/sys.c b/kernel/sys.c index 12df0e5434b8..fe26ccf3c9e6 100644 --- a/kernel/sys.c +++ b/kernel/sys.c @@ -1885,11 +1885,12 @@ static int prctl_set_mm_exe_file(struct mm_struct *mm, unsigned int fd) * WARNING: we don't require any capability here so be very careful * in what is allowed for modification from userspace. */ -static int validate_prctl_map(struct prctl_mm_map *prctl_map) +static int validate_prctl_map(struct prctl_mm_map *tagged_prctl_map) { unsigned long mmap_max_addr = TASK_SIZE; struct mm_struct *mm = current->mm; int error = -EINVAL, i; + struct prctl_mm_map prctl_map;
static const unsigned char offsets[] = { offsetof(struct prctl_mm_map, start_code), @@ -1905,12 +1906,25 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map) offsetof(struct prctl_mm_map, env_end), };
+ memcpy(&prctl_map, tagged_prctl_map, sizeof(prctl_map)); + prctl_map.start_code = untagged_addr(prctl_map.start_code); + prctl_map.end_code = untagged_addr(prctl_map.end_code); + prctl_map.start_data = untagged_addr(prctl_map.start_data); + prctl_map.end_data = untagged_addr(prctl_map.end_data); + prctl_map.start_brk = untagged_addr(prctl_map.start_brk); + prctl_map.brk = untagged_addr(prctl_map.brk); + prctl_map.start_stack = untagged_addr(prctl_map.start_stack); + prctl_map.arg_start = untagged_addr(prctl_map.arg_start); + prctl_map.arg_end = untagged_addr(prctl_map.arg_end); + prctl_map.env_start = untagged_addr(prctl_map.env_start); + prctl_map.env_end = untagged_addr(prctl_map.env_end); + /* * Make sure the members are not somewhere outside * of allowed address space. */ for (i = 0; i < ARRAY_SIZE(offsets); i++) { - u64 val = *(u64 *)((char *)prctl_map + offsets[i]); + u64 val = *(u64 *)((char *)&prctl_map + offsets[i]);
if ((unsigned long)val >= mmap_max_addr || (unsigned long)val < mmap_min_addr) @@ -1921,8 +1935,8 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map) * Make sure the pairs are ordered. */ #define __prctl_check_order(__m1, __op, __m2) \ - ((unsigned long)prctl_map->__m1 __op \ - (unsigned long)prctl_map->__m2) ? 0 : -EINVAL + ((unsigned long)prctl_map.__m1 __op \ + (unsigned long)prctl_map.__m2) ? 0 : -EINVAL error = __prctl_check_order(start_code, <, end_code); error |= __prctl_check_order(start_data, <, end_data); error |= __prctl_check_order(start_brk, <=, brk); @@ -1937,23 +1951,24 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map) /* * @brk should be after @end_data in traditional maps. */ - if (prctl_map->start_brk <= prctl_map->end_data || - prctl_map->brk <= prctl_map->end_data) + if (prctl_map.start_brk <= prctl_map.end_data || + prctl_map.brk <= prctl_map.end_data) goto out;
/* * Neither we should allow to override limits if they set. */ - if (check_data_rlimit(rlimit(RLIMIT_DATA), prctl_map->brk, - prctl_map->start_brk, prctl_map->end_data, - prctl_map->start_data)) + if (check_data_rlimit(rlimit(RLIMIT_DATA), prctl_map.brk, + prctl_map.start_brk, prctl_map.end_data, + prctl_map.start_data)) goto out;
/* * Someone is trying to cheat the auxv vector. */ - if (prctl_map->auxv_size) { - if (!prctl_map->auxv || prctl_map->auxv_size > sizeof(mm->saved_auxv)) + if (prctl_map.auxv_size) { + if (!prctl_map.auxv || prctl_map.auxv_size > + sizeof(mm->saved_auxv)) goto out; }
@@ -1962,7 +1977,7 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map) * change /proc/pid/exe link: only local sys admin should * be allowed to. */ - if (prctl_map->exe_fd != (u32)-1) { + if (prctl_map.exe_fd != (u32)-1) { if (!ns_capable(current_user_ns(), CAP_SYS_ADMIN)) goto out; } @@ -2120,13 +2135,14 @@ static int prctl_set_mm(int opt, unsigned long addr, if (opt == PR_SET_MM_AUXV) return prctl_set_auxv(mm, addr, arg4);
- if (addr >= TASK_SIZE || addr < mmap_min_addr) + if (untagged_addr(addr) >= TASK_SIZE || + untagged_addr(addr) < mmap_min_addr) return -EINVAL;
error = -EINVAL;
down_write(&mm->mmap_sem); - vma = find_vma(mm, addr); + vma = find_vma(mm, untagged_addr(addr));
prctl_map.start_code = mm->start_code; prctl_map.end_code = mm->end_code;
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
seq_print_user_ip() uses provided user pointers for vma lookups, which can only be done with untagged pointers.
Untag user pointers in this function.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- kernel/trace/trace_output.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c index 54373d93e251..6376bee93c84 100644 --- a/kernel/trace/trace_output.c +++ b/kernel/trace/trace_output.c @@ -370,6 +370,7 @@ static int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm, { struct file *file = NULL; unsigned long vmstart = 0; + unsigned long untagged_ip = untagged_addr(ip); int ret = 1;
if (s->full) @@ -379,7 +380,7 @@ static int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm, const struct vm_area_struct *vma;
down_read(&mm->mmap_sem); - vma = find_vma(mm, ip); + vma = find_vma(mm, untagged_ip); if (vma) { file = vma->vm_file; vmstart = vma->vm_start; @@ -388,7 +389,7 @@ static int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm, ret = trace_seq_path(s, &file->f_path); if (ret) trace_seq_printf(s, "[+0x%lx]", - ip - vmstart); + untagged_ip - vmstart); } up_read(&mm->mmap_sem); }
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
find_active_uprobe() uses the provided user pointer (obtained via instruction_pointer(regs)) for vma lookups, which can only be done with untagged pointers.
Untag the user pointer in this function.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- kernel/events/uprobes.c | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index c5cde87329c7..d3a2716a813a 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -1992,6 +1992,8 @@ static struct uprobe *find_active_uprobe(unsigned long bp_vaddr, int *is_swbp) struct uprobe *uprobe = NULL; struct vm_area_struct *vma;
+ bp_vaddr = untagged_addr(bp_vaddr); + down_read(&mm->mmap_sem); vma = find_vma(mm, bp_vaddr); if (vma && vma->vm_start <= bp_vaddr) {
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
stack_map_get_build_id_offset() uses provided user pointers for vma lookups, which can only be done with untagged pointers.
Untag the user pointer in this function when doing the lookup and calculating the offset, but save it as is into the bpf_stack_build_id struct.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- kernel/bpf/stackmap.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c index 950ab2f28922..bb89341d3faf 100644 --- a/kernel/bpf/stackmap.c +++ b/kernel/bpf/stackmap.c @@ -320,7 +320,9 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs, }
for (i = 0; i < trace_nr; i++) { - vma = find_vma(current->mm, ips[i]); + u64 untagged_ip = untagged_addr(ips[i]); + + vma = find_vma(current->mm, untagged_ip); if (!vma || stack_map_get_build_id(vma, id_offs[i].build_id)) { /* per entry fall back to ips */ id_offs[i].status = BPF_STACK_BUILD_ID_IP; @@ -328,7 +330,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs, memset(id_offs[i].build_id, 0, BPF_BUILD_ID_SIZE); continue; } - id_offs[i].offset = (vma->vm_pgoff << PAGE_SHIFT) + ips[i] + id_offs[i].offset = (vma->vm_pgoff << PAGE_SHIFT) + untagged_ip - vma->vm_start; id_offs[i].status = BPF_STACK_BUILD_ID_VALID; }
This patch is a part of a series that extends the arm64 kernel ABI to allow tagged user pointers (with the top byte set to something other than 0x00) to be passed as syscall arguments.
This patch adds a simple test that calls the uname syscall with a tagged user pointer as an argument. Without the kernel accepting tagged user pointers, the test fails with EFAULT.
Signed-off-by: Andrey Konovalov andreyknvl@google.com --- tools/testing/selftests/arm64/.gitignore | 1 + tools/testing/selftests/arm64/Makefile | 11 ++++++++++ .../testing/selftests/arm64/run_tags_test.sh | 12 +++++++++++ tools/testing/selftests/arm64/tags_test.c | 21 +++++++++++++++++++ 4 files changed, 45 insertions(+) create mode 100644 tools/testing/selftests/arm64/.gitignore create mode 100644 tools/testing/selftests/arm64/Makefile create mode 100755 tools/testing/selftests/arm64/run_tags_test.sh create mode 100644 tools/testing/selftests/arm64/tags_test.c
diff --git a/tools/testing/selftests/arm64/.gitignore b/tools/testing/selftests/arm64/.gitignore new file mode 100644 index 000000000000..e8fae8d61ed6 --- /dev/null +++ b/tools/testing/selftests/arm64/.gitignore @@ -0,0 +1 @@ +tags_test diff --git a/tools/testing/selftests/arm64/Makefile b/tools/testing/selftests/arm64/Makefile new file mode 100644 index 000000000000..a61b2e743e99 --- /dev/null +++ b/tools/testing/selftests/arm64/Makefile @@ -0,0 +1,11 @@ +# SPDX-License-Identifier: GPL-2.0 + +# ARCH can be overridden by the user for cross compiling +ARCH ?= $(shell uname -m 2>/dev/null || echo not) + +ifneq (,$(filter $(ARCH),aarch64 arm64)) +TEST_GEN_PROGS := tags_test +TEST_PROGS := run_tags_test.sh +endif + +include ../lib.mk diff --git a/tools/testing/selftests/arm64/run_tags_test.sh b/tools/testing/selftests/arm64/run_tags_test.sh new file mode 100755 index 000000000000..745f11379930 --- /dev/null +++ b/tools/testing/selftests/arm64/run_tags_test.sh @@ -0,0 +1,12 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 + +echo "--------------------" +echo "running tags test" +echo "--------------------" +./tags_test +if [ $? -ne 0 ]; then + echo "[FAIL]" +else + echo "[PASS]" +fi diff --git a/tools/testing/selftests/arm64/tags_test.c b/tools/testing/selftests/arm64/tags_test.c new file mode 100644 index 000000000000..2bd1830a7ebe --- /dev/null +++ b/tools/testing/selftests/arm64/tags_test.c @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include <stdio.h> +#include <stdlib.h> +#include <unistd.h> +#include <stdint.h> +#include <sys/utsname.h> + +#define SHIFT_TAG(tag) ((uint64_t)(tag) << 56) +#define SET_TAG(ptr, tag) (((uint64_t)(ptr) & ~SHIFT_TAG(0xff)) | \ + SHIFT_TAG(tag)) + +int main(void) +{ + struct utsname *ptr = (struct utsname *)malloc(sizeof(*ptr)); + void *tagged_ptr = (void *)SET_TAG(ptr, 0x42); + int err = uname(tagged_ptr); + + free(ptr); + return err; +}
On Mon, 18 Mar 2019 18:17:32 +0100 Andrey Konovalov andreyknvl@google.com wrote:
=== Notes
This patchset is meant to be merged together with "arm64 relaxed ABI" [3].
What does this mean, precisely? That neither series is useful without the other? That either patchset will break things without the other?
Only a small fraction of these patches carry evidence of having been reviewed. Fixable?
Which maintainer tree would be appropriate for carrying these patches?
On Tue, Mar 19, 2019 at 11:32:12AM -0700, Andrew Morton wrote:
On Mon, 18 Mar 2019 18:17:32 +0100 Andrey Konovalov andreyknvl@google.com wrote:
=== Notes
This patchset is meant to be merged together with "arm64 relaxed ABI" [3].
What does this mean, precisely? That neither series is useful without the other? That either patchset will break things without the other?
This series does the work of relaxing the ABI w.r.t. pointer syscall arguments for arm64 (while preserving backwards compatibility, we can't break this). Vincenzo's patches [1] document the ABI relaxation and introduce an AT_FLAG bit by which user space can check for the presence of such support. So I'd say [1] goes on top of this series.
Once we agree on the ABI definition, they should be posted as a single series.
Only a small fraction of these patches carry evidence of having been reviewed. Fixable?
That's fixable, though the discussions go back to last summer, mostly at a higher level: are we sure these are the only places that need patching? The outcome of those discussions was a document clarifying which pointers a user can tag and pass to the kernel, based on the origin of the memory range (e.g. anonymous mmap()).
I'd very much like to get input from the SPARC ADI guys on these series (cc'ing Khalid). While currently for arm64 that's just a software feature (the hardware one, MTE - memory tagging extensions, is coming later), ADI has similar requirements regarding the user ABI. AFAICT from the SPARC example code, the user is not allowed to pass tagged pointers (non-zero top byte) into the kernel. Feedback from the Google hwasan guys is that such an approach is not practical for a generic deployment of this feature (e.g. automatic tagging of heap allocations).
Which maintainer tree would be appropriate for carrying these patches?
Given that the arm64 changes are fairly minimal, the -mm tree works for me (once I reviewed/acked the patches and, ideally, get the SPARC people onboard with such approach).
[1] https://lkml.org/lkml/2019/3/18/819