The kernel has recently added support for shadow stacks, currently x86 only using its CET feature, but both arm64 and RISC-V have equivalent features (GCS and Zicfiss respectively); I am actively working on GCS[1]. With shadow stacks the hardware maintains an additional stack containing only the return addresses for branch instructions. This stack is not generally writeable by userspace, and the hardware ensures that any return goes to the recorded address. This provides some protection against ROP attacks and makes it easier to collect call stacks. These shadow stacks are allocated in the address space of the userspace process.
Our API for shadow stacks does not currently offer userspace any flexibility for managing the allocation of shadow stacks for newly created threads; instead the kernel allocates a new shadow stack with the same size as the normal stack whenever a thread is created with the feature enabled. The stacks allocated in this way are freed by the kernel when the thread exits or when shadow stacks are disabled for the thread. This lack of flexibility and control isn't ideal: in the vast majority of cases the shadow stack will be over-allocated, and the implicit allocation and deallocation is not consistent with other interfaces. As far as I can tell the interface is done in this manner mainly because the shadow stack patches were in development since before clone3() was implemented.
Since clone3() is readily extensible, let's add support for specifying a shadow stack when creating a new thread or process, in a similar manner to how the normal stack is specified, keeping the current implicit allocation behaviour if one is not specified either with clone3() or through the use of clone(). The user must provide a shadow stack address and size; this must point to memory mapped for use as a shadow stack by map_shadow_stack() with a shadow stack token at the top of the stack.
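For illustration (not part of the series), a rough sketch of the intended userspace flow. The structure layout mirrors the uapi change in the patches below, the syscall numbers and the SHADOW_STACK_SET_TOKEN flag are the existing x86-64 ones, the clone_args_v3 name is local to the example, and error handling is minimal:

#define _GNU_SOURCE
#include <linux/types.h>
#include <sched.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#ifndef __NR_clone3
#define __NR_clone3 435
#endif
#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453
#endif
#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN (1ULL << 0)
#endif

/* Local copy of struct clone_args as extended by this series (VER3, 104 bytes). */
struct clone_args_v3 {
	__aligned_u64 flags;
	__aligned_u64 pidfd;
	__aligned_u64 child_tid;
	__aligned_u64 parent_tid;
	__aligned_u64 exit_signal;
	__aligned_u64 stack;
	__aligned_u64 stack_size;
	__aligned_u64 tls;
	__aligned_u64 set_tid;
	__aligned_u64 set_tid_size;
	__aligned_u64 cgroup;
	__aligned_u64 shadow_stack;
	__aligned_u64 shadow_stack_size;
};

static pid_t create_with_shadow_stack(void)
{
	size_t shstk_size = getpagesize();
	/* Map shadow stack memory with a token written at the top. */
	long shstk = syscall(__NR_map_shadow_stack, 0, shstk_size,
			     SHADOW_STACK_SET_TOKEN);
	struct clone_args_v3 args = {
		.flags             = CLONE_VM,
		.shadow_stack      = shstk,
		.shadow_stack_size = shstk_size,
	};
	pid_t pid;

	if ((void *)shstk == MAP_FAILED)
		return -1;

	pid = syscall(__NR_clone3, &args, sizeof(args));
	if (pid == 0)
		_exit(0);	/* child: exit immediately, as the selftests do */

	return pid;
}

The child here exits immediately, as in the selftests below; a real threading library would branch to the thread function and would size the shadow stack for the expected call depth rather than a single page.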
Please note that the x86 portions of this code are build tested only; I don't appear to have a system that can run CET available to me. I have done testing with an integration into my pending work for GCS. There is some possibility that the arm64 implementation may require the use of clone3() and explicit userspace allocation of shadow stacks; this is still under discussion.
Please further note that the token consumption done by clone3() is not currently implemented in an atomic fashion; Rick indicated that he would look into fixing this if people are OK with the implementation.
A new architecture feature Kconfig option for shadow stacks is added here; this was suggested as part of the review comments for the arm64 GCS series, and since we need to detect if shadow stacks are supported it seemed sensible to roll it in here.
[1] https://lore.kernel.org/r/20231009-arm64-gcs-v6-0-78e55deaa4dd@kernel.org/
Signed-off-by: Mark Brown <broonie@kernel.org>
---
Changes in v5:
- Rebase onto v6.8-rc2.
- Rework ABI to have the user allocate the shadow stack memory with map_shadow_stack() and a token.
- Force inlining of the x86 shadow stack enablement.
- Move shadow stack enablement out into a shared header for reuse by other tests.
- Link to v4: https://lore.kernel.org/r/20231128-clone3-shadow-stack-v4-0-8b28ffe4f676@ker...
Changes in v4:
- Formatting changes.
- Use a define for minimum shadow stack size and move some basic validation to fork.c.
- Link to v3: https://lore.kernel.org/r/20231120-clone3-shadow-stack-v3-0-a7b8ed3e2acc@ker...
Changes in v3:
- Rebase onto v6.7-rc2.
- Remove stale shadow_stack in internal kargs.
- If a shadow stack is specified unconditionally use it regardless of CLONE_ parameters.
- Force enable shadow stacks in the selftest.
- Update changelogs for RISC-V feature rename.
- Link to v2: https://lore.kernel.org/r/20231114-clone3-shadow-stack-v2-0-b613f8681155@ker...
Changes in v2:
- Rebase onto v6.7-rc1.
- Remove ability to provide preallocated shadow stack, just specify the desired size.
- Link to v1: https://lore.kernel.org/r/20231023-clone3-shadow-stack-v1-0-d867d0b5d4d0@ker...
---
Mark Brown (7):
      Documentation: userspace-api: Add shadow stack API documentation
      selftests: Provide helper header for shadow stack testing
      mm: Introduce ARCH_HAS_USER_SHADOW_STACK
      fork: Add shadow stack support to clone3()
      selftests/clone3: Factor more of main loop into test_clone3()
      selftests/clone3: Allow tests to flag if -E2BIG is a valid error code
      selftests/clone3: Test shadow stack support
 Documentation/userspace-api/index.rst             |   1 +
 Documentation/userspace-api/shadow_stack.rst      |  41 +++++
 arch/x86/Kconfig                                   |   1 +
 arch/x86/include/asm/shstk.h                       |  11 +-
 arch/x86/kernel/process.c                          |   2 +-
 arch/x86/kernel/shstk.c                            |  91 +++++++---
 fs/proc/task_mmu.c                                 |   2 +-
 include/linux/mm.h                                 |   2 +-
 include/linux/sched/task.h                         |   2 +
 include/uapi/linux/sched.h                         |  13 +-
 kernel/fork.c                                      |  61 +++++--
 tools/testing/selftests/clone3/clone3.c            | 211 ++++++++++++++++++----
 tools/testing/selftests/clone3/clone3_selftests.h  |   8 +
 tools/testing/selftests/ksft_shstk.h               |  63 +++++++
 mm/Kconfig                                         |   6 +
 15 files changed, 430 insertions(+), 85 deletions(-)
---
base-commit: 41bccc98fb7931d63d03f326a746ac4d429c1dd3
change-id: 20231019-clone3-shadow-stack-15d40d2bf536
Best regards,
There are a number of architectures with shadow stack features which we are presenting to userspace with as consistent an API as we can (though there are some architecture specifics). Especially given that there are some important considerations for userspace code interacting directly with the feature let's provide some documentation covering the common aspects.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 Documentation/userspace-api/index.rst        |  1 +
 Documentation/userspace-api/shadow_stack.rst | 41 ++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)
diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
index 09f61bd2ac2e..c142183d9c98 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -27,6 +27,7 @@ place where this information is gathered.
    iommufd
    media/index
    netlink/index
+   shadow_stack
    sysfs-platform_profile
    vduse
    futex2
diff --git a/Documentation/userspace-api/shadow_stack.rst b/Documentation/userspace-api/shadow_stack.rst
new file mode 100644
index 000000000000..c6e5ab795b60
--- /dev/null
+++ b/Documentation/userspace-api/shadow_stack.rst
@@ -0,0 +1,41 @@
+=============
+Shadow Stacks
+=============
+
+Introduction
+============
+
+Several architectures have features which provide backward edge
+control flow protection through a hardware maintained stack, only
+writeable by userspace through very limited operations.  This feature
+is referred to as shadow stacks on Linux, on x86 it is part of Intel
+Control Enforcement Technology (CET), on arm64 it is Guarded Control
+Stacks feature (FEAT_GCS) and for RISC-V it is the Zicfiss extension.
+It is expected that this feature will normally be managed by the
+system dynamic linker and libc in ways broadly transparent to
+application code, this document covers interfaces and considerations
+
+
+Enabling
+========
+
+Shadow stacks default to disabled when a userspace process is
+executed, they can be enabled for the current thread with a syscall:
+
+ - For x86 the ARCH_SHSTK_ENABLE arch_prctl()
+
+It is expected that this will normally be done by the dynamic linker.
+Any new threads created by a thread with shadow stacks enabled will
+themsleves have shadow stacks enabled.
+
+
+Enablement considerations
+=========================
+
+- Returning from the function that enables shadow stacks without first
+  disabling them will cause a shadow stack exception.  This includes
+  any syscall wrapper or other library functions, the syscall will need
+  to be inlined.
+- A lock feature allows userspace to prevent disabling of shadow stacks.
+- This that change the stack context like longjmp() or use of ucontext
+  changes on signal return will need support from libc.
Hi,
On 2/2/24 16:04, Mark Brown wrote:
There are a number of architectures with shadow stack features which we are presenting to userspace with as consistent an API as we can (though there are some architecture specifics). Especially given that there are some important considerations for userspace code interacting directly with the feature let's provide some documentation covering the common aspects.
Signed-off-by: Mark Brown <broonie@kernel.org>
Documentation/userspace-api/index.rst | 1 + Documentation/userspace-api/shadow_stack.rst | 41 ++++++++++++++++++++++++++++ 2 files changed, 42 insertions(+)
diff --git a/Documentation/userspace-api/shadow_stack.rst b/Documentation/userspace-api/shadow_stack.rst
new file mode 100644
index 000000000000..c6e5ab795b60
--- /dev/null
+++ b/Documentation/userspace-api/shadow_stack.rst
@@ -0,0 +1,41 @@
+=============
+Shadow Stacks
+=============
+Introduction +============
+Several architectures have features which provide backward edge
+control flow protection through a hardware maintained stack, only
+writeable by userspace through very limited operations.  This feature
+is referred to as shadow stacks on Linux, on x86 it is part of Intel
on Linux. On x86
+Control Enforcement Technology (CET), on arm64 it is Guarded Control
+Stacks feature (FEAT_GCS) and for RISC-V it is the Zicfiss extension.
+It is expected that this feature will normally be managed by the
+system dynamic linker and libc in ways broadly transparent to
+application code, this document covers interfaces and considerations
code. This considerations.
+Enabling +========
+Shadow stacks default to disabled when a userspace process is
+executed, they can be enabled for the current thread with a syscall:
executed. They
- For x86 the ARCH_SHSTK_ENABLE arch_prctl()
+It is expected that this will normally be done by the dynamic linker.
+Any new threads created by a thread with shadow stacks enabled will
+themsleves have shadow stacks enabled.
themselves
+Enablement considerations +=========================
+- Returning from the function that enables shadow stacks without first
- disabling them will cause a shadow stack exception. This includes
- any syscall wrapper or other library functions, the syscall will need
functions. The
- to be inlined.
+- A lock feature allows userspace to prevent disabling of shadow stacks.
+- This that change the stack context like longjmp() or use of ucontext
Those ?
- changes on signal return will need support from libc.
On Fri, Feb 2, 2024 at 4:05 PM Mark Brown <broonie@kernel.org> wrote:
There are a number of architectures with shadow stack features which we are presenting to userspace with as consistent an API as we can (though there are some architecture specifics). Especially given that there are some important considerations for userspace code interacting directly with the feature let's provide some documentation covering the common aspects.
Signed-off-by: Mark Brown <broonie@kernel.org>
For enabling considerations, it will be good to have some words on ss tokens too, probably along the following lines.
- Shadow stack tokens: During shadow stack switches (either by user mode or the kernel), to prevent inadvertent shadow stack pivoting, a token in a predefined format is expected to be saved during the shadow stack save operation and validated during the shadow stack restore operation.
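A rough illustration (not part of the posted patches) of what such a token check looks like on x86, modelled on the shstk_consume_token() helper added later in this series; the exact token format is architecture specific and the helper name here is just for the example:

#include <stdbool.h>
#include <stdint.h>

#define SS_FRAME_SIZE 8		/* one 64-bit shadow stack entry */

static bool shstk_token_valid(uint64_t token_addr, uint64_t token_val)
{
	/*
	 * The x86 token records the shadow stack pointer it protects
	 * (the slot below the token) with bit 0 set to mark it as a
	 * 64-bit token; a restore/consume operation checks for this
	 * exact value before switching to the stack.
	 */
	uint64_t expected = (token_addr - SS_FRAME_SIZE) | 1;

	return token_val == expected;
}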
-- 2.30.2
While almost all users of shadow stacks should rely on the dynamic linker and libc to enable the feature, there are several low level test programs where it is useful to enable it without any libc support, allowing testing without full system enablement. This low level testing is helpful during bringup of the support itself, and also in enabling coverage by automated testing without needing all system components in the target root filesystems to have enablement.
Provide a header with helpers for this purpose, intended for use only by test programs directly exercising shadow stack interfaces.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 tools/testing/selftests/ksft_shstk.h | 63 ++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)
diff --git a/tools/testing/selftests/ksft_shstk.h b/tools/testing/selftests/ksft_shstk.h
new file mode 100644
index 000000000000..85d0747c1802
--- /dev/null
+++ b/tools/testing/selftests/ksft_shstk.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Helpers for shadow stack enablement, this is intended to only be
+ * used by low level test programs directly exercising interfaces for
+ * working with shadow stacks.
+ *
+ * Copyright (C) 2024 ARM Ltd.
+ */
+
+#ifndef __KSFT_SHSTK_H
+#define __KSFT_SHSTK_H
+
+#include <asm/mman.h>
+
+/* This is currently only defined for x86 */
+#ifndef SHADOW_STACK_SET_TOKEN
+#define SHADOW_STACK_SET_TOKEN (1ULL << 0)
+#endif
+
+static bool shadow_stack_enabled;
+
+#ifdef __x86_64__
+#define ARCH_SHSTK_ENABLE	0x5001
+#define ARCH_SHSTK_SHSTK	(1ULL << 0)
+
+#define ARCH_PRCTL(arg1, arg2)					\
+({								\
+	long _ret;						\
+	register long _num  asm("eax") = __NR_arch_prctl;	\
+	register long _arg1 asm("rdi") = (long)(arg1);		\
+	register long _arg2 asm("rsi") = (long)(arg2);		\
+								\
+	asm volatile (						\
+		"syscall\n"					\
+		: "=a"(_ret)					\
+		: "r"(_arg1), "r"(_arg2),			\
+		  "0"(_num)					\
+		: "rcx", "r11", "memory", "cc"			\
+	);							\
+	_ret;							\
+})
+
+#define ENABLE_SHADOW_STACK
+static inline __attribute__((always_inline)) void enable_shadow_stack(void)
+{
+	int ret = ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK);
+	if (ret == 0)
+		shadow_stack_enabled = true;
+}
+
+#endif
+
+#ifndef __NR_map_shadow_stack
+#define __NR_map_shadow_stack 453
+#endif
+
+#ifndef ENABLE_SHADOW_STACK
+static inline void enable_shadow_stack(void) { }
+#endif
+
+#endif
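As a usage sketch (not from the patch), a minimal standalone test built against this header might look like the following. <stdbool.h> is included for the header's bool and the program exits via exit() rather than returning from main(), since main()'s return address is not on the newly enabled shadow stack:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#include "ksft_shstk.h"

int main(void)
{
	unsigned long page;

	/* No-op on architectures where the header has no support. */
	enable_shadow_stack();

	/* Map a single shadow stack page to confirm kernel support. */
	page = syscall(__NR_map_shadow_stack, 0, getpagesize(), 0);
	printf("enabled: %d map_shadow_stack(): %#lx\n",
	       shadow_stack_enabled, page);

	/*
	 * Exit without returning from main(): main() was called
	 * before enablement so its return address was never recorded
	 * on the shadow stack.
	 */
	exit(0);
}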
On Sat, 2024-02-03 at 00:04 +0000, Mark Brown wrote:
While almost all users of shadow stacks should be relying on the dynamic linker and libc to enable the feature there are several low level test programs where it is useful to enable without any libc support, allowing testing without full system enablement. This low level testing is helpful during bringup of the support itself, and also in enabling coverage by automated testing without needing all system components in the target root filesystems to have enablement.
Provide a header with helpers for this purpose, intended for use only by test programs directly exercising shadow stack interfaces.
Thanks.
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Since multiple architectures have support for shadow stacks and we need to select support for this feature in several places in the generic code, provide a generic config option that the architectures can select.
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/x86/Kconfig   | 1 +
 fs/proc/task_mmu.c | 2 +-
 include/linux/mm.h | 2 +-
 mm/Kconfig         | 6 ++++++
 4 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5edec175b9bf..34553911d07d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1952,6 +1952,7 @@ config X86_USER_SHADOW_STACK
 	depends on AS_WRUSS
 	depends on X86_64
 	select ARCH_USES_HIGH_VMA_FLAGS
+	select ARCH_HAS_USER_SHADOW_STACK
 	select X86_CET
 	help
 	  Shadow stack protection is a hardware feature that detects function
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3f78ebbb795f..ff2c601f7d1c 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -700,7 +700,7 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
 		[ilog2(VM_UFFD_MINOR)]	= "ui",
 #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
-#ifdef CONFIG_X86_USER_SHADOW_STACK
+#ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK
 		[ilog2(VM_SHADOW_STACK)] = "ss",
 #endif
 	};
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..c0a782eda803 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -341,7 +341,7 @@ extern unsigned int kobjsize(const void *objp);
 #endif
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 
-#ifdef CONFIG_X86_USER_SHADOW_STACK
+#ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK
 /*
  * VM_SHADOW_STACK should not be set with VM_SHARED because of lack of
  * support core mm.
diff --git a/mm/Kconfig b/mm/Kconfig
index ffc3a2ba3a8c..9119e016777a 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1261,6 +1261,12 @@ config LOCK_MM_AND_FIND_VMA
 config IOMMU_MM_DATA
 	bool
 
+config ARCH_HAS_USER_SHADOW_STACK
+	bool
+	help
+	  The architecture has hardware support for userspace shadow call
+	  stacks (eg, x86 CET, arm64 GCS or RISC-V Zicfiss).
+
 source "mm/damon/Kconfig"
 
 endmenu
On Sat, 2024-02-03 at 00:04 +0000, Mark Brown wrote:
Since multiple architectures have support for shadow stacks and we need to select support for this feature in several places in the generic code provide a generic config option that the architectures can select.
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
On Fri, Feb 2, 2024 at 4:05 PM Mark Brown <broonie@kernel.org> wrote:
Since multiple architectures have support for shadow stacks and we need to select support for this feature in several places in the generic code provide a generic config option that the architectures can select.
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Deepak Gupta <debug@rivosinc.com>
Unlike with the normal stack there is no API for configuring the shadow stack for a new thread; instead the kernel will dynamically allocate a new shadow stack with the same size as the normal stack. This appears to be due to the shadow stack series having been in development since before the more extensible clone3() was added, rather than anything more deliberate.
Add parameters to clone3() specifying the address and size of a shadow stack for the newly created process. If no shadow stack is specified then the existing implicit allocation behaviour is maintained.
If the architecture does not support shadow stacks the shadow stack parameters must be zero; architectures that do support the feature are expected to enforce the same requirement on individual systems that lack shadow stack support.
Update the existing x86 implementation to pay attention to the newly added arguments; in order to maintain compatibility we use the existing behaviour if no shadow stack is specified. Minimal validation is done of the supplied parameters, with detailed enforcement left to when the thread is executed. Since we are now using more fields from the kernel_clone_args we pass that into the shadow stack code rather than individual fields.
At present this implementation does not consume the shadow stack token atomically, as would be desirable.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/x86/include/asm/shstk.h | 11 ++++--
 arch/x86/kernel/process.c    |  2 +-
 arch/x86/kernel/shstk.c      | 91 +++++++---
 include/linux/sched/task.h   |  2 +
 include/uapi/linux/sched.h   | 13 +++++--
 kernel/fork.c                | 61 +++++--
 6 files changed, 137 insertions(+), 43 deletions(-)
diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h
index 42fee8959df7..8be7b0a909c3 100644
--- a/arch/x86/include/asm/shstk.h
+++ b/arch/x86/include/asm/shstk.h
@@ -6,6 +6,7 @@
 #include <linux/types.h>
 
 struct task_struct;
+struct kernel_clone_args;
 struct ksignal;
 
 #ifdef CONFIG_X86_USER_SHADOW_STACK
@@ -16,8 +17,8 @@ struct thread_shstk {
 
 long shstk_prctl(struct task_struct *task, int option, unsigned long arg2);
 void reset_thread_features(void);
-unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags,
-				       unsigned long stack_size);
+unsigned long shstk_alloc_thread_stack(struct task_struct *p,
+				       const struct kernel_clone_args *args);
 void shstk_free(struct task_struct *p);
 int setup_signal_shadow_stack(struct ksignal *ksig);
 int restore_signal_shadow_stack(void);
@@ -26,8 +27,10 @@ static inline long shstk_prctl(struct task_struct *task, int option,
 			       unsigned long arg2) { return -EINVAL; }
 static inline void reset_thread_features(void) {}
 static inline unsigned long shstk_alloc_thread_stack(struct task_struct *p,
-						     unsigned long clone_flags,
-						     unsigned long stack_size) { return 0; }
+						     const struct kernel_clone_args *args)
+{
+	return 0;
+}
 static inline void shstk_free(struct task_struct *p) {}
 static inline int setup_signal_shadow_stack(struct ksignal *ksig) { return 0; }
 static inline int restore_signal_shadow_stack(void) { return 0; }
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index ab49ade31b0d..d2bfcd44de05 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -207,7 +207,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
 	 * is disabled, new_ssp will remain 0, and fpu_clone() will know not to
 	 * update it.
 	 */
-	new_ssp = shstk_alloc_thread_stack(p, clone_flags, args->stack_size);
+	new_ssp = shstk_alloc_thread_stack(p, args);
 	if (IS_ERR_VALUE(new_ssp))
 		return PTR_ERR((void *)new_ssp);
 
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 59e15dd8d0f8..24d0e9b825da 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -191,44 +191,89 @@ void reset_thread_features(void)
 	current->thread.features_locked = 0;
 }
 
-unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
-				       unsigned long stack_size)
+static bool shstk_consume_token(struct task_struct *tsk,
+				unsigned long addr)
+{
+	/*
+	 * SSP is aligned, so reserved bits and mode bit are a zero, just mark
+	 * the token 64-bit.
+	 */
+	u64 expected = (addr - SS_FRAME_SIZE) | BIT(0);
+	u64 val;
+
+	/* This should really be an atomic cpmxchg. It is not. */
+	__get_user(val, (__user u64 *)addr);
+	if (val != expected)
+		return false;
+
+	if (write_user_shstk_64((u64 __user *)addr, 0))
+		return false;
+
+	return true;
+}
+
+unsigned long shstk_alloc_thread_stack(struct task_struct *tsk,
+				       const struct kernel_clone_args *args)
 {
 	struct thread_shstk *shstk = &tsk->thread.shstk;
+	unsigned long clone_flags = args->flags;
 	unsigned long addr, size;
 
 	/*
 	 * If shadow stack is not enabled on the new thread, skip any
-	 * switch to a new shadow stack.
+	 * implicit switch to a new shadow stack and reject attempts to
+	 * explciitly specify one.
 	 */
-	if (!features_enabled(ARCH_SHSTK_SHSTK))
-		return 0;
+	if (!features_enabled(ARCH_SHSTK_SHSTK)) {
+		if (args->shadow_stack || args->shadow_stack_size)
+			return (unsigned long)ERR_PTR(-EINVAL);
 
-	/*
-	 * For CLONE_VFORK the child will share the parents shadow stack.
-	 * Make sure to clear the internal tracking of the thread shadow
-	 * stack so the freeing logic run for child knows to leave it alone.
-	 */
-	if (clone_flags & CLONE_VFORK) {
-		shstk->base = 0;
-		shstk->size = 0;
 		return 0;
 	}
 
 	/*
-	 * For !CLONE_VM the child will use a copy of the parents shadow
-	 * stack.
+	 * If the user specified a shadow stack then do some basic
+	 * validation and use it, otherwise fall back to a default
+	 * shadow stack size if the clone_flags don't indicate an
+	 * allocation is unneeded.
 	 */
-	if (!(clone_flags & CLONE_VM))
-		return 0;
+	if (args->shadow_stack) {
+		addr = args->shadow_stack;
+		size = args->shadow_stack_size;
 
-	size = adjust_shstk_size(stack_size);
-	addr = alloc_shstk(0, size, 0, false);
-	if (IS_ERR_VALUE(addr))
-		return addr;
+		/* There should be a valid token at the top of the stack. */
+		if (!shstk_consume_token(tsk, addr + size - sizeof(u64)))
+			return (unsigned long)ERR_PTR(-EINVAL);
+	} else {
+		/*
+		 * For CLONE_VFORK the child will share the parents
+		 * shadow stack.  Make sure to clear the internal
+		 * tracking of the thread shadow stack so the freeing
+		 * logic run for child knows to leave it alone.
+		 */
+		if (clone_flags & CLONE_VFORK) {
+			shstk->base = 0;
+			shstk->size = 0;
+			return 0;
+		}
 
-	shstk->base = addr;
-	shstk->size = size;
+		/*
+		 * For !CLONE_VM the child will use a copy of the
+		 * parents shadow stack.
+		 */
+		if (!(clone_flags & CLONE_VM))
+			return 0;
+
+		size = args->stack_size;
+		size = adjust_shstk_size(size);
+		addr = alloc_shstk(0, size, 0, false);
+		if (IS_ERR_VALUE(addr))
+			return addr;
+
+		/* We allocated the shadow stack, we should deallocate it. */
+		shstk->base = addr;
+		shstk->size = size;
+	}
 
 	return addr + size;
 }
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index d362aacf9f89..dd577e8dc881 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -43,6 +43,8 @@ struct kernel_clone_args {
 	void *fn_arg;
 	struct cgroup *cgrp;
 	struct css_set *cset;
+	unsigned long shadow_stack;
+	unsigned long shadow_stack_size;
 };
 
 /*
diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 3bac0a8ceab2..8b7af52548fd 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -84,6 +84,10 @@
  *               kernel's limit of nested PID namespaces.
  * @cgroup:      If CLONE_INTO_CGROUP is specified set this to
  *               a file descriptor for the cgroup.
+ * @shadow_stack: Pointer to the memory allocated for the child
+ *               shadow stack.
+ * @shadow_stack_size: Specify the size of the shadow stack for
+ *               the child process.
  *
  * The structure is versioned by size and thus extensible.
  * New struct members must go at the end of the struct and
@@ -101,12 +105,15 @@ struct clone_args {
 	__aligned_u64 set_tid;
 	__aligned_u64 set_tid_size;
 	__aligned_u64 cgroup;
+	__aligned_u64 shadow_stack;
+	__aligned_u64 shadow_stack_size;
 };
 #endif
 
-#define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */
-#define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */
-#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */
+#define CLONE_ARGS_SIZE_VER0 64  /* sizeof first published struct */
+#define CLONE_ARGS_SIZE_VER1 80  /* sizeof second published struct */
+#define CLONE_ARGS_SIZE_VER2 88  /* sizeof third published struct */
+#define CLONE_ARGS_SIZE_VER3 104 /* sizeof fourth published struct */
 
 /*
  * Scheduling policies
diff --git a/kernel/fork.c b/kernel/fork.c
index 0d944e92a43f..fca041cc2b8a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -123,6 +123,11 @@
  */
 #define MAX_THREADS FUTEX_TID_MASK
 
+/*
+ * Require that shadow stacks can store at least one element
+ */
+#define SHADOW_STACK_SIZE_MIN sizeof(void *)
+
 /*
  * Protected counters by write_lock_irq(&tasklist_lock)
  */
@@ -3062,7 +3067,9 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
 		     CLONE_ARGS_SIZE_VER1);
 	BUILD_BUG_ON(offsetofend(struct clone_args, cgroup) !=
 		     CLONE_ARGS_SIZE_VER2);
-	BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER2);
+	BUILD_BUG_ON(offsetofend(struct clone_args, shadow_stack_size) !=
+		     CLONE_ARGS_SIZE_VER3);
+	BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER3);
 
 	if (unlikely(usize > PAGE_SIZE))
 		return -E2BIG;
@@ -3095,16 +3102,18 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
 		return -EINVAL;
 
 	*kargs = (struct kernel_clone_args){
-		.flags		= args.flags,
-		.pidfd		= u64_to_user_ptr(args.pidfd),
-		.child_tid	= u64_to_user_ptr(args.child_tid),
-		.parent_tid	= u64_to_user_ptr(args.parent_tid),
-		.exit_signal	= args.exit_signal,
-		.stack		= args.stack,
-		.stack_size	= args.stack_size,
-		.tls		= args.tls,
-		.set_tid_size	= args.set_tid_size,
-		.cgroup		= args.cgroup,
+		.flags			= args.flags,
+		.pidfd			= u64_to_user_ptr(args.pidfd),
+		.child_tid		= u64_to_user_ptr(args.child_tid),
+		.parent_tid		= u64_to_user_ptr(args.parent_tid),
+		.exit_signal		= args.exit_signal,
+		.stack			= args.stack,
+		.stack_size		= args.stack_size,
+		.tls			= args.tls,
+		.set_tid_size		= args.set_tid_size,
+		.cgroup			= args.cgroup,
+		.shadow_stack		= args.shadow_stack,
+		.shadow_stack_size	= args.shadow_stack_size,
 	};
 
 	if (args.set_tid &&
@@ -3145,6 +3154,34 @@ static inline bool clone3_stack_valid(struct kernel_clone_args *kargs)
 	return true;
 }
 
+/**
+ * clone3_shadow_stack_valid - check and prepare shadow stack
+ * @kargs: kernel clone args
+ *
+ * Verify that shadow stacks are only enabled if supported.
+ */
+static inline bool clone3_shadow_stack_valid(struct kernel_clone_args *kargs)
+{
+	if (kargs->shadow_stack) {
+		if (!kargs->shadow_stack_size)
+			return false;
+
+		if (kargs->shadow_stack_size < SHADOW_STACK_SIZE_MIN)
+			return false;
+
+		if (kargs->shadow_stack_size > rlimit(RLIMIT_STACK))
+			return false;
+
+		/*
+		 * The architecture must check support on the specific
+		 * machine.
+		 */
+		return IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK);
+	} else {
+		return !kargs->shadow_stack_size;
+	}
+}
+
 static bool clone3_args_valid(struct kernel_clone_args *kargs)
 {
 	/* Verify that no unknown flags are passed along. */
@@ -3167,7 +3204,7 @@ static bool clone3_args_valid(struct kernel_clone_args *kargs)
 	    kargs->exit_signal)
 		return false;
 
-	if (!clone3_stack_valid(kargs))
+	if (!clone3_stack_valid(kargs) || !clone3_shadow_stack_valid(kargs))
 		return false;
 
 	return true;
On Sat, 2024-02-03 at 00:05 +0000, Mark Brown wrote:
+static bool shstk_consume_token(struct task_struct *tsk,
+				unsigned long addr)
+{
+	/*
+	 * SSP is aligned, so reserved bits and mode bit are a zero, just mark
+	 * the token 64-bit.
+	 */
+	u64 expected = (addr - SS_FRAME_SIZE) | BIT(0);
+	u64 val;
+
+	/* This should really be an atomic cpmxchg. It is not. */
+	__get_user(val, (__user u64 *)addr);
+	if (val != expected)
+		return false;
+
+	if (write_user_shstk_64((u64 __user *)addr, 0))
+		return false;
+
+	return true;
+}
So, don't we want to consume the token on the *new* task's MM, which was already duplicated but still unmapped? In which case I think the other arch's would need to GUP regardless of the existence of shadow stack atomic ops.
If so, my question is, can we GUP on the new MM at this point? There is a lot going on in copy_process(). My first suspicion of complication is the work on the child that happens in cgroup_post_fork().
I wonder about adding a shstk_post_fork() to make it easier to think about and maintain, even if there are no issues today.
On Fri, 2024-02-09 at 12:18 -0800, Rick Edgecombe wrote:
So, don't we want to consume the token on the *new* task's MM, which was already duplicated but still unmapped? In which case I think the other arch's would need to GUP regardless of the existence of shadow stack atomic ops.
I mean for the !CLONE_VM case.
On Fri, Feb 09, 2024 at 08:18:11PM +0000, Edgecombe, Rick P wrote:
On Sat, 2024-02-03 at 00:05 +0000, Mark Brown wrote:
+	if (write_user_shstk_64((u64 __user *)addr, 0))
+		return false;
+
+	return true;
+}
So, don't we want to consume the token on the *new* task's MM, which was already duplicated but still unmapped? In which case I think the other arch's would need to GUP regardless of the existence of shadow stack atomic ops.
Yes, that would be better - if nothing else it allows reuse of the same shadow stack for multiple !CLONE_VM clone3()s.
I wonder about adding a shstk_post_fork() to make it easier to think about and maintain, even if there are no issues today.
I agree.
On Sat, 2024-02-03 at 00:05 +0000, Mark Brown wrote:
+	if (args->shadow_stack) {
+		addr = args->shadow_stack;
+		size = args->shadow_stack_size;
 
-	size = adjust_shstk_size(stack_size);
-	addr = alloc_shstk(0, size, 0, false);
-	if (IS_ERR_VALUE(addr))
-		return addr;
+		/* There should be a valid token at the top of the stack. */
+		if (!shstk_consume_token(tsk, addr + size - sizeof(u64)))
+			return (unsigned long)ERR_PTR(-EINVAL);
I think for this case, it needs:

	shstk->base = 0;
	shstk->size = 0;
To prevent trying to free the parent's shadow stack when the child exits.
In order to make it easier to add more configuration for the tests and more support for runtime detection of when tests can be run, pass the structure describing the tests into test_clone3() rather than picking the arguments out of it, and have that function do all the per-test work.
No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 tools/testing/selftests/clone3/clone3.c | 77 ++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 40 deletions(-)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c index 3c9bf0cd82a8..1108bd8e36d6 100644 --- a/tools/testing/selftests/clone3/clone3.c +++ b/tools/testing/selftests/clone3/clone3.c @@ -30,6 +30,19 @@ enum test_mode { CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG, };
+typedef bool (*filter_function)(void); +typedef size_t (*size_function)(void); + +struct test { + const char *name; + uint64_t flags; + size_t size; + size_function size_function; + int expected; + enum test_mode test_mode; + filter_function filter; +}; + static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) { struct __clone_args args = { @@ -104,30 +117,40 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) return 0; }
-static bool test_clone3(uint64_t flags, size_t size, int expected, - enum test_mode test_mode) +static void test_clone3(const struct test *test) { + size_t size; int ret;
+ if (test->filter && test->filter()) { + ksft_test_result_skip("%s\n", test->name); + return; + } + + if (test->size_function) + size = test->size_function(); + else + size = test->size; + + ksft_print_msg("Running test '%s'\n", test->name); + ksft_print_msg( "[%d] Trying clone3() with flags %#" PRIx64 " (size %zu)\n", - getpid(), flags, size); - ret = call_clone3(flags, size, test_mode); + getpid(), test->flags, size); + ret = call_clone3(test->flags, size, test->test_mode); ksft_print_msg("[%d] clone3() with flags says: %d expected %d\n", - getpid(), ret, expected); - if (ret != expected) { + getpid(), ret, test->expected); + if (ret != test->expected) { ksft_print_msg( "[%d] Result (%d) is different than expected (%d)\n", - getpid(), ret, expected); - return false; + getpid(), ret, test->expected); + ksft_test_result_fail("%s\n", test->name); + return; }
- return true; + ksft_test_result_pass("%s\n", test->name); }
-typedef bool (*filter_function)(void); -typedef size_t (*size_function)(void); - static bool not_root(void) { if (getuid() != 0) { @@ -155,16 +178,6 @@ static size_t page_size_plus_8(void) return getpagesize() + 8; }
-struct test { - const char *name; - uint64_t flags; - size_t size; - size_function size_function; - int expected; - enum test_mode test_mode; - filter_function filter; -}; - static const struct test tests[] = { { .name = "simple clone3()", @@ -314,24 +327,8 @@ int main(int argc, char *argv[]) ksft_set_plan(ARRAY_SIZE(tests)); test_clone3_supported();
- for (i = 0; i < ARRAY_SIZE(tests); i++) { - if (tests[i].filter && tests[i].filter()) { - ksft_test_result_skip("%s\n", tests[i].name); - continue; - } - - if (tests[i].size_function) - size = tests[i].size_function(); - else - size = tests[i].size; - - ksft_print_msg("Running test '%s'\n", tests[i].name); - - ksft_test_result(test_clone3(tests[i].flags, size, - tests[i].expected, - tests[i].test_mode), - "%s\n", tests[i].name); - } + for (i = 0; i < ARRAY_SIZE(tests); i++) + test_clone3(&tests[i]);
ksft_finished(); }
The clone_args structure is extensible, with the syscall passing in the length of the structure. Inside the kernel we use copy_struct_from_user() to read the struct, but this has the unfortunate side effect of silently accepting some overrun in the structure size provided the extra data is all zeros. This means that we can't discover the clone3() features that the running kernel supports by simply probing with various struct sizes. We need to check this for the benefit of test systems which run newer kselftests on old kernels.
Add a flag which can be set on a test to indicate that clone3() may return -E2BIG due to the use of newer struct versions. Currently no tests need this, but it will become an issue for testing clone3() support for shadow stacks; the support for shadow stacks is already present on x86.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 tools/testing/selftests/clone3/clone3.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c
index 1108bd8e36d6..6adbfd14c841 100644
--- a/tools/testing/selftests/clone3/clone3.c
+++ b/tools/testing/selftests/clone3/clone3.c
@@ -39,6 +39,7 @@ struct test {
 	size_t size;
 	size_function size_function;
 	int expected;
+	bool e2big_valid;
 	enum test_mode test_mode;
 	filter_function filter;
 };
@@ -141,6 +142,11 @@ static void test_clone3(const struct test *test)
 	ksft_print_msg("[%d] clone3() with flags says: %d expected %d\n",
 		       getpid(), ret, test->expected);
 	if (ret != test->expected) {
+		if (test->e2big_valid && ret == -E2BIG) {
+			ksft_print_msg("Test reported -E2BIG\n");
+			ksft_test_result_skip("%s\n", test->name);
+			return;
+		}
 		ksft_print_msg(
 			"[%d] Result (%d) is different than expected (%d)\n",
 			getpid(), ret, test->expected);
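To make the behaviour described above concrete, a rough userspace probe (not part of the patch) along the following lines distinguishes the two cases; the struct layout and the CLONE_ARGS_SIZE_VER3 value follow the clone3() patch earlier in the series, and the function name is just for the example:

#include <errno.h>
#include <stdbool.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_clone3
#define __NR_clone3 435
#endif

#define CLONE_ARGS_SIZE_VER3 104

static bool kernel_knows_shadow_stack_args(void)
{
	/* 104-byte buffer laid out as the VER3 struct clone_args. */
	unsigned char args[CLONE_ARGS_SIZE_VER3];
	unsigned long long bogus = 1;
	long ret;

	memset(args, 0, sizeof(args));
	/* shadow_stack_size is the last field, at offset 96. */
	memcpy(&args[96], &bogus, sizeof(bogus));

	ret = syscall(__NR_clone3, args, sizeof(args));

	/*
	 * An old kernel reports -E2BIG because a field it does not
	 * know about is nonzero; a kernel with this series instead
	 * rejects the bogus arguments with -EINVAL since no shadow
	 * stack pointer was supplied.  Either way no child is created.
	 */
	return !(ret == -1 && errno == E2BIG);
}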
Add basic test coverage for specifying the shadow stack for a newly created thread via clone3(), including coverage of the newly extended argument structure.
In order to facilitate testing on systems without userspace shadow stack support we manually enable shadow stacks on startup; this is architecture specific due to the use of an arch_prctl() on x86. Due to interactions with potential userspace locking of features we actually detect support for shadow stacks on the running system by attempting to allocate a shadow stack page during initialisation using map_shadow_stack(), warning if this succeeds when the enable failed.
Signed-off-by: Mark Brown <broonie@kernel.org>
---
 tools/testing/selftests/clone3/clone3.c           | 128 ++++++++++++++++++++++
 tools/testing/selftests/clone3/clone3_selftests.h |   8 ++
 2 files changed, 136 insertions(+)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c index 6adbfd14c841..c468d9b87bd5 100644 --- a/tools/testing/selftests/clone3/clone3.c +++ b/tools/testing/selftests/clone3/clone3.c @@ -3,6 +3,7 @@ /* Based on Christian Brauner's clone3() example */
#define _GNU_SOURCE +#include <asm/mman.h> #include <errno.h> #include <inttypes.h> #include <linux/types.h> @@ -11,6 +12,7 @@ #include <stdint.h> #include <stdio.h> #include <stdlib.h> +#include <sys/mman.h> #include <sys/syscall.h> #include <sys/types.h> #include <sys/un.h> @@ -19,8 +21,12 @@ #include <sched.h>
#include "../kselftest.h" +#include "../ksft_shstk.h" #include "clone3_selftests.h"
+static bool shadow_stack_supported; +static size_t max_supported_args_size; + enum test_mode { CLONE3_ARGS_NO_TEST, CLONE3_ARGS_ALL_0, @@ -28,6 +34,10 @@ enum test_mode { CLONE3_ARGS_INVAL_EXIT_SIGNAL_NEG, CLONE3_ARGS_INVAL_EXIT_SIGNAL_CSIG, CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG, + CLONE3_ARGS_SHADOW_STACK, + CLONE3_ARGS_SHADOW_STACK_NO_SIZE, + CLONE3_ARGS_SHADOW_STACK_NO_POINTER, + CLONE3_ARGS_SHADOW_STACK_NO_TOKEN, };
typedef bool (*filter_function)(void); @@ -44,6 +54,43 @@ struct test { filter_function filter; };
+/* + * We check for shadow stack support by attempting to use + * map_shadow_stack() since features may have been locked by the + * dynamic linker resulting in spurious errors when we attempt to + * enable on startup. We warn if the enable failed. + */ +static void test_shadow_stack_supported(void) +{ + long ret; + + ret = syscall(__NR_map_shadow_stack, 0, getpagesize(), 0); + if (ret == -1) { + ksft_print_msg("map_shadow_stack() not supported\n"); + } else if ((void *)ret == MAP_FAILED) { + ksft_print_msg("Failed to map shadow stack\n"); + } else { + ksft_print_msg("Shadow stack supportd\n"); + shadow_stack_supported = true; + + if (!shadow_stack_enabled) + ksft_print_msg("Mapped but did not enable shadow stack\n"); + } +} + +static unsigned long long get_shadow_stack_page(unsigned long flags) +{ + unsigned long long page; + + page = syscall(__NR_map_shadow_stack, 0, getpagesize(), flags); + if ((void *)page == MAP_FAILED) { + ksft_print_msg("map_shadow_stack() failed: %d\n", errno); + return 0; + } + + return page; +} + static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) { struct __clone_args args = { @@ -89,6 +136,20 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) case CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG: args.exit_signal = 0x00000000000000f0ULL; break; + case CLONE3_ARGS_SHADOW_STACK: + args.shadow_stack = get_shadow_stack_page(SHADOW_STACK_SET_TOKEN); + args.shadow_stack_size = getpagesize(); + break; + case CLONE3_ARGS_SHADOW_STACK_NO_POINTER: + args.shadow_stack_size = getpagesize(); + break; + case CLONE3_ARGS_SHADOW_STACK_NO_SIZE: + args.shadow_stack = get_shadow_stack_page(SHADOW_STACK_SET_TOKEN); + break; + case CLONE3_ARGS_SHADOW_STACK_NO_TOKEN: + args.shadow_stack = get_shadow_stack_page(0); + args.shadow_stack_size = getpagesize(); + break; }
memcpy(&args_ext.args, &args, sizeof(struct __clone_args)); @@ -179,6 +240,26 @@ static bool no_timenamespace(void) return true; }
+static bool have_shadow_stack(void) +{ + if (shadow_stack_supported) { + ksft_print_msg("Shadow stack supported\n"); + return true; + } + + return false; +} + +static bool no_shadow_stack(void) +{ + if (!shadow_stack_supported) { + ksft_print_msg("Shadow stack not supported\n"); + return true; + } + + return false; +} + static size_t page_size_plus_8(void) { return getpagesize() + 8; @@ -322,6 +403,50 @@ static const struct test tests[] = { .expected = -EINVAL, .test_mode = CLONE3_ARGS_NO_TEST, }, + { + .name = "Shadow stack on system with shadow stack", + .flags = CLONE_VM, + .size = 0, + .expected = 0, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK, + .filter = no_shadow_stack, + }, + { + .name = "Shadow stack with no pointer", + .flags = CLONE_VM, + .size = 0, + .expected = -EINVAL, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK_NO_POINTER, + }, + { + .name = "Shadow stack with no size", + .flags = CLONE_VM, + .size = 0, + .expected = -EINVAL, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK_NO_SIZE, + .filter = no_shadow_stack, + }, + { + .name = "Shadow stack with no token", + .flags = CLONE_VM, + .size = 0, + .expected = -EINVAL, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK_NO_TOKEN, + .filter = no_shadow_stack, + }, + { + .name = "Shadow stack on system without shadow stack", + .flags = CLONE_VM, + .size = 0, + .expected = -EINVAL, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK, + .filter = have_shadow_stack, + }, };
int main(int argc, char *argv[]) @@ -329,9 +454,12 @@ int main(int argc, char *argv[]) size_t size; int i;
+ enable_shadow_stack(); + ksft_print_header(); ksft_set_plan(ARRAY_SIZE(tests)); test_clone3_supported(); + test_shadow_stack_supported();
for (i = 0; i < ARRAY_SIZE(tests); i++) test_clone3(&tests[i]); diff --git a/tools/testing/selftests/clone3/clone3_selftests.h b/tools/testing/selftests/clone3/clone3_selftests.h index 3d2663fe50ba..1011dae85098 100644 --- a/tools/testing/selftests/clone3/clone3_selftests.h +++ b/tools/testing/selftests/clone3/clone3_selftests.h @@ -31,6 +31,14 @@ struct __clone_args { __aligned_u64 set_tid; __aligned_u64 set_tid_size; __aligned_u64 cgroup; +#ifndef CLONE_ARGS_SIZE_VER2 +#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */ +#endif + __aligned_u64 shadow_stack; + __aligned_u64 shadow_stack_size; +#ifndef CLONE_ARGS_SIZE_VER3 +#define CLONE_ARGS_SIZE_VER3 104 /* sizeof fourth published struct */ +#endif };
static pid_t sys_clone3(struct __clone_args *args, size_t size)
On Sat, 2024-02-03 at 00:04 +0000, Mark Brown wrote:
Please note that the x86 portions of this code are build tested only; I don't appear to have a system that can run CET available to me. I have done testing with an integration into my pending work for GCS. There is some possibility that the arm64 implementation may require the use of clone3() and explicit userspace allocation of shadow stacks; this is still under discussion.
It all passed for me on the x86 side.