The kernel has recently added support for shadow stacks, currently x86 only using its CET feature, but both arm64 and RISC-V have equivalent features (GCS and Zicfiss respectively); I am actively working on GCS[1]. With shadow stacks the hardware maintains an additional stack containing only the return addresses for function calls. This stack is not generally writeable by userspace and the hardware ensures that any returns are to the recorded addresses, which provides some protection against ROP attacks and makes it easier to collect call stacks. These shadow stacks are allocated in the address space of the userspace process.
Our API for shadow stacks does not currently offer userspace any flexibility for managing the allocation of shadow stacks for newly created threads; instead the kernel allocates a new shadow stack with the same size as the normal stack whenever a thread is created with the feature enabled. The stacks allocated in this way are freed by the kernel when the thread exits or shadow stacks are disabled for the thread. This lack of flexibility and control isn't ideal: in the vast majority of cases the shadow stack will be over-allocated, and the implicit allocation and deallocation is not consistent with other interfaces. As far as I can tell the interface was done in this manner mainly because the shadow stack patches were in development since before clone3() was implemented.
Since clone3() is readily extensible, let's add support for specifying a shadow stack when creating a new thread or process, keeping the current implicit allocation behaviour if one is not specified either with clone3() or through the use of clone(). The user must provide a shadow stack pointer which points to memory mapped for use as a shadow stack by map_shadow_stack(), with an architecture specific shadow stack token at the top of the stack.
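For illustration, here is a minimal sketch of how userspace might use the proposed interface. The shadow_stack_pointer field is the addition from this series, the fallback syscall number and token flag definitions are copied from the selftest helpers, and the function name is purely for the example:

#define _GNU_SOURCE
#include <asm/mman.h>
#include <linux/sched.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453
#endif

#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN (1ULL << 0)	/* currently only defined for x86 */
#endif

/*
 * Allocate a shadow stack with a token at its top and create a thread
 * using it.  Note that the child must not simply return from this
 * frame on its new shadow stack; a real user would branch straight to
 * the thread function or exit via a raw syscall, as the selftests in
 * this series do.
 */
static pid_t spawn_with_shadow_stack(void *stack, size_t stack_size,
				     size_t shstk_size)
{
	unsigned long ssp;

	/* Memory used as a shadow stack must come from map_shadow_stack(). */
	ssp = syscall(__NR_map_shadow_stack, 0, shstk_size,
		      SHADOW_STACK_SET_TOKEN);
	if (ssp == (unsigned long)-1)
		return -1;

	struct clone_args args = {
		.flags			= CLONE_VM,
		.stack			= (uintptr_t)stack,
		.stack_size		= stack_size,
		/* Point at the token written at the top of the stack. */
		.shadow_stack_pointer	= ssp + shstk_size - sizeof(uint64_t),
	};

	/* clone3() has no libc wrapper; the size must cover the new field. */
	return syscall(__NR_clone3, &args, sizeof(args));
}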
Please note that the x86 portions of this code are build tested only, I don't currently have access to a system that can run CET.
[1] https://lore.kernel.org/linux-arm-kernel/20241001-arm64-gcs-v13-0-222b78d87e...
Signed-off-by: Mark Brown broonie@kernel.org --- Changes in v13: - Rebase onto v6.13-rc1. - Link to v12: https://lore.kernel.org/r/20241031-clone3-shadow-stack-v12-0-7183eb8bee17@ke...
Changes in v12: - Add the regular prctl() to the userspace API document since arm64 support is queued in -next. - Link to v11: https://lore.kernel.org/r/20241005-clone3-shadow-stack-v11-0-2a6a2bd6d651@ke...
Changes in v11: - Rebase onto arm64 for-next/gcs, which is based on v6.12-rc1, and integrate arm64 support. - Rework the interface to specify a shadow stack pointer rather than a base and size like we do for the regular stack. - Link to v10: https://lore.kernel.org/r/20240821-clone3-shadow-stack-v10-0-06e8797b9445@ke...
Changes in v10: - Integrate fixes & improvements for the x86 implementation from Rick Edgecombe. - Require that the shadow stack be VM_WRITE. - Require that the shadow stack base and size be sizeof(void *) aligned. - Clean up trailing newline. - Link to v9: https://lore.kernel.org/r/20240819-clone3-shadow-stack-v9-0-962d74f99464@ker...
Changes in v9: - Pull token validation earlier and report problems with an error return to parent rather than signal delivery to the child. - Verify that the top of the supplied shadow stack is VM_SHADOW_STACK. - Rework token validation to only do the page mapping once. - Drop no longer needed support for testing for signals in selftest. - Fix typo in comments. - Link to v8: https://lore.kernel.org/r/20240808-clone3-shadow-stack-v8-0-0acf37caf14c@ker...
Changes in v8: - Fix token verification with user specified shadow stack. - Don't track user managed shadow stacks for child processes. - Link to v7: https://lore.kernel.org/r/20240731-clone3-shadow-stack-v7-0-a9532eebfb1d@ker...
Changes in v7: - Rebase onto v6.11-rc1. - Typo fixes. - Link to v6: https://lore.kernel.org/r/20240623-clone3-shadow-stack-v6-0-9ee7783b1fb9@ker...
Changes in v6: - Rebase onto v6.10-rc3. - Ensure we don't try to free the parent shadow stack in error paths of x86 arch code. - Spelling fixes in userspace API document. - Additional cleanups and improvements to the clone3() tests to support the shadow stack tests. - Link to v5: https://lore.kernel.org/r/20240203-clone3-shadow-stack-v5-0-322c69598e4b@ker...
Changes in v5: - Rebase onto v6.8-rc2. - Rework ABI to have the user allocate the shadow stack memory with map_shadow_stack() and a token. - Force inlining of the x86 shadow stack enablement. - Move shadow stack enablement out into a shared header for reuse by other tests. - Link to v4: https://lore.kernel.org/r/20231128-clone3-shadow-stack-v4-0-8b28ffe4f676@ker...
Changes in v4: - Formatting changes. - Use a define for minimum shadow stack size and move some basic validation to fork.c. - Link to v3: https://lore.kernel.org/r/20231120-clone3-shadow-stack-v3-0-a7b8ed3e2acc@ker...
Changes in v3: - Rebase onto v6.7-rc2. - Remove stale shadow_stack in internal kargs. - If a shadow stack is specified unconditionally use it regardless of CLONE_ parameters. - Force enable shadow stacks in the selftest. - Update changelogs for RISC-V feature rename. - Link to v2: https://lore.kernel.org/r/20231114-clone3-shadow-stack-v2-0-b613f8681155@ker...
Changes in v2: - Rebase onto v6.7-rc1. - Remove ability to provide preallocated shadow stack, just specify the desired size. - Link to v1: https://lore.kernel.org/r/20231023-clone3-shadow-stack-v1-0-d867d0b5d4d0@ker...
--- Mark Brown (8): arm64/gcs: Return a success value from gcs_alloc_thread_stack() Documentation: userspace-api: Add shadow stack API documentation selftests: Provide helper header for shadow stack testing fork: Add shadow stack support to clone3() selftests/clone3: Remove redundant flushes of output streams selftests/clone3: Factor more of main loop into test_clone3() selftests/clone3: Allow tests to flag if -E2BIG is a valid error code selftests/clone3: Test shadow stack support
Documentation/userspace-api/index.rst | 1 + Documentation/userspace-api/shadow_stack.rst | 42 ++++ arch/arm64/include/asm/gcs.h | 8 +- arch/arm64/kernel/process.c | 8 +- arch/arm64/mm/gcs.c | 62 +++++- arch/x86/include/asm/shstk.h | 11 +- arch/x86/kernel/process.c | 2 +- arch/x86/kernel/shstk.c | 57 +++++- include/asm-generic/cacheflush.h | 11 ++ include/linux/sched/task.h | 17 ++ include/uapi/linux/sched.h | 10 +- kernel/fork.c | 96 +++++++-- tools/testing/selftests/clone3/clone3.c | 226 ++++++++++++++++++---- tools/testing/selftests/clone3/clone3_selftests.h | 65 ++++++- tools/testing/selftests/ksft_shstk.h | 98 ++++++++++ 15 files changed, 633 insertions(+), 81 deletions(-) --- base-commit: 40384c840ea1944d7c5a392e8975ed088ecf0b37 change-id: 20231019-clone3-shadow-stack-15d40d2bf536
Best regards,
Currently, as a result of templating from the x86 code, gcs_alloc_thread_stack() returns a pointer as an unsigned long; however on arm64 we don't actually use this pointer value as anything other than a pass/fail flag. Simplify the interface to just return an int with 0 on success and a negative error code on failure.
Acked-by: Deepak Gupta debug@rivosinc.com Reviewed-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/gcs.h | 8 ++++---- arch/arm64/kernel/process.c | 8 ++++---- arch/arm64/mm/gcs.c | 8 ++++---- 3 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h index f50660603ecf5dc09a92740062df3a089b02b219..d8923b5f03b776252aca76ce316ef57399d71fa9 100644 --- a/arch/arm64/include/asm/gcs.h +++ b/arch/arm64/include/asm/gcs.h @@ -64,8 +64,8 @@ static inline bool task_gcs_el0_enabled(struct task_struct *task) void gcs_set_el0_mode(struct task_struct *task); void gcs_free(struct task_struct *task); void gcs_preserve_current_state(void); -unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, - const struct kernel_clone_args *args); +int gcs_alloc_thread_stack(struct task_struct *tsk, + const struct kernel_clone_args *args);
static inline int gcs_check_locked(struct task_struct *task, unsigned long new_val) @@ -91,8 +91,8 @@ static inline bool task_gcs_el0_enabled(struct task_struct *task) static inline void gcs_set_el0_mode(struct task_struct *task) { } static inline void gcs_free(struct task_struct *task) { } static inline void gcs_preserve_current_state(void) { } -static inline unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, - const struct kernel_clone_args *args) +static inline int gcs_alloc_thread_stack(struct task_struct *tsk, + const struct kernel_clone_args *args) { return -ENOTSUPP; } diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 2968a33bb3bc16208ff672590fd9a9a8d0b26b19..c217ab67e82baa212d008b62b876acf8b2b492d6 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -297,7 +297,7 @@ static void flush_gcs(void) static int copy_thread_gcs(struct task_struct *p, const struct kernel_clone_args *args) { - unsigned long gcs; + int ret;
if (!system_supports_gcs()) return 0; @@ -305,9 +305,9 @@ static int copy_thread_gcs(struct task_struct *p, p->thread.gcs_base = 0; p->thread.gcs_size = 0;
- gcs = gcs_alloc_thread_stack(p, args); - if (IS_ERR_VALUE(gcs)) - return PTR_ERR((void *)gcs); + ret = gcs_alloc_thread_stack(p, args); + if (ret != 0) + return ret;
p->thread.gcs_el0_mode = current->thread.gcs_el0_mode; p->thread.gcs_el0_locked = current->thread.gcs_el0_locked; diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c index 5c46ec527b1cdaa8f52cff445d70ba0f8509d086..1f633a482558b59aac5427963d42b37fce08c8a6 100644 --- a/arch/arm64/mm/gcs.c +++ b/arch/arm64/mm/gcs.c @@ -38,8 +38,8 @@ static unsigned long gcs_size(unsigned long size) return max(PAGE_SIZE, size); }
-unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, - const struct kernel_clone_args *args) +int gcs_alloc_thread_stack(struct task_struct *tsk, + const struct kernel_clone_args *args) { unsigned long addr, size;
@@ -59,13 +59,13 @@ unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, size = gcs_size(size); addr = alloc_gcs(0, size); if (IS_ERR_VALUE(addr)) - return addr; + return PTR_ERR((void *)addr);
tsk->thread.gcs_base = addr; tsk->thread.gcs_size = size; tsk->thread.gcspr_el0 = addr + size - sizeof(u64);
- return addr; + return 0; }
SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags)
There are a number of architectures with shadow stack features which we are presenting to userspace with as consistent an API as we can (though there are some architecture specifics). Especially given that there are some important considerations for userspace code interacting directly with the feature, let's provide some documentation covering the common aspects.
Reviewed-by: Catalin Marinas catalin.marinas@arm.com Reviewed-by: Kees Cook kees@kernel.org Tested-by: Kees Cook kees@kernel.org Acked-by: Shuah Khan skhan@linuxfoundation.org Acked-by: Yury Khrustalev yury.khrustalev@arm.com Signed-off-by: Mark Brown broonie@kernel.org --- Documentation/userspace-api/index.rst | 1 + Documentation/userspace-api/shadow_stack.rst | 42 ++++++++++++++++++++++++++++ 2 files changed, 43 insertions(+)
diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst index 274cc7546efc2a042d2dc00aa67c71c52372179a..c39709bfba2c5682d0d1a22444db17c17bcf01ce 100644 --- a/Documentation/userspace-api/index.rst +++ b/Documentation/userspace-api/index.rst @@ -59,6 +59,7 @@ Everything else
ELF netlink/index + shadow_stack sysfs-platform_profile vduse futex2 diff --git a/Documentation/userspace-api/shadow_stack.rst b/Documentation/userspace-api/shadow_stack.rst new file mode 100644 index 0000000000000000000000000000000000000000..9d0d4e79cfa7c47d3208dd53071a3d0b86c18575 --- /dev/null +++ b/Documentation/userspace-api/shadow_stack.rst @@ -0,0 +1,42 @@ +============= +Shadow Stacks +============= + +Introduction +============ + +Several architectures have features which provide backward edge +control flow protection through a hardware maintained stack, only +writeable by userspace through very limited operations. This feature +is referred to as shadow stacks on Linux, on x86 it is part of Intel +Control Enforcement Technology (CET), on arm64 it is Guarded Control +Stacks feature (FEAT_GCS) and for RISC-V it is the Zicfiss extension. +It is expected that this feature will normally be managed by the +system dynamic linker and libc in ways broadly transparent to +application code, this document covers interfaces and considerations. + + +Enabling +======== + +Shadow stacks default to disabled when a userspace process is +executed, they can be enabled for the current thread with a syscall: + + - For x86 the ARCH_SHSTK_ENABLE arch_prctl() + - For other architectures the PR_SET_SHADOW_STACK_ENABLE prctl() + +It is expected that this will normally be done by the dynamic linker. +Any new threads created by a thread with shadow stacks enabled will +themselves have shadow stacks enabled. + + +Enablement considerations +========================= + +- Returning from the function that enables shadow stacks without first + disabling them will cause a shadow stack exception. This includes + any syscall wrapper or other library functions, the syscall will need + to be inlined. +- A lock feature allows userspace to prevent disabling of shadow stacks. +- Those that change the stack context like longjmp() or use of ucontext + changes on signal return will need support from libc.
While almost all users of shadow stacks should be relying on the dynamic linker and libc to enable the feature, there are several low level test programs where it is useful to enable without any libc support, allowing testing without full system enablement. This low level testing is helpful during bringup of the support itself, and also enables coverage by automated testing without needing all system components in the target root filesystem to support shadow stacks.
Provide a header with helpers for this purpose, intended for use only by test programs directly exercising shadow stack interfaces.
Reviewed-by: Rick Edgecombe rick.p.edgecombe@intel.com Reviewed-by: Kees Cook kees@kernel.org Tested-by: Kees Cook kees@kernel.org Acked-by: Shuah Khan skhan@linuxfoundation.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/ksft_shstk.h | 98 ++++++++++++++++++++++++++++++++++++ 1 file changed, 98 insertions(+)
diff --git a/tools/testing/selftests/ksft_shstk.h b/tools/testing/selftests/ksft_shstk.h new file mode 100644 index 0000000000000000000000000000000000000000..869ecea2bf3ea3d30cead9819d2b3a75f5397754 --- /dev/null +++ b/tools/testing/selftests/ksft_shstk.h @@ -0,0 +1,98 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Helpers for shadow stack enablement, this is intended to only be + * used by low level test programs directly exercising interfaces for + * working with shadow stacks. + * + * Copyright (C) 2024 ARM Ltd. + */ + +#ifndef __KSFT_SHSTK_H +#define __KSFT_SHSTK_H + +#include <asm/mman.h> + +/* This is currently only defined for x86 */ +#ifndef SHADOW_STACK_SET_TOKEN +#define SHADOW_STACK_SET_TOKEN (1ULL << 0) +#endif + +static bool shadow_stack_enabled; + +#ifdef __x86_64__ +#define ARCH_SHSTK_ENABLE 0x5001 +#define ARCH_SHSTK_SHSTK (1ULL << 0) + +#define ARCH_PRCTL(arg1, arg2) \ +({ \ + long _ret; \ + register long _num asm("eax") = __NR_arch_prctl; \ + register long _arg1 asm("rdi") = (long)(arg1); \ + register long _arg2 asm("rsi") = (long)(arg2); \ + \ + asm volatile ( \ + "syscall\n" \ + : "=a"(_ret) \ + : "r"(_arg1), "r"(_arg2), \ + "0"(_num) \ + : "rcx", "r11", "memory", "cc" \ + ); \ + _ret; \ +}) + +#define ENABLE_SHADOW_STACK +static inline __attribute__((always_inline)) void enable_shadow_stack(void) +{ + int ret = ARCH_PRCTL(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK); + if (ret == 0) + shadow_stack_enabled = true; +} + +#endif + +#ifdef __aarch64__ +#define PR_SET_SHADOW_STACK_STATUS 75 +# define PR_SHADOW_STACK_ENABLE (1UL << 0) + +#define my_syscall2(num, arg1, arg2) \ +({ \ + register long _num __asm__ ("x8") = (num); \ + register long _arg1 __asm__ ("x0") = (long)(arg1); \ + register long _arg2 __asm__ ("x1") = (long)(arg2); \ + register long _arg3 __asm__ ("x2") = 0; \ + register long _arg4 __asm__ ("x3") = 0; \ + register long _arg5 __asm__ ("x4") = 0; \ + \ + __asm__ volatile ( \ + "svc #0\n" \ + : "=r"(_arg1) \ + : "r"(_arg1), "r"(_arg2), \ + "r"(_arg3), "r"(_arg4), \ + "r"(_arg5), "r"(_num) \ + : "memory", "cc" \ + ); \ + _arg1; \ +}) + +#define ENABLE_SHADOW_STACK +static inline __attribute__((always_inline)) void enable_shadow_stack(void) +{ + int ret; + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + PR_SHADOW_STACK_ENABLE); + if (ret == 0) + shadow_stack_enabled = true; +} + +#endif + +#ifndef __NR_map_shadow_stack +#define __NR_map_shadow_stack 453 +#endif + +#ifndef ENABLE_SHADOW_STACK +static inline void enable_shadow_stack(void) { } +#endif + +#endif
Unlike with the normal stack, there is no API for configuring the shadow stack for a new thread; instead the kernel dynamically allocates a new shadow stack with the same size as the normal stack. This appears to be due to the shadow stack series having been in development since before the more extensible clone3() was added, rather than anything more deliberate.
Add a parameter to clone3() specifying the shadow stack pointer to use for the new thread. This is inconsistent with the way we specify the normal stack, but during review concerns were expressed about having to identify where the shadow stack pointer should be placed, especially in cases where the shadow stack has previously been active. If no shadow stack is specified then the existing implicit allocation behaviour is maintained.
If a shadow stack pointer is specified then it is required to have an architecture defined token placed on the stack; this will be consumed by the new task. If no valid token is present then the error is reported to the caller with -EINVAL. This token prevents new threads from being created pointing at the shadow stack of an existing running thread.
If the architecture does not support shadow stacks then a shadow stack pointer must not be specified; architectures that do support the feature are expected to enforce the same requirement on individual systems that lack shadow stack support.
Update the existing arm64 and x86 implementations to pay attention to the newly added arguments; in order to maintain compatibility we use the existing behaviour if no shadow stack is specified. Since we are now using more fields from the kernel_clone_args we pass that into the shadow stack code rather than individual fields.
Portions of the x86 architecture code were written by Rick Edgecombe.
Acked-by: Yury Khrustalev yury.khrustalev@arm.com Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/mm/gcs.c | 54 +++++++++++++++++++++- arch/x86/include/asm/shstk.h | 11 +++-- arch/x86/kernel/process.c | 2 +- arch/x86/kernel/shstk.c | 57 +++++++++++++++++++++--- include/asm-generic/cacheflush.h | 11 +++++ include/linux/sched/task.h | 17 +++++++ include/uapi/linux/sched.h | 10 +++-- kernel/fork.c | 96 +++++++++++++++++++++++++++++++++++----- 8 files changed, 232 insertions(+), 26 deletions(-)
diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c index 1f633a482558b59aac5427963d42b37fce08c8a6..c4e93b7ce05c5dfa1128923ad587f9b5a7fb0051 100644 --- a/arch/arm64/mm/gcs.c +++ b/arch/arm64/mm/gcs.c @@ -43,8 +43,24 @@ int gcs_alloc_thread_stack(struct task_struct *tsk, { unsigned long addr, size;
- if (!system_supports_gcs()) + if (!system_supports_gcs()) { + if (args->shadow_stack_pointer) + return -EINVAL; + + return 0; + } + + /* + * If the user specified a GCS then use it, otherwise fall + * back to a default allocation strategy. Validation is done + * in arch_shstk_validate_clone(). + */ + if (args->shadow_stack_pointer) { + tsk->thread.gcs_base = 0; + tsk->thread.gcs_size = 0; + tsk->thread.gcspr_el0 = args->shadow_stack_pointer; return 0; + }
if (!task_gcs_el0_enabled(tsk)) return 0; @@ -68,6 +84,42 @@ int gcs_alloc_thread_stack(struct task_struct *tsk, return 0; }
+static bool gcs_consume_token(struct vm_area_struct *vma, struct page *page, + unsigned long user_addr) +{ + u64 expected = GCS_CAP(user_addr); + u64 *token = page_address(page) + offset_in_page(user_addr); + + if (!cmpxchg_to_user_page(vma, page, user_addr, token, expected, 0)) + return false; + set_page_dirty_lock(page); + + return true; +} + +int arch_shstk_validate_clone(struct task_struct *tsk, + struct vm_area_struct *vma, + struct page *page, + struct kernel_clone_args *args) +{ + unsigned long gcspr_el0; + int ret = 0; + + /* Ensure that a token written as a result of a pivot is visible */ + gcsb_dsync(); + + gcspr_el0 = args->shadow_stack_pointer; + if (!gcs_consume_token(vma, page, gcspr_el0)) + return -EINVAL; + + tsk->thread.gcspr_el0 = gcspr_el0 + sizeof(u64); + + /* Ensure that our token consumption visible */ + gcsb_dsync(); + + return ret; +} + SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags) { unsigned long alloc_size; diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h index 4cb77e004615dff003426a2eb594460ca1015f4e..252feeda69991e939942c74556e23e27c835e766 100644 --- a/arch/x86/include/asm/shstk.h +++ b/arch/x86/include/asm/shstk.h @@ -6,6 +6,7 @@ #include <linux/types.h>
struct task_struct; +struct kernel_clone_args; struct ksignal;
#ifdef CONFIG_X86_USER_SHADOW_STACK @@ -16,8 +17,8 @@ struct thread_shstk {
long shstk_prctl(struct task_struct *task, int option, unsigned long arg2); void reset_thread_features(void); -unsigned long shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags, - unsigned long stack_size); +unsigned long shstk_alloc_thread_stack(struct task_struct *p, + const struct kernel_clone_args *args); void shstk_free(struct task_struct *p); int setup_signal_shadow_stack(struct ksignal *ksig); int restore_signal_shadow_stack(void); @@ -28,8 +29,10 @@ static inline long shstk_prctl(struct task_struct *task, int option, unsigned long arg2) { return -EINVAL; } static inline void reset_thread_features(void) {} static inline unsigned long shstk_alloc_thread_stack(struct task_struct *p, - unsigned long clone_flags, - unsigned long stack_size) { return 0; } + const struct kernel_clone_args *args) +{ + return 0; +} static inline void shstk_free(struct task_struct *p) {} static inline int setup_signal_shadow_stack(struct ksignal *ksig) { return 0; } static inline int restore_signal_shadow_stack(void) { return 0; } diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index f63f8fd00a91f3d1171f307b92179556ba2d716d..59456ab8d93faee29c3b223b64eb41659df76032 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -207,7 +207,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) * is disabled, new_ssp will remain 0, and fpu_clone() will know not to * update it. */ - new_ssp = shstk_alloc_thread_stack(p, clone_flags, args->stack_size); + new_ssp = shstk_alloc_thread_stack(p, args); if (IS_ERR_VALUE(new_ssp)) return PTR_ERR((void *)new_ssp);
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c index 059685612362d7b1865eabf400888fbfa0659c1e..056e2c9ec30531d0901297da07f1842b47d2fcd5 100644 --- a/arch/x86/kernel/shstk.c +++ b/arch/x86/kernel/shstk.c @@ -191,18 +191,65 @@ void reset_thread_features(void) current->thread.features_locked = 0; }
-unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags, - unsigned long stack_size) +int arch_shstk_validate_clone(struct task_struct *t, + struct vm_area_struct *vma, + struct page *page, + struct kernel_clone_args *args) +{ + /* + * SSP is aligned, so reserved bits and mode bit are a zero, just mark + * the token 64-bit. + */ + void *maddr = kmap_local_page(page); + int offset; + unsigned long addr, ssp; + u64 expected; + + if (!features_enabled(ARCH_SHSTK_SHSTK)) + return 0; + + ssp = args->shadow_stack_pointer; + addr = ssp - SS_FRAME_SIZE; + expected = ssp | BIT(0); + offset = offset_in_page(addr); + + if (!cmpxchg_to_user_page(vma, page, addr, (unsigned long *)(maddr + offset), + expected, 0)) + return -EINVAL; + set_page_dirty_lock(page); + + return 0; +} + +unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, + const struct kernel_clone_args *args) { struct thread_shstk *shstk = &tsk->thread.shstk; + unsigned long clone_flags = args->flags; unsigned long addr, size;
/* * If shadow stack is not enabled on the new thread, skip any - * switch to a new shadow stack. + * implicit switch to a new shadow stack and reject attempts to + * explicitly specify one. */ - if (!features_enabled(ARCH_SHSTK_SHSTK)) + if (!features_enabled(ARCH_SHSTK_SHSTK)) { + if (args->shadow_stack_pointer) + return (unsigned long)ERR_PTR(-EINVAL); + return 0; + } + + /* + * If the user specified a shadow stack then use it, otherwise + * fall back to a default allocation strategy. Validation is + * done in arch_shstk_validate_clone(). + */ + if (args->shadow_stack_pointer) { + shstk->base = 0; + shstk->size = 0; + return args->shadow_stack_pointer; + }
/* * For CLONE_VFORK the child will share the parents shadow stack. @@ -222,7 +269,7 @@ unsigned long shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long cl if (!(clone_flags & CLONE_VM)) return 0;
- size = adjust_shstk_size(stack_size); + size = adjust_shstk_size(args->stack_size); addr = alloc_shstk(0, size, 0, false); if (IS_ERR_VALUE(addr)) return addr; diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h index 7ee8a179d1036e1d8010b8b18a8f3022e41c1695..96cc0c7a5c90fd7e899d0c5fe7c706302265efcf 100644 --- a/include/asm-generic/cacheflush.h +++ b/include/asm-generic/cacheflush.h @@ -124,4 +124,15 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end) } while (0) #endif
+#ifndef cmpxchg_to_user_page +#define cmpxchg_to_user_page(vma, page, vaddr, ptr, old, new) \ +({ \ + bool ret; \ + \ + ret = try_cmpxchg(ptr, &old, new); \ + flush_icache_user_page(vma, page, vaddr, sizeof(*ptr)); \ + ret; \ +}) +#endif + #endif /* _ASM_GENERIC_CACHEFLUSH_H */ diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h index 0f2aeb37bbb047335a399326b31bc8df81b75a3a..cd36389619d5c97401f7b90e177c6027c232783b 100644 --- a/include/linux/sched/task.h +++ b/include/linux/sched/task.h @@ -16,6 +16,7 @@ struct task_struct; struct rusage; union thread_union; struct css_set; +struct vm_area_struct;
/* All the bits taken by the old clone syscall. */ #define CLONE_LEGACY_FLAGS 0xffffffffULL @@ -43,6 +44,7 @@ struct kernel_clone_args { void *fn_arg; struct cgroup *cgrp; struct css_set *cset; + unsigned long shadow_stack_pointer; };
/* @@ -236,4 +238,19 @@ static inline void task_unlock(struct task_struct *p)
DEFINE_GUARD(task_lock, struct task_struct *, task_lock(_T), task_unlock(_T))
+#ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK +int arch_shstk_validate_clone(struct task_struct *p, + struct vm_area_struct *vma, + struct page *page, + struct kernel_clone_args *args); +#else +static inline int arch_shstk_validate_clone(struct task_struct *p, + struct vm_area_struct *vma, + struct page *page, + struct kernel_clone_args *args) +{ + return 0; +} +#endif + #endif /* _LINUX_SCHED_TASK_H */ diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h index 359a14cc76a4038aeacef14b2915d5ce60d0cf44..586a1c05a4e4ca05584d4d500223bcf6c3add54c 100644 --- a/include/uapi/linux/sched.h +++ b/include/uapi/linux/sched.h @@ -84,6 +84,8 @@ * kernel's limit of nested PID namespaces. * @cgroup: If CLONE_INTO_CGROUP is specified set this to * a file descriptor for the cgroup. + * @shadow_stack_pointer: Value to use for shadow stack pointer in the + * child process. * * The structure is versioned by size and thus extensible. * New struct members must go at the end of the struct and @@ -101,12 +103,14 @@ struct clone_args { __aligned_u64 set_tid; __aligned_u64 set_tid_size; __aligned_u64 cgroup; + __aligned_u64 shadow_stack_pointer; }; #endif
-#define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */ -#define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */ -#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */ +#define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */ +#define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */ +#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */ +#define CLONE_ARGS_SIZE_VER3 96 /* sizeof fourth published struct */
/* * Scheduling policies diff --git a/kernel/fork.c b/kernel/fork.c index 1450b461d196a1efee0e120780467a96f6c7d491..6ef945c7d4220f0838d004a8b37615cccc91b7da 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2128,6 +2128,51 @@ static void rv_task_fork(struct task_struct *p) #define rv_task_fork(p) do {} while (0) #endif
+static int shstk_validate_clone(struct task_struct *p, + struct kernel_clone_args *args) +{ + struct mm_struct *mm; + struct vm_area_struct *vma; + struct page *page; + unsigned long addr; + int ret; + + if (!IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK)) + return 0; + + if (!args->shadow_stack_pointer) + return 0; + + mm = get_task_mm(p); + if (!mm) + return -EFAULT; + + mmap_read_lock(mm); + + addr = untagged_addr_remote(mm, args->shadow_stack_pointer); + page = get_user_page_vma_remote(mm, addr, FOLL_FORCE | FOLL_WRITE, + &vma); + if (IS_ERR(page)) { + ret = -EFAULT; + goto out; + } + + if (!(vma->vm_flags & VM_SHADOW_STACK) || + !(vma->vm_flags & VM_WRITE)) { + ret = -EFAULT; + goto out_page; + } + + ret = arch_shstk_validate_clone(p, vma, page, args); + +out_page: + put_page(page); +out: + mmap_read_unlock(mm); + mmput(mm); + return ret; +} + /* * This creates a new process as a copy of the old one, * but does not actually start it yet. @@ -2402,6 +2447,9 @@ __latent_entropy struct task_struct *copy_process( if (retval) goto bad_fork_cleanup_namespaces; retval = copy_thread(p, args); + if (retval) + goto bad_fork_cleanup_io; + retval = shstk_validate_clone(p, args); if (retval) goto bad_fork_cleanup_io;
@@ -2965,7 +3013,9 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs, CLONE_ARGS_SIZE_VER1); BUILD_BUG_ON(offsetofend(struct clone_args, cgroup) != CLONE_ARGS_SIZE_VER2); - BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER2); + BUILD_BUG_ON(offsetofend(struct clone_args, shadow_stack_pointer) != + CLONE_ARGS_SIZE_VER3); + BUILD_BUG_ON(sizeof(struct clone_args) != CLONE_ARGS_SIZE_VER3);
if (unlikely(usize > PAGE_SIZE)) return -E2BIG; @@ -2998,16 +3048,17 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs, return -EINVAL;
*kargs = (struct kernel_clone_args){ - .flags = args.flags, - .pidfd = u64_to_user_ptr(args.pidfd), - .child_tid = u64_to_user_ptr(args.child_tid), - .parent_tid = u64_to_user_ptr(args.parent_tid), - .exit_signal = args.exit_signal, - .stack = args.stack, - .stack_size = args.stack_size, - .tls = args.tls, - .set_tid_size = args.set_tid_size, - .cgroup = args.cgroup, + .flags = args.flags, + .pidfd = u64_to_user_ptr(args.pidfd), + .child_tid = u64_to_user_ptr(args.child_tid), + .parent_tid = u64_to_user_ptr(args.parent_tid), + .exit_signal = args.exit_signal, + .stack = args.stack, + .stack_size = args.stack_size, + .tls = args.tls, + .set_tid_size = args.set_tid_size, + .cgroup = args.cgroup, + .shadow_stack_pointer = args.shadow_stack_pointer, };
if (args.set_tid && @@ -3048,6 +3099,27 @@ static inline bool clone3_stack_valid(struct kernel_clone_args *kargs) return true; }
+/** + * clone3_shadow_stack_valid - check and prepare shadow stack + * @kargs: kernel clone args + * + * Verify that shadow stacks are only enabled if supported. + */ +static inline bool clone3_shadow_stack_valid(struct kernel_clone_args *kargs) +{ + if (!kargs->shadow_stack_pointer) + return true; + + if (!IS_ALIGNED(kargs->shadow_stack_pointer, sizeof(void *))) + return false; + + /* + * The architecture must check support on the specific + * machine. + */ + return IS_ENABLED(CONFIG_ARCH_HAS_USER_SHADOW_STACK); +} + static bool clone3_args_valid(struct kernel_clone_args *kargs) { /* Verify that no unknown flags are passed along. */ @@ -3070,7 +3142,7 @@ static bool clone3_args_valid(struct kernel_clone_args *kargs) kargs->exit_signal) return false;
- if (!clone3_stack_valid(kargs)) + if (!clone3_stack_valid(kargs) || !clone3_shadow_stack_valid(kargs)) return false;
return true;
Since there were widespread issues with output not being flushed, the kselftest framework was modified to explicitly set the output streams unbuffered in commit 58e2847ad2e6 ("selftests: line buffer test program's stdout"), so there is no need to explicitly flush in the clone3 tests.
Reviewed-by: Kees Cook kees@kernel.org Tested-by: Kees Cook kees@kernel.org Acked-by: Shuah Khan skhan@linuxfoundation.org Reviewed-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/clone3/clone3_selftests.h | 2 -- 1 file changed, 2 deletions(-)
diff --git a/tools/testing/selftests/clone3/clone3_selftests.h b/tools/testing/selftests/clone3/clone3_selftests.h index 3d2663fe50ba56f011629e4f2eb68a72bcceb087..39b5dcba663c30b9fc2542d9a0d2686105ce5761 100644 --- a/tools/testing/selftests/clone3/clone3_selftests.h +++ b/tools/testing/selftests/clone3/clone3_selftests.h @@ -35,8 +35,6 @@ struct __clone_args {
static pid_t sys_clone3(struct __clone_args *args, size_t size) { - fflush(stdout); - fflush(stderr); return syscall(__NR_clone3, args, size); }
In order to make it easier to add more configuration for the tests and more support for runtime detection of when tests can be run, pass the structure describing the tests into test_clone3() rather than picking the arguments out of it, and have that function do all the per-test work.
No functional change.
Reviewed-by: Kees Cook kees@kernel.org Tested-by: Kees Cook kees@kernel.org Acked-by: Shuah Khan skhan@linuxfoundation.org Reviewed-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/clone3/clone3.c | 77 ++++++++++++++++----------------- 1 file changed, 37 insertions(+), 40 deletions(-)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c index e61f07973ce5e27aff30047b35e03b1b51875c15..e066b201fa64eb17c55939b7cec18ac5d109613b 100644 --- a/tools/testing/selftests/clone3/clone3.c +++ b/tools/testing/selftests/clone3/clone3.c @@ -30,6 +30,19 @@ enum test_mode { CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG, };
+typedef bool (*filter_function)(void); +typedef size_t (*size_function)(void); + +struct test { + const char *name; + uint64_t flags; + size_t size; + size_function size_function; + int expected; + enum test_mode test_mode; + filter_function filter; +}; + static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) { struct __clone_args args = { @@ -109,30 +122,40 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) return 0; }
-static bool test_clone3(uint64_t flags, size_t size, int expected, - enum test_mode test_mode) +static void test_clone3(const struct test *test) { + size_t size; int ret;
+ if (test->filter && test->filter()) { + ksft_test_result_skip("%s\n", test->name); + return; + } + + if (test->size_function) + size = test->size_function(); + else + size = test->size; + + ksft_print_msg("Running test '%s'\n", test->name); + ksft_print_msg( "[%d] Trying clone3() with flags %#" PRIx64 " (size %zu)\n", - getpid(), flags, size); - ret = call_clone3(flags, size, test_mode); + getpid(), test->flags, size); + ret = call_clone3(test->flags, size, test->test_mode); ksft_print_msg("[%d] clone3() with flags says: %d expected %d\n", - getpid(), ret, expected); - if (ret != expected) { + getpid(), ret, test->expected); + if (ret != test->expected) { ksft_print_msg( "[%d] Result (%d) is different than expected (%d)\n", - getpid(), ret, expected); - return false; + getpid(), ret, test->expected); + ksft_test_result_fail("%s\n", test->name); + return; }
- return true; + ksft_test_result_pass("%s\n", test->name); }
-typedef bool (*filter_function)(void); -typedef size_t (*size_function)(void); - static bool not_root(void) { if (getuid() != 0) { @@ -160,16 +183,6 @@ static size_t page_size_plus_8(void) return getpagesize() + 8; }
-struct test { - const char *name; - uint64_t flags; - size_t size; - size_function size_function; - int expected; - enum test_mode test_mode; - filter_function filter; -}; - static const struct test tests[] = { { .name = "simple clone3()", @@ -319,24 +332,8 @@ int main(int argc, char *argv[]) ksft_set_plan(ARRAY_SIZE(tests)); test_clone3_supported();
- for (i = 0; i < ARRAY_SIZE(tests); i++) { - if (tests[i].filter && tests[i].filter()) { - ksft_test_result_skip("%s\n", tests[i].name); - continue; - } - - if (tests[i].size_function) - size = tests[i].size_function(); - else - size = tests[i].size; - - ksft_print_msg("Running test '%s'\n", tests[i].name); - - ksft_test_result(test_clone3(tests[i].flags, size, - tests[i].expected, - tests[i].test_mode), - "%s\n", tests[i].name); - } + for (i = 0; i < ARRAY_SIZE(tests); i++) + test_clone3(&tests[i]);
ksft_finished(); }
The clone_args structure is extensible, with the syscall passing in the length of the structure. Inside the kernel we use copy_struct_from_user() to read the struct, but this has the unfortunate side effect of silently accepting some overrun in the structure size provided the extra data is all zeros. This means that we can't discover the clone3() features that the running kernel supports by simply probing with various struct sizes. We need to handle this for the benefit of test systems which run newer kselftests on old kernels.
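As a rough sketch of what this means in practice, the snippet below builds on the sys_clone3() helper and struct __clone_args from clone3_selftests.h and uses the CLONE_ARGS_SIZE_VER3 size added by this series; the function name is just for the example. On a kernel without the extension the two calls behave differently even though they pass the same size:

static void demo_e2big_on_old_kernel(void)
{
	struct __clone_args args = {
		.exit_signal = SIGCHLD,
	};
	pid_t pid;

	/*
	 * Zero tail: copy_struct_from_user() accepts the extra length,
	 * so this behaves like an old style clone3() call and succeeds
	 * even on kernels that know nothing about shadow stacks.
	 */
	pid = sys_clone3(&args, CLONE_ARGS_SIZE_VER3);

	/*
	 * Any non-zero value in the unknown tail: an older kernel fails
	 * the call with E2BIG rather than the EINVAL a shadow stack
	 * test would expect, which is what the new flag tolerates.
	 */
	args.shadow_stack_pointer = 1;
	pid = sys_clone3(&args, CLONE_ARGS_SIZE_VER3);

	(void)pid;
}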
Add a flag which can be set on a test to indicate that clone3() may return -E2BIG due to the use of newer struct versions. Currently no tests need this, but it will become an issue for testing clone3() support for shadow stacks since that support is already present on x86.
Reviewed-by: Kees Cook kees@kernel.org Tested-by: Kees Cook kees@kernel.org Acked-by: Shuah Khan skhan@linuxfoundation.org Reviewed-by: Catalin Marinas catalin.marinas@arm.com Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/clone3/clone3.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c index e066b201fa64eb17c55939b7cec18ac5d109613b..5b8b7d640e70132242fc6939450669acd0c534f9 100644 --- a/tools/testing/selftests/clone3/clone3.c +++ b/tools/testing/selftests/clone3/clone3.c @@ -39,6 +39,7 @@ struct test { size_t size; size_function size_function; int expected; + bool e2big_valid; enum test_mode test_mode; filter_function filter; }; @@ -146,6 +147,11 @@ static void test_clone3(const struct test *test) ksft_print_msg("[%d] clone3() with flags says: %d expected %d\n", getpid(), ret, test->expected); if (ret != test->expected) { + if (test->e2big_valid && ret == -E2BIG) { + ksft_print_msg("Test reported -E2BIG\n"); + ksft_test_result_skip("%s\n", test->name); + return; + } ksft_print_msg( "[%d] Result (%d) is different than expected (%d)\n", getpid(), ret, test->expected);
Add basic test coverage for specifying the shadow stack for a newly created thread via clone3(), including coverage of the newly extended argument structure. We check that a user specified shadow stack can be provided, and that invalid combinations of parameters are rejected.
In order to facilitate testing on systems without userspace shadow stack support we manually enable shadow stacks on startup; this is architecture specific due to the use of an arch_prctl() on x86. Due to interactions with potential userspace locking of features we actually detect support for shadow stacks on the running system by attempting to allocate a shadow stack page during initialisation using map_shadow_stack(), warning if this succeeds when the enable failed.
In order to allow testing of user configured shadow stacks on architectures with that feature we need to ensure that we do not return from the function where the clone3() syscall is called in the child process, since doing so would trigger a shadow stack underflow. To do this we use inline assembly rather than the standard syscall wrapper to call clone3(). In order to avoid surprises we also use a raw syscall rather than the libc exit() function; this is probably overly cautious.
Acked-by: Shuah Khan skhan@linuxfoundation.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/clone3/clone3.c | 143 +++++++++++++++++++++- tools/testing/selftests/clone3/clone3_selftests.h | 63 ++++++++++ 2 files changed, 205 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/clone3/clone3.c b/tools/testing/selftests/clone3/clone3.c index 5b8b7d640e70132242fc6939450669acd0c534f9..b0378d7418cc8b00caebc6f92f58280bc04b0f80 100644 --- a/tools/testing/selftests/clone3/clone3.c +++ b/tools/testing/selftests/clone3/clone3.c @@ -3,6 +3,7 @@ /* Based on Christian Brauner's clone3() example */
#define _GNU_SOURCE +#include <asm/mman.h> #include <errno.h> #include <inttypes.h> #include <linux/types.h> @@ -11,6 +12,7 @@ #include <stdint.h> #include <stdio.h> #include <stdlib.h> +#include <sys/mman.h> #include <sys/syscall.h> #include <sys/types.h> #include <sys/un.h> @@ -19,8 +21,12 @@ #include <sched.h>
#include "../kselftest.h" +#include "../ksft_shstk.h" #include "clone3_selftests.h"
+static bool shadow_stack_supported; +static size_t max_supported_args_size; + enum test_mode { CLONE3_ARGS_NO_TEST, CLONE3_ARGS_ALL_0, @@ -28,6 +34,10 @@ enum test_mode { CLONE3_ARGS_INVAL_EXIT_SIGNAL_NEG, CLONE3_ARGS_INVAL_EXIT_SIGNAL_CSIG, CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG, + CLONE3_ARGS_SHADOW_STACK, + CLONE3_ARGS_SHADOW_STACK_MISALIGNED, + CLONE3_ARGS_SHADOW_STACK_NO_TOKEN, + CLONE3_ARGS_SHADOW_STACK_NORMAL_MEMORY, };
typedef bool (*filter_function)(void); @@ -44,6 +54,44 @@ struct test { filter_function filter; };
+ +/* + * We check for shadow stack support by attempting to use + * map_shadow_stack() since features may have been locked by the + * dynamic linker resulting in spurious errors when we attempt to + * enable on startup. We warn if the enable failed. + */ +static void test_shadow_stack_supported(void) +{ + long ret; + + ret = syscall(__NR_map_shadow_stack, 0, getpagesize(), 0); + if (ret == -1) { + ksft_print_msg("map_shadow_stack() not supported\n"); + } else if ((void *)ret == MAP_FAILED) { + ksft_print_msg("Failed to map shadow stack\n"); + } else { + ksft_print_msg("Shadow stack supportd\n"); + shadow_stack_supported = true; + + if (!shadow_stack_enabled) + ksft_print_msg("Mapped but did not enable shadow stack\n"); + } +} + +static void *get_shadow_stack_page(unsigned long flags) +{ + unsigned long long page; + + page = syscall(__NR_map_shadow_stack, 0, getpagesize(), flags); + if ((void *)page == MAP_FAILED) { + ksft_print_msg("map_shadow_stack() failed: %d\n", errno); + return 0; + } + + return (void *)page; +} + static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) { struct __clone_args args = { @@ -57,6 +105,7 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) } args_ext;
pid_t pid = -1; + void *p; int status;
memset(&args_ext, 0, sizeof(args_ext)); @@ -89,6 +138,26 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode) case CLONE3_ARGS_INVAL_EXIT_SIGNAL_NSIG: args.exit_signal = 0x00000000000000f0ULL; break; + case CLONE3_ARGS_SHADOW_STACK: + p = get_shadow_stack_page(SHADOW_STACK_SET_TOKEN); + p += getpagesize() - sizeof(void *); + args.shadow_stack_pointer = (unsigned long long)p; + break; + case CLONE3_ARGS_SHADOW_STACK_MISALIGNED: + p = get_shadow_stack_page(SHADOW_STACK_SET_TOKEN); + p += getpagesize() - sizeof(void *) - 1; + args.shadow_stack_pointer = (unsigned long long)p; + break; + case CLONE3_ARGS_SHADOW_STACK_NORMAL_MEMORY: + p = malloc(getpagesize()); + p += getpagesize() - sizeof(void *); + args.shadow_stack_pointer = (unsigned long long)p; + break; + case CLONE3_ARGS_SHADOW_STACK_NO_TOKEN: + p = get_shadow_stack_page(0); + p += getpagesize() - sizeof(void *); + args.shadow_stack_pointer = (unsigned long long)p; + break; }
memcpy(&args_ext.args, &args, sizeof(struct __clone_args)); @@ -102,7 +171,12 @@ static int call_clone3(uint64_t flags, size_t size, enum test_mode test_mode)
if (pid == 0) { ksft_print_msg("I am the child, my PID is %d\n", getpid()); - _exit(EXIT_SUCCESS); + /* + * Use a raw syscall to ensure we don't get issues + * with manually specified shadow stack and exit handlers. + */ + syscall(__NR_exit, EXIT_SUCCESS); + ksft_print_msg("CHILD FAILED TO EXIT PID is %d\n", getpid()); }
ksft_print_msg("I am the parent (%d). My child's pid is %d\n", @@ -184,6 +258,26 @@ static bool no_timenamespace(void) return true; }
+static bool have_shadow_stack(void) +{ + if (shadow_stack_supported) { + ksft_print_msg("Shadow stack supported\n"); + return true; + } + + return false; +} + +static bool no_shadow_stack(void) +{ + if (!shadow_stack_supported) { + ksft_print_msg("Shadow stack not supported\n"); + return true; + } + + return false; +} + static size_t page_size_plus_8(void) { return getpagesize() + 8; @@ -327,6 +421,50 @@ static const struct test tests[] = { .expected = -EINVAL, .test_mode = CLONE3_ARGS_NO_TEST, }, + { + .name = "Shadow stack on system with shadow stack", + .size = 0, + .expected = 0, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK, + .filter = no_shadow_stack, + }, + { + .name = "Shadow stack with misaligned address", + .flags = CLONE_VM, + .size = 0, + .expected = -EINVAL, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK_MISALIGNED, + .filter = no_shadow_stack, + }, + { + .name = "Shadow stack with normal memory", + .flags = CLONE_VM, + .size = 0, + .expected = -EFAULT, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK_NORMAL_MEMORY, + .filter = no_shadow_stack, + }, + { + .name = "Shadow stack with no token", + .flags = CLONE_VM, + .size = 0, + .expected = -EINVAL, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK_NO_TOKEN, + .filter = no_shadow_stack, + }, + { + .name = "Shadow stack on system without shadow stack", + .flags = CLONE_VM, + .size = 0, + .expected = -EINVAL, + .e2big_valid = true, + .test_mode = CLONE3_ARGS_SHADOW_STACK, + .filter = have_shadow_stack, + }, };
int main(int argc, char *argv[]) @@ -334,9 +472,12 @@ int main(int argc, char *argv[]) size_t size; int i;
+ enable_shadow_stack(); + ksft_print_header(); ksft_set_plan(ARRAY_SIZE(tests)); test_clone3_supported(); + test_shadow_stack_supported();
for (i = 0; i < ARRAY_SIZE(tests); i++) test_clone3(&tests[i]); diff --git a/tools/testing/selftests/clone3/clone3_selftests.h b/tools/testing/selftests/clone3/clone3_selftests.h index 39b5dcba663c30b9fc2542d9a0d2686105ce5761..26ff1554408a59af26bd708dc9c852210e370828 100644 --- a/tools/testing/selftests/clone3/clone3_selftests.h +++ b/tools/testing/selftests/clone3/clone3_selftests.h @@ -31,12 +31,75 @@ struct __clone_args { __aligned_u64 set_tid; __aligned_u64 set_tid_size; __aligned_u64 cgroup; +#ifndef CLONE_ARGS_SIZE_VER2 +#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */ +#endif + __aligned_u64 shadow_stack_pointer; +#ifndef CLONE_ARGS_SIZE_VER3 +#define CLONE_ARGS_SIZE_VER3 96 /* sizeof fourth published struct */ +#endif };
+/* + * For architectures with shadow stack support we need to be + * absolutely sure that the clone3() syscall will be inline and not a + * function call so we open code. + */ +#ifdef __x86_64__ +static pid_t __always_inline sys_clone3(struct __clone_args *args, size_t size) +{ + long ret; + register long _num __asm__ ("rax") = __NR_clone3; + register long _args __asm__ ("rdi") = (long)(args); + register long _size __asm__ ("rsi") = (long)(size); + + __asm__ volatile ( + "syscall\n" + : "=a"(ret) + : "r"(_args), "r"(_size), + "0"(_num) + : "rcx", "r11", "memory", "cc" + ); + + if (ret < 0) { + errno = -ret; + return -1; + } + + return ret; +} +#elif defined(__aarch64__) +static pid_t __always_inline sys_clone3(struct __clone_args *args, size_t size) +{ + register long _num __asm__ ("x8") = __NR_clone3; + register long _args __asm__ ("x0") = (long)(args); + register long _size __asm__ ("x1") = (long)(size); + register long arg2 __asm__ ("x2") = 0; + register long arg3 __asm__ ("x3") = 0; + register long arg4 __asm__ ("x4") = 0; + + __asm__ volatile ( + "svc #0\n" + : "=r"(_args) + : "r"(_args), "r"(_size), + "r"(_num), "r"(arg2), + "r"(arg3), "r"(arg4) + : "memory", "cc" + ); + + if ((int)_args < 0) { + errno = -((int)_args); + return -1; + } + + return _args; +} +#else static pid_t sys_clone3(struct __clone_args *args, size_t size) { return syscall(__NR_clone3, args, size); } +#endif
static inline void test_clone3_supported(void) {