The arm64 Guarded Control Stack (GCS) feature provides support for hardware-protected stacks of return addresses, intended to provide hardening against return oriented programming (ROP) attacks and to make it easier to gather call stacks for applications such as profiling.
When GCS is active a secondary stack called the Guarded Control Stack is maintained, protected with a memory attribute which means that it can only be written with specific GCS operations. The current GCS pointer cannot be directly written by userspace. When a BL is executed the value stored in LR is also pushed onto the GCS, and when a RET is executed the top of the GCS is popped and compared with LR, with a fault raised if the values do not match. GCS operations may only be performed on GCS pages; a data abort is generated if they are attempted elsewhere.
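For illustration, the checking behaviour can be modelled roughly in C as below. This is a sketch of the architectural semantics only, not kernel code, and raise_gcs_fault() is a stand-in for the hardware exception:

#include <stdint.h>

static uint64_t *gcspr;			/* models GCSPR_EL0 */

static void raise_gcs_fault(void);	/* stand-in for the GCS exception */

/* BL: set LR as normal and also push the return address onto the GCS */
static void model_bl(uint64_t *lr, uint64_t return_address)
{
	*lr = return_address;
	*--gcspr = return_address;	/* the GCS grows towards lower addresses */
}

/* RET: pop the top of the GCS and compare it with LR, fault on mismatch */
static void model_ret(uint64_t lr)
{
	if (*gcspr++ != lr)
		raise_gcs_fault();
}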
The combination of hardware enforcement and lack of extra instructions in the function entry and exit paths should result in something which has less overhead and is more difficult to attack than a purely software implementation like clang's shadow stacks.
This series implements support for use of GCS by userspace, along with support for use of GCS within KVM guests. It does not enable use of GCS by either EL1 or EL2; this will be implemented separately. Executables are started without GCS and must use a prctl() to enable it; it is expected that this will be done very early in application execution by the dynamic linker or other startup code. For dynamic linking this will be done after checking that everything in the executable is marked as GCS compatible.
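As a sketch of that early-enable step using the prctl() interface added later in this series (constants duplicated here for illustration in case libc headers lack them; error handling minimal):

#include <stdlib.h>
#include <sys/prctl.h>

#ifndef PR_SET_SHADOW_STACK_STATUS
#define PR_SET_SHADOW_STACK_STATUS	75
#define PR_SHADOW_STACK_ENABLE		(1UL << 0)
#endif

int main(void)
{
	/*
	 * Enable GCS for this thread, the kernel allocates a GCS as a
	 * side effect.  Real startup code would first check HWCAP2_GCS
	 * and that all loaded objects are marked GCS compatible.
	 */
	if (prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE,
		  0, 0, 0))
		exit(1);

	/*
	 * The newly allocated GCS is empty so returning from the
	 * function that enabled GCS would fail the return check,
	 * leave via exit() instead.
	 */
	exit(0);
}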
x86 has an equivalent feature called shadow stacks; this series depends on the x86 patches for generic memory management support for the new guarded/shadow stack page type and shares APIs as much as possible. As there has been extensive discussion with the wider community around the ABI for shadow stacks I have, as far as practical, kept implementation decisions close to those for x86, anticipating that review would lead to similar conclusions in the absence of strong reasons for divergence.
The main divergence I am conscious of is that x86 allows shadow stack to be enabled and disabled repeatedly, freeing the shadow stack for the thread whenever disabled, while this implementation keeps the GCS allocated after disable but refuses to reenable it. This avoids races with anything actively walking the GCS during a disable; we anticipate that some systems will wish to disable GCS at runtime but are not aware of any demand for subsequently reenabling it.
x86 uses an arch_prctl() to manage enable and disable. Since only x86 and S/390 use arch_prctl(), a generic prctl() was proposed[1] as part of a patch set for the equivalent RISC-V Zicfiss feature; I initially adopted it fairly directly but it has been revised quite a bit following review feedback.
We currently maintain the x86 pattern of implicitly allocating a shadow stack for threads started with shadow stack enabled; there has been some discussion of removing this support and requiring the use of clone3() with explicit allocation of shadow stacks instead (see the sketch below). I have no strong feelings either way: implicit allocation is not really consistent with anything else we do and creates the potential for errors around thread exit, but on the other hand it is existing ABI on x86 and minimises the changes needed in userspace code.
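To make the clone3() alternative concrete, a sketch of what explicit allocation might look like follows. The shadow_stack/shadow_stack_size fields are taken from my reading of the clone3() series this depends on and should be treated as assumptions rather than settled ABI, as should the need for a token on the new stack:

#define _GNU_SOURCE
#include <linux/types.h>
#include <sched.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453
#endif
#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN (1ULL << 0)
#endif

/* Locally defined clone_args, as the clone3 selftests do; the last two
 * fields are assumed from the clone3() shadow stack series. */
struct sketch_clone_args {
	__aligned_u64 flags;
	__aligned_u64 pidfd;
	__aligned_u64 child_tid;
	__aligned_u64 parent_tid;
	__aligned_u64 exit_signal;
	__aligned_u64 stack;
	__aligned_u64 stack_size;
	__aligned_u64 tls;
	__aligned_u64 set_tid;
	__aligned_u64 set_tid_size;
	__aligned_u64 cgroup;
	__aligned_u64 shadow_stack;		/* assumed field name */
	__aligned_u64 shadow_stack_size;	/* assumed field name */
};

static long thread_with_explicit_gcs(void *stack, size_t stack_size)
{
	/* Allocate the new thread's GCS explicitly, with a token so it
	 * is a valid stack for the child to start on. */
	long gcs = syscall(__NR_map_shadow_stack, 0, 4096,
			   SHADOW_STACK_SET_TOKEN);
	struct sketch_clone_args args = {
		.flags		   = CLONE_VM,
		.stack		   = (__u64)(unsigned long)stack,
		.stack_size	   = stack_size,
		.shadow_stack	   = (__u64)gcs,
		.shadow_stack_size = 4096,
	};

	if (gcs == -1)
		return -1;

	return syscall(SYS_clone3, &args, sizeof(args));
}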
glibc and bionic changes using this ABI have been implemented and tested. Headless Android systems have been validated, Ross Burton has used this code to bring up a Yocto system with GCS enabled as standard, and a test implementation of V8 support has also been done.
There is an open issue with support for CRIU; on x86 this required the ability to set the GCS mode via ptrace. This series supports configuring mode bits other than enable/disable via ptrace, but it needs to be confirmed whether this is sufficient.
The series depends on support for shadow stacks in clone3(), that series includes the addition of ARCH_HAS_USER_SHADOW_STACK.
https://lore.kernel.org/r/20240731-clone3-shadow-stack-v7-0-a9532eebfb1d@ker...
You can see a branch with the full set of dependencies against Linus' tree at:
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/misc.git arm64-gcs
[1] https://lore.kernel.org/lkml/20230213045351.3945824-1-debug@rivosinc.com/
Signed-off-by: Mark Brown broonie@kernel.org
---
Changes in v10:
- Fix issues with THP.
- Tighten up requirements for initialising GCSCR*.
- Only generate GCS signal frames for threads using GCS.
- Only context switch EL1 GCS registers if S1PIE is enabled.
- Move context switch of GCSCRE0_EL1 to EL0 context switch.
- Make GCS registers unconditionally visible to userspace.
- Use FHU infrastructure.
- Don't change writability of ID_AA64PFR1_EL1 for KVM.
- Remove unused arguments from alloc_gcs().
- Typo fixes.
- Link to v9: https://lore.kernel.org/r/20240625-arm64-gcs-v9-0-0f634469b8f0@kernel.org

Changes in v9:
- Rebase onto v6.10-rc3.
- Restructure and clarify memory management fault handling.
- Fix up basic-gcs for the latest clone3() changes.
- Convert to newly merged KVM ID register based feature configuration.
- Fixes for NV traps.
- Link to v8: https://lore.kernel.org/r/20240203-arm64-gcs-v8-0-c9fec77673ef@kernel.org

Changes in v8:
- Invalidate signal cap token on stack when consuming.
- Typo and other trivial fixes.
- Don't try to use process_vm_write() on GCS, it intentionally does not work.
- Fix leak of thread GCSs.
- Rebase onto latest clone3() series.
- Link to v7: https://lore.kernel.org/r/20231122-arm64-gcs-v7-0-201c483bd775@kernel.org

Changes in v7:
- Rebase onto v6.7-rc2 via the clone3() patch series.
- Change the token used to cap the stack during signal handling to be
  compatible with GCSPOPM.
- Fix flags for new page types.
- Fold in support for clone3().
- Replace copy_to_user_gcs() with put_user_gcs().
- Link to v6: https://lore.kernel.org/r/20231009-arm64-gcs-v6-0-78e55deaa4dd@kernel.org

Changes in v6:
- Rebase onto v6.6-rc3.
- Add some more gcsb_dsync() barriers following spec clarifications.
- Due to ongoing discussion around clone()/clone3() I've not updated
  anything there, the behaviour is the same as on previous versions.
- Link to v5: https://lore.kernel.org/r/20230822-arm64-gcs-v5-0-9ef181dd6324@kernel.org

Changes in v5:
- Don't map any permissions for user GCSs, we always use EL0 accessors
  or use a separate mapping of the page.
- Reduce the standard size of the GCS to RLIMIT_STACK/2.
- Enforce a PAGE_SIZE alignment requirement on map_shadow_stack().
- Clarifications and fixes to documentation.
- More tests.
- Link to v4: https://lore.kernel.org/r/20230807-arm64-gcs-v4-0-68cfa37f9069@kernel.org

Changes in v4:
- Implement flags for map_shadow_stack() allowing the cap and end of
  stack marker to be enabled independently or not at all.
- Relax size and alignment requirements for map_shadow_stack().
- Add more blurb explaining the advantages of hardware enforcement.
- Link to v3: https://lore.kernel.org/r/20230731-arm64-gcs-v3-0-cddf9f980d98@kernel.org

Changes in v3:
- Rebase onto v6.5-rc4.
- Add a GCS barrier on context switch.
- Add a GCS stress test.
- Link to v2: https://lore.kernel.org/r/20230724-arm64-gcs-v2-0-dc2c1d44c2eb@kernel.org

Changes in v2:
- Rebase onto v6.5-rc3.
- Rework prctl() interface to allow each bit to be locked independently.
- map_shadow_stack() now places the cap token based on the size
  requested by the caller not the actual space allocated.
- Mode changes other than enable via ptrace are now supported.
- Expand test coverage.
- Various smaller fixes and adjustments.
- Link to v1: https://lore.kernel.org/r/20230716-arm64-gcs-v1-0-bf567f93bba6@kernel.org
---
Mark Brown (40):
      arm64/mm: Restructure arch_validate_flags() for extensibility
      prctl: arch-agnostic prctl for shadow stack
      mman: Add map_shadow_stack() flags
      arm64: Document boot requirements for Guarded Control Stacks
      arm64/gcs: Document the ABI for Guarded Control Stacks
      arm64/sysreg: Add definitions for architected GCS caps
      arm64/gcs: Add manual encodings of GCS instructions
      arm64/gcs: Provide put_user_gcs()
      arm64/gcs: Provide basic EL2 setup to allow GCS usage at EL0 and EL1
      arm64/cpufeature: Runtime detection of Guarded Control Stack (GCS)
      arm64/mm: Allocate PIE slots for EL0 guarded control stack
      mm: Define VM_SHADOW_STACK for arm64 when we support GCS
      arm64/mm: Map pages for guarded control stack
      KVM: arm64: Manage GCS access and registers for guests
      arm64/idreg: Add override for GCS
      arm64/hwcap: Add hwcap for GCS
      arm64/traps: Handle GCS exceptions
      arm64/mm: Handle GCS data aborts
      arm64/gcs: Context switch GCS state for EL0
      arm64/gcs: Ensure that new threads have a GCS
      arm64/gcs: Implement shadow stack prctl() interface
      arm64/mm: Implement map_shadow_stack()
      arm64/signal: Set up and restore the GCS context for signal handlers
      arm64/signal: Expose GCS state in signal frames
      arm64/ptrace: Expose GCS via ptrace and core files
      arm64: Add Kconfig for Guarded Control Stack (GCS)
      kselftest/arm64: Verify the GCS hwcap
      kselftest: Provide shadow stack enable helpers for arm64
      selftests/clone3: Enable arm64 shadow stack testing
      kselftest/arm64: Add GCS as a detected feature in the signal tests
      kselftest/arm64: Add framework support for GCS to signal handling tests
      kselftest/arm64: Allow signals tests to specify an expected si_code
      kselftest/arm64: Always run signals tests with GCS enabled
      kselftest/arm64: Add very basic GCS test program
      kselftest/arm64: Add a GCS test program built with the system libc
      kselftest/arm64: Add test coverage for GCS mode locking
      kselftest/arm64: Add GCS signal tests
      kselftest/arm64: Add a GCS stress test
      kselftest/arm64: Enable GCS for the FP stress tests
      KVM: selftests: arm64: Add GCS registers to get-reg-list
 Documentation/admin-guide/kernel-parameters.txt    |   3 +
 Documentation/arch/arm64/booting.rst               |  30 +
 Documentation/arch/arm64/elf_hwcaps.rst            |   2 +
 Documentation/arch/arm64/gcs.rst                   | 233 +++
 Documentation/arch/arm64/index.rst                 |   1 +
 Documentation/filesystems/proc.rst                 |   2 +-
 arch/arm64/Kconfig                                 |  20 +
 arch/arm64/include/asm/cpufeature.h                |   6 +
 arch/arm64/include/asm/el2_setup.h                 |  29 +
 arch/arm64/include/asm/esr.h                       |  28 +-
 arch/arm64/include/asm/exception.h                 |   2 +
 arch/arm64/include/asm/gcs.h                       | 107 +++
 arch/arm64/include/asm/hwcap.h                     |   1 +
 arch/arm64/include/asm/kvm_host.h                  |   8 +
 arch/arm64/include/asm/mman.h                      |  23 +-
 arch/arm64/include/asm/pgtable-prot.h              |  14 +-
 arch/arm64/include/asm/processor.h                 |   7 +
 arch/arm64/include/asm/sysreg.h                    |  20 +
 arch/arm64/include/asm/uaccess.h                   |  40 ++
 arch/arm64/include/asm/vncr_mapping.h              |   2 +
 arch/arm64/include/uapi/asm/hwcap.h                |   1 +
 arch/arm64/include/uapi/asm/ptrace.h               |   8 +
 arch/arm64/include/uapi/asm/sigcontext.h           |   9 +
 arch/arm64/kernel/cpufeature.c                     |  12 +
 arch/arm64/kernel/cpuinfo.c                        |   1 +
 arch/arm64/kernel/entry-common.c                   |  23 +
 arch/arm64/kernel/pi/idreg-override.c              |   2 +
 arch/arm64/kernel/process.c                        |  85 +++
 arch/arm64/kernel/ptrace.c                         |  59 ++
 arch/arm64/kernel/signal.c                         | 240 ++++-
 arch/arm64/kernel/traps.c                          |  11 +
 arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h         |  49 +-
 arch/arm64/kvm/sys_regs.c                          |  12 +
 arch/arm64/mm/Makefile                             |   1 +
 arch/arm64/mm/fault.c                              |  42 ++
 arch/arm64/mm/gcs.c                                | 324 +++++++++
 arch/arm64/mm/mmap.c                               |  10 +-
 arch/arm64/tools/cpucaps                           |   1 +
 arch/x86/include/uapi/asm/mman.h                   |   3 -
 include/linux/mm.h                                 |  16 +-
 include/uapi/asm-generic/mman.h                    |   4 +
 include/uapi/linux/elf.h                           |   1 +
 include/uapi/linux/prctl.h                         |  22 +
 kernel/sys.c                                       |  30 +
 tools/testing/selftests/arm64/Makefile             |   2 +-
 tools/testing/selftests/arm64/abi/hwcap.c          |  19 +
 tools/testing/selftests/arm64/fp/assembler.h       |  15 +
 tools/testing/selftests/arm64/fp/fpsimd-test.S     |   2 +
 tools/testing/selftests/arm64/fp/sve-test.S        |   2 +
 tools/testing/selftests/arm64/fp/za-test.S         |   2 +
 tools/testing/selftests/arm64/fp/zt-test.S         |   2 +
 tools/testing/selftests/arm64/gcs/.gitignore       |   5 +
 tools/testing/selftests/arm64/gcs/Makefile         |  24 +
 tools/testing/selftests/arm64/gcs/asm-offsets.h    |   0
 tools/testing/selftests/arm64/gcs/basic-gcs.c      | 357 ++++++++++
 tools/testing/selftests/arm64/gcs/gcs-locking.c    | 200 ++++++
 .../selftests/arm64/gcs/gcs-stress-thread.S        | 311 +++++++++
 tools/testing/selftests/arm64/gcs/gcs-stress.c     | 530 +++++++++++++++
 tools/testing/selftests/arm64/gcs/gcs-util.h       | 100 +++
 tools/testing/selftests/arm64/gcs/libc-gcs.c       | 736 +++++++++++++++++++++
 tools/testing/selftests/arm64/signal/.gitignore    |   1 +
 .../testing/selftests/arm64/signal/test_signals.c  |  17 +-
 .../testing/selftests/arm64/signal/test_signals.h  |   6 +
 .../selftests/arm64/signal/test_signals_utils.c    |  32 +-
 .../selftests/arm64/signal/test_signals_utils.h    |  39 ++
 .../arm64/signal/testcases/gcs_exception_fault.c   |  62 ++
 .../selftests/arm64/signal/testcases/gcs_frame.c   |  88 +++
 .../arm64/signal/testcases/gcs_write_fault.c       |  67 ++
 .../selftests/arm64/signal/testcases/testcases.c   |   7 +
 .../selftests/arm64/signal/testcases/testcases.h   |   1 +
 tools/testing/selftests/clone3/clone3_selftests.h  |  26 +
 tools/testing/selftests/ksft_shstk.h               |  37 ++
 tools/testing/selftests/kvm/aarch64/get-reg-list.c |  28 +
 73 files changed, 4222 insertions(+), 40 deletions(-)
---
base-commit: 2d2c15fd64fcaba525a96e3198e4a4732680a49e
change-id: 20230303-arm64-gcs-e311ab0d8729
Best regards,
Currently arch_validate_flags() is written in a very non-extensible fashion, returning immediately if MTE is not supported and writing the MTE check as a direct return. Since we will want to add more checks for GCS, refactor the existing code to be more extensible; no functional change intended.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 arch/arm64/include/asm/mman.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
index 5966ee4a6154..c21849ffdd88 100644
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -52,11 +52,17 @@ static inline bool arch_validate_prot(unsigned long prot,
 
 static inline bool arch_validate_flags(unsigned long vm_flags)
 {
-	if (!system_supports_mte())
-		return true;
+	if (system_supports_mte()) {
+		/*
+		 * only allow VM_MTE if VM_MTE_ALLOWED has been set
+		 * previously
+		 */
+		if ((vm_flags & VM_MTE) && !(vm_flags & VM_MTE_ALLOWED))
+			return false;
+	}
+
+	return true;
 
-	/* only allow VM_MTE if VM_MTE_ALLOWED has been set previously */
-	return !(vm_flags & VM_MTE) || (vm_flags & VM_MTE_ALLOWED);
 }
 
 #define arch_validate_flags(vm_flags) arch_validate_flags(vm_flags)
On Thu, Aug 01, 2024 at 01:06:28PM +0100, Mark Brown wrote:
Currently arch_validate_flags() is written in a very non-extensible fashion, returning immediately if MTE is not supported and writing the MTE check as a direct return. Since we will want to add more checks for GCS, refactor the existing code to be more extensible; no functional change intended.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
Three architectures (x86, aarch64, riscv) have announced support for shadow stacks with fairly similar functionality. While x86 is using arch_prctl() to control the functionality, neither arm64 nor riscv uses that interface, so this patch adds arch-agnostic prctl() support to get and set the status of shadow stacks and to lock the current configuration to prevent further changes, with support for turning individual subfeatures on and off so applications can limit their exposure to features that they do not need. The features are:
- PR_SHADOW_STACK_ENABLE: Tracking and enforcement of shadow stacks, including allocation of a shadow stack if one is not already allocated. - PR_SHADOW_STACK_WRITE: Writes to specific addresses in the shadow stack. - PR_SHADOW_STACK_PUSH: Push additional values onto the shadow stack.
These features are expected to be inherited by new threads and cleared on exec(); unknown features should be rejected for enable but accepted for locking (in order to allow for future proofing).
This is based on a patch originally written by Deepak Gupta but modified fairly heavily: support for indirect landing pads is removed, additional modes are added, and the locking interface is reworked. The set status prctl() is also reworked to just set flags; if setting/reading the shadow stack pointer is required this could be a separate prctl.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 include/linux/mm.h         |  4 ++++
 include/uapi/linux/prctl.h | 22 ++++++++++++++++++++++
 kernel/sys.c               | 30 ++++++++++++++++++++++++++++++
 3 files changed, 56 insertions(+)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3357625c1db3..96faf26b6083 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4201,4 +4201,8 @@ void vma_pgtable_walk_end(struct vm_area_struct *vma);
 
 int reserve_mem_find_by_name(const char *name, phys_addr_t *start, phys_addr_t *size);
 
+int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status);
+int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status);
+int arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status);
+
 #endif /* _LINUX_MM_H */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 35791791a879..557a3d2ac1d4 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -328,4 +328,26 @@ struct prctl_mm_map {
 # define PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC	0x10 /* Clear the aspect on exec */
 # define PR_PPC_DEXCR_CTRL_MASK		0x1f
 
+/*
+ * Get the current shadow stack configuration for the current thread,
+ * this will be the value configured via PR_SET_SHADOW_STACK_STATUS.
+ */
+#define PR_GET_SHADOW_STACK_STATUS	74
+
+/*
+ * Set the current shadow stack configuration.  Enabling the shadow
+ * stack will cause a shadow stack to be allocated for the thread.
+ */
+#define PR_SET_SHADOW_STACK_STATUS	75
+# define PR_SHADOW_STACK_ENABLE		(1UL << 0)
+# define PR_SHADOW_STACK_WRITE		(1UL << 1)
+# define PR_SHADOW_STACK_PUSH		(1UL << 2)
+
+/*
+ * Prevent further changes to the specified shadow stack
+ * configuration.  All bits may be locked via this call, including
+ * undefined bits.
+ */
+#define PR_LOCK_SHADOW_STACK_STATUS	76
+
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index 3a2df1bd9f64..7e0c10e867cf 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2324,6 +2324,21 @@ int __weak arch_prctl_spec_ctrl_set(struct task_struct *t, unsigned long which,
 	return -EINVAL;
 }
 
+int __weak arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status)
+{
+	return -EINVAL;
+}
+
+int __weak arch_set_shadow_stack_status(struct task_struct *t, unsigned long status)
+{
+	return -EINVAL;
+}
+
+int __weak arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status)
+{
+	return -EINVAL;
+}
+
 #define PR_IO_FLUSHER (PF_MEMALLOC_NOIO | PF_LOCAL_THROTTLE)
 
 #ifdef CONFIG_ANON_VMA_NAME
@@ -2782,6 +2797,21 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 	case PR_RISCV_SET_ICACHE_FLUSH_CTX:
 		error = RISCV_SET_ICACHE_FLUSH_CTX(arg2, arg3);
 		break;
+	case PR_GET_SHADOW_STACK_STATUS:
+		if (arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = arch_get_shadow_stack_status(me, (unsigned long __user *) arg2);
+		break;
+	case PR_SET_SHADOW_STACK_STATUS:
+		if (arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = arch_set_shadow_stack_status(me, arg2);
+		break;
+	case PR_LOCK_SHADOW_STACK_STATUS:
+		if (arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = arch_lock_shadow_stack_status(me, arg2);
+		break;
 	default:
 		error = -EINVAL;
 		break;
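As a sketch of how the interface added above is expected to be used from userspace, assuming the constants are not yet in installed headers:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_SHADOW_STACK_STATUS
#define PR_GET_SHADOW_STACK_STATUS	74
#define PR_SET_SHADOW_STACK_STATUS	75
#define PR_LOCK_SHADOW_STACK_STATUS	76
#define PR_SHADOW_STACK_ENABLE		(1UL << 0)
#endif

static void enable_and_lock_shadow_stack(void)
{
	unsigned long status;

	/* Enable tracking and enforcement only, no write or push */
	if (prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE,
		  0, 0, 0))
		perror("set");

	/* Lock every bit, including currently undefined ones, so the
	 * configuration cannot be loosened later. */
	if (prctl(PR_LOCK_SHADOW_STACK_STATUS, ~0UL, 0, 0, 0))
		perror("lock");

	/* Read back the configuration that is now in force */
	if (prctl(PR_GET_SHADOW_STACK_STATUS, &status, 0, 0, 0))
		perror("get");
}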
On Thu, Aug 01, 2024 at 01:06:29PM +0100, Mark Brown wrote:
Three architectures (x86, aarch64, riscv) have announced support for shadow stacks with fairly similar functionality. While x86 is using arch_prctl() to control the functionality, neither arm64 nor riscv uses that interface, so this patch adds arch-agnostic prctl() support to get and set the status of shadow stacks and to lock the current configuration to prevent further changes, with support for turning individual subfeatures on and off so applications can limit their exposure to features that they do not need. The features are:
- PR_SHADOW_STACK_ENABLE: Tracking and enforcement of shadow stacks, including allocation of a shadow stack if one is not already allocated.
- PR_SHADOW_STACK_WRITE: Writes to specific addresses in the shadow stack.
- PR_SHADOW_STACK_PUSH: Push additional values onto the shadow stack.
These features are expected to be inherited by new threads and cleared on exec(); unknown features should be rejected for enable but accepted for locking (in order to allow for future proofing).
This is based on a patch originally written by Deepak Gupta but modified fairly heavily: support for indirect landing pads is removed, additional modes are added, and the locking interface is reworked. The set status prctl() is also reworked to just set flags; if setting/reading the shadow stack pointer is required this could be a separate prctl.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
In preparation for adding arm64 GCS support, make the map_shadow_stack() SHADOW_STACK_SET_TOKEN flag generic and add _SET_MARKER. The existing flag indicates that a token usable for stack switch should be added to the top of the newly mapped GCS region while the new flag indicates that a top of stack marker suitable for use by unwinders should be added above that.
For arm64 the top of stack marker is all bits 0.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 arch/x86/include/uapi/asm/mman.h | 3 ---
 include/uapi/asm-generic/mman.h  | 4 ++++
 2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h
index 46cdc941f958..ac1e6277212b 100644
--- a/arch/x86/include/uapi/asm/mman.h
+++ b/arch/x86/include/uapi/asm/mman.h
@@ -5,9 +5,6 @@
 #define MAP_32BIT	0x40		/* only give out 32bit addresses */
 #define MAP_ABOVE4G	0x80		/* only map above 4GB */
 
-/* Flags for map_shadow_stack(2) */
-#define SHADOW_STACK_SET_TOKEN	(1ULL << 0)	/* Set up a restore token in the shadow stack */
-
 #include <asm-generic/mman.h>
 
 #endif /* _ASM_X86_MMAN_H */
diff --git a/include/uapi/asm-generic/mman.h b/include/uapi/asm-generic/mman.h
index 57e8195d0b53..d6a282687af5 100644
--- a/include/uapi/asm-generic/mman.h
+++ b/include/uapi/asm-generic/mman.h
@@ -19,4 +19,8 @@
 #define MCL_FUTURE	2		/* lock all future mappings */
 #define MCL_ONFAULT	4		/* lock all pages that are faulted in */
 
+#define SHADOW_STACK_SET_TOKEN (1ULL << 0)     /* Set up a restore token in the shadow stack */
+#define SHADOW_STACK_SET_MARKER (1ULL << 1)    /* Set up a top of stack marker in the shadow stack */
+
+
 #endif /* __ASM_GENERIC_MMAN_H */
On Thu, Aug 01, 2024 at 01:06:30PM +0100, Mark Brown wrote:
In preparation for adding arm64 GCS support make the map_shadow_stack() SHADOW_STACK_SET_TOKEN flag generic and add _SET_MARKER. The existing flag indicats that a token usable for stack switch should be added to
Nit: indicates
diff --git a/include/uapi/asm-generic/mman.h b/include/uapi/asm-generic/mman.h
index 57e8195d0b53..d6a282687af5 100644
--- a/include/uapi/asm-generic/mman.h
+++ b/include/uapi/asm-generic/mman.h
@@ -19,4 +19,8 @@
 #define MCL_FUTURE	2		/* lock all future mappings */
 #define MCL_ONFAULT	4		/* lock all pages that are faulted in */
 
+#define SHADOW_STACK_SET_TOKEN (1ULL << 0)     /* Set up a restore token in the shadow stack */
+#define SHADOW_STACK_SET_MARKER (1ULL << 1)    /* Set up a top of stack merker in the shadow stack */
s/merker/marker/
Otherwise:
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
FEAT_GCS introduces a number of new system registers; we require that access to these registers is not trapped when we detect the feature. Since any function call instruction will be checked if GCS is enabled, we also require that the feature be specifically disabled on entry to the kernel.
Signed-off-by: Mark Brown broonie@kernel.org
---
 Documentation/arch/arm64/booting.rst | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
diff --git a/Documentation/arch/arm64/booting.rst b/Documentation/arch/arm64/booting.rst
index b57776a68f15..f5b8e4bb9653 100644
--- a/Documentation/arch/arm64/booting.rst
+++ b/Documentation/arch/arm64/booting.rst
@@ -411,6 +411,36 @@ Before jumping into the kernel, the following conditions must be met:
 
     - HFGRWR_EL2.nPIRE0_EL1 (bit 57) must be initialised to 0b1.
 
+ - For CPUs with Guarded Control Stacks (FEAT_GCS):
+
+  - If EL3 is present:
+
+    - SCR_EL3.GCSEn (bit 39) must be initialised to 0b1.
+
+  - If EL2 is present:
+
+    - GCSCR_EL2 must be initialised to 0.
+
+  - If the kernel is entered at EL1 and EL2 is present:
+
+    - GCSCR_EL1 must be initialised to 0.
+
+    - GCSCRE0_EL1 must be initialised to 0.
+
+    - HFGITR_EL2.nGCSEPP (bit 59) must be initialised to 0b1.
+
+    - HFGITR_EL2.nGCSSTR_EL1 (bit 58) must be initialised to 0b1.
+
+    - HFGITR_EL2.nGCSPUSHM_EL1 (bit 57) must be initialised to 0b1.
+
+    - HFGRTR_EL2.nGCS_EL1 (bit 53) must be initialised to 0b1.
+
+    - HFGRTR_EL2.nGCS_EL0 (bit 52) must be initialised to 0b1.
+
+    - HFGWTR_EL2.nGCS_EL1 (bit 53) must be initialised to 0b1.
+
+    - HFGWTR_EL2.nGCS_EL0 (bit 52) must be initialised to 0b1.
+
 The requirements described above for CPU mode, caches, MMUs, architected
 timers, coherency and system registers apply to all CPUs.  All CPUs must
 enter the kernel in the same exception level.  Where the values documented
On Thu, Aug 01, 2024 at 01:06:31PM +0100, Mark Brown wrote:
- If EL2 is present:
- GCSCR_EL2 must be initialised to 0.
- If the kernel is entered at EL1 and EL2 is present:
- GCSCR_EL1 must be initialised to 0.
- GCSCRE0_EL1 must be initialised to 0.
Currently booting.rst doesn't list *_EL1 registers to be initialised when the kernel is entered at EL1, that would usually be the responsibility of EL1. The exception is some bits in SCTLR_EL1 around not entering with the MMU and caches enabled. But here I think it makes sense to add these GCS registers since if some random bits are set, they can affect kernels (and user apps) that don't have GCS support.
Don't we need HCRX_EL2.GCSEn to be set when entered at EL1?
On Thu, Aug 15, 2024 at 06:00:15PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:31PM +0100, Mark Brown wrote:
- If EL2 is present:
- GCSCR_EL2 must be initialised to 0.
- If the kernel is entered at EL1 and EL2 is present:
- GCSCR_EL1 must be initialised to 0.
- GCSCRE0_EL1 must be initialised to 0.
Currently booting.rst doesn't list *_EL1 registers to be initialised when the kernel is entered at EL1, that would usually be the responsibility of EL1. The exception is some bits in SCTLR_EL1 around not entering with the MMU and caches enabled. But here I think it makes sense to add these GCS registers since if some random bits are set, they can affect kernels (and user apps) that don't have GCS support.
Right, exactly - the trouble here is that if we enter EL1 with GCS enabled we aren't able to do function calls until we either disable GCS or configure the MMU and allocate a GCS. This means that all existing kernels which haven't heard of GCS require that GCS be disabled prior to starting, they'll just fault within a couple of instructions whenever they reach the EL for which GCS is enabled so it seems sensible to just require that this is set up. It is hard to envision a scenario in which it would be reasonable to start in a different configuration.
Now I think about it I should move those two to not depend on EL2 being present, that's just cut'n'paste.
Don't we need HCRX_EL2.GCSEn to be set when entered at EL1?
Yes, if we want GCS to do anything. I've added this.
Add some documentation of the userspace ABI for Guarded Control Stacks.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 Documentation/arch/arm64/gcs.rst   | 233 +++++++++++++++++++++++++++++++++++++
 Documentation/arch/arm64/index.rst |   1 +
 2 files changed, 234 insertions(+)
diff --git a/Documentation/arch/arm64/gcs.rst b/Documentation/arch/arm64/gcs.rst
new file mode 100644
index 000000000000..8b16394b4c29
--- /dev/null
+++ b/Documentation/arch/arm64/gcs.rst
@@ -0,0 +1,233 @@
+===============================================
+Guarded Control Stack support for AArch64 Linux
+===============================================
+
+This document outlines briefly the interface provided to userspace by Linux in
+order to support use of the ARM Guarded Control Stack (GCS) feature.
+
+This is an outline of the most important features and issues only and not
+intended to be exhaustive.
+
+
+
+1. General
+-----------
+
+* GCS is an architecture feature intended to provide greater protection
+  against return oriented programming (ROP) attacks and to simplify the
+  implementation of features that need to collect stack traces such as
+  profiling.
+
+* When GCS is enabled a separate guarded control stack is maintained by the
+  PE which is writeable only through specific GCS operations.  This
+  stores the call stack only, when a procedure call instruction is
+  performed the current PC is pushed onto the GCS and on RET the
+  address in the LR is verified against that on the top of the GCS.
+
+* When active the current GCS pointer is stored in the system register
+  GCSPR_EL0.  This is readable by userspace but can only be updated
+  via specific GCS instructions.
+
+* The architecture provides instructions for switching between guarded
+  control stacks with checks to ensure that the new stack is a valid
+  target for switching.
+
+* The functionality of GCS is similar to that provided by the x86 Shadow
+  Stack feature, due to sharing of userspace interfaces the ABI refers to
+  shadow stacks rather than GCS.
+
+* Support for GCS is reported to userspace via HWCAP2_GCS in the aux vector
+  AT_HWCAP2 entry.
+
+* GCS is enabled per thread.  While there is support for disabling GCS
+  at runtime this should be done with great care.
+
+* GCS memory access faults are reported as normal memory access faults.
+
+* GCS specific errors (those reported with EC 0x2d) will be reported as
+  SIGSEGV with a si_code of SEGV_CPERR (control protection error).
+
+* GCS is supported only for AArch64.
+
+* On systems where GCS is supported GCSPR_EL0 is always readable by EL0
+  regardless of the GCS configuration for the thread.
+
+* The architecture supports enabling GCS without verifying that return values
+  in LR match those in the GCS, the LR will be ignored.  This is not supported
+  by Linux.
+
+* EL0 GCS entries with bit 63 set are reserved for use, one such use is defined
+  below for signals and should be ignored when parsing the stack if not
+  understood.
+
+
+2. Enabling and disabling Guarded Control Stacks
+-------------------------------------------------
+
+* GCS is enabled and disabled for a thread via the PR_SET_SHADOW_STACK_STATUS
+  prctl(), this takes a single flags argument specifying which GCS features
+  should be used.
+
+* When set PR_SHADOW_STACK_ENABLE flag allocates a Guarded Control Stack
+  and enables GCS for the thread, enabling the functionality controlled by
+  GCSCRE0_EL1.{nTR, RVCHKEN, PCRSEL}.
+
+* When set the PR_SHADOW_STACK_PUSH flag enables the functionality controlled
+  by GCSCRE0_EL1.PUSHMEn, allowing explicit GCS pushes.
+
+* When set the PR_SHADOW_STACK_WRITE flag enables the functionality controlled
+  by GCSCRE0_EL1.STREn, allowing explicit stores to the Guarded Control Stack.
+
+* Any unknown flags will cause PR_SET_SHADOW_STACK_STATUS to return -EINVAL.
+
+* PR_LOCK_SHADOW_STACK_STATUS is passed a bitmask of features with the same
+  values as used for PR_SET_SHADOW_STACK_STATUS.  Any future changes to the
+  status of the specified GCS mode bits will be rejected.
+
+* PR_LOCK_SHADOW_STACK_STATUS allows any bit to be locked, this allows
+  userspace to prevent changes to any future features.
+
+* There is no support for a process to remove a lock that has been set for
+  it.
+
+* PR_SET_SHADOW_STACK_STATUS and PR_LOCK_SHADOW_STACK_STATUS affect only the
+  thread that called them, any other running threads will be unaffected.
+
+* New threads inherit the GCS configuration of the thread that created them.
+
+* GCS is disabled on exec().
+
+* The current GCS configuration for a thread may be read with the
+  PR_GET_SHADOW_STACK_STATUS prctl(), this returns the same flags that
+  are passed to PR_SET_SHADOW_STACK_STATUS.
+
+* If GCS is disabled for a thread after having previously been enabled then
+  the stack will remain allocated for the lifetime of the thread.  At present
+  any attempt to reenable GCS for the thread will be rejected, this may be
+  revisited in future.
+
+* It should be noted that since enabling GCS will result in GCS becoming
+  active immediately it is not normally possible to return from the function
+  that invoked the prctl() that enabled GCS.  It is expected that the normal
+  usage will be that GCS is enabled very early in execution of a program.
+
+
+
+3. Allocation of Guarded Control Stacks
+----------------------------------------
+
+* When GCS is enabled for a thread a new Guarded Control Stack will be
+  allocated for it of size RLIMIT_STACK or 2 gigabytes, whichever is
+  smaller.
+
+* When a new thread is created by a thread which has GCS enabled then a
+  new Guarded Control Stack will be allocated for the new thread with
+  half the size of the standard stack.
+
+* When a stack is allocated by enabling GCS or during thread creation then
+  the top 8 bytes of the stack will be initialised to 0 and GCSPR_EL0 will
+  be set to point to the address of this 0 value, this can be used to
+  detect the top of the stack.
+
+* Additional Guarded Control Stacks can be allocated using the
+  map_shadow_stack() system call.
+
+* Stacks allocated using map_shadow_stack() can optionally have an end of
+  stack marker and cap placed at the top of the stack.  If the flag
+  SHADOW_STACK_SET_TOKEN is specified a cap will be placed on the stack,
+  if SHADOW_STACK_SET_MARKER is not specified the cap will be the top 8
+  bytes of the stack and if it is specified then the cap will be the next
+  8 bytes.  While specifying just SHADOW_STACK_SET_MARKER by itself is
+  valid since the marker is all bits 0 it has no observable effect.
+
+* Stacks allocated using map_shadow_stack() must have a size which is a
+  multiple of 8 bytes larger than 8 bytes and must be 8 bytes aligned.
+
+* An address can be specified to map_shadow_stack(), if one is provided then
+  it must be aligned to a page boundary.
+
+* When a thread is freed the Guarded Control Stack initially allocated for
+  that thread will be freed.  Note carefully that if the stack has been
+  switched this may not be the stack currently in use by the thread.
+
+
+4. Signal handling
+--------------------
+
+* A new signal frame record gcs_context encodes the current GCS mode and
+  pointer for the interrupted context on signal delivery.  This will always
+  be present on systems that support GCS.
+
+* The record contains a flag field which reports the current GCS configuration
+  for the interrupted context as PR_GET_SHADOW_STACK_STATUS would.
+
+* The signal handler is run with the same GCS configuration as the interrupted
+  context.
+
+* When GCS is enabled for the interrupted thread a signal handling specific
+  GCS cap token will be written to the GCS, this is an architectural GCS cap
+  token with bit 63 set and the token type (bits 0..11) all clear.  The
+  GCSPR_EL0 reported in the signal frame will point to this cap token.
+
+* The signal handler will use the same GCS as the interrupted context.
+
+* When GCS is enabled on signal entry a frame with the address of the signal
+  return handler will be pushed onto the GCS, allowing return from the signal
+  handler via RET as normal.  This will not be reported in the gcs_context in
+  the signal frame.
+
+
+5. Signal return
+-----------------
+
+When returning from a signal handler:
+
+* If there is a gcs_context record in the signal frame then the GCS flags
+  and GCSPR_EL0 will be restored from that context prior to further
+  validation.
+
+* If there is no gcs_context record in the signal frame then the GCS
+  configuration will be unchanged.
+
+* If GCS is enabled on return from a signal handler then GCSPR_EL0 must
+  point to a valid GCS signal cap record, this will be popped from the
+  GCS prior to signal return.
+
+* If the GCS configuration is locked when returning from a signal then any
+  attempt to change the GCS configuration will be treated as an error.  This
+  is true even if GCS was not enabled prior to signal entry.
+
+* GCS may be disabled via signal return but any attempt to enable GCS via
+  signal return will be rejected.
+
+
+6. ptrace extensions
+---------------------
+
+* A new regset NT_ARM_GCS is defined for use with PTRACE_GETREGSET and
+  PTRACE_SETREGSET.
+
+* Due to the complexity surrounding allocation and deallocation of stacks and
+  lack of practical application it is not possible to enable GCS via ptrace.
+  GCS may be disabled via the ptrace interface.
+
+* Other GCS modes may be configured via ptrace.
+
+* Configuration via ptrace ignores locking of GCS mode bits.
+
+
+7. ELF coredump extensions
+---------------------------
+
+* NT_ARM_GCS notes will be added to each coredump for each thread of the
+  dumped process.  The contents will be equivalent to the data that would
+  have been read if a PTRACE_GETREGSET of the corresponding type were
+  executed for each thread when the coredump was generated.
+
+
+
+8. /proc extensions
+--------------------
+
+* Guarded Control Stack pages will include "ss" in their VmFlags in
+  /proc/<pid>/smaps.
diff --git a/Documentation/arch/arm64/index.rst b/Documentation/arch/arm64/index.rst
index 78544de0a8a9..056f6a739d25 100644
--- a/Documentation/arch/arm64/index.rst
+++ b/Documentation/arch/arm64/index.rst
@@ -15,6 +15,7 @@ ARM64 Architecture
     cpu-feature-registers
     cpu-hotplug
     elf_hwcaps
+    gcs
     hugetlbpage
     kdump
     legacy_instructions
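As a sketch of the profiling use case mentioned in section 1, a thread with GCS enabled can walk its own stack with plain loads; the mrs encoding for GCSPR_EL0 (S3_3_C2_C5_1, to my reading of the architecture) is spelled out so no special assembler support is needed:

#include <stdint.h>
#include <stdio.h>

static uint64_t *current_gcspr(void)
{
	uint64_t ptr;

	/* GCSPR_EL0 is always readable from EL0 on GCS capable systems */
	asm volatile("mrs %0, S3_3_C2_C5_1" : "=r" (ptr));
	return (uint64_t *)ptr;
}

static void print_call_stack(void)
{
	uint64_t *entry = current_gcspr();

	if (!entry)
		return;

	/* Walk towards the top of the stack, stopping at the 0 marker */
	for (; *entry; entry++) {
		/* Entries with bit 63 set (eg, signal caps) are not
		 * return addresses, skip them. */
		if (*entry & (1UL << 63))
			continue;

		printf("  %#llx\n", (unsigned long long)*entry);
	}
}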
On Thu, Aug 01, 2024 at 01:06:32PM +0100, Mark Brown wrote:
+1. General
+-----------
[...]
+* EL0 GCS entries with bit 63 set are reserved for use, one such use is defined
+  below for signals and should be ignored when parsing the stack if not
+  understood.

Maybe "reserved for specific uses". The proposed sentence feels like it's missing something.
[...]
+3. Allocation of Guarded Control Stacks
+----------------------------------------
+
+* When GCS is enabled for a thread a new Guarded Control Stack will be
+  allocated for it of size RLIMIT_STACK or 2 gigabytes, whichever is
+  smaller.
+
+* When a new thread is created by a thread which has GCS enabled then a
+  new Guarded Control Stack will be allocated for the new thread with
+  half the size of the standard stack.
Is the half size still the case? It also seems a bit inconsistent to have RLIMIT_STACK when GCS is enabled and half the stack size when a new thread is created.
[...]
+* When a thread is freed the Guarded Control Stack initially allocated for
+  that thread will be freed.  Note carefully that if the stack has been
+  switched this may not be the stack currently in use by the thread.
Is this true for shadow stacks explicitly allocated by the user with map_shadow_stack()?
+4. Signal handling
+--------------------
+
+* A new signal frame record gcs_context encodes the current GCS mode and
+  pointer for the interrupted context on signal delivery.  This will always
+  be present on systems that support GCS.
+
+* The record contains a flag field which reports the current GCS configuration
+  for the interrupted context as PR_GET_SHADOW_STACK_STATUS would.
+
+* The signal handler is run with the same GCS configuration as the interrupted
+  context.
+
+* When GCS is enabled for the interrupted thread a signal handling specific
+  GCS cap token will be written to the GCS, this is an architectural GCS cap
+  token with bit 63 set and the token type (bits 0..11) all clear.  The
+  GCSPR_EL0 reported in the signal frame will point to this cap token.
+
+* The signal handler will use the same GCS as the interrupted context.
I assume this is true even with sigaltstack. Not easy to have alternative shadow stack without additional ABI.
On Fri, Aug 16, 2024 at 12:09:01PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:32PM +0100, Mark Brown wrote:
+* EL0 GCS entries with bit 63 set are reserved for use, one such use is defined

Maybe "reserved for specific uses". The proposed sentence feels like it's missing something.
Actually we removed the usage of bit 63 so I'll just drop this.
+* When a new thread is created by a thread which has GCS enabled then a
+  new Guarded Control Stack will be allocated for the new thread with
+  half the size of the standard stack.
Is the half size still the case? It also seems a bit inconsistent to have RLIMIT_STACK when GCS is enabled and half the stack size when a new thread is created.
Yes, this predates the rebase onto clone3() - I'll update.
[...]
+* When a thread is freed the Guarded Control Stack initially allocated for
+  that thread will be freed.  Note carefully that if the stack has been
+  switched this may not be the stack currently in use by the thread.
Is this true for shadow stacks explicitly allocated by the user with map_shadow_stack()?
It is only true for the stacks allocated by the kernel; if we didn't allocate a stack we don't free it.
+* The signal handler will use the same GCS as the interrupted context.
I assume this is true even with sigaltstack. Not easy to have alternative shadow stack without additional ABI.
Yes.
The architecture defines a format for guarded control stack caps, used to mark the top of an unused GCS in order to limit the potential for exploitation via stack switching. Add definitions associated with these.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 arch/arm64/include/asm/sysreg.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 4a9ea103817e..b8d8718a7b8b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1077,6 +1077,26 @@
 #define POE_RXW		UL(0x7)
 #define POE_MASK	UL(0xf)
 
+/*
+ * Definitions for Guarded Control Stack
+ */
+
+#define GCS_CAP_ADDR_MASK		GENMASK(63, 12)
+#define GCS_CAP_ADDR_SHIFT		12
+#define GCS_CAP_ADDR_WIDTH		52
+#define GCS_CAP_ADDR(x)			FIELD_GET(GCS_CAP_ADDR_MASK, x)
+
+#define GCS_CAP_TOKEN_MASK		GENMASK(11, 0)
+#define GCS_CAP_TOKEN_SHIFT		0
+#define GCS_CAP_TOKEN_WIDTH		12
+#define GCS_CAP_TOKEN(x)		FIELD_GET(GCS_CAP_TOKEN_MASK, x)
+
+#define GCS_CAP_VALID_TOKEN		0x1
+#define GCS_CAP_IN_PROGRESS_TOKEN	0x5
+
+#define GCS_CAP(x)	((((unsigned long)x) & GCS_CAP_ADDR_MASK) | \
+					       GCS_CAP_VALID_TOKEN)
+
 #define ARM64_FEATURE_FIELD_BITS	4
 
 /* Defined for compatibility only, do not add new users. */
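As an illustration of how these definitions combine (a sketch in kernel style, not code from this series): the cap stored at the top of an unused stack encodes bits 63:12 of its own address plus a token type, so a validity check can be written as:

#include <linux/bitfield.h>
#include <linux/types.h>

/* Would 'val', read back from 'addr', be accepted as a valid cap there? */
static bool gcs_cap_valid(unsigned long val, unsigned long addr)
{
	/* Equivalent to checking GCS_CAP_ADDR() and GCS_CAP_TOKEN()
	 * against the address and GCS_CAP_VALID_TOKEN separately. */
	return val == GCS_CAP(addr);
}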
On Thu, Aug 01, 2024 at 01:06:33PM +0100, Mark Brown wrote:
The architecture defines a format for guarded control stack caps, used to mark the top of an unused GCS in order to limit the potential for exploitation via stack switching. Add definitions associated with these.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
Acked-by: Catalin Marinas catalin.marinas@arm.com
Define C callable functions for GCS instructions used by the kernel. In order to avoid ambitious toolchain requirements for GCS support these are manually encoded; this means we have fixed register numbers which will be a bit limiting for the compiler, but none of these should be used in sufficiently fast paths for this to be a problem.
Note that GCSSTTR is used to store to EL0.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 arch/arm64/include/asm/gcs.h     | 51 ++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/uaccess.h | 22 +++++++++++++++
 2 files changed, 73 insertions(+)
diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h
new file mode 100644
index 000000000000..7c5e95218db6
--- /dev/null
+++ b/arch/arm64/include/asm/gcs.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2023 ARM Ltd.
+ */
+#ifndef __ASM_GCS_H
+#define __ASM_GCS_H
+
+#include <asm/types.h>
+#include <asm/uaccess.h>
+
+static inline void gcsb_dsync(void)
+{
+	asm volatile(".inst 0xd503227f" : : : "memory");
+}
+
+static inline void gcsstr(u64 *addr, u64 val)
+{
+	register u64 *_addr __asm__ ("x0") = addr;
+	register long _val __asm__ ("x1") = val;
+
+	/* GCSSTTR x1, x0 */
+	asm volatile(
+		".inst 0xd91f1c01\n"
+		:
+		: "rZ" (_val), "r" (_addr)
+		: "memory");
+}
+
+static inline void gcsss1(u64 Xt)
+{
+	asm volatile (
+		"sys #3, C7, C7, #2, %0\n"
+		:
+		: "rZ" (Xt)
+		: "memory");
+}
+
+static inline u64 gcsss2(void)
+{
+	u64 Xt;
+
+	asm volatile(
+		"SYSL %0, #3, C7, C7, #3\n"
+		: "=r" (Xt)
+		:
+		: "memory");
+
+	return Xt;
+}
+
+#endif
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 28f665e0975a..6aba10e38d1c 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -502,4 +502,26 @@ static inline size_t probe_subpage_writeable(const char __user *uaddr,
 
 #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
 
+#ifdef CONFIG_ARM64_GCS
+
+static inline int gcssttr(unsigned long __user *addr, unsigned long val)
+{
+	register unsigned long __user *_addr __asm__ ("x0") = addr;
+	register unsigned long _val __asm__ ("x1") = val;
+	int err = 0;
+
+	/* GCSSTTR x1, x0 */
+	asm volatile(
+	"1: .inst 0xd91f1c01\n"
+	"2:\n"
+	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
+	: "+r" (err)
+	: "rZ" (_val), "r" (_addr)
+	: "memory");
+
+	return err;
+}
+
+#endif /* CONFIG_ARM64_GCS */
+
 #endif /* __ASM_UACCESS_H */
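For reference, my understanding of how the two stack switch helpers above pair up; this is a sketch only and the exact semantics of GCSSS1/GCSSS2 should be checked against the architecture specification:

#include <asm/gcs.h>

static u64 *gcs_switch_stack(u64 *new_cap)
{
	/* Consume the cap token on the new stack and switch to it */
	gcsss1((u64)new_cap);

	/* Complete the switch; the returned pointer is into the old
	 * stack, which is left capped as a valid switch target. */
	return (u64 *)gcsss2();
}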
On Thu, Aug 01, 2024 at 01:06:34PM +0100, Mark Brown wrote:
Define C callable functions for GCS instructions used by the kernel. In order to avoid ambitious toolchain requirements for GCS support these are manually encoded, this means we have fixed register numbers which will be a bit limiting for the compiler but none of these should be used in sufficiently fast paths for this to be a problem.
Note that GCSSTTR is used to store to EL0.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
Acked-by: Catalin Marinas catalin.marinas@arm.com
In order for EL1 to write to an EL0 GCS it must use the GCSSTTR instruction rather than a normal STTR. Provide a put_user_gcs() which does this.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 arch/arm64/include/asm/uaccess.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 6aba10e38d1c..ecdd47cf1d01 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -522,6 +522,24 @@ static inline int gcssttr(unsigned long __user *addr, unsigned long val)
 	return err;
 }
 
+static inline void put_user_gcs(unsigned long val, unsigned long __user *addr,
+				int *err)
+{
+	int ret;
+
+	if (!access_ok((char __user *)addr, sizeof(u64))) {
+		*err = -EFAULT;
+		return;
+	}
+
+	uaccess_ttbr0_enable();
+	ret = gcssttr(addr, val);
+	if (ret != 0)
+		*err = ret;
+	uaccess_ttbr0_disable();
+}
+
+
 #endif /* CONFIG_ARM64_GCS */
 
 #endif /* __ASM_UACCESS_H */
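A sketch (not the signal handling code in this series) of how put_user_gcs() might be used to push a value onto a user GCS; note that the GCS grows down and, per the barrier discussion in earlier revisions, a gcsb_dsync() is assumed to be needed before relying on the store:

#include <asm/gcs.h>
#include <asm/uaccess.h>

static int user_gcs_push(unsigned long __user *gcspr, unsigned long val)
{
	int err = 0;

	/* The new entry sits 8 bytes below the current GCS pointer */
	put_user_gcs(val, gcspr - 1, &err);
	if (!err)
		gcsb_dsync();

	return err;
}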
On Thu, Aug 01, 2024 at 01:06:35PM +0100, Mark Brown wrote:
In order for EL1 to write to an EL0 GCS it must use the GCSSTTR instruction rather than a normal STTR. Provide a put_user_gcs() which does this.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
There is a control, HCRX_EL2.GCSEn, which must be set to allow GCS features to take effect at lower ELs, and there are also fine grained traps for GCS usage at EL0 and EL1. Configure all of these to allow GCS usage by EL0 and EL1.
We also initialise GCSCR_EL1 and GCSCRE0_EL1 to ensure that we can execute function call instructions without faulting regardless of the state when the kernel is started.
Signed-off-by: Mark Brown broonie@kernel.org
---
 arch/arm64/include/asm/el2_setup.h | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)
diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index fd87c4b8f984..09211aebcf03 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -27,6 +27,14 @@
 	ubfx	x0, x0, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4
 	cbz	x0, .Lskip_hcrx_@
 	mov_q	x0, HCRX_HOST_FLAGS
+
+	/* Enable GCS if supported */
+	mrs_s	x1, SYS_ID_AA64PFR1_EL1
+	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
+	cbz	x1, .Lset_hcrx_@
+	orr	x0, x0, #HCRX_EL2_GCSEn
+
+.Lset_hcrx_@:
 	msr_s	SYS_HCRX_EL2, x0
 .Lskip_hcrx_@:
 .endm
@@ -191,6 +199,15 @@
 	orr	x0, x0, #HFGxTR_EL2_nPIR_EL1
 	orr	x0, x0, #HFGxTR_EL2_nPIRE0_EL1
 
+	/* GCS depends on PIE so we don't check it if PIE is absent */
+	mrs_s	x1, SYS_ID_AA64PFR1_EL1
+	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
+	cbz	x1, .Lset_fgt_@
+
+	/* Disable traps of access to GCS registers at EL0 and EL1 */
+	orr	x0, x0, #HFGxTR_EL2_nGCS_EL1_MASK
+	orr	x0, x0, #HFGxTR_EL2_nGCS_EL0_MASK
+
 .Lset_fgt_@:
 	msr_s	SYS_HFGRTR_EL2, x0
 	msr_s	SYS_HFGWTR_EL2, x0
@@ -204,6 +221,17 @@
 .Lskip_fgt_@:
 .endm
 
+.macro __init_el2_gcs
+	mrs_s	x1, SYS_ID_AA64PFR1_EL1
+	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
+	cbz	x1, .Lskip_gcs_@
+
+	/* Ensure GCS is not enabled when we start trying to do BLs */
+	msr_s	SYS_GCSCR_EL1, xzr
+	msr_s	SYS_GCSCRE0_EL1, xzr
+.Lskip_gcs_@:
+.endm
+
 .macro __init_el2_nvhe_prepare_eret
 	mov	x0, #INIT_PSTATE_EL1
 	msr	spsr_el2, x0
@@ -229,6 +257,7 @@
 	__init_el2_nvhe_idregs
 	__init_el2_cptr
 	__init_el2_fgt
+	__init_el2_gcs
 .endm
 
 #ifndef __KVM_NVHE_HYPERVISOR__
On Thu, Aug 01, 2024 at 01:06:36PM +0100, Mark Brown wrote:
There is a control HCRX_EL2.GCSEn which must be set to allow GCS features to take effect at lower ELs and also fine grained traps for GCS usage at EL0 and EL1. Configure all these to allow GCS usage by EL0 and EL1.
We also initialise GCSCR_EL1 and GCSCRE0_EL1 to ensure that we can execute function call instructions without faulting regardless of the state when the kernel is started.
Signed-off-by: Mark Brown broonie@kernel.org
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
Add a cpufeature for GCS, allowing other code to conditionally support it at runtime.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 arch/arm64/include/asm/cpufeature.h | 6 ++++++
 arch/arm64/kernel/cpufeature.c      | 9 +++++++++
 arch/arm64/tools/cpucaps            | 1 +
 3 files changed, 16 insertions(+)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 558434267271..e0f0e4c24544 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -832,6 +832,12 @@ static inline bool system_supports_lpa2(void)
 	return cpus_have_final_cap(ARM64_HAS_LPA2);
 }
 
+static inline bool system_supports_gcs(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_GCS) &&
+		alternative_has_cap_unlikely(ARM64_HAS_GCS);
+}
+
 int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
 bool try_emulate_mrs(struct pt_regs *regs, u32 isn);
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 646ecd3069fd..315bd7be1106 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -291,6 +291,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
 };
 
 static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_GCS),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_GCS_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
 		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0),
@@ -2870,6 +2872,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_nv1,
 		ARM64_CPUID_FIELDS_NEG(ID_AA64MMFR4_EL1, E2H0, NI_NV1)
 	},
+	{
+		.desc = "Guarded Control Stack (GCS)",
+		.capability = ARM64_HAS_GCS,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, GCS, IMP)
+	},
 	{},
 };
 
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index ac3429d892b9..66eff95c0824 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -29,6 +29,7 @@ HAS_EVT
 HAS_FPMR
 HAS_FGT
 HAS_FPSIMD
+HAS_GCS
 HAS_GENERIC_AUTH
 HAS_GENERIC_AUTH_ARCH_QARMA3
 HAS_GENERIC_AUTH_ARCH_QARMA5
On Thu, Aug 01, 2024 at 01:06:37PM +0100, Mark Brown wrote:
@@ -2870,6 +2872,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_nv1,
 		ARM64_CPUID_FIELDS_NEG(ID_AA64MMFR4_EL1, E2H0, NI_NV1)
 	},
+	{
+		.desc = "Guarded Control Stack (GCS)",
+		.capability = ARM64_HAS_GCS,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
Given that it's for user space only, system feature makes sense.
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
Pages used for guarded control stacks need to be described to the hardware using the Permission Indirection Extension; GCS is not supported without PIE. In order to support copy on write for guarded stacks we allocate two values: one for active GCSs and one for GCS pages marked as read only prior to copy.
Since the actual effect is defined using PIE, the specific bit pattern used does not matter to the hardware, but we choose two values which differ only in PTE_WRITE in order to help share code with non-PIE cases.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Signed-off-by: Mark Brown broonie@kernel.org
---
 arch/arm64/include/asm/pgtable-prot.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index b11cfb9fdd37..545d54c88520 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -144,15 +144,23 @@ static inline bool __pure lpa2_is_enabled(void)
 /* 6:                                PTE_PXN | PTE_WRITE            */
 /* 7: PAGE_SHARED_EXEC               PTE_PXN | PTE_WRITE | PTE_USER */
 /* 8: PAGE_KERNEL_ROX      PTE_UXN                                  */
-/* 9:                      PTE_UXN |                      PTE_USER */
+/* 9: PAGE_GCS_RO          PTE_UXN |                      PTE_USER */
 /* a: PAGE_KERNEL_EXEC     PTE_UXN |           PTE_WRITE            */
-/* b:                      PTE_UXN |           PTE_WRITE | PTE_USER */
+/* b: PAGE_GCS             PTE_UXN |           PTE_WRITE | PTE_USER */
 /* c: PAGE_KERNEL_RO       PTE_UXN | PTE_PXN                        */
 /* d: PAGE_READONLY        PTE_UXN | PTE_PXN |             PTE_USER */
 /* e: PAGE_KERNEL          PTE_UXN | PTE_PXN | PTE_WRITE            */
 /* f: PAGE_SHARED          PTE_UXN | PTE_PXN | PTE_WRITE | PTE_USER */
 
+#define _PAGE_GCS	(_PAGE_DEFAULT | PTE_NG | PTE_UXN | PTE_WRITE | PTE_USER)
+#define _PAGE_GCS_RO	(_PAGE_DEFAULT | PTE_NG | PTE_UXN | PTE_USER)
+
+#define PAGE_GCS	__pgprot(_PAGE_GCS)
+#define PAGE_GCS_RO	__pgprot(_PAGE_GCS_RO)
+
 #define PIE_E0	( \
+	PIRx_ELx_PERM(pte_pi_index(_PAGE_GCS),           PIE_GCS)  | \
+	PIRx_ELx_PERM(pte_pi_index(_PAGE_GCS_RO),        PIE_R)    | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_EXECONLY),      PIE_X_O)  | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_READONLY_EXEC), PIE_RX)   | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED_EXEC),   PIE_RWX)  | \
@@ -160,6 +168,8 @@ static inline bool __pure lpa2_is_enabled(void)
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED),        PIE_RW))
 
 #define PIE_E1	( \
+	PIRx_ELx_PERM(pte_pi_index(_PAGE_GCS),           PIE_NONE_O) | \
+	PIRx_ELx_PERM(pte_pi_index(_PAGE_GCS_RO),        PIE_NONE_O) | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_EXECONLY),      PIE_NONE_O) | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_READONLY_EXEC), PIE_R)      | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED_EXEC),   PIE_RW)     | \
On Thu, Aug 01, 2024 at 01:06:38PM +0100, Mark Brown wrote:
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index b11cfb9fdd37..545d54c88520 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -144,15 +144,23 @@ static inline bool __pure lpa2_is_enabled(void)
 /* 6:                                PTE_PXN | PTE_WRITE            */
 /* 7: PAGE_SHARED_EXEC               PTE_PXN | PTE_WRITE | PTE_USER */
 /* 8: PAGE_KERNEL_ROX      PTE_UXN                                  */
-/* 9:                      PTE_UXN |                      PTE_USER */
+/* 9: PAGE_GCS_RO          PTE_UXN |                      PTE_USER */
 /* a: PAGE_KERNEL_EXEC     PTE_UXN |           PTE_WRITE            */
-/* b:                      PTE_UXN |           PTE_WRITE | PTE_USER */
+/* b: PAGE_GCS             PTE_UXN |           PTE_WRITE | PTE_USER */
 /* c: PAGE_KERNEL_RO       PTE_UXN | PTE_PXN                        */
 /* d: PAGE_READONLY        PTE_UXN | PTE_PXN |             PTE_USER */
 /* e: PAGE_KERNEL          PTE_UXN | PTE_PXN | PTE_WRITE            */
 /* f: PAGE_SHARED          PTE_UXN | PTE_PXN | PTE_WRITE | PTE_USER */
 
+#define _PAGE_GCS	(_PAGE_DEFAULT | PTE_NG | PTE_UXN | PTE_WRITE | PTE_USER)
+#define _PAGE_GCS_RO	(_PAGE_DEFAULT | PTE_NG | PTE_UXN | PTE_USER)
+
+#define PAGE_GCS	__pgprot(_PAGE_GCS)
+#define PAGE_GCS_RO	__pgprot(_PAGE_GCS_RO)
+
 #define PIE_E0	( \
+	PIRx_ELx_PERM(pte_pi_index(_PAGE_GCS),           PIE_GCS)  | \
+	PIRx_ELx_PERM(pte_pi_index(_PAGE_GCS_RO),        PIE_R)    | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_EXECONLY),      PIE_X_O)  | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_READONLY_EXEC), PIE_RX)   | \
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED_EXEC),   PIE_RWX)  | \
@@ -160,6 +168,8 @@ static inline bool __pure lpa2_is_enabled(void)
 	PIRx_ELx_PERM(pte_pi_index(_PAGE_SHARED),        PIE_RW))
 
 #define PIE_E1	( \
+	PIRx_ELx_PERM(pte_pi_index(_PAGE_GCS),           PIE_NONE_O) | \
+	PIRx_ELx_PERM(pte_pi_index(_PAGE_GCS_RO),        PIE_NONE_O) | \
It's fine to keep PIE_NONE_O here; the kernel wouldn't need to access this memory with unprivileged instructions (that only matters for the futex code using LDXR/STXR).
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
Use VM_HIGH_ARCH_5 for guarded control stack pages.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- Documentation/filesystems/proc.rst | 2 +- include/linux/mm.h | 12 +++++++++++- 2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst index e834779d9611..6a882c57a7e7 100644 --- a/Documentation/filesystems/proc.rst +++ b/Documentation/filesystems/proc.rst @@ -579,7 +579,7 @@ encoded manner. The codes are the following: mt arm64 MTE allocation tags are enabled um userfaultfd missing tracking uw userfaultfd wr-protect tracking - ss shadow stack page + ss shadow/guarded control stack page sl sealed == =======================================
diff --git a/include/linux/mm.h b/include/linux/mm.h index 96faf26b6083..c6c7454ce4e0 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -353,7 +353,17 @@ extern unsigned int kobjsize(const void *objp); * for more details on the guard size. */ # define VM_SHADOW_STACK VM_HIGH_ARCH_5 -#else +#endif + +#if defined(CONFIG_ARM64_GCS) +/* + * arm64's Guarded Control Stack implements similar functionality and + * has similar constraints to shadow stacks. + */ +# define VM_SHADOW_STACK VM_HIGH_ARCH_5 +#endif + +#ifndef VM_SHADOW_STACK # define VM_SHADOW_STACK VM_NONE #endif
On Thu, 2024-08-01 at 13:06 +0100, Mark Brown wrote:
Use VM_HIGH_ARCH_5 for guarded control stack pages.
FYI - If you want to have more complete guard gaps, you need to do this for arm too: https://lore.kernel.org/linux-mm/20240326021656.202649-14-rick.p.edgecombe@i...
Using VM_SHADOW_STACK only gets you part of the way there.
On Thu, Aug 15, 2024 at 03:20:52PM +0000, Edgecombe, Rick P wrote:
On Thu, 2024-08-01 at 13:06 +0100, Mark Brown wrote:
Use VM_HIGH_ARCH_5 for guarded control stack pages.
FYI - If you want to have more complete guard gaps, you need to do this for arm too: https://lore.kernel.org/linux-mm/20240326021656.202649-14-rick.p.edgecombe@i...
Using VM_SHADOW_STACK only gets you part of the way there.
Oh, thanks for the heads up - I'd missed that.
On Thu, Aug 15, 2024 at 04:26:47PM +0100, Mark Brown wrote:
On Thu, Aug 15, 2024 at 03:20:52PM +0000, Edgecombe, Rick P wrote:
FYI - If you want to have more complete guard gaps, you need to do this for arm too: https://lore.kernel.org/linux-mm/20240326021656.202649-14-rick.p.edgecombe@i...
Using VM_SHADOW_STACK only gets you part of the way there.
Oh, thanks for the heads up - I'd missed that.
Looking at this I think it makes sense to do as was done for x86 and split this out into a separate series (part of why I'd missed it), updating the generic implementation to do this by default. That'll touch a bunch of architectures and the series is already quite big; it's not really an ABI impact.
On Thu, 2024-08-15 at 17:39 +0100, Mark Brown wrote:
Oh, thanks for the heads up - I'd missed that.
Looking at this I think it makes sense to do as was done for x86 and split this out into a separate series (part of why I'd missed it), updating the generic implementation to do this by default. That'll touch a bunch of architectures and the series is already quite big; it's not really an ABI impact.
The series is already upstream. You just need to add an arm version of that linked patch. But up to you.
On Thu, Aug 15, 2024 at 05:53:19PM +0000, Edgecombe, Rick P wrote:
On Thu, 2024-08-15 at 17:39 +0100, Mark Brown wrote:
Oh, thanks for the heads up - I'd missed that.
Looking at this I think it makes sense to do as was done for x86 and split this out into a separate series (part of why I'd missed it), updating the generic implementation to do this by default. That'll touch a bunch of architectures and the series is already quite big; it's not really an ABI impact.
The series is already upstream. You just need to add an arm version of that linked patch. But up to you.
Your series modified the existing x86 custom arch_get_unmapped_area*() functions; arm64 uses the generic implementation of those, so I'd have to either add custom implementations (which I can't imagine would be met with great enthusiasm) or update the generic ones. A generic implementation seems reasonable and it looks like RISC-V would also end up using it, so while it's a bit invasive it does seem more sensible to do the change there.
On Thu, 2024-08-15 at 19:19 +0100, Mark Brown wrote:
The series is already upstream. You just need to add an arm version of that linked patch. But up to you.
Your series modified the existing x86 custom arch_get_unmapped_area*() functions; arm64 uses the generic implementation of those, so I'd have to either add custom implementations (which I can't imagine would be met with great enthusiasm) or update the generic ones. A generic implementation seems reasonable and it looks like RISC-V would also end up using it, so while it's a bit invasive it does seem more sensible to do the change there.
Ah, I misunderstood. Makes sense.
On Thu, Aug 01, 2024 at 01:06:39PM +0100, Mark Brown wrote:
Use VM_HIGH_ARCH_5 for guarded control stack pages.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
Map pages flagged as being part of a GCS as such rather than using the full set of generic VM flags.
This is done using a conditional rather than extending the size of protection_map since that would make for a very sparse array.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/mman.h | 9 +++++++++ arch/arm64/mm/mmap.c | 10 +++++++++- 2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h index c21849ffdd88..6d3fe6433a62 100644 --- a/arch/arm64/include/asm/mman.h +++ b/arch/arm64/include/asm/mman.h @@ -61,6 +61,15 @@ static inline bool arch_validate_flags(unsigned long vm_flags) return false; }
+ if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) { + /* + * An executable GCS isn't a good idea, and the mm + * core can't cope with a shared GCS. + */ + if (vm_flags & (VM_EXEC | VM_ARM64_BTI | VM_SHARED)) + return false; + } + return true;
} diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c index 642bdf908b22..3ed63fc8cd0a 100644 --- a/arch/arm64/mm/mmap.c +++ b/arch/arm64/mm/mmap.c @@ -83,9 +83,17 @@ arch_initcall(adjust_protection_map);
pgprot_t vm_get_page_prot(unsigned long vm_flags) { - pteval_t prot = pgprot_val(protection_map[vm_flags & + pteval_t prot; + + /* Short circuit GCS to avoid bloating the table. */ + if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) { + prot = _PAGE_GCS_RO; + } else { + prot = pgprot_val(protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]); + }
+ /* VM_ARM64_BTI on a GCS is rejected in arch_validate_flags() */ if (vm_flags & VM_ARM64_BTI) prot |= PTE_GP;
On Thu, Aug 01, 2024 at 01:06:40PM +0100, Mark Brown wrote:
Map pages flagged as being part of a GCS as such rather than using the full set of generic VM flags.
This is done using a conditional rather than extending the size of protection_map since that would make for a very sparse array.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
arch/arm64/include/asm/mman.h | 9 +++++++++ arch/arm64/mm/mmap.c | 10 +++++++++- 2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h index c21849ffdd88..6d3fe6433a62 100644 --- a/arch/arm64/include/asm/mman.h +++ b/arch/arm64/include/asm/mman.h @@ -61,6 +61,15 @@ static inline bool arch_validate_flags(unsigned long vm_flags) return false; }
- if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) {
/*
* An executable GCS isn't a good idea, and the mm
* core can't cope with a shared GCS.
*/
if (vm_flags & (VM_EXEC | VM_ARM64_BTI | VM_SHARED))
return false;
- }
I wonder whether we should clear VM_MAYEXEC early on during the vma creation. This way the mprotect() case will be handled in the core code. At a quick look, do_mmap() seems to always set VM_MAYEXEC but discard it for non-executable file mmap. Last time I looked (when doing MTE) there wasn't a way for the arch code to clear specific VM_* flags, only to validate them. But I think we should just clear VM_MAYEXEC and also return an error for VM_EXEC in the core do_mmap() if VM_SHADOW_STACK. It would cover the other architectures doing shadow stacks.
Regarding VM_SHARED, how do we even end up with this via the map_shadow_stack() syscall? I can't see how one can pass MAP_SHARED to do_mmap() on this path. I'm fine with a VM_WARN_ON() if you want the check (and there's no way a user can trigger it).
Is there any arch restriction with setting BTI and GCS? It doesn't make sense but curious if it matters. We block the exec permission anyway (unless the BTI pages moved to PIE as well, I don't remember).
- return true;
} diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c index 642bdf908b22..3ed63fc8cd0a 100644 --- a/arch/arm64/mm/mmap.c +++ b/arch/arm64/mm/mmap.c @@ -83,9 +83,17 @@ arch_initcall(adjust_protection_map); pgprot_t vm_get_page_prot(unsigned long vm_flags) {
- pteval_t prot = pgprot_val(protection_map[vm_flags &
- pteval_t prot;
- /* Short circuit GCS to avoid bloating the table. */
- if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) {
prot = _PAGE_GCS_RO;
- } else {
prot = pgprot_val(protection_map[vm_flags & (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
- }
This looks fine to me. Such a page will become a proper GCS on first access.
On Mon, Aug 19, 2024 at 10:10:36AM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:40PM +0100, Mark Brown wrote:
- if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) {
/*
* An executable GCS isn't a good idea, and the mm
* core can't cope with a shared GCS.
*/
if (vm_flags & (VM_EXEC | VM_ARM64_BTI | VM_SHARED))
return false;
- }
I wonder whether we should clear VM_MAYEXEC early on during the vma creation. This way the mprotect() case will be handled in the core code. At a quick look, do_mmap() seems to always set VM_MAYEXEC but discard it for non-executable file mmap. Last time I looked (when doing MTE) there wasn't a way for the arch code to clear specific VM_* flags, only to validate them. But I think we should just clear VM_MAYEXEC and also return an error for VM_EXEC in the core do_mmap() if VM_SHADOW_STACK. It would cover the other architectures doing shadow stacks.
Yes, I think adding something generic would make sense here. That feels like a cleanup which could be split out?
Regarding VM_SHARED, how do we even end up with this via the map_shadow_stack() syscall? I can't see how one can pass MAP_SHARED to do_mmap() on this path. I'm fine with a VM_WARN_ON() if you want the check (and there's no way a user can trigger it).
It's just a defensive programming thing; I'm not aware of any way in which it should be possible to trigger this.
Is there any arch restriction with setting BTI and GCS? It doesn't make sense but curious if it matters. We block the exec permission anyway (unless the BTI pages moved to PIE as well, I don't remember).
As you say, BTI should be meaningless for a non-executable page like GCS; I'm not aware of any way in which it matters. BTI is separate from PIE.
On Mon, Aug 19, 2024 at 05:33:24PM +0100, Mark Brown wrote:
On Mon, Aug 19, 2024 at 10:10:36AM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:40PM +0100, Mark Brown wrote:
- if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) {
/*
* An executable GCS isn't a good idea, and the mm
* core can't cope with a shared GCS.
*/
if (vm_flags & (VM_EXEC | VM_ARM64_BTI | VM_SHARED))
return false;
- }
I wonder whether we should clear VM_MAYEXEC early on during the vma creation. This way the mprotect() case will be handled in the core code. At a quick look, do_mmap() seems to always set VM_MAYEXEC but discard it for non-executable file mmap. Last time I looked (when doing MTE) there wasn't a way for the arch code to clear specific VM_* flags, only to validate them. But I think we should just clear VM_MAYEXEC and also return an error for VM_EXEC in the core do_mmap() if VM_SHADOW_STACK. It would cover the other architectures doing shadow stacks.
Yes, I think adding something generic would make sense here. That feels like a cleanup which could be split out?
It can be done separately. It doesn't look like x86 has such checks. Adding it generically would be a slight ABI tightening but I doubt it matters, no sane software would use an executable shadow stack.
Regarding VM_SHARED, how do we even end up with this via the map_shadow_stack() syscall? I can't see how one can pass MAP_SHARED to do_mmap() on this path. I'm fine with a VM_WARN_ON() if you want the check (and there's no way a user can trigger it).
It's just a defensive programming thing; I'm not aware of any way in which it should be possible to trigger this.
Is there any arch restriction with setting BTI and GCS? It doesn't make sense but curious if it matters. We block the exec permission anyway (unless the BTI pages moved to PIE as well, I don't remember).
As you say, BTI should be meaningless for a non-executable page like GCS; I'm not aware of any way in which it matters. BTI is separate from PIE.
My thoughts were whether we can get rid of this hunk entirely by handling it in the core code. We'd allow BTI if one wants such a useless combination but clear VM_MAYEXEC in the core code (and ignore VM_SHARED since you can't set it anyway).
On Tue, Aug 20, 2024 at 03:59:21PM +0100, Catalin Marinas wrote:
On Mon, Aug 19, 2024 at 05:33:24PM +0100, Mark Brown wrote:
On Mon, Aug 19, 2024 at 10:10:36AM +0100, Catalin Marinas wrote:
At a quick look, do_mmap() seems to always set VM_MAYEXEC but discard it for non-executable file mmap. Last time I looked (when doing MTE) there wasn't a way for the arch code to clear specific VM_* flags, only to validate them. But I think we should just clear VM_MAYEXEC and also return an error for VM_EXEC in the core do_mmap() if VM_SHADOW_STACK. It would cover the other architectures doing shadow stacks.
Yes, I think adding something generic would make sense here. That feels like a cleanup which could be split out?
It can be done separately. It doesn't look like x86 has such checks. Adding it generically would be a slight ABI tightening but I doubt it matters, no sane software would use an executable shadow stack.
OK.
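Something like the following in the core code would capture the suggestion above; this is only a sketch of the proposed tightening, and the exact placement and error handling are assumptions since no generic patch exists yet:

	/* In do_mmap(), once vm_flags have been computed: */
	if (vm_flags & VM_SHADOW_STACK) {
		/* An executable shadow/guarded control stack makes no sense. */
		if (vm_flags & VM_EXEC)
			return -EINVAL;
		/* Stop mprotect() from later adding PROT_EXEC. */
		vm_flags &= ~VM_MAYEXEC;
	}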
Is there any arch restriction with setting BTI and GCS? It doesn't make sense but curious if it matters. We block the exec permission anyway (unless the BTI pages moved to PIE as well, I don't remember).
As you say, BTI should be meaningless for a non-executable page like GCS; I'm not aware of any way in which it matters. BTI is separate from PIE.
My thoughts were whether we can get rid of this hunk entirely by handling it in the core code. We'd allow BTI if one wants such a useless combination but clear VM_MAYEXEC in the core code (and ignore VM_SHARED since you can't set it anyway).
I have to admit that the BTI check was there because I was shoving _EXEC in there rather than because it specifically needed to be blocked. So change the check for VM_SHARED to a VM_WARN_ON(), and leave the _EXEC check for now pending the above core change?
On Tue, Aug 20, 2024 at 04:28:21PM +0100, Mark Brown wrote:
On Tue, Aug 20, 2024 at 03:59:21PM +0100, Catalin Marinas wrote:
On Mon, Aug 19, 2024 at 05:33:24PM +0100, Mark Brown wrote:
On Mon, Aug 19, 2024 at 10:10:36AM +0100, Catalin Marinas wrote:
Is there any arch restriction with setting BTI and GCS? It doesn't make sense but curious if it matters. We block the exec permission anyway (unless the BTI pages moved to PIE as well, I don't remember).
As you say, BTI should be meaningless for a non-executable page like GCS; I'm not aware of any way in which it matters. BTI is separate from PIE.
My thoughts were whether we can get rid of this hunk entirely by handling it in the core code. We'd allow BTI if one wants such a useless combination but clear VM_MAYEXEC in the core code (and ignore VM_SHARED since you can't set it anyway).
I have to admit that the BTI check was there because I was shoving _EXEC in there rather than because it specifically needed to be blocked. So change the check for VM_SHARED to a VM_WARN_ON(), and leave the _EXEC check for now pending the above core change?
Yes, sounds good.
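For reference, a minimal sketch of how the hunk in arch_validate_flags() might read after the changes agreed above (BTI tolerated, VM_SHARED demoted to a VM_WARN_ON(), and the VM_EXEC check left in place pending the generic core change); an illustration rather than the final patch:

	if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) {
		/* An executable GCS isn't a good idea. */
		if (vm_flags & VM_EXEC)
			return false;

		/*
		 * The mm core can't cope with a shared GCS and there
		 * should be no way to create one, so just warn.
		 */
		VM_WARN_ON(vm_flags & VM_SHARED);
	}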
GCS introduces a number of system registers for EL1 and EL0; on systems with GCS we need to context switch them and expose them to VMMs to allow guests to use GCS.
In order to allow guests to use GCS we also need to configure HCRX_EL2.GCSEn; if this is not set, GCS instructions will be no-ops and CHKFEAT will report GCS as disabled. Also enable fine-grained traps for access to the GCS registers by guests which do not have the feature enabled.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/kvm_host.h | 8 +++++ arch/arm64/include/asm/vncr_mapping.h | 2 ++ arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 49 ++++++++++++++++++++++++------ arch/arm64/kvm/sys_regs.c | 12 ++++++++ 4 files changed, 61 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index a33f5996ca9f..5818e4a1c2d1 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -446,6 +446,10 @@ enum vcpu_sysreg { GCR_EL1, /* Tag Control Register */ TFSRE0_EL1, /* Tag Fault Status Register (EL0) */
+ /* Guarded Control Stack registers */ + GCSCRE0_EL1, /* Guarded Control Stack Control (EL0) */ + GCSPR_EL0, /* Guarded Control Stack Pointer (EL0) */ + /* 32bit specific registers. */ DACR32_EL2, /* Domain Access Control Register */ IFSR32_EL2, /* Instruction Fault Status Register */ @@ -517,6 +521,10 @@ enum vcpu_sysreg { VNCR(PIR_EL1), /* Permission Indirection Register 1 (EL1) */ VNCR(PIRE0_EL1), /* Permission Indirection Register 0 (EL1) */
+ /* Guarded Control Stack registers */ + VNCR(GCSPR_EL1), /* Guarded Control Stack Pointer (EL1) */ + VNCR(GCSCR_EL1), /* Guarded Control Stack Control (EL1) */ + VNCR(HFGRTR_EL2), VNCR(HFGWTR_EL2), VNCR(HFGITR_EL2), diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h index df2c47c55972..5e83e6f579fd 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -88,6 +88,8 @@ #define VNCR_PMSIRR_EL1 0x840 #define VNCR_PMSLATFR_EL1 0x848 #define VNCR_TRFCR_EL1 0x880 +#define VNCR_GCSPR_EL1 0x8C0 +#define VNCR_GCSCR_EL1 0x8D0 #define VNCR_MPAM1_EL1 0x900 #define VNCR_MPAMHCR_EL2 0x930 #define VNCR_MPAMVPMV_EL2 0x938 diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index 4c0fdabaf8ae..ac29352e225a 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -16,6 +16,27 @@ #include <asm/kvm_hyp.h> #include <asm/kvm_mmu.h>
+static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu; + + if (!vcpu) + vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt); + + return vcpu; +} + +static inline bool ctxt_has_gcs(struct kvm_cpu_context *ctxt) +{ + struct kvm_vcpu *vcpu; + + if (!cpus_have_final_cap(ARM64_HAS_GCS)) + return false; + + vcpu = ctxt_to_vcpu(ctxt); + return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64PFR1_EL1, GCS, IMP); +} + static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, MDSCR_EL1) = read_sysreg(mdscr_el1); @@ -25,16 +46,10 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0); ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); -} - -static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) -{ - struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu; - - if (!vcpu) - vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt); - - return vcpu; + if (ctxt_has_gcs(ctxt)) { + ctxt_sys_reg(ctxt, GCSPR_EL0) = read_sysreg_s(SYS_GCSPR_EL0); + ctxt_sys_reg(ctxt, GCSCRE0_EL1) = read_sysreg_s(SYS_GCSCRE0_EL1); + } }
static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt) @@ -79,6 +94,10 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) if (ctxt_has_s1pie(ctxt)) { ctxt_sys_reg(ctxt, PIR_EL1) = read_sysreg_el1(SYS_PIR); ctxt_sys_reg(ctxt, PIRE0_EL1) = read_sysreg_el1(SYS_PIRE0); + if (ctxt_has_gcs(ctxt)) { + ctxt_sys_reg(ctxt, GCSPR_EL1) = read_sysreg_el1(SYS_GCSPR); + ctxt_sys_reg(ctxt, GCSCR_EL1) = read_sysreg_el1(SYS_GCSCR); + } } } ctxt_sys_reg(ctxt, ESR_EL1) = read_sysreg_el1(SYS_ESR); @@ -126,6 +145,11 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL0), tpidr_el0); write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0); + if (ctxt_has_gcs(ctxt)) { + write_sysreg_s(ctxt_sys_reg(ctxt, GCSPR_EL0), SYS_GCSPR_EL0); + write_sysreg_s(ctxt_sys_reg(ctxt, GCSCRE0_EL1), + SYS_GCSCRE0_EL1); + } }
static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) @@ -157,6 +181,11 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) if (ctxt_has_s1pie(ctxt)) { write_sysreg_el1(ctxt_sys_reg(ctxt, PIR_EL1), SYS_PIR); write_sysreg_el1(ctxt_sys_reg(ctxt, PIRE0_EL1), SYS_PIRE0); + + if (ctxt_has_gcs(ctxt)) { + write_sysreg_el1(ctxt_sys_reg(ctxt, GCSPR_EL1), SYS_GCSPR); + write_sysreg_el1(ctxt_sys_reg(ctxt, GCSCR_EL1), SYS_GCSCR); + } } } write_sysreg_el1(ctxt_sys_reg(ctxt, ESR_EL1), SYS_ESR); diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index c90324060436..ac98d3237130 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2446,6 +2446,10 @@ static const struct sys_reg_desc sys_reg_descs[] = { PTRAUTH_KEY(APDB), PTRAUTH_KEY(APGA),
+ { SYS_DESC(SYS_GCSCR_EL1), NULL, reset_val, GCSCR_EL1, 0 }, + { SYS_DESC(SYS_GCSPR_EL1), NULL, reset_unknown, GCSPR_EL1 }, + { SYS_DESC(SYS_GCSCRE0_EL1), NULL, reset_val, GCSCRE0_EL1, 0 }, + { SYS_DESC(SYS_SPSR_EL1), access_spsr}, { SYS_DESC(SYS_ELR_EL1), access_elr},
@@ -2535,6 +2539,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { CTR_EL0_IDC_MASK | CTR_EL0_DminLine_MASK | CTR_EL0_IminLine_MASK), + { SYS_DESC(SYS_GCSPR_EL0), NULL, reset_unknown, GCSPR_EL0 }, { SYS_DESC(SYS_SVCR), undef_access },
{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr, @@ -4560,6 +4565,9 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
if (kvm_has_feat(kvm, ID_AA64MMFR3_EL1, TCRX, IMP)) vcpu->arch.hcrx_el2 |= HCRX_EL2_TCR2En; + + if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP)) + vcpu->arch.hcrx_el2 |= HCRX_EL2_GCSEn; }
if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags)) @@ -4604,6 +4612,10 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu) kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 | HFGxTR_EL2_nPIR_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP)) + kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nGCS_EL0 | + HFGxTR_EL2_nGCS_EL1); + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP)) kvm->arch.fgu[HAFGRTR_GROUP] |= ~(HAFGRTR_EL2_RES0 | HAFGRTR_EL2_RES1);
On Thu, 01 Aug 2024 13:06:41 +0100, Mark Brown broonie@kernel.org wrote:
GCS introduces a number of system registers for EL1 and EL0; on systems with GCS we need to context switch them and expose them to VMMs to allow guests to use GCS.
In order to allow guests to use GCS we also need to configure HCRX_EL2.GCSEn; if this is not set, GCS instructions will be no-ops and CHKFEAT will report GCS as disabled. Also enable fine-grained traps for access to the GCS registers by guests which do not have the feature enabled.
Signed-off-by: Mark Brown broonie@kernel.org
arch/arm64/include/asm/kvm_host.h | 8 +++++ arch/arm64/include/asm/vncr_mapping.h | 2 ++ arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 49 ++++++++++++++++++++++++------ arch/arm64/kvm/sys_regs.c | 12 ++++++++ 4 files changed, 61 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index a33f5996ca9f..5818e4a1c2d1 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -446,6 +446,10 @@ enum vcpu_sysreg { GCR_EL1, /* Tag Control Register */ TFSRE0_EL1, /* Tag Fault Status Register (EL0) */
- /* Guarded Control Stack registers */
- GCSCRE0_EL1, /* Guarded Control Stack Control (EL0) */
- GCSPR_EL0, /* Guarded Control Stack Pointer (EL0) */
- /* 32bit specific registers. */ DACR32_EL2, /* Domain Access Control Register */ IFSR32_EL2, /* Instruction Fault Status Register */
@@ -517,6 +521,10 @@ enum vcpu_sysreg { VNCR(PIR_EL1), /* Permission Indirection Register 1 (EL1) */ VNCR(PIRE0_EL1), /* Permission Indirection Register 0 (EL1) */
- /* Guarded Control Stack registers */
- VNCR(GCSPR_EL1), /* Guarded Control Stack Pointer (EL1) */
- VNCR(GCSCR_EL1), /* Guarded Control Stack Control (EL1) */
- VNCR(HFGRTR_EL2), VNCR(HFGWTR_EL2), VNCR(HFGITR_EL2),
diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h index df2c47c55972..5e83e6f579fd 100644 --- a/arch/arm64/include/asm/vncr_mapping.h +++ b/arch/arm64/include/asm/vncr_mapping.h @@ -88,6 +88,8 @@ #define VNCR_PMSIRR_EL1 0x840 #define VNCR_PMSLATFR_EL1 0x848 #define VNCR_TRFCR_EL1 0x880 +#define VNCR_GCSPR_EL1 0x8C0 +#define VNCR_GCSCR_EL1 0x8D0 #define VNCR_MPAM1_EL1 0x900 #define VNCR_MPAMHCR_EL2 0x930 #define VNCR_MPAMVPMV_EL2 0x938 diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h index 4c0fdabaf8ae..ac29352e225a 100644 --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h @@ -16,6 +16,27 @@ #include <asm/kvm_hyp.h> #include <asm/kvm_mmu.h> +static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) +{
- struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu;
- if (!vcpu)
vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
- return vcpu;
+}
+static inline bool ctxt_has_gcs(struct kvm_cpu_context *ctxt) +{
- struct kvm_vcpu *vcpu;
- if (!cpus_have_final_cap(ARM64_HAS_GCS))
return false;
- vcpu = ctxt_to_vcpu(ctxt);
- return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64PFR1_EL1, GCS, IMP);
+}
static inline void __sysreg_save_common_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, MDSCR_EL1) = read_sysreg(mdscr_el1); @@ -25,16 +46,10 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt) { ctxt_sys_reg(ctxt, TPIDR_EL0) = read_sysreg(tpidr_el0); ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0); -}
-static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt) -{
- struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu;
- if (!vcpu)
vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
- return vcpu;
- if (ctxt_has_gcs(ctxt)) {
ctxt_sys_reg(ctxt, GCSPR_EL0) = read_sysreg_s(SYS_GCSPR_EL0);
ctxt_sys_reg(ctxt, GCSCRE0_EL1) = read_sysreg_s(SYS_GCSCRE0_EL1);
- }
} static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt) @@ -79,6 +94,10 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt) if (ctxt_has_s1pie(ctxt)) { ctxt_sys_reg(ctxt, PIR_EL1) = read_sysreg_el1(SYS_PIR); ctxt_sys_reg(ctxt, PIRE0_EL1) = read_sysreg_el1(SYS_PIRE0);
if (ctxt_has_gcs(ctxt)) {
ctxt_sys_reg(ctxt, GCSPR_EL1) = read_sysreg_el1(SYS_GCSPR);
ctxt_sys_reg(ctxt, GCSCR_EL1) = read_sysreg_el1(SYS_GCSCR);
} } ctxt_sys_reg(ctxt, ESR_EL1) = read_sysreg_el1(SYS_ESR);
@@ -126,6 +145,11 @@ static inline void __sysreg_restore_user_state(struct kvm_cpu_context *ctxt) { write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL0), tpidr_el0); write_sysreg(ctxt_sys_reg(ctxt, TPIDRRO_EL0), tpidrro_el0);
- if (ctxt_has_gcs(ctxt)) {
write_sysreg_s(ctxt_sys_reg(ctxt, GCSPR_EL0), SYS_GCSPR_EL0);
write_sysreg_s(ctxt_sys_reg(ctxt, GCSCRE0_EL1),
SYS_GCSCRE0_EL1);
- }
} static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) @@ -157,6 +181,11 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt) if (ctxt_has_s1pie(ctxt)) { write_sysreg_el1(ctxt_sys_reg(ctxt, PIR_EL1), SYS_PIR); write_sysreg_el1(ctxt_sys_reg(ctxt, PIRE0_EL1), SYS_PIRE0);
if (ctxt_has_gcs(ctxt)) {
write_sysreg_el1(ctxt_sys_reg(ctxt, GCSPR_EL1), SYS_GCSPR);
write_sysreg_el1(ctxt_sys_reg(ctxt, GCSCR_EL1), SYS_GCSCR);
} } write_sysreg_el1(ctxt_sys_reg(ctxt, ESR_EL1), SYS_ESR);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index c90324060436..ac98d3237130 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -2446,6 +2446,10 @@ static const struct sys_reg_desc sys_reg_descs[] = { PTRAUTH_KEY(APDB), PTRAUTH_KEY(APGA),
- { SYS_DESC(SYS_GCSCR_EL1), NULL, reset_val, GCSCR_EL1, 0 },
- { SYS_DESC(SYS_GCSPR_EL1), NULL, reset_unknown, GCSPR_EL1 },
- { SYS_DESC(SYS_GCSCRE0_EL1), NULL, reset_val, GCSCRE0_EL1, 0 },
Global visibility for these registers? Why should we expose them to userspace if the feature is neither present nor configured?
{ SYS_DESC(SYS_SPSR_EL1), access_spsr}, { SYS_DESC(SYS_ELR_EL1), access_elr}, @@ -2535,6 +2539,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { CTR_EL0_IDC_MASK | CTR_EL0_DminLine_MASK | CTR_EL0_IminLine_MASK),
- { SYS_DESC(SYS_GCSPR_EL0), NULL, reset_unknown, GCSPR_EL0 }, { SYS_DESC(SYS_SVCR), undef_access },
{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr, @@ -4560,6 +4565,9 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu) if (kvm_has_feat(kvm, ID_AA64MMFR3_EL1, TCRX, IMP)) vcpu->arch.hcrx_el2 |= HCRX_EL2_TCR2En;
if (kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
vcpu->arch.hcrx_el2 |= HCRX_EL2_GCSEn; }
if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags)) @@ -4604,6 +4612,10 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu) kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 | HFGxTR_EL2_nPIR_EL1);
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nGCS_EL0 |
HFGxTR_EL2_nGCS_EL1);
How can this work if you don't handle ID_AA64PFR1_EL1 being written to? You are exposing GCS to all guests without giving the VMM an opportunity to turn it off. This breaks A->B->A migration, which is not acceptable.
M.
On Fri, Aug 16, 2024 at 03:15:19PM +0100, Marc Zyngier wrote:
Mark Brown broonie@kernel.org wrote:
- { SYS_DESC(SYS_GCSCR_EL1), NULL, reset_val, GCSCR_EL1, 0 },
- { SYS_DESC(SYS_GCSPR_EL1), NULL, reset_unknown, GCSPR_EL1 },
- { SYS_DESC(SYS_GCSCRE0_EL1), NULL, reset_val, GCSCRE0_EL1, 0 },
Global visibility for these registers? Why should we expose them to userspace if the feature is neither present nor configured?
...
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nGCS_EL0 |
HFGxTR_EL2_nGCS_EL1);
How can this work if you don't handle ID_AA64PFR1_EL1 being written to? You are exposing GCS to all guests without giving the VMM an opportunity to turn it off. This breaks A->B->A migration, which is not acceptable.
This was done based on your positive review of the POE series which follows the same pattern:
https://lore.kernel.org/linux-arm-kernel/20240503130147.1154804-8-joey.gouly... https://lore.kernel.org/linux-arm-kernel/864jagmxn7.wl-maz@kernel.org/
in which you didn't note any concerns about the handling for the sysregs.
If your decisions have changed then you'll need to withdraw your review there. I'd figured that, given the current incompleteness of the writability conversions and there being a bunch of existing registers exposed unconditionally, you'd decided to defer until some more general cleanup of the situation.
On Fri, 16 Aug 2024 15:40:33 +0100, Mark Brown broonie@kernel.org wrote:
On Fri, Aug 16, 2024 at 03:15:19PM +0100, Marc Zyngier wrote:
Mark Brown broonie@kernel.org wrote:
- { SYS_DESC(SYS_GCSCR_EL1), NULL, reset_val, GCSCR_EL1, 0 },
- { SYS_DESC(SYS_GCSPR_EL1), NULL, reset_unknown, GCSPR_EL1 },
- { SYS_DESC(SYS_GCSCRE0_EL1), NULL, reset_val, GCSCRE0_EL1, 0 },
Global visibility for these registers? Why should we expose them to userspace if the feature is neither present nor configured?
...
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nGCS_EL0 |
HFGxTR_EL2_nGCS_EL1);
How can this work if you don't handle ID_AA64PFR1_EL1 being written to? You are exposing GCS to all guests without giving the VMM an opportunity to turn it off. This breaks A->B->A migration, which is not acceptable.
This was done based on your positive review of the POE series which follows the same pattern:
https://lore.kernel.org/linux-arm-kernel/20240503130147.1154804-8-joey.gouly... https://lore.kernel.org/linux-arm-kernel/864jagmxn7.wl-maz@kernel.org/
in which you didn't note any concerns about the handling for the sysregs.
If your decisions have changed then you'll need to withdraw your review there. I'd figured that, given the current incompleteness of the writability conversions and there being a bunch of existing registers exposed unconditionally, you'd decided to defer until some more general cleanup of the situation.
Thanks for pointing out that I missed this crucial detail in the POE series. I'll immediately go and point that out.
M.
Hook up an override for GCS, allowing it to be disabled from the command line by specifying arm64.nogcs in case there are problems.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- Documentation/admin-guide/kernel-parameters.txt | 3 +++ arch/arm64/kernel/pi/idreg-override.c | 2 ++ 2 files changed, 5 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index f1384c7b59c9..3006c1bd1af0 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -441,6 +441,9 @@ arm64.nobti [ARM64] Unconditionally disable Branch Target Identification support
+ arm64.nogcs [ARM64] Unconditionally disable Guarded Control Stack + support + arm64.nomops [ARM64] Unconditionally disable Memory Copy and Memory Set instructions support
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c index 29d4b6244a6f..2bb709d78405 100644 --- a/arch/arm64/kernel/pi/idreg-override.c +++ b/arch/arm64/kernel/pi/idreg-override.c @@ -133,6 +133,7 @@ static const struct ftr_set_desc pfr1 __prel64_initconst = { .override = &id_aa64pfr1_override, .fields = { FIELD("bt", ID_AA64PFR1_EL1_BT_SHIFT, NULL ), + FIELD("gcs", ID_AA64PFR1_EL1_GCS_SHIFT, NULL), FIELD("mte", ID_AA64PFR1_EL1_MTE_SHIFT, NULL), FIELD("sme", ID_AA64PFR1_EL1_SME_SHIFT, pfr1_sme_filter), {} @@ -215,6 +216,7 @@ static const struct { { "arm64.nosve", "id_aa64pfr0.sve=0" }, { "arm64.nosme", "id_aa64pfr1.sme=0" }, { "arm64.nobti", "id_aa64pfr1.bt=0" }, + { "arm64.nogcs", "id_aa64pfr1.gcs=0" }, { "arm64.nopauth", "id_aa64isar1.gpi=0 id_aa64isar1.gpa=0 " "id_aa64isar1.api=0 id_aa64isar1.apa=0 "
On Thu, Aug 01, 2024 at 01:06:42PM +0100, Mark Brown wrote:
Hook up an override for GCS, allowing it to be disabled from the command line by specifying arm64.nogcs in case there are problems.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
Acked-by: Catalin Marinas catalin.marinas@arm.com
Provide a hwcap to enable userspace to detect support for GCS.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- Documentation/arch/arm64/elf_hwcaps.rst | 2 ++ arch/arm64/include/asm/hwcap.h | 1 + arch/arm64/include/uapi/asm/hwcap.h | 1 + arch/arm64/kernel/cpufeature.c | 3 +++ arch/arm64/kernel/cpuinfo.c | 1 + 5 files changed, 8 insertions(+)
diff --git a/Documentation/arch/arm64/elf_hwcaps.rst b/Documentation/arch/arm64/elf_hwcaps.rst index 448c1664879b..cf87be078f33 100644 --- a/Documentation/arch/arm64/elf_hwcaps.rst +++ b/Documentation/arch/arm64/elf_hwcaps.rst @@ -365,6 +365,8 @@ HWCAP2_SME_SF8DP2 HWCAP2_SME_SF8DP4 Functionality implied by ID_AA64SMFR0_EL1.SF8DP4 == 0b1.
+HWCAP2_GCS + Functionality implied by ID_AA64PFR1_EL1.GCS == 0b1
4. Unused AT_HWCAP bits ----------------------- diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h index 4edd3b61df11..fd7e162e7e39 100644 --- a/arch/arm64/include/asm/hwcap.h +++ b/arch/arm64/include/asm/hwcap.h @@ -157,6 +157,7 @@ #define KERNEL_HWCAP_SME_SF8FMA __khwcap2_feature(SME_SF8FMA) #define KERNEL_HWCAP_SME_SF8DP4 __khwcap2_feature(SME_SF8DP4) #define KERNEL_HWCAP_SME_SF8DP2 __khwcap2_feature(SME_SF8DP2) +#define KERNEL_HWCAP_GCS __khwcap2_feature(GCS)
/* * This yields a mask that user programs can use to figure out what diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h index 285610e626f5..328fb7843e2f 100644 --- a/arch/arm64/include/uapi/asm/hwcap.h +++ b/arch/arm64/include/uapi/asm/hwcap.h @@ -122,5 +122,6 @@ #define HWCAP2_SME_SF8FMA (1UL << 60) #define HWCAP2_SME_SF8DP4 (1UL << 61) #define HWCAP2_SME_SF8DP2 (1UL << 62) +#define HWCAP2_GCS (1UL << 63)
#endif /* _UAPI__ASM_HWCAP_H */ diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index 315bd7be1106..e3e8290a4447 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -2994,6 +2994,9 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = { HWCAP_CAP(ID_AA64ZFR0_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM), HWCAP_CAP(ID_AA64ZFR0_EL1, F32MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM), HWCAP_CAP(ID_AA64ZFR0_EL1, F64MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM), +#endif +#ifdef CONFIG_ARM64_GCS + HWCAP_CAP(ID_AA64PFR1_EL1, GCS, IMP, CAP_HWCAP, KERNEL_HWCAP_GCS), #endif HWCAP_CAP(ID_AA64PFR1_EL1, SSBS, SSBS2, CAP_HWCAP, KERNEL_HWCAP_SSBS), #ifdef CONFIG_ARM64_BTI diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c index 09eeaa24d456..2f539e3101ee 100644 --- a/arch/arm64/kernel/cpuinfo.c +++ b/arch/arm64/kernel/cpuinfo.c @@ -143,6 +143,7 @@ static const char *const hwcap_str[] = { [KERNEL_HWCAP_SME_SF8FMA] = "smesf8fma", [KERNEL_HWCAP_SME_SF8DP4] = "smesf8dp4", [KERNEL_HWCAP_SME_SF8DP2] = "smesf8dp2", + [KERNEL_HWCAP_GCS] = "gcs", };
#ifdef CONFIG_COMPAT
On Thu, Aug 01, 2024 at 01:06:43PM +0100, Mark Brown wrote:
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h index 285610e626f5..328fb7843e2f 100644 --- a/arch/arm64/include/uapi/asm/hwcap.h +++ b/arch/arm64/include/uapi/asm/hwcap.h @@ -122,5 +122,6 @@ #define HWCAP2_SME_SF8FMA (1UL << 60) #define HWCAP2_SME_SF8DP4 (1UL << 61) #define HWCAP2_SME_SF8DP2 (1UL << 62) +#define HWCAP2_GCS (1UL << 63)
You'll be fighting with Joey over the last bit here ;) (we do have HWCAP3 though).
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
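As a quick illustration of the userspace side, a sketch of detecting the new hwcap via getauxval(); the fallback define simply mirrors the value added in this patch for builds against older headers:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_GCS
#define HWCAP2_GCS	(1UL << 63)
#endif

int main(void)
{
	/* AT_HWCAP2 carries the second hwcap word on arm64. */
	if (getauxval(AT_HWCAP2) & HWCAP2_GCS)
		puts("GCS supported");
	else
		puts("GCS not supported");
	return 0;
}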
A new exception code is defined for GCS-specific faults other than standard load/store faults, for example GCS token validation failures; add handling for this. These faults are reported to userspace as segfaults with code SEGV_CPERR (protection error), mirroring the reporting for x86 shadow stack errors.
GCS faults due to memory load/store operations generate data aborts with a flag set; these will be handled separately as part of the data abort handling.
Since we do not currently enable GCS for EL1 we should not get any faults there, but while we're at it we wire things up, treating any GCS fault at EL1 as fatal.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/esr.h | 28 +++++++++++++++++++++++++++- arch/arm64/include/asm/exception.h | 2 ++ arch/arm64/kernel/entry-common.c | 23 +++++++++++++++++++++++ arch/arm64/kernel/traps.c | 11 +++++++++++ 4 files changed, 63 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h index 56c148890daf..0c231adf3867 100644 --- a/arch/arm64/include/asm/esr.h +++ b/arch/arm64/include/asm/esr.h @@ -51,7 +51,8 @@ #define ESR_ELx_EC_FP_EXC32 (0x28) /* Unallocated EC: 0x29 - 0x2B */ #define ESR_ELx_EC_FP_EXC64 (0x2C) -/* Unallocated EC: 0x2D - 0x2E */ +#define ESR_ELx_EC_GCS (0x2D) +/* Unallocated EC: 0x2E */ #define ESR_ELx_EC_SERROR (0x2F) #define ESR_ELx_EC_BREAKPT_LOW (0x30) #define ESR_ELx_EC_BREAKPT_CUR (0x31) @@ -385,6 +386,31 @@ #define ESR_ELx_MOPS_ISS_SRCREG(esr) (((esr) & (UL(0x1f) << 5)) >> 5) #define ESR_ELx_MOPS_ISS_SIZEREG(esr) (((esr) & (UL(0x1f) << 0)) >> 0)
+/* ISS field definitions for GCS */ +#define ESR_ELx_ExType_SHIFT (20) +#define ESR_ELx_ExType_MASK GENMASK(23, 20) +#define ESR_ELx_Raddr_SHIFT (10) +#define ESR_ELx_Raddr_MASK GENMASK(14, 10) +#define ESR_ELx_Rn_SHIFT (5) +#define ESR_ELx_Rn_MASK GENMASK(9, 5) +#define ESR_ELx_Rvalue_SHIFT 5 +#define ESR_ELx_Rvalue_MASK GENMASK(9, 5) +#define ESR_ELx_IT_SHIFT (0) +#define ESR_ELx_IT_MASK GENMASK(4, 0) + +#define ESR_ELx_ExType_DATA_CHECK 0 +#define ESR_ELx_ExType_EXLOCK 1 +#define ESR_ELx_ExType_STR 2 + +#define ESR_ELx_IT_RET 0 +#define ESR_ELx_IT_GCSPOPM 1 +#define ESR_ELx_IT_RET_KEYA 2 +#define ESR_ELx_IT_RET_KEYB 3 +#define ESR_ELx_IT_GCSSS1 4 +#define ESR_ELx_IT_GCSSS2 5 +#define ESR_ELx_IT_GCSPOPCX 6 +#define ESR_ELx_IT_GCSPOPX 7 + #ifndef __ASSEMBLY__ #include <asm/types.h>
diff --git a/arch/arm64/include/asm/exception.h b/arch/arm64/include/asm/exception.h index f296662590c7..674518464718 100644 --- a/arch/arm64/include/asm/exception.h +++ b/arch/arm64/include/asm/exception.h @@ -57,6 +57,8 @@ void do_el0_undef(struct pt_regs *regs, unsigned long esr); void do_el1_undef(struct pt_regs *regs, unsigned long esr); void do_el0_bti(struct pt_regs *regs); void do_el1_bti(struct pt_regs *regs, unsigned long esr); +void do_el0_gcs(struct pt_regs *regs, unsigned long esr); +void do_el1_gcs(struct pt_regs *regs, unsigned long esr); void do_debug_exception(unsigned long addr_if_watchpoint, unsigned long esr, struct pt_regs *regs); void do_fpsimd_acc(unsigned long esr, struct pt_regs *regs); diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c index b77a15955f28..54f2d16d82f4 100644 --- a/arch/arm64/kernel/entry-common.c +++ b/arch/arm64/kernel/entry-common.c @@ -463,6 +463,15 @@ static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr) exit_to_kernel_mode(regs); }
+static void noinstr el1_gcs(struct pt_regs *regs, unsigned long esr) +{ + enter_from_kernel_mode(regs); + local_daif_inherit(regs); + do_el1_gcs(regs, esr); + local_daif_mask(); + exit_to_kernel_mode(regs); +} + static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr) { unsigned long far = read_sysreg(far_el1); @@ -505,6 +514,9 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs) case ESR_ELx_EC_BTI: el1_bti(regs, esr); break; + case ESR_ELx_EC_GCS: + el1_gcs(regs, esr); + break; case ESR_ELx_EC_BREAKPT_CUR: case ESR_ELx_EC_SOFTSTP_CUR: case ESR_ELx_EC_WATCHPT_CUR: @@ -684,6 +696,14 @@ static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr) exit_to_user_mode(regs); }
+static void noinstr el0_gcs(struct pt_regs *regs, unsigned long esr) +{ + enter_from_user_mode(regs); + local_daif_restore(DAIF_PROCCTX); + do_el0_gcs(regs, esr); + exit_to_user_mode(regs); +} + static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr) { enter_from_user_mode(regs); @@ -766,6 +786,9 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs) case ESR_ELx_EC_MOPS: el0_mops(regs, esr); break; + case ESR_ELx_EC_GCS: + el0_gcs(regs, esr); + break; case ESR_ELx_EC_BREAKPT_LOW: case ESR_ELx_EC_SOFTSTP_LOW: case ESR_ELx_EC_WATCHPT_LOW: diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c index 9e22683aa921..d410dcc12ed8 100644 --- a/arch/arm64/kernel/traps.c +++ b/arch/arm64/kernel/traps.c @@ -500,6 +500,16 @@ void do_el1_bti(struct pt_regs *regs, unsigned long esr) die("Oops - BTI", regs, esr); }
+void do_el0_gcs(struct pt_regs *regs, unsigned long esr) +{ + force_signal_inject(SIGSEGV, SEGV_CPERR, regs->pc, 0); +} + +void do_el1_gcs(struct pt_regs *regs, unsigned long esr) +{ + die("Oops - GCS", regs, esr); +} + void do_el0_fpac(struct pt_regs *regs, unsigned long esr) { force_signal_inject(SIGILL, ILL_ILLOPN, regs->pc, esr); @@ -838,6 +848,7 @@ static const char *esr_class_str[] = { [ESR_ELx_EC_MOPS] = "MOPS", [ESR_ELx_EC_FP_EXC32] = "FP (AArch32)", [ESR_ELx_EC_FP_EXC64] = "FP (AArch64)", + [ESR_ELx_EC_GCS] = "Guarded Control Stack", [ESR_ELx_EC_SERROR] = "SError", [ESR_ELx_EC_BREAKPT_LOW] = "Breakpoint (lower EL)", [ESR_ELx_EC_BREAKPT_CUR] = "Breakpoint (current EL)",
On Thu, Aug 01, 2024 at 01:06:44PM +0100, Mark Brown wrote:
A new exception code is defined for GCS-specific faults other than standard load/store faults, for example GCS token validation failures; add handling for this. These faults are reported to userspace as segfaults with code SEGV_CPERR (protection error), mirroring the reporting for x86 shadow stack errors.
GCS faults due to memory load/store operations generate data aborts with a flag set; these will be handled separately as part of the data abort handling.
Since we do not currently enable GCS for EL1 we should not get any faults there, but while we're at it we wire things up, treating any GCS fault at EL1 as fatal.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
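To illustrate what userspace sees, a sketch of a handler observing the new si_code. SEGV_CPERR comes from the generic siginfo uapi; the fallback define here is an assumption for older headers, and the fprintf() is for demonstration only since it is not async-signal-safe:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

#ifndef SEGV_CPERR
#define SEGV_CPERR	10
#endif

static void handler(int sig, siginfo_t *info, void *ctx)
{
	/* Distinguish control stack protection errors from other segfaults. */
	if (info->si_code == SEGV_CPERR)
		fprintf(stderr, "GCS protection error at %p\n", info->si_addr);
	_exit(1);
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = handler,
		.sa_flags = SA_SIGINFO,
	};

	sigaction(SIGSEGV, &sa, NULL);
	/* ... run code which may take a GCS fault ... */
	return 0;
}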
All GCS operations at EL0 must happen on a page which is marked as having UnprivGCS access, including read operations. If a GCS operation attempts to access a page without this then it will generate a data abort with the GCS bit set in ESR_EL1.ISS2.
EL0 may validly generate such faults, for example due to copy-on-write, which will cause the GCS data to be stored in a read only page with no GCS permissions until the actual copy happens. Since UnprivGCS allows both reads and writes to the GCS (though only through GCS operations) we need to ensure that the memory management subsystem handles GCS accesses as writes at all times. Do this by adding FAULT_FLAG_WRITE to any GCS page faults, adding handling to ensure that invalid cases are identified as such early so the memory management core does not think they will succeed. The core cannot distinguish between VMAs which are generally writeable and VMAs which are only writeable through GCS operations.
EL1 may validly write to the EL0 GCS for management purposes (e.g. while initialising with cap tokens).
We also report any GCS faults in VMAs not marked as part of a GCS as access violations, causing a fault to be delivered to userspace if it attempts to do GCS operations outside a GCS.
Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/mm/fault.c | 42 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 42 insertions(+)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 451ba7cbd5ad..0973dd09f11a 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -486,6 +486,14 @@ static void do_bad_area(unsigned long far, unsigned long esr, } }
+static bool is_gcs_fault(unsigned long esr) +{ + if (!esr_is_data_abort(esr)) + return false; + + return ESR_ELx_ISS2(esr) & ESR_ELx_GCS; +} + static bool is_el0_instruction_abort(unsigned long esr) { return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW; @@ -500,6 +508,25 @@ static bool is_write_abort(unsigned long esr) return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM); }
+static bool is_invalid_gcs_access(struct vm_area_struct *vma, u64 esr) +{ + if (!system_supports_gcs()) + return false; + + if (unlikely(is_gcs_fault(esr))) { + /* GCS accesses must be performed on a GCS page */ + if (!(vma->vm_flags & VM_SHADOW_STACK)) + return true; + if (!(vma->vm_flags & VM_WRITE)) + return true; + } else if (unlikely(vma->vm_flags & VM_SHADOW_STACK)) { + /* Only GCS operations can write to a GCS page */ + return is_write_abort(esr); + } + + return false; +} + static int __kprobes do_page_fault(unsigned long far, unsigned long esr, struct pt_regs *regs) { @@ -535,6 +562,14 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, /* It was exec fault */ vm_flags = VM_EXEC; mm_flags |= FAULT_FLAG_INSTRUCTION; + } else if (is_gcs_fault(esr)) { + /* + * The GCS permission on a page implies both read and + * write so always handle any GCS fault as a write fault, + * we need to trigger CoW even for GCS reads. + */ + vm_flags = VM_WRITE; + mm_flags |= FAULT_FLAG_WRITE; } else if (is_write_abort(esr)) { /* It was write fault */ vm_flags = VM_WRITE; @@ -568,6 +603,13 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, if (!vma) goto lock_mmap;
+ if (is_invalid_gcs_access(vma, esr)) { + vma_end_read(vma); + fault = 0; + si_code = SEGV_ACCERR; + goto bad_area; + } + if (!(vma->vm_flags & vm_flags)) { vma_end_read(vma); fault = 0;
On Thu, Aug 01, 2024 at 01:06:45PM +0100, Mark Brown wrote:
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 451ba7cbd5ad..0973dd09f11a 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -486,6 +486,14 @@ static void do_bad_area(unsigned long far, unsigned long esr, } } +static bool is_gcs_fault(unsigned long esr) +{
- if (!esr_is_data_abort(esr))
return false;
- return ESR_ELx_ISS2(esr) & ESR_ELx_GCS;
+}
static bool is_el0_instruction_abort(unsigned long esr) { return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW; @@ -500,6 +508,25 @@ static bool is_write_abort(unsigned long esr) return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM); } +static bool is_invalid_gcs_access(struct vm_area_struct *vma, u64 esr) +{
- if (!system_supports_gcs())
return false;
- if (unlikely(is_gcs_fault(esr))) {
/* GCS accesses must be performed on a GCS page */
if (!(vma->vm_flags & VM_SHADOW_STACK))
return true;
if (!(vma->vm_flags & VM_WRITE))
return true;
Do we need the VM_WRITE check here? Further down in do_page_fault(), we already do the check as we set vm_flags = VM_WRITE.
- } else if (unlikely(vma->vm_flags & VM_SHADOW_STACK)) {
/* Only GCS operations can write to a GCS page */
return is_write_abort(esr);
- }
- return false;
+}
static int __kprobes do_page_fault(unsigned long far, unsigned long esr, struct pt_regs *regs) { @@ -535,6 +562,14 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, /* It was exec fault */ vm_flags = VM_EXEC; mm_flags |= FAULT_FLAG_INSTRUCTION;
- } else if (is_gcs_fault(esr)) {
/*
* The GCS permission on a page implies both read and
* write so always handle any GCS fault as a write fault,
* we need to trigger CoW even for GCS reads.
*/
vm_flags = VM_WRITE;
mm_flags |= FAULT_FLAG_WRITE; } else if (is_write_abort(esr)) { /* It was write fault */ vm_flags = VM_WRITE;
@@ -568,6 +603,13 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr, if (!vma) goto lock_mmap;
- if (is_invalid_gcs_access(vma, esr)) {
vma_end_read(vma);
fault = 0;
si_code = SEGV_ACCERR;
goto bad_area;
- }
- if (!(vma->vm_flags & vm_flags)) { vma_end_read(vma); fault = 0;
This check I mentioned above.
I was wondering whether we should prevent mprotect(PROT_READ) on the GCS page. But I guess that's fine; we'll SIGSEGV later if we get an invalid GCS access.
On Mon, Aug 19, 2024 at 10:17:52AM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:45PM +0100, Mark Brown wrote:
+static bool is_invalid_gcs_access(struct vm_area_struct *vma, u64 esr) +{
- if (unlikely(is_gcs_fault(esr))) {
/* GCS accesses must be performed on a GCS page */
if (!(vma->vm_flags & VM_SHADOW_STACK))
return true;
if (!(vma->vm_flags & VM_WRITE))
return true;
Do we need the VM_WRITE check here? Further down in do_page_fault(), we already do the check as we set vm_flags = VM_WRITE.
if (!(vma->vm_flags & vm_flags)) { vma_end_read(vma); fault = 0;
It looks bitrotted, yes.
I was wondering whether we should prevent mprotect(PROT_READ) on the GCS page. But I guess that's fine; we'll SIGSEGV later if we get an invalid GCS access.
Yeah, that doesn't seem like a particular problem - the concern is adding rather than removing GCS.
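Following the above, a sketch of the helper with the bitrotted VM_WRITE check dropped, since the generic vm_flags check in do_page_fault() already covers it; this is the patch's own logic minus that branch, not a final revision:

static bool is_invalid_gcs_access(struct vm_area_struct *vma, u64 esr)
{
	if (!system_supports_gcs())
		return false;

	if (unlikely(is_gcs_fault(esr))) {
		/* GCS accesses must be performed on a GCS page */
		if (!(vma->vm_flags & VM_SHADOW_STACK))
			return true;
	} else if (unlikely(vma->vm_flags & VM_SHADOW_STACK)) {
		/* Only GCS operations can write to a GCS page */
		return is_write_abort(esr);
	}

	return false;
}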
There are two registers controlling the GCS state of EL0: GCSPR_EL0, which is the current GCS pointer, and GCSCRE0_EL1, which has enable bits for the specific GCS functionality enabled for EL0. Manage these on context switch and process lifetime events; GCS is reset on exec(). Also ensure that any changes to the GCS memory are visible to other PEs and that changes from other PEs are visible on this one by issuing a GCSB DSYNC when moving to or from a thread with GCS.
Since the current GCS configuration of a thread will be visible to userspace we store the configuration in the format used with userspace and provide a helper which configures the system register as needed.
On systems that support GCS we always allow access to GCSPR_EL0; this facilitates reporting of GCS faults if userspace implements disabling of GCS on error - the GCS can still be discovered and examined even if GCS has been disabled.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/gcs.h | 24 ++++++++++++++++ arch/arm64/include/asm/processor.h | 6 ++++ arch/arm64/kernel/process.c | 56 ++++++++++++++++++++++++++++++++++++++ arch/arm64/mm/Makefile | 1 + arch/arm64/mm/gcs.c | 39 ++++++++++++++++++++++++++ 5 files changed, 126 insertions(+)
diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h index 7c5e95218db6..04594ef59dad 100644 --- a/arch/arm64/include/asm/gcs.h +++ b/arch/arm64/include/asm/gcs.h @@ -48,4 +48,28 @@ static inline u64 gcsss2(void) return Xt; }
+#ifdef CONFIG_ARM64_GCS + +static inline bool task_gcs_el0_enabled(struct task_struct *task) +{ + return current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE; +} + +void gcs_set_el0_mode(struct task_struct *task); +void gcs_free(struct task_struct *task); +void gcs_preserve_current_state(void); + +#else + +static inline bool task_gcs_el0_enabled(struct task_struct *task) +{ + return false; +} + +static inline void gcs_set_el0_mode(struct task_struct *task) { } +static inline void gcs_free(struct task_struct *task) { } +static inline void gcs_preserve_current_state(void) { } + +#endif + #endif diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index f77371232d8c..c55e3600604a 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -184,6 +184,12 @@ struct thread_struct { u64 sctlr_user; u64 svcr; u64 tpidr2_el0; +#ifdef CONFIG_ARM64_GCS + unsigned int gcs_el0_mode; + u64 gcspr_el0; + u64 gcs_base; + u64 gcs_size; +#endif };
static inline unsigned int thread_get_vl(struct thread_struct *thread, diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 4ae31b7af6c3..5f00cb0da9c3 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -48,6 +48,7 @@ #include <asm/cacheflush.h> #include <asm/exec.h> #include <asm/fpsimd.h> +#include <asm/gcs.h> #include <asm/mmu_context.h> #include <asm/mte.h> #include <asm/processor.h> @@ -271,12 +272,32 @@ static void flush_tagged_addr_state(void) clear_thread_flag(TIF_TAGGED_ADDR); }
+#ifdef CONFIG_ARM64_GCS + +static void flush_gcs(void) +{ + if (!system_supports_gcs()) + return; + + gcs_free(current); + current->thread.gcs_el0_mode = 0; + write_sysreg_s(0, SYS_GCSCRE0_EL1); + write_sysreg_s(0, SYS_GCSPR_EL0); +} + +#else + +static void flush_gcs(void) { } + +#endif + void flush_thread(void) { fpsimd_flush_thread(); tls_thread_flush(); flush_ptrace_hw_breakpoint(current); flush_tagged_addr_state(); + flush_gcs(); }
void arch_release_task_struct(struct task_struct *tsk) @@ -471,6 +492,40 @@ static void entry_task_switch(struct task_struct *next) __this_cpu_write(__entry_task, next); }
+#ifdef CONFIG_ARM64_GCS + +void gcs_preserve_current_state(void) +{ + if (task_gcs_el0_enabled(current)) + current->thread.gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0); +} + +static void gcs_thread_switch(struct task_struct *next) +{ + if (!system_supports_gcs()) + return; + + gcs_preserve_current_state(); + + gcs_set_el0_mode(next); + write_sysreg_s(next->thread.gcspr_el0, SYS_GCSPR_EL0); + + /* + * Ensure that GCS changes are observable by/from other PEs in + * case of migration. + */ + if (task_gcs_el0_enabled(current) || task_gcs_el0_enabled(next)) + gcsb_dsync(); +} + +#else + +static void gcs_thread_switch(struct task_struct *next) +{ +} + +#endif + /* * ARM erratum 1418040 handling, affecting the 32bit view of CNTVCT. * Ensure access is disabled when switching to a 32bit task, ensure @@ -530,6 +585,7 @@ struct task_struct *__switch_to(struct task_struct *prev, ssbs_thread_switch(next); erratum_1418040_thread_switch(next); ptrauth_thread_switch_user(next); + gcs_thread_switch(next);
/* * Complete any pending TLB or cache maintenance on this CPU in case diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index 60454256945b..1a7b3a2f21e6 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -11,6 +11,7 @@ obj-$(CONFIG_TRANS_TABLE) += trans_pgd.o obj-$(CONFIG_TRANS_TABLE) += trans_pgd-asm.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_ARM64_MTE) += mteswap.o +obj-$(CONFIG_ARM64_GCS) += gcs.o KASAN_SANITIZE_physaddr.o += n
obj-$(CONFIG_KASAN) += kasan_init.o diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c new file mode 100644 index 000000000000..b0a67efc522b --- /dev/null +++ b/arch/arm64/mm/gcs.c @@ -0,0 +1,39 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <linux/mm.h> +#include <linux/mman.h> +#include <linux/syscalls.h> +#include <linux/types.h> + +#include <asm/cpufeature.h> +#include <asm/page.h> + +/* + * Apply the GCS mode configured for the specified task to the + * hardware. + */ +void gcs_set_el0_mode(struct task_struct *task) +{ + u64 gcscre0_el1 = GCSCRE0_EL1_nTR; + + if (task->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE) + gcscre0_el1 |= GCSCRE0_EL1_RVCHKEN | GCSCRE0_EL1_PCRSEL; + + if (task->thread.gcs_el0_mode & PR_SHADOW_STACK_WRITE) + gcscre0_el1 |= GCSCRE0_EL1_STREn; + + if (task->thread.gcs_el0_mode & PR_SHADOW_STACK_PUSH) + gcscre0_el1 |= GCSCRE0_EL1_PUSHMEn; + + write_sysreg_s(gcscre0_el1, SYS_GCSCRE0_EL1); +} + +void gcs_free(struct task_struct *task) +{ + if (task->thread.gcs_base) + vm_munmap(task->thread.gcs_base, task->thread.gcs_size); + + task->thread.gcspr_el0 = 0; + task->thread.gcs_base = 0; + task->thread.gcs_size = 0; +}
On Thu, Aug 01, 2024 at 01:06:46PM +0100, Mark Brown wrote:
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 4ae31b7af6c3..5f00cb0da9c3 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c
[...]
+static void gcs_thread_switch(struct task_struct *next) +{
- if (!system_supports_gcs())
return;
- gcs_preserve_current_state();
- gcs_set_el0_mode(next);
- write_sysreg_s(next->thread.gcspr_el0, SYS_GCSPR_EL0);
- /*
* Ensure that GCS changes are observable by/from other PEs in
* case of migration.
*/
- if (task_gcs_el0_enabled(current) || task_gcs_el0_enabled(next))
gcsb_dsync();
Could we do the sysreg writing under this 'if' block? If no app is using GCS (which would be the case for a while), it looks like unnecessary sysreg accesses.
What's the GCSB DSYNC supposed to do here? The Arm ARM talks about ordering between GCS memory effects and other memory effects. I haven't looked at the memory model in detail yet (D11.9.1) but AFAICT it has nothing to do with the system registers. We'll need this barrier when ordering is needed between explicit or implicit (e.g. BL) GCS accesses and the explicit classic memory accesses. Paging comes to mind, so maybe flush_dcache_page() would need this barrier. ptrace() is another case if the memory accessed is a GCS page. I can see you added it in other places, I'll have a look as I go through the rest. But I don't think one is needed here.
On Mon, Aug 19, 2024 at 12:46:13PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:46PM +0100, Mark Brown wrote:
- /*
* Ensure that GCS changes are observable by/from other PEs in
* case of migration.
*/
- if (task_gcs_el0_enabled(current) || task_gcs_el0_enabled(next))
gcsb_dsync();
Could we do the sysreg writing under this 'if' block? If no app is using GCS (which would be the case for a while), it looks like unnecessary sysreg accesses.
Yes, that should be fine I think.
What's the GCSB DSYNC supposed to do here? The Arm ARM talks about ordering between GCS memory effects and other memory effects. I haven't looked at the memory model in detail yet (D11.9.1) but AFAICT it has nothing to do with the system registers. We'll need this barrier when ordering is needed between explicit or implicit (e.g. BL) GCS accesses and the explicit classic memory accesses. Paging comes to mind, so maybe flush_dcache_page() would need this barrier. ptrace() is another case if the memory accessed is a GCS page. I can see you added it in other places, I'll have a look as I go through the rest. But I don't think one is needed here.
It's not particularly for the system registers, it's there so that anything else that looks at the task's GCS sees the current state. I'm pretty confident this is excessive, the goal was to err on the side of correctness and then relax later.
On Mon, Aug 19, 2024 at 04:44:42PM +0100, Mark Brown wrote:
On Mon, Aug 19, 2024 at 12:46:13PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:46PM +0100, Mark Brown wrote:
- /*
* Ensure that GCS changes are observable by/from other PEs in
* case of migration.
*/
- if (task_gcs_el0_enabled(current) || task_gcs_el0_enabled(next))
gcsb_dsync();
[...]
What's the GCSB DSYNC supposed to do here? The Arm ARM talks about ordering between GCS memory effects and other memory effects. I haven't looked at the memory model in detail yet (D11.9.1) but AFAICT it has nothing to do with the system registers. We'll need this barrier when ordering is needed between explicit or implicit (e.g. BL) GCS accesses and the explicit classic memory accesses. Paging comes to mind, so maybe flush_dcache_page() would need this barrier. ptrace() is another case if the memory accessed is a GCS page. I can see you added it in other places, I'll have a look as I go through the rest. But I don't think one is needed here.
It's not particularly for the system registers, it's there so that anything else that looks at the task's GCS sees the current state.
Ah, so that's there to ensure that any writes on the CPU to the GCS stack would be observable if the task appears on a different CPU (together with the additional classic ordering/spinlocks used for the run queues). Maybe update the comment to say "GCS memory effects" instead of "GCS changes". I read the latter as GCS sysreg changes. Something like below would make it clearer:
/*
 * Ensure that GCS memory effects of the 'prev' thread are
 * ordered before other memory accesses with release semantics
 * (or preceded by a DMB) on the current PE. In addition, any
 * memory accesses with acquire semantics (or succeeded by a
 * DMB) are ordered before GCS memory effects of the 'next'
 * thread. This will ensure that the GCS memory effects are
 * visible to other PEs in case of migration.
 */
Feel free to rephrase as you see fit.
I'm pretty confident this is excessive, the goal was to err on the side of correctness and then relax later.
I think we are missing some. Paging should be ok as we have a pte change and TLBI and IIRC the same rules as for standard memory accesses apply. ptrace() memory accesses may need something though I'm fine with considering this a best effort (we can't guarantee anyway if any accesses are on different CPUs). I haven't got to the signal handling patch yet.
On Mon, Aug 19, 2024 at 04:44:52PM +0100, Mark Brown wrote:
On Mon, Aug 19, 2024 at 12:46:13PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:46PM +0100, Mark Brown wrote:
- /*
* Ensure that GCS changes are observable by/from other PEs in
* case of migration.
*/
- if (task_gcs_el0_enabled(current) || task_gcs_el0_enabled(next))
gcsb_dsync();
Could we do the sysreg writing under this 'if' block? If no app is using GCS (which would be the case for a while), it looks like unnecessary sysreg accesses.
Yes, that should be fine I think.
I forgot when writing the above that we always allow reads from GCSPR_EL0 in order to avoid corner cases for unwinders in the case of asynchronous disable. I'd expect that to be cheap to access though.
On Tue, Aug 20, 2024 at 06:56:19PM +0100, Mark Brown wrote:
On Mon, Aug 19, 2024 at 04:44:52PM +0100, Mark Brown wrote:
On Mon, Aug 19, 2024 at 12:46:13PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:46PM +0100, Mark Brown wrote:
- /*
* Ensure that GCS changes are observable by/from other PEs in
* case of migration.
*/
- if (task_gcs_el0_enabled(current) || task_gcs_el0_enabled(next))
gcsb_dsync();
Could we do the sysreg writing under this 'if' block? If no app is using GCS (which would be the case for a while), it looks like unnecessary sysreg accesses.
Yes, that should be fine I think.
I forgot when writing the above that we always allow reads from GCSPR_EL0 in order to avoid corner cases for unwinders in the case of asynchronous disable. I'd expect that to be cheap to access though.
But then gcs_preserve_current_state() doesn't save the GCSPR_EL0 value if the shadow stack was disabled. At the subsequent switch to this task, we write some stale value.
On Wed, Aug 21, 2024 at 09:50:22AM +0100, Catalin Marinas wrote:
On Tue, Aug 20, 2024 at 06:56:19PM +0100, Mark Brown wrote:
I forgot when writing the above that we always allow reads from GCSPR_EL0 in order to avoid corner cases for unwinders in the case of asynchronous disable. I'd expect that to be cheap to access though.
But then gcs_preserve_current_state() doesn't save the GCSPR_EL0 value if the shadow stack was disabled. At the subsequent switch to this task, we write some stale value.
True, we should make the disable save the current value.
When a new thread is created by a thread with GCS enabled the GCS needs to be specified along with the regular stack. clone3() has been extended to support this case, allowing userspace to explicitly specify the size and location of the GCS. The specified GCS must have a valid GCS token at the top of the stack, as though userspace were pivoting to the new GCS. This will be consumed on use. At present we do not atomically consume the token, this will be addressed in a future revision.
Unfortunately plain clone() is not extensible and existing clone3() users will not specify a shadow stack, so all existing code would be broken if we mandated specifying the stack explicitly. For compatibility with these cases and also with x86 (which did not initially implement clone3() support for shadow stacks), if no GCS is specified we implicitly allocate one when a thread which has GCS enabled is created. We follow the extensively discussed x86 implementation and allocate min(RLIMIT_STACK, 2G). Since the GCS only stores the call stack and not any variables this should be more than sufficient for most applications.
GCSs allocated via this mechanism will be freed when the thread exits, those explicitly configured by the user will not.
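As a rough illustration of the intended userspace flow, here is a sketch of a thread spawn using the extended interface. This is not code from the series: the shadow_stack and shadow_stack_size fields are the ones proposed here, the struct layout is an assumption based on the existing clone_args with the new fields appended, and a kernel carrying these patches is required.

#define _GNU_SOURCE
#include <sched.h>		/* CLONE_* flags */
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Assumed layout: existing clone_args with the proposed fields appended */
struct clone_args_gcs {
	uint64_t flags;
	uint64_t pidfd;
	uint64_t child_tid;
	uint64_t parent_tid;
	uint64_t exit_signal;
	uint64_t stack;
	uint64_t stack_size;
	uint64_t tls;
	uint64_t set_tid;
	uint64_t set_tid_size;
	uint64_t cgroup;
	uint64_t shadow_stack;		/* base of the caller-provided GCS */
	uint64_t shadow_stack_size;	/* non-zero selects the explicit path */
};

static long spawn_thread_with_gcs(void *stack, size_t stack_size,
				  void *gcs, size_t gcs_size)
{
	struct clone_args_gcs args;

	memset(&args, 0, sizeof(args));
	args.flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND |
		     CLONE_THREAD;
	args.stack = (uintptr_t)stack;
	args.stack_size = stack_size;
	/* The GCS must already hold a valid cap token at its top */
	args.shadow_stack = (uintptr_t)gcs;
	args.shadow_stack_size = gcs_size;

	return syscall(SYS_clone3, &args, sizeof(args));
}

The GCS passed in would typically come from map_shadow_stack() so that the required cap token is already present at its top.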
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/gcs.h | 9 +++ arch/arm64/kernel/process.c | 29 +++++++++ arch/arm64/mm/gcs.c | 142 +++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 180 insertions(+)
diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h index 04594ef59dad..c1f274fdb9c0 100644 --- a/arch/arm64/include/asm/gcs.h +++ b/arch/arm64/include/asm/gcs.h @@ -8,6 +8,8 @@ #include <asm/types.h> #include <asm/uaccess.h>
+struct kernel_clone_args; + static inline void gcsb_dsync(void) { asm volatile(".inst 0xd503227f" : : : "memory"); @@ -58,6 +60,8 @@ static inline bool task_gcs_el0_enabled(struct task_struct *task) void gcs_set_el0_mode(struct task_struct *task); void gcs_free(struct task_struct *task); void gcs_preserve_current_state(void); +unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, + const struct kernel_clone_args *args);
#else
@@ -69,6 +73,11 @@ static inline bool task_gcs_el0_enabled(struct task_struct *task) static inline void gcs_set_el0_mode(struct task_struct *task) { } static inline void gcs_free(struct task_struct *task) { } static inline void gcs_preserve_current_state(void) { } +static inline unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, + const struct kernel_clone_args *args) +{ + return -ENOTSUPP; +}
#endif
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 5f00cb0da9c3..d6d3a96cf2e4 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -285,9 +285,32 @@ static void flush_gcs(void) write_sysreg_s(0, SYS_GCSPR_EL0); }
+static int copy_thread_gcs(struct task_struct *p, + const struct kernel_clone_args *args) +{ + unsigned long gcs; + + gcs = gcs_alloc_thread_stack(p, args); + if (IS_ERR_VALUE(gcs)) + return PTR_ERR((void *)gcs); + + p->thread.gcs_el0_mode = current->thread.gcs_el0_mode; + p->thread.gcs_el0_locked = current->thread.gcs_el0_locked; + + /* Ensure the current state of the GCS is seen by CoW */ + gcsb_dsync(); + + return 0; +} + #else
static void flush_gcs(void) { } +static int copy_thread_gcs(struct task_struct *p, + const struct kernel_clone_args *args) +{ + return 0; +}
#endif
@@ -303,6 +326,7 @@ void flush_thread(void) void arch_release_task_struct(struct task_struct *tsk) { fpsimd_release_task(tsk); + gcs_free(tsk); }
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) @@ -366,6 +390,7 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) unsigned long stack_start = args->stack; unsigned long tls = args->tls; struct pt_regs *childregs = task_pt_regs(p); + int ret;
memset(&p->thread.cpu_context, 0, sizeof(struct cpu_context));
@@ -407,6 +432,10 @@ int copy_thread(struct task_struct *p, const struct kernel_clone_args *args) p->thread.uw.tp_value = tls; p->thread.tpidr2_el0 = 0; } + + ret = copy_thread_gcs(p, args); + if (ret != 0) + return ret; } else { /* * A kthread has no context to ERET to, so ensure any buggy diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c index b0a67efc522b..b71f6b408513 100644 --- a/arch/arm64/mm/gcs.c +++ b/arch/arm64/mm/gcs.c @@ -8,6 +8,138 @@ #include <asm/cpufeature.h> #include <asm/page.h>
+static unsigned long alloc_gcs(unsigned long addr, unsigned long size) +{ + int flags = MAP_ANONYMOUS | MAP_PRIVATE; + struct mm_struct *mm = current->mm; + unsigned long mapped_addr, unused; + + if (addr) + flags |= MAP_FIXED_NOREPLACE; + + mmap_write_lock(mm); + mapped_addr = do_mmap(NULL, addr, size, PROT_READ, flags, + VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL); + mmap_write_unlock(mm); + + return mapped_addr; +} + +static unsigned long gcs_size(unsigned long size) +{ + if (size) + return PAGE_ALIGN(size); + + /* Allocate RLIMIT_STACK/2 with limits of PAGE_SIZE..2G */ + size = PAGE_ALIGN(min_t(unsigned long long, + rlimit(RLIMIT_STACK) / 2, SZ_2G)); + return max(PAGE_SIZE, size); +} + +static bool gcs_consume_token(struct mm_struct *mm, unsigned long user_addr) +{ + u64 expected = GCS_CAP(user_addr); + u64 val; + int ret; + + /* This should really be an atomic cmpxchg. It is not. */ + ret = access_remote_vm(mm, user_addr, &val, sizeof(val), + FOLL_FORCE); + if (ret != sizeof(val)) + return false; + + if (val != expected) + return false; + + val = 0; + ret = access_remote_vm(mm, user_addr, &val, sizeof(val), + FOLL_FORCE | FOLL_WRITE); + if (ret != sizeof(val)) + return false; + + return true; +} + +int arch_shstk_post_fork(struct task_struct *tsk, + struct kernel_clone_args *args) +{ + struct mm_struct *mm; + unsigned long addr, size, gcspr_el0; + int ret = 0; + + mm = get_task_mm(tsk); + if (!mm) + return -EFAULT; + + addr = args->shadow_stack; + size = args->shadow_stack_size; + + /* + * There should be a token, and there is likely to be an optional + * end of stack marker above it. + */ + gcspr_el0 = addr + size - (2 * sizeof(u64)); + if (!gcs_consume_token(mm, gcspr_el0)) { + gcspr_el0 += sizeof(u64); + if (!gcs_consume_token(mm, gcspr_el0)) { + ret = -EINVAL; + goto out; + } + } + + tsk->thread.gcspr_el0 = gcspr_el0 + sizeof(u64); + +out: + mmput(mm); + + return ret; +} + +unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, + const struct kernel_clone_args *args) +{ + unsigned long addr, size; + + /* If the user specified a GCS use it. */ + if (args->shadow_stack_size) { + if (!system_supports_gcs()) + return (unsigned long)ERR_PTR(-EINVAL); + + /* GCSPR_EL0 will be set up when verifying token post fork */ + addr = args->shadow_stack; + } else { + + /* + * Otherwise fall back to legacy clone() support and + * implicitly allocate a GCS if we need a new one. + */ + + if (!system_supports_gcs()) + return 0; + + if (!task_gcs_el0_enabled(tsk)) + return 0; + + if ((args->flags & (CLONE_VFORK | CLONE_VM)) != CLONE_VM) { + tsk->thread.gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0); + return 0; + } + + size = args->stack_size; + + size = gcs_size(size); + addr = alloc_gcs(0, size); + if (IS_ERR_VALUE(addr)) + return addr; + + tsk->thread.gcs_base = addr; + tsk->thread.gcs_size = size; + tsk->thread.gcspr_el0 = addr + size - sizeof(u64); + } + + return addr; +} + /* * Apply the GCS mode configured for the specified task to the * hardware. @@ -30,6 +162,16 @@ void gcs_set_el0_mode(struct task_struct *task)
void gcs_free(struct task_struct *task) { + + /* + * When fork() with CLONE_VM fails, the child (tsk) already + * has a GCS allocated, and exit_thread() calls this function + * to free it. In this case the parent (current) and the + * child share the same mm struct. + */ + if (!task->mm || task->mm != current->mm) + return; + if (task->thread.gcs_base) vm_munmap(task->thread.gcs_base, task->thread.gcs_size);
On Thu, Aug 01, 2024 at 01:06:47PM +0100, Mark Brown wrote:
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 5f00cb0da9c3..d6d3a96cf2e4 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -285,9 +285,32 @@ static void flush_gcs(void) write_sysreg_s(0, SYS_GCSPR_EL0); } +static int copy_thread_gcs(struct task_struct *p,
const struct kernel_clone_args *args)
+{
- unsigned long gcs;
- gcs = gcs_alloc_thread_stack(p, args);
- if (IS_ERR_VALUE(gcs))
return PTR_ERR((void *)gcs);
Is 0 an ok value here? I can see further down that gcs_alloc_thread_stack() may return 0.
- p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
- p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
- /* Ensure the current state of the GCS is seen by CoW */
- gcsb_dsync();
I don't get this barrier. What does it have to do with CoW, which memory effects is it trying to order?
diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c index b0a67efc522b..b71f6b408513 100644 --- a/arch/arm64/mm/gcs.c +++ b/arch/arm64/mm/gcs.c @@ -8,6 +8,138 @@ #include <asm/cpufeature.h> #include <asm/page.h> +static unsigned long alloc_gcs(unsigned long addr, unsigned long size) +{
- int flags = MAP_ANONYMOUS | MAP_PRIVATE;
- struct mm_struct *mm = current->mm;
- unsigned long mapped_addr, unused;
- if (addr)
flags |= MAP_FIXED_NOREPLACE;
- mmap_write_lock(mm);
- mapped_addr = do_mmap(NULL, addr, size, PROT_READ, flags,
VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);
- mmap_write_unlock(mm);
- return mapped_addr;
+}
+static unsigned long gcs_size(unsigned long size) +{
- if (size)
return PAGE_ALIGN(size);
- /* Allocate RLIMIT_STACK/2 with limits of PAGE_SIZE..2G */
- size = PAGE_ALIGN(min_t(unsigned long long,
rlimit(RLIMIT_STACK) / 2, SZ_2G));
- return max(PAGE_SIZE, size);
+}
So we still have RLIMIT_STACK/2. I thought we got rid of that and just went with RLIMIT_STACK (or I misremember).
+static bool gcs_consume_token(struct mm_struct *mm, unsigned long user_addr) +{
- u64 expected = GCS_CAP(user_addr);
- u64 val;
- int ret;
- /* This should really be an atomic cmpxchg. It is not. */
- ret = access_remote_vm(mm, user_addr, &val, sizeof(val),
FOLL_FORCE);
- if (ret != sizeof(val))
return false;
- if (val != expected)
return false;
- val = 0;
- ret = access_remote_vm(mm, user_addr, &val, sizeof(val),
FOLL_FORCE | FOLL_WRITE);
- if (ret != sizeof(val))
return false;
- return true;
+}
As per the clone3() thread, I think we should try to use get_user_page_vma_remote() and do a cmpxchg() directly.
How does the user write the initial token? Do we need any barriers before/after consuming the token?
On Mon, Aug 19, 2024 at 01:04:18PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:47PM +0100, Mark Brown wrote:
+static int copy_thread_gcs(struct task_struct *p,
const struct kernel_clone_args *args)
+{
- unsigned long gcs;
- gcs = gcs_alloc_thread_stack(p, args);
- if (IS_ERR_VALUE(gcs))
return PTR_ERR((void *)gcs);
Is 0 an ok value here? I can see further down that gcs_alloc_thread_stack() may return 0.
Yes, it's fine for a thread not to have a GCS.
- p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
- p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
- /* Ensure the current state of the GCS is seen by CoW */
- gcsb_dsync();
I don't get this barrier. What does it have to do with CoW, which memory effects is it trying to order?
Yeah, I can't remember what that's supposed to be protecting.
- /* Allocate RLIMIT_STACK/2 with limits of PAGE_SIZE..2G */
- size = PAGE_ALIGN(min_t(unsigned long long,
rlimit(RLIMIT_STACK) / 2, SZ_2G));
- return max(PAGE_SIZE, size);
+}
So we still have RLIMIT_STACK/2. I thought we got rid of that and just went with RLIMIT_STACK (or I misremember).
I honestly can't remember either way, it's quite possible it's changed multiple times. I don't have super strong feelings on the particular value here.
+static bool gcs_consume_token(struct mm_struct *mm, unsigned long user_addr) +{
As per the clone3() thread, I think we should try to use get_user_page_vma_remote() and do a cmpxchg() directly.
I've left this as is for now, mainly because it keeps the code in line with x86 and I can't directly test the x86 code. IIRC we can't just do a standard userspace cmpxchg since that will access as though we were at EL0 but EL0 doesn't have standard write permission for the page.
How does the user write the initial token? Do we need any barriers before/after consuming the token?
The token is created by map_shadow_stack() or as part of a GCS pivot. A sync beforehand is probably safer, with the current code we'll have one when we switch to the task.
On Mon, Aug 19, 2024 at 04:57:08PM +0100, Mark Brown wrote:
On Mon, Aug 19, 2024 at 01:04:18PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:47PM +0100, Mark Brown wrote:
+static int copy_thread_gcs(struct task_struct *p,
const struct kernel_clone_args *args)
+{
- unsigned long gcs;
- gcs = gcs_alloc_thread_stack(p, args);
- if (IS_ERR_VALUE(gcs))
return PTR_ERR((void *)gcs);
Is 0 an ok value here? I can see further down that gcs_alloc_thread_stack() may return 0.
Yes, it's fine for a thread not to have a GCS.
OK, so we only get a 0 here if the gcs_{base,size} has not been initialised. Looks fine.
- p->thread.gcs_el0_mode = current->thread.gcs_el0_mode;
- p->thread.gcs_el0_locked = current->thread.gcs_el0_locked;
- /* Ensure the current state of the GCS is seen by CoW */
- gcsb_dsync();
I don't get this barrier. What does it have to do with CoW, which memory effects is it trying to order?
Yeah, I can't remember what that's supposed to be protecting.
The GCS memory writes in the parent must indeed be visible in the child that could start on a different CPU. So, in principle, we need some form of ordering similar to the context switch. However, in case of classic fork(), the child won't be started until the PTEs have been made read-only and a TLBI issued. This would ensure the completion of any GCS memory accesses in the parent (at least that's my reading of the Arm ARM).
If we have normal thread creation without CoW, is the parent writing anything to the stack that the new thread needs to observe? The map_shadow_stack() call will cause a GCSSTTR and this wouldn't be ordered with subsequent memory writes. But we already have a GCSB DSYNC in map_shadow_stack() after put_user_gcs().
My conclusion is that we don't need this barrier.
- /* Allocate RLIMIT_STACK/2 with limits of PAGE_SIZE..2G */
- size = PAGE_ALIGN(min_t(unsigned long long,
rlimit(RLIMIT_STACK) / 2, SZ_2G));
- return max(PAGE_SIZE, size);
+}
So we still have RLIMIT_STACK/2. I thought we got rid of that and just went with RLIMIT_STACK (or I misremember).
I honestly can't remember either way, it's quite possible it's changed multiple times. I don't have super strong feelings on the particular value here.
The half size looks a lot more arbitrary to me than picking the same size as the stack. So I'd go with RLIMIT_STACK.
+static bool gcs_consume_token(struct mm_struct *mm, unsigned long user_addr) +{
As per the clone3() thread, I think we should try to use get_user_page_vma_remote() and do a cmpxchg() directly.
I've left this as is for now, mainly because it keeps the code in line with x86 and I can't directly test the x86 code.
I thought for the clone3() x86 code we'll need the remote vma, so we have to use the get_user_page_vma_remote() API anyway.
IIRC we can't just do a standard userspace cmpxchg since that will access as though we were at EL0 but EL0 doesn't have standard write permission for the page.
Correct but GUP goes through the kernel mapping, not the user one. So get_user_page_vma_remote() returns a page and you just do a classic cmpxchg() at page_address() (plus some offset).
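For concreteness, a sketch of what that could look like. This is kernel-side, untested pseudocode for the suggestion rather than code from the series; the locking is simplified and the function name is reused purely for illustration:

/*
 * Sketch of the suggested approach: pin the page with GUP and
 * cmpxchg() the token through the kernel linear map, avoiding the EL0
 * permission semantics of an unprivileged access to a GCS page.
 */
static bool gcs_consume_token(struct mm_struct *mm, unsigned long user_addr)
{
	u64 expected = GCS_CAP(user_addr);
	struct vm_area_struct *vma;
	struct page *page;
	u64 *tokenp;
	bool ok;

	mmap_read_lock(mm);
	page = get_user_page_vma_remote(mm, user_addr,
					FOLL_FORCE | FOLL_WRITE, &vma);
	mmap_read_unlock(mm);
	if (IS_ERR(page))
		return false;

	tokenp = page_address(page) + offset_in_page(user_addr);
	/* Atomically replace the expected cap with zero to consume it */
	ok = cmpxchg(tokenp, expected, 0) == expected;

	set_page_dirty_lock(page);
	put_page(page);

	return ok;
}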
Implement the architecture neutral prtctl() interface for setting the shadow stack status, this supports setting and reading the current GCS configuration for the current thread.
Userspace can enable basic GCS functionality and additionally enable support for GCS pushes and arbitrary GCS stores. It is expected that this prctl() will be called very early in application startup, for example by the dynamic linker, and not subsequently adjusted during normal operation. Users should carefully note that after enabling GCS for a thread, GCS will become active with no call stack, so it is not normally possible to return from the function that invoked the prctl().
State is stored per thread, enabling GCS for a thread causes a GCS to be allocated for that thread.
Userspace may lock the current GCS configuration via PR_LOCK_SHADOW_STACK_STATUS, this prevents any further changes to the GCS configuration via any means.
If GCS is not being enabled then all flags other than _LOCK are ignored, it is not possible to enable stores or pops without enabling GCS.
When disabling the GCS we do not free the allocated stack, this allows for inspection of the GCS after disabling as part of fault reporting. Since it is not an expected use case, and since it presents some complications in determining what to do with previously initialised data on the GCS, attempts to reenable GCS after this are rejected. This can be revisited if a use case arises.
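For illustration, a sketch of the expected startup usage. The PR_* names follow the generic shadow stack prctl() proposed alongside this series, the fallback values are assumptions to be checked against the uapi headers of a kernel carrying these patches, and real_main() is a hypothetical entry point:

#include <stdlib.h>
#include <sys/prctl.h>

/* Assumed values from the proposed generic uapi, for illustration only */
#ifndef PR_SET_SHADOW_STACK_STATUS
#define PR_GET_SHADOW_STACK_STATUS	74
#define PR_SET_SHADOW_STACK_STATUS	75
#define PR_LOCK_SHADOW_STACK_STATUS	76
#define PR_SHADOW_STACK_ENABLE		(1UL << 0)
#define PR_SHADOW_STACK_WRITE		(1UL << 1)
#define PR_SHADOW_STACK_PUSH		(1UL << 2)
#endif

extern int real_main(int argc, char **argv);	/* hypothetical */

static void __attribute__((noreturn)) start_with_gcs(int argc, char **argv)
{
	/* After this succeeds the thread's GCS is active but empty... */
	prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE, 0, 0, 0);

	/*
	 * ...so this frame predates the GCS and returning from it would
	 * fault. Lock the configuration and continue downwards instead,
	 * exiting from a callee rather than returning.
	 */
	prctl(PR_LOCK_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE, 0, 0, 0);

	exit(real_main(argc, argv));
}

Since the lock call locks only the bits passed in, a paranoid runtime could pass ~0UL to also lock flags it does not know about, which the implementation explicitly supports.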
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/gcs.h | 22 +++++++++++ arch/arm64/include/asm/processor.h | 1 + arch/arm64/mm/gcs.c | 81 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 104 insertions(+)
diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h index c1f274fdb9c0..48c97e63e56a 100644 --- a/arch/arm64/include/asm/gcs.h +++ b/arch/arm64/include/asm/gcs.h @@ -50,6 +50,9 @@ static inline u64 gcsss2(void) return Xt; }
+#define PR_SHADOW_STACK_SUPPORTED_STATUS_MASK \ + (PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_WRITE | PR_SHADOW_STACK_PUSH) + #ifdef CONFIG_ARM64_GCS
static inline bool task_gcs_el0_enabled(struct task_struct *task) @@ -63,6 +66,20 @@ void gcs_preserve_current_state(void); unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, const struct kernel_clone_args *args);
+static inline int gcs_check_locked(struct task_struct *task, + unsigned long new_val) +{ + unsigned long cur_val = task->thread.gcs_el0_mode; + + cur_val &= task->thread.gcs_el0_locked; + new_val &= task->thread.gcs_el0_locked; + + if (cur_val != new_val) + return -EBUSY; + + return 0; +} + #else
static inline bool task_gcs_el0_enabled(struct task_struct *task) @@ -78,6 +95,11 @@ static inline unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, { return -ENOTSUPP; } +static inline int gcs_check_locked(struct task_struct *task, + unsigned long new_val) +{ + return 0; +}
#endif
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index c55e3600604a..58eb48cd539f 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -186,6 +186,7 @@ struct thread_struct { u64 tpidr2_el0; #ifdef CONFIG_ARM64_GCS unsigned int gcs_el0_mode; + unsigned int gcs_el0_locked; u64 gcspr_el0; u64 gcs_base; u64 gcs_size; diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c index b71f6b408513..0d39829f862e 100644 --- a/arch/arm64/mm/gcs.c +++ b/arch/arm64/mm/gcs.c @@ -179,3 +179,84 @@ void gcs_free(struct task_struct *task) task->thread.gcs_base = 0; task->thread.gcs_size = 0; } + +int arch_set_shadow_stack_status(struct task_struct *task, unsigned long arg) +{ + unsigned long gcs, size; + int ret; + + if (!system_supports_gcs()) + return -EINVAL; + + if (is_compat_thread(task_thread_info(task))) + return -EINVAL; + + /* Reject unknown flags */ + if (arg & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK) + return -EINVAL; + + ret = gcs_check_locked(task, arg); + if (ret != 0) + return ret; + + /* If we are enabling GCS then make sure we have a stack */ + if (arg & PR_SHADOW_STACK_ENABLE) { + if (!task_gcs_el0_enabled(task)) { + /* Do not allow GCS to be reenabled */ + if (task->thread.gcs_base) + return -EINVAL; + + if (task != current) + return -EBUSY; + + size = gcs_size(0); + gcs = alloc_gcs(0, size); + if (!gcs) + return -ENOMEM; + + task->thread.gcspr_el0 = gcs + size - sizeof(u64); + task->thread.gcs_base = gcs; + task->thread.gcs_size = size; + if (task == current) + write_sysreg_s(task->thread.gcspr_el0, + SYS_GCSPR_EL0); + + } + } + + task->thread.gcs_el0_mode = arg; + if (task == current) + gcs_set_el0_mode(task); + + return 0; +} + +int arch_get_shadow_stack_status(struct task_struct *task, + unsigned long __user *arg) +{ + if (!system_supports_gcs()) + return -EINVAL; + + if (is_compat_thread(task_thread_info(task))) + return -EINVAL; + + return put_user(task->thread.gcs_el0_mode, arg); +} + +int arch_lock_shadow_stack_status(struct task_struct *task, + unsigned long arg) +{ + if (!system_supports_gcs()) + return -EINVAL; + + if (is_compat_thread(task_thread_info(task))) + return -EINVAL; + + /* + * We support locking unknown bits so applications can prevent + * any changes in a future proof manner. + */ + task->thread.gcs_el0_locked |= arg; + + return 0; +}
On Thu, Aug 01, 2024 at 01:06:48PM +0100, Mark Brown wrote:
Implement the architecture neutral prtctl() interface for setting the
s/prtctl/prctl/
+int arch_set_shadow_stack_status(struct task_struct *task, unsigned long arg) +{
- unsigned long gcs, size;
- int ret;
- if (!system_supports_gcs())
return -EINVAL;
- if (is_compat_thread(task_thread_info(task)))
return -EINVAL;
- /* Reject unknown flags */
- if (arg & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
return -EINVAL;
- ret = gcs_check_locked(task, arg);
- if (ret != 0)
return ret;
- /* If we are enabling GCS then make sure we have a stack */
- if (arg & PR_SHADOW_STACK_ENABLE) {
if (!task_gcs_el0_enabled(task)) {
/* Do not allow GCS to be reenabled */
if (task->thread.gcs_base)
return -EINVAL;
if (task != current)
return -EBUSY;
size = gcs_size(0);
gcs = alloc_gcs(0, size);
if (!gcs)
return -ENOMEM;
task->thread.gcspr_el0 = gcs + size - sizeof(u64);
task->thread.gcs_base = gcs;
task->thread.gcs_size = size;
if (task == current)
write_sysreg_s(task->thread.gcspr_el0,
SYS_GCSPR_EL0);
}
- }
Nitpick: use a single 'if' instead of nesting (unless subsequent patches add more to the first block).
Otherwise it looks fine.
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
On Wed, Aug 21, 2024 at 01:54:33PM +0100, Catalin Marinas wrote:
Otherwise it looks fine.
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
I've also added:
+
+	/* Ensure we remember GCSPR_EL0 if we're disabling. */
+	if (task_gcs_el0_enabled(current))
+		current->thread.gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0);
to handle the disable case.
As discussed extensively in the changelog for the addition of this syscall on x86 ("x86/shstk: Introduce map_shadow_stack syscall") the existing mmap() and madvise() syscalls do not map entirely well onto the security requirements for guarded control stacks since they lead to windows where memory is allocated but not yet protected or stacks which are not properly and safely initialised. Instead a new syscall map_shadow_stack() has been defined which allocates and initialises a shadow stack page.
Implement this for arm64. Two flags are provided, allowing applications to request that the stack be initialised with a valid cap token at the top of the stack and optionally also an end of stack marker above that. We support requesting an end of stack marker alone but since this is a NULL pointer it is indistinguishable from not initialising anything by itself.
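As a usage sketch (the flag values match the uapi in this series and on x86, while __NR_map_shadow_stack is assumed to be provided by the kernel headers):

#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN	(1ULL << 0)	/* cap token at the top */
#define SHADOW_STACK_SET_MARKER	(1ULL << 1)	/* end of stack marker */
#endif

static void *alloc_pivotable_gcs(size_t size)
{
	/*
	 * Let the kernel pick the address; the token allows a later pivot
	 * to the new GCS and the marker above it terminates unwinds.
	 */
	long gcs = syscall(__NR_map_shadow_stack, 0UL, size,
			   SHADOW_STACK_SET_TOKEN | SHADOW_STACK_SET_MARKER);

	return gcs == -1 ? NULL : (void *)gcs;
}

Per the validation in the patch, size must be a multiple of 8 and not exactly 8, and any address hint must be page aligned.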
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/mm/gcs.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 61 insertions(+)
diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c index 0d39829f862e..6703c70581a4 100644 --- a/arch/arm64/mm/gcs.c +++ b/arch/arm64/mm/gcs.c @@ -140,6 +140,67 @@ unsigned long gcs_alloc_thread_stack(struct task_struct *tsk, return addr; }
+SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags) +{ + unsigned long alloc_size; + unsigned long __user *cap_ptr; + unsigned long cap_val; + int ret = 0; + int cap_offset; + + if (!system_supports_gcs()) + return -EOPNOTSUPP; + + if (flags & ~(SHADOW_STACK_SET_TOKEN | SHADOW_STACK_SET_MARKER)) + return -EINVAL; + + if (addr && (addr % PAGE_SIZE)) + return -EINVAL; + + if (size == 8 || size % 8) + return -EINVAL; + + /* + * An overflow would result in attempting to write the restore token + * to the wrong location. Not catastrophic, but just return the right + * error code and block it. + */ + alloc_size = PAGE_ALIGN(size); + if (alloc_size < size) + return -EOVERFLOW; + + addr = alloc_gcs(addr, alloc_size); + if (IS_ERR_VALUE(addr)) + return addr; + + /* + * Put a cap token at the end of the allocated region so it + * can be switched to. + */ + if (flags & SHADOW_STACK_SET_TOKEN) { + /* Leave an extra empty frame as a top of stack marker? */ + if (flags & SHADOW_STACK_SET_MARKER) + cap_offset = 2; + else + cap_offset = 1; + + cap_ptr = (unsigned long __user *)(addr + size - + (cap_offset * sizeof(unsigned long))); + cap_val = GCS_CAP(cap_ptr); + + put_user_gcs(cap_val, cap_ptr, &ret); + if (ret != 0) { + vm_munmap(addr, size); + return -EFAULT; + } + + /* Ensure the new cap is viaible for GCS */ + gcsb_dsync(); + } + + return addr; +} + /* * Apply the GCS mode configured for the specified task to the * hardware.
On Thu, Aug 01, 2024 at 01:06:49PM +0100, Mark Brown wrote:
+SYSCALL_DEFINE3(map_shadow_stack, unsigned long, addr, unsigned long, size, unsigned int, flags) +{
- unsigned long alloc_size;
- unsigned long __user *cap_ptr;
- unsigned long cap_val;
- int ret = 0;
- int cap_offset;
- if (!system_supports_gcs())
return -EOPNOTSUPP;
- if (flags & ~(SHADOW_STACK_SET_TOKEN | SHADOW_STACK_SET_MARKER))
return -EINVAL;
- if (addr && (addr % PAGE_SIZE))
return -EINVAL;
- if (size == 8 || size % 8)
return -EINVAL;
Nitpicks: use PAGE_ALIGNED and IS_ALIGNED(size, 8).
- /*
* An overflow would result in attempting to write the restore token
* to the wrong location. Not catastrophic, but just return the right
* error code and block it.
*/
- alloc_size = PAGE_ALIGN(size);
- if (alloc_size < size)
return -EOVERFLOW;
- addr = alloc_gcs(addr, alloc_size);
- if (IS_ERR_VALUE(addr))
return addr;
- /*
* Put a cap token at the end of the allocated region so it
* can be switched to.
*/
- if (flags & SHADOW_STACK_SET_TOKEN) {
/* Leave an extra empty frame as a top of stack marker? */
if (flags & SHADOW_STACK_SET_MARKER)
cap_offset = 2;
else
cap_offset = 1;
cap_ptr = (unsigned long __user *)(addr + size -
(cap_offset * sizeof(unsigned long)));
cap_val = GCS_CAP(cap_ptr);
put_user_gcs(cap_val, cap_ptr, &ret);
if (ret != 0) {
vm_munmap(addr, size);
return -EFAULT;
}
/* Ensure the new cap is viaible for GCS */
gcsb_dsync();
s/viaible/visible/
On the comment itself, the barrier does not ensure visibility in absolute terms, it's all about ordering relative to other accesses. It might be good to clarify what we actually need in terms of ordering. One aspect that came up in an earlier patch was around thread creation. For the current CPU, subsequent GCS accesses in program order will see this token. Classic LDR/STR won't without the barrier. I think this matters when we check the token in the clone3() implementation. Maybe write something along the lines of "ensure the new cap is ordered before standard memory accesses to the same location".
Anyway, the patch looks fine.
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
When invoking a signal handler we use the GCS configuration and stack for the current thread.
Since we implement signal return by calling the signal handler with a return address set up pointing to a trampoline in the vDSO we need to also configure any active GCS for this by pushing a frame for the trampoline onto the GCS. If we do not do this then signal return will generate a GCS protection fault.
In order to guard against attempts to bypass GCS protections via signal return we only allow returning with GCSPR_EL0 pointing to an address where it was previously preempted by a signal. We do this by pushing a cap onto the GCS; this takes the form of an architectural GCS cap token with the top bit set and token type of 0, which we add on signal entry and validate and pop off on signal return. The combination of the top bit being set and the token type means that this can't be interpreted as a valid token or address.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/asm/gcs.h | 1 + arch/arm64/kernel/signal.c | 134 +++++++++++++++++++++++++++++++++++++++++-- arch/arm64/mm/gcs.c | 1 + 3 files changed, 131 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/gcs.h b/arch/arm64/include/asm/gcs.h index 48c97e63e56a..f50660603ecf 100644 --- a/arch/arm64/include/asm/gcs.h +++ b/arch/arm64/include/asm/gcs.h @@ -9,6 +9,7 @@ #include <asm/uaccess.h>
struct kernel_clone_args; +struct ksignal;
static inline void gcsb_dsync(void) { diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index 4a77f4976e11..a1e0aa38bff9 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -25,6 +25,7 @@ #include <asm/elf.h> #include <asm/exception.h> #include <asm/cacheflush.h> +#include <asm/gcs.h> #include <asm/ucontext.h> #include <asm/unistd.h> #include <asm/fpsimd.h> @@ -34,6 +35,37 @@ #include <asm/traps.h> #include <asm/vdso.h>
+#ifdef CONFIG_ARM64_GCS +/* Extra bit set in the address distinguishing a signal cap token. */ +#define GCS_SIGNAL_CAP_FLAG BIT(63) + +#define GCS_SIGNAL_CAP(addr) ((((unsigned long)addr) & GCS_CAP_ADDR_MASK) | \ + GCS_SIGNAL_CAP_FLAG) + +static bool gcs_signal_cap_valid(u64 addr, u64 val) +{ + /* + * The top bit should be set, this is an invalid address for + * EL0 and will only be set for caps created by signals. + */ + if (!(val & GCS_SIGNAL_CAP_FLAG)) + return false; + + /* The rest should be a standard architectural cap token. */ + val &= ~GCS_SIGNAL_CAP_FLAG; + + /* The cap must not have a token set */ + if (GCS_CAP_TOKEN(val) != 0) + return false; + + /* The cap must store the VA the cap was stored at */ + if (GCS_CAP_ADDR(addr) != GCS_CAP_ADDR(val)) + return false; + + return true; +} +#endif + /* * Do a signal return; undo the signal stack. These are aligned to 128-bit. */ @@ -860,6 +892,50 @@ static int restore_sigframe(struct pt_regs *regs, return err; }
+#ifdef CONFIG_ARM64_GCS +static int gcs_restore_signal(void) +{ + u64 gcspr_el0, cap; + int ret; + + if (!system_supports_gcs()) + return 0; + + if (!(current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE)) + return 0; + + gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0); + + /* + * GCSPR_EL0 should be pointing at a capped GCS, read the cap... + */ + gcsb_dsync(); + ret = copy_from_user(&cap, (__user void*)gcspr_el0, sizeof(cap)); + if (ret) + return -EFAULT; + + /* + * ...then check that the cap is the actual GCS before + * restoring it. + */ + if (!gcs_signal_cap_valid(gcspr_el0, cap)) + return -EINVAL; + + /* Invalidate the token to prevent reuse */ + put_user_gcs(0, (__user void*)gcspr_el0, &ret); + if (ret != 0) + return -EFAULT; + + current->thread.gcspr_el0 = gcspr_el0 + sizeof(cap); + write_sysreg_s(current->thread.gcspr_el0, SYS_GCSPR_EL0); + + return 0; +} + +#else +static int gcs_restore_signal(void) { return 0; } +#endif + SYSCALL_DEFINE0(rt_sigreturn) { struct pt_regs *regs = current_pt_regs(); @@ -886,6 +962,9 @@ SYSCALL_DEFINE0(rt_sigreturn) if (restore_altstack(&frame->uc.uc_stack)) goto badframe;
+ if (gcs_restore_signal()) + goto badframe; + return regs->regs[0];
badframe: @@ -1130,7 +1209,50 @@ static int get_sigframe(struct rt_sigframe_user_layout *user, return 0; }
-static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, +#ifdef CONFIG_ARM64_GCS + +static int gcs_signal_entry(__sigrestore_t sigtramp, struct ksignal *ksig) +{ + unsigned long __user *gcspr_el0; + int ret = 0; + + if (!system_supports_gcs()) + return 0; + + if (!task_gcs_el0_enabled(current)) + return 0; + + /* + * We are entering a signal handler, current register state is + * active. + */ + gcspr_el0 = (unsigned long __user *)read_sysreg_s(SYS_GCSPR_EL0); + + /* + * Push a cap and the GCS entry for the trampoline onto the GCS. + */ + put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret); + put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret); + if (ret != 0) + return ret; + + gcsb_dsync(); + + gcspr_el0 -= 2; + write_sysreg_s((unsigned long)gcspr_el0, SYS_GCSPR_EL0); + + return 0; +} +#else + +static int gcs_signal_entry(__sigrestore_t sigtramp, struct ksignal *ksig) +{ + return 0; +} + +#endif + +static int setup_return(struct pt_regs *regs, struct ksignal *ksig, struct rt_sigframe_user_layout *user, int usig) { __sigrestore_t sigtramp; @@ -1138,7 +1260,7 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, regs->regs[0] = usig; regs->sp = (unsigned long)user->sigframe; regs->regs[29] = (unsigned long)&user->next_frame->fp; - regs->pc = (unsigned long)ka->sa.sa_handler; + regs->pc = (unsigned long)ksig->ka.sa.sa_handler;
/* * Signal delivery is a (wacky) indirect function call in @@ -1178,12 +1300,14 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, sme_smstop(); }
- if (ka->sa.sa_flags & SA_RESTORER) - sigtramp = ka->sa.sa_restorer; + if (ksig->ka.sa.sa_flags & SA_RESTORER) + sigtramp = ksig->ka.sa.sa_restorer; else sigtramp = VDSO_SYMBOL(current->mm->context.vdso, sigtramp);
regs->regs[30] = (unsigned long)sigtramp; + + return gcs_signal_entry(sigtramp, ksig); }
static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, @@ -1206,7 +1330,7 @@ static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set, err |= __save_altstack(&frame->uc.uc_stack, regs->sp); err |= setup_sigframe(&user, regs, set); if (err == 0) { - setup_return(regs, &ksig->ka, &user, usig); + err = setup_return(regs, ksig, &user, usig); if (ksig->ka.sa.sa_flags & SA_SIGINFO) { err |= copy_siginfo_to_user(&frame->info, &ksig->info); regs->regs[1] = (unsigned long)&frame->info; diff --git a/arch/arm64/mm/gcs.c b/arch/arm64/mm/gcs.c index 6703c70581a4..30614f4ad164 100644 --- a/arch/arm64/mm/gcs.c +++ b/arch/arm64/mm/gcs.c @@ -6,6 +6,7 @@ #include <linux/types.h>
#include <asm/cpufeature.h> +#include <asm/gcs.h> #include <asm/page.h>
static unsigned long alloc_gcs(unsigned long addr, unsigned long size)
On Thu, Aug 01, 2024 at 01:06:50PM +0100, Mark Brown wrote:
When invoking a signal handler we use the GCS configuration and stack for the current thread.
Since we implement signal return by calling the signal handler with a return address set up pointing to a trampoline in the vDSO we need to also configure any active GCS for this by pushing a frame for the trampoline onto the GCS. If we do not do this then signal return will generate a GCS protection fault.
In order to guard against attempts to bypass GCS protections via signal return we only allow returning with GCSPR_EL0 pointing to an address where it was previously preempted by a signal. We do this by pushing a cap onto the GCS; this takes the form of an architectural GCS cap token with the top bit set and token type of 0, which we add on signal entry and validate and pop off on signal return. The combination of the top bit being set and the token type means that this can't be interpreted as a valid token or address.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
arch/arm64/include/asm/gcs.h | 1 + arch/arm64/kernel/signal.c | 134 +++++++++++++++++++++++++++++++++++++++++-- arch/arm64/mm/gcs.c | 1 + 3 files changed, 131 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
[...]
+#ifdef CONFIG_ARM64_GCS
+static int gcs_signal_entry(__sigrestore_t sigtramp, struct ksignal *ksig) +{
- unsigned long __user *gcspr_el0;
- int ret = 0;
- if (!system_supports_gcs())
return 0;
- if (!task_gcs_el0_enabled(current))
return 0;
- /*
* We are entering a signal handler, current register state is
* active.
*/
- gcspr_el0 = (unsigned long __user *)read_sysreg_s(SYS_GCSPR_EL0);
- /*
* Push a cap and the GCS entry for the trampoline onto the GCS.
*/
- put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret);
- put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret);
- if (ret != 0)
return ret;
What happens if we went wrong here, or if the signal we are delivering was caused by a GCS overrun or bad GCSPR_EL0 in the first place?
It feels like a program has no way to rescue itself from excessive recursion in some thread. Is there something equivalent to sigaltstack()?
Or is the shadow stack always supposed to be big enough to cope with recursion that exhausts the main stack and alternate signal stack (and if so, how is this ensured)?
- gcsb_dsync();
- gcspr_el0 -= 2;
- write_sysreg_s((unsigned long)gcspr_el0, SYS_GCSPR_EL0);
- return 0;
+}
[...]
Cheers ---Dave
On Wed, Aug 14, 2024 at 03:51:42PM +0100, Dave Martin wrote:
On Thu, Aug 01, 2024 at 01:06:50PM +0100, Mark Brown wrote:
- put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret);
- put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret);
- if (ret != 0)
return ret;
What happens if we went wrong here, or if the signal we are delivering was caused by a GCS overrun or bad GCSPR_EL0 in the first place?
It feels like a program has no way to rescue itself from excessive recursion in some thread. Is there something equivalent to sigaltstack()?
Or is the shadow stack always supposed to be big enough to cope with recursion that exhausts the main stack and alternate signal stack (and if so, how is this ensured)?
There's no sigaltstack() for GCS, this is also the ABI with the existing shadow stack on x86 and should be addressed in a cross architecture fashion. There have been some discussions about providing a shadow alt stack but they've generally been circular and inconclusive, there were a bunch of tradeoffs for corner cases and nobody had a clear sense as to what a good solution should be. It was a bit unclear that actively doing anything was worthwhile. The issues were IIRC around unwinders and disjoint shadow stacks, compatibility with non-shadow stacks and behaviour when we overflow the shadow stack. I think there were also some applications trying to be very clever with alt stacks that needed to be interacted with and complicated everything but I could be misremembering there.
Practically speaking since we're only storing return addresses the default GCS should be extremely large so it's unlikely to come up without first encountering and handling issues on the normal stack. Users allocating their own shadow stacks should be careful. This isn't really satisfying but is probably fine in practice, there's certainly not been any pressure yet from the existing x86 deployments (though at present nobody can explicitly select their own shadow stack size, perhaps it'll become more of an issue when the clone3() stuff is in).
On Wed, Aug 14, 2024 at 05:00:23PM +0100, Mark Brown wrote:
On Wed, Aug 14, 2024 at 03:51:42PM +0100, Dave Martin wrote:
On Thu, Aug 01, 2024 at 01:06:50PM +0100, Mark Brown wrote:
- put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret);
- put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret);
- if (ret != 0)
return ret;
What happens if we went wrong here, or if the signal we are delivering was caused by a GCS overrun or bad GCSPR_EL0 in the first place?
It feels like a program has no way to rescue itself from excessive recursion in some thread. Is there something equivalent to sigaltstack()?
Or is the shadow stack always supposed to be big enough to cope with recursion that exhausts the main stack and alternate signal stack (and if so, how is this ensured)?
There's no sigaltstack() for GCS, this is also the ABI with the existing shadow stack on x86 and should be addressed in a cross architecture fashion. There have been some discussions about providing a shadow alt stack but they've generally been circular and inconclusive, there were a bunch of tradeoffs for corner cases and nobody had a clear sense as to what a good solution should be. It was a bit unclear that actively doing anything was worthwhile. The issues were IIRC around unwinders and disjoint shadow stacks, compatibility with non-shadow stacks and behaviour when we overflow the shadow stack. I think there were also some applications trying to be very clever with alt stacks that needed to be interacted with and complicated everything but I could be misremembering there.
Practically speaking since we're only storing return addresses the default GCS should be extremely large so it's unlikely to come up without first encountering and handling issues on the normal stack. Users allocating their own shadow stacks should be careful. This isn't really satisfying but is probably fine in practice, there's certainly not been any pressure yet from the existing x86 deployments (though at present nobody can explicitly select their own shadow stack size, perhaps it'll become more of an issue when the clone3() stuff is in).
Ack, if this is a known limitation then I guess it makes sense just to follow other arches.
I see that we default the shadow stack size to half the main stack size, which should indeed count as "huge". I guess this makes shadow stack overrun unlikely at least (at least, not before the main stack overruns).
Hopping to an alternate (main) stack while continuing to push on the same shadow stack doesn't sound broken in principle.
Is there a test for taking and returning from a signal on an alternate (main) stack, when a shadow stack is in use? Sounds like something that would be good to check if not.
Cheers ---Dave
On Thu, Aug 15, 2024 at 02:37:22PM +0100, Dave Martin wrote:
Is there a test for taking and returning from a signal on an alternate (main) stack, when a shadow stack is in use? Sounds like something that would be good to check if not.
Not specifically for any of the architectures.
On Thu, Aug 15, 2024 at 03:45:45PM +0100, Mark Brown wrote:
On Thu, Aug 15, 2024 at 02:37:22PM +0100, Dave Martin wrote:
Is there a test for taking and returning from a signal on an alternate (main) stack, when a shadow stack is in use? Sounds like something that would be good to check if not.
Not specifically for any of the architectures.
Can you see any reason why this shouldn't work?
Maybe I'll try hacking up a test if I get around to it, but don't take this as a promise!
Cheers ---Dave
On Thu, Aug 15, 2024 at 04:11:56PM +0100, Dave Martin wrote:
On Thu, Aug 15, 2024 at 03:45:45PM +0100, Mark Brown wrote:
On Thu, Aug 15, 2024 at 02:37:22PM +0100, Dave Martin wrote:
Is there a test for taking and returning from a signal on an alternate (main) stack, when a shadow stack is in use? Sounds like something that would be good to check if not.
Not specifically for any of the architectures.
Can you see any reason why this shouldn't work?
No, it's expected to work - I'm just not specifically aware of an explicit test for it. Possibly some of the userspace bringup work might've covered it? Any libc tests for altstack support should've exercised it for example.
Maybe I'll try hacking up a test if I get around to it, but don't take this as a promise!
Thanks for your firm commitment! :P
On Thu, Aug 15, 2024 at 04:29:54PM +0100, Mark Brown wrote:
On Thu, Aug 15, 2024 at 04:11:56PM +0100, Dave Martin wrote:
On Thu, Aug 15, 2024 at 03:45:45PM +0100, Mark Brown wrote:
On Thu, Aug 15, 2024 at 02:37:22PM +0100, Dave Martin wrote:
Is there a test for taking and returning from a signal on an alternate (main) stack, when a shadow stack is in use? Sounds like something that would be good to check if not.
Not specifically for any of the architectures.
Can you see any reason why this shouldn't work?
No, it's expected to work - I'm just not specifically aware of an explicit test for it. Possibly some of the userspace bringup work might've covered it? Any libc tests for altstack support should've exercised it for example.
That's true; if libc is built to use shadow stack, generic API tests ought to cover this.
Maybe I'll try hacking up a test if I get around to it, but don't take this as a promise!
Thanks for your firm commitment! :P
No problem...
Cheers ---Dave
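For reference, a minimal sketch of what such a test might look like, assuming GCS has already been enabled for the process, for example by startup code via the shadow stack prctl(); this is not from the series' selftests:

#include <signal.h>
#include <stdlib.h>
#include <string.h>

static volatile sig_atomic_t handled;

static void handler(int sig)
{
	handled = 1;
}

int main(void)
{
	stack_t ss = { .ss_size = SIGSTKSZ };
	struct sigaction sa;

	ss.ss_sp = malloc(ss.ss_size);
	if (!ss.ss_sp || sigaltstack(&ss, NULL))
		return 1;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = handler;
	sa.sa_flags = SA_ONSTACK;	/* deliver on the alternate stack */
	sigaction(SIGUSR1, &sa, NULL);

	raise(SIGUSR1);

	/* A failure in GCS signal entry/return would have faulted by now */
	return handled ? 0 : 1;
}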
On Thu, Aug 01, 2024 at 01:06:50PM +0100, Mark Brown wrote:
@@ -860,6 +892,50 @@ static int restore_sigframe(struct pt_regs *regs, return err; } +#ifdef CONFIG_ARM64_GCS +static int gcs_restore_signal(void) +{
- u64 gcspr_el0, cap;
Nitpick: use 'unsigned long __user *gcspr_el0' as in the gcs_signal_entry(). It's more consistent and probably less casting.
- int ret;
- if (!system_supports_gcs())
return 0;
- if (!(current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE))
return 0;
- gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0);
- /*
* GCSPR_EL0 should be pointing at a capped GCS, read the cap...
*/
- gcsb_dsync();
- ret = copy_from_user(&cap, (__user void*)gcspr_el0, sizeof(cap));
- if (ret)
return -EFAULT;
Can the user change GCSPR_EL0 to a non-shadow-stack region, fake the cap before sigreturn? copy_from_user() cannot check it's a GCS page. Does it actually matter?
@@ -1130,7 +1209,50 @@ static int get_sigframe(struct rt_sigframe_user_layout *user, return 0; } -static void setup_return(struct pt_regs *regs, struct k_sigaction *ka, +#ifdef CONFIG_ARM64_GCS
+static int gcs_signal_entry(__sigrestore_t sigtramp, struct ksignal *ksig) +{
- unsigned long __user *gcspr_el0;
- int ret = 0;
- if (!system_supports_gcs())
return 0;
- if (!task_gcs_el0_enabled(current))
return 0;
- /*
* We are entering a signal handler, current register state is
* active.
*/
- gcspr_el0 = (unsigned long __user *)read_sysreg_s(SYS_GCSPR_EL0);
- /*
* Push a cap and the GCS entry for the trampoline onto the GCS.
*/
- put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret);
- put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret);
- if (ret != 0)
return ret;
Doesn't the second put_user_gcs() override the previous ret?
- gcsb_dsync();
Wondering if we need the barrier both for entry and restore. If the restore happens on another CPU, we have the barriers in the context switch code already. If it's only the kernel writing the caps with GCSSTTR on setting up the stack and checking it on return, a single barrier is sufficient (can be this one). If the user can write something on the stack or maybe doing a sigreturn without fully unwinding the stack, we may need both. Either way, it would help to add some comments on these barriers.
On Wed, Aug 21, 2024 at 06:28:49PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:50PM +0100, Mark Brown wrote:
- ret = copy_from_user(&cap, (__user void*)gcspr_el0, sizeof(cap));
- if (ret)
return -EFAULT;
Can the user change GCSPR_EL0 to a non-shadow-stack region, fake the cap before sigreturn? copy_from_user() cannot check it's a GCS page. Does it actually matter?
We don't take any steps to prevent that since I'm not clear that it matters, as soon as userspace tries to use the non-GCS page as a GCS it will fault. Given the abundance of ways in which a signal handler can cause a crash it didn't seem worth specific code, the cap token check is about protecting an actual GCS.
- /*
* Push a cap and the GCS entry for the trampoline onto the GCS.
*/
- put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret);
- put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret);
- if (ret != 0)
return ret;
Doesn't the second put_user_gcs() override the previous ret?
No, we only set ret on error - if the first one faults it'll set ret, then the second one will either leave it unchanged or write the same error code depending on whether it fails. This idiom is used quite a lot in the signal code.
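For illustration, a minimal standalone sketch of that accumulation idiom; put_user_mock() is a hypothetical stand-in for put_user_gcs(), whose only relevant property here is that it writes to the error pointer on failure and leaves it untouched on success:

#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in: like put_user_gcs(), only touches *err on failure */
static void put_user_mock(unsigned long val, unsigned long *slot,
                          bool fault, int *err)
{
        if (fault)
                *err = -EFAULT;         /* set only on error */
        else
                *slot = val;            /* success leaves *err alone */
}

static int push_two(void)
{
        unsigned long slot0, slot1;
        int ret = 0;

        put_user_mock(1, &slot0, true, &ret);   /* ret becomes -EFAULT */
        put_user_mock(2, &slot1, false, &ret);  /* ret still -EFAULT */

        return ret;                     /* the first error is preserved */
}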
On Wed, Aug 21, 2024 at 07:03:13PM +0100, Mark Brown wrote:
On Wed, Aug 21, 2024 at 06:28:49PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:50PM +0100, Mark Brown wrote:
- /*
* Push a cap and the GCS entry for the trampoline onto the GCS.
*/
- put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret);
- put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret);
- if (ret != 0)
return ret;
Doesn't the second put_user_gcs() override the previous ret?
No, we only set ret on error - if the first one faults it'll set ret, then the second one will either leave it unchanged or write the same error code depending on whether it fails. This idiom is used quite a lot in the signal code.
You are right, I missed that it's called 'err' in put_user_gcs(); I thought it was overridden.
Add a context for the GCS state and include it in the signal context when running on a system that supports GCS. We reuse the same flags that the prctl() uses to specify which GCS features are enabled and also provide the current GCS pointer.
We do not support enabling GCS via signal return; there is a conflict between specifying GCSPR_EL0 and allocation of a new GCS, and this is not an anticipated use case. We also enforce GCS configuration locking on signal return.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/uapi/asm/sigcontext.h | 9 +++ arch/arm64/kernel/signal.c | 106 +++++++++++++++++++++++++++++++ 2 files changed, 115 insertions(+)
diff --git a/arch/arm64/include/uapi/asm/sigcontext.h b/arch/arm64/include/uapi/asm/sigcontext.h index 8a45b7a411e0..c2d61e8efc84 100644 --- a/arch/arm64/include/uapi/asm/sigcontext.h +++ b/arch/arm64/include/uapi/asm/sigcontext.h @@ -176,6 +176,15 @@ struct zt_context { __u16 __reserved[3]; };
+#define GCS_MAGIC 0x47435300 + +struct gcs_context { + struct _aarch64_ctx head; + __u64 gcspr; + __u64 features_enabled; + __u64 reserved; +}; + #endif /* !__ASSEMBLY__ */
#include <asm/sve_context.h> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c index a1e0aa38bff9..6db364beb024 100644 --- a/arch/arm64/kernel/signal.c +++ b/arch/arm64/kernel/signal.c @@ -88,6 +88,7 @@ struct rt_sigframe_user_layout {
unsigned long fpsimd_offset; unsigned long esr_offset; + unsigned long gcs_offset; unsigned long sve_offset; unsigned long tpidr2_offset; unsigned long za_offset; @@ -217,6 +218,8 @@ struct user_ctxs { u32 zt_size; struct fpmr_context __user *fpmr; u32 fpmr_size; + struct gcs_context __user *gcs; + u32 gcs_size; };
static int preserve_fpsimd_context(struct fpsimd_context __user *ctx) @@ -636,6 +639,81 @@ extern int restore_zt_context(struct user_ctxs *user);
#endif /* ! CONFIG_ARM64_SME */
+#ifdef CONFIG_ARM64_GCS + +static int preserve_gcs_context(struct gcs_context __user *ctx) +{ + int err = 0; + u64 gcspr; + + /* + * We will add a cap token to the frame, include it in the + * GCSPR_EL0 we report to support stack switching via + * sigreturn. + */ + gcs_preserve_current_state(); + gcspr = current->thread.gcspr_el0 - 8; + + __put_user_error(GCS_MAGIC, &ctx->head.magic, err); + __put_user_error(sizeof(*ctx), &ctx->head.size, err); + __put_user_error(gcspr, &ctx->gcspr, err); + __put_user_error(0, &ctx->reserved, err); + __put_user_error(current->thread.gcs_el0_mode, + &ctx->features_enabled, err); + + return err; +} + +static int restore_gcs_context(struct user_ctxs *user) +{ + u64 gcspr, enabled; + int err = 0; + + if (user->gcs_size != sizeof(*user->gcs)) + return -EINVAL; + + __get_user_error(gcspr, &user->gcs->gcspr, err); + __get_user_error(enabled, &user->gcs->features_enabled, err); + if (err) + return err; + + /* Don't allow unknown modes */ + if (enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK) + return -EINVAL; + + err = gcs_check_locked(current, enabled); + if (err != 0) + return err; + + /* Don't allow enabling */ + if (!task_gcs_el0_enabled(current) && + (enabled & PR_SHADOW_STACK_ENABLE)) + return -EINVAL; + + /* If we are disabling disable everything */ + if (!(enabled & PR_SHADOW_STACK_ENABLE)) + enabled = 0; + + current->thread.gcs_el0_mode = enabled; + + /* + * We let userspace set GCSPR_EL0 to anything here, we will + * validate later in gcs_restore_signal(). + */ + current->thread.gcspr_el0 = gcspr; + write_sysreg_s(current->thread.gcspr_el0, SYS_GCSPR_EL0); + + return 0; +} + +#else /* ! CONFIG_ARM64_GCS */ + +/* Turn any non-optimised out attempts to use these into a link error: */ +extern int preserve_gcs_context(void __user *ctx); +extern int restore_gcs_context(struct user_ctxs *user); + +#endif /* ! CONFIG_ARM64_GCS */ + static int parse_user_sigframe(struct user_ctxs *user, struct rt_sigframe __user *sf) { @@ -653,6 +731,7 @@ static int parse_user_sigframe(struct user_ctxs *user, user->za = NULL; user->zt = NULL; user->fpmr = NULL; + user->gcs = NULL;
if (!IS_ALIGNED((unsigned long)base, 16)) goto invalid; @@ -758,6 +837,17 @@ static int parse_user_sigframe(struct user_ctxs *user, user->fpmr_size = size; break;
+ case GCS_MAGIC: + if (!system_supports_gcs()) + goto invalid; + + if (user->gcs) + goto invalid; + + user->gcs = (struct gcs_context __user *)head; + user->gcs_size = size; + break; + case EXTRA_MAGIC: if (have_extra_context) goto invalid; @@ -877,6 +967,9 @@ static int restore_sigframe(struct pt_regs *regs, err = restore_fpsimd_context(&user); }
+ if (err == 0 && system_supports_gcs() && user.gcs) + err = restore_gcs_context(&user); + if (err == 0 && system_supports_tpidr2() && user.tpidr2) err = restore_tpidr2_context(&user);
@@ -999,6 +1092,13 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user, return err; }
+ if (add_all || task_gcs_el0_enabled(current)) { + err = sigframe_alloc(user, &user->gcs_offset, + sizeof(struct gcs_context)); + if (err) + return err; + } + if (system_supports_sve() || system_supports_sme()) { unsigned int vq = 0;
@@ -1099,6 +1199,12 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user, __put_user_error(current->thread.fault_code, &esr_ctx->esr, err); }
+ if (system_supports_gcs() && err == 0 && user->gcs_offset) { + struct gcs_context __user *gcs_ctx = + apply_user_offset(user, user->gcs_offset); + err |= preserve_gcs_context(gcs_ctx); + } + /* Scalable Vector Extension state (including streaming), if present */ if ((system_supports_sve() || system_supports_sme()) && err == 0 && user->sve_offset) {
On Thu, Aug 01, 2024 at 01:06:51PM +0100, Mark Brown wrote:
Add a context for the GCS state and include it in the signal context when running on a system that supports GCS. We reuse the same flags that the prctl() uses to specify which GCS features are enabled and also provide the current GCS pointer.
We do not support enabling GCS via signal return; there is a conflict between specifying GCSPR_EL0 and allocation of a new GCS, and this is not an anticipated use case. We also enforce GCS configuration locking on signal return.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
arch/arm64/include/uapi/asm/sigcontext.h | 9 +++ arch/arm64/kernel/signal.c | 106 +++++++++++++++++++++++++++++++ 2 files changed, 115 insertions(+)
[...]
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
[...]
@@ -999,6 +1092,13 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user, return err; }
- if (add_all || task_gcs_el0_enabled(current)) {
err = sigframe_alloc(user, &user->gcs_offset,
sizeof(struct gcs_context));
if (err)
return err;
- }
Who turns on GCS? I have a concern that if libc is new enough to be built for GCS then the libc startup code will turn it on, even if the binary stack running on top of libc is old.
Whether a given library should break old binaries is a bit of a grey area, but I think that libraries that deliberately export stable ABIs probably shouldn't.
With that in mind, does any GCS state need to be saved at all?
Is there any scenario where it is legitimate for the signal handler to change the shadow stack mode or to return with an altered GCSPR_EL0?
Is the guarded stack considered necessary (or at least beneficial) for backtracing, or is the regular stack sufficient?
(I'm assuming that unwind tables / debug info should allow the shadow stack to be unwound anyway; rather this question is about whether software can straightforwardly find out the interrupted GCSPR_EL0 without this information... and whether it needs to.)
if (system_supports_sve() || system_supports_sme()) { unsigned int vq = 0; @@ -1099,6 +1199,12 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user, __put_user_error(current->thread.fault_code, &esr_ctx->esr, err); }
- if (system_supports_gcs() && err == 0 && user->gcs_offset) {
struct gcs_context __user *gcs_ctx =
apply_user_offset(user, user->gcs_offset);
err |= preserve_gcs_context(gcs_ctx);
- }
[...]
Cheers ---Dave
On Wed, Aug 14, 2024 at 04:09:51PM +0100, Dave Martin wrote:
On Thu, Aug 01, 2024 at 01:06:51PM +0100, Mark Brown wrote:
- if (add_all || task_gcs_el0_enabled(current)) {
err = sigframe_alloc(user, &user->gcs_offset,
sizeof(struct gcs_context));
if (err)
return err;
- }
Who turns on GCS? I have a concern that if libc is new enough to be built for GCS then the libc startup code will turn it on, even if the binary stack running on top of libc is old.
It should normally be the dynamic linker which should be looking for annotations in the binaries it's loading before it decides if it's going to turn on GCS (and libc doing something similar if it's going to dlopen() things in a process with GCS enabled).
Is there any scenario where it is legitimate for the signal handler to change the shadow stack mode or to return with an altered GCSPR_EL0?
If userspace can rewrite the stack pointer on return (eg, to return to a different context as part of userspace threading) then it will need to be able to also update GCSPR_EL0 to something consistent otherwise attempting to return from the interrupted context isn't going to go well. Changing the mode is a bit more exotic, as it is in general. It's as much to provide information to the signal handler as anything else.
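As a rough sketch of what such userspace would need to do, the following locates the GCS record in a signal frame so a handler switching contexts can update gcspr alongside the regular stack pointer. It assumes the GCS_MAGIC and struct gcs_context definitions from the patch under review and, for simplicity, does not follow EXTRA_MAGIC continuation records:

#include <ucontext.h>
#include <asm/sigcontext.h>

#ifndef GCS_MAGIC
#define GCS_MAGIC 0x47435300
struct gcs_context {
        struct _aarch64_ctx head;
        __u64 gcspr;
        __u64 features_enabled;
        __u64 reserved;
};
#endif

/* Walk the __reserved records looking for the GCS context, if present */
static struct gcs_context *find_gcs_context(ucontext_t *uc)
{
        struct _aarch64_ctx *head;

        for (head = (struct _aarch64_ctx *)&uc->uc_mcontext.__reserved;
             head->magic && head->size;
             head = (struct _aarch64_ctx *)((char *)head + head->size))
                if (head->magic == GCS_MAGIC)
                        return (struct gcs_context *)head;

        return NULL;    /* no record: GCS was not enabled */
}

A context-switching handler would then update uc_mcontext.sp and the record's gcspr field to a mutually consistent pair of stacks before returning.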
Is the guarded stack considered necessary (or at least beneficial) for backtracing, or is the regular stack sufficient?
It's potentially beneficial, being less vulnerable to corruption and simpler to parse if all you're interested in is return addresses. Profiling in particular was mentioned; grabbing a linear block of memory will hopefully be less overhead than chasing down the stack. The regular stack should generally be sufficient though.
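A hedged sketch of that profiling-style use, assuming GCS is enabled for the thread and that the walk may stop at the first zero entry (taken here to be the top-of-stack marker):

#include <stddef.h>

/* Read GCSPR_EL0, encoded as S3_3_C2_C5_1 as in the selftests below */
static inline unsigned long *get_gcspr_el0(void)
{
        unsigned long *gcspr;

        asm volatile("mrs %0, S3_3_C2_C5_1" : "=r" (gcspr) : : "cc");
        return gcspr;
}

/* Capture return addresses by reading the GCS as one linear block */
static size_t gcs_backtrace(unsigned long *buf, size_t max)
{
        unsigned long *entry = get_gcspr_el0();
        size_t n;

        for (n = 0; n < max && entry[n]; n++)
                buf[n] = entry[n];

        return n;
}

No frame-record chasing or unwind info is required; the cost is a bounded linear read.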
On Wed, Aug 14, 2024 at 05:21:44PM +0100, Mark Brown wrote:
On Wed, Aug 14, 2024 at 04:09:51PM +0100, Dave Martin wrote:
On Thu, Aug 01, 2024 at 01:06:51PM +0100, Mark Brown wrote:
- if (add_all || task_gcs_el0_enabled(current)) {
err = sigframe_alloc(user, &user->gcs_offset,
sizeof(struct gcs_context));
if (err)
return err;
- }
Who turns on GCS? I have a concern that if libc is new enough to be built for GCS then the libc startup code will turn it on, even if the binary stack running on top of libc is old.
It should normally be the dynamic linker which should be looking for annotations in the binaries it's loading before it decides if it's going to turn on GCS (and libc doing something similar if it's going to dlopen() things in a process with GCS enabled).
My thought was that if libc knows about shadow stacks, it is probably going to be built to use them too and so would enable shadow stack during startup to protect its own code.
(Technically those would be independent decisions, but it seems a good idea to use a hardening feature if you know about it and it is present.)
If so, shadow stacks might always get turned on before the main program gets a look-in.
Or is that not the expectation?
Is there any scenario where it is legitimate for the signal handler to change the shadow stack mode or to return with an altered GCSPR_EL0?
If userspace can rewrite the stack pointer on return (eg, to return to a different context as part of userspace threading) then it will need to
Do we know of code that actually does that? IIUC, trying to do this is totally broken on most arches nowadays; making it work requires a reentrant C library and/or logic to defer signals around critical sections in userspace.
"Real" threading makes this pretty pointless for the most part.
Related question: does shadow stack work with ucontext-based coroutines? Per-context stacks need to be allocated by the program for that.
be able to also update GCSPR_EL0 to something consistent otherwise attempting to return from the interrupted context isn't going to go well. Changing the mode is a bit more exotic, as it is in general. It's as much to provide information to the signal handler as anything else.
I'm not sure that we should always put information in the signal frame that the signal handler can't obtain directly.
I guess it's harmless to include this, though.
Is the guarded stack considered necessary (or at least beneficial) for backtracing, or is the regular stack sufficient?
It's potentially beneficial, being less vulnerable to corruption and simpler to parse if all you're interested in is return addresses. Profiling in particular was mentioned; grabbing a linear block of memory will hopefully be less overhead than chasing down the stack. The regular stack should generally be sufficient though.
I guess we can't really argue that the shadow stack pointer is redundant here though. The whole point of shadow stacks is to make things more robust...
Just kicking the tyres on the question of whether we need it here, but I guess it's hard to make a good case for saying "no".
Cheers ---Dave
On Thu, Aug 15, 2024 at 03:01:21PM +0100, Dave Martin wrote:
My thought was that if libc knows about shadow stacks, it is probably going to be built to use them too and so would enable shadow stack during startup to protect its own code.
(Technically those would be independent decisions, but it seems a good idea to use a hardening feature if you know about it and it is present.)
If so, shadow stacks might always get turned on before the main program gets a look-in.
Or is that not the expectation?
The expectation (at least for arm64) is that the main program will only have shadow stacks if everything says it can support them. If the dynamic linker turns them on during startup prior to parsing the main executable, this means that it should turn them off before actually starting the executable, taking care to consider any locking of features.
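A hedged sketch of that flow, using the prctl() constants that appear in the selftest headers later in this series; all_objects_gcs_compatible() is a hypothetical stand-in for the ELF annotation check, and note (per the discussion of the selftests below) that in a real linker this has to run in startup code that never returns through frames predating the enable:

#include <stdbool.h>
#include <sys/prctl.h>

#ifndef PR_SET_SHADOW_STACK_STATUS
#define PR_SET_SHADOW_STACK_STATUS      75
#define PR_SHADOW_STACK_ENABLE          (1UL << 0)
#endif

/* Hypothetical: checks the GCS marking on every loaded object */
extern bool all_objects_gcs_compatible(void);

static void ld_setup_gcs(void)
{
        /* Enable early so the linker itself runs protected... */
        if (prctl(PR_SET_SHADOW_STACK_STATUS, PR_SHADOW_STACK_ENABLE,
                  0, 0, 0) != 0)
                return;         /* kernel or CPU lacks GCS, nothing to do */

        /* ...but turn it off again if anything is not marked compatible */
        if (!all_objects_gcs_compatible())
                prctl(PR_SET_SHADOW_STACK_STATUS, 0, 0, 0, 0);
}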
Is there any scenario where it is legitimate for the signal handler to change the shadow stack mode or to return with an altered GCSPR_EL0?
If userspace can rewrite the stack pointer on return (eg, to return to a different context as part of userspace threading) then it will need to
Do we know of code that actually does that? IIUC, trying to do this is totally broken on most arches nowadays; making it work requires a reentrant C library and/or logic to defer signals around critical sections in userspace.
"Real" threading makes this pretty pointless for the most part.
Related question: does shadow stack work with ucontext-based coroutines? Per-context stacks need to be allocated by the program for that.
Yes, ucontext-based coroutines are the sort of thing I meant when I was talking about returning to a different context.
be able to also update GCSPR_EL0 to something consistent otherwise attempting to return from the interrupted context isn't going to go well. Changing the mode is a bit more exotic, as it is in general. It's as much to provide information to the signal handler as anything else.
I'm not sure that we should always put information in the signal frame that the signal handler can't obtain directly.
I guess it's harmless to include this, though.
If we don't include it and different ucontexts have different GCS features enabled, we run into trouble on context switch.
Is the guarded stack considered necessary (or at least beneficial) for backtracing, or is the regular stack sufficient?
It's potentially beneficial, being less vulnerable to corruption and simpler to parse if all you're interested in is return addresses. Profiling in particular was mentioned; grabbing a linear block of memory will hopefully be less overhead than chasing down the stack. The regular stack should generally be sufficient though.
I guess we can't really argue that the shadow stack pointer is redundant here though. The whole point of shadow stacks is to make things more robust...
Just kicking the tyres on the question of whether we need it here, but I guess it's hard to make a good case for saying "no".
Indeed. The general model here is that we don't break userspace that relies on parsing the normal stack (so the GCS is never *necessary*) but clearly you want to have it.
On Thu, Aug 15, 2024 at 04:05:32PM +0100, Mark Brown wrote:
On Thu, Aug 15, 2024 at 03:01:21PM +0100, Dave Martin wrote:
My thought was that if libc knows about shadow stacks, it is probably going to be built to use them too and so would enable shadow stack during startup to protect its own code.
(Technically those would be independent decisions, but it seems a good idea to use a hardening feature if you know about it and it is present.)
If so, shadow stacks might always get turned on before the main program gets a look-in.
Or is that not the expectation?
The expectation (at least for arm64) is that the main program will only have shadow stacks if everything says it can support them. If the dynamic linker turns them on during startup prior to parsing the main executable, this means that it should turn them off before actually starting the executable, taking care to consider any locking of features.
Hmm, so we really do get a clear "enable shadow stack" call to the kernel, which we can reasonably expect won't happen for ancient software?
If so, I think dumping the GCS state in the sigframe could be made conditional on that without problems (?)
(We could always make it unconditional later if it turns out that that approach breaks something.)
Is there any scenario where it is legitimate for the signal handler to change the shadow stack mode or to return with an altered GCSPR_EL0?
If userspace can rewrite the stack pointer on return (eg, to return to a different context as part of userspace threading) then it will need to
Do we know of code that actually does that? IIUC, trying to do this is totally broken on most arches nowadays; making it work requires a reentrant C library and/or logic to defer signals around critical sections in userspace.
"Real" threading makes this pretty pointless for the most part.
Related question: does shadow stack work with ucontext-based coroutines? Per-context stacks need to be allocated by the program for that.
Yes, ucontext-based coroutines are the sort of thing I meant when I was talking about returning to a different context.
Ah, right. Doing this asynchronously on the back of a signal (instead of doing a sigreturn) is the bad thing. setcontext() officially doesn't work for this any more, and doing it by hacking or rebuilding the sigframe is extremely hairy and probably a terrible idea for the reasons I gave.
be able to also update GCSPR_EL0 to something consistent otherwise attempting to return from the interrupted context isn't going to go well. Changing the mode is a bit more exotic, as it is in general. It's as much to provide information to the signal handler as anything else.
Note, the way sigcontext (a.k.a. mcontext).__reserved[] is used by glibc for the ucontext API is inspired by the way the kernel uses it, but not guaranteed to be compatible. For the ucontext API glibc doesn't try to store/restore asynchronous contexts (which is why setcontext() from a signal handler is totally broken), so there is no need to store SVE/SME state and hence lots of free space, so this probably is supportable with shadow stacks -- if there's a way to allocate them. This series would be unaffected either way.
(IIRC, the contents of mcontext.__reserved[] is totally incompatible with what the kernel puts in there, and doesn't have the same record structure.)
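On the allocation question, the map_shadow_stack() syscall exercised by the selftests later in this series looks like the natural route; a hedged sketch of allocating a per-context GCS, complete with the end marker and the cap token a stack pivot would consume:

#include <stddef.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack   453
#endif
#define SHADOW_STACK_SET_TOKEN  (1ULL << 0)     /* cap token at the top */
#define SHADOW_STACK_SET_MARKER (1ULL << 1)     /* end of stack marker */

/* Allocate a GCS for a coroutine-style context, MAP_FAILED on error */
static void *alloc_context_gcs(size_t size)
{
        return (void *)syscall(__NR_map_shadow_stack, 0, size,
                               SHADOW_STACK_SET_MARKER |
                               SHADOW_STACK_SET_TOKEN);
}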
I'm not sure that we should always put information in the signal frame that the signal handler can't obtain directly.
I guess it's harmless to include this, though.
If we don't include it and different ucontexts have different GCS features enabled, we run into trouble on context switch.
As outlined above, nowadays you can only use setcontext() on a context obtained from getcontext(). Using setcontext() on a context obtained from a sigframe works by accident or not at all, but in any case coroutines always switch synchronously and don't rely on doing this.
(See where setcontext deals with the FPSIMD regs: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux... )
So, overall I think making ucontext coroutines work with GCS is purely a libc matter that is "interesting" here, but not something we need to worry about.
Is the guarded stack considered necessary (or at least beneficial) for backtracing, or is the regular stack sufficient?
It's potentially beneficial, being less vulnerable to corruption and simpler to parse if all you're interested in is return addresses. Profiling in particular was mentioned; grabbing a linear block of memory will hopefully be less overhead than chasing down the stack. The regular stack should generally be sufficient though.
I guess we can't really argue that the shadow stack pointer is redundant here though. The whole point of shadow stacks is to make things more robust...
Just kicking the tyres on the question of whether we need it here, but I guess it's hard to make a good case for saying "no".
Indeed. The general model here is that we don't break userspace that relies on parsing the normal stack (so the GCS is never *necessary*) but clearly you want to have it.
Agreed, but perhaps not in programs that haven't enabled shadow stack?
Cheers ---Dave
On Thu, Aug 15, 2024 at 04:33:25PM +0100, Dave Martin wrote:
On Thu, Aug 15, 2024 at 04:05:32PM +0100, Mark Brown wrote:
The expectation (at least for arm64) is that the main program will only have shadow stacks if everything says it can support them. If the dynamic linker turns them on during startup prior to parsing the main executable, this means that it should turn them off before actually starting the executable, taking care to consider any locking of features.
Hmm, so we really do get a clear "enable shadow stack" call to the kernel, which we can reasonably expect won't happen for ancient software?
Yes, userspace always has to explicitly enable the GCS.
If so, I think dumping the GCS state in the sigframe could be made conditional on that without problems (?)
It is - we only allocate the sigframe record if the task has GCS enabled.
Related question: does shadow stack work with ucontext-based coroutines? Per-context stacks need to be allocated by the program for that.
Yes, ucontext-based coroutines are the sort of thing I meant when I was talking about returning to a different context.
Ah, right. Doing this asynchronously on the back of a signal (instead of doing a sigreturn) is the bad thing. setcontext() officially doesn't work for this any more, and doing it by hacking or rebuilding the sigframe is extremely hairy and probably a terrible idea for the reasons I gave.
I see. I tend to view this as more adventurous than I personally would be when writing userspace code but equally I don't see a need to actively break things. There's no *requirement* to use libc...
So, overall I think making ucontext coroutines work with GCS is purely a libc matter that is "interesting" here, but not something we need to worry about.
Yes, it's not our problem so long as we don't get in the way somehow.
On Thu, Aug 15, 2024 at 04:46:04PM +0100, Mark Brown wrote:
On Thu, Aug 15, 2024 at 04:33:25PM +0100, Dave Martin wrote:
On Thu, Aug 15, 2024 at 04:05:32PM +0100, Mark Brown wrote:
The expectation (at least for arm64) is that the main program will only have shadow stacks if everything says it can support them. If the dynamic linker turns them on during startup prior to parsing the main executable, this means that it should turn them off before actually starting the executable, taking care to consider any locking of features.
Hmm, so we really do get a clear "enable shadow stack" call to the kernel, which we can reasonably expect won't happen for ancient software?
Yes, userspace always has to explicitly enable the GCS.
If so, I think dumping the GCS state in the sigframe could be made conditional on that without problems (?)
It is - we only allocate the sigframe record if the task has GCS enabled.
OK, makes sense.
Related question: does shadow stack work with ucontext-based coroutines? Per-context stacks need to be allocated by the program for that.
Yes, ucontext-based coroutines are the sort of thing I meant when I was talking about returning to a different context.
Ah, right. Doing this asynchronously on the back of a signal (instead of doing a sigreturn) is the bad thing. setcontext() officially doesn't work for this any more, and doing it by hacking or rebuilding the sigframe is extremely hairy and probably a terrible idea for the reasons I gave.
I see. I tend to view this as more adventurous than I personally would be when writing userspace code but equally I don't see a need to actively break things. There's no *requirement* to use libc...
So, overall I think making ucontext coroutines work with GCS is purely a libc matter that is "interesting" here, but not something we need to worry about.
Yes, it's not our problem so long as we don't get in the way somehow.
Sure. "Hairy and probably a terrible idea" is not the same as "impossible", but you need to know what you're doing and you get exposed to all sorts of portability challenges.
There's a limit to how much we should attempt to smooth over all that.
Anyway, I think what the GCS patches are doing looks reasonable.
Cheers ---Dave
On Thu, Aug 01, 2024 at 01:06:51PM +0100, Mark Brown wrote:
@@ -636,6 +639,81 @@ extern int restore_zt_context(struct user_ctxs *user); #endif /* ! CONFIG_ARM64_SME */ +#ifdef CONFIG_ARM64_GCS
+static int preserve_gcs_context(struct gcs_context __user *ctx) +{
- int err = 0;
- u64 gcspr;
- /*
* We will add a cap token to the frame, include it in the
* GCSPR_EL0 we report to support stack switching via
* sigreturn.
*/
- gcs_preserve_current_state();
- gcspr = current->thread.gcspr_el0 - 8;
We discussed briefly offline. Not a problem in this patch but gcs_preserve_current_state() only saves it conditionally on GCS being enabled for the task. However, you mentioned that the register is always available to the user, so I'd rather change the preserving function to save it unconditionally.
- __put_user_error(GCS_MAGIC, &ctx->head.magic, err);
- __put_user_error(sizeof(*ctx), &ctx->head.size, err);
- __put_user_error(gcspr, &ctx->gcspr, err);
- __put_user_error(0, &ctx->reserved, err);
- __put_user_error(current->thread.gcs_el0_mode,
&ctx->features_enabled, err);
- return err;
+}
+static int restore_gcs_context(struct user_ctxs *user) +{
- u64 gcspr, enabled;
- int err = 0;
- if (user->gcs_size != sizeof(*user->gcs))
return -EINVAL;
- __get_user_error(gcspr, &user->gcs->gcspr, err);
- __get_user_error(enabled, &user->gcs->features_enabled, err);
- if (err)
return err;
- /* Don't allow unknown modes */
- if (enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
return -EINVAL;
- err = gcs_check_locked(current, enabled);
- if (err != 0)
return err;
- /* Don't allow enabling */
- if (!task_gcs_el0_enabled(current) &&
(enabled & PR_SHADOW_STACK_ENABLE))
return -EINVAL;
We don't allow enabling, and that's fine, but with this early return we don't restore gcspr either.
- /* If we are disabling disable everything */
- if (!(enabled & PR_SHADOW_STACK_ENABLE))
enabled = 0;
- current->thread.gcs_el0_mode = enabled;
- /*
* We let userspace set GCSPR_EL0 to anything here, we will
* validate later in gcs_restore_signal().
*/
- current->thread.gcspr_el0 = gcspr;
- write_sysreg_s(current->thread.gcspr_el0, SYS_GCSPR_EL0);
I think you should move this further up unconditionally.
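For concreteness, a sketch of the reordering being suggested, with GCSPR_EL0 restored before any of the mode checks can return early (the shape of the proposal, not the code as posted):

static int restore_gcs_context(struct user_ctxs *user)
{
        u64 gcspr, enabled;
        int err = 0;

        if (user->gcs_size != sizeof(*user->gcs))
                return -EINVAL;

        __get_user_error(gcspr, &user->gcs->gcspr, err);
        __get_user_error(enabled, &user->gcs->features_enabled, err);
        if (err)
                return err;

        /*
         * GCSPR_EL0 is accessible to userspace whatever the GCS mode,
         * so restore it unconditionally; gcs_restore_signal() will
         * validate the value later.
         */
        current->thread.gcspr_el0 = gcspr;
        write_sysreg_s(current->thread.gcspr_el0, SYS_GCSPR_EL0);

        /* Don't allow unknown modes */
        if (enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
                return -EINVAL;

        err = gcs_check_locked(current, enabled);
        if (err != 0)
                return err;

        /* Don't allow enabling */
        if (!task_gcs_el0_enabled(current) &&
            (enabled & PR_SHADOW_STACK_ENABLE))
                return -EINVAL;

        /* If we are disabling, disable everything */
        if (!(enabled & PR_SHADOW_STACK_ENABLE))
                enabled = 0;

        current->thread.gcs_el0_mode = enabled;

        return 0;
}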
Provide a new register type NT_ARM_GCS reporting the current GCS mode and pointer for EL0. Due to the interactions with allocation and deallocation of Guarded Control Stacks we do not permit any changes to the GCS mode via ptrace; only GCSPR_EL0 may be changed.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/include/uapi/asm/ptrace.h | 8 +++++ arch/arm64/kernel/ptrace.c | 59 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/elf.h | 1 + 3 files changed, 68 insertions(+)
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h index 7fa2f7036aa7..0f39ba4f3efd 100644 --- a/arch/arm64/include/uapi/asm/ptrace.h +++ b/arch/arm64/include/uapi/asm/ptrace.h @@ -324,6 +324,14 @@ struct user_za_header { #define ZA_PT_SIZE(vq) \ (ZA_PT_ZA_OFFSET + ZA_PT_ZA_SIZE(vq))
+/* GCS state (NT_ARM_GCS) */ + +struct user_gcs { + __u64 features_enabled; + __u64 features_locked; + __u64 gcspr_el0; +}; + #endif /* __ASSEMBLY__ */
#endif /* _UAPI__ASM_PTRACE_H */ diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 0d022599eb61..9db0b669fee3 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -34,6 +34,7 @@ #include <asm/cpufeature.h> #include <asm/debug-monitors.h> #include <asm/fpsimd.h> +#include <asm/gcs.h> #include <asm/mte.h> #include <asm/pointer_auth.h> #include <asm/stacktrace.h> @@ -1440,6 +1441,51 @@ static int tagged_addr_ctrl_set(struct task_struct *target, const struct } #endif
+#ifdef CONFIG_ARM64_GCS +static int gcs_get(struct task_struct *target, + const struct user_regset *regset, + struct membuf to) +{ + struct user_gcs user_gcs; + + if (target == current) + gcs_preserve_current_state(); + + user_gcs.features_enabled = target->thread.gcs_el0_mode; + user_gcs.features_locked = target->thread.gcs_el0_locked; + user_gcs.gcspr_el0 = target->thread.gcspr_el0; + + return membuf_write(&to, &user_gcs, sizeof(user_gcs)); +} + +static int gcs_set(struct task_struct *target, const struct + user_regset *regset, unsigned int pos, + unsigned int count, const void *kbuf, const + void __user *ubuf) +{ + int ret; + struct user_gcs user_gcs; + + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &user_gcs, 0, -1); + if (ret) + return ret; + + if (user_gcs.features_enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK) + return -EINVAL; + + /* Do not allow enable via ptrace */ + if ((user_gcs.features_enabled & PR_SHADOW_STACK_ENABLE) && + !(target->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE)) + return -EBUSY; + + target->thread.gcs_el0_mode = user_gcs.features_enabled; + target->thread.gcs_el0_locked = user_gcs.features_locked; + target->thread.gcspr_el0 = user_gcs.gcspr_el0; + + return 0; +} +#endif + enum aarch64_regset { REGSET_GPR, REGSET_FPR, @@ -1469,6 +1515,9 @@ enum aarch64_regset { #ifdef CONFIG_ARM64_TAGGED_ADDR_ABI REGSET_TAGGED_ADDR_CTRL, #endif +#ifdef CONFIG_ARM64_GCS + REGSET_GCS, +#endif };
static const struct user_regset aarch64_regsets[] = { @@ -1628,6 +1677,16 @@ static const struct user_regset aarch64_regsets[] = { .set = tagged_addr_ctrl_set, }, #endif +#ifdef CONFIG_ARM64_GCS + [REGSET_GCS] = { + .core_note_type = NT_ARM_GCS, + .n = sizeof(struct user_gcs) / sizeof(u64), + .size = sizeof(u64), + .align = sizeof(u64), + .regset_get = gcs_get, + .set = gcs_set, + }, +#endif };
static const struct user_regset_view user_aarch64_view = { diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h index b54b313bcf07..77d4910bbb9d 100644 --- a/include/uapi/linux/elf.h +++ b/include/uapi/linux/elf.h @@ -441,6 +441,7 @@ typedef struct elf64_shdr { #define NT_ARM_ZA 0x40c /* ARM SME ZA registers */ #define NT_ARM_ZT 0x40d /* ARM SME ZT registers */ #define NT_ARM_FPMR 0x40e /* ARM floating point mode register */ +#define NT_ARM_GCS 0x40f /* ARM GCS state */ #define NT_ARC_V2 0x600 /* ARCv2 accumulator/extra registers */ #define NT_VMCOREDD 0x700 /* Vmcore Device Dump Note */ #define NT_MIPS_DSP 0x800 /* MIPS DSP ASE registers */
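For illustration, a hedged sketch of a debugger reading the new regset from a stopped tracee; struct user_gcs is redefined locally here in case the installed uapi headers predate this series:

#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef NT_ARM_GCS
#define NT_ARM_GCS 0x40f
#endif

struct user_gcs {
        uint64_t features_enabled;
        uint64_t features_locked;
        uint64_t gcspr_el0;
};

/* Read the GCS state of a stopped tracee; 0 on success, -1 with errno set */
static long read_gcs_state(pid_t pid, struct user_gcs *gcs)
{
        struct iovec iov = {
                .iov_base = gcs,
                .iov_len = sizeof(*gcs),
        };

        return ptrace(PTRACE_GETREGSET, pid, NT_ARM_GCS, &iov);
}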
On Thu, Aug 01, 2024 at 01:06:52PM +0100, Mark Brown wrote:
@@ -1440,6 +1441,51 @@ static int tagged_addr_ctrl_set(struct task_struct *target, const struct } #endif +#ifdef CONFIG_ARM64_GCS +static int gcs_get(struct task_struct *target,
const struct user_regset *regset,
struct membuf to)
+{
- struct user_gcs user_gcs;
- if (target == current)
gcs_preserve_current_state();
- user_gcs.features_enabled = target->thread.gcs_el0_mode;
- user_gcs.features_locked = target->thread.gcs_el0_locked;
- user_gcs.gcspr_el0 = target->thread.gcspr_el0;
If it's not the current thread, I guess the task was interrupted, scheduled out (potentially on another CPU) and its GCSPR_EL0 saved.
- return membuf_write(&to, &user_gcs, sizeof(user_gcs));
+}
+static int gcs_set(struct task_struct *target, const struct
user_regset *regset, unsigned int pos,
unsigned int count, const void *kbuf, const
void __user *ubuf)
+{
- int ret;
- struct user_gcs user_gcs;
- ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &user_gcs, 0, -1);
- if (ret)
return ret;
- if (user_gcs.features_enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
return -EINVAL;
- /* Do not allow enable via ptrace */
- if ((user_gcs.features_enabled & PR_SHADOW_STACK_ENABLE) &&
!(target->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE))
return -EBUSY;
- target->thread.gcs_el0_mode = user_gcs.features_enabled;
- target->thread.gcs_el0_locked = user_gcs.features_locked;
- target->thread.gcspr_el0 = user_gcs.gcspr_el0;
As in the previous thread, I thought we need to restore GCSPR_EL0 unconditionally.
I don't particularly like that this register becomes a scrap one that threads can use regardless of GCS. Not sure we have a simple solution. We could track three states: GCS never enabled, GCS enabled, and GCS disabled after being enabled. It's probably not worth it.
On ptrace() access to the shadow stack, we rely on the barrier in the context switch code if stopping a thread. If other threads are running on other CPUs, it's racy anyway even for normal accesses, so I don't think we need to do anything more for ptrace.
On Wed, Aug 21, 2024 at 06:57:16PM +0100, Catalin Marinas wrote:
On Thu, Aug 01, 2024 at 01:06:52PM +0100, Mark Brown wrote:
- if (user_gcs.features_enabled & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
return -EINVAL;
- /* Do not allow enable via ptrace */
- if ((user_gcs.features_enabled & PR_SHADOW_STACK_ENABLE) &&
!(target->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE))
return -EBUSY;
- target->thread.gcs_el0_mode = user_gcs.features_enabled;
- target->thread.gcs_el0_locked = user_gcs.features_locked;
- target->thread.gcspr_el0 = user_gcs.gcspr_el0;
As in the previous thread, I thought we need to restore GCSPR_EL0 unconditionally.
My thinking here is that if we return an error code then ideally we shouldn't apply any changes, especially in cases like this one where we're rejecting inputs that are just invalid. That seems like the least surprising outcome and what we should strive for, even if it's too complicated for some cases. It is possible to write GCSPR_EL0 regardless of whether GCS is enabled, it's just not possible to write it as part of an otherwise invalid write. The validation is checking for unknown features and enables. With clone3() we could relax the enable check, but I've just pulled that out of the series for the time being.
I don't particularly like that this register becomes a scrap one that threads can use regardless of GCS. Not sure we have a simple solution. We could track three states: GCS never enabled, GCS enabled, and GCS disabled after being enabled. It's probably not worth it.
Yeah, I did give consideration to that but as you say I think it's probably not worth it - you end up with a state machine which for the most part only gets two of the states exercised.
On ptrace() access to the shadow stack, we rely on the barrier in the context switch code if stopping a thread. If other threads are running on other CPUs, it's racy anyway even for normal accesses, so I don't think we need to do anything more for ptrace.
Yes, I don't see any reason to try to do anything special there.
On Wed, Aug 21, 2024 at 07:28:08PM +0100, Mark Brown wrote:
part of an otherwise invalid write. The validation is checking for unknown features and enables. With clone3() we could relax the enable check, but I've just pulled that out of the series for the time being.
Actually, thinking about it some more, I'll just remove the check for enable; the support for threads with GCS enabled and no kernel-allocated GCS is already there and I didn't pull that bit out.
Provide a Kconfig option allowing the user to select if GCS support is built into the kernel.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- arch/arm64/Kconfig | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index b3fc891f1544..83e27857bc6a 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2127,6 +2127,26 @@ config ARM64_EPAN if the cpu does not implement the feature. endmenu # "ARMv8.7 architectural features"
+menu "v9.4 architectural features" + +config ARM64_GCS + bool "Enable support for Guarded Control Stack (GCS)" + default y + select ARCH_HAS_USER_SHADOW_STACK + select ARCH_USES_HIGH_VMA_FLAGS + help + Guarded Control Stack (GCS) provides support for a separate + stack with restricted access which contains only return + addresses. This can be used to harden against some attacks + by comparing return address used by the program with what is + stored in the GCS, and may also be used to efficiently obtain + the call stack for applications such as profiling. + + The feature is detected at runtime, and will remain disabled + if the system does not implement the feature. + +endmenu # "v9.4 architectural features" + config ARM64_SVE bool "ARM Scalable Vector Extension support" default y
On Thu, Aug 01, 2024 at 01:06:53PM +0100, Mark Brown wrote:
Provide a Kconfig option allowing the user to select if GCS support is built into the kernel.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
Reviewed-by: Catalin Marinas catalin.marinas@arm.com
Add coverage of the GCS hwcap to the hwcap selftest, using a read of GCSPR_EL0 to generate SIGILL without having to worry about enabling GCS.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/abi/hwcap.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
diff --git a/tools/testing/selftests/arm64/abi/hwcap.c b/tools/testing/selftests/arm64/abi/hwcap.c index d8909b2b535a..dc54ae894fe5 100644 --- a/tools/testing/selftests/arm64/abi/hwcap.c +++ b/tools/testing/selftests/arm64/abi/hwcap.c @@ -98,6 +98,17 @@ static void fpmr_sigill(void) asm volatile("mrs x0, S3_3_C4_C4_2" : : : "x0"); }
+static void gcs_sigill(void) +{ + unsigned long *gcspr; + + asm volatile( + "mrs %0, S3_3_C2_C5_1" + : "=r" (gcspr) + : + : "cc"); +} + static void ilrcpc_sigill(void) { /* LDAPUR W0, [SP, #8] */ @@ -528,6 +539,14 @@ static const struct hwcap_data { .sigill_fn = fpmr_sigill, .sigill_reliable = true, }, + { + .name = "GCS", + .at_hwcap = AT_HWCAP2, + .hwcap_bit = HWCAP2_GCS, + .cpuinfo = "gcs", + .sigill_fn = gcs_sigill, + .sigill_reliable = true, + }, { .name = "JSCVT", .at_hwcap = AT_HWCAP,
Hello,
Mark Brown broonie@kernel.org writes:
Add coverage of the GCS hwcap to the hwcap selftest, using a read of GCSPR_EL0 to generate SIGILL without having to worry about enabling GCS.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
tools/testing/selftests/arm64/abi/hwcap.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+)
The hwcap test passes on my FVP setup:
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Allow test programs to use the shadow stack helpers on arm64.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/ksft_shstk.h | 37 ++++++++++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+)
diff --git a/tools/testing/selftests/ksft_shstk.h b/tools/testing/selftests/ksft_shstk.h index 85d0747c1802..302957a0bbd5 100644 --- a/tools/testing/selftests/ksft_shstk.h +++ b/tools/testing/selftests/ksft_shstk.h @@ -50,6 +50,43 @@ static inline __attribute__((always_inline)) void enable_shadow_stack(void)
#endif
+#ifdef __aarch64__ +#define PR_SET_SHADOW_STACK_STATUS 75 +# define PR_SHADOW_STACK_ENABLE (1UL << 0) + +#define my_syscall2(num, arg1, arg2) \ +({ \ + register long _num __asm__ ("x8") = (num); \ + register long _arg1 __asm__ ("x0") = (long)(arg1); \ + register long _arg2 __asm__ ("x1") = (long)(arg2); \ + register long _arg3 __asm__ ("x2") = 0; \ + register long _arg4 __asm__ ("x3") = 0; \ + register long _arg5 __asm__ ("x4") = 0; \ + \ + __asm__ volatile ( \ + "svc #0\n" \ + : "=r"(_arg1) \ + : "r"(_arg1), "r"(_arg2), \ + "r"(_arg3), "r"(_arg4), \ + "r"(_arg5), "r"(_num) \ + : "memory", "cc" \ + ); \ + _arg1; \ +}) + +#define ENABLE_SHADOW_STACK +static inline __attribute__((always_inline)) void enable_shadow_stack(void) +{ + int ret; + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + PR_SHADOW_STACK_ENABLE); + if (ret == 0) + shadow_stack_enabled = true; +} + +#endif + #ifndef __NR_map_shadow_stack #define __NR_map_shadow_stack 453 #endif
In order to test shadow stack support in clone3() the clone3() selftests need to have a fully inline clone3() call; provide one for arm64.
Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/clone3/clone3_selftests.h | 26 +++++++++++++++++++++++ 1 file changed, 26 insertions(+)
diff --git a/tools/testing/selftests/clone3/clone3_selftests.h b/tools/testing/selftests/clone3/clone3_selftests.h index 38d82934668a..e32915085333 100644 --- a/tools/testing/selftests/clone3/clone3_selftests.h +++ b/tools/testing/selftests/clone3/clone3_selftests.h @@ -69,6 +69,32 @@ static pid_t __always_inline sys_clone3(struct __clone_args *args, size_t size)
return ret; } +#elif defined(__aarch64__) +static pid_t __always_inline sys_clone3(struct __clone_args *args, size_t size) +{ + register long _num __asm__ ("x8") = __NR_clone3; + register long _args __asm__ ("x0") = (long)(args); + register long _size __asm__ ("x1") = (long)(size); + register long arg2 __asm__ ("x2") = 0; + register long arg3 __asm__ ("x3") = 0; + register long arg4 __asm__ ("x4") = 0; + + __asm__ volatile ( + "svc #0\n" + : "=r"(_args) + : "r"(_args), "r"(_size), + "r"(_num), "r"(arg2), + "r"(arg3), "r"(arg4) + : "memory", "cc" + ); + + if ((int)_args < 0) { + errno = -((int)_args); + return -1; + } + + return _args; +} #else static pid_t sys_clone3(struct __clone_args *args, size_t size) {
Mark Brown broonie@kernel.org writes:
In order to test shadow stack support in clone3() the clone3() selftests need to have a fully inline clone3() call; provide one for arm64.
Signed-off-by: Mark Brown broonie@kernel.org
tools/testing/selftests/clone3/clone3_selftests.h | 26 +++++++++++++++++++++++ 1 file changed, 26 insertions(+)
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
The clone3 test passes on my FVP setup:
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
In preparation for testing GCS related signal handling add it as a feature we check for in the signal handling support code.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/signal/test_signals.h | 2 ++ tools/testing/selftests/arm64/signal/test_signals_utils.c | 3 +++ 2 files changed, 5 insertions(+)
diff --git a/tools/testing/selftests/arm64/signal/test_signals.h b/tools/testing/selftests/arm64/signal/test_signals.h index 1e6273d81575..7ada43688c02 100644 --- a/tools/testing/selftests/arm64/signal/test_signals.h +++ b/tools/testing/selftests/arm64/signal/test_signals.h @@ -35,6 +35,7 @@ enum { FSME_BIT, FSME_FA64_BIT, FSME2_BIT, + FGCS_BIT, FMAX_END };
@@ -43,6 +44,7 @@ enum { #define FEAT_SME (1UL << FSME_BIT) #define FEAT_SME_FA64 (1UL << FSME_FA64_BIT) #define FEAT_SME2 (1UL << FSME2_BIT) +#define FEAT_GCS (1UL << FGCS_BIT)
/* * A descriptor used to describe and configure a test case. diff --git a/tools/testing/selftests/arm64/signal/test_signals_utils.c b/tools/testing/selftests/arm64/signal/test_signals_utils.c index 0dc948db3a4a..89ef95c1af0e 100644 --- a/tools/testing/selftests/arm64/signal/test_signals_utils.c +++ b/tools/testing/selftests/arm64/signal/test_signals_utils.c @@ -30,6 +30,7 @@ static char const *const feats_names[FMAX_END] = { " SME ", " FA64 ", " SME2 ", + " GCS ", };
#define MAX_FEATS_SZ 128 @@ -329,6 +330,8 @@ int test_init(struct tdescr *td) td->feats_supported |= FEAT_SME_FA64; if (getauxval(AT_HWCAP2) & HWCAP2_SME2) td->feats_supported |= FEAT_SME2; + if (getauxval(AT_HWCAP2) & HWCAP2_GCS) + td->feats_supported |= FEAT_GCS; if (feats_ok(td)) { if (td->feats_required & td->feats_supported) fprintf(stderr,
Teach the framework about the GCS signal context, avoiding warnings on the unknown context.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/signal/testcases/testcases.c | 7 +++++++ tools/testing/selftests/arm64/signal/testcases/testcases.h | 1 + 2 files changed, 8 insertions(+)
diff --git a/tools/testing/selftests/arm64/signal/testcases/testcases.c b/tools/testing/selftests/arm64/signal/testcases/testcases.c index 674b88cc8c39..49d036e97996 100644 --- a/tools/testing/selftests/arm64/signal/testcases/testcases.c +++ b/tools/testing/selftests/arm64/signal/testcases/testcases.c @@ -217,6 +217,13 @@ bool validate_reserved(ucontext_t *uc, size_t resv_sz, char **err) *err = "Bad size for fpmr_context"; new_flags |= FPMR_CTX; break; + case GCS_MAGIC: + if (flags & GCS_CTX) + *err = "Multiple GCS_MAGIC"; + if (head->size != sizeof(struct gcs_context)) + *err = "Bad size for gcs_context"; + new_flags |= GCS_CTX; + break; case EXTRA_MAGIC: if (flags & EXTRA_CTX) *err = "Multiple EXTRA_MAGIC"; diff --git a/tools/testing/selftests/arm64/signal/testcases/testcases.h b/tools/testing/selftests/arm64/signal/testcases/testcases.h index 7727126347e0..dc3cf777dafe 100644 --- a/tools/testing/selftests/arm64/signal/testcases/testcases.h +++ b/tools/testing/selftests/arm64/signal/testcases/testcases.h @@ -20,6 +20,7 @@ #define EXTRA_CTX (1 << 3) #define ZT_CTX (1 << 4) #define FPMR_CTX (1 << 5) +#define GCS_CTX (1 << 6)
#define KSFT_BAD_MAGIC 0xdeadbeef
Currently we ignore si_code unless the expected signal is a SIGSEGV, in which case we enforce it being SEGV_ACCERR. Allow test cases to specify exactly which si_code should be generated so we can validate this, and test for other segfault codes.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- .../testing/selftests/arm64/signal/test_signals.h | 4 +++ .../selftests/arm64/signal/test_signals_utils.c | 29 ++++++++++++++-------- 2 files changed, 23 insertions(+), 10 deletions(-)
diff --git a/tools/testing/selftests/arm64/signal/test_signals.h b/tools/testing/selftests/arm64/signal/test_signals.h index 7ada43688c02..ee75a2c25ce7 100644 --- a/tools/testing/selftests/arm64/signal/test_signals.h +++ b/tools/testing/selftests/arm64/signal/test_signals.h @@ -71,6 +71,10 @@ struct tdescr { * Zero when no signal is expected on success */ int sig_ok; + /* + * expected si_code for sig_ok, or 0 to not check + */ + int sig_ok_code; /* signum expected on unsupported CPU features. */ int sig_unsupp; /* a timeout in second for test completion */ diff --git a/tools/testing/selftests/arm64/signal/test_signals_utils.c b/tools/testing/selftests/arm64/signal/test_signals_utils.c index 89ef95c1af0e..63deca32b0df 100644 --- a/tools/testing/selftests/arm64/signal/test_signals_utils.c +++ b/tools/testing/selftests/arm64/signal/test_signals_utils.c @@ -143,16 +143,25 @@ static bool handle_signal_ok(struct tdescr *td, "current->token ZEROED...test is probably broken!\n"); abort(); } - /* - * Trying to narrow down the SEGV to the ones generated by Kernel itself - * via arm64_notify_segfault(). This is a best-effort check anyway, and - * the si_code check may need to change if this aspect of the kernel - * ABI changes. - */ - if (td->sig_ok == SIGSEGV && si->si_code != SEGV_ACCERR) { - fprintf(stdout, - "si_code != SEGV_ACCERR...test is probably broken!\n"); - abort(); + if (td->sig_ok_code) { + if (si->si_code != td->sig_ok_code) { + fprintf(stdout, "si_code is %d not %d\n", + si->si_code, td->sig_ok_code); + abort(); + } + } else { + /* + * Trying to narrow down the SEGV to the ones + * generated by Kernel itself via + * arm64_notify_segfault(). This is a best-effort + * check anyway, and the si_code check may need to + * change if this aspect of the kernel ABI changes. + */ + if (td->sig_ok == SIGSEGV && si->si_code != SEGV_ACCERR) { + fprintf(stdout, + "si_code != SEGV_ACCERR...test is probably broken!\n"); + abort(); + } } td->pass = 1; /*
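As a usage illustration, a hypothetical test descriptor using the new field; SEGV_CPERR is assumed here to be the GCS-specific si_code that later testcases check for, and my_gcs_fault_test() is a stand-in for the test body:

/* Hypothetical test: a GCS fault must report the exact si_code */
struct tdescr tde = {
        .name = "GCS fault si_code",
        .descr = "GCS faults report SEGV_CPERR rather than SEGV_ACCERR",
        .feats_required = FEAT_GCS,
        .sig_ok = SIGSEGV,
        .sig_ok_code = SEGV_CPERR,      /* assumed constant, see above */
        .timeout = 3,
        .run = my_gcs_fault_test,       /* hypothetical test body */
};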
Since it is not possible to return from the function that enabled GCS without disabling GCS, it is very inconvenient to use the signal handling tests to cover GCS when GCS is not enabled by the toolchain and runtime, something that no current distribution does. Since none of the testcases do anything with stacks that would cause problems with GCS we can sidestep this issue by unconditionally enabling GCS on startup and exiting with a call to exit() rather than a return from main().
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- .../testing/selftests/arm64/signal/test_signals.c | 17 ++++++++++++- .../selftests/arm64/signal/test_signals_utils.h | 29 ++++++++++++++++++++++ 2 files changed, 45 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/signal/test_signals.c b/tools/testing/selftests/arm64/signal/test_signals.c index 00051b40d71e..30e95f50db19 100644 --- a/tools/testing/selftests/arm64/signal/test_signals.c +++ b/tools/testing/selftests/arm64/signal/test_signals.c @@ -7,6 +7,10 @@ * Each test provides its own tde struct tdescr descriptor to link with * this wrapper. Framework provides common helpers. */ + +#include <sys/auxv.h> +#include <sys/prctl.h> + #include <kselftest.h>
#include "test_signals.h" @@ -16,6 +20,16 @@ struct tdescr *current = &tde;
int main(int argc, char *argv[]) { + /* + * Ensure GCS is at least enabled throughout the tests if + * supported, otherwise the inability to return from the + * function that enabled GCS makes it very inconvenient to set + * up test cases. The prctl() may fail if GCS was locked by + * libc setup code. + */ + if (getauxval(AT_HWCAP2) & HWCAP2_GCS) + gcs_set_state(PR_SHADOW_STACK_ENABLE); + ksft_print_msg("%s :: %s\n", current->name, current->descr); if (test_setup(current) && test_init(current)) { test_run(current); @@ -23,5 +37,6 @@ int main(int argc, char *argv[]) } test_result(current);
- return current->result; + /* Do not return in case GCS was enabled */ + exit(current->result); } diff --git a/tools/testing/selftests/arm64/signal/test_signals_utils.h b/tools/testing/selftests/arm64/signal/test_signals_utils.h index 762c8fe9c54a..1e80808ee105 100644 --- a/tools/testing/selftests/arm64/signal/test_signals_utils.h +++ b/tools/testing/selftests/arm64/signal/test_signals_utils.h @@ -18,6 +18,35 @@ void test_cleanup(struct tdescr *td); int test_run(struct tdescr *td); void test_result(struct tdescr *td);
+#ifndef __NR_prctl +#define __NR_prctl 167 +#endif + +/* + * The prctl takes 1 argument but we need to ensure that the other + * values passed in registers to the syscall are zero since the kernel + * validates them. + */ +#define gcs_set_state(state) \ + ({ \ + register long _num __asm__ ("x8") = __NR_prctl; \ + register long _arg1 __asm__ ("x0") = PR_SET_SHADOW_STACK_STATUS; \ + register long _arg2 __asm__ ("x1") = (long)(state); \ + register long _arg3 __asm__ ("x2") = 0; \ + register long _arg4 __asm__ ("x3") = 0; \ + register long _arg5 __asm__ ("x4") = 0; \ + \ + __asm__ volatile ( \ + "svc #0\n" \ + : "=r"(_arg1) \ + : "r"(_arg1), "r"(_arg2), \ + "r"(_arg3), "r"(_arg4), \ + "r"(_arg5), "r"(_num) \ + : "memory", "cc" \ + ); \ + _arg1; \ + }) + static inline bool feats_ok(struct tdescr *td) { if (td->feats_incompatible & td->feats_supported)
This test program just covers the basic GCS ABI, covering aspects of the ABI as standalone features without attempting to integrate things.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/Makefile | 2 +- tools/testing/selftests/arm64/gcs/.gitignore | 1 + tools/testing/selftests/arm64/gcs/Makefile | 18 ++ tools/testing/selftests/arm64/gcs/basic-gcs.c | 357 ++++++++++++++++++++++++++ tools/testing/selftests/arm64/gcs/gcs-util.h | 90 +++++++ 5 files changed, 467 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/Makefile b/tools/testing/selftests/arm64/Makefile index 28b93cab8c0d..22029e60eff3 100644 --- a/tools/testing/selftests/arm64/Makefile +++ b/tools/testing/selftests/arm64/Makefile @@ -4,7 +4,7 @@ ARCH ?= $(shell uname -m 2>/dev/null || echo not)
ifneq (,$(filter $(ARCH),aarch64 arm64)) -ARM64_SUBTARGETS ?= tags signal pauth fp mte bti abi +ARM64_SUBTARGETS ?= tags signal pauth fp mte bti abi gcs else ARM64_SUBTARGETS := endif diff --git a/tools/testing/selftests/arm64/gcs/.gitignore b/tools/testing/selftests/arm64/gcs/.gitignore new file mode 100644 index 000000000000..0e5e695ecba5 --- /dev/null +++ b/tools/testing/selftests/arm64/gcs/.gitignore @@ -0,0 +1 @@ +basic-gcs diff --git a/tools/testing/selftests/arm64/gcs/Makefile b/tools/testing/selftests/arm64/gcs/Makefile new file mode 100644 index 000000000000..61a30f483429 --- /dev/null +++ b/tools/testing/selftests/arm64/gcs/Makefile @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: GPL-2.0 +# Copyright (C) 2023 ARM Limited +# +# In order to avoid interaction with the toolchain and dynamic linker the +# portions of these tests that interact with the GCS are implemented using +# nolibc. +# + +TEST_GEN_PROGS := basic-gcs + +include ../../lib.mk + +$(OUTPUT)/basic-gcs: basic-gcs.c + $(CC) -g -fno-asynchronous-unwind-tables -fno-ident -s -Os -nostdlib \ + -static -include ../../../../include/nolibc/nolibc.h \ + -I../../../../../usr/include \ + -std=gnu99 -I../.. -g \ + -ffreestanding -Wall $^ -o $@ -lgcc diff --git a/tools/testing/selftests/arm64/gcs/basic-gcs.c b/tools/testing/selftests/arm64/gcs/basic-gcs.c new file mode 100644 index 000000000000..3fb9742342a3 --- /dev/null +++ b/tools/testing/selftests/arm64/gcs/basic-gcs.c @@ -0,0 +1,357 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 ARM Limited. + */ + +#include <limits.h> +#include <stdbool.h> + +#include <linux/prctl.h> + +#include <sys/mman.h> +#include <asm/mman.h> +#include <linux/sched.h> + +#include "kselftest.h" +#include "gcs-util.h" + +/* nolibc doesn't have sysconf(), just hard code the maximum */ +static size_t page_size = 65536; + +static __attribute__((noinline)) void valid_gcs_function(void) +{ + /* Do something the compiler can't optimise out */ + my_syscall1(__NR_prctl, PR_SVE_GET_VL); +} + +static inline int gcs_set_status(unsigned long mode) +{ + bool enabling = mode & PR_SHADOW_STACK_ENABLE; + int ret; + unsigned long new_mode; + + /* + * The prctl takes 1 argument but we need to ensure that the + * other 3 values passed in registers to the syscall are zero + * since the kernel validates them. + */ + ret = my_syscall5(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, mode, + 0, 0, 0); + + if (ret == 0) { + ret = my_syscall5(__NR_prctl, PR_GET_SHADOW_STACK_STATUS, + &new_mode, 0, 0, 0); + if (ret == 0) { + if (new_mode != mode) { + ksft_print_msg("Mode set to %lx not %lx\n", + new_mode, mode); + ret = -EINVAL; + } + } else { + ksft_print_msg("Failed to validate mode: %d\n", ret); + } + + if (enabling != chkfeat_gcs()) { + ksft_print_msg("%senabled by prctl but %senabled in CHKFEAT\n", + enabling ? "" : "not ", + chkfeat_gcs() ? 
"" : "not "); + ret = -EINVAL; + } + } + + return ret; +} + +/* Try to read the status */ +static bool read_status(void) +{ + unsigned long state; + int ret; + + ret = my_syscall5(__NR_prctl, PR_GET_SHADOW_STACK_STATUS, + &state, 0, 0, 0); + if (ret != 0) { + ksft_print_msg("Failed to read state: %d\n", ret); + return false; + } + + return state & PR_SHADOW_STACK_ENABLE; +} + +/* Just a straight enable */ +static bool base_enable(void) +{ + int ret; + + ret = gcs_set_status(PR_SHADOW_STACK_ENABLE); + if (ret) { + ksft_print_msg("PR_SHADOW_STACK_ENABLE failed %d\n", ret); + return false; + } + + return true; +} + +/* Check we can read GCSPR_EL0 when GCS is enabled */ +static bool read_gcspr_el0(void) +{ + unsigned long *gcspr_el0; + + ksft_print_msg("GET GCSPR\n"); + gcspr_el0 = get_gcspr(); + ksft_print_msg("GCSPR_EL0 is %p\n", gcspr_el0); + + return true; +} + +/* Also allow writes to stack */ +static bool enable_writeable(void) +{ + int ret; + + ret = gcs_set_status(PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_WRITE); + if (ret) { + ksft_print_msg("PR_SHADOW_STACK_ENABLE writeable failed: %d\n", ret); + return false; + } + + ret = gcs_set_status(PR_SHADOW_STACK_ENABLE); + if (ret) { + ksft_print_msg("failed to restore plain enable %d\n", ret); + return false; + } + + return true; +} + +/* Also allow writes to stack */ +static bool enable_push_pop(void) +{ + int ret; + + ret = gcs_set_status(PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_PUSH); + if (ret) { + ksft_print_msg("PR_SHADOW_STACK_ENABLE with push failed: %d\n", + ret); + return false; + } + + ret = gcs_set_status(PR_SHADOW_STACK_ENABLE); + if (ret) { + ksft_print_msg("failed to restore plain enable %d\n", ret); + return false; + } + + return true; +} + +/* Enable GCS and allow everything */ +static bool enable_all(void) +{ + int ret; + + ret = gcs_set_status(PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_PUSH | + PR_SHADOW_STACK_WRITE); + if (ret) { + ksft_print_msg("PR_SHADOW_STACK_ENABLE with everything failed: %d\n", + ret); + return false; + } + + ret = gcs_set_status(PR_SHADOW_STACK_ENABLE); + if (ret) { + ksft_print_msg("failed to restore plain enable %d\n", ret); + return false; + } + + return true; +} + +static bool enable_invalid(void) +{ + int ret = gcs_set_status(ULONG_MAX); + if (ret == 0) { + ksft_print_msg("GCS_SET_STATUS %lx succeeded\n", ULONG_MAX); + return false; + } + + return true; +} + +/* Map a GCS */ +static bool map_guarded_stack(void) +{ + int ret; + uint64_t *buf; + uint64_t expected_cap; + int elem; + bool pass = true; + + buf = (void *)my_syscall3(__NR_map_shadow_stack, 0, page_size, + SHADOW_STACK_SET_MARKER | + SHADOW_STACK_SET_TOKEN); + if (buf == MAP_FAILED) { + ksft_print_msg("Failed to map %lu byte GCS: %d\n", + page_size, errno); + return false; + } + ksft_print_msg("Mapped GCS at %p-%p\n", buf, + (void *)((uint64_t)buf + page_size)); + + /* The top of the newly allocated region should be 0 */ + elem = (page_size / sizeof(uint64_t)) - 1; + if (buf[elem]) { + ksft_print_msg("Last entry is 0x%llx not 0x0\n", buf[elem]); + pass = false; + } + + /* Then a valid cap token */ + elem--; + expected_cap = ((uint64_t)buf + page_size - 16); + expected_cap &= GCS_CAP_ADDR_MASK; + expected_cap |= GCS_CAP_VALID_TOKEN; + if (buf[elem] != expected_cap) { + ksft_print_msg("Cap entry is 0x%llx not 0x%llx\n", + buf[elem], expected_cap); + pass = false; + } + ksft_print_msg("cap token is 0x%llx\n", buf[elem]); + + /* The rest should be zeros */ + for (elem = 0; elem < page_size / sizeof(uint64_t) - 2; elem++) { + if (!buf[elem]) 
+ continue; + ksft_print_msg("GCS slot %d is 0x%llx not 0x0\n", + elem, buf[elem]); + pass = false; + } + + ret = munmap(buf, page_size); + if (ret != 0) { + ksft_print_msg("Failed to unmap %ld byte GCS: %d\n", + page_size, errno); + pass = false; + } + + return pass; +} + +/* A fork()ed process can run */ +static bool test_fork(void) +{ + unsigned long child_mode; + int ret, status; + pid_t pid; + bool pass = true; + + pid = fork(); + if (pid == -1) { + ksft_print_msg("fork() failed: %d\n", errno); + pass = false; + goto out; + } + if (pid == 0) { + /* In child, make sure we can call a function, read + * the GCS pointer and status and then exit */ + valid_gcs_function(); + get_gcspr(); + + ret = my_syscall5(__NR_prctl, PR_GET_SHADOW_STACK_STATUS, + &child_mode, 0, 0, 0); + if (ret == 0 && !(child_mode & PR_SHADOW_STACK_ENABLE)) { + ksft_print_msg("GCS not enabled in child\n"); + ret = -EINVAL; + } + + exit(ret); + } + + /* + * In parent, check we can still do function calls then block + * for the child. + */ + valid_gcs_function(); + + ksft_print_msg("Waiting for child %d\n", pid); + + ret = waitpid(pid, &status, 0); + if (ret == -1) { + ksft_print_msg("Failed to wait for child: %d\n", + errno); + return false; + } + + if (!WIFEXITED(status)) { + ksft_print_msg("Child exited due to signal %d\n", + WTERMSIG(status)); + pass = false; + } else { + if (WEXITSTATUS(status)) { + ksft_print_msg("Child exited with status %d\n", + WEXITSTATUS(status)); + pass = false; + } + } + +out: + + return pass; +} + +typedef bool (*gcs_test)(void); + +static struct { + char *name; + gcs_test test; + bool needs_enable; +} tests[] = { + { "read_status", read_status }, + { "base_enable", base_enable, true }, + { "read_gcspr_el0", read_gcspr_el0 }, + { "enable_writeable", enable_writeable, true }, + { "enable_push_pop", enable_push_pop, true }, + { "enable_all", enable_all, true }, + { "enable_invalid", enable_invalid, true }, + { "map_guarded_stack", map_guarded_stack }, + { "fork", test_fork }, +}; + +int main(void) +{ + int i, ret; + unsigned long gcs_mode; + + ksft_print_header(); + + /* + * We don't have getauxval() with nolibc so treat a failure to + * read GCS state as a lack of support and skip. + */ + ret = my_syscall5(__NR_prctl, PR_GET_SHADOW_STACK_STATUS, + &gcs_mode, 0, 0, 0); + if (ret != 0) + ksft_exit_skip("Failed to read GCS state: %d\n", ret); + + if (!(gcs_mode & PR_SHADOW_STACK_ENABLE)) { + gcs_mode = PR_SHADOW_STACK_ENABLE; + ret = my_syscall5(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + gcs_mode, 0, 0, 0); + if (ret != 0) + ksft_exit_fail_msg("Failed to enable GCS: %d\n", ret); + } + + ksft_set_plan(ARRAY_SIZE(tests)); + + for (i = 0; i < ARRAY_SIZE(tests); i++) { + ksft_test_result((*tests[i].test)(), "%s\n", tests[i].name); + } + + /* One last test: disable GCS, we can do this one time */ + my_syscall5(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, 0, 0, 0, 0); + if (ret != 0) + ksft_print_msg("Failed to disable GCS: %d\n", ret); + + ksft_finished(); + + return 0; +} diff --git a/tools/testing/selftests/arm64/gcs/gcs-util.h b/tools/testing/selftests/arm64/gcs/gcs-util.h new file mode 100644 index 000000000000..1ae6864d3f86 --- /dev/null +++ b/tools/testing/selftests/arm64/gcs/gcs-util.h @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2023 ARM Limited. 
+ */ + +#ifndef GCS_UTIL_H +#define GCS_UTIL_H + +#include <stdbool.h> + +#ifndef __NR_map_shadow_stack +#define __NR_map_shadow_stack 453 +#endif + +#ifndef __NR_prctl +#define __NR_prctl 167 +#endif + +/* Shadow Stack/Guarded Control Stack interface */ +#define PR_GET_SHADOW_STACK_STATUS 74 +#define PR_SET_SHADOW_STACK_STATUS 75 +#define PR_LOCK_SHADOW_STACK_STATUS 76 + +# define PR_SHADOW_STACK_ENABLE (1UL << 0) +# define PR_SHADOW_STACK_WRITE (1UL << 1) +# define PR_SHADOW_STACK_PUSH (1UL << 2) + +#define PR_SHADOW_STACK_ALL_MODES \ + PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_WRITE | PR_SHADOW_STACK_PUSH + +#define SHADOW_STACK_SET_TOKEN (1ULL << 0) /* Set up a restore token in the shadow stack */ +#define SHADOW_STACK_SET_MARKER (1ULL << 1) /* Set up a top of stack merker in the shadow stack */ + +#define GCS_CAP_ADDR_MASK (0xfffffffffffff000UL) +#define GCS_CAP_TOKEN_MASK (0x0000000000000fffUL) +#define GCS_CAP_VALID_TOKEN 1 +#define GCS_CAP_IN_PROGRESS_TOKEN 5 + +#define GCS_CAP(x) (((unsigned long)(x) & GCS_CAP_ADDR_MASK) | \ + GCS_CAP_VALID_TOKEN) + +static inline unsigned long *get_gcspr(void) +{ + unsigned long *gcspr; + + asm volatile( + "mrs %0, S3_3_C2_C5_1" + : "=r" (gcspr) + : + : "cc"); + + return gcspr; +} + +static inline void __attribute__((always_inline)) gcsss1(unsigned long *Xt) +{ + asm volatile ( + "sys #3, C7, C7, #2, %0\n" + : + : "rZ" (Xt) + : "memory"); +} + +static inline unsigned long __attribute__((always_inline)) *gcsss2(void) +{ + unsigned long *Xt; + + asm volatile( + "SYSL %0, #3, C7, C7, #3\n" + : "=r" (Xt) + : + : "memory"); + + return Xt; +} + +static inline bool chkfeat_gcs(void) +{ + register long val __asm__ ("x16") = 1; + + /* CHKFEAT x16 */ + asm volatile( + "hint #0x28\n" + : "=r" (val) + : "r" (val)); + + return val != 1; +} + +#endif
Mark Brown broonie@kernel.org writes:
This test program just covers the basic GCS ABI, covering aspects of the ABI as standalone features without attempting to integrate things.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
tools/testing/selftests/arm64/Makefile | 2 +- tools/testing/selftests/arm64/gcs/.gitignore | 1 + tools/testing/selftests/arm64/gcs/Makefile | 18 ++ tools/testing/selftests/arm64/gcs/basic-gcs.c | 357 ++++++++++++++++++++++++++ tools/testing/selftests/arm64/gcs/gcs-util.h | 90 +++++++ 5 files changed, 467 insertions(+), 1 deletion(-)
The basic-gcs test passes on my FVP setup:
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
There are things like threads which nolibc struggles with which we want to add coverage for, and the ABI allows us to test most of these even if libc itself does not understand GCS so add a test application built using the system libc.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/gcs/.gitignore | 1 + tools/testing/selftests/arm64/gcs/Makefile | 4 +- tools/testing/selftests/arm64/gcs/gcs-util.h | 10 + tools/testing/selftests/arm64/gcs/libc-gcs.c | 736 +++++++++++++++++++++++++++ 4 files changed, 750 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/gcs/.gitignore b/tools/testing/selftests/arm64/gcs/.gitignore index 0e5e695ecba5..5810c4a163d4 100644 --- a/tools/testing/selftests/arm64/gcs/.gitignore +++ b/tools/testing/selftests/arm64/gcs/.gitignore @@ -1 +1,2 @@ basic-gcs +libc-gcs diff --git a/tools/testing/selftests/arm64/gcs/Makefile b/tools/testing/selftests/arm64/gcs/Makefile index 61a30f483429..a8fdf21e9a47 100644 --- a/tools/testing/selftests/arm64/gcs/Makefile +++ b/tools/testing/selftests/arm64/gcs/Makefile @@ -6,7 +6,9 @@ # nolibc. #
-TEST_GEN_PROGS := basic-gcs +TEST_GEN_PROGS := basic-gcs libc-gcs + +LDLIBS+=-lpthread
include ../../lib.mk
diff --git a/tools/testing/selftests/arm64/gcs/gcs-util.h b/tools/testing/selftests/arm64/gcs/gcs-util.h index 1ae6864d3f86..8ac37dc3c78e 100644 --- a/tools/testing/selftests/arm64/gcs/gcs-util.h +++ b/tools/testing/selftests/arm64/gcs/gcs-util.h @@ -16,6 +16,16 @@ #define __NR_prctl 167 #endif
+#ifndef NT_ARM_GCS +#define NT_ARM_GCS 0x40f + +struct user_gcs { + __u64 features_enabled; + __u64 features_locked; + __u64 gcspr_el0; +}; +#endif + /* Shadow Stack/Guarded Control Stack interface */ #define PR_GET_SHADOW_STACK_STATUS 74 #define PR_SET_SHADOW_STACK_STATUS 75 diff --git a/tools/testing/selftests/arm64/gcs/libc-gcs.c b/tools/testing/selftests/arm64/gcs/libc-gcs.c new file mode 100644 index 000000000000..937f8bee7bdd --- /dev/null +++ b/tools/testing/selftests/arm64/gcs/libc-gcs.c @@ -0,0 +1,736 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 ARM Limited. + */ + +#define _GNU_SOURCE + +#include <pthread.h> +#include <stdbool.h> + +#include <sys/auxv.h> +#include <sys/mman.h> +#include <sys/prctl.h> +#include <sys/ptrace.h> +#include <sys/uio.h> + +#include <asm/hwcap.h> +#include <asm/mman.h> + +#include <linux/compiler.h> + +#include "kselftest_harness.h" + +#include "gcs-util.h" + +#define my_syscall2(num, arg1, arg2) \ +({ \ + register long _num __asm__ ("x8") = (num); \ + register long _arg1 __asm__ ("x0") = (long)(arg1); \ + register long _arg2 __asm__ ("x1") = (long)(arg2); \ + register long _arg3 __asm__ ("x2") = 0; \ + register long _arg4 __asm__ ("x3") = 0; \ + register long _arg5 __asm__ ("x4") = 0; \ + \ + __asm__ volatile ( \ + "svc #0\n" \ + : "=r"(_arg1) \ + : "r"(_arg1), "r"(_arg2), \ + "r"(_arg3), "r"(_arg4), \ + "r"(_arg5), "r"(_num) \ + : "memory", "cc" \ + ); \ + _arg1; \ +}) + +static noinline void gcs_recurse(int depth) +{ + if (depth) + gcs_recurse(depth - 1); + + /* Prevent tail call optimization so we actually recurse */ + asm volatile("dsb sy" : : : "memory"); +} + +/* Smoke test that a function call and return works*/ +TEST(can_call_function) +{ + gcs_recurse(0); +} + +static void *gcs_test_thread(void *arg) +{ + int ret; + unsigned long mode; + + /* + * Some libcs don't seem to fill unused arguments with 0 but + * the kernel validates this so we supply all 5 arguments. + */ + ret = prctl(PR_GET_SHADOW_STACK_STATUS, &mode, 0, 0, 0); + if (ret != 0) { + ksft_print_msg("PR_GET_SHADOW_STACK_STATUS failed: %d\n", ret); + return NULL; + } + + if (!(mode & PR_SHADOW_STACK_ENABLE)) { + ksft_print_msg("GCS not enabled in thread, mode is %u\n", + mode); + return NULL; + } + + /* Just in case... */ + gcs_recurse(0); + + /* Use a non-NULL value to indicate a pass */ + return &gcs_test_thread; +} + +/* Verify that if we start a new thread it has GCS enabled */ +TEST(gcs_enabled_thread) +{ + pthread_t thread; + void *thread_ret; + int ret; + + ret = pthread_create(&thread, NULL, gcs_test_thread, NULL); + ASSERT_TRUE(ret == 0); + if (ret != 0) + return; + + ret = pthread_join(thread, &thread_ret); + ASSERT_TRUE(ret == 0); + if (ret != 0) + return; + + ASSERT_TRUE(thread_ret != NULL); +} + +/* Read the GCS until we find the terminator */ +TEST(gcs_find_terminator) +{ + unsigned long *gcs, *cur; + + gcs = get_gcspr(); + cur = gcs; + while (*cur) + cur++; + + ksft_print_msg("GCS in use from %p-%p\n", gcs, cur); + + /* + * We should have at least whatever called into this test so + * the two pointer should differ. + */ + ASSERT_TRUE(gcs != cur); +} + +/* + * We can access a GCS via ptrace + * + * This could usefully have a fixture but note that each test is + * fork()ed into a new child whcih causes issues. Might be better to + * lift at least some of this out into a separate, non-harness, test + * program. 
+ */ +TEST(ptrace_read_write) +{ + pid_t child, pid; + int ret, status; + siginfo_t si; + uint64_t val, rval, gcspr; + struct user_gcs child_gcs; + struct iovec iov, local_iov, remote_iov; + + child = fork(); + if (child == -1) { + ksft_print_msg("fork() failed: %d (%s)\n", + errno, strerror(errno)); + ASSERT_NE(child, -1); + } + + if (child == 0) { + /* + * In child, make sure there's something on the stack and + * ask to be traced. + */ + gcs_recurse(0); + if (ptrace(PTRACE_TRACEME, -1, NULL, NULL)) + ksft_exit_fail_msg("PTRACE_TRACEME", strerror(errno)); + + if (raise(SIGSTOP)) + ksft_exit_fail_msg("raise(SIGSTOP)", strerror(errno)); + + return; + } + + ksft_print_msg("Child: %d\n", child); + + /* Attach to the child */ + while (1) { + int sig; + + pid = wait(&status); + if (pid == -1) { + ksft_print_msg("wait() failed: %s", + strerror(errno)); + goto error; + } + + /* + * This should never happen but it's hard to flag in + * the framework. + */ + if (pid != child) + continue; + + if (WIFEXITED(status) || WIFSIGNALED(status)) + ksft_exit_fail_msg("Child died unexpectedly\n"); + + if (!WIFSTOPPED(status)) + goto error; + + sig = WSTOPSIG(status); + + if (ptrace(PTRACE_GETSIGINFO, pid, NULL, &si)) { + if (errno == ESRCH) { + ASSERT_NE(errno, ESRCH); + return; + } + + if (errno == EINVAL) { + sig = 0; /* bust group-stop */ + goto cont; + } + + ksft_print_msg("PTRACE_GETSIGINFO: %s\n", + strerror(errno)); + goto error; + } + + if (sig == SIGSTOP && si.si_code == SI_TKILL && + si.si_pid == pid) + break; + + cont: + if (ptrace(PTRACE_CONT, pid, NULL, sig)) { + if (errno == ESRCH) { + ASSERT_NE(errno, ESRCH); + return; + } + + ksft_print_msg("PTRACE_CONT: %s\n", strerror(errno)); + goto error; + } + } + + /* Where is the child GCS? */ + iov.iov_base = &child_gcs; + iov.iov_len = sizeof(child_gcs); + ret = ptrace(PTRACE_GETREGSET, child, NT_ARM_GCS, &iov); + if (ret != 0) { + ksft_print_msg("Failed to read child GCS state: %s (%d)\n", + strerror(errno), errno); + goto error; + } + + /* We should have inherited GCS over fork(), confirm */ + if (!(child_gcs.features_enabled & PR_SHADOW_STACK_ENABLE)) { + ASSERT_TRUE(child_gcs.features_enabled & + PR_SHADOW_STACK_ENABLE); + goto error; + } + + gcspr = child_gcs.gcspr_el0; + ksft_print_msg("Child GCSPR 0x%lx, flags %x, locked %x\n", + gcspr, child_gcs.features_enabled, + child_gcs.features_locked); + + /* Ideally we'd cross check with the child memory map */ + + errno = 0; + val = ptrace(PTRACE_PEEKDATA, child, (void *)gcspr, NULL); + ret = errno; + if (ret != 0) + ksft_print_msg("PTRACE_PEEKDATA failed: %s (%d)\n", + strerror(ret), ret); + EXPECT_EQ(ret, 0); + + /* The child should be in a function, the GCSPR shouldn't be 0 */ + EXPECT_NE(val, 0); + + /* Same thing via process_vm_readv() */ + local_iov.iov_base = &rval; + local_iov.iov_len = sizeof(rval); + remote_iov.iov_base = (void *)gcspr; + remote_iov.iov_len = sizeof(rval); + ret = process_vm_readv(child, &local_iov, 1, &remote_iov, 1, 0); + if (ret == -1) + ksft_print_msg("process_vm_readv() failed: %s (%d)\n", + strerror(errno), errno); + EXPECT_EQ(ret, sizeof(rval)); + EXPECT_EQ(val, rval); + + /* Write data via a peek */ + ret = ptrace(PTRACE_POKEDATA, child, (void *)gcspr, NULL); + if (ret == -1) + ksft_print_msg("PTRACE_POKEDATA failed: %s (%d)\n", + strerror(errno), errno); + EXPECT_EQ(ret, 0); + EXPECT_EQ(0, ptrace(PTRACE_PEEKDATA, child, (void *)gcspr, NULL)); + + /* Restore what we had before */ + ret = ptrace(PTRACE_POKEDATA, child, (void *)gcspr, val); + if (ret == -1) + 
ksft_print_msg("PTRACE_POKEDATA failed: %s (%d)\n", + strerror(errno), errno); + EXPECT_EQ(ret, 0); + EXPECT_EQ(val, ptrace(PTRACE_PEEKDATA, child, (void *)gcspr, NULL)); + + /* That's all, folks */ + kill(child, SIGKILL); + return; + +error: + kill(child, SIGKILL); + ASSERT_FALSE(true); +} + +FIXTURE(map_gcs) +{ + unsigned long *stack; +}; + +FIXTURE_VARIANT(map_gcs) +{ + size_t stack_size; + unsigned long flags; +}; + +FIXTURE_VARIANT_ADD(map_gcs, s2k_cap_marker) +{ + .stack_size = 2 * 1024, + .flags = SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s2k_cap) +{ + .stack_size = 2 * 1024, + .flags = SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s2k_marker) +{ + .stack_size = 2 * 1024, + .flags = SHADOW_STACK_SET_MARKER, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s2k) +{ + .stack_size = 2 * 1024, + .flags = 0, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s4k_cap_marker) +{ + .stack_size = 4 * 1024, + .flags = SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s4k_cap) +{ + .stack_size = 4 * 1024, + .flags = SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s3k_marker) +{ + .stack_size = 4 * 1024, + .flags = SHADOW_STACK_SET_MARKER, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s4k) +{ + .stack_size = 4 * 1024, + .flags = 0, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s16k_cap_marker) +{ + .stack_size = 16 * 1024, + .flags = SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s16k_cap) +{ + .stack_size = 16 * 1024, + .flags = SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s16k_marker) +{ + .stack_size = 16 * 1024, + .flags = SHADOW_STACK_SET_MARKER, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s16k) +{ + .stack_size = 16 * 1024, + .flags = 0, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s64k_cap_marker) +{ + .stack_size = 64 * 1024, + .flags = SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s64k_cap) +{ + .stack_size = 64 * 1024, + .flags = SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s64k_marker) +{ + .stack_size = 64 * 1024, + .flags = SHADOW_STACK_SET_MARKER, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s64k) +{ + .stack_size = 64 * 1024, + .flags = 0, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s128k_cap_marker) +{ + .stack_size = 128 * 1024, + .flags = SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s128k_cap) +{ + .stack_size = 128 * 1024, + .flags = SHADOW_STACK_SET_TOKEN, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s128k_marker) +{ + .stack_size = 128 * 1024, + .flags = SHADOW_STACK_SET_MARKER, +}; + +FIXTURE_VARIANT_ADD(map_gcs, s128k) +{ + .stack_size = 128 * 1024, + .flags = 0, +}; + +FIXTURE_SETUP(map_gcs) +{ + self->stack = (void *)syscall(__NR_map_shadow_stack, 0, + variant->stack_size, + variant->flags); + ASSERT_FALSE(self->stack == MAP_FAILED); + ksft_print_msg("Allocated stack from %p-%p\n", self->stack, + (unsigned long)self->stack + variant->stack_size); +} + +FIXTURE_TEARDOWN(map_gcs) +{ + int ret; + + if (self->stack != MAP_FAILED) { + ret = munmap(self->stack, variant->stack_size); + ASSERT_EQ(ret, 0); + } +} + +/* The stack has a cap token */ +TEST_F(map_gcs, stack_capped) +{ + unsigned long *stack = self->stack; + size_t cap_index; + + cap_index = (variant->stack_size / sizeof(unsigned long)); + + switch (variant->flags & (SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN)) { + case SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN: + cap_index -= 2; + break; + case SHADOW_STACK_SET_TOKEN: + cap_index -= 1; + 
break; + case SHADOW_STACK_SET_MARKER: + case 0: + /* No cap, no test */ + return; + } + + ASSERT_EQ(stack[cap_index], GCS_CAP(&stack[cap_index])); +} + +/* The top of the stack is 0 */ +TEST_F(map_gcs, stack_terminated) +{ + unsigned long *stack = self->stack; + size_t term_index; + + if (!(variant->flags & SHADOW_STACK_SET_MARKER)) + return; + + term_index = (variant->stack_size / sizeof(unsigned long)) - 1; + + ASSERT_EQ(stack[term_index], 0); +} + +/* Writes should fault */ +TEST_F_SIGNAL(map_gcs, not_writeable, SIGSEGV) +{ + self->stack[0] = 0; +} + +/* Put it all together, we can safely switch to and from the stack */ +TEST_F(map_gcs, stack_switch) +{ + size_t cap_index; + cap_index = (variant->stack_size / sizeof(unsigned long)); + unsigned long *orig_gcspr_el0, *pivot_gcspr_el0; + + /* Skip over the stack terminator and point at the cap */ + switch (variant->flags & (SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN)) { + case SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN: + cap_index -= 2; + break; + case SHADOW_STACK_SET_TOKEN: + cap_index -= 1; + break; + case SHADOW_STACK_SET_MARKER: + case 0: + /* No cap, no test */ + return; + } + pivot_gcspr_el0 = &self->stack[cap_index]; + + /* Pivot to the new GCS */ + ksft_print_msg("Pivoting to %p from %p, target has value 0x%lx\n", + pivot_gcspr_el0, get_gcspr(), + *pivot_gcspr_el0); + gcsss1(pivot_gcspr_el0); + orig_gcspr_el0 = gcsss2(); + ksft_print_msg("Pivoted to %p from %p, target has value 0x%lx\n", + get_gcspr(), orig_gcspr_el0, + *pivot_gcspr_el0); + + ksft_print_msg("Pivoted, GCSPR_EL0 now %p\n", get_gcspr()); + + /* New GCS must be in the new buffer */ + ASSERT_TRUE((unsigned long)get_gcspr() > (unsigned long)self->stack); + ASSERT_TRUE((unsigned long)get_gcspr() <= + (unsigned long)self->stack + variant->stack_size); + + /* We should be able to use all but 2 slots of the new stack */ + ksft_print_msg("Recursing %d levels\n", cap_index - 1); + gcs_recurse(cap_index - 1); + + /* Pivot back to the original GCS */ + gcsss1(orig_gcspr_el0); + pivot_gcspr_el0 = gcsss2(); + + gcs_recurse(0); + ksft_print_msg("Pivoted back to GCSPR_EL0 0x%lx\n", get_gcspr()); +} + +/* We fault if we try to go beyond the end of the stack */ +TEST_F_SIGNAL(map_gcs, stack_overflow, SIGSEGV) +{ + size_t cap_index; + cap_index = (variant->stack_size / sizeof(unsigned long)); + unsigned long *orig_gcspr_el0, *pivot_gcspr_el0; + + /* Skip over the stack terminator and point at the cap */ + switch (variant->flags & (SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN)) { + case SHADOW_STACK_SET_MARKER | SHADOW_STACK_SET_TOKEN: + cap_index -= 2; + break; + case SHADOW_STACK_SET_TOKEN: + cap_index -= 1; + break; + case SHADOW_STACK_SET_MARKER: + case 0: + /* No cap, no test but we need to SEGV to avoid a false fail */ + orig_gcspr_el0 = get_gcspr(); + *orig_gcspr_el0 = 0; + return; + } + pivot_gcspr_el0 = &self->stack[cap_index]; + + /* Pivot to the new GCS */ + ksft_print_msg("Pivoting to %p from %p, target has value 0x%lx\n", + pivot_gcspr_el0, get_gcspr(), + *pivot_gcspr_el0); + gcsss1(pivot_gcspr_el0); + orig_gcspr_el0 = gcsss2(); + ksft_print_msg("Pivoted to %p from %p, target has value 0x%lx\n", + pivot_gcspr_el0, orig_gcspr_el0, + *pivot_gcspr_el0); + + ksft_print_msg("Pivoted, GCSPR_EL0 now %p\n", get_gcspr()); + + /* New GCS must be in the new buffer */ + ASSERT_TRUE((unsigned long)get_gcspr() > (unsigned long)self->stack); + ASSERT_TRUE((unsigned long)get_gcspr() <= + (unsigned long)self->stack + variant->stack_size); + + /* Now try to recurse, we 
should fault doing this. */ + ksft_print_msg("Recursing %d levels...\n", cap_index + 1); + gcs_recurse(cap_index + 1); + ksft_print_msg("...done\n"); + + /* Clean up properly to try to guard against spurious passes. */ + gcsss1(orig_gcspr_el0); + pivot_gcspr_el0 = gcsss2(); + ksft_print_msg("Pivoted back to GCSPR_EL0 0x%lx\n", get_gcspr()); +} + +FIXTURE(map_invalid_gcs) +{ +}; + +FIXTURE_VARIANT(map_invalid_gcs) +{ + size_t stack_size; +}; + +FIXTURE_SETUP(map_invalid_gcs) +{ +} + +FIXTURE_TEARDOWN(map_invalid_gcs) +{ +} + +/* GCS must be larger than 16 bytes */ +FIXTURE_VARIANT_ADD(map_invalid_gcs, too_small) +{ + .stack_size = 8, +}; + +/* GCS size must be 16 byte aligned */ +FIXTURE_VARIANT_ADD(map_invalid_gcs, unligned_1) { .stack_size = 1024 + 1 }; +FIXTURE_VARIANT_ADD(map_invalid_gcs, unligned_2) { .stack_size = 1024 + 2 }; +FIXTURE_VARIANT_ADD(map_invalid_gcs, unligned_3) { .stack_size = 1024 + 3 }; +FIXTURE_VARIANT_ADD(map_invalid_gcs, unligned_4) { .stack_size = 1024 + 4 }; +FIXTURE_VARIANT_ADD(map_invalid_gcs, unligned_5) { .stack_size = 1024 + 5 }; +FIXTURE_VARIANT_ADD(map_invalid_gcs, unligned_6) { .stack_size = 1024 + 6 }; +FIXTURE_VARIANT_ADD(map_invalid_gcs, unligned_7) { .stack_size = 1024 + 7 }; + +TEST_F(map_invalid_gcs, do_map) +{ + void *stack; + + stack = (void *)syscall(__NR_map_shadow_stack, 0, + variant->stack_size, 0); + ASSERT_TRUE(stack == MAP_FAILED); + if (stack != MAP_FAILED) + munmap(stack, variant->stack_size); +} + +FIXTURE(invalid_mprotect) +{ + unsigned long *stack; + size_t stack_size; +}; + +FIXTURE_VARIANT(invalid_mprotect) +{ + unsigned long flags; +}; + +FIXTURE_SETUP(invalid_mprotect) +{ + self->stack_size = sysconf(_SC_PAGE_SIZE); + self->stack = (void *)syscall(__NR_map_shadow_stack, 0, + self->stack_size, 0); + ASSERT_FALSE(self->stack == MAP_FAILED); + ksft_print_msg("Allocated stack from %p-%p\n", self->stack, + (unsigned long)self->stack + self->stack_size); +} + +FIXTURE_TEARDOWN(invalid_mprotect) +{ + int ret; + + if (self->stack != MAP_FAILED) { + ret = munmap(self->stack, self->stack_size); + ASSERT_EQ(ret, 0); + } +} + +FIXTURE_VARIANT_ADD(invalid_mprotect, exec) +{ + .flags = PROT_EXEC, +}; + +FIXTURE_VARIANT_ADD(invalid_mprotect, bti) +{ + .flags = PROT_BTI, +}; + +FIXTURE_VARIANT_ADD(invalid_mprotect, exec_bti) +{ + .flags = PROT_EXEC | PROT_BTI, +}; + +TEST_F(invalid_mprotect, do_map) +{ + int ret; + + ret = mprotect(self->stack, self->stack_size, variant->flags); + ASSERT_EQ(ret, -1); +} + +TEST_F(invalid_mprotect, do_map_read) +{ + int ret; + + ret = mprotect(self->stack, self->stack_size, + variant->flags | PROT_READ); + ASSERT_EQ(ret, -1); +} + +int main(int argc, char **argv) +{ + unsigned long gcs_mode; + int ret; + + if (!(getauxval(AT_HWCAP2) & HWCAP2_GCS)) + ksft_exit_skip("SKIP GCS not supported\n"); + + /* + * Force shadow stacks on, our tests *should* be fine with or + * without libc support and with or without this having ended + * up tagged for GCS and enabled by the dynamic linker. We + * can't use the libc prctl() function since we can't return + * from enabling the stack. 
+ */ + ret = my_syscall2(__NR_prctl, PR_GET_SHADOW_STACK_STATUS, &gcs_mode); + if (ret) { + ksft_print_msg("Failed to read GCS state: %d\n", ret); + return EXIT_FAILURE; + } + + if (!(gcs_mode & PR_SHADOW_STACK_ENABLE)) { + gcs_mode = PR_SHADOW_STACK_ENABLE; + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + gcs_mode); + if (ret) { + ksft_print_msg("Failed to configure GCS: %d\n", ret); + return EXIT_FAILURE; + } + } + + /* Avoid returning in case libc doesn't understand GCS */ + exit(test_harness_run(argc, argv)); +}
Mark Brown broonie@kernel.org writes:
There are things like threads which nolibc struggles with which we want to add coverage for, and the ABI allows us to test most of these even if libc itself does not understand GCS so add a test application built using the system libc.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
tools/testing/selftests/arm64/gcs/.gitignore | 1 + tools/testing/selftests/arm64/gcs/Makefile | 4 +- tools/testing/selftests/arm64/gcs/gcs-util.h | 10 + tools/testing/selftests/arm64/gcs/libc-gcs.c | 736 +++++++++++++++++++++++++++ 4 files changed, 750 insertions(+), 1 deletion(-)
The libc-gcs test passes on my FVP setup:
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Verify that we can lock individual GCS mode bits, that other modes aren't affected and as a side effect also that every combination of modes can be enabled.
Normally the inability to reenable GCS after disabling it would be an issue with testing but fortunately the kselftest_harness runs each test within a fork()ed child. This can be inconvenient for some kinds of testing but here it means that each test is in a separate thread and therefore won't be affected by other tests in the suite.
Once we get toolchains with support for enabling GCS by default we will need to take care to not do that in the build system but there are no such toolchains yet so it is not yet an issue.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/gcs/.gitignore | 1 + tools/testing/selftests/arm64/gcs/Makefile | 2 +- tools/testing/selftests/arm64/gcs/gcs-locking.c | 200 ++++++++++++++++++++++++ 3 files changed, 202 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/gcs/.gitignore b/tools/testing/selftests/arm64/gcs/.gitignore index 5810c4a163d4..0c86f53f68ad 100644 --- a/tools/testing/selftests/arm64/gcs/.gitignore +++ b/tools/testing/selftests/arm64/gcs/.gitignore @@ -1,2 +1,3 @@ basic-gcs libc-gcs +gcs-locking diff --git a/tools/testing/selftests/arm64/gcs/Makefile b/tools/testing/selftests/arm64/gcs/Makefile index a8fdf21e9a47..2173d6275956 100644 --- a/tools/testing/selftests/arm64/gcs/Makefile +++ b/tools/testing/selftests/arm64/gcs/Makefile @@ -6,7 +6,7 @@ # nolibc. #
-TEST_GEN_PROGS := basic-gcs libc-gcs +TEST_GEN_PROGS := basic-gcs libc-gcs gcs-locking
LDLIBS+=-lpthread
diff --git a/tools/testing/selftests/arm64/gcs/gcs-locking.c b/tools/testing/selftests/arm64/gcs/gcs-locking.c new file mode 100644 index 000000000000..f6a73254317e --- /dev/null +++ b/tools/testing/selftests/arm64/gcs/gcs-locking.c @@ -0,0 +1,200 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2023 ARM Limited. + * + * Tests for GCS mode locking. These tests rely on both having GCS + * unconfigured on entry and on the kselftest harness running each + * test in a fork()ed process which will have it's own mode. + */ + +#include <limits.h> + +#include <sys/auxv.h> +#include <sys/prctl.h> + +#include <asm/hwcap.h> + +#include "kselftest_harness.h" + +#include "gcs-util.h" + +#define my_syscall2(num, arg1, arg2) \ +({ \ + register long _num __asm__ ("x8") = (num); \ + register long _arg1 __asm__ ("x0") = (long)(arg1); \ + register long _arg2 __asm__ ("x1") = (long)(arg2); \ + register long _arg3 __asm__ ("x2") = 0; \ + register long _arg4 __asm__ ("x3") = 0; \ + register long _arg5 __asm__ ("x4") = 0; \ + \ + __asm__ volatile ( \ + "svc #0\n" \ + : "=r"(_arg1) \ + : "r"(_arg1), "r"(_arg2), \ + "r"(_arg3), "r"(_arg4), \ + "r"(_arg5), "r"(_num) \ + : "memory", "cc" \ + ); \ + _arg1; \ +}) + +/* No mode bits are rejected for locking */ +TEST(lock_all_modes) +{ + int ret; + + ret = prctl(PR_LOCK_SHADOW_STACK_STATUS, ULONG_MAX, 0, 0, 0); + ASSERT_EQ(ret, 0); +} + +FIXTURE(valid_modes) +{ +}; + +FIXTURE_VARIANT(valid_modes) +{ + unsigned long mode; +}; + +FIXTURE_VARIANT_ADD(valid_modes, enable) +{ + .mode = PR_SHADOW_STACK_ENABLE, +}; + +FIXTURE_VARIANT_ADD(valid_modes, enable_write) +{ + .mode = PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_WRITE, +}; + +FIXTURE_VARIANT_ADD(valid_modes, enable_push) +{ + .mode = PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_PUSH, +}; + +FIXTURE_VARIANT_ADD(valid_modes, enable_write_push) +{ + .mode = PR_SHADOW_STACK_ENABLE | PR_SHADOW_STACK_WRITE | + PR_SHADOW_STACK_PUSH, +}; + +FIXTURE_SETUP(valid_modes) +{ +} + +FIXTURE_TEARDOWN(valid_modes) +{ +} + +/* We can set the mode at all */ +TEST_F(valid_modes, set) +{ + int ret; + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + variant->mode); + ASSERT_EQ(ret, 0); + + _exit(0); +} + +/* Enabling, locking then disabling is rejected */ +TEST_F(valid_modes, enable_lock_disable) +{ + unsigned long mode; + int ret; + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + variant->mode); + ASSERT_EQ(ret, 0); + + ret = prctl(PR_GET_SHADOW_STACK_STATUS, &mode, 0, 0, 0); + ASSERT_EQ(ret, 0); + ASSERT_EQ(mode, variant->mode); + + ret = prctl(PR_LOCK_SHADOW_STACK_STATUS, variant->mode, 0, 0, 0); + ASSERT_EQ(ret, 0); + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, 0); + ASSERT_EQ(ret, -EBUSY); + + _exit(0); +} + +/* Locking then enabling is rejected */ +TEST_F(valid_modes, lock_enable) +{ + unsigned long mode; + int ret; + + ret = prctl(PR_LOCK_SHADOW_STACK_STATUS, variant->mode, 0, 0, 0); + ASSERT_EQ(ret, 0); + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + variant->mode); + ASSERT_EQ(ret, -EBUSY); + + ret = prctl(PR_GET_SHADOW_STACK_STATUS, &mode, 0, 0, 0); + ASSERT_EQ(ret, 0); + ASSERT_EQ(mode, 0); + + _exit(0); +} + +/* Locking then changing other modes is fine */ +TEST_F(valid_modes, lock_enable_disable_others) +{ + unsigned long mode; + int ret; + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + variant->mode); + ASSERT_EQ(ret, 0); + + ret = prctl(PR_GET_SHADOW_STACK_STATUS, &mode, 0, 0, 0); + ASSERT_EQ(ret, 0); + ASSERT_EQ(mode, variant->mode); + + ret 
= prctl(PR_LOCK_SHADOW_STACK_STATUS, variant->mode, 0, 0, 0); + ASSERT_EQ(ret, 0); + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + PR_SHADOW_STACK_ALL_MODES); + ASSERT_EQ(ret, 0); + + ret = prctl(PR_GET_SHADOW_STACK_STATUS, &mode, 0, 0, 0); + ASSERT_EQ(ret, 0); + ASSERT_EQ(mode, PR_SHADOW_STACK_ALL_MODES); + + + ret = my_syscall2(__NR_prctl, PR_SET_SHADOW_STACK_STATUS, + variant->mode); + ASSERT_EQ(ret, 0); + + ret = prctl(PR_GET_SHADOW_STACK_STATUS, &mode, 0, 0, 0); + ASSERT_EQ(ret, 0); + ASSERT_EQ(mode, variant->mode); + + _exit(0); +} + +int main(int argc, char **argv) +{ + unsigned long mode; + int ret; + + if (!(getauxval(AT_HWCAP2) & HWCAP2_GCS)) + ksft_exit_skip("SKIP GCS not supported\n"); + + ret = prctl(PR_GET_SHADOW_STACK_STATUS, &mode, 0, 0, 0); + if (ret) { + ksft_print_msg("Failed to read GCS state: %d\n", ret); + return EXIT_FAILURE; + } + + if (mode & PR_SHADOW_STACK_ENABLE) { + ksft_print_msg("GCS was enabled, test unsupported\n"); + return KSFT_SKIP; + } + + return test_harness_run(argc, argv); +}
Mark Brown broonie@kernel.org writes:
Verify that we can lock individual GCS mode bits, that other modes aren't affected and as a side effect also that every combination of modes can be enabled.
Normally the inability to reenable GCS after disabling it would be an issue with testing but fortunately the kselftest_harness runs each test within a fork()ed child. This can be inconvenient for some kinds of testing but here it means that each test is in a separate thread and therefore won't be affected by other tests in the suite.
Once we get toolchains with support for enabling GCS by default we will need to take care to not do that in the build system but there are no such toolchains yet so it is not yet an issue.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
tools/testing/selftests/arm64/gcs/.gitignore | 1 + tools/testing/selftests/arm64/gcs/Makefile | 2 +- tools/testing/selftests/arm64/gcs/gcs-locking.c | 200 ++++++++++++++++++++++++ 3 files changed, 202 insertions(+), 1 deletion(-)
The gcs-locking test passes on my FVP setup:
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Do some testing of the signal handling for GCS, checking that a GCS frame has the expected information in it and that the expected signals are delivered with invalid operations.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/signal/.gitignore | 1 + .../selftests/arm64/signal/test_signals_utils.h | 10 +++ .../arm64/signal/testcases/gcs_exception_fault.c | 62 +++++++++++++++ .../selftests/arm64/signal/testcases/gcs_frame.c | 88 ++++++++++++++++++++++ .../arm64/signal/testcases/gcs_write_fault.c | 67 ++++++++++++++++ 5 files changed, 228 insertions(+)
diff --git a/tools/testing/selftests/arm64/signal/.gitignore b/tools/testing/selftests/arm64/signal/.gitignore index 1ce5b5eac386..75d691c13207 100644 --- a/tools/testing/selftests/arm64/signal/.gitignore +++ b/tools/testing/selftests/arm64/signal/.gitignore @@ -2,6 +2,7 @@ mangle_* fake_sigreturn_* fpmr_* +gcs_* sme_* ssve_* sve_* diff --git a/tools/testing/selftests/arm64/signal/test_signals_utils.h b/tools/testing/selftests/arm64/signal/test_signals_utils.h index 1e80808ee105..36fc12b3cd60 100644 --- a/tools/testing/selftests/arm64/signal/test_signals_utils.h +++ b/tools/testing/selftests/arm64/signal/test_signals_utils.h @@ -6,6 +6,7 @@
#include <assert.h> #include <stdio.h> +#include <stdint.h> #include <string.h>
#include <linux/compiler.h> @@ -47,6 +48,15 @@ void test_result(struct tdescr *td); _arg1; \ })
+static inline __attribute__((always_inline)) uint64_t get_gcspr_el0(void) +{ + uint64_t val; + + asm volatile("mrs %0, S3_3_C2_C5_1" : "=r" (val)); + + return val; +} + static inline bool feats_ok(struct tdescr *td) { if (td->feats_incompatible & td->feats_supported) diff --git a/tools/testing/selftests/arm64/signal/testcases/gcs_exception_fault.c b/tools/testing/selftests/arm64/signal/testcases/gcs_exception_fault.c new file mode 100644 index 000000000000..6228448b2ae7 --- /dev/null +++ b/tools/testing/selftests/arm64/signal/testcases/gcs_exception_fault.c @@ -0,0 +1,62 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2023 ARM Limited + */ + +#include <errno.h> +#include <signal.h> +#include <unistd.h> + +#include <sys/mman.h> +#include <sys/prctl.h> + +#include "test_signals_utils.h" +#include "testcases.h" + +/* + * We should get this from asm/siginfo.h but the testsuite is being + * clever with redefining siginfo_t. + */ +#ifndef SEGV_CPERR +#define SEGV_CPERR 10 +#endif + +static inline void gcsss1(uint64_t Xt) +{ + asm volatile ( + "sys #3, C7, C7, #2, %0\n" + : + : "rZ" (Xt) + : "memory"); +} + +static int gcs_op_fault_trigger(struct tdescr *td) +{ + /* + * The slot below our current GCS should be in a valid GCS but + * must not have a valid cap in it. + */ + gcsss1(get_gcspr_el0() - 8); + + return 0; +} + +static int gcs_op_fault_signal(struct tdescr *td, siginfo_t *si, + ucontext_t *uc) +{ + ASSERT_GOOD_CONTEXT(uc); + + return 1; +} + +struct tdescr tde = { + .name = "Invalid GCS operation", + .descr = "An invalid GCS operation generates the expected signal", + .feats_required = FEAT_GCS, + .timeout = 3, + .sig_ok = SIGSEGV, + .sig_ok_code = SEGV_CPERR, + .sanity_disabled = true, + .trigger = gcs_op_fault_trigger, + .run = gcs_op_fault_signal, +}; diff --git a/tools/testing/selftests/arm64/signal/testcases/gcs_frame.c b/tools/testing/selftests/arm64/signal/testcases/gcs_frame.c new file mode 100644 index 000000000000..b405d82321da --- /dev/null +++ b/tools/testing/selftests/arm64/signal/testcases/gcs_frame.c @@ -0,0 +1,88 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2023 ARM Limited + */ + +#include <signal.h> +#include <ucontext.h> +#include <sys/prctl.h> + +#include "test_signals_utils.h" +#include "testcases.h" + +static union { + ucontext_t uc; + char buf[1024 * 64]; +} context; + +static int gcs_regs(struct tdescr *td, siginfo_t *si, ucontext_t *uc) +{ + size_t offset; + struct _aarch64_ctx *head = GET_BUF_RESV_HEAD(context); + struct gcs_context *gcs; + unsigned long expected, gcspr; + uint64_t *u64_val; + int ret; + + ret = prctl(PR_GET_SHADOW_STACK_STATUS, &expected, 0, 0, 0); + if (ret != 0) { + fprintf(stderr, "Unable to query GCS status\n"); + return 1; + } + + /* We expect a cap to be added to the GCS in the signal frame */ + gcspr = get_gcspr_el0(); + gcspr -= 8; + fprintf(stderr, "Expecting GCSPR_EL0 %lx\n", gcspr); + + if (!get_current_context(td, &context.uc, sizeof(context))) { + fprintf(stderr, "Failed getting context\n"); + return 1; + } + + /* Ensure that the signal restore token was consumed */ + u64_val = (uint64_t *)get_gcspr_el0() + 1; + if (*u64_val) { + fprintf(stderr, "GCS value at %p is %lx not 0\n", + u64_val, *u64_val); + return 1; + } + + fprintf(stderr, "Got context\n"); + + head = get_header(head, GCS_MAGIC, GET_BUF_RESV_SIZE(context), + &offset); + if (!head) { + fprintf(stderr, "No GCS context\n"); + return 1; + } + + gcs = (struct gcs_context *)head; + + /* Basic size validation is done in get_current_context() */ 
+ + if (gcs->features_enabled != expected) { + fprintf(stderr, "Features enabled %llx but expected %lx\n", + gcs->features_enabled, expected); + return 1; + } + + if (gcs->gcspr != gcspr) { + fprintf(stderr, "Got GCSPR %llx but expected %lx\n", + gcs->gcspr, gcspr); + return 1; + } + + fprintf(stderr, "GCS context validated\n"); + td->pass = 1; + + return 0; +} + +struct tdescr tde = { + .name = "GCS basics", + .descr = "Validate a GCS signal context", + .feats_required = FEAT_GCS, + .timeout = 3, + .run = gcs_regs, +}; diff --git a/tools/testing/selftests/arm64/signal/testcases/gcs_write_fault.c b/tools/testing/selftests/arm64/signal/testcases/gcs_write_fault.c new file mode 100644 index 000000000000..faeabb18c4b2 --- /dev/null +++ b/tools/testing/selftests/arm64/signal/testcases/gcs_write_fault.c @@ -0,0 +1,67 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2023 ARM Limited + */ + +#include <errno.h> +#include <signal.h> +#include <unistd.h> + +#include <sys/mman.h> +#include <sys/prctl.h> + +#include "test_signals_utils.h" +#include "testcases.h" + +static uint64_t *gcs_page; + +#ifndef __NR_map_shadow_stack +#define __NR_map_shadow_stack 453 +#endif + +static bool alloc_gcs(struct tdescr *td) +{ + long page_size = sysconf(_SC_PAGE_SIZE); + + gcs_page = (void *)syscall(__NR_map_shadow_stack, 0, + page_size, 0); + if (gcs_page == MAP_FAILED) { + fprintf(stderr, "Failed to map %ld byte GCS: %d\n", + page_size, errno); + return false; + } + + return true; +} + +static int gcs_write_fault_trigger(struct tdescr *td) +{ + /* Verify that the page is readable (ie, not completely unmapped) */ + fprintf(stderr, "Read value 0x%lx\n", gcs_page[0]); + + /* A regular write should trigger a fault */ + gcs_page[0] = EINVAL; + + return 0; +} + +static int gcs_write_fault_signal(struct tdescr *td, siginfo_t *si, + ucontext_t *uc) +{ + ASSERT_GOOD_CONTEXT(uc); + + return 1; +} + + +struct tdescr tde = { + .name = "GCS write fault", + .descr = "Normal writes to a GCS segfault", + .feats_required = FEAT_GCS, + .timeout = 3, + .sig_ok = SIGSEGV, + .sanity_disabled = true, + .init = alloc_gcs, + .trigger = gcs_write_fault_trigger, + .run = gcs_write_fault_signal, +};
Mark Brown broonie@kernel.org writes:
Do some testing of the signal handling for GCS, checking that a GCS frame has the expected information in it and that the expected signals are delivered with invalid operations.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
tools/testing/selftests/arm64/signal/.gitignore | 1 + .../selftests/arm64/signal/test_signals_utils.h | 10 +++ .../arm64/signal/testcases/gcs_exception_fault.c | 62 +++++++++++++++ .../selftests/arm64/signal/testcases/gcs_frame.c | 88 ++++++++++++++++++++++ .../arm64/signal/testcases/gcs_write_fault.c | 67 ++++++++++++++++ 5 files changed, 228 insertions(+)
The gcs_exception_fault, gcs_frame and gcs_write_fault tests pass on my FVP setup:
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Add a stress test which runs one more process than we have CPUs spinning through a very recursive function with frequent syscalls immediately prior to return and signals being injected every 100ms. The goal is to flag up any scheduling related issues, for example failure to ensure that barriers are inserted when moving a GCS using task to another CPU. The test runs for a configurable amount of time, defaulting to 10 seconds.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/gcs/.gitignore | 2 + tools/testing/selftests/arm64/gcs/Makefile | 6 +- tools/testing/selftests/arm64/gcs/asm-offsets.h | 0 .../selftests/arm64/gcs/gcs-stress-thread.S | 311 ++++++++++++ tools/testing/selftests/arm64/gcs/gcs-stress.c | 530 +++++++++++++++++++++ 5 files changed, 848 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/arm64/gcs/.gitignore b/tools/testing/selftests/arm64/gcs/.gitignore index 0c86f53f68ad..1e8d1f6b27f2 100644 --- a/tools/testing/selftests/arm64/gcs/.gitignore +++ b/tools/testing/selftests/arm64/gcs/.gitignore @@ -1,3 +1,5 @@ basic-gcs libc-gcs gcs-locking +gcs-stress +gcs-stress-thread diff --git a/tools/testing/selftests/arm64/gcs/Makefile b/tools/testing/selftests/arm64/gcs/Makefile index 2173d6275956..d8b06ca51e22 100644 --- a/tools/testing/selftests/arm64/gcs/Makefile +++ b/tools/testing/selftests/arm64/gcs/Makefile @@ -6,7 +6,8 @@ # nolibc. #
-TEST_GEN_PROGS := basic-gcs libc-gcs gcs-locking +TEST_GEN_PROGS := basic-gcs libc-gcs gcs-locking gcs-stress +TEST_GEN_PROGS_EXTENDED := gcs-stress-thread
LDLIBS+=-lpthread
@@ -18,3 +19,6 @@ $(OUTPUT)/basic-gcs: basic-gcs.c -I../../../../../usr/include \ -std=gnu99 -I../.. -g \ -ffreestanding -Wall $^ -o $@ -lgcc + +$(OUTPUT)/gcs-stress-thread: gcs-stress-thread.S + $(CC) -nostdlib $^ -o $@ diff --git a/tools/testing/selftests/arm64/gcs/asm-offsets.h b/tools/testing/selftests/arm64/gcs/asm-offsets.h new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/tools/testing/selftests/arm64/gcs/gcs-stress-thread.S b/tools/testing/selftests/arm64/gcs/gcs-stress-thread.S new file mode 100644 index 000000000000..b88b25217da5 --- /dev/null +++ b/tools/testing/selftests/arm64/gcs/gcs-stress-thread.S @@ -0,0 +1,311 @@ +// Program that loops for ever doing lots of recursions and system calls, +// intended to be used as part of a stress test for GCS context switching. +// +// Copyright 2015-2023 Arm Ltd + +#include <asm/unistd.h> + +#define sa_sz 32 +#define sa_flags 8 +#define sa_handler 0 +#define sa_mask_sz 8 + +#define si_code 8 + +#define SIGINT 2 +#define SIGABRT 6 +#define SIGUSR1 10 +#define SIGSEGV 11 +#define SIGUSR2 12 +#define SIGTERM 15 +#define SEGV_CPERR 10 + +#define SA_NODEFER 1073741824 +#define SA_SIGINFO 4 +#define ucontext_regs 184 + +#define PR_SET_SHADOW_STACK_STATUS 75 +# define PR_SHADOW_STACK_ENABLE (1UL << 0) + +#define GCSPR_EL0 S3_3_C2_C5_1 + +.macro function name + .macro endfunction + .type \name, @function + .purgem endfunction + .endm +\name: +.endm + +// Print a single character x0 to stdout +// Clobbers x0-x2,x8 +function putc + str x0, [sp, #-16]! + + mov x0, #1 // STDOUT_FILENO + mov x1, sp + mov x2, #1 + mov x8, #__NR_write + svc #0 + + add sp, sp, #16 + ret +endfunction +.globl putc + +// Print a NUL-terminated string starting at address x0 to stdout +// Clobbers x0-x3,x8 +function puts + mov x1, x0 + + mov x2, #0 +0: ldrb w3, [x0], #1 + cbz w3, 1f + add x2, x2, #1 + b 0b + +1: mov w0, #1 // STDOUT_FILENO + mov x8, #__NR_write + svc #0 + + ret +endfunction +.globl puts + +// Utility macro to print a literal string +// Clobbers x0-x4,x8 +.macro puts string + .pushsection .rodata.str1.1, "aMS", @progbits, 1 +.L__puts_literal@: .string "\string" + .popsection + + ldr x0, =.L__puts_literal@ + bl puts +.endm + +// Print an unsigned decimal number x0 to stdout +// Clobbers x0-x4,x8 +function putdec + mov x1, sp + str x30, [sp, #-32]! // Result can't be > 20 digits + + mov x2, #0 + strb w2, [x1, #-1]! // Write the NUL terminator + + mov x2, #10 +0: udiv x3, x0, x2 // div-mod loop to generate the digits + msub x0, x3, x2, x0 + add w0, w0, #'0' + strb w0, [x1, #-1]! + mov x0, x3 + cbnz x3, 0b + + ldrb w0, [x1] + cbnz w0, 1f + mov w0, #'0' // Print "0" for 0, not "" + strb w0, [x1, #-1]! + +1: mov x0, x1 + bl puts + + ldr x30, [sp], #32 + ret +endfunction +.globl putdec + +// Print an unsigned decimal number x0 to stdout, followed by a newline +// Clobbers x0-x5,x8 +function putdecn + mov x5, x30 + + bl putdec + mov x0, #'\n' + bl putc + + ret x5 +endfunction +.globl putdecn + +// Fill x1 bytes starting at x0 with 0. +// Clobbers x1, x2. +function memclr + mov w2, #0 +endfunction +.globl memclr + // fall through to memfill + +// Trivial memory fill: fill x1 bytes starting at address x0 with byte w2 +// Clobbers x1 +function memfill + cmp x1, #0 + b.eq 1f + +0: strb w2, [x0], #1 + subs x1, x1, #1 + b.ne 0b + +1: ret +endfunction +.globl memfill + +// w0: signal number +// x1: sa_action +// w2: sa_flags +// Clobbers x0-x6,x8 +function setsignal + str x30, [sp, #-((sa_sz + 15) / 16 * 16 + 16)]! 
+ + mov w4, w0 + mov x5, x1 + mov w6, w2 + + add x0, sp, #16 + mov x1, #sa_sz + bl memclr + + mov w0, w4 + add x1, sp, #16 + str w6, [x1, #sa_flags] + str x5, [x1, #sa_handler] + mov x2, #0 + mov x3, #sa_mask_sz + mov x8, #__NR_rt_sigaction + svc #0 + + cbz w0, 1f + + puts "sigaction failure\n" + b abort + +1: ldr x30, [sp], #((sa_sz + 15) / 16 * 16 + 16) + ret +endfunction + + +function tickle_handler + // Perhaps collect GCSPR_EL0 here in future? + ret +endfunction + +function terminate_handler + mov w21, w0 + mov x20, x2 + + puts "Terminated by signal " + mov w0, w21 + bl putdec + puts ", no error\n" + + mov x0, #0 + mov x8, #__NR_exit + svc #0 +endfunction + +function segv_handler + // stash the siginfo_t * + mov x20, x1 + + // Disable GCS, we don't want additional faults logging things + mov x0, PR_SET_SHADOW_STACK_STATUS + mov x1, xzr + mov x2, xzr + mov x3, xzr + mov x4, xzr + mov x5, xzr + mov x8, #__NR_prctl + svc #0 + + puts "Got SIGSEGV code " + + ldr x21, [x20, #si_code] + mov x0, x21 + bl putdec + + // GCS faults should have si_code SEGV_CPERR + cmp x21, #SEGV_CPERR + bne 1f + + puts " (GCS violation)" +1: + mov x0, '\n' + bl putc + b abort +endfunction + +// Recurse x20 times +.macro recurse id +function recurse\id + stp x29, x30, [sp, #-16]! + mov x29, sp + + cmp x20, 0 + beq 1f + sub x20, x20, 1 + bl recurse\id + +1: + ldp x29, x30, [sp], #16 + + // Do a syscall immediately prior to returning to try to provoke + // scheduling and migration at a point where coherency issues + // might trigger. + mov x8, #__NR_getpid + svc #0 + + ret +endfunction +.endm + +// Generate and use two copies so we're changing the GCS contents +recurse 1 +recurse 2 + +.globl _start +function _start + // Run with GCS + mov x0, PR_SET_SHADOW_STACK_STATUS + mov x1, PR_SHADOW_STACK_ENABLE + mov x2, xzr + mov x3, xzr + mov x4, xzr + mov x5, xzr + mov x8, #__NR_prctl + svc #0 + cbz x0, 1f + puts "Failed to enable GCS\n" + b abort +1: + + mov w0, #SIGTERM + adr x1, terminate_handler + mov w2, #SA_SIGINFO + bl setsignal + + mov w0, #SIGUSR1 + adr x1, tickle_handler + mov w2, #SA_SIGINFO + orr w2, w2, #SA_NODEFER + bl setsignal + + mov w0, #SIGSEGV + adr x1, segv_handler + mov w2, #SA_SIGINFO + orr w2, w2, #SA_NODEFER + bl setsignal + + puts "Running\n" + +loop: + // Small recursion depth so we're frequently flipping between + // the two recursors and changing what's on the stack + mov x20, #5 + bl recurse1 + mov x20, #5 + bl recurse2 + b loop +endfunction + +abort: + mov x0, #255 + mov x8, #__NR_exit + svc #0 diff --git a/tools/testing/selftests/arm64/gcs/gcs-stress.c b/tools/testing/selftests/arm64/gcs/gcs-stress.c new file mode 100644 index 000000000000..a81417cd6f5c --- /dev/null +++ b/tools/testing/selftests/arm64/gcs/gcs-stress.c @@ -0,0 +1,530 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2022-3 ARM Limited. 
+ */ + +#define _GNU_SOURCE +#define _POSIX_C_SOURCE 199309L + +#include <errno.h> +#include <getopt.h> +#include <poll.h> +#include <signal.h> +#include <stdbool.h> +#include <stddef.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <unistd.h> +#include <sys/auxv.h> +#include <sys/epoll.h> +#include <sys/prctl.h> +#include <sys/types.h> +#include <sys/uio.h> +#include <sys/wait.h> +#include <asm/hwcap.h> + +#include "../../kselftest.h" + +struct child_data { + char *name, *output; + pid_t pid; + int stdout; + bool output_seen; + bool exited; + int exit_status; + int exit_signal; +}; + +static int epoll_fd; +static struct child_data *children; +static struct epoll_event *evs; +static int tests; +static int num_children; +static bool terminate; + +static int startup_pipe[2]; + +static int num_processors(void) +{ + long nproc = sysconf(_SC_NPROCESSORS_CONF); + if (nproc < 0) { + perror("Unable to read number of processors\n"); + exit(EXIT_FAILURE); + } + + return nproc; +} + +static void start_thread(struct child_data *child) +{ + int ret, pipefd[2], i; + struct epoll_event ev; + + ret = pipe(pipefd); + if (ret != 0) + ksft_exit_fail_msg("Failed to create stdout pipe: %s (%d)\n", + strerror(errno), errno); + + child->pid = fork(); + if (child->pid == -1) + ksft_exit_fail_msg("fork() failed: %s (%d)\n", + strerror(errno), errno); + + if (!child->pid) { + /* + * In child, replace stdout with the pipe, errors to + * stderr from here as kselftest prints to stdout. + */ + ret = dup2(pipefd[1], 1); + if (ret == -1) { + fprintf(stderr, "dup2() %d\n", errno); + exit(EXIT_FAILURE); + } + + /* + * Duplicate the read side of the startup pipe to + * FD 3 so we can close everything else. + */ + ret = dup2(startup_pipe[0], 3); + if (ret == -1) { + fprintf(stderr, "dup2() %d\n", errno); + exit(EXIT_FAILURE); + } + + /* + * Very dumb mechanism to clean open FDs other than + * stdio. We don't want O_CLOEXEC for the pipes... + */ + for (i = 4; i < 8192; i++) + close(i); + + /* + * Read from the startup pipe, there should be no data + * and we should block until it is closed. We just + * carry on on error since this isn't super critical. + */ + ret = read(3, &i, sizeof(i)); + if (ret < 0) + fprintf(stderr, "read(startp pipe) failed: %s (%d)\n", + strerror(errno), errno); + if (ret > 0) + fprintf(stderr, "%d bytes of data on startup pipe\n", + ret); + close(3); + + ret = execl("gcs-stress-thread", "gcs-stress-thread", NULL); + fprintf(stderr, "execl(gcs-stress-thread) failed: %d (%s)\n", + errno, strerror(errno)); + + exit(EXIT_FAILURE); + } else { + /* + * In parent, remember the child and close our copy of the + * write side of stdout. 
+ */ + close(pipefd[1]); + child->stdout = pipefd[0]; + child->output = NULL; + child->exited = false; + child->output_seen = false; + + ev.events = EPOLLIN | EPOLLHUP; + ev.data.ptr = child; + + ret = asprintf(&child->name, "Thread-%d", child->pid); + if (ret == -1) + ksft_exit_fail_msg("asprintf() failed\n"); + + ret = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, child->stdout, &ev); + if (ret < 0) { + ksft_exit_fail_msg("%s EPOLL_CTL_ADD failed: %s (%d)\n", + child->name, strerror(errno), errno); + } + } + + ksft_print_msg("Started %s\n", child->name); + num_children++; +} + +static bool child_output_read(struct child_data *child) +{ + char read_data[1024]; + char work[1024]; + int ret, len, cur_work, cur_read; + + ret = read(child->stdout, read_data, sizeof(read_data)); + if (ret < 0) { + if (errno == EINTR) + return true; + + ksft_print_msg("%s: read() failed: %s (%d)\n", + child->name, strerror(errno), + errno); + return false; + } + len = ret; + + child->output_seen = true; + + /* Pick up any partial read */ + if (child->output) { + strncpy(work, child->output, sizeof(work) - 1); + cur_work = strnlen(work, sizeof(work)); + free(child->output); + child->output = NULL; + } else { + cur_work = 0; + } + + cur_read = 0; + while (cur_read < len) { + work[cur_work] = read_data[cur_read++]; + + if (work[cur_work] == '\n') { + work[cur_work] = '\0'; + ksft_print_msg("%s: %s\n", child->name, work); + cur_work = 0; + } else { + cur_work++; + } + } + + if (cur_work) { + work[cur_work] = '\0'; + ret = asprintf(&child->output, "%s", work); + if (ret == -1) + ksft_exit_fail_msg("Out of memory\n"); + } + + return false; +} + +static void child_output(struct child_data *child, uint32_t events, + bool flush) +{ + bool read_more; + + if (events & EPOLLIN) { + do { + read_more = child_output_read(child); + } while (read_more); + } + + if (events & EPOLLHUP) { + close(child->stdout); + child->stdout = -1; + flush = true; + } + + if (flush && child->output) { + ksft_print_msg("%s: %s<EOF>\n", child->name, child->output); + free(child->output); + child->output = NULL; + } +} + +static void child_tickle(struct child_data *child) +{ + if (child->output_seen && !child->exited) + kill(child->pid, SIGUSR1); +} + +static void child_stop(struct child_data *child) +{ + if (!child->exited) + kill(child->pid, SIGTERM); +} + +static void child_cleanup(struct child_data *child) +{ + pid_t ret; + int status; + bool fail = false; + + if (!child->exited) { + do { + ret = waitpid(child->pid, &status, 0); + if (ret == -1 && errno == EINTR) + continue; + + if (ret == -1) { + ksft_print_msg("waitpid(%d) failed: %s (%d)\n", + child->pid, strerror(errno), + errno); + fail = true; + break; + } + + if (WIFEXITED(status)) { + child->exit_status = WEXITSTATUS(status); + child->exited = true; + } + + if (WIFSIGNALED(status)) { + child->exit_signal = WTERMSIG(status); + ksft_print_msg("%s: Exited due to signal %d\n", + child->name); + fail = true; + child->exited = true; + } + } while (!child->exited); + } + + if (!child->output_seen) { + ksft_print_msg("%s no output seen\n", child->name); + fail = true; + } + + if (child->exit_status != 0) { + ksft_print_msg("%s exited with error code %d\n", + child->name, child->exit_status); + fail = true; + } + + ksft_test_result(!fail, "%s\n", child->name); +} + +static void handle_child_signal(int sig, siginfo_t *info, void *context) +{ + int i; + bool found = false; + + for (i = 0; i < num_children; i++) { + if (children[i].pid == info->si_pid) { + children[i].exited = true; + children[i].exit_status 
= info->si_status; + found = true; + break; + } + } + + if (!found) + ksft_print_msg("SIGCHLD for unknown PID %d with status %d\n", + info->si_pid, info->si_status); +} + +static void handle_exit_signal(int sig, siginfo_t *info, void *context) +{ + int i; + + /* If we're already exiting then don't signal again */ + if (terminate) + return; + + ksft_print_msg("Got signal, exiting...\n"); + + terminate = true; + + /* + * This should be redundant, the main loop should clean up + * after us, but for safety stop everything we can here. + */ + for (i = 0; i < num_children; i++) + child_stop(&children[i]); +} + +/* Handle any pending output without blocking */ +static void drain_output(bool flush) +{ + int ret = 1; + int i; + + while (ret > 0) { + ret = epoll_wait(epoll_fd, evs, tests, 0); + if (ret < 0) { + if (errno == EINTR) + continue; + ksft_print_msg("epoll_wait() failed: %s (%d)\n", + strerror(errno), errno); + } + + for (i = 0; i < ret; i++) + child_output(evs[i].data.ptr, evs[i].events, flush); + } +} + +static const struct option options[] = { + { "timeout", required_argument, NULL, 't' }, + { } +}; + +int main(int argc, char **argv) +{ + int seen_children; + bool all_children_started = false; + int gcs_threads; + int timeout = 10; + int ret, cpus, i, c; + struct sigaction sa; + + while ((c = getopt_long(argc, argv, "t:", options, NULL)) != -1) { + switch (c) { + case 't': + ret = sscanf(optarg, "%d", &timeout); + if (ret != 1) + ksft_exit_fail_msg("Failed to parse timeout %s\n", + optarg); + break; + default: + ksft_exit_fail_msg("Unknown argument\n"); + } + } + + cpus = num_processors(); + tests = 0; + + if (getauxval(AT_HWCAP2) & HWCAP2_GCS) { + /* One extra thread, trying to trigger migrations */ + gcs_threads = cpus + 1; + tests += gcs_threads; + } else { + gcs_threads = 0; + } + + ksft_print_header(); + ksft_set_plan(tests); + + ksft_print_msg("%d CPUs, %d GCS threads\n", + cpus, gcs_threads); + + if (!tests) + ksft_exit_skip("No tests scheduled\n"); + + if (timeout > 0) + ksft_print_msg("Will run for %ds\n", timeout); + else + ksft_print_msg("Will run until terminated\n"); + + children = calloc(sizeof(*children), tests); + if (!children) + ksft_exit_fail_msg("Unable to allocate child data\n"); + + ret = epoll_create1(EPOLL_CLOEXEC); + if (ret < 0) + ksft_exit_fail_msg("epoll_create1() failed: %s (%d)\n", + strerror(errno), ret); + epoll_fd = ret; + + /* Create a pipe which children will block on before execing */ + ret = pipe(startup_pipe); + if (ret != 0) + ksft_exit_fail_msg("Failed to create startup pipe: %s (%d)\n", + strerror(errno), errno); + + /* Get signal handers ready before we start any children */ + memset(&sa, 0, sizeof(sa)); + sa.sa_sigaction = handle_exit_signal; + sa.sa_flags = SA_RESTART | SA_SIGINFO; + sigemptyset(&sa.sa_mask); + ret = sigaction(SIGINT, &sa, NULL); + if (ret < 0) + ksft_print_msg("Failed to install SIGINT handler: %s (%d)\n", + strerror(errno), errno); + ret = sigaction(SIGTERM, &sa, NULL); + if (ret < 0) + ksft_print_msg("Failed to install SIGTERM handler: %s (%d)\n", + strerror(errno), errno); + sa.sa_sigaction = handle_child_signal; + ret = sigaction(SIGCHLD, &sa, NULL); + if (ret < 0) + ksft_print_msg("Failed to install SIGCHLD handler: %s (%d)\n", + strerror(errno), errno); + + evs = calloc(tests, sizeof(*evs)); + if (!evs) + ksft_exit_fail_msg("Failed to allocated %d epoll events\n", + tests); + + for (i = 0; i < gcs_threads; i++) + start_thread(&children[i]); + + /* + * All children started, close the startup pipe and let them + * run. 
+ */ + close(startup_pipe[0]); + close(startup_pipe[1]); + + timeout *= 10; + for (;;) { + /* Did we get a signal asking us to exit? */ + if (terminate) + break; + + /* + * Timeout is counted in 100ms with no output, the + * tests print during startup then are silent when + * running so this should ensure they all ran enough + * to install the signal handler, this is especially + * useful in emulation where we will both be slow and + * likely to have a large set of VLs. + */ + ret = epoll_wait(epoll_fd, evs, tests, 100); + if (ret < 0) { + if (errno == EINTR) + continue; + ksft_exit_fail_msg("epoll_wait() failed: %s (%d)\n", + strerror(errno), errno); + } + + /* Output? */ + if (ret > 0) { + for (i = 0; i < ret; i++) { + child_output(evs[i].data.ptr, evs[i].events, + false); + } + continue; + } + + /* Otherwise epoll_wait() timed out */ + + /* + * If the child processes have not produced output they + * aren't actually running the tests yet. + */ + if (!all_children_started) { + seen_children = 0; + + for (i = 0; i < num_children; i++) + if (children[i].output_seen || + children[i].exited) + seen_children++; + + if (seen_children != num_children) { + ksft_print_msg("Waiting for %d children\n", + num_children - seen_children); + continue; + } + + all_children_started = true; + } + + ksft_print_msg("Sending signals, timeout remaining: %d00ms\n", + timeout); + + for (i = 0; i < num_children; i++) + child_tickle(&children[i]); + + /* Negative timeout means run indefinitely */ + if (timeout < 0) + continue; + if (--timeout == 0) + break; + } + + ksft_print_msg("Finishing up...\n"); + terminate = true; + + for (i = 0; i < tests; i++) + child_stop(&children[i]); + + drain_output(false); + + for (i = 0; i < tests; i++) + child_cleanup(&children[i]); + + drain_output(true); + + ksft_finished(); +}
Mark Brown broonie@kernel.org writes:
Add a stress test which runs one more process than we have CPUs spinning through a very recursive function with frequent syscalls immediately prior to return and signals being injected every 100ms. The goal is to flag up any scheduling related issues, for example failure to ensure that barriers are inserted when moving a GCS using task to another CPU. The test runs for a configurable amount of time, defaulting to 10 seconds.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
tools/testing/selftests/arm64/gcs/.gitignore | 2 + tools/testing/selftests/arm64/gcs/Makefile | 6 +- tools/testing/selftests/arm64/gcs/asm-offsets.h | 0 .../selftests/arm64/gcs/gcs-stress-thread.S | 311 ++++++++++++ tools/testing/selftests/arm64/gcs/gcs-stress.c | 530 +++++++++++++++++++++ 5 files changed, 848 insertions(+), 1 deletion(-)
Unfortunately, gcs-stress still fails on my FVP setup. I tested on an arm64 defconfig with and without THP enabled with, the same results:
$ sudo ./run_kselftest.sh -t arm64:gcs-stress -o 600 TAP version 13 1..1 # overriding timeout to 600 # selftests: arm64: gcs-stress # TAP version 13 # 1..9 # # 8 CPUs, 9 GCS threads # # Will run for 10s # # Started Thread-4870 # # Started Thread-4871 # # Started Thread-4872 # # Started Thread-4873 # # Started Thread-4874 # # Started Thread-4875 # # Started Thread-4876 # # Started Thread-4877 # # Started Thread-4878 # # Waiting for 9 children # # Waiting for 9 children # # Thread-4870: Failed to enable GCS # # Thread-4871: Failed to enable GCS # # Thread-4872: Failed to enable GCS # # Thread-4873: Failed to enable GCS # # Thread-4876: Failed to enable GCS # # Thread-4875: Failed to enable GCS # # Thread-4874: Failed to enable GCS # # Thread-4878: Failed to enable GCS # # Thread-4877: Failed to enable GCS # # Sending signals, timeout remaining: 10000ms # # Sending signals, timeout remaining: 9900ms # # Sending signals, timeout remaining: 9800ms ⋮ # # Sending signals, timeout remaining: 300ms # # Sending signals, timeout remaining: 200ms # # Sending signals, timeout remaining: 100ms # # Finishing up... # # Thread-4870 exited with error code 255 # not ok 1 Thread-4870 # # Thread-4871 exited with error code 255 # not ok 2 Thread-4871 # # Thread-4872 exited with error code 255 # not ok 3 Thread-4872 # # Thread-4873 exited with error code 255 # not ok 4 Thread-4873 # # Thread-4874 exited with error code 255 # not ok 5 Thread-4874 # # Thread-4875 exited with error code 255 # not ok 6 Thread-4875 # # Thread-4876 exited with error code 255 # not ok 7 Thread-4876 # # Thread-4877 exited with error code 255 # not ok 8 Thread-4877 # # Thread-4878 exited with error code 255 # not ok 9 Thread-4878 # # Totals: pass:0 fail:9 xfail:0 xpass:0 skip:0 error:0 ok 1 selftests: arm64: gcs-stress bauermann@armv94:/var/tmp/selftests-arm64-gcs-v10$ echo $? 0
On Wed, Aug 07, 2024 at 07:39:54PM -0300, Thiago Jung Bauermann wrote:
Mark Brown broonie@kernel.org writes:
Add a stress test which runs one more process than we have CPUs spinning through a very recursive function with frequent syscalls immediately prior
Unfortunately, gcs-stress still fails on my FVP setup. I tested on an arm64 defconfig with and without THP enabled with, the same results:
Can you please try to investigate why this is happening on your system? I am unable to reproduce this, for example the actual branch that was posted gave this:
# selftests: arm64: gcs-stress # TAP version 13 # 1..9 # # 8 CPUs, 9 GCS threads # # Will run for 10s # # Started Thread-8350 # # Started Thread-8351 # # Started Thread-8352 # # Started Thread-8353 # # Started Thread-8354 # # Started Thread-8355 # # Started Thread-8356 # # Started Thread-8357 # # Started Thread-8358 # # Thread-8350: Running
...
# # Sending signals, timeout remaining: 100ms # # Finishing up... # # Thread-8351: Terminated by signal 15, no error # # Thread-8352: Terminated by signal 15, no error # # Thread-8353: Terminated by signal 15, no error # # Thread-8354: Terminated by signal 15, no error # # Thread-8355: Terminated by signal 15, no error # # Thread-8357: Terminated by signal 15, no error # # Thread-8358: Terminated by signal 15, no error # ok 1 Thread-8350 # ok 2 Thread-8351 # ok 3 Thread-8352 # ok 4 Thread-8353 # ok 5 Thread-8354 # ok 6 Thread-8355 # ok 7 Thread-8356 # ok 8 Thread-8357 # ok 9 Thread-8358 # # Thread-8356: Terminated by signal 15, no error # # Thread-8350: Terminated by signal 15, no error # # Totals: pass:9 fail:0 xfail:0 xpass:0 skip:0 error:0
and Anders also ran the selftests successfully, including with THP enabled (as noted in the changelog those issues should now be resolved). THP issues should not have been relevant for this test as it doesn't fork with GCS enabled.
# # Thread-4870: Failed to enable GCS
which is printed if a basic PR_SET_SHADOW_STACK_STATUS fails immediately the program starts executing:
function _start // Run with GCS mov x0, PR_SET_SHADOW_STACK_STATUS mov x1, PR_SHADOW_STACK_ENABLE mov x2, xzr mov x3, xzr mov x4, xzr mov x5, xzr mov x8, #__NR_prctl svc #0 cbz x0, 1f puts "Failed to enable GCS\n" b abort
the defines for which all seem up to date (and unlikely to fail in system or config specific fashions). What happens if you try to execute the gcs-stress-thread binary directly, does strace show anything interesting? If you instrument arch_set_shadow_stack_status() in the kernel does it show anything?
Mark Brown broonie@kernel.org writes:
On Wed, Aug 07, 2024 at 07:39:54PM -0300, Thiago Jung Bauermann wrote:
# # Thread-4870: Failed to enable GCS
which is printed if a basic PR_SET_SHADOW_STACK_STATUS fails immediately the program starts executing:
function _start // Run with GCS mov x0, PR_SET_SHADOW_STACK_STATUS mov x1, PR_SHADOW_STACK_ENABLE mov x2, xzr mov x3, xzr mov x4, xzr mov x5, xzr mov x8, #__NR_prctl svc #0 cbz x0, 1f puts "Failed to enable GCS\n" b abort
the defines for which all seem up to date (and unlikely to fail in system or config specific fashions). What happens if you try to execute the gcs-stress-thread binary directly, does strace show anything interesting? If you instrument arch_set_shadow_stack_status() in the kernel does it show anything?
Thank you for the pointer. It turned out that I accidentally ran the selftests binaries from the v9 version instead of the v10 version, and the gcs-stress-thread binary failed because it was using the old value for PR_SET_SHADOW_STACK_STATUS.
Using the v10 version of the selftests the gcs-stress test passes. Sorry for the false alarm.
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
On Thu, Aug 08, 2024 at 03:23:50AM -0300, Thiago Jung Bauermann wrote:
Thank you for the pointer. It turned out that I accidentally ran the selftests binaries from the v9 version instead of the v10 version, and the gcs-stress-thread binary failed because it was using the old value for PR_SET_SHADOW_STACK_STATUS.
Using the v10 version of the selftests the gcs-stress test passes. Sorry for the false alarm.
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
Ah, that's a relief thanks!
While it's a bit off topic for them the floating point stress tests do give us some coverage of context thrashing cases, and also of active signal delivery separate to the relatively complicated framework in the actual signals tests. Have the tests enable GCS on startup, ignoring failures so they continue to work as before on systems without GCS.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/arm64/fp/assembler.h | 15 +++++++++++++++ tools/testing/selftests/arm64/fp/fpsimd-test.S | 2 ++ tools/testing/selftests/arm64/fp/sve-test.S | 2 ++ tools/testing/selftests/arm64/fp/za-test.S | 2 ++ tools/testing/selftests/arm64/fp/zt-test.S | 2 ++ 5 files changed, 23 insertions(+)
diff --git a/tools/testing/selftests/arm64/fp/assembler.h b/tools/testing/selftests/arm64/fp/assembler.h index 9b38a0da407d..1fc46a5642c2 100644 --- a/tools/testing/selftests/arm64/fp/assembler.h +++ b/tools/testing/selftests/arm64/fp/assembler.h @@ -65,4 +65,19 @@ endfunction bl puts .endm
+#define PR_SET_SHADOW_STACK_STATUS 75 +# define PR_SHADOW_STACK_ENABLE (1UL << 0) + +.macro enable_gcs + // Run with GCS + mov x0, PR_SET_SHADOW_STACK_STATUS + mov x1, PR_SHADOW_STACK_ENABLE + mov x2, xzr + mov x3, xzr + mov x4, xzr + mov x5, xzr + mov x8, #__NR_prctl + svc #0 +.endm + #endif /* ! ASSEMBLER_H */ diff --git a/tools/testing/selftests/arm64/fp/fpsimd-test.S b/tools/testing/selftests/arm64/fp/fpsimd-test.S index 8b960d01ed2e..b16fb7f42e3e 100644 --- a/tools/testing/selftests/arm64/fp/fpsimd-test.S +++ b/tools/testing/selftests/arm64/fp/fpsimd-test.S @@ -215,6 +215,8 @@ endfunction // Main program entry point .globl _start function _start + enable_gcs + mov x23, #0 // signal count
mov w0, #SIGINT diff --git a/tools/testing/selftests/arm64/fp/sve-test.S b/tools/testing/selftests/arm64/fp/sve-test.S index fff60e2a25ad..2fb4f0b84476 100644 --- a/tools/testing/selftests/arm64/fp/sve-test.S +++ b/tools/testing/selftests/arm64/fp/sve-test.S @@ -378,6 +378,8 @@ endfunction // Main program entry point .globl _start function _start + enable_gcs + mov x23, #0 // Irritation signal count
mov w0, #SIGINT diff --git a/tools/testing/selftests/arm64/fp/za-test.S b/tools/testing/selftests/arm64/fp/za-test.S index 095b45531640..b2603aba99de 100644 --- a/tools/testing/selftests/arm64/fp/za-test.S +++ b/tools/testing/selftests/arm64/fp/za-test.S @@ -231,6 +231,8 @@ endfunction // Main program entry point .globl _start function _start + enable_gcs + mov x23, #0 // signal count
mov w0, #SIGINT diff --git a/tools/testing/selftests/arm64/fp/zt-test.S b/tools/testing/selftests/arm64/fp/zt-test.S index b5c81e81a379..8d9609a49008 100644 --- a/tools/testing/selftests/arm64/fp/zt-test.S +++ b/tools/testing/selftests/arm64/fp/zt-test.S @@ -200,6 +200,8 @@ endfunction // Main program entry point .globl _start function _start + enable_gcs + mov x23, #0 // signal count
mov w0, #SIGINT
Mark Brown broonie@kernel.org writes:
While it's a bit off topic for them the floating point stress tests do give us some coverage of context thrashing cases, and also of active signal delivery separate to the relatively complicated framework in the actual signals tests. Have the tests enable GCS on startup, ignoring failures so they continue to work as before on systems without GCS.
Reviewed-by: Thiago Jung Bauermann thiago.bauermann@linaro.org Signed-off-by: Mark Brown broonie@kernel.org
tools/testing/selftests/arm64/fp/assembler.h | 15 +++++++++++++++ tools/testing/selftests/arm64/fp/fpsimd-test.S | 2 ++ tools/testing/selftests/arm64/fp/sve-test.S | 2 ++ tools/testing/selftests/arm64/fp/za-test.S | 2 ++ tools/testing/selftests/arm64/fp/zt-test.S | 2 ++ 5 files changed, 23 insertions(+)
The fpsimd, sve, za and zt tests don't find any errors on my FVP setup when left running for a while:
Tested-by: Thiago Jung Bauermann thiago.bauermann@linaro.org
GCS adds new registers GCSCR_EL1, GCSCRE0_EL1, GCSPR_EL1 and GCSPR_EL0. Add these to those validated by get-reg-list.
Signed-off-by: Mark Brown broonie@kernel.org --- tools/testing/selftests/kvm/aarch64/get-reg-list.c | 28 ++++++++++++++++++++++ 1 file changed, 28 insertions(+)
diff --git a/tools/testing/selftests/kvm/aarch64/get-reg-list.c b/tools/testing/selftests/kvm/aarch64/get-reg-list.c index 709d7d721760..9785f41e6042 100644 --- a/tools/testing/selftests/kvm/aarch64/get-reg-list.c +++ b/tools/testing/selftests/kvm/aarch64/get-reg-list.c @@ -29,6 +29,24 @@ static struct feature_id_reg feat_id_regs[] = { 0, 1 }, + { + ARM64_SYS_REG(3, 0, 2, 5, 0), /* GCSCR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 44, + 1 + }, + { + ARM64_SYS_REG(3, 0, 2, 5, 1), /* GCSPR_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 44, + 1 + }, + { + ARM64_SYS_REG(3, 0, 2, 5, 2), /* GCSCRE0_EL1 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 44, + 1 + }, { ARM64_SYS_REG(3, 0, 10, 2, 2), /* PIRE0_EL1 */ ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ @@ -40,6 +58,12 @@ static struct feature_id_reg feat_id_regs[] = { ARM64_SYS_REG(3, 0, 0, 7, 3), /* ID_AA64MMFR3_EL1 */ 4, 1 + }, + { + ARM64_SYS_REG(3, 3, 2, 5, 1), /* GCSPR_EL0 */ + ARM64_SYS_REG(3, 0, 0, 4, 1), /* ID_AA64PFR1_EL1 */ + 44, + 1 } };
@@ -460,6 +484,9 @@ static __u64 base_regs[] = { ARM64_SYS_REG(3, 0, 2, 0, 1), /* TTBR1_EL1 */ ARM64_SYS_REG(3, 0, 2, 0, 2), /* TCR_EL1 */ ARM64_SYS_REG(3, 0, 2, 0, 3), /* TCR2_EL1 */ + ARM64_SYS_REG(3, 0, 2, 5, 0), /* GCSCR_EL1 */ + ARM64_SYS_REG(3, 0, 2, 5, 1), /* GCSPR_EL1 */ + ARM64_SYS_REG(3, 0, 2, 5, 2), /* GCSCRE0_EL1 */ ARM64_SYS_REG(3, 0, 5, 1, 0), /* AFSR0_EL1 */ ARM64_SYS_REG(3, 0, 5, 1, 1), /* AFSR1_EL1 */ ARM64_SYS_REG(3, 0, 5, 2, 0), /* ESR_EL1 */ @@ -475,6 +502,7 @@ static __u64 base_regs[] = { ARM64_SYS_REG(3, 0, 13, 0, 4), /* TPIDR_EL1 */ ARM64_SYS_REG(3, 0, 14, 1, 0), /* CNTKCTL_EL1 */ ARM64_SYS_REG(3, 2, 0, 0, 0), /* CSSELR_EL1 */ + ARM64_SYS_REG(3, 3, 2, 5, 1), /* GCSPR_EL0 */ ARM64_SYS_REG(3, 3, 13, 0, 2), /* TPIDR_EL0 */ ARM64_SYS_REG(3, 3, 13, 0, 3), /* TPIDRRO_EL0 */ ARM64_SYS_REG(3, 3, 14, 0, 1), /* CNTPCT_EL0 */
On 2024-08-01 13:06, Mark Brown wrote:
The arm64 Guarded Control Stack (GCS) feature provides support for hardware protected stacks of return addresses, intended to provide hardening against return oriented programming (ROP) attacks and to make it easier to gather call stacks for applications such as profiling.
When GCS is active a secondary stack called the Guarded Control Stack is maintained, protected with a memory attribute which means that it can only be written with specific GCS operations. The current GCS pointer can not be directly written to by userspace. When a BL is executed the value stored in LR is also pushed onto the GCS, and when a RET is executed the top of the GCS is popped and compared to LR with a fault being raised if the values do not match. GCS operations may only be performed on GCS pages, a data abort is generated if they are not.
The combination of hardware enforcement and lack of extra instructions in the function entry and exit paths should result in something which has less overhead and is more difficult to attack than a purely software implementation like clang's shadow stacks.
This series implements support for use of GCS by userspace, along with support for use of GCS within KVM guests. It does not enable use of GCS by either EL1 or EL2, this will be implemented separately. Executables are started without GCS and must use a prctl() to enable it, it is expected that this will be done very early in application execution by the dynamic linker or other startup code. For dynamic linking this will be done by checking that everything in the executable is marked as GCS compatible.
x86 has an equivalent feature called shadow stacks, this series depends on the x86 patches for generic memory management support for the new guarded/shadow stack page type and shares APIs as much as possible. As there has been extensive discussion with the wider community around the ABI for shadow stacks I have as far as practical kept implementation decisions close to those for x86, anticipating that review would lead to similar conclusions in the absence of strong reasoning for divergence.
The main divergence I am concious of is that x86 allows shadow stack to be enabled and disabled repeatedly, freeing the shadow stack for the thread whenever disabled, while this implementation keeps the GCS allocated after disable but refuses to reenable it. This is to avoid races with things actively walking the GCS during a disable, we do anticipate that some systems will wish to disable GCS at runtime but are not aware of any demand for subsequently reenabling it.
x86 uses an arch_prctl() to manage enable and disable, since only x86 and S/390 use arch_prctl() a generic prctl() was proposed[1] as part of a patch set for the equivalent RISC-V Zicfiss feature which I initially adopted fairly directly but following review feedback has been revised quite a bit.
We currently maintain the x86 pattern of implicitly allocating a shadow stack for threads started with shadow stack enabled, there has been some discussion of removing this support and requiring the use of clone3() with explicit allocation of shadow stacks instead. I have no strong feelings either way, implicit allocation is not really consistent with anything else we do and creates the potential for errors around thread exit but on the other hand it is existing ABI on x86 and minimises the changes needed in userspace code.
glibc and bionic changes using this ABI have been implemented and tested. Headless Android systems have been validated and Ross Burton has used this code has been used to bring up a Yocto system with GCS enabed as standard, a test implementation of V8 support has also been done.
There is an open issue with support for CRIU, on x86 this required the ability to set the GCS mode via ptrace. This series supports configuring mode bits other than enable/disable via ptrace but it needs to be confirmed if this is sufficient.
The series depends on support for shadow stacks in clone3(), that series includes the addition of ARCH_HAS_USER_SHADOW_STACK.
https://lore.kernel.org/r/20240731-clone3-shadow-stack-v7-0-a9532eebfb1d@ker...
Verified this patchset on a FVP.
Tested-by: Linux Kernel Functional Testing lkft@linaro.org
Cheers, Anders
On Thu, 01 Aug 2024 13:06:27 +0100, Mark Brown broonie@kernel.org wrote:
[...]
- Don't change writability of ID_AA64PFR1_EL1 for KVM.
How does it work then?
M.
linux-kselftest-mirror@lists.linaro.org