On Tue, Feb 20, 2024 at 11:30:22PM +0000, Edgecombe, Rick P wrote:
On Tue, 2024-02-20 at 13:57 -0500, Rich Felker wrote:
On Tue, Feb 20, 2024 at 06:41:05PM +0000, Edgecombe, Rick P wrote:
Hmm, could the shadow stack underflow onto the real stack then? Not sure how bad that is. INCSSP (incrementing the SSP register on x86) loops are not rare so it seems like something that could happen.
Shadow stack underflow should fault on attempt to access non-shadow-stack memory as shadow-stack, no?
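For concreteness, the INCSSP loop in question is roughly what a CET-aware longjmp does to unwind the shadow stack. A minimal sketch, assuming GCC/Clang's shadow stack intrinsics from <immintrin.h> and -mshstk (purely illustrative, not code from any libc):

#include <immintrin.h>
#include <stdint.h>

/* Unwind the shadow stack up to target_ssp (a previously saved SSP),
 * popping entries in chunks because INCSSP consumes at most 255 at a
 * time.  If target_ssp lies beyond the top of the shadow stack mapping,
 * the loop eventually touches non-shadow-stack memory and faults --
 * that is the underflow case being discussed. */
static void shadow_stack_unwind_to(uint64_t target_ssp)
{
    uint64_t ssp = _rdsspq();            /* current shadow stack pointer */
    while (ssp < target_ssp) {
        uint64_t n = (target_ssp - ssp) / 8;
        if (n > 255)
            n = 255;
        _incsspq(n);                     /* pop n shadow stack entries */
        ssp = _rdsspq();
    }
}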
Maybe I'm misunderstanding. I thought the proposal included allowing shadow stack access to convert normal address ranges to shadow stack, and normal writes to convert shadow stack to normal.
As I understood the original discussion of the proposal on IRC, it was only one-way (from shadow to normal). Unless I'm missing something, making it one-way is necessary to catch situations where the shadow stack would become compromised.
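To make the direction explicit, the rule as I understood it would look something like the following sketch (names invented here, not any real kernel interface): once a page has been demoted by an ordinary write it can never be treated as shadow stack again, so corrupted memory cannot be silently re-blessed.

/* Illustrative only. */
enum page_use { PAGE_NORMAL, PAGE_SHADOW_STACK };

/* Ordinary write: shadow -> normal is allowed, and permanent. */
static enum page_use on_ordinary_write(enum page_use cur)
{
    (void)cur;
    return PAGE_NORMAL;
}

/* Shadow stack access: normal -> shadow is NOT allowed; an access in
 * shadow stack mode to memory already demoted to normal must fault. */
static int on_shadow_stack_access(enum page_use cur)
{
    return cur == PAGE_SHADOW_STACK ? 0 : -1 /* fault */;
}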
Shadow stacks currently have automatic guard gaps to try to prevent one thread from overflowing onto another thread's shadow stack. This would somewhat open that up, since the stack guard gaps are usually maintained by userspace for new threads. It would have to be thought through whether these could still be enforced with checking at additional spots.
I would think the existing guard pages would already do that if a thread's shadow stack is contiguous with its own data stack.
The difference is that the kernel provides the guard gaps, where this would rely on userspace to do it. It is not a showstopper either.
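If userspace ends up owning the layout, the thread library could place the gap itself when it carves up the region. A rough sketch, assuming the shadow stack lives in the same userspace-allocated block as the data stack under the conversion scheme being discussed (sizes and names made up for illustration):

#include <sys/mman.h>
#include <unistd.h>
#include <stddef.h>

/* One mapping holding, from low to high addresses:
 *   [ shadow stack | PROT_NONE guard | data stack ]
 * The PROT_NONE page plays the role the kernel's automatic guard gap
 * plays today: a data stack overflow faults on the guard before it can
 * reach the shadow stack, and the shadow stack is bounded by the guard
 * above and unmapped memory below. */
static int carve_thread_stacks(size_t stack_sz, size_t shstk_sz,
                               void **stack_out, void **shstk_out)
{
    long pg = sysconf(_SC_PAGESIZE);
    size_t total = shstk_sz + (size_t)pg + stack_sz;

    char *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return -1;

    if (mprotect(base + shstk_sz, pg, PROT_NONE)) {    /* guard page */
        munmap(base, total);
        return -1;
    }

    *shstk_out = base;                  /* becomes shadow stack on use */
    *stack_out = base + shstk_sz + pg;  /* data stack, grows down to guard */
    return 0;
}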
I think my biggest question on this is how does it change the capability for two threads to share a shadow stack. It might require some special rules around the syscall that writes restore tokens. So I'm not sure. It probably needs a POC.
Why would they be sharing a shadow stack?
From the musl side, I have always looked at the entirety of the shadow stack stuff with very heavy skepticism, and anything that breaks existing interface contracts, introduces places where apps can get auto-killed because a late resource allocation fails, or requires applications to code around the existence of something that should be an implementation detail, is a non-starter. To even consider shadow stack support, it must truly be fully non-breaking.
The manual assembly stack switching and the JIT code in apps need to be updated. I don't think there is a way around it.
Indeed, I'm not talking about programs with JIT/manual stack-switching asm, just anything using existing APIs for control of the stack -- pthread_attr_setstack, makecontext, sigaltstack, etc.
I agree though that the late allocation failures are not great. Mark is working on clone3 support which should allow moving the shadow stack allocation to happen in userspace with the normal stack. Even for riscv though, doesn't it need to update a new register in stack switching?
If clone is called with signals masked, it's probably not necessary for the kernel to set the shadow stack register as part of clone3.
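For reference, a sketch of what the clone3 direction might look like from userspace. The shadow_stack/shadow_stack_size fields are my reading of the in-progress series and may well change, so the struct is declared locally and should be treated as illustrative only:

#define _GNU_SOURCE
#include <sched.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>

/* Matches the current upstream clone_args layout, plus the proposed
 * shadow stack fields at the end (names are a guess, not uapi). */
struct clone_args_shstk {
    uint64_t flags;
    uint64_t pidfd;
    uint64_t child_tid;
    uint64_t parent_tid;
    uint64_t exit_signal;
    uint64_t stack;
    uint64_t stack_size;
    uint64_t tls;
    uint64_t set_tid;
    uint64_t set_tid_size;
    uint64_t cgroup;
    uint64_t shadow_stack;        /* userspace-allocated shadow stack base */
    uint64_t shadow_stack_size;
};

static long spawn_thread(void *stack, size_t stack_size,
                         void *shstk, size_t shstk_size)
{
    struct clone_args_shstk args;
    memset(&args, 0, sizeof(args));
    args.flags      = CLONE_VM | CLONE_FS | CLONE_FILES |
                      CLONE_SIGHAND | CLONE_THREAD;
    args.stack      = (uint64_t)(uintptr_t)stack;
    args.stack_size = stack_size;
    args.shadow_stack      = (uint64_t)(uintptr_t)shstk;
    args.shadow_stack_size = shstk_size;

    /* Passing the larger size is how clone3 learns the extension exists;
     * a kernel without it returns E2BIG rather than silently ignoring
     * the new fields. */
    return syscall(SYS_clone3, &args, sizeof(args));
}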
BTW, x86 shadow stack has a mode where the shadow stack is writable with a special instruction (WRSS). It enables the SSP to be set arbitrarily by writing restore tokens. We discussed this as an option to make the existing longjmp() and signal stuff work more transparently for glibc.
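For anyone not following the x86 details: WRSS has to be opted into, and then ordinary code can store to shadow stack memory, which is what lets longjmp/setcontext-style code plant a restore token and jump the SSP rather than INCSSP-ing its way back. A minimal sketch, assuming the upstream arch_prctl ARCH_SHSTK_* interface and the _wrssq intrinsic (-mshstk); illustrative only:

#include <immintrin.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>

#ifndef ARCH_SHSTK_ENABLE
#define ARCH_SHSTK_ENABLE 0x5001   /* x86 arch_prctl, per the upstream series */
#define ARCH_SHSTK_WRSS   (1ULL << 1)
#endif

/* Opt in to WRSS once at startup.  Fails if the kernel (or loader) has
 * already locked the shadow stack feature set for this thread. */
static int enable_wrss(void)
{
    return syscall(SYS_arch_prctl, ARCH_SHSTK_ENABLE, ARCH_SHSTK_WRSS);
}

/* With WRSS enabled, write a restore token at an arbitrary shadow stack
 * slot so a later RSTORSSP can switch the SSP there.  Token encoding
 * (address just above the token, low bit = 64-bit mode) is shown only
 * to illustrate the mechanism. */
static void write_restore_token(uint64_t *slot)
{
    _wrssq(((uint64_t)(uintptr_t)(slot + 1)) | 1, slot);
}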
_Without_ doing this, sigaltstack cannot be used to recover from stack overflows if the shadow stack limit is reached first, and makecontext cannot be supported without memory leaks and unreportable error conditions.
FWIW, I think the makecontext() shadow stack leaking is a bad idea. I would prefer the existing makecontext() interface just didn't support shadow stack, rather than the leaking solution glibc does today.
AIUI the proposal by Stefan makes it non-leaking because it's just using normal memory that reverts to normal usage on any non-shadow-stack access.
Right, but does it break any existing apps anyway (because of small ucontext stack sizes)?
BTW, when I talk about "not supporting" I don't mean the app should crash. I mean it should instead run normally, just without shadow stack enabled. Not sure if that was clear. Shadow stack is not essential for an application to function; it is only security hardening on top.
That said, determining whether an application supports shadow stack has turned out to be difficult in practice. Handling dlopen() is especially hard.
One reasonable thing to do, that might be preferable to overengineered solutions, is to disable shadow-stack process-wide if an interface incompatible with it is used (sigaltstack, pthread_create with an attribute setup using pthread_attr_setstack, makecontext, etc.), as well as if an incompatible library is dlopened. This is much more acceptable than continuing to run with shadow stacks managed sloppily by the kernel and async-killing the process on OOM, and is probably *more compatible* with apps than changing the minimum stack size requirements out from under them.
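On x86 at least, the mechanism for that already exists in the form of the arch_prctl interface. A minimal sketch of the bail-out path, assuming the feature has not been locked by the time the incompatible interface is hit; note the prctl acts on the calling thread, so a real implementation also has to deal with already-running threads (the wrapper name below is made up):

#include <sys/syscall.h>
#include <unistd.h>
#include <signal.h>

#ifndef ARCH_SHSTK_DISABLE
#define ARCH_SHSTK_DISABLE 0x5002   /* x86 arch_prctl, per the upstream series */
#define ARCH_SHSTK_SHSTK   (1ULL << 0)
#endif

/* Drop shadow stack the first time an interface we cannot support with
 * it (sigaltstack, makecontext, pthread_attr_setstack stacks, an
 * incompatible dlopen) is actually used. */
static void shstk_bail_out(void)
{
    syscall(SYS_arch_prctl, ARCH_SHSTK_DISABLE, ARCH_SHSTK_SHSTK);
}

/* Hypothetical libc-internal wrapper, only to show where the hook sits. */
int sigaltstack_wrapper(const stack_t *ss, stack_t *old)
{
    shstk_bail_out();
    return sigaltstack(ss, old);
}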
The place where it's really needed to be able to allocate the shadow stack synchronously under userspace control, in order to harden normal applications that aren't doing funny things, is in pthread_create without a caller-provided stack.
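Concretely, with the syscall that exists today on x86 that would mean pthread_create doing roughly the following when the caller did not supply a stack, so an allocation failure surfaces as a normal pthread_create error instead of a late kill. The helper below is hypothetical and simplified:

#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stddef.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453           /* x86_64 */
#endif
#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN (1ULL << 0)  /* place a restore token at the top */
#endif

/* Called from a hypothetical pthread_create internal path: allocate the
 * shadow stack up front, so failure can be reported as EAGAIN from
 * pthread_create rather than the kernel failing a lazy allocation later
 * and the process getting killed. */
static void *alloc_thread_shadow_stack(size_t size)
{
    void *ssp = (void *)syscall(__NR_map_shadow_stack, 0, size,
                                SHADOW_STACK_SET_TOKEN);
    return ssp == MAP_FAILED ? NULL : ssp;
}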
Rich