On Thu, Oct 26, 2023 at 05:10:47PM +0000, Edgecombe, Rick P wrote:
On Mon, 2023-10-23 at 19:32 +0100, Mark Brown wrote:
Right. We're already adding the cost of the extra map_shadow_stack() so it doesn't seem that far out of scope. We could also allow clone3() to be used for allocation, potentially removing the ability to specify the address entirely and only specifying the size. I did consider that option, but it felt awkward in the API, though equally the whole shadow stack allocation scheme is a bit that way. That would avoid the concerns about placing and validating tokens entirely, but it gives less control to userspace.
There is also cost in the form of extra complexity. Not to throw FUD, but GUP has been the source of thorny problems. And here we would be doing it around security races. We're probably helped that shadow stack is only private/anonymous memory, so maybe it's enough of a normal case to not worry about it.
Still, there is some extra complexity, and I'm not sure if we really need it. The justification seems to mostly be that it's not as flexible as normal stacks with clone3.
I definitely agree on the complexity. Trying to validate a token is going to be more code doing fiddly things, and there's always the risk that something will change around it and invalidate assumptions the code makes. Particularly given my inability to test x86, I'm certainly much happier pushing this series forward implementing size only than I am doing token validation.
I don't understand why doing size-only is awkward. Just because it doesn't match the regular stack clone3 semantics?
Basically, yes: for the normal stack we don't allocate userspace pages in clone3(), and we do offer userspace control over where to place things. There was some grumbling about this in the current ABI from the arm64 side, though the limited control over the size is really the bigger issue.
I'm not sure placement control is essential, but the other part of it is the freeing of the shadow stack. Especially if userspace is doing stack switches, the current behaviour where we free the stack when the thread is exiting doesn't feel great. It's mainly an issue for programs that pivot stacks, which isn't the common case, but it is a general sharp edge.
This also doesn't do anything to stop anyone trying to allocate sub-page shadow stacks if they're trying to save memory, with all the lack of overrun protection that implies. That seems to me to be much more of a deliberate decision that people might make, though. A token would prevent that too, unless write access to the shadow stack is enabled.
Sorry, I'm not following. Sub-page shadow stacks?
If someone decides to allocate a page of shadow stack, then point thread A at the first half of the page and thread B at the second half, nothing would stop them. There are obvious problems with this, but I can see someone trying it on a system that creates lots of threads and is memory constrained.
+	/*
+	 * This doesn't validate that the addresses are mapped
+	 * VM_SHADOW_STACK, just that they're mapped at all.
+	 */
It just checks the range, right?
Yes, same check as for the normal stack.
What looked wrong is that the comment says it checks that the addresses are mapped, while the code just does access_ok(). It's a minor thing in any case.
Oh, I see, yes.