On Thu, Jan 4, 2024 at 3:47 PM Randy Dunlap <rdunlap@infradead.org> wrote:
On 1/4/24 10:51, jeffxu@chromium.org wrote:
From: Jeff Xu <jeffxu@chromium.org>
Add documentation for mseal().
Signed-off-by: Jeff Xu <jeffxu@chromium.org>
 Documentation/userspace-api/mseal.rst | 181 ++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)
 create mode 100644 Documentation/userspace-api/mseal.rst
diff --git a/Documentation/userspace-api/mseal.rst b/Documentation/userspace-api/mseal.rst
new file mode 100644
index 000000000000..1700ce5af218
--- /dev/null
+++ b/Documentation/userspace-api/mseal.rst
@@ -0,0 +1,181 @@
+.. SPDX-License-Identifier: GPL-2.0
+=====================
+Introduction of mseal
+=====================
+:Author: Jeff Xu <jeffxu@chromium.org>
+Modern CPUs support memory permissions such as the RW and NX bits. The
+memory permission feature improves the security stance on memory
+corruption bugs: the attacker can't just write to arbitrary memory and
+point the code to it; the memory has to be marked with the X bit, or
+else an exception will happen.
+Memory sealing additionally protects the mapping itself against
+modifications. This is useful to mitigate memory corruption issues where
+a corrupted pointer is passed to a memory management system. For
+example, such an attacker primitive can break control-flow integrity
+guarantees, since read-only memory that is supposed to be trusted can
+become writable, or .text pages can get remapped. Memory sealing can be
+applied automatically by the runtime loader to seal .text and .rodata
+pages, and applications can additionally seal security-critical data at
+runtime.
+A similar feature already exists in the XNU kernel with the
+VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable
+syscall [2].
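
To make the threat model concrete, here is a minimal userspace sketch of
sealing a read-only page so that a later permission change is rejected.
It assumes the mseal() semantics as eventually merged (syscall number
462, Linux 6.10+); the my_mseal() wrapper is only an illustration, since
libc may not provide one:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462	/* assumption: the merged syscall number */
#endif

static int my_mseal(void *addr, size_t len, unsigned long flags)
{
	return syscall(__NR_mseal, addr, len, flags);
}

int main(void)
{
	size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

	/* One read-only page, standing in for trusted .rodata. */
	void *p = mmap(NULL, pagesz, PROT_READ,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	if (my_mseal(p, pagesz, 0))	/* flags unused here */
		return 1;

	/* A corrupted-pointer "attack": try to make the page writable.
	 * On a sealed mapping this fails instead of succeeding. */
	if (mprotect(p, pagesz, PROT_READ | PROT_WRITE) == -1)
		printf("mprotect blocked: %s\n", strerror(errno));

	return 0;
}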
+User API
+========
+
+Two system calls are involved in virtual memory sealing: mseal() and
+mmap().
+mseal()
+-----------
+
+The mseal() syscall has the following signature:
+``int mseal(void *addr, size_t len, unsigned long flags)``
+**addr/len**: virtual memory address range.
+The address range set by ``addr``/``len`` must meet:
+- The start address must be in an allocated VMA.
+- The start address must be page aligned.
+- The end address (``addr`` + ``len``) must be in an allocated VMA.
+- There must be no gap (unallocated memory) between the start and end
+  address (see the sketch below).
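
A minimal sketch of a call that meets all of the rules above, assuming a
kernel with the merged mseal() (syscall number 462) and no libc wrapper:

#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462	/* assumption: the merged syscall number */
#endif

int main(void)
{
	size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);

	/* A single mmap() call creates one VMA, so both the start and
	 * the end address are in an allocated VMA with no gap between
	 * them, and mmap() returns a page-aligned address. */
	void *base = mmap(NULL, 2 * pagesz, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return 1;

	return syscall(__NR_mseal, base, 2 * pagesz, 0) ? 1 : 0;
}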
+The ``len`` will be page aligned implicitly by the kernel.
Does that mean that the <len> will be extended to be page aligned if it's not already page aligned?
Yes. The code (do_mseal()) calls PAGE_ALIGNED(len); mprotect() also has
this behavior.
Two test cases cover this part:

  test_seal_mprotect_unalign_len
  test_seal_mprotect_unalign_len_variant_2
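
For illustration only (this is not the kselftest code), a sketch of what
such a test exercises, again assuming syscall number 462: pass an
unaligned len and observe that the kernel seals the whole page.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462	/* assumption: the merged syscall number */
#endif

int main(void)
{
	size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
	void *p = mmap(NULL, pagesz, PROT_READ,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* len is not page aligned; the kernel rounds it up to pagesz. */
	if (syscall(__NR_mseal, p, pagesz / 2, 0))
		return 1;

	/* The whole page is sealed, not just the first len bytes, so
	 * changing its permissions fails. */
	if (mprotect(p, pagesz, PROT_READ | PROT_WRITE) == -1)
		printf("whole page sealed: %s\n", strerror(errno));
	return 0;
}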
-Jeff
--
#Randy