From: Jeff Xu <jeffxu@chromium.org>
This is the V10 version; it rebases the v9 patch onto 6.9-rc3. We have also applied and tested mseal() in Chrome and on a Chromebook.
------------------------------------------------------------------
This patchset proposes a new mseal() syscall for the Linux kernel.
In a nutshell, mseal() protects the VMAs of a given virtual memory range against modifications, such as changes to their permission bits.
Modern CPUs support memory permissions, such as the read/write (RW) and no-execute (NX) bits. Linux has supported NX since the release of kernel version 2.6.8 in August 2004 [1]. The memory permission feature improves the security stance on memory corruption bugs: an attacker cannot simply write code to arbitrary memory and redirect execution to it, because the memory must be marked with the X bit or an exception will occur. Internally, the kernel maintains the memory permissions in a data structure called a VMA (vm_area_struct). mseal() additionally protects the VMA itself against modifications of the selected seal type.
Memory sealing is useful to mitigate memory corruption issues where a corrupted pointer is passed to a memory management system. For example, such an attacker primitive can break control-flow integrity guarantees since read-only memory that is supposed to be trusted can become writable or .text pages can get remapped. Memory sealing can automatically be applied by the runtime loader to seal .text and .rodata pages and applications can additionally seal security critical data at runtime. A similar feature already exists in the XNU kernel with the VM_FLAGS_PERMANENT [3] flag and on OpenBSD with the mimmutable syscall [4]. Also, Chrome wants to adopt this feature for their CFI work [2] and this patchset has been designed to be compatible with the Chrome use case.
Two system calls are involved in sealing the map: mmap() and mseal().
The new mseal() is a syscall available on 64-bit CPUs, with the following signature:
int mseal(void *addr, size_t len, unsigned long flags)

addr/len: memory range.
flags: reserved.
mseal() blocks the following operations for the given memory range (see the usage sketch after this list):
1> Unmapping, moving to another location, and shrinking the size, via munmap() and mremap(): these can leave an empty space that could then be filled by a VMA with a new set of attributes.
2> Moving or expanding a different VMA into the current location, via mremap().
3> Modifying a VMA via mmap(MAP_FIXED).
4> Size expansion, via mremap(), does not appear to pose any specific risks to sealed VMAs. It is included anyway because the use case is unclear. In any case, users can rely on merging to expand a sealed VMA.
5> mprotect() and pkey_mprotect().
6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED) for anonymous memory, when users don't have write permission to the memory. Those behaviors can alter region contents by discarding pages, effectively a memset(0) for anonymous memory.
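For illustration, here is a minimal usage sketch (not part of this series): it assumes a kernel with these patches applied and uses syscall(2) directly, since there is no libc wrapper yet; the number 462 is the one proposed here.

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462          /* number proposed by this series */
#endif

int main(void)
{
        size_t len = getpagesize();
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        /* seal the VMA; flags is reserved and must be 0 */
        if (syscall(__NR_mseal, p, len, 0))
                return 1;

        /* both calls are expected to fail with EPERM once sealed */
        if (mprotect(p, len, PROT_READ))
                printf("mprotect: %s\n", strerror(errno));
        if (munmap(p, len))
                printf("munmap: %s\n", strerror(errno));
        return 0;
}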
The idea that inspired this patch comes from Stephen Röttger’s work in V8 CFI [5]. The Chrome browser on ChromeOS will be the first user of this API.
Indeed, the Chrome browser has very specific requirements for sealing, which are distinct from those of most applications. For example, in the case of libc, sealing is only applied to read-only (RO) or read-execute (RX) memory segments (such as .text and RELRO) to prevent them from becoming writable; the lifetime of those mappings is tied to the lifetime of the process.
Chrome wants to seal two large address space reservations that are managed by different allocators. The memory is mapped RW- and RWX respectively, but write access to it is restricted using pkeys (or, in the future, ARM permission overlay extensions). The lifetime of those mappings is not tied to the lifetime of the process; therefore, while the memory is sealed, the allocators still need to free or discard the unused memory, for example with madvise(MADV_DONTNEED).
However, always allowing madvise(DONTNEED) on this range poses a security risk. For example if a jump instruction crosses a page boundary and the second page gets discarded, it will overwrite the target bytes with zeros and change the control flow. Checking write-permission before the discard operation allows us to control when the operation is valid. In this case, the madvise will only succeed if the executing thread has PKEY write permissions and PKRU changes are protected in software by control-flow integrity.
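To make the pattern concrete, here is a hedged sketch (hypothetical code, not from the Chrome tree; it assumes x86 pkey hardware, the glibc pkey_* wrappers, and a kernel with this series applied; POOL_SIZE and the chunk argument are placeholders):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462
#endif
#define POOL_SIZE (64UL << 20)  /* hypothetical reservation size */

static void *pool;
static int pkey;

static void pool_init(void)
{
        pkey = pkey_alloc(0, 0);
        pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                    MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        /* tag the reservation with the pkey, then seal it for good */
        pkey_mprotect(pool, POOL_SIZE, PROT_READ | PROT_WRITE, pkey);
        syscall(__NR_mseal, pool, POOL_SIZE, 0);
        /* keep write access revoked outside allocator critical sections */
        pkey_set(pkey, PKEY_DISABLE_WRITE);
}

static void pool_discard(void *chunk, size_t len)
{
        /* grant write through the pkey so the write-permission check
         * on the sealed anonymous VMA passes, then revoke it again */
        pkey_set(pkey, 0);
        madvise(chunk, len, MADV_DONTNEED);
        pkey_set(pkey, PKEY_DISABLE_WRITE);
}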
Although the initial version of this patch series is targeting the Chrome browser as its first user, it became evident during upstream discussions that we would also want to ensure that the patch set eventually is a complete solution for memory sealing and compatible with other use cases. The specific scenario currently in mind is glibc's use case of loading and sealing ELF executables. To this end, Stephen is working on a change to glibc to add sealing support to the dynamic linker, which will seal all non-writable segments at startup. Once this work is completed, all applications will be able to automatically benefit from these new protections.
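As a rough userspace approximation of what the loader change amounts to (a sketch under stated assumptions: the real glibc work seals from inside the dynamic linker itself, and the seal_ro_segments() helper below is hypothetical):

#define _GNU_SOURCE
#include <link.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462
#endif

/* seal every non-writable PT_LOAD segment of every loaded object */
static int seal_ro_segments(struct dl_phdr_info *info, size_t size, void *data)
{
        uintptr_t mask = ~((uintptr_t)getpagesize() - 1);

        for (int i = 0; i < info->dlpi_phnum; i++) {
                const ElfW(Phdr) *ph = &info->dlpi_phdr[i];
                uintptr_t start, end;

                if (ph->p_type != PT_LOAD || (ph->p_flags & PF_W))
                        continue;

                start = (info->dlpi_addr + ph->p_vaddr) & mask;
                end = info->dlpi_addr + ph->p_vaddr + ph->p_memsz;
                syscall(__NR_mseal, (void *)start, end - start, 0);
        }
        return 0;
}

/* usage, early in main(): dl_iterate_phdr(seal_ro_segments, NULL); */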
In closing, I would like to formally acknowledge the valuable contributions received during the RFC process, which were instrumental in shaping this patch:
Jann Horn: raising awareness and providing valuable insights on the destructive madvise operations.
Liam R. Howlett: perf optimization.
Linus Torvalds: assisting in defining the system call signature and scope.
Theo de Raadt: sharing the experience and insight gained from implementing mimmutable() in OpenBSD.
MM perf benchmarks
==================

This patch adds a loop in the mprotect/munmap/madvise(MADV_DONTNEED) paths to check the VMAs' sealing flag, so that no partial update is made when any segment within the given memory range is sealed.
To measure the performance impact of this loop, two tests were developed [8].

The first measures the time taken by a particular system call, using clock_gettime(CLOCK_MONOTONIC). The second uses PERF_COUNT_HW_REF_CPU_CYCLES (excluding user space). Both tests show similar results.
The tests have roughly the following sequence:

    for (i = 0; i < 1000; i++)
        create 1000 mappings (1 page per VMA)
        start the sampling
        for (j = 0; j < 1000; j++)
            mprotect one mapping
        stop and save the sample
        delete 1000 mappings
    calculate all samples
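For reference, the first method boils down to something like the sketch below (illustrative only; the actual benchmark code is in [8]):

#include <sys/mman.h>
#include <time.h>

/* time one mprotect() call in nanoseconds using CLOCK_MONOTONIC */
static long time_mprotect_ns(void *addr, size_t len, int prot)
{
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        mprotect(addr, len, prot);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        return (t1.tv_sec - t0.tv_sec) * 1000000000L +
               (t1.tv_nsec - t0.tv_nsec);
}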
The tests below were performed on an Intel(R) Pentium(R) Gold 7505 @ 2.00GHz Chromebook with 4G of memory.
Based on the latest upstream code:

The first test (measuring time):

syscall__ vmas      t  t_mseal  delta_ns  per_vma  %
munmap__    1     909      944        35       35  104%
munmap__    2    1398     1502       104       52  107%
munmap__    4    2444     2594       149       37  106%
munmap__    8    4029     4323       293       37  107%
munmap__   16    6647     6935       288       18  104%
munmap__   32   11811    12398       587       18  105%
mprotect    1     439      465        26       26  106%
mprotect    2    1659     1745        86       43  105%
mprotect    4    3747     3889       142       36  104%
mprotect    8    6755     6969       215       27  103%
mprotect   16   13748    14144       396       25  103%
mprotect   32   27827    28969      1142       36  104%
madvise_    1     240      262        22       22  109%
madvise_    2     366      442        76       38  121%
madvise_    4     623      751       128       32  121%
madvise_    8    1110     1324       215       27  119%
madvise_   16    2127     2451       324       20  115%
madvise_   32    4109     4642       534       17  113%
The second test (measuring cpu cycles):

syscall__ vmas    cpu  cmseal  delta_cpu  per_vma  %
munmap__    1    1790    1890        100      100  106%
munmap__    2    2819    3033        214      107  108%
munmap__    4    4959    5271        312       78  106%
munmap__    8    8262    8745        483       60  106%
munmap__   16   13099   14116       1017       64  108%
munmap__   32   23221   24785       1565       49  107%
mprotect    1     906     967         62       62  107%
mprotect    2    3019    3203        184       92  106%
mprotect    4    6149    6569        420      105  107%
mprotect    8    9978   10524        545       68  105%
mprotect   16   20448   21427        979       61  105%
mprotect   32   40972   42935       1963       61  105%
madvise_    1     434     497         63       63  115%
madvise_    2     752     899        147       74  120%
madvise_    4    1313    1513        200       50  115%
madvise_    8    2271    2627        356       44  116%
madvise_   16    4312    4883        571       36  113%
madvise_   32    8376    9319        943       29  111%
Based on these results, for the 6.8 kernel the sealing check adds 20-40 nanoseconds, or around 50-100 CPU cycles, per VMA.
In addition, I applied the sealing to the 5.10 kernel:

The first test (measuring time):

syscall__ vmas     t  tmseal  delta_ns  per_vma  %
munmap__    1    357     390        33       33  109%
munmap__    2    442     463        21       11  105%
munmap__    4    614     634        20        5  103%
munmap__    8   1017    1137       120       15  112%
munmap__   16   1889    2153       263       16  114%
munmap__   32   4109    4088       -21       -1   99%
mprotect    1    235     227        -7       -7   97%
mprotect    2    495     464       -30      -15   94%
mprotect    4    741     764        24        6  103%
mprotect    8   1434    1437         2        0  100%
mprotect   16   2958    2991        33        2  101%
mprotect   32   6431    6608       177        6  103%
madvise_    1    191     208        16       16  109%
madvise_    2    300     324        24       12  108%
madvise_    4    450     473        23        6  105%
madvise_    8    753     806        53        7  107%
madvise_   16   1467    1592       125        8  108%
madvise_   32   2795    3405       610       19  122%

The second test (measuring cpu cycles):

syscall__ nbr_vma    cpu  cmseal  delta_cpu  per_vma  %
munmap__      1      684     715         31       31  105%
munmap__      2      861     898         38       19  104%
munmap__      4     1183    1235         51       13  104%
munmap__      8     1999    2045         46        6  102%
munmap__     16     3839    3816        -23       -1   99%
munmap__     32     7672    7887        216        7  103%
mprotect      1      397     443         46       46  112%
mprotect      2      738     788         50       25  107%
mprotect      4     1221    1256         35        9  103%
mprotect      8     2356    2429         72        9  103%
mprotect     16     4961    4935        -26       -2   99%
mprotect     32     9882   10172        291        9  103%
madvise_      1      351     380         29       29  108%
madvise_      2      565     615         49       25  109%
madvise_      4      872     933         61       15  107%
madvise_      8     1508    1640        132       16  109%
madvise_     16     3078    3323        245       15  108%
madvise_     32     5893    6704        811       25  114%
For the 5.10 kernel, the sealing check adds 0-15 ns in time, or 10-30 CPU cycles; there is even a decrease in some cases.
It might be interesting to compare the 5.10 and 6.8 kernels:

The first test (measuring time):

syscall__ vmas  t_5_10   t_6_8  delta_ns  per_vma  %
munmap__    1      357     909       552      552  254%
munmap__    2      442    1398       956      478  316%
munmap__    4      614    2444      1830      458  398%
munmap__    8     1017    4029      3012      377  396%
munmap__   16     1889    6647      4758      297  352%
munmap__   32     4109   11811      7702      241  287%
mprotect    1      235     439       204      204  187%
mprotect    2      495    1659      1164      582  335%
mprotect    4      741    3747      3006      752  506%
mprotect    8     1434    6755      5320      665  471%
mprotect   16     2958   13748     10790      674  465%
mprotect   32     6431   27827     21397      669  433%
madvise_    1      191     240        49       49  125%
madvise_    2      300     366        67       33  122%
madvise_    4      450     623       173       43  138%
madvise_    8      753    1110       357       45  147%
madvise_   16     1467    2127       660       41  145%
madvise_   32     2795    4109      1314       41  147%
The second test (measuring cpu cycles):

syscall__ vmas  cpu_5_10   c_6_8  delta_cpu  per_vma  %
munmap__    1        684    1790       1106     1106  262%
munmap__    2        861    2819       1958      979  327%
munmap__    4       1183    4959       3776      944  419%
munmap__    8       1999    8262       6263      783  413%
munmap__   16       3839   13099       9260      579  341%
munmap__   32       7672   23221      15549      486  303%
mprotect    1        397     906        509      509  228%
mprotect    2        738    3019       2281     1140  409%
mprotect    4       1221    6149       4929     1232  504%
mprotect    8       2356    9978       7622      953  423%
mprotect   16       4961   20448      15487      968  412%
mprotect   32       9882   40972      31091      972  415%
madvise_    1        351     434         82       82  123%
madvise_    2        565     752        186       93  133%
madvise_    4        872    1313        442      110  151%
madvise_    8       1508    2271        763       95  151%
madvise_   16       3078    4312       1234       77  140%
madvise_   32       5893    8376       2483       78  142%
From 5.10 to 6.8:
munmap:   added 250-550 ns in time, or 500-1100 cpu cycles, per vma.
mprotect: added 200-750 ns in time, or 500-1200 cpu cycles, per vma.
madvise:  added 33-50 ns in time, or 70-110 cpu cycles, per vma.
In comparison to mseal, which adds 20-40 ns or 50-100 CPU cycles, the increase from 5.10 to 6.8 is significantly larger, approximately ten times greater for munmap and mprotect.
When I discussed the MM performance with Brian Makin, an engineer who has worked on performance, it was brought to my attention that such performance benchmarks, which measure millions of MM syscalls in a tight loop, may not accurately reflect real-world scenarios, such as that of a database service. Also, this was tested on a single piece of hardware running ChromeOS; the data from other hardware or distributions might differ. It might be best to take this data with a grain of salt.
Change history:
===============
V10:
- rebase to 6.9-rc3 (no code change, resolve conflict only)
- Stephen Röttger applied mseal() in Chrome code, and I tested it on a chromebook; mseal() is working as designed.
V9:
- remove mmap(PROT_SEAL) and mmap(MAP_SEALABLE) (Linus, Theo de Raadt)
- update mseal_test to check for prot bit (Liam R. Howlett)
- update documentation to give more detail on sealing check (Liam R. Howlett)
- add seal_elf test.
- add performance measure data.
- mseal_test: fix arm build.
https://lore.kernel.org/all/20240214151130.616240-1-jeffxu@chromium.org/
V8:
- perf optimization in mmap. (Liam R. Howlett)
- add one testcase (test_seal_zero_address)
- update mseal.rst to add note for MAP_SEALABLE.
https://lore.kernel.org/lkml/20240131175027.3287009-1-jeffxu@chromium.org/
V7:
- fix index.rst (Randy Dunlap)
- fix arm build (Randy Dunlap)
- return EPERM for blocked operations (Theo de Raadt)
https://lore.kernel.org/linux-mm/20240122152905.2220849-2-jeffxu@chromium.or...
V6:
- Drop RFC from subject, given Linus's general approval.
- Adjust syscall number for mseal (main Jan.11/2024)
- Code style fix (Matthew Wilcox)
- selftest: use ksft macros (Muhammad Usama Anjum)
- Document fix. (Randy Dunlap)
https://lore.kernel.org/all/20240111234142.2944934-1-jeffxu@chromium.org/
V5:
- fix build issue in mseal-Wire-up-mseal-syscall (Suggested by Linus Torvalds and Greg KH)
- updates on selftest.
https://lore.kernel.org/lkml/20240109154547.1839886-1-jeffxu@chromium.org/#r
V4:
(Suggested by Linus Torvalds)
- new signature: mseal(start, len, flags)
- 32 bit is not supported. vm_seal is removed, use vm_flags instead.
- single bit in vm_flags for sealed state.
- CONFIG_MSEAL kernel config is removed.
- single bit of PROT_SEAL in the "prot" field of mmap().
Other changes:
- update selftest (Suggested by Muhammad Usama Anjum)
- update documentation.
https://lore.kernel.org/all/20240104185138.169307-1-jeffxu@chromium.org/
V3:
- Abandon per-syscall approach. (Suggested by Linus Torvalds)
- Organize sealing types around their functionality, such as MM_SEAL_BASE, MM_SEAL_PROT_PKEY.
- Extend the scope of sealing from calls originated in userspace to both kernel and userspace. (Suggested by Linus Torvalds)
- Add seal type support in mmap(). (Suggested by Pedro Falcato)
- Add a new sealing type: MM_SEAL_DISCARD_RO_ANON to prevent destructive operations of madvise. (Suggested by Jann Horn and Stephen Röttger)
- Make sealed VMAs mergeable. (Suggested by Jann Horn)
- Add MAP_SEALABLE to mmap()
- Add documentation - mseal.rst
https://lore.kernel.org/linux-mm/20231212231706.2680890-2-jeffxu@chromium.or...
v2:
- Use _BITUL to define MM_SEAL_XX type.
- Use unsigned long for seal type in sys_mseal() and other functions.
- Remove internal VM_SEAL_XX type and convert_user_seal_type().
- Remove MM_ACTION_XX type.
- Remove caller_origin(ON_BEHALF_OF_XX) and replace with sealing bitmask.
- Add more comments in code.
- Add a detailed commit message.
https://lore.kernel.org/lkml/20231017090815.1067790-1-jeffxu@chromium.org/
v1: https://lore.kernel.org/lkml/20231016143828.647848-1-jeffxu@chromium.org/
----------------------------------------------------------------
[1] https://kernelnewbies.org/Linux_2_6_8
[2] https://v8.dev/blog/control-flow-integrity
[3] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9...
[4] https://man.openbsd.org/mimmutable.2
[5] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgea...
[6] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfU...
[7] https://lore.kernel.org/lkml/20230515130553.2311248-1-jeffxu@chromium.org/
[8] https://github.com/peaktocreek/mmperf
Jeff Xu (5):
  mseal: Wire up mseal syscall
  mseal: add mseal syscall
  selftest mm/mseal memory sealing
  mseal: add documentation
  selftest mm/mseal read-only elf memory segment
 Documentation/userspace-api/index.rst       |    1 +
 Documentation/userspace-api/mseal.rst       |  199 ++
 arch/alpha/kernel/syscalls/syscall.tbl      |    1 +
 arch/arm/tools/syscall.tbl                  |    1 +
 arch/arm64/include/asm/unistd.h             |    2 +-
 arch/arm64/include/asm/unistd32.h           |    2 +
 arch/m68k/kernel/syscalls/syscall.tbl       |    1 +
 arch/microblaze/kernel/syscalls/syscall.tbl |    1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl   |    1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl   |    1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl   |    1 +
 arch/parisc/kernel/syscalls/syscall.tbl     |    1 +
 arch/powerpc/kernel/syscalls/syscall.tbl    |    1 +
 arch/s390/kernel/syscalls/syscall.tbl       |    1 +
 arch/sh/kernel/syscalls/syscall.tbl         |    1 +
 arch/sparc/kernel/syscalls/syscall.tbl      |    1 +
 arch/x86/entry/syscalls/syscall_32.tbl      |    1 +
 arch/x86/entry/syscalls/syscall_64.tbl      |    1 +
 arch/xtensa/kernel/syscalls/syscall.tbl     |    1 +
 include/linux/syscalls.h                    |    1 +
 include/uapi/asm-generic/unistd.h           |    5 +-
 kernel/sys_ni.c                             |    1 +
 mm/Makefile                                 |    4 +
 mm/internal.h                               |   37 +
 mm/madvise.c                                |   12 +
 mm/mmap.c                                   |   31 +-
 mm/mprotect.c                               |   10 +
 mm/mremap.c                                 |   31 +
 mm/mseal.c                                  |  307 ++++
 tools/testing/selftests/mm/.gitignore       |    2 +
 tools/testing/selftests/mm/Makefile         |    2 +
 tools/testing/selftests/mm/mseal_test.c     | 1836 +++++++++++++++++++
 tools/testing/selftests/mm/seal_elf.c       |  183 ++
 33 files changed, 2678 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/userspace-api/mseal.rst
 create mode 100644 mm/mseal.c
 create mode 100644 tools/testing/selftests/mm/mseal_test.c
 create mode 100644 tools/testing/selftests/mm/seal_elf.c
From: Jeff Xu <jeffxu@chromium.org>
Wire up mseal syscall for all architectures.
Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 arch/alpha/kernel/syscalls/syscall.tbl      | 1 +
 arch/arm/tools/syscall.tbl                  | 1 +
 arch/arm64/include/asm/unistd.h             | 2 +-
 arch/arm64/include/asm/unistd32.h           | 2 ++
 arch/m68k/kernel/syscalls/syscall.tbl       | 1 +
 arch/microblaze/kernel/syscalls/syscall.tbl | 1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl   | 1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl   | 1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl   | 1 +
 arch/parisc/kernel/syscalls/syscall.tbl     | 1 +
 arch/powerpc/kernel/syscalls/syscall.tbl    | 1 +
 arch/s390/kernel/syscalls/syscall.tbl       | 1 +
 arch/sh/kernel/syscalls/syscall.tbl         | 1 +
 arch/sparc/kernel/syscalls/syscall.tbl      | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl      | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl      | 1 +
 arch/xtensa/kernel/syscalls/syscall.tbl     | 1 +
 include/uapi/asm-generic/unistd.h           | 5 ++++-
 kernel/sys_ni.c                             | 1 +
 19 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
index 8ff110826ce2..d8f96362e9f8 100644
--- a/arch/alpha/kernel/syscalls/syscall.tbl
+++ b/arch/alpha/kernel/syscalls/syscall.tbl
@@ -501,3 +501,4 @@
 569  common  lsm_get_self_attr  sys_lsm_get_self_attr
 570  common  lsm_set_self_attr  sys_lsm_set_self_attr
 571  common  lsm_list_modules   sys_lsm_list_modules
+572  common  mseal              sys_mseal
diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
index b6c9e01e14f5..2ed7d229c8f9 100644
--- a/arch/arm/tools/syscall.tbl
+++ b/arch/arm/tools/syscall.tbl
@@ -475,3 +475,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index 491b2b9bd553..1346579f802f 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -39,7 +39,7 @@
 #define __ARM_NR_compat_set_tls  (__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END      (__ARM_NR_COMPAT_BASE + 0x800)
 
-#define __NR_compat_syscalls     462
+#define __NR_compat_syscalls     463
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 7118282d1c79..266b96acc014 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -929,6 +929,8 @@ __SYSCALL(__NR_lsm_get_self_attr, sys_lsm_get_self_attr)
 __SYSCALL(__NR_lsm_set_self_attr, sys_lsm_set_self_attr)
 #define __NR_lsm_list_modules 461
 __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules)
+#define __NR_mseal 462
+__SYSCALL(__NR_mseal, sys_mseal)
 
 /*
  * Please add new compat syscalls above this comment and update
diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
index 7fd43fd4c9f2..22a3cbd4c602 100644
--- a/arch/m68k/kernel/syscalls/syscall.tbl
+++ b/arch/m68k/kernel/syscalls/syscall.tbl
@@ -461,3 +461,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
index b00ab2cabab9..2b81a6bd78b2 100644
--- a/arch/microblaze/kernel/syscalls/syscall.tbl
+++ b/arch/microblaze/kernel/syscalls/syscall.tbl
@@ -467,3 +467,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl
index 83cfc9eb6b88..cc869f5d5693 100644
--- a/arch/mips/kernel/syscalls/syscall_n32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n32.tbl
@@ -400,3 +400,4 @@
 459  n32  lsm_get_self_attr  sys_lsm_get_self_attr
 460  n32  lsm_set_self_attr  sys_lsm_set_self_attr
 461  n32  lsm_list_modules   sys_lsm_list_modules
+462  n32  mseal              sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n64.tbl b/arch/mips/kernel/syscalls/syscall_n64.tbl
index 532b855df589..1464c6be6eb3 100644
--- a/arch/mips/kernel/syscalls/syscall_n64.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n64.tbl
@@ -376,3 +376,4 @@
 459  n64  lsm_get_self_attr  sys_lsm_get_self_attr
 460  n64  lsm_set_self_attr  sys_lsm_set_self_attr
 461  n64  lsm_list_modules   sys_lsm_list_modules
+462  n64  mseal              sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl
index f45c9530ea93..008ebe60263e 100644
--- a/arch/mips/kernel/syscalls/syscall_o32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_o32.tbl
@@ -449,3 +449,4 @@
 459  o32  lsm_get_self_attr  sys_lsm_get_self_attr
 460  o32  lsm_set_self_attr  sys_lsm_set_self_attr
 461  o32  lsm_list_modules   sys_lsm_list_modules
+462  o32  mseal              sys_mseal
diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
index b236a84c4e12..b13c21373974 100644
--- a/arch/parisc/kernel/syscalls/syscall.tbl
+++ b/arch/parisc/kernel/syscalls/syscall.tbl
@@ -460,3 +460,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
index 17173b82ca21..3656f1ca7a21 100644
--- a/arch/powerpc/kernel/syscalls/syscall.tbl
+++ b/arch/powerpc/kernel/syscalls/syscall.tbl
@@ -548,3 +548,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
index 095bb86339a7..bd0fee24ad10 100644
--- a/arch/s390/kernel/syscalls/syscall.tbl
+++ b/arch/s390/kernel/syscalls/syscall.tbl
@@ -464,3 +464,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal              sys_mseal
diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
index 86fe269f0220..bbf83a2db986 100644
--- a/arch/sh/kernel/syscalls/syscall.tbl
+++ b/arch/sh/kernel/syscalls/syscall.tbl
@@ -464,3 +464,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
index b23d59313589..ac6c281ccfe0 100644
--- a/arch/sparc/kernel/syscalls/syscall.tbl
+++ b/arch/sparc/kernel/syscalls/syscall.tbl
@@ -507,3 +507,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 5f8591ce7f25..7fd1f57ad3d3 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -466,3 +466,4 @@
 459  i386  lsm_get_self_attr  sys_lsm_get_self_attr
 460  i386  lsm_set_self_attr  sys_lsm_set_self_attr
 461  i386  lsm_list_modules   sys_lsm_list_modules
+462  i386  mseal              sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 7e8d46f4147f..52df0dec70da 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -383,6 +383,7 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
index dd116598fb25..67083fc1b2f5 100644
--- a/arch/xtensa/kernel/syscalls/syscall.tbl
+++ b/arch/xtensa/kernel/syscalls/syscall.tbl
@@ -432,3 +432,4 @@
 459  common  lsm_get_self_attr  sys_lsm_get_self_attr
 460  common  lsm_set_self_attr  sys_lsm_set_self_attr
 461  common  lsm_list_modules   sys_lsm_list_modules
+462  common  mseal              sys_mseal
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 75f00965ab15..d983c48a3b6a 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -842,8 +842,11 @@ __SYSCALL(__NR_lsm_set_self_attr, sys_lsm_set_self_attr)
 #define __NR_lsm_list_modules 461
 __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules)
 
+#define __NR_mseal 462
+__SYSCALL(__NR_mseal, sys_mseal)
+
 #undef __NR_syscalls
-#define __NR_syscalls 462
+#define __NR_syscalls 463
 
 /*
  * 32 bit systems traditionally used different
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index faad00cce269..d7eee421d4bc 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -196,6 +196,7 @@ COND_SYSCALL(migrate_pages);
 COND_SYSCALL(move_pages);
 COND_SYSCALL(set_mempolicy_home_node);
 COND_SYSCALL(cachestat);
+COND_SYSCALL(mseal);
 
 COND_SYSCALL(perf_event_open);
 COND_SYSCALL(accept4);
On 4/15/24 9:35 PM, jeffxu@chromium.org wrote:
From: Jeff Xu jeffxu@chromium.org
Wire up mseal syscall for all architectures.
It isn't logical to wire up something which isn't present. Please first add mseal() and then wire it up, i.e. swap the first and second patches. I've seen this same comment before.
[...]
On Mon, 15 Apr 2024 at 11:11, Muhammad Usama Anjum usama.anjum@collabora.com wrote:
It isn't logical to wire up something which isn't present
Actually, with system calls, the rules end up being almost opposite.
There's no point in adding the code if it's not reachable. So adding the system call code before adding the wiring makes no sense.
So you have two cases: add the stubs first, or add the code first. Neither does anything without the other.
So then you go "add both in the same commit" option, which ends up being horrible from a "review the code" standpoint. The two parts are entirely different and mixing them up makes the patch very unclear (and has very different target audiences for reviewing it - the MM people really shouldn't have to look at the architecture wiring parts).
End result: there are no "this is the logical ordering" cases.
But the "wire up system calls" part actually has some reasons to be first:
- it reserves the system call number
- it adds the "when system call isn't enabled, return -ENOSYS" conditional system call logic
so I actually tend to prefer this ordering when it comes to system calls.
Linus
On Mon, Apr 15, 2024 at 11:21 AM Linus Torvalds torvalds@linux-foundation.org wrote:
On Mon, 15 Apr 2024 at 11:11, Muhammad Usama Anjum usama.anjum@collabora.com wrote:
It isn't logical to wire up something which isn't present
[...]
But the "wire up system calls" part actually has some reasons to be first:
- it reserves the system call number

- it adds the "when system call isn't enabled, return -ENOSYS" conditional system call logic

so I actually tend to prefer this ordering when it comes to system calls.
I confirm that the wire-up change can be merged on its own, i.e. the build will pass, and -ENOSYS will be returned at runtime.
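For example, userspace can probe for the syscall at runtime; a minimal sketch (the have_mseal() helper is hypothetical, assuming the number 462 from this series):

#include <errno.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462
#endif

/* returns 1 if the running kernel implements mseal(), 0 otherwise */
static int have_mseal(void)
{
        /* len == 0 is documented as a no-op that still reaches the syscall */
        if (syscall(__NR_mseal, 0, 0, 0) == 0)
                return 1;
        return errno != ENOSYS;
}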
Thanks Linus for clarifying this. -Jeff
From: Jeff Xu <jeffxu@chromium.org>
The new mseal() is a syscall available on 64-bit CPUs, with the following signature:
int mseal(void *addr, size_t len, unsigned long flags)

addr/len: memory range.
flags: reserved.
mseal() blocks the following operations for the given memory range:
1> Unmapping, moving to another location, and shrinking the size, via munmap() and mremap(): these can leave an empty space that could then be filled by a VMA with a new set of attributes.
2> Moving or expanding a different VMA into the current location, via mremap().
3> Modifying a VMA via mmap(MAP_FIXED).
4> Size expansion, via mremap(), does not appear to pose any specific risks to sealed VMAs. It is included anyway because the use case is unclear. In any case, users can rely on merging to expand a sealed VMA.
5> mprotect() and pkey_mprotect().
6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED) for anonymous memory, when users don't have write permission to the memory. Those behaviors can alter region contents by discarding pages, effectively a memset(0) for anonymous memory.
The following input during the RFC is incorporated into this patch:
Jann Horn: raising awareness and providing valuable insights on the destructive madvise operations.
Linus Torvalds: assisting in defining the system call signature and scope.
Liam R. Howlett: perf optimization.
Theo de Raadt: sharing the experience and insight gained from implementing mimmutable() in OpenBSD.
Finally, the idea that inspired this patch comes from Stephen Röttger’s work in Chrome V8 CFI.
Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 include/linux/syscalls.h |   1 +
 mm/Makefile              |   4 +
 mm/internal.h            |  37 +++++
 mm/madvise.c             |  12 ++
 mm/mmap.c                |  31 +++-
 mm/mprotect.c            |  10 ++
 mm/mremap.c              |  31 ++++
 mm/mseal.c               | 307 +++++++++++++++++++++++++++++++++
 8 files changed, 432 insertions(+), 1 deletion(-)
 create mode 100644 mm/mseal.c

diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index e619ac10cd23..9104952d323d 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -821,6 +821,7 @@ asmlinkage long sys_process_mrelease(int pidfd, unsigned int flags);
 asmlinkage long sys_remap_file_pages(unsigned long start, unsigned long size,
                        unsigned long prot, unsigned long pgoff,
                        unsigned long flags);
+asmlinkage long sys_mseal(unsigned long start, size_t len, unsigned long flags);
 asmlinkage long sys_mbind(unsigned long start, unsigned long len,
                                unsigned long mode,
                                const unsigned long __user *nmask,
diff --git a/mm/Makefile b/mm/Makefile
index 4abb40b911ec..739811890e36 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -42,6 +42,10 @@ ifdef CONFIG_CROSS_MEMORY_ATTACH
 mmu-$(CONFIG_MMU)      += process_vm_access.o
 endif
 
+ifdef CONFIG_64BIT
+mmu-$(CONFIG_MMU)      += mseal.o
+endif
+
 obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
            maccess.o page-writeback.o folio-compat.o \
            readahead.o swap.o truncate.o vmscan.o shrinker.o \
diff --git a/mm/internal.h b/mm/internal.h
index 7e486f2c502c..a858161489b3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1326,6 +1326,43 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
 unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
                           int priority);
 
+#ifdef CONFIG_64BIT
+/* VM is sealed, in vm_flags */
+#define VM_SEALED      _BITUL(63)
+#endif
+
+#ifdef CONFIG_64BIT
+static inline int can_do_mseal(unsigned long flags)
+{
+        if (flags)
+                return -EINVAL;
+
+        return 0;
+}
+
+bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+                unsigned long end);
+bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
+                unsigned long end, int behavior);
+#else
+static inline int can_do_mseal(unsigned long flags)
+{
+        return -EPERM;
+}
+
+static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+                unsigned long end)
+{
+        return true;
+}
+
+static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
+                unsigned long end, int behavior)
+{
+        return true;
+}
+#endif
+
 #ifdef CONFIG_SHRINKER_DEBUG
 static inline __printf(2, 0) int shrinker_debugfs_name_alloc(
                        struct shrinker *shrinker, const char *fmt, va_list ap)
diff --git a/mm/madvise.c b/mm/madvise.c
index 44a498c94158..f7d589534e82 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1394,6 +1394,7 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
  *  -EIO    - an I/O error occurred while paging in data.
  *  -EBADF  - map exists, but area maps something that isn't a file.
  *  -EAGAIN - a kernel resource was temporarily unavailable.
+ *  -EPERM  - memory is sealed.
  */
 int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior)
 {
@@ -1437,10 +1438,21 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior)
        start = untagged_addr_remote(mm, start);
        end = start + len;
 
+       /*
+        * Check if the address range is sealed for do_madvise().
+        * can_modify_mm_madv assumes we have acquired the lock on MM.
+        */
+       if (!can_modify_mm_madv(mm, start, end, behavior)) {
+               error = -EPERM;
+               goto out;
+       }
+
        blk_start_plug(&plug);
        error = madvise_walk_vmas(mm, start, end, behavior,
                        madvise_vma_behavior);
        blk_finish_plug(&plug);
+
+out:
        if (write)
                mmap_write_unlock(mm);
        else
diff --git a/mm/mmap.c b/mm/mmap.c
index 6dbda99a47da..4b80076c319e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1267,6 +1267,16 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
                        return -EEXIST;
        }
 
+       /*
+        * addr is returned from get_unmapped_area,
+        * There are two cases:
+        * 1> MAP_FIXED == false
+        *      unallocated memory, no need to check sealing.
+        * 2> MAP_FIXED == true
+        *      sealing is checked inside mmap_region when
+        *      do_vmi_munmap is called.
+        */
+
        if (prot == PROT_EXEC) {
                pkey = execute_only_pkey(mm);
                if (pkey < 0)
@@ -2682,6 +2692,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
        if (end == start)
                return -EINVAL;
 
+       /*
+        * Check if memory is sealed before arch_unmap.
+        * Prevent unmapping a sealed VMA.
+        * can_modify_mm assumes we have acquired the lock on MM.
+        */
+       if (!can_modify_mm(mm, start, end))
+               return -EPERM;
+
        /* arch_unmap() might do unmaps itself.  */
        arch_unmap(mm, start, end);
 
@@ -2744,7 +2762,10 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
        }
 
        /* Unmap any existing mapping in the area */
-       if (do_vmi_munmap(&vmi, mm, addr, len, uf, false))
+       error = do_vmi_munmap(&vmi, mm, addr, len, uf, false);
+       if (error == -EPERM)
+               return error;
+       else if (error)
                return -ENOMEM;
 
        /*
@@ -3094,6 +3115,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 {
        struct mm_struct *mm = vma->vm_mm;
 
+       /*
+        * Check if memory is sealed before arch_unmap.
+        * Prevent unmapping a sealed VMA.
+        * can_modify_mm assumes we have acquired the lock on MM.
+        */
+       if (!can_modify_mm(mm, start, end))
+               return -EPERM;
+
        arch_unmap(mm, start, end);
        return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
 }
diff --git a/mm/mprotect.c b/mm/mprotect.c
index f8a4544b4601..b30b2494bfcd 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -32,6 +32,7 @@
 #include <linux/sched/sysctl.h>
 #include <linux/userfaultfd_k.h>
 #include <linux/memory-tiers.h>
+#include <uapi/linux/mman.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -743,6 +744,15 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
                }
        }
 
+       /*
+        * checking if memory is sealed.
+        * can_modify_mm assumes we have acquired the lock on MM.
+        */
+       if (!can_modify_mm(current->mm, start, end)) {
+               error = -EPERM;
+               goto out;
+       }
+
        prev = vma_prev(&vmi);
        if (start > vma->vm_start)
                prev = vma;
diff --git a/mm/mremap.c b/mm/mremap.c
index 38d98465f3d8..d69b438dcf83 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -902,7 +902,25 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
        if ((mm->map_count + 2) >= sysctl_max_map_count - 3)
                return -ENOMEM;
 
+       /*
+        * In mremap_to().
+        * Move a VMA to another location, check if src addr is sealed.
+        *
+        * Place can_modify_mm here because mremap_to()
+        * does its own checking for address range, and we only
+        * check the sealing after passing those checks.
+        *
+        * can_modify_mm assumes we have acquired the lock on MM.
+        */
+       if (!can_modify_mm(mm, addr, addr + old_len))
+               return -EPERM;
+
        if (flags & MREMAP_FIXED) {
+               /*
+                * In mremap_to().
+                * VMA is moved to dst address, and munmap dst first.
+                * do_munmap will check if dst is sealed.
+                */
                ret = do_munmap(mm, new_addr, new_len, uf_unmap_early);
                if (ret)
                        goto out;
@@ -1061,6 +1079,19 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
                goto out;
        }
 
+       /*
+        * Below is shrink/expand case (not mremap_to())
+        * Check if src address is sealed, if so, reject.
+        * In other words, prevent shrinking or expanding a sealed VMA.
+        *
+        * Place can_modify_mm here so we can keep the logic related to
+        * shrink/expand together.
+        */
+       if (!can_modify_mm(mm, addr, addr + old_len)) {
+               ret = -EPERM;
+               goto out;
+       }
+
        /*
         * Always allow a shrinking remap: that just unmaps
         * the unnecessary pages..
diff --git a/mm/mseal.c b/mm/mseal.c
new file mode 100644
index 000000000000..daadac4b8125
--- /dev/null
+++ b/mm/mseal.c
@@ -0,0 +1,307 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  Implement mseal() syscall.
+ *
+ *  Copyright (c) 2023,2024 Google, Inc.
+ *
+ *  Author: Jeff Xu <jeffxu@chromium.org>
+ */
+
+#include <linux/mempolicy.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/mm_inline.h>
+#include <linux/mmu_context.h>
+#include <linux/syscalls.h>
+#include <linux/sched.h>
+#include "internal.h"
+
+static inline bool vma_is_sealed(struct vm_area_struct *vma)
+{
+        return (vma->vm_flags & VM_SEALED);
+}
+
+static inline void set_vma_sealed(struct vm_area_struct *vma)
+{
+        vm_flags_set(vma, VM_SEALED);
+}
+
+/*
+ * check if a vma is sealed for modification.
+ * return true, if modification is allowed.
+ */
+static bool can_modify_vma(struct vm_area_struct *vma)
+{
+        if (vma_is_sealed(vma))
+                return false;
+
+        return true;
+}
+
+static bool is_madv_discard(int behavior)
+{
+        return behavior &
+                (MADV_FREE | MADV_DONTNEED | MADV_DONTNEED_LOCKED |
+                 MADV_REMOVE | MADV_DONTFORK | MADV_WIPEONFORK);
+}
+
+static bool is_ro_anon(struct vm_area_struct *vma)
+{
+        /* check anonymous mapping. */
+        if (vma->vm_file || vma->vm_flags & VM_SHARED)
+                return false;
+
+        /*
+         * check for non-writable:
+         * PROT=RO or PKRU is not writeable.
+         */
+        if (!(vma->vm_flags & VM_WRITE) ||
+                !arch_vma_access_permitted(vma, true, false, false))
+                return true;
+
+        return false;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified.
+ * the memory range can have a gap (unallocated memory).
+ * return true, if it is allowed.
+ */
+bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+        struct vm_area_struct *vma;
+
+        VMA_ITERATOR(vmi, mm, start);
+
+        /* going through each vma to check. */
+        for_each_vma_range(vmi, vma, end) {
+                if (!can_modify_vma(vma))
+                        return false;
+        }
+
+        /* Allow by default. */
+        return true;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified by madvise.
+ * the memory range can have a gap (unallocated memory).
+ * return true, if it is allowed.
+ */
+bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long end,
+                int behavior)
+{
+        struct vm_area_struct *vma;
+
+        VMA_ITERATOR(vmi, mm, start);
+
+        if (!is_madv_discard(behavior))
+                return true;
+
+        /* going through each vma to check. */
+        for_each_vma_range(vmi, vma, end)
+                if (is_ro_anon(vma) && !can_modify_vma(vma))
+                        return false;
+
+        /* Allow by default. */
+        return true;
+}
+
+static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
+                struct vm_area_struct **prev, unsigned long start,
+                unsigned long end, vm_flags_t newflags)
+{
+        int ret = 0;
+        vm_flags_t oldflags = vma->vm_flags;
+
+        if (newflags == oldflags)
+                goto out;
+
+        vma = vma_modify_flags(vmi, *prev, vma, start, end, newflags);
+        if (IS_ERR(vma)) {
+                ret = PTR_ERR(vma);
+                goto out;
+        }
+
+        set_vma_sealed(vma);
+out:
+        *prev = vma;
+        return ret;
+}
+
+/*
+ * Check for do_mseal:
+ * 1> start is part of a valid vma.
+ * 2> end is part of a valid vma.
+ * 3> No gap (unallocated address) between start and end.
+ * 4> map is sealable.
+ */
+static int check_mm_seal(unsigned long start, unsigned long end)
+{
+        struct vm_area_struct *vma;
+        unsigned long nstart = start;
+
+        VMA_ITERATOR(vmi, current->mm, start);
+
+        /* going through each vma to check. */
+        for_each_vma_range(vmi, vma, end) {
+                if (vma->vm_start > nstart)
+                        /* unallocated memory found. */
+                        return -ENOMEM;
+
+                if (vma->vm_end >= end)
+                        return 0;
+
+                nstart = vma->vm_end;
+        }
+
+        return -ENOMEM;
+}
+
+/*
+ * Apply sealing.
+ */
+static int apply_mm_seal(unsigned long start, unsigned long end)
+{
+        unsigned long nstart;
+        struct vm_area_struct *vma, *prev;
+
+        VMA_ITERATOR(vmi, current->mm, start);
+
+        vma = vma_iter_load(&vmi);
+        /*
+         * Note: check_mm_seal should have already checked the ENOMEM case,
+         * so vma should not be null, same for the other ENOMEM cases.
+         */
+        prev = vma_prev(&vmi);
+        if (start > vma->vm_start)
+                prev = vma;
+
+        nstart = start;
+        for_each_vma_range(vmi, vma, end) {
+                int error;
+                unsigned long tmp;
+                vm_flags_t newflags;
+
+                newflags = vma->vm_flags | VM_SEALED;
+                tmp = vma->vm_end;
+                if (tmp > end)
+                        tmp = end;
+                error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
+                if (error)
+                        return error;
+                nstart = vma_iter_end(&vmi);
+        }
+
+        return 0;
+}
+
+/*
+ * mseal(2) seals the VM's meta data from
+ * selected syscalls.
+ *
+ * addr/len: VM address range.
+ *
+ * The address range by addr/len must meet:
+ * start (addr) must be in a valid VMA.
+ * end (addr + len) must be in a valid VMA.
+ * no gap (unallocated memory) between start and end.
+ * start (addr) must be page aligned.
+ *
+ * len: len will be page aligned implicitly.
+ *
+ * Below VMA operations are blocked after sealing.
+ * 1> Unmapping, moving to another location, and shrinking
+ *    the size, via munmap() and mremap(), can leave an empty
+ *    space, therefore can be replaced with a VMA with a new
+ *    set of attributes.
+ * 2> Moving or expanding a different vma into the current location,
+ *    via mremap().
+ * 3> Modifying a VMA via mmap(MAP_FIXED).
+ * 4> Size expansion, via mremap(), does not appear to pose any
+ *    specific risks to sealed VMAs. It is included anyway because
+ *    the use case is unclear. In any case, users can rely on
+ *    merging to expand a sealed VMA.
+ * 5> mprotect and pkey_mprotect.
+ * 6> Some destructive madvise() behavior (e.g. MADV_DONTNEED)
+ *    for anonymous memory, when users don't have write permission to the
+ *    memory. Those behaviors can alter region contents by discarding pages,
+ *    effectively a memset(0) for anonymous memory.
+ *
+ * flags: reserved.
+ *
+ * return values:
+ *  zero: success.
+ *  -EINVAL:
+ *   invalid input flags.
+ *   start address is not page aligned.
+ *   Address range (start + len) overflow.
+ *  -ENOMEM:
+ *   addr is not a valid address (not allocated).
+ *   end (start + len) is not a valid address.
+ *   a gap (unallocated memory) between start and end.
+ *  -EPERM:
+ *   - In 32 bit architecture, sealing is not supported.
+ * Note:
+ *  user can call mseal(2) multiple times, adding a seal on an
+ *  already sealed memory is a no-action (no error).
+ *
+ *  unseal() is not supported.
+ */
+static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
+{
+        size_t len;
+        int ret = 0;
+        unsigned long end;
+        struct mm_struct *mm = current->mm;
+
+        ret = can_do_mseal(flags);
+        if (ret)
+                return ret;
+
+        start = untagged_addr(start);
+        if (!PAGE_ALIGNED(start))
+                return -EINVAL;
+
+        len = PAGE_ALIGN(len_in);
+        /* Check to see whether len was rounded up from small -ve to zero. */
+        if (len_in && !len)
+                return -EINVAL;
+
+        end = start + len;
+        if (end < start)
+                return -EINVAL;
+
+        if (end == start)
+                return 0;
+
+        if (mmap_write_lock_killable(mm))
+                return -EINTR;
+
+        /*
+         * First pass, this helps to avoid
+         * partial sealing in case of error in input address range,
+         * e.g. ENOMEM error.
+         */
+        ret = check_mm_seal(start, end);
+        if (ret)
+                goto out;
+
+        /*
+         * Second pass, this should succeed, unless there are errors
+         * from vma_modify_flags, e.g. merge/split error, or process
+         * reaching the max supported VMAs, however, those cases shall
+         * be rare.
+         */
+        ret = apply_mm_seal(start, end);
+
+out:
+        mmap_write_unlock(current->mm);
+        return ret;
+}
+
+SYSCALL_DEFINE3(mseal, unsigned long, start, size_t, len, unsigned long,
+                flags)
+{
+        return do_mseal(start, len, flags);
+}
* jeffxu@chromium.org jeffxu@chromium.org [240415 12:35]:
From: Jeff Xu jeffxu@chromium.org
The new mseal() is a syscall available on 64-bit CPUs, with the following signature:
int mseal(void *addr, size_t len, unsigned long flags)

addr/len: memory range.
flags: reserved.
[...]
No per-vma check is done prior to entering a per-vma modification loop today. This means that mseal() differs in behaviour: an "up-front failure" vs the "partial change failure" that exists in every other function.
I'm not saying it's wrong or that it's right - I'm just wondering what the direction is here. Either we should do as much up-front as possible or keep with tradition and have (partial) success where possible.
If you look at do_mprotect_pkey(), you can even see map_deny_write_exec() being checked in a loop during modifications.
I think we can all agree that having some up-front and some later without any reason will lead to a higher probability of things getting missed.
Thanks, Liam
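To make the two designs under discussion concrete, here is an illustrative pseudo-kernel-C sketch; check_all_vmas(), apply_to_all_vmas(), check_one() and modify_one() are hypothetical names standing in for the real helpers, not actual kernel functions:

	/* Pattern A: up-front validation, then apply (what mseal() does). */
	ret = check_all_vmas(start, end);	/* first pass, no modifications */
	if (ret)
		return ret;			/* fail with nothing changed */
	return apply_to_all_vmas(start, end);	/* second pass, expected to succeed */

	/* Pattern B: validate inside the modification loop (e.g. mprotect()). */
	for_each_vma_range(vmi, vma, end) {
		ret = check_one(vma);
		if (ret)
			return ret;		/* earlier VMAs were already modified */
		modify_one(vma);
	}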
On Tue, Apr 16, 2024 at 4:59 PM Liam R. Howlett Liam.Howlett@oracle.com wrote:
- jeffxu@chromium.org [240415 12:35]:
From: Jeff Xu jeffxu@chromium.org
The new mseal() is a syscall on 64-bit CPUs, with the following signature:
int mseal(void *addr, size_t len, unsigned long flags) addr/len: memory range. flags: reserved.
[...]
Today, no per-VMA checks are done prior to entering a per-VMA modification loop. This means that mseal()'s "up-front failure" behaviour differs from the "partial change failure" behaviour that exists in every other function.
I'm not saying it's wrong or that it's right - I'm just wondering what the direction is here. Either we should do as much up-front as possible or keep with tradition and have (partial) success where possible.
FWIW, in the current version, I think ENOMEM can happen both in the up-front check (for calling the syscall on unmapped ranges) as well as in the later loop (for VMA splitting failure).
I think no matter what we do, a process that gets an error other than ENOSYS from mseal() will probably not get much actionable information from the return value... no matter whether sealing worked partly or not at all, the process will have the same choice between either exiting (if it treats sealing failure as a fatal error for security reasons) or continuing as if the sealing had worked.
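In other words, security-sensitive callers will likely reduce it to a seal-or-die pattern, something like this illustrative sketch (reusing the syscall wrapper headers from the sketch above):

static void seal_or_die(void *addr, size_t len)
{
	/* Partial and total failure are indistinguishable here, and both
	 * leave the security guarantee unmet, so just bail out. */
	if (syscall(__NR_mseal, addr, len, 0) != 0)
		abort();
}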
Liam R. Howlett Liam.Howlett@oracle.com wrote:
Today, no per-VMA checks are done prior to entering a per-VMA modification loop. This means that mseal()'s "up-front failure" behaviour differs from the "partial change failure" behaviour that exists in every other function.
I discussed this with Liam and Jeff a while ago (in separate conversations).
A bunch of Linux m*() syscalls have weaker atomicity guarantees than the other systems I looked into.
Linux is an outlier here. Other systems do two passes over the "entries in the range" before committing to success or failure. When success is returned, it means the whole range has been changed. When an error is identified in the first pass, no changes are applied and an error is returned. I found no partial results in my limited reading of various VM systems.
Actually, the guarantee of having done nothing upon error is very common system call behaviour. POSIX and de facto standards don't seem to require it in specific wording as far as I can see, but the majority of systems seem to provide it because it matches expectations.
Considering all the system calls, I can't think of any examples of partial results. There are a few specific ioctls which were designed wrong.
I suspect, for performance reasons, there will be little appetite to repair the m*() syscalls in Linux. (I would appreciate it if they were brought up to standard, so I guess that starts the 20-year counter :)
I think we can all agree that having some up-front and some later without any reason will lead to a higher probability of things getting missed.
There is also the attack surface to consider. I spent some time thinking about circumstances where this might help an attack.
The risk is that mprotect()'s return value is very rarely checked, yet parts of objects will change. mprotect() is probably the least-checked system call, since people assume it will always succeed entirely; that is not the case on Linux. It is even less the case once immutable memory ranges come into play: a failure is an even more likely condition now.
I didn't find a particular piece of software (or an old attack) which would help an attack with the sloppy permission handling aspects, but I only thought about it for a couple days... there are people with more time on their hands.
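For illustration, here is the unchecked idiom Theo describes versus a defensive one, given some mapping p of len bytes (a sketch; on Linux the leading part of the range may already have been changed when mprotect() fails):

	/* Common in the wild: return value ignored, partial changes
	 * go unnoticed. */
	mprotect(p, len, PROT_READ);

	/* Defensive: on Linux a failure may still have modified a
	 * leading part of the range, so treat it as fatal. */
	if (mprotect(p, len, PROT_READ) != 0) {
		perror("mprotect");
		abort();
	}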
From: Jeff Xu jeffxu@chromium.org
Selftest for the memory sealing changes in mmap() and mseal().
Signed-off-by: Jeff Xu jeffxu@chromium.org
---
 tools/testing/selftests/mm/.gitignore   |    1 +
 tools/testing/selftests/mm/Makefile     |    1 +
 tools/testing/selftests/mm/mseal_test.c | 1836 +++++++++++++++++++++++
 3 files changed, 1838 insertions(+)
 create mode 100644 tools/testing/selftests/mm/mseal_test.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index d26e962f2ac4..98eaa4590f11 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -47,3 +47,4 @@ mkdirty
 va_high_addr_switch
 hugetlb_fault_after_madv
 hugetlb_madv_vs_map
+mseal_test
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index eb5f39a2668b..95d10fe1b3c1 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mrelease_test
 TEST_GEN_FILES += mremap_dontunmap
 TEST_GEN_FILES += mremap_test
+TEST_GEN_FILES += mseal_test
 TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += pagemap_ioctl
 TEST_GEN_FILES += thuge-gen
diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
new file mode 100644
index 000000000000..06c780d1d8e5
--- /dev/null
+++ b/tools/testing/selftests/mm/mseal_test.c
@@ -0,0 +1,1836 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <sys/mman.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdbool.h>
+#include "../kselftest.h"
+#include <syscall.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <assert.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/vfs.h>
+#include <sys/stat.h>
+
+/*
+ * Definitions needed for a manual build with gcc:
+ * gcc -I ../../../../usr/include -DDEBUG -O3 mseal_test.c -o mseal_test
+ */
+#ifndef PKEY_DISABLE_ACCESS
+# define PKEY_DISABLE_ACCESS	0x1
+#endif
+
+#ifndef PKEY_DISABLE_WRITE
+# define PKEY_DISABLE_WRITE	0x2
+#endif
+
+#ifndef PKEY_BITS_PER_PKEY
+#define PKEY_BITS_PER_PKEY	2
+#endif
+
+#ifndef PKEY_MASK
+#define PKEY_MASK	(PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
+#endif
+
+#define FAIL_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
+
+#define SKIP_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
+
+
+#define TEST_END_CHECK() {\
+		ksft_test_result_pass("%s\n", __func__);\
+		return;\
+test_end:\
+		return;\
+}
+
+#ifndef u64
+#define u64 unsigned long long
+#endif
+
+static unsigned long get_vma_size(void *addr, int *prot)
+{
+	FILE *maps;
+	char line[256];
+	int size = 0;
+	uintptr_t addr_start, addr_end;
+	char protstr[5];
+	*prot = 0;
+
+	maps = fopen("/proc/self/maps", "r");
+	if (!maps)
+		return 0;
+
+	while (fgets(line, sizeof(line), maps)) {
+		if (sscanf(line, "%lx-%lx %4s", &addr_start, &addr_end, protstr) == 3) {
+			if (addr_start == (uintptr_t) addr) {
+				size = addr_end - addr_start;
+				if (protstr[0] == 'r')
+					*prot |= 0x4;
+				if (protstr[1] == 'w')
+					*prot |= 0x2;
+				if (protstr[2] == 'x')
+					*prot |= 0x1;
+				break;
+			}
+		}
+	}
+	fclose(maps);
+	return size;
+}
+
+/*
+ * Define sys_xyz() wrappers to invoke syscalls directly.
+ */
+static int sys_mseal(void *start, size_t len)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_mseal, start, len, 0);
+	return sret;
+}
+
+static int sys_mprotect(void *ptr, size_t size, unsigned long prot)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_mprotect, ptr, size, prot);
+	return sret;
+}
+
+static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
+		unsigned long pkey)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_pkey_mprotect, ptr, size, orig_prot, pkey);
+	return sret;
+}
+
+static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
+	unsigned long flags, unsigned long fd, unsigned long offset)
+{
+	void *sret;
+
+	errno = 0;
+	sret = (void *) syscall(__NR_mmap, addr, len, prot,
+		flags, fd, offset);
+	return sret;
+}
+
+static int sys_munmap(void *ptr, size_t size)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_munmap, ptr, size);
+	return sret;
+}
+
+static int sys_madvise(void *start, size_t len, int types)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_madvise, start, len, types);
+	return sret;
+}
+
+static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
+{
+	int ret = syscall(__NR_pkey_alloc, flags, init_val);
+
+	return ret;
+}
+
+static unsigned int __read_pkey_reg(void)
+{
+	unsigned int pkey_reg = 0;
+#if defined(__i386__) || defined(__x86_64__) /* arch */
+	unsigned int eax, edx;
+	unsigned int ecx = 0;
+
+	asm volatile(".byte 0x0f,0x01,0xee\n\t"
+			: "=a" (eax), "=d" (edx)
+			: "c" (ecx));
+	pkey_reg = eax;
+#endif
+	return pkey_reg;
+}
+
+static void __write_pkey_reg(u64 pkey_reg)
+{
+#if defined(__i386__) || defined(__x86_64__) /* arch */
+	unsigned int eax = pkey_reg;
+	unsigned int ecx = 0;
+	unsigned int edx = 0;
+
+	asm volatile(".byte 0x0f,0x01,0xef\n\t"
+			: : "a" (eax), "c" (ecx), "d" (edx));
+	assert(pkey_reg == __read_pkey_reg());
+#endif
+}
+
+static unsigned long pkey_bit_position(int pkey)
+{
+	return pkey * PKEY_BITS_PER_PKEY;
+}
+
+static u64 set_pkey_bits(u64 reg, int pkey, u64 flags)
+{
+	unsigned long shift = pkey_bit_position(pkey);
+
+	/* mask out bits from pkey in old value */
+	reg &= ~((u64)PKEY_MASK << shift);
+	/* OR in new bits for pkey */
+	reg |= (flags & PKEY_MASK) << shift;
+	return reg;
+}
+
+static void set_pkey(int pkey, unsigned long pkey_value)
+{
+	unsigned long mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE);
+	u64 new_pkey_reg;
+
+	assert(!(pkey_value & ~mask));
+	new_pkey_reg = set_pkey_bits(__read_pkey_reg(), pkey, pkey_value);
+	__write_pkey_reg(new_pkey_reg);
+}
+
+static void setup_single_address(int size, void **ptrOut)
+{
+	void *ptr;
+
+	ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	assert(ptr != (void *)-1);
+	*ptrOut = ptr;
+}
+
+static void setup_single_address_rw(int size, void **ptrOut)
+{
+	void *ptr;
+	unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
+
+	ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
+	assert(ptr != (void *)-1);
+	*ptrOut = ptr;
+}
+
+static void clean_single_address(void *ptr, int size)
+{
+	int ret;
+
+	ret = munmap(ptr, size);
+	assert(!ret);
+}
+
+static void seal_single_address(void *ptr, int size)
+{
+	int ret;
+
+	ret = sys_mseal(ptr, size);
+	assert(!ret);
+}
+
+bool seal_support(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+
+	ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	if (ptr == (void *) -1)
+		return false;
+
+	ret = sys_mseal(ptr, page_size);
+	if (ret < 0)
+		return false;
+
+	return true;
+}
+
+bool pkey_supported(void)
+{
+#if defined(__i386__) || defined(__x86_64__) /* arch */
+	int pkey = sys_pkey_alloc(0, 0);
+
+	if (pkey > 0)
+		return true;
+#endif
+	return false;
+}
+
+static void test_seal_addseal(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_unmapped_start(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* munmap 2 pages from ptr. */
+	ret = sys_munmap(ptr, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will fail because 2 pages from ptr are unmapped. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* mseal will fail because 2 pages from ptr are unmapped. */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_unmapped_middle(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* munmap 2 pages from ptr + page. */
+	ret = sys_munmap(ptr + page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will fail, since the middle 2 pages are unmapped. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* mseal will fail as well. */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* we can still seal the first page and the last page. */
+	ret = sys_mseal(ptr, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mseal(ptr + 3 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_unmapped_end(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* unmap the last 2 pages. */
+	ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will fail since the last 2 pages are unmapped. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* mseal will fail as well. */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* The first 2 pages are not sealed, and seals can be added. */
+	ret = sys_mseal(ptr, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_multiple_vmas(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split the vma into 3. */
+	ret = sys_mprotect(ptr + page_size, 2 * page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will get applied to all 4 pages - 3 VMAs. */
+	ret = sys_mprotect(ptr, size, PROT_READ);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* use mprotect to split the vma into 3. */
+	ret = sys_mprotect(ptr + page_size, 2 * page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mseal gets applied to all 4 pages - 3 VMAs.
+	 */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_split_start(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split at the middle */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the first page, this will split the VMA */
+	ret = sys_mseal(ptr, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* add a seal to the remaining 3 pages */
+	ret = sys_mseal(ptr + page_size, 3 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_split_end(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split at the middle */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the last page */
+	ret = sys_mseal(ptr + 3 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* add seals to the first 3 pages */
+	ret = sys_mseal(ptr, 3 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_invalid_input(void)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(8 * page_size, &ptr);
+	clean_single_address(ptr + 4 * page_size, 4 * page_size);
+
+	/* invalid flag */
+	ret = syscall(__NR_mseal, ptr, size, 0x20);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* unaligned address */
+	ret = sys_mseal(ptr + 1, 2 * page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* length too big */
+	ret = sys_mseal(ptr, 5 * page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* length overflow */
+	ret = sys_mseal(ptr, UINT64_MAX/page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* start is not in a valid VMA */
+	ret = sys_mseal(ptr - page_size, 5 * page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_zero_length(void)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* sealing a 0-length range is OK, same as mprotect */
+	ret = sys_mseal(ptr, 0);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* verify the 4 pages were not sealed by the previous call. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_zero_address(void)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	int prot;
+
+	/* map the zero address with MAP_FIXED. */
+	ptr = sys_mmap(0, size, PROT_NONE,
+			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr == 0);
+
+	size = get_vma_size(ptr, &prot);
+	FAIL_TEST_IF_FALSE(size == 4 * page_size);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* verify the 4 pages are sealed by the previous call. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_twice(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* applying the same seal again is OK; sealing is idempotent.
+	 */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr, size);
+
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_start_mprotect(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr, page_size);
+
+	/* the first page is sealed. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* pages after the first page are not sealed. */
+	ret = sys_mprotect(ptr + page_size, page_size * 3,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_end_mprotect(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr + page_size, 3 * page_size);
+
+	/* the first page is not sealed */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* the last 3 pages are sealed */
+	ret = sys_mprotect(ptr + page_size, page_size * 3,
+			PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_unalign_len(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr, page_size * 2 - 1);
+
+	/* 2 pages are sealed. */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 2, page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_unalign_len_variant_2(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+	if (seal)
+		seal_single_address(ptr, page_size * 2 + 1);
+
+	/* 3 pages are sealed.
+	 */
+	ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 3, page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_two_vma(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal)
+		seal_single_address(ptr, page_size * 4);
+
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 2, page_size * 2,
+			PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_two_vma_with_split(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split into two VMAs. */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mseal can apply across 2 VMAs, and will split them. */
+	if (seal)
+		seal_single_address(ptr + page_size, page_size * 2);
+
+	/* the first page is not sealed. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* the second page is sealed. */
+	ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* the third page is sealed. */
+	ret = sys_mprotect(ptr + 2 * page_size, page_size,
+			PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* the fourth page is not sealed. */
+	ret = sys_mprotect(ptr + 3 * page_size, page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_partial_mprotect(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* seal one page. */
+	if (seal)
+		seal_single_address(ptr, page_size);
+
+	/* mprotect on the first 2 pages will fail, since the first page is sealed. */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_two_vma_with_gap(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* use mprotect to split. */
+	ret = sys_mprotect(ptr + 3 * page_size, page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* use munmap to free two pages in the middle */
+	ret = sys_munmap(ptr + page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will fail, because there is a gap in the address range. */
+	/* note: internally, mprotect still updated the first page. */
+	ret = sys_mprotect(ptr, 4 * page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* mseal will fail as well.
+	 */
+	ret = sys_mseal(ptr, 4 * page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* the first page is not sealed. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(ret == 0);
+
+	/* the last page is not sealed. */
+	ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(ret == 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_split(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal all 4 pages. */
+	if (seal) {
+		ret = sys_mseal(ptr, 4 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* the range is sealed: mprotect will fail. */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_merge(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split off one page. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the first two pages. */
+	if (seal) {
+		ret = sys_mseal(ptr, 2 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* 2 pages are sealed. */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* the last 2 pages are not sealed. */
+	ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(ret == 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_munmap(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* 4 pages are sealed. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+/*
+ * allocate 4 pages,
+ * use mprotect to split them into two VMAs,
+ * seal the whole range,
+ * munmap will fail on both.
+ */
+static void test_seal_munmap_two_vma(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	ret = sys_munmap(ptr, page_size * 2);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+/*
+ * allocate a VMA with 4 pages.
+ * munmap the middle 2 pages.
+ * sealing the whole 4 pages will fail.
+ * munmap of the first page will be OK.
+ * munmap of the last page will be OK.
+ */
+static void test_seal_munmap_vma_with_gap(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		/* can't have a gap in the middle. */
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(ret < 0);
+	}
+
+	ret = sys_munmap(ptr, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr + page_size * 2, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_munmap_start_freed(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	int prot;
+
+	setup_single_address(size, &ptr);
+
+	/* unmap the first page. */
+	ret = sys_munmap(ptr, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the last 3 pages. */
+	if (seal) {
+		ret = sys_mseal(ptr + page_size, 3 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* unmap from the first page. */
+	ret = sys_munmap(ptr, size);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret < 0);
+
+		size = get_vma_size(ptr + page_size, &prot);
+		FAIL_TEST_IF_FALSE(size == page_size * 3);
+	} else {
+		/* note: this will be OK, even though the first page is
+		 * already unmapped. */
+		FAIL_TEST_IF_FALSE(!ret);
+
+		size = get_vma_size(ptr + page_size, &prot);
+		FAIL_TEST_IF_FALSE(size == 0);
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_munmap_end_freed(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+	/* unmap the last page. */
+	ret = sys_munmap(ptr + page_size * 3, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the first 3 pages. */
+	if (seal) {
+		ret = sys_mseal(ptr, 3 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* unmap all pages. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_munmap_middle_freed(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	int prot;
+
+	setup_single_address(size, &ptr);
+	/* unmap 2 pages in the middle. */
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the first page. */
+	if (seal) {
+		ret = sys_mseal(ptr, page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* munmap all 4 pages. */
+	ret = sys_munmap(ptr, size);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret < 0);
+
+		size = get_vma_size(ptr, &prot);
+		FAIL_TEST_IF_FALSE(size == page_size);
+
+		size = get_vma_size(ptr + page_size * 3, &prot);
+		FAIL_TEST_IF_FALSE(size == page_size);
+	} else {
+		FAIL_TEST_IF_FALSE(!ret);
+
+		size = get_vma_size(ptr, &prot);
+		FAIL_TEST_IF_FALSE(size == 0);
+
+		size = get_vma_size(ptr + page_size * 3, &prot);
+		FAIL_TEST_IF_FALSE(size == 0);
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_shrink(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* shrink from 4 pages to 2 pages.
+	 */
+	ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_expand(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	/* unmap the last 2 pages. */
+	ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, 2 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* expand from 2 pages to 4 pages. */
+	ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 == ptr);
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move(bool seal)
+{
+	void *ptr, *newPtr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	setup_single_address(size, &newPtr);
+	clean_single_address(newPtr, size);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* move from ptr to a fixed address. */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mmap_overwrite_prot(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* use mmap to change protection. */
+	ret2 = sys_mmap(ptr, size, PROT_NONE,
+			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == ptr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mmap_expand(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 12 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	/* unmap the last 4 pages. */
+	ret = sys_munmap(ptr + 8 * page_size, 4 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, 8 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* use mmap to expand. */
+	ret2 = sys_mmap(ptr, size, PROT_READ,
+			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == ptr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mmap_shrink(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 12 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* use mmap to shrink.
+	 */
+	ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ,
+			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == ptr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_shrink_fixed(bool seal)
+{
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	setup_single_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mremap to move and shrink to a fixed address */
+	ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+			newAddr);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == newAddr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_expand_fixed(bool seal)
+{
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(page_size, &ptr);
+	setup_single_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(newAddr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mremap to move and expand to a fixed address */
+	ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
+			newAddr);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == newAddr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move_fixed(bool seal)
+{
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	setup_single_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(newAddr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mremap to move to a fixed address */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == newAddr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move_fixed_zero(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/*
+	 * MREMAP_FIXED can move the mapping to the zero address
+	 */
+	ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+			0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 == 0);
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move_dontunmap(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mremap to move, and don't unmap the src addr.
+	 */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/*
+	 * The 0xdeaddead hint should have no effect on the dest addr
+	 * when MREMAP_DONTUNMAP is set.
+	 */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+			0xdeaddead);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+		FAIL_TEST_IF_FALSE((long)ret2 != 0xdeaddead);
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_merge_and_split(void)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size;
+	int ret;
+	int prot;
+
+	/* (24 RO) */
+	setup_single_address(24 * page_size, &ptr);
+
+	/* use mprotect(NONE) to set the outer boundaries */
+	/* (1 NONE) (22 RO) (1 NONE) */
+	ret = sys_mprotect(ptr, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(!ret);
+	ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == 22 * page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+
+	/* use mseal to split from the beginning */
+	/* (1 NONE) (1 RO_SEAL) (21 RO) (1 NONE) */
+	ret = sys_mseal(ptr + page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+	size = get_vma_size(ptr + 2 * page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == 21 * page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+
+	/* use mseal to split from the end. */
+	/* (1 NONE) (1 RO_SEAL) (20 RO) (1 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 22 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + 22 * page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+	size = get_vma_size(ptr + 2 * page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == 20 * page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+
+	/* merge with the previous VMA. */
+	/* (1 NONE) (2 RO_SEAL) (19 RO) (1 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 2 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == 2 * page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+
+	/* merge with the next VMA.
+	 */
+	/* (1 NONE) (2 RO_SEAL) (18 RO) (2 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 21 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + 21 * page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == 2 * page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+
+	/* split and merge from prev */
+	/* (1 NONE) (3 RO_SEAL) (17 RO) (2 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + 1 * page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == 3 * page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+	ret = sys_munmap(ptr + page_size, page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+	ret = sys_mprotect(ptr + 2 * page_size, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* split and merge from next */
+	/* (1 NONE) (3 RO_SEAL) (16 RO) (3 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 20 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + 20 * page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == 3 * page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+
+	/* merge from the middle of prev and the middle of next. */
+	/* (1 NONE) (22 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size, &prot);
+	FAIL_TEST_IF_FALSE(size == 22 * page_size);
+	FAIL_TEST_IF_FALSE(prot == 0x4);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon_on_rw(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address_rw(size, &ptr);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* sealing does not block madvise() on RW memory. */
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* the base seal still applies. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon_on_pkey(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	int pkey;
+
+	SKIP_TEST_IF_FALSE(pkey_supported());
+
+	setup_single_address_rw(size, &ptr);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	pkey = sys_pkey_alloc(0, 0);
+	FAIL_TEST_IF_FALSE(pkey > 0);
+
+	ret = sys_mprotect_pkey((void *)ptr, size, PROT_READ | PROT_WRITE, pkey);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* sealing doesn't take effect if PKRU allows write. */
+	set_pkey(pkey, 0);
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* sealing will take effect if PKRU denies write. */
+	set_pkey(pkey, PKEY_DISABLE_WRITE);
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* the base seal still applies.
+	 */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon_on_filebacked(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	int fd;
+	unsigned long mapflags = MAP_PRIVATE;
+
+	fd = memfd_create("test", 0);
+	FAIL_TEST_IF_FALSE(fd > 0);
+
+	ret = fallocate(fd, 0, 0, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0);
+	FAIL_TEST_IF_FALSE(ptr != MAP_FAILED);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* the madvise() seal doesn't apply to a file-backed mapping. */
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+	close(fd);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon_on_shared(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED;
+
+	ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* the madvise() seal doesn't apply to a shared mapping. */
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr, size);
+
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+int main(int argc, char **argv)
+{
+	bool test_seal = seal_support();
+
+	ksft_print_header();
+
+	if (!test_seal)
+		ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n");
+
+	if (!pkey_supported())
+		ksft_print_msg("PKEY not supported\n");
+
+	ksft_set_plan(80);
+
+	test_seal_addseal();
+	test_seal_unmapped_start();
+	test_seal_unmapped_middle();
+	test_seal_unmapped_end();
+	test_seal_multiple_vmas();
+	test_seal_split_start();
+	test_seal_split_end();
+	test_seal_invalid_input();
+	test_seal_zero_length();
+	test_seal_twice();
+
+	test_seal_mprotect(false);
+	test_seal_mprotect(true);
+
+	test_seal_start_mprotect(false);
+	test_seal_start_mprotect(true);
+
+	test_seal_end_mprotect(false);
+	test_seal_end_mprotect(true);
+
+	test_seal_mprotect_unalign_len(false);
+	test_seal_mprotect_unalign_len(true);
+
+	test_seal_mprotect_unalign_len_variant_2(false);
+	test_seal_mprotect_unalign_len_variant_2(true);
+
+	test_seal_mprotect_two_vma(false);
+	test_seal_mprotect_two_vma(true);
+
+	test_seal_mprotect_two_vma_with_split(false);
+	test_seal_mprotect_two_vma_with_split(true);
+
+	test_seal_mprotect_partial_mprotect(false);
+	test_seal_mprotect_partial_mprotect(true);
+
+	test_seal_mprotect_two_vma_with_gap(false);
+	test_seal_mprotect_two_vma_with_gap(true);
+
+	test_seal_mprotect_merge(false);
+	test_seal_mprotect_merge(true);
+
+	test_seal_mprotect_split(false);
+	test_seal_mprotect_split(true);
+
+	test_seal_munmap(false);
+	test_seal_munmap(true);
+	test_seal_munmap_two_vma(false);
+	test_seal_munmap_two_vma(true);
+	test_seal_munmap_vma_with_gap(false);
+	test_seal_munmap_vma_with_gap(true);
+
+	test_munmap_start_freed(false);
+	test_munmap_start_freed(true);
+	test_munmap_middle_freed(false);
+	test_munmap_middle_freed(true);
+	test_munmap_end_freed(false);
+	test_munmap_end_freed(true);
+
+	test_seal_mremap_shrink(false);
+	test_seal_mremap_shrink(true);
+	test_seal_mremap_expand(false);
+	test_seal_mremap_expand(true);
+	test_seal_mremap_move(false);
+	test_seal_mremap_move(true);
+
+	test_seal_mremap_shrink_fixed(false);
+	test_seal_mremap_shrink_fixed(true);
+	test_seal_mremap_expand_fixed(false);
+	test_seal_mremap_expand_fixed(true);
+	test_seal_mremap_move_fixed(false);
+	test_seal_mremap_move_fixed(true);
+	test_seal_mremap_move_dontunmap(false);
+	test_seal_mremap_move_dontunmap(true);
+	test_seal_mremap_move_fixed_zero(false);
+	test_seal_mremap_move_fixed_zero(true);
+	test_seal_mremap_move_dontunmap_anyaddr(false);
+	test_seal_mremap_move_dontunmap_anyaddr(true);
+	test_seal_discard_ro_anon(false);
+	test_seal_discard_ro_anon(true);
+	test_seal_discard_ro_anon_on_rw(false);
+	test_seal_discard_ro_anon_on_rw(true);
+	test_seal_discard_ro_anon_on_shared(false);
+	test_seal_discard_ro_anon_on_shared(true);
+	test_seal_discard_ro_anon_on_filebacked(false);
+	test_seal_discard_ro_anon_on_filebacked(true);
+	test_seal_mmap_overwrite_prot(false);
+	test_seal_mmap_overwrite_prot(true);
+	test_seal_mmap_expand(false);
+	test_seal_mmap_expand(true);
+	test_seal_mmap_shrink(false);
+	test_seal_mmap_shrink(true);
+
+	test_seal_merge_and_split();
+	test_seal_zero_address();
+
+	test_seal_discard_ro_anon_on_pkey(false);
+	test_seal_discard_ro_anon_on_pkey(true);
+
+	ksft_finished();
+	return 0;
+}
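For anyone who wants to run this locally: the usual kselftest flow should work, i.e. "make headers_install" from the top of the tree (so the new __NR_mseal definition is visible to the test), then "make -C tools/testing/selftests/mm", and run the generated ./mseal_test binary. The gcc one-liner in the file header is an alternative for a manual build.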
Please fix the following for this and the fifth patch as well:
--> checkpatch.pl --codespell tools/testing/selftests/mm/mseal_test.c
WARNING: Macros with flow control statements should be avoided #42: FILE: tools/testing/selftests/mm/mseal_test.c:42: +#define FAIL_TEST_IF_FALSE(c) do {\ + if (!(c)) {\ + ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\ + goto test_end;\ + } \ + } \ + while (0)
WARNING: Macros with flow control statements should be avoided #50: FILE: tools/testing/selftests/mm/mseal_test.c:50: +#define SKIP_TEST_IF_FALSE(c) do {\ + if (!(c)) {\ + ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\ + goto test_end;\ + } \ + } \ + while (0)
WARNING: Macros with flow control statements should be avoided #59: FILE: tools/testing/selftests/mm/mseal_test.c:59: +#define TEST_END_CHECK() {\ + ksft_test_result_pass("%s\n", __func__);\ + return;\ +test_end:\ + return;\ +}
On 4/15/24 9:35 PM, jeffxu@chromium.org wrote:
From: Jeff Xu jeffxu@chromium.org
Selftest for the memory sealing changes in mmap() and mseal().
Signed-off-by: Jeff Xu jeffxu@chromium.org
tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 1 + tools/testing/selftests/mm/mseal_test.c | 1836 +++++++++++++++++++++++ 3 files changed, 1838 insertions(+) create mode 100644 tools/testing/selftests/mm/mseal_test.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore index d26e962f2ac4..98eaa4590f11 100644 --- a/tools/testing/selftests/mm/.gitignore +++ b/tools/testing/selftests/mm/.gitignore @@ -47,3 +47,4 @@ mkdirty va_high_addr_switch hugetlb_fault_after_madv hugetlb_madv_vs_map +mseal_test diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index eb5f39a2668b..95d10fe1b3c1 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests TEST_GEN_FILES += mrelease_test TEST_GEN_FILES += mremap_dontunmap TEST_GEN_FILES += mremap_test +TEST_GEN_FILES += mseal_test TEST_GEN_FILES += on-fault-limit TEST_GEN_FILES += pagemap_ioctl TEST_GEN_FILES += thuge-gen diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c new file mode 100644 index 000000000000..06c780d1d8e5 --- /dev/null +++ b/tools/testing/selftests/mm/mseal_test. +static void __write_pkey_reg(u64 pkey_reg) +{ +#if defined(__i386__) || defined(__x86_64__) /* arch */
- unsigned int eax = pkey_reg;
- unsigned int ecx = 0;
- unsigned int edx = 0;
- asm volatile(".byte 0x0f,0x01,0xef\n\t"
: : "a" (eax), "c" (ecx), "d" (edx));
- assert(pkey_reg == __read_pkey_reg());
Use ksft_exit_fail_msg() instead of assert() to stay inside the TAP format when a condition is false and an error is generated.
+int main(int argc, char **argv) +{
- bool test_seal = seal_support();
- ksft_print_header();
- if (!test_seal)
ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n");
- if (!pkey_supported())
ksft_print_msg("PKEY not supported\n");
- ksft_set_plan(80);
- test_seal_addseal();
- test_seal_unmapped_start();
- test_seal_unmapped_middle();
- test_seal_unmapped_end();
- test_seal_multiple_vmas();
- test_seal_split_start();
- test_seal_split_end();
- test_seal_invalid_input();
- test_seal_zero_length();
- test_seal_twice();
- test_seal_mprotect(false);
- test_seal_mprotect(true);
- test_seal_start_mprotect(false);
- test_seal_start_mprotect(true);
- test_seal_end_mprotect(false);
- test_seal_end_mprotect(true);
- test_seal_mprotect_unalign_len(false);
- test_seal_mprotect_unalign_len(true);
- test_seal_mprotect_unalign_len_variant_2(false);
- test_seal_mprotect_unalign_len_variant_2(true);
- test_seal_mprotect_two_vma(false);
- test_seal_mprotect_two_vma(true);
- test_seal_mprotect_two_vma_with_split(false);
- test_seal_mprotect_two_vma_with_split(true);
- test_seal_mprotect_partial_mprotect(false);
- test_seal_mprotect_partial_mprotect(true);
- test_seal_mprotect_two_vma_with_gap(false);
- test_seal_mprotect_two_vma_with_gap(true);
- test_seal_mprotect_merge(false);
- test_seal_mprotect_merge(true);
- test_seal_mprotect_split(false);
- test_seal_mprotect_split(true);
- test_seal_munmap(false);
- test_seal_munmap(true);
- test_seal_munmap_two_vma(false);
- test_seal_munmap_two_vma(true);
- test_seal_munmap_vma_with_gap(false);
- test_seal_munmap_vma_with_gap(true);
- test_munmap_start_freed(false);
- test_munmap_start_freed(true);
- test_munmap_middle_freed(false);
- test_munmap_middle_freed(true);
- test_munmap_end_freed(false);
- test_munmap_end_freed(true);
- test_seal_mremap_shrink(false);
- test_seal_mremap_shrink(true);
- test_seal_mremap_expand(false);
- test_seal_mremap_expand(true);
- test_seal_mremap_move(false);
- test_seal_mremap_move(true);
- test_seal_mremap_shrink_fixed(false);
- test_seal_mremap_shrink_fixed(true);
- test_seal_mremap_expand_fixed(false);
- test_seal_mremap_expand_fixed(true);
- test_seal_mremap_move_fixed(false);
- test_seal_mremap_move_fixed(true);
- test_seal_mremap_move_dontunmap(false);
- test_seal_mremap_move_dontunmap(true);
- test_seal_mremap_move_fixed_zero(false);
- test_seal_mremap_move_fixed_zero(true);
- test_seal_mremap_move_dontunmap_anyaddr(false);
- test_seal_mremap_move_dontunmap_anyaddr(true);
- test_seal_discard_ro_anon(false);
- test_seal_discard_ro_anon(true);
- test_seal_discard_ro_anon_on_rw(false);
- test_seal_discard_ro_anon_on_rw(true);
- test_seal_discard_ro_anon_on_shared(false);
- test_seal_discard_ro_anon_on_shared(true);
- test_seal_discard_ro_anon_on_filebacked(false);
- test_seal_discard_ro_anon_on_filebacked(true);
- test_seal_mmap_overwrite_prot(false);
- test_seal_mmap_overwrite_prot(true);
- test_seal_mmap_expand(false);
- test_seal_mmap_expand(true);
- test_seal_mmap_shrink(false);
- test_seal_mmap_shrink(true);
- test_seal_merge_and_split();
- test_seal_zero_address();
- test_seal_discard_ro_anon_on_pkey(false);
- test_seal_discard_ro_anon_on_pkey(true);
- ksft_finished();
- return 0;
The return isn't needed, as ksft_finished() calls exit() with the right exit code.
+}
On Mon, Apr 15, 2024 at 11:32 AM Muhammad Usama Anjum usama.anjum@collabora.com wrote:
Please fix the following for this and the fifth patch as well:
--> checkpatch.pl --codespell tools/testing/selftests/mm/mseal_test.c
WARNING: Macros with flow control statements should be avoided #42: FILE: tools/testing/selftests/mm/mseal_test.c:42: +#define FAIL_TEST_IF_FALSE(c) do {\
if (!(c)) {\
ksft_test_result_fail("%s, line:%d\n", __func__,
__LINE__);\
goto test_end;\
} \
} \
while (0)
WARNING: Macros with flow control statements should be avoided #50: FILE: tools/testing/selftests/mm/mseal_test.c:50: +#define SKIP_TEST_IF_FALSE(c) do {\
if (!(c)) {\
ksft_test_result_skip("%s, line:%d\n", __func__,
__LINE__);\
goto test_end;\
} \
} \
while (0)
WARNING: Macros with flow control statements should be avoided #59: FILE: tools/testing/selftests/mm/mseal_test.c:59: +#define TEST_END_CHECK() {\
ksft_test_result_pass("%s\n", __func__);\
return;\
+test_end:\
return;\
+}
I tried to fix those checkpatch warnings in the past, but found no good solution. If I put the condition check inside each test, the code would have too many "if" statements, which decreases readability. If there is a better solution, I'm happy to adopt it; suggestions are welcome.
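For illustration, one goto-free formulation and the readability cost being described (a sketch only, not part of the patch; EXPECT_TRUE is a hypothetical name):

/* Reports failure but leaves flow control to the caller. */
#define EXPECT_TRUE(c) \
	((c) ? true : \
	 (ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__), false))

static void test_seal_addseal(void)
{
	unsigned long page_size = getpagesize();
	unsigned long size = 4 * page_size;
	void *ptr;

	setup_single_address(size, &ptr);

	/* Every check now needs an explicit if/return at the call site,
	 * which is the extra "if" noise referred to above: */
	if (!EXPECT_TRUE(sys_mseal(ptr, size) == 0))
		return;

	ksft_test_result_pass("%s\n", __func__);
}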
On 4/15/24 9:35 PM, jeffxu@chromium.org wrote:
From: Jeff Xu jeffxu@chromium.org
Selftest for the memory sealing changes in mmap() and mseal().
Signed-off-by: Jeff Xu jeffxu@chromium.org
tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 1 + tools/testing/selftests/mm/mseal_test.c | 1836 +++++++++++++++++++++++ 3 files changed, 1838 insertions(+) create mode 100644 tools/testing/selftests/mm/mseal_test.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore index d26e962f2ac4..98eaa4590f11 100644 --- a/tools/testing/selftests/mm/.gitignore +++ b/tools/testing/selftests/mm/.gitignore @@ -47,3 +47,4 @@ mkdirty va_high_addr_switch hugetlb_fault_after_madv hugetlb_madv_vs_map +mseal_test diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index eb5f39a2668b..95d10fe1b3c1 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests TEST_GEN_FILES += mrelease_test TEST_GEN_FILES += mremap_dontunmap TEST_GEN_FILES += mremap_test +TEST_GEN_FILES += mseal_test TEST_GEN_FILES += on-fault-limit TEST_GEN_FILES += pagemap_ioctl TEST_GEN_FILES += thuge-gen diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c new file mode 100644 index 000000000000..06c780d1d8e5 --- /dev/null +++ b/tools/testing/selftests/mm/mseal_test. +static void __write_pkey_reg(u64 pkey_reg) +{ +#if defined(__i386__) || defined(__x86_64__) /* arch */
unsigned int eax = pkey_reg;
unsigned int ecx = 0;
unsigned int edx = 0;
asm volatile(".byte 0x0f,0x01,0xef\n\t"
: : "a" (eax), "c" (ecx), "d" (edx));
assert(pkey_reg == __read_pkey_reg());
Use ksft_exit_fail_msg() instead of assert() so the output stays in TAP format when the condition is false and an error is reported.
I can remove the usage of assert() from the test.
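e.g. the assert above would then become something like (untested):

	if (pkey_reg != __read_pkey_reg())
		ksft_exit_fail_msg("pkey register readback mismatch, line:%d\n",
				   __LINE__);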
+int main(int argc, char **argv) +{
bool test_seal = seal_support();
ksft_print_header();
if (!test_seal)
ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n");
if (!pkey_supported())
ksft_print_msg("PKEY not supported\n");
ksft_set_plan(80);
test_seal_addseal();
test_seal_unmapped_start();
test_seal_unmapped_middle();
test_seal_unmapped_end();
test_seal_multiple_vmas();
test_seal_split_start();
test_seal_split_end();
test_seal_invalid_input();
test_seal_zero_length();
test_seal_twice();
test_seal_mprotect(false);
test_seal_mprotect(true);
test_seal_start_mprotect(false);
test_seal_start_mprotect(true);
test_seal_end_mprotect(false);
test_seal_end_mprotect(true);
test_seal_mprotect_unalign_len(false);
test_seal_mprotect_unalign_len(true);
test_seal_mprotect_unalign_len_variant_2(false);
test_seal_mprotect_unalign_len_variant_2(true);
test_seal_mprotect_two_vma(false);
test_seal_mprotect_two_vma(true);
test_seal_mprotect_two_vma_with_split(false);
test_seal_mprotect_two_vma_with_split(true);
test_seal_mprotect_partial_mprotect(false);
test_seal_mprotect_partial_mprotect(true);
test_seal_mprotect_two_vma_with_gap(false);
test_seal_mprotect_two_vma_with_gap(true);
test_seal_mprotect_merge(false);
test_seal_mprotect_merge(true);
test_seal_mprotect_split(false);
test_seal_mprotect_split(true);
test_seal_munmap(false);
test_seal_munmap(true);
test_seal_munmap_two_vma(false);
test_seal_munmap_two_vma(true);
test_seal_munmap_vma_with_gap(false);
test_seal_munmap_vma_with_gap(true);
test_munmap_start_freed(false);
test_munmap_start_freed(true);
test_munmap_middle_freed(false);
test_munmap_middle_freed(true);
test_munmap_end_freed(false);
test_munmap_end_freed(true);
test_seal_mremap_shrink(false);
test_seal_mremap_shrink(true);
test_seal_mremap_expand(false);
test_seal_mremap_expand(true);
test_seal_mremap_move(false);
test_seal_mremap_move(true);
test_seal_mremap_shrink_fixed(false);
test_seal_mremap_shrink_fixed(true);
test_seal_mremap_expand_fixed(false);
test_seal_mremap_expand_fixed(true);
test_seal_mremap_move_fixed(false);
test_seal_mremap_move_fixed(true);
test_seal_mremap_move_dontunmap(false);
test_seal_mremap_move_dontunmap(true);
test_seal_mremap_move_fixed_zero(false);
test_seal_mremap_move_fixed_zero(true);
test_seal_mremap_move_dontunmap_anyaddr(false);
test_seal_mremap_move_dontunmap_anyaddr(true);
test_seal_discard_ro_anon(false);
test_seal_discard_ro_anon(true);
test_seal_discard_ro_anon_on_rw(false);
test_seal_discard_ro_anon_on_rw(true);
test_seal_discard_ro_anon_on_shared(false);
test_seal_discard_ro_anon_on_shared(true);
test_seal_discard_ro_anon_on_filebacked(false);
test_seal_discard_ro_anon_on_filebacked(true);
test_seal_mmap_overwrite_prot(false);
test_seal_mmap_overwrite_prot(true);
test_seal_mmap_expand(false);
test_seal_mmap_expand(true);
test_seal_mmap_shrink(false);
test_seal_mmap_shrink(true);
test_seal_merge_and_split();
test_seal_zero_address();
test_seal_discard_ro_anon_on_pkey(false);
test_seal_discard_ro_anon_on_pkey(true);
ksft_finished();
return 0;
The return isn't needed, as ksft_finished() calls exit() with the right exit code.
Sure. I can remove "return 0"
Thanks -Jeff
+}
-- BR, Muhammad Usama Anjum
On Mon, Apr 15, 2024 at 01:27:32PM -0700, Jeff Xu wrote:
On Mon, Apr 15, 2024 at 11:32 AM Muhammad Usama Anjum usama.anjum@collabora.com wrote:
Please fix following for this and fifth patch as well:
--> checkpatch.pl --codespell tools/testing/selftests/mm/mseal_test.c
WARNING: Macros with flow control statements should be avoided
#42: FILE: tools/testing/selftests/mm/mseal_test.c:42:
+#define FAIL_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
WARNING: Macros with flow control statements should be avoided
#50: FILE: tools/testing/selftests/mm/mseal_test.c:50:
+#define SKIP_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
WARNING: Macros with flow control statements should be avoided
#59: FILE: tools/testing/selftests/mm/mseal_test.c:59:
+#define TEST_END_CHECK() {\
+		ksft_test_result_pass("%s\n", __func__);\
+		return;\
+test_end:\
+		return;\
+}
I tried to fix those checkpatch warnings in the past, but found no good solution: if I put the condition check inline in each test, the code ends up with too many "if" statements, which hurts readability. If there is a better solution, I'm happy to adopt it; suggestions are welcome.
Yeah, these are more "conventions" from checkpatch. I think it's fine to ignore this warning, especially for selftests.
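If you want to silence it locally, checkpatch's --ignore option should do it;
MACRO_WITH_FLOW_CONTROL is my best guess at the type name, please double-check
it against checkpatch.pl:

./scripts/checkpatch.pl --codespell --ignore MACRO_WITH_FLOW_CONTROL \
	tools/testing/selftests/mm/mseal_test.c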
On 15/04/2024 17:35, jeffxu@chromium.org wrote:
From: Jeff Xu jeffxu@chromium.org
selftest for memory sealing change in mmap() and mseal().
Signed-off-by: Jeff Xu jeffxu@chromium.org
 tools/testing/selftests/mm/.gitignore   |    1 +
 tools/testing/selftests/mm/Makefile     |    1 +
 tools/testing/selftests/mm/mseal_test.c | 1836 +++++++++++++++++++++++
 3 files changed, 1838 insertions(+)
 create mode 100644 tools/testing/selftests/mm/mseal_test.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index d26e962f2ac4..98eaa4590f11 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -47,3 +47,4 @@ mkdirty
 va_high_addr_switch
 hugetlb_fault_after_madv
 hugetlb_madv_vs_map
+mseal_test
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index eb5f39a2668b..95d10fe1b3c1 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mrelease_test
 TEST_GEN_FILES += mremap_dontunmap
 TEST_GEN_FILES += mremap_test
+TEST_GEN_FILES += mseal_test
 TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += pagemap_ioctl
 TEST_GEN_FILES += thuge-gen
diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
new file mode 100644
index 000000000000..06c780d1d8e5
--- /dev/null
+++ b/tools/testing/selftests/mm/mseal_test.c
@@ -0,0 +1,1836 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <sys/mman.h>
I'm afraid this is causing a build error on our CI, and as a result we are not running any mm selftests currently.
The error is here:
  CC       mseal_test
mseal_test.c: In function ‘test_seal_mremap_move_dontunmap’:
mseal_test.c:1469:50: error: ‘MREMAP_DONTUNMAP’ undeclared (first use in this function)
 1469 |         ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
      |                                                         ^~~~~~~~~~~~~~~~
mseal_test.c:1469:50: note: each undeclared identifier is reported only once for each function it appears in
mseal_test.c: In function ‘test_seal_mremap_move_dontunmap_anyaddr’:
mseal_test.c:1501:50: error: ‘MREMAP_DONTUNMAP’ undeclared (first use in this function)
 1501 |         ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
      |                                                         ^~~~~~~~~~~~~~~~
And I think the reason is due to our CI's toolchain's sys/mman.h not including linux/mman.h where MREMAP_DONTUNMAP is defined.
I think the fix is to explicitly #include <linux/mman.h>, as a number of other mm selftests do.
I'm not sure if this is still in mm-unstable; if so, it would be good to remove it so we can resume our testing.
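For reference, the top of the file would then start roughly like this
(untested, the explicit linux/mman.h include being the only change):

+#define _GNU_SOURCE
+#include <sys/mman.h>
+#include <linux/mman.h>	/* MREMAP_DONTUNMAP on older toolchains */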
+#include <stdint.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdbool.h>
+#include "../kselftest.h"
+#include <syscall.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <assert.h>
+#include <fcntl.h>
+#include <assert.h>
+#include <sys/ioctl.h>
+#include <sys/vfs.h>
+#include <sys/stat.h>
+/*
+ * needed definitions for building manually with gcc:
+ * gcc -I ../../../../usr/include -DDEBUG -O3 mseal_test.c -o mseal_test
+ */
+#ifndef PKEY_DISABLE_ACCESS
+# define PKEY_DISABLE_ACCESS	0x1
+#endif
If you pull in linux/mman.h directly, you shouldn't need this define as it will be pulled in.
+#ifndef PKEY_DISABLE_WRITE
+# define PKEY_DISABLE_WRITE	0x2
+#endif
And this one.
+#ifndef PKEY_BITS_PER_KEY
bug: I think you missed the 'P' in PKEY?
+#define PKEY_BITS_PER_PKEY	2
+#endif
If you #include "pkey-helpers.h" you should get this define.
+#ifndef PKEY_MASK
+#define PKEY_MASK	(PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
+#endif
And you can use the PKEY_ACCESS_MASK macro that will be pulled in to avoid this define too.
Thanks, Ryan
+#define FAIL_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
+#define SKIP_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
+#define TEST_END_CHECK() {\
+		ksft_test_result_pass("%s\n", __func__);\
+		return;\
+test_end:\
+		return;\
+}
+#ifndef u64
+#define u64 unsigned long long
+#endif
+static unsigned long get_vma_size(void *addr, int *prot)
+{
	FILE *maps;
	char line[256];
	int size = 0;
	uintptr_t addr_start, addr_end;
	char protstr[5];

	*prot = 0;
	maps = fopen("/proc/self/maps", "r");
	if (!maps)
		return 0;

	while (fgets(line, sizeof(line), maps)) {
		if (sscanf(line, "%lx-%lx %4s", &addr_start, &addr_end, &protstr) == 3) {
			if (addr_start == (uintptr_t) addr) {
				size = addr_end - addr_start;
				if (protstr[0] == 'r')
					*prot |= 0x4;
				if (protstr[1] == 'w')
					*prot |= 0x2;
				if (protstr[2] == 'x')
					*prot |= 0x1;
				break;
			}
		}
	}

	fclose(maps);
	return size;
+}
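(For reference, the sscanf above matches /proc/self/maps lines of the shape
below - the address values here are made up - and the 0x4/0x2/0x1 bits mirror
PROT_READ/PROT_WRITE/PROT_EXEC:)

	7f1c2a000000-7f1c2a004000 r--p 00000000 00:00 0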
+/*
+ * define sys_xyz helpers to call the syscalls directly.
+ */
+static int sys_mseal(void *start, size_t len)
+{
	int sret;

	errno = 0;
	sret = syscall(__NR_mseal, start, len, 0);
	return sret;
+}
+static int sys_mprotect(void *ptr, size_t size, unsigned long prot)
+{
	int sret;

	errno = 0;
	sret = syscall(__NR_mprotect, ptr, size, prot);
	return sret;
+}
+static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
+		unsigned long pkey)
+{
	int sret;

	errno = 0;
	sret = syscall(__NR_pkey_mprotect, ptr, size, orig_prot, pkey);
	return sret;
+}
+static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
+	unsigned long flags, unsigned long fd, unsigned long offset)
+{
	void *sret;

	errno = 0;
	sret = (void *) syscall(__NR_mmap, addr, len, prot,
			flags, fd, offset);
	return sret;
+}
+static int sys_munmap(void *ptr, size_t size)
+{
	int sret;

	errno = 0;
	sret = syscall(__NR_munmap, ptr, size);
	return sret;
+}
+static int sys_madvise(void *start, size_t len, int types)
+{
	int sret;

	errno = 0;
	sret = syscall(__NR_madvise, start, len, types);
	return sret;
+}
+static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
+{
	int ret = syscall(__NR_pkey_alloc, flags, init_val);

	return ret;
+}
+static unsigned int __read_pkey_reg(void)
+{
	unsigned int pkey_reg = 0;
+#if defined(__i386__) || defined(__x86_64__) /* arch */
	unsigned int eax, edx;
	unsigned int ecx = 0;

	/* the .byte sequence encodes the RDPKRU instruction */
	asm volatile(".byte 0x0f,0x01,0xee\n\t"
		     : "=a" (eax), "=d" (edx)
		     : "c" (ecx));
	pkey_reg = eax;
+#endif
	return pkey_reg;
+}
+static void __write_pkey_reg(u64 pkey_reg)
+{
+#if defined(__i386__) || defined(__x86_64__) /* arch */
	unsigned int eax = pkey_reg;
	unsigned int ecx = 0;
	unsigned int edx = 0;

	/* the .byte sequence encodes the WRPKRU instruction */
	asm volatile(".byte 0x0f,0x01,0xef\n\t"
		     : : "a" (eax), "c" (ecx), "d" (edx));
	assert(pkey_reg == __read_pkey_reg());
+#endif
+}
+static unsigned long pkey_bit_position(int pkey)
+{
	return pkey * PKEY_BITS_PER_PKEY;
+}
+static u64 set_pkey_bits(u64 reg, int pkey, u64 flags)
+{
	unsigned long shift = pkey_bit_position(pkey);

	/* mask out bits from pkey in old value */
	reg &= ~((u64)PKEY_MASK << shift);
	/* OR in new bits for pkey */
	reg |= (flags & PKEY_MASK) << shift;
	return reg;
+}
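(Worked example of the bit math, assuming PKEY_BITS_PER_PKEY == 2: for
pkey == 1 the shift is 2, so PKEY_DISABLE_WRITE (0x2) lands in bits 3:2 of
the register:)

	u64 reg = set_pkey_bits(0, 1, PKEY_DISABLE_WRITE);
	/* reg == 0x8: only pkey 1's write-disable bit is set */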
+static void set_pkey(int pkey, unsigned long pkey_value)
+{
	unsigned long mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE);
	u64 new_pkey_reg;

	assert(!(pkey_value & ~mask));
	new_pkey_reg = set_pkey_bits(__read_pkey_reg(), pkey, pkey_value);
	__write_pkey_reg(new_pkey_reg);
+}
+static void setup_single_address(int size, void **ptrOut)
+{
	void *ptr;

	ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	assert(ptr != (void *)-1);
	*ptrOut = ptr;
+}
+static void setup_single_address_rw(int size, void **ptrOut)
+{
	void *ptr;
	unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;

	ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
	assert(ptr != (void *)-1);
	*ptrOut = ptr;
+}
+static void clean_single_address(void *ptr, int size)
+{
	int ret;

	ret = munmap(ptr, size);
	assert(!ret);
+}
+static void seal_single_address(void *ptr, int size)
+{
	int ret;

	ret = sys_mseal(ptr, size);
	assert(!ret);
+}
+bool seal_support(void)
+{
	int ret;
	void *ptr;
	unsigned long page_size = getpagesize();

	ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	if (ptr == (void *) -1)
		return false;

	ret = sys_mseal(ptr, page_size);
	if (ret < 0)
		return false;

	return true;
+}
+bool pkey_supported(void)
+{
+#if defined(__i386__) || defined(__x86_64__) /* arch */
	int pkey = sys_pkey_alloc(0, 0);

	if (pkey > 0)
		return true;
+#endif
	return false;
+}
+static void test_seal_addseal(void)
+{
- int ret;
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- setup_single_address(size, &ptr);
- ret = sys_mseal(ptr, size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_unmapped_start(void)
+{
- int ret;
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- setup_single_address(size, &ptr);
- /* munmap 2 pages from ptr. */
- ret = sys_munmap(ptr, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- /* mprotect will fail because 2 pages from ptr are unmapped. */
- ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* mseal will fail because 2 pages from ptr are unmapped. */
- ret = sys_mseal(ptr, size);
- FAIL_TEST_IF_FALSE(ret < 0);
- ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_unmapped_middle(void)
+{
- int ret;
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- setup_single_address(size, &ptr);
- /* munmap 2 pages from ptr + page. */
- ret = sys_munmap(ptr + page_size, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- /* mprotect will fail, since middle 2 pages are unmapped. */
- ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* mseal will fail as well. */
- ret = sys_mseal(ptr, size);
- FAIL_TEST_IF_FALSE(ret < 0);
	/* we can still seal the first page and the last page */
- ret = sys_mseal(ptr, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- ret = sys_mseal(ptr + 3 * page_size, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_unmapped_end(void)
+{
- int ret;
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- setup_single_address(size, &ptr);
- /* unmap last 2 pages. */
- ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- /* mprotect will fail since last 2 pages are unmapped. */
- ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* mseal will fail as well. */
- ret = sys_mseal(ptr, size);
- FAIL_TEST_IF_FALSE(ret < 0);
	/* The first 2 pages are not sealed, and can be sealed */
- ret = sys_mseal(ptr, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_multiple_vmas(void)
+{
- int ret;
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- setup_single_address(size, &ptr);
- /* use mprotect to split the vma into 3. */
- ret = sys_mprotect(ptr + page_size, 2 * page_size,
PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* mprotect will get applied to all 4 pages - 3 VMAs. */
- ret = sys_mprotect(ptr, size, PROT_READ);
- FAIL_TEST_IF_FALSE(!ret);
- /* use mprotect to split the vma into 3. */
- ret = sys_mprotect(ptr + page_size, 2 * page_size,
PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
	/* mseal gets applied to all 4 pages - 3 VMAs. */
- ret = sys_mseal(ptr, size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_split_start(void)
+{
- int ret;
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- setup_single_address(size, &ptr);
- /* use mprotect to split at middle */
- ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* seal the first page, this will split the VMA */
- ret = sys_mseal(ptr, page_size);
- FAIL_TEST_IF_FALSE(!ret);
	/* seal the remaining 3 pages */
- ret = sys_mseal(ptr + page_size, 3 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_split_end(void)
+{
- int ret;
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- setup_single_address(size, &ptr);
- /* use mprotect to split at middle */
- ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* seal the last page */
- ret = sys_mseal(ptr + 3 * page_size, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- /* Adding seals to the first 3 pages */
- ret = sys_mseal(ptr, 3 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_invalid_input(void)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(8 * page_size, &ptr);
- clean_single_address(ptr + 4 * page_size, 4 * page_size);
- /* invalid flag */
- ret = syscall(__NR_mseal, ptr, size, 0x20);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* unaligned address */
- ret = sys_mseal(ptr + 1, 2 * page_size);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* length too big */
- ret = sys_mseal(ptr, 5 * page_size);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* length overflow */
- ret = sys_mseal(ptr, UINT64_MAX/page_size);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* start is not in a valid VMA */
- ret = sys_mseal(ptr - page_size, 5 * page_size);
- FAIL_TEST_IF_FALSE(ret < 0);
- TEST_END_CHECK();
+}
+static void test_seal_zero_length(void)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* seal 0 length will be OK, same as mprotect */
- ret = sys_mseal(ptr, 0);
- FAIL_TEST_IF_FALSE(!ret);
- /* verify the 4 pages are not sealed by previous call. */
- ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_zero_address(void)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- int prot;
- /* use mmap to change protection. */
- ptr = sys_mmap(0, size, PROT_NONE,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
- FAIL_TEST_IF_FALSE(ptr == 0);
- size = get_vma_size(ptr, &prot);
- FAIL_TEST_IF_FALSE(size == 4 * page_size);
- ret = sys_mseal(ptr, size);
- FAIL_TEST_IF_FALSE(!ret);
- /* verify the 4 pages are sealed by previous call. */
- ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(ret);
- TEST_END_CHECK();
+}
+static void test_seal_twice(void)
+{
- int ret;
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- setup_single_address(size, &ptr);
- ret = sys_mseal(ptr, size);
- FAIL_TEST_IF_FALSE(!ret);
- /* apply the same seal will be OK. idempotent. */
- ret = sys_mseal(ptr, size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- if (seal)
seal_single_address(ptr, size);
- ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_start_mprotect(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- if (seal)
seal_single_address(ptr, page_size);
- /* the first page is sealed. */
- ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
	/* the pages after the first page are not sealed. */
- ret = sys_mprotect(ptr + page_size, page_size * 3,
PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_end_mprotect(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- if (seal)
seal_single_address(ptr + page_size, 3 * page_size);
- /* first page is not sealed */
- ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* last 3 page are sealed */
- ret = sys_mprotect(ptr + page_size, page_size * 3,
PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect_unalign_len(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- if (seal)
seal_single_address(ptr, page_size * 2 - 1);
- /* 2 pages are sealed. */
- ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- ret = sys_mprotect(ptr + page_size * 2, page_size,
PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect_unalign_len_variant_2(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- if (seal)
seal_single_address(ptr, page_size * 2 + 1);
- /* 3 pages are sealed. */
- ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- ret = sys_mprotect(ptr + page_size * 3, page_size,
PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect_two_vma(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- /* use mprotect to split */
- ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- if (seal)
seal_single_address(ptr, page_size * 4);
- ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- ret = sys_mprotect(ptr + page_size * 2, page_size * 2,
PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect_two_vma_with_split(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- /* use mprotect to split as two vma. */
- ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
	/* mseal can apply across 2 VMAs, and will split them. */
- if (seal)
seal_single_address(ptr + page_size, page_size * 2);
- /* the first page is not sealed. */
- ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* the second page is sealed. */
- ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- /* the third page is sealed. */
- ret = sys_mprotect(ptr + 2 * page_size, page_size,
PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
	/* the fourth page is not sealed. */
- ret = sys_mprotect(ptr + 3 * page_size, page_size,
PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect_partial_mprotect(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- /* seal one page. */
- if (seal)
seal_single_address(ptr, page_size);
	/* mprotect on the first 2 pages will fail, since the first page is sealed. */
- ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect_two_vma_with_gap(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- /* use mprotect to split. */
- ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* use mprotect to split. */
- ret = sys_mprotect(ptr + 3 * page_size, page_size,
PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* use munmap to free two pages in the middle */
- ret = sys_munmap(ptr + page_size, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
	/* mprotect will fail, because there is a gap in the address range. */
	/* note: internally, mprotect still updated the first page. */
- ret = sys_mprotect(ptr, 4 * page_size, PROT_READ);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* mseal will fail as well. */
- ret = sys_mseal(ptr, 4 * page_size);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* the first page is not sealed. */
- ret = sys_mprotect(ptr, page_size, PROT_READ);
- FAIL_TEST_IF_FALSE(ret == 0);
- /* the last page is not sealed. */
- ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ);
- FAIL_TEST_IF_FALSE(ret == 0);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect_split(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- /* use mprotect to split. */
- ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* seal all 4 pages. */
- if (seal) {
ret = sys_mseal(ptr, 4 * page_size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* mprotect is sealed. */
- ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_mprotect_merge(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- /* use mprotect to split one page. */
- ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- /* seal first two pages. */
- if (seal) {
ret = sys_mseal(ptr, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* 2 pages are sealed. */
- ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- /* last 2 pages are not sealed. */
- ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
- FAIL_TEST_IF_FALSE(ret == 0);
- TEST_END_CHECK();
+}
+static void test_seal_munmap(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* 4 pages are sealed. */
- ret = sys_munmap(ptr, size);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+/*
+ * allocate 4 pages,
+ * use mprotect to split them into two VMAs,
+ * seal the whole range,
+ * munmap will fail on both VMAs.
+ */
+static void test_seal_munmap_two_vma(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- /* use mprotect to split */
- ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
- FAIL_TEST_IF_FALSE(!ret);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- ret = sys_munmap(ptr, page_size * 2);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- ret = sys_munmap(ptr + page_size, page_size * 2);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+/*
+ * allocate a VMA with 4 pages,
+ * munmap the middle 2 pages,
+ * sealing the whole 4 pages will fail,
+ * munmap of the first page will be OK,
+ * munmap of the last page will be OK.
+ */
+static void test_seal_munmap_vma_with_gap(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- ret = sys_munmap(ptr + page_size, page_size * 2);
- FAIL_TEST_IF_FALSE(!ret);
- if (seal) {
/* can't have gap in the middle. */
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(ret < 0);
- }
- ret = sys_munmap(ptr, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- ret = sys_munmap(ptr + page_size * 2, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- ret = sys_munmap(ptr, size);
- FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_munmap_start_freed(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- int prot;
- setup_single_address(size, &ptr);
- /* unmap the first page. */
- ret = sys_munmap(ptr, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- /* seal the last 3 pages. */
- if (seal) {
ret = sys_mseal(ptr + page_size, 3 * page_size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* unmap from the first page. */
- ret = sys_munmap(ptr, size);
- if (seal) {
FAIL_TEST_IF_FALSE(ret < 0);
size = get_vma_size(ptr + page_size, &prot);
FAIL_TEST_IF_FALSE(size == page_size * 3);
- } else {
		/* note: this will be OK, even though the first page is already unmapped. */
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + page_size, &prot);
FAIL_TEST_IF_FALSE(size == 0);
- }
- TEST_END_CHECK();
+}
+static void test_munmap_end_freed(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- /* unmap last page. */
- ret = sys_munmap(ptr + page_size * 3, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- /* seal the first 3 pages. */
- if (seal) {
ret = sys_mseal(ptr, 3 * page_size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* unmap all pages. */
- ret = sys_munmap(ptr, size);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_munmap_middle_freed(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- int prot;
- setup_single_address(size, &ptr);
- /* unmap 2 pages in the middle. */
- ret = sys_munmap(ptr + page_size, page_size * 2);
- FAIL_TEST_IF_FALSE(!ret);
- /* seal the first page. */
- if (seal) {
ret = sys_mseal(ptr, page_size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* munmap all 4 pages. */
- ret = sys_munmap(ptr, size);
- if (seal) {
FAIL_TEST_IF_FALSE(ret < 0);
size = get_vma_size(ptr, &prot);
FAIL_TEST_IF_FALSE(size == page_size);
size = get_vma_size(ptr + page_size * 3, &prot);
FAIL_TEST_IF_FALSE(size == page_size);
- } else {
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr, &prot);
FAIL_TEST_IF_FALSE(size == 0);
size = get_vma_size(ptr + page_size * 3, &prot);
FAIL_TEST_IF_FALSE(size == 0);
- }
- TEST_END_CHECK();
+}
+static void test_seal_mremap_shrink(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* shrink from 4 pages to 2 pages. */
- ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else {
FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
- }
- TEST_END_CHECK();
+}
+static void test_seal_mremap_expand(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
	/* unmap the last 2 pages. */
- ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- if (seal) {
ret = sys_mseal(ptr, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* expand from 2 page to 4 pages. */
- ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else {
FAIL_TEST_IF_FALSE(ret2 == ptr);
- }
- TEST_END_CHECK();
+}
+static void test_seal_mremap_move(bool seal)
+{
- void *ptr, *newPtr;
- unsigned long page_size = getpagesize();
- unsigned long size = page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- setup_single_address(size, &newPtr);
- clean_single_address(newPtr, size);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* move from ptr to fixed address. */
- ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else {
FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
- }
- TEST_END_CHECK();
+}
+static void test_seal_mmap_overwrite_prot(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* use mmap to change protection. */
- ret2 = sys_mmap(ptr, size, PROT_NONE,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else
FAIL_TEST_IF_FALSE(ret2 == ptr);
- TEST_END_CHECK();
+}
+static void test_seal_mmap_expand(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 12 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
	/* unmap the last 4 pages. */
- ret = sys_munmap(ptr + 8 * page_size, 4 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- if (seal) {
ret = sys_mseal(ptr, 8 * page_size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* use mmap to expand. */
- ret2 = sys_mmap(ptr, size, PROT_READ,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else
FAIL_TEST_IF_FALSE(ret2 == ptr);
- TEST_END_CHECK();
+}
+static void test_seal_mmap_shrink(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 12 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* use mmap to shrink. */
- ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else
FAIL_TEST_IF_FALSE(ret2 == ptr);
- TEST_END_CHECK();
+}
+static void test_seal_mremap_shrink_fixed(bool seal)
+{
- void *ptr;
- void *newAddr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- setup_single_address(size, &newAddr);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* mremap to move and shrink to fixed address */
- ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
newAddr);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else
FAIL_TEST_IF_FALSE(ret2 == newAddr);
- TEST_END_CHECK();
+}
+static void test_seal_mremap_expand_fixed(bool seal)
+{
- void *ptr;
- void *newAddr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- void *ret2;
- setup_single_address(page_size, &ptr);
- setup_single_address(size, &newAddr);
- if (seal) {
ret = sys_mseal(newAddr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* mremap to move and expand to fixed address */
- ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
newAddr);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else
FAIL_TEST_IF_FALSE(ret2 == newAddr);
- TEST_END_CHECK();
+}
+static void test_seal_mremap_move_fixed(bool seal)
+{
- void *ptr;
- void *newAddr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- setup_single_address(size, &newAddr);
- if (seal) {
ret = sys_mseal(newAddr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* mremap to move to fixed address */
- ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else
FAIL_TEST_IF_FALSE(ret2 == newAddr);
- TEST_END_CHECK();
+}
+static void test_seal_mremap_move_fixed_zero(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /*
* MREMAP_FIXED can move the mapping to zero address
*/
- ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
0);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else {
FAIL_TEST_IF_FALSE(ret2 == 0);
- }
- TEST_END_CHECK();
+}
+static void test_seal_mremap_move_dontunmap(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* mremap to move, and don't unmap src addr. */
- ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else {
FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
- }
- TEST_END_CHECK();
+}
+static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- void *ret2;
- setup_single_address(size, &ptr);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /*
* The 0xdeaddead should not have effect on dest addr
* when MREMAP_DONTUNMAP is set.
*/
- ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
0xdeaddead);
- if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
- } else {
FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
FAIL_TEST_IF_FALSE((long)ret2 != 0xdeaddead);
- }
- TEST_END_CHECK();
+}
+static void test_seal_merge_and_split(void)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size;
- int ret;
- int prot;
- /* (24 RO) */
- setup_single_address(24 * page_size, &ptr);
	/* use mprotect(PROT_NONE) to mark the outer boundaries */
- /* (1 NONE) (22 RO) (1 NONE) */
- ret = sys_mprotect(ptr, page_size, PROT_NONE);
- FAIL_TEST_IF_FALSE(!ret);
- ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
- FAIL_TEST_IF_FALSE(!ret);
- size = get_vma_size(ptr + page_size, &prot);
- FAIL_TEST_IF_FALSE(size == 22 * page_size);
- FAIL_TEST_IF_FALSE(prot == 4);
- /* use mseal to split from beginning */
- /* (1 NONE) (1 RO_SEAL) (21 RO) (1 NONE) */
- ret = sys_mseal(ptr + page_size, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- size = get_vma_size(ptr + page_size, &prot);
- FAIL_TEST_IF_FALSE(size == page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- size = get_vma_size(ptr + 2 * page_size, &prot);
- FAIL_TEST_IF_FALSE(size == 21 * page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- /* use mseal to split from the end. */
- /* (1 NONE) (1 RO_SEAL) (20 RO) (1 RO_SEAL) (1 NONE) */
- ret = sys_mseal(ptr + 22 * page_size, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- size = get_vma_size(ptr + 22 * page_size, &prot);
- FAIL_TEST_IF_FALSE(size == page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- size = get_vma_size(ptr + 2 * page_size, &prot);
- FAIL_TEST_IF_FALSE(size == 20 * page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- /* merge with prev. */
- /* (1 NONE) (2 RO_SEAL) (19 RO) (1 RO_SEAL) (1 NONE) */
- ret = sys_mseal(ptr + 2 * page_size, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- size = get_vma_size(ptr + page_size, &prot);
- FAIL_TEST_IF_FALSE(size == 2 * page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- /* merge with after. */
- /* (1 NONE) (2 RO_SEAL) (18 RO) (2 RO_SEALS) (1 NONE) */
- ret = sys_mseal(ptr + 21 * page_size, page_size);
- FAIL_TEST_IF_FALSE(!ret);
- size = get_vma_size(ptr + 21 * page_size, &prot);
- FAIL_TEST_IF_FALSE(size == 2 * page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- /* split and merge from prev */
- /* (1 NONE) (3 RO_SEAL) (17 RO) (2 RO_SEALS) (1 NONE) */
- ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- size = get_vma_size(ptr + 1 * page_size, &prot);
- FAIL_TEST_IF_FALSE(size == 3 * page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- ret = sys_munmap(ptr + page_size, page_size);
- FAIL_TEST_IF_FALSE(ret < 0);
- ret = sys_mprotect(ptr + 2 * page_size, page_size, PROT_NONE);
- FAIL_TEST_IF_FALSE(ret < 0);
- /* split and merge from next */
- /* (1 NONE) (3 RO_SEAL) (16 RO) (3 RO_SEALS) (1 NONE) */
- ret = sys_mseal(ptr + 20 * page_size, 2 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- size = get_vma_size(ptr + 20 * page_size, &prot);
- FAIL_TEST_IF_FALSE(size == 3 * page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- /* merge from middle of prev and middle of next. */
- /* (1 NONE) (22 RO_SEAL) (1 NONE) */
- ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
- FAIL_TEST_IF_FALSE(!ret);
- size = get_vma_size(ptr + page_size, &prot);
- FAIL_TEST_IF_FALSE(size == 22 * page_size);
- FAIL_TEST_IF_FALSE(prot == 0x4);
- TEST_END_CHECK();
+}
+static void test_seal_discard_ro_anon_on_rw(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address_rw(size, &ptr);
- FAIL_TEST_IF_FALSE(ptr != (void *)-1);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
- /* sealing doesn't take effect on RW memory. */
- ret = sys_madvise(ptr, size, MADV_DONTNEED);
- FAIL_TEST_IF_FALSE(!ret);
	/* the base seal still applies. */
- ret = sys_munmap(ptr, size);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_discard_ro_anon_on_pkey(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- int pkey;
- SKIP_TEST_IF_FALSE(pkey_supported());
- setup_single_address_rw(size, &ptr);
- FAIL_TEST_IF_FALSE(ptr != (void *)-1);
- pkey = sys_pkey_alloc(0, 0);
- FAIL_TEST_IF_FALSE(pkey > 0);
- ret = sys_mprotect_pkey((void *)ptr, size, PROT_READ | PROT_WRITE, pkey);
- FAIL_TEST_IF_FALSE(!ret);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
	/* sealing doesn't take effect if PKRU allows write. */
- set_pkey(pkey, 0);
- ret = sys_madvise(ptr, size, MADV_DONTNEED);
- FAIL_TEST_IF_FALSE(!ret);
	/* sealing will take effect if PKRU denies write. */
- set_pkey(pkey, PKEY_DISABLE_WRITE);
- ret = sys_madvise(ptr, size, MADV_DONTNEED);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
	/* the base seal still applies. */
- ret = sys_munmap(ptr, size);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_discard_ro_anon_on_filebacked(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- int fd;
- unsigned long mapflags = MAP_PRIVATE;
- fd = memfd_create("test", 0);
- FAIL_TEST_IF_FALSE(fd > 0);
- ret = fallocate(fd, 0, 0, size);
- FAIL_TEST_IF_FALSE(!ret);
- ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0);
- FAIL_TEST_IF_FALSE(ptr != MAP_FAILED);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
	/* the madvise(MADV_DONTNEED) restriction doesn't apply to file-backed mappings. */
- ret = sys_madvise(ptr, size, MADV_DONTNEED);
- FAIL_TEST_IF_FALSE(!ret);
- ret = sys_munmap(ptr, size);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- close(fd);
- TEST_END_CHECK();
+}
+static void test_seal_discard_ro_anon_on_shared(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED;
- ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
- FAIL_TEST_IF_FALSE(ptr != (void *)-1);
- if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
- }
	/* the madvise(MADV_DONTNEED) restriction doesn't apply to shared mappings. */
- ret = sys_madvise(ptr, size, MADV_DONTNEED);
- FAIL_TEST_IF_FALSE(!ret);
- ret = sys_munmap(ptr, size);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+static void test_seal_discard_ro_anon(bool seal)
+{
- void *ptr;
- unsigned long page_size = getpagesize();
- unsigned long size = 4 * page_size;
- int ret;
- setup_single_address(size, &ptr);
- if (seal)
seal_single_address(ptr, size);
- ret = sys_madvise(ptr, size, MADV_DONTNEED);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- ret = sys_munmap(ptr, size);
- if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
- else
FAIL_TEST_IF_FALSE(!ret);
- TEST_END_CHECK();
+}
+int main(int argc, char **argv)
+{
	bool test_seal = seal_support();

	ksft_print_header();

	if (!test_seal)
		ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n");

	if (!pkey_supported())
		ksft_print_msg("PKEY not supported\n");

	ksft_set_plan(80);

	test_seal_addseal();
	test_seal_unmapped_start();
	test_seal_unmapped_middle();
	test_seal_unmapped_end();
	test_seal_multiple_vmas();
	test_seal_split_start();
	test_seal_split_end();
	test_seal_invalid_input();
	test_seal_zero_length();
	test_seal_twice();
	test_seal_mprotect(false);
	test_seal_mprotect(true);
	test_seal_start_mprotect(false);
	test_seal_start_mprotect(true);
	test_seal_end_mprotect(false);
	test_seal_end_mprotect(true);
	test_seal_mprotect_unalign_len(false);
	test_seal_mprotect_unalign_len(true);
	test_seal_mprotect_unalign_len_variant_2(false);
	test_seal_mprotect_unalign_len_variant_2(true);
	test_seal_mprotect_two_vma(false);
	test_seal_mprotect_two_vma(true);
	test_seal_mprotect_two_vma_with_split(false);
	test_seal_mprotect_two_vma_with_split(true);
	test_seal_mprotect_partial_mprotect(false);
	test_seal_mprotect_partial_mprotect(true);
	test_seal_mprotect_two_vma_with_gap(false);
	test_seal_mprotect_two_vma_with_gap(true);
	test_seal_mprotect_merge(false);
	test_seal_mprotect_merge(true);
	test_seal_mprotect_split(false);
	test_seal_mprotect_split(true);
	test_seal_munmap(false);
	test_seal_munmap(true);
	test_seal_munmap_two_vma(false);
	test_seal_munmap_two_vma(true);
	test_seal_munmap_vma_with_gap(false);
	test_seal_munmap_vma_with_gap(true);
	test_munmap_start_freed(false);
	test_munmap_start_freed(true);
	test_munmap_middle_freed(false);
	test_munmap_middle_freed(true);
	test_munmap_end_freed(false);
	test_munmap_end_freed(true);
	test_seal_mremap_shrink(false);
	test_seal_mremap_shrink(true);
	test_seal_mremap_expand(false);
	test_seal_mremap_expand(true);
	test_seal_mremap_move(false);
	test_seal_mremap_move(true);
	test_seal_mremap_shrink_fixed(false);
	test_seal_mremap_shrink_fixed(true);
	test_seal_mremap_expand_fixed(false);
	test_seal_mremap_expand_fixed(true);
	test_seal_mremap_move_fixed(false);
	test_seal_mremap_move_fixed(true);
	test_seal_mremap_move_dontunmap(false);
	test_seal_mremap_move_dontunmap(true);
	test_seal_mremap_move_fixed_zero(false);
	test_seal_mremap_move_fixed_zero(true);
	test_seal_mremap_move_dontunmap_anyaddr(false);
	test_seal_mremap_move_dontunmap_anyaddr(true);
	test_seal_discard_ro_anon(false);
	test_seal_discard_ro_anon(true);
	test_seal_discard_ro_anon_on_rw(false);
	test_seal_discard_ro_anon_on_rw(true);
	test_seal_discard_ro_anon_on_shared(false);
	test_seal_discard_ro_anon_on_shared(true);
	test_seal_discard_ro_anon_on_filebacked(false);
	test_seal_discard_ro_anon_on_filebacked(true);
	test_seal_mmap_overwrite_prot(false);
	test_seal_mmap_overwrite_prot(true);
	test_seal_mmap_expand(false);
	test_seal_mmap_expand(true);
	test_seal_mmap_shrink(false);
	test_seal_mmap_shrink(true);
	test_seal_merge_and_split();
	test_seal_zero_address();
	test_seal_discard_ro_anon_on_pkey(false);
	test_seal_discard_ro_anon_on_pkey(true);

	ksft_finished();
	return 0;
+}
On Thu, May 2, 2024 at 4:24 AM Ryan Roberts ryan.roberts@arm.com wrote:
On 15/04/2024 17:35, jeffxu@chromium.org wrote:
From: Jeff Xu jeffxu@chromium.org
selftest for memory sealing change in mmap() and mseal().
Signed-off-by: Jeff Xu jeffxu@chromium.org
 tools/testing/selftests/mm/.gitignore   |    1 +
 tools/testing/selftests/mm/Makefile     |    1 +
 tools/testing/selftests/mm/mseal_test.c | 1836 +++++++++++++++++++++++
 3 files changed, 1838 insertions(+)
 create mode 100644 tools/testing/selftests/mm/mseal_test.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index d26e962f2ac4..98eaa4590f11 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -47,3 +47,4 @@ mkdirty
 va_high_addr_switch
 hugetlb_fault_after_madv
 hugetlb_madv_vs_map
+mseal_test
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index eb5f39a2668b..95d10fe1b3c1 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mrelease_test
 TEST_GEN_FILES += mremap_dontunmap
 TEST_GEN_FILES += mremap_test
+TEST_GEN_FILES += mseal_test
 TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += pagemap_ioctl
 TEST_GEN_FILES += thuge-gen
diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
new file mode 100644
index 000000000000..06c780d1d8e5
--- /dev/null
+++ b/tools/testing/selftests/mm/mseal_test.c
@@ -0,0 +1,1836 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <sys/mman.h>
I'm afraid this is causing a build error on our CI, and as a result we are not running any mm selftests currently.
The error is here:
  CC       mseal_test
mseal_test.c: In function ‘test_seal_mremap_move_dontunmap’:
mseal_test.c:1469:50: error: ‘MREMAP_DONTUNMAP’ undeclared (first use in this function)
 1469 |         ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
      |                                                         ^~~~~~~~~~~~~~~~
mseal_test.c:1469:50: note: each undeclared identifier is reported only once for each function it appears in
mseal_test.c: In function ‘test_seal_mremap_move_dontunmap_anyaddr’:
mseal_test.c:1501:50: error: ‘MREMAP_DONTUNMAP’ undeclared (first use in this function)
 1501 |         ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
      |                                                         ^~~~~~~~~~~~~~~~
And I think the reason is due to our CI's toolchain's sys/mman.h not including linux/mman.h where MREMAP_DONTUNMAP is defined.
I think the fix is to explicitly #include <linux/mman.h>, as a number of other mm selftests do.
I will give it a try today.
I'm not sure if this is still in mm-unstable; if so, it would be good to remove it so we can resume our testing.
+#include <stdint.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdbool.h>
+#include "../kselftest.h"
+#include <syscall.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <assert.h>
+#include <fcntl.h>
+#include <assert.h>
+#include <sys/ioctl.h>
+#include <sys/vfs.h>
+#include <sys/stat.h>
+/*
+ * needed definitions for building manually with gcc:
+ * gcc -I ../../../../usr/include -DDEBUG -O3 mseal_test.c -o mseal_test
+ */
+#ifndef PKEY_DISABLE_ACCESS
+# define PKEY_DISABLE_ACCESS	0x1
+#endif
If you pull in linux/mman.h directly, you shouldn't need this define as it will be pulled in.
+#ifndef PKEY_DISABLE_WRITE
+# define PKEY_DISABLE_WRITE	0x2
+#endif
And this one.
+#ifndef PKEY_BITS_PER_KEY
bug: I think you missed the 'P' in PKEY?
Will be included in the fix.
+#define PKEY_BITS_PER_PKEY	2
+#endif
If you #include "pkey-helpers.h" you should get this define.
pkey-helpers.h includes some externs (e.g. dprint_in_signal), which makes it difficult to include.
+#ifndef PKEY_MASK
+#define PKEY_MASK	(PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
+#endif
And you can use the PKEY_ACCESS_MASK macro that will be pulled in to avoid this define too.
Thanks, Ryan
+#define FAIL_TEST_IF_FALSE(c) do {\
if (!(c)) {\
ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\
goto test_end;\
} \
} \
while (0)
+#define SKIP_TEST_IF_FALSE(c) do {\
if (!(c)) {\
ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\
goto test_end;\
} \
} \
while (0)
+#define TEST_END_CHECK() {\
ksft_test_result_pass("%s\n", __func__);\
return;\
+test_end:\
return;\
+}
+#ifndef u64
+#define u64 unsigned long long
+#endif
+static unsigned long get_vma_size(void *addr, int *prot)
+{
FILE *maps;
char line[256];
int size = 0;
uintptr_t addr_start, addr_end;
char protstr[5];
*prot = 0;
maps = fopen("/proc/self/maps", "r");
if (!maps)
return 0;
while (fgets(line, sizeof(line), maps)) {
if (sscanf(line, "%lx-%lx %4s", &addr_start, &addr_end, &protstr) == 3) {
if (addr_start == (uintptr_t) addr) {
size = addr_end - addr_start;
if (protstr[0] == 'r')
*prot |= 0x4;
if (protstr[1] == 'w')
*prot |= 0x2;
if (protstr[2] == 'x')
*prot |= 0x1;
break;
}
}
}
fclose(maps);
return size;
+}
+/*
+ * define sys_xyz helpers to call the syscalls directly.
+ */
+static int sys_mseal(void *start, size_t len)
+{
int sret;
errno = 0;
sret = syscall(__NR_mseal, start, len, 0);
return sret;
+}
+static int sys_mprotect(void *ptr, size_t size, unsigned long prot)
+{
int sret;
errno = 0;
sret = syscall(__NR_mprotect, ptr, size, prot);
return sret;
+}
+static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
unsigned long pkey)
+{
int sret;
errno = 0;
sret = syscall(__NR_pkey_mprotect, ptr, size, orig_prot, pkey);
return sret;
+}
+static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
unsigned long flags, unsigned long fd, unsigned long offset)
+{
void *sret;
errno = 0;
sret = (void *) syscall(__NR_mmap, addr, len, prot,
flags, fd, offset);
return sret;
+}
+static int sys_munmap(void *ptr, size_t size)
+{
int sret;
errno = 0;
sret = syscall(__NR_munmap, ptr, size);
return sret;
+}
+static int sys_madvise(void *start, size_t len, int types)
+{
int sret;
errno = 0;
sret = syscall(__NR_madvise, start, len, types);
return sret;
+}
+static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
+{
int ret = syscall(__NR_pkey_alloc, flags, init_val);
return ret;
+}
+static unsigned int __read_pkey_reg(void)
+{
unsigned int pkey_reg = 0;
+#if defined(__i386__) || defined(__x86_64__) /* arch */
unsigned int eax, edx;
unsigned int ecx = 0;
asm volatile(".byte 0x0f,0x01,0xee\n\t"
: "=a" (eax), "=d" (edx)
: "c" (ecx));
pkey_reg = eax;
+#endif
return pkey_reg;
+}
+static void __write_pkey_reg(u64 pkey_reg)
+{
+#if defined(__i386__) || defined(__x86_64__) /* arch */
unsigned int eax = pkey_reg;
unsigned int ecx = 0;
unsigned int edx = 0;
asm volatile(".byte 0x0f,0x01,0xef\n\t"
: : "a" (eax), "c" (ecx), "d" (edx));
assert(pkey_reg == __read_pkey_reg());
+#endif
+}
+static unsigned long pkey_bit_position(int pkey)
+{
return pkey * PKEY_BITS_PER_PKEY;
+}
+static u64 set_pkey_bits(u64 reg, int pkey, u64 flags)
+{
unsigned long shift = pkey_bit_position(pkey);
/* mask out bits from pkey in old value */
reg &= ~((u64)PKEY_MASK << shift);
/* OR in new bits for pkey */
reg |= (flags & PKEY_MASK) << shift;
return reg;
+}
+static void set_pkey(int pkey, unsigned long pkey_value)
+{
unsigned long mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE);
u64 new_pkey_reg;
assert(!(pkey_value & ~mask));
new_pkey_reg = set_pkey_bits(__read_pkey_reg(), pkey, pkey_value);
__write_pkey_reg(new_pkey_reg);
+}
+static void setup_single_address(int size, void **ptrOut)
+{
void *ptr;
ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
assert(ptr != (void *)-1);
*ptrOut = ptr;
}
static void setup_single_address_rw(int size, void **ptrOut)
{
void *ptr;
unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
assert(ptr != (void *)-1);
*ptrOut = ptr;
}
static void clean_single_address(void *ptr, int size)
{
int ret;
ret = munmap(ptr, size);
assert(!ret);
}
static void seal_single_address(void *ptr, int size)
{
int ret;
ret = sys_mseal(ptr, size);
assert(!ret);
}
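/* Probe for mseal() support by sealing a scratch mapping. */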
bool seal_support(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
if (ptr == (void *) -1)
return false;
ret = sys_mseal(ptr, page_size);
if (ret < 0)
return false;
return true;
}
bool pkey_supported(void)
{
#if defined(__i386__) || defined(__x86_64__) /* arch */
int pkey = sys_pkey_alloc(0, 0);
if (pkey > 0)
return true;
#endif
return false;
}
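/* Basic test: sealing an anonymous mapping should succeed. */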
static void test_seal_addseal(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
setup_single_address(size, &ptr);
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_unmapped_start(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
setup_single_address(size, &ptr);
/* munmap 2 pages from ptr. */
ret = sys_munmap(ptr, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
/* mprotect will fail because 2 pages from ptr are unmapped. */
ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(ret < 0);
/* mseal will fail because 2 pages from ptr are unmapped. */
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(ret < 0);
ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_unmapped_middle(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
setup_single_address(size, &ptr);
/* munmap 2 pages from ptr + page. */
ret = sys_munmap(ptr + page_size, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
/* mprotect will fail, since middle 2 pages are unmapped. */
ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(ret < 0);
/* mseal will fail as well. */
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(ret < 0);
/* we can still seal the first page and the last page */
ret = sys_mseal(ptr, page_size);
FAIL_TEST_IF_FALSE(!ret);
ret = sys_mseal(ptr + 3 * page_size, page_size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_unmapped_end(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
setup_single_address(size, &ptr);
/* unmap last 2 pages. */
ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
/* mprotect will fail since last 2 pages are unmapped. */
ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(ret < 0);
/* mseal will fail as well. */
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(ret < 0);
/* The first 2 pages are not sealed, and can still be sealed */
ret = sys_mseal(ptr, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_multiple_vmas(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
setup_single_address(size, &ptr);
/* use mprotect to split the vma into 3. */
ret = sys_mprotect(ptr + page_size, 2 * page_size,
PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* mprotect will get applied to all 4 pages - 3 VMAs. */
ret = sys_mprotect(ptr, size, PROT_READ);
FAIL_TEST_IF_FALSE(!ret);
/* use mprotect to split the vma into 3. */
ret = sys_mprotect(ptr + page_size, 2 * page_size,
PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* mseal gets applied to all 4 pages - 3 VMAs. */
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_split_start(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
setup_single_address(size, &ptr);
/* use mprotect to split at middle */
ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* seal the first page, this will split the VMA */
ret = sys_mseal(ptr, page_size);
FAIL_TEST_IF_FALSE(!ret);
/* seal the remaining 3 pages */
ret = sys_mseal(ptr + page_size, 3 * page_size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_split_end(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
setup_single_address(size, &ptr);
/* use mprotect to split at middle */
ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* seal the last page */
ret = sys_mseal(ptr + 3 * page_size, page_size);
FAIL_TEST_IF_FALSE(!ret);
/* Adding seals to the first 3 pages */
ret = sys_mseal(ptr, 3 * page_size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_invalid_input(void)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(8 * page_size, &ptr);
clean_single_address(ptr + 4 * page_size, 4 * page_size);
/* invalid flag */
ret = syscall(__NR_mseal, ptr, size, 0x20);
FAIL_TEST_IF_FALSE(ret < 0);
/* unaligned address */
ret = sys_mseal(ptr + 1, 2 * page_size);
FAIL_TEST_IF_FALSE(ret < 0);
/* length too big */
ret = sys_mseal(ptr, 5 * page_size);
FAIL_TEST_IF_FALSE(ret < 0);
/* length overflow */
ret = sys_mseal(ptr, UINT64_MAX/page_size);
FAIL_TEST_IF_FALSE(ret < 0);
/* start is not in a valid VMA */
ret = sys_mseal(ptr - page_size, 5 * page_size);
FAIL_TEST_IF_FALSE(ret < 0);
TEST_END_CHECK();
}
static void test_seal_zero_length(void)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* sealing a 0-length range is OK, same as mprotect */
ret = sys_mseal(ptr, 0);
FAIL_TEST_IF_FALSE(!ret);
/* verify the 4 pages are not sealed by previous call. */
ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_zero_address(void)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
int prot;
/* map 4 pages at the zero address. */
ptr = sys_mmap(0, size, PROT_NONE,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
FAIL_TEST_IF_FALSE(ptr == 0);
size = get_vma_size(ptr, &prot);
FAIL_TEST_IF_FALSE(size == 4 * page_size);
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
/* verify the 4 pages are sealed by previous call. */
ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(ret);
TEST_END_CHECK();
}
static void test_seal_twice(void)
{
int ret;
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
setup_single_address(size, &ptr);
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
/* applying the same seal again is OK: idempotent. */
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
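/* mprotect() on a sealed range must be rejected; without sealing it must pass. */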
static void test_seal_mprotect(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
if (seal)
seal_single_address(ptr, size);
ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_start_mprotect(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
if (seal)
seal_single_address(ptr, page_size);
/* the first page is sealed. */
ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
/* pages after the first page are not sealed. */
ret = sys_mprotect(ptr + page_size, page_size * 3,
PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_end_mprotect(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
if (seal)
seal_single_address(ptr + page_size, 3 * page_size);
/* first page is not sealed */
ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* last 3 pages are sealed */
ret = sys_mprotect(ptr + page_size, page_size * 3,
PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_mprotect_unalign_len(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
if (seal)
seal_single_address(ptr, page_size * 2 - 1);
/* 2 pages are sealed. */
ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
ret = sys_mprotect(ptr + page_size * 2, page_size,
PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_mprotect_unalign_len_variant_2(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
if (seal)
seal_single_address(ptr, page_size * 2 + 1);
/* 3 pages are sealed. */
ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
ret = sys_mprotect(ptr + page_size * 3, page_size,
PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_mprotect_two_vma(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
/* use mprotect to split */
ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
if (seal)
seal_single_address(ptr, page_size * 4);
ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
ret = sys_mprotect(ptr + page_size * 2, page_size * 2,
PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_mprotect_two_vma_with_split(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
/* use mprotect to split into two VMAs. */
ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* mseal can apply across 2 VMAs and will also split them. */
if (seal)
seal_single_address(ptr + page_size, page_size * 2);
/* the first page is not sealed. */
ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* the second page is sealed. */
ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
/* the third page is sealed. */
ret = sys_mprotect(ptr + 2 * page_size, page_size,
PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
/* the fourth page is not sealed. */
ret = sys_mprotect(ptr + 3 * page_size, page_size,
PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_mprotect_partial_mprotect(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
/* seal one page. */
if (seal)
seal_single_address(ptr, page_size);
/* mprotect on the first 2 pages will fail, since the first page is sealed. */
ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_mprotect_two_vma_with_gap(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
/* use mprotect to split. */
ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* use mprotect to split. */
ret = sys_mprotect(ptr + 3 * page_size, page_size,
PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* use munmap to free two pages in the middle */
ret = sys_munmap(ptr + page_size, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
/* mprotect will fail, because there is a gap in the address range. */
/* note: internally, mprotect still updated the first page. */
ret = sys_mprotect(ptr, 4 * page_size, PROT_READ);
FAIL_TEST_IF_FALSE(ret < 0);
/* mseal will fail as well. */
ret = sys_mseal(ptr, 4 * page_size);
FAIL_TEST_IF_FALSE(ret < 0);
/* the first page is not sealed. */
ret = sys_mprotect(ptr, page_size, PROT_READ);
FAIL_TEST_IF_FALSE(ret == 0);
/* the last page is not sealed. */
ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ);
FAIL_TEST_IF_FALSE(ret == 0);
TEST_END_CHECK();
}
static void test_seal_mprotect_split(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
/* use mprotect to split. */
ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* seal all 4 pages. */
if (seal) {
ret = sys_mseal(ptr, 4 * page_size);
FAIL_TEST_IF_FALSE(!ret);
}
/* mprotect on the sealed range will fail. */
ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_mprotect_merge(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
/* use mprotect to split one page. */
ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
/* seal first two pages. */
if (seal) {
ret = sys_mseal(ptr, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
}
/* 2 pages are sealed. */
ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
/* last 2 pages are not sealed. */
ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
FAIL_TEST_IF_FALSE(ret == 0);
TEST_END_CHECK();
}
static void test_seal_munmap(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* 4 pages are sealed. */
ret = sys_munmap(ptr, size);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
/*
 * allocate 4 pages,
 * use mprotect to split them into two VMAs,
 * seal the whole range,
 * munmap will fail on both VMAs.
 */
static void test_seal_munmap_two_vma(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
/* use mprotect to split */
ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
FAIL_TEST_IF_FALSE(!ret);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
ret = sys_munmap(ptr, page_size * 2);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
ret = sys_munmap(ptr + page_size, page_size * 2);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
/*
 * allocate a VMA with 4 pages.
 * munmap the middle 2 pages.
 * sealing the whole 4 pages will fail.
 * munmap of the first page will be OK.
 * munmap of the last page will be OK.
 */
static void test_seal_munmap_vma_with_gap(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
ret = sys_munmap(ptr + page_size, page_size * 2);
FAIL_TEST_IF_FALSE(!ret);
if (seal) {
/* can't have gap in the middle. */
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(ret < 0);
}
ret = sys_munmap(ptr, page_size);
FAIL_TEST_IF_FALSE(!ret);
ret = sys_munmap(ptr + page_size * 2, page_size);
FAIL_TEST_IF_FALSE(!ret);
ret = sys_munmap(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_munmap_start_freed(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
int prot;
setup_single_address(size, &ptr);
/* unmap the first page. */
ret = sys_munmap(ptr, page_size);
FAIL_TEST_IF_FALSE(!ret);
/* seal the last 3 pages. */
if (seal) {
ret = sys_mseal(ptr + page_size, 3 * page_size);
FAIL_TEST_IF_FALSE(!ret);
}
/* unmap from the first page. */
ret = sys_munmap(ptr, size);
if (seal) {
FAIL_TEST_IF_FALSE(ret < 0);
size = get_vma_size(ptr + page_size, &prot);
FAIL_TEST_IF_FALSE(size == page_size * 3);
} else {
/* note: this will be OK, even though the first page is */
/* already unmapped. */
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + page_size, &prot);
FAIL_TEST_IF_FALSE(size == 0);
}
TEST_END_CHECK();
}
static void test_munmap_end_freed(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
/* unmap last page. */
ret = sys_munmap(ptr + page_size * 3, page_size);
FAIL_TEST_IF_FALSE(!ret);
/* seal the first 3 pages. */
if (seal) {
ret = sys_mseal(ptr, 3 * page_size);
FAIL_TEST_IF_FALSE(!ret);
}
/* unmap all pages. */
ret = sys_munmap(ptr, size);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_munmap_middle_freed(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
int prot;
setup_single_address(size, &ptr);
/* unmap 2 pages in the middle. */
ret = sys_munmap(ptr + page_size, page_size * 2);
FAIL_TEST_IF_FALSE(!ret);
/* seal the first page. */
if (seal) {
ret = sys_mseal(ptr, page_size);
FAIL_TEST_IF_FALSE(!ret);
}
/* munmap all 4 pages. */
ret = sys_munmap(ptr, size);
if (seal) {
FAIL_TEST_IF_FALSE(ret < 0);
size = get_vma_size(ptr, &prot);
FAIL_TEST_IF_FALSE(size == page_size);
size = get_vma_size(ptr + page_size * 3, &prot);
FAIL_TEST_IF_FALSE(size == page_size);
} else {
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr, &prot);
FAIL_TEST_IF_FALSE(size == 0);
size = get_vma_size(ptr + page_size * 3, &prot);
FAIL_TEST_IF_FALSE(size == 0);
}
TEST_END_CHECK();
}
static void test_seal_mremap_shrink(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* shrink from 4 pages to 2 pages. */
ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else {
FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
}
TEST_END_CHECK();
}
static void test_seal_mremap_expand(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
/* unmap the last 2 pages. */
ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
if (seal) {
ret = sys_mseal(ptr, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
}
/* expand from 2 pages to 4 pages. */
ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else {
FAIL_TEST_IF_FALSE(ret2 == ptr);
}
TEST_END_CHECK();
}
static void test_seal_mremap_move(bool seal)
{
void *ptr, *newPtr;
unsigned long page_size = getpagesize();
unsigned long size = page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
setup_single_address(size, &newPtr);
clean_single_address(newPtr, size);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* move from ptr to fixed address. */
ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else {
FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
}
TEST_END_CHECK();
}
static void test_seal_mmap_overwrite_prot(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* use mmap to change protection. */
ret2 = sys_mmap(ptr, size, PROT_NONE,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else
FAIL_TEST_IF_FALSE(ret2 == ptr);
TEST_END_CHECK();
}
static void test_seal_mmap_expand(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 12 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
/* unmap the last 4 pages. */
ret = sys_munmap(ptr + 8 * page_size, 4 * page_size);
FAIL_TEST_IF_FALSE(!ret);
if (seal) {
ret = sys_mseal(ptr, 8 * page_size);
FAIL_TEST_IF_FALSE(!ret);
}
/* use mmap to expand. */
ret2 = sys_mmap(ptr, size, PROT_READ,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else
FAIL_TEST_IF_FALSE(ret2 == ptr);
TEST_END_CHECK();
}
static void test_seal_mmap_shrink(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 12 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* use mmap to shrink. */
ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ,
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else
FAIL_TEST_IF_FALSE(ret2 == ptr);
TEST_END_CHECK();
}
static void test_seal_mremap_shrink_fixed(bool seal)
{
void *ptr;
void *newAddr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
setup_single_address(size, &newAddr);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* mremap to move and shrink to fixed address */
ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
newAddr);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else
FAIL_TEST_IF_FALSE(ret2 == newAddr);
TEST_END_CHECK();
}
static void test_seal_mremap_expand_fixed(bool seal)
{
void *ptr;
void *newAddr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
void *ret2;
setup_single_address(page_size, &ptr);
setup_single_address(size, &newAddr);
if (seal) {
ret = sys_mseal(newAddr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* mremap to move and expand to fixed address */
ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
newAddr);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else
FAIL_TEST_IF_FALSE(ret2 == newAddr);
TEST_END_CHECK();
}
static void test_seal_mremap_move_fixed(bool seal)
{
void *ptr;
void *newAddr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
setup_single_address(size, &newAddr);
if (seal) {
ret = sys_mseal(newAddr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* mremap to move to fixed address */
ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else
FAIL_TEST_IF_FALSE(ret2 == newAddr);
TEST_END_CHECK();
}
static void test_seal_mremap_move_fixed_zero(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/*
* MREMAP_FIXED can move the mapping to zero address
*/
ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
0);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else {
FAIL_TEST_IF_FALSE(ret2 == 0);
}
TEST_END_CHECK();
}
static void test_seal_mremap_move_dontunmap(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* mremap to move, and don't unmap src addr. */
ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else {
FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
}
TEST_END_CHECK();
}
static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
void *ret2;
setup_single_address(size, &ptr);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/*
* The 0xdeaddead should have no effect on the dest addr
* when MREMAP_DONTUNMAP is set.
*/
ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
0xdeaddead);
if (seal) {
FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
FAIL_TEST_IF_FALSE(errno == EPERM);
} else {
FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
FAIL_TEST_IF_FALSE((long)ret2 != 0xdeaddead);
}
TEST_END_CHECK();
}
static void test_seal_merge_and_split(void)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size;
int ret;
int prot;
/* (24 RO) */
setup_single_address(24 * page_size, &ptr);
/* use mprotect(PROT_NONE) to set the outer boundary */
/* (1 NONE) (22 RO) (1 NONE) */
ret = sys_mprotect(ptr, page_size, PROT_NONE);
FAIL_TEST_IF_FALSE(!ret);
ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + page_size, &prot);
FAIL_TEST_IF_FALSE(size == 22 * page_size);
FAIL_TEST_IF_FALSE(prot == 4);
/* use mseal to split from beginning */
/* (1 NONE) (1 RO_SEAL) (21 RO) (1 NONE) */
ret = sys_mseal(ptr + page_size, page_size);
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + page_size, &prot);
FAIL_TEST_IF_FALSE(size == page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
size = get_vma_size(ptr + 2 * page_size, &prot);
FAIL_TEST_IF_FALSE(size == 21 * page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
/* use mseal to split from the end. */
/* (1 NONE) (1 RO_SEAL) (20 RO) (1 RO_SEAL) (1 NONE) */
ret = sys_mseal(ptr + 22 * page_size, page_size);
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + 22 * page_size, &prot);
FAIL_TEST_IF_FALSE(size == page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
size = get_vma_size(ptr + 2 * page_size, &prot);
FAIL_TEST_IF_FALSE(size == 20 * page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
/* merge with prev. */
/* (1 NONE) (2 RO_SEAL) (19 RO) (1 RO_SEAL) (1 NONE) */
ret = sys_mseal(ptr + 2 * page_size, page_size);
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + page_size, &prot);
FAIL_TEST_IF_FALSE(size == 2 * page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
/* merge with after. */
/* (1 NONE) (2 RO_SEAL) (18 RO) (2 RO_SEALS) (1 NONE) */
ret = sys_mseal(ptr + 21 * page_size, page_size);
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + 21 * page_size, &prot);
FAIL_TEST_IF_FALSE(size == 2 * page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
/* split and merge from prev */
/* (1 NONE) (3 RO_SEAL) (17 RO) (2 RO_SEALS) (1 NONE) */
ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + 1 * page_size, &prot);
FAIL_TEST_IF_FALSE(size == 3 * page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
ret = sys_munmap(ptr + page_size, page_size);
FAIL_TEST_IF_FALSE(ret < 0);
ret = sys_mprotect(ptr + 2 * page_size, page_size, PROT_NONE);
FAIL_TEST_IF_FALSE(ret < 0);
/* split and merge from next */
/* (1 NONE) (3 RO_SEAL) (16 RO) (3 RO_SEALS) (1 NONE) */
ret = sys_mseal(ptr + 20 * page_size, 2 * page_size);
FAIL_TEST_IF_FALSE(!ret);
FAIL_TEST_IF_FALSE(prot == 0x4);
size = get_vma_size(ptr + 20 * page_size, &prot);
FAIL_TEST_IF_FALSE(size == 3 * page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
/* merge from middle of prev and middle of next. */
/* (1 NONE) (22 RO_SEAL) (1 NONE) */
ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
FAIL_TEST_IF_FALSE(!ret);
size = get_vma_size(ptr + page_size, &prot);
FAIL_TEST_IF_FALSE(size == 22 * page_size);
FAIL_TEST_IF_FALSE(prot == 0x4);
TEST_END_CHECK();
}
static void test_seal_discard_ro_anon_on_rw(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address_rw(size, &ptr);
FAIL_TEST_IF_FALSE(ptr != (void *)-1);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* sealing doesn't take effect on RW memory. */
ret = sys_madvise(ptr, size, MADV_DONTNEED);
FAIL_TEST_IF_FALSE(!ret);
/* the base seal still applies. */
ret = sys_munmap(ptr, size);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_discard_ro_anon_on_pkey(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
int pkey;
SKIP_TEST_IF_FALSE(pkey_supported());
setup_single_address_rw(size, &ptr);
FAIL_TEST_IF_FALSE(ptr != (void *)-1);
pkey = sys_pkey_alloc(0, 0);
FAIL_TEST_IF_FALSE(pkey > 0);
ret = sys_mprotect_pkey((void *)ptr, size, PROT_READ | PROT_WRITE, pkey);
FAIL_TEST_IF_FALSE(!ret);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* sealing doesn't take effect if PKRU allows write. */
set_pkey(pkey, 0);
ret = sys_madvise(ptr, size, MADV_DONTNEED);
FAIL_TEST_IF_FALSE(!ret);
/* sealing will take effect if PKRU denies write. */
set_pkey(pkey, PKEY_DISABLE_WRITE);
ret = sys_madvise(ptr, size, MADV_DONTNEED);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
/* the base seal still applies. */
ret = sys_munmap(ptr, size);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_discard_ro_anon_on_filebacked(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
int fd;
unsigned long mapflags = MAP_PRIVATE;
fd = memfd_create("test", 0);
FAIL_TEST_IF_FALSE(fd > 0);
ret = fallocate(fd, 0, 0, size);
FAIL_TEST_IF_FALSE(!ret);
ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0);
FAIL_TEST_IF_FALSE(ptr != MAP_FAILED);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* the madvise() sealing doesn't apply to file-backed mappings. */
ret = sys_madvise(ptr, size, MADV_DONTNEED);
FAIL_TEST_IF_FALSE(!ret);
ret = sys_munmap(ptr, size);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
close(fd);
TEST_END_CHECK();
}
static void test_seal_discard_ro_anon_on_shared(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED;
ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
FAIL_TEST_IF_FALSE(ptr != (void *)-1);
if (seal) {
ret = sys_mseal(ptr, size);
FAIL_TEST_IF_FALSE(!ret);
}
/* the madvise() sealing doesn't apply to shared mappings. */
ret = sys_madvise(ptr, size, MADV_DONTNEED);
FAIL_TEST_IF_FALSE(!ret);
ret = sys_munmap(ptr, size);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
static void test_seal_discard_ro_anon(bool seal)
{
void *ptr;
unsigned long page_size = getpagesize();
unsigned long size = 4 * page_size;
int ret;
setup_single_address(size, &ptr);
if (seal)
seal_single_address(ptr, size);
ret = sys_madvise(ptr, size, MADV_DONTNEED);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
ret = sys_munmap(ptr, size);
if (seal)
FAIL_TEST_IF_FALSE(ret < 0);
else
FAIL_TEST_IF_FALSE(!ret);
TEST_END_CHECK();
}
int main(int argc, char **argv)
{
bool test_seal = seal_support();
ksft_print_header();
if (!test_seal)
ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n");
if (!pkey_supported())
ksft_print_msg("PKEY not supported\n");
ksft_set_plan(80);
test_seal_addseal();
test_seal_unmapped_start();
test_seal_unmapped_middle();
test_seal_unmapped_end();
test_seal_multiple_vmas();
test_seal_split_start();
test_seal_split_end();
test_seal_invalid_input();
test_seal_zero_length();
test_seal_twice();
test_seal_mprotect(false);
test_seal_mprotect(true);
test_seal_start_mprotect(false);
test_seal_start_mprotect(true);
test_seal_end_mprotect(false);
test_seal_end_mprotect(true);
test_seal_mprotect_unalign_len(false);
test_seal_mprotect_unalign_len(true);
test_seal_mprotect_unalign_len_variant_2(false);
test_seal_mprotect_unalign_len_variant_2(true);
test_seal_mprotect_two_vma(false);
test_seal_mprotect_two_vma(true);
test_seal_mprotect_two_vma_with_split(false);
test_seal_mprotect_two_vma_with_split(true);
test_seal_mprotect_partial_mprotect(false);
test_seal_mprotect_partial_mprotect(true);
test_seal_mprotect_two_vma_with_gap(false);
test_seal_mprotect_two_vma_with_gap(true);
test_seal_mprotect_merge(false);
test_seal_mprotect_merge(true);
test_seal_mprotect_split(false);
test_seal_mprotect_split(true);
test_seal_munmap(false);
test_seal_munmap(true);
test_seal_munmap_two_vma(false);
test_seal_munmap_two_vma(true);
test_seal_munmap_vma_with_gap(false);
test_seal_munmap_vma_with_gap(true);
test_munmap_start_freed(false);
test_munmap_start_freed(true);
test_munmap_middle_freed(false);
test_munmap_middle_freed(true);
test_munmap_end_freed(false);
test_munmap_end_freed(true);
test_seal_mremap_shrink(false);
test_seal_mremap_shrink(true);
test_seal_mremap_expand(false);
test_seal_mremap_expand(true);
test_seal_mremap_move(false);
test_seal_mremap_move(true);
test_seal_mremap_shrink_fixed(false);
test_seal_mremap_shrink_fixed(true);
test_seal_mremap_expand_fixed(false);
test_seal_mremap_expand_fixed(true);
test_seal_mremap_move_fixed(false);
test_seal_mremap_move_fixed(true);
test_seal_mremap_move_dontunmap(false);
test_seal_mremap_move_dontunmap(true);
test_seal_mremap_move_fixed_zero(false);
test_seal_mremap_move_fixed_zero(true);
test_seal_mremap_move_dontunmap_anyaddr(false);
test_seal_mremap_move_dontunmap_anyaddr(true);
test_seal_discard_ro_anon(false);
test_seal_discard_ro_anon(true);
test_seal_discard_ro_anon_on_rw(false);
test_seal_discard_ro_anon_on_rw(true);
test_seal_discard_ro_anon_on_shared(false);
test_seal_discard_ro_anon_on_shared(true);
test_seal_discard_ro_anon_on_filebacked(false);
test_seal_discard_ro_anon_on_filebacked(true);
test_seal_mmap_overwrite_prot(false);
test_seal_mmap_overwrite_prot(true);
test_seal_mmap_expand(false);
test_seal_mmap_expand(true);
test_seal_mmap_shrink(false);
test_seal_mmap_shrink(true);
test_seal_merge_and_split();
test_seal_zero_address();
test_seal_discard_ro_anon_on_pkey(false);
test_seal_discard_ro_anon_on_pkey(true);
ksft_finished();
return 0;
}
On Thu, May 2, 2024 at 4:24 AM Ryan Roberts ryan.roberts@arm.com wrote:
On 15/04/2024 17:35, jeffxu@chromium.org wrote:
From: Jeff Xu jeffxu@chromium.org
selftest for memory sealing change in mmap() and mseal().
Signed-off-by: Jeff Xu jeffxu@chromium.org
 tools/testing/selftests/mm/.gitignore  |    1 +
 tools/testing/selftests/mm/Makefile    |    1 +
 tools/testing/selftests/mm/mseal_test.c | 1836 +++++++++++++++++++++++
 3 files changed, 1838 insertions(+)
 create mode 100644 tools/testing/selftests/mm/mseal_test.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index d26e962f2ac4..98eaa4590f11 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -47,3 +47,4 @@ mkdirty
 va_high_addr_switch
 hugetlb_fault_after_madv
 hugetlb_madv_vs_map
+mseal_test
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index eb5f39a2668b..95d10fe1b3c1 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mrelease_test
 TEST_GEN_FILES += mremap_dontunmap
 TEST_GEN_FILES += mremap_test
+TEST_GEN_FILES += mseal_test
 TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += pagemap_ioctl
 TEST_GEN_FILES += thuge-gen
diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
new file mode 100644
index 000000000000..06c780d1d8e5
--- /dev/null
+++ b/tools/testing/selftests/mm/mseal_test.c
@@ -0,0 +1,1836 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <sys/mman.h>
I'm afraid this is causing a build error on our CI, and as a result we are not running any mm selftests currently.
The error is here:
  CC       mseal_test
mseal_test.c: In function ‘test_seal_mremap_move_dontunmap’:
mseal_test.c:1469:50: error: ‘MREMAP_DONTUNMAP’ undeclared (first use in this function)
 1469 |         ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
      |                                                         ^~~~~~~~~~~~~~~~
mseal_test.c:1469:50: note: each undeclared identifier is reported only once for each function it appears in
mseal_test.c: In function ‘test_seal_mremap_move_dontunmap_anyaddr’:
mseal_test.c:1501:50: error: ‘MREMAP_DONTUNMAP’ undeclared (first use in this function)
 1501 |         ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
      |                                                         ^~~~~~~~~~~~~~~~
And I think the reason is due to our CI's toolchain's sys/mman.h not including linux/mman.h where MREMAP_DONTUNMAP is defined.
I think the fix is to explicitly #include <linux/mman.h>, as a number of other mm selftests do.
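For reference, the top of mseal_test.c would then look something like this (a sketch of the suggested fix, not a tested hunk):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <linux/mman.h>	/* MREMAP_DONTUNMAP, for toolchains whose sys/mman.h lacks it */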
When I tried to build with aarch64-linux-gnu-gcc, this passed.
aarch64-linux-gnu-gcc -I ../../../../usr/include -DDEBUG -O3 -DDEBUG -O3 mseal_test.c -o mseal_test -lm -Wall
I don't have the exact environment to repro the issue and verify the fix. I will send a patch with the linux/mman.h include.
I will probably need some help to verify the fix on arm build, Ryan, could you help with this ?
Thanks -Jeff
On 02/05/2024 23:39, Jeff Xu wrote:
On Thu, May 2, 2024 at 4:24 AM Ryan Roberts ryan.roberts@arm.com wrote:
...
When I tried to build with aarch64-linux-gnu-gcc, this passed.
aarch64-linux-gnu-gcc -I ../../../../usr/include -DDEBUG -O3 -DDEBUG -O3 mseal_test.c -o mseal_test -lm -Wall
It's the same on my local system; I'm told our CI is using GCC 10, which I suspect makes the difference.
I don't have the exact environment to repro the issue and verify the fix. I will send a patch with the linux/mman.h.
I will probably need some help to verify the fix on arm build, Ryan, could you help with this ?
I'll pass this on to our CI folks, and hopefully get confirmation shortly.
Thanks -Jeff
From: Jeff Xu jeffxu@chromium.org
Add documentation for mseal().
Signed-off-by: Jeff Xu jeffxu@chromium.org
---
 Documentation/userspace-api/index.rst |   1 +
 Documentation/userspace-api/mseal.rst | 199 ++++++++++++++++++++++++++
 2 files changed, 200 insertions(+)
 create mode 100644 Documentation/userspace-api/mseal.rst
diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
index afecfe3cc4a8..5926115ec0ed 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -20,6 +20,7 @@ System calls
    futex2
    ebpf/index
    ioctl/index
+   mseal
 Security-related interfaces
 ===========================
diff --git a/Documentation/userspace-api/mseal.rst b/Documentation/userspace-api/mseal.rst
new file mode 100644
index 000000000000..4132eec995a3
--- /dev/null
+++ b/Documentation/userspace-api/mseal.rst
@@ -0,0 +1,199 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================
+Introduction of mseal
+=====================
+
+:Author: Jeff Xu jeffxu@chromium.org
+
+Modern CPUs support memory permissions such as RW and NX bits. The memory
+permission feature improves security stance on memory corruption bugs, i.e.
+the attacker can’t just write to arbitrary memory and point the code to it,
+the memory has to be marked with X bit, or else an exception will happen.
+
+Memory sealing additionally protects the mapping itself against
+modifications. This is useful to mitigate memory corruption issues where a
+corrupted pointer is passed to a memory management system. For example,
+such an attacker primitive can break control-flow integrity guarantees
+since read-only memory that is supposed to be trusted can become writable
+or .text pages can get remapped. Memory sealing can automatically be
+applied by the runtime loader to seal .text and .rodata pages and
+applications can additionally seal security critical data at runtime.
+
+A similar feature already exists in the XNU kernel with the
+VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable syscall [2].
+
+User API
+========
+mseal()
+-----------
+The mseal() syscall has the following signature:
+
+``int mseal(void addr, size_t len, unsigned long flags)``
+
+**addr/len**: virtual memory address range.
+
+The address range set by ``addr``/``len`` must meet:
+   - The start address must be in an allocated VMA.
+   - The start address must be page aligned.
+   - The end address (``addr`` + ``len``) must be in an allocated VMA.
+   - no gap (unallocated memory) between start and end address.
+
+The ``len`` will be page aligned implicitly by the kernel.
+
+**flags**: reserved for future use.
+
+**return values**:
+
+- ``0``: Success.
+
+- ``-EINVAL``:
+   - Invalid input ``flags``.
+   - The start address (``addr``) is not page aligned.
+   - Address range (``addr`` + ``len``) overflow.
+
+- ``-ENOMEM``:
+   - The start address (``addr``) is not allocated.
+   - The end address (``addr`` + ``len``) is not allocated.
+   - A gap (unallocated memory) between start and end address.
+
+- ``-EPERM``:
+   - sealing is supported only on 64-bit CPUs, 32-bit is not supported.
+
+- For the above error cases, users can expect the given memory range is
+  unmodified, i.e. no partial update.
+
+- There might be other internal errors/cases not listed here, e.g.
+  error during merging/splitting VMAs, or the process reaching the max
+  number of supported VMAs. In those cases, partial updates to the given
+  memory range could happen. However, those cases should be rare.
+
+**Blocked operations after sealing**:
+   Unmapping, moving to another location, and shrinking the size,
+   via munmap() and mremap(), can leave an empty space, therefore
+   can be replaced with a VMA with a new set of attributes.
+
+   Moving or expanding a different VMA into the current location,
+   via mremap().
+
+   Modifying a VMA via mmap(MAP_FIXED).
+
+   Size expansion, via mremap(), does not appear to pose any
+   specific risks to sealed VMAs. It is included anyway because
+   the use case is unclear. In any case, users can rely on
+   merging to expand a sealed VMA.
+
+   mprotect() and pkey_mprotect().
+
+   Some destructive madvise() behaviors (e.g. MADV_DONTNEED)
+   for anonymous memory, when users don't have write permission to the
+   memory. Those behaviors can alter region contents by discarding pages,
+   effectively a memset(0) for anonymous memory.
+
+   The kernel will return -EPERM for blocked operations.
+
+   For blocked operations, one can expect the given address is unmodified,
+   i.e. no partial update. Note, this is different from existing mm
+   system call behaviors, where partial updates are made till an error is
+   found and returned to userspace. To give an example:
+
+   Assume the following code sequence:
+
+   - ptr = mmap(null, 8192, PROT_NONE);
+   - munmap(ptr + 4096, 4096);
+   - ret1 = mprotect(ptr, 8192, PROT_READ);
+   - mseal(ptr, 4096);
+   - ret2 = mprotect(ptr, 8192, PROT_NONE);
+
+   ret1 will be -ENOMEM, the page from ptr is updated to PROT_READ.
+
+   ret2 will be -EPERM, the page remains PROT_READ.
+
+**Note**:
+
+- mseal() only works on 64-bit CPUs, not 32-bit CPUs.
+
+- users can call mseal() multiple times; mseal() on an already sealed
+  memory range is a no-action (not an error).
+
+- munseal() is not supported.
+
+Use cases:
+==========
+- glibc:
+  The dynamic linker, while loading ELF executables, can apply sealing to
+  non-writable memory segments.
+
+- Chrome browser: protect some security sensitive data structures.
+
+Notes on which memory to seal:
+==============================
+
+It might be important to note that sealing changes the lifetime of a mapping,
+i.e. the sealed mapping won’t be unmapped till the process terminates or the
+exec system call is invoked. Applications can apply sealing to any virtual
+memory region from userspace, but it is crucial to thoroughly analyze the
+mapping's lifetime prior to applying the sealing.
+
+For example:
+
+- aio/shm
+
+  aio/shm can call mmap()/munmap() on behalf of userspace, e.g. ksys_shmdt()
+  in shm.c. The lifetime of those mappings is not tied to the lifetime of the
+  process. If those memories are sealed from userspace, then munmap() will
+  fail, causing leaks in VMA address space during the lifetime of the process.
+
+- brk (heap)
+
+  Currently, userspace applications can seal parts of the heap by calling
+  malloc() and mseal().
+  Let's assume the following calls from user space:
+
+  - ptr = malloc(size);
+  - mprotect(ptr, size, RO);
+  - mseal(ptr, size);
+  - free(ptr);
+
+  Technically, before mseal() is added, the user can change the protection of
+  the heap by calling mprotect(RO). As long as the user changes the protection
+  back to RW before free(), the memory range can be reused.
+
+  Adding mseal() into the picture, however, the heap is then sealed partially,
+  the user can still free it, but the memory remains RO. If the address is
+  re-used by the heap manager for another malloc, the process might crash
+  soon after. Therefore, it is important not to apply sealing to any memory
+  that might get recycled.
+
+  Furthermore, even if the application never calls free() for the ptr,
+  the heap manager may invoke the brk system call to shrink the size of the
+  heap. In the kernel, the brk-shrink will call munmap(). Consequently,
+  depending on the location of the ptr, the outcome of brk-shrink is
+  nondeterministic.
+
+
+Additional notes:
+=================
+As Jann Horn pointed out in [3], there are still a few ways to write
+to RO memory, which is, in a way, by design. Those cases are not covered
+by mseal(). If applications want to block such cases, sandbox tools (such as
+seccomp, LSM, etc) might be considered.
+
+Those cases are:
+
+- Write to read-only memory through the /proc/self/mem interface.
+- Write to read-only memory through ptrace (such as PTRACE_POKETEXT).
+- userfaultfd.
+
+The idea that inspired this patch comes from Stephen Röttger’s work in V8
+CFI [4]. Chrome browser in ChromeOS will be the first user of this API.
+
+Reference:
+==========
+[1] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9...
+
+[2] https://man.openbsd.org/mimmutable.2
+
+[3] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfU...
+
+[4] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgea...
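To make the documented behavior concrete, here is a minimal userspace sketch (not part of the patch; it assumes the installed kernel headers define __NR_mseal, since a libc wrapper may not exist yet):

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4 * getpagesize();
	void *p = mmap(NULL, len, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	if (syscall(__NR_mseal, p, len, 0))	/* direct syscall, no libc wrapper assumed */
		return 1;
	/* blocked by the seal: fails with EPERM, the mapping is unmodified */
	if (mprotect(p, len, PROT_READ | PROT_WRITE) < 0 && errno == EPERM)
		printf("mprotect on sealed range rejected with EPERM\n");
	/* munmap is blocked as well */
	if (munmap(p, len) < 0 && errno == EPERM)
		printf("munmap on sealed range rejected with EPERM\n");
	return 0;
}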
From: Jeff Xu jeffxu@chromium.org
Seal the read-only portions of the ELF mapping so they can't be changed by mprotect.
Signed-off-by: Jeff Xu jeffxu@chromium.org --- tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 1 + tools/testing/selftests/mm/seal_elf.c | 183 ++++++++++++++++++++++++++ 3 files changed, 185 insertions(+) create mode 100644 tools/testing/selftests/mm/seal_elf.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore index 98eaa4590f11..0b9ab987601c 100644 --- a/tools/testing/selftests/mm/.gitignore +++ b/tools/testing/selftests/mm/.gitignore @@ -48,3 +48,4 @@ va_high_addr_switch hugetlb_fault_after_madv hugetlb_madv_vs_map mseal_test +seal_elf diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index 95d10fe1b3c1..02392c426759 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -60,6 +60,7 @@ TEST_GEN_FILES += mrelease_test TEST_GEN_FILES += mremap_dontunmap TEST_GEN_FILES += mremap_test TEST_GEN_FILES += mseal_test +TEST_GEN_FILES += seal_elf TEST_GEN_FILES += on-fault-limit TEST_GEN_FILES += pagemap_ioctl TEST_GEN_FILES += thuge-gen diff --git a/tools/testing/selftests/mm/seal_elf.c b/tools/testing/selftests/mm/seal_elf.c new file mode 100644 index 000000000000..61a2f1c94e02 --- /dev/null +++ b/tools/testing/selftests/mm/seal_elf.c @@ -0,0 +1,183 @@ +// SPDX-License-Identifier: GPL-2.0 +#define _GNU_SOURCE +#include <sys/mman.h> +#include <stdint.h> +#include <unistd.h> +#include <string.h> +#include <sys/time.h> +#include <sys/resource.h> +#include <stdbool.h> +#include "../kselftest.h" +#include <syscall.h> +#include <errno.h> +#include <stdio.h> +#include <stdlib.h> +#include <assert.h> +#include <fcntl.h> +#include <assert.h> +#include <sys/ioctl.h> +#include <sys/vfs.h> +#include <sys/stat.h> + +/* + * need those definition for manually build using gcc. + * gcc -I ../../../../usr/include -DDEBUG -O3 -DDEBUG -O3 seal_elf.c -o seal_elf + */ +#define FAIL_TEST_IF_FALSE(c) do {\ + if (!(c)) {\ + ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\ + goto test_end;\ + } \ + } \ + while (0) + +#define SKIP_TEST_IF_FALSE(c) do {\ + if (!(c)) {\ + ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\ + goto test_end;\ + } \ + } \ + while (0) + + +#define TEST_END_CHECK() {\ + ksft_test_result_pass("%s\n", __func__);\ + return;\ +test_end:\ + return;\ +} + +#ifndef u64 +#define u64 unsigned long long +#endif + +/* + * define sys_xyx to call syscall directly. 
+ */ +static int sys_mseal(void *start, size_t len) +{ + int sret; + + errno = 0; + sret = syscall(__NR_mseal, start, len, 0); + return sret; +} + +static void *sys_mmap(void *addr, unsigned long len, unsigned long prot, + unsigned long flags, unsigned long fd, unsigned long offset) +{ + void *sret; + + errno = 0; + sret = (void *) syscall(__NR_mmap, addr, len, prot, + flags, fd, offset); + return sret; +} + +inline int sys_mprotect(void *ptr, size_t size, unsigned long prot) +{ + int sret; + + errno = 0; + sret = syscall(__NR_mprotect, ptr, size, prot); + return sret; +} + +static bool seal_support(void) +{ + int ret; + void *ptr; + unsigned long page_size = getpagesize(); + + ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + if (ptr == (void *) -1) + return false; + + ret = sys_mseal(ptr, page_size); + if (ret < 0) + return false; + + return true; +} + +const char somestr[4096] = {"READONLY"}; + +static void test_seal_elf(void) +{ + int ret; + FILE *maps; + char line[512]; + int size = 0; + uintptr_t addr_start, addr_end; + char prot[5]; + char filename[256]; + unsigned long page_size = getpagesize(); + unsigned long long ptr = (unsigned long long) somestr; + char *somestr2 = (char *)somestr; + + /* + * Modify the protection of readonly somestr + */ + if (((unsigned long long)ptr % page_size) != 0) + ptr = (unsigned long long)ptr & ~(page_size - 1); + + ksft_print_msg("somestr = %s\n", somestr); + ksft_print_msg("change protection to rw\n"); + ret = sys_mprotect((void *)ptr, page_size, PROT_READ|PROT_WRITE); + FAIL_TEST_IF_FALSE(!ret); + *somestr2 = 'A'; + ksft_print_msg("somestr is modified to: %s\n", somestr); + ret = sys_mprotect((void *)ptr, page_size, PROT_READ); + FAIL_TEST_IF_FALSE(!ret); + + maps = fopen("/proc/self/maps", "r"); + FAIL_TEST_IF_FALSE(maps); + + /* + * apply sealing to elf binary + */ + while (fgets(line, sizeof(line), maps)) { + if (sscanf(line, "%lx-%lx %4s %*x %*x:%*x %*u %255[^\n]", + &addr_start, &addr_end, &prot, &filename) == 4) { + if (strlen(filename)) { + /* + * seal the mapping if read only. + */ + if (strstr(prot, "r-")) { + ret = sys_mseal((void *)addr_start, addr_end - addr_start); + FAIL_TEST_IF_FALSE(!ret); + ksft_print_msg("sealed: %lx-%lx %s %s\n", + addr_start, addr_end, prot, filename); + if ((uintptr_t) somestr >= addr_start && + (uintptr_t) somestr <= addr_end) + ksft_print_msg("mapping for somestr found\n"); + } + } + } + } + fclose(maps); + + ret = sys_mprotect((void *)ptr, page_size, PROT_READ | PROT_WRITE); + FAIL_TEST_IF_FALSE(ret < 0); + ksft_print_msg("somestr is sealed, mprotect is rejected\n"); + + TEST_END_CHECK(); +} + +int main(int argc, char **argv) +{ + bool test_seal = seal_support(); + + ksft_print_header(); + ksft_print_msg("pid=%d\n", getpid()); + + if (!test_seal) + ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n"); + + ksft_set_plan(1); + + test_seal_elf(); + + ksft_finished(); + return 0; +}
* jeffxu@chromium.org jeffxu@chromium.org [240415 12:35]:
From: Jeff Xu jeffxu@chromium.org
This is V10 version, it rebases v9 patch to 6.9.rc3. We also applied and tested mseal() in chrome and chromebook.
...
MM perf benchmarks
This patch adds a loop in mprotect/munmap/madvise(DONTNEED) to check the VMAs' sealing flag, so that no partial update is made when any segment within the given memory range is sealed.
To measure the performance impact of this loop, two tests are developed. [8]
The first is measuring the time taken for a particular system call, by using clock_gettime(CLOCK_MONOTONIC). The second is using PERF_COUNT_HW_REF_CPU_CYCLES (exclude user space). Both tests have similar results.
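For the second sampler, the setup can be sketched roughly as follows (assuming the usual perf_event_open(2) usage; this is not the exact benchmark code from [8]):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_kernel_ref_cycles(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_REF_CPU_CYCLES;
	attr.disabled = 1;
	attr.exclude_user = 1;	/* exclude user space, as described above */

	/* pid 0 = this thread, cpu -1 = any cpu, no group, no flags */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

/*
 * around each sampled region:
 *   ioctl(fd, PERF_EVENT_IOC_RESET, 0); ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
 *   ... syscalls under test ...
 *   ioctl(fd, PERF_EVENT_IOC_DISABLE, 0); read(fd, &count, sizeof(count));
 */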
The tests have roughly the below sequence:
	for (i = 0; i < 1000; i++)
		create 1000 mappings (1 page per VMA)
		start the sampling
		for (j = 0; j < 1000; j++)
			mprotect one mapping
		stop and save the sample
		delete 1000 mappings
	calculate all samples.
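The inner sampling of the first test can be sketched like this (page[] is a hypothetical array holding the 1000 single-page mappings; not the exact code from [8]):

#include <sys/mman.h>
#include <time.h>

static long long sample_mprotect_ns(void **page, size_t page_size)
{
	struct timespec t0, t1;
	int j;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (j = 0; j < 1000; j++)
		mprotect(page[j], page_size, PROT_READ);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	return (t1.tv_sec - t0.tv_sec) * 1000000000LL +
	       (t1.tv_nsec - t0.tv_nsec);
}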
Thank you for doing this performance testing.
Below tests are performed on Intel(R) Pentium(R) Gold 7505 @ 2.00GHz, 4G memory, Chromebook.
Based on the latest upstream code:
The first test (measuring time)
syscall__    vmas    t       t_mseal  delta_ns  per_vma  %
munmap__     1       909     944      35        35       104%
munmap__     2       1398    1502     104       52       107%
munmap__     4       2444    2594     149       37       106%
munmap__     8       4029    4323     293       37       107%
munmap__     16      6647    6935     288       18       104%
munmap__     32      11811   12398    587       18       105%
mprotect     1       439     465      26        26       106%
mprotect     2       1659    1745     86        43       105%
mprotect     4       3747    3889     142       36       104%
mprotect     8       6755    6969     215       27       103%
mprotect     16      13748   14144    396       25       103%
mprotect     32      27827   28969    1142      36       104%
madvise_     1       240     262      22        22       109%
madvise_     2       366     442      76        38       121%
madvise_     4       623     751      128       32       121%
madvise_     8       1110    1324     215       27       119%
madvise_     16      2127    2451     324       20       115%
madvise_     32      4109    4642     534       17       113%
The second test (measuring cpu cycle)
syscall__    vmas    cpu     cmseal  delta_cpu per_vma %
munmap__     1       1790    1890    100       100     106%
munmap__     2       2819    3033    214       107     108%
munmap__     4       4959    5271    312       78      106%
munmap__     8       8262    8745    483       60      106%
munmap__     16      13099   14116   1017      64      108%
munmap__     32      23221   24785   1565      49      107%
mprotect     1       906     967     62        62      107%
mprotect     2       3019    3203    184       92      106%
mprotect     4       6149    6569    420       105     107%
mprotect     8       9978    10524   545       68      105%
mprotect     16      20448   21427   979       61      105%
mprotect     32      40972   42935   1963      61      105%
madvise_     1       434     497     63        63      115%
madvise_     2       752     899     147       74      120%
madvise_     4       1313    1513    200       50      115%
madvise_     8       2271    2627    356       44      116%
madvise_     16      4312    4883    571       36      113%
madvise_     32      8376    9319    943       29      111%
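For reference, the kernel-side cycle counting in the second test can be done with perf_event_open(); a hedged sketch follows, where the PERF_COUNT_HW_REF_CPU_CYCLES and exclude-user choices come from the description above and everything else is an illustrative assumption:

#include <string.h>
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a counter of reference cycles spent in the kernel by this thread. */
static int open_ref_cycles(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_HARDWARE;
	attr.size = sizeof(attr);
	attr.config = PERF_COUNT_HW_REF_CPU_CYCLES;
	attr.exclude_user = 1;	/* exclude user space, as described above */

	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

The returned fd would then be wrapped around the syscall under test with ioctl(fd, PERF_EVENT_IOC_RESET, 0) / PERF_EVENT_IOC_ENABLE / PERF_EVENT_IOC_DISABLE, and the count read back with read(fd, &count, sizeof(count)).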
If I am reading this right, madvise() is affected more than the other calls? Is that expected or do we need to have a closer look?
...
When I discussed the mm performance with Brian Makin, an engineer who works on performance, it was brought to my attention that such performance benchmarks, which measure millions of mm syscalls in a tight loop, may not accurately reflect real-world scenarios, such as that of a database service. Also, this was tested on a single piece of hardware with ChromeOS; the data from other hardware or distributions might differ. It might be best to take this data with a grain of salt.
Absolutely, these types of benchmarks are pointless for simulating what will really happen with any sane program.
However, they are valuable in that they can highlight areas where something may have been made more inefficient. These inefficiencies would otherwise be lost in the noise of regular system use. They can be used as a relatively high level sanity check on what you believe is going on.
I appreciate you doing the work on testing the performance here.
...
On Tue, Apr 16, 2024 at 8:13 AM Liam R. Howlett Liam.Howlett@oracle.com wrote:
...
If I am reading this right, madvise() is affected more than the other calls? Is that expected or do we need to have a closer look?
The madvise() has a bigger percentage (per_vma %), but it also has a smaller base value (cpu).
-Jeff
On Tue, Apr 16, 2024 at 12:40 PM Jeff Xu jeffxu@chromium.org wrote:
On Tue, Apr 16, 2024 at 8:13 AM Liam R. Howlett Liam.Howlett@oracle.com wrote:
...
Sorry, it's unclear to me what the "vmas" column denotes. Is that how many VMAs were created before timing the syscall? If so, then 32 is the max that you show here while you seem to have tested with 1000 VMAs. What is the overhead with 1000 VMAs? My worry is that if the overhead grows linearly with the number of VMAs then the effects will be quite noticeable on Android where an application with a few thousand VMAs is not so unusual.
On Thu, Apr 18, 2024 at 1:19 PM Suren Baghdasaryan surenb@google.com wrote:
On Tue, Apr 16, 2024 at 12:40 PM Jeff Xu jeffxu@chromium.org wrote:
On Tue, Apr 16, 2024 at 8:13 AM Liam R. Howlett Liam.Howlett@oracle.com wrote:
...
Sorry, it's unclear to me what the "vmas" column denotes. Is that how many VMAs were created before timing the syscall? If so, then 32 is the max that you show here while you seem to have tested with 1000 VMAs. What is the overhead with 1000 VMAs?
The vmas column is the number of VMAs used in one call.
For example: for 32 and mprotect(ptr,size), the memory range used in mprotect has 32 VMAs.
It also matters how many memory ranges are in use at the time of the test. This is where 1000 comes in: the test creates 1000 memory ranges, each memory range has 32 VMAs, then calls mprotect on the 1000 memory ranges. (The pseudocode was included in the original email.)
My worry is that if the overhead grows linearly with the number of VMAs then the effects will be quite noticeable on Android where an application with a few thousand VMAs is not so unusual.
The overhead is likely to grow linearly with the number of VMA, since it takes time to retrieve VMA's metadata.
Let's use one data sample to look at impact:
Test: munmap 1000 memory range, each memory range has 1 VMA
syscall__    vmas    t       t_mseal delta_ns per_vma %
munmap__     1       909     944     35       35      104%
For those 1000 munmap calls, sealing adds 35000 ns in total, or 35 ns per call.
The delta seems to be insignificant, e.g. it will take about 28571 munmap calls to add up to a 1 ms difference (1000000/35 = 28571).
When I look at the data from 5.10 to 6.8, for the same munmap call, 6.8 adds 552 ns per call, which is 15 times bigger.
syscall__    vmas    t_5_10  t_6_8   delta_ns per_vma %
munmap__     1       357     909     552      552     254%
-Jeff
On Thu, Apr 18, 2024 at 6:22 PM Jeff Xu jeffxu@chromium.org wrote:
On Thu, Apr 18, 2024 at 1:19 PM Suren Baghdasaryan surenb@google.com wrote:
...
The vmas column is the number of VMAs used in one call.
For example: for 32 and mprotect(ptr,size), the memory range used in mprotect has 32 VMAs.
Ok, so the 32 here denotes how many VMAs one mprotect() call spans?
It also matters how many memory ranges are in use at the time of the test. This is where 1000 comes in: the test creates 1000 memory ranges, each memory range has 32 VMAs, then calls mprotect on the 1000 memory ranges. (The pseudocode was included in the original email.)
So, if each range has 32 vmas and you have 1000 ranges then you are creating 32000 vmas? Sorry, your pseudocode does not clarify that. My current understanding is this:
	for (i = 0; i < 1000; i++)
		mmap N*1000 areas (N=[1-32])
		start the sampling
		for (j = 0; j < 1000; j++)
			mprotect N areas with one syscall
		stop and save the sample
		munmap N*1000 areas
	calculate all samples.
Is that correct?
...
I'm not yet claiming the overhead is too high. I want to understand first what is being measured here. Thanks, Suren.
On Fri, Apr 19, 2024 at 7:57 AM Suren Baghdasaryan surenb@google.com wrote:
...
Yes, there will be 32000 VMAs in the system.
The pseudocode is correct in concept. The test implementation is slightly different: it uses mprotect to split the memory and make sure the VMAs don't merge. For details, reference [8] of the original email links to the test code.
-Jeff
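As an aside, one way to build a range of N VMAs that cannot merge, as Jeff describes, is to alternate page protections. A minimal sketch follows; the function name and page size are illustrative assumptions, and the real test is linked from reference [8]:

#include <stddef.h>
#include <sys/mman.h>

/* Carve one contiguous range into nr_vmas single-page VMAs that cannot
 * merge, by giving neighbouring pages different protections. */
static void *make_split_range(int nr_vmas)
{
	long page = 4096;
	char *p = mmap(NULL, (size_t)nr_vmas * page, PROT_READ,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return NULL;
	for (int i = 0; i < nr_vmas; i += 2)
		mprotect(p + (size_t)i * page, page, PROT_READ | PROT_WRITE);
	return p;
}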
On Fri, Apr 19, 2024 at 3:15 PM Jeff Xu jeffxu@chromium.org wrote:
...
Ok, thanks for clarifications. I don't think the overhead is high enough to worry about. Thanks, Suren.
On Fri, Apr 19, 2024 at 2:22 AM Jeff Xu jeffxu@chromium.org wrote:
...
Have you tried to spray around some likely() and unlikely()s? Does that make a difference? I'm thinking that sealing VMAs will be very rare, and mprotect/munmapping them is probably a programming error anyway, so the extra branches in mprotect/munmap/madvise (etc) should be a nice target for some branch annotation.
On Fri, Apr 19, 2024 at 10:59 AM Pedro Falcato pedro.falcato@gmail.com wrote:
On Fri, Apr 19, 2024 at 2:22 AM Jeff Xu jeffxu@chromium.org wrote:
...
Most cost will be in locating the node in the maple tree that stores the VMAs; branch annotation is not possible there.
We could put unlikely() in the can_modify_mm check. I suspect it won't make a measurable difference in the real world. On the other hand, this also causes no harm, and existing mm code uses unlikely/likely in a lot of places, so why not.
I will send a follow-up patch in the next few days.
Thanks -Jeff
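For illustration, the annotation discussed above would be a one-word hint at the call site. A hypothetical sketch, where the surrounding call site is illustrative and can_modify_mm() is the helper named in the exchange above:

	/* Sealed VMAs are expected to be the rare case; hint the branch. */
	if (unlikely(!can_modify_mm(mm, start, end)))
		return -EPERM;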
On Mon, 15 Apr 2024 16:35:19 +0000 jeffxu@chromium.org wrote:
This patchset proposes a new mseal() syscall for the Linux kernel.
I have not moved this into mm-stable for a 6.10 merge. Mainly because of the total lack of Reviewed-by:s and Acked-by:s.
The code appears to be stable enough for a merge.
It's awkward that we're in conference this week, but I ask people to give consideration to the desirability of moving mseal() into mainline sometime over the next week, please.
On Tue, May 14, 2024 at 10:46:46AM -0700, Andrew Morton wrote:
On Mon, 15 Apr 2024 16:35:19 +0000 jeffxu@chromium.org wrote:
This patchset proposes a new mseal() syscall for the Linux kernel.
I have not moved this into mm-stable for a 6.10 merge. Mainly because of the total lack of Reviewed-by:s and Acked-by:s.
Oh, I thought I had already reviewed it. FWIW, please consider it:
Reviewed-by: Kees Cook keescook@chromium.org
The code appears to be stable enough for a merge.
Agreed.
It's awkward that we're in conference this week, but I ask people to give consideration to the desirability of moving mseal() into mainline sometime over the next week, please.
Yes please. :)
On Tue, May 14, 2024 at 12:52:13PM -0700, Kees Cook wrote:
...
Is the plan still to land this for 6.10? With the testing it's had in -next and Liam's review, I think we're good to go?
Thanks!
-Kees
On Thu, 23 May 2024 16:32:26 -0700 Kees Cook keescook@chromium.org wrote:
...
The testing and implementation review seem OK. But from a higher-level perspective Linus doesn't seem to be on board(?). I was planning on holding onto this, see if the discussion progresses across this -rc cycle.
On Thu, 23 May 2024 at 16:54, Andrew Morton akpm@linux-foundation.org wrote:
The testing and implementation review seem OK. But from a higher-level perspective Linus doesn't seem to be on board(?).
Oh, I'm fine with mseal.
I wasn't fine with the insane "m*() system calls should be atomic" discussion where Theo was just making shit up. I honestly don't think mseal() needs it either.
Linus
Andrew Morton akpm@linux-foundation.org writes:
...
It's awkward that we're in conference this week, but I ask people to give consideration to the desirability of moving mseal() into mainline sometime over the next week, please.
I hate to be obnoxious, but I *was* copied ... :)
Not taking a position on merging, but I have to ask: are we convinced at this point that mseal() isn't a chrome-only system call? Did we ever see the glibc patches that were promised?
Thanks,
jon
On Tue, May 14, 2024 at 02:59:57PM -0600, Jonathan Corbet wrote:
...
Not taking a position on merging, but I have to ask: are we convinced at this point that mseal() isn't a chrome-only system call? Did we ever see the glibc patches that were promised?
I think _this_ version of mseal() is OpenBSD's mimmutable() with a basically unused extra 'flags' argument. As such, we have an existence proof that it's useful beyond Chrome.
I think Liam still had concerns around the walk-the-vmas-twice-to-error-out-early part of the implementation? Although we can always fix the implementation later; changing the API is hard.
Matthew Wilcox willy@infradead.org wrote:
...
I think _this_ version of mseal() is OpenBSD's mimmutable() with a basically unused extra 'flags' argument. As such, we have an existence proof that it's useful beyond Chrome.
Yes, it is close enough.
I think Liam still had concerns around the walk-the-vmas-twice-to-error-out-early part of the implementation? Although we can always fix the implementation later; changing the API is hard.
Yes I am a bit worried about the point Liam brings up -- we've discussed it privately at length. Matthew, to keep it short I have a different viewpoint:
Some of the Linux m* system calls have non-conforming, partial-work-then-return-error behaviour. I cannot find anything like this in any system call in any other operating system, and I believe there is a de facto rule against doing this. Linux has an optimization which violates this; I think it could be fixed with fairly minor expense, and can't imagine it affecting a single application.
I worry that the non-atomicity will one day be used by an attacker.
On Tue, 14 May 2024 16:48:47 -0600 "Theo de Raadt" deraadt@openbsd.org wrote:
...
Thanks.
I worry that the non-atomicity will one day be used by an attacker.
How might an attacker exploit this?
Andrew Morton akpm@linux-foundation.org wrote:
I worry that the non-atomicity will one day be used by an attacker.
How might an attacker exploit this?
Various ways which are going to be very application specific. Most ways will depend on munmap / mprotect arguments being incorrect for some reason, and callers not checking the return values.
After the system call, the memory is in a very surprising configuration.
Consider a larger memory region containing the following sections:
[regular memory] [sealed memory] [regular memory containing a secret]
munmap() gets called on the whole region, for some reason. The first section is removed. It hits the sealed memory, and returns EPERM. It does not unmap the sealed region, nor the memory containing the secret.
The return values of mprotect and munmap are *very rarely* checked, which adds additional intrigue. They are not checked because these system calls never failed in this way on systems before Linux.
It is difficult to write test programs which fail under the current ENOMEM situation (the only current failure mode, AFAIK). But with the new mseal() EPERM condition, it will be very easy to write programs which leave memory behind.
I don't know how you'll document this trap in the manual page, let me try.
If msealed memory is found inside the range [start, start+len], earlier memory will be unmapped, but later memory will remain mapped and the system call returns error EPERM.
If kernel memory shortage occurs while unmapping the region, early regions may be unmapped but higher regions may remain mapped, and the system call may return ENOMEM.
I feel so gross now, time for a shower..
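For what it's worth, a probe along the lines Theo describes might look like the sketch below. This is an illustrative assumption of how one would observe the resulting layout: a kernel with mseal() is assumed, and msync() failing with ENOMEM is used to detect unmapped pages.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *a = mmap(NULL, 3 * page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	syscall(__NR_mseal, a + page, page, 0);	/* seal the middle page */

	if (munmap(a, 3 * page) < 0)
		printf("munmap: %s\n", strerror(errno));

	/* msync() fails with ENOMEM on an unmapped page. */
	for (int i = 0; i < 3; i++)
		printf("page %d is %smapped\n", i,
		       msync(a + i * page, page, MS_ASYNC) ? "un" : "still ");
	return 0;
}

Whether the failed munmap() leaves the first page unmapped depends on where the kernel checks the seal, which is exactly the point under discussion.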
On Tue, May 14, 2024 at 05:47:30PM -0600, Theo de Raadt wrote:
...
If the attack consists of an attacker mseal()ing the beginning of an area so as to pin the whole area by making a future munmap() fail, maybe we could make munmap() not stop on such errors and continue to unmap the rest of the area, so that this theoretical attack vector no longer works? After all, munmap() currently skips holes and continues to unmap the area. But then what would prevent the attacker from doing mseal() on the whole area in this case, to prevent it from being unmapped?
Wouldn't it be more effective to have a non-resettable prctl() allowing the application to prefer to be killed upon such a munmap() failure, in order to stay consistent and more robust against this class of attacks?
Willy
On Tue, 14 May 2024 at 20:13, Willy Tarreau w@1wt.eu wrote:
Wouldn't it be more effective to have a non-resettable prctl() allowing the application to prefer to be killed upon such a munmap() failure, in order to stay consistent and more robust against this class of attacks?
This whole argument is based on a castle of sand, and some notion that this is a problem in the first place.
Guys, if you let untrusted code execute random system calls, the whole "look, now unmap() acts oddly" IS THE LEAST OF YOUR ISSUES.
This whole "problem" is made-up. It's not real. Theo is literally upset about something that Linux has done forever, and that has never been an issue.
Stop inventing make-believe problems - there are enough *real* bugs people can look at that you really don't need to.
Linus
On Tue, 14 May 2024 at 20:36, Linus Torvalds torvalds@linux-foundation.org wrote:
Guys, if you let untrusted code execute random system calls, the whole "look, now unmap() acts oddly" IS THE LEAST OF YOUR ISSUES.
Side note: it doesn't even help to make things "atomic". munmap() acts oddly whether it fails completely or whether it fails partially, and if the user doesn't check the result, neither case is great.
If you want to have some "hardened mseal()", you make any attempt to change a mseal'ed memory area be a fatal error. The whole "atomic or not" is a complete red herring.
I'd certainly be ok with that. If the point of mseal is "you can't change this mapping", then anybody who tries to change it is obviously untrustworthy, and killing the whole thing sounds perfectly sane to me.
Maybe that's a first valid use-case for the flags argument.
Linus
On Tue, May 14, 2024 at 09:14:37PM -0700, Linus Torvalds wrote:
On Tue, 14 May 2024 at 20:36, Linus Torvalds torvalds@linux-foundation.org wrote:
Guys, if you let untrusted code execute random system calls, the whole "look, now unmap() acts oddly" IS THE LEAST OF YOUR ISSUES.
I totally agree with this, I'm more speaking about a more general hardening measure, like what is commonly offered via prctl() and that, for example, manages to mitigate the consequences of a successful RCE.
Side note: it doesn't even help to make things "atomic". munmap() acts oddly whether it fails completely or whether it fails partially, and if the user doesn't check the result, neither case is great.
I don't find the "atomic" aspect that important either, however the munmap() man page says:
All pages containing a part of the indicated range are unmapped, and subsequent references to these pages will generate SIGSEGV. It is not an error if the indicated range does not contain any mapped pages.
This alone is an encouragement not to check the result. And BTW, what should one do to try to repair the situation after a failed munmap()? It reads as "best effort" above: usually upon return, anything that could be unmapped was unmapped. That's how I'm reading it. I think it's a nice property that makes this syscall trustable by its users, and contrary to the atomic aspect, I would find it nice if munmap() properly unmapped everything it could and then returned the error caused by encountering a sealed area. For me that's even the opposite of an atomic approach; it's really about making sure to follow the developer's intent as closely as possible regardless of any obstacles.
If you want to have some "hardened mseal()", you make any attempt to change a mseal'ed memory area be a fatal error. The whole "atomic or not" is a complete red herring.
Yep, agreed.
I'd certainly be ok with that. If the point of mseal is "you can't change this mapping", then anybody who tries to change it is obviously untrustworthy, and killing the whole thing sounds perfectly sane to me.
Maybe that's a first valid use-case for the flags argument.
That could be for that use case (developer doing mseal, attacker trying munmap), indeed, though it wouldn't cover the other way around (an attacker doing mseal() in the hope of making a future munmap() fail).
That's what I like with prctl(), it offers the developer a panoply of options to decide when and how to lock down a process in order to mitigate consequences of exploited bugs.
And it could be independent of this series, by essentially focusing on the ability to kill a process that fails to munmap() a sealed area. I.e. there is no need to tie that property to the area itself; it's a matter of whether we consider the process sensitive enough or not.
Willy
On Tue, 14 May 2024 at 15:48, Theo de Raadt deraadt@openbsd.org wrote:
and can't imagine it affecting a single application
Honestly, that's the reason for not caring.
You have to do actively wrong things for this to matter AT ALL.
So no, we're not making gratuitous changes for stupid reasons.
I worry that the non-atomicity will one day be used by an attacker.
Blah blah blah. That's a made-up scare tactic if I ever heard one. It's unworthy of you.
Anybody who does mprotect/mmap/munmap/whatever over multiple independent memory mappings had better know exactly what mappings they are touching. Otherwise they are *already* just doing random crap.
In other words: nobody actually does that. Yes, you have people who first carve out one big area with an mmap(), and then do their own memory management within that area. But the point is, they are very much in control and if they do something inconsistent, they absolutely only have themselves to blame.
And if you have some app that randomly does mprotect etc over multiple memory mappings that it doesn't know what the f*^% they are, then there is no saving such a piece of unbelievable garbage.
So stop the pointless fear-mongering. Linux does the smart thing, which is to not waste a single cycle on something that cannot possibly be relevant.
Linus
On Tue, 14 May 2024 at 17:57, Theo de Raadt deraadt@openbsd.org wrote:
Let's wait and see.
You may not be aware, but the Open Group literally endorses the Linux model:
"When mprotect() fails for reasons other than [EINVAL], the protections on some of the pages in the range [addr,addr+len) may have been changed"
at least according to this:
https://pubs.opengroup.org/onlinepubs/9699919799/functions/mprotect.html
so I think your atomicity arguments have always been misleading. At least for mprotect, POSIX is very explicit about this not being atomic.
I find very similar wording in mmap:
"If mmap() fails for reasons other than [EBADF], [EINVAL], or [ENOTSUP], some of the mappings in the address range starting at addr and continuing for len bytes may have been unmapped"
Maybe some atomicity rules have always been true for BSD, but they've never been true for Linux, and while I don't know how authoritative that opengroup thing is, it's what google found.
(Linus, don't be a jerk)
I'm not the one who makes unsubstantiated statements and uses scare tactics to try to make said arguments sound more valid than they are.
So keep your arguments real, please.
Linus
Linus Torvalds torvalds@linux-foundation.org wrote:
Regarding mprotect(), POSIX also says:
An implementation may permit accesses other than those specified by prot; however, no implementation shall permit a write to succeed where PROT_WRITE has not been set or shall permit any access where PROT_NONE alone has been set.
When sealed memory is encountered in the middle of a range, an error will be returned (which almost no one looks at). Memory after the sealed region will not be fixed to follow this rule.
It may retain higher permission.
Maybe some atomicity rules have always been true for BSD, but they've never been true for Linux, and while I don't know how authoritative that opengroup thing is, it's what google found.
It is not a BSD thing. I searched many kernels. I did not find the Linux behaviour anywhere else.
(Linus, don't be a jerk)
I'm not the one who makes unsubstantiated statements and uses scare tactics to try to make said arguments sound more valid than they are.
So keep your arguments real, please.
CAN YOU PLEASE SHUT IT WITH THE PERSONAL ATTACKS? ARE YOU SO INSECURE THAT YOU NEED TO TAKE A TECHNICAL DISCUSSION AND MAKE IT PERSONAL?
In a new world of immutable / sealed memory, I believe there is a much bigger problem and I would appreciate if the Linux team would give it some consideration.
mprotect and munmap (and other calls) can now fail due to intentional address space manipulation previously requested by the process.
The other previous errors have been transient system effects, like ENOMEM.
This EPERM with partial change is not transient. A 5 line test program can show memory which is not released, or memory which retains incorrect permissions.
Have any of you written test programs?
On Tue, 14 May 2024 at 18:47, Theo de Raadt deraadt@openbsd.org wrote:
Linus Torvalds torvalds@linux-foundation.org wrote:
Regarding mprotect(), POSIX also says:
An implementation may permit accesses other than those specified by prot; however, no implementation shall permit a write to succeed where PROT_WRITE has not been set or shall permit any access where PROT_NONE alone has been set.
Why do you quote entirely irrelevant issues?
If the mprotect didn't succeed, then clearly the above is irrelevant.
When sealed memory is encountered in the middle of a range, an error will be returned (which almost no one looks at). Memory after the sealed region will not be fixed to follow this rule.
It may retain higher permission.
This is not in any way specific to mseal().
Theo, you're making shit up.
You claim that this is somehow new behavior:
The other previous errors have been transient system effects, like ENOMEM.
but that's simply NOT TRUE. Try this:
#include <stdio.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
	/* Just three pages for VM space allocation */
	void *a = mmap(NULL, 3*4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Make the second page a shared read mapping of stdin */
	mmap(a+4096, 4096, PROT_READ, MAP_FIXED | MAP_SHARED, 0, 0);

	/* Turn them all PROT_WRITE */
	mprotect(a, 3*4096, PROT_WRITE);

	fprintf(stderr, "Write to first page\n");
	*(int *) (a+0) = 0;

	fprintf(stderr, "Write to second page\n");
	*(int *) (a+4096) = 0;

	fprintf(stderr, "Write to third page\n");
	*(int *) (a+2*4096) = 0;
}
and what you will get (under Linux) is
$ ./a.out < ./a.out
Write to first page
Write to second page
Segmentation fault (core dumped)
because that mprotect() will have returned EACCES on the shared mapping, but will have successfully made the first one writable.
End result: this whole "transient system effects" thing is just not true. And "mseal()" isn't something new.
If somebody makes random mprotect() calls, and doesn't check the result, they get exactly what they deserve. And mseal() isn't the issue - bad programming is.
Anyway, you're just making things up for your nonexistent arguments. I'm done.
Linus
Linus Torvalds torvalds@linux-foundation.org wrote:
...
Why do you quote entirely irrelevant issues?
If the mprotect didn't succeed, then clearly the above is irrelevant.
Imagine the following region:
     <--------------------------------------------- len
     [region PROT_READ] [region PROT_READ + sealed]
addr ^
then perform mprotect(addr, len, PROT_WRITE | PROT_READ);
This will return -1, with EPERM, when it encounters the sealed region.
I believe that in Linux, since it does not check for errors as a first phase, this changes the first region of memory to PROT_READ | PROT_WRITE. Liam, is that correct? If I am correct, then this follows:
So tell me -- did the mprotect() system call succeed or did not it succeed?
If EPERM means it did not succeed, then why is the first region now writable?
Immediately after this "call that failed", the process can perform a write to that first region. But no successful system call was made to change that memory to PROT_WRITE.
Alternatively, does EPERM mean it did not completely fail, and therefore it is OK that the prot value has been applied? That's really obscure, and undocumented.
In any case it seems PROT_WRITE can be set on memory, and it is even more pointless than before for userland to check the errno, because you can't determine the resulting protection on every page of memory. It's all a mishmash after that.
(There is no POSIX system call to ask "what is the permission of a page or region?")
Theo, you're making shit up.
I'm trying to have a technical discussion. Please change your approach, Linus.
* Theo de Raadt deraadt@openbsd.org [240514 22:42]:
...
I believe that in Linux, since it does not check for errors as a first phase, this changes the first region of memory to PROT_READ | PROT_WRITE. Liam, is that correct?
I really don't want to fight about this - I just want to have reliable code that is maintainable. I think the correctness argument is always going to be unclear because we're all going to interpret the documentation from our point of view - which is probably how we got here in the first place. My opinion of the matter of correctness is, obviously, the least important.
My problem right now is that we're changing it so that we are not consistent in when we should check. I'm not sure how doing both fits into either model, but it increases the chance of the next change going to the 'wrong' side of the argument (whatever side that happens to be from your view).
If there isn't a technical reason to keep the check before, then we should treat mseal the same as all other checks.
If we are going to have an up-front check, then it makes sense to keep the checks that we can (reasonably) do at the same time together. Linus, you said up-front checks are a good thing to aim for.
That said, I don't think the example above will allow the madvise to succeed at all. mseal checks the entire region up front while most other checks occur during the loop across vmas.
Thanks, Liam
* Andrew Morton akpm@linux-foundation.org [240514 13:47]:
...
I have looked at this code a fair bit at this point, but I wanted to get some clarification on the fact that we now have mseal doing checks upfront while others fail in the middle.
mbind:
	/*
	 * If any vma in the range got policy other than MPOL_BIND
	 * or MPOL_PREFERRED_MANY we return error. We don't reset
	 * the home node for vmas we already updated before.
	 */
mlock: mlock will abort (through one path), when it sees a gap in mapped areas, but will not undo what it did so far.
mlock_fixup can fail on vma_modify_flags(), but previous vmas are not updated. This can fail due to allocation failures on the splitting of VMAs (or failed merging). The allocations could happen before, but this is more work (but doable, no doubt).
userfaultfd is... complicated.
And even munmap() can fail and NOT undo the previous split(s). mmap.c:
	/*
	 * If userfaultfd_unmap_prep returns an error the vmas
	 * will remain split, but userland will get a
	 * highly unexpected error anyway. This is no
	 * different than the case where the first of the two
	 * __split_vma fails, but we don't undo the first
	 * split, despite we could. This is unlikely enough
	 * failure that it's not worth optimizing it for.
	 */
But this one is different from the others. madvise:
	/*
	 * If the interval [start,end) covers some unmapped address
	 * ranges, just ignore them, but return -ENOMEM at the end.
	 * - different from the way of handling in mlock etc.
	 */
Either we are planning to clean this up and do what we can up-front, or just move the mseal check with the rest. Otherwise we are making a larger mess with more technical debt for a single user, and I think this is not an acceptable trade-off.
Considering the benchmarks that were provided, performance arguments seem like they are not a concern.
I want to know if we are planning to sort and move existing checks if we proceed with this change?
Thanks, Liam
On Tue, May 14, 2024 at 2:28 PM Liam R. Howlett Liam.Howlett@oracle.com wrote:
- Andrew Morton akpm@linux-foundation.org [240514 13:47]:
On Mon, 15 Apr 2024 16:35:19 +0000 jeffxu@chromium.org wrote:
This patchset proposes a new mseal() syscall for the Linux kernel.
I have not moved this into mm-stable for a 6.10 merge. Mainly because of the total lack of Reviewed-by:s and Acked-by:s.
The code appears to be stable enough for a merge.
It's awkward that we're in conference this week, but I ask people to give consideration to the desirability of moving mseal() into mainline sometime over the next week, please.
I have looked at this code a fair bit at this point, but I wanted to get some clarification on the fact that we now have mseal doing checks upfront while others fail in the middle.
mbind:
        /*
         * If any vma in the range got policy other than MPOL_BIND
         * or MPOL_PREFERRED_MANY we return error. We don't reset
         * the home node for vmas we already updated before.
         */
mlock: mlock will abort (through one path) when it sees a gap in the mapped areas, but it will not undo what it has done so far.
mlock_fixup() can fail in vma_modify_flags(), and vmas already modified earlier in the loop are not rolled back. Such failures come from allocation failures when splitting VMAs (or from failed merging). The allocations could happen up front, but that is more work (doable, no doubt).
userfaultfd is... complicated.
And even munmap() can fail and NOT undo the previous split(s). mmap.c:
        /*
         * If userfaultfd_unmap_prep returns an error the vmas
         * will remain split, but userland will get a
         * highly unexpected error anyway. This is no
         * different than the case where the first of the two
         * __split_vma fails, but we don't undo the first
         * split, despite we could. This is unlikely enough
         * failure that it's not worth optimizing it for.
         */
But this one is different from the others. madvise:
        /*
         * If the interval [start,end) covers some unmapped address
         * ranges, just ignore them, but return -ENOMEM at the end.
         * - different from the way of handling in mlock etc.
         */
Thanks.
The current mseal patch does up-front checking in two different situations:
1. When applying mseal(): check for unallocated memory in the given memory range.
2. When checking the mseal flag during mprotect/munmap/mremap/mmap: the flag check is placed ahead of the main business logic and treated the same as the input-argument checks.
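Situation 1 is observable with a short sketch (assuming a raw-syscall mseal() wrapper like the my_mseal() shown earlier in the thread; the expected errno is my reading of the series, so treat it as an assumption):

        long page = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, 3 * page, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        munmap(p + page, page);         /* leave a gap in the middle */

        /*
         * Situation 1: mseal() itself walks the whole range first and
         * rejects it because of the unallocated hole; nothing at all
         * gets sealed.  Expected failure here is ENOMEM.
         */
        if (my_mseal(p, 3 * page, 0))
                perror("mseal");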
Either we are planning to clean this up and do what we can up-front, or just move the mseal check with the rest. Otherwise we are making a larger mess with more technical debt for a single user, and I think this is not an acceptable trade-off.
The sealing use case is different from that of the regular mm APIs, and this doesn't create additional technical debt. Please allow me to explain those points separately.
The main use case and threat model is that an attacker exploits a vulnerability, has arbitrary write access to the process, and can manipulate some syscall arguments from some threads. Placing the check of the mseal flag ahead of mprotect's main business logic is stricter than doing it in-place, and is meant to make things harder for the attacker, e.g. by blocking an opportunistic munmap attempt made by modifying the size argument.
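A sketch of that attacker scenario (again assuming a raw-syscall my_mseal() wrapper as in the earlier sketch):

        long page = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* Sealed ahead of time, e.g. during process setup. */
        my_mseal(p + page, page, 0);

        /*
         * Suppose legit code meant munmap(p, page) and an attacker
         * corrupts the length to 2 * page so the range reaches the
         * sealed VMA.  With the up-front check the whole call fails
         * with EPERM, and even the unsealed first page survives.
         */
        if (munmap(p, 2 * page))
                perror("munmap");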
Legit app code won't call mprotect/munmap on sealed memory, so from a legit app's point of view the choice between the precheck and in-place check approaches is irrelevant.
The regular (non-sealing) mm APIs serve legit code; legit code knows the whole picture of its memory mappings and is unlikely to rely on an opportunistic partial result.
The use cases are different; I hope we look into the difference rather than forcing them in one direction.
About tech debt: code-wise, placing the pre-check ahead of the main business logic of the mprotect/munmap APIs reduces the size of the code change, and makes it easy to carry from release to release, or to backport.
But let's compare the alternatives - doing it in-place without precheck.
- munmap
munmap() calls arch_unmap(mm, start, end) ahead of the main business logic, so the check of the sealing flag there would have to be architecture-specific. In addition, if arch_unmap() failed due to sealing, the code would still proceed until the main business logic fails again.
- mremap/mmap
The check of sealing would be scattered, e.g. checking the src address range in-place, the dest address range in-place, the unmap in-place, etc. The code would be complex and error-prone.
- mprotect/madvise
Easy to change to in-place.
- mseal
mseal() checks for unallocated memory in the given memory range in its pre-check. Easy to change to in-place (same as mprotect).
The situations in munmap and mremap/mmap make in-place checks less desirable, imo.
Considering the benchmarks that were provided, performance arguments seem like they are not a concern.
Yes. Performance is not a factor in making a design choice on this.
I want to know if we are planning to sort and move existing checks if we proceed with this change?
I would argue that we should not change the existing mm code. mseal is new, so there is no backward-compatibility problem; that is not the case for mprotect and the other mm APIs. E.g. if we changed mprotect to add a precheck for memory gaps, some badly written application might break.
The 'atomic' approach is also really difficult to enforce across the whole mm area, and mseal() doesn't claim to be atomic. Most regular mm APIs go deeper into the mm data structures to update page tables, hardware state, etc.; rolling back all of those error cases has real complexity and performance cost, and I'm not sure the benefit is worth it. In any case, atomicity is a separate topic, unrelated to mm sealing. The current design of mm sealing follows from its use case and practical coding reasons.
Thanks -Jeff
Thanks, Liam
* Jeff Xu jeffxu@chromium.org [240515 13:18]: ...
The current mseal patch does up-front checking in two different situations:
1. When applying mseal(): check for unallocated memory in the given memory range.
2. When checking the mseal flag during mprotect/munmap/mremap/mmap: the flag check is placed ahead of the main business logic and treated the same as the input-argument checks.
Either we are planning to clean this up and do what we can up-front, or just move the mseal check with the rest. Otherwise we are making a larger mess with more technical debt for a single user, and I think this is not an acceptable trade-off.
The sealing use case is different from that of the regular mm APIs, and this doesn't create additional technical debt. Please allow me to explain those points separately.
The main use case and threat model is that an attacker exploits a vulnerability, has arbitrary write access to the process, and can manipulate some syscall arguments from some threads. Placing the check of the mseal flag ahead of mprotect's main business logic is stricter than doing it in-place, and is meant to make things harder for the attacker, e.g. by blocking an opportunistic munmap attempt made by modifying the size argument.
If you can manipulate some arguments to syscalls, couldn't the attacker avoid having the VMA mseal'ed in the first place?
Again I don't care where the check goes - but having it happen alone is pointless.
Legit app code won't call mprotect/munmap on sealed memory, so from a legit app's point of view the choice between the precheck and in-place check approaches is irrelevant.
So let's do them together.
...
About tech debt: code-wise, placing the pre-check ahead of the main business logic of the mprotect/munmap APIs reduces the size of the code change, and makes it easy to carry from release to release, or to backport.
It sounds like the other changes to the looping code in recent kernels are going to mess up the backporting if we do this with the rest of the checks.
But let's compare the alternatives - doing it in-place without precheck.
- munmap
munmap() calls arch_unmap(mm, start, end) ahead of the main business logic, so the check of the sealing flag there would have to be architecture-specific. In addition, if arch_unmap() failed due to sealing, the code would still proceed until the main business logic fails again.
You are going to mseal the vdso?
- mremap/mmap
The check of sealing would be scattered, e.g. checking the src address range in-place, the dest address range in-place, the unmap in-place, etc. The code would be complex and error-prone.
- mprotect/madvise
Easy to change to in-place.
- mseal
mseal() checks for unallocated memory in the given memory range in its pre-check. Easy to change to in-place (same as mprotect).
The situations in munmap and mremap/mmap make in-place checks less desirable, imo.
Considering the benchmarks that were provided, performance arguments seem like they are not a concern.
Yes. Performance is not a factor in making a design choice on this.
I want to know if we are planning to sort and move existing checks if we proceed with this change?
I would argue that we should not change the existing mm code. mseal is new, so there is no backward-compatibility problem; that is not the case for mprotect and the other mm APIs. E.g. if we changed mprotect to add a precheck for memory gaps, some badly written application might break.
This is a weak argument. Your new function may break these badly written applications *if* gcc adds support. If you're not checking the return type then it doesn't really matter - the application will run into issues rather quickly anyways. The only thing that you could argue is the speed - but you've proven that false.
The 'atomic' approach is also really difficult to enforce across the whole mm area, and mseal() doesn't claim to be atomic. Most regular mm APIs go deeper into the mm data structures to update page tables, hardware state, etc.; rolling back all of those error cases has real complexity and performance cost, and I'm not sure the benefit is worth it. In any case, atomicity is a separate topic, unrelated to mm sealing. The current design of mm sealing follows from its use case and practical coding reasons.
"best effort" is what I'm saying. It's actually not really difficult to do atomic, but no one cares besides Theo.
How hard is it to put userfaultfd into your loop and avoid having that horrible userfaultfd handling in munmap? For years people have seen horrible failure paths and just dumped in a huge comment saying "but it's okay because it's probably not going to happen". But now we're putting this test up front, and doing it alone - Why?
Thanks, Liam
On Wed, May 15, 2024 at 3:19 PM Liam R. Howlett Liam.Howlett@oracle.com wrote:
- Jeff Xu jeffxu@chromium.org [240515 13:18]:
...
The current mseal patch does up-front checking in two different situations:
1. When applying mseal(): check for unallocated memory in the given memory range.
2. When checking the mseal flag during mprotect/munmap/mremap/mmap: the flag check is placed ahead of the main business logic and treated the same as the input-argument checks.
Either we are planning to clean this up and do what we can up-front, or just move the mseal check with the rest. Otherwise we are making a larger mess with more technical debt for a single user, and I think this is not an acceptable trade-off.
The sealing use case is different from that of the regular mm APIs, and this doesn't create additional technical debt. Please allow me to explain those points separately.
The main use case and threat model is that an attacker exploits a vulnerability, has arbitrary write access to the process, and can manipulate some syscall arguments from some threads. Placing the check of the mseal flag ahead of mprotect's main business logic is stricter than doing it in-place, and is meant to make things harder for the attacker, e.g. by blocking an opportunistic munmap attempt made by modifying the size argument.
If you can manipulate some arguments to syscalls, couldn't the attacker avoid having the VMA mseal'ed in the first place?
mm sealing can be applied in advance. This type of approach is common in sandboxers, e.g. setting up a restrictive environment in advance.
Again I don't care where the check goes - but having it happen alone is pointless.
Legit app code won't call mprotect/munmap on sealed memory, so from a legit app's point of view the choice between the precheck and in-place check approaches is irrelevant.
So let's do them together.
For the use case I described in the threat model, precheck is the better approach. Legit code doesn't care.
...
About tech debt: code-wise, placing the pre-check ahead of the main business logic of the mprotect/munmap APIs reduces the size of the code change, and makes it easy to carry from release to release, or to backport.
It sounds like the other changes to the looping code in recent kernels are going to mess up the backporting if we do this with the rest of the checks.
What other changes do you refer to?
I backported V9 to 5.10 when I ran the performance test at your request, and the backport to 5.10 was relatively straightforward: the mseal flag check goes after the input-argument checks and before the main business logic.
But let's compare the alternatives - doing it in-place without precheck.
- munmap
munmap() calls arch_unmap(mm, start, end) ahead of the main business logic, so the check of the sealing flag there would have to be architecture-specific. In addition, if arch_unmap() failed due to sealing, the code would still proceed until the main business logic fails again.
You are going to mseal the vdso?
How is that relevant? To answer your question: I don't know at this moment. The initial scope of the libc change is sealing the RO/RX parts during ELF loading, e.g. .text and .RELRO.
- mremap/mmap
The check of sealing would be scattered, e.g. checking the src address range in-place, the dest address range in-place, the unmap in-place, etc. The code would be complex and error-prone.
- mprotect/madvise
Easy to change to in-place.
- mseal
mseal() checks for unallocated memory in the given memory range in its pre-check. Easy to change to in-place (same as mprotect).
The situations in munmap and mremap/mmap make in-place checks less desirable, imo.
Considering the benchmarks that were provided, performance arguments seem like they are not a concern.
Yes. Performance is not a factor in making a design choice on this.
I want to know if we are planning to sort and move existing checks if we proceed with this change?
I would argue that we should not change the existing mm code. mseal is new, so there is no backward-compatibility problem; that is not the case for mprotect and the other mm APIs. E.g. if we changed mprotect to add a precheck for memory gaps, some badly written application might break.
This is a weak argument. Your new function may break these badly written applications *if* gcc adds support. If you're not checking the return type then it doesn't really matter - the application will run into issues rather quickly anyways. The only thing that you could argue is the speed - but you've proven that false.
The point I raised here is that there is a risk in modifying the mm APIs' established behavior. The kernel doesn't usually make this kind of behavior change.
mm sealing is new functionality; I think applications will need to opt in, e.g. by allowing the dynamic linker to seal .text.
The 'atomic' approach is also really difficult to enforce across the whole mm area, and mseal() doesn't claim to be atomic. Most regular mm APIs go deeper into the mm data structures to update page tables, hardware state, etc.; rolling back all of those error cases has real complexity and performance cost, and I'm not sure the benefit is worth it. In any case, atomicity is a separate topic, unrelated to mm sealing. The current design of mm sealing follows from its use case and practical coding reasons.
"best effort" is what I'm saying. It's actually not really difficult to do atomic, but no one cares besides Theo.
OK, if you strongly believe in 'atomic' or 'best-effort atomic', whatever it is, consider sending a patch and getting feedback from the community?
How hard is it to put userfaultfd into your loop and avoid having that horrible userfaultfd handling in munmap? For years people have seen horrible failure paths and just dumped in a huge comment saying "but it's okay because it's probably not going to happen". But now we're putting this test up front, and doing it alone - Why?
As a summary of why:
- The use case: it makes it harder for attackers to modify memory opportunistically.
- Code: less and simpler code change.
Thanks -Jeff
Thanks, Liam
TL;DR for Andrew (and to save his page down key):
Reviewed-by: Liam R. Howlett Liam.Howlett@oracle.com
* Jeff Xu jeffxu@chromium.org [240515 20:59]:
On Wed, May 15, 2024 at 3:19 PM Liam R. Howlett Liam.Howlett@oracle.com wrote:
- Jeff Xu jeffxu@chromium.org [240515 13:18]:
...
The current mseal patch does up-front checking in two different situations:
1. When applying mseal(): check for unallocated memory in the given memory range.
2. When checking the mseal flag during mprotect/munmap/mremap/mmap: the flag check is placed ahead of the main business logic and treated the same as the input-argument checks.
Either we are planning to clean this up and do what we can up-front, or just move the mseal check with the rest. Otherwise we are making a larger mess with more technical debt for a single user, and I think this is not an acceptable trade-off.
The sealing use case is different from that of the regular mm APIs, and this doesn't create additional technical debt. Please allow me to explain those points separately.
The main use case and threat model is that an attacker exploits a vulnerability, has arbitrary write access to the process, and can manipulate some syscall arguments from some threads. Placing the check of the mseal flag ahead of mprotect's main business logic is stricter than doing it in-place, and is meant to make things harder for the attacker, e.g. by blocking an opportunistic munmap attempt made by modifying the size argument.
If you can manipulate some arguments to syscalls, couldn't the attacker avoid having the VMA mseal'ed in the first place?
mm sealing can be applied in advance. This type of approach is common in sandboxers, e.g. setting up a restrictive environment in advance.
Thanks, this detail slipped my mind.
Again I don't care where the check goes - but having it happen alone is pointless.
Legit app code won't call mprotect/munmap on sealed memory, so from a legit app's point of view the choice between the precheck and in-place check approaches is irrelevant.
So let's do them together.
For the use case I described in the threat model, precheck is the better approach. Legit code doesn't care.
This is the case for other checks as well, but they're all done together.
...
About tech debt: code-wise, placing the pre-check ahead of the main business logic of the mprotect/munmap APIs reduces the size of the code change, and makes it easy to carry from release to release, or to backport.
It sounds like the other changes to the looping code in recent kernels are going to mess up the backporting if we do this with the rest of the checks.
What other changes do you refer to?
I backported V9 to 5.10 when I ran the performance test at your request, and the backport to 5.10 was relatively straightforward: the mseal flag check goes after the input-argument checks and before the main business logic.
The changes to the later looping code would complicate your backport. 94d7d9233951 ("mm: abstract the vma_merge()/split_vma() pattern for mprotect() et al."), for example.
But let's compare the alternatives - doing it in-place without precheck.
- munmap
munmap() calls arch_unmap(mm, start, end) ahead of the main business logic, so the check of the sealing flag there would have to be architecture-specific. In addition, if arch_unmap() failed due to sealing, the code would still proceed until the main business logic fails again.
You are going to mseal the vdso?
How is that relevant?
This is generally what arch_unmap() is checking, that's why I was wondering if it would be affected.
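(For readers following along: a sketch of the kind of thing arch_unmap() does, modeled from memory on powerpc's vdso handling, so treat the details as illustrative rather than authoritative.)

        static inline void arch_unmap(struct mm_struct *mm,
                                      unsigned long start, unsigned long end)
        {
                unsigned long vdso_base = (unsigned long)mm->context.vdso;

                /* Forget the vdso if the range being unmapped covers it. */
                if (start <= vdso_base && vdso_base < end)
                        mm->context.vdso = NULL;
        }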
To answer your question: I don't know at this moment. The initial scope of the libc change is sealing the RO/RX parts during ELF loading, e.g. .text and .RELRO.
Right, this is for chrome in your use case.
- mremap/mmap
The check of sealing would be scattered, e.g. checking the src address range in-place, the dest address range in-place, the unmap in-place, etc. The code would be complex and error-prone.
- mprotect/madvise
Easy to change to in-place.
- mseal
mseal() checks for unallocated memory in the given memory range in its pre-check. Easy to change to in-place (same as mprotect).
The situations in munmap and mremap/mmap make in-place checks less desirable, imo.
Considering the benchmarks that were provided, performance arguments seem like they are not a concern.
Yes. Performance is not a factor in making a design choice on this.
I want to know if we are planning to sort and move existing checks if we proceed with this change?
I would argue that we should not change the existing mm code. mseal is new, so there is no backward-compatibility problem; that is not the case for mprotect and the other mm APIs. E.g. if we changed mprotect to add a precheck for memory gaps, some badly written application might break.
This is a weak argument. Your new function may break these badly written applications *if* gcc adds support. If you're not checking the return type then it doesn't really matter - the application will run into issues rather quickly anyways. The only thing that you could argue is the speed - but you've proven that false.
The point I raised here is that there is a risk in modifying the mm APIs' established behavior. The kernel doesn't usually make this kind of behavior change.
Sure, but we have security checks happening later and they can fail halfway through. Although, relying on that half-success is an application bug and means the application is not portable. This was my main reason for requesting that this check be placed with the rest, as we are now treating mseal() as a special case even among security features.
Some of the existing checks add unnecessary complications to keep them together, unfortunately. Your addition of a loop prior to making the changes means we can probably simplify some of these checks by generalizing the loop in future patches.
mm sealing is new functionality; I think applications will need to opt in, e.g. by allowing the dynamic linker to seal .text.
The 'atomic' approach is also really difficult to enforce across the whole mm area, and mseal() doesn't claim to be atomic. Most regular mm APIs go deeper into the mm data structures to update page tables, hardware state, etc.; rolling back all of those error cases has real complexity and performance cost, and I'm not sure the benefit is worth it. In any case, atomicity is a separate topic, unrelated to mm sealing. The current design of mm sealing follows from its use case and practical coding reasons.
"best effort" is what I'm saying. It's actually not really difficult to do atomic, but no one cares besides Theo.
OK, if you strongly believe in 'atomic' or 'best-effort atomic', whatever it is, consider sending a patch and getting feedback from the community?
Sounds good. This will probably happen over time.
How hard is it to put userfaultfd into your loop and avoid having that horrible userfaultfd handling in munmap? For years people have seen horrible failure paths and just dumped in a huge comment saying "but it's okay because it's probably not going to happen". But now we're putting this test up front, and doing it alone - Why?
As a summary of why:
- The use case: it makes it harder for attackers to modify memory
opportunistically.
- Code: Less and simpler code change.
Fair enough. Thank you for providing the arguments for each up-front check vs embedding them. I didn't want to hold up your feature for so long, and I appreciate you taking the time to respond to my questions on your decisions. Apologies for kicking the hornets' nest on this one.
I think, in the future, we can use your forward loop to clean up some of the design decisions of the past - ideally by choice and not by CVE-forced changes. Hopefully having both pre- and inter-loop checks won't mean one is missed when altering these code paths.
Thanks, Liam
On Tue, May 21, 2024 at 9:00 AM Liam R. Howlett Liam.Howlett@oracle.com wrote:
TL;DR for Andrew (and to save his page down key):
Reviewed-by: Liam R. Howlett Liam.Howlett@oracle.com
Many thanks!
-Jeff