Userland library functions such as allocators and threading implementations often require regions of memory to act as 'guard pages' - mappings which, when accessed, result in a fatal signal being sent to the accessing process.
The current means by which these are implemented is via a PROT_NONE mmap() mapping, which provides the required semantics however incur an overhead of a VMA for each such region.
With a great many processes and threads, this can rapidly add up and incur a significant memory penalty. It also has the added problem of preventing merges that might otherwise be permitted.
This series takes a different approach - an idea suggested by Vlasimil Babka (and before him David Hildenbrand and Jann Horn - perhaps more - the provenance becomes a little tricky to ascertain after this - please forgive any omissions!) - rather than locating the guard pages at the VMA layer, instead placing them in page tables mapping the required ranges.
Early testing of the prototype version of this code suggests a 5 times speed up in memory mapping invocations (in conjunction with use of process_madvise()) and a 13% reduction in VMAs on an entirely idle android system and unoptimised code.
We expect with optimisation and a loaded system with a larger number of guard pages this could significantly increase, but in any case these numbers are encouraging.
This way, rather than having separate VMAs specifying which parts of a range are guard pages, instead we have a VMA spanning the entire range of memory a user is permitted to access and including ranges which are to be 'guarded'.
After mapping this, a user can specify which parts of the range should result in a fatal signal when accessed.
By restricting the ability to specify guard pages to memory mapped by existing VMAs, we can rely on the mappings being torn down when the mappings are ultimately unmapped and everything works simply as if the memory were not faulted in, from the point of view of the containing VMAs.
This mechanism in effect poisons memory ranges similar to hardware memory poisoning, only it is an entirely software-controlled form of poisoning.
Any poisoned region of memory is also able to 'unpoisoned', that is, to have its poison markers removed.
The mechanism is implemented via madvise() behaviour - MADV_GUARD_POISON which simply poisons ranges - and MADV_GUARD_UNPOISON - which clears this poisoning.
Poisoning can be performed across multiple VMAs and any existing mappings will be cleared, that is zapped, before installing the poisoned page table mappings.
There is no concept of 'nested' poisoning, multiple attempts to poison a range will, after the first poisoning, have no effect.
Importantly, unpoisoning of poisoned ranges has no effect on non-poisoned memory, so a user can safely unpoison a range of memory and clear only poison page table mappings leaving the rest intact.
The actual mechanism by which the page table entries are specified makes use of existing logic - PTE markers, which are used for the userfaultfd UFFDIO_POISON mechanism.
Unfortunately PTE_MARKER_POISONED is not suited for the guard page mechanism as it results in VM_FAULT_HWPOISON semantics in the fault handler, so we add our own specific PTE_MARKER_GUARD and adapt existing logic to handle it.
We also extend the generic page walk mechanism to allow for installation of PTEs (carefully restricted to memory management logic only to prevent unwanted abuse).
We ensure that zapping performed by, for instance, MADV_DONTNEED, does not remove guard poison markers, nor does forking (except when VM_WIPEONFORK is specified for a VMA which implies a total removal of memory characteristics).
It's important to note that the guard page implementation is emphatically NOT a security feature, so a user can remove the poisoning if they wish. We simply implement it in such a way as to provide the least surprising behaviour.
An extensive set of self-tests are provided which ensure behaviour is as expected and additionally self-documents expected behaviour of poisoned ranges.
Suggested-by: Vlastimil Babka vbabka@suze.cz Suggested-by: Jann Horn jannh@google.com Suggested-by: David Hildenbrand david@redhat.com
Lorenzo Stoakes (4): mm: pagewalk: add the ability to install PTEs mm: add PTE_MARKER_GUARD PTE marker mm: madvise: implement lightweight guard page mechanism selftests/mm: add self tests for guard page feature
arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/linux/mm_inline.h | 2 +- include/linux/pagewalk.h | 18 +- include/linux/swapops.h | 26 +- include/uapi/asm-generic/mman-common.h | 3 + mm/hugetlb.c | 3 + mm/internal.h | 6 + mm/madvise.c | 158 +++ mm/memory.c | 18 +- mm/mprotect.c | 3 +- mm/mseal.c | 1 + mm/pagewalk.c | 174 ++-- tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 1 + tools/testing/selftests/mm/guard-pages.c | 1168 ++++++++++++++++++++++ 18 files changed, 1525 insertions(+), 69 deletions(-) create mode 100644 tools/testing/selftests/mm/guard-pages.c
-- 2.46.2
The existing generic pagewalk logic permits the walking of page tables, invoking callbacks at individual page table levels via user-provided mm_walk_ops callbacks.
This is useful for traversing existing page table entries, but precludes the ability to establish new ones.
Existing mechanism for performing a walk which also installs page table entries if necessary are heavily duplicated throughout the kernel, each with semantic differences from one another and largely unavailable for use elsewhere.
Rather than add yet another implementation, we extend the generic pagewalk logic to enable the installation of page table entries by adding a new install_pte() callback in mm_walk_ops. If this is specified, then upon encountering a missing page table entry, we allocate and install a new one and continue the traversal.
If a THP huge page is encountered, we make use of existing logic to split it. Then once we reach the PTE level, we invoke the install_pte() callback which provides a PTE entry to install. We do not support hugetlb at this stage.
If this function returns an error, or an allocation fails during the operation, we abort the operation altogether. It is up to the caller to deal appropriately with partially populated page table ranges.
If install_pte() is defined, the semantics of pte_entry() change - this callback is then only invoked if the entry already exists. This is a useful property, as it allows a caller to handle existing PTEs while installing new ones where necessary in the specified range.
If install_pte() is not defined, then there is no functional difference to this patch, so all existing logic will work precisely as it did before.
As we only permit the installation of PTEs where a mapping does not already exist there is no need for TLB management, however we do invoke update_mmu_cache() for architectures which require manual maintenance of mappings for other CPUs.
We explicitly do not allow the existing page walk API to expose this feature as it is dangerous and intended for internal mm use only. Therefore we provide a new walk_page_range_mm() function exposed only to mm/internal.h.
Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com --- include/linux/pagewalk.h | 18 +++- mm/internal.h | 6 ++ mm/pagewalk.c | 174 ++++++++++++++++++++++++++------------- 3 files changed, 136 insertions(+), 62 deletions(-)
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h index f5eb5a32aeed..9700a29f8afb 100644 --- a/include/linux/pagewalk.h +++ b/include/linux/pagewalk.h @@ -25,12 +25,15 @@ enum page_walk_lock { * this handler is required to be able to handle * pmd_trans_huge() pmds. They may simply choose to * split_huge_page() instead of handling it explicitly. - * @pte_entry: if set, called for each PTE (lowest-level) entry, - * including empty ones + * @pte_entry: if set, called for each PTE (lowest-level) entry + * including empty ones, except if @install_pte is set. + * If @install_pte is set, @pte_entry is called only for + * existing PTEs. * @pte_hole: if set, called for each hole at all levels, * depth is -1 if not known, 0:PGD, 1:P4D, 2:PUD, 3:PMD. * Any folded depths (where PTRS_PER_P?D is equal to 1) - * are skipped. + * are skipped. If @install_pte is specified, this will + * not trigger for any populated ranges. * @hugetlb_entry: if set, called for each hugetlb entry. This hook * function is called with the vma lock held, in order to * protect against a concurrent freeing of the pte_t* or @@ -51,6 +54,13 @@ enum page_walk_lock { * @pre_vma: if set, called before starting walk on a non-null vma. * @post_vma: if set, called after a walk on a non-null vma, provided * that @pre_vma and the vma walk succeeded. + * @install_pte: if set, missing page table entries are installed and + * thus all levels are always walked in the specified + * range. This callback is then invoked at the PTE level + * (having split any THP pages prior), providing the PTE to + * install. If allocations fail, the walk is aborted. This + * operation is only available for userland memory. Not + * usable for hugetlb ranges. * * p?d_entry callbacks are called even if those levels are folded on a * particular architecture/configuration. @@ -76,6 +86,8 @@ struct mm_walk_ops { int (*pre_vma)(unsigned long start, unsigned long end, struct mm_walk *walk); void (*post_vma)(struct mm_walk *walk); + int (*install_pte)(unsigned long addr, unsigned long next, + pte_t *ptep, struct mm_walk *walk); enum page_walk_lock walk_lock; };
diff --git a/mm/internal.h b/mm/internal.h index 93083bbeeefa..1bfe45b7fa08 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -12,6 +12,7 @@ #include <linux/mm.h> #include <linux/mm_inline.h> #include <linux/pagemap.h> +#include <linux/pagewalk.h> #include <linux/rmap.h> #include <linux/swap.h> #include <linux/swapops.h> @@ -1443,4 +1444,9 @@ static inline void accept_page(struct page *page) } #endif /* CONFIG_UNACCEPTED_MEMORY */
+/* pagewalk.c */ +int walk_page_range_mm(struct mm_struct *mm, unsigned long start, + unsigned long end, const struct mm_walk_ops *ops, + void *private); + #endif /* __MM_INTERNAL_H */ diff --git a/mm/pagewalk.c b/mm/pagewalk.c index 461ea3bbd8d9..c3b9624948c1 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -6,6 +6,8 @@ #include <linux/swap.h> #include <linux/swapops.h>
+#include "internal.h" + /* * We want to know the real level where a entry is located ignoring any * folding of levels which may be happening. For example if p4d is folded then @@ -29,9 +31,23 @@ static int walk_pte_range_inner(pte_t *pte, unsigned long addr, int err = 0;
for (;;) { - err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk); - if (err) - break; + if (ops->install_pte && pte_none(ptep_get(pte))) { + pte_t new_pte; + + err = ops->install_pte(addr, addr + PAGE_SIZE, &new_pte, + walk); + if (err) + break; + + set_pte_at(walk->mm, addr, pte, new_pte); + /* Non-present before, so for arches that need it. */ + if (!WARN_ON_ONCE(walk->no_vma)) + update_mmu_cache(walk->vma, addr, pte); + } else { + err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk); + if (err) + break; + } if (addr >= end - PAGE_SIZE) break; addr += PAGE_SIZE; @@ -89,11 +105,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, again: next = pmd_addr_end(addr, end); if (pmd_none(*pmd)) { - if (ops->pte_hole) + if (ops->install_pte) + err = __pte_alloc(walk->mm, pmd); + else if (ops->pte_hole) err = ops->pte_hole(addr, next, depth, walk); if (err) break; - continue; + if (!ops->install_pte) + continue; }
walk->action = ACTION_SUBTREE; @@ -116,7 +135,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, */ if ((!walk->vma && (pmd_leaf(*pmd) || !pmd_present(*pmd))) || walk->action == ACTION_CONTINUE || - !(ops->pte_entry)) + !(ops->pte_entry || ops->install_pte)) continue;
if (walk->vma) @@ -148,11 +167,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, again: next = pud_addr_end(addr, end); if (pud_none(*pud)) { - if (ops->pte_hole) + if (ops->install_pte) + err = __pmd_alloc(walk->mm, pud, addr); + else if (ops->pte_hole) err = ops->pte_hole(addr, next, depth, walk); if (err) break; - continue; + if (!ops->install_pte) + continue; }
walk->action = ACTION_SUBTREE; @@ -167,7 +189,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
if ((!walk->vma && (pud_leaf(*pud) || !pud_present(*pud))) || walk->action == ACTION_CONTINUE || - !(ops->pmd_entry || ops->pte_entry)) + !(ops->pmd_entry || ops->pte_entry || ops->install_pte)) continue;
if (walk->vma) @@ -196,18 +218,22 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, do { next = p4d_addr_end(addr, end); if (p4d_none_or_clear_bad(p4d)) { - if (ops->pte_hole) + if (ops->install_pte) + err = __pud_alloc(walk->mm, p4d, addr); + else if (ops->pte_hole) err = ops->pte_hole(addr, next, depth, walk); if (err) break; - continue; + if (!ops->install_pte) + continue; } if (ops->p4d_entry) { err = ops->p4d_entry(p4d, addr, next, walk); if (err) break; } - if (ops->pud_entry || ops->pmd_entry || ops->pte_entry) + if (ops->pud_entry || ops->pmd_entry || ops->pte_entry || + ops->install_pte) err = walk_pud_range(p4d, addr, next, walk); if (err) break; @@ -231,18 +257,22 @@ static int walk_pgd_range(unsigned long addr, unsigned long end, do { next = pgd_addr_end(addr, end); if (pgd_none_or_clear_bad(pgd)) { - if (ops->pte_hole) + if (ops->install_pte) + err = __p4d_alloc(walk->mm, pgd, addr); + else if (ops->pte_hole) err = ops->pte_hole(addr, next, 0, walk); if (err) break; - continue; + if (!ops->install_pte) + continue; } if (ops->pgd_entry) { err = ops->pgd_entry(pgd, addr, next, walk); if (err) break; } - if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry || ops->pte_entry) + if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry || + ops->pte_entry || ops->install_pte) err = walk_p4d_range(pgd, addr, next, walk); if (err) break; @@ -334,6 +364,11 @@ static int __walk_page_range(unsigned long start, unsigned long end, int err = 0; struct vm_area_struct *vma = walk->vma; const struct mm_walk_ops *ops = walk->ops; + bool is_hugetlb = is_vm_hugetlb_page(vma); + + /* We do not support hugetlb PTE installation. */ + if (ops->install_pte && is_hugetlb) + return -EINVAL;
if (ops->pre_vma) { err = ops->pre_vma(start, end, walk); @@ -341,7 +376,7 @@ static int __walk_page_range(unsigned long start, unsigned long end, return err; }
- if (is_vm_hugetlb_page(vma)) { + if (is_hugetlb) { if (ops->hugetlb_entry) err = walk_hugetlb_range(start, end, walk); } else @@ -380,47 +415,7 @@ static inline void process_vma_walk_lock(struct vm_area_struct *vma, #endif }
-/** - * walk_page_range - walk page table with caller specific callbacks - * @mm: mm_struct representing the target process of page table walk - * @start: start address of the virtual address range - * @end: end address of the virtual address range - * @ops: operation to call during the walk - * @private: private data for callbacks' usage - * - * Recursively walk the page table tree of the process represented by @mm - * within the virtual address range [@start, @end). During walking, we can do - * some caller-specific works for each entry, by setting up pmd_entry(), - * pte_entry(), and/or hugetlb_entry(). If you don't set up for some of these - * callbacks, the associated entries/pages are just ignored. - * The return values of these callbacks are commonly defined like below: - * - * - 0 : succeeded to handle the current entry, and if you don't reach the - * end address yet, continue to walk. - * - >0 : succeeded to handle the current entry, and return to the caller - * with caller specific value. - * - <0 : failed to handle the current entry, and return to the caller - * with error code. - * - * Before starting to walk page table, some callers want to check whether - * they really want to walk over the current vma, typically by checking - * its vm_flags. walk_page_test() and @ops->test_walk() are used for this - * purpose. - * - * If operations need to be staged before and committed after a vma is walked, - * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(), - * since it is intended to handle commit-type operations, can't return any - * errors. - * - * struct mm_walk keeps current values of some common data like vma and pmd, - * which are useful for the access from callbacks. If you want to pass some - * caller-specific data to callbacks, @private should be helpful. - * - * Locking: - * Callers of walk_page_range() and walk_page_vma() should hold @mm->mmap_lock, - * because these function traverse vma list and/or access to vma's data. - */ -int walk_page_range(struct mm_struct *mm, unsigned long start, +int walk_page_range_mm(struct mm_struct *mm, unsigned long start, unsigned long end, const struct mm_walk_ops *ops, void *private) { @@ -479,6 +474,57 @@ int walk_page_range(struct mm_struct *mm, unsigned long start, return err; }
+/** + * walk_page_range - walk page table with caller specific callbacks + * @mm: mm_struct representing the target process of page table walk + * @start: start address of the virtual address range + * @end: end address of the virtual address range + * @ops: operation to call during the walk + * @private: private data for callbacks' usage + * + * Recursively walk the page table tree of the process represented by @mm + * within the virtual address range [@start, @end). During walking, we can do + * some caller-specific works for each entry, by setting up pmd_entry(), + * pte_entry(), and/or hugetlb_entry(). If you don't set up for some of these + * callbacks, the associated entries/pages are just ignored. + * The return values of these callbacks are commonly defined like below: + * + * - 0 : succeeded to handle the current entry, and if you don't reach the + * end address yet, continue to walk. + * - >0 : succeeded to handle the current entry, and return to the caller + * with caller specific value. + * - <0 : failed to handle the current entry, and return to the caller + * with error code. + * + * Before starting to walk page table, some callers want to check whether + * they really want to walk over the current vma, typically by checking + * its vm_flags. walk_page_test() and @ops->test_walk() are used for this + * purpose. + * + * If operations need to be staged before and committed after a vma is walked, + * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(), + * since it is intended to handle commit-type operations, can't return any + * errors. + * + * struct mm_walk keeps current values of some common data like vma and pmd, + * which are useful for the access from callbacks. If you want to pass some + * caller-specific data to callbacks, @private should be helpful. + * + * Locking: + * Callers of walk_page_range() and walk_page_vma() should hold @mm->mmap_lock, + * because these function traverse vma list and/or access to vma's data. + */ +int walk_page_range(struct mm_struct *mm, unsigned long start, + unsigned long end, const struct mm_walk_ops *ops, + void *private) +{ + /* For internal use only. */ + if (ops->install_pte) + return -EINVAL; + + return walk_page_range_mm(mm, start, end, ops, private); +} + /** * walk_page_range_novma - walk a range of pagetables not backed by a vma * @mm: mm_struct representing the target process of page table walk @@ -494,7 +540,7 @@ int walk_page_range(struct mm_struct *mm, unsigned long start, * walking the kernel pages tables or page tables for firmware. * * Note: Be careful to walk the kernel pages tables, the caller may be need to - * take other effective approache (mmap lock may be insufficient) to prevent + * take other effective approaches (mmap lock may be insufficient) to prevent * the intermediate kernel page tables belonging to the specified address range * from being freed (e.g. memory hot-remove). */ @@ -511,7 +557,7 @@ int walk_page_range_novma(struct mm_struct *mm, unsigned long start, .no_vma = true };
- if (start >= end || !walk.mm) + if (start >= end || !walk.mm || ops->install_pte) return -EINVAL;
/* @@ -556,6 +602,9 @@ int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start, return -EINVAL; if (start < vma->vm_start || end > vma->vm_end) return -EINVAL; + /* For internal use only. */ + if (ops->install_pte) + return -EINVAL;
process_mm_walk_lock(walk.mm, ops->walk_lock); process_vma_walk_lock(vma, ops->walk_lock); @@ -574,6 +623,9 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
if (!walk.mm) return -EINVAL; + /* For internal use only. */ + if (ops->install_pte) + return -EINVAL;
process_mm_walk_lock(walk.mm, ops->walk_lock); process_vma_walk_lock(vma, ops->walk_lock); @@ -623,6 +675,10 @@ int walk_page_mapping(struct address_space *mapping, pgoff_t first_index, unsigned long start_addr, end_addr; int err = 0;
+ /* For internal use only. */ + if (ops->install_pte) + return -EINVAL; + lockdep_assert_held(&mapping->i_mmap_rwsem); vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index, first_index + nr - 1) {
On Fri, Sep 27, 2024 at 01:51:11PM GMT, Lorenzo Stoakes wrote:
The existing generic pagewalk logic permits the walking of page tables, invoking callbacks at individual page table levels via user-provided mm_walk_ops callbacks.
This is useful for traversing existing page table entries, but precludes the ability to establish new ones.
Existing mechanism for performing a walk which also installs page table entries if necessary are heavily duplicated throughout the kernel, each with semantic differences from one another and largely unavailable for use elsewhere.
Rather than add yet another implementation, we extend the generic pagewalk logic to enable the installation of page table entries by adding a new install_pte() callback in mm_walk_ops. If this is specified, then upon encountering a missing page table entry, we allocate and install a new one and continue the traversal.
If a THP huge page is encountered, we make use of existing logic to split it. Then once we reach the PTE level, we invoke the install_pte() callback which provides a PTE entry to install. We do not support hugetlb at this stage.
If this function returns an error, or an allocation fails during the operation, we abort the operation altogether. It is up to the caller to deal appropriately with partially populated page table ranges.
If install_pte() is defined, the semantics of pte_entry() change - this callback is then only invoked if the entry already exists. This is a useful property, as it allows a caller to handle existing PTEs while installing new ones where necessary in the specified range.
If install_pte() is not defined, then there is no functional difference to this patch, so all existing logic will work precisely as it did before.
As we only permit the installation of PTEs where a mapping does not already exist there is no need for TLB management, however we do invoke update_mmu_cache() for architectures which require manual maintenance of mappings for other CPUs.
We explicitly do not allow the existing page walk API to expose this feature as it is dangerous and intended for internal mm use only. Therefore we provide a new walk_page_range_mm() function exposed only to mm/internal.h.
Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com
include/linux/pagewalk.h | 18 +++- mm/internal.h | 6 ++ mm/pagewalk.c | 174 ++++++++++++++++++++++++++------------- 3 files changed, 136 insertions(+), 62 deletions(-)
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h index f5eb5a32aeed..9700a29f8afb 100644 --- a/include/linux/pagewalk.h +++ b/include/linux/pagewalk.h @@ -25,12 +25,15 @@ enum page_walk_lock {
this handler is required to be able to handle
pmd_trans_huge() pmds. They may simply choose to
split_huge_page() instead of handling it explicitly.
- @pte_entry: if set, called for each PTE (lowest-level) entry,
including empty ones
- @pte_entry: if set, called for each PTE (lowest-level) entry
including empty ones, except if @install_pte is set.
If @install_pte is set, @pte_entry is called only for
existing PTEs.
- @pte_hole: if set, called for each hole at all levels,
depth is -1 if not known, 0:PGD, 1:P4D, 2:PUD, 3:PMD.
Any folded depths (where PTRS_PER_P?D is equal to 1)
are skipped.
are skipped. If @install_pte is specified, this will
not trigger for any populated ranges.
- @hugetlb_entry: if set, called for each hugetlb entry. This hook
function is called with the vma lock held, in order to
protect against a concurrent freeing of the pte_t* or
@@ -51,6 +54,13 @@ enum page_walk_lock {
- @pre_vma: if set, called before starting walk on a non-null vma.
- @post_vma: if set, called after a walk on a non-null vma, provided
that @pre_vma and the vma walk succeeded.
- @install_pte: if set, missing page table entries are installed and
thus all levels are always walked in the specified
range. This callback is then invoked at the PTE level
(having split any THP pages prior), providing the PTE to
install. If allocations fail, the walk is aborted. This
operation is only available for userland memory. Not
usable for hugetlb ranges.
- p?d_entry callbacks are called even if those levels are folded on a
- particular architecture/configuration.
@@ -76,6 +86,8 @@ struct mm_walk_ops { int (*pre_vma)(unsigned long start, unsigned long end, struct mm_walk *walk); void (*post_vma)(struct mm_walk *walk);
- int (*install_pte)(unsigned long addr, unsigned long next,
enum page_walk_lock walk_lock;pte_t *ptep, struct mm_walk *walk);
};
diff --git a/mm/internal.h b/mm/internal.h index 93083bbeeefa..1bfe45b7fa08 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -12,6 +12,7 @@ #include <linux/mm.h> #include <linux/mm_inline.h> #include <linux/pagemap.h> +#include <linux/pagewalk.h> #include <linux/rmap.h> #include <linux/swap.h> #include <linux/swapops.h> @@ -1443,4 +1444,9 @@ static inline void accept_page(struct page *page) } #endif /* CONFIG_UNACCEPTED_MEMORY */
+/* pagewalk.c */ +int walk_page_range_mm(struct mm_struct *mm, unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
void *private);
#endif /* __MM_INTERNAL_H */ diff --git a/mm/pagewalk.c b/mm/pagewalk.c index 461ea3bbd8d9..c3b9624948c1 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -6,6 +6,8 @@ #include <linux/swap.h> #include <linux/swapops.h>
We need to add an include here for asm/tlbflush.h I believe to make update_mmu_cache() available, this was overlooked as for x86 this is included through some other header.
I will add on respin.
+#include "internal.h"
/*
- We want to know the real level where a entry is located ignoring any
- folding of levels which may be happening. For example if p4d is folded then
@@ -29,9 +31,23 @@ static int walk_pte_range_inner(pte_t *pte, unsigned long addr, int err = 0;
for (;;) {
err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
if (err)
break;
if (ops->install_pte && pte_none(ptep_get(pte))) {
pte_t new_pte;
err = ops->install_pte(addr, addr + PAGE_SIZE, &new_pte,
walk);
if (err)
break;
set_pte_at(walk->mm, addr, pte, new_pte);
/* Non-present before, so for arches that need it. */
if (!WARN_ON_ONCE(walk->no_vma))
update_mmu_cache(walk->vma, addr, pte);
} else {
err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
if (err)
break;
if (addr >= end - PAGE_SIZE) break; addr += PAGE_SIZE;}
@@ -89,11 +105,14 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, again: next = pmd_addr_end(addr, end); if (pmd_none(*pmd)) {
if (ops->pte_hole)
if (ops->install_pte)
err = __pte_alloc(walk->mm, pmd);
else if (ops->pte_hole) err = ops->pte_hole(addr, next, depth, walk); if (err) break;
continue;
if (!ops->install_pte)
continue;
}
walk->action = ACTION_SUBTREE;
@@ -116,7 +135,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end, */ if ((!walk->vma && (pmd_leaf(*pmd) || !pmd_present(*pmd))) || walk->action == ACTION_CONTINUE ||
!(ops->pte_entry))
!(ops->pte_entry || ops->install_pte)) continue;
if (walk->vma)
@@ -148,11 +167,14 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end, again: next = pud_addr_end(addr, end); if (pud_none(*pud)) {
if (ops->pte_hole)
if (ops->install_pte)
err = __pmd_alloc(walk->mm, pud, addr);
else if (ops->pte_hole) err = ops->pte_hole(addr, next, depth, walk); if (err) break;
continue;
if (!ops->install_pte)
continue;
}
walk->action = ACTION_SUBTREE;
@@ -167,7 +189,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
if ((!walk->vma && (pud_leaf(*pud) || !pud_present(*pud))) || walk->action == ACTION_CONTINUE ||
!(ops->pmd_entry || ops->pte_entry))
!(ops->pmd_entry || ops->pte_entry || ops->install_pte)) continue;
if (walk->vma)
@@ -196,18 +218,22 @@ static int walk_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, do { next = p4d_addr_end(addr, end); if (p4d_none_or_clear_bad(p4d)) {
if (ops->pte_hole)
if (ops->install_pte)
err = __pud_alloc(walk->mm, p4d, addr);
else if (ops->pte_hole) err = ops->pte_hole(addr, next, depth, walk); if (err) break;
continue;
if (!ops->install_pte)
} if (ops->p4d_entry) { err = ops->p4d_entry(p4d, addr, next, walk); if (err) break; }continue;
if (ops->pud_entry || ops->pmd_entry || ops->pte_entry)
if (ops->pud_entry || ops->pmd_entry || ops->pte_entry ||
if (err) break;ops->install_pte) err = walk_pud_range(p4d, addr, next, walk);
@@ -231,18 +257,22 @@ static int walk_pgd_range(unsigned long addr, unsigned long end, do { next = pgd_addr_end(addr, end); if (pgd_none_or_clear_bad(pgd)) {
if (ops->pte_hole)
if (ops->install_pte)
err = __p4d_alloc(walk->mm, pgd, addr);
else if (ops->pte_hole) err = ops->pte_hole(addr, next, 0, walk); if (err) break;
continue;
if (!ops->install_pte)
} if (ops->pgd_entry) { err = ops->pgd_entry(pgd, addr, next, walk); if (err) break; }continue;
if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry || ops->pte_entry)
if (ops->p4d_entry || ops->pud_entry || ops->pmd_entry ||
if (err) break;ops->pte_entry || ops->install_pte) err = walk_p4d_range(pgd, addr, next, walk);
@@ -334,6 +364,11 @@ static int __walk_page_range(unsigned long start, unsigned long end, int err = 0; struct vm_area_struct *vma = walk->vma; const struct mm_walk_ops *ops = walk->ops;
bool is_hugetlb = is_vm_hugetlb_page(vma);
/* We do not support hugetlb PTE installation. */
if (ops->install_pte && is_hugetlb)
return -EINVAL;
if (ops->pre_vma) { err = ops->pre_vma(start, end, walk);
@@ -341,7 +376,7 @@ static int __walk_page_range(unsigned long start, unsigned long end, return err; }
- if (is_vm_hugetlb_page(vma)) {
- if (is_hugetlb) { if (ops->hugetlb_entry) err = walk_hugetlb_range(start, end, walk); } else
@@ -380,47 +415,7 @@ static inline void process_vma_walk_lock(struct vm_area_struct *vma, #endif }
-/**
- walk_page_range - walk page table with caller specific callbacks
- @mm: mm_struct representing the target process of page table walk
- @start: start address of the virtual address range
- @end: end address of the virtual address range
- @ops: operation to call during the walk
- @private: private data for callbacks' usage
- Recursively walk the page table tree of the process represented by @mm
- within the virtual address range [@start, @end). During walking, we can do
- some caller-specific works for each entry, by setting up pmd_entry(),
- pte_entry(), and/or hugetlb_entry(). If you don't set up for some of these
- callbacks, the associated entries/pages are just ignored.
- The return values of these callbacks are commonly defined like below:
- 0 : succeeded to handle the current entry, and if you don't reach the
end address yet, continue to walk.
0 : succeeded to handle the current entry, and return to the caller
with caller specific value.
- <0 : failed to handle the current entry, and return to the caller
with error code.
- Before starting to walk page table, some callers want to check whether
- they really want to walk over the current vma, typically by checking
- its vm_flags. walk_page_test() and @ops->test_walk() are used for this
- purpose.
- If operations need to be staged before and committed after a vma is walked,
- there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
- since it is intended to handle commit-type operations, can't return any
- errors.
- struct mm_walk keeps current values of some common data like vma and pmd,
- which are useful for the access from callbacks. If you want to pass some
- caller-specific data to callbacks, @private should be helpful.
- Locking:
- Callers of walk_page_range() and walk_page_vma() should hold @mm->mmap_lock,
- because these function traverse vma list and/or access to vma's data.
- */
-int walk_page_range(struct mm_struct *mm, unsigned long start, +int walk_page_range_mm(struct mm_struct *mm, unsigned long start, unsigned long end, const struct mm_walk_ops *ops, void *private) { @@ -479,6 +474,57 @@ int walk_page_range(struct mm_struct *mm, unsigned long start, return err; }
+/**
- walk_page_range - walk page table with caller specific callbacks
- @mm: mm_struct representing the target process of page table walk
- @start: start address of the virtual address range
- @end: end address of the virtual address range
- @ops: operation to call during the walk
- @private: private data for callbacks' usage
- Recursively walk the page table tree of the process represented by @mm
- within the virtual address range [@start, @end). During walking, we can do
- some caller-specific works for each entry, by setting up pmd_entry(),
- pte_entry(), and/or hugetlb_entry(). If you don't set up for some of these
- callbacks, the associated entries/pages are just ignored.
- The return values of these callbacks are commonly defined like below:
- 0 : succeeded to handle the current entry, and if you don't reach the
end address yet, continue to walk.
0 : succeeded to handle the current entry, and return to the caller
with caller specific value.
- <0 : failed to handle the current entry, and return to the caller
with error code.
- Before starting to walk page table, some callers want to check whether
- they really want to walk over the current vma, typically by checking
- its vm_flags. walk_page_test() and @ops->test_walk() are used for this
- purpose.
- If operations need to be staged before and committed after a vma is walked,
- there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
- since it is intended to handle commit-type operations, can't return any
- errors.
- struct mm_walk keeps current values of some common data like vma and pmd,
- which are useful for the access from callbacks. If you want to pass some
- caller-specific data to callbacks, @private should be helpful.
- Locking:
- Callers of walk_page_range() and walk_page_vma() should hold @mm->mmap_lock,
- because these function traverse vma list and/or access to vma's data.
- */
+int walk_page_range(struct mm_struct *mm, unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
void *private)
+{
- /* For internal use only. */
- if (ops->install_pte)
return -EINVAL;
- return walk_page_range_mm(mm, start, end, ops, private);
+}
/**
- walk_page_range_novma - walk a range of pagetables not backed by a vma
- @mm: mm_struct representing the target process of page table walk
@@ -494,7 +540,7 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
- walking the kernel pages tables or page tables for firmware.
- Note: Be careful to walk the kernel pages tables, the caller may be need to
- take other effective approache (mmap lock may be insufficient) to prevent
*/
- take other effective approaches (mmap lock may be insufficient) to prevent
- the intermediate kernel page tables belonging to the specified address range
- from being freed (e.g. memory hot-remove).
@@ -511,7 +557,7 @@ int walk_page_range_novma(struct mm_struct *mm, unsigned long start, .no_vma = true };
- if (start >= end || !walk.mm)
if (start >= end || !walk.mm || ops->install_pte) return -EINVAL;
/*
@@ -556,6 +602,9 @@ int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start, return -EINVAL; if (start < vma->vm_start || end > vma->vm_end) return -EINVAL;
/* For internal use only. */
if (ops->install_pte)
return -EINVAL;
process_mm_walk_lock(walk.mm, ops->walk_lock); process_vma_walk_lock(vma, ops->walk_lock);
@@ -574,6 +623,9 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
if (!walk.mm) return -EINVAL;
/* For internal use only. */
if (ops->install_pte)
return -EINVAL;
process_mm_walk_lock(walk.mm, ops->walk_lock); process_vma_walk_lock(vma, ops->walk_lock);
@@ -623,6 +675,10 @@ int walk_page_mapping(struct address_space *mapping, pgoff_t first_index, unsigned long start_addr, end_addr; int err = 0;
- /* For internal use only. */
- if (ops->install_pte)
return -EINVAL;
- lockdep_assert_held(&mapping->i_mmap_rwsem); vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index, first_index + nr - 1) {
-- 2.46.2
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Rather than add yet another implementation, we extend the generic pagewalk logic to enable the installation of page table entries by adding a new install_pte() callback in mm_walk_ops. If this is specified, then upon encountering a missing page table entry, we allocate and install a new one and continue the traversal.
[...]
Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com
Reviewed-by: Jann Horn jannh@google.com
On Fri, Oct 11, 2024 at 08:11:28PM +0200, Jann Horn wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Rather than add yet another implementation, we extend the generic pagewalk logic to enable the installation of page table entries by adding a new install_pte() callback in mm_walk_ops. If this is specified, then upon encountering a missing page table entry, we allocate and install a new one and continue the traversal.
[...]
Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com
Reviewed-by: Jann Horn jannh@google.com
Thanks!
Hi Lorenzo,
sorry for only replying to this so late.
On Fri, Sep 27, 2024 at 01:51:11PM +0100, Lorenzo Stoakes wrote:
The existing generic pagewalk logic permits the walking of page tables, invoking callbacks at individual page table levels via user-provided mm_walk_ops callbacks.
This is useful for traversing existing page table entries, but precludes the ability to establish new ones.
Existing mechanism for performing a walk which also installs page table entries if necessary are heavily duplicated throughout the kernel, each with semantic differences from one another and largely unavailable for use elsewhere.
I do like the idea of having common code for installing page tables!
Minor nits below:
+int walk_page_range_mm(struct mm_struct *mm, unsigned long start, unsigned long end, const struct mm_walk_ops *ops, void *private)
It would be good to have a minimum level of documentation for this function, including how it differs from walk_page_range and why it should remain internal.
- /* For internal use only. */
- if (ops->install_pte)
return -EINVAL;
And this should probably be expanded a bit, including that no exported symbol should allow inserting arbitrary PTEs. Maybe best done with a helper to share that comment with the other places that have this check.
On Mon, Oct 14, 2024 at 11:47:58PM -0700, Christoph Hellwig wrote:
Hi Lorenzo,
sorry for only replying to this so late.
No worries, and thanks for taking a look! :)
On Fri, Sep 27, 2024 at 01:51:11PM +0100, Lorenzo Stoakes wrote:
The existing generic pagewalk logic permits the walking of page tables, invoking callbacks at individual page table levels via user-provided mm_walk_ops callbacks.
This is useful for traversing existing page table entries, but precludes the ability to establish new ones.
Existing mechanism for performing a walk which also installs page table entries if necessary are heavily duplicated throughout the kernel, each with semantic differences from one another and largely unavailable for use elsewhere.
I do like the idea of having common code for installing page tables!
Awesome.
Minor nits below:
+int walk_page_range_mm(struct mm_struct *mm, unsigned long start, unsigned long end, const struct mm_walk_ops *ops, void *private)
It would be good to have a minimum level of documentation for this function, including how it differs from walk_page_range and why it should remain internal.
Will add on respin!
- /* For internal use only. */
- if (ops->install_pte)
return -EINVAL;
And this should probably be expanded a bit, including that no exported symbol should allow inserting arbitrary PTEs. Maybe best done with a helper to share that comment with the other places that have this check.
Yeah a helper makes sense actually, a more general 'are these ops valid?' thing. Will update on next respin with some explanation.
The next iteration I plan to un-RFC as seems generally the concept is unopposed for this series, will make these changes then.
Thanks!
Add a new PTE marker that results in any access causing the accessing process to segfault.
This is preferable to PTE_MARKER_POISONED, which results in the same handling as hardware poisoned memory, and is thus undesirable for cases where we simply wish to 'soft' poison a range.
This is in preparation for implementing the ability to specify guard pages at the page table level, i.e. ranges that, when accessed, should cause process termination.
Additionally, rename zap_drop_file_uffd_wp() to zap_drop_markers() - the function checks the ZAP_FLAG_DROP_MARKER flag so naming it for this single purpose was simply incorrect.
We then reuse the same logic to determine whether a zap should clear a guard entry - this should only be performed on teardown and never on MADV_DONTNEED or the like.
Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com --- include/linux/mm_inline.h | 2 +- include/linux/swapops.h | 26 ++++++++++++++++++++++++-- mm/hugetlb.c | 3 +++ mm/memory.c | 18 +++++++++++++++--- 4 files changed, 43 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index 6f801c7b36e2..0d97a14bf051 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -531,7 +531,7 @@ static inline pte_marker copy_pte_marker( { pte_marker srcm = pte_marker_get(entry); /* Always copy error entries. */ - pte_marker dstm = srcm & PTE_MARKER_POISONED; + pte_marker dstm = srcm & (PTE_MARKER_POISONED | PTE_MARKER_GUARD);
/* Only copy PTE markers if UFFD register matches. */ if ((srcm & PTE_MARKER_UFFD_WP) && userfaultfd_wp(dst_vma)) diff --git a/include/linux/swapops.h b/include/linux/swapops.h index cb468e418ea1..4d0606df0791 100644 --- a/include/linux/swapops.h +++ b/include/linux/swapops.h @@ -426,9 +426,15 @@ typedef unsigned long pte_marker; * "Poisoned" here is meant in the very general sense of "future accesses are * invalid", instead of referring very specifically to hardware memory errors. * This marker is meant to represent any of various different causes of this. + * + * Note that, when encountered by the faulting logic, PTEs with this marker will + * result in VM_FAULT_HWPOISON and thus regardless trigger hardware memory error + * logic. */ #define PTE_MARKER_POISONED BIT(1) -#define PTE_MARKER_MASK (BIT(2) - 1) +/* Indicates that, on fault, this PTE will case a SIGSEGV signal to be sent. */ +#define PTE_MARKER_GUARD BIT(2) +#define PTE_MARKER_MASK (BIT(3) - 1)
static inline swp_entry_t make_pte_marker_entry(pte_marker marker) { @@ -461,9 +467,25 @@ static inline swp_entry_t make_poisoned_swp_entry(void) }
static inline int is_poisoned_swp_entry(swp_entry_t entry) +{ + /* + * We treat guard pages as poisoned too as these have the same semantics + * as poisoned ranges, only with different fault handling. + */ + return is_pte_marker_entry(entry) && + (pte_marker_get(entry) & + (PTE_MARKER_POISONED | PTE_MARKER_GUARD)); +} + +static inline swp_entry_t make_guard_swp_entry(void) +{ + return make_pte_marker_entry(PTE_MARKER_GUARD); +} + +static inline int is_guard_swp_entry(swp_entry_t entry) { return is_pte_marker_entry(entry) && - (pte_marker_get(entry) & PTE_MARKER_POISONED); + (pte_marker_get(entry) & PTE_MARKER_GUARD); }
/* diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 190fa05635f4..daf69ac46360 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6348,6 +6348,9 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, ret = VM_FAULT_HWPOISON_LARGE | VM_FAULT_SET_HINDEX(hstate_index(h)); goto out_mutex; + } else if (marker & PTE_MARKER_GUARD) { + ret = VM_FAULT_SIGSEGV; + goto out_mutex; } }
diff --git a/mm/memory.c b/mm/memory.c index 5c6486e33e63..6c413c3d72fd 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1457,7 +1457,7 @@ static inline bool should_zap_folio(struct zap_details *details, return !folio_test_anon(folio); }
-static inline bool zap_drop_file_uffd_wp(struct zap_details *details) +static inline bool zap_drop_markers(struct zap_details *details) { if (!details) return false; @@ -1478,7 +1478,7 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma, if (vma_is_anonymous(vma)) return;
- if (zap_drop_file_uffd_wp(details)) + if (zap_drop_markers(details)) return;
for (;;) { @@ -1673,7 +1673,15 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, * drop the marker if explicitly requested. */ if (!vma_is_anonymous(vma) && - !zap_drop_file_uffd_wp(details)) + !zap_drop_markers(details)) + continue; + } else if (is_guard_swp_entry(entry)) { + /* + * Ordinary zapping should not remove guard PTE + * markers. Only do so if we should remove PTE markers + * in general. + */ + if (!zap_drop_markers(details)) continue; } else if (is_hwpoison_entry(entry) || is_poisoned_swp_entry(entry)) { @@ -4005,6 +4013,10 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf) if (marker & PTE_MARKER_POISONED) return VM_FAULT_HWPOISON;
+ /* Hitting a guard page is always a fatal condition. */ + if (marker & PTE_MARKER_GUARD) + return VM_FAULT_SIGSEGV; + if (pte_marker_entry_uffd_wp(entry)) return pte_marker_handle_uffd_wp(vmf);
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Add a new PTE marker that results in any access causing the accessing process to segfault.
[...]
static inline int is_poisoned_swp_entry(swp_entry_t entry) +{
/*
* We treat guard pages as poisoned too as these have the same semantics
* as poisoned ranges, only with different fault handling.
*/
return is_pte_marker_entry(entry) &&
(pte_marker_get(entry) &
(PTE_MARKER_POISONED | PTE_MARKER_GUARD));
+}
This means MADV_FREE will also clear guard PTEs, right?
diff --git a/mm/memory.c b/mm/memory.c index 5c6486e33e63..6c413c3d72fd 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1457,7 +1457,7 @@ static inline bool should_zap_folio(struct zap_details *details, return !folio_test_anon(folio); }
-static inline bool zap_drop_file_uffd_wp(struct zap_details *details) +static inline bool zap_drop_markers(struct zap_details *details) { if (!details) return false; @@ -1478,7 +1478,7 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma, if (vma_is_anonymous(vma)) return;
if (zap_drop_file_uffd_wp(details))
if (zap_drop_markers(details)) return; for (;;) {
@@ -1673,7 +1673,15 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, * drop the marker if explicitly requested. */ if (!vma_is_anonymous(vma) &&
!zap_drop_file_uffd_wp(details))
!zap_drop_markers(details))
continue;
} else if (is_guard_swp_entry(entry)) {
/*
* Ordinary zapping should not remove guard PTE
* markers. Only do so if we should remove PTE markers
* in general.
*/
if (!zap_drop_markers(details)) continue;
Just a comment: It's nice that the feature is restricted to anonymous VMAs, otherwise we'd have to figure out here what to do about unmap_mapping_folio() (which sets ZAP_FLAG_DROP_MARKER together with details.single_folio)...
} else if (is_hwpoison_entry(entry) || is_poisoned_swp_entry(entry)) {
@@ -4005,6 +4013,10 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf) if (marker & PTE_MARKER_POISONED) return VM_FAULT_HWPOISON;
/* Hitting a guard page is always a fatal condition. */
if (marker & PTE_MARKER_GUARD)
return VM_FAULT_SIGSEGV;
if (pte_marker_entry_uffd_wp(entry)) return pte_marker_handle_uffd_wp(vmf);
-- 2.46.2
On Fri, Oct 11, 2024 at 08:11:32PM +0200, Jann Horn wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Add a new PTE marker that results in any access causing the accessing process to segfault.
[...]
static inline int is_poisoned_swp_entry(swp_entry_t entry) +{
/*
* We treat guard pages as poisoned too as these have the same semantics
* as poisoned ranges, only with different fault handling.
*/
return is_pte_marker_entry(entry) &&
(pte_marker_get(entry) &
(PTE_MARKER_POISONED | PTE_MARKER_GUARD));
+}
This means MADV_FREE will also clear guard PTEs, right?
Yes, this is expected, it acts like unmap in effect (with a delayed effect), so we give it the same semantics. The same thing happens with hardware poisoning.
You can see in the tests what expectations we have with different operations, we assert there this specific behaviour:
/* Lazyfree range. */ ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_FREE), 0);
/* This should simply clear the poison markers. */ for (i = 0; i < 10; i++) { ASSERT_TRUE(try_read_write_buf(&ptr[i * page_size])); }
The tests somewhat self-document expected behaviour.
diff --git a/mm/memory.c b/mm/memory.c index 5c6486e33e63..6c413c3d72fd 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1457,7 +1457,7 @@ static inline bool should_zap_folio(struct zap_details *details, return !folio_test_anon(folio); }
-static inline bool zap_drop_file_uffd_wp(struct zap_details *details) +static inline bool zap_drop_markers(struct zap_details *details) { if (!details) return false; @@ -1478,7 +1478,7 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma, if (vma_is_anonymous(vma)) return;
if (zap_drop_file_uffd_wp(details))
if (zap_drop_markers(details)) return; for (;;) {
@@ -1673,7 +1673,15 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, * drop the marker if explicitly requested. */ if (!vma_is_anonymous(vma) &&
!zap_drop_file_uffd_wp(details))
!zap_drop_markers(details))
continue;
} else if (is_guard_swp_entry(entry)) {
/*
* Ordinary zapping should not remove guard PTE
* markers. Only do so if we should remove PTE markers
* in general.
*/
if (!zap_drop_markers(details)) continue;
Just a comment: It's nice that the feature is restricted to anonymous VMAs, otherwise we'd have to figure out here what to do about unmap_mapping_folio() (which sets ZAP_FLAG_DROP_MARKER together with details.single_folio)...
Yes this is not the only issue with file-backed mappings. Readahead being another, and plenty more.
We will probably look at how we might do this once this patch set lands, and tackle all of these fun things then...
} else if (is_hwpoison_entry(entry) || is_poisoned_swp_entry(entry)) {
@@ -4005,6 +4013,10 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf) if (marker & PTE_MARKER_POISONED) return VM_FAULT_HWPOISON;
/* Hitting a guard page is always a fatal condition. */
if (marker & PTE_MARKER_GUARD)
return VM_FAULT_SIGSEGV;
if (pte_marker_entry_uffd_wp(entry)) return pte_marker_handle_uffd_wp(vmf);
-- 2.46.2
Implement a new lightweight guard page feature, that is regions of userland virtual memory that, when accessed, cause a fatal signal to arise.
Currently users must establish PROT_NONE ranges to achieve this.
However this is very costly memory-wise - we need a VMA for each and every one of these regions AND they become unmergeable with surrounding VMAs.
In addition repeated mmap() calls require repeated kernel context switches and contention of the mmap lock to install these ranges, potentially also having to unmap memory if installed over existing ranges.
The lightweight guard approach eliminates the VMA cost altogether - rather than establishing a PROT_NONE VMA, it operates at the level of page table entries - poisoning PTEs such that accesses to them cause a fault followed by a SIGSGEV signal being raised.
This is achieved through the PTE marker mechanism, which a previous commit in this series extended to permit this to be done, installed via the generic page walking logic, also extended by a prior commit for this purpose.
These poison ranges are established with MADV_GUARD_POISON, and if the range in which they are installed contain any existing mappings, they will be zapped, i.e. free the range and unmap memory (thus mimicking the behaviour of MADV_DONTNEED in this respect).
Any existing poison entries will be left untouched. There is no nesting of poisoned pages.
Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather unexpected behaviour, but are cleared on process teardown or unmapping of memory ranges.
Ranges can have the poison property removed by MADV_GUARD_UNPOISON - 'remedying' the poisoning. The ranges over which this is applied, should they contain non-poison entries, will be untouched, only poison entries will be cleared.
We permit this operation on anonymous memory only, and only VMAs which are non-special, non-huge and not mlock()'d (if we permitted this we'd have to drop locked pages which would be rather counterintuitive).
The poisoning of the range must be performed under mmap write lock as we have to install an anon_vma to ensure correct behaviour on fork.
Suggested-by: Vlastimil Babka vbabka@suze.cz Suggested-by: Jann Horn jannh@google.com Suggested-by: David Hildenbrand david@redhat.com Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com --- arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/uapi/asm-generic/mman-common.h | 3 + mm/madvise.c | 158 +++++++++++++++++++++++++ mm/mprotect.c | 3 +- mm/mseal.c | 1 + 8 files changed, 176 insertions(+), 1 deletion(-)
diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h index 763929e814e9..71e13f27742d 100644 --- a/arch/alpha/include/uapi/asm/mman.h +++ b/arch/alpha/include/uapi/asm/mman.h @@ -78,6 +78,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */ + /* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h index 9c48d9a21aa0..1a2222322f77 100644 --- a/arch/mips/include/uapi/asm/mman.h +++ b/arch/mips/include/uapi/asm/mman.h @@ -105,6 +105,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */ + /* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h index 68c44f99bc93..380905522397 100644 --- a/arch/parisc/include/uapi/asm/mman.h +++ b/arch/parisc/include/uapi/asm/mman.h @@ -75,6 +75,9 @@ #define MADV_HWPOISON 100 /* poison a page for testing */ #define MADV_SOFT_OFFLINE 101 /* soft offline page for testing */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */ + /* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h index 1ff0c858544f..e8d5affceb28 100644 --- a/arch/xtensa/include/uapi/asm/mman.h +++ b/arch/xtensa/include/uapi/asm/mman.h @@ -113,6 +113,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */ + /* compatibility flags */ #define MAP_FILE 0
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h index 6ce1f1ceb432..5dfd3d442de4 100644 --- a/include/uapi/asm-generic/mman-common.h +++ b/include/uapi/asm-generic/mman-common.h @@ -79,6 +79,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */ + /* compatibility flags */ #define MAP_FILE 0
diff --git a/mm/madvise.c b/mm/madvise.c index e871a72a6c32..7216e10723ae 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: case MADV_COLLAPSE: + case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */ return 0; default: /* be safe, default to 1. list exceptions explicitly */ @@ -1017,6 +1018,157 @@ static long madvise_remove(struct vm_area_struct *vma, return error; }
+static bool is_valid_guard_vma(struct vm_area_struct *vma, bool allow_locked) +{ + vm_flags_t disallowed = VM_SPECIAL | VM_HUGETLB; + + /* + * A user could lock after poisoning but that's fine, as they'd not be + * able to fault in. The issue arises when we try to zap existing locked + * VMAs. We don't want to do that. + */ + if (!allow_locked) + disallowed |= VM_LOCKED; + + if (!vma_is_anonymous(vma)) + return false; + + if ((vma->vm_flags & (VM_MAYWRITE | disallowed)) != VM_MAYWRITE) + return false; + + return true; +} + +static int guard_poison_install_pte(unsigned long addr, unsigned long next, + pte_t *ptep, struct mm_walk *walk) +{ + unsigned long *num_installed = (unsigned long *)walk->private; + + (*num_installed)++; + /* Simply install a PTE marker, this causes segfault on access. */ + *ptep = make_pte_marker(PTE_MARKER_GUARD); + + return 0; +} + +static bool is_guard_pte_marker(pte_t ptent) +{ + return is_pte_marker(ptent) && + is_guard_swp_entry(pte_to_swp_entry(ptent)); +} + +static int guard_poison_pte_entry(pte_t *pte, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + pte_t ptent = ptep_get(pte); + + /* + * If not a guard marker, simply abort the operation. We return a value + * > 0 indicating a non-error abort. + */ + return !is_guard_pte_marker(ptent); +} + +static const struct mm_walk_ops guard_poison_walk_ops = { + .install_pte = guard_poison_install_pte, + .pte_entry = guard_poison_pte_entry, + /* We might need to install an anon_vma. */ + .walk_lock = PGWALK_WRLOCK, +}; + +static long madvise_guard_poison(struct vm_area_struct *vma, + struct vm_area_struct **prev, + unsigned long start, unsigned long end) +{ + long err; + bool retried = false; + + *prev = vma; + if (!is_valid_guard_vma(vma, /* allow_locked = */false)) + return -EINVAL; + + /* + * Optimistically try to install the guard poison pages first. If any + * non-guard pages are encountered, give up and zap the range before + * trying again. + */ + while (true) { + unsigned long num_installed = 0; + + /* Returns < 0 on error, == 0 if success, > 0 if zap needed. */ + err = walk_page_range_mm(vma->vm_mm, start, end, + &guard_poison_walk_ops, + &num_installed); + /* + * If we install poison markers, then the range is no longer + * empty from a page table perspective and therefore it's + * appropriate to have an anon_vma. + * + * This ensures that on fork, we copy page tables correctly. + */ + if (err >= 0 && num_installed > 0) { + int err_anon = anon_vma_prepare(vma); + + if (err_anon) + err = err_anon; + } + + if (err <= 0) + return err; + + if (!retried) + /* + * OK some of the range have non-guard pages mapped, zap + * them. This leaves existing guard pages in place. + */ + zap_page_range_single(vma, start, end - start, NULL); + else + /* + * If we reach here, then there is a racing fault that + * has populated the PTE after we zapped. Give up and + * let the user know to try again. + */ + return -EAGAIN; + + retried = true; + } +} + +static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + pte_t ptent = ptep_get(pte); + + if (is_guard_pte_marker(ptent)) { + /* Simply clear the PTE marker. */ + pte_clear_not_present_full(walk->mm, addr, pte, true); + update_mmu_cache(walk->vma, addr, pte); + } + + return 0; +} + +static const struct mm_walk_ops guard_unpoison_walk_ops = { + .pte_entry = guard_unpoison_pte_entry, + .walk_lock = PGWALK_RDLOCK, +}; + +static long madvise_guard_unpoison(struct vm_area_struct *vma, + struct vm_area_struct **prev, + unsigned long start, unsigned long end) +{ + *prev = vma; + /* + * We're ok with unpoisoning mlock()'d ranges, as this is a + * non-destructive action. + */ + if (!is_valid_guard_vma(vma, /* allow_locked = */true)) + return -EINVAL; + + return walk_page_range(vma->vm_mm, start, end, + &guard_unpoison_walk_ops, NULL); +} + /* * Apply an madvise behavior to a region of a vma. madvise_update_vma * will handle splitting a vm area into separate areas, each area with its own @@ -1098,6 +1250,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma, break; case MADV_COLLAPSE: return madvise_collapse(vma, prev, start, end); + case MADV_GUARD_POISON: + return madvise_guard_poison(vma, prev, start, end); + case MADV_GUARD_UNPOISON: + return madvise_guard_unpoison(vma, prev, start, end); }
anon_name = anon_vma_name(vma); @@ -1197,6 +1353,8 @@ madvise_behavior_valid(int behavior) case MADV_DODUMP: case MADV_WIPEONFORK: case MADV_KEEPONFORK: + case MADV_GUARD_POISON: + case MADV_GUARD_UNPOISON: #ifdef CONFIG_MEMORY_FAILURE case MADV_SOFT_OFFLINE: case MADV_HWPOISON: diff --git a/mm/mprotect.c b/mm/mprotect.c index 0c5d6d06107d..d0e3ebfadef8 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb, } else if (is_pte_marker_entry(entry)) { /* * Ignore error swap entries unconditionally, - * because any access should sigbus anyway. + * because any access should sigbus/sigsegv + * anyway. */ if (is_poisoned_swp_entry(entry)) continue; diff --git a/mm/mseal.c b/mm/mseal.c index ece977bd21e1..21bf5534bcf5 100644 --- a/mm/mseal.c +++ b/mm/mseal.c @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior) case MADV_REMOVE: case MADV_DONTFORK: case MADV_WIPEONFORK: + case MADV_GUARD_POISON: return true; }
Hi Lorenzo,
Please add me to this series, I 'm interested in everything related to mseal :-), thanks.
I also added Kees into the cc, since mseal is a security feature.
On Fri, Sep 27, 2024 at 5:52 AM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Implement a new lightweight guard page feature, that is regions of userland virtual memory that, when accessed, cause a fatal signal to arise.
Currently users must establish PROT_NONE ranges to achieve this.
However this is very costly memory-wise - we need a VMA for each and every one of these regions AND they become unmergeable with surrounding VMAs.
In addition repeated mmap() calls require repeated kernel context switches and contention of the mmap lock to install these ranges, potentially also having to unmap memory if installed over existing ranges.
The lightweight guard approach eliminates the VMA cost altogether - rather than establishing a PROT_NONE VMA, it operates at the level of page table entries - poisoning PTEs such that accesses to them cause a fault followed by a SIGSGEV signal being raised.
This is achieved through the PTE marker mechanism, which a previous commit in this series extended to permit this to be done, installed via the generic page walking logic, also extended by a prior commit for this purpose.
These poison ranges are established with MADV_GUARD_POISON, and if the range in which they are installed contain any existing mappings, they will be zapped, i.e. free the range and unmap memory (thus mimicking the behaviour of MADV_DONTNEED in this respect).
Any existing poison entries will be left untouched. There is no nesting of poisoned pages.
Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather unexpected behaviour, but are cleared on process teardown or unmapping of memory ranges.
Ranges can have the poison property removed by MADV_GUARD_UNPOISON - 'remedying' the poisoning. The ranges over which this is applied, should they contain non-poison entries, will be untouched, only poison entries will be cleared.
We permit this operation on anonymous memory only, and only VMAs which are non-special, non-huge and not mlock()'d (if we permitted this we'd have to drop locked pages which would be rather counterintuitive).
The poisoning of the range must be performed under mmap write lock as we have to install an anon_vma to ensure correct behaviour on fork.
Suggested-by: Vlastimil Babka vbabka@suze.cz Suggested-by: Jann Horn jannh@google.com Suggested-by: David Hildenbrand david@redhat.com Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com
arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/uapi/asm-generic/mman-common.h | 3 + mm/madvise.c | 158 +++++++++++++++++++++++++ mm/mprotect.c | 3 +- mm/mseal.c | 1 + 8 files changed, 176 insertions(+), 1 deletion(-)
diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h index 763929e814e9..71e13f27742d 100644 --- a/arch/alpha/include/uapi/asm/mman.h +++ b/arch/alpha/include/uapi/asm/mman.h @@ -78,6 +78,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h index 9c48d9a21aa0..1a2222322f77 100644 --- a/arch/mips/include/uapi/asm/mman.h +++ b/arch/mips/include/uapi/asm/mman.h @@ -105,6 +105,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h index 68c44f99bc93..380905522397 100644 --- a/arch/parisc/include/uapi/asm/mman.h +++ b/arch/parisc/include/uapi/asm/mman.h @@ -75,6 +75,9 @@ #define MADV_HWPOISON 100 /* poison a page for testing */ #define MADV_SOFT_OFFLINE 101 /* soft offline page for testing */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h index 1ff0c858544f..e8d5affceb28 100644 --- a/arch/xtensa/include/uapi/asm/mman.h +++ b/arch/xtensa/include/uapi/asm/mman.h @@ -113,6 +113,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h index 6ce1f1ceb432..5dfd3d442de4 100644 --- a/include/uapi/asm-generic/mman-common.h +++ b/include/uapi/asm-generic/mman-common.h @@ -79,6 +79,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/mm/madvise.c b/mm/madvise.c index e871a72a6c32..7216e10723ae 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: case MADV_COLLAPSE:
case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */ return 0; default: /* be safe, default to 1. list exceptions explicitly */
@@ -1017,6 +1018,157 @@ static long madvise_remove(struct vm_area_struct *vma, return error; }
+static bool is_valid_guard_vma(struct vm_area_struct *vma, bool allow_locked) +{
vm_flags_t disallowed = VM_SPECIAL | VM_HUGETLB;
/*
* A user could lock after poisoning but that's fine, as they'd not be
* able to fault in. The issue arises when we try to zap existing locked
* VMAs. We don't want to do that.
*/
if (!allow_locked)
disallowed |= VM_LOCKED;
if (!vma_is_anonymous(vma))
return false;
if ((vma->vm_flags & (VM_MAYWRITE | disallowed)) != VM_MAYWRITE)
return false;
return true;
+}
+static int guard_poison_install_pte(unsigned long addr, unsigned long next,
pte_t *ptep, struct mm_walk *walk)
+{
unsigned long *num_installed = (unsigned long *)walk->private;
(*num_installed)++;
/* Simply install a PTE marker, this causes segfault on access. */
*ptep = make_pte_marker(PTE_MARKER_GUARD);
return 0;
+}
+static bool is_guard_pte_marker(pte_t ptent) +{
return is_pte_marker(ptent) &&
is_guard_swp_entry(pte_to_swp_entry(ptent));
+}
+static int guard_poison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
/*
* If not a guard marker, simply abort the operation. We return a value
* > 0 indicating a non-error abort.
*/
return !is_guard_pte_marker(ptent);
+}
+static const struct mm_walk_ops guard_poison_walk_ops = {
.install_pte = guard_poison_install_pte,
.pte_entry = guard_poison_pte_entry,
/* We might need to install an anon_vma. */
.walk_lock = PGWALK_WRLOCK,
+};
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
retried = true;
}
+}
+static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
if (is_guard_pte_marker(ptent)) {
/* Simply clear the PTE marker. */
pte_clear_not_present_full(walk->mm, addr, pte, true);
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
+static long madvise_guard_unpoison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
*prev = vma;
/*
* We're ok with unpoisoning mlock()'d ranges, as this is a
* non-destructive action.
*/
if (!is_valid_guard_vma(vma, /* allow_locked = */true))
return -EINVAL;
return walk_page_range(vma->vm_mm, start, end,
&guard_unpoison_walk_ops, NULL);
+}
/*
- Apply an madvise behavior to a region of a vma. madvise_update_vma
- will handle splitting a vm area into separate areas, each area with its own
@@ -1098,6 +1250,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma, break; case MADV_COLLAPSE: return madvise_collapse(vma, prev, start, end);
case MADV_GUARD_POISON:
return madvise_guard_poison(vma, prev, start, end);
case MADV_GUARD_UNPOISON:
return madvise_guard_unpoison(vma, prev, start, end); } anon_name = anon_vma_name(vma);
@@ -1197,6 +1353,8 @@ madvise_behavior_valid(int behavior) case MADV_DODUMP: case MADV_WIPEONFORK: case MADV_KEEPONFORK:
case MADV_GUARD_POISON:
case MADV_GUARD_UNPOISON:
#ifdef CONFIG_MEMORY_FAILURE case MADV_SOFT_OFFLINE: case MADV_HWPOISON: diff --git a/mm/mprotect.c b/mm/mprotect.c index 0c5d6d06107d..d0e3ebfadef8 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb, } else if (is_pte_marker_entry(entry)) { /* * Ignore error swap entries unconditionally,
* because any access should sigbus anyway.
* because any access should sigbus/sigsegv
* anyway. */ if (is_poisoned_swp_entry(entry)) continue;
diff --git a/mm/mseal.c b/mm/mseal.c index ece977bd21e1..21bf5534bcf5 100644 --- a/mm/mseal.c +++ b/mm/mseal.c @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior) case MADV_REMOVE: case MADV_DONTFORK: case MADV_WIPEONFORK:
case MADV_GUARD_POISON:
Can you please describe the rationale to add this to the existing mseal's semantic ?
I didn't not find any description from the cover letter or this patch's description, hence asking.
Thanks -Jeff
return true; }
-- 2.46.2
On Fri, Oct 04, 2024 at 11:17:13AM -0700, Jeff Xu wrote:
Hi Lorenzo,
Please add me to this series, I 'm interested in everything related to mseal :-), thanks.
Hi Jeff, more than happy to cc you on this going forward :)
The only change to mseal is a trivial change because the poison operation discards, wasn't intentional, but apologies, I should have cc'd you regardless! Will do so on any such interaction with mseal moving forward.
I also added Kees into the cc, since mseal is a security feature.
Sure no problem happy to keep Kees cc-d too (Kees - ping me if you'd prefer not :>), however a note on this - guard pages _themselves_ are emphatically NOT a security feature, and make no guarantees on this front, but rather are a convenience/effiency thing.
Obviously however I am adding madvise() functionality here, and all such functionality must take into account whether or not they are discard operations as to ensure mseal semantics are obeyed - see below for my argument as to why I feel the poison operation falls under this.
On Fri, Sep 27, 2024 at 5:52 AM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Implement a new lightweight guard page feature, that is regions of userland virtual memory that, when accessed, cause a fatal signal to arise.
Currently users must establish PROT_NONE ranges to achieve this.
However this is very costly memory-wise - we need a VMA for each and every one of these regions AND they become unmergeable with surrounding VMAs.
In addition repeated mmap() calls require repeated kernel context switches and contention of the mmap lock to install these ranges, potentially also having to unmap memory if installed over existing ranges.
The lightweight guard approach eliminates the VMA cost altogether - rather than establishing a PROT_NONE VMA, it operates at the level of page table entries - poisoning PTEs such that accesses to them cause a fault followed by a SIGSGEV signal being raised.
This is achieved through the PTE marker mechanism, which a previous commit in this series extended to permit this to be done, installed via the generic page walking logic, also extended by a prior commit for this purpose.
These poison ranges are established with MADV_GUARD_POISON, and if the range in which they are installed contain any existing mappings, they will be zapped, i.e. free the range and unmap memory (thus mimicking the behaviour of MADV_DONTNEED in this respect).
Any existing poison entries will be left untouched. There is no nesting of poisoned pages.
Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather unexpected behaviour, but are cleared on process teardown or unmapping of memory ranges.
Ranges can have the poison property removed by MADV_GUARD_UNPOISON - 'remedying' the poisoning. The ranges over which this is applied, should they contain non-poison entries, will be untouched, only poison entries will be cleared.
We permit this operation on anonymous memory only, and only VMAs which are non-special, non-huge and not mlock()'d (if we permitted this we'd have to drop locked pages which would be rather counterintuitive).
The poisoning of the range must be performed under mmap write lock as we have to install an anon_vma to ensure correct behaviour on fork.
Suggested-by: Vlastimil Babka vbabka@suze.cz Suggested-by: Jann Horn jannh@google.com Suggested-by: David Hildenbrand david@redhat.com Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com
arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/uapi/asm-generic/mman-common.h | 3 + mm/madvise.c | 158 +++++++++++++++++++++++++ mm/mprotect.c | 3 +- mm/mseal.c | 1 + 8 files changed, 176 insertions(+), 1 deletion(-)
diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h index 763929e814e9..71e13f27742d 100644 --- a/arch/alpha/include/uapi/asm/mman.h +++ b/arch/alpha/include/uapi/asm/mman.h @@ -78,6 +78,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h index 9c48d9a21aa0..1a2222322f77 100644 --- a/arch/mips/include/uapi/asm/mman.h +++ b/arch/mips/include/uapi/asm/mman.h @@ -105,6 +105,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h index 68c44f99bc93..380905522397 100644 --- a/arch/parisc/include/uapi/asm/mman.h +++ b/arch/parisc/include/uapi/asm/mman.h @@ -75,6 +75,9 @@ #define MADV_HWPOISON 100 /* poison a page for testing */ #define MADV_SOFT_OFFLINE 101 /* soft offline page for testing */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h index 1ff0c858544f..e8d5affceb28 100644 --- a/arch/xtensa/include/uapi/asm/mman.h +++ b/arch/xtensa/include/uapi/asm/mman.h @@ -113,6 +113,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h index 6ce1f1ceb432..5dfd3d442de4 100644 --- a/include/uapi/asm-generic/mman-common.h +++ b/include/uapi/asm-generic/mman-common.h @@ -79,6 +79,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/mm/madvise.c b/mm/madvise.c index e871a72a6c32..7216e10723ae 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: case MADV_COLLAPSE:
case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */ return 0; default: /* be safe, default to 1. list exceptions explicitly */
@@ -1017,6 +1018,157 @@ static long madvise_remove(struct vm_area_struct *vma, return error; }
+static bool is_valid_guard_vma(struct vm_area_struct *vma, bool allow_locked) +{
vm_flags_t disallowed = VM_SPECIAL | VM_HUGETLB;
/*
* A user could lock after poisoning but that's fine, as they'd not be
* able to fault in. The issue arises when we try to zap existing locked
* VMAs. We don't want to do that.
*/
if (!allow_locked)
disallowed |= VM_LOCKED;
if (!vma_is_anonymous(vma))
return false;
if ((vma->vm_flags & (VM_MAYWRITE | disallowed)) != VM_MAYWRITE)
return false;
return true;
+}
+static int guard_poison_install_pte(unsigned long addr, unsigned long next,
pte_t *ptep, struct mm_walk *walk)
+{
unsigned long *num_installed = (unsigned long *)walk->private;
(*num_installed)++;
/* Simply install a PTE marker, this causes segfault on access. */
*ptep = make_pte_marker(PTE_MARKER_GUARD);
return 0;
+}
+static bool is_guard_pte_marker(pte_t ptent) +{
return is_pte_marker(ptent) &&
is_guard_swp_entry(pte_to_swp_entry(ptent));
+}
+static int guard_poison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
/*
* If not a guard marker, simply abort the operation. We return a value
* > 0 indicating a non-error abort.
*/
return !is_guard_pte_marker(ptent);
+}
+static const struct mm_walk_ops guard_poison_walk_ops = {
.install_pte = guard_poison_install_pte,
.pte_entry = guard_poison_pte_entry,
/* We might need to install an anon_vma. */
.walk_lock = PGWALK_WRLOCK,
+};
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
retried = true;
}
+}
+static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
if (is_guard_pte_marker(ptent)) {
/* Simply clear the PTE marker. */
pte_clear_not_present_full(walk->mm, addr, pte, true);
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
+static long madvise_guard_unpoison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
*prev = vma;
/*
* We're ok with unpoisoning mlock()'d ranges, as this is a
* non-destructive action.
*/
if (!is_valid_guard_vma(vma, /* allow_locked = */true))
return -EINVAL;
return walk_page_range(vma->vm_mm, start, end,
&guard_unpoison_walk_ops, NULL);
+}
/*
- Apply an madvise behavior to a region of a vma. madvise_update_vma
- will handle splitting a vm area into separate areas, each area with its own
@@ -1098,6 +1250,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma, break; case MADV_COLLAPSE: return madvise_collapse(vma, prev, start, end);
case MADV_GUARD_POISON:
return madvise_guard_poison(vma, prev, start, end);
case MADV_GUARD_UNPOISON:
return madvise_guard_unpoison(vma, prev, start, end); } anon_name = anon_vma_name(vma);
@@ -1197,6 +1353,8 @@ madvise_behavior_valid(int behavior) case MADV_DODUMP: case MADV_WIPEONFORK: case MADV_KEEPONFORK:
case MADV_GUARD_POISON:
case MADV_GUARD_UNPOISON:
#ifdef CONFIG_MEMORY_FAILURE case MADV_SOFT_OFFLINE: case MADV_HWPOISON: diff --git a/mm/mprotect.c b/mm/mprotect.c index 0c5d6d06107d..d0e3ebfadef8 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb, } else if (is_pte_marker_entry(entry)) { /* * Ignore error swap entries unconditionally,
* because any access should sigbus anyway.
* because any access should sigbus/sigsegv
* anyway. */ if (is_poisoned_swp_entry(entry)) continue;
diff --git a/mm/mseal.c b/mm/mseal.c index ece977bd21e1..21bf5534bcf5 100644 --- a/mm/mseal.c +++ b/mm/mseal.c @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior) case MADV_REMOVE: case MADV_DONTFORK: case MADV_WIPEONFORK:
case MADV_GUARD_POISON:
Can you please describe the rationale to add this to the existing mseal's semantic ?
I didn't not find any description from the cover letter or this patch's description, hence asking.
Sure, this is because when you guard-poison ranges that have existing mappings, it zaps them, which performs basically the exact same operation as MADV_DONTNEED, and obviously discards any underlying data in doing so.
As a result, I felt it was correct to add this operation to the list of discard operations from the perspective of mseal.
Thanks -Jeff
return true; }
-- 2.46.2
Hi Lorenzo
On Fri, Oct 4, 2024 at 11:26 AM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Fri, Oct 04, 2024 at 11:17:13AM -0700, Jeff Xu wrote:
Hi Lorenzo,
Please add me to this series, I 'm interested in everything related to mseal :-), thanks.
Hi Jeff, more than happy to cc you on this going forward :)
The only change to mseal is a trivial change because the poison operation discards, wasn't intentional, but apologies, I should have cc'd you regardless! Will do so on any such interaction with mseal moving forward.
No problems :-).
I do sometimes scan the emails to search for mseal keyword and that is how I find this patch series.
I also added Kees into the cc, since mseal is a security feature.
Sure no problem happy to keep Kees cc-d too (Kees - ping me if you'd prefer not :>), however a note on this - guard pages _themselves_ are emphatically NOT a security feature, and make no guarantees on this front, but rather are a convenience/effiency thing.
It is a nice feature nevertheless. I imagine the guide page can detect cases such as trying to go-over the main stack ?
Obviously however I am adding madvise() functionality here, and all such functionality must take into account whether or not they are discard operations as to ensure mseal semantics are obeyed - see below for my argument as to why I feel the poison operation falls under this.
On Fri, Sep 27, 2024 at 5:52 AM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Implement a new lightweight guard page feature, that is regions of userland virtual memory that, when accessed, cause a fatal signal to arise.
Currently users must establish PROT_NONE ranges to achieve this.
However this is very costly memory-wise - we need a VMA for each and every one of these regions AND they become unmergeable with surrounding VMAs.
In addition repeated mmap() calls require repeated kernel context switches and contention of the mmap lock to install these ranges, potentially also having to unmap memory if installed over existing ranges.
The lightweight guard approach eliminates the VMA cost altogether - rather than establishing a PROT_NONE VMA, it operates at the level of page table entries - poisoning PTEs such that accesses to them cause a fault followed by a SIGSGEV signal being raised.
This is achieved through the PTE marker mechanism, which a previous commit in this series extended to permit this to be done, installed via the generic page walking logic, also extended by a prior commit for this purpose.
These poison ranges are established with MADV_GUARD_POISON, and if the range in which they are installed contain any existing mappings, they will be zapped, i.e. free the range and unmap memory (thus mimicking the behaviour of MADV_DONTNEED in this respect).
Any existing poison entries will be left untouched. There is no nesting of poisoned pages.
Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather unexpected behaviour, but are cleared on process teardown or unmapping of memory ranges.
Ranges can have the poison property removed by MADV_GUARD_UNPOISON - 'remedying' the poisoning. The ranges over which this is applied, should they contain non-poison entries, will be untouched, only poison entries will be cleared.
We permit this operation on anonymous memory only, and only VMAs which are non-special, non-huge and not mlock()'d (if we permitted this we'd have to drop locked pages which would be rather counterintuitive).
The poisoning of the range must be performed under mmap write lock as we have to install an anon_vma to ensure correct behaviour on fork.
Suggested-by: Vlastimil Babka vbabka@suze.cz Suggested-by: Jann Horn jannh@google.com Suggested-by: David Hildenbrand david@redhat.com Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com
arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/uapi/asm-generic/mman-common.h | 3 + mm/madvise.c | 158 +++++++++++++++++++++++++ mm/mprotect.c | 3 +- mm/mseal.c | 1 + 8 files changed, 176 insertions(+), 1 deletion(-)
diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h index 763929e814e9..71e13f27742d 100644 --- a/arch/alpha/include/uapi/asm/mman.h +++ b/arch/alpha/include/uapi/asm/mman.h @@ -78,6 +78,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h index 9c48d9a21aa0..1a2222322f77 100644 --- a/arch/mips/include/uapi/asm/mman.h +++ b/arch/mips/include/uapi/asm/mman.h @@ -105,6 +105,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h index 68c44f99bc93..380905522397 100644 --- a/arch/parisc/include/uapi/asm/mman.h +++ b/arch/parisc/include/uapi/asm/mman.h @@ -75,6 +75,9 @@ #define MADV_HWPOISON 100 /* poison a page for testing */ #define MADV_SOFT_OFFLINE 101 /* soft offline page for testing */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h index 1ff0c858544f..e8d5affceb28 100644 --- a/arch/xtensa/include/uapi/asm/mman.h +++ b/arch/xtensa/include/uapi/asm/mman.h @@ -113,6 +113,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h index 6ce1f1ceb432..5dfd3d442de4 100644 --- a/include/uapi/asm-generic/mman-common.h +++ b/include/uapi/asm-generic/mman-common.h @@ -79,6 +79,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
+#define MADV_GUARD_POISON 102 /* fatal signal on access to range */ +#define MADV_GUARD_UNPOISON 103 /* revoke guard poisoning */
/* compatibility flags */ #define MAP_FILE 0
diff --git a/mm/madvise.c b/mm/madvise.c index e871a72a6c32..7216e10723ae 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: case MADV_COLLAPSE:
case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */ return 0; default: /* be safe, default to 1. list exceptions explicitly */
@@ -1017,6 +1018,157 @@ static long madvise_remove(struct vm_area_struct *vma, return error; }
+static bool is_valid_guard_vma(struct vm_area_struct *vma, bool allow_locked) +{
vm_flags_t disallowed = VM_SPECIAL | VM_HUGETLB;
/*
* A user could lock after poisoning but that's fine, as they'd not be
* able to fault in. The issue arises when we try to zap existing locked
* VMAs. We don't want to do that.
*/
if (!allow_locked)
disallowed |= VM_LOCKED;
if (!vma_is_anonymous(vma))
return false;
if ((vma->vm_flags & (VM_MAYWRITE | disallowed)) != VM_MAYWRITE)
return false;
return true;
+}
+static int guard_poison_install_pte(unsigned long addr, unsigned long next,
pte_t *ptep, struct mm_walk *walk)
+{
unsigned long *num_installed = (unsigned long *)walk->private;
(*num_installed)++;
/* Simply install a PTE marker, this causes segfault on access. */
*ptep = make_pte_marker(PTE_MARKER_GUARD);
return 0;
+}
+static bool is_guard_pte_marker(pte_t ptent) +{
return is_pte_marker(ptent) &&
is_guard_swp_entry(pte_to_swp_entry(ptent));
+}
+static int guard_poison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
/*
* If not a guard marker, simply abort the operation. We return a value
* > 0 indicating a non-error abort.
*/
return !is_guard_pte_marker(ptent);
+}
+static const struct mm_walk_ops guard_poison_walk_ops = {
.install_pte = guard_poison_install_pte,
.pte_entry = guard_poison_pte_entry,
/* We might need to install an anon_vma. */
.walk_lock = PGWALK_WRLOCK,
+};
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
retried = true;
}
+}
+static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
if (is_guard_pte_marker(ptent)) {
/* Simply clear the PTE marker. */
pte_clear_not_present_full(walk->mm, addr, pte, true);
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
+static long madvise_guard_unpoison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
*prev = vma;
/*
* We're ok with unpoisoning mlock()'d ranges, as this is a
* non-destructive action.
*/
if (!is_valid_guard_vma(vma, /* allow_locked = */true))
return -EINVAL;
return walk_page_range(vma->vm_mm, start, end,
&guard_unpoison_walk_ops, NULL);
+}
/*
- Apply an madvise behavior to a region of a vma. madvise_update_vma
- will handle splitting a vm area into separate areas, each area with its own
@@ -1098,6 +1250,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma, break; case MADV_COLLAPSE: return madvise_collapse(vma, prev, start, end);
case MADV_GUARD_POISON:
return madvise_guard_poison(vma, prev, start, end);
case MADV_GUARD_UNPOISON:
return madvise_guard_unpoison(vma, prev, start, end); } anon_name = anon_vma_name(vma);
@@ -1197,6 +1353,8 @@ madvise_behavior_valid(int behavior) case MADV_DODUMP: case MADV_WIPEONFORK: case MADV_KEEPONFORK:
case MADV_GUARD_POISON:
case MADV_GUARD_UNPOISON:
#ifdef CONFIG_MEMORY_FAILURE case MADV_SOFT_OFFLINE: case MADV_HWPOISON: diff --git a/mm/mprotect.c b/mm/mprotect.c index 0c5d6d06107d..d0e3ebfadef8 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb, } else if (is_pte_marker_entry(entry)) { /* * Ignore error swap entries unconditionally,
* because any access should sigbus anyway.
* because any access should sigbus/sigsegv
* anyway. */ if (is_poisoned_swp_entry(entry)) continue;
diff --git a/mm/mseal.c b/mm/mseal.c index ece977bd21e1..21bf5534bcf5 100644 --- a/mm/mseal.c +++ b/mm/mseal.c @@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior) case MADV_REMOVE: case MADV_DONTFORK: case MADV_WIPEONFORK:
case MADV_GUARD_POISON:
Can you please describe the rationale to add this to the existing mseal's semantic ?
I didn't not find any description from the cover letter or this patch's description, hence asking.
Sure, this is because when you guard-poison ranges that have existing mappings, it zaps them, which performs basically the exact same operation as MADV_DONTNEED, and obviously discards any underlying data in doing so.
As a result, I felt it was correct to add this operation to the list of discard operations from the perspective of mseal.
That makes sense. Thanks for thinking about memory sealing when adding new features.
If possible, please add the reasoning here in the commit description in the next version, for future reference. As far as I am concerned, the mseal.c changes LGTM.
Thanks -Jeff
Thanks -Jeff
return true; }
-- 2.46.2
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Implement a new lightweight guard page feature, that is regions of userland virtual memory that, when accessed, cause a fatal signal to arise.
[...]
arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/uapi/asm-generic/mman-common.h | 3 +
I kinda wonder if we could start moving the parts of those headers that are the same for all architectures to include/uapi/linux/mman.h instead... but that's maybe out of scope for this series.
[...]
diff --git a/mm/madvise.c b/mm/madvise.c index e871a72a6c32..7216e10723ae 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: case MADV_COLLAPSE:
case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */
What does poisoning need a write lock for? anon_vma_prepare() doesn't need it (it only needs mmap_lock held for reading), zap_page_range_single() doesn't need it, and pagewalk also doesn't need it as long as the range being walked is covered by a VMA, which it is...
I see you set PGWALK_WRLOCK in guard_poison_walk_ops with a comment saying "We might need to install an anon_vma" - is that referring to an older version of the patch where the anon_vma_prepare() call was inside the pagewalk callback or something like that? Either way, anon_vma_prepare() doesn't need write locks (it can't, it has to work from the page fault handling path).
return 0; default: /* be safe, default to 1. list exceptions explicitly */
[...]
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
I'd move this up, to before we create poison PTEs. There's no harm in attaching an anon_vma to the VMA even if the rest of the operation fails; and I think it would be weird to have error paths that don't attach an anon_vma even though they .
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
Hmm, yeah, it would be nice if we could avoid telling userspace to loop on -EAGAIN but I guess we don't have any particularly good options here? Well, we could bail out with -EINTR if a (fatal?) signal is pending and otherwise keep looping... if we'd tell userspace "try again on -EAGAIN", we might as well do that in the kernel...
(Personally I would put curly braces around these branches because they occupy multiple lines, though the coding style doesn't explicitly say that, so I guess maybe it's a matter of personal preference... adding curly braces here would match what is done, for example, in relocate_vma_down().)
retried = true;
}
+}
+static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
if (is_guard_pte_marker(ptent)) {
/* Simply clear the PTE marker. */
pte_clear_not_present_full(walk->mm, addr, pte, true);
I think that last parameter probably should be "false"? The sparc code calls it "fullmm", which is a term the MM code uses when talking about operations that remove all mappings in the entire mm_struct because the process has died, which allows using some faster special-case version of TLB shootdown or something along those lines.
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
It is a _little_ weird that unpoisoning creates page tables when they don't already exist, which will also prevent creating THP entries on fault in such areas afterwards... but I guess it doesn't really matter given that poisoning has that effect, too, and you probably usually won't call MADV_GUARD_UNPOISON on an area that hasn't been poisoned before... so I guess this is not an actionable comment.
On Fri, Oct 11, 2024 at 11:12 AM Jann Horn jannh@google.com wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Implement a new lightweight guard page feature, that is regions of userland virtual memory that, when accessed, cause a fatal signal to arise.
[...]
arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/uapi/asm-generic/mman-common.h | 3 +
I kinda wonder if we could start moving the parts of those headers that are the same for all architectures to include/uapi/linux/mman.h instead... but that's maybe out of scope for this series.
[...]
diff --git a/mm/madvise.c b/mm/madvise.c index e871a72a6c32..7216e10723ae 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: case MADV_COLLAPSE:
case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */
What does poisoning need a write lock for? anon_vma_prepare() doesn't need it (it only needs mmap_lock held for reading), zap_page_range_single() doesn't need it, and pagewalk also doesn't need it as long as the range being walked is covered by a VMA, which it is...
I see you set PGWALK_WRLOCK in guard_poison_walk_ops with a comment saying "We might need to install an anon_vma" - is that referring to an older version of the patch where the anon_vma_prepare() call was inside the pagewalk callback or something like that? Either way, anon_vma_prepare() doesn't need write locks (it can't, it has to work from the page fault handling path).
I was wondering about that too and I can't find any reason for write-locking the mm for this operation. PGWALK_WRLOCK should also be changed to PGWALK_RDLOCK as we are not modifying the VMA.
BTW, I'm testing your patchset on Android and so far it is stable!
return 0; default: /* be safe, default to 1. list exceptions explicitly */
[...]
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
I'd move this up, to before we create poison PTEs. There's no harm in attaching an anon_vma to the VMA even if the rest of the operation fails; and I think it would be weird to have error paths that don't attach an anon_vma even though they .
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
Hmm, yeah, it would be nice if we could avoid telling userspace to loop on -EAGAIN but I guess we don't have any particularly good options here? Well, we could bail out with -EINTR if a (fatal?) signal is pending and otherwise keep looping... if we'd tell userspace "try again on -EAGAIN", we might as well do that in the kernel...
(Personally I would put curly braces around these branches because they occupy multiple lines, though the coding style doesn't explicitly say that, so I guess maybe it's a matter of personal preference... adding curly braces here would match what is done, for example, in relocate_vma_down().)
retried = true;
}
+}
+static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
if (is_guard_pte_marker(ptent)) {
/* Simply clear the PTE marker. */
pte_clear_not_present_full(walk->mm, addr, pte, true);
I think that last parameter probably should be "false"? The sparc code calls it "fullmm", which is a term the MM code uses when talking about operations that remove all mappings in the entire mm_struct because the process has died, which allows using some faster special-case version of TLB shootdown or something along those lines.
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
It is a _little_ weird that unpoisoning creates page tables when they don't already exist, which will also prevent creating THP entries on fault in such areas afterwards... but I guess it doesn't really matter given that poisoning has that effect, too, and you probably usually won't call MADV_GUARD_UNPOISON on an area that hasn't been poisoned before... so I guess this is not an actionable comment.
On Fri, Oct 11, 2024 at 01:55:42PM -0700, Suren Baghdasaryan wrote:
On Fri, Oct 11, 2024 at 11:12 AM Jann Horn jannh@google.com wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Implement a new lightweight guard page feature, that is regions of userland virtual memory that, when accessed, cause a fatal signal to arise.
[...]
arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/uapi/asm-generic/mman-common.h | 3 +
I kinda wonder if we could start moving the parts of those headers that are the same for all architectures to include/uapi/linux/mman.h instead... but that's maybe out of scope for this series.
[...]
diff --git a/mm/madvise.c b/mm/madvise.c index e871a72a6c32..7216e10723ae 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: case MADV_COLLAPSE:
case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */
What does poisoning need a write lock for? anon_vma_prepare() doesn't need it (it only needs mmap_lock held for reading), zap_page_range_single() doesn't need it, and pagewalk also doesn't need it as long as the range being walked is covered by a VMA, which it is...
I see you set PGWALK_WRLOCK in guard_poison_walk_ops with a comment saying "We might need to install an anon_vma" - is that referring to an older version of the patch where the anon_vma_prepare() call was inside the pagewalk callback or something like that? Either way, anon_vma_prepare() doesn't need write locks (it can't, it has to work from the page fault handling path).
I was wondering about that too and I can't find any reason for write-locking the mm for this operation. PGWALK_WRLOCK should also be changed to PGWALK_RDLOCK as we are not modifying the VMA.
Indeed, as I said to Jann you're right and I was in error to use this, will change!
BTW, I'm testing your patchset on Android and so far it is stable!
Thanks!
As there is no significant conceptual pushback to this series, I will un-RFC and post a version with fixes for the issues Jann raised, as well as a fix for some xtensa et al. issues with header includes.
return 0; default: /* be safe, default to 1. list exceptions explicitly */
[...]
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
I'd move this up, to before we create poison PTEs. There's no harm in attaching an anon_vma to the VMA even if the rest of the operation fails; and I think it would be weird to have error paths that don't attach an anon_vma even though they .
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
Hmm, yeah, it would be nice if we could avoid telling userspace to loop on -EAGAIN but I guess we don't have any particularly good options here? Well, we could bail out with -EINTR if a (fatal?) signal is pending and otherwise keep looping... if we'd tell userspace "try again on -EAGAIN", we might as well do that in the kernel...
(Personally I would put curly braces around these branches because they occupy multiple lines, though the coding style doesn't explicitly say that, so I guess maybe it's a matter of personal preference... adding curly braces here would match what is done, for example, in relocate_vma_down().)
retried = true;
}
+}
+static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
if (is_guard_pte_marker(ptent)) {
/* Simply clear the PTE marker. */
pte_clear_not_present_full(walk->mm, addr, pte, true);
I think that last parameter probably should be "false"? The sparc code calls it "fullmm", which is a term the MM code uses when talking about operations that remove all mappings in the entire mm_struct because the process has died, which allows using some faster special-case version of TLB shootdown or something along those lines.
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
It is a _little_ weird that unpoisoning creates page tables when they don't already exist, which will also prevent creating THP entries on fault in such areas afterwards... but I guess it doesn't really matter given that poisoning has that effect, too, and you probably usually won't call MADV_GUARD_UNPOISON on an area that hasn't been poisoned before... so I guess this is not an actionable comment.
On Fri, Oct 11, 2024 at 08:11:36PM +0200, Jann Horn wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
Implement a new lightweight guard page feature, that is regions of userland virtual memory that, when accessed, cause a fatal signal to arise.
[...]
arch/alpha/include/uapi/asm/mman.h | 3 + arch/mips/include/uapi/asm/mman.h | 3 + arch/parisc/include/uapi/asm/mman.h | 3 + arch/xtensa/include/uapi/asm/mman.h | 3 + include/uapi/asm-generic/mman-common.h | 3 +
I kinda wonder if we could start moving the parts of those headers that are the same for all architectures to include/uapi/linux/mman.h instead... but that's maybe out of scope for this series.
Arnd already had a look at this in a recent series. I had the same feeling doing this...
[...]
diff --git a/mm/madvise.c b/mm/madvise.c index e871a72a6c32..7216e10723ae 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -60,6 +60,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: case MADV_COLLAPSE:
case MADV_GUARD_UNPOISON: /* Only poisoning needs a write lock. */
What does poisoning need a write lock for? anon_vma_prepare() doesn't need it (it only needs mmap_lock held for reading), zap_page_range_single() doesn't need it, and pagewalk also doesn't need it as long as the range being walked is covered by a VMA, which it is...
I see you set PGWALK_WRLOCK in guard_poison_walk_ops with a comment saying "We might need to install an anon_vma" - is that referring to an older version of the patch where the anon_vma_prepare() call was inside the pagewalk callback or something like that? Either way, anon_vma_prepare() doesn't need write locks (it can't, it has to work from the page fault handling path).
OK this was a misunderstanding. Actually there have been more than one, at first I thought a write lock would protect us against racing faults (nope, due to RCU vma locking now :) and then I had assumed literally changing a vma field _surely_ must require a write lock, also it appears no as __anon_vma_prepare(), amusingly, uses the mm->page_table_lock to protect against accesses to vma->anon_vma.
And yes you're right it is triggered on the fault path so has to work that way.
TL;DR will change to read lock.
return 0; default: /* be safe, default to 1. list exceptions explicitly */
[...]
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
I'd move this up, to before we create poison PTEs. There's no harm in attaching an anon_vma to the VMA even if the rest of the operation fails; and I think it would be weird to have error paths that don't attach an anon_vma even though they .
I think you didn't finish this sentence :)
I disagree, we might have absolutely no need to do it, and I'd rather only do so _if_ we have to.
It feels like the logical spot to do it and, while the cases where it wouldn't happen are ones where pages are already poisoned (the vma->anon_vma == NULL test will fail so basically a no-op) or error on page walk.
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
Hmm, yeah, it would be nice if we could avoid telling userspace to loop on -EAGAIN but I guess we don't have any particularly good options here? Well, we could bail out with -EINTR if a (fatal?) signal is pending and otherwise keep looping... if we'd tell userspace "try again on -EAGAIN", we might as well do that in the kernel...
The problem is you could conceivably go on for quite some time, while holding and contending a HIGHLY contended lock (mm->mmap_lock) so I'd really rather let userspace take care of it.
You could avoid this by having the walker be a _replace_ operation, that is - if we encounter an existing mapping, replace in-place with a poison marker rather than install marker/zap.
However doing that would involve either completely abstracting such logic from scratch (a significant task in itself) to avoid duplication which be hugely off-topic for the patch set or worse, duplicating a whole bunch of page walking logic once again.
By being optimistic and simply having the user having to handle looping which seems reasonable (again, it's weird if you're installing poison markers and another thread could be racing you) we avoid all of that.
(Personally I would put curly braces around these branches because they occupy multiple lines, though the coding style doesn't explicitly say that, so I guess maybe it's a matter of personal preference... adding curly braces here would match what is done, for example, in relocate_vma_down().)
Hey I wrote that too! ;) Sure I can change that.
retried = true;
}
+}
+static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
unsigned long next, struct mm_walk *walk)
+{
pte_t ptent = ptep_get(pte);
if (is_guard_pte_marker(ptent)) {
/* Simply clear the PTE marker. */
pte_clear_not_present_full(walk->mm, addr, pte, true);
I think that last parameter probably should be "false"? The sparc code calls it "fullmm", which is a term the MM code uses when talking about operations that remove all mappings in the entire mm_struct because the process has died, which allows using some faster special-case version of TLB shootdown or something along those lines.
Yeah I think you're right. Will change.
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
It is a _little_ weird that unpoisoning creates page tables when they don't already exist, which will also prevent creating THP entries on fault in such areas afterwards... but I guess it doesn't really matter given that poisoning has that effect, too, and you probably usually won't call MADV_GUARD_UNPOISON on an area that hasn't been poisoned before... so I guess this is not an actionable comment.
It doesn't? There's no .install_pte so if an entries are non-present we ignore.
HOWEVER, we do split THP. I don't think there's any way around it unless we extended the page walker to handle this more gracefully (pmd level being able to hint that we shouldn't do that or something), but that's really out of scope here.
The idea is that a caller can lazily call MADV_GUARD_UNPOISON on a range knowing things stay as they were, I guess we can add to the manpage a note that this will split THP?
On Mon, Oct 14, 2024 at 1:09 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Fri, Oct 11, 2024 at 08:11:36PM +0200, Jann Horn wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
return 0; default: /* be safe, default to 1. list exceptions explicitly */
[...]
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
I'd move this up, to before we create poison PTEs. There's no harm in attaching an anon_vma to the VMA even if the rest of the operation fails; and I think it would be weird to have error paths that don't attach an anon_vma even though they .
I think you didn't finish this sentence :)
Oops...
I disagree, we might have absolutely no need to do it, and I'd rather only do so _if_ we have to.
But there's no downside to erroring out after having installed an anon_vma, right?
It feels like the logical spot to do it and, while the cases where it wouldn't happen are ones where pages are already poisoned (the vma->anon_vma == NULL test will fail so basically a no-op) or error on page walk.
My understanding is that some of the MM code basically assumes that a VMA without an anon_vma and without userfault-WP can't contain any state that needs to be preserved; or something along those lines. As you pointed out, fork() is one such case (which maybe doesn't matter so much because it can't race with this operation).
khugepaged also relies on this assumption in retract_page_tables(), though that function is not used on anonymous VMAs. If MADVISE_GUARD is extended to cover file VMAs in the future, then I think we could race with retract_page_tables() in a functionally relevant way even when MADVISE_GUARD succeeds: If khugepaged preempts us between the page walk and installing the anon_vma, retract_page_tables() could observe that we don't have an anon_vma yet and throw away a page table in which we just installed guard PTEs.
Though I guess really that's not the main reason why I'm saying this; my main reason is that almost any other path that has to ensure an anon_vma is present does that part first (usually because the ordering matters and this way around is more or less the only possible ordering). So even if there are some specific reasons why you can do the ordering the other way around here, it kinda stands out to me as being weird...
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
Hmm, yeah, it would be nice if we could avoid telling userspace to loop on -EAGAIN but I guess we don't have any particularly good options here? Well, we could bail out with -EINTR if a (fatal?) signal is pending and otherwise keep looping... if we'd tell userspace "try again on -EAGAIN", we might as well do that in the kernel...
The problem is you could conceivably go on for quite some time, while holding and contending a HIGHLY contended lock (mm->mmap_lock) so I'd really rather let userspace take care of it.
Hmm... so if the retry was handled in-kernel, you'd basically ideally have the retry happen all the way up in do_madvise(), where the mmap lock can be dropped and re-taken?
You could avoid this by having the walker be a _replace_ operation, that is
- if we encounter an existing mapping, replace in-place with a poison
marker rather than install marker/zap.
However doing that would involve either completely abstracting such logic from scratch (a significant task in itself) to avoid duplication which be hugely off-topic for the patch set or worse, duplicating a whole bunch of page walking logic once again.
Mmh, yeah, you'd have to extract the locked part of zap_pte_range() and add your own copy of all the stuff that happens higher up for setting up TLB flushes and such... I see how that would be a massive pain and error-prone.
By being optimistic and simply having the user having to handle looping which seems reasonable (again, it's weird if you're installing poison markers and another thread could be racing you) we avoid all of that.
I guess one case in which that could happen legitimately is if you race a MADV_POISON on the area 0x1ff000-0x200100 (first page is populated, second page is not, pmd entry corresponding to 0x200000 is clear) with a page fault at 0x200200? So you could have a scenario like:
1. MADV_POISON starts walk_page_range() 2. MADV_POISON sees non-zero, non-poison PTE at 0x1ff000, stops the walk 3. MADV_POISON does zap_page_range_single() 4. pagefault at 0x200200 happens and populates with a hugepage 5. MADV_POISON enters walk_page_range() 6. MADV_POISON splits the THP 7. MADV_POISON sees a populated PTE
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
It is a _little_ weird that unpoisoning creates page tables when they don't already exist, which will also prevent creating THP entries on fault in such areas afterwards... but I guess it doesn't really matter given that poisoning has that effect, too, and you probably usually won't call MADV_GUARD_UNPOISON on an area that hasn't been poisoned before... so I guess this is not an actionable comment.
It doesn't? There's no .install_pte so if an entries are non-present we ignore.
Ah, right, of course. Nevermind.
HOWEVER, we do split THP. I don't think there's any way around it unless we extended the page walker to handle this more gracefully (pmd level being able to hint that we shouldn't do that or something), but that's really out of scope here.
I think the `walk->action == ACTION_CONTINUE` check in walk_pmd_range() would let you do that, see wp_clean_pmd_entry() for an example. But yeah I guess that might just be unnecessary complexity.
The idea is that a caller can lazily call MADV_GUARD_UNPOISON on a range knowing things stay as they were, I guess we can add to the manpage a note that this will split THP?
Yeah, might make sense...
On Mon, Oct 14, 2024 at 05:56:50PM +0200, Jann Horn wrote:
On Mon, Oct 14, 2024 at 1:09 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Fri, Oct 11, 2024 at 08:11:36PM +0200, Jann Horn wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
return 0; default: /* be safe, default to 1. list exceptions explicitly */
[...]
+static long madvise_guard_poison(struct vm_area_struct *vma,
struct vm_area_struct **prev,
unsigned long start, unsigned long end)
+{
long err;
bool retried = false;
*prev = vma;
if (!is_valid_guard_vma(vma, /* allow_locked = */false))
return -EINVAL;
/*
* Optimistically try to install the guard poison pages first. If any
* non-guard pages are encountered, give up and zap the range before
* trying again.
*/
while (true) {
unsigned long num_installed = 0;
/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
err = walk_page_range_mm(vma->vm_mm, start, end,
&guard_poison_walk_ops,
&num_installed);
/*
* If we install poison markers, then the range is no longer
* empty from a page table perspective and therefore it's
* appropriate to have an anon_vma.
*
* This ensures that on fork, we copy page tables correctly.
*/
if (err >= 0 && num_installed > 0) {
int err_anon = anon_vma_prepare(vma);
I'd move this up, to before we create poison PTEs. There's no harm in attaching an anon_vma to the VMA even if the rest of the operation fails; and I think it would be weird to have error paths that don't attach an anon_vma even though they .
I think you didn't finish this sentence :)
Oops...
I disagree, we might have absolutely no need to do it, and I'd rather only do so _if_ we have to.
But there's no downside to erroring out after having installed an anon_vma, right?
We then use a resource we don't have to. I think it's more logical to only take that action when we know we need to.
It feels like the logical spot to do it and, while the cases where it wouldn't happen are ones where pages are already poisoned (the vma->anon_vma == NULL test will fail so basically a no-op) or error on page walk.
My understanding is that some of the MM code basically assumes that a VMA without an anon_vma and without userfault-WP can't contain any state that needs to be preserved; or something along those lines. As you pointed out, fork() is one such case (which maybe doesn't matter so much because it can't race with this operation).
khugepaged also relies on this assumption in retract_page_tables(), though that function is not used on anonymous VMAs. If MADVISE_GUARD is extended to cover file VMAs in the future, then I think we could race with retract_page_tables() in a functionally relevant way even when MADVISE_GUARD succeeds: If khugepaged preempts us between the page walk and installing the anon_vma, retract_page_tables() could observe that we don't have an anon_vma yet and throw away a page table in which we just installed guard PTEs.
Well for one retract_page_tables() seems to require the VMA to be file-backed :) So we can disregard this at this stage.
We enter into a slightly strange scenario with file-backed as to how we manifest memory poisoning, because a file will have backing in the page cache or an anon page for shmem and it seems that khugepage() operates at this level and simply remaps at the higher order.
But we then introduce a way the _mapping_ can be different and we have to correctly handle that.
So I think actually you'd see this break there too?
Interesting that we special-case uffd-wp, which similarly uses PTE markers and this is commented in retract_page_tables():
/* * When a vma is registered with uffd-wp, we cannot recycle * the page table because there may be pte markers installed. * Other vmas can still have the same file mapped hugely, but * skip this one: it will always be mapped in small page size * for uffd-wp registered ranges. */ if (userfaultfd_wp(vma)) continue;
So this is something (one of many) I will note down to think about when we come on to file-backed guard pages.
Though I guess really that's not the main reason why I'm saying this; my main reason is that almost any other path that has to ensure an anon_vma is present does that part first (usually because the ordering matters and this way around is more or less the only possible ordering). So even if there are some specific reasons why you can do the ordering the other way around here, it kinda stands out to me as being weird...
I mean, fair enough, on the basis of convention and to avoid future issues with this I'll move it.
if (err_anon)
err = err_anon;
}
if (err <= 0)
return err;
if (!retried)
/*
* OK some of the range have non-guard pages mapped, zap
* them. This leaves existing guard pages in place.
*/
zap_page_range_single(vma, start, end - start, NULL);
else
/*
* If we reach here, then there is a racing fault that
* has populated the PTE after we zapped. Give up and
* let the user know to try again.
*/
return -EAGAIN;
Hmm, yeah, it would be nice if we could avoid telling userspace to loop on -EAGAIN but I guess we don't have any particularly good options here? Well, we could bail out with -EINTR if a (fatal?) signal is pending and otherwise keep looping... if we'd tell userspace "try again on -EAGAIN", we might as well do that in the kernel...
The problem is you could conceivably go on for quite some time, while holding and contending a HIGHLY contended lock (mm->mmap_lock) so I'd really rather let userspace take care of it.
Hmm... so if the retry was handled in-kernel, you'd basically ideally have the retry happen all the way up in do_madvise(), where the mmap lock can be dropped and re-taken?
Yeah perhaps, but that gets (really) horrible.
You could avoid this by having the walker be a _replace_ operation, that is
- if we encounter an existing mapping, replace in-place with a poison
marker rather than install marker/zap.
However doing that would involve either completely abstracting such logic from scratch (a significant task in itself) to avoid duplication which be hugely off-topic for the patch set or worse, duplicating a whole bunch of page walking logic once again.
Mmh, yeah, you'd have to extract the locked part of zap_pte_range() and add your own copy of all the stuff that happens higher up for setting up TLB flushes and such... I see how that would be a massive pain and error-prone.
Yep, I'd really, really like to avoid doing that, the solution we have now is neat and avoids such duplication.
By being optimistic and simply having the user having to handle looping which seems reasonable (again, it's weird if you're installing poison markers and another thread could be racing you) we avoid all of that.
I guess one case in which that could happen legitimately is if you race a MADV_POISON on the area 0x1ff000-0x200100 (first page is populated, second page is not, pmd entry corresponding to 0x200000 is clear) with a page fault at 0x200200? So you could have a scenario like:
- MADV_POISON starts walk_page_range()
- MADV_POISON sees non-zero, non-poison PTE at 0x1ff000, stops the walk
- MADV_POISON does zap_page_range_single()
- pagefault at 0x200200 happens and populates with a hugepage
- MADV_POISON enters walk_page_range()
- MADV_POISON splits the THP
- MADV_POISON sees a populated PTE
You really shouldn't be seeing page faults in the range you are setting up poison markers for _at all_ :) it's something you'd do ahead of time.
But of course it's possible some scenario could arise like that, that's what the EAGAIN is for.
I just really don't want to get into a realm of trying to prove absolutely under all circumstances that we can't go on forever in a loop like that.
If you drop the lock for contention then you up the risk of that, it just feels dangerous.
A userland program can however live with a 'if EAGAIN try again' situation.
An alternative approach to this might be to try to take the VMA lock, but given the fraught situation with locking elsewhere I wonder if we should.
Also, you have to be realy unlucky with timing for this to happen, even in the scenario you mention (where you'd have to be unlucky with alignment too), unless you're _heavily_ page faulting in the range, either way a userland loop checking EAGAIN doesn't seem unreasonable.
update_mmu_cache(walk->vma, addr, pte);
}
return 0;
+}
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
.pte_entry = guard_unpoison_pte_entry,
.walk_lock = PGWALK_RDLOCK,
+};
It is a _little_ weird that unpoisoning creates page tables when they don't already exist, which will also prevent creating THP entries on fault in such areas afterwards... but I guess it doesn't really matter given that poisoning has that effect, too, and you probably usually won't call MADV_GUARD_UNPOISON on an area that hasn't been poisoned before... so I guess this is not an actionable comment.
It doesn't? There's no .install_pte so if an entries are non-present we ignore.
Ah, right, of course. Nevermind.
HOWEVER, we do split THP. I don't think there's any way around it unless we extended the page walker to handle this more gracefully (pmd level being able to hint that we shouldn't do that or something), but that's really out of scope here.
I think the `walk->action == ACTION_CONTINUE` check in walk_pmd_range() would let you do that, see wp_clean_pmd_entry() for an example. But yeah I guess that might just be unnecessary complexity.
Ah yeah... cool, actually think I will add that then, I hadn't noticed you could update that _in a callback_, as I first thought it was something you could set ahead of time then noticed the walker code resets it and... yeah cool.
The idea is that a caller can lazily call MADV_GUARD_UNPOISON on a range knowing things stay as they were, I guess we can add to the manpage a note that this will split THP?
Yeah, might make sense...
No need then :)
On Mon, Oct 14, 2024 at 7:02 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Mon, Oct 14, 2024 at 05:56:50PM +0200, Jann Horn wrote:
On Mon, Oct 14, 2024 at 1:09 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Fri, Oct 11, 2024 at 08:11:36PM +0200, Jann Horn wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
By being optimistic and simply having the user having to handle looping which seems reasonable (again, it's weird if you're installing poison markers and another thread could be racing you) we avoid all of that.
I guess one case in which that could happen legitimately is if you race a MADV_POISON on the area 0x1ff000-0x200100 (first page is populated, second page is not, pmd entry corresponding to 0x200000 is clear) with a page fault at 0x200200? So you could have a scenario like:
- MADV_POISON starts walk_page_range()
- MADV_POISON sees non-zero, non-poison PTE at 0x1ff000, stops the walk
- MADV_POISON does zap_page_range_single()
- pagefault at 0x200200 happens and populates with a hugepage
- MADV_POISON enters walk_page_range()
- MADV_POISON splits the THP
- MADV_POISON sees a populated PTE
You really shouldn't be seeing page faults in the range you are setting up poison markers for _at all_ :) it's something you'd do ahead of time.
But that's not what happens in my example - the address where the fault happens (0x200200) *is not* in the address range that MADV_POISON is called on (0x1ff000-0x200100). The fault and the MADV_POISON are in different 4KiB pages. What causes the conflict is that the fault and the MADV_POISON overlap the same *2MiB region* (both are in the region 0x200000-0x400000), and so THP stuff can effectively cause "page faults in the range you are setting up poison markers for".
But of course it's possible some scenario could arise like that, that's what the EAGAIN is for.
I just really don't want to get into a realm of trying to prove absolutely under all circumstances that we can't go on forever in a loop like that.
We can have a bailout on signal_pending() or something like that, and a cond_resched(). Then as far as I know, it won't really make a difference in behavior whether the loop is in the kernel or in userspace code that's following what the manpage tells it to do - either way, the program will loop until it either finishes its work or is interrupted by a signal, and either way it can get preempted. (Well, except under PREEMPT_NONE, but that is basically asking for long scheduling delays.)
And we do have other codepaths that have to loop endlessly if they keep racing with page table updates the wrong way, though I guess those loops are not going to always scan over a large address range over and over again...
Maybe something like this would be good enough, and mirror what you'd otherwise tell userspace to do?
@@ -1598,6 +1598,7 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh return madvise_inject_error(behavior, start, start + len_in); #endif
+retry: write = madvise_need_mmap_write(behavior); if (write) { if (mmap_write_lock_killable(mm)) @@ -1627,6 +1628,12 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh else mmap_read_unlock(mm);
+ if (error == <<<some special value>>>) { + if (!signal_pending(current)) + goto retry; + error = -ERESTARTNOINTR; + } + return error; }
Buuut, heh, actually, I just realized: You could even omit this and simply replace -EINTR with -ERESTARTNOINTR in your code as the error value, and then the kernel would automatically go back into the syscall for you after going through signal handing and such, without userspace noticing. https://lore.kernel.org/all/20121206220955.GZ4939@ZenIV.linux.org.uk/ has some explanation on how this works. Basically it tells the architecture's syscall entry code to move the userspace instruction pointer back to the syscall instruction, so as soon as execution returns to userspace, the first userspace instruction that executes will immediately re-do the syscall. That might be the easiest way, even if it is maybe a *little* bit of an API abuse to use this thing without having a pending signal...
If you drop the lock for contention then you up the risk of that, it just feels dangerous.
A userland program can however live with a 'if EAGAIN try again' situation.
An alternative approach to this might be to try to take the VMA lock, but given the fraught situation with locking elsewhere I wonder if we should.
Also, you have to be realy unlucky with timing for this to happen, even in the scenario you mention (where you'd have to be unlucky with alignment too), unless you're _heavily_ page faulting in the range, either way a userland loop checking EAGAIN doesn't seem unreasonable.
Yes, we could do -EINTR and document that for userspace, and as long as everyone using this properly reads the documentation, it will be fine. Though I imagine that from the userspace programmer perspective that's a weird API design - as in, if this error code always means I have to try again, why can't the kernel do that internally. It's kind of leaking an implementation detail into the UAPI.
On Mon, Oct 14, 2024 at 08:14:26PM +0200, Jann Horn wrote:
On Mon, Oct 14, 2024 at 7:02 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Mon, Oct 14, 2024 at 05:56:50PM +0200, Jann Horn wrote:
On Mon, Oct 14, 2024 at 1:09 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
On Fri, Oct 11, 2024 at 08:11:36PM +0200, Jann Horn wrote:
On Fri, Sep 27, 2024 at 2:51 PM Lorenzo Stoakes lorenzo.stoakes@oracle.com wrote:
By being optimistic and simply having the user having to handle looping which seems reasonable (again, it's weird if you're installing poison markers and another thread could be racing you) we avoid all of that.
I guess one case in which that could happen legitimately is if you race a MADV_POISON on the area 0x1ff000-0x200100 (first page is populated, second page is not, pmd entry corresponding to 0x200000 is clear) with a page fault at 0x200200? So you could have a scenario like:
- MADV_POISON starts walk_page_range()
- MADV_POISON sees non-zero, non-poison PTE at 0x1ff000, stops the walk
- MADV_POISON does zap_page_range_single()
- pagefault at 0x200200 happens and populates with a hugepage
- MADV_POISON enters walk_page_range()
- MADV_POISON splits the THP
- MADV_POISON sees a populated PTE
You really shouldn't be seeing page faults in the range you are setting up poison markers for _at all_ :) it's something you'd do ahead of time.
But that's not what happens in my example - the address where the fault happens (0x200200) *is not* in the address range that MADV_POISON is called on (0x1ff000-0x200100). The fault and the MADV_POISON are in different 4KiB pages. What causes the conflict is that the fault and the MADV_POISON overlap the same *2MiB region* (both are in the region 0x200000-0x400000), and so THP stuff can effectively cause "page faults in the range you are setting up poison markers for".
Right sorry maybe I wasn't clear in what I said - there should not be faults in the 'vicinity' of the poison pages, that is the range which you potentially intend to protect with the poison markers.
HOWEVER this is problematic clearly for something like a userspace allocator where you might be allocating small ranges that might fit within a huge page.
At the same time, I think you'd have to get pretty unlucky - you'd need to have faulted in enough for a huge page to be collapsed by THP, immediately adjacent to where you are installing this poison range, which spans multiple adjacent pages for some reason over the 2 MiB boundary (I assume you mean 0x201000 not 0x200100 btw :P).
Anyway I think this is moot as I am warming to the idea of us just looping to be honest.
There's a limit to how much can be faulted in (i.e. everything), and we hold the lock.
The user is _choosing_ to call this function and if there happens to be enormously huge amounts of faulting going on then so be it.
But of course it's possible some scenario could arise like that, that's what the EAGAIN is for.
I just really don't want to get into a realm of trying to prove absolutely under all circumstances that we can't go on forever in a loop like that.
We can have a bailout on signal_pending() or something like that, and a cond_resched(). Then as far as I know, it won't really make a difference in behavior whether the loop is in the kernel or in userspace code that's following what the manpage tells it to do - either way, the program will loop until it either finishes its work or is interrupted by a signal, and either way it can get preempted. (Well, except under PREEMPT_NONE, but that is basically asking for long scheduling delays.)
And we do have other codepaths that have to loop endlessly if they keep racing with page table updates the wrong way, though I guess those loops are not going to always scan over a large address range over and over again...
Maybe something like this would be good enough, and mirror what you'd otherwise tell userspace to do?
@@ -1598,6 +1598,7 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh return madvise_inject_error(behavior, start, start + len_in); #endif
+retry: write = madvise_need_mmap_write(behavior); if (write) { if (mmap_write_lock_killable(mm)) @@ -1627,6 +1628,12 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh else mmap_read_unlock(mm);
if (error == <<<some special value>>>) {
if (!signal_pending(current))
goto retry;
error = -ERESTARTNOINTR;
}
return error;
}
Buuut, heh, actually, I just realized: You could even omit this and simply replace -EINTR with -ERESTARTNOINTR in your code as the error
Interesting that that exists had no idea :)
I think I'd rather avoid it as it looks so specific and a lot more like asking for trouble than simply looping.
value, and then the kernel would automatically go back into the syscall for you after going through signal handing and such, without userspace noticing. https://lore.kernel.org/all/20121206220955.GZ4939@ZenIV.linux.org.uk/ has some explanation on how this works. Basically it tells the architecture's syscall entry code to move the userspace instruction pointer back to the syscall instruction, so as soon as execution returns to userspace, the first userspace instruction that executes will immediately re-do the syscall. That might be the easiest way, even if it is maybe a *little* bit of an API abuse to use this thing without having a pending signal...
If you drop the lock for contention then you up the risk of that, it just feels dangerous.
A userland program can however live with a 'if EAGAIN try again' situation.
An alternative approach to this might be to try to take the VMA lock, but given the fraught situation with locking elsewhere I wonder if we should.
Also, you have to be realy unlucky with timing for this to happen, even in the scenario you mention (where you'd have to be unlucky with alignment too), unless you're _heavily_ page faulting in the range, either way a userland loop checking EAGAIN doesn't seem unreasonable.
Yes, we could do -EINTR and document that for userspace, and as long as everyone using this properly reads the documentation, it will be fine. Though I imagine that from the userspace programmer perspective that's a weird API design - as in, if this error code always means I have to try again, why can't the kernel do that internally. It's kind of leaking an implementation detail into the UAPI.
Overall I am warming to us just looping.
I mean it's hard to argue against this being 'surprising' behaviour - the user expects to be able to install poison markers and for that to just be applied regardless of faulting.
And it's not exactly a huge amount more effort for us to simply loop, I just wanted to avoid it to avoid having to think about whether there's cases that could result in an eternal loop...
We'd need a write lock to VMA lock the VMAs and prevent racing faults (or in the case of non-VMA lock kernels to simplly prevent them) which adds to contention issues arguably a lot more than simply looping under the read lock.
Let me think about this but I think I will go ahead and try to add something simple that loops + checks for pending fatal signals for the next iteration of this series.
Utilise the kselftest harmness to implement tests for the guard page implementation.
We start by implement basic tests asserting that guard pages can be established (poisoned), cleared (remedied) and that touching poisoned pages result in SIGSEGV. We also assert that, in remedying a range, non-poison pages remain intact.
We then examine different operations on regions containing poison markers behave to ensure correct behaviour:
* Operations over multiple VMAs operate as expected. * Invoking MADV_GUARD_POISION / MADV_GUARD_REMEDY via process_madvise() in batches works correctly. * Ensuring that munmap() correctly tears down poison markers. * Using mprotect() to adjust protection bits does not in any way override or cause issues with poison markers. * Ensuring that splitting and merging VMAs around poison markers causes no issue - i.e. that a marker which 'belongs' to one VMA can function just as well 'belonging' to another. * Ensuring that madvise(..., MADV_DONTNEED) does not remove poison markers. * Ensuring that mlock()'ing a range containing poison markers does not cause issues. * Ensuring that mremap() can move a poisoned range and retain poison markers. * Ensuring that mremap() can expand a poisoned range and retain poison markers (perhaps moving the range). * Ensuring that mremap() can shrink a poisoned range and retain poison markers. * Ensuring that forking a process correctly retains poison markers. * Ensuring that forking a VMA with VM_WIPEONFORK set behaves sanely. * Ensuring that lazyfree simply clears poison markers. * Ensuring that userfaultfd can co-exist with guard pages. * Ensuring that madvise(..., MADV_POPULATE_READ) and madvise(..., MADV_POPULATE_WRITE) error out when encountering poison markers. * Ensuring that madvise(..., MADV_COLD) and madvise(..., MADV_PAGEOUT) do not remove poison markers.
Signed-off-by: Lorenzo Stoakes lorenzo.stoakes@oracle.com --- tools/testing/selftests/mm/.gitignore | 1 + tools/testing/selftests/mm/Makefile | 1 + tools/testing/selftests/mm/guard-pages.c | 1168 ++++++++++++++++++++++ 3 files changed, 1170 insertions(+) create mode 100644 tools/testing/selftests/mm/guard-pages.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore index 689bbd520296..8f01f4da1c0d 100644 --- a/tools/testing/selftests/mm/.gitignore +++ b/tools/testing/selftests/mm/.gitignore @@ -54,3 +54,4 @@ droppable hugetlb_dio pkey_sighandler_tests_32 pkey_sighandler_tests_64 +guard-pages diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile index 02e1204971b0..15c734d6cfec 100644 --- a/tools/testing/selftests/mm/Makefile +++ b/tools/testing/selftests/mm/Makefile @@ -79,6 +79,7 @@ TEST_GEN_FILES += hugetlb_fault_after_madv TEST_GEN_FILES += hugetlb_madv_vs_map TEST_GEN_FILES += hugetlb_dio TEST_GEN_FILES += droppable +TEST_GEN_FILES += guard-pages
ifneq ($(ARCH),arm64) TEST_GEN_FILES += soft-dirty diff --git a/tools/testing/selftests/mm/guard-pages.c b/tools/testing/selftests/mm/guard-pages.c new file mode 100644 index 000000000000..2ab0ff3ba5a0 --- /dev/null +++ b/tools/testing/selftests/mm/guard-pages.c @@ -0,0 +1,1168 @@ +// SPDX-License-Identifier: GPL-2.0-or-later + +#define _GNU_SOURCE +#include "../kselftest_harness.h" +#include <assert.h> +#include <fcntl.h> +#include <setjmp.h> +#include <errno.h> +#include <linux/userfaultfd.h> +#include <signal.h> +#include <stdbool.h> +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <sys/ioctl.h> +#include <sys/mman.h> +#include <sys/syscall.h> +#include <sys/uio.h> +#include <unistd.h> + +/* These may not yet be available in the uAPI so define if not. */ + +#ifndef MADV_GUARD_POISON +#define MADV_GUARD_POISON 102 +#endif + +#ifndef MADV_GUARD_UNPOISON +#define MADV_GUARD_UNPOISON 103 +#endif + +volatile bool signal_jump_set; +sigjmp_buf signal_jmp_buf; + +static int userfaultfd(int flags) +{ + return syscall(SYS_userfaultfd, flags); +} + +static void handle_fatal(int c) +{ + if (!signal_jump_set) + return; + + siglongjmp(signal_jmp_buf, c); +} + +static int pidfd_open(pid_t pid, unsigned int flags) +{ + return syscall(SYS_pidfd_open, pid, flags); +} + +/* + * Enable our signal catcher and try to read/write the specified buffer. The + * return value indicates whether the read/write succeeds without a fatal + * signal. + */ +static bool try_access_buf(char *ptr, bool write) +{ + bool failed; + + /* Tell signal handler to jump back here on fatal signal. */ + signal_jump_set = true; + /* If a fatal signal arose, we will jump back here and failed is set. */ + failed = sigsetjmp(signal_jmp_buf, 0) != 0; + + if (!failed) { + if (write) { + *ptr = 'x'; + } else { + const volatile char *chr = ptr; + + /* Force read. */ + (void)*chr; + } + } + + signal_jump_set = false; + return !failed; +} + +/* Try and read from a buffer, return true if no fatal signal. */ +static bool try_read_buf(char *ptr) +{ + return try_access_buf(ptr, false); +} + +/* Try and write to a buffer, return true if no fatal signal. */ +static bool try_write_buf(char *ptr) +{ + return try_access_buf(ptr, true); +} + +/* + * Try and BOTH read from AND write to a buffer, return true if BOTH operations + * succeed. + */ +static bool try_read_write_buf(char *ptr) +{ + return try_read_buf(ptr) && try_write_buf(ptr); +} + +FIXTURE(guard_pages) +{ + unsigned long page_size; +}; + +FIXTURE_SETUP(guard_pages) +{ + struct sigaction act = { + .sa_handler = &handle_fatal, + .sa_flags = SA_NODEFER, + }; + + sigemptyset(&act.sa_mask); + if (sigaction(SIGSEGV, &act, NULL)) { + perror("sigaction"); + ksft_exit_fail(); + } + + self->page_size = (unsigned long)sysconf(_SC_PAGESIZE); +}; + +FIXTURE_TEARDOWN(guard_pages) +{ + struct sigaction act = { + .sa_handler = SIG_DFL, + .sa_flags = SA_NODEFER, + }; + + sigemptyset(&act.sa_mask); + sigaction(SIGSEGV, &act, NULL); +} + +TEST_F(guard_pages, basic) +{ + const unsigned long NUM_PAGES = 10; + const unsigned long page_size = self->page_size; + char *ptr; + int i; + + ptr = mmap(NULL, NUM_PAGES * page_size, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANON, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Trivially assert we can touch the first page. */ + ASSERT_TRUE(try_read_write_buf(ptr)); + + ASSERT_EQ(madvise(ptr, page_size, MADV_GUARD_POISON), 0); + + /* Establish that 1st page SIGSEGV's. */ + ASSERT_FALSE(try_read_write_buf(ptr)); + + /* Ensure we can touch everything else.*/ + for (i = 1; i < NUM_PAGES; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Establish a guard page at the end of the mapping. */ + ASSERT_EQ(madvise(&ptr[(NUM_PAGES - 1) * page_size], page_size, + MADV_GUARD_POISON), 0); + + /* Check that both guard pages result in SIGSEGV. */ + ASSERT_FALSE(try_read_write_buf(ptr)); + ASSERT_FALSE(try_read_write_buf(&ptr[(NUM_PAGES - 1) * page_size])); + + /* Unpoison the first. */ + ASSERT_FALSE(madvise(ptr, page_size, MADV_GUARD_UNPOISON)); + + /* Make sure we can touch it. */ + ASSERT_TRUE(try_read_write_buf(ptr)); + + /* Unpoison the last. */ + ASSERT_FALSE(madvise(&ptr[(NUM_PAGES - 1) * page_size], page_size, + MADV_GUARD_UNPOISON)); + + /* Make sure we can touch it. */ + ASSERT_TRUE(try_read_write_buf(&ptr[(NUM_PAGES - 1) * page_size])); + + /* + * Test setting a _range_ of pages, namely the first 3. The first of + * these be faulted in, so this also tests that we can poison backed + * pages. + */ + ASSERT_EQ(madvise(ptr, 3 * page_size, MADV_GUARD_POISON), 0); + + /* Make sure they are all poisoned. */ + for (i = 0; i < 3; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Make sure the rest are not. */ + for (i = 3; i < NUM_PAGES; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Unpoison them. */ + ASSERT_EQ(madvise(ptr, NUM_PAGES * page_size, MADV_GUARD_UNPOISON), 0); + + /* Now make sure we can touch everything. */ + for (i = 0; i < NUM_PAGES; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Now unpoison everything, make sure we don't remove existing entries */ + ASSERT_EQ(madvise(ptr, NUM_PAGES * page_size, MADV_GUARD_UNPOISON), 0); + + for (i = 0; i < NUM_PAGES * page_size; i += page_size) { + ASSERT_EQ(ptr[i], 'x'); + } + + ASSERT_EQ(munmap(ptr, NUM_PAGES * page_size), 0); +} + +/* Assert that operations applied across multiple VMAs work as expected. */ +TEST_F(guard_pages, multi_vma) +{ + const unsigned long page_size = self->page_size; + char *ptr_region, *ptr, *ptr1, *ptr2, *ptr3; + int i; + + /* Reserve a 100 page region over which we can install VMAs. */ + ptr_region = mmap(NULL, 100 * page_size, PROT_NONE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr_region, MAP_FAILED); + + /* Place a VMA of 10 pages size at the start of the region. */ + ptr1 = mmap(ptr_region, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr1, MAP_FAILED); + + /* Place a VMA of 5 pages size 50 pages into the region. */ + ptr2 = mmap(&ptr_region[50 * page_size], 5 * page_size, + PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr2, MAP_FAILED); + + /* Place a VMA of 20 pages size at the end of the region. */ + ptr3 = mmap(&ptr_region[80 * page_size], 20 * page_size, + PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr3, MAP_FAILED); + + /* Unmap gaps. */ + ASSERT_EQ(munmap(&ptr_region[10 * page_size], 40 * page_size), 0); + ASSERT_EQ(munmap(&ptr_region[55 * page_size], 25 * page_size), 0); + + /* + * We end up with VMAs like this: + * + * 0 10 .. 50 55 .. 80 100 + * [---] [---] [---] + */ + + /* Now poison the whole range and make sure all VMAs are poisoned. */ + + /* + * madvise() is certifiable and lets you perform operations over gaps, + * everything works, but it indicates an error and errno is set to + * -ENOMEM. Also if anything runs out of memory it is set to + * -ENOMEM. You are meant to guess which is which. + */ + ASSERT_EQ(madvise(ptr_region, 100 * page_size, MADV_GUARD_POISON), -1); + ASSERT_EQ(errno, ENOMEM); + + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr1[i * page_size])); + } + + for (i = 0; i < 5; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr2[i * page_size])); + } + + for (i = 0; i < 20; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr3[i * page_size])); + } + + /* Now unpoison the range and assert the opposite. */ + + ASSERT_EQ(madvise(ptr_region, 100 * page_size, MADV_GUARD_UNPOISON), -1); + ASSERT_EQ(errno, ENOMEM); + + for (i = 0; i < 10; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr1[i * page_size])); + } + + for (i = 0; i < 5; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr2[i * page_size])); + } + + for (i = 0; i < 20; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr3[i * page_size])); + } + + /* Now map incompatible VMAs in the gaps. */ + ptr = mmap(&ptr_region[10 * page_size], 40 * page_size, + PROT_READ | PROT_WRITE | PROT_EXEC, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + ptr = mmap(&ptr_region[55 * page_size], 25 * page_size, + PROT_READ | PROT_WRITE | PROT_EXEC, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* + * We end up with VMAs like this: + * + * 0 10 .. 50 55 .. 80 100 + * [---][xxxx][---][xxxx][---] + * + * Where 'x' signifies VMAs that cannot be merged with those adjacent to + * them. + */ + + /* Multiple VMAs adjacent to one another should result in no error. */ + ASSERT_EQ(madvise(ptr_region, 100 * page_size, MADV_GUARD_POISON), 0); + for (i = 0; i < 100; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr_region[i * page_size])); + } + ASSERT_EQ(madvise(ptr_region, 100 * page_size, MADV_GUARD_UNPOISON), 0); + for (i = 0; i < 100; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr_region[i * page_size])); + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr_region, 100 * page_size), 0); +} + +/* + * Assert that batched operations performed using process_madvise() work as + * expected. + */ +TEST_F(guard_pages, process_madvise) +{ + const unsigned long page_size = self->page_size; + pid_t pid = getpid(); + int pidfd = pidfd_open(pid, 0); + char *ptr_region, *ptr1, *ptr2, *ptr3; + ssize_t count; + struct iovec vec[6]; + + ASSERT_NE(pidfd, -1); + + /* Reserve region to map over. */ + ptr_region = mmap(NULL, 100 * page_size, PROT_NONE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr_region, MAP_FAILED); + + /* 10 pages offset 1 page into reserve region. */ + ptr1 = mmap(&ptr_region[page_size], 10 * page_size, + PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr1, MAP_FAILED); + /* We want poison markers at start/end of each VMA. */ + vec[0].iov_base = ptr1; + vec[0].iov_len = page_size; + vec[1].iov_base = &ptr1[9 * page_size]; + vec[1].iov_len = page_size; + + /* 5 pages offset 50 pages into reserve region. */ + ptr2 = mmap(&ptr_region[50 * page_size], 5 * page_size, + PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr2, MAP_FAILED); + vec[2].iov_base = ptr2; + vec[2].iov_len = page_size; + vec[3].iov_base = &ptr2[4 * page_size]; + vec[3].iov_len = page_size; + + /* 20 pages offset 79 pages into reserve region. */ + ptr3 = mmap(&ptr_region[79 * page_size], 20 * page_size, + PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr3, MAP_FAILED); + vec[4].iov_base = ptr3; + vec[4].iov_len = page_size; + vec[5].iov_base = &ptr3[19 * page_size]; + vec[5].iov_len = page_size; + + /* Free surrounding VMAs. */ + ASSERT_EQ(munmap(ptr_region, page_size), 0); + ASSERT_EQ(munmap(&ptr_region[11 * page_size], 39 * page_size), 0); + ASSERT_EQ(munmap(&ptr_region[55 * page_size], 24 * page_size), 0); + ASSERT_EQ(munmap(&ptr_region[99 * page_size], page_size), 0); + + /* Now poison in one step. */ + count = process_madvise(pidfd, vec, 6, MADV_GUARD_POISON, 0); + + /* OK we don't have permission to do this, skip. */ + if (count == -1 && errno == EPERM) + ksft_exit_skip("No process_madvise() permissions\n"); + + /* Returns the number of bytes advised. */ + ASSERT_EQ(count, 6 * page_size); + + /* Now make sure the poisoning was applied. */ + + ASSERT_FALSE(try_read_write_buf(ptr1)); + ASSERT_FALSE(try_read_write_buf(&ptr1[9 * page_size])); + + ASSERT_FALSE(try_read_write_buf(ptr2)); + ASSERT_FALSE(try_read_write_buf(&ptr2[4 * page_size])); + + ASSERT_FALSE(try_read_write_buf(ptr3)); + ASSERT_FALSE(try_read_write_buf(&ptr3[19 * page_size])); + + /* Now do the same with unpoison... */ + count = process_madvise(pidfd, vec, 6, MADV_GUARD_UNPOISON, 0); + + /* ...and everything should now succeed. */ + + ASSERT_TRUE(try_read_write_buf(ptr1)); + ASSERT_TRUE(try_read_write_buf(&ptr1[9 * page_size])); + + ASSERT_TRUE(try_read_write_buf(ptr2)); + ASSERT_TRUE(try_read_write_buf(&ptr2[4 * page_size])); + + ASSERT_TRUE(try_read_write_buf(ptr3)); + ASSERT_TRUE(try_read_write_buf(&ptr3[19 * page_size])); + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr1, 10 * page_size), 0); + ASSERT_EQ(munmap(ptr2, 5 * page_size), 0); + ASSERT_EQ(munmap(ptr3, 20 * page_size), 0); + close(pidfd); +} + +/* Assert that unmapping ranges does not leave poison behind. */ +TEST_F(guard_pages, munmap) +{ + const unsigned long page_size = self->page_size; + char *ptr, *ptr_new1, *ptr_new2; + + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Poison first and last pages. */ + ASSERT_EQ(madvise(ptr, page_size, MADV_GUARD_POISON), 0); + ASSERT_EQ(madvise(&ptr[9 * page_size], page_size, MADV_GUARD_POISON), 0); + + /* Assert that they are poisoned. */ + ASSERT_FALSE(try_read_write_buf(ptr)); + ASSERT_FALSE(try_read_write_buf(&ptr[9 * page_size])); + + /* Unmap them. */ + ASSERT_EQ(munmap(ptr, page_size), 0); + ASSERT_EQ(munmap(&ptr[9 * page_size], page_size), 0); + + /* Map over them.*/ + ptr_new1 = mmap(ptr, page_size, PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr_new1, MAP_FAILED); + ptr_new2 = mmap(&ptr[9 * page_size], page_size, PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr_new2, MAP_FAILED); + + /* Assert that they are now not poisoned. */ + ASSERT_TRUE(try_read_write_buf(ptr_new1)); + ASSERT_TRUE(try_read_write_buf(ptr_new2)); + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* Assert that mprotect() operations have no bearing on guard poison markers. */ +TEST_F(guard_pages, mprotect) +{ + const unsigned long page_size = self->page_size; + char *ptr; + int i; + + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Poison the middle of the range. */ + ASSERT_EQ(madvise(&ptr[5 * page_size], 2 * page_size, + MADV_GUARD_POISON), 0); + + /* Assert that it is indeed poisoned. */ + ASSERT_FALSE(try_read_write_buf(&ptr[5 * page_size])); + ASSERT_FALSE(try_read_write_buf(&ptr[6 * page_size])); + + /* Now make these pages read-only. */ + ASSERT_EQ(mprotect(&ptr[5 * page_size], 2 * page_size, PROT_READ), 0); + + /* Make sure the range is still poisoned. */ + ASSERT_FALSE(try_read_buf(&ptr[5 * page_size])); + ASSERT_FALSE(try_read_buf(&ptr[6 * page_size])); + + /* Make sure we can poison again without issue.*/ + ASSERT_EQ(madvise(&ptr[5 * page_size], 2 * page_size, + MADV_GUARD_POISON), 0); + + /* Make sure the range is, yet again, still poisoned. */ + ASSERT_FALSE(try_read_buf(&ptr[5 * page_size])); + ASSERT_FALSE(try_read_buf(&ptr[6 * page_size])); + + /* Now unpoison the whole range. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_UNPOISON), 0); + + /* Make sure the whole range is readable. */ + for (i = 0; i < 10; i++) { + ASSERT_TRUE(try_read_buf(&ptr[i * page_size])); + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* Split and merge VMAs and make sure guard pages still behave. */ +TEST_F(guard_pages, split_merge) +{ + const unsigned long page_size = self->page_size; + char *ptr, *ptr_new; + int i; + + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Poison the whole range. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_POISON), 0); + + /* Make sure the whole range is poisoned. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Now unmap some pages in the range so we split. */ + ASSERT_EQ(munmap(&ptr[2 * page_size], page_size), 0); + ASSERT_EQ(munmap(&ptr[5 * page_size], page_size), 0); + ASSERT_EQ(munmap(&ptr[8 * page_size], page_size), 0); + + /* Make sure the remaining ranges are poisoned post-split. */ + for (i = 0; i < 2; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + for (i = 2; i < 5; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + for (i = 6; i < 8; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + for (i = 9; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Now map them again - the unmap will have cleared the poison. */ + ptr_new = mmap(&ptr[2 * page_size], page_size, PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr_new, MAP_FAILED); + ptr_new = mmap(&ptr[5 * page_size], page_size, PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr_new, MAP_FAILED); + ptr_new = mmap(&ptr[8 * page_size], page_size, PROT_READ | PROT_WRITE, + MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr_new, MAP_FAILED); + + /* Now make sure poisoning is as expected. */ + for (i = 0; i < 10; i++) { + bool result = try_read_write_buf(&ptr[i * page_size]); + + if (i == 2 || i == 5 || i == 8) { + ASSERT_TRUE(result); + } else { + ASSERT_FALSE(result); + } + } + + /* Now poison everything again. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_POISON), 0); + + /* Make sure the whole range is poisoned. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Now split the range into three. */ + ASSERT_EQ(mprotect(ptr, 3 * page_size, PROT_READ), 0); + ASSERT_EQ(mprotect(&ptr[7 * page_size], 3 * page_size, PROT_READ), 0); + + /* Make sure the whole range is poisoned for read. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_buf(&ptr[i * page_size])); + } + + /* Now reset protection bits so we merge the whole thing. */ + ASSERT_EQ(mprotect(ptr, 3 * page_size, PROT_READ | PROT_WRITE), 0); + ASSERT_EQ(mprotect(&ptr[7 * page_size], 3 * page_size, + PROT_READ | PROT_WRITE), 0); + + /* Make sure the whole range is still poisoned. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Split range into 3 again... */ + ASSERT_EQ(mprotect(ptr, 3 * page_size, PROT_READ), 0); + ASSERT_EQ(mprotect(&ptr[7 * page_size], 3 * page_size, PROT_READ), 0); + + /* ...and unpoison the whole range. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_UNPOISON), 0); + + /* Make sure the whole range is remedied for read. */ + for (i = 0; i < 10; i++) { + ASSERT_TRUE(try_read_buf(&ptr[i * page_size])); + } + + /* Merge them again. */ + ASSERT_EQ(mprotect(ptr, 3 * page_size, PROT_READ | PROT_WRITE), 0); + ASSERT_EQ(mprotect(&ptr[7 * page_size], 3 * page_size, + PROT_READ | PROT_WRITE), 0); + + /* Now ensure the merged range is remedied for read/write. */ + for (i = 0; i < 10; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* Assert that MADV_DONTNEED does not remove guard poison markers. */ +TEST_F(guard_pages, dontneed) +{ + const unsigned long page_size = self->page_size; + char *ptr; + int i; + + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Back the whole range. */ + for (i = 0; i < 10; i++) { + ptr[i * page_size] = 'y'; + } + + /* Poison every other page. */ + for (i = 0; i < 10; i += 2) { + ASSERT_EQ(madvise(&ptr[i * page_size], + page_size, MADV_GUARD_POISON), 0); + } + + /* Indicate that we don't need any of the range. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_DONTNEED), 0); + + /* Check to ensure poison markers are still in place. */ + for (i = 0; i < 10; i++) { + bool result = try_read_buf(&ptr[i * page_size]); + + if (i % 2 == 0) { + ASSERT_FALSE(result); + } else { + ASSERT_TRUE(result); + /* Make sure we really did get reset to zero page. */ + ASSERT_EQ(ptr[i * page_size], '\0'); + } + + /* Now write... */ + result = try_write_buf(&ptr[i * page_size]); + + /* ...and make sure same result. */ + if (i % 2 == 0) { + ASSERT_FALSE(result); + } else { + ASSERT_TRUE(result); + } + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* Assert that mlock()'ed pages work correctly with poison markers. */ +TEST_F(guard_pages, mlock) +{ + const unsigned long page_size = self->page_size; + char *ptr; + int i; + + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Populate. */ + for (i = 0; i < 10; i++) { + ptr[i * page_size] = 'y'; + } + + /* Lock. */ + ASSERT_EQ(mlock(ptr, 10 * page_size), 0); + + /* Now try to poison, should fail with EINVAL. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_POISON), -1); + ASSERT_EQ(errno, EINVAL); + + /* OK unlock. */ + ASSERT_EQ(munlock(ptr, 10 * page_size), 0); + + /* Poison first half of range, should now succeed. */ + ASSERT_EQ(madvise(ptr, 5 * page_size, MADV_GUARD_POISON), 0); + + /* Make sure poison works. */ + for (i = 0; i < 10; i++) { + bool result = try_read_write_buf(&ptr[i * page_size]); + + if (i < 5) { + ASSERT_FALSE(result); + } else { + ASSERT_TRUE(result); + ASSERT_EQ(ptr[i * page_size], 'x'); + } + } + + /* + * Now lock the latter part of the range. We can't lock the poisoned + * pages, as this would result in the pages being populated and the + * poisoning would cause this to error out. + */ + ASSERT_EQ(mlock(&ptr[5 * page_size], 5 * page_size), 0); + + /* + * Now unpoison, we do not permit mlock()'d ranges to be remedied as it is + * a non-destructive operation. + */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_UNPOISON), 0); + + /* Now check that everything is remedied. */ + for (i = 0; i < 10; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* + * Assert that moving, extending and shrinking memory via mremap() retains + * poison markers where possible. + * + * - Moving a mapping alone should retain markers as they are. + */ +TEST_F(guard_pages, mremap_move) +{ + const unsigned long page_size = self->page_size; + char *ptr, *ptr_new; + + /* Map 5 pages. */ + ptr = mmap(NULL, 5 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Place poison markers at both ends of the 5 page span. */ + ASSERT_EQ(madvise(ptr, page_size, MADV_GUARD_POISON), 0); + ASSERT_EQ(madvise(&ptr[4 * page_size], page_size, MADV_GUARD_POISON), 0); + + /* Make sure the poison is in effect. */ + ASSERT_FALSE(try_read_write_buf(ptr)); + ASSERT_FALSE(try_read_write_buf(&ptr[4 * page_size])); + + /* Map a new region we will move this range into. Doing this ensures + * that we have reserved a range to map into. + */ + ptr_new = mmap(NULL, 5 * page_size, PROT_NONE, MAP_ANON | MAP_PRIVATE, + -1, 0); + ASSERT_NE(ptr_new, MAP_FAILED); + + ASSERT_EQ(mremap(ptr, 5 * page_size, 5 * page_size, + MREMAP_MAYMOVE | MREMAP_FIXED, ptr_new), ptr_new); + + /* Make sure the poison is retained. */ + ASSERT_FALSE(try_read_write_buf(ptr_new)); + ASSERT_FALSE(try_read_write_buf(&ptr_new[4 * page_size])); + + /* + * Clean up - we only need reference the new pointer as we overwrote the + * PROT_NONE range and moved the existing one. + */ + munmap(ptr_new, 5 * page_size); +} + +/* + * Assert that moving, extending and shrinking memory via mremap() retains + * poison markers where possible. + * + * - Expanding should retain, only now in different position. The user will have + * to unpoison manually to fix up (they'd have to do the same if it were a + * PROT_NONE mapping) + */ +TEST_F(guard_pages, mremap_expand) +{ + const unsigned long page_size = self->page_size; + char *ptr, *ptr_new; + + /* Map 10 pages... */ + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + /* ...But unmap the last 5 so we can ensure we can expand into them. */ + ASSERT_EQ(munmap(&ptr[5 * page_size], 5 * page_size), 0); + + /* Place poison markers at both ends of the 5 page span. */ + ASSERT_EQ(madvise(ptr, page_size, MADV_GUARD_POISON), 0); + ASSERT_EQ(madvise(&ptr[4 * page_size], page_size, MADV_GUARD_POISON), 0); + + /* Make sure the poison is in effect. */ + ASSERT_FALSE(try_read_write_buf(ptr)); + ASSERT_FALSE(try_read_write_buf(&ptr[4 * page_size])); + + /* Now expand to 10 pages. */ + ptr = mremap(ptr, 5 * page_size, 10 * page_size, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Make sure the poison is retained in its original positions. */ + ASSERT_FALSE(try_read_write_buf(ptr)); + ASSERT_FALSE(try_read_write_buf(&ptr[4 * page_size])); + + /* Reserve a region which we can move to and expand into. */ + ptr_new = mmap(NULL, 20 * page_size, PROT_NONE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr_new, MAP_FAILED); + + /* Now move and expand into it. */ + ptr = mremap(ptr, 10 * page_size, 20 * page_size, + MREMAP_MAYMOVE | MREMAP_FIXED, ptr_new); + ASSERT_EQ(ptr, ptr_new); + + /* Again, make sure the poison is retained in its original + * positions. */ + ASSERT_FALSE(try_read_write_buf(ptr)); + ASSERT_FALSE(try_read_write_buf(&ptr[4 * page_size])); + + /* + * A real user would have to unpoison, but would reasonably expect all + * characteristics of the mapping to be retained, including poison + * markers. + */ + + /* Cleanup. */ + munmap(ptr, 20 * page_size); +} +/* + * Assert that moving, extending and shrinking memory via mremap() retains + * poison markers where possible. + * + * - Shrinking will result in markers that are shrunk over being removed. Again, + * if the user were using a PROT_NONE mapping they'd have to manually fix this + * up also so this is OK. + */ +TEST_F(guard_pages, mremap_shrink) +{ + const unsigned long page_size = self->page_size; + char *ptr; + int i; + + /* Map 5 pages. */ + ptr = mmap(NULL, 5 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Place poison markers at both ends of the 5 page span. */ + ASSERT_EQ(madvise(ptr, page_size, MADV_GUARD_POISON), 0); + ASSERT_EQ(madvise(&ptr[4 * page_size], page_size, MADV_GUARD_POISON), 0); + + /* Make sure the poison is in effect. */ + ASSERT_FALSE(try_read_write_buf(ptr)); + ASSERT_FALSE(try_read_write_buf(&ptr[4 * page_size])); + + /* Now shrink to 3 pages. */ + ptr = mremap(ptr, 5 * page_size, 3 * page_size, MREMAP_MAYMOVE); + ASSERT_NE(ptr, MAP_FAILED); + + /* We expect the poison marker at the start to be retained... */ + ASSERT_FALSE(try_read_write_buf(ptr)); + + /* ...But remaining pages will not have poison markers. */ + for (i = 1; i < 3; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i + page_size])); + } + + /* + * As with expansion, a real user would have to unpoison and fixup. But + * you'd have to do similar manual things with PROT_NONE mappings too. + */ + + /* + * If we expand back to the original size, the end marker will, of + * course, no longer be present. + */ + ptr = mremap(ptr, 3 * page_size, 5 * page_size, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Again, we expect the poison marker at the start to be retained... */ + ASSERT_FALSE(try_read_write_buf(ptr)); + + /* ...But remaining pages will not have poison markers. */ + for (i = 1; i < 5; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i + page_size])); + } + + /* Cleanup. */ + munmap(ptr, 5 * page_size); +} + +/* + * Assert that forking a process with VMAs that do not have VM_WIPEONFORK set + * retain guard pages. + */ +TEST_F(guard_pages, fork) +{ + const unsigned long page_size = self->page_size; + char *ptr; + pid_t pid; + int i; + + /* Map 10 pages. */ + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Poison the first 5 pages. */ + ASSERT_EQ(madvise(ptr, 5 * page_size, MADV_GUARD_POISON), 0); + + pid = fork(); + ASSERT_NE(pid, -1); + if (!pid) { + /* This is the child process now. */ + + /* Assert that the poisoning is in effect. */ + for (i = 0; i < 10; i++) { + bool result = try_read_write_buf(&ptr[i * page_size]); + + if (i < 5) { + ASSERT_FALSE(result); + } else { + ASSERT_TRUE(result); + } + } + + /* Now unpoison the range.*/ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_UNPOISON), 0); + + exit(0); + } + + /* Parent process. */ + + /* Parent simply waits on child. */ + waitpid(pid, NULL, 0); + + /* Child unpoison does not impact parent page table state. */ + for (i = 0; i < 10; i++) { + bool result = try_read_write_buf(&ptr[i * page_size]); + + if (i < 5) { + ASSERT_FALSE(result); + } else { + ASSERT_TRUE(result); + } + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* + * Assert that forking a process with VMAs that do have VM_WIPEONFORK set + * behave as expected. + */ +TEST_F(guard_pages, fork_wipeonfork) +{ + const unsigned long page_size = self->page_size; + char *ptr; + pid_t pid; + int i; + + /* Map 10 pages. */ + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Mark wipe on fork. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_WIPEONFORK), 0); + + /* Poison the first 5 pages. */ + ASSERT_EQ(madvise(ptr, 5 * page_size, MADV_GUARD_POISON), 0); + + pid = fork(); + ASSERT_NE(pid, -1); + if (!pid) { + /* This is the child process now. */ + + /* Poison will have been wiped. */ + for (i = 0; i < 10; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i * page_size])); + } + + exit(0); + } + + /* Parent process. */ + + waitpid(pid, NULL, 0); + + /* Poison should be in effect.*/ + for (i = 0; i < 10; i++) { + bool result = try_read_write_buf(&ptr[i * page_size]); + + if (i < 5) { + ASSERT_FALSE(result); + } else { + ASSERT_TRUE(result); + } + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* Ensure that MADV_FREE frees poison entries as expected. */ +TEST_F(guard_pages, lazyfree) +{ + const unsigned long page_size = self->page_size; + char *ptr; + int i; + + /* Map 10 pages. */ + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Poison range. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_POISON), 0); + + /* Ensure poisoned. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Lazyfree range. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_FREE), 0); + + /* This should simply clear the poison markers. */ + for (i = 0; i < 10; i++) { + ASSERT_TRUE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* Ensure that MADV_POPULATE_READ, MADV_POPULATE_WRITE behave as expected. */ +TEST_F(guard_pages, populate) +{ + const unsigned long page_size = self->page_size; + char *ptr; + + /* Map 10 pages. */ + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Poison range. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_POISON), 0); + + /* Populate read should error out... */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_POPULATE_READ), -1); + ASSERT_EQ(errno, EFAULT); + + /* ...as should populate write. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_POPULATE_WRITE), -1); + ASSERT_EQ(errno, EFAULT); + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* Ensure that MADV_COLD, MADV_PAGEOUT do not remove poison markers. */ +TEST_F(guard_pages, cold_pageout) +{ + const unsigned long page_size = self->page_size; + char *ptr; + int i; + + /* Map 10 pages. */ + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Poison range. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_POISON), 0); + + /* Ensured poisoned. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Now mark cold. This should have no impact on poison markers. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_COLD), 0); + + /* Should remain poisoned. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* OK, now page out. This should equally, have no effect on markers. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_PAGEOUT), 0); + + /* Should remain poisoned. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Cleanup. */ + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +/* Ensure that guard pages do not break userfaultd. */ +TEST_F(guard_pages, uffd) +{ + const unsigned long page_size = self->page_size; + int uffd; + char *ptr; + int i; + struct uffdio_api api = { + .api = UFFD_API, + .features = 0, + }; + struct uffdio_register reg; + struct uffdio_range range; + + /* Set up uffd. */ + uffd = userfaultfd(0); + if (uffd == -1 && errno == EPERM) + ksft_exit_skip("No uffd permissions\n"); + ASSERT_NE(uffd, -1); + + ASSERT_EQ(ioctl(uffd, UFFDIO_API, &api), 0); + + /* Map 10 pages. */ + ptr = mmap(NULL, 10 * page_size, PROT_READ | PROT_WRITE, + MAP_ANON | MAP_PRIVATE, -1, 0); + ASSERT_NE(ptr, MAP_FAILED); + + /* Register the range with uffd. */ + range.start = (unsigned long)ptr; + range.len = 10 * page_size; + reg.range = range; + reg.mode = UFFDIO_REGISTER_MODE_MISSING; + ASSERT_EQ(ioctl(uffd, UFFDIO_REGISTER, ®), 0); + + /* Poison the range. This should not trigger the uffd. */ + ASSERT_EQ(madvise(ptr, 10 * page_size, MADV_GUARD_POISON), 0); + + /* The poisoning should behave as usual with no uffd intervention. */ + for (i = 0; i < 10; i++) { + ASSERT_FALSE(try_read_write_buf(&ptr[i * page_size])); + } + + /* Cleanup. */ + ASSERT_EQ(ioctl(uffd, UFFDIO_UNREGISTER, &range), 0); + close(uffd); + ASSERT_EQ(munmap(ptr, 10 * page_size), 0); +} + +TEST_HARNESS_MAIN
Hi!
Userland library functions such as allocators and threading implementations often require regions of memory to act as 'guard pages' - mappings which, when accessed, result in a fatal signal being sent to the accessing process.
...
Suggested-by: Vlastimil Babka vbabka@suze.cz
suse.cz, I believe. (They may prefer suse.com address).
BR, Pavel
On Mon, Sep 30, 2024 at 09:23:47AM GMT, Pavel Machek wrote:
Hi!
Userland library functions such as allocators and threading implementations often require regions of memory to act as 'guard pages' - mappings which, when accessed, result in a fatal signal being sent to the accessing process.
...
Suggested-by: Vlastimil Babka vbabka@suze.cz
suse.cz, I believe. (They may prefer suse.com address).
Sigh yes I know, I know :)
Sorry Vlasta! ;)
BR, Pavel
-- People of Russia, stop Putin before his war on Ukraine escalates.
linux-kselftest-mirror@lists.linaro.org