On 10/21/24 22:27, Lorenzo Stoakes wrote:
On Mon, Oct 21, 2024 at 10:11:29PM +0200, Vlastimil Babka wrote:
On 10/20/24 18:20, Lorenzo Stoakes wrote:
Implement a new lightweight guard page feature: regions of userland virtual memory that, when accessed, cause a fatal signal to be raised.
Currently users must establish PROT_NONE ranges to achieve this.
However this is very costly memory-wise - we need a VMA for each and every one of these regions AND they become unmergeable with surrounding VMAs.
In addition, repeated mmap() calls require repeated kernel context switches and contention on the mmap lock to install these ranges, potentially also having to unmap memory if installed over existing ranges.
The lightweight guard approach eliminates the VMA cost altogether - rather than establishing a PROT_NONE VMA, it operates at the level of page table entries - poisoning PTEs such that accesses to them cause a fault followed by a SIGSEGV signal being raised.
This is achieved through the PTE marker mechanism, which a previous commit in this series extended to permit this use, with the markers installed via the generic page walking logic, also extended by a prior commit for this purpose.
These poison ranges are established with MADV_GUARD_POISON; if the range in which they are installed contains any existing mappings, those mappings will be zapped, i.e. the range freed and the memory unmapped (thus mimicking the behaviour of MADV_DONTNEED in this respect).
Any existing poison entries will be left untouched. There is no nesting of poisoned pages.
Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather unexpected behaviour, but are cleared on process teardown or unmapping of memory ranges.
Ranges can have the poison property removed by MADV_GUARD_UNPOISON - 'remedying' the poisoning. Should the ranges over which this is applied contain non-poison entries, those entries will be left untouched; only poison entries will be cleared.
We permit this operation on anonymous memory only, and only on VMAs which are non-special, non-huge and not mlock()'d (if we permitted mlock()'d ranges, we would have to drop locked pages, which would be rather counterintuitive).
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Suggested-by: Jann Horn <jannh@google.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
<snip>
+static long madvise_guard_poison(struct vm_area_struct *vma,
+				 struct vm_area_struct **prev,
+				 unsigned long start, unsigned long end)
+{
+	long err;
+
+	*prev = vma;
+	if (!is_valid_guard_vma(vma, /* allow_locked = */false))
+		return -EINVAL;
+
+	/*
+	 * If we install poison markers, then the range is no longer
+	 * empty from a page table perspective and therefore it's
+	 * appropriate to have an anon_vma.
+	 *
+	 * This ensures that on fork, we copy page tables correctly.
+	 */
+	err = anon_vma_prepare(vma);
+	if (err)
+		return err;
+
+	/*
+	 * Optimistically try to install the guard poison pages first. If any
+	 * non-guard pages are encountered, give up and zap the range before
+	 * trying again.
+	 */
Should the page walker become powerful enough to handle this in one go? :)
I can tell you've not read previous threads...
Whoops, you're right, I did read v1 but forgot about the RFC.
But we can assume people who only see the code after it's merged will not have read it either, so since a potentially endless loop could look suspicious, expanding the comment to explain why it's fine wouldn't hurt?
I've addressed this in discussion with Jann - we'd have to do a full fat huge comprehensive thing to do an in-place replace.
It'd either have to fully duplicate the multiple copies of the very lengthy code to do this sort of thing right (some in mm/madvise.c itself), or I'd have to go off and do a totally new prerequisite series centralising that in a way that people probably wouldn't accept... I'm not sure the benefits outweigh the costs.
But sure, if it's too big a task to teach it to zap ptes with all the tlb flushing etc (I assume it's something page walkers don't do today), it makes sense to do it this way. Or we could require userspace to zap first (MADV_DONTNEED), but that would unnecessarily mean extra syscalls for the use case of an allocator debug mode that wants to turn freed memory to guards to catch use after free. So this seems like a good compromise...
This is optimistic, as the comment says; you very often won't need to do this, so we do a little extra work in the case where a zap is needed, vs. the more likely case where it isn't.
In the face of racing faults, which we can't reasonably prevent without taking the mmap write lock _and_ VMA lock, which is an egregious requirement, this wouldn't really save us anything anyway.
OK.
+	while (true) {
+		/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
+		err = walk_page_range_mm(vma->vm_mm, start, end,
+					 &guard_poison_walk_ops, NULL);
+		if (err <= 0)
+			return err;
+
+		/*
+		 * OK some of the range have non-guard pages mapped, zap
+		 * them. This leaves existing guard pages in place.
+		 */
+		zap_page_range_single(vma, start, end - start, NULL);
... however the potentially endless loop doesn't seem great. Could a malicious program keep refaulting the range (ignoring any segfaults if it loses a race) with one thread while failing to make progress here with another thread? Is that ok because it would only punish itself?
Sigh. Again, I don't think you've read the previous series, have you? Or even the changelog... I added this as Jann asked for it. Originally we'd return -EAGAIN if we got raced. See the discussion over in v1 for details.
I did it that way specifically to avoid such things, but Jann didn't appear to think it was a problem.
If Jann is fine with this then it must be secure enough.
I mean if we have to retry the guards page installation more than once, it means the program *is* racing faults with installing guard ptes in the same range, right? So it would be right to segfault it. But I guess when we detect it here, we have no way to send the signal to the right thread and it would be too late? So unless we can do the PTE zap+install marker atomically, maybe there's no better way and this is acceptable as a malicious program can harm only itself?
Yup you'd only be hurting yourself. I went over this with Jann, who didn't appear to have a problem with this approach from a security perspective, in fact he explicitly asked me to do this :)
Maybe it would be just simpler to install the marker via zap_details rather than the pagewalk?
Ah the inevitable 'please completely rework how you do everything' comment I was expecting at some point :)
Job security :)
j/k
Obviously I've considered this (and a number of other approaches); it would fundamentally change what zap is - right now, if it can't traverse a page table level, its job is done (its job is to remove PTEs, not create them).
We'd instead have to completely rework the logic to be able to _install_ page tables and then carefully check we don't break anything and only do it in the specific cases we need.
Or we could add a mode that says 'replace with a poison marker' rather than zap, but that has potential TLB concerns, splits it across two operations (installation and zapping), and then could you really be sure that there isn't a really really badly timed race that would mean you'd have to loop again?
Right now it's simple, elegant, small and we can only make ourselves wait. I don't think this is a huge problem.
I think I'll need an actual security/DoS-based justification to change this.
+		if (fatal_signal_pending(current))
+			return -EINTR;
+		cond_resched();
+	}
+}
+static int guard_unpoison_pte_entry(pte_t *pte, unsigned long addr,
+				    unsigned long next, struct mm_walk *walk)
+{
+	pte_t ptent = ptep_get(pte);
+
+	if (is_guard_pte_marker(ptent)) {
+		/* Simply clear the PTE marker. */
+		pte_clear_not_present_full(walk->mm, addr, pte, false);
+		update_mmu_cache(walk->vma, addr, pte);
+	}
+
+	return 0;
+}
+
+static const struct mm_walk_ops guard_unpoison_walk_ops = {
+	.pte_entry		= guard_unpoison_pte_entry,
+	.walk_lock		= PGWALK_RDLOCK,
+};
+
+static long madvise_guard_unpoison(struct vm_area_struct *vma,
+				   struct vm_area_struct **prev,
+				   unsigned long start, unsigned long end)
+{
+	*prev = vma;
+
+	/*
+	 * We're ok with unpoisoning mlock()'d ranges, as this is a
+	 * non-destructive action.
+	 */
+	if (!is_valid_guard_vma(vma, /* allow_locked = */true))
+		return -EINVAL;
+
+	return walk_page_range(vma->vm_mm, start, end,
+			       &guard_unpoison_walk_ops, NULL);
+}
 /*
  * Apply an madvise behavior to a region of a vma.  madvise_update_vma
  * will handle splitting a vm area into separate areas, each area with its own
@@ -1098,6 +1260,10 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
 		break;
 	case MADV_COLLAPSE:
 		return madvise_collapse(vma, prev, start, end);
+	case MADV_GUARD_POISON:
+		return madvise_guard_poison(vma, prev, start, end);
+	case MADV_GUARD_UNPOISON:
+		return madvise_guard_unpoison(vma, prev, start, end);
 	}
 
 	anon_name = anon_vma_name(vma);
@@ -1197,6 +1363,8 @@ madvise_behavior_valid(int behavior)
 	case MADV_DODUMP:
 	case MADV_WIPEONFORK:
 	case MADV_KEEPONFORK:
+	case MADV_GUARD_POISON:
+	case MADV_GUARD_UNPOISON:
 #ifdef CONFIG_MEMORY_FAILURE
 	case MADV_SOFT_OFFLINE:
 	case MADV_HWPOISON:
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 0c5d6d06107d..d0e3ebfadef8 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -236,7 +236,8 @@ static long change_pte_range(struct mmu_gather *tlb,
 		} else if (is_pte_marker_entry(entry)) {
 			/*
 			 * Ignore error swap entries unconditionally,
-			 * because any access should sigbus anyway.
+			 * because any access should sigbus/sigsegv
+			 * anyway.
 			 */
 			if (is_poisoned_swp_entry(entry))
 				continue;
diff --git a/mm/mseal.c b/mm/mseal.c
index ece977bd21e1..21bf5534bcf5 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -30,6 +30,7 @@ static bool is_madv_discard(int behavior)
 	case MADV_REMOVE:
 	case MADV_DONTFORK:
 	case MADV_WIPEONFORK:
+	case MADV_GUARD_POISON:
 		return true;
 	}