[ based on kvm/next ]
Unmapping virtual machine guest memory from the host kernel's direct map is an effective mitigation against Spectre-style transient execution issues: if the kernel page tables do not contain entries pointing to guest memory, then any attempted speculative read through the direct map will necessarily be blocked by the MMU before any observable microarchitectural side effects happen. This means that Spectre gadgets and similar cannot be used to target virtual machine memory. Roughly 60% of speculative execution issues fall into this category [1, Table 1].
This patch series extends guest_memfd with the ability to remove its memory from the host kernel's direct map, so that the above protection can be attained for KVM guests whose memory is backed by guest_memfd.
Additionally, a Firecracker branch with support for these VMs can be found on GitHub [2].
For more details, please refer to the v5 cover letter [v5]. No substantial changes in design have taken place since.
=== Changes Since v5 ===
- Fix up error handling for set_direct_map_[in]valid_noflush() (Mike)
- Fix capability check for KVM_GUEST_MEMFD_NO_DIRECT_MAP (Mike)
- Make secretmem_aops static in mm/secretmem.c (Mike)
- Fixup some more comments in gup.c that referred to secretmem specifically to
  instead point to AS_NO_DIRECT_MAP (Mike)
- New patch (PATCH 4/11) to avoid ifdeffery in kvm_gmem_free_folio() (Mike)
- vma_is_no_direct_map() -> vma_has_no_direct_map() rename (David)
- Squash some patches (David)
- Fix up const-ness of parameters to new functions in pagemap.h (Fuad)
[1]: https://download.vusec.net/papers/quarantine_raid23.pdf
[2]: https://github.com/firecracker-microvm/firecracker/tree/feature/secret-hidin...
[RFCv1]: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk/
[RFCv2]: https://lore.kernel.org/kvm/20240910163038.1298452-1-roypat@amazon.co.uk/
[RFCv3]: https://lore.kernel.org/kvm/20241030134912.515725-1-roypat@amazon.co.uk/
[v4]: https://lore.kernel.org/kvm/20250221160728.1584559-1-roypat@amazon.co.uk/
[v5]: https://lore.kernel.org/kvm/20250828093902.2719-1-roypat@amazon.co.uk/
Elliot Berman (1):
  filemap: Pass address_space mapping to ->free_folio()
Patrick Roy (10):
  arch: export set_direct_map_valid_noflush to KVM module
  mm: introduce AS_NO_DIRECT_MAP
  KVM: guest_memfd: Add stub for kvm_arch_gmem_invalidate
  KVM: guest_memfd: Add flag to remove from direct map
  KVM: selftests: load elf via bounce buffer
  KVM: selftests: set KVM_MEM_GUEST_MEMFD in vm_mem_add() if guest_memfd != -1
  KVM: selftests: Add guest_memfd based vm_mem_backing_src_types
  KVM: selftests: stuff vm_mem_backing_src_type into vm_shape
  KVM: selftests: cover GUEST_MEMFD_FLAG_NO_DIRECT_MAP in existing selftests
  KVM: selftests: Test guest execution from direct map removed gmem
 Documentation/filesystems/locking.rst | 2 +-
 Documentation/virt/kvm/api.rst | 5 ++
 arch/arm64/include/asm/kvm_host.h | 12 ++++
 arch/arm64/mm/pageattr.c | 1 +
 arch/loongarch/mm/pageattr.c | 1 +
 arch/riscv/mm/pageattr.c | 1 +
 arch/s390/mm/pageattr.c | 1 +
 arch/x86/mm/pat/set_memory.c | 1 +
 fs/nfs/dir.c | 11 ++--
 fs/orangefs/inode.c | 3 +-
 include/linux/fs.h | 2 +-
 include/linux/kvm_host.h | 9 +++
 include/linux/pagemap.h | 16 +++++
 include/linux/secretmem.h | 18 ------
 include/uapi/linux/kvm.h | 2 +
 lib/buildid.c | 4 +-
 mm/filemap.c | 9 +--
 mm/gup.c | 19 ++----
 mm/mlock.c | 2 +-
 mm/secretmem.c | 11 ++--
 mm/vmscan.c | 4 +-
 .../testing/selftests/kvm/guest_memfd_test.c | 2 +
 .../testing/selftests/kvm/include/kvm_util.h | 37 ++++++++---
 .../testing/selftests/kvm/include/test_util.h | 8 +++
 tools/testing/selftests/kvm/lib/elf.c | 8 +--
 tools/testing/selftests/kvm/lib/io.c | 23 +++++++
 tools/testing/selftests/kvm/lib/kvm_util.c | 61 +++++++++++--------
 tools/testing/selftests/kvm/lib/test_util.c | 8 +++
 tools/testing/selftests/kvm/lib/x86/sev.c | 1 +
 .../selftests/kvm/pre_fault_memory_test.c | 1 +
 .../selftests/kvm/set_memory_region_test.c | 50 +++++++++++++--
 .../kvm/x86/private_mem_conversions_test.c | 7 ++-
 virt/kvm/guest_memfd.c | 56 ++++++++++++++--
 virt/kvm/kvm_main.c | 5 ++
 34 files changed, 288 insertions(+), 113 deletions(-)
base-commit: a6ad54137af92535cfe32e19e5f3bc1bb7dbd383
From: Elliot Berman <quic_eberman@quicinc.com>
When guest_memfd removes memory from the host kernel's direct map, direct map entries must be restored before the memory is freed again. To do so, ->free_folio() needs to know whether a gmem folio had its direct map entries removed in the first place. While it would be possible to keep track of this information on each individual folio (e.g. via page flags), direct map removal is an all-or-nothing property of the entire guest_memfd, so it is less error prone to just check the flag stored in the gmem inode's private data. However, by the time ->free_folio() is called, folio->mapping might already be cleared. To still allow access to the address space from which the folio was just removed, pass it in as an additional argument to ->free_folio(), as the mapping is readily available to all callers.
Link: https://lore.kernel.org/all/15f665b4-2d33-41ca-ac50-fafe24ade32f@redhat.com/
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
[patrick: rewrite shortlog for new usecase]
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 Documentation/filesystems/locking.rst | 2 +-
 fs/nfs/dir.c | 11 ++++++-----
 fs/orangefs/inode.c | 3 ++-
 include/linux/fs.h | 2 +-
 mm/filemap.c | 9 +++++----
 mm/secretmem.c | 3 ++-
 mm/vmscan.c | 4 ++--
 virt/kvm/guest_memfd.c | 3 ++-
 8 files changed, 21 insertions(+), 16 deletions(-)
diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst index aa287ccdac2f..74c97287ec40 100644 --- a/Documentation/filesystems/locking.rst +++ b/Documentation/filesystems/locking.rst @@ -262,7 +262,7 @@ prototypes:: sector_t (*bmap)(struct address_space *, sector_t); void (*invalidate_folio) (struct folio *, size_t start, size_t len); bool (*release_folio)(struct folio *, gfp_t); - void (*free_folio)(struct folio *); + void (*free_folio)(struct address_space *, struct folio *); int (*direct_IO)(struct kiocb *, struct iov_iter *iter); int (*migrate_folio)(struct address_space *, struct folio *dst, struct folio *src, enum migrate_mode); diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c index d81217923936..644bd54e052c 100644 --- a/fs/nfs/dir.c +++ b/fs/nfs/dir.c @@ -55,7 +55,7 @@ static int nfs_closedir(struct inode *, struct file *); static int nfs_readdir(struct file *, struct dir_context *); static int nfs_fsync_dir(struct file *, loff_t, loff_t, int); static loff_t nfs_llseek_dir(struct file *, loff_t, int); -static void nfs_readdir_clear_array(struct folio *); +static void nfs_readdir_clear_array(struct address_space *, struct folio *); static int nfs_do_create(struct inode *dir, struct dentry *dentry, umode_t mode, int open_flags);
@@ -218,7 +218,8 @@ static void nfs_readdir_folio_init_array(struct folio *folio, u64 last_cookie, /* * we are freeing strings created by nfs_add_to_readdir_array() */ -static void nfs_readdir_clear_array(struct folio *folio) +static void nfs_readdir_clear_array(struct address_space *mapping, + struct folio *folio) { struct nfs_cache_array *array; unsigned int i; @@ -233,7 +234,7 @@ static void nfs_readdir_clear_array(struct folio *folio) static void nfs_readdir_folio_reinit_array(struct folio *folio, u64 last_cookie, u64 change_attr) { - nfs_readdir_clear_array(folio); + nfs_readdir_clear_array(folio->mapping, folio); nfs_readdir_folio_init_array(folio, last_cookie, change_attr); }
@@ -249,7 +250,7 @@ nfs_readdir_folio_array_alloc(u64 last_cookie, gfp_t gfp_flags) static void nfs_readdir_folio_array_free(struct folio *folio) { if (folio) { - nfs_readdir_clear_array(folio); + nfs_readdir_clear_array(folio->mapping, folio); folio_put(folio); } } @@ -391,7 +392,7 @@ static void nfs_readdir_folio_init_and_validate(struct folio *folio, u64 cookie, if (folio_test_uptodate(folio)) { if (nfs_readdir_folio_validate(folio, cookie, change_attr)) return; - nfs_readdir_clear_array(folio); + nfs_readdir_clear_array(folio->mapping, folio); } nfs_readdir_folio_init_array(folio, cookie, change_attr); folio_mark_uptodate(folio); diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c index a01400cd41fd..37227ba71593 100644 --- a/fs/orangefs/inode.c +++ b/fs/orangefs/inode.c @@ -452,7 +452,8 @@ static bool orangefs_release_folio(struct folio *folio, gfp_t foo) return !folio_test_private(folio); }
-static void orangefs_free_folio(struct folio *folio) +static void orangefs_free_folio(struct address_space *mapping, + struct folio *folio) { kfree(folio_detach_private(folio)); } diff --git a/include/linux/fs.h b/include/linux/fs.h index d7ab4f96d705..afb0748ffda6 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -457,7 +457,7 @@ struct address_space_operations { sector_t (*bmap)(struct address_space *, sector_t); void (*invalidate_folio) (struct folio *, size_t offset, size_t len); bool (*release_folio)(struct folio *, gfp_t); - void (*free_folio)(struct folio *folio); + void (*free_folio)(struct address_space *, struct folio *folio); ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter); /* * migrate the contents of a folio to the specified target. If diff --git a/mm/filemap.c b/mm/filemap.c index 751838ef05e5..3dd8ad922d80 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -226,11 +226,11 @@ void __filemap_remove_folio(struct folio *folio, void *shadow)
void filemap_free_folio(struct address_space *mapping, struct folio *folio) { - void (*free_folio)(struct folio *); + void (*free_folio)(struct address_space *, struct folio *);
free_folio = mapping->a_ops->free_folio; if (free_folio) - free_folio(folio); + free_folio(mapping, folio);
folio_put_refs(folio, folio_nr_pages(folio)); } @@ -820,7 +820,8 @@ EXPORT_SYMBOL(file_write_and_wait_range); void replace_page_cache_folio(struct folio *old, struct folio *new) { struct address_space *mapping = old->mapping; - void (*free_folio)(struct folio *) = mapping->a_ops->free_folio; + void (*free_folio)(struct address_space *, struct folio *) = + mapping->a_ops->free_folio; pgoff_t offset = old->index; XA_STATE(xas, &mapping->i_pages, offset);
@@ -849,7 +850,7 @@ void replace_page_cache_folio(struct folio *old, struct folio *new) __lruvec_stat_add_folio(new, NR_SHMEM); xas_unlock_irq(&xas); if (free_folio) - free_folio(old); + free_folio(mapping, old); folio_put(old); } EXPORT_SYMBOL_GPL(replace_page_cache_folio); diff --git a/mm/secretmem.c b/mm/secretmem.c index 60137305bc20..422dcaa32506 100644 --- a/mm/secretmem.c +++ b/mm/secretmem.c @@ -150,7 +150,8 @@ static int secretmem_migrate_folio(struct address_space *mapping, return -EBUSY; }
-static void secretmem_free_folio(struct folio *folio) +static void secretmem_free_folio(struct address_space *mapping, + struct folio *folio) { set_direct_map_default_noflush(folio_page(folio, 0)); folio_zero_segment(folio, 0, folio_size(folio)); diff --git a/mm/vmscan.c b/mm/vmscan.c index a48aec8bfd92..559bd6ac965c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -788,7 +788,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio, xa_unlock_irq(&mapping->i_pages); put_swap_folio(folio, swap); } else { - void (*free_folio)(struct folio *); + void (*free_folio)(struct address_space *, struct folio *);
free_folio = mapping->a_ops->free_folio; /* @@ -817,7 +817,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio, spin_unlock(&mapping->host->i_lock);
if (free_folio) - free_folio(folio); + free_folio(mapping, folio); }
return 1; diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 08a6bc7d25b6..9ec4c45e3cf2 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -430,7 +430,8 @@ static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *fol }
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE -static void kvm_gmem_free_folio(struct folio *folio) +static void kvm_gmem_free_folio(struct address_space *mapping, + struct folio *folio) { struct page *page = folio_page(folio, 0); kvm_pfn_t pfn = page_to_pfn(page);
On Fri, Sep 12, 2025 at 09:17:31AM +0000, Roy, Patrick wrote:
From: Elliot Berman <quic_eberman@quicinc.com>
When guest_memfd removes memory from the host kernel's direct map, direct map entries must be restored before the memory is freed again. To do so, ->free_folio() needs to know whether a gmem folio was direct map removed in the first place though. While possible to keep track of this information on each individual folio (e.g. via page flags), direct map removal is an all-or-nothing property of the entire guest_memfd, so it is less error prone to just check the flag stored in the gmem inode's private data. However, by the time ->free_folio() is called, folio->mapping might be cleared. To still allow access to the address space from which the folio was just removed, pass it in as an additional argument to ->free_folio, as the mapping is well-known to all callers.
Link: https://lore.kernel.org/all/15f665b4-2d33-41ca-ac50-fafe24ade32f@redhat.com/
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
[patrick: rewrite shortlog for new usecase]
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
Reviewed-by: Pedro Falcato <pfalcato@suse.de>
Use the new per-module export functionality to allow KVM (and only KVM) access to set_direct_map_valid_noflush(). This allows guest_memfd to remove its memory from the direct map, even if KVM is built as a module.
Direct map removal gives guest_memfd the same protection that memfd_secret enjoys, such as hardening against Spectre-like attacks through in-kernel gadgets.
Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 arch/arm64/mm/pageattr.c | 1 +
 arch/loongarch/mm/pageattr.c | 1 +
 arch/riscv/mm/pageattr.c | 1 +
 arch/s390/mm/pageattr.c | 1 +
 arch/x86/mm/pat/set_memory.c | 1 +
 5 files changed, 5 insertions(+)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c index 04d4a8f676db..4f3cddfab9b0 100644 --- a/arch/arm64/mm/pageattr.c +++ b/arch/arm64/mm/pageattr.c @@ -291,6 +291,7 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
return set_memory_valid(addr, nr, valid); } +EXPORT_SYMBOL_FOR_MODULES(set_direct_map_valid_noflush, "kvm");
#ifdef CONFIG_DEBUG_PAGEALLOC /* diff --git a/arch/loongarch/mm/pageattr.c b/arch/loongarch/mm/pageattr.c index f5e910b68229..458f5ae6a89b 100644 --- a/arch/loongarch/mm/pageattr.c +++ b/arch/loongarch/mm/pageattr.c @@ -236,3 +236,4 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
return __set_memory(addr, 1, set, clear); } +EXPORT_SYMBOL_FOR_MODULES(set_direct_map_valid_noflush, "kvm"); diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c index 3f76db3d2769..6db31040cd66 100644 --- a/arch/riscv/mm/pageattr.c +++ b/arch/riscv/mm/pageattr.c @@ -400,6 +400,7 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
return __set_memory((unsigned long)page_address(page), nr, set, clear); } +EXPORT_SYMBOL_FOR_MODULES(set_direct_map_valid_noflush, "kvm");
#ifdef CONFIG_DEBUG_PAGEALLOC static int debug_pagealloc_set_page(pte_t *pte, unsigned long addr, void *data) diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c index 348e759840e7..8ffd9ef09bc6 100644 --- a/arch/s390/mm/pageattr.c +++ b/arch/s390/mm/pageattr.c @@ -413,6 +413,7 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
return __set_memory((unsigned long)page_to_virt(page), nr, flags); } +EXPORT_SYMBOL_FOR_MODULES(set_direct_map_valid_noflush, "kvm");
bool kernel_page_present(struct page *page) { diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index 8834c76f91c9..87e9c7d2dcdc 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -2661,6 +2661,7 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
return __set_pages_np(page, nr); } +EXPORT_SYMBOL_FOR_MODULES(set_direct_map_valid_noflush, "kvm");
#ifdef CONFIG_DEBUG_PAGEALLOC void __kernel_map_pages(struct page *page, int numpages, int enable)
Add AS_NO_DIRECT_MAP for mappings whose folios have their direct map entries set to not-present. Currently, the mappings that match this description are secretmem mappings (memfd_secret()). Later, some guest_memfd configurations will also fall into this category.
Reject this new type of mapping in all locations that currently reject secretmem mappings, on the assumption that if secretmem mappings are rejected somewhere, it is precisely because of an inability to deal with folios without direct map entries. Then make memfd_secret() set AS_NO_DIRECT_MAP on its address_space and drop the special vma_is_secretmem()/secretmem_mapping() checks.
This drops an optimization in gup_fast_folio_allowed() where secretmem_mapping() was only called if CONFIG_SECRETMEM=y. secretmem has been enabled by default since commit b758fe6df50d ("mm/secretmem: make it on by default"), so in most configurations the secretmem check was not actually elided anymore anyway.
Use a new flag instead of overloading AS_INACCESSIBLE (which is already set by guest_memfd) because not all guest_memfd mappings will end up being direct map removed (e.g. in pKVM setups, the parts of guest_memfd that can be mapped to userspace should also be GUP-able, and generally not have restrictions on who can access them).
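To illustrate the intended usage pattern, a minimal sketch follows (the surrounding functions are made up for illustration; only mapping_set_no_direct_map(), mapping_no_direct_map() and vma_has_no_direct_map() are introduced by this patch):

	/* Owner side: mark the address_space when its folios leave the direct map. */
	static void example_mark_inode(struct inode *inode)
	{
		mapping_set_no_direct_map(inode->i_mapping);
	}

	/*
	 * Consumer side: code that needs direct map access bails out early,
	 * analogous to the check_vma_flags() and gup_fast_folio_allowed()
	 * changes further down in this patch.
	 */
	static int example_access_check(struct vm_area_struct *vma)
	{
		if (vma_has_no_direct_map(vma))
			return -EFAULT;

		return 0;
	}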
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 include/linux/pagemap.h | 16 ++++++++++++++++
 include/linux/secretmem.h | 18 ------------------
 lib/buildid.c | 4 ++--
 mm/gup.c | 19 +++++--------------
 mm/mlock.c | 2 +-
 mm/secretmem.c | 8 ++------
 6 files changed, 26 insertions(+), 41 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 12a12dae727d..1f5739f6a9f5 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -211,6 +211,7 @@ enum mapping_flags { folio contents */ AS_INACCESSIBLE = 8, /* Do not attempt direct R/W access to the mapping */ AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM = 9, + AS_NO_DIRECT_MAP = 10, /* Folios in the mapping are not in the direct map */ /* Bits 16-25 are used for FOLIO_ORDER */ AS_FOLIO_ORDER_BITS = 5, AS_FOLIO_ORDER_MIN = 16, @@ -346,6 +347,21 @@ static inline bool mapping_writeback_may_deadlock_on_reclaim(struct address_spac return test_bit(AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM, &mapping->flags); }
+static inline void mapping_set_no_direct_map(struct address_space *mapping) +{ + set_bit(AS_NO_DIRECT_MAP, &mapping->flags); +} + +static inline bool mapping_no_direct_map(const struct address_space *mapping) +{ + return test_bit(AS_NO_DIRECT_MAP, &mapping->flags); +} + +static inline bool vma_has_no_direct_map(const struct vm_area_struct *vma) +{ + return vma->vm_file && mapping_no_direct_map(vma->vm_file->f_mapping); +} + static inline gfp_t mapping_gfp_mask(struct address_space * mapping) { return mapping->gfp_mask; diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h index e918f96881f5..0ae1fb057b3d 100644 --- a/include/linux/secretmem.h +++ b/include/linux/secretmem.h @@ -4,28 +4,10 @@
#ifdef CONFIG_SECRETMEM
-extern const struct address_space_operations secretmem_aops; - -static inline bool secretmem_mapping(struct address_space *mapping) -{ - return mapping->a_ops == &secretmem_aops; -} - -bool vma_is_secretmem(struct vm_area_struct *vma); bool secretmem_active(void);
#else
-static inline bool vma_is_secretmem(struct vm_area_struct *vma) -{ - return false; -} - -static inline bool secretmem_mapping(struct address_space *mapping) -{ - return false; -} - static inline bool secretmem_active(void) { return false; diff --git a/lib/buildid.c b/lib/buildid.c index c4b0f376fb34..89e567954284 100644 --- a/lib/buildid.c +++ b/lib/buildid.c @@ -65,8 +65,8 @@ static int freader_get_folio(struct freader *r, loff_t file_off)
freader_put_folio(r);
- /* reject secretmem folios created with memfd_secret() */ - if (secretmem_mapping(r->file->f_mapping)) + /* reject folios without direct map entries (e.g. from memfd_secret() or guest_memfd()) */ + if (mapping_no_direct_map(r->file->f_mapping)) return -EFAULT;
r->folio = filemap_get_folio(r->file->f_mapping, file_off >> PAGE_SHIFT); diff --git a/mm/gup.c b/mm/gup.c index adffe663594d..75a0cffdf37d 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -11,7 +11,6 @@ #include <linux/rmap.h> #include <linux/swap.h> #include <linux/swapops.h> -#include <linux/secretmem.h>
#include <linux/sched/signal.h> #include <linux/rwsem.h> @@ -1234,7 +1233,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) if ((gup_flags & FOLL_SPLIT_PMD) && is_vm_hugetlb_page(vma)) return -EOPNOTSUPP;
- if (vma_is_secretmem(vma)) + if (vma_has_no_direct_map(vma)) return -EFAULT;
if (write) { @@ -2736,7 +2735,7 @@ EXPORT_SYMBOL(get_user_pages_unlocked); * This call assumes the caller has pinned the folio, that the lowest page table * level still points to this folio, and that interrupts have been disabled. * - * GUP-fast must reject all secretmem folios. + * GUP-fast must reject all folios without direct map entries (such as secretmem). * * Writing to pinned file-backed dirty tracked folios is inherently problematic * (see comment describing the writable_file_mapping_allowed() function). We @@ -2751,7 +2750,6 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags) { bool reject_file_backed = false; struct address_space *mapping; - bool check_secretmem = false; unsigned long mapping_flags;
/* @@ -2763,18 +2761,10 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags) reject_file_backed = true;
/* We hold a folio reference, so we can safely access folio fields. */ - - /* secretmem folios are always order-0 folios. */ - if (IS_ENABLED(CONFIG_SECRETMEM) && !folio_test_large(folio)) - check_secretmem = true; - - if (!reject_file_backed && !check_secretmem) - return true; - if (WARN_ON_ONCE(folio_test_slab(folio))) return false;
- /* hugetlb neither requires dirty-tracking nor can be secretmem. */ + /* hugetlb neither requires dirty-tracking nor can be without direct map. */ if (folio_test_hugetlb(folio)) return true;
@@ -2812,8 +2802,9 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags) * At this point, we know the mapping is non-null and points to an * address_space object. */ - if (check_secretmem && secretmem_mapping(mapping)) + if (mapping_no_direct_map(mapping)) return false; + /* The only remaining allowed file system is shmem. */ return !reject_file_backed || shmem_mapping(mapping); } diff --git a/mm/mlock.c b/mm/mlock.c index a1d93ad33c6d..36f5e70faeb0 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -474,7 +474,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
if (newflags == oldflags || (oldflags & VM_SPECIAL) || is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) || - vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE)) + vma_is_dax(vma) || vma_has_no_direct_map(vma) || (oldflags & VM_DROPPABLE)) /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */ goto out;
diff --git a/mm/secretmem.c b/mm/secretmem.c index 422dcaa32506..b5ce55079695 100644 --- a/mm/secretmem.c +++ b/mm/secretmem.c @@ -134,11 +134,6 @@ static int secretmem_mmap_prepare(struct vm_area_desc *desc) return 0; }
-bool vma_is_secretmem(struct vm_area_struct *vma) -{ - return vma->vm_ops == &secretmem_vm_ops; -} - static const struct file_operations secretmem_fops = { .release = secretmem_release, .mmap_prepare = secretmem_mmap_prepare, @@ -157,7 +152,7 @@ static void secretmem_free_folio(struct address_space *mapping, folio_zero_segment(folio, 0, folio_size(folio)); }
-const struct address_space_operations secretmem_aops = { +static const struct address_space_operations secretmem_aops = { .dirty_folio = noop_dirty_folio, .free_folio = secretmem_free_folio, .migrate_folio = secretmem_migrate_folio, @@ -206,6 +201,7 @@ static struct file *secretmem_file_create(unsigned long flags)
mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER); mapping_set_unevictable(inode->i_mapping); + mapping_set_no_direct_map(inode->i_mapping);
inode->i_op = &secretmem_iops; inode->i_mapping->a_ops = &secretmem_aops;
Add a no-op stub for kvm_arch_gmem_invalidate() for the CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE=n case. This allows defining kvm_gmem_free_folio() without ifdef-ery, which in turn allows guest_memfd's free_folio callback to be used more cleanly for code unrelated to arch invalidation.
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 include/linux/kvm_host.h | 2 ++
 virt/kvm/guest_memfd.c | 4 ----
 2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 8b47891adca1..1d0585616aa3 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2573,6 +2573,8 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end); +#else +static inline void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end) { } #endif
#ifdef CONFIG_KVM_GENERIC_PRE_FAULT_MEMORY diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 9ec4c45e3cf2..81028984ff89 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -429,7 +429,6 @@ static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *fol return MF_DELAYED; }
-#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE static void kvm_gmem_free_folio(struct address_space *mapping, struct folio *folio) { @@ -439,15 +438,12 @@ static void kvm_gmem_free_folio(struct address_space *mapping,
kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order)); } -#endif
static const struct address_space_operations kvm_gmem_aops = { .dirty_folio = noop_dirty_folio, .migrate_folio = kvm_gmem_migrate_folio, .error_remove_folio = kvm_gmem_error_folio, -#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE .free_folio = kvm_gmem_free_folio, -#endif };
static int kvm_gmem_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
Add a GUEST_MEMFD_FLAG_NO_DIRECT_MAP flag for the KVM_CREATE_GUEST_MEMFD() ioctl. When set, guest_memfd folios will be removed from the direct map after preparation, with direct map entries only restored when the folios are freed.
To ensure these folios do not end up in places where the kernel cannot deal with them, set AS_NO_DIRECT_MAP on the guest_memfd's struct address_space if GUEST_MEMFD_FLAG_NO_DIRECT_MAP is requested.
Add KVM_CAP_GUEST_MEMFD_NO_DIRECT_MAP to let userspace discover whether guest_memfd supports GUEST_MEMFD_FLAG_NO_DIRECT_MAP. Support depends on guest_memfd itself being supported, but also on whether Linux supports manipulating the direct map at page granularity at all. This is possible most of the time; the outliers are arm64, where it is impossible if the direct map has been set up using hugepages (arm64 cannot break these apart due to break-before-make semantics), and powerpc, which does not select ARCH_HAS_SET_DIRECT_MAP (and does not support guest_memfd anyway).
Note that this flag causes removal of direct map entries for all guest_memfd folios, independent of whether they are "shared" or "private" (although current guest_memfd only supports either all folios in the "shared" state, or all folios in the "private" state if GUEST_MEMFD_FLAG_MMAP is not set). The use case for removing direct map entries of the shared parts of guest_memfd as well is a special type of non-CoCo VM where host userspace is trusted to have access to all of guest memory, but where Spectre-style transient execution attacks through the host kernel's direct map should still be mitigated. In this setup, KVM retains access to guest memory via userspace mappings of guest_memfd, which are reflected back into KVM's memslots via userspace_addr. This is needed for things like MMIO emulation on x86_64 to work.
Do not perform TLB flushes after direct map manipulations. This is because TLB flushes resulted in an up to 40x elongation of page faults in guest_memfd (scaling with the number of CPU cores), or a 5x elongation of memory population. TLB flushes are not needed for functional correctness (the virt->phys mapping technically stays "correct", the kernel should simply not use it for a while). On the other hand, it means that the desired protection from Spectre-style attacks is not perfect, as an attacker could try to prevent a stale TLB entry from getting evicted, keeping it alive until the page it refers to is used by the guest for some sensitive data, and then targeting it using a Spectre gadget.
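For reference, a rough userspace sketch of how the non-CoCo setup described above would be expected to combine the new flag with GUEST_MEMFD_FLAG_MMAP and userspace_addr (error handling omitted; the function name, slot number and the gpa/size parameters are made up, everything else is existing KVM uAPI plus the flag and capability added by this patch):

	/* vm_fd: KVM VM file descriptor; gpa/size: chosen by the VMM. */
	static void *map_no_direct_map_memslot(int vm_fd, __u64 gpa, __u64 size)
	{
		struct kvm_create_guest_memfd gmem = {
			.size  = size,
			.flags = GUEST_MEMFD_FLAG_MMAP | GUEST_MEMFD_FLAG_NO_DIRECT_MAP,
		};
		struct kvm_userspace_memory_region2 region;
		void *hva;
		int gmem_fd;

		/* Only request direct map removal if the kernel advertises support. */
		if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GUEST_MEMFD_NO_DIRECT_MAP) <= 0)
			gmem.flags &= ~GUEST_MEMFD_FLAG_NO_DIRECT_MAP;

		gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

		/* Userspace (and, via userspace_addr, KVM) accesses guest memory here. */
		hva = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);

		region = (struct kvm_userspace_memory_region2) {
			.slot            = 0,
			.flags           = KVM_MEM_GUEST_MEMFD,
			.guest_phys_addr = gpa,
			.memory_size     = size,
			.userspace_addr  = (__u64)(unsigned long)hva,
			.guest_memfd     = gmem_fd,
		};
		ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);

		return hva;
	}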
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 Documentation/virt/kvm/api.rst | 5 ++++
 arch/arm64/include/asm/kvm_host.h | 12 ++++++++
 include/linux/kvm_host.h | 7 +++++
 include/uapi/linux/kvm.h | 2 ++
 virt/kvm/guest_memfd.c | 49 +++++++++++++++++++++++++++----
 virt/kvm/kvm_main.c | 5 ++++
 6 files changed, 75 insertions(+), 5 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index c17a87a0a5ac..b52c14d58798 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6418,6 +6418,11 @@ When the capability KVM_CAP_GUEST_MEMFD_MMAP is supported, the 'flags' field supports GUEST_MEMFD_FLAG_MMAP. Setting this flag on guest_memfd creation enables mmap() and faulting of guest_memfd memory to host userspace.
+When the capability KVM_CAP_GUEST_MEMFD_NO_DIRECT_MAP is supported, the 'flags' field +supports GUEST_MEMFD_FLAG_NO_DIRECT_MAP. Setting this flag makes the guest_memfd +instance behave similarly to memfd_secret, and unmaps the memory backing it from +the kernel's address space after allocation. + When the KVM MMU performs a PFN lookup to service a guest fault and the backing guest_memfd has the GUEST_MEMFD_FLAG_MMAP set, then the fault will always be consumed from guest_memfd, regardless of whether it is a shared or a private diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 2f2394cce24e..0bfd8e5fd9de 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -19,6 +19,7 @@ #include <linux/maple_tree.h> #include <linux/percpu.h> #include <linux/psci.h> +#include <linux/set_memory.h> #include <asm/arch_gicv3.h> #include <asm/barrier.h> #include <asm/cpufeature.h> @@ -1706,5 +1707,16 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt); void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1); void check_feature_map(void);
+#ifdef CONFIG_KVM_GUEST_MEMFD +static inline bool kvm_arch_gmem_supports_no_direct_map(void) +{ + /* + * Without FWB, direct map access is needed in kvm_pgtable_stage2_map(), + * as it calls dcache_clean_inval_poc(). + */ + return can_set_direct_map() && cpus_have_final_cap(ARM64_HAS_STAGE2_FWB); +} +#define kvm_arch_gmem_supports_no_direct_map kvm_arch_gmem_supports_no_direct_map +#endif /* CONFIG_KVM_GUEST_MEMFD */
#endif /* __ARM64_KVM_HOST_H__ */ diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 1d0585616aa3..a9468bce55f2 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -36,6 +36,7 @@ #include <linux/rbtree.h> #include <linux/xarray.h> #include <asm/signal.h> +#include <linux/set_memory.h>
#include <linux/kvm.h> #include <linux/kvm_para.h> @@ -731,6 +732,12 @@ static inline bool kvm_arch_has_private_mem(struct kvm *kvm) bool kvm_arch_supports_gmem_mmap(struct kvm *kvm); #endif
+#ifdef CONFIG_KVM_GUEST_MEMFD +#ifndef kvm_arch_gmem_supports_no_direct_map +#define kvm_arch_gmem_supports_no_direct_map can_set_direct_map +#endif +#endif /* CONFIG_KVM_GUEST_MEMFD */ + #ifndef kvm_arch_has_readonly_mem static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm) { diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 6efa98a57ec1..33c8e8946019 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -963,6 +963,7 @@ struct kvm_enable_cap { #define KVM_CAP_RISCV_MP_STATE_RESET 242 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243 #define KVM_CAP_GUEST_MEMFD_MMAP 244 +#define KVM_CAP_GUEST_MEMFD_NO_DIRECT_MAP 245
struct kvm_irq_routing_irqchip { __u32 irqchip; @@ -1600,6 +1601,7 @@ struct kvm_memory_attributes {
#define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd) #define GUEST_MEMFD_FLAG_MMAP (1ULL << 0) +#define GUEST_MEMFD_FLAG_NO_DIRECT_MAP (1ULL << 1)
struct kvm_create_guest_memfd { __u64 size; diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 81028984ff89..3c64099fc98a 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -4,6 +4,7 @@ #include <linux/kvm_host.h> #include <linux/pagemap.h> #include <linux/anon_inodes.h> +#include <linux/set_memory.h>
#include "kvm_mm.h"
@@ -42,9 +43,24 @@ static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slo return 0; }
-static inline void kvm_gmem_mark_prepared(struct folio *folio) +static bool kvm_gmem_test_no_direct_map(struct inode *inode) { - folio_mark_uptodate(folio); + return ((unsigned long) inode->i_private) & GUEST_MEMFD_FLAG_NO_DIRECT_MAP; +} + +static inline int kvm_gmem_mark_prepared(struct folio *folio) +{ + struct inode *inode = folio_inode(folio); + int r = 0; + + if (kvm_gmem_test_no_direct_map(inode)) + r = set_direct_map_valid_noflush(folio_page(folio, 0), folio_nr_pages(folio), + false); + + if (!r) + folio_mark_uptodate(folio); + + return r; }
/* @@ -82,7 +98,7 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot, index = ALIGN_DOWN(index, 1 << folio_order(folio)); r = __kvm_gmem_prepare_folio(kvm, slot, index, folio); if (!r) - kvm_gmem_mark_prepared(folio); + r = kvm_gmem_mark_prepared(folio);
return r; } @@ -344,8 +360,15 @@ static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf) }
if (!folio_test_uptodate(folio)) { + int err = 0; + clear_highpage(folio_page(folio, 0)); - kvm_gmem_mark_prepared(folio); + err = kvm_gmem_mark_prepared(folio); + + if (err) { + ret = vmf_error(err); + goto out_folio; + } }
vmf->page = folio_file_page(folio, vmf->pgoff); @@ -436,6 +459,16 @@ static void kvm_gmem_free_folio(struct address_space *mapping, kvm_pfn_t pfn = page_to_pfn(page); int order = folio_order(folio);
+ /* + * Direct map restoration cannot fail, as the only error condition + * for direct map manipulation is failure to allocate page tables + * when splitting huge pages, but this split would have already + * happened in set_direct_map_invalid_noflush() in kvm_gmem_mark_prepared(). + * Thus set_direct_map_valid_noflush() here only updates prot bits. + */ + if (kvm_gmem_test_no_direct_map(mapping->host)) + set_direct_map_valid_noflush(page, folio_nr_pages(folio), true); + kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order)); }
@@ -500,6 +533,9 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) /* Unmovable mappings are supposed to be marked unevictable as well. */ WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+ if (flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) + mapping_set_no_direct_map(inode->i_mapping); + kvm_get_kvm(kvm); gmem->kvm = kvm; xa_init(&gmem->bindings); @@ -524,6 +560,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args) if (kvm_arch_supports_gmem_mmap(kvm)) valid_flags |= GUEST_MEMFD_FLAG_MMAP;
+ if (kvm_arch_gmem_supports_no_direct_map()) + valid_flags |= GUEST_MEMFD_FLAG_NO_DIRECT_MAP; + if (flags & ~valid_flags) return -EINVAL;
@@ -768,7 +807,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long p = src ? src + i * PAGE_SIZE : NULL; ret = post_populate(kvm, gfn, pfn, p, max_order, opaque); if (!ret) - kvm_gmem_mark_prepared(folio); + ret = kvm_gmem_mark_prepared(folio);
put_folio_and_exit: folio_put(folio); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 18f29ef93543..b5e702d95230 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -65,6 +65,7 @@ #include <trace/events/kvm.h>
#include <linux/kvm_dirty_ring.h> +#include <linux/set_memory.h>
/* Worst case buffer size needed for holding an integer. */ @@ -4916,6 +4917,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg) return kvm_supported_mem_attributes(kvm); #endif #ifdef CONFIG_KVM_GUEST_MEMFD + case KVM_CAP_GUEST_MEMFD_NO_DIRECT_MAP: + if (!kvm_arch_gmem_supports_no_direct_map()) + return 0; + fallthrough; case KVM_CAP_GUEST_MEMFD: return 1; case KVM_CAP_GUEST_MEMFD_MMAP:
If guest memory is backed by a VMA that does not allow GUP (e.g. a userspace mapping of guest_memfd when the fd was allocated using GUEST_MEMFD_FLAG_NO_DIRECT_MAP), then directly loading the test ELF binary into it via read(2) may not work. To nevertheless support loading binaries in this case, do the read(2) syscall using a bounce buffer, and then memcpy from the bounce buffer into guest memory.
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 .../testing/selftests/kvm/include/test_util.h | 1 +
 tools/testing/selftests/kvm/lib/elf.c | 8 +++----
 tools/testing/selftests/kvm/lib/io.c | 23 +++++++++++++++++++
 3 files changed, 28 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h index c6ef895fbd9a..0409b7b96c94 100644 --- a/tools/testing/selftests/kvm/include/test_util.h +++ b/tools/testing/selftests/kvm/include/test_util.h @@ -46,6 +46,7 @@ do { \
ssize_t test_write(int fd, const void *buf, size_t count); ssize_t test_read(int fd, void *buf, size_t count); +ssize_t test_read_bounce(int fd, void *buf, size_t count); int test_seq_read(const char *path, char **bufp, size_t *sizep);
void __printf(5, 6) test_assert(bool exp, const char *exp_str, diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c index f34d926d9735..e829fbe0a11e 100644 --- a/tools/testing/selftests/kvm/lib/elf.c +++ b/tools/testing/selftests/kvm/lib/elf.c @@ -31,7 +31,7 @@ static void elfhdr_get(const char *filename, Elf64_Ehdr *hdrp) * the real size of the ELF header. */ unsigned char ident[EI_NIDENT]; - test_read(fd, ident, sizeof(ident)); + test_read_bounce(fd, ident, sizeof(ident)); TEST_ASSERT((ident[EI_MAG0] == ELFMAG0) && (ident[EI_MAG1] == ELFMAG1) && (ident[EI_MAG2] == ELFMAG2) && (ident[EI_MAG3] == ELFMAG3), "ELF MAGIC Mismatch,\n" @@ -79,7 +79,7 @@ static void elfhdr_get(const char *filename, Elf64_Ehdr *hdrp) offset_rv = lseek(fd, 0, SEEK_SET); TEST_ASSERT(offset_rv == 0, "Seek to ELF header failed,\n" " rv: %zi expected: %i", offset_rv, 0); - test_read(fd, hdrp, sizeof(*hdrp)); + test_read_bounce(fd, hdrp, sizeof(*hdrp)); TEST_ASSERT(hdrp->e_phentsize == sizeof(Elf64_Phdr), "Unexpected physical header size,\n" " hdrp->e_phentsize: %x\n" @@ -146,7 +146,7 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
/* Read in the program header. */ Elf64_Phdr phdr; - test_read(fd, &phdr, sizeof(phdr)); + test_read_bounce(fd, &phdr, sizeof(phdr));
/* Skip if this header doesn't describe a loadable segment. */ if (phdr.p_type != PT_LOAD) @@ -187,7 +187,7 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename) " expected: 0x%jx", n1, errno, (intmax_t) offset_rv, (intmax_t) phdr.p_offset); - test_read(fd, addr_gva2hva(vm, phdr.p_vaddr), + test_read_bounce(fd, addr_gva2hva(vm, phdr.p_vaddr), phdr.p_filesz); } } diff --git a/tools/testing/selftests/kvm/lib/io.c b/tools/testing/selftests/kvm/lib/io.c index fedb2a741f0b..74419becc8bc 100644 --- a/tools/testing/selftests/kvm/lib/io.c +++ b/tools/testing/selftests/kvm/lib/io.c @@ -155,3 +155,26 @@ ssize_t test_read(int fd, void *buf, size_t count)
return num_read; } + +/* Test read via intermediary buffer + * + * Same as test_read, except read(2)s happen into a bounce buffer that is memcpy'd + * to buf. For use with buffers that cannot be GUP'd (e.g. guest_memfd VMAs if + * guest_memfd was created with GUEST_MEMFD_FLAG_NO_DIRECT_MAP). + */ +ssize_t test_read_bounce(int fd, void *buf, size_t count) +{ + void *bounce_buffer; + ssize_t num_read; + + TEST_ASSERT(count >= 0, "Unexpected count, count: %li", count); + + bounce_buffer = malloc(count); + TEST_ASSERT(bounce_buffer != NULL, "Failed to allocate bounce buffer"); + + num_read = test_read(fd, bounce_buffer, count); + memcpy(buf, bounce_buffer, num_read); + free(bounce_buffer); + + return num_read; +}
Have vm_mem_add() always set KVM_MEM_GUEST_MEMFD in the memslot flags if a guest_memfd is passed in as an argument. This eliminates the possibility that a guest_memfd instance passed to vm_mem_add() ends up being ignored because the flags argument does not also specify KVM_MEM_GUEST_MEMFD.
This makes it easy to support more scenarios in which vm_mem_add() is not passed a guest_memfd instance, but is expected to allocate one. Currently, this only happens if guest_memfd == -1 but flags & KVM_MEM_GUEST_MEMFD != 0, but later vm_mem_add() will gain support for loading the test code itself into guest_memfd (via GUEST_MEMFD_FLAG_MMAP) if requested via a special vm_mem_backing_src_type, at which point having to keep src_type and flags in sync becomes cumbersome.
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 26 +++++++++++++---------
 1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index c3f5142b0a54..cc67dfecbf65 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1107,22 +1107,26 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
region->backing_src_type = src_type;
- if (flags & KVM_MEM_GUEST_MEMFD) { - if (guest_memfd < 0) { + if (guest_memfd < 0) { + if (flags & KVM_MEM_GUEST_MEMFD) { uint32_t guest_memfd_flags = 0; TEST_ASSERT(!guest_memfd_offset, "Offset must be zero when creating new guest_memfd"); guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags); - } else { - /* - * Install a unique fd for each memslot so that the fd - * can be closed when the region is deleted without - * needing to track if the fd is owned by the framework - * or by the caller. - */ - guest_memfd = dup(guest_memfd); - TEST_ASSERT(guest_memfd >= 0, __KVM_SYSCALL_ERROR("dup()", guest_memfd)); } + } else { + /* + * Install a unique fd for each memslot so that the fd + * can be closed when the region is deleted without + * needing to track if the fd is owned by the framework + * or by the caller. + */ + guest_memfd = dup(guest_memfd); + TEST_ASSERT(guest_memfd >= 0, __KVM_SYSCALL_ERROR("dup()", guest_memfd)); + } + + if (guest_memfd > 0) { + flags |= KVM_MEM_GUEST_MEMFD;
region->region.guest_memfd = guest_memfd; region->region.guest_memfd_offset = guest_memfd_offset;
Allow selftests to configure their memslots such that userspace_addr is set to a MAP_SHARED mapping of the guest_memfd that's associated with the memslot. This setup is the configuration for non-CoCo VMs, where all guest memory is backed by a guest_memfd whose folios are all marked shared, but KVM is still able to access guest memory to provide functionality such as MMIO emulation on x86.
Add backing types for normal guest_memfd, as well as direct map removed guest_memfd.
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 .../testing/selftests/kvm/include/kvm_util.h | 18 ++++++
 .../testing/selftests/kvm/include/test_util.h | 7 +++
 tools/testing/selftests/kvm/lib/kvm_util.c | 63 ++++++++++---------
 tools/testing/selftests/kvm/lib/test_util.c | 8 +++
 4 files changed, 66 insertions(+), 30 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 23a506d7eca3..5204a0a18a7f 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -635,6 +635,24 @@ static inline bool is_smt_on(void)
void vm_create_irqchip(struct kvm_vm *vm);
+static inline uint32_t backing_src_guest_memfd_flags(enum vm_mem_backing_src_type t) +{ + uint32_t flags = 0; + + switch (t) { + case VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP: + flags |= GUEST_MEMFD_FLAG_NO_DIRECT_MAP; + fallthrough; + case VM_MEM_SRC_GUEST_MEMFD: + flags |= GUEST_MEMFD_FLAG_MMAP; + break; + default: + break; + } + + return flags; +} + static inline int __vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size, uint64_t flags) { diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h index 0409b7b96c94..a56e53fc7b39 100644 --- a/tools/testing/selftests/kvm/include/test_util.h +++ b/tools/testing/selftests/kvm/include/test_util.h @@ -133,6 +133,8 @@ enum vm_mem_backing_src_type { VM_MEM_SRC_ANONYMOUS_HUGETLB_16GB, VM_MEM_SRC_SHMEM, VM_MEM_SRC_SHARED_HUGETLB, + VM_MEM_SRC_GUEST_MEMFD, + VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP, NUM_SRC_TYPES, };
@@ -165,6 +167,11 @@ static inline bool backing_src_is_shared(enum vm_mem_backing_src_type t) return vm_mem_backing_src_alias(t)->flag & MAP_SHARED; }
+static inline bool backing_src_is_guest_memfd(enum vm_mem_backing_src_type t) +{ + return t == VM_MEM_SRC_GUEST_MEMFD || t == VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP; +} + static inline bool backing_src_can_be_huge(enum vm_mem_backing_src_type t) { return t != VM_MEM_SRC_ANONYMOUS && t != VM_MEM_SRC_SHMEM; diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index cc67dfecbf65..a81089f7c83f 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1060,6 +1060,34 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, alignment = 1; #endif
+ if (guest_memfd < 0) { + if ((flags & KVM_MEM_GUEST_MEMFD) || backing_src_is_guest_memfd(src_type)) { + uint32_t guest_memfd_flags = backing_src_guest_memfd_flags(src_type); + + TEST_ASSERT(!guest_memfd_offset, + "Offset must be zero when creating new guest_memfd"); + guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags); + } + } else { + /* + * Install a unique fd for each memslot so that the fd + * can be closed when the region is deleted without + * needing to track if the fd is owned by the framework + * or by the caller. + */ + guest_memfd = dup(guest_memfd); + TEST_ASSERT(guest_memfd >= 0, __KVM_SYSCALL_ERROR("dup()", guest_memfd)); + } + + if (guest_memfd > 0) { + flags |= KVM_MEM_GUEST_MEMFD; + + region->region.guest_memfd = guest_memfd; + region->region.guest_memfd_offset = guest_memfd_offset; + } else { + region->region.guest_memfd = -1; + } + /* * When using THP mmap is not guaranteed to returned a hugepage aligned * address so we have to pad the mmap. Padding is not needed for HugeTLB @@ -1075,10 +1103,13 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, if (alignment > 1) region->mmap_size += alignment;
- region->fd = -1; - if (backing_src_is_shared(src_type)) + if (backing_src_is_guest_memfd(src_type)) + region->fd = guest_memfd; + else if (backing_src_is_shared(src_type)) region->fd = kvm_memfd_alloc(region->mmap_size, src_type == VM_MEM_SRC_SHARED_HUGETLB); + else + region->fd = -1;
region->mmap_start = mmap(NULL, region->mmap_size, PROT_READ | PROT_WRITE, @@ -1106,34 +1137,6 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, }
region->backing_src_type = src_type; - - if (guest_memfd < 0) { - if (flags & KVM_MEM_GUEST_MEMFD) { - uint32_t guest_memfd_flags = 0; - TEST_ASSERT(!guest_memfd_offset, - "Offset must be zero when creating new guest_memfd"); - guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags); - } - } else { - /* - * Install a unique fd for each memslot so that the fd - * can be closed when the region is deleted without - * needing to track if the fd is owned by the framework - * or by the caller. - */ - guest_memfd = dup(guest_memfd); - TEST_ASSERT(guest_memfd >= 0, __KVM_SYSCALL_ERROR("dup()", guest_memfd)); - } - - if (guest_memfd > 0) { - flags |= KVM_MEM_GUEST_MEMFD; - - region->region.guest_memfd = guest_memfd; - region->region.guest_memfd_offset = guest_memfd_offset; - } else { - region->region.guest_memfd = -1; - } - region->unused_phy_pages = sparsebit_alloc(); if (vm_arch_has_protected_memory(vm)) region->protected_phy_pages = sparsebit_alloc(); diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c index 03eb99af9b8d..b2baee680083 100644 --- a/tools/testing/selftests/kvm/lib/test_util.c +++ b/tools/testing/selftests/kvm/lib/test_util.c @@ -299,6 +299,14 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i) */ .flag = MAP_SHARED, }, + [VM_MEM_SRC_GUEST_MEMFD] = { + .name = "guest_memfd", + .flag = MAP_SHARED, + }, + [VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP] = { + .name = "guest_memfd_no_direct_map", + .flag = MAP_SHARED, + } }; _Static_assert(ARRAY_SIZE(aliases) == NUM_SRC_TYPES, "Missing new backing src types?");
Use one of the padding fields in struct vm_shape to carry an enum vm_mem_backing_src_type value, to give the option to overwrite the default of VM_MEM_SRC_ANONYMOUS in __vm_create().
Overwriting this default will allow tests to create VMs where the test code is backed by mmap'd guest_memfd instead of anonymous memory.
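As a usage sketch (the test function and guest_code are hypothetical; the shape initialisation matches what the MMIO selftest later in this series does):

	static void example_direct_map_removed_test(void)
	{
		struct vm_shape shape = {
			.mode = VM_MODE_DEFAULT,
			.type = VM_TYPE_DEFAULT,
			.src_type = VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP,
		};
		struct kvm_vcpu *vcpu;
		struct kvm_vm *vm;

		/*
		 * Test code, guest page tables and stacks land in mmap'd
		 * guest_memfd whose folios are removed from the direct map.
		 */
		vm = __vm_create_shape_with_one_vcpu(shape, &vcpu, 0, guest_code);

		/* ... run the vcpu and make assertions ... */

		kvm_vm_free(vm);
	}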
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 .../testing/selftests/kvm/include/kvm_util.h | 19 ++++++++++---------
 tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
 tools/testing/selftests/kvm/lib/x86/sev.c | 1 +
 .../selftests/kvm/pre_fault_memory_test.c | 1 +
 4 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 5204a0a18a7f..8baa0bbacd09 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -188,7 +188,7 @@ enum vm_guest_mode { struct vm_shape { uint32_t type; uint8_t mode; - uint8_t pad0; + uint8_t src_type; uint16_t pad1; };
@@ -196,14 +196,15 @@ kvm_static_assert(sizeof(struct vm_shape) == sizeof(uint64_t));
#define VM_TYPE_DEFAULT 0
-#define VM_SHAPE(__mode) \ -({ \ - struct vm_shape shape = { \ - .mode = (__mode), \ - .type = VM_TYPE_DEFAULT \ - }; \ - \ - shape; \ +#define VM_SHAPE(__mode) \ +({ \ + struct vm_shape shape = { \ + .mode = (__mode), \ + .type = VM_TYPE_DEFAULT, \ + .src_type = VM_MEM_SRC_ANONYMOUS \ + }; \ + \ + shape; \ })
#if defined(__aarch64__) diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index a81089f7c83f..3a22794bd959 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -495,7 +495,7 @@ struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus, if (is_guest_memfd_required(shape)) flags |= KVM_MEM_GUEST_MEMFD;
- vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, flags); + vm_userspace_mem_region_add(vm, shape.src_type, 0, 0, nr_pages, flags); for (i = 0; i < NR_MEM_REGIONS; i++) vm->memslots[i] = 0;
diff --git a/tools/testing/selftests/kvm/lib/x86/sev.c b/tools/testing/selftests/kvm/lib/x86/sev.c index c3a9838f4806..d920880e4fc0 100644 --- a/tools/testing/selftests/kvm/lib/x86/sev.c +++ b/tools/testing/selftests/kvm/lib/x86/sev.c @@ -164,6 +164,7 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t type, void *guest_code, struct vm_shape shape = { .mode = VM_MODE_DEFAULT, .type = type, + .src_type = VM_MEM_SRC_ANONYMOUS, }; struct kvm_vm *vm; struct kvm_vcpu *cpus[1]; diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/testing/selftests/kvm/pre_fault_memory_test.c index 0350a8896a2f..d403f8d2f26f 100644 --- a/tools/testing/selftests/kvm/pre_fault_memory_test.c +++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c @@ -68,6 +68,7 @@ static void __test_pre_fault_memory(unsigned long vm_type, bool private) const struct vm_shape shape = { .mode = VM_MODE_DEFAULT, .type = vm_type, + .src_type = VM_MEM_SRC_ANONYMOUS, }; struct kvm_vcpu *vcpu; struct kvm_run *run;
Extend the memory conversion selftests to cover the scenario that the guest can fault in and write gmem-backed guest memory even if its direct map entries have been removed. Also cover the new flag in the guest_memfd_test.c tests.
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 tools/testing/selftests/kvm/guest_memfd_test.c | 2 ++
 .../selftests/kvm/x86/private_mem_conversions_test.c | 7 ++++---
 2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c index b3ca6737f304..1187438b6831 100644 --- a/tools/testing/selftests/kvm/guest_memfd_test.c +++ b/tools/testing/selftests/kvm/guest_memfd_test.c @@ -275,6 +275,8 @@ static void test_guest_memfd(unsigned long vm_type)
if (vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_MMAP)) flags |= GUEST_MEMFD_FLAG_MMAP; + if (vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_NO_DIRECT_MAP)) + flags |= GUEST_MEMFD_FLAG_NO_DIRECT_MAP;
test_create_guest_memfd_multiple(vm); test_create_guest_memfd_invalid_sizes(vm, flags, page_size); diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c index 82a8d88b5338..8427d9fbdb23 100644 --- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c +++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c @@ -367,7 +367,7 @@ static void *__test_mem_conversions(void *__vcpu) }
static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t nr_vcpus, - uint32_t nr_memslots) + uint32_t nr_memslots, uint64_t gmem_flags) { /* * Allocate enough memory so that each vCPU's chunk of memory can be @@ -394,7 +394,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
- memfd = vm_create_guest_memfd(vm, memfd_size, 0); + memfd = vm_create_guest_memfd(vm, memfd_size, gmem_flags);
for (i = 0; i < nr_memslots; i++) vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i, @@ -477,7 +477,8 @@ int main(int argc, char *argv[]) } }
- test_mem_conversions(src_type, nr_vcpus, nr_memslots); + test_mem_conversions(src_type, nr_vcpus, nr_memslots, 0); + test_mem_conversions(src_type, nr_vcpus, nr_memslots, GUEST_MEMFD_FLAG_NO_DIRECT_MAP);
return 0; }
Add a selftest that loads itself into guest_memfd (via GUEST_MEMFD_FLAG_MMAP) and triggers an MMIO exit when executed. This exercises x86 MMIO emulation code inside KVM for guest_memfd-backed memslots where the guest_memfd folios are direct map removed. Particularly, it validates that x86 MMIO emulation code (guest page table walks + instruction fetch) correctly accesses gmem through the VMA that's been reflected into the memslot's userspace_addr field (instead of trying to do direct map accesses).
Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
---
 .../selftests/kvm/set_memory_region_test.c | 50 +++++++++++++++++--
 1 file changed, 46 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c index ce3ac0fd6dfb..cb3bc642d376 100644 --- a/tools/testing/selftests/kvm/set_memory_region_test.c +++ b/tools/testing/selftests/kvm/set_memory_region_test.c @@ -603,6 +603,41 @@ static void test_mmio_during_vectoring(void)
kvm_vm_free(vm); } + +static void guest_code_trigger_mmio(void) +{ + /* + * Read some GPA that is not backed by a memslot. KVM consider this + * as MMIO and tell userspace to emulate the read. + */ + READ_ONCE(*((uint64_t *)MEM_REGION_GPA)); + + GUEST_DONE(); +} + +static void test_guest_memfd_mmio(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + struct vm_shape shape = { + .mode = VM_MODE_DEFAULT, + .src_type = VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP, + }; + pthread_t vcpu_thread; + + pr_info("Testing MMIO emulation for instructions in gmem\n"); + + vm = __vm_create_shape_with_one_vcpu(shape, &vcpu, 0, guest_code_trigger_mmio); + + virt_map(vm, MEM_REGION_GPA, MEM_REGION_GPA, 1); + + pthread_create(&vcpu_thread, NULL, vcpu_worker, vcpu); + + /* If the MMIO read was successfully emulated, the vcpu thread will exit */ + pthread_join(vcpu_thread, NULL); + + kvm_vm_free(vm); +} #endif
int main(int argc, char *argv[]) @@ -626,10 +661,17 @@ int main(int argc, char *argv[]) test_add_max_memory_regions();
#ifdef __x86_64__ - if (kvm_has_cap(KVM_CAP_GUEST_MEMFD) && - (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))) { - test_add_private_memory_region(); - test_add_overlapping_private_memory_regions(); + if (kvm_has_cap(KVM_CAP_GUEST_MEMFD)) { + if (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM)) { + test_add_private_memory_region(); + test_add_overlapping_private_memory_regions(); + } + + if (kvm_has_cap(KVM_CAP_GUEST_MEMFD_MMAP) && + kvm_has_cap(KVM_CAP_GUEST_MEMFD_NO_DIRECT_MAP)) + test_guest_memfd_mmio(); + else + pr_info("Skipping tests requiring KVM_CAP_GUEST_MEMFD_MMAP | KVM_CAP_GUEST_MEMFD_NO_DIRECT_MAP"); } else { pr_info("Skipping tests for KVM_MEM_GUEST_MEMFD memory regions\n"); }