[ based on kvm/next ]
Implement guest_memfd allocation and population via the write syscall. This is useful in non-CoCo use cases where the host can access guest memory. Although the same can be achieved via a userspace mapping and memcpy, write() is more performant because it does not need to set up page tables and does not take a page fault for every page the way memcpy would. Note that the memcpy path cannot be accelerated via MADV_POPULATE_WRITE, which relies on GUP and is not supported by guest_memfd.
Populating 512MiB of guest_memfd on an x86 machine:
- via memcpy: 436 ms
- via write: 202 ms (-54%)
v5:
- Replace the call to the unexported filemap_remove_folio with zeroing the bytes that could not be copied
- Fix checkpatch findings
v4:
- https://lore.kernel.org/kvm/20250828153049.3922-1-kalyazin@amazon.com
- Switch from implementing the write callback to write_iter
- Remove conditional compilation
v3:
- https://lore.kernel.org/kvm/20250303130838.28812-1-kalyazin@amazon.com
- David/Mike D: Only compile support for the write syscall if CONFIG_KVM_GMEM_SHARED_MEM (now gone) is enabled

v2:
- https://lore.kernel.org/kvm/20241129123929.64790-1-kalyazin@amazon.com
- Switch from an ioctl to the write syscall to implement population
v1:
- https://lore.kernel.org/kvm/20241024095429.54052-1-kalyazin@amazon.com
Nikita Kalyazin (2):
  KVM: guest_memfd: add generic population via write
  KVM: selftests: update guest_memfd write tests
 .../testing/selftests/kvm/guest_memfd_test.c | 86 +++++++++++++++++--
 virt/kvm/guest_memfd.c                       | 62 ++++++++++++-
 2 files changed, 141 insertions(+), 7 deletions(-)
base-commit: a6ad54137af92535cfe32e19e5f3bc1bb7dbd383
From: Nikita Kalyazin kalyazin@amazon.com
The write syscall populates guest_memfd with user-supplied data in a generic way, i.e. no vendor-specific preparation is performed. It is intended for non-CoCo setups where guest memory is not hardware-encrypted.
The following behaviour is implemented:
- only page-aligned count and offset are allowed
- if the memory is already allocated, the call will successfully populate it
- if the memory is not allocated, the call will both allocate and populate
- if the memory is already populated, the call will not repopulate it
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 virt/kvm/guest_memfd.c | 64 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 63 insertions(+), 1 deletion(-)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 08a6bc7d25b6..a2e86ec13e4b 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -379,7 +379,9 @@ static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
 }
 static struct file_operations kvm_gmem_fops = {
-	.mmap = kvm_gmem_mmap,
+	.mmap = kvm_gmem_mmap,
+	.llseek = default_llseek,
+	.write_iter = generic_perform_write,
 	.open = generic_file_open,
 	.release = kvm_gmem_release,
 	.fallocate = kvm_gmem_fallocate,
@@ -390,6 +392,63 @@ void kvm_gmem_init(struct module *module)
 	kvm_gmem_fops.owner = module;
 }
+static int kvm_kmem_gmem_write_begin(const struct kiocb *kiocb,
+				     struct address_space *mapping,
+				     loff_t pos, unsigned int len,
+				     struct folio **foliop,
+				     void **fsdata)
+{
+	struct file *file = kiocb->ki_filp;
+	pgoff_t index = pos >> PAGE_SHIFT;
+	struct folio *folio;
+
+	if (!PAGE_ALIGNED(pos) || len != PAGE_SIZE)
+		return -EINVAL;
+
+	if (pos + len > i_size_read(file_inode(file)))
+		return -EINVAL;
+
+	folio = kvm_gmem_get_folio(file_inode(file), index);
+	if (IS_ERR(folio))
+		return -EFAULT;
+
+	if (WARN_ON_ONCE(folio_test_large(folio))) {
+		folio_unlock(folio);
+		folio_put(folio);
+		return -EFAULT;
+	}
+
+	if (folio_test_uptodate(folio)) {
+		folio_unlock(folio);
+		folio_put(folio);
+		return -ENOSPC;
+	}
+
+	*foliop = folio;
+	return 0;
+}
+
+static int kvm_kmem_gmem_write_end(const struct kiocb *kiocb,
+				   struct address_space *mapping,
+				   loff_t pos, unsigned int len,
+				   unsigned int copied,
+				   struct folio *folio, void *fsdata)
+{
+	if (copied) {
+		if (copied < len) {
+			unsigned int from = pos & (PAGE_SIZE - 1);
+
+			folio_zero_range(folio, from + copied, len - copied);
+		}
+		kvm_gmem_mark_prepared(folio);
+	}
+
+	folio_unlock(folio);
+	folio_put(folio);
+
+	return copied;
+}
+
 static int kvm_gmem_migrate_folio(struct address_space *mapping,
 				  struct folio *dst, struct folio *src,
 				  enum migrate_mode mode)
@@ -442,6 +501,8 @@ static void kvm_gmem_free_folio(struct folio *folio)
 static const struct address_space_operations kvm_gmem_aops = {
 	.dirty_folio = noop_dirty_folio,
+	.write_begin = kvm_kmem_gmem_write_begin,
+	.write_end = kvm_kmem_gmem_write_end,
 	.migrate_folio = kvm_gmem_migrate_folio,
 	.error_remove_folio = kvm_gmem_error_folio,
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
@@ -489,6 +550,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 }
 	file->f_flags |= O_LARGEFILE;
+	file->f_mode |= FMODE_LSEEK | FMODE_PWRITE;
 	inode = file->f_inode;
 	WARN_ON(file->f_mapping != inode->i_mapping);
On Tue, Sep 2, 2025 at 4:20 AM Kalyazin, Nikita kalyazin@amazon.co.uk wrote:
From: Nikita Kalyazin kalyazin@amazon.com
Hi Nikita,
The write syscall populates guest_memfd with user-supplied data in a generic way, i.e. no vendor-specific preparation is performed. It is intended for non-CoCo setups where guest memory is not hardware-encrypted.
What's meant to happen if we do use this for CoCo VMs? I would expect write() to fail, but I don't see why it would (seems like we need/want a check that we aren't write()ing to private memory).
The following behaviour is implemented:
- only page-aligned count and offset are allowed
- if the memory is already allocated, the call will successfully populate it
- if the memory is not allocated, the call will both allocate and populate
- if the memory is already populated, the call will not repopulate it
Signed-off-by: Nikita Kalyazin kalyazin@amazon.com
virt/kvm/guest_memfd.c | 64 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 63 insertions(+), 1 deletion(-)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 08a6bc7d25b6..a2e86ec13e4b 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -379,7 +379,9 @@ static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma) }
static struct file_operations kvm_gmem_fops = {
.mmap = kvm_gmem_mmap,
.mmap = kvm_gmem_mmap,
.llseek = default_llseek,
.write_iter = generic_perform_write,
You seem to have accidentally replaced some tabs with spaces here. :) Please keep the style consistent.
.open = generic_file_open, .release = kvm_gmem_release, .fallocate = kvm_gmem_fallocate,
@@ -390,6 +392,63 @@ void kvm_gmem_init(struct module *module) kvm_gmem_fops.owner = module; }
+static int kvm_kmem_gmem_write_begin(const struct kiocb *kiocb,
struct address_space *mapping,
loff_t pos, unsigned int len,
struct folio **foliop,
void **fsdata)
+{
struct file *file = kiocb->ki_filp;
pgoff_t index = pos >> PAGE_SHIFT;
struct folio *folio;
if (!PAGE_ALIGNED(pos) || len != PAGE_SIZE)
return -EINVAL;
Requiring pos to be page-aligned seems like a strange restriction, and requiring len to be exactly PAGE_SIZE just seems wrong. I don't see any reason why the below logic can't be made to work with an unrestricted pos and len (in other words, I don't see how guest_memfd is special vs other filesystems in this regard).
if (pos + len > i_size_read(file_inode(file)))
return -EINVAL;
folio = kvm_gmem_get_folio(file_inode(file), index);
if (IS_ERR(folio))
return -EFAULT;
if (WARN_ON_ONCE(folio_test_large(folio))) {
folio_unlock(folio);
folio_put(folio);
return -EFAULT;
}
if (folio_test_uptodate(folio)) {
folio_unlock(folio);
folio_put(folio);
return -ENOSPC;
Does it actually matter for the folio not to be uptodate? It seems unnecessarily restrictive not to be able to overwrite data if we're saying that this is only usable for unencrypted memory anyway.
Is ENOSPC really the right errno for this? (Maybe -EFAULT?)
}
*foliop = folio;
return 0;
+}
+static int kvm_kmem_gmem_write_end(const struct kiocb *kiocb,
struct address_space *mapping,
loff_t pos, unsigned int len,
unsigned int copied,
struct folio *folio, void *fsdata)
+{
if (copied) {
if (copied < len) {
unsigned int from = pos & (PAGE_SIZE - 1);
How about:
unsigned int from = pos & ((1UL << folio_order(*folio)) - 1)
So that we don't need to require !folio_test_large() in kvm_kmem_gmem_write_begin().
folio_zero_range(folio, from + copied, len - copied);
}
kvm_gmem_mark_prepared(folio);
}
folio_unlock(folio);
folio_put(folio);
return copied;
+}
static int kvm_gmem_migrate_folio(struct address_space *mapping, struct folio *dst, struct folio *src, enum migrate_mode mode) @@ -442,6 +501,8 @@ static void kvm_gmem_free_folio(struct folio *folio)
static const struct address_space_operations kvm_gmem_aops = { .dirty_folio = noop_dirty_folio,
.write_begin = kvm_kmem_gmem_write_begin,
.write_end = kvm_kmem_gmem_write_end, .migrate_folio = kvm_gmem_migrate_folio, .error_remove_folio = kvm_gmem_error_folio,
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE @@ -489,6 +550,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) }
file->f_flags |= O_LARGEFILE;
file->f_mode |= FMODE_LSEEK | FMODE_PWRITE; inode = file->f_inode; WARN_ON(file->f_mapping != inode->i_mapping);
-- 2.50.1
On 10/09/2025 22:23, James Houghton wrote:
On Tue, Sep 2, 2025 at 4:20 AM Kalyazin, Nikita kalyazin@amazon.co.uk wrote:
From: Nikita Kalyazin kalyazin@amazon.com
Hi Nikita,
Hi James,
Thanks for the review!
The write syscall populates guest_memfd with user-supplied data in a generic way, i.e. no vendor-specific preparation is performed. It is intended for non-CoCo setups where guest memory is not hardware-encrypted.
What's meant to happen if we do use this for CoCo VMs? I would expect write() to fail, but I don't see why it would (seems like we need/want a check that we aren't write()ing to private memory).
I am not so sure that write() should fail even in CoCo VMs if we access not-yet-prepared pages. My understanding was that the CoCoisation of the memory occurs during "preparation". But I may be wrong here.
The following behaviour is implemented:
- only page-aligned count and offset are allowed
- if the memory is already allocated, the call will successfully populate it
- if the memory is not allocated, the call will both allocate and populate
- if the memory is already populated, the call will not repopulate it
Signed-off-by: Nikita Kalyazin kalyazin@amazon.com
virt/kvm/guest_memfd.c | 64 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 63 insertions(+), 1 deletion(-)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 08a6bc7d25b6..a2e86ec13e4b 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -379,7 +379,9 @@ static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma) }
static struct file_operations kvm_gmem_fops = {
.mmap = kvm_gmem_mmap,
.mmap = kvm_gmem_mmap,
.llseek = default_llseek,
.write_iter = generic_perform_write,
You seem to have accidentally replaced some tabs with spaces here. :) Please keep the style consistent.
Thanks for spotting that, will fix in the next version!
.open = generic_file_open, .release = kvm_gmem_release, .fallocate = kvm_gmem_fallocate,
@@ -390,6 +392,63 @@ void kvm_gmem_init(struct module *module) kvm_gmem_fops.owner = module; }
+static int kvm_kmem_gmem_write_begin(const struct kiocb *kiocb,
struct address_space *mapping,
loff_t pos, unsigned int len,
struct folio **foliop,
void **fsdata)
+{
struct file *file = kiocb->ki_filp;
pgoff_t index = pos >> PAGE_SHIFT;
struct folio *folio;
if (!PAGE_ALIGNED(pos) || len != PAGE_SIZE)
return -EINVAL;
Requiring pos to be page-aligned seems like a strange restriction, and requiring len to be exactly PAGE_SIZE just seems wrong. I don't see any reason why the below logic can't be made to work with an unrestricted pos and len (in other words, I don't see how guest_memfd is special vs other filesystems in this regard).
I don't have a real reason to apply those restrictions. Happy to remove them, thanks.
if (pos + len > i_size_read(file_inode(file)))
return -EINVAL;
folio = kvm_gmem_get_folio(file_inode(file), index);
if (IS_ERR(folio))
return -EFAULT;
if (WARN_ON_ONCE(folio_test_large(folio))) {
folio_unlock(folio);
folio_put(folio);
return -EFAULT;
}
if (folio_test_uptodate(folio)) {
folio_unlock(folio);
folio_put(folio);
return -ENOSPC;
Does it actually matter for the folio not to be uptodate? It seems unnecessarily restrictive not to be able to overwrite data if we're saying that this is only usable for unencrypted memory anyway.
In the context of direct map removal [1] it does actually because when we mark a folio as prepared, we remove it from the direct map making it inaccessible to the way write() performs the copy. It does not matter if direct map removal isn't enabled though. Do you think it should be conditional?
[1]: https://lore.kernel.org/kvm/20250828093902.2719-1-roypat@amazon.co.uk
Is ENOSPC really the right errno for this? (Maybe -EFAULT?)
I don't have a strong opinion here. My reasoning was if the folio is already "sealed" by the direct map removal, then it is no longer a part of the "writable space", so -ENOSPC makes sense. Maybe this intuition only works for me so I'm happy to change to -EFAULT if it looks less confusing.
}
*foliop = folio;
return 0;
+}
+static int kvm_kmem_gmem_write_end(const struct kiocb *kiocb,
struct address_space *mapping,
loff_t pos, unsigned int len,
unsigned int copied,
struct folio *folio, void *fsdata)
+{
if (copied) {
if (copied < len) {
unsigned int from = pos & (PAGE_SIZE - 1);
How about:
unsigned int from = pos & ((1UL << folio_order(*folio)) - 1)
So that we don't need to require !folio_test_large() in kvm_kmem_gmem_write_begin().
Thanks, will apply to the next version.
folio_zero_range(folio, from + copied, len - copied);
}
kvm_gmem_mark_prepared(folio);
}
folio_unlock(folio);
folio_put(folio);
return copied;
+}
static int kvm_gmem_migrate_folio(struct address_space *mapping, struct folio *dst, struct folio *src, enum migrate_mode mode)
@@ -442,6 +501,8 @@ static void kvm_gmem_free_folio(struct folio *folio)
static const struct address_space_operations kvm_gmem_aops = { .dirty_folio = noop_dirty_folio,
.write_begin = kvm_kmem_gmem_write_begin,
.write_end = kvm_kmem_gmem_write_end,
.migrate_folio = kvm_gmem_migrate_folio,
.error_remove_folio = kvm_gmem_error_folio,
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
@@ -489,6 +550,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) }
file->f_flags |= O_LARGEFILE;
file->f_mode |= FMODE_LSEEK | FMODE_PWRITE; inode = file->f_inode; WARN_ON(file->f_mapping != inode->i_mapping);
-- 2.50.1
From: Nikita Kalyazin kalyazin@amazon.com
This is to reflect that the write syscall is now implemented for guest_memfd.
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 .../testing/selftests/kvm/guest_memfd_test.c | 86 +++++++++++++++++--
 1 file changed, 80 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index b3ca6737f304..1236e31f5041 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -24,18 +24,91 @@
 #include "test_util.h"
 #include "ucall_common.h"

-static void test_file_read_write(int fd)
+static void test_file_read(int fd)
 {
 	char buf[64];

 	TEST_ASSERT(read(fd, buf, sizeof(buf)) < 0,
 		    "read on a guest_mem fd should fail");
-	TEST_ASSERT(write(fd, buf, sizeof(buf)) < 0,
-		    "write on a guest_mem fd should fail");
 	TEST_ASSERT(pread(fd, buf, sizeof(buf), 0) < 0,
 		    "pread on a guest_mem fd should fail");
-	TEST_ASSERT(pwrite(fd, buf, sizeof(buf), 0) < 0,
-		    "pwrite on a guest_mem fd should fail");
+}
+
+static void test_file_write(int fd, size_t total_size)
+{
+	size_t page_size = getpagesize();
+	void *buf = NULL;
+	int ret;
+
+	ret = posix_memalign(&buf, page_size, total_size);
+	TEST_ASSERT_EQ(ret, 0);
+
+	/* Check arguments correctness checks work as expected */
+
+	ret = pwrite(fd, buf, page_size - 1, 0);
+	TEST_ASSERT(ret == -1, "write unaligned count on a guest_mem fd should fail");
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	ret = pwrite(fd, buf, page_size, 1);
+	TEST_ASSERT(ret == -1, "write unaligned offset on a guest_mem fd should fail");
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	ret = pwrite(fd, buf, page_size, total_size);
+	TEST_ASSERT(ret == -1, "writing past the file size on a guest_mem fd should fail");
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	ret = pwrite(fd, NULL, page_size, 0);
+	TEST_ASSERT(ret == -1, "supplying a NULL buffer when writing a guest_mem fd should fail");
+	TEST_ASSERT_EQ(errno, EFAULT);
+
+	/* Check double population is not allowed */
+
+	ret = pwrite(fd, buf, page_size, 0);
+	TEST_ASSERT(ret == page_size, "page-aligned write on a guest_mem fd should succeed");
+
+	ret = pwrite(fd, buf, page_size, 0);
+	TEST_ASSERT(ret == -1, "write on already populated guest_mem fd should fail");
+	TEST_ASSERT_EQ(errno, ENOSPC);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0, page_size);
+	TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) should succeed");
+
+	/* Check population is allowed again after punching a hole */
+
+	ret = pwrite(fd, buf, page_size, 0);
+	TEST_ASSERT(ret == page_size,
+		    "page-aligned write on a punched guest_mem fd should succeed");
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0, page_size);
+	TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) should succeed");
+
+	/* Check population of already allocated memory is allowed */
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, page_size);
+	TEST_ASSERT(!ret, "fallocate with aligned offset and size should succeed");
+
+	ret = pwrite(fd, buf, page_size, 0);
+	TEST_ASSERT(ret == page_size, "write on a preallocated guest_mem fd should succeed");
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0, page_size);
+	TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) should succeed");
+
+	/* Check population works until an already populated page is encountered */
+
+	ret = pwrite(fd, buf, total_size, 0);
+	TEST_ASSERT(ret == total_size, "page-aligned write on a guest_mem fd should succeed");
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0, page_size);
+	TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) should succeed");
+
+	ret = pwrite(fd, buf, total_size, 0);
+	TEST_ASSERT(ret == page_size, "write on a guest_mem fd should not overwrite data");
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0, total_size);
+	TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) should succeed");
+
+	free(buf);
 }
 static void test_mmap_supported(int fd, size_t page_size, size_t total_size)
@@ -281,7 +354,8 @@ static void test_guest_memfd(unsigned long vm_type)
fd = vm_create_guest_memfd(vm, total_size, flags);
-	test_file_read_write(fd);
+	test_file_read(fd);
+	test_file_write(fd, total_size);
 	if (flags & GUEST_MEMFD_FLAG_MMAP) {
 		test_mmap_supported(fd, page_size, total_size);
On Tue, Sep 2, 2025 at 4:20 AM Kalyazin, Nikita kalyazin@amazon.co.uk wrote:
[ based on kvm/next ]
Implement guest_memfd allocation and population via the write syscall. This is useful in non-CoCo use cases where the host can access guest memory. Although the same can be achieved via a userspace mapping and memcpy, write() is more performant because it does not need to set up page tables and does not take a page fault for every page the way memcpy would. Note that the memcpy path cannot be accelerated via MADV_POPULATE_WRITE, which relies on GUP and is not supported by guest_memfd.
Populating 512MiB of guest_memfd on an x86 machine:
- via memcpy: 436 ms
- via write: 202 ms (-54%)
Silly question: can you remind me why this speed-up is important?
Also, I think we can get the same effect as MADV_POPULATE_WRITE just by making a second VMA for the memory file and reading the first byte of each page. Is that a viable strategy for your use case?
Seems fine to me to allow write() for guest_memfd anyway. :)
On 10/09/2025 22:37, James Houghton wrote:
On Tue, Sep 2, 2025 at 4:20 AM Kalyazin, Nikita kalyazin@amazon.co.uk wrote:
[ based on kvm/next ]
Implement guest_memfd allocation and population via the write syscall. This is useful in non-CoCo use cases where the host can access guest memory. Although the same can be achieved via a userspace mapping and memcpy, write() is more performant because it does not need to set up page tables and does not take a page fault for every page the way memcpy would. Note that the memcpy path cannot be accelerated via MADV_POPULATE_WRITE, which relies on GUP and is not supported by guest_memfd.
Populating 512MiB of guest_memfd on an x86 machine:
- via memcpy: 436 ms
- via write: 202 ms (-54%)
Silly question: can you remind me why this speed-up is important?
The speed-up is important for the Firecracker use case [1] because population is likely to be on the hot path of the snapshot restore process. Even though we aim to prepopulate the guest memory before the guest accesses it, for large VMs the guest has a good chance of hitting a page that isn't yet populated, triggering on-demand fault handling, which is much slower, and we'd like to avoid that as much as we can.
[1]: https://github.com/firecracker-microvm/firecracker/blob/main/docs/snapshotti...
Also, I think we can get the same effect as MADV_POPULATE_WRITE just by making a second VMA for the memory file and reading the first byte of each page. Is that a viable strategy for your use case?
If I understand correctly what you mean, it doesn't look much different from the memcpy option I mention above. All those one-byte read accesses would trigger a user mapping fault for every page, and those are quite slow. write() avoids them completely.
Seems fine to me to allow write() for guest_memfd anyway. :)
Glad to hear that!