On 10/06/2025 23:25, Peter Xu wrote:
On Fri, Apr 04, 2025 at 03:43:50PM +0000, Nikita Kalyazin wrote:
Add support for sending a page fault event if userfaultfd is registered. Only the minor fault event is currently supported.
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
 virt/kvm/guest_memfd.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index fbf89e643add..096d89e7282d 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -4,6 +4,9 @@
 #include <linux/kvm_host.h>
 #include <linux/pagemap.h>
 #include <linux/anon_inodes.h>
+#ifdef CONFIG_KVM_PRIVATE_MEM
+#include <linux/userfaultfd_k.h>
+#endif /* CONFIG_KVM_PRIVATE_MEM */
 
 #include "kvm_mm.h"
 
@@ -380,6 +383,13 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
 		kvm_gmem_mark_prepared(folio);
 	}
 
+	if (userfaultfd_minor(vmf->vma) &&
+	    !(vmf->flags & FAULT_FLAG_USERFAULT_CONTINUE)) {
+		folio_unlock(folio);
+		filemap_invalidate_unlock_shared(inode->i_mapping);
+		return handle_userfault(vmf, VM_UFFD_MINOR);
+	}
+
 	vmf->page = folio_file_page(folio, vmf->pgoff);
 
 out_folio:
Hmm, does guest-memfd (with your current approach) at least need to define the new can_userfault() hook?
Meanwhile, we have some hard-coded lines so far, like:
mfill_atomic():

	if (!vma_is_shmem(dst_vma) &&
	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
		goto out_unlock;
I thought that would already fail guest-memfd on a CONTINUE request, and it doesn't seem to be touched yet in this series.
I'm not yet sure how the test worked without hitting something like that. It's highly likely I missed something; some explanation would be welcome.
Yes, I realised that I'd failed to post this part soon after I sent the series, but I refrained from sending a new version because the upstream consensus was to review/merge the mmap support in guest_memfd [1] before continuing to build on top of it. This is the missed part I planned to include in the next version. Sorry for the confusion.
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 64551e8a55fb..080437fa7eab 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -221,8 +221,10 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 	if (vm_flags & VM_DROPPABLE)
 		return false;
 
-	if (!vma->vm_ops->can_userfault ||
-	    !vma->vm_ops->can_userfault(vma, VM_UFFD_MINOR))
+	if ((vm_flags & VM_UFFD_MINOR) &&
+	    (!vma->vm_ops ||
+	     !vma->vm_ops->can_userfault ||
+	     !vma->vm_ops->can_userfault(vma, VM_UFFD_MINOR)))
 		return false;
 
 	/*
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 0aa82c968e16..638360a78561 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -788,7 +788,9 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
 					    src_start, len, flags);
 
-	can_userfault = dst_vma->vm_ops->can_userfault &&
+	can_userfault =
+		dst_vma->vm_ops &&
+		dst_vma->vm_ops->can_userfault &&
 		dst_vma->vm_ops->can_userfault(dst_vma, __VM_UFFD_FLAGS);
 
 	if (!vma_is_anonymous(dst_vma) && !can_userfault)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 91ee5dd91c31..202b12dc4b6f 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -420,8 +420,15 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
+static bool kvm_gmem_can_userfault(struct vm_area_struct *vma,
+				   unsigned long vm_flags)
+{
+	return vm_flags & VM_UFFD_MINOR;
+}
+
 static const struct vm_operations_struct kvm_gmem_vm_ops = {
-	.fault = kvm_gmem_fault,
+	.fault		= kvm_gmem_fault,
+	.can_userfault	= kvm_gmem_can_userfault,
 };
 
 static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
[1] https://lore.kernel.org/kvm/20250605153800.557144-1-tabba@google.com/
Thanks,
-- Peter Xu