Tested the patch series by auditing the actual userspace (HVA) mappings and verifying that the backing physical PFNs belong to the expected NUMA node.
Enabled QEMU's kvm_set_user_memory trace event to dump the HVA, guest_memfd, guest_memfd offset, base GPA, and size of each memslot that QEMU registers with KVM via the kvm_set_user_memory_region() helper.
After that, enabled the kvm_mmu_set_spte kernel trace event to dump the PFNs being mapped into the guest for a particular GPA, performed the GPA->memslot->HVA translation (using the QEMU traces above), and then checked /proc/<qemu_pid>/numa_maps to validate that the HVA is bound to the NUMA node associated with that memslot/guest_memfd.
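The numa_maps check above can be scripted. The following is only a rough sketch: the helper name, the PID/HVA values, and the sample numa_maps line are fabricated for illustration and are not output from the actual runs.

```python
import re

def hva_numa_binding(numa_maps_text, hva):
    """Return (policy, {node: npages}) for the numa_maps entry whose
    start address equals the given HVA, or None if no entry matches."""
    for line in numa_maps_text.splitlines():
        fields = line.split()
        if not fields or int(fields[0], 16) != hva:
            continue
        policy = fields[1]  # e.g. "bind:1" for MPOL_BIND on node 1
        pages = {}
        for f in fields[2:]:
            m = re.match(r"N(\d+)=(\d+)", f)  # per-node page counts
            if m:
                pages[int(m.group(1))] = int(m.group(2))
        return policy, pages
    return None

# Fabricated numa_maps entry for illustration only; in the real check
# the text would come from open(f"/proc/{qemu_pid}/numa_maps").read().
sample = "7f5a40000000 bind:1 file=/memfd:ram1 dirty=512 N1=512 kernelpagesize_kB=2048"
print(hva_numa_binding(sample, 0x7f5a40000000))
```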
Additionally, looked up the PFN (from kernel traces) in /proc/zoneinfo to validate that the physical page belongs to the NUMA node associated with the memslot/guest_memfd.
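The /proc/zoneinfo lookup can likewise be automated. A minimal sketch, assuming the usual "Node N, zone", "spanned" and "start_pfn:" fields of /proc/zoneinfo; the helper name and the zoneinfo excerpt below are fabricated for illustration:

```python
import re

def pfn_to_node(zoneinfo_text, pfn):
    """Map a PFN to the NUMA node whose zone spans it, based on the
    start_pfn/spanned fields of each zone in /proc/zoneinfo text."""
    node = spanned = None
    for line in zoneinfo_text.splitlines():
        m = re.match(r"Node (\d+), zone", line)
        if m:
            node, spanned = int(m.group(1)), None
            continue
        m = re.search(r"spanned\s+(\d+)", line)
        if m:
            spanned = int(m.group(1))
            continue
        m = re.search(r"start_pfn:\s+(\d+)", line)
        if m and node is not None and spanned:
            start = int(m.group(1))
            if start <= pfn < start + spanned:
                return node
    return None

# Fabricated zoneinfo excerpt for illustration only; the real check
# would read open("/proc/zoneinfo").read().
sample = """Node 0, zone   Normal
        spanned  1048576
  start_pfn:           65536
Node 1, zone   Normal
        spanned  1048576
  start_pfn:           1114112
"""
print(pfn_to_node(sample, 1200000))
```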
This testing/validation is based on the following trees:
Host Kernel:
https://github.com/AMDESE/linux/commits/snp-hugetlb-v2-wip0/
This tree is based on commit 27cb583e25d0 from David Hildenbrand's guestmemfd_preview tree (which already includes base mmap support), with Google's HugeTLB v2 patches (covering both in-place conversion and the hugetlb infrastructure) rebased on top, along with additional patches to enable in-place conversion and hugetlb for SNP.
QEMU:
https://github.com/AMDESE/qemu/commits/snp-hugetlb-dev-wip0/
QEMU command line used for testing/validation:
qemu-system-x86_64 --enable-kvm -object sev-snp-guest,id=sev0,cbitpos=51,reduced-phys-bits=1,convert-in-place=true -object memory-backend-memfd,id=ram0,host-nodes=0,policy=bind,size=150000M,prealloc=false -numa node,nodeid=0,memdev=ram0,cpus=0-31,cpus=64-95 -object memory-backend-memfd,id=ram1,host-nodes=1,policy=bind,size=150000M,prealloc=false -numa node,nodeid=1,memdev=ram1,cpus=32-63,cpus=96-127
(guest NUMA configuration mapped 1:1 to host NUMA configuration).
Tested-by: Ashish Kalra <ashish.kalra@amd.com>
Thanks,
Ashish
On 9/24/2025 1:19 PM, David Hildenbrand wrote:
> On 27.08.25 19:52, Shivank Garg wrote:
>> This series introduces NUMA-aware memory placement support for KVM guests with guest_memfd memory backends. It builds upon Fuad Tabba's work (V17) that enabled host-mapping for guest_memfd memory [1] and can be applied directly on the KVM tree [2] (branch kvm-next, base commit: a6ad5413, Merge branch 'guest-memfd-mmap' into HEAD).
> Heads-up: I'll queue this (incl. the replacement patch for #4 from the reply) and send it tomorrow as a PR against kvm/next to Paolo.