On Wed, Jul 13, 2022, at 3:35 AM, Gupta, Pankaj wrote:
This is v7 of this series, which implements fd-based KVM guest private memory. The patches are based on the latest kvm/queue branch commit:

b9b71f43683a (kvm/queue) KVM: x86/mmu: Buffer nested MMU split_desc_cache only by default capacity
Introduction
In general, this patch series introduces a fd-based memslot which provides guest memory through a memory file descriptor, fd[offset, size], instead of hva/size. The fd can be created from a supported memory filesystem such as tmpfs or hugetlbfs, which we refer to as the memory backing store. KVM [...]
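To make the fd[offset, size] idea concrete, here is a minimal userspace sketch, assuming a plain tmpfs-backed memfd as the backing store. memfd_create(2) and fallocate(2) are existing syscalls; the memslot uAPI the series actually adds is not reproduced here, only described in the trailing comment.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        size_t size = 1UL << 30;        /* 1 GiB of guest memory */

        /* tmpfs-backed fd; MFD_HUGETLB would pick hugetlbfs instead */
        int fd = memfd_create("guest-mem", MFD_CLOEXEC);
        if (fd < 0) {
                perror("memfd_create");
                return 1;
        }

        /* reserve the whole range in the backing store up front */
        if (fallocate(fd, 0, 0, size)) {
                perror("fallocate");
                return 1;
        }

        /*
         * A fd-based memslot would then be described to KVM as
         * (fd, offset, size) rather than (hva, size); the exact
         * struct layout and ioctl are defined by this series and
         * are deliberately not mocked up here.
         */
        printf("backing store: fd=%d offset=0 size=%zu\n", fd, size);

        close(fd);
        return 0;
}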
Thinking about this a bit: since the host-side fd on tmpfs or shmem stores memory in the host page cache instead of mapping pages into a userspace address space, can we hit a double (uncoordinated) page cache problem when the guest page cache is also in use?
This is my understanding: on the host it will indeed be in the page cache (in the current shmem implementation), but that is just how the physical memory for the guest is allocated and provided. The guest OS will never see this fd; it only sees guest memory, on top of which it can build its own page cache for its own file-mapped content, but that is unrelated to the host page cache.
Yes. If the guest fills its page cache with file-backed memory, this will also quickly fill the host page cache on the host side (the shmem fd backend). This can hurt guest VM performance if the host reaches memory pressure sooner, or else we end up utilizing far less system RAM.
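For illustration, a hedged sketch of how the host side could shed those pages again, assuming a shmem/memfd backing fd: punching a hole releases the backing pages, and with them the host page cache they occupy. FALLOC_FL_PUNCH_HOLE on shmem is existing kernel behavior; the release_backing_range() helper and any policy for deciding which ranges to punch are purely hypothetical.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>

/* release [offset, offset + len) of the backing fd back to the host */
int release_backing_range(int fd, off_t offset, off_t len)
{
        /* PUNCH_HOLE must be paired with KEEP_SIZE; frees shmem pages */
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      offset, len)) {
                perror("fallocate(PUNCH_HOLE)");
                return -1;
        }
        return 0;
}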
Is this in any meaningful way different from a regular VM?
--Andy