On Tue, Apr 12, 2022 at 05:16:22PM -0700, Andy Lutomirski wrote:
> On Fri, Apr 8, 2022, at 2:05 PM, Vishal Annapurve wrote:
>> This series implements selftests targeting the feature floated by Chao via: https://lore.kernel.org/linux-mm/20220310140911.50924-1-chao.p.peng@linux.in...
>> The changes below aim to test the fd-based approach for guest private memory in the context of normal (non-confidential) VMs executing on non-confidential platforms.
>> Confidential platforms, along with a confidentiality-aware software stack, support a notion of private/shared accesses from confidential VMs. Generally, a bit in the GPA conveys whether an access is shared or private. Non-confidential platforms don't have a notion of private or shared accesses from guest VMs. To support this notion, KVM_HC_MAP_GPA_RANGE is modified to allow marking an access from a VM within a GPA range as always shared or always private. Any suggestions for implementing this ioctl differently or more cleanly are appreciated.
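(Aside for anyone following along: on the guest side, I'd expect the above to boil down to something like the sketch below. The hypercall helper here is my own simplification of what the selftests provide, and the flag names are from <asm/kvm_para.h> (pulled in via <linux/kvm_para.h>); how the series actually wires this up may differ.)

    #include <linux/kvm_para.h>     /* KVM_HC_MAP_GPA_RANGE + flags */

    /* Simplified guest-side hypercall helper (Intel "vmcall" shown;
     * the KVM selftests provide their own equivalent). */
    static inline long kvm_hypercall3(unsigned int nr, unsigned long a0,
                                      unsigned long a1, unsigned long a2)
    {
            long ret;

            asm volatile("vmcall"
                         : "=a" (ret)
                         : "a" (nr), "b" (a0), "c" (a1), "d" (a2)
                         : "memory");
            return ret;
    }

    /* Guest marks npages at gpa as always-private under the modified
     * KVM_HC_MAP_GPA_RANGE semantics described above
     * (KVM_MAP_GPA_RANGE_DECRYPTED would mark it always-shared). */
    static void guest_set_private(unsigned long gpa, unsigned long npages)
    {
            kvm_hypercall3(KVM_HC_MAP_GPA_RANGE, gpa, npages,
                           KVM_MAP_GPA_RANGE_ENCRYPTED |
                           KVM_MAP_GPA_RANGE_PAGE_SZ_4K);
    }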
> This is fantastic. I do think we need to decide how this should work in general. We have a few platforms with somewhat different properties:
> TDX: The guest decides, per memory access (using a GPA bit), whether an access is private or shared. In principle, the same address could be *both*, distinguished only by that bit, and the two addresses would refer to different pages.
> SEV: The guest decides, per memory access (using a GPA bit), whether an access is private or shared. At any given time, a physical address (with that bit masked off) can be private, shared, or invalid, but it can't be valid as private and shared at the same time.
> pKVM (currently, as I understand it): the guest decides by hypercall, in advance of an access, which addresses are private and which are shared.
> This series, if I understood it correctly, is like TDX except with no hardware security.
> Sean or Chao, do you have a clear sense of whether the current fd-based private memory proposal can cleanly support SEV and pKVM? What, if anything, needs to be done on the API side to get that working well? I don't think we need to support SEV or pKVM right away to get this merged, but I do think we should understand how the API can map to them.
I've been looking at porting the SEV-SNP hypervisor patches over to using memfd, and I've hit an issue that I think is generally applicable to SEV/SEV-ES as well. Namely, at guest init time we have something like the following flow (a rough VMM-side sketch follows the list):
VMM:
  - allocate shared memory to back the guest and map it into the guest address space
  - initialize shared memory with the initial memory contents (namely the BIOS)
  - ask KVM to encrypt these pages in-place and measure them to generate the initial measured payload for attestation, via KVM_SEV_LAUNCH_UPDATE with the GPA for each range of memory to encrypt

KVM:
  - issue the SEV_LAUNCH_UPDATE firmware command, which takes an HPA as input and does an in-place encryption/measure of the page
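To make the VMM side concrete, here's a minimal sketch against the existing (pre-UPM) uapi. I'm using the legacy SEV KVM_SEV_LAUNCH_UPDATE_DATA command for illustration since it's in mainline (the SNP patches' equivalent command works per-GPA-range, per the flow above), and assuming vm_fd/sev_fd and the KVM_SET_USER_MEMORY_REGION setup already exist:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    static void sev_launch_update(int vm_fd, int sev_fd,
                                  const void *bios_image, size_t bios_size)
    {
            /* Shared memory backing the guest; mapped into the guest
             * address space via KVM_SET_USER_MEMORY_REGION (not shown). */
            void *mem = mmap(NULL, bios_size, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);

            /* Initialize it with the initial payload (the BIOS). */
            memcpy(mem, bios_image, bios_size);

            /* Encrypt/measure the pages in-place; KVM issues the
             * SEV_LAUNCH_UPDATE firmware command on our behalf. */
            struct kvm_sev_launch_update_data update = {
                    .uaddr = (__u64)(unsigned long)mem,
                    .len   = bios_size,
            };
            struct kvm_sev_cmd cmd = {
                    .id     = KVM_SEV_LAUNCH_UPDATE_DATA,
                    .data   = (__u64)(unsigned long)&update,
                    .sev_fd = sev_fd,
            };
            ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
    }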
With the current v5 of the memfd/UPM series, I think the expected flow is that we would fallocate() these ranges from the private fd backend in advance of calling KVM_SEV_LAUNCH_UPDATE (if the VMM did it afterward, we'd destroy the initial guest payload, since those pages would be replaced by newly-allocated ones). But if the VMM does it beforehand, it has no way to initialize the guest memory contents, since mmap()/pwrite() are disallowed due to MFD_INACCESSIBLE.
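In other words, the catch-22 looks roughly like this (a fragment, not compilable against mainline: MFD_INACCESSIBLE is the flag from the proposed series, and gpa_offset/bios_size are placeholders):

    /* MFD_INACCESSIBLE comes from the proposed memfd/UPM patches. */
    int fd = memfd_create("guest-private", MFD_INACCESSIBLE);

    /* Must back the to-be-encrypted range *before* KVM_SEV_LAUNCH_UPDATE,
     * otherwise the measured pages get replaced by newly-allocated ones. */
    fallocate(fd, 0, gpa_offset, bios_size);

    /* ...but now there's no way to put the BIOS image in place: */
    mmap(NULL, bios_size, PROT_WRITE, MAP_SHARED, fd, gpa_offset); /* fails */
    pwrite(fd, bios_image, bios_size, gpa_offset);                 /* fails */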
I think something similar to your proposal[1] of making pread()/pwrite() possible for private-fd-backed memory that's been flagged as "shareable" would work for this case, although here the "shareable" flag could be removed immediately upon successful completion of the SEV_LAUNCH_UPDATE firmware command.
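Roughly, the ordering I have in mind (entirely hypothetical uapi, the MFD_SHAREABLE flag and MEMFD_DROP_SHAREABLE ioctl are invented just to illustrate):

    /* Hypothetical: create the private memfd as initially shareable. */
    int fd = memfd_create("guest-private",
                          MFD_INACCESSIBLE | MFD_SHAREABLE /* invented */);
    fallocate(fd, 0, gpa_offset, bios_size);

    /* Writable only while the "shareable" flag is set. */
    pwrite(fd, bios_image, bios_size, gpa_offset);

    /* Encrypt/measure in place, as in the launch flow above... */

    /* ...then drop "shareable" immediately on success, after which
     * pread()/pwrite()/mmap() are refused again (invented ioctl): */
    ioctl(fd, MEMFD_DROP_SHAREABLE, 0);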
I think with TDX this isn't an issue, because the analogous TDH.MEM.PAGE.ADD SEAMCALL takes a source/dest HPA pair as input params, so the VMM wouldn't need write access to the dest HPA at any point, just the source HPA.
[1] https://lwn.net/ml/linux-kernel/eefc3c74-acca-419c-8947-726ce2458446@www.fas...