Hello,
This is v3 of the patch series for TDX selftests.
It has been updated for v10 of Intel’s TDX host patches, which was proposed at https://lkml.org/lkml/2022/8/8/877.
The tree can be found at https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v3/
Changes from RFC v2:
Selftest setup now builds upon the KVM selftest framework when setting up the guest for testing. We now use the KVM selftest framework to build the guest page tables and load the ELF binary into guest memory.
Inlining the entire guest image is no longer required, which allows us to cleanly separate code into different compilation units and to use proper assembly instead of inline assembly (addresses Sean’s comment).
To achieve this, we take a dependency on the SEV VM tests: https://lore.kernel.org/lkml/20221018205845.770121-1-pgonda@google.com/T/. Those patches provide functions for the host to allocate and track protected memory in the guest.
In RFCv3, TDX selftest code is organized into:
+ headers in tools/testing/selftests/kvm/include/x86_64/tdx/
+ common code in tools/testing/selftests/kvm/lib/x86_64/tdx/
+ selftests in tools/testing/selftests/kvm/x86_64/tdx_*
RFCv3 also adds additional selftests for UPM.
Dependencies
+ Peter’s patches, which provide functions for the host to allocate and track protected memory in the guest.
  https://lore.kernel.org/lkml/20221018205845.770121-1-pgonda@google.com/T/
+ Peter’s patches depend on Sean’s patches:
  + https://lore.kernel.org/linux-arm-kernel/20220825232522.3997340-1-seanjc@goo...
  + https://lore.kernel.org/lkml/20221006004512.666529-1-seanjc@google.com/T/
+ Proposed fixes for these issues mentioned on the mailing list:
  + https://lore.kernel.org/lkml/36cde6d6-128d-884e-1447-09b08bb5de3d@intel.com/
  + https://lore.kernel.org/lkml/diqzedtubs0d.fsf@google.com/
  + https://lore.kernel.org/lkml/67b782ee-c95c-d6bc-3cca-cdfe75f4bf6a@intel.com/
  + https://lore.kernel.org/lkml/diqzcz7cd983.fsf@ackerleytng-cloudtop-sg.c.goog...
  + https://lore.kernel.org/linux-mm/20221116205025.1510291-1-ackerleytng@google...
Further work for this patch series/TODOs
+ Sean’s comments for the non-confidential UPM selftests patch series at https://lore.kernel.org/lkml/Y8dC8WDwEmYixJqt@google.com/T/#u apply here as well
+ Add ucall support for TDX selftests
I would also like to acknowledge the following people, who helped review or test patches in RFCv1 and RFCv2:
+ Sean Christopherson seanjc@google.com
+ Zhenzhong Duan zhenzhong.duan@intel.com
+ Peter Gonda pgonda@google.com
+ Andrew Jones drjones@redhat.com
+ Maxim Levitsky mlevitsk@redhat.com
+ Xiaoyao Li xiaoyao.li@intel.com
+ David Matlack dmatlack@google.com
+ Marc Orr marcorr@google.com
+ Isaku Yamahata isaku.yamahata@gmail.com
Links to earlier patch series
+ RFC v1: https://lore.kernel.org/lkml/20210726183816.1343022-1-erdemaktas@google.com/...
+ RFC v2: https://lore.kernel.org/lkml/20220830222000.709028-1-sagis@google.com/T/#u
Ackerley Tng (14):
  KVM: selftests: Add function to allow one-to-one GVA to GPA mappings
  KVM: selftests: Expose function that sets up sregs based on VM's mode
  KVM: selftests: Store initial stack address in struct kvm_vcpu
  KVM: selftests: Refactor steps in vCPU descriptor table initialization
  KVM: selftests: TDX: Use KVM_TDX_CAPABILITIES to validate TDs' attribute configuration
  KVM: selftests: Require GCC to realign stacks on function entry
  KVM: selftests: Add functions to allow mapping as shared
  KVM: selftests: Add support for restricted memory
  KVM: selftests: TDX: Update load_td_memory_region for VM memory backed by restricted memfd
  KVM: selftests: Expose _vm_vaddr_alloc
  KVM: selftests: TDX: Add support for TDG.MEM.PAGE.ACCEPT
  KVM: selftests: TDX: Add support for TDG.VP.VEINFO.GET
  KVM: selftests: TDX: Add TDX UPM selftest
  KVM: selftests: TDX: Add TDX UPM selftests for implicit conversion
Erdem Aktas (4):
  KVM: selftests: Add support for creating non-default type VMs
  KVM: selftests: Add helper functions to create TDX VMs
  KVM: selftests: TDX: Add TDX lifecycle test
  KVM: selftests: TDX: Adding test case for TDX port IO
Roger Wang (1):
  KVM: selftests: TDX: Add TDG.VP.INFO test
Ryan Afranji (2):
  KVM: selftests: TDX: Verify the behavior when host consumes a TD private memory
  KVM: selftests: TDX: Add shared memory test
Sagi Shahar (10):
  KVM: selftests: TDX: Add report_fatal_error test
  KVM: selftests: TDX: Add basic TDX CPUID test
  KVM: selftests: TDX: Add basic get_td_vmcall_info test
  KVM: selftests: TDX: Add TDX IO writes test
  KVM: selftests: TDX: Add TDX IO reads test
  KVM: selftests: TDX: Add TDX MSR read/write tests
  KVM: selftests: TDX: Add TDX HLT exit test
  KVM: selftests: TDX: Add TDX MMIO reads test
  KVM: selftests: TDX: Add TDX MMIO writes test
  KVM: selftests: TDX: Add TDX CPUID TDVMCALL test
 tools/testing/selftests/kvm/.gitignore        |    3 +
 tools/testing/selftests/kvm/Makefile          |   10 +-
 .../selftests/kvm/include/kvm_util_base.h     |   43 +-
 .../testing/selftests/kvm/include/test_util.h |    2 +
 .../selftests/kvm/include/x86_64/processor.h  |    4 +
 .../kvm/include/x86_64/tdx/td_boot.h          |   82 +
 .../kvm/include/x86_64/tdx/td_boot_asm.h      |   16 +
 .../selftests/kvm/include/x86_64/tdx/tdcall.h |   59 +
 .../selftests/kvm/include/x86_64/tdx/tdx.h    |   65 +
 .../kvm/include/x86_64/tdx/tdx_util.h         |   19 +
 .../kvm/include/x86_64/tdx/test_util.h        |  164 ++
 tools/testing/selftests/kvm/lib/kvm_util.c    |  123 +-
 tools/testing/selftests/kvm/lib/test_util.c   |    7 +
 .../selftests/kvm/lib/x86_64/processor.c      |   77 +-
 tools/testing/selftests/kvm/lib/x86_64/sev.c  |    2 +-
 .../selftests/kvm/lib/x86_64/tdx/td_boot.S    |  101 ++
 .../selftests/kvm/lib/x86_64/tdx/tdcall.S     |  158 ++
 .../selftests/kvm/lib/x86_64/tdx/tdx.c        |  231 +++
 .../selftests/kvm/lib/x86_64/tdx/tdx_util.c   |  562 +++++++
 .../selftests/kvm/lib/x86_64/tdx/test_util.c  |  101 ++
 .../kvm/x86_64/tdx_shared_mem_test.c          |  137 ++
 .../selftests/kvm/x86_64/tdx_upm_test.c       |  460 ++++++
 .../selftests/kvm/x86_64/tdx_vm_tests.c       | 1329 +++++++++++++++++
 23 files changed, 3709 insertions(+), 46 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
 create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
-- 2.39.0.246.g2a6d74b583-goog
One-to-one GVA to GPA mappings can be used in the guest to set up boot sequences during which paging is enabled, hence requiring a transition from using physical to virtual addresses in consecutive instructions.
Signed-off-by: Ackerley Tng ackerleytng@google.com
---
 .../selftests/kvm/include/kvm_util_base.h |  2 ++
 tools/testing/selftests/kvm/lib/kvm_util.c | 35 ++++++++++++++++---
 2 files changed, 32 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 8dac0f49d20b9..0db5cd4b8383a 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -402,6 +402,8 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); +vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + uint32_t data_memslot); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index fa8aea97cdb62..5257bce6f546d 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1237,6 +1237,8 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, * vm - Virtual Machine * sz - Size in bytes * vaddr_min - Minimum starting virtual address + * paddr_min - Minimum starting physical address + * data_memslot - memslot number to allocate in * encrypt - Whether the region should be handled as encrypted * * Output Args: None @@ -1251,14 +1253,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, * a page. */ static vm_vaddr_t -_vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, bool encrypt) +_vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + vm_paddr_t paddr_min, uint32_t data_memslot, bool encrypt) { uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
virt_pgd_alloc(vm); vm_paddr_t paddr = _vm_phy_pages_alloc(vm, pages, - KVM_UTIL_MIN_PFN * vm->page_size, - 0, encrypt); + paddr_min, + data_memslot, encrypt);
/* * Find an unused range of virtual page addresses of at least @@ -1281,12 +1284,34 @@ _vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, bool encrypt
vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) { - return _vm_vaddr_alloc(vm, sz, vaddr_min, vm->protected); + return _vm_vaddr_alloc(vm, sz, vaddr_min, + KVM_UTIL_MIN_PFN * vm->page_size, 0, + vm->protected); }
vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min) { - return _vm_vaddr_alloc(vm, sz, vaddr_min, false); + return _vm_vaddr_alloc(vm, sz, vaddr_min, + KVM_UTIL_MIN_PFN * vm->page_size, 0, false); +} + +/** + * Allocate memory in @vm of size @sz in memslot with id @data_memslot, + * beginning with the desired address of @vaddr_min. + * + * If there isn't enough memory at @vaddr_min, find the next possible address + * that can meet the requested size in the given memslot. + * + * Return the address where the memory is allocated. + */ +vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + uint32_t data_memslot) +{ + vm_vaddr_t gva = _vm_vaddr_alloc(vm, sz, vaddr_min, (vm_paddr_t) vaddr_min, + data_memslot, vm->protected); + ASSERT_EQ(gva, addr_gva2gpa(vm, gva)); + + return gva; }
/*
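For illustration, a minimal usage sketch of the new helper; the memslot number (1) and minimum address used here are arbitrary example values, not taken from this patch, and assume the memslot covers that guest physical range:

	/*
	 * Illustrative only: allocate one page whose GVA is guaranteed to
	 * equal its GPA, placing it in a hypothetical memslot 1.
	 */
	vm_vaddr_t gva = vm_vaddr_alloc_1to1(vm, PAGE_SIZE, 0x100000, 1);

	/* The helper already asserts this internally; shown for emphasis. */
	ASSERT_EQ(gva, addr_gva2gpa(vm, gva));

Later patches in this series use the helper exactly this way to place TD boot code and boot parameters at fixed guest physical addresses.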
From: Erdem Aktas erdemaktas@google.com
Currently, the vm_create functions only create VMs of type KVM_VM_TYPE_DEFAULT. Add a type parameter to ____vm_create() so that new VM types can be created.
Signed-off-by: Erdem Aktas erdemaktas@google.com
Reviewed-by: David Matlack dmatlack@google.com
Signed-off-by: Ryan Afranji afranji@google.com
Signed-off-by: Sagi Shahar sagis@google.com
Signed-off-by: Ackerley Tng ackerleytng@google.com
---
 tools/testing/selftests/kvm/include/kvm_util_base.h | 6 ++++--
 tools/testing/selftests/kvm/lib/kvm_util.c          | 6 +++---
 tools/testing/selftests/kvm/lib/x86_64/sev.c        | 2 +-
 3 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 0db5cd4b8383a..0fa4dab3d8e52 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -27,6 +27,8 @@
#define NSEC_PER_SEC 1000000000L
+#define KVM_VM_TYPE_DEFAULT 0 + typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
@@ -686,13 +688,13 @@ uint64_t vm_nr_pages_required(enum vm_guest_mode mode, * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to * calculate the amount of memory needed for per-vCPU data, e.g. stacks. */ -struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages); +struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages, int type); struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus, uint64_t nr_extra_pages);
static inline struct kvm_vm *vm_create_barebones(void) { - return ____vm_create(VM_MODE_DEFAULT, 0); + return ____vm_create(VM_MODE_DEFAULT, 0, KVM_VM_TYPE_DEFAULT); }
static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus) diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 5257bce6f546d..14246f0fd2e78 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -185,7 +185,7 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = { _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES, "Missing new mode params?");
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages) +struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages, int type) { struct kvm_vm *vm;
@@ -201,7 +201,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages) hash_init(vm->regions.slot_hash);
vm->mode = mode; - vm->type = 0; + vm->type = type;
vm->pa_bits = vm_guest_mode_params[mode].pa_bits; vm->va_bits = vm_guest_mode_params[mode].va_bits; @@ -337,7 +337,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus, struct userspace_mem_region *slot0; struct kvm_vm *vm;
- vm = ____vm_create(mode, nr_pages); + vm = ____vm_create(mode, nr_pages, KVM_VM_TYPE_DEFAULT);
kvm_vm_elf_load(vm, program_invocation_name);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/sev.c b/tools/testing/selftests/kvm/lib/x86_64/sev.c index faed2ebe63ac9..dedfd9f45cfb3 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/sev.c +++ b/tools/testing/selftests/kvm/lib/x86_64/sev.c @@ -221,7 +221,7 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t policy, void *guest_code, uint64_t nr_pages = vm_nr_pages_required(mode, 1, 0); struct kvm_vm *vm;
- vm = ____vm_create(mode, nr_pages); + vm = ____vm_create(mode, nr_pages, KVM_VM_TYPE_DEFAULT);
kvm_sev_ioctl(vm, KVM_SEV_INIT, NULL);
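For context, the intended use looks roughly like this (KVM_X86_TDX_VM comes from the TDX host patches this series depends on, and is what td_create() passes later in the series):

	/* Existing callers keep today's behavior via the new default type. */
	struct kvm_vm *vm = ____vm_create(VM_MODE_DEFAULT, 0, KVM_VM_TYPE_DEFAULT);

	/* TDX support (added later in this series) requests a TD instead. */
	struct kvm_vm *td = ____vm_create(VM_MODE_DEFAULT, 0, KVM_X86_TDX_VM);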
This allows initializing sregs without setting vCPU registers in KVM.
No functional change intended.
Signed-off-by: Ackerley Tng ackerleytng@google.com
---
 .../selftests/kvm/include/x86_64/processor.h |  2 +
 .../selftests/kvm/lib/x86_64/processor.c     | 39 ++++++++++---------
 2 files changed, 23 insertions(+), 18 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index e8ca0d8a6a7e0..74e0d3698f30c 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -644,6 +644,8 @@ const struct kvm_cpuid_entry2 *get_cpuid_entry(const struct kvm_cpuid2 *cpuid, void vcpu_init_cpuid(struct kvm_vcpu *vcpu, const struct kvm_cpuid2 *cpuid); void vcpu_set_hv_cpuid(struct kvm_vcpu *vcpu);
+void vcpu_setup_mode_sregs(struct kvm_vm *vm, struct kvm_sregs *sregs); + static inline struct kvm_cpuid_entry2 *__vcpu_get_cpuid_entry(struct kvm_vcpu *vcpu, uint32_t function, uint32_t index) diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index ed811181320de..1bb07d3c025b0 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -589,35 +589,38 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp, kvm_seg_fill_gdt_64bit(vm, segp); }
-static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu) +void vcpu_setup_mode_sregs(struct kvm_vm *vm, struct kvm_sregs *sregs) { - struct kvm_sregs sregs; - - /* Set mode specific system register values. */ - vcpu_sregs_get(vcpu, &sregs); - - sregs.idt.limit = 0; + sregs->idt.limit = 0;
- kvm_setup_gdt(vm, &sregs.gdt); + kvm_setup_gdt(vm, &sregs->gdt);
switch (vm->mode) { case VM_MODE_PXXV48_4K: - sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG; - sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR; - sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX); - - kvm_seg_set_unusable(&sregs.ldt); - kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs); - kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds); - kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es); - kvm_setup_tss_64bit(vm, &sregs.tr, 0x18); + sregs->cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG; + sregs->cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR; + sregs->efer |= (EFER_LME | EFER_LMA | EFER_NX); + + kvm_seg_set_unusable(&sregs->ldt); + kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs->cs); + kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs->ds); + kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs->es); + kvm_setup_tss_64bit(vm, &sregs->tr, 0x18); break;
default: TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode); }
- sregs.cr3 = vm->pgd; + sregs->cr3 = vm->pgd; +} + +static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu) +{ + struct kvm_sregs sregs; + + vcpu_sregs_get(vcpu, &sregs); + vcpu_setup_mode_sregs(vm, &sregs); vcpu_sregs_set(vcpu, &sregs); }
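A minimal sketch of how the exposed helper can be used when KVM cannot be asked for the vCPU's registers, mirroring how the TDX utilities later in this series call it:

	struct kvm_sregs sregs;

	/* Assemble mode-specific sregs locally instead of reading them from KVM. */
	memset(&sregs, 0, sizeof(sregs));
	vcpu_setup_mode_sregs(vm, &sregs);

	/* sregs can now be copied into guest-visible boot parameters. */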
TDX guests' registers cannot be initialized directly using vcpu_regs_set(), hence the stack pointer needs to be initialized by the guest itself, running boot code beginning at the reset vector.
We store the stack address as part of struct kvm_vcpu so that it remains accessible later, when it is passed to the boot code for rsp initialization.
Signed-off-by: Ackerley Tng ackerleytng@google.com
---
 tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
 tools/testing/selftests/kvm/lib/x86_64/processor.c  | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 0fa4dab3d8e52..cdc204cfeb4c2 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -54,6 +54,7 @@ struct kvm_vcpu { int fd; struct kvm_vm *vm; struct kvm_run *run; + vm_vaddr_t initial_stack_addr; #ifdef __x86_64__ struct kvm_cpuid2 *cpuid; #endif diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 1bb07d3c025b0..3046b555fee49 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -673,10 +673,12 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid()); vcpu_setup(vm, vcpu);
+ vcpu->initial_stack_addr = stack_vaddr + (DEFAULT_STACK_PGS * getpagesize()); + /* Setup guest general purpose registers */ vcpu_regs_get(vcpu, ®s); regs.rflags = regs.rflags | 0x2; - regs.rsp = stack_vaddr + (DEFAULT_STACK_PGS * getpagesize()); + regs.rsp = vcpu->initial_stack_addr; regs.rip = (unsigned long) guest_code; vcpu_regs_set(vcpu, ®s);
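A sketch of how this field ends up being consumed later in the series when filling in per-vCPU boot parameters; the helper below is hypothetical and only illustrates the 32-bit constraint:

	/* Hypothetical helper: the TD boot code runs in 32-bit mode, so the
	 * initial stack must sit below 4 GiB before it can be handed over.
	 */
	static uint32_t initial_esp_gva(const struct kvm_vcpu *vcpu)
	{
		TEST_ASSERT(vcpu->initial_stack_addr <= 0xffffffff,
			    "initial stack address must fit in 32 bits");
		return (uint32_t)vcpu->initial_stack_addr;
	}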
Split the vCPU descriptor table initialization process into a few steps and expose them:
+ Setting up the IDT
+ Syncing exception handlers into the guest
In kvm_setup_idt(), we conditionally allocate guest memory for vm->idt to avoid double allocation when kvm_setup_idt() is used after vm_init_descriptor_tables().
Signed-off-by: Ackerley Tng ackerleytng@google.com
---
 .../selftests/kvm/include/x86_64/processor.h |  2 ++
 .../selftests/kvm/lib/x86_64/processor.c     | 19 ++++++++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h index 74e0d3698f30c..abaaab4d885c1 100644 --- a/tools/testing/selftests/kvm/include/x86_64/processor.h +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h @@ -750,6 +750,8 @@ struct ex_regs { uint64_t rflags; };
+void kvm_setup_idt(struct kvm_vm *vm, struct kvm_dtable *dt); +void sync_exception_handlers_to_guest(struct kvm_vm *vm); void vm_init_descriptor_tables(struct kvm_vm *vm); void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu); void vm_install_exception_handler(struct kvm_vm *vm, int vector, diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 3046b555fee49..1ea1019d48c13 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -1174,19 +1174,32 @@ void vm_init_descriptor_tables(struct kvm_vm *vm) DEFAULT_CODE_SELECTOR); }
+void kvm_setup_idt(struct kvm_vm *vm, struct kvm_dtable *dt) +{ + if (!vm->idt) + vm->idt = vm_vaddr_alloc_page(vm); + + dt->base = vm->idt; + dt->limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1; +} + +void sync_exception_handlers_to_guest(struct kvm_vm *vm) +{ + *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers; +} + void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu) { struct kvm_vm *vm = vcpu->vm; struct kvm_sregs sregs;
vcpu_sregs_get(vcpu, &sregs); - sregs.idt.base = vm->idt; - sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1; + kvm_setup_idt(vcpu->vm, &sregs.idt); sregs.gdt.base = vm->gdt; sregs.gdt.limit = getpagesize() - 1; kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs); vcpu_sregs_set(vcpu, &sregs); - *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers; + sync_exception_handlers_to_guest(vm); }
void vm_install_exception_handler(struct kvm_vm *vm, int vector,
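A minimal sketch of the split usage this refactor enables, mirroring how the TDX utilities later in this series call the two helpers separately:

	struct kvm_sregs sregs = {0};

	/* Build only the IDT descriptor; vm->idt is allocated on first use. */
	kvm_setup_idt(vm, &sregs.idt);

	/* After all handlers are registered, publish the handler table address
	 * to the guest.
	 */
	sync_exception_handlers_to_guest(vm);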
From: Erdem Aktas erdemaktas@google.com
TDX requires additional ioctls to initialize the VM and vCPUs, to add private memory, and to finalize the VM. Additional utility functions are also provided to manipulate a TD, similar to those that manipulate a VM in the existing selftest framework.
A TD's initial register state cannot be manipulated directly; it can only be influenced through the TD's memory, hence boot code is provided at the TD's reset vector. This boot code reads boot parameters loaded into the TD's memory and sets up the TD for the selftest.
Signed-off-by: Erdem Aktas erdemaktas@google.com
Signed-off-by: Ryan Afranji afranji@google.com
Signed-off-by: Sagi Shahar sagis@google.com
Co-developed-by: Ackerley Tng ackerleytng@google.com
Signed-off-by: Ackerley Tng ackerleytng@google.com
---
Changes RFCv2 -> RFCv3
+ Refactored the required cpuid changes into tdx_apply_cpuid_restrictions() (a more meaningfully named function)
+ Fixed the logic for setting the LBR and CET features to use sub-leaf 1 instead of sub-leaf 0
+ Renamed ioctl calls to match the names used in the kernel for easy referencing
---
 tools/testing/selftests/kvm/Makefile          |   2 +
 .../kvm/include/x86_64/tdx/td_boot.h          |  82 ++++
 .../kvm/include/x86_64/tdx/td_boot_asm.h      |  16 +
 .../kvm/include/x86_64/tdx/tdx_util.h         |  16 +
 .../selftests/kvm/lib/x86_64/tdx/td_boot.S    | 101 ++++
 .../selftests/kvm/lib/x86_64/tdx/tdx_util.c   | 438 ++++++++++++++++++
 6 files changed, 655 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
 create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
 create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 1eb9b2aa7c220..317927d9c55bd 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -59,6 +59,8 @@ LIBKVM_x86_64 += lib/x86_64/svm.c LIBKVM_x86_64 += lib/x86_64/ucall.c LIBKVM_x86_64 += lib/x86_64/vmx.c LIBKVM_x86_64 += lib/x86_64/sev.c +LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c +LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S
LIBKVM_aarch64 += lib/aarch64/gic.c LIBKVM_aarch64 += lib/aarch64/gic_v3.c diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h new file mode 100644 index 0000000000000..148057e569d69 --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTEST_TDX_TD_BOOT_H +#define SELFTEST_TDX_TD_BOOT_H + +#include <stdint.h> +#include "tdx/td_boot_asm.h" + +/* + * Layout for boot section (not to scale) + * + * GPA + * ┌─────────────────────────────┬──0x1_0000_0000 (4GB) + * │ Boot code trampoline │ + * ├─────────────────────────────┼──0x0_ffff_fff0: Reset vector (16B below 4GB) + * │ Boot code │ + * ├─────────────────────────────┼──td_boot will be copied here, so that the + * │ │ jmp to td_boot is exactly at the reset vector + * │ Empty space │ + * │ │ + * ├─────────────────────────────┤ + * │ │ + * │ │ + * │ Boot parameters │ + * │ │ + * │ │ + * └─────────────────────────────┴──0x0_ffff_0000: TD_BOOT_PARAMETERS_GPA + */ +#define FOUR_GIGABYTES_GPA (4ULL << 30) + +/** + * The exact memory layout for LGDT or LIDT instructions. + */ +struct __packed td_boot_parameters_dtr { + uint16_t limit; + uint32_t base; +}; + +/** + * The exact layout in memory required for a ljmp, including the selector for + * changing code segment. + */ +struct __packed td_boot_parameters_ljmp_target { + uint32_t eip_gva; + uint16_t code64_sel; +}; + +/** + * Allows each vCPU to be initialized with different eip and esp. + */ +struct __packed td_per_vcpu_parameters { + uint32_t esp_gva; + struct td_boot_parameters_ljmp_target ljmp_target; +}; + +/** + * Boot parameters for the TD. + * + * Unlike a regular VM, we can't ask KVM to set registers such as esp, eip, etc + * before boot, so to run selftests, these registers' values have to be + * initialized by the TD. + * + * This struct is loaded in TD private memory at TD_BOOT_PARAMETERS_GPA. + * + * The TD boot code will read off parameters from this struct and set up the + * vcpu for executing selftests. + */ +struct __packed td_boot_parameters { + uint32_t cr0; + uint32_t cr3; + uint32_t cr4; + struct td_boot_parameters_dtr gdtr; + struct td_boot_parameters_dtr idtr; + struct td_per_vcpu_parameters per_vcpu[]; +}; + +extern void td_boot(void); +extern void reset_vector(void); +extern void td_boot_code_end(void); + +#define TD_BOOT_CODE_SIZE (td_boot_code_end - td_boot) + +#endif /* SELFTEST_TDX_TD_BOOT_H */ diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h new file mode 100644 index 0000000000000..0a07104f7debf --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTEST_TDX_TD_BOOT_ASM_H +#define SELFTEST_TDX_TD_BOOT_ASM_H + +/* + * GPA where TD boot parameters wil lbe loaded. 
+ * + * TD_BOOT_PARAMETERS_GPA is arbitrarily chosen to + * + * + be within the 4GB address space + * + provide enough contiguous memory for the struct td_boot_parameters such + * that there is one struct td_per_vcpu_parameters for KVM_MAX_VCPUS + */ +#define TD_BOOT_PARAMETERS_GPA 0xffff0000 + +#endif // SELFTEST_TDX_TD_BOOT_ASM_H diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h new file mode 100644 index 0000000000000..274b245f200bf --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTESTS_TDX_KVM_UTIL_H +#define SELFTESTS_TDX_KVM_UTIL_H + +#include <stdint.h> + +#include "kvm_util_base.h" + +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code); + +struct kvm_vm *td_create(void); +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, + uint64_t attributes); +void td_finalize(struct kvm_vm *vm); + +#endif // SELFTESTS_TDX_KVM_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S new file mode 100644 index 0000000000000..800e09264d4ec --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#include "tdx/td_boot_asm.h" + +/* Offsets for reading struct td_boot_parameters */ +#define TD_BOOT_PARAMETERS_CR0 0 +#define TD_BOOT_PARAMETERS_CR3 4 +#define TD_BOOT_PARAMETERS_CR4 8 +#define TD_BOOT_PARAMETERS_GDT 12 +#define TD_BOOT_PARAMETERS_IDT 18 +#define TD_BOOT_PARAMETERS_PER_VCPU 24 + +/* Offsets for reading struct td_per_vcpu_parameters */ +#define TD_PER_VCPU_PARAMETERS_ESP_GVA 0 +#define TD_PER_VCPU_PARAMETERS_LJMP_TARGET 4 + +#define SIZEOF_TD_PER_VCPU_PARAMETERS 10 + +.code32 + +.globl td_boot +td_boot: + /* In this procedure, edi is used as a temporary register */ + cli + + /* Paging is off */ + + movl $TD_BOOT_PARAMETERS_GPA, %ebx + + /* + * Find the address of struct td_per_vcpu_parameters for this + * vCPU based on esi (TDX spec: initialized with vcpu id). Put + * struct address into register for indirect addressing + */ + movl $SIZEOF_TD_PER_VCPU_PARAMETERS, %eax + mul %esi + leal TD_BOOT_PARAMETERS_PER_VCPU(%ebx), %edi + addl %edi, %eax + + /* Setup stack */ + movl TD_PER_VCPU_PARAMETERS_ESP_GVA(%eax), %esp + + /* Setup GDT */ + leal TD_BOOT_PARAMETERS_GDT(%ebx), %edi + lgdt (%edi) + + /* Setup IDT */ + leal TD_BOOT_PARAMETERS_IDT(%ebx), %edi + lidt (%edi) + + /* + * Set up control registers (There are no instructions to + * mov from memory to control registers, hence we need to use ebx + * as a scratch register) + */ + movl TD_BOOT_PARAMETERS_CR4(%ebx), %edi + movl %edi, %cr4 + movl TD_BOOT_PARAMETERS_CR3(%ebx), %edi + movl %edi, %cr3 + movl TD_BOOT_PARAMETERS_CR0(%ebx), %edi + movl %edi, %cr0 + + /* Paging is on after setting the most significant bit on cr0 */ + + /* + * Jump to selftest guest code. Far jumps read <segment + * selector:new eip> from addr+4:addr. This location has + * already been set up in boot parameters, and we can read boot + * parameters because boot code and boot parameters are loaded so + * that GVA and GPA are mapped 1:1. 
+ */ + ljmp *TD_PER_VCPU_PARAMETERS_LJMP_TARGET(%eax) + +.globl reset_vector +reset_vector: + jmp td_boot + /* + * Pad reset_vector to its full size of 16 bytes so that this + * can be loaded with the end of reset_vector aligned to GPA=4G + */ + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + +/* Leave marker so size of td_boot code can be computed */ +.globl td_boot_code_end +td_boot_code_end: + +/* Disable executable stack */ +.section .note.GNU-stack,"",%progbits diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c new file mode 100644 index 0000000000000..3564059c0b89b --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c @@ -0,0 +1,438 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#define _GNU_SOURCE +#include <asm/kvm.h> +#include <asm/kvm_host.h> +#include <errno.h> +#include <linux/kvm.h> +#include <stdint.h> +#include <sys/ioctl.h> + +#include "kvm_util.h" +#include "test_util.h" +#include "tdx/td_boot.h" +#include "kvm_util_base.h" +#include "processor.h" + +/* + * TDX ioctls + */ + +static char *tdx_cmd_str[] = { + "KVM_TDX_CAPABILITIES", + "KVM_TDX_INIT_VM", + "KVM_TDX_INIT_VCPU", + "KVM_TDX_INIT_MEM_REGION", + "KVM_TDX_FINALIZE_VM" +}; +#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str)) + +static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data) +{ + struct kvm_tdx_cmd tdx_cmd; + int r; + + TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n", + ioctl_no); + + memset(&tdx_cmd, 0x0, sizeof(tdx_cmd)); + tdx_cmd.id = ioctl_no; + tdx_cmd.flags = flags; + tdx_cmd.data = (uint64_t)data; + + r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd); + TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r, + errno); +} + +#define XFEATURE_LBR 15 +#define XFEATURE_XTILECFG 17 +#define XFEATURE_XTILEDATA 18 +#define XFEATURE_CET_U 11 +#define XFEATURE_CET_S 12 + +#define XFEATURE_MASK_LBR (1 << XFEATURE_LBR) +#define XFEATURE_MASK_CET_U (1 << XFEATURE_CET_U) +#define XFEATURE_MASK_CET_S (1 << XFEATURE_CET_S) +#define XFEATURE_MASK_CET (XFEATURE_MASK_CET_U | XFEATURE_MASK_CET_S) +#define XFEATURE_MASK_XTILECFG (1 << XFEATURE_XTILECFG) +#define XFEATURE_MASK_XTILEDATA (1 << XFEATURE_XTILEDATA) +#define XFEATURE_MASK_XTILE (XFEATURE_MASK_XTILECFG | XFEATURE_MASK_XTILEDATA) + +static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data) +{ + for (int i = 0; i < cpuid_data->nent; i++) { + struct kvm_cpuid_entry2 *e = &cpuid_data->entries[i]; + + if (e->function == 0xd && e->index == 0) { + /* + * TDX module requires both XTILE_{CFG, DATA} to be set. + * Both bits are required for AMX to be functional. + */ + if ((e->eax & XFEATURE_MASK_XTILE) != + XFEATURE_MASK_XTILE) { + e->eax &= ~XFEATURE_MASK_XTILE; + } + } + if (e->function == 0xd && e->index == 1) { + /* + * TDX doesn't support LBR yet. + * Disable bits from the XCR0 register. + */ + e->ecx &= ~XFEATURE_MASK_LBR; + /* + * TDX modules requires both CET_{U, S} to be set even + * if only one is supported. 
+ */ + if (e->ecx & XFEATURE_MASK_CET) + e->ecx |= XFEATURE_MASK_CET; + } + } +} + +static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes) +{ + const struct kvm_cpuid2 *cpuid; + struct kvm_tdx_init_vm init_vm; + + memset(&init_vm, 0, sizeof(struct kvm_tdx_init_vm)); + + cpuid = kvm_get_supported_cpuid(); + + memcpy(&init_vm.cpuid, cpuid, kvm_cpuid2_size(cpuid->nent)); + init_vm.attributes = attributes; + + tdx_apply_cpuid_restrictions(&init_vm.cpuid); + + tdx_ioctl(vm->fd, KVM_TDX_INIT_VM, 0, &init_vm); +} + +static void tdx_td_vcpu_init(struct kvm_vcpu *vcpu) +{ + const struct kvm_cpuid2 *cpuid = kvm_get_supported_cpuid(); + + vcpu_init_cpuid(vcpu, cpuid); + tdx_ioctl(vcpu->fd, KVM_TDX_INIT_VCPU, 0, NULL); +} + +static void tdx_init_mem_region(struct kvm_vm *vm, void *source_pages, + uint64_t gpa, uint64_t size) +{ + struct kvm_tdx_init_mem_region mem_region = { + .source_addr = (uint64_t)source_pages, + .gpa = gpa, + .nr_pages = size / PAGE_SIZE, + }; + uint32_t metadata = KVM_TDX_MEASURE_MEMORY_REGION; + + TEST_ASSERT((mem_region.nr_pages > 0) && + ((mem_region.nr_pages * PAGE_SIZE) == size), + "Cannot add partial pages to the guest memory.\n"); + TEST_ASSERT(((uint64_t)source_pages & (PAGE_SIZE - 1)) == 0, + "Source memory buffer is not page aligned\n"); + tdx_ioctl(vm->fd, KVM_TDX_INIT_MEM_REGION, metadata, &mem_region); +} + +static void tdx_td_finalizemr(struct kvm_vm *vm) +{ + tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL); +} + +/* + * TD creation/setup/finalization + */ + +static void tdx_enable_capabilities(struct kvm_vm *vm) +{ + int rc; + + rc = kvm_check_cap(KVM_CAP_X2APIC_API); + TEST_ASSERT(rc, "TDX: KVM_CAP_X2APIC_API is not supported!"); + rc = kvm_check_cap(KVM_CAP_SPLIT_IRQCHIP); + TEST_ASSERT(rc, "TDX: KVM_CAP_SPLIT_IRQCHIP is not supported!"); + + vm_enable_cap(vm, KVM_CAP_X2APIC_API, + KVM_X2APIC_API_USE_32BIT_IDS | + KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK); + vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24); +} + +static void tdx_configure_memory_encryption(struct kvm_vm *vm) +{ + /* Configure shared/enCrypted bit for this VM according to TDX spec */ + vm->arch.s_bit = 1ULL << (vm->pa_bits - 1); + vm->arch.c_bit = 0; + /* Set gpa_protected_mask so that tagging/untagging of GPAs works */ + vm->gpa_protected_mask = vm->arch.s_bit; + /* There's no need to mask any encryption bits for PTEs */ + vm->arch.pte_me_mask = 0; + /* This VM is protected (has memory encryption) */ + vm->protected = true; +} + +static void tdx_apply_cr4_restrictions(struct kvm_sregs *sregs) +{ + /* TDX spec 11.6.2: CR4 bit MCE is fixed to 1 */ + sregs->cr4 |= X86_CR4_MCE; + + /* Set this because UEFI also sets this up, to handle XMM exceptions */ + sregs->cr4 |= X86_CR4_OSXMMEXCPT; + + /* TDX spec 11.6.2: CR4 bit VMXE and SMXE are fixed to 0 */ + sregs->cr4 &= ~(X86_CR4_VMXE | X86_CR4_SMXE); +} + +static void load_td_boot_code(struct kvm_vm *vm) +{ + void *boot_code_hva = addr_gpa2hva(vm, FOUR_GIGABYTES_GPA - TD_BOOT_CODE_SIZE); + + TEST_ASSERT(td_boot_code_end - reset_vector == 16, + "The reset vector must be 16 bytes in size."); + memcpy(boot_code_hva, td_boot, TD_BOOT_CODE_SIZE); +} + +static void load_td_per_vcpu_parameters(struct td_boot_parameters *params, + struct kvm_sregs *sregs, + struct kvm_vcpu *vcpu, + void *guest_code) +{ + /* Store vcpu_index to match what the TDX module would store internally */ + static uint32_t vcpu_index; + + struct td_per_vcpu_parameters *vcpu_params = ¶ms->per_vcpu[vcpu_index]; + + TEST_ASSERT(vcpu->initial_stack_addr != 0, + "initial stack address 
should not be 0"); + TEST_ASSERT(vcpu->initial_stack_addr <= 0xffffffff, + "initial stack address must fit in 32 bits"); + TEST_ASSERT((uint64_t)guest_code <= 0xffffffff, + "guest_code must fit in 32 bits"); + TEST_ASSERT(sregs->cs.selector != 0, "cs.selector should not be 0"); + + vcpu_params->esp_gva = (uint32_t)(uint64_t)vcpu->initial_stack_addr; + vcpu_params->ljmp_target.eip_gva = (uint32_t)(uint64_t)guest_code; + vcpu_params->ljmp_target.code64_sel = sregs->cs.selector; + + vcpu_index++; +} + +static void load_td_common_parameters(struct td_boot_parameters *params, + struct kvm_sregs *sregs) +{ + /* Set parameters! */ + params->cr0 = sregs->cr0; + params->cr3 = sregs->cr3; + params->cr4 = sregs->cr4; + params->gdtr.limit = sregs->gdt.limit; + params->gdtr.base = sregs->gdt.base; + params->idtr.limit = sregs->idt.limit; + params->idtr.base = sregs->idt.base; + + TEST_ASSERT(params->cr0 != 0, "cr0 should not be 0"); + TEST_ASSERT(params->cr3 != 0, "cr3 should not be 0"); + TEST_ASSERT(params->cr4 != 0, "cr4 should not be 0"); + TEST_ASSERT(params->gdtr.base != 0, "gdt base address should not be 0"); +} + +static void load_td_boot_parameters(struct td_boot_parameters *params, + struct kvm_vcpu *vcpu, void *guest_code) +{ + struct kvm_sregs sregs; + + /* Assemble parameters in sregs */ + memset(&sregs, 0, sizeof(struct kvm_sregs)); + vcpu_setup_mode_sregs(vcpu->vm, &sregs); + tdx_apply_cr4_restrictions(&sregs); + kvm_setup_idt(vcpu->vm, &sregs.idt); + + if (!params->cr0) + load_td_common_parameters(params, &sregs); + + load_td_per_vcpu_parameters(params, &sregs, vcpu, guest_code); +} + +/** + * Adds a vCPU to a TD (Trusted Domain) with minimum defaults. It will not set + * up any general purpose registers as they will be initialized by the TDX. In + * TDX, vCPUs RIP is set to 0xFFFFFFF0. See Intel TDX EAS Section "Initial State + * of Guest GPRs" for more information on vCPUs initial register values when + * entering the TD first time. + * + * Input Args: + * vm - Virtual Machine + * vcpuid - The id of the VCPU to add to the VM. + */ +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code) +{ + struct kvm_vcpu *vcpu; + + /* + * TD setup will not use the value of rip set in vm_vcpu_add anyway, so + * NULL can be used for guest_code. + */ + vcpu = vm_vcpu_add(vm, vcpu_id, NULL); + + tdx_td_vcpu_init(vcpu); + + load_td_boot_parameters(addr_gpa2hva(vm, TD_BOOT_PARAMETERS_GPA), + vcpu, guest_code); + + return vcpu; +} + +/** + * Iterate over set ranges within sparsebit @s. In each iteration, + * @range_begin and @range_end will take the beginning and end of the set range, + * which are of type sparsebit_idx_t. + * + * For example, if the range [3, 7] (inclusive) is set, within the iteration, + * @range_begin will take the value 3 and @range_end will take the value 7. + * + * Ensure that there is at least one bit set before using this macro with + * sparsebit_any_set(), because sparsebit_first_set() will abort if none are + * set. + */ +#define sparsebit_for_each_set_range(s, range_begin, range_end) \ + for (range_begin = sparsebit_first_set(s), \ + range_end = sparsebit_next_clear(s, range_begin) - 1; \ + range_begin && range_end; \ + range_begin = sparsebit_next_set(s, range_end), \ + range_end = sparsebit_next_clear(s, range_begin) - 1) +/* + * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the -1 + * would then cause an underflow back to 2**64 - 1. This is expected and + * correct. 
+ * + * If the last range in the sparsebit is [x, y] and we try to iterate, + * sparsebit_next_set() will return 0, and sparsebit_next_clear() will try and + * find the first range, but that's correct because the condition expression + * would cause us to quit the loop. + */ + +static void load_td_memory_region(struct kvm_vm *vm, + struct userspace_mem_region *region) +{ + const struct sparsebit *pages = region->protected_phy_pages; + const uint64_t hva_base = region->region.userspace_addr; + const vm_paddr_t gpa_base = region->region.guest_phys_addr; + const sparsebit_idx_t lowest_page_in_region = gpa_base >> + vm->page_shift; + + sparsebit_idx_t i; + sparsebit_idx_t j; + + if (!sparsebit_any_set(pages)) + return; + + sparsebit_for_each_set_range(pages, i, j) { + const uint64_t size_to_load = (j - i + 1) * vm->page_size; + const uint64_t offset = + (i - lowest_page_in_region) * vm->page_size; + const uint64_t hva = hva_base + offset; + const uint64_t gpa = gpa_base + offset; + void *source_addr; + + /* + * KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place, + * hence we have to make a copy if there's only one backing + * memory source + */ + source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE, + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + TEST_ASSERT( + source_addr, + "Could not allocate memory for loading memory region"); + + memcpy(source_addr, (void *)hva, size_to_load); + + tdx_init_mem_region(vm, source_addr, gpa, size_to_load); + + munmap(source_addr, size_to_load); + } +} + +static void load_td_private_memory(struct kvm_vm *vm) +{ + int ctr; + struct userspace_mem_region *region; + + hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) { + load_td_memory_region(vm, region); + } +} + +struct kvm_vm *td_create(void) +{ + return ____vm_create(VM_MODE_DEFAULT, 0, KVM_X86_TDX_VM); +} + +static void td_setup_boot_code(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type) +{ + vm_vaddr_t addr; + size_t boot_code_allocation = round_up(TD_BOOT_CODE_SIZE, PAGE_SIZE); + vm_paddr_t boot_code_base_gpa = FOUR_GIGABYTES_GPA - boot_code_allocation; + size_t npages = DIV_ROUND_UP(boot_code_allocation, PAGE_SIZE); + + vm_userspace_mem_region_add(vm, src_type, boot_code_base_gpa, 1, npages, 0); + addr = vm_vaddr_alloc_1to1(vm, boot_code_allocation, boot_code_base_gpa, 1); + ASSERT_EQ(addr, boot_code_base_gpa); + + load_td_boot_code(vm); +} + +static size_t td_boot_parameters_size(void) +{ + int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS); + size_t total_per_vcpu_parameters_size = + max_vcpus * sizeof(struct td_per_vcpu_parameters); + + return sizeof(struct td_boot_parameters) + total_per_vcpu_parameters_size; +} + +static void td_setup_boot_parameters(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type) +{ + vm_vaddr_t addr; + size_t boot_params_size = td_boot_parameters_size(); + int npages = DIV_ROUND_UP(boot_params_size, PAGE_SIZE); + size_t total_size = npages * PAGE_SIZE; + + vm_userspace_mem_region_add(vm, src_type, TD_BOOT_PARAMETERS_GPA, 2, npages, 0); + addr = vm_vaddr_alloc_1to1(vm, total_size, TD_BOOT_PARAMETERS_GPA, 2); + ASSERT_EQ(addr, TD_BOOT_PARAMETERS_GPA); +} + +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, + uint64_t attributes) +{ + uint64_t nr_pages_required; + + tdx_enable_capabilities(vm); + + tdx_configure_memory_encryption(vm); + + tdx_td_init(vm, attributes); + + nr_pages_required = vm_nr_pages_required(VM_MODE_DEFAULT, 1, 0); + + /* + * Add memory (add 0th memslot) for TD. 
This will be used to setup the + * CPU (provide stack space for the CPU) and to load the elf file. + */ + vm_userspace_mem_region_add(vm, src_type, 0, 0, nr_pages_required, 0); + + kvm_vm_elf_load(vm, program_invocation_name); + + vm_init_descriptor_tables(vm); + + td_setup_boot_code(vm, src_type); + td_setup_boot_parameters(vm, src_type); +} + +void td_finalize(struct kvm_vm *vm) +{ + sync_exception_handlers_to_guest(vm); + + load_td_private_memory(vm); + + tdx_td_finalizemr(vm); +} -- 2.39.0.246.g2a6d74b583-goog
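Putting the pieces together, a TDX selftest sets up a TD roughly as follows. This is a sketch based on the functions declared in tdx_util.h above; the wrapper name, backing source, and attributes value are example choices, and guest_code is the selftest's guest entry function:

	/* Hypothetical example wrapper showing the TD lifecycle. */
	static void example_td_lifecycle(void *guest_code)
	{
		struct kvm_vcpu *vcpu;
		struct kvm_vm *vm;

		vm = td_create();
		td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
		vcpu = td_vcpu_add(vm, 0, guest_code);
		td_finalize(vm);

		/* The TD can now be entered with vcpu_run(vcpu). */
	}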
This also exercises the KVM_TDX_CAPABILITIES ioctl.
Suggested-by: Isaku Yamahata isaku.yamahata@intel.com
Signed-off-by: Ackerley Tng ackerleytng@google.com
---
 .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 75 ++++++++++++++++++-
 1 file changed, 72 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c index 3564059c0b89b..2e9679d24a843 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c @@ -27,10 +27,9 @@ static char *tdx_cmd_str[] = { }; #define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))
-static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data) +static int _tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data) { struct kvm_tdx_cmd tdx_cmd; - int r;
TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n", ioctl_no); @@ -40,11 +39,63 @@ static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data) tdx_cmd.flags = flags; tdx_cmd.data = (uint64_t)data;
- r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd); + return ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd); +} + +static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data) +{ + int r; + + r = _tdx_ioctl(fd, ioctl_no, flags, data); TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r, errno); }
+static struct kvm_tdx_capabilities *tdx_read_capabilities(void) +{ + int i; + int rc = -1; + int nr_cpuid_configs = 4; + struct kvm_tdx_capabilities *tdx_cap = NULL; + int kvm_fd; + + kvm_fd = open_kvm_dev_path_or_exit(); + + do { + nr_cpuid_configs *= 2; + + tdx_cap = realloc( + tdx_cap, sizeof(*tdx_cap) + + nr_cpuid_configs * sizeof(*tdx_cap->cpuid_configs)); + TEST_ASSERT(tdx_cap != NULL, + "Could not allocate memory for tdx capability nr_cpuid_configs %d\n", + nr_cpuid_configs); + + tdx_cap->nr_cpuid_configs = nr_cpuid_configs; + rc = _tdx_ioctl(kvm_fd, KVM_TDX_CAPABILITIES, 0, tdx_cap); + } while (rc < 0 && errno == E2BIG); + + TEST_ASSERT(rc == 0, "KVM_TDX_CAPABILITIES failed: %d %d", + rc, errno); + + pr_debug("tdx_cap: attrs: fixed0 0x%016llx fixed1 0x%016llx\n" + "tdx_cap: xfam fixed0 0x%016llx fixed1 0x%016llx\n", + tdx_cap->attrs_fixed0, tdx_cap->attrs_fixed1, + tdx_cap->xfam_fixed0, tdx_cap->xfam_fixed1); + + for (i = 0; i < tdx_cap->nr_cpuid_configs; i++) { + const struct kvm_tdx_cpuid_config *config = + &tdx_cap->cpuid_configs[i]; + pr_debug("cpuid config[%d]: leaf 0x%x sub_leaf 0x%x eax 0x%08x ebx 0x%08x ecx 0x%08x edx 0x%08x\n", + i, config->leaf, config->sub_leaf, + config->eax, config->ebx, config->ecx, config->edx); + } + + close(kvm_fd); + + return tdx_cap; +} + #define XFEATURE_LBR 15 #define XFEATURE_XTILECFG 17 #define XFEATURE_XTILEDATA 18 @@ -90,6 +141,21 @@ static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data) } }
+static void tdx_check_attributes(uint64_t attributes) +{ + struct kvm_tdx_capabilities *tdx_cap; + + tdx_cap = tdx_read_capabilities(); + + /* TDX spec: any bits 0 in attrs_fixed0 must be 0 in attributes */ + ASSERT_EQ(attributes & ~tdx_cap->attrs_fixed0, 0); + + /* TDX spec: any bits 1 in attrs_fixed1 must be 1 in attributes */ + ASSERT_EQ(attributes & tdx_cap->attrs_fixed1, tdx_cap->attrs_fixed1); + + free(tdx_cap); +} + static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes) { const struct kvm_cpuid2 *cpuid; @@ -100,6 +166,9 @@ static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes) cpuid = kvm_get_supported_cpuid();
memcpy(&init_vm.cpuid, cpuid, kvm_cpuid2_size(cpuid->nent)); + + tdx_check_attributes(attributes); + init_vm.attributes = attributes;
tdx_apply_cpuid_restrictions(&init_vm.cpuid);
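As a worked example of the fixed-bit rule checked above, using made-up values: with attrs_fixed0 = ~0x4 (bit 2 must be 0) and attrs_fixed1 = 0x1 (bit 0 must be 1), attributes = 0x1 passes both ASSERT_EQs, attributes = 0x5 fails the fixed0 check (0x5 & ~attrs_fixed0 == 0x4), and attributes = 0x0 fails the fixed1 check (0x0 & attrs_fixed1 != attrs_fixed1).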
Some SSE instructions assume a 16-byte aligned stack, and GCC compiles assuming the stack is aligned: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838. This combination results in a #GP in guests.
Adding this compiler flag will generate an alternate prologue and epilogue to realign the runtime stack, which makes selftest code slower and bigger, but this is okay since we do not need selftest code to be extremely performant.
Similar issue discussed at https://lore.kernel.org/all/CAGtprH9yKvuaF5yruh3BupQe4BxDGiBQk3ExtY2m39yP-tp...
Signed-off-by: Ackerley Tng ackerleytng@google.com
---
 tools/testing/selftests/kvm/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 317927d9c55bd..5f9cc1e6ee67e 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -205,7 +205,7 @@ LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include
 else
 LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
 endif
-CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
+CFLAGS += -mstackrealign -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
	-fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
	-I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
	-I$(<D) -Iinclude/$(UNAME_M) -I ../rseq -I.. $(EXTRA_CFLAGS) \
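For reference, a minimal sketch (not taken from this series) of the kind of guest code that trips over a misaligned stack: GCC may copy the 16-byte local below through a stack slot using movaps, and movaps with a misaligned memory operand raises #GP.

	#include <stdint.h>

	struct v128 { uint64_t lo, hi; };

	void guest_copy(struct v128 *dst, const struct v128 *src)
	{
		/* May be compiled to an SSE movaps via a 16-byte stack slot,
		 * which assumes rsp was 16-byte aligned at function entry.
		 */
		struct v128 tmp = *src;

		*dst = tmp;
	}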
On Sat, Jan 21, 2023, Ackerley Tng wrote:
Some SSE instructions assume a 16-byte aligned stack, and GCC compiles assuming the stack is aligned: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838. This combination results in a #GP in guests.
Adding this compiler flag will generate an alternate prologue and epilogue to realign the runtime stack, which makes selftest code slower and bigger, but this is okay since we do not need selftest code to be extremely performant.
Huh, I had completely forgotten that this is why SSE is problematic. I ran into this with the base UPM selftests and just disabled SSE. /facepalm.
We should figure out exactly what is causing a misaligned stack. As you've noted, the x86-64 ABI requires a 16-byte aligned RSP. Unless I'm misreading vm_arch_vcpu_add(), the starting stack should be page aligned, which means something is causing the stack to become unaligned at runtime. I'd rather hunt down that something than paper over it by having the compiler force realignment.
Similar issue discussed at https://lore.kernel.org/all/CAGtprH9yKvuaF5yruh3BupQe4BxDGiBQk3ExtY2m39yP-tp...
Signed-off-by: Ackerley Tng ackerleytng@google.com
tools/testing/selftests/kvm/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 317927d9c55bd..5f9cc1e6ee67e 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -205,7 +205,7 @@ LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include else LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include endif -CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ +CFLAGS += -mstackrealign -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \ -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
-I$(<D) -Iinclude/$(UNAME_M) -I ../rseq -I.. $(EXTRA_CFLAGS) \
2.39.0.246.g2a6d74b583-goog
On Fri, Jan 20, 2023 at 4:28 PM Sean Christopherson seanjc@google.com wrote:
On Sat, Jan 21, 2023, Ackerley Tng wrote:
Some SSE instructions assume a 16-byte aligned stack, and GCC compiles assuming the stack is aligned: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838. This combination results in a #GP in guests.
Adding this compiler flag will generate an alternate prologue and epilogue to realign the runtime stack, which makes selftest code slower and bigger, but this is okay since we do not need selftest code to be extremely performant.
Huh, I had completely forgotten that this is why SSE is problematic. I ran into this with the base UPM selftests and just disabled SSE. /facepalm.
We should figure out exactly what is causing a misaligned stack. As you've noted, the x86-64 ABI requires a 16-byte aligned RSP. Unless I'm misreading vm_arch_vcpu_add(), the starting stack should be page aligned, which means something is causing the stack to become unaligned at runtime. I'd rather hunt down that something than paper over it by having the compiler force realignment.
Isn't it due to the 32-bit execution part of the guest code at boot time? Any push/pop of 32-bit registers might make the stack 16-byte unaligned.
Similar issue discussed at https://lore.kernel.org/all/CAGtprH9yKvuaF5yruh3BupQe4BxDGiBQk3ExtY2m39yP-tp...
Signed-off-by: Ackerley Tng ackerleytng@google.com
tools/testing/selftests/kvm/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 317927d9c55bd..5f9cc1e6ee67e 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -205,7 +205,7 @@ LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include else LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include endif -CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ +CFLAGS += -mstackrealign -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \ -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \ -I$(<D) -Iinclude/$(UNAME_M) -I ../rseq -I.. $(EXTRA_CFLAGS) \ -- 2.39.0.246.g2a6d74b583-goog
On 23.01.2023 19:30, Erdem Aktas wrote:
On Fri, Jan 20, 2023 at 4:28 PM Sean Christopherson seanjc@google.com wrote:
On Sat, Jan 21, 2023, Ackerley Tng wrote:
Some SSE instructions assume a 16-byte aligned stack, and GCC compiles assuming the stack is aligned: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838. This combination results in a #GP in guests.
Adding this compiler flag will generate an alternate prologue and epilogue to realign the runtime stack, which makes selftest code slower and bigger, but this is okay since we do not need selftest code to be extremely performant.
Huh, I had completely forgotten that this is why SSE is problematic. I ran into this with the base UPM selftests and just disabled SSE. /facepalm.
We should figure out exactly what is causing a misaligned stack. As you've noted, the x86-64 ABI requires a 16-byte aligned RSP. Unless I'm misreading vm_arch_vcpu_add(), the starting stack should be page aligned, which means something is causing the stack to become unaligned at runtime. I'd rather hunt down that something than paper over it by having the compiler force realignment.
Is not it due to the 32bit execution part of the guest code at boot time. Any push/pop of 32bit registers might make it a 16-byte unaligned stack.
32-bit stack needs to be 16-byte aligned, too (at function call boundaries) - see [1] chapter 2.2.2 "The Stack Frame"
Thanks, Maciej
On Mon, Jan 23, 2023, Maciej S. Szmigiero wrote:
On 23.01.2023 19:30, Erdem Aktas wrote:
On Fri, Jan 20, 2023 at 4:28 PM Sean Christopherson seanjc@google.com wrote:
On Sat, Jan 21, 2023, Ackerley Tng wrote:
Some SSE instructions assume a 16-byte aligned stack, and GCC compiles assuming the stack is aligned: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838. This combination results in a #GP in guests.
Adding this compiler flag will generate an alternate prologue and epilogue to realign the runtime stack, which makes selftest code slower and bigger, but this is okay since we do not need selftest code to be extremely performant.
Huh, I had completely forgotten that this is why SSE is problematic. I ran into this with the base UPM selftests and just disabled SSE. /facepalm.
We should figure out exactly what is causing a misaligned stack. As you've noted, the x86-64 ABI requires a 16-byte aligned RSP. Unless I'm misreading vm_arch_vcpu_add(), the starting stack should be page aligned, which means something is causing the stack to become unaligned at runtime. I'd rather hunt down that something than paper over it by having the compiler force realignment.
Is not it due to the 32bit execution part of the guest code at boot time. Any push/pop of 32bit registers might make it a 16-byte unaligned stack.
32-bit stack needs to be 16-byte aligned, too (at function call boundaries) - see [1] chapter 2.2.2 "The Stack Frame"
And this showing up in the non-TDX selftests rules that out as the sole problem; the selftests stuff 64-bit mode, i.e. don't have 32-bit boot code.
On Mon, Jan 23, 2023 at 10:53 AM Sean Christopherson seanjc@google.com wrote:
Thanks Maciej and Sean for the clarification. I was suspecting the hand-coded assembly that we have for the TDX tests, but the fact that this also happens in the non-TDX selftests rules that out.
On Mon, Jan 23, 2023, Erdem Aktas wrote:
Not necessarily, it could be both. Goofs in the handcoded assembly and PEBKAC on my end :-)
I figured it out!
GCC assumes that the stack is 16-byte aligned **before** the call instruction. Since call pushes rip to the stack, GCC will compile code assuming that on entrance to the function, the stack is -8 from a 16-byte aligned address.
Since for TDs we ljmp to the guest code's entry point, no call instruction has pushed a return rip onto the stack, so the stack is 16-byte aligned when the guest code starts running, instead of 16-byte aligned minus 8 as GCC expects.
For VMs, we set rip to a function pointer, and the VM also starts running with a 16-byte aligned stack.
To fix this, I propose that in vm_arch_vcpu_add(), we align the allocated stack address and then subtract 8 from that:
@@ -573,10 +573,13 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
 	vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
 	vcpu_setup(vm, vcpu);
 
+	stack_vaddr += (DEFAULT_STACK_PGS * getpagesize());
+	stack_vaddr = ALIGN_DOWN(stack_vaddr, 16) - 8;
+
 	/* Setup guest general purpose registers */
 	vcpu_regs_get(vcpu, &regs);
 	regs.rflags = regs.rflags | 0x2;
-	regs.rsp = stack_vaddr + (DEFAULT_STACK_PGS * getpagesize());
+	regs.rsp = stack_vaddr;
 	regs.rip = (unsigned long) guest_code;
 	vcpu_regs_set(vcpu, &regs);
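To make the proposed adjustment concrete, here is a small standalone sketch with hypothetical numbers (ALIGN_DOWN is a local stand-in for the selftest helper, not selftest code itself) showing what it does to the initial RSP:

```c
#include <stdint.h>
#include <stdio.h>

#define ALIGN_DOWN(x, a)	((x) & ~((uint64_t)(a) - 1))

int main(void)
{
	/* Hypothetical values: a 4-page stack starting at a page-aligned GVA. */
	uint64_t stack_vaddr = 0x40000;
	uint64_t stack_pages = 4, page_size = 0x1000;

	/* Current behaviour: RSP at the top of the stack, 16-byte aligned. */
	uint64_t rsp_old = stack_vaddr + stack_pages * page_size;

	/* Proposed: mimic the push a 'call' would do, so RSP % 16 == 8 at entry. */
	uint64_t rsp_new = ALIGN_DOWN(rsp_old, 16) - 8;

	printf("old rsp %% 16 = %llu, new rsp %% 16 = %llu\n",
	       (unsigned long long)(rsp_old % 16),
	       (unsigned long long)(rsp_new % 16));	/* prints 0 and 8 */
	return 0;
}
```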
On 15.02.2023 01:50, Ackerley Tng wrote:
To fix this, I propose that in vm_arch_vcpu_add(), we align the allocated stack address and then subtract 8 from that:
Note that if this code is ever used to launch a vCPU with 32-bit entry point it will need to subtract 4 bytes instead of 8 bytes.
I think it would be worthwhile to at least place a comment mentioning this near the stack aligning expression so nobody misses this fact.
Thanks, Maciej
On Wed, Feb 15, 2023, Maciej S. Szmigiero wrote:
Heh, I've no objection to a comment, though this really is the tip of the iceberg if we want to add 32-bit guest support in selftests.
On Wed, Feb 15, 2023, Ackerley Tng wrote:
To fix this, I propose that in vm_arch_vcpu_add(), we align the allocated stack address and then subtract 8 from that:
stack_vaddr += (DEFAULT_STACK_PGS * getpagesize());
stack_vaddr = ALIGN_DOWN(stack_vaddr, 16) - 8;
The ALIGN_DOWN should be unnecessary, we've got larger issues if getpagesize() isn't 16-byte aligned and/or if __vm_vaddr_alloc() returns anything but a page-aligned address. Maybe add a TEST_ASSERT() sanity check that stack_vaddr is page-aligned at this point?
And in addition to the comment suggested by Maciej, can you also add a comment explaining the -8 adjust? Yeah, someone can go read the changelog, but I think this is worth explicitly documenting in code.
Lastly, can you post it as a standalone patch?
Many thanks!
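One possible shape for the requested comment and sanity check, sketched against the proposal above (illustrative only, in the style of the diff above; the actual change landed in the standalone patch referenced in the reply below):

```c
	stack_vaddr += (DEFAULT_STACK_PGS * getpagesize());

	/* __vm_vaddr_alloc() hands back a page-aligned address. */
	TEST_ASSERT(!(stack_vaddr % getpagesize()),
		    "stack address should be page aligned");

	/*
	 * A 'call' into guest_code would have pushed the return rip,
	 * leaving RSP at 16-byte aligned minus 8 on function entry.
	 * Since we jump to guest_code directly, mimic that here.
	 * (A 32-bit entry point would need -4 instead of -8.)
	 */
	stack_vaddr -= 8;
```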
Lastly, can you post it as a standalone patch?
Thanks Maciej and Sean, I've made the changes you requested and posted it as a standalone patch at https://lore.kernel.org/lkml/32866e5d00174697730d6231d2fb81f6b8d98c8a.167665...
From: Erdem Aktas erdemaktas@google.com
Adding a test to verify TDX lifecycle by creating a TD and running a dummy TDG.VP.VMCALL <Instruction.IO> inside it.
Signed-off-by: Erdem Aktas erdemaktas@google.com Signed-off-by: Ryan Afranji afranji@google.com Signed-off-by: Sagi Shahar sagis@google.com Co-developed-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- Changes RFCv2 -> RFCv3 + Add gitignore for tdx_vm_tests binary + Thanks to Isaku for the comment about saving rbp separately for gcc! In this revision, the assembly that wraps tdcall has been mostly taken from the kernel’s implementation, which would be familiar to you! + TDX test-related code is now grouped in tools/testing/selftests/kvm/include/x86_64/tdx/test_util.{c,h} and test-related functions and macros are prefixed with tdx_test to indicate its purpose. --- tools/testing/selftests/kvm/.gitignore | 1 + tools/testing/selftests/kvm/Makefile | 4 + .../selftests/kvm/include/x86_64/tdx/tdcall.h | 35 ++++++++ .../selftests/kvm/include/x86_64/tdx/tdx.h | 12 +++ .../kvm/include/x86_64/tdx/test_util.h | 52 +++++++++++ .../selftests/kvm/lib/x86_64/tdx/tdcall.S | 90 +++++++++++++++++++ .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 ++++++ .../selftests/kvm/lib/x86_64/tdx/test_util.c | 34 +++++++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 45 ++++++++++ 9 files changed, 300 insertions(+) create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore index 813e7610619d9..370d6430b32b4 100644 --- a/tools/testing/selftests/kvm/.gitignore +++ b/tools/testing/selftests/kvm/.gitignore @@ -65,6 +65,7 @@ /x86_64/xss_msr_test /x86_64/vmx_pmu_caps_test /x86_64/triple_fault_event_test +/x86_64/tdx_vm_tests /access_tracking_perf_test /demand_paging_test /dirty_log_test diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 5f9cc1e6ee67e..9f289322a4933 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -61,6 +61,9 @@ LIBKVM_x86_64 += lib/x86_64/vmx.c LIBKVM_x86_64 += lib/x86_64/sev.c LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S +LIBKVM_x86_64 += lib/x86_64/tdx/tdcall.S +LIBKVM_x86_64 += lib/x86_64/tdx/tdx.c +LIBKVM_x86_64 += lib/x86_64/tdx/test_util.c
LIBKVM_aarch64 += lib/aarch64/gic.c LIBKVM_aarch64 += lib/aarch64/gic_v3.c @@ -148,6 +151,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test TEST_GEN_PROGS_x86_64 += steal_time TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test TEST_GEN_PROGS_x86_64 += system_counter_offset_test +TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
# Compiled outputs used by test targets TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h new file mode 100644 index 0000000000000..78001bfec9c8d --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Adapted from arch/x86/include/asm/shared/tdx.h */ + +#ifndef SELFTESTS_TDX_TDCALL_H +#define SELFTESTS_TDX_TDCALL_H + +#include <linux/bits.h> +#include <linux/types.h> + +#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0 +#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1 + +#define TDX_HCALL_HAS_OUTPUT BIT(0) + +#define TDX_HYPERCALL_STANDARD 0 + +/* + * Used in __tdx_hypercall() to pass down and get back registers' values of + * the TDCALL instruction when requesting services from the VMM. + * + * This is a software only structure and not part of the TDX module/VMM ABI. + */ +struct tdx_hypercall_args { + u64 r10; + u64 r11; + u64 r12; + u64 r13; + u64 r14; + u64 r15; +}; + +/* Used to request services from the VMM */ +u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags); + +#endif // SELFTESTS_TDX_TDCALL_H diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h new file mode 100644 index 0000000000000..a7161efe4ee2e --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTEST_TDX_TDX_H +#define SELFTEST_TDX_TDX_H + +#include <stdint.h> + +#define TDG_VP_VMCALL_INSTRUCTION_IO 30 + +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, + uint64_t write, uint64_t *data); + +#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h new file mode 100644 index 0000000000000..b570b6d978ff1 --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h @@ -0,0 +1,52 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTEST_TDX_TEST_UTIL_H +#define SELFTEST_TDX_TEST_UTIL_H + +#include <stdbool.h> + +#include "tdcall.h" + +#define TDX_TEST_SUCCESS_PORT 0x30 +#define TDX_TEST_SUCCESS_SIZE 4 + +/** + * Assert that tdx_test_success() was called in the guest. + */ +#define TDX_TEST_ASSERT_SUCCESS(VCPU) \ + (TEST_ASSERT( \ + ((VCPU)->run->exit_reason == KVM_EXIT_IO) && \ + ((VCPU)->run->io.port == TDX_TEST_SUCCESS_PORT) && \ + ((VCPU)->run->io.size == TDX_TEST_SUCCESS_SIZE) && \ + ((VCPU)->run->io.direction == \ + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE), \ + "Unexpected exit values while waiting for test completion: %u (%s) %d %d %d\n", \ + (VCPU)->run->exit_reason, \ + exit_reason_str((VCPU)->run->exit_reason), \ + (VCPU)->run->io.port, (VCPU)->run->io.size, \ + (VCPU)->run->io.direction)) + +/** + * Run a test in a new process. + * + * There might be multiple tests we are running and if one test fails, it will + * prevent the subsequent tests to run due to how tests are failing with + * TEST_ASSERT function. The run_in_new_process function will run a test in a + * new process context and wait for it to finish or fail to prevent TEST_ASSERT + * to kill the main testing process. + */ +void run_in_new_process(void (*func)(void)); + +/** + * Verify that the TDX is supported by KVM. + */ +bool is_tdx_enabled(void); + +/** + * Report test success to userspace. 
+ * + * Use TDX_TEST_ASSERT_SUCCESS() to assert that this function was called in the + * guest. + */ +void tdx_test_success(void); + +#endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S new file mode 100644 index 0000000000000..df9c1ed4bb2d1 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Adapted from arch/x86/coco/tdx/tdcall.S */ + +#define TDX_HYPERCALL_r10 0 /* offsetof(struct tdx_hypercall_args, r10) */ +#define TDX_HYPERCALL_r11 8 /* offsetof(struct tdx_hypercall_args, r11) */ +#define TDX_HYPERCALL_r12 16 /* offsetof(struct tdx_hypercall_args, r12) */ +#define TDX_HYPERCALL_r13 24 /* offsetof(struct tdx_hypercall_args, r13) */ +#define TDX_HYPERCALL_r14 32 /* offsetof(struct tdx_hypercall_args, r14) */ +#define TDX_HYPERCALL_r15 40 /* offsetof(struct tdx_hypercall_args, r15) */ + +/* + * Bitmasks of exposed registers (with VMM). + */ +#define TDX_R10 0x400 +#define TDX_R11 0x800 +#define TDX_R12 0x1000 +#define TDX_R13 0x2000 +#define TDX_R14 0x4000 +#define TDX_R15 0x8000 + +#define TDX_HCALL_HAS_OUTPUT 0x1 + +/* + * These registers are clobbered to hold arguments for each + * TDVMCALL. They are safe to expose to the VMM. + * Each bit in this mask represents a register ID. Bit field + * details can be found in TDX GHCI specification, section + * titled "TDCALL [TDG.VP.VMCALL] leaf". + */ +#define TDVMCALL_EXPOSE_REGS_MASK ( TDX_R10 | TDX_R11 | \ + TDX_R12 | TDX_R13 | \ + TDX_R14 | TDX_R15 ) + +.code64 +.section .text + +.globl __tdx_hypercall +.type __tdx_hypercall, @function +__tdx_hypercall: + /* Set up stack frame */ + push %rbp + movq %rsp, %rbp + + /* Save callee-saved GPRs as mandated by the x86_64 ABI */ + push %r15 + push %r14 + push %r13 + push %r12 + + /* Mangle function call ABI into TDCALL ABI: */ + /* Set TDCALL leaf ID (TDVMCALL (0)) in RAX */ + xor %eax, %eax + + /* Copy hypercall registers from arg struct: */ + movq TDX_HYPERCALL_r10(%rdi), %r10 + movq TDX_HYPERCALL_r11(%rdi), %r11 + movq TDX_HYPERCALL_r12(%rdi), %r12 + movq TDX_HYPERCALL_r13(%rdi), %r13 + movq TDX_HYPERCALL_r14(%rdi), %r14 + movq TDX_HYPERCALL_r15(%rdi), %r15 + + movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx + + tdcall + + /* TDVMCALL leaf return code is in R10 */ + movq %r10, %rax + + /* Copy hypercall result registers to arg struct if needed */ + testq $TDX_HCALL_HAS_OUTPUT, %rsi + jz .Lout + + movq %r10, TDX_HYPERCALL_r10(%rdi) + movq %r11, TDX_HYPERCALL_r11(%rdi) + movq %r12, TDX_HYPERCALL_r12(%rdi) + movq %r13, TDX_HYPERCALL_r13(%rdi) + movq %r14, TDX_HYPERCALL_r14(%rdi) + movq %r15, TDX_HYPERCALL_r15(%rdi) +.Lout: + /* Restore callee-saved GPRs as mandated by the x86_64 ABI */ + pop %r12 + pop %r13 + pop %r14 + pop %r15 + + pop %rbp + ret + +/* Disable executable stack */ +.section .note.GNU-stack,"",%progbits diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c new file mode 100644 index 0000000000000..c2414523487a7 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -0,0 +1,27 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "tdx/tdcall.h" +#include "tdx/tdx.h" + +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, + uint64_t write, uint64_t *data) +{ + uint64_t ret; + struct tdx_hypercall_args args = { + .r10 = TDX_HYPERCALL_STANDARD, + .r11 = TDG_VP_VMCALL_INSTRUCTION_IO, + .r12 = size, + .r13 = write, + 
.r14 = port, + }; + + if (write) + args.r15 = *data; + + ret = __tdx_hypercall(&args, write ? 0 : TDX_HCALL_HAS_OUTPUT); + + if (!write) + *data = args.r11; + + return ret; +} diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c new file mode 100644 index 0000000000000..6905d0ca38774 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c @@ -0,0 +1,34 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <stdbool.h> +#include <stdint.h> +#include <stdlib.h> +#include <sys/wait.h> +#include <unistd.h> + +#include "kvm_util_base.h" +#include "tdx/tdx.h" +#include "tdx/test_util.h" + +void run_in_new_process(void (*func)(void)) +{ + if (fork() == 0) { + func(); + exit(0); + } + wait(NULL); +} + +bool is_tdx_enabled(void) +{ + return !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_TDX_VM)); +} + +void tdx_test_success(void) +{ + uint64_t code = 0; + + tdg_vp_vmcall_instruction_io(TDX_TEST_SUCCESS_PORT, + TDX_TEST_SUCCESS_SIZE, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code); +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c new file mode 100644 index 0000000000000..a18d1c9d60264 --- /dev/null +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <signal.h> +#include "kvm_util_base.h" +#include "tdx/tdx_util.h" +#include "tdx/test_util.h" +#include "test_util.h" + +void guest_code_lifecycle(void) +{ + tdx_test_success(); +} + +void verify_td_lifecycle(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_lifecycle); + td_finalize(vm); + + printf("Verifying TD lifecycle:\n"); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + +int main(int argc, char **argv) +{ + setbuf(stdout, NULL); + + if (!is_tdx_enabled()) { + print_skip("TDX is not supported by the KVM"); + exit(KSFT_SKIP); + } + + run_in_new_process(&verify_td_lifecycle); + + return 0; +} -- 2.39.0.246.g2a6d74b583-goog
From: Sagi Shahar sagis@google.com
The test checks report_fatal_error functionality.
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/x86_64/tdx/tdx.h | 3 ++ .../kvm/include/x86_64/tdx/test_util.h | 19 ++++++++ .../selftests/kvm/lib/x86_64/tdx/tdx.c | 18 ++++++++ .../selftests/kvm/lib/x86_64/tdx/test_util.c | 10 +++++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 44 +++++++++++++++++++ 5 files changed, 94 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index a7161efe4ee2e..28959bdb07628 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -4,9 +4,12 @@
#include <stdint.h>
+#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003 + #define TDG_VP_VMCALL_INSTRUCTION_IO 30
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h index b570b6d978ff1..6d69921136bd2 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h @@ -49,4 +49,23 @@ bool is_tdx_enabled(void); */ void tdx_test_success(void);
+/** + * Report an error with @error_code to userspace. + * + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution + * is not expected to continue beyond this point. + */ +void tdx_test_fatal(uint64_t error_code); + +/** + * Report an error with @error_code to userspace. + * + * @data_gpa may point to an optional shared guest memory holding the error + * string. + * + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution + * is not expected to continue beyond this point. + */ +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa); + #endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index c2414523487a7..e8c399f2277cf 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only
+#include <string.h> + #include "tdx/tdcall.h" #include "tdx/tdx.h"
@@ -25,3 +27,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
return ret; } + +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa) +{ + struct tdx_hypercall_args args; + + memset(&args, 0, sizeof(struct tdx_hypercall_args)); + + if (data_gpa) + error_code |= 0x8000000000000000; + + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR; + args.r12 = error_code; + args.r13 = data_gpa; + + __tdx_hypercall(&args, 0); +} diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c index 6905d0ca38774..7f3cd8089cea3 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c @@ -32,3 +32,13 @@ void tdx_test_success(void) TDX_TEST_SUCCESS_SIZE, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code); } + +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa) +{ + tdg_vp_vmcall_report_fatal_error(error_code, data_gpa); +} + +void tdx_test_fatal(uint64_t error_code) +{ + tdx_test_fatal_with_data(error_code, 0); +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index a18d1c9d60264..627d60b573bb6 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -2,6 +2,7 @@
#include <signal.h> #include "kvm_util_base.h" +#include "tdx/tdx.h" #include "tdx/tdx_util.h" #include "tdx/test_util.h" #include "test_util.h" @@ -30,6 +31,48 @@ void verify_td_lifecycle(void) printf("\t ... PASSED\n"); }
+void guest_code_report_fatal_error(void) +{ + uint64_t err; + + /* + * Note: err should follow the GHCI spec definition: + * bits 31:0 should be set to 0. + * bits 62:32 are used for TD-specific extended error code. + * bit 63 is used to mark additional information in shared memory. + */ + err = 0x0BAAAAAD00000000; + if (err) + tdx_test_fatal(err); + + tdx_test_success(); +} +void verify_report_fatal_error(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_report_fatal_error); + td_finalize(vm); + + printf("Verifying report_fatal_error:\n"); + + vcpu_run(vcpu); + ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + ASSERT_EQ(vcpu->run->system_event.ndata, 3); + ASSERT_EQ(vcpu->run->system_event.data[0], TDG_VP_VMCALL_REPORT_FATAL_ERROR); + ASSERT_EQ(vcpu->run->system_event.data[1], 0x0BAAAAAD00000000); + ASSERT_EQ(vcpu->run->system_event.data[2], 0); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -40,6 +83,7 @@ int main(int argc, char **argv) }
run_in_new_process(&verify_td_lifecycle); + run_in_new_process(&verify_report_fatal_error);
return 0; }
From: Erdem Aktas erdemaktas@google.com
Verifies TDVMCALL<INSTRUCTION.IO> READ and WRITE operations.
Signed-off-by: Erdem Aktas erdemaktas@google.com Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../kvm/include/x86_64/tdx/test_util.h | 34 ++++++++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 82 +++++++++++++++++++ 2 files changed, 116 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h index 6d69921136bd2..95a5d5be7f0bf 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h @@ -9,6 +9,40 @@ #define TDX_TEST_SUCCESS_PORT 0x30 #define TDX_TEST_SUCCESS_SIZE 4
+/** + * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was + * called in the guest. + */ +#define TDX_TEST_ASSERT_IO(VCPU, PORT, SIZE, DIR) \ + do { \ + TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_IO, \ + "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n", \ + (VCPU)->run->exit_reason, \ + exit_reason_str((VCPU)->run->exit_reason)); \ + \ + TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_IO) && \ + ((VCPU)->run->io.port == (PORT)) && \ + ((VCPU)->run->io.size == (SIZE)) && \ + ((VCPU)->run->io.direction == (DIR)), \ + "Got unexpected IO exit values: %u (%s) %d %d %d\n", \ + (VCPU)->run->exit_reason, \ + exit_reason_str((VCPU)->run->exit_reason), \ + (VCPU)->run->io.port, (VCPU)->run->io.size, \ + (VCPU)->run->io.direction); \ + } while (0) + +/** + * Check and report if there was some failure in the guest, either an exception + * like a triple fault, or if a tdx_test_fatal() was hit. + */ +#define TDX_TEST_CHECK_GUEST_FAILURE(VCPU) \ + do { \ + if ((VCPU)->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) \ + TEST_FAIL("Guest reported error. error code: %lld (0x%llx)\n", \ + (VCPU)->run->system_event.data[1], \ + (VCPU)->run->system_event.data[1]); \ + } while (0) + /** * Assert that tdx_test_success() was called in the guest. */ diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index 627d60b573bb6..885c2b6bb1b96 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -2,6 +2,7 @@
#include <signal.h> #include "kvm_util_base.h" +#include "tdx/tdcall.h" #include "tdx/tdx.h" #include "tdx/tdx_util.h" #include "tdx/test_util.h" @@ -73,6 +74,86 @@ void verify_report_fatal_error(void) printf("\t ... PASSED\n"); }
+#define TDX_IOEXIT_TEST_PORT 0x50 + +/* + * Verifies IO functionality by writing a |value| to a predefined port. + * Verifies that the read value is |value| + 1 from the same port. + * If all the tests are passed then write a value to port TDX_TEST_PORT + */ +void guest_ioexit(void) +{ + uint64_t data_out, data_in, delta; + uint64_t ret; + + data_out = 0xAB; + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &data_out); + if (ret) + tdx_test_fatal(ret); + + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1, + TDG_VP_VMCALL_INSTRUCTION_IO_READ, + &data_in); + if (ret) + tdx_test_fatal(ret); + + delta = data_in - data_out; + if (delta != 1) + tdx_test_fatal(ret); + + tdx_test_success(); +} + +void verify_td_ioexit(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + uint32_t port_data; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_ioexit); + td_finalize(vm); + + printf("Verifying TD IO Exit:\n"); + + /* Wait for guest to do a IO write */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + printf("\t ... IO WRITE: OK\n"); + + /* + * Wait for the guest to do a IO read. Provide the previous written data + * + 1 back to the guest + */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1, + TDG_VP_VMCALL_INSTRUCTION_IO_READ); + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1; + + printf("\t ... IO READ: OK\n"); + + /* + * Wait for the guest to complete execution successfully. The read + * value is checked within the guest. + */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + printf("\t ... IO verify read/write values: OK\n"); + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -84,6 +165,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_lifecycle); run_in_new_process(&verify_report_fatal_error); + run_in_new_process(&verify_td_ioexit);
return 0; }
From: Sagi Shahar sagis@google.com
The test reads CPUID values from inside a TD VM and compares them to expected values.
The test targets CPUID values which are virtualized as "As Configured", "As Configured (if Native)", "Calculated", "Fixed" and "Native" according to the TDX spec.
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- Changes RFCv2 -> RFCv3 + Manually inlined cpuid function in cpuid test. This highlights the purpose of this test - to test the result of the cpuid instruction. + Replace find_cpuid_entry with kvm_get_supported_cpuid_entry from tools/testing/selftests/kvm/lib/x86_64/processor.c --- .../kvm/include/x86_64/tdx/test_util.h | 9 ++ .../selftests/kvm/lib/x86_64/tdx/test_util.c | 11 ++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 106 ++++++++++++++++++ 3 files changed, 126 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h index 95a5d5be7f0bf..af0ddbfe8d71b 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h @@ -9,6 +9,9 @@ #define TDX_TEST_SUCCESS_PORT 0x30 #define TDX_TEST_SUCCESS_SIZE 4
+#define TDX_TEST_REPORT_PORT 0x31 +#define TDX_TEST_REPORT_SIZE 4 + /** * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was * called in the guest. @@ -102,4 +105,10 @@ void tdx_test_fatal(uint64_t error_code); */ void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
+/** + * Report a 32 bit value from the guest to user space using TDG.VP.VMCALL + * <Instruction.IO> call. Data is reported on port TDX_TEST_REPORT_PORT. + */ +uint64_t tdx_test_report_to_user_space(uint32_t data); + #endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c index 7f3cd8089cea3..55c5a1e634df7 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c @@ -42,3 +42,14 @@ void tdx_test_fatal(uint64_t error_code) { tdx_test_fatal_with_data(error_code, 0); } + +uint64_t tdx_test_report_to_user_space(uint32_t data) +{ + /* Upcast data to match tdg_vp_vmcall_instruction_io signature */ + uint64_t data_64 = data; + + return tdg_vp_vmcall_instruction_io(TDX_TEST_REPORT_PORT, + TDX_TEST_REPORT_SIZE, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &data_64); +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index 885c2b6bb1b96..b6072769967fa 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -2,6 +2,7 @@
#include <signal.h> #include "kvm_util_base.h" +#include "processor.h" #include "tdx/tdcall.h" #include "tdx/tdx.h" #include "tdx/tdx_util.h" @@ -154,6 +155,110 @@ void verify_td_ioexit(void) printf("\t ... PASSED\n"); }
+/* + * Verifies CPUID functionality by reading CPUID values in guest. The guest + * will then send the values to userspace using an IO write to be checked + * against the expected values. + */ +void guest_code_cpuid(void) +{ + uint64_t err; + uint32_t ebx, ecx; + + /* Read CPUID leaf 0x1 */ + asm volatile ( + "cpuid" + : "=b" (ebx), "=c" (ecx) + : "a" (0x1) + : "edx"); + + err = tdx_test_report_to_user_space(ebx); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_to_user_space(ecx); + if (err) + tdx_test_fatal(err); + + tdx_test_success(); +} + +void verify_td_cpuid(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + uint32_t ebx, ecx; + const struct kvm_cpuid_entry2 *cpuid_entry; + uint32_t guest_clflush_line_size; + uint32_t guest_max_addressable_ids, host_max_addressable_ids; + uint32_t guest_sse3_enabled; + uint32_t guest_fma_enabled; + uint32_t guest_initial_apic_id; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_cpuid); + td_finalize(vm); + + printf("Verifying TD CPUID:\n"); + + /* Wait for guest to report ebx value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ebx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + /* Wait for guest to report either ecx value or error */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ecx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + /* Wait for guest to complete execution */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + /* Verify the CPUID values we got from the guest. */ + printf("\t ... Verifying CPUID values from guest\n"); + + /* Get KVM CPUIDs for reference */ + cpuid_entry = kvm_get_supported_cpuid_entry(1); + TEST_ASSERT(cpuid_entry, "CPUID entry missing\n"); + + host_max_addressable_ids = (cpuid_entry->ebx >> 16) & 0xFF; + + guest_sse3_enabled = ecx & 0x1; // Native + guest_clflush_line_size = (ebx >> 8) & 0xFF; // Fixed + guest_max_addressable_ids = (ebx >> 16) & 0xFF; // As Configured + guest_fma_enabled = (ecx >> 12) & 0x1; // As Configured (if Native) + guest_initial_apic_id = (ebx >> 24) & 0xFF; // Calculated + + ASSERT_EQ(guest_sse3_enabled, 1); + ASSERT_EQ(guest_clflush_line_size, 8); + ASSERT_EQ(guest_max_addressable_ids, host_max_addressable_ids); + + /* TODO: This only tests the native value. To properly test + * "As Configured (if Native)" we need to override this value + * in the TD params + */ + ASSERT_EQ(guest_fma_enabled, 1); + + /* TODO: guest_initial_apic_id is calculated based on the number of + * VCPUs in the TD. From the spec: "Virtual CPU index, starting from 0 + * and allocated sequentially on each successful TDH.VP.INIT" + * To test non-trivial values we either need a TD with multiple VCPUs + * or to pick a different calculated value. + */ + ASSERT_EQ(guest_initial_apic_id, 0); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -166,6 +271,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_td_lifecycle); run_in_new_process(&verify_report_fatal_error); run_in_new_process(&verify_td_ioexit); + run_in_new_process(&verify_td_cpuid);
return 0; } -- 2.39.0.246.g2a6d74b583-goog
From: Sagi Shahar sagis@google.com
The test calls get_td_vmcall_info from the guest and verifies the expected returned values.
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/x86_64/tdx/tdx.h | 3 + .../kvm/include/x86_64/tdx/test_util.h | 27 +++++++ .../selftests/kvm/lib/x86_64/tdx/tdx.c | 23 ++++++ .../selftests/kvm/lib/x86_64/tdx/test_util.c | 46 +++++++++++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 80 +++++++++++++++++++ 5 files changed, 179 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index 28959bdb07628..37ad16943e299 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -4,6 +4,7 @@
#include <stdint.h>
+#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
#define TDG_VP_VMCALL_INSTRUCTION_IO 30 @@ -11,5 +12,7 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa); +uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, + uint64_t *r13, uint64_t *r14);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h index af0ddbfe8d71b..8a9b6a1bec3eb 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h @@ -4,6 +4,7 @@
#include <stdbool.h>
+#include "kvm_util_base.h" #include "tdcall.h"
#define TDX_TEST_SUCCESS_PORT 0x30 @@ -111,4 +112,30 @@ void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa); */ uint64_t tdx_test_report_to_user_space(uint32_t data);
+/** + * Report a 64 bit value from the guest to user space using TDG.VP.VMCALL + * <Instruction.IO> call. + * + * Data is sent to host in 2 calls. LSB is sent (and needs to be read) first. + */ +uint64_t tdx_test_send_64bit(uint64_t port, uint64_t data); + +/** + * Report a 64 bit value from the guest to user space using TDG.VP.VMCALL + * <Instruction.IO> call. Data is reported on port TDX_TEST_REPORT_PORT. + */ +uint64_t tdx_test_report_64bit_to_user_space(uint64_t data); + +/** + * Read a 64 bit value from the guest in user space, sent using + * tdx_test_send_64bit(). + */ +uint64_t tdx_test_read_64bit(struct kvm_vcpu *vcpu, uint64_t port); + +/** + * Read a 64 bit value from the guest in user space, sent using + * tdx_test_report_64bit_to_user_space. + */ +uint64_t tdx_test_read_64bit_report_from_guest(struct kvm_vcpu *vcpu); + #endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index e8c399f2277cf..7254d61515db2 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -43,3 +43,26 @@ void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
__tdx_hypercall(&args, 0); } + +uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, + uint64_t *r13, uint64_t *r14) +{ + uint64_t ret; + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_GET_TD_VM_CALL_INFO, + .r12 = 0, + }; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (r11) + *r11 = args.r11; + if (r12) + *r12 = args.r12; + if (r13) + *r13 = args.r13; + if (r14) + *r14 = args.r14; + + return ret; +} diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c index 55c5a1e634df7..3ae651cd5fac4 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c @@ -7,6 +7,7 @@ #include <unistd.h>
#include "kvm_util_base.h" +#include "tdx/tdcall.h" #include "tdx/tdx.h" #include "tdx/test_util.h"
@@ -53,3 +54,48 @@ uint64_t tdx_test_report_to_user_space(uint32_t data) TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &data_64); } + +uint64_t tdx_test_send_64bit(uint64_t port, uint64_t data) +{ + uint64_t err; + uint64_t data_lo = data & 0xFFFFFFFF; + uint64_t data_hi = (data >> 32) & 0xFFFFFFFF; + + err = tdg_vp_vmcall_instruction_io(port, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &data_lo); + if (err) + return err; + + return tdg_vp_vmcall_instruction_io(port, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &data_hi); +} + +uint64_t tdx_test_report_64bit_to_user_space(uint64_t data) +{ + return tdx_test_send_64bit(TDX_TEST_REPORT_PORT, data); +} + +uint64_t tdx_test_read_64bit(struct kvm_vcpu *vcpu, uint64_t port) +{ + uint32_t lo, hi; + uint64_t res; + + TDX_TEST_ASSERT_IO(vcpu, port, 4, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + lo = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + vcpu_run(vcpu); + + TDX_TEST_ASSERT_IO(vcpu, port, 4, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + hi = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + res = hi; + res = (res << 32) | lo; + return res; +} + +uint64_t tdx_test_read_64bit_report_from_guest(struct kvm_vcpu *vcpu) +{ + return tdx_test_read_64bit(vcpu, TDX_TEST_REPORT_PORT); +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index b6072769967fa..188442a734dca 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -259,6 +259,85 @@ void verify_td_cpuid(void) printf("\t ... PASSED\n"); }
+/* + * Verifies get_td_vmcall_info functionality. + */ +void guest_code_get_td_vmcall_info(void) +{ + uint64_t err; + uint64_t r11, r12, r13, r14; + + err = tdg_vp_vmcall_get_td_vmcall_info(&r11, &r12, &r13, &r14); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(r11); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(r12); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(r13); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(r14); + if (err) + tdx_test_fatal(err); + + tdx_test_success(); +} + +void verify_get_td_vmcall_info(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + uint64_t r11, r12, r13, r14; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_get_td_vmcall_info); + td_finalize(vm); + + printf("Verifying TD get vmcall info:\n"); + + /* Wait for guest to report r11 value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + r11 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r12 value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + r12 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r13 value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + r13 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r14 value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + r14 = tdx_test_read_64bit_report_from_guest(vcpu); + + ASSERT_EQ(r11, 0); + ASSERT_EQ(r12, 0); + ASSERT_EQ(r13, 0); + ASSERT_EQ(r14, 0); + + /* Wait for guest to complete execution */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -272,6 +351,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_report_fatal_error); run_in_new_process(&verify_td_ioexit); run_in_new_process(&verify_td_cpuid); + run_in_new_process(&verify_get_td_vmcall_info);
return 0; }
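As a side note on the 64-bit reporting protocol added above (two 4-byte IO writes, least-significant half first), here is a tiny standalone sketch (not selftest code) of the split and reassembly the guest and host perform:

```c
#include <stdint.h>
#include <stdio.h>

/* Guest side: split a 64-bit value into two 4-byte writes, low half first. */
static void send_64bit(uint64_t data, uint32_t out[2])
{
	out[0] = (uint32_t)(data & 0xFFFFFFFF);		/* first IO write */
	out[1] = (uint32_t)((data >> 32) & 0xFFFFFFFF);	/* second IO write */
}

/* Host side: reassemble, reading the low half first as the guest sent it. */
static uint64_t read_64bit(const uint32_t in[2])
{
	uint64_t lo = in[0];
	uint64_t hi = in[1];

	return (hi << 32) | lo;
}

int main(void)
{
	uint32_t wire[2];
	uint64_t val = 0x0123456789ABCDEFULL;

	send_64bit(val, wire);
	printf("round trip ok: %d\n", read_64bit(wire) == val);	/* prints 1 */
	return 0;
}
```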
From: Sagi Shahar sagis@google.com
The test verifies IO writes of various sizes from the guest to the host.
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/x86_64/tdx/tdcall.h | 3 + .../selftests/kvm/x86_64/tdx_vm_tests.c | 91 +++++++++++++++++++ 2 files changed, 94 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h index 78001bfec9c8d..b5e94b7c48fa5 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h @@ -10,6 +10,9 @@ #define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0 #define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
+#define TDG_VP_VMCALL_SUCCESS 0x0000000000000000 +#define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000 + #define TDX_HCALL_HAS_OUTPUT BIT(0)
#define TDX_HYPERCALL_STANDARD 0 diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index 188442a734dca..ac23d1ad1e687 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -338,6 +338,96 @@ void verify_get_td_vmcall_info(void) printf("\t ... PASSED\n"); }
+#define TDX_IO_WRITES_TEST_PORT 0x51 + +/* + * Verifies IO functionality by writing values of different sizes + * to the host. + */ +void guest_io_writes(void) +{ + uint64_t byte_1 = 0xAB; + uint64_t byte_2 = 0xABCD; + uint64_t byte_4 = 0xFFABCDEF; + uint64_t ret; + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 1, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &byte_1); + if (ret) + tdx_test_fatal(ret); + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 2, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &byte_2); + if (ret) + tdx_test_fatal(ret); + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &byte_4); + if (ret) + tdx_test_fatal(ret); + + // Write an invalid number of bytes. + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 5, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &byte_4); + if (ret) + tdx_test_fatal(ret); + + tdx_test_success(); +} + +void verify_guest_writes(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + uint8_t byte_1; + uint16_t byte_2; + uint32_t byte_4; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_io_writes); + td_finalize(vm); + + printf("Verifying guest writes:\n"); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 1, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + byte_1 = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 2, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + byte_2 = *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + byte_4 = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + ASSERT_EQ(byte_1, 0xAB); + ASSERT_EQ(byte_2, 0xABCD); + ASSERT_EQ(byte_4, 0xFFABCDEF); + + vcpu_run(vcpu); + ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -352,6 +442,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_td_ioexit); run_in_new_process(&verify_td_cpuid); run_in_new_process(&verify_get_td_vmcall_info); + run_in_new_process(&verify_guest_writes);
return 0; }
From: Sagi Shahar sagis@google.com
The test verifies IO reads of various sizes from the host to the guest.
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/x86_64/tdx_vm_tests.c | 87 +++++++++++++++++++ 1 file changed, 87 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index ac23d1ad1e687..71aa4e5907a05 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -428,6 +428,92 @@ void verify_guest_writes(void) printf("\t ... PASSED\n"); }
+#define TDX_IO_READS_TEST_PORT 0x52 + +/* + * Verifies IO functionality by reading values of different sizes + * from the host. + */ +void guest_io_reads(void) +{ + uint64_t data; + uint64_t ret; + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 1, + TDG_VP_VMCALL_INSTRUCTION_IO_READ, + &data); + if (ret) + tdx_test_fatal(ret); + if (data != 0xAB) + tdx_test_fatal(1); + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 2, + TDG_VP_VMCALL_INSTRUCTION_IO_READ, + &data); + if (ret) + tdx_test_fatal(ret); + if (data != 0xABCD) + tdx_test_fatal(2); + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_READ, + &data); + if (ret) + tdx_test_fatal(ret); + if (data != 0xFFABCDEF) + tdx_test_fatal(4); + + // Read an invalid number of bytes. + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 5, + TDG_VP_VMCALL_INSTRUCTION_IO_READ, + &data); + if (ret) + tdx_test_fatal(ret); + + tdx_test_success(); +} + +void verify_guest_reads(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_io_reads); + td_finalize(vm); + + printf("Verifying guest reads:\n"); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 1, + TDG_VP_VMCALL_INSTRUCTION_IO_READ); + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xAB; + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 2, + TDG_VP_VMCALL_INSTRUCTION_IO_READ); + *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xABCD; + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_READ); + *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xFFABCDEF; + + vcpu_run(vcpu); + ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -443,6 +529,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_td_cpuid); run_in_new_process(&verify_get_td_vmcall_info); run_in_new_process(&verify_guest_writes); + run_in_new_process(&verify_guest_reads);
return 0; }
From: Sagi Shahar sagis@google.com
The test verifies reads and writes of MSR registers with different access levels.
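The MSR accesses are steered with an allow-list filter: KVM_MSR_FILTER_DEFAULT_DENY plus one range per MSR, where a set bitmap bit permits the access for that range's direction. A condensed sketch of the equivalent setup for a single read/write-allowed MSR is shown below; the helper function itself is hypothetical and only uses the standard selftest wrappers, which assert on failure.

/*
 * Hypothetical helper: allow read/write access to a single MSR while
 * denying everything else.  vm_enable_cap()/vm_ioctl() assert success.
 */
static void allow_one_msr(struct kvm_vm *vm, uint32_t msr)
{
	static u64 allow_bits = ~0ULL;	/* set bit => access allowed */
	struct kvm_msr_filter filter = {
		.flags = KVM_MSR_FILTER_DEFAULT_DENY,
		.ranges = {
			{
				.flags = KVM_MSR_FILTER_READ |
					 KVM_MSR_FILTER_WRITE,
				.nmsrs = 1,
				.base = msr,
				.bitmap = (uint8_t *)&allow_bits,
			},
		},
	};

	vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR,
		      KVM_MSR_EXIT_REASON_FILTER);
	vm_ioctl(vm, KVM_X86_SET_MSR_FILTER, &filter);
}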
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- Changes RFCv2 -> RFCv3 + Fixed typo in MTTR->MTRR --- .../selftests/kvm/include/x86_64/tdx/tdx.h | 4 + .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 +++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 217 ++++++++++++++++++ 3 files changed, 248 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index 37ad16943e299..fbac1951cfe35 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -8,11 +8,15 @@ #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
#define TDG_VP_VMCALL_INSTRUCTION_IO 30 +#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31 +#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa); uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, uint64_t *r13, uint64_t *r14); +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value); +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index 7254d61515db2..43088d6f40b50 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -66,3 +66,30 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
return ret; } + +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value) +{ + uint64_t ret; + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_INSTRUCTION_RDMSR, + .r12 = index, + }; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (ret_value) + *ret_value = args.r11; + + return ret; +} + +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_INSTRUCTION_WRMSR, + .r12 = index, + .r13 = value, + }; + + return __tdx_hypercall(&args, 0); +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index 71aa4e5907a05..65ca1ec2a6e82 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -514,6 +514,221 @@ void verify_guest_reads(void) printf("\t ... PASSED\n"); }
+/* + * Define a filter which denies all MSR access except the following: + * MTTR_BASE_0: Allow read/write access + * MTTR_BASE_1: Allow read access + * MTTR_BASE_2: Allow write access + */ +static u64 tdx_msr_test_allow_bits = 0xFFFFFFFFFFFFFFFF; +#define MTTR_BASE_0 (0x200) +#define MTTR_BASE_1 (0x202) +#define MTTR_BASE_2 (0x204) +struct kvm_msr_filter tdx_msr_test_filter = { + .flags = KVM_MSR_FILTER_DEFAULT_DENY, + .ranges = { + { + .flags = KVM_MSR_FILTER_READ | + KVM_MSR_FILTER_WRITE, + .nmsrs = 1, + .base = MTTR_BASE_0, + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits, + }, { + .flags = KVM_MSR_FILTER_READ, + .nmsrs = 1, + .base = MTTR_BASE_1, + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits, + }, { + .flags = KVM_MSR_FILTER_WRITE, + .nmsrs = 1, + .base = MTTR_BASE_2, + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits, + }, + }, +}; + +/* + * Verifies MSR read functionality. + */ +void guest_msr_read(void) +{ + uint64_t data; + uint64_t ret; + + ret = tdg_vp_vmcall_instruction_rdmsr(MTTR_BASE_0, &data); + if (ret) + tdx_test_fatal(ret); + + ret = tdx_test_report_64bit_to_user_space(data); + if (ret) + tdx_test_fatal(ret); + + ret = tdg_vp_vmcall_instruction_rdmsr(MTTR_BASE_1, &data); + if (ret) + tdx_test_fatal(ret); + + ret = tdx_test_report_64bit_to_user_space(data); + if (ret) + tdx_test_fatal(ret); + + /* We expect this call to fail since MTTR_BASE_2 is write only */ + ret = tdg_vp_vmcall_instruction_rdmsr(MTTR_BASE_2, &data); + if (ret) { + ret = tdx_test_report_64bit_to_user_space(ret); + if (ret) + tdx_test_fatal(ret); + } else { + tdx_test_fatal(-99); + } + + tdx_test_success(); +} + +void verify_guest_msr_reads(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + uint64_t data; + int ret; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + + /* + * Set explicit MSR filter map to control access to the MSR registers + * used in the test. + */ + printf("\t ... Setting test MSR filter\n"); + ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR); + TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable"); + vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER); + + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER); + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable"); + + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter); + TEST_ASSERT(ret == 0, + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)", + ret, errno, strerror(errno)); + + vcpu = td_vcpu_add(vm, 0, guest_msr_read); + td_finalize(vm); + + printf("Verifying guest msr reads:\n"); + + printf("\t ... Setting test MTTR values\n"); + /* valid values for mtrr type are 0, 1, 4, 5, 6 */ + vcpu_set_msr(vcpu, MTTR_BASE_0, 4); + vcpu_set_msr(vcpu, MTTR_BASE_1, 5); + vcpu_set_msr(vcpu, MTTR_BASE_2, 6); + + printf("\t ... Running guest\n"); + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + data = tdx_test_read_64bit_report_from_guest(vcpu); + ASSERT_EQ(data, 4); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + data = tdx_test_read_64bit_report_from_guest(vcpu); + ASSERT_EQ(data, 5); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + data = tdx_test_read_64bit_report_from_guest(vcpu); + ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + +/* + * Verifies MSR write functionality. 
+ */ +void guest_msr_write(void) +{ + uint64_t ret; + + ret = tdg_vp_vmcall_instruction_wrmsr(MTTR_BASE_0, 4); + if (ret) + tdx_test_fatal(ret); + + /* We expect this call to fail since MTTR_BASE_1 is read only */ + ret = tdg_vp_vmcall_instruction_wrmsr(MTTR_BASE_1, 5); + if (ret) { + ret = tdx_test_report_64bit_to_user_space(ret); + if (ret) + tdx_test_fatal(ret); + } else { + tdx_test_fatal(-99); + } + + + ret = tdg_vp_vmcall_instruction_wrmsr(MTTR_BASE_2, 6); + if (ret) + tdx_test_fatal(ret); + + tdx_test_success(); +} + +void verify_guest_msr_writes(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + uint64_t data; + int ret; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + + /* + * Set explicit MSR filter map to control access to the MSR registers + * used in the test. + */ + printf("\t ... Setting test MSR filter\n"); + ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR); + TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable"); + vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER); + + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER); + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable"); + + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter); + TEST_ASSERT(ret == 0, + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)", + ret, errno, strerror(errno)); + + vcpu = td_vcpu_add(vm, 0, guest_msr_write); + td_finalize(vm); + + printf("Verifying guest msr writes:\n"); + + printf("\t ... Running guest\n"); + /* Only the write to MTTR_BASE_1 should trigger an exit */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + data = tdx_test_read_64bit_report_from_guest(vcpu); + ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + printf("\t ... Verifying MTTR values writen by guest\n"); + + ASSERT_EQ(vcpu_get_msr(vcpu, MTTR_BASE_0), 4); + ASSERT_EQ(vcpu_get_msr(vcpu, MTTR_BASE_1), 0); + ASSERT_EQ(vcpu_get_msr(vcpu, MTTR_BASE_2), 6); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -530,6 +745,8 @@ int main(int argc, char **argv) run_in_new_process(&verify_get_td_vmcall_info); run_in_new_process(&verify_guest_writes); run_in_new_process(&verify_guest_reads); + run_in_new_process(&verify_guest_msr_writes); + run_in_new_process(&verify_guest_msr_reads);
return 0; } -- 2.39.0.246.g2a6d74b583-goog
From: Sagi Shahar sagis@google.com
The test verifies that the guest issues TDVMCALL<INSTRUCTION.HLT> and that the guest vCPU enters the halted state.
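Because the vCPU stays halted inside KVM_RUN, the MP state has to be sampled asynchronously; the test arms SIGALRM before entering the guest and performs the check from the handler. A condensed sketch of that handler is below, assuming the vCPU pointer was stashed in a global before vcpu_run(); the names here are illustrative only.

static struct kvm_vcpu *hlt_vcpu;	/* stashed before vcpu_run() */

static void check_vcpu_halted(int signum)
{
	struct kvm_mp_state mp_state;

	vcpu_mp_state_get(hlt_vcpu, &mp_state);
	ASSERT_EQ(mp_state.mp_state, KVM_MP_STATE_HALTED);

	/* Let the guest run again so it can report success. */
	mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
	vcpu_mp_state_set(hlt_vcpu, &mp_state);
}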
Signed-off-by: Erdem Aktas erdemaktas@google.com Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/x86_64/tdx/tdx.h | 3 + .../selftests/kvm/lib/x86_64/tdx/tdx.c | 10 +++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 78 +++++++++++++++++++ 3 files changed, 91 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index fbac1951cfe35..7eba9d80b3681 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -7,10 +7,12 @@ #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
+#define TDG_VP_VMCALL_INSTRUCTION_HLT 12 #define TDG_VP_VMCALL_INSTRUCTION_IO 30 #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31 #define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
+ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa); @@ -18,5 +20,6 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, uint64_t *r13, uint64_t *r14); uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value); uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value); +uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index 43088d6f40b50..1af0626c2a4ad 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -93,3 +93,13 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value)
return __tdx_hypercall(&args, 0); } + +uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_INSTRUCTION_HLT, + .r12 = interrupt_blocked_flag, + }; + + return __tdx_hypercall(&args, 0); +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index 65ca1ec2a6e82..346c1e07af9c0 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -728,6 +728,83 @@ void verify_guest_msr_writes(void) printf("\t ... PASSED\n"); }
+/* + * Verifies HLT functionality. + */ +void guest_hlt(void) +{ + uint64_t ret; + uint64_t interrupt_blocked_flag; + + interrupt_blocked_flag = 0; + ret = tdg_vp_vmcall_instruction_hlt(interrupt_blocked_flag); + if (ret) + tdx_test_fatal(ret); + + tdx_test_success(); +} + +void _verify_guest_hlt(int signum); + +void wake_me(int interval) +{ + struct sigaction action; + + action.sa_handler = _verify_guest_hlt; + sigemptyset(&action.sa_mask); + action.sa_flags = 0; + + TEST_ASSERT(sigaction(SIGALRM, &action, NULL) == 0, + "Could not set the alarm handler!"); + + alarm(interval); +} + +void _verify_guest_hlt(int signum) +{ + struct kvm_vm *vm; + static struct kvm_vcpu *vcpu; + + /* + * This function will also be called by SIGALRM handler to check the + * vCPU MP State. If vm has been initialized, then we are in the signal + * handler. Check the MP state and let the guest run again. + */ + if (vcpu != NULL) { + struct kvm_mp_state mp_state; + + vcpu_mp_state_get(vcpu, &mp_state); + ASSERT_EQ(mp_state.mp_state, KVM_MP_STATE_HALTED); + + /* Let the guest to run and finish the test.*/ + mp_state.mp_state = KVM_MP_STATE_RUNNABLE; + vcpu_mp_state_set(vcpu, &mp_state); + return; + } + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_hlt); + td_finalize(vm); + + printf("Verifying HLT:\n"); + + printf("\t ... Running guest\n"); + + /* Wait 1 second for guest to execute HLT */ + wake_me(1); + vcpu_run(vcpu); + + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + +void verify_guest_hlt(void) +{ + _verify_guest_hlt(0); +}
int main(int argc, char **argv) { @@ -747,6 +824,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_guest_reads); run_in_new_process(&verify_guest_msr_writes); run_in_new_process(&verify_guest_msr_reads); + run_in_new_process(&verify_guest_hlt);
return 0; }
From: Sagi Shahar sagis@google.com
The test verifies MMIO reads of various sizes from the host to the guest.
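On the host side these show up as ordinary KVM_EXIT_MMIO read exits, completed by copying the reply into run->mmio.data before re-entering the vCPU. A minimal sketch of that step follows; the helper name is illustrative only and a little-endian host is assumed.

/*
 * Illustrative only: complete an MMIO read exit by copying @value into
 * run->mmio.data (low bytes first, i.e. little-endian host assumed).
 */
static void complete_td_mmio_read(struct kvm_vcpu *vcpu, uint64_t value)
{
	TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_MMIO &&
		    !vcpu->run->mmio.is_write,
		    "Expected an MMIO read exit");

	memcpy(vcpu->run->mmio.data, &value, vcpu->run->mmio.len);
}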
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/x86_64/tdx/tdcall.h | 2 + .../selftests/kvm/include/x86_64/tdx/tdx.h | 3 + .../kvm/include/x86_64/tdx/test_util.h | 23 +++++ .../selftests/kvm/lib/x86_64/tdx/tdx.c | 19 ++++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 87 +++++++++++++++++++ 5 files changed, 134 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h index b5e94b7c48fa5..95fcdbd8404e9 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h @@ -9,6 +9,8 @@
#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0 #define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1 +#define TDG_VP_VMCALL_VE_REQUEST_MMIO_READ 0 +#define TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE 1
#define TDG_VP_VMCALL_SUCCESS 0x0000000000000000 #define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000 diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index 7eba9d80b3681..8dd6a65485260 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -11,6 +11,7 @@ #define TDG_VP_VMCALL_INSTRUCTION_IO 30 #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31 #define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32 +#define TDG_VP_VMCALL_VE_REQUEST_MMIO 48
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, @@ -21,5 +22,7 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value); uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value); uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag); +uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size, + uint64_t *data_out);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h index 8a9b6a1bec3eb..af412b7646049 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h @@ -35,6 +35,29 @@ (VCPU)->run->io.direction); \ } while (0)
+ +/** + * Assert that some MMIO operation involving TDG.VP.VMCALL <#VERequestMMIO> was + * called in the guest. + */ +#define TDX_TEST_ASSERT_MMIO(VCPU, ADDR, SIZE, DIR) \ + do { \ + TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_MMIO, \ + "Got exit_reason other than KVM_EXIT_MMIO: %u (%s)\n", \ + (VCPU)->run->exit_reason, \ + exit_reason_str((VCPU)->run->exit_reason)); \ + \ + TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_MMIO) && \ + ((VCPU)->run->mmio.phys_addr == (ADDR)) && \ + ((VCPU)->run->mmio.len == (SIZE)) && \ + ((VCPU)->run->mmio.is_write == (DIR)), \ + "Got an unexpected MMIO exit values: %u (%s) %llu %d %d\n", \ + (VCPU)->run->exit_reason, \ + exit_reason_str((VCPU)->run->exit_reason), \ + (VCPU)->run->mmio.phys_addr, (VCPU)->run->mmio.len, \ + (VCPU)->run->mmio.is_write); \ + } while (0) + /** * Check and report if there was some failure in the guest, either an exception * like a triple fault, or if a tdx_test_fatal() was hit. diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index 1af0626c2a4ad..dcdacf08bcd60 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -103,3 +103,22 @@ uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag)
return __tdx_hypercall(&args, 0); } + +uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size, + uint64_t *data_out) +{ + uint64_t ret; + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO, + .r12 = size, + .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_READ, + .r14 = address, + }; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (data_out) + *data_out = args.r11; + + return ret; +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index 346c1e07af9c0..88f0429db0176 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -806,6 +806,92 @@ void verify_guest_hlt(void) _verify_guest_hlt(0); }
+/* Pick any address that was not mapped into the guest to test MMIO */ +#define TDX_MMIO_TEST_ADDR 0x200000000 + +void guest_mmio_reads(void) +{ + uint64_t data; + uint64_t ret; + + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 1, &data); + if (ret) + tdx_test_fatal(ret); + if (data != 0x12) + tdx_test_fatal(1); + + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 2, &data); + if (ret) + tdx_test_fatal(ret); + if (data != 0x1234) + tdx_test_fatal(2); + + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 4, &data); + if (ret) + tdx_test_fatal(ret); + if (data != 0x12345678) + tdx_test_fatal(4); + + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 8, &data); + if (ret) + tdx_test_fatal(ret); + if (data != 0x1234567890ABCDEF) + tdx_test_fatal(8); + + // Read an invalid number of bytes. + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 10, &data); + if (ret) + tdx_test_fatal(ret); + + tdx_test_success(); +} + +/* + * Varifies guest MMIO reads. + */ +void verify_mmio_reads(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_mmio_reads); + td_finalize(vm); + + printf("Verifying TD MMIO reads:\n"); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ); + *(uint8_t *)vcpu->run->mmio.data = 0x12; + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ); + *(uint16_t *)vcpu->run->mmio.data = 0x1234; + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ); + *(uint32_t *)vcpu->run->mmio.data = 0x12345678; + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ); + *(uint64_t *)vcpu->run->mmio.data = 0x1234567890ABCDEF; + + vcpu_run(vcpu); + ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -825,6 +911,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_guest_msr_writes); run_in_new_process(&verify_guest_msr_reads); run_in_new_process(&verify_guest_hlt); + run_in_new_process(&verify_mmio_reads);
return 0; }
From: Sagi Shahar sagis@google.com
The test verifies MMIO writes of various sizes from the guest to the host.
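The host observes each write as a KVM_EXIT_MMIO exit with mmio.is_write set and the payload already in run->mmio.data. A small sketch of pulling the value back out is shown below; the helper is illustrative only and a little-endian host is assumed.

/* Illustrative only: pull the written value out of an MMIO write exit. */
static uint64_t fetch_td_mmio_write(struct kvm_vcpu *vcpu)
{
	uint64_t value = 0;

	TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_MMIO &&
		    vcpu->run->mmio.is_write,
		    "Expected an MMIO write exit");

	memcpy(&value, vcpu->run->mmio.data, vcpu->run->mmio.len);
	return value;
}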
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/x86_64/tdx/tdx.h | 2 + .../selftests/kvm/lib/x86_64/tdx/tdx.c | 14 +++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 85 +++++++++++++++++++ 3 files changed, 101 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index 8dd6a65485260..f2a90ad8a55c6 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -24,5 +24,7 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value); uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag); uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size, uint64_t *data_out); +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size, + uint64_t data_in);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index dcdacf08bcd60..8b12ac7049572 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -122,3 +122,17 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
return ret; } + +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size, + uint64_t data_in) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO, + .r12 = size, + .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE, + .r14 = address, + .r15 = data_in, + }; + + return __tdx_hypercall(&args, 0); +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index 88f0429db0176..dcc0940a74e92 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -892,6 +892,90 @@ void verify_mmio_reads(void) printf("\t ... PASSED\n"); }
+void guest_mmio_writes(void) +{ + uint64_t ret; + + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 1, 0x12); + if (ret) + tdx_test_fatal(ret); + + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 2, 0x1234); + if (ret) + tdx_test_fatal(ret); + + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 4, 0x12345678); + if (ret) + tdx_test_fatal(ret); + + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 8, 0x1234567890ABCDEF); + if (ret) + tdx_test_fatal(ret); + + // Write across page boundary. + ret = tdg_vp_vmcall_ve_request_mmio_write(PAGE_SIZE - 1, 8, 0); + if (ret) + tdx_test_fatal(ret); + + tdx_test_success(); +} + +/* + * Varifies guest MMIO writes. + */ +void verify_mmio_writes(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + uint8_t byte_1; + uint16_t byte_2; + uint32_t byte_4; + uint64_t byte_8; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_mmio_writes); + td_finalize(vm); + + printf("Verifying TD MMIO writes:\n"); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE); + byte_1 = *(uint8_t *)(vcpu->run->mmio.data); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE); + byte_2 = *(uint16_t *)(vcpu->run->mmio.data); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE); + byte_4 = *(uint32_t *)(vcpu->run->mmio.data); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE); + byte_8 = *(uint64_t *)(vcpu->run->mmio.data); + + ASSERT_EQ(byte_1, 0x12); + ASSERT_EQ(byte_2, 0x1234); + ASSERT_EQ(byte_4, 0x12345678); + ASSERT_EQ(byte_8, 0x1234567890ABCDEF); + + vcpu_run(vcpu); + ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -912,6 +996,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_guest_msr_reads); run_in_new_process(&verify_guest_hlt); run_in_new_process(&verify_mmio_reads); + run_in_new_process(&verify_mmio_writes);
return 0; }
From: Sagi Shahar sagis@google.com
This test issues a CPUID TDVMCALL from inside the guest to get the CPUID values as seen by KVM.
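The four 32-bit outputs are reported one at a time through the existing TDX_TEST_REPORT_PORT I/O path, so the host repeats the same run/assert/read sequence for eax, ebx, ecx and edx. A sketch of that sequence factored into a helper is shown below; the helper itself is not part of this patch.

/* Not part of this patch: collect one 32-bit value reported by the guest. */
static uint32_t read_reported_u32(struct kvm_vcpu *vcpu)
{
	vcpu_run(vcpu);
	TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
	TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
			   TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);

	return *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
}

With a helper like this, the four CPUID registers fall out of four consecutive calls.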
Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/x86_64/tdx/tdx.h | 4 + .../selftests/kvm/lib/x86_64/tdx/tdx.c | 26 +++++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 94 +++++++++++++++++++ 3 files changed, 124 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index f2a90ad8a55c6..e746372206a25 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -7,6 +7,7 @@ #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
+#define TDG_VP_VMCALL_INSTRUCTION_CPUID 10 #define TDG_VP_VMCALL_INSTRUCTION_HLT 12 #define TDG_VP_VMCALL_INSTRUCTION_IO 30 #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31 @@ -26,5 +27,8 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size, uint64_t *data_out); uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size, uint64_t data_in); +uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx, + uint32_t *ret_eax, uint32_t *ret_ebx, + uint32_t *ret_ecx, uint32_t *ret_edx);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index 8b12ac7049572..d9d60dd58dfdd 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -136,3 +136,29 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
return __tdx_hypercall(&args, 0); } + +uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx, + uint32_t *ret_eax, uint32_t *ret_ebx, + uint32_t *ret_ecx, uint32_t *ret_edx) +{ + uint64_t ret; + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_INSTRUCTION_CPUID, + .r12 = eax, + .r13 = ecx, + }; + + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (ret_eax) + *ret_eax = args.r12; + if (ret_ebx) + *ret_ebx = args.r13; + if (ret_ecx) + *ret_ecx = args.r14; + if (ret_edx) + *ret_edx = args.r15; + + return ret; +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index dcc0940a74e92..b9d96b5f6521f 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -976,6 +976,99 @@ void verify_mmio_writes(void) printf("\t ... PASSED\n"); }
+/* + * Verifies CPUID TDVMCALL functionality. + * The guest will then send the values to userspace using an IO write to be + * checked against the expected values. + */ +void guest_code_cpuid_tdcall(void) +{ + uint64_t err; + uint32_t eax, ebx, ecx, edx; + + // Read CPUID leaf 0x1 from host. + err = tdg_vp_vmcall_instruction_cpuid(/*eax=*/1, /*ecx=*/0, + &eax, &ebx, &ecx, &edx); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_to_user_space(eax); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_to_user_space(ebx); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_to_user_space(ecx); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_to_user_space(edx); + if (err) + tdx_test_fatal(err); + + tdx_test_success(); +} + +void verify_td_cpuid_tdcall(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + uint32_t eax, ebx, ecx, edx; + const struct kvm_cpuid_entry2 *cpuid_entry; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_cpuid_tdcall); + td_finalize(vm); + + printf("Verifying TD CPUID TDVMCALL:\n"); + + /* Wait for guest to report CPUID values */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + eax = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ebx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ecx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + edx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + /* Get KVM CPUIDs for reference */ + cpuid_entry = kvm_get_supported_cpuid_entry(1); + TEST_ASSERT(cpuid_entry, "CPUID entry missing\n"); + + ASSERT_EQ(cpuid_entry->eax, eax); + // Mask lapic ID when comparing ebx. + ASSERT_EQ(cpuid_entry->ebx & ~0xFF000000, ebx & ~0xFF000000); + ASSERT_EQ(cpuid_entry->ecx, ecx); + ASSERT_EQ(cpuid_entry->edx, edx); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -997,6 +1090,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_guest_hlt); run_in_new_process(&verify_mmio_reads); run_in_new_process(&verify_mmio_writes); + run_in_new_process(&verify_td_cpuid_tdcall);
return 0; }
From: Ryan Afranji afranji@google.com
The test checks that the host can only read fixed values when it tries to access the guest's private memory.
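The guest learns which page to write to through a global that the host fills in and syncs before the TD is finalized, while the host keeps the HVA alias of the same page for its own read attempts. A rough sketch of that plumbing is below, using the existing selftest helpers; the wrapper function is made up for illustration and must run before td_finalize().

static uint64_t tdx_test_target_gva;	/* synced into the guest image */

/* Hypothetical wrapper: allocate the test page and hand its GVA to the guest. */
static uint64_t *plumb_target_page(struct kvm_vm *vm)
{
	vm_vaddr_t gva = vm_vaddr_alloc_page(vm);

	TEST_ASSERT(gva < BIT_ULL(32),
		    "Test address should fit in 32 bits");

	tdx_test_target_gva = gva;
	sync_global_to_guest(vm, tdx_test_target_gva);

	return (uint64_t *)addr_gva2hva(vm, gva);	/* host-side alias */
}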
Signed-off-by: Ryan Afranji afranji@google.com Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/x86_64/tdx_vm_tests.c | 85 +++++++++++++++++++ 1 file changed, 85 insertions(+)
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index b9d96b5f6521f..4c338206dbaf2 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -1069,6 +1069,90 @@ void verify_td_cpuid_tdcall(void) printf("\t ... PASSED\n"); }
+/* + * Shared variables between guest and host for host reading private mem test + */ +static uint64_t tdx_test_host_read_private_mem_addr; +#define TDX_HOST_READ_PRIVATE_MEM_PORT_TEST 0x53 + +void guest_host_read_priv_mem(void) +{ + uint64_t ret; + uint64_t placeholder = 0; + + /* Set value */ + *((uint32_t *) tdx_test_host_read_private_mem_addr) = 0xABCD; + + /* Exit so host can read value */ + ret = tdg_vp_vmcall_instruction_io( + TDX_HOST_READ_PRIVATE_MEM_PORT_TEST, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &placeholder); + if (ret) + tdx_test_fatal(ret); + + /* Update guest_var's value and have host reread it. */ + *((uint32_t *) tdx_test_host_read_private_mem_addr) = 0xFEDC; + + tdx_test_success(); +} + +void verify_host_reading_private_mem(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + vm_vaddr_t test_page; + uint64_t *host_virt; + uint64_t first_host_read; + uint64_t second_host_read; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_host_read_priv_mem); + + test_page = vm_vaddr_alloc_page(vm); + TEST_ASSERT(test_page < BIT_ULL(32), + "Test address should fit in 32 bits so it can be sent to the guest"); + + host_virt = addr_gva2hva(vm, test_page); + TEST_ASSERT(host_virt != NULL, + "Guest address not found in guest memory regions\n"); + + tdx_test_host_read_private_mem_addr = test_page; + sync_global_to_guest(vm, tdx_test_host_read_private_mem_addr); + + td_finalize(vm); + + printf("Verifying host's behavior when reading TD private memory:\n"); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_HOST_READ_PRIVATE_MEM_PORT_TEST, + 4, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + printf("\t ... Guest's variable contains 0xABCD\n"); + + /* Host reads guest's variable. */ + first_host_read = *host_virt; + printf("\t ... Host's read attempt value: %lu\n", first_host_read); + + /* Guest updates variable and host rereads it. */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + printf("\t ... Guest's variable updated to 0xFEDC\n"); + + second_host_read = *host_virt; + printf("\t ... Host's second read attempt value: %lu\n", + second_host_read); + + TEST_ASSERT(first_host_read == second_host_read, + "Host did not read a fixed pattern\n"); + + printf("\t ... Fixed pattern was returned to the host\n"); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -1091,6 +1175,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_mmio_reads); run_in_new_process(&verify_mmio_writes); run_in_new_process(&verify_td_cpuid_tdcall); + run_in_new_process(&verify_host_reading_private_mem);
return 0; }
From: Roger Wang runanwang@google.com
Adds a test for the TDG.VP.INFO TDCALL.
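For reference, the test checks the TDG.VP.INFO outputs against the register layout below; this sketch merely restates that layout as a decoding helper and is not part of the patch.

/* Sketch only: the output layout the test checks, register by register. */
struct vp_info {
	uint64_t gpaw;		/* rcx, bits 5:0; bits 63:6 reserved (0) */
	uint64_t attributes;	/* rdx, as given to TDH.MNG.INIT */
	uint32_t num_vcpus;	/* r8, bits 31:0 */
	uint32_t max_vcpus;	/* r8, bits 63:32 */
	uint64_t vcpu_index;	/* r9; r10/r11 are reserved (0) */
};

static struct vp_info decode_vp_info(uint64_t rcx, uint64_t rdx,
				     uint64_t r8, uint64_t r9)
{
	return (struct vp_info) {
		.gpaw		= rcx & 0x3f,
		.attributes	= rdx,
		.num_vcpus	= (uint32_t)r8,
		.max_vcpus	= (uint32_t)(r8 >> 32),
		.vcpu_index	= r9,
	};
}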
Signed-off-by: Roger Wang runanwang@google.com Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- Changes RFCv2 -> RFCv3 + Use KVM_CAP_MAX_VCPUS to set max_vcpus, check that it was passed to the TDX module by reading it back --- .../selftests/kvm/include/x86_64/tdx/tdcall.h | 19 +++ .../selftests/kvm/include/x86_64/tdx/tdx.h | 6 + .../selftests/kvm/lib/x86_64/tdx/tdcall.S | 68 ++++++++ .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 ++++ .../selftests/kvm/x86_64/tdx_vm_tests.c | 148 ++++++++++++++++++ 5 files changed, 268 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h index 95fcdbd8404e9..a65ce8f3c109b 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h @@ -37,4 +37,23 @@ struct tdx_hypercall_args { /* Used to request services from the VMM */ u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);
+/* + * Used to gather the output registers values of the TDCALL and SEAMCALL + * instructions when requesting services from the TDX module. + * + * This is a software only structure and not part of the TDX module/VMM ABI. + */ +struct tdx_module_output { + u64 rcx; + u64 rdx; + u64 r8; + u64 r9; + u64 r10; + u64 r11; +}; + +/* Used to communicate with the TDX module */ +u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, + struct tdx_module_output *out); + #endif // SELFTESTS_TDX_TDCALL_H diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index e746372206a25..ffab2c3ca312b 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -4,6 +4,8 @@
#include <stdint.h>
+#define TDG_VP_INFO 1 + #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
@@ -31,4 +33,8 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx, uint32_t *ret_eax, uint32_t *ret_ebx, uint32_t *ret_ecx, uint32_t *ret_edx);
+uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, + uint64_t *r8, uint64_t *r9, + uint64_t *r10, uint64_t *r11); + #endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S index df9c1ed4bb2d1..601d715314434 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S @@ -86,5 +86,73 @@ __tdx_hypercall: pop %rbp ret
+#define TDX_MODULE_rcx 0 /* offsetof(struct tdx_module_output, rcx) */ +#define TDX_MODULE_rdx 8 /* offsetof(struct tdx_module_output, rdx) */ +#define TDX_MODULE_r8 16 /* offsetof(struct tdx_module_output, r8) */ +#define TDX_MODULE_r9 24 /* offsetof(struct tdx_module_output, r9) */ +#define TDX_MODULE_r10 32 /* offsetof(struct tdx_module_output, r10) */ +#define TDX_MODULE_r11 40 /* offsetof(struct tdx_module_output, r11) */ + +.globl __tdx_module_call +.type __tdx_module_call, @function +__tdx_module_call: + /* Set up stack frame */ + push %rbp + movq %rsp, %rbp + + /* Callee-saved, so preserve it */ + push %r12 + + /* + * Push output pointer to stack. + * After the operation, it will be fetched into R12 register. + */ + push %r9 + + /* Mangle function call ABI into TDCALL/SEAMCALL ABI: */ + /* Move Leaf ID to RAX */ + mov %rdi, %rax + /* Move input 4 to R9 */ + mov %r8, %r9 + /* Move input 3 to R8 */ + mov %rcx, %r8 + /* Move input 1 to RCX */ + mov %rsi, %rcx + /* Leave input param 2 in RDX */ + + tdcall + + /* + * Fetch output pointer from stack to R12 (It is used + * as temporary storage) + */ + pop %r12 + + /* + * Since this macro can be invoked with NULL as an output pointer, + * check if caller provided an output struct before storing output + * registers. + * + * Update output registers, even if the call failed (RAX != 0). + * Other registers may contain details of the failure. + */ + test %r12, %r12 + jz .Lno_output_struct + + /* Copy result registers to output struct: */ + movq %rcx, TDX_MODULE_rcx(%r12) + movq %rdx, TDX_MODULE_rdx(%r12) + movq %r8, TDX_MODULE_r8(%r12) + movq %r9, TDX_MODULE_r9(%r12) + movq %r10, TDX_MODULE_r10(%r12) + movq %r11, TDX_MODULE_r11(%r12) + +.Lno_output_struct: + /* Restore the state of R12 register */ + pop %r12 + + pop %rbp + ret + /* Disable executable stack */ .section .note.GNU-stack,"",%progbits diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index d9d60dd58dfdd..a280136634d3b 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -162,3 +162,30 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
return ret; } + +uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, + uint64_t *r8, uint64_t *r9, + uint64_t *r10, uint64_t *r11) +{ + uint64_t ret; + struct tdx_module_output out; + + memset(&out, 0, sizeof(struct tdx_module_output)); + + ret = __tdx_module_call(TDG_VP_INFO, 0, 0, 0, 0, &out); + + if (rcx) + *rcx = out.rcx; + if (rdx) + *rdx = out.rdx; + if (r8) + *r8 = out.r8; + if (r9) + *r9 = out.r9; + if (r10) + *r10 = out.r10; + if (r11) + *r11 = out.r11; + + return ret; +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c index 4c338206dbaf2..e2dba3b5ee63e 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c @@ -1153,6 +1153,153 @@ void verify_host_reading_private_mem(void) printf("\t ... PASSED\n"); }
+/* + * Do a TDG.VP.INFO call from the guest + */ +void guest_tdcall_vp_info(void) +{ + uint64_t err; + uint64_t rcx, rdx, r8, r9, r10, r11; + + err = tdg_vp_info(&rcx, &rdx, &r8, &r9, &r10, &r11); + if (err) + tdx_test_fatal(err); + + /* return values to user space host */ + err = tdx_test_report_64bit_to_user_space(rcx); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(rdx); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(r8); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(r9); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(r10); + if (err) + tdx_test_fatal(err); + + err = tdx_test_report_64bit_to_user_space(r11); + if (err) + tdx_test_fatal(err); + + tdx_test_success(); +} + +/* + * TDG.VP.INFO call from the guest. Verify the right values are returned + */ +void verify_tdcall_vp_info(void) +{ + const int num_vcpus = 2; + struct kvm_vcpu *vcpus[num_vcpus]; + struct kvm_vm *vm; + + uint64_t rcx, rdx, r8, r9, r10, r11; + uint32_t ret_num_vcpus, ret_max_vcpus; + uint64_t attributes; + uint32_t i; + const struct kvm_cpuid_entry2 *cpuid_entry; + int max_pa = -1; + int ret; + + vm = td_create(); + + /* Set value for kvm->max_vcpus to be checked later */ +#define TEST_VP_INFO_MAX_VCPUS 75 + ret = kvm_check_cap(KVM_CAP_MAX_VCPUS); + TEST_ASSERT(ret, "TDX: KVM_CAP_MAX_VCPUS is not supported!"); + vm_enable_cap(vm, KVM_CAP_MAX_VCPUS, TEST_VP_INFO_MAX_VCPUS); + +#define TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT (1UL << 28) +#define TDX_TDPARAM_ATTR_PKS_BIT (1UL << 30) + /* Setting attributes parameter used by TDH.MNG.INIT to 0x50000000 */ + attributes = TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT | + TDX_TDPARAM_ATTR_PKS_BIT; + + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, attributes); + + for (i = 0; i < num_vcpus; i++) + vcpus[i] = td_vcpu_add(vm, i, guest_tdcall_vp_info); + + td_finalize(vm); + + printf("Verifying TDG.VP.INFO call:\n"); + + /* Get KVM CPUIDs for reference */ + cpuid_entry = kvm_get_supported_cpuid_entry(0x80000008); + TEST_ASSERT(cpuid_entry, "CPUID entry missing\n"); + max_pa = cpuid_entry->eax & 0xff; + + for (i = 0; i < num_vcpus; i++) { + struct kvm_vcpu *vcpu = vcpus[i]; + + /* Wait for guest to report rcx value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + rcx = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report rdx value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + rdx = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r8 value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + r8 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r9 value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + r9 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r10 value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + r10 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r11 value */ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + r11 = tdx_test_read_64bit_report_from_guest(vcpu); + + ret_num_vcpus = r8 & 0xFFFFFFFF; + ret_max_vcpus = (r8 >> 32) & 0xFFFFFFFF; + + /* first bits 5:0 of rcx represent the GPAW */ + ASSERT_EQ(rcx & 0x3F, max_pa); + /* next 63:6 bits of rcx is reserved and must be 0 */ + ASSERT_EQ(rcx >> 6, 0); + ASSERT_EQ(rdx, attributes); + ASSERT_EQ(ret_num_vcpus, num_vcpus); + ASSERT_EQ(ret_max_vcpus, TEST_VP_INFO_MAX_VCPUS); + /* VCPU_INDEX = i */ + 
ASSERT_EQ(r9, i); + /* verify reserved registers are 0 */ + ASSERT_EQ(r10, 0); + ASSERT_EQ(r11, 0); + + /* Wait for guest to complete execution */ + vcpu_run(vcpu); + + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + printf("\t ... Guest completed run on VCPU=%u\n", i); + } + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { setbuf(stdout, NULL); @@ -1176,6 +1323,7 @@ int main(int argc, char **argv) run_in_new_process(&verify_mmio_writes); run_in_new_process(&verify_td_cpuid_tdcall); run_in_new_process(&verify_host_reading_private_mem); + run_in_new_process(&verify_tdcall_vp_info);
return 0; } -- 2.39.0.246.g2a6d74b583-goog
Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/kvm_util_base.h | 24 ++++++++++++++ tools/testing/selftests/kvm/lib/kvm_util.c | 32 +++++++++++++++++++ .../selftests/kvm/lib/x86_64/processor.c | 15 +++++++-- 3 files changed, 69 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index cdc204cfeb4c2..30453e2de8396 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -412,6 +412,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, unsigned int npages); +void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, + unsigned int npages); void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa); void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva); vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva); @@ -843,6 +845,28 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr virt_arch_pg_map(vm, vaddr, paddr); }
+/* + * VM Virtual Page Map as Shared + * + * Input Args: + * vm - Virtual Machine + * vaddr - VM Virtual Address + * paddr - VM Physical Address + * memslot - Memory region slot for new virtual translation tables + * + * Output Args: None + * + * Return: None + * + * Within @vm, creates a virtual translation for the page starting + * at @vaddr to the page starting at @paddr. + */ +void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr); + +static inline void virt_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) +{ + virt_arch_pg_map_shared(vm, vaddr, paddr); +}
/* * Address Guest Virtual to Guest Physical diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 14246f0fd2e78..6673be2f49c31 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1384,6 +1384,38 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, } }
+/* + * Map a range of VM virtual address to the VM's physical address as shared + * + * Input Args: + * vm - Virtual Machine + * vaddr - Virtuall address to map + * paddr - VM Physical Address + * npages - The number of pages to map + * + * Output Args: None + * + * Return: None + * + * Within the VM given by @vm, creates a virtual translation for + * @npages starting at @vaddr to the page range starting at @paddr. + */ +void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, + unsigned int npages) +{ + size_t page_size = vm->page_size; + size_t size = npages * page_size; + + TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow"); + TEST_ASSERT(paddr + size > paddr, "Paddr overflow"); + + while (npages--) { + virt_pg_map_shared(vm, vaddr, paddr); + vaddr += page_size; + paddr += page_size; + } +} + /* * Address VM Physical to Host Virtual * diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c index 1ea1019d48c13..b4c18f7335dab 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c @@ -176,7 +176,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm, return pte; }
-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) +static void ___virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, + int level, bool protected) { const uint64_t pg_size = PG_LEVEL_SIZE(level); uint64_t *pml4e, *pdpe, *pde; @@ -223,17 +224,27 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr); *pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
- if (vm_is_gpa_protected(vm, paddr)) + if (protected) *pte |= vm->arch.c_bit; else *pte |= vm->arch.s_bit; }
+void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) +{ + ___virt_pg_map(vm, vaddr, paddr, level, vm_is_gpa_protected(vm, paddr)); +} + void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) { __virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K); }
+void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) +{ + ___virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K, false); +} + void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, uint64_t nr_bytes, int level) {
From: Ryan Afranji afranji@google.com
Adds a test that sets up shared memory between the host and guest.
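The key idea is that the same GPA is mapped twice in the guest page tables: once as private (allocated normally) and once at a shared GVA whose PTE carries the shared bit instead of the C-bit, so the guest only has to call MapGPA before touching the shared alias. A condensed sketch of the host-side setup follows, reusing the constants and helpers added in this series; the wrapper function itself is only illustrative.

/* Condensed from the host-side setup in this patch; constants come from the test. */
static uint64_t setup_shared_alias(struct kvm_vm *vm)
{
	vm_vaddr_t gva = vm_vaddr_alloc(vm, vm->page_size,
					TDX_SHARED_MEM_TEST_PRIVATE_GVA);
	uint64_t gpa = addr_gva2gpa(vm, gva);

	/* Same GPA, second GVA; the PTE gets the shared bit, not the C-bit. */
	virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA, gpa);

	/* GPA the guest passes to TDG.VP.VMCALL<MapGPA>. */
	return gpa | BIT_ULL(vm->pa_bits - 1);
}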
Signed-off-by: Ryan Afranji afranji@google.com Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com --- tools/testing/selftests/kvm/.gitignore | 1 + tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/include/x86_64/tdx/tdx.h | 2 + .../kvm/include/x86_64/tdx/tdx_util.h | 3 + .../selftests/kvm/lib/x86_64/tdx/tdx.c | 16 ++ .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 30 ++++ .../kvm/x86_64/tdx_shared_mem_test.c | 137 ++++++++++++++++++ 7 files changed, 190 insertions(+) create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore index 370d6430b32b4..e1663b0f809b4 100644 --- a/tools/testing/selftests/kvm/.gitignore +++ b/tools/testing/selftests/kvm/.gitignore @@ -66,6 +66,7 @@ /x86_64/vmx_pmu_caps_test /x86_64/triple_fault_event_test /x86_64/tdx_vm_tests +/x86_64/tdx_shared_mem_test /access_tracking_perf_test /demand_paging_test /dirty_log_test diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 9f289322a4933..27e9148212fa5 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -152,6 +152,7 @@ TEST_GEN_PROGS_x86_64 += steal_time TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test TEST_GEN_PROGS_x86_64 += system_counter_offset_test TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests +TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test
# Compiled outputs used by test targets TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index ffab2c3ca312b..857a297e51ac6 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -7,6 +7,7 @@ #define TDG_VP_INFO 1
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 +#define TDG_VP_VMCALL_MAP_GPA 0x10001 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
#define TDG_VP_VMCALL_INSTRUCTION_CPUID 10 @@ -36,5 +37,6 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx, uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, uint64_t *r8, uint64_t *r9, uint64_t *r10, uint64_t *r11); +uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h index 274b245f200bf..58374453b4b7e 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h @@ -13,4 +13,7 @@ void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, uint64_t attributes); void td_finalize(struct kvm_vm *vm);
+void handle_memory_conversion(struct kvm_vm *vm, uint64_t gpa, uint64_t size, + bool shared_to_private); + #endif // SELFTESTS_TDX_KVM_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index a280136634d3b..e0a39f29a0662 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -189,3 +189,19 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
return ret; } + +uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out) +{ + uint64_t ret; + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_MAP_GPA, + .r12 = address, + .r13 = size + }; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (data_out) + *data_out = args.r11; + return ret; +} diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c index 2e9679d24a843..4d6615b97770a 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c @@ -505,3 +505,33 @@ void td_finalize(struct kvm_vm *vm)
tdx_td_finalizemr(vm); } + +/** + * Handle conversion of memory with @size beginning @gpa for @vm. Set + * @shared_to_private to true for shared to private conversions and false + * otherwise. + * + * Since this is just for selftests, we will just keep both pieces of backing + * memory allocated and not deallocate/allocate memory; we'll just do the + * minimum of calling KVM_MEMORY_ENCRYPT_REG_REGION and + * KVM_MEMORY_ENCRYPT_UNREG_REGION. + */ +void handle_memory_conversion(struct kvm_vm *vm, uint64_t gpa, uint64_t size, + bool shared_to_private) +{ + struct kvm_enc_region range; + char *ioctl_string = shared_to_private + ? "KVM_MEMORY_ENCRYPT_REG_REGION" + : "KVM_MEMORY_ENCRYPT_UNREG_REGION"; + + range.addr = gpa; + range.size = size; + + printf("\t ... calling %s ioctl with gpa=%#lx, size=%#lx\n", + ioctl_string, gpa, size); + + if (shared_to_private) + vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &range); + else + vm_ioctl(vm, KVM_MEMORY_ENCRYPT_UNREG_REGION, &range); +} diff --git a/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c b/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c new file mode 100644 index 0000000000000..eb4cf64ae83a8 --- /dev/null +++ b/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c @@ -0,0 +1,137 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <linux/kvm.h> +#include <stdint.h> + +#include "kvm_util_base.h" +#include "processor.h" +#include "tdx/tdcall.h" +#include "tdx/tdx.h" +#include "tdx/tdx_util.h" +#include "tdx/test_util.h" +#include "test_util.h" + +#define TDX_SHARED_MEM_TEST_PRIVATE_GVA (0x80000000) +#define TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK BIT_ULL(30) +#define TDX_SHARED_MEM_TEST_SHARED_GVA \ + (TDX_SHARED_MEM_TEST_PRIVATE_GVA | \ + TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK) + +#define TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE (0xcafecafe) +#define TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE (0xabcdabcd) + +#define TDX_SHARED_MEM_TEST_INFO_PORT 0x87 + +/* + * Shared variables between guest and host + */ +static uint64_t test_mem_private_gpa; +static uint64_t test_mem_shared_gpa; + +void guest_shared_mem(void) +{ + uint32_t *test_mem_shared_gva = + (uint32_t *)TDX_SHARED_MEM_TEST_SHARED_GVA; + + uint64_t placeholder; + uint64_t ret; + + /* Map gpa as shared */ + ret = tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE, + &placeholder); + if (ret) + tdx_test_fatal_with_data(ret, __LINE__); + + *test_mem_shared_gva = TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE; + + /* Exit so host can read shared value */ + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + &placeholder); + if (ret) + tdx_test_fatal_with_data(ret, __LINE__); + + /* Read value written by host and send it back out for verification */ + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, + (uint64_t *)test_mem_shared_gva); + if (ret) + tdx_test_fatal_with_data(ret, __LINE__); +} + +int verify_shared_mem(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + vm_vaddr_t test_mem_private_gva; + uint32_t *test_mem_hva; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_shared_mem); + + /* + * Set up shared memory page for testing by first allocating as private + * and then mapping the same GPA again as shared. This way, the TD does + * not have to remap its page tables at runtime. 
+ */ + test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size, + TDX_SHARED_MEM_TEST_PRIVATE_GVA); + ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA); + + test_mem_hva = addr_gva2hva(vm, test_mem_private_gva); + TEST_ASSERT(test_mem_hva != NULL, + "Guest address not found in guest memory regions\n"); + + test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva); + virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA, + test_mem_private_gpa); + + test_mem_shared_gpa = test_mem_private_gpa | BIT_ULL(vm->pa_bits - 1); + sync_global_to_guest(vm, test_mem_private_gpa); + sync_global_to_guest(vm, test_mem_shared_gpa); + + td_finalize(vm); + + printf("Verifying shared memory accesses for TDX\n"); + + /* Begin guest execution; guest writes to shared memory. */ + printf("\t ... Starting guest execution\n"); + + /* Handle map gpa as shared */ + /* TODO: MapGPA should exit to the host VMM, but now it doesn't */ + // vcpu_run(vcpu); + // ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_MEMORY_FAULT); + // handle_memory_conversion(vm, vcpu->run->memory.gpa, vcpu->run->memory.size, + // vcpu->run->memory.flags == KVM_MEMORY_EXIT_FLAG_PRIVATE); + + vcpu_run(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE); + + *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE; + vcpu_run(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), + TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE); + + printf("\t ... PASSED\n"); + + kvm_vm_free(vm); + + return 0; +} + +int main(int argc, char **argv) +{ + if (!is_tdx_enabled()) { + printf("TDX is not supported by the KVM\n" + "Skipping the TDX tests.\n"); + return 0; + } + + return verify_shared_mem(); +}
With this, vm_userspace_mem_region_add() can use restricted memory to back guest memory.
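As an illustration, a test can now request restricted-memfd backing when adding a memory region (a minimal sketch; the GPA, slot number and size are illustrative and mirror the UPM test added later in this series):

	/* Back a 2MB region at GPA 0x80000000, slot 3, with a restricted memfd */
	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD,
				    0x80000000, 3, (2 << 20) / vm->page_size, 0);

For this backing source, vm_userspace_mem_region_add() creates the restricted memfd, sizes it with ftruncate()/fallocate(), sets KVM_MEM_PRIVATE, and passes the fd to KVM via struct kvm_userspace_memory_region_ext.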
Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/kvm_util_base.h | 7 ++- .../testing/selftests/kvm/include/test_util.h | 2 + tools/testing/selftests/kvm/lib/kvm_util.c | 48 ++++++++++++++++--- tools/testing/selftests/kvm/lib/test_util.c | 7 +++ 4 files changed, 55 insertions(+), 9 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 30453e2de8396..950fd337898e1 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -33,7 +33,10 @@ typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
struct userspace_mem_region { - struct kvm_userspace_memory_region region; + union { + struct kvm_userspace_memory_region region; + struct kvm_userspace_memory_region_ext region_ext; + }; struct sparsebit *unused_phy_pages; struct sparsebit *protected_phy_pages; int fd; @@ -214,7 +217,7 @@ static inline bool kvm_has_cap(long cap)
#define kvm_do_ioctl(fd, cmd, arg) \ ({ \ - static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) == _IOC_SIZE(cmd), ""); \ + static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) >= _IOC_SIZE(cmd), ""); \ ioctl(fd, cmd, arg); \ })
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h index befc754ce9b3b..01456a78b3a2e 100644 --- a/tools/testing/selftests/kvm/include/test_util.h +++ b/tools/testing/selftests/kvm/include/test_util.h @@ -94,6 +94,7 @@ enum vm_mem_backing_src_type { VM_MEM_SRC_ANONYMOUS_HUGETLB_1GB, VM_MEM_SRC_ANONYMOUS_HUGETLB_2GB, VM_MEM_SRC_ANONYMOUS_HUGETLB_16GB, + VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD, VM_MEM_SRC_SHMEM, VM_MEM_SRC_SHARED_HUGETLB, NUM_SRC_TYPES, @@ -104,6 +105,7 @@ enum vm_mem_backing_src_type { struct vm_mem_backing_src_alias { const char *name; uint32_t flag; + bool need_restricted_memfd; };
#define MIN_RUN_DELAY_NS 200000UL diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 6673be2f49c31..4e5928fa71c44 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -15,7 +15,6 @@ #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> -#include <linux/kernel.h>
#define KVM_UTIL_MIN_PFN 2
@@ -799,6 +798,27 @@ void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags, errno, strerror(errno)); }
+/** + * Initialize memory in restricted_fd with size @memory_region_size and return + * the fd. + * + * Errors out if there's any error + */ +static int initialize_restricted_memfd(uint64_t memory_region_size) +{ + int ret; + int mfd = -1; + + mfd = syscall(__NR_memfd_restricted, 0); + TEST_ASSERT(mfd != -1, "Failed to create private memfd"); + ret = ftruncate(mfd, memory_region_size); + TEST_ASSERT(ret != -1, "Failed to resize memfd %d to %lx", mfd, memory_region_size); + ret = fallocate(mfd, 0, 0, memory_region_size); + TEST_ASSERT(ret != -1, "Failed to allocate %lx bytes in memfd %d", memory_region_size, mfd); + + return mfd; +} + /* * VM Userspace Memory Region Add * @@ -830,6 +850,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm, struct userspace_mem_region *region; size_t backing_src_pagesz = get_backing_src_pagesz(src_type); size_t alignment; + int restricted_memfd = -1;
TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages, "Number of guest pages is not compatible with the host. " @@ -927,14 +948,24 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
/* As needed perform madvise */ if ((src_type == VM_MEM_SRC_ANONYMOUS || - src_type == VM_MEM_SRC_ANONYMOUS_THP) && thp_configured()) { - ret = madvise(region->host_mem, npages * vm->page_size, - src_type == VM_MEM_SRC_ANONYMOUS ? MADV_NOHUGEPAGE : MADV_HUGEPAGE); + src_type == VM_MEM_SRC_ANONYMOUS_THP || + src_type == VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD) && thp_configured()) { + int advice = src_type == VM_MEM_SRC_ANONYMOUS_THP + ? MADV_HUGEPAGE + : MADV_NOHUGEPAGE; + ret = madvise(region->host_mem, npages * vm->page_size, advice); TEST_ASSERT(ret == 0, "madvise failed, addr: %p length: 0x%lx src_type: %s", region->host_mem, npages * vm->page_size, vm_mem_backing_src_alias(src_type)->name); }
+ if (vm_mem_backing_src_alias(src_type)->need_restricted_memfd) { + restricted_memfd = initialize_restricted_memfd(npages * vm->page_size); + TEST_ASSERT(restricted_memfd != -1, + "Failed to create restricted memfd"); + flags |= KVM_MEM_PRIVATE; + } + region->unused_phy_pages = sparsebit_alloc(); region->protected_phy_pages = sparsebit_alloc(); sparsebit_set_num(region->unused_phy_pages, @@ -944,13 +975,16 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm, region->region.guest_phys_addr = guest_paddr; region->region.memory_size = npages * vm->page_size; region->region.userspace_addr = (uintptr_t) region->host_mem; - ret = __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, ®ion->region); + region->region_ext.restricted_fd = restricted_memfd; + region->region_ext.restricted_offset = 0; + ret = __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION, ®ion->region_ext); TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed,\n" " rc: %i errno: %i\n" " slot: %u flags: 0x%x\n" - " guest_phys_addr: 0x%lx size: 0x%lx", + " guest_phys_addr: 0x%lx size: 0x%lx restricted_fd: %d", ret, errno, slot, flags, - guest_paddr, (uint64_t) region->region.memory_size); + guest_paddr, (uint64_t) region->region.memory_size, + restricted_memfd);
/* Add to quick lookup data structures */ vm_userspace_mem_region_gpa_insert(&vm->regions.gpa_tree, region); diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c index 6d23878bbfe1a..2d53e55d13565 100644 --- a/tools/testing/selftests/kvm/lib/test_util.c +++ b/tools/testing/selftests/kvm/lib/test_util.c @@ -8,6 +8,7 @@ #include <assert.h> #include <ctype.h> #include <limits.h> +#include <stdbool.h> #include <stdlib.h> #include <time.h> #include <sys/stat.h> @@ -254,6 +255,11 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i) */ .flag = MAP_SHARED, }, + [VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD] = { + .name = "anonymous_and_restricted_memfd", + .flag = ANON_FLAGS, + .need_restricted_memfd = true, + }, }; _Static_assert(ARRAY_SIZE(aliases) == NUM_SRC_TYPES, "Missing new backing src types?"); @@ -272,6 +278,7 @@ size_t get_backing_src_pagesz(uint32_t i) switch (i) { case VM_MEM_SRC_ANONYMOUS: case VM_MEM_SRC_SHMEM: + case VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD: return getpagesize(); case VM_MEM_SRC_ANONYMOUS_THP: return get_trans_hugepagesz();
If guest memory is backed by a restricted memfd:
+ UPM is in use, so the encrypted memory region has to be registered with KVM + There is no need to make a copy of guest memory before asking TDX to initialize the memory region
Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 43 +++++++++++++++---- 1 file changed, 34 insertions(+), 9 deletions(-)
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c index 4d6615b97770a..77803b7a1d739 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c @@ -207,6 +207,23 @@ static void tdx_td_finalizemr(struct kvm_vm *vm) tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL); }
+/* + * Other ioctls + */ + +/** + * Register a memory region that may contain encrypted data in KVM. + */ +static void register_encrypted_memory_region( + struct kvm_vm *vm, struct userspace_mem_region *region) +{ + struct kvm_enc_region range = { + .addr = region->region.guest_phys_addr, + .size = region->region.memory_size, + }; + vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &range); +} + /* * TD creation/setup/finalization */ @@ -393,30 +410,38 @@ static void load_td_memory_region(struct kvm_vm *vm, if (!sparsebit_any_set(pages)) return;
+ + if (region->region_ext.restricted_fd != -1) + register_encrypted_memory_region(vm, region); + sparsebit_for_each_set_range(pages, i, j) { const uint64_t size_to_load = (j - i + 1) * vm->page_size; const uint64_t offset = (i - lowest_page_in_region) * vm->page_size; const uint64_t hva = hva_base + offset; const uint64_t gpa = gpa_base + offset; - void *source_addr; + void *source_addr = (void *)hva;
/* * KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place, * hence we have to make a copy if there's only one backing * memory source */ - source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE, - MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); - TEST_ASSERT( - source_addr, - "Could not allocate memory for loading memory region"); - - memcpy(source_addr, (void *)hva, size_to_load); + if (region->region_ext.restricted_fd == -1) { + source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE, + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + TEST_ASSERT( + source_addr, + "Could not allocate memory for loading memory region"); + + memcpy(source_addr, (void *)hva, size_to_load); + memset((void *)hva, 0, size_to_load); + }
tdx_init_mem_region(vm, source_addr, gpa, size_to_load);
- munmap(source_addr, size_to_load); + if (region->region_ext.restricted_fd == -1) + munmap(source_addr, size_to_load); } }
vm_vaddr_alloc always allocates memory in memslot 0. Exposing _vm_vaddr_alloc allows callers to choose which memslot virtual memory is allocated in.
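For illustration, a caller can now do something like this (sketch; the addresses, slot and size are illustrative and follow the UPM test added later in this series):

	/* Allocate 2MB of encrypted guest virtual memory backed by memslot 3 */
	vm_vaddr_t gva = _vm_vaddr_alloc(vm, 2 << 20, /*vaddr_min=*/0x90000000,
					 /*paddr_min=*/0x80000000,
					 /*data_memslot=*/3, /*encrypt=*/true);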
Signed-off-by: Ackerley Tng ackerleytng@google.com --- tools/testing/selftests/kvm/include/kvm_util_base.h | 3 +++ tools/testing/selftests/kvm/lib/kvm_util.c | 2 +- 2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h index 950fd337898e1..abeffa5a47e88 100644 --- a/tools/testing/selftests/kvm/include/kvm_util_base.h +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h @@ -406,6 +406,9 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags); void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa); void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot); struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id); +vm_vaddr_t +_vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + vm_paddr_t paddr_min, uint32_t data_memslot, bool encrypt); vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min); vm_vaddr_t vm_vaddr_alloc_1to1(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 4e5928fa71c44..2f1e592d29dfd 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1286,7 +1286,7 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, * a unique set of pages, with the minimum real allocation being at least * a page. */ -static vm_vaddr_t +vm_vaddr_t _vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, vm_paddr_t paddr_min, uint32_t data_memslot, bool encrypt) {
Signed-off-by: Ackerley Tng ackerleytng@google.com --- tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h | 2 ++ tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c | 5 +++++ 2 files changed, 7 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index 857a297e51ac6..c8e4b9ce795ea 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -5,6 +5,7 @@ #include <stdint.h>
#define TDG_VP_INFO 1 +#define TDG_MEM_PAGE_ACCEPT 6
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_MAP_GPA 0x10001 @@ -38,5 +39,6 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, uint64_t *r8, uint64_t *r9, uint64_t *r10, uint64_t *r11); uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out); +uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index e0a39f29a0662..2ebc47e268779 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -205,3 +205,8 @@ uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_o *data_out = args.r11; return ret; } + +uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level) +{ + return __tdx_module_call(TDG_MEM_PAGE_ACCEPT, gpa | level, 0, 0, 0, NULL); +}
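For reference, the level argument selects the mapping size to accept (0 for 4K, 1 for 2M); the UPM selftest added later in this series accepts faulting pages at 4K granularity, roughly:

	/* Accept a single 4K private page at @gpa (level 0 == 4K) */
	ret = tdg_mem_page_accept(gpa, 0);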
Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/include/x86_64/tdx/tdx.h | 21 +++++++++++++++++++ .../selftests/kvm/lib/x86_64/tdx/tdx.c | 19 +++++++++++++++++ 2 files changed, 40 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h index c8e4b9ce795ea..2dfb9432c32f9 100644 --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h @@ -5,6 +5,7 @@ #include <stdint.h>
#define TDG_VP_INFO 1 +#define TDG_VP_VEINFO_GET 3 #define TDG_MEM_PAGE_ACCEPT 6
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 @@ -41,4 +42,24 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out); uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level);
+/* + * Used by the #VE exception handler to gather the #VE exception + * info from the TDX module. This is a software only structure + * and not part of the TDX module/VMM ABI. + * + * Adapted from arch/x86/include/asm/tdx.h + */ +struct ve_info { + uint64_t exit_reason; + uint64_t exit_qual; + /* Guest Linear (virtual) Address */ + uint64_t gla; + /* Guest Physical Address */ + uint64_t gpa; + uint32_t instr_len; + uint32_t instr_info; +}; + +uint64_t tdg_vp_veinfo_get(struct ve_info *ve); + #endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c index 2ebc47e268779..11a19832758bb 100644 --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c @@ -210,3 +210,22 @@ uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level) { return __tdx_module_call(TDG_MEM_PAGE_ACCEPT, gpa | level, 0, 0, 0, NULL); } + +uint64_t tdg_vp_veinfo_get(struct ve_info *ve) +{ + uint64_t ret; + struct tdx_module_output out; + + memset(&out, 0, sizeof(struct tdx_module_output)); + + ret = __tdx_module_call(TDG_VP_VEINFO_GET, 0, 0, 0, 0, &out); + + ve->exit_reason = out.rcx; + ve->exit_qual = out.rdx; + ve->gla = out.r8; + ve->gpa = out.r9; + ve->instr_len = out.r10 & 0xffffffff; + ve->instr_info = out.r10 >> 32; + + return ret; +}
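To show how tdg_vp_veinfo_get() and tdg_mem_page_accept() fit together, here is a condensed sketch of the #VE handler used by the UPM selftest later in this series (error handling reduced to a fatal report):

	static void guest_ve_handler(struct ex_regs *regs)
	{
		struct ve_info ve;

		/* Fetch the #VE information from the TDX module */
		if (tdg_vp_veinfo_get(&ve))
			tdx_test_fatal(__LINE__);

		/* This test only expects EPT violations on not-yet-accepted pages */
		if (ve.exit_reason != EXIT_REASON_EPT_VIOLATION)
			tdx_test_fatal(__LINE__);

		/* Accept the faulting page as private at 4K granularity */
		if (tdg_mem_page_accept(ve.gpa, 0))
			tdx_test_fatal(__LINE__);
	}

The real handler additionally validates that the faulting GPA lies within the focus area and reports the accepted GPA to the host over an IO port.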
This tests the use of restricted memory with explicit MapGPA calls.
Signed-off-by: Ackerley Tng ackerleytng@google.com --- tools/testing/selftests/kvm/.gitignore | 1 + tools/testing/selftests/kvm/Makefile | 1 + .../selftests/kvm/x86_64/tdx_upm_test.c | 392 ++++++++++++++++++ 3 files changed, 394 insertions(+) create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore index e1663b0f809b4..c1def2271f4cd 100644 --- a/tools/testing/selftests/kvm/.gitignore +++ b/tools/testing/selftests/kvm/.gitignore @@ -67,6 +67,7 @@ /x86_64/triple_fault_event_test /x86_64/tdx_vm_tests /x86_64/tdx_shared_mem_test +/x86_64/tdx_upm_test /access_tracking_perf_test /demand_paging_test /dirty_log_test diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 27e9148212fa5..2b1434368a0f2 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -153,6 +153,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test TEST_GEN_PROGS_x86_64 += system_counter_offset_test TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test +TEST_GEN_PROGS_x86_64 += x86_64/tdx_upm_test
# Compiled outputs used by test targets TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test diff --git a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c new file mode 100644 index 0000000000000..13914aebd7da7 --- /dev/null +++ b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c @@ -0,0 +1,392 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <asm/kvm.h> +#include <asm/vmx.h> +#include <linux/kvm.h> +#include <stdbool.h> +#include <stdint.h> + +#include "kvm_util_base.h" +#include "processor.h" +#include "tdx/tdcall.h" +#include "tdx/tdx.h" +#include "tdx/tdx_util.h" +#include "tdx/test_util.h" +#include "test_util.h" + +/* TDX UPM test patterns */ +#define PATTERN_CONFIDENCE_CHECK (0x11) +#define PATTERN_HOST_FOCUS (0x22) +#define PATTERN_GUEST_GENERAL (0x33) +#define PATTERN_GUEST_FOCUS (0x44) + +/* + * 0x80000000 is arbitrarily selected. The selected address need not be the same + * as TDX_UPM_TEST_AREA_GVA_PRIVATE, but it should not overlap with selftest + * code or boot page. + */ +#define TDX_UPM_TEST_AREA_GPA (0x80000000) +/* Test area GPA is arbitrarily selected */ +#define TDX_UPM_TEST_AREA_GVA_PRIVATE (0x90000000) +/* Select any bit that can be used as a flag */ +#define TDX_UPM_TEST_AREA_GVA_SHARED_BIT (32) +/* + * TDX_UPM_TEST_AREA_GVA_SHARED is used to map the same GPA twice into the + * guest, once as shared and once as private + */ +#define TDX_UPM_TEST_AREA_GVA_SHARED \ + (TDX_UPM_TEST_AREA_GVA_PRIVATE | \ + BIT_ULL(TDX_UPM_TEST_AREA_GVA_SHARED_BIT)) + +/* The test area is 2MB in size */ +#define TDX_UPM_TEST_AREA_SIZE (2 << 20) +/* 0th general area is 1MB in size */ +#define TDX_UPM_GENERAL_AREA_0_SIZE (1 << 20) +/* Focus area is 40KB in size */ +#define TDX_UPM_FOCUS_AREA_SIZE (40 << 10) +/* 1st general area is the rest of the space in the test area */ +#define TDX_UPM_GENERAL_AREA_1_SIZE \ + (TDX_UPM_TEST_AREA_SIZE - TDX_UPM_GENERAL_AREA_0_SIZE - \ + TDX_UPM_FOCUS_AREA_SIZE) + +/* + * The test memory area is set up as two general areas, sandwiching a focus + * area. The general areas act as control areas. After they are filled, they + * are not expected to change throughout the tests. The focus area is memory + * permissions change from private to shared and vice-versa. + * + * The focus area is intentionally small, and sandwiched to test that when the + * focus area's permissions change, the other areas' permissions are not + * affected. 
+ */ +struct __packed tdx_upm_test_area { + uint8_t general_area_0[TDX_UPM_GENERAL_AREA_0_SIZE]; + uint8_t focus_area[TDX_UPM_FOCUS_AREA_SIZE]; + uint8_t general_area_1[TDX_UPM_GENERAL_AREA_1_SIZE]; +}; + +static void fill_test_area(struct tdx_upm_test_area *test_area_base, + uint8_t pattern) +{ + memset(test_area_base, pattern, sizeof(*test_area_base)); +} + +static void fill_focus_area(struct tdx_upm_test_area *test_area_base, + uint8_t pattern) +{ + memset(test_area_base->focus_area, pattern, + sizeof(test_area_base->focus_area)); +} + +static bool check_area(uint8_t *base, uint64_t size, uint8_t expected_pattern) +{ + size_t i; + + for (i = 0; i < size; i++) { + if (base[i] != expected_pattern) + return false; + } + + return true; +} + +static bool check_general_areas(struct tdx_upm_test_area *test_area_base, + uint8_t expected_pattern) +{ + return (check_area(test_area_base->general_area_0, + sizeof(test_area_base->general_area_0), + expected_pattern) && + check_area(test_area_base->general_area_1, + sizeof(test_area_base->general_area_1), + expected_pattern)); +} + +static bool check_focus_area(struct tdx_upm_test_area *test_area_base, + uint8_t expected_pattern) +{ + return check_area(test_area_base->focus_area, + sizeof(test_area_base->focus_area), expected_pattern); +} + +static bool check_test_area(struct tdx_upm_test_area *test_area_base, + uint8_t expected_pattern) +{ + return (check_general_areas(test_area_base, expected_pattern) && + check_focus_area(test_area_base, expected_pattern)); +} + +static bool fill_and_check(struct tdx_upm_test_area *test_area_base, uint8_t pattern) +{ + fill_test_area(test_area_base, pattern); + + return check_test_area(test_area_base, pattern); +} + +#define TDX_UPM_TEST_ASSERT(x) \ + do { \ + if (!(x)) \ + tdx_test_fatal(__LINE__); \ + } while (0) + +/* + * Shared variables between guest and host + */ +static struct tdx_upm_test_area *test_area_gpa_private; +static struct tdx_upm_test_area *test_area_gpa_shared; + +/* + * Test stages for syncing with host + */ +enum { + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST = 1, + SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST, + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN, +}; + +#define TDX_UPM_TEST_ACCEPT_PRINT_PORT 0x87 + +/** + * Does vcpu_run, and also manages memory conversions if requested by the TD. + */ +void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm, + struct kvm_vcpu *vcpu) +{ + for (;;) { + vcpu_run(vcpu); + if ( + vcpu->run->exit_reason == KVM_EXIT_IO && + vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) { + uint64_t gpa = tdx_test_read_64bit( + vcpu, TDX_UPM_TEST_ACCEPT_PRINT_PORT); + printf("\t ... 
guest accepting 1 page at GPA: 0x%lx\n", gpa); + continue; + } + + break; + } +} + +static void guest_upm_explicit(void) +{ + uint64_t ret = 0; + uint64_t failed_gpa; + + struct tdx_upm_test_area *test_area_gva_private = + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE; + struct tdx_upm_test_area *test_area_gva_shared = + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED; + + /* Check: host reading private memory does not modify guest's view */ + fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST); + + TDX_UPM_TEST_ASSERT( + check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + /* Remap focus area as shared */ + ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_shared->focus_area, + sizeof(test_area_gpa_shared->focus_area), + &failed_gpa); + TDX_UPM_TEST_ASSERT(!ret); + + /* General areas should be unaffected by remapping */ + TDX_UPM_TEST_ASSERT( + check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + /* + * Use memory contents to confirm that the memory allocated using mmap + * is used as backing memory for shared memory - PATTERN_CONFIDENCE_CHECK + * was written by the VMM at the beginning of this test. + */ + TDX_UPM_TEST_ASSERT( + check_focus_area(test_area_gva_shared, PATTERN_CONFIDENCE_CHECK)); + + /* Guest can use focus area after remapping as shared */ + fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST); + + /* Check that guest has the same view of shared memory */ + TDX_UPM_TEST_ASSERT( + check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS)); + + /* Remap focus area back to private */ + ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_private->focus_area, + sizeof(test_area_gpa_private->focus_area), + &failed_gpa); + TDX_UPM_TEST_ASSERT(!ret); + + /* General areas should be unaffected by remapping */ + TDX_UPM_TEST_ASSERT( + check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + /* Focus area should be zeroed after remapping */ + TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_private, 0)); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN); + + /* Check that guest can use private memory after focus area is remapped as private */ + TDX_UPM_TEST_ASSERT( + fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + tdx_test_success(); +} + +static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu, + struct tdx_upm_test_area *test_area_base_hva) +{ + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST); + + /* + * Check that host should read PATTERN_CONFIDENCE_CHECK from guest's + * private memory. This confirms that regular memory (userspace_addr in + * struct kvm_userspace_memory_region) is used to back the host's view + * of private memory, since PATTERN_CONFIDENCE_CHECK was written to that + * memory before starting the guest. 
+ */ + TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), + "Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory."); + + vcpu_run_and_manage_memory_conversions(vm, vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), + SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST); + + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_GUEST_FOCUS), + "Host should have the same view of shared memory as guest."); + TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), + "Host's view of private memory should still be backed by regular memory."); + + /* Check that host can use shared memory */ + fill_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS); + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS), + "Host should be able to use shared memory."); + + vcpu_run_and_manage_memory_conversions(vm, vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); + ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN); + + TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), + "Host's view of private memory should be backed by regular memory."); + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS), + "Host's view of private memory should be backed by regular memory."); + + vcpu_run(vcpu); + TDX_TEST_CHECK_GUEST_FAILURE(vcpu); + TDX_TEST_ASSERT_SUCCESS(vcpu); + + printf("\t ... PASSED\n"); +} + +static bool address_between(uint64_t addr, void *lo, void *hi) +{ + return (uint64_t)lo <= addr && addr < (uint64_t)hi; +} + +static void guest_ve_handler(struct ex_regs *regs) +{ + uint64_t ret; + struct ve_info ve; + + ret = tdg_vp_veinfo_get(&ve); + TDX_UPM_TEST_ASSERT(!ret); + + /* For this test, we will only handle EXIT_REASON_EPT_VIOLATION */ + TDX_UPM_TEST_ASSERT(ve.exit_reason == EXIT_REASON_EPT_VIOLATION); + + /* Validate GPA in fault */ + TDX_UPM_TEST_ASSERT( + address_between(ve.gpa, + test_area_gpa_private->focus_area, + test_area_gpa_private->general_area_1)); + + tdx_test_send_64bit(TDX_UPM_TEST_ACCEPT_PRINT_PORT, ve.gpa); + +#define MEM_PAGE_ACCEPT_LEVEL_4K 0 +#define MEM_PAGE_ACCEPT_LEVEL_2M 1 + ret = tdg_mem_page_accept(ve.gpa, MEM_PAGE_ACCEPT_LEVEL_4K); + TDX_UPM_TEST_ASSERT(!ret); +} + +static void verify_upm_test(void) +{ + struct kvm_vm *vm; + struct kvm_vcpu *vcpu; + + vm_vaddr_t test_area_gva_private; + struct tdx_upm_test_area *test_area_base_hva; + uint64_t test_area_npages; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_upm_explicit); + + vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler); + + /* + * Set up shared memory page for testing by first allocating as private + * and then mapping the same GPA again as shared. This way, the TD does + * not have to remap its page tables at runtime. 
+ */ + test_area_npages = TDX_UPM_TEST_AREA_SIZE / vm->page_size; + vm_userspace_mem_region_add(vm, + VM_MEM_SRC_ANONYMOUS_AND_RESTRICTED_MEMFD, + TDX_UPM_TEST_AREA_GPA, 3, test_area_npages, + 0); + + test_area_gva_private = _vm_vaddr_alloc( + vm, TDX_UPM_TEST_AREA_SIZE, TDX_UPM_TEST_AREA_GVA_PRIVATE, + TDX_UPM_TEST_AREA_GPA, 3, true); + ASSERT_EQ(test_area_gva_private, TDX_UPM_TEST_AREA_GVA_PRIVATE); + + test_area_gpa_private = (struct tdx_upm_test_area *) + addr_gva2gpa(vm, test_area_gva_private); + virt_map_shared(vm, TDX_UPM_TEST_AREA_GVA_SHARED, + (uint64_t)test_area_gpa_private, + test_area_npages); + ASSERT_EQ(addr_gva2gpa(vm, TDX_UPM_TEST_AREA_GVA_SHARED), + (vm_paddr_t)test_area_gpa_private); + + test_area_base_hva = addr_gva2hva(vm, TDX_UPM_TEST_AREA_GVA_PRIVATE); + + TEST_ASSERT(fill_and_check(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), + "Failed to mark memory intended as backing memory for TD shared memory"); + + sync_global_to_guest(vm, test_area_gpa_private); + test_area_gpa_shared = (struct tdx_upm_test_area *) + ((uint64_t)test_area_gpa_private | BIT_ULL(vm->pa_bits - 1)); + sync_global_to_guest(vm, test_area_gpa_shared); + + td_finalize(vm); + + printf("Verifying UPM functionality: explicit MapGPA\n"); + + run_selftest(vm, vcpu, test_area_base_hva); + + kvm_vm_free(vm); +} + +int main(int argc, char **argv) +{ + /* Disable stdout buffering */ + setbuf(stdout, NULL); + + if (!is_tdx_enabled()) { + printf("TDX is not supported by the KVM\n" + "Skipping the TDX tests.\n"); + return 0; + } + + run_in_new_process(&verify_upm_test); +}
This tests the use of restricted memory without explicit MapGPA calls, i.e. memory conversions are driven by KVM_EXIT_MEMORY_FAULT exits to the host rather than MapGPA hypercalls from the guest.
Signed-off-by: Ackerley Tng ackerleytng@google.com --- .../selftests/kvm/x86_64/tdx_upm_test.c | 88 ++++++++++++++++--- 1 file changed, 78 insertions(+), 10 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c index 13914aebd7da7..56ba3d4fb15a5 100644 --- a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c +++ b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c @@ -149,11 +149,18 @@ enum { * Does vcpu_run, and also manages memory conversions if requested by the TD. */ void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm, - struct kvm_vcpu *vcpu) + struct kvm_vcpu *vcpu, bool handle_conversions) { for (;;) { vcpu_run(vcpu); - if ( + if (handle_conversions && + vcpu->run->exit_reason == KVM_EXIT_MEMORY_FAULT) { + handle_memory_conversion( + vm, vcpu->run->memory.gpa, + vcpu->run->memory.size, + vcpu->run->memory.flags == KVM_MEMORY_EXIT_FLAG_PRIVATE); + continue; + } else if ( vcpu->run->exit_reason == KVM_EXIT_IO && vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) { uint64_t gpa = tdx_test_read_64bit( @@ -233,8 +240,53 @@ static void guest_upm_explicit(void) tdx_test_success(); }
+static void guest_upm_implicit(void) +{ + struct tdx_upm_test_area *test_area_gva_private = + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE; + struct tdx_upm_test_area *test_area_gva_shared = + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED; + + /* Check: host reading private memory does not modify guest's view */ + fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST); + + TDX_UPM_TEST_ASSERT( + check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + /* Use focus area as shared */ + fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS); + + /* General areas should not be affected */ + TDX_UPM_TEST_ASSERT( + check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST); + + /* Check that guest has the same view of shared memory */ + TDX_UPM_TEST_ASSERT( + check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS)); + + /* Use focus area as private */ + fill_focus_area(test_area_gva_private, PATTERN_GUEST_FOCUS); + + /* General areas should be unaffected by remapping */ + TDX_UPM_TEST_ASSERT( + check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN); + + /* Check that guest can use private memory after focus area is remapped as private */ + TDX_UPM_TEST_ASSERT( + fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + tdx_test_success(); +} + static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu, - struct tdx_upm_test_area *test_area_base_hva) + struct tdx_upm_test_area *test_area_base_hva, + bool implicit) { vcpu_run(vcpu); TDX_TEST_CHECK_GUEST_FAILURE(vcpu); @@ -253,7 +305,7 @@ static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu, TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), "Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory.");
- vcpu_run_and_manage_memory_conversions(vm, vcpu); + vcpu_run_and_manage_memory_conversions(vm, vcpu, implicit); TDX_TEST_CHECK_GUEST_FAILURE(vcpu); TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); @@ -270,7 +322,7 @@ static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu, TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS), "Host should be able to use shared memory.");
- vcpu_run_and_manage_memory_conversions(vm, vcpu); + vcpu_run_and_manage_memory_conversions(vm, vcpu, implicit); TDX_TEST_CHECK_GUEST_FAILURE(vcpu); TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE); @@ -319,18 +371,20 @@ static void guest_ve_handler(struct ex_regs *regs) TDX_UPM_TEST_ASSERT(!ret); }
-static void verify_upm_test(void) +static void verify_upm_test(bool implicit) { struct kvm_vm *vm; struct kvm_vcpu *vcpu;
+ void *guest_code; vm_vaddr_t test_area_gva_private; struct tdx_upm_test_area *test_area_base_hva; uint64_t test_area_npages;
vm = td_create(); td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); - vcpu = td_vcpu_add(vm, 0, guest_upm_explicit); + guest_code = implicit ? guest_upm_implicit : guest_upm_explicit; + vcpu = td_vcpu_add(vm, 0, guest_code);
vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler);
@@ -370,13 +424,26 @@ static void verify_upm_test(void)
td_finalize(vm);
- printf("Verifying UPM functionality: explicit MapGPA\n"); + if (implicit) + printf("Verifying UPM functionality: implicit conversion\n"); + else + printf("Verifying UPM functionality: explicit MapGPA\n");
- run_selftest(vm, vcpu, test_area_base_hva); + run_selftest(vm, vcpu, test_area_base_hva, implicit);
kvm_vm_free(vm); }
+void verify_upm_test_explicit(void) +{ + verify_upm_test(false); +} + +void verify_upm_test_implicit(void) +{ + verify_upm_test(true); +} + int main(int argc, char **argv) { /* Disable stdout buffering */ @@ -388,5 +455,6 @@ int main(int argc, char **argv) return 0; }
- run_in_new_process(&verify_upm_test); + run_in_new_process(&verify_upm_test_explicit); + run_in_new_process(&verify_upm_test_implicit); }