This is v6 of the TDX selftests, following RFC v5 sent more than a year ago. While it has been a while since the previous posting, the TDX selftests have been kept up to date with the latest TDX development and have supported the health of the TDX base series.
With TDX base support now in kvm-coco-queue, this is a good opportunity to again share the TDX selftests and to drop the "RFC" prefix to convey that this work is now ready to be considered for inclusion alongside the TDX base support.
Apart from the addition of one new test ("KVM: selftests: TDX: Test LOG_DIRTY_PAGES flag to a non-GUEST_MEMFD memslot"), this series should be familiar to anybody who previously looked at "RFC v5". All previous feedback has been addressed. At the same time, changes to the TDX base support required several matching changes in the TDX selftests, which prompted dropping all previously received "Reviewed-by" tags to indicate that the patches deserve a new look. In support of upstream inclusion, this version also includes many non-functional changes intended to follow the style and customs of this area.
This series is based on: commit 58dd191cf39c ("KVM: x86: Forbid the use of kvm_load_host_xsave_state() with guest_state_protected") from branch kvm-coco-queue on git://git.kernel.org/pub/scm/virt/kvm/kvm.git
While the kvm-coco-queue already contains these selftests, this is a more up-to-date version of the patches.
The tree can be found at: https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-v6
I would like to acknowledge the following people, who helped keep these patches up to date with the latest TDX patches and prepare them for review:
Reinette Chatre reinette.chatre@intel.com Isaku Yamahata isaku.yamahata@intel.com Binbin Wu binbin.wu@linux.intel.com Adrian Hunter adrian.hunter@intel.com Rick Edgecombe rick.p.edgecombe@intel.com
Links to earlier patch series:
RFC v5: https://lore.kernel.org/all/20231212204647.2170650-1-sagis@google.com/ RFC v4: https://lore.kernel.org/lkml/20230725220132.2310657-1-afranji@google.com/ RFC v3: https://lore.kernel.org/lkml/20230121001542.2472357-1-ackerleytng@google.com... RFC v2: https://lore.kernel.org/lkml/20220830222000.709028-1-sagis@google.com/T/#u RFC v1: https://lore.kernel.org/lkml/20210726183816.1343022-1-erdemaktas@google.com/...
Ackerley Tng (12): KVM: selftests: Add function to allow one-to-one GVA to GPA mappings KVM: selftests: Expose function that sets up sregs based on VM's mode KVM: selftests: Store initial stack address in struct kvm_vcpu KVM: selftests: Add vCPU descriptor table initialization utility KVM: selftests: TDX: Use KVM_TDX_CAPABILITIES to validate TDs' attribute configuration KVM: selftests: TDX: Update load_td_memory_region() for VM memory backed by guest memfd KVM: selftests: Add functions to allow mapping as shared KVM: selftests: KVM: selftests: Expose new vm_vaddr_alloc_private() KVM: selftests: TDX: Add support for TDG.MEM.PAGE.ACCEPT KVM: selftests: TDX: Add support for TDG.VP.VEINFO.GET KVM: selftests: TDX: Add TDX UPM selftest KVM: selftests: TDX: Add TDX UPM selftests for implicit conversion
Erdem Aktas (3): KVM: selftests: Add helper functions to create TDX VMs KVM: selftests: TDX: Add TDX lifecycle test KVM: selftests: TDX: Add TDX HLT exit test
Isaku Yamahata (1): KVM: selftests: Update kvm_init_vm_address_properties() for TDX
Roger Wang (1): KVM: selftests: TDX: Add TDG.VP.INFO test
Ryan Afranji (2): KVM: selftests: TDX: Verify the behavior when host consumes a TD private memory KVM: selftests: TDX: Add shared memory test
Sagi Shahar (10): KVM: selftests: TDX: Add report_fatal_error test KVM: selftests: TDX: Adding test case for TDX port IO KVM: selftests: TDX: Add basic TDX CPUID test KVM: selftests: TDX: Add basic TDG.VP.VMCALL<GetTdVmCallInfo> test KVM: selftests: TDX: Add TDX IO writes test KVM: selftests: TDX: Add TDX IO reads test KVM: selftests: TDX: Add TDX MSR read/write tests KVM: selftests: TDX: Add TDX MMIO reads test KVM: selftests: TDX: Add TDX MMIO writes test KVM: selftests: TDX: Add TDX CPUID TDVMCALL test
Yan Zhao (1): KVM: selftests: TDX: Test LOG_DIRTY_PAGES flag to a non-GUEST_MEMFD memslot
tools/testing/selftests/kvm/Makefile.kvm | 8 + .../testing/selftests/kvm/include/kvm_util.h | 36 + .../selftests/kvm/include/x86/kvm_util_arch.h | 1 + .../selftests/kvm/include/x86/processor.h | 2 + .../selftests/kvm/include/x86/tdx/td_boot.h | 83 ++ .../kvm/include/x86/tdx/td_boot_asm.h | 16 + .../selftests/kvm/include/x86/tdx/tdcall.h | 54 + .../selftests/kvm/include/x86/tdx/tdx.h | 67 + .../selftests/kvm/include/x86/tdx/tdx_util.h | 23 + .../selftests/kvm/include/x86/tdx/test_util.h | 133 ++ tools/testing/selftests/kvm/lib/kvm_util.c | 74 +- .../testing/selftests/kvm/lib/x86/processor.c | 108 +- .../selftests/kvm/lib/x86/tdx/td_boot.S | 100 ++ .../selftests/kvm/lib/x86/tdx/tdcall.S | 163 +++ tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 243 ++++ .../selftests/kvm/lib/x86/tdx/tdx_util.c | 643 +++++++++ .../selftests/kvm/lib/x86/tdx/test_util.c | 187 +++ .../selftests/kvm/x86/tdx_shared_mem_test.c | 129 ++ .../testing/selftests/kvm/x86/tdx_upm_test.c | 461 ++++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 1254 +++++++++++++++++ 20 files changed, 3742 insertions(+), 43 deletions(-) create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/td_boot.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/td_boot_asm.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/tdcall.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/tdx.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/test_util.h create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/td_boot.S create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/tdx.c create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/test_util.c create mode 100644 tools/testing/selftests/kvm/x86/tdx_shared_mem_test.c create mode 100644 tools/testing/selftests/kvm/x86/tdx_upm_test.c create mode 100644 tools/testing/selftests/kvm/x86/tdx_vm_test.c
From: Ackerley Tng ackerleytng@google.com
One-to-one GVA to GPA mappings can be used in the guest to set up boot sequences during which paging is enabled, hence requiring a transition from using physical to virtual addresses in consecutive instructions.
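For illustration only (not part of this patch), a minimal sketch of how such an identity-mapped allocation could be used, with boot_code_size and boot_code_base_gpa as placeholder names:

  vm_vaddr_t addr;

  /* Allocate boot code so that GVA == GPA within the chosen memslot. */
  addr = vm_vaddr_identity_alloc(vm, boot_code_size, boot_code_base_gpa,
                                 MEM_REGION_CODE);
  TEST_ASSERT_EQ(addr, addr_gva2gpa(vm, addr));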
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- .../testing/selftests/kvm/include/kvm_util.h | 3 +++ tools/testing/selftests/kvm/lib/kvm_util.c | 27 +++++++++++++++---- 2 files changed, 25 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 373912464fb4..1bc0b44e78de 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -609,6 +609,9 @@ vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, enum kvm_mem_region_type type); +vm_vaddr_t vm_vaddr_identity_alloc(struct kvm_vm *vm, size_t sz, + vm_vaddr_t vaddr_min, + enum kvm_mem_region_type type); vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages); vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 0be1c61263eb..40dd63f2bd05 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1443,15 +1443,14 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, }
static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, - vm_vaddr_t vaddr_min, + vm_vaddr_t vaddr_min, vm_paddr_t paddr_min, enum kvm_mem_region_type type, bool protected) { uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
virt_pgd_alloc(vm); - vm_paddr_t paddr = __vm_phy_pages_alloc(vm, pages, - KVM_UTIL_MIN_PFN * vm->page_size, + vm_paddr_t paddr = __vm_phy_pages_alloc(vm, pages, paddr_min, vm->memslots[type], protected);
/* @@ -1475,7 +1474,7 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, enum kvm_mem_region_type type) { - return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, + return ____vm_vaddr_alloc(vm, sz, vaddr_min, KVM_UTIL_MIN_PFN * vm->page_size, type, vm_arch_has_protected_memory(vm)); }
@@ -1483,7 +1482,25 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, enum kvm_mem_region_type type) { - return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, false); + return ____vm_vaddr_alloc(vm, sz, vaddr_min, KVM_UTIL_MIN_PFN * vm->page_size, type, false); +} + +/* + * Allocate memory in @vm of size @sz beginning with the desired virtual address + * of @vaddr_min and backed by physical address equal to returned virtual + * address. + * + * Return the address where the memory is allocated. + */ +vm_vaddr_t vm_vaddr_identity_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, + enum kvm_mem_region_type type) +{ + vm_vaddr_t gva = ____vm_vaddr_alloc(vm, sz, vaddr_min, + (vm_paddr_t)vaddr_min, type, + vm_arch_has_protected_memory(vm)); + TEST_ASSERT_EQ(gva, addr_gva2gpa(vm, gva)); + + return gva; }
/*
From: Ackerley Tng ackerleytng@google.com
This allows initializing sregs without setting vCPU registers in KVM.
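As a minimal sketch (illustration only), a caller that cannot go through vcpu_sregs_get()/vcpu_sregs_set(), e.g. when only assembling boot parameters, could do:

  struct kvm_sregs sregs;

  /* Build mode-specific sregs values without touching any vCPU in KVM. */
  memset(&sregs, 0, sizeof(sregs));
  vcpu_setup_mode_sregs(vm, &sregs);
  /* sregs.cr0/cr3/cr4, GDT/IDT bases and segments are now populated. */

This mirrors how the TDX utilities later in this series assemble TD boot parameters.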
No functional change intended.
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/processor.h | 1 + .../testing/selftests/kvm/lib/x86/processor.c | 45 ++++++++++--------- 2 files changed, 25 insertions(+), 21 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h index b46f1e5120d1..3162c6e8ea23 100644 --- a/tools/testing/selftests/kvm/include/x86/processor.h +++ b/tools/testing/selftests/kvm/include/x86/processor.h @@ -1025,6 +1025,7 @@ static inline struct kvm_cpuid2 *allocate_kvm_cpuid2(int nr_entries) }
void vcpu_init_cpuid(struct kvm_vcpu *vcpu, const struct kvm_cpuid2 *cpuid); +void vcpu_setup_mode_sregs(struct kvm_vm *vm, struct kvm_sregs *sregs);
static inline void vcpu_get_cpuid(struct kvm_vcpu *vcpu) { diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c index 55971df6906c..1d6ae28aa398 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -488,34 +488,37 @@ static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp) segp->present = 1; }
-static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu) +void vcpu_setup_mode_sregs(struct kvm_vm *vm, struct kvm_sregs *sregs) { - struct kvm_sregs sregs; - TEST_ASSERT_EQ(vm->mode, VM_MODE_PXXV48_4K);
- /* Set mode specific system register values. */ - vcpu_sregs_get(vcpu, &sregs); - - sregs.idt.base = vm->arch.idt; - sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1; - sregs.gdt.base = vm->arch.gdt; - sregs.gdt.limit = getpagesize() - 1; + sregs->idt.base = vm->arch.idt; + sregs->idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1; + sregs->gdt.base = vm->arch.gdt; + sregs->gdt.limit = getpagesize() - 1;
- sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG; - sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR; + sregs->cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG; + sregs->cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR; if (kvm_cpu_has(X86_FEATURE_XSAVE)) - sregs.cr4 |= X86_CR4_OSXSAVE; - sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX); + sregs->cr4 |= X86_CR4_OSXSAVE; + sregs->efer |= (EFER_LME | EFER_LMA | EFER_NX); + + kvm_seg_set_unusable(&sregs->ldt); + kvm_seg_set_kernel_code_64bit(&sregs->cs); + kvm_seg_set_kernel_data_64bit(&sregs->ds); + kvm_seg_set_kernel_data_64bit(&sregs->es); + kvm_seg_set_kernel_data_64bit(&sregs->gs); + kvm_seg_set_tss_64bit(vm->arch.tss, &sregs->tr);
- kvm_seg_set_unusable(&sregs.ldt); - kvm_seg_set_kernel_code_64bit(&sregs.cs); - kvm_seg_set_kernel_data_64bit(&sregs.ds); - kvm_seg_set_kernel_data_64bit(&sregs.es); - kvm_seg_set_kernel_data_64bit(&sregs.gs); - kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr); + sregs->cr3 = vm->pgd; +} + +static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu) +{ + struct kvm_sregs sregs;
- sregs.cr3 = vm->pgd; + vcpu_sregs_get(vcpu, &sregs); + vcpu_setup_mode_sregs(vm, &sregs); vcpu_sregs_set(vcpu, &sregs); }
From: Ackerley Tng ackerleytng@google.com
TDX guests' registers cannot be initialized directly using vcpu_regs_set(), hence the stack pointer needs to be initialized by the guest itself, running boot code beginning at the reset vector.
Store the stack address as part of struct kvm_vcpu so that it remains accessible later, to be passed to the boot code for rsp initialization.
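For context, an excerpt of the intended consumer (the TDX boot-parameter setup later in this series), where the stored address is truncated to 32 bits for the pre-paging boot code:

  /* Pass the stack address to the TD boot code for rsp initialization. */
  vcpu_params->esp_gva = (uint32_t)(uint64_t)vcpu->initial_stack_addr;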
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/include/kvm_util.h | 1 + tools/testing/selftests/kvm/lib/x86/processor.c | 2 ++ 2 files changed, 3 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 1bc0b44e78de..74ecfd8d7ae0 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -58,6 +58,7 @@ struct kvm_vcpu { int fd; struct kvm_vm *vm; struct kvm_run *run; + vm_vaddr_t initial_stack_addr; #ifdef __x86_64__ struct kvm_cpuid2 *cpuid; #endif diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c index 1d6ae28aa398..7c0fe3b138a1 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -695,6 +695,8 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id) vcpu_init_sregs(vm, vcpu); vcpu_init_xcrs(vm, vcpu);
+ vcpu->initial_stack_addr = stack_vaddr; + /* Setup guest general purpose registers */ vcpu_regs_get(vcpu, ®s); regs.rflags = regs.rflags | 0x2;
From: Ackerley Tng ackerleytng@google.com
Turn vCPU descriptor table initialization into a utility for use by tests needing finer control, for example for TDX TD creation.
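For illustration, the rough ordering the TDX setup later in this series relies on (a sketch, not part of this patch):

  vm_init_descriptor_tables(vm);        /* while memory is host-writable */
  /* ... add vCPUs, load guest memory ... */
  sync_exception_handlers_to_guest(vm); /* before the TD is finalized */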
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/include/x86/processor.h | 1 + tools/testing/selftests/kvm/lib/x86/processor.c | 7 ++++++- 2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h index 3162c6e8ea23..7c4e545ae9c9 100644 --- a/tools/testing/selftests/kvm/include/x86/processor.h +++ b/tools/testing/selftests/kvm/include/x86/processor.h @@ -1178,6 +1178,7 @@ struct idt_entry { uint32_t offset2; uint32_t reserved; };
+void sync_exception_handlers_to_guest(struct kvm_vm *vm); void vm_install_exception_handler(struct kvm_vm *vm, int vector, void (*handler)(struct ex_regs *));
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c index 7c0fe3b138a1..80b7c4482485 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -585,6 +585,11 @@ void route_exception(struct ex_regs *regs) regs->vector, regs->rip); }
+void sync_exception_handlers_to_guest(struct kvm_vm *vm) +{ + *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers; +} + static void vm_init_descriptor_tables(struct kvm_vm *vm) { extern void *idt_handlers; @@ -600,7 +605,7 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm) for (i = 0; i < NUM_INTERRUPTS; i++) set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0, KERNEL_CS);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers; + sync_exception_handlers_to_guest(vm);
kvm_seg_set_kernel_code_64bit(&seg); kvm_seg_fill_gdt_64bit(vm, &seg);
From: Isaku Yamahata isaku.yamahata@intel.com
Let kvm_init_vm_address_properties() initialize vm->arch.s_bit and vm->gpa_tag_mask for TDX, similar to SEV.
Set the shared bit position based on the guest maximum physical address width instead of the host maximum physical address width, because that is what KVM uses (refer to setup_tdparams_eptp_controls()), and because the two widths can differ.
In the case of SRF, the guest maximum physical address width is 48 because SRF does not support 5-level EPT, even though its maximum physical address width is 52.
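For reference, the hunk below boils down to the following derivation:

  uint32_t gpa_bits = kvm_cpu_property(X86_PROPERTY_GUEST_MAX_PHY_ADDR);

  /*
   * The shared bit is the top bit of the guest physical address space:
   * gpa_bits == 48 -> bit 47, gpa_bits == 52 -> bit 51.
   */
  vm->arch.s_bit = 1ULL << (gpa_bits - 1);
  vm->gpa_tag_mask = vm->arch.s_bit;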
Co-developed-by: Adrian Hunter adrian.hunter@intel.com Signed-off-by: Adrian Hunter adrian.hunter@intel.com Signed-off-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Sagi Shahar sagis@google.com --- .../testing/selftests/kvm/lib/x86/processor.c | 20 ++++++++++++++++--- 1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c index 80b7c4482485..1c42af328f19 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -1167,13 +1167,27 @@ void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
void kvm_init_vm_address_properties(struct kvm_vm *vm) { - if (vm->type == KVM_X86_SEV_VM || vm->type == KVM_X86_SEV_ES_VM || - vm->type == KVM_X86_SNP_VM) { + uint32_t gpa_bits = kvm_cpu_property(X86_PROPERTY_GUEST_MAX_PHY_ADDR); + + switch (vm->type) { + case KVM_X86_SEV_VM: + case KVM_X86_SEV_ES_VM: + case KVM_X86_SNP_VM: vm->arch.sev_fd = open_sev_dev_path_or_exit(); vm->arch.c_bit = BIT_ULL(this_cpu_property(X86_PROPERTY_SEV_C_BIT)); vm->gpa_tag_mask = vm->arch.c_bit; - } else { + break; + case KVM_X86_TDX_VM: + TEST_ASSERT(gpa_bits == 48 || gpa_bits == 52, + "TDX: bad X86_PROPERTY_GUEST_MAX_PHY_ADDR value: %u", gpa_bits); + vm->arch.sev_fd = -1; + vm->arch.s_bit = 1ULL << (gpa_bits - 1); + vm->arch.c_bit = 0; + vm->gpa_tag_mask = vm->arch.s_bit; + break; + default: vm->arch.sev_fd = -1; + break; } }
From: Erdem Aktas erdemaktas@google.com
TDX requires additional IOCTLs to initialize the VM and vCPUs, to add private memory, and to finalize the VM memory. Additional utility functions are also provided to manipulate a TD, similar to those that manipulate a VM in the existing selftest framework.
A TD's initial register state cannot be manipulated directly, so it is instead set up through the VM's memory: boot code is provided at the TD's reset vector, which reads boot parameters loaded into the TD's memory and sets up the TD for the selftest.
Userspace needs to ensure consistency between KVM's CPUID and the TDX Module's view. Obtain the CPUID supported by KVM and adjust it to reflect the features of interest and the limited set of KVM PV features supported for TD guests. This involves masking feature bits in CPUID entries and filtering out CPUID entries for features not supported by TDX before initializing the TD.
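For reference, the intended flow for a test built on these utilities (a sketch; the actual lifecycle test appears later in this series):

  struct kvm_vcpu *vcpu;
  struct kvm_vm *vm;

  vm = td_create();
  td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);  /* attributes == 0 */
  vcpu = td_vcpu_add(vm, 0, guest_code);       /* guest_code: guest entry point */
  td_finalize(vm);

  vcpu_run(vcpu);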
Suggested-by: Isaku Yamahata isaku.yamahata@intel.com Co-developed-by: Sagi Shahar sagis@google.com Signed-off-by: Sagi Shahar sagis@google.com Co-developed-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com Co-developed-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Isaku Yamahata isaku.yamahata@intel.com Co-developed-by: Rick Edgecombe rick.p.edgecombe@intel.com Signed-off-by: Rick Edgecombe rick.p.edgecombe@intel.com Signed-off-by: Erdem Aktas erdemaktas@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/Makefile.kvm | 2 + .../testing/selftests/kvm/include/kvm_util.h | 6 + .../selftests/kvm/include/x86/kvm_util_arch.h | 1 + .../selftests/kvm/include/x86/tdx/td_boot.h | 83 +++ .../kvm/include/x86/tdx/td_boot_asm.h | 16 + .../selftests/kvm/include/x86/tdx/tdx_util.h | 19 + tools/testing/selftests/kvm/lib/kvm_util.c | 6 +- .../testing/selftests/kvm/lib/x86/processor.c | 19 +- .../selftests/kvm/lib/x86/tdx/td_boot.S | 100 ++++ .../selftests/kvm/lib/x86/tdx/tdx_util.c | 566 ++++++++++++++++++ 10 files changed, 807 insertions(+), 11 deletions(-) create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/td_boot.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/td_boot_asm.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/td_boot.S create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm index f62b0a5aba35..8e7a12d74745 100644 --- a/tools/testing/selftests/kvm/Makefile.kvm +++ b/tools/testing/selftests/kvm/Makefile.kvm @@ -28,6 +28,8 @@ LIBKVM_x86 += lib/x86/sev.c LIBKVM_x86 += lib/x86/svm.c LIBKVM_x86 += lib/x86/ucall.c LIBKVM_x86 += lib/x86/vmx.c +LIBKVM_x86 += lib/x86/tdx/tdx_util.c +LIBKVM_x86 += lib/x86/tdx/td_boot.S
LIBKVM_arm64 += lib/arm64/gic.c LIBKVM_arm64 += lib/arm64/gic_v3.c diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 74ecfd8d7ae0..813ba634dc49 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -79,6 +79,7 @@ enum kvm_mem_region_type { MEM_REGION_DATA, MEM_REGION_PT, MEM_REGION_TEST_DATA, + MEM_REGION_TDX_BOOT_PARAMS, NR_MEM_REGIONS, };
@@ -985,6 +986,9 @@ unsigned long vm_compute_max_gfn(struct kvm_vm *vm); unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size); unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages); unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages); +uint64_t vm_nr_pages_required(enum vm_guest_mode mode, + uint32_t nr_runnable_vcpus, + uint64_t extra_mem_pages); static inline unsigned int vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages) { @@ -1150,6 +1154,8 @@ static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm) */ void kvm_selftest_arch_init(void);
+void vm_init_descriptor_tables(struct kvm_vm *vm); + void kvm_arch_vm_post_create(struct kvm_vm *vm);
bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr); diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h index 972bb1c4ab4c..80db1e4c38ba 100644 --- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h +++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h @@ -19,6 +19,7 @@ struct kvm_vm_arch { uint64_t s_bit; int sev_fd; bool is_pt_protected; + bool has_protected_regs; };
static inline bool __vm_arch_has_protected_memory(struct kvm_vm_arch *arch) diff --git a/tools/testing/selftests/kvm/include/x86/tdx/td_boot.h b/tools/testing/selftests/kvm/include/x86/tdx/td_boot.h new file mode 100644 index 000000000000..94a50295f953 --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86/tdx/td_boot.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTEST_TDX_TD_BOOT_H +#define SELFTEST_TDX_TD_BOOT_H + +#include <stdint.h> + +#include "tdx/td_boot_asm.h" + +/* + * Layout for boot section (not to scale) + * + * GPA + * _________________________________ 0x1_0000_0000 (4GB) + * | Boot code trampoline | + * |___________________________|____ 0x0_ffff_fff0: Reset vector (16B below 4GB) + * | Boot code | + * |___________________________|____ td_boot will be copied here, so that the + * | | jmp to td_boot is exactly at the reset vector + * | Empty space | + * | | + * |───────────────────────────| + * | | + * | | + * | Boot parameters | + * | | + * | | + * |___________________________|____ 0x0_ffff_0000: TD_BOOT_PARAMETERS_GPA + */ +#define FOUR_GIGABYTES_GPA (4ULL << 30) + +/* + * The exact memory layout for LGDT or LIDT instructions. + */ +struct __packed td_boot_parameters_dtr { + uint16_t limit; + uint32_t base; +}; + +/* + * The exact layout in memory required for a ljmp, including the selector for + * changing code segment. + */ +struct __packed td_boot_parameters_ljmp_target { + uint32_t eip_gva; + uint16_t code64_sel; +}; + +/* + * Allows each vCPU to be initialized with different eip and esp. + */ +struct __packed td_per_vcpu_parameters { + uint32_t esp_gva; + struct td_boot_parameters_ljmp_target ljmp_target; +}; + +/* + * Boot parameters for the TD. + * + * Unlike a regular VM, KVM cannot set registers such as esp, eip, etc + * before boot, so to run selftests, these registers' values have to be + * initialized by the TD. + * + * This struct is loaded in TD private memory at TD_BOOT_PARAMETERS_GPA. + * + * The TD boot code will read off parameters from this struct and set up the + * vCPU for executing selftests. + */ +struct __packed td_boot_parameters { + uint32_t cr0; + uint32_t cr3; + uint32_t cr4; + struct td_boot_parameters_dtr gdtr; + struct td_boot_parameters_dtr idtr; + struct td_per_vcpu_parameters per_vcpu[]; +}; + +void td_boot(void); +void reset_vector(void); +void td_boot_code_end(void); + +#define TD_BOOT_CODE_SIZE (td_boot_code_end - td_boot) + +#endif /* SELFTEST_TDX_TD_BOOT_H */ diff --git a/tools/testing/selftests/kvm/include/x86/tdx/td_boot_asm.h b/tools/testing/selftests/kvm/include/x86/tdx/td_boot_asm.h new file mode 100644 index 000000000000..10b4b527595c --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86/tdx/td_boot_asm.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTEST_TDX_TD_BOOT_ASM_H +#define SELFTEST_TDX_TD_BOOT_ASM_H + +/* + * GPA where TD boot parameters will be loaded. 
+ * + * TD_BOOT_PARAMETERS_GPA is arbitrarily chosen to + * + * + be within the 4GB address space + * + provide enough contiguous memory for the struct td_boot_parameters such + * that there is one struct td_per_vcpu_parameters for KVM_MAX_VCPUS + */ +#define TD_BOOT_PARAMETERS_GPA 0xffff0000 + +#endif // SELFTEST_TDX_TD_BOOT_ASM_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h new file mode 100644 index 000000000000..57a2f5893ffe --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTESTS_TDX_KVM_UTIL_H +#define SELFTESTS_TDX_KVM_UTIL_H + +#include <stdint.h> + +#include "kvm_util.h" + +void tdx_filter_cpuid(struct kvm_vm *vm, struct kvm_cpuid2 *cpuid_data); +void __tdx_mask_cpuid_features(struct kvm_cpuid_entry2 *entry); + +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code); + +struct kvm_vm *td_create(void); +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, + uint64_t attributes); +void td_finalize(struct kvm_vm *vm); + +#endif // SELFTESTS_TDX_KVM_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 40dd63f2bd05..f8cf49794eed 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -373,9 +373,9 @@ struct kvm_vm *____vm_create(struct vm_shape shape) return vm; }
-static uint64_t vm_nr_pages_required(enum vm_guest_mode mode, - uint32_t nr_runnable_vcpus, - uint64_t extra_mem_pages) +uint64_t vm_nr_pages_required(enum vm_guest_mode mode, + uint32_t nr_runnable_vcpus, + uint64_t extra_mem_pages) { uint64_t page_size = vm_guest_mode_params[mode].page_size; uint64_t nr_pages; diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c index 1c42af328f19..9b2c236e723a 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -590,7 +590,7 @@ void sync_exception_handlers_to_guest(struct kvm_vm *vm) *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers; }
-static void vm_init_descriptor_tables(struct kvm_vm *vm) +void vm_init_descriptor_tables(struct kvm_vm *vm) { extern void *idt_handlers; struct kvm_segment seg; @@ -697,16 +697,19 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
vcpu = __vm_vcpu_add(vm, vcpu_id); vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid()); - vcpu_init_sregs(vm, vcpu); - vcpu_init_xcrs(vm, vcpu);
vcpu->initial_stack_addr = stack_vaddr;
- /* Setup guest general purpose registers */ - vcpu_regs_get(vcpu, ®s); - regs.rflags = regs.rflags | 0x2; - regs.rsp = stack_vaddr; - vcpu_regs_set(vcpu, ®s); + if (!vm->arch.has_protected_regs) { + vcpu_init_sregs(vm, vcpu); + vcpu_init_xcrs(vm, vcpu); + + /* Setup guest general purpose registers */ + vcpu_regs_get(vcpu, ®s); + regs.rflags = regs.rflags | 0x2; + regs.rsp = stack_vaddr; + vcpu_regs_set(vcpu, ®s); + }
/* Setup the MP state */ mp_state.mp_state = 0; diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/td_boot.S b/tools/testing/selftests/kvm/lib/x86/tdx/td_boot.S new file mode 100644 index 000000000000..c8cbe214bba9 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86/tdx/td_boot.S @@ -0,0 +1,100 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ + +#include "tdx/td_boot_asm.h" + +/* Offsets for reading struct td_boot_parameters. */ +#define TD_BOOT_PARAMETERS_CR0 0 +#define TD_BOOT_PARAMETERS_CR3 4 +#define TD_BOOT_PARAMETERS_CR4 8 +#define TD_BOOT_PARAMETERS_GDT 12 +#define TD_BOOT_PARAMETERS_IDT 18 +#define TD_BOOT_PARAMETERS_PER_VCPU 24 + +/* Offsets for reading struct td_per_vcpu_parameters. */ +#define TD_PER_VCPU_PARAMETERS_ESP_GVA 0 +#define TD_PER_VCPU_PARAMETERS_LJMP_TARGET 4 + +#define SIZEOF_TD_PER_VCPU_PARAMETERS 10 + +.code32 + +.globl td_boot +td_boot: + /* In this procedure, edi is used as a temporary register. */ + cli + + /* Paging is off. */ + + movl $TD_BOOT_PARAMETERS_GPA, %ebx + + /* + * Find the address of struct td_per_vcpu_parameters for this + * vCPU based on esi (TDX spec: initialized with vCPU id). Put + * struct address into register for indirect addressing. + */ + movl $SIZEOF_TD_PER_VCPU_PARAMETERS, %eax + mul %esi + leal TD_BOOT_PARAMETERS_PER_VCPU(%ebx), %edi + addl %edi, %eax + + /* Setup stack. */ + movl TD_PER_VCPU_PARAMETERS_ESP_GVA(%eax), %esp + + /* Setup GDT. */ + leal TD_BOOT_PARAMETERS_GDT(%ebx), %edi + lgdt (%edi) + + /* Setup IDT. */ + leal TD_BOOT_PARAMETERS_IDT(%ebx), %edi + lidt (%edi) + + /* + * Set up control registers (There are no instructions to mov from + * memory to control registers, hence use ebx as a scratch register). + */ + movl TD_BOOT_PARAMETERS_CR4(%ebx), %edi + movl %edi, %cr4 + movl TD_BOOT_PARAMETERS_CR3(%ebx), %edi + movl %edi, %cr3 + movl TD_BOOT_PARAMETERS_CR0(%ebx), %edi + movl %edi, %cr0 + + /* Paging is on after setting the most significant bit on cr0. */ + + /* + * Jump to selftest guest code. Far jumps read <segment + * selector:new eip> from addr+4:addr. This location has + * already been set up in boot parameters, and boot parameters can + * be read because boot code and boot parameters are loaded so + * that GVA and GPA are mapped 1:1. + */ + ljmp *TD_PER_VCPU_PARAMETERS_LJMP_TARGET(%eax) + +.globl reset_vector +reset_vector: + jmp td_boot + /* + * Pad reset_vector to its full size of 16 bytes so that this + * can be loaded with the end of reset_vector aligned to GPA=4G. + */ + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + int3 + +/* Leave marker so size of td_boot code can be computed. */ +.globl td_boot_code_end +td_boot_code_end: + +/* Disable executable stack. 
*/ +.section .note.GNU-stack,"",%progbits diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c new file mode 100644 index 000000000000..392d6272d17e --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c @@ -0,0 +1,566 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <asm/kvm.h> +#include <errno.h> +#include <linux/kvm.h> +#include <stdint.h> +#include <sys/ioctl.h> + +#include "kvm_util.h" +#include "processor.h" +#include "tdx/td_boot.h" +#include "test_util.h" + +uint64_t tdx_s_bit; + +/* + * TDX ioctls + */ + +static char *tdx_cmd_str[] = { + "KVM_TDX_CAPABILITIES", + "KVM_TDX_INIT_VM", + "KVM_TDX_INIT_VCPU", + "KVM_TDX_INIT_MEM_REGION", + "KVM_TDX_FINALIZE_VM", + "KVM_TDX_GET_CPUID" +}; + +#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str)) + +static int _tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data) +{ + struct kvm_tdx_cmd tdx_cmd; + + TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n", + ioctl_no); + + memset(&tdx_cmd, 0x0, sizeof(tdx_cmd)); + tdx_cmd.id = ioctl_no; + tdx_cmd.flags = flags; + tdx_cmd.data = (uint64_t)data; + + return ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd); +} + +static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data) +{ + int r; + + r = _tdx_ioctl(fd, ioctl_no, flags, data); + TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r, + errno); +} + +static struct kvm_tdx_capabilities *tdx_read_capabilities(struct kvm_vm *vm) +{ + struct kvm_tdx_capabilities *tdx_cap = NULL; + int nr_cpuid_configs = 4; + int rc = -1; + int i; + + do { + nr_cpuid_configs *= 2; + + tdx_cap = realloc(tdx_cap, sizeof(*tdx_cap) + + sizeof(tdx_cap->cpuid) + + (sizeof(struct kvm_cpuid_entry2) * nr_cpuid_configs)); + TEST_ASSERT(tdx_cap, + "Could not allocate memory for tdx capability nr_cpuid_configs %d\n", + nr_cpuid_configs); + + tdx_cap->cpuid.nent = nr_cpuid_configs; + rc = _tdx_ioctl(vm->fd, KVM_TDX_CAPABILITIES, 0, tdx_cap); + } while (rc < 0 && errno == E2BIG); + + TEST_ASSERT(rc == 0, "KVM_TDX_CAPABILITIES failed: %d %d", + rc, errno); + + pr_debug("tdx_cap: supported_attrs: 0x%016llx\n" + "tdx_cap: supported_xfam 0x%016llx\n", + tdx_cap->supported_attrs, tdx_cap->supported_xfam); + + for (i = 0; i < tdx_cap->cpuid.nent; i++) { + const struct kvm_cpuid_entry2 *config = &tdx_cap->cpuid.entries[i]; + + pr_debug("cpuid config[%d]: leaf 0x%x sub_leaf 0x%x eax 0x%08x ebx 0x%08x ecx 0x%08x edx 0x%08x\n", + i, config->function, config->index, + config->eax, config->ebx, config->ecx, config->edx); + } + + return tdx_cap; +} + +static struct kvm_cpuid_entry2 *tdx_find_cpuid_config(struct kvm_tdx_capabilities *cap, + uint32_t leaf, uint32_t sub_leaf) +{ + struct kvm_cpuid_entry2 *config; + uint32_t i; + + for (i = 0; i < cap->cpuid.nent; i++) { + config = &cap->cpuid.entries[i]; + + if (config->function == leaf && config->index == sub_leaf) + return config; + } + + return NULL; +} + +#define XFEATURE_MASK_CET (XFEATURE_MASK_CET_USER | XFEATURE_MASK_CET_KERNEL) + +static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data) +{ + for (int i = 0; i < cpuid_data->nent; i++) { + struct kvm_cpuid_entry2 *e = &cpuid_data->entries[i]; + + if (e->function == 0xd && e->index == 0) { + /* + * TDX module requires both XTILE_{CFG, DATA} to be set. + * Both bits are required for AMX to be functional. 
+ */ + if ((e->eax & XFEATURE_MASK_XTILE) != + XFEATURE_MASK_XTILE) { + e->eax &= ~XFEATURE_MASK_XTILE; + } + } + if (e->function == 0xd && e->index == 1) { + /* + * TDX doesn't support LBR yet. + * Disable bits from the XCR0 register. + */ + e->ecx &= ~XFEATURE_MASK_LBR; + /* + * TDX modules requires both CET_{U, S} to be set even + * if only one is supported. + */ + if (e->ecx & XFEATURE_MASK_CET) + e->ecx |= XFEATURE_MASK_CET; + } + } +} + +#define KVM_MAX_CPUID_ENTRIES 256 + +#define CPUID_EXT_VMX BIT(5) +#define CPUID_EXT_SMX BIT(6) +#define CPUID_PSE36 BIT(17) +#define CPUID_7_0_EBX_TSC_ADJUST BIT(1) +#define CPUID_7_0_EBX_SGX BIT(2) +#define CPUID_7_0_EBX_INTEL_PT BIT(25) +#define CPUID_7_0_ECX_SGX_LC BIT(30) +#define CPUID_APM_INVTSC BIT(8) +#define CPUID_8000_0008_EBX_WBNOINVD BIT(9) +#define CPUID_EXT_PDCM BIT(15) + +#define TDX_SUPPORTED_KVM_FEATURES ((1U << KVM_FEATURE_NOP_IO_DELAY) | \ + (1U << KVM_FEATURE_PV_UNHALT) | \ + (1U << KVM_FEATURE_PV_TLB_FLUSH) | \ + (1U << KVM_FEATURE_PV_SEND_IPI) | \ + (1U << KVM_FEATURE_POLL_CONTROL) | \ + (1U << KVM_FEATURE_PV_SCHED_YIELD) | \ + (1U << KVM_FEATURE_MSI_EXT_DEST_ID)) + +void __tdx_mask_cpuid_features(struct kvm_cpuid_entry2 *entry) +{ + /* + * Only entries with sub-leaf zero need to be masked, but some of these + * leaves have other sub-leaves defined. Bail on any non-zero sub-leaf, + * so they don't get unintentionally modified. + */ + if (entry->index) + return; + + switch (entry->function) { + case 0x1: + entry->ecx &= ~(CPUID_EXT_VMX | CPUID_EXT_SMX); + entry->edx &= ~CPUID_PSE36; + break; + case 0x7: + entry->ebx &= ~(CPUID_7_0_EBX_TSC_ADJUST | CPUID_7_0_EBX_SGX); + entry->ebx &= ~CPUID_7_0_EBX_INTEL_PT; + entry->ecx &= ~CPUID_7_0_ECX_SGX_LC; + break; + case 0x40000001: + entry->eax &= TDX_SUPPORTED_KVM_FEATURES; + break; + case 0x80000007: + entry->edx |= CPUID_APM_INVTSC; + break; + case 0x80000008: + entry->ebx &= CPUID_8000_0008_EBX_WBNOINVD; + break; + default: + break; + } +} + +static void tdx_mask_cpuid_features(struct kvm_cpuid2 *cpuid_data) +{ + for (int i = 0; i < cpuid_data->nent; i++) + __tdx_mask_cpuid_features(&cpuid_data->entries[i]); +} + +void tdx_filter_cpuid(struct kvm_vm *vm, struct kvm_cpuid2 *cpuid_data) +{ + struct kvm_tdx_capabilities *tdx_cap; + struct kvm_cpuid_entry2 *config; + struct kvm_cpuid_entry2 *e; + int i; + + tdx_cap = tdx_read_capabilities(vm); + + i = 0; + while (i < cpuid_data->nent) { + e = cpuid_data->entries + i; + config = tdx_find_cpuid_config(tdx_cap, e->function, e->index); + + if (!config) { + int left = cpuid_data->nent - i - 1; + + if (left > 0) + memmove(cpuid_data->entries + i, + cpuid_data->entries + i + 1, + sizeof(*cpuid_data->entries) * left); + cpuid_data->nent--; + continue; + } + + e->eax &= config->eax; + e->ebx &= config->ebx; + e->ecx &= config->ecx; + e->edx &= config->edx; + + i++; + } + + free(tdx_cap); +} + +static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes) +{ + struct kvm_tdx_init_vm *init_vm; + const struct kvm_cpuid2 *tmp; + struct kvm_cpuid2 *cpuid; + + tmp = kvm_get_supported_cpuid(); + + cpuid = allocate_kvm_cpuid2(KVM_MAX_CPUID_ENTRIES); + memcpy(cpuid, tmp, kvm_cpuid2_size(tmp->nent)); + tdx_mask_cpuid_features(cpuid); + + init_vm = calloc(1, sizeof(*init_vm) + + sizeof(init_vm->cpuid.entries[0]) * cpuid->nent); + TEST_ASSERT(init_vm, "vm allocation failed"); + + memcpy(&init_vm->cpuid, cpuid, kvm_cpuid2_size(cpuid->nent)); + free(cpuid); + + init_vm->attributes = attributes; + + tdx_apply_cpuid_restrictions(&init_vm->cpuid); + 
tdx_filter_cpuid(vm, &init_vm->cpuid); + + tdx_ioctl(vm->fd, KVM_TDX_INIT_VM, 0, init_vm); + free(init_vm); +} + +static void tdx_td_vcpu_init(struct kvm_vcpu *vcpu) +{ + struct kvm_cpuid2 *cpuid; + + cpuid = allocate_kvm_cpuid2(KVM_MAX_CPUID_ENTRIES); + tdx_ioctl(vcpu->fd, KVM_TDX_GET_CPUID, 0, cpuid); + vcpu_init_cpuid(vcpu, cpuid); + free(cpuid); + tdx_ioctl(vcpu->fd, KVM_TDX_INIT_VCPU, 0, NULL); + /* + * Refresh CPUID to get KVM's "runtime" updates which are done by + * KVM_TDX_INIT_VCPU. + */ + vcpu_get_cpuid(vcpu); +} + +static void tdx_init_mem_region(struct kvm_vm *vm, void *source_pages, + uint64_t gpa, uint64_t size) +{ + uint32_t metadata = KVM_TDX_MEASURE_MEMORY_REGION; + struct kvm_tdx_init_mem_region mem_region = { + .source_addr = (uint64_t)source_pages, + .gpa = gpa, + .nr_pages = size / PAGE_SIZE, + }; + struct kvm_vcpu *vcpu; + + vcpu = list_first_entry_or_null(&vm->vcpus, struct kvm_vcpu, list); + + TEST_ASSERT((mem_region.nr_pages > 0) && + ((mem_region.nr_pages * PAGE_SIZE) == size), + "Cannot add partial pages to the guest memory.\n"); + TEST_ASSERT(((uint64_t)source_pages & (PAGE_SIZE - 1)) == 0, + "Source memory buffer is not page aligned\n"); + tdx_ioctl(vcpu->fd, KVM_TDX_INIT_MEM_REGION, metadata, &mem_region); +} + +static void tdx_td_finalize_mr(struct kvm_vm *vm) +{ + tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL); +} + +/* + * TD creation/setup/finalization + */ + +static void tdx_enable_capabilities(struct kvm_vm *vm) +{ + int rc; + + rc = kvm_check_cap(KVM_CAP_X2APIC_API); + TEST_ASSERT(rc, "TDX: KVM_CAP_X2APIC_API is not supported!"); + rc = kvm_check_cap(KVM_CAP_SPLIT_IRQCHIP); + TEST_ASSERT(rc, "TDX: KVM_CAP_SPLIT_IRQCHIP is not supported!"); + + vm_enable_cap(vm, KVM_CAP_X2APIC_API, + KVM_X2APIC_API_USE_32BIT_IDS | + KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK); + vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24); +} + +static void tdx_apply_cr4_restrictions(struct kvm_sregs *sregs) +{ + /* TDX spec 11.6.2: CR4 bit MCE is fixed to 1 */ + sregs->cr4 |= X86_CR4_MCE; + + /* Set this because UEFI also sets this up, to handle XMM exceptions */ + sregs->cr4 |= X86_CR4_OSXMMEXCPT; + + /* TDX spec 11.6.2: CR4 bit VMXE and SMXE are fixed to 0 */ + sregs->cr4 &= ~(X86_CR4_VMXE | X86_CR4_SMXE); +} + +static void load_td_boot_code(struct kvm_vm *vm) +{ + void *boot_code_hva = addr_gpa2hva(vm, FOUR_GIGABYTES_GPA - TD_BOOT_CODE_SIZE); + + TEST_ASSERT(td_boot_code_end - reset_vector == 16, + "The reset vector must be 16 bytes in size."); + memcpy(boot_code_hva, td_boot, TD_BOOT_CODE_SIZE); +} + +static void load_td_per_vcpu_parameters(struct td_boot_parameters *params, + struct kvm_sregs *sregs, + struct kvm_vcpu *vcpu, + void *guest_code) +{ + struct td_per_vcpu_parameters *vcpu_params = ¶ms->per_vcpu[vcpu->id]; + + TEST_ASSERT(vcpu->initial_stack_addr != 0, + "initial stack address should not be 0"); + TEST_ASSERT(vcpu->initial_stack_addr <= 0xffffffff, + "initial stack address must fit in 32 bits"); + TEST_ASSERT((uint64_t)guest_code <= 0xffffffff, + "guest_code must fit in 32 bits"); + TEST_ASSERT(sregs->cs.selector != 0, "cs.selector should not be 0"); + + vcpu_params->esp_gva = (uint32_t)(uint64_t)vcpu->initial_stack_addr; + vcpu_params->ljmp_target.eip_gva = (uint32_t)(uint64_t)guest_code; + vcpu_params->ljmp_target.code64_sel = sregs->cs.selector; +} + +static void load_td_common_parameters(struct td_boot_parameters *params, + struct kvm_sregs *sregs) +{ + /* Set parameters! 
*/ + params->cr0 = sregs->cr0; + params->cr3 = sregs->cr3; + params->cr4 = sregs->cr4; + params->gdtr.limit = sregs->gdt.limit; + params->gdtr.base = sregs->gdt.base; + params->idtr.limit = sregs->idt.limit; + params->idtr.base = sregs->idt.base; + + TEST_ASSERT(params->cr0 != 0, "cr0 should not be 0"); + TEST_ASSERT(params->cr3 != 0, "cr3 should not be 0"); + TEST_ASSERT(params->cr4 != 0, "cr4 should not be 0"); + TEST_ASSERT(params->gdtr.base != 0, "gdt base address should not be 0"); + TEST_ASSERT(params->idtr.base != 0, "idt base address should not be 0"); +} + +static void load_td_boot_parameters(struct td_boot_parameters *params, + struct kvm_vcpu *vcpu, void *guest_code) +{ + struct kvm_sregs sregs; + + /* Assemble parameters in sregs */ + memset(&sregs, 0, sizeof(struct kvm_sregs)); + vcpu_setup_mode_sregs(vcpu->vm, &sregs); + tdx_apply_cr4_restrictions(&sregs); + + if (!params->cr0) + load_td_common_parameters(params, &sregs); + + load_td_per_vcpu_parameters(params, &sregs, vcpu, guest_code); +} + +/* + * Adds a vCPU to a TD (Trusted Domain) with minimum defaults. It will not set + * up any general purpose registers as they will be initialized by the TDX. In + * TDX, vCPUs RIP is set to 0xFFFFFFF0. See Intel TDX EAS Section "Initial State + * of Guest GPRs" for more information on vCPUs initial register values when + * entering the TD first time. + * + * Input Args: + * vm - Virtual Machine + * vcpuid - The id of the vCPU to add to the VM. + */ +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code) +{ + struct kvm_vcpu *vcpu; + + vm->arch.has_protected_regs = true; + vcpu = vm_arch_vcpu_add(vm, vcpu_id); + + tdx_td_vcpu_init(vcpu); + + load_td_boot_parameters(addr_gpa2hva(vm, TD_BOOT_PARAMETERS_GPA), + vcpu, guest_code); + + return vcpu; +} + +static void load_td_memory_region(struct kvm_vm *vm, + struct userspace_mem_region *region) +{ + const struct sparsebit *pages = region->protected_phy_pages; + const vm_paddr_t gpa_base = region->region.guest_phys_addr; + const uint64_t hva_base = region->region.userspace_addr; + const sparsebit_idx_t lowest_page_in_region = gpa_base >> vm->page_shift; + + sparsebit_idx_t i; + sparsebit_idx_t j; + + if (!sparsebit_any_set(pages)) + return; + + sparsebit_for_each_set_range(pages, i, j) { + const uint64_t size_to_load = (j - i + 1) * vm->page_size; + const uint64_t offset = + (i - lowest_page_in_region) * vm->page_size; + const uint64_t hva = hva_base + offset; + const uint64_t gpa = gpa_base + offset; + void *source_addr; + + /* + * KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place. + * Make a copy if there's only one backing memory source. 
+ */ + source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE, + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + TEST_ASSERT(source_addr, + "Could not allocate memory for loading memory region"); + + memcpy(source_addr, (void *)hva, size_to_load); + + tdx_init_mem_region(vm, source_addr, gpa, size_to_load); + + munmap(source_addr, size_to_load); + } +} + +static void load_td_private_memory(struct kvm_vm *vm) +{ + struct userspace_mem_region *region; + int ctr; + + hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) { + load_td_memory_region(vm, region); + } +} + +struct kvm_vm *td_create(void) +{ + const struct vm_shape shape = { + .mode = VM_MODE_DEFAULT, + .type = KVM_X86_TDX_VM, + }; + + return ____vm_create(shape); +} + +static void td_setup_boot_code(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type) +{ + size_t boot_code_allocation = round_up(TD_BOOT_CODE_SIZE, PAGE_SIZE); + vm_paddr_t boot_code_base_gpa = FOUR_GIGABYTES_GPA - boot_code_allocation; + size_t npages = DIV_ROUND_UP(boot_code_allocation, PAGE_SIZE); + vm_vaddr_t addr; + + vm_userspace_mem_region_add(vm, src_type, boot_code_base_gpa, 1, npages, + KVM_MEM_GUEST_MEMFD); + vm->memslots[MEM_REGION_CODE] = 1; + addr = vm_vaddr_identity_alloc(vm, boot_code_allocation, + boot_code_base_gpa, MEM_REGION_CODE); + TEST_ASSERT_EQ(addr, boot_code_base_gpa); + + load_td_boot_code(vm); +} + +static size_t td_boot_parameters_size(void) +{ + int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS); + size_t total_per_vcpu_parameters_size = + max_vcpus * sizeof(struct td_per_vcpu_parameters); + + return sizeof(struct td_boot_parameters) + total_per_vcpu_parameters_size; +} + +static void td_setup_boot_parameters(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type) +{ + size_t boot_params_size = td_boot_parameters_size(); + int npages = DIV_ROUND_UP(boot_params_size, PAGE_SIZE); + size_t total_size = npages * PAGE_SIZE; + vm_vaddr_t addr; + + vm_userspace_mem_region_add(vm, src_type, TD_BOOT_PARAMETERS_GPA, 2, + npages, KVM_MEM_GUEST_MEMFD); + vm->memslots[MEM_REGION_TDX_BOOT_PARAMS] = 2; + addr = vm_vaddr_identity_alloc(vm, total_size, TD_BOOT_PARAMETERS_GPA, + MEM_REGION_TDX_BOOT_PARAMS); + TEST_ASSERT_EQ(addr, TD_BOOT_PARAMETERS_GPA); +} + +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, + uint64_t attributes) +{ + uint64_t nr_pages_required; + + tdx_enable_capabilities(vm); + + tdx_td_init(vm, attributes); + + nr_pages_required = vm_nr_pages_required(VM_MODE_DEFAULT, 1, 0); + + /* + * Add memory (add 0th memslot) for TD. This will be used to setup the + * CPU (provide stack space for the CPU) and to load the elf file. + */ + vm_userspace_mem_region_add(vm, src_type, 0, 0, nr_pages_required, + KVM_MEM_GUEST_MEMFD); + + kvm_vm_elf_load(vm, program_invocation_name); + tdx_s_bit = vm->arch.s_bit; + sync_global_to_guest(vm, tdx_s_bit); + + vm_init_descriptor_tables(vm); + + td_setup_boot_code(vm, src_type); + td_setup_boot_parameters(vm, src_type); +} + +void td_finalize(struct kvm_vm *vm) +{ + sync_exception_handlers_to_guest(vm); + + load_td_private_memory(vm); + + tdx_td_finalize_mr(vm); +}
From: Ackerley Tng ackerleytng@google.com
This also exercises the KVM_TDX_CAPABILITIES ioctl.
Suggested-by: Isaku Yamahata isaku.yamahata@intel.com Co-developed-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/lib/x86/tdx/tdx_util.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+)
diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c index 392d6272d17e..bb074af4a476 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c @@ -140,6 +140,21 @@ static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data) } }
+static void tdx_check_attributes(struct kvm_vm *vm, uint64_t attributes) +{ + struct kvm_tdx_capabilities *tdx_cap; + + tdx_cap = tdx_read_capabilities(vm); + + /* TDX spec: any bits 0 in supported_attrs must be 0 in attributes */ + TEST_ASSERT_EQ(attributes & ~tdx_cap->supported_attrs, 0); + + /* TDX spec: any bits 1 in attributes must be 1 in supported_attrs */ + TEST_ASSERT_EQ(attributes & tdx_cap->supported_attrs, attributes); + + free(tdx_cap); +} + #define KVM_MAX_CPUID_ENTRIES 256
#define CPUID_EXT_VMX BIT(5) @@ -256,6 +271,8 @@ static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes) memcpy(&init_vm->cpuid, cpuid, kvm_cpuid2_size(cpuid->nent)); free(cpuid);
+ tdx_check_attributes(vm, attributes); + init_vm->attributes = attributes;
tdx_apply_cpuid_restrictions(&init_vm->cpuid);
From: Ackerley Tng ackerleytng@google.com
If guest memory is backed by guest memfd:

+ UPM is being used, hence the encrypted memory region has to be registered
+ A copy of guest memory does not have to be made before getting TDX to initialize the memory region
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/lib/x86/tdx/tdx_util.c | 38 +++++++++++++++---- 1 file changed, 30 insertions(+), 8 deletions(-)
diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c index bb074af4a476..e2bf9766dc03 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c @@ -324,6 +324,21 @@ static void tdx_td_finalize_mr(struct kvm_vm *vm) tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL); }
+/* + * Other ioctls + */ + +/* + * Register a memory region that may contain encrypted data in KVM. + */ +static void register_encrypted_memory_region(struct kvm_vm *vm, + struct userspace_mem_region *region) +{ + vm_set_memory_attributes(vm, region->region.guest_phys_addr, + region->region.memory_size, + KVM_MEMORY_ATTRIBUTE_PRIVATE); +} + /* * TD creation/setup/finalization */ @@ -459,28 +474,35 @@ static void load_td_memory_region(struct kvm_vm *vm, if (!sparsebit_any_set(pages)) return;
+ if (region->region.guest_memfd != -1) + register_encrypted_memory_region(vm, region); + sparsebit_for_each_set_range(pages, i, j) { const uint64_t size_to_load = (j - i + 1) * vm->page_size; const uint64_t offset = (i - lowest_page_in_region) * vm->page_size; const uint64_t hva = hva_base + offset; const uint64_t gpa = gpa_base + offset; - void *source_addr; + void *source_addr = (void *)hva;
/* * KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place. * Make a copy if there's only one backing memory source. */ - source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE, - MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); - TEST_ASSERT(source_addr, - "Could not allocate memory for loading memory region"); - - memcpy(source_addr, (void *)hva, size_to_load); + if (region->region.guest_memfd == -1) { + source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE, + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + TEST_ASSERT(source_addr, + "Could not allocate memory for loading memory region"); + + memcpy(source_addr, (void *)hva, size_to_load); + memset((void *)hva, 0, size_to_load); + }
tdx_init_mem_region(vm, source_addr, gpa, size_to_load);
- munmap(source_addr, size_to_load); + if (region->region.guest_memfd == -1) + munmap(source_addr, size_to_load); } }
From: Erdem Aktas erdemaktas@google.com
Add a test to verify the TDX lifecycle by creating a TD and running a dummy TDG.VP.VMCALL <Instruction.IO> inside it.
Co-developed-by: Sagi Shahar sagis@google.com Signed-off-by: Sagi Shahar sagis@google.com Co-developed-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com Co-developed-by: Reinette Chatre reinette.chatre@intel.com Signed-off-by: Reinette Chatre reinette.chatre@intel.com Signed-off-by: Erdem Aktas erdemaktas@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/Makefile.kvm | 4 + .../selftests/kvm/include/x86/tdx/tdcall.h | 32 +++++++ .../selftests/kvm/include/x86/tdx/tdx.h | 12 +++ .../selftests/kvm/include/x86/tdx/test_util.h | 41 ++++++++ .../selftests/kvm/lib/x86/tdx/tdcall.S | 95 +++++++++++++++++++ tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 27 ++++++ .../selftests/kvm/lib/x86/tdx/test_util.c | 61 ++++++++++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 47 +++++++++ 8 files changed, 319 insertions(+) create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/tdcall.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/tdx.h create mode 100644 tools/testing/selftests/kvm/include/x86/tdx/test_util.h create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/tdx.c create mode 100644 tools/testing/selftests/kvm/lib/x86/tdx/test_util.c create mode 100644 tools/testing/selftests/kvm/x86/tdx_vm_test.c
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm index 8e7a12d74745..e98d5413991a 100644 --- a/tools/testing/selftests/kvm/Makefile.kvm +++ b/tools/testing/selftests/kvm/Makefile.kvm @@ -30,6 +30,9 @@ LIBKVM_x86 += lib/x86/ucall.c LIBKVM_x86 += lib/x86/vmx.c LIBKVM_x86 += lib/x86/tdx/tdx_util.c LIBKVM_x86 += lib/x86/tdx/td_boot.S +LIBKVM_x86 += lib/x86/tdx/tdcall.S +LIBKVM_x86 += lib/x86/tdx/tdx.c +LIBKVM_x86 += lib/x86/tdx/test_util.c
LIBKVM_arm64 += lib/arm64/gic.c LIBKVM_arm64 += lib/arm64/gic_v3.c @@ -141,6 +144,7 @@ TEST_GEN_PROGS_x86 += rseq_test TEST_GEN_PROGS_x86 += steal_time TEST_GEN_PROGS_x86 += system_counter_offset_test TEST_GEN_PROGS_x86 += pre_fault_memory_test +TEST_GEN_PROGS_x86 += x86/tdx_vm_test
# Compiled outputs used by test targets TEST_GEN_PROGS_EXTENDED_x86 += x86/nx_huge_pages_test diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h new file mode 100644 index 000000000000..a6c966e93486 --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Adapted from arch/x86/include/asm/shared/tdx.h */ + +#ifndef SELFTESTS_TDX_TDCALL_H +#define SELFTESTS_TDX_TDCALL_H + +#include <linux/bits.h> +#include <linux/types.h> + +#define TDX_HCALL_HAS_OUTPUT BIT(0) + +#define TDX_HYPERCALL_STANDARD 0 + +/* + * Used in __tdx_hypercall() to pass down and get back registers' values of + * the TDCALL instruction when requesting services from the VMM. + * + * This is a software only structure and not part of the TDX module/VMM ABI. + */ +struct tdx_hypercall_args { + u64 r10; + u64 r11; + u64 r12; + u64 r13; + u64 r14; + u64 r15; +}; + +/* Used to request services from the VMM */ +u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags); + +#endif // SELFTESTS_TDX_TDCALL_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h new file mode 100644 index 000000000000..a7161efe4ee2 --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTEST_TDX_TDX_H +#define SELFTEST_TDX_TDX_H + +#include <stdint.h> + +#define TDG_VP_VMCALL_INSTRUCTION_IO 30 + +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, + uint64_t write, uint64_t *data); + +#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h new file mode 100644 index 000000000000..07d63bf1ffe1 --- /dev/null +++ b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef SELFTEST_TDX_TEST_UTIL_H +#define SELFTEST_TDX_TEST_UTIL_H + +#include <stdbool.h> + +#include "tdcall.h" + +#define TDX_TEST_SUCCESS_PORT 0x30 +#define TDX_TEST_SUCCESS_SIZE 4 + +/* Port I/O direction */ +#define PORT_READ 0 +#define PORT_WRITE 1 + +/* + * Run a test in a new process. + * + * There might be multiple tests running and if one test fails, it will + * prevent the subsequent tests to run due to how tests are failing with + * TEST_ASSERT function. run_in_new_process() will run a test in a new process + * context and wait for it to finish or fail to prevent TEST_ASSERT to kill the + * main testing process. + */ +int run_in_new_process(void (*func)(void)); + +/* + * Verify that the TDX is supported by KVM. + */ +bool is_tdx_enabled(void); + +/* + * Report test success to userspace. + * + * Use tdx_test_assert_success() to assert that this function was called in the + * guest. + */ +void tdx_test_success(void); +void tdx_test_assert_success(struct kvm_vcpu *vcpu); + +#endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S new file mode 100644 index 000000000000..b10769d1d557 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S @@ -0,0 +1,95 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Adapted from arch/x86/coco/tdx/tdcall.S */ + +/* + * TDCALL is supported in Binutils >= 2.36, add it for older version. 
+ */ +#define tdcall .byte 0x66,0x0f,0x01,0xcc + +#define TDX_HYPERCALL_r10 0 /* offsetof(struct tdx_hypercall_args, r10) */ +#define TDX_HYPERCALL_r11 8 /* offsetof(struct tdx_hypercall_args, r11) */ +#define TDX_HYPERCALL_r12 16 /* offsetof(struct tdx_hypercall_args, r12) */ +#define TDX_HYPERCALL_r13 24 /* offsetof(struct tdx_hypercall_args, r13) */ +#define TDX_HYPERCALL_r14 32 /* offsetof(struct tdx_hypercall_args, r14) */ +#define TDX_HYPERCALL_r15 40 /* offsetof(struct tdx_hypercall_args, r15) */ + +/* + * Bitmasks of exposed registers (with VMM). + */ +#define TDX_R10 0x400 +#define TDX_R11 0x800 +#define TDX_R12 0x1000 +#define TDX_R13 0x2000 +#define TDX_R14 0x4000 +#define TDX_R15 0x8000 + +#define TDX_HCALL_HAS_OUTPUT 0x1 + +/* + * These registers are clobbered to hold arguments for each + * TDVMCALL. They are safe to expose to the VMM. + * Each bit in this mask represents a register ID. Bit field + * details can be found in TDX GHCI specification, section + * titled "TDCALL [TDG.VP.VMCALL] leaf". + */ +#define TDVMCALL_EXPOSE_REGS_MASK ( TDX_R10 | TDX_R11 | \ + TDX_R12 | TDX_R13 | \ + TDX_R14 | TDX_R15 ) + +.code64 +.section .text + +.globl __tdx_hypercall +.type __tdx_hypercall, @function +__tdx_hypercall: + /* Set up stack frame */ + push %rbp + movq %rsp, %rbp + + /* Save callee-saved GPRs as mandated by the x86_64 ABI */ + push %r15 + push %r14 + push %r13 + push %r12 + + /* Mangle function call ABI into TDCALL ABI: */ + /* Set TDCALL leaf ID (TDVMCALL (0)) in RAX */ + xor %eax, %eax + + /* Copy hypercall registers from arg struct: */ + movq TDX_HYPERCALL_r10(%rdi), %r10 + movq TDX_HYPERCALL_r11(%rdi), %r11 + movq TDX_HYPERCALL_r12(%rdi), %r12 + movq TDX_HYPERCALL_r13(%rdi), %r13 + movq TDX_HYPERCALL_r14(%rdi), %r14 + movq TDX_HYPERCALL_r15(%rdi), %r15 + + movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx + + tdcall + + /* TDVMCALL leaf return code is in R10 */ + movq %r10, %rax + + /* Copy hypercall result registers to arg struct if needed */ + testq $TDX_HCALL_HAS_OUTPUT, %rsi + jz .Lout + + movq %r10, TDX_HYPERCALL_r10(%rdi) + movq %r11, TDX_HYPERCALL_r11(%rdi) + movq %r12, TDX_HYPERCALL_r12(%rdi) + movq %r13, TDX_HYPERCALL_r13(%rdi) + movq %r14, TDX_HYPERCALL_r14(%rdi) + movq %r15, TDX_HYPERCALL_r15(%rdi) +.Lout: + /* Restore callee-saved GPRs as mandated by the x86_64 ABI */ + pop %r12 + pop %r13 + pop %r14 + pop %r15 + + pop %rbp + ret + +/* Disable executable stack */ +.section .note.GNU-stack,"",%progbits diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c new file mode 100644 index 000000000000..f417ee75bee2 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -0,0 +1,27 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include "tdx/tdcall.h" +#include "tdx/tdx.h" + +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, + uint64_t write, uint64_t *data) +{ + struct tdx_hypercall_args args = { + .r10 = TDX_HYPERCALL_STANDARD, + .r11 = TDG_VP_VMCALL_INSTRUCTION_IO, + .r12 = size, + .r13 = write, + .r14 = port, + }; + uint64_t ret; + + if (write) + args.r15 = *data; + + ret = __tdx_hypercall(&args, write ? 
0 : TDX_HCALL_HAS_OUTPUT); + + if (!write) + *data = args.r11; + + return ret; +} diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c new file mode 100644 index 000000000000..7355b213c344 --- /dev/null +++ b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <stdbool.h> +#include <stdint.h> +#include <stdlib.h> +#include <sys/wait.h> +#include <unistd.h> + +#include "kvm_util.h" +#include "tdx/tdx.h" +#include "tdx/test_util.h" + +int run_in_new_process(void (*func)(void)) +{ + int wstatus; + pid_t ret; + + if (fork() == 0) { + func(); + exit(0); + } + ret = wait(&wstatus); + if (ret == -1) + return -1; + + if (WIFEXITED(wstatus) && WEXITSTATUS(wstatus)) + return -1; + else if (WIFSIGNALED(wstatus)) + return -1; + + return 0; +} + +bool is_tdx_enabled(void) +{ + return !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_TDX_VM)); +} + +void tdx_test_success(void) +{ + uint64_t code = 0; + + tdg_vp_vmcall_instruction_io(TDX_TEST_SUCCESS_PORT, TDX_TEST_SUCCESS_SIZE, + PORT_WRITE, &code); +} + +/* + * Assert that tdx_test_success() was called in the guest. + */ +void tdx_test_assert_success(struct kvm_vcpu *vcpu) +{ + TEST_ASSERT((vcpu->run->exit_reason == KVM_EXIT_IO) && + (vcpu->run->io.port == TDX_TEST_SUCCESS_PORT) && + (vcpu->run->io.size == TDX_TEST_SUCCESS_SIZE) && + (vcpu->run->io.direction == PORT_WRITE), + "Unexpected exit values while waiting for test completion: %u (%s) %d %d %d\n", + vcpu->run->exit_reason, + exit_reason_str(vcpu->run->exit_reason), + vcpu->run->io.port, vcpu->run->io.size, + vcpu->run->io.direction); +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c new file mode 100644 index 000000000000..fdb7c40065a6 --- /dev/null +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <signal.h> + +#include "kvm_util.h" +#include "tdx/tdx_util.h" +#include "tdx/test_util.h" +#include "test_util.h" + +static void guest_code_lifecycle(void) +{ + tdx_test_success(); +} + +static void verify_td_lifecycle(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_lifecycle); + td_finalize(vm); + + printf("Verifying TD lifecycle:\n"); + + vcpu_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + +int main(int argc, char **argv) +{ + ksft_print_header(); + + if (!is_tdx_enabled()) + ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n"); + + ksft_set_plan(1); + ksft_test_result(!run_in_new_process(&verify_td_lifecycle), + "verify_td_lifecycle\n"); + + ksft_finished(); + return 0; +}
The test checks report_fatal_error functionality.
A TD guest can use TDG.VP.VMCALL<ReportFatalError> to report a fatal error it has experienced. The guest requests termination and passes error information that includes 16 general-purpose registers.
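In GHCI register terms the report is a single TDVMCALL: R11 carries the leaf, R12 the error code (bits 31:0 zero), and R13 an optional shared-memory GPA. A minimal guest-side sketch, reusing the tdx_hypercall_args plumbing from the earlier patch (the wrapper name here is illustrative; the series adds the real helper as tdg_vp_vmcall_report_fatal_error() below):

#include "tdx/tdcall.h"
#include "tdx/tdx.h"

/* Illustrative sketch only; not the helper added by this patch. */
static void report_fatal_sketch(uint64_t error_code)
{
	struct tdx_hypercall_args args = {
		.r10 = TDX_HYPERCALL_STANDARD,           /* 0 */
		.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR,
		.r12 = error_code,                       /* bits 31:0 must be 0 */
		.r13 = 0,                                /* no shared-memory GPA */
	};

	/* Execution is not expected to continue past this call. */
	__tdx_hypercall(&args, 0);
}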
Co-developed-by: Binbin Wu binbin.wu@linux.intel.com Signed-off-by: Binbin Wu binbin.wu@linux.intel.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdx.h | 6 ++- .../selftests/kvm/include/x86/tdx/tdx_util.h | 1 + .../selftests/kvm/include/x86/tdx/test_util.h | 19 +++++++ tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 18 +++++++ .../selftests/kvm/lib/x86/tdx/tdx_util.c | 6 +++ .../selftests/kvm/lib/x86/tdx/test_util.c | 10 ++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 51 ++++++++++++++++++- 7 files changed, 108 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index a7161efe4ee2..2acccc9dccf9 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -4,9 +4,13 @@
#include <stdint.h>
+#include "kvm_util.h" + +#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003 + #define TDG_VP_VMCALL_INSTRUCTION_IO 30
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); - +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa); #endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h index 57a2f5893ffe..d66cf17f03ea 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h @@ -15,5 +15,6 @@ struct kvm_vm *td_create(void); void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, uint64_t attributes); void td_finalize(struct kvm_vm *vm); +void td_vcpu_run(struct kvm_vcpu *vcpu);
#endif // SELFTESTS_TDX_KVM_UTIL_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h index 07d63bf1ffe1..dafeee9af1dc 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h @@ -38,4 +38,23 @@ bool is_tdx_enabled(void); void tdx_test_success(void); void tdx_test_assert_success(struct kvm_vcpu *vcpu);
+/* + * Report an error with @error_code to userspace. + * + * Return value from tdg_vp_vmcall_report_fatal_error() is ignored since + * execution is not expected to continue beyond this point. + */ +void tdx_test_fatal(uint64_t error_code); + +/* + * Report an error with @error_code to userspace. + * + * @data_gpa may point to an optional shared guest memory holding the error + * string. + * + * Return value from tdg_vp_vmcall_report_fatal_error() is ignored since + * execution is not expected to continue beyond this point. + */ +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa); + #endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index f417ee75bee2..ba088bfc1e62 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only
+#include <string.h> + #include "tdx/tdcall.h" #include "tdx/tdx.h"
@@ -25,3 +27,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
return ret; } + +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa) +{ + struct tdx_hypercall_args args; + + memset(&args, 0, sizeof(struct tdx_hypercall_args)); + + if (data_gpa) + error_code |= 0x8000000000000000; + + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR; + args.r12 = error_code; + args.r13 = data_gpa; + + __tdx_hypercall(&args, 0); +} diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c index e2bf9766dc03..5e4455be828a 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c @@ -9,6 +9,7 @@ #include "kvm_util.h" #include "processor.h" #include "tdx/td_boot.h" +#include "tdx/tdx.h" #include "test_util.h"
uint64_t tdx_s_bit; @@ -603,3 +604,8 @@ void td_finalize(struct kvm_vm *vm)
tdx_td_finalize_mr(vm); } + +void td_vcpu_run(struct kvm_vcpu *vcpu) +{ + vcpu_run(vcpu); +} diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c index 7355b213c344..6c82a0c3bd37 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c @@ -59,3 +59,13 @@ void tdx_test_assert_success(struct kvm_vcpu *vcpu) vcpu->run->io.port, vcpu->run->io.size, vcpu->run->io.direction); } + +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa) +{ + tdg_vp_vmcall_report_fatal_error(error_code, data_gpa); +} + +void tdx_test_fatal(uint64_t error_code) +{ + tdx_test_fatal_with_data(error_code, 0); +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index fdb7c40065a6..7d6d71602761 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -3,6 +3,7 @@ #include <signal.h>
#include "kvm_util.h" +#include "tdx/tdx.h" #include "tdx/tdx_util.h" #include "tdx/test_util.h" #include "test_util.h" @@ -24,7 +25,51 @@ static void verify_td_lifecycle(void)
printf("Verifying TD lifecycle:\n");
- vcpu_run(vcpu); + td_vcpu_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + +void guest_code_report_fatal_error(void) +{ + uint64_t err; + + /* + * Note: err should follow the GHCI spec definition: + * bits 31:0 should be set to 0. + * bits 62:32 are used for TD-specific extended error code. + * bit 63 is used to mark additional information in shared memory. + */ + err = 0x0BAAAAAD00000000; + tdx_test_fatal(err); + + tdx_test_success(); +} + +void verify_report_fatal_error(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_report_fatal_error); + td_finalize(vm); + + printf("Verifying report_fatal_error:\n"); + + td_vcpu_run(vcpu); + + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + TEST_ASSERT_EQ(vcpu->run->system_event.type, KVM_SYSTEM_EVENT_TDX_FATAL); + TEST_ASSERT_EQ(vcpu->run->system_event.ndata, 16); + + TEST_ASSERT_EQ(vcpu->run->system_event.data[12], 0x0BAAAAAD00000000); + TEST_ASSERT_EQ(vcpu->run->system_event.data[13], 0); + + td_vcpu_run(vcpu); tdx_test_assert_success(vcpu);
kvm_vm_free(vm); @@ -38,9 +83,11 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(1); + ksft_set_plan(2); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); + ksft_test_result(!run_in_new_process(&verify_report_fatal_error), + "verify_report_fatal_error\n");
ksft_finished(); return 0;
Verifies TDVMCALL<INSTRUCTION.IO> READ and WRITE operations.
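On the userspace side every one of these IO tests follows the same pattern: a KVM_EXIT_IO exit, with the data buffer located io.data_offset bytes into the shared kvm_run page. A minimal host-side sketch of completing a 1-byte read (the function name is illustrative, not part of this patch):

#include "kvm_util.h"

/* Illustrative: supply the value for a 1-byte port read requested by the guest. */
static void host_complete_byte_read(struct kvm_vcpu *vcpu, uint8_t value)
{
	void *data = (void *)vcpu->run + vcpu->run->io.data_offset;

	TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_IO);
	*(uint8_t *)data = value;	/* picked up by the guest on the next KVM_RUN */
}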
Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/test_util.h | 20 +++++ .../selftests/kvm/lib/x86/tdx/test_util.c | 35 +++++++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 78 ++++++++++++++++++- 3 files changed, 130 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h index dafeee9af1dc..cf11955d56d6 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h @@ -13,6 +13,19 @@ #define PORT_READ 0 #define PORT_WRITE 1
+/* + * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was + * called in the guest. + */ +void tdx_test_assert_io(struct kvm_vcpu *vcpu, uint16_t port, uint8_t size, + uint8_t direction); + +/* + * Run the tdx vcpu and check if there was some failure in the guest, either + * an exception like a triple fault, or if a tdx_test_fatal() was hit. + */ +void tdx_run(struct kvm_vcpu *vcpu); + /* * Run a test in a new process. * @@ -57,4 +70,11 @@ void tdx_test_fatal(uint64_t error_code); */ void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
+/* + * Assert on @error and report the @error to userspace. + * Return value from tdg_vp_vmcall_report_fatal_error() is ignored since execution + * is not expected to continue beyond this point. + */ +void tdx_assert_error(uint64_t error); + #endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c index 6c82a0c3bd37..4ccc5298ba25 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c @@ -8,8 +8,37 @@
#include "kvm_util.h" #include "tdx/tdx.h" +#include "tdx/tdx_util.h" #include "tdx/test_util.h"
+void tdx_test_assert_io(struct kvm_vcpu *vcpu, uint16_t port, uint8_t size, + uint8_t direction) +{ + TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_IO, + "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n", + vcpu->run->exit_reason, + exit_reason_str(vcpu->run->exit_reason)); + + TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_IO && + vcpu->run->io.port == port && + vcpu->run->io.size == size && + vcpu->run->io.direction == direction, + "Got unexpected IO exit values: %u (%s) %u %u %u\n", + vcpu->run->exit_reason, + exit_reason_str(vcpu->run->exit_reason), + vcpu->run->io.port, vcpu->run->io.size, + vcpu->run->io.direction); +} + +void tdx_run(struct kvm_vcpu *vcpu) +{ + td_vcpu_run(vcpu); + if (vcpu->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) + TEST_FAIL("Guest reported error. error code: %lld (0x%llx)\n", + vcpu->run->system_event.data[12], + vcpu->run->system_event.data[13]); +} + int run_in_new_process(void (*func)(void)) { int wstatus; @@ -69,3 +98,9 @@ void tdx_test_fatal(uint64_t error_code) { tdx_test_fatal_with_data(error_code, 0); } + +void tdx_assert_error(uint64_t error) +{ + if (error) + tdx_test_fatal(error); +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 7d6d71602761..97330e28f236 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -3,6 +3,7 @@ #include <signal.h>
#include "kvm_util.h" +#include "tdx/tdcall.h" #include "tdx/tdx.h" #include "tdx/tdx_util.h" #include "tdx/test_util.h" @@ -25,7 +26,7 @@ static void verify_td_lifecycle(void)
printf("Verifying TD lifecycle:\n");
- td_vcpu_run(vcpu); + tdx_run(vcpu); tdx_test_assert_success(vcpu);
kvm_vm_free(vm); @@ -69,9 +70,78 @@ void verify_report_fatal_error(void) TEST_ASSERT_EQ(vcpu->run->system_event.data[12], 0x0BAAAAAD00000000); TEST_ASSERT_EQ(vcpu->run->system_event.data[13], 0);
- td_vcpu_run(vcpu); + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + +#define TDX_IOEXIT_TEST_PORT 0x50 + +/* + * Verifies IO functionality by writing a |value| to a predefined port. + * Verifies that the read value is |value| + 1 from the same port. + * If all the tests are passed then write a value to port TDX_TEST_PORT + */ +void guest_ioexit(void) +{ + uint64_t data_out, data_in; + uint64_t ret; + + data_out = 0xAB; + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1, + PORT_WRITE, &data_out); + tdx_assert_error(ret); + + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1, + PORT_READ, &data_in); + tdx_assert_error(ret); + + if (data_in != 0xAC) + tdx_test_fatal(data_in); + + tdx_test_success(); +} + +void verify_td_ioexit(void) +{ + struct kvm_vcpu *vcpu; + uint32_t port_data; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_ioexit); + td_finalize(vm); + + printf("Verifying TD IO Exit:\n"); + + /* Wait for guest to do a IO write */ + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_IOEXIT_TEST_PORT, 1, PORT_WRITE); + port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + printf("\t ... IO WRITE: DONE\n"); + + /* + * Wait for the guest to do a IO read. Provide the previous written data + * + 1 back to the guest + */ + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_IOEXIT_TEST_PORT, 1, PORT_READ); + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1; + + printf("\t ... IO READ: DONE\n"); + + /* + * Wait for the guest to complete execution successfully. The read + * value is checked within the guest. + */ + tdx_run(vcpu); tdx_test_assert_success(vcpu);
+ printf("\t ... IO verify read/write values: OK\n"); kvm_vm_free(vm); printf("\t ... PASSED\n"); } @@ -83,11 +153,13 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(2); + ksft_set_plan(3); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), "verify_report_fatal_error\n"); + ksft_test_result(!run_in_new_process(&verify_td_ioexit), + "verify_td_ioexit\n");
ksft_finished(); return 0;
The test reads CPUID values from inside a TD VM and compares them to expected values.
The test targets CPUID values which are virtualized as "As Configured", "As Configured (if Native)", "Calculated", "Fixed" and "Native" according to the TDX spec.
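For quick reference, these are the CPUID.01H fields the test decodes, annotated with their TDX virtualization class; a sketch assuming the standard SDM field layout (the helper names are illustrative):

#include <stdint.h>

static inline uint32_t sse3_enabled(uint32_t ecx)        { return ecx & 0x1; }          /* Native */
static inline uint32_t clflush_line_size(uint32_t ebx)   { return (ebx >> 8) & 0xFF; }  /* Fixed */
static inline uint32_t fma_enabled(uint32_t ecx)         { return (ecx >> 12) & 0x1; }  /* As Configured (if Native) */
static inline uint32_t max_addressable_ids(uint32_t ebx) { return (ebx >> 16) & 0xFF; } /* As Configured */
static inline uint32_t initial_apic_id(uint32_t ebx)     { return (ebx >> 24) & 0xFF; } /* Calculated */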
Co-developed-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/test_util.h | 15 +++ .../selftests/kvm/lib/x86/tdx/test_util.c | 20 ++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 98 ++++++++++++++++++- 3 files changed, 132 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h index cf11955d56d6..2af6e810ef78 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h @@ -9,6 +9,9 @@ #define TDX_TEST_SUCCESS_PORT 0x30 #define TDX_TEST_SUCCESS_SIZE 4
+#define TDX_TEST_REPORT_PORT 0x31 +#define TDX_TEST_REPORT_SIZE 4 + /* Port I/O direction */ #define PORT_READ 0 #define PORT_WRITE 1 @@ -77,4 +80,16 @@ void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa); */ void tdx_assert_error(uint64_t error);
+/* + * Report a 32 bit value from the guest to user space using TDG.VP.VMCALL + * <Instruction.IO> call. Data is reported on port TDX_TEST_REPORT_PORT. + */ +uint64_t tdx_test_report_to_user_space(uint32_t data); + +/* + * Read a 32 bit value from the guest in user space, sent using + * tdx_test_report_to_user_space(). + */ +uint32_t tdx_test_read_report_from_guest(struct kvm_vcpu *vcpu); + #endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c index 4ccc5298ba25..f9bde114a8bc 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c @@ -104,3 +104,23 @@ void tdx_assert_error(uint64_t error) if (error) tdx_test_fatal(error); } + +uint64_t tdx_test_report_to_user_space(uint32_t data) +{ + /* Upcast data to match tdg_vp_vmcall_instruction_io() signature */ + uint64_t data_64 = data; + + return tdg_vp_vmcall_instruction_io(TDX_TEST_REPORT_PORT, + TDX_TEST_REPORT_SIZE, PORT_WRITE, + &data_64); +} + +uint32_t tdx_test_read_report_from_guest(struct kvm_vcpu *vcpu) +{ + uint32_t res; + + tdx_test_assert_io(vcpu, TDX_TEST_REPORT_PORT, 4, PORT_WRITE); + res = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + return res; +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 97330e28f236..bbdcca358d71 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -3,6 +3,7 @@ #include <signal.h>
#include "kvm_util.h" +#include "processor.h" #include "tdx/tdcall.h" #include "tdx/tdx.h" #include "tdx/tdx_util.h" @@ -146,6 +147,99 @@ void verify_td_ioexit(void) printf("\t ... PASSED\n"); }
+/* + * Verifies CPUID functionality by reading CPUID values in guest. The guest + * will then send the values to userspace using an IO write to be checked + * against the expected values. + */ +void guest_code_cpuid(void) +{ + uint32_t ebx, ecx; + uint64_t err; + + /* Read CPUID leaf 0x1 */ + asm volatile ("cpuid" + : "=b" (ebx), "=c" (ecx) + : "a" (0x1) + : "edx"); + + err = tdx_test_report_to_user_space(ebx); + tdx_assert_error(err); + + err = tdx_test_report_to_user_space(ecx); + tdx_assert_error(err); + + tdx_test_success(); +} + +void verify_td_cpuid(void) +{ + uint32_t guest_max_addressable_ids, host_max_addressable_ids; + const struct kvm_cpuid_entry2 *cpuid_entry; + uint32_t guest_clflush_line_size; + uint32_t guest_initial_apic_id; + uint32_t guest_sse3_enabled; + uint32_t guest_fma_enabled; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + uint32_t ebx, ecx; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_cpuid); + td_finalize(vm); + + printf("Verifying TD CPUID:\n"); + + /* Wait for guest to report ebx value */ + tdx_run(vcpu); + ebx = tdx_test_read_report_from_guest(vcpu); + + /* Wait for guest to report either ecx value or error */ + tdx_run(vcpu); + ecx = tdx_test_read_report_from_guest(vcpu); + + /* Wait for guest to complete execution */ + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + /* Verify the CPUID values received from the guest. */ + printf("\t ... Verifying CPUID values from guest\n"); + + /* Get KVM CPUIDs for reference */ + cpuid_entry = vcpu_get_cpuid_entry(vcpu, 1); + TEST_ASSERT(cpuid_entry, "CPUID entry missing\n"); + + host_max_addressable_ids = (cpuid_entry->ebx >> 16) & 0xFF; + + guest_sse3_enabled = ecx & 0x1; // Native + guest_clflush_line_size = (ebx >> 8) & 0xFF; // Fixed + guest_max_addressable_ids = (ebx >> 16) & 0xFF; // As Configured + guest_fma_enabled = (ecx >> 12) & 0x1; // As Configured (if Native) + guest_initial_apic_id = (ebx >> 24) & 0xFF; // Calculated + + TEST_ASSERT_EQ(guest_sse3_enabled, 1); + TEST_ASSERT_EQ(guest_clflush_line_size, 8); + TEST_ASSERT_EQ(guest_max_addressable_ids, host_max_addressable_ids); + + /* TODO: This only tests the native value. To properly test + * "As Configured (if Native)" this value needs override in the + * TD params. + */ + TEST_ASSERT_EQ(guest_fma_enabled, (cpuid_entry->ecx >> 12) & 0x1); + + /* TODO: guest_initial_apic_id is calculated based on the number of + * vCPUs in the TD. From the spec: "Virtual CPU index, starting from 0 + * and allocated sequentially on each successful TDH.VP.INIT" + * To test non-trivial values either use a TD with multiple vCPUs + * or pick a different calculated value. + */ + TEST_ASSERT_EQ(guest_initial_apic_id, 0); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -153,13 +247,15 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(3); + ksft_set_plan(4); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), "verify_report_fatal_error\n"); ksft_test_result(!run_in_new_process(&verify_td_ioexit), "verify_td_ioexit\n"); + ksft_test_result(!run_in_new_process(&verify_td_cpuid), + "verify_td_cpuid\n");
ksft_finished(); return 0;
The test calls the TDG.VP.VMCALL<GetTdVmCallInfo> hypercall from the guest and verifies the expected return values.
The TDG.VP.VMCALL<GetTdVmCallInfo> hypercall is a subleaf of TDG.VP.VMCALL that enumerates which TDG.VP.VMCALL sub-leaves are supported. The hypercall is reserved for future enhancements of the Guest-Host-Communication Interface (GHCI) specification. GHCI version 344426-001US defines it to require zero in the input register R12 and to return zero in the output registers R11, R12, R13, and R14, so that the guest TD enumerates no enhancements.
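In register terms the interface is minimal: R12 is the only input and must be zero, and R11 through R14 come back as outputs. A guest-side sketch (illustrative wrapper name; the real helper added below is tdg_vp_vmcall_get_td_vmcall_info()):

#include "tdx/tdcall.h"
#include "tdx/tdx.h"

/* Illustrative sketch of the GetTdVmCallInfo register interface. */
static uint64_t get_td_vmcall_info_sketch(uint64_t out[4])
{
	struct tdx_hypercall_args args = {
		.r11 = TDG_VP_VMCALL_GET_TD_VM_CALL_INFO,
		.r12 = 0,	/* must be zero per GHCI 344426-001US */
	};
	uint64_t err = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);

	out[0] = args.r11;
	out[1] = args.r12;
	out[2] = args.r13;
	out[3] = args.r14;
	return err;
}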
Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdx.h | 3 + .../selftests/kvm/include/x86/tdx/test_util.h | 27 +++++++ tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 23 ++++++ .../selftests/kvm/lib/x86/tdx/test_util.c | 42 +++++++++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 72 ++++++++++++++++++- 5 files changed, 166 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index 2acccc9dccf9..97ceb90c8792 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -6,6 +6,7 @@
#include "kvm_util.h"
+#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
#define TDG_VP_VMCALL_INSTRUCTION_IO 30 @@ -13,4 +14,6 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa); +uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, + uint64_t *r13, uint64_t *r14); #endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h index 2af6e810ef78..91031e956462 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h @@ -4,6 +4,7 @@
#include <stdbool.h>
+#include "kvm_util.h" #include "tdcall.h"
#define TDX_TEST_SUCCESS_PORT 0x30 @@ -92,4 +93,30 @@ uint64_t tdx_test_report_to_user_space(uint32_t data); */ uint32_t tdx_test_read_report_from_guest(struct kvm_vcpu *vcpu);
+/* + * Report a 64 bit value from the guest to user space using TDG.VP.VMCALL + * <Instruction.IO> call. + * + * Data is sent to host in 2 calls. LSB is sent (and needs to be read) first. + */ +uint64_t tdx_test_send_64bit(uint64_t port, uint64_t data); + +/* + * Report a 64 bit value from the guest to user space using TDG.VP.VMCALL + * <Instruction.IO> call. Data is reported on port TDX_TEST_REPORT_PORT. + */ +uint64_t tdx_test_report_64bit_to_user_space(uint64_t data); + +/* + * Read a 64 bit value from the guest in user space, sent using + * tdx_test_send_64bit(). + */ +uint64_t tdx_test_read_64bit(struct kvm_vcpu *vcpu, uint64_t port); + +/* + * Read a 64 bit value from the guest in user space, sent using + * tdx_test_report_64bit_to_user_space(). + */ +uint64_t tdx_test_read_64bit_report_from_guest(struct kvm_vcpu *vcpu); + #endif // SELFTEST_TDX_TEST_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index ba088bfc1e62..5105dfae0e9e 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -43,3 +43,26 @@ void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
__tdx_hypercall(&args, 0); } + +uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, + uint64_t *r13, uint64_t *r14) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_GET_TD_VM_CALL_INFO, + .r12 = 0, + }; + uint64_t ret; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (r11) + *r11 = args.r11; + if (r12) + *r12 = args.r12; + if (r13) + *r13 = args.r13; + if (r14) + *r14 = args.r14; + + return ret; +} diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c index f9bde114a8bc..8c3b6802c37e 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c @@ -7,6 +7,7 @@ #include <unistd.h>
#include "kvm_util.h" +#include "tdx/tdcall.h" #include "tdx/tdx.h" #include "tdx/tdx_util.h" #include "tdx/test_util.h" @@ -124,3 +125,44 @@ uint32_t tdx_test_read_report_from_guest(struct kvm_vcpu *vcpu)
return res; } + +uint64_t tdx_test_send_64bit(uint64_t port, uint64_t data) +{ + uint64_t data_hi = (data >> 32) & 0xFFFFFFFF; + uint64_t data_lo = data & 0xFFFFFFFF; + uint64_t err; + + err = tdg_vp_vmcall_instruction_io(port, 4, PORT_WRITE, &data_lo); + if (err) + return err; + + return tdg_vp_vmcall_instruction_io(port, 4, PORT_WRITE, &data_hi); +} + +uint64_t tdx_test_report_64bit_to_user_space(uint64_t data) +{ + return tdx_test_send_64bit(TDX_TEST_REPORT_PORT, data); +} + +uint64_t tdx_test_read_64bit(struct kvm_vcpu *vcpu, uint64_t port) +{ + uint32_t lo, hi; + uint64_t res; + + tdx_test_assert_io(vcpu, port, 4, PORT_WRITE); + lo = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + vcpu_run(vcpu); + + tdx_test_assert_io(vcpu, port, 4, PORT_WRITE); + hi = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + res = hi; + res = (res << 32) | lo; + return res; +} + +uint64_t tdx_test_read_64bit_report_from_guest(struct kvm_vcpu *vcpu) +{ + return tdx_test_read_64bit(vcpu, TDX_TEST_REPORT_PORT); +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index bbdcca358d71..22143d16e0d1 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -240,6 +240,74 @@ void verify_td_cpuid(void) printf("\t ... PASSED\n"); }
+/* + * Verifies TDG.VP.VMCALL<GetTdVmCallInfo> hypercall functionality. + */ +void guest_code_get_td_vmcall_info(void) +{ + uint64_t r11, r12, r13, r14; + uint64_t err; + + err = tdg_vp_vmcall_get_td_vmcall_info(&r11, &r12, &r13, &r14); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(r11); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(r12); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(r13); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(r14); + tdx_assert_error(err); + + tdx_test_success(); +} + +void verify_get_td_vmcall_info(void) +{ + uint64_t r11, r12, r13, r14; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_get_td_vmcall_info); + td_finalize(vm); + + printf("Verifying TD get vmcall info:\n"); + + /* Wait for guest to report r11 value */ + tdx_run(vcpu); + r11 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r12 value */ + tdx_run(vcpu); + r12 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r13 value */ + tdx_run(vcpu); + r13 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r14 value */ + tdx_run(vcpu); + r14 = tdx_test_read_64bit_report_from_guest(vcpu); + + TEST_ASSERT_EQ(r11, 0); + TEST_ASSERT_EQ(r12, 0); + TEST_ASSERT_EQ(r13, 0); + TEST_ASSERT_EQ(r14, 0); + + /* Wait for guest to complete execution */ + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -247,7 +315,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(4); + ksft_set_plan(5); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -256,6 +324,8 @@ int main(int argc, char **argv) "verify_td_ioexit\n"); ksft_test_result(!run_in_new_process(&verify_td_cpuid), "verify_td_cpuid\n"); + ksft_test_result(!run_in_new_process(&verify_get_td_vmcall_info), + "verify_get_td_vmcall_info\n");
ksft_finished(); return 0;
The test verifies IO writes of various sizes from the guest to the host.
Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdcall.h | 3 + tools/testing/selftests/kvm/x86/tdx_vm_test.c | 79 ++++++++++++++++++- 2 files changed, 81 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h index a6c966e93486..e7440f7fe259 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h @@ -7,6 +7,9 @@ #include <linux/bits.h> #include <linux/types.h>
+#define TDG_VP_VMCALL_SUCCESS 0x0000000000000000 +#define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000 + #define TDX_HCALL_HAS_OUTPUT BIT(0)
#define TDX_HYPERCALL_STANDARD 0 diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 22143d16e0d1..f646da032004 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -308,6 +308,81 @@ void verify_get_td_vmcall_info(void) printf("\t ... PASSED\n"); }
+#define TDX_IO_WRITES_TEST_PORT 0x51 + +/* + * Verifies IO functionality by writing values of different sizes + * to the host. + */ +void guest_io_writes(void) +{ + uint64_t byte_4 = 0xFFABCDEF; + uint64_t byte_2 = 0xABCD; + uint64_t byte_1 = 0xAB; + uint64_t ret; + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 1, + PORT_WRITE, &byte_1); + tdx_assert_error(ret); + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 2, + PORT_WRITE, &byte_2); + tdx_assert_error(ret); + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 4, + PORT_WRITE, &byte_4); + tdx_assert_error(ret); + + /* Write an invalid number of bytes. */ + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 5, + PORT_WRITE, &byte_4); + tdx_assert_error(ret); + + tdx_test_success(); +} + +void verify_guest_writes(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + uint32_t byte_4; + uint16_t byte_2; + uint8_t byte_1; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_io_writes); + td_finalize(vm); + + printf("Verifying guest writes:\n"); + + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_IO_WRITES_TEST_PORT, 1, PORT_WRITE); + byte_1 = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_IO_WRITES_TEST_PORT, 2, PORT_WRITE); + byte_2 = *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_IO_WRITES_TEST_PORT, 4, PORT_WRITE); + byte_4 = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset); + + TEST_ASSERT_EQ(byte_1, 0xAB); + TEST_ASSERT_EQ(byte_2, 0xABCD); + TEST_ASSERT_EQ(byte_4, 0xFFABCDEF); + + td_vcpu_run(vcpu); + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + TEST_ASSERT_EQ(vcpu->run->system_event.data[12], TDG_VP_VMCALL_INVALID_OPERAND); + + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -315,7 +390,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(5); + ksft_set_plan(6); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -326,6 +401,8 @@ int main(int argc, char **argv) "verify_td_cpuid\n"); ksft_test_result(!run_in_new_process(&verify_get_td_vmcall_info), "verify_get_td_vmcall_info\n"); + ksft_test_result(!run_in_new_process(&verify_guest_writes), + "verify_guest_writes\n");
ksft_finished(); return 0;
The test verifies IO reads of various sizes from the host to the guest.
Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/x86/tdx_vm_test.c | 76 ++++++++++++++++++- 1 file changed, 75 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index f646da032004..ae5749e5c605 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -383,6 +383,78 @@ void verify_guest_writes(void) printf("\t ... PASSED\n"); }
+#define TDX_IO_READS_TEST_PORT 0x52 + +/* + * Verifies IO functionality by reading values of different sizes + * from the host. + */ +void guest_io_reads(void) +{ + uint64_t data; + uint64_t ret; + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 1, + PORT_READ, &data); + tdx_assert_error(ret); + if (data != 0xAB) + tdx_test_fatal(1); + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 2, + PORT_READ, &data); + tdx_assert_error(ret); + if (data != 0xABCD) + tdx_test_fatal(2); + + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 4, + PORT_READ, &data); + tdx_assert_error(ret); + if (data != 0xFFABCDEF) + tdx_test_fatal(4); + + /* Read an invalid number of bytes. */ + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 5, + PORT_READ, &data); + tdx_assert_error(ret); + + tdx_test_success(); +} + +void verify_guest_reads(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_io_reads); + td_finalize(vm); + + printf("Verifying guest reads:\n"); + + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_IO_READS_TEST_PORT, 1, PORT_READ); + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xAB; + + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_IO_READS_TEST_PORT, 2, PORT_READ); + *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xABCD; + + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_IO_READS_TEST_PORT, 4, PORT_READ); + *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xFFABCDEF; + + td_vcpu_run(vcpu); + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + TEST_ASSERT_EQ(vcpu->run->system_event.data[12], TDG_VP_VMCALL_INVALID_OPERAND); + + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -390,7 +462,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(6); + ksft_set_plan(7); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -403,6 +475,8 @@ int main(int argc, char **argv) "verify_get_td_vmcall_info\n"); ksft_test_result(!run_in_new_process(&verify_guest_writes), "verify_guest_writes\n"); + ksft_test_result(!run_in_new_process(&verify_guest_reads), + "verify_guest_reads\n");
ksft_finished(); return 0;
The test verifies reads and writes of MSRs with different access levels.
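The host-side piece worth calling out is the KVM MSR filter: with KVM_MSR_FILTER_DEFAULT_DENY every MSR access not explicitly allowed exits to userspace, and each range covers nmsrs MSRs starting at base, with one allow bit per MSR in the bitmap. A minimal sketch that permits reads of a single MSR (illustrative function name; the test installs its full filter before creating the vCPU):

#include <sys/ioctl.h>

#include "kvm_util.h"

/* Illustrative: deny all MSR accesses except reads of @msr. */
static void allow_one_msr_read(struct kvm_vm *vm, uint32_t msr)
{
	static uint64_t allow_bits = ~0ULL;	/* one bit per MSR in the range */
	struct kvm_msr_filter filter = {
		.flags = KVM_MSR_FILTER_DEFAULT_DENY,
		.ranges = {
			{
				.flags = KVM_MSR_FILTER_READ,
				.nmsrs = 1,
				.base = msr,
				.bitmap = (uint8_t *)&allow_bits,
			},
		},
	};
	int ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &filter);

	TEST_ASSERT(ret == 0, "KVM_X86_SET_MSR_FILTER failed: %d", ret);
}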
Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdx.h | 4 + tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 27 +++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 193 +++++++++++++++++- 3 files changed, 223 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index 97ceb90c8792..56359a8c4c19 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -10,10 +10,14 @@ #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
#define TDG_VP_VMCALL_INSTRUCTION_IO 30 +#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31 +#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa); uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, uint64_t *r13, uint64_t *r14); +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value); +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value); #endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index 5105dfae0e9e..99ec45a5a657 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -66,3 +66,30 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
return ret; } + +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_INSTRUCTION_RDMSR, + .r12 = index, + }; + uint64_t ret; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (ret_value) + *ret_value = args.r11; + + return ret; +} + +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_INSTRUCTION_WRMSR, + .r12 = index, + .r13 = value, + }; + + return __tdx_hypercall(&args, 0); +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index ae5749e5c605..079ac266a44e 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -455,6 +455,193 @@ void verify_guest_reads(void) printf("\t ... PASSED\n"); }
+/* + * Define a filter which denies all MSR access except the following: + * MSR_X2APIC_APIC_ICR: Allow read/write access (allowed by default) + * MSR_IA32_MISC_ENABLE: Allow read access + * MSR_IA32_POWER_CTL: Allow write access + */ +#define MSR_X2APIC_APIC_ICR 0x830 +static u64 tdx_msr_test_allow_bits = ~0ULL; +struct kvm_msr_filter tdx_msr_test_filter = { + .flags = KVM_MSR_FILTER_DEFAULT_DENY, + .ranges = { + { + .flags = KVM_MSR_FILTER_READ, + .nmsrs = 1, + .base = MSR_IA32_MISC_ENABLE, + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits, + }, { + .flags = KVM_MSR_FILTER_WRITE, + .nmsrs = 1, + .base = MSR_IA32_POWER_CTL, + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits, + }, + }, +}; + +/* + * Verifies MSR read functionality. + */ +void guest_msr_read(void) +{ + uint64_t data; + uint64_t ret; + + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_X2APIC_APIC_ICR, &data); + tdx_assert_error(ret); + + ret = tdx_test_report_64bit_to_user_space(data); + tdx_assert_error(ret); + + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_IA32_MISC_ENABLE, &data); + tdx_assert_error(ret); + + ret = tdx_test_report_64bit_to_user_space(data); + tdx_assert_error(ret); + + /* Expect this call to fail since MSR_IA32_POWER_CTL is write only */ + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_IA32_POWER_CTL, &data); + if (ret) { + ret = tdx_test_report_64bit_to_user_space(ret); + tdx_assert_error(ret); + } else { + tdx_test_fatal(-99); + } + + tdx_test_success(); +} + +void verify_guest_msr_reads(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + uint64_t data; + int ret; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + + /* + * Set explicit MSR filter map to control access to the MSR registers + * used in the test. + */ + printf("\t ... Setting test MSR filter\n"); + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER); + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable"); + + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter); + TEST_ASSERT(ret == 0, + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)", + ret, errno, strerror(errno)); + + vcpu = td_vcpu_add(vm, 0, guest_msr_read); + td_finalize(vm); + + printf("Verifying guest msr reads:\n"); + + printf("\t ... Setting test MSR values\n"); + /* Write arbitrary to the MSRs. */ + vcpu_set_msr(vcpu, MSR_X2APIC_APIC_ICR, 4); + vcpu_set_msr(vcpu, MSR_IA32_MISC_ENABLE, 5); + vcpu_set_msr(vcpu, MSR_IA32_POWER_CTL, 6); + + printf("\t ... Running guest\n"); + tdx_run(vcpu); + data = tdx_test_read_64bit_report_from_guest(vcpu); + TEST_ASSERT_EQ(data, 4); + + tdx_run(vcpu); + data = tdx_test_read_64bit_report_from_guest(vcpu); + TEST_ASSERT_EQ(data, 5); + + tdx_run(vcpu); + data = tdx_test_read_64bit_report_from_guest(vcpu); + TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND); + + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + +/* + * Verifies MSR write functionality. 
+ */ +void guest_msr_write(void) +{ + uint64_t ret; + + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_X2APIC_APIC_ICR, 4); + tdx_assert_error(ret); + + /* Expect this call to fail since MSR_IA32_MISC_ENABLE is read only */ + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_IA32_MISC_ENABLE, 5); + if (ret) { + ret = tdx_test_report_64bit_to_user_space(ret); + tdx_assert_error(ret); + } else { + tdx_test_fatal(-99); + } + + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_IA32_POWER_CTL, 6); + tdx_assert_error(ret); + + tdx_test_success(); +} + +void verify_guest_msr_writes(void) +{ + uint64_t ia32_misc_enable_val; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + uint64_t data; + int ret; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + + /* + * Set explicit MSR filter map to control access to the MSR registers + * used in the test. + */ + printf("\t ... Setting test MSR filter\n"); + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER); + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable"); + + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter); + TEST_ASSERT(ret == 0, + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)", + ret, errno, strerror(errno)); + + vcpu = td_vcpu_add(vm, 0, guest_msr_write); + td_finalize(vm); + + ia32_misc_enable_val = vcpu_get_msr(vcpu, MSR_IA32_MISC_ENABLE); + + printf("Verifying guest msr writes:\n"); + + printf("\t ... Running guest\n"); + /* Only the write to MSR_IA32_MISC_ENABLE should trigger an exit */ + tdx_run(vcpu); + data = tdx_test_read_64bit_report_from_guest(vcpu); + TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND); + + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + printf("\t ... Verifying MSR values written by guest\n"); + + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_X2APIC_APIC_ICR), 4); + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_MISC_ENABLE), + ia32_misc_enable_val); + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_POWER_CTL), 6); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -462,7 +649,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(7); + ksft_set_plan(9); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -477,6 +664,10 @@ int main(int argc, char **argv) "verify_guest_writes\n"); ksft_test_result(!run_in_new_process(&verify_guest_reads), "verify_guest_reads\n"); + ksft_test_result(!run_in_new_process(&verify_guest_msr_writes), + "verify_guest_msr_writes\n"); + ksft_test_result(!run_in_new_process(&verify_guest_msr_reads), + "verify_guest_msr_reads\n");
ksft_finished(); return 0;
From: Erdem Aktas erdemaktas@google.com
The test verifies that the guest runs TDVMCALL<INSTRUCTION.HLT> and that the guest vCPU enters the halted state.
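On the host side the halted state is visible through the MP state ioctls; the test arms a SIGALRM to check it while the vCPU is blocked in KVM_RUN. A sketch of just the check-and-resume step (illustrative name; the real logic lives in _verify_guest_hlt() below):

#include "kvm_util.h"

/* Illustrative: confirm the vCPU halted, then let it run again. */
static void check_halted_and_resume(struct kvm_vcpu *vcpu)
{
	struct kvm_mp_state mp_state;

	vcpu_mp_state_get(vcpu, &mp_state);
	TEST_ASSERT_EQ(mp_state.mp_state, KVM_MP_STATE_HALTED);

	mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
	vcpu_mp_state_set(vcpu, &mp_state);
}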
Co-developed-by: Sagi Shahar sagis@google.com Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Erdem Aktas erdemaktas@google.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdx.h | 2 + tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 10 +++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 81 ++++++++++++++++++- 3 files changed, 92 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index 56359a8c4c19..b5831919a215 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -9,6 +9,7 @@ #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
+#define TDG_VP_VMCALL_INSTRUCTION_HLT 12 #define TDG_VP_VMCALL_INSTRUCTION_IO 30 #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31 #define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32 @@ -20,4 +21,5 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, uint64_t *r13, uint64_t *r14); uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value); uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value); +uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag); #endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index 99ec45a5a657..e89ca727286e 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -93,3 +93,13 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value)
return __tdx_hypercall(&args, 0); } + +uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_INSTRUCTION_HLT, + .r12 = interrupt_blocked_flag, + }; + + return __tdx_hypercall(&args, 0); +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 079ac266a44e..720ef5e87071 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -642,6 +642,83 @@ void verify_guest_msr_writes(void) printf("\t ... PASSED\n"); }
+/* + * Verifies HLT functionality. + */ +void guest_hlt(void) +{ + uint64_t interrupt_blocked_flag; + uint64_t ret; + + interrupt_blocked_flag = 0; + ret = tdg_vp_vmcall_instruction_hlt(interrupt_blocked_flag); + tdx_assert_error(ret); + + tdx_test_success(); +} + +void _verify_guest_hlt(int signum); + +void wake_me(int interval) +{ + struct sigaction action; + + action.sa_handler = _verify_guest_hlt; + sigemptyset(&action.sa_mask); + action.sa_flags = 0; + + TEST_ASSERT(sigaction(SIGALRM, &action, NULL) == 0, + "Could not set the alarm handler!"); + + alarm(interval); +} + +void _verify_guest_hlt(int signum) +{ + static struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + /* + * This function will also be called by SIGALRM handler to check the + * vCPU MP State. If vm has been initialized, then we are in the signal + * handler. Check the MP state and let the guest run again. + */ + if (vcpu) { + struct kvm_mp_state mp_state; + + vcpu_mp_state_get(vcpu, &mp_state); + TEST_ASSERT_EQ(mp_state.mp_state, KVM_MP_STATE_HALTED); + + /* Let the guest to run and finish the test.*/ + mp_state.mp_state = KVM_MP_STATE_RUNNABLE; + vcpu_mp_state_set(vcpu, &mp_state); + return; + } + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_hlt); + td_finalize(vm); + + printf("Verifying HLT:\n"); + + printf("\t ... Running guest\n"); + + /* Wait 1 second for guest to execute HLT */ + wake_me(1); + tdx_run(vcpu); + + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + +void verify_guest_hlt(void) +{ + _verify_guest_hlt(0); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -649,7 +726,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(9); + ksft_set_plan(10); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -668,6 +745,8 @@ int main(int argc, char **argv) "verify_guest_msr_writes\n"); ksft_test_result(!run_in_new_process(&verify_guest_msr_reads), "verify_guest_msr_reads\n"); + ksft_test_result(!run_in_new_process(&verify_guest_hlt), + "verify_guest_hlt\n");
ksft_finished(); return 0;
The test verifies MMIO reads of various sizes from the host to the guest.
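Completing a guest MMIO read from userspace mirrors the port IO case, except the data buffer is embedded directly in the run struct rather than reached through an offset. A minimal sketch (illustrative name only):

#include <string.h>

#include "kvm_util.h"

/* Illustrative: supply the value for an 8-byte MMIO read requested by the guest. */
static void host_complete_mmio_read(struct kvm_vcpu *vcpu, uint64_t value)
{
	TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_MMIO);
	TEST_ASSERT_EQ(vcpu->run->mmio.is_write, 0);
	TEST_ASSERT_EQ(vcpu->run->mmio.len, 8);

	memcpy(vcpu->run->mmio.data, &value, sizeof(value));
}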
Co-developed-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdx.h | 4 + .../selftests/kvm/include/x86/tdx/tdx_util.h | 1 + .../selftests/kvm/include/x86/tdx/test_util.h | 11 +++ tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 20 +++++ .../selftests/kvm/lib/x86/tdx/test_util.c | 19 ++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 89 ++++++++++++++++++- 6 files changed, 143 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index b5831919a215..fa0b24873a8f 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -13,6 +13,7 @@ #define TDG_VP_VMCALL_INSTRUCTION_IO 30 #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31 #define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32 +#define TDG_VP_VMCALL_VE_REQUEST_MMIO 48
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); @@ -22,4 +23,7 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12, uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value); uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value); uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag); +uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size, + uint64_t *data_out); + #endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h index d66cf17f03ea..c942aec7ad26 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h @@ -6,6 +6,7 @@
#include "kvm_util.h"
+extern uint64_t tdx_s_bit; void tdx_filter_cpuid(struct kvm_vm *vm, struct kvm_cpuid2 *cpuid_data); void __tdx_mask_cpuid_features(struct kvm_cpuid_entry2 *entry);
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h index 91031e956462..3330d5a54698 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/test_util.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/test_util.h @@ -17,6 +17,10 @@ #define PORT_READ 0 #define PORT_WRITE 1
+/* MMIO direction */ +#define MMIO_READ 0 +#define MMIO_WRITE 1 + /* * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was * called in the guest. @@ -24,6 +28,13 @@ void tdx_test_assert_io(struct kvm_vcpu *vcpu, uint16_t port, uint8_t size, uint8_t direction);
+/* + * Assert that some MMIO operation involving TDG.VP.VMCALL <#VERequestMMIO> was + * called in the guest. + */ +void tdx_test_assert_mmio(struct kvm_vcpu *vcpu, uint64_t phys_addr, + uint32_t size, uint8_t is_write); + /* * Run the tdx vcpu and check if there was some failure in the guest, either * an exception like a triple fault, or if a tdx_test_fatal() was hit. diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index e89ca727286e..8bf41e667fc1 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -4,6 +4,7 @@
#include "tdx/tdcall.h" #include "tdx/tdx.h" +#include "tdx/test_util.h"
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data) @@ -103,3 +104,22 @@ uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag)
return __tdx_hypercall(&args, 0); } + +uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size, + uint64_t *data_out) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO, + .r12 = size, + .r13 = MMIO_READ, + .r14 = address, + }; + uint64_t ret; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (data_out) + *data_out = args.r11; + + return ret; +} diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c index 8c3b6802c37e..f92ddda2d1ac 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/test_util.c @@ -31,6 +31,25 @@ void tdx_test_assert_io(struct kvm_vcpu *vcpu, uint16_t port, uint8_t size, vcpu->run->io.direction); }
+void tdx_test_assert_mmio(struct kvm_vcpu *vcpu, uint64_t phys_addr, + uint32_t size, uint8_t is_write) +{ + TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_MMIO, + "Got exit_reason other than KVM_EXIT_MMIO: %u (%s)\n", + vcpu->run->exit_reason, + exit_reason_str(vcpu->run->exit_reason)); + + TEST_ASSERT(vcpu->run->exit_reason == KVM_EXIT_MMIO && + vcpu->run->mmio.phys_addr == phys_addr && + vcpu->run->mmio.len == size && + vcpu->run->mmio.is_write == is_write, + "Got an unexpected MMIO exit values: %u (%s) %llu %u %u\n", + vcpu->run->exit_reason, + exit_reason_str(vcpu->run->exit_reason), + vcpu->run->mmio.phys_addr, vcpu->run->mmio.len, + vcpu->run->mmio.is_write); +} + void tdx_run(struct kvm_vcpu *vcpu) { td_vcpu_run(vcpu); diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 720ef5e87071..563f1025c8a3 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -719,6 +719,91 @@ void verify_guest_hlt(void) _verify_guest_hlt(0); }
+/* Pick any address that was not mapped into the guest to test MMIO */ +#define TDX_MMIO_TEST_ADDR 0x200000000 +#define MMIO_SYNC_VALUE 0x42 + +void guest_mmio_reads(void) +{ + uint64_t mmio_test_addr = TDX_MMIO_TEST_ADDR | tdx_s_bit; + uint64_t data; + uint64_t ret; + + ret = tdg_vp_vmcall_ve_request_mmio_read(mmio_test_addr, 1, &data); + tdx_assert_error(ret); + if (data != 0x12) + tdx_test_fatal(1); + + ret = tdg_vp_vmcall_ve_request_mmio_read(mmio_test_addr, 2, &data); + tdx_assert_error(ret); + if (data != 0x1234) + tdx_test_fatal(2); + + ret = tdg_vp_vmcall_ve_request_mmio_read(mmio_test_addr, 4, &data); + tdx_assert_error(ret); + if (data != 0x12345678) + tdx_test_fatal(4); + + ret = tdg_vp_vmcall_ve_request_mmio_read(mmio_test_addr, 8, &data); + tdx_assert_error(ret); + if (data != 0x1234567890ABCDEF) + tdx_test_fatal(8); + + /* Make sure host and guest are synced to the same point of execution */ + tdx_test_report_to_user_space(MMIO_SYNC_VALUE); + + /* Read an invalid number of bytes. */ + ret = tdg_vp_vmcall_ve_request_mmio_read(mmio_test_addr, 10, &data); + tdx_assert_error(ret); + + tdx_test_success(); +} + +/* + * Verifies guest MMIO reads. + */ +void verify_mmio_reads(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_mmio_reads); + td_finalize(vm); + + printf("Verifying TD MMIO reads:\n"); + + tdx_run(vcpu); + tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 1, MMIO_READ); + *(uint8_t *)vcpu->run->mmio.data = 0x12; + + tdx_run(vcpu); + tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 2, MMIO_READ); + *(uint16_t *)vcpu->run->mmio.data = 0x1234; + + tdx_run(vcpu); + tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 4, MMIO_READ); + *(uint32_t *)vcpu->run->mmio.data = 0x12345678; + + tdx_run(vcpu); + tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 8, MMIO_READ); + *(uint64_t *)vcpu->run->mmio.data = 0x1234567890ABCDEF; + + tdx_run(vcpu); + TEST_ASSERT_EQ(tdx_test_read_report_from_guest(vcpu), MMIO_SYNC_VALUE); + + td_vcpu_run(vcpu); + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + TEST_ASSERT_EQ(vcpu->run->system_event.data[12], TDG_VP_VMCALL_INVALID_OPERAND); + + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -726,7 +811,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(10); + ksft_set_plan(11); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -747,6 +832,8 @@ int main(int argc, char **argv) "verify_guest_msr_reads\n"); ksft_test_result(!run_in_new_process(&verify_guest_hlt), "verify_guest_hlt\n"); + ksft_test_result(!run_in_new_process(&verify_mmio_reads), + "verify_mmio_reads\n");
ksft_finished(); return 0;
The test verifies MMIO writes of various sizes from the guest to the host.
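As with the read test, the flow condenses to the sketch below (guest and host halves shown together; the helpers are the ones added by this series, so this is illustration only).

    /* Guest: 2-byte MMIO write emulated by the VMM. */
    uint64_t err = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR | tdx_s_bit,
                                                       2, 0x1234);
    tdx_assert_error(err);

    /* Host: the value written by the guest is exposed in the shared run page. */
    tdx_run(vcpu);
    tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 2, MMIO_WRITE);
    TEST_ASSERT_EQ(*(uint16_t *)vcpu->run->mmio.data, 0x1234);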
Co-developed-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdx.h | 2 + tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 14 +++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 85 ++++++++++++++++++- 3 files changed, 100 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index fa0b24873a8f..2fd67c3e5128 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -25,5 +25,7 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value); uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag); uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size, uint64_t *data_out); +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size, + uint64_t data_in);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index 8bf41e667fc1..d61940fe7df4 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -123,3 +123,17 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
return ret; } + +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size, + uint64_t data_in) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO, + .r12 = size, + .r13 = MMIO_WRITE, + .r14 = address, + .r15 = data_in, + }; + + return __tdx_hypercall(&args, 0); +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 563f1025c8a3..6ad675a93eeb 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -804,6 +804,87 @@ void verify_mmio_reads(void) printf("\t ... PASSED\n"); }
+void guest_mmio_writes(void) +{ + uint64_t mmio_test_addr = TDX_MMIO_TEST_ADDR | tdx_s_bit; + uint64_t ret; + + ret = tdg_vp_vmcall_ve_request_mmio_write(mmio_test_addr, 1, 0x12); + tdx_assert_error(ret); + + ret = tdg_vp_vmcall_ve_request_mmio_write(mmio_test_addr, 2, 0x1234); + tdx_assert_error(ret); + + ret = tdg_vp_vmcall_ve_request_mmio_write(mmio_test_addr, 4, 0x12345678); + tdx_assert_error(ret); + + ret = tdg_vp_vmcall_ve_request_mmio_write(mmio_test_addr, 8, 0x1234567890ABCDEF); + tdx_assert_error(ret); + + /* Make sure host and guest are synced to the same point of execution */ + tdx_test_report_to_user_space(MMIO_SYNC_VALUE); + + /* Write across page boundary. */ + ret = tdg_vp_vmcall_ve_request_mmio_write(PAGE_SIZE - 1, 8, 0); + tdx_assert_error(ret); + + tdx_test_success(); +} + +/* + * Verifies guest MMIO writes. + */ +void verify_mmio_writes(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + uint64_t byte_8; + uint32_t byte_4; + uint16_t byte_2; + uint8_t byte_1; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_mmio_writes); + td_finalize(vm); + + printf("Verifying TD MMIO writes:\n"); + + tdx_run(vcpu); + tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 1, MMIO_WRITE); + byte_1 = *(uint8_t *)(vcpu->run->mmio.data); + + tdx_run(vcpu); + tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 2, MMIO_WRITE); + byte_2 = *(uint16_t *)(vcpu->run->mmio.data); + + tdx_run(vcpu); + tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 4, MMIO_WRITE); + byte_4 = *(uint32_t *)(vcpu->run->mmio.data); + + tdx_run(vcpu); + tdx_test_assert_mmio(vcpu, TDX_MMIO_TEST_ADDR, 8, MMIO_WRITE); + byte_8 = *(uint64_t *)(vcpu->run->mmio.data); + + TEST_ASSERT_EQ(byte_1, 0x12); + TEST_ASSERT_EQ(byte_2, 0x1234); + TEST_ASSERT_EQ(byte_4, 0x12345678); + TEST_ASSERT_EQ(byte_8, 0x1234567890ABCDEF); + + tdx_run(vcpu); + TEST_ASSERT_EQ(tdx_test_read_report_from_guest(vcpu), MMIO_SYNC_VALUE); + + td_vcpu_run(vcpu); + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT); + TEST_ASSERT_EQ(vcpu->run->system_event.data[12], TDG_VP_VMCALL_INVALID_OPERAND); + + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -811,7 +892,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(11); + ksft_set_plan(12); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -834,6 +915,8 @@ int main(int argc, char **argv) "verify_guest_hlt\n"); ksft_test_result(!run_in_new_process(&verify_mmio_reads), "verify_mmio_reads\n"); + ksft_test_result(!run_in_new_process(&verify_mmio_writes), + "verify_mmio_writes\n");
ksft_finished(); return 0;
This test issues a CPUID TDVMCALL from inside the guest to get the CPUID values as seen by KVM.
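In outline, the guest forwards one CPUID leaf through the TDVMCALL and the host compares the reported registers against KVM's own view of that leaf. A condensed sketch using the helpers from this patch:

    /* Guest: leaf 0x1, sub-leaf 0, resolved by KVM via TDG.VP.VMCALL<Instruction.CPUID>. */
    uint32_t eax, ebx, ecx, edx;
    uint64_t err = tdg_vp_vmcall_instruction_cpuid(1, 0, &eax, &ebx, &ecx, &edx);
    tdx_assert_error(err);

    /* Host: cross-check against KVM's CPUID table for the same vCPU. */
    struct kvm_cpuid_entry2 *e = vcpu_get_cpuid_entry(vcpu, 1);
    TEST_ASSERT_EQ(e->eax, eax);
    TEST_ASSERT_EQ(e->ebx & ~0xFF000000, ebx & ~0xFF000000);    /* mask the LAPIC ID */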
Co-developed-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Isaku Yamahata isaku.yamahata@intel.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdx.h | 4 + tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 25 ++++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 78 ++++++++++++++++++- 3 files changed, 106 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index 2fd67c3e5128..060158cb046b 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -9,6 +9,7 @@ #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
+#define TDG_VP_VMCALL_INSTRUCTION_CPUID 10 #define TDG_VP_VMCALL_INSTRUCTION_HLT 12 #define TDG_VP_VMCALL_INSTRUCTION_IO 30 #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31 @@ -27,5 +28,8 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size, uint64_t *data_out); uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size, uint64_t data_in); +uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx, + uint32_t *ret_eax, uint32_t *ret_ebx, + uint32_t *ret_ecx, uint32_t *ret_edx);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index d61940fe7df4..fb391483d2fa 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -137,3 +137,28 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
return __tdx_hypercall(&args, 0); } + +uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx, + uint32_t *ret_eax, uint32_t *ret_ebx, + uint32_t *ret_ecx, uint32_t *ret_edx) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_INSTRUCTION_CPUID, + .r12 = eax, + .r13 = ecx, + }; + uint64_t ret; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (ret_eax) + *ret_eax = args.r12; + if (ret_ebx) + *ret_ebx = args.r13; + if (ret_ecx) + *ret_ecx = args.r14; + if (ret_edx) + *ret_edx = args.r15; + + return ret; +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 6ad675a93eeb..2f75f12d2a44 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -885,6 +885,80 @@ void verify_mmio_writes(void) printf("\t ... PASSED\n"); }
+/* + * Verifies CPUID TDVMCALL functionality. + * The guest will then send the values to userspace using an IO write to be + * checked against the expected values. + */ +void guest_code_cpuid_tdcall(void) +{ + uint32_t eax, ebx, ecx, edx; + uint64_t err; + + /* Read CPUID leaf 0x1 from host. */ + err = tdg_vp_vmcall_instruction_cpuid(/*eax=*/1, /*ecx=*/0, + &eax, &ebx, &ecx, &edx); + tdx_assert_error(err); + + err = tdx_test_report_to_user_space(eax); + tdx_assert_error(err); + + err = tdx_test_report_to_user_space(ebx); + tdx_assert_error(err); + + err = tdx_test_report_to_user_space(ecx); + tdx_assert_error(err); + + err = tdx_test_report_to_user_space(edx); + tdx_assert_error(err); + + tdx_test_success(); +} + +void verify_td_cpuid_tdcall(void) +{ + struct kvm_cpuid_entry2 *cpuid_entry; + uint32_t eax, ebx, ecx, edx; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_cpuid_tdcall); + td_finalize(vm); + + printf("Verifying TD CPUID TDVMCALL:\n"); + + /* Wait for guest to report CPUID values */ + tdx_run(vcpu); + eax = tdx_test_read_report_from_guest(vcpu); + + tdx_run(vcpu); + ebx = tdx_test_read_report_from_guest(vcpu); + + tdx_run(vcpu); + ecx = tdx_test_read_report_from_guest(vcpu); + + tdx_run(vcpu); + edx = tdx_test_read_report_from_guest(vcpu); + + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + /* Get KVM CPUIDs for reference */ + cpuid_entry = vcpu_get_cpuid_entry(vcpu, 1); + TEST_ASSERT(cpuid_entry, "CPUID entry missing\n"); + + TEST_ASSERT_EQ(cpuid_entry->eax, eax); + /* Mask lapic ID when comparing ebx. */ + TEST_ASSERT_EQ(cpuid_entry->ebx & ~0xFF000000, ebx & ~0xFF000000); + TEST_ASSERT_EQ(cpuid_entry->ecx, ecx); + TEST_ASSERT_EQ(cpuid_entry->edx, edx); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -892,7 +966,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(12); + ksft_set_plan(13); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -917,6 +991,8 @@ int main(int argc, char **argv) "verify_mmio_reads\n"); ksft_test_result(!run_in_new_process(&verify_mmio_writes), "verify_mmio_writes\n"); + ksft_test_result(!run_in_new_process(&verify_td_cpuid_tdcall), + "verify_td_cpuid_tdcall\n");
ksft_finished(); return 0;
From: Ryan Afranji afranji@google.com
The test checks that the host can only read fixed values when trying to access the guest's private memory.
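The core of the check is that two host reads of the guest's private page, taken before and after the guest changes its contents, return the same fixed value. A condensed sketch using the helpers from this series (the intermediate port-IO synchronization is omitted for brevity):

    uint32_t *host_virt = addr_gva2hva(vm, test_page);   /* host alias of the private page */
    uint32_t first, second;

    tdx_run(vcpu);          /* guest writes 0xABCD and exits */
    first = *host_virt;

    tdx_run(vcpu);          /* guest updates the value to 0xFEDC */
    second = *host_virt;

    /* Private memory must not leak: both reads see the same fixed pattern. */
    TEST_ASSERT(first == second, "Host did not read a fixed pattern");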
Signed-off-by: Ryan Afranji afranji@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/x86/tdx_vm_test.c | 83 ++++++++++++++++++- 1 file changed, 82 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 2f75f12d2a44..b6ef0348746c 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -959,6 +959,85 @@ void verify_td_cpuid_tdcall(void) printf("\t ... PASSED\n"); }
+/* + * Shared variables between guest and host for host reading private mem test + */ +static uint64_t tdx_test_host_read_private_mem_addr; +#define TDX_HOST_READ_PRIVATE_MEM_PORT_TEST 0x53 + +void guest_host_read_priv_mem(void) +{ + uint64_t placeholder = 0; + uint64_t ret; + + /* Set value */ + *((uint32_t *)tdx_test_host_read_private_mem_addr) = 0xABCD; + + /* Exit so host can read value */ + ret = tdg_vp_vmcall_instruction_io(TDX_HOST_READ_PRIVATE_MEM_PORT_TEST, + 4, PORT_WRITE, &placeholder); + tdx_assert_error(ret); + + /* Update guest_var's value and have host reread it. */ + *((uint32_t *)tdx_test_host_read_private_mem_addr) = 0xFEDC; + + tdx_test_success(); +} + +void verify_host_reading_private_mem(void) +{ + uint64_t second_host_read; + uint64_t first_host_read; + struct kvm_vcpu *vcpu; + vm_vaddr_t test_page; + uint64_t *host_virt; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_host_read_priv_mem); + + test_page = vm_vaddr_alloc_page(vm); + TEST_ASSERT(test_page < BIT_ULL(32), + "Test address should fit in 32 bits so it can be sent to the guest"); + + host_virt = addr_gva2hva(vm, test_page); + TEST_ASSERT(host_virt, + "Guest address not found in guest memory regions\n"); + + tdx_test_host_read_private_mem_addr = test_page; + sync_global_to_guest(vm, tdx_test_host_read_private_mem_addr); + + td_finalize(vm); + + printf("Verifying host's behavior when reading TD private memory:\n"); + + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_HOST_READ_PRIVATE_MEM_PORT_TEST, + 4, PORT_WRITE); + printf("\t ... Guest's variable contains 0xABCD\n"); + + /* Host reads guest's variable. */ + first_host_read = *host_virt; + printf("\t ... Host's read attempt value: %lu\n", first_host_read); + + /* Guest updates variable and host rereads it. */ + tdx_run(vcpu); + printf("\t ... Guest's variable updated to 0xFEDC\n"); + + second_host_read = *host_virt; + printf("\t ... Host's second read attempt value: %lu\n", + second_host_read); + + TEST_ASSERT(first_host_read == second_host_read, + "Host did not read a fixed pattern\n"); + + printf("\t ... Fixed pattern was returned to the host\n"); + + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -966,7 +1045,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(13); + ksft_set_plan(14); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -993,6 +1072,8 @@ int main(int argc, char **argv) "verify_mmio_writes\n"); ksft_test_result(!run_in_new_process(&verify_td_cpuid_tdcall), "verify_td_cpuid_tdcall\n"); + ksft_test_result(!run_in_new_process(&verify_host_reading_private_mem), + "verify_host_reading_private_mem\n");
ksft_finished(); return 0;
From: Roger Wang runanwang@google.com
Add a test for TDG.VP.INFO.
Introduce __tdx_module_call(), which shuffles its function parameters into the registers used by the TDCALL instruction, the instruction the guest uses to communicate with the TDX module. The first function parameter is the leaf number, indicating which guest-side function should be run, for example TDG.VP.INFO.
The guest uses the new __tdx_module_call() to call TDG.VP.INFO and obtain TDX TD execution environment information from the TDX module. All returned registers are passed back to the host, which verifies the values for correctness.
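For clarity, the calling convention is: the leaf number goes into RAX, the next four arguments into RCX, RDX, R8 and R9, and, when an output struct is supplied, the post-TDCALL register values are copied back into it. A minimal guest-side use, matching what tdg_vp_info() in this patch does:

    struct tdx_module_output out = { 0 };
    uint64_t err;

    /* TDG.VP.INFO takes no inputs; everything comes back in the output registers. */
    err = __tdx_module_call(TDG_VP_INFO, 0, 0, 0, 0, &out);
    if (err)
        tdx_test_fatal(err);

    /* e.g. out.rcx carries the GPAW in bits 5:0, out.rdx the TD ATTRIBUTES. */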
Co-developed-by: Sagi Shahar sagis@google.com Signed-off-by: Sagi Shahar sagis@google.com Signed-off-by: Roger Wang runanwang@google.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdcall.h | 19 +++ .../selftests/kvm/include/x86/tdx/tdx.h | 5 + .../selftests/kvm/lib/x86/tdx/tdcall.S | 68 +++++++++ tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 27 ++++ tools/testing/selftests/kvm/x86/tdx_vm_test.c | 133 +++++++++++++++++- 5 files changed, 251 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h index e7440f7fe259..ab1a97a82fa9 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdcall.h @@ -32,4 +32,23 @@ struct tdx_hypercall_args { /* Used to request services from the VMM */ u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);
+/* + * Used to gather the output registers values of the TDCALL and SEAMCALL + * instructions when requesting services from the TDX module. + * + * This is a software only structure and not part of the TDX module/VMM ABI. + */ +struct tdx_module_output { + u64 rcx; + u64 rdx; + u64 r8; + u64 r9; + u64 r10; + u64 r11; +}; + +/* Used to communicate with the TDX module */ +u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9, + struct tdx_module_output *out); + #endif // SELFTESTS_TDX_TDCALL_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index 060158cb046b..801ca879664e 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -6,6 +6,8 @@
#include "kvm_util.h"
+#define TDG_VP_INFO 1 + #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
@@ -31,5 +33,8 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size, uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx, uint32_t *ret_eax, uint32_t *ret_ebx, uint32_t *ret_ecx, uint32_t *ret_edx); +uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, + uint64_t *r8, uint64_t *r9, + uint64_t *r10, uint64_t *r11);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S index b10769d1d557..c393a0fb35be 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdcall.S @@ -91,5 +91,73 @@ __tdx_hypercall: pop %rbp ret
+#define TDX_MODULE_rcx 0 /* offsetof(struct tdx_module_output, rcx) */ +#define TDX_MODULE_rdx 8 /* offsetof(struct tdx_module_output, rdx) */ +#define TDX_MODULE_r8 16 /* offsetof(struct tdx_module_output, r8) */ +#define TDX_MODULE_r9 24 /* offsetof(struct tdx_module_output, r9) */ +#define TDX_MODULE_r10 32 /* offsetof(struct tdx_module_output, r10) */ +#define TDX_MODULE_r11 40 /* offsetof(struct tdx_module_output, r11) */ + +.globl __tdx_module_call +.type __tdx_module_call, @function +__tdx_module_call: + /* Set up stack frame */ + push %rbp + movq %rsp, %rbp + + /* Callee-saved, so preserve it */ + push %r12 + + /* + * Push output pointer to stack. + * After the operation, it will be fetched into R12 register. + */ + push %r9 + + /* Mangle function call ABI into TDCALL/SEAMCALL ABI: */ + /* Move Leaf ID to RAX */ + mov %rdi, %rax + /* Move input 4 to R9 */ + mov %r8, %r9 + /* Move input 3 to R8 */ + mov %rcx, %r8 + /* Move input 1 to RCX */ + mov %rsi, %rcx + /* Leave input param 2 in RDX */ + + tdcall + + /* + * Fetch output pointer from stack to R12 (It is used + * as temporary storage) + */ + pop %r12 + + /* + * Since this macro can be invoked with NULL as an output pointer, + * check if caller provided an output struct before storing output + * registers. + * + * Update output registers, even if the call failed (RAX != 0). + * Other registers may contain details of the failure. + */ + test %r12, %r12 + jz .Lno_output_struct + + /* Copy result registers to output struct: */ + movq %rcx, TDX_MODULE_rcx(%r12) + movq %rdx, TDX_MODULE_rdx(%r12) + movq %r8, TDX_MODULE_r8(%r12) + movq %r9, TDX_MODULE_r9(%r12) + movq %r10, TDX_MODULE_r10(%r12) + movq %r11, TDX_MODULE_r11(%r12) + +.Lno_output_struct: + /* Restore the state of R12 register */ + pop %r12 + + pop %rbp + ret + /* Disable executable stack */ .section .note.GNU-stack,"",%progbits diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index fb391483d2fa..ab6fd3d7ae4b 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -162,3 +162,30 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
return ret; } + +uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, + uint64_t *r8, uint64_t *r9, + uint64_t *r10, uint64_t *r11) +{ + struct tdx_module_output out; + uint64_t ret; + + memset(&out, 0, sizeof(struct tdx_module_output)); + + ret = __tdx_module_call(TDG_VP_INFO, 0, 0, 0, 0, &out); + + if (rcx) + *rcx = out.rcx; + if (rdx) + *rdx = out.rdx; + if (r8) + *r8 = out.r8; + if (r9) + *r9 = out.r9; + if (r10) + *r10 = out.r10; + if (r11) + *r11 = out.r11; + + return ret; +} diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index b6ef0348746c..82acc17a66ab 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -1038,6 +1038,135 @@ void verify_host_reading_private_mem(void) printf("\t ... PASSED\n"); }
+/* + * Do a TDG.VP.INFO call from the guest + */ +void guest_tdcall_vp_info(void) +{ + uint64_t rcx, rdx, r8, r9, r10, r11; + uint64_t err; + + err = tdg_vp_info(&rcx, &rdx, &r8, &r9, &r10, &r11); + tdx_assert_error(err); + + /* return values to user space host */ + err = tdx_test_report_64bit_to_user_space(rcx); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(rdx); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(r8); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(r9); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(r10); + tdx_assert_error(err); + + err = tdx_test_report_64bit_to_user_space(r11); + tdx_assert_error(err); + + tdx_test_success(); +} + +/* + * TDG.VP.INFO call from the guest. Verify the right values are returned + */ +void verify_tdcall_vp_info(void) +{ + const struct kvm_cpuid_entry2 *cpuid_entry; + uint32_t ret_num_vcpus, ret_max_vcpus; + uint64_t rcx, rdx, r8, r9, r10, r11; + const int num_vcpus = 2; + struct kvm_vcpu *vcpus[num_vcpus]; + uint64_t attributes; + struct kvm_vm *vm; + int gpa_bits = -1; + uint32_t i; + + vm = td_create(); + +#define TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT BIT(28) + /* Setting attributes parameter used by TDH.MNG.INIT to 0x10000000 */ + attributes = TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT; + + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, attributes); + + for (i = 0; i < num_vcpus; i++) + vcpus[i] = td_vcpu_add(vm, i, guest_tdcall_vp_info); + + td_finalize(vm); + + printf("Verifying TDG.VP.INFO call:\n"); + + /* Get KVM CPUIDs for reference */ + + for (i = 0; i < num_vcpus; i++) { + struct kvm_vcpu *vcpu = vcpus[i]; + + cpuid_entry = vcpu_get_cpuid_entry(vcpu, 0x80000008); + TEST_ASSERT(cpuid_entry, "CPUID entry missing\n"); + gpa_bits = (cpuid_entry->eax & GENMASK(23, 16)) >> 16; + TEST_ASSERT_EQ((1UL << (gpa_bits - 1)), tdx_s_bit); + + /* Wait for guest to report rcx value */ + tdx_run(vcpu); + rcx = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report rdx value */ + tdx_run(vcpu); + rdx = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r8 value */ + tdx_run(vcpu); + r8 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r9 value */ + tdx_run(vcpu); + r9 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r10 value */ + tdx_run(vcpu); + r10 = tdx_test_read_64bit_report_from_guest(vcpu); + + /* Wait for guest to report r11 value */ + tdx_run(vcpu); + r11 = tdx_test_read_64bit_report_from_guest(vcpu); + + ret_num_vcpus = r8 & 0xFFFFFFFF; + ret_max_vcpus = (r8 >> 32) & 0xFFFFFFFF; + + /* first bits 5:0 of rcx represent the GPAW */ + TEST_ASSERT_EQ(rcx & 0x3F, gpa_bits); + /* next 63:6 bits of rcx is reserved and must be 0 */ + TEST_ASSERT_EQ(rcx >> 6, 0); + TEST_ASSERT_EQ(rdx, attributes); + TEST_ASSERT_EQ(ret_num_vcpus, num_vcpus); + TEST_ASSERT_EQ(ret_max_vcpus, vm_check_cap(vm, KVM_CAP_MAX_VCPUS)); + /* VCPU_INDEX = i */ + TEST_ASSERT_EQ(r9, i); + /* + * verify reserved bits are 0 + * r10 bit 0 (SYS_RD) indicates that the TDG.SYS.RD/RDM/RDALL + * functions are available and can be either 0 or 1. + */ + TEST_ASSERT_EQ(r10 & ~1, 0); + TEST_ASSERT_EQ(r11, 0); + + /* Wait for guest to complete execution */ + tdx_run(vcpu); + + tdx_test_assert_success(vcpu); + + printf("\t ... Guest completed run on VCPU=%u\n", i); + } + + kvm_vm_free(vm); + printf("\t ... 
PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -1045,7 +1174,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(14); + ksft_set_plan(15); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -1074,6 +1203,8 @@ int main(int argc, char **argv) "verify_td_cpuid_tdcall\n"); ksft_test_result(!run_in_new_process(&verify_host_reading_private_mem), "verify_host_reading_private_mem\n"); + ksft_test_result(!run_in_new_process(&verify_tdcall_vp_info), + "verify_tdcall_vp_info\n");
ksft_finished(); return 0;
From: Ackerley Tng ackerleytng@google.com
virt_map() enforces a private mapping for private memory. Introduce virt_map_shared() that creates a shared mapping for private as well as shared memory. This way, the TD does not have to remap its page tables at runtime.
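A typical use, mirroring the shared-memory test added later in this series: allocate a page normally (private), then alias the same GPA at a second GVA with the shared bit set. SHARED_GVA_ALIAS below is just a placeholder name for illustration.

    vm_vaddr_t priv_gva = vm_vaddr_alloc_page(vm);
    vm_paddr_t gpa = addr_gva2gpa(vm, priv_gva);

    /* One page; the leaf PTE gets the S-bit instead of the C-bit. */
    virt_map_shared(vm, SHARED_GVA_ALIAS, gpa, 1);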
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- .../testing/selftests/kvm/include/kvm_util.h | 23 +++++++++++++ tools/testing/selftests/kvm/lib/kvm_util.c | 34 +++++++++++++++++++ .../testing/selftests/kvm/lib/x86/processor.c | 15 ++++++-- 3 files changed, 70 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 813ba634dc49..442e34c6ed84 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -621,6 +621,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, unsigned int npages); +void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, + unsigned int npages); void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa); void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva); vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva); @@ -1096,6 +1098,27 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr virt_arch_pg_map(vm, vaddr, paddr); }
+/* + * VM Virtual Page Map as Shared + * + * Input Args: + * vm - Virtual Machine + * vaddr - VM Virtual Address + * paddr - VM Physical Address + * + * Output Args: None + * + * Return: None + * + * Within @vm, creates a virtual translation for the page starting + * at @vaddr to the page starting at @paddr. + */ +void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr); + +static inline void virt_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) +{ + virt_arch_pg_map_shared(vm, vaddr, paddr); +}
/* * Address Guest Virtual to Guest Physical diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index f8cf49794eed..008f01831036 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -7,6 +7,7 @@ #include "test_util.h" #include "kvm_util.h" #include "processor.h" +#include "sparsebit.h" #include "ucall_common.h"
#include <assert.h> @@ -1604,6 +1605,39 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, } }
+/* + * Map a range of VM virtual address to the VM's physical address as shared + * + * Input Args: + * vm - Virtual Machine + * vaddr - Virtual address to map + * paddr - VM Physical Address + * npages - The number of pages to map + * + * Output Args: None + * + * Return: None + * + * Within the VM given by @vm, creates a virtual translation for + * @npages starting at @vaddr to the page range starting at @paddr. + */ +void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, + unsigned int npages) +{ + size_t page_size = vm->page_size; + size_t size = npages * page_size; + + TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow"); + TEST_ASSERT(paddr + size > paddr, "Paddr overflow"); + + while (npages--) { + virt_pg_map_shared(vm, vaddr, paddr); + sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift); + vaddr += page_size; + paddr += page_size; + } +} + /* * Address VM Physical to Host Virtual * diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c index 9b2c236e723a..fef63d807c91 100644 --- a/tools/testing/selftests/kvm/lib/x86/processor.c +++ b/tools/testing/selftests/kvm/lib/x86/processor.c @@ -181,7 +181,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm, return pte; }
-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) +static void ___virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, + int level, bool protected) { const uint64_t pg_size = PG_LEVEL_SIZE(level); uint64_t *pml4e, *pdpe, *pde; @@ -231,17 +232,27 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) * Neither SEV nor TDX supports shared page tables, so only the final * leaf PTE needs manually set the C/S-bit. */ - if (vm_is_gpa_protected(vm, paddr)) + if (protected) *pte |= vm->arch.c_bit; else *pte |= vm->arch.s_bit; }
+void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level) +{ + ___virt_pg_map(vm, vaddr, paddr, level, vm_is_gpa_protected(vm, paddr)); +} + void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) { __virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K); }
+void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr) +{ + ___virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K, false); +} + void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, uint64_t nr_bytes, int level) {
From: Ryan Afranji afranji@google.com
Test that the host and guest can exchange data via shared memory.
Set up shared memory by first allocating it as private and then mapping the same GPA as shared. The guest starts with a request to map a page of memory as shared. This request is a hypercall (TDG.VP.VMCALL<MapGPA>) from the guest, which the kernel converts to a KVM_EXIT_HYPERCALL with KVM_HC_MAP_GPA_RANGE and forwards to the test for handling. The test handles the guest's request using the KVM_SET_MEMORY_ATTRIBUTES ioctl().
After the shared memory is set up, the guest writes to it and notifies the host, which verifies the data. In return, the host writes to the same memory and the guest verifies it.
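On the host side the MapGPA TDVMCALL arrives as KVM_EXIT_HYPERCALL with nr == KVM_HC_MAP_GPA_RANGE and is answered with the KVM_SET_MEMORY_ATTRIBUTES ioctl(); this is what handle_userspace_map_gpa()/handle_memory_conversion() in this patch implement, summarized below.

    if (vcpu->run->exit_reason == KVM_EXIT_HYPERCALL &&
        vcpu->run->hypercall.nr == KVM_HC_MAP_GPA_RANGE) {
        struct kvm_memory_attributes range = {
            .address = vcpu->run->hypercall.args[0],
            .size = vcpu->run->hypercall.args[1] << 12,
            .attributes = (vcpu->run->hypercall.args[2] & KVM_MAP_GPA_RANGE_ENCRYPTED) ?
                          KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
        };

        vm_ioctl(vcpu->vm, KVM_SET_MEMORY_ATTRIBUTES, &range);
        vcpu->run->hypercall.ret = 0;
    }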
Co-developed-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Ackerley Tng ackerleytng@google.com Co-developed-by: Binbin Wu binbin.wu@linux.intel.com Signed-off-by: Binbin Wu binbin.wu@linux.intel.com Signed-off-by: Ryan Afranji afranji@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/Makefile.kvm | 1 + .../selftests/kvm/include/x86/tdx/tdx.h | 4 + .../selftests/kvm/include/x86/tdx/tdx_util.h | 2 + tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 26 ++++ .../selftests/kvm/lib/x86/tdx/tdx_util.c | 32 +++++ .../selftests/kvm/x86/tdx_shared_mem_test.c | 129 ++++++++++++++++++ 6 files changed, 194 insertions(+) create mode 100644 tools/testing/selftests/kvm/x86/tdx_shared_mem_test.c
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm index e98d5413991a..9f660f913715 100644 --- a/tools/testing/selftests/kvm/Makefile.kvm +++ b/tools/testing/selftests/kvm/Makefile.kvm @@ -145,6 +145,7 @@ TEST_GEN_PROGS_x86 += steal_time TEST_GEN_PROGS_x86 += system_counter_offset_test TEST_GEN_PROGS_x86 += pre_fault_memory_test TEST_GEN_PROGS_x86 += x86/tdx_vm_test +TEST_GEN_PROGS_x86 += x86/tdx_shared_mem_test
# Compiled outputs used by test targets TEST_GEN_PROGS_EXTENDED_x86 += x86/nx_huge_pages_test diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index 801ca879664e..88f3571df16f 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -9,6 +9,7 @@ #define TDG_VP_INFO 1
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 +#define TDG_VP_VMCALL_MAP_GPA 0x10001 #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
#define TDG_VP_VMCALL_INSTRUCTION_CPUID 10 @@ -18,6 +19,8 @@ #define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32 #define TDG_VP_VMCALL_VE_REQUEST_MMIO 48
+void handle_userspace_map_gpa(struct kvm_vcpu *vcpu); + uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data); void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa); @@ -36,5 +39,6 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx, uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, uint64_t *r8, uint64_t *r9, uint64_t *r10, uint64_t *r11); +uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h index c942aec7ad26..ae39b78aa4af 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx_util.h @@ -17,5 +17,7 @@ void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type, uint64_t attributes); void td_finalize(struct kvm_vm *vm); void td_vcpu_run(struct kvm_vcpu *vcpu); +void handle_memory_conversion(struct kvm_vm *vm, uint32_t vcpu_id, uint64_t gpa, + uint64_t size, bool shared_to_private);
#endif // SELFTESTS_TDX_KVM_UTIL_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index ab6fd3d7ae4b..bae84c34c19e 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -1,11 +1,21 @@ // SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kvm_para.h> #include <string.h>
#include "tdx/tdcall.h" #include "tdx/tdx.h" +#include "tdx/tdx_util.h" #include "tdx/test_util.h"
+void handle_userspace_map_gpa(struct kvm_vcpu *vcpu) +{ + handle_memory_conversion(vcpu->vm, vcpu->id, vcpu->run->hypercall.args[0], + vcpu->run->hypercall.args[1] << 12, + vcpu->run->hypercall.args[2] & KVM_MAP_GPA_RANGE_ENCRYPTED); + vcpu->run->hypercall.ret = 0; +} + uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size, uint64_t write, uint64_t *data) { @@ -189,3 +199,19 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
return ret; } + +uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out) +{ + struct tdx_hypercall_args args = { + .r11 = TDG_VP_VMCALL_MAP_GPA, + .r12 = address, + .r13 = size + }; + uint64_t ret; + + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT); + + if (data_out) + *data_out = args.r11; + return ret; +} diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c index 5e4455be828a..c5bee67099c5 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx_util.c @@ -608,4 +608,36 @@ void td_finalize(struct kvm_vm *vm) void td_vcpu_run(struct kvm_vcpu *vcpu) { vcpu_run(vcpu); + + /* Handle TD VMCALLs that require userspace handling. */ + if (vcpu->run->exit_reason == KVM_EXIT_HYPERCALL && + vcpu->run->hypercall.nr == KVM_HC_MAP_GPA_RANGE) { + handle_userspace_map_gpa(vcpu); + } +} + +/* + * Handle conversion of memory with @size beginning @gpa for @vm. Set + * @shared_to_private to true for shared to private conversions and false + * otherwise. + * + * Since this is just for selftests, just keep both pieces of backing + * memory allocated and not deallocate/allocate memory; just do the + * minimum of calling KVM_MEMORY_ENCRYPT_REG_REGION and + * KVM_MEMORY_ENCRYPT_UNREG_REGION. + */ +void handle_memory_conversion(struct kvm_vm *vm, uint32_t vcpu_id, uint64_t gpa, + uint64_t size, bool shared_to_private) +{ + struct kvm_memory_attributes range; + + range.address = gpa; + range.size = size; + range.attributes = shared_to_private ? KVM_MEMORY_ATTRIBUTE_PRIVATE : 0; + range.flags = 0; + + pr_debug("\t... call KVM_SET_MEMORY_ATTRIBUTES ioctl from vCPU %u with gpa=%#lx, size=%#lx, attributes=%#llx\n", + vcpu_id, gpa, size, range.attributes); + + vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &range); } diff --git a/tools/testing/selftests/kvm/x86/tdx_shared_mem_test.c b/tools/testing/selftests/kvm/x86/tdx_shared_mem_test.c new file mode 100644 index 000000000000..79745e36ce3a --- /dev/null +++ b/tools/testing/selftests/kvm/x86/tdx_shared_mem_test.c @@ -0,0 +1,129 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <linux/kvm.h> +#include <stdint.h> + +#include "kvm_util.h" +#include "processor.h" +#include "tdx/tdcall.h" +#include "tdx/tdx.h" +#include "tdx/tdx_util.h" +#include "tdx/test_util.h" +#include "test_util.h" + +#define TDX_SHARED_MEM_TEST_PRIVATE_GVA (0x80000000) +#define TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK BIT_ULL(30) +#define TDX_SHARED_MEM_TEST_SHARED_GVA \ + (TDX_SHARED_MEM_TEST_PRIVATE_GVA | \ + TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK) + +#define TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE (0xcafecafe) +#define TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE (0xabcdabcd) + +#define TDX_SHARED_MEM_TEST_INFO_PORT 0x87 + +/* + * Shared variable between guest and host + */ +static uint64_t test_mem_shared_gpa; + +void guest_shared_mem(void) +{ + uint32_t *test_mem_shared_gva = + (uint32_t *)TDX_SHARED_MEM_TEST_SHARED_GVA; + + uint64_t placeholder; + uint64_t ret; + + /* Map gpa as shared */ + ret = tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE, + &placeholder); + if (ret) + tdx_test_fatal_with_data(ret, __LINE__); + + *test_mem_shared_gva = TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE; + + /* Exit so host can read shared value */ + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4, + PORT_WRITE, &placeholder); + if (ret) + tdx_test_fatal_with_data(ret, __LINE__); + + /* Read value written by host and send it back out for verification */ + ret 
= tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4, + PORT_WRITE, + (uint64_t *)test_mem_shared_gva); + if (ret) + tdx_test_fatal_with_data(ret, __LINE__); +} + +int verify_shared_mem(void) +{ + vm_vaddr_t test_mem_private_gva; + uint64_t test_mem_private_gpa; + uint32_t *test_mem_hva; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_shared_mem); + + /* + * Set up shared memory page for testing by first allocating as private + * and then mapping the same GPA again as shared. This way, the TD does + * not have to remap its page tables at runtime. + */ + test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size, + TDX_SHARED_MEM_TEST_PRIVATE_GVA); + TEST_ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA); + + test_mem_hva = addr_gva2hva(vm, test_mem_private_gva); + TEST_ASSERT(test_mem_hva, + "Guest address not found in guest memory regions\n"); + + test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva); + virt_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA, test_mem_private_gpa, 1); + + test_mem_shared_gpa = test_mem_private_gpa | vm->arch.s_bit; + sync_global_to_guest(vm, test_mem_shared_gpa); + + td_finalize(vm); + + vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, BIT_ULL(KVM_HC_MAP_GPA_RANGE)); + + printf("Verifying shared memory accesses for TDX\n"); + + /* Begin guest execution; guest writes to shared memory. */ + printf("\t ... Starting guest execution\n"); + + /* Handle map gpa as shared */ + tdx_run(vcpu); + + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4, PORT_WRITE); + TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE); + + *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE; + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4, PORT_WRITE); + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), + TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE); + + printf("\t ... PASSED\n"); + + kvm_vm_free(vm); + + return 0; +} + +int main(int argc, char **argv) +{ + if (!is_tdx_enabled()) { + printf("TDX is not supported by the KVM\n" + "Skipping the TDX tests.\n"); + return 0; + } + + return verify_shared_mem(); +}
From: Ackerley Tng ackerleytng@google.com
vm_vaddr_alloc_private() allows specifying both the virtual and physical addresses for the allocation.
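A minimal usage sketch: pin both the guest virtual and physical address of a private allocation, as the UPM test later in this series does. TEST_AREA_GVA and TEST_AREA_GPA are placeholder constants, not part of the patch.

    vm_vaddr_t gva = vm_vaddr_alloc_private(vm, vm->page_size,
                                            TEST_AREA_GVA,    /* vaddr_min */
                                            TEST_AREA_GPA,    /* paddr_min */
                                            MEM_REGION_TEST_DATA);
    TEST_ASSERT_EQ(gva, TEST_AREA_GVA);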
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/include/kvm_util.h | 3 +++ tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++++++ 2 files changed, 10 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 442e34c6ed84..690aef6f887c 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -611,6 +611,9 @@ vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, enum kvm_mem_region_type type); +vm_vaddr_t vm_vaddr_alloc_private(struct kvm_vm *vm, size_t sz, + vm_vaddr_t vaddr_min, vm_paddr_t paddr_min, + enum kvm_mem_region_type type); vm_vaddr_t vm_vaddr_identity_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min, enum kvm_mem_region_type type); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 008f01831036..9e0e28b6e9dd 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -1486,6 +1486,13 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, return ____vm_vaddr_alloc(vm, sz, vaddr_min, KVM_UTIL_MIN_PFN * vm->page_size, type, false); }
+vm_vaddr_t vm_vaddr_alloc_private(struct kvm_vm *vm, size_t sz, + vm_vaddr_t vaddr_min, vm_paddr_t paddr_min, + enum kvm_mem_region_type type) +{ + return ____vm_vaddr_alloc(vm, sz, vaddr_min, paddr_min, type, true); +} + /* * Allocate memory in @vm of size @sz beginning with the desired virtual address * of @vaddr_min and backed by physical address equal to returned virtual
From: Ackerley Tng ackerleytng@google.com
Add support for TDG.MEM.PAGE.ACCEPT, which the guest uses to accept a pending private page that was previously added by TDH.MEM.PAGE.AUG or converted using the KVM_SET_MEMORY_ATTRIBUTES ioctl().
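Typical guest-side use is accepting a 4K page after a region has been converted back to private with TDG.VP.VMCALL<MapGPA>, before the guest touches it again (level 0 selects a 4K mapping). A sketch only; 'gpa' stands for the page in question.

    /* Accept one pending 4K private page at 'gpa' before the guest accesses it. */
    uint64_t err = tdg_mem_page_accept(gpa & PAGE_MASK, 0 /* 4K level */);
    if (err)
        tdx_test_fatal(err);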
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/include/x86/tdx/tdx.h | 2 ++ tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 7 +++++++ 2 files changed, 9 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index 88f3571df16f..53637159fa12 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -7,6 +7,7 @@ #include "kvm_util.h"
#define TDG_VP_INFO 1 +#define TDG_MEM_PAGE_ACCEPT 6
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 #define TDG_VP_VMCALL_MAP_GPA 0x10001 @@ -40,5 +41,6 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, uint64_t *r8, uint64_t *r9, uint64_t *r10, uint64_t *r11); uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out); +uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level);
#endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index bae84c34c19e..a51ab7511936 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -3,6 +3,7 @@ #include <linux/kvm_para.h> #include <string.h>
+#include "processor.h" #include "tdx/tdcall.h" #include "tdx/tdx.h" #include "tdx/tdx_util.h" @@ -215,3 +216,9 @@ uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_o *data_out = args.r11; return ret; } + +uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level) +{ + return __tdx_module_call(TDG_MEM_PAGE_ACCEPT, (gpa & PAGE_MASK) | level, + 0, 0, 0, NULL); +}
From: Ackerley Tng ackerleytng@google.com
Support TDG.VP.VEINFO.GET, which the guest uses to obtain the virtualization exception information for the most recent #VE exception.
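The natural consumer is a guest #VE handler. The sketch below, essentially what the UPM test later in this series installs, fetches the exception info and, for an EPT-violation #VE, accepts the faulting 4K page.

    static void guest_ve_handler(struct ex_regs *regs)
    {
        struct ve_info ve;

        if (tdg_vp_veinfo_get(&ve))
            tdx_test_fatal(__LINE__);

        /* Only EPT violations are expected here; accept the faulting 4K page. */
        if (ve.exit_reason == EXIT_REASON_EPT_VIOLATION)
            tdg_mem_page_accept(ve.gpa & PAGE_MASK, 0);
    }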
Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- .../selftests/kvm/include/x86/tdx/tdx.h | 21 +++++++++++++++++++ tools/testing/selftests/kvm/lib/x86/tdx/tdx.c | 19 +++++++++++++++++ 2 files changed, 40 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h index 53637159fa12..55e52ad3de55 100644 --- a/tools/testing/selftests/kvm/include/x86/tdx/tdx.h +++ b/tools/testing/selftests/kvm/include/x86/tdx/tdx.h @@ -7,6 +7,7 @@ #include "kvm_util.h"
#define TDG_VP_INFO 1 +#define TDG_VP_VEINFO_GET 3 #define TDG_MEM_PAGE_ACCEPT 6
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000 @@ -43,4 +44,24 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx, uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out); uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level);
+/* + * Used by the #VE exception handler to gather the #VE exception + * info from the TDX module. This is a software only structure + * and not part of the TDX module/VMM ABI. + * + * Adapted from arch/x86/include/asm/tdx.h + */ +struct ve_info { + uint64_t exit_reason; + uint64_t exit_qual; + /* Guest Linear (virtual) Address */ + uint64_t gla; + /* Guest Physical Address */ + uint64_t gpa; + uint32_t instr_len; + uint32_t instr_info; +}; + +uint64_t tdg_vp_veinfo_get(struct ve_info *ve); + #endif // SELFTEST_TDX_TDX_H diff --git a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c index a51ab7511936..e42b586808a1 100644 --- a/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c +++ b/tools/testing/selftests/kvm/lib/x86/tdx/tdx.c @@ -222,3 +222,22 @@ uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level) return __tdx_module_call(TDG_MEM_PAGE_ACCEPT, (gpa & PAGE_MASK) | level, 0, 0, 0, NULL); } + +uint64_t tdg_vp_veinfo_get(struct ve_info *ve) +{ + struct tdx_module_output out; + uint64_t ret; + + memset(&out, 0, sizeof(struct tdx_module_output)); + + ret = __tdx_module_call(TDG_VP_VEINFO_GET, 0, 0, 0, 0, &out); + + ve->exit_reason = out.rcx; + ve->exit_qual = out.rdx; + ve->gla = out.r8; + ve->gpa = out.r9; + ve->instr_len = out.r10 & 0xffffffff; + ve->instr_info = out.r10 >> 32; + + return ret; +}
From: Ackerley Tng ackerleytng@google.com
This tests the use of guest memory with explicit TDG.VP.VMCALL<MapGPA> calls.
Provide a 2MB memory region to the TDX guest, with a 40KB focus area at offset 1MB intended to be shared between host and guest. The entire 2MB region starts out as private: the guest fills it with a pattern, and the host checks that it cannot see the pattern. The guest then requests via TDG.VP.VMCALL<MapGPA> that the 40KB focus area be shared, and the test checks that the host and guest have the same view of that memory. Finally, the guest requests that the 40KB area be made private again, with checks to confirm this is the case.
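For orientation, the 2MB test area added below is laid out as follows (sizes taken from struct tdx_upm_test_area in this patch):

    struct tdx_upm_test_area, 2MB at GPA 0x80000000:
        general_area_0   1MB                    control area, stays private
        focus_area       40KB                   converted shared <-> private during the test
        general_area_1   2MB - 1MB - 40KB       control area, stays private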
Co-developed-by: Binbin Wu binbin.wu@linux.intel.com Signed-off-by: Binbin Wu binbin.wu@linux.intel.com Signed-off-by: Ackerley Tng ackerleytng@google.com Signed-off-by: Sagi Shahar sagis@google.com --- tools/testing/selftests/kvm/Makefile.kvm | 1 + .../testing/selftests/kvm/x86/tdx_upm_test.c | 397 ++++++++++++++++++ 2 files changed, 398 insertions(+) create mode 100644 tools/testing/selftests/kvm/x86/tdx_upm_test.c
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm index 9f660f913715..94322d8dea88 100644 --- a/tools/testing/selftests/kvm/Makefile.kvm +++ b/tools/testing/selftests/kvm/Makefile.kvm @@ -146,6 +146,7 @@ TEST_GEN_PROGS_x86 += system_counter_offset_test TEST_GEN_PROGS_x86 += pre_fault_memory_test TEST_GEN_PROGS_x86 += x86/tdx_vm_test TEST_GEN_PROGS_x86 += x86/tdx_shared_mem_test +TEST_GEN_PROGS_x86 += x86/tdx_upm_test
# Compiled outputs used by test targets TEST_GEN_PROGS_EXTENDED_x86 += x86/nx_huge_pages_test diff --git a/tools/testing/selftests/kvm/x86/tdx_upm_test.c b/tools/testing/selftests/kvm/x86/tdx_upm_test.c new file mode 100644 index 000000000000..387258ab1a62 --- /dev/null +++ b/tools/testing/selftests/kvm/x86/tdx_upm_test.c @@ -0,0 +1,397 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <asm/kvm.h> +#include <asm/vmx.h> +#include <linux/kvm.h> +#include <linux/sizes.h> +#include <stdbool.h> +#include <stdint.h> + +#include "kvm_util.h" +#include "processor.h" +#include "tdx/tdcall.h" +#include "tdx/tdx.h" +#include "tdx/tdx_util.h" +#include "tdx/test_util.h" +#include "test_util.h" + +/* TDX UPM test patterns */ +#define PATTERN_CONFIDENCE_CHECK (0x11) +#define PATTERN_HOST_FOCUS (0x22) +#define PATTERN_GUEST_GENERAL (0x33) +#define PATTERN_GUEST_FOCUS (0x44) + +/* + * 0x80000000 is arbitrarily selected. The selected address need not be the same + * as TDX_UPM_TEST_AREA_GVA_PRIVATE, but it should not overlap with selftest + * code or boot page. + */ +#define TDX_UPM_TEST_AREA_GPA (0x80000000) +/* Test area GPA is arbitrarily selected */ +#define TDX_UPM_TEST_AREA_GVA_PRIVATE (0x90000000) +/* Select any bit that can be used as a flag */ +#define TDX_UPM_TEST_AREA_GVA_SHARED_BIT (32) +/* + * TDX_UPM_TEST_AREA_GVA_SHARED is used to map the same GPA twice into the + * guest, once as shared and once as private + */ +#define TDX_UPM_TEST_AREA_GVA_SHARED \ + (TDX_UPM_TEST_AREA_GVA_PRIVATE | \ + BIT_ULL(TDX_UPM_TEST_AREA_GVA_SHARED_BIT)) + +/* The test area is 2MB in size */ +#define TDX_UPM_TEST_AREA_SIZE SZ_2M +/* 0th general area is 1MB in size */ +#define TDX_UPM_GENERAL_AREA_0_SIZE SZ_1M +/* Focus area is 40KB in size */ +#define TDX_UPM_FOCUS_AREA_SIZE (SZ_32K + SZ_8K) +/* 1st general area is the rest of the space in the test area */ +#define TDX_UPM_GENERAL_AREA_1_SIZE \ + (TDX_UPM_TEST_AREA_SIZE - TDX_UPM_GENERAL_AREA_0_SIZE - \ + TDX_UPM_FOCUS_AREA_SIZE) + +/* + * The test memory area is set up as two general areas, sandwiching a focus + * area. The general areas act as control areas. After they are filled, they + * are not expected to change throughout the tests. The focus area is memory + * permissions change from private to shared and vice-versa. + * + * The focus area is intentionally small, and sandwiched to test that when the + * focus area's permissions change, the other areas' permissions are not + * affected. 
+ */ +struct __packed tdx_upm_test_area { + uint8_t general_area_0[TDX_UPM_GENERAL_AREA_0_SIZE]; + uint8_t focus_area[TDX_UPM_FOCUS_AREA_SIZE]; + uint8_t general_area_1[TDX_UPM_GENERAL_AREA_1_SIZE]; +}; + +static void fill_test_area(struct tdx_upm_test_area *test_area_base, + uint8_t pattern) +{ + memset(test_area_base, pattern, sizeof(*test_area_base)); +} + +static void fill_focus_area(struct tdx_upm_test_area *test_area_base, + uint8_t pattern) +{ + memset(test_area_base->focus_area, pattern, + sizeof(test_area_base->focus_area)); +} + +static bool check_area(uint8_t *base, uint64_t size, uint8_t expected_pattern) +{ + size_t i; + + for (i = 0; i < size; i++) { + if (base[i] != expected_pattern) + return false; + } + + return true; +} + +static bool check_general_areas(struct tdx_upm_test_area *test_area_base, + uint8_t expected_pattern) +{ + return (check_area(test_area_base->general_area_0, + sizeof(test_area_base->general_area_0), + expected_pattern) && + check_area(test_area_base->general_area_1, + sizeof(test_area_base->general_area_1), + expected_pattern)); +} + +static bool check_focus_area(struct tdx_upm_test_area *test_area_base, + uint8_t expected_pattern) +{ + return check_area(test_area_base->focus_area, + sizeof(test_area_base->focus_area), expected_pattern); +} + +static bool check_test_area(struct tdx_upm_test_area *test_area_base, + uint8_t expected_pattern) +{ + return (check_general_areas(test_area_base, expected_pattern) && + check_focus_area(test_area_base, expected_pattern)); +} + +static bool fill_and_check(struct tdx_upm_test_area *test_area_base, uint8_t pattern) +{ + fill_test_area(test_area_base, pattern); + + return check_test_area(test_area_base, pattern); +} + +#define TDX_UPM_TEST_ASSERT(x) \ + do { \ + if (!(x)) \ + tdx_test_fatal(__LINE__); \ + } while (0) + +/* + * Shared variables between guest and host + */ +static struct tdx_upm_test_area *test_area_gpa_private; +static struct tdx_upm_test_area *test_area_gpa_shared; + +/* + * Test stages for syncing with host + */ +enum { + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST = 1, + SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST, + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN, +}; + +#define TDX_UPM_TEST_ACCEPT_PRINT_PORT 0x87 + +/* + * Does vcpu_run, and also manages memory conversions if requested by the TD. + */ +void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm, + struct kvm_vcpu *vcpu) +{ + for (;;) { + vcpu_run(vcpu); + if (vcpu->run->exit_reason == KVM_EXIT_HYPERCALL && + vcpu->run->hypercall.nr == KVM_HC_MAP_GPA_RANGE) { + uint64_t gpa = vcpu->run->hypercall.args[0]; + + handle_memory_conversion(vm, vcpu->id, gpa, + vcpu->run->hypercall.args[1] << 12, + vcpu->run->hypercall.args[2] & + KVM_MAP_GPA_RANGE_ENCRYPTED); + vcpu->run->hypercall.ret = 0; + continue; + } else if (vcpu->run->exit_reason == KVM_EXIT_IO && + vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) { + uint64_t gpa = tdx_test_read_64bit(vcpu, + TDX_UPM_TEST_ACCEPT_PRINT_PORT); + + printf("\t ... guest accepting 1 page at GPA: 0x%lx\n", + gpa); + continue; + } else if (vcpu->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) { + TEST_FAIL("Guest reported error. 
error code: %lld (0x%llx)\n", + vcpu->run->system_event.data[12], + vcpu->run->system_event.data[13]); + } + break; + } +} + +static void guest_upm_explicit(void) +{ + struct tdx_upm_test_area *test_area_gva_private = + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE; + struct tdx_upm_test_area *test_area_gva_shared = + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED; + uint64_t failed_gpa; + uint64_t ret = 0; + + /* Check: host reading private memory does not modify guest's view */ + fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST); + + TDX_UPM_TEST_ASSERT(check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + /* Remap focus area as shared */ + ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_shared->focus_area, + sizeof(test_area_gpa_shared->focus_area), + &failed_gpa); + TDX_UPM_TEST_ASSERT(!ret); + + /* General areas should be unaffected by remapping */ + TDX_UPM_TEST_ASSERT(check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + /* + * Use memory contents to confirm that the memory allocated using mmap + * is used as backing memory for shared memory - PATTERN_CONFIDENCE_CHECK + * was written by the VMM at the beginning of this test. + */ + TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_shared, PATTERN_CONFIDENCE_CHECK)); + + /* Guest can use focus area after remapping as shared */ + fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST); + + /* Check that guest has the same view of shared memory */ + TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS)); + + /* Remap focus area back to private */ + ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_private->focus_area, + sizeof(test_area_gpa_private->focus_area), + &failed_gpa); + TDX_UPM_TEST_ASSERT(!ret); + + /* General areas should be unaffected by remapping */ + TDX_UPM_TEST_ASSERT(check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + /* Focus area should be zeroed after remapping */ + TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_private, 0)); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN); + + /* Check that guest can use private memory after focus area is remapped as private */ + TDX_UPM_TEST_ASSERT(fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + tdx_test_success(); +} + +static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu, + struct tdx_upm_test_area *test_area_base_hva) +{ + tdx_run(vcpu); + tdx_test_assert_io(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, + PORT_WRITE); + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST); + + /* + * Check that host sees PATTERN_CONFIDENCE_CHECK when trying to read guest + * private memory. This confirms that regular memory (userspace_addr in + * struct kvm_userspace_memory_region) is used to back the host's view + * of private memory, since PATTERN_CONFIDENCE_CHECK was written to that + * memory before starting the guest. 
+ */ + TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), + "Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory."); + + vcpu_run_and_manage_memory_conversions(vm, vcpu); + tdx_test_assert_io(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, + PORT_WRITE); + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), + SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST); + + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_GUEST_FOCUS), + "Host should have the same view of shared memory as guest."); + TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), + "Host's view of private memory should still be backed by regular memory."); + + /* Check that host can use shared memory */ + fill_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS); + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS), + "Host should be able to use shared memory."); + + vcpu_run_and_manage_memory_conversions(vm, vcpu); + tdx_test_assert_io(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, + PORT_WRITE); + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN); + + TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), + "Host's view of private memory should be backed by regular memory."); + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS), + "Host's view of private memory should be backed by regular memory."); + + tdx_run(vcpu); + tdx_test_assert_success(vcpu); + + printf("\t ... PASSED\n"); +} + +static bool address_between(uint64_t addr, void *lo, void *hi) +{ + return (uint64_t)lo <= addr && addr < (uint64_t)hi; +} + +static void guest_ve_handler(struct ex_regs *regs) +{ + struct ve_info ve; + uint64_t ret; + + ret = tdg_vp_veinfo_get(&ve); + TDX_UPM_TEST_ASSERT(!ret); + + /* For this test, we will only handle EXIT_REASON_EPT_VIOLATION */ + TDX_UPM_TEST_ASSERT(ve.exit_reason == EXIT_REASON_EPT_VIOLATION); + + /* Validate GPA in fault */ + TDX_UPM_TEST_ASSERT(address_between(ve.gpa, + test_area_gpa_private->focus_area, + test_area_gpa_private->general_area_1)); + + tdx_test_send_64bit(TDX_UPM_TEST_ACCEPT_PRINT_PORT, ve.gpa); + +#define MEM_PAGE_ACCEPT_LEVEL_4K 0 +#define MEM_PAGE_ACCEPT_LEVEL_2M 1 + ret = tdg_mem_page_accept(ve.gpa & PAGE_MASK, MEM_PAGE_ACCEPT_LEVEL_4K); + TDX_UPM_TEST_ASSERT(!ret); +} + +static void verify_upm_test(void) +{ + struct tdx_upm_test_area *test_area_base_hva; + vm_vaddr_t test_area_gva_private; + uint64_t test_area_npages; + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_upm_explicit); + + vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler); + + /* + * Set up shared memory page for testing by first allocating as private + * and then mapping the same GPA again as shared. This way, the TD does + * not have to remap its page tables at runtime. 
+ */ + test_area_npages = TDX_UPM_TEST_AREA_SIZE / vm->page_size; + vm_userspace_mem_region_add(vm, + VM_MEM_SRC_ANONYMOUS, TDX_UPM_TEST_AREA_GPA, + 3, test_area_npages, KVM_MEM_GUEST_MEMFD); + vm->memslots[MEM_REGION_TEST_DATA] = 3; + + test_area_gva_private = vm_vaddr_alloc_private(vm, TDX_UPM_TEST_AREA_SIZE, + TDX_UPM_TEST_AREA_GVA_PRIVATE, + TDX_UPM_TEST_AREA_GPA, + MEM_REGION_TEST_DATA); + TEST_ASSERT_EQ(test_area_gva_private, TDX_UPM_TEST_AREA_GVA_PRIVATE); + + test_area_gpa_private = (struct tdx_upm_test_area *) + addr_gva2gpa(vm, test_area_gva_private); + virt_map_shared(vm, TDX_UPM_TEST_AREA_GVA_SHARED, + (uint64_t)test_area_gpa_private, + test_area_npages); + TEST_ASSERT_EQ(addr_gva2gpa(vm, TDX_UPM_TEST_AREA_GVA_SHARED), + (vm_paddr_t)test_area_gpa_private); + + test_area_base_hva = addr_gva2hva(vm, TDX_UPM_TEST_AREA_GVA_PRIVATE); + + TEST_ASSERT(fill_and_check(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), + "Failed to mark memory intended as backing memory for TD shared memory"); + + sync_global_to_guest(vm, test_area_gpa_private); + test_area_gpa_shared = (struct tdx_upm_test_area *) + ((uint64_t)test_area_gpa_private | vm->arch.s_bit); + sync_global_to_guest(vm, test_area_gpa_shared); + + td_finalize(vm); + + printf("Verifying UPM functionality: explicit MapGPA\n"); + + vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, BIT_ULL(KVM_HC_MAP_GPA_RANGE)); + + run_selftest(vm, vcpu, test_area_base_hva); + + kvm_vm_free(vm); +} + +int main(int argc, char **argv) +{ + ksft_print_header(); + + if (!is_tdx_enabled()) + ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n"); + + ksft_set_plan(1); + ksft_test_result(!run_in_new_process(&verify_upm_test), + "verify_upm_test\n"); + + ksft_finished(); +}
From: Ackerley Tng ackerleytng@google.com
This tests the use of guest memory without explicit TDG.VP.VMCALL<MapGPA> calls.
Provide a 2MB memory region to the TDX guest with a 40KB focus area at offset 1MB that is intended to be shared between host and guest. The guest does not request the memory to be shared or private via TDG.VP.VMCALL<MapGPA>; instead it relies on the memory being converted automatically depending on whether it is accessed through the shared or the private mapping. The host performs the conversion automatically when the guest exits with KVM_EXIT_MEMORY_FAULT.
The 2MB region starts out as private, with the guest filling it with a pattern, followed by a check from the host to ensure that the host cannot see the pattern. The guest then accesses the 40KB focus area via its shared mapping to trigger implicit conversion to shared, followed by checks that the host and guest have the same view of the memory. Finally, the guest accesses the 40KB area via its private mapping to trigger implicit conversion back to private, followed by checks to confirm that this is the case.
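On the host side the change boils down to one extra exit-reason branch. Below is a condensed, illustrative sketch of that branch, not the patch itself: the wrapper name handle_implicit_conversion_exit() is hypothetical, while handle_memory_conversion() is the conversion helper already used by the explicit MapGPA test.

  /*
   * Illustrative sketch only (condensed from the hunk below): how
   * userspace reacts to an implicit-conversion exit. KVM reports the
   * faulting GPA range and whether the guest wants it private; flip the
   * memory attributes accordingly, then resume the vCPU.
   */
  static void handle_implicit_conversion_exit(struct kvm_vm *vm,
                                              struct kvm_vcpu *vcpu)
  {
          struct kvm_run *run = vcpu->run;

          if (run->exit_reason != KVM_EXIT_MEMORY_FAULT)
                  return;

          handle_memory_conversion(vm, vcpu->id, run->memory_fault.gpa,
                                   run->memory_fault.size,
                                   run->memory_fault.flags &
                                   KVM_MEMORY_EXIT_FLAG_PRIVATE);
  }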
Signed-off-by: Ackerley Tng ackerleytng@google.com
Signed-off-by: Sagi Shahar sagis@google.com
---
 .../testing/selftests/kvm/x86/tdx_upm_test.c | 88 ++++++++++++++++---
 1 file changed, 76 insertions(+), 12 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86/tdx_upm_test.c b/tools/testing/selftests/kvm/x86/tdx_upm_test.c index 387258ab1a62..2ea5bf6d24b7 100644 --- a/tools/testing/selftests/kvm/x86/tdx_upm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_upm_test.c @@ -150,10 +150,10 @@ enum { * Does vcpu_run, and also manages memory conversions if requested by the TD. */ void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm, - struct kvm_vcpu *vcpu) + struct kvm_vcpu *vcpu, bool handle_conversions) { for (;;) { - vcpu_run(vcpu); + _vcpu_run(vcpu); if (vcpu->run->exit_reason == KVM_EXIT_HYPERCALL && vcpu->run->hypercall.nr == KVM_HC_MAP_GPA_RANGE) { uint64_t gpa = vcpu->run->hypercall.args[0]; @@ -164,6 +164,13 @@ void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm, KVM_MAP_GPA_RANGE_ENCRYPTED); vcpu->run->hypercall.ret = 0; continue; + } else if (handle_conversions && + vcpu->run->exit_reason == KVM_EXIT_MEMORY_FAULT) { + handle_memory_conversion(vm, vcpu->id, vcpu->run->memory_fault.gpa, + vcpu->run->memory_fault.size, + vcpu->run->memory_fault.flags == + KVM_MEMORY_EXIT_FLAG_PRIVATE); + continue; } else if (vcpu->run->exit_reason == KVM_EXIT_IO && vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) { uint64_t gpa = tdx_test_read_64bit(vcpu, @@ -241,8 +248,48 @@ static void guest_upm_explicit(void) tdx_test_success(); }
+static void guest_upm_implicit(void) +{ + struct tdx_upm_test_area *test_area_gva_private = + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE; + struct tdx_upm_test_area *test_area_gva_shared = + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED; + + /* Check: host reading private memory does not modify guest's view */ + fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST); + + TDX_UPM_TEST_ASSERT(check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + /* Use focus area as shared */ + fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS); + + /* General areas should not be affected */ + TDX_UPM_TEST_ASSERT(check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST); + + /* Check that guest has the same view of shared memory */ + TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS)); + + /* Use focus area as private */ + fill_focus_area(test_area_gva_private, PATTERN_GUEST_FOCUS); + + /* General areas should be unaffected by remapping */ + TDX_UPM_TEST_ASSERT(check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN); + + /* Check that guest can use private memory after focus area is remapped as private */ + TDX_UPM_TEST_ASSERT(fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL)); + + tdx_test_success(); +} + static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu, - struct tdx_upm_test_area *test_area_base_hva) + struct tdx_upm_test_area *test_area_base_hva, + bool implicit) { tdx_run(vcpu); tdx_test_assert_io(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, @@ -260,7 +307,7 @@ static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu, TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK), "Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory.");
- vcpu_run_and_manage_memory_conversions(vm, vcpu); + vcpu_run_and_manage_memory_conversions(vm, vcpu, implicit); tdx_test_assert_io(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, PORT_WRITE); TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), @@ -276,7 +323,7 @@ static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu, TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS), "Host should be able to use shared memory.");
- vcpu_run_and_manage_memory_conversions(vm, vcpu); + vcpu_run_and_manage_memory_conversions(vm, vcpu, implicit); tdx_test_assert_io(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE, PORT_WRITE); TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset), @@ -322,17 +369,19 @@ static void guest_ve_handler(struct ex_regs *regs) TDX_UPM_TEST_ASSERT(!ret); }
-static void verify_upm_test(void) +static void verify_upm_test(bool implicit) { struct tdx_upm_test_area *test_area_base_hva; vm_vaddr_t test_area_gva_private; uint64_t test_area_npages; struct kvm_vcpu *vcpu; struct kvm_vm *vm; + void *guest_code;
vm = td_create(); td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); - vcpu = td_vcpu_add(vm, 0, guest_upm_explicit); + guest_code = implicit ? guest_upm_implicit : guest_upm_explicit; + vcpu = td_vcpu_add(vm, 0, guest_code);
vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler);
@@ -373,15 +422,28 @@ static void verify_upm_test(void)
td_finalize(vm);
- printf("Verifying UPM functionality: explicit MapGPA\n"); + if (implicit) + printf("Verifying UPM functionality: implicit conversion\n"); + else + printf("Verifying UPM functionality: explicit MapGPA\n");
vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, BIT_ULL(KVM_HC_MAP_GPA_RANGE));
- run_selftest(vm, vcpu, test_area_base_hva); + run_selftest(vm, vcpu, test_area_base_hva, implicit);
kvm_vm_free(vm); }
+void verify_upm_test_explicit(void) +{ + verify_upm_test(false); +} + +void verify_upm_test_implicit(void) +{ + verify_upm_test(true); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -389,9 +451,11 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(1); - ksft_test_result(!run_in_new_process(&verify_upm_test), - "verify_upm_test\n"); + ksft_set_plan(2); + ksft_test_result(!run_in_new_process(&verify_upm_test_explicit), + "verify_upm_test_explicit\n"); + ksft_test_result(!run_in_new_process(&verify_upm_test_implicit), + "verify_upm_test_implicit\n");
ksft_finished(); }
From: Yan Zhao yan.y.zhao@intel.com
Add a selftest to verify that setting the KVM_MEM_LOG_DIRTY_PAGES flag on a !KVM_MEM_GUEST_MEMFD memslot does not produce host errors in TDX.
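The setup amounts to the two calls sketched here; this is an illustrative condensation only, using the same selftest helpers as the diff below, which remains the authoritative version.

  /*
   * Illustrative condensation of the test setup: a regular anonymous
   * memslot (no guest_memfd) with dirty logging enabled, mapped shared so
   * the TD guest can write to it.
   */
  vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                              TDX_LOG_DIRTY_PAGES_FLAG_TEST_GPA,
                              TDX_LOG_DIRTY_PAGES_FLAG_REGION_SLOT,
                              TDX_LOG_DIRTY_PAGES_FLAG_REGION_NR_PAGES,
                              KVM_MEM_LOG_DIRTY_PAGES);
  virt_map_shared(vm, TDX_LOG_DIRTY_PAGES_FLAG_TEST_GVA_SHARED,
                  TDX_LOG_DIRTY_PAGES_FLAG_TEST_GPA,
                  TDX_LOG_DIRTY_PAGES_FLAG_REGION_NR_PAGES);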
Signed-off-by: Yan Zhao yan.y.zhao@intel.com
Signed-off-by: Sagi Shahar sagis@google.com
---
 tools/testing/selftests/kvm/x86/tdx_vm_test.c | 45 ++++++++++++++++++-
 1 file changed, 44 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/x86/tdx_vm_test.c b/tools/testing/selftests/kvm/x86/tdx_vm_test.c index 82acc17a66ab..410d814dd39a 100644 --- a/tools/testing/selftests/kvm/x86/tdx_vm_test.c +++ b/tools/testing/selftests/kvm/x86/tdx_vm_test.c @@ -1167,6 +1167,47 @@ void verify_tdcall_vp_info(void) printf("\t ... PASSED\n"); }
+#define TDX_LOG_DIRTY_PAGES_FLAG_TEST_GPA (0xc0000000) +#define TDX_LOG_DIRTY_PAGES_FLAG_TEST_GVA_SHARED (0x90000000) +#define TDX_LOG_DIRTY_PAGES_FLAG_REGION_SLOT 10 +#define TDX_LOG_DIRTY_PAGES_FLAG_REGION_NR_PAGES (0x1000 / getpagesize()) + +void guest_code_log_dirty_flag(void) +{ + memset((void *)TDX_LOG_DIRTY_PAGES_FLAG_TEST_GVA_SHARED, 1, 8); + tdx_test_success(); +} + +/* + * Verify adding flag KVM_MEM_LOG_DIRTY_PAGES to a !KVM_MEM_GUEST_MEMFD memslot + * in a TD does not produce host errors. + */ +void verify_log_dirty_pages_flag_on_non_gmemfd_slot(void) +{ + struct kvm_vcpu *vcpu; + struct kvm_vm *vm; + + vm = td_create(); + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0); + vcpu = td_vcpu_add(vm, 0, guest_code_log_dirty_flag); + + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, + TDX_LOG_DIRTY_PAGES_FLAG_TEST_GPA, + TDX_LOG_DIRTY_PAGES_FLAG_REGION_SLOT, + TDX_LOG_DIRTY_PAGES_FLAG_REGION_NR_PAGES, + KVM_MEM_LOG_DIRTY_PAGES); + virt_map_shared(vm, TDX_LOG_DIRTY_PAGES_FLAG_TEST_GVA_SHARED, + (uint64_t)TDX_LOG_DIRTY_PAGES_FLAG_TEST_GPA, + TDX_LOG_DIRTY_PAGES_FLAG_REGION_NR_PAGES); + td_finalize(vm); + + printf("Verifying Log dirty flag:\n"); + vcpu_run(vcpu); + tdx_test_assert_success(vcpu); + kvm_vm_free(vm); + printf("\t ... PASSED\n"); +} + int main(int argc, char **argv) { ksft_print_header(); @@ -1174,7 +1215,7 @@ int main(int argc, char **argv) if (!is_tdx_enabled()) ksft_exit_skip("TDX is not supported by the KVM. Exiting.\n");
- ksft_set_plan(15); + ksft_set_plan(16); ksft_test_result(!run_in_new_process(&verify_td_lifecycle), "verify_td_lifecycle\n"); ksft_test_result(!run_in_new_process(&verify_report_fatal_error), @@ -1205,6 +1246,8 @@ int main(int argc, char **argv) "verify_host_reading_private_mem\n"); ksft_test_result(!run_in_new_process(&verify_tdcall_vp_info), "verify_tdcall_vp_info\n"); + ksft_test_result(!run_in_new_process(&verify_log_dirty_pages_flag_on_non_gmemfd_slot), + "verify_log_dirty_pages_flag_on_non_gmemfd_slot\n");
ksft_finished(); return 0;