On 23/01/2020 19.04, Ben Gardon wrote:
> KVM creates internal memslots between 3 and 4 GiB paddrs on the first
> vCPU creation. If memslot 0 is large enough, it collides with these
> memslots and causes vCPU creation to fail. Instead of creating memslot 0
> at paddr 0, start it 4G into the guest physical address space.
>
> Signed-off-by: Ben Gardon <bgardon@google.com>
> ---
>  tools/testing/selftests/kvm/lib/kvm_util.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 5b971c04f1643..427c88d32e988 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -130,9 +130,11 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
>   *
>   * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K).
>   * When phy_pages is non-zero, a memory region of phy_pages physical pages
> - * is created and mapped starting at guest physical address 0.  The file
> - * descriptor to control the created VM is created with the permissions
> - * given by perm (e.g. O_RDWR).
> + * is created, starting at 4G into the guest physical address space to avoid
> + * KVM internal memslots which map the region between 3G and 4G. If tests need
> + * to use the physical region between 0 and 3G, they can allocate another
> + * memslot for that region. The file descriptor to control the created VM is
> + * created with the permissions given by perm (e.g. O_RDWR).
>   */
>  struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
>  {
> @@ -231,7 +233,8 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
>  	vm->vpages_mapped = sparsebit_alloc();
>  	if (phy_pages != 0)
>  		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
> -					    0, 0, phy_pages, 0);
> +					    KVM_INTERNAL_MEMSLOTS_END_PADDR,
> +					    0, phy_pages, 0);
>  
>  	return vm;
>  }
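The updated comment says that tests which need the physical region between
0 and 3G can allocate another memslot for it. I guess that would look
roughly like the following after vm_create() - completely untested sketch,
the slot number and the 1 GiB size are just arbitrary picks on my side:

#include "kvm_util.h"

/* Sketch only: put 1 GiB of memory back at guest physical address 0 */
#define LOW_MEM_SLOT	1		/* arbitrary, must not be 0 */
#define LOW_MEM_SIZE	(1UL << 30)	/* 1 GiB */

static void add_low_memslot(struct kvm_vm *vm)
{
	/* npages assumes 4K guest pages */
	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
				    0, LOW_MEM_SLOT,
				    LOW_MEM_SIZE >> 12, 0);
}
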
This patch causes *all* tests on s390x to fail like this:
# selftests: kvm: sync_regs_test
# Testing guest mode: PA-bits:52, VA-bits:48, 4K pages
# ==== Test Assertion Failure ====
#   lib/kvm_util.c:1059: false
#   pid=248244 tid=248244 - Success
#      1	0x0000000001002f3d: addr_gpa2hva at kvm_util.c:1059
#      2	 (inlined by) addr_gpa2hva at kvm_util.c:1047
#      3	0x0000000001006edf: addr_gva2gpa at processor.c:144
#      4	0x0000000001004345: addr_gva2hva at kvm_util.c:1636
#      5	0x00000000010077c1: kvm_vm_elf_load at elf.c:192
#      6	0x00000000010070c3: vm_create_default at processor.c:228
#      7	0x0000000001001347: main at sync_regs_test.c:87
#      8	0x000003ffba7a3461: ?? ??:0
#      9	0x0000000001001965: .annobin_init.c.hot at crt1.o:?
#     10	0xffffffffffffffff: ?? ??:0
#   No vm physical memory at 0x0
not ok 2 selftests: kvm: sync_regs_test # exit=254
AFAIK the ELF binaries on s390x are linked to addresses below 4G, so removing the memslot that used to cover guest physical address 0 seems to be a bad idea on s390x.
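Maybe the new base address should only be used where those internal
memslots actually exist, i.e. on x86? Completely untested idea, and
DEFAULT_MEMSLOT0_GPA is just a name I made up for illustration - the hunk
in _vm_create() could then look something like this:

/* AFAIK only x86 creates internal memslots between 3G and 4G */
#ifdef __x86_64__
#define DEFAULT_MEMSLOT0_GPA	KVM_INTERNAL_MEMSLOTS_END_PADDR
#else
/* elsewhere keep memslot 0 at guest physical address 0 as before */
#define DEFAULT_MEMSLOT0_GPA	0
#endif

	if (phy_pages != 0)
		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
					    DEFAULT_MEMSLOT0_GPA, 0,
					    phy_pages, 0);
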
Thomas