When handling page faults for many vCPUs during demand paging, KVM's MMU lock becomes highly contended. This series creates a test with a naive userfaultfd based demand paging implementation to demonstrate that contention. This test serves both as a functional test of userfaultfd and a microbenchmark of demand paging performance with a variable number of vCPUs and memory per vCPU.
The test creates N userfaultfd threads, N vCPUs, and a region of memory with M pages per vCPU. The N userfaultfd polling threads are each set up to serve faults on a region of memory corresponding to one of the vCPUs. Each of the vCPUs is then started, and touches each page of its disjoint memory region, sequentially. In response to faults, the userfaultfd threads copy a static buffer into the guest's memory. This creates a worst case for MMU lock contention as we have removed most of the contention between the userfaultfd threads and there is no time required to fetch the contents of guest memory.
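For reference, once the whole series is applied, the test can be invoked with a vCPU count and a per-vCPU memory size, e.g. (flags per the patches below: -v for the number of vCPUs, -b for the per-vCPU working set size in bytes):

	./demand_paging_test -v 8 -b $((1 << 30))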
This test was run successfully on Intel Haswell, Broadwell, and Cascade Lake hosts with a variety of vCPU counts and memory sizes.
This test was adapted from the dirty_log_test.
The series can also be viewed in Gerrit here: https://linux-review.googlesource.com/c/virt/kvm/kvm/+/1464 (Thanks to Dmitry Vyukov dvyukov@google.com for setting up the Gerrit instance)
Ben Gardon (9):
  KVM: selftests: Create a demand paging test
  KVM: selftests: Add demand paging content to the demand paging test
  KVM: selftests: Add memory size parameter to the demand paging test
  KVM: selftests: Pass args to vCPU instead of using globals
  KVM: selftests: Support multiple vCPUs in demand paging test
  KVM: selftests: Time guest demand paging
  KVM: selftests: Add parameter to _vm_create for memslot 0 base paddr
  KVM: selftests: Support large VMs in demand paging test
  Add static flag
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   4 +-
 .../selftests/kvm/demand_paging_test.c        | 610 ++++++++++++++++++
 tools/testing/selftests/kvm/dirty_log_test.c  |   2 +-
 .../testing/selftests/kvm/include/kvm_util.h  |   3 +-
 tools/testing/selftests/kvm/lib/kvm_util.c    |   7 +-
 6 files changed, 621 insertions(+), 6 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/demand_paging_test.c
While userfaultfd, the mechanism KVM relies on for demand paging, is not specific to KVM, having a benchmark for its performance will be useful for guiding performance improvements to KVM. As a first step towards creating a userfaultfd demand paging test, create a simple memory access test based on dirty_log_test.
Signed-off-by: Ben Gardon bgardon@google.com
---
 tools/testing/selftests/kvm/.gitignore       |   1 +
 tools/testing/selftests/kvm/Makefile         |   2 +
 .../selftests/kvm/demand_paging_test.c       | 286 ++++++++++++++++++
 3 files changed, 289 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/demand_paging_test.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore index b35da375530af..29da0cdd98579 100644 --- a/tools/testing/selftests/kvm/.gitignore +++ b/tools/testing/selftests/kvm/.gitignore @@ -14,3 +14,4 @@ /clear_dirty_log_test /dirty_log_test /kvm_create_max_vcpus +/demand_paging_test diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile index 62c591f87dabb..31f2b8afa7461 100644 --- a/tools/testing/selftests/kvm/Makefile +++ b/tools/testing/selftests/kvm/Makefile @@ -26,10 +26,12 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test TEST_GEN_PROGS_x86_64 += clear_dirty_log_test TEST_GEN_PROGS_x86_64 += dirty_log_test +TEST_GEN_PROGS_x86_64 += demand_paging_test TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
TEST_GEN_PROGS_aarch64 += clear_dirty_log_test TEST_GEN_PROGS_aarch64 += dirty_log_test +TEST_GEN_PROGS_aarch64 += demand_paging_test TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
TEST_GEN_PROGS_s390x = s390x/memop diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c new file mode 100644 index 0000000000000..5f214517ba1de --- /dev/null +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -0,0 +1,286 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KVM demand paging test + * Adapted from dirty_log_test.c + * + * Copyright (C) 2018, Red Hat, Inc. + * Copyright (C) 2019, Google, Inc. + */ + +#define _GNU_SOURCE /* for program_invocation_name */ + +#include <stdio.h> +#include <stdlib.h> +#include <unistd.h> +#include <time.h> +#include <pthread.h> +#include <linux/bitmap.h> +#include <linux/bitops.h> + +#include "test_util.h" +#include "kvm_util.h" +#include "processor.h" + +#define VCPU_ID 1 + +/* The memory slot index demand page */ +#define TEST_MEM_SLOT_INDEX 1 + +/* Default guest test virtual memory offset */ +#define DEFAULT_GUEST_TEST_MEM 0xc0000000 + +/* + * Guest/Host shared variables. Ensure addr_gva2hva() and/or + * sync_global_to/from_guest() are used when accessing from + * the host. READ/WRITE_ONCE() should also be used with anything + * that may change. + */ +static uint64_t host_page_size; +static uint64_t guest_page_size; +static uint64_t guest_num_pages; + +/* + * Guest physical memory offset of the testing memory slot. + * This will be set to the topmost valid physical address minus + * the test memory size. + */ +static uint64_t guest_test_phys_mem; + +/* + * Guest virtual memory offset of the testing memory slot. + * Must not conflict with identity mapped test code. + */ +static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM; + +/* + * Continuously write to the first 8 bytes of each page in the demand paging + * memory region. + */ +static void guest_code(void) +{ + int i; + + for (i = 0; i < guest_num_pages; i++) { + uint64_t addr = guest_test_virt_mem; + + addr += i * guest_page_size; + addr &= ~(host_page_size - 1); + *(uint64_t *)addr = 0x0123456789ABCDEF; + } + + GUEST_SYNC(1); +} + +/* Points to the test VM memory region on which we are doing demand paging */ +static void *host_test_mem; +static uint64_t host_num_pages; + +static void *vcpu_worker(void *data) +{ + int ret; + struct kvm_vm *vm = data; + struct kvm_run *run; + + run = vcpu_state(vm, VCPU_ID); + + /* Let the guest access its memory */ + ret = _vcpu_run(vm, VCPU_ID); + TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); + if (get_ucall(vm, VCPU_ID, NULL) != UCALL_SYNC) { + TEST_ASSERT(false, + "Invalid guest sync status: exit_reason=%s\n", + exit_reason_str(run->exit_reason)); + } + + return NULL; +} + +static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, + uint64_t extra_mem_pages, void *guest_code) +{ + struct kvm_vm *vm; + uint64_t extra_pg_pages = extra_mem_pages / 512 * 2; + + vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR); + kvm_vm_elf_load(vm, program_invocation_name, 0, 0); +#ifdef __x86_64__ + vm_create_irqchip(vm); +#endif + vm_vcpu_add_default(vm, vcpuid, guest_code); + return vm; +} + +#define GUEST_MEM_SHIFT 30 /* 1G */ +#define PAGE_SHIFT_4K 12 + +static void run_test(enum vm_guest_mode mode) +{ + pthread_t vcpu_thread; + struct kvm_vm *vm; + + /* + * We reserve page table for 2 times of extra dirty mem which + * will definitely cover the original (1G+) test range. 
Here + * we do the calculation with 4K page size which is the + * smallest so the page number will be enough for all archs + * (e.g., 64K page size guest will need even less memory for + * page tables). + */ + vm = create_vm(mode, VCPU_ID, + 2ul << (GUEST_MEM_SHIFT - PAGE_SHIFT_4K), + guest_code); + + guest_page_size = vm_get_page_size(vm); + /* + * A little more than 1G of guest page sized pages. Cover the + * case where the size is not aligned to 64 pages. + */ + guest_num_pages = (1ul << (GUEST_MEM_SHIFT - + vm_get_page_shift(vm))) + 16; +#ifdef __s390x__ + /* Round up to multiple of 1M (segment size) */ + guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL; +#endif + + host_page_size = getpagesize(); + host_num_pages = (guest_num_pages * guest_page_size) / host_page_size + + !!((guest_num_pages * guest_page_size) % + host_page_size); + + guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * + guest_page_size; + guest_test_phys_mem &= ~(host_page_size - 1); + +#ifdef __s390x__ + /* Align to 1M (segment size) */ + guest_test_phys_mem &= ~((1 << 20) - 1); +#endif + + DEBUG("guest physical test memory offset: 0x%lx\n", + guest_test_phys_mem); + + + /* Add an extra memory slot for testing demand paging */ + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, + guest_test_phys_mem, + TEST_MEM_SLOT_INDEX, + guest_num_pages, 0); + + /* Do mapping for the demand paging memory slot */ + virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, + guest_num_pages * guest_page_size, 0); + + /* Cache the HVA pointer of the region */ + host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem); + +#ifdef __x86_64__ + vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid()); +#endif +#ifdef __aarch64__ + ucall_init(vm, NULL); +#endif + + /* Export the shared variables to the guest */ + sync_global_to_guest(vm, host_page_size); + sync_global_to_guest(vm, guest_page_size); + sync_global_to_guest(vm, guest_test_virt_mem); + sync_global_to_guest(vm, guest_num_pages); + + pthread_create(&vcpu_thread, NULL, vcpu_worker, vm); + + /* Wait for the vcpu thread to quit */ + pthread_join(vcpu_thread, NULL); + + ucall_uninit(vm); + kvm_vm_free(vm); +} + +struct vm_guest_mode_params { + bool supported; + bool enabled; +}; +struct vm_guest_mode_params vm_guest_mode_params[NUM_VM_MODES]; + +#define vm_guest_mode_params_init(mode, supported, enabled) \ +({ \ + vm_guest_mode_params[mode] = \ + (struct vm_guest_mode_params){ supported, enabled }; \ +}) + +static void help(char *name) +{ + int i; + + puts(""); + printf("usage: %s [-h] [-m mode]\n", name); + printf(" -m: specify the guest mode ID to test\n" + " (default: test all supported modes)\n" + " This option may be used multiple times.\n" + " Guest mode IDs:\n"); + for (i = 0; i < NUM_VM_MODES; ++i) { + printf(" %d: %s%s\n", i, vm_guest_mode_string(i), + vm_guest_mode_params[i].supported ? 
" (supported)" : ""); + } + puts(""); + exit(0); +} + +int main(int argc, char *argv[]) +{ + bool mode_selected = false; + unsigned int mode; + int opt, i; +#ifdef __aarch64__ + unsigned int host_ipa_limit; +#endif + +#ifdef __x86_64__ + vm_guest_mode_params_init(VM_MODE_PXXV48_4K, true, true); +#endif +#ifdef __aarch64__ + vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true); + vm_guest_mode_params_init(VM_MODE_P40V48_64K, true, true); + + host_ipa_limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE); + if (host_ipa_limit >= 52) + vm_guest_mode_params_init(VM_MODE_P52V48_64K, true, true); + if (host_ipa_limit >= 48) { + vm_guest_mode_params_init(VM_MODE_P48V48_4K, true, true); + vm_guest_mode_params_init(VM_MODE_P48V48_64K, true, true); + } +#endif +#ifdef __s390x__ + vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true); +#endif + + while ((opt = getopt(argc, argv, "hm:")) != -1) { + switch (opt) { + case 'm': + if (!mode_selected) { + for (i = 0; i < NUM_VM_MODES; ++i) + vm_guest_mode_params[i].enabled = false; + mode_selected = true; + } + mode = strtoul(optarg, NULL, 10); + TEST_ASSERT(mode < NUM_VM_MODES, + "Guest mode ID %d too big", mode); + vm_guest_mode_params[mode].enabled = true; + break; + case 'h': + default: + help(argv[0]); + break; + } + } + + for (i = 0; i < NUM_VM_MODES; ++i) { + if (!vm_guest_mode_params[i].enabled) + continue; + TEST_ASSERT(vm_guest_mode_params[i].supported, + "Guest mode ID %d (%s) not supported.", + i, vm_guest_mode_string(i)); + run_test(i); + } + + return 0; +}
The demand paging test is currently a simple page access test which, while potentially useful, doesn't add much versus the existing dirty logging test. To improve the demand paging test, add a basic userfaultfd-based demand paging implementation.
Signed-off-by: Ben Gardon bgardon@google.com
---
 .../selftests/kvm/demand_paging_test.c | 157 ++++++++++++++++++
 1 file changed, 157 insertions(+)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 5f214517ba1de..61ba4e6a8214a 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -11,11 +11,14 @@

 #include <stdio.h>
 #include <stdlib.h>
+#include <sys/syscall.h>
 #include <unistd.h>
 #include <time.h>
+#include <poll.h>
 #include <pthread.h>
 #include <linux/bitmap.h>
 #include <linux/bitops.h>
+#include <linux/userfaultfd.h>

 #include "test_util.h"
 #include "kvm_util.h"
@@ -29,6 +32,8 @@
 /* Default guest test virtual memory offset */
 #define DEFAULT_GUEST_TEST_MEM	0xc0000000

+#define __NR_userfaultfd 323
+
 /*
  * Guest/Host shared variables. Ensure addr_gva2hva() and/or
  * sync_global_to/from_guest() are used when accessing from
@@ -39,6 +44,8 @@ static uint64_t host_page_size;
 static uint64_t guest_page_size;
 static uint64_t guest_num_pages;

+static char *guest_data_prototype;
+
 /*
  * Guest physical memory offset of the testing memory slot.
  * This will be set to the topmost valid physical address minus
@@ -110,13 +117,153 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid,
 	return vm;
 }

+static int handle_uffd_page_request(int uffd, uint64_t addr)
+{
+	pid_t tid;
+	struct uffdio_copy copy;
+	int r;
+
+	tid = syscall(__NR_gettid);
+
+	copy.src = (uint64_t)guest_data_prototype;
+	copy.dst = addr;
+	copy.len = host_page_size;
+	copy.mode = 0;
+
+	r = ioctl(uffd, UFFDIO_COPY, &copy);
+	if (r == -1) {
+		DEBUG("Failed to page in 0x%lx from thread %d with errno: %d\n",
+		      addr, tid, errno);
+		return r;
+	}
+
+	return 0;
+}
+
+bool quit_uffd_thread;
+
+struct uffd_handler_args {
+	int uffd;
+};
+
+static void *uffd_handler_thread_fn(void *arg)
+{
+	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
+	int uffd = uffd_args->uffd;
+	int64_t pages = 0;
+
+	while (!quit_uffd_thread) {
+		struct uffd_msg msg;
+		struct pollfd pollfd[1];
+		int r;
+		uint64_t addr;
+
+		pollfd[0].fd = uffd;
+		pollfd[0].events = POLLIN;
+
+		r = poll(pollfd, 1, 2000);
+		switch (r) {
+		case -1:
+			DEBUG("poll err");
+			continue;
+		case 0:
+			continue;
+		case 1:
+			break;
+		default:
+			DEBUG("Polling uffd returned %d", r);
+			return NULL;
+		}
+
+		if (pollfd[0].revents & POLLERR) {
+			DEBUG("uffd revents has POLLERR");
+			return NULL;
+		}
+
+		if (!(pollfd[0].revents & POLLIN))
+			continue;
+
+		r = read(uffd, &msg, sizeof(msg));
+		if (r == -1) {
+			if (errno == EAGAIN)
+				continue;
+			DEBUG("Read of uffd got errno %d", errno);
+			return NULL;
+		}
+
+		if (r != sizeof(msg)) {
+			DEBUG("Read on uffd returned unexpected size: %d bytes",
+			      r);
+			return NULL;
+		}
+
+		if (!(msg.event & UFFD_EVENT_PAGEFAULT))
+			continue;
+
+		addr = msg.arg.pagefault.address;
+		r = handle_uffd_page_request(uffd, addr);
+		if (r < 0)
+			return NULL;
+		pages++;
+	}
+
+	return NULL;
+}
+
+static int setup_demand_paging(struct kvm_vm *vm,
+			       pthread_t *uffd_handler_thread)
+{
+	int uffd;
+	struct uffdio_api uffdio_api;
+	struct uffdio_register uffdio_register;
+	struct uffd_handler_args uffd_args;
+
+	guest_data_prototype = malloc(host_page_size);
+	memset(guest_data_prototype, 0xAB, host_page_size);
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	if (uffd == -1) {
+		DEBUG("uffd creation failed\n");
+		return -1;
+	}
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api) == -1) {
+		DEBUG("ioctl uffdio_api failed\n");
+		return -1;
+	}
+
+	uffdio_register.range.start = (uint64_t)host_test_mem;
+	uffdio_register.range.len = host_num_pages * host_page_size;
+	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
+	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
+		DEBUG("ioctl uffdio_register failed\n");
+		return -1;
+	}
+
+	if ((uffdio_register.ioctls & UFFD_API_RANGE_IOCTLS) !=
+	    UFFD_API_RANGE_IOCTLS) {
+		DEBUG("unexpected userfaultfd ioctl set\n");
+		return -1;
+	}
+
+	uffd_args.uffd = uffd;
+	pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn,
+		       &uffd_args);
+
+	return 0;
+}
+
 #define GUEST_MEM_SHIFT 30 /* 1G */
 #define PAGE_SHIFT_4K 12

 static void run_test(enum vm_guest_mode mode)
 {
 	pthread_t vcpu_thread;
+	pthread_t uffd_handler_thread;
 	struct kvm_vm *vm;
+	int r;

 	/*
 	 * We reserve page table for 2 times of extra dirty mem which
@@ -173,6 +320,12 @@ static void run_test(enum vm_guest_mode mode)
 	/* Cache the HVA pointer of the region */
 	host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);

+	/* Set up user fault fd to handle demand paging requests. */
+	quit_uffd_thread = false;
+	r = setup_demand_paging(vm, &uffd_handler_thread);
+	if (r < 0)
+		exit(-r);
+
 #ifdef __x86_64__
 	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
 #endif
@@ -191,6 +344,10 @@ static void run_test(enum vm_guest_mode mode)
 	/* Wait for the vcpu thread to quit */
 	pthread_join(vcpu_thread, NULL);

+	/* Tell the user fault fd handler thread to quit */
+	quit_uffd_thread = true;
+	pthread_join(uffd_handler_thread, NULL);
+
 	ucall_uninit(vm);
 	kvm_vm_free(vm);
 }
On Fri, Sep 27, 2019 at 09:18:30AM -0700, Ben Gardon wrote:
The demand paging test is currently a simple page access test which, while potentially useful, doesn't add much versus the existing dirty logging test. To improve the demand paging test, add a basic userfaultfd demand paging implementation.
Signed-off-by: Ben Gardon bgardon@google.com
 .../selftests/kvm/demand_paging_test.c | 157 ++++++++++++++++++
 1 file changed, 157 insertions(+)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 5f214517ba1de..61ba4e6a8214a 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -11,11 +11,14 @@ #include <stdio.h> #include <stdlib.h> +#include <sys/syscall.h>
[1]
#include <unistd.h> #include <time.h> +#include <poll.h> #include <pthread.h> #include <linux/bitmap.h> #include <linux/bitops.h> +#include <linux/userfaultfd.h> #include "test_util.h" #include "kvm_util.h" @@ -29,6 +32,8 @@ /* Default guest test virtual memory offset */ #define DEFAULT_GUEST_TEST_MEM 0xc0000000 +#define __NR_userfaultfd 323
This line can be dropped with [1] above?
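[For reference, <sys/syscall.h> pulls in the architecture's own syscall numbers, so with [1] the hardcoded define (which is only correct on x86_64; arm64 uses 282, for example) can indeed go away entirely. A minimal sketch:

	#include <fcntl.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* __NR_userfaultfd comes from the system headers on every arch. */
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
]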
[...]
+static void *uffd_handler_thread_fn(void *arg)
+{
+	struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg;
+	int uffd = uffd_args->uffd;
+	int64_t pages = 0;
+
+	while (!quit_uffd_thread) {
+		struct uffd_msg msg;
+		struct pollfd pollfd[1];
+		int r;
+		uint64_t addr;
+
+		pollfd[0].fd = uffd;
+		pollfd[0].events = POLLIN;
+
+		r = poll(pollfd, 1, 2000);
This may introduce an unnecessary 2s delay when quitting. Maybe we can refer to how the userfaultfd selftest handles this (please see uffd_poll_thread() in selftests/vm/userfaultfd.c for its usage of pipefd).
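[For illustration, a rough sketch of the pipe-based wakeup being suggested, modeled loosely on uffd_poll_thread(); the pipefd variable and its plumbing are placeholders, not code from this series. The handler polls both the uffd and the read end of a pipe, and the main thread writes one byte to the pipe instead of flipping quit_uffd_thread:

	while (1) {
		struct pollfd pollfd[2] = {
			{ .fd = uffd,   .events = POLLIN },
			{ .fd = pipefd, .events = POLLIN },
		};
		char tmp_chr;
		int r;

		r = poll(pollfd, 2, -1);	/* block; no 2s timeout needed */
		TEST_ASSERT(r > 0, "poll failed");

		if (pollfd[1].revents & POLLIN) {
			/* The main thread wrote a byte: quit immediately. */
			r = read(pipefd, &tmp_chr, 1);
			TEST_ASSERT(r == 1, "read on pipefd failed");
			break;
		}

		/* Otherwise read and service the uffd message as before. */
	}
]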
Thanks,
Add an argument to allow the demand paging test to work on larger and smaller guest sizes.
Signed-off-by: Ben Gardon bgardon@google.com
---
 .../selftests/kvm/demand_paging_test.c | 55 ++++++++++++-------
 1 file changed, 34 insertions(+), 21 deletions(-)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 61ba4e6a8214a..19982a33a0ca2 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -32,6 +32,8 @@ /* Default guest test virtual memory offset */ #define DEFAULT_GUEST_TEST_MEM 0xc0000000
+#define DEFAULT_GUEST_TEST_MEM_SIZE (1 << 30) /* 1G */ + #define __NR_userfaultfd 323
/* @@ -255,10 +257,9 @@ static int setup_demand_paging(struct kvm_vm *vm, return 0; }
-#define GUEST_MEM_SHIFT 30 /* 1G */ #define PAGE_SHIFT_4K 12
-static void run_test(enum vm_guest_mode mode) +static void run_test(enum vm_guest_mode mode, uint64_t guest_memory_bytes) { pthread_t vcpu_thread; pthread_t uffd_handler_thread; @@ -266,33 +267,40 @@ static void run_test(enum vm_guest_mode mode) int r;
 	/*
-	 * We reserve page table for 2 times of extra dirty mem which
-	 * will definitely cover the original (1G+) test range.  Here
-	 * we do the calculation with 4K page size which is the
-	 * smallest so the page number will be enough for all archs
-	 * (e.g., 64K page size guest will need even less memory for
-	 * page tables).
+	 * We reserve page table for twice the amount of memory we intend
+	 * to use in the test region for demand paging. Here we do the
+	 * calculation with 4K page size which is the smallest so the page
+	 * number will be enough for all archs. (e.g., 64K page size guest
+	 * will need even less memory for page tables).
 	 */
 	vm = create_vm(mode, VCPU_ID,
-		       2ul << (GUEST_MEM_SHIFT - PAGE_SHIFT_4K),
+		       (2 * guest_memory_bytes) >> PAGE_SHIFT_4K,
		       guest_code);
guest_page_size = vm_get_page_size(vm); - /* - * A little more than 1G of guest page sized pages. Cover the - * case where the size is not aligned to 64 pages. - */ - guest_num_pages = (1ul << (GUEST_MEM_SHIFT - - vm_get_page_shift(vm))) + 16; + + TEST_ASSERT(guest_memory_bytes % guest_page_size == 0, + "Guest memory size is not guest page size aligned."); + + guest_num_pages = guest_memory_bytes / guest_page_size; + #ifdef __s390x__ /* Round up to multiple of 1M (segment size) */ guest_num_pages = (guest_num_pages + 0xff) & ~0xffUL; #endif + /* + * If there should be more memory in the guest test region than there + * can be pages in the guest, it will definitely cause problems. + */ + TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm), + "Requested more guest memory than address space allows.\n" + " guest pages: %lx max gfn: %lx\n", + guest_num_pages, vm_get_max_gfn(vm));
host_page_size = getpagesize(); - host_num_pages = (guest_num_pages * guest_page_size) / host_page_size + - !!((guest_num_pages * guest_page_size) % - host_page_size); + TEST_ASSERT(guest_memory_bytes % host_page_size == 0, + "Guest memory size is not host page size aligned."); + host_num_pages = guest_memory_bytes / host_page_size;
guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * guest_page_size; @@ -369,7 +377,7 @@ static void help(char *name) int i;
puts(""); - printf("usage: %s [-h] [-m mode]\n", name); + printf("usage: %s [-h] [-m mode] [-b bytes test memory]\n", name); printf(" -m: specify the guest mode ID to test\n" " (default: test all supported modes)\n" " This option may be used multiple times.\n" @@ -378,6 +386,8 @@ static void help(char *name) printf(" %d: %s%s\n", i, vm_guest_mode_string(i), vm_guest_mode_params[i].supported ? " (supported)" : ""); } + printf(" -b: specify the number of bytes of memory which should be\n" + " allocated to the guest.\n"); puts(""); exit(0); } @@ -385,6 +395,7 @@ static void help(char *name) int main(int argc, char *argv[]) { bool mode_selected = false; + uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE; unsigned int mode; int opt, i; #ifdef __aarch64__ @@ -410,7 +421,7 @@ int main(int argc, char *argv[]) vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true); #endif
- while ((opt = getopt(argc, argv, "hm:")) != -1) { + while ((opt = getopt(argc, argv, "hm:b:")) != -1) { switch (opt) { case 'm': if (!mode_selected) { @@ -423,6 +434,8 @@ int main(int argc, char *argv[]) "Guest mode ID %d too big", mode); vm_guest_mode_params[mode].enabled = true; break; + case 'b': + guest_memory_bytes = strtoull(optarg, NULL, 0); case 'h': default: help(argv[0]); @@ -436,7 +449,7 @@ int main(int argc, char *argv[]) TEST_ASSERT(vm_guest_mode_params[i].supported, "Guest mode ID %d (%s) not supported.", i, vm_guest_mode_string(i)); - run_test(i); + run_test(i, guest_memory_bytes); }
return 0;
In preparation for supporting multiple vCPUs in the demand paging test, pass arguments to the vCPU instead of syncing globals to it.
Signed-off-by: Ben Gardon bgardon@google.com
---
 .../selftests/kvm/demand_paging_test.c | 61 +++++++++++--------
 1 file changed, 37 insertions(+), 24 deletions(-)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 19982a33a0ca2..8fd46e99d9e30 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -44,7 +44,6 @@ */ static uint64_t host_page_size; static uint64_t guest_page_size; -static uint64_t guest_num_pages;
static char *guest_data_prototype;
@@ -65,14 +64,13 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM; * Continuously write to the first 8 bytes of each page in the demand paging * memory region. */ -static void guest_code(void) +static void guest_code(uint64_t gva, uint64_t pages) { int i;
- for (i = 0; i < guest_num_pages; i++) { - uint64_t addr = guest_test_virt_mem; + for (i = 0; i < pages; i++) { + uint64_t addr = gva + (i * guest_page_size);
- addr += i * guest_page_size; addr &= ~(host_page_size - 1); *(uint64_t *)addr = 0x0123456789ABCDEF; } @@ -84,18 +82,31 @@ static void guest_code(void) static void *host_test_mem; static uint64_t host_num_pages;
+struct vcpu_thread_args { + uint64_t gva; + uint64_t pages; + struct kvm_vm *vm; + int vcpu_id; +}; + static void *vcpu_worker(void *data) { int ret; - struct kvm_vm *vm = data; + struct vcpu_thread_args *args = (struct vcpu_thread_args *)data; + struct kvm_vm *vm = args->vm; + int vcpu_id = args->vcpu_id; + uint64_t gva = args->gva; + uint64_t pages = args->pages; struct kvm_run *run;
- run = vcpu_state(vm, VCPU_ID); + vcpu_args_set(vm, vcpu_id, 2, gva, pages); + + run = vcpu_state(vm, vcpu_id);
/* Let the guest access its memory */ - ret = _vcpu_run(vm, VCPU_ID); + ret = _vcpu_run(vm, vcpu_id); TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); - if (get_ucall(vm, VCPU_ID, NULL) != UCALL_SYNC) { + if (get_ucall(vm, vcpu_id, NULL) != UCALL_SYNC) { TEST_ASSERT(false, "Invalid guest sync status: exit_reason=%s\n", exit_reason_str(run->exit_reason)); @@ -259,11 +270,13 @@ static int setup_demand_paging(struct kvm_vm *vm,
#define PAGE_SHIFT_4K 12
-static void run_test(enum vm_guest_mode mode, uint64_t guest_memory_bytes) +static void run_test(enum vm_guest_mode mode, uint64_t vcpu_wss) { pthread_t vcpu_thread; pthread_t uffd_handler_thread; struct kvm_vm *vm; + struct vcpu_thread_args vcpu_args; + uint64_t guest_num_pages; int r;
/* @@ -273,16 +286,15 @@ static void run_test(enum vm_guest_mode mode, uint64_t guest_memory_bytes) * number will be enough for all archs. (e.g., 64K page size guest * will need even less memory for page tables). */ - vm = create_vm(mode, VCPU_ID, - (2 * guest_memory_bytes) >> PAGE_SHIFT_4K, + vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K, guest_code);
guest_page_size = vm_get_page_size(vm);
- TEST_ASSERT(guest_memory_bytes % guest_page_size == 0, + TEST_ASSERT(vcpu_wss % guest_page_size == 0, "Guest memory size is not guest page size aligned.");
- guest_num_pages = guest_memory_bytes / guest_page_size; + guest_num_pages = vcpu_wss / guest_page_size;
#ifdef __s390x__ /* Round up to multiple of 1M (segment size) */ @@ -298,9 +310,9 @@ static void run_test(enum vm_guest_mode mode, uint64_t guest_memory_bytes) guest_num_pages, vm_get_max_gfn(vm));
host_page_size = getpagesize(); - TEST_ASSERT(guest_memory_bytes % host_page_size == 0, + TEST_ASSERT(vcpu_wss % host_page_size == 0, "Guest memory size is not host page size aligned."); - host_num_pages = guest_memory_bytes / host_page_size; + host_num_pages = vcpu_wss / host_page_size;
guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * guest_page_size; @@ -344,10 +356,12 @@ static void run_test(enum vm_guest_mode mode, uint64_t guest_memory_bytes) /* Export the shared variables to the guest */ sync_global_to_guest(vm, host_page_size); sync_global_to_guest(vm, guest_page_size); - sync_global_to_guest(vm, guest_test_virt_mem); - sync_global_to_guest(vm, guest_num_pages);
- pthread_create(&vcpu_thread, NULL, vcpu_worker, vm); + vcpu_args.vm = vm; + vcpu_args.vcpu_id = VCPU_ID; + vcpu_args.gva = guest_test_virt_mem; + vcpu_args.pages = guest_num_pages; + pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args);
/* Wait for the vcpu thread to quit */ pthread_join(vcpu_thread, NULL); @@ -386,8 +400,7 @@ static void help(char *name) printf(" %d: %s%s\n", i, vm_guest_mode_string(i), vm_guest_mode_params[i].supported ? " (supported)" : ""); } - printf(" -b: specify the number of bytes of memory which should be\n" - " allocated to the guest.\n"); + printf(" -b: specify the working set size, in bytes for each vCPU.\n"); puts(""); exit(0); } @@ -395,7 +408,7 @@ static void help(char *name) int main(int argc, char *argv[]) { bool mode_selected = false; - uint64_t guest_memory_bytes = DEFAULT_GUEST_TEST_MEM_SIZE; + uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE; unsigned int mode; int opt, i; #ifdef __aarch64__ @@ -435,7 +448,7 @@ int main(int argc, char *argv[]) vm_guest_mode_params[mode].enabled = true; break; case 'b': - guest_memory_bytes = strtoull(optarg, NULL, 0); + vcpu_wss = strtoull(optarg, NULL, 0); case 'h': default: help(argv[0]); @@ -449,7 +462,7 @@ int main(int argc, char *argv[]) TEST_ASSERT(vm_guest_mode_params[i].supported, "Guest mode ID %d (%s) not supported.", i, vm_guest_mode_string(i)); - run_test(i, guest_memory_bytes); + run_test(i, vcpu_wss); }
return 0;
On Fri, Sep 27, 2019 at 09:18:32AM -0700, Ben Gardon wrote:
In preparation for supporting multiple vCPUs in the demand paging test, pass arguments to the vCPU instead of syncing globals to it.
Signed-off-by: Ben Gardon bgardon@google.com
 .../selftests/kvm/demand_paging_test.c | 61 +++++++++++--------
 1 file changed, 37 insertions(+), 24 deletions(-)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 19982a33a0ca2..8fd46e99d9e30 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -44,7 +44,6 @@ */ static uint64_t host_page_size; static uint64_t guest_page_size; -static uint64_t guest_num_pages; static char *guest_data_prototype; @@ -65,14 +64,13 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
  * Continuously write to the first 8 bytes of each page in the demand paging
  * memory region.
  */
 -static void guest_code(void)
 +static void guest_code(uint64_t gva, uint64_t pages)
  {
  	int i;

 -	for (i = 0; i < guest_num_pages; i++) {
 -		uint64_t addr = guest_test_virt_mem;
 +	for (i = 0; i < pages; i++) {
 +		uint64_t addr = gva + (i * guest_page_size);

 -		addr += i * guest_page_size;
  		addr &= ~(host_page_size - 1);
  		*(uint64_t *)addr = 0x0123456789ABCDEF;
  	}
 @@ -84,18 +82,31 @@ static void guest_code(void)
  static void *host_test_mem;
  static uint64_t host_num_pages;

 +struct vcpu_thread_args {
 +	uint64_t gva;
 +	uint64_t pages;
 +	struct kvm_vm *vm;
 +	int vcpu_id;
 +};
 +
  static void *vcpu_worker(void *data)
  {
  	int ret;
 -	struct kvm_vm *vm = data;
 +	struct vcpu_thread_args *args = (struct vcpu_thread_args *)data;
 +	struct kvm_vm *vm = args->vm;
 +	int vcpu_id = args->vcpu_id;
 +	uint64_t gva = args->gva;
 +	uint64_t pages = args->pages;
  	struct kvm_run *run;

 -	run = vcpu_state(vm, VCPU_ID);
 +	vcpu_args_set(vm, vcpu_id, 2, gva, pages);
AArch64 doesn't implement vcpu_args_set(), but I see in the first patch that you've added this test to AArch64 as well.
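[For reference, a minimal sketch of what an AArch64 vcpu_args_set() could look like, assuming the existing set_reg() helper and ARM64_CORE_REG() macro from aarch64/processor.c and passing arguments in x0-x7 per the AAPCS64 calling convention; this is an illustration, not code from this series:

	#include <stdarg.h>

	void vcpu_args_set(struct kvm_vm *vm, uint32_t vcpuid,
			   unsigned int num, ...)
	{
		va_list ap;
		int i;

		TEST_ASSERT(num >= 1 && num <= 8,
			    "Unsupported number of args: %u", num);

		va_start(ap, num);
		/* Each uint64_t argument lands in the corresponding xN. */
		for (i = 0; i < num; i++)
			set_reg(vm, vcpuid, ARM64_CORE_REG(regs.regs[i]),
				va_arg(ap, uint64_t));
		va_end(ap);
	}
]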
Wouldn't it be easier to just create a global array of size nr-vcpus for each variable that needs to be shared with the guest? Then derive the per-cpu index from the acpi-id or maybe abuse some msr for it. We could probably even add some macros to build some type of a per-cpu framework.
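[For illustration, that per-cpu approach might look something like the sketch below; guest_get_vcpu_id() is a hypothetical helper that would derive the index from, e.g., the APIC ID on x86 or MPIDR on arm64, and MAX_VCPUS is illustrative:

	#define MAX_VCPUS 512

	/* Synced to the guest with sync_global_to_guest() before vCPUs run. */
	static uint64_t guest_gvas[MAX_VCPUS];
	static uint64_t guest_pages[MAX_VCPUS];

	static void guest_code(void)
	{
		int cpu = guest_get_vcpu_id();	/* hypothetical helper */
		uint64_t gva = guest_gvas[cpu];
		uint64_t i;

		for (i = 0; i < guest_pages[cpu]; i++)
			*(uint64_t *)(gva + i * guest_page_size) =
				0x0123456789ABCDEF;

		GUEST_SYNC(1);
	}
]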
Thanks, drew
Most VMs have multiple vCPUs, the concurrent execution of which has a substantial impact on demand paging performance. Add an option to create multiple vCPUs, each of which accesses a disjoint region of memory.
Signed-off-by: Ben Gardon bgardon@google.com
---
 .../selftests/kvm/demand_paging_test.c | 187 ++++++++++++------
 1 file changed, 127 insertions(+), 60 deletions(-)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index 8fd46e99d9e30..f8afc0683c346 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -24,8 +24,6 @@ #include "kvm_util.h" #include "processor.h"
-#define VCPU_ID 1 - /* The memory slot index demand page */ #define TEST_MEM_SLOT_INDEX 1
@@ -36,6 +34,12 @@
#define __NR_userfaultfd 323
+#ifdef PRINT_PER_VCPU_UPDATES +#define PER_VCPU_DEBUG(...) DEBUG(__VA_ARGS__) +#else +#define PER_VCPU_DEBUG(...) +#endif + /* * Guest/Host shared variables. Ensure addr_gva2hva() and/or * sync_global_to/from_guest() are used when accessing from @@ -78,10 +82,6 @@ static void guest_code(uint64_t gva, uint64_t pages) GUEST_SYNC(1); }
-/* Points to the test VM memory region on which we are doing demand paging */ -static void *host_test_mem; -static uint64_t host_num_pages; - struct vcpu_thread_args { uint64_t gva; uint64_t pages; @@ -115,18 +115,32 @@ static void *vcpu_worker(void *data) return NULL; }
-static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, - uint64_t extra_mem_pages, void *guest_code) +#define PAGE_SHIFT_4K 12 +#define PTES_PER_PT 512 + +static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus, + uint64_t vcpu_wss) { struct kvm_vm *vm; - uint64_t extra_pg_pages = extra_mem_pages / 512 * 2; + uint64_t pages = DEFAULT_GUEST_PHY_PAGES;
-	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
+	/* Account for a few pages per-vCPU for stacks */
+	pages += DEFAULT_STACK_PGS * vcpus;
+
+	/*
+	 * Reserve twice the amount of memory needed to map the test region and
+	 * the page table / stacks region, at 4k, for page tables. Do the
+	 * calculation with 4K page size: the smallest of all archs. (e.g., 64K
+	 * page size guest will need even less memory for page tables).
+	 */
+	pages += (2 * pages) / PTES_PER_PT;
+	pages += ((2 * vcpus * vcpu_wss) >> PAGE_SHIFT_4K) / PTES_PER_PT;
+
+	vm = _vm_create(mode, pages, O_RDWR);
 	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
 #ifdef __x86_64__
 	vm_create_irqchip(vm);
 #endif
-	vm_vcpu_add_default(vm, vcpuid, guest_code);
 	return vm;
 }
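To make the page-table accounting concrete (numbers purely illustrative): with 4 vCPUs and a 1 GiB working set each, the test-region term works out to ((2 * 4 * (1 << 30)) >> 12) / 512 = 4096 extra pages of page tables, on top of the small term covering DEFAULT_GUEST_PHY_PAGES and the per-vCPU stacks.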
@@ -224,15 +238,13 @@ static void *uffd_handler_thread_fn(void *arg) }
static int setup_demand_paging(struct kvm_vm *vm, - pthread_t *uffd_handler_thread) + pthread_t *uffd_handler_thread, + struct uffd_handler_args *uffd_args, + void *hva, uint64_t len) { int uffd; struct uffdio_api uffdio_api; struct uffdio_register uffdio_register; - struct uffd_handler_args uffd_args; - - guest_data_prototype = malloc(host_page_size); - memset(guest_data_prototype, 0xAB, host_page_size);
uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); if (uffd == -1) { @@ -247,8 +259,8 @@ static int setup_demand_paging(struct kvm_vm *vm, return -1; }
- uffdio_register.range.start = (uint64_t)host_test_mem; - uffdio_register.range.len = host_num_pages * host_page_size; + uffdio_register.range.start = (uint64_t)hva; + uffdio_register.range.len = len; uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING; if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) { DEBUG("ioctl uffdio_register failed\n"); @@ -261,40 +273,35 @@ static int setup_demand_paging(struct kvm_vm *vm, return -1; }
- uffd_args.uffd = uffd; + uffd_args->uffd = uffd; pthread_create(uffd_handler_thread, NULL, uffd_handler_thread_fn, - &uffd_args); + uffd_args); + + PER_VCPU_DEBUG("Created uffd thread for HVA range [%p, %p)\n", + hva, hva + len);
return 0; }
-#define PAGE_SHIFT_4K 12 - -static void run_test(enum vm_guest_mode mode, uint64_t vcpu_wss) +static void run_test(enum vm_guest_mode mode, int vcpus, uint64_t vcpu_wss) { - pthread_t vcpu_thread; - pthread_t uffd_handler_thread; + pthread_t *vcpu_threads; + pthread_t *uffd_handler_threads; + struct uffd_handler_args *uffd_args; struct kvm_vm *vm; - struct vcpu_thread_args vcpu_args; + struct vcpu_thread_args *vcpu_args; uint64_t guest_num_pages; + int vcpu_id; int r;
-	/*
-	 * We reserve page table for twice the amount of memory we intend
-	 * to use in the test region for demand paging. Here we do the
-	 * calculation with 4K page size which is the smallest so the page
-	 * number will be enough for all archs. (e.g., 64K page size guest
-	 * will need even less memory for page tables).
-	 */
-	vm = create_vm(mode, VCPU_ID, (2 * vcpu_wss) >> PAGE_SHIFT_4K,
-		       guest_code);
+	vm = create_vm(mode, vcpus, vcpu_wss);
guest_page_size = vm_get_page_size(vm);
TEST_ASSERT(vcpu_wss % guest_page_size == 0, "Guest memory size is not guest page size aligned.");
- guest_num_pages = vcpu_wss / guest_page_size; + guest_num_pages = (vcpus * vcpu_wss) / guest_page_size;
 #ifdef __s390x__
 	/* Round up to multiple of 1M (segment size) */
@@ -306,13 +313,12 @@ static void run_test(enum vm_guest_mode mode, uint64_t vcpu_wss)
 	 */
 	TEST_ASSERT(guest_num_pages < vm_get_max_gfn(vm),
 		    "Requested more guest memory than address space allows.\n"
-		    "    guest pages: %lx max gfn: %lx\n",
-		    guest_num_pages, vm_get_max_gfn(vm));
+		    "    guest pages: %lx max gfn: %lx vcpus: %d wss: %lx\n",
+		    guest_num_pages, vm_get_max_gfn(vm), vcpus, vcpu_wss);
host_page_size = getpagesize(); TEST_ASSERT(vcpu_wss % host_page_size == 0, "Guest memory size is not host page size aligned."); - host_num_pages = vcpu_wss / host_page_size;
guest_test_phys_mem = (vm_get_max_gfn(vm) - guest_num_pages) * guest_page_size; @@ -337,18 +343,8 @@ static void run_test(enum vm_guest_mode mode, uint64_t vcpu_wss) virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages * guest_page_size, 0);
- /* Cache the HVA pointer of the region */ - host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem); - - /* Set up user fault fd to handle demand paging requests. */ quit_uffd_thread = false; - r = setup_demand_paging(vm, &uffd_handler_thread); - if (r < 0) - exit(-r);
-#ifdef __x86_64__ - vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid()); -#endif #ifdef __aarch64__ ucall_init(vm, NULL); #endif @@ -357,21 +353,83 @@ static void run_test(enum vm_guest_mode mode, uint64_t vcpu_wss) sync_global_to_guest(vm, host_page_size); sync_global_to_guest(vm, guest_page_size);
- vcpu_args.vm = vm; - vcpu_args.vcpu_id = VCPU_ID; - vcpu_args.gva = guest_test_virt_mem; - vcpu_args.pages = guest_num_pages; - pthread_create(&vcpu_thread, NULL, vcpu_worker, &vcpu_args); + guest_data_prototype = malloc(host_page_size); + TEST_ASSERT(guest_data_prototype, "Memory allocation failed"); + memset(guest_data_prototype, 0xAB, host_page_size); + + vcpu_threads = malloc(vcpus * sizeof(*vcpu_threads)); + TEST_ASSERT(vcpu_threads, "Memory allocation failed"); + + uffd_handler_threads = malloc(vcpus * sizeof(*uffd_handler_threads)); + TEST_ASSERT(uffd_handler_threads, "Memory allocation failed"); + + uffd_args = malloc(vcpus * sizeof(*uffd_args)); + TEST_ASSERT(uffd_args, "Memory allocation failed"); + + vcpu_args = malloc(vcpus * sizeof(*vcpu_args)); + TEST_ASSERT(vcpu_args, "Memory allocation failed"); + + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { + vm_paddr_t vcpu_gpa; + void *vcpu_hva; + + vm_vcpu_add_default(vm, vcpu_id, guest_code); + + vcpu_gpa = guest_test_phys_mem + (vcpu_id * vcpu_wss); + PER_VCPU_DEBUG("Added VCPU %d with test mem gpa [%lx, %lx)\n", + vcpu_id, vcpu_gpa, vcpu_gpa + vcpu_wss); + + /* Cache the HVA pointer of the region */ + vcpu_hva = addr_gpa2hva(vm, vcpu_gpa); + + /* Set up user fault fd to handle demand paging requests. */ + r = setup_demand_paging(vm, &uffd_handler_threads[vcpu_id], + &uffd_args[vcpu_id], vcpu_hva, + vcpu_wss); + if (r < 0) + exit(-r); + +#ifdef __x86_64__ + vcpu_set_cpuid(vm, vcpu_id, kvm_get_supported_cpuid()); +#endif + + vcpu_args[vcpu_id].vm = vm; + vcpu_args[vcpu_id].vcpu_id = vcpu_id; + vcpu_args[vcpu_id].gva = guest_test_virt_mem + + (vcpu_id * vcpu_wss); + vcpu_args[vcpu_id].pages = vcpu_wss / guest_page_size; + } + + DEBUG("Finished creating vCPUs and starting uffd threads\n"); + + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { + pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker, + &vcpu_args[vcpu_id]); + } + + DEBUG("Started all vCPUs\n"); + + /* Wait for the vcpu threads to quit */ + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { + pthread_join(vcpu_threads[vcpu_id], NULL); + PER_VCPU_DEBUG("Joined thread for vCPU %d\n", vcpu_id); + }
- /* Wait for the vcpu thread to quit */ - pthread_join(vcpu_thread, NULL); + DEBUG("All vCPU threads joined\n");
/* Tell the user fault fd handler thread to quit */ quit_uffd_thread = true; - pthread_join(uffd_handler_thread, NULL); + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) + pthread_join(uffd_handler_threads[vcpu_id], NULL);
ucall_uninit(vm); kvm_vm_free(vm); + + free(guest_data_prototype); + free(vcpu_threads); + free(uffd_handler_threads); + free(uffd_args); + free(vcpu_args); }
struct vm_guest_mode_params { @@ -391,7 +449,8 @@ static void help(char *name) int i;
puts(""); - printf("usage: %s [-h] [-m mode] [-b bytes test memory]\n", name); + printf("usage: %s [-h] [-m mode] [-b bytes test memory] [-v vcpus]\n", + name); printf(" -m: specify the guest mode ID to test\n" " (default: test all supported modes)\n" " This option may be used multiple times.\n" @@ -401,6 +460,7 @@ static void help(char *name) vm_guest_mode_params[i].supported ? " (supported)" : ""); } printf(" -b: specify the working set size, in bytes for each vCPU.\n"); + printf(" -v: specify the number of vCPUs to run.\n"); puts(""); exit(0); } @@ -409,6 +469,7 @@ int main(int argc, char *argv[]) { bool mode_selected = false; uint64_t vcpu_wss = DEFAULT_GUEST_TEST_MEM_SIZE; + int vcpus = 1; unsigned int mode; int opt, i; #ifdef __aarch64__ @@ -434,7 +495,7 @@ int main(int argc, char *argv[]) vm_guest_mode_params_init(VM_MODE_P40V48_4K, true, true); #endif
- while ((opt = getopt(argc, argv, "hm:b:")) != -1) { + while ((opt = getopt(argc, argv, "hm:b:v:")) != -1) { switch (opt) { case 'm': if (!mode_selected) { @@ -449,6 +510,12 @@ int main(int argc, char *argv[]) break; case 'b': vcpu_wss = strtoull(optarg, NULL, 0); + break; + case 'v': + vcpus = atoi(optarg); + TEST_ASSERT(vcpus > 0, + "Must have a positive number of vCPUs"); + break; case 'h': default: help(argv[0]); @@ -462,7 +529,7 @@ int main(int argc, char *argv[]) TEST_ASSERT(vm_guest_mode_params[i].supported, "Guest mode ID %d (%s) not supported.", i, vm_guest_mode_string(i)); - run_test(i, vcpu_wss); + run_test(i, vcpus, vcpu_wss); }
return 0;
In order to quantify demand paging performance, time guest execution during demand paging.
Signed-off-by: Ben Gardon bgardon@google.com
---
 .../selftests/kvm/demand_paging_test.c | 68 +++++++++++++++++++
 1 file changed, 68 insertions(+)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index f8afc0683c346..fe6c5a4f8b8c2 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -34,6 +34,12 @@
#define __NR_userfaultfd 323
+#ifdef PRINT_PER_PAGE_UPDATES +#define PER_PAGE_DEBUG(...) DEBUG(__VA_ARGS__) +#else +#define PER_PAGE_DEBUG(...) +#endif + #ifdef PRINT_PER_VCPU_UPDATES #define PER_VCPU_DEBUG(...) DEBUG(__VA_ARGS__) #else @@ -64,6 +70,26 @@ static uint64_t guest_test_phys_mem; */ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+int64_t to_ns(struct timespec ts) +{ + return (int64_t)ts.tv_nsec + 1000000000LL * (int64_t)ts.tv_sec; +} + +struct timespec diff(struct timespec start, struct timespec end) +{ + struct timespec temp; + + if ((end.tv_nsec-start.tv_nsec) < 0) { + temp.tv_sec = end.tv_sec - start.tv_sec - 1; + temp.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec; + } else { + temp.tv_sec = end.tv_sec - start.tv_sec; + temp.tv_nsec = end.tv_nsec - start.tv_nsec; + } + + return temp; +} + /* * Continuously write to the first 8 bytes of each page in the demand paging * memory region. @@ -98,11 +124,15 @@ static void *vcpu_worker(void *data) uint64_t gva = args->gva; uint64_t pages = args->pages; struct kvm_run *run; + struct timespec start; + struct timespec end;
vcpu_args_set(vm, vcpu_id, 2, gva, pages);
run = vcpu_state(vm, vcpu_id);
+ clock_gettime(CLOCK_MONOTONIC, &start); + /* Let the guest access its memory */ ret = _vcpu_run(vm, vcpu_id); TEST_ASSERT(ret == 0, "vcpu_run failed: %d\n", ret); @@ -112,6 +142,11 @@ static void *vcpu_worker(void *data) exit_reason_str(run->exit_reason)); }
+ clock_gettime(CLOCK_MONOTONIC, &end); + PER_VCPU_DEBUG("vCPU %d execution time: %lld.%.9lds\n", vcpu_id, + (long long)(diff(start, end).tv_sec), + diff(start, end).tv_nsec); + return NULL; }
@@ -147,6 +182,8 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus, static int handle_uffd_page_request(int uffd, uint64_t addr) { pid_t tid; + struct timespec start; + struct timespec end; struct uffdio_copy copy; int r;
@@ -157,6 +194,8 @@ static int handle_uffd_page_request(int uffd, uint64_t addr) copy.len = host_page_size; copy.mode = 0;
+	clock_gettime(CLOCK_MONOTONIC, &start);
+
 	r = ioctl(uffd, UFFDIO_COPY, &copy);
 	if (r == -1) {
 		DEBUG("Failed to page in 0x%lx from thread %d with errno: %d\n",
@@ -164,6 +203,13 @@ static int handle_uffd_page_request(int uffd, uint64_t addr)
 		return r;
 	}
+ clock_gettime(CLOCK_MONOTONIC, &end); + + PER_PAGE_DEBUG("UFFDIO_COPY %d \t%lld ns\n", tid, + (long long)to_ns(diff(start, end))); + PER_PAGE_DEBUG("Paged in %ld bytes at 0x%lx from thread %d\n", + host_page_size, addr, tid); + return 0; }
@@ -178,7 +224,10 @@ static void *uffd_handler_thread_fn(void *arg) struct uffd_handler_args *uffd_args = (struct uffd_handler_args *)arg; int uffd = uffd_args->uffd; int64_t pages = 0; + struct timespec start; + struct timespec end;
+ clock_gettime(CLOCK_MONOTONIC, &start); while (!quit_uffd_thread) { struct uffd_msg msg; struct pollfd pollfd[1]; @@ -234,6 +283,13 @@ static void *uffd_handler_thread_fn(void *arg) pages++; }
+	clock_gettime(CLOCK_MONOTONIC, &end);
+	PER_VCPU_DEBUG("userfaulted %ld pages over %lld.%.9lds. (%f/sec)\n",
+		       pages, (long long)(diff(start, end).tv_sec),
+		       diff(start, end).tv_nsec, pages /
+		       ((double)diff(start, end).tv_sec +
+			(double)diff(start, end).tv_nsec / 1000000000.0));
+
 	return NULL;
 }
@@ -293,6 +349,8 @@ static void run_test(enum vm_guest_mode mode, int vcpus, uint64_t vcpu_wss) uint64_t guest_num_pages; int vcpu_id; int r; + struct timespec start; + struct timespec end;
vm = create_vm(mode, vcpus, vcpu_wss);
@@ -402,6 +460,8 @@ static void run_test(enum vm_guest_mode mode, int vcpus, uint64_t vcpu_wss)
DEBUG("Finished creating vCPUs and starting uffd threads\n");
+ clock_gettime(CLOCK_MONOTONIC, &start); + for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) { pthread_create(&vcpu_threads[vcpu_id], NULL, vcpu_worker, &vcpu_args[vcpu_id]); @@ -417,11 +477,19 @@ static void run_test(enum vm_guest_mode mode, int vcpus, uint64_t vcpu_wss)
DEBUG("All vCPU threads joined\n");
+ clock_gettime(CLOCK_MONOTONIC, &end); + /* Tell the user fault fd handler thread to quit */ quit_uffd_thread = true; for (vcpu_id = 0; vcpu_id < vcpus; vcpu_id++) pthread_join(uffd_handler_threads[vcpu_id], NULL);
+ DEBUG("Total guest execution time: %lld.%.9lds\n", + (long long)(diff(start, end).tv_sec), diff(start, end).tv_nsec); + DEBUG("Overall demand paging rate: %f pgs/sec\n", + guest_num_pages / ((double)diff(start, end).tv_sec + + (double)diff(start, end).tv_nsec / 100000000.0)); + ucall_uninit(vm); kvm_vm_free(vm);
KVM creates internal memslots between 3 and 4 GiB paddrs on the first vCPU creation. If memslot 0 is large enough, it collides with these memslots and causes vCPU creation to fail. Add a paddr parameter for memslot 0 so that tests which support large VMs can relocate memslot 0 above 4 GiB.
Signed-off-by: Ben Gardon bgardon@google.com
---
 tools/testing/selftests/kvm/demand_paging_test.c | 2 +-
 tools/testing/selftests/kvm/dirty_log_test.c     | 2 +-
 tools/testing/selftests/kvm/include/kvm_util.h   | 3 ++-
 tools/testing/selftests/kvm/lib/kvm_util.c       | 7 ++++---
 4 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index fe6c5a4f8b8c2..eb1f7e4b83de3 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -171,7 +171,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus, pages += (2 * pages) / PTES_PER_PT; pages += ((2 * vcpus * vcpu_wss) >> PAGE_SHIFT_4K) / PTES_PER_PT;
- vm = _vm_create(mode, pages, O_RDWR); + vm = vm_create(mode, pages, O_RDWR); kvm_vm_elf_load(vm, program_invocation_name, 0, 0); #ifdef __x86_64__ vm_create_irqchip(vm); diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 5614222a66285..181eac3a12b66 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -252,7 +252,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, struct kvm_vm *vm; uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
- vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR); + vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR); kvm_vm_elf_load(vm, program_invocation_name, 0, 0); #ifdef __x86_64__ vm_create_irqchip(vm); diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 29cccaf96baf6..4f672c00c9e9b 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -69,7 +69,8 @@ int kvm_check_cap(long cap); int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm); -struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm); +struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t guest_paddr, + uint64_t phy_pages, int perm); void kvm_vm_free(struct kvm_vm *vmp); void kvm_vm_restart(struct kvm_vm *vmp, int perm); void kvm_vm_release(struct kvm_vm *vmp); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 80a338b5403c3..7ec2bbdaba875 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -132,7 +132,8 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES, * descriptor to control the created VM is created with the permissions * given by perm (e.g. O_RDWR). */ -struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) +struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t guest_paddr, + uint64_t phy_pages, int perm) { struct kvm_vm *vm;
@@ -229,14 +230,14 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) vm->vpages_mapped = sparsebit_alloc(); if (phy_pages != 0) vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, - 0, 0, phy_pages, 0); + guest_paddr, 0, phy_pages, 0);
return vm; }
struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) { - return _vm_create(mode, phy_pages, perm); + return _vm_create(mode, 0, phy_pages, perm); }
/*
On Fri, Sep 27, 2019 at 09:18:35AM -0700, Ben Gardon wrote:
KVM creates internal memslots between 3 and 4 GiB paddrs on the first vCPU creation. If memslot 0 is large enough, it collides with these memslots and causes vCPU creation to fail. Add a paddr parameter for memslot 0 so that tests which support large VMs can relocate memslot 0 above 4 GiB.
Signed-off-by: Ben Gardon bgardon@google.com
 tools/testing/selftests/kvm/demand_paging_test.c | 2 +-
 tools/testing/selftests/kvm/dirty_log_test.c     | 2 +-
 tools/testing/selftests/kvm/include/kvm_util.h   | 3 ++-
 tools/testing/selftests/kvm/lib/kvm_util.c       | 7 ++++---
 4 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index fe6c5a4f8b8c2..eb1f7e4b83de3 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -171,7 +171,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus, pages += (2 * pages) / PTES_PER_PT; pages += ((2 * vcpus * vcpu_wss) >> PAGE_SHIFT_4K) / PTES_PER_PT;
 -	vm = _vm_create(mode, pages, O_RDWR);
 +	vm = vm_create(mode, pages, O_RDWR);
Eh, we should have removed/renamed _vm_create() with 12c386b23083 ("KVM: selftests: Move vm type into _vm_create() internally")
kvm_vm_elf_load(vm, program_invocation_name, 0, 0); #ifdef __x86_64__ vm_create_irqchip(vm); diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c index 5614222a66285..181eac3a12b66 100644 --- a/tools/testing/selftests/kvm/dirty_log_test.c +++ b/tools/testing/selftests/kvm/dirty_log_test.c @@ -252,7 +252,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, uint32_t vcpuid, struct kvm_vm *vm; uint64_t extra_pg_pages = extra_mem_pages / 512 * 2;
 -	vm = _vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
 +	vm = vm_create(mode, DEFAULT_GUEST_PHY_PAGES + extra_pg_pages, O_RDWR);
  	kvm_vm_elf_load(vm, program_invocation_name, 0, 0);
#ifdef __x86_64__ vm_create_irqchip(vm); diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 29cccaf96baf6..4f672c00c9e9b 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -69,7 +69,8 @@ int kvm_check_cap(long cap); int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap); struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm); -struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm); +struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t guest_paddr,
 +			  uint64_t phy_pages, int perm);
But now we need it again. Or do we? How about just documenting that if phy_pages is >= some-limit then the base address will be 4G, otherwise it will be zero? The documentation (comment above _vm_create()) needs to be updated with this patch regardless. Either the behavior changes or a new parameter is added.
If you go with my suggestion to just use 4G when phy_pages are large, then please rename _vm_create to vm_create.
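[For illustration, a rough sketch of that suggested alternative; the 3 GiB threshold, the reuse of PAGE_SHIFT_4K here, and the exact cutoff logic are assumptions for the sketch, not settled details:

	struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages,
				 int perm)
	{
		uint64_t guest_paddr = 0;

		/*
		 * If memslot 0 could grow into KVM's internal memslots in
		 * [3G, 4G), start it at 4G instead.
		 */
		if (phy_pages >= ((3ul << 30) >> PAGE_SHIFT_4K))
			guest_paddr = 4ul << 30;

		return _vm_create(mode, guest_paddr, phy_pages, perm);
	}
]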
Thanks, drew
void kvm_vm_free(struct kvm_vm *vmp); void kvm_vm_restart(struct kvm_vm *vmp, int perm); void kvm_vm_release(struct kvm_vm *vmp); diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c index 80a338b5403c3..7ec2bbdaba875 100644 --- a/tools/testing/selftests/kvm/lib/kvm_util.c +++ b/tools/testing/selftests/kvm/lib/kvm_util.c @@ -132,7 +132,8 @@ _Static_assert(sizeof(vm_guest_mode_string)/sizeof(char *) == NUM_VM_MODES,
  * descriptor to control the created VM is created with the permissions
  * given by perm (e.g. O_RDWR).
*/ -struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) +struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t guest_paddr,
 +			  uint64_t phy_pages, int perm)
{ struct kvm_vm *vm; @@ -229,14 +230,14 @@ struct kvm_vm *_vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) vm->vpages_mapped = sparsebit_alloc(); if (phy_pages != 0) vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
 -					    0, 0, phy_pages, 0);
 +					    guest_paddr, 0, phy_pages, 0);
return vm; } struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm) {
 -	return _vm_create(mode, phy_pages, perm);
 +	return _vm_create(mode, 0, phy_pages, perm);
  }

--
2.23.0.444.g18eeb5a265-goog
Move memslot 0 past 4 GiB to support the large page tables required to map several TiB of memory.
Signed-off-by: Ben Gardon bgardon@google.com
---
 tools/testing/selftests/kvm/demand_paging_test.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c index eb1f7e4b83de3..a733bb3c91fd4 100644 --- a/tools/testing/selftests/kvm/demand_paging_test.c +++ b/tools/testing/selftests/kvm/demand_paging_test.c @@ -24,6 +24,12 @@ #include "kvm_util.h" #include "processor.h"
+/* + * Put slot 0 past the first 4G of guest physical address to avoid collision + * with KVM-internal memslots. + */ +#define SLOT_0_GPA (4UL << 30) + /* The memory slot index demand page */ #define TEST_MEM_SLOT_INDEX 1
@@ -171,7 +177,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, int vcpus, pages += (2 * pages) / PTES_PER_PT; pages += ((2 * vcpus * vcpu_wss) >> PAGE_SHIFT_4K) / PTES_PER_PT;
- vm = vm_create(mode, pages, O_RDWR); + vm = _vm_create(mode, SLOT_0_GPA, pages, O_RDWR); kvm_vm_elf_load(vm, program_invocation_name, 0, 0); #ifdef __x86_64__ vm_create_irqchip(vm);
On Fri, Sep 27, 2019 at 09:18:28AM -0700, Ben Gardon wrote:
When handling page faults for many vCPUs during demand paging, KVM's MMU lock becomes highly contended. This series creates a test with a naive userfaultfd based demand paging implementation to demonstrate that contention. This test serves both as a functional test of userfaultfd and a microbenchmark of demand paging performance with a variable number of vCPUs and memory per vCPU.
The test creates N userfaultfd threads, N vCPUs, and a region of memory with M pages per vCPU. The N userfaultfd polling threads are each set up to serve faults on a region of memory corresponding to one of the vCPUs. Each of the vCPUs is then started, and touches each page of its disjoint memory region, sequentially. In response to faults, the userfaultfd threads copy a static buffer into the guest's memory. This creates a worst case for MMU lock contention as we have removed most of the contention between the userfaultfd threads and there is no time required to fetch the contents of guest memory.
Hi, Ben,
Even though I may not have enough MMU knowledge to say this... this of course looks like a good test, at least to me. I'm just curious whether you have plans to customize the userfaultfd handler in the future with this infrastructure?
Asked because IIUC with this series userfaultfd only plays a role in introducing a relatively ad hoc delay to page faults. In other words, I'm also curious what the numbers would look like (as you mentioned in your MMU rework cover letter) if you simply started hundreds of vcpus and did the same test, but used the default anonymous page faults rather than uffd page faults. I feel like even without uffd there could be huge contention already. Or did I miss anything important in your decision to use userfaultfd?
Thanks,
Hi Peter, You're absolutely right that we could demonstrate more contention by avoiding UFFD and just letting the kernel resolve page faults. I used UFFD in this test and benchmarking for the other MMU patch set because I believe it's a more realistic scenario. A simpler page access benchmark would be better for identifying further scaling problems within the MMU, but the only situation I can think of where that would be used is VM boot. However, we don't usually see many vCPUs touching memory all over the place on boot. In a migration or restore without demand paging, the memory would have to be pre-populated with the contents of guest memory and the KVM MMU fault handler wouldn't be taking a fault in get_user_pages. In the interest of eliminating the delay from UFFD, I will add an option to use anonymous page faults or prefault memory instead.
I don't have any plans to customize the UFFD implementation at the moment, but experimenting with UFFD strategies will be useful for building higher performance post-copy in QEMU and other userspaces in the future. Thank you for taking a look at these patches. Ben
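[For the prefault option mentioned above, a minimal sketch; prefault_mem() is a hypothetical helper, not code from this series. Instead of registering the region with userfaultfd, the host touches every host page before the vCPUs start, so vcpu_run() never takes a missing-page fault:

	static void prefault_mem(void *mem, uint64_t size)
	{
		uint64_t offset;

		/* Write one byte per host page to fault everything in. */
		for (offset = 0; offset < size; offset += host_page_size)
			((volatile char *)mem)[offset] = 0;
	}
]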
On Sun, Sep 29, 2019 at 12:23 AM Peter Xu peterx@redhat.com wrote:
[...]