On Thu, Dec 08, 2022 at 12:37:23AM +0000, Oliver Upton wrote:
On Thu, Dec 08, 2022 at 12:24:20AM +0000, Sean Christopherson wrote:
On Thu, Dec 08, 2022, Oliver Upton wrote:
On Wed, Dec 07, 2022 at 11:57:27PM +0000, Sean Christopherson wrote:
diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
index 92d3a91153b6..95d22cfb7b41 100644
--- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c
@@ -609,8 +609,13 @@ static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
 			    data_size / guest_page_size,
 			    p->test_desc->data_memslot_flags);
 	vm->memslots[MEM_REGION_TEST_DATA] = TEST_DATA_MEMSLOT;
+}
+
+static void setup_ucall(struct kvm_vm *vm)
+{
+	struct userspace_mem_region *region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);

-	ucall_init(vm, data_gpa + data_size);
+	ucall_init(vm, region->region.guest_phys_addr + region->region.memory_size);
 }
Isn't there a hole after CODE_AND_DATA_MEMSLOT? I.e. after memslot 0?
Sure, but that's only guaranteed in the PA space.
The reason I ask is because if so, then we can do the temporarily heinous, but hopefully forward looking thing of adding a helper to wrap kvm_vm_elf_load() + ucall_init().
E.g. I think we can do this immediately, and then at some point in the 6.2 cycle add a dedicated region+memslot for the ucall MMIO page.
Even still, that's just a kludge to make ucalls work. We have other MMIO devices (GIC distributor, for example) that work by chance since nothing conflicts with the constant GPAs we've selected in the tests.
I'd rather we go down the route of having an address allocator for both the VA and PA spaces to provide carveouts at runtime.
Aren't those two separate issues? The PA, a.k.a. memslots space, can be solved by allocating a dedicated memslot, i.e. doesn't need a carveout. At worst, collisions will yield very explicit asserts, which IMO is better than whatever might go wrong with a carveout.
Perhaps the use of the term 'carveout' wasn't right here.
What I'm suggesting is we cannot rely on KVM memslots alone to act as an allocator for the PA space. KVM can provide devices to the guest that aren't represented as memslots. If we're trying to fix PA allocations anyway, why not make it generic enough to suit the needs of things beyond ucalls?
One extra bit of information: on arm, MMIO is any guest access to an address (within the PA space) that isn't backed by a memslot. That's not the same as x86, where MMIO exits also come from writes to read-only memslots. No idea what other arches do.
-- Thanks, Oliver
I think we should take these proposed changes now and then move to an ideal solution later. Concretely, I propose:
1. Add an arch-specific API for allocating MMIO physical ranges: vm_arch_mmio_region_add(vm, npages). The x86 version creates a read-only memslot, and the arm one allocates physical space without a memslot backing it.
2. Change all IO-related users (including ucall) to use vm_arch_mmio_region_add(), e.g.:

	pa = vm_arch_mmio_region_add(vm, npages);
	ucall_init(vm, pa);
page_fault_test needs to be adapted to use vm_arch_mmio_region_add() as well.
Thanks, Ricardo