On 09-03-25, 21:59, Sebastian Ene wrote:
On Fri, Mar 07, 2025 at 11:59:56AM +0530, Viresh Kumar wrote:
AFAIK, the broad idea is to implement two virtio communication paths from the pVM: one to the Linux host (via virtio-pci) and another to Trusty (via virtio-msg-ffa).
We already have a pVM to Host path (not over FF-A, but via pKVM APIs). What is new here / what do we want to do?
Yes, that's what we want to use for pVM to Host. The problem is the pVM to Trusty path.
Now looking at "dynamic mapping" section in [1] we are not sure if that will work fine for the end use case, pVM to trusty. It looks like the coco implementation will always end up using dma encrypt/decrypt when a pVM is running and share the memory with host, even when all we want to do is share with trusty.
We use coco to share pages from the protected guest -> host (from the swiotlb region). This establishes a channel with the host via bounce buffering, and the virtio-pci communication is done through this window.
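To spell out the path we rely on today, here is a rough paraphrase (not the actual in-tree code, and the function name is made up):

#include <linux/set_memory.h>
#include <linux/io.h>
#include <linux/mm.h>

/*
 * Rough paraphrase of how the swiotlb bounce window ends up shared
 * with the host on a protected guest.  Illustrative only.
 */
static int pvm_open_bounce_window(phys_addr_t start, size_t size)
{
	unsigned long vaddr = (unsigned long)phys_to_virt(start);

	/*
	 * set_memory_decrypted() is the generic "make this visible to
	 * the untrusted side" hook.  On a pKVM protected guest it ends
	 * up issuing MEM_SHARE hypercalls, i.e. the pages are shared
	 * with the *host*.  virtio-pci then bounces all DMA through
	 * this window.
	 */
	return set_memory_decrypted(vaddr, PAGE_ALIGN(size) >> PAGE_SHIFT);
}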
Right. We want to use that for pVM to Host communication.
Now, I don't understand why you need a special pool to allocate memory from; what allocation requirements do you have? Performance is impacted even if you use another pool of memory, because you have to page the memory in from the host first.
What do you mean by "page in from host first"?
I think there is some confusion here. The pool we are talking about is from the pVM's address space. Basically, we are adding a "reserved-memory" region in the guest's DT, so the guest can set aside some space for us, and we will use that for bounce-buffering too, via swiotlb.
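On the guest side that boils down to something like this (illustrative sketch only; the function name is made up, "memory-region" is just the standard phandle property, and this is not the code we have posted):

#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/ioport.h>
#include <linux/io.h>

/*
 * Look up the reserved-memory region the VMM describes in the guest
 * DT and map the whole thing, so it can back a separate bounce pool
 * for the Trusty path.
 */
static void *trusty_msg_map_pool(struct device_node *np, struct resource *res)
{
	struct device_node *rmem;

	rmem = of_parse_phandle(np, "memory-region", 0);
	if (!rmem)
		return NULL;

	if (of_address_to_resource(rmem, 0, res)) {
		of_node_put(rmem);
		return NULL;
	}
	of_node_put(rmem);

	/* Map the whole region up front; see the pre-mapping point below. */
	return memremap(res->start, resource_size(res), MEMREMAP_WB);
}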
Why can't you page_alloc & ffa_share with Trusty on demand?
We have tested that already and it works, roughly along the lines of the sketch below. With the reserved-mem approach we just want to pre-map the entire region for performance reasons.
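For reference, the on-demand path looks roughly like this. It is only a sketch: the function name is made up, and the FF-A field usage mirrors what other in-tree users (e.g. OP-TEE's ffa_abi) do, so please double-check the exact names against include/linux/arm_ffa.h.

#include <linux/arm_ffa.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Allocate one page on demand and FFA-share it with Trusty. */
static int share_page_with_trusty(struct ffa_device *ffa_dev,
				  struct page **pagep, u64 *handle)
{
	struct ffa_mem_region_attributes mem_attr = {
		.receiver = ffa_dev->vm_id,	/* Trusty's partition ID */
		.attrs = FFA_MEM_RW,
	};
	struct ffa_mem_ops_args args = {
		.use_txbuf = true,
		.attrs = &mem_attr,
		.nattrs = 1,
	};
	struct scatterlist sg;
	struct page *page;
	int ret;

	page = alloc_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	sg_init_one(&sg, page_address(page), PAGE_SIZE);
	args.sg = &sg;

	ret = ffa_dev->ops->mem_ops->memory_share(&args);
	if (ret) {
		__free_page(page);
		return ret;
	}

	*pagep = page;
	*handle = args.g_handle;	/* needed later for memory_reclaim() */
	return 0;
}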
Even if we leave that point aside, we will have the same problem with page_alloc and ffa_share. Won't the coco driver share that memory with the host too, via the dma decrypt callback? I haven't tested it yet (I don't have a pVM setup), but it looked like the dma decrypt callback will get called anyway, and that's what we are trying to avoid here.
I am not sure we need contiguous PA here; contiguous IPA should be sufficient. Armelle?
I think it is super important to know what we actually need here, because having PA-contiguous space in a guest is a whole new challenge by itself. I've been talking to Quentin about this, and it would be more approachable (from a uAPI perspective for the VMM) for pKVM once we switch over to memfd.
I don't see why we would need PA-contiguous space, but I would leave that for Armelle to answer.
Armelle?