Hey Sebastian,
Please see some more answers.
On Mon, Apr 28, 2025 at 3:50 AM Sebastian Ene sebastianene@google.com wrote:
On Wed, Apr 23, 2025 at 11:12:53PM -0700, Armelle Laine wrote:
Hello Armelle, folks
Wow, you were all waiting for my answer?! I am not sure it was worth waiting a month for; Viresh, next time please ping me on chat. The problem statement does not seem to be understood by all parties.
The goal we are seeking to achieve is for the Microdroid pVM to share virtqueues (and I/O buffers) with two isolated devices: the host and the TrustZone TEE. By isolated, we mean that the virtqueues shared between the pVM and the TZ TEE shall not be shared with the host.
However, given that:
1/ the Linux kernel assumes devices using DMA transfers don't support encrypted memory,
2/ by default all DMA transfers will trigger swiotlb bounce buffering to a decrypted buffer, and
3/ Microdroid uses the CoCo driver for memory encryption/decryption control,
that implies that any DMA mapping from any virtio device driver (the one interacting with the host, as well as the one interacting with the TEE) will go through a "decrypted" bounce buffer. Given that the CoCo driver shares all decrypted bounce buffers with the host, the TEE virtqueues would also be shared with the host.
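To make the failure mode concrete, here is a toy Python model of points 1/-3/ above (plain Python, not kernel code; all names are illustrative):

```python
# Toy model of the behaviour described above: any DMA mapping from a device
# that is assumed not to support encrypted memory is bounced through a
# "decrypted" buffer, and the pKVM CoCo hooks share every decrypted buffer
# with the host.

def dma_map(buf, host_shared, device_supports_encrypted_mem=False):
    """Model of dma_map_*: bounce unless the device is known to handle
    encrypted (private) guest memory -- False is the kernel's default."""
    if not device_supports_encrypted_mem:
        bounce = f"decrypted-bounce({buf})"
        host_shared.add(bounce)  # decrypt hook -> shared with the host
        return bounce
    return buf  # direct mapping, stays private to the guest

host_shared = set()
dma_map("host-vq", host_shared)  # virtio-pci queue: host sharing intended
dma_map("tee-vq", host_shared)   # virtio-msg-ffa queue: host sharing NOT intended

# Both virtqueues end up host-visible, which is exactly the problem.
print(sorted(host_shared))
# -> ['decrypted-bounce(host-vq)', 'decrypted-bounce(tee-vq)']
```

The point of the sketch is only that the bounce decision keys off the (default) per-device encryption capability, not off which backend the buffer is destined for.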
Is it because, under the hood, virtio_pci_legacy_probe ends up calling ioremap?
IIUC that's because these memory regions are allocated via the DMA HAL; it has nothing to do with virtio. Once a memory region is tied to a DMA transfer, the kernel will create a "decrypted" bounce buffer whenever it assumes that the device does not support memory encryption (which is the default).
How would you make the distinction, in the virtio driver code, between the device exposed by the host and the one exposed by TrustZone?
The initial idea was to use different memory regions hooked to different DMA HALs. However, pKVM's approach of hooking the share ABI onto the memory-decryption handler is not easily disabled per region or per device. Hence our email to seek a solution.
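For illustration, the per-device routing we were hoping for might look like this toy dispatch (hypothetical names throughout; today's single global decryption handler offers no such hook):

```python
# Hypothetical sketch: choose a sharing backend per bus/transport instead of
# one global memory-decryption handler. None of these names are a real API.

SHARE_BACKENDS = {
    "virtio-pci": "pkvm-share-with-host",    # host-facing device
    "virtio-msg-ffa": "ffa-share-with-tee",  # TEE-facing device
}

def share_backend_for(bus):
    # Unknown buses fall back to today's behaviour: share with the host.
    return SHARE_BACKENDS.get(bus, "pkvm-share-with-host")

print(share_backend_for("virtio-msg-ffa"))  # -> ffa-share-with-tee
print(share_backend_for("virtio-pci"))      # -> pkvm-share-with-host
```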
This is what we'd like to avoid.
Why can't you write a separate virtio_ffa.c that uses the Arm FF-A driver and the sharing infrastructure it exposes, and avoid going down the ioremap path?
The integration of virtio with FF-A has been done relying on the kernel's DMA infrastructure. See what Viresh described:

"We have implemented an FFA-specific dma-hal [2] to perform FFA memory sharing with trusty. With "reserved-mem" and "memory-region" DT entries (not sure if that is the final solution), we are able to allocate memory for the FFA device (which represents the bus for all the enumerated devices between trusty/host). This memory is shared with trusty at probe time (from the virtio-msg-ffa layer) and the DMA hal later allocates memory from there for coherent allocations and bounce buffers. This works just fine right now."
The issue is that the pKVM memory-decryption handler will trigger on all DMA transfers (because, by default, the kernel assumes devices do not support memory encryption).
There are, as you know, no restrictions on the TEE using CMA, but this is beside the point.
Hope that clarifies,
Thanks for the clarification, looking forward to understanding the complete picture.
I'll set up a meeting so we can make progress. Thanks, Armelle
Thanks, Sebastian
On Tue, Mar 18, 2025 at 3:52 AM Viresh Kumar viresh.kumar@linaro.org wrote:
Ping.
On 10-03-25, 10:36, Viresh Kumar wrote:
On 09-03-25, 21:59, Sebastian Ene wrote:
On Fri, Mar 07, 2025 at 11:59:56AM +0530, Viresh Kumar wrote:
AFAIK, the broad idea is to implement two virtio communication paths from the pVM: one to the Linux host (via virtio-pci) and another to Trusty (via virtio-msg-ffa).
We already have pVM to Host (not over FF-A, but with pKVM APIs). What is new here / what do we want to do?
Yes, that's what we want to use for pVM to Host. The problem is in pVM to Trusty.
Now, looking at the "dynamic mapping" section in [1], we are not sure if that will work fine for the end use case, pVM to Trusty. It looks like the CoCo implementation will always end up using DMA encrypt/decrypt when a pVM is running, and share the memory with the host, even when all we want to do is share with Trusty.
We use CoCo to share pages from the protected guest to the host (from the swiotlb region). This is to establish a channel with the host via bounce buffering, and the virtio-pci communication is done through this window.
Right. We want to use that for pVM to Host communication.
Now, I don't understand why you need a special pool to allocate memory; what allocation requirements do you have? Performance is impacted even if you use another pool of memory, because you have to page in from the host first.
What do you mean by "page in from host first" ?
I think there is some confusion here. The pool we are talking about is from the pVM's address space. Basically, we are adding a "reserved-memory" region in the guest's DT, so the guest can set aside some space for us, and we will use that for bounce buffering too, via swiotlb.
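For reference, a guest DT fragment along these lines might look like the sketch below. Addresses, sizes, and node names are invented, and the thread itself notes the reserved-mem entries may not be the final solution; "restricted-dma-pool" is the existing upstream reserved-memory binding for a per-device swiotlb pool, assumed here to be a reasonable fit:

```dts
/* Illustrative only -- addresses, sizes and node names are invented. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	trusty_shm: trusty-shm@c0000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0xc0000000 0x0 0x400000>; /* 4 MiB, example size */
	};
};

virtio_msg_ffa {
	/* Hypothetical node for the FF-A transport; bounce buffers and
	   coherent allocations would come from the pool above. */
	memory-region = <&trusty_shm>;
};
```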
Why can't you page_alloc & ffa_share with Trusty on demand?
We have tested that already and it works. With the reserved-mem approach we just want to pre-map the entire region, for performance reasons.
Even if we leave that point aside, we will have the same problem with page_alloc and ffa_share. Won't the CoCo driver share that memory with the host too, via the dma decrypt callback? I haven't tested it yet (I don't have a pVM setup), but it looks like the dma decrypt callback will get called anyway, and that's what we are looking to avoid here.
I am not sure if we need contiguous PA here; contiguous IPA should be sufficient. Armelle?
I think it is super important to know what we need, because having PA-contiguous space in a guest is a whole new challenge by itself. I've been talking to Quentin about this, and it would be more approachable (from a u-API perspective for the VMM) for pKVM when we switch over to memfd.
I don't see why we would need PA-contiguous space, but I will leave that for Armelle to answer.
Armelle ?
-- viresh
-- viresh