Hi Achin,

     Could you please tell me which hypervisor your TEE-agnostic driver works on? What do you think about a TEE-agnostic and hypervisor-agnostic solution for the Guest VM to access the OP-TEE? There are many hypervisors on all kinds of platforms, some of them closed-source. The performance might be a little poorer, but the convenience would be a big gain.


Best Regards,
Li Cheng

At 2020-11-23 18:07:03, "Achin Gupta" <achin.gupta@arm.com> wrote:
>Hi Li Cheng,
>
>On Sat, Nov 21, 2020 at 11:25:53PM +0800, lchina77 wrote:
>> Hi, Achin
>>
>> At 2020-11-21 02:32:47, "Achin Gupta" <Achin.Gupta@arm.com> wrote:
>> > Hi Li Cheng,
>> >
>> > Could you please elaborate the problem you are trying to solve?
>> >
>> > Is the issue that it is difficult to integrate an OP-TEE specific driver
>> > into a Hypervisor? You would need that in any case so that the Host VM
>> > can access the OP-TEE in the Secure world through the Hypervisor.
>> >
>> > In the call sequence you have described, it seems that communication
>> > between the Guest VM and OP-TEE will now go via the Host VM. Could you
>> > please help me understand how that helps.
>>
>> In my case, the TEE specific driver in the proprietary hypervisor ONLY
>> supports the Host VM to access the OP-TEE, while the Guest VM cannot. So
>> we propose the virtio solution for the Guest VM to access the OP-TEE.
>
>Thanks. I get it now.
>
>> > Routing Guest VM to TEE data via the Host seems quite opposite to the
>> > direction of travel where there is no trust between the Guest and Host,
>> > and the Hypervisor and Host, as far as address space isolation goes.
>> > The Host now gets dibs on every message between the Guest and TEE.
>>
>> Yes, but this is not a serious concern for us, because we are the
>> provider of both the Host VM and the Guest VM, and all the secret data
>> resides in the OP-TEE.
>
>Fair enough.
>
>> > Virtio (as it stands) either requires the Guest to make its address
>> > space visible to the Host or bounce buffers in the Hypervisor. The
>> > former does not fly if address space isolation is the security goal (as
>> > above). The latter could run into performance issues but I am not an
>> > expert on this.
>> >
>> > The approach we are working on is to replace a TEE specific driver in
>> > the Hypervisor with a driver that is agnostic of the TEE. This is
>> > achieved by standardising the role that the Hypervisor plays in
>> > communication between a Guest VM and the TEE. So you write the driver
>> > once and it works with all TEEs that follow the standard.
>>
>> Where does your TEE-agnostic driver run: the Hypervisor, the Host VM or
>> the Guest VM? If the Guest VM can access OP-TEE with the help of the
>> TEE-agnostic driver, is the address space isolation between the Host VM
>> and the Guest VM still guaranteed?
>
>The TEE-agnostic driver resides in:
>
>1. The Hypervisor in EL2. Its job is to,
>   - Enable a Guest VM to share/unshare memory with a TEE
>   - Forward SMC calls between a Guest VM and the TEE
>2. The Guest VM. Its job is to,
>   - Communicate with the driver in the Hypervisor to enable communication
>     and memory management with the TEE as stated above
>
>Address space isolation between the Guest and Host VMs is the Hypervisor's
>job anyways. The point of the TEE-agnostic driver is that memory management
>and message forwarding can be done in a generic way in the Hypervisor.
>
>> > Hence my original question, i.e. is this the problem you are looking to
>> > solve?
>>
>> We need to ensure that the Guest VM can access the OP-TEE without the
>> dependency on the TEE driver in the Hypervisor; it seems to be the same
>> goal as your TEE-agnostic driver.
>
>Our goal is to avoid the need to integrate a TEE specific driver in the
>Hypervisor and in TF-A while allowing any VM to access the TEE.
>
>In your case, it seems that the Hypervisor implements an access control
>policy where only the Host VM can talk to the TEE. The TEE-agnostic driver
>will not solve this problem as it could be subject to the same access
>control by the Hypervisor.
>
>In any case, a more efficient approach would have been to:
>
>1. Share memory between the Guest VM and OP-TEE for the data path.
>
>2. Use the OP-TEE driver in the Host VM to issue SMCs to run OP-TEE,
>   i.e. implement the control path.
>
>It looks like that is not possible either due to the restrictions imposed
>by the Hypervisor.
>
>cheers,
>Achin
>
>> > Cheers,
>> > Achin
>>
>> From: Tee-dev <tee-dev-bounces@lists.linaro.org> on behalf of lchina77 <lchina77@163.com>
>> Date: Friday, 20 November 2020 at 12:53
>> To: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>
>> Subject: [Tee-dev] virtio device for OP-TEE
>>
>> Hi,
>>
>> I don't know whether this is the right place to discuss; sorry for
>> bothering.
>>
>> OP-TEE OS already supports virtualization, but modification to the
>> hypervisor is also necessary. Proprietary Hypervisors are closed-source,
>> and some TEE OSes are also closed-source, such as QSEE from Qualcomm. So
>> maybe virtio-tee is an alternative solution for the Guest VM to access
>> the OP-TEE.
>>
>> In detail: CA in Guest VM --> libteec.so (Guest VM) --> tee driver
>> (Guest VM) --> optee_do_call_with_arg() --> invoke_fn() --> virtio-tee
>> driver --> virtio-tee device (Host VM) --> libteec.so (Host VM) --> tee
>> driver (Host VM) --> optee_do_call_with_arg() --> invoke_fn() --> TEE OS.
>>
>> I think the virtio-tee device must transfer the RPC to the virtio-tee
>> driver in the Guest VM, then to the tee-supplicant in the Guest VM, in
>> order to load the TAs in the Guest VM.
>>
>> In the Host VM, the tee-supplicant accesses the tee driver through
>> /dev/teepriv0, and the virtio-tee device accesses the tee driver through
>> /dev/teepriv1. So I wonder how the Host VM tee driver can dispatch the
>> RPC from OP-TEE to the correct receiver, the tee-supplicant or the
>> virtio-tee device?
>>
>> Best Regards,
>> Li Cheng
>>
>> IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.