On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:
On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:
Hi,
I fail to see any common ground for xen-zcopy and udmabuf ...
Does the above mean you can assume that xen-zcopy and udmabuf can co-exist as two different solutions?
Well, udmabuf route isn't fully clear yet, but yes.
See also gvt (intel vgpu), where the hypervisor interface is abstracted away into a separate kernel module even though most of the actual vgpu emulation code is common.
Thank you for your input, I'm just trying to figure out where the three z-copy solutions intersect and how much they overlap.
And what about hyper-dmabuf?
The xen-zcopy solution is fundamentally pretty similar to hyper_dmabuf in terms of these core sharing features:
1. the sharing process - import a prime/dma-buf from the producer -> extract the underlying pages and get those shared -> return references for the shared pages (see the sketch right after this list)
2. the page sharing mechanism - it uses the Xen grant table.
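To make that flow concrete, here is a rough sketch of how a driver could extract the pages of an imported dma-buf and grant them to a peer domain. The function name share_dmabuf_pages and the overall structure are made up for illustration only; this is not the actual xen-zcopy code:

/*
 * Hypothetical sketch, not the actual xen-zcopy code: import a dma-buf,
 * walk its pages and grant each page to the importing domain.  The caller
 * gets back an array of grant references that can be handed to user space.
 */
#include <linux/dma-buf.h>
#include <linux/scatterlist.h>
#include <xen/grant_table.h>
#include <xen/page.h>

static int share_dmabuf_pages(struct device *dev, int dmabuf_fd,
			      domid_t peer_domid, grant_ref_t *refs,
			      unsigned int nr_refs)
{
	struct dma_buf *dbuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	struct sg_page_iter iter;
	unsigned int i = 0;
	int ret;

	dbuf = dma_buf_get(dmabuf_fd);
	if (IS_ERR(dbuf))
		return PTR_ERR(dbuf);

	attach = dma_buf_attach(dbuf, dev);
	if (IS_ERR(attach)) {
		ret = PTR_ERR(attach);
		goto put_buf;
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		ret = PTR_ERR(sgt);
		goto detach;
	}

	/* One grant reference per page of the buffer. */
	for_each_sg_page(sgt->sgl, &iter, sgt->nents, 0) {
		unsigned long gfn = xen_page_to_gfn(sg_page_iter_page(&iter));
		int ref;

		if (i == nr_refs) {
			ret = -ENOSPC;
			goto unwind;
		}

		ref = gnttab_grant_foreign_access(peer_domid, gfn, 0);
		if (ref < 0) {
			ret = ref;
			goto unwind;
		}
		refs[i++] = ref;
	}

	/*
	 * The attachment and mapping must stay alive for as long as the
	 * pages are granted; the matching teardown path is omitted here.
	 */
	return 0;

unwind:
	while (i--)
		gnttab_end_foreign_access(refs[i], 0, 0UL);
	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
detach:
	dma_buf_detach(dbuf, attach);
put_buf:
	dma_buf_put(dbuf);
	return ret;
}

The importing domain would then map those grant references with gnttab_map_refs (or build a GEM object around the mapped pages) to get zero-copy access to the producer's buffer.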
And to give you a quick summary of the differences between the two implementations as far as I understand them (please correct me if I am wrong, Oleksandr):
1. xen-zcopy is DRM specific - it can import only a DRM prime buffer, while hyper_dmabuf can export any dma-buf regardless of its originator
2. xen-zcopy doesn't seem to have dma-buf synchronization between the two VMs, while hyper_dmabuf (what danvet called remote dma-buf API sharing) sends a synchronization message to the exporting VM.
3. 1-level references - when using the grant table for sharing pages, there will be the same number of refs (8 bytes each) as shared pages, and in xen-zcopy's case all of them are passed to user space to be shared with the importing VM. Compared to this, hyper_dmabuf does multi-level addressing to generate only one reference id that represents all shared pages (illustrated in the sketch right after this list).
4. inter-VM messaging (hyper_dmabuf only) - hyper_dmabuf has inter-VM message communication defined for dma-buf synchronization and for exchanging private data (the meta info that Matt Roper mentioned).
5. driver-to-driver notification (hyper_dmabuf only) - the importing VM gets notified when a new dma-buf is exported from the other VM - a uevent can optionally be generated when this happens.
6. structure - hyper_dmabuf aims to provide a generic solution for inter-domain dma-buf sharing for most hypervisors, which is why it has two layers, as mattrope mentioned: a front-end that contains the standard API and a back-end that is specific to the hypervisor.
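Point 3 deserves a small illustration. The following is my own sketch of the concept, not hyper_dmabuf's actual layout or code: pack the per-page grant references into extra "indirection" pages, grant those pages as well, and hand the importer only the single top-level reference.

/*
 * Hypothetical sketch of the multi-level reference idea, not the actual
 * hyper_dmabuf implementation: pack data-page grant refs into level-1
 * pages, grant those, record their refs in a top-level page and grant
 * only that one to the peer domain.
 */
#include <linux/kernel.h>
#include <linux/gfp.h>
#include <xen/grant_table.h>
#include <xen/page.h>

#define REFS_PER_PAGE	(PAGE_SIZE / sizeof(grant_ref_t))

/* Returns the single top-level grant reference, or a negative error. */
static int share_refs_via_single_ref(domid_t peer_domid,
				     const grant_ref_t *data_refs,
				     unsigned int nr_refs)
{
	unsigned int nr_lvl1 = DIV_ROUND_UP(nr_refs, REFS_PER_PAGE);
	grant_ref_t *top;
	unsigned int i, j;

	if (nr_lvl1 > REFS_PER_PAGE)
		return -E2BIG;		/* would need one more level */

	top = (grant_ref_t *)get_zeroed_page(GFP_KERNEL);
	if (!top)
		return -ENOMEM;

	for (i = 0; i < nr_lvl1; i++) {
		grant_ref_t *lvl1 = (grant_ref_t *)get_zeroed_page(GFP_KERNEL);
		int ref;

		if (!lvl1)
			return -ENOMEM;	/* unwinding omitted in this sketch */

		/* Copy up to one page worth of data-page references. */
		for (j = 0; j < REFS_PER_PAGE; j++) {
			unsigned int idx = i * REFS_PER_PAGE + j;

			if (idx >= nr_refs)
				break;
			lvl1[j] = data_refs[idx];
		}

		/* Grant the level-1 page itself, read-only, to the importer. */
		ref = gnttab_grant_foreign_access(peer_domid,
						  virt_to_gfn(lvl1), 1);
		if (ref < 0)
			return ref;	/* unwinding omitted in this sketch */
		top[i] = ref;
	}

	/* Only this one reference has to travel to the importing domain. */
	return gnttab_grant_foreign_access(peer_domid, virt_to_gfn(top), 1);
}

The importer maps the top-level page, reads the level-1 references out of it, maps those, and finally maps the data pages. That way the amount of data that has to cross the VM boundary stays constant regardless of buffer size, instead of growing with the number of pages as in the 1-level scheme.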
No idea, didn't look at it in detail.
Looks pretty complex from a distant view. Maybe because it tries to build a communication framework using dma-bufs instead of a simple dma-buf passing mechanism.
We started with simple dma-buf sharing but realized there are many things we need to consider in real use-cases, so we added communication, notification and dma-buf synchronization, then restructured it into a front-end and a back-end (this made things more complicated...) since Xen was not our only target. Also, we thought passing the reference for the buffer (hyper_dmabuf_id) around is not secure, so we added the uevent mechanism later.
Yes, I am looking at it now, trying to figure out the full story and its implementation. BTW, the Intel guys were about to share some test application for hyper-dmabuf; maybe I have missed it. It could probably better explain the use-cases and the complexity they have in hyper-dmabuf.
One example is actually on GitHub. If you want to take a look at it, please visit:
https://github.com/downor/linux_hyper_dmabuf_test/tree/xen/simple_export
Like xen-zcopy, it seems to depend on the idea that, because the hypervisor manages all memory, it is easy for guests to share pages with the help of the hypervisor.
So, for xen-zcopy we were not trying to make it generic; it just solves the display (dumb buffer) zero-copy use-cases for Xen. We implemented it as a DRM helper driver because we can't see any other use-cases as of now. For example, we also have a Xen para-virtualized sound driver, but its buffer memory usage is not comparable to what display wants, and it works somewhat differently (e.g. there is no "frame done" event, so one can't tell when the sound buffer can be "flipped"). At the same time, we do not use virtio-gpu, so this could probably be one more candidate for shared dma-bufs some day.
Which simply isn't the case on kvm.
hyper-dmabuf and xen-zcopy could maybe share code, or hyper-dmabuf could build on top of xen-zcopy.
Hm, I can imagine that: xen-zcopy could be library code for hyper-dmabuf in terms of implementing all that page sharing fun in multiple directions, e.g. Host->Guest, Guest->Host, Guest<->Guest. But I'll let Matt and Dongwon comment on that.
I think we can definitely collaborate. Especially since maybe we are using some outdated sharing/grant-table mechanism in our Xen backend (thanks for bringing that up, Oleksandr). However, the question is: once we collaborate somehow, can xen-zcopy's use-case use the standard API that hyper_dmabuf provides? I don't think we need different IOCTLs that do the same thing in the final solution.
cheers, Gerd
Thank you, Oleksandr
P.S. Sorry for making your original mail thread discuss things much broader than your RFC...