Happy new year everyone,
I have a few questions regarding the design of the memory sharing over
Vsock work and am looking for some feedback before I spend time making the
changes.
The initial design (from Nov):
- Initially I implemented an FFA based DMA heap (no DMA ops used), which would
allocate memory and make direct FFA calls to send the memory to the other
endpoint.
- Userspace would open this heap, allocate a dma-buf and pass its FD over
Vsock.
- The Vsock layer would then call ->shmem() (a new callback I added to
dma-buf/heap, which sends the memory over FFA and returns the FFA bus
address) to get the metadata to be shared over Vsock.
The current design:
- In one of the calls Bertrand suggested not creating parallel paths for
sending memory over FFA; instead we should somehow reuse the existing
dma-ops for FFA (used for virtqueues and reserved-mem).
- I created a platform device (as a child of the FFA device) and assigned a new
set of DMA ops to it (the only difference from the reserved-mem ops is that we
skip swiotlb here and allocate fresh memory instead of using the reserved
mem). Sketch below.
- This pdev is used by the DMA heap to allocate memory using
dma_alloc_coherent(), which made sure everything got mapped correctly.
- I still need a dma-buf helper to get the metadata to send over Vsock. The
existing helper was renamed as s/shmem/shmem_data.
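For reference, the allocation path through the pdev looks roughly like this
(a sketch; "ffa-shmem", ffa_dev and ffa_dma_ops are placeholder names, error
handling omitted):

  #include <linux/dma-map-ops.h>
  #include <linux/dma-mapping.h>
  #include <linux/platform_device.h>

  struct platform_device *pdev;
  dma_addr_t dma_handle;
  void *vaddr;

  /* Child of the FFA device, carrying the new set of DMA ops (no
   * swiotlb, fresh allocations instead of reserved-mem). */
  pdev = platform_device_register_data(&ffa_dev->dev, "ffa-shmem",
                                       PLATFORM_DEVID_AUTO, NULL, 0);
  set_dma_ops(&pdev->dev, &ffa_dma_ops);

  /* The DMA heap then allocates through it; the FFA dma-ops take care
   * of mapping/sharing the memory correctly. */
  vaddr = dma_alloc_coherent(&pdev->dev, size, &dma_handle, GFP_KERNEL);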
The future design (that I have questions about):
- The FFA specific DMA heap I now have doesn't do anything special compared to
the system heap; the two are mostly identical.
- Which made me realize that I probably shouldn't add a new heap (Google can add
one later if they really want) and the solution should work with any heap /
dma-buf.
- So, userspace should allocate memory from the system heap, get a dma-buf from
it and send its FD.
- The vsock layer then should attach this dma-buf to a `struct device` somehow
and then call map_dma_buf() for the dma-buf. This requires the dma-ops of the
device to be set to the FFA based dma-ops, and then it should just work (see
the sketch after this list).
- The tricky point is finding that device struct (as Vsock can't get it from
the dma-buf or userspace).
- One way, I think (still needs exploring but should be possible) is to use the
struct device of the virtio-msg device over which Vsock is implemented. We can
set the dma-ops of the virtio-msg device accordingly.
- The system heap doesn't guarantee contiguous allocation though (which my FFA
heap did), so we will need to send a scatter-gather list over vsock
instead of just one address and size (which is what I am doing right now).
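Roughly, this is what I imagine the vsock side doing (a sketch; dev is
assumed to be the virtio-msg struct device with the FFA dma-ops set, and
send_region() is a made-up placeholder, error handling omitted):

  #include <linux/dma-buf.h>
  #include <linux/scatterlist.h>

  struct dma_buf *dbuf = dma_buf_get(fd); /* FD received from userspace */
  struct dma_buf_attachment *att = dma_buf_attach(dbuf, dev);
  struct sg_table *sgt = dma_buf_map_attachment(att, DMA_BIDIRECTIONAL);
  struct scatterlist *sg;
  int i;

  /* Each entry becomes one (address, size) pair in the vsock message;
   * send_region() is a hypothetical helper that adds one entry. */
  for_each_sgtable_dma_sg(sgt, sg, i)
      send_region(sg_dma_address(sg), sg_dma_len(sg));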
- Does this make sense? Or is there a use-case that this won't solve?
--
viresh
Hello,
I have some questions / feedback on the Xen side of virtio-msg.
AFAIU virtio-msg defines a protocol to deal with discovery (optional) and
configuration of PV devices, but it leaves undefined what a "memory
address" is.
In the Xen memory model with grants, each guest has its own memory space.
The frontend shares memory pages with the backend through granted pages
identified by grant references.
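In Linux guest code, sharing a single page looks roughly like this (a
sketch, error handling omitted; backend_domid and vaddr stand in for the
backend's domain id and the page's kernel address):

  #include <xen/grant_table.h>
  #include <xen/page.h>

  /* Grant the backend domain read/write access to one page; the
   * returned grant reference is what identifies the page to it. */
  int ref = gnttab_grant_foreign_access(backend_domid,
                                        virt_to_gfn(vaddr), 0);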
So in a design based on grants, virtio addresses can't (or at least
shouldn't) be guest physical addresses; they need to be something derived
from grants. An earlier design [1] forged an address from the grant
reference, but I feel that's not great, as it forces map+unmap cycles
for temporary buffers and thus has the same performance problem as Xen PV
drivers without persistent grants.
It would also make the address space very fragmented and often limited to
4KB buffers.
One idea that is already used by Xen displif (PV Display) is to have a
"gref directory" and describe the address space with it. The gref
directory is roughly an array of grant references (shared page infos)
that describes an address space starting at 0, where each page is
defined by a grant reference in the directory. That way, the backend can
freely keep all or part of the address space persistently mapped (or
eventually map it all at once), and the address space is also contiguous,
which would help with >4KB buffers.
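For illustration, the directory could be as simple as the following (a rough
sketch; displif's actual structure differs, e.g. it chains directory pages):

  #include <stdint.h>
  #include <xen/interface/grant_table.h> /* grant_ref_t */

  /* Sketch only: page i of the shared address space (i.e. offset
   * i * PAGE_SIZE) is backed by the page granted via grefs[i]. */
  struct gref_directory {
          uint32_t nr_grefs;   /* number of valid entries */
          grant_ref_t grefs[]; /* one grant reference per page */
  };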
Any thoughts?
[1]
https://static.sched.com/hosted_files/xen2021/bf/Thursday_2021-Xen-Summit-v…
--
Teddy Astie | Vates XCP-ng Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech
All,
I have updated the December wiki page with the notes from today.
Any fixes are appreciated.
https://linaro.atlassian.net/wiki/spaces/HVAC/pages/30510383105/2025-12+Upd…
The page has a link and password to the recording and transcript if you
want to see or hear the whole thing.
Thanks,
Bill
--
Bill Mills
Principal Technical Consultant, Linaro
+1-240-643-0836
TZ: US Eastern
Work Schedule: Tues/Wed/Thur
Hello,
I was able to successfully test memory sharing over Vsock, implemented over a
virtio-msg FFA transport. The memory is shared, as an FD, from Linux
userspace (via an FFA specific DMA heap) and is received in OP-TEE as a
buffer; OP-TEE prints the string written by Linux.
Steps to reproduce:
- Jens shared an OP-TEE based setup a few days back [1]. Reproduce it first.
- Migrate to my branches for the following repositories within that:
- build, optee_os, optee_test
- Path to my repos: git@github.com:vireshk/
- branch: vsock/shmem
- Migrate to my linux repo: git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux.git virtio/msg
- Repeat the same test from Jens's setup: "xtest 2005"
- The firmware side will print:
Hello from shared memory via FD!
This string was written by the Linux userspace in the memory provided by FFA
DMA HEAP.
Thanks.
--
viresh
[1] https://linaro.atlassian.net/browse/LEDGE-725
Hi Everyone,
Today, Arm is releasing the first public version of the Virtio Message Bus
over FF-A specification, version 1.0 at quality state Alpha 0.
The document is available for download here:
https://developer.arm.com/documentation/den0153/0100
Please contact me if you have any comments, remarks, questions or
improvements to suggest, so that I can handle them before the spec reaches
Beta quality.
Regards
Bertrand
Hi all,
This is the RFC I'm preparing to send to upstream QEMU for initial review.
I didn't see much precedent for Signed-off-by lines in cover letters, so
instead I'm thinking of explicitly CCing Bill and Alex and anyone else who
would like to be on copy; just let me know.
Changes from v1:
* Move VIRTIO_MSG_VENDOR_ID to virtio-msg.c
* Update to match recent spec changes (token + set/get_vqueue padding)
* Add endian conversion of dev_num and msg_size
* Add instructions for running on GPEX PCI x86 microVM and ARM virt
* Add missing spsc_queue.h
-------------------
This adds virtio-msg, a new virtio transport. Virtio-msg works by
exchanging messages over a bus and doesn't rely on trapping and emulation,
making it a good fit for a number of applications such as AMP, real-time
and safety applications.
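To give a flavour (illustrative only; virtio-msg-prot.h and the spec define
the real wire format), a message is a small packet along these lines:

  #include <stdint.h>

  /* Sketch, not the actual layout: dev_num and msg_size exist in the
   * protocol (little-endian on the wire), the rest is illustrative. */
  struct virtio_msg_hdr {
          uint8_t  type;     /* bus vs. device message, request/response */
          uint8_t  msg_id;   /* which operation, e.g. get/set_vqueue */
          uint16_t dev_num;  /* addressed device on the bus */
          uint16_t msg_size; /* total message size in bytes */
          /* followed by a message-specific payload */
  };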
Together with the new transport, this series adds a PCI device that
implements an AMP setup, much like it would look if two SoCs were using
virtio-msg across a PCI link.
Current limitations:
We only support a single device per bus (dev_num = 0).
Shared memory queue layout likely to change in the future.
Temporarily uses PCI Vendor Xilinx / Device 0x9039.
Missing documentation.
The virtio-msg spec:
https://github.com/Linaro/virtio-msg-spec/
QEMU with these patches:
https://github.com/edgarigl/qemu/tree/edgar/virtio-msg-rfc
Linux with virtio-msg support:
https://github.com/edgarigl/linux/tree/edgari/virtio-msg-6.17
To try it, first build Linux with the following enabled:
CONFIG_VIRTIO_MSG=y
CONFIG_VIRTIO_MSG_AMP=y
CONFIG_VIRTIO_MSG_AMP_PCI=y
Boot Linux in QEMU with a virtio-msg-amp-pci device, in this example
with a virtio-net device attached to it:
x86/q35 machine:
-device virtio-msg-amp-pci
-device virtio-net-device,netdev=n1,bus=/q35-pcihost/pcie.0/virtio-msg-amp-pci/vmsg.0
-netdev user,id=n1
x86/microvm or ARM virt machines:
-device virtio-msg-amp-pci
-device virtio-net-device,netdev=n1,bus=/gpex-pcihost/pcie.0/virtio-msg-amp-pci/vmsg.0/virtio-msg/virtio-msg-proxy-bus.0
-netdev user,id=n1
Cheers,
Edgar
Edgar E. Iglesias (4):
virtio: Introduce notify_queue
virtio: Add virtio_queue_get_rings
virtio: Add the virtio-msg transport
virtio-msg-bus: amp-pci: Add generic AMP PCI device
hw/misc/Kconfig | 7 +
hw/misc/meson.build | 1 +
hw/misc/virtio-msg-amp-pci.c | 324 ++++++++++++
hw/virtio/Kconfig | 4 +
hw/virtio/meson.build | 5 +
hw/virtio/virtio-msg-bus.c | 89 ++++
hw/virtio/virtio-msg.c | 598 ++++++++++++++++++++++
hw/virtio/virtio.c | 23 +
include/hw/virtio/spsc_queue.h | 213 ++++++++
include/hw/virtio/virtio-bus.h | 1 +
include/hw/virtio/virtio-msg-bus.h | 95 ++++
include/hw/virtio/virtio-msg-prot.h | 749 ++++++++++++++++++++++++++++
include/hw/virtio/virtio-msg.h | 45 ++
include/hw/virtio/virtio.h | 2 +
14 files changed, 2156 insertions(+)
create mode 100644 hw/misc/virtio-msg-amp-pci.c
create mode 100644 hw/virtio/virtio-msg-bus.c
create mode 100644 hw/virtio/virtio-msg.c
create mode 100644 include/hw/virtio/spsc_queue.h
create mode 100644 include/hw/virtio/virtio-msg-bus.h
create mode 100644 include/hw/virtio/virtio-msg-prot.h
create mode 100644 include/hw/virtio/virtio-msg.h
--
2.43.0
Hi all,
This is the RFC I'm preparing to send to upstream QEMU for initial review.
A couple of limitations:
I've not updated the protocol with the new msg_token field yet.
We only support a single device per bus (dev_num = 0).
The kernel driver only works as a module; when built into the kernel
it panics.
-------------------
This adds virtio-msg, a new virtio transport. Virtio-msg works by
exchanging messages over a bus and doesn't rely on trapping and emulation,
making it a good fit for a number of applications such as AMP, real-time
and safety applications.
Together with the new transport, this series adds a PCI device that
implements an AMP setup, much like it would look if two SoCs were using
virtio-msg across a PCI link.
The virtio-msg spec:
https://github.com/Linaro/virtio-msg-spec/
Linux with virtio-msg:
https://github.com/edgarigl/linux/tree/edgari/virtio-msg-6.17
To try it, first build Linux with the following as modules:
CONFIG_VIRTIO_MSG=m
CONFIG_VIRTIO_MSG_AMP=m
CONFIG_VIRTIO_MSG_AMP_PCI=m
Boot Linux in QEMU with a virtio-msg-amp-pci device, in this example
with a virtio-net device attached to it (x86/q35 machine):
-device virtio-msg-amp-pci
-device virtio-net-device,netdev=n1,bus=/q35-pcihost/pcie.0/virtio-msg-amp-pci/vmsg.0
-netdev user,id=n1
Modprobe:
modprobe virtio_msg_transport
modprobe virtio_msg_amp
modprobe virtio_msg_amp_pci
You should now see the virtio device.
Cheers,
Edgar
Edgar E. Iglesias (4):
virtio: Introduce notify_queue
virtio: Add virtio_queue_get_rings
virtio: Add the virtio-msg transport
virtio-msg-bus: amp-pci: Add generic AMP PCI device
hw/misc/Kconfig | 7 +
hw/misc/meson.build | 1 +
hw/misc/virtio-msg-amp-pci.c | 324 ++++++++++++
hw/virtio/Kconfig | 4 +
hw/virtio/meson.build | 5 +
hw/virtio/virtio-msg-bus.c | 89 ++++
hw/virtio/virtio-msg.c | 596 ++++++++++++++++++++++
hw/virtio/virtio.c | 23 +
include/hw/virtio/virtio-bus.h | 1 +
include/hw/virtio/virtio-msg-bus.h | 95 ++++
include/hw/virtio/virtio-msg-prot.h | 747 ++++++++++++++++++++++++++++
include/hw/virtio/virtio-msg.h | 45 ++
include/hw/virtio/virtio.h | 2 +
13 files changed, 1939 insertions(+)
create mode 100644 hw/misc/virtio-msg-amp-pci.c
create mode 100644 hw/virtio/virtio-msg-bus.c
create mode 100644 hw/virtio/virtio-msg.c
create mode 100644 include/hw/virtio/virtio-msg-bus.h
create mode 100644 include/hw/virtio/virtio-msg-prot.h
create mode 100644 include/hw/virtio/virtio-msg.h
--
2.43.0