Hi everyone,
Over the last few weeks, I have made numerous changes to the design and code to get it ready for mainline. I have now prepared the initial RFC patchset to be sent to LKML.
Please have a look and provide your feedback.
-------------------------8<-------------------------
Hello,
This RFC series introduces support for a new Virtio transport type: "virtio-msg", as proposed in [1]. Unlike existing transport types like virtio-mmio or virtio-pci which rely on memory-mapped registers, virtio-msg implements transport operations via structured message exchanges using standard virtqueues.
This series includes:
- Core virtio-msg transport support.
- Two message transport bus implementations:
  - virtio-msg-ffa: based on the Firmware Framework for Arm (FF-A).
  - virtio-msg-loopback: a loopback device for testing and validation.
The code is available for reference at [3], and the virtio-msg loopback test setup is explained at [2].
This series is based on v6.16-rc6 and depends on commit [4] from linux-next.
### Memory Mapping and Reserved Memory Usage
The first two patches enhance the reserved-memory subsystem to support attaching `struct device`s that do not originate from DT nodes, which is essential for virtual or dynamically discovered devices like the FF-A or loopback buses.
This reserved-memory region enables:
- Restricting all DMA-coherent and streaming DMA memory to a controlled range.
- Allowing the remote endpoint to pre-map this memory, reducing runtime overhead.
- Preventing unintentional data leaks, since memory is typically shared at page granularity.
- For the loopback bus, restricting the portion of kernel memory that can be mapped into userspace, improving safety.
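For reference, such a carve-out would typically be described with a standard reserved-memory node when it does come from DT. The node name, address, and size below are illustrative, not taken from this series:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Hypothetical shared-memory pool for a virtio-msg bus */
	virtio_msg_shm: virtio-msg-shm@80000000 {
		compatible = "shared-dma-pool";
		reg = <0x0 0x80000000 0x0 0x400000>;	/* 4 MiB */
		no-map;
	};
};
```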
Feedback on the design, API, and approach is welcome.
-- Viresh
[1] https://lore.kernel.org/all/20250620224426.3923880-2-bill.mills@linaro.org/
[2] https://linaro.atlassian.net/wiki/spaces/HVAC/pages/30104092673
[3] git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux.git virtio/msg-rfc-v1
[4] From linux-next: 5be53630b4f0 ("virtio-mmio: Remove virtqueue list from mmio device")
Viresh Kumar (6):
  of: reserved-memory: Add reserved_mem_device_init()
  of: reserved-memory: Add of_reserved_mem_lookup_by_name
  virtio: Add support for virtio-msg transport
  virtio-msg: Add optional userspace interface for message I/O
  virtio-msg: Add support for FF-A (Firmware Framework for Arm) bus
  virtio-msg: Add support for loopback bus
 MAINTAINERS                          |   7 +
 drivers/of/of_reserved_mem.c         |  91 +++--
 drivers/virtio/Kconfig               |  34 ++
 drivers/virtio/Makefile              |   5 +
 drivers/virtio/virtio_msg.c          | 546 +++++++++++++++++++++++++++
 drivers/virtio/virtio_msg.h          |  88 +++++
 drivers/virtio/virtio_msg_ffa.c      | 501 ++++++++++++++++++++++++
 drivers/virtio/virtio_msg_loopback.c | 323 ++++++++++++++++
 drivers/virtio/virtio_msg_user.c     | 119 ++++++
 include/linux/of_reserved_mem.h      |  13 +
 include/uapi/linux/virtio_msg.h      | 221 +++++++++++
 include/uapi/linux/virtio_msg_ffa.h  |  94 +++++
 include/uapi/linux/virtio_msg_lb.h   |  22 ++
 13 files changed, 2040 insertions(+), 24 deletions(-)
 create mode 100644 drivers/virtio/virtio_msg.c
 create mode 100644 drivers/virtio/virtio_msg.h
 create mode 100644 drivers/virtio/virtio_msg_ffa.c
 create mode 100644 drivers/virtio/virtio_msg_loopback.c
 create mode 100644 drivers/virtio/virtio_msg_user.c
 create mode 100644 include/uapi/linux/virtio_msg.h
 create mode 100644 include/uapi/linux/virtio_msg_ffa.h
 create mode 100644 include/uapi/linux/virtio_msg_lb.h
This adds a reserved_mem_device_init() helper to attach the specified reserved-memory region to a device.

This is required to attach a reserved-memory region to a non-DT device.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/of/of_reserved_mem.c    | 64 ++++++++++++++++++++-------------
 include/linux/of_reserved_mem.h |  7 ++++
 2 files changed, 47 insertions(+), 24 deletions(-)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index 77016c0cc296..e0a86c3fa656 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -606,6 +606,45 @@ struct rmem_assigned_device { static LIST_HEAD(of_rmem_assigned_device_list); static DEFINE_MUTEX(of_rmem_assigned_device_mutex);
+/** + * reserved_mem_device_init() - assign reserved memory region to given device + * @dev: Pointer to the device to configure + * @rmem: Pointer to the reserved memory region + * + * This function assigns the @rmem reserved memory region to the @dev device. + * + * Returns error code or zero on success. + */ +int reserved_mem_device_init(struct device *dev, struct reserved_mem *rmem) +{ + struct rmem_assigned_device *rd; + int ret; + + if (!dev || !rmem || !rmem->ops || !rmem->ops->device_init) + return -EINVAL; + + rd = kmalloc(sizeof(*rd), GFP_KERNEL); + if (!rd) + return -ENOMEM; + + ret = rmem->ops->device_init(rmem, dev); + if (ret == 0) { + rd->dev = dev; + rd->rmem = rmem; + + mutex_lock(&of_rmem_assigned_device_mutex); + list_add(&rd->list, &of_rmem_assigned_device_list); + mutex_unlock(&of_rmem_assigned_device_mutex); + + dev_info(dev, "assigned reserved memory node %s\n", rmem->name); + } else { + kfree(rd); + } + + return ret; +} +EXPORT_SYMBOL_GPL(reserved_mem_device_init); + /** * of_reserved_mem_device_init_by_idx() - assign reserved memory region to * given device @@ -624,10 +663,8 @@ static DEFINE_MUTEX(of_rmem_assigned_device_mutex); int of_reserved_mem_device_init_by_idx(struct device *dev, struct device_node *np, int idx) { - struct rmem_assigned_device *rd; struct device_node *target; struct reserved_mem *rmem; - int ret;
if (!np || !dev) return -EINVAL; @@ -644,28 +681,7 @@ int of_reserved_mem_device_init_by_idx(struct device *dev, rmem = of_reserved_mem_lookup(target); of_node_put(target);
- if (!rmem || !rmem->ops || !rmem->ops->device_init) - return -EINVAL; - - rd = kmalloc(sizeof(struct rmem_assigned_device), GFP_KERNEL); - if (!rd) - return -ENOMEM; - - ret = rmem->ops->device_init(rmem, dev); - if (ret == 0) { - rd->dev = dev; - rd->rmem = rmem; - - mutex_lock(&of_rmem_assigned_device_mutex); - list_add(&rd->list, &of_rmem_assigned_device_list); - mutex_unlock(&of_rmem_assigned_device_mutex); - - dev_info(dev, "assigned reserved memory node %s\n", rmem->name); - } else { - kfree(rd); - } - - return ret; + return reserved_mem_device_init(dev, rmem); } EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_idx);
diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h index f573423359f4..3933f1d39e9a 100644 --- a/include/linux/of_reserved_mem.h +++ b/include/linux/of_reserved_mem.h @@ -37,6 +37,7 @@ int of_reserved_mem_device_init_by_idx(struct device *dev, int of_reserved_mem_device_init_by_name(struct device *dev, struct device_node *np, const char *name); +int reserved_mem_device_init(struct device *dev, struct reserved_mem *rmem); void of_reserved_mem_device_release(struct device *dev);
struct reserved_mem *of_reserved_mem_lookup(struct device_node *np); @@ -64,6 +65,12 @@ static inline int of_reserved_mem_device_init_by_name(struct device *dev, return -ENOSYS; }
+static inline int reserved_mem_device_init(struct device *dev, + struct reserved_mem *rmem) +{ + return -ENOSYS; +} + static inline void of_reserved_mem_device_release(struct device *pdev) { }
static inline struct reserved_mem *of_reserved_mem_lookup(struct device_node *np)
This adds an of_reserved_mem_lookup_by_name() helper to get a reserved-memory region based on the DT node name.
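A typical caller would pair this helper with reserved_mem_device_init() from the previous patch: look the region up by name, then attach it to a bus-created (non-DT) device. The following is an illustrative kernel-context sketch, not runnable standalone; the function and node names are hypothetical:

```c
/* Hedged sketch: attach a named reserved-memory region to a
 * dynamically created device, e.g. from a virtual bus probe path. */
static int example_attach_rmem(struct device *dev)
{
	struct reserved_mem *rmem;
	int ret;

	/* Hypothetical node name under /reserved-memory */
	rmem = of_reserved_mem_lookup_by_name("virtio-msg-shm");
	if (IS_ERR(rmem))
		return PTR_ERR(rmem);

	/* Restrict the device's DMA allocations to this region */
	ret = reserved_mem_device_init(dev, rmem);
	if (ret)
		dev_err(dev, "failed to assign reserved memory (%d)\n", ret);

	return ret;
}
```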
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/of/of_reserved_mem.c    | 27 +++++++++++++++++++++++++++
 include/linux/of_reserved_mem.h |  6 ++++++
 2 files changed, 33 insertions(+)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c index e0a86c3fa656..b94c3b1d14b6 100644 --- a/drivers/of/of_reserved_mem.c +++ b/drivers/of/of_reserved_mem.c @@ -758,6 +758,33 @@ struct reserved_mem *of_reserved_mem_lookup(struct device_node *np) } EXPORT_SYMBOL_GPL(of_reserved_mem_lookup);
+/** + * of_reserved_mem_lookup_by_name() - acquire reserved_mem from node name + * @name: node name + * + * This function allows drivers to acquire a reference to the reserved_mem + * struct based on a reserved-memory node name. + * + * Returns a reserved_mem reference, or NULL on error. + */ +struct reserved_mem *of_reserved_mem_lookup_by_name(const char *name) +{ + struct device_node *np __free(device_node) = + of_find_node_by_path("/reserved-memory"); + struct device_node *child __free(device_node) = NULL; + + if (!np) + return ERR_PTR(-ENODEV); + + for_each_child_of_node(np, child) { + if (of_node_name_eq(child, name)) + return of_reserved_mem_lookup(child); + } + + return ERR_PTR(-ENODEV); +} +EXPORT_SYMBOL_GPL(of_reserved_mem_lookup_by_name); + /** * of_reserved_mem_region_to_resource() - Get a reserved memory region as a resource * @np: node containing 'memory-region' property diff --git a/include/linux/of_reserved_mem.h b/include/linux/of_reserved_mem.h index 3933f1d39e9a..d6d187597b7f 100644 --- a/include/linux/of_reserved_mem.h +++ b/include/linux/of_reserved_mem.h @@ -41,6 +41,7 @@ int reserved_mem_device_init(struct device *dev, struct reserved_mem *rmem); void of_reserved_mem_device_release(struct device *dev);
struct reserved_mem *of_reserved_mem_lookup(struct device_node *np); +struct reserved_mem *of_reserved_mem_lookup_by_name(const char *name); int of_reserved_mem_region_to_resource(const struct device_node *np, unsigned int idx, struct resource *res); int of_reserved_mem_region_to_resource_byname(const struct device_node *np, @@ -78,6 +79,11 @@ static inline struct reserved_mem *of_reserved_mem_lookup(struct device_node *np return NULL; }
+static inline struct reserved_mem *of_reserved_mem_lookup_by_name(const char *name) +{ + return NULL; +} + static inline int of_reserved_mem_region_to_resource(const struct device_node *np, unsigned int idx, struct resource *res)
This introduces support for a new Virtio transport type: "virtio-msg". Unlike existing transports such as virtio-mmio and virtio-pci, which rely on memory-mapped registers, virtio-msg implements transport operations via structured message exchanges, while data continues to flow over standard virtqueues.
It separates bus-level functionality (e.g., device enumeration, hotplug events) from device-specific operations (e.g., feature negotiation, virtqueue setup), ensuring that a single, generic transport layer can be reused across multiple bus implementations (the Arm Firmware Framework (FF-A), IPC, and others).
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 MAINTAINERS                     |   7 +
 drivers/virtio/Kconfig          |   7 +
 drivers/virtio/Makefile         |   1 +
 drivers/virtio/virtio_msg.c     | 546 ++++++++++++++++++++++++++++++++
 drivers/virtio/virtio_msg.h     |  56 ++++
 include/uapi/linux/virtio_msg.h | 221 +++++++++++++
 6 files changed, 838 insertions(+)
 create mode 100644 drivers/virtio/virtio_msg.c
 create mode 100644 drivers/virtio/virtio_msg.h
 create mode 100644 include/uapi/linux/virtio_msg.h
diff --git a/MAINTAINERS b/MAINTAINERS index 60bba48f5479..6fc644e405e6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -26355,6 +26355,13 @@ W: https://virtio-mem.gitlab.io/ F: drivers/virtio/virtio_mem.c F: include/uapi/linux/virtio_mem.h
+VIRTIO MSG TRANSPORT +M: Viresh Kumar viresh.kumar@linaro.org +L: virtualization@lists.linux.dev +S: Maintained +F: drivers/virtio/virtio_msg* +F: include/uapi/linux/virtio_msg* + VIRTIO PMEM DRIVER M: Pankaj Gupta pankaj.gupta.linux@gmail.com L: virtualization@lists.linux.dev diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig index 6db5235a7693..690ac98850b6 100644 --- a/drivers/virtio/Kconfig +++ b/drivers/virtio/Kconfig @@ -171,6 +171,13 @@ config VIRTIO_MMIO_CMDLINE_DEVICES
If unsure, say 'N'.
+config VIRTIO_MSG + tristate + select VIRTIO + help + This enables support for Virtio message transport. This option is + selected by any driver which implements the virtio message bus. + config VIRTIO_DMA_SHARED_BUFFER tristate depends on DMA_SHARED_BUFFER diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile index eefcfe90d6b8..3eff8ca72446 100644 --- a/drivers/virtio/Makefile +++ b/drivers/virtio/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_VIRTIO_ANCHOR) += virtio_anchor.o obj-$(CONFIG_VIRTIO_PCI_LIB) += virtio_pci_modern_dev.o obj-$(CONFIG_VIRTIO_PCI_LIB_LEGACY) += virtio_pci_legacy_dev.o obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o +obj-$(CONFIG_VIRTIO_MSG) += virtio_msg.o obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o diff --git a/drivers/virtio/virtio_msg.c b/drivers/virtio/virtio_msg.c new file mode 100644 index 000000000000..207fa1f18bf9 --- /dev/null +++ b/drivers/virtio/virtio_msg.c @@ -0,0 +1,546 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Virtio message transport. + * + * Copyright (C) 2025 Google LLC and Linaro. + * Viresh Kumar viresh.kumar@linaro.org + * + * The virtio-msg transport encapsulates virtio operations as discrete message + * exchanges rather than relying on PCI or memory-mapped I/O regions. It + * separates bus-level functionality (e.g., device enumeration, hotplug events) + * from device-specific operations (e.g., feature negotiation, virtqueue setup), + * ensuring that a single, generic transport layer can be reused across multiple + * bus implementations (like ARM Firmware Framework (FF-A), IPC, etc.). + * + * This file implements the generic Virtio message transport layer. 
+ */ + +#define pr_fmt(fmt) "virtio-msg: " fmt + +#include <linux/err.h> +#include <linux/list.h> +#include <linux/slab.h> +#include <linux/virtio.h> +#include <linux/virtio_config.h> +#include <linux/virtio_ring.h> +#include <uapi/linux/virtio_msg.h> + +#include "virtio_msg.h" + +#define to_virtio_msg_device(_dev) \ + container_of(_dev, struct virtio_msg_device, vdev) + +static void msg_prepare(struct virtio_msg *vmsg, bool bus, u8 msg_id, + u16 dev_id, u16 payload_size) +{ + u16 size = sizeof(*vmsg) + payload_size; + + memset(vmsg, 0, size); + + if (bus) { + vmsg->type = VIRTIO_MSG_TYPE_BUS; + } else { + vmsg->type = VIRTIO_MSG_TYPE_TRANSPORT; + vmsg->dev_id = cpu_to_le16(dev_id); + } + + vmsg->msg_id = msg_id; + vmsg->msg_size = cpu_to_le16(size); +} + +static void transport_msg_prepare(struct virtio_msg_device *vmdev, u8 msg_id, + u16 payload_size) +{ + msg_prepare(vmdev->request, false, msg_id, vmdev->dev_id, payload_size); +} + +void virtio_msg_prepare(struct virtio_msg *vmsg, u8 msg_id, u16 payload_size) +{ + msg_prepare(vmsg, true, msg_id, 0, payload_size); +} +EXPORT_SYMBOL_GPL(virtio_msg_prepare); + +static int virtio_msg_xfer(struct virtio_msg_device *vmdev) +{ + int ret; + + memset(vmdev->response, 0, vmdev->msg_size); + + ret = vmdev->ops->transfer(vmdev, vmdev->request, vmdev->response); + if (ret) + dev_err(&vmdev->vdev.dev, "Transfer request failed (%d)\n", ret); + + return ret; +} + +static inline int virtio_msg_send(struct virtio_msg_device *vmdev) +{ + int ret; + + ret = vmdev->ops->transfer(vmdev, vmdev->request, NULL); + if (ret) + dev_err(&vmdev->vdev.dev, "Send request failed (%d)\n", ret); + + return ret; +} + +static int virtio_msg_get_device_info(struct virtio_msg_device *vmdev) +{ + struct get_device_info_resp *payload = virtio_msg_payload(vmdev->response); + struct virtio_device *vdev = &vmdev->vdev; + u32 num_feature_bits; + int ret; + + transport_msg_prepare(vmdev, VIRTIO_MSG_DEVICE_INFO, 0); + + ret = virtio_msg_xfer(vmdev); + if 
(ret) + return ret; + + vdev->id.device = le32_to_cpu(payload->device_id); + if (vdev->id.device == 0) { + /* + * virtio device with an ID 0 is a (dummy) placeholder with no + * function. + */ + return -ENODEV; + } + + vdev->id.vendor = le32_to_cpu(payload->vendor_id); + vmdev->config_size = le32_to_cpu(payload->config_size); + num_feature_bits = le32_to_cpu(payload->num_feature_bits); + + /* Linux supports 64 feature bits */ + if (num_feature_bits != 64) { + dev_err(&vdev->dev, "Incompatible num_feature_bits (%u)\n", + num_feature_bits); + return -EINVAL; + } + + return 0; +} + +static u64 virtio_msg_get_features(struct virtio_device *vdev) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + struct get_features *req_payload = virtio_msg_payload(vmdev->request); + struct get_features_resp *res_payload = virtio_msg_payload(vmdev->response); + __le32 *features; + int ret; + + transport_msg_prepare(vmdev, VIRTIO_MSG_GET_DEV_FEATURES, + sizeof(*req_payload)); + + /* Linux supports 64 feature bits */ + req_payload->num = cpu_to_le32(2); + req_payload->index = 0; + + ret = virtio_msg_xfer(vmdev); + if (ret) + return 0; + + features = (__le32 *)res_payload->features; + return ((u64)(le32_to_cpu(features[1])) << 32) | le32_to_cpu(features[0]); +} + +static int virtio_msg_finalize_features(struct virtio_device *vdev) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + struct set_features *payload = virtio_msg_payload(vmdev->request); + __le32 *features = (__le32 *)payload->features; + + /* Give virtio_ring a chance to accept features */ + vring_transport_features(vdev); + + transport_msg_prepare(vmdev, VIRTIO_MSG_SET_DRV_FEATURES, sizeof(*payload)); + + /* Linux supports 64 feature bits */ + payload->num = cpu_to_le32(2); + payload->index = 0; + + features[0] = cpu_to_le32((u32)vmdev->vdev.features); + features[1] = cpu_to_le32(vmdev->vdev.features >> 32); + + return virtio_msg_xfer(vmdev); +} + +static void virtio_msg_get(struct 
virtio_device *vdev, unsigned int offset, + void *buf, unsigned int len) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + struct get_config *req_payload = virtio_msg_payload(vmdev->request); + struct get_config_resp *res_payload = virtio_msg_payload(vmdev->response); + + BUG_ON(len > 8); + + if (offset + len > vmdev->config_size) { + dev_err(&vmdev->vdev.dev, + "Invalid config read operation: %u: %u: %u\n", offset, + len, vmdev->config_size); + return; + } + + transport_msg_prepare(vmdev, VIRTIO_MSG_GET_CONFIG, sizeof(*req_payload)); + req_payload->offset = cpu_to_le32(offset); + req_payload->size = cpu_to_le32(len); + + if (virtio_msg_xfer(vmdev)) + return; + + /* Buffer holds the data in little endian */ + if (buf) + memcpy(buf, res_payload->config, len); + vmdev->generation_count = le32_to_cpu(res_payload->generation); +} + +static void virtio_msg_set(struct virtio_device *vdev, unsigned int offset, + const void *buf, unsigned int len) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + struct set_config *payload = virtio_msg_payload(vmdev->request); + + BUG_ON(len > 8); + + if (offset + len > vmdev->config_size) { + dev_err(&vmdev->vdev.dev, + "Invalid config write operation: %u: %u: %u\n", offset, + len, vmdev->config_size); + return; + } + + transport_msg_prepare(vmdev, VIRTIO_MSG_SET_CONFIG, sizeof(*payload)); + payload->offset = cpu_to_le32(offset); + payload->size = cpu_to_le32(len); + payload->generation = cpu_to_le32(vmdev->generation_count); + + /* Buffer holds the data in little endian */ + memcpy(payload->config, buf, len); + + virtio_msg_xfer(vmdev); +} + +static u32 virtio_msg_generation(struct virtio_device *vdev) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + + virtio_msg_get(vdev, 0, NULL, 0); + return vmdev->generation_count; +} + +static u8 virtio_msg_get_status(struct virtio_device *vdev) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + struct get_device_status_resp 
*payload = virtio_msg_payload(vmdev->response); + + transport_msg_prepare(vmdev, VIRTIO_MSG_GET_DEVICE_STATUS, 0); + + if (virtio_msg_xfer(vmdev)) + return 0; + + return (u8)le32_to_cpu(payload->status); +} + +static void virtio_msg_set_status(struct virtio_device *vdev, u8 status) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + struct set_device_status *payload = virtio_msg_payload(vmdev->request); + + transport_msg_prepare(vmdev, VIRTIO_MSG_SET_DEVICE_STATUS, sizeof(*payload)); + payload->status = cpu_to_le32(status); + + virtio_msg_xfer(vmdev); +} + +static void virtio_msg_vq_reset(struct virtqueue *vq) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vq->vdev); + struct reset_vqueue *payload = virtio_msg_payload(vmdev->request); + + transport_msg_prepare(vmdev, VIRTIO_MSG_RESET_VQUEUE, sizeof(*payload)); + payload->index = cpu_to_le32(vq->index); + + virtio_msg_xfer(vmdev); +} + +static void virtio_msg_reset(struct virtio_device *vdev) +{ + /* Status value `0` means a reset */ + virtio_msg_set_status(vdev, 0); +} + +static bool _vmsg_notify(struct virtqueue *vq, u32 index, u32 offset, u32 wrap) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vq->vdev); + struct event_avail *payload = virtio_msg_payload(vmdev->request); + u32 val; + + transport_msg_prepare(vmdev, VIRTIO_MSG_EVENT_AVAIL, sizeof(*payload)); + payload->index = cpu_to_le32(index); + + val = offset & ((1U << VIRTIO_MSG_EVENT_AVAIL_WRAP_SHIFT) - 1); + val |= wrap << VIRTIO_MSG_EVENT_AVAIL_WRAP_SHIFT; + payload->next_offset_wrap = cpu_to_le32(val); + + return !virtio_msg_send(vmdev); +} + +static bool virtio_msg_notify(struct virtqueue *vq) +{ + return _vmsg_notify(vq, vq->index, 0, 0); +} + +static bool virtio_msg_notify_with_data(struct virtqueue *vq) +{ + u32 index, offset, wrap, data = vring_notification_data(vq); + + index = data & 0xFFFF; + data >>= 16; + offset = data & 0x7FFF; + wrap = data >> 15; + + return _vmsg_notify(vq, index, offset,
wrap); +} + +int virtio_msg_event(struct virtio_msg_device *vmdev, struct virtio_msg *vmsg) +{ + struct event_used *payload = virtio_msg_payload(vmsg); + struct device *dev = &vmdev->vdev.dev; + struct virtqueue *vq; + unsigned int index; + + if (vmsg->msg_id == VIRTIO_MSG_EVENT_CONFIG) { + virtio_config_changed(&vmdev->vdev); + return 0; + } + + if (vmsg->msg_id == VIRTIO_MSG_EVENT_USED) { + index = le32_to_cpu(payload->index); + + virtio_device_for_each_vq(&vmdev->vdev, vq) { + if (index == vq->index) { + if (vring_interrupt(0, vq) != IRQ_HANDLED) + return -EIO; + + return 0; + } + } + + dev_err(dev, "Failed to find virtqueue (%u)", index); + } else { + dev_err(dev, "Unexpected message id: (%u)\n", vmsg->msg_id); + } + + return -EINVAL; +} +EXPORT_SYMBOL_GPL(virtio_msg_event); + +static void virtio_msg_del_vqs(struct virtio_device *vdev) +{ + struct virtqueue *vq, *n; + + list_for_each_entry_safe(vq, n, &vdev->vqs, list) { + virtio_msg_vq_reset(vq); + vring_del_virtqueue(vq); + } +} + +static int virtio_msg_vq_get(struct virtio_msg_device *vmdev, unsigned int *num, + unsigned int index) +{ + struct get_vqueue *req_payload = virtio_msg_payload(vmdev->request); + struct get_vqueue_resp *res_payload = virtio_msg_payload(vmdev->response); + int ret; + + transport_msg_prepare(vmdev, VIRTIO_MSG_GET_VQUEUE, sizeof(*req_payload)); + req_payload->index = cpu_to_le32(index); + + ret = virtio_msg_xfer(vmdev); + if (ret) + return ret; + + *num = le32_to_cpu(res_payload->max_size); + if (!*num) + return -ENOENT; + + return 0; +} + +static int virtio_msg_vq_set(struct virtio_msg_device *vmdev, + struct virtqueue *vq, unsigned int index) +{ + struct set_vqueue *payload = virtio_msg_payload(vmdev->request); + + transport_msg_prepare(vmdev, VIRTIO_MSG_SET_VQUEUE, sizeof(*payload)); + payload->index = cpu_to_le32(index); + payload->size = cpu_to_le32(virtqueue_get_vring_size(vq)); + payload->descriptor_addr = cpu_to_le64(virtqueue_get_desc_addr(vq)); + payload->driver_addr = 
cpu_to_le64(virtqueue_get_avail_addr(vq)); + payload->device_addr = cpu_to_le64(virtqueue_get_used_addr(vq)); + + return virtio_msg_xfer(vmdev); +} + +static struct virtqueue * +virtio_msg_setup_vq(struct virtio_msg_device *vmdev, unsigned int index, + void (*callback)(struct virtqueue *vq), const char *name, + bool ctx) +{ + bool (*notify)(struct virtqueue *vq); + struct virtqueue *vq; + unsigned int num; + int ret; + + if (__virtio_test_bit(&vmdev->vdev, VIRTIO_F_NOTIFICATION_DATA)) + notify = virtio_msg_notify_with_data; + else + notify = virtio_msg_notify; + + ret = virtio_msg_vq_get(vmdev, &num, index); + if (ret) + return ERR_PTR(ret); + + vq = vring_create_virtqueue(index, num, PAGE_SIZE, &vmdev->vdev, true, + true, ctx, notify, callback, name); + if (!vq) + return ERR_PTR(-ENOMEM); + + vq->num_max = num; + + ret = virtio_msg_vq_set(vmdev, vq, index); + if (ret) { + vring_del_virtqueue(vq); + return ERR_PTR(ret); + } + + return vq; +} + +static int virtio_msg_find_vqs(struct virtio_device *vdev, unsigned int nvqs, + struct virtqueue *vqs[], + struct virtqueue_info vqs_info[], + struct irq_affinity *desc) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + int i, queue_idx = 0; + + for (i = 0; i < nvqs; ++i) { + struct virtqueue_info *vqi = &vqs_info[i]; + + if (!vqi->name) { + vqs[i] = NULL; + continue; + } + + vqs[i] = virtio_msg_setup_vq(vmdev, queue_idx++, vqi->callback, + vqi->name, vqi->ctx); + if (IS_ERR(vqs[i])) { + virtio_msg_del_vqs(vdev); + return PTR_ERR(vqs[i]); + } + } + + return 0; +} + +static const char *virtio_msg_bus_name(struct virtio_device *vdev) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + + return vmdev->bus_name; +} + +static void virtio_msg_synchronize_cbs(struct virtio_device *vdev) +{ + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + + vmdev->ops->synchronize_cbs(vmdev); +} + +static void virtio_msg_release_dev(struct device *_d) +{ + struct virtio_device *vdev = + 
container_of(_d, struct virtio_device, dev); + struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev); + + if (vmdev->ops->release) + vmdev->ops->release(vmdev); +} + +static struct virtio_config_ops virtio_msg_config_ops = { + .get = virtio_msg_get, + .set = virtio_msg_set, + .generation = virtio_msg_generation, + .get_status = virtio_msg_get_status, + .set_status = virtio_msg_set_status, + .reset = virtio_msg_reset, + .find_vqs = virtio_msg_find_vqs, + .del_vqs = virtio_msg_del_vqs, + .get_features = virtio_msg_get_features, + .finalize_features = virtio_msg_finalize_features, + .bus_name = virtio_msg_bus_name, +}; + +int virtio_msg_register(struct virtio_msg_device *vmdev) +{ + u32 version; + int ret; + + if (!vmdev || !vmdev->ops || !vmdev->ops->bus_info || + !vmdev->ops->transfer) + return -EINVAL; + + vmdev->bus_name = vmdev->ops->bus_info(vmdev, &vmdev->msg_size, + &version); + if (version != VIRTIO_MSG_REVISION_1 || + vmdev->msg_size < VIRTIO_MSG_MIN_SIZE) + return -EINVAL; + + /* + * Allocate request/response buffers of `msg_size`. + * + * The requests are sent sequentially for each device and hence a + * per-device copy of request/response buffers is sufficient. 
+ */ + vmdev->request = kzalloc(2 * vmdev->msg_size, GFP_KERNEL); + if (!vmdev->request) + return -ENOMEM; + + vmdev->response = (void *)vmdev->request + vmdev->msg_size; + + vmdev->vdev.config = &virtio_msg_config_ops; + vmdev->vdev.dev.release = virtio_msg_release_dev; + + if (vmdev->ops->synchronize_cbs) + virtio_msg_config_ops.synchronize_cbs = virtio_msg_synchronize_cbs; + + ret = virtio_msg_get_device_info(vmdev); + if (ret) { + if (vmdev->ops->release) + vmdev->ops->release(vmdev); + goto free; + } + + ret = register_virtio_device(&vmdev->vdev); + if (ret) { + put_device(&vmdev->vdev.dev); + goto free; + } + + return 0; + +free: + kfree(vmdev->request); + return ret; +} +EXPORT_SYMBOL_GPL(virtio_msg_register); + +void virtio_msg_unregister(struct virtio_msg_device *vmdev) +{ + unregister_virtio_device(&vmdev->vdev); + kfree(vmdev->request); +} +EXPORT_SYMBOL_GPL(virtio_msg_unregister); + +MODULE_AUTHOR("Viresh Kumar viresh.kumar@linaro.org"); +MODULE_DESCRIPTION("Virtio message transport"); +MODULE_LICENSE("GPL"); diff --git a/drivers/virtio/virtio_msg.h b/drivers/virtio/virtio_msg.h new file mode 100644 index 000000000000..099bb2f0f679 --- /dev/null +++ b/drivers/virtio/virtio_msg.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ +/* + * Virtio message transport header. + * + * Copyright (C) 2025 Google LLC and Linaro. + * Viresh Kumar viresh.kumar@linaro.org + */ + +#ifndef _DRIVERS_VIRTIO_VIRTIO_MSG_H +#define _DRIVERS_VIRTIO_VIRTIO_MSG_H + +#include <linux/virtio.h> +#include <uapi/linux/virtio_msg.h> + +struct virtio_msg_device; + +/* + * struct virtio_msg_ops - Virtio message bus operations. + * @bus_info: Return bus information. + * @transfer: Transfer a message. + * @synchronize_cbs: Synchronize with the virtqueue callbacks (optional). + * @release: Release the resources corresponding to the device (optional). 
+ */ +struct virtio_msg_ops { + const char *(*bus_info)(struct virtio_msg_device *vmdev, u16 *msg_size, u32 *rev); + int (*transfer)(struct virtio_msg_device *vmdev, struct virtio_msg *request, + struct virtio_msg *response); + void (*synchronize_cbs)(struct virtio_msg_device *vmdev); + void (*release)(struct virtio_msg_device *vmdev); +}; + +/* + * Representation of a device using virtio message + * transport. + */ +struct virtio_msg_device { + struct virtio_device vdev; + struct virtio_msg_ops *ops; + const char *bus_name; + void *bus_data; + u32 generation_count; + u32 config_size; + u16 msg_size; + u16 dev_id; + + struct virtio_msg *request; + struct virtio_msg *response; +}; + +int virtio_msg_register(struct virtio_msg_device *vmdev); +void virtio_msg_unregister(struct virtio_msg_device *vmdev); + +void virtio_msg_prepare(struct virtio_msg *vmsg, u8 msg_id, u16 payload_size); +int virtio_msg_event(struct virtio_msg_device *vmdev, struct virtio_msg *vmsg); + +#endif /* _DRIVERS_VIRTIO_VIRTIO_MSG_H */ diff --git a/include/uapi/linux/virtio_msg.h b/include/uapi/linux/virtio_msg.h new file mode 100644 index 000000000000..823bfbd6ecf1 --- /dev/null +++ b/include/uapi/linux/virtio_msg.h @@ -0,0 +1,221 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ +/* + * Virtio message transport header. + * + * Copyright (c) 2025 Advanced Micro Devices, Inc. + * Written by Edgar E. Iglesias edgar.iglesias@amd.com + * + * Copyright (C) 2025 Google LLC and Linaro. 
+ * Viresh Kumar viresh.kumar@linaro.org + */ + +#ifndef _LINUX_VIRTIO_MSG_H +#define _LINUX_VIRTIO_MSG_H + +#include <linux/types.h> + +/* Virtio message transport definitions */ + +/* Message types */ +#define VIRTIO_MSG_DEVICE_INFO 0x02 +#define VIRTIO_MSG_GET_DEV_FEATURES 0x03 +#define VIRTIO_MSG_SET_DRV_FEATURES 0x04 +#define VIRTIO_MSG_GET_CONFIG 0x05 +#define VIRTIO_MSG_SET_CONFIG 0x06 +#define VIRTIO_MSG_GET_DEVICE_STATUS 0x07 +#define VIRTIO_MSG_SET_DEVICE_STATUS 0x08 +#define VIRTIO_MSG_GET_VQUEUE 0x09 +#define VIRTIO_MSG_SET_VQUEUE 0x0a +#define VIRTIO_MSG_RESET_VQUEUE 0x0b +#define VIRTIO_MSG_GET_SHM 0x0c +#define VIRTIO_MSG_EVENT_CONFIG 0x40 +#define VIRTIO_MSG_EVENT_AVAIL 0x41 +#define VIRTIO_MSG_EVENT_USED 0x42 +#define VIRTIO_MSG_MAX VIRTIO_MSG_EVENT_USED + +#define VIRTIO_MSG_MIN_SIZE 44 +#define VIRTIO_MSG_MAX_SIZE 65536 +#define VIRTIO_MSG_REVISION_1 0x1 + +/* Message payload format */ + +struct get_device_info_resp { + __le32 device_id; + __le32 vendor_id; + __le32 num_feature_bits; + __le32 config_size; + __le32 max_vq_count; + __le16 admin_vq_start_idx; + __le16 admin_vq_count; +} __attribute__((packed)); + +struct get_features { + __le32 index; + __le32 num; +} __attribute__((packed)); + +struct get_features_resp { + __le32 index; + __le32 num; + __u8 features[]; +} __attribute__((packed)); + +struct set_features { + __le32 index; + __le32 num; + __u8 features[]; +} __attribute__((packed)); + +struct get_config { + __le32 offset; + __le32 size; +} __attribute__((packed)); + +struct get_config_resp { + __le32 generation; + __le32 offset; + __le32 size; + __u8 config[]; +} __attribute__((packed)); + +struct set_config { + __le32 generation; + __le32 offset; + __le32 size; + __u8 config[]; +} __attribute__((packed)); + +struct set_config_resp { + __le32 generation; + __le32 offset; + __le32 size; + __u8 config[]; +} __attribute__((packed)); + +struct get_device_status_resp { + __le32 status; +} __attribute__((packed)); + +struct 
set_device_status { + __le32 status; +} __attribute__((packed)); + +struct set_device_status_resp { + __le32 status; +} __attribute__((packed)); + +struct get_vqueue { + __le32 index; +} __attribute__((packed)); + +struct get_vqueue_resp { + __le32 index; + __le32 max_size; + __le32 size; + __le64 descriptor_addr; + __le64 driver_addr; + __le64 device_addr; +} __attribute__((packed)); + +struct set_vqueue { + __le32 index; + __le32 unused; + __le32 size; + __le64 descriptor_addr; + __le64 driver_addr; + __le64 device_addr; +} __attribute__((packed)); + +struct reset_vqueue { + __le32 index; +} __attribute__((packed)); + +struct get_shm { + __le32 index; +} __attribute__((packed)); + +struct get_shm_resp { + __le32 index; + __le32 count; + __le32 addr; +} __attribute__((packed)); + +struct event_config { + __le32 status; + __le32 generation; + __le32 offset; + __le32 size; + __u8 config[]; +} __attribute__((packed)); + +struct event_avail { + __le32 index; + #define VIRTIO_MSG_EVENT_AVAIL_WRAP_SHIFT 31 + __le32 next_offset_wrap; +} __attribute__((packed)); + +struct event_used { + __le32 index; +} __attribute__((packed)); + +struct virtio_msg { + #define VIRTIO_MSG_TYPE_REQUEST (0 << 0) + #define VIRTIO_MSG_TYPE_RESPONSE (1 << 0) + #define VIRTIO_MSG_TYPE_TRANSPORT (0 << 1) + #define VIRTIO_MSG_TYPE_BUS (1 << 1) + __u8 type; + + __u8 msg_id; + __le16 dev_id; + __le16 msg_size; + __u8 payload[]; +} __attribute__((packed)); + +static inline void *virtio_msg_payload(struct virtio_msg *vmsg) +{ + return &vmsg->payload; +} + +/* Virtio message bus definitions */ + +/* Message types */ +#define VIRTIO_MSG_BUS_GET_DEVICES 0x02 +#define VIRTIO_MSG_BUS_PING 0x03 +#define VIRTIO_MSG_BUS_EVENT_DEVICE 0x40 + +struct bus_get_devices { + __le16 offset; + __le16 num; +} __attribute__((packed)); + +struct bus_get_devices_resp { + __le16 offset; + __le16 num; + __le16 next_offset; + __u8 devices[]; +} __attribute__((packed)); + +struct bus_event_device { + __le16 dev_num; + #define 
VIRTIO_MSG_BUS_EVENT_DEV_STATE_READY 0x1 + #define VIRTIO_MSG_BUS_EVENT_DEV_STATE_REMOVED 0x2 + __le16 dev_state; +} __attribute__((packed)); + +struct bus_ping { + __le32 data; +} __attribute__((packed)); + +struct bus_ping_resp { + __le32 data; +} __attribute__((packed)); + +struct bus_status { + #define VIRTIO_BUS_STATE_RESET 0x0 + #define VIRTIO_BUS_STATE_SHUTDOWN 0x1 + #define VIRTIO_BUS_STATE_SUSPEND 0x2 + #define VIRTIO_BUS_STATE_RESUME 0x4 + __u8 state; +} __attribute__((packed)); + +#endif /* _LINUX_VIRTIO_MSG_H */
Hi Viresh,
Very nice :-) Some questions and suggestions.
We can look at optimizations later (generation count, caching of the config, etc.).
On 22 Jul 2025, at 11:46, Viresh Kumar viresh.kumar@linaro.org wrote:
This introduces support for a new Virtio transport type: "virtio-msg". Unlike existing transport types like virtio-mmio or virtio-pci which rely on memory-mapped registers, virtio-msg implements transport operations via structured message exchanges using standard virtqueues.
It separates bus-level functionality (e.g., device enumeration, hotplug events) from device-specific operations (e.g., feature negotiation, virtqueue setup), ensuring that a single, generic transport layer can be reused across multiple bus implementations (like ARM Firmware Framework (FF-A), IPC, etc.).
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org
MAINTAINERS | 7 + drivers/virtio/Kconfig | 7 + drivers/virtio/Makefile | 1 + drivers/virtio/virtio_msg.c | 546 ++++++++++++++++++++++++++++++++ drivers/virtio/virtio_msg.h | 56 ++++ include/uapi/linux/virtio_msg.h | 221 +++++++++++++ 6 files changed, 838 insertions(+) create mode 100644 drivers/virtio/virtio_msg.c create mode 100644 drivers/virtio/virtio_msg.h create mode 100644 include/uapi/linux/virtio_msg.h
diff --git a/MAINTAINERS b/MAINTAINERS index 60bba48f5479..6fc644e405e6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -26355,6 +26355,13 @@ W: https://virtio-mem.gitlab.io/ F: drivers/virtio/virtio_mem.c F: include/uapi/linux/virtio_mem.h
+VIRTIO MSG TRANSPORT +M: Viresh Kumar viresh.kumar@linaro.org +L: virtualization@lists.linux.dev +S: Maintained +F: drivers/virtio/virtio_msg* +F: include/uapi/linux/virtio_msg*
VIRTIO PMEM DRIVER M: Pankaj Gupta pankaj.gupta.linux@gmail.com L: virtualization@lists.linux.dev diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig index 6db5235a7693..690ac98850b6 100644 --- a/drivers/virtio/Kconfig +++ b/drivers/virtio/Kconfig @@ -171,6 +171,13 @@ config VIRTIO_MMIO_CMDLINE_DEVICES
If unsure, say 'N'.
+config VIRTIO_MSG
+	tristate
+	select VIRTIO
+	help
+	  This enables support for Virtio message transport. This option is
+	  selected by any driver which implements the virtio message bus.
+
config VIRTIO_DMA_SHARED_BUFFER tristate depends on DMA_SHARED_BUFFER diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile index eefcfe90d6b8..3eff8ca72446 100644 --- a/drivers/virtio/Makefile +++ b/drivers/virtio/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_VIRTIO_ANCHOR) += virtio_anchor.o obj-$(CONFIG_VIRTIO_PCI_LIB) += virtio_pci_modern_dev.o obj-$(CONFIG_VIRTIO_PCI_LIB_LEGACY) += virtio_pci_legacy_dev.o obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o +obj-$(CONFIG_VIRTIO_MSG) += virtio_msg.o obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o diff --git a/drivers/virtio/virtio_msg.c b/drivers/virtio/virtio_msg.c new file mode 100644 index 000000000000..207fa1f18bf9 --- /dev/null +++ b/drivers/virtio/virtio_msg.c @@ -0,0 +1,546 @@ +// SPDX-License-Identifier: GPL-2.0+ +/*
+ * Virtio message transport.
+ *
+ * Copyright (C) 2025 Google LLC and Linaro.
+ * Viresh Kumar viresh.kumar@linaro.org
+ *
+ * The virtio-msg transport encapsulates virtio operations as discrete message
+ * exchanges rather than relying on PCI or memory-mapped I/O regions. It
+ * separates bus-level functionality (e.g., device enumeration, hotplug events)
+ * from device-specific operations (e.g., feature negotiation, virtqueue setup),
+ * ensuring that a single, generic transport layer can be reused across multiple
+ * bus implementations (like ARM Firmware Framework (FF-A), IPC, etc.).
+ *
+ * This file implements the generic Virtio message transport layer.
+ */
+#define pr_fmt(fmt) "virtio-msg: " fmt
+#include <linux/err.h> +#include <linux/list.h> +#include <linux/slab.h> +#include <linux/virtio.h> +#include <linux/virtio_config.h> +#include <linux/virtio_ring.h> +#include <uapi/linux/virtio_msg.h>
+#include "virtio_msg.h"
+#define to_virtio_msg_device(_dev) \
+	container_of(_dev, struct virtio_msg_device, vdev)
+
+static void msg_prepare(struct virtio_msg *vmsg, bool bus, u8 msg_id,
+			u16 dev_id, u16 payload_size)
+{
+	u16 size = sizeof(*vmsg) + payload_size;
+
+	memset(vmsg, 0, size);
+
+	if (bus) {
+		vmsg->type = VIRTIO_MSG_TYPE_BUS;
+	} else {
+		vmsg->type = VIRTIO_MSG_TYPE_TRANSPORT;
+		vmsg->dev_id = cpu_to_le16(dev_id);
+	}
+
+	vmsg->msg_id = msg_id;
+	vmsg->msg_size = cpu_to_le16(size);
+}
+
+static void transport_msg_prepare(struct virtio_msg_device *vmdev, u8 msg_id,
+				  u16 payload_size)
+{
+	msg_prepare(vmdev->request, false, msg_id, vmdev->dev_id, payload_size);
+}
+
+void virtio_msg_prepare(struct virtio_msg *vmsg, u8 msg_id, u16 payload_size)
+{
+	msg_prepare(vmsg, true, msg_id, 0, payload_size);
+}
+EXPORT_SYMBOL_GPL(virtio_msg_prepare);
+
+static int virtio_msg_xfer(struct virtio_msg_device *vmdev)
+{
+	int ret;
+
+	memset(vmdev->response, 0, vmdev->msg_size);
+
+	ret = vmdev->ops->transfer(vmdev, vmdev->request, vmdev->response);
+	if (ret)
+		dev_err(&vmdev->vdev.dev, "Transfer request failed (%d)\n", ret);
Here a bus can report an error when sending a message. I think the idea in the transport was to avoid this case: the bus would be responsible for always handing back a valid message.
I am definitely open to this possibility, but I wonder what actually happens here: you print an error, but how is it handled afterwards?
I can see some cases where it will work, get_device_info for example, but I wonder how this will work overall.
+
+	return ret;
+}
+
+static inline int virtio_msg_send(struct virtio_msg_device *vmdev)
+{
+	int ret;
+
+	ret = vmdev->ops->transfer(vmdev, vmdev->request, NULL);
+	if (ret)
+		dev_err(&vmdev->vdev.dev, "Send request failed (%d)\n", ret);
Same as before.
+
+	return ret;
+}
+
+static int virtio_msg_get_device_info(struct virtio_msg_device *vmdev)
+{
+	struct get_device_info_resp *payload = virtio_msg_payload(vmdev->response);
+	struct virtio_device *vdev = &vmdev->vdev;
+	u32 num_feature_bits;
+	int ret;
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_DEVICE_INFO, 0);
+
+	ret = virtio_msg_xfer(vmdev);
+	if (ret)
+		return ret;
+
+	vdev->id.device = le32_to_cpu(payload->device_id);
+	if (vdev->id.device == 0) {
+		/*
+		 * virtio device with an ID 0 is a (dummy) placeholder with no
+		 * function.
+		 */
+		return -ENODEV;
+	}
+
+	vdev->id.vendor = le32_to_cpu(payload->vendor_id);
+	vmdev->config_size = le32_to_cpu(payload->config_size);
+
+	num_feature_bits = le32_to_cpu(payload->num_feature_bits);
+
+	/* Linux supports 64 feature bits */
+	if (num_feature_bits != 64) {
+		dev_err(&vdev->dev, "Incompatible num_feature_bits (%u)\n",
+			num_feature_bits);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static u64 virtio_msg_get_features(struct virtio_device *vdev)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+	struct get_features *req_payload = virtio_msg_payload(vmdev->request);
+	struct get_features_resp *res_payload = virtio_msg_payload(vmdev->response);
+	__le32 *features;
+	int ret;
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_GET_DEV_FEATURES,
+			      sizeof(*req_payload));
+
+	/* Linux supports 64 feature bits */
+	req_payload->num = cpu_to_le32(2);
+	req_payload->index = 0;
+
+	ret = virtio_msg_xfer(vmdev);
+	if (ret)
+		return 0;
+
+	features = (__le32 *)res_payload->features;
+
+	return ((u64)(le32_to_cpu(features[1])) << 32) | le32_to_cpu(features[0]);
+}
+
+static int virtio_msg_finalize_features(struct virtio_device *vdev)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+	struct set_features *payload = virtio_msg_payload(vmdev->request);
+	__le32 *features = (__le32 *)payload->features;
+
+	/* Give virtio_ring a chance to accept features */
+	vring_transport_features(vdev);
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_SET_DRV_FEATURES, sizeof(*payload));
+
+	/* Linux supports 64 feature bits */
+	payload->num = cpu_to_le32(2);
+	payload->index = 0;
+	features[0] = cpu_to_le32((u32)vmdev->vdev.features);
+	features[1] = cpu_to_le32(vmdev->vdev.features >> 32);
+
+	return virtio_msg_xfer(vmdev);
+}
+
+static void virtio_msg_get(struct virtio_device *vdev, unsigned int offset,
+			   void *buf, unsigned int len)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+	struct get_config *req_payload = virtio_msg_payload(vmdev->request);
+	struct get_config_resp *res_payload = virtio_msg_payload(vmdev->response);
+
+	BUG_ON(len > 8);
So you only allow retrieving 64 bits at a time here. Why is that?
+
+	if (offset + len > vmdev->config_size) {
+		dev_err(&vmdev->vdev.dev,
+			"Invalid config read operation: %u: %u: %u\n", offset,
+			len, vmdev->config_size);
+		return;
+	}
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_GET_CONFIG, sizeof(*req_payload));
+	req_payload->offset = cpu_to_le32(offset);
+	req_payload->size = cpu_to_le32(len);
+
+	if (virtio_msg_xfer(vmdev))
+		return;
+
+	/* Buffer holds the data in little endian */
+	if (buf)
+		memcpy(buf, res_payload->config, len);
+
+	vmdev->generation_count = le32_to_cpu(res_payload->generation);
+}
+
+static void virtio_msg_set(struct virtio_device *vdev, unsigned int offset,
+			   const void *buf, unsigned int len)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+	struct set_config *payload = virtio_msg_payload(vmdev->request);
+
+	BUG_ON(len > 8);
+
+	if (offset + len > vmdev->config_size) {
+		dev_err(&vmdev->vdev.dev,
+			"Invalid config write operation: %u: %u: %u\n", offset,
+			len, vmdev->config_size);
+		return;
+	}
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_SET_CONFIG, sizeof(*payload));
+	payload->offset = cpu_to_le32(offset);
+	payload->size = cpu_to_le32(len);
+	payload->generation = cpu_to_le32(vmdev->generation_count);
+
+	/* Buffer holds the data in little endian */
+	memcpy(payload->config, buf, len);
+
+	virtio_msg_xfer(vmdev);
+}
+
+static u32 virtio_msg_generation(struct virtio_device *vdev)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+
+	virtio_msg_get(vdev, 0, NULL, 0);
+
+	return vmdev->generation_count;
+}
+
+static u8 virtio_msg_get_status(struct virtio_device *vdev)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+	struct get_device_status_resp *payload = virtio_msg_payload(vmdev->response);
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_GET_DEVICE_STATUS, 0);
+
+	if (virtio_msg_xfer(vmdev))
+		return 0;
+
+	return (u8)le32_to_cpu(payload->status);
+}
+
+static void virtio_msg_set_status(struct virtio_device *vdev, u8 status)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+	struct set_device_status *payload = virtio_msg_payload(vmdev->request);
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_SET_DEVICE_STATUS, sizeof(*payload));
+	payload->status = cpu_to_le32(status);
+
+	virtio_msg_xfer(vmdev);
+}
+
+static void virtio_msg_vq_reset(struct virtqueue *vq)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vq->vdev);
+	struct reset_vqueue *payload = virtio_msg_payload(vmdev->request);
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_RESET_VQUEUE, sizeof(*payload));
+	payload->index = cpu_to_le32(vq->index);
+
+	virtio_msg_xfer(vmdev);
+}
+
+static void virtio_msg_reset(struct virtio_device *vdev)
+{
+	/* Status value `0` means a reset */
+	virtio_msg_set_status(vdev, 0);
+}
+
+static bool _vmsg_notify(struct virtqueue *vq, u32 index, u32 offset, u32 wrap)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vq->vdev);
+	struct event_avail *payload = virtio_msg_payload(vmdev->request);
+	u32 val;
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_EVENT_AVAIL, sizeof(*payload));
+	payload->index = cpu_to_le32(index);
+
+	val = offset & ((1U << VIRTIO_MSG_EVENT_AVAIL_WRAP_SHIFT) - 1);
+	val |= wrap << VIRTIO_MSG_EVENT_AVAIL_WRAP_SHIFT;
+	payload->next_offset_wrap = cpu_to_le32(val);
+
+	return !virtio_msg_send(vmdev);
+}
+
+static bool virtio_msg_notify(struct virtqueue *vq)
+{
+	return _vmsg_notify(vq, vq->index, 0, 0);
+}
+
+static bool virtio_msg_notify_with_data(struct virtqueue *vq)
+{
+	u32 index, offset, wrap, data = vring_notification_data(vq);
+
+	index = data & 0xFFFF;
+	data >>= 16;
+
+	offset = data & 0x7FFF;
+	wrap = data >> 15;
+
+	return _vmsg_notify(vq, index, offset, wrap);
+}
+
+int virtio_msg_event(struct virtio_msg_device *vmdev, struct virtio_msg *vmsg)
+{
+	struct event_used *payload = virtio_msg_payload(vmsg);
+	struct device *dev = &vmdev->vdev.dev;
+	struct virtqueue *vq;
+	unsigned int index;
+
+	if (vmsg->msg_id == VIRTIO_MSG_EVENT_CONFIG) {
+		virtio_config_changed(&vmdev->vdev);
+		return 0;
+	}
+
+	if (vmsg->msg_id == VIRTIO_MSG_EVENT_USED) {
+		index = le32_to_cpu(payload->index);
+
+		virtio_device_for_each_vq(&vmdev->vdev, vq) {
+			if (index == vq->index) {
+				if (vring_interrupt(0, vq) != IRQ_HANDLED)
+					return -EIO;
+				return 0;
+			}
+		}
+
+		dev_err(dev, "Failed to find virtqueue (%u)", index);
+	} else {
+		dev_err(dev, "Unexpected message id: (%u)\n", vmsg->msg_id);
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(virtio_msg_event);
+
+static void virtio_msg_del_vqs(struct virtio_device *vdev)
+{
+	struct virtqueue *vq, *n;
+
+	list_for_each_entry_safe(vq, n, &vdev->vqs, list) {
+		virtio_msg_vq_reset(vq);
+		vring_del_virtqueue(vq);
+	}
+}
+
+static int virtio_msg_vq_get(struct virtio_msg_device *vmdev, unsigned int *num,
+			     unsigned int index)
+{
+	struct get_vqueue *req_payload = virtio_msg_payload(vmdev->request);
+	struct get_vqueue_resp *res_payload = virtio_msg_payload(vmdev->response);
+	int ret;
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_GET_VQUEUE, sizeof(*req_payload));
+	req_payload->index = cpu_to_le32(index);
+
+	ret = virtio_msg_xfer(vmdev);
+	if (ret)
+		return ret;
+
+	*num = le32_to_cpu(res_payload->max_size);
+	if (!*num)
+		return -ENOENT;
+
+	return 0;
+}
+
+static int virtio_msg_vq_set(struct virtio_msg_device *vmdev,
+			     struct virtqueue *vq, unsigned int index)
+{
+	struct set_vqueue *payload = virtio_msg_payload(vmdev->request);
+
+	transport_msg_prepare(vmdev, VIRTIO_MSG_SET_VQUEUE, sizeof(*payload));
+
+	payload->index = cpu_to_le32(index);
+	payload->size = cpu_to_le32(virtqueue_get_vring_size(vq));
+	payload->descriptor_addr = cpu_to_le64(virtqueue_get_desc_addr(vq));
+	payload->driver_addr = cpu_to_le64(virtqueue_get_avail_addr(vq));
+	payload->device_addr = cpu_to_le64(virtqueue_get_used_addr(vq));
+
+	return virtio_msg_xfer(vmdev);
+}
+
+static struct virtqueue *
+virtio_msg_setup_vq(struct virtio_msg_device *vmdev, unsigned int index,
+		    void (*callback)(struct virtqueue *vq), const char *name,
+		    bool ctx)
+{
+	bool (*notify)(struct virtqueue *vq);
+	struct virtqueue *vq;
+	unsigned int num;
+	int ret;
+
+	if (__virtio_test_bit(&vmdev->vdev, VIRTIO_F_NOTIFICATION_DATA))
+		notify = virtio_msg_notify_with_data;
+	else
+		notify = virtio_msg_notify;
+
+	ret = virtio_msg_vq_get(vmdev, &num, index);
+	if (ret)
+		return ERR_PTR(ret);
+
+	vq = vring_create_virtqueue(index, num, PAGE_SIZE, &vmdev->vdev, true,
+				    true, ctx, notify, callback, name);
+	if (!vq)
+		return ERR_PTR(-ENOMEM);
+
+	vq->num_max = num;
+
+	ret = virtio_msg_vq_set(vmdev, vq, index);
+	if (ret) {
+		vring_del_virtqueue(vq);
+		return ERR_PTR(ret);
+	}
+
+	return vq;
+}
+
+static int virtio_msg_find_vqs(struct virtio_device *vdev, unsigned int nvqs,
+			       struct virtqueue *vqs[],
+			       struct virtqueue_info vqs_info[],
+			       struct irq_affinity *desc)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+	int i, queue_idx = 0;
+
+	for (i = 0; i < nvqs; ++i) {
+		struct virtqueue_info *vqi = &vqs_info[i];
+
+		if (!vqi->name) {
+			vqs[i] = NULL;
+			continue;
+		}
+
+		vqs[i] = virtio_msg_setup_vq(vmdev, queue_idx++, vqi->callback,
+					     vqi->name, vqi->ctx);
+		if (IS_ERR(vqs[i])) {
+			virtio_msg_del_vqs(vdev);
+			return PTR_ERR(vqs[i]);
+		}
+	}
+
+	return 0;
+}
+
+static const char *virtio_msg_bus_name(struct virtio_device *vdev)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+
+	return vmdev->bus_name;
+}
+
+static void virtio_msg_synchronize_cbs(struct virtio_device *vdev)
+{
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+
+	vmdev->ops->synchronize_cbs(vmdev);
+}
+
+static void virtio_msg_release_dev(struct device *_d)
+{
+	struct virtio_device *vdev =
+		container_of(_d, struct virtio_device, dev);
+	struct virtio_msg_device *vmdev = to_virtio_msg_device(vdev);
+
+	if (vmdev->ops->release)
+		vmdev->ops->release(vmdev);
+}
+
+static struct virtio_config_ops virtio_msg_config_ops = {
+	.get = virtio_msg_get,
+	.set = virtio_msg_set,
+	.generation = virtio_msg_generation,
+	.get_status = virtio_msg_get_status,
+	.set_status = virtio_msg_set_status,
+	.reset = virtio_msg_reset,
+	.find_vqs = virtio_msg_find_vqs,
+	.del_vqs = virtio_msg_del_vqs,
+	.get_features = virtio_msg_get_features,
+	.finalize_features = virtio_msg_finalize_features,
+	.bus_name = virtio_msg_bus_name,
+};
+
+int virtio_msg_register(struct virtio_msg_device *vmdev)
+{
+	u32 version;
+	int ret;
+
+	if (!vmdev || !vmdev->ops || !vmdev->ops->bus_info ||
+	    !vmdev->ops->transfer)
+		return -EINVAL;
+
+	vmdev->bus_name = vmdev->ops->bus_info(vmdev, &vmdev->msg_size,
+					       &version);
+
+	if (version != VIRTIO_MSG_REVISION_1 ||
+	    vmdev->msg_size < VIRTIO_MSG_MIN_SIZE)
+		return -EINVAL;
+
+	/*
+	 * Allocate request/response buffers of `msg_size`.
+	 *
+	 * The requests are sent sequentially for each device and hence a
+	 * per-device copy of request/response buffers is sufficient.
+	 */
+	vmdev->request = kzalloc(2 * vmdev->msg_size, GFP_KERNEL);
+	if (!vmdev->request)
+		return -ENOMEM;
+
+	vmdev->response = (void *)vmdev->request + vmdev->msg_size;
+
+	vmdev->vdev.config = &virtio_msg_config_ops;
+	vmdev->vdev.dev.release = virtio_msg_release_dev;
+
+	if (vmdev->ops->synchronize_cbs)
+		virtio_msg_config_ops.synchronize_cbs = virtio_msg_synchronize_cbs;
+
+	ret = virtio_msg_get_device_info(vmdev);
+	if (ret) {
+		if (vmdev->ops->release)
+			vmdev->ops->release(vmdev);
+		goto free;
+	}
+
+	ret = register_virtio_device(&vmdev->vdev);
+	if (ret) {
+		put_device(&vmdev->vdev.dev);
+		goto free;
+	}
+
+	return 0;
+
+free:
+	kfree(vmdev->request);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(virtio_msg_register);
+
+void virtio_msg_unregister(struct virtio_msg_device *vmdev)
+{
+	unregister_virtio_device(&vmdev->vdev);
+	kfree(vmdev->request);
+}
+EXPORT_SYMBOL_GPL(virtio_msg_unregister);
+MODULE_AUTHOR("Viresh Kumar viresh.kumar@linaro.org"); +MODULE_DESCRIPTION("Virtio message transport"); +MODULE_LICENSE("GPL"); diff --git a/drivers/virtio/virtio_msg.h b/drivers/virtio/virtio_msg.h new file mode 100644 index 000000000000..099bb2f0f679 --- /dev/null +++ b/drivers/virtio/virtio_msg.h @@ -0,0 +1,56 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ +/*
+ * Virtio message transport header.
+ *
+ * Copyright (C) 2025 Google LLC and Linaro.
+ * Viresh Kumar viresh.kumar@linaro.org
+ */
+#ifndef _DRIVERS_VIRTIO_VIRTIO_MSG_H +#define _DRIVERS_VIRTIO_VIRTIO_MSG_H
+#include <linux/virtio.h> +#include <uapi/linux/virtio_msg.h>
+struct virtio_msg_device;
+/*
+ * struct virtio_msg_ops - Virtio message bus operations.
+ *
+ * @bus_info: Return bus information.
+ * @transfer: Transfer a message.
+ * @synchronize_cbs: Synchronize with the virtqueue callbacks (optional).
+ * @release: Release the resources corresponding to the device (optional).
+ */
+struct virtio_msg_ops {
+	const char *(*bus_info)(struct virtio_msg_device *vmdev, u16 *msg_size, u32 *rev);
+	int (*transfer)(struct virtio_msg_device *vmdev, struct virtio_msg *request,
+			struct virtio_msg *response);
+	void (*synchronize_cbs)(struct virtio_msg_device *vmdev);
+	void (*release)(struct virtio_msg_device *vmdev);
+};
+
+/*
+ * Representation of a device using virtio message
+ * transport.
+ */
+struct virtio_msg_device {
+	struct virtio_device vdev;
+	struct virtio_msg_ops *ops;
+	const char *bus_name;
+	void *bus_data;
+	u32 generation_count;
+	u32 config_size;
+	u16 msg_size;
+	u16 dev_id;
+
+	struct virtio_msg *request;
+	struct virtio_msg *response;
+};
+int virtio_msg_register(struct virtio_msg_device *vmdev); +void virtio_msg_unregister(struct virtio_msg_device *vmdev);
+void virtio_msg_prepare(struct virtio_msg *vmsg, u8 msg_id, u16 payload_size); +int virtio_msg_event(struct virtio_msg_device *vmdev, struct virtio_msg *vmsg);
+#endif /* _DRIVERS_VIRTIO_VIRTIO_MSG_H */ diff --git a/include/uapi/linux/virtio_msg.h b/include/uapi/linux/virtio_msg.h new file mode 100644 index 000000000000..823bfbd6ecf1 --- /dev/null +++ b/include/uapi/linux/virtio_msg.h @@ -0,0 +1,221 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ +/*
+ * Virtio message transport header.
+ *
+ * Copyright (c) 2025 Advanced Micro Devices, Inc.
+ * Written by Edgar E. Iglesias edgar.iglesias@amd.com
+ *
+ * Copyright (C) 2025 Google LLC and Linaro.
+ * Viresh Kumar viresh.kumar@linaro.org
+ */
+#ifndef _LINUX_VIRTIO_MSG_H +#define _LINUX_VIRTIO_MSG_H
+#include <linux/types.h>
+/* Virtio message transport definitions */
+/* Message types */ +#define VIRTIO_MSG_DEVICE_INFO 0x02 +#define VIRTIO_MSG_GET_DEV_FEATURES 0x03 +#define VIRTIO_MSG_SET_DRV_FEATURES 0x04 +#define VIRTIO_MSG_GET_CONFIG 0x05 +#define VIRTIO_MSG_SET_CONFIG 0x06 +#define VIRTIO_MSG_GET_DEVICE_STATUS 0x07 +#define VIRTIO_MSG_SET_DEVICE_STATUS 0x08 +#define VIRTIO_MSG_GET_VQUEUE 0x09 +#define VIRTIO_MSG_SET_VQUEUE 0x0a +#define VIRTIO_MSG_RESET_VQUEUE 0x0b +#define VIRTIO_MSG_GET_SHM 0x0c +#define VIRTIO_MSG_EVENT_CONFIG 0x40 +#define VIRTIO_MSG_EVENT_AVAIL 0x41 +#define VIRTIO_MSG_EVENT_USED 0x42 +#define VIRTIO_MSG_MAX VIRTIO_MSG_EVENT_USED
+#define VIRTIO_MSG_MIN_SIZE 44
In lots of functions you do not check that the message length is at least this value. It could be a good idea to add compile-time asserts where the code makes assumptions about the minimum message length required.
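For example, a userspace sketch of the kind of check I mean, with stdint types mirroring the packed layouts in this header (in-kernel these would be BUILD_BUG_ON() or static_assert() next to the message handlers; the struct names here just mirror the patch):

```c
#include <assert.h>
#include <stdint.h>

#define VIRTIO_MSG_MIN_SIZE 44

/* Common message header: type, msg_id, dev_id, msg_size (packed, LE). */
struct __attribute__((packed)) virtio_msg_hdr {
	uint8_t  type;
	uint8_t  msg_id;
	uint16_t dev_id;
	uint16_t msg_size;
};

/* Largest fixed-size payload in this header file. */
struct __attribute__((packed)) get_vqueue_resp {
	uint32_t index;
	uint32_t max_size;
	uint32_t size;
	uint64_t descriptor_addr;
	uint64_t driver_addr;
	uint64_t device_addr;
};

/* Every fixed-size payload must fit in a minimum-sized message, so
 * handlers that only check msg_size >= VIRTIO_MSG_MIN_SIZE stay safe. */
static_assert(sizeof(struct virtio_msg_hdr) +
	      sizeof(struct get_vqueue_resp) <= VIRTIO_MSG_MIN_SIZE,
	      "get_vqueue_resp must fit in a minimum-sized message");
```

That way a change to a payload struct that silently overflows the minimum message size fails the build instead of corrupting messages at runtime.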
Cheers Bertrand
+#define VIRTIO_MSG_MAX_SIZE 65536 +#define VIRTIO_MSG_REVISION_1 0x1
+/* Message payload format */
+struct get_device_info_resp {
+	__le32 device_id;
+	__le32 vendor_id;
+	__le32 num_feature_bits;
+	__le32 config_size;
+	__le32 max_vq_count;
+	__le16 admin_vq_start_idx;
+	__le16 admin_vq_count;
+} __attribute__((packed));
+
+struct get_features {
+	__le32 index;
+	__le32 num;
+} __attribute__((packed));
+
+struct get_features_resp {
+	__le32 index;
+	__le32 num;
+	__u8 features[];
+} __attribute__((packed));
+
+struct set_features {
+	__le32 index;
+	__le32 num;
+	__u8 features[];
+} __attribute__((packed));
+
+struct get_config {
+	__le32 offset;
+	__le32 size;
+} __attribute__((packed));
+
+struct get_config_resp {
+	__le32 generation;
+	__le32 offset;
+	__le32 size;
+	__u8 config[];
+} __attribute__((packed));
+
+struct set_config {
+	__le32 generation;
+	__le32 offset;
+	__le32 size;
+	__u8 config[];
+} __attribute__((packed));
+
+struct set_config_resp {
+	__le32 generation;
+	__le32 offset;
+	__le32 size;
+	__u8 config[];
+} __attribute__((packed));
+
+struct get_device_status_resp {
+	__le32 status;
+} __attribute__((packed));
+
+struct set_device_status {
+	__le32 status;
+} __attribute__((packed));
+
+struct set_device_status_resp {
+	__le32 status;
+} __attribute__((packed));
+
+struct get_vqueue {
+	__le32 index;
+} __attribute__((packed));
+
+struct get_vqueue_resp {
+	__le32 index;
+	__le32 max_size;
+	__le32 size;
+	__le64 descriptor_addr;
+	__le64 driver_addr;
+	__le64 device_addr;
+} __attribute__((packed));
+
+struct set_vqueue {
+	__le32 index;
+	__le32 unused;
+	__le32 size;
+	__le64 descriptor_addr;
+	__le64 driver_addr;
+	__le64 device_addr;
+} __attribute__((packed));
+
+struct reset_vqueue {
+	__le32 index;
+} __attribute__((packed));
+
+struct get_shm {
+	__le32 index;
+} __attribute__((packed));
+
+struct get_shm_resp {
+	__le32 index;
+	__le32 count;
+	__le32 addr;
+} __attribute__((packed));
+
+struct event_config {
+	__le32 status;
+	__le32 generation;
+	__le32 offset;
+	__le32 size;
+	__u8 config[];
+} __attribute__((packed));
+
+struct event_avail {
+	__le32 index;
+	#define VIRTIO_MSG_EVENT_AVAIL_WRAP_SHIFT 31
+	__le32 next_offset_wrap;
+} __attribute__((packed));
+
+struct event_used {
+	__le32 index;
+} __attribute__((packed));
+
+struct virtio_msg {
+	#define VIRTIO_MSG_TYPE_REQUEST (0 << 0)
+	#define VIRTIO_MSG_TYPE_RESPONSE (1 << 0)
+	#define VIRTIO_MSG_TYPE_TRANSPORT (0 << 1)
+	#define VIRTIO_MSG_TYPE_BUS (1 << 1)
+	__u8 type;
+
+	__u8 msg_id;
+	__le16 dev_id;
+	__le16 msg_size;
+	__u8 payload[];
+} __attribute__((packed));
+
+static inline void *virtio_msg_payload(struct virtio_msg *vmsg)
+{
+	return &vmsg->payload;
+}
+/* Virtio message bus definitions */
+/* Message types */ +#define VIRTIO_MSG_BUS_GET_DEVICES 0x02 +#define VIRTIO_MSG_BUS_PING 0x03 +#define VIRTIO_MSG_BUS_EVENT_DEVICE 0x40
+struct bus_get_devices {
+	__le16 offset;
+	__le16 num;
+} __attribute__((packed));
+
+struct bus_get_devices_resp {
+	__le16 offset;
+	__le16 num;
+	__le16 next_offset;
+	__u8 devices[];
+} __attribute__((packed));
+
+struct bus_event_device {
+	__le16 dev_num;
+	#define VIRTIO_MSG_BUS_EVENT_DEV_STATE_READY 0x1
+	#define VIRTIO_MSG_BUS_EVENT_DEV_STATE_REMOVED 0x2
+	__le16 dev_state;
+} __attribute__((packed));
+
+struct bus_ping {
+	__le32 data;
+} __attribute__((packed));
+
+struct bus_ping_resp {
+	__le32 data;
+} __attribute__((packed));
+
+struct bus_status {
+	#define VIRTIO_BUS_STATE_RESET 0x0
+	#define VIRTIO_BUS_STATE_SHUTDOWN 0x1
+	#define VIRTIO_BUS_STATE_SUSPEND 0x2
+	#define VIRTIO_BUS_STATE_RESUME 0x4
+	__u8 state;
+} __attribute__((packed));
+#endif /* _LINUX_VIRTIO_MSG_H */
2.31.1.272.g89b43f80a514
Add support for an optional userspace interface to the virtio-msg transport via a per-bus miscdevice. When enabled by a bus implementation, this interface allows userspace to send and receive virtio messages through a character device node.
A separate device node is created for each bus that registers for userspace access, e.g., /dev/virtio-msg-N. This enables backend-side components or test tools to interact with the transport layer directly from userspace.
Bus implementations that do not require userspace interaction can omit this interface entirely.
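To illustrate how a userspace tool might consume this interface, here is a minimal sketch of the reader side. The device node name follows the commit message (`/dev/virtio-msg-N`); the parsing helper and struct below are my own illustration of the wire header from include/uapi/linux/virtio_msg.h, not part of the patch:

```c
#include <stdint.h>

/* Wire header from include/uapi/linux/virtio_msg.h: packed, little endian. */
struct msg_hdr {
	uint8_t  type;
	uint8_t  msg_id;
	uint16_t dev_id;
	uint16_t msg_size;
};

/* Decode the 6-byte little-endian header of a message read from the
 * per-bus node, e.g.:
 *
 *	int fd = open("/dev/virtio-msg-0", O_RDWR);
 *	uint8_t buf[44];
 *	read(fd, buf, sizeof(buf));  // blocks until the bus queues a message
 */
static struct msg_hdr parse_hdr(const uint8_t *buf)
{
	struct msg_hdr h;

	h.type = buf[0];
	h.msg_id = buf[1];
	h.dev_id = (uint16_t)(buf[2] | (buf[3] << 8));
	h.msg_size = (uint16_t)(buf[4] | (buf[5] << 8));
	return h;
}
```

A backend would then dispatch on `msg_id`, build a response with the `VIRTIO_MSG_TYPE_RESPONSE` bit set in `type`, and write it back to the same fd.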
Signed-off-by: Viresh Kumar viresh.kumar@linaro.org --- drivers/virtio/Kconfig | 8 +++ drivers/virtio/Makefile | 4 +- drivers/virtio/virtio_msg.h | 32 +++++++++ drivers/virtio/virtio_msg_user.c | 119 +++++++++++++++++++++++++++++++ 4 files changed, 162 insertions(+), 1 deletion(-) create mode 100644 drivers/virtio/virtio_msg_user.c
diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig index 690ac98850b6..a86025c9e008 100644 --- a/drivers/virtio/Kconfig +++ b/drivers/virtio/Kconfig @@ -178,6 +178,14 @@ config VIRTIO_MSG This enables support for Virtio message transport. This option is selected by any driver which implements the virtio message bus.
+config VIRTIO_MSG_USER + tristate "Userspace interface for virtio message transport" + depends on VIRTIO_MSG + help + This enables userspace interface for Virtio message transport. This + can be used to read / write messages over virtio-msg transport from + userspace. + config VIRTIO_DMA_SHARED_BUFFER tristate depends on DMA_SHARED_BUFFER diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile index 3eff8ca72446..5b664c5f5f25 100644 --- a/drivers/virtio/Makefile +++ b/drivers/virtio/Makefile @@ -4,7 +4,9 @@ obj-$(CONFIG_VIRTIO_ANCHOR) += virtio_anchor.o obj-$(CONFIG_VIRTIO_PCI_LIB) += virtio_pci_modern_dev.o obj-$(CONFIG_VIRTIO_PCI_LIB_LEGACY) += virtio_pci_legacy_dev.o obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o -obj-$(CONFIG_VIRTIO_MSG) += virtio_msg.o +virtio_msg_transport-y := virtio_msg.o +virtio_msg_transport-$(CONFIG_VIRTIO_MSG_USER) += virtio_msg_user.o +obj-$(CONFIG_VIRTIO_MSG) += virtio_msg_transport.o obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o diff --git a/drivers/virtio/virtio_msg.h b/drivers/virtio/virtio_msg.h index 099bb2f0f679..b42c9d6f483a 100644 --- a/drivers/virtio/virtio_msg.h +++ b/drivers/virtio/virtio_msg.h @@ -9,6 +9,8 @@ #ifndef _DRIVERS_VIRTIO_VIRTIO_MSG_H #define _DRIVERS_VIRTIO_VIRTIO_MSG_H
+#include <linux/completion.h> +#include <linux/miscdevice.h> #include <linux/virtio.h> #include <uapi/linux/virtio_msg.h>
@@ -53,4 +55,34 @@ void virtio_msg_unregister(struct virtio_msg_device *vmdev); void virtio_msg_prepare(struct virtio_msg *vmsg, u8 msg_id, u16 payload_size); int virtio_msg_event(struct virtio_msg_device *vmdev, struct virtio_msg *vmsg);
+/* Virtio msg userspace interface */ +struct virtio_msg_user_device; + +struct virtio_msg_user_ops { + int (*handle)(struct virtio_msg_user_device *vmudev, struct virtio_msg *vmsg); +}; + +/* Host side device using virtio message */ +struct virtio_msg_user_device { + struct virtio_msg_user_ops *ops; + struct miscdevice misc; + struct completion r_completion; + struct completion w_completion; + struct virtio_msg *vmsg; + struct device *parent; + char name[15]; +}; + +#if IS_REACHABLE(CONFIG_VIRTIO_MSG_USER) +int virtio_msg_user_register(struct virtio_msg_user_device *vmudev); +void virtio_msg_user_unregister(struct virtio_msg_user_device *vmudev); +#else +static inline int virtio_msg_user_register(struct virtio_msg_user_device *vmudev) +{ + return -EOPNOTSUPP; +} + +static inline void virtio_msg_user_unregister(struct virtio_msg_user_device *vmudev) {} +#endif /* CONFIG_VIRTIO_MSG_USER */ + #endif /* _DRIVERS_VIRTIO_VIRTIO_MSG_H */ diff --git a/drivers/virtio/virtio_msg_user.c b/drivers/virtio/virtio_msg_user.c new file mode 100644 index 000000000000..479df68060a5 --- /dev/null +++ b/drivers/virtio/virtio_msg_user.c @@ -0,0 +1,119 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Virtio message transport user API. + * + * Copyright (C) 2025 Google LLC and Linaro. 
+ * Viresh Kumar <viresh.kumar@linaro.org>
+ */
+
+#define pr_fmt(fmt) "virtio-msg: " fmt
+
+#include <linux/err.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include "virtio_msg.h"
+
+#define to_virtio_msg_user_device(_misc) \
+	container_of(_misc, struct virtio_msg_user_device, misc)
+
+static ssize_t vmsg_miscdev_read(struct file *file, char __user *buf,
+				 size_t count, loff_t *pos)
+{
+	struct miscdevice *misc = file->private_data;
+	struct virtio_msg_user_device *vmudev = to_virtio_msg_user_device(misc);
+	struct device *dev = vmudev->parent;
+	int ret;
+
+	if (count < VIRTIO_MSG_MIN_SIZE) {
+		dev_err(dev, "Trying to read message of incorrect size: %zu\n",
+			count);
+		return -EINVAL;
+	}
+
+	/* Wait for the message */
+	ret = wait_for_completion_interruptible(&vmudev->r_completion);
+	if (ret < 0) {
+		dev_err(dev, "Interrupted while waiting for response: %d\n", ret);
+		return ret;
+	}
+
+	WARN_ON(!vmudev->vmsg);
+
+	/* The "vmsg" pointer is filled by the bus driver before waking up */
+	if (copy_to_user(buf, vmudev->vmsg, count) != 0)
+		return -EFAULT;
+
+	vmudev->vmsg = NULL;
+
+	return count;
+}
+
+static ssize_t vmsg_miscdev_write(struct file *file, const char __user *buf,
+				  size_t count, loff_t *pos)
+{
+	struct miscdevice *misc = file->private_data;
+	struct virtio_msg_user_device *vmudev = to_virtio_msg_user_device(misc);
+	struct virtio_msg *vmsg __free(kfree) = NULL;
+
+	if (count < VIRTIO_MSG_MIN_SIZE) {
+		dev_err(vmudev->parent, "Trying to write message of incorrect size: %zu\n",
+			count);
+		return -EINVAL;
+	}
+
+	vmsg = kzalloc(count, GFP_KERNEL);
+	if (!vmsg)
+		return -ENOMEM;
+
+	if (copy_from_user(vmsg, buf, count) != 0)
+		return -EFAULT;
+
+	vmudev->ops->handle(vmudev, vmsg);
+
+	/* Wake up the handler only for responses */
+	if (vmsg->type & VIRTIO_MSG_TYPE_RESPONSE)
+		complete(&vmudev->w_completion);
+
+	return count;
+}
+
+static const struct file_operations vmsg_miscdev_fops = {
+	.owner = THIS_MODULE,
+	.read = vmsg_miscdev_read,
+	.write = vmsg_miscdev_write,
+};
+
+int virtio_msg_user_register(struct virtio_msg_user_device *vmudev)
+{
+	static u8 vmsg_user_device_count;
+	int ret;
+
+	if (!vmudev || !vmudev->ops)
+		return -EINVAL;
+
+	init_completion(&vmudev->r_completion);
+	init_completion(&vmudev->w_completion);
+
+	vmudev->misc.parent = vmudev->parent;
+	vmudev->misc.minor = MISC_DYNAMIC_MINOR;
+	vmudev->misc.fops = &vmsg_miscdev_fops;
+	vmudev->misc.name = vmudev->name;
+	snprintf(vmudev->name, sizeof(vmudev->name), "virtio-msg-%d",
+		 vmsg_user_device_count);
+
+	ret = misc_register(&vmudev->misc);
+	if (ret)
+		return ret;
+
+	vmsg_user_device_count++;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(virtio_msg_user_register);
+
+void virtio_msg_user_unregister(struct virtio_msg_user_device *vmudev)
+{
+	misc_deregister(&vmudev->misc);
+}
+EXPORT_SYMBOL_GPL(virtio_msg_user_unregister);
Introduce a virtio-msg bus implementation based on the Arm FF-A (Firmware Framework for Arm) communication interface.
This bus enables virtio-msg transport over secure channels typically used between the normal world OS and a secure OS or hypervisor. It leverages the standardized FF-A interface to exchange messages with a remote backend service.
The implementation integrates with the core virtio-msg transport and uses FF-A service calls to transmit and receive messages.
Optionally, this bus supports attaching a reserved-memory region to constrain DMA-coherent and streaming DMA allocations to a well-defined contiguous area. This memory can be pre-mapped on the remote side, reducing runtime overhead and preventing accidental sharing of unrelated pages due to page-granularity mapping.
To enable reserved memory, the following device tree node should be defined (the node must be named "vmsgffa"):
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		vmsgffa@100000000 {
			compatible = "restricted-dma-pool";
			reg = <0x00000001 0x00000000 0x0 0x00400000>; /* 4 MiB */
		};
	};
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/virtio/Kconfig              |  12 +-
 drivers/virtio/Makefile             |   1 +
 drivers/virtio/virtio_msg_ffa.c     | 501 ++++++++++++++++++++++++++++
 include/uapi/linux/virtio_msg_ffa.h |  94 ++++++
 4 files changed, 607 insertions(+), 1 deletion(-)
 create mode 100644 drivers/virtio/virtio_msg_ffa.c
 create mode 100644 include/uapi/linux/virtio_msg_ffa.h
diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index a86025c9e008..683152477e3f 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -176,7 +176,8 @@ config VIRTIO_MSG
 	select VIRTIO
 	help
 	  This enables support for Virtio message transport. This option is
-	  selected by any driver which implements the virtio message bus.
+	  selected by any driver which implements the virtio message bus, such
+	  as VIRTIO_MSG_FFA.

 config VIRTIO_MSG_USER
 	tristate "Userspace interface for virtio message transport"
@@ -186,6 +187,15 @@ config VIRTIO_MSG_USER
 	  can be used to read / write messages over virtio-msg transport from
 	  userspace.

+config VIRTIO_MSG_FFA
+	tristate "FF-A bus driver for virtio message transport"
+	depends on ARM_FFA_TRANSPORT
+	select VIRTIO_MSG
+	help
+	  This implements a Virtio message bus based on ARM FF-A protocol.
+
+	  If unsure, say N.
+
 config VIRTIO_DMA_SHARED_BUFFER
 	tristate
 	depends on DMA_SHARED_BUFFER
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index 5b664c5f5f25..96ec0a9c4a7a 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o
 virtio_msg_transport-y := virtio_msg.o
 virtio_msg_transport-$(CONFIG_VIRTIO_MSG_USER) += virtio_msg_user.o
 obj-$(CONFIG_VIRTIO_MSG) += virtio_msg_transport.o
+obj-$(CONFIG_VIRTIO_MSG_FFA) += virtio_msg_ffa.o
 obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
 virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
 virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
diff --git a/drivers/virtio/virtio_msg_ffa.c b/drivers/virtio/virtio_msg_ffa.c
new file mode 100644
index 000000000000..8c7d43ba2f5a
--- /dev/null
+++ b/drivers/virtio/virtio_msg_ffa.c
@@ -0,0 +1,501 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * FF-A bus implementation for Virtio message transport.
+ *
+ * Copyright (C) 2025 Google LLC and Linaro.
+ * Viresh Kumar <viresh.kumar@linaro.org>
+ *
+ * This implements the FF-A (Arm Firmware Framework) bus for Virtio msg
+ * transport.
+ */ + +#define pr_fmt(fmt) "virtio-msg-ffa: " fmt + +#include <linux/arm_ffa.h> +#include <linux/err.h> +#include <linux/module.h> +#include <linux/of_reserved_mem.h> +#include <linux/pm.h> +#include <linux/slab.h> +#include <linux/types.h> +#include <linux/virtio.h> +#include <uapi/linux/virtio_msg_ffa.h> + +#include "virtio_msg.h" + +struct virtio_msg_indirect_data { + struct completion completion; + struct virtio_msg *response; +}; + +struct virtio_msg_device_data { + struct virtio_msg_device vmdev; + struct virtio_msg_indirect_data idata; +}; + +/* Represents FF-A corresponding to a partition */ +struct virtio_msg_ffa_device { + struct ffa_device *ffa_dev; + struct reserved_mem *rmem; + struct virtio_msg_indirect_data idata; + struct virtio_msg_device_data *vmdevs; + int (*send)(struct virtio_msg_ffa_device *vmfdev, + struct virtio_msg *request, + struct virtio_msg *response, + struct virtio_msg_indirect_data *idata); + int vmdev_count; + u16 msg_size; +}; + +#define to_vmdevdata(_vmdev) \ + container_of(_vmdev, struct virtio_msg_device_data, vmdev) +#define to_vmfdev(_vmdev) ((struct virtio_msg_ffa_device *)(_vmdev)->bus_data) + +static int vmsg_ffa_send_direct(struct virtio_msg_ffa_device *vmfdev, + struct virtio_msg *request, + struct virtio_msg *response, + struct virtio_msg_indirect_data *idata_unused) +{ + struct ffa_device *ffa_dev = vmfdev->ffa_dev; + struct ffa_send_direct_data2 ffa_data; + int ret; + + memcpy(&ffa_data, request, request->msg_size); + + ret = ffa_dev->ops->msg_ops->sync_send_receive2(ffa_dev, &ffa_data); + if (ret) { + dev_dbg(&ffa_dev->dev, + "Unable to send direct FF-A message: %d\n", ret); + return ret; + } + + if (response) + memcpy(response, &ffa_data, vmfdev->msg_size); + + return 0; +} + +static int vmsg_ffa_send_indirect(struct virtio_msg_ffa_device *vmfdev, + struct virtio_msg *request, + struct virtio_msg *response, + struct virtio_msg_indirect_data *idata) +{ + struct ffa_device *ffa_dev = vmfdev->ffa_dev; + struct device 
*dev = &ffa_dev->dev; + int ret; + + /* + * Store the response pointer in idata structure. This will be updated + * by vmsg_ffa_notifier_cb() later. + */ + idata->response = response; + + ret = ffa_dev->ops->msg_ops->indirect_send(ffa_dev, request, + request->msg_size); + if (ret) { + dev_err(dev, "Failed sending indirect FF-A message: %d\n", ret); + return ret; + } + + /* + * Always wait for the operation to finish, otherwise we may start + * another operation while the previous one is still ongoing. + */ + ret = wait_for_completion_interruptible_timeout(&idata->completion, 1000); + if (ret < 0) { + dev_err(dev, "Interrupted - waiting for a response: %d\n", ret); + } else if (!ret) { + dev_err(dev, "Timed out waiting for a response\n"); + ret = -ETIMEDOUT; + } else { + ret = 0; + } + + return ret; +} + +static int vmsg_ffa_send(struct virtio_msg_ffa_device *vmfdev, + struct virtio_msg *request, + struct virtio_msg *response, + struct virtio_msg_indirect_data *idata) +{ + int ret; + + /* Try direct messaging first, fallback to indirect */ + ret = vmsg_ffa_send_direct(vmfdev, request, response, idata); + if (!ret) { + vmfdev->send = vmsg_ffa_send_direct; + return 0; + } + + /* Fallback to indirect messaging */ + vmfdev->send = vmsg_ffa_send_indirect; + return vmfdev->send(vmfdev, request, response, idata); +} + +static struct virtio_msg_device * +find_vmdev(struct virtio_msg_ffa_device *vmfdev, u16 dev_id) +{ + int i; + + /* Find the device corresponding to a dev_id */ + for (i = 0; i < vmfdev->vmdev_count; i++) { + if (vmfdev->vmdevs[i].vmdev.dev_id == dev_id) + return &vmfdev->vmdevs[i].vmdev; + } + + dev_err(&vmfdev->ffa_dev->dev, "Couldn't find matching vmdev: %d\n", + dev_id); + return NULL; +} + +static void vmsg_ffa_notifier_cb(int notify_id, void *cb_data, void *buf) +{ + struct virtio_msg_ffa_device *vmfdev = cb_data; + struct ffa_device *ffa_dev = vmfdev->ffa_dev; + struct virtio_msg_indirect_data *idata; + struct virtio_msg_device *vmdev; + struct 
virtio_msg *vmsg = buf; + + /* + * We can either receive a response message (to a previously sent + * request), or an EVENT_USED request message. + */ + if (vmsg->type & VIRTIO_MSG_TYPE_RESPONSE) { + if (vmsg->type & VIRTIO_MSG_TYPE_BUS) { + idata = &vmfdev->idata; + } else { + vmdev = find_vmdev(vmfdev, le16_to_cpu(vmsg->dev_id)); + if (!vmdev) + return; + + idata = &to_vmdevdata(vmdev)->idata; + } + + if (idata->response) + memcpy(idata->response, vmsg, vmsg->msg_size); + + complete(&idata->completion); + + return; + } + + /* Only support EVENT_USED virtio request messages */ + if (vmsg->type & VIRTIO_MSG_TYPE_BUS || + vmsg->msg_id != VIRTIO_MSG_EVENT_USED) { + dev_err(&ffa_dev->dev, "Unsupported message received\n"); + return; + } + + vmdev = find_vmdev(vmfdev, le16_to_cpu(vmsg->dev_id)); + if (!vmdev) + return; + + virtio_msg_event(vmdev, vmsg); +} + +static int vmsg_ffa_notify_setup(struct virtio_msg_ffa_device *vmfdev) +{ + struct ffa_device *ffa_dev = vmfdev->ffa_dev; + int ret; + + ret = ffa_dev->ops->notifier_ops->fwk_notify_request(ffa_dev, + &vmsg_ffa_notifier_cb, vmfdev, 0); + if (ret) + dev_err(&ffa_dev->dev, "Unable to request notifier: %d\n", ret); + + return ret; +} + +static void vmsg_ffa_notify_cleanup(struct virtio_msg_ffa_device *vmfdev) +{ + struct ffa_device *ffa_dev = vmfdev->ffa_dev; + int ret; + + ret = ffa_dev->ops->notifier_ops->fwk_notify_relinquish(ffa_dev, 0); + if (ret) + dev_err(&ffa_dev->dev, "Unable to relinquish notifier: %d\n", ret); +} + +static int vmsg_ffa_bus_get_devices(struct virtio_msg_ffa_device *vmfdev, + u16 *map, u16 *count) +{ + u8 req_buf[VIRTIO_MSG_FFA_BUS_MSG_SIZE]; + u8 res_buf[VIRTIO_MSG_FFA_BUS_MSG_SIZE]; + struct virtio_msg *request = (struct virtio_msg *)&req_buf; + struct virtio_msg *response = (struct virtio_msg *)&res_buf; + struct bus_get_devices *req_payload = virtio_msg_payload(request); + struct bus_get_devices_resp *res_payload = virtio_msg_payload(response); + int ret; + + virtio_msg_prepare(request, 
VIRTIO_MSG_BUS_GET_DEVICES, + sizeof(*req_payload)); + req_payload->offset = 0; + req_payload->num = cpu_to_le16(0xFF); + + ret = vmfdev->send(vmfdev, request, response, &vmfdev->idata); + if (ret < 0) + return ret; + + *count = le16_to_cpu(res_payload->num); + if (!*count) + return -ENODEV; + + if (res_payload->offset != req_payload->offset) + return -EINVAL; + + /* Support up to 16 devices for now */ + if (res_payload->next_offset) + return -EINVAL; + + map[0] = res_payload->devices[0]; + map[1] = res_payload->devices[1]; + + return 0; +} + +static int vmsg_ffa_bus_version(struct virtio_msg_ffa_device *vmfdev) +{ + u8 req_buf[VIRTIO_MSG_FFA_BUS_MSG_SIZE]; + u8 res_buf[VIRTIO_MSG_FFA_BUS_MSG_SIZE]; + struct virtio_msg *request = (struct virtio_msg *)&req_buf; + struct virtio_msg *response = (struct virtio_msg *)&res_buf; + struct bus_ffa_version *req_payload = virtio_msg_payload(request); + struct bus_ffa_version_resp *res_payload = virtio_msg_payload(response); + u32 features; + int ret; + + virtio_msg_prepare(request, VIRTIO_MSG_FFA_BUS_VERSION, + sizeof(*req_payload)); + req_payload->driver_version = cpu_to_le32(VIRTIO_MSG_FFA_BUS_VERSION_1_0); + req_payload->vmsg_revision = cpu_to_le32(VIRTIO_MSG_REVISION_1); + req_payload->vmsg_features = cpu_to_le32(VIRTIO_MSG_FEATURES); + req_payload->features = cpu_to_le32(VIRTIO_MSG_FFA_FEATURE_BOTH_SUPP); + req_payload->area_num = cpu_to_le16(VIRTIO_MSG_FFA_AREA_ID_MAX); + + ret = vmfdev->send(vmfdev, request, response, &vmfdev->idata); + if (ret < 0) + return ret; + + if (le32_to_cpu(res_payload->device_version) != VIRTIO_MSG_FFA_BUS_VERSION_1_0) + return -EINVAL; + + if (le32_to_cpu(res_payload->vmsg_revision) != VIRTIO_MSG_REVISION_1) + return -EINVAL; + + if (le32_to_cpu(res_payload->vmsg_features) != VIRTIO_MSG_FEATURES) + return -EINVAL; + + features = le32_to_cpu(res_payload->features); + + /* + * - Direct message must be supported if it already worked. 
+ * - Indirect message must be supported if it already worked + * - And direct message must not be supported since it didn't work. + */ + if ((vmfdev->send == vmsg_ffa_send_direct && + !(features & VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_SUPP)) || + (vmfdev->send == vmsg_ffa_send_indirect && + (!(features & VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_SUPP) || + (features & VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_SUPP)))) { + dev_err(&vmfdev->ffa_dev->dev, "Invalid features\n"); + return -EINVAL; + } + + return 0; +} + +static int virtio_msg_ffa_transfer(struct virtio_msg_device *vmdev, + struct virtio_msg *request, + struct virtio_msg *response) +{ + struct virtio_msg_indirect_data *idata = &to_vmdevdata(vmdev)->idata; + struct virtio_msg_ffa_device *vmfdev = to_vmfdev(vmdev); + + return vmfdev->send(vmfdev, request, response, idata); +} + +static const char *virtio_msg_ffa_bus_info(struct virtio_msg_device *vmdev, + u16 *msg_size, u32 *rev) +{ + struct virtio_msg_ffa_device *vmfdev = to_vmfdev(vmdev); + + *msg_size = vmfdev->msg_size; + *rev = VIRTIO_MSG_REVISION_1; + + return dev_name(&vmfdev->ffa_dev->dev); +} + +static struct virtio_msg_ops vmf_ops = { + .transfer = virtio_msg_ffa_transfer, + .bus_info = virtio_msg_ffa_bus_info, +}; + +static void remove_vmdevs(struct virtio_msg_ffa_device *vmfdev, int count) +{ + while (count--) + virtio_msg_unregister(&vmfdev->vmdevs[count].vmdev); +} + +static int virtio_msg_ffa_probe(struct ffa_device *ffa_dev) +{ + struct virtio_msg_ffa_device *vmfdev; + struct device *dev = &ffa_dev->dev; + struct virtio_msg_device *vmdev; + unsigned long devices = 0; + int ret, i = 0, bit; + u16 count; + + vmfdev = devm_kzalloc(dev, sizeof(*vmfdev), GFP_KERNEL); + if (!vmfdev) + return -ENOMEM; + + vmfdev->ffa_dev = ffa_dev; + vmfdev->send = vmsg_ffa_send; + vmfdev->msg_size = VIRTIO_MSG_FFA_BUS_MSG_SIZE; + ffa_dev_set_drvdata(ffa_dev, vmfdev); + init_completion(&vmfdev->idata.completion); + + ret = vmsg_ffa_notify_setup(vmfdev); + if (ret) + return ret; + 
+ ret = vmsg_ffa_bus_version(vmfdev); + if (ret) + goto notify_cleanup; + + ret = vmsg_ffa_bus_get_devices(vmfdev, (u16 *)&devices, &count); + if (ret) + goto notify_cleanup; + + ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64)); + if (ret) + dev_warn(dev, "Failed to enable 64-bit or 32-bit DMA\n"); + + vmfdev->rmem = of_reserved_mem_lookup_by_name("vmsgffa"); + if (!IS_ERR(vmfdev->rmem)) { + ret = reserved_mem_device_init(dev, vmfdev->rmem); + if (ret) + goto rmem_free; + } else { + dev_info(dev, "Continuing without reserved-memory block\n"); + } + + vmfdev->vmdevs = devm_kcalloc(dev, count, sizeof(*vmfdev->vmdevs), + GFP_KERNEL); + if (!vmfdev->vmdevs) { + ret = -ENOMEM; + goto rmem_free; + } + vmfdev->vmdev_count = count; + + for_each_set_bit(bit, &devices, sizeof(devices)) { + init_completion(&vmfdev->vmdevs[i].idata.completion); + vmdev = &vmfdev->vmdevs[i].vmdev; + vmdev->dev_id = bit; + vmdev->ops = &vmf_ops; + vmdev->vdev.dev.parent = dev; + vmdev->bus_data = vmfdev; + + ret = virtio_msg_register(vmdev); + if (ret) { + dev_err(dev, "Failed to register virtio-msg device (%d)\n", ret); + goto unregister; + } + + i++; + } + + return 0; + +unregister: + remove_vmdevs(vmfdev, i); +rmem_free: + if (!IS_ERR(vmfdev->rmem)) + of_reserved_mem_device_release(dev); +notify_cleanup: + vmsg_ffa_notify_cleanup(vmfdev); + return ret; +} + +static void virtio_msg_ffa_remove(struct ffa_device *ffa_dev) +{ + struct virtio_msg_ffa_device *vmfdev = ffa_dev->dev.driver_data; + + remove_vmdevs(vmfdev, vmfdev->vmdev_count); + + if (!IS_ERR(vmfdev->rmem)) + of_reserved_mem_device_release(&ffa_dev->dev); + + vmsg_ffa_notify_cleanup(vmfdev); +} + +static const struct ffa_device_id virtio_msg_ffa_device_ids[] = { + /* c66028b5-2498-4aa1-9de7-77da6122abf0 */ + { UUID_INIT(0xc66028b5, 0x2498, 0x4aa1, + 0x9d, 0xe7, 0x77, 0xda, 0x61, 0x22, 0xab, 0xf0) }, + {} +}; + +static int __maybe_unused virtio_msg_ffa_suspend(struct device *dev) +{ + struct virtio_msg_ffa_device *vmfdev = 
dev_get_drvdata(dev); + int ret, i, index; + + for (i = 0; i < vmfdev->vmdev_count; i++) { + index = vmfdev->vmdev_count - i - 1; + ret = virtio_device_freeze(&vmfdev->vmdevs[index].vmdev.vdev); + if (ret) + return ret; + } + + return 0; +} + +static int __maybe_unused virtio_msg_ffa_resume(struct device *dev) +{ + struct virtio_msg_ffa_device *vmfdev = dev_get_drvdata(dev); + int ret, i; + + for (i = 0; i < vmfdev->vmdev_count; i++) { + ret = virtio_device_restore(&vmfdev->vmdevs[i].vmdev.vdev); + if (ret) + return ret; + } + + return 0; +} + +static const struct dev_pm_ops virtio_msg_ffa_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(virtio_msg_ffa_suspend, virtio_msg_ffa_resume) +}; + +static struct ffa_driver virtio_msg_ffa_driver = { + .name = "virtio-msg-ffa", + .probe = virtio_msg_ffa_probe, + .remove = virtio_msg_ffa_remove, + .id_table = virtio_msg_ffa_device_ids, + .driver = { + .pm = &virtio_msg_ffa_pm_ops, + }, +}; + +static int virtio_msg_ffa_init(void) +{ + if (IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT)) + return ffa_register(&virtio_msg_ffa_driver); + else + return -EOPNOTSUPP; +} +module_init(virtio_msg_ffa_init); + +static void virtio_msg_ffa_exit(void) +{ + if (IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT)) + ffa_unregister(&virtio_msg_ffa_driver); +} +module_exit(virtio_msg_ffa_exit); + +MODULE_AUTHOR("Viresh Kumar viresh.kumar@linaro.org"); +MODULE_DESCRIPTION("Virtio message FF-A bus driver"); +MODULE_LICENSE("GPL"); diff --git a/include/uapi/linux/virtio_msg_ffa.h b/include/uapi/linux/virtio_msg_ffa.h new file mode 100644 index 000000000000..adcc081b483a --- /dev/null +++ b/include/uapi/linux/virtio_msg_ffa.h @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ +/* + * Virtio message FF-A (Arm Firmware Framework) bus header. + * + * Copyright (C) 2025 Google LLC and Linaro. 
+ * Viresh Kumar viresh.kumar@linaro.org + */ + +#ifndef _LINUX_VIRTIO_MSG_FFA_H +#define _LINUX_VIRTIO_MSG_FFA_H + +#include <linux/types.h> + +/* Message types */ +#define VIRTIO_MSG_FFA_BUS_VERSION 0x80 +#define VIRTIO_MSG_FFA_BUS_AREA_SHARE 0x81 +#define VIRTIO_MSG_FFA_BUS_AREA_UNSHARE 0x82 +#define VIRTIO_MSG_FFA_BUS_RESET 0x83 +#define VIRTIO_MSG_FFA_BUS_EVENT_POLL 0x84 +#define VIRTIO_MSG_FFA_BUS_AREA_RELEASE 0xC0 + +#define VIRTIO_MSG_FEATURES 0 +#define VIRTIO_MSG_FFA_BUS_VERSION_1_0 0x1 +#define VIRTIO_MSG_FFA_BUS_MSG_SIZE VIRTIO_MSG_MIN_SIZE + +#define VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_RX_SUPP (1 << 0) +#define VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_TX_SUPP (1 << 1) +#define VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_SUPP \ + (VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_RX_SUPP | \ + VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_TX_SUPP) + +#define VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_RX_SUPP (1 << 2) +#define VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_TX_SUPP (1 << 3) +#define VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_SUPP \ + (VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_RX_SUPP | \ + VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_TX_SUPP) + +#define VIRTIO_MSG_FFA_FEATURE_BOTH_SUPP \ + (VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_SUPP | \ + VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_SUPP) + +#define VIRTIO_MSG_FFA_AREA_ID_MAX 0xFF +#define VIRTIO_MSG_FFA_AREA_ID_OFFSET 56 +#define VIRTIO_MSG_FFA_OFFSET_MASK \ + ((ULL(1) << VIRTIO_MSG_FFA_AREA_ID_OFFSET) - 1) + +#define VIRTIO_MSG_FFA_RESULT_ERROR (1 << 0) +#define VIRTIO_MSG_FFA_RESULT_BUSY (1 << 1) + +/* Message payload format */ + +struct bus_ffa_version { + __le32 driver_version; + __le32 vmsg_revision; + __le32 vmsg_features; + __le32 features; + __le16 area_num; +} __attribute__((packed)); + +struct bus_ffa_version_resp { + __le32 device_version; + __le32 vmsg_revision; + __le32 vmsg_features; + __le32 features; + __le16 area_num; +} __attribute__((packed)); + +struct bus_area_share { + __le16 area_id; + __le64 mem_handle; + __le64 tag; + __le32 count; + __le32 attr; +} 
__attribute__((packed)); + +struct bus_area_share_resp { + __le16 area_id; + __le16 result; +} __attribute__((packed)); + +struct bus_area_unshare { + __le16 area_id; +} __attribute__((packed)); + +struct bus_area_unshare_resp { + __le16 area_id; + __le16 result; +} __attribute__((packed)); + +struct bus_area_release { + __le16 area_id; +} __attribute__((packed)); + +#endif /* _LINUX_VIRTIO_MSG_FFA_H */
Hi Viresh.
One small point below on BUSY handling.
It took me some time to understand how you decide which method to use to transfer messages. There might be something to rework in the future to make that a bit more straightforward: you use the "generic send" only once, to send the version message, so it might be better to open-code it directly in there.
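FWIW, here is a minimal userspace sketch of the probe-and-latch scheme as I read it (stand-in types and send functions, not the kernel structs): the first call probes direct messaging and latches whichever method worked into the function pointer, which is why the selection is easy to miss on a first read.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in types; these are NOT the structs from the patch. */
struct msg { int payload; };
struct bus;

typedef int (*send_fn)(struct bus *b, struct msg *req, struct msg *rsp);

struct bus {
	send_fn send;         /* currently selected transfer method */
	int direct_supported; /* emulates whether direct messaging works */
};

static int send_direct(struct bus *b, struct msg *req, struct msg *rsp)
{
	if (!b->direct_supported)
		return -1; /* direct messaging unavailable */
	rsp->payload = req->payload;
	return 0;
}

static int send_indirect(struct bus *b, struct msg *req, struct msg *rsp)
{
	rsp->payload = req->payload;
	return 0;
}

/*
 * One-shot probe, mirroring vmsg_ffa_send() in the patch: try direct
 * messaging first and latch the working method into b->send, so every
 * later call goes straight to the chosen function.
 */
static int send_probe(struct bus *b, struct msg *req, struct msg *rsp)
{
	if (!send_direct(b, req, rsp)) {
		b->send = send_direct;
		return 0;
	}

	b->send = send_indirect;
	return b->send(b, req, rsp);
}
```

After the first `b->send(...)` call, `b->send` no longer points at `send_probe`, which is the part that is easy to miss.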
On 22 Jul 2025, at 11:46, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> +static int vmsg_ffa_send_indirect(struct virtio_msg_ffa_device *vmfdev,
> +				   struct virtio_msg *request,
> +				   struct virtio_msg *response,
> +				   struct virtio_msg_indirect_data *idata)
> +{
> +	struct ffa_device *ffa_dev = vmfdev->ffa_dev;
> +	struct device *dev = &ffa_dev->dev;
> +	int ret;
> +
> +	/*
> +	 * Store the response pointer in idata structure. This will be updated
> +	 * by vmsg_ffa_notifier_cb() later.
> +	 */
> +	idata->response = response;
> +
> +	ret = ffa_dev->ops->msg_ops->indirect_send(ffa_dev, request,
> +						   request->msg_size);
> +	if (ret) {
> +		dev_err(dev, "Failed sending indirect FF-A message: %d\n", ret);
> +		return ret;
> +	}
Is the FF-A driver already looping on the BUSY case? If the rx buffer of the receiver is not available, you will get a BUSY back and will have to retry. I am not sure whether this should be handled in the FF-A driver or in here, though.
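To make the concern concrete, here is a minimal userspace sketch (mock send function and an invented retry bound, not the FF-A driver's actual API) of the bounded retry that one of the two layers would need around indirect_send():

```c
#include <assert.h>
#include <errno.h>

/*
 * Mock of an FF-A send that returns -EBUSY while the receiver's rx
 * buffer is unavailable: it fails twice, then succeeds.
 */
static int busy_left = 2;

static int mock_indirect_send(void)
{
	if (busy_left > 0) {
		busy_left--;
		return -EBUSY;
	}
	return 0;
}

/*
 * Bounded retry loop: keep resending while the callee reports BUSY,
 * up to max_tries attempts, and report -EBUSY if it never clears.
 */
static int send_with_retry(int max_tries)
{
	int ret = -EBUSY;

	for (int i = 0; i < max_tries && ret == -EBUSY; i++)
		ret = mock_indirect_send();

	return ret;
}
```

In the kernel one would presumably also want a small delay or backoff between attempts rather than a tight loop, wherever this ends up living.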
Cheers,
Bertrand
- /*
- Always wait for the operation to finish, otherwise we may start
- another operation while the previous one is still ongoing.
- */
- ret = wait_for_completion_interruptible_timeout(&idata->completion, 1000);
- if (ret < 0) {
- dev_err(dev, "Interrupted - waiting for a response: %d\n", ret);
- } else if (!ret) {
- dev_err(dev, "Timed out waiting for a response\n");
- ret = -ETIMEDOUT;
- } else {
- ret = 0;
- }
- return ret;
+}
+static int vmsg_ffa_send(struct virtio_msg_ffa_device *vmfdev,
- struct virtio_msg *request,
- struct virtio_msg *response,
- struct virtio_msg_indirect_data *idata)
+{
- int ret;
- /* Try direct messaging first, fallback to indirect */
- ret = vmsg_ffa_send_direct(vmfdev, request, response, idata);
- if (!ret) {
- vmfdev->send = vmsg_ffa_send_direct;
- return 0;
- }
- /* Fallback to indirect messaging */
- vmfdev->send = vmsg_ffa_send_indirect;
- return vmfdev->send(vmfdev, request, response, idata);
+}
+static struct virtio_msg_device *
+find_vmdev(struct virtio_msg_ffa_device *vmfdev, u16 dev_id)
+{
+	int i;
+
+	/* Find the device corresponding to a dev_id */
+	for (i = 0; i < vmfdev->vmdev_count; i++) {
+		if (vmfdev->vmdevs[i].vmdev.dev_id == dev_id)
+			return &vmfdev->vmdevs[i].vmdev;
+	}
+
+	dev_err(&vmfdev->ffa_dev->dev, "Couldn't find matching vmdev: %d\n",
+		dev_id);
+	return NULL;
+}
+
+static void vmsg_ffa_notifier_cb(int notify_id, void *cb_data, void *buf)
+{
+	struct virtio_msg_ffa_device *vmfdev = cb_data;
+	struct ffa_device *ffa_dev = vmfdev->ffa_dev;
+	struct virtio_msg_indirect_data *idata;
+	struct virtio_msg_device *vmdev;
+	struct virtio_msg *vmsg = buf;
+
+	/*
+	 * We can either receive a response message (to a previously sent
+	 * request), or an EVENT_USED request message.
+	 */
+	if (vmsg->type & VIRTIO_MSG_TYPE_RESPONSE) {
+		if (vmsg->type & VIRTIO_MSG_TYPE_BUS) {
+			idata = &vmfdev->idata;
+		} else {
+			vmdev = find_vmdev(vmfdev, le16_to_cpu(vmsg->dev_id));
+			if (!vmdev)
+				return;
+
+			idata = &to_vmdevdata(vmdev)->idata;
+		}
+
+		if (idata->response)
+			memcpy(idata->response, vmsg, vmsg->msg_size);
+
+		complete(&idata->completion);
+		return;
+	}
+
+	/* Only support EVENT_USED virtio request messages */
+	if (vmsg->type & VIRTIO_MSG_TYPE_BUS ||
+	    vmsg->msg_id != VIRTIO_MSG_EVENT_USED) {
+		dev_err(&ffa_dev->dev, "Unsupported message received\n");
+		return;
+	}
+
+	vmdev = find_vmdev(vmfdev, le16_to_cpu(vmsg->dev_id));
+	if (!vmdev)
+		return;
+
+	virtio_msg_event(vmdev, vmsg);
+}
+
+static int vmsg_ffa_notify_setup(struct virtio_msg_ffa_device *vmfdev)
+{
+	struct ffa_device *ffa_dev = vmfdev->ffa_dev;
+	int ret;
+
+	ret = ffa_dev->ops->notifier_ops->fwk_notify_request(ffa_dev,
+			&vmsg_ffa_notifier_cb, vmfdev, 0);
+	if (ret)
+		dev_err(&ffa_dev->dev, "Unable to request notifier: %d\n", ret);
+
+	return ret;
+}
+
+static void vmsg_ffa_notify_cleanup(struct virtio_msg_ffa_device *vmfdev)
+{
+	struct ffa_device *ffa_dev = vmfdev->ffa_dev;
+	int ret;
+
+	ret = ffa_dev->ops->notifier_ops->fwk_notify_relinquish(ffa_dev, 0);
+	if (ret)
+		dev_err(&ffa_dev->dev, "Unable to relinquish notifier: %d\n", ret);
+}
+static int vmsg_ffa_bus_get_devices(struct virtio_msg_ffa_device *vmfdev,
+				    u16 *map, u16 *count)
+{
+	u8 req_buf[VIRTIO_MSG_FFA_BUS_MSG_SIZE];
+	u8 res_buf[VIRTIO_MSG_FFA_BUS_MSG_SIZE];
+	struct virtio_msg *request = (struct virtio_msg *)&req_buf;
+	struct virtio_msg *response = (struct virtio_msg *)&res_buf;
+	struct bus_get_devices *req_payload = virtio_msg_payload(request);
+	struct bus_get_devices_resp *res_payload = virtio_msg_payload(response);
+	int ret;
+
+	virtio_msg_prepare(request, VIRTIO_MSG_BUS_GET_DEVICES,
+			   sizeof(*req_payload));
+	req_payload->offset = 0;
+	req_payload->num = cpu_to_le16(0xFF);
+
+	ret = vmfdev->send(vmfdev, request, response, &vmfdev->idata);
+	if (ret < 0)
+		return ret;
+
+	*count = le16_to_cpu(res_payload->num);
+	if (!*count)
+		return -ENODEV;
+
+	if (res_payload->offset != req_payload->offset)
+		return -EINVAL;
+
+	/* Support up to 16 devices for now */
+	if (res_payload->next_offset)
+		return -EINVAL;
+
+	map[0] = res_payload->devices[0];
+	map[1] = res_payload->devices[1];
+
+	return 0;
+}
+static int vmsg_ffa_bus_version(struct virtio_msg_ffa_device *vmfdev)
+{
+	u8 req_buf[VIRTIO_MSG_FFA_BUS_MSG_SIZE];
+	u8 res_buf[VIRTIO_MSG_FFA_BUS_MSG_SIZE];
+	struct virtio_msg *request = (struct virtio_msg *)&req_buf;
+	struct virtio_msg *response = (struct virtio_msg *)&res_buf;
+	struct bus_ffa_version *req_payload = virtio_msg_payload(request);
+	struct bus_ffa_version_resp *res_payload = virtio_msg_payload(response);
+	u32 features;
+	int ret;
+
+	virtio_msg_prepare(request, VIRTIO_MSG_FFA_BUS_VERSION,
+			   sizeof(*req_payload));
+	req_payload->driver_version = cpu_to_le32(VIRTIO_MSG_FFA_BUS_VERSION_1_0);
+	req_payload->vmsg_revision = cpu_to_le32(VIRTIO_MSG_REVISION_1);
+	req_payload->vmsg_features = cpu_to_le32(VIRTIO_MSG_FEATURES);
+	req_payload->features = cpu_to_le32(VIRTIO_MSG_FFA_FEATURE_BOTH_SUPP);
+	req_payload->area_num = cpu_to_le16(VIRTIO_MSG_FFA_AREA_ID_MAX);
+
+	ret = vmfdev->send(vmfdev, request, response, &vmfdev->idata);
+	if (ret < 0)
+		return ret;
+
+	if (le32_to_cpu(res_payload->device_version) != VIRTIO_MSG_FFA_BUS_VERSION_1_0)
+		return -EINVAL;
+
+	if (le32_to_cpu(res_payload->vmsg_revision) != VIRTIO_MSG_REVISION_1)
+		return -EINVAL;
+
+	if (le32_to_cpu(res_payload->vmsg_features) != VIRTIO_MSG_FEATURES)
+		return -EINVAL;
+
+	features = le32_to_cpu(res_payload->features);
+
+	/*
+	 * Direct message must be supported if it already worked.
+	 * Indirect message must be supported if it already worked,
+	 * and direct message must not be supported since it didn't work.
+	 */
+	if ((vmfdev->send == vmsg_ffa_send_direct &&
+	     !(features & VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_SUPP)) ||
+	    (vmfdev->send == vmsg_ffa_send_indirect &&
+	     (!(features & VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_SUPP) ||
+	      (features & VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_SUPP)))) {
+		dev_err(&vmfdev->ffa_dev->dev, "Invalid features\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+static int virtio_msg_ffa_transfer(struct virtio_msg_device *vmdev,
+				   struct virtio_msg *request,
+				   struct virtio_msg *response)
+{
+	struct virtio_msg_indirect_data *idata = &to_vmdevdata(vmdev)->idata;
+	struct virtio_msg_ffa_device *vmfdev = to_vmfdev(vmdev);
+
+	return vmfdev->send(vmfdev, request, response, idata);
+}
+
+static const char *virtio_msg_ffa_bus_info(struct virtio_msg_device *vmdev,
+					   u16 *msg_size, u32 *rev)
+{
+	struct virtio_msg_ffa_device *vmfdev = to_vmfdev(vmdev);
+
+	*msg_size = vmfdev->msg_size;
+	*rev = VIRTIO_MSG_REVISION_1;
+
+	return dev_name(&vmfdev->ffa_dev->dev);
+}
+
+static struct virtio_msg_ops vmf_ops = {
+	.transfer = virtio_msg_ffa_transfer,
+	.bus_info = virtio_msg_ffa_bus_info,
+};
+
+static void remove_vmdevs(struct virtio_msg_ffa_device *vmfdev, int count)
+{
+	while (count--)
+		virtio_msg_unregister(&vmfdev->vmdevs[count].vmdev);
+}
+static int virtio_msg_ffa_probe(struct ffa_device *ffa_dev)
+{
+	struct virtio_msg_ffa_device *vmfdev;
+	struct device *dev = &ffa_dev->dev;
+	struct virtio_msg_device *vmdev;
+	unsigned long devices = 0;
+	int ret, i = 0, bit;
+	u16 count;
+
+	vmfdev = devm_kzalloc(dev, sizeof(*vmfdev), GFP_KERNEL);
+	if (!vmfdev)
+		return -ENOMEM;
+
+	vmfdev->ffa_dev = ffa_dev;
+	vmfdev->send = vmsg_ffa_send;
+	vmfdev->msg_size = VIRTIO_MSG_FFA_BUS_MSG_SIZE;
+	ffa_dev_set_drvdata(ffa_dev, vmfdev);
+	init_completion(&vmfdev->idata.completion);
+
+	ret = vmsg_ffa_notify_setup(vmfdev);
+	if (ret)
+		return ret;
+
+	ret = vmsg_ffa_bus_version(vmfdev);
+	if (ret)
+		goto notify_cleanup;
+
+	ret = vmsg_ffa_bus_get_devices(vmfdev, (u16 *)&devices, &count);
+	if (ret)
+		goto notify_cleanup;
+
+	ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	if (ret)
+		dev_warn(dev, "Failed to enable 64-bit or 32-bit DMA\n");
+
+	vmfdev->rmem = of_reserved_mem_lookup_by_name("vmsgffa");
+	if (!IS_ERR(vmfdev->rmem)) {
+		ret = reserved_mem_device_init(dev, vmfdev->rmem);
+		if (ret)
+			goto rmem_free;
+	} else {
+		dev_info(dev, "Continuing without reserved-memory block\n");
+	}
+
+	vmfdev->vmdevs = devm_kcalloc(dev, count, sizeof(*vmfdev->vmdevs),
+				      GFP_KERNEL);
+	if (!vmfdev->vmdevs) {
+		ret = -ENOMEM;
+		goto rmem_free;
+	}
+	vmfdev->vmdev_count = count;
+
+	for_each_set_bit(bit, &devices, sizeof(devices)) {
+		init_completion(&vmfdev->vmdevs[i].idata.completion);
+
+		vmdev = &vmfdev->vmdevs[i].vmdev;
+		vmdev->dev_id = bit;
+		vmdev->ops = &vmf_ops;
+		vmdev->vdev.dev.parent = dev;
+		vmdev->bus_data = vmfdev;
+
+		ret = virtio_msg_register(vmdev);
+		if (ret) {
+			dev_err(dev, "Failed to register virtio-msg device (%d)\n", ret);
+			goto unregister;
+		}
+
+		i++;
+	}
+
+	return 0;
+
+unregister:
+	remove_vmdevs(vmfdev, i);
+rmem_free:
+	if (!IS_ERR(vmfdev->rmem))
+		of_reserved_mem_device_release(dev);
+notify_cleanup:
+	vmsg_ffa_notify_cleanup(vmfdev);
+	return ret;
+}
+
+static void virtio_msg_ffa_remove(struct ffa_device *ffa_dev)
+{
+	struct virtio_msg_ffa_device *vmfdev = ffa_dev->dev.driver_data;
+
+	remove_vmdevs(vmfdev, vmfdev->vmdev_count);
+
+	if (!IS_ERR(vmfdev->rmem))
+		of_reserved_mem_device_release(&ffa_dev->dev);
+
+	vmsg_ffa_notify_cleanup(vmfdev);
+}
+static const struct ffa_device_id virtio_msg_ffa_device_ids[] = {
+	/* c66028b5-2498-4aa1-9de7-77da6122abf0 */
+	{ UUID_INIT(0xc66028b5, 0x2498, 0x4aa1,
+		    0x9d, 0xe7, 0x77, 0xda, 0x61, 0x22, 0xab, 0xf0) },
+	{}
+};
+
+static int __maybe_unused virtio_msg_ffa_suspend(struct device *dev)
+{
+	struct virtio_msg_ffa_device *vmfdev = dev_get_drvdata(dev);
+	int ret, i, index;
+
+	for (i = 0; i < vmfdev->vmdev_count; i++) {
+		index = vmfdev->vmdev_count - i - 1;
+		ret = virtio_device_freeze(&vmfdev->vmdevs[index].vmdev.vdev);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static int __maybe_unused virtio_msg_ffa_resume(struct device *dev)
+{
+	struct virtio_msg_ffa_device *vmfdev = dev_get_drvdata(dev);
+	int ret, i;
+
+	for (i = 0; i < vmfdev->vmdev_count; i++) {
+		ret = virtio_device_restore(&vmfdev->vmdevs[i].vmdev.vdev);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static const struct dev_pm_ops virtio_msg_ffa_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(virtio_msg_ffa_suspend, virtio_msg_ffa_resume)
+};
+
+static struct ffa_driver virtio_msg_ffa_driver = {
+	.name = "virtio-msg-ffa",
+	.probe = virtio_msg_ffa_probe,
+	.remove = virtio_msg_ffa_remove,
+	.id_table = virtio_msg_ffa_device_ids,
+	.driver = {
+		.pm = &virtio_msg_ffa_pm_ops,
+	},
+};
+
+static int virtio_msg_ffa_init(void)
+{
+	if (IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT))
+		return ffa_register(&virtio_msg_ffa_driver);
+	else
+		return -EOPNOTSUPP;
+}
+module_init(virtio_msg_ffa_init);
+
+static void virtio_msg_ffa_exit(void)
+{
+	if (IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT))
+		ffa_unregister(&virtio_msg_ffa_driver);
+}
+module_exit(virtio_msg_ffa_exit);
+MODULE_AUTHOR("Viresh Kumar <viresh.kumar@linaro.org>");
+MODULE_DESCRIPTION("Virtio message FF-A bus driver");
+MODULE_LICENSE("GPL");
diff --git a/include/uapi/linux/virtio_msg_ffa.h b/include/uapi/linux/virtio_msg_ffa.h
new file mode 100644
index 000000000000..adcc081b483a
--- /dev/null
+++ b/include/uapi/linux/virtio_msg_ffa.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/*
+ * Virtio message FF-A (Arm Firmware Framework) bus header.
+ *
+ * Copyright (C) 2025 Google LLC and Linaro.
+ * Viresh Kumar <viresh.kumar@linaro.org>
+ */
+
+#ifndef _LINUX_VIRTIO_MSG_FFA_H
+#define _LINUX_VIRTIO_MSG_FFA_H
+
+#include <linux/types.h>
+
+/* Message types */
+#define VIRTIO_MSG_FFA_BUS_VERSION		0x80
+#define VIRTIO_MSG_FFA_BUS_AREA_SHARE		0x81
+#define VIRTIO_MSG_FFA_BUS_AREA_UNSHARE		0x82
+#define VIRTIO_MSG_FFA_BUS_RESET		0x83
+#define VIRTIO_MSG_FFA_BUS_EVENT_POLL		0x84
+#define VIRTIO_MSG_FFA_BUS_AREA_RELEASE		0xC0
+
+#define VIRTIO_MSG_FEATURES			0
+#define VIRTIO_MSG_FFA_BUS_VERSION_1_0		0x1
+#define VIRTIO_MSG_FFA_BUS_MSG_SIZE		VIRTIO_MSG_MIN_SIZE
+
+#define VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_RX_SUPP	(1 << 0)
+#define VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_TX_SUPP	(1 << 1)
+#define VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_SUPP \
+	(VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_RX_SUPP | \
+	 VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_TX_SUPP)
+
+#define VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_RX_SUPP	(1 << 2)
+#define VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_TX_SUPP	(1 << 3)
+#define VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_SUPP \
+	(VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_RX_SUPP | \
+	 VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_TX_SUPP)
+
+#define VIRTIO_MSG_FFA_FEATURE_BOTH_SUPP \
+	(VIRTIO_MSG_FFA_FEATURE_DIRECT_MSG_SUPP | \
+	 VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_SUPP)
+
+#define VIRTIO_MSG_FFA_AREA_ID_MAX		0xFF
+#define VIRTIO_MSG_FFA_AREA_ID_OFFSET		56
+#define VIRTIO_MSG_FFA_OFFSET_MASK \
+	((ULL(1) << VIRTIO_MSG_FFA_AREA_ID_OFFSET) - 1)
+
+#define VIRTIO_MSG_FFA_RESULT_ERROR		(1 << 0)
+#define VIRTIO_MSG_FFA_RESULT_BUSY		(1 << 1)
+
+/* Message payload format */
+
+struct bus_ffa_version {
+	__le32 driver_version;
+	__le32 vmsg_revision;
+	__le32 vmsg_features;
+	__le32 features;
+	__le16 area_num;
+} __attribute__((packed));
+
+struct bus_ffa_version_resp {
+	__le32 device_version;
+	__le32 vmsg_revision;
+	__le32 vmsg_features;
+	__le32 features;
+	__le16 area_num;
+} __attribute__((packed));
+
+struct bus_area_share {
+	__le16 area_id;
+	__le64 mem_handle;
+	__le64 tag;
+	__le32 count;
+	__le32 attr;
+} __attribute__((packed));
+
+struct bus_area_share_resp {
+	__le16 area_id;
+	__le16 result;
+} __attribute__((packed));
+
+struct bus_area_unshare {
+	__le16 area_id;
+} __attribute__((packed));
+
+struct bus_area_unshare_resp {
+	__le16 area_id;
+	__le16 result;
+} __attribute__((packed));
+
+struct bus_area_release {
+	__le16 area_id;
+} __attribute__((packed));
+
+#endif /* _LINUX_VIRTIO_MSG_FFA_H */
2.31.1.272.g89b43f80a514
Virtio-msg mailing list -- virtio-msg@lists.linaro.org To unsubscribe send an email to virtio-msg-leave@lists.linaro.org
Add a loopback bus implementation for the virtio-msg transport.
This bus simulates a backend that echoes messages to itself, allowing testing and development of virtio-msg without requiring an actual remote backend or transport hardware.
The loopback bus requires a reserved memory region for its operation. All DMA-coherent and streaming DMA allocations are restricted to this region, enabling safe mapping into user space and helping validate the memory-sharing model.
The reserved-memory region must be named "vmsglb" in the device tree. Example:
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		vmsglb@100000000 {
			compatible = "restricted-dma-pool";
			reg = <0x00000001 0x00000000 0x0 0x00400000>;	/* 4 MiB */
		};
	};
This bus is primarily intended for functional testing, development, and validation of the virtio-msg transport and its userspace interface.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/virtio/Kconfig               |   9 +
 drivers/virtio/Makefile              |   1 +
 drivers/virtio/virtio_msg_loopback.c | 323 +++++++++++++++++++++++++++
 include/uapi/linux/virtio_msg_lb.h   |  22 ++
 4 files changed, 355 insertions(+)
 create mode 100644 drivers/virtio/virtio_msg_loopback.c
 create mode 100644 include/uapi/linux/virtio_msg_lb.h
diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 683152477e3f..934e8ccb3a01 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -196,6 +196,15 @@ config VIRTIO_MSG_FFA
 	  If unsure, say N.
 
+config VIRTIO_MSG_LOOPBACK
+	tristate "Loopback bus driver for virtio message transport"
+	select VIRTIO_MSG
+	select VIRTIO_MSG_USER
+	help
+	  This implements the Loopback bus for Virtio msg transport.
+
+	  If unsure, say N.
+
 config VIRTIO_DMA_SHARED_BUFFER
 	tristate
 	depends on DMA_SHARED_BUFFER
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index 96ec0a9c4a7a..90a2f1d86937 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_VIRTIO_MMIO) += virtio_mmio.o
 virtio_msg_transport-y := virtio_msg.o
 virtio_msg_transport-$(CONFIG_VIRTIO_MSG_USER) += virtio_msg_user.o
 obj-$(CONFIG_VIRTIO_MSG) += virtio_msg_transport.o
+obj-$(CONFIG_VIRTIO_MSG_LOOPBACK) += virtio_msg_loopback.o
 obj-$(CONFIG_VIRTIO_MSG_FFA) += virtio_msg_ffa.o
 obj-$(CONFIG_VIRTIO_PCI) += virtio_pci.o
 virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
diff --git a/drivers/virtio/virtio_msg_loopback.c b/drivers/virtio/virtio_msg_loopback.c
new file mode 100644
index 000000000000..aa66f09d5dfe
--- /dev/null
+++ b/drivers/virtio/virtio_msg_loopback.c
@@ -0,0 +1,323 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Loopback bus implementation for Virtio message transport.
+ *
+ * Copyright (C) 2025 Google LLC and Linaro.
+ * Viresh Kumar <viresh.kumar@linaro.org>
+ *
+ * This implements the Loopback bus for Virtio msg transport.
+ */
+
+#define pr_fmt(fmt) "virtio-msg-loopback: " fmt
+
+#include <linux/err.h>
+#include <linux/list.h>
+#include <linux/miscdevice.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/virtio.h>
+#include <uapi/linux/virtio_msg.h>
+#include <uapi/linux/virtio_msg_lb.h>
+
+#include "virtio_msg.h"
+
+struct vmlb_device {
+	struct virtio_msg_device vmdev;
+	struct list_head list;
+};
+
+struct virtio_msg_lb {
+	/* Serializes transfers and protects list */
+	struct mutex lock;
+	struct list_head list;
+	struct miscdevice misc;
+	struct virtio_msg_user_device vmudev;
+	struct virtio_msg *response;
+	struct reserved_mem *rmem;
+	struct device *dev;
+};
+
+static struct virtio_msg_lb *vmlb;
+
+#define to_vmlbdev(_vmdev) ((struct vmlb_device *)(_vmdev)->bus_data)
+
+static struct vmlb_device *find_vmlbdev(u16 dev_id)
+{
+	struct vmlb_device *vmlbdev;
+
+	list_for_each_entry(vmlbdev, &vmlb->list, list) {
+		if (vmlbdev->vmdev.dev_id == dev_id)
+			return vmlbdev;
+	}
+
+	return NULL;
+}
+
+static const char *virtio_msg_lb_bus_info(struct virtio_msg_device *vmdev,
+					  u16 *msg_size, u32 *rev)
+{
+	*msg_size = VIRTIO_MSG_MIN_SIZE;
+	*rev = VIRTIO_MSG_REVISION_1;
+
+	return dev_name(vmlb->dev);
+}
+
+static int virtio_msg_lb_transfer(struct virtio_msg_device *vmdev,
+				  struct virtio_msg *request,
+				  struct virtio_msg *response)
+{
+	struct virtio_msg_user_device *vmudev = &vmlb->vmudev;
+	int ret;
+
+	/*
+	 * Allow only one transaction to progress at once.
+	 */
+	guard(mutex)(&vmlb->lock);
+
+	/*
+	 * Set `vmsg` to `request` and finish the completion to wake up the
+	 * `read()` thread.
+	 */
+	vmudev->vmsg = request;
+	vmlb->response = response;
+	complete(&vmudev->r_completion);
+
+	/*
+	 * Wait here for the `write()` thread to finish and not return before
+	 * the operation is finished to avoid any potential races.
+	 */
+	ret = wait_for_completion_interruptible_timeout(&vmudev->w_completion, 1000);
+	if (ret < 0) {
+		dev_err(vmlb->dev, "Interrupted while waiting for response: %d\n", ret);
+	} else if (!ret) {
+		dev_err(vmlb->dev, "Timed out waiting for response\n");
+		ret = -ETIMEDOUT;
+	} else {
+		ret = 0;
+	}
+
+	/* Clear the pointers, just to be safe */
+	vmudev->vmsg = NULL;
+	vmlb->response = NULL;
+
+	return ret;
+}
+
+static struct virtio_msg_ops virtio_msg_lb_ops = {
+	.transfer = virtio_msg_lb_transfer,
+	.bus_info = virtio_msg_lb_bus_info,
+};
+
+static int virtio_msg_lb_user_handle(struct virtio_msg_user_device *vmudev,
+				     struct virtio_msg *vmsg)
+{
+	struct vmlb_device *vmlbdev;
+
+	/* Response message */
+	if (vmsg->type & VIRTIO_MSG_TYPE_RESPONSE) {
+		if (vmlb->response)
+			memcpy(vmlb->response, vmsg, VIRTIO_MSG_MIN_SIZE);
+
+		return 0;
+	}
+
+	/* Only support EVENT_USED virtio request messages */
+	if (vmsg->type & VIRTIO_MSG_TYPE_BUS || vmsg->msg_id != VIRTIO_MSG_EVENT_USED) {
+		dev_err(vmlb->dev, "Unsupported message received\n");
+		return 0;
+	}
+
+	vmlbdev = find_vmlbdev(le16_to_cpu(vmsg->dev_id));
+	if (!vmlbdev)
+		return 0;
+
+	virtio_msg_event(&vmlbdev->vmdev, vmsg);
+	return 0;
+}
+
+static struct virtio_msg_user_ops vmlb_user_ops = {
+	.handle = virtio_msg_lb_user_handle,
+};
+
+static int vmlbdev_add(struct file *file, struct vmsg_lb_dev_info *info)
+{
+	struct vmlb_device *vmlbdev;
+	int ret;
+
+	scoped_guard(mutex, &vmlb->lock) {
+		if (find_vmlbdev(info->dev_id))
+			return -EEXIST;
+
+		vmlbdev = kzalloc(sizeof(*vmlbdev), GFP_KERNEL);
+		if (!vmlbdev)
+			return -ENOMEM;
+
+		vmlbdev->vmdev.dev_id = info->dev_id;
+		vmlbdev->vmdev.ops = &virtio_msg_lb_ops;
+		vmlbdev->vmdev.vdev.dev.parent = vmlb->dev;
+		vmlbdev->vmdev.bus_data = vmlbdev;
+
+		list_add(&vmlbdev->list, &vmlb->list);
+	}
+
+	ret = virtio_msg_register(&vmlbdev->vmdev);
+	if (ret) {
+		dev_err(vmlb->dev, "Failed to register virtio msg lb device (%d)\n", ret);
+		goto out;
+	}
+
+	return 0;
+
+out:
+	scoped_guard(mutex, &vmlb->lock)
+		list_del(&vmlbdev->list);
+
+	kfree(vmlbdev);
+	return ret;
+}
+
+static int vmlbdev_remove(struct file *file, struct vmsg_lb_dev_info *info)
+{
+	struct vmlb_device *vmlbdev;
+
+	scoped_guard(mutex, &vmlb->lock) {
+		vmlbdev = find_vmlbdev(info->dev_id);
+		if (vmlbdev) {
+			list_del(&vmlbdev->list);
+			virtio_msg_unregister(&vmlbdev->vmdev);
+			return 0;
+		}
+	}
+
+	dev_err(vmlb->dev, "Failed to find virtio msg lb device.\n");
+	return -ENODEV;
+}
+
+static void vmlbdev_remove_all(void)
+{
+	struct vmlb_device *vmlbdev, *tvmlbdev;
+
+	guard(mutex)(&vmlb->lock);
+
+	list_for_each_entry_safe(vmlbdev, tvmlbdev, &vmlb->list, list) {
+		virtio_msg_unregister(&vmlbdev->vmdev);
+		list_del(&vmlbdev->list);
+	}
+}
+
+static long vmlb_ioctl(struct file *file, unsigned int cmd, unsigned long data)
+{
+	struct vmsg_lb_dev_info info;
+
+	if (copy_from_user(&info, (void __user *)data, sizeof(info)))
+		return -EFAULT;
+
+	switch (cmd) {
+	case IOCTL_VMSG_LB_ADD:
+		return vmlbdev_add(file, &info);
+
+	case IOCTL_VMSG_LB_REMOVE:
+		return vmlbdev_remove(file, &info);
+
+	default:
+		return -ENOTTY;
+	}
+}
+
+static int vmlb_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	unsigned long size = vma->vm_end - vma->vm_start;
+	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
+
+	if (offset > vmlb->rmem->size - size)
+		return -EINVAL;
+
+	return remap_pfn_range(vma, vma->vm_start,
+			       (vmlb->rmem->base + offset) >> PAGE_SHIFT,
+			       size,
+			       vma->vm_page_prot);
+}
+
+static loff_t vmlb_llseek(struct file *file, loff_t offset, int whence)
+{
+	return fixed_size_llseek(file, offset, whence, vmlb->rmem->size);
+}
+
+static const struct file_operations vmlb_miscdev_fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = vmlb_ioctl,
+	.mmap = vmlb_mmap,
+	.llseek = vmlb_llseek,
+};
+
+static int virtio_msg_lb_init(void)
+{
+	int ret;
+
+	vmlb = kzalloc(sizeof(*vmlb), GFP_KERNEL);
+	if (!vmlb)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&vmlb->list);
+	mutex_init(&vmlb->lock);
+	vmlb->vmudev.ops = &vmlb_user_ops;
+
+	vmlb->misc.name = "virtio-msg-lb";
+	vmlb->misc.minor = MISC_DYNAMIC_MINOR;
+	vmlb->misc.fops = &vmlb_miscdev_fops;
+
+	ret = misc_register(&vmlb->misc);
+	if (ret)
+		goto vmlb_free;
+
+	vmlb->dev = vmlb->misc.this_device;
+	vmlb->vmudev.parent = vmlb->dev;
+
+	vmlb->rmem = of_reserved_mem_lookup_by_name("vmsglb");
+	if (IS_ERR(vmlb->rmem)) {
+		ret = PTR_ERR(vmlb->rmem);
+		goto unregister;
+	}
+
+	ret = reserved_mem_device_init(vmlb->dev, vmlb->rmem);
+	if (ret)
+		goto mem_release;
+
+	/* Register with virtio-msg UAPI */
+	ret = virtio_msg_user_register(&vmlb->vmudev);
+	if (ret) {
+		dev_err(vmlb->dev, "Could not register virtio-msg user API\n");
+		goto mem_release;
+	}
+
+	ret = dma_coerce_mask_and_coherent(vmlb->dev, DMA_BIT_MASK(64));
+	if (ret)
+		dev_warn(vmlb->dev, "Failed to enable 64-bit or 32-bit DMA\n");
+
+	return 0;
+
+mem_release:
+	of_reserved_mem_device_release(vmlb->dev);
+unregister:
+	misc_deregister(&vmlb->misc);
+vmlb_free:
+	kfree(vmlb);
+	return ret;
+}
+module_init(virtio_msg_lb_init);
+
+static void virtio_msg_lb_exit(void)
+{
+	virtio_msg_user_unregister(&vmlb->vmudev);
+	of_reserved_mem_device_release(vmlb->dev);
+	vmlbdev_remove_all();
+	misc_deregister(&vmlb->misc);
+	kfree(vmlb);
+}
+module_exit(virtio_msg_lb_exit);
+
+MODULE_AUTHOR("Viresh Kumar <viresh.kumar@linaro.org>");
+MODULE_DESCRIPTION("Virtio message loopback bus driver");
+MODULE_LICENSE("GPL");
diff --git a/include/uapi/linux/virtio_msg_lb.h b/include/uapi/linux/virtio_msg_lb.h
new file mode 100644
index 000000000000..fe0ce6269a0a
--- /dev/null
+++ b/include/uapi/linux/virtio_msg_lb.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/*
+ * Virtio message Loopback bus header.
+ *
+ * Copyright (C) 2025 Google LLC and Linaro.
+ * Viresh Kumar <viresh.kumar@linaro.org>
+ */
+
+#ifndef _LINUX_VIRTIO_MSG_LB_H
+#define _LINUX_VIRTIO_MSG_LB_H
+
+struct vmsg_lb_dev_info {
+	unsigned int dev_id;
+};
+
+#define IOCTL_VMSG_LB_ADD \
+	_IOC(_IOC_NONE, 'P', 0, sizeof(struct vmsg_lb_dev_info))
+
+#define IOCTL_VMSG_LB_REMOVE \
+	_IOC(_IOC_NONE, 'P', 1, sizeof(struct vmsg_lb_dev_info))
+
+#endif /* _LINUX_VIRTIO_MSG_LB_H */
Hi Viresh,
On 22 Jul 2025, at 11:46, Viresh Kumar viresh.kumar@linaro.org wrote:
Hi everyone,
Over the last few weeks, I have made numerous changes in the design and code to get it ready for mainline. I have now prepared the initial RFC patchset to be sent to LKML.
Please have a look and provide your valuable feedback.
-------------------------8<-------------------------
Hello,
This RFC series introduces support for a new Virtio transport type: "virtio-msg", as proposed in [1]. Unlike existing transport types like virtio-mmio or virtio-pci which rely on memory-mapped registers, virtio-msg implements transport operations via structured message exchanges using standard virtqueues.
It is a bit wrong to say that we exchange the messages through virtqueues here. Maybe change the sentence to: "... via structured messages. Those messages can be transported through different mechanisms such as mailboxes, shared-memory-based FIFOs, or specific protocols such as FF-A on Arm."
Cheers Bertrand
This series includes:
- Core virtio-msg transport support.
- Two message transport bus implementations:
- virtio-msg-ffa: based on ARM's Firmware Framework for Arm (FF-A).
- virtio-msg-loopback: a loopback device for testing and validation.
The code is available here for reference: [3] and virtio-msg loopback test setup is explained here: [2].
This series is based on v6.16-rc6 and depends on commit [4] from linux-next.
### Memory Mapping and Reserved Memory Usage
The first two patches enhance the reserved-memory subsystem to support attaching `struct device`s that do not originate from DT nodes—essential for virtual or dynamically discovered devices like the FF-A or loopback buses.
This reserved-memory region enables:
- Restricting all DMA-coherent and streaming DMA memory to a controlled range.
- Allowing the remote endpoint to pre-map this memory, reducing runtime overhead.
- Preventing unintentional data leaks, since memory is typically shared at page
granularity.
- For the loopback bus, it restricts the portion of kernel memory that can be
mapped into userspace, improving safety.
Feedback on the design, API, and approach is welcome.
-- Viresh
[1] https://lore.kernel.org/all/20250620224426.3923880-2-bill.mills@linaro.org/
[2] https://linaro.atlassian.net/wiki/spaces/HVAC/pages/30104092673
[3] git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/linux.git virtio/msg-rfc-v1
[4] From linux-next: 5be53630b4f0 virtio-mmio: Remove virtqueue list from mmio device
Viresh Kumar (6):
  of: reserved-memory: Add reserved_mem_device_init()
  of: reserved-memory: Add of_reserved_mem_lookup_by_name
  virtio: Add support for virtio-msg transport
  virtio-msg: Add optional userspace interface for message I/O
  virtio-msg: Add support for FF-A (Firmware Framework for Arm) bus
  virtio-msg: Add support for loopback bus
 MAINTAINERS                          |   7 +
 drivers/of/of_reserved_mem.c         |  91 +++--
 drivers/virtio/Kconfig               |  34 ++
 drivers/virtio/Makefile              |   5 +
 drivers/virtio/virtio_msg.c          | 546 +++++++++++++++++++++++++++
 drivers/virtio/virtio_msg.h          |  88 +++++
 drivers/virtio/virtio_msg_ffa.c      | 501 ++++++++++++++++++++++++
 drivers/virtio/virtio_msg_loopback.c | 323 ++++++++++++++++
 drivers/virtio/virtio_msg_user.c     | 119 ++++++
 include/linux/of_reserved_mem.h      |  13 +
 include/uapi/linux/virtio_msg.h      | 221 +++++++++++
 include/uapi/linux/virtio_msg_ffa.h  |  94 +++++
 include/uapi/linux/virtio_msg_lb.h   |  22 ++
 13 files changed, 2040 insertions(+), 24 deletions(-)
 create mode 100644 drivers/virtio/virtio_msg.c
 create mode 100644 drivers/virtio/virtio_msg.h
 create mode 100644 drivers/virtio/virtio_msg_ffa.c
 create mode 100644 drivers/virtio/virtio_msg_loopback.c
 create mode 100644 drivers/virtio/virtio_msg_user.c
 create mode 100644 include/uapi/linux/virtio_msg.h
 create mode 100644 include/uapi/linux/virtio_msg_ffa.h
 create mode 100644 include/uapi/linux/virtio_msg_lb.h
-- 2.31.1.272.g89b43f80a514