Hi Ayrton / Armelle,
I tried to investigate how to make VIRTIO_MSG_EVENT_USED work without indirect FFA messages.
- Firstly, I am still not sure I fully understand why indirect messages can't be implemented on the Trusty side. I know it was discussed a bit on the call, but I don't remember any of it :(
- If we want to do polling of the virtqueues, then it needs to be done on the driver side (vsock); I don't see how I can generalize it and do it from the virtio-msg-ffa layer. The driver would need to do something like this for the virtqueue:
while (1) {
	/* Try to get a buffer from the virtqueue */
	buf = virtqueue_get_buf(vq, &len);
	if (buf) {
		/* Handle the received buffer */
		break;
	}

	/* Avoid busy looping with a small delay */
	cpu_relax();
}
- The other idea discussed on the call was to always keep a direct message queued for the FFA device (not the virtio device) for the EVENT_USED message. Once the Trusty side has an event to send, it can respond to this request.
I was wondering if this might block the CPU forever in the SMC call, but I don't think that is the case. The FFA layer makes the SMC call, checks its return value, sleeps for a millisecond, and tries again, so the CPU can schedule other work as soon as it is able to. And it looks like we support sending multiple direct messages concurrently too, so while one thread is waiting for a response to the EVENT_USED message, we can keep sending other messages. A rough sketch of what I have in mind follows after this list.
If we want to support this, will it be part of the spec? Bertrand?
- Any other ideas on how to solve this?
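To make the direct-message idea a bit more concrete, here is a rough (untested) sketch of what I have in mind. vmfdev_send_sync() and virtio_msg_dispatch() are made-up placeholders for whatever the FFA layer and the virtio-msg core actually expose; only virtio_msg_prepare(), fsleep() and the kthread helpers are real:

/* Sketch only; needs <linux/kthread.h> and <linux/delay.h>. */
static int event_used_listener(void *data)
{
	struct virtio_msg_ffa_device *vmfdev = data;
	struct virtio_msg req, resp;

	virtio_msg_prepare(&req, false, VIRTIO_MSG_EVENT_USED, 0);

	while (!kthread_should_stop()) {
		/*
		 * Keep a direct message pending towards the backend; its
		 * response effectively acts as our interrupt.
		 * vmfdev_send_sync() is a hypothetical helper, not a real API.
		 */
		if (vmfdev_send_sync(vmfdev, &req, &resp)) {
			/* Back off a bit on errors instead of busy looping */
			fsleep(1000);
			continue;
		}

		/* Hand the returned event to the virtio-msg core (hypothetical helper) */
		virtio_msg_dispatch(vmfdev, &resp);
	}

	return 0;
}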
Hi Viresh,
Sorry for the delay, but I am off sick this week.
It was my understanding that polling would be done at the virtqueue level and would not require anything specific in the transport itself. For notifications from the frontend to the backend, direct messages can be used (just ignore the answer) but for backend to frontend you will have to poll the virtqueues in the different drivers.
Cheers Bertrand
On 19-02-25, 11:25, Bertrand Marquis wrote:
Hi Viresh,
Sorry for the delay, but I am off sick this week.
No problem Bertrand, please take care.
It was my understanding that polling would be done at the virtqueue level and would not require anything specific in the transport itself.
Yeah, that's one way out, but it likely needs updates to the virtio frontend drivers, and I don't want to get into that. I hope we can do better.
For notifications from the frontend to the backend, direct messages can be used (just ignore the answer) but for backend to frontend you will have to poll the virtqueues in the different drivers.
Why shouldn't we submit a USED EVENT message from frontend to backend and wait (forever) for it to return? That can work as an interrupt for us. I was worried earlier that it would block the CPU forever in the guest, but that doesn't seem to be the case.
On 19-02-25, 17:04, Viresh Kumar wrote:
On 19-02-25, 11:25, Bertrand Marquis wrote:
It was my understanding that polling would be done at the virtqueue level and would not require anything specific in the transport itself.
Yeah, that's one way out, but it likely needs updates to the virtio frontend drivers, and I don't want to get into that. I hope we can do better.
Finally, after digging a bit into Trusty's code and talking to Ayrton, I realized that multi-threading support is there in Trusty, but sleeping in the direct message handler would be difficult to implement.
And then I found an easy enough way to get the polling done from the virtio_msg_ffa.c file itself. The call to vring_interrupt() (made from virtio_msg_receive()) is lightweight enough and basically returns early if there are no new buffers queued by the device. I am polling at a 1 ms interval for now.
Ayrton, I have already tested this with a Xen-based setup and it works. Please give it a try with Trusty. Thanks.
(pushed to my virtio/msg branch)
diff --git a/drivers/virtio/virtio_msg_ffa.c b/drivers/virtio/virtio_msg_ffa.c
index 20667571146b..7d7df47035f3 100644
--- a/drivers/virtio/virtio_msg_ffa.c
+++ b/drivers/virtio/virtio_msg_ffa.c
@@ -14,9 +14,11 @@
 #include <linux/arm_ffa.h>
 #include <linux/completion.h>
+#include <linux/delay.h>
 #include <linux/err.h>
 #include <linux/fs.h>
 #include <linux/idr.h>
+#include <linux/kthread.h>
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/of_reserved_mem.h>
@@ -62,6 +64,7 @@ struct virtio_msg_ffa_device {
 	struct list_head area_list;
 	struct mutex lock;	/* to protect area_list */
 	struct virtio_msg_async async;
+	struct task_struct *used_event_task;
 	void *response;

 	bool indirect;
@@ -160,6 +163,34 @@ find_vmdev(struct virtio_msg_ffa_device *vmfdev, u16 dev_id)
 	return NULL;
 }

+static int used_event_task(void *data)
+{
+	struct virtio_msg_ffa_device *vmfdev = data;
+	struct virtio_msg_device *vmdev;
+	struct virtio_msg_vq *info;
+	struct virtio_msg msg;
+	int i;
+
+	virtio_msg_prepare(&msg, false, VIRTIO_MSG_EVENT_USED, 0);
+
+	while (!kthread_should_stop()) {
+		for (i = 0; i < vmfdev->vmdev_count; i++) {
+			vmdev = &vmfdev->vmdevs[i];
+			msg.event_used.index = 0;
+
+			list_for_each_entry(info, &vmdev->virtqueues, node) {
+				virtio_msg_receive(vmdev, &msg);
+				msg.event_used.index++;
+			}
+		}
+
+		/* sleep for 1ms */
+		fsleep(1000);
+	}
+
+	return 0;
+}
+
 static void handle_async_event(struct virtio_msg_ffa_device *vmfdev,
			       struct virtio_msg *msg)
 {
@@ -602,6 +633,15 @@ static int virtio_msg_ffa_probe(struct ffa_device *ffa_dev)
 		}
 	}

+	/* Run the kthread if indirect messages aren't supported */
+	if (!(features & VIRTIO_MSG_FFA_FEATURE_INDIRECT_MSG_SUPP)) {
+		vmfdev->used_event_task = kthread_run(used_event_task, vmfdev, "vmsg-ffa-ue");
+		if (IS_ERR(vmfdev->used_event_task)) {
+			ret = PTR_ERR(vmfdev->used_event_task);
+			goto deactivate;
+		}
+	}
+
 	return 0;

 unregister:
@@ -618,6 +658,7 @@ static void virtio_msg_ffa_remove(struct ffa_device *ffa_dev)
 {
 	struct virtio_msg_ffa_device *vmfdev = ffa_dev->dev.driver_data;

+	kthread_stop(vmfdev->used_event_task);
 	remove_vmdevs(vmfdev, vmfdev->vmdev_count);
 	vmsg_ffa_bus_deactivate(vmfdev);
 }
Hi Viresh,
On 20 Feb 2025, at 10:27, Viresh Kumar viresh.kumar@linaro.org wrote:
On 19-02-25, 17:04, Viresh Kumar wrote:
On 19-02-25, 11:25, Bertrand Marquis wrote:
It was my understanding that polling would be done at the virtqueue level and would not require anything specific in the transport itself.
Yeah, that's one way out, but it likely needs updates to the virtio frontend drivers, and I don't want to get into that. I hope we can do better.
Finally, after digging a bit into Trusty's code and talking to Ayrton, I realized that multi-threading support is there in Trusty, but sleeping in the direct message handler would be difficult to implement.
I tried to put some thoughts into this:
- it is globally possible to poll for virtqueues
- not being able to receive events is kind of a limitation of the FF-A bus implementation in your configuration
- it would not make sense to have a "poll event" message at the transport level, as it would force the transport to create a thread to poll, based on information that the bus implementation would have to provide
So how about:
- we handle this at the FFA bus implementation level
- we define an FFA bus message where the driver side can just send a message to poll for events:

  request: FFA bus-msg Poll events
  possible answers:
  - no events
  - any kind of event request
We only need this from driver to device, as the driver can currently send an event to the device using a direct message with a 0 answer.
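Just to illustrate (the message ID, the answer encoding and the vmsg_ffa_bus_request() helper below are invented for this sketch; the spec would have to define the real ones), the driver side could then poll roughly like this:

/*
 * Sketch only: VIRTIO_MSG_FFA_BUS_POLL_EVENTS and vmsg_ffa_bus_request()
 * are hypothetical; handle_async_event() is the existing handler in
 * virtio_msg_ffa.c.
 */
static void vmsg_ffa_poll_events_once(struct virtio_msg_ffa_device *vmfdev)
{
	struct virtio_msg event;
	int ret;

	/* Ask the device side whether it has a pending event for us */
	ret = vmsg_ffa_bus_request(vmfdev, VIRTIO_MSG_FFA_BUS_POLL_EVENTS, &event);
	if (ret == -ENOMSG)
		return;		/* "no events" answer */

	if (!ret)
		handle_async_event(vmfdev, &event);	/* any kind of event request */
}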
What do you think ?
Cheers Bertrand
On 24-02-25, 08:17, Bertrand Marquis wrote:
I tried to put some thoughts into this:
- it is globally possible to poll for virtqueues
- not being able to receive events is kind of a limitation of the FF-A bus implementation in your configuration
- it would not make sense to have a "poll event" message at the transport level, as it would force the transport to create a thread to poll, based on information that the bus implementation would have to provide
So how about:
- we handle this at the ffa bus implementation level
Yes, this must be handled at the bus level for now.
- we define a FFA bus message where the driver side can just send a message to poll for event:
request: FFA bus-msg Poll events
possible answers:
- no events
- any kind of event request
The drawback compared to polling the virtqueues is that it is inefficient, as we would need to exchange messages here.
I think we are good, as I have found a way to poll the virtqueues from the FFA bus implementation itself (as described in the previous email). So it's fine for now, I guess.