On 27 Apr 2026, at 13:34, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> On 27-04-26, 11:21, Bertrand Marquis wrote:
>> As said during the meeting, I encountered a lot of issues related to:
> I sometimes end up missing things during the meeting, hence I try to get more discussion over email (so I can read it again and again to understand clearly).
>> - blocking in code that cannot sleep
> Can we have a spin-lock implementation for that? How does the current code solve it? Some sort of blocking needs to be done if the caller expects the response in the same thread.
> I hope that can be done with a simple change over the current code.
The thing is, you need to sleep to wait for an answer; using a spinlock means we block in the kernel while waiting for an answer from another VM or from qemu, which is not possible. In some cases (event sending) you can solve that with deferred work, but in some others (the block driver during probe) I had to solve it with more complex systems:
- config register caches
- a dma pool with a kind of retry mechanism using deferred pool increase, so that the next try has enough space in the pool to continue without blocking
>> - dma handling issues
>> - timings and concurrency during probe or runtime
> Can we please discuss these in detail here? I think we can make the current code work and solve all these issues easily. If not, I am okay with making a change in design and adopting a new strategy. But starting with completely new code at this point doesn't look right; we have already invested so much time in the current code.
Definitely OK with that, but right now, as said, I want to have something working in full to be able to:
- ensure the spec is implementable
- check whether some spec enhancements are possible to simplify implementation
The code is a PoC, not upstreamable, and I am relying heavily on ChatGPT to make progress, with the main consequence that the code is probably more complex than what is required.
> There are a lot of examples in the kernel where a similar (simple) design was adopted; one of them is Greybus (for Google's Project Ara modular phone, which I worked on earlier). We can adapt parts from that to solve our current problems if required.
Agree.
> Maybe let's start with the problems one by one, with exact use-cases, to see what we are lacking right now. I am still not able to see the full picture (in the sense of the problems we have).
My goal right now is to have the following working in loopback and using FF-A between 2 VMs, with qemu as the VMM:
- entropy device
- block device
and be able to stress the system by creating a file from entropy output inside the disk.
I discovered that getting entropy working is not that complex, but disk is a lot more hacky.
Main issues I encountered so far:
- init chicken-and-egg:
  - device or driver coming first
  - driver needing to exchange messages or use DMA during probe
- messages exchanged or DMA share creation during a non-sleepable context
- qemu/VMM memory handling:
  - when do you unshare
  - how can you ensure a share is ready before the first event avail
- all timeout and queuing issues:
  - how to sleep waiting for an answer, or waiting to be able to send
  - how to stack (events, for example) or defer
  - who has to sleep, and when
  - how to handle deferred work when exiting, or when removing a device or a VM
Right now I already have several consequences I need to handle in the spec:
- we must have a pool; sharing on demand does not work for disk
- if we have a pool, the sharer must say when to release it, otherwise we have to re-share the pool content, as the device has no idea that something is a pool
- we need some config caching in the transport, otherwise a config value request from interrupt context, which would have to sleep, cannot be processed
- config generation, strict as it is, cannot easily be implemented without ending up in loops, because the generation changes while you refresh, or you have to refresh the whole config cache each time one value is modified
Having something working by simplifying the scope was easy, but having something working in a realistic case is far more complex. I managed to have something working fully between qemu and the kernel, which is what I shared, but with FF-A between VMs it is still only working reliably in simple cases.
Bertrand
> --
> viresh