On Thu, Sep 30, 2021 at 10:24:24AM -0700, Sohil Mehta wrote:
> On 9/30/2021 9:30 AM, Stefan Hajnoczi wrote:
> > On Tue, Sep 28, 2021 at 09:31:34PM -0700, Andy Lutomirski wrote:
> > > I spent some time reviewing the docs (ISE) and contemplating how this all fits together, and I have a high-level question:
> > >
> > > Can someone give an example of a realistic workload that would benefit from SENDUIPI, and precisely how it would use SENDUIPI? Or an example of a realistic workload that would benefit from hypothetical device-initiated user interrupts, and how it would use them? I'm having trouble imagining something that wouldn't work as well or better by simply polling, at least on DMA-coherent architectures like x86.
> >
> > I was wondering the same thing. One thing came to mind:
> >
> > An application that wants to be *interrupted* from what it's doing rather than waiting until the next polling point. For example, applications that are CPU-intensive and have green threads. I can't name a real application like this though :P.
>
> Thank you Stefan and Andy for giving this some thought.
>
> We are consolidating the information internally on where and how exactly we expect to see benefits with real workloads for the various sources of User Interrupts. It will take a few days to get back to you on this one.
One possible use case came to mind in QEMU's TCG just-in-time compiler:
QEMU's TCG threads execute translated code. There are events that require interrupting these threads. Today a check is performed at the start of every translated block; most of the time the check is false, so it's wasted CPU.
User interrupts can eliminate the need for checks by interrupting TCG threads when events occur.
I don't know whether this will improve performance or how feasible it is to implement, but I've added people who might have ideas. (For a summary of user interrupts, see https://lwn.net/SubscriberLink/871113/60652640e11fc5df/.)
Stefan