On Wed, Nov 12, 2014 at 02:03:01PM +0100, Arnd Bergmann wrote:
On Wednesday 12 November 2014 12:34:01 Christoffer Dall wrote:
On Wed, Nov 12, 2014 at 12:15:08PM +0100, Arnd Bergmann wrote:
On Wednesday 12 November 2014 10:56:40 Mark Rutland wrote:
For the features which ACPI provides which device trees do not (e.g. the dynamic addition and removal of memory and CPUs), there will need to be some sort of interface between QEMU and the ACPI implementation. That's already outside of the realm of DT, so as previously mentioned a simple conversion doesn't cover the general case.
I think we need to support the low-level interfaces in the kernel for this anyway, we should not have to use ACPI just to do memory and CPU hotplugging in KVM, that would be silly.
I had that same intuitive feeling, but lacked good technical arguments for it. Care to elaborate on that?
ACPI always has to interface back to the hypervisor to do anything that changes the hardware configuration, so it essentially has to perform a hypercall or touch some virtual register.
If we need to architect that interface in qemu anyway, we should make it sane enough for the kernel to use directly, without having to go through ACPI, as not everyone will want to run ACPI.
With the usual benefit that doing it in ACPI will not require updates on both sides if we need to fix something, but also the usual downside of having something obscure and pseudo-hidden, which may be broken underneath, I suppose.
If ACPI is present, it can provide a wrapper for the same interface, but KVM should not need to be aware of the fact that ACPI is used in the guest, after it has passed the initial ACPI blob to the kernel.
That's where things begin to be a bit foggy for me. AFAIU ACPI already has a method for doing this and I speculate that there is some IRQ assigned to an ACPI event that causes some AML code to be interpreted by your OS. Wouldn't it be a matter of QEMU putting the right AML table fragments in place to wire this up then?
Yes, that is what I meant with a wrapper. The two choices are:
1) have an interrupt and a hypercall or mmio interface. When the interrupt gets triggered, we ask the interface what happened and do something on the same interface depending on the state of the system.
2) have an interrupt that causes AML code to be run; that code will use the hypercall or mmio interface to find out what happened and create an ACPI event. This event is enqueued to the generic ACPI hotplug handler, which, depending on the state of the system, decides to do something by calling into AML code again, which will trigger the underlying interface.
From qemu's point of view, the two are doing exactly the same thing, except that the MMIO access can be hidden in AML so the OS doesn't have to know the interface.
right, thanks for the explanation.
Note that in case of Xen, the use of hypercalls means that the OS has to know the interface after all, so the second half of the process is handled by drivers/xen/xen-acpi-*hotplug.c.
Note how the implementation that uses the ACPI wrapper is much more complex than the native one that does the same thing:
-rw-r--r-- 1 arnd arnd  2119 Nov 10 16:43 drivers/xen/cpu_hotplug.c
-rw-r--r-- 1 arnd arnd 10987 Nov 10 16:43 drivers/xen/xen-acpi-cpuhotplug.c
-rw-r--r-- 1 arnd arnd  6894 Nov 10 16:43 drivers/xen/xen-balloon.c
-rw-r--r-- 1 arnd arnd 12085 Nov 10 16:43 drivers/xen/xen-acpi-memhotplug.c
Interesting.
-Christoffer