On Wed, Nov 12, 2014 at 11:15:08AM +0000, Arnd Bergmann wrote:
> On Wednesday 12 November 2014 10:56:40 Mark Rutland wrote:
>> On Wed, Nov 12, 2014 at 09:08:55AM +0000, Claudio Fontana wrote:
>>> On 11.11.2014 16:29, Mark Rutland wrote:
>>> I tend to mostly agree with this. We might look for alternative solutions for speeding up guest startup in the future, but in general, if getting ACPI into the guest for ARM64 also requires getting UEFI, then I can personally live with that, especially if we strive to have the kind of optimized virtualized UEFI you mention.
>> Given that UEFI will be required for other guests (e.g. if you want to boot a distribution's ISO image), I hope that virtualised UEFI will see some optimisation work.
> I think the requirement is just for KVM to provide something that behaves exactly like UEFI; it doesn't have to be the full Tianocore implementation if it's easier to reimplement the boot interface.
I agree that we don't need a full Tianocore, but whatever we have must implement the minimal interface UEFI requires (which is more than just the boot interface).
For a "boot this EFI application" workflow you can skip most of the BDS stuff, but you still need boot services and runtime services provided to the application.
>>> As mentioned by others, I'd rather see an implementation of ACPI in QEMU which learns from the x86 experience (and shares code where possible), rather than going in a different direction by creating device trees first and then converting them to ACPI tables somewhere in the firmware just because device trees are "already there", for the reasons already mentioned by Igor and others.
>> For the features which ACPI provides but device trees do not (e.g. the dynamic addition and removal of memory and CPUs), there will need to be some sort of interface between QEMU and the ACPI implementation. That's already outside the realm of DT, so as previously mentioned a simple conversion doesn't cover the general case.
> I think we need to support the low-level interfaces in the kernel for this anyway, we should not have to use ACPI just to do memory and CPU hotplugging in KVM, that would be silly. If ACPI is present, it can provide a wrapper for the same interface, but KVM should not need to be aware of the fact that ACPI is used in the guest, after it has passed the initial ACPI blob to the kernel.
The difficulty here is that there is currently no common low-level interface between the guest and any specific hypervisor for these facilities. This would be up to kvm tool or QEMU, not KVM itself. So we'd have to spec one for !ACPI.
With ACPI, the interface should be common (following the ACPI spec), but we don't have a common interface underlying that.
I am not averse to having mechanisms for !ACPI, but we'd need to spec something.
I think any ACPI implementation for a hypervisor should provide a demonstrably useful feature (e.g. hot-add of CPUs) before merging, so we know the infrastructure is suitable.
>>> I wouldn't want ACPI to be "sort of" supported in QEMU, but with limited functionality which makes it not fully useful in practice. I'd rather see it as a first-class citizen, including the ability to run AML code.
>> I agree that there's no point in having ACPI in a guest unless it provides something which DT does not. I don't know how it should be structured to provide those useful features.
> I see it the opposite way: we shouldn't have to use ACPI just to make use of some feature in Linux; the only reason you'd want ACPI support in KVM is to be able to run Windows. It makes sense for the ACPI implementation to be compatible with the Linux ACPI code as well, so we can test it better.
At the moment, ACPI specifies how these features should work. So without inventing a whole new interface, ACPI is the way of providing those features.
Thanks,
Mark.