Hi Ard,
On Wed, 24 Sept 2025 at 10:15, Ard Biesheuvel <ardb@kernel.org> wrote:
On Tue, 23 Sept 2025 at 21:32, Simon Glass <sjg@chromium.org> wrote:
Hi Ard,
On Fri, 19 Sept 2025 at 09:50, Ard Biesheuvel <ardb@kernel.org> wrote:
The main difference is the level of abstraction: AML carries code logic along with the device description that can en/disable the device and put it into different power states. This is backed by so-called OperationRegions, which are ways to expose [abstracted] SPI, I2C and serial busses to the AML interpreter (as well as MMIO memory) so that the code sequences effectuating things like power state changes can be reduced to pokes of device registers, regardless of how those are accessed on the particular system.
On x86, many onboard devices are simply described as PCIe devices, even though they are not actually connected to any PCIe fabric. This solves the self-description problem, vastly reducing the number of devices that need to be described via AML.
Also, there is a lot more homogeneity in how the system topology is constructed: on embedded systems, it is quite common to, e.g., tie the PHY interrupt line from the PCIe NIC to some GPIO controller that is not naturally associated with that device at all, and this is something ACPI struggles with, and where DT shines.
DT simply operates at a different abstraction level - it describes every detail of the system topology, including every clock generator and power source. This makes it very flexible and very powerful, but also a maintenance burden: e.g., if some OEM issues a v2 of some board where one clock generator IC has been replaced because the original is EOL, it requires a new DT and potentially an OS update if the new part was not supported yet. ACPI is more flexible here, as the OEM can simply ship different ACPI tables that make the v2 board look 100% identical to the v1 as far as the OS is concerned.
There is also the PEP addition you mention below, which I tend to see as an admission that ACPI cannot handle the complexity of modern systems.
No. The problem is not the complexity itself, but the fact that it is exposed to software.
x86 systems are just as complex, but they a) make more effort to abstract away the OS visible differences in firmware, and b) design the system with ACPI in mind, e.g., masquerade on-board peripherals as PCIe (so-called root complex integrated endpoints) so they can describe themselves, and use PCI standard abstractions for configuration and power management.
Right. But are you saying that Windows shouldn't have PEP drivers? Or Linux shouldn't need them?
My Qualcomm laptop (using Linux) currently just reboots if it gets too hot.
Not sure what you are trying to say here. Is this a dig at ACPI? Or Windows? Or both?
Neither... I'm just pointing out the implications of ACPI for these systems. Without a driver that handles the complex (in this case thermal) trade-offs, they are not reliable.
Indeed.
We need DT rather than ACPI.
I tend to agree with you, but not for the reason you might think.
The ACPI vs DT debate gets very religious and heated at times, but it is often like watching people arguing over whether hammers are fundamentally better than screwdrivers: it really depends a lot on whether you are using nails or screws, and ACPI is really a much better solution than DT for certain markets.
However, the reason I think we need DT for these systems is the fact that there is prior art there. Many of these SoCs and subsystems (e.g., Qualcomm) are already shipping in major volumes with DT on Android phones, as well as Chrome OS, which are markets where performance and energy use are meticulously measured and managed.
Any effort to bring up Linux+ACPI on those SoCs in parallel for a niche market such as Linux laptops is bound to be futile, and it is much better to build on the existing DT support to fill in the blanks.
OK, makes sense. I agree a religious discussion isn't very useful, and you've pointed out the difference in design, which explains a lot of this.
The problem, of course, is that the idea that we would maintain the DTs for these systems in the kernel tree is laughable. So either these systems need to ship as vertically integrated systems (Android, CrOS), or we need to muster the self-discipline to create a DT description and *stick with it* rather than drop it like a brick as soon as the Linux minor version changes, so that we can support users installing their own Linux distros.
Yes.
I'm assuming no one has a magic solution for this?
One option could be for OEMs to provide a devicetree package for each kernel version, perhaps in a /boot/oem directory, with the firmware / bootloader selecting the closest one available. In other words, we try to solve the problem of 'OEMs owning the platform vs. distros owning the OS' by separating the concerns.
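To make that concrete, here is a rough sketch of the selection step. Everything in it is hypothetical - the /boot/oem layout, the per-kernel-version subdirectories and the board.dtb filename are just made up to illustrate the idea; real firmware would obviously implement this natively:

#!/usr/bin/env python3
# Illustrative sketch only: pick the newest OEM devicetree package that is not
# newer than the running kernel, falling back to the oldest one available.
from pathlib import Path


def parse_version(name: str) -> tuple[int, ...]:
    # Turn "6.9" or "6.9.2" into a tuple of ints so versions compare naturally.
    return tuple(int(part) for part in name.split(".") if part.isdigit())


def select_dtb(oem_dir: Path, kernel_version: str) -> Path | None:
    target = parse_version(kernel_version)
    # Assume one subdirectory per kernel version, e.g. /boot/oem/6.6, /boot/oem/6.9
    candidates = sorted(
        (parse_version(d.name), d)
        for d in oem_dir.iterdir()
        if d.is_dir() and d.name[:1].isdigit()
    )
    if not candidates:
        return None
    best = candidates[0][1]
    for version, directory in candidates:
        if version <= target:
            best = directory
    return best / "board.dtb"


if __name__ == "__main__":
    # With /boot/oem/6.6/ and /boot/oem/6.9/ present, a 6.9.2 kernel would get
    # /boot/oem/6.9/board.dtb; a 6.1 kernel would fall back to /boot/oem/6.6/.
    print(select_dtb(Path("/boot/oem"), "6.9.2"))

The matching rule itself can stay that simple; the hard part is the social one of OEMs actually publishing and maintaining such packages.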
I suppose another would be to separate the DTs into a package for each SoC vendor or family (but still distributed by the distro and associated with the kernel), so we don't need to install lots of unnecessary cruft.
Regards,
Simon