ACPI on ARMv8 Servers
---------------------

Authors
-------
Al Stone
Graeme Gregory
Hanjun Guo

Contents
--------
This documentation directory consists of the following files.  The sum
of these files is the complete document.

   000_index.txt              => this file
   010_intro.txt              => very brief overview
   020_why.txt                => copy of a blog post explaining "Why ACPI on ARM?"
   030_reln_w_dt.txt          => the relationship between DT and ACPI
   040_booting.txt            => booting with ACPI
   050_acpi_detection.txt     => detecting when ACPI is in use
   060_device_enumeration.txt => device enumeration with ACPI
   070_power_management.txt   => power management with ACPI
   080_clocks.txt             => expectations for device clocks
   090_recommendations.txt    => driver writing recommendations
   100_ASWG.txt               => ACPI Spec Working Group in UEFI
   110_linux_code.txt         => specific requirements for kernel code
   120_acpi_objects.txt       => specific requirements for ACPI objects
   190_references.txt         => bibliography for all sections
   200_signatures.txt         => Signed-off-bys, Reviewed-bys, ...
   210_todo.txt               => TODO list for this document


ACPI on ARMv8 Servers
---------------------
ACPI can be used for ARMv8 general purpose servers designed to follow
the ARM SBSA (Server Base System Architecture) [0] and SBBR (Server
Base Boot Requirements) [1] specifications, currently available to
those with an ARM login at http://silver.arm.com.

The ARMv8 kernel implements the reduced hardware model of ACPI version
5.1 or later.  Links to the specification and all external documents it
refers to are managed by the UEFI Forum.  The specification is
available at http://www.uefi.org/specifications and external documents
can be found via http://www.uefi.org/acpi.

If an ARMv8 system does not meet the requirements of the SBSA and SBBR,
or cannot be described using the mechanisms defined in the required
ACPI specifications, then it should be described by Device Tree (DT)
instead, since DT is more suitable for such hardware.

While the documents mentioned above set out the requirements for
building an industry-standard ARMv8 server, they also apply to more
than one operating system.  The purpose of this document is to describe
specifically the interaction between ACPI and Linux on an ARMv8 system
-- that is, what Linux expects of ACPI and what ACPI can expect of
Linux.


Compatibility
-------------
One of the primary motivations for the standardization described here
is to provide backward compatibility for Linux kernels.  In the server
market, software and hardware are often used for a long time.  ACPI
allows the kernel and firmware to agree on a consistent abstraction
that can be maintained over time, even as hardware or software change.
As long as the abstraction is maintained, systems can be updated
without having to replace the kernel.

When a Linux driver or subsystem first uses ACPI, it targets a baseline
version of the ACPI specification.  ACPI firmware must continue to
work, even though it may not be optimal, with the earliest kernel
version that first provides support for that baseline version of ACPI.
There may be a need for additional drivers, but adding new
functionality (e.g., CPU power management) should not break older
kernel versions.  Further, ACPI firmware must also work with the most
recent version of the kernel.


Why ACPI on ARM? [2]
--------------------
Why are we doing ACPI on ARM?  That question has been asked many times,
but we haven’t yet had a good summary of the most important reasons for
wanting ACPI on ARM.  This article is an attempt to state the rationale
clearly.

During an email conversation late last year, Catalin Marinas asked for
a summary of exactly why we want ACPI on ARM; Dong Wei replied with the
following list:

> 1. Support multiple OSes, including Linux and Windows
> 2. Support device configurations
> 3. Support dynamic device configurations (hot add/removal)
> 4. Support hardware abstraction through control methods
> 5. Support power management
> 6. Support thermal management
> 7. Support RAS interfaces

The above list is certainly true in that all of them need to be
supported.  However, that list doesn’t give the rationale for choosing
ACPI.  We already have DT mechanisms for doing most of the above, and
can certainly create new bindings for anything that is missing.  So, if
it isn’t an issue of functionality, then how does ACPI differ from DT
and why is ACPI a better fit for general purpose ARM servers?

The difference is in the support model.  To explain what I mean, I’m
first going to expand on each of the items above and discuss the
similarities and differences between ACPI and DT.  Then, with that as
the groundwork, I’ll discuss how ACPI is a better fit for the general
purpose hardware support model.


Device Configurations
---------------------
2. Support device configurations
3. Support dynamic device configurations (hot add/removal)

From day one, DT was about device configurations.  There isn’t any
significant difference between ACPI & DT here.  In fact, the majority
of ACPI tables are completely analogous to DT descriptions.  With the
exception of the DSDT and SSDT tables, most ACPI tables are merely flat
data used to describe hardware.

DT platforms have also supported dynamic configuration and hotplug for
years.  There isn’t a lot here that differentiates between ACPI and DT.
The biggest difference is that dynamic changes to the ACPI namespace
can be triggered by ACPI methods, whereas for DT changes are received
as messages from firmware and have been very much platform specific
(e.g., IBM pSeries does this).


Power Management
----------------
4. Support hardware abstraction through control methods
5. Support power management
6. Support thermal management

Power, thermal, and clock management can all be dealt with as a group.
ACPI defines a power management model (OSPM) that both the platform and
the OS conform to.  The OS implements the OSPM state machine, but the
platform can provide state change behaviour in the form of bytecode
methods.  Methods can access hardware directly or hand off PM
operations to a coprocessor.  The OS really doesn’t have to care about
the details as long as the platform obeys the rules of the OSPM model.

With DT, the kernel has device drivers for each and every component in
the platform, and configures them using DT data.  DT itself doesn’t
have a PM model.  Rather, the PM model is an implementation detail of
the kernel.  Device drivers use DT data to decide how to handle PM
state changes.  We have clock, pinctrl, and regulator frameworks in the
kernel for working out runtime PM.  However, this only works when all
the drivers and support code have been merged into the kernel.  When
the kernel’s PM model doesn’t work for new hardware, then we change the
model.  This works very well for mobile/embedded because the vendor
controls the kernel.  We can change things when we need to, but we also
struggle with getting board support mainlined.

This difference has a big impact when it comes to OS support.

Engineers from hardware vendors, Microsoft, and most vocally Red Hat
have all told me bluntly that rebuilding the kernel doesn’t work for
enterprise OS support.  Their model is based around a fixed OS release
that ideally boots out-of-the-box.  It may still need additional device
drivers for specific peripherals/features, but from a system view, the
OS works.  When additional drivers are provided separately, those
drivers fit within the existing OSPM model for power management.

This is where ACPI has a technical advantage over DT.  The ACPI OSPM
model and its bytecode give the HW vendors a level of abstraction under
their control, not the kernel’s.  When the hardware behaves differently
from what the OS expects, the vendor is able to change the behaviour
without changing the HW or patching the OS.

At this point you’d be right to point out that it is harder to get the
whole system working correctly when behaviour is split between the
kernel and the platform.  The OS must trust that the platform doesn’t
violate the OSPM model.  All manner of bad things happen if it does.
That is exactly why the DT model doesn’t encode behaviour: it is easier
to make changes and fix bugs when everything is within the same code
base.  We don’t need a platform/kernel split when we can modify the
kernel.

However, the enterprise folks don’t have that luxury.  The
platform/kernel split isn’t a design choice.  It is a characteristic of
the market.  Hardware and OS vendors each have their own product
timetables, and they don’t line up.  The timeline for getting patches
into the kernel and flowing through into OS releases puts OS support
far downstream from the actual release of hardware.  Hardware vendors
simply cannot wait for OS support to come online to be able to release
their products.  They need to be able to work with available releases,
and make their hardware behave in the way the OS expects.

The advantage of ACPI OSPM is that it defines behaviour and limits what
the hardware is allowed to do without involving the kernel.  What
remains is sorting out how we make sure everything works.  How do we
make sure there is enough cross platform testing to ensure new hardware
doesn’t ship broken and that new OS releases don’t break on old
hardware?  Those are the reasons why a UEFI/ACPI firmware summit is
being organized, it’s why the UEFI forum holds plugfests 3 times a
year, and it is why we’re working on FWTS and LuvOS.


Reliability, Availability & Serviceability (RAS)
------------------------------------------------
7. Support RAS interfaces

This isn’t a question of whether or not DT can support RAS.  Of course
it can.  Rather, it is a matter of RAS bindings already existing for
ACPI, including a usage model.  We’ve barely begun to explore this on
DT.  This item doesn’t make ACPI technically superior to DT, but it
certainly makes it more mature.


Multiplatform Support
---------------------
1. Support multiple OSes, including Linux and Windows

I’m tackling this item last because I think it is the most contentious
for those of us in the Linux world.  I wanted to get the other issues
out of the way before addressing it.

The separation between hardware vendors and OS vendors in the server
market is new for ARM.  For the first time, ARM hardware and OS release
cycles are completely decoupled from each other, and neither is
expected to have specific knowledge of the other (i.e., the hardware
vendor doesn’t control the choice of OS).

ARM and their partners want to create an ecosystem of independent OSes
and hardware platforms that don’t explicitly require the former to be
ported to the latter.  Now, one could argue that Linux is driving the
potential market for ARM servers, and therefore Linux is the only thing
that matters, but hardware vendors don’t see it that way.  For hardware
vendors it is in their best interest to support as wide a choice of
OSes as possible in order to catch the widest potential customer base.
Even if the majority choose Linux, some will choose BSD, some will
choose Windows, and some will choose something else.  Whether or not we
think this is foolish is beside the point; it isn’t something we have
influence over.

During early ARM server planning meetings between ARM, its partners and
other industry representatives (myself included) we discussed this
exact point.  Before us were two options, DT and ACPI.  As one of the
Linux people in the room, I advised that ACPI’s closed governance model
was a show stopper for Linux and that DT is the working interface.
Microsoft on the other hand made it abundantly clear that ACPI was the
only interface that they would support.  For their part, the hardware
vendors stated the platform abstraction behaviour of ACPI is a hard
requirement for their support model and that they would not close the
door on either Linux or Windows.

However, the one thing that all of us could agree on was that
supporting multiple interfaces doesn’t help anyone: it would require
twice as much effort on defining bindings (once for Linux-DT and once
for Windows-ACPI) and it would require firmware to describe everything
twice.  Eventually we reached the compromise to use ACPI, but on the
condition of opening the governance process to give Linux engineers
equal influence over the specification.  The fact that we now have a
much better seat at the ACPI table, for both ARM and x86, is a direct
result of these early ARM server negotiations.  We are no longer second
class citizens in the ACPI world and are actually driving much of the
recent development.

I know that this line of thought is more about market forces than a
hard technical argument between ACPI and DT, but it is an equally
significant one.  Agreeing on a single way of doing things is
important.  The ARM server ecosystem is better for the agreement to use
the same interface for all operating systems.  This is what is meant by
standards compliant.  The standard is a codification of the mutually
agreed interface.  It provides confidence that all vendors are using
the same rules for interoperability.


Summary
-------
To summarize, here is the short form rationale for ACPI on ARM:

-- ACPI’s bytecode allows the platform to encode behaviour.  DT
   explicitly does not support this.  For hardware vendors, being able
   to encode behaviour is an important tool for supporting operating
   system releases on new hardware.

-- ACPI’s OSPM defines a power management model that constrains what
   the platform is allowed to do into a specific model, while still
   having flexibility in hardware design.

-- For enterprise use-cases, ACPI has established bindings, such as
   for RAS, which are used in production.  DT does not.  Yes, we can
   define those bindings, but doing so means ARM and x86 will use
   completely different code paths in both firmware and the kernel.

-- Choosing a single interface for platform/OS abstraction is
   important.  It is not reasonable to require vendors to implement
   both DT and ACPI if they want to support multiple operating systems.
   Agreeing on a single interface instead of being fragmented into
   per-OS interfaces makes for better interoperability overall.

-- The ACPI governance process works well and we’re at the same table
   as HW vendors and other OS vendors.  In fact, there is no longer any
   reason to feel that ACPI is a Windows thing or that we are playing
   second fiddle to Microsoft.  The move of ACPI governance into the
   UEFI forum has significantly opened up the processes, and currently
   a large portion of the changes being made to ACPI is being driven
   by Linux.

At the beginning of this article I made the statement that the
difference is in the support model.  For servers, responsibility for
hardware behaviour cannot be purely the domain of the kernel, but
rather is split between the platform and the kernel.  ACPI frees the OS
from needing to understand all the minute details of the hardware so
that the OS doesn’t need to be ported to each and every device
individually.  It allows the hardware vendors to take responsibility
for PM behaviour without depending on an OS release cycle which is not
under their control.

ACPI is also important because hardware and OS vendors have already
worked out how to use it to support the general purpose ecosystem.  The
infrastructure is in place, the bindings are in place, and the process
is in place.  DT does exactly what we need it to when working with
vertically integrated devices, but we don’t have good processes for
supporting what the server vendors need.  We could potentially get
there with DT, but doing so doesn’t buy us anything.  ACPI already does
what the hardware vendors need, Microsoft won’t collaborate with us on
DT, and the hardware vendors would still need to provide two completely
separate firmware interfaces: one for Linux and one for Windows.


Relationship with Device Tree
-----------------------------
ACPI support in drivers and subsystems for ARMv8 should never be
mutually exclusive with DT support at compile time.

At boot time the kernel will only use one description method, depending
on parameters passed from the bootloader (including kernel bootargs).

Regardless of whether DT or ACPI is used, the kernel must always be
capable of booting with either scheme (in kernels with both schemes
enabled at compile time).


Booting using ACPI tables
-------------------------
The only defined method for passing ACPI tables to the kernel on ARMv8
is via the UEFI system configuration table.

When an ARMv8 system boots, it can either have DT information, ACPI
tables, or in some very unusual cases, both.  If no command line
parameters are used, the kernel will try to use DT for device
enumeration; if there is no DT present, the kernel will try to use ACPI
tables, but only if they are present.  If neither is available, the
kernel will not boot.  If acpi=force is used on the command line, the
kernel will attempt to use ACPI tables first, but fall back to DT if
the ACPI tables are not present.  The basic idea is that the kernel
will not fail to boot unless it absolutely has no other choice.

Processing of ACPI tables may be disabled by passing acpi=off on the
kernel command line; this is the default behavior if both ACPI and DT
tables are present.  If acpi=force is used, the kernel will ONLY use
device configuration information contained in the ACPI tables if those
tables are available.

In order for the kernel to load and use ACPI tables, the UEFI
implementation MUST set the ACPI_20_TABLE_GUID to point to the RSDP
table (the table with the ACPI signature "RSD PTR ").

If this pointer is incorrect and acpi=force is used, the kernel will
disable ACPI and try to use DT to boot instead.

If the pointer to the RSDP table is correct, the table will be mapped
into the kernel by the ACPI core, using the address provided by UEFI.

The ACPI core will then locate and map in all other ACPI tables
provided by using the addresses in the RSDP table to find the XSDT
(eXtended System Description Table).  The XSDT in turn provides the
addresses to all other ACPI tables provided by the system firmware; the
ACPI core will then traverse this table and map in the tables listed.

The ACPI core will ignore any provided RSDT (Root System Description
Table).  RSDTs have been deprecated and are ignored on arm64 since they
only allow for 32-bit addresses.

Further, the ACPI core will only use the 64-bit address fields in the
FADT (Fixed ACPI Description Table).  Any 32-bit address fields in the
FADT will be ignored on arm64.

Hardware reduced mode (see Section 4.1 of the ACPI 5.1 specification)
will be enforced by the ACPI core on arm64.  Doing so allows the ACPI
core to run less complex code since it no longer has to provide support
for legacy hardware from other architectures.

For the ACPI core to operate properly, and in turn provide the
information the kernel needs to configure devices, it expects to find
the following tables (all section numbers refer to the ACPI 5.1
specification):

   -- RSDP (Root System Description Pointer), section 5.2.5

   -- XSDT (eXtended System Description Table), section 5.2.8

   -- FADT (Fixed ACPI Description Table), section 5.2.9

   -- DSDT (Differentiated System Description Table), section 5.2.11.1

   -- MADT (Multiple APIC Description Table), section 5.2.12

   -- GTDT (Generic Timer Description Table), section 5.2.24

   -- If PCI is supported, the MCFG (Memory mapped ConFiGuration
      Table), section 5.2.6, specifically Table 5-31.

If the above tables are not all present, the kernel may or may not be
able to boot properly since it may not be able to configure all of the
devices available.


ACPI Detection
--------------
Drivers should determine their probe() type by checking for a null
value for ACPI_HANDLE, or by checking .of_node, or other information in
the device structure.  This is detailed further in the "Driver
Recommendations" section.

In non-driver code, if the presence of ACPI needs to be detected at
runtime, then check the value of acpi_disabled.  If CONFIG_ACPI is not
set, acpi_disabled will always be 1.
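
As a minimal sketch of the non-driver case (the function name below is
hypothetical), simply test acpi_disabled; the driver probe case is
shown in the "Driver Recommendations" section:

   #include <linux/acpi.h>

   static void example_setup(void)
   {
          if (acpi_disabled) {
                 /*
                  * ACPI is off, or CONFIG_ACPI is not set; DT (if
                  * present) is being used for device enumeration.
                  */
                 return;
          }

          /* ACPI tables are in use for this boot. */
   }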

Device Enumeration
------------------
Device descriptions in ACPI should use standard recognized ACPI
interfaces.  These may contain less information than is typically
provided via a Device Tree description for the same device.  This is
also one of the reasons that ACPI can be useful -- the driver takes
into account that it may have less detailed information about the
device and uses sensible defaults instead.  If done properly in the
driver, the hardware can change and improve over time without the
driver having to change at all.

Clocks provide an excellent example.  In DT, clocks need to be
specified and the drivers need to take them into account.  In ACPI, the
assumption is that UEFI will leave the device in a reasonable default
state, including any clock settings.  If for some reason the driver
needs to change a clock value, this can be done in an ACPI method; all
the driver needs to do is invoke the method and not concern itself with
what the method needs to do to change the clock.  Changing the hardware
can then take place over time by changing what the ACPI method does,
and not the driver.

ACPI drivers should only look at one specific ASL object -- the _DSD
object -- for device driver parameters (known in DT as "bindings", or
"Device Properties" in ACPI).  DT bindings also will be reviewed before
being used.  The UEFI Forum provides a mechanism for registering such
properties [4] so that they may be used on any operating system
supporting ACPI.  Device properties that have not been registered with
the UEFI Forum should not be used.

Drivers should look for device properties in the _DSD object ONLY; the
_DSD object is described in the ACPI specification section 6.2.5, but
more specifically, use the _DSD Device Properties UUID [5]:

   -- UUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301

   -- http://www.uefi.org/sites/default/files/resources/_DSD-device-properties-UUID.pdf

The kernel has an interface for looking up device properties in a
manner independent of whether DT or ACPI is being used, and that
interface should be used [6]; it can eliminate some duplication of code
paths in driver probing functions and discourage divergence between DT
bindings and ACPI device properties.

ACPI tables are described with a formal language called ASL, the ACPI
Source Language (section 19 of the specification).  This means that
there are always multiple ways to describe the same thing -- including
device properties.  For example, device properties could use an ASL
construct that looks like this: Name(KEY0, "value0").  An ACPI device
driver would then retrieve the value of the property by evaluating the
KEY0 object.  However, using Name() this way has multiple problems:
(1) ACPI limits names ("KEY0") to four characters, unlike DT; (2) there
is no industry-wide registry that maintains a list of names, minimizing
re-use; (3) there is also no registry for the definition of property
values ("value0"), again making re-use difficult; and (4) how does one
maintain backward compatibility as new hardware comes out?  The _DSD
method was created to solve precisely these sorts of problems; Linux
drivers should ALWAYS use the _DSD method for device properties and
nothing else.

The _DSM object (ACPI Section 9.14.1) could also be used for conveying
device properties to a driver.  Linux drivers should only expect it to
be used if _DSD cannot represent the data required, and there is no way
to create a new UUID for the _DSD object.  Note that there is even less
regulation of the use of _DSM than there is of _DSD.  Drivers that
depend on the contents of _DSM objects will be more difficult to
maintain over time because of this.

The _DSD object is a very flexible mechanism in ACPI, as are the
registered Device Properties.  This flexibility allows _DSD to cover
more than just the generic server case, and care should be taken in
device drivers not to expect it to replicate highly specific embedded
behaviour from DT.  For example, embedded systems often find it
necessary to provide detailed information about the clocks and
regulators available for power management that servers might not
require; the kind of fine-grained management needed for optimal battery
usage plays no role in a server.

Both DT bindings and ACPI device properties for device drivers have
review processes.  Use them.  And, before creating new device
properties, check to be sure that they have not been defined before and
either registered in the Linux kernel documentation or with the UEFI
Forum.
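
To illustrate, below is a minimal sketch using the unified device
property interface [6].  The probe function and the "queue-depth"
property are hypothetical; a real property would first be registered
and reviewed as described above.  The same call works whether the value
comes from a DT binding or from a _DSD entry:

   #include <linux/platform_device.h>
   #include <linux/property.h>

   static int example_probe(struct platform_device *pdev)
   {
          struct device *dev = &pdev->dev;
          u32 queue_depth;

          /* Works for both DT and ACPI; no probe-path split needed. */
          if (device_property_read_u32(dev, "queue-depth", &queue_depth))
                 queue_depth = 16;      /* sensible default, per above */

          /* ... continue probing using queue_depth ... */
          return 0;
   }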

If the device driver supports both ACPI and DT, please make sure the
device properties are consistent in both places.


Programmable Power Control Resources
------------------------------------
Programmable power control resources include such resources as
voltage/current providers (regulators) and clock sources.

With ACPI, the kernel clock and regulator frameworks are not expected
to be used at all.  The kernel assumes that power control of these
resources is represented with Power Resource Objects (ACPI section
7.1).  The ACPI core will then handle correctly enabling and disabling
resources as they are needed.  In order to get that to work, ACPI
assumes each device has defined D-states and that these can be
controlled through the optional ACPI methods _PS0, _PS1, _PS2, and
_PS3; in ACPI, _PS0 is the method to invoke to turn a device full on,
and _PS3 is for turning a device full off.

There are two options for using those Power Resources.  They can
either:

   -- be managed in a _PSx method which gets called on entry to power
      state Dx, or

   -- be declared separately as power resources with their own _ON and
      _OFF methods.  They are then tied back to D-states for a
      particular device via _PRx, which specifies which power resources
      a device needs to be on while in state Dx.  The kernel then
      tracks the number of devices using a power resource and calls
      _ON/_OFF as needed.

The kernel ACPI code will also assume that the _PSx methods follow the
normal ACPI rules for such methods:

   -- If either _PS0 or _PS3 is implemented, then the other method must
      also be implemented.

   -- If a device requires usage or setup of a power resource when on,
      the ASL should arrange for that resource to be allocated/enabled
      using the _PS0 method.

   -- Resources allocated or enabled in the _PS0 method should be
      disabled or de-allocated in the _PS3 method.

   -- Firmware will leave the resources in a reasonable state before
      handing over control to the kernel.

Such code in _PSx methods will of course be very platform specific.
But, this allows the driver to abstract out the interface for operating
the device and avoid having to read special non-standard values from
ACPI tables.  Further, abstracting the use of these resources allows
the hardware to change over time without requiring updates to the
driver.


Clocks
------
ACPI makes the assumption that clocks are initialized by the firmware
-- UEFI, in this case -- to some working value before control is handed
over to the kernel.  This has implications for devices such as UARTs,
or SoC-driven LCD displays, for example.

When the kernel boots, the clocks are assumed to be set to reasonable
working values.  If for some reason the frequency needs to change --
e.g., throttling for power management -- the device driver should
expect that process to be abstracted out into some ACPI method that can
be invoked (please see the ACPI specification for further
recommendations on standard methods to be expected).  The only
exceptions to this are CPU clocks, where CPPC provides a much richer
interface than ACPI methods.  If the clocks are not set, there is no
direct way for Linux to control them.

If an SoC vendor wants to provide fine-grained control of the system
clocks, they could do so by providing ACPI methods that could be
invoked by Linux drivers.  However, this is NOT recommended and Linux
drivers should NOT use such methods, even if they are provided.  Such
methods are not currently standardized in the ACPI specification, and
using them could tie a kernel to a very specific SoC, or tie an SoC to
a very specific version of the kernel, both of which we are trying to
avoid.
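
For completeness, here is a minimal sketch of what the power management
model described above can mean for a driver: rather than touching
clocks or regulators directly, the driver uses runtime PM, and the ACPI
core transitions the device between D-states (via _PSx and/or the _PRx
power resources) on its behalf.  The function name is hypothetical and
error handling is abbreviated; this is an illustration under those
assumptions, not a definitive implementation:

   #include <linux/platform_device.h>
   #include <linux/pm_runtime.h>

   static int example_do_work(struct platform_device *pdev)
   {
          int ret;

          /*
           * Power the device up; with an ACPI companion providing
           * _PS0/_PS3 (or _PR0 power resources), the ACPI core runs
           * them as needed.  pm_runtime_enable() is assumed to have
           * been called in probe.
           */
          ret = pm_runtime_get_sync(&pdev->dev);
          if (ret < 0) {
                 pm_runtime_put_noidle(&pdev->dev);
                 return ret;
          }

          /* ... perform the device work here ... */

          /* Allow the device to be powered down (_PS3/_OFF) again. */
          pm_runtime_put(&pdev->dev);
          return 0;
   }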

Driver Recommendations
----------------------
DO NOT remove any DT handling when adding ACPI support for a driver.
The same device may be used on many different systems.

DO try to structure the driver so that it is data-driven.  That is, set
up a struct containing internal per-device state based on defaults and
whatever else must be discovered by the driver probe function.  Then,
have the rest of the driver operate off of the contents of that struct.
Doing so should allow most divergence between ACPI and DT functionality
to be kept local to the probe function instead of being scattered
throughout the driver.  For example:

   static int device_probe_dt(struct platform_device *pdev)
   {
          /* DT specific functionality */
          ...
   }

   static int device_probe_acpi(struct platform_device *pdev)
   {
          /* ACPI specific functionality */
          ...
   }

   static int device_probe(struct platform_device *pdev)
   {
          ...
          struct device_node *node = pdev->dev.of_node;
          ...

          if (node)
                 ret = device_probe_dt(pdev);
          else if (ACPI_HANDLE(&pdev->dev))
                 ret = device_probe_acpi(pdev);
          else
                 /* other initialization */
                 ...

          /* Continue with any generic probe operations */
          ...
   }

DO keep the MODULE_DEVICE_TABLE entries together in the driver to make
it clear the different names the driver is probed for, both from DT and
from ACPI:

   static struct of_device_id virtio_mmio_match[] = {
           { .compatible = "virtio,mmio", },
           { }
   };
   MODULE_DEVICE_TABLE(of, virtio_mmio_match);

   static const struct acpi_device_id virtio_mmio_acpi_match[] = {
           { "LNRO0005", },
           { }
   };
   MODULE_DEVICE_TABLE(acpi, virtio_mmio_acpi_match);


ASWG
----
The following areas are not yet fully defined for ARM in the 5.1
version of the ACPI specification and are expected to be worked through
in the UEFI ACPI Specification Working Group (ASWG):

   -- ACPI based CPU topology
   -- ACPI based CPU idle control
   -- ACPI based SMMU and its IO topology
   -- ITS support for GIC in MADT

Participation in this group is open to all UEFI members.  Please see
http://www.uefi.org/workinggroup for details on group membership.

It is the intent of the ARMv8 ACPI kernel code to follow the ACPI
specification as closely as possible, and to only implement
functionality that complies with the released standards from the UEFI
ASWG.  As a practical matter, there will be vendors that provide bad
ACPI tables or violate the standards in some way.  If this is because
of errors, quirks and fixups may be necessary, but these will be
avoided if possible.  If there are features missing from ACPI that
preclude it from being used on a platform, ECRs (Engineering Change
Requests) should be submitted to ASWG and go through the normal
approval process; for those that are not UEFI members, many other
members of the Linux community are, and would likely be willing to
assist in submitting ECRs.


Linux Code
----------
Individual items needed by or provided by Linux source code are
discussed in the list that follows:

ACPI_OS_NAME
       This macro defines the string to be returned when an ACPI method
       invokes the _OS method.  On ARM64 systems, this macro will be
       "Linux" by default.  The command line parameter acpi_os= can be
       used to set it to some other value.  The default value for other
       architectures is "Microsoft Windows NT", for example.


ACPI Tables
-----------
The expectations on individual ACPI tables are discussed in the list
that follows:

FACS   Section 5.2.10 (signature == "FACS")
       It is unlikely that this table will be terribly useful.  If it
       is provided, the Global Lock will NOT be used since it is not
       part of the hardware reduced profile.

FADT   Section 5.2.9 (signature == "FACP")
       The HW_REDUCED_ACPI flag must be set.  All of the fields that
       are to be ignored when HW_REDUCED_ACPI is set are expected to be
       set to zero.

       If an FACS table is provided, the X_FIRMWARE_CTRL field is to be
       used, not FIRMWARE_CTRL.

       If a DSDT is provided, the X_DSDT field is to be used, not the
       DSDT field.

HEST   Section 18.3.2 (signature == "HEST")
       Until further error source types are defined, use only types 6
       (AER Root Port), 7 (AER Endpoint), 8 (AER Bridge), or 9 (Generic
       Hardware Error Source).  Firmware first error handling is
       possible if and only if Trusted Firmware is being used on arm64.

MADT   Section 5.2.12 (signature == "APIC")
       Only the GIC interrupt controller structures should be used
       (types 0xA - 0xE).


ACPI Objects
------------
The expectations on individual ACPI objects are discussed in the list
that follows:

_CCA   This method should be defined for all bus masters on arm64.
       While cache coherency is assumed, making it explicit ensures the
       kernel will set up DMA as it should.

_DDN   This field could be used for a device name.  However, it is
       meant for DOS device names (e.g., COM1), so it is not
       recommended for use on arm64.

_DSD   TBD

_DSM   Do not use this method.  It is not standardized, the return
       values are not well documented, and it is currently a frequent
       source of error.

\_GL   This object is not to be used in hardware reduced mode, and
       therefore should not be used on arm64.

\_GPE  This namespace is for x86 use only.  Do not use it on arm64.

GPE block device
       Do not use GPE block devices; these are not supported in the
       hardware reduced profile used by arm64.

_OFF   It is recommended to define this method for any device that can
       be turned on or off.

_ON    It is recommended to define this method for any device that can
       be turned on or off.

\_OS   This method will return "Linux" by default.  The command line
       parameter acpi_os= can be used to set it to some other value.

_OSC   This method can be a global method in ACPI (i.e., \_SB._OSC), or
       it may be associated with a specific device (e.g.,
       \_SB.DEV0._OSC), or both.  When used as a global method, only
       capabilities published in the ACPI specification are allowed.
       When used as a device-specific method, the process described in
       [TBD] MUST be used to create an _OSC definition; out-of-process
       use of _OSC is not allowed.

\_OSI  This method is deprecated on ARM64.  Any invocation of this
       method will print a warning on the console and return false.
       That is, as far as ACPI firmware is concerned, _OSI cannot be
       used to determine what sort of system is being used or what
       functionality is provided.  The _OSC method is to be used
       instead.

_PIC   This method should not be used.  On arm64, the only interrupt
       model available is GIC.

\_PR   This namespace is for x86 use only on legacy systems.  Do not
       use it on arm64.

_STA   It is recommended to define this method for any device that can
       be turned on or off.


References
----------
[0] http://silver.arm.com -- document ARM-DEN-0029, or newer: "Server
    Base System Architecture", version 2.3, dated 27 Mar 2014.

[1] http://silver.arm.com -- document ARM-DEN-0044A, or newer: "Server
    Base Boot Requirements, System Software on ARM Platforms", dated
    16 Aug 2014.

[2] http://www.secretlab.ca/archives/151, 10 Jan 2015, Copyright (c)
    2015 by Grant Likely (apart from formatting, copied verbatim).

[3] AMD ACPI for Seattle platform documentation:
    http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Seattle_ACPI_Guide.pdf

[4] http://www.uefi.org/acpi -- please see the link for the "ACPI _DSD
    Device Property Registry Instructions".

[5] http://www.uefi.org/acpi -- please see the link for the "_DSD
    (Device Specific Data) Implementation Guide".

[6] Kernel code for the unified device property interface can be found
    in include/linux/property.h and drivers/base/property.c.


=======================================================================
Signatures
----------
Signed-off-by: Graeme Gregory
Signed-off-by: Al Stone
Signed-off-by: Hanjun Guo

Reviewed-by: Suravee Suthikulpanit


=======================================================================
TODO List
---------
These are things that still need to be completed for this document.

-- Do we need license/copyright from Grant for the blog post?

-- _OSI/_OS discussion

-- _OSC discussion; must decide on a process for device-specific
   methods, or just ban them and insist on _DSD?

-- _DSD discussion; must include a clear process definition.

-- FWTS tests are needed:

   -- WARN if there are any invocations of _OS

   -- FAIL if there are any invocations of _OSI

   -- _OSC: TBD