Hi,
This series switches the Qcom PCIe controller driver to a bus notifier for enabling ASPM (and updating OPP) for PCI devices. This series is intended to fix the ASPM regression reported (offlist) on the Qcom compute platforms running Linux. It turned out that the ASPM enablement logic in the Qcom controller driver had a flaw that got triggered by the recent changes to the pwrctrl framework (more details in patch 1/2).
Testing
-------
I've tested this series on the ThinkPad T14s laptop and was able to observe ASPM state changes (through the controller debugfs entry and lspci) for the WLAN device.
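For reference, roughly the commands used; the BDF and the debugfs directory name below are examples and will differ per machine (IIRC the driver's debugfs entry is called 'link_transition_count'):

  # ASPM state as negotiated/enabled on the WLAN endpoint (example BDF)
  $ sudo lspci -vvv -s 0000:01:00.0 | grep ASPM

  # per-controller ASPM transition counters from the controller debugfs
  $ sudo cat /sys/kernel/debug/1c00000.pcie/link_transition_count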
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
---
Manivannan Sadhasivam (2):
      PCI: qcom: Switch to bus notifier for enabling ASPM of PCI devices
      PCI: qcom: Move qcom_pcie_icc_opp_update() to notifier callback
 drivers/pci/controller/dwc/pcie-qcom.c | 73 ++++++++++++++++++----------------
 1 file changed, 38 insertions(+), 35 deletions(-)
---
base-commit: 00f0defc332be94b7f1fdc56ce7dcb6528cdf002
change-id: 20250714-aspm_fix-eed392631c8f
Best regards,
Commit 9f4f3dfad8cf ("PCI: qcom: Enable ASPM for platforms supporting 1.9.0 ops") allowed the Qcom controller driver to enable ASPM for all PCI devices enumerated at the time of the controller driver probe. It proved to be useful for devices already powered on by the bootloader as it allowed devices to enter ASPM without user intervention.
However, it could not enable ASPM for hotplug capable devices, i.e., devices enumerated *after* the controller driver probe. This limitation mostly went unnoticed as the Qcom PCI controllers are not hotplug capable and the bootloader has been enabling the PCI devices before the Linux kernel boots (mostly on the Qcom compute platforms which users use on a daily basis).
But with the advent of commit b458ff7e8176 ("PCI/pwrctl: Ensure that pwrctl drivers are probed before PCI client drivers"), the pwrctrl driver started to block the PCI device enumeration until it had been probed. Though the intention of the commit was to avoid a race between the pwrctrl driver and the PCI client driver, it also meant that pwrctrl-controlled PCI devices may get probed after the controller driver and will no longer have ASPM enabled. So users started noticing high runtime power consumption with WLAN chipsets on Qcom compute platforms like the ThinkPad X13s, ThinkPad T14s, etc.
Obviously, it is the pwrctrl change that caused the regression, but it ultimately uncovered a flaw in the ASPM enablement logic of the controller driver. So to address the actual issue, switch to a bus notifier for enabling ASPM of the PCI devices. The notifier will notify the controller driver when a PCI device is attached to the bus, thereby allowing it to enable ASPM more reliably. It should be noted that 'pci_dev::link_state', which is required for enabling ASPM by the pci_enable_link_state_locked() API, is only set by the time of the BUS_NOTIFY_BIND_DRIVER stage of the notification. So we cannot enable ASPM during the BUS_NOTIFY_ADD_DEVICE stage.
So with this, we can also get rid of the controller driver specific 'qcom_pcie_ops::host_post_init' callback.
Cc: stable@vger.kernel.org # v6.7
Fixes: 9f4f3dfad8cf ("PCI: qcom: Enable ASPM for platforms supporting 1.9.0 ops")
Reported-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
---
 drivers/pci/controller/dwc/pcie-qcom.c | 70 ++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 33 deletions(-)
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 620ac7cf09472b84c37e83ee3ce40e94a1d9d878..b4993642ed90915299e825e47d282b8175a78346 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -20,6 +20,7 @@
 #include <linux/kernel.h>
 #include <linux/limits.h>
 #include <linux/init.h>
+#include <linux/notifier.h>
 #include <linux/of.h>
 #include <linux/of_pci.h>
 #include <linux/pci.h>
@@ -247,7 +248,6 @@ struct qcom_pcie_ops {
 	int (*get_resources)(struct qcom_pcie *pcie);
 	int (*init)(struct qcom_pcie *pcie);
 	int (*post_init)(struct qcom_pcie *pcie);
-	void (*host_post_init)(struct qcom_pcie *pcie);
 	void (*deinit)(struct qcom_pcie *pcie);
 	void (*ltssm_enable)(struct qcom_pcie *pcie);
 	int (*config_sid)(struct qcom_pcie *pcie);
@@ -286,6 +286,7 @@ struct qcom_pcie {
 	const struct qcom_pcie_cfg *cfg;
 	struct dentry *debugfs;
 	struct list_head ports;
+	struct notifier_block nb;
 	bool suspended;
 	bool use_pm_opp;
 };
@@ -1040,25 +1041,6 @@ static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie)
 	return 0;
 }
 
-static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata)
-{
-	/*
-	 * Downstream devices need to be in D0 state before enabling PCI PM
-	 * substates.
-	 */
-	pci_set_power_state_locked(pdev, PCI_D0);
-	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
-
-	return 0;
-}
-
-static void qcom_pcie_host_post_init_2_7_0(struct qcom_pcie *pcie)
-{
-	struct dw_pcie_rp *pp = &pcie->pci->pp;
-
-	pci_walk_bus(pp->bridge->bus, qcom_pcie_enable_aspm, NULL);
-}
-
 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie)
 {
 	struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
@@ -1358,19 +1340,9 @@ static void qcom_pcie_host_deinit(struct dw_pcie_rp *pp)
 	pcie->cfg->ops->deinit(pcie);
 }
 
-static void qcom_pcie_host_post_init(struct dw_pcie_rp *pp)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct qcom_pcie *pcie = to_qcom_pcie(pci);
-
-	if (pcie->cfg->ops->host_post_init)
-		pcie->cfg->ops->host_post_init(pcie);
-}
-
 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = {
 	.init = qcom_pcie_host_init,
 	.deinit = qcom_pcie_host_deinit,
-	.post_init = qcom_pcie_host_post_init,
 };
 
 /* Qcom IP rev.: 2.1.0	Synopsys IP rev.: 4.01a */
@@ -1432,7 +1404,6 @@ static const struct qcom_pcie_ops ops_1_9_0 = {
 	.get_resources = qcom_pcie_get_resources_2_7_0,
 	.init = qcom_pcie_init_2_7_0,
 	.post_init = qcom_pcie_post_init_2_7_0,
-	.host_post_init = qcom_pcie_host_post_init_2_7_0,
 	.deinit = qcom_pcie_deinit_2_7_0,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 	.config_sid = qcom_pcie_config_sid_1_9_0,
@@ -1443,7 +1414,6 @@ static const struct qcom_pcie_ops ops_1_21_0 = {
 	.get_resources = qcom_pcie_get_resources_2_7_0,
 	.init = qcom_pcie_init_2_7_0,
 	.post_init = qcom_pcie_post_init_2_7_0,
-	.host_post_init = qcom_pcie_host_post_init_2_7_0,
 	.deinit = qcom_pcie_deinit_2_7_0,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 };
@@ -1773,6 +1743,33 @@ static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie)
 	return 0;
 }
 
+static int qcom_pcie_enable_aspm(struct pci_dev *pdev)
+{
+	/*
+	 * Downstream devices need to be in D0 state before enabling PCI PM
+	 * substates.
+	 */
+	pci_set_power_state_locked(pdev, PCI_D0);
+	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
+
+	return 0;
+}
+
+static int pcie_qcom_notify(struct notifier_block *nb, unsigned long action,
+			    void *data)
+{
+	struct qcom_pcie *pcie = container_of(nb, struct qcom_pcie, nb);
+	struct device *dev = data;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	switch (action) {
+	case BUS_NOTIFY_BIND_DRIVER:
+		qcom_pcie_enable_aspm(pdev);
+		break;
+	}
+
+	return NOTIFY_DONE;
+}
 static int qcom_pcie_probe(struct platform_device *pdev)
 {
 	const struct qcom_pcie_cfg *pcie_cfg;
@@ -1946,10 +1943,15 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 	if (irq > 0)
 		pp->use_linkup_irq = true;
 
+	pcie->nb.notifier_call = pcie_qcom_notify;
+	ret = bus_register_notifier(&pci_bus_type, &pcie->nb);
+	if (ret)
+		goto err_phy_exit;
+
 	ret = dw_pcie_host_init(pp);
 	if (ret) {
 		dev_err(dev, "cannot initialize host\n");
-		goto err_phy_exit;
+		goto err_unregister_notifier;
 	}
 
 	name = devm_kasprintf(dev, GFP_KERNEL, "qcom_pcie_global_irq%d",
@@ -1982,6 +1984,8 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 
 err_host_deinit:
 	dw_pcie_host_deinit(pp);
+err_unregister_notifier:
+	bus_unregister_notifier(&pci_bus_type, &pcie->nb);
 err_phy_exit:
 	qcom_pcie_phy_exit(pcie);
 	list_for_each_entry_safe(port, tmp, &pcie->ports, list)
On Mon, Jul 14, 2025 at 11:31:04PM +0530, Manivannan Sadhasivam wrote:
Commit 9f4f3dfad8cf ("PCI: qcom: Enable ASPM for platforms supporting 1.9.0 ops") allowed the Qcom controller driver to enable ASPM for all PCI devices enumerated at the time of the controller driver probe. It proved to be useful for devices already powered on by the bootloader as it allowed devices to enter ASPM without user intervention.
However, it could not enable ASPM for hotplug capable devices, i.e., devices enumerated *after* the controller driver probe. This limitation mostly went unnoticed as the Qcom PCI controllers are not hotplug capable and the bootloader has been enabling the PCI devices before the Linux kernel boots (mostly on the Qcom compute platforms which users use on a daily basis).
But with the advent of commit b458ff7e8176 ("PCI/pwrctl: Ensure that pwrctl drivers are probed before PCI client drivers"), the pwrctrl driver started to block the PCI device enumeration until it had been probed. Though the intention of the commit was to avoid a race between the pwrctrl driver and the PCI client driver, it also meant that pwrctrl-controlled PCI devices may get probed after the controller driver and will no longer have ASPM enabled. So users started noticing high runtime power consumption with WLAN chipsets on Qcom compute platforms like the ThinkPad X13s, ThinkPad T14s, etc.
Obviously, it is the pwrctrl change that caused the regression, but it ultimately uncovered a flaw in the ASPM enablement logic of the controller driver. So to address the actual issue, switch to a bus notifier for enabling ASPM of the PCI devices. The notifier will notify the controller driver when a PCI device is attached to the bus, thereby allowing it to enable ASPM more reliably. It should be noted that 'pci_dev::link_state', which is required for enabling ASPM by the pci_enable_link_state_locked() API, is only set by the time of the BUS_NOTIFY_BIND_DRIVER stage of the notification. So we cannot enable ASPM during the BUS_NOTIFY_ADD_DEVICE stage.
A problem with this approach is that ASPM will never be enabled (and power consumption will be higher) in case an endpoint driver is missing.
I think that's something we should try to avoid.
So with this, we can also get rid of the controller driver specific 'qcom_pcie_ops::host_post_init' callback.
Cc: stable@vger.kernel.org # v6.7
Fixes: 9f4f3dfad8cf ("PCI: qcom: Enable ASPM for platforms supporting 1.9.0 ops")
Reported-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
Note that the patch fails to apply to 6.16-rc6 due to changes in linux-next. Depending on how fast we can come up with a fix it may be better to target 6.16.
-static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata)
-{
-	/*
-	 * Downstream devices need to be in D0 state before enabling PCI PM
-	 * substates.
-	 */
-	pci_set_power_state_locked(pdev, PCI_D0);
-	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
-
-	return 0;
-}
I think you should consider leaving this helper in place here to keep the size of the diff down (e.g. as you intend to backport this).
+static int qcom_pcie_enable_aspm(struct pci_dev *pdev)
+{
+	/*
+	 * Downstream devices need to be in D0 state before enabling PCI PM
+	 * substates.
+	 */
+	pci_set_power_state_locked(pdev, PCI_D0);
+	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
You need to use the non-locked helpers here since you no longer hold the bus semaphore (e.g. as reported by lockdep).
Maybe this makes the previous comment about not moving the helper moot.
+
+	return 0;
+}
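For the record, something like the below (completely untested sketch) is what I had in mind, assuming pci_set_power_state() and pci_enable_link_state() are the right non-locked counterparts here:

/* Untested sketch: same helper, but using the non-locked variants. */
static int qcom_pcie_enable_aspm(struct pci_dev *pdev)
{
	/*
	 * Downstream devices need to be in D0 state before enabling PCI PM
	 * substates.
	 */
	pci_set_power_state(pdev, PCI_D0);
	pci_enable_link_state(pdev, PCIE_LINK_STATE_ALL);

	return 0;
}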
+static int pcie_qcom_notify(struct notifier_block *nb, unsigned long action,
+			    void *data)
+{
+	struct qcom_pcie *pcie = container_of(nb, struct qcom_pcie, nb);
This results in an unused variable warning (presumably until the next patch in the series is applied).
+	struct device *dev = data;
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	switch (action) {
+	case BUS_NOTIFY_BIND_DRIVER:
+		qcom_pcie_enable_aspm(pdev);
+		break;
+	}
+
+	return NOTIFY_DONE;
+}
Missing newline.
 static int qcom_pcie_probe(struct platform_device *pdev)
 {
 	const struct qcom_pcie_cfg *pcie_cfg;
Johan
On Tue, Jul 15, 2025 at 09:48:30AM GMT, Johan Hovold wrote:
On Mon, Jul 14, 2025 at 11:31:04PM +0530, Manivannan Sadhasivam wrote:
Commit 9f4f3dfad8cf ("PCI: qcom: Enable ASPM for platforms supporting 1.9.0 ops") allowed the Qcom controller driver to enable ASPM for all PCI devices enumerated at the time of the controller driver probe. It proved to be useful for devices already powered on by the bootloader as it allowed devices to enter ASPM without user intervention.
However, it could not enable ASPM for hotplug capable devices, i.e., devices enumerated *after* the controller driver probe. This limitation mostly went unnoticed as the Qcom PCI controllers are not hotplug capable and the bootloader has been enabling the PCI devices before the Linux kernel boots (mostly on the Qcom compute platforms which users use on a daily basis).
But with the advent of commit b458ff7e8176 ("PCI/pwrctl: Ensure that pwrctl drivers are probed before PCI client drivers"), the pwrctrl driver started to block the PCI device enumeration until it had been probed. Though the intention of the commit was to avoid a race between the pwrctrl driver and the PCI client driver, it also meant that pwrctrl-controlled PCI devices may get probed after the controller driver and will no longer have ASPM enabled. So users started noticing high runtime power consumption with WLAN chipsets on Qcom compute platforms like the ThinkPad X13s, ThinkPad T14s, etc.
Obviously, it is the pwrctrl change that caused the regression, but it ultimately uncovered a flaw in the ASPM enablement logic of the controller driver. So to address the actual issue, switch to a bus notifier for enabling ASPM of the PCI devices. The notifier will notify the controller driver when a PCI device is attached to the bus, thereby allowing it to enable ASPM more reliably. It should be noted that 'pci_dev::link_state', which is required for enabling ASPM by the pci_enable_link_state_locked() API, is only set by the time of the BUS_NOTIFY_BIND_DRIVER stage of the notification. So we cannot enable ASPM during the BUS_NOTIFY_ADD_DEVICE stage.
A problem with this approach is that ASPM will never be enabled (and power consumption will be higher) in case an endpoint driver is missing.
I'm aware of this limitation. But I don't think we should really worry about that scenario. No one is going to run an OS intentionally with a PCI device and without the relevant driver. If that happens, it might be due to some issue in driver loading, or the user is doing it intentionally. Such scenarios are short lived IMO.
I think that's something we should try to avoid.
I would've fancied a bus notifier post device addition, but there is none available and I don't see a real incentive in adding one. The other option would be to add an ops to 'struct pci_host_bridge', but I really try not to introduce such a thing unless really mandatory.
So with this, we can also get rid of the controller driver specific 'qcom_pcie_ops::host_post_init' callback.
Cc: stable@vger.kernel.org # v6.7
Fixes: 9f4f3dfad8cf ("PCI: qcom: Enable ASPM for platforms supporting 1.9.0 ops")
Reported-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
Note that the patch fails to apply to 6.16-rc6 due to changes in linux-next. Depending on how fast we can come up with a fix it may be better to target 6.16.
I rebased this series on top of the pci/controller/qcom branch, where we have some dependency. But I could spin an independent fix if Bjorn is OK to take it for the 6.16-rcs.
-static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata)
-{
-	/*
-	 * Downstream devices need to be in D0 state before enabling PCI PM
-	 * substates.
-	 */
-	pci_set_power_state_locked(pdev, PCI_D0);
-	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
-
-	return 0;
-}
I think you should consider leaving this helper in place here to keep the size of the diff down (e.g. as you intend to backport this).
Ok.
+static int qcom_pcie_enable_aspm(struct pci_dev *pdev)
+{
+	/*
+	 * Downstream devices need to be in D0 state before enabling PCI PM
+	 * substates.
+	 */
+	pci_set_power_state_locked(pdev, PCI_D0);
+	pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
You need to use the non-locked helpers here since you no longer hold the bus semaphore (e.g. as reported by lockdep).
Good catch!
- Mani
On Tue, Jul 15, 2025 at 02:41:23PM +0530, Manivannan Sadhasivam wrote:
On Tue, Jul 15, 2025 at 09:48:30AM GMT, Johan Hovold wrote:
On Mon, Jul 14, 2025 at 11:31:04PM +0530, Manivannan Sadhasivam wrote:
Obviously, it is the pwrctrl change that caused the regression, but it ultimately uncovered a flaw in the ASPM enablement logic of the controller driver. So to address the actual issue, switch to a bus notifier for enabling ASPM of the PCI devices. The notifier will notify the controller driver when a PCI device is attached to the bus, thereby allowing it to enable ASPM more reliably. It should be noted that 'pci_dev::link_state', which is required for enabling ASPM by the pci_enable_link_state_locked() API, is only set by the time of the BUS_NOTIFY_BIND_DRIVER stage of the notification. So we cannot enable ASPM during the BUS_NOTIFY_ADD_DEVICE stage.
A problem with this approach is that ASPM will never be enabled (and power consumption will be higher) in case an endpoint driver is missing.
I'm aware of this limitation. But I don't think we should really worry about that scenario. No one is going to run an OS intentionally with a PCI device and without the relevant driver. If that happens, it might be due to some issue in driver loading, or the user is doing it intentionally. Such scenarios are short lived IMO.
There may not even be a driver (yet). A user could plug in whatever device in a free slot. I can also imagine someone wanting to blacklist a driver temporarily for whatever reason.
How would this work on x86? Would the BIOS typically enable ASPM for each EP? Then that's what we should do here too, even if the EP driver happens to be disabled.
Note that the patch fails to apply to 6.16-rc6 due to changes in linux-next. Depending on how fast we can come up with a fix it may be better to target 6.16.
I rebased this series on top of the pci/controller/qcom branch, where we have some dependency. But I could spin an independent fix if Bjorn is OK to take it for the 6.16-rcs.
Or we can just backport manually as we are indeed already at rc6.
Johan
On 7/15/25 11:33 AM, Johan Hovold wrote:
On Tue, Jul 15, 2025 at 02:41:23PM +0530, Manivannan Sadhasivam wrote:
On Tue, Jul 15, 2025 at 09:48:30AM GMT, Johan Hovold wrote:
On Mon, Jul 14, 2025 at 11:31:04PM +0530, Manivannan Sadhasivam wrote:
Obviously, it is the pwrctrl change that caused the regression, but it ultimately uncovered a flaw in the ASPM enablement logic of the controller driver. So to address the actual issue, switch to a bus notifier for enabling ASPM of the PCI devices. The notifier will notify the controller driver when a PCI device is attached to the bus, thereby allowing it to enable ASPM more reliably. It should be noted that 'pci_dev::link_state', which is required for enabling ASPM by the pci_enable_link_state_locked() API, is only set by the time of the BUS_NOTIFY_BIND_DRIVER stage of the notification. So we cannot enable ASPM during the BUS_NOTIFY_ADD_DEVICE stage.
A problem with this approach is that ASPM will never be enabled (and power consumption will be higher) in case an endpoint driver is missing.
I'm aware of this limitation. But I don't think we should really worry about that scenario. No one is going to run an OS intentionally with a PCI device and without the relevant driver. If that happens, it might be due to some issue in driver loading, or the user is doing it intentionally. Such scenarios are short lived IMO.
There may not even be a driver (yet). A user could plug in whatever device in a free slot. I can also imagine someone wanting to blacklist a driver temporarily for whatever reason.
How would this work on x86? Would the BIOS typically enable ASPM for each EP? Then that's what we should do here too, even if the EP driver happens to be disabled.
Not sure about all x86, but the Intel VMD controller driver surely doesn't care what's on the other end:
drivers/pci/controller/vmd.c : vmd_pm_enable_quirk()
Konrad
On Tue, Jul 15, 2025 at 11:33:16AM GMT, Johan Hovold wrote:
On Tue, Jul 15, 2025 at 02:41:23PM +0530, Manivannan Sadhasivam wrote:
On Tue, Jul 15, 2025 at 09:48:30AM GMT, Johan Hovold wrote:
On Mon, Jul 14, 2025 at 11:31:04PM +0530, Manivannan Sadhasivam wrote:
Obviously, it is the pwrctrl change that caused the regression, but it ultimately uncovered a flaw in the ASPM enablement logic of the controller driver. So to address the actual issue, switch to a bus notifier for enabling ASPM of the PCI devices. The notifier will notify the controller driver when a PCI device is attached to the bus, thereby allowing it to enable ASPM more reliably. It should be noted that 'pci_dev::link_state', which is required for enabling ASPM by the pci_enable_link_state_locked() API, is only set by the time of the BUS_NOTIFY_BIND_DRIVER stage of the notification. So we cannot enable ASPM during the BUS_NOTIFY_ADD_DEVICE stage.
A problem with this approach is that ASPM will never be enabled (and power consumption will be higher) in case an endpoint driver is missing.
I'm aware of this limitation. But I don't think we should really worry about that scenario. No one is going to run an OS intentionally with a PCI device and without the relevant driver. If that happens, it might be due to some issue in driver loading, or the user is doing it intentionally. Such scenarios are short lived IMO.
There may not even be a driver (yet). A user could plug in whatever device in a free slot. I can also imagine someone wanting to blacklist a driver temporarily for whatever reason.
Yes, that's why I said these scenarios are 'short lived'.
How would this work on x86? Would the BIOS typically enable ASPM for each EP? Then that's what we should do here too, even if the EP driver happens to be disabled.
There is no guarantee that the BIOS would enable ASPM for all the devices in the x86 world. Usually, the BIOS would enable ASPM for devices that it makes use of (like NVMe SSD or WLAN) and not touch the rest, AFAIK.
Note that the patch fails to apply to 6.16-rc6 due to changes in linux-next. Depending on how fast we can come up with a fix it may be better to target 6.16.
I rebased this series on top of the pci/controller/qcom branch, where we have some dependency. But I could spin an independent fix if Bjorn is OK to take it for the 6.16-rcs.
Or we can just backport manually as we are indeed already at rc6.
I'll leave it up to Bjorn to decide.
- Mani
On Tue, Jul 15, 2025 at 03:57:12PM +0530, Manivannan Sadhasivam wrote:
On Tue, Jul 15, 2025 at 11:33:16AM GMT, Johan Hovold wrote:
On Tue, Jul 15, 2025 at 02:41:23PM +0530, Manivannan Sadhasivam wrote:
On Tue, Jul 15, 2025 at 09:48:30AM GMT, Johan Hovold wrote:
A problem with this approach is that ASPM will never be enabled (and power consumption will be higher) in case an endpoint driver is missing.
I'm aware of this limitation. But I don't think we should really worry about that scenario. No one is going to run an OS intentionally with a PCI device and without the relevant driver. If that happens, it might be due to some issue in driver loading, or the user is doing it intentionally. Such scenarios are short lived IMO.
There may not even be a driver (yet). A user could plug in whatever device in a free slot. I can also imagine someone wanting to blacklist a driver temporarily for whatever reason.
Yes, that's why I said these scenarios are 'short lived'.
My point is the opposite: that you should not make such assumptions (e.g. hardware not supported by Linux or drivers disabled due to stability or security concerns).
Johan