From: Marc Zyngier <maz@kernel.org>
[ Upstream commit 668a9fe5c6a1bcac6b65d5e9b91a9eca86f782a3 ]
When requesting an interrupt, we correctly call into the runtime PM framework to guarantee that the underlying interrupt controller is up and running.
However, we fail to do so for chained interrupt controllers, as the mux interrupt is not requested along the same path.
Augment __irq_do_set_handler() to call into the runtime PM code in this case, making sure the PM flow is the same for all interrupts.
Reported-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Liu Ying <victor.liu@nxp.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/26973cddee5f527ea17184c0f3fccb70bc8969a0.camel@pen...
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/irq/chip.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 9afbd89b6096..356b289b8086 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -964,8 +964,10 @@ __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle,
 		if (desc->irq_data.chip != &no_irq_chip)
 			mask_ack_irq(desc);
 		irq_state_set_disabled(desc);
-		if (is_chained)
+		if (is_chained) {
 			desc->action = NULL;
+			WARN_ON(irq_chip_pm_put(irq_desc_get_irq_data(desc)));
+		}
 		desc->depth = 1;
 	}
 	desc->handle_irq = handle;
@@ -991,6 +993,7 @@ __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle,
 		irq_settings_set_norequest(desc);
 		irq_settings_set_nothread(desc);
 		desc->action = &chained_action;
+		WARN_ON(irq_chip_pm_get(irq_desc_get_irq_data(desc)));
 		irq_activate_and_startup(desc, IRQ_RESEND);
 	}
 }
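For illustration, here is a minimal user-space model of the balanced get/put
pattern the patch enforces: the chained path now takes the same runtime PM
reference that the request_irq() path takes, and drops it on teardown. All
names below are illustrative stand-ins, not the kernel API.

#include <assert.h>
#include <stdio.h>

static int pm_refcount;	/* stands in for the parent controller's PM count */

static void chip_pm_get(void) { pm_refcount++; }	/* power up parent */
static void chip_pm_put(void) { assert(pm_refcount > 0); pm_refcount--; }

static void set_chained_handler(void)    { chip_pm_get(); /* install mux handler */ }
static void remove_chained_handler(void) { /* uninstall handler */ chip_pm_put(); }

int main(void)
{
	set_chained_handler();
	printf("after install: refcount=%d\n", pm_refcount);	/* 1: parent held */
	remove_chained_handler();
	printf("after removal: refcount=%d\n", pm_refcount);	/* 0: balanced */
	return 0;
}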
From: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
[ Upstream commit e3f056a7aafabe4ac3ad4b7465ba821b44a7e639 ]
Add a compatible string to support the UniPhier NX1 SoC, which has the same kinds of controls as the other UniPhier SoCs.
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1653023822-19229-3-git-send-email-hayashi.kunihiko...
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/irqchip/irq-uniphier-aidet.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/drivers/irqchip/irq-uniphier-aidet.c b/drivers/irqchip/irq-uniphier-aidet.c
index 7ba7f253470e..b400f084e2cb 100644
--- a/drivers/irqchip/irq-uniphier-aidet.c
+++ b/drivers/irqchip/irq-uniphier-aidet.c
@@ -247,6 +247,7 @@ static const struct of_device_id uniphier_aidet_match[] = {
 	{ .compatible = "socionext,uniphier-ld11-aidet" },
 	{ .compatible = "socionext,uniphier-ld20-aidet" },
 	{ .compatible = "socionext,uniphier-pxs3-aidet" },
+	{ .compatible = "socionext,uniphier-nx1-aidet" },
 	{ /* sentinel */ }
 };
From: Alexander Usyskin <alexander.usyskin@intel.com>
[ Upstream commit 9f4639373e6756e1ccf0029f861f1061db3c3616 ]
Link reset flow is always performed in the runtime-resumed state. The internal PG state may be left as ON after suspend and will not be updated on resume if D0i3 is not supported.
Ensure that the internal PG state is set to the right value on entry to the flow in case the firmware does not support D0i3.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Link: https://lore.kernel.org/r/20220606144225.282375-1-tomas.winkler@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/misc/mei/hw-me.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
index 60c8c84181a9..ba12be7ec7df 100644
--- a/drivers/misc/mei/hw-me.c
+++ b/drivers/misc/mei/hw-me.c
@@ -1138,6 +1138,8 @@ static int mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
 			ret = mei_me_d0i3_exit_sync(dev);
 			if (ret)
 				return ret;
+		} else {
+			hw->pg_state = MEI_PG_OFF;
 		}
 	}
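As a rough user-space sketch of the state handling (types and names here are
simplified stand-ins for the driver's): when D0i3 is unsupported, nothing on
the resume path refreshes the cached PG state, so the reset flow has to force
it to OFF itself.

#include <stdbool.h>
#include <stdio.h>

enum pg_state { MEI_PG_OFF, MEI_PG_ON };

struct hw { bool d0i3_supported; enum pg_state pg_state; };

static void hw_reset(struct hw *hw)
{
	if (hw->d0i3_supported) {
		/* a real D0i3 exit would update pg_state from the firmware */
	} else {
		hw->pg_state = MEI_PG_OFF;	/* the fix: drop the stale value */
	}
}

int main(void)
{
	/* stale state left behind by suspend */
	struct hw hw = { .d0i3_supported = false, .pg_state = MEI_PG_ON };

	hw_reset(&hw);
	printf("pg_state after reset: %s\n",
	       hw.pg_state == MEI_PG_OFF ? "OFF" : "ON");	/* prints OFF */
	return 0;
}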
From: Keith Busch <kbusch@kernel.org>
[ Upstream commit 4641a8e6e145f595059e695f0f8dbbe608134086 ]
Many users have encountered IO timeouts with a CSTS value of 0xffffffff, which indicates a failure to read the register. While there are various potential causes for this observation, faulty NVMe APST has been the culprit quite frequently. Add the recommended troubleshooting steps in the error output when this condition occurs.
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/nvme/host/pci.c | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index b06d2b6bd3fe..f9fd62b894dc 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1138,6 +1138,14 @@ static void nvme_warn_reset(struct nvme_dev *dev, u32 csts)
 		dev_warn(dev->ctrl.device,
 			 "controller is down; will reset: CSTS=0x%x, PCI_STATUS read failed (%d)\n",
 			 csts, result);
+
+	if (csts != ~0)
+		return;
+
+	dev_warn(dev->ctrl.device,
+		 "Does your device have a faulty power saving mode enabled?\n");
+	dev_warn(dev->ctrl.device,
+		 "Try \"nvme_core.default_ps_max_latency_us=0 pcie_aspm=off\" and report a bug\n");
 }
 
 static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
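The check relies on the PCIe behavior that a failed register read returns all
ones. A small stand-alone illustration of the comparison (plain C, not driver
code):

#include <stdint.h>
#include <stdio.h>

/* A failed PCIe register read returns all ones, so 0xffffffff is the
 * signature of an unreachable controller. ~0 is the int -1, which the
 * usual arithmetic conversions turn into an all-ones 32-bit value, so
 * this matches the driver's "csts != ~0" test. */
static int reg_read_failed(uint32_t csts)
{
	return csts == (uint32_t)~0;
}

int main(void)
{
	printf("csts=0xffffffff -> failed: %d\n", reg_read_failed(0xffffffffu)); /* 1 */
	printf("csts=0x00000001 -> failed: %d\n", reg_read_failed(0x1u));        /* 0 */
	return 0;
}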
From: Jan Kara <jack@suse.cz>
[ Upstream commit 8d5459c11f548131ce48b2fbf45cccc5c382558f ]
When delayed allocation is disabled (either through a mount option or because we are running low on free space), ext4_write_begin() allocates blocks with the EXT4_GET_BLOCKS_IO_CREATE_EXT flag. With this flag extent merging is disabled, and since ext4_write_begin() is called for each page separately, we end up with a *lot* of 1-block extents in the extent tree, and the following writeback writes 1 block at a time, which results in very poor write throughput (4 MB/s instead of 200 MB/s).
These days ext4_get_block_unwritten() is used only by ext4_write_begin(), ext4_page_mkwrite(), and inline data conversion, so we can safely allow extent merging from these paths since the following writeback will happen on different boundaries anyway. So use EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT instead, which restores the performance.
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20220520111402.4252-1-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 fs/ext4/inode.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 76c887108df3..997279814b6f 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -839,7 +839,7 @@ int ext4_get_block_unwritten(struct inode *inode, sector_t iblock,
 	ext4_debug("ext4_get_block_unwritten: inode %lu, create flag %d\n",
 		   inode->i_ino, create);
 	return _ext4_get_block(inode, iblock, bh_result,
-			       EXT4_GET_BLOCKS_IO_CREATE_EXT);
+			       EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT);
 }
 
 /* Maximum number of blocks we map for direct IO at once. */
From: Ming Lei <ming.lei@redhat.com>
[ Upstream commit 5fd7a84a09e640016fe106dd3e992f5210e23dc7 ]
The elevator can be torn down by the sysfs switch interface or by disk release, so hold ->sysfs_lock before referring to q->elevator; this avoids a potential use-after-free.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20220616014401.817001-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 block/blk-mq.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ae70b4809bec..3a791597c534 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2940,12 +2940,14 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
 	if (!qe)
 		return false;
 
+	/* q->elevator needs protection from ->sysfs_lock */
+	mutex_lock(&q->sysfs_lock);
+
 	INIT_LIST_HEAD(&qe->node);
 	qe->q = q;
 	qe->type = q->elevator->type;
 	list_add(&qe->node, head);
 
-	mutex_lock(&q->sysfs_lock);
 	/*
 	 * After elevator_switch_mq, the previous elevator_queue will be
 	 * released by elevator_release. The reference of the io scheduler
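A hedged pthreads sketch of the ordering the patch enforces: the pointer is
only dereferenced while holding the same lock the teardown path takes. All
names are illustrative, not the block-layer API.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct elevator { char name[16]; };
struct queue {
	pthread_mutex_t sysfs_lock;
	struct elevator *elevator;	/* may be freed by a concurrent switch */
};

static void read_elevator_name(struct queue *q, char *out, size_t len)
{
	pthread_mutex_lock(&q->sysfs_lock);	/* taken BEFORE dereferencing */
	if (q->elevator)
		strncpy(out, q->elevator->name, len - 1);
	pthread_mutex_unlock(&q->sysfs_lock);
}

static void switch_elevator_none(struct queue *q)
{
	pthread_mutex_lock(&q->sysfs_lock);
	free(q->elevator);			/* teardown under the same lock */
	q->elevator = NULL;
	pthread_mutex_unlock(&q->sysfs_lock);
}

int main(void)
{
	struct queue q = { .sysfs_lock = PTHREAD_MUTEX_INITIALIZER };
	char name[16] = "";

	q.elevator = calloc(1, sizeof(*q.elevator));
	strcpy(q.elevator->name, "mq-deadline");

	read_elevator_name(&q, name, sizeof(name));
	printf("elevator: %s\n", name);
	switch_elevator_none(&q);
	return 0;
}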
From: Baokun Li <libaokun1@huawei.com>
[ Upstream commit cf4ff938b47fc5c00b0ccce53a3b50eca9b32281 ]
ext4_mb_normalize_request() can move the logical start of the allocated blocks to reduce fragmentation and better utilize preallocation. However, the logical block requested as the start of the allocation (ac->ac_o_ex.fe_logical) should always be covered by the allocated blocks, so check for that by changing the "&&" to "||" in the assertion.
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Ritesh Harjani <ritesh.list@gmail.com>
Link: https://lore.kernel.org/r/20220528110017.354175-3-libaokun1@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 fs/ext4/mballoc.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index db8243627b08..0f7cbb59419b 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -3242,7 +3242,22 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
 	}
 	rcu_read_unlock();
 
-	if (start + size <= ac->ac_o_ex.fe_logical &&
+	/*
+	 * In this function "start" and "size" are normalized for better
+	 * alignment and length such that we could preallocate more blocks.
+	 * This normalization is done such that original request of
+	 * ac->ac_o_ex.fe_logical & fe_len should always lie within "start" and
+	 * "size" boundaries.
+	 * (Note fe_len can be relaxed since FS block allocation API does not
+	 * provide guarantee on number of contiguous blocks allocation since that
+	 * depends upon free space left, etc).
+	 * In case of inode pa, later we use the allocated blocks
+	 * [pa_start + fe_logical - pa_lstart, fe_len/size] from the preallocated
+	 * range of goal/best blocks [start, size] to put it at the
+	 * ac_o_ex.fe_logical extent of this inode.
+	 * (See ext4_mb_use_inode_pa() for more details)
+	 */
+	if (start + size <= ac->ac_o_ex.fe_logical ||
 	    start > ac->ac_o_ex.fe_logical) {
 		ext4_msg(ac->ac_sb, KERN_ERR,
 			 "start %lu, size %lu, fe_logical %lu",
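To see why the operator matters: with "&&" the two sub-conditions can never
both hold (fe_logical cannot be both >= start + size and < start), so the
out-of-range check was dead code; with "||" either side trips it. A small
stand-alone demonstration with made-up block numbers:

#include <stdio.h>

/* The corrected assertion: the original request fe_logical must lie
 * inside the normalized range [start, start + size). */
static int out_of_range(unsigned long start, unsigned long size,
			unsigned long fe_logical)
{
	return start + size <= fe_logical || start > fe_logical;
}

int main(void)
{
	/* fe_logical inside [100, 132): in range */
	printf("in range: %d\n", out_of_range(100, 32, 110));	/* 0 */
	/* fe_logical below start: "start > fe_logical" fires */
	printf("below:    %d\n", out_of_range(100, 32, 50));	/* 1 */
	/* fe_logical at/after start + size: first side fires */
	printf("at end:   %d\n", out_of_range(100, 32, 132));	/* 1 */
	return 0;
}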