On Wed, Jul 14, 2021 at 02:07:10PM +0000, Holger Kiehl wrote:
On Wed, 14 Jul 2021, Greg Kroah-Hartman wrote:
On Wed, Jul 14, 2021 at 01:26:26PM +0000, Holger Kiehl wrote:
On Wed, 14 Jul 2021, Holger Kiehl wrote:
On Wed, 14 Jul 2021, Greg Kroah-Hartman wrote:
On Wed, Jul 14, 2021 at 05:39:43AM +0000, Holger Kiehl wrote:
Hello,
On Mon, 12 Jul 2021, Greg Kroah-Hartman wrote:
> This is the start of the stable review cycle for the 5.13.2 release.
> There are 800 patches in this series, all will be posted as a response
> to this one. If anyone has any issues with these being applied, please
> let me know.
>
> Responses should be made by Wed, 14 Jul 2021 06:02:46 +0000.
> Anything received after that time might be too late.

With this my system no longer boots:
[  OK  ] Reached target Swap.
[   75.213852] NMI watchdog: Watchdog detected hard LOCKUP on cpu 0
[   75.213926] NMI watchdog: Watchdog detected hard LOCKUP on cpu 2
[   75.213962] NMI watchdog: Watchdog detected hard LOCKUP on cpu 4
[FAILED] Failed to start Wait for udev To Complete Device Initialization.
See 'systemctl status systemd-udev-settle.service' for details.
         Starting Activation of DM RAID sets...
[    ] (1 of 2) A start job is running for Activation of DM RAID sets (..min ..s / no limit)
[    ] (2 of 2) A start job is running for Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling (..min ..s / no limit)
The system is Fedora 34 with all updates applied. On two other similar systems with AMD CPUs (Ryzen 4750G + 3400G) this does not happen and they boot fine. The system where it does not boot has an Intel Xeon E3-1285L v4 CPU. All of them use a dm_crypt root filesystem.
Any idea which patch I should drop to see if it boots again? I already dropped
[PATCH 5.13 743/800] ASoC: Intel: sof_sdw: add quirk support for Brya and BT-offload
and I just saw that this one should also be dropped:
[PATCH 5.13 768/800] hugetlb: address ref count racing in prep_compound_gigantic_page
Will still need to test this.
Can you run 'git bisect' to see what commit causes the problem?
Yes, will try to do that. I think it will take some time ...
With the help of Pavel Machek and Jiri Slaby I was able to 'git bisect' this to:
yoda:/usr/src/kernels/linux-5.13.y# git bisect good
a483f513670541227e6a31ac7141826b8c785842 is the first bad commit
commit a483f513670541227e6a31ac7141826b8c785842
Author: Jan Kara <jack@suse.cz>
Date:   Wed Jun 23 11:36:33 2021 +0200
    bfq: Remove merged request already in bfq_requests_merged()

    [ Upstream commit a921c655f2033dd1ce1379128efe881dda23ea37 ]

    Currently, bfq does very little in bfq_requests_merged() and handles
    all the request cleanup in bfq_finish_requeue_request() called from
    blk_mq_free_request(). That is currently safe only because
    blk_mq_free_request() is called shortly after bfq_requests_merged()
    while bfqd->lock is still held. However, to fix a lock inversion
    between bfqd->lock and ioc->lock, we need to call
    blk_mq_free_request() after dropping bfqd->lock. That would mean that
    an already merged request could be seen by other processes inside bfq
    queues and possibly dispatched to the device, which is wrong. So move
    cleanup of the request from bfq_finish_requeue_request() to
    bfq_requests_merged().

    Acked-by: Paolo Valente <paolo.valente@linaro.org>
    Signed-off-by: Jan Kara <jack@suse.cz>
    Link: https://lore.kernel.org/r/20210623093634.27879-2-jack@suse.cz
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

 block/bfq-iosched.c | 41 +++++++++++++----------------------------
 1 file changed, 13 insertions(+), 28 deletions(-)
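For anyone who wants to reproduce this, the bisect session looked roughly like this (just a sketch; HEAD here is v5.13.1 with patch-5.13.2-rc1 applied, and the build/install steps are whatever you normally use):

  cd /usr/src/kernels/linux-5.13.y
  git bisect start
  git bisect bad HEAD        # 5.13.2-rc1: hard LOCKUP at boot
  git bisect good v5.13.1    # boots fine

  # at every step: build, install, reboot, then mark the result
  make olddefconfig && make -j"$(nproc)"
  git bisect good            # if the kernel boots
  git bisect bad             # if it locks up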
Holger
Wonderful!
So if you drop that, all works well? I'll go drop that from the queues now.
Yes. Just double-checked: I took a plain 5.13.1, patched it with patch-5.13.2-rc1.xz and then reverted
PATCH-5.13-259-800-bfq-Remove-merged-request-already-in-bfq_requests_merged
and it booted fine with no problems. Tested several times. I just wonder why it only happens on the Intel Broadwell CPU. Maybe it is the 128 MB eDRAM L4 cache ...
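Concretely, the test went roughly like this (the last filename is just what I happened to save patch 259/800 from the review posting as):

  # start from a clean 5.13.1 tree
  xz -dc patch-5.13.2-rc1.xz | patch -p1
  # back out only the bfq patch
  patch -R -p1 < PATCH-5.13-259-800-bfq-Remove-merged-request-already-in-bfq_requests_merged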
Wonderful!
Could you test 5.14-rc1 to verify whether this problem is there or not? If it is, the developers need to know so that they can work to fix the regression.
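Something like this should be enough to get -rc1 for testing (assuming you build from Linus' tree; adjust the config/install steps to your setup):

  git fetch --tags git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
  git checkout v5.14-rc1
  make olddefconfig && make -j"$(nproc)"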
thanks,
greg k-h