The patch below does not apply to the 5.4-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to stable@vger.kernel.org.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-5.4.y
git checkout FETCH_HEAD
git cherry-pick -x 9fdbbdbbc92b1474a87b89f8b964892a63734492
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to 'stable@vger.kernel.org' --in-reply-to '2025021037-walk-outmatch-5872@gregkh' --subject-prefix 'PATCH 5.4.y' HEAD^..
Possible dependencies:
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 9fdbbdbbc92b1474a87b89f8b964892a63734492 Mon Sep 17 00:00:00 2001
From: Hou Tao <houtao1@huawei.com>
Date: Mon, 20 Jan 2025 16:29:49 +0800
Subject: [PATCH] dm-crypt: don't update io->sector after kcryptd_crypt_write_io_submit()
The updates of io->sector are leftovers from the time when dm-crypt allocated pages for partial write requests. However, since commit cf2f1abfbd0db ("dm crypt: don't allocate pages for a partial request"), there are no partial requests anymore.
After the introduction of the write request rb-tree, the updates of io->sector may interfere with the insertion procedure, because ->sector of write requests that have already been added to the rb-tree may be changed while a new write request is being inserted.
Fix it by removing these buggy updates of io->sector. Considering that these updates only affect the write request rb-tree, the commit which introduced the write request rb-tree is used for the Fixes tag.
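For context on why a stale ->sector update corrupts the sorted write tree, here is a minimal, self-contained userspace sketch. It is an illustration only, not the driver's code: it uses a plain binary search tree and hypothetical names (struct io, insert(), in_order(), new_io()) instead of the kernel rb-tree API, but it is keyed on ->sector the same way the write request tree is.

/*
 * Minimal illustration (userspace, hypothetical "struct io", plain BST
 * instead of the kernel rb-tree): nodes are ordered by ->sector at
 * insertion time.  If ->sector of an already-inserted node is changed
 * afterwards, later insertions compare against the new value and can land
 * on the wrong side of that node, so in-order traversal is no longer
 * sorted -- the hazard the removed io->sector updates created.
 */
#include <stdio.h>
#include <stdlib.h>

struct io {
	unsigned long long sector;	/* sort key */
	struct io *left, *right;
};

static struct io *insert(struct io *root, struct io *node)
{
	if (!root)
		return node;
	if (node->sector < root->sector)
		root->left = insert(root->left, node);
	else
		root->right = insert(root->right, node);
	return root;
}

static void in_order(const struct io *node)
{
	if (!node)
		return;
	in_order(node->left);
	printf("%llu ", node->sector);
	in_order(node->right);
}

static struct io *new_io(unsigned long long sector)
{
	struct io *io = calloc(1, sizeof(*io));

	io->sector = sector;
	return io;
}

int main(void)
{
	struct io *a = new_io(100), *b = new_io(200);
	struct io *root = insert(insert(NULL, a), b);

	/* Mutate the key after insertion, as the old io->sector update did. */
	a->sector = 300;

	/*
	 * 250 should sort between 200 and 300, but it is compared against
	 * a's new key at the root and pushed into the left subtree.
	 */
	root = insert(root, new_io(250));

	in_order(root);		/* prints "250 300 200" -- no longer sorted */
	printf("\n");
	return 0;
}

In the driver, the write requests queued for the writer thread are kept in exactly such a sector-sorted tree (introduced by "dm crypt: sort writes"), so once an io has been handed to kcryptd_crypt_write_io_submit() its ->sector must not change.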
Fixes: b3c5fd305249 ("dm crypt: sort writes")
Cc: stable@vger.kernel.org
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 8a4c24d4ee02..605859ef01b5 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -2090,7 +2090,6 @@ static void kcryptd_crypt_write_continue(struct work_struct *work)
 	struct crypt_config *cc = io->cc;
 	struct convert_context *ctx = &io->ctx;
 	int crypt_finished;
-	sector_t sector = io->sector;
 	blk_status_t r;
 
 	wait_for_completion(&ctx->restart);
@@ -2107,10 +2106,8 @@ static void kcryptd_crypt_write_continue(struct work_struct *work)
 	}
 
 	/* Encryption was already finished, submit io now */
-	if (crypt_finished) {
+	if (crypt_finished)
 		kcryptd_crypt_write_io_submit(io, 0);
-		io->sector = sector;
-	}
 
 	crypt_dec_pending(io);
 }
@@ -2121,14 +2118,13 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
 	struct convert_context *ctx = &io->ctx;
 	struct bio *clone;
 	int crypt_finished;
-	sector_t sector = io->sector;
 	blk_status_t r;
 
 	/*
 	 * Prevent io from disappearing until this function completes.
 	 */
 	crypt_inc_pending(io);
-	crypt_convert_init(cc, ctx, NULL, io->base_bio, sector);
+	crypt_convert_init(cc, ctx, NULL, io->base_bio, io->sector);
 
 	clone = crypt_alloc_buffer(io, io->base_bio->bi_iter.bi_size);
 	if (unlikely(!clone)) {
@@ -2145,8 +2141,6 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
 		io->ctx.iter_in = clone->bi_iter;
 	}
 
-	sector += bio_sectors(clone);
-
 	crypt_inc_pending(io);
 	r = crypt_convert(cc, ctx,
 			  test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags), true);
@@ -2170,10 +2164,8 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
 	}
 
 	/* Encryption was already finished, submit io now */
-	if (crypt_finished) {
+	if (crypt_finished)
 		kcryptd_crypt_write_io_submit(io, 0);
-		io->sector = sector;
-	}
 
 dec:
 	crypt_dec_pending(io);