From: Yu Kuai <yukuai3@huawei.com>
Hi, Greg
This is the manual adaptation version for 6.1; for 6.6/6.12, commit 8542870237c3 ("md: fix mddev uaf while iterating all_mddevs list") can be applied cleanly. Can you queue them as well?
Thanks!
Yu Kuai (2):
  md: factor out a helper from mddev_put()
  md: fix mddev uaf while iterating all_mddevs list

 drivers/md/md.c | 50 +++++++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 20 deletions(-)
From: Yu Kuai <yukuai3@huawei.com>
commit 3d8d32873c7b6d9cec5b40c2ddb8c7c55961694f upstream.
There are no functional changes, prepare to simplify md_seq_ops in next patch.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230927061241.1552837-2-yukuai1@huaweicloud.com
[minor context conflict]
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 5e2751d42f64..639fe8ebaa68 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -667,24 +667,28 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
 
 static void mddev_delayed_delete(struct work_struct *ws);
 
+static void __mddev_put(struct mddev *mddev)
+{
+	if (mddev->raid_disks || !list_empty(&mddev->disks) ||
+	    mddev->ctime || mddev->hold_active)
+		return;
+
+	/* Array is not configured at all, and not held active, so destroy it */
+	set_bit(MD_DELETED, &mddev->flags);
+
+	/*
+	 * Call queue_work inside the spinlock so that flush_workqueue() after
+	 * mddev_find will succeed in waiting for the work to be done.
+	 */
+	INIT_WORK(&mddev->del_work, mddev_delayed_delete);
+	queue_work(md_misc_wq, &mddev->del_work);
+}
+
 void mddev_put(struct mddev *mddev)
 {
 	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
 		return;
-	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
-	    mddev->ctime == 0 && !mddev->hold_active) {
-		/* Array is not configured at all, and not held active,
-		 * so destroy it */
-		set_bit(MD_DELETED, &mddev->flags);
-
-		/*
-		 * Call queue_work inside the spinlock so that
-		 * flush_workqueue() after mddev_find will succeed in waiting
-		 * for the work to be done.
-		 */
-		INIT_WORK(&mddev->del_work, mddev_delayed_delete);
-		queue_work(md_misc_wq, &mddev->del_work);
-	}
+	__mddev_put(mddev);
 	spin_unlock(&all_mddevs_lock);
 }
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: 3d8d32873c7b6d9cec5b40c2ddb8c7c55961694f
WARNING: Author mismatch between patch and upstream commit:
Backport author: Yu Kuai <yukuai1@huaweicloud.com>
Commit author: Yu Kuai <yukuai3@huawei.com>
Status in newer kernel trees:
6.14.y | Present (exact SHA1)
6.13.y | Present (exact SHA1)
6.12.y | Present (exact SHA1)
6.6.y  | Present (different SHA1: 452f50807917)
Note: The patch differs from the upstream commit:
---
1:  3d8d32873c7b6 ! 1:  c3ea0d86c3059 md: factor out a helper from mddev_put()
    @@ Metadata
     ## Commit message ##
        md: factor out a helper from mddev_put()

    +    commit 3d8d32873c7b6d9cec5b40c2ddb8c7c55961694f upstream.
    +
         There are no functional changes, prepare to simplify md_seq_ops in
         next patch.

         Signed-off-by: Yu Kuai <yukuai3@huawei.com>
         Signed-off-by: Song Liu <song@kernel.org>
         Link: https://lore.kernel.org/r/20230927061241.1552837-2-yukuai1@huaweicloud.com
    +    [minor context conflict]
    +    Signed-off-by: Yu Kuai <yukuai3@huawei.com>

     ## drivers/md/md.c ##
     @@ drivers/md/md.c: static inline struct mddev *mddev_get(struct mddev *mddev)
     @@ drivers/md/md.c: static inline struct mddev *mddev_get(struct mddev *mddev)
     +	 * Call queue_work inside the spinlock so that flush_workqueue() after
     +	 * mddev_find will succeed in waiting for the work to be done.
     +	 */
    ++	INIT_WORK(&mddev->del_work, mddev_delayed_delete);
     +	queue_work(md_misc_wq, &mddev->del_work);
     +}
     +
     @@ drivers/md/md.c: static inline struct mddev *mddev_get(struct mddev *mddev)
     -	/* Array is not configured at all, and not held active,
     -	 * so destroy it */
     -	set_bit(MD_DELETED, &mddev->flags);
     -
    +-
     -	/*
     -	 * Call queue_work inside the spinlock so that
     -	 * flush_workqueue() after mddev_find will succeed in waiting
     -	 * for the work to be done.
     -	 */
    +-	INIT_WORK(&mddev->del_work, mddev_delayed_delete);
     -	queue_work(md_misc_wq, &mddev->del_work);
     -	}
     +	__mddev_put(mddev);
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.1.y        | Success     | Success    |
From: Yu Kuai <yukuai3@huawei.com>
commit 8542870237c3a48ff049b6c5df5f50c8728284fa upstream.
While iterating the all_mddevs list from md_notify_reboot() and md_exit(), list_for_each_entry_safe is used, and this can race with deleting the next mddev, causing UAF:
t1:
spin_lock
// list_for_each_entry_safe(mddev, n, ...)
mddev_get(mddev1)
// assume mddev2 is the next entry
spin_unlock
                t2:
                // remove mddev2
                ...
                mddev_free
                spin_lock
                list_del
                spin_unlock
                kfree(mddev2)
mddev_put(mddev1)
spin_lock
// continue dereference mddev2->all_mddevs
The old helper for_each_mddev() actually grabs the reference of mddev2 while holding the lock, to prevent it from being freed. This problem could be fixed the same way; however, the code would be complex.
Hence switch to list_for_each_entry; in this case mddev_put() can free mddev1, which is not safe either. Following md_seq_show(), also factor out a helper mddev_put_locked() to fix this problem.
Cc: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/linux-raid/20250220124348.845222-1-yukuai1@huaweiclo...
Fixes: f26514342255 ("md: stop using for_each_mddev in md_notify_reboot")
Fixes: 16648bac862f ("md: stop using for_each_mddev in md_exit")
Reported-and-tested-by: Guillaume Morin <guillaume@morinfr.org>
Closes: https://lore.kernel.org/all/Z7Y0SURoA8xwg7vn@bender.morinfr.org/
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
[skip md_seq_show() which does not exist]
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 639fe8ebaa68..d5fbccc72810 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -684,6 +684,12 @@ static void __mddev_put(struct mddev *mddev)
 	queue_work(md_misc_wq, &mddev->del_work);
 }
 
+static void mddev_put_locked(struct mddev *mddev)
+{
+	if (atomic_dec_and_test(&mddev->active))
+		__mddev_put(mddev);
+}
+
 void mddev_put(struct mddev *mddev)
 {
 	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
@@ -9704,11 +9710,11 @@ EXPORT_SYMBOL_GPL(rdev_clear_badblocks);
 static int md_notify_reboot(struct notifier_block *this,
 			    unsigned long code, void *x)
 {
-	struct mddev *mddev, *n;
+	struct mddev *mddev;
 	int need_delay = 0;
 
 	spin_lock(&all_mddevs_lock);
-	list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
+	list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
 		if (!mddev_get(mddev))
 			continue;
 		spin_unlock(&all_mddevs_lock);
@@ -9720,8 +9726,8 @@ static int md_notify_reboot(struct notifier_block *this,
 			mddev_unlock(mddev);
 		}
 		need_delay = 1;
-		mddev_put(mddev);
 		spin_lock(&all_mddevs_lock);
+		mddev_put_locked(mddev);
 	}
 	spin_unlock(&all_mddevs_lock);
 
@@ -10043,7 +10049,7 @@ void md_autostart_arrays(int part)
 
 static __exit void md_exit(void)
 {
-	struct mddev *mddev, *n;
+	struct mddev *mddev;
 	int delay = 1;
 
 	unregister_blkdev(MD_MAJOR,"md");
@@ -10064,7 +10070,7 @@ static __exit void md_exit(void)
 	remove_proc_entry("mdstat", NULL);
 
 	spin_lock(&all_mddevs_lock);
-	list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
+	list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
 		if (!mddev_get(mddev))
 			continue;
 		spin_unlock(&all_mddevs_lock);
@@ -10076,8 +10082,8 @@ static __exit void md_exit(void)
 		 * the mddev for destruction by a workqueue, and the
 		 * destroy_workqueue() below will wait for that to complete.
 		 */
-		mddev_put(mddev);
 		spin_lock(&all_mddevs_lock);
+		mddev_put_locked(mddev);
 	}
 	spin_unlock(&all_mddevs_lock);
[ Sasha's backport helper bot ]
Hi,
✅ All tests passed successfully. No issues detected. No action required from the submitter.
The upstream commit SHA1 provided is correct: 8542870237c3a48ff049b6c5df5f50c8728284fa
WARNING: Author mismatch between patch and upstream commit:
Backport author: Yu Kuai <yukuai1@huaweicloud.com>
Commit author: Yu Kuai <yukuai3@huawei.com>
Status in newer kernel trees:
6.14.y | Present (different SHA1: 5462544ccbad)
6.13.y | Not found
6.12.y | Not found
6.6.y  | Not found
Note: The patch differs from the upstream commit:
---
1:  8542870237c3a ! 1:  dd49667d64662 md: fix mddev uaf while iterating all_mddevs list
    @@ Metadata
     ## Commit message ##
        md: fix mddev uaf while iterating all_mddevs list

    +    commit 8542870237c3a48ff049b6c5df5f50c8728284fa upstream.
    +
         While iterating the all_mddevs list from md_notify_reboot() and
         md_exit(), list_for_each_entry_safe is used, and this can race with
         deleting the next mddev, causing UAF:
    @@ Commit message
         Closes: https://lore.kernel.org/all/Z7Y0SURoA8xwg7vn@bender.morinfr.org/
         Signed-off-by: Yu Kuai <yukuai3@huawei.com>
         Reviewed-by: Christoph Hellwig <hch@lst.de>
    +    [skip md_seq_show() which does not exist]
    +    Signed-off-by: Yu Kuai <yukuai3@huawei.com>

     ## drivers/md/md.c ##
     @@ drivers/md/md.c: static void __mddev_put(struct mddev *mddev)
     @@ drivers/md/md.c: static void __mddev_put(struct mddev *mddev)
      void mddev_put(struct mddev *mddev)
      {
      	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
    -@@ drivers/md/md.c: static int md_seq_show(struct seq_file *seq, void *v)
    -	if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
    -		status_unused(seq);
    -
    --	if (atomic_dec_and_test(&mddev->active))
    --		__mddev_put(mddev);
    --
    -+	mddev_put_locked(mddev);
    -	return 0;
    - }
    -
     @@ drivers/md/md.c: EXPORT_SYMBOL_GPL(rdev_clear_badblocks);
      static int md_notify_reboot(struct notifier_block *this,
      			    unsigned long code, void *x)
---
Results of testing on various branches:
| Branch                    | Patch Apply | Build Test |
|---------------------------|-------------|------------|
| stable/linux-6.6.y        | Success     | Success    |
Hi Greg, Sasha, Yu,
On Sat, Apr 19, 2025 at 09:23:01AM +0800, Yu Kuai wrote:
From: Yu Kuai <yukuai3@huawei.com>
Hi, Greg
This is the manual adaptation version for 6.1; for 6.6/6.12, commit 8542870237c3 ("md: fix mddev uaf while iterating all_mddevs list") can be applied cleanly. Can you queue them as well?
Thanks!
Yu Kuai (2):
  md: factor out a helper from mddev_put()
  md: fix mddev uaf while iterating all_mddevs list

 drivers/md/md.c | 50 +++++++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 20 deletions(-)
I noticed that the change 8542870237c3 was queued for 6.6.y and 6.12.y and is in review now, but I wonder whether we should do something more for 6.1.y, as that requires this series/manual adaptation?
Or will it make for the next round of stable updates in 6.1.y?
(Or did it just fall through the cracks, and is it actually fine that I ping the thread with this question?)
Regards, Salvatore
On Wed, Apr 23, 2025 at 08:14:09PM +0200, Salvatore Bonaccorso wrote:
Hi Greg, Sasha, Yu,
On Sat, Apr 19, 2025 at 09:23:01AM +0800, Yu Kuai wrote:
From: Yu Kuai <yukuai3@huawei.com>
Hi, Greg
This is the manual adaptation version for 6.1; for 6.6/6.12, commit 8542870237c3 ("md: fix mddev uaf while iterating all_mddevs list") can be applied cleanly. Can you queue them as well?
Thanks!
Yu Kuai (2):
  md: factor out a helper from mddev_put()
  md: fix mddev uaf while iterating all_mddevs list

 drivers/md/md.c | 50 +++++++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 20 deletions(-)
I noticed that the change 8542870237c3 was queued for 6.6.y and 6.12.y and is in review now, but I wonder whether we should do something more for 6.1.y, as that requires this series/manual adaptation?
Or will it make for the next round of stable updates in 6.1.y?
(Or did it just fall through the cracks, and is it actually fine that I ping the thread with this question?)
This fell through the cracks and yes, it is great that you pinged it. I'll queue it up for the next release, thanks!
greg k-h
Hi Greg,
On Thu, Apr 24, 2025 at 08:40:41AM +0200, Greg KH wrote:
On Wed, Apr 23, 2025 at 08:14:09PM +0200, Salvatore Bonaccorso wrote:
Hi Greg, Sasha, Yu,
On Sat, Apr 19, 2025 at 09:23:01AM +0800, Yu Kuai wrote:
From: Yu Kuai <yukuai3@huawei.com>
Hi, Greg
This is the manual adaptation version for 6.1; for 6.6/6.12, commit 8542870237c3 ("md: fix mddev uaf while iterating all_mddevs list") can be applied cleanly. Can you queue them as well?
Thanks!
Yu Kuai (2):
  md: factor out a helper from mddev_put()
  md: fix mddev uaf while iterating all_mddevs list

 drivers/md/md.c | 50 +++++++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 20 deletions(-)
I noticed that the change 8542870237c3 was queued for 6.6.y and 6.12.y and is in review now, but I wonder whether we should do something more for 6.1.y, as that requires this series/manual adaptation?
Or will it make for the next round of stable updates in 6.1.y?
(Or did it just fall through the cracks, and is it actually fine that I ping the thread with this question?)
This fell through the cracks and yes, it is great that you pinged it. I'll queue it up for the next release, thanks!
Thank you! Very much appreciated! (People installing Debian will be happy, as it affects the kernel under certain circumstances, cf. https://bugs.debian.org/1086175, https://bugs.debian.org/1089158, but it was longstanding already.)
Thank you, Regards, Salvatore
On Thu, Apr 24, 2025 at 11:32:52AM +0200, Salvatore Bonaccorso wrote:
Hi Greg,
On Thu, Apr 24, 2025 at 08:40:41AM +0200, Greg KH wrote:
On Wed, Apr 23, 2025 at 08:14:09PM +0200, Salvatore Bonaccorso wrote:
Hi Greg, Sasha, Yu,
On Sat, Apr 19, 2025 at 09:23:01AM +0800, Yu Kuai wrote:
From: Yu Kuai <yukuai3@huawei.com>
Hi, Greg
This is the manual adaptation version for 6.1; for 6.6/6.12, commit 8542870237c3 ("md: fix mddev uaf while iterating all_mddevs list") can be applied cleanly. Can you queue them as well?
Thanks!
Yu Kuai (2):
  md: factor out a helper from mddev_put()
  md: fix mddev uaf while iterating all_mddevs list

 drivers/md/md.c | 50 +++++++++++++++++++++++++++++--------------------
 1 file changed, 30 insertions(+), 20 deletions(-)
I noticed that the change 8542870237c3 was queued for 6.6.y and 6.12.y and is in review now, but I wonder whether we should do something more for 6.1.y, as that requires this series/manual adaptation?
Or will it make for the next round of stable updates in 6.1.y?
(Or did it just fall through the cracks, and is it actually fine that I ping the thread with this question?)
This fell through the cracks and yes, it is great that you pinged it. I'll queue it up for the next release, thanks!
Thank you! Very much appreciated! (People installing Debian will be happy, as it affects the kernel under certain circumstances, cf. https://bugs.debian.org/1086175, https://bugs.debian.org/1089158, but it was longstanding already.)
Now queued up, thanks.
greg k-h