On 4/3/23 12:10 PM, Matthew Wilcox wrote:
On Sun, Apr 02, 2023 at 06:19:20AM +0800, Rongwei Wang wrote:
Without this modification, a core will wait (mostly) on 'swap_info_struct->lock' while another core completes 'del_from_avail_list(p)'. As soon as the lock is released, the waiting core can call 'add_to_avail_list()' and re-add the same object. This is not the desired result, but it does happen. The case can be described as below:
This feels like a very verbose way of saying
"The si->lock must be held when deleting the si from the available list. Otherwise, another thread can re-add the si to the available list, which can lead to memory corruption. The only place we have found where this happens is in the swapoff path."
That reads better than mine. Sorry for the confusing description; I will fix it in the next version.
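For concreteness, the interleaving looks roughly like this (illustrative only; swap_range_free() is assumed here as one plausible re-add path, since it calls add_to_avail_list() when a previously full device regains free slots):

   CPU 0 (swapoff)                      CPU 1 (frees a swap entry)
   ---------------                      --------------------------
   del_from_avail_list(p)
                                        spin_lock(&si->lock)
                                        (SWP_WRITEOK is still set, so
                                         the device looks writable)
                                        add_to_avail_list(si)  <-- re-added
                                        spin_unlock(&si->lock)
   spin_lock(&p->lock)
   p->flags &= ~SWP_WRITEOK
   spin_unlock(&p->lock)
   ... swapoff then tears the device down while it is still reachable
       through the available lists, corrupting later allocations.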
+++ b/mm/swapfile.c
@@ -2610,8 +2610,12 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 		spin_unlock(&swap_lock);
 		goto out_dput;
 	}
-	del_from_avail_list(p);
+	/*
+	 * Here lock is used to protect deleting and SWP_WRITEOK clearing
+	 * can be seen concurrently.
+	 */
This comment isn't necessary. But I would add a lockdep assert inside __del_from_avail_list() that p->lock is held.
Thanks. Actually, I had that assert in a previous test version, but deleted it to save one line of code.
I will update it here as you suggested.
Thanks for your time.
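For reference, a minimal sketch of what that assertion could look like, assuming the current shape of __del_from_avail_list() in mm/swapfile.c (the surrounding body is from my reading of the tree and may differ from your baseline):

static void __del_from_avail_list(struct swap_info_struct *p)
{
	int nid;

	/* Every caller must hold p->lock; let lockdep catch any that do not. */
	lockdep_assert_held(&p->lock);
	for_each_node(nid)
		plist_del(&p->avail_lists[nid], &swap_avail_heads[nid]);
}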
 	spin_lock(&p->lock);
+	del_from_avail_list(p);
 	if (p->prio < 0) {
 		struct swap_info_struct *si = p;
 		int nid;
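With the patch applied, the deletion and the SWP_WRITEOK clearing both happen under p->lock, so a concurrent add_to_avail_list() caller can no longer observe the si removed from the lists while the flag still reads as writable. Condensed, the resulting swapoff ordering is roughly as follows (the flag clearing sits further down in the same function):

	spin_lock(&p->lock);
	del_from_avail_list(p);		/* now done under the lock */
	/* ... priority/list bookkeeping elided ... */
	p->flags &= ~SWP_WRITEOK;	/* seen together with the delete */
	spin_unlock(&p->lock);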
--
2.27.0