Ryan Roberts ryan.roberts@arm.com writes:
On 06/03/2024 08:51, Miaohe Lin wrote:
On 2024/3/6 10:52, Huang, Ying wrote:
Ryan Roberts ryan.roberts@arm.com writes:
There was previously a theoretical window where swapoff() could run and tear down a swap_info_struct while a call to free_swap_and_cache() was running in another thread. This could cause, amongst other bad possibilities, swap_page_trans_huge_swapped() (called by free_swap_and_cache()) to access the freed memory for swap_map.
This is a theoretical problem and I haven't been able to provoke it from a test case. But there has been agreement based on code review that this is possible (see link below).
Fix it by using get_swap_device()/put_swap_device(), which will stall swapoff(). There was an extra check in _swap_info_get() to confirm that the swap entry was valid. This wasn't present in get_swap_device(), so I've added it. I couldn't find any existing get_swap_device() call sites where this extra check would cause any false alarms.
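For illustration, the fixed function ends up with roughly this shape (a simplified sketch of the approach described above; the exact reclaim flags and return-value handling may differ in the final patch):

int free_swap_and_cache(swp_entry_t entry)
{
	struct swap_info_struct *p;
	unsigned char count;

	if (non_swap_entry(entry))
		return 1;

	/*
	 * Pins the device: swapoff() cannot get past its
	 * percpu_ref_kill()/synchronize_rcu() step until we call
	 * put_swap_device(), so si->swap_map stays valid here.
	 */
	p = get_swap_device(entry);
	if (p) {
		count = __swap_entry_free(p, entry);
		if (count == SWAP_HAS_CACHE &&
		    !swap_page_trans_huge_swapped(p, entry))
			__try_to_reclaim_swap(p, swp_offset(entry),
					      TTRS_UNMAPPED | TTRS_FULL);
		put_swap_device(p);
	}
	return p != NULL;
}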
Details of how to provoke one possible issue (thanks to David Hildenbrand for deriving this):
--8<-----
__swap_entry_free() might be the last user and result in "count == SWAP_HAS_CACHE".
swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.
So the question is: could someone reclaim the folio and turn si->inuse_pages into 0 before we have completed swap_page_trans_huge_swapped()?
Imagine the following: a 2 MiB folio in the swapcache. Only 2 subpages are still referenced by swap entries.
Process 1 still references subpage 0 via swap entry. Process 2 still references subpage 1 via swap entry.
Process 1 quits. Calls free_swap_and_cache(). -> count == SWAP_HAS_CACHE [then, preempted in the hypervisor etc.]
Process 2 quits. Calls free_swap_and_cache(). -> count == SWAP_HAS_CACHE
Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls __try_to_reclaim_swap().
__try_to_reclaim_swap()
  -> folio_free_swap()
  -> delete_from_swap_cache()
  -> put_swap_folio()
  -> free_swap_slot()
  -> swapcache_free_entries()
  -> swap_entry_free()
  -> swap_range_free()
  -> ...
     WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
What stops swapoff from succeeding after process 2 reclaimed the swap cache but before process 1 finished its call to swap_page_trans_huge_swapped()?
--8<-----
I think that this can be simplified. Even for a 4K folio, this could happen.
CPU0                                CPU1
----                                ----
zap_pte_range
  free_swap_and_cache
    __swap_entry_free
    /* swap count become 0 */
                                    swapoff
                                      try_to_unuse
                                        filemap_get_folio
                                        folio_free_swap
                                        /* remove swap cache */
                                      /* free si->swap_map[] */
    swap_page_trans_huge_swapped
      <-- access freed si->swap_map !!!
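To make the window concrete, the pre-fix function looks roughly like this (a simplified sketch; names as in mm/swapfile.c):

	p = _swap_info_get(entry);
	if (p) {
		count = __swap_entry_free(p, entry);
		/* <-- nothing pins p here; swapoff() may free p->swap_map */
		if (count == SWAP_HAS_CACHE &&
		    !swap_page_trans_huge_swapped(p, entry)) /* use-after-free */
			__try_to_reclaim_swap(p, swp_offset(entry),
					      TTRS_UNMAPPED | TTRS_FULL);
	}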
Sorry for jumping into the discussion here. IMHO, free_swap_and_cache is called with the pte lock held.
I don't believe it has the PTL when called by shmem.
Yes, we don't hold PTL there.
After checking the code again, I think that there may be a race condition as above without the PTL. But I may have missed something, again.
So synchronize_rcu (called by swapoff) will wait for zap_pte_range to release the pte lock. So this theoretical problem can't happen. Or am I missing something?
For Huang Ying's example, I agree this can't happen because try_to_unuse() will be waiting for the PTL (see the reply I just sent).
CPU0                                CPU1
----                                ----
zap_pte_range
  pte_offset_map_lock
  -- spin_lock is held --
  free_swap_and_cache
    __swap_entry_free
    /* swap count become 0 */
                                    swapoff
                                      try_to_unuse
                                        filemap_get_folio
                                        folio_free_swap
                                        /* remove swap cache */
                                      percpu_ref_kill(&p->users);
    swap_page_trans_huge_swapped
  pte_unmap_unlock
  -- spin_lock is released --
                                      synchronize_rcu();
                                      --> Will wait for pte_unmap_unlock
                                          to be called?
                                      /* free si->swap_map[] */
Perhaps you can educate me here; I thought that synchronize_rcu() would only wait for RCU read-side critical sections to complete. The PTL is a spin lock, so why would synchronize_rcu() wait for the PTL to become unlocked?
Please take a look at the following link,
https://www.kernel.org/doc/html/next/RCU/whatisRCU.html#rcu-read-lock
" Note that anything that disables bottom halves, preemption, or interrupts also enters an RCU read-side critical section. Acquiring a spinlock also enters an RCU read-side critical sections, even for spinlocks that do not disable preemption, as is the case in kernels built with CONFIG_PREEMPT_RT=y. Sleeplocks do not enter RCU read-side critical sections. "
--
Best Regards,
Huang, Ying
Thanks.