On 8/9/24 19:17, David Hildenbrand wrote:
On 09.08.24 12:31, Dev Jain wrote:
As already done in __migrate_folio(), where we back off if the folio refcount is unexpected, perform this check during the unmapping phase. Upon its failure, the original state of the PTEs is restored and the folio lock is dropped via migrate_folio_undo_src(); any racing thread can then make progress and the migration will be retried.
Signed-off-by: Dev Jain <dev.jain@arm.com>
 mm/migrate.c | 9 +++++++++
 1 file changed, 9 insertions(+)
diff --git a/mm/migrate.c b/mm/migrate.c
index e7296c0fb5d5..477acf996951 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1250,6 +1250,15 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	}
 
 	if (!folio_mapped(src)) {
+		/*
+		 * Someone may have changed the refcount and may be sleeping
+		 * on the folio lock. In case of refcount mismatch, bail out,
+		 * let the system make progress and retry.
+		 */
+		struct address_space *mapping = folio_mapping(src);
+
+		if (folio_ref_count(src) != folio_expected_refs(mapping, src))
+			goto out;
This really seems to be the latest point where we can "easily" back off and unlock the source folio -- in this function :)
I wonder if we should be smarter in the migrate_pages_batch() loop when we start the actual migrations via migrate_folio_move(): if we detect that a folio has unexpected references *and* it has waiters (PG_waiters), back off then and retry the folio later. If it only has unexpected references, just keep retrying: no waiters -> nobody is waiting for the lock to make progress.
The patch currently retries the migration irrespective of the reason for the refcount change.
If you are suggesting that we instead split the retrying according to two conditions:
1. If the folio has waiters, retry up to NR_MAX_MIGRATE_PAGES_RETRY = 10 times.
2. If not, retry for a large number of iterations, say 10,000, since we just need to keep
retrying until the racer finishes reading the folio (or fails folio_trylock()) and decrements
the refcount.
If so, we will have to turn the check into a refcount freeze (with xas_lock()); if we don't,
anyone can raise the refcount again, reading data through a stale reference to the folio, making
our check futile (which begs the question: is commit 0609139 correct? Checking for a refcount mismatch
in __migrate_folio() is ineffective, since between that check and the folio_ref_freeze() in __folio_migrate_mapping()
the refcount may change). As a result, the freeze would have to take place immediately after we unmap
the folio from everyone's address space, something like:
	while (!folio_ref_freeze(src, expected_count) && ++retries < 10000) {
		if (folio_test_waiters(src))
			break;	/* will be retried by the outer loop, giving us 10 chances in total */
	}
This really seems to be the latest point where we can "easily" back off and unlock the source folio -- in this function :) For example, when migrate_folio_move() fails with -EAGAIN, check if there are waiters (PG_waiters?) and undo+unlock to try again later.
Currently, on -EAGAIN, migrate_folio_move() returns without undoing src and dst; even if we were to fall
through to _undo_src/dst, the folios would not be unmapped again, since _unmap() and _move() are
wrapped in different loops. This is what I was hinting at when I wrote in the cover letter:
"...there is no way the refcount would be decremented; as a result, this renders the retrying
useless": upon the failure of _move(), the lock is not dropped (it is dropped
through undo_src()), rendering the _move() retry loop useless. Sorry, I should have noted this there.