@@ -1608,14 +1598,12 @@ bool zswap_store(struct folio *folio)
 	/* map */
 	spin_lock(&tree->lock);
 	/*
-	 * A duplicate entry should have been removed at the beginning of this
-	 * function. Since the swap entry should be pinned, if a duplicate is
-	 * found again here it means that something went wrong in the swap
-	 * cache.
+	 * The folio may have been dirtied again, invalidate the
+	 * possibly stale entry before inserting the new entry.
 	 */
-	while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
-		WARN_ON(1);
+	if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
 		zswap_invalidate_entry(tree, dupentry);
+		VM_WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
 	}

It seems there is only one code path that calls zswap_rb_insert(), and there is no longer a loop to repeat the insert. Can we have zswap_rb_insert() install the entry and return the dupentry? We can still just call zswap_invalidate_entry() on the duplicate; the dupentry's mapping has already been removed by the time zswap_rb_insert() returns. That would save a repeat lookup in the duplicate case. After this change, zswap_rb_insert() would map onto the xarray's xa_store() pretty nicely.
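Just to make the shape concrete, here is a rough, untested sketch of that idea; the zswap_rb_store() name is made up for illustration and is not part of this series, and the caller-side handling of the displaced entry is hand-waved:

/*
 * Untested sketch only, not part of this patch: install @entry and return
 * the duplicate it displaces (if any), so the caller does not need a
 * second lookup. The made-up name zswap_rb_store() mirrors xa_store()'s
 * return-the-old-value behaviour.
 */
static struct zswap_entry *zswap_rb_store(struct rb_root *root,
					  struct zswap_entry *entry)
{
	struct rb_node **link = &root->rb_node, *parent = NULL;
	pgoff_t entry_offset = swp_offset(entry->swpentry);
	struct zswap_entry *cur;

	while (*link) {
		parent = *link;
		cur = rb_entry(parent, struct zswap_entry, rbnode);
		if (swp_offset(cur->swpentry) > entry_offset)
			link = &(*link)->rb_left;
		else if (swp_offset(cur->swpentry) < entry_offset)
			link = &(*link)->rb_right;
		else {
			/* Same offset: take over the duplicate's slot. */
			rb_replace_node(&cur->rbnode, &entry->rbnode, root);
			RB_CLEAR_NODE(&cur->rbnode);
			return cur;
		}
	}
	rb_link_node(&entry->rbnode, parent, link);
	rb_insert_color(&entry->rbnode, root);
	return NULL;
}

The duplicate would come back already unlinked, so the caller would only need to drop its tree reference; since zswap_invalidate_entry() currently does the rb_erase() itself, it would need a small tweak (or the put could be done directly) for this to work.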
I brought this up in v1 [1]. We agreed to leave it as-is for now since we expect the xarray conversion soon-ish. No need to update zswap_rb_insert() only to replace it with xa_store() later anyway.
[1] https://lore.kernel.org/lkml/ZcFne336KJdbrvvS@google.com/