On Thu, Feb 08, 2024 at 02:32:54AM +0000, chengming.zhou@linux.dev wrote:
From: Chengming Zhou <zhouchengming@bytedance.com>
We have to invalidate any duplicate entry even when !zswap_enabled, since zswap can be disabled at any time. If the folio was stored successfully before, then got dirtied again while zswap was disabled, we won't invalidate the old duplicate entry in zswap_store(). So a later LRU writeback may overwrite the new data in the swapfile.
Fixes: 42c06a0e8ebe ("mm: kill frontswap")
Cc: stable@vger.kernel.org
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Nice, this is easier to backport and should be less disruptive to mm-unstable as well. It makes sense to me to put the optimization and cleanup that was cut out into a separate patch on top of mm-unstable.
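To spell out the failure mode: before this patch, the !zswap_enabled check sat in front of the duplicate invalidation, so a store attempted while zswap is disabled returned early and left the stale entry in the tree. Condensed pre-patch excerpt of zswap_store() (unrelated parts elided; see the full diff below):

	if (!zswap_enabled || !tree)
		return false;	/* bails out before the duplicate check */

	/*
	 * Never reached while zswap is disabled: the stale dupentry
	 * survives, and a later LRU writeback of it can overwrite
	 * the newer data in the swapfile.
	 */
	spin_lock(&tree->lock);
	dupentry = zswap_rb_search(&tree->rbroot, offset);
	if (dupentry)
		zswap_invalidate_entry(tree, dupentry);
	spin_unlock(&tree->lock);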
---
 mm/zswap.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index fe7ee2640c69..32633d0597dc 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
 	if (folio_test_large(folio))
 		return false;
 
-	if (!zswap_enabled || !tree)
+	if (!tree)
 		return false;
 
 	/*
@@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
 		zswap_invalidate_entry(tree, dupentry);
 	}
 	spin_unlock(&tree->lock);
+
+	if (!zswap_enabled)
+		return false;
+
 	objcg = get_obj_cgroup_from_folio(folio);
 	if (objcg && !obj_cgroup_may_zswap(objcg)) {
 		memcg = get_mem_cgroup_from_objcg(objcg);
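For completeness, the resulting ordering in zswap_store() after this patch, again as a condensed sketch rather than the verbatim function: the duplicate invalidation now runs unconditionally, and only then do we bail out when zswap is disabled. This matters because zswap_enabled can be flipped at runtime via /sys/module/zswap/parameters/enabled:

	if (folio_test_large(folio))
		return false;

	if (!tree)
		return false;

	/* Always drop a stale duplicate first, even when zswap is off. */
	spin_lock(&tree->lock);
	dupentry = zswap_rb_search(&tree->rbroot, offset);
	if (dupentry)
		zswap_invalidate_entry(tree, dupentry);
	spin_unlock(&tree->lock);

	/* Only now bail out if zswap is disabled. */
	if (!zswap_enabled)
		return false;

	objcg = get_obj_cgroup_from_folio(folio);
	/* ... proceed to compress and store the folio ... */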