From: "andrew.yang" <andrew.yang@mediatek.com>
commit 3f98c9a62c338bbe06a215c9491e6166ea39bf82 upstream.
damon_get_folio() would always increase folio _refcount and folio_isolate_lru() would increase folio _refcount if the folio's lru flag is set.
If an unevictable folio is isolated successfully, there will be two extra _refcount references. The one from folio_isolate_lru() will be decreased in folio_putback_lru(), but the other one from damon_get_folio() will be left behind. This pins the page.
Whatever the case, the _refcount from damon_get_folio() should be decreased.
Link: https://lkml.kernel.org/r/20230222064223.6735-1-andrew.yang@mediatek.com
Fixes: 57223ac29584 ("mm/damon/paddr: support the pageout scheme")
Signed-off-by: andrew.yang <andrew.yang@mediatek.com>
Reviewed-by: SeongJae Park <sj@kernel.org>
Cc: <stable@vger.kernel.org> [5.16.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: SeongJae Park <sj@kernel.org>
---
This is a backport of the mainline patch for v6.1.y and v6.2.y.
 mm/damon/paddr.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index e1a4315c4be6..402d30b37aba 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -219,12 +219,11 @@ static unsigned long damon_pa_pageout(struct damon_region *r)
 			put_page(page);
 			continue;
 		}
-		if (PageUnevictable(page)) {
+		if (PageUnevictable(page))
 			putback_lru_page(page);
-		} else {
+		else
 			list_add(&page->lru, &page_list);
-			put_page(page);
-		}
+		put_page(page);
 	}
 	applied = reclaim_pages(&page_list);
 	cond_resched();