4.9-stable review patch. If anyone has any objections, please let me know.
------------------
commit 906f9cdfc2a0800f13683f9e4ebdfd08c12ee81b upstream.
The term "freeze" is used in several ways in the kernel, and in mm it has the particular meaning of forcing page refcount temporarily to 0. freeze_page() is just too confusing a name for a function that unmaps a page: rename it unmap_page(), and rename unfreeze_page() remap_page().
Went to change the mention of freeze_page() added later in mm/rmap.c, but found it to be incorrect: ordinary page reclaim reaches there too; but the substance of the comment still seems correct, so edit it down.
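For context (not part of the change itself): in mm, "freezing" a page means forcing its _refcount to 0 so no new references can be taken, which is done via page_ref_freeze()/page_ref_unfreeze() (split_huge_page_to_list() itself freezes with page_ref_freeze(head, 1 + extra_pins)); the function being renamed instead removes the page's mappings through the rmap walk. A minimal sketch of the two operations against the 4.9-era helpers; the example_freeze_vs_unmap() name and expected_refs argument are invented for illustration:

	#include <linux/mm.h>
	#include <linux/rmap.h>

	/* Illustrative sketch only, not part of this patch. */
	static bool example_freeze_vs_unmap(struct page *page, int expected_refs)
	{
		/* "Freeze": atomically take _refcount from expected_refs to 0 */
		if (page_ref_freeze(page, expected_refs)) {
			/* ... no new references can be obtained here ... */
			page_ref_unfreeze(page, expected_refs);
		}

		/*
		 * What freeze_page()/unmap_page() actually does: clear the page
		 * table entries mapping the page via the reverse-map walk.
		 */
		return try_to_unmap(page, TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS) ==
			SWAP_SUCCESS;
	}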
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1811261514080.2275@eggly.anvils
Fixes: e9b61f19858a5 ("thp: reintroduce split_huge_page()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: <stable@vger.kernel.org> [4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/huge_memory.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9f7bba700e4e..583ad61cc2f1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1839,7 +1839,7 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
 	}
 }
 
-static void freeze_page(struct page *page)
+static void unmap_page(struct page *page)
 {
 	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS |
 		TTU_RMAP_LOCKED;
@@ -1862,7 +1862,7 @@ static void freeze_page(struct page *page)
 	VM_BUG_ON_PAGE(ret, page + i - 1);
 }
 
-static void unfreeze_page(struct page *page)
+static void remap_page(struct page *page)
 {
 	int i;
 
@@ -1971,7 +1971,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 
 	spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
 
-	unfreeze_page(head);
+	remap_page(head);
 
 	for (i = 0; i < HPAGE_PMD_NR; i++) {
 		struct page *subpage = head + i;
@@ -2138,7 +2138,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	/*
-	 * Racy check if we can split the page, before freeze_page() will
+	 * Racy check if we can split the page, before unmap_page() will
 	 * split PMDs
 	 */
 	if (total_mapcount(head) != page_count(head) - extra_pins - 1) {
@@ -2147,7 +2147,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	mlocked = PageMlocked(page);
-	freeze_page(head);
+	unmap_page(head);
 	VM_BUG_ON_PAGE(compound_mapcount(head), head);
 
 	/* Make sure the page is not on per-CPU pagevec as it takes pin */
@@ -2199,7 +2199,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 fail:		if (mapping)
 			spin_unlock(&mapping->tree_lock);
 		spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
-		unfreeze_page(head);
+		remap_page(head);
 		ret = -EBUSY;
 	}