On Thu 23-08-18 13:59:16, Mike Kravetz wrote: [...]
@@ -1409,6 +1419,32 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
 		address = pvmw.address;
 
+		if (PageHuge(page)) {
+			if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
+				/*
+				 * huge_pmd_unshare unmapped an entire PMD
+				 * page.  There is no way of knowing exactly
+				 * which PMDs may be cached for this mm, so
+				 * we must flush them all.  start/end were
+				 * already adjusted above to cover this range.
+				 */
+				flush_cache_range(vma, start, end);
+				flush_tlb_range(vma, start, end);
+				mmu_notifier_invalidate_range(mm, start, end);
+
+				/*
+				 * The ref count of the PMD page was dropped
+				 * which is part of the way map counting
+				 * is done for shared PMDs.  Return 'true'
+				 * here.  When there is no other sharing,
+				 * huge_pmd_unshare returns false and we will
+				 * unmap the actual page and drop map count
+				 * to zero.
+				 */
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
+		}
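
(Aside, for anyone following along: huge_pmd_unshare() in mm/hugetlb.c does
roughly the following -- this is a paraphrase from memory, not the verbatim
source. Each mm sharing the PMD page holds a reference on it, which is the
"map counting" the comment above refers to.)

int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
{
	pgd_t *pgd = pgd_offset(mm, *addr);
	p4d_t *p4d = p4d_offset(pgd, *addr);
	pud_t *pud = pud_offset(p4d, *addr);

	BUG_ON(page_count(virt_to_page(ptep)) == 0);
	if (page_count(virt_to_page(ptep)) == 1)
		return 0;	/* not shared: caller unmaps the page itself */

	/* Unhook the shared PMD page from this mm and drop its ref. */
	pud_clear(pud);
	put_page(virt_to_page(ptep));
	mm_dec_nr_pmds(mm);
	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
	return 1;
}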
Wait a second. The flush/notifier part above is not correct, right? You have to call the notifiers only after page_vma_mapped_walk_done, because the notifier callbacks might sleep and at this point we are still holding the pte lock. This is, btw, a problem for the other mmu_notifier_invalidate_range users in try_to_unmap_one as well, unless I am terribly confused. That would suggest 369ea8242c0fb is incorrect.
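
To make the ordering I mean explicit, I would have expected something along
these lines (a completely untested sketch, only to show where the potentially
sleeping call would have to move; the cache/TLB flushes can stay under the
ptl):

		if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
			/* The flushes are fine under the pte lock ... */
			flush_cache_range(vma, start, end);
			flush_tlb_range(vma, start, end);
			/*
			 * ... but drop the lock via page_vma_mapped_walk_done
			 * before calling out to the notifiers, which might
			 * sleep.
			 */
			page_vma_mapped_walk_done(&pvmw);
			mmu_notifier_invalidate_range(mm, start, end);
			break;
		}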