On Fri 13-12-19 19:26:17, John Hubbard wrote:
> Add tracking of pages that were pinned via FOLL_PIN.
> 
> As mentioned in the FOLL_PIN documentation, callers who effectively set
> FOLL_PIN are required to ultimately release such pages via
> unpin_user_page(). The effect is similar to FOLL_GET, and may be thought
> of as "FOLL_GET for DIO and/or RDMA use".
> Pages that have been pinned via FOLL_PIN are identifiable via a new
> function call:
> 
>     bool page_dma_pinned(struct page *page);
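
For anyone following along, the calling pattern this implies for a
DIO/RDMA style user would be roughly the sketch below. This is only an
illustration, not code from the patch - it assumes the pin_user_pages()
entry point from elsewhere in this series, and error handling is trimmed:

	/*
	 * Illustrative sketch only: "start" is a user virtual address,
	 * and pin_user_pages() (assumed from this series) sets FOLL_PIN
	 * internally.
	 */
	struct page *pages[16];
	long i, npinned;

	npinned = pin_user_pages(start, 16, FOLL_WRITE, pages, NULL);
	if (npinned <= 0)
		return npinned ? npinned : -EFAULT;

	/* ... program the DMA engine and wait for completion ... */

	for (i = 0; i < npinned; i++) {
		/* Each page we pinned should now report as dma-pinned. */
		WARN_ON_ONCE(!page_dma_pinned(pages[i]));
		unpin_user_page(pages[i]);
	}
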
> What to do in response to encountering such a page is left to later
> patchsets. There is discussion about this in [1], [2], and [3].
> 
> This also changes a BUG_ON() to a WARN_ON() in follow_page_mask().
> [1] Some slow progress on get_user_pages() (Apr 2, 2019):
>     https://lwn.net/Articles/784574/
> [2] DMA and get_user_pages() (LPC: Dec 12, 2018):
>     https://lwn.net/Articles/774411/
> [3] The trouble with get_user_pages() (Apr 30, 2018):
>     https://lwn.net/Articles/753027/
> 
> Suggested-by: Jan Kara <jack@suse.cz>
> Suggested-by: Jérôme Glisse <jglisse@redhat.com>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> Hi Jan,
> 
> This should address all of your comments for patch 23!
Thanks. One comment below:
> @@ -1486,6 +1500,10 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
>  	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
>  	if (flags & FOLL_TOUCH)
>  		touch_pmd(vma, addr, pmd, flags);
> +
> +	if (!try_grab_page(page, flags))
> +		return ERR_PTR(-ENOMEM);
> +
>  	if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
>  		/*
>  		 * We don't mlock() pte-mapped THPs. This way we can avoid
I'd move this still a bit higher - just after the VM_BUG_ON_PAGE() and
before the "if (flags & FOLL_TOUCH)" test - because touch_pmd() can
update page tables and we don't want that if we're going to fail the
fault.
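
I.e., something like this (just a quick sketch of the ordering I mean,
not tested):

	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);

	/* Grab the reference before touch_pmd() can modify page tables. */
	if (!try_grab_page(page, flags))
		return ERR_PTR(-ENOMEM);

	if (flags & FOLL_TOUCH)
		touch_pmd(vma, addr, pmd, flags);
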
With this fixed, the patch looks good to me, so you can then add:

Reviewed-by: Jan Kara <jack@suse.cz>
Honza