On Tue 10-12-19 18:53:13, John Hubbard wrote:
Convert from get_user_pages() to pin_user_pages().
As required by pin_user_pages(), release these pages via
put_user_page().
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
The patch looks good to me. You can add:
Reviewed-by: Jan Kara <jack@suse.cz>
I'd just note that mm_iommu_do_alloc() has a pre-existing bug: the last jump to 'free_exit' (at line 157) happens after the page pointers have already been converted to physical addresses, so the put_page() calls there will just crash. But that's completely unrelated to your change. I'll send a fix separately.
Honza
 arch/powerpc/mm/book3s64/iommu_api.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index 56cc84520577..a86547822034 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -103,7 +103,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	for (entry = 0; entry < entries; entry += chunk) {
 		unsigned long n = min(entries - entry, chunk);
 
-		ret = get_user_pages(ua + (entry << PAGE_SHIFT), n,
+		ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n,
 				FOLL_WRITE | FOLL_LONGTERM,
 				mem->hpages + entry, NULL);
 		if (ret == n) {
@@ -167,9 +167,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	return 0;
 
 free_exit:
-	/* free the reference taken */
-	for (i = 0; i < pinned; i++)
-		put_page(mem->hpages[i]);
+	/* free the references taken */
+	put_user_pages(mem->hpages, pinned);
 
 	vfree(mem->hpas);
 	kfree(mem);
@@ -215,7 +214,8 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
 			SetPageDirty(page);
 
-		put_page(page);
+		put_user_page(page);
+
 		mem->hpas[i] = 0;
 	}
 }
2.24.0