4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Yang Shi <yang.shi@linux.alibaba.com>
commit a7f40cfe3b7ada57af9b62fd28430eeb4a7cfcb7 upstream.
When MPOL_MF_STRICT was specified and an existing page was already on a node that does not follow the policy, mbind() should return -EIO. But commit 6f4576e3687b ("mempolicy: apply page table walker on queue_pages_range()") broke the rule.
And commit c8633798497c ("mm: mempolicy: mbind and migrate_pages support thp migration") didn't return the correct value for THP mbind() either.
If MPOL_MF_STRICT is set, ignore vma_migratable() to make sure the walk reaches queue_pages_pte_range() or queue_pages_pmd(), which check whether an existing page is already on a node that does not follow the policy. And since a non-migratable vma may be encountered there, return -EIO as well if MPOL_MF_MOVE or MPOL_MF_MOVE_ALL was specified.
Tested with https://github.com/metan-ucw/ltp/blob/master/testcases/kernel/syscalls/mbind...
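For illustration only (not part of the patch), a minimal userspace sketch of the case the test above exercises, assuming a two-node machine where the page faults in on node 0, libnuma's <numaif.h>, and linking with -lnuma: fault a page in under the default policy, then mbind() it to the other node with MPOL_MF_STRICT alone; with this fix the call is expected to fail with EIO.

/*
 * Sketch: pre-existing page violates the requested strict policy.
 * Assumes the faulting CPU sits on node 0 and node 1 exists.
 */
#include <errno.h>
#include <numaif.h>	/* mbind(), MPOL_BIND, MPOL_MF_STRICT */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	unsigned long nodemask = 1UL << 1;	/* bind to node 1 (assumed remote) */

	if (p == MAP_FAILED)
		return 1;
	p[0] = 1;	/* fault the page in under the default policy */

	/* MPOL_MF_STRICT without MOVE: existing page on the wrong node -> EIO */
	if (mbind(p, page, MPOL_BIND, &nodemask, sizeof(nodemask) * 8,
		  MPOL_MF_STRICT) && errno == EIO)
		printf("got expected EIO: %s\n", strerror(errno));

	munmap(p, page);
	return 0;
}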
[akpm@linux-foundation.org: tweak code comment]
Link: http://lkml.kernel.org/r/1553020556-38583-1-git-send-email-yang.shi@linux.al...
Fixes: 6f4576e3687b ("mempolicy: apply page table walker on queue_pages_range()")
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reported-by: Cyril Hrubis <chrubis@suse.cz>
Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Rafael Aquini <aquini@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 mm/mempolicy.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -547,11 +547,16 @@ retry:
 			goto retry;
 		}
 
-		migrate_page_add(page, qp->pagelist, flags);
+		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+			if (!vma_migratable(vma))
+				break;
+			migrate_page_add(page, qp->pagelist, flags);
+		} else
+			break;
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
-	return 0;
+	return addr != end ? -EIO : 0;
 }
 
 static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
@@ -623,7 +628,12 @@ static int queue_pages_test_walk(unsigne
 	unsigned long endvma = vma->vm_end;
 	unsigned long flags = qp->flags;
 
-	if (!vma_migratable(vma))
+	/*
+	 * Need check MPOL_MF_STRICT to return -EIO if possible
+	 * regardless of vma_migratable
+	 */
+	if (!vma_migratable(vma) &&
+	    !(flags & MPOL_MF_STRICT))
 		return 1;
 
 	if (endvma > end)
@@ -650,7 +660,7 @@ static int queue_pages_test_walk(unsigne
 	}
 
 	/* queue pages from current vma */
-	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
+	if (flags & MPOL_MF_VALID)
 		return 0;
 	return 1;
 }