From: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
commit 7c8d42fdf1a84b1a0dd60d6528309c8ec127e87c upstream.
The alignment check in prepare_hugepage_range() is wrong for 2 GB hugepages: it only checks for 1 MB hugepage alignment.
This can result in a kernel crash in __unmap_hugepage_range() at the BUG_ON(start & ~huge_page_mask(h)) alignment check, for mappings created with MAP_FIXED at an unaligned address.
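A minimal reproducer sketch along the following lines should hit the problem, assuming memfd_create() support for MFD_HUGETLB | MFD_HUGE_2GB and a reserved 2 GB hugepage on the system; the fixed address used below is only illustrative:

/*
 * Hypothetical reproducer sketch (illustrative, not part of the fix):
 * map one 2 GB hugepage with MAP_FIXED at an address that is 1 MB
 * aligned but not 2 GB aligned. With the broken alignment check the
 * mapping is created; tearing it down (munmap() or process exit) can
 * then trigger the BUG_ON() in __unmap_hugepage_range().
 * Requires a reserved 2 GB hugepage (hugepagesz=2G hugepages=1).
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/memfd.h>

#define LEN (2UL * 1024 * 1024 * 1024)	/* one 2 GB hugepage */

int main(void)
{
	/* memfd backed by 2 GB hugepages */
	int fd = syscall(SYS_memfd_create, "huge2g",
			 MFD_HUGETLB | MFD_HUGE_2GB);
	if (fd < 0) {
		perror("memfd_create");
		return 1;
	}
	if (ftruncate(fd, LEN) < 0) {
		perror("ftruncate");
		return 1;
	}

	/* 1 MB aligned, but deliberately not 2 GB aligned */
	void *hint = (void *)(0x100000000UL + 0x100000UL);
	void *p = mmap(hint, LEN, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_FIXED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");	/* expected once the check is fixed */
		return 1;
	}

	/* on an unfixed kernel the unmap path can hit the BUG_ON() */
	munmap(p, LEN);
	return 0;
}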
Fix this by correctly handling multiple hugepage sizes, similar to the generic version of prepare_hugepage_range().
Fixes: d08de8e2d867 ("s390/mm: add support for 2GB hugepages")
Cc: <stable@vger.kernel.org> # 4.8+
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/s390/include/asm/hugetlb.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -30,9 +30,11 @@ pte_t huge_ptep_get_and_clear(struct mm_
 static inline int prepare_hugepage_range(struct file *file,
 			unsigned long addr, unsigned long len)
 {
-	if (len & ~HPAGE_MASK)
+	struct hstate *h = hstate_file(file);
+
+	if (len & ~huge_page_mask(h))
 		return -EINVAL;
-	if (addr & ~HPAGE_MASK)
+	if (addr & ~huge_page_mask(h))
 		return -EINVAL;
 	return 0;
 }