We (or rather, readahead logic :) ) might allocate a THP in the pagecache and then try to map it into a process that explicitly disabled THP: we might end up installing PMD mappings.
This is a problem for s390x KVM, which explicitly remaps all PMD-mapped THPs to be PTE-mapped in s390_enable_sie()->thp_split_mm(), before starting the VM.
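For context, here is a simplified sketch of what thp_split_mm() conceptually does (not the literal arch/s390/mm/gmap.c code; it assumes a 6.1-ish kernel where for_each_vma() exists and vma->vm_flags is still written directly): mark every VMA VM_NOHUGEPAGE and split anything that is already PMD-mapped back into PTE mappings:

#include <linux/mm.h>
#include <linux/pagewalk.h>
#include <linux/huge_mm.h>

/* Simplified illustration only; the real code lives in arch/s390/mm/gmap.c. */
static int thp_split_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
				    unsigned long next, struct mm_walk *walk)
{
	/* Remap a PMD-mapped THP so it is mapped by PTEs instead. */
	split_huge_pmd(walk->vma, pmd, addr);
	return 0;
}

static const struct mm_walk_ops thp_split_walk_ops = {
	.pmd_entry	= thp_split_walk_pmd_entry,
};

/* Caller (s390_enable_sie()) holds mmap_write_lock(mm). */
static void thp_split_mm(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, 0);

	for_each_vma(vmi, vma) {
		/* Forbid THP for this VMA and split what is already mapped. */
		vma->vm_flags &= ~VM_HUGEPAGE;
		vma->vm_flags |= VM_NOHUGEPAGE;
		walk_page_vma(vma, &thp_split_walk_ops, NULL);
	}
	/* Newly created VMAs inherit VM_NOHUGEPAGE as well. */
	mm->def_flags |= VM_NOHUGEPAGE;
}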
For example, starting a VM backed by a file system with large folio support makes the VM crash when the VM tries to access such a mapping using KVM.
Is it also a problem when the HW disabled THP using TRANSPARENT_HUGEPAGE_UNSUPPORTED? At least on x86 this would be the case without X86_FEATURE_PSE.
In the future, we might be able to do better on s390x and only disallow PMD mappings -- what s390x and likely TRANSPARENT_HUGEPAGE_UNSUPPORTED really want. For now, fix it by essentially performing the same check as would be done in __thp_vma_allowable_orders() or in shmem code, where this works as expected, and disallow PMD mappings, making us fall back to PTE mappings.
Link: https://lkml.kernel.org/r/20241011102445.934409-3-david@redhat.com
Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
Signed-off-by: David Hildenbrand <david@redhat.com>
Reported-by: Leo Fu <bfu@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Janosch Frank <frankja@linux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 2b0f922323ccfa76219bcaacd35cd50aeaa13592)
Signed-off-by: David Hildenbrand <david@redhat.com>
---
Minor contextual difference.
Note that the backport of 963756aac1f011d904ddd9548ae82286d3a91f96 is required (send separately as reply to the "FAILED:" mail).
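For reviewers: that prerequisite roughly factors the following two helpers out of __thp_vma_allowable_orders() into include/linux/huge_mm.h (sketched here from the upstream change; please double-check against the actual backport):

static inline bool vma_thp_disabled(struct vm_area_struct *vma,
		unsigned long vm_flags)
{
	/*
	 * THP disabled explicitly (MADV_NOHUGEPAGE, PR_SET_THP_DISABLE) or by
	 * an architecture for specific mappings, e.g. s390 KVM.
	 */
	return (vm_flags & VM_NOHUGEPAGE) ||
	       test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags);
}

static inline bool thp_disabled_by_hw(void)
{
	/* THP marked as unsupported by hardware/firmware. */
	return transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED);
}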
---
 mm/memory.c | 9 +++++++++
 1 file changed, 9 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index b6ddfe22c5d5..742c2f65c2c8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4293,6 +4293,15 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 	pmd_t entry;
 	vm_fault_t ret = VM_FAULT_FALLBACK;
 
+	/*
+	 * It is too late to allocate a small folio, we already have a large
+	 * folio in the pagecache: especially s390 KVM cannot tolerate any
+	 * PMD mappings, but PTE-mapped THP are fine. So let's simply refuse any
+	 * PMD mappings if THPs are disabled.
+	 */
+	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags))
+		return ret;
+
 	if (!transhuge_vma_suitable(vma, haddr))
 		return ret;