Changes since v9 [1] and v10 [2]
* Resend the full series with the reworked "mm: introduce
  MEMORY_DEVICE_FS_DAX and CONFIG_DEV_PAGEMAP_OPS" (Christoph)
* Move generic_dax_pagefree() into the pmem driver (Christoph)
* Clean up __bdev_dax_supported() (Christoph)
* Clean up some stale SRCU bits left over from other iterations (Jan)
* Clean up xfs_break_layouts() (Jan)
[1]: https://lists.01.org/pipermail/linux-nvdimm/2018-April/015457.html
[2]: https://lists.01.org/pipermail/linux-nvdimm/2018-May/015885.html
---
Background:
get_user_pages() in the filesystem pins file-backed memory pages for access by devices performing dma. However, it only pins the memory pages, not the page-to-file offset association. If a file is truncated the pages are unmapped from the file and dma may continue indefinitely into a page that is owned by a device driver. This breaks coherency of the file vs dma, but the assumption is that if userspace wants the file-space truncated it does not matter what data is inbound from the device; it is no longer relevant. The only expectation is that dma can safely continue while the filesystem reallocates the block(s).
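For illustration, a hypothetical userspace sequence that sets up exactly this situation (paths are made up and most error handling is elided; this is a sketch, not a reproducer from the series):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* hypothetical files on a dax-capable filesystem */
		int src = open("/mnt/dax/data", O_RDONLY | O_DIRECT);
		int dst = open("/mnt/dax/buffer", O_RDWR);
		char *buf;

		if (src < 0 || dst < 0)
			return 1;

		/* file-backed buffer: on dax these pages *are* the filesystem blocks */
		buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, dst, 0);
		if (buf == MAP_FAILED)
			return 1;

		/*
		 * The O_DIRECT read pins buf's pages with get_user_pages() for
		 * the duration of the device dma.  If another task concurrently
		 * runs ftruncate(dst, 0), the pinned pages are unmapped from
		 * the file while the dma is still in flight -- on dax the
		 * filesystem could then reallocate those blocks to another
		 * file under active dma.
		 */
		if (read(src, buf, 4096) < 0)
			return 1;

		return 0;
	}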
Problem:
This expectation that dma can safely continue while the filesystem changes the block map is broken by dax. With dax the target dma page *is* the filesystem block. The model of leaving the page pinned for dma while truncating the block out of the file means that the filesystem is free to reallocate a block under active dma to another file, and the expected data-incoherency situation turns into active data corruption.
Solution:
Defer all filesystem operations (fallocate(), truncate()) on a dax-mode file while any page/block in the file is under active dma. This solution assumes that dma is transient. Cases where dma operations are known not to be transient, like RDMA, have been explicitly disabled via commits like 5f1d43de5416 ("IB/core: disable memory registration of filesystem-dax vmas").
The dax_layout_busy_page() routine is called by filesystems, with a lock held against mm faults (i_mmap_lock), to find pinned / busy dax pages. The process of looking up a busy page invalidates all mappings so that any subsequent get_user_pages() blocks on i_mmap_lock. The filesystem continues to call dax_layout_busy_page() until it finally returns no more active pages; a sketch of this loop follows. This approach assumes that the page pinning is transient; if that assumption is violated the system would likely have hung from the uncompleted I/O anyway.
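A minimal sketch of that filesystem-side loop, modeled on the xfs_break_dax_layouts() pattern this series introduces. dax_layout_busy_page() is the real interface; fs_break_dax_layouts() and fs_wait_for_page() are illustrative placeholders, the latter standing in for the drop-locks/sleep/relock plumbing:

	static int fs_break_dax_layouts(struct inode *inode)
	{
		struct page *page;

		/* caller holds the lock against new mm faults (i_mmap_lock) */
		for (;;) {
			/* invalidates all mappings, returns a still-pinned page */
			page = dax_layout_busy_page(inode->i_mapping);
			if (!page)
				return 0;	/* no busy pages: layout may change */

			/*
			 * Sleep until the page's reference count indicates the
			 * dma has completed, then rescan.  Pinning is assumed
			 * to be transient, so this loop should terminate.
			 */
			if (fs_wait_for_page(inode, page))
				return -ERESTARTSYS;
		}
	}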
---
Dan Williams (7):
      memremap: split devm_memremap_pages() and memremap() infrastructure
      mm: introduce MEMORY_DEVICE_FS_DAX and CONFIG_DEV_PAGEMAP_OPS
      mm: fix __gup_device_huge vs unmap
      mm, fs, dax: handle layout changes to pinned dax mappings
      xfs: prepare xfs_break_layouts() to be called with XFS_MMAPLOCK_EXCL
      xfs: prepare xfs_break_layouts() for another layout type
      xfs, dax: introduce xfs_break_dax_layouts()
 drivers/dax/super.c       |   14 ++-
 drivers/nvdimm/pfn_devs.c |    2
 drivers/nvdimm/pmem.c     |   25 +++++
 fs/Kconfig                |    1
 fs/dax.c                  |   97 +++++++++++++++++++++
 fs/xfs/xfs_file.c         |   72 ++++++++++++++--
 fs/xfs/xfs_inode.h        |   16 +++
 fs/xfs/xfs_ioctl.c        |    8 --
 fs/xfs/xfs_iops.c         |   16 ++-
 fs/xfs/xfs_pnfs.c         |   15 ++-
 fs/xfs/xfs_pnfs.h         |    5 +
 include/linux/dax.h       |    7 ++
 include/linux/memremap.h  |   36 ++------
 include/linux/mm.h        |   71 +++++++++++----
 kernel/Makefile           |    3 -
 kernel/iomem.c            |  167 ++++++++++++++++++++++++++++++++++++
 kernel/memremap.c         |  209 ++++++---------------------------------
 mm/Kconfig                |    5 +
 mm/gup.c                  |   36 ++++++--
 mm/hmm.c                  |   13 ---
 mm/swap.c                 |    3 -
 21 files changed, 542 insertions(+), 279 deletions(-)
 create mode 100644 kernel/iomem.c
get_user_pages_fast() for device pages is missing the typical validation that all page references have been taken while the mapping was valid. Without this validation truncate operations cannot reliably coordinate against new page reference events like O_DIRECT.
Cc: <stable@vger.kernel.org>
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Reported-by: Jan Kara <jack@suse.cz>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 mm/gup.c |   36 ++++++++++++++++++++++++++----------
 1 file changed, 26 insertions(+), 10 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 76af4cfeaf68..84dd2063ca3d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1456,32 +1456,48 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 	return 1;
 }
 
-static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
+static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		unsigned long end, struct page **pages, int *nr)
 {
 	unsigned long fault_pfn;
+	int nr_start = *nr;
+
+	fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+	if (!__gup_device_huge(fault_pfn, addr, end, pages, nr))
+		return 0;
 
-	fault_pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	return __gup_device_huge(fault_pfn, addr, end, pages, nr);
+	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
+		undo_dev_pagemap(nr, nr_start, pages);
+		return 0;
+	}
+	return 1;
 }
 
-static int __gup_device_huge_pud(pud_t pud, unsigned long addr,
+static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 		unsigned long end, struct page **pages, int *nr)
 {
 	unsigned long fault_pfn;
+	int nr_start = *nr;
+
+	fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+	if (!__gup_device_huge(fault_pfn, addr, end, pages, nr))
+		return 0;
 
-	fault_pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	return __gup_device_huge(fault_pfn, addr, end, pages, nr);
+	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
+		undo_dev_pagemap(nr, nr_start, pages);
+		return 0;
+	}
+	return 1;
 }
 #else
-static int __gup_device_huge_pmd(pmd_t pmd, unsigned long addr,
+static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		unsigned long end, struct page **pages, int *nr)
 {
 	BUILD_BUG();
 	return 0;
 }
 
-static int __gup_device_huge_pud(pud_t pud, unsigned long addr,
+static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 		unsigned long end, struct page **pages, int *nr)
 {
 	BUILD_BUG();
@@ -1499,7 +1515,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		return 0;
 
 	if (pmd_devmap(orig))
-		return __gup_device_huge_pmd(orig, addr, end, pages, nr);
+		return __gup_device_huge_pmd(orig, pmdp, addr, end, pages, nr);
 
 	refs = 0;
 	page = pmd_page(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
@@ -1537,7 +1553,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 		return 0;
 
 	if (pud_devmap(orig))
-		return __gup_device_huge_pud(orig, addr, end, pages, nr);
+		return __gup_device_huge_pud(orig, pudp, addr, end, pages, nr);
 
 	refs = 0;
 	page = pud_page(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
On Fri, 18 May 2018 18:35:08 -0700 Dan Williams <dan.j.williams@intel.com> wrote:

> get_user_pages_fast() for device pages is missing the typical
> validation that all page references have been taken while the mapping
> was valid. Without this validation truncate operations cannot reliably
> coordinate against new page reference events like O_DIRECT.

I'm not seeing anything in the changelog which justifies a -stable backport, i.e. a description of the end-user-visible effects of the bug?
On Mon, Jun 11, 2018 at 2:58 PM, Andrew Morton <akpm@linux-foundation.org> wrote:
> On Fri, 18 May 2018 18:35:08 -0700 Dan Williams <dan.j.williams@intel.com> wrote:
>
>> get_user_pages_fast() for device pages is missing the typical
>> validation that all page references have been taken while the mapping
>> was valid. Without this validation truncate operations cannot reliably
>> coordinate against new page reference events like O_DIRECT.
>
> I'm not seeing anything in the changelog which justifies a -stable
> backport, i.e. a description of the end-user-visible effects of the bug?
Without this change get_user_pages_fast() could race truncate. Ordering page_cache_add_speculative() before re-validating the mapping is what allows truncate and page freeing to synchronize against get_user_pages_fast().

Specifically, a get_user_pages_fast() thread could keep a page mapped and accessible via the kernel mapping after it was meant to be torn down. That could cause unexpected data corruption, or access to the physical page after it has been invalidated from the process page tables.
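In rough pseudocode, the ordering in question looks like this (illustrative only, not literal kernel code; wait_for_page_idle() is a stand-in name, and the literal gup side is in the patch above):

	/* gup_fast() side, per device page (what the patch adds): */
	orig = READ_ONCE(*pmdp);                   /* 1: snapshot the mapping    */
	if (!page_cache_add_speculative(page, refs))
		return 0;                          /* 2: take the page reference */
	if (pmd_val(orig) != pmd_val(*pmdp)) {     /* 3: re-validate the mapping */
		undo_dev_pagemap(nr, nr_start, pages);
		return 0;                          /* raced with unmap: undo     */
	}

	/* truncate side, conceptually: */
	unmap_mapping_range(mapping, 0, 0, 1);     /* A: tear down the ptes      */
	wait_for_page_idle(page);                  /* B: any reference taken at  */
	                                           /*    step 2 is visible here  */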
Ideally I think we would go further than this patch and backport the full fix for the filesystem-dax-vs-truncate problem. I was planning to spin up a 4.14 backport with the full set of the pieces that went into 4.17 and 4.18.