This series adds a simple optimization for migrate_vma_*() when the source VMA is not an anonymous VMA, plus a new test case to exercise it. It is based on linux-mm and is intended for Andrew Morton's tree.
Changes in v2:
- Apply the same vma_is_anonymous() check in the pte_none() case.
- Don't increment cpages if the page isn't migrating.
Ralph Campbell (2):
  mm/migrate: optimize migrate_vma_setup() for holes
  mm/migrate: add migrate-shared test for migrate_vma_*()
 mm/migrate.c                           | 16 ++++++++++--
 tools/testing/selftests/vm/hmm-tests.c | 35 ++++++++++++++++++++++++++
 2 files changed, 49 insertions(+), 2 deletions(-)
When migrating system memory to device private memory, if the source address range is a valid VMA but an address is not populated (or maps the zero page), the corresponding source PFN array entry is marked as valid but with no PFN. This lets the device driver allocate private memory and clear it, and the new device private struct page is then inserted into the CPU's page tables when migrate_vma_pages() is called. However, migrate_vma_pages() only inserts the new page if the VMA is an anonymous range, so there is no point in telling the device driver to allocate device private memory that will never be migrated. Instead, mark those source PFN array entries as not migrating to avoid this overhead.
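For reviewers less familiar with the API, the driver-side flow being optimized looks roughly like the sketch below. This is a simplified, hypothetical example following the migrate_vma_*() pattern described in Documentation/vm/hmm.rst; example_alloc_device_page() is a placeholder, and the copy/clear step, locking, and error handling are elided. The point of this patch is that non-anonymous holes no longer have MIGRATE_PFN_MIGRATE set in src[], so the driver skips them instead of allocating device private memory that migrate_vma_pages() would never install:

static int example_migrate_to_device(struct migrate_vma *args)
{
	unsigned long i;
	int ret;

	ret = migrate_vma_setup(args);
	if (ret)
		return ret;

	for (i = 0; i < args->npages; i++) {
		struct page *dpage;

		/* Skip entries the core marked as not migrating. */
		if (!(args->src[i] & MIGRATE_PFN_MIGRATE)) {
			args->dst[i] = 0;
			continue;
		}

		/* Placeholder: allocate and initialize a device private page. */
		dpage = example_alloc_device_page();
		if (!dpage) {
			args->dst[i] = 0;
			continue;
		}
		args->dst[i] = migrate_pfn(page_to_pfn(dpage));
	}

	/* Only anonymous VMAs get the new pages inserted by migrate_vma_pages(). */
	migrate_vma_pages(args);
	migrate_vma_finalize(args);
	return 0;
}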
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 mm/migrate.c | 16 ++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index b0125c082549..ec00b7a6ea2a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2205,6 +2205,16 @@ static int migrate_vma_collect_hole(unsigned long start,
 	struct migrate_vma *migrate = walk->private;
 	unsigned long addr;
 
+	/* Only allow populating anonymous memory. */
+	if (!vma_is_anonymous(walk->vma)) {
+		for (addr = start; addr < end; addr += PAGE_SIZE) {
+			migrate->src[migrate->npages] = 0;
+			migrate->dst[migrate->npages] = 0;
+			migrate->npages++;
+		}
+		return 0;
+	}
+
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
 		migrate->dst[migrate->npages] = 0;
@@ -2297,8 +2307,10 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		pte = *ptep;
 
 		if (pte_none(pte)) {
-			mpfn = MIGRATE_PFN_MIGRATE;
-			migrate->cpages++;
+			if (vma_is_anonymous(vma)) {
+				mpfn = MIGRATE_PFN_MIGRATE;
+				migrate->cpages++;
+			}
 			goto next;
 		}
Add a migrate_vma_*() self test for mmap(MAP_SHARED) to verify that !vma_is_anonymous() ranges won't be migrated.
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 tools/testing/selftests/vm/hmm-tests.c | 35 ++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index 79db22604019..e83d3ab37697 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -931,6 +931,41 @@ TEST_F(hmm, migrate_fault)
 	hmm_buffer_free(buffer);
 }
 
+/*
+ * Migrate anonymous shared memory to device private memory.
+ */
+TEST_F(hmm, migrate_shared)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_SHARED | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Migrate memory to device. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages);
+	ASSERT_EQ(ret, -ENOENT);
+
+	hmm_buffer_free(buffer);
+}
+
 /*
  * Try to migrate various memory types to device private memory.
  */