For now, we don't support reliable R/O long-term pinning in COW mappings. That means that if we trigger R/O long-term pinning in a MAP_PRIVATE mapping, we can end up pinning the (R/O-mapped) shared zeropage or a pagecache page.
The next write access would trigger a write fault and replace the pinned page by an exclusive anonymous page in the process page tables; whatever the process would then write to that private page copy would not be visible to the owner of the previous page pin: for example, RDMA could read stale data. The end result is essentially unexpected and hard-to-debug memory corruption.
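To illustrate with a generic sequence (not tied to any specific driver):

  1) App:    mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0)
  2) App:    reads from mem -> the pagecache page gets mapped R/O
  3) Driver: takes a R/O long-term pin on that pagecache page
  4) App:    writes to mem -> a write fault breaks COW and maps an exclusive
             anonymous page copy instead
  5) Driver: keeps reading/DMA'ing from the old pagecache page -> stale data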
Some drivers have been working around that limitation by using FOLL_FORCE|FOLL_WRITE|FOLL_LONGTERM for R/O long-term pinning: FOLL_WRITE triggers a write fault, if required, and breaks COW before pinning the page. FOLL_FORCE is required because the VMA might lack write permissions, and drivers wanted that case to work as well, just like one would expect (no write access, but still triggering a write fault to break COW).
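In driver code, that workaround boils down to something like the following sketch ('start', 'nr_pages' and 'pages' are placeholders here, not taken from any specific driver):

	/* Trigger a write fault to break COW before actually pinning,
	 * even though we will never write via the pin. */
	ret = pin_user_pages(start, nr_pages,
			     FOLL_WRITE | FOLL_FORCE | FOLL_LONGTERM,
			     pages, NULL);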
However, that is not a practical solution, because:

(1) Drivers that don't stick to that undocumented and debatable pattern
    still run into the issue. For example, VFIO uses only FOLL_LONGTERM
    for R/O long-term pinning.

(2) Using FOLL_WRITE just to work around a COW mapping + page pinning
    limitation is unintuitive. FOLL_WRITE would, for example, mark the
    page soft-dirty or trigger uffd-wp, even though there isn't actually
    going to be any write access.

(3) The purpose of FOLL_FORCE is debug access, not working around lack
    of VMA permissions for arbitrary drivers.
So instead, let's make R/O long-term pinning work as expected by breaking COW in a COW mapping early, such that we can remove any FOLL_FORCE usage from drivers and make FOLL_FORCE ptrace-specific (renaming it to FOLL_PTRACE). More details in patch #8.
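With that in place, the same R/O long-term pin turns into a plain (sketch, same placeholders as above):

	/* GUP itself breaks COW via FAULT_FLAG_UNSHARE where required. */
	ret = pin_user_pages(start, nr_pages, FOLL_LONGTERM, pages, NULL);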
Patches #1--#3 add COW tests for non-anonymous pages.
Patches #4--#7 prepare core MM for extended FAULT_FLAG_UNSHARE support in
COW mappings.
Patch #8 implements reliable R/O long-term pinning in COW mappings.
Patches #9--#19 remove any FOLL_FORCE usage from drivers.
Patch #20 renames FOLL_FORCE to FOLL_PTRACE.
I'm refraining from CCing all driver/arch maintainers on the whole patch set; instead, I'm CCing them only on the cover letter and the applicable patch (I know, I know, someone is always unhappy ... sorry).
RFC -> v1:
* Use the term "ptrace" instead of "debuggers" in patch descriptions
* Added ACK/Tested-by
* "mm/frame-vector: remove FOLL_FORCE usage"
  -> Adjust description
* "mm: rename FOLL_FORCE to FOLL_PTRACE"
  -> Added
David Hildenbrand (20):
  selftests/vm: anon_cow: prepare for non-anonymous COW tests
  selftests/vm: cow: basic COW tests for non-anonymous pages
  selftests/vm: cow: R/O long-term pinning reliability tests for
    non-anon pages
  mm: add early FAULT_FLAG_UNSHARE consistency checks
  mm: add early FAULT_FLAG_WRITE consistency checks
  mm: rework handling in do_wp_page() based on private vs. shared
    mappings
  mm: don't call vm_ops->huge_fault() in wp_huge_pmd()/wp_huge_pud()
    for private mappings
  mm: extend FAULT_FLAG_UNSHARE support to anything in a COW mapping
  mm/gup: reliable R/O long-term pinning in COW mappings
  RDMA/umem: remove FOLL_FORCE usage
  RDMA/usnic: remove FOLL_FORCE usage
  RDMA/siw: remove FOLL_FORCE usage
  media: videobuf-dma-sg: remove FOLL_FORCE usage
  drm/etnaviv: remove FOLL_FORCE usage
  media: pci/ivtv: remove FOLL_FORCE usage
  mm/frame-vector: remove FOLL_FORCE usage
  drm/exynos: remove FOLL_FORCE usage
  RDMA/hw/qib/qib_user_pages: remove FOLL_FORCE usage
  habanalabs: remove FOLL_FORCE usage
  mm: rename FOLL_FORCE to FOLL_PTRACE
 arch/alpha/kernel/ptrace.c                    |   6 +-
 arch/arm64/kernel/mte.c                       |   2 +-
 arch/ia64/kernel/ptrace.c                     |  10 +-
 arch/mips/kernel/ptrace32.c                   |   4 +-
 arch/mips/math-emu/dsemul.c                   |   2 +-
 arch/powerpc/kernel/ptrace/ptrace32.c         |   4 +-
 arch/sparc/kernel/ptrace_32.c                 |   4 +-
 arch/sparc/kernel/ptrace_64.c                 |   8 +-
 arch/x86/kernel/step.c                        |   2 +-
 arch/x86/um/ptrace_32.c                       |   2 +-
 arch/x86/um/ptrace_64.c                       |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |   8 +-
 drivers/gpu/drm/exynos/exynos_drm_g2d.c       |   2 +-
 drivers/infiniband/core/umem.c                |   8 +-
 drivers/infiniband/hw/qib/qib_user_pages.c    |   2 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c      |   9 +-
 drivers/infiniband/sw/siw/siw_mem.c           |   9 +-
 drivers/media/common/videobuf2/frame_vector.c |   2 +-
 drivers/media/pci/ivtv/ivtv-udma.c            |   2 +-
 drivers/media/pci/ivtv/ivtv-yuv.c             |   5 +-
 drivers/media/v4l2-core/videobuf-dma-sg.c     |  14 +-
 drivers/misc/habanalabs/common/memory.c       |   3 +-
 fs/exec.c                                     |   2 +-
 fs/proc/base.c                                |   2 +-
 include/linux/mm.h                            |  35 +-
 include/linux/mm_types.h                      |   8 +-
 kernel/events/uprobes.c                       |   4 +-
 kernel/ptrace.c                               |  12 +-
 mm/gup.c                                      |  38 +-
 mm/huge_memory.c                              |  13 +-
 mm/hugetlb.c                                  |  14 +-
 mm/memory.c                                   |  97 +++--
 mm/util.c                                     |   4 +-
 security/tomoyo/domain.c                      |   2 +-
 tools/testing/selftests/vm/.gitignore         |   2 +-
 tools/testing/selftests/vm/Makefile           |  10 +-
 tools/testing/selftests/vm/check_config.sh    |   4 +-
 .../selftests/vm/{anon_cow.c => cow.c}        | 387 +++++++++++++++++-
 tools/testing/selftests/vm/run_vmtests.sh     |   8 +-
 39 files changed, 575 insertions(+), 177 deletions(-)
 rename tools/testing/selftests/vm/{anon_cow.c => cow.c} (75%)
Originally, the plan was to have separate tests for testing COW of non-anonymous pages (e.g., the shared zeropage).

Turns out that we'd need a lot of similar functionality and that there isn't a really good reason to separate them. So let's prepare for non-anon tests by renaming to "cow".
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 tools/testing/selftests/vm/.gitignore      |  2 +-
 tools/testing/selftests/vm/Makefile        | 10 ++++----
 tools/testing/selftests/vm/check_config.sh |  4 +--
 .../selftests/vm/{anon_cow.c => cow.c}     | 25 +++++++++++--------
 tools/testing/selftests/vm/run_vmtests.sh  |  8 +++---
 5 files changed, 27 insertions(+), 22 deletions(-)
 rename tools/testing/selftests/vm/{anon_cow.c => cow.c} (97%)
diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 8a536c731e3c..ee8c41c998e6 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
-anon_cow
+cow
 hugepage-mmap
 hugepage-mremap
 hugepage-shm
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index 0986bd60c19f..89c14e41bd43 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -27,7 +27,7 @@ MAKEFLAGS += --no-builtin-rules

 CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
 LDLIBS = -lrt -lpthread
-TEST_GEN_FILES = anon_cow
+TEST_GEN_FILES = cow
 TEST_GEN_FILES += compaction_test
 TEST_GEN_FILES += gup_test
 TEST_GEN_FILES += hmm-tests
@@ -99,7 +99,7 @@ TEST_FILES += va_128TBswitch.sh

 include ../lib.mk

-$(OUTPUT)/anon_cow: vm_util.c
+$(OUTPUT)/cow: vm_util.c
 $(OUTPUT)/khugepaged: vm_util.c
 $(OUTPUT)/ksm_functional_tests: vm_util.c
 $(OUTPUT)/madv_populate: vm_util.c
@@ -156,8 +156,8 @@ warn_32bit_failure:
 endif
 endif

-# ANON_COW_EXTRA_LIBS may get set in local_config.mk, or it may be left empty.
-$(OUTPUT)/anon_cow: LDLIBS += $(ANON_COW_EXTRA_LIBS)
+# COW_EXTRA_LIBS may get set in local_config.mk, or it may be left empty.
+$(OUTPUT)/cow: LDLIBS += $(COW_EXTRA_LIBS)

 $(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap

@@ -170,7 +170,7 @@ local_config.mk local_config.h: check_config.sh

 EXTRA_CLEAN += local_config.mk local_config.h

-ifeq ($(ANON_COW_EXTRA_LIBS),)
+ifeq ($(COW_EXTRA_LIBS),)
 all: warn_missing_liburing

 warn_missing_liburing:
diff --git a/tools/testing/selftests/vm/check_config.sh b/tools/testing/selftests/vm/check_config.sh
index 9a44c6520925..bcba3af0acea 100644
--- a/tools/testing/selftests/vm/check_config.sh
+++ b/tools/testing/selftests/vm/check_config.sh
@@ -21,11 +21,11 @@ $CC -c $tmpfile_c -o $tmpfile_o >/dev/null 2>&1

 if [ -f $tmpfile_o ]; then
         echo "#define LOCAL_CONFIG_HAVE_LIBURING 1" > $OUTPUT_H_FILE
-        echo "ANON_COW_EXTRA_LIBS = -luring" > $OUTPUT_MKFILE
+        echo "COW_EXTRA_LIBS = -luring" > $OUTPUT_MKFILE
 else
         echo "// No liburing support found" > $OUTPUT_H_FILE
         echo "# No liburing support found, so:" > $OUTPUT_MKFILE
-        echo "ANON_COW_EXTRA_LIBS = " >> $OUTPUT_MKFILE
+        echo "COW_EXTRA_LIBS = " >> $OUTPUT_MKFILE
 fi

 rm ${tmpname}.*
diff --git a/tools/testing/selftests/vm/anon_cow.c b/tools/testing/selftests/vm/cow.c
similarity index 97%
rename from tools/testing/selftests/vm/anon_cow.c
rename to tools/testing/selftests/vm/cow.c
index bbb251eb5025..d202bfd63585 100644
--- a/tools/testing/selftests/vm/anon_cow.c
+++ b/tools/testing/selftests/vm/cow.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * COW (Copy On Write) tests for anonymous memory.
+ * COW (Copy On Write) tests.
  *
  * Copyright 2022, Red Hat, Inc.
  *
@@ -986,7 +986,11 @@ struct test_case {
         test_fn fn;
 };

-static const struct test_case test_cases[] = {
+/*
+ * Test cases that are specific to anonymous pages: pages in private mappings
+ * that may get shared via COW during fork().
+ */
+static const struct test_case anon_test_cases[] = {
         /*
          * Basic COW tests for fork() without any GUP. If we miss to break COW,
          * either the child can observe modifications by the parent or the
@@ -1104,7 +1108,7 @@ static const struct test_case test_cases[] = {
         },
 };

-static void run_test_case(struct test_case const *test_case)
+static void run_anon_test_case(struct test_case const *test_case)
 {
         int i;

@@ -1125,15 +1129,17 @@ static void run_test_case(struct test_case const *test_case)
                                hugetlbsizes[i]);
 }

-static void run_test_cases(void)
+static void run_anon_test_cases(void)
 {
         int i;

-        for (i = 0; i < ARRAY_SIZE(test_cases); i++)
-                run_test_case(&test_cases[i]);
+        ksft_print_msg("[INFO] Anonymous memory tests in private mappings\n");
+
+        for (i = 0; i < ARRAY_SIZE(anon_test_cases); i++)
+                run_anon_test_case(&anon_test_cases[i]);
 }

-static int tests_per_test_case(void)
+static int tests_per_anon_test_case(void)
 {
         int tests = 2 + nr_hugetlbsizes;

@@ -1144,7 +1150,6 @@ static int tests_per_test_case(void)

 int main(int argc, char **argv)
 {
-        int nr_test_cases = ARRAY_SIZE(test_cases);
         int err;

         pagesize = getpagesize();
@@ -1152,14 +1157,14 @@ int main(int argc, char **argv)
         detect_hugetlbsizes();

         ksft_print_header();
-        ksft_set_plan(nr_test_cases * tests_per_test_case());
+        ksft_set_plan(ARRAY_SIZE(anon_test_cases) * tests_per_anon_test_case());

         gup_fd = open("/sys/kernel/debug/gup_test", O_RDWR);
         pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
         if (pagemap_fd < 0)
                 ksft_exit_fail_msg("opening pagemap failed\n");

-        run_test_cases();
+        run_anon_test_cases();

         err = ksft_get_fail_cnt();
         if (err)
diff --git a/tools/testing/selftests/vm/run_vmtests.sh b/tools/testing/selftests/vm/run_vmtests.sh
index ce52e4f5ff21..71744b9002d0 100755
--- a/tools/testing/selftests/vm/run_vmtests.sh
+++ b/tools/testing/selftests/vm/run_vmtests.sh
@@ -50,8 +50,8 @@ separated by spaces:
         memory protection key tests
 - soft_dirty
         test soft dirty page bit semantics
-- anon_cow
-        test anonymous copy-on-write semantics
+- cow
+        test copy-on-write semantics
 example: ./run_vmtests.sh -t "hmm mmap ksm"
 EOF
         exit 0
@@ -267,7 +267,7 @@ fi

 CATEGORY="soft_dirty" run_test ./soft-dirty

-# COW tests for anonymous memory
-CATEGORY="anon_cow" run_test ./anon_cow
+# COW tests
+CATEGORY="cow" run_test ./cow

 exit $exitcode
On 11/16/22 11:26, David Hildenbrand wrote:
> Originally, the plan was to have separate tests for testing COW of
> non-anonymous pages (e.g., the shared zeropage).
>
> Turns out that we'd need a lot of similar functionality and that there
> isn't a really good reason to separate them. So let's prepare for
> non-anon tests by renaming to "cow".
>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Let's add basic tests for COW with non-anonymous pages in private mappings: write access should properly trigger COW and result in the private changes not being visible through other page mappings.
In particular, add tests for:
* the shared zeropage
* the huge shared zeropage
* ordinary pagecache pages via memfd and tmpfile()
* hugetlb pages via memfd
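The core of, e.g., the memfd variant boils down to the following standalone sketch (illustration only, not the selftest code itself):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        size_t pagesize = getpagesize();
        int fd = memfd_create("test", 0);
        char *mem, *smem;

        /* File consists of a single page filled with zeroes. */
        if (fd < 0 || fallocate(fd, 0, 0, pagesize))
                return 1;

        /* Private ("COW") mapping and shared mapping of the same page. */
        mem = mmap(NULL, pagesize, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
        smem = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED || smem == MAP_FAILED)
                return 1;

        /* Writing via the private mapping must trigger COW ... */
        memset(mem, 0xff, pagesize);

        /* ... so the shared mapping must still observe the old zeroes. */
        printf("COW %s\n", smem[0] == 0 ? "works" : "is broken");
        return 0;
}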
Fortunately, all tests pass.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 tools/testing/selftests/vm/cow.c | 338 ++++++++++++++++++++++++++++++-
 1 file changed, 337 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/vm/cow.c b/tools/testing/selftests/vm/cow.c
index d202bfd63585..fb07bd44529c 100644
--- a/tools/testing/selftests/vm/cow.c
+++ b/tools/testing/selftests/vm/cow.c
@@ -19,6 +19,7 @@
 #include <sys/mman.h>
 #include <sys/ioctl.h>
 #include <sys/wait.h>
+#include <linux/memfd.h>

 #include "local_config.h"
 #ifdef LOCAL_CONFIG_HAVE_LIBURING
@@ -35,6 +36,7 @@ static size_t thpsize;
 static int nr_hugetlbsizes;
 static size_t hugetlbsizes[10];
 static int gup_fd;
+static bool has_huge_zeropage;

 static void detect_thpsize(void)
 {
@@ -64,6 +66,31 @@ static void detect_thpsize(void)
         close(fd);
 }

+static void detect_huge_zeropage(void)
+{
+        int fd = open("/sys/kernel/mm/transparent_hugepage/use_zero_page",
+                      O_RDONLY);
+        size_t enabled = 0;
+        char buf[15];
+        int ret;
+
+        if (fd < 0)
+                return;
+
+        ret = pread(fd, buf, sizeof(buf), 0);
+        if (ret > 0 && ret < sizeof(buf)) {
+                buf[ret] = 0;
+
+                enabled = strtoul(buf, NULL, 10);
+                if (enabled == 1) {
+                        has_huge_zeropage = true;
+                        ksft_print_msg("[INFO] huge zeropage is enabled\n");
+                }
+        }
+
+        close(fd);
+}
+
 static void detect_hugetlbsizes(void)
 {
         DIR *dir = opendir("/sys/kernel/mm/hugepages/");
@@ -1148,6 +1175,312 @@ static int tests_per_anon_test_case(void)
         return tests;
 }

+typedef void (*non_anon_test_fn)(char *mem, const char *smem, size_t size);
+
+static void test_cow(char *mem, const char *smem, size_t size)
+{
+        char *old = malloc(size);
+
+        /* Backup the original content. */
+        memcpy(old, smem, size);
+
+        /* Modify the page. */
+        memset(mem, 0xff, size);
+
+        /* See if we still read the old values via the other mapping. */
+        ksft_test_result(!memcmp(smem, old, size),
+                         "Other mapping not modified\n");
+        free(old);
+}
+
+static void run_with_zeropage(non_anon_test_fn fn, const char *desc)
+{
+        char *mem, *smem, tmp;
+
+        ksft_print_msg("[RUN] %s ... with shared zeropage\n", desc);
+
+        mem = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
+                   MAP_PRIVATE | MAP_ANON, -1, 0);
+        if (mem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                return;
+        }
+
+        smem = mmap(NULL, pagesize, PROT_READ, MAP_PRIVATE | MAP_ANON, -1, 0);
+        if (smem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                goto munmap;
+        }
+
+        /* Read from the page to populate the shared zeropage. */
+        tmp = *mem + *smem;
+        asm volatile("" : "+r" (tmp));
+
+        fn(mem, smem, pagesize);
+munmap:
+        munmap(mem, pagesize);
+        if (smem != MAP_FAILED)
+                munmap(smem, pagesize);
+}
+
+static void run_with_huge_zeropage(non_anon_test_fn fn, const char *desc)
+{
+        char *mem, *smem, *mmap_mem, *mmap_smem, tmp;
+        size_t mmap_size;
+        int ret;
+
+        ksft_print_msg("[RUN] %s ... with huge zeropage\n", desc);
+
+        if (!has_huge_zeropage) {
+                ksft_test_result_skip("Huge zeropage not enabled\n");
+                return;
+        }
+
+        /* For alignment purposes, we need twice the thp size. */
+        mmap_size = 2 * thpsize;
+        mmap_mem = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
+                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+        if (mmap_mem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                return;
+        }
+        mmap_smem = mmap(NULL, mmap_size, PROT_READ,
+                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+        if (mmap_smem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                goto munmap;
+        }
+
+        /* We need a THP-aligned memory area. */
+        mem = (char *)(((uintptr_t)mmap_mem + thpsize) & ~(thpsize - 1));
+        smem = (char *)(((uintptr_t)mmap_smem + thpsize) & ~(thpsize - 1));
+
+        ret = madvise(mem, thpsize, MADV_HUGEPAGE);
+        ret |= madvise(smem, thpsize, MADV_HUGEPAGE);
+        if (ret) {
+                ksft_test_result_fail("MADV_HUGEPAGE failed\n");
+                goto munmap;
+        }
+
+        /*
+         * Read from the memory to populate the huge shared zeropage. Read from
+         * the first sub-page and test if we get another sub-page populated
+         * automatically.
+         */
+        tmp = *mem + *smem;
+        asm volatile("" : "+r" (tmp));
+        if (!pagemap_is_populated(pagemap_fd, mem + pagesize) ||
+            !pagemap_is_populated(pagemap_fd, smem + pagesize)) {
+                ksft_test_result_skip("Did not get THPs populated\n");
+                goto munmap;
+        }
+
+        fn(mem, smem, thpsize);
+munmap:
+        munmap(mmap_mem, mmap_size);
+        if (mmap_smem != MAP_FAILED)
+                munmap(mmap_smem, mmap_size);
+}
+
+static void run_with_memfd(non_anon_test_fn fn, const char *desc)
+{
+        char *mem, *smem, tmp;
+        int fd;
+
+        ksft_print_msg("[RUN] %s ... with memfd\n", desc);
+
+        fd = memfd_create("test", 0);
+        if (fd < 0) {
+                ksft_test_result_fail("memfd_create() failed\n");
+                return;
+        }
+
+        /* File consists of a single page filled with zeroes. */
+        if (fallocate(fd, 0, 0, pagesize)) {
+                ksft_test_result_fail("fallocate() failed\n");
+                goto close;
+        }
+
+        /* Create a private mapping of the memfd. */
+        mem = mmap(NULL, pagesize, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
+        if (mem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                goto close;
+        }
+        smem = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, fd, 0);
+        if (smem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                goto munmap;
+        }
+
+        /* Fault the page in. */
+        tmp = *mem + *smem;
+        asm volatile("" : "+r" (tmp));
+
+        fn(mem, smem, pagesize);
+munmap:
+        munmap(mem, pagesize);
+        if (smem != MAP_FAILED)
+                munmap(smem, pagesize);
+close:
+        close(fd);
+}
+
+static void run_with_tmpfile(non_anon_test_fn fn, const char *desc)
+{
+        char *mem, *smem, tmp;
+        FILE *file;
+        int fd;
+
+        ksft_print_msg("[RUN] %s ... with tmpfile\n", desc);
+
+        file = tmpfile();
+        if (!file) {
+                ksft_test_result_fail("tmpfile() failed\n");
+                return;
+        }
+
+        fd = fileno(file);
+        if (fd < 0) {
+                ksft_test_result_skip("fileno() failed\n");
+                return;
+        }
+
+        /* File consists of a single page filled with zeroes. */
+        if (fallocate(fd, 0, 0, pagesize)) {
+                ksft_test_result_fail("fallocate() failed\n");
+                goto close;
+        }
+
+        /* Create a private mapping of the memfd. */
+        mem = mmap(NULL, pagesize, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
+        if (mem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                goto close;
+        }
+        smem = mmap(NULL, pagesize, PROT_READ, MAP_SHARED, fd, 0);
+        if (smem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                goto munmap;
+        }
+
+        /* Fault the page in. */
+        tmp = *mem + *smem;
+        asm volatile("" : "+r" (tmp));
+
+        fn(mem, smem, pagesize);
+munmap:
+        munmap(mem, pagesize);
+        if (smem != MAP_FAILED)
+                munmap(smem, pagesize);
+close:
+        fclose(file);
+}
+
+static void run_with_memfd_hugetlb(non_anon_test_fn fn, const char *desc,
+                                   size_t hugetlbsize)
+{
+        int flags = MFD_HUGETLB;
+        char *mem, *smem, tmp;
+        int fd;
+
+        ksft_print_msg("[RUN] %s ... with memfd hugetlb (%zu kB)\n", desc,
+                       hugetlbsize / 1024);
+
+        flags |= __builtin_ctzll(hugetlbsize) << MFD_HUGE_SHIFT;
+
+        fd = memfd_create("test", flags);
+        if (fd < 0) {
+                ksft_test_result_skip("memfd_create() failed\n");
+                return;
+        }
+
+        /* File consists of a single page filled with zeroes. */
+        if (fallocate(fd, 0, 0, hugetlbsize)) {
+                ksft_test_result_skip("need more free huge pages\n");
+                goto close;
+        }
+
+        /* Create a private mapping of the memfd. */
+        mem = mmap(NULL, hugetlbsize, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd,
+                   0);
+        if (mem == MAP_FAILED) {
+                ksft_test_result_skip("need more free huge pages\n");
+                goto close;
+        }
+        smem = mmap(NULL, hugetlbsize, PROT_READ, MAP_SHARED, fd, 0);
+        if (smem == MAP_FAILED) {
+                ksft_test_result_fail("mmap() failed\n");
+                goto munmap;
+        }
+
+        /* Fault the page in. */
+        tmp = *mem + *smem;
+        asm volatile("" : "+r" (tmp));
+
+        fn(mem, smem, hugetlbsize);
+munmap:
+        munmap(mem, hugetlbsize);
+        if (smem != MAP_FAILED)
+                munmap(smem, hugetlbsize);
+close:
+        close(fd);
+}
+
+struct non_anon_test_case {
+        const char *desc;
+        non_anon_test_fn fn;
+};
+
+/*
+ * Test cases that target any pages in private mappings that are non anonymous:
+ * pages that may get shared via COW independent of fork(). This includes
+ * the shared zeropage(s), pagecache pages, ...
+ */
+static const struct non_anon_test_case non_anon_test_cases[] = {
+        /*
+         * Basic COW test without any GUP. If we miss to break COW, changes are
+         * visible via other private/shared mappings.
+         */
+        {
+                "Basic COW",
+                test_cow,
+        },
+};
+
+static void run_non_anon_test_case(struct non_anon_test_case const *test_case)
+{
+        int i;
+
+        run_with_zeropage(test_case->fn, test_case->desc);
+        run_with_memfd(test_case->fn, test_case->desc);
+        run_with_tmpfile(test_case->fn, test_case->desc);
+        if (thpsize)
+                run_with_huge_zeropage(test_case->fn, test_case->desc);
+        for (i = 0; i < nr_hugetlbsizes; i++)
+                run_with_memfd_hugetlb(test_case->fn, test_case->desc,
+                                       hugetlbsizes[i]);
+}
+
+static void run_non_anon_test_cases(void)
+{
+        int i;
+
+        ksft_print_msg("[RUN] Non-anonymous memory tests in private mappings\n");
+
+        for (i = 0; i < ARRAY_SIZE(non_anon_test_cases); i++)
+                run_non_anon_test_case(&non_anon_test_cases[i]);
+}
+
+static int tests_per_non_anon_test_case(void)
+{
+        int tests = 3 + nr_hugetlbsizes;
+
+        if (thpsize)
+                tests += 1;
+        return tests;
+}
+
 int main(int argc, char **argv)
 {
         int err;
@@ -1155,9 +1488,11 @@ int main(int argc, char **argv)
         pagesize = getpagesize();
         detect_thpsize();
         detect_hugetlbsizes();
+        detect_huge_zeropage();

         ksft_print_header();
-        ksft_set_plan(ARRAY_SIZE(anon_test_cases) * tests_per_anon_test_case());
+        ksft_set_plan(ARRAY_SIZE(anon_test_cases) * tests_per_anon_test_case() +
+                      ARRAY_SIZE(non_anon_test_cases) * tests_per_non_anon_test_case());

         gup_fd = open("/sys/kernel/debug/gup_test", O_RDWR);
         pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
@@ -1165,6 +1500,7 @@ int main(int argc, char **argv)
                 ksft_exit_fail_msg("opening pagemap failed\n");

         run_anon_test_cases();
+        run_non_anon_test_cases();

         err = ksft_get_fail_cnt();
         if (err)
Let's test whether R/O long-term pinning is reliable for non-anonymous memory: when R/O long-term pinning a page, the expectation is that we break COW early before pinning, such that actual write access via the page tables won't break COW later and end up replacing the R/O-pinned page in the page table.
Consequently, R/O long-term pinning in private mappings would only target exclusive anonymous pages.
For now, all tests fail:
# [RUN] R/O longterm GUP pin ... with shared zeropage
not ok 151 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd
not ok 152 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with tmpfile
not ok 153 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with huge zeropage
not ok 154 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd hugetlb (2048 kB)
not ok 155 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd hugetlb (1048576 kB)
not ok 156 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with shared zeropage
not ok 157 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd
not ok 158 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with tmpfile
not ok 159 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with huge zeropage
not ok 160 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (2048 kB)
not ok 161 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (1048576 kB)
not ok 162 Longterm R/O pin is reliable
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 tools/testing/selftests/vm/cow.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/vm/cow.c b/tools/testing/selftests/vm/cow.c
index fb07bd44529c..73e05b52c49e 100644
--- a/tools/testing/selftests/vm/cow.c
+++ b/tools/testing/selftests/vm/cow.c
@@ -561,6 +561,7 @@ static void test_iouring_fork(char *mem, size_t size)
 #endif /* LOCAL_CONFIG_HAVE_LIBURING */

 enum ro_pin_test {
+        RO_PIN_TEST,
         RO_PIN_TEST_SHARED,
         RO_PIN_TEST_PREVIOUSLY_SHARED,
         RO_PIN_TEST_RO_EXCLUSIVE,
@@ -593,6 +594,8 @@ static void do_test_ro_pin(char *mem, size_t size, enum ro_pin_test test,
         }

         switch (test) {
+        case RO_PIN_TEST:
+                break;
         case RO_PIN_TEST_SHARED:
         case RO_PIN_TEST_PREVIOUSLY_SHARED:
                 /*
@@ -1193,6 +1196,16 @@ static void test_cow(char *mem, const char *smem, size_t size)
         free(old);
 }

+static void test_ro_pin(char *mem, const char *smem, size_t size)
+{
+        do_test_ro_pin(mem, size, RO_PIN_TEST, false);
+}
+
+static void test_ro_fast_pin(char *mem, const char *smem, size_t size)
+{
+        do_test_ro_pin(mem, size, RO_PIN_TEST, true);
+}
+
 static void run_with_zeropage(non_anon_test_fn fn, const char *desc)
 {
         char *mem, *smem, tmp;
@@ -1433,7 +1446,7 @@ struct non_anon_test_case {
 };

 /*
- * Test cases that target any pages in private mappings that are non anonymous:
+ * Test cases that target any pages in private mappings that are not anonymous:
  * pages that may get shared via COW independent of fork(). This includes
  * the shared zeropage(s), pagecache pages, ...
  */
@@ -1446,6 +1459,19 @@ static const struct non_anon_test_case non_anon_test_cases[] = {
                 "Basic COW",
                 test_cow,
         },
+        /*
+         * Take a R/O longterm pin. When modifying the page via the page table,
+         * the page content change must be visible via the pin.
+         */
+        {
+                "R/O longterm GUP pin",
+                test_ro_pin,
+        },
+        /* Same as above, but using GUP-fast. */
+        {
+                "R/O longterm GUP-fast pin",
+                test_ro_fast_pin,
+        },
 };

 static void run_non_anon_test_case(struct non_anon_test_case const *test_case)
For now, FAULT_FLAG_UNSHARE only applies to anonymous pages, which implies a COW mapping. Let's hide FAULT_FLAG_UNSHARE early if we're not dealing with a COW mapping, such that we treat it like a read fault as documented and don't have to worry about the flag throughout all fault handlers.
While at it, centralize the check for mutual exclusion of FAULT_FLAG_UNSHARE and FAULT_FLAG_WRITE and just drop the check that either flag is set in the WP handler.
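For reference, a "COW mapping" is identified via is_cow_mapping() in include/linux/mm.h: a private mapping that may become writable:

static inline bool is_cow_mapping(vm_flags_t flags)
{
        return (flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
}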
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c |  3 ---
 mm/hugetlb.c     |  5 -----
 mm/memory.c      | 23 ++++++++++++++++++++---
 3 files changed, 20 insertions(+), 11 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ed12cd3acbfd..68d00196b519 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1267,9 +1267,6 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
         vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
         VM_BUG_ON_VMA(!vma->anon_vma, vma);

-        VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE));
-        VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE));
-
         if (is_huge_zero_pmd(orig_pmd))
                 goto fallback;

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1de986c62976..383b26069b33 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5314,9 +5314,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
         unsigned long haddr = address & huge_page_mask(h);
         struct mmu_notifier_range range;

-        VM_BUG_ON(unshare && (flags & FOLL_WRITE));
-        VM_BUG_ON(!unshare && !(flags & FOLL_WRITE));
-
         /*
          * hugetlb does not support FOLL_FORCE-style write faults that keep the
          * PTE mapped R/O such as maybe_mkwrite() would do.
@@ -5326,8 +5323,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,

         /* Let's take out MAP_SHARED mappings first. */
         if (vma->vm_flags & VM_MAYSHARE) {
-                if (unlikely(unshare))
-                        return 0;
                 set_huge_ptep_writable(vma, haddr, ptep);
                 return 0;
         }
diff --git a/mm/memory.c b/mm/memory.c
index 2d453736f87c..e014435a87db 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3344,9 +3344,6 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
         struct vm_area_struct *vma = vmf->vma;
         struct folio *folio;

-        VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE));
-        VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE));
-
         if (likely(!unshare)) {
                 if (userfaultfd_pte_wp(vma, *vmf->pte)) {
                         pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -5161,6 +5158,22 @@ static void lru_gen_exit_fault(void)
 }
 #endif /* CONFIG_LRU_GEN */

+static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
+                                       unsigned int *flags)
+{
+        if (unlikely(*flags & FAULT_FLAG_UNSHARE)) {
+                if (WARN_ON_ONCE(*flags & FAULT_FLAG_WRITE))
+                        return VM_FAULT_SIGSEGV;
+                /*
+                 * FAULT_FLAG_UNSHARE only applies to COW mappings. Let's
+                 * just treat it like an ordinary read-fault otherwise.
+                 */
+                if (!is_cow_mapping(vma->vm_flags))
+                        *flags &= ~FAULT_FLAG_UNSHARE;
+        }
+        return 0;
+}
+
 /*
  * By the time we get here, we already hold the mm semaphore
  *
@@ -5177,6 +5190,10 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
         count_vm_event(PGFAULT);
         count_memcg_event_mm(vma->vm_mm, PGFAULT);

+        ret = sanitize_fault_flags(vma, &flags);
+        if (ret)
+                return ret;
+
         if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
                                             flags & FAULT_FLAG_INSTRUCTION,
                                             flags & FAULT_FLAG_REMOTE))
On 11/16/22 11:26, David Hildenbrand wrote:
> For now, FAULT_FLAG_UNSHARE only applies to anonymous pages, which
> implies a COW mapping. Let's hide FAULT_FLAG_UNSHARE early if we're not
> dealing with a COW mapping, such that we treat it like a read fault as
> documented and don't have to worry about the flag throughout all fault
> handlers.
>
> While at it, centralize the check for mutual exclusion of
> FAULT_FLAG_UNSHARE and FAULT_FLAG_WRITE and just drop the check that
> either flag is set in the WP handler.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/huge_memory.c |  3 ---
>  mm/hugetlb.c     |  5 -----
>  mm/memory.c      | 23 ++++++++++++++++++++---
>  3 files changed, 20 insertions(+), 11 deletions(-)

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Let's catch abuse of FAULT_FLAG_WRITE early, such that we don't have to care about it in all other handlers and don't get "surprises" if we forget to do so.
Write faults without VM_MAYWRITE don't make any sense, and our maybe_mkwrite() logic could have hidden such abuse for now.
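For reference, maybe_mkwrite() only actually marks the PTE writable when the VMA is writable, which is how such abuse would stay hidden (as defined in include/linux/mm.h at this point in time):

static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
{
        if (likely(vma->vm_flags & VM_WRITE))
                pte = pte_mkwrite(pte);
        return pte;
}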
Write faults without VM_WRITE on something that is not a COW mapping are similarly broken, and e.g., do_wp_page() could end up placing an anonymous page into a shared mapping, which would be bad.
This is a preparation for reliable R/O long-term pinning of pages in private mappings, whereby we want to make sure that we will never break COW in a read-only private mapping.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory.c | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index e014435a87db..c4fa378ec2a0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5170,6 +5170,14 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
                  */
                 if (!is_cow_mapping(vma->vm_flags))
                         *flags &= ~FAULT_FLAG_UNSHARE;
+        } else if (*flags & FAULT_FLAG_WRITE) {
+                /* Write faults on read-only mappings are impossible ... */
+                if (WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE)))
+                        return VM_FAULT_SIGSEGV;
+                /* ... and FOLL_FORCE only applies to COW mappings. */
+                if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE) &&
+                                 !is_cow_mapping(vma->vm_flags)))
+                        return VM_FAULT_SIGSEGV;
         }
         return 0;
 }
On 11/16/22 11:26, David Hildenbrand wrote:
> Let's catch abuse of FAULT_FLAG_WRITE early, such that we don't have to
> care about it in all other handlers and don't get "surprises" if we
> forget to do so.
>
> Write faults without VM_MAYWRITE don't make any sense, and our
> maybe_mkwrite() logic could have hidden such abuse for now.
>
> Write faults without VM_WRITE on something that is not a COW mapping
> are similarly broken, and e.g., do_wp_page() could end up placing an
> anonymous page into a shared mapping, which would be bad.
>
> This is a preparation for reliable R/O long-term pinning of pages in
> private mappings, whereby we want to make sure that we will never break
> COW in a read-only private mapping.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
We want to extend FAULT_FLAG_UNSHARE support to anything mapped into a COW mapping (pagecache page, zeropage, PFN, ...), not just anonymous pages. Let's prepare for that by handling shared mappings first such that we can handle private mappings last.
While at it, use folio-based functions instead of page-based functions where we touch the code either way.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory.c | 38 +++++++++++++++++---------------------
 1 file changed, 17 insertions(+), 21 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index c4fa378ec2a0..c35e6cd32b6a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3342,7 +3342,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 {
         const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
         struct vm_area_struct *vma = vmf->vma;
-        struct folio *folio;
+        struct folio *folio = NULL;

         if (likely(!unshare)) {
                 if (userfaultfd_pte_wp(vma, *vmf->pte)) {
@@ -3360,13 +3360,12 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
         }

         vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
-        if (!vmf->page) {
-                if (unlikely(unshare)) {
-                        /* No anonymous page -> nothing to do. */
-                        pte_unmap_unlock(vmf->pte, vmf->ptl);
-                        return 0;
-                }

+        /*
+         * Shared mapping: we are guaranteed to have VM_WRITE and
+         * FAULT_FLAG_WRITE set at this point.
+         */
+        if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) {
                 /*
                  * VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a
                  * VM_PFNMAP VMA.
@@ -3374,20 +3373,19 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
                  * We should not cow pages in a shared writeable mapping.
                  * Just mark the pages writable and/or call ops->pfn_mkwrite.
                  */
-                if ((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
-                    (VM_WRITE|VM_SHARED))
+                if (!vmf->page)
                         return wp_pfn_shared(vmf);
-
-                pte_unmap_unlock(vmf->pte, vmf->ptl);
-                return wp_page_copy(vmf);
+                return wp_page_shared(vmf);
         }

+        if (vmf->page)
+                folio = page_folio(vmf->page);
+
         /*
-         * Take out anonymous pages first, anonymous shared vmas are
-         * not dirty accountable.
+         * Private mapping: create an exclusive anonymous page copy if reuse
+         * is impossible. We might miss VM_WRITE for FOLL_FORCE handling.
          */
-        folio = page_folio(vmf->page);
-        if (folio_test_anon(folio)) {
+        if (folio && folio_test_anon(folio)) {
                 /*
                  * If the page is exclusive to this process we must reuse the
                  * page without further checks.
@@ -3438,19 +3436,17 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
                 /* No anonymous page -> nothing to do. */
                 pte_unmap_unlock(vmf->pte, vmf->ptl);
                 return 0;
-        } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
-                            (VM_WRITE|VM_SHARED))) {
-                return wp_page_shared(vmf);
         }
copy:
         /*
          * Ok, we need to copy. Oh, well..
          */
-        get_page(vmf->page);
+        if (folio)
+                folio_get(folio);

         pte_unmap_unlock(vmf->pte, vmf->ptl);
 #ifdef CONFIG_KSM
-        if (PageKsm(vmf->page))
+        if (folio && folio_test_ksm(folio))
                 count_vm_event(COW_KSM);
 #endif
         return wp_page_copy(vmf);
On 11/16/22 11:26, David Hildenbrand wrote:
> We want to extend FAULT_FLAG_UNSHARE support to anything mapped into a
> COW mapping (pagecache page, zeropage, PFN, ...), not just anonymous
> pages. Let's prepare for that by handling shared mappings first such
> that we can handle private mappings last.
>
> While at it, use folio-based functions instead of page-based functions
> where we touch the code either way.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
If we already have a PMD/PUD mapped write-protected in a private mapping and we want to break COW either due to FAULT_FLAG_WRITE or FAULT_FLAG_UNSHARE, there is no need to inform the file system just like on the PTE path.
Let's just split (->zap) + fallback in that case.
This is a preparation for more generic FAULT_FLAG_UNSHARE support in COW mappings.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index c35e6cd32b6a..d47ad33c6487 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4802,6 +4802,7 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
 static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
 {
         const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
+        vm_fault_t ret;

         if (vma_is_anonymous(vmf->vma)) {
                 if (likely(!unshare) &&
@@ -4809,11 +4810,13 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
                         return handle_userfault(vmf, VM_UFFD_WP);
                 return do_huge_pmd_wp_page(vmf);
         }
-        if (vmf->vma->vm_ops->huge_fault) {
-                vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);

-                if (!(ret & VM_FAULT_FALLBACK))
-                        return ret;
+        if (vmf->vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) {
+                if (vmf->vma->vm_ops->huge_fault) {
+                        ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
+                        if (!(ret & VM_FAULT_FALLBACK))
+                                return ret;
+                }
         }

         /* COW or write-notify handled on pte level: split pmd. */
@@ -4839,14 +4842,17 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
 {
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&                     \
         defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
+        vm_fault_t ret;
+
         /* No support for anonymous transparent PUD pages yet */
         if (vma_is_anonymous(vmf->vma))
                 goto split;
-        if (vmf->vma->vm_ops->huge_fault) {
-                vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD);
-
-                if (!(ret & VM_FAULT_FALLBACK))
-                        return ret;
+        if (vmf->vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) {
+                if (vmf->vma->vm_ops->huge_fault) {
+                        ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD);
+                        if (!(ret & VM_FAULT_FALLBACK))
+                                return ret;
+                }
+        }
 split:
         /* COW or write-notify not handled on PUD level: split pud.*/
On 11/16/22 11:26, David Hildenbrand wrote:
> If we already have a PMD/PUD mapped write-protected in a private
> mapping and we want to break COW either due to FAULT_FLAG_WRITE or
> FAULT_FLAG_UNSHARE, there is no need to inform the file system just
> like on the PTE path.
>
> Let's just split (->zap) + fallback in that case.
>
> This is a preparation for more generic FAULT_FLAG_UNSHARE support in
> COW mappings.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Nits:
>  mm/memory.c | 24 +++++++++++++++---------
>  1 file changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index c35e6cd32b6a..d47ad33c6487 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4802,6 +4802,7 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
>  static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
>  {
>          const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
> +        vm_fault_t ret;
>
>          if (vma_is_anonymous(vmf->vma)) {
>                  if (likely(!unshare) &&
> @@ -4809,11 +4810,13 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
>                          return handle_userfault(vmf, VM_UFFD_WP);
>                  return do_huge_pmd_wp_page(vmf);
>          }
> -        if (vmf->vma->vm_ops->huge_fault) {
> -                vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
> -
> -                if (!(ret & VM_FAULT_FALLBACK))
> -                        return ret;
> +
> +        if (vmf->vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) {
> +                if (vmf->vma->vm_ops->huge_fault) {
I guess it could have been a single if with && and the reduced indentation could fit keeping the 'ret' declaration inside. AFAICS the later patches don't build more on top of this anyway. But also fine keeping it as is.
(the hunk below same)
> +                        ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
> +                        if (!(ret & VM_FAULT_FALLBACK))
> +                                return ret;
> +                }
> +        }
>
>          /* COW or write-notify handled on pte level: split pmd. */
> @@ -4839,14 +4842,17 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
>  {
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&                     \
>          defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> +        vm_fault_t ret;
> +
>          /* No support for anonymous transparent PUD pages yet */
>          if (vma_is_anonymous(vmf->vma))
>                  goto split;
> -        if (vmf->vma->vm_ops->huge_fault) {
> -                vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD);
> -
> -                if (!(ret & VM_FAULT_FALLBACK))
> -                        return ret;
> +        if (vmf->vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) {
> +                if (vmf->vma->vm_ops->huge_fault) {
> +                        ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD);
> +                        if (!(ret & VM_FAULT_FALLBACK))
> +                                return ret;
> +                }
> +        }
>  split:
>          /* COW or write-notify not handled on PUD level: split pud.*/
Extend FAULT_FLAG_UNSHARE to break COW on anything mapped into a COW (i.e., private writable) mapping and adjust the documentation accordingly.
FAULT_FLAG_UNSHARE will now also break COW when encountering the shared zeropage, a pagecache page, a PFNMAP, ... inside a COW mapping, by properly replacing the mapped page/pfn by a private copy (an exclusive anonymous page).
Note that only do_wp_page() needs care: hugetlb_wp() already handles FAULT_FLAG_UNSHARE correctly. wp_huge_pmd()/wp_huge_pud() also handle it correctly, for example, splitting the huge zeropage on FAULT_FLAG_UNSHARE such that we can handle FAULT_FLAG_UNSHARE on the PTE level.
This change is a requirement for reliable long-term R/O pinning in COW mappings.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm_types.h | 8 ++++----
 mm/memory.c              | 4 ----
 2 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5e7f4fac1e78..5e9aaad8c7b2 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1037,9 +1037,9 @@ typedef struct {
  * @FAULT_FLAG_REMOTE: The fault is not for current task/mm.
  * @FAULT_FLAG_INSTRUCTION: The fault was during an instruction fetch.
  * @FAULT_FLAG_INTERRUPTIBLE: The fault can be interrupted by non-fatal signals.
- * @FAULT_FLAG_UNSHARE: The fault is an unsharing request to unshare (and mark
- *                      exclusive) a possibly shared anonymous page that is
- *                      mapped R/O.
+ * @FAULT_FLAG_UNSHARE: The fault is an unsharing request to break COW in a
+ *                      COW mapping, making sure that an exclusive anon page is
+ *                      mapped after the fault.
  * @FAULT_FLAG_ORIG_PTE_VALID: whether the fault has vmf->orig_pte cached.
  *                        We should only access orig_pte if this flag set.
  *
@@ -1064,7 +1064,7 @@ typedef struct {
  *
  * The combination FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE is illegal.
  * FAULT_FLAG_UNSHARE is ignored and treated like an ordinary read fault when
- * no existing R/O-mapped anonymous page is encountered.
+ * applied to mappings that are not COW mappings.
  */
 enum fault_flag {
         FAULT_FLAG_WRITE = 1 << 0,
diff --git a/mm/memory.c b/mm/memory.c
index d47ad33c6487..56b21ab1e4d2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3432,10 +3432,6 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
                 }
                 wp_page_reuse(vmf);
                 return 0;
-        } else if (unshare) {
-                /* No anonymous page -> nothing to do. */
-                pte_unmap_unlock(vmf->pte, vmf->ptl);
-                return 0;
         }
copy:
         /*
On 11/16/22 11:26, David Hildenbrand wrote:
> Extend FAULT_FLAG_UNSHARE to break COW on anything mapped into a COW
> (i.e., private writable) mapping and adjust the documentation
> accordingly.
>
> FAULT_FLAG_UNSHARE will now also break COW when encountering the shared
> zeropage, a pagecache page, a PFNMAP, ... inside a COW mapping, by
> properly replacing the mapped page/pfn by a private copy (an exclusive
> anonymous page).
>
> Note that only do_wp_page() needs care: hugetlb_wp() already handles
> FAULT_FLAG_UNSHARE correctly. wp_huge_pmd()/wp_huge_pud() also handle
> it correctly, for example, splitting the huge zeropage on
> FAULT_FLAG_UNSHARE such that we can handle FAULT_FLAG_UNSHARE on the
> PTE level.
>
> This change is a requirement for reliable long-term R/O pinning in
> COW mappings.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
We already support reliable R/O pinning of anonymous memory. However, assume we end up pinning (R/O long-term) a pagecache page or the shared zeropage inside a writable private ("COW") mapping. The next write access will trigger a write-fault and replace the pinned page by an exclusive anonymous page in the process page tables to break COW: the pinned page no longer corresponds to the page mapped into the process' page table.
Now that FAULT_FLAG_UNSHARE can break COW on anything mapped into a COW mapping, let's properly break COW first before R/O long-term pinning something that's not an exclusive anon page inside a COW mapping. FAULT_FLAG_UNSHARE will break COW and map an exclusive anon page instead that can get pinned safely.
With this change, we can stop using FOLL_FORCE|FOLL_WRITE for reliable R/O long-term pinning in COW mappings.
With this change, the new R/O long-term pinning tests for non-anonymous memory succeed:
# [RUN] R/O longterm GUP pin ... with shared zeropage
ok 151 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd
ok 152 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with tmpfile
ok 153 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with huge zeropage
ok 154 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd hugetlb (2048 kB)
ok 155 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd hugetlb (1048576 kB)
ok 156 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with shared zeropage
ok 157 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd
ok 158 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with tmpfile
ok 159 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with huge zeropage
ok 160 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (2048 kB)
ok 161 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (1048576 kB)
ok 162 Longterm R/O pin is reliable
Note 1: We don't care about short-term R/O pinning, because short-term pins have snapshot semantics: they are not supposed to observe modifications that happen after pinning.
As one example, assume we start direct I/O to read from a page and store page content into a file: modifications to page content after starting direct I/O are not guaranteed to end up in the file. So even if we'd pin the shared zeropage, the end result would be as expected -- getting zeroes stored to the file.
Note 2: For shared mappings we'll now always fall back to the slow path to look up the VMA when R/O long-term pinning. While that's the necessary price we have to pay right now, it's actually not that bad in practice: most FOLL_LONGTERM users already specify FOLL_WRITE, for example, along with FOLL_FORCE because they tried dealing with COW mappings correctly ...
Note 3: For users that use FOLL_LONGTERM right now without FOLL_WRITE, such as VFIO, we'd now no longer pin the shared zeropage. Instead, we'd populate exclusive anon pages that we can pin. There was a concern that this could affect the memlock limit of existing setups.
For example, a VM running with VFIO could run into the memlock limit and fail to run. However, we essentially had the same behavior already in commit 17839856fd58 ("gup: document and work around "COW can break either way" issue") which got merged into some enterprise distros, and there were not any such complaints. So most probably, we're fine.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h | 27 ++++++++++++++++++++++++---
 mm/gup.c           | 10 +++++-----
 mm/huge_memory.c   |  2 +-
 mm/hugetlb.c       |  7 ++++---
 4 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6bd2ee5872dd..e8cc838f42f9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3095,8 +3095,12 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
  * Must be called with the (sub)page that's actually referenced via the
  * page table entry, which might not necessarily be the head page for a
  * PTE-mapped THP.
+ *
+ * If the vma is NULL, we're coming from the GUP-fast path and might have
+ * to fallback to the slow path just to lookup the vma.
  */
-static inline bool gup_must_unshare(unsigned int flags, struct page *page)
+static inline bool gup_must_unshare(struct vm_area_struct *vma,
+                                    unsigned int flags, struct page *page)
 {
         /*
          * FOLL_WRITE is implicitly handled correctly as the page table entry
@@ -3109,8 +3113,25 @@ static inline bool gup_must_unshare(unsigned int flags, struct page *page)
          * Note: PageAnon(page) is stable until the page is actually getting
          * freed.
          */
-        if (!PageAnon(page))
-                return false;
+        if (!PageAnon(page)) {
+                /*
+                 * We only care about R/O long-term pinning: R/O short-term
+                 * pinning does not have the semantics to observe successive
+                 * changes through the process page tables.
+                 */
+                if (!(flags & FOLL_LONGTERM))
+                        return false;
+
+                /* We really need the vma ... */
+                if (!vma)
+                        return true;
+
+                /*
+                 * ... because we only care about writable private ("COW")
+                 * mappings where we have to break COW early.
+                 */
+                return is_cow_mapping(vma->vm_flags);
+        }

         /* Paired with a memory barrier in page_try_share_anon_rmap(). */
         if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
diff --git a/mm/gup.c b/mm/gup.c
index 5182abaaecde..01116699c863 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -578,7 +578,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
                 }
         }

-        if (!pte_write(pte) && gup_must_unshare(flags, page)) {
+        if (!pte_write(pte) && gup_must_unshare(vma, flags, page)) {
                 page = ERR_PTR(-EMLINK);
                 goto out;
         }
@@ -2338,7 +2338,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
                         goto pte_unmap;
                 }

-                if (!pte_write(pte) && gup_must_unshare(flags, page)) {
+                if (!pte_write(pte) && gup_must_unshare(NULL, flags, page)) {
                         gup_put_folio(folio, 1, flags);
                         goto pte_unmap;
                 }
@@ -2506,7 +2506,7 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
                 return 0;
         }

-        if (!pte_write(pte) && gup_must_unshare(flags, &folio->page)) {
+        if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
                 gup_put_folio(folio, refs, flags);
                 return 0;
         }
@@ -2572,7 +2572,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
                 return 0;
         }

-        if (!pmd_write(orig) && gup_must_unshare(flags, &folio->page)) {
+        if (!pmd_write(orig) && gup_must_unshare(NULL, flags, &folio->page)) {
                 gup_put_folio(folio, refs, flags);
                 return 0;
         }
@@ -2612,7 +2612,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
                 return 0;
         }

-        if (!pud_write(orig) && gup_must_unshare(flags, &folio->page)) {
+        if (!pud_write(orig) && gup_must_unshare(NULL, flags, &folio->page)) {
                 gup_put_folio(folio, refs, flags);
                 return 0;
         }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 68d00196b519..dec7a7c0eca8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1434,7 +1434,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
         if (pmd_protnone(*pmd) && !gup_can_follow_protnone(flags))
                 return NULL;

-        if (!pmd_write(*pmd) && gup_must_unshare(flags, page))
+        if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page))
                 return ERR_PTR(-EMLINK);

         VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 383b26069b33..c3aab6d5b7aa 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6195,7 +6195,8 @@ static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
         }
 }

-static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte,
+static inline bool __follow_hugetlb_must_fault(struct vm_area_struct *vma,
+                                               unsigned int flags, pte_t *pte,
                                                bool *unshare)
 {
         pte_t pteval = huge_ptep_get(pte);
@@ -6207,7 +6208,7 @@ static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte,
                 return false;
         if (flags & FOLL_WRITE)
                 return true;
-        if (gup_must_unshare(flags, pte_page(pteval))) {
+        if (gup_must_unshare(vma, flags, pte_page(pteval))) {
                 *unshare = true;
                 return true;
         }
@@ -6336,7 +6337,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
          * directly from any kind of swap entries.
          */
         if (absent ||
-            __follow_hugetlb_must_fault(flags, pte, &unshare)) {
+            __follow_hugetlb_must_fault(vma, flags, pte, &unshare)) {
                 vm_fault_t ret;
                 unsigned int fault_flags = 0;
On Wed, Nov 16, 2022 at 11:26:48AM +0100, David Hildenbrand wrote:
We already support reliable R/O pinning of anonymous memory. However, assume we end up pinning (R/O long-term) a pagecache page or the shared zeropage inside a writable private ("COW") mapping. The next write access will trigger a write-fault and replace the pinned page by an exclusive anonymous page in the process page tables to break COW: the pinned page no longer corresponds to the page mapped into the process' page table.
Now that FAULT_FLAG_UNSHARE can break COW on anything mapped into a COW mapping, let's properly break COW first before R/O long-term pinning something that's not an exclusive anon page inside a COW mapping. FAULT_FLAG_UNSHARE will break COW and map an exclusive anon page instead that can get pinned safely.
With this change, we can stop using FOLL_FORCE|FOLL_WRITE for reliable R/O long-term pinning in COW mappings.
With this change, the new R/O long-term pinning tests for non-anonymous memory succeed: # [RUN] R/O longterm GUP pin ... with shared zeropage ok 151 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with memfd ok 152 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with tmpfile ok 153 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with huge zeropage ok 154 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with memfd hugetlb (2048 kB) ok 155 Longterm R/O pin is reliable # [RUN] R/O longterm GUP pin ... with memfd hugetlb (1048576 kB) ok 156 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with shared zeropage ok 157 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with memfd ok 158 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with tmpfile ok 159 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with huge zeropage ok 160 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (2048 kB) ok 161 Longterm R/O pin is reliable # [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (1048576 kB) ok 162 Longterm R/O pin is reliable
Note 1: We don't care about short-term R/O-pinning, because they have snapshot semantics: they are not supposed to observe modifications that happen after pinning.
As one example, assume we start direct I/O to read from a page and store page content into a file: modifications to page content after starting direct I/O are not guaranteed to end up in the file. So even if we'd pin the shared zeropage, the end result would be as expected -- getting zeroes stored to the file.
Note 2: For shared mappings we'll now always fallback to the slow path to lookup the VMA when R/O long-term pining. While that's the necessary price we have to pay right now, it's actually not that bad in practice: most FOLL_LONGTERM users already specify FOLL_WRITE, for example, along with FOLL_FORCE because they tried dealing with COW mappings correctly ...
Note 3: For users that use FOLL_LONGTERM right now without FOLL_WRITE, such as VFIO, we'd now no longer pin the shared zeropage. Instead, we'd populate exclusive anon pages that we can pin. There was a concern that this could affect the memlock limit of existing setups.
For example, a VM running with VFIO could run into the memlock limit and fail to run. However, we essentially had the same behavior already in commit 17839856fd58 ("gup: document and work around "COW can break either way" issue"), which got merged into some enterprise distros, and there were no such complaints. So most probably, we're fine.
Signed-off-by: David Hildenbrand david@redhat.com
I don't think my ack is any good for the implementation, but for the driver side semantics this sounds like what we want :-)
Acked-by: Daniel Vetter daniel.vetter@ffwll.ch
 include/linux/mm.h | 27 ++++++++++++++++++++++++---
 mm/gup.c           | 10 +++++-----
 mm/huge_memory.c   |  2 +-
 mm/hugetlb.c       |  7 ++++---
 4 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6bd2ee5872dd..e8cc838f42f9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3095,8 +3095,12 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
  * Must be called with the (sub)page that's actually referenced via the
  * page table entry, which might not necessarily be the head page for a
  * PTE-mapped THP.
+ *
+ * If the vma is NULL, we're coming from the GUP-fast path and might have
+ * to fallback to the slow path just to lookup the vma.
  */
-static inline bool gup_must_unshare(unsigned int flags, struct page *page)
+static inline bool gup_must_unshare(struct vm_area_struct *vma,
+                                    unsigned int flags, struct page *page)
 {
         /*
          * FOLL_WRITE is implicitly handled correctly as the page table entry
@@ -3109,8 +3113,25 @@ static inline bool gup_must_unshare(unsigned int flags, struct page *page)
          * Note: PageAnon(page) is stable until the page is actually getting
          * freed.
          */
-        if (!PageAnon(page))
-                return false;
+        if (!PageAnon(page)) {
+                /*
+                 * We only care about R/O long-term pining: R/O short-term
+                 * pinning does not have the semantics to observe successive
+                 * changes through the process page tables.
+                 */
+                if (!(flags & FOLL_LONGTERM))
+                        return false;
+
+                /* We really need the vma ... */
+                if (!vma)
+                        return true;
+
+                /*
+                 * ... because we only care about writable private ("COW")
+                 * mappings where we have to break COW early.
+                 */
+                return is_cow_mapping(vma->vm_flags);
+        }
 
         /* Paired with a memory barrier in page_try_share_anon_rmap(). */
         if (IS_ENABLED(CONFIG_HAVE_FAST_GUP))
diff --git a/mm/gup.c b/mm/gup.c
index 5182abaaecde..01116699c863 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -578,7 +578,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
                 }
         }
 
-        if (!pte_write(pte) && gup_must_unshare(flags, page)) {
+        if (!pte_write(pte) && gup_must_unshare(vma, flags, page)) {
                 page = ERR_PTR(-EMLINK);
                 goto out;
         }
@@ -2338,7 +2338,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
                         goto pte_unmap;
                 }
 
-                if (!pte_write(pte) && gup_must_unshare(flags, page)) {
+                if (!pte_write(pte) && gup_must_unshare(NULL, flags, page)) {
                         gup_put_folio(folio, 1, flags);
                         goto pte_unmap;
                 }
@@ -2506,7 +2506,7 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
                 return 0;
         }
 
-        if (!pte_write(pte) && gup_must_unshare(flags, &folio->page)) {
+        if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
                 gup_put_folio(folio, refs, flags);
                 return 0;
         }
@@ -2572,7 +2572,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
                 return 0;
         }
 
-        if (!pmd_write(orig) && gup_must_unshare(flags, &folio->page)) {
+        if (!pmd_write(orig) && gup_must_unshare(NULL, flags, &folio->page)) {
                 gup_put_folio(folio, refs, flags);
                 return 0;
         }
@@ -2612,7 +2612,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
                 return 0;
         }
 
-        if (!pud_write(orig) && gup_must_unshare(flags, &folio->page)) {
+        if (!pud_write(orig) && gup_must_unshare(NULL, flags, &folio->page)) {
                 gup_put_folio(folio, refs, flags);
                 return 0;
         }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 68d00196b519..dec7a7c0eca8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1434,7 +1434,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
         if (pmd_protnone(*pmd) && !gup_can_follow_protnone(flags))
                 return NULL;
 
-        if (!pmd_write(*pmd) && gup_must_unshare(flags, page))
+        if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page))
                 return ERR_PTR(-EMLINK);
 
         VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 383b26069b33..c3aab6d5b7aa 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6195,7 +6195,8 @@ static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
                 }
         }
 
-static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte,
+static inline bool __follow_hugetlb_must_fault(struct vm_area_struct *vma,
+                                               unsigned int flags, pte_t *pte,
                                                bool *unshare)
 {
         pte_t pteval = huge_ptep_get(pte);
@@ -6207,7 +6208,7 @@ static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte,
                 return false;
         if (flags & FOLL_WRITE)
                 return true;
-        if (gup_must_unshare(flags, pte_page(pteval))) {
+        if (gup_must_unshare(vma, flags, pte_page(pteval))) {
                 *unshare = true;
                 return true;
         }
@@ -6336,7 +6337,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
          * directly from any kind of swap entries.
          */
         if (absent ||
-            __follow_hugetlb_must_fault(flags, pte, &unshare)) {
+            __follow_hugetlb_must_fault(vma, flags, pte, &unshare)) {
                 vm_fault_t ret;
                 unsigned int fault_flags = 0;
2.38.1
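The driver conversions later in the series all reduce to the same pattern. As a rough sketch (illustration only; the helper name is made up and not taken from any patch):

/*
 * Sketch of the common driver-side pattern after this series: R/O
 * long-term pins no longer need FOLL_FORCE | FOLL_WRITE -- FOLL_LONGTERM
 * alone is reliable.
 */
static long drv_pin_user_buffer(unsigned long start, int npages,
                                bool writable, struct page **pages)
{
        unsigned int gup_flags = FOLL_LONGTERM;

        /* Request write access only if we actually intend to write. */
        if (writable)
                gup_flags |= FOLL_WRITE;

        /* GUP breaks COW early as needed, even for R/O long-term pins. */
        return pin_user_pages_fast(start, npages, gup_flags, pages);
}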
On 11/16/22 11:26, David Hildenbrand wrote:
We already support reliable R/O pinning of anonymous memory. However, assume we end up pinning (R/O long-term) a pagecache page or the shared zeropage inside a writable private ("COW") mapping. The next write access will trigger a write-fault and replace the pinned page by an exclusive anonymous page in the process page tables to break COW: the pinned page no longer corresponds to the page mapped into the process' page table.
Now that FAULT_FLAG_UNSHARE can break COW on anything mapped into a COW mapping, let's properly break COW first before R/O long-term pinning something that's not an exclusive anon page inside a COW mapping. FAULT_FLAG_UNSHARE will break COW and map an exclusive anon page instead that can get pinned safely.
With this change, we can stop using FOLL_FORCE|FOLL_WRITE for reliable R/O long-term pinning in COW mappings.
With this change, the new R/O long-term pinning tests for non-anonymous memory succeed:
# [RUN] R/O longterm GUP pin ... with shared zeropage
ok 151 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd
ok 152 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with tmpfile
ok 153 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with huge zeropage
ok 154 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd hugetlb (2048 kB)
ok 155 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd hugetlb (1048576 kB)
ok 156 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with shared zeropage
ok 157 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd
ok 158 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with tmpfile
ok 159 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with huge zeropage
ok 160 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (2048 kB)
ok 161 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (1048576 kB)
ok 162 Longterm R/O pin is reliable
Note 1: We don't care about short-term R/O pinning, because short-term pins have snapshot semantics: they are not supposed to observe modifications that happen after pinning.
As one example, assume we start direct I/O to read from a page and store page content into a file: modifications to page content after starting direct I/O are not guaranteed to end up in the file. So even if we'd pin the shared zeropage, the end result would be as expected -- getting zeroes stored to the file.
Note 2: For shared mappings, we'll now always fall back to the slow path to look up the VMA when R/O long-term pinning. While that's the necessary price we have to pay right now, it's actually not that bad in practice: most FOLL_LONGTERM users already specify FOLL_WRITE, for example, along with FOLL_FORCE, because they tried to deal with COW mappings correctly ...
Note 3: For users that use FOLL_LONGTERM right now without FOLL_WRITE, such as VFIO, we'd now no longer pin the shared zeropage. Instead, we'd populate exclusive anon pages that we can pin. There was a concern that this could affect the memlock limit of existing setups.
For example, a VM running with VFIO could run into the memlock limit and fail to run. However, we essentially had the same behavior already in commit 17839856fd58 ("gup: document and work around "COW can break either way" issue"), which got merged into some enterprise distros, and there were no such complaints. So most probably, we're fine.
Signed-off-by: David Hildenbrand david@redhat.com
Reviewed-by: Vlastimil Babka vbabka@suse.cz
On 11/16/22 02:26, David Hildenbrand wrote: ...
With this change, the new R/O long-term pinning tests for non-anonymous memory succeed:
# [RUN] R/O longterm GUP pin ... with shared zeropage
ok 151 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd
ok 152 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with tmpfile
ok 153 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with huge zeropage
ok 154 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd hugetlb (2048 kB)
ok 155 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP pin ... with memfd hugetlb (1048576 kB)
ok 156 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with shared zeropage
ok 157 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd
ok 158 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with tmpfile
ok 159 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with huge zeropage
ok 160 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (2048 kB)
ok 161 Longterm R/O pin is reliable
# [RUN] R/O longterm GUP-fast pin ... with memfd hugetlb (1048576 kB)
ok 162 Longterm R/O pin is reliable
Yes. I was able to reproduce these results, after some minor distractions involving huge pages, don't ask. :)
Note 1: We don't care about short-term R/O pinning, because short-term pins have snapshot semantics: they are not supposed to observe modifications that happen after pinning.
As one example, assume we start direct I/O to read from a page and store page content into a file: modifications to page content after starting direct I/O are not guaranteed to end up in the file. So even if we'd pin the shared zeropage, the end result would be as expected -- getting zeroes stored to the file.
Note 2: For shared mappings, we'll now always fall back to the slow path to look up the VMA when R/O long-term pinning. While that's the necessary price we have to pay right now, it's actually not that bad in practice: most FOLL_LONGTERM users already specify FOLL_WRITE, for example, along with FOLL_FORCE, because they tried to deal with COW mappings correctly ...
Note 3: For users that use FOLL_LONGTERM right now without FOLL_WRITE, such as VFIO, we'd now no longer pin the shared zeropage. Instead, we'd populate exclusive anon pages that we can pin. There was a concern that this could affect the memlock limit of existing setups.
For example, a VM running with VFIO could run into the memlock limit and fail to run. However, we essentially had the same behavior already in commit 17839856fd58 ("gup: document and work around "COW can break either way" issue"), which got merged into some enterprise distros, and there were no such complaints. So most probably, we're fine.
Signed-off-by: David Hildenbrand david@redhat.com
 include/linux/mm.h | 27 ++++++++++++++++++++++++---
 mm/gup.c           | 10 +++++-----
 mm/huge_memory.c   |  2 +-
 mm/hugetlb.c       |  7 ++++---
 4 files changed, 34 insertions(+), 12 deletions(-)
Looks good,
Reviewed-by: John Hubbard jhubbard@nvidia.com
thanks,
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Tested-by: Leon Romanovsky leonro@nvidia.com # Over mlx4 and mlx5.
Cc: Jason Gunthorpe jgg@ziepe.ca
Cc: Leon Romanovsky leon@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/infiniband/core/umem.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 86d479772fbc..755a9c57db6f 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -156,7 +156,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
         struct mm_struct *mm;
         unsigned long npages;
         int pinned, ret;
-        unsigned int gup_flags = FOLL_WRITE;
+        unsigned int gup_flags = FOLL_LONGTERM;
 
         /*
          * If the combination of the addr and size requested for this memory
@@ -210,8 +210,8 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
         cur_base = addr & PAGE_MASK;
 
-        if (!umem->writable)
-                gup_flags |= FOLL_FORCE;
+        if (umem->writable)
+                gup_flags |= FOLL_WRITE;
 
         while (npages) {
                 cond_resched();
@@ -219,7 +219,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
                                   min_t(unsigned long, npages,
                                         PAGE_SIZE /
                                         sizeof(struct page *)),
-                                  gup_flags | FOLL_LONGTERM, page_list);
+                                  gup_flags, page_list);
                 if (pinned < 0) {
                         ret = pinned;
                         goto umem_release;
On Wed, Nov 16, 2022 at 11:26:49AM +0100, David Hildenbrand wrote:
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Tested-by: Leon Romanovsky leonro@nvidia.com # Over mlx4 and mlx5.
Cc: Jason Gunthorpe jgg@ziepe.ca
Cc: Leon Romanovsky leon@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
 drivers/infiniband/core/umem.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
Reviewed-by: Jason Gunthorpe jgg@nvidia.com
Jason
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Christian Benvenuti benve@cisco.com
Cc: Nelson Escobar neescoba@cisco.com
Cc: Jason Gunthorpe jgg@ziepe.ca
Cc: Leon Romanovsky leon@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/infiniband/hw/usnic/usnic_uiom.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 67923ced6e2d..c301b3be9f30 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -85,6 +85,7 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
                                 int dmasync, struct usnic_uiom_reg *uiomr)
 {
         struct list_head *chunk_list = &uiomr->chunk_list;
+        unsigned int gup_flags = FOLL_LONGTERM;
         struct page **page_list;
         struct scatterlist *sg;
         struct usnic_uiom_chunk *chunk;
@@ -96,7 +97,6 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
         int off;
         int i;
         dma_addr_t pa;
-        unsigned int gup_flags;
         struct mm_struct *mm;
 
         /*
@@ -131,8 +131,8 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
                 goto out;
         }
 
-        gup_flags = FOLL_WRITE;
-        gup_flags |= (writable) ? 0 : FOLL_FORCE;
+        if (writable)
+                gup_flags |= FOLL_WRITE;
         cur_base = addr & PAGE_MASK;
         ret = 0;
 
@@ -140,8 +140,7 @@ static int usnic_uiom_get_pages(unsigned long addr, size_t size, int writable,
                 ret = pin_user_pages(cur_base,
                                      min_t(unsigned long, npages,
                                      PAGE_SIZE / sizeof(struct page *)),
-                                     gup_flags | FOLL_LONGTERM,
-                                     page_list, NULL);
+                                     gup_flags, page_list, NULL);
 
                 if (ret < 0)
                         goto out;
On Wed, Nov 16, 2022 at 11:26:50AM +0100, David Hildenbrand wrote:
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Christian Benvenuti benve@cisco.com
Cc: Nelson Escobar neescoba@cisco.com
Cc: Jason Gunthorpe jgg@ziepe.ca
Cc: Leon Romanovsky leon@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
 drivers/infiniband/hw/usnic/usnic_uiom.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
Reviewed-by: Jason Gunthorpe jgg@nvidia.com
Jason
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Bernard Metzler bmt@zurich.ibm.com
Cc: Jason Gunthorpe jgg@ziepe.ca
Cc: Leon Romanovsky leon@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/infiniband/sw/siw/siw_mem.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index 61c17db70d65..b2b33dd3b4fa 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -368,7 +368,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
         struct mm_struct *mm_s;
         u64 first_page_va;
         unsigned long mlock_limit;
-        unsigned int foll_flags = FOLL_WRITE;
+        unsigned int foll_flags = FOLL_LONGTERM;
         int num_pages, num_chunks, i, rv = 0;
 
         if (!can_do_mlock())
@@ -391,8 +391,8 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
 
         mmgrab(mm_s);
 
-        if (!writable)
-                foll_flags |= FOLL_FORCE;
+        if (writable)
+                foll_flags |= FOLL_WRITE;
 
         mmap_read_lock(mm_s);
 
@@ -423,8 +423,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
                 while (nents) {
                         struct page **plist = &umem->page_chunk[i].plist[got];
 
-                        rv = pin_user_pages(first_page_va, nents,
-                                            foll_flags | FOLL_LONGTERM,
+                        rv = pin_user_pages(first_page_va, nents, foll_flags,
                                             plist, NULL);
                         if (rv < 0)
                                 goto out_sem_up;
On Wed, Nov 16, 2022 at 11:26:51AM +0100, David Hildenbrand wrote:
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Bernard Metzler bmt@zurich.ibm.com
Cc: Jason Gunthorpe jgg@ziepe.ca
Cc: Leon Romanovsky leon@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
 drivers/infiniband/sw/siw/siw_mem.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
Reviewed-by: Jason Gunthorpe jgg@nvidia.com
Jason
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Mauro Carvalho Chehab mchehab@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/media/v4l2-core/videobuf-dma-sg.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c
index f75e5eedeee0..234e9f647c96 100644
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -151,17 +151,16 @@ static void videobuf_dma_init(struct videobuf_dmabuf *dma)
 static int videobuf_dma_init_user_locked(struct videobuf_dmabuf *dma,
                         int direction, unsigned long data, unsigned long size)
 {
+        unsigned int gup_flags = FOLL_LONGTERM;
         unsigned long first, last;
-        int err, rw = 0;
-        unsigned int flags = FOLL_FORCE;
+        int err;
 
         dma->direction = direction;
         switch (dma->direction) {
         case DMA_FROM_DEVICE:
-                rw = READ;
+                gup_flags |= FOLL_WRITE;
                 break;
         case DMA_TO_DEVICE:
-                rw = WRITE;
                 break;
         default:
                 BUG();
@@ -177,14 +176,11 @@ static int videobuf_dma_init_user_locked(struct videobuf_dmabuf *dma,
         if (NULL == dma->pages)
                 return -ENOMEM;
 
-        if (rw == READ)
-                flags |= FOLL_WRITE;
-
         dprintk(1, "init user [0x%lx+0x%lx => %lu pages]\n",
                 data, size, dma->nr_pages);
 
-        err = pin_user_pages(data & PAGE_MASK, dma->nr_pages,
-                             flags | FOLL_LONGTERM, dma->pages, NULL);
+        err = pin_user_pages(data & PAGE_MASK, dma->nr_pages, gup_flags,
+                             dma->pages, NULL);
 
         if (err != dma->nr_pages) {
                 dma->nr_pages = (err >= 0) ? err : 0;
On Wed, Nov 16, 2022 at 11:26:52AM +0100, David Hildenbrand wrote:
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Mauro Carvalho Chehab mchehab@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
I looked at this a while ago when going through some of the follow_pfn stuff, so
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
On 16/11/2022 11:26, David Hildenbrand wrote:
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Mauro Carvalho Chehab mchehab@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
Acked-by: Hans Verkuil hverkuil-cisco@xs4all.nl
Looks good!
Hans
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
commit cd5297b0855f ("drm/etnaviv: Use FOLL_FORCE for userptr") documents that FOLL_FORCE | FOLL_WRITE was really only used for reliable R/O pinning.
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Daniel Vetter daniel.vetter@ffwll.ch
Cc: Lucas Stach l.stach@pengutronix.de
Cc: Russell King linux+etnaviv@armlinux.org.uk
Cc: Christian Gmeiner christian.gmeiner@gmail.com
Cc: David Airlie airlied@gmail.com
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index cc386f8a7116..efe2240945d0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -638,6 +638,7 @@ static int etnaviv_gem_userptr_get_pages(struct etnaviv_gem_object *etnaviv_obj)
         struct page **pvec = NULL;
         struct etnaviv_gem_userptr *userptr = &etnaviv_obj->userptr;
         int ret, pinned = 0, npages = etnaviv_obj->base.size >> PAGE_SHIFT;
+        unsigned int gup_flags = FOLL_LONGTERM;
 
         might_lock_read(&current->mm->mmap_lock);
 
@@ -648,14 +649,15 @@ static int etnaviv_gem_userptr_get_pages(struct etnaviv_gem_object *etnaviv_obj)
         if (!pvec)
                 return -ENOMEM;
 
+        if (!userptr->ro)
+                gup_flags |= FOLL_WRITE;
+
         do {
                 unsigned num_pages = npages - pinned;
                 uint64_t ptr = userptr->ptr + pinned * PAGE_SIZE;
                 struct page **pages = pvec + pinned;
 
-                ret = pin_user_pages_fast(ptr, num_pages,
-                                          FOLL_WRITE | FOLL_FORCE | FOLL_LONGTERM,
-                                          pages);
+                ret = pin_user_pages_fast(ptr, num_pages, gup_flags, pages);
                 if (ret < 0) {
                         unpin_user_pages(pvec, pinned);
                         kvfree(pvec);
On Wed, Nov 16, 2022 at 11:26:53AM +0100, David Hildenbrand wrote:
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
commit cd5297b0855f ("drm/etnaviv: Use FOLL_FORCE for userptr") documents that FOLL_FORCE | FOLL_WRITE was really only used for reliable R/O pinning.
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Cc: Daniel Vetter daniel.vetter@ffwll.ch
Cc: Lucas Stach l.stach@pengutronix.de
Cc: Russell King linux+etnaviv@armlinux.org.uk
Cc: Christian Gmeiner christian.gmeiner@gmail.com
Cc: David Airlie airlied@gmail.com
Signed-off-by: David Hildenbrand david@redhat.com
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Also ack for merging through whatever tree suits best, since I guess this should all land together. -Daniel
FOLL_FORCE is really only for ptrace access. R/O pinning a page is supposed to fail if the VMA misses proper access permissions (no VM_READ).
Let's just remove FOLL_FORCE usage here; there would have to be a pretty good reason to allow arbitrary drivers to R/O pin pages in a PROT_NONE VMA. Most probably, FOLL_FORCE usage is just some legacy leftover.
Cc: Andy Walls awalls@md.metrocast.net
Cc: Mauro Carvalho Chehab mchehab@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/media/pci/ivtv/ivtv-udma.c | 2 +-
 drivers/media/pci/ivtv/ivtv-yuv.c  | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c
index 210be8290f24..99b9f55ca829 100644
--- a/drivers/media/pci/ivtv/ivtv-udma.c
+++ b/drivers/media/pci/ivtv/ivtv-udma.c
@@ -115,7 +115,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr,
 
         /* Pin user pages for DMA Xfer */
         err = pin_user_pages_unlocked(user_dma.uaddr, user_dma.page_count,
-                        dma->map, FOLL_FORCE);
+                        dma->map, 0);
 
         if (user_dma.page_count != err) {
                 IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n",
diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c
index 4ba10c34a16a..582146f8d70d 100644
--- a/drivers/media/pci/ivtv/ivtv-yuv.c
+++ b/drivers/media/pci/ivtv/ivtv-yuv.c
@@ -63,12 +63,11 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma,
 
         /* Pin user pages for DMA Xfer */
         y_pages = pin_user_pages_unlocked(y_dma.uaddr,
-                        y_dma.page_count, &dma->map[0], FOLL_FORCE);
+                        y_dma.page_count, &dma->map[0], 0);
         uv_pages = 0;
 
         /* silence gcc. value is set and consumed only if: */
         if (y_pages == y_dma.page_count) {
                 uv_pages = pin_user_pages_unlocked(uv_dma.uaddr,
-                                uv_dma.page_count, &dma->map[y_pages],
-                                FOLL_FORCE);
+                                uv_dma.page_count, &dma->map[y_pages], 0);
         }
 
         if (y_pages != y_dma.page_count || uv_pages != uv_dma.page_count) {
On 16/11/2022 11:26, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. R/O pinning a page is supposed to fail if the VMA misses proper access permissions (no VM_READ).
Let's just remove FOLL_FORCE usage here; there would have to be a pretty good reason to allow arbitrary drivers to R/O pin pages in a PROT_NONE VMA. Most probably, FOLL_FORCE usage is just some legacy leftover.
I'm pretty sure about that as well, so:
Acked-by: Hans Verkuil hverkuil-cisco@xs4all.nl
Regards,
Hans
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
FOLL_FORCE, however, seems to be a legacy leftover as it predates commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"). Let's just remove it.
Once the read-only buffer issue has been resolved, FOLL_WRITE could again be set depending on the DMA direction.
Cc: Hans Verkuil hverkuil@xs4all.nl
Cc: Marek Szyprowski m.szyprowski@samsung.com
Cc: Tomasz Figa tfiga@chromium.org
Cc: Marek Szyprowski m.szyprowski@samsung.com
Cc: Mauro Carvalho Chehab mchehab@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/media/common/videobuf2/frame_vector.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/media/common/videobuf2/frame_vector.c b/drivers/media/common/videobuf2/frame_vector.c
index 542dde9d2609..062e98148c53 100644
--- a/drivers/media/common/videobuf2/frame_vector.c
+++ b/drivers/media/common/videobuf2/frame_vector.c
@@ -50,7 +50,7 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
         start = untagged_addr(start);
 
         ret = pin_user_pages_fast(start, nr_frames,
-                                  FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM,
+                                  FOLL_WRITE | FOLL_LONGTERM,
                                   (struct page **)(vec->ptrs));
         if (ret > 0) {
                 vec->got_ref = true;
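As a sketch of what that could look like once the read-only buffer issue is resolved (illustration only; assumes a caller-supplied dma_data_direction "direction", mirroring what videobuf-dma-sg already does):

        unsigned int gup_flags = FOLL_LONGTERM;

        /* Only DMA_FROM_DEVICE has the device write into the buffer. */
        if (direction == DMA_FROM_DEVICE)
                gup_flags |= FOLL_WRITE;

        ret = pin_user_pages_fast(start, nr_frames, gup_flags,
                                  (struct page **)(vec->ptrs));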
On Wed, Nov 16, 2022 at 11:26:55AM +0100, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
FOLL_FORCE, however, seems to be a legacy leftover as it predates commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"). Let's just remove it.
Once the read-only buffer issue has been resolved, FOLL_WRITE could again be set depending on the DMA direction.
Cc: Hans Verkuil hverkuil@xs4all.nl
Cc: Marek Szyprowski m.szyprowski@samsung.com
Cc: Tomasz Figa tfiga@chromium.org
Cc: Marek Szyprowski m.szyprowski@samsung.com
Cc: Mauro Carvalho Chehab mchehab@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
Also code I looked at while looking at follow_pfn stuff
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Hi David, Tomasz,
On 16/11/2022 11:26, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
I've decided to revert 707947247e95: I have not been able to reproduce the problem described in that commit, and Tomasz reported that it caused problems with a specific use-case they encountered. I'll post that patch soon and I expect it to land in 6.2. It will cause a conflict with this patch, though.
If the problem described in that patch occurs again, then I will revisit it and hopefully do a better job than I did before. That commit was not my finest moment.
Regards,
Hans
On 23/11/2022 14:26, Hans Verkuil wrote:
Hi David, Tomasz,
On 16/11/2022 11:26, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
I've decided to revert 707947247e95: I have not been able to reproduce the problem described in that commit, and Tomasz reported that it caused problems with a specific use-case they encountered. I'll post that patch soon and I expect it to land in 6.2. It will cause a conflict with this patch, though.
If the problem described in that patch occurs again, then I will revisit it and hopefully do a better job than I did before. That commit was not my finest moment.
In any case, for this patch:
Acked-by: Hans Verkuil hverkuil-cisco@xs4all.nl
Regards,
Hans
On 16.11.22 11:26, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
FOLL_FORCE, however, seems to be a legacy leftover as it predates commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"). Let's just remove it.
Once the read-only buffer issue has been resolved, FOLL_WRITE could again be set depending on the DMA direction.
Hi Andrew,
see the discussion at [1] regarding a conflict and how to proceed with upstreaming. The conflict would be easy to resolve; however, the patch description also no longer makes sense with [1].
On top of mm-unstable, reverting this patch and applying [1] gives me an updated patch:
From 1e66c25f1467c1f1e5f275312f2c6df29308d4df Mon Sep 17 00:00:00 2001
From: David Hildenbrand david@redhat.com
Date: Wed, 16 Nov 2022 11:26:55 +0100
Subject: [PATCH] mm/frame-vector: remove FOLL_FORCE usage
GUP now supports reliable R/O long-term pinning in COW mappings, such that we break COW early. MAP_SHARED VMAs only use the shared zeropage so far in one corner case (DAXFS file with holes), which can be ignored because GUP does not support long-term pinning in fsdax (see check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop using FOLL_FORCE, which is really only for ptrace access.
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Acked-by: Hans Verkuil hverkuil-cisco@xs4all.nl
Cc: Hans Verkuil hverkuil@xs4all.nl
Cc: Marek Szyprowski m.szyprowski@samsung.com
Cc: Tomasz Figa tfiga@chromium.org
Cc: Marek Szyprowski m.szyprowski@samsung.com
Cc: Mauro Carvalho Chehab mchehab@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/media/common/videobuf2/frame_vector.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/media/common/videobuf2/frame_vector.c b/drivers/media/common/videobuf2/frame_vector.c
index aad72640f055..8606fdacf5b8 100644
--- a/drivers/media/common/videobuf2/frame_vector.c
+++ b/drivers/media/common/videobuf2/frame_vector.c
@@ -41,7 +41,7 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames, bool write,
         int ret_pin_user_pages_fast = 0;
         int ret = 0;
         int err;
-        unsigned int gup_flags = FOLL_FORCE | FOLL_LONGTERM;
+        unsigned int gup_flags = FOLL_LONGTERM;
 
         if (nr_frames == 0)
                 return 0;
Hi David,
On 27/11/2022 11:35, David Hildenbrand wrote:
On 16.11.22 11:26, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
FOLL_FORCE, however, seems to be a legacy leftover as it predates commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"). Let's just remove it.
Once the read-only buffer issue has been resolved, FOLL_WRITE could again be set depending on the DMA direction.
Cc: Hans Verkuil hverkuil@xs4all.nl Cc: Marek Szyprowski m.szyprowski@samsung.com Cc: Tomasz Figa tfiga@chromium.org Cc: Marek Szyprowski m.szyprowski@samsung.com Cc: Mauro Carvalho Chehab mchehab@kernel.org Signed-off-by: David Hildenbrand david@redhat.com
Hi Andrew,
see the discussion at [1] regarding a conflict and how to proceed with upstreaming. The conflict would be easy to resolve, however, also the patch description doesn't make sense anymore with [1].
Might it be easier and less confusing if you post a v2 of this series with my patch first? That way it is clear that 1) my patch has to come first, and 2) that it is part of a single series and should be merged by the mm subsystem.
Less chances of things going wrong that way.
Just mention in the v2 cover letter that the first patch was added to make it easy to backport that fix without being hampered by merge conflicts if it was added after your frame_vector.c patch.
Regards,
Hans
On 28.11.22 09:17, Hans Verkuil wrote:
Hi David,
On 27/11/2022 11:35, David Hildenbrand wrote:
On 16.11.22 11:26, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
FOLL_FORCE, however, seems to be a legacy leftover as it predates commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"). Let's just remove it.
Once the read-only buffer issue has been resolved, FOLL_WRITE could again be set depending on the DMA direction.
Hi Andrew,
see the discussion at [1] regarding a conflict and how to proceed with upstreaming. The conflict would be easy to resolve, however, also the patch description doesn't make sense anymore with [1].
Might it be easier and less confusing if you post a v2 of this series with my patch first? That way it is clear that 1) my patch has to come first, and 2) that it is part of a single series and should be merged by the mm subsystem.
Less chances of things going wrong that way.
Just mention in the v2 cover letter that the first patch was added to make it easy to backport that fix without being hampered by merge conflicts if it was added after your frame_vector.c patch.
Yes, that's the way I would naturally do it; however, Andrew prefers delta updates for minor changes.
@Andrew, whatever you prefer!
Thanks!
On 28/11/2022 09:18, David Hildenbrand wrote:
On 28.11.22 09:17, Hans Verkuil wrote:
Hi David,
On 27/11/2022 11:35, David Hildenbrand wrote:
On 16.11.22 11:26, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
FOLL_FORCE, however, seems to be a legacy leftover as it predates commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"). Let's just remove it.
Once the read-only buffer issue has been resolved, FOLL_WRITE could again be set depending on the DMA direction.
Hi Andrew,
see the discussion at [1] regarding a conflict and how to proceed with upstreaming. The conflict would be easy to resolve, however, also the patch description doesn't make sense anymore with [1].
Might it be easier and less confusing if you post a v2 of this series with my patch first? That way it is clear that 1) my patch has to come first, and 2) that it is part of a single series and should be merged by the mm subsystem.
Less chances of things going wrong that way.
Just mention in the v2 cover letter that the first patch was added to make it easy to backport that fix without being hampered by merge conflicts if it was added after your frame_vector.c patch.
Yes, that's the way I would naturally do it; however, Andrew prefers delta updates for minor changes.
@Andrew, whatever you prefer!
Andrew, I've resent my patch, this time with you CCed as well.
Regards,
Hans
On Mon, Nov 28, 2022 at 5:19 PM David Hildenbrand david@redhat.com wrote:
On 28.11.22 09:17, Hans Verkuil wrote:
Hi David,
On 27/11/2022 11:35, David Hildenbrand wrote:
On 16.11.22 11:26, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. According to commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"), get_vaddr_frames() currently pins all pages writable as a workaround for issues with read-only buffers.
FOLL_FORCE, however, seems to be a legacy leftover as it predates commit 707947247e95 ("media: videobuf2-vmalloc: get_userptr: buffers are always writable"). Let's just remove it.
Once the read-only buffer issue has been resolved, FOLL_WRITE could again be set depending on the DMA direction.
Hi Andrew,
see the discussion at [1] regarding a conflict and how to proceed with upstreaming. The conflict would be easy to resolve, however, also the patch description doesn't make sense anymore with [1].
Might it be easier and less confusing if you post a v2 of this series with my patch first? That way it is clear that 1) my patch has to come first, and 2) that it is part of a single series and should be merged by the mm subsystem.
Less chances of things going wrong that way.
Just mention in the v2 cover letter that the first patch was added to make it easy to backport that fix without being hampered by merge conflicts if it was added after your frame_vector.c patch.
Yes, that's the way I would naturally do, it, however, Andrew prefers delta updates for minor changes.
@Andrew, whatever you prefer!
Thanks!
However you folks proceed with taking this patch, feel free to add my Acked-by. Thanks!
Best regards, Tomasz
On Mon, 28 Nov 2022 09:18:47 +0100 David Hildenbrand david@redhat.com wrote:
@Andrew, whatever you prefer!
I'm inclined to let things sit as they are. Cross-tree conflicts happen, and Linus handles them. I'll flag this (very simple) conflict in the pull request, if MM merges second. If v4l merges second then hopefully they will do the same. But this one is so simple that Linus hardly needs our help.
But Linus won't be editing changelogs so that the changelog makes more sense after both trees are joined. I'm inclined to let the changelog sit as it is as well.
On 28.11.22 23:59, Andrew Morton wrote:
I'm inclined to let things sit as they are. Cross-tree conflicts happen, and Linus handles them. I'll flag this (very simple) conflict in the pull request, if MM merges second. If v4l merges second then hopefully they will do the same. But this one is so simple that Linus hardly needs our help.
But Linus won't be editing changelogs so that the changelog makes more sense after both trees are joined. I'm inclined to let the changelog sit as it is as well.
Works for me. Thanks Andrew!
On 29/11/2022 09:48, David Hildenbrand wrote:
On 28.11.22 23:59, Andrew Morton wrote:
I'm inclined to let things sit as they are. Cross-tree conflicts happen, and Linus handles them. I'll flag this (very simple) conflict in the pull request, if MM merges second. If v4l merges second then hopefully they will do the same. But this one is so simple that Linus hardly needs our help.
It's not about cross-tree conflicts, it's about the fact that my patch is a fix that needs to be backported to older kernels. It should apply cleanly to those older kernels if my patch goes in first, but if it is the other way around I would have to make a new patch for the stable kernels.
Also, the updated changelog in David's patch that sits on top of mine makes a lot more sense.
If you really don't want to take my patch as part of this, then let me know and I'll take it through the media subsystem and hope for the best :-)
Regards,
Hans
On 29.11.22 10:08, Hans Verkuil wrote:
It's not about cross-tree conflicts, it's about the fact that my patch is a fix that needs to be backported to older kernels. It should apply cleanly to those older kernels if my patch goes in first, but if it is the other way around I would have to make a new patch for the stable kernels.
IIUC, the conflict will be resolved at merge time and the merge resolution will be part of the merge commit. It doesn't matter in which order the patches go upstream, the merge commit resolves the problematic overlap.
So your patch will be upstream as intended, where it can be cleanly backported.
Hope I am not twisting reality ;)
FOLL_FORCE is really only for ptrace access. As we unpin the pinned pages using unpin_user_pages_dirty_lock(true), the assumption is that all these pages are writable.
FOLL_FORCE in this case seems to be a legacy leftover. Let's just remove it.
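For context, a simplified sketch of the pattern in question (not verbatim driver code): the unpin marks the pages dirty, which is only sound because they were pinned writable in the first place, and at no point is FOLL_FORCE needed for that.

ret = pin_user_pages_fast(start, npages, FOLL_WRITE | FOLL_LONGTERM, pages);
if (ret != npages)
	goto err;

/* ... device DMAs into the pinned pages ... */

/* make_dirty == true: valid because the pages were pinned writable */
unpin_user_pages_dirty_lock(pages, npages, true);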
Cc: Inki Dae inki.dae@samsung.com
Cc: Seung-Woo Kim sw0312.kim@samsung.com
Cc: Kyungmin Park kyungmin.park@samsung.com
Cc: David Airlie airlied@gmail.com
Cc: Daniel Vetter daniel@ffwll.ch
Cc: Krzysztof Kozlowski krzysztof.kozlowski@linaro.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/gpu/drm/exynos/exynos_drm_g2d.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
index 471fd6c8135f..e19c2ceb3759 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
@@ -477,7 +477,7 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct g2d_data *g2d,
 	}
 
 	ret = pin_user_pages_fast(start, npages,
-				  FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM,
+				  FOLL_WRITE | FOLL_LONGTERM,
 				  g2d_userptr->pages);
 	if (ret != npages) {
 		DRM_DEV_ERROR(g2d->dev,
On Wed, Nov 16, 2022 at 11:26:56AM +0100, David Hildenbrand wrote:
FOLL_FORCE is really only for ptrace access. As we unpin the pinned pages using unpin_user_pages_dirty_lock(true), the assumption is that all these pages are writable.
Reviewed-by: Daniel Vetter daniel.vetter@ffwll.ch
Plus ack for merging through the appropriate non-drm tree. -Daniel
FOLL_FORCE is really only for ptrace access. As we unpin the pinned pages using unpin_user_pages_dirty_lock(true), the assumption is that all these pages are writable.
FOLL_FORCE in this case seems to be a legacy leftover. Let's just remove it.
Cc: Dennis Dalessandro dennis.dalessandro@cornelisnetworks.com
Cc: Jason Gunthorpe jgg@ziepe.ca
Cc: Leon Romanovsky leon@kernel.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/infiniband/hw/qib/qib_user_pages.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index f4b5f05058e4..f693bc753b6b 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -110,7 +110,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 	for (got = 0; got < num_pages; got += ret) {
 		ret = pin_user_pages(start_page + got * PAGE_SIZE,
 				     num_pages - got,
-				     FOLL_LONGTERM | FOLL_WRITE | FOLL_FORCE,
+				     FOLL_LONGTERM | FOLL_WRITE,
 				     p + got, NULL);
 		if (ret < 0) {
 			mmap_read_unlock(current->mm);
FOLL_FORCE is really only for ptrace access. As we unpin the pinned pages using unpin_user_pages_dirty_lock(true), the assumption is that all these pages are writable.
FOLL_FORCE in this case seems to be a copy-and-paste leftover from other drivers. Let's just remove it.
Acked-by: Oded Gabbay ogabbay@kernel.org
Cc: Oded Gabbay ogabbay@kernel.org
Cc: Arnd Bergmann arnd@arndb.de
Cc: Greg Kroah-Hartman gregkh@linuxfoundation.org
Signed-off-by: David Hildenbrand david@redhat.com
---
 drivers/misc/habanalabs/common/memory.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/misc/habanalabs/common/memory.c b/drivers/misc/habanalabs/common/memory.c
index ef28f3b37b93..e35cca96bbef 100644
--- a/drivers/misc/habanalabs/common/memory.c
+++ b/drivers/misc/habanalabs/common/memory.c
@@ -2312,8 +2312,7 @@ static int get_user_memory(struct hl_device *hdev, u64 addr, u64 size,
 	if (!userptr->pages)
 		return -ENOMEM;
 
-	rc = pin_user_pages_fast(start, npages,
-				 FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM,
+	rc = pin_user_pages_fast(start, npages, FOLL_WRITE | FOLL_LONGTERM,
 				 userptr->pages);
 
 	if (rc != npages) {
Let's make it clearer that functionality provided by FOLL_FORCE is really only for ptrace access. Prevent accidental re-use in drivers by renaming FOLL_FORCE to FOLL_PTRACE:
git grep -l 'FOLL_FORCE' | xargs sed -i 's/FOLL_FORCE/FOLL_PTRACE/g'
In the future, we might want to use a separate set of flags for the access_vm interface: most FOLL_* flags don't apply and we mostly only want to pass FOLL_PTRACE and FOLL_WRITE.
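As a rough sketch of that idea (all names below are invented for illustration; nothing like this is part of this series):

/* Hypothetical flags dedicated to access_process_vm()/ptrace_access_vm(),
 * decoupled from the general FOLL_* namespace: */
#define ACCESS_VM_WRITE		0x1	/* write instead of read */
#define ACCESS_VM_PTRACE	0x2	/* may override VMA permissions */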
Suggested-by: Christoph Hellwig hch@infradead.org
Cc: Oleg Nesterov oleg@redhat.com
Cc: Richard Henderson richard.henderson@linaro.org
Cc: Ivan Kokshaysky ink@jurassic.park.msu.ru
Cc: Matt Turner mattst88@gmail.com
Cc: Catalin Marinas catalin.marinas@arm.com
Cc: Will Deacon will@kernel.org
Cc: Thomas Bogendoerfer tsbogend@alpha.franken.de
Cc: Michael Ellerman mpe@ellerman.id.au
Cc: Nicholas Piggin npiggin@gmail.com
Cc: Christophe Leroy christophe.leroy@csgroup.eu
Cc: "David S. Miller" davem@davemloft.net
Cc: Thomas Gleixner tglx@linutronix.de
Cc: Ingo Molnar mingo@redhat.com
Cc: Borislav Petkov bp@alien8.de
Cc: Dave Hansen dave.hansen@linux.intel.com
Cc: "H. Peter Anvin" hpa@zytor.com
Cc: Richard Weinberger richard@nod.at
Cc: Anton Ivanov anton.ivanov@cambridgegreys.com
Cc: Johannes Berg johannes@sipsolutions.net
Cc: Eric Biederman ebiederm@xmission.com
Cc: Kees Cook keescook@chromium.org
Cc: Alexander Viro viro@zeniv.linux.org.uk
Cc: Peter Zijlstra peterz@infradead.org
Cc: Arnaldo Carvalho de Melo acme@kernel.org
Cc: Mark Rutland mark.rutland@arm.com
Cc: Alexander Shishkin alexander.shishkin@linux.intel.com
Cc: Jiri Olsa jolsa@kernel.org
Cc: Namhyung Kim namhyung@kernel.org
Cc: Mike Kravetz mike.kravetz@oracle.com
Cc: Muchun Song songmuchun@bytedance.com
Cc: Kentaro Takeda takedakn@nttdata.co.jp
Cc: Tetsuo Handa penguin-kernel@I-love.SAKURA.ne.jp
Cc: Paul Moore paul@paul-moore.com
Cc: James Morris jmorris@namei.org
Cc: "Serge E. Hallyn" serge@hallyn.com
Signed-off-by: David Hildenbrand david@redhat.com
---
 arch/alpha/kernel/ptrace.c            |  6 +++---
 arch/arm64/kernel/mte.c               |  2 +-
 arch/ia64/kernel/ptrace.c             | 10 +++++-----
 arch/mips/kernel/ptrace32.c           |  4 ++--
 arch/mips/math-emu/dsemul.c           |  2 +-
 arch/powerpc/kernel/ptrace/ptrace32.c |  4 ++--
 arch/sparc/kernel/ptrace_32.c         |  4 ++--
 arch/sparc/kernel/ptrace_64.c         |  8 ++++----
 arch/x86/kernel/step.c                |  2 +-
 arch/x86/um/ptrace_32.c               |  2 +-
 arch/x86/um/ptrace_64.c               |  2 +-
 fs/exec.c                             |  2 +-
 fs/proc/base.c                        |  2 +-
 include/linux/mm.h                    |  8 ++++----
 kernel/events/uprobes.c               |  4 ++--
 kernel/ptrace.c                       | 12 ++++++------
 mm/gup.c                              | 28 +++++++++++++--------------
 mm/huge_memory.c                      |  8 ++++----
 mm/hugetlb.c                          |  2 +-
 mm/memory.c                           |  4 ++--
 mm/util.c                             |  4 ++--
 security/tomoyo/domain.c              |  2 +-
 22 files changed, 61 insertions(+), 61 deletions(-)
diff --git a/arch/alpha/kernel/ptrace.c b/arch/alpha/kernel/ptrace.c index a1a239ea002d..55def6479ff2 100644 --- a/arch/alpha/kernel/ptrace.c +++ b/arch/alpha/kernel/ptrace.c @@ -158,7 +158,7 @@ static inline int read_int(struct task_struct *task, unsigned long addr, int * data) { int copied = access_process_vm(task, addr, data, sizeof(int), - FOLL_FORCE); + FOLL_PTRACE); return (copied == sizeof(int)) ? 0 : -EIO; }
@@ -166,7 +166,7 @@ static inline int write_int(struct task_struct *task, unsigned long addr, int data) { int copied = access_process_vm(task, addr, &data, sizeof(int), - FOLL_FORCE | FOLL_WRITE); + FOLL_PTRACE | FOLL_WRITE); return (copied == sizeof(int)) ? 0 : -EIO; }
@@ -284,7 +284,7 @@ long arch_ptrace(struct task_struct *child, long request, case PTRACE_PEEKTEXT: /* read word at location addr. */ case PTRACE_PEEKDATA: copied = ptrace_access_vm(child, addr, &tmp, sizeof(tmp), - FOLL_FORCE); + FOLL_PTRACE); ret = -EIO; if (copied != sizeof(tmp)) break; diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c index 7467217c1eaf..fa29fecaedbc 100644 --- a/arch/arm64/kernel/mte.c +++ b/arch/arm64/kernel/mte.c @@ -525,7 +525,7 @@ int mte_ptrace_copy_tags(struct task_struct *child, long request, int ret; struct iovec kiov; struct iovec __user *uiov = (void __user *)data; - unsigned int gup_flags = FOLL_FORCE; + unsigned int gup_flags = FOLL_PTRACE;
if (!system_supports_mte()) return -EIO; diff --git a/arch/ia64/kernel/ptrace.c b/arch/ia64/kernel/ptrace.c index ab8aeb34d1d9..3781db1f506c 100644 --- a/arch/ia64/kernel/ptrace.c +++ b/arch/ia64/kernel/ptrace.c @@ -452,7 +452,7 @@ ia64_peek (struct task_struct *child, struct switch_stack *child_stack, return 0; } } - copied = access_process_vm(child, addr, &ret, sizeof(ret), FOLL_FORCE); + copied = access_process_vm(child, addr, &ret, sizeof(ret), FOLL_PTRACE); if (copied != sizeof(ret)) return -EIO; *val = ret; @@ -489,7 +489,7 @@ ia64_poke (struct task_struct *child, struct switch_stack *child_stack, } } } else if (access_process_vm(child, addr, &val, sizeof(val), - FOLL_FORCE | FOLL_WRITE) + FOLL_PTRACE | FOLL_WRITE) != sizeof(val)) return -EIO; return 0; @@ -544,7 +544,7 @@ ia64_sync_user_rbs (struct task_struct *child, struct switch_stack *sw, if (ret < 0) return ret; if (access_process_vm(child, addr, &val, sizeof(val), - FOLL_FORCE | FOLL_WRITE) + FOLL_PTRACE | FOLL_WRITE) != sizeof(val)) return -EIO; } @@ -561,7 +561,7 @@ ia64_sync_kernel_rbs (struct task_struct *child, struct switch_stack *sw, /* now copy word for word from user rbs to kernel rbs: */ for (addr = user_rbs_start; addr < user_rbs_end; addr += 8) { if (access_process_vm(child, addr, &val, sizeof(val), - FOLL_FORCE) + FOLL_PTRACE) != sizeof(val)) return -EIO;
@@ -1105,7 +1105,7 @@ arch_ptrace (struct task_struct *child, long request, case PTRACE_PEEKDATA: /* read word at location addr */ if (ptrace_access_vm(child, addr, &data, sizeof(data), - FOLL_FORCE) + FOLL_PTRACE) != sizeof(data)) return -EIO; /* ensure return value is not mistaken for error code */ diff --git a/arch/mips/kernel/ptrace32.c b/arch/mips/kernel/ptrace32.c index afcf27a877cb..31c1c805bb8e 100644 --- a/arch/mips/kernel/ptrace32.c +++ b/arch/mips/kernel/ptrace32.c @@ -71,7 +71,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, break;
copied = ptrace_access_vm(child, (u64)addrOthers, &tmp, - sizeof(tmp), FOLL_FORCE); + sizeof(tmp), FOLL_PTRACE); if (copied != sizeof(tmp)) break; ret = put_user(tmp, (u32 __user *) (unsigned long) data); @@ -185,7 +185,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, ret = 0; if (ptrace_access_vm(child, (u64)addrOthers, &data, sizeof(data), - FOLL_FORCE | FOLL_WRITE) == sizeof(data)) + FOLL_PTRACE | FOLL_WRITE) == sizeof(data)) break; ret = -EIO; break; diff --git a/arch/mips/math-emu/dsemul.c b/arch/mips/math-emu/dsemul.c index e02bd20b60a6..6111a46de2df 100644 --- a/arch/mips/math-emu/dsemul.c +++ b/arch/mips/math-emu/dsemul.c @@ -271,7 +271,7 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir, /* Write the frame to user memory */ fr_uaddr = (unsigned long)&dsemul_page()[fr_idx]; ret = access_process_vm(current, fr_uaddr, &fr, sizeof(fr), - FOLL_FORCE | FOLL_WRITE); + FOLL_PTRACE | FOLL_WRITE); if (unlikely(ret != sizeof(fr))) { MIPS_FPU_EMU_INC_STATS(errors); free_emuframe(fr_idx, current->mm); diff --git a/arch/powerpc/kernel/ptrace/ptrace32.c b/arch/powerpc/kernel/ptrace/ptrace32.c index 19c224808982..336cfebd70df 100644 --- a/arch/powerpc/kernel/ptrace/ptrace32.c +++ b/arch/powerpc/kernel/ptrace/ptrace32.c @@ -65,7 +65,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, break;
copied = ptrace_access_vm(child, (u64)addrOthers, &tmp, - sizeof(tmp), FOLL_FORCE); + sizeof(tmp), FOLL_PTRACE); if (copied != sizeof(tmp)) break; ret = put_user(tmp, (u32 __user *)data); @@ -169,7 +169,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, ret = 0; if (ptrace_access_vm(child, (u64)addrOthers, &tmp, sizeof(tmp), - FOLL_FORCE | FOLL_WRITE) == sizeof(tmp)) + FOLL_PTRACE | FOLL_WRITE) == sizeof(tmp)) break; ret = -EIO; break; diff --git a/arch/sparc/kernel/ptrace_32.c b/arch/sparc/kernel/ptrace_32.c index e7db48acb838..b5c91855faee 100644 --- a/arch/sparc/kernel/ptrace_32.c +++ b/arch/sparc/kernel/ptrace_32.c @@ -56,7 +56,7 @@ static int regwindow32_get(struct task_struct *target, return -EFAULT; } else { if (access_process_vm(target, reg_window, uregs, size, - FOLL_FORCE) != size) + FOLL_PTRACE) != size) return -EFAULT; } return 0; @@ -74,7 +74,7 @@ static int regwindow32_set(struct task_struct *target, return -EFAULT; } else { if (access_process_vm(target, reg_window, uregs, size, - FOLL_FORCE | FOLL_WRITE) != size) + FOLL_PTRACE | FOLL_WRITE) != size) return -EFAULT; } return 0; diff --git a/arch/sparc/kernel/ptrace_64.c b/arch/sparc/kernel/ptrace_64.c index 86a7eb5c27ba..4de97cd1e55a 100644 --- a/arch/sparc/kernel/ptrace_64.c +++ b/arch/sparc/kernel/ptrace_64.c @@ -165,7 +165,7 @@ static int get_from_target(struct task_struct *target, unsigned long uaddr, return -EFAULT; } else { int len2 = access_process_vm(target, uaddr, kbuf, len, - FOLL_FORCE); + FOLL_PTRACE); if (len2 != len) return -EFAULT; } @@ -180,7 +180,7 @@ static int set_to_target(struct task_struct *target, unsigned long uaddr, return -EFAULT; } else { int len2 = access_process_vm(target, uaddr, kbuf, len, - FOLL_FORCE | FOLL_WRITE); + FOLL_PTRACE | FOLL_WRITE); if (len2 != len) return -EFAULT; } @@ -592,7 +592,7 @@ static int genregs32_set(struct task_struct *target, ®_window[pos], (void *) k, sizeof(*k), - FOLL_FORCE | FOLL_WRITE) + FOLL_PTRACE | FOLL_WRITE) != sizeof(*k)) return -EFAULT; k++; @@ -622,7 +622,7 @@ static int genregs32_set(struct task_struct *target, (unsigned long) ®_window[pos], ®, sizeof(reg), - FOLL_FORCE | FOLL_WRITE) + FOLL_PTRACE | FOLL_WRITE) != sizeof(reg)) return -EFAULT; pos++; diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c index 8e2b2552b5ee..7c11da8bbe4c 100644 --- a/arch/x86/kernel/step.c +++ b/arch/x86/kernel/step.c @@ -60,7 +60,7 @@ static int is_setting_trap_flag(struct task_struct *child, struct pt_regs *regs) unsigned long addr = convert_ip_to_linear(child, regs);
copied = access_process_vm(child, addr, opcode, sizeof(opcode), - FOLL_FORCE); + FOLL_PTRACE); for (i = 0; i < copied; i++) { switch (opcode[i]) { /* popf and iret */ diff --git a/arch/x86/um/ptrace_32.c b/arch/x86/um/ptrace_32.c index 0bc4b73a9cde..a40430123448 100644 --- a/arch/x86/um/ptrace_32.c +++ b/arch/x86/um/ptrace_32.c @@ -38,7 +38,7 @@ int is_syscall(unsigned long addr) * in case of singlestepping, if copy_from_user failed. */ n = access_process_vm(current, addr, &instr, sizeof(instr), - FOLL_FORCE); + FOLL_PTRACE); if (n != sizeof(instr)) { printk(KERN_ERR "is_syscall : failed to read " "instruction from 0x%lx\n", addr); diff --git a/arch/x86/um/ptrace_64.c b/arch/x86/um/ptrace_64.c index 289d0159b041..d9f8cba121d6 100644 --- a/arch/x86/um/ptrace_64.c +++ b/arch/x86/um/ptrace_64.c @@ -203,7 +203,7 @@ int is_syscall(unsigned long addr) * in case of singlestepping, if copy_from_user failed. */ n = access_process_vm(current, addr, &instr, sizeof(instr), - FOLL_FORCE); + FOLL_PTRACE); if (n != sizeof(instr)) { printk("is_syscall : failed to read instruction from " "0x%lx\n", addr); diff --git a/fs/exec.c b/fs/exec.c index a0b1f0337a62..e616abec8b82 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -199,7 +199,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos, { struct page *page; int ret; - unsigned int gup_flags = FOLL_FORCE; + unsigned int gup_flags = FOLL_PTRACE;
#ifdef CONFIG_STACK_GROWSUP if (write) { diff --git a/fs/proc/base.c b/fs/proc/base.c index 9e479d7d202b..f84a85a0f36d 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -854,7 +854,7 @@ static ssize_t mem_rw(struct file *file, char __user *buf, if (!mmget_not_zero(mm)) goto free;
- flags = FOLL_FORCE | (write ? FOLL_WRITE : 0); + flags = FOLL_PTRACE | (write ? FOLL_WRITE : 0);
while (count > 0) { size_t this_len = min_t(size_t, count, PAGE_SIZE); diff --git a/include/linux/mm.h b/include/linux/mm.h index e8cc838f42f9..037423431225 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2999,7 +2999,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, #define FOLL_TOUCH 0x02 /* mark page accessed */ #define FOLL_GET 0x04 /* do get_page on page */ #define FOLL_DUMP 0x08 /* give error on hole if it would be zero */ -#define FOLL_FORCE 0x10 /* get_user_pages read/write w/o permission */ +#define FOLL_PTRACE 0x10 /* get_user_pages read/write w/o permission */ #define FOLL_NOWAIT 0x20 /* if a disk transfer is needed, start the IO * and return without waiting upon it */ #define FOLL_NOFAULT 0x80 /* do not fault in pages */ @@ -3151,12 +3151,12 @@ static inline bool gup_must_unshare(struct vm_area_struct *vma, static inline bool gup_can_follow_protnone(unsigned int flags) { /* - * FOLL_FORCE has to be able to make progress even if the VMA is - * inaccessible. Further, FOLL_FORCE access usually does not represent + * FOLL_PTRACE has to be able to make progress even if the VMA is + * inaccessible. Further, FOLL_PTRACE access usually does not represent * application behaviour and we should avoid triggering NUMA hinting * faults. */ - return flags & FOLL_FORCE; + return flags & FOLL_PTRACE; }
typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data); diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index d9e357b7e17c..6f67c1164f10 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -466,7 +466,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, struct vm_area_struct *vma; int ret, is_register, ref_ctr_updated = 0; bool orig_page_huge = false; - unsigned int gup_flags = FOLL_FORCE; + unsigned int gup_flags = FOLL_PTRACE;
is_register = is_swbp_insn(&opcode); uprobe = container_of(auprobe, struct uprobe, arch); @@ -2028,7 +2028,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr) * but we treat this as a 'remote' access since it is * essentially a kernel access to the memory. */ - result = get_user_pages_remote(mm, vaddr, 1, FOLL_FORCE, &page, + result = get_user_pages_remote(mm, vaddr, 1, FOLL_PTRACE, &page, NULL, NULL); if (result < 0) return result; diff --git a/kernel/ptrace.c b/kernel/ptrace.c index 54482193e1ed..81394ebd96aa 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c @@ -632,7 +632,7 @@ int ptrace_readdata(struct task_struct *tsk, unsigned long src, char __user *dst int this_len, retval;
this_len = (len > sizeof(buf)) ? sizeof(buf) : len; - retval = ptrace_access_vm(tsk, src, buf, this_len, FOLL_FORCE); + retval = ptrace_access_vm(tsk, src, buf, this_len, FOLL_PTRACE);
if (!retval) { if (copied) @@ -661,7 +661,7 @@ int ptrace_writedata(struct task_struct *tsk, char __user *src, unsigned long ds if (copy_from_user(buf, src, this_len)) return -EFAULT; retval = ptrace_access_vm(tsk, dst, buf, this_len, - FOLL_FORCE | FOLL_WRITE); + FOLL_PTRACE | FOLL_WRITE); if (!retval) { if (copied) break; @@ -1309,7 +1309,7 @@ int generic_ptrace_peekdata(struct task_struct *tsk, unsigned long addr, unsigned long tmp; int copied;
- copied = ptrace_access_vm(tsk, addr, &tmp, sizeof(tmp), FOLL_FORCE); + copied = ptrace_access_vm(tsk, addr, &tmp, sizeof(tmp), FOLL_PTRACE); if (copied != sizeof(tmp)) return -EIO; return put_user(tmp, (unsigned long __user *)data); @@ -1321,7 +1321,7 @@ int generic_ptrace_pokedata(struct task_struct *tsk, unsigned long addr, int copied;
copied = ptrace_access_vm(tsk, addr, &data, sizeof(data), - FOLL_FORCE | FOLL_WRITE); + FOLL_PTRACE | FOLL_WRITE); return (copied == sizeof(data)) ? 0 : -EIO; }
@@ -1339,7 +1339,7 @@ int compat_ptrace_request(struct task_struct *child, compat_long_t request, case PTRACE_PEEKTEXT: case PTRACE_PEEKDATA: ret = ptrace_access_vm(child, addr, &word, sizeof(word), - FOLL_FORCE); + FOLL_PTRACE); if (ret != sizeof(word)) ret = -EIO; else @@ -1349,7 +1349,7 @@ int compat_ptrace_request(struct task_struct *child, compat_long_t request, case PTRACE_POKETEXT: case PTRACE_POKEDATA: ret = ptrace_access_vm(child, addr, &data, sizeof(data), - FOLL_FORCE | FOLL_WRITE); + FOLL_PTRACE | FOLL_WRITE); ret = (ret != sizeof(data) ? -EIO : 0); break;
diff --git a/mm/gup.c b/mm/gup.c index 01116699c863..323edebd0399 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -482,7 +482,7 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, return -EEXIST; }
-/* FOLL_FORCE can write to even unwritable PTEs in COW mappings. */ +/* FOLL_PTRACE can write to even unwritable PTEs in COW mappings. */ static inline bool can_follow_write_pte(pte_t pte, struct page *page, struct vm_area_struct *vma, unsigned int flags) @@ -491,11 +491,11 @@ static inline bool can_follow_write_pte(pte_t pte, struct page *page, if (pte_write(pte)) return true;
- /* Maybe FOLL_FORCE is set to override it? */ - if (!(flags & FOLL_FORCE)) + /* Maybe FOLL_PTRACE is set to override it? */ + if (!(flags & FOLL_PTRACE)) return false;
- /* But FOLL_FORCE has no effect on shared mappings */ + /* But FOLL_PTRACE has no effect on shared mappings */ if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED)) return false;
@@ -942,7 +942,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
if (write) { if (!(vm_flags & VM_WRITE)) { - if (!(gup_flags & FOLL_FORCE)) + if (!(gup_flags & FOLL_PTRACE)) return -EFAULT; /* * We used to let the write,force case do COW in a @@ -957,7 +957,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags) return -EFAULT; } } else if (!(vm_flags & VM_READ)) { - if (!(gup_flags & FOLL_FORCE)) + if (!(gup_flags & FOLL_PTRACE)) return -EFAULT; /* * Is there actually any vma we can reach here which does not @@ -1455,7 +1455,7 @@ long populate_vma_page_range(struct vm_area_struct *vma, * other than PROT_NONE. */ if (vma_is_accessible(vma)) - gup_flags |= FOLL_FORCE; + gup_flags |= FOLL_PTRACE;
/* * We made sure addr is within a VMA, so the following will @@ -1507,11 +1507,11 @@ long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start, /* * FOLL_TOUCH: Mark page accessed and thereby young; will also mark * the page dirty with FOLL_WRITE -- which doesn't make a - * difference with !FOLL_FORCE, because the page is writable + * difference with !FOLL_PTRACE, because the page is writable * in the page table. * FOLL_HWPOISON: Return -EHWPOISON instead of -EFAULT when we hit * a poisoned page. - * !FOLL_FORCE: Require proper access permissions. + * !FOLL_PTRACE: Require proper access permissions. */ gup_flags = FOLL_TOUCH | FOLL_HWPOISON; if (write) @@ -1601,11 +1601,11 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start, long i;
/* calculate required read or write permissions. - * If FOLL_FORCE is set, we only require the "MAY" flags. + * If FOLL_PTRACE is set, we only require the "MAY" flags. */ vm_flags = (foll_flags & FOLL_WRITE) ? (VM_WRITE | VM_MAYWRITE) : (VM_READ | VM_MAYREAD); - vm_flags &= (foll_flags & FOLL_FORCE) ? + vm_flags &= (foll_flags & FOLL_PTRACE) ? (VM_MAYREAD | VM_MAYWRITE) : (VM_READ | VM_WRITE);
for (i = 0; i < nr_pages; i++) { @@ -1807,7 +1807,7 @@ struct page *get_dump_page(unsigned long addr) if (mmap_read_lock_killable(mm)) return NULL; ret = __get_user_pages_locked(mm, addr, 1, &page, NULL, &locked, - FOLL_FORCE | FOLL_DUMP | FOLL_GET); + FOLL_PTRACE | FOLL_DUMP | FOLL_GET); if (locked) mmap_read_unlock(mm); return (ret == 1) ? page : NULL; @@ -2198,7 +2198,7 @@ EXPORT_SYMBOL(get_user_pages); * * It is functionally equivalent to get_user_pages_fast so * get_user_pages_fast should be used instead if specific gup_flags - * (e.g. FOLL_FORCE) are not required. + * (e.g. FOLL_PTRACE) are not required. */ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages, struct page **pages, unsigned int gup_flags) @@ -2869,7 +2869,7 @@ static int internal_get_user_pages_fast(unsigned long start, int ret;
if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM | - FOLL_FORCE | FOLL_PIN | FOLL_GET | + FOLL_PTRACE | FOLL_PIN | FOLL_GET | FOLL_FAST_ONLY | FOLL_NOFAULT))) return -EINVAL;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c index dec7a7c0eca8..695792d2495d 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1371,7 +1371,7 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma, return pmd_dirty(pmd); }
-/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */ +/* FOLL_PTRACE can write to even unwritable PMDs in COW mappings. */ static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page, struct vm_area_struct *vma, unsigned int flags) @@ -1380,11 +1380,11 @@ static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page, if (pmd_write(pmd)) return true;
- /* Maybe FOLL_FORCE is set to override it? */ - if (!(flags & FOLL_FORCE)) + /* Maybe FOLL_PTRACE is set to override it? */ + if (!(flags & FOLL_PTRACE)) return false;
- /* But FOLL_FORCE has no effect on shared mappings */ + /* But FOLL_PTRACE has no effect on shared mappings */ if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED)) return false;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c index c3aab6d5b7aa..de78ff9db801 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5315,7 +5315,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, struct mmu_notifier_range range;
/* - * hugetlb does not support FOLL_FORCE-style write faults that keep the + * hugetlb does not support FOLL_PTRACE-style write faults that keep the * PTE mapped R/O such as maybe_mkwrite() would do. */ if (WARN_ON_ONCE(!unshare && !(vma->vm_flags & VM_WRITE))) diff --git a/mm/memory.c b/mm/memory.c index 56b21ab1e4d2..8b47cd40a7b9 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3383,7 +3383,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
/* * Private mapping: create an exclusive anonymous page copy if reuse - * is impossible. We might miss VM_WRITE for FOLL_FORCE handling. + * is impossible. We might miss VM_WRITE for FOLL_PTRACE handling. */ if (folio && folio_test_anon(folio)) { /* @@ -5172,7 +5172,7 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma, /* Write faults on read-only mappings are impossible ... */ if (WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE))) return VM_FAULT_SIGSEGV; - /* ... and FOLL_FORCE only applies to COW mappings. */ + /* ... and FOLL_PTRACE only applies to COW mappings. */ if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE) && !is_cow_mapping(vma->vm_flags))) return VM_FAULT_SIGSEGV; diff --git a/mm/util.c b/mm/util.c index b56c92fb910f..04be917bfb1b 100644 --- a/mm/util.c +++ b/mm/util.c @@ -985,7 +985,7 @@ int get_cmdline(struct task_struct *task, char *buffer, int buflen) if (len > buflen) len = buflen;
- res = access_process_vm(task, arg_start, buffer, len, FOLL_FORCE); + res = access_process_vm(task, arg_start, buffer, len, FOLL_PTRACE);
/* * If the nul at the end of args has been overwritten, then @@ -1001,7 +1001,7 @@ int get_cmdline(struct task_struct *task, char *buffer, int buflen) len = buflen - res; res += access_process_vm(task, env_start, buffer+res, len, - FOLL_FORCE); + FOLL_PTRACE); res = strnlen(buffer, res); } } diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c index 31af29f669d2..c52a93631866 100644 --- a/security/tomoyo/domain.c +++ b/security/tomoyo/domain.c @@ -916,7 +916,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned long pos, */ mmap_read_lock(bprm->mm); ret = get_user_pages_remote(bprm->mm, pos, 1, - FOLL_FORCE, &page, NULL, NULL); + FOLL_PTRACE, &page, NULL, NULL); mmap_read_unlock(bprm->mm); if (ret <= 0) return false;
On Wed, Nov 16, 2022 at 2:30 AM David Hildenbrand david@redhat.com wrote:
Let's make it clearer that functionality provided by FOLL_FORCE is really only for ptrace access.
I'm not super-happy about this one.
I do understand the "let's rename the bit so that no new user shows up".
And it's true that the main traditional use is ptrace.
But from the patch itself it becomes obvious that no, it's not *just* ptrace. At least not yet.
It's used for get_arg_page(), which uses it to basically look up (and install) pages in the newly created VM.
Now, I'm not entirely sure why it even uses FOLL_FORCE; I think it might be historical, because the target should always be the new stack vma.
Following the history of it is a bit of a mess, because there's a number of renamings and re-organizations, but it seems to go back to 2007 and commit b6a2fea39318 ("mm: variable length argument support").
Before that commit, we kept our own array of "this is the set of pages that I will install in the new VM". That commit basically just inserts the pages directly into the VM instead, getting rid of the array size limitation.
So at a minimum, I think that FOLL_FORCE would need to be removed before any renaming to FOLL_PTRACE, because that's not some kind of small random case.
It *might* be as simple as just removing it, but maybe there's some reason for having it that I don't immediately see.
There _are_ small random cases too, like get_cmdline(). Maybe that counts as ptrace, but the execve() case most definitely does not.
Linus
On 16.11.22 19:16, Linus Torvalds wrote:
On Wed, Nov 16, 2022 at 2:30 AM David Hildenbrand david@redhat.com wrote:
Let's make it clearer that functionality provided by FOLL_FORCE is really only for ptrace access.
I'm not super-happy about this one.
I do understand the "let's rename the bit so that no new user shows up".
And it's true that the main traditional use is ptrace.
But from the patch itself it becomes obvious that no, it's not *just* ptrace. At least not yet.
It's used for get_arg_page(), which uses it to basically look up (and install) pages in the newly created VM.
Now, I'm not entirely sure why it even uses FOLL_FORCE; I think it might be historical, because the target should always be the new stack vma.
Following the history of it is a bit of a mess, because there's a number of renamings and re-organizations, but it seems to go back to 2007 and commit b6a2fea39318 ("mm: variable length argument support").
Right.
Before that commit, we kept our own array of "this is the set of pages that I will install in the new VM". That commit basically just inserts the pages directly into the VM instead, getting rid of the array size limitation.
So at a minimum, I think that FOLL_FORCE would need to be removed before any renaming to FOLL_PTRACE, because that's not some kind of small random case.
It *might* be as simple as just removing it, but maybe there's some reason for having it that I don't immediately see.
Right, I have the same feeling. It might just be a copy-and-paste legacy leftover.
There _are_ small random cases too, like get_cmdline(). Maybe that counts as ptrace, but the execve() case most definitely does not.
I agree. I'd suggest moving forward without this (last) patch for now and figuring out how to further cleanup FOLL_FORCE usage on top.
@Andrew, if you intend to put this into mm-unstable, please drop the last patch for now.
On Wed, Nov 16, 2022 at 10:16:34AM -0800, Linus Torvalds wrote:
There _are_ small random cases too, like get_cmdline(). Maybe that counts as ptrace, but the execve() case most definitely does not.
Oh, er, why does get_arg_page() even need FOLL_FORCE? This is writing the new stack contents to the nascent bprm->vma, which was newly allocated with VM_STACK_FLAGS, which an arch can override, but they all appear to include VM_WRITE | VM_MAYWRITE.
On Thu, Nov 17, 2022 at 2:58 PM Kees Cook keescook@chromium.org wrote:
Oh, er, why does get_arg_page() even need FOLL_FORCE? This is writing the new stack contents to the nascent bprm->vma, which was newly allocated with VM_STACK_FLAGS, which an arch can override, but they all appear to include VM_WRITE | VM_MAYWRITE.
Yeah, it does seem entirely superfluous.
It's been there since the very beginning (although in that original commit b6a2fea39318 it was there as a '1' to the 'force' argument to get_user_pages()).
I *think* it can be just removed. But as long as it exists, it should most definitely not be renamed to FOLL_PTRACE.
There's a slight worry that it currently hides some other setup issue that makes it matter, since it's been that way so long, but I can't see what it is.
Linus
On Thu, Nov 17, 2022 at 03:20:01PM -0800, Linus Torvalds wrote:
I *think* it can be just removed. But as long as it exists, it should most definitely not be renamed to FOLL_PTRACE.
There's a slight worry that it currently hides some other setup issue that makes it matter, since it's been that way so long, but I can't see what it is.
My test system boots happily with it removed. I'll throw it into -next and see if anything melts...
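Presumably the change being tested looks roughly like the following sketch, based on the get_arg_page() hunk earlier in this thread (not necessarily the exact commit that landed):

--- a/fs/exec.c
+++ b/fs/exec.c
@@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
 	struct page *page;
 	int ret;
-	unsigned int gup_flags = FOLL_FORCE;
+	unsigned int gup_flags = 0;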
On Wed, Nov 16, 2022 at 10:16:34AM -0800, Linus Torvalds wrote:
Following the history of it is a bit of a mess, because there's a number of renamings and re-organizations, but it seems to go back to 2007 and commit b6a2fea39318 ("mm: variable length argument support").
I went back and read parts of the discussions with Ollie, and the .force=1 thing just magically appeared one day when we were sending work-in-progress patches back and forth without mention of where it came from :-/
And I certainly can't remember now..
Looking at it now, I have the same reaction as both you and Kees had: it seems entirely superfluous. So I'm all for trying to remove it.
On Fri, Nov 18, 2022 at 12:09:02PM +0100, Peter Zijlstra wrote:
I went back and read parts of the discussions with Ollie, and the .force=1 thing just magically appeared one day when we were sending work-in-progress patches back and forth without mention of where it came from :-/
And I certainly can't remember now..
Looking at it now, I have the same reaction as both you and Kees had: it seems entirely superfluous. So I'm all for trying to remove it.
Thanks for digging through the history! I've pushed the change to -next: https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git/commit/?h=for...