A memory corruption was reported in [1], with the bisection pointing to patch [2], which enabled per-VMA locks for x86. Based on the reproducer provided in [1], we suspect this is caused by the lack of VMA locking while forking a child process.
Patch 1/2 in the series implements proper VMA locking during fork. I tested the fix locally using the reproducer and was unable to reproduce the memory corruption problem. This fix can potentially regress some fork-heavy workloads. Kernel build time did not show a noticeable regression on a 56-core machine, while a stress test mapping 10000 VMAs and forking 5000 times in a tight loop shows a ~5% regression. If such a fork-time regression is unacceptable, disabling CONFIG_PER_VMA_LOCK should restore its performance. Further optimizations are possible if this regression proves to be problematic.
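For reference, a rough sketch of the kind of stress test described above. This is a hypothetical illustration only, not the exact test that was run; the VMA count, fork count, and timing method are assumptions.

#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define NR_VMAS  10000
#define NR_FORKS 5000

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	struct timespec start, end;

	/* Alternate protections so adjacent mappings do not get merged. */
	for (int i = 0; i < NR_VMAS; i++) {
		int prot = (i & 1) ? PROT_READ : PROT_READ | PROT_WRITE;

		if (mmap(NULL, page, prot, MAP_PRIVATE | MAP_ANONYMOUS,
			 -1, 0) == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
	}

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (int i = 0; i < NR_FORKS; i++) {
		pid_t pid = fork();

		if (pid < 0) {
			perror("fork");
			return 1;
		}
		if (pid == 0)
			_exit(0);	/* child exits immediately */
		waitpid(pid, NULL, 0);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("%d forks over %d VMAs: %.2f s\n", NR_FORKS, NR_VMAS,
	       (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9);
	return 0;
}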
Patch 2/2 disables per-VMA locks until the fix is tested and verified.
Both patches apply cleanly over Linus' ToT and stable 6.4.y branch.
Changes from v2 posted at [3]:
- Move VMA locking before flush_cache_dup_mm, per David Hildenbrand
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624
[2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
[3] https://lore.kernel.org/all/20230705063711.2670599-1-surenb@google.com/
Suren Baghdasaryan (2):
  fork: lock VMAs of the parent process when forking
  mm: disable CONFIG_PER_VMA_LOCK until it's fixed
 kernel/fork.c | 6 ++++++
 mm/Kconfig    | 3 ++-
 2 files changed, 8 insertions(+), 1 deletion(-)
When forking a child process, the parent write-protects an anonymous page and COW-shares it with the child being forked using copy_present_pte(). The parent's TLB is flushed right before we drop the parent's mmap_lock in dup_mmap(). If we get a write fault before that TLB flush in the parent, and we end up replacing that anonymous page in the parent process in do_wp_page() (because it is COW-shared with the child), this might lead to some stale writable TLB entries targeting the wrong (old) page. A similar issue happened in the past with userfaultfd (see the flush_tlb_page() call inside do_wp_page()).

Lock the VMAs of the parent process when forking a child, which prevents concurrent page faults during the fork operation and avoids this issue.

This fix can potentially regress some fork-heavy workloads. Kernel build time did not show a noticeable regression on a 56-core machine, while a stress test mapping 10000 VMAs and forking 5000 times in a tight loop shows a ~5% regression. If such a fork-time regression is unacceptable, disabling CONFIG_PER_VMA_LOCK should restore its performance. Further optimizations are possible if this regression proves to be problematic.
Suggested-by: David Hildenbrand <david@redhat.com>
Reported-by: Jiri Slaby <jirislaby@kernel.org>
Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/
Reported-by: Holger Hoffstätte <holger@applied-asynchrony.com>
Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy...
Reported-by: Jacob Young <jacobly.alt@gmail.com>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624
Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
Cc: stable@vger.kernel.org
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 kernel/fork.c | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index b85814e614a5..403bc2b72301 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		retval = -EINTR;
 		goto fail_uprobe_end;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Disallow any page faults before calling flush_cache_dup_mm */
+	for_each_vma(old_vmi, mpnt)
+		vma_start_write(mpnt);
+	vma_iter_init(&old_vmi, oldmm, 0);
+#endif
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
 	/*
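As an illustration of the window described above, here is the general shape of a workload that can hit it: one thread keeps writing to already-faulted anonymous memory while the main thread forks. This is a hypothetical sketch, not the reproducer referenced in the bug report, and it does not include the verification logic a real reproducer needs.

#include <pthread.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define LEN	(128 * 4096)

static unsigned char *buf;

/* Keep dirtying already-mapped anonymous memory; during dup_mmap() these
 * writes race with the write-protection + COW-sharing done for the child. */
static void *writer(void *arg)
{
	for (unsigned char val = 1;; val++)
		memset(buf, val, LEN);
	return NULL;
}

int main(void)
{
	pthread_t t;

	buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0, LEN);	/* fault everything in up front */

	pthread_create(&t, NULL, writer, NULL);

	for (int i = 0; i < 10000; i++) {
		pid_t pid = fork();

		if (pid == 0)
			_exit(0);
		if (pid > 0)
			waitpid(pid, NULL, 0);
		/*
		 * A real reproducer would also verify, in the parent and/or
		 * the child, that none of the writer's stores were lost or
		 * landed in the wrong page; that checking is omitted here.
		 */
	}
	return 0;
}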
On 05.07.23 19:12, Suren Baghdasaryan wrote:
When forking a child process, parent write-protects an anonymous page and COW-shares it with the child being forked using copy_present_pte(). Parent's TLB is flushed right before we drop the parent's mmap_lock in dup_mmap(). If we get a write-fault before that TLB flush in the parent, and we end up replacing that anonymous page in the parent process in do_wp_page() (because, COW-shared with the child), this might lead to some stale writable TLB entries targeting the wrong (old) page. Similar issue happened in the past with userfaultfd (see flush_tlb_page() call inside do_wp_page()). Lock VMAs of the parent process when forking a child, which prevents concurrent page faults during fork operation and avoids this issue. This fix can potentially regress some fork-heavy workloads. Kernel build time did not show noticeable regression on a 56-core machine while a stress test mapping 10000 VMAs and forking 5000 times in a tight loop shows ~5% regression. If such fork time regression is unacceptable, disabling CONFIG_PER_VMA_LOCK should restore its performance. Further optimizations are possible if this regression proves to be problematic.
Suggested-by: David Hildenbrand david@redhat.com Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Holger Hoffstätte holger@applied-asynchrony.com Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy... Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
kernel/fork.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index b85814e614a5..403bc2b72301 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		retval = -EINTR;
 		goto fail_uprobe_end;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Disallow any page faults before calling flush_cache_dup_mm */
+	for_each_vma(old_vmi, mpnt)
+		vma_start_write(mpnt);
+	vma_iter_init(&old_vmi, oldmm, 0);
+#endif
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
 	/*
The old version was most probably fine as well, but this certainly looks even safer.
Acked-by: David Hildenbrand david@redhat.com
On Wed, Jul 5, 2023 at 10:14 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
When forking a child process, parent write-protects an anonymous page and COW-shares it with the child being forked using copy_present_pte(). Parent's TLB is flushed right before we drop the parent's mmap_lock in dup_mmap(). If we get a write-fault before that TLB flush in the parent, and we end up replacing that anonymous page in the parent process in do_wp_page() (because, COW-shared with the child), this might lead to some stale writable TLB entries targeting the wrong (old) page. Similar issue happened in the past with userfaultfd (see flush_tlb_page() call inside do_wp_page()). Lock VMAs of the parent process when forking a child, which prevents concurrent page faults during fork operation and avoids this issue. This fix can potentially regress some fork-heavy workloads. Kernel build time did not show noticeable regression on a 56-core machine while a stress test mapping 10000 VMAs and forking 5000 times in a tight loop shows ~5% regression. If such fork time regression is unacceptable, disabling CONFIG_PER_VMA_LOCK should restore its performance. Further optimizations are possible if this regression proves to be problematic.
Suggested-by: David Hildenbrand david@redhat.com Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Holger Hoffstätte holger@applied-asynchrony.com Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy... Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
kernel/fork.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index b85814e614a5..403bc2b72301 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		retval = -EINTR;
 		goto fail_uprobe_end;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Disallow any page faults before calling flush_cache_dup_mm */
+	for_each_vma(old_vmi, mpnt)
+		vma_start_write(mpnt);
+	vma_iter_init(&old_vmi, oldmm, 0);
+#endif
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
 	/*
The old version was most probably fine as well, but this certainly looks even safer.
Acked-by: David Hildenbrand david@redhat.com
Thanks!
-- Cheers,
David / dhildenb
* Suren Baghdasaryan surenb@google.com [230705 13:24]:
On Wed, Jul 5, 2023 at 10:14 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
When forking a child process, parent write-protects an anonymous page and COW-shares it with the child being forked using copy_present_pte(). Parent's TLB is flushed right before we drop the parent's mmap_lock in dup_mmap(). If we get a write-fault before that TLB flush in the parent, and we end up replacing that anonymous page in the parent process in do_wp_page() (because, COW-shared with the child), this might lead to some stale writable TLB entries targeting the wrong (old) page. Similar issue happened in the past with userfaultfd (see flush_tlb_page() call inside do_wp_page()). Lock VMAs of the parent process when forking a child, which prevents concurrent page faults during fork operation and avoids this issue. This fix can potentially regress some fork-heavy workloads. Kernel build time did not show noticeable regression on a 56-core machine while a stress test mapping 10000 VMAs and forking 5000 times in a tight loop shows ~5% regression. If such fork time regression is unacceptable, disabling CONFIG_PER_VMA_LOCK should restore its performance. Further optimizations are possible if this regression proves to be problematic.
Suggested-by: David Hildenbrand david@redhat.com Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Holger Hoffstätte holger@applied-asynchrony.com Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy... Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
kernel/fork.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index b85814e614a5..403bc2b72301 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		retval = -EINTR;
 		goto fail_uprobe_end;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Disallow any page faults before calling flush_cache_dup_mm */
+	for_each_vma(old_vmi, mpnt)
+		vma_start_write(mpnt);
+	vma_iter_init(&old_vmi, oldmm, 0);
vma_iter_set(&old_vmi, 0) is probably what you want here.
+#endif
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
 	/*
The old version was most probably fine as well, but this certainly looks even safer.
Acked-by: David Hildenbrand david@redhat.com
I think this is overkill; vma_start_write() alone will synchronize with any readers, since it takes the per-VMA rw semaphore in write mode. Anything faulting will need to finish before the fork continues, and faults during the fork will fall back to a read lock of the mmap_lock. Is there a possibility of a populate happening outside the mmap write lock/vma_lock?
Was your benchmarking done with this loop at the start?
Thanks, Liam
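To make the synchronization described above more concrete, here is a heavily simplified userspace model of the scheme, with pthread rwlocks standing in for the per-VMA lock and the mmap_lock. In the real kernel vma_start_write() marks the VMA so that lock-free readers fail until the mmap_lock is released rather than holding the rwsem throughout, so treat this purely as an illustration of the ordering, not as the mm implementation.

#include <pthread.h>
#include <stdbool.h>

struct vma {
	pthread_rwlock_t vma_lock;		/* models vma->vm_lock */
};

struct mm {
	pthread_rwlock_t mmap_lock;		/* models mm->mmap_lock */
	struct vma *vmas;
	int nr_vmas;
};

/* Fault path: try the per-VMA lock first, else fall back to mmap_lock. */
static bool handle_fault(struct mm *mm, struct vma *vma)
{
	if (pthread_rwlock_tryrdlock(&vma->vma_lock) == 0) {
		/* ... handle the fault under the per-VMA lock ... */
		pthread_rwlock_unlock(&vma->vma_lock);
		return true;
	}

	/* Fallback: this waits until the fork path drops the mmap_lock. */
	pthread_rwlock_rdlock(&mm->mmap_lock);
	/* ... handle the fault under the mmap_lock ... */
	pthread_rwlock_unlock(&mm->mmap_lock);
	return true;
}

/* Fork path: write-lock every VMA so the fast path above cannot succeed. */
static void model_dup_mmap(struct mm *mm)
{
	pthread_rwlock_wrlock(&mm->mmap_lock);

	for (int i = 0; i < mm->nr_vmas; i++)
		pthread_rwlock_wrlock(&mm->vmas[i].vma_lock);

	/* ... copy page tables, flush the parent's TLB ... */

	for (int i = 0; i < mm->nr_vmas; i++)
		pthread_rwlock_unlock(&mm->vmas[i].vma_lock);
	pthread_rwlock_unlock(&mm->mmap_lock);
}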
On Wed, Jul 5, 2023 at 4:07 PM Liam R. Howlett Liam.Howlett@oracle.com wrote:
- Suren Baghdasaryan surenb@google.com [230705 13:24]:
On Wed, Jul 5, 2023 at 10:14 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
When forking a child process, parent write-protects an anonymous page and COW-shares it with the child being forked using copy_present_pte(). Parent's TLB is flushed right before we drop the parent's mmap_lock in dup_mmap(). If we get a write-fault before that TLB flush in the parent, and we end up replacing that anonymous page in the parent process in do_wp_page() (because, COW-shared with the child), this might lead to some stale writable TLB entries targeting the wrong (old) page. Similar issue happened in the past with userfaultfd (see flush_tlb_page() call inside do_wp_page()). Lock VMAs of the parent process when forking a child, which prevents concurrent page faults during fork operation and avoids this issue. This fix can potentially regress some fork-heavy workloads. Kernel build time did not show noticeable regression on a 56-core machine while a stress test mapping 10000 VMAs and forking 5000 times in a tight loop shows ~5% regression. If such fork time regression is unacceptable, disabling CONFIG_PER_VMA_LOCK should restore its performance. Further optimizations are possible if this regression proves to be problematic.
Suggested-by: David Hildenbrand david@redhat.com Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Holger Hoffstätte holger@applied-asynchrony.com Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy... Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
kernel/fork.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index b85814e614a5..403bc2b72301 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		retval = -EINTR;
 		goto fail_uprobe_end;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Disallow any page faults before calling flush_cache_dup_mm */
+	for_each_vma(old_vmi, mpnt)
+		vma_start_write(mpnt);
+	vma_iter_init(&old_vmi, oldmm, 0);
vma_iter_set(&old_vmi, 0) is probably what you want here.
Ok, I send another version with that.
+#endif
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
 	/*
The old version was most probably fine as well, but this certainly looks even safer.
Acked-by: David Hildenbrand david@redhat.com
I think this is overkill and believe setting the vma_start_write() will synchronize with any readers since it's using the per-vma rw semaphore in write mode. Anything faulting will need to finish before the fork continues and faults during the fork will fall back to a read lock of the mmap_lock. Is there a possibility of populate happening outside the mmap_write lock/vma_lock?
Yes, I think we understand the loss of concurrency in the parent's ability to fault pages while forking. Is that a real problem though?
Was your benchmarking done with this loop at the start?
No, it was done with the initial version where the lock was inside the existing loop. I just reran the benchmark and while kernel compilation times did not change, the stress test shows ~7% regression now, probably due to that additional tree walk. I'll update that number in the new patch. Thanks!
Thanks, Liam
* Suren Baghdasaryan surenb@google.com [230705 20:20]:
On Wed, Jul 5, 2023 at 4:07 PM Liam R. Howlett Liam.Howlett@oracle.com wrote:
- Suren Baghdasaryan surenb@google.com [230705 13:24]:
On Wed, Jul 5, 2023 at 10:14 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
When forking a child process, parent write-protects an anonymous page and COW-shares it with the child being forked using copy_present_pte(). Parent's TLB is flushed right before we drop the parent's mmap_lock in dup_mmap(). If we get a write-fault before that TLB flush in the parent, and we end up replacing that anonymous page in the parent process in do_wp_page() (because, COW-shared with the child), this might lead to some stale writable TLB entries targeting the wrong (old) page. Similar issue happened in the past with userfaultfd (see flush_tlb_page() call inside do_wp_page()). Lock VMAs of the parent process when forking a child, which prevents concurrent page faults during fork operation and avoids this issue. This fix can potentially regress some fork-heavy workloads. Kernel build time did not show noticeable regression on a 56-core machine while a stress test mapping 10000 VMAs and forking 5000 times in a tight loop shows ~5% regression. If such fork time regression is unacceptable, disabling CONFIG_PER_VMA_LOCK should restore its performance. Further optimizations are possible if this regression proves to be problematic.
Suggested-by: David Hildenbrand david@redhat.com Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Holger Hoffstätte holger@applied-asynchrony.com Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy... Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
kernel/fork.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index b85814e614a5..403bc2b72301 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		retval = -EINTR;
 		goto fail_uprobe_end;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Disallow any page faults before calling flush_cache_dup_mm */
+	for_each_vma(old_vmi, mpnt)
+		vma_start_write(mpnt);
+	vma_iter_init(&old_vmi, oldmm, 0);
vma_iter_set(&old_vmi, 0) is probably what you want here.
Ok, I send another version with that.
+#endif
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
 	/*
The old version was most probably fine as well, but this certainly looks even safer.
Acked-by: David Hildenbrand david@redhat.com
I think this is overkill and believe setting the vma_start_write() will synchronize with any readers since it's using the per-vma rw semaphore in write mode. Anything faulting will need to finish before the fork continues and faults during the fork will fall back to a read lock of the mmap_lock. Is there a possibility of populate happening outside the mmap_write lock/vma_lock?
Yes, I think we understand the loss of concurrency in the parent's ability to fault pages while forking. Is that a real problem though?
No, I don't think that part is an issue at all. I wanted to be sure I didn't miss something.
Was your benchmarking done with this loop at the start?
No, it was done with the initial version where the lock was inside the existing loop. I just reran the benchmark and while kernel compilation times did not change, the stress test shows ~7% regression now, probably due to that additional tree walk. I'll update that number in the new patch.
..but I expected a performance hit and didn't understand why you updated the patch this way. It would probably only happen on really big trees though and, ah, the largest trees I see are from the android side. I'd wager the impact will be felt more when larger trees encounter smaller CPU cache.
Thanks, Liam
On Wed, Jul 5, 2023 at 5:33 PM Liam R. Howlett Liam.Howlett@oracle.com wrote:
- Suren Baghdasaryan surenb@google.com [230705 20:20]:
On Wed, Jul 5, 2023 at 4:07 PM Liam R. Howlett Liam.Howlett@oracle.com wrote:
- Suren Baghdasaryan surenb@google.com [230705 13:24]:
On Wed, Jul 5, 2023 at 10:14 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
When forking a child process, parent write-protects an anonymous page and COW-shares it with the child being forked using copy_present_pte(). Parent's TLB is flushed right before we drop the parent's mmap_lock in dup_mmap(). If we get a write-fault before that TLB flush in the parent, and we end up replacing that anonymous page in the parent process in do_wp_page() (because, COW-shared with the child), this might lead to some stale writable TLB entries targeting the wrong (old) page. Similar issue happened in the past with userfaultfd (see flush_tlb_page() call inside do_wp_page()). Lock VMAs of the parent process when forking a child, which prevents concurrent page faults during fork operation and avoids this issue. This fix can potentially regress some fork-heavy workloads. Kernel build time did not show noticeable regression on a 56-core machine while a stress test mapping 10000 VMAs and forking 5000 times in a tight loop shows ~5% regression. If such fork time regression is unacceptable, disabling CONFIG_PER_VMA_LOCK should restore its performance. Further optimizations are possible if this regression proves to be problematic.
Suggested-by: David Hildenbrand david@redhat.com Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Holger Hoffstätte holger@applied-asynchrony.com Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy... Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
kernel/fork.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index b85814e614a5..403bc2b72301 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		retval = -EINTR;
 		goto fail_uprobe_end;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/* Disallow any page faults before calling flush_cache_dup_mm */
+	for_each_vma(old_vmi, mpnt)
+		vma_start_write(mpnt);
+	vma_iter_init(&old_vmi, oldmm, 0);
vma_iter_set(&old_vmi, 0) is probably what you want here.
Ok, I send another version with that.
+#endif
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
 	/*
The old version was most probably fine as well, but this certainly looks even safer.
Acked-by: David Hildenbrand david@redhat.com
I think this is overkill and believe setting the vma_start_write() will synchronize with any readers since it's using the per-vma rw semaphore in write mode. Anything faulting will need to finish before the fork continues and faults during the fork will fall back to a read lock of the mmap_lock. Is there a possibility of populate happening outside the mmap_write lock/vma_lock?
Yes, I think we understand the loss of concurrency in the parent's ability to fault pages while forking. Is that a real problem though?
No, I don't think that part is an issue at all. I wanted to be sure I didn't miss something.
Was your benchmarking done with this loop at the start?
No, it was done with the initial version where the lock was inside the existing loop. I just reran the benchmark and while kernel compilation times did not change, the stress test shows ~7% regression now, probably due to that additional tree walk. I'll update that number in the new patch.
..but I expected a performance hit and didn't understand why you updated the patch this way. It would probably only happen on really big trees though and, ah, the largest trees I see are from the android side. I'd wager the impact will be felt more when larger trees encounter smaller CPU cache.
My test has 10000 vmas and even for Android that's a stretch (the highest number I've seen was ~4000). We can think of a less restrictive solution if this proves to be a problem for some workloads but for now I would prefer to fix this in a safe way and possibly improve that later. The alternative is to revert this completely and we get no more testing until the next release.
Thanks, Liam
A memory corruption was reported in [1], with the bisection pointing to patch [2], which enabled per-VMA locks for x86. Disable the per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624
[2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby <jirislaby@kernel.org>
Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/
Reported-by: Jacob Young <jacobly.alt@gmail.com>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624
Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
Cc: stable@vger.kernel.org
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/Kconfig | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
I tested the fix using the only reproducer provided in the reports plus kernel compilation and my fork stress test. All looked good and stable but I don't know if other reports had the same issue or something different. I think the urgency to disable the feature stems from the timeline being very close to when distributions will start using the 6.4 stable kernel version.
-- Cheers,
David / dhildenb
On 05.07.23 19:22, Suren Baghdasaryan wrote:
On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
I tested the fix using the only reproducer provided in the reports plus kernel compilation and my fork stress test. All looked good and stable but I don't know if other reports had the same issue or something different.
Can you point me at the other reports, so I can quickly scan them?
On Wed, Jul 5, 2023 at 10:24 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:22, Suren Baghdasaryan wrote:
On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
I tested the fix using the only reproducer provided in the reports plus kernel compilation and my fork stress test. All looked good and stable but I don't know if other reports had the same issue or something different.
Can you point me at the other reports, so I can quickly scan them?
by Jacob Young: https://bugzilla.kernel.org/show_bug.cgi?id=217624
by Jiri Slaby: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/
by Holger Hoffstätte: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy... (only saying that Firefox started crashing after upgrading to 6.4.1)
-- Cheers,
David / dhildenb
On Wed, Jul 5, 2023 at 11:09 AM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 10:24 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:22, Suren Baghdasaryan wrote:
On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
I tested the fix using the only reproducer provided in the reports plus kernel compilation and my fork stress test. All looked good and stable but I don't know if other reports had the same issue or something different.
Can you point me at the other reports, so I can quickly scan them?
by Jacob Young: https://bugzilla.kernel.org/show_bug.cgi?id=217624
by Jiri Slaby: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/
From strace in https://lore.kernel.org/all/f7ad7a42-13c8-a486-d0b7-01d5acf01e13@kernel.org/ looks like clone3() was involved, so this seems quite likely to be the same issue I think.
by Holger Hoffstätte: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asy... only saying that Firefox started crashing after upgrading to 6.4.1
-- Cheers,
David / dhildenb
On Wed, Jul 05, 2023 at 10:22:27AM -0700, Suren Baghdasaryan wrote:
On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
I tested the fix using the only reproducer provided in the reports plus kernel compilation and my fork stress test. All looked good and stable but I don't know if other reports had the same issue or something different.
The commit log seems slightly confusing. It mostly says the bug is still not solved, but I assume patch 1 is the current "fix"; it's just not clear whether there are any other potential issues?
According to the stable tree rules:
- It must fix a problem that causes a build error (but not for things marked CONFIG_BROKEN), an oops, a hang, data corruption, a real security issue, or some "oh, that's not good" issue. In short, something critical.
I think it means the vma lock will never be fixed in 6.4, and it can't be (because after this patch it'll be BROKEN, this patch is Cc'ed to stable, and we can't fix BROKEN things in stable trees).
Totally no problem I see, just to make sure this is what you wanted..
There will still be an attempt at a final fix, am I right? As IIRC, allowing page faults during fork() is one of the major goals of the vma lock.
Thanks,
On Wed, Jul 5, 2023 at 1:25 PM Peter Xu peterx@redhat.com wrote:
On Wed, Jul 05, 2023 at 10:22:27AM -0700, Suren Baghdasaryan wrote:
On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
I tested the fix using the only reproducer provided in the reports plus kernel compilation and my fork stress test. All looked good and stable but I don't know if other reports had the same issue or something different.
The commit log seems slightly confusing. It mostly says the bug was still not solved, but I assume patch 1 is the current "fix", it's just not clear whether there's any other potential issues?
According to the stable tree rules:
- It must fix a problem that causes a build error (but not for things marked CONFIG_BROKEN), an oops, a hang, data corruption, a real security issue, or some "oh, that's not good" issue. In short, something critical.
I think it means vma lock will never be fixed in 6.4, and it can't (because after this patch it'll be BROKEN, and this patch copies stable, and we can't fix BROKEN things in stables).
I was hoping we could re-enable VMA locks in 6.4 once we get more confirmations that the problem is gone. Is that not possible once the BROKEN dependency is merged?
Totally no problem I see, just to make sure this is what you wanted..
There'll still try to be a final fix, am I right? As IIRC allowing page faults during fork() is one of the major goals of vma lock.
I think we can further optimize the locking rules here (see discussion in https://lore.kernel.org/all/20230703182150.2193578-1-surenb@google.com/) but for now we want the most effective and simple way to fix the memory corruption problem. Thanks, Suren.
Thanks,
-- Peter Xu
On Wed, 5 Jul 2023 13:33:26 -0700 Suren Baghdasaryan surenb@google.com wrote:
I was hoping we could re-enable VMA locks in 6.4 once we get more confirmations that the problem is gone. Is that not possible once the BROKEN dependency is merged?
I think "no". By doing this we're effectively backporting a minor performance optimization, which isn't a thing we'd normally do.
On Wed, Jul 5, 2023 at 5:24 PM Andrew Morton akpm@linux-foundation.org wrote:
On Wed, 5 Jul 2023 13:33:26 -0700 Suren Baghdasaryan surenb@google.com wrote:
I was hoping we could re-enable VMA locks in 6.4 once we get more confirmations that the problem is gone. Is that not possible once the BROKEN dependency is merged?
I think "no". By doing this we're effectively backporting a minor performance optimization, which isn't a thing we'd normally do.
In that case, maybe for 6.4 we send the fix and only disable it by default without marking BROKEN? That way we still have a way to enable it if desired?
On Wed, Jul 5, 2023 at 5:30 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 5:24 PM Andrew Morton akpm@linux-foundation.org wrote:
On Wed, 5 Jul 2023 13:33:26 -0700 Suren Baghdasaryan surenb@google.com wrote:
I was hoping we could re-enable VMA locks in 6.4 once we get more confirmations that the problem is gone. Is that not possible once the BROKEN dependency is merged?
I think "no". By doing this we're effectively backporting a minor performance optimization, which isn't a thing we'd normally do.
In that case, maybe for 6.4 we send the fix and only disable it by default without marking BROKEN? That way we still have a way to enable it if desired?
I'm preparing the next version with Liam's corrections. If the above option I suggested is acceptable I can send a modified second patch which would not have BROKEN dependency.
On Wed, 5 Jul 2023 17:32:09 -0700 Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 5:30 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 5:24 PM Andrew Morton akpm@linux-foundation.org wrote:
On Wed, 5 Jul 2023 13:33:26 -0700 Suren Baghdasaryan surenb@google.com wrote:
I was hoping we could re-enable VMA locks in 6.4 once we get more confirmations that the problem is gone. Is that not possible once the BROKEN dependency is merged?
I think "no". By doing this we're effectively backporting a minor performance optimization, which isn't a thing we'd normally do.
In that case, maybe for 6.4 we send the fix and only disable it by default without marking BROKEN? That way we still have a way to enable it if desired?
I'm preparing the next version with Liam's corrections. If the above option I suggested is acceptable I can send a modified second patch which would not have BROKEN dependency.
I think just mark it broken and move on. At some later time we can consider backporting the fixes into 6.4.x and reenabling, but I don't think it's likely that we'll do this.
On Wed, Jul 5, 2023 at 5:44 PM Andrew Morton akpm@linux-foundation.org wrote:
On Wed, 5 Jul 2023 17:32:09 -0700 Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 5:30 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 5:24 PM Andrew Morton akpm@linux-foundation.org wrote:
On Wed, 5 Jul 2023 13:33:26 -0700 Suren Baghdasaryan surenb@google.com wrote:
I was hoping we could re-enable VMA locks in 6.4 once we get more confirmations that the problem is gone. Is that not possible once the BROKEN dependency is merged?
I think "no". By doing this we're effectively backporting a minor performance optimization, which isn't a thing we'd normally do.
In that case, maybe for 6.4 we send the fix and only disable it by default without marking BROKEN? That way we still have a way to enable it if desired?
I'm preparing the next version with Liam's corrections. If the above option I suggested is acceptable I can send a modified second patch which would not have BROKEN dependency.
I think just mark it broken and move on. At some later time we can consider backporting the fixes into 6.4.x and reenabling, but I don't think it's likely that we'll do this.
Uh, ok. I'll send the next version shortly with the patch fixing the issue and another one marking it BROKEN. Hopefully in the next version we can roll it out more carefully, removing the BROKEN dependency but keeping it disabled by default?
On Wed, Jul 5, 2023 at 5:49 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 5:44 PM Andrew Morton akpm@linux-foundation.org wrote:
On Wed, 5 Jul 2023 17:32:09 -0700 Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 5:30 PM Suren Baghdasaryan surenb@google.com wrote:
On Wed, Jul 5, 2023 at 5:24 PM Andrew Morton akpm@linux-foundation.org wrote:
On Wed, 5 Jul 2023 13:33:26 -0700 Suren Baghdasaryan surenb@google.com wrote:
I was hoping we could re-enable VMA locks in 6.4 once we get more confirmations that the problem is gone. Is that not possible once the BROKEN dependency is merged?
I think "no". By doing this we're effectively backporting a minor performance optimization, which isn't a thing we'd normally do.
In that case, maybe for 6.4 we send the fix and only disable it by default without marking BROKEN? That way we still have a way to enable it if desired?
I'm preparing the next version with Liam's corrections. If the above option I suggested is acceptable I can send a modified second patch which would not have BROKEN dependency.
I think just mark it broken and move on. At some later time we can consider backporting the fixes into 6.4.x and reenabling, but I don't think it's likely that we'll do this.
Uh, ok. I'll send the next version shortly with the patch fixing the issue and another one marking it BROKEN. Hopefully in the next version we can roll it our more carefully, removing BROKEN dependency but keeping it disabled by default?
v4 is posted at https://lore.kernel.org/all/20230706011400.2949242-1-surenb@google.com/ Thanks, Suren.
On 05.07.23 22:25, Peter Xu wrote:
On Wed, Jul 05, 2023 at 10:22:27AM -0700, Suren Baghdasaryan wrote:
On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
I tested the fix using the only reproducer provided in the reports plus kernel compilation and my fork stress test. All looked good and stable but I don't know if other reports had the same issue or something different.
The commit log seems slightly confusing. It mostly says the bug was still not solved, but I assume patch 1 is the current "fix", it's just not clear whether there's any other potential issues?
According to the stable tree rules:
- It must fix a problem that causes a build error (but not for things marked CONFIG_BROKEN), an oops, a hang, data corruption, a real security issue, or some "oh, that's not good" issue. In short, something critical.
I think it means vma lock will never be fixed in 6.4, and it can't (because after this patch it'll be BROKEN, and this patch copies stable, and we can't fix BROKEN things in stables).
Totally no problem I see, just to make sure this is what you wanted..
There'll still try to be a final fix, am I right? As IIRC allowing page faults during fork() is one of the major goals of vma lock.
At least not that I am aware of (and people who care about that should really work on scalable fork() alternatives, like that io_uring fork() thingy).
My understanding is that CONFIG_PER_VMA_LOCK wants to speed up concurrent page faults *after* fork() [or rather, after new process creation], IOW, when we have a lot of mmap() activity going on while some threads of the new process are already active and don't actually touch what's getting newly mmapped.
On Wed, Jul 5, 2023 at 1:37 PM David Hildenbrand david@redhat.com wrote:
On 05.07.23 22:25, Peter Xu wrote:
On Wed, Jul 05, 2023 at 10:22:27AM -0700, Suren Baghdasaryan wrote:
On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand david@redhat.com wrote:
On 05.07.23 19:12, Suren Baghdasaryan wrote:
A memory corruption was reported in [1] with bisection pointing to the patch [2] enabling per-VMA locks for x86. Disable per-VMA locks config to prevent this issue while the problem is being investigated. This is expected to be a temporary measure.
[1] https://bugzilla.kernel.org/show_bug.cgi?id=217624 [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
Reported-by: Jiri Slaby jirislaby@kernel.org Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/ Reported-by: Jacob Young jacobly.alt@gmail.com Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624 Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first") Cc: stable@vger.kernel.org Signed-off-by: Suren Baghdasaryan surenb@google.com
mm/Kconfig | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 09130434e30d..0abc6c71dd89 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
 	def_bool n

 config PER_VMA_LOCK
-	def_bool y
+	bool "Enable per-vma locking during page fault handling."
 	depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
+	depends on BROKEN
 	help
 	  Allow per-vma locking during page fault handling.
Do we have any testing results (that don't reveal other issues :) ) for patch #1? Not sure if we really want to mark it broken if patch #1 fixes the issue.
I tested the fix using the only reproducer provided in the reports plus kernel compilation and my fork stress test. All looked good and stable but I don't know if other reports had the same issue or something different.
The commit log seems slightly confusing. It mostly says the bug was still not solved, but I assume patch 1 is the current "fix", it's just not clear whether there's any other potential issues?
According to the stable tree rules:
- It must fix a problem that causes a build error (but not for things marked CONFIG_BROKEN), an oops, a hang, data corruption, a real security issue, or some "oh, that's not good" issue. In short, something critical.
I think it means vma lock will never be fixed in 6.4, and it can't (because after this patch it'll be BROKEN, and this patch copies stable, and we can't fix BROKEN things in stables).
Totally no problem I see, just to make sure this is what you wanted..
There'll still try to be a final fix, am I right? As IIRC allowing page faults during fork() is one of the major goals of vma lock.
At least not that I am aware of (and people who care about that should really work on scalable fork() alternatives, like that io_uring fork() thingy).
My understanding is that CONFIG_PER_VMA_LOCK wants to speed up page concurrent page faults *after* fork() [or rather, after new process creation], IOW, when we have a lot of mmap() activity going on while some threads of the new process are already active and don't actually touch what's getting newly mmaped.
Getting as much concurrency as we can is the goal. If we can allow some page faults during fork, I would take that too. But for now let's deploy the simplest and safest approach.
-- Cheers,
David / dhildenb
On Wed, Jul 05, 2023 at 04:25:21PM -0400, Peter Xu wrote:
There'll still try to be a final fix, am I right? As IIRC allowing page faults during fork() is one of the major goals of vma lock.
Good grief, no. Why would we want to optimise something that happens so rarely? The goal is, as usual, more performance. Satisfying page faults while mmap()/munmap()/mprotect() are happening is worthwhile. Those happen a lot more than fork().
In this case though, there's also a priority-inversion problem that we're trying to solve, where process A (high priority) calls mmap() while process B (low priority) is reading /proc/$pid/smaps, and now (because rwsems are fair) none of process A's other threads can satisfy any page faults until process B is scheduled.
Where on earth did you get the idea that we cared even a little bit about the performance of page fault during fork()?
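As a rough illustration of the contention pattern described above (a hypothetical sketch, not a benchmark from this thread): one thread keeps taking the mmap_lock for write via mmap()/munmap() while other threads keep taking page faults. Without per-VMA locks, every one of those faults also queues on the mmap_lock in read mode, together with an external reader such as `cat /proc/<pid>/smaps` run from another terminal.

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FAULT_LEN	(64UL << 20)	/* 64 MiB, faulted in lazily */

/* Churn the address space: mmap_lock is taken for write on every cycle. */
static void *mapper(void *arg)
{
	long page = sysconf(_SC_PAGESIZE);

	for (;;) {
		void *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p != MAP_FAILED)
			munmap(p, page);
	}
	return NULL;
}

/* Take page faults over and over on a private region. */
static void *faulter(void *arg)
{
	unsigned char *p = arg;

	for (;;) {
		memset(p, 1, FAULT_LEN);		/* fault pages in */
		madvise(p, FAULT_LEN, MADV_DONTNEED);	/* and drop them */
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	printf("pid %d\n", getpid());
	pthread_create(&t[0], NULL, mapper, NULL);
	for (int i = 1; i < 4; i++) {
		void *p = mmap(NULL, FAULT_LEN, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		pthread_create(&t[i], NULL, faulter, p);
	}
	pthread_join(t[0], NULL);	/* runs until interrupted */
	return 0;
}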
On Wed, Jul 5, 2023 at 2:28 PM Matthew Wilcox willy@infradead.org wrote:
On Wed, Jul 05, 2023 at 04:25:21PM -0400, Peter Xu wrote:
There'll still try to be a final fix, am I right? As IIRC allowing page faults during fork() is one of the major goals of vma lock.
Good grief, no. Why would we want to optimise something that happens so rarely? The goal is, as usual, more performance. Satisfying page faults while mmap()/munmap()/mprotect() are happening is worthwhile. Those happen a lot more than fork().
In this case though, there's also a priority-inversion problem that we're trying to solve where process A (high priority) calls mmap() while process B (low priority) is reading /proc/$pid/smaps and now (because rwsems are fair), none of process A's other threads can satisy any page faults until process B is scheduled.
Where on earth did you get the idea that we cared even a little bit about the performance of page fault during fork()?
I think the original reasoning, for Android to improve app launch time, could have made that impression. However, the speedup there comes not from allowing page faults in the parent process (Zygote) while it forks a child, but rather from the child being able to fault pages and establish its mappings concurrently.
On Wed, Jul 05, 2023 at 10:27:56PM +0100, Matthew Wilcox wrote:
On Wed, Jul 05, 2023 at 04:25:21PM -0400, Peter Xu wrote:
There'll still try to be a final fix, am I right? As IIRC allowing page faults during fork() is one of the major goals of vma lock.
Good grief, no. Why would we want to optimise something that happens so rarely? The goal is, as usual, more performance. Satisfying page faults while mmap()/munmap()/mprotect() are happening is worthwhile. Those happen a lot more than fork().
In this case though, there's also a priority-inversion problem that we're trying to solve where process A (high priority) calls mmap() while process B (low priority) is reading /proc/$pid/smaps and now (because rwsems are fair), none of process A's other threads can satisy any page faults until process B is scheduled.
Is it possible to extend vma lock to things like smaps?
Where on earth did you get the idea that we cared even a little bit about the performance of page fault during fork()?
From memory: I was talking to someone during the conference who mentioned such a use case. But my memory may simply be wrong; in that case it's my fault, but I hope it's still fine to just ask here.