On Wed, 16 Oct 2024 at 20:38, Linus Walleij <linus.walleij@linaro.org> wrote:
On Wed, Oct 16, 2024 at 1:33 PM Ard Biesheuvel <ardb@kernel.org> wrote:
@@ -125,6 +126,12 @@ void __check_vmalloc_seq(struct mm_struct *mm)
(...)
Then, there is another part to this: in arch/arm/kernel/traps.c, we have the following code:

void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
	if (start < VMALLOC_END && end > VMALLOC_START)
		atomic_inc_return_release(&init_mm.context.vmalloc_seq);
}
where we only bump vmalloc_seq if the updated region overlaps with the vmalloc region, so this will need a similar treatment afaict.
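To make the concern concrete, roughly something like this (just an untested sketch of the idea, not a proposal; deriving the shadow range with kasan_mem_to_shadow() from <linux/kasan.h> on the vmalloc bounds is an assumption on my part):

void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
	/* Does the updated range touch the vmalloc area itself ... */
	bool vmalloc = start < VMALLOC_END && end > VMALLOC_START;
	/* ... or the shadow of the vmalloc area? */
	bool shadow = IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
		      start < (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END) &&
		      end > (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START);

	if (vmalloc || shadow)
		atomic_inc_return_release(&init_mm.context.vmalloc_seq);
}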
Not really, right? We bump init_mm.context.vmalloc_seq whenever the updated range overlaps the vmalloc area at all.
Then the previously patched __check_vmalloc_seq() will check that atomic counter and copy the PGD entries for the vmalloc area, and with the code in this patch it will also copy (sync) the PGD entries covering the corresponding shadow memory at that point.
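In rough form, the resulting function would look something like this (a simplified sketch rather than the literal patch; memcpy_pgd() stands in for the helper in arch/arm/mm/ioremap.c that copies the first-level entries for a range, and its exact signature is approximated here, while the retry loop is what the existing function already does):

void __check_vmalloc_seq(struct mm_struct *mm)
{
	int seq;

	do {
		seq = atomic_read(&init_mm.context.vmalloc_seq);
		/* Copy the PGD entries covering the vmalloc area ... */
		memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
		if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
			unsigned long start =
				(unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START);
			unsigned long end =
				(unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END);

			/* ... and the entries covering its shadow. */
			memcpy_pgd(mm, start, end);
		}
		/* Publish the copies before the updated sequence number. */
		atomic_set_release(&mm->context.vmalloc_seq, seq);
	} while (seq != atomic_read(&init_mm.context.vmalloc_seq));
}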
Yes, so we rely on the fact that changes to the vmalloc area and changes to the associated shadow mappings always occur in combination, right?
I think that should probably be safe, but we have to be sure.