[ Upstream commit 83b0177a6c48 ] [ Patch for 6.1.y, 6.6.y, 6.12.y trees ]
To avoid racing with should_flush_tlb, setting loaded_mm to LOADED_MM_SWITCHING must happen before reading tlb_gen.
This patch differs from the upstream fix because the ordering issue in the stable trees has a different cause: the relevant code blocks ended up in the wrong order after an earlier bad rebase, so the fix here is simply to reorder them.
Signed-off-by: Stephen Dolan <sdolan@janestreet.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Link: https://lore.kernel.org/lkml/CAHDw0oGd0B4=uuv8NGqbUQ_ZVmSheU2bN70e4QhFXWvuAZ...
---
 arch/x86/mm/tlb.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 8629d90fdcd9..ed182831063c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -606,6 +606,14 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 	 */
 	cond_mitigation(tsk);
 
+	/*
+	 * Indicate that CR3 is about to change. nmi_uaccess_okay()
+	 * and others are sensitive to the window where mm_cpumask(),
+	 * CR3 and cpu_tlbstate.loaded_mm are not all in sync.
+	 */
+	this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
+	barrier();
+
 	/*
 	 * Stop remote flushes for the previous mm.
 	 * Skip kernel threads; we never send init_mm TLB flushing IPIs,
@@ -623,14 +631,6 @@ void switch_mm_irqs_off(struct mm_struct *unused, struct mm_struct *next,
 		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
 
 		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
-
-		/*
-		 * Indicate that CR3 is about to change. nmi_uaccess_okay()
-		 * and others are sensitive to the window where mm_cpumask(),
-		 * CR3 and cpu_tlbstate.loaded_mm are not all in sync.
-		 */
-		this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
-		barrier();
 	}
 
 	new_lam = mm_lam_cr3_mask(next);
linux-stable-mirror@lists.linaro.org