3.16.63-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Juergen Gross <jgross@suse.com>
commit b2d7a075a1ccef2fb321d595802190c8e9b39004 upstream.
Using only 32-bit writes for the pte will result in an intermediate L1TF-vulnerable PTE. When running as a Xen PV guest this will immediately switch the guest to shadow mode, resulting in a loss of performance.
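To illustrate the race, this is the shape of the code being removed (see the hunk below); the comments marking the vulnerable window are added here for illustration only:

	/* Old PAE clear path: the 64-bit PTE is torn into two 32-bit
	 * accesses instead of one atomic 64-bit operation. */
	res.pte_low = xchg(&ptep->pte_low, 0);
	/*
	 * Window: the present bit (in pte_low) is now clear while
	 * pte_high still holds physical-address bits.  A not-present
	 * PTE with a stale PFN is exactly the pattern L1TF speculation
	 * can exploit, and what makes Xen drop the PV guest into
	 * shadow mode.
	 */
	res.pte_high = ptep->pte_high;
	ptep->pte_high = 0;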
Use arch_atomic64_xchg() instead, which performs the requested operation atomically on all 64 bits.
Some performance considerations according to:
https://software.intel.com/sites/default/files/managed/ad/dc/Intel-Xeon-Scal...
The main number should be the latency, as there is no tight loop around native_ptep_get_and_clear().
"lock cmpxchg8b" has a latency of 20 cycles, while "lock xchg" (with a memory operand) isn't mentioned in that document. "lock xadd" (with xadd having 3 cycles less latency than xchg) has a latency of 11, so we can assume a latency of 14 for "lock xchg".
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
[bwh: Backported to 3.16: use atomic64_xchg()]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/include/asm/pgtable-3level.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)
--- a/arch/x86/include/asm/pgtable-3level.h
+++ b/arch/x86/include/asm/pgtable-3level.h
@@ -1,6 +1,8 @@
 #ifndef _ASM_X86_PGTABLE_3LEVEL_H
 #define _ASM_X86_PGTABLE_3LEVEL_H
 
+#include <asm/atomic64_32.h>
+
 /*
  * Intel Physical Address Extension (PAE) Mode - three-level page
  * tables on PPro+ CPUs.
@@ -142,10 +144,7 @@ static inline pte_t native_ptep_get_and_
 {
 	pte_t res;
 
-	/* xchg acts as a barrier before the setting of the high bits */
-	res.pte_low = xchg(&ptep->pte_low, 0);
-	res.pte_high = ptep->pte_high;
-	ptep->pte_high = 0;
+	res.pte = (pteval_t)atomic64_xchg((atomic64_t *)ptep, 0);
 
 	return res;
 }