From: Linus Torvalds <torvalds@linux-foundation.org>
commit 91309a70829d94c735c8bb1cc383e78c96127a16 upstream
This was a suggestion by David Laight, and while I was slightly worried that some micro-architecture would predict cmov like a conditional branch, there is little reason to actually believe any core would be that broken.
Intel documents that their existing cores treat CMOVcc as a data dependency that will constrain speculation in their "Speculative Execution Side Channel Mitigations" whitepaper:
"Other instructions such as CMOVcc, AND, ADC, SBB and SETcc can also be used to prevent bounds check bypass by constraining speculative execution on current family 6 processors (Intel® Core™, Intel® Atom™, Intel® Xeon® and Intel® Xeon Phi™ processors)"
and while that leaves the future uarch issues open, that's certainly true of our traditional SBB usage too.
Any core that predicts CMOV will be unusable for various crypto algorithms that need data-independent timing, so let's just treat CMOV as the safe choice that simplifies the address masking by avoiding an extra instruction and doesn't need a temporary register.
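For illustration, here is roughly how the two getuser.S sequences compare (register use as in the diff below, where %rdx holds the USER_PTR_MAX runtime constant):

	/* old: build an all-ones mask for out-of-range addresses, then OR it in */
	cmp   %rax, %rdx
	sbb   %rdx, %rdx	/* %rdx = 0 if in range, all-ones if not */
	or    %rdx, %rax	/* out-of-range addresses become all-ones */

	/* new: clamp the address to USER_PTR_MAX directly, no mask register needed */
	cmp   %rdx, %rax
	cmova %rdx, %rax	/* if %rax is above USER_PTR_MAX, use USER_PTR_MAX */

Either way the result is never a usable kernel address, but the cmov form drops one instruction and the scratch mask value.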
Cc: stable@vger.kernel.org # 6.12.x: 573f45a: x86: fix off-by-one in access_ok()
Cc: stable@vger.kernel.org # 6.12.x: 86e6b15: x86: fix user address masking non-canonical speculation issue
Cc: stable@vger.kernel.org # 6.10.x: e60cc61: vfs: dcache: move hashlen_hash() from callers into d_hash()
Cc: stable@vger.kernel.org # 6.10.x: e782985: runtime constants: add default dummy infrastructure
Cc: stable@vger.kernel.org # 6.10.x: e3c92e8: runtime constants: add x86 architecture support
Suggested-by: David Laight <David.Laight@aculab.com>
Link: https://www.intel.com/content/dam/develop/external/us/en/documents/336996-sp...
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jimmy Tran <jtoantran@google.com>
---
 arch/x86/include/asm/uaccess_64.h | 13 ++++++-------
 arch/x86/lib/getuser.S            |  5 ++---
 2 files changed, 8 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index e68eded5ee490..123d36c89722f 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -66,14 +66,13 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm,
  */
 static inline void __user *mask_user_address(const void __user *ptr)
 {
-	unsigned long mask;
-
+	void __user *ret;
 	asm("cmp %1,%0\n\t"
-	    "sbb %0,%0"
-		:"=r" (mask)
-		:"r" (ptr),
-		 "0" (runtime_const_ptr(USER_PTR_MAX)));
-	return (__force void __user *)(mask | (__force unsigned long)ptr);
+	    "cmova %1,%0"
+		:"=r" (ret)
+		:"r" (runtime_const_ptr(USER_PTR_MAX)),
+		 "0" (ptr));
+	return ret;
 }
 
 /*
diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index ffa3fff259578..0f7f58f20b068 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -44,9 +44,8 @@
 .pushsection runtime_ptr_USER_PTR_MAX,"a"
 	.long 1b - 8 - .
 .popsection
-	cmp %rax, %rdx
-	sbb %rdx, %rdx
-	or %rdx, %rax
+	cmp %rdx, %rax
+	cmova %rdx, %rax
 .else
 	cmp $TASK_SIZE_MAX-\size+1, %eax
 .if \size != 8