This is a note to let you know that I've just added the patch titled

    crypto: x86/sha256-mb - fix panic due to unaligned access

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git%3Ba=su...

The filename of the patch is:
     crypto-x86-sha256-mb-fix-panic-due-to-unaligned-access.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.
From 5dfeaac15f2b1abb5a53c9146041c7235eb9aa04 Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Date: Mon, 16 Oct 2017 18:51:30 +0300
Subject: crypto: x86/sha256-mb - fix panic due to unaligned access

From: Andrey Ryabinin <aryabinin@virtuozzo.com>
commit 5dfeaac15f2b1abb5a53c9146041c7235eb9aa04 upstream.
struct sha256_ctx_mgr is allocated in sha256_mb_mod_init() via kzalloc() and later passed to the sha256_mb_flusher_mgr_flush_avx2() function, where vmovdqa instructions are used to access the struct. vmovdqa requires a 16-byte aligned argument, but nothing guarantees that struct sha256_ctx_mgr will have that alignment. An unaligned vmovdqa will generate a GP fault.

Fix this by replacing vmovdqa with vmovdqu, which doesn't have alignment requirements.
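For illustration only (not part of the patch), a minimal sketch of the difference
between the two instructions, assuming %rax holds a pointer that happens not to be
16-byte aligned:

	vmovdqa	(%rax), %xmm0	# aligned load: raises #GP if %rax is not 16-byte aligned
	vmovdqu	(%rax), %xmm0	# unaligned load: valid for any alignment, same result when aligned

On CPUs that support AVX2 (which this code path requires), vmovdqu on already-aligned
data generally performs the same as vmovdqa, so the substitution does not cost anything.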
Fixes: a377c6b1876e ("crypto: sha256-mb - submit/flush routines for AVX2")
Reported-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Tim Chen
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
@@ -155,8 +155,8 @@ LABEL skip_ %I
 .endr
 
 	# Find min length
-	vmovdqa	_lens+0*16(state), %xmm0
-	vmovdqa	_lens+1*16(state), %xmm1
+	vmovdqu	_lens+0*16(state), %xmm0
+	vmovdqu	_lens+1*16(state), %xmm1
 
 	vpminud	%xmm1, %xmm0, %xmm2		# xmm2 has {D,C,B,A}
 	vpalignr $8, %xmm2, %xmm3, %xmm3	# xmm3 has {x,x,D,C}
@@ -176,8 +176,8 @@ LABEL skip_ %I
 	vpsubd	%xmm2, %xmm0, %xmm0
 	vpsubd	%xmm2, %xmm1, %xmm1
 
-	vmovdqa	%xmm0, _lens+0*16(state)
-	vmovdqa	%xmm1, _lens+1*16(state)
+	vmovdqu	%xmm0, _lens+0*16(state)
+	vmovdqu	%xmm1, _lens+1*16(state)
 
 	# "state" and "args" are the same address, arg1
 	# len is arg2
@@ -234,8 +234,8 @@ ENTRY(sha256_mb_mgr_get_comp_job_avx2)
 	jc	.return_null
 
 	# Find min length
-	vmovdqa	_lens(state), %xmm0
-	vmovdqa	_lens+1*16(state), %xmm1
+	vmovdqu	_lens(state), %xmm0
+	vmovdqu	_lens+1*16(state), %xmm1
 
 	vpminud	%xmm1, %xmm0, %xmm2		# xmm2 has {D,C,B,A}
 	vpalignr $8, %xmm2, %xmm3, %xmm3	# xmm3 has {x,x,D,C}
Patches currently in stable-queue which might be from aryabinin@virtuozzo.com are

queue-4.9/crypto-x86-sha256-mb-fix-panic-due-to-unaligned-access.patch
queue-4.9/crypto-x86-sha1-mb-fix-panic-due-to-unaligned-access.patch