From: Eric Biggers <ebiggers@google.com>
If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free.
xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However, this is more subtle than desired, and unconditionally accessing walk.iv has caused a real problem in other algorithms. Thus, update xts-aes-neonbs to start checking the return value of skcipher_walk_virt().
Fixes: 1abee99eafab ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64")
Cc: stable@vger.kernel.org # v4.11+
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm64/crypto/aes-neonbs-glue.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
index 4737b6c6c5cf5..5144551177334 100644
--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -304,6 +304,8 @@ static int __xts_crypt(struct skcipher_request *req,
 	int err;
 
 	err = skcipher_walk_virt(&walk, req, false);
+	if (err)
+		return err;
 
 	kernel_neon_begin();
 	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey, ctx->key.rounds, 1);
Hi,
[This is an automated email]
This commit has been processed because it contains a "Fixes:" tag, fixing commit: 1abee99eafab ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64").
The bot has tested the following trees: v5.0.7, v4.19.34, v4.14.111.
v5.0.7: Build OK!
v4.19.34: Build OK!
v4.14.111: Failed to apply! Possible dependencies:
    683381747270 ("crypto: arm64/aes-blk - move kernel mode neon en/disable into loop")
    78ad7b08d8e0 ("crypto: arm64/aes-bs - move kernel mode neon en/disable into loop")
How should we proceed with this patch?
--
Thanks,
Sasha
linux-stable-mirror@lists.linaro.org