From: Eric Biggers <ebiggers@google.com>
When the user-provided IV buffer is not aligned to the algorithm's alignmask, skcipher_walk_virt() allocates an aligned buffer and copies the IV into it. However, skcipher_walk_virt() can fail after that point, and in this case the buffer will be freed.
This causes a use-after-free read in callers that read from walk->iv unconditionally, e.g. the LRW template. For example, this can be reproduced by trying to encrypt fewer than 16 bytes using "lrw(aes)".
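The problematic caller pattern looks roughly like the following sketch (illustrative only; demo_crypt() is a made-up name, not the actual crypto/lrw.c code):

#include <crypto/internal/skcipher.h>
#include <linux/string.h>

/* Hypothetical caller, for illustration only: walk.iv is read before the
 * return value of skcipher_walk_virt() is checked, so on failure it can
 * point at an already-freed bounce buffer. */
static int demo_crypt(struct skcipher_request *req)
{
	struct skcipher_walk walk;
	__be32 tweak[4];
	int err;

	err = skcipher_walk_virt(&walk, req, false);

	/* BUG: unconditional read of walk.iv; use-after-free when err != 0 */
	memcpy(tweak, walk.iv, sizeof(tweak));

	while (walk.nbytes) {
		/* ... process walk.src.virt.addr into walk.dst.virt.addr ... */
		err = skcipher_walk_done(&walk, 0);
	}
	return err;
}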
Fix it by making skcipher_walk_first() reset walk->iv to walk->oiv if skcipher_walk_next() fails.
This addresses a problem similar to the one fixed by commit 2b4f27c36bcd ("crypto: skcipher - set walk.iv for zero-length inputs"), but that commit didn't consider the case where skcipher_walk_virt() fails after copying the IV.
This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation, when the extra self-tests were run on a KASAN-enabled kernel.
Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/skcipher.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index bcf13d95f54ac..0f639f018047d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -426,19 +426,27 @@ static int skcipher_copy_iv(struct skcipher_walk *walk)
 
 static int skcipher_walk_first(struct skcipher_walk *walk)
 {
+	int err;
+
 	if (WARN_ON_ONCE(in_irq()))
 		return -EDEADLK;
 
 	walk->buffer = NULL;
 	if (unlikely(((unsigned long)walk->iv & walk->alignmask))) {
-		int err = skcipher_copy_iv(walk);
+		err = skcipher_copy_iv(walk);
 		if (err)
 			return err;
 	}
 
 	walk->page = NULL;
 
-	return skcipher_walk_next(walk);
+	err = skcipher_walk_next(walk);
+
+	/* On error, don't leave 'walk->iv' pointing to freed buffer. */
+	if (unlikely(err))
+		walk->iv = walk->oiv;
+
+	return err;
 }
 
 static int skcipher_walk_skcipher(struct skcipher_walk *walk,
Eric Biggers <ebiggers@kernel.org> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> When the user-provided IV buffer is not aligned to the algorithm's alignmask, skcipher_walk_virt() allocates an aligned buffer and copies the IV into it. However, skcipher_walk_virt() can fail after that point, and in this case the buffer will be freed.
>
> This causes a use-after-free read in callers that read from walk->iv unconditionally, e.g. the LRW template. For example, this can be reproduced by trying to encrypt fewer than 16 bytes using "lrw(aes)".
This looks like a bug in LRW. Relying on walk->iv to be set to anything after a failed skcipher_walk_virt call is wrong. So we should fix it there instead.
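Concretely, that caller-side fix amounts to checking the error before ever dereferencing walk->iv, something like this (sketch only, hypothetical variable names):

	/* sketch: 'walk' is a struct skcipher_walk, 'tweak' a local copy of the IV */
	err = skcipher_walk_virt(&walk, req, false);
	if (err)
		return err;		/* never read walk.iv after a failed init */

	memcpy(tweak, walk.iv, sizeof(tweak));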
Cheers,
On Mon, Apr 08, 2019 at 02:23:54PM +0800, Herbert Xu wrote:
> Eric Biggers <ebiggers@kernel.org> wrote:
>> From: Eric Biggers <ebiggers@google.com>
>>
>> When the user-provided IV buffer is not aligned to the algorithm's alignmask, skcipher_walk_virt() allocates an aligned buffer and copies the IV into it. However, skcipher_walk_virt() can fail after that point, and in this case the buffer will be freed.
>>
>> This causes a use-after-free read in callers that read from walk->iv unconditionally, e.g. the LRW template. For example, this can be reproduced by trying to encrypt fewer than 16 bytes using "lrw(aes)".
>
> This looks like a bug in LRW. Relying on walk->iv to be set to anything after a failed skcipher_walk_virt call is wrong. So we should fix it there instead.
>
> Cheers,
It's not just LRW though. It's actually 7 places:
	arch/arm/crypto/aes-neonbs-glue.c
	arch/arm/crypto/chacha-neon-glue.c
	arch/arm64/crypto/aes-neonbs-glue.c
	arch/arm64/crypto/chacha-neon-glue.c
	crypto/chacha_generic.c
	crypto/lrw.c
	crypto/salsa20_generic.c
Do you prefer that all those be updated?
- Eric
On Mon, Apr 08, 2019 at 10:27:37AM -0700, Eric Biggers wrote:
> It's not just LRW though. It's actually 7 places:
>
>	arch/arm/crypto/aes-neonbs-glue.c
>	arch/arm/crypto/chacha-neon-glue.c
>	arch/arm64/crypto/aes-neonbs-glue.c
>	arch/arm64/crypto/chacha-neon-glue.c
>	crypto/chacha_generic.c
>	crypto/lrw.c
>	crypto/salsa20_generic.c
>
> Do you prefer that all those be updated?
We have to, because if the memory allocation fails then walk->iv will just be the original IV. If you can use the original IV then you might as well just do that.
I just checked chacha-generic and it is fine as it's using the original IV and not walk->iv (the difference is that one may be unaligned while the other is guaranteed to be aligned).
arm*/chacha-neon-glue.c are also correct.
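That safe shape only ever dereferences req->iv, the caller's own buffer, which outlives the walk regardless of errors; roughly like this hedged sketch (assuming a 16-byte IV, not the literal chacha code):

#include <crypto/internal/skcipher.h>
#include <linux/string.h>

static int demo_stream_crypt(struct skcipher_request *req)
{
	struct skcipher_walk walk;
	u8 iv[16];	/* assumed IV size for this sketch */
	int err;

	err = skcipher_walk_virt(&walk, req, false);

	/* Safe even if the walk init failed: req->iv is the caller's buffer,
	 * though it is not guaranteed to be aligned. */
	memcpy(iv, req->iv, sizeof(iv));

	while (walk.nbytes) {
		/* ... generate keystream and xor the chunk ... */
		err = skcipher_walk_done(&walk, 0);
	}
	return err;
}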
salsa20 is indeed broken but the fix is trivial since it's already using unaligned loads.
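i.e. the IV words can be loaded straight from req->iv with unaligned accessors, along these lines (hypothetical helper, not the actual salsa20_generic.c code):

#include <asm/unaligned.h>

/* Sketch: fill in the nonce/counter words of the Salsa20 state directly
 * from the caller-provided IV, so alignment (and walk.iv) is irrelevant. */
static void demo_salsa20_ivsetup(u32 state[16], const u8 *iv)
{
	state[6] = get_unaligned_le32(iv + 0);
	state[7] = get_unaligned_le32(iv + 4);
	state[8] = 0;	/* block counter, low word */
	state[9] = 0;	/* block counter, high word */
}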
arm/aes-neonbs-glue seems easily fixed as it can simply use the unaligned original IV since it's going through the crypto API again.
arm64/aes-neonbs-glue I'm not quite sure about, as it's calling into assembly; depending on whether that wants an aligned or unaligned IV, it can either use the original IV or check for errors and abort if necessary.
Cheers,