From: Eric Biggers <ebiggers@google.com>
Changing ghash_mod_init() to be subsys_initcall made it start running before the alignment fault handler has been installed on ARM. In kernel builds where the keys in the ghash test vectors happened to be misaligned in the kernel image, this exposed the longstanding bug that ghash_setkey() is incorrectly casting the key buffer (which can have any alignment) to be128 for passing to gf128mul_init_4k_lle().
Fix this by memcpy()ing the key to a temporary buffer.
Don't fix it by setting an alignmask on the algorithm instead because that would unnecessarily force alignment of the data too.
Fixes: 2cdc6899a88e ("crypto: ghash - Add GHASH digest algorithm for GCM")
Reported-by: Peter Robinson <pbrobinson@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/ghash-generic.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c
index e6307935413c1..c8a347798eae6 100644
--- a/crypto/ghash-generic.c
+++ b/crypto/ghash-generic.c
@@ -34,6 +34,7 @@ static int ghash_setkey(struct crypto_shash *tfm,
 			const u8 *key, unsigned int keylen)
 {
 	struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
+	be128 k;

 	if (keylen != GHASH_BLOCK_SIZE) {
 		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
@@ -42,7 +43,12 @@ static int ghash_setkey(struct crypto_shash *tfm,

 	if (ctx->gf128)
 		gf128mul_free_4k(ctx->gf128);
-	ctx->gf128 = gf128mul_init_4k_lle((be128 *)key);
+
+	BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
+	memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */
+	ctx->gf128 = gf128mul_init_4k_lle(&k);
+	memzero_explicit(&k, GHASH_BLOCK_SIZE);
+
 	if (!ctx->gf128)
 		return -ENOMEM;
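The hazard the patch removes can be seen in isolation. Below is a minimal standalone sketch of the same pattern (a hypothetical userspace analogue with made-up names, not the kernel code): casting an arbitrarily aligned byte pointer to a 16-byte structure promises the compiler 8-byte alignment, so it may emit aligned load instructions (such as ARM's ldrd/ldm) that fault on misaligned addresses, whereas memcpy() into an aligned local is valid for a source buffer of any alignment.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical userspace stand-in for the kernel's be128. */
struct be128_demo {
	uint64_t a;
	uint64_t b;
};

/*
 * Broken pattern (what the old code did): the cast tells the compiler
 * that 'key' is aligned for struct be128_demo, so it may use aligned
 * load instructions that fault on strict-alignment CPUs.
 */
static uint64_t read_key_broken(const uint8_t *key)
{
	const struct be128_demo *k = (const struct be128_demo *)key;

	return k->a ^ k->b;
}

/*
 * Fixed pattern (what the patch does): memcpy() into an aligned
 * temporary, which is legal for any source alignment.
 */
static uint64_t read_key_fixed(const uint8_t *key)
{
	struct be128_demo k;

	memcpy(&k, key, sizeof(k));
	return k.a ^ k.b;
}

int main(void)
{
	uint8_t buf[sizeof(struct be128_demo) + 1] = { 0 };

	/* buf + 1 is deliberately misaligned. */
	printf("%llu\n", (unsigned long long)read_key_fixed(buf + 1));
	/* read_key_broken(buf + 1) is undefined behavior and may raise
	 * an alignment fault on a strict-alignment CPU. */
	return 0;
}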
On Thu, May 30, 2019 at 6:51 PM Eric Biggers <ebiggers@kernel.org> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> Changing ghash_mod_init() to be subsys_initcall made it start running before the alignment fault handler has been installed on ARM. In kernel builds where the keys in the ghash test vectors happened to be misaligned in the kernel image, this exposed the longstanding bug that ghash_setkey() is incorrectly casting the key buffer (which can have any alignment) to be128 for passing to gf128mul_init_4k_lle().
>
> Fix this by memcpy()ing the key to a temporary buffer.
>
> Don't fix it by setting an alignmask on the algorithm instead because that would unnecessarily force alignment of the data too.
>
> Fixes: 2cdc6899a88e ("crypto: ghash - Add GHASH digest algorithm for GCM")
> Reported-by: Peter Robinson <pbrobinson@gmail.com>
Tested-by: Peter Robinson <pbrobinson@gmail.com>
That fixes the problems I was seeing, thanks for the quick response/fix.
Peter
> Cc: stable@vger.kernel.org
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> ---
>  crypto/ghash-generic.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c
> index e6307935413c1..c8a347798eae6 100644
> --- a/crypto/ghash-generic.c
> +++ b/crypto/ghash-generic.c
> @@ -34,6 +34,7 @@ static int ghash_setkey(struct crypto_shash *tfm,
>  			const u8 *key, unsigned int keylen)
>  {
>  	struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
> +	be128 k;
>
>  	if (keylen != GHASH_BLOCK_SIZE) {
>  		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> @@ -42,7 +43,12 @@ static int ghash_setkey(struct crypto_shash *tfm,
>
>  	if (ctx->gf128)
>  		gf128mul_free_4k(ctx->gf128);
> -	ctx->gf128 = gf128mul_init_4k_lle((be128 *)key);
> +
> +	BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
> +	memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */
> +	ctx->gf128 = gf128mul_init_4k_lle(&k);
> +	memzero_explicit(&k, GHASH_BLOCK_SIZE);
> +
>  	if (!ctx->gf128)
>  		return -ENOMEM;
> --
> 2.22.0.rc1.257.g3120a18244-goog
On 30/05/2019 at 19:50, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> Changing ghash_mod_init() to be subsys_initcall made it start running before the alignment fault handler has been installed on ARM. In kernel builds where the keys in the ghash test vectors happened to be misaligned in the kernel image, this exposed the longstanding bug that ghash_setkey() is incorrectly casting the key buffer (which can have any alignment) to be128 for passing to gf128mul_init_4k_lle().
>
> Fix this by memcpy()ing the key to a temporary buffer.
Shouldn't we make it dependent on CONFIG_HAVE_64BIT_ALIGNED_ACCESS or !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS?
Christophe
> Don't fix it by setting an alignmask on the algorithm instead because that would unnecessarily force alignment of the data too.
>
> Fixes: 2cdc6899a88e ("crypto: ghash - Add GHASH digest algorithm for GCM")
> Reported-by: Peter Robinson <pbrobinson@gmail.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> ---
>  crypto/ghash-generic.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c
> index e6307935413c1..c8a347798eae6 100644
> --- a/crypto/ghash-generic.c
> +++ b/crypto/ghash-generic.c
> @@ -34,6 +34,7 @@ static int ghash_setkey(struct crypto_shash *tfm,
>  			const u8 *key, unsigned int keylen)
>  {
>  	struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
> +	be128 k;
>
>  	if (keylen != GHASH_BLOCK_SIZE) {
>  		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> @@ -42,7 +43,12 @@ static int ghash_setkey(struct crypto_shash *tfm,
>
>  	if (ctx->gf128)
>  		gf128mul_free_4k(ctx->gf128);
> -	ctx->gf128 = gf128mul_init_4k_lle((be128 *)key);
> +
> +	BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
> +	memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */
> +	ctx->gf128 = gf128mul_init_4k_lle(&k);
> +	memzero_explicit(&k, GHASH_BLOCK_SIZE);
> +
>  	if (!ctx->gf128)
>  		return -ENOMEM;
On Mon, Jun 03, 2019 at 09:27:24AM +0200, Christophe Leroy wrote:
> On 30/05/2019 at 19:50, Eric Biggers wrote:
> > From: Eric Biggers <ebiggers@google.com>
> >
> > Changing ghash_mod_init() to be subsys_initcall made it start running before the alignment fault handler has been installed on ARM. In kernel builds where the keys in the ghash test vectors happened to be misaligned in the kernel image, this exposed the longstanding bug that ghash_setkey() is incorrectly casting the key buffer (which can have any alignment) to be128 for passing to gf128mul_init_4k_lle().
> >
> > Fix this by memcpy()ing the key to a temporary buffer.
>
> Shouldn't we make it dependent on CONFIG_HAVE_64BIT_ALIGNED_ACCESS
No, because the buffer can have as little as 1-byte alignment.
> or !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS?
I don't think that's a good idea because two code paths are harder to test than one, and also CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS only means that the CPU allows "regular" loads and stores to be misaligned. On some architectures the compiler can still generate load and store instructions that require alignment, e.g. 'ldrd' or 'ldm' on ARM.
We could change gf128mul_init_4k_lle() to take a byte array and make it use get_unaligned_be64(). But since it has to allocate and initialize a 4 KiB multiplication table anyway, that microoptimization would be lost in the noise.
- Eric
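For reference, the byte-array interface Eric describes could be sketched as a wrapper like the one below. This is purely hypothetical: the in-tree gf128mul_init_4k_lle() still takes a const be128 *, and gf128mul_init_4k_lle_frombytes() does not exist anywhere in the kernel.

#include <asm/unaligned.h>
#include <crypto/gf128mul.h>

/*
 * Hypothetical sketch: accept a raw byte array and perform the reads
 * with get_unaligned_be64(), so callers need no aligned temporary.
 * As the thread notes, the cost of two unaligned loads is negligible
 * next to building the 4 KiB multiplication table.
 */
static struct gf128mul_4k *gf128mul_init_4k_lle_frombytes(const u8 *key)
{
	be128 k;

	/* get_unaligned_be64() returns a CPU-endian value; convert it
	 * back to big-endian storage for the be128 fields. */
	k.a = cpu_to_be64(get_unaligned_be64(key));
	k.b = cpu_to_be64(get_unaligned_be64(key + 8));
	return gf128mul_init_4k_lle(&k);
}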
On Thu, May 30, 2019 at 10:50:39AM -0700, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> Changing ghash_mod_init() to be subsys_initcall made it start running before the alignment fault handler has been installed on ARM. In kernel builds where the keys in the ghash test vectors happened to be misaligned in the kernel image, this exposed the longstanding bug that ghash_setkey() is incorrectly casting the key buffer (which can have any alignment) to be128 for passing to gf128mul_init_4k_lle().
>
> Fix this by memcpy()ing the key to a temporary buffer.
>
> Don't fix it by setting an alignmask on the algorithm instead because that would unnecessarily force alignment of the data too.
>
> Fixes: 2cdc6899a88e ("crypto: ghash - Add GHASH digest algorithm for GCM")
> Reported-by: Peter Robinson <pbrobinson@gmail.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> Tested-by: Peter Robinson <pbrobinson@gmail.com>
>
>  crypto/ghash-generic.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
Patch applied. Thanks.