From patchwork Wed Sep 26 09:51:59 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 147555
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: arnd@arndb.de, keescook@chromium.org, herbert@gondor.apana.org.au,
    giovanni.cabiddu@intel.com, qat-linux@intel.com, Ard Biesheuvel
Subject: [PATCH] crypto: qat - move temp buffers off the stack
Date: Wed, 26 Sep 2018 11:51:59 +0200
Message-Id: <20180926095159.22135-1-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.18.0
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-crypto@vger.kernel.org

Arnd reports that with Kees's latest VLA patches applied, the HMAC
handling in the QAT driver uses a worst case estimate of 160 bytes
for the SHA blocksize, allowing the compiler to determine the size
of the stack frame at compile time and throw a warning:

  drivers/crypto/qat/qat_common/qat_algs.c: In function 'qat_alg_do_precomputes':
  drivers/crypto/qat/qat_common/qat_algs.c:257:1: error: the frame size
  of 1112 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]

Given that this worst case estimate is only 32 bytes larger than the
actual block size of SHA-512, the use of a VLA here was hiding the
excessive size of the stack frame from the compiler, and so we should
try to move these buffers off the stack.

So move the ipad/opad buffers and the various SHA state descriptors
into the tfm context struct. Since qat_alg_do_precomputes() is only
called in the context of a setkey() operation, this should be safe.
Using SHA512_BLOCK_SIZE for the size of the ipad/opad buffers allows
them to be used by SHA-1/SHA-256 as well.

Reported-by: Arnd Bergmann
Signed-off-by: Ard Biesheuvel
---
This applies against v4.19-rc while Arnd's report was about -next.
However, since Kees's VLA change results in a warning about a
pre-existing condition, we may decide to apply it as a fix, and handle
the conflict with Kees's patch in cryptodev. Otherwise, I can respin
it to apply onto cryptodev directly.

The patch was build tested only - I don't have the hardware.

Thoughts anyone?
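
As a stand-alone illustration of the problem and of the approach taken
here (hypothetical names, not the driver code): a fixed worst-case
buffer makes the frame size visible to -Wframe-larger-than= at compile
time, a VLA hides it, and hosting the buffers in a long-lived context
struct keeps them off the stack entirely.

  /* frame_size_sketch.c - illustration only, hypothetical names */
  #include <string.h>

  #define WORST_CASE_BLOCK_SIZE 160  /* worst-case estimate, > SHA-512's 128-byte block */

  /* Fixed-size scratch buffers: the frame size is known at compile time,
   * so -Wframe-larger-than= can flag the function. */
  int precompute_fixed(const char *key, unsigned int keylen)
  {
          char ipad[WORST_CASE_BLOCK_SIZE];
          char opad[WORST_CASE_BLOCK_SIZE];

          if (keylen > sizeof(ipad))
                  keylen = sizeof(ipad);
          memset(ipad, 0, sizeof(ipad));
          memset(opad, 0, sizeof(opad));
          memcpy(ipad, key, keylen);
          memcpy(opad, key, keylen);
          return ipad[0] ^ opad[0];
  }

  /* VLA scratch buffers: the frame size depends on block_size, so the same
   * (or larger) stack usage is invisible to -Wframe-larger-than=. */
  int precompute_vla(const char *key, unsigned int keylen, unsigned int block_size)
  {
          char ipad[block_size];
          char opad[block_size];

          if (keylen > block_size)
                  keylen = block_size;
          memset(ipad, 0, block_size);
          memset(opad, 0, block_size);
          memcpy(ipad, key, keylen);
          memcpy(opad, key, keylen);
          return ipad[0] ^ opad[0];
  }

  /* The approach taken by this patch: keep the worst-case size, but host the
   * buffers in the per-tfm context so they are not on the stack at all. */
  struct hypothetical_tfm_ctx {
          char ipad[WORST_CASE_BLOCK_SIZE];
          char opad[WORST_CASE_BLOCK_SIZE];
  };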

 drivers/crypto/qat/qat_common/qat_algs.c | 60 ++++++++++----------
 1 file changed, 31 insertions(+), 29 deletions(-)

-- 
2.18.0

Reviewed-by: Kees Cook

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 1138e41d6805..d2698299896f 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -113,6 +113,13 @@ struct qat_alg_aead_ctx {
 	struct crypto_shash *hash_tfm;
 	enum icp_qat_hw_auth_algo qat_hash_alg;
 	struct qat_crypto_instance *inst;
+	union {
+		struct sha1_state sha1;
+		struct sha256_state sha256;
+		struct sha512_state sha512;
+	};
+	char ipad[SHA512_BLOCK_SIZE]; /* sufficient for SHA-1/SHA-256 as well */
+	char opad[SHA512_BLOCK_SIZE];
 };
 
 struct qat_alg_ablkcipher_ctx {
@@ -148,37 +155,32 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 				  unsigned int auth_keylen)
 {
 	SHASH_DESC_ON_STACK(shash, ctx->hash_tfm);
-	struct sha1_state sha1;
-	struct sha256_state sha256;
-	struct sha512_state sha512;
 	int block_size = crypto_shash_blocksize(ctx->hash_tfm);
 	int digest_size = crypto_shash_digestsize(ctx->hash_tfm);
-	char ipad[block_size];
-	char opad[block_size];
 	__be32 *hash_state_out;
 	__be64 *hash512_state_out;
 	int i, offset;
 
-	memset(ipad, 0, block_size);
-	memset(opad, 0, block_size);
+	memset(ctx->ipad, 0, block_size);
+	memset(ctx->opad, 0, block_size);
 	shash->tfm = ctx->hash_tfm;
 	shash->flags = 0x0;
 
 	if (auth_keylen > block_size) {
 		int ret = crypto_shash_digest(shash, auth_key,
-					      auth_keylen, ipad);
+					      auth_keylen, ctx->ipad);
 		if (ret)
 			return ret;
 
-		memcpy(opad, ipad, digest_size);
+		memcpy(ctx->opad, ctx->ipad, digest_size);
 	} else {
-		memcpy(ipad, auth_key, auth_keylen);
-		memcpy(opad, auth_key, auth_keylen);
+		memcpy(ctx->ipad, auth_key, auth_keylen);
+		memcpy(ctx->opad, auth_key, auth_keylen);
 	}
 
 	for (i = 0; i < block_size; i++) {
-		char *ipad_ptr = ipad + i;
-		char *opad_ptr = opad + i;
+		char *ipad_ptr = ctx->ipad + i;
+		char *opad_ptr = ctx->opad + i;
 		*ipad_ptr ^= HMAC_IPAD_VALUE;
 		*opad_ptr ^= HMAC_OPAD_VALUE;
 	}
@@ -186,7 +188,7 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	if (crypto_shash_init(shash))
 		return -EFAULT;
 
-	if (crypto_shash_update(shash, ipad, block_size))
+	if (crypto_shash_update(shash, ctx->ipad, block_size))
 		return -EFAULT;
 
 	hash_state_out = (__be32 *)hash->sha.state1;
@@ -194,22 +196,22 @@
 
 	switch (ctx->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export(shash, &sha1))
+		if (crypto_shash_export(shash, &ctx->sha1))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
-			*hash_state_out = cpu_to_be32(*(sha1.state + i));
+			*hash_state_out = cpu_to_be32(ctx->sha1.state[i]);
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export(shash, &sha256))
+		if (crypto_shash_export(shash, &ctx->sha256))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
-			*hash_state_out = cpu_to_be32(*(sha256.state + i));
+			*hash_state_out = cpu_to_be32(ctx->sha256.state[i]);
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export(shash, &sha512))
+		if (crypto_shash_export(shash, &ctx->sha512))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
-			*hash512_state_out = cpu_to_be64(*(sha512.state + i));
+			*hash512_state_out = cpu_to_be64(ctx->sha512.state[i]);
 		break;
 	default:
 		return -EFAULT;
 	}
@@ -218,7 +220,7 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	if (crypto_shash_init(shash))
 		return -EFAULT;
 
-	if (crypto_shash_update(shash, opad, block_size))
+	if (crypto_shash_update(shash, ctx->opad, block_size))
 		return -EFAULT;
 
 	offset = round_up(qat_get_inter_state_size(ctx->qat_hash_alg), 8);
@@ -227,28 +229,28 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 
 	switch (ctx->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export(shash, &sha1))
+		if (crypto_shash_export(shash, &ctx->sha1))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
-			*hash_state_out = cpu_to_be32(*(sha1.state + i));
+			*hash_state_out = cpu_to_be32(ctx->sha1.state[i]);
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export(shash, &sha256))
+		if (crypto_shash_export(shash, &ctx->sha256))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
-			*hash_state_out = cpu_to_be32(*(sha256.state + i));
+			*hash_state_out = cpu_to_be32(ctx->sha256.state[i]);
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export(shash, &sha512))
+		if (crypto_shash_export(shash, &ctx->sha512))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
-			*hash512_state_out = cpu_to_be64(*(sha512.state + i));
+			*hash512_state_out = cpu_to_be64(ctx->sha512.state[i]);
 		break;
 	default:
 		return -EFAULT;
 	}
-	memzero_explicit(ipad, block_size);
-	memzero_explicit(opad, block_size);
+	memzero_explicit(ctx->ipad, block_size);
+	memzero_explicit(ctx->opad, block_size);
 
 	return 0;
 }
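
For reference, the ipad/opad handling relocated by the hunks above is
the standard HMAC key preprocessing from RFC 2104; the patch changes
where the scratch buffers live, not what is computed. A minimal
user-space sketch of that preprocessing (hash_block() is a dummy
stand-in, not a kernel API; the 0x36/0x5c constants mirror
HMAC_IPAD_VALUE/HMAC_OPAD_VALUE):

  /* hmac_precompute_sketch.c - RFC 2104 key preprocessing, illustration only */
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define BLOCK_SIZE      128     /* SHA-512 block size, the largest case above */
  #define DIGEST_SIZE     64
  #define HMAC_IPAD_VALUE 0x36
  #define HMAC_OPAD_VALUE 0x5c

  /* Stand-in for a real one-shot hash such as SHA-512. */
  static void hash_block(const uint8_t *in, size_t len, uint8_t out[DIGEST_SIZE])
  {
          size_t i;

          memset(out, 0, DIGEST_SIZE);
          for (i = 0; i < len; i++)
                  out[i % DIGEST_SIZE] ^= in[i];
  }

  /* Derive the inner/outer padded keys; hashing them afterwards yields the
   * precomputed HMAC states that the driver exports into hardware. */
  static void hmac_precompute(const uint8_t *key, size_t keylen,
                              uint8_t ipad[BLOCK_SIZE], uint8_t opad[BLOCK_SIZE])
  {
          uint8_t digest[DIGEST_SIZE];
          size_t i;

          memset(ipad, 0, BLOCK_SIZE);
          memset(opad, 0, BLOCK_SIZE);

          if (keylen > BLOCK_SIZE) {
                  /* Overlong keys are first hashed down to digest size. */
                  hash_block(key, keylen, digest);
                  memcpy(ipad, digest, DIGEST_SIZE);
                  memcpy(opad, digest, DIGEST_SIZE);
          } else {
                  memcpy(ipad, key, keylen);
                  memcpy(opad, key, keylen);
          }

          /* XOR the padded key with the inner/outer pad constants. */
          for (i = 0; i < BLOCK_SIZE; i++) {
                  ipad[i] ^= HMAC_IPAD_VALUE;
                  opad[i] ^= HMAC_OPAD_VALUE;
          }
  }

  int main(void)
  {
          uint8_t ipad[BLOCK_SIZE], opad[BLOCK_SIZE];

          hmac_precompute((const uint8_t *)"secret key", 10, ipad, opad);
          printf("ipad[0]=0x%02x opad[0]=0x%02x\n", ipad[0], opad[0]);
          return 0;
  }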