From patchwork Mon Sep 15 07:30:29 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Behan Webster
X-Patchwork-Id: 37405
From: behanw@converseincode.com
To: agk@redhat.com, clm@fb.com, davem@davemloft.net, dm-devel@redhat.com,
	fabf@skynet.be, herbert@gondor.apana.org.au, jbacik@fb.com,
	snitzer@redhat.com, tadeusz.struk@intel.com
Cc: akpm@linux-foundation.org, bruce.w.allan@intel.com, d.kasatkin@samsung.com,
	james.l.morris@oracle.com, john.griffin@intel.com,
	linux-btrfs@vger.kernel.org, linux-crypto@vger.kernel.org,
	linux-ima-devel@lists.sourceforge.net, linux-ima-user@lists.sourceforge.net,
	linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
	linux-security-module@vger.kernel.org, neilb@suse.de, qat-linux@intel.com,
	serge@hallyn.com, thomas.lendacky@amd.com, zohar@linux.vnet.ibm.com,
	torvalds@linux-foundation.org, Behan Webster
Subject: [PATCH v3 07/12] crypto: LLVMLinux: Remove VLAIS from crypto/.../qat_algs.c
Date: Mon, 15 Sep 2014 00:30:29 -0700
Message-Id: <1410766234-1634-8-git-send-email-behanw@converseincode.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1410766234-1634-1-git-send-email-behanw@converseincode.com>
References: <1410766234-1634-1-git-send-email-behanw@converseincode.com>

From: Behan Webster

Replaced the use of a Variable Length Array In Struct (VLAIS) with a
C99-compliant equivalent. This patch allocates the appropriate amount of
memory using a char array declared via the SHASH_DESC_ON_STACK macro.
The new code can be compiled with both gcc and clang.
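For context, here is a minimal sketch of the conversion this series applies.
It is illustration only, not taken from qat_algs.c: example_digest() and its
"tfm" parameter are hypothetical. The old VLAIS descriptor shown in the
comment is replaced by the SHASH_DESC_ON_STACK() helper from <crypto/hash.h>,
which reserves a char buffer large enough for struct shash_desc plus
crypto_shash_descsize(tfm) and exposes it through a shash_desc pointer.

/*
 * Illustrative sketch of the VLAIS -> SHASH_DESC_ON_STACK() conversion.
 * example_digest() and its arguments are hypothetical, not part of this
 * patch.
 */
#include <crypto/hash.h>

static int example_digest(struct crypto_shash *tfm, const u8 *data,
			  unsigned int len, u8 *out)
{
	/* Old pattern (VLAIS, a GNU extension rejected by clang):
	 *
	 *	struct {
	 *		struct shash_desc shash;
	 *		char ctx[crypto_shash_descsize(tfm)];
	 *	} desc;
	 *	desc.shash.tfm = tfm;
	 *
	 * New pattern: the macro declares a char array sized as
	 * sizeof(struct shash_desc) + crypto_shash_descsize(tfm) and a
	 * struct shash_desc pointer named "shash" aliasing that buffer.
	 */
	SHASH_DESC_ON_STACK(shash, tfm);

	shash->tfm = tfm;
	shash->flags = 0x0;

	return crypto_shash_digest(shash, data, len, out);
}

The diff below applies exactly this substitution inside
qat_alg_do_precomputes().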
Signed-off-by: Behan Webster
Reviewed-by: Mark Charlebois
Reviewed-by: Jan-Simon Möller
---
 drivers/crypto/qat/qat_common/qat_algs.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 59df488..9cabadd 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -152,10 +152,7 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 				  const uint8_t *auth_key,
 				  unsigned int auth_keylen, uint8_t *auth_state)
 {
-	struct {
-		struct shash_desc shash;
-		char ctx[crypto_shash_descsize(ctx->hash_tfm)];
-	} desc;
+	SHASH_DESC_ON_STACK(shash, ctx->hash_tfm);
 	struct sha1_state sha1;
 	struct sha256_state sha256;
 	struct sha512_state sha512;
@@ -167,12 +164,12 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 	__be64 *hash512_state_out;
 	int i, offset;
 
-	desc.shash.tfm = ctx->hash_tfm;
-	desc.shash.flags = 0x0;
+	shash->tfm = ctx->hash_tfm;
+	shash->flags = 0x0;
 
 	if (auth_keylen > block_size) {
 		char buff[SHA512_BLOCK_SIZE];
-		int ret = crypto_shash_digest(&desc.shash, auth_key,
+		int ret = crypto_shash_digest(shash, auth_key,
 					      auth_keylen, buff);
 		if (ret)
 			return ret;
@@ -195,10 +192,10 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
 		*opad_ptr ^= 0x5C;
 	}
 
-	if (crypto_shash_init(&desc.shash))
+	if (crypto_shash_init(shash))
 		return -EFAULT;
 
-	if (crypto_shash_update(&desc.shash, ipad, block_size))
+	if (crypto_shash_update(shash, ipad, block_size))
 		return -EFAULT;
 
 	hash_state_out = (__be32 *)hash->sha.state1;
@@ -206,19 +203,19 @@
 
 	switch (ctx->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export(&desc.shash, &sha1))
+		if (crypto_shash_export(shash, &sha1))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha1.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export(&desc.shash, &sha256))
+		if (crypto_shash_export(shash, &sha256))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha256.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export(&desc.shash, &sha512))
+		if (crypto_shash_export(shash, &sha512))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
 			*hash512_state_out = cpu_to_be64(*(sha512.state + i));
@@ -227,10 +224,10 @@
 		return -EFAULT;
 	}
 
-	if (crypto_shash_init(&desc.shash))
+	if (crypto_shash_init(shash))
 		return -EFAULT;
 
-	if (crypto_shash_update(&desc.shash, opad, block_size))
+	if (crypto_shash_update(shash, opad, block_size))
 		return -EFAULT;
 
 	offset = round_up(qat_get_inter_state_size(ctx->qat_hash_alg), 8);
@@ -239,19 +236,19 @@
 
 	switch (ctx->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (crypto_shash_export(&desc.shash, &sha1))
+		if (crypto_shash_export(shash, &sha1))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha1.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (crypto_shash_export(&desc.shash, &sha256))
+		if (crypto_shash_export(shash, &sha256))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
 			*hash_state_out = cpu_to_be32(*(sha256.state + i));
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (crypto_shash_export(&desc.shash, &sha512))
+		if (crypto_shash_export(shash, &sha512))
 			return -EFAULT;
 		for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
 			*hash512_state_out = cpu_to_be64(*(sha512.state + i));