From patchwork Mon Sep 15 07:30:30 2014
X-Patchwork-Submitter: Behan Webster
X-Patchwork-Id: 37403
From: behanw@converseincode.com
To: agk@redhat.com, clm@fb.com, davem@davemloft.net, dm-devel@redhat.com,
	fabf@skynet.be, herbert@gondor.apana.org.au, jbacik@fb.com,
	snitzer@redhat.com, tadeusz.struk@intel.com
Cc: akpm@linux-foundation.org, bruce.w.allan@intel.com,
	d.kasatkin@samsung.com, james.l.morris@oracle.com,
	john.griffin@intel.com, linux-btrfs@vger.kernel.org,
	linux-crypto@vger.kernel.org, linux-ima-devel@lists.sourceforge.net,
	linux-ima-user@lists.sourceforge.net, linux-kernel@vger.kernel.org,
	linux-raid@vger.kernel.org, linux-security-module@vger.kernel.org,
	neilb@suse.de, qat-linux@intel.com, serge@hallyn.com,
	thomas.lendacky@amd.com, zohar@linux.vnet.ibm.com,
	torvalds@linux-foundation.org, Jan-Simon Möller, Behan Webster,
	pageexec@freemail.hu, gmazyland@gmail.com
Subject: [PATCH v3 08/12] crypto, dm: LLVMLinux: Remove VLAIS usage from dm-crypt
Date: Mon, 15 Sep 2014 00:30:30 -0700
Message-Id: <1410766234-1634-9-git-send-email-behanw@converseincode.com>
In-Reply-To: <1410766234-1634-1-git-send-email-behanw@converseincode.com>
References: <1410766234-1634-1-git-send-email-behanw@converseincode.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Jan-Simon Möller

Replaced the use of a Variable Length Array In Struct (VLAIS) with a
C99-compliant equivalent.
This patch allocates the appropriate amount of memory as a char array
via the SHASH_DESC_ON_STACK macro. The new code can be compiled with
both gcc and clang.

Signed-off-by: Jan-Simon Möller
Signed-off-by: Behan Webster
Cc: pageexec@freemail.hu
Cc: gmazyland@gmail.com
Cc: "David S. Miller"
Cc: Herbert Xu
---
 drivers/md/dm-crypt.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index cd15e08..fc93b93 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -526,29 +526,26 @@ static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
 			    u8 *data)
 {
 	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
-	struct {
-		struct shash_desc desc;
-		char ctx[crypto_shash_descsize(lmk->hash_tfm)];
-	} sdesc;
+	SHASH_DESC_ON_STACK(desc, lmk->hash_tfm);
 	struct md5_state md5state;
 	__le32 buf[4];
 	int i, r;
 
-	sdesc.desc.tfm = lmk->hash_tfm;
-	sdesc.desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+	desc->tfm = lmk->hash_tfm;
+	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
 
-	r = crypto_shash_init(&sdesc.desc);
+	r = crypto_shash_init(desc);
 	if (r)
 		return r;
 
 	if (lmk->seed) {
-		r = crypto_shash_update(&sdesc.desc, lmk->seed, LMK_SEED_SIZE);
+		r = crypto_shash_update(desc, lmk->seed, LMK_SEED_SIZE);
 		if (r)
 			return r;
 	}
 
 	/* Sector is always 512B, block size 16, add data of blocks 1-31 */
-	r = crypto_shash_update(&sdesc.desc, data + 16, 16 * 31);
+	r = crypto_shash_update(desc, data + 16, 16 * 31);
 	if (r)
 		return r;
 
@@ -557,12 +554,12 @@ static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
 	buf[1] = cpu_to_le32((((u64)dmreq->iv_sector >> 32) & 0x00FFFFFF) | 0x80000000);
 	buf[2] = cpu_to_le32(4024);
 	buf[3] = 0;
-	r = crypto_shash_update(&sdesc.desc, (u8 *)buf, sizeof(buf));
+	r = crypto_shash_update(desc, (u8 *)buf, sizeof(buf));
 	if (r)
 		return r;
 
 	/* No MD5 padding here */
-	r = crypto_shash_export(&sdesc.desc, &md5state);
+	r = crypto_shash_export(desc, &md5state);
 	if (r)
 		return r;
 
@@ -679,10 +676,7 @@ static int crypt_iv_tcw_whitening(struct crypt_config *cc,
 	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
 	u64 sector = cpu_to_le64((u64)dmreq->iv_sector);
 	u8 buf[TCW_WHITENING_SIZE];
-	struct {
-		struct shash_desc desc;
-		char ctx[crypto_shash_descsize(tcw->crc32_tfm)];
-	} sdesc;
+	SHASH_DESC_ON_STACK(desc, tcw->crc32_tfm);
 	int i, r;
 
 	/* xor whitening with sector number */
@@ -691,16 +685,16 @@ static int crypt_iv_tcw_whitening(struct crypt_config *cc,
 	crypto_xor(&buf[8], (u8 *)&sector, 8);
 
 	/* calculate crc32 for every 32bit part and xor it */
-	sdesc.desc.tfm = tcw->crc32_tfm;
-	sdesc.desc.flags = CRYPTO_TFM_REQ_MAY_SLEEP;
+	desc->tfm = tcw->crc32_tfm;
+	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
 	for (i = 0; i < 4; i++) {
-		r = crypto_shash_init(&sdesc.desc);
+		r = crypto_shash_init(desc);
 		if (r)
 			goto out;
-		r = crypto_shash_update(&sdesc.desc, &buf[i * 4], 4);
+		r = crypto_shash_update(desc, &buf[i * 4], 4);
 		if (r)
 			goto out;
-		r = crypto_shash_final(&sdesc.desc, &buf[i * 4]);
+		r = crypto_shash_final(desc, &buf[i * 4]);
 		if (r)
 			goto out;
 	}