From patchwork Thu Apr  9 10:55:42 2015
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 46930
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 x86@kernel.org, herbert@gondor.apana.org.au, samitolvanen@google.com,
 jussi.kivilinna@iki.fi
Cc: stockhausen@collogia.de, Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v4 10/16] crypto/arm: move SHA-224/256 ASM/NEON
 implementation to base layer
Date: Thu,  9 Apr 2015 12:55:42 +0200
Message-Id: <1428576948-23863-11-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1428576948-23863-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1428576948-23863-1-git-send-email-ard.biesheuvel@linaro.org>
List-ID: <linux-crypto.vger.kernel.org>

This removes all the boilerplate from the existing implementation,
and replaces it with calls into the base layer.
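
For reviewers new to the shared code: the base layer in
<crypto/sha256_base.h> provides sha256_base_init(), sha256_base_do_update(),
sha256_base_do_finalize() and sha256_base_finish(), so a glue driver only
has to supply its block transform plus thin wrappers. A minimal sketch of
the resulting driver shape (illustrative only; my_sha256_block is a
hypothetical stand-in for an arch-specific routine such as
sha256_block_data_order):

  /* Hypothetical block transform, same prototype as the ARM asm routine. */
  asmlinkage void my_sha256_block(u32 *digest, const void *data,
                                  unsigned int num_blks);

  static int my_update(struct shash_desc *desc, const u8 *data,
                       unsigned int len)
  {
          /*
           * Buffers partial input and invokes the block transform on
           * whole blocks only. The cast relies on 'state' being the
           * first member of struct sha256_state, as checked by the
           * BUILD_BUG_ON() in the patch below.
           */
          return sha256_base_do_update(desc, data, len,
                                       (sha256_block_fn *)my_sha256_block);
  }

  static int my_final(struct shash_desc *desc, u8 *out)
  {
          /* Pads, appends the bit count and runs the final block(s), */
          sha256_base_do_finalize(desc, (sha256_block_fn *)my_sha256_block);
          /* then byte-swaps the internal state into the output digest. */
          return sha256_base_finish(desc, out);
  }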
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/crypto/sha256_glue.c      | 170 ++++++------------------------------
 arch/arm/crypto/sha256_glue.h      |  17 +---
 arch/arm/crypto/sha256_neon_glue.c | 143 ++++++++-----------------------
 3 files changed, 66 insertions(+), 264 deletions(-)

diff --git a/arch/arm/crypto/sha256_glue.c b/arch/arm/crypto/sha256_glue.c
index ccef5e25bbcb..a84e869ef900 100644
--- a/arch/arm/crypto/sha256_glue.c
+++ b/arch/arm/crypto/sha256_glue.c
@@ -24,165 +24,49 @@
 #include <linux/types.h>
 #include <linux/string.h>
 #include <crypto/sha.h>
-#include <asm/byteorder.h>
+#include <crypto/sha256_base.h>
 #include <asm/simd.h>
 #include <asm/neon.h>
+
 #include "sha256_glue.h"
 
 asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
-					unsigned int num_blks);
-
-
-int sha256_init(struct shash_desc *desc)
-{
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-
-	sctx->state[0] = SHA256_H0;
-	sctx->state[1] = SHA256_H1;
-	sctx->state[2] = SHA256_H2;
-	sctx->state[3] = SHA256_H3;
-	sctx->state[4] = SHA256_H4;
-	sctx->state[5] = SHA256_H5;
-	sctx->state[6] = SHA256_H6;
-	sctx->state[7] = SHA256_H7;
-	sctx->count = 0;
-
-	return 0;
-}
-
-int sha224_init(struct shash_desc *desc)
-{
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-
-	sctx->state[0] = SHA224_H0;
-	sctx->state[1] = SHA224_H1;
-	sctx->state[2] = SHA224_H2;
-	sctx->state[3] = SHA224_H3;
-	sctx->state[4] = SHA224_H4;
-	sctx->state[5] = SHA224_H5;
-	sctx->state[6] = SHA224_H6;
-	sctx->state[7] = SHA224_H7;
-	sctx->count = 0;
-
-	return 0;
-}
-
-int __sha256_update(struct shash_desc *desc, const u8 *data, unsigned int len,
-		    unsigned int partial)
-{
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-	unsigned int done = 0;
+					unsigned int num_blks);
 
-	sctx->count += len;
-
-	if (partial) {
-		done = SHA256_BLOCK_SIZE - partial;
-		memcpy(sctx->buf + partial, data, done);
-		sha256_block_data_order(sctx->state, sctx->buf, 1);
-	}
-
-	if (len - done >= SHA256_BLOCK_SIZE) {
-		const unsigned int rounds = (len - done) / SHA256_BLOCK_SIZE;
-
-		sha256_block_data_order(sctx->state, data + done, rounds);
-		done += rounds * SHA256_BLOCK_SIZE;
-	}
-
-	memcpy(sctx->buf, data + done, len - done);
-
-	return 0;
-}
-
-int sha256_update(struct shash_desc *desc, const u8 *data, unsigned int len)
+int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
+			     unsigned int len)
 {
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-	unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
-
-	/* Handle the fast case right here */
-	if (partial + len < SHA256_BLOCK_SIZE) {
-		sctx->count += len;
-		memcpy(sctx->buf + partial, data, len);
+	/* make sure casting to sha256_block_fn() is safe */
+	BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
 
-		return 0;
-	}
-
-	return __sha256_update(desc, data, len, partial);
+	return sha256_base_do_update(desc, data, len,
+				(sha256_block_fn *)sha256_block_data_order);
 }
+EXPORT_SYMBOL(crypto_sha256_arm_update);
 
-/* Add padding and return the message digest. */
 static int sha256_final(struct shash_desc *desc, u8 *out)
 {
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-	unsigned int i, index, padlen;
-	__be32 *dst = (__be32 *)out;
-	__be64 bits;
-	static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
-
-	/* save number of bits */
-	bits = cpu_to_be64(sctx->count << 3);
-
-	/* Pad out to 56 mod 64 and append length */
-	index = sctx->count % SHA256_BLOCK_SIZE;
-	padlen = (index < 56) ? (56 - index) : ((SHA256_BLOCK_SIZE+56)-index);
-
-	/* We need to fill a whole block for __sha256_update */
-	if (padlen <= 56) {
-		sctx->count += padlen;
-		memcpy(sctx->buf + index, padding, padlen);
-	} else {
-		__sha256_update(desc, padding, padlen, index);
-	}
-	__sha256_update(desc, (const u8 *)&bits, sizeof(bits), 56);
-
-	/* Store state in digest */
-	for (i = 0; i < 8; i++)
-		dst[i] = cpu_to_be32(sctx->state[i]);
-
-	/* Wipe context */
-	memset(sctx, 0, sizeof(*sctx));
-
-	return 0;
-}
-
-static int sha224_final(struct shash_desc *desc, u8 *out)
-{
-	u8 D[SHA256_DIGEST_SIZE];
-
-	sha256_final(desc, D);
-
-	memcpy(out, D, SHA224_DIGEST_SIZE);
-	memzero_explicit(D, SHA256_DIGEST_SIZE);
-
-	return 0;
-}
-
-int sha256_export(struct shash_desc *desc, void *out)
-{
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-
-	memcpy(out, sctx, sizeof(*sctx));
-
-	return 0;
+	sha256_base_do_finalize(desc,
+				(sha256_block_fn *)sha256_block_data_order);
+	return sha256_base_finish(desc, out);
 }
 
-int sha256_import(struct shash_desc *desc, const void *in)
+int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
+			    unsigned int len, u8 *out)
 {
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-
-	memcpy(sctx, in, sizeof(*sctx));
-
-	return 0;
+	sha256_base_do_update(desc, data, len,
+			      (sha256_block_fn *)sha256_block_data_order);
+	return sha256_final(desc, out);
 }
+EXPORT_SYMBOL(crypto_sha256_arm_finup);
 
 static struct shash_alg algs[] = { {
 	.digestsize	=	SHA256_DIGEST_SIZE,
-	.init		=	sha256_init,
-	.update		=	sha256_update,
+	.init		=	sha256_base_init,
+	.update		=	crypto_sha256_arm_update,
 	.final		=	sha256_final,
-	.export		=	sha256_export,
-	.import		=	sha256_import,
+	.finup		=	crypto_sha256_arm_finup,
 	.descsize	=	sizeof(struct sha256_state),
-	.statesize	=	sizeof(struct sha256_state),
 	.base		=	{
 		.cra_name	=	"sha256",
 		.cra_driver_name =	"sha256-asm",
@@ -193,13 +77,11 @@ static struct shash_alg algs[] = { {
 } }, {
 	.digestsize	=	SHA224_DIGEST_SIZE,
-	.init		=	sha224_init,
-	.update		=	sha256_update,
-	.final		=	sha224_final,
-	.export		=	sha256_export,
-	.import		=	sha256_import,
+	.init		=	sha224_base_init,
+	.update		=	crypto_sha256_arm_update,
+	.final		=	sha256_final,
+	.finup		=	crypto_sha256_arm_finup,
 	.descsize	=	sizeof(struct sha256_state),
-	.statesize	=	sizeof(struct sha256_state),
 	.base		=	{
 		.cra_name	=	"sha224",
 		.cra_driver_name =	"sha224-asm",
diff --git a/arch/arm/crypto/sha256_glue.h b/arch/arm/crypto/sha256_glue.h
index 0312f4ffe8cc..7cf0bf786ada 100644
--- a/arch/arm/crypto/sha256_glue.h
+++ b/arch/arm/crypto/sha256_glue.h
@@ -2,22 +2,13 @@
 #define _CRYPTO_SHA256_GLUE_H
 
 #include <linux/crypto.h>
-#include <crypto/sha.h>
 
 extern struct shash_alg sha256_neon_algs[2];
 
-extern int sha256_init(struct shash_desc *desc);
+int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
+			     unsigned int len);
 
-extern int sha224_init(struct shash_desc *desc);
-
-extern int __sha256_update(struct shash_desc *desc, const u8 *data,
-			   unsigned int len, unsigned int partial);
-
-extern int sha256_update(struct shash_desc *desc, const u8 *data,
-			 unsigned int len);
-
-extern int sha256_export(struct shash_desc *desc, void *out);
-
-extern int sha256_import(struct shash_desc *desc, const void *in);
+int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
+			    unsigned int len, u8 *hash);
 
 #endif /* _CRYPTO_SHA256_GLUE_H */
diff --git a/arch/arm/crypto/sha256_neon_glue.c b/arch/arm/crypto/sha256_neon_glue.c
index c4da10090eee..39ccd658817e 100644
--- a/arch/arm/crypto/sha256_neon_glue.c
+++ b/arch/arm/crypto/sha256_neon_glue.c
@@ -19,131 +19,62 @@
 #include <linux/types.h>
 #include <linux/string.h>
 #include <crypto/sha.h>
+#include <crypto/sha256_base.h>
 #include <asm/byteorder.h>
 #include <asm/simd.h>
 #include <asm/neon.h>
+
 #include "sha256_glue.h"
 
 asmlinkage void sha256_block_data_order_neon(u32 *digest, const void *data,
-					     unsigned int num_blks);
-
+					unsigned int num_blks);
 
-static int __sha256_neon_update(struct shash_desc *desc, const u8 *data,
-				unsigned int len, unsigned int partial)
+static int sha256_update(struct shash_desc *desc, const u8 *data,
+			 unsigned int len)
 {
 	struct sha256_state *sctx = shash_desc_ctx(desc);
-	unsigned int done = 0;
-
-	sctx->count += len;
-
-	if (partial) {
-		done = SHA256_BLOCK_SIZE - partial;
-		memcpy(sctx->buf + partial, data, done);
-		sha256_block_data_order_neon(sctx->state, sctx->buf, 1);
-	}
-
-	if (len - done >= SHA256_BLOCK_SIZE) {
-		const unsigned int rounds = (len - done) / SHA256_BLOCK_SIZE;
-		sha256_block_data_order_neon(sctx->state, data + done, rounds);
-		done += rounds * SHA256_BLOCK_SIZE;
-	}
+	if (!may_use_simd() ||
+	    (sctx->count % SHA256_BLOCK_SIZE) + len < SHA256_BLOCK_SIZE)
+		return crypto_sha256_arm_update(desc, data, len);
 
-	memcpy(sctx->buf, data + done, len - done);
+	kernel_neon_begin();
+	sha256_base_do_update(desc, data, len,
+			(sha256_block_fn *)sha256_block_data_order_neon);
+	kernel_neon_end();
 
 	return 0;
 }
 
-static int sha256_neon_update(struct shash_desc *desc, const u8 *data,
-			      unsigned int len)
-{
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-	unsigned int partial = sctx->count % SHA256_BLOCK_SIZE;
-	int res;
-
-	/* Handle the fast case right here */
-	if (partial + len < SHA256_BLOCK_SIZE) {
-		sctx->count += len;
-		memcpy(sctx->buf + partial, data, len);
-
-		return 0;
-	}
-
-	if (!may_use_simd()) {
-		res = __sha256_update(desc, data, len, partial);
-	} else {
-		kernel_neon_begin();
-		res = __sha256_neon_update(desc, data, len, partial);
-		kernel_neon_end();
-	}
-
-	return res;
-}
-
-/* Add padding and return the message digest. */
-static int sha256_neon_final(struct shash_desc *desc, u8 *out)
+static int sha256_finup(struct shash_desc *desc, const u8 *data,
+			unsigned int len, u8 *out)
 {
-	struct sha256_state *sctx = shash_desc_ctx(desc);
-	unsigned int i, index, padlen;
-	__be32 *dst = (__be32 *)out;
-	__be64 bits;
-	static const u8 padding[SHA256_BLOCK_SIZE] = { 0x80, };
-
-	/* save number of bits */
-	bits = cpu_to_be64(sctx->count << 3);
-
-	/* Pad out to 56 mod 64 and append length */
-	index = sctx->count % SHA256_BLOCK_SIZE;
-	padlen = (index < 56) ? (56 - index) : ((SHA256_BLOCK_SIZE+56)-index);
-
-	if (!may_use_simd()) {
-		sha256_update(desc, padding, padlen);
-		sha256_update(desc, (const u8 *)&bits, sizeof(bits));
-	} else {
-		kernel_neon_begin();
-		/* We need to fill a whole block for __sha256_neon_update() */
-		if (padlen <= 56) {
-			sctx->count += padlen;
-			memcpy(sctx->buf + index, padding, padlen);
-		} else {
-			__sha256_neon_update(desc, padding, padlen, index);
-		}
-		__sha256_neon_update(desc, (const u8 *)&bits,
-				     sizeof(bits), 56);
-		kernel_neon_end();
-	}
-
-	/* Store state in digest */
-	for (i = 0; i < 8; i++)
-		dst[i] = cpu_to_be32(sctx->state[i]);
-
-	/* Wipe context */
-	memzero_explicit(sctx, sizeof(*sctx));
-
-	return 0;
+	if (!may_use_simd())
+		return crypto_sha256_arm_finup(desc, data, len, out);
+
+	kernel_neon_begin();
+	if (len)
+		sha256_base_do_update(desc, data, len,
+			(sha256_block_fn *)sha256_block_data_order_neon);
+	sha256_base_do_finalize(desc,
+			(sha256_block_fn *)sha256_block_data_order_neon);
+	kernel_neon_end();
+
+	return sha256_base_finish(desc, out);
 }
 
-static int sha224_neon_final(struct shash_desc *desc, u8 *out)
+static int sha256_final(struct shash_desc *desc, u8 *out)
 {
-	u8 D[SHA256_DIGEST_SIZE];
-
-	sha256_neon_final(desc, D);
-
-	memcpy(out, D, SHA224_DIGEST_SIZE);
-	memzero_explicit(D, SHA256_DIGEST_SIZE);
-
-	return 0;
+	return sha256_finup(desc, NULL, 0, out);
 }
 
 struct shash_alg sha256_neon_algs[] = { {
 	.digestsize	=	SHA256_DIGEST_SIZE,
-	.init		=	sha256_init,
-	.update		=	sha256_neon_update,
-	.final		=	sha256_neon_final,
-	.export		=	sha256_export,
-	.import		=	sha256_import,
+	.init		=	sha256_base_init,
+	.update		=	sha256_update,
+	.final		=	sha256_final,
+	.finup		=	sha256_finup,
 	.descsize	=	sizeof(struct sha256_state),
-	.statesize	=	sizeof(struct sha256_state),
 	.base		=	{
 		.cra_name	=	"sha256",
 		.cra_driver_name =	"sha256-neon",
@@ -154,13 +85,11 @@ struct shash_alg sha256_neon_algs[] = { {
 } }, {
 	.digestsize	=	SHA224_DIGEST_SIZE,
-	.init		=	sha224_init,
-	.update		=	sha256_neon_update,
-	.final		=	sha224_neon_final,
-	.export		=	sha256_export,
-	.import		=	sha256_import,
+	.init		=	sha224_base_init,
+	.update		=	sha256_update,
+	.final		=	sha256_final,
+	.finup		=	sha256_finup,
 	.descsize	=	sizeof(struct sha256_state),
-	.statesize	=	sizeof(struct sha256_state),
 	.base		=	{
 		.cra_name	=	"sha224",
 		.cra_driver_name =	"sha224-neon",