From patchwork Mon Jul 24 10:28:09 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 108554
X-Mailing-List: linux-crypto@vger.kernel.org
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: herbert@gondor.apana.org.au, dave.martin@arm.com,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH resend 07/18] crypto: arm64/sha2-ce - add non-SIMD scalar fallback
Date: Mon, 24 Jul 2017 11:28:09 +0100
Message-Id: <20170724102820.16534-8-ard.biesheuvel@linaro.org>
In-Reply-To: <20170724102820.16534-1-ard.biesheuvel@linaro.org>
References: <20170724102820.16534-1-ard.biesheuvel@linaro.org>

The arm64 kernel will shortly disallow nested kernel mode NEON, so
add a fallback to scalar code that can be invoked in that case.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Kconfig        |  3 +-
 arch/arm64/crypto/sha2-ce-glue.c | 30 +++++++++++++++++---
 arch/arm64/crypto/sha256-glue.c  |  1 +
 3 files changed, 29 insertions(+), 5 deletions(-)

-- 
2.9.3

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 5d5953545dad..8cd145f9c1ff 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -24,8 +24,9 @@ config CRYPTO_SHA1_ARM64_CE
 
 config CRYPTO_SHA2_ARM64_CE
 	tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_HASH
+	select CRYPTO_SHA256_ARM64
 
 config CRYPTO_GHASH_ARM64_CE
 	tristate "GHASH (for GCM chaining mode) using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index 0ed9486f75dd..fd1ff2b13dfa 100644
--- a/arch/arm64/crypto/sha2-ce-glue.c
+++ b/arch/arm64/crypto/sha2-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * sha2-ce-glue.c - SHA-224/SHA-256 using ARMv8 Crypto Extensions
  *
- * Copyright (C) 2014 Linaro Ltd
+ * Copyright (C) 2014 - 2017 Linaro Ltd
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -9,6 +9,7 @@
  */
 
 #include <asm/neon.h>
+#include <asm/simd.h>
 #include <asm/unaligned.h>
 #include <crypto/internal/hash.h>
 #include <crypto/sha.h>
@@ -34,13 +35,19 @@ const u32 sha256_ce_offsetof_count = offsetof(struct sha256_ce_state,
 const u32 sha256_ce_offsetof_finalize = offsetof(struct sha256_ce_state,
 						 finalize);
 
+asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);
+
 static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
 			    unsigned int len)
 {
 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);
 
+	if (!may_use_simd())
+		return sha256_base_do_update(desc, data, len,
+				(sha256_block_fn *)sha256_block_data_order);
+
 	sctx->finalize = 0;
-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_update(desc, data, len,
 			      (sha256_block_fn *)sha2_ce_transform);
 	kernel_neon_end();
@@ -54,13 +61,22 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);
 	bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE);
 
+	if (!may_use_simd()) {
+		if (len)
+			sha256_base_do_update(desc, data, len,
+				(sha256_block_fn *)sha256_block_data_order);
+		sha256_base_do_finalize(desc,
+				(sha256_block_fn *)sha256_block_data_order);
+		return sha256_base_finish(desc, out);
+	}
+
 	/*
 	 * Allow the asm code to perform the finalization if there is no
 	 * partial data and the input is a round multiple of the block size.
 	 */
 	sctx->finalize = finalize;
 
-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_update(desc, data, len,
 			      (sha256_block_fn *)sha2_ce_transform);
 	if (!finalize)
@@ -74,8 +90,14 @@ static int sha256_ce_final(struct shash_desc *desc, u8 *out)
 {
 	struct sha256_ce_state *sctx = shash_desc_ctx(desc);
 
+	if (!may_use_simd()) {
+		sha256_base_do_finalize(desc,
+				(sha256_block_fn *)sha256_block_data_order);
+		return sha256_base_finish(desc, out);
+	}
+
 	sctx->finalize = 0;
-	kernel_neon_begin_partial(28);
+	kernel_neon_begin();
 	sha256_base_do_finalize(desc, (sha256_block_fn *)sha2_ce_transform);
 	kernel_neon_end();
 	return sha256_base_finish(desc, out);
diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index a2226f841960..b064d925fe2a 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -29,6 +29,7 @@ MODULE_ALIAS_CRYPTO("sha256");
 
 asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
 					unsigned int num_blks);
+EXPORT_SYMBOL(sha256_block_data_order);
 
 asmlinkage void sha256_block_neon(u32 *digest, const void *data,
 				  unsigned int num_blks);