From patchwork Sat Jan 28 23:25:39 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 92776
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, herbert@gondor.apana.org.au, Ard Biesheuvel
Subject: [PATCH v3 10/10] crypto: arm64/aes - replace scalar fallback with plain NEON fallback
Date: Sat, 28 Jan 2017 23:25:39 +0000
Message-Id: <1485645939-17126-11-git-send-email-ard.biesheuvel@linaro.org>
In-Reply-To: <1485645939-17126-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1485645939-17126-1-git-send-email-ard.biesheuvel@linaro.org>
The new bitsliced NEON implementation of AES uses a fallback in two places:
CBC encryption (which is strictly sequential, whereas this driver can only
operate efficiently on 8 blocks at a time), and the XTS tweak generation,
which involves encrypting a single AES block with a different key schedule.

The plain (i.e., non-bitsliced) NEON code is more suitable as a fallback,
given that it is faster than scalar on low end cores (which is what the
NEON implementations target, since high end cores have dedicated
instructions for AES), and shows similar behavior in terms of D-cache
footprint and sensitivity to cache timing attacks. So switch the fallback
handling to the plain NEON driver.

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig           |  2 +-
 arch/arm64/crypto/aes-neonbs-glue.c | 38 ++++++++++++++------
 2 files changed, 29 insertions(+), 11 deletions(-)

-- 
2.7.4

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 5de75c3dcbd4..bed7feddfeed 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -86,7 +86,7 @@ config CRYPTO_AES_ARM64_BS
 	tristate "AES in ECB/CBC/CTR/XTS modes using bit-sliced NEON algorithm"
 	depends on KERNEL_MODE_NEON
 	select CRYPTO_BLKCIPHER
-	select CRYPTO_AES_ARM64
+	select CRYPTO_AES_ARM64_NEON_BLK
 	select CRYPTO_SIMD
 
 endif
diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
index 323dd76ae5f0..863e436ecf89 100644
--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -10,7 +10,6 @@
 
 #include <asm/neon.h>
 #include <crypto/aes.h>
-#include <crypto/cbc.h>
 #include <crypto/internal/simd.h>
 #include <crypto/internal/skcipher.h>
 #include <crypto/xts.h>
@@ -42,7 +41,12 @@ asmlinkage void aesbs_xts_encrypt(u8 out[], u8 const in[], u8 const rk[],
 asmlinkage void aesbs_xts_decrypt(u8 out[], u8 const in[], u8 const rk[],
 				  int rounds, int blocks, u8 iv[]);
 
-asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
+/* borrowed from aes-neon-blk.ko */
+asmlinkage void neon_aes_ecb_encrypt(u8 out[], u8 const in[], u32 const rk[],
+				     int rounds, int blocks, int first);
+asmlinkage void neon_aes_cbc_encrypt(u8 out[], u8 const in[], u32 const rk[],
+				     int rounds, int blocks, u8 iv[],
+				     int first);
 
 struct aesbs_ctx {
 	u8	rk[13 * (8 * AES_BLOCK_SIZE) + 32];
@@ -140,16 +144,28 @@ static int aesbs_cbc_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
 	return 0;
 }
 
-static void cbc_encrypt_one(struct crypto_skcipher *tfm, const u8 *src, u8 *dst)
+static int cbc_encrypt(struct skcipher_request *req)
 {
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct skcipher_walk walk;
+	int err, first = 1;
 
-	__aes_arm64_encrypt(ctx->enc, dst, src, ctx->key.rounds);
-}
+	err = skcipher_walk_virt(&walk, req, true);
 
-static int cbc_encrypt(struct skcipher_request *req)
-{
-	return crypto_cbc_encrypt_walk(req, cbc_encrypt_one);
+	kernel_neon_begin();
+	while (walk.nbytes >= AES_BLOCK_SIZE) {
+		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
+
+		/* fall back to the non-bitsliced NEON implementation */
+		neon_aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
+				     ctx->enc, ctx->key.rounds, blocks, walk.iv,
+				     first);
+		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
+		first = 0;
+	}
+	kernel_neon_end();
+	return err;
 }
 
 static int cbc_decrypt(struct skcipher_request *req)
@@ -254,9 +270,11 @@ static int __xts_crypt(struct skcipher_request *req,
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	__aes_arm64_encrypt(ctx->twkey, walk.iv, walk.iv, ctx->key.rounds);
-
 	kernel_neon_begin();
+
+	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey,
+			     ctx->key.rounds, 1, 1);
+
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
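
A note on the CBC fallback above: CBC encryption is strictly sequential
because every ciphertext block is an input to the encryption of the next
block, so only CBC decryption can be handled eight blocks at a time by the
bit-sliced code. The sketch below is illustrative only and is not taken
from the driver; blk_encrypt() is a toy stand-in for a single-block
cipher, not AES.

#include <string.h>

#define BLK 16

/* toy single-block "cipher" standing in for AES -- illustration only */
static void blk_encrypt(unsigned char out[BLK], const unsigned char in[BLK],
			const unsigned char key[BLK])
{
	int i;

	for (i = 0; i < BLK; i++)
		out[i] = in[i] ^ key[i];
}

/* C_i = E_K(P_i ^ C_{i-1}): the chaining forces one block at a time */
static void cbc_encrypt_sketch(unsigned char *dst, const unsigned char *src,
			       int blocks, unsigned char iv[BLK],
			       const unsigned char key[BLK])
{
	unsigned char buf[BLK];
	int i;

	while (blocks--) {
		for (i = 0; i < BLK; i++)	/* P_i ^ C_{i-1} */
			buf[i] = src[i] ^ iv[i];
		blk_encrypt(dst, buf, key);	/* C_i = E_K(...) */
		memcpy(iv, dst, BLK);		/* chain into the next block */
		src += BLK;
		dst += BLK;
	}
}

CBC decryption, by contrast, only needs the previous ciphertext block,
which is already available for every block, so it parallelises cleanly
and needs no fallback.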
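
Similarly, the __xts_crypt() hunk above replaces the scalar call that
derived the initial tweak. XTS computes T_0 = E_K2(IV) with the separate
tweak-key schedule (ctx->twkey), i.e. exactly one single-block encryption,
which is why a one-block NEON ECB call is sufficient; later per-block
tweaks are obtained by multiplying by alpha in GF(2^128). The helper
below is a self-contained sketch of that multiplication (little-endian
convention), not code from the driver.

#include <stdint.h>

#define BLK 16

/* T_{i+1} = T_i * alpha in GF(2^128), little-endian byte order */
static void xts_next_tweak(uint8_t t[BLK])
{
	uint8_t carry = 0;
	int i;

	for (i = 0; i < BLK; i++) {
		uint8_t msb = t[i] >> 7;

		t[i] = (uint8_t)(t[i] << 1) | carry;
		carry = msb;
	}
	if (carry)		/* reduce by the GF(2^128) polynomial */
		t[0] ^= 0x87;
}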