From patchwork Mon Jul 24 10:28:13 2017
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: herbert@gondor.apana.org.au, dave.martin@arm.com,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH resend 11/18] crypto: arm64/aes-blk - add a non-SIMD fallback for synchronous CTR
Date: Mon, 24 Jul 2017 11:28:13 +0100
Message-Id: <20170724102820.16534-12-ard.biesheuvel@linaro.org>
In-Reply-To: <20170724102820.16534-1-ard.biesheuvel@linaro.org>
References: <20170724102820.16534-1-ard.biesheuvel@linaro.org>

To accommodate systems that may disallow use of the NEON in kernel mode
in some circumstances, introduce a C fallback for synchronous AES in CTR
mode, and use it if may_use_simd() returns false.
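For reference, CTR mode turns the AES block cipher into a stream cipher:
each 16-byte keystream block is the encryption of a big-endian counter
that starts at the IV and is incremented once per block, and ciphertext
is plaintext XORed with that keystream (decryption is the same
operation), which is why the patch wires ctr_encrypt_sync() into both
.encrypt and .decrypt. The standalone sketch below illustrates only this
construction; aes_encrypt_block() is a dummy stand-in (NOT real AES),
used so the example compiles on its own, whereas the kernel code calls
the generic scalar AES core (__aes_arm64_encrypt) instead.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16

/* Placeholder "block cipher" for illustration: rotate bytes, XOR key. */
static void aes_encrypt_block(const uint8_t key[BLOCK_SIZE],
			      const uint8_t in[BLOCK_SIZE],
			      uint8_t out[BLOCK_SIZE])
{
	for (int i = 0; i < BLOCK_SIZE; i++)
		out[i] = in[(i + 1) % BLOCK_SIZE] ^ key[i];
}

/* Big-endian increment of the counter block, as crypto_inc() does. */
static void ctr_inc(uint8_t ctr[BLOCK_SIZE])
{
	for (int i = BLOCK_SIZE - 1; i >= 0; i--)
		if (++ctr[i] != 0)
			break;
}

/* Encrypt (or decrypt: CTR is symmetric) len bytes of src into dst. */
static void ctr_crypt(const uint8_t key[BLOCK_SIZE],
		      uint8_t ctr[BLOCK_SIZE],
		      const uint8_t *src, uint8_t *dst, size_t len)
{
	uint8_t ks[BLOCK_SIZE];

	while (len > 0) {
		size_t n = len < BLOCK_SIZE ? len : BLOCK_SIZE;

		aes_encrypt_block(key, ctr, ks);	/* keystream block */
		for (size_t i = 0; i < n; i++)
			dst[i] = src[i] ^ ks[i];	/* XOR into output */
		ctr_inc(ctr);
		src += n;
		dst += n;
		len -= n;
	}
}

int main(void)
{
	uint8_t key[BLOCK_SIZE] = { 1, 2, 3, 4 };
	uint8_t iv[BLOCK_SIZE]  = { 0 };
	uint8_t msg[20] = "counter mode test";
	uint8_t ct[20], pt[20];
	uint8_t ctr[BLOCK_SIZE];

	memcpy(ctr, iv, BLOCK_SIZE);
	ctr_crypt(key, ctr, msg, ct, sizeof(msg));

	memcpy(ctr, iv, BLOCK_SIZE);	/* decrypt = same op, same IV */
	ctr_crypt(key, ctr, ct, pt, sizeof(msg));

	printf("roundtrip %s\n", memcmp(msg, pt, sizeof(msg)) ? "failed" : "ok");
	return 0;
}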
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/Kconfig            |  6 +-
 arch/arm64/crypto/aes-ctr-fallback.h | 53 ++++++++++++++++++
 arch/arm64/crypto/aes-glue.c         | 59 +++++++++++++++-----
 3 files changed, 101 insertions(+), 17 deletions(-)

-- 
2.9.3

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index ba637765c19a..a068dcbe2518 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -64,15 +64,17 @@ config CRYPTO_AES_ARM64_CE_CCM
 
 config CRYPTO_AES_ARM64_CE_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_AES_ARM64_CE
+	select CRYPTO_AES_ARM64
 	select CRYPTO_SIMD
 
 config CRYPTO_AES_ARM64_NEON_BLK
 	tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions"
-	depends on ARM64 && KERNEL_MODE_NEON
+	depends on KERNEL_MODE_NEON
 	select CRYPTO_BLKCIPHER
+	select CRYPTO_AES_ARM64
 	select CRYPTO_AES
 	select CRYPTO_SIMD
 
diff --git a/arch/arm64/crypto/aes-ctr-fallback.h b/arch/arm64/crypto/aes-ctr-fallback.h
new file mode 100644
index 000000000000..c9285717b6b5
--- /dev/null
+++ b/arch/arm64/crypto/aes-ctr-fallback.h
@@ -0,0 +1,53 @@
+/*
+ * Fallback for sync aes(ctr) in contexts where kernel mode NEON
+ * is not allowed
+ *
+ * Copyright (C) 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <crypto/aes.h>
+#include <crypto/internal/skcipher.h>
+
+asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
+
+static inline int aes_ctr_encrypt_fallback(struct crypto_aes_ctx *ctx,
+					   struct skcipher_request *req)
+{
+	struct skcipher_walk walk;
+	u8 buf[AES_BLOCK_SIZE];
+	int err;
+
+	err = skcipher_walk_virt(&walk, req, true);
+
+	while (walk.nbytes > 0) {
+		u8 *dst = walk.dst.virt.addr;
+		u8 *src = walk.src.virt.addr;
+		int nbytes = walk.nbytes;
+		int tail = 0;
+
+		if (nbytes < walk.total) {
+			nbytes = round_down(nbytes, AES_BLOCK_SIZE);
+			tail = walk.nbytes % AES_BLOCK_SIZE;
+		}
+
+		do {
+			int bsize = min(nbytes, AES_BLOCK_SIZE);
+
+			__aes_arm64_encrypt(ctx->key_enc, buf, walk.iv,
+					    6 + ctx->key_length / 4);
+			crypto_xor_cpy(dst, src, buf, bsize);
+			crypto_inc(walk.iv, AES_BLOCK_SIZE);
+
+			dst += AES_BLOCK_SIZE;
+			src += AES_BLOCK_SIZE;
+			nbytes -= AES_BLOCK_SIZE;
+		} while (nbytes > 0);
+
+		err = skcipher_walk_done(&walk, tail);
+	}
+	return err;
+}
diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 0da30e3b0e4b..998ba519a026 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -10,6 +10,7 @@
 
 #include <asm/neon.h>
 #include <asm/hwcap.h>
+#include <asm/simd.h>
 #include <crypto/aes.h>
 #include <crypto/internal/hash.h>
 #include <crypto/internal/simd.h>
@@ -19,6 +20,7 @@
 #include <crypto/xts.h>
 
 #include "aes-ce-setkey.h"
+#include "aes-ctr-fallback.h"
 
 #ifdef USE_V8_CRYPTO_EXTENSIONS
 #define MODE "ce"
@@ -249,6 +251,17 @@ static int ctr_encrypt(struct skcipher_request *req)
 	return err;
 }
 
+static int ctr_encrypt_sync(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (!may_use_simd())
+		return aes_ctr_encrypt_fallback(ctx, req);
+
+	return ctr_encrypt(req);
+}
+
 static int xts_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -355,8 +368,8 @@ static struct skcipher_alg aes_algs[] = { {
 	.ivsize		= AES_BLOCK_SIZE,
 	.chunksize	= AES_BLOCK_SIZE,
 	.setkey		= skcipher_aes_setkey,
-	.encrypt	= ctr_encrypt,
-	.decrypt	= ctr_encrypt,
+	.encrypt	= ctr_encrypt_sync,
+	.decrypt	= ctr_encrypt_sync,
 }, {
 	.base = {
 		.cra_name		= "__xts(aes)",
@@ -458,11 +471,35 @@ static int mac_init(struct shash_desc *desc)
 	return 0;
 }
 
+static void mac_do_update(struct crypto_aes_ctx *ctx, u8 const in[], int blocks,
+			  u8 dg[], int enc_before, int enc_after)
+{
+	int rounds = 6 + ctx->key_length / 4;
+
+	if (may_use_simd()) {
+		kernel_neon_begin();
+		aes_mac_update(in, ctx->key_enc, rounds, blocks, dg, enc_before,
+			       enc_after);
+		kernel_neon_end();
+	} else {
+		if (enc_before)
+			__aes_arm64_encrypt(ctx->key_enc, dg, dg, rounds);
+
+		while (blocks--) {
+			crypto_xor(dg, in, AES_BLOCK_SIZE);
+			in += AES_BLOCK_SIZE;
+
+			if (blocks || enc_after)
+				__aes_arm64_encrypt(ctx->key_enc, dg, dg,
+						    rounds);
+		}
+	}
+}
+
 static int mac_update(struct shash_desc *desc, const u8 *p, unsigned int len)
 {
 	struct mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
 	struct mac_desc_ctx *ctx = shash_desc_ctx(desc);
-	int rounds = 6 + tctx->key.key_length / 4;
 
 	while (len > 0) {
 		unsigned int l;
@@ -474,10 +511,8 @@ static int mac_update(struct shash_desc *desc, const u8 *p, unsigned int len)
 
 			len %= AES_BLOCK_SIZE;
 
-			kernel_neon_begin();
-			aes_mac_update(p, tctx->key.key_enc, rounds, blocks,
-				       ctx->dg, (ctx->len != 0), (len != 0));
-			kernel_neon_end();
+			mac_do_update(&tctx->key, p, blocks, ctx->dg,
+				      (ctx->len != 0), (len != 0));
 
 			p += blocks * AES_BLOCK_SIZE;
 
@@ -505,11 +540,8 @@ static int cbcmac_final(struct shash_desc *desc, u8 *out)
 {
 	struct mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
 	struct mac_desc_ctx *ctx = shash_desc_ctx(desc);
-	int rounds = 6 + tctx->key.key_length / 4;
 
-	kernel_neon_begin();
-	aes_mac_update(NULL, tctx->key.key_enc, rounds, 0, ctx->dg, 1, 0);
-	kernel_neon_end();
+	mac_do_update(&tctx->key, NULL, 0, ctx->dg, 1, 0);
 
 	memcpy(out, ctx->dg, AES_BLOCK_SIZE);
 
@@ -520,7 +552,6 @@ static int cmac_final(struct shash_desc *desc, u8 *out)
 {
 	struct mac_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
 	struct mac_desc_ctx *ctx = shash_desc_ctx(desc);
-	int rounds = 6 + tctx->key.key_length / 4;
 	u8 *consts = tctx->consts;
 
 	if (ctx->len != AES_BLOCK_SIZE) {
@@ -528,9 +559,7 @@ static int cmac_final(struct shash_desc *desc, u8 *out)
 		consts += AES_BLOCK_SIZE;
 	}
 
-	kernel_neon_begin();
-	aes_mac_update(consts, tctx->key.key_enc, rounds, 1, ctx->dg, 0, 1);
-	kernel_neon_end();
+	mac_do_update(&tctx->key, consts, 1, ctx->dg, 0, 1);
 
 	memcpy(out, ctx->dg, AES_BLOCK_SIZE);
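
A note on the !may_use_simd() path in mac_do_update(): it performs the
CBC-MAC recurrence dg = E_k(dg ^ block) one block at a time with the
scalar AES core. As I read it, enc_before means dg holds data XORed in
by a previous call that has not yet been encrypted, and enc_after
forces the encryption of the final block rather than deferring it; the
deferral is what lets cmac_final() fold the subkey constant into the
last block before the closing encryption, and lets the NEON and scalar
paths interoperate on the same partial state. A standalone sketch of
the same recurrence, again with a stand-in encrypt_block() rather than
real AES so it compiles on its own:

#include <stdint.h>

#define BLK 16

/* Stand-in single-block "cipher" (NOT real AES; illustration only). */
static void encrypt_block(const uint8_t key[BLK], uint8_t buf[BLK])
{
	for (int i = 0; i < BLK; i++)
		buf[i] = (uint8_t)(buf[i] ^ key[i] ^ (i * 37));
}

/*
 * CBC-MAC update over whole blocks, mirroring the scalar branch of
 * mac_do_update(): optionally encrypt the carried-over state first,
 * then XOR in and encrypt each block, leaving the last encryption
 * pending when enc_after is zero.
 */
static void cbcmac_do_update(const uint8_t key[BLK], const uint8_t *in,
			     int blocks, uint8_t dg[BLK],
			     int enc_before, int enc_after)
{
	if (enc_before)
		encrypt_block(key, dg);

	while (blocks--) {
		for (int i = 0; i < BLK; i++)
			dg[i] ^= in[i];
		in += BLK;

		if (blocks || enc_after)
			encrypt_block(key, dg);
	}
}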