From patchwork Wed Jan 29 16:50:46 2014
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 23887
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-kernel@vger.kernel.org
Cc: catalin.marinas@arm.com, will.deacon@arm.com, tglx@linutronix.de,
	mingo@redhat.com, hpa@zytor.com, x86@kernel.org,
	gregkh@linuxfoundation.org, akpm@linux-foundation.org, arnd@arndb.de,
	linux-arm-kernel@lists.infradead.org,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH 5/5] arm64: add Crypto Extensions based synchronous core AES cipher
Date: Wed, 29 Jan 2014 17:50:46 +0100
Message-Id: <1391014246-9715-6-git-send-email-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1391014246-9715-1-git-send-email-ard.biesheuvel@linaro.org>
References: <1391014246-9715-1-git-send-email-ard.biesheuvel@linaro.org>

This implements the core AES cipher using the Crypto Extensions, using
only NEON registers q0 and q1.
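For reference, the `6 + ctx->key_length / 4` expression used in the patch below derives the AES round count from the key length in bytes: a key of Nk 32-bit words takes Nk + 6 rounds per FIPS-197, and `key_length / 4` is Nk. A minimal user-space sketch (not part of the patch, function name is illustrative only):

```c
#include <assert.h>

/* key_length is in bytes: 16, 24 or 32 for AES-128/192/256 */
static int aes_rounds(int key_length)
{
	return 6 + key_length / 4;	/* Nk + 6 rounds, Nk = key words */
}
```

This yields 10, 12 and 14 rounds for 128-, 192- and 256-bit keys respectively.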
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/Makefile               |   1 +
 arch/arm64/crypto/Makefile        |  13 +++++
 arch/arm64/crypto/aes-ce-cipher.c | 103 ++++++++++++++++++++++++++++++++++++++
 crypto/Kconfig                    |   6 +++
 4 files changed, 123 insertions(+)
 create mode 100644 arch/arm64/crypto/Makefile
 create mode 100644 arch/arm64/crypto/aes-ce-cipher.c

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2fceb71ac3b7..8185a913c5ed 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -45,6 +45,7 @@ export	TEXT_OFFSET GZFLAGS
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
 core-$(CONFIG_KVM) += arch/arm64/kvm/
 core-$(CONFIG_XEN) += arch/arm64/xen/
+core-$(CONFIG_CRYPTO) += arch/arm64/crypto/
 libs-y		:= arch/arm64/lib/ $(libs-y)
 libs-y		+= $(LIBGCC)
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
new file mode 100644
index 000000000000..ac58945c50b3
--- /dev/null
+++ b/arch/arm64/crypto/Makefile
@@ -0,0 +1,13 @@
+#
+# linux/arch/arm64/crypto/Makefile
+#
+# Copyright (C) 2013 Linaro Ltd
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+
+obj-$(CONFIG_CRYPTO_AES_ARM64_CE) += aes-ce-cipher.o
+
+CFLAGS_aes-ce-cipher.o += -march=armv8-a+crypto
diff --git a/arch/arm64/crypto/aes-ce-cipher.c b/arch/arm64/crypto/aes-ce-cipher.c
new file mode 100644
index 000000000000..b5a5d5d6e4b8
--- /dev/null
+++ b/arch/arm64/crypto/aes-ce-cipher.c
@@ -0,0 +1,103 @@
+/*
+ * linux/arch/arm64/crypto/aes-ce-cipher.c
+ *
+ * Copyright (C) 2013 Linaro Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <asm/neon.h>
+#include <crypto/aes.h>
+#include <linux/cpufeature.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+
+MODULE_DESCRIPTION("Synchronous AES cipher using ARMv8 Crypto Extensions");
+MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
+MODULE_LICENSE("GPL");
+
+static void aes_cipher_encrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
+{
+	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	u32 rounds = 6 + ctx->key_length / 4;
+
+	kernel_neon_begin();
+
+	__asm__("	ld1	{v0.16b}, [%[in]]		;"
+		"	ld1	{v1.16b}, [%[key]], #16		;"
+		"0:	aese	v0.16b, v1.16b			;"
+		"	subs	%[rounds], %[rounds], #1	;"
+		"	ld1	{v1.16b}, [%[key]], #16		;"
+		"	beq	1f				;"
+		"	aesmc	v0.16b, v0.16b			;"
+		"	b	0b				;"
+		"1:	eor	v0.16b, v0.16b, v1.16b		;"
+		"	st1	{v0.16b}, [%[out]]		;"
+	: :
+		[out]		"r"(dst),
+		[in]		"r"(src),
+		[rounds]	"r"(rounds),
+		[key]		"r"(ctx->key_enc)
+	: "cc");
+
+	kernel_neon_end();
+}
+
+static void aes_cipher_decrypt(struct crypto_tfm *tfm, u8 dst[], u8 const src[])
+{
+	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	u32 rounds = 6 + ctx->key_length / 4;
+
+	kernel_neon_begin();
+
+	__asm__("	ld1	{v0.16b}, [%[in]]		;"
+		"	ld1	{v1.16b}, [%[key]], #16		;"
+		"0:	aesd	v0.16b, v1.16b			;"
+		"	ld1	{v1.16b}, [%[key]], #16		;"
+		"	subs	%[rounds], %[rounds], #1	;"
+		"	beq	1f				;"
+		"	aesimc	v0.16b, v0.16b			;"
+		"	b	0b				;"
+		"1:	eor	v0.16b, v0.16b, v1.16b		;"
+		"	st1	{v0.16b}, [%[out]]		;"
+	: :
+		[out]		"r"(dst),
+		[in]		"r"(src),
+		[rounds]	"r"(rounds),
+		[key]		"r"(ctx->key_dec)
+	: "cc");
+
+	kernel_neon_end();
+}
+
+static struct crypto_alg aes_alg = {
+	.cra_name		= "aes",
+	.cra_driver_name	= "aes-ce",
+	.cra_priority		= 300,
+	.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
+	.cra_module		= THIS_MODULE,
+	.cra_cipher = {
+		.cia_min_keysize	= AES_MIN_KEY_SIZE,
+		.cia_max_keysize	= AES_MAX_KEY_SIZE,
+		.cia_setkey		= crypto_aes_set_key,
+		.cia_encrypt		= aes_cipher_encrypt,
+		.cia_decrypt		= aes_cipher_decrypt
+	}
+};
+
+static int __init aes_mod_init(void)
+{
+	return crypto_register_alg(&aes_alg);
+}
+
+static void __exit aes_mod_exit(void)
+{
+	crypto_unregister_alg(&aes_alg);
+}
+
+module_cpu_feature_match(AES, aes_mod_init);
+module_exit(aes_mod_exit);
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 7bcb70d216e1..f1d98bc346b6 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -791,6 +791,12 @@ config CRYPTO_AES_ARM_BS
 	  This implementation does not rely on any lookup tables so it is
 	  believed to be invulnerable to cache timing attacks.
 
+config CRYPTO_AES_ARM64_CE
+	tristate "Synchronous AES cipher using ARMv8 Crypto Extensions"
+	depends on ARM64 && KERNEL_MODE_NEON
+	select CRYPTO_ALGAPI
+	select CRYPTO_AES
+
 config CRYPTO_ANUBIS
 	tristate "Anubis cipher algorithm"
 	select CRYPTO_ALGAPI
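
For anyone reading the asm above: the encrypt loop interleaves round-key loads with `aese`/`aesmc` and the `beq 1f` branches past the MixColumns step on the final round before XORing in the last round key. A plain-C model of that control flow (a hypothetical helper, not part of the patch; it only counts operations and computes no actual AES):

```c
#include <assert.h>

struct op_counts { int aese; int aesmc; int key_loads; };

/* Mirrors the branch structure of the inline asm in the encrypt path:
 * the last round executes aese but skips aesmc, and one extra round
 * key is loaded for the final eor. */
static struct op_counts model_encrypt(int rounds)
{
	struct op_counts c = { 0, 0, 0 };

	c.key_loads++;			/* initial ld1 of round key 0 */
	for (;;) {
		c.aese++;		/* aese v0.16b, v1.16b */
		rounds--;		/* subs %[rounds], %[rounds], #1 */
		c.key_loads++;		/* ld1 {v1.16b}, [%[key]], #16 */
		if (rounds == 0)
			break;		/* beq 1f: skip MixColumns */
		c.aesmc++;		/* aesmc v0.16b, v0.16b */
	}
	/* 1: eor with the last loaded round key, then st1 to [out] */
	return c;
}
```

For AES-128 (10 rounds) this issues 10 `aese`, 9 `aesmc`, and loads 11 round keys, i.e. the full (rounds + 1) * 16 bytes of expanded key schedule.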