From patchwork Mon Jul 30 21:06:40 2018
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 143170
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, vakul.garg@nxp.com,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v2 1/3] crypto/arm64: aes-ce-gcm - operate on two input blocks at a time
Date: Mon, 30 Jul 2018 23:06:40 +0200
Message-Id: <20180730210642.25180-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20180730210642.25180-1-ard.biesheuvel@linaro.org>
References: <20180730210642.25180-1-ard.biesheuvel@linaro.org>

Update the core AES/GCM transform and the associated plumbing to operate
on 2 AES/GHASH blocks at a time. By itself, this is not expected to
result in a noticeable speedup, but it paves the way for reimplementing
the GHASH component using 2-way aggregation.
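In outline, the new loop consumes two blocks per iteration. A rough C
model of the reworked encrypt path follows; the helper names
(aes_encrypt_block(), xor_block(), ghash_update()) are hypothetical
stand-ins for the fused assembly in the diff below, which additionally
interleaves the AES round instructions with the GHASH multiplications:

	/* sketch only: helper names are hypothetical, not kernel APIs */
	while (blocks >= 2) {
		u8 ks0[16], ks1[16];

		aes_encrypt_block(ks0, ctr);		/* keystream block 0  */
		aes_encrypt_block(ks1, ctr + 1);	/* keystream block 1  */
		ctr += 2;				/* counter steps by 2 */

		xor_block(dst, src, ks0);		/* CTR-encrypt both   */
		xor_block(dst + 16, src + 16, ks1);

		ghash_update(dg, dst);			/* fold both blocks   */
		ghash_update(dg, dst + 16);		/* into the digest    */

		src += 32;
		dst += 32;
		blocks -= 2;
	}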
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/ghash-ce-core.S | 127 +++++++++++++++-----
 arch/arm64/crypto/ghash-ce-glue.c | 103 ++++++++++------
 2 files changed, 161 insertions(+), 69 deletions(-)

-- 
2.18.0

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index c723647b37db..dac0df29d194 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -286,9 +286,10 @@ ENTRY(pmull_ghash_update_p8)
 	__pmull_ghash	p8
 ENDPROC(pmull_ghash_update_p8)

-	KS	.req	v8
-	CTR	.req	v9
-	INP	.req	v10
+	KS0	.req	v8
+	KS1	.req	v9
+	INP0	.req	v10
+	INP1	.req	v11

 	.macro		load_round_keys, rounds, rk
 	cmp		\rounds, #12
@@ -336,84 +337,146 @@ CPU_LE(	rev	x8, x8		)

 	.if		\enc == 1
 	ldr		x10, [sp]
-	ld1		{KS.16b}, [x10]
+	ld1		{KS0.16b-KS1.16b}, [x10]
 	.endif

-0:	ld1		{CTR.8b}, [x5]			// load upper counter
-	ld1		{INP.16b}, [x3], #16
+0:	ld1		{INP0.16b-INP1.16b}, [x3], #32
+
 	rev		x9, x8
-	add		x8, x8, #1
-	sub		w0, w0, #1
-	ins		CTR.d[1], x9			// set lower counter
+	add		x11, x8, #1
+	add		x8, x8, #2

 	.if		\enc == 1
-	eor		INP.16b, INP.16b, KS.16b	// encrypt input
-	st1		{INP.16b}, [x2], #16
+	eor		INP0.16b, INP0.16b, KS0.16b	// encrypt input
+	eor		INP1.16b, INP1.16b, KS1.16b
 	.endif

-	rev64		T1.16b, INP.16b
+	ld1		{KS0.8b}, [x5]			// load upper counter
+	rev		x11, x11
+	sub		w0, w0, #2
+	mov		KS1.8b, KS0.8b
+	ins		KS0.d[1], x9			// set lower counter
+	ins		KS1.d[1], x11
+
+	rev64		T1.16b, INP0.16b

 	cmp		w7, #12
 	b.ge		2f				// AES-192/256?

-1:	enc_round	CTR, v21
+1:	enc_round	KS0, v21

 	ext		T2.16b, XL.16b, XL.16b, #8
 	ext		IN1.16b, T1.16b, T1.16b, #8

-	enc_round	CTR, v22
+	enc_round	KS1, v21

 	eor		T1.16b, T1.16b, T2.16b
 	eor		XL.16b, XL.16b, IN1.16b

-	enc_round	CTR, v23
+	enc_round	KS0, v22

 	pmull2		XH.1q, SHASH.2d, XL.2d		// a1 * b1
 	eor		T1.16b, T1.16b, XL.16b

-	enc_round	CTR, v24
+	enc_round	KS1, v22

 	pmull		XL.1q, SHASH.1d, XL.1d		// a0 * b0
 	pmull		XM.1q, SHASH2.1d, T1.1d		// (a1 + a0)(b1 + b0)

-	enc_round	CTR, v25
+	enc_round	KS0, v23

 	ext		T1.16b, XL.16b, XH.16b, #8
 	eor		T2.16b, XL.16b, XH.16b
 	eor		XM.16b, XM.16b, T1.16b

-	enc_round	CTR, v26
+	enc_round	KS1, v23

 	eor		XM.16b, XM.16b, T2.16b
 	pmull		T2.1q, XL.1d, MASK.1d

-	enc_round	CTR, v27
+	enc_round	KS0, v24

 	mov		XH.d[0], XM.d[1]
 	mov		XM.d[1], XL.d[0]

-	enc_round	CTR, v28
+	enc_round	KS1, v24

 	eor		XL.16b, XM.16b, T2.16b

-	enc_round	CTR, v29
+	enc_round	KS0, v25

 	ext		T2.16b, XL.16b, XL.16b, #8

-	aese		CTR.16b, v30.16b
+	enc_round	KS1, v25

 	pmull		XL.1q, XL.1d, MASK.1d
 	eor		T2.16b, T2.16b, XH.16b

-	eor		KS.16b, CTR.16b, v31.16b
+	enc_round	KS0, v26
+
+	eor		XL.16b, XL.16b, T2.16b
+	rev64		T1.16b, INP1.16b
+
+	enc_round	KS1, v26
+
+	ext		T2.16b, XL.16b, XL.16b, #8
+	ext		IN1.16b, T1.16b, T1.16b, #8
+
+	enc_round	KS0, v27
+
+	eor		T1.16b, T1.16b, T2.16b
+	eor		XL.16b, XL.16b, IN1.16b
+
+	enc_round	KS1, v27
+
+	pmull2		XH.1q, SHASH.2d, XL.2d		// a1 * b1
+	eor		T1.16b, T1.16b, XL.16b
+
+	enc_round	KS0, v28
+
+	pmull		XL.1q, SHASH.1d, XL.1d		// a0 * b0
+	pmull		XM.1q, SHASH2.1d, T1.1d		// (a1 + a0)(b1 + b0)
+
+	enc_round	KS1, v28
+
+	ext		T1.16b, XL.16b, XH.16b, #8
+	eor		T2.16b, XL.16b, XH.16b
+	eor		XM.16b, XM.16b, T1.16b
+
+	enc_round	KS0, v29
+
+	eor		XM.16b, XM.16b, T2.16b
+	pmull		T2.1q, XL.1d, MASK.1d
+
+	enc_round	KS1, v29
+
+	mov		XH.d[0], XM.d[1]
+	mov		XM.d[1], XL.d[0]
+
+	aese		KS0.16b, v30.16b
+
+	eor		XL.16b, XM.16b, T2.16b
+
+	aese		KS1.16b, v30.16b
+
+	ext		T2.16b, XL.16b, XL.16b, #8
+
+	eor		KS0.16b, KS0.16b, v31.16b
+
+	pmull		XL.1q, XL.1d, MASK.1d
+	eor		T2.16b, T2.16b, XH.16b
+
+	eor		KS1.16b, KS1.16b, v31.16b

 	eor		XL.16b, XL.16b, T2.16b

 	.if		\enc == 0
-	eor		INP.16b, INP.16b, KS.16b
-	st1		{INP.16b}, [x2], #16
+	eor		INP0.16b, INP0.16b, KS0.16b
+	eor		INP1.16b, INP1.16b, KS1.16b
 	.endif

+	st1		{INP0.16b-INP1.16b}, [x2], #32
+
 	cbnz		w0, 0b

CPU_LE(	rev	x8, x8		)
@@ -421,16 +484,20 @@ CPU_LE(	rev	x8, x8		)
 	str		x8, [x5, #8]			// store lower counter

 	.if		\enc == 1
-	st1		{KS.16b}, [x10]
+	st1		{KS0.16b-KS1.16b}, [x10]
 	.endif

 	ret

2:	b.eq		3f				// AES-192?
-	enc_round	CTR, v17
-	enc_round	CTR, v18
-3:	enc_round	CTR, v19
-	enc_round	CTR, v20
+	enc_round	KS0, v17
+	enc_round	KS1, v17
+	enc_round	KS0, v18
+	enc_round	KS1, v18
+3:	enc_round	KS0, v19
+	enc_round	KS1, v19
+	enc_round	KS0, v20
+	enc_round	KS1, v20
 	b		1b
 	.endm

diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 8a10f1d7199a..e649f9f6e689 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -349,9 +349,10 @@ static int gcm_encrypt(struct aead_request *req)
 	struct gcm_aes_ctx *ctx = crypto_aead_ctx(aead);
 	struct skcipher_walk walk;
 	u8 iv[AES_BLOCK_SIZE];
-	u8 ks[AES_BLOCK_SIZE];
+	u8 ks[2 * AES_BLOCK_SIZE];
 	u8 tag[AES_BLOCK_SIZE];
 	u64 dg[2] = {};
+	int nrounds = num_rounds(&ctx->aes_key);
 	int err;

 	if (req->assoclen)
@@ -363,32 +364,31 @@ static int gcm_encrypt(struct aead_request *req)

 	if (likely(may_use_simd())) {
 		kernel_neon_begin();

-		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc,
-					num_rounds(&ctx->aes_key));
+		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
-		pmull_gcm_encrypt_block(ks, iv, NULL,
-					num_rounds(&ctx->aes_key));
+		pmull_gcm_encrypt_block(ks, iv, NULL, nrounds);
 		put_unaligned_be32(3, iv + GCM_IV_SIZE);
+		pmull_gcm_encrypt_block(ks + AES_BLOCK_SIZE, iv, NULL, nrounds);
+		put_unaligned_be32(4, iv + GCM_IV_SIZE);
 		kernel_neon_end();

 		err = skcipher_walk_aead_encrypt(&walk, req, false);

-		while (walk.nbytes >= AES_BLOCK_SIZE) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
+		while (walk.nbytes >= 2 * AES_BLOCK_SIZE) {
+			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;

 			kernel_neon_begin();
 			pmull_gcm_encrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, &ctx->ghash_key,
-					  iv, ctx->aes_key.key_enc,
-					  num_rounds(&ctx->aes_key), ks);
+					  iv, ctx->aes_key.key_enc, nrounds,
+					  ks);
 			kernel_neon_end();

 			err = skcipher_walk_done(&walk,
-					walk.nbytes % AES_BLOCK_SIZE);
+					walk.nbytes % (2 * AES_BLOCK_SIZE));
 		}
 	} else {
-		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv,
-				    num_rounds(&ctx->aes_key));
+		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);

 		err = skcipher_walk_aead_encrypt(&walk, req, false);
@@ -400,8 +400,7 @@ static int gcm_encrypt(struct aead_request *req)

 			do {
 				__aes_arm64_encrypt(ctx->aes_key.key_enc,
-						    ks, iv,
-						    num_rounds(&ctx->aes_key));
+						    ks, iv, nrounds);
 				crypto_xor_cpy(dst, src, ks, AES_BLOCK_SIZE);
 				crypto_inc(iv, AES_BLOCK_SIZE);

@@ -418,19 +417,28 @@ static int gcm_encrypt(struct aead_request *req)
 		}
 		if (walk.nbytes)
 			__aes_arm64_encrypt(ctx->aes_key.key_enc, ks, iv,
-					    num_rounds(&ctx->aes_key));
+					    nrounds);
 	}

 	/* handle the tail */
 	if (walk.nbytes) {
 		u8 buf[GHASH_BLOCK_SIZE];
+		unsigned int nbytes = walk.nbytes;
+		u8 *dst = walk.dst.virt.addr;
+		u8 *head = NULL;

 		crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, ks,
 			       walk.nbytes);

-		memcpy(buf, walk.dst.virt.addr, walk.nbytes);
-		memset(buf + walk.nbytes, 0, GHASH_BLOCK_SIZE - walk.nbytes);
-		ghash_do_update(1, dg, buf, &ctx->ghash_key, NULL);
+		if (walk.nbytes > GHASH_BLOCK_SIZE) {
+			head = dst;
+			dst += GHASH_BLOCK_SIZE;
+			nbytes %= GHASH_BLOCK_SIZE;
+		}
+
+		memcpy(buf, dst, nbytes);
+		memset(buf + nbytes, 0, GHASH_BLOCK_SIZE - nbytes);
+		ghash_do_update(!!nbytes, dg, buf, &ctx->ghash_key, head);

 		err = skcipher_walk_done(&walk, 0);
 	}
@@ -453,10 +461,11 @@ static int gcm_decrypt(struct aead_request *req)
 	struct gcm_aes_ctx *ctx = crypto_aead_ctx(aead);
 	unsigned int authsize = crypto_aead_authsize(aead);
 	struct skcipher_walk walk;
-	u8 iv[AES_BLOCK_SIZE];
+	u8 iv[2 * AES_BLOCK_SIZE];
 	u8 tag[AES_BLOCK_SIZE];
-	u8 buf[GHASH_BLOCK_SIZE];
+	u8 buf[2 * GHASH_BLOCK_SIZE];
 	u64 dg[2] = {};
+	int nrounds = num_rounds(&ctx->aes_key);
 	int err;

 	if (req->assoclen)
@@ -467,37 +476,44 @@ static int gcm_decrypt(struct aead_request *req)

 	if (likely(may_use_simd())) {
 		kernel_neon_begin();
-
-		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc,
-					num_rounds(&ctx->aes_key));
+		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 		kernel_neon_end();

 		err = skcipher_walk_aead_decrypt(&walk, req, false);

-		while (walk.nbytes >= AES_BLOCK_SIZE) {
-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
+		while (walk.nbytes >= 2 * AES_BLOCK_SIZE) {
+			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;

 			kernel_neon_begin();
 			pmull_gcm_decrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, &ctx->ghash_key,
-					  iv, ctx->aes_key.key_enc,
-					  num_rounds(&ctx->aes_key));
+					  iv, ctx->aes_key.key_enc, nrounds);
 			kernel_neon_end();

 			err = skcipher_walk_done(&walk,
-					walk.nbytes % AES_BLOCK_SIZE);
+					walk.nbytes % (2 * AES_BLOCK_SIZE));
 		}
+
 		if (walk.nbytes) {
+			u8 *iv2 = iv + AES_BLOCK_SIZE;
+
+			if (walk.nbytes > AES_BLOCK_SIZE) {
+				memcpy(iv2, iv, AES_BLOCK_SIZE);
+				crypto_inc(iv2, AES_BLOCK_SIZE);
+			}
+
 			kernel_neon_begin();
 			pmull_gcm_encrypt_block(iv, iv, ctx->aes_key.key_enc,
-					num_rounds(&ctx->aes_key));
+						nrounds);
+
+			if (walk.nbytes > AES_BLOCK_SIZE)
+				pmull_gcm_encrypt_block(iv2, iv2, NULL,
+							nrounds);
 			kernel_neon_end();
 		}
-
 	} else {
-		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv,
-				    num_rounds(&ctx->aes_key));
+		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);

 		err = skcipher_walk_aead_decrypt(&walk, req, false);
@@ -512,8 +528,7 @@ static int gcm_decrypt(struct aead_request *req)

 			do {
 				__aes_arm64_encrypt(ctx->aes_key.key_enc,
-						    buf, iv,
-						    num_rounds(&ctx->aes_key));
+						    buf, iv, nrounds);
 				crypto_xor_cpy(dst, src, buf, AES_BLOCK_SIZE);
 				crypto_inc(iv, AES_BLOCK_SIZE);

@@ -526,14 +541,24 @@ static int gcm_decrypt(struct aead_request *req)
 		}
 		if (walk.nbytes)
 			__aes_arm64_encrypt(ctx->aes_key.key_enc, iv, iv,
-					    num_rounds(&ctx->aes_key));
+					    nrounds);
 	}

 	/* handle the tail */
 	if (walk.nbytes) {
-		memcpy(buf, walk.src.virt.addr, walk.nbytes);
-		memset(buf + walk.nbytes, 0, GHASH_BLOCK_SIZE - walk.nbytes);
-		ghash_do_update(1, dg, buf, &ctx->ghash_key, NULL);
+		const u8 *src = walk.src.virt.addr;
+		const u8 *head = NULL;
+		unsigned int nbytes = walk.nbytes;
+
+		if (walk.nbytes > GHASH_BLOCK_SIZE) {
+			head = src;
+			src += GHASH_BLOCK_SIZE;
+			nbytes %= GHASH_BLOCK_SIZE;
+		}
+
+		memcpy(buf, src, nbytes);
+		memset(buf + nbytes, 0, GHASH_BLOCK_SIZE - nbytes);
+		ghash_do_update(!!nbytes, dg, buf, &ctx->ghash_key, head);

 		crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, iv,
 			       walk.nbytes);
@@ -558,7 +583,7 @@ static int gcm_decrypt(struct aead_request *req)

 static struct aead_alg gcm_aes_alg = {
 	.ivsize			= GCM_IV_SIZE,
-	.chunksize		= AES_BLOCK_SIZE,
+	.chunksize		= 2 * AES_BLOCK_SIZE,
 	.maxauthsize		= AES_BLOCK_SIZE,
 	.setkey			= gcm_setkey,
 	.setauthsize		= gcm_setauthsize,
From patchwork Mon Jul 30 21:06:41 2018
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 143171
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, vakul.garg@nxp.com,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v2 2/3] crypto/arm64: aes-ce-gcm - implement 2-way aggregation
Date: Mon, 30 Jul 2018 23:06:41 +0200
Message-Id: <20180730210642.25180-3-ard.biesheuvel@linaro.org>
In-Reply-To: <20180730210642.25180-1-ard.biesheuvel@linaro.org>
References: <20180730210642.25180-1-ard.biesheuvel@linaro.org>

Implement a faster version of the GHASH transform which amortizes the
reduction modulo the characteristic polynomial across two input blocks
at a time. On a Cortex-A53, the gcm(aes) performance increases 24%,
from 3.0 cycles per byte to 2.4 cpb for large input sizes.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
Raw numbers after the patch.
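The identity being exploited: GHASH folds each block Ci into the digest X
as X <- (X ^ Ci) . H, a multiplication in GF(2^128) that ends with a
reduction modulo x^128 + x^7 + x^2 + x + 1. Two consecutive steps compose
as

	((X ^ C1) . H  ^  C2) . H  ==  (X ^ C1) . H^2  ^  C2 . H

so with H^2 precomputed (which gcm_setkey() now does below, via
gf128mul_lle()), both carry-less products can be accumulated in unreduced
256-bit form and reduced once. A minimal sketch with hypothetical
gf128/gf256 helpers; in the assembly, XL2/XM2/XH2 hold the unreduced
product of the second block with SHASH (H), and the running digest is
multiplied by HH, i.e. H^2:

	typedef struct { u64 a, b; } gf128;	/* element of GF(2^128)      */
	typedef struct { gf128 hi, lo; } gf256;	/* unreduced 256-bit product */

	gf256 clmul(gf128 x, gf128 y);		/* hypothetical 128x128->256 */
	gf256 xor256(gf256 x, gf256 y);
	gf128 xor128(gf128 x, gf128 y);
	gf128 reduce(gf256 p);		/* mod x^128 + x^7 + x^2 + x + 1     */

	/* one block at a time: one reduction per block */
	gf128 ghash2_serial(gf128 x, gf128 c1, gf128 c2, gf128 h)
	{
		x = reduce(clmul(xor128(x, c1), h));
		x = reduce(clmul(xor128(x, c2), h));
		return x;
	}

	/* 2-way aggregated: same result, one reduction per two blocks */
	gf128 ghash2_aggregated(gf128 x, gf128 c1, gf128 c2, gf128 h, gf128 h2)
	{
		return reduce(xor256(clmul(xor128(x, c1), h2),	/* (X^C1).H^2 */
				     clmul(c2, h)));		/* C2.H       */
	}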
 arch/arm64/crypto/ghash-ce-core.S | 86 +++++++-------------
 arch/arm64/crypto/ghash-ce-glue.c | 34 +++++---
 2 files changed, 52 insertions(+), 68 deletions(-)

-- 
2.18.0

BASELINE:
=========
test 0 (128 bit key, 16 byte blocks): 445165 operations in 1 seconds (7122640 bytes)
test 1 (128 bit key, 64 byte blocks): 437076 operations in 1 seconds (27972864 bytes)
test 2 (128 bit key, 256 byte blocks): 354203 operations in 1 seconds (90675968 bytes)
test 3 (128 bit key, 512 byte blocks): 284031 operations in 1 seconds (145423872 bytes)
test 4 (128 bit key, 1024 byte blocks): 203473 operations in 1 seconds (208356352 bytes)
test 5 (128 bit key, 2048 byte blocks): 129855 operations in 1 seconds (265943040 bytes)
test 6 (128 bit key, 4096 byte blocks): 75686 operations in 1 seconds (310009856 bytes)
test 7 (128 bit key, 8192 byte blocks): 40167 operations in 1 seconds (329048064 bytes)
test 8 (192 bit key, 16 byte blocks): 441610 operations in 1 seconds (7065760 bytes)
test 9 (192 bit key, 64 byte blocks): 429364 operations in 1 seconds (27479296 bytes)
test 10 (192 bit key, 256 byte blocks): 343303 operations in 1 seconds (87885568 bytes)
test 11 (192 bit key, 512 byte blocks): 272029 operations in 1 seconds (139278848 bytes)
test 12 (192 bit key, 1024 byte blocks): 192399 operations in 1 seconds (197016576 bytes)
test 13 (192 bit key, 2048 byte blocks): 121298 operations in 1 seconds (248418304 bytes)
test 14 (192 bit key, 4096 byte blocks): 69994 operations in 1 seconds (286695424 bytes)
test 15 (192 bit key, 8192 byte blocks): 37045 operations in 1 seconds (303472640 bytes)
test 16 (256 bit key, 16 byte blocks): 438244 operations in 1 seconds (7011904 bytes)
test 17 (256 bit key, 64 byte blocks): 423345 operations in 1 seconds (27094080 bytes)
test 18 (256 bit key, 256 byte blocks): 336844 operations in 1 seconds (86232064 bytes)
test 19 (256 bit key, 512 byte blocks): 265711 operations in 1 seconds (136044032 bytes)
test 20 (256 bit key, 1024 byte blocks): 186853 operations in 1 seconds (191337472 bytes)
test 21 (256 bit key, 2048 byte blocks): 117301 operations in 1 seconds (240232448 bytes)
test 22 (256 bit key, 4096 byte blocks): 67513 operations in 1 seconds (276533248 bytes)
test 23 (256 bit key, 8192 byte blocks): 35629 operations in 1 seconds (291872768 bytes)

THIS PATCH:
===========
test 0 (128 bit key, 16 byte blocks): 441257 operations in 1 seconds (7060112 bytes)
test 1 (128 bit key, 64 byte blocks): 436595 operations in 1 seconds (27942080 bytes)
test 2 (128 bit key, 256 byte blocks): 369839 operations in 1 seconds (94678784 bytes)
test 3 (128 bit key, 512 byte blocks): 308239 operations in 1 seconds (157818368 bytes)
test 4 (128 bit key, 1024 byte blocks): 231004 operations in 1 seconds (236548096 bytes)
test 5 (128 bit key, 2048 byte blocks): 153930 operations in 1 seconds (315248640 bytes)
test 6 (128 bit key, 4096 byte blocks): 92739 operations in 1 seconds (379858944 bytes)
test 7 (128 bit key, 8192 byte blocks): 49934 operations in 1 seconds (409059328 bytes)
test 8 (192 bit key, 16 byte blocks): 437427 operations in 1 seconds (6998832 bytes)
test 9 (192 bit key, 64 byte blocks): 429462 operations in 1 seconds (27485568 bytes)
test 10 (192 bit key, 256 byte blocks): 358183 operations in 1 seconds (91694848 bytes)
test 11 (192 bit key, 512 byte blocks): 294539 operations in 1 seconds (150803968 bytes)
test 12 (192 bit key, 1024 byte blocks): 217082 operations in 1 seconds (222291968 bytes)
test 13 (192 bit key, 2048 byte blocks): 140672 operations in 1 seconds (288096256 bytes)
test 14 (192 bit key, 4096 byte blocks): 84369 operations in 1 seconds (345575424 bytes)
test 15 (192 bit key, 8192 byte blocks): 45280 operations in 1 seconds (370933760 bytes)
test 16 (256 bit key, 16 byte blocks): 434127 operations in 1 seconds (6946032 bytes)
test 17 (256 bit key, 64 byte blocks): 423837 operations in 1 seconds (27125568 bytes)
test 18 (256 bit key, 256 byte blocks): 351244 operations in 1 seconds (89918464 bytes)
test 19 (256 bit key, 512 byte blocks): 286884 operations in 1 seconds (146884608 bytes)
test 20 (256 bit key, 1024 byte blocks): 209954 operations in 1 seconds (214992896 bytes)
test 21 (256 bit key, 2048 byte blocks): 136553 operations in 1 seconds (279660544 bytes)
test 22 (256 bit key, 4096 byte blocks): 80749 operations in 1 seconds (330747904 bytes)
test 23 (256 bit key, 8192 byte blocks): 43118 operations in 1 seconds (353222656 bytes)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index dac0df29d194..f7281e7a592f 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -290,6 +290,10 @@ ENDPROC(pmull_ghash_update_p8)
 	KS1	.req	v9
 	INP0	.req	v10
 	INP1	.req	v11
+	HH	.req	v12
+	XL2	.req	v13
+	XM2	.req	v14
+	XH2	.req	v15

 	.macro		load_round_keys, rounds, rk
 	cmp		\rounds, #12
@@ -323,6 +327,7 @@ ENDPROC(pmull_ghash_update_p8)
 	.endm

 	.macro		pmull_gcm_do_crypt, enc
+	ld1		{HH.2d}, [x4], #16
 	ld1		{SHASH.2d}, [x4]
 	ld1		{XL.2d}, [x1]
 	ldr		x8, [x5, #8]			// load lower counter
@@ -330,10 +335,11 @@ ENDPROC(pmull_ghash_update_p8)
 	load_round_keys	w7, x6

 	movi		MASK.16b, #0xe1
-	ext		SHASH2.16b, SHASH.16b, SHASH.16b, #8
+	trn1		SHASH2.2d, SHASH.2d, HH.2d
+	trn2		T1.2d, SHASH.2d, HH.2d
CPU_LE(	rev	x8, x8		)
 	shl		MASK.2d, MASK.2d, #57
-	eor		SHASH2.16b, SHASH2.16b, SHASH.16b
+	eor		SHASH2.16b, SHASH2.16b, T1.16b

 	.if		\enc == 1
 	ldr		x10, [sp]
@@ -358,116 +364,82 @@ CPU_LE(	rev	x8, x8		)
 	ins		KS0.d[1], x9			// set lower counter
 	ins		KS1.d[1], x11

-	rev64		T1.16b, INP0.16b
+	rev64		T1.16b, INP1.16b

 	cmp		w7, #12
 	b.ge		2f				// AES-192/256?
 1:	enc_round	KS0, v21
-
-	ext		T2.16b, XL.16b, XL.16b, #8
 	ext		IN1.16b, T1.16b, T1.16b, #8

 	enc_round	KS1, v21
-
-	eor		T1.16b, T1.16b, T2.16b
-	eor		XL.16b, XL.16b, IN1.16b
+	pmull2		XH2.1q, SHASH.2d, IN1.2d	// a1 * b1

 	enc_round	KS0, v22
-
-	pmull2		XH.1q, SHASH.2d, XL.2d		// a1 * b1
-	eor		T1.16b, T1.16b, XL.16b
+	eor		T1.16b, T1.16b, IN1.16b

 	enc_round	KS1, v22
-
-	pmull		XL.1q, SHASH.1d, XL.1d		// a0 * b0
-	pmull		XM.1q, SHASH2.1d, T1.1d		// (a1 + a0)(b1 + b0)
+	pmull		XL2.1q, SHASH.1d, IN1.1d	// a0 * b0

 	enc_round	KS0, v23
-
-	ext		T1.16b, XL.16b, XH.16b, #8
-	eor		T2.16b, XL.16b, XH.16b
-	eor		XM.16b, XM.16b, T1.16b
+	pmull		XM2.1q, SHASH2.1d, T1.1d	// (a1 + a0)(b1 + b0)

 	enc_round	KS1, v23
-
-	eor		XM.16b, XM.16b, T2.16b
-	pmull		T2.1q, XL.1d, MASK.1d
+	rev64		T1.16b, INP0.16b
+	ext		T2.16b, XL.16b, XL.16b, #8

 	enc_round	KS0, v24
-
-	mov		XH.d[0], XM.d[1]
-	mov		XM.d[1], XL.d[0]
+	ext		IN1.16b, T1.16b, T1.16b, #8
+	eor		T1.16b, T1.16b, T2.16b

 	enc_round	KS1, v24
-
-	eor		XL.16b, XM.16b, T2.16b
+	eor		XL.16b, XL.16b, IN1.16b

 	enc_round	KS0, v25
-
-	ext		T2.16b, XL.16b, XL.16b, #8
+	eor		T1.16b, T1.16b, XL.16b

 	enc_round	KS1, v25
-
-	pmull		XL.1q, XL.1d, MASK.1d
-	eor		T2.16b, T2.16b, XH.16b
+	pmull2		XH.1q, HH.2d, XL.2d		// a1 * b1

 	enc_round	KS0, v26
-
-	eor		XL.16b, XL.16b, T2.16b
-	rev64		T1.16b, INP1.16b
+	pmull		XL.1q, HH.1d, XL.1d		// a0 * b0

 	enc_round	KS1, v26
-
-	ext		T2.16b, XL.16b, XL.16b, #8
-	ext		IN1.16b, T1.16b, T1.16b, #8
+	pmull2		XM.1q, SHASH2.2d, T1.2d		// (a1 + a0)(b1 + b0)

 	enc_round	KS0, v27
-
-	eor		T1.16b, T1.16b, T2.16b
-	eor		XL.16b, XL.16b, IN1.16b
+	eor		XL.16b, XL.16b, XL2.16b
+	eor		XH.16b, XH.16b, XH2.16b

 	enc_round	KS1, v27
-
-	pmull2		XH.1q, SHASH.2d, XL.2d		// a1 * b1
-	eor		T1.16b, T1.16b, XL.16b
+	eor		XM.16b, XM.16b, XM2.16b
+	ext		T1.16b, XL.16b, XH.16b, #8

 	enc_round	KS0, v28
-
-	pmull		XL.1q, SHASH.1d, XL.1d		// a0 * b0
-	pmull		XM.1q, SHASH2.1d, T1.1d		// (a1 + a0)(b1 + b0)
-
-	enc_round	KS1, v28
-
-	ext		T1.16b, XL.16b, XH.16b, #8
 	eor		T2.16b, XL.16b, XH.16b
 	eor		XM.16b, XM.16b, T1.16b

-	enc_round	KS0, v29
-
+	enc_round	KS1, v28
 	eor		XM.16b, XM.16b, T2.16b
+
+	enc_round	KS0, v29
 	pmull		T2.1q, XL.1d, MASK.1d

 	enc_round	KS1, v29
-
 	mov		XH.d[0], XM.d[1]
 	mov		XM.d[1], XL.d[0]

 	aese		KS0.16b, v30.16b
-
 	eor		XL.16b, XM.16b, T2.16b

 	aese		KS1.16b, v30.16b
-
 	ext		T2.16b, XL.16b, XL.16b, #8

 	eor		KS0.16b, KS0.16b, v31.16b
-
 	pmull		XL.1q, XL.1d, MASK.1d
 	eor		T2.16b, T2.16b, XH.16b

 	eor		KS1.16b, KS1.16b, v31.16b
-
 	eor		XL.16b, XL.16b, T2.16b

 	.if		\enc == 0
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index e649f9f6e689..c41ac62c90e9 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -46,6 +46,7 @@ struct ghash_desc_ctx {

 struct gcm_aes_ctx {
 	struct crypto_aes_ctx	aes_key;
+	u64			h2[2];
 	struct ghash_key	ghash_key;
 };

@@ -62,12 +63,11 @@ static void (*pmull_ghash_update)(int blocks, u64 dg[], const char *src,
 				  const char *head);

 asmlinkage void pmull_gcm_encrypt(int blocks, u64 dg[], u8 dst[],
-				  const u8 src[], struct ghash_key const *k,
-				  u8 ctr[], u32 const rk[], int rounds,
-				  u8 ks[]);
+				  const u8 src[], u64 const *k, u8 ctr[],
+				  u32 const rk[], int rounds, u8 ks[]);

 asmlinkage void pmull_gcm_decrypt(int blocks, u64 dg[], u8 dst[],
-				  const u8 src[], struct ghash_key const *k,
+				  const u8 src[], u64 const *k,
 				  u8 ctr[], u32 const rk[], int rounds);

 asmlinkage void pmull_gcm_encrypt_block(u8 dst[], u8 const src[],
@@ -233,7 +233,8 @@ static int gcm_setkey(struct crypto_aead *tfm, const u8 *inkey,
 		      unsigned int keylen)
 {
 	struct gcm_aes_ctx *ctx = crypto_aead_ctx(tfm);
-	u8 key[GHASH_BLOCK_SIZE];
+	be128 h1, h2;
+	u8 *key = (u8 *)&h1;
 	int ret;

 	ret = crypto_aes_expand_key(&ctx->aes_key, inkey, keylen);
@@ -245,7 +246,19 @@ static int gcm_setkey(struct crypto_aead *tfm, const u8 *inkey,
 	__aes_arm64_encrypt(ctx->aes_key.key_enc, key, (u8[AES_BLOCK_SIZE]){},
 			    num_rounds(&ctx->aes_key));

-	return __ghash_setkey(&ctx->ghash_key, key, sizeof(key));
+	__ghash_setkey(&ctx->ghash_key, key, sizeof(be128));
+
+	/* calculate H^2 (used for 2-way aggregation) */
+	h2 = h1;
+	gf128mul_lle(&h2, &h1);
+
+	ctx->h2[0] = (be64_to_cpu(h2.b) << 1) | (be64_to_cpu(h2.a) >> 63);
+	ctx->h2[1] = (be64_to_cpu(h2.a) << 1) | (be64_to_cpu(h2.b) >> 63);
+
+	if (be64_to_cpu(h2.a) >> 63)
+		ctx->h2[1] ^= 0xc200000000000000UL;
+
+	return 0;
 }

 static int gcm_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
@@ -379,9 +392,8 @@ static int gcm_encrypt(struct aead_request *req)

 			kernel_neon_begin();
 			pmull_gcm_encrypt(blocks, dg, walk.dst.virt.addr,
-					  walk.src.virt.addr, &ctx->ghash_key,
-					  iv, ctx->aes_key.key_enc, nrounds,
-					  ks);
+					  walk.src.virt.addr, ctx->h2, iv,
+					  ctx->aes_key.key_enc, nrounds, ks);
 			kernel_neon_end();

 			err = skcipher_walk_done(&walk,
@@ -487,8 +499,8 @@ static int gcm_decrypt(struct aead_request *req)

 			kernel_neon_begin();
 			pmull_gcm_decrypt(blocks, dg, walk.dst.virt.addr,
-					  walk.src.virt.addr, &ctx->ghash_key,
-					  iv, ctx->aes_key.key_enc, nrounds);
+					  walk.src.virt.addr, ctx->h2, iv,
+					  ctx->aes_key.key_enc, nrounds);
 			kernel_neon_end();

 			err = skcipher_walk_done(&walk,

From patchwork Mon Jul 30 21:06:42 2018
X-Patchwork-Submitter: Ard Biesheuvel <ard.biesheuvel@linaro.org>
X-Patchwork-Id: 143172
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, vakul.garg@nxp.com,
 Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v2 3/3] crypto: arm64/aes-ce-gcm - don't reload key schedule if avoidable
Date: Mon, 30 Jul 2018 23:06:42 +0200
Message-Id: <20180730210642.25180-4-ard.biesheuvel@linaro.org>
In-Reply-To: <20180730210642.25180-1-ard.biesheuvel@linaro.org>
References: <20180730210642.25180-1-ard.biesheuvel@linaro.org>

Squeeze out another 5% of performance by minimizing the number of
invocations of kernel_neon_begin()/kernel_neon_end() on the common path,
which also allows some reloads of the key schedule to be optimized away.
The resulting code runs at 2.3 cycles per byte on a Cortex-A53.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
Raw numbers after the patch.
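The shape of the reworked walk loop (encrypt side shown), paraphrased
from the diff below rather than a drop-in implementation: the first
iteration rides on the kernel_neon_begin() already issued for the
tag/keystream setup and passes rk == NULL, which the asm routine takes
to mean the round keys are still live in v17-v31 (the new "cbnz x6, 4f"
path); only after kernel_neon_end() may have given the registers away
does the loop pass a real key pointer, making the asm reload the
schedule:

	u32 const *rk = NULL;

	kernel_neon_begin();		/* enter NEON once up front       */
	/* ... tag and keystream blocks computed here, keys loaded ...   */

	do {
		int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;

		if (rk)			/* re-enter NEON only if we left  */
			kernel_neon_begin();

		pmull_gcm_encrypt(blocks, dg, walk.dst.virt.addr,
				  walk.src.virt.addr, ctx->h2, iv,
				  rk, nrounds, ks);	/* NULL: keys cached */
		kernel_neon_end();

		err = skcipher_walk_done(&walk,
					 walk.nbytes % (2 * AES_BLOCK_SIZE));

		rk = ctx->aes_key.key_enc;	/* force reload next time */
	} while (walk.nbytes >= 2 * AES_BLOCK_SIZE);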
 arch/arm64/crypto/ghash-ce-core.S |  9 ++-
 arch/arm64/crypto/ghash-ce-glue.c | 81 +++++++++++---------
 2 files changed, 49 insertions(+), 41 deletions(-)

-- 
2.18.0

testing speed of gcm(aes) (gcm-aes-ce) encryption
test 0 (128 bit key, 16 byte blocks): 365343 operations in 1 seconds (5845488 bytes)
test 1 (128 bit key, 64 byte blocks): 504620 operations in 1 seconds (32295680 bytes)
test 2 (128 bit key, 256 byte blocks): 418881 operations in 1 seconds (107233536 bytes)
test 3 (128 bit key, 512 byte blocks): 343166 operations in 1 seconds (175700992 bytes)
test 4 (128 bit key, 1024 byte blocks): 252229 operations in 1 seconds (258282496 bytes)
test 5 (128 bit key, 2048 byte blocks): 164862 operations in 1 seconds (337637376 bytes)
test 6 (128 bit key, 4096 byte blocks): 98274 operations in 1 seconds (402530304 bytes)
test 7 (128 bit key, 8192 byte blocks): 52530 operations in 1 seconds (430325760 bytes)
test 8 (192 bit key, 16 byte blocks): 343221 operations in 1 seconds (5491536 bytes)
test 9 (192 bit key, 64 byte blocks): 495929 operations in 1 seconds (31739456 bytes)
test 10 (192 bit key, 256 byte blocks): 404755 operations in 1 seconds (103617280 bytes)
test 11 (192 bit key, 512 byte blocks): 326728 operations in 1 seconds (167284736 bytes)
test 12 (192 bit key, 1024 byte blocks): 235987 operations in 1 seconds (241650688 bytes)
test 13 (192 bit key, 2048 byte blocks): 151724 operations in 1 seconds (310730752 bytes)
test 14 (192 bit key, 4096 byte blocks): 89285 operations in 1 seconds (365711360 bytes)
test 15 (192 bit key, 8192 byte blocks): 47432 operations in 1 seconds (388562944 bytes)
test 16 (256 bit key, 16 byte blocks): 323574 operations in 1 seconds (5177184 bytes)
test 17 (256 bit key, 64 byte blocks): 489854 operations in 1 seconds (31350656 bytes)
test 18 (256 bit key, 256 byte blocks): 396979 operations in 1 seconds (101626624 bytes)
test 19 (256 bit key, 512 byte blocks): 317923 operations in 1 seconds (162776576 bytes)
test 20 (256 bit key, 1024 byte blocks): 211440 operations in 1 seconds (216514560 bytes)
test 21 (256 bit key, 2048 byte blocks): 145407 operations in 1 seconds (297793536 bytes)
test 22 (256 bit key, 4096 byte blocks): 85050 operations in 1 seconds (348364800 bytes)
test 23 (256 bit key, 8192 byte blocks): 45068 operations in 1 seconds (369197056 bytes)

diff --git a/arch/arm64/crypto/ghash-ce-core.S b/arch/arm64/crypto/ghash-ce-core.S
index f7281e7a592f..913e49932ae6 100644
--- a/arch/arm64/crypto/ghash-ce-core.S
+++ b/arch/arm64/crypto/ghash-ce-core.S
@@ -1,7 +1,7 @@
 /*
  * Accelerated GHASH implementation with ARMv8 PMULL instructions.
  *
- * Copyright (C) 2014 - 2017 Linaro Ltd. <ard.biesheuvel@linaro.org>
+ * Copyright (C) 2014 - 2018 Linaro Ltd. <ard.biesheuvel@linaro.org>
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License version 2 as published
@@ -332,8 +332,6 @@ ENDPROC(pmull_ghash_update_p8)
 	ld1		{XL.2d}, [x1]
 	ldr		x8, [x5, #8]			// load lower counter

-	load_round_keys	w7, x6
-
 	movi		MASK.16b, #0xe1
 	trn1		SHASH2.2d, SHASH.2d, HH.2d
 	trn2		T1.2d, SHASH.2d, HH.2d
@@ -346,6 +344,8 @@ CPU_LE(	rev	x8, x8		)
 	ld1		{KS0.16b-KS1.16b}, [x10]
 	.endif

+	cbnz		x6, 4f
+
 0:	ld1		{INP0.16b-INP1.16b}, [x3], #32

 	rev		x9, x8
@@ -471,6 +471,9 @@ CPU_LE(	rev	x8, x8		)
 	enc_round	KS0, v20
 	enc_round	KS1, v20
 	b		1b
+
+4:	load_round_keys	w7, x6
+	b		0b
 	.endm

 /*
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index c41ac62c90e9..88e3d93fa7c7 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -1,7 +1,7 @@
 /*
  * Accelerated GHASH implementation with ARMv8 PMULL instructions.
  *
- * Copyright (C) 2014 - 2017 Linaro Ltd. <ard.biesheuvel@linaro.org>
+ * Copyright (C) 2014 - 2018 Linaro Ltd. <ard.biesheuvel@linaro.org>
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of the GNU General Public License version 2 as published
@@ -374,37 +374,39 @@ static int gcm_encrypt(struct aead_request *req)
 	memcpy(iv, req->iv, GCM_IV_SIZE);
 	put_unaligned_be32(1, iv + GCM_IV_SIZE);

-	if (likely(may_use_simd())) {
-		kernel_neon_begin();
+	err = skcipher_walk_aead_encrypt(&walk, req, false);

+	if (likely(may_use_simd() && walk.total >= 2 * AES_BLOCK_SIZE)) {
+		u32 const *rk = NULL;
+
+		kernel_neon_begin();
 		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
 		pmull_gcm_encrypt_block(ks, iv, NULL, nrounds);
 		put_unaligned_be32(3, iv + GCM_IV_SIZE);
 		pmull_gcm_encrypt_block(ks + AES_BLOCK_SIZE, iv, NULL, nrounds);
 		put_unaligned_be32(4, iv + GCM_IV_SIZE);
-		kernel_neon_end();
-
-		err = skcipher_walk_aead_encrypt(&walk, req, false);

-		while (walk.nbytes >= 2 * AES_BLOCK_SIZE) {
+		do {
 			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;

-			kernel_neon_begin();
+			if (rk)
+				kernel_neon_begin();
+
 			pmull_gcm_encrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, ctx->h2, iv,
-					  ctx->aes_key.key_enc, nrounds, ks);
+					  rk, nrounds, ks);
 			kernel_neon_end();

 			err = skcipher_walk_done(&walk,
 					walk.nbytes % (2 * AES_BLOCK_SIZE));
-		}
+
+			rk = ctx->aes_key.key_enc;
+		} while (walk.nbytes >= 2 * AES_BLOCK_SIZE);
 	} else {
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);

-		err = skcipher_walk_aead_encrypt(&walk, req, false);
-
 		while (walk.nbytes >= AES_BLOCK_SIZE) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 			u8 *dst = walk.dst.virt.addr;
@@ -486,50 +488,53 @@ static int gcm_decrypt(struct aead_request *req)
 	memcpy(iv, req->iv, GCM_IV_SIZE);
 	put_unaligned_be32(1, iv + GCM_IV_SIZE);

-	if (likely(may_use_simd())) {
+	err = skcipher_walk_aead_decrypt(&walk, req, false);
+
+	if (likely(may_use_simd() && walk.total >= 2 * AES_BLOCK_SIZE)) {
+		u32 const *rk = NULL;
+
 		kernel_neon_begin();
 		pmull_gcm_encrypt_block(tag, iv, ctx->aes_key.key_enc, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
-		kernel_neon_end();

-		err = skcipher_walk_aead_decrypt(&walk, req, false);
-
-		while (walk.nbytes >= 2 * AES_BLOCK_SIZE) {
+		do {
 			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
+			int rem = walk.total - blocks * AES_BLOCK_SIZE;
+
+			if (rk)
+				kernel_neon_begin();

-			kernel_neon_begin();
 			pmull_gcm_decrypt(blocks, dg, walk.dst.virt.addr,
 					  walk.src.virt.addr, ctx->h2, iv,
-					  ctx->aes_key.key_enc, nrounds);
-			kernel_neon_end();
+					  rk, nrounds);

-			err = skcipher_walk_done(&walk,
-					walk.nbytes % (2 * AES_BLOCK_SIZE));
-		}
+			/* check if this is the final iteration of the loop */
+			if (rem < (2 * AES_BLOCK_SIZE)) {
+				u8 *iv2 = iv + AES_BLOCK_SIZE;

-		if (walk.nbytes) {
-			u8 *iv2 = iv + AES_BLOCK_SIZE;
+				if (rem > AES_BLOCK_SIZE) {
+					memcpy(iv2, iv, AES_BLOCK_SIZE);
+					crypto_inc(iv2, AES_BLOCK_SIZE);
+				}

-			if (walk.nbytes > AES_BLOCK_SIZE) {
-				memcpy(iv2, iv, AES_BLOCK_SIZE);
-				crypto_inc(iv2, AES_BLOCK_SIZE);
-			}
+				pmull_gcm_encrypt_block(iv, iv, NULL, nrounds);

-			kernel_neon_begin();
-			pmull_gcm_encrypt_block(iv, iv, ctx->aes_key.key_enc,
-						nrounds);
+				if (rem > AES_BLOCK_SIZE)
+					pmull_gcm_encrypt_block(iv2, iv2, NULL,
+								nrounds);
+			}

-			if (walk.nbytes > AES_BLOCK_SIZE)
-				pmull_gcm_encrypt_block(iv2, iv2, NULL,
-							nrounds);
 			kernel_neon_end();
-		}
+
+			err = skcipher_walk_done(&walk,
+					walk.nbytes % (2 * AES_BLOCK_SIZE));
+
+			rk = ctx->aes_key.key_enc;
+		} while (walk.nbytes >= 2 * AES_BLOCK_SIZE);
 	} else {
 		__aes_arm64_encrypt(ctx->aes_key.key_enc, tag, iv, nrounds);
 		put_unaligned_be32(2, iv + GCM_IV_SIZE);

-		err = skcipher_walk_aead_decrypt(&walk, req, false);
-
 		while (walk.nbytes >= AES_BLOCK_SIZE) {
 			int blocks = walk.nbytes / AES_BLOCK_SIZE;
 			u8 *dst = walk.dst.virt.addr;