From patchwork Mon Dec  4 12:26:30 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 120514
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    Ard Biesheuvel, Dave Martin, Russell King - ARM Linux,
    Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org,
    Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt,
    Thomas Gleixner
Subject: [PATCH v2 04/19] crypto: arm64/aes-bs - move kernel mode neon en/disable into loop
Date: Mon, 4 Dec 2017 12:26:30 +0000
Message-Id: <20171204122645.31535-5-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20171204122645.31535-1-ard.biesheuvel@linaro.org>
References: <20171204122645.31535-1-ard.biesheuvel@linaro.org>
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(), and
restoring them on each call to
kernel_neon_end(). For this reason, the NEON crypto code that was
introduced at the time keeps the NEON enabled throughout the execution
of the crypto API methods, which may include calls back into the crypto
API that could result in memory allocation or other actions that we
should avoid when running with preemption disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), and so the preserve action is
only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder of
the code with kernel mode NEON disabled (and preemption enabled).

Signed-off-by: Ard Biesheuvel
---
 arch/arm64/crypto/aes-neonbs-glue.c | 36 +++++++++-----------
 1 file changed, 17 insertions(+), 19 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
index 9d823c77ec84..e7a95a566462 100644
--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -99,9 +99,8 @@ static int __ecb_crypt(struct skcipher_request *req,
 	struct skcipher_walk walk;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -109,12 +108,13 @@ static int __ecb_crypt(struct skcipher_request *req,
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		fn(walk.dst.virt.addr, walk.src.virt.addr, ctx->rk,
 		   ctx->rounds, blocks);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -158,19 +158,19 @@ static int cbc_encrypt(struct skcipher_request *req)
 	struct skcipher_walk walk;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
 		/* fall back to the non-bitsliced NEON implementation */
+		kernel_neon_begin();
 		neon_aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				     ctx->enc, ctx->key.rounds, blocks,
 				     walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -181,9 +181,8 @@ static int cbc_decrypt(struct skcipher_request *req)
 	struct skcipher_walk walk;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -191,13 +190,14 @@ static int cbc_decrypt(struct skcipher_request *req)
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		aesbs_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				  ctx->key.rk, ctx->key.rounds, blocks,
 				  walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -229,9 +229,8 @@ static int ctr_encrypt(struct skcipher_request *req)
 	u8 buf[AES_BLOCK_SIZE];
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while (walk.nbytes > 0) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 		u8 *final = (walk.total % AES_BLOCK_SIZE) ? buf : NULL;
@@ -242,8 +241,10 @@ static int ctr_encrypt(struct skcipher_request *req)
 			final = NULL;
 		}
 
+		kernel_neon_begin();
 		aesbs_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				  ctx->rk, ctx->rounds, blocks, walk.iv, final);
+		kernel_neon_end();
 
 		if (final) {
 			u8 *dst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
@@ -258,8 +259,6 @@ static int ctr_encrypt(struct skcipher_request *req)
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
-
 	return err;
 }
 
@@ -304,12 +303,11 @@ static int __xts_crypt(struct skcipher_request *req,
 	struct skcipher_walk walk;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	kernel_neon_begin();
-
-	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey,
-			     ctx->key.rounds, 1);
+	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey, ctx->key.rounds, 1);
+	kernel_neon_end();
 
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -318,13 +316,13 @@ static int __xts_crypt(struct skcipher_request *req,
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		fn(walk.dst.virt.addr, walk.src.virt.addr, ctx->key.rk,
 		   ctx->key.rounds, blocks, walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
-
 	return err;
 }
 
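
[Editor's note] Each hunk above applies the same shape to a different mode (ECB,
CBC, CTR, XTS): claim the NEON unit only around the call into the asm routine,
inside the skcipher walk loop. The following is a minimal illustrative sketch of
that pattern, not code from the patch; example_crypt() and process_blocks() are
placeholders, while kernel_neon_begin()/kernel_neon_end() and the
skcipher_walk_*() helpers are the real APIs used in the diff.

#include <asm/neon.h>
#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>

/* Placeholder for the actual NEON asm entry point (e.g. a bit-sliced cipher). */
static void process_blocks(u8 *dst, const u8 *src, int blocks);

static int example_crypt(struct skcipher_request *req)
{
	struct skcipher_walk walk;
	int err;

	/* atomic == false: the walk may sleep, since NEON is not held across it */
	err = skcipher_walk_virt(&walk, req, false);

	while (walk.nbytes >= AES_BLOCK_SIZE) {
		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;

		kernel_neon_begin();	/* disables preemption, claims the NEON unit */
		process_blocks(walk.dst.virt.addr, walk.src.virt.addr, blocks);
		kernel_neon_end();	/* releases NEON, preemption enabled again */

		err = skcipher_walk_done(&walk,
					 walk.nbytes - blocks * AES_BLOCK_SIZE);
	}

	return err;
}

The cost of this per-iteration begin/end is small because, as the commit message
notes, the userland NEON state is now preserved lazily: only the first
kernel_neon_begin() after entering the kernel pays for the save.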