From patchwork Fri Dec 1 21:19:25 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 120386
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    Ard Biesheuvel, Dave Martin, Russell King - ARM Linux,
    Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org,
    Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt,
    Thomas Gleixner
Subject: [PATCH 3/5] crypto: arm64/aes-bs - move kernel mode neon en/disable into loop
Date: Fri, 1 Dec 2017 21:19:25 +0000
Message-Id: <20171201211927.24653-4-ard.biesheuvel@linaro.org>
In-Reply-To: <20171201211927.24653-1-ard.biesheuvel@linaro.org>
References: <20171201211927.24653-1-ard.biesheuvel@linaro.org>

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(),
and restoring them on each call to kernel_neon_end(). For this reason,
the NEON crypto code that was introduced at the time keeps NEON enabled
throughout the execution of the crypto API methods, which may include
calls back into the crypto API that could result in memory allocation
or other actions that we should avoid when running with preemption
disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), so the preserve action is
only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder
of the code with kernel mode NEON disabled (and preemption enabled).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/aes-neonbs-glue.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
index 9d823c77ec84..fa09dc340a1e 100644
--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -101,7 +101,6 @@ static int __ecb_crypt(struct skcipher_request *req,
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -109,12 +108,13 @@ static int __ecb_crypt(struct skcipher_request *req,
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		fn(walk.dst.virt.addr, walk.src.virt.addr, ctx->rk,
 		   ctx->rounds, blocks);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -160,17 +160,17 @@ static int cbc_encrypt(struct skcipher_request *req)
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
 		/* fall back to the non-bitsliced NEON implementation */
+		kernel_neon_begin();
 		neon_aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				     ctx->enc, ctx->key.rounds, blocks,
 				     walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -183,7 +183,6 @@ static int cbc_decrypt(struct skcipher_request *req)
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -191,13 +190,14 @@ static int cbc_decrypt(struct skcipher_request *req)
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		aesbs_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				  ctx->key.rk, ctx->key.rounds, blocks,
 				  walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -231,7 +231,6 @@ static int ctr_encrypt(struct skcipher_request *req)
 
 	err = skcipher_walk_virt(&walk, req, true);
 
-	kernel_neon_begin();
 	while (walk.nbytes > 0) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 		u8 *final = (walk.total % AES_BLOCK_SIZE) ? buf : NULL;
@@ -242,8 +241,10 @@ static int ctr_encrypt(struct skcipher_request *req)
 			final = NULL;
 		}
 
+		kernel_neon_begin();
 		aesbs_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				  ctx->rk, ctx->rounds, blocks, walk.iv, final);
+		kernel_neon_end();
 
 		if (final) {
 			u8 *dst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
@@ -258,8 +259,6 @@ static int ctr_encrypt(struct skcipher_request *req)
 
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
-
 	return err;
 }
@@ -307,9 +306,8 @@ static int __xts_crypt(struct skcipher_request *req,
 	err = skcipher_walk_virt(&walk, req, true);
 
 	kernel_neon_begin();
-
-	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey,
-			     ctx->key.rounds, 1);
+	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey, ctx->key.rounds, 1);
+	kernel_neon_end();
 
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
@@ -318,13 +316,13 @@ static int __xts_crypt(struct skcipher_request *req,
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		fn(walk.dst.virt.addr, walk.src.virt.addr, ctx->key.rk,
 		   ctx->key.rounds, blocks, walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
-
 	return err;
 }
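The shape of the change is the same in all five functions touched by this
patch: the begin/end pair shrinks from enclosing the whole skcipher walk
loop to bracketing only the call into the NEON assembly. The condensed
sketch below shows the resulting pattern; sketch_crypt and neon_fn are
illustrative names for this sketch, not identifiers from the patch.

#include <asm/neon.h>			/* kernel_neon_begin()/kernel_neon_end() */
#include <crypto/aes.h>			/* AES_BLOCK_SIZE */
#include <crypto/internal/skcipher.h>	/* skcipher walk helpers */

typedef void (*neon_fn)(u8 *dst, const u8 *src, int blocks);

static int sketch_crypt(struct skcipher_request *req, neon_fn fn)
{
	struct skcipher_walk walk;
	int err;

	err = skcipher_walk_virt(&walk, req, true);

	while (walk.nbytes >= AES_BLOCK_SIZE) {
		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;

		kernel_neon_begin();	/* disables preemption, grants NEON */
		fn(walk.dst.virt.addr, walk.src.virt.addr, blocks);
		kernel_neon_end();	/* re-enables preemption */

		/* walk bookkeeping now runs preemptible */
		err = skcipher_walk_done(&walk,
					 walk.nbytes - blocks * AES_BLOCK_SIZE);
	}
	return err;
}

Each loop iteration now opens its own preemption-disabled window, so on a
long request the scheduler gets a chance to run between chunks instead of
only after the entire walk completes.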
From patchwork Fri Dec 1 21:19:26 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 120387
From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    Ard Biesheuvel, Dave Martin, Russell King - ARM Linux,
    Sebastian Andrzej Siewior, Mark Rutland, linux-rt-users@vger.kernel.org,
    Peter Zijlstra, Catalin Marinas, Will Deacon, Steven Rostedt,
    Thomas Gleixner
Subject: [PATCH 4/5] crypto: arm64/chacha20 - move kernel mode neon en/disable into loop
Date: Fri, 1 Dec 2017 21:19:26 +0000
Message-Id: <20171201211927.24653-5-ard.biesheuvel@linaro.org>
In-Reply-To: <20171201211927.24653-1-ard.biesheuvel@linaro.org>
References: <20171201211927.24653-1-ard.biesheuvel@linaro.org>

When kernel mode NEON was first introduced on arm64, the preserve and
restore of the userland NEON state was completely unoptimized, and
involved saving all registers on each call to kernel_neon_begin(),
and restoring them on each call to kernel_neon_end(). For this reason,
the NEON crypto code that was introduced at the time keeps NEON enabled
throughout the execution of the crypto API methods, which may include
calls back into the crypto API that could result in memory allocation
or other actions that we should avoid when running with preemption
disabled.

Since then, we have optimized the kernel mode NEON handling, which now
restores lazily (upon return to userland), so the preserve action is
only costly the first time it is called after entering the kernel.

So let's put the kernel_neon_begin() and kernel_neon_end() calls around
the actual invocations of the NEON crypto code, and run the remainder
of the code with kernel mode NEON disabled (and preemption enabled).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/crypto/chacha20-neon-glue.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.11.0

diff --git a/arch/arm64/crypto/chacha20-neon-glue.c b/arch/arm64/crypto/chacha20-neon-glue.c
index cbdb75d15cd0..b9ca7bef428a 100644
--- a/arch/arm64/crypto/chacha20-neon-glue.c
+++ b/arch/arm64/crypto/chacha20-neon-glue.c
@@ -36,6 +36,7 @@ static void chacha20_doneon(u32 *state, u8 *dst, const u8 *src,
 {
 	u8 buf[CHACHA20_BLOCK_SIZE];
 
+	kernel_neon_begin();
 	while (bytes >= CHACHA20_BLOCK_SIZE * 4) {
 		chacha20_4block_xor_neon(state, dst, src);
 		bytes -= CHACHA20_BLOCK_SIZE * 4;
@@ -55,6 +56,7 @@ static void chacha20_doneon(u32 *state, u8 *dst, const u8 *src,
 		chacha20_block_xor_neon(state, buf, buf);
 		memcpy(dst, buf, bytes);
 	}
+	kernel_neon_end();
 }
 
 static int chacha20_neon(struct skcipher_request *req)
@@ -72,7 +74,6 @@ static int chacha20_neon(struct skcipher_request *req)
 
 	crypto_chacha20_init(state, ctx, walk.iv);
 
-	kernel_neon_begin();
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
 
@@ -83,7 +84,6 @@ static int chacha20_neon(struct skcipher_request *req)
 				 nbytes);
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
-	kernel_neon_end();
 
 	return err;
 }
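Note the slightly different granularity compared to the aes-bs patch: here
the begin/end pair moves into chacha20_doneon() itself rather than into the
body of the walk loop, so each walk chunk is processed under a single NEON
region while the walk bookkeeping in chacha20_neon() runs preemptible. An
abbreviated view of the helper after the patch, keeping only the lines the
hunks above show and eliding the rest as marked:

static void chacha20_doneon(u32 *state, u8 *dst, const u8 *src,
			    unsigned int bytes)
{
	u8 buf[CHACHA20_BLOCK_SIZE];

	kernel_neon_begin();	/* one NEON region per walk chunk */
	while (bytes >= CHACHA20_BLOCK_SIZE * 4) {
		chacha20_4block_xor_neon(state, dst, src);
		bytes -= CHACHA20_BLOCK_SIZE * 4;
		/* ... pointer and block-counter advancement elided ... */
	}
	/* ... single-block loop and partial-block tail elided ... */
	kernel_neon_end();
}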