From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, linux-arm-kernel@lists.infradead.org,
    Dave Martin, Russell King - ARM Linux, Sebastian Andrzej Siewior,
    Mark Rutland, linux-rt-users@vger.kernel.org, Peter Zijlstra,
    Catalin Marinas, Will Deacon, Steven Rostedt, Thomas Gleixner
Subject: [PATCH v2 00/19] crypto: arm64 - play nice with CONFIG_PREEMPT
Date: Mon, 4 Dec 2017 12:26:26 +0000
Message-Id: <20171204122645.31535-1-ard.biesheuvel@linaro.org>

This is a follow-up to 'crypto: arm64 - disable NEON across scatterwalk
API calls', sent out last Friday.
As reported by Sebastian, the way the arm64 NEON crypto code currently
keeps kernel mode NEON enabled across calls into skcipher_walk_xxx() is
causing problems with RT builds: the skcipher walk API may allocate and
free the temporary buffers it uses to present the input and output
arrays to the crypto algorithm in blocksize-sized chunks (where
blocksize is the natural blocksize of the crypto algorithm), and doing
so with NEON enabled means we are allocating and freeing memory with
preemption disabled.

This was deliberate: when this code was introduced, each
kernel_neon_begin() and kernel_neon_end() call incurred a fixed penalty
of storing or loading, respectively, the contents of all NEON registers
to/from memory, so doing it less often had an obvious performance
benefit. However, in the meantime, we have refactored the core kernel
mode NEON code, and now kernel_neon_begin() only incurs this penalty
the first time it is called after entering the kernel, with the NEON
register restore deferred until returning to userland. This means
pulling those calls into the loops that iterate over the input/output
of the crypto algorithm is no longer a big deal (although there are
some places in the code where we relied on the NEON registers retaining
their values between calls).

So let's clean this up for arm64: update the NEON based skcipher
drivers to no longer keep the NEON enabled when calling into the
skcipher walk API.

As pointed out by Peter, this only solves part of the problem. So let's
tackle it more thoroughly, and update the algorithms to test the
NEED_RESCHED flag after processing each fixed chunk of input. An
attempt was made to align the different algorithms with regard to how
much work such a fixed chunk entails: yielding after every block for an
algorithm that operates on 16 byte blocks at < 1 cycle per byte seems
rather pointless.
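To make the first change (patches #2 - #6) concrete, here is a minimal
sketch of what moving the kernel mode NEON en/disable into the loop
looks like, modelled on the ECB path in arch/arm64/crypto/aes-glue.c;
the aes_ecb_encrypt() asm helper declaration and the exact arguments
are illustrative, not the literal patch contents:

    #include <asm/neon.h>
    #include <crypto/aes.h>
    #include <crypto/internal/skcipher.h>

    /* NEON asm helper, as declared in the existing arm64 AES driver */
    asmlinkage void aes_ecb_encrypt(u8 out[], u8 const in[],
                                    u8 const rk[], int rounds,
                                    int blocks);

    static int ecb_encrypt(struct skcipher_request *req)
    {
            struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
            struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
            int err, rounds = 6 + ctx->key_length / 4;
            struct skcipher_walk walk;
            unsigned int blocks;

            /* may allocate bounce buffers: preemption stays enabled */
            err = skcipher_walk_virt(&walk, req, false);

            while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
                    kernel_neon_begin();  /* preemption off from here */
                    aes_ecb_encrypt(walk.dst.virt.addr,
                                    walk.src.virt.addr,
                                    (u8 *)ctx->key_enc, rounds, blocks);
                    kernel_neon_end();    /* ... to here */

                    /* may free bounce buffers: preemptible again */
                    err = skcipher_walk_done(&walk,
                                             walk.nbytes % AES_BLOCK_SIZE);
            }
            return err;
    }

With this structure, skcipher_walk_virt() and skcipher_walk_done(),
which may allocate and free bounce buffers, always run with preemption
enabled, which is also why the GFP_ATOMIC allocations can be dropped.
The yield patches (#10 - #18) push the same idea into the core
transforms, checking the NEED_RESCHED flag after each fixed chunk of
input and briefly re-enabling preemption when a reschedule is due.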
Changes since v1:
- add CRC-T10DIF test vector (#1)
- stop using GFP_ATOMIC in scatterwalk API calls, now that they are
  executed with preemption enabled (#2 - #6)
- do some preparatory refactoring on the AES block mode code (#7 - #9)
- add yield patches (#10 - #18)
- add test patch (#19) - DO NOT MERGE

Cc: Dave Martin
Cc: Russell King - ARM Linux
Cc: Sebastian Andrzej Siewior
Cc: Mark Rutland
Cc: linux-rt-users@vger.kernel.org
Cc: Peter Zijlstra
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Steven Rostedt
Cc: Thomas Gleixner

Ard Biesheuvel (19):
  crypto: testmgr - add a new test case for CRC-T10DIF
  crypto: arm64/aes-ce-ccm - move kernel mode neon en/disable into loop
  crypto: arm64/aes-blk - move kernel mode neon en/disable into loop
  crypto: arm64/aes-bs - move kernel mode neon en/disable into loop
  crypto: arm64/chacha20 - move kernel mode neon en/disable into loop
  crypto: arm64/ghash - move kernel mode neon en/disable into loop
  crypto: arm64/aes-blk - remove configurable interleave
  crypto: arm64/aes-blk - add 4 way interleave to CBC encrypt path
  crypto: arm64/aes-blk - add 4 way interleave to CBC-MAC encrypt path
  crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels
  arm64: assembler: add macro to conditionally yield the NEON under
    PREEMPT
  crypto: arm64/sha1-ce - yield every 8 blocks of input
  crypto: arm64/sha2-ce - yield every 8 blocks of input
  crypto: arm64/aes-blk - yield after processing each 64 bytes of input
  crypto: arm64/aes-bs - yield after processing each 128 bytes of input
  crypto: arm64/aes-ghash - yield after processing fixed number of
    blocks
  crypto: arm64/crc32-ce - yield NEON every 16 blocks of input
  crypto: arm64/crct10dif-ce - yield NEON every 8 blocks of input
  DO NOT MERGE

 arch/arm64/crypto/Makefile             |   3 -
 arch/arm64/crypto/aes-ce-ccm-glue.c    |  47 +-
 arch/arm64/crypto/aes-ce.S             |  17 +-
 arch/arm64/crypto/aes-glue.c           |  95 ++-
 arch/arm64/crypto/aes-modes.S          | 624 ++++++++++----------
 arch/arm64/crypto/aes-neon.S           |   2 +
 arch/arm64/crypto/aes-neonbs-core.S    | 317 ++++++----
 arch/arm64/crypto/aes-neonbs-glue.c    |  48 +-
 arch/arm64/crypto/chacha20-neon-glue.c |  12 +-
 arch/arm64/crypto/crc32-ce-core.S      |  55 +-
 arch/arm64/crypto/crct10dif-ce-core.S  |  39 +-
 arch/arm64/crypto/ghash-ce-core.S      | 128 ++--
 arch/arm64/crypto/ghash-ce-glue.c      |  17 +-
 arch/arm64/crypto/sha1-ce-core.S       |  45 +-
 arch/arm64/crypto/sha2-ce-core.S       |  40 +-
 arch/arm64/crypto/sha256-glue.c        |  36 +-
 arch/arm64/include/asm/assembler.h     |  83 +++
 crypto/testmgr.h                       | 259 ++++++++
 18 files changed, 1231 insertions(+), 636 deletions(-)

-- 
2.11.0