From patchwork Tue Sep 3 16:43:24 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 172861
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [PATCH v2 02/17] crypto: arm/aes-ce - yield the SIMD unit between scatterwalk steps
Date: Tue, 3 Sep 2019 09:43:24 -0700
Message-Id: <20190903164339.27984-3-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190903164339.27984-1-ard.biesheuvel@linaro.org>
References: <20190903164339.27984-1-ard.biesheuvel@linaro.org>
X-Mailing-List: linux-crypto@vger.kernel.org

Reduce the scope of the kernel_neon_begin/end regions so that the SIMD
unit is released (and thus preemption re-enabled) if the crypto operation
cannot be completed in a single scatterwalk step. This avoids scheduling
blackouts due to preemption being disabled for unbounded periods,
resulting in a more responsive system.

After this change, we can also permit the skcipher walk infrastructure
to sleep, so the 'atomic' parameter passed to skcipher_walk_virt() is
now false as well.

Signed-off-by: Ard Biesheuvel
---
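Note: every hunk below applies the same transformation, so it can be
condensed into a single sketch. In the sketch, walk_and_crypt() and
do_aes_blocks() are made-up names (the latter stands in for the various
ce_aes_*/aesbs_* assembler routines); read it as an illustration of the
pattern, not as code taken from the patch:

    #include <linux/types.h>
    #include <asm/neon.h>                  /* kernel_neon_begin()/end() */
    #include <crypto/aes.h>                /* AES_BLOCK_SIZE */
    #include <crypto/internal/skcipher.h>  /* skcipher walk API */

    /* hypothetical stand-in for the ce_aes_*/aesbs_* SIMD routines */
    void do_aes_blocks(u8 dst[], u8 const src[], u32 const rk[],
                       int rounds, int blocks);

    static int walk_and_crypt(struct skcipher_request *req,
                              u32 const rk[], int rounds)
    {
            struct skcipher_walk walk;
            unsigned int blocks;
            int err;

            /* atomic == false: the walk code may sleep, now that the
             * NEON unit is no longer held across calls into it */
            err = skcipher_walk_virt(&walk, req, false);

            while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
                    /* take the NEON unit, disabling preemption, ... */
                    kernel_neon_begin();
                    do_aes_blocks(walk.dst.virt.addr, walk.src.virt.addr,
                                  rk, rounds, blocks);
                    /* ... and release it again before the walk advances,
                     * so the scheduler can run other tasks in between */
                    kernel_neon_end();
                    err = skcipher_walk_done(&walk,
                                             walk.nbytes % AES_BLOCK_SIZE);
            }
            return err;
    }

The worst-case preemption-disabled window thus shrinks from the length
of the whole request (previously one kernel_neon_begin() before the
loop and one kernel_neon_end() after it) to a single scatterwalk step.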
 arch/arm/crypto/aes-ce-glue.c     | 47 ++++++++++----------
 arch/arm/crypto/aes-neonbs-glue.c | 22 ++++-----
 2 files changed, 34 insertions(+), 35 deletions(-)

-- 
2.17.1

diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
index 75d2ff03a63e..486e862ae34a 100644
--- a/arch/arm/crypto/aes-ce-glue.c
+++ b/arch/arm/crypto/aes-ce-glue.c
@@ -177,15 +177,15 @@ static int ecb_encrypt(struct skcipher_request *req)
         unsigned int blocks;
         int err;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+                kernel_neon_begin();
                 ce_aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    ctx->key_enc, num_rounds(ctx), blocks);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
         return err;
 }
 
@@ -197,15 +197,15 @@ static int ecb_decrypt(struct skcipher_request *req)
         unsigned int blocks;
         int err;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+                kernel_neon_begin();
                 ce_aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    ctx->key_dec, num_rounds(ctx), blocks);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
         return err;
 }
 
@@ -217,16 +217,16 @@ static int cbc_encrypt(struct skcipher_request *req)
         unsigned int blocks;
         int err;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+                kernel_neon_begin();
                 ce_aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    ctx->key_enc, num_rounds(ctx), blocks,
                                    walk.iv);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
         return err;
 }
 
@@ -238,16 +238,16 @@ static int cbc_decrypt(struct skcipher_request *req)
         unsigned int blocks;
         int err;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+                kernel_neon_begin();
                 ce_aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    ctx->key_dec, num_rounds(ctx), blocks,
                                    walk.iv);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
         return err;
 }
 
@@ -258,13 +258,14 @@ static int ctr_encrypt(struct skcipher_request *req)
         struct skcipher_walk walk;
         int err, blocks;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+                kernel_neon_begin();
                 ce_aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    ctx->key_enc, num_rounds(ctx), blocks,
                                    walk.iv);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
         }
         if (walk.nbytes) {
@@ -278,13 +279,13 @@
                  */
                 blocks = -1;
 
+                kernel_neon_begin();
                 ce_aes_ctr_encrypt(tail, NULL, ctx->key_enc, num_rounds(ctx),
                                    blocks, walk.iv);
+                kernel_neon_end();
                 crypto_xor_cpy(tdst, tsrc, tail, nbytes);
                 err = skcipher_walk_done(&walk, 0);
         }
-        kernel_neon_end();
-
         return err;
 }
 
@@ -319,17 +320,16 @@ static int xts_encrypt(struct skcipher_request *req)
         struct skcipher_walk walk;
         unsigned int blocks;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
+                kernel_neon_begin();
                 ce_aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    ctx->key1.key_enc, rounds, blocks,
                                    walk.iv, ctx->key2.key_enc, first);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
-
         return err;
 }
 
@@ -341,17 +341,16 @@ static int xts_decrypt(struct skcipher_request *req)
         struct skcipher_walk walk;
         unsigned int blocks;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
+                kernel_neon_begin();
                 ce_aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                    ctx->key1.key_dec, rounds, blocks,
                                    walk.iv, ctx->key2.key_enc, first);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
-
         return err;
 }
 
diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
index 45cd9818791e..9000d0796d5e 100644
--- a/arch/arm/crypto/aes-neonbs-glue.c
+++ b/arch/arm/crypto/aes-neonbs-glue.c
@@ -90,9 +90,8 @@ static int __ecb_crypt(struct skcipher_request *req,
         struct skcipher_walk walk;
         int err;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         while (walk.nbytes >= AES_BLOCK_SIZE) {
                 unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -100,12 +99,13 @@ static int __ecb_crypt(struct skcipher_request *req,
                         blocks = round_down(blocks,
                                             walk.stride / AES_BLOCK_SIZE);
 
+                kernel_neon_begin();
                 fn(walk.dst.virt.addr, walk.src.virt.addr, ctx->rk,
                    ctx->rounds, blocks);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk,
                                          walk.nbytes - blocks * AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
 
         return err;
 }
@@ -159,9 +159,8 @@ static int cbc_decrypt(struct skcipher_request *req)
         struct skcipher_walk walk;
         int err;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         while (walk.nbytes >= AES_BLOCK_SIZE) {
                 unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -169,13 +168,14 @@ static int cbc_decrypt(struct skcipher_request *req)
                         blocks = round_down(blocks,
                                             walk.stride / AES_BLOCK_SIZE);
 
+                kernel_neon_begin();
                 aesbs_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                   ctx->key.rk, ctx->key.rounds, blocks,
                                   walk.iv);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk,
                                          walk.nbytes - blocks * AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
 
         return err;
 }
@@ -223,9 +223,8 @@ static int ctr_encrypt(struct skcipher_request *req)
         u8 buf[AES_BLOCK_SIZE];
         int err;
 
-        err = skcipher_walk_virt(&walk, req, true);
+        err = skcipher_walk_virt(&walk, req, false);
 
-        kernel_neon_begin();
         while (walk.nbytes > 0) {
                 unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
                 u8 *final = (walk.total % AES_BLOCK_SIZE) ? buf : NULL;
@@ -236,8 +235,10 @@
                         final = NULL;
                 }
 
+                kernel_neon_begin();
                 aesbs_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                                   ctx->rk, ctx->rounds, blocks, walk.iv, final);
+                kernel_neon_end();
 
                 if (final) {
                         u8 *dst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
@@ -252,7 +253,6 @@ static int ctr_encrypt(struct skcipher_request *req)
                 err = skcipher_walk_done(&walk,
                                          walk.nbytes - blocks * AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
 
         return err;
 }
@@ -329,7 +329,6 @@ static int __xts_crypt(struct skcipher_request *req,
 
         crypto_cipher_encrypt_one(ctx->tweak_tfm, walk.iv, walk.iv);
 
-        kernel_neon_begin();
         while (walk.nbytes >= AES_BLOCK_SIZE) {
                 unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -337,12 +336,13 @@ static int __xts_crypt(struct skcipher_request *req,
                         blocks = round_down(blocks,
                                             walk.stride / AES_BLOCK_SIZE);
 
+                kernel_neon_begin();
                 fn(walk.dst.virt.addr, walk.src.virt.addr, ctx->key.rk,
                    ctx->key.rounds, blocks, walk.iv);
+                kernel_neon_end();
                 err = skcipher_walk_done(&walk,
                                          walk.nbytes - blocks * AES_BLOCK_SIZE);
         }
-        kernel_neon_end();
 
         return err;
 }
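
One more detail worth calling out: in both ctr_encrypt() routines, the
trailing partial block gets the same narrow NEON region. Condensed from
the ce_aes hunks above (the local declarations are reconstructed from
their usage, since that context is elided between the hunks, and the
blocks = -1 value appears to be the driver's existing cue for the
assembler routine to process a tail block), the tail path now reads
roughly:

            if (walk.nbytes) {
                    /* less than a full block remains: have the SIMD
                     * routine generate one whole block of keystream
                     * into a stack buffer ... */
                    u8 tail[AES_BLOCK_SIZE];
                    unsigned int nbytes = walk.nbytes;
                    u8 *tdst = walk.dst.virt.addr;
                    u8 *tsrc = walk.src.virt.addr;

                    kernel_neon_begin();
                    ce_aes_ctr_encrypt(tail, NULL, ctx->key_enc,
                                       num_rounds(ctx), -1, walk.iv);
                    kernel_neon_end();

                    /* ... then xor just the bytes we need into the
                     * destination, with the NEON unit already released
                     * and preemption re-enabled */
                    crypto_xor_cpy(tdst, tsrc, tail, nbytes);
                    err = skcipher_walk_done(&walk, 0);
            }

so the byte-wise crypto_xor_cpy() and the final walk step no longer
contribute to the preemption-disabled window.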