From patchwork Wed Aug 21 14:32:38 2019
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 171957
From: Ard Biesheuvel
To: linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, ebiggers@kernel.org, Ard Biesheuvel
Subject: [PATCH 02/17] crypto: arm/aes-ce - yield the SIMD unit between scatterwalk steps
Date: Wed, 21 Aug 2019 17:32:38 +0300
Message-Id: <20190821143253.30209-3-ard.biesheuvel@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190821143253.30209-1-ard.biesheuvel@linaro.org>
References: <20190821143253.30209-1-ard.biesheuvel@linaro.org>

Reduce the scope of the kernel_neon_begin/end regions so that the SIMD
unit is released (and thus preemption re-enabled) if the crypto operation
cannot be completed in a single scatterwalk step.
This avoids scheduling
blackouts due to preemption being disabled for unbounded periods,
resulting in a more responsive system.

After this change, we can also permit the cipher walk infrastructure to
sleep, so the 'atomic' parameter of skcipher_walk_virt() is set to false
as well.

Signed-off-by: Ard Biesheuvel
---
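Note for reviewers: every hunk below applies the same transformation,
sketched here in condensed form. do_blocks() is a hypothetical stand-in
for the per-mode NEON helpers (ce_aes_ecb_encrypt() and friends), and
declarations and tail handling are elided:

	/* Before: NEON claimed (and preemption disabled) across the whole
	 * walk, however many scatterwalk steps it takes; the walk must be
	 * atomic, since we cannot sleep with preemption disabled. */
	err = skcipher_walk_virt(&walk, req, true);
	kernel_neon_begin();
	while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
		do_blocks(&walk, blocks);	/* hypothetical helper */
		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
	}
	kernel_neon_end();

	/* After: NEON claimed per scatterwalk step only, so the scheduler
	 * may preempt us between steps, and the walk itself may sleep. */
	err = skcipher_walk_virt(&walk, req, false);
	while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
		kernel_neon_begin();
		do_blocks(&walk, blocks);	/* hypothetical helper */
		kernel_neon_end();
		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
	}
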
 arch/arm/crypto/aes-ce-glue.c     | 47 ++++++++++----------
 arch/arm/crypto/aes-neonbs-glue.c | 22 ++++-----
 2 files changed, 34 insertions(+), 35 deletions(-)

-- 
2.17.1

diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
index 75d2ff03a63e..486e862ae34a 100644
--- a/arch/arm/crypto/aes-ce-glue.c
+++ b/arch/arm/crypto/aes-ce-glue.c
@@ -177,15 +177,15 @@ static int ecb_encrypt(struct skcipher_request *req)
 	unsigned int blocks;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+		kernel_neon_begin();
 		ce_aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				   ctx->key_enc, num_rounds(ctx), blocks);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -197,15 +197,15 @@ static int ecb_decrypt(struct skcipher_request *req)
 	unsigned int blocks;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+		kernel_neon_begin();
 		ce_aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				   ctx->key_dec, num_rounds(ctx), blocks);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -217,16 +217,16 @@ static int cbc_encrypt(struct skcipher_request *req)
 	unsigned int blocks;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+		kernel_neon_begin();
 		ce_aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				   ctx->key_enc, num_rounds(ctx), blocks,
 				   walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -238,16 +238,16 @@ static int cbc_decrypt(struct skcipher_request *req)
 	unsigned int blocks;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+		kernel_neon_begin();
 		ce_aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				   ctx->key_dec, num_rounds(ctx), blocks,
 				   walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -258,13 +258,14 @@ static int ctr_encrypt(struct skcipher_request *req)
 	struct skcipher_walk walk;
 	int err, blocks;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
+		kernel_neon_begin();
 		ce_aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				   ctx->key_enc, num_rounds(ctx), blocks,
 				   walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
 	if (walk.nbytes) {
@@ -278,13 +279,13 @@ static int ctr_encrypt(struct skcipher_request *req)
 		 */
 		blocks = -1;
 
+		kernel_neon_begin();
 		ce_aes_ctr_encrypt(tail, NULL, ctx->key_enc, num_rounds(ctx),
 				   blocks, walk.iv);
+		kernel_neon_end();
 
 		crypto_xor_cpy(tdst, tsrc, tail, nbytes);
 		err = skcipher_walk_done(&walk, 0);
 	}
-	kernel_neon_end();
-
 	return err;
 }
@@ -319,17 +320,16 @@ static int xts_encrypt(struct skcipher_request *req)
 	struct skcipher_walk walk;
 	unsigned int blocks;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
+		kernel_neon_begin();
 		ce_aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				   ctx->key1.key_enc, rounds, blocks,
 				   walk.iv, ctx->key2.key_enc, first);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
-
 	return err;
 }
@@ -341,17 +341,16 @@ static int xts_decrypt(struct skcipher_request *req)
 	struct skcipher_walk walk;
 	unsigned int blocks;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
+		kernel_neon_begin();
 		ce_aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				   ctx->key1.key_dec, rounds, blocks,
 				   walk.iv, ctx->key2.key_enc, first);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
-
 	return err;
 }
diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
index 45cd9818791e..9000d0796d5e 100644
--- a/arch/arm/crypto/aes-neonbs-glue.c
+++ b/arch/arm/crypto/aes-neonbs-glue.c
@@ -90,9 +90,8 @@ static int __ecb_crypt(struct skcipher_request *req,
 	struct skcipher_walk walk;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -100,12 +99,13 @@ static int __ecb_crypt(struct skcipher_request *req,
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		fn(walk.dst.virt.addr, walk.src.virt.addr, ctx->rk,
 		   ctx->rounds, blocks);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -159,9 +159,8 @@ static int cbc_decrypt(struct skcipher_request *req)
 	struct skcipher_walk walk;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -169,13 +168,14 @@ static int cbc_decrypt(struct skcipher_request *req)
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		aesbs_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				  ctx->key.rk, ctx->key.rounds, blocks,
 				  walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -223,9 +223,8 @@ static int ctr_encrypt(struct skcipher_request *req)
 	u8 buf[AES_BLOCK_SIZE];
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, true);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	kernel_neon_begin();
 	while (walk.nbytes > 0) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 		u8 *final = (walk.total % AES_BLOCK_SIZE) ? buf : NULL;
@@ -236,8 +235,10 @@ static int ctr_encrypt(struct skcipher_request *req)
 			final = NULL;
 		}
 
+		kernel_neon_begin();
 		aesbs_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
 				  ctx->rk, ctx->rounds, blocks, walk.iv, final);
+		kernel_neon_end();
 
 		if (final) {
 			u8 *dst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
@@ -252,7 +253,6 @@ static int ctr_encrypt(struct skcipher_request *req)
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }
@@ -329,7 +329,6 @@ static int __xts_crypt(struct skcipher_request *req,
 
 	crypto_cipher_encrypt_one(ctx->tweak_tfm, walk.iv, walk.iv);
 
-	kernel_neon_begin();
 	while (walk.nbytes >= AES_BLOCK_SIZE) {
 		unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;
 
@@ -337,12 +336,13 @@ static int __xts_crypt(struct skcipher_request *req,
 			blocks = round_down(blocks,
 					    walk.stride / AES_BLOCK_SIZE);
 
+		kernel_neon_begin();
 		fn(walk.dst.virt.addr, walk.src.virt.addr, ctx->key.rk,
 		   ctx->key.rounds, blocks, walk.iv);
+		kernel_neon_end();
 		err = skcipher_walk_done(&walk,
 					 walk.nbytes - blocks * AES_BLOCK_SIZE);
 	}
-	kernel_neon_end();
 
 	return err;
 }