From patchwork Wed Nov 29 08:41:18 2017
X-Patchwork-Submitter: Corentin Labbe
X-Patchwork-Id: 119935
Delivered-To: patch@linaro.org
From: Corentin Labbe
To: herbert@gondor.apana.org.au, alexandre.torgue@st.com,
	arei.gonglei@huawei.com, davem@davemloft.net, jasowang@redhat.com,
	mcoquelin.stm32@gmail.com, mst@redhat.com, fabien.dessenne@st.com
Cc: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org,
	linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Corentin Labbe
Subject: [PATCH RFC 1/4] crypto: engine - Permit to enqueue all async requests
Date: Wed, 29 Nov 2017 09:41:18 +0100
Message-Id: <20171129084121.9385-2-clabbe.montjoie@gmail.com>
In-Reply-To: <20171129084121.9385-1-clabbe.montjoie@gmail.com>
References: <20171129084121.9385-1-clabbe.montjoie@gmail.com>

The crypto engine can currently only enqueue hash and ablkcipher requests.
This patch permits it to enqueue any type of crypto_async_request.
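For readers skimming the diff below, the core of the change in crypto_pump_requests() can be sketched standalone: the per-algorithm switch is replaced by generic callbacks fetched from the transform context. Only the type and callback names come from the patch; everything else here (the simplified structs, the pump_one()/demo() helpers) is a stand-in for illustration, with locking and the engine state machine omitted.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in types: the real definitions live in include/crypto/engine.h
 * and include/linux/crypto.h; these are simplified for illustration. */
struct crypto_engine { int cur_req_prepared; };
struct crypto_engine;
struct crypto_async_request;

struct crypto_engine_op {
	int (*prepare_request)(struct crypto_engine *engine,
			       struct crypto_async_request *areq);
	int (*unprepare_request)(struct crypto_engine *engine,
				 struct crypto_async_request *areq);
	int (*do_one_request)(struct crypto_engine *engine,
			      struct crypto_async_request *areq);
};

struct crypto_engine_reqctx {
	struct crypto_engine_op op;
};

struct crypto_async_request {
	/* stands in for crypto_tfm_ctx(areq->tfm) in the real code */
	struct crypto_engine_reqctx *ctx;
};

/* Mirror of the new dispatch in crypto_pump_requests(): no more
 * switch on CRYPTO_ALG_TYPE_*, just the per-context callbacks. */
static int pump_one(struct crypto_engine *engine,
		    struct crypto_async_request *async_req)
{
	struct crypto_engine_reqctx *enginectx = async_req->ctx;
	int ret;

	if (enginectx->op.prepare_request) {
		ret = enginectx->op.prepare_request(engine, async_req);
		if (ret)
			return ret;
		engine->cur_req_prepared = 1;
	}
	if (!enginectx->op.do_one_request)
		return -EINVAL;
	return enginectx->op.do_one_request(engine, async_req);
}

static int did_run;

static int my_do_one(struct crypto_engine *engine,
		     struct crypto_async_request *areq)
{
	(void)engine; (void)areq;
	did_run = 1;
	return 0;
}

int demo(void)
{
	struct crypto_engine engine = { 0 };
	struct crypto_engine_reqctx ctx = {
		.op = { .do_one_request = my_do_one },
	};
	struct crypto_async_request req = { .ctx = &ctx };

	return pump_one(&engine, &req);
}
```

The point of the pattern: the engine no longer needs to know the concrete request type, so any crypto_async_request whose context carries the ops can be queued.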
Signed-off-by: Corentin Labbe
---
 crypto/crypto_engine.c  | 188 +++++++++++-------------------------------------
 include/crypto/engine.h |  46 +++++-------
 2 files changed, 60 insertions(+), 174 deletions(-)

-- 
2.13.6

diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
index 61e7c4e02fd2..f7c4c4c1f41b 100644
--- a/crypto/crypto_engine.c
+++ b/crypto/crypto_engine.c
@@ -34,11 +34,10 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 				 bool in_kthread)
 {
 	struct crypto_async_request *async_req, *backlog;
-	struct ahash_request *hreq;
-	struct ablkcipher_request *breq;
 	unsigned long flags;
 	bool was_busy = false;
-	int ret, rtype;
+	int ret;
+	struct crypto_engine_reqctx *enginectx;
 
 	spin_lock_irqsave(&engine->queue_lock, flags);
 
@@ -94,7 +93,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 
-	rtype = crypto_tfm_alg_type(engine->cur_req->tfm);
 	/* Until here we get the request need to be encrypted successfully */
 	if (!was_busy && engine->prepare_crypt_hardware) {
 		ret = engine->prepare_crypt_hardware(engine);
@@ -104,57 +102,31 @@ static void crypto_pump_requests(struct crypto_engine *engine,
 		}
 	}
 
-	switch (rtype) {
-	case CRYPTO_ALG_TYPE_AHASH:
-		hreq = ahash_request_cast(engine->cur_req);
-		if (engine->prepare_hash_request) {
-			ret = engine->prepare_hash_request(engine, hreq);
-			if (ret) {
-				dev_err(engine->dev, "failed to prepare request: %d\n",
-					ret);
-				goto req_err;
-			}
-			engine->cur_req_prepared = true;
-		}
-		ret = engine->hash_one_request(engine, hreq);
-		if (ret) {
-			dev_err(engine->dev, "failed to hash one request from queue\n");
-			goto req_err;
-		}
-		return;
-	case CRYPTO_ALG_TYPE_ABLKCIPHER:
-		breq = ablkcipher_request_cast(engine->cur_req);
-		if (engine->prepare_cipher_request) {
-			ret = engine->prepare_cipher_request(engine, breq);
-			if (ret) {
-				dev_err(engine->dev, "failed to prepare request: %d\n",
-					ret);
-				goto req_err;
-			}
-			engine->cur_req_prepared = true;
-		}
-		ret = engine->cipher_one_request(engine, breq);
+	enginectx = crypto_tfm_ctx(async_req->tfm);
+
+	if (enginectx->op.prepare_request) {
+		ret = enginectx->op.prepare_request(engine, async_req);
 		if (ret) {
-			dev_err(engine->dev, "failed to cipher one request from queue\n");
+			dev_err(engine->dev, "failed to prepare request: %d\n",
+				ret);
 			goto req_err;
 		}
-		return;
-	default:
-		dev_err(engine->dev, "failed to prepare request of unknown type\n");
-		return;
+		engine->cur_req_prepared = true;
+	}
+	if (!enginectx->op.do_one_request) {
+		dev_err(engine->dev, "failed to do request\n");
+		ret = -EINVAL;
+		goto req_err;
+	}
+	ret = enginectx->op.do_one_request(engine, async_req);
+	if (ret) {
+		dev_err(engine->dev, "failed to hash one request from queue\n");
+		goto req_err;
 	}
+	return;
 
 req_err:
-	switch (rtype) {
-	case CRYPTO_ALG_TYPE_AHASH:
-		hreq = ahash_request_cast(engine->cur_req);
-		crypto_finalize_hash_request(engine, hreq, ret);
-		break;
-	case CRYPTO_ALG_TYPE_ABLKCIPHER:
-		breq = ablkcipher_request_cast(engine->cur_req);
-		crypto_finalize_cipher_request(engine, breq, ret);
-		break;
-	}
+	crypto_finalize_request(engine, async_req, ret);
 	return;
 
 out:
@@ -170,59 +142,16 @@ static void crypto_pump_work(struct kthread_work *work)
 }
 
 /**
- * crypto_transfer_cipher_request - transfer the new request into the
- * enginequeue
+ * crypto_transfer_request - transfer the new request into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
-				   struct ablkcipher_request *req,
-				   bool need_pump)
+int crypto_transfer_request(struct crypto_engine *engine,
+			    struct crypto_async_request *req, bool need_pump)
 {
 	unsigned long flags;
 	int ret;
 
-	spin_lock_irqsave(&engine->queue_lock, flags);
-
-	if (!engine->running) {
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-		return -ESHUTDOWN;
-	}
-
-	ret = ablkcipher_enqueue_request(&engine->queue, req);
-
-	if (!engine->busy && need_pump)
-		kthread_queue_work(engine->kworker, &engine->pump_requests);
-
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-	return ret;
-}
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request);
-
-/**
- * crypto_transfer_cipher_request_to_engine - transfer one request to list
- * into the engine queue
- * @engine: the hardware engine
- * @req: the request need to be listed into the engine queue
- */
-int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
-					     struct ablkcipher_request *req)
-{
-	return crypto_transfer_cipher_request(engine, req, true);
-}
-EXPORT_SYMBOL_GPL(crypto_transfer_cipher_request_to_engine);
-
-/**
- * crypto_transfer_hash_request - transfer the new request into the
- * enginequeue
- * @engine: the hardware engine
- * @req: the request need to be listed into the engine queue
- */
-int crypto_transfer_hash_request(struct crypto_engine *engine,
-				 struct ahash_request *req, bool need_pump)
-{
-	unsigned long flags;
-	int ret;
-
 	spin_lock_irqsave(&engine->queue_lock, flags);
 
@@ -231,7 +160,7 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
 		return -ESHUTDOWN;
 	}
 
-	ret = ahash_enqueue_request(&engine->queue, req);
+	ret = crypto_enqueue_request(&engine->queue, req);
 
 	if (!engine->busy && need_pump)
 		kthread_queue_work(engine->kworker, &engine->pump_requests);
@@ -239,80 +168,45 @@ int crypto_transfer_hash_request(struct crypto_engine *engine,
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(crypto_transfer_hash_request);
+EXPORT_SYMBOL_GPL(crypto_transfer_request);
 
 /**
- * crypto_transfer_hash_request_to_engine - transfer one request to list
+ * crypto_transfer_request_to_engine - transfer one request to list
  * into the engine queue
  * @engine: the hardware engine
  * @req: the request need to be listed into the engine queue
  */
-int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
-					   struct ahash_request *req)
-{
-	return crypto_transfer_hash_request(engine, req, true);
-}
-EXPORT_SYMBOL_GPL(crypto_transfer_hash_request_to_engine);
-
-/**
- * crypto_finalize_cipher_request - finalize one request if the request is done
- * @engine: the hardware engine
- * @req: the request need to be finalized
- * @err: error number
- */
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
-				    struct ablkcipher_request *req, int err)
+int crypto_transfer_request_to_engine(struct crypto_engine *engine,
+				      struct crypto_async_request *req)
 {
-	unsigned long flags;
-	bool finalize_cur_req = false;
-	int ret;
-
-	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == &req->base)
-		finalize_cur_req = true;
-	spin_unlock_irqrestore(&engine->queue_lock, flags);
-
-	if (finalize_cur_req) {
-		if (engine->cur_req_prepared &&
-		    engine->unprepare_cipher_request) {
-			ret = engine->unprepare_cipher_request(engine, req);
-			if (ret)
-				dev_err(engine->dev, "failed to unprepare request\n");
-		}
-		spin_lock_irqsave(&engine->queue_lock, flags);
-		engine->cur_req = NULL;
-		engine->cur_req_prepared = false;
-		spin_unlock_irqrestore(&engine->queue_lock, flags);
-	}
-
-	req->base.complete(&req->base, err);
-
-	kthread_queue_work(engine->kworker, &engine->pump_requests);
+	return crypto_transfer_request(engine, req, true);
 }
-EXPORT_SYMBOL_GPL(crypto_finalize_cipher_request);
+EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
 
 /**
- * crypto_finalize_hash_request - finalize one request if the request is done
+ * crypto_finalize_request - finalize one request if the request is done
  * @engine: the hardware engine
  * @req: the request need to be finalized
  * @err: error number
  */
-void crypto_finalize_hash_request(struct crypto_engine *engine,
-				  struct ahash_request *req, int err)
+void crypto_finalize_request(struct crypto_engine *engine,
+			     struct crypto_async_request *req, int err)
 {
 	unsigned long flags;
 	bool finalize_cur_req = false;
 	int ret;
+	struct crypto_engine_reqctx *enginectx;
 
 	spin_lock_irqsave(&engine->queue_lock, flags);
-	if (engine->cur_req == &req->base)
+	if (engine->cur_req == req)
 		finalize_cur_req = true;
 	spin_unlock_irqrestore(&engine->queue_lock, flags);
 
 	if (finalize_cur_req) {
+		enginectx = crypto_tfm_ctx(req->tfm);
 		if (engine->cur_req_prepared &&
-		    engine->unprepare_hash_request) {
-			ret = engine->unprepare_hash_request(engine, req);
+		    enginectx->op.unprepare_request) {
+			ret = enginectx->op.unprepare_request(engine, req);
 			if (ret)
 				dev_err(engine->dev, "failed to unprepare request\n");
 		}
@@ -322,11 +216,11 @@ void crypto_finalize_hash_request(struct crypto_engine *engine,
 		spin_unlock_irqrestore(&engine->queue_lock, flags);
 	}
 
-	req->base.complete(&req->base, err);
+	req->complete(req, err);
 
 	kthread_queue_work(engine->kworker, &engine->pump_requests);
 }
-EXPORT_SYMBOL_GPL(crypto_finalize_hash_request);
+EXPORT_SYMBOL_GPL(crypto_finalize_request);
 
 /**
  * crypto_engine_start - start the hardware engine
diff --git a/include/crypto/engine.h b/include/crypto/engine.h
index dd04c1699b51..2e45db45849b 100644
--- a/include/crypto/engine.h
+++ b/include/crypto/engine.h
@@ -17,7 +17,6 @@
 #include
 #include
 #include
-#include
 
 #define ENGINE_NAME_LEN	30
 /*
@@ -65,19 +64,6 @@ struct crypto_engine {
 	int (*prepare_crypt_hardware)(struct crypto_engine *engine);
 	int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
 
-	int (*prepare_cipher_request)(struct crypto_engine *engine,
-				      struct ablkcipher_request *req);
-	int (*unprepare_cipher_request)(struct crypto_engine *engine,
-					struct ablkcipher_request *req);
-	int (*prepare_hash_request)(struct crypto_engine *engine,
-				    struct ahash_request *req);
-	int (*unprepare_hash_request)(struct crypto_engine *engine,
-				      struct ahash_request *req);
-	int (*cipher_one_request)(struct crypto_engine *engine,
-				  struct ablkcipher_request *req);
-	int (*hash_one_request)(struct crypto_engine *engine,
-				struct ahash_request *req);
-
 	struct kthread_worker *kworker;
 	struct kthread_work pump_requests;
 
@@ -85,19 +71,25 @@ struct crypto_engine {
 	struct crypto_async_request *cur_req;
 };
 
-int crypto_transfer_cipher_request(struct crypto_engine *engine,
-				   struct ablkcipher_request *req,
-				   bool need_pump);
-int crypto_transfer_cipher_request_to_engine(struct crypto_engine *engine,
-					     struct ablkcipher_request *req);
-int crypto_transfer_hash_request(struct crypto_engine *engine,
-				 struct ahash_request *req, bool need_pump);
-int crypto_transfer_hash_request_to_engine(struct crypto_engine *engine,
-					   struct ahash_request *req);
-void crypto_finalize_cipher_request(struct crypto_engine *engine,
-				    struct ablkcipher_request *req, int err);
-void crypto_finalize_hash_request(struct crypto_engine *engine,
-				  struct ahash_request *req, int err);
+struct crypto_engine_op {
+	int (*prepare_request)(struct crypto_engine *engine,
+			       struct crypto_async_request *areq);
+	int (*unprepare_request)(struct crypto_engine *engine,
+				 struct crypto_async_request *areq);
+	int (*do_one_request)(struct crypto_engine *engine,
+			      struct crypto_async_request *areq);
+};
+
+struct crypto_engine_reqctx {
+	struct crypto_engine_op op;
+};
+
+int crypto_transfer_request(struct crypto_engine *engine,
+			    struct crypto_async_request *req, bool need_pump);
+int crypto_transfer_request_to_engine(struct crypto_engine *engine,
+				      struct crypto_async_request *req);
+void crypto_finalize_request(struct crypto_engine *engine,
+			     struct crypto_async_request *req, int err);
 int crypto_engine_start(struct crypto_engine *engine);
 int crypto_engine_stop(struct crypto_engine *engine);
 struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt);
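For context on the driver side of this interface: since crypto_pump_requests() now reaches the ops through crypto_tfm_ctx(), a driver would need its context to start with struct crypto_engine_reqctx so the engine's cast lands on the ops. The sketch below is a standalone, hedged illustration of that layout convention; the mydrv_* names are hypothetical and only the crypto_engine_op/crypto_engine_reqctx names come from the patch.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in forward declarations; the real types are in the kernel. */
struct crypto_engine;
struct crypto_async_request;

struct crypto_engine_op {
	int (*prepare_request)(struct crypto_engine *engine,
			       struct crypto_async_request *areq);
	int (*unprepare_request)(struct crypto_engine *engine,
				 struct crypto_async_request *areq);
	int (*do_one_request)(struct crypto_engine *engine,
			      struct crypto_async_request *areq);
};

struct crypto_engine_reqctx {
	struct crypto_engine_op op;
};

/* A hypothetical driver context: the engine only knows about the
 * leading crypto_engine_reqctx, so it must be the first member. */
struct mydrv_ctx {
	struct crypto_engine_reqctx enginectx;
	unsigned int keylen;
};

static int mydrv_do_one(struct crypto_engine *engine,
			struct crypto_async_request *areq)
{
	(void)engine; (void)areq;
	return 0;	/* a real driver would program its hardware here */
}

/* Fill in the ops once at context init; prepare/unprepare are optional
 * and the engine skips them when left NULL. */
void mydrv_init_ctx(struct mydrv_ctx *ctx)
{
	ctx->enginectx.op.prepare_request = NULL;
	ctx->enginectx.op.unprepare_request = NULL;
	ctx->enginectx.op.do_one_request = mydrv_do_one;
}
```

With this layout, a cast of the transform context to struct crypto_engine_reqctx * yields the same ops the driver installed, which is what makes the engine type-agnostic.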