From patchwork Tue Jan 31 08:01:47 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:01:47 +0800
Subject: [PATCH 2/32] crypto: cryptd - Use subreq for AEAD
X-Patchwork-Id: 649341
X-Mailing-List: linux-crypto@vger.kernel.org

AEAD reuses the existing request object for its child.  This is
error-prone and unnecessary.  This patch adds a subrequest object
just as we do for skcipher and hash.

This patch also restores the original completion function as we do
for skcipher/hash.
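For context, the subrequest pattern being adopted here embeds the child
request in the parent's request context, so no separate allocation is
needed and the parent request is never rewritten behind the caller's
back.  A minimal sketch of the idea, with hypothetical my_* names
standing in for the cryptd ones:

/*
 * Sketch of the subrequest pattern (illustrative, not the exact
 * cryptd code): the child request is embedded in the parent's
 * request context and mirrors the parent's parameters.
 */
struct my_aead_request_ctx {
	crypto_completion_t complete;	/* caller's completion, saved */
	struct aead_request req;	/* embedded child (sub)request */
};

static int my_aead_do_one(struct aead_request *req,
			  struct crypto_aead *child,
			  int (*crypt)(struct aead_request *req))
{
	struct my_aead_request_ctx *rctx = aead_request_ctx(req);
	struct aead_request *subreq = &rctx->req;

	/* Point the subrequest at the child tfm and copy parameters. */
	aead_request_set_tfm(subreq, child);
	aead_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
				  NULL, NULL);
	aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
			       req->iv);
	aead_request_set_ad(subreq, req->assoclen);

	return crypt(subreq);
}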
Signed-off-by: Herbert Xu
---
 crypto/cryptd.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index 1ff58a021d57..c0c416eda8e8 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -93,6 +93,7 @@ struct cryptd_aead_ctx {
 
 struct cryptd_aead_request_ctx {
 	crypto_completion_t complete;
+	struct aead_request req;
 };
 
 static void cryptd_queue_worker(struct work_struct *work);
@@ -715,6 +716,7 @@ static void cryptd_aead_crypt(struct aead_request *req,
 			      int (*crypt)(struct aead_request *req))
 {
 	struct cryptd_aead_request_ctx *rctx;
+	struct aead_request *subreq;
 	struct cryptd_aead_ctx *ctx;
 	crypto_completion_t compl;
 	struct crypto_aead *tfm;
@@ -722,14 +724,24 @@ static void cryptd_aead_crypt(struct aead_request *req,
 
 	rctx = aead_request_ctx(req);
 	compl = rctx->complete;
+	subreq = &rctx->req;
 
 	tfm = crypto_aead_reqtfm(req);
 
 	if (unlikely(err == -EINPROGRESS))
 		goto out;
-	aead_request_set_tfm(req, child);
+
+	aead_request_set_tfm(subreq, child);
+	aead_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
+				  NULL, NULL);
+	aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+			       req->iv);
+	aead_request_set_ad(subreq, req->assoclen);
+
 	err = crypt( req );
+
+	req->base.complete = compl;
 
 out:
 	ctx = crypto_aead_ctx(tfm);
 	refcnt = refcount_read(&ctx->refcnt);
@@ -798,8 +810,8 @@ static int cryptd_aead_init_tfm(struct crypto_aead *tfm)
 
 	ctx->child = cipher;
 	crypto_aead_set_reqsize(
-		tfm, max((unsigned)sizeof(struct cryptd_aead_request_ctx),
-			 crypto_aead_reqsize(cipher)));
+		tfm, sizeof(struct cryptd_aead_request_ctx) +
+		     crypto_aead_reqsize(cipher));
 
 	return 0;
 }
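The reqsize change at the end of the diff follows from the new layout:
the embedded subrequest is followed by the child's own request context,
so the two sizes now add instead of overlaying.  A sketch of the
assumed layout arithmetic (stand-in types, not kernel code):

#include <stddef.h>

struct stub_request_ctx {		/* stand-in for cryptd_aead_request_ctx */
	void *complete;
	unsigned char subreq_hdr[64];	/* embedded subrequest header */
};

/*
 * Old: reqsize = max(own ctx, child reqsize)   -- the two overlaid.
 * New: reqsize = sizeof(own ctx) + child reqsize -- back to back,
 * because the embedded subrequest needs its own context area behind it.
 */
static size_t stub_reqsize(size_t child_reqsize)
{
	return sizeof(struct stub_request_ctx) + child_reqsize;
}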
From patchwork Tue Jan 31 08:01:51 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:01:51 +0800
Subject: [PATCH 4/32] crypto: aead - Use crypto_request_complete
X-Patchwork-Id: 649340
X-Mailing-List: linux-crypto@vger.kernel.org

Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu
---
 include/crypto/internal/aead.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/crypto/internal/aead.h b/include/crypto/internal/aead.h
index cd8cb1e921b7..28a95eb3182d 100644
--- a/include/crypto/internal/aead.h
+++ b/include/crypto/internal/aead.h
@@ -82,7 +82,7 @@ static inline void *aead_request_ctx_dma(struct aead_request *req)
 
 static inline void aead_request_complete(struct aead_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 static inline u32 aead_request_flags(struct aead_request *req)

From patchwork Tue Jan 31 08:01:55 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:01:55 +0800
Subject: [PATCH 6/32] crypto: hash - Use crypto_request_complete
X-Patchwork-Id: 649339
X-Mailing-List: linux-crypto@vger.kernel.org

Use the crypto_request_complete helper instead of calling the
completion function directly.

This patch also removes the voodoo programming previously used for
unaligned ahash operations and replaces it with a sub-request.
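For reference, the crypto_request_complete helper these conversions
target is added earlier in the series (not shown here); at this point
it is presumably no more than a thin wrapper, roughly:

/*
 * Approximate shape of crypto_request_complete (hedged reconstruction,
 * not quoted from the patch that adds it): initially it only hides the
 * direct function-pointer call.  That indirection is what later allows
 * the completion signature to be switched to take req->data instead of
 * the request itself, without touching every caller a second time.
 */
static inline void crypto_request_complete(struct crypto_async_request *req,
					   int err)
{
	req->complete(req, err);
}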
Signed-off-by: Herbert Xu
---
 crypto/ahash.c                 | 137 +++++++++++++----------------------
 include/crypto/internal/hash.h |   2 +-
 2 files changed, 45 insertions(+), 94 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 4b089f1b770f..369447e483cd 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -190,121 +190,69 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
 
-static inline unsigned int ahash_align_buffer_size(unsigned len,
-						   unsigned long mask)
-{
-	return len + (mask & ~(crypto_tfm_ctx_alignment() - 1));
-}
-
 static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	unsigned long alignmask = crypto_ahash_alignmask(tfm);
 	unsigned int ds = crypto_ahash_digestsize(tfm);
-	struct ahash_request_priv *priv;
+	struct ahash_request *subreq;
+	unsigned int subreq_size;
+	unsigned int reqsize;
+	u8 *result;
+	u32 flags;
 
-	priv = kmalloc(sizeof(*priv) + ahash_align_buffer_size(ds, alignmask),
-		       (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		       GFP_KERNEL : GFP_ATOMIC);
-	if (!priv)
+	subreq_size = sizeof(*subreq);
+	reqsize = crypto_ahash_reqsize(tfm);
+	reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
+	subreq_size += reqsize;
+	subreq_size += ds;
+	subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
+
+	flags = ahash_request_flags(req);
+	subreq = kmalloc(subreq_size, (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
+			 GFP_KERNEL : GFP_ATOMIC);
+	if (!subreq)
 		return -ENOMEM;
 
-	/*
-	 * WARNING: Voodoo programming below!
-	 *
-	 * The code below is obscure and hard to understand, thus explanation
-	 * is necessary. See include/crypto/hash.h and include/linux/crypto.h
-	 * to understand the layout of structures used here!
-	 *
-	 * The code here will replace portions of the ORIGINAL request with
-	 * pointers to new code and buffers so the hashing operation can store
-	 * the result in aligned buffer. We will call the modified request
-	 * an ADJUSTED request.
-	 *
-	 * The newly mangled request will look as such:
-	 *
-	 * req {
-	 *	.result		= ADJUSTED[new aligned buffer]
-	 *	.base.complete	= ADJUSTED[pointer to completion function]
-	 *	.base.data	= ADJUSTED[*req (pointer to self)]
-	 *	.priv		= ADJUSTED[new priv] {
-	 *		.result   = ORIGINAL(result)
-	 *		.complete = ORIGINAL(base.complete)
-	 *		.data     = ORIGINAL(base.data)
-	 *	}
-	 */
-
-	priv->result = req->result;
-	priv->complete = req->base.complete;
-	priv->data = req->base.data;
-	priv->flags = req->base.flags;
-
-	/*
-	 * WARNING: We do not backup req->priv here! The req->priv
-	 *          is for internal use of the Crypto API and the
-	 *          user must _NOT_ _EVER_ depend on it's content!
-	 */
-
-	req->result = PTR_ALIGN((u8 *)priv->ubuf, alignmask + 1);
-	req->base.complete = cplt;
-	req->base.data = req;
-	req->priv = priv;
+	ahash_request_set_tfm(subreq, tfm);
+	ahash_request_set_callback(subreq, flags, cplt, req);
+
+	result = (u8 *)(subreq + 1) + reqsize;
+	result = PTR_ALIGN(result, alignmask + 1);
+
+	ahash_request_set_crypt(subreq, req->src, result, req->nbytes);
+
+	req->priv = subreq;
 
 	return 0;
 }
 
 static void ahash_restore_req(struct ahash_request *req, int err)
 {
-	struct ahash_request_priv *priv = req->priv;
+	struct ahash_request *subreq = req->priv;
 
 	if (!err)
-		memcpy(priv->result, req->result,
+		memcpy(req->result, subreq->result,
 		       crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
 
-	/* Restore the original crypto request. */
-	req->result = priv->result;
-
-	ahash_request_set_callback(req, priv->flags,
-				   priv->complete, priv->data);
 	req->priv = NULL;
 
-	/* Free the req->priv.priv from the ADJUSTED request. */
-	kfree_sensitive(priv);
-}
-
-static void ahash_notify_einprogress(struct ahash_request *req)
-{
-	struct ahash_request_priv *priv = req->priv;
-	struct crypto_async_request oreq;
-
-	oreq.data = priv->data;
-
-	priv->complete(&oreq, -EINPROGRESS);
+	kfree_sensitive(subreq);
 }
 
 static void ahash_op_unaligned_done(struct crypto_async_request *req, int err)
 {
 	struct ahash_request *areq = req->data;
 
-	if (err == -EINPROGRESS) {
-		ahash_notify_einprogress(areq);
-		return;
-	}
-
-	/*
-	 * Restore the original request, see ahash_op_unaligned() for what
-	 * goes where.
-	 *
-	 * The "struct ahash_request *req" here is in fact the "req.base"
-	 * from the ADJUSTED request from ahash_op_unaligned(), thus as it
-	 * is a pointer to self, it is also the ADJUSTED "req" .
-	 */
+	if (err == -EINPROGRESS)
+		goto out;
 
 	/* First copy req->result into req->priv.result */
 	ahash_restore_req(areq, err);
 
+out:
 	/* Complete the ORIGINAL request. */
-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_op_unaligned(struct ahash_request *req,
@@ -391,15 +339,17 @@ static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
 
 	ahash_restore_req(areq, err);
 
-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_def_finup_finish1(struct ahash_request *req, int err)
 {
+	struct ahash_request *subreq = req->priv;
+
 	if (err)
 		goto out;
 
-	req->base.complete = ahash_def_finup_done2;
+	subreq->base.complete = ahash_def_finup_done2;
 
 	err = crypto_ahash_reqtfm(req)->final(req);
 	if (err == -EINPROGRESS || err == -EBUSY)
@@ -413,19 +363,20 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
 static void ahash_def_finup_done1(struct crypto_async_request *req, int err)
 {
 	struct ahash_request *areq = req->data;
+	struct ahash_request *subreq;
 
-	if (err == -EINPROGRESS) {
-		ahash_notify_einprogress(areq);
-		return;
-	}
+	if (err == -EINPROGRESS)
+		goto out;
 
-	areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+	subreq = areq->priv;
+	subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
 
 	err = ahash_def_finup_finish1(areq, err);
-	if (areq->priv)
+	if (err == -EINPROGRESS || err == -EBUSY)
 		return;
 
-	areq->base.complete(&areq->base, err);
+out:
+	ahash_request_complete(areq, err);
 }
 
 static int ahash_def_finup(struct ahash_request *req)
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index 1a2a41b79253..0b259dbb97af 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -199,7 +199,7 @@ static inline void *ahash_request_ctx_dma(struct ahash_request *req)
 
 static inline void ahash_request_complete(struct ahash_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 static inline u32 ahash_request_flags(struct ahash_request *req)
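The single allocation made by the new ahash_save_req() packs the
subrequest, the child's request context, and an alignmask-aligned
result buffer together.  Schematically (illustrative arithmetic
mirroring the hunk above, with a stand-in type):

#include <stddef.h>

struct stub_ahash_request { void *placeholder; };	/* stand-in type */

/*
 * Layout of the buffer returned by kmalloc() in ahash_save_req():
 *
 *   [ struct ahash_request | ctx (reqsize) | pad | result (ds bytes) ]
 *     ^ subreq               ^ subreq's ctx        ^ PTR_ALIGNed
 */
static size_t stub_subreq_size(size_t reqsize, size_t ds,
			       size_t alignmask, size_t ctx_align)
{
	size_t size = sizeof(struct stub_ahash_request);

	reqsize = (reqsize + ctx_align - 1) & ~(ctx_align - 1); /* ALIGN() */
	size += reqsize;		/* child's request context */
	size += ds;			/* digest-sized result buffer */
	size += alignmask & ~(ctx_align - 1);	/* slack for PTR_ALIGN() */
	return size;
}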
From patchwork Tue Jan 31 08:02:00 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:00 +0800
Subject: [PATCH 8/32] crypto: skcipher - Use crypto_request_complete
X-Patchwork-Id: 649338
X-Mailing-List: linux-crypto@vger.kernel.org

Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu
---
 include/crypto/internal/skcipher.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index 06d0a5491cf3..fb3d9e899f52 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -94,7 +94,7 @@ static inline void *skcipher_instance_ctx(struct skcipher_instance *inst)
 static inline void skcipher_request_complete(struct skcipher_request *req,
 					     int err)
 {
-	req->base.complete(&req->base, err);
+	crypto_request_complete(&req->base, err);
 }
 
 int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn,
From patchwork Tue Jan 31 08:02:04 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:04 +0800
Subject: [PATCH 10/32] crypto: rsa-pkcs1pad - Use akcipher_request_complete
X-Patchwork-Id: 649337
X-Mailing-List: linux-crypto@vger.kernel.org

Use the akcipher_request_complete helper instead of calling the
completion function directly.  In fact the previous code was buggy
in that EINPROGRESS was never passed back to the original caller.

Fixes: 3d5b1ecdea6f ("crypto: rsa - RSA padding algorithm")
Signed-off-by: Herbert Xu
---
 crypto/rsa-pkcs1pad.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index 6ee5b8a060c0..4e9d2244ee31 100644
--- a/crypto/rsa-pkcs1pad.c
+++ b/crypto/rsa-pkcs1pad.c
@@ -214,16 +214,14 @@ static void pkcs1pad_encrypt_sign_complete_cb(
 	struct crypto_async_request *child_async_req, int err)
 {
 	struct akcipher_request *req = child_async_req->data;
-	struct crypto_async_request async_req;
 
 	if (err == -EINPROGRESS)
-		return;
+		goto out;
+
+	err = pkcs1pad_encrypt_sign_complete(req, err);
 
-	async_req.data = req->base.data;
-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
-	async_req.flags = child_async_req->flags;
-	req->base.complete(&async_req,
-			   pkcs1pad_encrypt_sign_complete(req, err));
+out:
+	akcipher_request_complete(req, err);
 }
 
 static int pkcs1pad_encrypt(struct akcipher_request *req)
@@ -332,15 +330,14 @@ static void pkcs1pad_decrypt_complete_cb(
 	struct crypto_async_request *child_async_req, int err)
 {
 	struct akcipher_request *req = child_async_req->data;
-	struct crypto_async_request async_req;
 
 	if (err == -EINPROGRESS)
-		return;
+		goto out;
+
+	err = pkcs1pad_decrypt_complete(req, err);
 
-	async_req.data = req->base.data;
-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
-	async_req.flags = child_async_req->flags;
-	req->base.complete(&async_req, pkcs1pad_decrypt_complete(req, err));
+out:
+	akcipher_request_complete(req, err);
 }
 
 static int pkcs1pad_decrypt(struct akcipher_request *req)
@@ -513,15 +510,14 @@ static void pkcs1pad_verify_complete_cb(
 	struct crypto_async_request *child_async_req, int err)
 {
 	struct akcipher_request *req = child_async_req->data;
-	struct crypto_async_request async_req;
 
 	if (err == -EINPROGRESS)
-		return;
+		goto out;
 
-	async_req.data = req->base.data;
-	async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
-	async_req.flags = child_async_req->flags;
-	req->base.complete(&async_req, pkcs1pad_verify_complete(req, err));
+	err = pkcs1pad_verify_complete(req, err);
+
+out:
+	akcipher_request_complete(req, err);
 }
 
 /*
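The bug and its fix are easiest to see in the shape all three callbacks
now share: -EINPROGRESS (a backlogged request being accepted) must be
forwarded to the original caller immediately, while any other status is
first post-processed.  A sketch, with a hypothetical pad_finish()
standing in for the pkcs1pad completion helpers (kernel context
assumed):

static void pad_done_cb(struct crypto_async_request *child, int err)
{
	struct akcipher_request *req = child->data;

	if (err == -EINPROGRESS)
		goto out;	/* forward the notification untouched */

	err = pad_finish(req, err);	/* hypothetical post-processing */

out:
	akcipher_request_complete(req, err);
}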
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229668AbjBHF4I (ORCPT ); Wed, 8 Feb 2023 00:56:08 -0500 Received: from formenos.hmeau.com (167-179-156-38.a7b39c.syd.nbn.aussiebb.net [167.179.156.38]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AEEED2311E for ; Tue, 7 Feb 2023 21:56:05 -0800 (PST) Received: from loth.rohan.me.apana.org.au ([192.168.167.2]) by formenos.hmeau.com with smtp (Exim 4.94.2 #2 (Debian)) id 1pPdR2-008l01-8E; Wed, 08 Feb 2023 13:56:01 +0800 Received: by loth.rohan.me.apana.org.au (sSMTP sendmail emulation); Wed, 08 Feb 2023 13:56:00 +0800 Date: Wed, 8 Feb 2023 13:56:00 +0800 From: Herbert Xu To: Linux Crypto Mailing List , Tudor Ambarus , Jesper Nilsson , Lars Persson , linux-arm-kernel@axis.com, Raveendra Padasalagi , George Cherian , Tom Lendacky , John Allen , Ayush Sawal , Kai Ye , Longfang Liu , Antoine Tenart , Corentin Labbe , Boris Brezillon , Arnaud Ebalard , Srujana Challa , Giovanni Cabiddu , qat-linux@intel.com, Thara Gopinath , Krzysztof Kozlowski , Vladimir Zapolskiy Subject: [v2 PATCH 11/32] crypto: cryptd - Use request_complete helpers Message-ID: References: MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org v2 adds some missing subreq assignments which breaks the callback in case of EINPROGRESS. ---8<--- Use the request_complete helpers instead of calling the completion function directly. Signed-off-by: Herbert Xu --- crypto/cryptd.c | 234 ++++++++++++++++++++++++++++++-------------------------- 1 file changed, 126 insertions(+), 108 deletions(-) diff --git a/crypto/cryptd.c b/crypto/cryptd.c index 9d60acc920cb..29890fc0eab7 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -72,7 +72,6 @@ struct cryptd_skcipher_ctx { }; struct cryptd_skcipher_request_ctx { - crypto_completion_t complete; struct skcipher_request req; }; @@ -83,6 +82,7 @@ struct cryptd_hash_ctx { struct cryptd_hash_request_ctx { crypto_completion_t complete; + void *data; struct shash_desc desc; }; @@ -92,7 +92,6 @@ struct cryptd_aead_ctx { }; struct cryptd_aead_request_ctx { - crypto_completion_t complete; struct aead_request req; }; @@ -178,8 +177,8 @@ static void cryptd_queue_worker(struct work_struct *work) return; if (backlog) - backlog->complete(backlog, -EINPROGRESS); - req->complete(req, 0); + crypto_request_complete(backlog, -EINPROGRESS); + crypto_request_complete(req, 0); if (cpu_queue->queue.qlen) queue_work(cryptd_wq, &cpu_queue->work); @@ -238,18 +237,51 @@ static int cryptd_skcipher_setkey(struct crypto_skcipher *parent, return crypto_skcipher_setkey(child, key, keylen); } -static void cryptd_skcipher_complete(struct skcipher_request *req, int err) +static struct skcipher_request *cryptd_skcipher_prepare( + struct skcipher_request *req, int err) +{ + struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req); + struct skcipher_request *subreq = &rctx->req; + struct cryptd_skcipher_ctx *ctx; + struct crypto_skcipher *child; + + req->base.complete = subreq->base.complete; + req->base.data = subreq->base.data; + + if (unlikely(err == -EINPROGRESS)) + return NULL; + + ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req)); + child = ctx->child; + + skcipher_request_set_tfm(subreq, child); + skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP, + NULL, NULL); + skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen, + req->iv); + + return subreq; +} + +static void cryptd_skcipher_complete(struct skcipher_request 
+static void cryptd_skcipher_complete(struct skcipher_request *req, int err,
+				     crypto_completion_t complete)
 {
+	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
+	struct skcipher_request *subreq = &rctx->req;
 	int refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	rctx->complete(&req->base, err);
+	skcipher_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (unlikely(err == -EINPROGRESS)) {
+		subreq->base.complete = req->base.complete;
+		subreq->base.data = req->base.data;
+		req->base.complete = complete;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_skcipher(tfm);
 }
 
@@ -257,54 +289,26 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base,
 	int err)
 {
 	struct skcipher_request *req = skcipher_request_cast(base);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct skcipher_request *subreq = &rctx->req;
-	struct crypto_skcipher *child = ctx->child;
+	struct skcipher_request *subreq;
 
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	skcipher_request_set_tfm(subreq, child);
-	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
-				      NULL, NULL);
-	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
-				   req->iv);
-
-	err = crypto_skcipher_encrypt(subreq);
-
-	req->base.complete = rctx->complete;
+	subreq = cryptd_skcipher_prepare(req, err);
+	if (likely(subreq))
+		err = crypto_skcipher_encrypt(subreq);
 
-out:
-	cryptd_skcipher_complete(req, err);
+	cryptd_skcipher_complete(req, err, cryptd_skcipher_encrypt);
 }
 
 static void cryptd_skcipher_decrypt(struct crypto_async_request *base,
 	int err)
 {
 	struct skcipher_request *req = skcipher_request_cast(base);
-	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct skcipher_request *subreq = &rctx->req;
-	struct crypto_skcipher *child = ctx->child;
-
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	skcipher_request_set_tfm(subreq, child);
-	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
-				      NULL, NULL);
-	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
-				   req->iv);
-
-	err = crypto_skcipher_decrypt(subreq);
-
-	req->base.complete = rctx->complete;
+	struct skcipher_request *subreq;
 
-out:
-	cryptd_skcipher_complete(req, err);
+	subreq = cryptd_skcipher_prepare(req, err);
+	if (likely(subreq))
+		err = crypto_skcipher_decrypt(subreq);
 
+	cryptd_skcipher_complete(req, err, cryptd_skcipher_decrypt);
 }
 
 static int cryptd_skcipher_enqueue(struct skcipher_request *req,
@@ -312,11 +316,14 @@ static int cryptd_skcipher_enqueue(struct skcipher_request *req,
 {
 	struct cryptd_skcipher_request_ctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_request *subreq = &rctx->req;
 	struct cryptd_queue *queue;
 
 	queue = cryptd_get_queue(crypto_skcipher_tfm(tfm));
-	rctx->complete = req->base.complete;
+	subreq->base.complete = req->base.complete;
+	subreq->base.data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 
 	return cryptd_enqueue_request(queue, &req->base);
 }
 
@@ -469,45 +476,63 @@ static int cryptd_hash_enqueue(struct ahash_request *req,
 		cryptd_get_queue(crypto_ahash_tfm(tfm));
 
 	rctx->complete = req->base.complete;
+	rctx->data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 
 	return cryptd_enqueue_request(queue, &req->base);
 }
 
-static void cryptd_hash_complete(struct ahash_request *req, int err)
+static struct shash_desc *cryptd_hash_prepare(struct ahash_request *req,
+					      int err)
+{
+	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+
+	req->base.complete = rctx->complete;
+	req->base.data = rctx->data;
+
+	if (unlikely(err == -EINPROGRESS))
+		return NULL;
+
+	return &rctx->desc;
+}
+
+static void cryptd_hash_complete(struct ahash_request *req, int err,
+				 crypto_completion_t complete)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
 	int refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	rctx->complete(&req->base, err);
+	ahash_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (err == -EINPROGRESS) {
+		req->base.complete = complete;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_ahash(tfm);
 }
 
 static void cryptd_hash_init(struct crypto_async_request *req_async, int err)
 {
-	struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
-	struct crypto_shash *child = ctx->child;
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-	struct shash_desc *desc = &rctx->desc;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_shash *child = ctx->child;
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
+	desc = cryptd_hash_prepare(req, err);
+	if (unlikely(!desc))
 		goto out;
 
 	desc->tfm = child;
 
 	err = crypto_shash_init(desc);
 
-	req->base.complete = rctx->complete;
-
 out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_init);
 }
 
 static int cryptd_hash_init_enqueue(struct ahash_request *req)
@@ -518,19 +543,13 @@ static int cryptd_hash_init_enqueue(struct ahash_request *req)
 static void cryptd_hash_update(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx;
+	struct shash_desc *desc;
 
-	rctx = ahash_request_ctx(req);
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = shash_ahash_update(req, desc);
 
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = shash_ahash_update(req, &rctx->desc);
-
-	req->base.complete = rctx->complete;
-
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_update);
 }
 
 static int cryptd_hash_update_enqueue(struct ahash_request *req)
@@ -541,17 +560,13 @@ static int cryptd_hash_update_enqueue(struct ahash_request *req)
 static void cryptd_hash_final(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = crypto_shash_final(&rctx->desc, req->result);
+	struct shash_desc *desc;
 
-	req->base.complete = rctx->complete;
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = crypto_shash_final(desc, req->result);
 
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_final);
 }
 
 static int cryptd_hash_final_enqueue(struct ahash_request *req)
@@ -562,17 +577,13 @@ static int cryptd_hash_final_enqueue(struct ahash_request *req)
 static void cryptd_hash_finup(struct crypto_async_request *req_async, int err)
 {
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
-		goto out;
-
-	err = shash_ahash_finup(req, &rctx->desc);
+	desc = cryptd_hash_prepare(req, err);
+	if (likely(desc))
+		err = shash_ahash_finup(req, desc);
 
-	req->base.complete = rctx->complete;
-
-out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_finup);
 }
 
 static int cryptd_hash_finup_enqueue(struct ahash_request *req)
@@ -582,23 +593,22 @@ static int cryptd_hash_finup_enqueue(struct ahash_request *req)
 
 static void cryptd_hash_digest(struct crypto_async_request *req_async, int err)
 {
-	struct cryptd_hash_ctx *ctx = crypto_tfm_ctx(req_async->tfm);
-	struct crypto_shash *child = ctx->child;
 	struct ahash_request *req = ahash_request_cast(req_async);
-	struct cryptd_hash_request_ctx *rctx = ahash_request_ctx(req);
-	struct shash_desc *desc = &rctx->desc;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct cryptd_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_shash *child = ctx->child;
+	struct shash_desc *desc;
 
-	if (unlikely(err == -EINPROGRESS))
+	desc = cryptd_hash_prepare(req, err);
+	if (unlikely(!desc))
		goto out;
 
 	desc->tfm = child;
 
 	err = shash_ahash_digest(req, desc);
 
-	req->base.complete = rctx->complete;
-
 out:
-	cryptd_hash_complete(req, err);
+	cryptd_hash_complete(req, err, cryptd_hash_digest);
 }
 
 static int cryptd_hash_digest_enqueue(struct ahash_request *req)
@@ -711,20 +721,20 @@ static int cryptd_aead_setauthsize(struct crypto_aead *parent,
 }
 
 static void cryptd_aead_crypt(struct aead_request *req,
-			      struct crypto_aead *child,
-			      int err,
-			      int (*crypt)(struct aead_request *req))
+			      struct crypto_aead *child, int err,
+			      int (*crypt)(struct aead_request *req),
+			      crypto_completion_t compl)
 {
 	struct cryptd_aead_request_ctx *rctx;
 	struct aead_request *subreq;
 	struct cryptd_aead_ctx *ctx;
-	crypto_completion_t compl;
 	struct crypto_aead *tfm;
 	int refcnt;
 
 	rctx = aead_request_ctx(req);
-	compl = rctx->complete;
 	subreq = &rctx->req;
+	req->base.complete = subreq->base.complete;
+	req->base.data = subreq->base.data;
 
 	tfm = crypto_aead_reqtfm(req);
 
@@ -740,17 +750,20 @@ static void cryptd_aead_crypt(struct aead_request *req,
 
 	err = crypt(subreq);
 
-	req->base.complete = compl;
-
 out:
 	ctx = crypto_aead_ctx(tfm);
 	refcnt = refcount_read(&ctx->refcnt);
 
 	local_bh_disable();
-	compl(&req->base, err);
+	aead_request_complete(req, err);
 	local_bh_enable();
 
-	if (err != -EINPROGRESS && refcnt && refcount_dec_and_test(&ctx->refcnt))
+	if (err == -EINPROGRESS) {
+		subreq->base.complete = req->base.complete;
+		subreq->base.data = req->base.data;
+		req->base.complete = compl;
+		req->base.data = req;
+	} else if (refcnt && refcount_dec_and_test(&ctx->refcnt))
 		crypto_free_aead(tfm);
 }
 
@@ -761,7 +774,8 @@ static void cryptd_aead_encrypt(struct crypto_async_request *areq, int err)
 	struct aead_request *req;
 
 	req = container_of(areq, struct aead_request, base);
-	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt);
+	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->encrypt,
+			  cryptd_aead_encrypt);
 }
 
 static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err)
@@ -771,7 +785,8 @@ static void cryptd_aead_decrypt(struct crypto_async_request *areq, int err)
 	struct aead_request *req;
 
 	req = container_of(areq, struct aead_request, base);
-	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt);
+	cryptd_aead_crypt(req, child, err, crypto_aead_alg(child)->decrypt,
+			  cryptd_aead_decrypt);
 }
 
 static int cryptd_aead_enqueue(struct aead_request *req,
@@ -780,9 +795,12 @@ static int cryptd_aead_enqueue(struct aead_request *req,
 	struct cryptd_aead_request_ctx *rctx = aead_request_ctx(req);
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct cryptd_queue *queue = cryptd_get_queue(crypto_aead_tfm(tfm));
+	struct aead_request *subreq = &rctx->req;
 
-	rctx->complete = req->base.complete;
+	subreq->base.complete = req->base.complete;
+	subreq->base.data = req->base.data;
 	req->base.complete = compl;
+	req->base.data = req;
 
 	return cryptd_enqueue_request(queue, &req->base);
 }
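The EINPROGRESS handling above is the subtle part of the conversion
(and what v2 fixes): while a request sits on cryptd's queue, the
caller's completion is parked in the embedded subrequest and
req->base.complete points back at cryptd's own handler, so a backlog
notification re-enters that handler, which swaps the caller's
completion back in before signalling it.  A simplified sketch of the
swap (kernel context assumed; not the literal cryptd function):

/*
 * Park the caller's completion in the subrequest and point the parent
 * back at cryptd's handler (simplified from the hunks above).
 */
static void cryptd_park_callback(struct crypto_async_request *req,
				 struct crypto_async_request *subreq,
				 crypto_completion_t handler)
{
	subreq->complete = req->complete;	/* caller's callback, saved */
	subreq->data = req->data;
	req->complete = handler;		/* queue re-enters handler */
	req->data = req;
}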
From patchwork Tue Jan 31 08:02:08 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:08 +0800
Subject: [PATCH 12/32] crypto: atmel - Use request_complete helpers
X-Patchwork-Id: 649336
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/atmel-aes.c  | 4 ++--
 drivers/crypto/atmel-sha.c  | 4 ++--
 drivers/crypto/atmel-tdes.c | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index e90e4a6cc37a..ed10f2ae4523 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -554,7 +554,7 @@ static inline int atmel_aes_complete(struct atmel_aes_dev *dd, int err)
 	}
 
 	if (dd->is_async)
-		dd->areq->complete(dd->areq, err);
+		crypto_request_complete(dd->areq, err);
 
 	tasklet_schedule(&dd->queue_task);
 
@@ -955,7 +955,7 @@ static int atmel_aes_handle_queue(struct atmel_aes_dev *dd,
 		return ret;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	ctx = crypto_tfm_ctx(areq->tfm);
 
diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index 00be792e605c..a77cf0da0816 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -292,7 +292,7 @@ static inline int atmel_sha_complete(struct atmel_sha_dev *dd, int err)
 	clk_disable(dd->iclk);
 
 	if ((dd->is_async || dd->force_complete) && req->base.complete)
-		req->base.complete(&req->base, err);
+		ahash_request_complete(req, err);
 
 	/* handle new request */
 	tasklet_schedule(&dd->queue_task);
@@ -1080,7 +1080,7 @@ static int atmel_sha_handle_queue(struct atmel_sha_dev *dd,
 		return ret;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	ctx = crypto_tfm_ctx(async_req->tfm);
 
diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
index 8b7bc1076e0d..b2d48c1649b9 100644
--- a/drivers/crypto/atmel-tdes.c
+++ b/drivers/crypto/atmel-tdes.c
@@ -590,7 +590,7 @@ static void atmel_tdes_finish_req(struct atmel_tdes_dev *dd, int err)
 	if (!err && (rctx->mode & TDES_FLAGS_OPMODE_MASK) != TDES_FLAGS_ECB)
 		atmel_tdes_set_iv_as_last_ciphertext_block(dd);
 
-	req->base.complete(&req->base, err);
+	skcipher_request_complete(req, err);
 }
 
 static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd,
@@ -619,7 +619,7 @@ static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd,
 		return ret;
 
 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);
 
 	req = skcipher_request_cast(async_req);
 
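atmel_aes_handle_queue() and its sha/tdes siblings all follow the same
backlog idiom, which is why this and the following driver conversions
are mechanical: a request promoted off the backlog is first
acknowledged with -EINPROGRESS, and only completed for real once the
hardware finishes.  A sketch of the idiom (crypto_get_backlog() here
mirrors the static helper cryptd uses; the hardware handoff is elided):

static void drain_one_request(struct crypto_queue *queue)
{
	struct crypto_async_request *req, *backlog;

	backlog = crypto_get_backlog(queue);	/* like cryptd's helper */
	req = crypto_dequeue_request(queue);
	if (!req)
		return;

	/* Tell the submitter its backlogged request is now in flight. */
	if (backlog)
		crypto_request_complete(backlog, -EINPROGRESS);

	/* ... hand req to the engine; real completion happens later ... */
}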
From patchwork Tue Jan 31 08:02:10 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:10 +0800
Subject: [PATCH 13/32] crypto: artpec6 - Use request_complete helpers
X-Patchwork-Id: 649335
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
Acked-by: Jesper Nilsson
---
 drivers/crypto/axis/artpec6_crypto.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/axis/artpec6_crypto.c b/drivers/crypto/axis/artpec6_crypto.c
index f6f41e316dfe..8493a45e1bd4 100644
--- a/drivers/crypto/axis/artpec6_crypto.c
+++ b/drivers/crypto/axis/artpec6_crypto.c
@@ -2143,13 +2143,13 @@ static void artpec6_crypto_task(unsigned long data)
 
 	list_for_each_entry_safe(req, n, &complete_in_progress,
 				 complete_in_progress) {
-		req->req->complete(req->req, -EINPROGRESS);
+		crypto_request_complete(req->req, -EINPROGRESS);
 	}
 }
 
 static void artpec6_crypto_complete_crypto(struct crypto_async_request *req)
 {
-	req->complete(req, 0);
+	crypto_request_complete(req, 0);
 }
 
 static void
@@ -2161,7 +2161,7 @@ artpec6_crypto_complete_cbc_decrypt(struct crypto_async_request *req)
 
 	scatterwalk_map_and_copy(cipher_req->iv, cipher_req->src,
				 cipher_req->cryptlen - AES_BLOCK_SIZE,
				 AES_BLOCK_SIZE, 0);
-	req->complete(req, 0);
+	skcipher_request_complete(cipher_req, 0);
 }
 
 static void
@@ -2173,7 +2173,7 @@ artpec6_crypto_complete_cbc_encrypt(struct crypto_async_request *req)
 
 	scatterwalk_map_and_copy(cipher_req->iv, cipher_req->dst,
				 cipher_req->cryptlen - AES_BLOCK_SIZE,
				 AES_BLOCK_SIZE, 0);
-	req->complete(req, 0);
+	skcipher_request_complete(cipher_req, 0);
 }
 
 static void artpec6_crypto_complete_aead(struct crypto_async_request *req)
@@ -2211,12 +2211,12 @@ static void artpec6_crypto_complete_aead(struct crypto_async_request *req)
 		}
 	}
 
-	req->complete(req, result);
+	aead_request_complete(areq, result);
 }
 
 static void artpec6_crypto_complete_hash(struct crypto_async_request *req)
 {
-	req->complete(req, 0);
+	crypto_request_complete(req, 0);
 }
From patchwork Tue Jan 31 08:02:17 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:17 +0800
Subject: [PATCH 16/32] crypto: nitrox - Use request_complete helpers
X-Patchwork-Id: 649334
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/cavium/nitrox/nitrox_aead.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/cavium/nitrox/nitrox_aead.c b/drivers/crypto/cavium/nitrox/nitrox_aead.c
index 0653484df23f..b0e53034164a 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_aead.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_aead.c
@@ -199,7 +199,7 @@ static void nitrox_aead_callback(void *arg, int err)
 		err = -EINVAL;
 	}
 
-	areq->base.complete(&areq->base, err);
+	aead_request_complete(areq, err);
 }
 
 static inline bool nitrox_aes_gcm_assoclen_supported(unsigned int assoclen)
@@ -434,7 +434,7 @@ static void nitrox_rfc4106_callback(void *arg, int err)
 		err = -EINVAL;
 	}
 
-	areq->base.complete(&areq->base, err);
+	aead_request_complete(areq, err);
 }
 
 static int nitrox_rfc4106_enc(struct aead_request *areq)
From patchwork Tue Jan 31 08:02:21 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:21 +0800
Subject: [PATCH 18/32] crypto: chelsio - Use request_complete helpers
X-Patchwork-Id: 649333
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/chelsio/chcr_algo.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 68d65773ef2b..0eade4fa6695 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -220,7 +220,7 @@ static inline int chcr_handle_aead_resp(struct aead_request *req,
 		reqctx->verify = VERIFY_HW;
 	}
 	chcr_dec_wrcount(dev);
-	req->base.complete(&req->base, err);
+	aead_request_complete(req, err);
 
 	return err;
 }
@@ -1235,7 +1235,7 @@ static int chcr_handle_cipher_resp(struct skcipher_request *req,
 		complete(&ctx->cbc_aes_aio_done);
 	}
 	chcr_dec_wrcount(dev);
-	req->base.complete(&req->base, err);
+	skcipher_request_complete(req, err);
 
 	return err;
 }
@@ -2132,7 +2132,7 @@ static inline void chcr_handle_ahash_resp(struct ahash_request *req,
 
 out:
 	chcr_dec_wrcount(dev);
-	req->base.complete(&req->base, err);
+	ahash_request_complete(req, err);
 }
 
 /*

From patchwork Tue Jan 31 08:02:25 2023
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:25 +0800
Subject: [PATCH 20/32] crypto: hisilicon - Use request_complete helpers
X-Patchwork-Id: 649332
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.
Signed-off-by: Herbert Xu
---
 drivers/crypto/hisilicon/sec/sec_algs.c    |  6 +++---
 drivers/crypto/hisilicon/sec2/sec_crypto.c | 10 ++++------
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/hisilicon/sec/sec_algs.c b/drivers/crypto/hisilicon/sec/sec_algs.c
index 490e1542305e..1189effcdad0 100644
--- a/drivers/crypto/hisilicon/sec/sec_algs.c
+++ b/drivers/crypto/hisilicon/sec/sec_algs.c
@@ -504,8 +504,8 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp,
 				     kfifo_avail(&ctx->queue->softqueue) >
 				     backlog_req->num_elements)) {
 				sec_send_request(backlog_req, ctx->queue);
-				backlog_req->req_base->complete(backlog_req->req_base,
-								-EINPROGRESS);
+				crypto_request_complete(backlog_req->req_base,
+							-EINPROGRESS);
 				list_del(&backlog_req->backlog_head);
 			}
 		}
@@ -534,7 +534,7 @@ static void sec_skcipher_alg_callback(struct sec_bd_info *sec_resp,
 		if (skreq->src != skreq->dst)
 			dma_unmap_sg(dev, skreq->dst, sec_req->len_out,
 				     DMA_BIDIRECTIONAL);
-		skreq->base.complete(&skreq->base, sec_req->err);
+		skcipher_request_complete(skreq, sec_req->err);
 	}
 }
 
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
index f5bfc9755a4a..074e50ef512c 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
@@ -1459,12 +1459,11 @@ static void sec_skcipher_callback(struct sec_ctx *ctx, struct sec_req *req,
 			break;
 
 		backlog_sk_req = backlog_req->c_req.sk_req;
-		backlog_sk_req->base.complete(&backlog_sk_req->base,
-					      -EINPROGRESS);
+		skcipher_request_complete(backlog_sk_req, -EINPROGRESS);
 		atomic64_inc(&ctx->sec->debug.dfx.recv_busy_cnt);
 	}
 
-	sk_req->base.complete(&sk_req->base, err);
+	skcipher_request_complete(sk_req, err);
 }
 
 static void set_aead_auth_iv(struct sec_ctx *ctx, struct sec_req *req)
@@ -1736,12 +1735,11 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
 			break;
 
 		backlog_aead_req = backlog_req->aead_req.aead_req;
-		backlog_aead_req->base.complete(&backlog_aead_req->base,
-						-EINPROGRESS);
+		aead_request_complete(backlog_aead_req, -EINPROGRESS);
 		atomic64_inc(&c->sec->debug.dfx.recv_busy_cnt);
 	}
 
-	a_req->base.complete(&a_req->base, err);
+	aead_request_complete(a_req, err);
 }
 
 static void sec_request_uninit(struct sec_ctx *ctx, struct sec_req *req)
From patchwork Tue Jan 31 08:02:29 2023
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 649331
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:29 +0800
Subject: [PATCH 22/32] crypto: safexcel - Use request_complete helpers
To: Linux Crypto Mailing List
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/inside-secure/safexcel.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
index ae6110376e21..5fac36d98070 100644
--- a/drivers/crypto/inside-secure/safexcel.c
+++ b/drivers/crypto/inside-secure/safexcel.c
@@ -850,7 +850,7 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
 			goto request_failed;

 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);

 		/* In case the send() helper did not issue any command to push
 		 * to the engine because the input data was cached, continue to
@@ -1050,7 +1050,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
 		if (should_complete) {
 			local_bh_disable();
-			req->complete(req, ret);
+			crypto_request_complete(req, ret);
 			local_bh_enable();
 		}
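The second hunk preserves an important detail: crypto completion
callbacks are normally invoked from softirq context, so a driver that
completes a request from thread context brackets the call with BH
disabling. Schematically (a sketch of the convention, not new driver
code):

	local_bh_disable();
	crypto_request_complete(req, ret);	/* callback runs as if in softirq */
	local_bh_enable();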
From patchwork Tue Jan 31 08:02:33 2023
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 649330
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:33 +0800
Subject: [PATCH 24/32] crypto: marvell/cesa - Use request_complete helpers
To: Linux Crypto Mailing List
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/marvell/cesa/cesa.c | 4 ++--
 drivers/crypto/marvell/cesa/tdma.c | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/marvell/cesa/cesa.c b/drivers/crypto/marvell/cesa/cesa.c
index 5cd332880653..b61e35b932e5 100644
--- a/drivers/crypto/marvell/cesa/cesa.c
+++ b/drivers/crypto/marvell/cesa/cesa.c
@@ -66,7 +66,7 @@ static void mv_cesa_rearm_engine(struct mv_cesa_engine *engine)
 		return;

 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);

 	ctx = crypto_tfm_ctx(req->tfm);
 	ctx->ops->step(req);
@@ -106,7 +106,7 @@ mv_cesa_complete_req(struct mv_cesa_ctx *ctx, struct crypto_async_request *req,
 {
 	ctx->ops->cleanup(req);
 	local_bh_disable();
-	req->complete(req, res);
+	crypto_request_complete(req, res);
 	local_bh_enable();
 }

diff --git a/drivers/crypto/marvell/cesa/tdma.c b/drivers/crypto/marvell/cesa/tdma.c
index f0b5537038c2..388a06e180d6 100644
--- a/drivers/crypto/marvell/cesa/tdma.c
+++ b/drivers/crypto/marvell/cesa/tdma.c
@@ -168,7 +168,7 @@ int mv_cesa_tdma_process(struct mv_cesa_engine *engine, u32 status)
 							req);

 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);
 	}

 	if (res || tdma->cur_dma == tdma_cur)
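mv_cesa_complete_req also illustrates the required ordering: the driver
releases its own per-request state (DMA unmapping, engine bookkeeping)
before signalling completion, because the callback may free or
immediately resubmit the request. A condensed sketch of that shape,
with demo_* standing in for the driver's own helpers:

static void demo_complete_req(struct demo_ctx *ctx,
			      struct crypto_async_request *req, int res)
{
	demo_cleanup(ctx, req);			/* driver state first ...  */
	local_bh_disable();
	crypto_request_complete(req, res);	/* ... then hand req back  */
	local_bh_enable();
}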
From patchwork Tue Jan 31 08:02:40 2023
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 649329
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:40 +0800
Subject: [PATCH 27/32] crypto: mxs-dcp - Use request_complete helpers
To: Linux Crypto Mailing List
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/mxs-dcp.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
index d6f9e2fe863d..1c11946a4f0b 100644
--- a/drivers/crypto/mxs-dcp.c
+++ b/drivers/crypto/mxs-dcp.c
@@ -413,11 +413,11 @@ static int dcp_chan_thread_aes(void *data)
 		set_current_state(TASK_RUNNING);

 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);

 		if (arq) {
 			ret = mxs_dcp_aes_block_crypt(arq);
-			arq->complete(arq, ret);
+			crypto_request_complete(arq, ret);
 		}
 	}
@@ -709,11 +709,11 @@ static int dcp_chan_thread_sha(void *data)
 		set_current_state(TASK_RUNNING);

 		if (backlog)
-			backlog->complete(backlog, -EINPROGRESS);
+			crypto_request_complete(backlog, -EINPROGRESS);

 		if (arq) {
 			ret = dcp_sha_req_to_buf(arq);
-			arq->complete(arq, ret);
+			crypto_request_complete(arq, ret);
 		}
 	}
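dcp_chan_thread_aes() and dcp_chan_thread_sha() are per-channel kthreads
that drain a software queue. Reduced to its shape, and assuming the
generic crypto_get_backlog()/crypto_dequeue_request() queue helpers
(demo_* names stand in for the driver's own fields and processing), the
loop the hunks above sit in looks roughly like:

	for (;;) {
		__set_current_state(TASK_INTERRUPTIBLE);

		spin_lock(&demo->lock);
		backlog = crypto_get_backlog(&demo->queue);
		arq = crypto_dequeue_request(&demo->queue);
		spin_unlock(&demo->lock);

		if (!backlog && !arq) {
			schedule();		/* nothing queued: sleep */
			continue;
		}

		set_current_state(TASK_RUNNING);

		if (backlog)
			crypto_request_complete(backlog, -EINPROGRESS);

		if (arq)
			crypto_request_complete(arq, demo_process(arq));
	}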
From patchwork Tue Jan 31 08:02:42 2023
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 649328
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:42 +0800
Subject: [PATCH 28/32] crypto: qat - Use request_complete helpers
To: Linux Crypto Mailing List
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/qat/qat_common/qat_algs.c      | 4 ++--
 drivers/crypto/qat/qat_common/qat_algs_send.c | 3 ++-
 drivers/crypto/qat/qat_common/qat_comp_algs.c | 4 ++--
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index b4b9f0aa59b9..bcd239b11ec0 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -676,7 +676,7 @@ static void qat_aead_alg_callback(struct icp_qat_fw_la_resp *qat_resp,
 	qat_bl_free_bufl(inst->accel_dev, &qat_req->buf);
 	if (unlikely(qat_res != ICP_QAT_FW_COMN_STATUS_FLAG_OK))
 		res = -EBADMSG;
-	areq->base.complete(&areq->base, res);
+	aead_request_complete(areq, res);
 }

 static void qat_alg_update_iv_ctr_mode(struct qat_crypto_request *qat_req)
@@ -752,7 +752,7 @@ static void qat_skcipher_alg_callback(struct icp_qat_fw_la_resp *qat_resp,
 	memcpy(sreq->iv, qat_req->iv, AES_BLOCK_SIZE);

-	sreq->base.complete(&sreq->base, res);
+	skcipher_request_complete(sreq, res);
 }

 void qat_alg_callback(void *resp)

diff --git a/drivers/crypto/qat/qat_common/qat_algs_send.c b/drivers/crypto/qat/qat_common/qat_algs_send.c
index ff5b4347f783..bb80455b3e81 100644
--- a/drivers/crypto/qat/qat_common/qat_algs_send.c
+++ b/drivers/crypto/qat/qat_common/qat_algs_send.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
 /* Copyright(c) 2022 Intel Corporation */
+#include
 #include "adf_transport.h"
 #include "qat_algs_send.h"
 #include "qat_crypto.h"
@@ -34,7 +35,7 @@ void qat_alg_send_backlog(struct qat_instance_backlog *backlog)
 			break;
 		}
 		list_del(&req->list);
-		req->base->complete(req->base, -EINPROGRESS);
+		crypto_request_complete(req->base, -EINPROGRESS);
 	}
 	spin_unlock_bh(&backlog->lock);
 }

diff --git a/drivers/crypto/qat/qat_common/qat_comp_algs.c b/drivers/crypto/qat/qat_common/qat_comp_algs.c
index 1480d36a8d2b..cf0ac9f8ea83 100644
--- a/drivers/crypto/qat/qat_common/qat_comp_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_comp_algs.c
@@ -94,7 +94,7 @@ static void qat_comp_resubmit(struct work_struct *work)

 err:
 	qat_bl_free_bufl(accel_dev, qat_bufs);
-	areq->base.complete(&areq->base, ret);
+	acomp_request_complete(areq, ret);
 }

 static void qat_comp_generic_callback(struct qat_compression_req *qat_req,
@@ -169,7 +169,7 @@ static void qat_comp_generic_callback(struct qat_compression_req *qat_req,

 end:
 	qat_bl_free_bufl(accel_dev, &qat_req->buf);
-	areq->base.complete(&areq->base, res);
+	acomp_request_complete(areq, res);
 }

 void qat_comp_alg_callback(void *resp)
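The qat conversion shows that the wrappers cover every request type,
including compression: acomp_request_complete() is used for struct
acomp_req. By analogy with the call sites it replaces, it presumably
reduces to the same base-request forwarding (a sketch, not the header
definition):

static inline void acomp_request_complete(struct acomp_req *req, int err)
{
	crypto_request_complete(&req->base, err);
}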
From patchwork Tue Jan 31 08:02:46 2023
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 649327
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:46 +0800
Subject: [PATCH 30/32] crypto: s5p-sss - Use request_complete helpers
To: Linux Crypto Mailing List
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/s5p-sss.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
index b79e49aa724f..1c4d5fb05d69 100644
--- a/drivers/crypto/s5p-sss.c
+++ b/drivers/crypto/s5p-sss.c
@@ -499,7 +499,7 @@ static void s5p_sg_done(struct s5p_aes_dev *dev)
 /* Calls the completion. Cannot be called with dev->lock hold. */
 static void s5p_aes_complete(struct skcipher_request *req, int err)
 {
-	req->base.complete(&req->base, err);
+	skcipher_request_complete(req, err);
 }

 static void s5p_unset_outdata(struct s5p_aes_dev *dev)
@@ -1355,7 +1355,7 @@ static void s5p_hash_finish_req(struct ahash_request *req, int err)
 	spin_unlock_irqrestore(&dd->hash_lock, flags);

 	if (req->base.complete)
-		req->base.complete(&req->base, err);
+		ahash_request_complete(req, err);
 }

 /**
@@ -1397,7 +1397,7 @@ static int s5p_hash_handle_queue(struct s5p_aes_dev *dd,
 		return ret;

 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);

 	req = ahash_request_cast(async_req);
 	dd->hash_req = req;
@@ -1991,7 +1991,7 @@ static void s5p_tasklet_cb(unsigned long data)
 	spin_unlock_irqrestore(&dev->lock, flags);

 	if (backlog)
-		backlog->complete(backlog, -EINPROGRESS);
+		crypto_request_complete(backlog, -EINPROGRESS);

 	dev->req = skcipher_request_cast(async_req);
 	dev->ctx = crypto_tfm_ctx(dev->req->base.tfm);
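Note that s5p_hash_finish_req checks req->base.complete before
completing: the driver issues internal hash requests that carry no
callback. For an ordinary submitter the callback is installed at setup
time. A minimal caller-side sketch, using the callback signature in
effect at this point in the series (my_done is a placeholder name):

static void my_done(struct crypto_async_request *areq, int err)
{
	if (err == -EINPROGRESS)
		return;		/* backlogged request is now queued */
	pr_info("hash finished: %d\n", err);
}

	/* at setup, before submitting the request: */
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   my_done, NULL);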
From patchwork Tue Jan 31 08:02:50 2023
X-Patchwork-Submitter: Herbert Xu
X-Patchwork-Id: 649326
From: "Herbert Xu"
Date: Tue, 31 Jan 2023 16:02:50 +0800
Subject: [PATCH 32/32] crypto: talitos - Use request_complete helpers
To: Linux Crypto Mailing List
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

Use the request_complete helpers instead of calling the completion
function directly.

Signed-off-by: Herbert Xu
---
 drivers/crypto/talitos.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index d62ec68e3183..bb27f011cf31 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -1560,7 +1560,7 @@ static void skcipher_done(struct device *dev,

 	kfree(edesc);

-	areq->base.complete(&areq->base, err);
+	skcipher_request_complete(areq, err);
 }

 static int common_nonsnoop(struct talitos_edesc *edesc,
@@ -1759,7 +1759,7 @@ static void ahash_done(struct device *dev,

 	kfree(edesc);

-	areq->base.complete(&areq->base, err);
+	ahash_request_complete(areq, err);
 }

 /*
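None of these conversions change behaviour as seen from the API's
users. Rather than open-coding a callback like the sketch above, a
caller waiting synchronously on any of the converted drivers typically
uses the standard crypto_req_done/crypto_wait_req pattern (error
handling trimmed):

	DECLARE_CRYPTO_WAIT(wait);

	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);
	err = crypto_wait_req(crypto_ahash_digest(req), &wait);

crypto_wait_req() absorbs the -EINPROGRESS and -EBUSY intermediate
results, so the request_complete conversions in this series are
invisible to it.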