From patchwork Thu Mar 31 15:36:49 2022
X-Patchwork-Submitter: Giovanni Cabiddu
X-Patchwork-Id: 555329
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, marco.chiappero@intel.com, Giovanni Cabiddu, stable@vger.kernel.org, Mikulas Patocka, Wojciech Ziemba
Subject: [PATCH v2 1/4] crypto: qat - use pre-allocated buffers in datapath
Date: Thu, 31 Mar 2022 16:36:49 +0100
Message-Id: <20220331153652.37020-2-giovanni.cabiddu@intel.com>
In-Reply-To: <20220331153652.37020-1-giovanni.cabiddu@intel.com>
References: <20220331153652.37020-1-giovanni.cabiddu@intel.com>
X-Mailing-List: linux-crypto@vger.kernel.org

In order to do DMAs, the QAT device requires that the scatterlist structures are mapped and translated into a format that the firmware can understand. This is defined as the composition of a scatter-gather list (SGL) descriptor header, the struct qat_alg_buf_list, plus a variable number of flat buffer descriptors, the struct qat_alg_buf. The allocation and mapping of these data structures are done each time a request is received from the skcipher and aead APIs. In an OOM situation, this behaviour might lead to a deadlock if an allocation fails.
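As a preview of the fix described next, here is a standalone userspace sketch (not part of the patch) of the allocation policy it introduces: the request context embeds a fixed SGL descriptor able to hold up to QAT_MAX_BUFF_DESC flat buffers, so the heap is only touched for larger scatterlists. The struct names mirror the patch (including the driver's "bufers" spelling); struct req_ctx and the helpers get_buf_list()/put_buf_list() are invented for illustration, with calloc()/free() standing in for kzalloc_node()/kfree(). Nesting a flexible-array struct relies on the same GNU C extension the driver uses.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define QAT_MAX_BUFF_DESC 4

struct qat_alg_buf {
	uint32_t len;
	uint32_t resrvd;
	uint64_t addr;
};

struct qat_alg_buf_list {
	uint64_t resrvd;
	uint32_t num_bufs;
	uint32_t num_mapped_bufs;
	struct qat_alg_buf bufers[];
};

struct qat_alg_fixed_buf_list {
	struct qat_alg_buf_list sgl_hdr;
	struct qat_alg_buf descriptors[QAT_MAX_BUFF_DESC];
};

struct req_ctx {
	bool sgl_src_valid;	/* true: bl points into the context */
	struct qat_alg_fixed_buf_list sgl_src;
};

static struct qat_alg_buf_list *get_buf_list(struct req_ctx *ctx, int n)
{
	size_t sz = sizeof(struct qat_alg_buf_list) +
		    n * sizeof(struct qat_alg_buf);

	ctx->sgl_src_valid = false;
	if (n > QAT_MAX_BUFF_DESC)	/* rare case: fall back to the heap */
		return calloc(1, sz);

	/* common case: reuse the descriptor embedded in the context; only
	 * the header needs zeroing, the entries are filled per buffer
	 */
	memset(&ctx->sgl_src.sgl_hdr, 0, sizeof(struct qat_alg_buf_list));
	ctx->sgl_src_valid = true;
	return &ctx->sgl_src.sgl_hdr;
}

static void put_buf_list(struct req_ctx *ctx, struct qat_alg_buf_list *bl)
{
	if (!ctx->sgl_src_valid)	/* only free what came from the heap */
		free(bl);
}

int main(void)
{
	struct req_ctx ctx = { 0 };
	struct qat_alg_buf_list *bl = get_buf_list(&ctx, 3);

	printf("3 entries: %s\n", ctx.sgl_src_valid ? "pre-allocated" : "heap");
	put_buf_list(&ctx, bl);

	bl = get_buf_list(&ctx, 6);
	printf("6 entries: %s\n", ctx.sgl_src_valid ? "pre-allocated" : "heap");
	put_buf_list(&ctx, bl);
	return 0;
}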
Based on the conversation in [1], increase the size of the aead and skcipher request contexts to include an SGL descriptor that can handle a maximum of 4 flat buffers. If a request exceeds 4 entries, buffers are allocated dynamically.

In addition, remove the CRYPTO_ALG_ALLOCATES_MEMORY flag from both aead and skcipher alg structures.

[1] https://lore.kernel.org/linux-crypto/20200722072932.GA27544@gondor.apana.org.au/

Cc: stable@vger.kernel.org
Fixes: d370cec32194 ("crypto: qat - Intel(R) QAT crypto interface")
Reported-by: Mikulas Patocka
Signed-off-by: Giovanni Cabiddu
Reviewed-by: Marco Chiappero
Reviewed-by: Wojciech Ziemba
---
 drivers/crypto/qat/qat_common/qat_algs.c   | 77 ++++++++++++----------
 drivers/crypto/qat/qat_common/qat_crypto.h | 24 +++++++
 2 files changed, 67 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index f998ed58457c..b9228f3a26de 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -46,19 +46,6 @@ static DEFINE_MUTEX(algs_lock);
 static unsigned int active_devs;
 
-struct qat_alg_buf {
-	u32 len;
-	u32 resrvd;
-	u64 addr;
-} __packed;
-
-struct qat_alg_buf_list {
-	u64 resrvd;
-	u32 num_bufs;
-	u32 num_mapped_bufs;
-	struct qat_alg_buf bufers[];
-} __packed __aligned(64);
-
 /* Common content descriptor */
 struct qat_alg_cd {
 	union {
@@ -693,7 +680,10 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
 				bl->bufers[i].len, DMA_BIDIRECTIONAL);
 
 	dma_unmap_single(dev, blp, sz, DMA_TO_DEVICE);
-	kfree(bl);
+
+	if (!qat_req->buf.sgl_src_valid)
+		kfree(bl);
+
 	if (blp != blpout) {
 		/* If out of place operation dma unmap only data */
 		int bufless = blout->num_bufs - blout->num_mapped_bufs;
@@ -704,7 +694,9 @@ static void qat_alg_free_bufl(struct qat_crypto_instance *inst,
 					DMA_BIDIRECTIONAL);
 		}
 		dma_unmap_single(dev, blpout, sz_out, DMA_TO_DEVICE);
-		kfree(blout);
+
+		if (!qat_req->buf.sgl_dst_valid)
+			kfree(blout);
 	}
 }
 
@@ -721,15 +713,24 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 	dma_addr_t blp = DMA_MAPPING_ERROR;
 	dma_addr_t bloutp = DMA_MAPPING_ERROR;
 	struct scatterlist *sg;
-	size_t sz_out, sz = struct_size(bufl, bufers, n + 1);
+	size_t sz_out, sz = struct_size(bufl, bufers, n);
+	int node = dev_to_node(&GET_DEV(inst->accel_dev));
 
 	if (unlikely(!n))
 		return -EINVAL;
 
-	bufl = kzalloc_node(sz, GFP_ATOMIC,
-			    dev_to_node(&GET_DEV(inst->accel_dev)));
-	if (unlikely(!bufl))
-		return -ENOMEM;
+	qat_req->buf.sgl_src_valid = false;
+	qat_req->buf.sgl_dst_valid = false;
+
+	if (n > QAT_MAX_BUFF_DESC) {
+		bufl = kzalloc_node(sz, GFP_ATOMIC, node);
+		if (unlikely(!bufl))
+			return -ENOMEM;
+	} else {
+		bufl = &qat_req->buf.sgl_src.sgl_hdr;
+		memset(bufl, 0, sizeof(struct qat_alg_buf_list));
+		qat_req->buf.sgl_src_valid = true;
+	}
 
 	for_each_sg(sgl, sg, n, i)
 		bufl->bufers[i].addr = DMA_MAPPING_ERROR;
@@ -760,12 +761,18 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
 		struct qat_alg_buf *bufers;
 
 		n = sg_nents(sglout);
-		sz_out = struct_size(buflout, bufers, n + 1);
+		sz_out = struct_size(buflout, bufers, n);
 		sg_nctr = 0;
-		buflout = kzalloc_node(sz_out, GFP_ATOMIC,
-				       dev_to_node(&GET_DEV(inst->accel_dev)));
-		if (unlikely(!buflout))
-			goto err_in;
+
+		if (n > QAT_MAX_BUFF_DESC) {
+			buflout = kzalloc_node(sz_out, GFP_ATOMIC, node);
+			if (unlikely(!buflout))
+				goto err_in;
+		} else {
+			buflout = &qat_req->buf.sgl_dst.sgl_hdr;
+			memset(buflout, 0, sizeof(struct qat_alg_buf_list));
+			qat_req->buf.sgl_dst_valid = 
true; + } bufers = buflout->bufers; for_each_sg(sglout, sg, n, i) @@ -810,7 +817,9 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst, dma_unmap_single(dev, buflout->bufers[i].addr, buflout->bufers[i].len, DMA_BIDIRECTIONAL); - kfree(buflout); + + if (!qat_req->buf.sgl_dst_valid) + kfree(buflout); err_in: if (!dma_mapping_error(dev, blp)) @@ -823,7 +832,8 @@ static int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst, bufl->bufers[i].len, DMA_BIDIRECTIONAL); - kfree(bufl); + if (!qat_req->buf.sgl_src_valid) + kfree(bufl); dev_err(dev, "Failed to map buf for dma\n"); return -ENOMEM; @@ -1434,7 +1444,7 @@ static struct aead_alg qat_aeads[] = { { .cra_name = "authenc(hmac(sha1),cbc(aes))", .cra_driver_name = "qat_aes_cbc_hmac_sha1", .cra_priority = 4001, - .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .cra_flags = CRYPTO_ALG_ASYNC, .cra_blocksize = AES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct qat_alg_aead_ctx), .cra_module = THIS_MODULE, @@ -1451,7 +1461,7 @@ static struct aead_alg qat_aeads[] = { { .cra_name = "authenc(hmac(sha256),cbc(aes))", .cra_driver_name = "qat_aes_cbc_hmac_sha256", .cra_priority = 4001, - .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .cra_flags = CRYPTO_ALG_ASYNC, .cra_blocksize = AES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct qat_alg_aead_ctx), .cra_module = THIS_MODULE, @@ -1468,7 +1478,7 @@ static struct aead_alg qat_aeads[] = { { .cra_name = "authenc(hmac(sha512),cbc(aes))", .cra_driver_name = "qat_aes_cbc_hmac_sha512", .cra_priority = 4001, - .cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .cra_flags = CRYPTO_ALG_ASYNC, .cra_blocksize = AES_BLOCK_SIZE, .cra_ctxsize = sizeof(struct qat_alg_aead_ctx), .cra_module = THIS_MODULE, @@ -1486,7 +1496,7 @@ static struct skcipher_alg qat_skciphers[] = { { .base.cra_name = "cbc(aes)", .base.cra_driver_name = "qat_aes_cbc", .base.cra_priority = 4001, - .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .base.cra_flags = CRYPTO_ALG_ASYNC, .base.cra_blocksize = AES_BLOCK_SIZE, .base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx), .base.cra_alignmask = 0, @@ -1504,7 +1514,7 @@ static struct skcipher_alg qat_skciphers[] = { { .base.cra_name = "ctr(aes)", .base.cra_driver_name = "qat_aes_ctr", .base.cra_priority = 4001, - .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY, + .base.cra_flags = CRYPTO_ALG_ASYNC, .base.cra_blocksize = 1, .base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx), .base.cra_alignmask = 0, @@ -1522,8 +1532,7 @@ static struct skcipher_alg qat_skciphers[] = { { .base.cra_name = "xts(aes)", .base.cra_driver_name = "qat_aes_xts", .base.cra_priority = 4001, - .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK | - CRYPTO_ALG_ALLOCATES_MEMORY, + .base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK, .base.cra_blocksize = AES_BLOCK_SIZE, .base.cra_ctxsize = sizeof(struct qat_alg_skcipher_ctx), .base.cra_alignmask = 0, diff --git a/drivers/crypto/qat/qat_common/qat_crypto.h b/drivers/crypto/qat/qat_common/qat_crypto.h index b6a4c95ae003..0928f159ea99 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.h +++ b/drivers/crypto/qat/qat_common/qat_crypto.h @@ -21,6 +21,26 @@ struct qat_crypto_instance { atomic_t refctr; }; +#define QAT_MAX_BUFF_DESC 4 + +struct qat_alg_buf { + u32 len; + u32 resrvd; + u64 addr; +} __packed; + +struct qat_alg_buf_list { + u64 resrvd; + u32 num_bufs; + u32 num_mapped_bufs; + struct qat_alg_buf bufers[]; +} __packed; + +struct qat_alg_fixed_buf_list { + struct 
qat_alg_buf_list sgl_hdr;
+	struct qat_alg_buf descriptors[QAT_MAX_BUFF_DESC];
+} __packed __aligned(64);
+
 struct qat_crypto_request_buffs {
 	struct qat_alg_buf_list *bl;
 	dma_addr_t blp;
@@ -28,6 +48,10 @@ struct qat_crypto_request_buffs {
 	dma_addr_t bloutp;
 	size_t sz;
 	size_t sz_out;
+	bool sgl_src_valid;
+	bool sgl_dst_valid;
+	struct qat_alg_fixed_buf_list sgl_src;
+	struct qat_alg_fixed_buf_list sgl_dst;
 };
 
 struct qat_crypto_request;

From patchwork Thu Mar 31 15:36:50 2022
X-Patchwork-Submitter: Giovanni Cabiddu
X-Patchwork-Id: 555669
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, marco.chiappero@intel.com, Giovanni Cabiddu, stable@vger.kernel.org
Subject: [PATCH v2 2/4] crypto: qat - refactor submission logic
Date: Thu, 31 Mar 2022 16:36:50 +0100
Message-Id: <20220331153652.37020-3-giovanni.cabiddu@intel.com>
In-Reply-To: <20220331153652.37020-1-giovanni.cabiddu@intel.com>
References: <20220331153652.37020-1-giovanni.cabiddu@intel.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Move the submission loop to a new function, qat_alg_send_message(), and share it between the symmetric and the asymmetric algorithms. If the HW queues are full, return -ENOSPC instead of -EBUSY.
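To make the new contract concrete, the following userspace sketch (not part of the patch) mirrors the submission loop that qat_alg_send_message() introduces below: retry a bounded number of times on -EAGAIN, then report -ENOSPC so that callers can unwind, otherwise return -EINPROGRESS. fake_send() and the single-counter ring are invented stand-ins for adf_send_message() and the transport ring.

#include <errno.h>
#include <stdio.h>

#define ADF_MAX_RETRIES 20	/* same bound as the new qat_algs_send.c */

static int fake_send(int *free_slots)
{
	if (*free_slots <= 0)
		return -EAGAIN;	/* ring full, as adf_send_message() reports */
	(*free_slots)--;
	return 0;
}

static int send_message(int *ring)
{
	int ret, ctr = 0;

	do {
		ret = fake_send(ring);
	} while (ret == -EAGAIN && ctr++ < ADF_MAX_RETRIES);

	if (ret == -EAGAIN)
		return -ENOSPC;		/* callers unmap buffers and bail out */

	return -EINPROGRESS;		/* result arrives via the callback */
}

int main(void)
{
	int ring = 1;	/* a ring with a single free slot */

	printf("first send:  %d (-EINPROGRESS is %d)\n",
	       send_message(&ring), -EINPROGRESS);
	printf("second send: %d (-ENOSPC is %d)\n",
	       send_message(&ring), -ENOSPC);
	return 0;
}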
Cc: stable@vger.kernel.org Signed-off-by: Giovanni Cabiddu Reviewed-by: Marco Chiappero --- drivers/crypto/qat/qat_common/Makefile | 1 + drivers/crypto/qat/qat_common/qat_algs.c | 68 +++++++++---------- drivers/crypto/qat/qat_common/qat_algs_send.c | 21 ++++++ drivers/crypto/qat/qat_common/qat_algs_send.h | 10 +++ drivers/crypto/qat/qat_common/qat_asym_algs.c | 50 +++++++++----- drivers/crypto/qat/qat_common/qat_crypto.h | 5 ++ 6 files changed, 101 insertions(+), 54 deletions(-) create mode 100644 drivers/crypto/qat/qat_common/qat_algs_send.c create mode 100644 drivers/crypto/qat/qat_common/qat_algs_send.h diff --git a/drivers/crypto/qat/qat_common/Makefile b/drivers/crypto/qat/qat_common/Makefile index f25a6c8edfc7..04f058acc4d3 100644 --- a/drivers/crypto/qat/qat_common/Makefile +++ b/drivers/crypto/qat/qat_common/Makefile @@ -16,6 +16,7 @@ intel_qat-objs := adf_cfg.o \ qat_crypto.o \ qat_algs.o \ qat_asym_algs.o \ + qat_algs_send.o \ qat_uclo.o \ qat_hal.o diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c index b9228f3a26de..f93218dd6de3 100644 --- a/drivers/crypto/qat/qat_common/qat_algs.c +++ b/drivers/crypto/qat/qat_common/qat_algs.c @@ -17,7 +17,7 @@ #include #include #include "adf_accel_devices.h" -#include "adf_transport.h" +#include "qat_algs_send.h" #include "adf_common_drv.h" #include "qat_crypto.h" #include "icp_qat_hw.h" @@ -939,6 +939,17 @@ void qat_alg_callback(void *resp) qat_req->cb(qat_resp, qat_req); } +static int qat_alg_send_sym_message(struct qat_crypto_request *qat_req, + struct qat_crypto_instance *inst) +{ + struct qat_alg_req req; + + req.fw_req = (u32 *)&qat_req->req; + req.tx_ring = inst->sym_tx; + + return qat_alg_send_message(&req); +} + static int qat_alg_aead_dec(struct aead_request *areq) { struct crypto_aead *aead_tfm = crypto_aead_reqtfm(areq); @@ -949,7 +960,7 @@ static int qat_alg_aead_dec(struct aead_request *areq) struct icp_qat_fw_la_auth_req_params *auth_param; struct icp_qat_fw_la_bulk_req *msg; int digst_size = crypto_aead_authsize(aead_tfm); - int ret, ctr = 0; + int ret; u32 cipher_len; cipher_len = areq->cryptlen - digst_size; @@ -975,15 +986,12 @@ static int qat_alg_aead_dec(struct aead_request *areq) auth_param = (void *)((u8 *)cipher_param + sizeof(*cipher_param)); auth_param->auth_off = 0; auth_param->auth_len = areq->assoclen + cipher_param->cipher_length; - do { - ret = adf_send_message(ctx->inst->sym_tx, (u32 *)msg); - } while (ret == -EAGAIN && ctr++ < 10); - if (ret == -EAGAIN) { + ret = qat_alg_send_sym_message(qat_req, ctx->inst); + if (ret == -ENOSPC) qat_alg_free_bufl(ctx->inst, qat_req); - return -EBUSY; - } - return -EINPROGRESS; + + return ret; } static int qat_alg_aead_enc(struct aead_request *areq) @@ -996,7 +1004,7 @@ static int qat_alg_aead_enc(struct aead_request *areq) struct icp_qat_fw_la_auth_req_params *auth_param; struct icp_qat_fw_la_bulk_req *msg; u8 *iv = areq->iv; - int ret, ctr = 0; + int ret; if (areq->cryptlen % AES_BLOCK_SIZE != 0) return -EINVAL; @@ -1023,15 +1031,11 @@ static int qat_alg_aead_enc(struct aead_request *areq) auth_param->auth_off = 0; auth_param->auth_len = areq->assoclen + areq->cryptlen; - do { - ret = adf_send_message(ctx->inst->sym_tx, (u32 *)msg); - } while (ret == -EAGAIN && ctr++ < 10); - - if (ret == -EAGAIN) { + ret = qat_alg_send_sym_message(qat_req, ctx->inst); + if (ret == -ENOSPC) qat_alg_free_bufl(ctx->inst, qat_req); - return -EBUSY; - } - return -EINPROGRESS; + + return ret; } static int qat_alg_skcipher_rekey(struct 
qat_alg_skcipher_ctx *ctx, @@ -1184,7 +1188,7 @@ static int qat_alg_skcipher_encrypt(struct skcipher_request *req) struct qat_crypto_request *qat_req = skcipher_request_ctx(req); struct icp_qat_fw_la_cipher_req_params *cipher_param; struct icp_qat_fw_la_bulk_req *msg; - int ret, ctr = 0; + int ret; if (req->cryptlen == 0) return 0; @@ -1208,15 +1212,11 @@ static int qat_alg_skcipher_encrypt(struct skcipher_request *req) qat_alg_set_req_iv(qat_req); - do { - ret = adf_send_message(ctx->inst->sym_tx, (u32 *)msg); - } while (ret == -EAGAIN && ctr++ < 10); - - if (ret == -EAGAIN) { + ret = qat_alg_send_sym_message(qat_req, ctx->inst); + if (ret == -ENOSPC) qat_alg_free_bufl(ctx->inst, qat_req); - return -EBUSY; - } - return -EINPROGRESS; + + return ret; } static int qat_alg_skcipher_blk_encrypt(struct skcipher_request *req) @@ -1253,7 +1253,7 @@ static int qat_alg_skcipher_decrypt(struct skcipher_request *req) struct qat_crypto_request *qat_req = skcipher_request_ctx(req); struct icp_qat_fw_la_cipher_req_params *cipher_param; struct icp_qat_fw_la_bulk_req *msg; - int ret, ctr = 0; + int ret; if (req->cryptlen == 0) return 0; @@ -1278,15 +1278,11 @@ static int qat_alg_skcipher_decrypt(struct skcipher_request *req) qat_alg_set_req_iv(qat_req); qat_alg_update_iv(qat_req); - do { - ret = adf_send_message(ctx->inst->sym_tx, (u32 *)msg); - } while (ret == -EAGAIN && ctr++ < 10); - - if (ret == -EAGAIN) { + ret = qat_alg_send_sym_message(qat_req, ctx->inst); + if (ret == -ENOSPC) qat_alg_free_bufl(ctx->inst, qat_req); - return -EBUSY; - } - return -EINPROGRESS; + + return ret; } static int qat_alg_skcipher_blk_decrypt(struct skcipher_request *req) diff --git a/drivers/crypto/qat/qat_common/qat_algs_send.c b/drivers/crypto/qat/qat_common/qat_algs_send.c new file mode 100644 index 000000000000..78f1bb8c26c0 --- /dev/null +++ b/drivers/crypto/qat/qat_common/qat_algs_send.c @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) +/* Copyright(c) 2022 Intel Corporation */ +#include "adf_transport.h" +#include "qat_algs_send.h" +#include "qat_crypto.h" + +#define ADF_MAX_RETRIES 20 + +int qat_alg_send_message(struct qat_alg_req *req) +{ + int ret = 0, ctr = 0; + + do { + ret = adf_send_message(req->tx_ring, req->fw_req); + } while (ret == -EAGAIN && ctr++ < ADF_MAX_RETRIES); + + if (ret == -EAGAIN) + return -ENOSPC; + + return -EINPROGRESS; +} diff --git a/drivers/crypto/qat/qat_common/qat_algs_send.h b/drivers/crypto/qat/qat_common/qat_algs_send.h new file mode 100644 index 000000000000..3fa685d0c293 --- /dev/null +++ b/drivers/crypto/qat/qat_common/qat_algs_send.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only) */ +/* Copyright(c) 2022 Intel Corporation */ +#ifndef QAT_ALGS_SEND_H +#define QAT_ALGS_SEND_H + +#include "qat_crypto.h" + +int qat_alg_send_message(struct qat_alg_req *req); + +#endif diff --git a/drivers/crypto/qat/qat_common/qat_asym_algs.c b/drivers/crypto/qat/qat_common/qat_asym_algs.c index b0b78445418b..a3fdbcc08772 100644 --- a/drivers/crypto/qat/qat_common/qat_asym_algs.c +++ b/drivers/crypto/qat/qat_common/qat_asym_algs.c @@ -12,6 +12,7 @@ #include #include "icp_qat_fw_pke.h" #include "adf_accel_devices.h" +#include "qat_algs_send.h" #include "adf_transport.h" #include "adf_common_drv.h" #include "qat_crypto.h" @@ -137,6 +138,17 @@ struct qat_asym_request { void (*cb)(struct icp_qat_fw_pke_resp *resp); } __aligned(64); +static int qat_alg_send_asym_message(struct qat_asym_request *qat_req, + struct qat_crypto_instance *inst) +{ + struct 
qat_alg_req req; + + req.fw_req = (u32 *)&qat_req->req; + req.tx_ring = inst->pke_tx; + + return qat_alg_send_message(&req); +} + static void qat_dh_cb(struct icp_qat_fw_pke_resp *resp) { struct qat_asym_request *req = (void *)(__force long)resp->opaque; @@ -213,7 +225,7 @@ static int qat_dh_compute_value(struct kpp_request *req) struct qat_asym_request *qat_req = PTR_ALIGN(kpp_request_ctx(req), 64); struct icp_qat_fw_pke_request *msg = &qat_req->req; - int ret, ctr = 0; + int ret; int n_input_params = 0; if (unlikely(!ctx->xa)) @@ -338,13 +350,13 @@ static int qat_dh_compute_value(struct kpp_request *req) msg->input_param_count = n_input_params; msg->output_param_count = 1; - do { - ret = adf_send_message(ctx->inst->pke_tx, (u32 *)msg); - } while (ret == -EBUSY && ctr++ < 100); + ret = qat_alg_send_asym_message(qat_req, ctx->inst); + if (ret == -ENOSPC) + goto unmap_all; - if (!ret) - return -EINPROGRESS; + return ret; +unmap_all: if (!dma_mapping_error(dev, qat_req->phy_out)) dma_unmap_single(dev, qat_req->phy_out, sizeof(struct qat_dh_output_params), @@ -642,7 +654,7 @@ static int qat_rsa_enc(struct akcipher_request *req) struct qat_asym_request *qat_req = PTR_ALIGN(akcipher_request_ctx(req), 64); struct icp_qat_fw_pke_request *msg = &qat_req->req; - int ret, ctr = 0; + int ret; if (unlikely(!ctx->n || !ctx->e)) return -EINVAL; @@ -732,13 +744,14 @@ static int qat_rsa_enc(struct akcipher_request *req) msg->pke_mid.opaque = (u64)(__force long)qat_req; msg->input_param_count = 3; msg->output_param_count = 1; - do { - ret = adf_send_message(ctx->inst->pke_tx, (u32 *)msg); - } while (ret == -EBUSY && ctr++ < 100); - if (!ret) - return -EINPROGRESS; + ret = qat_alg_send_asym_message(qat_req, ctx->inst); + if (ret == -ENOSPC) + goto unmap_all; + + return ret; +unmap_all: if (!dma_mapping_error(dev, qat_req->phy_out)) dma_unmap_single(dev, qat_req->phy_out, sizeof(struct qat_rsa_output_params), @@ -776,7 +789,7 @@ static int qat_rsa_dec(struct akcipher_request *req) struct qat_asym_request *qat_req = PTR_ALIGN(akcipher_request_ctx(req), 64); struct icp_qat_fw_pke_request *msg = &qat_req->req; - int ret, ctr = 0; + int ret; if (unlikely(!ctx->n || !ctx->d)) return -EINVAL; @@ -884,13 +897,14 @@ static int qat_rsa_dec(struct akcipher_request *req) msg->input_param_count = 3; msg->output_param_count = 1; - do { - ret = adf_send_message(ctx->inst->pke_tx, (u32 *)msg); - } while (ret == -EBUSY && ctr++ < 100); - if (!ret) - return -EINPROGRESS; + ret = qat_alg_send_asym_message(qat_req, ctx->inst); + if (ret == -ENOSPC) + goto unmap_all; + + return ret; +unmap_all: if (!dma_mapping_error(dev, qat_req->phy_out)) dma_unmap_single(dev, qat_req->phy_out, sizeof(struct qat_rsa_output_params), diff --git a/drivers/crypto/qat/qat_common/qat_crypto.h b/drivers/crypto/qat/qat_common/qat_crypto.h index 0928f159ea99..0dcba6fc358c 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.h +++ b/drivers/crypto/qat/qat_common/qat_crypto.h @@ -9,6 +9,11 @@ #include "adf_accel_devices.h" #include "icp_qat_fw_la.h" +struct qat_alg_req { + u32 *fw_req; + struct adf_etr_ring_data *tx_ring; +}; + struct qat_crypto_instance { struct adf_etr_ring_data *sym_tx; struct adf_etr_ring_data *sym_rx; From patchwork Thu Mar 31 15:36:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 555328 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
vger.kernel.org
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, marco.chiappero@intel.com, Giovanni Cabiddu, stable@vger.kernel.org, Mikulas Patocka, Kyle Sanderson
Subject: [PATCH v2 3/4] crypto: qat - add backlog mechanism
Date: Thu, 31 Mar 2022 16:36:51 +0100
Message-Id: <20220331153652.37020-4-giovanni.cabiddu@intel.com>
In-Reply-To: <20220331153652.37020-1-giovanni.cabiddu@intel.com>
References: <20220331153652.37020-1-giovanni.cabiddu@intel.com>
X-Mailing-List: linux-crypto@vger.kernel.org

The implementations of the crypto algorithms (aead, skcipher, etc.) in the QAT driver do not properly support requests with the CRYPTO_TFM_REQ_MAY_BACKLOG flag set. If the HW queue is full, the driver returns -EBUSY but does not enqueue the request. This can result in applications like dm-crypt waiting indefinitely for the completion of a request that was never submitted to the hardware.

Fix this by adding a software backlog queue: if the ring buffer is more than eighty percent full, then the request is enqueued to a backlog list and the error code -EBUSY is returned to the caller. Requests in the backlog queue are resubmitted at a later time, in the context of the callback of a previously submitted request. The request for which -EBUSY is returned is then marked as -EINPROGRESS once submitted to the HW queues.
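Before the details, a standalone userspace sketch (not driver code) of this policy: requests are backlogged when the ring is above the threshold or when something is already backlogged, and the backlog is drained in FIFO order from the completion path. ring_nearly_full(), send_maybacklog() and on_completion() are simplified stand-ins for adf_ring_nearly_full(), qat_alg_send_message_maybacklog() and qat_alg_callback(); the locked list is reduced to a plain array and locking is omitted.

#include <errno.h>
#include <stdio.h>

#define RING_SLOTS	10
#define THRESHOLD	(RING_SLOTS * 80 / 100)	/* ADF_MAX_RING_THRESHOLD */
#define BACKLOG_SLOTS	32

static int inflights;			/* requests currently on the HW ring */
static int backlog[BACKLOG_SLOTS];	/* stand-in for qat_instance_backlog */
static int bl_head, bl_tail;

static int ring_nearly_full(void)
{
	return inflights > THRESHOLD;
}

static int send_maybacklog(int req)
{
	/* queue if anything is already backlogged, to preserve ordering,
	 * or if the ring is nearly full
	 */
	if (bl_head != bl_tail || ring_nearly_full()) {
		backlog[bl_tail++ % BACKLOG_SLOTS] = req;
		return -EBUSY;	/* request accepted; caller may wait */
	}
	inflights++;		/* models a successful adf_send_message() */
	return -EINPROGRESS;
}

static void on_completion(void)
{
	inflights--;
	/* drain in FIFO order while the ring has room; the real code
	 * completes each drained request with -EINPROGRESS
	 */
	while (bl_head != bl_tail && !ring_nearly_full()) {
		printf("resubmitting backlogged req %d\n",
		       backlog[bl_head++ % BACKLOG_SLOTS]);
		inflights++;
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 12; i++)
		if (send_maybacklog(i) == -EBUSY)
			printf("req %d backlogged (-EBUSY)\n", i);
	on_completion();
	printf("inflights=%d backlogged=%d\n", inflights, bl_tail - bl_head);
	return 0;
}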
The submission loop inside the function qat_alg_send_message() has been modified to decide which submission policy to use based on the request flags. If the request does not have the CRYPTO_TFM_REQ_MAY_BACKLOG set, the previous behaviour has been preserved. Based on a patch by Vishnu Das Ramachandran Cc: stable@vger.kernel.org Fixes: d370cec32194 ("crypto: qat - Intel(R) QAT crypto interface") Reported-by: Mikulas Patocka Reported-by: Kyle Sanderson Signed-off-by: Giovanni Cabiddu Reviewed-by: Marco Chiappero --- drivers/crypto/qat/qat_common/adf_transport.c | 11 ++++ drivers/crypto/qat/qat_common/adf_transport.h | 1 + .../qat/qat_common/adf_transport_internal.h | 1 + drivers/crypto/qat/qat_common/qat_algs.c | 23 ++++--- drivers/crypto/qat/qat_common/qat_algs_send.c | 65 ++++++++++++++++++- drivers/crypto/qat/qat_common/qat_algs_send.h | 1 + drivers/crypto/qat/qat_common/qat_asym_algs.c | 22 ++++--- drivers/crypto/qat/qat_common/qat_crypto.c | 3 + drivers/crypto/qat/qat_common/qat_crypto.h | 10 +++ 9 files changed, 119 insertions(+), 18 deletions(-) diff --git a/drivers/crypto/qat/qat_common/adf_transport.c b/drivers/crypto/qat/qat_common/adf_transport.c index 8ba28409fb74..630d0483c4e0 100644 --- a/drivers/crypto/qat/qat_common/adf_transport.c +++ b/drivers/crypto/qat/qat_common/adf_transport.c @@ -8,6 +8,9 @@ #include "adf_cfg.h" #include "adf_common_drv.h" +#define ADF_MAX_RING_THRESHOLD 80 +#define ADF_PERCENT(tot, percent) (((tot) * (percent)) / 100) + static inline u32 adf_modulo(u32 data, u32 shift) { u32 div = data >> shift; @@ -77,6 +80,11 @@ static void adf_disable_ring_irq(struct adf_etr_bank_data *bank, u32 ring) bank->irq_mask); } +bool adf_ring_nearly_full(struct adf_etr_ring_data *ring) +{ + return atomic_read(ring->inflights) > ring->threshold; +} + int adf_send_message(struct adf_etr_ring_data *ring, u32 *msg) { struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(ring->bank->accel_dev); @@ -217,6 +225,7 @@ int adf_create_ring(struct adf_accel_dev *accel_dev, const char *section, struct adf_etr_bank_data *bank; struct adf_etr_ring_data *ring; char val[ADF_CFG_MAX_VAL_LEN_IN_BYTES]; + int max_inflights; u32 ring_num; int ret; @@ -263,6 +272,8 @@ int adf_create_ring(struct adf_accel_dev *accel_dev, const char *section, ring->ring_size = adf_verify_ring_size(msg_size, num_msgs); ring->head = 0; ring->tail = 0; + max_inflights = ADF_MAX_INFLIGHTS(ring->ring_size, ring->msg_size); + ring->threshold = ADF_PERCENT(max_inflights, ADF_MAX_RING_THRESHOLD); atomic_set(ring->inflights, 0); ret = adf_init_ring(ring); if (ret) diff --git a/drivers/crypto/qat/qat_common/adf_transport.h b/drivers/crypto/qat/qat_common/adf_transport.h index 2c95f1697c76..e6ef6f9b7691 100644 --- a/drivers/crypto/qat/qat_common/adf_transport.h +++ b/drivers/crypto/qat/qat_common/adf_transport.h @@ -14,6 +14,7 @@ int adf_create_ring(struct adf_accel_dev *accel_dev, const char *section, const char *ring_name, adf_callback_fn callback, int poll_mode, struct adf_etr_ring_data **ring_ptr); +bool adf_ring_nearly_full(struct adf_etr_ring_data *ring); int adf_send_message(struct adf_etr_ring_data *ring, u32 *msg); void adf_remove_ring(struct adf_etr_ring_data *ring); #endif diff --git a/drivers/crypto/qat/qat_common/adf_transport_internal.h b/drivers/crypto/qat/qat_common/adf_transport_internal.h index 501bcf0f1809..8b2c92ba7ca1 100644 --- a/drivers/crypto/qat/qat_common/adf_transport_internal.h +++ b/drivers/crypto/qat/qat_common/adf_transport_internal.h @@ -22,6 +22,7 @@ struct adf_etr_ring_data { spinlock_t lock; /* 
protects ring data struct */ u16 head; u16 tail; + u32 threshold; u8 ring_number; u8 ring_size; u8 msg_size; diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c index f93218dd6de3..7f1bdab9706b 100644 --- a/drivers/crypto/qat/qat_common/qat_algs.c +++ b/drivers/crypto/qat/qat_common/qat_algs.c @@ -937,17 +937,22 @@ void qat_alg_callback(void *resp) (void *)(__force long)qat_resp->opaque_data; qat_req->cb(qat_resp, qat_req); + + qat_alg_send_backlog(qat_req->alg_req.backlog); } static int qat_alg_send_sym_message(struct qat_crypto_request *qat_req, - struct qat_crypto_instance *inst) + struct qat_crypto_instance *inst, + struct crypto_async_request *base) { - struct qat_alg_req req; + struct qat_alg_req *alg_req = &qat_req->alg_req; - req.fw_req = (u32 *)&qat_req->req; - req.tx_ring = inst->sym_tx; + alg_req->fw_req = (u32 *)&qat_req->req; + alg_req->tx_ring = inst->sym_tx; + alg_req->base = base; + alg_req->backlog = &inst->backlog; - return qat_alg_send_message(&req); + return qat_alg_send_message(alg_req); } static int qat_alg_aead_dec(struct aead_request *areq) @@ -987,7 +992,7 @@ static int qat_alg_aead_dec(struct aead_request *areq) auth_param->auth_off = 0; auth_param->auth_len = areq->assoclen + cipher_param->cipher_length; - ret = qat_alg_send_sym_message(qat_req, ctx->inst); + ret = qat_alg_send_sym_message(qat_req, ctx->inst, &areq->base); if (ret == -ENOSPC) qat_alg_free_bufl(ctx->inst, qat_req); @@ -1031,7 +1036,7 @@ static int qat_alg_aead_enc(struct aead_request *areq) auth_param->auth_off = 0; auth_param->auth_len = areq->assoclen + areq->cryptlen; - ret = qat_alg_send_sym_message(qat_req, ctx->inst); + ret = qat_alg_send_sym_message(qat_req, ctx->inst, &areq->base); if (ret == -ENOSPC) qat_alg_free_bufl(ctx->inst, qat_req); @@ -1212,7 +1217,7 @@ static int qat_alg_skcipher_encrypt(struct skcipher_request *req) qat_alg_set_req_iv(qat_req); - ret = qat_alg_send_sym_message(qat_req, ctx->inst); + ret = qat_alg_send_sym_message(qat_req, ctx->inst, &req->base); if (ret == -ENOSPC) qat_alg_free_bufl(ctx->inst, qat_req); @@ -1278,7 +1283,7 @@ static int qat_alg_skcipher_decrypt(struct skcipher_request *req) qat_alg_set_req_iv(qat_req); qat_alg_update_iv(qat_req); - ret = qat_alg_send_sym_message(qat_req, ctx->inst); + ret = qat_alg_send_sym_message(qat_req, ctx->inst, &req->base); if (ret == -ENOSPC) qat_alg_free_bufl(ctx->inst, qat_req); diff --git a/drivers/crypto/qat/qat_common/qat_algs_send.c b/drivers/crypto/qat/qat_common/qat_algs_send.c index 78f1bb8c26c0..c2f3d920c94d 100644 --- a/drivers/crypto/qat/qat_common/qat_algs_send.c +++ b/drivers/crypto/qat/qat_common/qat_algs_send.c @@ -6,7 +6,7 @@ #define ADF_MAX_RETRIES 20 -int qat_alg_send_message(struct qat_alg_req *req) +static int qat_alg_send_message_retry(struct qat_alg_req *req) { int ret = 0, ctr = 0; @@ -19,3 +19,66 @@ int qat_alg_send_message(struct qat_alg_req *req) return -EINPROGRESS; } + +void qat_alg_send_backlog(struct qat_instance_backlog *backlog) +{ + struct qat_alg_req *req, *tmp; + + spin_lock_bh(&backlog->lock); + list_for_each_entry_safe(req, tmp, &backlog->list, list) { + if (adf_send_message(req->tx_ring, req->fw_req)) { + /* The HW ring is full. Do nothing. + * qat_alg_send_backlog() will be invoked again by + * another callback. 
+ */ + break; + } + list_del(&req->list); + req->base->complete(req->base, -EINPROGRESS); + } + spin_unlock_bh(&backlog->lock); +} + +static void qat_alg_backlog_req(struct qat_alg_req *req, + struct qat_instance_backlog *backlog) +{ + spin_lock_bh(&backlog->lock); + list_add_tail(&req->list, &backlog->list); + spin_unlock_bh(&backlog->lock); +} + +static int qat_alg_send_message_maybacklog(struct qat_alg_req *req) +{ + struct qat_instance_backlog *backlog = req->backlog; + struct adf_etr_ring_data *tx_ring = req->tx_ring; + u32 *fw_req = req->fw_req; + + /* If any request is already backlogged, then add to backlog list */ + if (!list_empty(&backlog->list)) + goto enqueue; + + /* If ring is nearly full, then add to backlog list */ + if (adf_ring_nearly_full(tx_ring)) + goto enqueue; + + /* If adding request to HW ring fails, then add to backlog list */ + if (adf_send_message(tx_ring, fw_req)) + goto enqueue; + + return -EINPROGRESS; + +enqueue: + qat_alg_backlog_req(req, backlog); + + return -EBUSY; +} + +int qat_alg_send_message(struct qat_alg_req *req) +{ + u32 flags = req->base->flags; + + if (flags & CRYPTO_TFM_REQ_MAY_BACKLOG) + return qat_alg_send_message_maybacklog(req); + else + return qat_alg_send_message_retry(req); +} diff --git a/drivers/crypto/qat/qat_common/qat_algs_send.h b/drivers/crypto/qat/qat_common/qat_algs_send.h index 3fa685d0c293..5ce9f4f69d8f 100644 --- a/drivers/crypto/qat/qat_common/qat_algs_send.h +++ b/drivers/crypto/qat/qat_common/qat_algs_send.h @@ -6,5 +6,6 @@ #include "qat_crypto.h" int qat_alg_send_message(struct qat_alg_req *req); +void qat_alg_send_backlog(struct qat_instance_backlog *backlog); #endif diff --git a/drivers/crypto/qat/qat_common/qat_asym_algs.c b/drivers/crypto/qat/qat_common/qat_asym_algs.c index a3fdbcc08772..b97a0e189db6 100644 --- a/drivers/crypto/qat/qat_common/qat_asym_algs.c +++ b/drivers/crypto/qat/qat_common/qat_asym_algs.c @@ -136,17 +136,21 @@ struct qat_asym_request { } areq; int err; void (*cb)(struct icp_qat_fw_pke_resp *resp); + struct qat_alg_req alg_req; } __aligned(64); static int qat_alg_send_asym_message(struct qat_asym_request *qat_req, - struct qat_crypto_instance *inst) + struct qat_crypto_instance *inst, + struct crypto_async_request *base) { - struct qat_alg_req req; + struct qat_alg_req *alg_req = &qat_req->alg_req; - req.fw_req = (u32 *)&qat_req->req; - req.tx_ring = inst->pke_tx; + alg_req->fw_req = (u32 *)&qat_req->req; + alg_req->tx_ring = inst->pke_tx; + alg_req->base = base; + alg_req->backlog = &inst->backlog; - return qat_alg_send_message(&req); + return qat_alg_send_message(alg_req); } static void qat_dh_cb(struct icp_qat_fw_pke_resp *resp) @@ -350,7 +354,7 @@ static int qat_dh_compute_value(struct kpp_request *req) msg->input_param_count = n_input_params; msg->output_param_count = 1; - ret = qat_alg_send_asym_message(qat_req, ctx->inst); + ret = qat_alg_send_asym_message(qat_req, inst, &req->base); if (ret == -ENOSPC) goto unmap_all; @@ -556,6 +560,8 @@ void qat_alg_asym_callback(void *_resp) struct qat_asym_request *areq = (void *)(__force long)resp->opaque; areq->cb(resp); + + qat_alg_send_backlog(areq->alg_req.backlog); } #define PKE_RSA_EP_512 0x1c161b21 @@ -745,7 +751,7 @@ static int qat_rsa_enc(struct akcipher_request *req) msg->input_param_count = 3; msg->output_param_count = 1; - ret = qat_alg_send_asym_message(qat_req, ctx->inst); + ret = qat_alg_send_asym_message(qat_req, inst, &req->base); if (ret == -ENOSPC) goto unmap_all; @@ -898,7 +904,7 @@ static int qat_rsa_dec(struct akcipher_request 
*req) msg->output_param_count = 1; - ret = qat_alg_send_asym_message(qat_req, ctx->inst); + ret = qat_alg_send_asym_message(qat_req, inst, &req->base); if (ret == -ENOSPC) goto unmap_all; diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c index 67c9588e89df..80d905ed102e 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.c +++ b/drivers/crypto/qat/qat_common/qat_crypto.c @@ -353,6 +353,9 @@ static int qat_crypto_create_instances(struct adf_accel_dev *accel_dev) &inst->pke_rx); if (ret) goto err; + + INIT_LIST_HEAD(&inst->backlog.list); + spin_lock_init(&inst->backlog.lock); } return 0; err: diff --git a/drivers/crypto/qat/qat_common/qat_crypto.h b/drivers/crypto/qat/qat_common/qat_crypto.h index 0dcba6fc358c..245b6d9a3650 100644 --- a/drivers/crypto/qat/qat_common/qat_crypto.h +++ b/drivers/crypto/qat/qat_common/qat_crypto.h @@ -9,9 +9,17 @@ #include "adf_accel_devices.h" #include "icp_qat_fw_la.h" +struct qat_instance_backlog { + struct list_head list; + spinlock_t lock; /* protects backlog list */ +}; + struct qat_alg_req { u32 *fw_req; struct adf_etr_ring_data *tx_ring; + struct crypto_async_request *base; + struct list_head list; + struct qat_instance_backlog *backlog; }; struct qat_crypto_instance { @@ -24,6 +32,7 @@ struct qat_crypto_instance { unsigned long state; int id; atomic_t refctr; + struct qat_instance_backlog backlog; }; #define QAT_MAX_BUFF_DESC 4 @@ -82,6 +91,7 @@ struct qat_crypto_request { u8 iv[AES_BLOCK_SIZE]; }; bool encryption; + struct qat_alg_req alg_req; }; static inline bool adf_hw_dev_has_crypto(struct adf_accel_dev *accel_dev) From patchwork Thu Mar 31 15:36:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Giovanni Cabiddu X-Patchwork-Id: 555668 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A48B2C433FE for ; Thu, 31 Mar 2022 15:42:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238391AbiCaPoM (ORCPT ); Thu, 31 Mar 2022 11:44:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49430 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239282AbiCaPm3 (ORCPT ); Thu, 31 Mar 2022 11:42:29 -0400 Received: from mga18.intel.com (mga18.intel.com [134.134.136.126]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1AB175D66F; Thu, 31 Mar 2022 08:37:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1648741042; x=1680277042; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0fWKPOzHGnKEYsF/rOLOD1zuA7oHxus2QAWrWbDy4m8=; b=n6A5fZgfg6TDC7ng3nnoqZqqJUYShidFyXWX3vpHC7pwRnlyU0SyN5l8 ynWoge2NWgrADFR+aAYCduKw6MPA4wSK0WSaaikyvyBEGd6ubMQO3Rpng L2V4SslrFWBZiNQ78ppsi1ID2uz+GqNT4N9atv0Sq0I7jXjRRq7xgSTyg rgOF334r5mFxyb1JCrjyXNUAyogkOa/1xvJPVtgw4Cj3MmrzhhQGyX7e2 imunI9O3GFl+4+Ic/HjM8WzwgCgAT8oxC6fjNQgIsYiW0hMRf0ha17EWd JFM+ZU82oXC0tHcK4kFAlxSDNJ0fjGTNwW2Nooo6kIu97+6RVTWEjKfCh w==; X-IronPort-AV: E=McAfee;i="6200,9189,10302"; a="242022751" X-IronPort-AV: E=Sophos;i="5.90,225,1643702400"; d="scan'208";a="242022751" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 Mar 2022 08:37:04 -0700 
From: Giovanni Cabiddu
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, marco.chiappero@intel.com, Giovanni Cabiddu, stable@vger.kernel.org
Subject: [PATCH v2 4/4] crypto: qat - re-enable registration of algorithms
Date: Thu, 31 Mar 2022 16:36:52 +0100
Message-Id: <20220331153652.37020-5-giovanni.cabiddu@intel.com>
In-Reply-To: <20220331153652.37020-1-giovanni.cabiddu@intel.com>
References: <20220331153652.37020-1-giovanni.cabiddu@intel.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Re-enable the registration of algorithms after fixes to (1) use pre-allocated buffers in the datapath and (2) support the CRYPTO_TFM_REQ_MAY_BACKLOG flag.

This reverts commit 8893d27ffcaf6ec6267038a177cb87bcde4dd3de.

Cc: stable@vger.kernel.org
Signed-off-by: Giovanni Cabiddu
Reviewed-by: Marco Chiappero
---
 drivers/crypto/qat/qat_4xxx/adf_drv.c      | 7 -------
 drivers/crypto/qat/qat_common/qat_crypto.c | 7 -------
 2 files changed, 14 deletions(-)

diff --git a/drivers/crypto/qat/qat_4xxx/adf_drv.c b/drivers/crypto/qat/qat_4xxx/adf_drv.c
index fa4c350c1bf9..a6c78b9c730b 100644
--- a/drivers/crypto/qat/qat_4xxx/adf_drv.c
+++ b/drivers/crypto/qat/qat_4xxx/adf_drv.c
@@ -75,13 +75,6 @@ static int adf_crypto_dev_config(struct adf_accel_dev *accel_dev)
 	if (ret)
 		goto err;
 
-	/* Temporarily set the number of crypto instances to zero to avoid
-	 * registering the crypto algorithms.
-	 * This will be removed when the algorithms will support the
-	 * CRYPTO_TFM_REQ_MAY_BACKLOG flag
-	 */
-	instances = 0;
-
 	for (i = 0; i < instances; i++) {
 		val = i;
 		bank = i * 2;
diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c
index 80d905ed102e..9341d892533a 100644
--- a/drivers/crypto/qat/qat_common/qat_crypto.c
+++ b/drivers/crypto/qat/qat_common/qat_crypto.c
@@ -161,13 +161,6 @@ int qat_crypto_dev_config(struct adf_accel_dev *accel_dev)
 	if (ret)
 		goto err;
 
-	/* Temporarily set the number of crypto instances to zero to avoid
-	 * registering the crypto algorithms.
-	 * This will be removed when the algorithms will support the
-	 * CRYPTO_TFM_REQ_MAY_BACKLOG flag
-	 */
-	instances = 0;
-
 	for (i = 0; i < instances; i++) {
 		val = i;
 		snprintf(key, sizeof(key), ADF_CY "%d" ADF_RING_ASYM_BANK_NUM, i);