From patchwork Wed Apr 6 08:11:38 2022
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 558359
From: Gilad Ben-Yossef
To: Gilad Ben-Yossef, Herbert Xu, "David S. Miller"
Cc: Cristian Marussi, Dung Nguyen, Jing Dan,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] crypto: ccree: rearrange init calls to avoid race
Date: Wed, 6 Apr 2022 11:11:38 +0300
Message-Id: <20220406081139.1963615-2-gilad@benyossef.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220406081139.1963615-1-gilad@benyossef.com>
References: <20220406081139.1963615-1-gilad@benyossef.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Rearrange the init calls to avoid a rare race condition in which the
cipher algs could be registered and used while we are still initializing
the hash code, which uses the HW without taking the proper lock.
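[Editor's note: a minimal sketch of the ordering rule this patch enforces.
The example_probe() function and all my_* names below are hypothetical
illustrations, not ccree code. The point is that once
crypto_register_skciphers() returns, requests may arrive concurrently on
another CPU, so any initialization that touches the HW without the
request-manager lock must complete before the algs are published.]

#include <linux/device.h>
#include <crypto/internal/skcipher.h>

/* Hypothetical driver state, for this sketch only. */
struct my_drvdata {
	struct skcipher_alg *ciphers;
	int num_ciphers;
};

/* Hypothetical helpers standing in for the unlocked hash init path. */
int my_hash_init_digests(struct my_drvdata *drvdata);
void my_hash_fini_digests(struct my_drvdata *drvdata);

static int example_probe(struct my_drvdata *drvdata)
{
	int rc;

	/* Unlocked HW access: must be finished before registration. */
	rc = my_hash_init_digests(drvdata);
	if (rc)
		return rc;

	/* Publish the algs; requests can race in from here on. */
	rc = crypto_register_skciphers(drvdata->ciphers, drvdata->num_ciphers);
	if (rc)
		my_hash_fini_digests(drvdata);

	return rc;
}

[The essential point is only the relative order: registration last,
unlocked HW setup first, with teardown in the reverse order on failure.]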
Signed-off-by: Gilad Ben-Yossef
Reported-by: Dung Nguyen
Tested-by: Jing Dan
Tested-by: Dung Nguyen
Fixes: 63893811b0fc ("crypto: ccree - add ahash support")
---
 drivers/crypto/ccree/cc_driver.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
index 790fa9058a36..7d1bee86d581 100644
--- a/drivers/crypto/ccree/cc_driver.c
+++ b/drivers/crypto/ccree/cc_driver.c
@@ -529,24 +529,26 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto post_req_mgr_err;
 	}
 
-	/* Allocate crypto algs */
-	rc = cc_cipher_alloc(new_drvdata);
+	/* hash must be allocated first due to use of send_request_init()
+	 * and dependency of AEAD on it
+	 */
+	rc = cc_hash_alloc(new_drvdata);
 	if (rc) {
-		dev_err(dev, "cc_cipher_alloc failed\n");
+		dev_err(dev, "cc_hash_alloc failed\n");
 		goto post_buf_mgr_err;
 	}
 
-	/* hash must be allocated before aead since hash exports APIs */
-	rc = cc_hash_alloc(new_drvdata);
+	/* Allocate crypto algs */
+	rc = cc_cipher_alloc(new_drvdata);
 	if (rc) {
-		dev_err(dev, "cc_hash_alloc failed\n");
-		goto post_cipher_err;
+		dev_err(dev, "cc_cipher_alloc failed\n");
+		goto post_hash_err;
 	}
 
 	rc = cc_aead_alloc(new_drvdata);
 	if (rc) {
 		dev_err(dev, "cc_aead_alloc failed\n");
-		goto post_hash_err;
+		goto post_cipher_err;
 	}
 
 	/* If we got here and FIPS mode is enabled
@@ -558,10 +560,10 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	pm_runtime_put(dev);
 	return 0;
 
-post_hash_err:
-	cc_hash_free(new_drvdata);
 post_cipher_err:
 	cc_cipher_free(new_drvdata);
+post_hash_err:
+	cc_hash_free(new_drvdata);
 post_buf_mgr_err:
 	cc_buffer_mgr_fini(new_drvdata);
 post_req_mgr_err:
@@ -593,8 +595,8 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 		(struct cc_drvdata *)platform_get_drvdata(plat_dev);
 
 	cc_aead_free(drvdata);
-	cc_hash_free(drvdata);
 	cc_cipher_free(drvdata);
+	cc_hash_free(drvdata);
 	cc_buffer_mgr_fini(drvdata);
 	cc_req_mgr_fini(drvdata);
 	cc_fips_fini(drvdata);

From patchwork Wed Apr 6 08:11:39 2022
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 558607
From: Gilad Ben-Yossef
To: Gilad Ben-Yossef, Herbert Xu, "David S. Miller"
Miller" Cc: Cristian Marussi , Corentin Labbe , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 2/2] crypto: ccree: use fine grained DMA mapping dir Date: Wed, 6 Apr 2022 11:11:39 +0300 Message-Id: <20220406081139.1963615-3-gilad@benyossef.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220406081139.1963615-1-gilad@benyossef.com> References: <20220406081139.1963615-1-gilad@benyossef.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Use a fine grained specification of DMA mapping directions in certain cases, allowing both a more optimized operation as well as shushing out a harmless, though persky dma-debug warning. Signed-off-by: Gilad Ben-Yossef Reported-by: Corentin Labbe --- drivers/crypto/ccree/cc_buffer_mgr.c | 27 +++++++++++++++------------ 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c index 11e0278c8631..6140e4927322 100644 --- a/drivers/crypto/ccree/cc_buffer_mgr.c +++ b/drivers/crypto/ccree/cc_buffer_mgr.c @@ -356,12 +356,14 @@ void cc_unmap_cipher_request(struct device *dev, void *ctx, req_ctx->mlli_params.mlli_dma_addr); } - dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL); - dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src)); - if (src != dst) { - dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_BIDIRECTIONAL); + dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_TO_DEVICE); + dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_FROM_DEVICE); dev_dbg(dev, "Unmapped req->dst=%pK\n", sg_virt(dst)); + dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src)); + } else { + dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL); + dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src)); } } @@ -377,6 +379,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx, u32 dummy = 0; int rc = 0; u32 mapped_nents = 0; + int src_direction = (src != dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL); req_ctx->dma_buf_type = CC_DMA_BUF_DLLI; mlli_params->curr_pool = NULL; @@ -399,7 +402,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx, } /* Map the src SGL */ - rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents, + rc = cc_map_sg(dev, src, nbytes, src_direction, &req_ctx->in_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents); if (rc) goto cipher_exit; @@ -416,7 +419,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx, } } else { /* Map the dst sg */ - rc = cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL, + rc = cc_map_sg(dev, dst, nbytes, DMA_FROM_DEVICE, &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents); if (rc) @@ -456,6 +459,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req) struct aead_req_ctx *areq_ctx = aead_request_ctx(req); unsigned int hw_iv_size = areq_ctx->hw_iv_size; struct cc_drvdata *drvdata = dev_get_drvdata(dev); + int src_direction = (req->src != req->dst ? 
DMA_TO_DEVICE : DMA_BIDIRECTIONAL); if (areq_ctx->mac_buf_dma_addr) { dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr, @@ -514,13 +518,11 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req) sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents, areq_ctx->assoclen, req->cryptlen); - dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, - DMA_BIDIRECTIONAL); + dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, src_direction); if (req->src != req->dst) { dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n", sg_virt(req->dst)); - dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, - DMA_BIDIRECTIONAL); + dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, DMA_FROM_DEVICE); } if (drvdata->coherent && areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT && @@ -843,7 +845,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata, else size_for_map -= authsize; - rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL, + rc = cc_map_sg(dev, req->dst, size_for_map, DMA_FROM_DEVICE, &areq_ctx->dst.mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes, &dst_mapped_nents); @@ -1056,7 +1058,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req) size_to_map += authsize; } - rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL, + rc = cc_map_sg(dev, req->src, size_to_map, + (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL), &areq_ctx->src.mapped_nents, (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES + LLI_MAX_NUM_OF_DATA_ENTRIES),
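[Editor's note: a minimal, self-contained sketch of the direction-selection
pattern patch 2 applies. The example_map_src_dst() helper is hypothetical,
not driver code; only the dma_map_sg()/dma_unmap_sg() calls and the
DMA_* direction constants are the real kernel DMA API. When src and dst
differ, the device only reads src and only writes dst, so
DMA_TO_DEVICE/DMA_FROM_DEVICE suffice; only the in-place case needs
DMA_BIDIRECTIONAL. Buffers must be unmapped with the same direction they
were mapped with, which is why the driver computes src_direction once.]

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_map_src_dst(struct device *dev,
			       struct scatterlist *src, int src_nents,
			       struct scatterlist *dst, int dst_nents)
{
	/* In-place operation: the device both reads and writes src. */
	if (src == dst)
		return dma_map_sg(dev, src, src_nents,
				  DMA_BIDIRECTIONAL) ? 0 : -ENOMEM;

	/* Distinct buffers: src is device-read-only... */
	if (!dma_map_sg(dev, src, src_nents, DMA_TO_DEVICE))
		return -ENOMEM;
	/* ...and dst is device-write-only. */
	if (!dma_map_sg(dev, dst, dst_nents, DMA_FROM_DEVICE)) {
		/* Unmap with the same direction used for mapping. */
		dma_unmap_sg(dev, src, src_nents, DMA_TO_DEVICE);
		return -ENOMEM;
	}
	return 0;
}

[Using the narrowest correct direction lets non-coherent architectures
skip unneeded cache maintenance and keeps dma-debug from warning about
pages mapped bidirectionally when they are only ever read or written.]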