From patchwork Thu Dec 3 01:35:20 2020
X-Patchwork-Submitter: "Iuliana Prodan (OSS)"
X-Patchwork-Id: 337864
From: "Iuliana Prodan (OSS)"
To: Herbert Xu, Ard Biesheuvel, "David S. Miller", Horia Geanta
Cc: Aymen Sghaier, Silvano Di Ninno, Franck Lenormand,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-imx, Iuliana Prodan
Subject: [PATCH 1/5] crypto: caam/jr - avoid allocating memory at crypto
 request runtime for skcipher
Date: Thu, 3 Dec 2020 03:35:20 +0200
Message-Id: <20201203013524.30495-2-iuliana.prodan@oss.nxp.com>
In-Reply-To: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
References: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

From: Iuliana Prodan

Remove the CRYPTO_ALG_ALLOCATES_MEMORY flag and allocate the memory
needed by the driver, to fulfil a request, within the crypto request
object. The extra size needed for the base extended descriptor, hw
descriptor commands, link tables and IV is computed at frontend driver
(caamalg) initialization and saved in the reqsize field, which
indicates how much memory could be needed per request.

The CRYPTO_ALG_ALLOCATES_MEMORY flag is limited to dm-crypt use-cases,
which seem to need 4 entries maximum. Therefore, in reqsize we allocate
memory for a maximum of 4 entries for src plus 1 for the IV, and the
same for dst, both aligned. If the driver needs more than the 4 entries
maximum, the memory is dynamically allocated at runtime.

Signed-off-by: Iuliana Prodan
---
 drivers/crypto/caam/caamalg.c | 77 +++++++++++++++++++++++++----------
 1 file changed, 55 insertions(+), 22 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 8697ae53b063..ef49781a2545 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -905,6 +905,7 @@ struct aead_edesc {
  * @iv_dma: dma address of iv for checking continuity and link table
  * @sec4_sg_bytes: length of dma mapped sec4_sg space
  * @bklog: stored to determine if the request needs backlog
+ * @free: stored to determine if skcipher_edesc needs to be freed
  * @sec4_sg_dma: bus physical mapped address of h/w link table
  * @sec4_sg: pointer to h/w link table
  * @hw_desc: the h/w job descriptor followed by any referenced link tables
@@ -918,6 +919,7 @@ struct skcipher_edesc {
 	dma_addr_t iv_dma;
 	int sec4_sg_bytes;
 	bool bklog;
+	bool free;
 	dma_addr_t sec4_sg_dma;
 	struct sec4_sg_entry *sec4_sg;
 	u32 hw_desc[];
@@ -1037,7 +1039,8 @@ static void skcipher_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 			     DUMP_PREFIX_ADDRESS, 16, 4, req->dst,
 			     edesc->dst_nents > 1 ? 100 : req->cryptlen, 1);
 
-	kfree(edesc);
+	if (edesc->free)
+		kfree(edesc);
 
 	/*
 	 * If no backlog flag, the completion of the request is done
@@ -1604,7 +1607,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 	dma_addr_t iv_dma = 0;
 	u8 *iv;
 	int ivsize = crypto_skcipher_ivsize(skcipher);
-	int dst_sg_idx, sec4_sg_ents, sec4_sg_bytes;
+	int dst_sg_idx, sec4_sg_ents, sec4_sg_bytes, edesc_size = 0;
 
 	src_nents = sg_nents_for_len(req->src, req->cryptlen);
 	if (unlikely(src_nents < 0)) {
@@ -1675,16 +1678,30 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 
 	sec4_sg_bytes = sec4_sg_ents * sizeof(struct sec4_sg_entry);
 
-	/*
-	 * allocate space for base edesc and hw desc commands, link tables, IV
-	 */
-	edesc = kzalloc(sizeof(*edesc) + desc_bytes + sec4_sg_bytes + ivsize,
-			GFP_DMA | flags);
-	if (!edesc) {
-		dev_err(jrdev, "could not allocate extended descriptor\n");
-		caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
-			   0, 0, 0);
-		return ERR_PTR(-ENOMEM);
+	/* Check if there's enough space for edesc saved in req */
+	edesc_size = sizeof(*edesc) + desc_bytes + sec4_sg_bytes + ivsize;
+	if (edesc_size > (crypto_skcipher_reqsize(skcipher) -
+			  sizeof(struct caam_skcipher_req_ctx))) {
+		/*
+		 * allocate space for base edesc and hw desc commands,
+		 * link tables, IV
+		 */
+		edesc = kzalloc(edesc_size, GFP_DMA | flags);
+		if (!edesc) {
+			caam_unmap(jrdev, req->src, req->dst, src_nents,
+				   dst_nents, 0, 0, 0, 0);
+			return ERR_PTR(-ENOMEM);
+		}
+		edesc->free = true;
+	} else {
+		/*
+		 * get address for base edesc and hw desc commands,
+		 * link tables, IV
+		 */
+		edesc = (struct skcipher_edesc *)((u8 *)rctx +
+			sizeof(struct caam_skcipher_req_ctx));
+		/* clear memory */
+		memset(edesc, 0, sizeof(*edesc));
 	}
 
 	edesc->src_nents = src_nents;
@@ -1706,7 +1723,8 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 			dev_err(jrdev, "unable to map IV\n");
 			caam_unmap(jrdev, req->src, req->dst, src_nents,
 				   dst_nents, 0, 0, 0, 0);
-			kfree(edesc);
+			if (edesc->free)
+				kfree(edesc);
 			return ERR_PTR(-ENOMEM);
 		}
 
@@ -1736,7 +1754,8 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 			dev_err(jrdev, "unable to map S/G table\n");
 			caam_unmap(jrdev, req->src, req->dst, src_nents,
 				   dst_nents, iv_dma, ivsize, 0, 0);
-			kfree(edesc);
+			if (edesc->free)
+				kfree(edesc);
 			return ERR_PTR(-ENOMEM);
 		}
 	}
@@ -1764,11 +1783,11 @@ static int skcipher_do_one_req(struct crypto_engine *engine, void *areq)
 
 	if (ret != -EINPROGRESS) {
 		skcipher_unmap(ctx->jrdev, rctx->edesc, req);
-		kfree(rctx->edesc);
+		if (rctx->edesc->free)
+			kfree(rctx->edesc);
 	} else {
 		ret = 0;
 	}
-
 	return ret;
 }
@@ -1841,7 +1860,8 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt)
 
 	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		skcipher_unmap(jrdev, edesc, req);
-		kfree(edesc);
+		if (edesc->free)
+			kfree(edesc);
 	}
 
 	return ret;
@@ -3393,10 +3413,22 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
 		container_of(alg, typeof(*caam_alg), skcipher);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(tfm);
 	u32 alg_aai = caam_alg->caam.class1_alg_type & OP_ALG_AAI_MASK;
-	int ret = 0;
+	int ret = 0, extra_reqsize = 0;
 
 	ctx->enginectx.op.do_one_request = skcipher_do_one_req;
 
+	/*
+	 * Compute extra space needed for base edesc and
+	 * hw desc commands, link tables, IV
+	 */
+	extra_reqsize = sizeof(struct skcipher_edesc) +
+			DESC_JOB_IO_LEN * CAAM_CMD_SZ + /* hw desc commands */
+			/* link tables for src and dst:
+			 * 4 entries max + 1 for IV, aligned = 8
+			 */
+			(16 * sizeof(struct sec4_sg_entry)) +
+			AES_BLOCK_SIZE; /* ivsize */
+
 	if (alg_aai == OP_ALG_AAI_XTS) {
 		const char *tfm_name = crypto_tfm_alg_name(&tfm->base);
 		struct crypto_skcipher *fallback;
@@ -3411,9 +3443,11 @@ static int caam_cra_init(struct crypto_skcipher *tfm)
 
 		ctx->fallback = fallback;
 		crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx) +
-					    crypto_skcipher_reqsize(fallback));
+					    crypto_skcipher_reqsize(fallback) +
+					    extra_reqsize);
 	} else {
-		crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx));
+		crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx) +
+					    extra_reqsize);
 	}
 
 	ret = caam_init_common(ctx, &caam_alg->caam, false);
@@ -3486,8 +3520,7 @@ static void caam_skcipher_alg_init(struct caam_skcipher_alg *t_alg)
 	alg->base.cra_module = THIS_MODULE;
 	alg->base.cra_priority = CAAM_CRA_PRIORITY;
 	alg->base.cra_ctxsize = sizeof(struct caam_ctx);
-	alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY |
-				CRYPTO_ALG_KERN_DRIVER_ONLY);
+	alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY);
 
 	alg->init = caam_cra_init;
 	alg->exit = caam_cra_exit;

From patchwork Thu Dec 3 01:35:21 2020
X-Patchwork-Submitter: "Iuliana Prodan (OSS)"
X-Patchwork-Id: 337071
From: "Iuliana Prodan (OSS)"
To: Herbert Xu, Ard Biesheuvel, "David S. Miller", Horia Geanta
Cc: Aymen Sghaier, Silvano Di Ninno, Franck Lenormand,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-imx, Iuliana Prodan
Subject: [PATCH 2/5] crypto: caam/jr - avoid allocating memory at crypto
 request runtime for aead
Date: Thu, 3 Dec 2020 03:35:21 +0200
Message-Id: <20201203013524.30495-3-iuliana.prodan@oss.nxp.com>
In-Reply-To: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
References: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

From: Iuliana Prodan

Remove the CRYPTO_ALG_ALLOCATES_MEMORY flag and allocate the memory
needed by the driver, to fulfil a request, within the crypto request
object. The extra size needed for the base extended descriptor, hw
descriptor commands and link tables is computed at frontend driver
(caamalg) initialization and saved in the reqsize field, which
indicates how much memory could be needed per request.

The CRYPTO_ALG_ALLOCATES_MEMORY flag is limited to dm-crypt use-cases,
which seem to need 4 entries maximum. Therefore, in reqsize we allocate
memory for a maximum of 4 entries for src and 4 for dst, aligned. If
the driver needs more than the 4 entries maximum, the memory is
dynamically allocated at runtime.
Signed-off-by: Iuliana Prodan
---
 drivers/crypto/caam/caamalg.c | 64 ++++++++++++++++++++++++++---------
 1 file changed, 48 insertions(+), 16 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index ef49781a2545..058c808dbae9 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -880,6 +880,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
  * @mapped_dst_nents: number of segments in output h/w link table
  * @sec4_sg_bytes: length of dma mapped sec4_sg space
  * @bklog: stored to determine if the request needs backlog
+ * @free: stored to determine if aead_edesc needs to be freed
  * @sec4_sg_dma: bus physical mapped address of h/w link table
  * @sec4_sg: pointer to h/w link table
  * @hw_desc: the h/w job descriptor followed by any referenced link tables
@@ -891,6 +892,7 @@ struct aead_edesc {
 	int mapped_dst_nents;
 	int sec4_sg_bytes;
 	bool bklog;
+	bool free;
 	dma_addr_t sec4_sg_dma;
 	struct sec4_sg_entry *sec4_sg;
 	u32 hw_desc[];
@@ -987,8 +989,8 @@ static void aead_crypt_done(struct device *jrdev, u32 *desc, u32 err,
 		ecode = caam_jr_strstatus(jrdev, err);
 
 	aead_unmap(jrdev, edesc, req);
-
-	kfree(edesc);
+	if (edesc->free)
+		kfree(edesc);
 
 	/*
 	 * If no backlog flag, the completion of the request is done
@@ -1301,7 +1303,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 	int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
 	int src_len, dst_len = 0;
 	struct aead_edesc *edesc;
-	int sec4_sg_index, sec4_sg_len, sec4_sg_bytes;
+	int sec4_sg_index, sec4_sg_len, sec4_sg_bytes, edesc_size = 0;
 	unsigned int authsize = ctx->authsize;
 
 	if (unlikely(req->dst != req->src)) {
@@ -1381,13 +1383,30 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 
 	sec4_sg_bytes = sec4_sg_len * sizeof(struct sec4_sg_entry);
 
-	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = kzalloc(sizeof(*edesc) + desc_bytes + sec4_sg_bytes,
-			GFP_DMA | flags);
-	if (!edesc) {
-		caam_unmap(jrdev, req->src, req->dst, src_nents, dst_nents, 0,
-			   0, 0, 0);
-		return ERR_PTR(-ENOMEM);
+	/* Check if there's enough space for edesc saved in req */
+	edesc_size = sizeof(*edesc) + desc_bytes + sec4_sg_bytes;
+	if (edesc_size > (crypto_aead_reqsize(aead) -
+			  sizeof(struct caam_aead_req_ctx))) {
+		/*
+		 * allocate space for base edesc and
+		 * hw desc commands, link tables
+		 */
+		edesc = kzalloc(edesc_size, GFP_DMA | flags);
+		if (!edesc) {
+			caam_unmap(jrdev, req->src, req->dst, src_nents,
+				   dst_nents, 0, 0, 0, 0);
+			return ERR_PTR(-ENOMEM);
+		}
+		edesc->free = true;
+	} else {
+		/*
+		 * get address for base edesc and
+		 * hw desc commands, link tables
+		 */
+		edesc = (struct aead_edesc *)((u8 *)rctx +
+			sizeof(struct caam_aead_req_ctx));
+		/* clear memory */
+		memset(edesc, 0, sizeof(*edesc));
 	}
 
 	edesc->src_nents = src_nents;
@@ -1420,7 +1439,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 	if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
 		dev_err(jrdev, "unable to map S/G table\n");
 		aead_unmap(jrdev, edesc, req);
-		kfree(edesc);
+		if (edesc->free)
+			kfree(edesc);
 		return ERR_PTR(-ENOMEM);
 	}
 
@@ -1450,7 +1470,8 @@ static int aead_enqueue_req(struct device *jrdev, struct aead_request *req)
 
 	if ((ret != -EINPROGRESS) && (ret != -EBUSY)) {
 		aead_unmap(jrdev, edesc, req);
-		kfree(rctx->edesc);
+		if (rctx->edesc->free)
+			kfree(rctx->edesc);
 	}
 
 	return ret;
@@ -1538,7 +1559,8 @@ static int aead_do_one_req(struct crypto_engine *engine, void *areq)
 
 	if (ret != -EINPROGRESS) {
 		aead_unmap(ctx->jrdev, rctx->edesc, req);
-		kfree(rctx->edesc);
+		if (rctx->edesc->free)
+			kfree(rctx->edesc);
 	} else {
 		ret = 0;
 	}
@@ -3463,8 +3485,19 @@ static int caam_aead_init(struct crypto_aead *tfm)
 	struct caam_aead_alg *caam_alg =
 		 container_of(alg, struct caam_aead_alg, aead);
 	struct caam_ctx *ctx = crypto_aead_ctx(tfm);
+	int extra_reqsize = 0;
+
+	/*
+	 * Compute extra space needed for base edesc and
+	 * hw desc commands, link tables, IV
+	 */
+	extra_reqsize = sizeof(struct aead_edesc) +
+			/* max size for hw desc commands */
+			(AEAD_DESC_JOB_IO_LEN + CAAM_CMD_SZ * 6) +
+			/* link tables for src and dst, 4 entries max, aligned */
+			(8 * sizeof(struct sec4_sg_entry));
 
-	crypto_aead_set_reqsize(tfm, sizeof(struct caam_aead_req_ctx));
+	crypto_aead_set_reqsize(tfm, sizeof(struct caam_aead_req_ctx) +
+				extra_reqsize);
 
 	ctx->enginectx.op.do_one_request = aead_do_one_req;
 
@@ -3533,8 +3566,7 @@ static void caam_aead_alg_init(struct caam_aead_alg *t_alg)
 	alg->base.cra_module = THIS_MODULE;
 	alg->base.cra_priority = CAAM_CRA_PRIORITY;
 	alg->base.cra_ctxsize = sizeof(struct caam_ctx);
-	alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY |
-			      CRYPTO_ALG_KERN_DRIVER_ONLY;
+	alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY;
 
 	alg->init = caam_aead_init;
 	alg->exit = caam_aead_exit;

From patchwork Thu Dec 3 01:35:22 2020
X-Patchwork-Submitter: "Iuliana Prodan (OSS)"
X-Patchwork-Id: 337863
From: "Iuliana Prodan (OSS)"
To: Herbert Xu, Ard Biesheuvel, "David S. Miller", Horia Geanta
Cc: Aymen Sghaier, Silvano Di Ninno, Franck Lenormand,
    linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-imx, Iuliana Prodan
Subject: [PATCH 3/5] crypto: caam/jr - avoid allocating memory at crypto
 request runtime for hash
Date: Thu, 3 Dec 2020 03:35:22 +0200
Message-Id: <20201203013524.30495-4-iuliana.prodan@oss.nxp.com>
In-Reply-To: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
References: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
OLM:6108; X-MS-Exchange-SenderADCheck: 1 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 1d1aAaYoZoXIi6zAHqmawYUeEMov0A9H9xOG9OO/M0RNcxcT+QX5Vy24Qng8yHTUUMQl9M5rmrzaySSQIkES8CubvbRWy2gOKzTViWWv7aaPcG+YzVHeF/cfhbs5J18JHat+hB7ccXIsuE+7TeSb3vxsRrsM5LTvCJRI/B9YZv6Tc4CGhW98K0Sl0O7NbCjXALxWhAdjuWIZkb43VXbunc25ICdGtIGh+3ZcuAcvzQdfsJ+JzPstKM+hzqkrq0ENS2UG78zQP8Lw2esFpuhC6CTPytyHfpWV4geXJs5BNQcrpfcH4KoZic+xzEfuMmOGfQvgVzpHblkUYE94ZXEyFA== X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:VI1PR0402MB3712.eurprd04.prod.outlook.com; PTR:; CAT:NONE; SFS:(4636009)(366004)(136003)(346002)(376002)(39860400002)(396003)(54906003)(1076003)(110136005)(2906002)(316002)(2616005)(83380400001)(8676002)(52116002)(6506007)(8936002)(5660300002)(66946007)(26005)(66476007)(956004)(16526019)(86362001)(186003)(6512007)(4326008)(478600001)(66556008)(6666004)(6486002); DIR:OUT; SFP:1101; X-MS-Exchange-AntiSpam-MessageData: CoXwSyONdgDn6bDBxIqKiomTRX/RoFWAHdPrQokYbqQ9pFLu4JGs4HZ01+4ssJXY1NwVbC33EenxcZWjlQyWP1l5DB+DESy3UB4PVPuLhZPB3cnJqmtOlY0K7onqhm71tbprfz0Y/rw8fS3HZVPLGGFG3+Lbi1ah571vxtXp+91SPi6carwL77b3V7QlRBvgv8Co/fPD2RiSyERqH6kmPhb1O8OMikZt3w+qfLIgBB4zI5ojMkOTrXaL5iT4rpgLNq04bNRgSCdx37gZN+qgI5lkSIPJLEZEhb/o+izLoD4IRm9x7e+Sm994H3g62aFegawFkT0t1XuuA7loq62qJZme1xAVhn2EmR6NtJrFZ+7tFUA95xgU+6+TYAkvwssKCNBy+6Xb19hIjQ1yVwOROPbvD8h63eFkleYzj2gNu6x/P8Vfn63dbZoHH7FbXuWiExKUL+CoKj5gGTRfc5LQ2c3fns8ci3fS+uYhQSTdmiOX7tQJq9tIS/CvUQ8zSUxrCbFi/QjfRlktcGzUPL2lu29K+GM99Mjrzk3opHcpVXj5Kt086+HkZlt4+Bt7LAq9ONbemjv4/UM9v+OT+MypyA7ngLzguN1phDai7bPdVHy9xvf+UF+EZZSYvQP5fJeDwoEf0f4us8Uvsl5opoZtCK7dGOb3F2CSW3xMO+fQxNbMrJBcB4TPVarGOzB1A4dxhAm8NKwZIw7ngx4d6GRvEi84n+szbln7dhcy02g8PJL36iEGS08ioqALz/T5IqUK/HjuFxrDyzAkFCe/s2uVQkBKPYWGCLkaIu5FKcgsjIJV+pKuYZDLGkWSFEYeV3WFSRDzh31PwWH6FKh6Pkq151rp1IiM7+0CKWml2TyDCQR9D2scpLjRxMrBGb+J6G4G4z8IBUWLuG4eJgS+q/Dg2dhXYQLx14ZT3MHZXbq56HRN5bLCLglkjCVuLGa6P2XVXX4E6UDn7xeyC6IcJpH7IvNagoPXRAF4h08q4DRDEFgPg0pfYP/krNpce6azNG/x 
Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Iuliana Prodan Remove the CRYPTO_ALG_ALLOCATES_MEMORY flag and allocate the memory needed by the driver, to fulfil a request, within the crypto request object. The extra size needed for the base extended descriptor and link tables is computed in frontend driver (caamhash) initialization and saved in the reqsize field, which indicates how much memory could be needed per request. The CRYPTO_ALG_ALLOCATES_MEMORY flag is relevant only for dm-crypt use-cases, which seem to use at most 4 entries. Therefore in reqsize we allocate memory for a maximum of 4 entries for src plus up to 2 for the remaining buffer, aligned to 8 entries. If the driver needs more than this maximum, the memory is dynamically allocated, at runtime.
Signed-off-by: Iuliana Prodan --- drivers/crypto/caam/caamhash.c | 77 +++++++++++++++++++++++++--------- 1 file changed, 57 insertions(+), 20 deletions(-) diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c index e8a6d8bc43b5..4a6376691ad6 100644 --- a/drivers/crypto/caam/caamhash.c +++ b/drivers/crypto/caam/caamhash.c @@ -527,6 +527,7 @@ static int acmac_setkey(struct crypto_ahash *ahash, const u8 *key, * @src_nents: number of segments in input scatterlist * @sec4_sg_bytes: length of dma mapped sec4_sg space * @bklog: stored to determine if the request needs backlog + * @free: stored to determine if ahash_edesc needs to be freed * @hw_desc: the h/w job descriptor followed by any referenced link tables * @sec4_sg: h/w link table */ @@ -535,6 +536,7 @@ struct ahash_edesc { int src_nents; int sec4_sg_bytes; bool bklog; + bool free; u32 hw_desc[DESC_JOB_IO_LEN_MAX / sizeof(u32)] ____cacheline_aligned; struct sec4_sg_entry sec4_sg[]; }; @@ -595,7 +597,8 @@ static inline void ahash_done_cpy(struct device *jrdev, u32 *desc, u32 err, ahash_unmap_ctx(jrdev, edesc, req, digestsize, dir); memcpy(req->result, state->caam_ctx, digestsize); - kfree(edesc); + if (edesc->free) + kfree(edesc); print_hex_dump_debug("ctx@"__stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, @@ -644,7 +647,8 @@ static inline void ahash_done_switch(struct device *jrdev, u32 *desc, u32 err, ecode = caam_jr_strstatus(jrdev, err); ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, dir); - kfree(edesc); + if (edesc->free) + kfree(edesc); scatterwalk_map_and_copy(state->buf, req->src, req->nbytes - state->next_buflen, @@ -701,11 +705,25 @@ static struct ahash_edesc *ahash_edesc_alloc(struct ahash_request *req, GFP_KERNEL : GFP_ATOMIC; struct ahash_edesc *edesc; unsigned int sg_size = sg_num * sizeof(struct sec4_sg_entry); - - edesc = kzalloc(sizeof(*edesc) + sg_size, GFP_DMA | flags); - if (!edesc) { - dev_err(ctx->jrdev, "could not allocate extended 
descriptor\n"); - return NULL; + int edesc_size; + + /* Check if there's enough space for edesc saved in req */ + edesc_size = sizeof(*edesc) + sg_size; + if (edesc_size > (crypto_ahash_reqsize(ahash) - + sizeof(struct caam_hash_state))) { + /* allocate space for base edesc and link tables */ + edesc = kzalloc(sizeof(*edesc) + sg_size, GFP_DMA | flags); + if (!edesc) { + dev_err(ctx->jrdev, "could not allocate extended descriptor\n"); + return NULL; + } + edesc->free = true; + } else { + /* get address for base edesc and link tables */ + edesc = (struct ahash_edesc *)((u8 *)state + + sizeof(struct caam_hash_state)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); } state->edesc = edesc; @@ -767,7 +785,8 @@ static int ahash_do_one_req(struct crypto_engine *engine, void *areq) if (ret != -EINPROGRESS) { ahash_unmap(jrdev, state->edesc, req, 0); - kfree(state->edesc); + if (state->edesc->free) + kfree(state->edesc); } else { ret = 0; } @@ -802,7 +821,8 @@ static int ahash_enqueue_req(struct device *jrdev, if ((ret != -EINPROGRESS) && (ret != -EBUSY)) { ahash_unmap_ctx(jrdev, edesc, req, dst_len, dir); - kfree(edesc); + if (edesc->free) + kfree(edesc); } return ret; @@ -930,7 +950,8 @@ static int ahash_update_ctx(struct ahash_request *req) return ret; unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL); - kfree(edesc); + if (edesc->free) + kfree(edesc); return ret; } @@ -991,7 +1012,8 @@ static int ahash_final_ctx(struct ahash_request *req) digestsize, DMA_BIDIRECTIONAL); unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); - kfree(edesc); + if (edesc->free) + kfree(edesc); return ret; } @@ -1065,7 +1087,8 @@ static int ahash_finup_ctx(struct ahash_request *req) digestsize, DMA_BIDIRECTIONAL); unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); - kfree(edesc); + if (edesc->free) + kfree(edesc); return ret; } @@ -1114,7 +1137,8 @@ static int ahash_digest(struct ahash_request *req) 
req->nbytes); if (ret) { ahash_unmap(jrdev, edesc, req, digestsize); - kfree(edesc); + if (edesc->free) + kfree(edesc); return ret; } @@ -1123,7 +1147,8 @@ static int ahash_digest(struct ahash_request *req) ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize); if (ret) { ahash_unmap(jrdev, edesc, req, digestsize); - kfree(edesc); + if (edesc->free) + kfree(edesc); return -ENOMEM; } @@ -1180,7 +1205,8 @@ static int ahash_final_no_ctx(struct ahash_request *req) digestsize, DMA_FROM_DEVICE); unmap: ahash_unmap(jrdev, edesc, req, digestsize); - kfree(edesc); + if (edesc->free) + kfree(edesc); return -ENOMEM; } @@ -1301,7 +1327,8 @@ static int ahash_update_no_ctx(struct ahash_request *req) return ret; unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE); - kfree(edesc); + if (edesc->free) + kfree(edesc); return ret; } @@ -1376,7 +1403,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req) digestsize, DMA_FROM_DEVICE); unmap: ahash_unmap(jrdev, edesc, req, digestsize); - kfree(edesc); + if (edesc->free) + kfree(edesc); return -ENOMEM; } @@ -1484,7 +1512,8 @@ static int ahash_update_first(struct ahash_request *req) return ret; unmap_ctx: ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE); - kfree(edesc); + if (edesc->free) + kfree(edesc); return ret; } @@ -1771,6 +1800,7 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm) sh_desc_update); dma_addr_t dma_addr; struct caam_drv_private *priv; + int extra_reqsize = 0; /* * Get a Job ring from Job Ring driver to ensure in-order @@ -1851,8 +1881,15 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm) ctx->enginectx.op.do_one_request = ahash_do_one_req; + /* Compute extra space needed for base edesc and link tables */ + extra_reqsize = sizeof(struct ahash_edesc) + + /* link tables for src: + * 4 entries max + max 2 for remaining buf, aligned = 8 + */ + (8 * sizeof(struct sec4_sg_entry)); + crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), - sizeof(struct caam_hash_state)); 
+ sizeof(struct caam_hash_state) + extra_reqsize); /* * For keyed hash algorithms shared descriptors @@ -1927,7 +1964,7 @@ caam_hash_alloc(struct caam_hash_template *template, alg->cra_priority = CAAM_CRA_PRIORITY; alg->cra_blocksize = template->blocksize; alg->cra_alignmask = 0; - alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY; + alg->cra_flags = CRYPTO_ALG_ASYNC; t_alg->alg_type = template->alg_type; From patchwork Thu Dec 3 01:35:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Iuliana Prodan \(OSS\)" X-Patchwork-Id: 337070 From: "Iuliana Prodan (OSS)" To: Herbert Xu , Ard Biesheuvel , "David S. Miller" , Horia Geanta Cc: Aymen Sghaier , Silvano Di Ninno , Franck Lenormand , linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx , Iuliana Prodan Subject: [PATCH 4/5] crypto: caam/qi - avoid allocating memory at crypto request runtime Date: Thu, 3 Dec 2020 03:35:23 +0200 Message-Id: <20201203013524.30495-5-iuliana.prodan@oss.nxp.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com> References: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Iuliana Prodan Remove the CRYPTO_ALG_ALLOCATES_MEMORY flag and allocate the memory needed by the driver, to fulfil a request, within the crypto request object. The extra size needed for the base extended descriptor, hw descriptor commands and link tables is computed in frontend driver (caamalg_qi) initialization and saved in the reqsize field, which indicates how much memory could be needed per request. The CRYPTO_ALG_ALLOCATES_MEMORY flag is relevant only for dm-crypt use-cases, which seem to use at most 4 entries. Therefore in reqsize we allocate memory for a maximum of 4 entries for src and 4 for dst, aligned. If the driver needs more than the 4 entries maximum, the memory is dynamically allocated, at runtime.
Signed-off-by: Iuliana Prodan --- drivers/crypto/caam/caamalg_qi.c | 134 +++++++++++++++++++++---------- 1 file changed, 90 insertions(+), 44 deletions(-) diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c index a24ae966df4a..ea49697e2579 100644 --- a/drivers/crypto/caam/caamalg_qi.c +++ b/drivers/crypto/caam/caamalg_qi.c @@ -788,6 +788,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key, * @dst_nents: number of segments in output scatterlist * @iv_dma: dma address of iv for checking continuity and link table * @qm_sg_bytes: length of dma mapped h/w link table + * @free: stored to determine if aead_edesc needs to be freed * @qm_sg_dma: bus physical mapped address of h/w link table * @assoclen: associated data length, in CAAM endianness * @assoclen_dma: bus physical mapped address of req->assoclen @@ -799,6 +800,7 @@ struct aead_edesc { int dst_nents; dma_addr_t iv_dma; int qm_sg_bytes; + bool free; dma_addr_t qm_sg_dma; unsigned int assoclen; dma_addr_t assoclen_dma; @@ -812,6 +814,7 @@ struct aead_edesc { * @dst_nents: number of segments in output scatterlist * @iv_dma: dma address of iv for checking continuity and link table * @qm_sg_bytes: length of dma mapped h/w link table + * @free: stored to determine if skcipher_edesc needs to be freed * @qm_sg_dma: bus physical mapped address of h/w link table * @drv_req: driver-specific request structure * @sgt: the h/w link table, followed by IV @@ -821,6 +824,7 @@ struct skcipher_edesc { int dst_nents; dma_addr_t iv_dma; int qm_sg_bytes; + bool free; dma_addr_t qm_sg_dma; struct caam_drv_req drv_req; struct qm_sg_entry sgt[]; @@ -927,7 +931,8 @@ static void aead_done(struct caam_drv_req *drv_req, u32 status) aead_unmap(qidev, edesc, aead_req); aead_request_complete(aead_req, ecode); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); } /* @@ -949,7 +954,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dma_addr_t 
qm_sg_dma, iv_dma = 0; int ivsize = 0; unsigned int authsize = ctx->authsize; - int qm_sg_index = 0, qm_sg_ents = 0, qm_sg_bytes; + int qm_sg_index = 0, qm_sg_ents = 0, qm_sg_bytes, edesc_size = 0; int in_len, out_len; struct qm_sg_entry *sg_table, *fd_sgt; struct caam_drv_ctx *drv_ctx; @@ -958,13 +963,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, if (IS_ERR_OR_NULL(drv_ctx)) return (struct aead_edesc *)drv_ctx; - /* allocate space for base edesc and hw desc commands, link tables */ - edesc = qi_cache_alloc(GFP_DMA | flags); - if (unlikely(!edesc)) { - dev_err(qidev, "could not allocate extended descriptor\n"); - return ERR_PTR(-ENOMEM); - } - if (likely(req->src == req->dst)) { src_len = req->assoclen + req->cryptlen + (encrypt ? authsize : 0); @@ -973,7 +971,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, if (unlikely(src_nents < 0)) { dev_err(qidev, "Insufficient bytes (%d) in src S/G\n", src_len); - qi_cache_free(edesc); return ERR_PTR(src_nents); } @@ -981,7 +978,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, DMA_BIDIRECTIONAL); if (unlikely(!mapped_src_nents)) { dev_err(qidev, "unable to map source\n"); - qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } } else { @@ -992,7 +988,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, if (unlikely(src_nents < 0)) { dev_err(qidev, "Insufficient bytes (%d) in src S/G\n", src_len); - qi_cache_free(edesc); return ERR_PTR(src_nents); } @@ -1000,7 +995,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, if (unlikely(dst_nents < 0)) { dev_err(qidev, "Insufficient bytes (%d) in dst S/G\n", dst_len); - qi_cache_free(edesc); return ERR_PTR(dst_nents); } @@ -1009,7 +1003,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, src_nents, DMA_TO_DEVICE); if (unlikely(!mapped_src_nents)) { dev_err(qidev, "unable to map source\n"); - qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } } else { 
@@ -1024,7 +1017,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dev_err(qidev, "unable to map destination\n"); dma_unmap_sg(qidev, req->src, src_nents, DMA_TO_DEVICE); - qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } } else { @@ -1058,14 +1050,30 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, sg_table = &edesc->sgt[0]; qm_sg_bytes = qm_sg_ents * sizeof(*sg_table); - if (unlikely(offsetof(struct aead_edesc, sgt) + qm_sg_bytes + ivsize > - CAAM_QI_MEMCACHE_SIZE)) { + + /* Check if there's enough space for edesc saved in req */ + edesc_size = offsetof(struct aead_edesc, sgt) + qm_sg_bytes + ivsize; + if (unlikely(edesc_size > CAAM_QI_MEMCACHE_SIZE)) { dev_err(qidev, "No space for %d S/G entries and/or %dB IV\n", qm_sg_ents, ivsize); caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0, 0, DMA_NONE, 0, 0); - qi_cache_free(edesc); return ERR_PTR(-ENOMEM); + } else if (edesc_size > crypto_aead_reqsize(aead)) { + /* allocate space for base edesc, link tables and IV */ + edesc = qi_cache_alloc(GFP_DMA | flags); + if (unlikely(!edesc)) { + dev_err(qidev, "could not allocate extended descriptor\n"); + caam_unmap(qidev, req->src, req->dst, src_nents, + dst_nents, 0, 0, DMA_NONE, 0, 0); + return ERR_PTR(-ENOMEM); + } + edesc->free = true; + } else { + /* get address for base edesc, link tables and IV */ + edesc = (struct aead_edesc *)((u8 *)aead_request_ctx(req)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); } if (ivsize) { @@ -1079,7 +1087,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dev_err(qidev, "unable to map IV\n"); caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0, 0, DMA_NONE, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } } @@ -1098,7 +1107,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dev_err(qidev, "unable to map assoclen\n"); caam_unmap(qidev, req->src, req->dst, 
src_nents, dst_nents, iv_dma, ivsize, DMA_TO_DEVICE, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } @@ -1120,7 +1130,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dma_unmap_single(qidev, edesc->assoclen_dma, 4, DMA_TO_DEVICE); caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, iv_dma, ivsize, DMA_TO_DEVICE, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } @@ -1174,7 +1185,8 @@ static inline int aead_crypt(struct aead_request *req, bool encrypt) ret = -EINPROGRESS; } else { aead_unmap(ctx->qidev, edesc, req); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); } return ret; @@ -1237,7 +1249,8 @@ static void skcipher_done(struct caam_drv_req *drv_req, u32 status) memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes, ivsize); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); skcipher_request_complete(req, ecode); } @@ -1254,7 +1267,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, dma_addr_t iv_dma; u8 *iv; int ivsize = crypto_skcipher_ivsize(skcipher); - int dst_sg_idx, qm_sg_ents, qm_sg_bytes; + int dst_sg_idx, qm_sg_ents, qm_sg_bytes, edesc_size = 0; struct qm_sg_entry *sg_table, *fd_sgt; struct caam_drv_ctx *drv_ctx; @@ -1317,22 +1330,30 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, qm_sg_ents = 1 + pad_sg_nents(qm_sg_ents); qm_sg_bytes = qm_sg_ents * sizeof(struct qm_sg_entry); - if (unlikely(offsetof(struct skcipher_edesc, sgt) + qm_sg_bytes + - ivsize > CAAM_QI_MEMCACHE_SIZE)) { + + /* Check if there's enough space for edesc saved in req */ + edesc_size = offsetof(struct skcipher_edesc, sgt) + qm_sg_bytes + ivsize; + if (unlikely(edesc_size > CAAM_QI_MEMCACHE_SIZE)) { dev_err(qidev, "No space for %d S/G entries and/or %dB IV\n", qm_sg_ents, ivsize); caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0, 
0, DMA_NONE, 0, 0); return ERR_PTR(-ENOMEM); - } - - /* allocate space for base edesc, link tables and IV */ - edesc = qi_cache_alloc(GFP_DMA | flags); - if (unlikely(!edesc)) { - dev_err(qidev, "could not allocate extended descriptor\n"); - caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0, - 0, DMA_NONE, 0, 0); - return ERR_PTR(-ENOMEM); + } else if (edesc_size > crypto_skcipher_reqsize(skcipher)) { + /* allocate space for base edesc, link tables and IV */ + edesc = qi_cache_alloc(GFP_DMA | flags); + if (unlikely(!edesc)) { + dev_err(qidev, "could not allocate extended descriptor\n"); + caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0, + 0, DMA_NONE, 0, 0); + return ERR_PTR(-ENOMEM); + } + edesc->free = true; + } else { + /* get address for base edesc, link tables and IV */ + edesc = (struct skcipher_edesc *)((u8 *)skcipher_request_ctx(req)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); } /* Make sure IV is located in a DMAable area */ @@ -1345,7 +1366,8 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, dev_err(qidev, "unable to map IV\n"); caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0, 0, DMA_NONE, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } @@ -1372,7 +1394,8 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req, dev_err(qidev, "unable to map S/G table\n"); caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, iv_dma, ivsize, DMA_BIDIRECTIONAL, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } @@ -1446,7 +1469,8 @@ static inline int skcipher_crypt(struct skcipher_request *req, bool encrypt) ret = -EINPROGRESS; } else { skcipher_unmap(ctx->qidev, edesc, req); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); } return ret; @@ -2493,7 +2517,15 @@ static int caam_cra_init(struct crypto_skcipher *tfm) container_of(alg, 
typeof(*caam_alg), skcipher); struct caam_ctx *ctx = crypto_skcipher_ctx(tfm); u32 alg_aai = caam_alg->caam.class1_alg_type & OP_ALG_AAI_MASK; - int ret = 0; + int ret = 0, extra_reqsize = 0; + + /* Compute extra space needed for base edesc, link tables and IV */ + extra_reqsize = sizeof(struct skcipher_edesc) + + /* link tables for src and dst: + * 4 entries max + 1 for IV, aligned = 8 + */ + (16 * sizeof(struct qm_sg_entry)) + + AES_BLOCK_SIZE; /* ivsize */ if (alg_aai == OP_ALG_AAI_XTS) { const char *tfm_name = crypto_tfm_alg_name(&tfm->base); @@ -2509,7 +2541,10 @@ static int caam_cra_init(struct crypto_skcipher *tfm) ctx->fallback = fallback; crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_skcipher_req_ctx) + - crypto_skcipher_reqsize(fallback)); + crypto_skcipher_reqsize(fallback) + + extra_reqsize); + } else { + crypto_skcipher_set_reqsize(tfm, extra_reqsize); } ret = caam_init_common(ctx, &caam_alg->caam, false); @@ -2525,6 +2560,19 @@ static int caam_aead_init(struct crypto_aead *tfm) struct caam_aead_alg *caam_alg = container_of(alg, typeof(*caam_alg), aead); struct caam_ctx *ctx = crypto_aead_ctx(tfm); + int extra_reqsize = 0; + + /* Compute extra space needed for base edesc, link tables and IV */ + extra_reqsize = sizeof(struct aead_edesc) + + /* link tables for src and dst: + * 4 entries max + 1 for IV, aligned = 8 + */ + (16 * sizeof(struct qm_sg_entry)) + + AES_BLOCK_SIZE; /* ivsize */ + /* + * Set the size for the space needed for base edesc, link tables, IV + */ + crypto_aead_set_reqsize(tfm, extra_reqsize); return caam_init_common(ctx, &caam_alg->caam, !caam_alg->caam.nodkp); } @@ -2580,8 +2628,7 @@ static void caam_skcipher_alg_init(struct caam_skcipher_alg *t_alg) alg->base.cra_module = THIS_MODULE; alg->base.cra_priority = CAAM_CRA_PRIORITY; alg->base.cra_ctxsize = sizeof(struct caam_ctx); - alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY | - CRYPTO_ALG_KERN_DRIVER_ONLY); + alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | 
CRYPTO_ALG_KERN_DRIVER_ONLY); alg->init = caam_cra_init; alg->exit = caam_cra_exit; @@ -2594,8 +2641,7 @@ static void caam_aead_alg_init(struct caam_aead_alg *t_alg) alg->base.cra_module = THIS_MODULE; alg->base.cra_priority = CAAM_CRA_PRIORITY; alg->base.cra_ctxsize = sizeof(struct caam_ctx); - alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY | - CRYPTO_ALG_KERN_DRIVER_ONLY; + alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY; alg->init = caam_aead_init; alg->exit = caam_aead_exit; From patchwork Thu Dec 3 01:35:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Iuliana Prodan \(OSS\)" X-Patchwork-Id: 337862 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, MSGID_FROM_MTA_HEADER, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BCF46C64E8A for ; Thu, 3 Dec 2020 01:38:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 62204221EB for ; Thu, 3 Dec 2020 01:38:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727064AbgLCBh6 (ORCPT ); Wed, 2 Dec 2020 20:37:58 -0500 Received: from mail-eopbgr60065.outbound.protection.outlook.com ([40.107.6.65]:28405 "EHLO EUR04-DB3-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1729428AbgLCBh6 (ORCPT ); Wed, 2 Dec 2020 20:37:58 -0500 ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From: "Iuliana Prodan (OSS)"
To: Herbert Xu, Ard Biesheuvel, "David S. Miller", Horia Geanta
Cc: Aymen Sghaier, Silvano Di Ninno, Franck Lenormand, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, linux-imx, Iuliana Prodan
Subject: [PATCH 5/5] crypto: caam/qi2 - avoid allocating memory at crypto request runtime
Date: Thu, 3 Dec 2020 03:35:24 +0200
Message-Id: <20201203013524.30495-6-iuliana.prodan@oss.nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
References: <20201203013524.30495-1-iuliana.prodan@oss.nxp.com>
MIME-Version: 1.0
X-Mailing-List: linux-crypto@vger.kernel.org

From: Iuliana Prodan

Remove the CRYPTO_ALG_ALLOCATES_MEMORY flag and allocate the memory the driver needs to fulfil a request within the crypto request object itself. The extra space needed for the base extended descriptor, hw descriptor commands and link tables is computed at frontend driver (caamalg_qi2) initialization and saved in the reqsize field, which indicates how much memory may be needed per request.

The CRYPTO_ALG_ALLOCATES_MEMORY flag is relevant only to dm-crypt use cases, which appear to need at most 4 S/G entries. Therefore, through reqsize, we reserve memory for a maximum of 4 entries for src and 4 for dst, aligned. If a request needs more than this maximum, the memory is allocated dynamically, at runtime.
Signed-off-by: Iuliana Prodan --- drivers/crypto/caam/caamalg_qi2.c | 415 ++++++++++++++++++++---------- drivers/crypto/caam/caamalg_qi2.h | 6 + 2 files changed, 288 insertions(+), 133 deletions(-) diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c index a780e627838a..88bbed7dc65b 100644 --- a/drivers/crypto/caam/caamalg_qi2.c +++ b/drivers/crypto/caam/caamalg_qi2.c @@ -362,17 +362,10 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dma_addr_t qm_sg_dma, iv_dma = 0; int ivsize = 0; unsigned int authsize = ctx->authsize; - int qm_sg_index = 0, qm_sg_nents = 0, qm_sg_bytes; + int qm_sg_index = 0, qm_sg_nents = 0, qm_sg_bytes, edesc_size = 0; int in_len, out_len; struct dpaa2_sg_entry *sg_table; - /* allocate space for base edesc, link tables and IV */ - edesc = qi_cache_zalloc(GFP_DMA | flags); - if (unlikely(!edesc)) { - dev_err(dev, "could not allocate extended descriptor\n"); - return ERR_PTR(-ENOMEM); - } - if (unlikely(req->dst != req->src)) { src_len = req->assoclen + req->cryptlen; dst_len = src_len + (encrypt ? 
authsize : (-authsize)); @@ -381,7 +374,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, if (unlikely(src_nents < 0)) { dev_err(dev, "Insufficient bytes (%d) in src S/G\n", src_len); - qi_cache_free(edesc); return ERR_PTR(src_nents); } @@ -389,7 +381,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, if (unlikely(dst_nents < 0)) { dev_err(dev, "Insufficient bytes (%d) in dst S/G\n", dst_len); - qi_cache_free(edesc); return ERR_PTR(dst_nents); } @@ -398,7 +389,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, DMA_TO_DEVICE); if (unlikely(!mapped_src_nents)) { dev_err(dev, "unable to map source\n"); - qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } } else { @@ -412,7 +402,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dev_err(dev, "unable to map destination\n"); dma_unmap_sg(dev, req->src, src_nents, DMA_TO_DEVICE); - qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } } else { @@ -426,7 +415,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, if (unlikely(src_nents < 0)) { dev_err(dev, "Insufficient bytes (%d) in src S/G\n", src_len); - qi_cache_free(edesc); return ERR_PTR(src_nents); } @@ -434,7 +422,6 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, DMA_BIDIRECTIONAL); if (unlikely(!mapped_src_nents)) { dev_err(dev, "unable to map source\n"); - qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } } @@ -466,14 +453,30 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, sg_table = &edesc->sgt[0]; qm_sg_bytes = qm_sg_nents * sizeof(*sg_table); - if (unlikely(offsetof(struct aead_edesc, sgt) + qm_sg_bytes + ivsize > - CAAM_QI_MEMCACHE_SIZE)) { + + /* Check if there's enough space for edesc saved in req */ + edesc_size = offsetof(struct aead_edesc, sgt) + qm_sg_bytes + ivsize; + if (unlikely(edesc_size > CAAM_QI_MEMCACHE_SIZE)) { dev_err(dev, "No space for %d S/G entries and/or %dB IV\n", qm_sg_nents, ivsize); 
caam_unmap(dev, req->src, req->dst, src_nents, dst_nents, 0, 0, DMA_NONE, 0, 0); - qi_cache_free(edesc); return ERR_PTR(-ENOMEM); + } else if (edesc_size > (crypto_aead_reqsize(aead) - + sizeof(struct caam_request))) { + /* allocate space for base edesc, link tables and IV */ + edesc = qi_cache_zalloc(GFP_DMA | flags); + if (unlikely(!edesc)) { + dev_err(dev, "could not allocate extended descriptor\n"); + return ERR_PTR(-ENOMEM); + } + edesc->free = true; + } else { + /* get address for base edesc, link tables and IV */ + edesc = (struct aead_edesc *)((u8 *)req_ctx + + sizeof(struct caam_request)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); } if (ivsize) { @@ -487,7 +490,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dev_err(dev, "unable to map IV\n"); caam_unmap(dev, req->src, req->dst, src_nents, dst_nents, 0, 0, DMA_NONE, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } } @@ -511,7 +515,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dev_err(dev, "unable to map assoclen\n"); caam_unmap(dev, req->src, req->dst, src_nents, dst_nents, iv_dma, ivsize, DMA_TO_DEVICE, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } @@ -533,7 +538,8 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req, dma_unmap_single(dev, edesc->assoclen_dma, 4, DMA_TO_DEVICE); caam_unmap(dev, req->src, req->dst, src_nents, dst_nents, iv_dma, ivsize, DMA_TO_DEVICE, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } @@ -1118,7 +1124,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req) dma_addr_t iv_dma; u8 *iv; int ivsize = crypto_skcipher_ivsize(skcipher); - int dst_sg_idx, qm_sg_ents, qm_sg_bytes; + int dst_sg_idx, qm_sg_ents, qm_sg_bytes, edesc_size = 0; struct dpaa2_sg_entry *sg_table; src_nents = sg_nents_for_len(req->src, 
req->cryptlen); @@ -1176,22 +1182,32 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req) qm_sg_ents = 1 + pad_sg_nents(qm_sg_ents); qm_sg_bytes = qm_sg_ents * sizeof(struct dpaa2_sg_entry); - if (unlikely(offsetof(struct skcipher_edesc, sgt) + qm_sg_bytes + - ivsize > CAAM_QI_MEMCACHE_SIZE)) { + + /* Check if there's enough space for edesc saved in req */ + edesc_size = offsetof(struct skcipher_edesc, sgt) + qm_sg_bytes + ivsize; + if (unlikely(edesc_size > CAAM_QI_MEMCACHE_SIZE)) { dev_err(dev, "No space for %d S/G entries and/or %dB IV\n", qm_sg_ents, ivsize); caam_unmap(dev, req->src, req->dst, src_nents, dst_nents, 0, 0, DMA_NONE, 0, 0); return ERR_PTR(-ENOMEM); - } - - /* allocate space for base edesc, link tables and IV */ - edesc = qi_cache_zalloc(GFP_DMA | flags); - if (unlikely(!edesc)) { - dev_err(dev, "could not allocate extended descriptor\n"); - caam_unmap(dev, req->src, req->dst, src_nents, dst_nents, 0, - 0, DMA_NONE, 0, 0); - return ERR_PTR(-ENOMEM); + } else if (edesc_size > (crypto_skcipher_reqsize(skcipher) - + sizeof(struct caam_request))) { + /* allocate space for base edesc, link tables and IV */ + edesc = qi_cache_zalloc(GFP_DMA | flags); + if (unlikely(!edesc)) { + dev_err(dev, "could not allocate extended descriptor\n"); + caam_unmap(dev, req->src, req->dst, src_nents, + dst_nents, 0, 0, DMA_NONE, 0, 0); + return ERR_PTR(-ENOMEM); + } + edesc->free = true; + } else { + /* get address for base edesc, link tables and IV */ + edesc = (struct skcipher_edesc *)((u8 *)req_ctx + + sizeof(struct caam_request)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); } /* Make sure IV is located in a DMAable area */ @@ -1204,7 +1220,8 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req) dev_err(dev, "unable to map IV\n"); caam_unmap(dev, req->src, req->dst, src_nents, dst_nents, 0, 0, DMA_NONE, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return 
ERR_PTR(-ENOMEM); } @@ -1228,7 +1245,8 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req) dev_err(dev, "unable to map S/G table\n"); caam_unmap(dev, req->src, req->dst, src_nents, dst_nents, iv_dma, ivsize, DMA_BIDIRECTIONAL, 0, 0); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ERR_PTR(-ENOMEM); } @@ -1292,7 +1310,8 @@ static void aead_encrypt_done(void *cbk_ctx, u32 status) ecode = caam_qi2_strstatus(ctx->dev, status); aead_unmap(ctx->dev, edesc, req); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); aead_request_complete(req, ecode); } @@ -1313,7 +1332,8 @@ static void aead_decrypt_done(void *cbk_ctx, u32 status) ecode = caam_qi2_strstatus(ctx->dev, status); aead_unmap(ctx->dev, edesc, req); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); aead_request_complete(req, ecode); } @@ -1339,7 +1359,8 @@ static int aead_encrypt(struct aead_request *req) if (ret != -EINPROGRESS && !(ret == -EBUSY && req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { aead_unmap(ctx->dev, edesc, req); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); } return ret; @@ -1367,7 +1388,8 @@ static int aead_decrypt(struct aead_request *req) if (ret != -EINPROGRESS && !(ret == -EBUSY && req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { aead_unmap(ctx->dev, edesc, req); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); } return ret; @@ -1417,7 +1439,8 @@ static void skcipher_encrypt_done(void *cbk_ctx, u32 status) memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes, ivsize); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); skcipher_request_complete(req, ecode); } @@ -1455,7 +1478,8 @@ static void skcipher_decrypt_done(void *cbk_ctx, u32 status) memcpy(req->iv, (u8 *)&edesc->sgt[0] + edesc->qm_sg_bytes, ivsize); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); skcipher_request_complete(req, ecode); } @@ -1511,7 +1535,8 @@ static 
int skcipher_encrypt(struct skcipher_request *req) if (ret != -EINPROGRESS && !(ret == -EBUSY && req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { skcipher_unmap(ctx->dev, edesc, req); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); } return ret; @@ -1561,7 +1586,8 @@ static int skcipher_decrypt(struct skcipher_request *req) if (ret != -EINPROGRESS && !(ret == -EBUSY && req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) { skcipher_unmap(ctx->dev, edesc, req); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); } return ret; @@ -1602,7 +1628,15 @@ static int caam_cra_init_skcipher(struct crypto_skcipher *tfm) container_of(alg, typeof(*caam_alg), skcipher); struct caam_ctx *ctx = crypto_skcipher_ctx(tfm); u32 alg_aai = caam_alg->caam.class1_alg_type & OP_ALG_AAI_MASK; - int ret = 0; + int ret = 0, extra_reqsize = 0; + + /* Compute extra space needed for base edesc, link tables and IV */ + extra_reqsize = sizeof(struct skcipher_edesc) + + /* link tables for src and dst: + * 4 entries max + 1 for IV, aligned = 8 + */ + (16 * sizeof(struct dpaa2_sg_entry)) + + AES_BLOCK_SIZE; /* ivsize */ if (alg_aai == OP_ALG_AAI_XTS) { const char *tfm_name = crypto_tfm_alg_name(&tfm->base); @@ -1619,9 +1653,11 @@ static int caam_cra_init_skcipher(struct crypto_skcipher *tfm) ctx->fallback = fallback; crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_request) + - crypto_skcipher_reqsize(fallback)); + crypto_skcipher_reqsize(fallback) + + extra_reqsize); } else { - crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_request)); + crypto_skcipher_set_reqsize(tfm, sizeof(struct caam_request) + + extra_reqsize); } ret = caam_cra_init(ctx, &caam_alg->caam, false); @@ -1636,8 +1672,17 @@ static int caam_cra_init_aead(struct crypto_aead *tfm) struct aead_alg *alg = crypto_aead_alg(tfm); struct caam_aead_alg *caam_alg = container_of(alg, typeof(*caam_alg), aead); + int extra_reqsize = 0; + + /* Compute extra space needed for base edesc, link tables and IV */ + 
extra_reqsize = sizeof(struct aead_edesc) + + /* link tables for src and dst: + * 4 entries max + 1 for IV, aligned = 8 + */ + (16 * sizeof(struct dpaa2_sg_entry)) + + AES_BLOCK_SIZE; /* ivsize */ - crypto_aead_set_reqsize(tfm, sizeof(struct caam_request)); + crypto_aead_set_reqsize(tfm, sizeof(struct caam_request) + extra_reqsize); return caam_cra_init(crypto_aead_ctx(tfm), &caam_alg->caam, !caam_alg->caam.nodkp); } @@ -3006,8 +3051,7 @@ static void caam_skcipher_alg_init(struct caam_skcipher_alg *t_alg) alg->base.cra_module = THIS_MODULE; alg->base.cra_priority = CAAM_CRA_PRIORITY; alg->base.cra_ctxsize = sizeof(struct caam_ctx); - alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY | - CRYPTO_ALG_KERN_DRIVER_ONLY); + alg->base.cra_flags |= (CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY); alg->init = caam_cra_init_skcipher; alg->exit = caam_cra_exit; @@ -3020,8 +3064,7 @@ static void caam_aead_alg_init(struct caam_aead_alg *t_alg) alg->base.cra_module = THIS_MODULE; alg->base.cra_priority = CAAM_CRA_PRIORITY; alg->base.cra_ctxsize = sizeof(struct caam_ctx); - alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY | - CRYPTO_ALG_KERN_DRIVER_ONLY; + alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY; alg->init = caam_cra_init_aead; alg->exit = caam_cra_exit_aead; @@ -3400,7 +3443,8 @@ static void ahash_done(void *cbk_ctx, u32 status) ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE); memcpy(req->result, state->caam_ctx, digestsize); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); print_hex_dump_debug("ctx@" __stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, @@ -3425,7 +3469,8 @@ static void ahash_done_bi(void *cbk_ctx, u32 status) ecode = caam_qi2_strstatus(ctx->dev, status); ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); scatterwalk_map_and_copy(state->buf, req->src, req->nbytes - 
state->next_buflen, @@ -3465,7 +3510,8 @@ static void ahash_done_ctx_src(void *cbk_ctx, u32 status) ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL); memcpy(req->result, state->caam_ctx, digestsize); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); print_hex_dump_debug("ctx@" __stringify(__LINE__)": ", DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, @@ -3490,7 +3536,8 @@ static void ahash_done_ctx_dst(void *cbk_ctx, u32 status) ecode = caam_qi2_strstatus(ctx->dev, status); ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); scatterwalk_map_and_copy(state->buf, req->src, req->nbytes - state->next_buflen, @@ -3528,7 +3575,7 @@ static int ahash_update_ctx(struct ahash_request *req) int in_len = *buflen + req->nbytes, to_hash; int src_nents, mapped_nents, qm_sg_bytes, qm_sg_src_index; struct ahash_edesc *edesc; - int ret = 0; + int ret = 0, edesc_size = 0; *next_buflen = in_len & (crypto_tfm_alg_blocksize(&ahash->base) - 1); to_hash = in_len - *next_buflen; @@ -3554,18 +3601,31 @@ static int ahash_update_ctx(struct ahash_request *req) mapped_nents = 0; } - /* allocate space for base edesc and link tables */ - edesc = qi_cache_zalloc(GFP_DMA | flags); - if (!edesc) { - dma_unmap_sg(ctx->dev, req->src, src_nents, - DMA_TO_DEVICE); - return -ENOMEM; - } - - edesc->src_nents = src_nents; qm_sg_src_index = 1 + (*buflen ? 
1 : 0); qm_sg_bytes = pad_sg_nents(qm_sg_src_index + mapped_nents) * sizeof(*sg_table); + + /* Check if there's enough space for edesc saved in req */ + edesc_size = sizeof(*edesc) + qm_sg_bytes; + if (edesc_size > (crypto_ahash_reqsize(ahash) - + sizeof(struct caam_hash_state))) { + /* allocate space for base edesc and link tables */ + edesc = qi_cache_zalloc(GFP_DMA | flags); + if (!edesc) { + dma_unmap_sg(ctx->dev, req->src, src_nents, + DMA_TO_DEVICE); + return -ENOMEM; + } + edesc->free = true; + } else { + /* get address for base edesc and link tables */ + edesc = (struct ahash_edesc *)((u8 *)state + + sizeof(struct caam_hash_state)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); + } + + edesc->src_nents = src_nents; sg_table = &edesc->sgt[0]; ret = ctx_map_to_qm_sg(ctx->dev, state, ctx->ctx_len, sg_table, @@ -3627,7 +3687,8 @@ static int ahash_update_ctx(struct ahash_request *req) return ret; unmap_ctx: ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ret; } @@ -3642,18 +3703,31 @@ static int ahash_final_ctx(struct ahash_request *req) gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC; int buflen = state->buflen; - int qm_sg_bytes; + int qm_sg_bytes, edesc_size = 0; int digestsize = crypto_ahash_digestsize(ahash); struct ahash_edesc *edesc; struct dpaa2_sg_entry *sg_table; int ret; - /* allocate space for base edesc and link tables */ - edesc = qi_cache_zalloc(GFP_DMA | flags); - if (!edesc) - return -ENOMEM; - qm_sg_bytes = pad_sg_nents(1 + (buflen ? 
1 : 0)) * sizeof(*sg_table); + + /* Check if there's enough space for edesc saved in req */ + edesc_size = sizeof(*edesc) + qm_sg_bytes; + if (edesc_size > (crypto_ahash_reqsize(ahash) - + sizeof(struct caam_hash_state))) { + /* allocate space for base edesc and link tables */ + edesc = qi_cache_zalloc(GFP_DMA | flags); + if (!edesc) + return -ENOMEM; + edesc->free = true; + } else { + /* get address for base edesc and link tables */ + edesc = (struct ahash_edesc *)((u8 *)state + + sizeof(struct caam_hash_state)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); + } + sg_table = &edesc->sgt[0]; ret = ctx_map_to_qm_sg(ctx->dev, state, ctx->ctx_len, sg_table, @@ -3698,7 +3772,8 @@ static int ahash_final_ctx(struct ahash_request *req) unmap_ctx: ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ret; } @@ -3713,7 +3788,7 @@ static int ahash_finup_ctx(struct ahash_request *req) gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC; int buflen = state->buflen; - int qm_sg_bytes, qm_sg_src_index; + int qm_sg_bytes, qm_sg_src_index, edesc_size = 0; int src_nents, mapped_nents; int digestsize = crypto_ahash_digestsize(ahash); struct ahash_edesc *edesc; @@ -3737,17 +3812,31 @@ static int ahash_finup_ctx(struct ahash_request *req) mapped_nents = 0; } - /* allocate space for base edesc and link tables */ - edesc = qi_cache_zalloc(GFP_DMA | flags); - if (!edesc) { - dma_unmap_sg(ctx->dev, req->src, src_nents, DMA_TO_DEVICE); - return -ENOMEM; - } - - edesc->src_nents = src_nents; qm_sg_src_index = 1 + (buflen ? 
1 : 0); qm_sg_bytes = pad_sg_nents(qm_sg_src_index + mapped_nents) * sizeof(*sg_table); + + /* Check if there's enough space for edesc saved in req */ + edesc_size = sizeof(*edesc) + qm_sg_bytes; + if (edesc_size > (crypto_ahash_reqsize(ahash) - + sizeof(struct caam_hash_state))) { + /* allocate space for base edesc and link tables */ + edesc = qi_cache_zalloc(GFP_DMA | flags); + if (!edesc) { + dma_unmap_sg(ctx->dev, req->src, src_nents, + DMA_TO_DEVICE); + return -ENOMEM; + } + edesc->free = true; + } else { + /* get address for base edesc and link tables */ + edesc = (struct ahash_edesc *)((u8 *)state + + sizeof(struct caam_hash_state)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); + } + + edesc->src_nents = src_nents; sg_table = &edesc->sgt[0]; ret = ctx_map_to_qm_sg(ctx->dev, state, ctx->ctx_len, sg_table, @@ -3792,7 +3881,8 @@ static int ahash_finup_ctx(struct ahash_request *req) unmap_ctx: ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ret; } @@ -3807,8 +3897,9 @@ static int ahash_digest(struct ahash_request *req) gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? 
GFP_KERNEL : GFP_ATOMIC; int digestsize = crypto_ahash_digestsize(ahash); - int src_nents, mapped_nents; + int src_nents, mapped_nents, qm_sg_bytes, edesc_size = 0; struct ahash_edesc *edesc; + struct dpaa2_sg_entry *sg_table; int ret = -ENOMEM; state->buf_dma = 0; @@ -3830,21 +3921,33 @@ static int ahash_digest(struct ahash_request *req) mapped_nents = 0; } - /* allocate space for base edesc and link tables */ - edesc = qi_cache_zalloc(GFP_DMA | flags); - if (!edesc) { - dma_unmap_sg(ctx->dev, req->src, src_nents, DMA_TO_DEVICE); - return ret; + qm_sg_bytes = pad_sg_nents(mapped_nents) * sizeof(*sg_table); + + /* Check if there's enough space for edesc saved in req */ + edesc_size = sizeof(*edesc) + qm_sg_bytes; + if (edesc_size > (crypto_ahash_reqsize(ahash) - + sizeof(struct caam_hash_state))) { + /* allocate space for base edesc and link tables */ + edesc = qi_cache_zalloc(GFP_DMA | flags); + if (!edesc) { + dma_unmap_sg(ctx->dev, req->src, src_nents, + DMA_TO_DEVICE); + return ret; + } + edesc->free = true; + } else { + /* get address for base edesc and link tables */ + edesc = (struct ahash_edesc *)((u8 *)state + + sizeof(struct caam_hash_state)); + /* clear memory */ + memset(edesc, 0, sizeof(*edesc)); } edesc->src_nents = src_nents; memset(&req_ctx->fd_flt, 0, sizeof(req_ctx->fd_flt)); if (mapped_nents > 1) { - int qm_sg_bytes; - struct dpaa2_sg_entry *sg_table = &edesc->sgt[0]; - - qm_sg_bytes = pad_sg_nents(mapped_nents) * sizeof(*sg_table); + sg_table = &edesc->sgt[0]; sg_to_qm_sg_last(req->src, req->nbytes, sg_table, 0); edesc->qm_sg_dma = dma_map_single(ctx->dev, sg_table, qm_sg_bytes, DMA_TO_DEVICE); @@ -3887,7 +3990,8 @@ static int ahash_digest(struct ahash_request *req) unmap: ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE); - qi_cache_free(edesc); + if (edesc->free) + qi_cache_free(edesc); return ret; } @@ -3899,18 +4003,17 @@ static int ahash_final_no_ctx(struct ahash_request *req) struct caam_request *req_ctx = &state->caam_req; struct 
dpaa2_fl_entry *in_fle = &req_ctx->fd_flt[1];
 	struct dpaa2_fl_entry *out_fle = &req_ctx->fd_flt[0];
-	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
-		      GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = state->buf;
 	int buflen = state->buflen;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
 	int ret = -ENOMEM;
 
-	/* allocate space for base edesc and link tables */
-	edesc = qi_cache_zalloc(GFP_DMA | flags);
-	if (!edesc)
-		return ret;
+	/* get address for base edesc and link tables */
+	edesc = (struct ahash_edesc *)((u8 *)state +
+		sizeof(struct caam_hash_state));
+	/* clear memory */
+	memset(edesc, 0, sizeof(*edesc));
 
 	if (buflen) {
 		state->buf_dma = dma_map_single(ctx->dev, buf, buflen,
@@ -3960,7 +4063,6 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 
 unmap:
 	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
-	qi_cache_free(edesc);
 	return ret;
 }
 
@@ -3978,7 +4080,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	int *buflen = &state->buflen;
 	int *next_buflen = &state->next_buflen;
 	int in_len = *buflen + req->nbytes, to_hash;
-	int qm_sg_bytes, src_nents, mapped_nents;
+	int qm_sg_bytes, src_nents, mapped_nents, edesc_size = 0;
 	struct ahash_edesc *edesc;
 	int ret = 0;
 
@@ -4006,17 +4108,30 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 			mapped_nents = 0;
 		}
 
-		/* allocate space for base edesc and link tables */
-		edesc = qi_cache_zalloc(GFP_DMA | flags);
-		if (!edesc) {
-			dma_unmap_sg(ctx->dev, req->src, src_nents,
-				     DMA_TO_DEVICE);
-			return -ENOMEM;
+		qm_sg_bytes = pad_sg_nents(1 + mapped_nents) *
+			      sizeof(*sg_table);
+
+		/* Check if there's enough space for edesc saved in req */
+		edesc_size = sizeof(*edesc) + qm_sg_bytes;
+		if (edesc_size > (crypto_ahash_reqsize(ahash) -
+				  sizeof(struct caam_hash_state))) {
+			/* allocate space for base edesc and link tables */
+			edesc = qi_cache_zalloc(GFP_DMA | flags);
+			if (!edesc) {
+				dma_unmap_sg(ctx->dev, req->src, src_nents,
+					     DMA_TO_DEVICE);
+				return -ENOMEM;
+			}
+			edesc->free = true;
+		} else {
+			/* get address for base edesc and link tables */
+			edesc = (struct ahash_edesc *)((u8 *)state +
+				sizeof(struct caam_hash_state));
+			/* clear memory */
+			memset(edesc, 0, sizeof(*edesc));
 		}
 
 		edesc->src_nents = src_nents;
-		qm_sg_bytes = pad_sg_nents(1 + mapped_nents) *
-			      sizeof(*sg_table);
 		sg_table = &edesc->sgt[0];
 
 		ret = buf_map_to_qm_sg(ctx->dev, sg_table, state);
@@ -4081,7 +4196,8 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	return ret;
 unmap_ctx:
 	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_TO_DEVICE);
-	qi_cache_free(edesc);
+	if (edesc->free)
+		qi_cache_free(edesc);
 	return ret;
 }
 
@@ -4096,7 +4212,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		      GFP_KERNEL : GFP_ATOMIC;
 	int buflen = state->buflen;
-	int qm_sg_bytes, src_nents, mapped_nents;
+	int qm_sg_bytes, src_nents, mapped_nents, edesc_size = 0;
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct ahash_edesc *edesc;
 	struct dpaa2_sg_entry *sg_table;
@@ -4119,15 +4235,29 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 		mapped_nents = 0;
 	}
 
-	/* allocate space for base edesc and link tables */
-	edesc = qi_cache_zalloc(GFP_DMA | flags);
-	if (!edesc) {
-		dma_unmap_sg(ctx->dev, req->src, src_nents, DMA_TO_DEVICE);
-		return ret;
+	qm_sg_bytes = pad_sg_nents(2 + mapped_nents) * sizeof(*sg_table);
+
+	/* Check if there's enough space for edesc saved in req */
+	edesc_size = sizeof(*edesc) + qm_sg_bytes;
+	if (edesc_size > (crypto_ahash_reqsize(ahash) -
+			  sizeof(struct caam_hash_state))) {
+		/* allocate space for base edesc and link tables */
+		edesc = qi_cache_zalloc(GFP_DMA | flags);
+		if (!edesc) {
+			dma_unmap_sg(ctx->dev, req->src, src_nents,
+				     DMA_TO_DEVICE);
+			return ret;
+		}
+		edesc->free = true;
+	} else {
+		/* get address for base edesc and link tables */
+		edesc = (struct ahash_edesc *)((u8 *)state +
+			sizeof(struct caam_hash_state));
+		/* clear memory */
+		memset(edesc, 0, sizeof(*edesc));
 	}
 
 	edesc->src_nents = src_nents;
-	qm_sg_bytes = pad_sg_nents(2 + mapped_nents) * sizeof(*sg_table);
 	sg_table = &edesc->sgt[0];
 
 	ret = buf_map_to_qm_sg(ctx->dev, sg_table, state);
@@ -4177,7 +4307,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	return ret;
 unmap:
 	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
-	qi_cache_free(edesc);
+	if (edesc->free)
+		qi_cache_free(edesc);
 	return ret;
 }
 
@@ -4195,7 +4326,7 @@ static int ahash_update_first(struct ahash_request *req)
 	int *buflen = &state->buflen;
 	int *next_buflen = &state->next_buflen;
 	int to_hash;
-	int src_nents, mapped_nents;
+	int src_nents, mapped_nents, qm_sg_bytes, edesc_size = 0;
 	struct ahash_edesc *edesc;
 	int ret = 0;
 
@@ -4224,12 +4355,26 @@ static int ahash_update_first(struct ahash_request *req)
 			mapped_nents = 0;
 		}
 
-		/* allocate space for base edesc and link tables */
-		edesc = qi_cache_zalloc(GFP_DMA | flags);
-		if (!edesc) {
-			dma_unmap_sg(ctx->dev, req->src, src_nents,
-				     DMA_TO_DEVICE);
-			return -ENOMEM;
+		qm_sg_bytes = pad_sg_nents(mapped_nents) * sizeof(*sg_table);
+
+		/* Check if there's enough space for edesc saved in req */
+		edesc_size = sizeof(*edesc) + qm_sg_bytes;
+		if (edesc_size > (crypto_ahash_reqsize(ahash) -
+				  sizeof(struct caam_hash_state))) {
+			/* allocate space for base edesc and link tables */
+			edesc = qi_cache_zalloc(GFP_DMA | flags);
+			if (!edesc) {
+				dma_unmap_sg(ctx->dev, req->src, src_nents,
+					     DMA_TO_DEVICE);
+				return -ENOMEM;
+			}
+			edesc->free = true;
+		} else {
+			/* get address for base edesc and link tables */
+			edesc = (struct ahash_edesc *)((u8 *)state +
+				sizeof(struct caam_hash_state));
+			/* clear memory */
+			memset(edesc, 0, sizeof(*edesc));
 		}
 
 		edesc->src_nents = src_nents;
@@ -4240,11 +4385,7 @@ static int ahash_update_first(struct ahash_request *req)
 		dpaa2_fl_set_len(in_fle, to_hash);
 
 		if (mapped_nents > 1) {
-			int qm_sg_bytes;
-
 			sg_to_qm_sg_last(req->src, src_len, sg_table, 0);
-			qm_sg_bytes = pad_sg_nents(mapped_nents) *
-				      sizeof(*sg_table);
 			edesc->qm_sg_dma = dma_map_single(ctx->dev, sg_table,
 							  qm_sg_bytes,
 							  DMA_TO_DEVICE);
@@ -4306,7 +4447,8 @@ static int ahash_update_first(struct ahash_request *req)
 	return ret;
 unmap_ctx:
 	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_TO_DEVICE);
-	qi_cache_free(edesc);
+	if (edesc->free)
+		qi_cache_free(edesc);
 	return ret;
 }
 
@@ -4553,7 +4695,7 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 			 HASH_MSG_LEN + 64,
 			 HASH_MSG_LEN + SHA512_DIGEST_SIZE };
 	dma_addr_t dma_addr;
-	int i;
+	int i, extra_reqsize = 0;
 
 	ctx->dev = caam_hash->dev;
 
@@ -4591,8 +4733,15 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 				      OP_ALG_ALGSEL_SUBMASK) >>
 				     OP_ALG_ALGSEL_SHIFT];
 
+	/* Compute extra space needed for base edesc and link tables */
+	extra_reqsize = sizeof(struct ahash_edesc) +
+			/* link tables for src:
+			 * 4 entries max + max 2 for remaining buf, aligned = 8
+			 */
+			(8 * sizeof(struct dpaa2_sg_entry));
+
 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
-				 sizeof(struct caam_hash_state));
+				 sizeof(struct caam_hash_state) + extra_reqsize);
 
 	/*
 	 * For keyed hash algorithms shared descriptors
@@ -4647,7 +4796,7 @@ static struct caam_hash_alg *caam_hash_alloc(struct device *dev,
 	alg->cra_priority = CAAM_CRA_PRIORITY;
 	alg->cra_blocksize = template->blocksize;
 	alg->cra_alignmask = 0;
-	alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY;
+	alg->cra_flags = CRYPTO_ALG_ASYNC;
 
 	t_alg->alg_type = template->alg_type;
 	t_alg->dev = dev;
diff --git a/drivers/crypto/caam/caamalg_qi2.h b/drivers/crypto/caam/caamalg_qi2.h
index d35253407ade..3e7367784b39 100644
--- a/drivers/crypto/caam/caamalg_qi2.h
+++ b/drivers/crypto/caam/caamalg_qi2.h
@@ -102,6 +102,7 @@ struct dpaa2_caam_priv_per_cpu {
  * @dst_nents: number of segments in output scatterlist
  * @iv_dma: dma address of iv for checking continuity and link table
  * @qm_sg_bytes: length of dma mapped h/w link table
+ * @free: stored to determine if aead_edesc needs to be freed
  * @qm_sg_dma: bus physical mapped address of h/w link table
  * @assoclen: associated data length, in CAAM endianness
  * @assoclen_dma: bus physical mapped address of req->assoclen
@@ -112,6 +113,7 @@ struct aead_edesc {
 	int dst_nents;
 	dma_addr_t iv_dma;
 	int qm_sg_bytes;
+	bool free;
 	dma_addr_t qm_sg_dma;
 	unsigned int assoclen;
 	dma_addr_t assoclen_dma;
@@ -124,6 +126,7 @@ struct aead_edesc {
  * @dst_nents: number of segments in output scatterlist
  * @iv_dma: dma address of iv for checking continuity and link table
  * @qm_sg_bytes: length of dma mapped qm_sg space
+ * @free: stored to determine if skcipher_edesc needs to be freed
  * @qm_sg_dma: I/O virtual address of h/w link table
  * @sgt: the h/w link table, followed by IV
  */
@@ -132,6 +135,7 @@ struct skcipher_edesc {
 	int dst_nents;
 	dma_addr_t iv_dma;
 	int qm_sg_bytes;
+	bool free;
 	dma_addr_t qm_sg_dma;
 	struct dpaa2_sg_entry sgt[];
 };
@@ -141,12 +145,14 @@ struct skcipher_edesc {
  * @qm_sg_dma: I/O virtual address of h/w link table
  * @src_nents: number of segments in input scatterlist
  * @qm_sg_bytes: length of dma mapped qm_sg space
+ * @free: stored to determine if ahash_edesc needs to be freed
  * @sgt: pointer to h/w link table
  */
 struct ahash_edesc {
 	dma_addr_t qm_sg_dma;
 	int src_nents;
 	int qm_sg_bytes;
+	bool free;
 	struct dpaa2_sg_entry sgt[];
 };