From patchwork Tue Dec 12 14:52:47 2017
X-Patchwork-Submitter: Gilad Ben-Yossef
X-Patchwork-Id: 121522
From: Gilad Ben-Yossef
To: Greg Kroah-Hartman
Cc: Ofir Drang, linux-crypto@vger.kernel.org,
	driverdev-devel@linuxdriverproject.org, devel@driverdev.osuosl.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 01/24] staging: ccree: remove ahash wrappers
Date: Tue, 12 Dec 2017 14:52:47 +0000
Message-Id: <1513090395-7938-2-git-send-email-gilad@benyossef.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1513090395-7938-1-git-send-email-gilad@benyossef.com>
References: <1513090395-7938-1-git-send-email-gilad@benyossef.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Remove a no-longer-needed abstraction around the ccree hash crypto API
internals that used to allow the same ops to be used in both a
synchronous and an asynchronous fashion.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_hash.c | 260 ++++++++++++---------------------------
 1 file changed, 76 insertions(+), 184 deletions(-)

-- 
2.7.4

diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index 0c4f3db..a762eef 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -426,13 +426,15 @@ static void ssi_hash_complete(struct device *dev, void *ssi_req)
 	req->base.complete(&req->base, 0);
 }
 
-static int ssi_hash_digest(struct ahash_req_ctx *state,
-			   struct ssi_hash_ctx *ctx,
-			   unsigned int digestsize,
-			   struct scatterlist *src,
-			   unsigned int nbytes, u8 *result,
-			   void *async_req)
+static int ssi_ahash_digest(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	u32 digestsize = crypto_ahash_digestsize(tfm);
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
+	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
@@ -460,11 +462,9 @@ static int ssi_hash_digest(struct ahash_req_ctx *state,
 		return -ENOMEM;
 	}
 
-	if (async_req) {
-		/* Setup DX request structure */
-		ssi_req.user_cb = (void *)ssi_hash_digest_complete;
-		ssi_req.user_arg = (void *)async_req;
-	}
+	/* Setup DX request structure */
+	ssi_req.user_cb = ssi_hash_digest_complete;
+	ssi_req.user_arg = req;
 
 	/* If HMAC then load hash IPAD xor key, if HASH then load initial
 	 * digest
@@ -563,44 +563,32 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	/* TODO */
 	set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize,
-		      NS_BIT, (async_req ? 1 : 0));
-	if (async_req)
-		set_queue_last_ind(&desc[idx]);
+		      NS_BIT, 1);
+	set_queue_last_ind(&desc[idx]);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
 	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
 	idx++;
 
-	if (async_req) {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
-		if (rc != -EINPROGRESS) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-			ssi_hash_unmap_request(dev, state, ctx);
-		}
-	} else {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
-		if (rc) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-		} else {
-			cc_unmap_hash_request(dev, state, src, false);
-		}
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (rc != -EINPROGRESS) {
+		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+		cc_unmap_hash_request(dev, state, src, true);
 		ssi_hash_unmap_result(dev, state, digestsize, result);
 		ssi_hash_unmap_request(dev, state, ctx);
 	}
 	return rc;
 }
 
-static int ssi_hash_update(struct ahash_req_ctx *state,
-			   struct ssi_hash_ctx *ctx,
-			   unsigned int block_size,
-			   struct scatterlist *src,
-			   unsigned int nbytes,
-			   void *async_req)
+static int ssi_ahash_update(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct ssi_crypto_req ssi_req = {};
 	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
@@ -628,11 +616,9 @@ static int ssi_hash_update(struct ahash_req_ctx *state,
 		return -ENOMEM;
 	}
 
-	if (async_req) {
-		/* Setup DX request structure */
-		ssi_req.user_cb = (void *)ssi_hash_update_complete;
-		ssi_req.user_arg = async_req;
-	}
+	/* Setup DX request structure */
+	ssi_req.user_cb = ssi_hash_update_complete;
+	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -666,39 +652,29 @@ static int ssi_hash_update(struct ahash_req_ctx *state,
 	hw_desc_init(&desc[idx]);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	set_dout_dlli(&desc[idx], state->digest_bytes_len_dma_addr,
-		      HASH_LEN_SIZE, NS_BIT, (async_req ? 1 : 0));
-	if (async_req)
-		set_queue_last_ind(&desc[idx]);
+		      HASH_LEN_SIZE, NS_BIT, 1);
+	set_queue_last_ind(&desc[idx]);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE1);
 	idx++;
 
-	if (async_req) {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
-		if (rc != -EINPROGRESS) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-		}
-	} else {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
-		if (rc) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-		} else {
-			cc_unmap_hash_request(dev, state, src, false);
-		}
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (rc != -EINPROGRESS) {
+		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+		cc_unmap_hash_request(dev, state, src, true);
 	}
 	return rc;
 }
 
-static int ssi_hash_finup(struct ahash_req_ctx *state,
-			  struct ssi_hash_ctx *ctx,
-			  unsigned int digestsize,
-			  struct scatterlist *src,
-			  unsigned int nbytes,
-			  u8 *result,
-			  void *async_req)
+static int ssi_ahash_finup(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	u32 digestsize = crypto_ahash_digestsize(tfm);
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
+	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
@@ -718,11 +694,9 @@ static int ssi_hash_finup(struct ahash_req_ctx *state,
 		return -ENOMEM;
 	}
 
-	if (async_req) {
-		/* Setup DX request structure */
-		ssi_req.user_cb = (void *)ssi_hash_complete;
-		ssi_req.user_arg = async_req;
-	}
+	/* Setup DX request structure */
+	ssi_req.user_cb = ssi_hash_complete;
+	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -794,9 +768,8 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	hw_desc_init(&desc[idx]);
 	/* TODO */
 	set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize,
-		      NS_BIT, (async_req ? 1 : 0));
-	if (async_req)
-		set_queue_last_ind(&desc[idx]);
+		      NS_BIT, 1);
+	set_queue_last_ind(&desc[idx]);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
@@ -804,36 +777,24 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	if (async_req) {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
-		if (rc != -EINPROGRESS) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-		}
-	} else {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
-		if (rc) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-		} else {
-			cc_unmap_hash_request(dev, state, src, false);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-			ssi_hash_unmap_request(dev, state, ctx);
-		}
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (rc != -EINPROGRESS) {
+		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+		cc_unmap_hash_request(dev, state, src, true);
+		ssi_hash_unmap_result(dev, state, digestsize, result);
 	}
 	return rc;
 }
 
-static int ssi_hash_final(struct ahash_req_ctx *state,
-			  struct ssi_hash_ctx *ctx,
-			  unsigned int digestsize,
-			  struct scatterlist *src,
-			  unsigned int nbytes,
-			  u8 *result,
-			  void *async_req)
+static int ssi_ahash_final(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	u32 digestsize = crypto_ahash_digestsize(tfm);
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
+	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
@@ -854,11 +815,9 @@ static int ssi_hash_final(struct ahash_req_ctx *state,
 		return -ENOMEM;
 	}
 
-	if (async_req) {
-		/* Setup DX request structure */
-		ssi_req.user_cb = (void *)ssi_hash_complete;
-		ssi_req.user_arg = async_req;
-	}
+	/* Setup DX request structure */
+	ssi_req.user_cb = ssi_hash_complete;
+	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -939,9 +898,8 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	/* Get final MAC result */
 	hw_desc_init(&desc[idx]);
 	set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize,
-		      NS_BIT, (async_req ? 1 : 0));
-	if (async_req)
-		set_queue_last_ind(&desc[idx]);
+		      NS_BIT, 1);
+	set_queue_last_ind(&desc[idx]);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
@@ -949,34 +907,25 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	if (async_req) {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
-		if (rc != -EINPROGRESS) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-		}
-	} else {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
-		if (rc) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-		} else {
-			cc_unmap_hash_request(dev, state, src, false);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-			ssi_hash_unmap_request(dev, state, ctx);
-		}
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (rc != -EINPROGRESS) {
+		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+		cc_unmap_hash_request(dev, state, src, true);
+		ssi_hash_unmap_result(dev, state, digestsize, result);
 	}
 
 	return rc;
 }
 
-static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx)
+static int ssi_ahash_init(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
-	state->xcbc_count = 0;
+	dev_dbg(dev, "===== init (%d) ====\n", req->nbytes);
+	state->xcbc_count = 0;
 	ssi_hash_map_request(dev, state, ctx);
 
 	return 0;
@@ -1713,63 +1662,6 @@ static int ssi_mac_digest(struct ahash_request *req)
 	return rc;
 }
 
-//ahash wrap functions
-static int ssi_ahash_digest(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	u32 digestsize = crypto_ahash_digestsize(tfm);
-
-	return ssi_hash_digest(state, ctx, digestsize, req->src, req->nbytes,
-			       req->result, (void *)req);
-}
-
-static int ssi_ahash_update(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
-
-	return ssi_hash_update(state, ctx, block_size, req->src, req->nbytes,
-			       (void *)req);
-}
-
-static int ssi_ahash_finup(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	u32 digestsize = crypto_ahash_digestsize(tfm);
-
-	return ssi_hash_finup(state, ctx, digestsize, req->src, req->nbytes,
-			      req->result, (void *)req);
-}
-
-static int ssi_ahash_final(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	u32 digestsize = crypto_ahash_digestsize(tfm);
-
-	return ssi_hash_final(state, ctx, digestsize, req->src, req->nbytes,
-			      req->result, (void *)req);
-}
-
-static int ssi_ahash_init(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct device *dev = drvdata_to_dev(ctx->drvdata);
-
-	dev_dbg(dev, "===== init (%d) ====\n", req->nbytes);
-
-	return ssi_hash_init(state, ctx);
-}
-
 static int ssi_ahash_export(struct ahash_request *req, void *out)
 {
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
@@ -1829,7 +1721,7 @@ static int ssi_ahash_import(struct ahash_request *req, const void *in)
 
 	/* call init() to allocate bufs if the user hasn't */
 	if (!state->digest_buff) {
-		rc = ssi_hash_init(state, ctx);
+		rc = ssi_ahash_init(req);
 		if (rc)
 			goto out;
 	}
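
For readers skimming the diff: the pattern throughout is to fold a helper that
branched on an opaque async_req cookie into the single asynchronous ahash entry
point that remains, which now derives everything it needs from the request it is
given. The following is a small, self-contained sketch of that pattern only, not
the ccree driver code; all names (struct hash_request, old_*, new_*) are
hypothetical and it builds as an ordinary userspace C program.

/*
 * Sketch of the refactoring pattern: a dual-mode helper plus a thin
 * wrapper is collapsed into one request-driven entry point.
 */
#include <stdio.h>

struct hash_request {
	const unsigned char *src;
	unsigned int nbytes;
};

/* Before: one helper served sync and async callers, selected by async_req. */
static int old_hash_digest(const unsigned char *src, unsigned int nbytes,
			   void *async_req)
{
	if (async_req)
		printf("async digest of %u bytes\n", nbytes);
	else
		printf("sync digest of %u bytes\n", nbytes);
	return 0;
}

/* Before: a wrapper whose only job was to unpack and repack the request. */
static int old_ahash_digest(struct hash_request *req)
{
	return old_hash_digest(req->src, req->nbytes, req);
}

/* After: the entry point does the work directly; no mode flag, no wrapper. */
static int new_ahash_digest(struct hash_request *req)
{
	printf("async digest of %u bytes\n", req->nbytes);
	return 0;
}

int main(void)
{
	struct hash_request req = {
		.src = (const unsigned char *)"abc",
		.nbytes = 3,
	};

	old_ahash_digest(&req);
	new_ahash_digest(&req);
	return 0;
}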