From patchwork Sun Oct 22 08:10:34 2023
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 04/30] crypto: sun8i-ss - remove unnecessary alignmask for ahashes
Date: Sun, 22 Oct 2023 01:10:34 -0700
Message-ID: <20231022081100.123613-5-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@kernel.org>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and result
buffers.  The drivers that happen to be specifying an alignmask for ahash
rarely actually need it.  When they do, it's easily fixable, especially
considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch makes
the sun8i-ss driver no longer use it.  This driver didn't actually rely on
it; it only writes to the result buffer in sun8i_ss_hash_run(), simply using
memcpy().  And sun8i_ss_hmac_setkey() does not assume any alignment for the
key buffer.
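
For illustration, the result-buffer handling this relies on has roughly the
following shape.  This is a minimal sketch with hypothetical names
(example_reqctx, digest_buf), not code from the driver; the point is that
memcpy() stores the digest at req->result regardless of that buffer's
alignment:

	/* Sketch: copy a completed digest to the caller's result buffer.
	 * memcpy() accepts any destination address, so no alignmask is
	 * needed for this. */
	static void example_hash_finish(struct ahash_request *req)
	{
		struct example_reqctx *rctx = ahash_request_ctx(req);
		unsigned int ds =
			crypto_ahash_digestsize(crypto_ahash_reqtfm(req));

		memcpy(req->result, rctx->digest_buf, ds);
	}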

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
index 4a9587285c04f..2532d2abc4f7e 100644
--- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
+++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
@@ -315,21 +315,20 @@ static struct sun8i_ss_alg_template ss_algs[] = {
 	.import = sun8i_ss_hash_import,
 	.init_tfm = sun8i_ss_hash_init_tfm,
 	.exit_tfm = sun8i_ss_hash_exit_tfm,
 	.halg = {
 		.digestsize = MD5_DIGEST_SIZE,
 		.statesize = sizeof(struct md5_state),
 		.base = {
 			.cra_name = "md5",
 			.cra_driver_name = "md5-sun8i-ss",
 			.cra_priority = 300,
-			.cra_alignmask = 3,
 			.cra_flags = CRYPTO_ALG_TYPE_AHASH |
 				CRYPTO_ALG_ASYNC |
 				CRYPTO_ALG_NEED_FALLBACK,
 			.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
 			.cra_module = THIS_MODULE,
 		}
 	}
 },
 .alg.hash.op = {
@@ -348,21 +347,20 @@ static struct sun8i_ss_alg_template ss_algs[] = {
 	.import = sun8i_ss_hash_import,
 	.init_tfm = sun8i_ss_hash_init_tfm,
 	.exit_tfm = sun8i_ss_hash_exit_tfm,
 	.halg = {
 		.digestsize = SHA1_DIGEST_SIZE,
 		.statesize = sizeof(struct sha1_state),
 		.base = {
 			.cra_name = "sha1",
 			.cra_driver_name = "sha1-sun8i-ss",
 			.cra_priority = 300,
-			.cra_alignmask = 3,
 			.cra_flags = CRYPTO_ALG_TYPE_AHASH |
 				CRYPTO_ALG_ASYNC |
 				CRYPTO_ALG_NEED_FALLBACK,
 			.cra_blocksize = SHA1_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
 			.cra_module = THIS_MODULE,
 		}
 	}
 },
 .alg.hash.op = {
@@ -381,21 +379,20 @@ static struct sun8i_ss_alg_template ss_algs[] = {
 	.import = sun8i_ss_hash_import,
 	.init_tfm = sun8i_ss_hash_init_tfm,
 	.exit_tfm = sun8i_ss_hash_exit_tfm,
 	.halg = {
 		.digestsize = SHA224_DIGEST_SIZE,
 		.statesize = sizeof(struct sha256_state),
 		.base = {
 			.cra_name = "sha224",
 			.cra_driver_name = "sha224-sun8i-ss",
 			.cra_priority = 300,
-			.cra_alignmask = 3,
 			.cra_flags = CRYPTO_ALG_TYPE_AHASH |
 				CRYPTO_ALG_ASYNC |
 				CRYPTO_ALG_NEED_FALLBACK,
 			.cra_blocksize = SHA224_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
 			.cra_module = THIS_MODULE,
 		}
 	}
 },
 .alg.hash.op = {
@@ -414,21 +411,20 @@ static struct sun8i_ss_alg_template ss_algs[] = {
 	.import = sun8i_ss_hash_import,
 	.init_tfm = sun8i_ss_hash_init_tfm,
 	.exit_tfm = sun8i_ss_hash_exit_tfm,
 	.halg = {
 		.digestsize = SHA256_DIGEST_SIZE,
 		.statesize = sizeof(struct sha256_state),
 		.base = {
 			.cra_name = "sha256",
 			.cra_driver_name = "sha256-sun8i-ss",
 			.cra_priority = 300,
-			.cra_alignmask = 3,
 			.cra_flags = CRYPTO_ALG_TYPE_AHASH |
 				CRYPTO_ALG_ASYNC |
 				CRYPTO_ALG_NEED_FALLBACK,
 			.cra_blocksize = SHA256_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
 			.cra_module = THIS_MODULE,
 		}
 	}
 },
 .alg.hash.op = {
@@ -448,21 +444,20 @@ static struct sun8i_ss_alg_template ss_algs[] = {
 	.init_tfm = sun8i_ss_hash_init_tfm,
 	.exit_tfm = sun8i_ss_hash_exit_tfm,
 	.setkey = sun8i_ss_hmac_setkey,
 	.halg = {
 		.digestsize = SHA1_DIGEST_SIZE,
 		.statesize = sizeof(struct sha1_state),
 		.base = {
 			.cra_name = "hmac(sha1)",
 			.cra_driver_name = "hmac-sha1-sun8i-ss",
 			.cra_priority = 300,
-			.cra_alignmask = 3,
 			.cra_flags = CRYPTO_ALG_TYPE_AHASH |
 				CRYPTO_ALG_ASYNC |
 				CRYPTO_ALG_NEED_FALLBACK,
 			.cra_blocksize = SHA1_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct sun8i_ss_hash_tfm_ctx),
 			.cra_module = THIS_MODULE,
 		}
 	}
 },
 .alg.hash.op = {

From patchwork Sun Oct 22 08:10:35 2023
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 05/30] crypto: atmel - remove unnecessary alignmask for ahashes
Date: Sun, 22 Oct 2023 01:10:35 -0700
Message-ID: <20231022081100.123613-6-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@kernel.org>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and result
buffers.  The drivers that happen to be specifying an alignmask for ahash
rarely actually need it.  When they do, it's easily fixable, especially
considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch makes
the atmel driver no longer use it.  This driver didn't actually rely on it;
it only writes to the result buffer in atmel_sha_copy_ready_hash(), simply
using memcpy().  And this driver didn't set an alignmask for any keyed hash
algorithms, so the key buffer need not be considered.
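
Relatedly, a setkey that makes no alignment assumption about the key buffer
has the following general shape.  This is a minimal sketch with a
hypothetical context struct (example_tfm_ctx), not code from the driver:

	/* Sketch: memcpy() accepts any source address, so the key buffer
	 * passed in by the crypto API needs no particular alignment. */
	static int example_hmac_setkey(struct crypto_ahash *tfm,
				       const u8 *key, unsigned int keylen)
	{
		struct example_tfm_ctx *ctx = crypto_ahash_ctx(tfm);

		if (keylen > sizeof(ctx->key))
			return -EINVAL;
		memcpy(ctx->key, key, keylen);
		ctx->keylen = keylen;
		return 0;
	}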

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 drivers/crypto/atmel-sha.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index 3622120add625..6cd3fc493027a 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -1293,29 +1293,27 @@ static struct ahash_alg sha_224_alg = {
 	.halg.base.cra_blocksize = SHA224_BLOCK_SIZE,
 	.halg.digestsize = SHA224_DIGEST_SIZE,
 };

 static struct ahash_alg sha_384_512_algs[] = {
 {
 	.halg.base.cra_name = "sha384",
 	.halg.base.cra_driver_name = "atmel-sha384",
 	.halg.base.cra_blocksize = SHA384_BLOCK_SIZE,
-	.halg.base.cra_alignmask = 0x3,
 	.halg.digestsize = SHA384_DIGEST_SIZE,
 },
 {
 	.halg.base.cra_name = "sha512",
 	.halg.base.cra_driver_name = "atmel-sha512",
 	.halg.base.cra_blocksize = SHA512_BLOCK_SIZE,
-	.halg.base.cra_alignmask = 0x3,
 	.halg.digestsize = SHA512_DIGEST_SIZE,
 },
 };

 static void atmel_sha_queue_task(unsigned long data)
 {
 	struct atmel_sha_dev *dd = (struct atmel_sha_dev *)data;
 	atmel_sha_handle_queue(dd, NULL);

From patchwork Sun Oct 22 08:10:37 2023
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 07/30] crypto: mxs-dcp - remove unnecessary alignmask for ahashes
Date: Sun, 22 Oct 2023 01:10:37 -0700
Message-ID: <20231022081100.123613-8-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@kernel.org>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and result
buffers.  The drivers that happen to be specifying an alignmask for ahash
rarely actually need it.  When they do, it's easily fixable, especially
considering that these buffers cannot be used for DMA.
In preparation for removing alignmask support from ahash, this patch makes
the mxs-dcp driver no longer use it.  This driver didn't actually rely on
it; it only writes to the result buffer in dcp_sha_req_to_buf(), using a
bytewise copy.  And this driver only supports unkeyed hash algorithms, so
the key buffer need not be considered.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 drivers/crypto/mxs-dcp.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
index f6b7bce0e6568..5c91b49b0fc71 100644
--- a/drivers/crypto/mxs-dcp.c
+++ b/drivers/crypto/mxs-dcp.c
@@ -901,21 +901,20 @@ static struct ahash_alg dcp_sha1_alg = {
 	.digest = dcp_sha_digest,
 	.import = dcp_sha_import,
 	.export = dcp_sha_export,
 	.halg = {
 		.digestsize = SHA1_DIGEST_SIZE,
 		.statesize = sizeof(struct dcp_export_state),
 		.base = {
 			.cra_name = "sha1",
 			.cra_driver_name = "sha1-dcp",
 			.cra_priority = 400,
-			.cra_alignmask = 63,
 			.cra_flags = CRYPTO_ALG_ASYNC,
 			.cra_blocksize = SHA1_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct dcp_async_ctx),
 			.cra_module = THIS_MODULE,
 			.cra_init = dcp_sha_cra_init,
 			.cra_exit = dcp_sha_cra_exit,
 		},
 	},
 };
@@ -928,21 +927,20 @@ static struct ahash_alg dcp_sha256_alg = {
 	.digest = dcp_sha_digest,
 	.import = dcp_sha_import,
 	.export = dcp_sha_export,
 	.halg = {
 		.digestsize = SHA256_DIGEST_SIZE,
 		.statesize = sizeof(struct dcp_export_state),
 		.base = {
 			.cra_name = "sha256",
 			.cra_driver_name = "sha256-dcp",
 			.cra_priority = 400,
-			.cra_alignmask = 63,
 			.cra_flags = CRYPTO_ALG_ASYNC,
 			.cra_blocksize = SHA256_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct dcp_async_ctx),
 			.cra_module = THIS_MODULE,
 			.cra_init = dcp_sha_cra_init,
 			.cra_exit = dcp_sha_cra_exit,
 		},
 	},
 };
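
The bytewise copy referred to above has roughly the following shape.  This
is a minimal sketch (the out_buf parameter is hypothetical), not the
driver's code:

	/* Sketch: a byte-at-a-time copy imposes no alignment requirement
	 * on either the source buffer or req->result. */
	static void example_copy_digest(struct ahash_request *req,
					const u8 *out_buf, unsigned int len)
	{
		unsigned int i;

		for (i = 0; i < len; i++)
			req->result[i] = out_buf[i];
	}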

From patchwork Sun Oct 22 08:10:40 2023
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 10/30] crypto: omap-sham - stop setting alignmask for ahashes
Date: Sun, 22 Oct 2023 01:10:40 -0700
Message-ID: <20231022081100.123613-11-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@kernel.org>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and result
buffers.  The drivers that happen to be specifying an alignmask for ahash
rarely actually need it.  When they do, it's easily fixable, especially
considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch makes
the omap-sham driver no longer use it.  This driver did actually rely on it,
but only for storing to the result buffer using __u32 stores in
omap_sham_copy_ready_hash().  This patch makes omap_sham_copy_ready_hash()
use put_unaligned() instead.  (It really should use a specific endianness,
but that's an existing bug.)

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 drivers/crypto/omap-sham.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index a6b4a0b3ace30..c4d77d8533313 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -349,24 +349,24 @@ static void omap_sham_copy_ready_hash(struct ahash_request *req)
 		break;
 	case FLAGS_MODE_SHA512:
 		d = SHA512_DIGEST_SIZE / sizeof(u32);
 		break;
 	default:
 		d = 0;
 	}

 	if (big_endian)
 		for (i = 0; i < d; i++)
-			hash[i] = be32_to_cpup((__be32 *)in + i);
+			put_unaligned(be32_to_cpup((__be32 *)in + i), &hash[i]);
 	else
 		for (i = 0; i < d; i++)
-			hash[i] = le32_to_cpup((__le32 *)in + i);
+			put_unaligned(le32_to_cpup((__le32 *)in + i), &hash[i]);
 }

 static void omap_sham_write_ctrl_omap2(struct omap_sham_dev *dd, size_t length,
 				       int final, int dma)
 {
 	struct omap_sham_reqctx *ctx = ahash_request_ctx(dd->req);
 	u32 val = length << 5, mask;

 	if (likely(ctx->digcnt))
 		omap_sham_write(dd, SHA_REG_DIGCNT(dd), ctx->digcnt);
@@ -1428,21 +1428,20 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
 	.base.halg.digestsize = SHA1_DIGEST_SIZE,
 	.base.halg.base = {
 		.cra_name = "sha1",
 		.cra_driver_name = "omap-sha1",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA1_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1451,21 +1450,20 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
 	.base.halg.digestsize = MD5_DIGEST_SIZE,
 	.base.halg.base = {
 		.cra_name = "md5",
 		.cra_driver_name = "omap-md5",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA1_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1476,21 +1474,20 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
 	.base.halg.base = {
 		.cra_name = "hmac(sha1)",
 		.cra_driver_name = "omap-hmac-sha1",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA1_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx) +
 			       sizeof(struct omap_sham_hmac_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_sha1_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1501,21 +1498,20 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
 	.base.halg.base = {
 		.cra_name = "hmac(md5)",
 		.cra_driver_name = "omap-hmac-md5",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA1_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx) +
 			       sizeof(struct omap_sham_hmac_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_md5_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 }
 };

 /* OMAP4 has some algs in addition to what OMAP2 has */
 static struct ahash_engine_alg algs_sha224_sha256[] = {
@@ -1528,21 +1524,20 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
 	.base.halg.digestsize = SHA224_DIGEST_SIZE,
 	.base.halg.base = {
 		.cra_name = "sha224",
 		.cra_driver_name = "omap-sha224",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA224_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1551,21 +1546,20 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
 	.base.halg.digestsize = SHA256_DIGEST_SIZE,
 	.base.halg.base = {
 		.cra_name = "sha256",
 		.cra_driver_name = "omap-sha256",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA256_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1576,21 +1570,20 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
 	.base.halg.base = {
 		.cra_name = "hmac(sha224)",
 		.cra_driver_name = "omap-hmac-sha224",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA224_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx) +
 			       sizeof(struct omap_sham_hmac_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_sha224_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1601,21 +1594,20 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
 	.base.halg.base = {
 		.cra_name = "hmac(sha256)",
 		.cra_driver_name = "omap-hmac-sha256",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA256_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx) +
 			       sizeof(struct omap_sham_hmac_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_sha256_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 };

 static struct ahash_engine_alg algs_sha384_sha512[] = {
 {
@@ -1627,21 +1619,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 	.base.halg.digestsize = SHA384_DIGEST_SIZE,
 	.base.halg.base = {
 		.cra_name = "sha384",
 		.cra_driver_name = "omap-sha384",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA384_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1650,21 +1641,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 	.base.halg.digestsize = SHA512_DIGEST_SIZE,
 	.base.halg.base = {
 		.cra_name = "sha512",
 		.cra_driver_name = "omap-sha512",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA512_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1675,21 +1665,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 	.base.halg.base = {
 		.cra_name = "hmac(sha384)",
 		.cra_driver_name = "omap-hmac-sha384",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA384_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx) +
 			       sizeof(struct omap_sham_hmac_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_sha384_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 {
 	.base.init = omap_sham_init,
 	.base.update = omap_sham_update,
 	.base.final = omap_sham_final,
@@ -1700,21 +1689,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 	.base.halg.base = {
 		.cra_name = "hmac(sha512)",
 		.cra_driver_name = "omap-hmac-sha512",
 		.cra_priority = 400,
 		.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY |
 			     CRYPTO_ALG_ASYNC |
 			     CRYPTO_ALG_NEED_FALLBACK,
 		.cra_blocksize = SHA512_BLOCK_SIZE,
 		.cra_ctxsize = sizeof(struct omap_sham_ctx) +
 			       sizeof(struct omap_sham_hmac_ctx),
-		.cra_alignmask = OMAP_ALIGN_MASK,
 		.cra_module = THIS_MODULE,
 		.cra_init = omap_sham_cra_sha512_init,
 		.cra_exit = omap_sham_cra_exit,
 	},
 	.op.do_one_request = omap_sham_hash_one_req,
 },
 };

 static void omap_sham_done_task(unsigned long data)
 {
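
To make the put_unaligned() conversion concrete: a plain u32 store is only
guaranteed to work through a suitably aligned pointer, whereas
put_unaligned() is defined for any address.  A minimal standalone
illustration (hypothetical buffer, not driver code):

	#include <asm/unaligned.h>

	u8 buf[8];
	u32 *p = (u32 *)(buf + 1);	/* deliberately misaligned */

	/* *p = 0x01020304;	a direct store may fault on
				strict-alignment architectures */
	put_unaligned(0x01020304, p);	/* always well-defined */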
"EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231594AbjJVISr (ORCPT ); Sun, 22 Oct 2023 04:18:47 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 30221F4 for ; Sun, 22 Oct 2023 01:18:46 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id F1D6CC43395 for ; Sun, 22 Oct 2023 08:18:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697962726; bh=WXgzK4O6+fvazhY9rDTpzwXQKpRF0dnRpBE1ykWmc+k=; h=From:To:Subject:Date:In-Reply-To:References:From; b=NjH2+gb9AhK/ol8kAreWX6MEOf75cm233hTnviYBhIgDJoaI5+SwzaLYa3KADhxDN xN+BCk9FAsdVR+ErQ0+sadJSOD6g/3Pmool03s/+n1xM1EvsXAUnWuL5q9ATjF1QFw vE4bBDGl81OGIawB369aqMNAbg2Mnwk/Qz1qIPT4ccFnvpl34mWok20gvihlvk1jop XxvPrtSzLTM99c+4LfM0gsh2Moo5a7146rtMyDO9ohEzRq5NSJwcWjfVItCkFqJ9CJ jY9nRjh00+mCtnYxACacznKBoDEfl9ESUfdCwcCPTEQBwX6UbU2cWyQS0WRB+yYq6i OrPHjO3VJICNQ== From: Eric Biggers To: linux-crypto@vger.kernel.org Subject: [PATCH 11/30] crypto: rockchip - remove unnecessary alignmask for ahashes Date: Sun, 22 Oct 2023 01:10:41 -0700 Message-ID: <20231022081100.123613-12-ebiggers@kernel.org> X-Mailer: git-send-email 2.42.0 In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org> References: <20231022081100.123613-1-ebiggers@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers The crypto API's support for alignmasks for ahash algorithms is nearly useless, as its only effect is to cause the API to align the key and result buffers. The drivers that happen to be specifying an alignmask for ahash rarely actually need it. When they do, it's easily fixable, especially considering that these buffers cannot be used for DMA. In preparation for removing alignmask support from ahash, this patch makes the rockchip driver no longer use it. This driver didn't actually rely on it; it only writes to the result buffer in rk_hash_run(), already using put_unaligned_le32(). And this driver only supports unkeyed hash algorithms, so the key buffer need not be considered. 

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 drivers/crypto/rockchip/rk3288_crypto_ahash.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
index 8c143180645e5..1b13b4aa16ecc 100644
--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
@@ -386,21 +386,20 @@ struct rk_crypto_tmp rk_ahash_sha1 = {
 			.digestsize = SHA1_DIGEST_SIZE,
 			.statesize = sizeof(struct sha1_state),
 			.base = {
 				.cra_name = "sha1",
 				.cra_driver_name = "rk-sha1",
 				.cra_priority = 300,
 				.cra_flags = CRYPTO_ALG_ASYNC |
 					     CRYPTO_ALG_NEED_FALLBACK,
 				.cra_blocksize = SHA1_BLOCK_SIZE,
 				.cra_ctxsize = sizeof(struct rk_ahash_ctx),
-				.cra_alignmask = 3,
 				.cra_module = THIS_MODULE,
 			}
 		}
 	},
 	.alg.hash.op = {
 		.do_one_request = rk_hash_run,
 	},
 };

 struct rk_crypto_tmp rk_ahash_sha256 = {
@@ -419,21 +418,20 @@ struct rk_crypto_tmp rk_ahash_sha256 = {
 			.digestsize = SHA256_DIGEST_SIZE,
 			.statesize = sizeof(struct sha256_state),
 			.base = {
 				.cra_name = "sha256",
 				.cra_driver_name = "rk-sha256",
 				.cra_priority = 300,
 				.cra_flags = CRYPTO_ALG_ASYNC |
 					     CRYPTO_ALG_NEED_FALLBACK,
 				.cra_blocksize = SHA256_BLOCK_SIZE,
 				.cra_ctxsize = sizeof(struct rk_ahash_ctx),
-				.cra_alignmask = 3,
 				.cra_module = THIS_MODULE,
 			}
 		}
 	},
 	.alg.hash.op = {
 		.do_one_request = rk_hash_run,
 	},
 };

 struct rk_crypto_tmp rk_ahash_md5 = {
@@ -452,19 +450,18 @@ struct rk_crypto_tmp rk_ahash_md5 = {
 			.digestsize = MD5_DIGEST_SIZE,
 			.statesize = sizeof(struct md5_state),
 			.base = {
 				.cra_name = "md5",
 				.cra_driver_name = "rk-md5",
 				.cra_priority = 300,
 				.cra_flags = CRYPTO_ALG_ASYNC |
 					     CRYPTO_ALG_NEED_FALLBACK,
 				.cra_blocksize = SHA1_BLOCK_SIZE,
 				.cra_ctxsize = sizeof(struct rk_ahash_ctx),
-				.cra_alignmask = 3,
 				.cra_module = THIS_MODULE,
 			}
 		}
 	},
 	.alg.hash.op = {
 		.do_one_request = rk_hash_run,
 	},
 };

From patchwork Sun Oct 22 08:10:43 2023
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 13/30] crypto: stm32 - remove unnecessary alignmask for ahashes
Date: Sun, 22 Oct 2023 01:10:43 -0700
Message-ID: <20231022081100.123613-14-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@kernel.org>

The crypto API's support for alignmasks for ahash algorithms is nearly
useless, as its only effect is to cause the API to align the key and result
buffers.  The drivers that happen to be specifying an alignmask for ahash
rarely actually need it.  When they do, it's easily fixable, especially
considering that these buffers cannot be used for DMA.

In preparation for removing alignmask support from ahash, this patch makes
the stm32 driver no longer use it.  This driver didn't actually rely on it;
it only writes to the result buffer in stm32_hash_finish(), simply using
memcpy().  And stm32_hash_setkey() does not assume any alignment for the
key buffer.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 drivers/crypto/stm32/stm32-hash.c | 20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/drivers/crypto/stm32/stm32-hash.c b/drivers/crypto/stm32/stm32-hash.c
index 2b2382d4332c5..34e0d7e381a8c 100644
--- a/drivers/crypto/stm32/stm32-hash.c
+++ b/drivers/crypto/stm32/stm32-hash.c
@@ -1276,21 +1276,20 @@ static struct ahash_engine_alg algs_md5[] = {
 		.digestsize = MD5_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "md5",
 			.cra_driver_name = "stm32-md5",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1306,21 +1305,20 @@ static struct ahash_engine_alg algs_md5[] = {
 		.digestsize = MD5_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(md5)",
 			.cra_driver_name = "stm32-hmac-md5",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = MD5_HMAC_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 }
 };
@@ -1338,21 +1336,20 @@ static struct ahash_engine_alg algs_sha1[] = {
 		.digestsize = SHA1_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha1",
 			.cra_driver_name = "stm32-sha1",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA1_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1368,21 +1365,20 @@ static struct ahash_engine_alg algs_sha1[] = {
 		.digestsize = SHA1_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha1)",
 			.cra_driver_name = "stm32-hmac-sha1",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA1_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 };
@@ -1400,21 +1396,20 @@ static struct ahash_engine_alg algs_sha224[] = {
 		.digestsize = SHA224_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha224",
 			.cra_driver_name = "stm32-sha224",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA224_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1430,21 +1425,20 @@ static struct ahash_engine_alg algs_sha224[] = {
 		.digestsize = SHA224_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha224)",
 			.cra_driver_name = "stm32-hmac-sha224",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA224_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 };
@@ -1462,21 +1456,20 @@ static struct ahash_engine_alg algs_sha256[] = {
 		.digestsize = SHA256_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha256",
 			.cra_driver_name = "stm32-sha256",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA256_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1492,21 +1485,20 @@ static struct ahash_engine_alg algs_sha256[] = {
 		.digestsize = SHA256_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha256)",
 			.cra_driver_name = "stm32-hmac-sha256",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA256_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 };
@@ -1524,21 +1516,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 		.digestsize = SHA384_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha384",
 			.cra_driver_name = "stm32-sha384",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA384_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1554,21 +1545,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 		.digestsize = SHA384_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha384)",
 			.cra_driver_name = "stm32-hmac-sha384",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA384_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1583,21 +1573,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 		.digestsize = SHA512_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha512",
 			.cra_driver_name = "stm32-sha512",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA512_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1613,21 +1602,20 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
 		.digestsize = SHA512_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha512)",
 			.cra_driver_name = "stm32-hmac-sha512",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA512_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 };
@@ -1645,21 +1633,20 @@ static struct ahash_engine_alg algs_sha3[] = {
 		.digestsize = SHA3_224_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha3-224",
 			.cra_driver_name = "stm32-sha3-224",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA3_224_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_sha3_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1675,21 +1662,20 @@ static struct ahash_engine_alg algs_sha3[] = {
 		.digestsize = SHA3_224_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha3-224)",
 			.cra_driver_name = "stm32-hmac-sha3-224",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA3_224_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_sha3_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1704,21 +1690,20 @@ static struct ahash_engine_alg algs_sha3[] = {
 		.digestsize = SHA3_256_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha3-256",
 			.cra_driver_name = "stm32-sha3-256",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA3_256_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_sha3_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1734,21 +1719,20 @@ static struct ahash_engine_alg algs_sha3[] = {
 		.digestsize = SHA3_256_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha3-256)",
 			.cra_driver_name = "stm32-hmac-sha3-256",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA3_256_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_sha3_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1763,21 +1747,20 @@ static struct ahash_engine_alg algs_sha3[] = {
 		.digestsize = SHA3_384_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha3-384",
 			.cra_driver_name = "stm32-sha3-384",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA3_384_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_sha3_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1793,21 +1776,20 @@ static struct ahash_engine_alg algs_sha3[] = {
 		.digestsize = SHA3_384_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha3-384)",
 			.cra_driver_name = "stm32-hmac-sha3-384",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA3_384_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_sha3_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1822,21 +1804,20 @@ static struct ahash_engine_alg algs_sha3[] = {
 		.digestsize = SHA3_512_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "sha3-512",
 			.cra_driver_name = "stm32-sha3-512",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA3_512_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_sha3_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 },
 {
@@ -1852,21 +1833,20 @@ static struct ahash_engine_alg algs_sha3[] = {
 		.digestsize = SHA3_512_DIGEST_SIZE,
 		.statesize = sizeof(struct stm32_hash_state),
 		.base = {
 			.cra_name = "hmac(sha3-512)",
 			.cra_driver_name = "stm32-hmac-sha3-512",
 			.cra_priority = 200,
 			.cra_flags = CRYPTO_ALG_ASYNC |
 				     CRYPTO_ALG_KERN_DRIVER_ONLY,
 			.cra_blocksize = SHA3_512_BLOCK_SIZE,
 			.cra_ctxsize = sizeof(struct stm32_hash_ctx),
-			.cra_alignmask = 3,
 			.cra_init = stm32_hash_cra_sha3_hmac_init,
 			.cra_exit = stm32_hash_cra_exit,
 			.cra_module = THIS_MODULE,
 		}
 	},
 	.op = {
 		.do_one_request = stm32_hash_one_request,
 	},
 }
 };

From patchwork Sun Oct 22 08:10:44 2023
From: Eric Biggers <ebiggers@kernel.org>
To: linux-crypto@vger.kernel.org
Subject: [PATCH 14/30] crypto: ahash - remove support for nonzero alignmask
Date: Sun, 22 Oct 2023 01:10:44 -0700
Message-ID: <20231022081100.123613-15-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

From: Eric Biggers <ebiggers@kernel.org>

Currently, the ahash API checks the alignment of all key and result buffers
against the algorithm's declared alignmask, and for any unaligned buffers it
falls back to manually aligned temporary buffers.

This is virtually useless, however.  First, since it does not apply to the
message, its effect is much more limited than e.g. is the case for the
alignmask for "skcipher".  Second, the key and result buffers are given as
virtual addresses and cannot (in general) be DMA'ed into, so drivers end up
having to copy to/from them in software anyway.  As a result it's easy to
use memcpy() or the unaligned access helpers.

The crypto_hash_walk_*() helper functions do use the alignmask to align the
message.  But with one exception those are only used for shash algorithms
being exposed via the ahash API, not for native ahashes, and aligning the
message is not required in this case, especially now that alignmask support
has been removed from shash.  The exception is the n2_core driver, which
doesn't set an alignmask.

In any case, no ahash algorithms actually set a nonzero alignmask anymore.
Therefore, remove support for it from ahash.  The benefit is that all the
code to handle "misaligned" buffers in the ahash API goes away, reducing
the overhead of the ahash API.  This follows the same change that was made
to shash.
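
For reference, the crypto_hash_walk_*() pattern mentioned above -- as used
when an shash algorithm is driven through the ahash API -- is roughly the
following; process_chunk() is a hypothetical stand-in for the real consumer:

	struct crypto_hash_walk walk;
	int nbytes, err = 0;

	for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
	     nbytes = crypto_hash_walk_done(&walk, err))
		err = process_chunk(walk.data, nbytes);

Each iteration maps one chunk of the source scatterlist; with the alignmask
gone, the chunk boundaries depend only on page and scatterlist-element
boundaries.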

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
---
 Documentation/crypto/devel-algos.rst |   4 +-
 crypto/ahash.c                       | 117 ++-------------------------
 crypto/shash.c                       |   8 +-
 include/crypto/internal/hash.h       |   4 +-
 include/linux/crypto.h               |  27 ++++---
 5 files changed, 28 insertions(+), 132 deletions(-)

diff --git a/Documentation/crypto/devel-algos.rst b/Documentation/crypto/devel-algos.rst
index 3506899ef83e3..9b7782f4f6e0a 100644
--- a/Documentation/crypto/devel-algos.rst
+++ b/Documentation/crypto/devel-algos.rst
@@ -228,13 +228,11 @@ Note that it is perfectly legal to "abandon" a request object:
 In other words implementations should mind the resource allocation and clean-up.
 No resources related to request objects should remain allocated after
 a call to .init() or .update(), since there might be no chance to
 free them.

 Specifics Of Asynchronous HASH Transformation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Some of the drivers will want to use the Generic ScatterWalk in case the
 implementation needs to be fed separate chunks of the scatterlist which
-contains the input data.  The buffer containing the resulting hash will
-always be properly aligned to .cra_alignmask so there is no need to
-worry about this.
+contains the input data.
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 213bb3e9f2451..744fd3b8ea258 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -28,35 +28,26 @@ static const struct crypto_type crypto_ahash_type;
 struct ahash_request_priv {
 	crypto_completion_t complete;
 	void *data;
 	u8 *result;
 	u32 flags;
 	void *ubuf[] CRYPTO_MINALIGN_ATTR;
 };

 static int hash_walk_next(struct crypto_hash_walk *walk)
 {
-	unsigned int alignmask = walk->alignmask;
 	unsigned int offset = walk->offset;
 	unsigned int nbytes = min(walk->entrylen,
 				  ((unsigned int)(PAGE_SIZE)) - offset);

 	walk->data = kmap_local_page(walk->pg);
 	walk->data += offset;
-
-	if (offset & alignmask) {
-		unsigned int unaligned = alignmask + 1 - (offset & alignmask);
-
-		if (nbytes > unaligned)
-			nbytes = unaligned;
-	}
-
 	walk->entrylen -= nbytes;
 	return nbytes;
 }

 static int hash_walk_new_entry(struct crypto_hash_walk *walk)
 {
 	struct scatterlist *sg;

 	sg = walk->sg;
 	walk->offset = sg->offset;
@@ -66,37 +57,22 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
 	if (walk->entrylen > walk->total)
 		walk->entrylen = walk->total;
 	walk->total -= walk->entrylen;

 	return hash_walk_next(walk);
 }

 int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
 {
-	unsigned int alignmask = walk->alignmask;
-
 	walk->data -= walk->offset;

-	if (walk->entrylen && (walk->offset & alignmask) && !err) {
-		unsigned int nbytes;
-
-		walk->offset = ALIGN(walk->offset, alignmask + 1);
-		nbytes = min(walk->entrylen,
-			     (unsigned int)(PAGE_SIZE - walk->offset));
-		if (nbytes) {
-			walk->entrylen -= nbytes;
-			walk->data += walk->offset;
-			return nbytes;
-		}
-	}
-
 	kunmap_local(walk->data);
 	crypto_yield(walk->flags);

 	if (err)
 		return err;

 	if (walk->entrylen) {
 		walk->offset = 0;
 		walk->pg++;
 		return hash_walk_next(walk);
@@ -114,115 +90,85 @@ EXPORT_SYMBOL_GPL(crypto_hash_walk_done);
 int crypto_hash_walk_first(struct ahash_request *req,
 			   struct crypto_hash_walk *walk)
 {
 	walk->total = req->nbytes;

 	if (!walk->total) {
 		walk->entrylen = 0;
 		return 0;
 	}

-	walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
 	walk->sg = req->src;
 	walk->flags = req->base.flags;

 	return hash_walk_new_entry(walk);
 }
 EXPORT_SYMBOL_GPL(crypto_hash_walk_first);

-static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
-				  unsigned int keylen)
-{
-	unsigned long alignmask = crypto_ahash_alignmask(tfm);
-	int ret;
-	u8 *buffer, *alignbuffer;
-	unsigned long absize;
-
-	absize = keylen + alignmask;
-	buffer = kmalloc(absize, GFP_KERNEL);
-	if (!buffer)
-		return -ENOMEM;
-
-	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
-	memcpy(alignbuffer, key, keylen);
-	ret = tfm->setkey(tfm, alignbuffer, keylen);
-	kfree_sensitive(buffer);
-	return ret;
-}
-
 static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
 			  unsigned int keylen)
 {
 	return -ENOSYS;
 }

 static void ahash_set_needkey(struct crypto_ahash *tfm)
 {
 	const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);

 	if (tfm->setkey != ahash_nosetkey &&
 	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
 		crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
 }

 int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
 			unsigned int keylen)
 {
-	unsigned long alignmask = crypto_ahash_alignmask(tfm);
-	int err;
-
-	if ((unsigned long)key & alignmask)
-		err = ahash_setkey_unaligned(tfm, key, keylen);
-	else
-		err = tfm->setkey(tfm, key, keylen);
+	int err = tfm->setkey(tfm, key, keylen);

 	if (unlikely(err)) {
 		ahash_set_needkey(tfm);
 		return err;
 	}

 	crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_setkey);

 static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
 			  bool has_state)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	unsigned long alignmask = crypto_ahash_alignmask(tfm);
 	unsigned int ds = crypto_ahash_digestsize(tfm);
 	struct ahash_request *subreq;
 	unsigned int subreq_size;
 	unsigned int reqsize;
 	u8 *result;
 	gfp_t gfp;
 	u32 flags;

 	subreq_size = sizeof(*subreq);
 	reqsize = crypto_ahash_reqsize(tfm);
 	reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
 	subreq_size += reqsize;
 	subreq_size += ds;
-	subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);

 	flags = ahash_request_flags(req);
 	gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
 	subreq = kmalloc(subreq_size, gfp);
 	if (!subreq)
 		return -ENOMEM;

 	ahash_request_set_tfm(subreq, tfm);
 	ahash_request_set_callback(subreq, flags, cplt, req);

 	result = (u8 *)(subreq + 1) + reqsize;
-	result = PTR_ALIGN(result, alignmask + 1);

 	ahash_request_set_crypt(subreq, req->src, result, req->nbytes);

 	if (has_state) {
 		void *state;

 		state = kmalloc(crypto_ahash_statesize(tfm), gfp);
 		if (!state) {
 			kfree(subreq);
 			return -ENOMEM;
@@ -244,114 +190,67 @@ static void ahash_restore_req(struct ahash_request *req, int err)
 	if (!err)
 		memcpy(req->result, subreq->result,
 		       crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));

 	req->priv = NULL;

 	kfree_sensitive(subreq);
 }

-static void ahash_op_unaligned_done(void *data, int err)
-{
-	struct ahash_request *areq = data;
-
-	if (err == -EINPROGRESS)
-		goto out;
-
-	/* First copy req->result into req->priv.result */
-	ahash_restore_req(areq, err);
-
-out:
-	/* Complete the ORIGINAL request. */
-	ahash_request_complete(areq, err);
-}
-
-static int ahash_op_unaligned(struct ahash_request *req,
-			      int (*op)(struct ahash_request *),
-			      bool has_state)
-{
-	int err;
-
-	err = ahash_save_req(req, ahash_op_unaligned_done, has_state);
-	if (err)
-		return err;
-
-	err = op(req->priv);
-	if (err == -EINPROGRESS || err == -EBUSY)
-		return err;
-
-	ahash_restore_req(req, err);
-
-	return err;
-}
-
-static int crypto_ahash_op(struct ahash_request *req,
-			   int (*op)(struct ahash_request *),
-			   bool has_state)
-{
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	unsigned long alignmask = crypto_ahash_alignmask(tfm);
-	int err;
-
-	if ((unsigned long)req->result & alignmask)
-		err = ahash_op_unaligned(req, op, has_state);
-	else
-		err = op(req);
-
-	return crypto_hash_errstat(crypto_hash_alg_common(tfm), err);
-}
-
 int crypto_ahash_final(struct ahash_request *req)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct hash_alg_common *alg = crypto_hash_alg_common(tfm);

 	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
 		atomic64_inc(&hash_get_stat(alg)->hash_cnt);

-	return crypto_ahash_op(req, tfm->final, true);
+	return crypto_hash_errstat(alg, tfm->final(req));
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_final);

 int crypto_ahash_finup(struct ahash_request *req)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct hash_alg_common *alg = crypto_hash_alg_common(tfm);

 	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
 		struct crypto_istat_hash *istat = hash_get_stat(alg);

 		atomic64_inc(&istat->hash_cnt);
 		atomic64_add(req->nbytes, &istat->hash_tlen);
 	}

-	return crypto_ahash_op(req, tfm->finup, true);
+	return crypto_hash_errstat(alg, tfm->finup(req));
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_finup);

 int crypto_ahash_digest(struct ahash_request *req)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
 	int err;
(IS_ENABLED(CONFIG_CRYPTO_STATS)) { struct crypto_istat_hash *istat = hash_get_stat(alg); atomic64_inc(&istat->hash_cnt); atomic64_add(req->nbytes, &istat->hash_tlen); } if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) - return crypto_hash_errstat(alg, -ENOKEY); + err = -ENOKEY; + else + err = tfm->digest(req); - return crypto_ahash_op(req, tfm->digest, false); + return crypto_hash_errstat(alg, err); } EXPORT_SYMBOL_GPL(crypto_ahash_digest); static void ahash_def_finup_done2(void *data, int err) { struct ahash_request *areq = data; if (err == -EINPROGRESS) return; diff --git a/crypto/shash.c b/crypto/shash.c index 409b33f9c97cc..359702c2cd02b 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -534,40 +534,40 @@ struct crypto_shash *crypto_clone_shash(struct crypto_shash *hash) EXPORT_SYMBOL_GPL(crypto_clone_shash); int hash_prepare_alg(struct hash_alg_common *alg) { struct crypto_istat_hash *istat = hash_get_stat(alg); struct crypto_alg *base = &alg->base; if (alg->digestsize > HASH_MAX_DIGESTSIZE) return -EINVAL; + /* alignmask is not useful for hashes, so it is not supported. */ + if (base->cra_alignmask) + return -EINVAL; + base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) memset(istat, 0, sizeof(*istat)); return 0; } static int shash_prepare_alg(struct shash_alg *alg) { struct crypto_alg *base = &alg->halg.base; int err; if (alg->descsize > HASH_MAX_DESCSIZE) return -EINVAL; - /* alignmask is not useful for shash, so it is not supported. */ - if (base->cra_alignmask) - return -EINVAL; - if ((alg->export && !alg->import) || (alg->import && !alg->export)) return -EINVAL; err = hash_prepare_alg(&alg->halg); if (err) return err; base->cra_type = &crypto_shash_type; base->cra_flags |= CRYPTO_ALG_TYPE_SHASH; diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h index 8d0cd0c591a09..59c707e4dea46 100644 --- a/include/crypto/internal/hash.h +++ b/include/crypto/internal/hash.h @@ -11,29 +11,27 @@ #include #include struct ahash_request; struct scatterlist; struct crypto_hash_walk { char *data; unsigned int offset; - unsigned int alignmask; + unsigned int flags; struct page *pg; unsigned int entrylen; unsigned int total; struct scatterlist *sg; - - unsigned int flags; }; struct ahash_instance { void (*free)(struct ahash_instance *inst); union { struct { char head[offsetof(struct ahash_alg, halg.base)]; struct crypto_instance base; } s; struct ahash_alg alg; diff --git a/include/linux/crypto.h b/include/linux/crypto.h index f3c3a3b27facd..b164da5e129e8 100644 --- a/include/linux/crypto.h +++ b/include/linux/crypto.h @@ -103,21 +103,20 @@ * chunk can cross a page boundary or a scatterlist element boundary. * aead: * - The IV buffer and all scatterlist elements must be aligned to the * algorithm's alignmask. * - The first scatterlist element must contain all the associated data, * and its pages must be !PageHighMem. * - If the plaintext/ciphertext were to be divided into chunks of size * crypto_aead_walksize() (with the remainder going at the end), no chunk * can cross a page boundary or a scatterlist element boundary. * ahash: - * - The result buffer must be aligned to the algorithm's alignmask. * - crypto_ahash_finup() must not be used unless the algorithm implements * ->finup() natively. */ #define CRYPTO_ALG_ALLOCATES_MEMORY 0x00010000 /* * Mark an algorithm as a service implementation only usable by a * template and never by a normal user of the kernel crypto API. 
* This is intended to be used by algorithms that are themselves * not FIPS-approved but may instead be used to implement parts of @@ -271,32 +270,34 @@ struct compress_alg { * of the smallest possible unit which can be transformed with * this algorithm. The users must respect this value. * In case of HASH transformation, it is possible for a smaller * block than @cra_blocksize to be passed to the crypto API for * transformation, in case of any other transformation type, an * error will be returned upon any attempt to transform smaller * than @cra_blocksize chunks. * @cra_ctxsize: Size of the operational context of the transformation. This * value informs the kernel crypto API about the memory size * needed to be allocated for the transformation context. - * @cra_alignmask: Alignment mask for the input and output data buffer. The data - * buffer containing the input data for the algorithm must be - * aligned to this alignment mask. The data buffer for the - * output data must be aligned to this alignment mask. Note that - * the Crypto API will do the re-alignment in software, but - * only under special conditions and there is a performance hit. - * The re-alignment happens at these occasions for different - * @cra_u types: cipher -- For both input data and output data - * buffer; ahash -- For output hash destination buf; shash -- - * For output hash destination buf. - * This is needed on hardware which is flawed by design and - * cannot pick data from arbitrary addresses. + * @cra_alignmask: For cipher, skcipher, lskcipher, and aead algorithms this is + * 1 less than the alignment, in bytes, that the algorithm + * implementation requires for input and output buffers. When + * the crypto API is invoked with buffers that are not aligned + * to this alignment, the crypto API automatically utilizes + * appropriately aligned temporary buffers to comply with what + * the algorithm needs. (For scatterlists this happens only if + * the algorithm uses the skcipher_walk helper functions.) This + * misalignment handling carries a performance penalty, so it is + * preferred that algorithms do not set a nonzero alignmask. + * Also, crypto API users may wish to allocate buffers aligned + * to the alignmask of the algorithm being used, in order to + * avoid the API having to realign them. Note: the alignmask is + * not supported for hash algorithms and is always 0 for them. * @cra_priority: Priority of this transformation implementation. In case * multiple transformations with same @cra_name are available to * the Crypto API, the kernel will use the one with highest * @cra_priority. * @cra_name: Generic name (usable by multiple implementations) of the * transformation algorithm. This is the name of the transformation * itself. This field is used by the kernel when looking up the * providers of particular transformation. * @cra_driver_name: Unique name of the transformation provider. This is the * name of the provider of the transformation. 
This can be any

From patchwork Sun Oct 22 08:10:45 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 15/30] crypto: authenc - stop using alignmask of ahash
Date: Sun, 22 Oct 2023 01:10:45 -0700
Message-ID: <20231022081100.123613-16-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

Now that the alignmask for ahash and shash algorithms is always 0, simplify the code in authenc accordingly.
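As background for the pointer fix-ups deleted in the diff below: the old code rounded areq_ctx->tail up to the ahash alignmask before using it as the digest buffer, and reserved alignmask extra bytes of slack in ctx->reqoff for the same reason. The arithmetic behind both is the usual power-of-two round-up. A minimal, self-contained sketch (align_up() stands in for the kernel's ALIGN() macro; the address and mask values are invented for illustration):

#include <stdio.h>

/* Round x up to the next multiple of a, where a is a power of two;
 * the same arithmetic as the kernel's ALIGN() macro. */
static unsigned long align_up(unsigned long x, unsigned long a)
{
	return (x + a - 1) & ~(a - 1);
}

int main(void)
{
	unsigned long tail = 0x1001; /* hypothetical areq_ctx->tail address */
	unsigned long mask = 3;      /* what crypto_ahash_alignmask() might have returned */

	/* Old behavior: round the digest pointer up to (mask + 1) bytes. */
	printf("%#lx\n", align_up(tail, mask + 1)); /* prints 0x1004 */

	/* Now the mask is guaranteed to be 0, and rounding up to a 1-byte
	 * boundary is a no-op, so the pointer is used as-is. */
	printf("%#lx\n", align_up(tail, 0 + 1));    /* prints 0x1001 */
	return 0;
}

The same reasoning explains why ctx->reqoff collapses to plain 2 * auth->digestsize in the diff: ALIGN(x + 0, 1) is just x.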
Signed-off-by: Eric Biggers --- crypto/authenc.c | 12 ++---------- 1 file changed, 2 insertions(+), 10 deletions(-) diff --git a/crypto/authenc.c b/crypto/authenc.c index fa896ab143bdf..3aaf3ab4e360f 100644 --- a/crypto/authenc.c +++ b/crypto/authenc.c @@ -134,23 +134,20 @@ static int crypto_authenc_genicv(struct aead_request *req, unsigned int flags) struct crypto_aead *authenc = crypto_aead_reqtfm(req); struct aead_instance *inst = aead_alg_instance(authenc); struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc); struct authenc_instance_ctx *ictx = aead_instance_ctx(inst); struct crypto_ahash *auth = ctx->auth; struct authenc_request_ctx *areq_ctx = aead_request_ctx(req); struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff); u8 *hash = areq_ctx->tail; int err; - hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth), - crypto_ahash_alignmask(auth) + 1); - ahash_request_set_tfm(ahreq, auth); ahash_request_set_crypt(ahreq, req->dst, hash, req->assoclen + req->cryptlen); ahash_request_set_callback(ahreq, flags, authenc_geniv_ahash_done, req); err = crypto_ahash_digest(ahreq); if (err) return err; @@ -279,23 +276,20 @@ static int crypto_authenc_decrypt(struct aead_request *req) unsigned int authsize = crypto_aead_authsize(authenc); struct aead_instance *inst = aead_alg_instance(authenc); struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc); struct authenc_instance_ctx *ictx = aead_instance_ctx(inst); struct crypto_ahash *auth = ctx->auth; struct authenc_request_ctx *areq_ctx = aead_request_ctx(req); struct ahash_request *ahreq = (void *)(areq_ctx->tail + ictx->reqoff); u8 *hash = areq_ctx->tail; int err; - hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth), - crypto_ahash_alignmask(auth) + 1); - ahash_request_set_tfm(ahreq, auth); ahash_request_set_crypt(ahreq, req->src, hash, req->assoclen + req->cryptlen - authsize); ahash_request_set_callback(ahreq, aead_request_flags(req), authenc_verify_ahash_done, req); err = crypto_ahash_digest(ahreq); if (err) return err; @@ -393,40 +387,38 @@ static int crypto_authenc_create(struct crypto_template *tmpl, goto err_free_inst; auth = crypto_spawn_ahash_alg(&ctx->auth); auth_base = &auth->base; err = crypto_grab_skcipher(&ctx->enc, aead_crypto_instance(inst), crypto_attr_alg_name(tb[2]), 0, mask); if (err) goto err_free_inst; enc = crypto_spawn_skcipher_alg_common(&ctx->enc); - ctx->reqoff = ALIGN(2 * auth->digestsize + auth_base->cra_alignmask, - auth_base->cra_alignmask + 1); + ctx->reqoff = 2 * auth->digestsize; err = -ENAMETOOLONG; if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "authenc(%s,%s)", auth_base->cra_name, enc->base.cra_name) >= CRYPTO_MAX_ALG_NAME) goto err_free_inst; if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "authenc(%s,%s)", auth_base->cra_driver_name, enc->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) goto err_free_inst; inst->alg.base.cra_priority = enc->base.cra_priority * 10 + auth_base->cra_priority; inst->alg.base.cra_blocksize = enc->base.cra_blocksize; - inst->alg.base.cra_alignmask = auth_base->cra_alignmask | - enc->base.cra_alignmask; + inst->alg.base.cra_alignmask = enc->base.cra_alignmask; inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_ctx); inst->alg.ivsize = enc->ivsize; inst->alg.chunksize = enc->chunksize; inst->alg.maxauthsize = auth->digestsize; inst->alg.init = crypto_authenc_init_tfm; inst->alg.exit = crypto_authenc_exit_tfm; inst->alg.setkey = crypto_authenc_setkey; From patchwork Sun Oct 22 08:10:50 2023 
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 20/30] crypto: ccm - stop using alignmask of ahash
Date: Sun, 22 Oct 2023 01:10:50 -0700
Message-ID: <20231022081100.123613-21-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

Now that the alignmask for ahash and shash algorithms is always 0, simplify crypto_ccm_create_common() accordingly.
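One detail of the diff below is worth spelling out: template instances combine the alignmasks of their component algorithms with a bitwise OR, and for masks of the form 2^n - 1 the OR of two masks equals the larger one, i.e. the stricter alignment requirement. A small self-contained sketch of why dropping the mac's mask is safe once it is known to be zero (the values are illustrative only):

#include <assert.h>

int main(void)
{
	/* An alignmask is one less than a power-of-two alignment:
	 * 3 means 4-byte aligned, 7 means 8-byte aligned. */
	unsigned int ctr_mask = 7;
	unsigned int old_mac_mask = 3;

	/* OR-ing two such masks yields the larger one: */
	assert((old_mac_mask | ctr_mask) == 7);

	/* Hash algorithms now always have mask 0, so the OR reduces
	 * to the ctr algorithm's mask alone: */
	assert((0 | ctr_mask) == ctr_mask);
	return 0;
}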
Signed-off-by: Eric Biggers --- crypto/ccm.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/crypto/ccm.c b/crypto/ccm.c index dd7aed63efc93..36f0acec32e19 100644 --- a/crypto/ccm.c +++ b/crypto/ccm.c @@ -497,22 +497,21 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl, goto err_free_inst; if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "ccm_base(%s,%s)", ctr->base.cra_driver_name, mac->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME) goto err_free_inst; inst->alg.base.cra_priority = (mac->base.cra_priority + ctr->base.cra_priority) / 2; inst->alg.base.cra_blocksize = 1; - inst->alg.base.cra_alignmask = mac->base.cra_alignmask | - ctr->base.cra_alignmask; + inst->alg.base.cra_alignmask = ctr->base.cra_alignmask; inst->alg.ivsize = 16; inst->alg.chunksize = ctr->chunksize; inst->alg.maxauthsize = 16; inst->alg.base.cra_ctxsize = sizeof(struct crypto_ccm_ctx); inst->alg.init = crypto_ccm_init_tfm; inst->alg.exit = crypto_ccm_exit_tfm; inst->alg.setkey = crypto_ccm_setkey; inst->alg.setauthsize = crypto_ccm_setauthsize; inst->alg.encrypt = crypto_ccm_encrypt; inst->alg.decrypt = crypto_ccm_decrypt;

From patchwork Sun Oct 22 08:10:53 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 23/30] crypto: ahash - remove crypto_ahash_alignmask
Date: Sun, 22 Oct 2023 01:10:53 -0700
Message-ID: <20231022081100.123613-24-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

crypto_ahash_alignmask() no longer has any callers, and it always returns 0 now that neither ahash nor shash algorithms support nonzero alignmasks anymore. Therefore, remove it.
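For any out-of-tree code still calling the removed helper, the conversion is mechanical: the alignmask is now always 0, so result buffers need no over-allocation or realignment. A hypothetical before/after sketch (the variable names are invented for illustration; ds stands for crypto_ahash_digestsize(tfm)):

	/* Before: over-allocate, then realign the digest buffer by hand. */
	unsigned int mask = crypto_ahash_alignmask(tfm); /* removed helper */
	u8 *raw = kmalloc(ds + mask, GFP_KERNEL);
	u8 *digest = PTR_ALIGN(raw, mask + 1);

	/* After: any buffer of ds bytes is acceptable. */
	u8 *digest = kmalloc(ds, GFP_KERNEL);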
Signed-off-by: Eric Biggers --- include/crypto/hash.h | 6 ------ 1 file changed, 6 deletions(-) diff --git a/include/crypto/hash.h b/include/crypto/hash.h index d3a380ae894ad..b00a4a36a8ec3 100644 --- a/include/crypto/hash.h +++ b/include/crypto/hash.h @@ -335,26 +335,20 @@ int crypto_has_ahash(const char *alg_name, u32 type, u32 mask); static inline const char *crypto_ahash_alg_name(struct crypto_ahash *tfm) { return crypto_tfm_alg_name(crypto_ahash_tfm(tfm)); } static inline const char *crypto_ahash_driver_name(struct crypto_ahash *tfm) { return crypto_tfm_alg_driver_name(crypto_ahash_tfm(tfm)); } -static inline unsigned int crypto_ahash_alignmask( - struct crypto_ahash *tfm) -{ - return crypto_tfm_alg_alignmask(crypto_ahash_tfm(tfm)); -} - /** * crypto_ahash_blocksize() - obtain block size for cipher * @tfm: cipher handle * * The block size for the message digest cipher referenced with the cipher * handle is returned. * * Return: block size of cipher */ static inline unsigned int crypto_ahash_blocksize(struct crypto_ahash *tfm)

From patchwork Sun Oct 22 08:10:55 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 25/30] crypto: ahash - improve file comment
Date: Sun, 22 Oct 2023 01:10:55 -0700
Message-ID: <20231022081100.123613-26-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

Improve the file comment for crypto/ahash.c.
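The distinction drawn by the new comment in the diff below is easiest to see by comparing the two call styles. A condensed, illustrative fragment (both entry points are existing <crypto/hash.h> APIs; allocation, error handling, and completion handling are omitted):

	/* shash: synchronous, hashes a virtually addressed buffer. */
	err = crypto_shash_tfm_digest(stfm, buf, len, out);

	/* ahash: request-based, hashes a scatterlist; the call may return
	 * -EINPROGRESS and finish later via the request's callback. */
	sg_init_one(&sg, buf, len);
	ahash_request_set_crypt(req, &sg, out, len);
	err = crypto_ahash_digest(req);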
Signed-off-by: Eric Biggers --- crypto/ahash.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/crypto/ahash.c b/crypto/ahash.c index 556c950100936..1ad402f4dac6c 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -1,16 +1,20 @@ // SPDX-License-Identifier: GPL-2.0-or-later /* * Asynchronous Cryptographic Hash operations. * - * This is the asynchronous version of hash.c with notification of - * completion via a callback. + * This is the implementation of the ahash (asynchronous hash) API. It differs + * from shash (synchronous hash) in that ahash supports asynchronous operations, + * and it hashes data from scatterlists instead of virtually addressed buffers. + * + * The ahash API provides access to both ahash and shash algorithms. The shash + * API only provides access to shash algorithms. * * Copyright (c) 2008 Loc Ho */ #include #include #include #include #include #include

From patchwork Sun Oct 22 08:10:56 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 26/30] crypto: chelsio - stop using crypto_ahash::init
Date: Sun, 22 Oct 2023 01:10:56 -0700
Message-ID: <20231022081100.123613-27-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

The function pointer crypto_ahash::init is an internal implementation detail of the ahash API that exists to help it support both ahash and shash algorithms. With an upcoming refactoring of how the ahash API supports shash algorithms, this field will be removed. Some drivers are invoking crypto_ahash::init to call into their own code, which is unnecessary and inefficient. The chelsio driver is one of those drivers. Make it just call its own code directly.
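For context on why the indirect call was redundant here: for a tfm backed by a native ahash algorithm, the core copied the algorithm's entry points into the tfm, so the driver was reaching its own function through the API's pointer table. In outline (condensed sketch; the first line is from crypto_ahash_init_tfm() as it existed before this series, and the assumption that chelsio registers chcr_sha_init/chcr_hmac_init as its .init entry points is exactly the equivalence the diff below relies on):

	/* ahash core setup: */
	hash->init = alg->init; /* for chelsio: chcr_sha_init or chcr_hmac_init */

	/* hence, in the driver, the indirect call ... */
	rtfm->init(req);

	/* ... is equivalent to the direct dispatch the patch switches to: */
	if (is_hmac(crypto_ahash_tfm(rtfm)))
		chcr_hmac_init(req);
	else
		chcr_sha_init(req);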
Signed-off-by: Eric Biggers --- drivers/crypto/chelsio/chcr_algo.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c index 16298ae4a00bf..177428480c7d1 100644 --- a/drivers/crypto/chelsio/chcr_algo.c +++ b/drivers/crypto/chelsio/chcr_algo.c @@ -1913,39 +1913,46 @@ static int chcr_ahash_finup(struct ahash_request *req) set_wr_txq(skb, CPL_PRIORITY_DATA, req_ctx->txqidx); chcr_send_wr(skb); return -EINPROGRESS; unmap: chcr_hash_dma_unmap(&u_ctx->lldi.pdev->dev, req); err: chcr_dec_wrcount(dev); return error; } +static int chcr_hmac_init(struct ahash_request *areq); +static int chcr_sha_init(struct ahash_request *areq); + static int chcr_ahash_digest(struct ahash_request *req) { struct chcr_ahash_req_ctx *req_ctx = ahash_request_ctx(req); struct crypto_ahash *rtfm = crypto_ahash_reqtfm(req); struct chcr_dev *dev = h_ctx(rtfm)->dev; struct uld_ctx *u_ctx = ULD_CTX(h_ctx(rtfm)); struct chcr_context *ctx = h_ctx(rtfm); struct sk_buff *skb; struct hash_wr_param params; u8 bs; int error; unsigned int cpu; cpu = get_cpu(); req_ctx->txqidx = cpu % ctx->ntxq; req_ctx->rxqidx = cpu % ctx->nrxq; put_cpu(); - rtfm->init(req); + if (is_hmac(crypto_ahash_tfm(rtfm))) + chcr_hmac_init(req); + else + chcr_sha_init(req); + bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(rtfm)); error = chcr_inc_wrcount(dev); if (error) return -ENXIO; if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0], req_ctx->txqidx) && (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))) { error = -ENOSPC; goto err;

From patchwork Sun Oct 22 08:10:59 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 29/30] crypto: ahash - check for shash type instead of not ahash type
Date: Sun, 22 Oct 2023 01:10:59 -0700
Message-ID: <20231022081100.123613-30-ebiggers@kernel.org>
In-Reply-To:
<20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

Since the previous patch made crypto_shash_type visible to ahash.c, change checks for '->cra_type != &crypto_ahash_type' to '->cra_type == &crypto_shash_type'. This makes more sense and avoids having to forward-declare crypto_ahash_type. The result is still the same, since the type is either shash or ahash here.

Signed-off-by: Eric Biggers --- crypto/ahash.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/crypto/ahash.c b/crypto/ahash.c index 74be1eb26c1aa..96fec0ca202af 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -20,22 +20,20 @@ #include #include #include #include #include #include "hash.h" #define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e -static const struct crypto_type crypto_ahash_type; - static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen) { struct crypto_shash **ctx = crypto_ahash_ctx(tfm); return crypto_shash_setkey(*ctx, key, keylen); } static int shash_async_init(struct ahash_request *req) { @@ -504,21 +502,21 @@ static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm) static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) { struct crypto_ahash *hash = __crypto_ahash_cast(tfm); struct ahash_alg *alg = crypto_ahash_alg(hash); hash->setkey = ahash_nosetkey; crypto_ahash_set_statesize(hash, alg->halg.statesize); - if (tfm->__crt_alg->cra_type != &crypto_ahash_type) + if (tfm->__crt_alg->cra_type == &crypto_shash_type) return crypto_init_shash_ops_async(tfm); hash->init = alg->init; hash->update = alg->update; hash->final = alg->final; hash->finup = alg->finup ?: ahash_def_finup; hash->digest = alg->digest; hash->export = alg->export; hash->import = alg->import; @@ -528,21 +526,21 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) } if (alg->exit_tfm) tfm->exit = crypto_ahash_exit_tfm; return alg->init_tfm ?
alg->init_tfm(hash) : 0; } static unsigned int crypto_ahash_extsize(struct crypto_alg *alg) { - if (alg->cra_type != &crypto_ahash_type) + if (alg->cra_type == &crypto_shash_type) return sizeof(struct crypto_shash *); return crypto_alg_extsize(alg); } static void crypto_ahash_free_instance(struct crypto_instance *inst) { struct ahash_instance *ahash = ahash_instance(inst); ahash->free(ahash); @@ -753,19 +751,19 @@ int ahash_register_instance(struct crypto_template *tmpl, return err; return crypto_register_instance(tmpl, ahash_crypto_instance(inst)); } EXPORT_SYMBOL_GPL(ahash_register_instance); bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg) { struct crypto_alg *alg = &halg->base; - if (alg->cra_type != &crypto_ahash_type) + if (alg->cra_type == &crypto_shash_type) return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg)); return __crypto_ahash_alg(alg)->setkey != NULL; } EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey); MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("Asynchronous cryptographic hash type");

From patchwork Sun Oct 22 08:11:00 2023
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 30/30] crypto: ahash - optimize performance when wrapping shash
Date: Sun, 22 Oct 2023 01:11:00 -0700
Message-ID: <20231022081100.123613-31-ebiggers@kernel.org>
In-Reply-To: <20231022081100.123613-1-ebiggers@kernel.org>
References: <20231022081100.123613-1-ebiggers@kernel.org>

The "ahash" API provides access to both CPU-based and hardware offload-based implementations of hash algorithms. Typically the former are implemented as "shash" algorithms under the hood, while the latter are implemented as "ahash" algorithms. The "ahash" API provides access to both.
Various kernel subsystems use the ahash API because they want to support hashing hardware offload without using a separate API for it. Yet, the common case is that a crypto accelerator is not actually being used, and ahash is just wrapping a CPU-based shash algorithm. This patch optimizes the ahash API for that common case by eliminating the extra indirect call for each ahash operation on top of shash. It also fixes the double-counting of crypto stats in this scenario (though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested in performance anyway...), and it eliminates redundant checking of CRYPTO_TFM_NEED_KEY. As a bonus, it also shrinks struct crypto_ahash. Signed-off-by: Eric Biggers --- crypto/ahash.c | 285 +++++++++++++++++++++--------------------- crypto/hash.h | 10 ++ crypto/shash.c | 8 +- include/crypto/hash.h | 68 +--------- 4 files changed, 167 insertions(+), 204 deletions(-) diff --git a/crypto/ahash.c b/crypto/ahash.c index 96fec0ca202af..deee55f939dc8 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -20,61 +20,67 @@ #include #include #include #include #include #include "hash.h" #define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e -static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key, - unsigned int keylen) +static inline struct crypto_istat_hash *ahash_get_stat(struct ahash_alg *alg) { - struct crypto_shash **ctx = crypto_ahash_ctx(tfm); + return hash_get_stat(&alg->halg); +} + +static inline int crypto_ahash_errstat(struct ahash_alg *alg, int err) +{ + if (!IS_ENABLED(CONFIG_CRYPTO_STATS)) + return err; - return crypto_shash_setkey(*ctx, key, keylen); + if (err && err != -EINPROGRESS && err != -EBUSY) + atomic64_inc(&ahash_get_stat(alg)->err_cnt); + + return err; } -static int shash_async_init(struct ahash_request *req) +/* + * For an ahash tfm that is using an shash algorithm (instead of an ahash + * algorithm), this returns the underlying shash tfm. 
+ */ +static inline struct crypto_shash *ahash_to_shash(struct crypto_ahash *tfm) { - struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); - struct shash_desc *desc = ahash_request_ctx(req); + return *(struct crypto_shash **)crypto_ahash_ctx(tfm); +} - desc->tfm = *ctx; +static inline struct shash_desc *prepare_shash_desc(struct ahash_request *req, + struct crypto_ahash *tfm) +{ + struct shash_desc *desc = ahash_request_ctx(req); - return crypto_shash_init(desc); + desc->tfm = ahash_to_shash(tfm); + return desc; } int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc) { struct crypto_hash_walk walk; int nbytes; for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0; nbytes = crypto_hash_walk_done(&walk, nbytes)) nbytes = crypto_shash_update(desc, walk.data, nbytes); return nbytes; } EXPORT_SYMBOL_GPL(shash_ahash_update); -static int shash_async_update(struct ahash_request *req) -{ - return shash_ahash_update(req, ahash_request_ctx(req)); -} - -static int shash_async_final(struct ahash_request *req) -{ - return crypto_shash_final(ahash_request_ctx(req), req->result); -} - int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc) { struct crypto_hash_walk walk; int nbytes; nbytes = crypto_hash_walk_first(req, &walk); if (!nbytes) return crypto_shash_final(desc, req->result); do { @@ -82,30 +88,20 @@ int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc) crypto_shash_finup(desc, walk.data, nbytes, req->result) : crypto_shash_update(desc, walk.data, nbytes); nbytes = crypto_hash_walk_done(&walk, nbytes); } while (nbytes > 0); return nbytes; } EXPORT_SYMBOL_GPL(shash_ahash_finup); -static int shash_async_finup(struct ahash_request *req) -{ - struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); - struct shash_desc *desc = ahash_request_ctx(req); - - desc->tfm = *ctx; - - return shash_ahash_finup(req, desc); -} - int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc) { unsigned int nbytes = req->nbytes; struct scatterlist *sg; unsigned int offset; int err; if (nbytes && (sg = req->src, offset = sg->offset, nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) { @@ -116,110 +112,54 @@ int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc) req->result); kunmap_local(data); } else err = crypto_shash_init(desc) ?: shash_ahash_finup(req, desc); return err; } EXPORT_SYMBOL_GPL(shash_ahash_digest); -static int shash_async_digest(struct ahash_request *req) -{ - struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); - struct shash_desc *desc = ahash_request_ctx(req); - - desc->tfm = *ctx; - - return shash_ahash_digest(req, desc); -} - -static int shash_async_export(struct ahash_request *req, void *out) -{ - return crypto_shash_export(ahash_request_ctx(req), out); -} - -static int shash_async_import(struct ahash_request *req, const void *in) -{ - struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); - struct shash_desc *desc = ahash_request_ctx(req); - - desc->tfm = *ctx; - - return crypto_shash_import(desc, in); -} - -static void crypto_exit_shash_ops_async(struct crypto_tfm *tfm) +static void crypto_exit_ahash_using_shash(struct crypto_tfm *tfm) { struct crypto_shash **ctx = crypto_tfm_ctx(tfm); crypto_free_shash(*ctx); } -static int crypto_init_shash_ops_async(struct crypto_tfm *tfm) +static int crypto_init_ahash_using_shash(struct crypto_tfm *tfm) { struct crypto_alg *calg = tfm->__crt_alg; - struct shash_alg *alg = 
__crypto_shash_alg(calg); struct crypto_ahash *crt = __crypto_ahash_cast(tfm); struct crypto_shash **ctx = crypto_tfm_ctx(tfm); struct crypto_shash *shash; if (!crypto_mod_get(calg)) return -EAGAIN; shash = crypto_create_tfm(calg, &crypto_shash_type); if (IS_ERR(shash)) { crypto_mod_put(calg); return PTR_ERR(shash); } + crt->using_shash = true; *ctx = shash; - tfm->exit = crypto_exit_shash_ops_async; - - crt->init = shash_async_init; - crt->update = shash_async_update; - crt->final = shash_async_final; - crt->finup = shash_async_finup; - crt->digest = shash_async_digest; - if (crypto_shash_alg_has_setkey(alg)) - crt->setkey = shash_async_setkey; + tfm->exit = crypto_exit_ahash_using_shash; crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) & CRYPTO_TFM_NEED_KEY); - - crt->export = shash_async_export; - crt->import = shash_async_import; - crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash); return 0; } -static struct crypto_ahash * -crypto_clone_shash_ops_async(struct crypto_ahash *nhash, - struct crypto_ahash *hash) -{ - struct crypto_shash **nctx = crypto_ahash_ctx(nhash); - struct crypto_shash **ctx = crypto_ahash_ctx(hash); - struct crypto_shash *shash; - - shash = crypto_clone_shash(*ctx); - if (IS_ERR(shash)) { - crypto_free_ahash(nhash); - return ERR_CAST(shash); - } - - *nctx = shash; - - return nhash; -} - static int hash_walk_next(struct crypto_hash_walk *walk) { unsigned int offset = walk->offset; unsigned int nbytes = min(walk->entrylen, ((unsigned int)(PAGE_SIZE)) - offset); walk->data = kmap_local_page(walk->pg); walk->data += offset; walk->entrylen -= nbytes; return nbytes; @@ -283,44 +223,68 @@ int crypto_hash_walk_first(struct ahash_request *req, return hash_walk_new_entry(walk); } EXPORT_SYMBOL_GPL(crypto_hash_walk_first); static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen) { return -ENOSYS; } -static void ahash_set_needkey(struct crypto_ahash *tfm) +static void ahash_set_needkey(struct crypto_ahash *tfm, struct ahash_alg *alg) { - const struct hash_alg_common *alg = crypto_hash_alg_common(tfm); - - if (tfm->setkey != ahash_nosetkey && - !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) + if (alg->setkey != ahash_nosetkey && + !(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY); } int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen) { - int err = tfm->setkey(tfm, key, keylen); + if (likely(tfm->using_shash)) { + struct crypto_shash *shash = ahash_to_shash(tfm); + int err; - if (unlikely(err)) { - ahash_set_needkey(tfm); - return err; + err = crypto_shash_setkey(shash, key, keylen); + if (unlikely(err)) { + crypto_ahash_set_flags(tfm, + crypto_shash_get_flags(shash) & + CRYPTO_TFM_NEED_KEY); + return err; + } + } else { + struct ahash_alg *alg = crypto_ahash_alg(tfm); + int err; + + err = alg->setkey(tfm, key, keylen); + if (unlikely(err)) { + ahash_set_needkey(tfm, alg); + return err; + } } - crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); return 0; } EXPORT_SYMBOL_GPL(crypto_ahash_setkey); +int crypto_ahash_init(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + + if (likely(tfm->using_shash)) + return crypto_shash_init(prepare_shash_desc(req, tfm)); + if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) + return -ENOKEY; + return crypto_ahash_alg(tfm)->init(req); +} +EXPORT_SYMBOL_GPL(crypto_ahash_init); + static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt, 
bool has_state) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); unsigned int ds = crypto_ahash_digestsize(tfm); struct ahash_request *subreq; unsigned int subreq_size; unsigned int reqsize; u8 *result; gfp_t gfp; @@ -370,67 +334,92 @@ static void ahash_restore_req(struct ahash_request *req, int err) if (!err) memcpy(req->result, subreq->result, crypto_ahash_digestsize(crypto_ahash_reqtfm(req))); req->priv = NULL; kfree_sensitive(subreq); } -int crypto_ahash_final(struct ahash_request *req) +int crypto_ahash_update(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - struct hash_alg_common *alg = crypto_hash_alg_common(tfm); + struct ahash_alg *alg; + if (likely(tfm->using_shash)) + return shash_ahash_update(req, ahash_request_ctx(req)); + + alg = crypto_ahash_alg(tfm); if (IS_ENABLED(CONFIG_CRYPTO_STATS)) - atomic64_inc(&hash_get_stat(alg)->hash_cnt); + atomic64_add(req->nbytes, &ahash_get_stat(alg)->hash_tlen); + return crypto_ahash_errstat(alg, alg->update(req)); +} +EXPORT_SYMBOL_GPL(crypto_ahash_update); + +int crypto_ahash_final(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct ahash_alg *alg; + + if (likely(tfm->using_shash)) + return crypto_shash_final(ahash_request_ctx(req), req->result); - return crypto_hash_errstat(alg, tfm->final(req)); + alg = crypto_ahash_alg(tfm); + if (IS_ENABLED(CONFIG_CRYPTO_STATS)) + atomic64_inc(&ahash_get_stat(alg)->hash_cnt); + return crypto_ahash_errstat(alg, alg->final(req)); } EXPORT_SYMBOL_GPL(crypto_ahash_final); int crypto_ahash_finup(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - struct hash_alg_common *alg = crypto_hash_alg_common(tfm); + struct ahash_alg *alg; + + if (likely(tfm->using_shash)) + return shash_ahash_finup(req, ahash_request_ctx(req)); + alg = crypto_ahash_alg(tfm); if (IS_ENABLED(CONFIG_CRYPTO_STATS)) { - struct crypto_istat_hash *istat = hash_get_stat(alg); + struct crypto_istat_hash *istat = ahash_get_stat(alg); atomic64_inc(&istat->hash_cnt); atomic64_add(req->nbytes, &istat->hash_tlen); } - - return crypto_hash_errstat(alg, tfm->finup(req)); + return crypto_ahash_errstat(alg, alg->finup(req)); } EXPORT_SYMBOL_GPL(crypto_ahash_finup); int crypto_ahash_digest(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - struct hash_alg_common *alg = crypto_hash_alg_common(tfm); + struct ahash_alg *alg; int err; + if (likely(tfm->using_shash)) + return shash_ahash_digest(req, prepare_shash_desc(req, tfm)); + + alg = crypto_ahash_alg(tfm); if (IS_ENABLED(CONFIG_CRYPTO_STATS)) { - struct crypto_istat_hash *istat = hash_get_stat(alg); + struct crypto_istat_hash *istat = ahash_get_stat(alg); atomic64_inc(&istat->hash_cnt); atomic64_add(req->nbytes, &istat->hash_tlen); } if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) err = -ENOKEY; else - err = tfm->digest(req); + err = alg->digest(req); - return crypto_hash_errstat(alg, err); + return crypto_ahash_errstat(alg, err); } EXPORT_SYMBOL_GPL(crypto_ahash_digest); static void ahash_def_finup_done2(void *data, int err) { struct ahash_request *areq = data; if (err == -EINPROGRESS) return; @@ -441,21 +430,21 @@ static void ahash_def_finup_done2(void *data, int err) static int ahash_def_finup_finish1(struct ahash_request *req, int err) { struct ahash_request *subreq = req->priv; if (err) goto out; subreq->base.complete = ahash_def_finup_done2; - err = crypto_ahash_reqtfm(req)->final(subreq); + err = 
crypto_ahash_alg(crypto_ahash_reqtfm(req))->final(subreq); if (err == -EINPROGRESS || err == -EBUSY) return err; out: ahash_restore_req(req, err); return err; } static void ahash_def_finup_done1(void *data, int err) { @@ -478,59 +467,68 @@ static void ahash_def_finup_done1(void *data, int err) static int ahash_def_finup(struct ahash_request *req) { struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); int err; err = ahash_save_req(req, ahash_def_finup_done1, true); if (err) return err; - err = tfm->update(req->priv); + err = crypto_ahash_alg(tfm)->update(req->priv); if (err == -EINPROGRESS || err == -EBUSY) return err; return ahash_def_finup_finish1(req, err); } +int crypto_ahash_export(struct ahash_request *req, void *out) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + + if (likely(tfm->using_shash)) + return crypto_shash_export(ahash_request_ctx(req), out); + return crypto_ahash_alg(tfm)->export(req, out); +} +EXPORT_SYMBOL_GPL(crypto_ahash_export); + +int crypto_ahash_import(struct ahash_request *req, const void *in) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + + if (likely(tfm->using_shash)) + return crypto_shash_import(prepare_shash_desc(req, tfm), in); + if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) + return -ENOKEY; + return crypto_ahash_alg(tfm)->import(req, in); +} +EXPORT_SYMBOL_GPL(crypto_ahash_import); + static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm) { struct crypto_ahash *hash = __crypto_ahash_cast(tfm); struct ahash_alg *alg = crypto_ahash_alg(hash); alg->exit_tfm(hash); } static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) { struct crypto_ahash *hash = __crypto_ahash_cast(tfm); struct ahash_alg *alg = crypto_ahash_alg(hash); - hash->setkey = ahash_nosetkey; - crypto_ahash_set_statesize(hash, alg->halg.statesize); if (tfm->__crt_alg->cra_type == &crypto_shash_type) - return crypto_init_shash_ops_async(tfm); - - hash->init = alg->init; - hash->update = alg->update; - hash->final = alg->final; - hash->finup = alg->finup ?: ahash_def_finup; - hash->digest = alg->digest; - hash->export = alg->export; - hash->import = alg->import; - - if (alg->setkey) { - hash->setkey = alg->setkey; - ahash_set_needkey(hash); - } + return crypto_init_ahash_using_shash(tfm); + + ahash_set_needkey(hash, alg); if (alg->exit_tfm) tfm->exit = crypto_ahash_exit_tfm; return alg->init_tfm ? 
alg->init_tfm(hash) : 0; } static unsigned int crypto_ahash_extsize(struct crypto_alg *alg) { if (alg->cra_type == &crypto_shash_type) @@ -634,33 +632,35 @@ struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash) return ERR_CAST(tfm); return hash; } nhash = crypto_clone_tfm(&crypto_ahash_type, tfm); if (IS_ERR(nhash)) return nhash; - nhash->init = hash->init; - nhash->update = hash->update; - nhash->final = hash->final; - nhash->finup = hash->finup; - nhash->digest = hash->digest; - nhash->export = hash->export; - nhash->import = hash->import; - nhash->setkey = hash->setkey; nhash->reqsize = hash->reqsize; nhash->statesize = hash->statesize; - if (tfm->__crt_alg->cra_type != &crypto_ahash_type) - return crypto_clone_shash_ops_async(nhash, hash); + if (likely(hash->using_shash)) { + struct crypto_shash **nctx = crypto_ahash_ctx(nhash); + struct crypto_shash *shash; + + shash = crypto_clone_shash(ahash_to_shash(hash)); + if (IS_ERR(shash)) { + err = PTR_ERR(shash); + goto out_free_nhash; + } + *nctx = shash; + return nhash; + } err = -ENOSYS; alg = crypto_ahash_alg(hash); if (!alg->clone_tfm) goto out_free_nhash; err = alg->clone_tfm(nhash, hash); if (err) goto out_free_nhash; @@ -680,20 +680,25 @@ static int ahash_prepare_alg(struct ahash_alg *alg) if (alg->halg.statesize == 0) return -EINVAL; err = hash_prepare_alg(&alg->halg); if (err) return err; base->cra_type = &crypto_ahash_type; base->cra_flags |= CRYPTO_ALG_TYPE_AHASH; + if (!alg->finup) + alg->finup = ahash_def_finup; + if (!alg->setkey) + alg->setkey = ahash_nosetkey; + return 0; } int crypto_register_ahash(struct ahash_alg *alg) { struct crypto_alg *base = &alg->halg.base; int err; err = ahash_prepare_alg(alg); if (err) @@ -754,16 +759,16 @@ int ahash_register_instance(struct crypto_template *tmpl, } EXPORT_SYMBOL_GPL(ahash_register_instance); bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg) { struct crypto_alg *alg = &halg->base; if (alg->cra_type == &crypto_shash_type) return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg)); - return __crypto_ahash_alg(alg)->setkey != NULL; + return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey; } EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey); MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("Asynchronous cryptographic hash type"); diff --git a/crypto/hash.h b/crypto/hash.h index de2ee2f4ae304..93f6ba0df263e 100644 --- a/crypto/hash.h +++ b/crypto/hash.h @@ -5,20 +5,30 @@ * Copyright (c) 2023 Herbert Xu */ #ifndef _LOCAL_CRYPTO_HASH_H #define _LOCAL_CRYPTO_HASH_H #include #include #include "internal.h" +static inline struct crypto_istat_hash *hash_get_stat( + struct hash_alg_common *alg) +{ +#ifdef CONFIG_CRYPTO_STATS + return &alg->stat; +#else + return NULL; +#endif +} + static inline int crypto_hash_report_stat(struct sk_buff *skb, struct crypto_alg *alg, const char *type) { struct hash_alg_common *halg = __crypto_hash_alg_common(alg); struct crypto_istat_hash *istat = hash_get_stat(halg); struct crypto_stat_hash rhash; memset(&rhash, 0, sizeof(rhash)); diff --git a/crypto/shash.c b/crypto/shash.c index 28092ed8415a7..d5194221c88cb 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -16,21 +16,27 @@ #include "hash.h" static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg) { return hash_get_stat(&alg->halg); } static inline int crypto_shash_errstat(struct shash_alg *alg, int err) { - return crypto_hash_errstat(&alg->halg, err); + if (!IS_ENABLED(CONFIG_CRYPTO_STATS)) + return err; + + if (err && err != -EINPROGRESS && err != -EBUSY) + 
atomic64_inc(&shash_get_stat(alg)->err_cnt); + + return err; } int shash_no_setkey(struct crypto_shash *tfm, const u8 *key, unsigned int keylen) { return -ENOSYS; } EXPORT_SYMBOL_GPL(shash_no_setkey); static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg) diff --git a/include/crypto/hash.h b/include/crypto/hash.h index b00a4a36a8ec3..c7bdbece27ccb 100644 --- a/include/crypto/hash.h +++ b/include/crypto/hash.h @@ -243,30 +243,21 @@ struct shash_alg { union { struct HASH_ALG_COMMON; struct hash_alg_common halg; }; }; #undef HASH_ALG_COMMON #undef HASH_ALG_COMMON_STAT struct crypto_ahash { - int (*init)(struct ahash_request *req); - int (*update)(struct ahash_request *req); - int (*final)(struct ahash_request *req); - int (*finup)(struct ahash_request *req); - int (*digest)(struct ahash_request *req); - int (*export)(struct ahash_request *req, void *out); - int (*import)(struct ahash_request *req, const void *in); - int (*setkey)(struct crypto_ahash *tfm, const u8 *key, - unsigned int keylen); - + bool using_shash; /* Underlying algorithm is shash, not ahash */ unsigned int statesize; unsigned int reqsize; struct crypto_tfm base; }; struct crypto_shash { unsigned int descsize; struct crypto_tfm base; }; @@ -506,109 +497,60 @@ int crypto_ahash_digest(struct ahash_request *req); * crypto_ahash_export() - extract current message digest state * @req: reference to the ahash_request handle whose state is exported * @out: output buffer of sufficient size that can hold the hash state * * This function exports the hash state of the ahash_request handle into the * caller-allocated output buffer out which must have sufficient size (e.g. by * calling crypto_ahash_statesize()). * * Return: 0 if the export was successful; < 0 if an error occurred */ -static inline int crypto_ahash_export(struct ahash_request *req, void *out) -{ - return crypto_ahash_reqtfm(req)->export(req, out); -} +int crypto_ahash_export(struct ahash_request *req, void *out); /** * crypto_ahash_import() - import message digest state * @req: reference to ahash_request handle the state is imported into * @in: buffer holding the state * * This function imports the hash state into the ahash_request handle from the * input buffer. That buffer should have been generated with the * crypto_ahash_export function. * * Return: 0 if the import was successful; < 0 if an error occurred */ -static inline int crypto_ahash_import(struct ahash_request *req, const void *in) -{ - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - - if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) - return -ENOKEY; - - return tfm->import(req, in); -} +int crypto_ahash_import(struct ahash_request *req, const void *in); /** * crypto_ahash_init() - (re)initialize message digest handle * @req: ahash_request handle that already is initialized with all necessary * data using the ahash_request_* API functions * * The call (re-)initializes the message digest referenced by the ahash_request * handle. Any potentially existing state created by previous operations is * discarded. 
* * Return: see crypto_ahash_final() */ -static inline int crypto_ahash_init(struct ahash_request *req) -{ - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - - if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) - return -ENOKEY; - - return tfm->init(req); -} - -static inline struct crypto_istat_hash *hash_get_stat( - struct hash_alg_common *alg) -{ -#ifdef CONFIG_CRYPTO_STATS - return &alg->stat; -#else - return NULL; -#endif -} - -static inline int crypto_hash_errstat(struct hash_alg_common *alg, int err) -{ - if (!IS_ENABLED(CONFIG_CRYPTO_STATS)) - return err; - - if (err && err != -EINPROGRESS && err != -EBUSY) - atomic64_inc(&hash_get_stat(alg)->err_cnt); - - return err; -} +int crypto_ahash_init(struct ahash_request *req); /** * crypto_ahash_update() - add data to message digest for processing * @req: ahash_request handle that was previously initialized with the * crypto_ahash_init call. * * Updates the message digest state of the &ahash_request handle. The input data * is pointed to by the scatter/gather list registered in the &ahash_request * handle * * Return: see crypto_ahash_final() */ -static inline int crypto_ahash_update(struct ahash_request *req) -{ - struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); - struct hash_alg_common *alg = crypto_hash_alg_common(tfm); - - if (IS_ENABLED(CONFIG_CRYPTO_STATS)) - atomic64_add(req->nbytes, &hash_get_stat(alg)->hash_tlen); - - return crypto_hash_errstat(alg, tfm->update(req)); -} +int crypto_ahash_update(struct ahash_request *req); /** * DOC: Asynchronous Hash Request Handle * * The &ahash_request data structure contains all pointers to data * required for the asynchronous cipher operation. This includes the cipher * handle (which can be used by multiple &ahash_request instances), pointer * to plaintext and the message digest output buffer, asynchronous callback * function, etc. It acts as a handle to the ahash_request_* API calls in a * similar way as ahash handle to the crypto_ahash_* API calls.
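To make the effect of this final patch concrete, here is a minimal, hypothetical ahash user (kernel-style sketch; the function name is invented, error paths are abbreviated, and the data buffer must not live on the stack since it is mapped into a scatterlist). Nothing in the caller changes with this series, but when "sha256" resolves to an shash-backed implementation, each crypto_ahash_* call below now dispatches directly to the shash code instead of going through per-tfm function pointers:

#include <crypto/hash.h>
#include <linux/scatterlist.h>

static int sha256_digest_example(const void *data, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_ahash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, data, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, out, len);

	/* Wait synchronously in case a hardware driver completes async. */
	err = crypto_wait_req(crypto_ahash_digest(req), &wait);

	ahash_request_free(req);
out_free_tfm:
	crypto_free_ahash(tfm);
	return err;
}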