From patchwork Thu Oct 19 05:53:27 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 735845
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 01/17] crypto: sparc/crc32c - stop using the shash alignmask
Date: Wed, 18 Oct 2023 22:53:27 -0700
Message-ID: <20231019055343.588846-2-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

As far as I can tell, "crc32c-sparc64" is the only "shash" algorithm in the
kernel that sets a nonzero alignmask and actually relies on it to get the
crypto API to align the inputs and outputs.  This capability is not really
useful, though.  To unblock removing the support for alignmask from
shash_alg, this patch updates crc32c-sparc64 to no longer use the alignmask.
This means doing 8-byte alignment of the data when doing an update, using
get_unaligned_le32() when setting a non-default initial CRC, and using
put_unaligned_le32() to output the final CRC.

Partially tested with:

    export ARCH=sparc64 CROSS_COMPILE=sparc64-linux-gnu-
    make sparc64_defconfig
    echo CONFIG_CRYPTO_CRC32C_SPARC64=y >> .config
    echo '# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set' >> .config
    echo CONFIG_DEBUG_KERNEL=y >> .config
    echo CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y >> .config
    make olddefconfig
    make -j$(getconf _NPROCESSORS_ONLN)
    qemu-system-sparc64 -kernel arch/sparc/boot/image -nographic

However, qemu doesn't actually support the sparc CRC32C instructions, so for
the test I temporarily replaced crc32c_sparc64() with __crc32c_le() and made
sparc64_has_crc32c_opcode() always return true.  So essentially I tested the
glue code, not the actual SPARC part, which is unchanged.
Signed-off-by: Eric Biggers
---
 arch/sparc/crypto/crc32c_glue.c | 45 ++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 21 deletions(-)

diff --git a/arch/sparc/crypto/crc32c_glue.c b/arch/sparc/crypto/crc32c_glue.c
index 82efb7f81c288..688db0dcb97d9 100644
--- a/arch/sparc/crypto/crc32c_glue.c
+++ b/arch/sparc/crypto/crc32c_glue.c
@@ -13,97 +13,101 @@
 #include
 #include
 #include
 #include
 #include
 #include
 #include
 #include
+#include
 #include "opcodes.h"

 /*
  * Setting the seed allows arbitrary accumulators and flexible XOR policy
  * If your algorithm starts with ~0, then XOR with ~0 before you set
  * the seed.
  */

 static int crc32c_sparc64_setkey(struct crypto_shash *hash, const u8 *key,
				  unsigned int keylen)
 {
	u32 *mctx = crypto_shash_ctx(hash);

	if (keylen != sizeof(u32))
		return -EINVAL;
-	*mctx = le32_to_cpup((__le32 *)key);
+	*mctx = get_unaligned_le32(key);
	return 0;
 }

 static int crc32c_sparc64_init(struct shash_desc *desc)
 {
	u32 *mctx = crypto_shash_ctx(desc->tfm);
	u32 *crcp = shash_desc_ctx(desc);

	*crcp = *mctx;
	return 0;
 }

 extern void crc32c_sparc64(u32 *crcp, const u64 *data, unsigned int len);

-static void crc32c_compute(u32 *crcp, const u64 *data, unsigned int len)
+static u32 crc32c_compute(u32 crc, const u8 *data, unsigned int len)
 {
-	unsigned int asm_len;
-
-	asm_len = len & ~7U;
-	if (asm_len) {
-		crc32c_sparc64(crcp, data, asm_len);
-		data += asm_len / 8;
-		len -= asm_len;
+	unsigned int n = -(uintptr_t)data & 7;
+
+	if (n) {
+		/* Data isn't 8-byte aligned.  Align it. */
+		n = min(n, len);
+		crc = __crc32c_le(crc, data, n);
+		data += n;
+		len -= n;
+	}
+	n = len & ~7U;
+	if (n) {
+		crc32c_sparc64(&crc, (const u64 *)data, n);
+		data += n;
+		len -= n;
 	}
 	if (len)
-		*crcp = __crc32c_le(*crcp, (const unsigned char *) data, len);
+		crc = __crc32c_le(crc, data, len);
+	return crc;
 }

 static int crc32c_sparc64_update(struct shash_desc *desc, const u8 *data,
				 unsigned int len)
 {
	u32 *crcp = shash_desc_ctx(desc);

-	crc32c_compute(crcp, (const u64 *) data, len);
-
+	*crcp = crc32c_compute(*crcp, data, len);
	return 0;
 }

-static int __crc32c_sparc64_finup(u32 *crcp, const u8 *data, unsigned int len,
-				  u8 *out)
+static int __crc32c_sparc64_finup(const u32 *crcp, const u8 *data,
+				  unsigned int len, u8 *out)
 {
-	u32 tmp = *crcp;
-
-	crc32c_compute(&tmp, (const u64 *) data, len);
-
-	*(__le32 *) out = ~cpu_to_le32(tmp);
+	put_unaligned_le32(~crc32c_compute(*crcp, data, len), out);
	return 0;
 }

 static int crc32c_sparc64_finup(struct shash_desc *desc, const u8 *data,
				unsigned int len, u8 *out)
 {
	return __crc32c_sparc64_finup(shash_desc_ctx(desc), data, len, out);
 }

 static int crc32c_sparc64_final(struct shash_desc *desc, u8 *out)
 {
	u32 *crcp = shash_desc_ctx(desc);

-	*(__le32 *) out = ~cpu_to_le32p(crcp);
+	put_unaligned_le32(~*crcp, out);
	return 0;
 }

 static int crc32c_sparc64_digest(struct shash_desc *desc, const u8 *data,
				 unsigned int len, u8 *out)
 {
	return __crc32c_sparc64_finup(crypto_shash_ctx(desc->tfm), data, len,
				      out);
 }

@@ -128,21 +132,20 @@ static struct shash_alg alg = {
	.digest		= crc32c_sparc64_digest,
	.descsize	= sizeof(u32),
	.digestsize	= CHKSUM_DIGEST_SIZE,
	.base		= {
		.cra_name		= "crc32c",
		.cra_driver_name	= "crc32c-sparc64",
		.cra_priority		= SPARC_CR_OPCODE_PRIORITY,
		.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
		.cra_blocksize		= CHKSUM_BLOCK_SIZE,
		.cra_ctxsize		= sizeof(u32),
-		.cra_alignmask		= 7,
		.cra_module		= THIS_MODULE,
		.cra_init		= crc32c_sparc64_cra_init,
	}
 };

 static bool __init sparc64_has_crc32c_opcode(void)
 {
	unsigned long cfr;

	if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO))
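The glue-code change above follows a head/bulk/tail pattern: consume bytes
until the pointer is 8-byte aligned, run the fast 8-byte-per-step path on the
aligned middle, then finish the remaining bytes.  For readers outside the
kernel, a minimal user-space sketch of that pattern follows.  The helper
names crc32c_byte() and crc32c_fast8() are illustrative stand-ins for
__crc32c_le() and crc32c_sparc64(), not the kernel functions themselves:

    #include <stdint.h>
    #include <stddef.h>

    /* Bitwise CRC-32C on one byte; stand-in for a real software fallback. */
    static uint32_t crc32c_byte(uint32_t crc, uint8_t b)
    {
            crc ^= b;
            for (int i = 0; i < 8; i++)
                    crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
            return crc;
    }

    /* Stand-in for the 8-byte-at-a-time hardware/asm path. */
    static uint32_t crc32c_fast8(uint32_t crc, const uint64_t *p, size_t nbytes)
    {
            const uint8_t *q = (const uint8_t *)p;

            for (size_t i = 0; i < nbytes; i++)
                    crc = crc32c_byte(crc, q[i]);
            return crc;
    }

    static uint32_t crc32c_compute(uint32_t crc, const uint8_t *data, size_t len)
    {
            size_t n = -(uintptr_t)data & 7;   /* bytes until 8-byte alignment */

            if (n) {                           /* head: unaligned prefix */
                    if (n > len)
                            n = len;
                    len -= n;
                    while (n--)
                            crc = crc32c_byte(crc, *data++);
            }
            n = len & ~(size_t)7;              /* bulk: aligned 8-byte blocks */
            if (n) {
                    crc = crc32c_fast8(crc, (const uint64_t *)data, n);
                    data += n;
                    len -= n;
            }
            while (len--)                      /* tail: remaining bytes */
                    crc = crc32c_byte(crc, *data++);
            return crc;
    }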
From patchwork Thu Oct 19 05:53:30 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 735844
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 04/17] crypto: mips/crc32 - remove redundant setting of alignmask to 0
Date: Wed, 18 Oct 2023 22:53:30 -0700
Message-ID: <20231019055343.588846-5-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

This unnecessary explicit setting of cra_alignmask to 0 shows up when
grepping for shash algorithms that set an alignmask.  Remove it.  No change
in behavior.
Signed-off-by: Eric Biggers
---
 arch/mips/crypto/crc32-mips.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/mips/crypto/crc32-mips.c b/arch/mips/crypto/crc32-mips.c
index 3e4f5ba104f89..ec6d58008f8e1 100644
--- a/arch/mips/crypto/crc32-mips.c
+++ b/arch/mips/crypto/crc32-mips.c
@@ -283,21 +283,20 @@ static struct shash_alg crc32_alg = {
	.final		= chksum_final,
	.finup		= chksum_finup,
	.digest		= chksum_digest,
	.descsize	= sizeof(struct chksum_desc_ctx),
	.base		= {
		.cra_name		= "crc32",
		.cra_driver_name	= "crc32-mips-hw",
		.cra_priority		= 300,
		.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
		.cra_blocksize		= CHKSUM_BLOCK_SIZE,
-		.cra_alignmask		= 0,
		.cra_ctxsize		= sizeof(struct chksum_ctx),
		.cra_module		= THIS_MODULE,
		.cra_init		= chksum_cra_init,
	}
 };

 static struct shash_alg crc32c_alg = {
	.digestsize	= CHKSUM_DIGEST_SIZE,
	.setkey		= chksum_setkey,
	.init		= chksum_init,
@@ -305,21 +304,20 @@ static struct shash_alg crc32c_alg = {
	.final		= chksumc_final,
	.finup		= chksumc_finup,
	.digest		= chksumc_digest,
	.descsize	= sizeof(struct chksum_desc_ctx),
	.base		= {
		.cra_name		= "crc32c",
		.cra_driver_name	= "crc32c-mips-hw",
		.cra_priority		= 300,
		.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
		.cra_blocksize		= CHKSUM_BLOCK_SIZE,
-		.cra_alignmask		= 0,
		.cra_ctxsize		= sizeof(struct chksum_ctx),
		.cra_module		= THIS_MODULE,
		.cra_init		= chksum_cra_init,
	}
 };

 static int __init crc32_mod_init(void)
 {
	int err;

From patchwork Thu Oct 19 05:53:31 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 735843
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 05/17] crypto: loongarch/crc32 - remove redundant setting of alignmask to 0
Date: Wed, 18 Oct 2023 22:53:31 -0700
Message-ID: <20231019055343.588846-6-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

This unnecessary
explicit setting of cra_alignmask to 0 shows up when grepping for shash
algorithms that set an alignmask.  Remove it.  No change in behavior.

Signed-off-by: Eric Biggers
---
 arch/loongarch/crypto/crc32-loongarch.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/loongarch/crypto/crc32-loongarch.c b/arch/loongarch/crypto/crc32-loongarch.c
index 1f2a2c3839bcb..a49e507af38c0 100644
--- a/arch/loongarch/crypto/crc32-loongarch.c
+++ b/arch/loongarch/crypto/crc32-loongarch.c
@@ -232,21 +232,20 @@ static struct shash_alg crc32_alg = {
	.final		= chksum_final,
	.finup		= chksum_finup,
	.digest		= chksum_digest,
	.descsize	= sizeof(struct chksum_desc_ctx),
	.base		= {
		.cra_name		= "crc32",
		.cra_driver_name	= "crc32-loongarch",
		.cra_priority		= 300,
		.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
		.cra_blocksize		= CHKSUM_BLOCK_SIZE,
-		.cra_alignmask		= 0,
		.cra_ctxsize		= sizeof(struct chksum_ctx),
		.cra_module		= THIS_MODULE,
		.cra_init		= chksum_cra_init,
	}
 };

 static struct shash_alg crc32c_alg = {
	.digestsize	= CHKSUM_DIGEST_SIZE,
	.setkey		= chksum_setkey,
	.init		= chksum_init,
@@ -254,21 +253,20 @@ static struct shash_alg crc32c_alg = {
	.final		= chksumc_final,
	.finup		= chksumc_finup,
	.digest		= chksumc_digest,
	.descsize	= sizeof(struct chksum_desc_ctx),
	.base		= {
		.cra_name		= "crc32c",
		.cra_driver_name	= "crc32c-loongarch",
		.cra_priority		= 300,
		.cra_flags		= CRYPTO_ALG_OPTIONAL_KEY,
		.cra_blocksize		= CHKSUM_BLOCK_SIZE,
-		.cra_alignmask		= 0,
		.cra_ctxsize		= sizeof(struct chksum_ctx),
		.cra_module		= THIS_MODULE,
		.cra_init		= chksumc_cra_init,
	}
 };

 static int __init crc32_mod_init(void)
 {
	int err;
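The two patches above can drop the explicit .cra_alignmask = 0 because C
designated initializers zero every member that is not named.  A minimal,
self-contained illustration of that language rule (the struct and field
names here are hypothetical, chosen only to mirror the cra_* fields above):

    #include <stdio.h>

    struct base_params {
            const char *name;
            unsigned int priority;
            unsigned int alignmask;    /* never mentioned below */
    };

    int main(void)
    {
            /* Members not listed in the initializer are implicitly zeroed. */
            struct base_params p = {
                    .name = "crc32-example",
                    .priority = 300,
            };

            printf("%s: alignmask = %u\n", p.name, p.alignmask);  /* prints 0 */
            return 0;
    }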
From patchwork Thu Oct 19 05:53:35 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 735842
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 09/17] crypto: vmac - don't set alignmask
Date: Wed, 18 Oct 2023 22:53:35 -0700
Message-ID: <20231019055343.588846-10-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

The vmac template is setting its alignmask to that of its underlying
'cipher'.  This doesn't actually accomplish anything useful, though, so stop
doing it.  (vmac_update() does have an alignment bug, where it assumes u64
alignment when it shouldn't, but that bug exists both before and after this
patch.)  This is a prerequisite for removing support for nonzero alignmasks
from shash.

Signed-off-by: Eric Biggers
---
 crypto/vmac.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/crypto/vmac.c b/crypto/vmac.c
index 4633b2dda1e0a..0a1d8efa6c1a6 100644
--- a/crypto/vmac.c
+++ b/crypto/vmac.c
@@ -642,21 +642,20 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
	err = -EINVAL;
	if (alg->cra_blocksize != VMAC_NONCEBYTES)
		goto err_free_inst;

	err = crypto_inst_setname(shash_crypto_instance(inst), tmpl->name, alg);
	if (err)
		goto err_free_inst;

	inst->alg.base.cra_priority = alg->cra_priority;
	inst->alg.base.cra_blocksize = alg->cra_blocksize;
-	inst->alg.base.cra_alignmask = alg->cra_alignmask;
	inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
	inst->alg.base.cra_init = vmac_init_tfm;
	inst->alg.base.cra_exit = vmac_exit_tfm;

	inst->alg.descsize = sizeof(struct vmac_desc_ctx);
	inst->alg.digestsize = VMAC_TAG_LEN / 8;
	inst->alg.init = vmac_init;
	inst->alg.update = vmac_update;
	inst->alg.final = vmac_final;

From patchwork Thu Oct 19 05:53:37 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 735837
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 11/17] crypto: shash - remove support for nonzero alignmask
Date: Wed, 18 Oct 2023 22:53:37 -0700
Message-ID: <20231019055343.588846-12-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Eric Biggers Currently, the shash API checks the alignment of all message, key, and digest buffers against the algorithm's declared alignmask, and for any unaligned buffers it falls back to manually aligned temporary buffers. This is virtually useless, however. In the case of the message buffer, cryptographic hash functions internally operate on fixed-size blocks, so implementations end up needing to deal with byte-aligned data anyway because the length(s) passed to ->update might not be divisible by the block size. Word-alignment of the message can theoretically be helpful for CRCs, like what was being done in crc32c-sparc64. But in practice it's better for the algorithms to use unaligned accesses or align the message themselves. A similar argument applies to the key and digest. In any case, no shash algorithms actually set a nonzero alignmask anymore. Therefore, remove support for it from shash. The benefit is that all the code to handle "misaligned" buffers in the shash API goes away, reducing the overhead of the shash API. Signed-off-by: Eric Biggers --- crypto/shash.c | 128 ++++--------------------------------------------- 1 file changed, 8 insertions(+), 120 deletions(-) diff --git a/crypto/shash.c b/crypto/shash.c index 52420c41db44a..409b33f9c97cc 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -3,264 +3,151 @@ * Synchronous Cryptographic Hash operations. * * Copyright (c) 2008 Herbert Xu */ #include #include #include #include #include -#include #include #include #include #include "hash.h" -#define MAX_SHASH_ALIGNMASK 63 - static const struct crypto_type crypto_shash_type; static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg) { return hash_get_stat(&alg->halg); } static inline int crypto_shash_errstat(struct shash_alg *alg, int err) { return crypto_hash_errstat(&alg->halg, err); } int shash_no_setkey(struct crypto_shash *tfm, const u8 *key, unsigned int keylen) { return -ENOSYS; } EXPORT_SYMBOL_GPL(shash_no_setkey); -static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key, - unsigned int keylen) -{ - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); - unsigned long absize; - u8 *buffer, *alignbuffer; - int err; - - absize = keylen + (alignmask & ~(crypto_tfm_ctx_alignment() - 1)); - buffer = kmalloc(absize, GFP_ATOMIC); - if (!buffer) - return -ENOMEM; - - alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1); - memcpy(alignbuffer, key, keylen); - err = shash->setkey(tfm, alignbuffer, keylen); - kfree_sensitive(buffer); - return err; -} - static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg) { if (crypto_shash_alg_needs_key(alg)) crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY); } int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key, unsigned int keylen) { struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); int err; - if ((unsigned long)key & alignmask) - err = shash_setkey_unaligned(tfm, key, keylen); - else - err = shash->setkey(tfm, key, keylen); - + err = shash->setkey(tfm, key, keylen); if (unlikely(err)) { shash_set_needkey(tfm, shash); return err; } crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); return 0; } EXPORT_SYMBOL_GPL(crypto_shash_setkey); -static int shash_update_unaligned(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - struct crypto_shash *tfm = desc->tfm; - struct shash_alg *shash = 
crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); - unsigned int unaligned_len = alignmask + 1 - - ((unsigned long)data & alignmask); - /* - * We cannot count on __aligned() working for large values: - * https://patchwork.kernel.org/patch/9507697/ - */ - u8 ubuf[MAX_SHASH_ALIGNMASK * 2]; - u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1); - int err; - - if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf))) - return -EINVAL; - - if (unaligned_len > len) - unaligned_len = len; - - memcpy(buf, data, unaligned_len); - err = shash->update(desc, buf, unaligned_len); - memset(buf, 0, unaligned_len); - - return err ?: - shash->update(desc, data + unaligned_len, len - unaligned_len); -} - int crypto_shash_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - struct crypto_shash *tfm = desc->tfm; - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); + struct shash_alg *shash = crypto_shash_alg(desc->tfm); int err; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) atomic64_add(len, &shash_get_stat(shash)->hash_tlen); - if ((unsigned long)data & alignmask) - err = shash_update_unaligned(desc, data, len); - else - err = shash->update(desc, data, len); + err = shash->update(desc, data, len); return crypto_shash_errstat(shash, err); } EXPORT_SYMBOL_GPL(crypto_shash_update); -static int shash_final_unaligned(struct shash_desc *desc, u8 *out) -{ - struct crypto_shash *tfm = desc->tfm; - unsigned long alignmask = crypto_shash_alignmask(tfm); - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned int ds = crypto_shash_digestsize(tfm); - /* - * We cannot count on __aligned() working for large values: - * https://patchwork.kernel.org/patch/9507697/ - */ - u8 ubuf[MAX_SHASH_ALIGNMASK + HASH_MAX_DIGESTSIZE]; - u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1); - int err; - - if (WARN_ON(buf + ds > ubuf + sizeof(ubuf))) - return -EINVAL; - - err = shash->final(desc, buf); - if (err) - goto out; - - memcpy(out, buf, ds); - -out: - memset(buf, 0, ds); - return err; -} - int crypto_shash_final(struct shash_desc *desc, u8 *out) { - struct crypto_shash *tfm = desc->tfm; - struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); + struct shash_alg *shash = crypto_shash_alg(desc->tfm); int err; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) atomic64_inc(&shash_get_stat(shash)->hash_cnt); - if ((unsigned long)out & alignmask) - err = shash_final_unaligned(desc, out); - else - err = shash->final(desc, out); + err = shash->final(desc, out); return crypto_shash_errstat(shash, err); } EXPORT_SYMBOL_GPL(crypto_shash_final); -static int shash_finup_unaligned(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return shash_update_unaligned(desc, data, len) ?: - shash_final_unaligned(desc, out); -} - static int shash_default_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct shash_alg *shash = crypto_shash_alg(desc->tfm); return shash->update(desc, data, len) ?: shash->final(desc, out); } int crypto_shash_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct crypto_shash *tfm = desc->tfm; struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); int err; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) { struct crypto_istat_hash *istat = shash_get_stat(shash); atomic64_inc(&istat->hash_cnt); atomic64_add(len, &istat->hash_tlen); } - if (((unsigned long)data | (unsigned long)out) & 
alignmask) - err = shash_finup_unaligned(desc, data, len, out); - else - err = shash->finup(desc, data, len, out); - + err = shash->finup(desc, data, len, out); return crypto_shash_errstat(shash, err); } EXPORT_SYMBOL_GPL(crypto_shash_finup); static int shash_default_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct shash_alg *shash = crypto_shash_alg(desc->tfm); return shash->init(desc) ?: shash->finup(desc, data, len, out); } int crypto_shash_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct crypto_shash *tfm = desc->tfm; struct shash_alg *shash = crypto_shash_alg(tfm); - unsigned long alignmask = crypto_shash_alignmask(tfm); int err; if (IS_ENABLED(CONFIG_CRYPTO_STATS)) { struct crypto_istat_hash *istat = shash_get_stat(shash); atomic64_inc(&istat->hash_cnt); atomic64_add(len, &istat->hash_tlen); } if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY) err = -ENOKEY; - else if (((unsigned long)data | (unsigned long)out) & alignmask) - err = shash->init(desc) ?: - shash_finup_unaligned(desc, data, len, out); else err = shash->digest(desc, data, len, out); return crypto_shash_errstat(shash, err); } EXPORT_SYMBOL_GPL(crypto_shash_digest); int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data, unsigned int len, u8 *out) { @@ -663,21 +550,22 @@ int hash_prepare_alg(struct hash_alg_common *alg) } static int shash_prepare_alg(struct shash_alg *alg) { struct crypto_alg *base = &alg->halg.base; int err; if (alg->descsize > HASH_MAX_DESCSIZE) return -EINVAL; - if (base->cra_alignmask > MAX_SHASH_ALIGNMASK) + /* alignmask is not useful for shash, so it is not supported. */ + if (base->cra_alignmask) return -EINVAL; if ((alg->export && !alg->import) || (alg->import && !alg->export)) return -EINVAL; err = hash_prepare_alg(&alg->halg); if (err) return err; base->cra_type = &crypto_shash_type; From patchwork Thu Oct 19 05:53:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 735841 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73528CDB483 for ; Thu, 19 Oct 2023 05:54:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232650AbjJSFy0 (ORCPT ); Thu, 19 Oct 2023 01:54:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55426 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232057AbjJSFyR (ORCPT ); Thu, 19 Oct 2023 01:54:17 -0400 Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3E97A11B for ; Wed, 18 Oct 2023 22:54:15 -0700 (PDT) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D5C1EC433CB for ; Thu, 19 Oct 2023 05:54:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1697694854; bh=WZ1hRhoDd4lCMgZ0YancZdiBlLpsjneFbre+ZmH4G/E=; h=From:To:Subject:Date:In-Reply-To:References:From; b=vKNjDSUqoOZWhVRnRj3JK3CAdtOS5Gf6iV7Q7ROCEkQjis7RCZdyMrbcqn89/h9A8 bT174ZX6KLxPwh7olqAiv/J3eN/w2aGUIZH1BuqF96dpH2TXx8pM+twzH5D7bXjIrR HsB0ujykcUfMCqaSlAwhKtByS2u2ffwjFKrUZEMArMUZ8Rqm7IEnbGinPsrtXVOSHH AiA1ohW//vX1YL929f7345oNb++UQYTmpSwIw8h1fna5wxvNqfKA5qDRxcC11u7dzT mx4pEUusOVNPFjffLf2LEgoAN4lN+eTprkPb+cRpS31vDi3/X9tvYEf+nUGXP64HGU uEpdHPuYGSrVg== From: 
Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 12/17] libceph: stop checking crypto_shash_alignmask
Date: Wed, 18 Oct 2023 22:53:38 -0700
Message-ID: <20231019055343.588846-13-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

Now that the shash algorithm type does not support nonzero alignmasks,
crypto_shash_alignmask() always returns 0 and will be removed.  In
preparation for this, stop checking crypto_shash_alignmask() in
net/ceph/messenger_v2.c.

Signed-off-by: Eric Biggers
---
 net/ceph/messenger_v2.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c
index d09a39ff2cf04..f8ec60e1aba3a 100644
--- a/net/ceph/messenger_v2.c
+++ b/net/ceph/messenger_v2.c
@@ -726,22 +726,20 @@ static int setup_crypto(struct ceph_connection *con,
	noio_flag = memalloc_noio_save();
	con->v2.hmac_tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
	memalloc_noio_restore(noio_flag);
	if (IS_ERR(con->v2.hmac_tfm)) {
		ret = PTR_ERR(con->v2.hmac_tfm);
		con->v2.hmac_tfm = NULL;
		pr_err("failed to allocate hmac tfm context: %d\n", ret);
		return ret;
	}

-	WARN_ON((unsigned long)session_key &
-		crypto_shash_alignmask(con->v2.hmac_tfm));
	ret = crypto_shash_setkey(con->v2.hmac_tfm, session_key,
				  session_key_len);
	if (ret) {
		pr_err("failed to set hmac key: %d\n", ret);
		return ret;
	}

	if (con->v2.con_mode == CEPH_CON_MODE_CRC) {
		WARN_ON(con_secret_len);
		return 0;  /* auth_x, plain mode */
@@ -809,22 +807,20 @@ static int hmac_sha256(struct ceph_connection *con, const struct kvec *kvecs,
		memset(hmac, 0, SHA256_DIGEST_SIZE);
		return 0;  /* auth_none */
	}

	desc->tfm = con->v2.hmac_tfm;
	ret = crypto_shash_init(desc);
	if (ret)
		goto out;

	for (i = 0; i < kvec_cnt; i++) {
-		WARN_ON((unsigned long)kvecs[i].iov_base &
-			crypto_shash_alignmask(con->v2.hmac_tfm));
		ret = crypto_shash_update(desc, kvecs[i].iov_base,
					  kvecs[i].iov_len);
		if (ret)
			goto out;
	}

	ret = crypto_shash_final(desc, hmac);

out:
	shash_desc_zero(desc);
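The hunk above shows the usual incremental shash calling convention, which
is unchanged by this series except that callers no longer need to worry
about buffer alignment.  A condensed, hedged sketch of that pattern,
assembled from the functions visible in these patches (the one-shot wrapper
name is hypothetical, and error handling is reduced to the essentials):

    #include <crypto/hash.h>

    static int hmac_sha256_oneshot(const u8 *key, unsigned int keylen,
                                   const u8 *data, unsigned int len, u8 *out)
    {
            struct crypto_shash *tfm;
            int ret;

            tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            ret = crypto_shash_setkey(tfm, key, keylen);
            if (!ret) {
                    SHASH_DESC_ON_STACK(desc, tfm);

                    desc->tfm = tfm;
                    /* With the alignmask gone, data and out may have any alignment. */
                    ret = crypto_shash_init(desc) ?:
                          crypto_shash_update(desc, data, len) ?:
                          crypto_shash_final(desc, out);
                    shash_desc_zero(desc);
            }
            crypto_free_shash(tfm);
            return ret;
    }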
From patchwork Thu Oct 19 05:53:39 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 735840
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 13/17] crypto: drbg - stop checking crypto_shash_alignmask
Date: Wed, 18 Oct 2023 22:53:39 -0700
Message-ID: <20231019055343.588846-14-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

Now that the shash algorithm type does not support nonzero alignmasks,
crypto_shash_alignmask() always returns 0 and will be removed.  In
preparation for this, stop checking crypto_shash_alignmask() in drbg.

Signed-off-by: Eric Biggers
---
 crypto/drbg.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index ff4ebbc68efab..e01f8c7769d03 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1691,21 +1691,21 @@ static int drbg_init_hash_kernel(struct drbg_state *drbg)
	sdesc = kzalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm),
			GFP_KERNEL);
	if (!sdesc) {
		crypto_free_shash(tfm);
		return -ENOMEM;
	}

	sdesc->shash.tfm = tfm;
	drbg->priv_data = sdesc;

-	return crypto_shash_alignmask(tfm);
+	return 0;
 }

 static int drbg_fini_hash_kernel(struct drbg_state *drbg)
 {
	struct sdesc *sdesc = drbg->priv_data;

	if (sdesc) {
		crypto_free_shash(sdesc->shash.tfm);
		kfree_sensitive(sdesc);
	}
	drbg->priv_data = NULL;
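The context lines above also show the heap-allocation pattern for a shash
descriptor: a fixed header followed by crypto_shash_descsize(tfm) bytes of
per-request state.  A small sketch of that pattern, with a hypothetical
helper name and only the essential error handling:

    #include <crypto/hash.h>
    #include <linux/slab.h>

    /* Allocate a zeroed shash_desc large enough for tfm's per-request state. */
    static struct shash_desc *alloc_shash_desc(struct crypto_shash *tfm)
    {
            struct shash_desc *desc;

            desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
            if (desc)
                    desc->tfm = tfm;
            return desc;
    }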
From patchwork Thu Oct 19 05:53:40 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 735838
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 14/17] crypto: testmgr - stop checking crypto_shash_alignmask
Date: Wed, 18 Oct 2023 22:53:40 -0700
Message-ID: <20231019055343.588846-15-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

Now that the shash algorithm type does not support nonzero alignmasks,
crypto_shash_alignmask() always returns 0 and will be removed.  In
preparation for this, stop checking crypto_shash_alignmask() in testmgr.

Signed-off-by: Eric Biggers
---
 crypto/testmgr.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 54135c7610f06..48a0929c7a158 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1268,50 +1268,49 @@ static inline int check_shash_op(const char *op, int err,

 /* Test one hash test vector in one configuration, using the shash API */
 static int test_shash_vec_cfg(const struct hash_testvec *vec,
			       const char *vec_name,
			       const struct testvec_config *cfg,
			       struct shash_desc *desc,
			       struct test_sglist *tsgl,
			       u8 *hashstate)
 {
	struct crypto_shash *tfm = desc->tfm;
-	const unsigned int alignmask = crypto_shash_alignmask(tfm);
	const unsigned int digestsize = crypto_shash_digestsize(tfm);
	const unsigned int statesize = crypto_shash_statesize(tfm);
	const char *driver = crypto_shash_driver_name(tfm);
	const struct test_sg_division *divs[XBUFSIZE];
	unsigned int i;
	u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN];
	int err;

	/* Set the key, if specified */
	if (vec->ksize) {
		err = do_setkey(crypto_shash_setkey, tfm, vec->key, vec->ksize,
-				cfg, alignmask);
+				cfg, 0);
		if (err) {
			if (err == vec->setkey_error)
				return 0;
			pr_err("alg: shash: %s setkey failed on test vector %s; expected_error=%d, actual_error=%d, flags=%#x\n",
			       driver, vec_name, vec->setkey_error, err,
			       crypto_shash_get_flags(tfm));
			return err;
		}
		if (vec->setkey_error) {
			pr_err("alg: shash: %s setkey unexpectedly succeeded on test vector %s; expected_error=%d\n",
			       driver, vec_name, vec->setkey_error);
			return -EINVAL;
		}
	}

	/* Build the scatterlist for the source data */
-	err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs);
+	err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
	if (err) {
		pr_err("alg: shash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
		       driver, vec_name, cfg->name);
		return err;
	}

	/* Do the actual hashing */
	testmgr_poison(desc->__ctx, crypto_shash_descsize(tfm));
	testmgr_poison(result, digestsize + TESTMGR_POISON_LEN);

From patchwork Thu Oct 19 05:53:41 2023
X-Patchwork-Submitter: Eric Biggers
X-Patchwork-Id: 735839
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Subject: [PATCH 15/17] crypto: adiantum - stop using alignmask of shash_alg
Date: Wed, 18 Oct 2023 22:53:41 -0700
Message-ID: <20231019055343.588846-16-ebiggers@kernel.org>
In-Reply-To: <20231019055343.588846-1-ebiggers@kernel.org>
References: <20231019055343.588846-1-ebiggers@kernel.org>

From: Eric Biggers

Now that the shash algorithm type does not support nonzero alignmasks,
shash_alg::base.cra_alignmask is always 0, so OR-ing it into another value
is a no-op.

Signed-off-by: Eric Biggers
---
 crypto/adiantum.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index 51703746d91e2..064a0a57c77c1 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -554,22 +554,21 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
		goto err_free_inst;

	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
		     "adiantum(%s,%s,%s)",
		     streamcipher_alg->base.cra_driver_name,
		     blockcipher_alg->cra_driver_name,
		     hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
		goto err_free_inst;

	inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
	inst->alg.base.cra_ctxsize = sizeof(struct adiantum_tfm_ctx);
-	inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask |
-				       hash_alg->base.cra_alignmask;
+	inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask;

	/*
	 * The block cipher is only invoked once per message, so for long
	 * messages (e.g. sectors for disk encryption) its performance doesn't
	 * matter as much as that of the stream cipher and hash function.  Thus,
	 * weigh the block cipher's ->cra_priority less.
	 */
	inst->alg.base.cra_priority = (4 * streamcipher_alg->base.cra_priority +
				       2 * hash_alg->base.cra_priority +
				       blockcipher_alg->cra_priority) / 7;
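Throughout this series, drivers take over byte-alignment handling themselves
using get_unaligned_le32() and put_unaligned_le32().  Outside the kernel,
equivalent helpers are commonly open-coded with byte accesses or memcpy,
which compilers lower to plain loads and stores where the architecture
permits them.  A minimal user-space sketch of such helpers (illustrative
only, not the kernel's implementation):

    #include <stdint.h>
    #include <string.h>

    /* Read a little-endian 32-bit value from a possibly unaligned pointer. */
    static inline uint32_t get_unaligned_le32(const void *p)
    {
            const uint8_t *b = p;

            return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
                   ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
    }

    /* Write a little-endian 32-bit value to a possibly unaligned pointer. */
    static inline void put_unaligned_le32(uint32_t val, void *p)
    {
            uint8_t b[4] = { val, val >> 8, val >> 16, val >> 24 };

            memcpy(p, b, sizeof(b));    /* safe at any alignment */
    }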