From patchwork Thu Dec 12 09:30:08 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 181458
Delivered-To: patch@linaro.org
From: "Jason A. Donenfeld"
To: linux-crypto@vger.kernel.org, ebiggers@kernel.org
Cc: "Jason A. Donenfeld", Ard Biesheuvel
Subject: [PATCH crypto-next v2 3/3] crypto: arm/arm64/mips/poly1305 - remove
 redundant non-reduction from emit
Date: Thu, 12 Dec 2019 10:30:08 +0100
Message-Id: <20191212093008.217086-3-Jason@zx2c4.com>
In-Reply-To: <20191212093008.217086-1-Jason@zx2c4.com>
References: <20191211170936.385572-1-Jason@zx2c4.com>
 <20191212093008.217086-1-Jason@zx2c4.com>
Sender: linux-crypto-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-crypto@vger.kernel.org

This appears to be some kind of copy and paste error, and is actually
dead code.

Pre: f = 0 ⇒ (f >> 32) = 0
    f = (f >> 32) + le32_to_cpu(digest[0]);
Post: 0 ≤ f < 2³²
    put_unaligned_le32(f, dst);

Pre: 0 ≤ f < 2³² ⇒ (f >> 32) = 0
    f = (f >> 32) + le32_to_cpu(digest[1]);
Post: 0 ≤ f < 2³²
    put_unaligned_le32(f, dst + 4);

Pre: 0 ≤ f < 2³² ⇒ (f >> 32) = 0
    f = (f >> 32) + le32_to_cpu(digest[2]);
Post: 0 ≤ f < 2³²
    put_unaligned_le32(f, dst + 8);

Pre: 0 ≤ f < 2³² ⇒ (f >> 32) = 0
    f = (f >> 32) + le32_to_cpu(digest[3]);
Post: 0 ≤ f < 2³²
    put_unaligned_le32(f, dst + 12);

Therefore this sequence is redundant. And Andy's code appears to handle
misalignment acceptably.

Signed-off-by: Jason A. Donenfeld
Cc: Ard Biesheuvel
Tested-by: Ard Biesheuvel
Reviewed-by: Ard Biesheuvel
---
 arch/arm/crypto/poly1305-glue.c   | 18 ++----------------
 arch/arm64/crypto/poly1305-glue.c | 18 ++----------------
 arch/mips/crypto/poly1305-glue.c  | 18 ++----------------
 3 files changed, 6 insertions(+), 48 deletions(-)

diff --git a/arch/arm/crypto/poly1305-glue.c b/arch/arm/crypto/poly1305-glue.c
index abe3f2d587dc..ceec04ec2f40 100644
--- a/arch/arm/crypto/poly1305-glue.c
+++ b/arch/arm/crypto/poly1305-glue.c
@@ -20,7 +20,7 @@
 
 void poly1305_init_arm(void *state, const u8 *key);
 void poly1305_blocks_arm(void *state, const u8 *src, u32 len, u32 hibit);
-void poly1305_emit_arm(void *state, __le32 *digest, const u32 *nonce);
+void poly1305_emit_arm(void *state, u8 *digest, const u32 *nonce);
 
 void __weak poly1305_blocks_neon(void *state, const u8 *src, u32 len, u32 hibit)
 {
@@ -179,9 +179,6 @@ EXPORT_SYMBOL(poly1305_update_arch);
 
 void poly1305_final_arch(struct poly1305_desc_ctx *dctx, u8 *dst)
 {
-	__le32 digest[4];
-	u64 f = 0;
-
 	if (unlikely(dctx->buflen)) {
 		dctx->buf[dctx->buflen++] = 1;
 		memset(dctx->buf + dctx->buflen, 0,
@@ -189,18 +186,7 @@ void poly1305_final_arch(struct poly1305_desc_ctx *dctx, u8 *dst)
 		poly1305_blocks_arm(&dctx->h, dctx->buf, POLY1305_BLOCK_SIZE, 0);
 	}
 
-	poly1305_emit_arm(&dctx->h, digest, dctx->s);
-
-	/* mac = (h + s) % (2^128) */
-	f = (f >> 32) + le32_to_cpu(digest[0]);
-	put_unaligned_le32(f, dst);
-	f = (f >> 32) + le32_to_cpu(digest[1]);
-	put_unaligned_le32(f, dst + 4);
-	f = (f >> 32) + le32_to_cpu(digest[2]);
-	put_unaligned_le32(f, dst + 8);
-	f = (f >> 32) + le32_to_cpu(digest[3]);
-	put_unaligned_le32(f, dst + 12);
-
+	poly1305_emit_arm(&dctx->h, dst, dctx->s);
 	*dctx = (struct poly1305_desc_ctx){};
 }
 EXPORT_SYMBOL(poly1305_final_arch);
diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c
index 83a2338a8826..e97b092f56b8 100644
--- a/arch/arm64/crypto/poly1305-glue.c
+++ b/arch/arm64/crypto/poly1305-glue.c
@@ -21,7 +21,7 @@
 asmlinkage void poly1305_init_arm64(void *state, const u8 *key);
 asmlinkage void poly1305_blocks(void *state, const u8 *src, u32 len, u32 hibit);
 asmlinkage void poly1305_blocks_neon(void *state, const u8 *src, u32 len, u32 hibit);
-asmlinkage void poly1305_emit(void *state, __le32 *digest, const u32 *nonce);
+asmlinkage void poly1305_emit(void *state, u8 *digest, const u32 *nonce);
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
 
@@ -162,9 +162,6 @@ EXPORT_SYMBOL(poly1305_update_arch);
 
 void poly1305_final_arch(struct poly1305_desc_ctx *dctx, u8 *dst)
 {
-	__le32 digest[4];
-	u64 f = 0;
-
 	if (unlikely(dctx->buflen)) {
 		dctx->buf[dctx->buflen++] = 1;
 		memset(dctx->buf + dctx->buflen, 0,
@@ -172,18 +169,7 @@ void poly1305_final_arch(struct poly1305_desc_ctx *dctx, u8 *dst)
 		poly1305_blocks(&dctx->h, dctx->buf, POLY1305_BLOCK_SIZE, 0);
 	}
 
-	poly1305_emit(&dctx->h, digest, dctx->s);
-
-	/* mac = (h + s) % (2^128) */
-	f = (f >> 32) + le32_to_cpu(digest[0]);
-	put_unaligned_le32(f, dst);
-	f = (f >> 32) + le32_to_cpu(digest[1]);
-	put_unaligned_le32(f, dst + 4);
-	f = (f >> 32) + le32_to_cpu(digest[2]);
-	put_unaligned_le32(f, dst + 8);
-	f = (f >> 32) + le32_to_cpu(digest[3]);
-	put_unaligned_le32(f, dst + 12);
-
+	poly1305_emit(&dctx->h, dst, dctx->s);
 	*dctx = (struct poly1305_desc_ctx){};
 }
 EXPORT_SYMBOL(poly1305_final_arch);
diff --git a/arch/mips/crypto/poly1305-glue.c b/arch/mips/crypto/poly1305-glue.c
index b37d29cf5d0a..fc881b46d911 100644
--- a/arch/mips/crypto/poly1305-glue.c
+++ b/arch/mips/crypto/poly1305-glue.c
@@ -15,7 +15,7 @@
 
 asmlinkage void poly1305_init_mips(void *state, const u8 *key);
 asmlinkage void poly1305_blocks_mips(void *state, const u8 *src, u32 len, u32 hibit);
-asmlinkage void poly1305_emit_mips(void *state, __le32 *digest, const u32 *nonce);
+asmlinkage void poly1305_emit_mips(void *state, u8 *digest, const u32 *nonce);
 
 void poly1305_init_arch(struct poly1305_desc_ctx *dctx, const u8 *key)
 {
@@ -134,9 +134,6 @@ EXPORT_SYMBOL(poly1305_update_arch);
 
 void poly1305_final_arch(struct poly1305_desc_ctx *dctx, u8 *dst)
 {
-	__le32 digest[4];
-	u64 f = 0;
-
 	if (unlikely(dctx->buflen)) {
 		dctx->buf[dctx->buflen++] = 1;
 		memset(dctx->buf + dctx->buflen, 0,
@@ -144,18 +141,7 @@ void poly1305_final_arch(struct poly1305_desc_ctx *dctx, u8 *dst)
 		poly1305_blocks_mips(&dctx->h, dctx->buf, POLY1305_BLOCK_SIZE, 0);
 	}
 
-	poly1305_emit_mips(&dctx->h, digest, dctx->s);
-
-	/* mac = (h + s) % (2^128) */
-	f = (f >> 32) + le32_to_cpu(digest[0]);
-	put_unaligned_le32(f, dst);
-	f = (f >> 32) + le32_to_cpu(digest[1]);
-	put_unaligned_le32(f, dst + 4);
-	f = (f >> 32) + le32_to_cpu(digest[2]);
-	put_unaligned_le32(f, dst + 8);
-	f = (f >> 32) + le32_to_cpu(digest[3]);
-	put_unaligned_le32(f, dst + 12);
-
+	poly1305_emit_mips(&dctx->h, dst, dctx->s);
 	*dctx = (struct poly1305_desc_ctx){};
 }
 EXPORT_SYMBOL(poly1305_final_arch);
-- 
2.24.0