From patchwork Wed Nov 19 17:19:37 2014
X-Patchwork-Submitter: Yazen Ghannam
X-Patchwork-Id: 41200
From: Yazen Ghannam <yazen.ghannam@linaro.org>
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org
Cc: herbert@gondor.apana.org.au, Yazen Ghannam <yazen.ghannam@linaro.org>
Subject: [PATCH] arm64: crypto: Add ARM64 CRC32 hw accelerated module
Date: Wed, 19 Nov 2014 11:19:37 -0600
Message-Id: <1416417577-27495-1-git-send-email-yazen.ghannam@linaro.org>
X-Mailer: git-send-email 2.1.0
X-Mailing-List: linux-crypto@vger.kernel.org

This module registers a crc32 algorithm and a crc32c algorithm
that use the optional CRC32 and CRC32C instructions in ARMv8.

Tested on AMD Seattle.

Improvement compared to the crc32c-generic algorithm:
 TCRYPT CRC32C speed test shows a ~450% speedup.
 Simple dd write tests to a btrfs filesystem show a ~30% speedup.
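For illustration only (not part of this patch), a minimal in-kernel consumer of the "crc32c" shash registered below could look like the following sketch. The crc32c_demo_* names are hypothetical; with this driver loaded, the request should be served by "crc32c-arm64-hw", since its cra_priority of 300 beats the generic implementation.

/* Hypothetical demo module: hashes a short buffer through the crypto API. */
#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/slab.h>

static int __init crc32c_demo_init(void)
{
	static const u8 buf[] = "123456789";
	struct crypto_shash *tfm;
	struct shash_desc *desc;
	u8 out[4];
	int err;

	tfm = crypto_alloc_shash("crc32c", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* shash_desc is followed by the algorithm's per-request context. */
	desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
	if (!desc) {
		crypto_free_shash(tfm);
		return -ENOMEM;
	}
	desc->tfm = tfm;

	err = crypto_shash_digest(desc, buf, sizeof(buf) - 1, out);
	if (!err)
		pr_info("crc32c(\"123456789\") = %02x%02x%02x%02x\n",
			out[0], out[1], out[2], out[3]);

	kfree(desc);
	crypto_free_shash(tfm);
	return err;
}

static void __exit crc32c_demo_exit(void)
{
}

module_init(crc32c_demo_init);
module_exit(crc32c_demo_exit);
MODULE_DESCRIPTION("Hypothetical crc32c demo");
MODULE_LICENSE("GPL");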
Signed-off-by: Yazen Ghannam <yazen.ghannam@linaro.org>
Acked-by: Steve Capper
Acked-by: Ard Biesheuvel
---
 arch/arm64/crypto/Kconfig       |   4 +
 arch/arm64/crypto/Makefile      |   4 +
 arch/arm64/crypto/crc32-arm64.c | 274 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 282 insertions(+)
 create mode 100644 arch/arm64/crypto/crc32-arm64.c

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 5562652..c1a0468 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -50,4 +50,8 @@ config CRYPTO_AES_ARM64_NEON_BLK
 	select CRYPTO_AES
 	select CRYPTO_ABLK_HELPER
 
+config CRYPTO_CRC32_ARM64
+	tristate "CRC32 and CRC32C using optional ARMv8 instructions"
+	depends on ARM64
+	select CRYPTO_HASH
 endif
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index a3f935f..5720608 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -34,5 +34,9 @@ AFLAGS_aes-neon.o	:= -DINTERLEAVE=4
 
 CFLAGS_aes-glue-ce.o	:= -DUSE_V8_CRYPTO_EXTENSIONS
 
+obj-$(CONFIG_CRYPTO_CRC32_ARM64) += crc32-arm64.o
+
+CFLAGS_crc32-arm64.o := -mcpu=generic+crc
+
 $(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE
 	$(call if_changed_rule,cc_o_c)
diff --git a/arch/arm64/crypto/crc32-arm64.c b/arch/arm64/crypto/crc32-arm64.c
new file mode 100644
index 0000000..9499199
--- /dev/null
+++ b/arch/arm64/crypto/crc32-arm64.c
@@ -0,0 +1,274 @@
+/*
+ * crc32-arm64.c - CRC32 and CRC32C using optional ARMv8 instructions
+ *
+ * Module based on crypto/crc32c_generic.c
+ *
+ * CRC32 loop taken from Ed Nevill's Hadoop CRC patch
+ * http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201406.mbox/%3C1403687030.3355.19.camel%40localhost.localdomain%3E
+ *
+ * Using inline assembly instead of intrinsics in order to be backwards
+ * compatible with older compilers.
+ *
+ * Copyright (C) 2014 Linaro Ltd <yazen.ghannam@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/unaligned/access_ok.h>
+#include <linux/cpufeature.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/string.h>
+
+#include <crypto/internal/hash.h>
+
+MODULE_AUTHOR("Yazen Ghannam <yazen.ghannam@linaro.org>");
+MODULE_DESCRIPTION("CRC32 and CRC32C using optional ARMv8 instructions");
+MODULE_LICENSE("GPL v2");
+
+#define CRC32X(crc, value) __asm__("crc32x %w[c], %w[c], %x[v]":[c]"+r"(crc):[v]"r"(value))
+#define CRC32W(crc, value) __asm__("crc32w %w[c], %w[c], %w[v]":[c]"+r"(crc):[v]"r"(value))
+#define CRC32H(crc, value) __asm__("crc32h %w[c], %w[c], %w[v]":[c]"+r"(crc):[v]"r"(value))
+#define CRC32B(crc, value) __asm__("crc32b %w[c], %w[c], %w[v]":[c]"+r"(crc):[v]"r"(value))
+#define CRC32CX(crc, value) __asm__("crc32cx %w[c], %w[c], %x[v]":[c]"+r"(crc):[v]"r"(value))
+#define CRC32CW(crc, value) __asm__("crc32cw %w[c], %w[c], %w[v]":[c]"+r"(crc):[v]"r"(value))
+#define CRC32CH(crc, value) __asm__("crc32ch %w[c], %w[c], %w[v]":[c]"+r"(crc):[v]"r"(value))
+#define CRC32CB(crc, value) __asm__("crc32cb %w[c], %w[c], %w[v]":[c]"+r"(crc):[v]"r"(value))
+
+static u32 crc32_arm64_le_hw(u32 crc, const u8 *p, unsigned int len)
+{
+	s64 length = len;
+
+	while ((length -= sizeof(u64)) >= 0) {
+		CRC32X(crc, get_unaligned_le64(p));
+		p += sizeof(u64);
+	}
+
+	/* The following is more efficient than the straight loop */
+	if (length & sizeof(u32)) {
+		CRC32W(crc, get_unaligned_le32(p));
+		p += sizeof(u32);
+	}
+	if (length & sizeof(u16)) {
+		CRC32H(crc, get_unaligned_le16(p));
+		p += sizeof(u16);
+	}
+	if (length & sizeof(u8))
+		CRC32B(crc, *p);
+
+	return crc;
+}
+
+static u32 crc32c_arm64_le_hw(u32 crc, const u8 *p, unsigned int len)
+{
+	s64 length = len;
+
+	while ((length -= sizeof(u64)) >= 0) {
+		CRC32CX(crc, get_unaligned_le64(p));
+		p += sizeof(u64);
+	}
+
+	/* The following is more efficient than the straight loop */
+	if (length & sizeof(u32)) {
+		CRC32CW(crc, get_unaligned_le32(p));
+		p += sizeof(u32);
+	}
+	if (length & sizeof(u16)) {
+		CRC32CH(crc, get_unaligned_le16(p));
+		p += sizeof(u16);
+	}
+	if (length & sizeof(u8))
+		CRC32CB(crc, *p);
+
+	return crc;
+}
+
+#define CHKSUM_BLOCK_SIZE	1
+#define CHKSUM_DIGEST_SIZE	4
+
+struct chksum_ctx {
+	u32 key;
+};
+
+struct chksum_desc_ctx {
+	u32 crc;
+};
+
+static int chksum_init(struct shash_desc *desc)
+{
+	struct chksum_ctx *mctx = crypto_shash_ctx(desc->tfm);
+	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+
+	ctx->crc = mctx->key;
+
+	return 0;
+}
+
+/*
+ * Setting the seed allows arbitrary accumulators and flexible XOR policy.
+ * If your algorithm starts with ~0, then XOR with ~0 before you set
+ * the seed.
+ */
+static int chksum_setkey(struct crypto_shash *tfm, const u8 *key,
+			 unsigned int keylen)
+{
+	struct chksum_ctx *mctx = crypto_shash_ctx(tfm);
+
+	if (keylen != sizeof(mctx->key)) {
+		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+		return -EINVAL;
+	}
+	mctx->key = get_unaligned_le32(key);
+	return 0;
+}
+
+static int chksum_update(struct shash_desc *desc, const u8 *data,
+			 unsigned int length)
+{
+	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+
+	ctx->crc = crc32_arm64_le_hw(ctx->crc, data, length);
+	return 0;
+}
+
+static int chksumc_update(struct shash_desc *desc, const u8 *data,
+			  unsigned int length)
+{
+	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+
+	ctx->crc = crc32c_arm64_le_hw(ctx->crc, data, length);
+	return 0;
+}
+
+static int chksum_final(struct shash_desc *desc, u8 *out)
+{
+	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+
+	put_unaligned_le32(~ctx->crc, out);
+	return 0;
+}
+
+static int __chksum_finup(u32 crc, const u8 *data, unsigned int len, u8 *out)
+{
+	put_unaligned_le32(~crc32_arm64_le_hw(crc, data, len), out);
+	return 0;
+}
+
+static int __chksumc_finup(u32 crc, const u8 *data, unsigned int len, u8 *out)
+{
+	put_unaligned_le32(~crc32c_arm64_le_hw(crc, data, len), out);
+	return 0;
+}
+
+static int chksum_finup(struct shash_desc *desc, const u8 *data,
+			unsigned int len, u8 *out)
+{
+	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+
+	return __chksum_finup(ctx->crc, data, len, out);
+}
+
+static int chksumc_finup(struct shash_desc *desc, const u8 *data,
+			 unsigned int len, u8 *out)
+{
+	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+
+	return __chksumc_finup(ctx->crc, data, len, out);
+}
+
+static int chksum_digest(struct shash_desc *desc, const u8 *data,
+			 unsigned int length, u8 *out)
+{
+	struct chksum_ctx *mctx = crypto_shash_ctx(desc->tfm);
+
+	return __chksum_finup(mctx->key, data, length, out);
+}
+
+static int chksumc_digest(struct shash_desc *desc, const u8 *data,
+			  unsigned int length, u8 *out)
+{
+	struct chksum_ctx *mctx = crypto_shash_ctx(desc->tfm);
+
+	return __chksumc_finup(mctx->key, data, length, out);
+}
+
+static int crc32_cra_init(struct crypto_tfm *tfm)
+{
+	struct chksum_ctx *mctx = crypto_tfm_ctx(tfm);
+
+	mctx->key = ~0;
+	return 0;
+}
+
+static struct shash_alg crc32_alg = {
+	.digestsize		= CHKSUM_DIGEST_SIZE,
+	.setkey			= chksum_setkey,
+	.init			= chksum_init,
+	.update			= chksum_update,
+	.final			= chksum_final,
+	.finup			= chksum_finup,
+	.digest			= chksum_digest,
+	.descsize		= sizeof(struct chksum_desc_ctx),
+	.base			= {
+		.cra_name		= "crc32",
+		.cra_driver_name	= "crc32-arm64-hw",
+		.cra_priority		= 300,
+		.cra_blocksize		= CHKSUM_BLOCK_SIZE,
+		.cra_alignmask		= 0,
+		.cra_ctxsize		= sizeof(struct chksum_ctx),
+		.cra_module		= THIS_MODULE,
+		.cra_init		= crc32_cra_init,
+	}
+};
+
+static struct shash_alg crc32c_alg = {
+	.digestsize		= CHKSUM_DIGEST_SIZE,
+	.setkey			= chksum_setkey,
+	.init			= chksum_init,
+	.update			= chksumc_update,
+	.final			= chksum_final,
+	.finup			= chksumc_finup,
+	.digest			= chksumc_digest,
+	.descsize		= sizeof(struct chksum_desc_ctx),
+	.base			= {
+		.cra_name		= "crc32c",
+		.cra_driver_name	= "crc32c-arm64-hw",
+		.cra_priority		= 300,
+		.cra_blocksize		= CHKSUM_BLOCK_SIZE,
+		.cra_alignmask		= 0,
+		.cra_ctxsize		= sizeof(struct chksum_ctx),
+		.cra_module		= THIS_MODULE,
+		.cra_init		= crc32_cra_init,
+	}
+};
+
+static int __init crc32_mod_init(void)
+{
+	int err;
+
+	err = crypto_register_shash(&crc32_alg);
+
+	if (err)
+		return err;
+
+	err = crypto_register_shash(&crc32c_alg);
+
+	if (err) {
+		crypto_unregister_shash(&crc32_alg);
+		return err;
+	}
+
+	return 0;
+}
+
+static void __exit crc32_mod_exit(void)
+{
+	crypto_unregister_shash(&crc32_alg);
+	crypto_unregister_shash(&crc32c_alg);
+}
+
+module_cpu_feature_match(CRC32, crc32_mod_init);
+module_exit(crc32_mod_exit);
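
As a closing aside (again not part of the patch), the word/halfword/byte tail handling in crc32_arm64_le_hw()/crc32c_arm64_le_hw() can be tried stand-alone in user space. The sketch below is a hypothetical rework of the CRC32C path using the same inline-asm macros; build it on CRC32-capable hardware with something like "gcc -O2 -march=armv8-a+crc crc32c_demo.c". Seeding with ~0 and inverting the result mirrors what crc32_cra_init() and chksum_final() do in the driver.

/* Hypothetical user-space sketch of the driver's CRC32C loop. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CRC32CX(crc, value) \
	__asm__("crc32cx %w[c], %w[c], %x[v]" : [c] "+r"(crc) : [v] "r"(value))
#define CRC32CW(crc, value) \
	__asm__("crc32cw %w[c], %w[c], %w[v]" : [c] "+r"(crc) : [v] "r"(value))
#define CRC32CH(crc, value) \
	__asm__("crc32ch %w[c], %w[c], %w[v]" : [c] "+r"(crc) : [v] "r"(value))
#define CRC32CB(crc, value) \
	__asm__("crc32cb %w[c], %w[c], %w[v]" : [c] "+r"(crc) : [v] "r"(value))

static uint32_t crc32c_hw(uint32_t crc, const uint8_t *p, size_t len)
{
	int64_t length = len;
	uint64_t v64;
	uint32_t v32;
	uint16_t v16;

	/* Main loop: one 8-byte CRC32C step per iteration. */
	while ((length -= sizeof(uint64_t)) >= 0) {
		/* memcpy gives an unaligned-safe little-endian load on AArch64 */
		memcpy(&v64, p, sizeof(v64));
		CRC32CX(crc, v64);
		p += sizeof(uint64_t);
	}

	/*
	 * length is now negative; its low three bits equal the number of
	 * bytes left, so each test below selects at most one tail step.
	 */
	if (length & sizeof(uint32_t)) {
		memcpy(&v32, p, sizeof(v32));
		CRC32CW(crc, v32);
		p += sizeof(uint32_t);
	}
	if (length & sizeof(uint16_t)) {
		memcpy(&v16, p, sizeof(v16));
		CRC32CH(crc, v16);
		p += sizeof(uint16_t);
	}
	if (length & sizeof(uint8_t))
		CRC32CB(crc, *p);

	return crc;
}

int main(void)
{
	const char *s = "123456789";
	/* Same convention as the driver: start from ~0, invert at the end. */
	uint32_t crc = ~crc32c_hw(~0u, (const uint8_t *)s, strlen(s));

	printf("crc32c(\"%s\") = 0x%08x\n", s, crc);
	return 0;
}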