From patchwork Mon Feb 8 09:38:53 2021
X-Patchwork-Submitter: yumeng
X-Patchwork-Id: 378665
From: Meng Yu
Subject: [PATCH v8 5/9] crypto: move curve_id of ECDH to algorithm name
Date: Mon, 8 Feb 2021 17:38:53 +0800
Message-ID: <1612777137-51067-6-git-send-email-yumeng18@huawei.com>
In-Reply-To: <1612777137-51067-1-git-send-email-yumeng18@huawei.com>
References: <1612777137-51067-1-git-send-email-yumeng18@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

1. In crypto, move the ECDH curve ID out of the key and into the algorithm name (e.g. "ecdh-nist-p192", "ecdh-nist-p256").
2. Update 'testmgr.c' and 'testmgr.h' to match the new algorithm names and secret layout.
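As a usage illustration of the rename: after this patch a kernel consumer selects the curve by algorithm name, and the packed secret no longer carries a curve_id. The sketch below is not part of the patch; the function name and error handling are illustrative, and it only assumes the ecdh/kpp helper API already used by this series.

/*
 * Hedged example (not from the patch): request the P-256 instance by name
 * and set an empty key so the generic ecdh implementation generates a
 * private key (see ecdh_set_secret() in the diff below).
 */
#include <crypto/ecdh.h>
#include <crypto/kpp.h>
#include <linux/err.h>
#include <linux/slab.h>

static int example_ecdh_p256_setkey(void)
{
	struct crypto_kpp *tfm;
	struct ecdh p = {
		.key = NULL,	/* NULL/0: ask ecdh-generic to generate the private key */
		.key_size = 0,
	};
	unsigned int len;
	char *buf;
	int ret;

	tfm = crypto_alloc_kpp("ecdh-nist-p256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	len = crypto_ecdh_key_len(&p);
	buf = kzalloc(len, GFP_KERNEL);
	if (!buf) {
		ret = -ENOMEM;
		goto free_tfm;
	}

	ret = crypto_ecdh_encode_key(buf, len, &p);	/* packs type/len/key_size, no curve_id */
	if (!ret)
		ret = crypto_kpp_set_secret(tfm, buf, len);

	kfree(buf);
free_tfm:
	crypto_free_kpp(tfm);
	return ret;
}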
Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu Reported-by: kernel test robot Reported-by: kernel test robot --- crypto/ecdh.c | 72 +++++++++++++++++++++++++++++++++++---------------- crypto/ecdh_helper.c | 4 +-- crypto/testmgr.c | 11 ++++++-- crypto/testmgr.h | 32 ++++++++++------------- include/crypto/ecdh.h | 2 -- 5 files changed, 74 insertions(+), 47 deletions(-) diff --git a/crypto/ecdh.c b/crypto/ecdh.c index 96f80c8..04a427b 100644 --- a/crypto/ecdh.c +++ b/crypto/ecdh.c @@ -23,33 +23,16 @@ static inline struct ecdh_ctx *ecdh_get_ctx(struct crypto_kpp *tfm) return kpp_tfm_ctx(tfm); } -static unsigned int ecdh_supported_curve(unsigned int curve_id) -{ - switch (curve_id) { - case ECC_CURVE_NIST_P192: return ECC_CURVE_NIST_P192_DIGITS; - case ECC_CURVE_NIST_P256: return ECC_CURVE_NIST_P256_DIGITS; - default: return 0; - } -} - static int ecdh_set_secret(struct crypto_kpp *tfm, const void *buf, unsigned int len) { struct ecdh_ctx *ctx = ecdh_get_ctx(tfm); struct ecdh params; - unsigned int ndigits; if (crypto_ecdh_decode_key(buf, len, ¶ms) < 0 || - params.key_size > sizeof(ctx->private_key)) + params.key_size > sizeof(u64) * ctx->ndigits) return -EINVAL; - ndigits = ecdh_supported_curve(params.curve_id); - if (!ndigits) - return -EINVAL; - - ctx->curve_id = params.curve_id; - ctx->ndigits = ndigits; - if (!params.key || !params.key_size) return ecc_gen_privkey(ctx->curve_id, ctx->ndigits, ctx->private_key); @@ -140,13 +123,24 @@ static unsigned int ecdh_max_size(struct crypto_kpp *tfm) return ctx->ndigits << (ECC_DIGITS_TO_BYTES_SHIFT + 1); } -static struct kpp_alg ecdh = { +static int ecdh_nist_p192_init_tfm(struct crypto_kpp *tfm) +{ + struct ecdh_ctx *ctx = ecdh_get_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P192; + ctx->ndigits = ECC_CURVE_NIST_P192_DIGITS; + + return 0; +} + +static struct kpp_alg ecdh_nist_p192 = { .set_secret = ecdh_set_secret, .generate_public_key = ecdh_compute_value, .compute_shared_secret = ecdh_compute_value, .max_size = ecdh_max_size, + .init = ecdh_nist_p192_init_tfm, .base = { - .cra_name = "ecdh", + .cra_name = "ecdh-nist-p192", .cra_driver_name = "ecdh-generic", .cra_priority = 100, .cra_module = THIS_MODULE, @@ -154,14 +148,48 @@ static struct kpp_alg ecdh = { }, }; +static int ecdh_nist_p256_init_tfm(struct crypto_kpp *tfm) +{ + struct ecdh_ctx *ctx = ecdh_get_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P256; + ctx->ndigits = ECC_CURVE_NIST_P256_DIGITS; + + return 0; +} + +static struct kpp_alg ecdh_nist_p256 = { + .set_secret = ecdh_set_secret, + .generate_public_key = ecdh_compute_value, + .compute_shared_secret = ecdh_compute_value, + .max_size = ecdh_max_size, + .init = ecdh_nist_p256_init_tfm, + .base = { + .cra_name = "ecdh-nist-p256", + .cra_driver_name = "ecdh-generic", + .cra_priority = 100, + .cra_module = THIS_MODULE, + .cra_ctxsize = sizeof(struct ecdh_ctx), + }, +}; + +static bool ecdh_nist_p192_registered; + static int ecdh_init(void) { - return crypto_register_kpp(&ecdh); + int ret; + + ret = crypto_register_kpp(&ecdh_nist_p192); + ecdh_nist_p192_registered = ret == 0; + + return crypto_register_kpp(&ecdh_nist_p256); } static void ecdh_exit(void) { - crypto_unregister_kpp(&ecdh); + if (ecdh_nist_p192_registered) + crypto_unregister_kpp(&ecdh_nist_p192); + crypto_unregister_kpp(&ecdh_nist_p256); } subsys_initcall(ecdh_init); diff --git a/crypto/ecdh_helper.c b/crypto/ecdh_helper.c index 66fcb2e..c79696f 100644 --- a/crypto/ecdh_helper.c +++ b/crypto/ecdh_helper.c @@ -10,7 +10,7 @@ #include #include -#define ECDH_KPP_SECRET_MIN_SIZE 
(sizeof(struct kpp_secret) + 2 * sizeof(short)) +#define ECDH_KPP_SECRET_MIN_SIZE (sizeof(struct kpp_secret) + sizeof(short)) static inline u8 *ecdh_pack_data(void *dst, const void *src, size_t sz) { @@ -46,7 +46,6 @@ int crypto_ecdh_encode_key(char *buf, unsigned int len, return -EINVAL; ptr = ecdh_pack_data(ptr, &secret, sizeof(secret)); - ptr = ecdh_pack_data(ptr, ¶ms->curve_id, sizeof(params->curve_id)); ptr = ecdh_pack_data(ptr, ¶ms->key_size, sizeof(params->key_size)); ecdh_pack_data(ptr, params->key, params->key_size); @@ -67,7 +66,6 @@ int crypto_ecdh_decode_key(const char *buf, unsigned int len, if (secret.type != CRYPTO_KPP_SECRET_TYPE_ECDH) return -EINVAL; - ptr = ecdh_unpack_data(¶ms->curve_id, ptr, sizeof(params->curve_id)); ptr = ecdh_unpack_data(¶ms->key_size, ptr, sizeof(params->key_size)); if (secret.len != crypto_ecdh_key_len(params)) return -EINVAL; diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 1a4103b..b890ac5 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -4910,11 +4910,18 @@ static const struct alg_test_desc alg_test_descs[] = { } }, { #endif - .alg = "ecdh", + .alg = "ecdh-nist-p192", .test = alg_test_kpp, .fips_allowed = 1, .suite = { - .kpp = __VECS(ecdh_tv_template) + .kpp = __VECS(ecdh_p192_tv_template) + } + }, { + .alg = "ecdh-nist-p256", + .test = alg_test_kpp, + .fips_allowed = 1, + .suite = { + .kpp = __VECS(ecdh_p256_tv_template) } }, { .alg = "ecrdsa", diff --git a/crypto/testmgr.h b/crypto/testmgr.h index 99aca08..85bf54e 100644 --- a/crypto/testmgr.h +++ b/crypto/testmgr.h @@ -2261,19 +2261,17 @@ static const struct kpp_testvec curve25519_tv_template[] = { } }; -static const struct kpp_testvec ecdh_tv_template[] = { +static const struct kpp_testvec ecdh_p192_tv_template[] = { { #ifndef CONFIG_CRYPTO_FIPS .secret = #ifdef __LITTLE_ENDIAN "\x02\x00" /* type */ - "\x20\x00" /* len */ - "\x01\x00" /* curve_id */ + "\x1e\x00" /* len */ "\x18\x00" /* key_size */ #else "\x00\x02" /* type */ - "\x00\x20" /* len */ - "\x00\x01" /* curve_id */ + "\x00\x1e" /* len */ "\x00\x18" /* key_size */ #endif "\xb5\x05\xb1\x71\x1e\xbf\x8c\xda" @@ -2301,18 +2299,20 @@ static const struct kpp_testvec ecdh_tv_template[] = { .b_public_size = 48, .expected_a_public_size = 48, .expected_ss_size = 24 - }, { + } #endif +}; + +static const struct kpp_testvec ecdh_p256_tv_template[] = { + { .secret = #ifdef __LITTLE_ENDIAN "\x02\x00" /* type */ - "\x28\x00" /* len */ - "\x02\x00" /* curve_id */ + "\x26\x00" /* len */ "\x20\x00" /* key_size */ #else "\x00\x02" /* type */ - "\x00\x28" /* len */ - "\x00\x02" /* curve_id */ + "\x00\x26" /* len */ "\x00\x20" /* key_size */ #endif "\x24\xd1\x21\xeb\xe5\xcf\x2d\x83" @@ -2350,25 +2350,21 @@ static const struct kpp_testvec ecdh_tv_template[] = { .secret = #ifdef __LITTLE_ENDIAN "\x02\x00" /* type */ - "\x08\x00" /* len */ - "\x02\x00" /* curve_id */ + "\x06\x00" /* len */ "\x00\x00", /* key_size */ #else "\x00\x02" /* type */ - "\x00\x08" /* len */ - "\x00\x02" /* curve_id */ + "\x00\x06" /* len */ "\x00\x00", /* key_size */ #endif .b_secret = #ifdef __LITTLE_ENDIAN "\x02\x00" /* type */ - "\x28\x00" /* len */ - "\x02\x00" /* curve_id */ + "\x26\x00" /* len */ "\x20\x00" /* key_size */ #else "\x00\x02" /* type */ - "\x00\x28" /* len */ - "\x00\x02" /* curve_id */ + "\x00\x26" /* len */ "\x00\x20" /* key_size */ #endif "\x24\xd1\x21\xeb\xe5\xcf\x2d\x83" diff --git a/include/crypto/ecdh.h b/include/crypto/ecdh.h index a5b805b..deaaa48 100644 --- a/include/crypto/ecdh.h +++ b/include/crypto/ecdh.h @@ -29,12 +29,10 @@ /** * 
struct ecdh - define an ECDH private key
 *
- * @curve_id: ECC curve the key is based on.
 * @key: Private ECDH key
 * @key_size: Size of the private ECDH key
 */
 struct ecdh {
-	unsigned short curve_id;
	char *key;
	unsigned short key_size;
 };

From patchwork Mon Feb 8 09:38:55 2021
X-Patchwork-Submitter: yumeng
X-Patchwork-Id: 378666
From: Meng Yu
Subject: [PATCH v8 7/9] crypto: hisilicon/hpre - add 'ECDH' algorithm
Date: Mon, 8 Feb 2021 17:38:55 +0800
Message-ID: <1612777137-51067-8-git-send-email-yumeng18@huawei.com>
In-Reply-To: <1612777137-51067-1-git-send-email-yumeng18@huawei.com>
References: <1612777137-51067-1-git-send-email-yumeng18@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

1. Enable the 'ECDH' algorithm on Kunpeng 930.
2. HPRE ECDH support: ecdh-nist-p192, ecdh-nist-p224, ecdh-nist-p256, ecdh-nist-p384, ecdh-nist-p521 (size summary sketched below).
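The five algorithm names above map onto three hardware operand sizes: the HPRE unit works on fixed-size operands and the driver zero-pads shorter curves. The table below is a hedged summary, not part of the driver, and assumes the HPRE_ECC_NIST_P*_N_SIZE and HPRE_ECC_HW*_KSZ_B defines in the diff that follows mean what their names suggest.

/*
 * Hedged summary (not part of the patch): curve order size vs. the
 * hardware operand size the driver pads to (HPRE_ECC_HW256/384/576_KSZ_B).
 */
struct hpre_ecdh_curve_size {
	const char *alg;	/* cra_name registered by the driver */
	unsigned int n_size;	/* bytes of the curve order n */
	unsigned int hw_ksz;	/* bytes of the hardware operand */
};

static const struct hpre_ecdh_curve_size hpre_ecdh_curve_sizes[] = {
	{ "ecdh-nist-p192", 24, 32 },
	{ "ecdh-nist-p224", 28, 32 },
	{ "ecdh-nist-p256", 32, 32 },
	{ "ecdh-nist-p384", 48, 48 },
	{ "ecdh-nist-p521", 66, 72 },
};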
Signed-off-by: Meng Yu Reviewed-by: Zaibo Xu --- drivers/crypto/hisilicon/hpre/hpre.h | 2 +- drivers/crypto/hisilicon/hpre/hpre_crypto.c | 641 +++++++++++++++++++++++++++- drivers/crypto/hisilicon/hpre/hpre_main.c | 1 + 3 files changed, 639 insertions(+), 5 deletions(-) diff --git a/drivers/crypto/hisilicon/hpre/hpre.h b/drivers/crypto/hisilicon/hpre/hpre.h index 7c7bfdd..5a8091a 100644 --- a/drivers/crypto/hisilicon/hpre/hpre.h +++ b/drivers/crypto/hisilicon/hpre/hpre.h @@ -81,6 +81,7 @@ enum hpre_alg_type { HPRE_ALG_KG_CRT = 0x3, HPRE_ALG_DH_G2 = 0x4, HPRE_ALG_DH = 0x5, + HPRE_ALG_ECC_MUL = 0xD, }; struct hpre_sqe { @@ -102,5 +103,4 @@ struct hisi_qp *hpre_create_qp(u8 type); int hpre_algs_register(struct hisi_qm *qm); void hpre_algs_unregister(struct hisi_qm *qm); - #endif diff --git a/drivers/crypto/hisilicon/hpre/hpre_crypto.c b/drivers/crypto/hisilicon/hpre/hpre_crypto.c index 712bea9..49df0ab 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_crypto.c +++ b/drivers/crypto/hisilicon/hpre/hpre_crypto.c @@ -2,6 +2,8 @@ /* Copyright (c) 2019 HiSilicon Limited. */ #include #include +#include +#include #include #include #include @@ -36,6 +38,20 @@ struct hpre_ctx; #define HPRE_DFX_SEC_TO_US 1000000 #define HPRE_DFX_US_TO_NS 1000 +/* size in bytes of the n prime */ +#define HPRE_ECC_NIST_P192_N_SIZE 24 +#define HPRE_ECC_NIST_P224_N_SIZE 28 +#define HPRE_ECC_NIST_P256_N_SIZE 32 +#define HPRE_ECC_NIST_P384_N_SIZE 48 +#define HPRE_ECC_NIST_P521_N_SIZE 66 + +/* size in bytes */ +#define HPRE_ECC_HW256_KSZ_B 32 +#define HPRE_ECC_HW384_KSZ_B 48 +#define HPRE_ECC_HW576_KSZ_B 72 + +#define HPRE_ECDH_MAX_SZ HPRE_ECC_HW576_KSZ_B + typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe); struct hpre_rsa_ctx { @@ -61,14 +77,25 @@ struct hpre_dh_ctx { * else if base if the counterpart public key we * compute the shared secret * ZZ = yb^xa mod p; [RFC2631 sec 2.1.1] + * low address: d--->n, please refer to Hisilicon HPRE UM */ - char *xa_p; /* low address: d--->n, please refer to Hisilicon HPRE UM */ + char *xa_p; dma_addr_t dma_xa_p; char *g; /* m */ dma_addr_t dma_g; }; +struct hpre_ecdh_ctx { + /* low address: p->a->k->b */ + unsigned char *p; + dma_addr_t dma_p; + + /* low address: x->y */ + unsigned char *g; + dma_addr_t dma_g; +}; + struct hpre_ctx { struct hisi_qp *qp; struct hpre_asym_request **req_list; @@ -80,7 +107,10 @@ struct hpre_ctx { union { struct hpre_rsa_ctx rsa; struct hpre_dh_ctx dh; + struct hpre_ecdh_ctx ecdh; }; + /* for ecc algorithms */ + unsigned int curve_id; }; struct hpre_asym_request { @@ -91,6 +121,7 @@ struct hpre_asym_request { union { struct akcipher_request *rsa; struct kpp_request *dh; + struct kpp_request *ecdh; } areq; int err; int req_id; @@ -1115,6 +1146,461 @@ static void hpre_rsa_exit_tfm(struct crypto_akcipher *tfm) crypto_free_akcipher(ctx->rsa.soft_tfm); } +static void hpre_key_to_big_end(u8 *data, int len) +{ + int i, j; + u8 tmp; + + for (i = 0; i < len / 2; i++) { + j = len - i - 1; + tmp = data[j]; + data[j] = data[i]; + data[i] = tmp; + } +} + +static void hpre_ecc_clear_ctx(struct hpre_ctx *ctx, bool is_clear_all, + bool is_ecdh) +{ + struct device *dev = HPRE_DEV(ctx); + unsigned int sz = ctx->key_sz; + unsigned int shift = sz << 1; + + if (is_clear_all) + hisi_qm_stop_qp(ctx->qp); + + if (is_ecdh && ctx->ecdh.p) { + /* ecdh: p->a->k->b */ + memzero_explicit(ctx->ecdh.p + shift, sz); + dma_free_coherent(dev, sz << 3, ctx->ecdh.p, ctx->ecdh.dma_p); + ctx->ecdh.p = NULL; + } + + hpre_ctx_clear(ctx, is_clear_all); +} + +/* + * The bits of 
192/224/256/384/521 are supported by HPRE, + * and convert the bits like: + * bits<=256, bits=256; 256key_sz << 1; + unsigned int shiftb = ctx->key_sz << 2; + void *p = ctx->ecdh.p + ctx->key_sz - cur_sz; + void *a = ctx->ecdh.p + shifta - cur_sz; + void *b = ctx->ecdh.p + shiftb - cur_sz; + void *x = ctx->ecdh.g + ctx->key_sz - cur_sz; + void *y = ctx->ecdh.g + shifta - cur_sz; + const struct ecc_curve *curve = ecc_get_curve(ctx->curve_id); + char *n; + + if (unlikely(!curve)) + return -EINVAL; + + n = kzalloc(ctx->key_sz, GFP_KERNEL); + if (!n) + return -ENOMEM; + + fill_curve_param(p, curve->p, cur_sz, curve->g.ndigits); + fill_curve_param(a, curve->a, cur_sz, curve->g.ndigits); + fill_curve_param(b, curve->b, cur_sz, curve->g.ndigits); + fill_curve_param(x, curve->g.x, cur_sz, curve->g.ndigits); + fill_curve_param(y, curve->g.y, cur_sz, curve->g.ndigits); + fill_curve_param(n, curve->n, cur_sz, curve->g.ndigits); + + if (params->key_size == cur_sz && memcmp(params->key, n, cur_sz) >= 0) { + kfree(n); + return -EINVAL; + } + + kfree(n); + return 0; +} + +static unsigned int hpre_ecdh_get_curvesz(unsigned short id) +{ + switch (id) { + case ECC_CURVE_NIST_P192: + return HPRE_ECC_NIST_P192_N_SIZE; + case ECC_CURVE_NIST_P224: + return HPRE_ECC_NIST_P224_N_SIZE; + case ECC_CURVE_NIST_P256: + return HPRE_ECC_NIST_P256_N_SIZE; + case ECC_CURVE_NIST_P384: + return HPRE_ECC_NIST_P384_N_SIZE; + case ECC_CURVE_NIST_P521: + return HPRE_ECC_NIST_P521_N_SIZE; + default: + break; + } + + return 0; +} + +static int hpre_ecdh_set_param(struct hpre_ctx *ctx, struct ecdh *params) +{ + struct device *dev = HPRE_DEV(ctx); + unsigned int sz, shift, curve_sz; + int ret; + + ctx->key_sz = hpre_ecdh_supported_curve(ctx->curve_id); + if (!ctx->key_sz) + return -EINVAL; + + curve_sz = hpre_ecdh_get_curvesz(ctx->curve_id); + if (!curve_sz || params->key_size > curve_sz) + return -EINVAL; + + sz = ctx->key_sz; + + if (!ctx->ecdh.p) { + ctx->ecdh.p = dma_alloc_coherent(dev, sz << 3, &ctx->ecdh.dma_p, + GFP_KERNEL); + if (!ctx->ecdh.p) + return -ENOMEM; + } + + shift = sz << 2; + ctx->ecdh.g = ctx->ecdh.p + shift; + ctx->ecdh.dma_g = ctx->ecdh.dma_p + shift; + + ret = hpre_ecdh_fill_curve(ctx, params, curve_sz); + if (ret) { + dev_err(dev, "failed to fill curve_param, ret = %d!\n", ret); + dma_free_coherent(dev, sz << 3, ctx->ecdh.p, ctx->ecdh.dma_p); + ctx->ecdh.p = NULL; + return ret; + } + + return 0; +} + +static bool hpre_key_is_zero(char *key, unsigned short key_sz) +{ + int i; + + for (i = 0; i < key_sz; i++) + if (key[i]) + return false; + + return true; +} + +static int hpre_ecdh_set_secret(struct crypto_kpp *tfm, const void *buf, + unsigned int len) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + struct device *dev = HPRE_DEV(ctx); + unsigned int sz, sz_shift; + struct ecdh params; + int ret; + + if (crypto_ecdh_decode_key(buf, len, ¶ms) < 0) { + dev_err(dev, "failed to decode ecdh key!\n"); + return -EINVAL; + } + + if (hpre_key_is_zero(params.key, params.key_size)) { + dev_err(dev, "Invalid hpre key!\n"); + return -EINVAL; + } + + hpre_ecc_clear_ctx(ctx, false, true); + + ret = hpre_ecdh_set_param(ctx, ¶ms); + if (ret < 0) { + dev_err(dev, "failed to set hpre param, ret = %d!\n", ret); + return ret; + } + + sz = ctx->key_sz; + sz_shift = (sz << 1) + sz - params.key_size; + memcpy(ctx->ecdh.p + sz_shift, params.key, params.key_size); + + return 0; +} + +static void hpre_ecdh_hw_data_clr_all(struct hpre_ctx *ctx, + struct hpre_asym_request *req, + struct scatterlist *dst, + struct scatterlist *src) +{ + 
struct device *dev = HPRE_DEV(ctx); + struct hpre_sqe *sqe = &req->req; + dma_addr_t dma; + + dma = le64_to_cpu(sqe->in); + if (unlikely(!dma)) + return; + + if (src && req->src) + dma_free_coherent(dev, ctx->key_sz << 2, req->src, dma); + + dma = le64_to_cpu(sqe->out); + if (unlikely(!dma)) + return; + + if (req->dst) + dma_free_coherent(dev, ctx->key_sz << 1, req->dst, dma); + if (dst) + dma_unmap_single(dev, dma, ctx->key_sz << 1, DMA_FROM_DEVICE); +} + +static void hpre_ecdh_cb(struct hpre_ctx *ctx, void *resp) +{ + unsigned int curve_sz = hpre_ecdh_get_curvesz(ctx->curve_id); + struct hpre_dfx *dfx = ctx->hpre->debug.dfx; + struct hpre_asym_request *req = NULL; + struct kpp_request *areq; + u64 overtime_thrhld; + char *p; + int ret; + + ret = hpre_alg_res_post_hf(ctx, resp, (void **)&req); + areq = req->areq.ecdh; + areq->dst_len = ctx->key_sz << 1; + + overtime_thrhld = atomic64_read(&dfx[HPRE_OVERTIME_THRHLD].value); + if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld)) + atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value); + + p = sg_virt(areq->dst); + memmove(p, p + ctx->key_sz - curve_sz, curve_sz); + memmove(p + curve_sz, p + areq->dst_len - curve_sz, curve_sz); + + hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src); + kpp_request_complete(areq, ret); + + atomic64_inc(&dfx[HPRE_RECV_CNT].value); +} + +static int hpre_ecdh_msg_request_set(struct hpre_ctx *ctx, + struct kpp_request *req) +{ + struct hpre_asym_request *h_req; + struct hpre_sqe *msg; + int req_id; + void *tmp; + + if (req->dst_len < ctx->key_sz << 1) { + req->dst_len = ctx->key_sz << 1; + return -EINVAL; + } + + tmp = kpp_request_ctx(req); + h_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + h_req->cb = hpre_ecdh_cb; + h_req->areq.ecdh = req; + msg = &h_req->req; + memset(msg, 0, sizeof(*msg)); + msg->key = cpu_to_le64(ctx->ecdh.dma_p); + + msg->dw0 |= cpu_to_le32(0x1U << HPRE_SQE_DONE_SHIFT); + msg->task_len1 = (ctx->key_sz >> HPRE_BITS_2_BYTES_SHIFT) - 1; + h_req->ctx = ctx; + + req_id = hpre_add_req_to_ctx(h_req); + if (req_id < 0) + return -EBUSY; + + msg->tag = cpu_to_le16((u16)req_id); + return 0; +} + +static int hpre_ecdh_src_data_init(struct hpre_asym_request *hpre_req, + struct scatterlist *data, unsigned int len) +{ + struct hpre_sqe *msg = &hpre_req->req; + struct hpre_ctx *ctx = hpre_req->ctx; + struct device *dev = HPRE_DEV(ctx); + unsigned int tmpshift; + dma_addr_t dma = 0; + void *ptr; + int shift; + + /* Src_data include gx and gy. 
*/ + shift = ctx->key_sz - (len >> 1); + if (unlikely(shift < 0)) + return -EINVAL; + + ptr = dma_alloc_coherent(dev, ctx->key_sz << 2, &dma, GFP_KERNEL); + if (unlikely(!ptr)) + return -ENOMEM; + + tmpshift = ctx->key_sz << 1; + scatterwalk_map_and_copy(ptr + tmpshift, data, 0, len, 0); + memcpy(ptr + shift, ptr + tmpshift, len >> 1); + memcpy(ptr + ctx->key_sz + shift, ptr + tmpshift + (len >> 1), len >> 1); + + hpre_req->src = ptr; + msg->in = cpu_to_le64(dma); + return 0; +} + +static int hpre_ecdh_dst_data_init(struct hpre_asym_request *hpre_req, + struct scatterlist *data, unsigned int len) +{ + struct hpre_sqe *msg = &hpre_req->req; + struct hpre_ctx *ctx = hpre_req->ctx; + struct device *dev = HPRE_DEV(ctx); + dma_addr_t dma = 0; + + if (unlikely(!data || !sg_is_last(data) || len != ctx->key_sz << 1)) { + dev_err(dev, "data or data length is illegal!\n"); + return -EINVAL; + } + + hpre_req->dst = NULL; + dma = dma_map_single(dev, sg_virt(data), len, DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(dev, dma))) { + dev_err(dev, "dma map data err!\n"); + return -ENOMEM; + } + + msg->out = cpu_to_le64(dma); + return 0; +} + +static int hpre_ecdh_compute_value(struct kpp_request *req) +{ + struct crypto_kpp *tfm = crypto_kpp_reqtfm(req); + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + struct device *dev = HPRE_DEV(ctx); + void *tmp = kpp_request_ctx(req); + struct hpre_asym_request *hpre_req = PTR_ALIGN(tmp, HPRE_ALIGN_SZ); + struct hpre_sqe *msg = &hpre_req->req; + int ret; + + ret = hpre_ecdh_msg_request_set(ctx, req); + if (unlikely(ret)) { + dev_err(dev, "failed to set ecdh request, ret = %d!\n", ret); + return ret; + } + + if (req->src) { + ret = hpre_ecdh_src_data_init(hpre_req, req->src, req->src_len); + if (unlikely(ret)) { + dev_err(dev, "failed to init src data, ret = %d!\n", ret); + goto clear_all; + } + } else { + msg->in = cpu_to_le64(ctx->ecdh.dma_g); + } + + ret = hpre_ecdh_dst_data_init(hpre_req, req->dst, req->dst_len); + if (unlikely(ret)) { + dev_err(dev, "failed to init dst data, ret = %d!\n", ret); + goto clear_all; + } + + msg->dw0 = cpu_to_le32(le32_to_cpu(msg->dw0) | HPRE_ALG_ECC_MUL); + ret = hpre_send(ctx, msg); + if (likely(!ret)) + return -EINPROGRESS; + +clear_all: + hpre_rm_req_from_ctx(hpre_req); + hpre_ecdh_hw_data_clr_all(ctx, hpre_req, req->dst, req->src); + return ret; +} + +static unsigned int hpre_ecdh_max_size(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + /* max size is the pub_key_size, include x and y */ + return ctx->key_sz << 1; +} + +static int hpre_ecdh_nist_p192_init_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P192; + + return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); +} + +static int hpre_ecdh_nist_p224_init_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P224; + + return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); +} + +static int hpre_ecdh_nist_p256_init_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P256; + + return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); +} + +static int hpre_ecdh_nist_p384_init_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P384; + + return hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); +} + +static int hpre_ecdh_nist_p521_init_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + ctx->curve_id = ECC_CURVE_NIST_P521; + + return 
hpre_ctx_init(ctx, HPRE_V3_ECC_ALG_TYPE); +} + +static void hpre_ecdh_exit_tfm(struct crypto_kpp *tfm) +{ + struct hpre_ctx *ctx = kpp_tfm_ctx(tfm); + + hpre_ecc_clear_ctx(ctx, true, true); +} + static struct akcipher_alg rsa = { .sign = hpre_rsa_dec, .verify = hpre_rsa_enc, @@ -1154,6 +1640,137 @@ static struct kpp_alg dh = { }; #endif +static struct kpp_alg ecdh_nist_p192 = { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p192_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p192", + .cra_driver_name = "hpre-ecdh", + .cra_module = THIS_MODULE, + }, +}; + +static struct kpp_alg ecdh_nist_p224 = { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p224_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p224", + .cra_driver_name = "hpre-ecdh", + .cra_module = THIS_MODULE, + }, +}; + +static struct kpp_alg ecdh_nist_p256 = { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p256_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p256", + .cra_driver_name = "hpre-ecdh", + .cra_module = THIS_MODULE, + }, +}; + +static struct kpp_alg ecdh_nist_p384 = { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p384_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p384", + .cra_driver_name = "hpre-ecdh", + .cra_module = THIS_MODULE, + }, +}; + +static struct kpp_alg ecdh_nist_p521 = { + .set_secret = hpre_ecdh_set_secret, + .generate_public_key = hpre_ecdh_compute_value, + .compute_shared_secret = hpre_ecdh_compute_value, + .max_size = hpre_ecdh_max_size, + .init = hpre_ecdh_nist_p521_init_tfm, + .exit = hpre_ecdh_exit_tfm, + .reqsize = sizeof(struct hpre_asym_request) + HPRE_ALIGN_SZ, + .base = { + .cra_ctxsize = sizeof(struct hpre_ctx), + .cra_priority = HPRE_CRYPTO_ALG_PRI, + .cra_name = "ecdh-nist-p521", + .cra_driver_name = "hpre-ecdh", + .cra_module = THIS_MODULE, + }, +}; + +static int hpre_register_ecdh(void) +{ + int ret; + + ret = crypto_register_kpp(&ecdh_nist_p192); + if (ret) + return ret; + + ret = crypto_register_kpp(&ecdh_nist_p224); + if (ret) + goto unregister_ecdh_p192; + + ret = crypto_register_kpp(&ecdh_nist_p256); + if (ret) + goto unregister_ecdh_p224; + + ret = crypto_register_kpp(&ecdh_nist_p384); + if (ret) + goto unregister_ecdh_p256; + + ret = crypto_register_kpp(&ecdh_nist_p521); + if (ret) + goto unregister_ecdh_p384; 
+ + return 0; + +unregister_ecdh_p384: + crypto_unregister_kpp(&ecdh_nist_p384); +unregister_ecdh_p256: + crypto_unregister_kpp(&ecdh_nist_p256); +unregister_ecdh_p224: + crypto_unregister_kpp(&ecdh_nist_p224); +unregister_ecdh_p192: + crypto_unregister_kpp(&ecdh_nist_p192); + return ret; +} + +static void hpre_unregister_ecdh(void) +{ + crypto_unregister_kpp(&ecdh_nist_p521); + crypto_unregister_kpp(&ecdh_nist_p384); + crypto_unregister_kpp(&ecdh_nist_p256); + crypto_unregister_kpp(&ecdh_nist_p224); + crypto_unregister_kpp(&ecdh_nist_p192); +} + int hpre_algs_register(struct hisi_qm *qm) { int ret; @@ -1164,17 +1781,33 @@ int hpre_algs_register(struct hisi_qm *qm) return ret; #ifdef CONFIG_CRYPTO_DH ret = crypto_register_kpp(&dh); - if (ret) + if (ret) { crypto_unregister_akcipher(&rsa); + return ret; + } #endif - return ret; + if (qm->ver >= QM_HW_V3) { + ret = hpre_register_ecdh(); + if (ret) { +#ifdef CONFIG_CRYPTO_DH + crypto_unregister_kpp(&dh); +#endif + crypto_unregister_akcipher(&rsa); + return ret; + } + } + + return 0; } void hpre_algs_unregister(struct hisi_qm *qm) { - crypto_unregister_akcipher(&rsa); + if (qm->ver >= QM_HW_V3) + hpre_unregister_ecdh(); + #ifdef CONFIG_CRYPTO_DH crypto_unregister_kpp(&dh); #endif + crypto_unregister_akcipher(&rsa); } diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c index e492103..f933457 100644 --- a/drivers/crypto/hisilicon/hpre/hpre_main.c +++ b/drivers/crypto/hisilicon/hpre/hpre_main.c @@ -1046,4 +1046,5 @@ module_exit(hpre_exit); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Zaibo Xu "); +MODULE_AUTHOR("Meng Yu "); MODULE_DESCRIPTION("Driver for HiSilicon HPRE accelerator"); From patchwork Mon Feb 8 09:38:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: yumeng X-Patchwork-Id: 378663 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A5D98C433E6 for ; Mon, 8 Feb 2021 09:48:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5920C64E9A for ; Mon, 8 Feb 2021 09:48:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231750AbhBHJrf (ORCPT ); Mon, 8 Feb 2021 04:47:35 -0500 Received: from szxga05-in.huawei.com ([45.249.212.191]:12497 "EHLO szxga05-in.huawei.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230425AbhBHJp1 (ORCPT ); Mon, 8 Feb 2021 04:45:27 -0500 Received: from DGGEMS404-HUB.china.huawei.com (unknown [172.30.72.58]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4DZ1GJ2N4XzjL0D; Mon, 8 Feb 2021 17:40:08 +0800 (CST) Received: from localhost.localdomain (10.67.165.24) by DGGEMS404-HUB.china.huawei.com (10.3.19.204) with Microsoft SMTP Server id 14.3.498.0; Mon, 8 Feb 2021 17:41:21 +0800 From: Meng Yu To: , , , , , CC: , , , , Subject: [PATCH v8 8/9] crypto: add curve25519 params and expose them Date: Mon, 8 Feb 2021 17:38:56 +0800 Message-ID: <1612777137-51067-9-git-send-email-yumeng18@huawei.com> X-Mailer: git-send-email 2.8.1 In-Reply-To: 
<1612777137-51067-1-git-send-email-yumeng18@huawei.com>
References: <1612777137-51067-1-git-send-email-yumeng18@huawei.com>
X-Mailing-List: linux-crypto@vger.kernel.org

1. Add the curve25519 curve parameters.
2. Add the function 'ecc_get_curve25519' so that the curve25519 parameters are exposed to the rest of the kernel tree.

Signed-off-by: Meng Yu
Reviewed-by: Zaibo Xu
---
 crypto/ecc.c               |  6 ++++++
 crypto/ecc_curve_defs.h    | 18 ++++++++++++++++++
 include/crypto/ecc_curve.h |  7 +++++++
 3 files changed, 31 insertions(+)

diff --git a/crypto/ecc.c b/crypto/ecc.c
index a631d3e..d8dbe7a 100644
--- a/crypto/ecc.c
+++ b/crypto/ecc.c
@@ -43,6 +43,12 @@ typedef struct {
 	u64 m_high;
 } uint128_t;
 
+/* Returns curve25519 curve param */
+const struct ecc_curve *ecc_get_curve25519(void)
+{
+	return &ecc_25519;
+}
+EXPORT_SYMBOL(ecc_get_curve25519);
 
 const struct ecc_curve *ecc_get_curve(unsigned int curve_id)
 {
diff --git a/crypto/ecc_curve_defs.h b/crypto/ecc_curve_defs.h
index bcd7b58..f37eced 100644
--- a/crypto/ecc_curve_defs.h
+++ b/crypto/ecc_curve_defs.h
@@ -160,4 +160,22 @@ static struct ecc_curve nist_p521 = {
 	.b = nist_p521_b
 };
 
+/* curve25519 */
+static u64 curve25519_g_x[] = { 0x0000000000000009, 0x0000000000000000,
+				0x0000000000000000, 0x0000000000000000 };
+static u64 curve25519_p[] = { 0xffffffffffffffed, 0xffffffffffffffff,
+			      0xffffffffffffffff, 0x7fffffffffffffff };
+static u64 curve25519_a[] = { 0x000000000001DB41, 0x0000000000000000,
+			      0x0000000000000000, 0x0000000000000000 };
+static const struct ecc_curve ecc_25519 = {
+	.name = "curve25519",
+	.g = {
+		.x = curve25519_g_x,
+		.ndigits = 4,
+	},
+	.p = curve25519_p,
+	.a = curve25519_a,
+};
+
+
 #endif
diff --git a/include/crypto/ecc_curve.h b/include/crypto/ecc_curve.h
index 19a35da..7096478 100644
--- a/include/crypto/ecc_curve.h
+++ b/include/crypto/ecc_curve.h
@@ -50,4 +50,11 @@ struct ecc_curve {
  */
 const struct ecc_curve *ecc_get_curve(unsigned int curve_id);
 
+/**
+ * ecc_get_curve25519() - get curve25519 curve;
+ *
+ * Returns curve25519
+ */
+const struct ecc_curve *ecc_get_curve25519(void);
+
 #endif
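With ecc_get_curve25519() exported, other kernel code can read the curve25519 parameters directly. Below is a minimal sketch of a consumer; it is illustrative only and touches just the fields the definition above fills in (.name, .p, .a, .g.x, .g.ndigits).

#include <crypto/ecc_curve.h>
#include <linux/kernel.h>

/* Hedged example (not from the series): dump the exposed curve25519 params. */
static void example_dump_curve25519(void)
{
	const struct ecc_curve *c = ecc_get_curve25519();

	pr_info("%s: ndigits=%u p[0]=%016llx a[0]=%016llx g.x[0]=%016llx\n",
		c->name, c->g.ndigits, c->p[0], c->a[0], c->g.x[0]);
}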