From patchwork Tue Aug 10 12:42:18 2021
X-Patchwork-Id: 494466
From: Hannes Reinecke
To: Christoph Hellwig
Cc: Sagi Grimberg, Keith Busch, Herbert Xu, "David S. Miller",
 linux-nvme@lists.infradead.org, linux-crypto@vger.kernel.org,
 Hannes Reinecke
Subject: [PATCH 01/13] crypto: add crypto_has_shash()
Date: Tue, 10 Aug 2021 14:42:18 +0200
Message-Id: <20210810124230.12161-2-hare@suse.de>
In-Reply-To: <20210810124230.12161-1-hare@suse.de>
References: <20210810124230.12161-1-hare@suse.de>

Add helper function to determine if a given synchronous hash is supported.
Signed-off-by: Hannes Reinecke
---
 crypto/shash.c        | 6 ++++++
 include/crypto/hash.h | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/crypto/shash.c b/crypto/shash.c
index 0a0a50cb694f..4c88e63b3350 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -521,6 +521,12 @@ struct crypto_shash *crypto_alloc_shash(const char *alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_shash);
 
+int crypto_has_shash(const char *alg_name, u32 type, u32 mask)
+{
+	return crypto_type_has_alg(alg_name, &crypto_shash_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_has_shash);
+
 static int shash_prepare_alg(struct shash_alg *alg)
 {
 	struct crypto_alg *base = &alg->base;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index f140e4643949..f5841992dc9b 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -718,6 +718,8 @@ static inline void ahash_request_set_crypt(struct ahash_request *req,
 struct crypto_shash *crypto_alloc_shash(const char *alg_name, u32 type,
 					u32 mask);
 
+int crypto_has_shash(const char *alg_name, u32 type, u32 mask);
+
 static inline struct crypto_tfm *crypto_shash_tfm(struct crypto_shash *tfm)
 {
 	return &tfm->base;
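[Not part of the patch: a minimal sketch of how a caller could use the new
helper to probe for an algorithm before allocating it. The wrapper function
and the "hmac(sha256)" algorithm name are made up for illustration; only
crypto_has_shash() itself comes from the patch above.]

#include <crypto/hash.h>

/* Illustrative only: true when the named synchronous hash is usable. */
static bool example_shash_available(const char *alg_name)
{
	/* crypto_has_shash() returns non-zero when the algorithm is found */
	return crypto_has_shash(alg_name, 0, 0);
}

/* e.g. example_shash_available("hmac(sha256)") before crypto_alloc_shash() */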
From patchwork Tue Aug 10 12:42:21 2021
X-Patchwork-Id: 494464
From: Hannes Reinecke
To: Christoph Hellwig
Cc: Sagi Grimberg, Keith Busch, Herbert Xu, "David S. Miller",
 linux-nvme@lists.infradead.org, linux-crypto@vger.kernel.org,
 Hannes Reinecke
Subject: [PATCH 04/13] lib/base64: RFC4648-compliant base64 encoding
Date: Tue, 10 Aug 2021 14:42:21 +0200
Message-Id: <20210810124230.12161-5-hare@suse.de>
In-Reply-To: <20210810124230.12161-1-hare@suse.de>
References: <20210810124230.12161-1-hare@suse.de>

Add RFC4648-compliant base64 encoding and decoding routines.

Signed-off-by: Hannes Reinecke
---
 include/linux/base64.h |  16 ++++++
 lib/Makefile           |   2 +-
 lib/base64.c           | 115 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 132 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/base64.h
 create mode 100644 lib/base64.c

diff --git a/include/linux/base64.h b/include/linux/base64.h
new file mode 100644
index 000000000000..660d4cb1ef31
--- /dev/null
+++ b/include/linux/base64.h
@@ -0,0 +1,16 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * base64 encoding, lifted from fs/crypto/fname.c.
+ */
+
+#ifndef _LINUX_BASE64_H
+#define _LINUX_BASE64_H
+
+#include <linux/types.h>
+
+#define BASE64_CHARS(nbytes)	DIV_ROUND_UP((nbytes) * 4, 3)
+
+int base64_encode(const u8 *src, int len, char *dst);
+int base64_decode(const char *src, int len, u8 *dst);
+
+#endif /* _LINUX_BASE64_H */
diff --git a/lib/Makefile b/lib/Makefile
index 6d765d5fb8ac..92a428aa0e5f 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -46,7 +46,7 @@ obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \
	 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \
	 list_sort.o uuid.o iov_iter.o clz_ctz.o \
	 bsearch.o find_bit.o llist.o memweight.o kfifo.o \
-	 percpu-refcount.o rhashtable.o \
+	 percpu-refcount.o rhashtable.o base64.o \
	 once.o refcount.o usercopy.o errseq.o bucket_locks.o \
	 generic-radix-tree.o
 obj-$(CONFIG_STRING_SELFTEST) += test_string.o
diff --git a/lib/base64.c b/lib/base64.c
new file mode 100644
index 000000000000..ef120cecceba
--- /dev/null
+++ b/lib/base64.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * base64.c - RFC4648-compliant base64 encoding
+ *
+ * Copyright (c) 2020 Hannes Reinecke, SUSE
+ *
+ * Based on the (slightly non-standard) base64 routines
+ * from fs/crypto/fname.c, but using the RFC-mandated
+ * coding table.
+ */
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/export.h>
+#include <linux/string.h>
+#include <linux/base64.h>
+
+static const char lookup_table[65] =
+	"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+
+/**
+ * base64_encode() - base64-encode some bytes
+ * @src: the bytes to encode
+ * @len: number of bytes to encode
+ * @dst: (output) the base64-encoded string. Not NUL-terminated.
+ *
+ * Encodes the input string using characters from the set [A-Za-z0-9+/].
+ * The encoded string is roughly 4/3 times the size of the input string.
+ *
+ * Return: length of the encoded string
+ */
+int base64_encode(const u8 *src, int len, char *dst)
+{
+	int i, bits = 0;
+	u32 ac = 0;
+	char *cp = dst;
+
+	for (i = 0; i < len; i++) {
+		ac = (ac << 8) | src[i];
+		bits += 8;
+		if (bits < 24)
+			continue;
+		do {
+			bits -= 6;
+			*cp++ = lookup_table[(ac >> bits) & 0x3f];
+		} while (bits);
+		ac = 0;
+	}
+	if (bits) {
+		int more = 0;
+
+		if (bits < 16)
+			more = 2;
+		ac = (ac << (2 + more));
+		bits += (2 + more);
+		do {
+			bits -= 6;
+			*cp++ = lookup_table[(ac >> bits) & 0x3f];
+		} while (bits);
+		*cp++ = '=';
+		if (more)
+			*cp++ = '=';
+	}
+
+	return cp - dst;
+}
+EXPORT_SYMBOL_GPL(base64_encode);
+
+/**
+ * base64_decode() - base64-decode some bytes
+ * @src: the base64-encoded string to decode
+ * @len: number of bytes to decode
+ * @dst: (output) the decoded bytes.
+ *
+ * Decodes the base64-encoded bytes @src according to RFC 4648.
+ *
+ * Return: number of decoded bytes
+ */
+int base64_decode(const char *src, int len, u8 *dst)
+{
+	int i, bits = 0, pad = 0;
+	u32 ac = 0;
+	size_t dst_len = 0;
+
+	for (i = 0; i < len; i++) {
+		int c, p = -1;
+
+		if (src[i] == '=') {
+			pad++;
+			if (i + 1 < len && src[i + 1] == '=')
+				pad++;
+			break;
+		}
+		for (c = 0; c < strlen(lookup_table); c++) {
+			if (src[i] == lookup_table[c]) {
+				p = c;
+				break;
+			}
+		}
+		if (p < 0)
+			break;
+		ac = (ac << 6) | p;
+		bits += 6;
+		if (bits < 24)
+			continue;
+		while (bits) {
+			bits -= 8;
+			dst[dst_len++] = (ac >> bits) & 0xff;
+		}
+		ac = 0;
+	}
+	dst_len -= pad;
+	return dst_len;
+}
+EXPORT_SYMBOL_GPL(base64_decode);
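[Not part of the patch: a minimal usage sketch for the new helpers. The
wrapper below is made up for illustration; only base64_encode() and
BASE64_CHARS() come from the patch. The "+3" sizing is an assumption that
leaves room for the up to two '=' padding characters base64_encode() emits
plus a terminating NUL added here.]

#include <linux/base64.h>
#include <linux/slab.h>

/* Illustrative only: encode @len bytes into a NUL-terminated string. */
static char *example_base64_string(const u8 *data, int len)
{
	char *enc;
	int enc_len;

	enc = kmalloc(BASE64_CHARS(len) + 3, GFP_KERNEL);
	if (!enc)
		return NULL;

	enc_len = base64_encode(data, len, enc);
	enc[enc_len] = '\0';
	return enc;
}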
From patchwork Tue Aug 10 12:42:22 2021
X-Patchwork-Id: 494465
From: Hannes Reinecke
To: Christoph Hellwig
Cc: Sagi Grimberg, Keith Busch, Herbert Xu, "David S. Miller",
 linux-nvme@lists.infradead.org, linux-crypto@vger.kernel.org,
 Hannes Reinecke
Subject: [PATCH 05/13] nvme: add definitions for NVMe In-Band authentication
Date: Tue, 10 Aug 2021 14:42:22 +0200
Message-Id: <20210810124230.12161-6-hare@suse.de>
In-Reply-To: <20210810124230.12161-1-hare@suse.de>
References: <20210810124230.12161-1-hare@suse.de>

Signed-off-by: Hannes Reinecke
---
 include/linux/nvme.h | 186 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 185 insertions(+), 1 deletion(-)

diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index b7c4c4130b65..e2142e3246eb 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -19,6 +19,7 @@
 #define NVMF_TRSVCID_SIZE	32
 #define NVMF_TRADDR_SIZE	256
 #define NVMF_TSAS_SIZE		256
+#define NVMF_AUTH_HASH_LEN	64
 
 #define NVME_DISC_SUBSYS_NAME	"nqn.2014-08.org.nvmexpress.discovery"
 
@@ -1263,6 +1264,8 @@ enum nvmf_capsule_command {
	nvme_fabrics_type_property_set	= 0x00,
	nvme_fabrics_type_connect	= 0x01,
	nvme_fabrics_type_property_get	= 0x04,
+	nvme_fabrics_type_auth_send	= 0x05,
+	nvme_fabrics_type_auth_receive	= 0x06,
 };
 
 #define nvme_fabrics_type_name(type)	{ type, #type }
@@ -1270,7 +1273,9 @@ enum nvmf_capsule_command {
	__print_symbolic(type,						\
		nvme_fabrics_type_name(nvme_fabrics_type_property_set),	\
		nvme_fabrics_type_name(nvme_fabrics_type_connect),	\
-		nvme_fabrics_type_name(nvme_fabrics_type_property_get))
+		nvme_fabrics_type_name(nvme_fabrics_type_property_get),	\
+		nvme_fabrics_type_name(nvme_fabrics_type_auth_send),	\
+		nvme_fabrics_type_name(nvme_fabrics_type_auth_receive))
 
 /*
  * If not fabrics command, fctype will be ignored.
@@ -1393,6 +1398,183 @@ struct nvmf_property_get_command { __u8 resv4[16]; }; +struct nvmf_auth_send_command { + __u8 opcode; + __u8 resv1; + __u16 command_id; + __u8 fctype; + __u8 resv2[19]; + union nvme_data_ptr dptr; + __u8 resv3; + __u8 spsp0; + __u8 spsp1; + __u8 secp; + __le32 tl; + __u8 resv4[16]; +}; + +struct nvmf_auth_receive_command { + __u8 opcode; + __u8 resv1; + __u16 command_id; + __u8 fctype; + __u8 resv2[19]; + union nvme_data_ptr dptr; + __u8 resv3; + __u8 spsp0; + __u8 spsp1; + __u8 secp; + __le32 al; + __u8 resv4[16]; +}; + +/* Value for secp */ +enum { + NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER = 0xe9, +}; + +/* Defined value for auth_type */ +enum { + NVME_AUTH_COMMON_MESSAGES = 0x00, + NVME_AUTH_DHCHAP_MESSAGES = 0x01, +}; + +/* Defined messages for auth_id */ +enum { + NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE = 0x00, + NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE = 0x01, + NVME_AUTH_DHCHAP_MESSAGE_REPLY = 0x02, + NVME_AUTH_DHCHAP_MESSAGE_SUCCESS1 = 0x03, + NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2 = 0x04, + NVME_AUTH_DHCHAP_MESSAGE_FAILURE2 = 0xf0, + NVME_AUTH_DHCHAP_MESSAGE_FAILURE1 = 0xf1, +}; + +struct nvmf_auth_dhchap_protocol_descriptor { + __u8 authid; + __u8 rsvd; + __u8 halen; + __u8 dhlen; + __u8 idlist[60]; +}; + +enum { + NVME_AUTH_DHCHAP_AUTH_ID = 0x01, +}; + +/* Defined hash functions for DH-HMAC-CHAP authentication */ +enum { + NVME_AUTH_DHCHAP_SHA256 = 0x01, + NVME_AUTH_DHCHAP_SHA384 = 0x02, + NVME_AUTH_DHCHAP_SHA512 = 0x03, +}; + +/* Defined Diffie-Hellman group identifiers for DH-HMAC-CHAP authentication */ +enum { + NVME_AUTH_DHCHAP_DHGROUP_NULL = 0x00, + NVME_AUTH_DHCHAP_DHGROUP_2048 = 0x01, + NVME_AUTH_DHCHAP_DHGROUP_3072 = 0x02, + NVME_AUTH_DHCHAP_DHGROUP_4096 = 0x03, + NVME_AUTH_DHCHAP_DHGROUP_6144 = 0x04, + NVME_AUTH_DHCHAP_DHGROUP_8192 = 0x05, +}; + +union nvmf_auth_protocol { + struct nvmf_auth_dhchap_protocol_descriptor dhchap; +}; + +struct nvmf_auth_dhchap_negotiate_data { + __u8 auth_type; + __u8 auth_id; + __le16 rsvd; + __le16 t_id; + __u8 sc_c; + __u8 napd; + union nvmf_auth_protocol auth_protocol[]; +}; + +struct nvmf_auth_dhchap_challenge_data { + __u8 auth_type; + __u8 auth_id; + __u16 rsvd1; + __le16 t_id; + __u8 hl; + __u8 rsvd2; + __u8 hashid; + __u8 dhgid; + __le16 dhvlen; + __le32 seqnum; + /* 'hl' bytes of challenge value */ + __u8 cval[]; + /* followed by 'dhvlen' bytes of DH value */ +}; + +struct nvmf_auth_dhchap_reply_data { + __u8 auth_type; + __u8 auth_id; + __le16 rsvd1; + __le16 t_id; + __u8 hl; + __u8 rsvd2; + __u8 cvalid; + __u8 rsvd3; + __le16 dhvlen; + __le32 seqnum; + /* 'hl' bytes of response data */ + __u8 rval[]; + /* followed by 'hl' bytes of Challenge value */ + /* followed by 'dhvlen' bytes of DH value */ +}; + +enum { + NVME_AUTH_DHCHAP_RESPONSE_VALID = (1 << 0), +}; + +struct nvmf_auth_dhchap_success1_data { + __u8 auth_type; + __u8 auth_id; + __le16 rsvd1; + __le16 t_id; + __u8 hl; + __u8 rsvd2; + __u8 rvalid; + __u8 rsvd3[7]; + /* 'hl' bytes of response value if 'rvalid' is set */ + __u8 rval[]; +}; + +struct nvmf_auth_dhchap_success2_data { + __u8 auth_type; + __u8 auth_id; + __le16 rsvd1; + __le16 t_id; + __u8 rsvd2[10]; +}; + +struct nvmf_auth_dhchap_failure_data { + __u8 auth_type; + __u8 auth_id; + __le16 rsvd1; + __le16 t_id; + __u8 rescode; + __u8 rescode_exp; +}; + +enum { + NVME_AUTH_DHCHAP_FAILURE_REASON_FAILED = 0x01, +}; + +enum { + NVME_AUTH_DHCHAP_FAILURE_FAILED = 0x01, + NVME_AUTH_DHCHAP_FAILURE_NOT_USABLE = 0x02, + NVME_AUTH_DHCHAP_FAILURE_CONCAT_MISMATCH = 0x03, + NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE = 0x04, + 
NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE = 0x05,
+	NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD = 0x06,
+	NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE = 0x07,
+};
+
+
 struct nvme_dbbuf {
	__u8			opcode;
	__u8			flags;
@@ -1436,6 +1618,8 @@ struct nvme_command {
		struct nvmf_connect_command connect;
		struct nvmf_property_set_command prop_set;
		struct nvmf_property_get_command prop_get;
+		struct nvmf_auth_send_command auth_send;
+		struct nvmf_auth_receive_command auth_receive;
		struct nvme_dbbuf dbbuf;
		struct nvme_directive_cmd directive;
	};
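[Not part of the patch: a sketch of how these definitions fit together. A
DH-HMAC-CHAP negotiate payload is a struct nvmf_auth_dhchap_negotiate_data
followed by 'napd' protocol descriptors, with the DH group identifiers
placed in idlist[] right after the hash identifiers, as the host
implementation later in this series does. The function below is
illustrative only and offers just SHA-256 with the NULL DH group.]

#include <linux/nvme.h>
#include <linux/slab.h>

/* Illustrative only: build a minimal DH-HMAC-CHAP negotiate payload. */
static void *example_build_negotiate(size_t *lenp)
{
	struct nvmf_auth_dhchap_negotiate_data *data;
	size_t len = sizeof(*data) + sizeof(union nvmf_auth_protocol);

	data = kzalloc(len, GFP_KERNEL);
	if (!data)
		return NULL;

	data->auth_type = NVME_AUTH_COMMON_MESSAGES;
	data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE;
	data->t_id = cpu_to_le16(1);	/* transaction id chosen by the host */
	data->napd = 1;			/* one protocol descriptor follows */
	data->auth_protocol[0].dhchap.authid = NVME_AUTH_DHCHAP_AUTH_ID;
	data->auth_protocol[0].dhchap.halen = 1;
	data->auth_protocol[0].dhchap.dhlen = 1;
	data->auth_protocol[0].dhchap.idlist[0] = NVME_AUTH_DHCHAP_SHA256;
	data->auth_protocol[0].dhchap.idlist[1] = NVME_AUTH_DHCHAP_DHGROUP_NULL;

	*lenp = len;
	return data;
}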
Miller" , linux-nvme@lists.infradead.org, linux-crypto@vger.kernel.org, Hannes Reinecke Subject: [PATCH 06/13] nvme-fabrics: decode 'authentication required' connect error Date: Tue, 10 Aug 2021 14:42:23 +0200 Message-Id: <20210810124230.12161-7-hare@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210810124230.12161-1-hare@suse.de> References: <20210810124230.12161-1-hare@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org The 'connect' command might fail with NVME_SC_AUTH_REQUIRED, so we should be decoding this error, too. Signed-off-by: Hannes Reinecke --- drivers/nvme/host/fabrics.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c index a5469fd9d4c3..1e9900f72e66 100644 --- a/drivers/nvme/host/fabrics.c +++ b/drivers/nvme/host/fabrics.c @@ -332,6 +332,10 @@ static void nvmf_log_connect_error(struct nvme_ctrl *ctrl, dev_err(ctrl->device, "Connect command failed: host path error\n"); break; + case NVME_SC_AUTH_REQUIRED: + dev_err(ctrl->device, + "Connect command failed: authentication required\n"); + break; default: dev_err(ctrl->device, "Connect command failed, error wo/DNR bit: %d\n", From patchwork Tue Aug 10 12:42:24 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hannes Reinecke X-Patchwork-Id: 494460 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-23.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, MENTIONS_GIT_HOSTING, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 86B97C04FE3 for ; Tue, 10 Aug 2021 12:43:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6D53460F11 for ; Tue, 10 Aug 2021 12:43:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240761AbhHJMnZ (ORCPT ); Tue, 10 Aug 2021 08:43:25 -0400 Received: from smtp-out2.suse.de ([195.135.220.29]:43702 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240731AbhHJMnQ (ORCPT ); Tue, 10 Aug 2021 08:43:16 -0400 Received: from relay2.suse.de (relay2.suse.de [149.44.160.134]) by smtp-out2.suse.de (Postfix) with ESMTP id 27A73200B1; Tue, 10 Aug 2021 12:42:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1628599372; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=UxHkLBkblr5HH8dgeBIya+ICDr4LD2JlCONiiYPdc7w=; b=UxcjYnqAm/oGucyW9ne2Y5K1JtozsDsdp5yGvWWqSzclWABlXLhIHrj9xpKye7Hh6X8VPH W/vYHIkInS90DWX0M/xBYjTHqJre5DfJi9hSn9pPmfYnHkP6NleU+a/DQv5vKwIT5GJzb6 Wzkp8gS8u+J/Q6mXjakNn9nfvqz9ec0= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1628599372; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=UxHkLBkblr5HH8dgeBIya+ICDr4LD2JlCONiiYPdc7w=; 
From patchwork Tue Aug 10 12:42:24 2021
X-Patchwork-Id: 494460
From: Hannes Reinecke
To: Christoph Hellwig
Cc: Sagi Grimberg, Keith Busch, Herbert Xu, "David S. Miller",
 linux-nvme@lists.infradead.org, linux-crypto@vger.kernel.org,
 Hannes Reinecke
Subject: [PATCH 07/13] nvme: Implement In-Band authentication
Date: Tue, 10 Aug 2021 14:42:24 +0200
Message-Id: <20210810124230.12161-8-hare@suse.de>
In-Reply-To: <20210810124230.12161-1-hare@suse.de>
References: <20210810124230.12161-1-hare@suse.de>

Implement NVMe-oF In-Band authentication according to NVMe TPAR 8006.
This patch adds two new fabric options: 'dhchap_secret' to specify the
pre-shared key (in ASCII representation according to NVMe 2.0 section
8.13.5.8 'Secret representation') and 'dhchap_bidi' to request
bi-directional authentication of both the host and the controller.
Re-authentication can be triggered by writing the PSK into the new
controller sysfs attribute 'dhchap_secret'.

Signed-off-by: Hannes Reinecke
---
 drivers/nvme/host/Kconfig   |   12 +
 drivers/nvme/host/Makefile  |    1 +
 drivers/nvme/host/auth.c    | 1282 +++++++++++++++++++++++++++++++++++
 drivers/nvme/host/auth.h    |   25 +
 drivers/nvme/host/core.c    |   79 ++-
 drivers/nvme/host/fabrics.c |   73 +-
 drivers/nvme/host/fabrics.h |    6 +
 drivers/nvme/host/nvme.h    |   30 +
 drivers/nvme/host/trace.c   |   32 +
 9 files changed, 1534 insertions(+), 6 deletions(-)
 create mode 100644 drivers/nvme/host/auth.c
 create mode 100644 drivers/nvme/host/auth.h

diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
index c3f3d77f1aac..720737c010d3 100644
--- a/drivers/nvme/host/Kconfig
+++ b/drivers/nvme/host/Kconfig
@@ -85,3 +85,15 @@ config NVME_TCP
	  from https://github.com/linux-nvme/nvme-cli.
 
	  If unsure, say N.
+
+config NVME_AUTH
+	bool "NVM Express over Fabrics In-Band Authentication"
+	depends on NVME_CORE
+	select CRYPTO_HMAC
+	select CRYPTO_SHA256
+	select CRYPTO_SHA512
+	help
+	  This provides support for NVMe over Fabrics In-Band Authentication
+	  for the NVMe over TCP transport.
+
+	  If unsure, say N.
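[Not part of the patch: an aside between hunks on the secret format. The
'dhchap_secret' string described in the commit message uses the NVMe 2.0
"DHHC-1:XX:<base64 secret>:" representation; the sketch below shows one way
the prefix could be split off before base64-decoding the key, mirroring the
fixed 10-character prefix handling in nvme_auth_generate_key() further down
in this patch. The function and the range check on the key hash value are
illustrative assumptions, not code from the patch.]

#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/types.h>

/*
 * Illustrative only: validate the "DHHC-1:XX:" prefix of a DH-HMAC-CHAP
 * secret. XX is the key hash: 00 means the secret is used as-is, while
 * 01/02/03 select a SHA-256/384/512 transformation of the secret.
 */
static const char *example_dhchap_secret_payload(const char *secret,
						 u8 *key_hash)
{
	if (sscanf(secret, "DHHC-1:%hhd:", key_hash) != 1)
		return NULL;
	if (*key_hash > 3)
		return NULL;
	/* the base64-encoded key starts right after the second ':' */
	return secret + strlen("DHHC-1:XX:");
}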
diff --git a/drivers/nvme/host/Makefile b/drivers/nvme/host/Makefile index cbc509784b2e..817b46f3e000 100644 --- a/drivers/nvme/host/Makefile +++ b/drivers/nvme/host/Makefile @@ -16,6 +16,7 @@ nvme-core-$(CONFIG_NVM) += lightnvm.o nvme-core-$(CONFIG_BLK_DEV_ZONED) += zns.o nvme-core-$(CONFIG_FAULT_INJECTION_DEBUG_FS) += fault_inject.o nvme-core-$(CONFIG_NVME_HWMON) += hwmon.o +nvme-core-$(CONFIG_NVME_AUTH) += auth.o nvme-y += pci.o diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c new file mode 100644 index 000000000000..8916755b6247 --- /dev/null +++ b/drivers/nvme/host/auth.c @@ -0,0 +1,1282 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2020 Hannes Reinecke, SUSE Linux + */ + +#include +#include +#include +#include +#include +#include +#include "nvme.h" +#include "fabrics.h" +#include "auth.h" + +static u32 nvme_dhchap_seqnum; + +struct nvme_dhchap_queue_context { + struct list_head entry; + struct work_struct auth_work; + struct nvme_ctrl *ctrl; + struct crypto_shash *shash_tfm; + struct crypto_kpp *dh_tfm; + void *buf; + size_t buf_size; + int qid; + int error; + u32 s1; + u32 s2; + u16 transaction; + u8 status; + u8 hash_id; + u8 hash_len; + u8 dhgroup_id; + u8 c1[64]; + u8 c2[64]; + u8 response[64]; + u8 *host_response; +}; + +static struct nvme_auth_dhgroup_map { + int id; + const char name[16]; + const char kpp[16]; + int privkey_size; + int pubkey_size; +} dhgroup_map[] = { + { .id = NVME_AUTH_DHCHAP_DHGROUP_NULL, + .name = "NULL", .kpp = "NULL", + .privkey_size = 0, .pubkey_size = 0 }, + { .id = NVME_AUTH_DHCHAP_DHGROUP_2048, + .name = "ffdhe2048", .kpp = "dh", + .privkey_size = 256, .pubkey_size = 256 }, + { .id = NVME_AUTH_DHCHAP_DHGROUP_3072, + .name = "ffdhe3072", .kpp = "dh", + .privkey_size = 384, .pubkey_size = 384 }, + { .id = NVME_AUTH_DHCHAP_DHGROUP_4096, + .name = "ffdhe4096", .kpp = "dh", + .privkey_size = 512, .pubkey_size = 512 }, + { .id = NVME_AUTH_DHCHAP_DHGROUP_6144, + .name = "ffdhe6144", .kpp = "dh", + .privkey_size = 768, .pubkey_size = 768 }, + { .id = NVME_AUTH_DHCHAP_DHGROUP_8192, + .name = "ffdhe8192", .kpp = "dh", + .privkey_size = 1024, .pubkey_size = 1024 }, +}; + +const char *nvme_auth_dhgroup_name(int dhgroup_id) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) { + if (dhgroup_map[i].id == dhgroup_id) + return dhgroup_map[i].name; + } + return NULL; +} +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_name); + +int nvme_auth_dhgroup_pubkey_size(int dhgroup_id) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) { + if (dhgroup_map[i].id == dhgroup_id) + return dhgroup_map[i].pubkey_size; + } + return -1; +} +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_pubkey_size); + +int nvme_auth_dhgroup_privkey_size(int dhgroup_id) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) { + if (dhgroup_map[i].id == dhgroup_id) + return dhgroup_map[i].privkey_size; + } + return -1; +} +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_privkey_size); + +const char *nvme_auth_dhgroup_kpp(int dhgroup_id) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) { + if (dhgroup_map[i].id == dhgroup_id) + return dhgroup_map[i].kpp; + } + return NULL; +} +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_kpp); + +int nvme_auth_dhgroup_id(const char *dhgroup_name) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(dhgroup_map); i++) { + if (!strncmp(dhgroup_map[i].name, dhgroup_name, + strlen(dhgroup_map[i].name))) + return dhgroup_map[i].id; + } + return -1; +} +EXPORT_SYMBOL_GPL(nvme_auth_dhgroup_id); + +static struct nvme_dhchap_hash_map { + int id; + const char 
hmac[15]; + const char digest[15]; +} hash_map[] = { + {.id = NVME_AUTH_DHCHAP_SHA256, + .hmac = "hmac(sha256)", .digest = "sha256" }, + {.id = NVME_AUTH_DHCHAP_SHA384, + .hmac = "hmac(sha384)", .digest = "sha384" }, + {.id = NVME_AUTH_DHCHAP_SHA512, + .hmac = "hmac(sha512)", .digest = "sha512" }, +}; + +const char *nvme_auth_hmac_name(int hmac_id) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(hash_map); i++) { + if (hash_map[i].id == hmac_id) + return hash_map[i].hmac; + } + return NULL; +} +EXPORT_SYMBOL_GPL(nvme_auth_hmac_name); + +const char *nvme_auth_digest_name(int hmac_id) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(hash_map); i++) { + if (hash_map[i].id == hmac_id) + return hash_map[i].digest; + } + return NULL; +} +EXPORT_SYMBOL_GPL(nvme_auth_digest_name); + +int nvme_auth_hmac_id(const char *hmac_name) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(hash_map); i++) { + if (!strncmp(hash_map[i].hmac, hmac_name, + strlen(hash_map[i].hmac))) + return hash_map[i].id; + } + return -1; +} +EXPORT_SYMBOL_GPL(nvme_auth_hmac_id); + +unsigned char *nvme_auth_extract_secret(unsigned char *secret, size_t *out_len) +{ + unsigned char *key; + u32 crc; + int key_len; + size_t allocated_len; + + allocated_len = strlen(secret); + key = kzalloc(allocated_len, GFP_KERNEL); + if (!key) + return ERR_PTR(-ENOMEM); + + key_len = base64_decode(secret, allocated_len, key); + if (key_len != 36 && key_len != 52 && + key_len != 68) { + pr_debug("Invalid DH-HMAC-CHAP key len %d\n", + key_len); + kfree_sensitive(key); + return ERR_PTR(-EINVAL); + } + + /* The last four bytes is the CRC in little-endian format */ + key_len -= 4; + /* + * The linux implementation doesn't do pre- and post-increments, + * so we have to do it manually. + */ + crc = ~crc32(~0, key, key_len); + + if (get_unaligned_le32(key + key_len) != crc) { + pr_debug("DH-HMAC-CHAP key crc mismatch (key %08x, crc %08x)\n", + get_unaligned_le32(key + key_len), crc); + kfree_sensitive(key); + return ERR_PTR(-EKEYREJECTED); + } + *out_len = key_len; + return key; +} +EXPORT_SYMBOL_GPL(nvme_auth_extract_secret); + +u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn) +{ + const char *hmac_name = nvme_auth_hmac_name(key_hash); + struct crypto_shash *key_tfm; + struct shash_desc *shash; + u8 *transformed_key; + int ret; + + /* No key transformation required */ + if (key_hash == 0) + return 0; + + hmac_name = nvme_auth_hmac_name(key_hash); + if (!hmac_name) { + pr_warn("Invalid key hash id %d\n", key_hash); + return ERR_PTR(-EKEYREJECTED); + } + key_tfm = crypto_alloc_shash(hmac_name, 0, 0); + if (IS_ERR(key_tfm)) + return (u8 *)key_tfm; + + shash = kmalloc(sizeof(struct shash_desc) + + crypto_shash_descsize(key_tfm), + GFP_KERNEL); + if (!shash) { + crypto_free_shash(key_tfm); + return ERR_PTR(-ENOMEM); + } + transformed_key = kzalloc(crypto_shash_digestsize(key_tfm), GFP_KERNEL); + if (!transformed_key) { + ret = -ENOMEM; + goto out_free_shash; + } + + shash->tfm = key_tfm; + ret = crypto_shash_setkey(key_tfm, key, key_len); + if (ret < 0) + goto out_free_shash; + ret = crypto_shash_init(shash); + if (ret < 0) + goto out_free_shash; + ret = crypto_shash_update(shash, nqn, strlen(nqn)); + if (ret < 0) + goto out_free_shash; + ret = crypto_shash_update(shash, "NVMe-over-Fabrics", 17); + if (ret < 0) + goto out_free_shash; + ret = crypto_shash_final(shash, transformed_key); +out_free_shash: + kfree(shash); + crypto_free_shash(key_tfm); + if (ret < 0) { + kfree_sensitive(transformed_key); + return ERR_PTR(ret); + } + return 
transformed_key; +} +EXPORT_SYMBOL_GPL(nvme_auth_transform_key); + +static int nvme_auth_hash_skey(int hmac_id, u8 *skey, size_t skey_len, u8 *hkey) +{ + const char *digest_name; + struct crypto_shash *tfm; + int ret; + + digest_name = nvme_auth_digest_name(hmac_id); + if (!digest_name) { + pr_debug("%s: failed to get digest for %d\n", __func__, + hmac_id); + return -EINVAL; + } + tfm = crypto_alloc_shash(digest_name, 0, 0); + if (IS_ERR(tfm)) + return -ENOMEM; + + ret = crypto_shash_tfm_digest(tfm, skey, skey_len, hkey); + if (ret < 0) + pr_debug("%s: Failed to hash digest len %zu\n", __func__, + skey_len); + + crypto_free_shash(tfm); + return ret; +} + +int nvme_auth_augmented_challenge(u8 hmac_id, u8 *skey, size_t skey_len, + u8 *challenge, u8 *aug, size_t hlen) +{ + struct crypto_shash *tfm; + struct shash_desc *desc; + u8 *hashed_key; + const char *hmac_name; + int ret; + + hashed_key = kmalloc(hlen, GFP_KERNEL); + if (!hashed_key) + return -ENOMEM; + + ret = nvme_auth_hash_skey(hmac_id, skey, + skey_len, hashed_key); + if (ret < 0) + goto out_free_key; + + hmac_name = nvme_auth_hmac_name(hmac_id); + if (!hmac_name) { + pr_warn("%s: invalid hash algoritm %d\n", + __func__, hmac_id); + ret = -EINVAL; + goto out_free_key; + } + tfm = crypto_alloc_shash(hmac_name, 0, 0); + if (IS_ERR(tfm)) { + ret = PTR_ERR(tfm); + goto out_free_key; + } + desc = kmalloc(sizeof(struct shash_desc) + crypto_shash_descsize(tfm), + GFP_KERNEL); + if (!desc) { + ret = -ENOMEM; + goto out_free_hash; + } + desc->tfm = tfm; + + ret = crypto_shash_setkey(tfm, hashed_key, hlen); + if (ret) + goto out_free_desc; + + ret = crypto_shash_init(desc); + if (ret) + goto out_free_desc; + + ret = crypto_shash_update(desc, challenge, hlen); + if (ret) + goto out_free_desc; + + ret = crypto_shash_final(desc, aug); +out_free_desc: + kfree_sensitive(desc); +out_free_hash: + crypto_free_shash(tfm); +out_free_key: + kfree_sensitive(hashed_key); + return ret; +} +EXPORT_SYMBOL_GPL(nvme_auth_augmented_challenge); + +int nvme_auth_gen_privkey(struct crypto_kpp *dh_tfm, int dh_gid) +{ + char *pkey; + int ret, pkey_len; + + if (dh_gid == NVME_AUTH_DHCHAP_DHGROUP_2048 || + dh_gid == NVME_AUTH_DHCHAP_DHGROUP_3072 || + dh_gid == NVME_AUTH_DHCHAP_DHGROUP_4096 || + dh_gid == NVME_AUTH_DHCHAP_DHGROUP_6144 || + dh_gid == NVME_AUTH_DHCHAP_DHGROUP_8192) { + struct dh p = {0}; + int bits = nvme_auth_dhgroup_pubkey_size(dh_gid) << 3; + int dh_secret_len = 64; + u8 *dh_secret = kzalloc(dh_secret_len, GFP_KERNEL); + + if (!dh_secret) + return -ENOMEM; + + /* + * NVMe base spec v2.0: The DH value shall be set to the value + * of g^x mod p, where 'x' is a random number selected by the + * host that shall be at least 256 bits long. + * + * We will be using a 512 bit random number as private key. + * This is large enough to provide adequate security, but + * small enough such that we can trivially conform to + * NIST SB800-56A section 5.6.1.1.4 if + * we guarantee that the random number is not either + * all 0xff or all 0x00. But that should be guaranteed + * by the in-kernel RNG anyway. 
+ */ + get_random_bytes(dh_secret, dh_secret_len); + + ret = crypto_ffdhe_params(&p, bits); + if (ret) { + kfree_sensitive(dh_secret); + return ret; + } + + p.key = dh_secret; + p.key_size = dh_secret_len; + + pkey_len = crypto_dh_key_len(&p); + pkey = kmalloc(pkey_len, GFP_KERNEL); + if (!pkey) { + kfree_sensitive(dh_secret); + return -ENOMEM; + } + + get_random_bytes(pkey, pkey_len); + ret = crypto_dh_encode_key(pkey, pkey_len, &p); + if (ret) { + pr_debug("failed to encode private key, error %d\n", + ret); + kfree_sensitive(dh_secret); + goto out; + } + } else { + pr_warn("invalid dh group %d\n", dh_gid); + return -EINVAL; + } + ret = crypto_kpp_set_secret(dh_tfm, pkey, pkey_len); + if (ret) + pr_debug("failed to set private key, error %d\n", ret); +out: + kfree_sensitive(pkey); + return ret; +} +EXPORT_SYMBOL_GPL(nvme_auth_gen_privkey); + +int nvme_auth_gen_pubkey(struct crypto_kpp *dh_tfm, + u8 *host_key, size_t host_key_len) +{ + struct kpp_request *req; + struct crypto_wait wait; + struct scatterlist dst; + int ret; + + req = kpp_request_alloc(dh_tfm, GFP_KERNEL); + if (!req) + return -ENOMEM; + + crypto_init_wait(&wait); + kpp_request_set_input(req, NULL, 0); + sg_init_one(&dst, host_key, host_key_len); + kpp_request_set_output(req, &dst, host_key_len); + kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &wait); + + ret = crypto_wait_req(crypto_kpp_generate_public_key(req), &wait); + + kpp_request_free(req); + return ret; +} +EXPORT_SYMBOL_GPL(nvme_auth_gen_pubkey); + +int nvme_auth_gen_shared_secret(struct crypto_kpp *dh_tfm, + u8 *ctrl_key, size_t ctrl_key_len, + u8 *sess_key, size_t sess_key_len) +{ + struct kpp_request *req; + struct crypto_wait wait; + struct scatterlist src, dst; + int ret; + + req = kpp_request_alloc(dh_tfm, GFP_KERNEL); + if (!req) + return -ENOMEM; + + crypto_init_wait(&wait); + sg_init_one(&src, ctrl_key, ctrl_key_len); + kpp_request_set_input(req, &src, ctrl_key_len); + sg_init_one(&dst, sess_key, sess_key_len); + kpp_request_set_output(req, &dst, sess_key_len); + kpp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &wait); + + ret = crypto_wait_req(crypto_kpp_compute_shared_secret(req), &wait); + + kpp_request_free(req); + return ret; +} +EXPORT_SYMBOL_GPL(nvme_auth_gen_shared_secret); + +static int nvme_auth_send(struct nvme_ctrl *ctrl, int qid, + void *data, size_t tl) +{ + struct nvme_command cmd = {}; + blk_mq_req_flags_t flags = qid == NVME_QID_ANY ? + 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED; + struct request_queue *q = qid == NVME_QID_ANY ? + ctrl->fabrics_q : ctrl->connect_q; + int ret; + + cmd.auth_send.opcode = nvme_fabrics_command; + cmd.auth_send.fctype = nvme_fabrics_type_auth_send; + cmd.auth_send.secp = NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER; + cmd.auth_send.spsp0 = 0x01; + cmd.auth_send.spsp1 = 0x01; + cmd.auth_send.tl = tl; + + ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, tl, 0, qid, + 0, flags); + if (ret > 0) + dev_dbg(ctrl->device, + "%s: qid %d nvme status %d\n", __func__, qid, ret); + else if (ret < 0) + dev_dbg(ctrl->device, + "%s: qid %d error %d\n", __func__, qid, ret); + return ret; +} + +static int nvme_auth_receive(struct nvme_ctrl *ctrl, int qid, + void *buf, size_t al) +{ + struct nvme_command cmd = {}; + blk_mq_req_flags_t flags = qid == NVME_QID_ANY ? + 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED; + struct request_queue *q = qid == NVME_QID_ANY ? 
+ ctrl->fabrics_q : ctrl->connect_q; + int ret; + + cmd.auth_receive.opcode = nvme_fabrics_command; + cmd.auth_receive.fctype = nvme_fabrics_type_auth_receive; + cmd.auth_receive.secp = NVME_AUTH_DHCHAP_PROTOCOL_IDENTIFIER; + cmd.auth_receive.spsp0 = 0x01; + cmd.auth_receive.spsp1 = 0x01; + cmd.auth_receive.al = al; + + ret = __nvme_submit_sync_cmd(q, &cmd, NULL, buf, al, 0, qid, + 0, flags); + if (ret > 0) { + dev_dbg(ctrl->device, "%s: qid %d nvme status %x\n", + __func__, qid, ret); + ret = -EIO; + } + if (ret < 0) { + dev_dbg(ctrl->device, "%s: qid %d error %d\n", + __func__, qid, ret); + return ret; + } + + return 0; +} + +static int nvme_auth_receive_validate(struct nvme_ctrl *ctrl, int qid, + struct nvmf_auth_dhchap_failure_data *data, + u16 transaction, u8 expected_msg) +{ + dev_dbg(ctrl->device, "%s: qid %d auth_type %d auth_id %x\n", + __func__, qid, data->auth_type, data->auth_id); + + if (data->auth_type == NVME_AUTH_COMMON_MESSAGES && + data->auth_id == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) { + return data->rescode_exp; + } + if (data->auth_type != NVME_AUTH_DHCHAP_MESSAGES || + data->auth_id != expected_msg) { + dev_warn(ctrl->device, + "qid %d invalid message %02x/%02x\n", + qid, data->auth_type, data->auth_id); + return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE; + } + if (le16_to_cpu(data->t_id) != transaction) { + dev_warn(ctrl->device, + "qid %d invalid transaction ID %d\n", + qid, le16_to_cpu(data->t_id)); + return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_MESSAGE; + } + return 0; +} + +static int nvme_auth_set_dhchap_negotiate_data(struct nvme_ctrl *ctrl, + struct nvme_dhchap_queue_context *chap) +{ + struct nvmf_auth_dhchap_negotiate_data *data = chap->buf; + size_t size = sizeof(*data) + sizeof(union nvmf_auth_protocol); + + if (chap->buf_size < size) { + chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD; + return -EINVAL; + } + memset((u8 *)chap->buf, 0, size); + data->auth_type = NVME_AUTH_COMMON_MESSAGES; + data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_NEGOTIATE; + data->t_id = cpu_to_le16(chap->transaction); + data->sc_c = 0; /* No secure channel concatenation */ + data->napd = 1; + data->auth_protocol[0].dhchap.authid = NVME_AUTH_DHCHAP_AUTH_ID; + data->auth_protocol[0].dhchap.halen = 3; + data->auth_protocol[0].dhchap.dhlen = 6; + data->auth_protocol[0].dhchap.idlist[0] = NVME_AUTH_DHCHAP_SHA256; + data->auth_protocol[0].dhchap.idlist[1] = NVME_AUTH_DHCHAP_SHA384; + data->auth_protocol[0].dhchap.idlist[2] = NVME_AUTH_DHCHAP_SHA512; + data->auth_protocol[0].dhchap.idlist[3] = NVME_AUTH_DHCHAP_DHGROUP_NULL; + data->auth_protocol[0].dhchap.idlist[4] = NVME_AUTH_DHCHAP_DHGROUP_2048; + data->auth_protocol[0].dhchap.idlist[5] = NVME_AUTH_DHCHAP_DHGROUP_3072; + data->auth_protocol[0].dhchap.idlist[6] = NVME_AUTH_DHCHAP_DHGROUP_4096; + data->auth_protocol[0].dhchap.idlist[7] = NVME_AUTH_DHCHAP_DHGROUP_6144; + data->auth_protocol[0].dhchap.idlist[8] = NVME_AUTH_DHCHAP_DHGROUP_8192; + + return size; +} + +static int nvme_auth_process_dhchap_challenge(struct nvme_ctrl *ctrl, + struct nvme_dhchap_queue_context *chap) +{ + struct nvmf_auth_dhchap_challenge_data *data = chap->buf; + size_t size = sizeof(*data) + data->hl + data->dhvlen; + const char *hmac_name; + + if (chap->buf_size < size) { + chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD; + return NVME_SC_INVALID_FIELD; + } + + hmac_name = nvme_auth_hmac_name(data->hashid); + if (!hmac_name) { + dev_warn(ctrl->device, + "qid %d: invalid HASH ID %d\n", + chap->qid, data->hashid); + chap->status = 
NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE; + return -EPROTO; + } + if (chap->hash_id == data->hashid && chap->shash_tfm && + !strcmp(crypto_shash_alg_name(chap->shash_tfm), hmac_name) && + crypto_shash_digestsize(chap->shash_tfm) == data->hl) { + dev_dbg(ctrl->device, + "qid %d: reuse existing hash %s\n", + chap->qid, hmac_name); + goto select_kpp; + } + if (chap->shash_tfm) { + crypto_free_shash(chap->shash_tfm); + chap->hash_id = 0; + chap->hash_len = 0; + } + chap->shash_tfm = crypto_alloc_shash(hmac_name, 0, + CRYPTO_ALG_ALLOCATES_MEMORY); + if (IS_ERR(chap->shash_tfm)) { + dev_warn(ctrl->device, + "qid %d: failed to allocate hash %s, error %ld\n", + chap->qid, hmac_name, PTR_ERR(chap->shash_tfm)); + chap->shash_tfm = NULL; + chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED; + return NVME_SC_AUTH_REQUIRED; + } + if (crypto_shash_digestsize(chap->shash_tfm) != data->hl) { + dev_warn(ctrl->device, + "qid %d: invalid hash length %d\n", + chap->qid, data->hl); + crypto_free_shash(chap->shash_tfm); + chap->shash_tfm = NULL; + chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE; + return NVME_SC_AUTH_REQUIRED; + } + if (chap->hash_id != data->hashid) { + kfree(chap->host_response); + chap->host_response = NULL; + } + chap->hash_id = data->hashid; + chap->hash_len = data->hl; + dev_dbg(ctrl->device, "qid %d: selected hash %s\n", + chap->qid, hmac_name); + + gid_name = nvme_auth_dhgroup_kpp(data->dhgid); + if (!gid_name) { + dev_warn(ctrl->device, + "qid %d: invalid DH group id %d\n", + chap->qid, data->dhgid); + chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE; + return -EPROTO; + } + + if (data->dhgid != NVME_AUTH_DHCHAP_DHGROUP_NULL) { + if (data->dhvlen == 0) { + dev_warn(ctrl->device, + "qid %d: empty DH value\n", + chap->qid); + chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE; + return -EPROTO; + } + chap->dh_tfm = crypto_alloc_kpp(gid_name, 0, 0); + if (IS_ERR(chap->dh_tfm)) { + int ret = PTR_ERR(chap->dh_tfm); + + dev_warn(ctrl->device, + "qid %d: failed to initialize %s\n", + chap->qid, gid_name); + chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE; + chap->dh_tfm = NULL; + return ret; + } + chap->dhgroup_id = data->dhgid; + } else if (data->dhvlen != 0) { + dev_warn(ctrl->device, + "qid %d: invalid DH value for NULL DH\n", + chap->qid); + chap->status = NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE; + return -EPROTO; + } + dev_dbg(ctrl->device, "qid %d: selected DH group %s\n", + chap->qid, gid_name); + +select_kpp: + chap->s1 = le32_to_cpu(data->seqnum); + memcpy(chap->c1, data->cval, chap->hash_len); + + return 0; +} + +static int nvme_auth_set_dhchap_reply_data(struct nvme_ctrl *ctrl, + struct nvme_dhchap_queue_context *chap) +{ + struct nvmf_auth_dhchap_reply_data *data = chap->buf; + size_t size = sizeof(*data); + + size += 2 * chap->hash_len; + if (ctrl->opts->dhchap_bidi) { + get_random_bytes(chap->c2, chap->hash_len); + chap->s2 = nvme_dhchap_seqnum++; + } else + memset(chap->c2, 0, chap->hash_len); + + + if (chap->buf_size < size) { + chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD; + return -EINVAL; + } + memset(chap->buf, 0, size); + data->auth_type = NVME_AUTH_DHCHAP_MESSAGES; + data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_REPLY; + data->t_id = cpu_to_le16(chap->transaction); + data->hl = chap->hash_len; + data->dhvlen = 0; + data->seqnum = cpu_to_le32(chap->s2); + memcpy(data->rval, chap->response, chap->hash_len); + if (ctrl->opts->dhchap_bidi) { + dev_dbg(ctrl->device, "%s: qid %d ctrl challenge %*ph\n", + __func__, chap->qid, + chap->hash_len, 
chap->c2); + data->cvalid = 1; + memcpy(data->rval + chap->hash_len, chap->c2, + chap->hash_len); + } + return size; +} + +static int nvme_auth_process_dhchap_success1(struct nvme_ctrl *ctrl, + struct nvme_dhchap_queue_context *chap) +{ + struct nvmf_auth_dhchap_success1_data *data = chap->buf; + size_t size = sizeof(*data); + + if (ctrl->opts->dhchap_bidi) + size += chap->hash_len; + + + if (chap->buf_size < size) { + chap->status = NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD; + return NVME_SC_INVALID_FIELD; + } + + if (data->hl != chap->hash_len) { + dev_warn(ctrl->device, + "qid %d: invalid hash length %d\n", + chap->qid, data->hl); + chap->status = NVME_AUTH_DHCHAP_FAILURE_HASH_UNUSABLE; + return NVME_SC_INVALID_FIELD; + } + + if (!data->rvalid) + return 0; + + /* Validate controller response */ + if (memcmp(chap->response, data->rval, data->hl)) { + dev_dbg(ctrl->device, "%s: qid %d ctrl response %*ph\n", + __func__, chap->qid, chap->hash_len, data->rval); + dev_dbg(ctrl->device, "%s: qid %d host response %*ph\n", + __func__, chap->qid, chap->hash_len, chap->response); + dev_warn(ctrl->device, + "qid %d: controller authentication failed\n", + chap->qid); + chap->status = NVME_AUTH_DHCHAP_FAILURE_FAILED; + return NVME_SC_AUTH_REQUIRED; + } + dev_info(ctrl->device, + "qid %d: controller authenticated\n", + chap->qid); + return 0; +} + +static int nvme_auth_set_dhchap_success2_data(struct nvme_ctrl *ctrl, + struct nvme_dhchap_queue_context *chap) +{ + struct nvmf_auth_dhchap_success2_data *data = chap->buf; + size_t size = sizeof(*data); + + memset(chap->buf, 0, size); + data->auth_type = NVME_AUTH_DHCHAP_MESSAGES; + data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2; + data->t_id = cpu_to_le16(chap->transaction); + + return size; +} + +static int nvme_auth_set_dhchap_failure2_data(struct nvme_ctrl *ctrl, + struct nvme_dhchap_queue_context *chap) +{ + struct nvmf_auth_dhchap_failure_data *data = chap->buf; + size_t size = sizeof(*data); + + memset(chap->buf, 0, size); + data->auth_type = NVME_AUTH_DHCHAP_MESSAGES; + data->auth_id = NVME_AUTH_DHCHAP_MESSAGE_FAILURE2; + data->t_id = cpu_to_le16(chap->transaction); + data->rescode = NVME_AUTH_DHCHAP_FAILURE_REASON_FAILED; + data->rescode_exp = chap->status; + + return size; +} + +static int nvme_auth_dhchap_host_response(struct nvme_ctrl *ctrl, + struct nvme_dhchap_queue_context *chap) +{ + SHASH_DESC_ON_STACK(shash, chap->shash_tfm); + u8 buf[4], *challenge = chap->c1; + int ret; + + dev_dbg(ctrl->device, "%s: qid %d host response seq %d transaction %d\n", + __func__, chap->qid, chap->s1, chap->transaction); + if (chap->dh_tfm) { + challenge = kmalloc(chap->hash_len, GFP_KERNEL); + if (!challenge) { + ret = -ENOMEM; + goto out; + } + ret = nvme_auth_augmented_challenge(chap->hash_id, + chap->sess_key, + chap->sess_key_len, + chap->c1, challenge, + chap->hash_len); + if (ret) + goto out; + } + shash->tfm = chap->shash_tfm; + ret = crypto_shash_init(shash); + if (ret) + goto out; + ret = crypto_shash_update(shash, challenge, chap->hash_len); + if (ret) + goto out; + put_unaligned_le32(chap->s1, buf); + ret = crypto_shash_update(shash, buf, 4); + if (ret) + goto out; + put_unaligned_le16(chap->transaction, buf); + ret = crypto_shash_update(shash, buf, 2); + if (ret) + goto out; + memset(buf, 0, sizeof(buf)); + ret = crypto_shash_update(shash, buf, 1); + if (ret) + goto out; + ret = crypto_shash_update(shash, "HostHost", 8); + if (ret) + goto out; + ret = crypto_shash_update(shash, ctrl->opts->host->nqn, + strlen(ctrl->opts->host->nqn)); + if 
(ret) + goto out; + ret = crypto_shash_update(shash, buf, 1); + if (ret) + goto out; + ret = crypto_shash_update(shash, ctrl->opts->subsysnqn, + strlen(ctrl->opts->subsysnqn)); + if (ret) + goto out; + ret = crypto_shash_final(shash, chap->response); +out: + if (challenge != chap->c1) + kfree(challenge); + return ret; +} + +static int nvme_auth_dhchap_ctrl_response(struct nvme_ctrl *ctrl, + struct nvme_dhchap_queue_context *chap) +{ + SHASH_DESC_ON_STACK(shash, chap->shash_tfm); + u8 buf[4], *challenge = chap->c2; + int ret; + + if (chap->dh_tfm) { + challenge = kmalloc(chap->hash_len, GFP_KERNEL); + if (!challenge) { + ret = -ENOMEM; + goto out; + } + ret = nvme_auth_augmented_challenge(chap->hash_id, + chap->sess_key, + chap->sess_key_len, + chap->c2, challenge, + chap->hash_len); + if (ret) + goto out; + } + dev_dbg(ctrl->device, "%s: qid %d host response seq %d transaction %d\n", + __func__, chap->qid, chap->s2, chap->transaction); + dev_dbg(ctrl->device, "%s: qid %d challenge %*ph\n", + __func__, chap->qid, chap->hash_len, challenge); + dev_dbg(ctrl->device, "%s: qid %d subsysnqn %s\n", + __func__, chap->qid, ctrl->opts->subsysnqn); + dev_dbg(ctrl->device, "%s: qid %d hostnqn %s\n", + __func__, chap->qid, ctrl->opts->host->nqn); + shash->tfm = chap->shash_tfm; + ret = crypto_shash_init(shash); + if (ret) + goto out; + ret = crypto_shash_update(shash, challenge, chap->hash_len); + if (ret) + goto out; + put_unaligned_le32(chap->s2, buf); + ret = crypto_shash_update(shash, buf, 4); + if (ret) + goto out; + put_unaligned_le16(chap->transaction, buf); + ret = crypto_shash_update(shash, buf, 2); + if (ret) + goto out; + memset(buf, 0, 4); + ret = crypto_shash_update(shash, buf, 1); + if (ret) + goto out; + ret = crypto_shash_update(shash, "Controller", 10); + if (ret) + goto out; + ret = crypto_shash_update(shash, ctrl->opts->subsysnqn, + strlen(ctrl->opts->subsysnqn)); + if (ret) + goto out; + ret = crypto_shash_update(shash, buf, 1); + if (ret) + goto out; + ret = crypto_shash_update(shash, ctrl->opts->host->nqn, + strlen(ctrl->opts->host->nqn)); + if (ret) + goto out; + ret = crypto_shash_final(shash, chap->response); +out: + if (challenge != chap->c2) + kfree(challenge); + return ret; +} + +int nvme_auth_generate_key(struct nvme_ctrl *ctrl) +{ + int ret; + u8 key_hash; + + if (ctrl->dhchap_key && ctrl->dhchap_key_len) + /* Key already set */ + return 0; + + if (sscanf(ctrl->opts->dhchap_secret, "DHHC-1:%hhd:%*s:", + &key_hash) != 1) + return -EINVAL; + + /* Pass in the secret without the 'DHHC-1:XX:' prefix */ + ctrl->dhchap_key = nvme_auth_extract_secret(ctrl->opts->dhchap_secret + 10, + &ctrl->dhchap_key_len); + if (IS_ERR(ctrl->dhchap_key)) { + ret = PTR_ERR(ctrl->dhchap_key); + ctrl->dhchap_key = NULL; + return ret; + } + return ret; +} +EXPORT_SYMBOL_GPL(nvme_auth_generate_key); + +static void nvme_auth_reset(struct nvme_dhchap_queue_context *chap) +{ + chap->status = 0; + chap->error = 0; + chap->s1 = 0; + chap->s2 = 0; + chap->transaction = 0; + memset(chap->c1, 0, sizeof(chap->c1)); + memset(chap->c2, 0, sizeof(chap->c2)); +} + +static void __nvme_auth_free(struct nvme_dhchap_queue_context *chap) +{ + if (chap->shash_tfm) + crypto_free_shash(chap->shash_tfm); + kfree_sensitive(chap->host_response); + kfree(chap->buf); + kfree(chap); +} + +static void __nvme_auth_work(struct work_struct *work) +{ + struct nvme_dhchap_queue_context *chap = + container_of(work, struct nvme_dhchap_queue_context, auth_work); + struct nvme_ctrl *ctrl = chap->ctrl; + size_t tl; + int ret = 0; + + 
chap->transaction = ctrl->transaction++; + + /* DH-HMAC-CHAP Step 1: send negotiate */ + dev_dbg(ctrl->device, "%s: qid %d send negotiate\n", + __func__, chap->qid); + ret = nvme_auth_set_dhchap_negotiate_data(ctrl, chap); + if (ret < 0) { + chap->error = ret; + return; + } + tl = ret; + ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl); + if (ret) { + chap->error = ret; + return; + } + + /* DH-HMAC-CHAP Step 2: receive challenge */ + dev_dbg(ctrl->device, "%s: qid %d receive challenge\n", + __func__, chap->qid); + + memset(chap->buf, 0, chap->buf_size); + ret = nvme_auth_receive(ctrl, chap->qid, chap->buf, chap->buf_size); + if (ret) { + dev_warn(ctrl->device, + "qid %d failed to receive challenge, %s %d\n", + chap->qid, ret < 0 ? "error" : "nvme status", ret); + chap->error = ret; + return; + } + ret = nvme_auth_receive_validate(ctrl, chap->qid, chap->buf, chap->transaction, + NVME_AUTH_DHCHAP_MESSAGE_CHALLENGE); + if (ret) { + chap->status = ret; + chap->error = NVME_SC_AUTH_REQUIRED; + return; + } + + ret = nvme_auth_process_dhchap_challenge(ctrl, chap); + if (ret) { + /* Invalid challenge parameters */ + goto fail2; + } + + if (chap->ctrl_key_len) { + dev_dbg(ctrl->device, + "%s: qid %d DH exponential\n", + __func__, chap->qid); + ret = nvme_auth_dhchap_exponential(ctrl, chap); + if (ret) + goto fail2; + } + + dev_dbg(ctrl->device, "%s: qid %d host response\n", + __func__, chap->qid); + ret = nvme_auth_dhchap_host_response(ctrl, chap); + if (ret) + goto fail2; + + /* DH-HMAC-CHAP Step 3: send reply */ + dev_dbg(ctrl->device, "%s: qid %d send reply\n", + __func__, chap->qid); + ret = nvme_auth_set_dhchap_reply_data(ctrl, chap); + if (ret < 0) + goto fail2; + + tl = ret; + ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl); + if (ret) + goto fail2; + + /* DH-HMAC-CHAP Step 4: receive success1 */ + dev_dbg(ctrl->device, "%s: qid %d receive success1\n", + __func__, chap->qid); + + memset(chap->buf, 0, chap->buf_size); + ret = nvme_auth_receive(ctrl, chap->qid, chap->buf, chap->buf_size); + if (ret) { + dev_warn(ctrl->device, + "qid %d failed to receive success1, %s %d\n", + chap->qid, ret < 0 ? 
"error" : "nvme status", ret); + chap->error = ret; + return; + } + ret = nvme_auth_receive_validate(ctrl, chap->qid, + chap->buf, chap->transaction, + NVME_AUTH_DHCHAP_MESSAGE_SUCCESS1); + if (ret) { + chap->status = ret; + chap->error = NVME_SC_AUTH_REQUIRED; + return; + } + + if (ctrl->opts->dhchap_bidi) { + dev_dbg(ctrl->device, + "%s: qid %d controller response\n", + __func__, chap->qid); + ret = nvme_auth_dhchap_ctrl_response(ctrl, chap); + if (ret) + goto fail2; + } + + ret = nvme_auth_process_dhchap_success1(ctrl, chap); + if (ret < 0) { + /* Controller authentication failed */ + goto fail2; + } + + /* DH-HMAC-CHAP Step 5: send success2 */ + dev_dbg(ctrl->device, "%s: qid %d send success2\n", + __func__, chap->qid); + tl = nvme_auth_set_dhchap_success2_data(ctrl, chap); + ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl); + if (!ret) { + chap->error = 0; + return; + } + +fail2: + dev_dbg(ctrl->device, "%s: qid %d send failure2, status %x\n", + __func__, chap->qid, chap->status); + tl = nvme_auth_set_dhchap_failure2_data(ctrl, chap); + ret = nvme_auth_send(ctrl, chap->qid, chap->buf, tl); + if (!ret) + ret = -EPROTO; + chap->error = ret; +} + +int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid) +{ + struct nvme_dhchap_queue_context *chap; + + if (!ctrl->dhchap_key || !ctrl->dhchap_key_len) { + dev_warn(ctrl->device, "qid %d: no key\n", qid); + return -ENOKEY; + } + + mutex_lock(&ctrl->dhchap_auth_mutex); + /* Check if the context is already queued */ + list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) { + if (chap->qid == qid) { + mutex_unlock(&ctrl->dhchap_auth_mutex); + queue_work(nvme_wq, &chap->auth_work); + return 0; + } + } + chap = kzalloc(sizeof(*chap), GFP_KERNEL); + if (!chap) { + mutex_unlock(&ctrl->dhchap_auth_mutex); + return -ENOMEM; + } + chap->qid = qid; + chap->ctrl = ctrl; + + /* + * Allocate a large enough buffer for the entire negotiation: + * 4k should be enough to ffdhe8192. 
+ */ + chap->buf_size = 4096; + chap->buf = kzalloc(chap->buf_size, GFP_KERNEL); + if (!chap->buf) { + mutex_unlock(&ctrl->dhchap_auth_mutex); + kfree(chap); + return -ENOMEM; + } + + INIT_WORK(&chap->auth_work, __nvme_auth_work); + list_add(&chap->entry, &ctrl->dhchap_auth_list); + mutex_unlock(&ctrl->dhchap_auth_mutex); + queue_work(nvme_wq, &chap->auth_work); + return 0; +} +EXPORT_SYMBOL_GPL(nvme_auth_negotiate); + +int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid) +{ + struct nvme_dhchap_queue_context *chap; + int ret; + + mutex_lock(&ctrl->dhchap_auth_mutex); + list_for_each_entry(chap, &ctrl->dhchap_auth_list, entry) { + if (chap->qid != qid) + continue; + mutex_unlock(&ctrl->dhchap_auth_mutex); + flush_work(&chap->auth_work); + ret = chap->error; + nvme_auth_reset(chap); + return ret; + } + mutex_unlock(&ctrl->dhchap_auth_mutex); + return -ENXIO; +} +EXPORT_SYMBOL_GPL(nvme_auth_wait); + +/* Assumes that the controller is in state RESETTING */ +static void nvme_dhchap_auth_work(struct work_struct *work) +{ + struct nvme_ctrl *ctrl = + container_of(work, struct nvme_ctrl, dhchap_auth_work); + int ret, q; + + nvme_stop_queues(ctrl); + /* Authenticate admin queue first */ + ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY); + if (ret) { + dev_warn(ctrl->device, + "qid 0: error %d setting up authentication\n", ret); + goto out; + } + ret = nvme_auth_wait(ctrl, NVME_QID_ANY); + if (ret) { + dev_warn(ctrl->device, + "qid 0: authentication failed\n"); + goto out; + } + dev_info(ctrl->device, "qid 0: authenticated\n"); + + for (q = 1; q < ctrl->queue_count; q++) { + ret = nvme_auth_negotiate(ctrl, q); + if (ret) { + dev_warn(ctrl->device, + "qid %d: error %d setting up authentication\n", + q, ret); + goto out; + } + } +out: + /* + * Failure is a soft-state; credentials remain valid until + * the controller terminates the connection. 
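+	 * The controller is therefore moved back to LIVE and I/O is
+	 * restarted regardless of the authentication outcome.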
+ */ + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) + nvme_start_queues(ctrl); +} + +void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl) +{ + INIT_LIST_HEAD(&ctrl->dhchap_auth_list); + INIT_WORK(&ctrl->dhchap_auth_work, nvme_dhchap_auth_work); + mutex_init(&ctrl->dhchap_auth_mutex); + nvme_auth_generate_key(ctrl); +} +EXPORT_SYMBOL_GPL(nvme_auth_init_ctrl); + +void nvme_auth_stop(struct nvme_ctrl *ctrl) +{ + struct nvme_dhchap_queue_context *chap = NULL, *tmp; + + cancel_work_sync(&ctrl->dhchap_auth_work); + mutex_lock(&ctrl->dhchap_auth_mutex); + list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry) + cancel_work_sync(&chap->auth_work); + mutex_unlock(&ctrl->dhchap_auth_mutex); +} +EXPORT_SYMBOL_GPL(nvme_auth_stop); + +void nvme_auth_free(struct nvme_ctrl *ctrl) +{ + struct nvme_dhchap_queue_context *chap = NULL, *tmp; + + mutex_lock(&ctrl->dhchap_auth_mutex); + list_for_each_entry_safe(chap, tmp, &ctrl->dhchap_auth_list, entry) { + list_del_init(&chap->entry); + flush_work(&chap->auth_work); + __nvme_auth_free(chap); + } + mutex_unlock(&ctrl->dhchap_auth_mutex); + kfree(ctrl->dhchap_key); + ctrl->dhchap_key = NULL; + ctrl->dhchap_key_len = 0; +} +EXPORT_SYMBOL_GPL(nvme_auth_free); diff --git a/drivers/nvme/host/auth.h b/drivers/nvme/host/auth.h new file mode 100644 index 000000000000..cf1255f9db6d --- /dev/null +++ b/drivers/nvme/host/auth.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (c) 2021 Hannes Reinecke, SUSE Software Solutions + */ + +#ifndef _NVME_AUTH_H +#define _NVME_AUTH_H + +#include + +const char *nvme_auth_dhgroup_name(int dhgroup_id); +int nvme_auth_dhgroup_pubkey_size(int dhgroup_id); +int nvme_auth_dhgroup_privkey_size(int dhgroup_id); +const char *nvme_auth_dhgroup_kpp(int dhgroup_id); +int nvme_auth_dhgroup_id(const char *dhgroup_name); + +const char *nvme_auth_hmac_name(int hmac_id); +const char *nvme_auth_digest_name(int hmac_id); +int nvme_auth_hmac_id(const char *hmac_name); + +unsigned char *nvme_auth_extract_secret(unsigned char *dhchap_secret, + size_t *dhchap_key_len); +u8 *nvme_auth_transform_key(u8 *key, size_t key_len, u8 key_hash, char *nqn); + +#endif /* _NVME_AUTH_H */ diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 11779be42186..bbec38d1add5 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -24,6 +24,7 @@ #include "nvme.h" #include "fabrics.h" +#include "auth.h" #define CREATE_TRACE_POINTS #include "trace.h" @@ -320,6 +321,7 @@ enum nvme_disposition { COMPLETE, RETRY, FAILOVER, + AUTHENTICATE, }; static inline enum nvme_disposition nvme_decide_disposition(struct request *req) @@ -327,6 +329,9 @@ static inline enum nvme_disposition nvme_decide_disposition(struct request *req) if (likely(nvme_req(req)->status == 0)) return COMPLETE; + if ((nvme_req(req)->status & 0x7ff) == NVME_SC_AUTH_REQUIRED) + return AUTHENTICATE; + if (blk_noretry_request(req) || (nvme_req(req)->status & NVME_SC_DNR) || nvme_req(req)->retries >= nvme_max_retries) @@ -359,11 +364,13 @@ static inline void nvme_end_req(struct request *req) void nvme_complete_rq(struct request *req) { + struct nvme_ctrl *ctrl = nvme_req(req)->ctrl; + trace_nvme_complete_rq(req); nvme_cleanup_cmd(req); - if (nvme_req(req)->ctrl->kas) - nvme_req(req)->ctrl->comp_seen = true; + if (ctrl->kas) + ctrl->comp_seen = true; switch (nvme_decide_disposition(req)) { case COMPLETE: @@ -375,6 +382,15 @@ void nvme_complete_rq(struct request *req) case FAILOVER: nvme_failover_req(req); return; + case AUTHENTICATE: +#ifdef 
CONFIG_NVME_AUTH + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING)) + queue_work(nvme_wq, &ctrl->dhchap_auth_work); + nvme_retry_req(req); +#else + nvme_end_req(req); +#endif + return; } } EXPORT_SYMBOL_GPL(nvme_complete_rq); @@ -708,7 +724,9 @@ bool __nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq, switch (ctrl->state) { case NVME_CTRL_CONNECTING: if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) && - req->cmd->fabrics.fctype == nvme_fabrics_type_connect) + (req->cmd->fabrics.fctype == nvme_fabrics_type_connect || + req->cmd->fabrics.fctype == nvme_fabrics_type_auth_send || + req->cmd->fabrics.fctype == nvme_fabrics_type_auth_receive)) return true; break; default: @@ -3426,6 +3444,51 @@ static ssize_t nvme_ctrl_fast_io_fail_tmo_store(struct device *dev, static DEVICE_ATTR(fast_io_fail_tmo, S_IRUGO | S_IWUSR, nvme_ctrl_fast_io_fail_tmo_show, nvme_ctrl_fast_io_fail_tmo_store); +#ifdef CONFIG_NVME_AUTH +static ssize_t nvme_ctrl_dhchap_secret_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct nvme_ctrl *ctrl = dev_get_drvdata(dev); + struct nvmf_ctrl_options *opts = ctrl->opts; + + if (!opts->dhchap_secret) + return sysfs_emit(buf, "none\n"); + return sysfs_emit(buf, "%s\n", opts->dhchap_secret); +} + +static ssize_t nvme_ctrl_dhchap_secret_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + struct nvme_ctrl *ctrl = dev_get_drvdata(dev); + struct nvmf_ctrl_options *opts = ctrl->opts; + char *dhchap_secret; + + if (!ctrl->opts->dhchap_secret) + return -EINVAL; + if (count < 7) + return -EINVAL; + if (memcmp(buf, "DHHC-1:", 7)) + return -EINVAL; + + dhchap_secret = kzalloc(count + 1, GFP_KERNEL); + if (!dhchap_secret) + return -ENOMEM; + memcpy(dhchap_secret, buf, count); + if (strcmp(dhchap_secret, opts->dhchap_secret)) { + kfree(opts->dhchap_secret); + opts->dhchap_secret = dhchap_secret; + /* Key has changed; reset authentication data */ + nvme_auth_free(ctrl); + nvme_auth_generate_key(ctrl); + } + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING)) + queue_work(nvme_wq, &ctrl->dhchap_auth_work); + return count; +} +DEVICE_ATTR(dhchap_secret, S_IRUGO | S_IWUSR, + nvme_ctrl_dhchap_secret_show, nvme_ctrl_dhchap_secret_store); +#endif + static struct attribute *nvme_dev_attrs[] = { &dev_attr_reset_controller.attr, &dev_attr_rescan_controller.attr, @@ -3447,6 +3510,9 @@ static struct attribute *nvme_dev_attrs[] = { &dev_attr_reconnect_delay.attr, &dev_attr_fast_io_fail_tmo.attr, &dev_attr_kato.attr, +#ifdef CONFIG_NVME_AUTH + &dev_attr_dhchap_secret.attr, +#endif NULL }; @@ -3470,6 +3536,10 @@ static umode_t nvme_dev_attrs_are_visible(struct kobject *kobj, return 0; if (a == &dev_attr_fast_io_fail_tmo.attr && !ctrl->opts) return 0; +#ifdef CONFIG_NVME_AUTH + if (a == &dev_attr_dhchap_secret.attr && !ctrl->opts) + return 0; +#endif return a->mode; } @@ -4277,6 +4347,7 @@ EXPORT_SYMBOL_GPL(nvme_complete_async_event); void nvme_stop_ctrl(struct nvme_ctrl *ctrl) { nvme_mpath_stop(ctrl); + nvme_auth_stop(ctrl); nvme_stop_keep_alive(ctrl); nvme_stop_failfast_work(ctrl); flush_work(&ctrl->async_event_work); @@ -4331,6 +4402,7 @@ static void nvme_free_ctrl(struct device *dev) nvme_free_cels(ctrl); nvme_mpath_uninit(ctrl); + nvme_auth_free(ctrl); __free_page(ctrl->discard_page); if (subsys) { @@ -4421,6 +4493,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev, nvme_fault_inject_init(&ctrl->fault_inject, dev_name(ctrl->device)); nvme_mpath_init_ctrl(ctrl); + nvme_auth_init_ctrl(ctrl); 
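+	/* sets up the DH-HMAC-CHAP work item, context list, mutex and host key */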
return 0; out_free_name: diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c index 1e9900f72e66..4e51c729765c 100644 --- a/drivers/nvme/host/fabrics.c +++ b/drivers/nvme/host/fabrics.c @@ -370,6 +370,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl) union nvme_result res; struct nvmf_connect_data *data; int ret; + u32 result; cmd.connect.opcode = nvme_fabrics_command; cmd.connect.fctype = nvme_fabrics_type_connect; @@ -402,8 +403,25 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl) goto out_free_data; } - ctrl->cntlid = le16_to_cpu(res.u16); - + result = le32_to_cpu(res.u32); + ctrl->cntlid = result & 0xFFFF; + if ((result >> 16) & 2) { + /* Authentication required */ + ret = nvme_auth_negotiate(ctrl, NVME_QID_ANY); + if (ret) { + dev_warn(ctrl->device, + "qid 0: failed to setup authentication\n"); + ret = NVME_SC_AUTH_REQUIRED; + goto out_free_data; + } + ret = nvme_auth_wait(ctrl, NVME_QID_ANY); + if (ret) + dev_warn(ctrl->device, + "qid 0: authentication failed\n"); + else + dev_info(ctrl->device, + "qid 0: authenticated\n"); + } out_free_data: kfree(data); return ret; @@ -436,6 +454,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid) struct nvmf_connect_data *data; union nvme_result res; int ret; + u32 result; cmd.connect.opcode = nvme_fabrics_command; cmd.connect.fctype = nvme_fabrics_type_connect; @@ -461,6 +480,24 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid) nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32), &cmd, data); } + result = le32_to_cpu(res.u32); + if ((result >> 16) & 2) { + /* Authentication required */ + ret = nvme_auth_negotiate(ctrl, qid); + if (ret) { + dev_warn(ctrl->device, + "qid %d: failed to setup authentication\n", qid); + ret = NVME_SC_AUTH_REQUIRED; + } else { + ret = nvme_auth_wait(ctrl, qid); + if (ret) + dev_warn(ctrl->device, + "qid %u: authentication failed\n", qid); + else + dev_info(ctrl->device, + "qid %u: authenticated\n", qid); + } + } kfree(data); return ret; } @@ -552,6 +589,8 @@ static const match_table_t opt_tokens = { { NVMF_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d" }, { NVMF_OPT_TOS, "tos=%d" }, { NVMF_OPT_FAIL_FAST_TMO, "fast_io_fail_tmo=%d" }, + { NVMF_OPT_DHCHAP_SECRET, "dhchap_secret=%s" }, + { NVMF_OPT_DHCHAP_BIDI, "dhchap_bidi" }, { NVMF_OPT_ERR, NULL } }; @@ -828,6 +867,23 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts, } opts->tos = token; break; + case NVMF_OPT_DHCHAP_SECRET: + p = match_strdup(args); + if (!p) { + ret = -ENOMEM; + goto out; + } + if (strlen(p) < 11 || strncmp(p, "DHHC-1:", 7)) { + pr_err("Invalid DH-CHAP secret %s\n", p); + ret = -EINVAL; + goto out; + } + kfree(opts->dhchap_secret); + opts->dhchap_secret = p; + break; + case NVMF_OPT_DHCHAP_BIDI: + opts->dhchap_bidi = true; + break; default: pr_warn("unknown parameter or missing value '%s' in ctrl creation request\n", p); @@ -946,6 +1002,7 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts) kfree(opts->subsysnqn); kfree(opts->host_traddr); kfree(opts->host_iface); + kfree(opts->dhchap_secret); kfree(opts); } EXPORT_SYMBOL_GPL(nvmf_free_options); @@ -955,7 +1012,10 @@ EXPORT_SYMBOL_GPL(nvmf_free_options); NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \ NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\ NVMF_OPT_DISABLE_SQFLOW |\ - NVMF_OPT_FAIL_FAST_TMO) + NVMF_OPT_CTRL_LOSS_TMO |\ + NVMF_OPT_FAIL_FAST_TMO |\ + NVMF_OPT_DHCHAP_SECRET |\ + NVMF_OPT_DHCHAP_BIDI) static struct nvme_ctrl * nvmf_create_ctrl(struct device *dev, const char *buf) @@ -1172,7 +1232,14 @@ static void __exit nvmf_exit(void) 
BUILD_BUG_ON(sizeof(struct nvmf_connect_command) != 64); BUILD_BUG_ON(sizeof(struct nvmf_property_get_command) != 64); BUILD_BUG_ON(sizeof(struct nvmf_property_set_command) != 64); + BUILD_BUG_ON(sizeof(struct nvmf_auth_send_command) != 64); + BUILD_BUG_ON(sizeof(struct nvmf_auth_receive_command) != 64); BUILD_BUG_ON(sizeof(struct nvmf_connect_data) != 1024); + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_negotiate_data) != 8); + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_challenge_data) != 16); + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_reply_data) != 16); + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_success1_data) != 16); + BUILD_BUG_ON(sizeof(struct nvmf_auth_dhchap_success2_data) != 16); } MODULE_LICENSE("GPL v2"); diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h index a146cb903869..27df1aac5736 100644 --- a/drivers/nvme/host/fabrics.h +++ b/drivers/nvme/host/fabrics.h @@ -67,6 +67,8 @@ enum { NVMF_OPT_TOS = 1 << 19, NVMF_OPT_FAIL_FAST_TMO = 1 << 20, NVMF_OPT_HOST_IFACE = 1 << 21, + NVMF_OPT_DHCHAP_SECRET = 1 << 22, + NVMF_OPT_DHCHAP_BIDI = 1 << 23, }; /** @@ -96,6 +98,8 @@ enum { * @max_reconnects: maximum number of allowed reconnect attempts before removing * the controller, (-1) means reconnect forever, zero means remove * immediately; + * @dhchap_secret: DH-HMAC-CHAP secret + * @dhchap_bidi: enable DH-HMAC-CHAP bi-directional authentication * @disable_sqflow: disable controller sq flow control * @hdr_digest: generate/verify header digest (TCP) * @data_digest: generate/verify data digest (TCP) @@ -120,6 +124,8 @@ struct nvmf_ctrl_options { unsigned int kato; struct nvmf_host *host; int max_reconnects; + char *dhchap_secret; + bool dhchap_bidi; bool disable_sqflow; bool hdr_digest; bool data_digest; diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 18ef8dd03a90..a256600b3572 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -328,6 +328,15 @@ struct nvme_ctrl { struct work_struct ana_work; #endif +#ifdef CONFIG_NVME_AUTH + struct work_struct dhchap_auth_work; + struct list_head dhchap_auth_list; + struct mutex dhchap_auth_mutex; + unsigned char *dhchap_key; + size_t dhchap_key_len; + u16 transaction; +#endif + /* Power saving configuration */ u64 ps_max_latency_us; bool apst_enabled; @@ -874,6 +883,27 @@ static inline bool nvme_ctrl_sgl_supported(struct nvme_ctrl *ctrl) return ctrl->sgls & ((1 << 0) | (1 << 1)); } +#ifdef CONFIG_NVME_AUTH +void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl); +void nvme_auth_stop(struct nvme_ctrl *ctrl); +int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid); +int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid); +void nvme_auth_free(struct nvme_ctrl *ctrl); +int nvme_auth_generate_key(struct nvme_ctrl *ctrl); +#else +static inline void nvme_auth_init_ctrl(struct nvme_ctrl *ctrl) {}; +static inline void nvme_auth_stop(struct nvme_ctrl *ctrl) {}; +static inline int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid) +{ + return -EPROTONOSUPPORT; +} +static inline int nvme_auth_wait(struct nvme_ctrl *ctrl, int qid) +{ + return NVME_SC_AUTH_REQUIRED; +} +static inline void nvme_auth_free(struct nvme_ctrl *ctrl) {}; +#endif + u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode); int nvme_execute_passthru_rq(struct request *rq); diff --git a/drivers/nvme/host/trace.c b/drivers/nvme/host/trace.c index 6543015b6121..66f75d8ea925 100644 --- a/drivers/nvme/host/trace.c +++ b/drivers/nvme/host/trace.c @@ -271,6 +271,34 @@ static const char 
*nvme_trace_fabrics_property_get(struct trace_seq *p, u8 *spc) return ret; } +static const char *nvme_trace_fabrics_auth_send(struct trace_seq *p, u8 *spc) +{ + const char *ret = trace_seq_buffer_ptr(p); + u8 spsp0 = spc[1]; + u8 spsp1 = spc[2]; + u8 secp = spc[3]; + u32 tl = get_unaligned_le32(spc + 4); + + trace_seq_printf(p, "spsp0=%02x, spsp1=%02x, secp=%02x, tl=%u", + spsp0, spsp1, secp, tl); + trace_seq_putc(p, 0); + return ret; +} + +static const char *nvme_trace_fabrics_auth_receive(struct trace_seq *p, u8 *spc) +{ + const char *ret = trace_seq_buffer_ptr(p); + u8 spsp0 = spc[1]; + u8 spsp1 = spc[2]; + u8 secp = spc[3]; + u32 al = get_unaligned_le32(spc + 4); + + trace_seq_printf(p, "spsp0=%02x, spsp1=%02x, secp=%02x, al=%u", + spsp0, spsp1, secp, al); + trace_seq_putc(p, 0); + return ret; +} + static const char *nvme_trace_fabrics_common(struct trace_seq *p, u8 *spc) { const char *ret = trace_seq_buffer_ptr(p); @@ -290,6 +318,10 @@ const char *nvme_trace_parse_fabrics_cmd(struct trace_seq *p, return nvme_trace_fabrics_connect(p, spc); case nvme_fabrics_type_property_get: return nvme_trace_fabrics_property_get(p, spc); + case nvme_fabrics_type_auth_send: + return nvme_trace_fabrics_auth_send(p, spc); + case nvme_fabrics_type_auth_receive: + return nvme_trace_fabrics_auth_receive(p, spc); default: return nvme_trace_fabrics_common(p, spc); } From patchwork Tue Aug 10 12:42:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hannes Reinecke X-Patchwork-Id: 494462 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 947CEC4338F for ; Tue, 10 Aug 2021 12:43:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 78589604DC for ; Tue, 10 Aug 2021 12:43:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240723AbhHJMnX (ORCPT ); Tue, 10 Aug 2021 08:43:23 -0400 Received: from smtp-out2.suse.de ([195.135.220.29]:43704 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240733AbhHJMnQ (ORCPT ); Tue, 10 Aug 2021 08:43:16 -0400 Received: from relay2.suse.de (relay2.suse.de [149.44.160.134]) by smtp-out2.suse.de (Postfix) with ESMTP id 356B1200B2; Tue, 10 Aug 2021 12:42:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1628599372; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=jxVPhhmIEkG2QvaM+OJw+sU5fagj/1FaG8k8qmB+yhk=; b=PKDrajiHNK7ptsfVcnHMnWfbAmM/OdNxCsZ9Oghz60kXy42PLLdJXV5MFDzqeFQFvHcUxP fAorrLUXFfT0QPcpdHdinxQ3DcFBaoiEwlV6Fkaru/FF0ArePHCtvuygZO2z/RQ7eMZzjq GmaoIC5wllJEW7C/NVg0Xt192z0fevM= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1628599372; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=jxVPhhmIEkG2QvaM+OJw+sU5fagj/1FaG8k8qmB+yhk=; b=KWDew8DXXBDu1RO3QrVpLiy1DpqrQbOPWnFA4a4Yy+GAyhnKuqTzygXxGlKjXz3tuqsLIx LUDChVLgdtBIDhCQ== Received: from adalid.arch.suse.de (adalid.arch.suse.de [10.161.8.13]) by relay2.suse.de (Postfix) with ESMTP id 2045DA3B94; Tue, 10 Aug 2021 12:42:52 +0000 (UTC) Received: by adalid.arch.suse.de (Postfix, from userid 16045) id 885D2518C556; Tue, 10 Aug 2021 14:42:50 +0200 (CEST) From: Hannes Reinecke To: Christoph Hellwig Cc: Sagi Grimberg , Keith Busch , Herbert Xu , "David S . Miller" , linux-nvme@lists.infradead.org, linux-crypto@vger.kernel.org, Hannes Reinecke Subject: [PATCH 11/13] nvmet-auth: Diffie-Hellman key exchange support Date: Tue, 10 Aug 2021 14:42:28 +0200 Message-Id: <20210810124230.12161-12-hare@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210810124230.12161-1-hare@suse.de> References: <20210810124230.12161-1-hare@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org Implement Diffie-Hellman key exchange using FFDHE groups for NVMe In-Band Authentication. This patch adds a new host configfs attribute 'dhchap_dhgroup' to select the FFDHE group to use. Signed-off-by: Hannes Reinecke --- drivers/nvme/target/auth.c | 148 ++++++++++++++++++++++++- drivers/nvme/target/configfs.c | 31 ++++++ drivers/nvme/target/fabrics-cmd-auth.c | 30 ++++- drivers/nvme/target/nvmet.h | 6 + 4 files changed, 208 insertions(+), 7 deletions(-) diff --git a/drivers/nvme/target/auth.c b/drivers/nvme/target/auth.c index 5b5f3cd4f914..fe44593a37f8 100644 --- a/drivers/nvme/target/auth.c +++ b/drivers/nvme/target/auth.c @@ -53,6 +53,71 @@ int nvmet_auth_set_host_key(struct nvmet_host *host, const char *secret) return 0; } +int nvmet_setup_dhgroup(struct nvmet_ctrl *ctrl, int dhgroup_id) +{ + struct nvmet_host_link *p; + struct nvmet_host *host = NULL; + const char *dhgroup_kpp; + int ret = -ENOTSUPP; + + if (dhgroup_id == NVME_AUTH_DHCHAP_DHGROUP_NULL) + return 0; + + down_read(&nvmet_config_sem); + if (ctrl->subsys->type == NVME_NQN_DISC) + goto out_unlock; + + list_for_each_entry(p, &ctrl->subsys->hosts, entry) { + if (strcmp(nvmet_host_name(p->host), ctrl->hostnqn)) + continue; + host = p->host; + break; + } + if (!host) { + pr_debug("host %s not found\n", ctrl->hostnqn); + ret = -ENXIO; + goto out_unlock; + } + + if (host->dhchap_dhgroup_id != dhgroup_id) { + ret = -EINVAL; + goto out_unlock; + } + if (ctrl->dh_tfm) { + if (ctrl->dh_gid == dhgroup_id) { + pr_debug("reuse existing DH group %d\n", dhgroup_id); + ret = 0; + } else { + pr_debug("DH group mismatch (selected %d, requested %d)\n", + ctrl->dh_gid, dhgroup_id); + ret = -EINVAL; + } + goto out_unlock; + } + + dhgroup_kpp = nvme_auth_dhgroup_kpp(dhgroup_id); + if (!dhgroup_kpp) { + ret = -EINVAL; + goto out_unlock; + } + ctrl->dh_tfm = crypto_alloc_kpp(dhgroup_kpp, 0, 0); + if (IS_ERR(ctrl->dh_tfm)) { + pr_debug("failed to setup DH group %d, err %ld\n", + dhgroup_id, PTR_ERR(ctrl->dh_tfm)); + ret = PTR_ERR(ctrl->dh_tfm); + ctrl->dh_tfm = NULL; + } else { + ctrl->dh_gid = dhgroup_id; + ctrl->dh_keysize = nvme_auth_dhgroup_pubkey_size(dhgroup_id); + ret = 0; + } + +out_unlock: + up_read(&nvmet_config_sem); + + return ret; +} + int nvmet_setup_auth(struct nvmet_ctrl *ctrl) { int ret = 0; @@ -147,6 +212,11 @@ void nvmet_destroy_auth(struct nvmet_ctrl *ctrl) ctrl->shash_tfm = NULL; ctrl->shash_id = 0; } + if (ctrl->dh_tfm) { + 
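+		/* release the DH transform and forget the negotiated group */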
crypto_free_kpp(ctrl->dh_tfm); + ctrl->dh_tfm = NULL; + ctrl->dh_gid = 0; + } if (ctrl->dhchap_key) { kfree(ctrl->dhchap_key); ctrl->dhchap_key = NULL; @@ -182,8 +252,18 @@ int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response, return ret; } if (ctrl->dh_gid != NVME_AUTH_DHCHAP_DHGROUP_NULL) { - ret = -ENOTSUPP; - goto out; + challenge = kmalloc(shash_len, GFP_KERNEL); + if (!challenge) { + ret = -ENOMEM; + goto out; + } + ret = nvme_auth_augmented_challenge(ctrl->shash_id, + req->sq->dhchap_skey, + req->sq->dhchap_skey_len, + req->sq->dhchap_c1, + challenge, shash_len); + if (ret) + goto out; } shash->tfm = ctrl->shash_tfm; @@ -256,8 +336,18 @@ int nvmet_auth_ctrl_hash(struct nvmet_req *req, u8 *response, return ret; } if (ctrl->dh_gid != NVME_AUTH_DHCHAP_DHGROUP_NULL) { - ret = -ENOTSUPP; - goto out; + challenge = kmalloc(shash_len, GFP_KERNEL); + if (!challenge) { + ret = -ENOMEM; + goto out; + } + ret = nvme_auth_augmented_challenge(ctrl->shash_id, + req->sq->dhchap_skey, + req->sq->dhchap_skey_len, + req->sq->dhchap_c2, + challenge, shash_len); + if (ret) + goto out; } shash->tfm = ctrl->shash_tfm; @@ -299,3 +389,53 @@ int nvmet_auth_ctrl_hash(struct nvmet_req *req, u8 *response, kfree_sensitive(ctrl_response); return 0; } + +int nvmet_auth_ctrl_exponential(struct nvmet_req *req, + u8 *buf, int buf_size) +{ + struct nvmet_ctrl *ctrl = req->sq->ctrl; + int ret; + + if (!ctrl->dh_tfm) { + pr_warn("No DH algorithm!\n"); + return -ENOKEY; + } + ret = nvme_auth_gen_pubkey(ctrl->dh_tfm, buf, buf_size); + if (ret == -EOVERFLOW) { + pr_debug("public key buffer too small, need %d is %d\n", + crypto_kpp_maxsize(ctrl->dh_tfm), buf_size); + ret = -ENOKEY; + } else if (ret) { + pr_debug("failed to generate public key, err %d\n", ret); + ret = -ENOKEY; + } else + pr_debug("%s: ctrl public key %*ph\n", __func__, + (int)buf_size, buf); + + return ret; +} + +int nvmet_auth_ctrl_sesskey(struct nvmet_req *req, + u8 *pkey, int pkey_size) +{ + struct nvmet_ctrl *ctrl = req->sq->ctrl; + int ret; + + req->sq->dhchap_skey_len = + nvme_auth_dhgroup_privkey_size(ctrl->dh_gid); + req->sq->dhchap_skey = kzalloc(req->sq->dhchap_skey_len, GFP_KERNEL); + if (!req->sq->dhchap_skey) + return -ENOMEM; + ret = nvme_auth_gen_shared_secret(ctrl->dh_tfm, + pkey, pkey_size, + req->sq->dhchap_skey, + req->sq->dhchap_skey_len); + if (ret) + pr_debug("failed to compute shared secred, err %d\n", ret); + else + pr_debug("%s: shared secret %*ph\n", __func__, + (int)req->sq->dhchap_skey_len, + req->sq->dhchap_skey); + + return ret; +} diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c index e0760911a761..e9b8884a83b0 100644 --- a/drivers/nvme/target/configfs.c +++ b/drivers/nvme/target/configfs.c @@ -1712,9 +1712,40 @@ static ssize_t nvmet_host_dhchap_hash_store(struct config_item *item, CONFIGFS_ATTR(nvmet_host_, dhchap_hash); +static ssize_t nvmet_host_dhchap_dhgroup_show(struct config_item *item, + char *page) +{ + struct nvmet_host *host = to_host(item); + const char *dhgroup = nvme_auth_dhgroup_name(host->dhchap_dhgroup_id); + + return sprintf(page, "%s\n", dhgroup ? 
dhgroup : "none"); +} + +static ssize_t nvmet_host_dhchap_dhgroup_store(struct config_item *item, + const char *page, size_t count) +{ + struct nvmet_host *host = to_host(item); + int dhgroup_id; + + dhgroup_id = nvme_auth_dhgroup_id(page); + if (dhgroup_id < 0) + return -EINVAL; + if (dhgroup_id != NVME_AUTH_DHCHAP_DHGROUP_NULL) { + const char *kpp = nvme_auth_dhgroup_kpp(dhgroup_id); + + if (!crypto_has_kpp(kpp, 0, 0)) + return -EINVAL; + } + host->dhchap_dhgroup_id = dhgroup_id; + return count; +} + +CONFIGFS_ATTR(nvmet_host_, dhchap_dhgroup); + static struct configfs_attribute *nvmet_host_attrs[] = { &nvmet_host_attr_dhchap_key, &nvmet_host_attr_dhchap_hash, + &nvmet_host_attr_dhchap_dhgroup, NULL, }; #endif /* CONFIG_NVME_TARGET_AUTH */ diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c index ab9dfc06bac0..2f1b95098917 100644 --- a/drivers/nvme/target/fabrics-cmd-auth.c +++ b/drivers/nvme/target/fabrics-cmd-auth.c @@ -64,13 +64,24 @@ static u16 nvmet_auth_negotiate(struct nvmet_req *req, void *d) null_dh = dhgid; continue; } + if (ctrl->dh_tfm && ctrl->dh_gid == dhgid) { + pr_debug("%s: ctrl %d qid %d: reusing existing DH group %d\n", + __func__, ctrl->cntlid, req->sq->qid, dhgid); + break; + } + if (nvmet_setup_dhgroup(ctrl, dhgid) < 0) + continue; + if (nvme_auth_gen_privkey(ctrl->dh_tfm, dhgid) == 0) + break; + crypto_free_kpp(ctrl->dh_tfm); + ctrl->dh_tfm = NULL; + ctrl->dh_gid = 0; } - if (null_dh < 0) { + if (!ctrl->dh_tfm && null_dh < 0) { pr_debug("%s: ctrl %d qid %d: no DH group selected\n", __func__, ctrl->cntlid, req->sq->qid); return NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE; } - ctrl->dh_gid = null_dh; pr_debug("%s: ctrl %d qid %d: DH group %s (%d)\n", __func__, ctrl->cntlid, req->sq->qid, nvme_auth_dhgroup_name(ctrl->dh_gid), ctrl->dh_gid); @@ -91,7 +102,11 @@ static u16 nvmet_auth_reply(struct nvmet_req *req, void *d) return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD; if (data->dhvlen) { - return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD; + if (!ctrl->dh_tfm) + return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD; + if (nvmet_auth_ctrl_sesskey(req, data->rval + 2 * data->hl, + data->dhvlen) < 0) + return NVME_AUTH_DHCHAP_FAILURE_DHGROUP_UNUSABLE; } response = kmalloc(data->hl, GFP_KERNEL); @@ -232,6 +247,7 @@ void nvmet_execute_auth_send(struct nvmet_req *req) NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD; goto done_kfree; } + switch (data->auth_id) { case NVME_AUTH_DHCHAP_MESSAGE_REPLY: status = nvmet_auth_reply(req, d); @@ -303,6 +319,8 @@ static int nvmet_auth_challenge(struct nvmet_req *req, void *d, int al) int hash_len = crypto_shash_digestsize(ctrl->shash_tfm); int data_size = sizeof(*d) + hash_len; + if (ctrl->dh_tfm) + data_size += ctrl->dh_keysize; if (al < data_size) { pr_debug("%s: buffer too small (al %d need %d)\n", __func__, al, data_size); @@ -321,6 +339,12 @@ static int nvmet_auth_challenge(struct nvmet_req *req, void *d, int al) return -ENOMEM; get_random_bytes(req->sq->dhchap_c1, data->hl); memcpy(data->cval, req->sq->dhchap_c1, data->hl); + if (ctrl->dh_tfm) { + data->dhgid = ctrl->dh_gid; + data->dhvlen = ctrl->dh_keysize; + ret = nvmet_auth_ctrl_exponential(req, data->cval + data->hl, + data->dhvlen); + } pr_debug("%s: ctrl %d qid %d seq %d transaction %d hl %d dhvlen %d\n", __func__, ctrl->cntlid, req->sq->qid, req->sq->dhchap_s1, req->sq->dhchap_tid, data->hl, data->dhvlen); diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h index adcbe2523710..e9d18ca6ebbf 100644 --- 
a/drivers/nvme/target/nvmet.h +++ b/drivers/nvme/target/nvmet.h @@ -227,6 +227,7 @@ struct nvmet_ctrl { size_t dhchap_key_len; struct crypto_shash *shash_tfm; u8 shash_id; + struct crypto_kpp *dh_tfm; u32 dh_gid; u32 dh_keysize; #endif @@ -693,6 +694,7 @@ int nvmet_setup_auth(struct nvmet_ctrl *ctrl); void nvmet_init_auth(struct nvmet_ctrl *ctrl, struct nvmet_req *req); void nvmet_destroy_auth(struct nvmet_ctrl *ctrl); void nvmet_auth_sq_free(struct nvmet_sq *sq); +int nvmet_setup_dhgroup(struct nvmet_ctrl *ctrl, int dhgroup_id); bool nvmet_check_auth_status(struct nvmet_req *req); int nvmet_auth_host_hash(struct nvmet_req *req, u8 *response, unsigned int hash_len); @@ -702,6 +704,10 @@ static inline bool nvmet_has_auth(struct nvmet_ctrl *ctrl) { return ctrl->shash_tfm != NULL; } +int nvmet_auth_ctrl_exponential(struct nvmet_req *req, + u8 *buf, int buf_size); +int nvmet_auth_ctrl_sesskey(struct nvmet_req *req, + u8 *buf, int buf_size); #else static inline int nvmet_setup_auth(struct nvmet_ctrl *ctrl) { From patchwork Tue Aug 10 12:42:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hannes Reinecke X-Patchwork-Id: 494461 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B840CC43214 for ; Tue, 10 Aug 2021 12:43:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9E3E160EFF for ; Tue, 10 Aug 2021 12:43:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240713AbhHJMnZ (ORCPT ); Tue, 10 Aug 2021 08:43:25 -0400 Received: from smtp-out2.suse.de ([195.135.220.29]:43718 "EHLO smtp-out2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240734AbhHJMnQ (ORCPT ); Tue, 10 Aug 2021 08:43:16 -0400 Received: from relay2.suse.de (relay2.suse.de [149.44.160.134]) by smtp-out2.suse.de (Postfix) with ESMTP id 3BCFE200B3; Tue, 10 Aug 2021 12:42:52 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa; t=1628599372; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MUoNB1RE7VhyoT1+GvVF1n3f2kAKysSACL4JZIm62EA=; b=EX2vEI9iPdolviNRyMxdRU3CU11O4eOfGgO6VJpppShc8sUqjxev9d8M8aYZqSw13WphTh EoeWw5f+xYhnlDlZw3hGORRaLLLTc54ZtVzdxDiembtjbjJnVe7cZfHlfaKpB/iGuF19Ds USY31qdDkSHXqLG7pQCqGdzXbMJT5xQ= DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_ed25519; t=1628599372; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc: mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MUoNB1RE7VhyoT1+GvVF1n3f2kAKysSACL4JZIm62EA=; b=WFcQNkEFlZB//F5QFZeBEPxbYqgMdFIqhGS24JWxwebjGtKxSY9wgEMcjgmh5jYRMiu64y +/IoSmOIaqtRnfAQ== Received: from adalid.arch.suse.de (adalid.arch.suse.de [10.161.8.13]) by relay2.suse.de (Postfix) with ESMTP id 20CD7A3B96; Tue, 10 Aug 2021 12:42:52 
+0000 (UTC) Received: by adalid.arch.suse.de (Postfix, from userid 16045) id 8D902518C55A; Tue, 10 Aug 2021 14:42:50 +0200 (CEST) From: Hannes Reinecke To: Christoph Hellwig Cc: Sagi Grimberg , Keith Busch , Herbert Xu , "David S . Miller" , linux-nvme@lists.infradead.org, linux-crypto@vger.kernel.org, Hannes Reinecke Subject: [PATCH 13/13] nvme: add non-standard ECDH and curve25517 algorithms Date: Tue, 10 Aug 2021 14:42:30 +0200 Message-Id: <20210810124230.12161-14-hare@suse.de> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210810124230.12161-1-hare@suse.de> References: <20210810124230.12161-1-hare@suse.de> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org TLS 1.3 specifies ECDH and curve25517 in addition to the FFDHE groups, and these are already implemented in the kernel. So add support for these non-standard groups for NVMe in-band authentication to validate the augmented challenge implementation. Signed-off-by: Hannes Reinecke --- drivers/nvme/host/auth.c | 34 +++++++++++++++++++++++++++++++++- include/linux/nvme.h | 2 ++ 2 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c index e73b5be4a4b0..6de2910eec80 100644 --- a/drivers/nvme/host/auth.c +++ b/drivers/nvme/host/auth.c @@ -9,6 +9,8 @@ #include #include #include +#include +#include #include "nvme.h" #include "fabrics.h" #include "auth.h" @@ -69,6 +71,13 @@ static struct nvme_auth_dhgroup_map { { .id = NVME_AUTH_DHCHAP_DHGROUP_8192, .name = "ffdhe8192", .kpp = "dh", .privkey_size = 1024, .pubkey_size = 1024 }, + { .id = NVME_AUTH_DHCHAP_DHGROUP_ECDH, + .name = "ecdh", .kpp = "ecdh-nist-p256", + .privkey_size = 32, .pubkey_size = 64 }, + { .id = NVME_AUTH_DHCHAP_DHGROUP_25519, + .name = "curve25519", .kpp = "curve25519", + .privkey_size = CURVE25519_KEY_SIZE, + .pubkey_size = CURVE25519_KEY_SIZE }, }; const char *nvme_auth_dhgroup_name(int dhgroup_id) @@ -424,6 +433,27 @@ int nvme_auth_gen_privkey(struct crypto_kpp *dh_tfm, int dh_gid) kfree_sensitive(dh_secret); goto out; } + } else if (dh_gid == NVME_AUTH_DHCHAP_DHGROUP_ECDH) { + struct ecdh p = {0}; + + pkey_len = crypto_ecdh_key_len(&p); + pkey = kmalloc(pkey_len, GFP_KERNEL); + if (!pkey) + return -ENOMEM; + + get_random_bytes(pkey, pkey_len); + ret = crypto_ecdh_encode_key(pkey, pkey_len, &p); + if (ret) { + pr_debug("failed to encode private key, error %d\n", + ret); + goto out; + } + } else if (dh_gid == NVME_AUTH_DHCHAP_DHGROUP_25519) { + pkey_len = CURVE25519_KEY_SIZE; + pkey = kmalloc(pkey_len, GFP_KERNEL); + if (!pkey) + return -ENOMEM; + get_random_bytes(pkey, pkey_len); } else { pr_warn("invalid dh group %d\n", dh_gid); return -EINVAL; @@ -597,7 +627,7 @@ static int nvme_auth_set_dhchap_negotiate_data(struct nvme_ctrl *ctrl, data->napd = 1; data->auth_protocol[0].dhchap.authid = NVME_AUTH_DHCHAP_AUTH_ID; data->auth_protocol[0].dhchap.halen = 3; - data->auth_protocol[0].dhchap.dhlen = 6; + data->auth_protocol[0].dhchap.dhlen = 8; data->auth_protocol[0].dhchap.idlist[0] = NVME_AUTH_DHCHAP_SHA256; data->auth_protocol[0].dhchap.idlist[1] = NVME_AUTH_DHCHAP_SHA384; data->auth_protocol[0].dhchap.idlist[2] = NVME_AUTH_DHCHAP_SHA512; @@ -607,6 +637,8 @@ static int nvme_auth_set_dhchap_negotiate_data(struct nvme_ctrl *ctrl, data->auth_protocol[0].dhchap.idlist[6] = NVME_AUTH_DHCHAP_DHGROUP_4096; data->auth_protocol[0].dhchap.idlist[7] = NVME_AUTH_DHCHAP_DHGROUP_6144; data->auth_protocol[0].dhchap.idlist[8] = NVME_AUTH_DHCHAP_DHGROUP_8192; + 
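+	/* non-standard groups, advertised to exercise the augmented challenge path */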
data->auth_protocol[0].dhchap.idlist[9] = NVME_AUTH_DHCHAP_DHGROUP_ECDH; + data->auth_protocol[0].dhchap.idlist[10] = NVME_AUTH_DHCHAP_DHGROUP_25519; return size; } diff --git a/include/linux/nvme.h b/include/linux/nvme.h index e2142e3246eb..e1145bbfc92f 100644 --- a/include/linux/nvme.h +++ b/include/linux/nvme.h @@ -1477,6 +1477,8 @@ enum { NVME_AUTH_DHCHAP_DHGROUP_4096 = 0x03, NVME_AUTH_DHCHAP_DHGROUP_6144 = 0x04, NVME_AUTH_DHCHAP_DHGROUP_8192 = 0x05, + NVME_AUTH_DHCHAP_DHGROUP_ECDH = 0x0e, + NVME_AUTH_DHCHAP_DHGROUP_25519 = 0x0f, }; union nvmf_auth_protocol {