From patchwork Wed Aug 3 16:40:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Adel Abouchaev X-Patchwork-Id: 595202 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53D81C19F2C for ; Wed, 3 Aug 2022 16:41:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237321AbiHCQlH (ORCPT ); Wed, 3 Aug 2022 12:41:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60094 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236331AbiHCQk7 (ORCPT ); Wed, 3 Aug 2022 12:40:59 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E7DD0BE0D; Wed, 3 Aug 2022 09:40:57 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id a8so3614011pjg.5; Wed, 03 Aug 2022 09:40:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=Kwg0Gxnft/pxS4kgXDydNW4k2vrq3EyTloWOdq0PXAc=; b=J6ry0juC/cS1/keXoOwFK+MuEMj4DPNdUiiPrRADSM1Ai+nh2WJjSsFxPPwOLXZi0E OQFLgeKjWL//VpC/a7o4RQIChsxIKMpJiHC3LVZu6+hjIksbnXFCwMVBnZVEccTnT0ys vFkwXozeovy3Adau/hq0kJAQGYVZzS2gRBdwxtIFRIqkNCDqV3q+OxlSu9HYEJU6Pa4T r7Ul+D6d2ijz/bg3/qyOs6vngWUTXsKdCZQzTI4M7OZrARHLX1v1wnD8Vppum8HT5S0w 4vam+bqP4rurSh0WGNicpfHVVp8+2yynTB+dsrIjuIyrk45tJp3ku3OG970300TI8hMs zDAQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=Kwg0Gxnft/pxS4kgXDydNW4k2vrq3EyTloWOdq0PXAc=; b=u6lnZthpU7G6OqxH01HU9bLGAscsauIBo5H6G+SNjC8RmwpZ2Vrsx0u/3tuIguIocl GOQ8Ib84VnOumKrroEtBi7OGZMaj6zz55HInLRrpWtPFpKTyaYTCQq3mK+WXz+njFJWZ tNQwtmgpMRreb6kXyo4BatNqgx9wJ6S6tXoomEZ3zi7w3qp2QUWYWSyL8heA6TMwFLEH a5ulQcfrZoHH82Tr5wMNDSi0CscfdUzbSj3XamlZFzMi/u4gUWG7lH2p1n7MycINniSb oyX69JL5nh+UzK1P4AluCzcVtW+N2bmOJ38IOhFN+aKYvtclOKeLQv/vL5JUM0QrEvO8 1JMA== X-Gm-Message-State: ACgBeo2cJvLTVKyfbFyj94g7uFl/4v+FhPJPn25NZL9d94WGAWEmSGVW aaWzdTe2iL2BSzMxCjkbliMtIrh3PkA3kzVT X-Google-Smtp-Source: AA6agR5MfZwpSsjBltY1tEBSjei9IMpegZCHDn7zPRI8i1VSTvy8b5cdgUNXmy0WqfgjUUtKIU1QnQ== X-Received: by 2002:a17:902:f80f:b0:16d:c4af:88aa with SMTP id ix15-20020a170902f80f00b0016dc4af88aamr27144961plb.6.1659544857267; Wed, 03 Aug 2022 09:40:57 -0700 (PDT) Received: from localhost (fwdproxy-prn-117.fbsv.net. 
[2a03:2880:ff:75::face:b00c]) by smtp.gmail.com with ESMTPSA id l9-20020a170903244900b0016cdbb22c28sm2306177pls.0.2022.08.03.09.40.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 Aug 2022 09:40:56 -0700 (PDT) From: Adel Abouchaev To: kuba@kernel.org Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com, corbet@lwn.net, dsahern@kernel.org, shuah@kernel.org, imagedong@tencent.com, netdev@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org Subject: [RFC net-next 2/6] net: Define QUIC specific constants, control and data plane structures Date: Wed, 3 Aug 2022 09:40:41 -0700 Message-Id: <20220803164045.3585187-3-adel.abushaev@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220803164045.3585187-1-adel.abushaev@gmail.com> References: <20220803164045.3585187-1-adel.abushaev@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Define control and data plane structures to pass in control plane for flow add/remove and during packet send within ancillary data. Define constants to use within SOL_UDP to program QUIC sockets. Signed-off-by: Adel Abouchaev --- include/uapi/linux/quic.h | 61 +++++++++++++++++++++++++++++++++++++++ include/uapi/linux/udp.h | 3 ++ 2 files changed, 64 insertions(+) create mode 100644 include/uapi/linux/quic.h diff --git a/include/uapi/linux/quic.h b/include/uapi/linux/quic.h new file mode 100644 index 000000000000..79680b8b18a6 --- /dev/null +++ b/include/uapi/linux/quic.h @@ -0,0 +1,61 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) */ + +#ifndef _UAPI_LINUX_QUIC_H +#define _UAPI_LINUX_QUIC_H + +#include +#include + +#define QUIC_MAX_CONNECTION_ID_SIZE 20 + +/* Side by side data for QUIC egress operations */ +#define QUIC_BYPASS_ENCRYPTION 0x01 + +struct quic_tx_ancillary_data { + __aligned_u64 next_pkt_num; + __u8 flags; + __u8 conn_id_length; +}; + +struct quic_connection_info_key { + __u8 conn_id[QUIC_MAX_CONNECTION_ID_SIZE]; + __u8 conn_id_length; +}; + +struct quic_aes_gcm_128 { + __u8 header_key[TLS_CIPHER_AES_GCM_128_KEY_SIZE]; + __u8 payload_key[TLS_CIPHER_AES_GCM_128_KEY_SIZE]; + __u8 payload_iv[TLS_CIPHER_AES_GCM_128_IV_SIZE]; +}; + +struct quic_aes_gcm_256 { + __u8 header_key[TLS_CIPHER_AES_GCM_256_KEY_SIZE]; + __u8 payload_key[TLS_CIPHER_AES_GCM_256_KEY_SIZE]; + __u8 payload_iv[TLS_CIPHER_AES_GCM_256_IV_SIZE]; +}; + +struct quic_aes_ccm_128 { + __u8 header_key[TLS_CIPHER_AES_CCM_128_KEY_SIZE]; + __u8 payload_key[TLS_CIPHER_AES_CCM_128_KEY_SIZE]; + __u8 payload_iv[TLS_CIPHER_AES_CCM_128_IV_SIZE]; +}; + +struct quic_chacha20_poly1305 { + __u8 header_key[TLS_CIPHER_CHACHA20_POLY1305_KEY_SIZE]; + __u8 payload_key[TLS_CIPHER_CHACHA20_POLY1305_KEY_SIZE]; + __u8 payload_iv[TLS_CIPHER_CHACHA20_POLY1305_IV_SIZE]; +}; + +struct quic_connection_info { + __u16 cipher_type; + struct quic_connection_info_key key; + union { + struct quic_aes_gcm_128 aes_gcm_128; + struct quic_aes_gcm_256 aes_gcm_256; + struct quic_aes_ccm_128 aes_ccm_128; + struct quic_chacha20_poly1305 chacha20_poly1305; + }; +}; + +#endif + diff --git a/include/uapi/linux/udp.h b/include/uapi/linux/udp.h index 4828794efcf8..0ee4c598e70b 100644 --- a/include/uapi/linux/udp.h +++ b/include/uapi/linux/udp.h @@ -34,6 +34,9 @@ struct udphdr { #define UDP_NO_CHECK6_RX 102 /* Disable accpeting checksum for UDP6 */ #define UDP_SEGMENT 103 /* Set GSO segmentation size */ #define UDP_GRO 104 /* This socket can receive UDP GRO packets */ +#define UDP_QUIC_ADD_TX_CONNECTION 106 /* Add 
QUIC Tx crypto offload */ +#define UDP_QUIC_DEL_TX_CONNECTION 107 /* Del QUIC Tx crypto offload */ +#define UDP_QUIC_ENCRYPT 108 /* QUIC encryption parameters */ /* UDP encapsulation types */ #define UDP_ENCAP_ESPINUDP_NON_IKE 1 /* draft-ietf-ipsec-nat-t-ike-00/01 */ From patchwork Wed Aug 3 16:40:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Adel Abouchaev X-Patchwork-Id: 595201 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 590D3C19F29 for ; Wed, 3 Aug 2022 16:41:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231527AbiHCQlH (ORCPT ); Wed, 3 Aug 2022 12:41:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60130 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236815AbiHCQlA (ORCPT ); Wed, 3 Aug 2022 12:41:00 -0400 Received: from mail-pf1-x435.google.com (mail-pf1-x435.google.com [IPv6:2607:f8b0:4864:20::435]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D7A163FA2F; Wed, 3 Aug 2022 09:40:58 -0700 (PDT) Received: by mail-pf1-x435.google.com with SMTP id d20so9534936pfq.5; Wed, 03 Aug 2022 09:40:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=bsiUbd/vDuPDnl7sEdaZyJkbZg+kaW5QRqCv78tgHDM=; b=GYEYC8qcPPFmQ52HDDIu4DivNGsVAVNW8o4YPnmZ7n/GyOoBw91riQGtfi6PnsJRJ1 AB23JUfS2SAnya/q7Wu7gbIQOj/OHV4GV583VjJMqYI9drWTfxEVwEzPqS841ciKsutj 51mBzmaDP5HrrRL9GbXjnLkF98A4swDKsZJQStTyz5om4JYrzK77xaQsLsQ/yxtX/UjS 6Ew/W00h7Ela0mzkUY/wPcewZzN0zKo85DiL90++pjM0R3bUcLlVChPJEy+Zgmq+cK5d 73bZBzM2dzfZaKM5OofxhIUQuXeC3pg+SiiIvHwtdyPna4lWlUowC2+1b/PTorIYKzs1 RUVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=bsiUbd/vDuPDnl7sEdaZyJkbZg+kaW5QRqCv78tgHDM=; b=lFmpNdAxlYOAcaSEr/4Ee8W05FhbTTMhSwbKEpb4cw6+5Mx3bKAKIzjFh4uKMBAETw k24h7vcK9pduOyfFb6lHWbxrdbBpxdUrFLPGzxftvXC5F2eTeFFYN9cDeZuKBdmCgGTr tGoen+6kaGUJvhFJVzfRj9kX5xYQOOZN8TYcEHDnzuTP6jU+cmcUzhGBnkLUTGggi7Wt HX/xVy9QBInIuZ3/YGK5C6WtAKH/Kt8Qu9YrJ0objWkWleIKwFLE8HnP1XMI3+ltoSOZ Bo1BVR7sdRG8yUyPJGQYmha0k2V72ULkCNQ2p3fmUei8H+nqH8tHkdS/tDX9LSZawEww tawg== X-Gm-Message-State: ACgBeo0ALlZ2Bs7O/Mfxh24zT7UNxvRBPPowzVwXpMiRva1bBv65lu8t DCvzAI2lIrQvBgKGby4Edds= X-Google-Smtp-Source: AA6agR6il3c5AUGJNpCHtvZtYsJgb/hEDyGfySo/m3A5b87L+XS9qQPhbCPBJVZLb6lKr1MwBVP3pg== X-Received: by 2002:aa7:941a:0:b0:52d:e57c:cf43 with SMTP id x26-20020aa7941a000000b0052de57ccf43mr7752613pfo.77.1659544858205; Wed, 03 Aug 2022 09:40:58 -0700 (PDT) Received: from localhost (fwdproxy-prn-013.fbsv.net. 
[2a03:2880:ff:d::face:b00c]) by smtp.gmail.com with ESMTPSA id h23-20020a17090a055700b001f3162e4e55sm1842447pjf.35.2022.08.03.09.40.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 Aug 2022 09:40:57 -0700 (PDT) From: Adel Abouchaev To: kuba@kernel.org Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com, corbet@lwn.net, dsahern@kernel.org, shuah@kernel.org, imagedong@tencent.com, netdev@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org Subject: [RFC net-next 3/6] net: Add UDP ULP operations, initialization and handling prototype functions. Date: Wed, 3 Aug 2022 09:40:42 -0700 Message-Id: <20220803164045.3585187-4-adel.abushaev@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220803164045.3585187-1-adel.abushaev@gmail.com> References: <20220803164045.3585187-1-adel.abushaev@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Define functions to add UDP ULP handling, registration with UDP protocol and supporting data structures. Create structure for QUIC ULP and add empty prototype functions to support it. Signed-off-by: Adel Abouchaev --- include/net/inet_sock.h | 2 + include/net/udp.h | 33 +++++++ include/uapi/linux/udp.h | 1 + net/Kconfig | 1 + net/Makefile | 1 + net/ipv4/Makefile | 3 +- net/ipv4/udp.c | 6 ++ net/ipv4/udp_ulp.c | 190 +++++++++++++++++++++++++++++++++++++++ 8 files changed, 236 insertions(+), 1 deletion(-) create mode 100644 net/ipv4/udp_ulp.c diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h index 6395f6b9a5d2..e9c44b3ccffe 100644 --- a/include/net/inet_sock.h +++ b/include/net/inet_sock.h @@ -238,6 +238,8 @@ struct inet_sock { __be32 mc_addr; struct ip_mc_socklist __rcu *mc_list; struct inet_cork_full cork; + const struct udp_ulp_ops *udp_ulp_ops; + void __rcu *ulp_data; }; #define IPCORK_OPT 1 /* ip-options has been held in ipcork.opt */ diff --git a/include/net/udp.h b/include/net/udp.h index 8dd4aa1485a6..f50011a20c92 100644 --- a/include/net/udp.h +++ b/include/net/udp.h @@ -523,4 +523,37 @@ struct proto *udp_bpf_get_proto(struct sock *sk, struct sk_psock *psock); int udp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore); #endif +/* + * Interface for adding Upper Level Protocols over UDP + */ + +#define UDP_ULP_NAME_MAX 16 +#define UDP_ULP_MAX 128 + +struct udp_ulp_ops { + struct list_head list; + + /* initialize ulp */ + int (*init)(struct sock *sk); + /* cleanup ulp */ + void (*release)(struct sock *sk); + + char name[UDP_ULP_NAME_MAX]; + struct module *owner; +}; + +int udp_register_ulp(struct udp_ulp_ops *type); +void udp_unregister_ulp(struct udp_ulp_ops *type); +int udp_set_ulp(struct sock *sk, const char *name); +void udp_get_available_ulp(char *buf, size_t len); +void udp_cleanup_ulp(struct sock *sk); +int udp_setsockopt_ulp(struct sock *sk, sockptr_t optval, + unsigned int optlen); +int udp_getsockopt_ulp(struct sock *sk, char __user *optval, + int __user *optlen); + +#define MODULE_ALIAS_UDP_ULP(name)\ + __MODULE_INFO(alias, alias_userspace, name);\ + __MODULE_INFO(alias, alias_udp_ulp, "udp-ulp-" name) + #endif /* _UDP_H */ diff --git a/include/uapi/linux/udp.h b/include/uapi/linux/udp.h index 0ee4c598e70b..893691f0108a 100644 --- a/include/uapi/linux/udp.h +++ b/include/uapi/linux/udp.h @@ -34,6 +34,7 @@ struct udphdr { #define UDP_NO_CHECK6_RX 102 /* Disable accpeting checksum for UDP6 */ #define UDP_SEGMENT 103 /* Set GSO segmentation size */ #define UDP_GRO 104 /* This socket can receive UDP GRO 
packets */ +#define UDP_ULP 105 /* Attach ULP to a UDP socket */ #define UDP_QUIC_ADD_TX_CONNECTION 106 /* Add QUIC Tx crypto offload */ #define UDP_QUIC_DEL_TX_CONNECTION 107 /* Del QUIC Tx crypto offload */ #define UDP_QUIC_ENCRYPT 108 /* QUIC encryption parameters */ diff --git a/net/Kconfig b/net/Kconfig index 6b78f695caa6..93e3b1308aec 100644 --- a/net/Kconfig +++ b/net/Kconfig @@ -63,6 +63,7 @@ menu "Networking options" source "net/packet/Kconfig" source "net/unix/Kconfig" source "net/tls/Kconfig" +source "net/quic/Kconfig" source "net/xfrm/Kconfig" source "net/iucv/Kconfig" source "net/smc/Kconfig" diff --git a/net/Makefile b/net/Makefile index fbfeb8a0bb37..28565bfe29cb 100644 --- a/net/Makefile +++ b/net/Makefile @@ -16,6 +16,7 @@ obj-y += ethernet/ 802/ sched/ netlink/ bpf/ ethtool/ obj-$(CONFIG_NETFILTER) += netfilter/ obj-$(CONFIG_INET) += ipv4/ obj-$(CONFIG_TLS) += tls/ +obj-$(CONFIG_QUIC) += quic/ obj-$(CONFIG_XFRM) += xfrm/ obj-$(CONFIG_UNIX_SCM) += unix/ obj-y += ipv6/ diff --git a/net/ipv4/Makefile b/net/ipv4/Makefile index bbdd9c44f14e..88d3baf4af95 100644 --- a/net/ipv4/Makefile +++ b/net/ipv4/Makefile @@ -14,7 +14,8 @@ obj-y := route.o inetpeer.o protocol.o \ udp_offload.o arp.o icmp.o devinet.o af_inet.o igmp.o \ fib_frontend.o fib_semantics.o fib_trie.o fib_notifier.o \ inet_fragment.o ping.o ip_tunnel_core.o gre_offload.o \ - metrics.o netlink.o nexthop.o udp_tunnel_stub.o + metrics.o netlink.o nexthop.o udp_tunnel_stub.o \ + udp_ulp.o obj-$(CONFIG_BPFILTER) += bpfilter/ diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index aa9f2ec3dc46..e4a5f66b3141 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -2778,6 +2778,9 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname, up->pcflag |= UDPLITE_RECV_CC; break; + case UDP_ULP: + return udp_setsockopt_ulp(sk, optval, optlen); + default: err = -ENOPROTOOPT; break; @@ -2846,6 +2849,9 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname, val = up->pcrlen; break; + case UDP_ULP: + return udp_getsockopt_ulp(sk, optval, optlen); + default: return -ENOPROTOOPT; } diff --git a/net/ipv4/udp_ulp.c b/net/ipv4/udp_ulp.c new file mode 100644 index 000000000000..3801ed7ad17d --- /dev/null +++ b/net/ipv4/udp_ulp.c @@ -0,0 +1,190 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Pluggable UDP upper layer protocol support, based on pluggable TCP upper + * layer protocol support. + * + * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved. + * Copyright (c) 2016-2017, Dave Watson . All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +static DEFINE_SPINLOCK(udp_ulp_list_lock); +static LIST_HEAD(udp_ulp_list); + +/* Simple linear search, don't expect many entries! */ +static struct udp_ulp_ops *udp_ulp_find(const char *name) +{ + struct udp_ulp_ops *e; + + list_for_each_entry_rcu(e, &udp_ulp_list, list, + lockdep_is_held(&udp_ulp_list_lock)) { + if (strcmp(e->name, name) == 0) + return e; + } + + return NULL; +} + +static const struct udp_ulp_ops *__udp_ulp_find_autoload(const char *name) +{ + const struct udp_ulp_ops *ulp = NULL; + + rcu_read_lock(); + ulp = udp_ulp_find(name); + +#ifdef CONFIG_MODULES + if (!ulp && capable(CAP_NET_ADMIN)) { + rcu_read_unlock(); + request_module("udp-ulp-%s", name); + rcu_read_lock(); + ulp = udp_ulp_find(name); + } +#endif + if (!ulp || !try_module_get(ulp->owner)) + ulp = NULL; + + rcu_read_unlock(); + return ulp; +} + +/* Attach new upper layer protocol to the list + * of available protocols. 
+ */ +int udp_register_ulp(struct udp_ulp_ops *ulp) +{ + int ret = 0; + + spin_lock(&udp_ulp_list_lock); + if (udp_ulp_find(ulp->name)) + ret = -EEXIST; + else + list_add_tail_rcu(&ulp->list, &udp_ulp_list); + + spin_unlock(&udp_ulp_list_lock); + + return ret; +} +EXPORT_SYMBOL_GPL(udp_register_ulp); + +void udp_unregister_ulp(struct udp_ulp_ops *ulp) +{ + spin_lock(&udp_ulp_list_lock); + list_del_rcu(&ulp->list); + spin_unlock(&udp_ulp_list_lock); + + synchronize_rcu(); +} +EXPORT_SYMBOL_GPL(udp_unregister_ulp); + +void udp_cleanup_ulp(struct sock *sk) +{ + struct inet_sock *inet = inet_sk(sk); + + /* No sock_owned_by_me() check here as at the time the + * stack calls this function, the socket is dead and + * about to be destroyed. + */ + if (!inet->udp_ulp_ops) + return; + + if (inet->udp_ulp_ops->release) + inet->udp_ulp_ops->release(sk); + module_put(inet->udp_ulp_ops->owner); + + inet->udp_ulp_ops = NULL; +} + +static int __udp_set_ulp(struct sock *sk, const struct udp_ulp_ops *ulp_ops) +{ + struct inet_sock *inet = inet_sk(sk); + int err; + + err = -EEXIST; + if (inet->udp_ulp_ops) + goto out_err; + + err = ulp_ops->init(sk); + if (err) + goto out_err; + + inet->udp_ulp_ops = ulp_ops; + return 0; + +out_err: + module_put(ulp_ops->owner); + return err; +} + +int udp_set_ulp(struct sock *sk, const char *name) +{ + struct sk_psock *psock = sk_psock_get(sk); + const struct udp_ulp_ops *ulp_ops; + + if (psock){ + sk_psock_put(sk, psock); + return -EINVAL; + } + + sock_owned_by_me(sk); + ulp_ops = __udp_ulp_find_autoload(name); + if (!ulp_ops) + return -ENOENT; + + return __udp_set_ulp(sk, ulp_ops); +} + +int udp_setsockopt_ulp(struct sock *sk, sockptr_t optval, unsigned int optlen) +{ + char name[UDP_ULP_NAME_MAX]; + int val, err; + + if (!optlen || optlen > UDP_ULP_NAME_MAX) + return -EINVAL; + + val = strncpy_from_sockptr(name, optval, optlen); + if (val < 0) + return -EFAULT; + + if (val == UDP_ULP_NAME_MAX) + return -EINVAL; + + name[val] = 0; + lock_sock(sk); + err = udp_set_ulp(sk, name); + release_sock(sk); + return err; +} + +int udp_getsockopt_ulp(struct sock *sk, char __user *optval, int __user *optlen) +{ + struct inet_sock *inet = inet_sk(sk); + int len; + + if (get_user(len, optlen)) + return -EFAULT; + + len = min_t(unsigned int, len, UDP_ULP_NAME_MAX); + if (len < 0) + return -EINVAL; + + if (!inet->udp_ulp_ops) { + if (put_user(0, optlen)) + return -EFAULT; + return 0; + } + + if (put_user(len, optlen)) + return -EFAULT; + if (copy_to_user(optval, inet->udp_ulp_ops->name, len)) + return -EFAULT; + + return 0; +} From patchwork Sat Aug 6 00:11:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Adel Abouchaev X-Patchwork-Id: 595974 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5AF86C00140 for ; Sat, 6 Aug 2022 00:12:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241585AbiHFAMk (ORCPT ); Fri, 5 Aug 2022 20:12:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42704 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241600AbiHFAM2 (ORCPT ); Fri, 5 Aug 2022 20:12:28 -0400 Received: from mail-oa1-x33.google.com (mail-oa1-x33.google.com [IPv6:2001:4860:4864:20::33]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C970B7C1B1; 
Fri, 5 Aug 2022 17:12:23 -0700 (PDT) Received: by mail-oa1-x33.google.com with SMTP id 586e51a60fabf-10cf9f5b500so4629127fac.2; Fri, 05 Aug 2022 17:12:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=5XrjuNCGI68fWtTF7az6gjkp6LBU0Yuc3xGpWyxaSCY=; b=egdsMrlUMnb42C1vbyKXcbZumo5LpbzQkk1heAOFNKlRj79mH+DjD5To19Rt8ZDCnL KHAjDa3XEJlrC3Xvf21GatZkegMNsbmv8vMAwWPGJ4/EgpI5rvxItc8Qa908NmbIGt7Y CpAfGxTyiq/GB0gPm4MGDJJ5oZb4O5Qk1E7NWmqBWWrlsydMpSynck9PA6KeXIsqvn2d xF6oaal7BMDXKqlTKMlOlvt/AMBj/y+UqRLnBjQkagVI9nECGkzGEt0ofsy4pCwN4Ubp 8SBMzpaUB6BMD+fiyBrcqFfNBHxxinF59ynARgBRtLNt/AiDhUw2PCnEkSLJ0vdKMB7q t4QQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=5XrjuNCGI68fWtTF7az6gjkp6LBU0Yuc3xGpWyxaSCY=; b=YiparYOWSW3t53eaqjoWpTPmldSNINpYXjWN+PwBDYRRAWKi2DfMYRpdV5reVLYNy7 i8tCXNd6xIicjpbPnaSA+kG0sP3jvJshnFyvZ2n/sc1Cy/ia7OFtLQKXQdPOaHNAG522 W+TItkEf1C7gI5IqC9SQ8P3HniLqMsQk8ku2LXKA45A+PRjGHUccP60jmitNDzhyrdrq RuWGunqDH/A34s4rn8kBcPrrFkU9ydxhvCg6WLmMAoAKqB0qlH292ZJzo8nj7yU/n54E +K5l3G1qPowFEM2A76/lpB42SlBv9woOtTu4hRoVqP87U77h6C4YhtVdvEGb1+NQpBbN HLzA== X-Gm-Message-State: ACgBeo3PJkLJclbyRh6MSgc8o2yW+aeWQI/Ywy0B503snmirN1JAIv64 jFV5itrf+WRO5d4NgIH216vIRYFl5fxOkQ== X-Google-Smtp-Source: AA6agR7hAdLOfuwarDEv9pbSGpnMoTGvSXPV1zFdyWCb+GbFX8y43ZYAc+lRcuIluOwryf3ZwnyVww== X-Received: by 2002:a17:90b:4a06:b0:1f4:e4fc:91ed with SMTP id kk6-20020a17090b4a0600b001f4e4fc91edmr10344687pjb.152.1659744732086; Fri, 05 Aug 2022 17:12:12 -0700 (PDT) Received: from localhost (fwdproxy-prn-023.fbsv.net. [2a03:2880:ff:17::face:b00c]) by smtp.gmail.com with ESMTPSA id u23-20020a1709026e1700b0016dafeda062sm3584842plk.232.2022.08.05.17.12.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 05 Aug 2022 17:12:11 -0700 (PDT) From: Adel Abouchaev To: kuba@kernel.org Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com, corbet@lwn.net, dsahern@kernel.org, shuah@kernel.org, imagedong@tencent.com, netdev@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org Subject: [RFC net-next v2 4/6] Implement QUIC offload functions Date: Fri, 5 Aug 2022 17:11:51 -0700 Message-Id: <20220806001153.1461577-5-adel.abushaev@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220806001153.1461577-1-adel.abushaev@gmail.com> References: <20220806001153.1461577-1-adel.abushaev@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add connection hash to the context do support add, remove operations on QUIC connections for the control plane and lookup for the data plane. Implement setsockopt and add placeholders to add and delete Tx connections. Signed-off-by: Adel Abouchaev --- v2: Added net/quic/Kconfig reference to net/Kconfig in this commit. 
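For reference, a minimal userspace sketch (illustration only, not part of this patch) of how a sender could drive the Tx offload with the uapi added earlier in this series (UDP_ULP, UDP_QUIC_ADD_TX_CONNECTION and the UDP_QUIC_ENCRYPT cmsg). The key material below is a placeholder for keys derived from the QUIC handshake, and error handling is omitted:

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/types.h>
#include <linux/udp.h>   /* UDP_ULP, UDP_QUIC_* added by this series */
#include <linux/quic.h>  /* quic_connection_info, quic_tx_ancillary_data */
#include <linux/tls.h>   /* TLS_CIPHER_AES_GCM_128 constants */

/* Placeholder 1-RTT key material; a real sender derives these from the
 * QUIC handshake.
 */
static const unsigned char hdr_key[TLS_CIPHER_AES_GCM_128_KEY_SIZE];
static const unsigned char pay_key[TLS_CIPHER_AES_GCM_128_KEY_SIZE];
static const unsigned char pay_iv[TLS_CIPHER_AES_GCM_128_IV_SIZE];

static ssize_t quic_offload_send(int fd, const unsigned char *conn_id,
				 __u8 conn_id_len, void *pkt, size_t len,
				 __u64 next_pkt_num)
{
	struct quic_connection_info info = {
		.cipher_type = TLS_CIPHER_AES_GCM_128,
	};
	struct quic_tx_ancillary_data tx = {
		.next_pkt_num = next_pkt_num,
		.flags = 0,		/* or QUIC_BYPASS_ENCRYPTION */
		.conn_id_length = conn_id_len,
	};
	union {	/* aligned cmsg buffer */
		char buf[CMSG_SPACE(sizeof(struct quic_tx_ancillary_data))];
		struct cmsghdr align;
	} u;
	struct iovec iov = { .iov_base = pkt, .iov_len = len };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = u.buf,
		.msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg;

	/* 1. Attach the QUIC crypto ULP to a connected UDP socket. */
	if (setsockopt(fd, IPPROTO_UDP, UDP_ULP, "quic-crypto",
		       sizeof("quic-crypto")))
		return -1;

	/* 2. Install Tx keys for this connection ID. */
	memcpy(info.key.conn_id, conn_id, conn_id_len);
	info.key.conn_id_length = conn_id_len;
	memcpy(info.aes_gcm_128.header_key, hdr_key, sizeof(hdr_key));
	memcpy(info.aes_gcm_128.payload_key, pay_key, sizeof(pay_key));
	memcpy(info.aes_gcm_128.payload_iv, pay_iv, sizeof(pay_iv));
	if (setsockopt(fd, IPPROTO_UDP, UDP_QUIC_ADD_TX_CONNECTION,
		       &info, sizeof(info)))
		return -1;

	/* 3. Send one plaintext short-header packet; the kernel encrypts
	 *    the payload and applies header protection on the way out.
	 */
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = IPPROTO_UDP;
	cmsg->cmsg_type = UDP_QUIC_ENCRYPT;
	cmsg->cmsg_len = CMSG_LEN(sizeof(tx));
	memcpy(CMSG_DATA(cmsg), &tx, sizeof(tx));

	return sendmsg(fd, &msg, 0);
}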
--- include/net/quic.h | 49 ++ net/Kconfig | 1 + net/ipv4/udp.c | 8 + net/quic/Kconfig | 16 + net/quic/Makefile | 8 + net/quic/quic_main.c | 1400 ++++++++++++++++++++++++++++++++++++++++++ 6 files changed, 1482 insertions(+) create mode 100644 include/net/quic.h create mode 100644 net/quic/Kconfig create mode 100644 net/quic/Makefile create mode 100644 net/quic/quic_main.c diff --git a/include/net/quic.h b/include/net/quic.h new file mode 100644 index 000000000000..15e04ea08c53 --- /dev/null +++ b/include/net/quic.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#ifndef INCLUDE_NET_QUIC_H +#define INCLUDE_NET_QUIC_H + +#include +#include +#include +#include + +#define QUIC_MAX_SHORT_HEADER_SIZE 25 +#define QUIC_MAX_CONNECTION_ID_SIZE 20 +#define QUIC_HDR_MASK_SIZE 16 +#define QUIC_MAX_GSO_FRAGS 16 + +// Maximum IV and nonce sizes should be in sync with supported ciphers. +#define QUIC_CIPHER_MAX_IV_SIZE 12 +#define QUIC_CIPHER_MAX_NONCE_SIZE 16 + +/* Side by side data for QUIC egress operations */ +#define QUIC_ANCILLARY_FLAGS (QUIC_BYPASS_ENCRYPTION) + +#define QUIC_MAX_IOVEC_SEGMENTS 8 +#define QUIC_MAX_SG_ALLOC_ELEMENTS 32 +#define QUIC_MAX_PLAIN_PAGES 16 +#define QUIC_MAX_CIPHER_PAGES_ORDER 4 + +struct quic_internal_crypto_context { + struct quic_connection_info conn_info; + struct crypto_skcipher *header_tfm; + struct crypto_aead *packet_aead; +}; + +struct quic_connection_rhash { + struct rhash_head node; + struct quic_internal_crypto_context crypto_ctx; + struct rcu_head rcu; +}; + +struct quic_context { + struct proto *sk_proto; + struct rhashtable tx_connections; + struct scatterlist sg_alloc[QUIC_MAX_SG_ALLOC_ELEMENTS]; + struct page *cipher_page; + struct mutex sendmsg_mux; + struct rcu_head rcu; +}; + +#endif diff --git a/net/Kconfig b/net/Kconfig index 6b78f695caa6..93e3b1308aec 100644 --- a/net/Kconfig +++ b/net/Kconfig @@ -63,6 +63,7 @@ menu "Networking options" source "net/packet/Kconfig" source "net/unix/Kconfig" source "net/tls/Kconfig" +source "net/quic/Kconfig" source "net/xfrm/Kconfig" source "net/iucv/Kconfig" source "net/smc/Kconfig" diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index 027c4513a9cd..6f56d3cbaeee 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -113,6 +113,7 @@ #include #include #include +#include #if IS_ENABLED(CONFIG_IPV6) #include #endif @@ -1011,6 +1012,13 @@ static int __udp_cmsg_send(struct cmsghdr *cmsg, u16 *gso_size) return -EINVAL; *gso_size = *(__u16 *)CMSG_DATA(cmsg); return 0; + case UDP_QUIC_ENCRYPT: + /* This option is handled in UDP_ULP and is only checked + * here for the bypass bit + */ + if (cmsg->cmsg_len != CMSG_LEN(sizeof(struct quic_tx_ancillary_data))) + return -EINVAL; + return 0; default: return -EINVAL; } diff --git a/net/quic/Kconfig b/net/quic/Kconfig new file mode 100644 index 000000000000..661cb989508a --- /dev/null +++ b/net/quic/Kconfig @@ -0,0 +1,16 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# QUIC configuration +# +config QUIC + tristate "QUIC encryption offload" + depends on INET + select CRYPTO + select CRYPTO_AES + select CRYPTO_GCM + help + Enable kernel support for QUIC crypto offload. Currently only TX + encryption offload is supported. The kernel will perform + copy-during-encryption. + + If unsure, say N. 
diff --git a/net/quic/Makefile b/net/quic/Makefile new file mode 100644 index 000000000000..928239c4d08c --- /dev/null +++ b/net/quic/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# Makefile for the QUIC subsystem +# + +obj-$(CONFIG_QUIC) += quic.o + +quic-y := quic_main.o diff --git a/net/quic/quic_main.c b/net/quic/quic_main.c new file mode 100644 index 000000000000..e738c8130a4f --- /dev/null +++ b/net/quic/quic_main.c @@ -0,0 +1,1400 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include +#include +#include +#include +// Include header to use TLS constants for AEAD cipher. +#include +#include +#include +#include + +static unsigned long af_init_done; +static struct proto quic_v4_proto; +static struct proto quic_v6_proto; +static DEFINE_SPINLOCK(quic_proto_lock); + +static u32 quic_tx_connection_hash(const void *data, u32 len, u32 seed) +{ + return jhash(data, len, seed); +} + +static u32 quic_tx_connection_hash_obj(const void *data, u32 len, u32 seed) +{ + const struct quic_connection_rhash *connhash = data; + + return jhash(&connhash->crypto_ctx.conn_info.key, + sizeof(struct quic_connection_info_key), seed); +} + +static int quic_tx_connection_hash_cmp(struct rhashtable_compare_arg *arg, + const void *ptr) +{ + const struct quic_connection_info_key *key = arg->key; + const struct quic_connection_rhash *x = ptr; + + return !!memcmp(&x->crypto_ctx.conn_info.key, + key, + sizeof(struct quic_connection_info_key)); +} + +static const struct rhashtable_params quic_tx_connection_params = { + .key_len = sizeof(struct quic_connection_info_key), + .head_offset = offsetof(struct quic_connection_rhash, node), + .hashfn = quic_tx_connection_hash, + .obj_hashfn = quic_tx_connection_hash_obj, + .obj_cmpfn = quic_tx_connection_hash_cmp, + .automatic_shrinking = true, +}; + +static inline size_t quic_crypto_key_size(u16 cipher_type) +{ + switch (cipher_type) { + case TLS_CIPHER_AES_GCM_128: + return TLS_CIPHER_AES_GCM_128_KEY_SIZE; + case TLS_CIPHER_AES_GCM_256: + return TLS_CIPHER_AES_GCM_256_KEY_SIZE; + case TLS_CIPHER_AES_CCM_128: + return TLS_CIPHER_AES_CCM_128_KEY_SIZE; + case TLS_CIPHER_CHACHA20_POLY1305: + return TLS_CIPHER_CHACHA20_POLY1305_KEY_SIZE; + default: + break; + } + WARN_ON("Unsupported cipher type"); + return 0; +} + +static inline size_t quic_crypto_tag_size(u16 cipher_type) +{ + switch (cipher_type) { + case TLS_CIPHER_AES_GCM_128: + return TLS_CIPHER_AES_GCM_128_TAG_SIZE; + case TLS_CIPHER_AES_GCM_256: + return TLS_CIPHER_AES_GCM_256_TAG_SIZE; + case TLS_CIPHER_AES_CCM_128: + return TLS_CIPHER_AES_CCM_128_TAG_SIZE; + case TLS_CIPHER_CHACHA20_POLY1305: + return TLS_CIPHER_CHACHA20_POLY1305_TAG_SIZE; + default: + break; + } + WARN_ON("Unsupported cipher type"); + return 0; +} + +static inline size_t quic_crypto_iv_size(u16 cipher_type) +{ + switch (cipher_type) { + case TLS_CIPHER_AES_GCM_128: + BUILD_BUG_ON(TLS_CIPHER_AES_GCM_128_IV_SIZE + > QUIC_CIPHER_MAX_IV_SIZE); + return TLS_CIPHER_AES_GCM_128_IV_SIZE; + case TLS_CIPHER_AES_GCM_256: + BUILD_BUG_ON(TLS_CIPHER_AES_GCM_256_IV_SIZE + > QUIC_CIPHER_MAX_IV_SIZE); + return TLS_CIPHER_AES_GCM_256_IV_SIZE; + case TLS_CIPHER_AES_CCM_128: + BUILD_BUG_ON(TLS_CIPHER_AES_CCM_128_IV_SIZE + > QUIC_CIPHER_MAX_IV_SIZE); + return TLS_CIPHER_AES_CCM_128_IV_SIZE; + case TLS_CIPHER_CHACHA20_POLY1305: + BUILD_BUG_ON(TLS_CIPHER_CHACHA20_POLY1305_IV_SIZE + > QUIC_CIPHER_MAX_IV_SIZE); + return TLS_CIPHER_CHACHA20_POLY1305_IV_SIZE; + default: + break; + } + WARN_ON("Unsupported cipher type"); + return 0; +} + +static inline 
size_t quic_crypto_nonce_size(u16 cipher_type) +{ + switch (cipher_type) { + case TLS_CIPHER_AES_GCM_128: + BUILD_BUG_ON(TLS_CIPHER_AES_GCM_128_IV_SIZE + + TLS_CIPHER_AES_GCM_128_SALT_SIZE > + QUIC_CIPHER_MAX_NONCE_SIZE); + return TLS_CIPHER_AES_GCM_128_IV_SIZE + + TLS_CIPHER_AES_GCM_128_SALT_SIZE; + case TLS_CIPHER_AES_GCM_256: + BUILD_BUG_ON(TLS_CIPHER_AES_GCM_256_IV_SIZE + + TLS_CIPHER_AES_GCM_256_SALT_SIZE > + QUIC_CIPHER_MAX_NONCE_SIZE); + return TLS_CIPHER_AES_GCM_256_IV_SIZE + + TLS_CIPHER_AES_GCM_256_SALT_SIZE; + case TLS_CIPHER_AES_CCM_128: + BUILD_BUG_ON(TLS_CIPHER_AES_CCM_128_IV_SIZE + + TLS_CIPHER_AES_CCM_128_SALT_SIZE > + QUIC_CIPHER_MAX_NONCE_SIZE); + return TLS_CIPHER_AES_CCM_128_IV_SIZE + + TLS_CIPHER_AES_CCM_128_SALT_SIZE; + case TLS_CIPHER_CHACHA20_POLY1305: + BUILD_BUG_ON(TLS_CIPHER_CHACHA20_POLY1305_IV_SIZE + + TLS_CIPHER_CHACHA20_POLY1305_SALT_SIZE > + QUIC_CIPHER_MAX_NONCE_SIZE); + return TLS_CIPHER_CHACHA20_POLY1305_IV_SIZE + + TLS_CIPHER_CHACHA20_POLY1305_SALT_SIZE; + default: + break; + } + WARN_ON("Unsupported cipher type"); + return 0; +} + +static inline +u8 *quic_payload_iv(struct quic_internal_crypto_context *crypto_ctx) +{ + switch (crypto_ctx->conn_info.cipher_type) { + case TLS_CIPHER_AES_GCM_128: + return crypto_ctx->conn_info.aes_gcm_128.payload_iv; + case TLS_CIPHER_AES_GCM_256: + return crypto_ctx->conn_info.aes_gcm_256.payload_iv; + case TLS_CIPHER_AES_CCM_128: + return crypto_ctx->conn_info.aes_ccm_128.payload_iv; + case TLS_CIPHER_CHACHA20_POLY1305: + return crypto_ctx->conn_info.chacha20_poly1305.payload_iv; + default: + break; + } + WARN_ON("Unsupported cipher type"); + return 0; +} + +static int +quic_config_header_crypto(struct quic_internal_crypto_context *crypto_ctx) +{ + struct crypto_skcipher *tfm; + char *header_cipher; + int rc = 0; + char *key; + + switch (crypto_ctx->conn_info.cipher_type) { + case TLS_CIPHER_AES_GCM_128: + header_cipher = "ecb(aes)"; + key = crypto_ctx->conn_info.aes_gcm_128.header_key; + break; + case TLS_CIPHER_AES_GCM_256: + header_cipher = "ecb(aes)"; + key = crypto_ctx->conn_info.aes_gcm_256.header_key; + break; + case TLS_CIPHER_AES_CCM_128: + header_cipher = "ecb(aes)"; + key = crypto_ctx->conn_info.aes_ccm_128.header_key; + break; + case TLS_CIPHER_CHACHA20_POLY1305: + header_cipher = "chacha20"; + key = crypto_ctx->conn_info.chacha20_poly1305.header_key; + break; + default: + rc = -EINVAL; + goto out; + } + + tfm = crypto_alloc_skcipher(header_cipher, 0, 0); + if (IS_ERR(tfm)) { + rc = PTR_ERR(tfm); + goto out; + } + + rc = crypto_skcipher_setkey(tfm, key, + quic_crypto_key_size(crypto_ctx->conn_info + .cipher_type)); + if (rc) { + crypto_free_skcipher(tfm); + goto out; + } + + crypto_ctx->header_tfm = tfm; + +out: + return rc; +} + +static int +quic_config_packet_crypto(struct quic_internal_crypto_context *crypto_ctx) +{ + struct crypto_aead *aead; + char *cipher_name; + int rc = 0; + char *key; + + switch (crypto_ctx->conn_info.cipher_type) { + case TLS_CIPHER_AES_GCM_128: { + key = crypto_ctx->conn_info.aes_gcm_128.payload_key; + cipher_name = "gcm(aes)"; + break; + } + case TLS_CIPHER_AES_GCM_256: { + key = crypto_ctx->conn_info.aes_gcm_256.payload_key; + cipher_name = "gcm(aes)"; + break; + } + case TLS_CIPHER_AES_CCM_128: { + key = crypto_ctx->conn_info.aes_ccm_128.payload_key; + cipher_name = "ccm(aes)"; + break; + } + case TLS_CIPHER_CHACHA20_POLY1305: { + key = crypto_ctx->conn_info.chacha20_poly1305.payload_key; + cipher_name = "rfc7539(chacha20,poly1305)"; + break; + } + default: + rc = -EINVAL; + 
goto out; + } + + aead = crypto_alloc_aead(cipher_name, 0, 0); + if (IS_ERR(aead)) { + rc = PTR_ERR(aead); + goto out; + } + + rc = crypto_aead_setkey(aead, key, + quic_crypto_key_size(crypto_ctx->conn_info + .cipher_type)); + if (rc) + goto free_aead; + + rc = crypto_aead_setauthsize(aead, + quic_crypto_tag_size(crypto_ctx->conn_info + .cipher_type)); + if (rc) + goto free_aead; + + crypto_ctx->packet_aead = aead; + goto out; + +free_aead: + crypto_free_aead(aead); + +out: + return rc; +} + +static inline struct quic_context *quic_get_ctx(struct sock *sk) +{ + struct inet_sock *inet = inet_sk(sk); + + return (__force void *)rcu_access_pointer(inet->ulp_data); +} + +static void quic_free_cipher_page(struct page *page) +{ + __free_pages(page, QUIC_MAX_CIPHER_PAGES_ORDER); +} + +static struct quic_context *quic_ctx_create(void) +{ + struct quic_context *ctx; + + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); + if (!ctx) + return NULL; + + mutex_init(&ctx->sendmsg_mux); + ctx->cipher_page = alloc_pages(GFP_KERNEL, QUIC_MAX_CIPHER_PAGES_ORDER); + if (!ctx->cipher_page) + goto out_err; + + if (rhashtable_init(&ctx->tx_connections, + &quic_tx_connection_params) < 0) { + quic_free_cipher_page(ctx->cipher_page); + goto out_err; + } + + return ctx; + +out_err: + kfree(ctx); + return NULL; +} + +static int quic_getsockopt(struct sock *sk, int level, int optname, + char __user *optval, int __user *optlen) +{ + struct quic_context *ctx = quic_get_ctx(sk); + + return ctx->sk_proto->getsockopt(sk, level, optname, optval, optlen); +} + +static int do_quic_conn_add_tx(struct sock *sk, sockptr_t optval, + unsigned int optlen) +{ + struct quic_internal_crypto_context *crypto_ctx; + struct quic_context *ctx = quic_get_ctx(sk); + struct quic_connection_rhash *connhash; + int rc = 0; + + if (sockptr_is_null(optval)) + return -EINVAL; + + if (optlen != sizeof(struct quic_connection_info)) + return -EINVAL; + + connhash = kzalloc(sizeof(*connhash), GFP_KERNEL); + if (!connhash) + return -EFAULT; + + crypto_ctx = &connhash->crypto_ctx; + rc = copy_from_sockptr(&crypto_ctx->conn_info, optval, + sizeof(crypto_ctx->conn_info)); + if (rc) { + rc = -EFAULT; + goto err_crypto_info; + } + + // create all TLS materials for packet and header decryption + rc = quic_config_header_crypto(crypto_ctx); + if (rc) + goto err_crypto_info; + + rc = quic_config_packet_crypto(crypto_ctx); + if (rc) + goto err_free_skcipher; + + // insert crypto data into hash per connection ID + rc = rhashtable_insert_fast(&ctx->tx_connections, &connhash->node, + quic_tx_connection_params); + if (rc < 0) + goto err_free_ciphers; + + return 0; + +err_free_ciphers: + crypto_free_aead(crypto_ctx->packet_aead); + +err_free_skcipher: + crypto_free_skcipher(crypto_ctx->header_tfm); + +err_crypto_info: + // wipeout all crypto materials; + memzero_explicit(&connhash->crypto_ctx, sizeof(connhash->crypto_ctx)); + kfree(connhash); + return rc; +} + +static int do_quic_conn_del_tx(struct sock *sk, sockptr_t optval, + unsigned int optlen) +{ + struct quic_internal_crypto_context *crypto_ctx; + struct quic_context *ctx = quic_get_ctx(sk); + struct quic_connection_rhash *connhash; + struct quic_connection_info conn_info; + + if (sockptr_is_null(optval)) + return -EINVAL; + + if (optlen != sizeof(struct quic_connection_info)) + return -EINVAL; + + if (copy_from_sockptr(&conn_info, optval, optlen)) + return -EFAULT; + + connhash = rhashtable_lookup_fast(&ctx->tx_connections, + &conn_info.key, + quic_tx_connection_params); + if (!connhash) + return -EINVAL; + + 
rhashtable_remove_fast(&ctx->tx_connections, + &connhash->node, + quic_tx_connection_params); + + + crypto_ctx = &connhash->crypto_ctx; + + crypto_free_skcipher(crypto_ctx->header_tfm); + crypto_free_aead(crypto_ctx->packet_aead); + memzero_explicit(crypto_ctx, sizeof(*crypto_ctx)); + kfree(connhash); + + return 0; +} + +static int do_quic_setsockopt(struct sock *sk, int optname, sockptr_t optval, + unsigned int optlen) +{ + int rc = 0; + + switch (optname) { + case UDP_QUIC_ADD_TX_CONNECTION: + lock_sock(sk); + rc = do_quic_conn_add_tx(sk, optval, optlen); + release_sock(sk); + break; + case UDP_QUIC_DEL_TX_CONNECTION: + lock_sock(sk); + rc = do_quic_conn_del_tx(sk, optval, optlen); + release_sock(sk); + break; + default: + rc = -ENOPROTOOPT; + break; + } + + return rc; +} + +static int quic_setsockopt(struct sock *sk, int level, int optname, + sockptr_t optval, unsigned int optlen) +{ + struct quic_context *ctx; + struct proto *sk_proto; + + rcu_read_lock(); + ctx = quic_get_ctx(sk); + sk_proto = ctx->sk_proto; + rcu_read_unlock(); + + if (level == SOL_UDP && + (optname == UDP_QUIC_ADD_TX_CONNECTION || + optname == UDP_QUIC_DEL_TX_CONNECTION)) + return do_quic_setsockopt(sk, optname, optval, optlen); + + return sk_proto->setsockopt(sk, level, optname, optval, optlen); +} + +static int +quic_extract_ancillary_data(struct msghdr *msg, + struct quic_tx_ancillary_data *ancillary_data, + u16 *udp_pkt_size) +{ + struct cmsghdr *cmsg_hdr = 0; + void *ancillary_data_ptr = 0; + + if (!msg->msg_controllen) + return -EINVAL; + + for_each_cmsghdr(cmsg_hdr, msg) { + if (!CMSG_OK(msg, cmsg_hdr)) + return -EINVAL; + + if (cmsg_hdr->cmsg_level != IPPROTO_UDP) + continue; + + if (cmsg_hdr->cmsg_type == UDP_QUIC_ENCRYPT) { + if (cmsg_hdr->cmsg_len != + CMSG_LEN(sizeof(struct quic_tx_ancillary_data))) + return -EINVAL; + memcpy((void *)ancillary_data, CMSG_DATA(cmsg_hdr), + sizeof(struct quic_tx_ancillary_data)); + ancillary_data_ptr = cmsg_hdr; + } else if (cmsg_hdr->cmsg_type == UDP_SEGMENT) { + if (cmsg_hdr->cmsg_len != CMSG_LEN(sizeof(u16))) + return -EINVAL; + memcpy((void *)udp_pkt_size, CMSG_DATA(cmsg_hdr), + sizeof(u16)); + } + } + + if (!ancillary_data_ptr) + return -EINVAL; + + return 0; +} + +static int quic_sendmsg_validate(struct msghdr *msg) +{ + if (!iter_is_iovec(&msg->msg_iter)) + return -EINVAL; + + if (!msg->msg_controllen) + return -EINVAL; + + return 0; +} + +static struct quic_connection_rhash +*quic_lookup_connection(struct quic_context *ctx, + u8 *conn_id, + struct quic_tx_ancillary_data *ancillary_data) +{ + struct quic_connection_info_key conn_key; + + // Lookup connection information by the connection key. 
+ memset(&conn_key, 0, sizeof(struct quic_connection_info_key)); + // fill the connection id up to the max connection ID length + if (ancillary_data->conn_id_length > QUIC_MAX_CONNECTION_ID_SIZE) + return NULL; + + conn_key.conn_id_length = ancillary_data->conn_id_length; + if (ancillary_data->conn_id_length) + memcpy(conn_key.conn_id, + conn_id, + ancillary_data->conn_id_length); + return rhashtable_lookup_fast(&ctx->tx_connections, + &conn_key, + quic_tx_connection_params); +} + +static int quic_sg_capacity_from_msg(const size_t pkt_size, + const off_t offset, + const size_t length) +{ + size_t pages = 0; + size_t pkts = 0; + + pages = DIV_ROUND_UP(offset + length, PAGE_SIZE); + pkts = DIV_ROUND_UP(length, pkt_size); + return pages + pkts + 1; +} + +static void quic_put_plain_user_pages(struct page **pages, size_t nr_pages) +{ + int i; + + for (i = 0; i < nr_pages; ++i) + if (i == 0 || pages[i] != pages[i - 1]) + put_page(pages[i]); +} + +static int quic_get_plain_user_pages(struct msghdr * const msg, + struct page **pages, + int *page_indices) +{ + size_t nr_mapped = 0; + size_t nr_pages = 0; + void *data_addr; + void *page_addr; + size_t count = 0; + off_t data_off; + int ret = 0; + int i; + + for (i = 0; i < msg->msg_iter.nr_segs; ++i) { + data_addr = msg->msg_iter.iov[i].iov_base; + if (!i) + data_addr += msg->msg_iter.iov_offset; + page_addr = + (void *)((unsigned long)data_addr & PAGE_MASK); + + data_off = (unsigned long)data_addr & ~PAGE_MASK; + nr_pages = + DIV_ROUND_UP(data_off + msg->msg_iter.iov[i].iov_len, + PAGE_SIZE); + if (nr_mapped + nr_pages > QUIC_MAX_PLAIN_PAGES) { + quic_put_plain_user_pages(pages, nr_mapped); + ret = -ENOMEM; + goto out; + } + + count = get_user_pages((unsigned long)page_addr, nr_pages, 1, + pages, NULL); + if (count < nr_pages) { + quic_put_plain_user_pages(pages, nr_mapped + count); + ret = -ENOMEM; + goto out; + } + + page_indices[i] = nr_mapped; + nr_mapped += count; + pages += count; + } + ret = nr_mapped; + +out: + return ret; +} + +static int quic_sg_plain_from_mapped_msg(struct msghdr * const msg, + struct page **plain_pages, + void **iov_base_ptrs, + void **iov_data_ptrs, + const size_t plain_size, + const size_t pkt_size, + struct scatterlist * const sg_alloc, + const size_t max_sg_alloc, + struct scatterlist ** const sg_pkts, + size_t *nr_plain_pages) +{ + int iov_page_indices[QUIC_MAX_IOVEC_SEGMENTS]; + struct scatterlist *sg; + unsigned int pkt_i = 0; + ssize_t left_on_page; + size_t pkt_left; + unsigned int i; + size_t seg_len; + off_t page_ofs; + off_t seg_ofs; + int ret = 0; + int page_i; + + if (msg->msg_iter.nr_segs >= QUIC_MAX_IOVEC_SEGMENTS) { + ret = -ENOMEM; + goto out; + } + + ret = quic_get_plain_user_pages(msg, plain_pages, iov_page_indices); + if (ret < 0) + goto out; + + *nr_plain_pages = ret; + sg = sg_alloc; + sg_pkts[pkt_i] = sg; + sg_unmark_end(sg); + pkt_left = pkt_size; + for (i = 0; i < msg->msg_iter.nr_segs; ++i) { + page_ofs = ((unsigned long)msg->msg_iter.iov[i].iov_base + & (PAGE_SIZE - 1)); + page_i = 0; + if (!i) { + page_ofs += msg->msg_iter.iov_offset; + while (page_ofs >= PAGE_SIZE) { + page_ofs -= PAGE_SIZE; + page_i++; + } + } + + seg_len = msg->msg_iter.iov[i].iov_len; + page_i += iov_page_indices[i]; + + if (page_i >= QUIC_MAX_PLAIN_PAGES) + return -EFAULT; + + seg_ofs = 0; + while (seg_ofs < seg_len) { + if (sg - sg_alloc > max_sg_alloc) + return -EFAULT; + + sg_unmark_end(sg); + left_on_page = min_t(size_t, PAGE_SIZE - page_ofs, + seg_len - seg_ofs); + if (left_on_page <= 0) + return -EFAULT; + + if 
(left_on_page > pkt_left) { + sg_set_page(sg, plain_pages[page_i], pkt_left, + page_ofs); + pkt_i++; + seg_ofs += pkt_left; + page_ofs += pkt_left; + sg_mark_end(sg); + sg++; + sg_pkts[pkt_i] = sg; + pkt_left = pkt_size; + continue; + } + sg_set_page(sg, plain_pages[page_i], left_on_page, + page_ofs); + page_i++; + page_ofs = 0; + seg_ofs += left_on_page; + pkt_left -= left_on_page; + if (pkt_left == 0 || + (seg_ofs == seg_len && + i == msg->msg_iter.nr_segs - 1)) { + sg_mark_end(sg); + pkt_i++; + sg++; + sg_pkts[pkt_i] = sg; + pkt_left = pkt_size; + } else { + sg++; + } + } + } + + if (pkt_left && pkt_left != pkt_size) { + pkt_i++; + sg_mark_end(sg); + } + ret = pkt_i; + +out: + return ret; +} + +/* sg_alloc: allocated zeroed array of scatterlists + * cipher_page: preallocated compound page + */ +static int quic_sg_cipher_from_pkts(const size_t cipher_tag_size, + const size_t plain_pkt_size, + const size_t plain_size, + struct page * const cipher_page, + struct scatterlist * const sg_alloc, + const size_t nr_sg_alloc, + struct scatterlist ** const sg_cipher) +{ + const size_t cipher_pkt_size = plain_pkt_size + cipher_tag_size; + size_t pkts = DIV_ROUND_UP(plain_size, plain_pkt_size); + struct scatterlist *sg = sg_alloc; + int pkt_i; + void *ptr; + + if (pkts > nr_sg_alloc) + return -EINVAL; + + ptr = page_address(cipher_page); + for (pkt_i = 0; pkt_i < pkts; + ++pkt_i, ptr += cipher_pkt_size, ++sg) { + sg_set_buf(sg, ptr, cipher_pkt_size); + sg_mark_end(sg); + sg_cipher[pkt_i] = sg; + } + return pkts; +} + +/* fast copy from scatterlist to a buffer assuming that all pages are + * available in kernel memory. + */ +static int quic_sg_pcopy_to_buffer_kernel(struct scatterlist *sg, + u8 *buffer, + size_t bytes_to_copy, + off_t offset_to_read) +{ + off_t sg_remain = sg->length; + size_t to_copy; + + if (!bytes_to_copy) + return 0; + + // skip to offset first + while (offset_to_read > 0) { + if (!sg_remain) + return -EINVAL; + if (offset_to_read < sg_remain) { + sg_remain -= offset_to_read; + break; + } + offset_to_read -= sg_remain; + sg = sg_next(sg); + if (!sg) + return -EINVAL; + sg_remain = sg->length; + } + + // traverse sg list from offset to offset + bytes_to_copy + while (bytes_to_copy) { + to_copy = min_t(size_t, bytes_to_copy, sg_remain); + if (!to_copy) + return -EINVAL; + memcpy(buffer, sg_virt(sg) + (sg->length - sg_remain), to_copy); + buffer += to_copy; + bytes_to_copy -= to_copy; + if (bytes_to_copy) { + sg = sg_next(sg); + if (!sg) + return -EINVAL; + sg_remain = sg->length; + } + } + + return 0; +} + +static int quic_copy_header(struct scatterlist *sg_plain, + u8 *buf, const size_t buf_len, + const size_t conn_id_len) +{ + u8 *pkt = sg_virt(sg_plain); + size_t hdr_len; + + hdr_len = 1 + conn_id_len + ((*pkt & 0x03) + 1); + if (hdr_len > QUIC_MAX_SHORT_HEADER_SIZE || hdr_len > buf_len) + return -EINVAL; + + WARN_ON_ONCE(quic_sg_pcopy_to_buffer_kernel(sg_plain, buf, hdr_len, 0)); + return hdr_len; +} + +static u64 quic_unpack_pkt_num(struct quic_tx_ancillary_data * const control, + const u8 * const hdr, + const off_t payload_crypto_off) +{ + u64 truncated_pn = 0; + u64 candidate_pn; + u64 expected_pn; + u64 pn_hwin; + u64 pn_mask; + u64 pn_len; + u64 pn_win; + int i; + + pn_len = (hdr[0] & 0x03) + 1; + expected_pn = control->next_pkt_num; + + for (i = 1 + control->conn_id_length; i < payload_crypto_off; ++i) { + truncated_pn <<= 8; + truncated_pn |= hdr[i]; + } + + pn_win = 1ULL << (pn_len << 3); + pn_hwin = pn_win >> 1; + pn_mask = pn_win - 1; + candidate_pn = (expected_pn & 
~pn_mask) | truncated_pn; + + if (expected_pn > pn_hwin && + candidate_pn <= expected_pn - pn_hwin && + candidate_pn < (1ULL << 62) - pn_win) + return candidate_pn + pn_win; + + if (candidate_pn > expected_pn + pn_hwin && + candidate_pn >= pn_win) + return candidate_pn - pn_win; + + return candidate_pn; +} + +static int +quic_construct_header_prot_mask(struct quic_internal_crypto_context *crypto_ctx, + struct skcipher_request *hdr_mask_req, + struct scatterlist *sg_cipher_pkt, + off_t sample_offset, + u8 *hdr_mask) +{ + u8 *sample = sg_virt(sg_cipher_pkt) + sample_offset; + u8 hdr_ctr[sizeof(u32) + QUIC_CIPHER_MAX_IV_SIZE]; + struct scatterlist sg_cipher_sample; + struct scatterlist sg_hdr_mask; + struct crypto_wait wait_header; + u32 counter; + + BUILD_BUG_ON(QUIC_HDR_MASK_SIZE + < sizeof(u32) + QUIC_CIPHER_MAX_IV_SIZE); + + // cipher pages are continuous, get the pointer to the sg data directly, + // page is allocated in kernel + sg_init_one(&sg_cipher_sample, sample, QUIC_HDR_MASK_SIZE); + sg_init_one(&sg_hdr_mask, hdr_mask, QUIC_HDR_MASK_SIZE); + skcipher_request_set_callback(hdr_mask_req, 0, crypto_req_done, + &wait_header); + + if (crypto_ctx->conn_info.cipher_type == TLS_CIPHER_CHACHA20_POLY1305) { + counter = cpu_to_le32(*((u32 *)sample)); + memset(hdr_ctr, 0, sizeof(hdr_ctr)); + memcpy((u8 *)hdr_ctr, (u8 *)&counter, sizeof(u32)); + memcpy((u8 *)hdr_ctr + sizeof(u32), + (sample + sizeof(u32)), + QUIC_CIPHER_MAX_IV_SIZE); + skcipher_request_set_crypt(hdr_mask_req, &sg_cipher_sample, + &sg_hdr_mask, 5, hdr_ctr); + } else { + skcipher_request_set_crypt(hdr_mask_req, &sg_cipher_sample, + &sg_hdr_mask, QUIC_HDR_MASK_SIZE, + NULL); + } + + return crypto_wait_req(crypto_skcipher_encrypt(hdr_mask_req), + &wait_header); +} + +static int quic_protect_header(struct quic_internal_crypto_context *crypto_ctx, + struct quic_tx_ancillary_data *control, + struct skcipher_request *hdr_mask_req, + struct scatterlist *sg_cipher_pkt, + int payload_crypto_off) +{ + u8 hdr_mask[QUIC_HDR_MASK_SIZE]; + off_t quic_pkt_num_off; + u8 quic_pkt_num_len; + u8 *cipher_hdr; + int err; + int i; + + quic_pkt_num_off = 1 + control->conn_id_length; + quic_pkt_num_len = payload_crypto_off - quic_pkt_num_off; + + if (quic_pkt_num_len > 4) + return -EPERM; + + err = quic_construct_header_prot_mask(crypto_ctx, hdr_mask_req, + sg_cipher_pkt, + payload_crypto_off + + (4 - quic_pkt_num_len), + hdr_mask); + if (unlikely(err)) + return err; + + cipher_hdr = sg_virt(sg_cipher_pkt); + // protect the public flags + cipher_hdr[0] ^= (hdr_mask[0] & 0x1f); + + for (i = 0; i < quic_pkt_num_len; ++i) + cipher_hdr[quic_pkt_num_off + i] ^= hdr_mask[1 + i]; + + return 0; +} + +static +void quic_construct_ietf_nonce(u8 *nonce, + struct quic_internal_crypto_context *crypto_ctx, + u64 quic_pkt_num) +{ + u8 *iv = quic_payload_iv(crypto_ctx); + int i; + + for (i = quic_crypto_nonce_size(crypto_ctx->conn_info.cipher_type) - 1; + i >= 0 && quic_pkt_num; + --i, quic_pkt_num >>= 8) + nonce[i] = iv[i] ^ (u8)quic_pkt_num; + + for (; i >= 0; --i) + nonce[i] = iv[i]; +} + +ssize_t quic_sendpage(struct quic_context *ctx, + struct sock *sk, + struct msghdr *msg, + const size_t cipher_size, + struct page * const cipher_page) +{ + struct kvec iov; + ssize_t ret; + + iov.iov_base = page_address(cipher_page); + iov.iov_len = cipher_size; + iov_iter_kvec(&msg->msg_iter, WRITE | ITER_KVEC, &iov, 1, + cipher_size); + ret = security_socket_sendmsg(sk->sk_socket, msg, msg_data_left(msg)); + if (ret) + return ret; + + ret = ctx->sk_proto->sendmsg(sk, msg, 
msg_data_left(msg)); + WARN_ON(ret == -EIOCBQUEUED); + return ret; +} + +static int quic_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) +{ + struct quic_internal_crypto_context *crypto_ctx = NULL; + struct scatterlist *sg_cipher_pkts[QUIC_MAX_GSO_FRAGS]; + struct scatterlist *sg_plain_pkts[QUIC_MAX_GSO_FRAGS]; + struct page *plain_pages[QUIC_MAX_PLAIN_PAGES]; + void *plain_base_ptrs[QUIC_MAX_IOVEC_SEGMENTS]; + void *plain_data_ptrs[QUIC_MAX_IOVEC_SEGMENTS]; + struct msghdr msg_cipher = { + .msg_name = msg->msg_name, + .msg_namelen = msg->msg_namelen, + .msg_flags = msg->msg_flags, + .msg_control = msg->msg_control, + .msg_controllen = msg->msg_controllen, + }; + struct quic_connection_rhash *connhash = NULL; + struct quic_connection_info *conn_info = NULL; + struct quic_context *ctx = quic_get_ctx(sk); + u8 hdr_buf[QUIC_MAX_SHORT_HEADER_SIZE]; + struct skcipher_request *hdr_mask_req; + struct quic_tx_ancillary_data control; + u8 nonce[QUIC_CIPHER_MAX_NONCE_SIZE]; + struct aead_request *aead_req = 0; + struct scatterlist *sg_cipher = 0; + struct udp_sock *up = udp_sk(sk); + struct scatterlist *sg_plain = 0; + u16 gso_pkt_size = up->gso_size; + size_t last_plain_pkt_size = 0; + off_t payload_crypto_offset; + struct crypto_aead *tfm = 0; + size_t nr_plain_pages = 0; + struct crypto_wait waiter; + size_t nr_sg_cipher_pkts; + size_t nr_sg_plain_pkts; + ssize_t hdr_buf_len = 0; + size_t nr_sg_alloc = 0; + size_t plain_pkt_size; + u64 full_pkt_num; + size_t cipher_size; + size_t plain_size; + size_t pkt_size; + size_t tag_size; + int ret = 0; + int pkt_i; + int err; + + memset(&hdr_buf[0], 0, QUIC_MAX_SHORT_HEADER_SIZE); + hdr_buf_len = copy_from_iter(hdr_buf, QUIC_MAX_SHORT_HEADER_SIZE, + &msg->msg_iter); + if (hdr_buf_len <= 0) { + ret = -EINVAL; + goto out; + } + iov_iter_revert(&msg->msg_iter, hdr_buf_len); + + ctx = quic_get_ctx(sk); + + // Bypass for anything that is guaranteed not QUIC. + plain_size = len; + + if (plain_size < 2) + return ctx->sk_proto->sendmsg(sk, msg, len); + + // Bypass for other than short header. + if ((hdr_buf[0] & 0xc0) != 0x40) + return ctx->sk_proto->sendmsg(sk, msg, len); + + // Crypto adds a tag after the packet. Corking a payload would produce + // a crypto tag after each portion. Use GSO instead. + if ((msg->msg_flags & MSG_MORE) || up->pending) { + ret = -EINVAL; + goto out; + } + + ret = quic_sendmsg_validate(msg); + if (ret) + goto out; + + ret = quic_extract_ancillary_data(msg, &control, &gso_pkt_size); + if (ret) + goto out; + + // Reserved bits with ancillary data present are an error. + if (control.flags & ~QUIC_ANCILLARY_FLAGS) { + ret = -EINVAL; + goto out; + } + + // Bypass offload on request. First packet bypass applies to all + // packets in the GSO pack. + if (control.flags & QUIC_BYPASS_ENCRYPTION) + return ctx->sk_proto->sendmsg(sk, msg, len); + + if (hdr_buf_len < 1 + control.conn_id_length) { + ret = -EINVAL; + goto out; + } + + // Fetch the flow + connhash = quic_lookup_connection(ctx, &hdr_buf[1], &control); + if (!connhash) { + ret = -EINVAL; + goto out; + } + + crypto_ctx = &connhash->crypto_ctx; + conn_info = &crypto_ctx->conn_info; + + tag_size = quic_crypto_tag_size(crypto_ctx->conn_info.cipher_type); + + // For GSO, use the GSO size minus cipher tag size as the packet size; + // for non-GSO, use the size of the whole plaintext. + // Reduce the packet size by tag size to keep the original packet size + // for the rest of the UDP path in the stack. 
+ if (!gso_pkt_size) { + plain_pkt_size = plain_size; + } else { + if (gso_pkt_size < tag_size) + goto out; + + plain_pkt_size = gso_pkt_size - tag_size; + } + + // Build scatterlist from the input data, split by GSO minus the + // crypto tag size. + nr_sg_alloc = quic_sg_capacity_from_msg(plain_pkt_size, + msg->msg_iter.iov_offset, + plain_size); + if ((nr_sg_alloc * 2) >= QUIC_MAX_SG_ALLOC_ELEMENTS) { + ret = -ENOMEM; + goto out; + } + + sg_plain = ctx->sg_alloc; + sg_cipher = sg_plain + nr_sg_alloc; + + ret = quic_sg_plain_from_mapped_msg(msg, plain_pages, + plain_base_ptrs, + plain_data_ptrs, plain_size, + plain_pkt_size, sg_plain, + nr_sg_alloc, sg_plain_pkts, + &nr_plain_pages); + + if (ret < 0) + goto out; + + nr_sg_plain_pkts = ret; + last_plain_pkt_size = plain_size % plain_pkt_size; + if (!last_plain_pkt_size) + last_plain_pkt_size = plain_pkt_size; + + // Build scatterlist for the ciphertext, split by GSO. + cipher_size = plain_size + nr_sg_plain_pkts * tag_size; + + if (DIV_ROUND_UP(cipher_size, PAGE_SIZE) + >= (1 << QUIC_MAX_CIPHER_PAGES_ORDER)) { + ret = -ENOMEM; + goto out_put_pages; + } + + ret = quic_sg_cipher_from_pkts(tag_size, plain_pkt_size, plain_size, + ctx->cipher_page, sg_cipher, nr_sg_alloc, + sg_cipher_pkts); + if (ret < 0) + goto out_put_pages; + + nr_sg_cipher_pkts = ret; + + if (nr_sg_plain_pkts != nr_sg_cipher_pkts) { + ret = -EPERM; + goto out_put_pages; + } + + // Encrypt and protect header for each packet individually. + tfm = crypto_ctx->packet_aead; + crypto_aead_clear_flags(tfm, ~0); + aead_req = aead_request_alloc(tfm, GFP_KERNEL); + if (!aead_req) { + aead_request_free(aead_req); + ret = -ENOMEM; + goto out_put_pages; + } + + hdr_mask_req = skcipher_request_alloc(crypto_ctx->header_tfm, + GFP_KERNEL); + if (!hdr_mask_req) { + aead_request_free(aead_req); + ret = -ENOMEM; + goto out_put_pages; + } + + for (pkt_i = 0; pkt_i < nr_sg_plain_pkts; ++pkt_i) { + payload_crypto_offset = + quic_copy_header(sg_plain_pkts[pkt_i], + hdr_buf, + sizeof(hdr_buf), + control.conn_id_length); + + full_pkt_num = quic_unpack_pkt_num(&control, hdr_buf, + payload_crypto_offset); + + pkt_size = (pkt_i + 1 < nr_sg_plain_pkts + ? plain_pkt_size + : last_plain_pkt_size) + - payload_crypto_offset; + if (pkt_size < 0) { + aead_request_free(aead_req); + skcipher_request_free(hdr_mask_req); + ret = -EINVAL; + goto out_put_pages; + } + + /* Construct nonce and initialize request */ + quic_construct_ietf_nonce(nonce, crypto_ctx, full_pkt_num); + + /* Encrypt the body */ + aead_request_set_callback(aead_req, + CRYPTO_TFM_REQ_MAY_BACKLOG + | CRYPTO_TFM_REQ_MAY_SLEEP, + crypto_req_done, &waiter); + aead_request_set_crypt(aead_req, sg_plain_pkts[pkt_i], + sg_cipher_pkts[pkt_i], + pkt_size, + nonce); + aead_request_set_ad(aead_req, payload_crypto_offset); + err = crypto_wait_req(crypto_aead_encrypt(aead_req), &waiter); + if (unlikely(err)) { + ret = err; + aead_request_free(aead_req); + skcipher_request_free(hdr_mask_req); + goto out_put_pages; + } + + /* Protect the header */ + memcpy(sg_virt(sg_cipher_pkts[pkt_i]), hdr_buf, + payload_crypto_offset); + + err = quic_protect_header(crypto_ctx, &control, + hdr_mask_req, + sg_cipher_pkts[pkt_i], + payload_crypto_offset); + if (unlikely(err)) { + ret = err; + aead_request_free(aead_req); + skcipher_request_free(hdr_mask_req); + goto out_put_pages; + } + } + skcipher_request_free(hdr_mask_req); + aead_request_free(aead_req); + + // Deliver to the next layer. 
+ if (ctx->sk_proto->sendpage) { + msg_cipher.msg_flags |= MSG_MORE; + err = ctx->sk_proto->sendmsg(sk, &msg_cipher, 0); + if (err < 0) { + ret = err; + goto out_put_pages; + } + + err = ctx->sk_proto->sendpage(sk, ctx->cipher_page, 0, + cipher_size, 0); + if (err < 0) { + ret = err; + goto out_put_pages; + } + if (err != cipher_size) { + ret = -EINVAL; + goto out_put_pages; + } + ret = plain_size; + } else { + ret = quic_sendpage(ctx, sk, &msg_cipher, cipher_size, + ctx->cipher_page); + // indicate full plaintext transmission to the caller. + if (ret > 0) + ret = plain_size; + } + + +out_put_pages: + quic_put_plain_user_pages(plain_pages, nr_plain_pages); + +out: + return ret; +} + +static int quic_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t len) +{ + struct quic_context *ctx; + int ret; + + rcu_read_lock(); + ctx = quic_get_ctx(sk); + rcu_read_unlock(); + if (!ctx) + return -EINVAL; + + mutex_lock(&ctx->sendmsg_mux); + ret = quic_sendmsg(sk, msg, len); + mutex_unlock(&ctx->sendmsg_mux); + return ret; +} + +static void quic_release_resources(struct sock *sk) +{ + struct quic_internal_crypto_context *crypto_ctx; + struct quic_connection_rhash *connhash; + struct inet_sock *inet = inet_sk(sk); + struct rhashtable_iter hti; + struct quic_context *ctx; + struct proto *sk_proto; + + rcu_read_lock(); + ctx = quic_get_ctx(sk); + if (!ctx) { + rcu_read_unlock(); + return; + } + + sk_proto = ctx->sk_proto; + + rhashtable_walk_enter(&ctx->tx_connections, &hti); + rhashtable_walk_start(&hti); + + while ((connhash = rhashtable_walk_next(&hti))) { + if (IS_ERR(connhash)) { + if (PTR_ERR(connhash) == -EAGAIN) + continue; + break; + } + + crypto_ctx = &connhash->crypto_ctx; + crypto_free_aead(crypto_ctx->packet_aead); + crypto_free_skcipher(crypto_ctx->header_tfm); + memzero_explicit(crypto_ctx, sizeof(*crypto_ctx)); + } + + rhashtable_walk_stop(&hti); + rhashtable_walk_exit(&hti); + rhashtable_destroy(&ctx->tx_connections); + + if (ctx->cipher_page) { + quic_free_cipher_page(ctx->cipher_page); + ctx->cipher_page = NULL; + } + + rcu_read_unlock(); + + write_lock_bh(&sk->sk_callback_lock); + rcu_assign_pointer(inet->ulp_data, NULL); + WRITE_ONCE(sk->sk_prot, sk_proto); + write_unlock_bh(&sk->sk_callback_lock); + + kfree_rcu(ctx, rcu); +} + +static void +quic_prep_protos(unsigned int af, struct proto *proto, const struct proto *base) +{ + if (likely(test_bit(af, &af_init_done))) + return; + + spin_lock(&quic_proto_lock); + if (test_bit(af, &af_init_done)) + goto out_unlock; + + *proto = *base; + proto->setsockopt = quic_setsockopt; + proto->getsockopt = quic_getsockopt; + proto->sendmsg = quic_sendmsg_locked; + + smp_mb__before_atomic(); /* proto calls should be visible first */ + set_bit(af, &af_init_done); + +out_unlock: + spin_unlock(&quic_proto_lock); +} + +static void quic_update_proto(struct sock *sk, struct quic_context *ctx) +{ + struct proto *udp_proto, *quic_proto; + struct inet_sock *inet = inet_sk(sk); + + udp_proto = READ_ONCE(sk->sk_prot); + ctx->sk_proto = udp_proto; + quic_proto = sk->sk_family == AF_INET ? 
&quic_v4_proto : &quic_v6_proto; + + quic_prep_protos(sk->sk_family, quic_proto, udp_proto); + + write_lock_bh(&sk->sk_callback_lock); + rcu_assign_pointer(inet->ulp_data, ctx); + WRITE_ONCE(sk->sk_prot, quic_proto); + write_unlock_bh(&sk->sk_callback_lock); +} + +static int quic_init(struct sock *sk) +{ + struct quic_context *ctx; + + ctx = quic_ctx_create(); + if (!ctx) + return -ENOMEM; + + quic_update_proto(sk, ctx); + + return 0; +} + +static void quic_release(struct sock *sk) +{ + lock_sock(sk); + quic_release_resources(sk); + release_sock(sk); +} + +static struct udp_ulp_ops quic_ulp_ops __read_mostly = { + .name = "quic-crypto", + .owner = THIS_MODULE, + .init = quic_init, + .release = quic_release, +}; + +static int __init quic_register(void) +{ + udp_register_ulp(&quic_ulp_ops); + return 0; +} + +static void __exit quic_unregister(void) +{ + udp_unregister_ulp(&quic_ulp_ops); +} + +module_init(quic_register); +module_exit(quic_unregister); + +MODULE_DESCRIPTION("QUIC crypto ULP"); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_UDP_ULP("quic-crypto"); From patchwork Sat Aug 6 00:11:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Adel Abouchaev X-Patchwork-Id: 595975 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 13F1EC25B0D for ; Sat, 6 Aug 2022 00:12:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241476AbiHFAMV (ORCPT ); Fri, 5 Aug 2022 20:12:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42548 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241470AbiHFAMP (ORCPT ); Fri, 5 Aug 2022 20:12:15 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 033AD7B1C1; Fri, 5 Aug 2022 17:12:14 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id q9-20020a17090a2dc900b001f58bcaca95so2815830pjm.3; Fri, 05 Aug 2022 17:12:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=sww81hoiL59hzTectLjSNsU2UHbDqTNlrA1vYd8UUJQ=; b=ZJiK+elXtWU+A731EPxOjWHWQPrY1XxUpHA8kSRIbnmhUVHKX8e9meCY3wi932Ln1Y 5RuD4pim7/4b3N8UtrmoZSnHkBiiGpMYqyMng1gaz60VogHMoZ9ddrl3XBZ2MreWrCE8 elc3HEJnMR12iApk/IUjZHfx53QPfADVizDxo6RG88d7L7R+JR7zMe1sVFoKrjVunol7 he7tk/oGMmnqLYjRUCBaeBr2vjkbHB9SClN3wJx0iHNf/9T0Qjlf+VwlcurrH/1eaDtl BQWDZJYQfFOP8onSEIvLSw/xhdM/n/ppxV5/LIN3EQVXZ1iU+NO2Lnm3Fop1HHp6a8Y1 y2Pg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=sww81hoiL59hzTectLjSNsU2UHbDqTNlrA1vYd8UUJQ=; b=sqjLMQFA7rvTT1xA0CaT1H/lMXMF1vyDQFltExThyGiVn0tvYnGxSQC7MxsVHq4+0y fKY97ORPhOIPew5jJLRnmh4oT/PROOukYKzRD5xghj2Fnj/q0dA0J3V10yKAFFBvOgnZ TX2QrjSryPiAcHmeGZ/7I/Kgz30xfPwQMyGGzu/s6KMFXedZWeyGITNM4KXGSjnPj7vr LEArTdEGcOAVygZ0+zJUvw5rs5VMWV9/ECYgW8q3aX0LvavpKR/ro4VBaPw5H27990cW NgjB2WRq0BxYawFYZMWmO58ofuMxh4RHxPjcbXlDyJUXTz9k41vgjh7ot3kxedPUmr7X NO1A== X-Gm-Message-State: ACgBeo1NSWBLhUblAQvnTAVVvJO0ZvRINAe9NbwbYOlrM3twTxNTlJNT 
S0LjsH9oQ4MVy2vav2gNYoMz8dxhFtiWVg== X-Google-Smtp-Source: AA6agR7i5M0tsV5LbPZcxfgyDoZwVGGjo65A9IyURpzwqnpW0d7BUw1ulPZQVyl2Z7CEj//mSVHNlg== X-Received: by 2002:a17:902:a502:b0:16b:fbd9:7fc5 with SMTP id s2-20020a170902a50200b0016bfbd97fc5mr9259280plq.112.1659744733183; Fri, 05 Aug 2022 17:12:13 -0700 (PDT) Received: from localhost (fwdproxy-prn-006.fbsv.net. [2a03:2880:ff:6::face:b00c]) by smtp.gmail.com with ESMTPSA id o6-20020aa79786000000b0052aaff953aesm3590894pfp.115.2022.08.05.17.12.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 05 Aug 2022 17:12:12 -0700 (PDT) From: Adel Abouchaev To: kuba@kernel.org Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com, corbet@lwn.net, dsahern@kernel.org, shuah@kernel.org, imagedong@tencent.com, netdev@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org Subject: [RFC net-next v2 5/6] Add flow counters and Tx processing error counter Date: Fri, 5 Aug 2022 17:11:52 -0700 Message-Id: <20220806001153.1461577-6-adel.abushaev@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220806001153.1461577-1-adel.abushaev@gmail.com> References: <20220806001153.1461577-1-adel.abushaev@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Added flow counters. Total flow counter is accumulative, the current shows the number of flows currently in flight, the error counters is accumulating the number of errors during Tx processing. Signed-off-by: Adel Abouchaev --- include/net/netns/mib.h | 3 +++ include/net/quic.h | 10 +++++++++ include/net/snmp.h | 6 +++++ include/uapi/linux/snmp.h | 11 ++++++++++ net/quic/Makefile | 2 +- net/quic/quic_main.c | 46 +++++++++++++++++++++++++++++++++++++++ net/quic/quic_proc.c | 45 ++++++++++++++++++++++++++++++++++++++ 7 files changed, 122 insertions(+), 1 deletion(-) create mode 100644 net/quic/quic_proc.c diff --git a/include/net/netns/mib.h b/include/net/netns/mib.h index 7e373664b1e7..dcbba3d1ceec 100644 --- a/include/net/netns/mib.h +++ b/include/net/netns/mib.h @@ -24,6 +24,9 @@ struct netns_mib { #if IS_ENABLED(CONFIG_TLS) DEFINE_SNMP_STAT(struct linux_tls_mib, tls_statistics); #endif +#if IS_ENABLED(CONFIG_QUIC) + DEFINE_SNMP_STAT(struct linux_quic_mib, quic_statistics); +#endif #ifdef CONFIG_MPTCP DEFINE_SNMP_STAT(struct mptcp_mib, mptcp_statistics); #endif diff --git a/include/net/quic.h b/include/net/quic.h index 15e04ea08c53..b6327f3b7632 100644 --- a/include/net/quic.h +++ b/include/net/quic.h @@ -25,6 +25,16 @@ #define QUIC_MAX_PLAIN_PAGES 16 #define QUIC_MAX_CIPHER_PAGES_ORDER 4 +#define __QUIC_INC_STATS(net, field) \ + __SNMP_INC_STATS((net)->mib.quic_statistics, field) +#define QUIC_INC_STATS(net, field) \ + SNMP_INC_STATS((net)->mib.quic_statistics, field) +#define QUIC_DEC_STATS(net, field) \ + SNMP_DEC_STATS((net)->mib.quic_statistics, field) + +int __net_init quic_proc_init(struct net *net); +void __net_exit quic_proc_fini(struct net *net); + struct quic_internal_crypto_context { struct quic_connection_info conn_info; struct crypto_skcipher *header_tfm; diff --git a/include/net/snmp.h b/include/net/snmp.h index 468a67836e2f..f94680a3e9e8 100644 --- a/include/net/snmp.h +++ b/include/net/snmp.h @@ -117,6 +117,12 @@ struct linux_tls_mib { unsigned long mibs[LINUX_MIB_TLSMAX]; }; +/* Linux QUIC */ +#define LINUX_MIB_QUICMAX __LINUX_MIB_QUICMAX +struct linux_quic_mib { + unsigned long mibs[LINUX_MIB_QUICMAX]; +}; + #define DEFINE_SNMP_STAT(type, name) \ __typeof__(type) __percpu *name #define 
DEFINE_SNMP_STAT_ATOMIC(type, name) \ diff --git a/include/uapi/linux/snmp.h b/include/uapi/linux/snmp.h index 4d7470036a8b..7bb2768b528a 100644 --- a/include/uapi/linux/snmp.h +++ b/include/uapi/linux/snmp.h @@ -349,4 +349,15 @@ enum __LINUX_MIB_TLSMAX }; +/* linux QUIC mib definitions */ +enum +{ + LINUX_MIB_QUICNUM = 0, + LINUX_MIB_QUICCURRTXSW, /* QuicCurrTxSw */ + LINUX_MIB_QUICTXSW, /* QuicTxSw */ + LINUX_MIB_QUICTXSWERROR, /* QuicTxSwError */ + __LINUX_MIB_QUICMAX +}; + + #endif /* _LINUX_SNMP_H */ diff --git a/net/quic/Makefile b/net/quic/Makefile index 928239c4d08c..a885cd8bc4e0 100644 --- a/net/quic/Makefile +++ b/net/quic/Makefile @@ -5,4 +5,4 @@ obj-$(CONFIG_QUIC) += quic.o -quic-y := quic_main.o +quic-y := quic_main.o quic_proc.o diff --git a/net/quic/quic_main.c b/net/quic/quic_main.c index e738c8130a4f..eb0fdeabd3c4 100644 --- a/net/quic/quic_main.c +++ b/net/quic/quic_main.c @@ -362,6 +362,8 @@ static int do_quic_conn_add_tx(struct sock *sk, sockptr_t optval, if (rc < 0) goto err_free_ciphers; + QUIC_INC_STATS(sock_net(sk), LINUX_MIB_QUICCURRTXSW); + QUIC_INC_STATS(sock_net(sk), LINUX_MIB_QUICTXSW); return 0; err_free_ciphers: @@ -411,6 +413,7 @@ static int do_quic_conn_del_tx(struct sock *sk, sockptr_t optval, crypto_free_aead(crypto_ctx->packet_aead); memzero_explicit(crypto_ctx, sizeof(*crypto_ctx)); kfree(connhash); + QUIC_DEC_STATS(sock_net(sk), LINUX_MIB_QUICCURRTXSW); return 0; } @@ -436,6 +439,9 @@ static int do_quic_setsockopt(struct sock *sk, int optname, sockptr_t optval, break; } + if (rc) + QUIC_INC_STATS(sock_net(sk), LINUX_MIB_QUICTXSWERROR); + return rc; } @@ -1242,6 +1248,9 @@ static int quic_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) quic_put_plain_user_pages(plain_pages, nr_plain_pages); out: + if (unlikely(ret < 0)) + QUIC_INC_STATS(sock_net(sk), LINUX_MIB_QUICTXSWERROR); + return ret; } @@ -1374,6 +1383,36 @@ static void quic_release(struct sock *sk) release_sock(sk); } +static int __net_init quic_init_net(struct net *net) +{ + int err; + + net->mib.quic_statistics = alloc_percpu(struct linux_quic_mib); + if (!net->mib.quic_statistics) + return -ENOMEM; + + err = quic_proc_init(net); + if (err) + goto err_free_stats; + + return 0; + +err_free_stats: + free_percpu(net->mib.quic_statistics); + return err; +} + +static void __net_exit quic_exit_net(struct net *net) +{ + quic_proc_fini(net); + free_percpu(net->mib.quic_statistics); +} + +static struct pernet_operations quic_proc_ops = { + .init = quic_init_net, + .exit = quic_exit_net, +}; + static struct udp_ulp_ops quic_ulp_ops __read_mostly = { .name = "quic-crypto", .owner = THIS_MODULE, @@ -1383,6 +1422,12 @@ static struct udp_ulp_ops quic_ulp_ops __read_mostly = { static int __init quic_register(void) { + int err; + + err = register_pernet_subsys(&quic_proc_ops); + if (err) + return err; + udp_register_ulp(&quic_ulp_ops); return 0; } @@ -1390,6 +1435,7 @@ static int __init quic_register(void) static void __exit quic_unregister(void) { udp_unregister_ulp(&quic_ulp_ops); + unregister_pernet_subsys(&quic_proc_ops); } module_init(quic_register); diff --git a/net/quic/quic_proc.c b/net/quic/quic_proc.c new file mode 100644 index 000000000000..cb4fe7a589b5 --- /dev/null +++ b/net/quic/quic_proc.c @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2019 Meta Platforms, Inc. 
*/ + +#include +#include +#include +#include + +#ifdef CONFIG_PROC_FS +static const struct snmp_mib quic_mib_list[] = { + SNMP_MIB_ITEM("QuicCurrTxSw", LINUX_MIB_QUICCURRTXSW), + SNMP_MIB_ITEM("QuicTxSw", LINUX_MIB_QUICTXSW), + SNMP_MIB_ITEM("QuicTxSwError", LINUX_MIB_QUICTXSWERROR), + SNMP_MIB_SENTINEL +}; + +static int quic_statistics_seq_show(struct seq_file *seq, void *v) +{ + unsigned long buf[LINUX_MIB_QUICMAX] = {}; + struct net *net = seq->private; + int i; + + snmp_get_cpu_field_batch(buf, quic_mib_list, net->mib.quic_statistics); + for (i = 0; quic_mib_list[i].name; i++) + seq_printf(seq, "%-32s\t%lu\n", quic_mib_list[i].name, buf[i]); + + return 0; +} +#endif + +int __net_init quic_proc_init(struct net *net) +{ +#ifdef CONFIG_PROC_FS + if (!proc_create_net_single("quic_stat", 0444, net->proc_net, + quic_statistics_seq_show, NULL)) + return -ENOMEM; +#endif /* CONFIG_PROC_FS */ + + return 0; +} + +void __net_exit quic_proc_fini(struct net *net) +{ + remove_proc_entry("quic_stat", net->proc_net); +} From patchwork Wed Aug 3 16:40:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Adel Abouchaev X-Patchwork-Id: 595200 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 647D3C19F29 for ; Wed, 3 Aug 2022 16:41:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234006AbiHCQlL (ORCPT ); Wed, 3 Aug 2022 12:41:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60332 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237251AbiHCQlH (ORCPT ); Wed, 3 Aug 2022 12:41:07 -0400 Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com [IPv6:2607:f8b0:4864:20::62c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CBB0840BD3; Wed, 3 Aug 2022 09:41:02 -0700 (PDT) Received: by mail-pl1-x62c.google.com with SMTP id w10so16930316plq.0; Wed, 03 Aug 2022 09:41:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=PagpYLBKMzBgTY2+VszOhttVW5lcLb7OhK9zW5hxXPM=; b=fJ6DvZ3Gr8MbyMNwohEqYThd52JIYRvk48Xv3/rnv8zcLt6OLsp2WTMh4hf0bobX/V NZVnsEorlaxKf4KHEpxm4v9r7Wtrf9FXtDrfsI7xVNjrzSNTwkSP/zA9JnspkfkF59Is Lao8adYVwVfcoAcLi/vxCNdZtWi32Ve0U2rmEM3rQ3gy9BnCWhVDuzd+kiUxVQJWaTN8 7X5yMy0D1x51DfYv3cBIP3oTVDwfLbVeKImpTnJQNLZItztJ+YLGqxKfAg4evECPzkg6 YNYPDptVkwrp4pyul+uP4LjtCDuQ4AOm9kKoMrPhmSppqxPyyXkJpDYCLUI5puwd5H91 kfsw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=PagpYLBKMzBgTY2+VszOhttVW5lcLb7OhK9zW5hxXPM=; b=mV8uqNUfwPG4Ahu8BqDik1c6ofemwKDc1rz+KZ9+6YZQTViPJSBKELjldsysnpjlj/ 7mvXmisvE/mmxFId8gL6kYSrZLLhLbPdqmySKaQwdIbzSaEga9RUL9Uy5Bw2T5FVlIa1 PU3vsrydyDJzk8IY9OAFH7Z2pxFIg91ciTUfKF9lggYMds/C/3BsVZPDzhQWNNM2kNLS TMBrOP6+6G9qS/iB/jhEe4r0urClC1pIghJGJbfByREpzzZ3ms7Usge1tZQvmH5qVewd uhoL00ViTQtwkXZeBd3FPIuR+oIITXZ29u6GGiWR0C1+/GCEGksED40iUijJNJ6dVbtB OkdQ== X-Gm-Message-State: ACgBeo3gUYkPHGc8/7jQmlU9HQBsCMK8tm4kq+JhPkE4XO+g1DtxuAcn CLBENiK2nU9P2GIHpclUomc= X-Google-Smtp-Source: 
AA6agR51dHijpODZzth8zzV6k1eeSaCIsJbJ7Nk4Yk1psIm0t8eQIf7pRYaxR90AVCFPE6zqZstWVQ== X-Received: by 2002:a17:902:c24c:b0:16d:d5d4:aa84 with SMTP id 12-20020a170902c24c00b0016dd5d4aa84mr24733003plg.36.1659544861483; Wed, 03 Aug 2022 09:41:01 -0700 (PDT) Received: from localhost (fwdproxy-prn-008.fbsv.net. [2a03:2880:ff:8::face:b00c]) by smtp.gmail.com with ESMTPSA id l15-20020a654c4f000000b003db7de758besm11441190pgr.5.2022.08.03.09.41.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 03 Aug 2022 09:41:01 -0700 (PDT) From: Adel Abouchaev To: kuba@kernel.org Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com, corbet@lwn.net, dsahern@kernel.org, shuah@kernel.org, imagedong@tencent.com, netdev@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org Subject: [RFC net-next 6/6] net: Add self tests for ULP operations, flow setup and crypto tests Date: Wed, 3 Aug 2022 09:40:45 -0700 Message-Id: <20220803164045.3585187-7-adel.abushaev@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20220803164045.3585187-1-adel.abushaev@gmail.com> References: <20220803164045.3585187-1-adel.abushaev@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-kselftest@vger.kernel.org Add self tests for ULP operations, flow setup and crypto tests. Signed-off-by: Adel Abouchaev --- tools/testing/selftests/net/.gitignore | 1 + tools/testing/selftests/net/Makefile | 2 +- tools/testing/selftests/net/quic.c | 1024 ++++++++++++++++++++++++ tools/testing/selftests/net/quic.sh | 45 ++ 4 files changed, 1071 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/net/quic.c create mode 100755 tools/testing/selftests/net/quic.sh diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore index ffc35a22e914..bd4967e57803 100644 --- a/tools/testing/selftests/net/.gitignore +++ b/tools/testing/selftests/net/.gitignore @@ -38,3 +38,4 @@ ioam6_parser toeplitz tun cmsg_sender +quic diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile index db05b3764b77..aee89b0458b4 100644 --- a/tools/testing/selftests/net/Makefile +++ b/tools/testing/selftests/net/Makefile @@ -54,7 +54,7 @@ TEST_GEN_FILES += ipsec TEST_GEN_FILES += ioam6_parser TEST_GEN_FILES += gro TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa -TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls tun +TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls tun quic TEST_GEN_FILES += toeplitz TEST_GEN_FILES += cmsg_sender TEST_GEN_FILES += stress_reuseport_listen diff --git a/tools/testing/selftests/net/quic.c b/tools/testing/selftests/net/quic.c new file mode 100644 index 000000000000..20e425003fcb --- /dev/null +++ b/tools/testing/selftests/net/quic.c @@ -0,0 +1,1024 @@ +// SPDX-License-Identifier: GPL-2.0 + +#define _GNU_SOURCE + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "../kselftest_harness.h" + +#define UDP_ULP 105 + +#ifndef SOL_UDP +#define SOL_UDP 17 +#endif + +// 1. 
QUIC ULP Registration Test + +FIXTURE(quic_ulp) +{ + int sfd; + socklen_t len_s; + union { + struct sockaddr_in addr; + struct sockaddr_in6 addr6; + } server; + int default_net_ns_fd; + int server_net_ns_fd; +}; + +FIXTURE_VARIANT(quic_ulp) +{ + unsigned int af_server; + char *server_address; + unsigned short server_port; +}; + +FIXTURE_VARIANT_ADD(quic_ulp, ipv4) +{ + .af_server = AF_INET, + .server_address = "10.0.0.2", + .server_port = 7101, +}; + +FIXTURE_VARIANT_ADD(quic_ulp, ipv6) +{ + .af_server = AF_INET6, + .server_address = "2001::2", + .server_port = 7102, +}; + +FIXTURE_SETUP(quic_ulp) +{ + char path[PATH_MAX]; + int optval = 1; + + snprintf(path, sizeof(path), "/proc/%d/ns/net", getpid()); + self->default_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->default_net_ns_fd, 0); + strcpy(path, "/var/run/netns/ns2"); + self->server_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->server_net_ns_fd, 0); + + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + self->sfd = socket(variant->af_server, SOCK_DGRAM, 0); + ASSERT_NE(setsockopt(self->sfd, SOL_SOCKET, SO_REUSEPORT, &optval, + sizeof(optval)), -1); + if (variant->af_server == AF_INET) { + self->len_s = sizeof(self->server.addr); + self->server.addr.sin_family = variant->af_server; + inet_pton(variant->af_server, variant->server_address, + &self->server.addr.sin_addr); + self->server.addr.sin_port = htons(variant->server_port); + ASSERT_EQ(bind(self->sfd, &self->server.addr, self->len_s), 0); + ASSERT_EQ(getsockname(self->sfd, &self->server.addr, + &self->len_s), 0); + } else { + self->len_s = sizeof(self->server.addr6); + self->server.addr6.sin6_family = variant->af_server; + inet_pton(variant->af_server, variant->server_address, + &self->server.addr6.sin6_addr); + self->server.addr6.sin6_port = htons(variant->server_port); + ASSERT_EQ(bind(self->sfd, &self->server.addr6, self->len_s), 0); + ASSERT_EQ(getsockname(self->sfd, &self->server.addr6, + &self->len_s), 0); + } + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +}; + +FIXTURE_TEARDOWN(quic_ulp) +{ + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + close(self->sfd); + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +}; + +TEST_F(quic_ulp, request_nonexistent_udp_ulp) +{ + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_ULP, + "nonexistent", sizeof("nonexistent")), -1); + // If UDP_ULP option is not present, the error would be ENOPROTOOPT. + ASSERT_EQ(errno, ENOENT); + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +}; + +TEST_F(quic_ulp, request_quic_crypto_udp_ulp) +{ + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_ULP, + "quic-crypto", sizeof("quic-crypto")), 0); + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +}; + +// 2. 
QUIC Data Path Operation Tests + +#define DO_NOT_SETUP_FLOW 0 +#define SETUP_FLOW 1 + +#define DO_NOT_USE_CLIENT 0 +#define USE_CLIENT 1 + +FIXTURE(quic_data) +{ + int sfd, c1fd, c2fd; + socklen_t len_c1; + socklen_t len_c2; + socklen_t len_s; + + union { + struct sockaddr_in addr; + struct sockaddr_in6 addr6; + } client_1; + union { + struct sockaddr_in addr; + struct sockaddr_in6 addr6; + } client_2; + union { + struct sockaddr_in addr; + struct sockaddr_in6 addr6; + } server; + int default_net_ns_fd; + int client_1_net_ns_fd; + int client_2_net_ns_fd; + int server_net_ns_fd; +}; + +FIXTURE_VARIANT(quic_data) +{ + unsigned int af_client_1; + char *client_1_address; + unsigned short client_1_port; + uint8_t conn_id_1[8]; + uint8_t conn_1_key[16]; + uint8_t conn_1_iv[12]; + uint8_t conn_1_hdr_key[16]; + size_t conn_id_1_len; + bool setup_flow_1; + bool use_client_1; + unsigned int af_client_2; + char *client_2_address; + unsigned short client_2_port; + uint8_t conn_id_2[8]; + uint8_t conn_2_key[16]; + uint8_t conn_2_iv[12]; + uint8_t conn_2_hdr_key[16]; + size_t conn_id_2_len; + bool setup_flow_2; + bool use_client_2; + unsigned int af_server; + char *server_address; + unsigned short server_port; +}; + +FIXTURE_VARIANT_ADD(quic_data, ipv4) +{ + .af_client_1 = AF_INET, + .client_1_address = "10.0.0.1", + .client_1_port = 6667, + .conn_id_1 = {0x11, 0x12, 0x13, 0x14}, + .conn_id_1_len = 4, + .setup_flow_1 = SETUP_FLOW, + .use_client_1 = USE_CLIENT, + .af_client_2 = AF_INET, + .client_2_address = "10.0.0.3", + .client_2_port = 6668, + .conn_id_2 = {0x21, 0x22, 0x23, 0x24}, + .conn_id_2_len = 4, + .setup_flow_2 = SETUP_FLOW, + //.use_client_2 = USE_CLIENT, + .af_server = AF_INET, + .server_address = "10.0.0.2", + .server_port = 6669, +}; + +FIXTURE_VARIANT_ADD(quic_data, ipv6_mapped_ipv4_two_conns) +{ + .af_client_1 = AF_INET6, + .client_1_address = "::ffff:10.0.0.1", + .client_1_port = 6670, + .conn_id_1 = {0x11, 0x12, 0x13, 0x14}, + .conn_id_1_len = 4, + .setup_flow_1 = SETUP_FLOW, + .use_client_1 = USE_CLIENT, + .af_client_2 = AF_INET6, + .client_2_address = "::ffff:10.0.0.3", + .client_2_port = 6671, + .conn_id_2 = {0x21, 0x22, 0x23, 0x24}, + .conn_id_2_len = 4, + .setup_flow_2 = SETUP_FLOW, + .use_client_2 = USE_CLIENT, + .af_server = AF_INET6, + .server_address = "::ffff:10.0.0.2", + .server_port = 6672, +}; + +FIXTURE_VARIANT_ADD(quic_data, ipv6_mapped_ipv4_setup_ipv4_one_conn) +{ + .af_client_1 = AF_INET, + .client_1_address = "10.0.0.3", + .client_1_port = 6676, + .conn_id_1 = {0x11, 0x12, 0x13, 0x14}, + .conn_id_1_len = 4, + .setup_flow_1 = SETUP_FLOW, + .use_client_1 = DO_NOT_USE_CLIENT, + .af_client_2 = AF_INET6, + .client_2_address = "::ffff:10.0.0.3", + .client_2_port = 6676, + .conn_id_2 = {0x11, 0x12, 0x13, 0x14}, + .conn_id_2_len = 4, + .setup_flow_2 = DO_NOT_SETUP_FLOW, + .use_client_2 = USE_CLIENT, + .af_server = AF_INET6, + .server_address = "::ffff:10.0.0.2", + .server_port = 6677, +}; + +FIXTURE_VARIANT_ADD(quic_data, ipv6_mapped_ipv4_setup_ipv6_one_conn) +{ + .af_client_1 = AF_INET6, + .client_1_address = "::ffff:10.0.0.3", + .client_1_port = 6678, + .conn_id_1 = {0x11, 0x12, 0x13, 0x14}, + .setup_flow_1 = SETUP_FLOW, + .use_client_1 = DO_NOT_USE_CLIENT, + .af_client_2 = AF_INET, + .client_2_address = "10.0.0.3", + .client_2_port = 6678, + .conn_id_2 = {0x11, 0x12, 0x13, 0x14}, + .setup_flow_2 = DO_NOT_SETUP_FLOW, + .use_client_2 = USE_CLIENT, + .af_server = AF_INET6, + .server_address = "::ffff:10.0.0.2", + .server_port = 6679, +}; + +FIXTURE_SETUP(quic_data) +{ + char 
path[PATH_MAX]; + int optval = 1; + + if (variant->af_client_1 == AF_INET) { + self->len_c1 = sizeof(self->client_1.addr); + self->client_1.addr.sin_family = variant->af_client_1; + inet_pton(variant->af_client_1, variant->client_1_address, + &self->client_1.addr.sin_addr); + self->client_1.addr.sin_port = htons(variant->client_1_port); + } else { + self->len_c1 = sizeof(self->client_1.addr6); + self->client_1.addr6.sin6_family = variant->af_client_1; + inet_pton(variant->af_client_1, variant->client_1_address, + &self->client_1.addr6.sin6_addr); + self->client_1.addr6.sin6_port = htons(variant->client_1_port); + } + + if (variant->af_client_2 == AF_INET) { + self->len_c2 = sizeof(self->client_2.addr); + self->client_2.addr.sin_family = variant->af_client_2; + inet_pton(variant->af_client_2, variant->client_2_address, + &self->client_2.addr.sin_addr); + self->client_2.addr.sin_port = htons(variant->client_2_port); + } else { + self->len_c2 = sizeof(self->client_2.addr6); + self->client_2.addr6.sin6_family = variant->af_client_2; + inet_pton(variant->af_client_2, variant->client_2_address, + &self->client_2.addr6.sin6_addr); + self->client_2.addr6.sin6_port = htons(variant->client_2_port); + } + + if (variant->af_server == AF_INET) { + self->len_s = sizeof(self->server.addr); + self->server.addr.sin_family = variant->af_server; + inet_pton(variant->af_server, variant->server_address, + &self->server.addr.sin_addr); + self->server.addr.sin_port = htons(variant->server_port); + } else { + self->len_s = sizeof(self->server.addr6); + self->server.addr6.sin6_family = variant->af_server; + inet_pton(variant->af_server, variant->server_address, + &self->server.addr6.sin6_addr); + self->server.addr6.sin6_port = htons(variant->server_port); + } + + snprintf(path, sizeof(path), "/proc/%d/ns/net", getpid()); + self->default_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->default_net_ns_fd, 0); + strcpy(path, "/var/run/netns/ns11"); + self->client_1_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->client_1_net_ns_fd, 0); + strcpy(path, "/var/run/netns/ns12"); + self->client_2_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->client_2_net_ns_fd, 0); + strcpy(path, "/var/run/netns/ns2"); + self->server_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->server_net_ns_fd, 0); + + if (variant->use_client_1) { + ASSERT_NE(setns(self->client_1_net_ns_fd, 0), -1); + self->c1fd = socket(variant->af_client_1, SOCK_DGRAM, 0); + ASSERT_NE(setsockopt(self->c1fd, SOL_SOCKET, SO_REUSEPORT, &optval, + sizeof(optval)), -1); + if (variant->af_client_1 == AF_INET) { + ASSERT_EQ(bind(self->c1fd, &self->client_1.addr, + self->len_c1), 0); + ASSERT_EQ(getsockname(self->c1fd, &self->client_1.addr, + &self->len_c1), 0); + } else { + ASSERT_EQ(bind(self->c1fd, &self->client_1.addr6, + self->len_c1), 0); + ASSERT_EQ(getsockname(self->c1fd, &self->client_1.addr6, + &self->len_c1), 0); + } + } + + if (variant->use_client_2) { + ASSERT_NE(setns(self->client_2_net_ns_fd, 0), -1); + self->c2fd = socket(variant->af_client_2, SOCK_DGRAM, 0); + ASSERT_NE(setsockopt(self->c2fd, SOL_SOCKET, SO_REUSEPORT, &optval, + sizeof(optval)), -1); + if (variant->af_client_2 == AF_INET) { + ASSERT_EQ(bind(self->c2fd, &self->client_2.addr, + self->len_c2), 0); + ASSERT_EQ(getsockname(self->c2fd, &self->client_2.addr, + &self->len_c2), 0); + } else { + ASSERT_EQ(bind(self->c2fd, &self->client_2.addr6, + self->len_c2), 0); + ASSERT_EQ(getsockname(self->c2fd, &self->client_2.addr6, + &self->len_c2), 0); + } + } + + 
ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + self->sfd = socket(variant->af_server, SOCK_DGRAM, 0); + ASSERT_NE(setsockopt(self->sfd, SOL_SOCKET, SO_REUSEPORT, &optval, + sizeof(optval)), -1); + if (variant->af_server == AF_INET) { + ASSERT_EQ(bind(self->sfd, &self->server.addr, self->len_s), 0); + ASSERT_EQ(getsockname(self->sfd, &self->server.addr, + &self->len_s), 0); + } else { + ASSERT_EQ(bind(self->sfd, &self->server.addr6, self->len_s), 0); + ASSERT_EQ(getsockname(self->sfd, &self->server.addr6, + &self->len_s), 0); + } + + ASSERT_EQ(setsockopt(self->sfd, IPPROTO_UDP, UDP_ULP, + "quic-crypto", sizeof("quic-crypto")), 0); + + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +} + +FIXTURE_TEARDOWN(quic_data) +{ + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + close(self->sfd); + ASSERT_NE(setns(self->client_1_net_ns_fd, 0), -1); + close(self->c1fd); + ASSERT_NE(setns(self->client_2_net_ns_fd, 0), -1); + close(self->c2fd); + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +} + +TEST_F(quic_data, send_fail_no_flow) +{ + char const *test_str = "test_read"; + int send_len = 10; + + ASSERT_EQ(strlen(test_str) + 1, send_len); + EXPECT_EQ(sendto(self->sfd, test_str, send_len, 0, + &self->client_1.addr, self->len_c1), -1); +}; + +TEST_F(quic_data, encrypt_two_conn_gso_1200_iov_2_size_9000_aesgcm128) +{ + size_t cmsg_tx_len = sizeof(struct quic_tx_ancillary_data); + uint8_t cmsg_buf[CMSG_SPACE(cmsg_tx_len)]; + struct quic_connection_info conn_1_info; + struct quic_connection_info conn_2_info; + struct quic_tx_ancillary_data *anc_data; + socklen_t recv_addr_len_1; + socklen_t recv_addr_len_2; + struct cmsghdr *cmsg_hdr; + int frag_size = 1200; + int send_len = 9000; + struct iovec iov[2]; + int msg_len = 4500; + struct msghdr msg; + char *test_str_1; + char *test_str_2; + char *buf_1; + char *buf_2; + int i; + + test_str_1 = (char *)malloc(9000); + test_str_2 = (char *)malloc(9000); + memset(test_str_1, 0, 9000); + memset(test_str_2, 0, 9000); + + buf_1 = (char *)malloc(10000); + buf_2 = (char *)malloc(10000); + for (i = 0; i < 9000; i += (1200 - 16)) { + test_str_1[i] = 0x40; + memcpy(&test_str_1[i + 1], &variant->conn_id_1, + variant->conn_id_1_len); + test_str_1[i + 1 + variant->conn_id_1_len] = 0xca; + + test_str_2[i] = 0x40; + memcpy(&test_str_2[i + 1], &variant->conn_id_2, + variant->conn_id_2_len); + test_str_2[i + 1 + variant->conn_id_2_len] = 0xca; + } + + // program the connection into the offload + conn_1_info.cipher_type = TLS_CIPHER_AES_GCM_128; + memset(&conn_1_info.key, 0, sizeof(struct quic_connection_info_key)); + conn_1_info.key.conn_id_length = variant->conn_id_1_len; + memcpy(conn_1_info.key.conn_id, + &variant->conn_id_1, + variant->conn_id_1_len); + + conn_2_info.cipher_type = TLS_CIPHER_AES_GCM_128; + memset(&conn_2_info.key, 0, sizeof(struct quic_connection_info_key)); + conn_2_info.key.conn_id_length = variant->conn_id_2_len; + memcpy(conn_2_info.key.conn_id, + &variant->conn_id_2, + variant->conn_id_2_len); + + memcpy(&conn_1_info.crypto_info_aes_gcm_128.packet_encryption_key, + &variant->conn_1_key, 16); + memcpy(&conn_1_info.crypto_info_aes_gcm_128.packet_encryption_iv, + &variant->conn_1_iv, 12); + memcpy(&conn_1_info.crypto_info_aes_gcm_128.header_encryption_key, + &variant->conn_1_hdr_key, 16); + memcpy(&conn_2_info.crypto_info_aes_gcm_128.packet_encryption_key, + &variant->conn_2_key, 16); + memcpy(&conn_2_info.crypto_info_aes_gcm_128.packet_encryption_iv, + &variant->conn_2_iv, 12); + 
memcpy(&conn_2_info.crypto_info_aes_gcm_128.header_encryption_key, + &variant->conn_2_hdr_key, + 16); + + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_SEGMENT, &frag_size, + sizeof(frag_size)), 0); + + if (variant->setup_flow_1) + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, + UDP_QUIC_ADD_TX_CONNECTION, + &conn_1_info, sizeof(conn_1_info)), 0); + + if (variant->setup_flow_2) + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, + UDP_QUIC_ADD_TX_CONNECTION, + &conn_2_info, sizeof(conn_2_info)), 0); + + recv_addr_len_1 = self->len_c1; + recv_addr_len_2 = self->len_c2; + + iov[0].iov_base = test_str_1; + iov[0].iov_len = msg_len; + iov[1].iov_base = (void *)test_str_1 + 4500; + iov[1].iov_len = msg_len; + + msg.msg_name = (self->client_1.addr.sin_family == AF_INET) + ? (void *)&self->client_1.addr + : (void *)&self->client_1.addr6; + msg.msg_namelen = self->len_c1; + msg.msg_iov = iov; + msg.msg_iovlen = 2; + msg.msg_control = cmsg_buf; + msg.msg_controllen = sizeof(cmsg_buf); + cmsg_hdr = CMSG_FIRSTHDR(&msg); + cmsg_hdr->cmsg_level = IPPROTO_UDP; + cmsg_hdr->cmsg_type = UDP_QUIC_ENCRYPT; + cmsg_hdr->cmsg_len = CMSG_LEN(cmsg_tx_len); + anc_data = (struct quic_tx_ancillary_data *)CMSG_DATA(cmsg_hdr); + anc_data->next_pkt_num = 0x0d65c9; + anc_data->flags = 0; + anc_data->conn_id_length = variant->conn_id_1_len; + + if (variant->use_client_1) + EXPECT_EQ(sendmsg(self->sfd, &msg, 0), send_len); + + iov[0].iov_base = test_str_2; + iov[0].iov_len = msg_len; + iov[1].iov_base = (void *)test_str_2 + 4500; + iov[1].iov_len = msg_len; + msg.msg_name = (self->client_2.addr.sin_family == AF_INET) + ? (void *)&self->client_2.addr + : (void *)&self->client_2.addr6; + msg.msg_namelen = self->len_c2; + cmsg_hdr = CMSG_FIRSTHDR(&msg); + anc_data = (struct quic_tx_ancillary_data *)CMSG_DATA(cmsg_hdr); + anc_data->next_pkt_num = 0x0d65c9; + anc_data->conn_id_length = variant->conn_id_2_len; + anc_data->flags = 0; + + if (variant->use_client_2) + EXPECT_EQ(sendmsg(self->sfd, &msg, 0), send_len); + + if (variant->use_client_1) { + ASSERT_NE(setns(self->client_1_net_ns_fd, 0), -1); + if (variant->af_client_1 == AF_INET) { + for (i = 0; i < 7; ++i) { + EXPECT_EQ(recvfrom(self->c1fd, buf_1, 9000, 0, + &self->client_1.addr, + &recv_addr_len_1), + 1200); + // Validate framing is intact. 
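+			// Byte 0 holds the (header-protected) short-header
+			// flags and is skipped; the connection ID beginning
+			// at offset 1 is never encrypted, so it must still
+			// match what the sender wrote.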
+ EXPECT_EQ(memcmp((void *)buf_1 + 1, + &variant->conn_id_1, + variant->conn_id_1_len), 0); + } + EXPECT_EQ(recvfrom(self->c1fd, buf_1, 9000, 0, + &self->client_1.addr, + &recv_addr_len_1), + 728); + EXPECT_EQ(memcmp((void *)buf_1 + 1, + &variant->conn_id_1, + variant->conn_id_1_len), 0); + } else { + for (i = 0; i < 7; ++i) { + EXPECT_EQ(recvfrom(self->c1fd, buf_1, 9000, 0, + &self->client_1.addr6, + &recv_addr_len_1), + 1200); + } + EXPECT_EQ(recvfrom(self->c1fd, buf_1, 9000, 0, + &self->client_1.addr6, + &recv_addr_len_1), + 728); + EXPECT_EQ(memcmp((void *)buf_1 + 1, + &variant->conn_id_1, + variant->conn_id_1_len), 0); + } + EXPECT_NE(memcmp(buf_1, test_str_1, send_len), 0); + } + + if (variant->use_client_2) { + ASSERT_NE(setns(self->client_2_net_ns_fd, 0), -1); + if (variant->af_client_2 == AF_INET) { + for (i = 0; i < 7; ++i) { + EXPECT_EQ(recvfrom(self->c2fd, buf_2, 9000, 0, + &self->client_2.addr, + &recv_addr_len_2), + 1200); + EXPECT_EQ(memcmp((void *)buf_2 + 1, + &variant->conn_id_2, + variant->conn_id_2_len), 0); + } + EXPECT_EQ(recvfrom(self->c2fd, buf_2, 9000, 0, + &self->client_2.addr, + &recv_addr_len_2), + 728); + EXPECT_EQ(memcmp((void *)buf_2 + 1, + &variant->conn_id_2, + variant->conn_id_2_len), 0); + } else { + for (i = 0; i < 7; ++i) { + EXPECT_EQ(recvfrom(self->c2fd, buf_2, 9000, 0, + &self->client_2.addr6, + &recv_addr_len_2), + 1200); + EXPECT_EQ(memcmp((void *)buf_2 + 1, + &variant->conn_id_2, + variant->conn_id_2_len), 0); + } + EXPECT_EQ(recvfrom(self->c2fd, buf_2, 9000, 0, + &self->client_2.addr6, + &recv_addr_len_2), + 728); + EXPECT_EQ(memcmp((void *)buf_2 + 1, + &variant->conn_id_2, + variant->conn_id_2_len), 0); + } + EXPECT_NE(memcmp(buf_2, test_str_2, send_len), 0); + } + + if (variant->use_client_1 && variant->use_client_2) + EXPECT_NE(memcmp(buf_1, buf_2, send_len), 0); + + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + if (variant->setup_flow_1) { + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, + UDP_QUIC_DEL_TX_CONNECTION, + &conn_1_info, sizeof(conn_1_info)), + 0); + } + if (variant->setup_flow_2) { + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, + UDP_QUIC_DEL_TX_CONNECTION, + &conn_2_info, sizeof(conn_2_info)), + 0); + } + free(test_str_1); + free(test_str_2); + free(buf_1); + free(buf_2); + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +} + +// 3. 
QUIC Encryption Tests + +FIXTURE(quic_crypto) +{ + int sfd, cfd; + socklen_t len_c; + socklen_t len_s; + union { + struct sockaddr_in addr; + struct sockaddr_in6 addr6; + } client; + union { + struct sockaddr_in addr; + struct sockaddr_in6 addr6; + } server; + int default_net_ns_fd; + int client_net_ns_fd; + int server_net_ns_fd; +}; + +FIXTURE_VARIANT(quic_crypto) +{ + unsigned int af_client; + char *client_address; + unsigned short client_port; + uint32_t algo; + uint8_t conn_id[8]; + uint8_t conn_key[16]; + uint8_t conn_iv[12]; + uint8_t conn_hdr_key[16]; + size_t conn_id_len; + bool setup_flow; + bool use_client; + unsigned int af_server; + char *server_address; + unsigned short server_port; +}; + +FIXTURE_SETUP(quic_crypto) +{ + char path[PATH_MAX]; + int optval = 1; + + if (variant->af_client == AF_INET) { + self->len_c = sizeof(self->client.addr); + self->client.addr.sin_family = variant->af_client; + inet_pton(variant->af_client, variant->client_address, + &self->client.addr.sin_addr); + self->client.addr.sin_port = htons(variant->client_port); + } else { + self->len_c = sizeof(self->client.addr6); + self->client.addr6.sin6_family = variant->af_client; + inet_pton(variant->af_client, variant->client_address, + &self->client.addr6.sin6_addr); + self->client.addr6.sin6_port = htons(variant->client_port); + } + + if (variant->af_server == AF_INET) { + self->len_s = sizeof(self->server.addr); + self->server.addr.sin_family = variant->af_server; + inet_pton(variant->af_server, variant->server_address, + &self->server.addr.sin_addr); + self->server.addr.sin_port = htons(variant->server_port); + } else { + self->len_s = sizeof(self->server.addr6); + self->server.addr6.sin6_family = variant->af_server; + inet_pton(variant->af_server, variant->server_address, + &self->server.addr6.sin6_addr); + self->server.addr6.sin6_port = htons(variant->server_port); + } + + snprintf(path, sizeof(path), "/proc/%d/ns/net", getpid()); + self->default_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->default_net_ns_fd, 0); + strcpy(path, "/var/run/netns/ns11"); + self->client_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->client_net_ns_fd, 0); + strcpy(path, "/var/run/netns/ns2"); + self->server_net_ns_fd = open(path, O_RDONLY); + ASSERT_GE(self->server_net_ns_fd, 0); + + if (variant->use_client) { + ASSERT_NE(setns(self->client_net_ns_fd, 0), -1); + self->cfd = socket(variant->af_client, SOCK_DGRAM, 0); + ASSERT_NE(setsockopt(self->cfd, SOL_SOCKET, SO_REUSEPORT, &optval, + sizeof(optval)), -1); + if (variant->af_client == AF_INET) { + ASSERT_EQ(bind(self->cfd, &self->client.addr, + self->len_c), 0); + ASSERT_EQ(getsockname(self->cfd, &self->client.addr, + &self->len_c), 0); + } else { + ASSERT_EQ(bind(self->cfd, &self->client.addr6, + self->len_c), 0); + ASSERT_EQ(getsockname(self->cfd, &self->client.addr6, + &self->len_c), 0); + } + } + + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + self->sfd = socket(variant->af_server, SOCK_DGRAM, 0); + ASSERT_NE(setsockopt(self->sfd, SOL_SOCKET, SO_REUSEPORT, &optval, + sizeof(optval)), -1); + if (variant->af_server == AF_INET) { + ASSERT_EQ(bind(self->sfd, &self->server.addr, self->len_s), 0); + ASSERT_EQ(getsockname(self->sfd, &self->server.addr, + &self->len_s), + 0); + } else { + ASSERT_EQ(bind(self->sfd, &self->server.addr6, self->len_s), 0); + ASSERT_EQ(getsockname(self->sfd, &self->server.addr6, + &self->len_s), + 0); + } + + ASSERT_EQ(setsockopt(self->sfd, IPPROTO_UDP, UDP_ULP, + "quic-crypto", sizeof("quic-crypto")), 0); + + 
ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +} + +FIXTURE_TEARDOWN(quic_crypto) +{ + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + close(self->sfd); + ASSERT_NE(setns(self->client_net_ns_fd, 0), -1); + close(self->cfd); + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +} + +FIXTURE_VARIANT_ADD(quic_crypto, ipv4) +{ + .af_client = AF_INET, + .client_address = "10.0.0.1", + .client_port = 7667, + .conn_id = {0x08, 0x6b, 0xbf, 0x88, 0x82, 0xb9, 0x12, 0x49}, + .conn_key = {0x87, 0x71, 0xEA, 0x1D, 0xFB, 0xBE, 0x7A, 0x45, 0xBB, + 0xE2, 0x7E, 0xBC, 0x0B, 0x53, 0x94, 0x99}, + .conn_iv = {0x3A, 0xA7, 0x46, 0x72, 0xE9, 0x83, 0x6B, 0x55, 0xDA, + 0x66, 0x7B, 0xDA}, + .conn_hdr_key = {0xC9, 0x8E, 0xFD, 0xF2, 0x0B, 0x64, 0x8C, 0x57, + 0xB5, 0x0A, 0xB2, 0xD2, 0x21, 0xD3, 0x66, 0xA5}, + .conn_id_len = 8, + .setup_flow = SETUP_FLOW, + .use_client = USE_CLIENT, + .af_server = AF_INET6, + .server_address = "10.0.0.2", + .server_port = 7669, +}; + +FIXTURE_VARIANT_ADD(quic_crypto, ipv6) +{ + .af_client = AF_INET6, + .client_address = "2001::1", + .client_port = 7673, + .conn_id = {0x08, 0x6b, 0xbf, 0x88, 0x82, 0xb9, 0x12, 0x49}, + .conn_key = {0x87, 0x71, 0xEA, 0x1D, 0xFB, 0xBE, 0x7A, 0x45, 0xBB, + 0xE2, 0x7E, 0xBC, 0x0B, 0x53, 0x94, 0x99}, + .conn_iv = {0x3A, 0xA7, 0x46, 0x72, 0xE9, 0x83, 0x6B, 0x55, 0xDA, + 0x66, 0x7B, 0xDA}, + .conn_hdr_key = {0xC9, 0x8E, 0xFD, 0xF2, 0x0B, 0x64, 0x8C, 0x57, + 0xB5, 0x0A, 0xB2, 0xD2, 0x21, 0xD3, 0x66, 0xA5}, + .conn_id_len = 8, + .setup_flow = SETUP_FLOW, + .use_client = USE_CLIENT, + .af_server = AF_INET6, + .server_address = "2001::2", + .server_port = 7675, +}; + +TEST_F(quic_crypto, encrypt_test_vector_aesgcm128_single_flow_gso_in_control) +{ + char test_str[37] = {// Header, conn id and pkt num + 0x40, 0x08, 0x6B, 0xBF, 0x88, 0x82, 0xB9, 0x12, + 0x49, 0xCA, + // Payload + 0x02, 0x80, 0xDE, 0x40, 0x39, 0x40, 0xF6, 0x00, + 0x01, 0x0B, 0x00, 0x0F, 0x65, 0x63, 0x68, 0x6F, + 0x20, 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, + 0x37, 0x38, 0x39 + }; + + char match_str[53] = { + 0x46, 0x08, 0x6B, 0xBF, 0x88, 0x82, 0xB9, 0x12, + 0x49, 0x1C, 0x44, 0xB8, 0x41, 0xBB, 0xCF, 0x6E, + 0x0A, 0x2A, 0x24, 0xFB, 0xB4, 0x79, 0x62, 0xEA, + 0x59, 0x38, 0x1A, 0x0E, 0x50, 0x1E, 0x59, 0xED, + 0x3F, 0x8E, 0x7E, 0x5A, 0x70, 0xE4, 0x2A, 0xBC, + 0x2A, 0xFA, 0x2B, 0x54, 0xEB, 0x89, 0xC3, 0x2C, + 0xB6, 0x8C, 0x1E, 0xAB, 0x2D + }; + + size_t cmsg_tx_len = sizeof(struct quic_tx_ancillary_data); + uint8_t cmsg_buf[CMSG_SPACE(cmsg_tx_len) + + CMSG_SPACE(sizeof(uint16_t))]; + struct quic_tx_ancillary_data *anc_data; + struct quic_connection_info conn_info; + int send_len = sizeof(test_str); + int msg_len = sizeof(test_str); + uint16_t frag_size = 1200; + struct cmsghdr *cmsg_hdr; + int wrong_frag_size = 26; + socklen_t recv_addr_len; + struct iovec iov[2]; + struct msghdr msg; + char *buf; + + buf = (char *)malloc(1024); + conn_info.cipher_type = TLS_CIPHER_AES_GCM_128; + memset(&conn_info.key, 0, sizeof(struct quic_connection_info_key)); + conn_info.key.conn_id_length = variant->conn_id_len; + memcpy(conn_info.key.conn_id, + &variant->conn_id, + variant->conn_id_len); + memcpy(&conn_info.crypto_info_aes_gcm_128.packet_encryption_key, + &variant->conn_key, 16); + memcpy(&conn_info.crypto_info_aes_gcm_128.packet_encryption_iv, + &variant->conn_iv, 12); + memcpy(&conn_info.crypto_info_aes_gcm_128.header_encryption_key, + &variant->conn_hdr_key, + 16); + + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_SEGMENT, &wrong_frag_size, + 
sizeof(wrong_frag_size)), 0); + + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_QUIC_ADD_TX_CONNECTION, + &conn_info, sizeof(conn_info)), 0); + + recv_addr_len = self->len_c; + + iov[0].iov_base = test_str; + iov[0].iov_len = msg_len; + + memset(cmsg_buf, 0, sizeof(cmsg_buf)); + msg.msg_name = (self->client.addr.sin_family == AF_INET) + ? (void *)&self->client.addr + : (void *)&self->client.addr6; + msg.msg_namelen = self->len_c; + msg.msg_iov = iov; + msg.msg_iovlen = 1; + msg.msg_control = cmsg_buf; + msg.msg_controllen = sizeof(cmsg_buf); + cmsg_hdr = CMSG_FIRSTHDR(&msg); + cmsg_hdr->cmsg_level = IPPROTO_UDP; + cmsg_hdr->cmsg_type = UDP_QUIC_ENCRYPT; + cmsg_hdr->cmsg_len = CMSG_LEN(cmsg_tx_len); + anc_data = (struct quic_tx_ancillary_data *)CMSG_DATA(cmsg_hdr); + anc_data->flags = 0; + anc_data->next_pkt_num = 0x0d65c9; + anc_data->conn_id_length = variant->conn_id_len; + + cmsg_hdr = CMSG_NXTHDR(&msg, cmsg_hdr); + cmsg_hdr->cmsg_level = IPPROTO_UDP; + cmsg_hdr->cmsg_type = UDP_SEGMENT; + cmsg_hdr->cmsg_len = CMSG_LEN(sizeof(uint16_t)); + memcpy(CMSG_DATA(cmsg_hdr), (void *)&frag_size, sizeof(frag_size)); + + EXPECT_EQ(sendmsg(self->sfd, &msg, 0), send_len); + ASSERT_NE(setns(self->client_net_ns_fd, 0), -1); + if (variant->af_client == AF_INET) { + EXPECT_EQ(recvfrom(self->cfd, buf, 1024, 0, + &self->client.addr, &recv_addr_len), + sizeof(match_str)); + } else { + EXPECT_EQ(recvfrom(self->cfd, buf, 9000, 0, + &self->client.addr6, &recv_addr_len), + sizeof(match_str)); + } + EXPECT_STREQ(buf, match_str); + + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_QUIC_DEL_TX_CONNECTION, + &conn_info, sizeof(conn_info)), 0); + free(buf); + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +} + +TEST_F(quic_crypto, encrypt_test_vector_aesgcm128_single_flow_gso_in_setsockopt) +{ + char test_str[37] = {// Header, conn id and pkt num + 0x40, 0x08, 0x6B, 0xBF, 0x88, 0x82, 0xB9, 0x12, + 0x49, 0xCA, + // Payload + 0x02, 0x80, 0xDE, 0x40, 0x39, 0x40, 0xF6, 0x00, + 0x01, 0x0B, 0x00, 0x0F, 0x65, 0x63, 0x68, 0x6F, + 0x20, 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, + 0x37, 0x38, 0x39 + }; + + char match_str[53] = { + 0x46, 0x08, 0x6B, 0xBF, 0x88, 0x82, 0xB9, 0x12, + 0x49, 0x1C, 0x44, 0xB8, 0x41, 0xBB, 0xCF, 0x6E, + 0x0A, 0x2A, 0x24, 0xFB, 0xB4, 0x79, 0x62, 0xEA, + 0x59, 0x38, 0x1A, 0x0E, 0x50, 0x1E, 0x59, 0xED, + 0x3F, 0x8E, 0x7E, 0x5A, 0x70, 0xE4, 0x2A, 0xBC, + 0x2A, 0xFA, 0x2B, 0x54, 0xEB, 0x89, 0xC3, 0x2C, + 0xB6, 0x8C, 0x1E, 0xAB, 0x2D + }; + + size_t cmsg_tx_len = sizeof(struct quic_tx_ancillary_data); + uint8_t cmsg_buf[CMSG_SPACE(cmsg_tx_len)]; + struct quic_tx_ancillary_data *anc_data; + struct quic_connection_info conn_info; + int send_len = sizeof(test_str); + int msg_len = sizeof(test_str); + struct cmsghdr *cmsg_hdr; + socklen_t recv_addr_len; + int frag_size = 1200; + struct iovec iov[2]; + struct msghdr msg; + char *buf; + + buf = (char *)malloc(1024); + + conn_info.cipher_type = TLS_CIPHER_AES_GCM_128; + memset(&conn_info.key, 0, sizeof(struct quic_connection_info_key)); + conn_info.key.conn_id_length = variant->conn_id_len; + memcpy(&conn_info.key.conn_id, + &variant->conn_id, + variant->conn_id_len); + memcpy(&conn_info.crypto_info_aes_gcm_128.packet_encryption_key, + &variant->conn_key, 16); + memcpy(&conn_info.crypto_info_aes_gcm_128.packet_encryption_iv, + &variant->conn_iv, 12); + memcpy(&conn_info.crypto_info_aes_gcm_128.header_encryption_key, + &variant->conn_hdr_key, + 16); + + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + 
ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_SEGMENT, &frag_size, + sizeof(frag_size)), + 0); + + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_QUIC_ADD_TX_CONNECTION, + &conn_info, sizeof(conn_info)), + 0); + + recv_addr_len = self->len_c; + + iov[0].iov_base = test_str; + iov[0].iov_len = msg_len; + + msg.msg_name = (self->client.addr.sin_family == AF_INET) + ? (void *)&self->client.addr + : (void *)&self->client.addr6; + msg.msg_namelen = self->len_c; + msg.msg_iov = iov; + msg.msg_iovlen = 1; + msg.msg_control = cmsg_buf; + msg.msg_controllen = sizeof(cmsg_buf); + cmsg_hdr = CMSG_FIRSTHDR(&msg); + cmsg_hdr->cmsg_level = IPPROTO_UDP; + cmsg_hdr->cmsg_type = UDP_QUIC_ENCRYPT; + cmsg_hdr->cmsg_len = CMSG_LEN(cmsg_tx_len); + anc_data = (struct quic_tx_ancillary_data *)CMSG_DATA(cmsg_hdr); + anc_data->flags = 0; + anc_data->next_pkt_num = 0x0d65c9; + anc_data->conn_id_length = variant->conn_id_len; + + EXPECT_EQ(sendmsg(self->sfd, &msg, 0), send_len); + ASSERT_NE(setns(self->client_net_ns_fd, 0), -1); + if (variant->af_client == AF_INET) { + EXPECT_EQ(recvfrom(self->cfd, buf, 1024, 0, + &self->client.addr, &recv_addr_len), + sizeof(match_str)); + } else { + EXPECT_EQ(recvfrom(self->cfd, buf, 9000, 0, + &self->client.addr6, &recv_addr_len), + sizeof(match_str)); + } + EXPECT_STREQ(buf, match_str); + + ASSERT_NE(setns(self->server_net_ns_fd, 0), -1); + ASSERT_EQ(setsockopt(self->sfd, SOL_UDP, UDP_QUIC_DEL_TX_CONNECTION, + &conn_info, sizeof(conn_info)), 0); + free(buf); + ASSERT_NE(setns(self->default_net_ns_fd, 0), -1); +} + +TEST_HARNESS_MAIN diff --git a/tools/testing/selftests/net/quic.sh b/tools/testing/selftests/net/quic.sh new file mode 100755 index 000000000000..6c684e670e82 --- /dev/null +++ b/tools/testing/selftests/net/quic.sh @@ -0,0 +1,45 @@ +#!/bin/bash + +sudo ip netns add ns11 +sudo ip netns add ns12 +sudo ip netns add ns2 +sudo ip link add veth11 type veth peer name br-veth11 +sudo ip link add veth12 type veth peer name br-veth12 +sudo ip link add veth2 type veth peer name br-veth2 +sudo ip link set veth11 netns ns11 +sudo ip link set veth12 netns ns12 +sudo ip link set veth2 netns ns2 +sudo ip netns exec ns11 ip addr add 10.0.0.1/24 dev veth11 +sudo ip netns exec ns11 ip addr add ::ffff:10.0.0.1/96 dev veth11 +sudo ip netns exec ns11 ip addr add 2001::1/64 dev veth11 +sudo ip netns exec ns12 ip addr add 10.0.0.3/24 dev veth12 +sudo ip netns exec ns12 ip addr add ::ffff:10.0.0.3/96 dev veth12 +sudo ip netns exec ns12 ip addr add 2001::3/64 dev veth12 +sudo ip netns exec ns2 ip addr add 10.0.0.2/24 dev veth2 +sudo ip netns exec ns2 ip addr add ::ffff:10.0.0.2/96 dev veth2 +sudo ip netns exec ns2 ip addr add 2001::2/64 dev veth2 +sudo ip link add name br1 type bridge forward_delay 0 +sudo ip link set br1 up +sudo ip link set br-veth11 up +sudo ip link set br-veth12 up +sudo ip link set br-veth2 up +sudo ip netns exec ns11 ip link set veth11 up +sudo ip netns exec ns12 ip link set veth12 up +sudo ip netns exec ns2 ip link set veth2 up +sudo ip link set br-veth11 master br1 +sudo ip link set br-veth12 master br1 +sudo ip link set br-veth2 master br1 +sudo ip netns exec ns2 cat /proc/net/quic_stat + +printf "%s" "Waiting for bridge to start fowarding ..." +while ! timeout 0.5 sudo ip netns exec ns2 ping -c 1 -n 2001::1 &> /dev/null +do + printf "%c" "." +done +printf "\n%s\n" "Bridge is operational" + +sudo ./quic +sudo ip netns exec ns2 cat /proc/net/quic_stat +sudo ip netns delete ns2 +sudo ip netns delete ns12 +sudo ip netns delete ns11
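For reference, the userspace sequence that the selftests above exercise can be condensed into the sketch below. It mirrors the test code rather than a released uapi: UDP_ULP (105), UDP_QUIC_ADD_TX_CONNECTION, UDP_QUIC_ENCRYPT and the quic_connection_info / quic_tx_ancillary_data layouts are taken from quic.c in this series, the include paths for the patched headers are assumed, and key material, addresses and error handling are left to the caller.

#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/types.h>
#include <linux/udp.h>	/* assumed to carry the UDP_QUIC_* constants from this series */
#include <linux/quic.h>	/* uapi header added by this series; install path assumed */

#ifndef SOL_UDP
#define SOL_UDP 17
#endif
#ifndef UDP_ULP
#define UDP_ULP 105	/* value used by the selftests */
#endif

/* Program a Tx connection on a "quic-crypto" UDP socket and send one
 * plaintext QUIC short-header packet; the kernel encrypts it on the way out.
 */
static ssize_t quic_offload_send(int fd, const struct sockaddr *dst,
				 socklen_t dst_len,
				 const struct quic_connection_info *conn,
				 void *pkt, size_t pkt_len, __u64 next_pkt_num)
{
	char cbuf[CMSG_SPACE(sizeof(struct quic_tx_ancillary_data))] = { 0 };
	struct iovec iov = { .iov_base = pkt, .iov_len = pkt_len };
	struct msghdr msg = {
		.msg_name	= (void *)dst,
		.msg_namelen	= dst_len,
		.msg_iov	= &iov,
		.msg_iovlen	= 1,
		.msg_control	= cbuf,
		.msg_controllen	= sizeof(cbuf),
	};
	struct quic_tx_ancillary_data *anc;
	struct cmsghdr *cm;

	/* Done once per socket in the tests: attach the ULP, then install
	 * the per-connection keys for this connection ID.
	 */
	if (setsockopt(fd, SOL_UDP, UDP_ULP, "quic-crypto",
		       sizeof("quic-crypto")) ||
	    setsockopt(fd, SOL_UDP, UDP_QUIC_ADD_TX_CONNECTION,
		       conn, sizeof(*conn)))
		return -1;

	/* Per sendmsg(): the UDP_QUIC_ENCRYPT cmsg names the connection ID
	 * length and the packet number used to build the AEAD nonce.
	 */
	cm = CMSG_FIRSTHDR(&msg);
	cm->cmsg_level = IPPROTO_UDP;
	cm->cmsg_type = UDP_QUIC_ENCRYPT;
	cm->cmsg_len = CMSG_LEN(sizeof(*anc));
	anc = (struct quic_tx_ancillary_data *)CMSG_DATA(cm);
	anc->next_pkt_num = next_pkt_num;
	anc->flags = 0;
	anc->conn_id_length = conn->key.conn_id_length;

	return sendmsg(fd, &msg, 0);
}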