From patchwork Wed Sep 9 01:27:46 2020
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 261308
From: Saeed Mahameed
To: "David S. Miller" , Jakub Kicinski
CC: , Maxim Mikityanskiy , "Tariq Toukan" , Saeed Mahameed
Subject: [net-next V2 01/12] net/mlx5e: Refactor inline header size calculation in the TX path
Date: Tue, 8 Sep 2020 18:27:46 -0700
Message-ID: <20200909012757.32677-2-saeedm@nvidia.com>
In-Reply-To: <20200909012757.32677-1-saeedm@nvidia.com>
References: <20200909012757.32677-1-saeedm@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Maxim Mikityanskiy

As preparation for the next patch, don't increase ihs to calculate ds_cnt and then decrease it, but rather calculate the intermediate value temporarily.
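For illustration, a minimal sketch of that refactor (not driver code; it leans on the existing VLAN_HLEN, INL_HDR_START_SZ, MLX5_SEND_WQE_DS and DIV_ROUND_UP definitions, and the helper name is made up — the authoritative change is the diff below): the inline header size stays untouched, and the VLAN adjustment only exists in a temporary used to size the data segments.

/* Sketch: derive the inline DS count from a temporary value instead of
 * bumping ihs and restoring it later.
 */
static u16 sketch_inline_ds_count(u16 ihs, bool vlan_present)
{
        u16 inl = ihs + (vlan_present ? VLAN_HLEN : 0) - INL_HDR_START_SZ;

        return DIV_ROUND_UP(inl, MLX5_SEND_WQE_DS);
}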
This code has the same amount of arithmetic operations, but now allows to split out ds_cnt calculation, which will be performed in the next patch. Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index da596de3abba..e15aa53ff83e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -307,9 +307,9 @@ void mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb, ds_cnt += skb_shinfo(skb)->nr_frags; if (ihs) { - ihs += !!skb_vlan_tag_present(skb) * VLAN_HLEN; + u16 inl = ihs + !!skb_vlan_tag_present(skb) * VLAN_HLEN - INL_HDR_START_SZ; - ds_cnt_inl = DIV_ROUND_UP(ihs - INL_HDR_START_SZ, MLX5_SEND_WQE_DS); + ds_cnt_inl = DIV_ROUND_UP(inl, MLX5_SEND_WQE_DS); ds_cnt += ds_cnt_inl; } @@ -348,12 +348,12 @@ void mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb, eseg->mss = mss; if (ihs) { - eseg->inline_hdr.sz = cpu_to_be16(ihs); if (skb_vlan_tag_present(skb)) { - ihs -= VLAN_HLEN; + eseg->inline_hdr.sz = cpu_to_be16(ihs + VLAN_HLEN); mlx5e_insert_vlan(eseg->inline_hdr.start, skb, ihs); stats->added_vlan_packets++; } else { + eseg->inline_hdr.sz = cpu_to_be16(ihs); memcpy(eseg->inline_hdr.start, skb->data, ihs); } dseg += ds_cnt_inl; From patchwork Wed Sep 9 01:27:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 261307 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.9 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2555C43461 for ; Wed, 9 Sep 2020 01:28:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 783FB216C4 for ; Wed, 9 Sep 2020 01:28:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="LI9gqvoO" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729161AbgIIB2i (ORCPT ); Tue, 8 Sep 2020 21:28:38 -0400 Received: from hqnvemgate26.nvidia.com ([216.228.121.65]:19424 "EHLO hqnvemgate26.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728197AbgIIB2X (ORCPT ); Tue, 8 Sep 2020 21:28:23 -0400 Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate26.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Tue, 08 Sep 2020 18:28:09 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Tue, 08 Sep 2020 18:28:23 -0700 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Tue, 08 Sep 2020 18:28:23 -0700 Received: from sx1.mtl.com (10.124.1.5) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Wed, 9 Sep 2020 01:28:13 +0000 From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski CC: , Maxim Mikityanskiy , "Tariq Toukan" , Saeed Mahameed Subject: [net-next V2 06/12] net/mlx5e: Unify constants for WQE_EMPTY_DS_COUNT Date: Tue, 8 Sep 2020 18:27:51 -0700 Message-ID: <20200909012757.32677-7-saeedm@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200909012757.32677-1-saeedm@nvidia.com> References: <20200909012757.32677-1-saeedm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL101.nvidia.com (172.20.187.10) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1599614889; bh=Dg7Gly/umPukAxiqzuYv4MftkedGW/W6pWIw7O5ZtH0=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:Content-Transfer-Encoding: Content-Type:X-Originating-IP:X-ClientProxiedBy; b=LI9gqvoOAu0euOD7wfallfHXnRqaWPW3eYsJuHatlN19vpilMOWtGLbChqLR3wssm /eAZgbA0D1w0PHPB0HS5AxwfumN9exTYtUTzZ0EOWx9bSu9bTq7ljcyJsPzUwZGnJp EdIt+pezEMA6xLetO0EYGABaQJQNpLhfvFPd6dgLhsDPveSpvNm+AdUynLDcfFaUjA T/QGW3Co4yJ30eWr6VGd+VQpGUeI0POQENjlFi13X11EmAONoZ732Amsp2yuiluRUL eQ9iddJo1HTOWs8cgBeUhiog0GHqftXKfKoOFSg4MgszjNmo5/k/2NQdZAuJGMHi3G Lfjahq4+ud51w== Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Maxim Mikityanskiy A constant for the number of DS in an empty WQE (i.e. a WQE without data segments) is needed in multiple places (normal TX data path, MPWQE in XDP), but currently we have a constant for XDP and an inline formula in normal TX. This patch introduces a common constant. Additionally, mlx5e_xdp_mpwqe_session_start is converted to use struct assignment, because the code nearby is touched. Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/en/txrx.h | 2 ++ .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 17 ++++++++------- .../net/ethernet/mellanox/mlx5/core/en/xdp.h | 21 +++++++------------ .../net/ethernet/mellanox/mlx5/core/en_tx.c | 2 +- 4 files changed, 21 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h index 9931a605eed9..277725c05de4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h @@ -7,6 +7,8 @@ #include "en.h" #include +#define MLX5E_TX_WQE_EMPTY_DS_COUNT (sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS) + #define INL_HDR_START_SZ (sizeof(((struct mlx5_wqe_eth_seg *)NULL)->inline_hdr.start)) enum mlx5e_icosq_wqe_type { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index 7fccd2ea7dc9..737e88d49e89 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -196,16 +196,19 @@ static void mlx5e_xdp_mpwqe_session_start(struct mlx5e_xdpsq *sq) { struct mlx5e_xdp_mpwqe *session = &sq->mpwqe; struct mlx5e_xdpsq_stats *stats = sq->stats; + struct mlx5e_tx_wqe *wqe; u16 pi; pi = mlx5e_xdpsq_get_next_pi(sq, MLX5E_XDP_MPW_MAX_WQEBBS); - session->wqe = MLX5E_TX_FETCH_WQE(sq, pi); - - net_prefetchw(session->wqe->data); - session->ds_count = MLX5E_XDP_TX_EMPTY_DS_COUNT; - session->pkt_count = 0; - - mlx5e_xdp_update_inline_state(sq); + wqe = MLX5E_TX_FETCH_WQE(sq, pi); + net_prefetchw(wqe->data); + + *session = (struct mlx5e_xdp_mpwqe) { + .wqe = wqe, + .ds_count = MLX5E_TX_WQE_EMPTY_DS_COUNT, + .pkt_count = 0, + .inline_on = 
mlx5e_xdp_get_inline_state(sq, session->inline_on), + }; stats->mpwqe++; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h index 615bf04f4a54..96d6b1553bab 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h @@ -38,9 +38,7 @@ #include "en/txrx.h" #define MLX5E_XDP_MIN_INLINE (ETH_HLEN + VLAN_HLEN) -#define MLX5E_XDP_TX_EMPTY_DS_COUNT \ - (sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS) -#define MLX5E_XDP_TX_DS_COUNT (MLX5E_XDP_TX_EMPTY_DS_COUNT + 1 /* SG DS */) +#define MLX5E_XDP_TX_DS_COUNT (MLX5E_TX_WQE_EMPTY_DS_COUNT + 1 /* SG DS */) #define MLX5E_XDP_INLINE_WQE_MAX_DS_CNT 16 #define MLX5E_XDP_INLINE_WQE_SZ_THRSD \ @@ -123,23 +121,20 @@ static inline void mlx5e_xmit_xdp_doorbell(struct mlx5e_xdpsq *sq) /* Enable inline WQEs to shift some load from a congested HCA (HW) to * a less congested cpu (SW). */ -static inline void mlx5e_xdp_update_inline_state(struct mlx5e_xdpsq *sq) +static inline bool mlx5e_xdp_get_inline_state(struct mlx5e_xdpsq *sq, bool cur) { u16 outstanding = sq->xdpi_fifo_pc - sq->xdpi_fifo_cc; - struct mlx5e_xdp_mpwqe *session = &sq->mpwqe; #define MLX5E_XDP_INLINE_WATERMARK_LOW 10 #define MLX5E_XDP_INLINE_WATERMARK_HIGH 128 - if (session->inline_on) { - if (outstanding <= MLX5E_XDP_INLINE_WATERMARK_LOW) - session->inline_on = 0; - return; - } + if (cur && outstanding <= MLX5E_XDP_INLINE_WATERMARK_LOW) + return false; + + if (!cur && outstanding >= MLX5E_XDP_INLINE_WATERMARK_HIGH) + return true; - /* inline is false */ - if (outstanding >= MLX5E_XDP_INLINE_WATERMARK_HIGH) - session->inline_on = 1; + return cur; } static inline bool mlx5e_xdp_mpqwe_is_full(struct mlx5e_xdp_mpwqe *session) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index 0e13976b1ffc..f045c4be63db 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -306,7 +306,7 @@ static inline void mlx5e_sq_calc_wqe_attr(struct sk_buff *skb, const struct mlx5e_tx_attr *attr, struct mlx5e_tx_wqe_attr *wqe_attr) { - u16 ds_cnt = sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS; + u16 ds_cnt = MLX5E_TX_WQE_EMPTY_DS_COUNT; u16 ds_cnt_inl = 0; ds_cnt += !!attr->headlen + skb_shinfo(skb)->nr_frags; From patchwork Wed Sep 9 01:27:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 261304 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.9 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C353C2BC11 for ; Wed, 9 Sep 2020 01:28:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DD8AB218AC for ; Wed, 9 Sep 2020 01:28:57 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="aE15F/Gr" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729691AbgIIB2v (ORCPT ); Tue, 8 Sep 2020 
21:28:51 -0400
From: Saeed Mahameed
To: "David S. Miller" , Jakub Kicinski
CC: , Maxim Mikityanskiy , "Tariq Toukan" , Saeed Mahameed
Subject: [net-next V2 07/12] net/mlx5e: Move the TLS resync check out of the function
Date: Tue, 8 Sep 2020 18:27:52 -0700
Message-ID: <20200909012757.32677-8-saeedm@nvidia.com>
In-Reply-To: <20200909012757.32677-1-saeedm@nvidia.com>
References: <20200909012757.32677-1-saeedm@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Maxim Mikityanskiy

Before this patch, mlx5e_ktls_tx_handle_resync_dump_comp checked for resync_dump_frag_page. That check ran for all WQEs without an SKB, including padding WQEs, and required a function call. Normally, padding WQEs happen more often than TLS resyncs. Take this check out of the function and put it into an inline function to save a call on all padding WQEs.
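Condensed from the diff below into a readable form, the resulting wrapper is roughly:

static inline void
mlx5e_ktls_tx_try_handle_resync_dump_comp(struct mlx5e_txqsq *sq,
                                           struct mlx5e_tx_wqe_info *wi,
                                           u32 *dma_fifo_cc)
{
        /* Cheap inline test: the out-of-line handler runs only when a TLS
         * resync dump completion is actually pending, which is rare compared
         * to padding WQEs.
         */
        if (unlikely(wi->resync_dump_frag_page))
                mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma_fifo_cc);
}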
Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 3 --- .../mellanox/mlx5/core/en_accel/ktls_txrx.h | 14 +++++++++++--- drivers/net/ethernet/mellanox/mlx5/core/en_tx.c | 4 ++-- 3 files changed, 13 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c index f4861545b236..b140e13fdcc8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c @@ -345,9 +345,6 @@ void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, struct mlx5e_sq_stats *stats; struct mlx5e_sq_dma *dma; - if (!wi->resync_dump_frag_page) - return; - dma = mlx5e_dma_get(sq, (*dma_fifo_cc)++); stats = sq->stats; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h index ff4c740af10b..fcfb156cf09d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h @@ -29,11 +29,19 @@ void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi, void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, struct mlx5e_tx_wqe_info *wi, u32 *dma_fifo_cc); +static inline void +mlx5e_ktls_tx_try_handle_resync_dump_comp(struct mlx5e_txqsq *sq, + struct mlx5e_tx_wqe_info *wi, + u32 *dma_fifo_cc) +{ + if (unlikely(wi->resync_dump_frag_page)) + mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma_fifo_cc); +} #else static inline void -mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, - struct mlx5e_tx_wqe_info *wi, - u32 *dma_fifo_cc) +mlx5e_ktls_tx_try_handle_resync_dump_comp(struct mlx5e_txqsq *sq, + struct mlx5e_tx_wqe_info *wi, + u32 *dma_fifo_cc) { } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index f045c4be63db..cabc84e71b2d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -547,7 +547,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget) sqcc += wi->num_wqebbs; if (unlikely(!skb)) { - mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc); + mlx5e_ktls_tx_try_handle_resync_dump_comp(sq, wi, &dma_fifo_cc); continue; } @@ -612,7 +612,7 @@ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq) sqcc += wi->num_wqebbs; if (!skb) { - mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, &dma_fifo_cc); + mlx5e_ktls_tx_try_handle_resync_dump_comp(sq, wi, &dma_fifo_cc); continue; } From patchwork Wed Sep 9 01:27:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 261306 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.9 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B81D0C43461 for ; Wed, 9 Sep 2020 01:28:41 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
mail.kernel.org (Postfix) with ESMTP id 7B9A12177B for ; Wed, 9 Sep 2020 01:28:41 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="I680Oycu" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729449AbgIIB2k (ORCPT ); Tue, 8 Sep 2020 21:28:40 -0400 Received: from hqnvemgate26.nvidia.com ([216.228.121.65]:19457 "EHLO hqnvemgate26.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728483AbgIIB2a (ORCPT ); Tue, 8 Sep 2020 21:28:30 -0400 Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate26.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Tue, 08 Sep 2020 18:28:09 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Tue, 08 Sep 2020 18:28:23 -0700 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Tue, 08 Sep 2020 18:28:23 -0700 Received: from sx1.mtl.com (10.124.1.5) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Wed, 9 Sep 2020 01:28:15 +0000 From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski CC: , Maxim Mikityanskiy , "Tariq Toukan" , Saeed Mahameed Subject: [net-next V2 08/12] net/mlx5e: Support multiple SKBs in a TX WQE Date: Tue, 8 Sep 2020 18:27:53 -0700 Message-ID: <20200909012757.32677-9-saeedm@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200909012757.32677-1-saeedm@nvidia.com> References: <20200909012757.32677-1-saeedm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL101.nvidia.com (172.20.187.10) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1599614889; bh=p4FpQvplA8kO0HUSgPwbUpWXnxtt1TCnroxWUvmzms8=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:Content-Transfer-Encoding: Content-Type:X-Originating-IP:X-ClientProxiedBy; b=I680OycuTXzwsn7fdSZZMJL78d2dUNXV9uG6/IeYUPfAlHFBhUZb2hrlb6GWHTAzV JfF6XntV4YJUTRgZk79OMGBXrQyI78QTaQKrE1VZzrZh9kKJV3OYJmYg3eFkbjtVIY NHxOvDkwtjO+Z8jZrZ38lMG015AfmWgd3jbL7R58FrfT903sEgOR+iAisUAq99KNgh +kz51HyCRKnFFBC95LIuWNOYnzm/I0LZRfSxc9GKIhvpIvBde8iKxYNrAeeWuAmxeR TD+UwvnbE8d7A6OAFLp72MTW6IOdtyKk27GHzd14brFrc0CXkr2bPdfgyHcw4E9yTa wXJvfSkwvcDZw== Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Maxim Mikityanskiy TX MPWQE support for SKBs is coming in one of the following patches, and a single MPWQE can send multiple SKBs. This commit prepares the TX path code to handle such cases: 1. An additional FIFO for SKBs is added, just like the FIFO for DMA chunks. 2. struct mlx5e_tx_wqe_info will contain num_fifo_pkts. If a given WQE contains only one packet, num_fifo_pkts will be zero, and the SKB will be stored in mlx5e_tx_wqe_info, as usual. If num_fifo_pkts > 0, the SKB pointer will be NULL, and the SKBs will be stored in the FIFO. This change has no performance impact in TCP single stream test and XDP_TX single stream test. 
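For context before the numbers: the new skb_fifo reuses the same free-running-counter-and-mask indexing as the existing dma_fifo. A minimal, self-contained toy sketch of that technique (illustrative names only, not driver code):

#include <stdint.h>

/* Toy ring: capacity is a power of two, so "counter & mask" indexes
 * correctly even when the 16-bit counters themselves wrap around.
 * Initialize as: struct toy_fifo f = { .mask = 7 };
 */
struct toy_fifo {
        void *slot[8];          /* power-of-two capacity */
        uint16_t mask;          /* capacity - 1 */
        uint16_t pc;            /* producer counter, free-running */
        uint16_t cc;            /* consumer counter, free-running */
};

static void toy_fifo_push(struct toy_fifo *f, void *item)
{
        f->slot[f->pc++ & f->mask] = item;
}

static void *toy_fifo_pop(struct toy_fifo *f)
{
        return f->slot[f->cc++ & f->mask];
}

The measured numbers follow.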
UDP pktgen (burst 32), single stream: Packet rate: 19.23 Mpps -> 19.12 Mpps Instructions per packet: 360 -> 354 Cycles per packet: 142 -> 140 CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (x86_64) NIC: Mellanox ConnectX-6 Dx Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 4 ++ .../net/ethernet/mellanox/mlx5/core/en/txrx.h | 18 +++++ .../mellanox/mlx5/core/en_accel/ktls_txrx.h | 10 ++- .../net/ethernet/mellanox/mlx5/core/en_main.c | 7 +- .../net/ethernet/mellanox/mlx5/core/en_tx.c | 71 ++++++++++++++----- 5 files changed, 89 insertions(+), 21 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 4f33658da25a..6ab60074fca9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -317,11 +317,13 @@ struct mlx5e_txqsq { /* dirtied @completion */ u16 cc; + u16 skb_fifo_cc; u32 dma_fifo_cc; struct dim dim; /* Adaptive Moderation */ /* dirtied @xmit */ u16 pc ____cacheline_aligned_in_smp; + u16 skb_fifo_pc; u32 dma_fifo_pc; struct mlx5e_cq cq; @@ -329,9 +331,11 @@ struct mlx5e_txqsq { /* read only */ struct mlx5_wq_cyc wq; u32 dma_fifo_mask; + u16 skb_fifo_mask; struct mlx5e_sq_stats *stats; struct { struct mlx5e_sq_dma *dma_fifo; + struct sk_buff **skb_fifo; struct mlx5e_tx_wqe_info *wqe_info; } db; void __iomem *uar_map; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h index 277725c05de4..03fe92323f48 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h @@ -105,6 +105,7 @@ struct mlx5e_tx_wqe_info { u32 num_bytes; u8 num_wqebbs; u8 num_dma; + u8 num_fifo_pkts; #ifdef CONFIG_MLX5_EN_TLS struct page *resync_dump_frag_page; #endif @@ -231,6 +232,23 @@ mlx5e_dma_push(struct mlx5e_txqsq *sq, dma_addr_t addr, u32 size, dma->type = map_type; } +static inline struct sk_buff **mlx5e_skb_fifo_get(struct mlx5e_txqsq *sq, u16 i) +{ + return &sq->db.skb_fifo[i & sq->skb_fifo_mask]; +} + +static inline void mlx5e_skb_fifo_push(struct mlx5e_txqsq *sq, struct sk_buff *skb) +{ + struct sk_buff **skb_item = mlx5e_skb_fifo_get(sq, sq->skb_fifo_pc++); + + *skb_item = skb; +} + +static inline struct sk_buff *mlx5e_skb_fifo_pop(struct mlx5e_txqsq *sq) +{ + return *mlx5e_skb_fifo_get(sq, sq->skb_fifo_cc++); +} + static inline void mlx5e_tx_dma_unmap(struct device *pdev, struct mlx5e_sq_dma *dma) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h index fcfb156cf09d..7521c9be735b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_txrx.h @@ -29,20 +29,24 @@ void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi, void mlx5e_ktls_tx_handle_resync_dump_comp(struct mlx5e_txqsq *sq, struct mlx5e_tx_wqe_info *wi, u32 *dma_fifo_cc); -static inline void +static inline bool mlx5e_ktls_tx_try_handle_resync_dump_comp(struct mlx5e_txqsq *sq, struct mlx5e_tx_wqe_info *wi, u32 *dma_fifo_cc) { - if (unlikely(wi->resync_dump_frag_page)) + if (unlikely(wi->resync_dump_frag_page)) { mlx5e_ktls_tx_handle_resync_dump_comp(sq, wi, dma_fifo_cc); + return true; + } + return false; } #else -static inline void +static inline bool mlx5e_ktls_tx_try_handle_resync_dump_comp(struct mlx5e_txqsq *sq, struct 
mlx5e_tx_wqe_info *wi, u32 *dma_fifo_cc) { + return false; } #endif /* CONFIG_MLX5_EN_TLS */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 26834625556d..b413aa168e4e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -1040,6 +1040,7 @@ static void mlx5e_free_icosq(struct mlx5e_icosq *sq) static void mlx5e_free_txqsq_db(struct mlx5e_txqsq *sq) { kvfree(sq->db.wqe_info); + kvfree(sq->db.skb_fifo); kvfree(sq->db.dma_fifo); } @@ -1051,15 +1052,19 @@ static int mlx5e_alloc_txqsq_db(struct mlx5e_txqsq *sq, int numa) sq->db.dma_fifo = kvzalloc_node(array_size(df_sz, sizeof(*sq->db.dma_fifo)), GFP_KERNEL, numa); + sq->db.skb_fifo = kvzalloc_node(array_size(df_sz, + sizeof(*sq->db.skb_fifo)), + GFP_KERNEL, numa); sq->db.wqe_info = kvzalloc_node(array_size(wq_sz, sizeof(*sq->db.wqe_info)), GFP_KERNEL, numa); - if (!sq->db.dma_fifo || !sq->db.wqe_info) { + if (!sq->db.dma_fifo || !sq->db.skb_fifo || !sq->db.wqe_info) { mlx5e_free_txqsq_db(sq); return -ENOMEM; } sq->dma_fifo_mask = df_sz - 1; + sq->skb_fifo_mask = df_sz - 1; return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index cabc84e71b2d..d42f3c1dfa26 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -343,6 +343,7 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb, .num_bytes = attr->num_bytes, .num_dma = num_dma, .num_wqebbs = wqe_attr->num_wqebbs, + .num_fifo_pkts = 0, }; cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | attr->opcode); @@ -491,6 +492,20 @@ static inline void mlx5e_consume_skb(struct mlx5e_txqsq *sq, struct sk_buff *skb napi_consume_skb(skb, napi_budget); } +static inline void mlx5e_tx_wi_consume_fifo_skbs(struct mlx5e_txqsq *sq, + struct mlx5e_tx_wqe_info *wi, + struct mlx5_cqe64 *cqe, + int napi_budget) +{ + int i; + + for (i = 0; i < wi->num_fifo_pkts; i++) { + struct sk_buff *skb = mlx5e_skb_fifo_pop(sq); + + mlx5e_consume_skb(sq, skb, cqe, napi_budget); + } +} + bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget) { struct mlx5e_sq_stats *stats; @@ -536,26 +551,33 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget) wqe_counter = be16_to_cpu(cqe->wqe_counter); do { - struct sk_buff *skb; - last_wqe = (sqcc == wqe_counter); ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc); wi = &sq->db.wqe_info[ci]; - skb = wi->skb; sqcc += wi->num_wqebbs; - if (unlikely(!skb)) { - mlx5e_ktls_tx_try_handle_resync_dump_comp(sq, wi, &dma_fifo_cc); + if (likely(wi->skb)) { + mlx5e_tx_wi_dma_unmap(sq, wi, &dma_fifo_cc); + mlx5e_consume_skb(sq, wi->skb, cqe, napi_budget); + + npkts++; + nbytes += wi->num_bytes; continue; } - mlx5e_tx_wi_dma_unmap(sq, wi, &dma_fifo_cc); - mlx5e_consume_skb(sq, wi->skb, cqe, napi_budget); + if (unlikely(mlx5e_ktls_tx_try_handle_resync_dump_comp(sq, wi, + &dma_fifo_cc))) + continue; - npkts++; - nbytes += wi->num_bytes; + if (wi->num_fifo_pkts) { + mlx5e_tx_wi_dma_unmap(sq, wi, &dma_fifo_cc); + mlx5e_tx_wi_consume_fifo_skbs(sq, wi, cqe, napi_budget); + + npkts += wi->num_fifo_pkts; + nbytes += wi->num_bytes; + } } while (!last_wqe); if (unlikely(get_cqe_opcode(cqe) == MLX5_CQE_REQ_ERR)) { @@ -594,12 +616,19 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget) return (i == MLX5E_TX_CQ_POLL_BUDGET); } +static void mlx5e_tx_wi_kfree_fifo_skbs(struct mlx5e_txqsq *sq, struct mlx5e_tx_wqe_info *wi) +{ + 
int i; + + for (i = 0; i < wi->num_fifo_pkts; i++) + dev_kfree_skb_any(mlx5e_skb_fifo_pop(sq)); +} + void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq) { struct mlx5e_tx_wqe_info *wi; u32 dma_fifo_cc, nbytes = 0; u16 ci, sqcc, npkts = 0; - struct sk_buff *skb; sqcc = sq->cc; dma_fifo_cc = sq->dma_fifo_cc; @@ -607,20 +636,28 @@ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq) while (sqcc != sq->pc) { ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc); wi = &sq->db.wqe_info[ci]; - skb = wi->skb; sqcc += wi->num_wqebbs; - if (!skb) { - mlx5e_ktls_tx_try_handle_resync_dump_comp(sq, wi, &dma_fifo_cc); + if (likely(wi->skb)) { + mlx5e_tx_wi_dma_unmap(sq, wi, &dma_fifo_cc); + dev_kfree_skb_any(wi->skb); + + npkts++; + nbytes += wi->num_bytes; continue; } - mlx5e_tx_wi_dma_unmap(sq, wi, &dma_fifo_cc); - dev_kfree_skb_any(skb); + if (unlikely(mlx5e_ktls_tx_try_handle_resync_dump_comp(sq, wi, &dma_fifo_cc))) + continue; - npkts++; - nbytes += wi->num_bytes; + if (wi->num_fifo_pkts) { + mlx5e_tx_wi_dma_unmap(sq, wi, &dma_fifo_cc); + mlx5e_tx_wi_kfree_fifo_skbs(sq, wi); + + npkts += wi->num_fifo_pkts; + nbytes += wi->num_bytes; + } } sq->dma_fifo_cc = dma_fifo_cc; From patchwork Wed Sep 9 01:27:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 261303 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.9 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F6CCC43461 for ; Wed, 9 Sep 2020 01:29:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 178AB2177B for ; Wed, 9 Sep 2020 01:29:05 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=nvidia.com header.i=@nvidia.com header.b="LHkVp/Go" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728971AbgIIB2e (ORCPT ); Tue, 8 Sep 2020 21:28:34 -0400 Received: from hqnvemgate26.nvidia.com ([216.228.121.65]:19418 "EHLO hqnvemgate26.nvidia.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727935AbgIIB2X (ORCPT ); Tue, 8 Sep 2020 21:28:23 -0400 Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate26.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA) id ; Tue, 08 Sep 2020 18:28:09 -0700 Received: from hqmail.nvidia.com ([172.20.161.6]) by hqpgpgate101.nvidia.com (PGP Universal service); Tue, 08 Sep 2020 18:28:23 -0700 X-PGP-Universal: processed; by hqpgpgate101.nvidia.com on Tue, 08 Sep 2020 18:28:23 -0700 Received: from sx1.mtl.com (10.124.1.5) by HQMAIL107.nvidia.com (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Wed, 9 Sep 2020 01:28:17 +0000 From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski CC: , Maxim Mikityanskiy , "Tariq Toukan" , Saeed Mahameed Subject: [net-next V2 11/12] net/mlx5e: Move TX code into functions to be used by MPWQE Date: Tue, 8 Sep 2020 18:27:56 -0700 Message-ID: <20200909012757.32677-12-saeedm@nvidia.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200909012757.32677-1-saeedm@nvidia.com> References: <20200909012757.32677-1-saeedm@nvidia.com> MIME-Version: 1.0 X-Originating-IP: [10.124.1.5] X-ClientProxiedBy: HQMAIL101.nvidia.com (172.20.187.10) To HQMAIL107.nvidia.com (172.20.187.13) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1599614889; bh=gMyRUN5eHthfl8OCHAIkgElpWrt+vFQaX4XNeclONBY=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:Content-Transfer-Encoding: Content-Type:X-Originating-IP:X-ClientProxiedBy; b=LHkVp/GoU7mw25kdvo8Ce6qoPXjroo+n3XfpQf7zeRIZmiCAh1GeE5uG+QtEjTyDW W+G7b4wOwuTO7Xa0hiaMktfrUjO0I6dKMnJ00/wKIiZPvGDLJAUd1emYs8LWLGeeH2 49yUfHFEvm+SV25tVbo/PxE0HdqZlYxjTILP/uZ0X7vjGDlh7gcTY8WYZMRAR+QW1h HQEh2AqUbEBIVp50w2QS1zv3IalBFAwhrI4oyWVs3/yIAmUhKmbNLXOiSKDXyKlA2h lNbxxavuyXC3xT3ADLB7JEhNeo0nlYIizAPEdp4NzCRlPNhvuwkbZW6UQjUGMLFRzP ifu/lhaLliXag== Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Maxim Mikityanskiy mlx5e_txwqe_complete performs some actions that can be taken to separate functions: 1. Update the flags needed for hardware timestamping. 2. Stop the TX queue if it's full. Take these actions into separate functions to be reused by the MPWQE code in the following commit and to maintain clear responsibilities of functions. Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- .../net/ethernet/mellanox/mlx5/core/en_tx.c | 23 ++++++++++++++----- 1 file changed, 17 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index d42f3c1dfa26..090021e26e1e 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -328,6 +328,20 @@ static inline void mlx5e_sq_calc_wqe_attr(struct sk_buff *skb, }; } +static inline void mlx5e_tx_skb_update_hwts_flags(struct sk_buff *skb) +{ + if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; +} + +static inline void mlx5e_tx_check_stop(struct mlx5e_txqsq *sq) +{ + if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, sq->stop_room))) { + netif_tx_stop_queue(sq->txq); + sq->stats->stopped++; + } +} + static inline void mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb, const struct mlx5e_tx_attr *attr, @@ -349,14 +363,11 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb, cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | attr->opcode); cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | wqe_attr->ds_cnt); - if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) - skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; + mlx5e_tx_skb_update_hwts_flags(skb); sq->pc += wi->num_wqebbs; - if (unlikely(!mlx5e_wqc_has_room_for(wq, sq->cc, sq->pc, sq->stop_room))) { - netif_tx_stop_queue(sq->txq); - sq->stats->stopped++; - } + + mlx5e_tx_check_stop(sq); send_doorbell = __netdev_tx_sent_queue(sq->txq, attr->num_bytes, xmit_more); if (send_doorbell) From patchwork Wed Sep 9 01:27:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Saeed Mahameed
X-Patchwork-Id: 261305
From: Saeed Mahameed
To: "David S. Miller" , Jakub Kicinski
CC: , Maxim Mikityanskiy , "Tariq Toukan" , Saeed Mahameed
Subject: [net-next V2 12/12] net/mlx5e: Enhanced TX MPWQE for SKBs
Date: Tue, 8 Sep 2020 18:27:57 -0700
Message-ID: <20200909012757.32677-13-saeedm@nvidia.com>
In-Reply-To: <20200909012757.32677-1-saeedm@nvidia.com>
References: <20200909012757.32677-1-saeedm@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Maxim Mikityanskiy

This commit adds support for the Enhanced TX MPWQE feature in the regular (SKB) data path. A MPWQE (multi-packet work queue element) can serve multiple packets, reducing the PCI bandwidth spent on control traffic. Two new stats (tx*_mpwqe_blks and tx*_mpwqe_pkts) are added. The feature is on by default and controlled by the skb_tx_mpwqe private flag.
In a MPWQE, eseg is shared among all packets, so eseg-based offloads (IPSEC, GENEVE, checksum) run on a separate eseg that is compared to the eseg of the current MPWQE session to decide if the new packet can be added to the same session. MPWQE is not compatible with certain offloads and features, such as TLS offload, TSO, nonlinear SKBs. If such incompatible features are in use, the driver gracefully falls back to non-MPWQE. This change has no performance impact in TCP single stream test and XDP_TX single stream test. UDP pktgen, 64-byte packets, single stream, MPWQE off: Packet rate: 19.12 Mpps -> 20.02 Mpps Instructions per packet: 354 -> 347 Cycles per packet: 140 -> 129 UDP pktgen, 64-byte packets, single stream, MPWQE on: Packet rate: 19.12 Mpps -> 20.67 Mpps Instructions per packet: 354 -> 335 Cycles per packet: 140 -> 124 Enabling MPWQE can reduce PCI bandwidth: PCI Gen2, pktgen at fixed rate of 36864000 pps on 24 CPU cores: Inbound PCI utilization with MPWQE off: 81.3% Inbound PCI utilization with MPWQE on: 59.3% PCI Gen3, pktgen at fixed rate of 56064005 pps on 24 CPU cores: Inbound PCI utilization with MPWQE off: 65.8% Inbound PCI utilization with MPWQE on: 49.2% Enabling MPWQE can also reduce CPU load, increasing the packet rate in case of CPU bottleneck: PCI Gen2, pktgen at full rate on 24 CPU cores: Packet rate with MPWQE off: 37.4 Mpps Packet rate with MPWQE on: 49.1 Mpps PCI Gen3, pktgen at full rate on 24 CPU cores: Packet rate with MPWQE off: 56.2 Mpps Packet rate with MPWQE on: 67.0 Mpps Burst size in all pktgen tests is 32. CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (x86_64) NIC: Mellanox ConnectX-6 Dx To avoid performance degradation when MPWQE is off, manual optimizations of function inlining were performed. It's especially important to have mlx5e_sq_xmit_mpwqe noinline, otherwise gcc inlines it automatically and bloats mlx5e_xmit, slowing it down. 
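Condensed from the diff below, the per-packet flow in the new mlx5e_sq_xmit_mpwqe() is roughly the following (DMA mapping, error handling and doorbell logic omitted):

        if (!mlx5e_tx_mpwqe_session_is_active(sq)) {
                /* No session open: start one and copy the eseg into the WQE. */
                mlx5e_tx_mpwqe_session_start(sq, eseg);
        } else if (!mlx5e_tx_mpwqe_same_eseg(sq, eseg)) {
                /* Offload-relevant eseg fields differ (memcmp over
                 * MLX5E_ACCEL_ESEG_LEN): close the session and open a new one.
                 */
                mlx5e_tx_mpwqe_session_complete(sq);
                mlx5e_tx_mpwqe_session_start(sq, eseg);
        }

        mlx5e_tx_mpwqe_add_dseg(sq, &txd);      /* append this packet's data segment */

        if (unlikely(mlx5e_tx_mpwqe_is_full(&sq->mpwqe)))
                mlx5e_tx_mpwqe_session_complete(sq);    /* post the WQE once it fills up */

At runtime the feature is toggled through the skb_tx_mpwqe private flag mentioned above (ethtool private flags).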
Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 4 + .../net/ethernet/mellanox/mlx5/core/en/txrx.h | 1 + .../net/ethernet/mellanox/mlx5/core/en/xdp.c | 1 + .../net/ethernet/mellanox/mlx5/core/en/xdp.h | 1 + .../mellanox/mlx5/core/en_accel/en_accel.h | 29 +-- .../mellanox/mlx5/core/en_accel/tls_rxtx.c | 2 + .../ethernet/mellanox/mlx5/core/en_ethtool.c | 15 +- .../net/ethernet/mellanox/mlx5/core/en_main.c | 11 ++ .../ethernet/mellanox/mlx5/core/en_stats.c | 6 + .../ethernet/mellanox/mlx5/core/en_stats.h | 4 + .../net/ethernet/mellanox/mlx5/core/en_tx.c | 184 +++++++++++++++++- 11 files changed, 240 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 3511836f0f4a..2abb0857ede0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -221,6 +221,7 @@ enum mlx5e_priv_flag { MLX5E_PFLAG_RX_STRIDING_RQ, MLX5E_PFLAG_RX_NO_CSUM_COMPLETE, MLX5E_PFLAG_XDP_TX_MPWQE, + MLX5E_PFLAG_SKB_TX_MPWQE, MLX5E_NUM_PFLAGS, /* Keep last */ }; @@ -304,6 +305,7 @@ struct mlx5e_sq_dma { enum { MLX5E_SQ_STATE_ENABLED, + MLX5E_SQ_STATE_MPWQE, MLX5E_SQ_STATE_RECOVERING, MLX5E_SQ_STATE_IPSEC, MLX5E_SQ_STATE_AM, @@ -315,6 +317,7 @@ enum { struct mlx5e_tx_mpwqe { /* Current MPWQE session */ struct mlx5e_tx_wqe *wqe; + u32 bytes_count; u8 ds_count; u8 pkt_count; u8 inline_on; @@ -333,6 +336,7 @@ struct mlx5e_txqsq { u16 pc ____cacheline_aligned_in_smp; u16 skb_fifo_pc; u32 dma_fifo_pc; + struct mlx5e_tx_mpwqe mpwqe; struct mlx5e_cq cq; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h index 06dbfd6cd82a..8ccd0b661a7f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h @@ -278,6 +278,7 @@ mlx5e_tx_dma_unmap(struct device *pdev, struct mlx5e_sq_dma *dma) } void mlx5e_sq_xmit_simple(struct mlx5e_txqsq *sq, struct sk_buff *skb, bool xmit_more); +void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq); static inline bool mlx5e_tx_mpwqe_is_full(struct mlx5e_tx_mpwqe *session) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c index f0a102763de6..0b201e66f191 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c @@ -205,6 +205,7 @@ static void mlx5e_xdp_mpwqe_session_start(struct mlx5e_xdpsq *sq) *session = (struct mlx5e_tx_mpwqe) { .wqe = wqe, + .bytes_count = 0, .ds_count = MLX5E_TX_WQE_EMPTY_DS_COUNT, .pkt_count = 0, .inline_on = mlx5e_xdp_get_inline_state(sq, session->inline_on), diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h index 4bd8af478a4a..d487e5e37162 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h @@ -147,6 +147,7 @@ mlx5e_xdp_mpwqe_add_dseg(struct mlx5e_xdpsq *sq, u32 dma_len = xdptxd->len; session->pkt_count++; + session->bytes_count += dma_len; if (session->inline_on && dma_len <= MLX5E_XDP_INLINE_WQE_SZ_THRSD) { struct mlx5_wqe_inline_seg *inline_dseg = diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h index 23d4ef5ab9c5..2ea1cdc1ca54 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h 
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h @@ -128,31 +128,38 @@ static inline bool mlx5e_accel_tx_begin(struct net_device *dev, return true; } -static inline bool mlx5e_accel_tx_finish(struct mlx5e_priv *priv, - struct mlx5e_txqsq *sq, - struct sk_buff *skb, - struct mlx5e_tx_wqe *wqe, - struct mlx5e_accel_tx_state *state) -{ -#ifdef CONFIG_MLX5_EN_TLS - mlx5e_tls_handle_tx_wqe(sq, &wqe->ctrl, &state->tls); -#endif +/* Part of the eseg touched by TX offloads */ +#define MLX5E_ACCEL_ESEG_LEN offsetof(struct mlx5_wqe_eth_seg, mss) +static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv, + struct mlx5e_txqsq *sq, + struct sk_buff *skb, + struct mlx5_wqe_eth_seg *eseg) +{ #ifdef CONFIG_MLX5_EN_IPSEC if (test_bit(MLX5E_SQ_STATE_IPSEC, &sq->state)) { - if (unlikely(!mlx5e_ipsec_handle_tx_skb(priv, &wqe->eth, skb))) + if (unlikely(!mlx5e_ipsec_handle_tx_skb(priv, eseg, skb))) return false; } #endif #if IS_ENABLED(CONFIG_GENEVE) if (skb->encapsulation) - mlx5e_tx_tunnel_accel(skb, &wqe->eth); + mlx5e_tx_tunnel_accel(skb, eseg); #endif return true; } +static inline void mlx5e_accel_tx_finish(struct mlx5e_txqsq *sq, + struct mlx5e_tx_wqe *wqe, + struct mlx5e_accel_tx_state *state) +{ +#ifdef CONFIG_MLX5_EN_TLS + mlx5e_tls_handle_tx_wqe(sq, &wqe->ctrl, &state->tls); +#endif +} + static inline int mlx5e_accel_init_rx(struct mlx5e_priv *priv) { return mlx5e_ktls_init_rx(priv); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c index c36560b3e93d..6982b193ee8a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls_rxtx.c @@ -270,6 +270,8 @@ bool mlx5e_tls_handle_tx_skb(struct net_device *netdev, struct mlx5e_txqsq *sq, if (!datalen) return true; + mlx5e_tx_mpwqe_ensure_complete(sq); + tls_ctx = tls_get_ctx(skb->sk); if (WARN_ON_ONCE(tls_ctx->netdev != netdev)) goto err_out; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index 5cb1e4839eb7..2c34bb57048c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -1901,7 +1901,7 @@ static int set_pflag_rx_no_csum_complete(struct net_device *netdev, bool enable) return 0; } -static int set_pflag_xdp_tx_mpwqe(struct net_device *netdev, bool enable) +static int set_pflag_tx_mpwqe_common(struct net_device *netdev, u32 flag, bool enable) { struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5_core_dev *mdev = priv->mdev; @@ -1913,7 +1913,7 @@ static int set_pflag_xdp_tx_mpwqe(struct net_device *netdev, bool enable) new_channels.params = priv->channels.params; - MLX5E_SET_PFLAG(&new_channels.params, MLX5E_PFLAG_XDP_TX_MPWQE, enable); + MLX5E_SET_PFLAG(&new_channels.params, flag, enable); if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { priv->channels.params = new_channels.params; @@ -1924,6 +1924,16 @@ static int set_pflag_xdp_tx_mpwqe(struct net_device *netdev, bool enable) return err; } +static int set_pflag_xdp_tx_mpwqe(struct net_device *netdev, bool enable) +{ + return set_pflag_tx_mpwqe_common(netdev, MLX5E_PFLAG_XDP_TX_MPWQE, enable); +} + +static int set_pflag_skb_tx_mpwqe(struct net_device *netdev, bool enable) +{ + return set_pflag_tx_mpwqe_common(netdev, MLX5E_PFLAG_SKB_TX_MPWQE, enable); +} + static const struct pflag_desc mlx5e_priv_flags[MLX5E_NUM_PFLAGS] = { { "rx_cqe_moder", set_pflag_rx_cqe_based_moder }, { 
"tx_cqe_moder", set_pflag_tx_cqe_based_moder }, @@ -1931,6 +1941,7 @@ static const struct pflag_desc mlx5e_priv_flags[MLX5E_NUM_PFLAGS] = { { "rx_striding_rq", set_pflag_rx_striding_rq }, { "rx_no_csum_complete", set_pflag_rx_no_csum_complete }, { "xdp_tx_mpwqe", set_pflag_xdp_tx_mpwqe }, + { "skb_tx_mpwqe", set_pflag_skb_tx_mpwqe }, }; static int mlx5e_handle_pflag(struct net_device *netdev, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index b413aa168e4e..f8ad4a724a63 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -1075,6 +1075,12 @@ static int mlx5e_calc_sq_stop_room(struct mlx5e_txqsq *sq, u8 log_sq_size) sq->stop_room = mlx5e_tls_get_stop_room(sq); sq->stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS); + if (test_bit(MLX5E_SQ_STATE_MPWQE, &sq->state)) + /* A MPWQE can take up to the maximum-sized WQE + all the normal + * stop room can be taken if a new packet breaks the active + * MPWQE session and allocates its WQEs right away. + */ + sq->stop_room += mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS); if (WARN_ON(sq->stop_room >= sq_size)) { netdev_err(sq->channel->netdev, "Stop room %hu is bigger than the SQ size %d\n", @@ -1116,6 +1122,8 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c, set_bit(MLX5E_SQ_STATE_IPSEC, &sq->state); if (mlx5_accel_is_tls_device(c->priv->mdev)) set_bit(MLX5E_SQ_STATE_TLS, &sq->state); + if (param->is_mpw) + set_bit(MLX5E_SQ_STATE_MPWQE, &sq->state); err = mlx5e_calc_sq_stop_room(sq, params->log_sq_size); if (err) return err; @@ -2168,6 +2176,7 @@ static void mlx5e_build_sq_param(struct mlx5e_priv *priv, mlx5e_build_sq_param_common(priv, param); MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); MLX5_SET(sqc, sqc, allow_swp, allow_swp); + param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE); mlx5e_build_tx_cq_param(priv, params, ¶m->cqp); } @@ -4721,6 +4730,8 @@ void mlx5e_build_nic_params(struct mlx5e_priv *priv, params->log_sq_size = is_kdump_kernel() ? 
MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE : MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE; + MLX5E_SET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE, + MLX5_CAP_ETH(mdev, enhanced_multi_pkt_send_wqe)); /* XDP SQ */ MLX5E_SET_PFLAG(params, MLX5E_PFLAG_XDP_TX_MPWQE, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c index e3b2f59408e6..20d7815ffbf4 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c @@ -98,6 +98,8 @@ static const struct counter_desc sw_stats_desc[] = { { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tso_inner_bytes) }, { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_added_vlan_packets) }, { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_nop) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_mpwqe_blks) }, + { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_mpwqe_pkts) }, #ifdef CONFIG_MLX5_EN_TLS { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_tls_encrypted_packets) }, @@ -353,6 +355,8 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw) s->tx_tso_inner_bytes += sq_stats->tso_inner_bytes; s->tx_added_vlan_packets += sq_stats->added_vlan_packets; s->tx_nop += sq_stats->nop; + s->tx_mpwqe_blks += sq_stats->mpwqe_blks; + s->tx_mpwqe_pkts += sq_stats->mpwqe_pkts; s->tx_queue_stopped += sq_stats->stopped; s->tx_queue_wake += sq_stats->wake; s->tx_queue_dropped += sq_stats->dropped; @@ -1527,6 +1531,8 @@ static const struct counter_desc sq_stats_desc[] = { { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, csum_partial_inner) }, { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, added_vlan_packets) }, { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, nop) }, + { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, mpwqe_blks) }, + { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, mpwqe_pkts) }, #ifdef CONFIG_MLX5_EN_TLS { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_packets) }, { MLX5E_DECLARE_TX_STAT(struct mlx5e_sq_stats, tls_encrypted_bytes) }, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h index 2e1cca1923b9..fd198965ba82 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h @@ -117,6 +117,8 @@ struct mlx5e_sw_stats { u64 tx_tso_inner_bytes; u64 tx_added_vlan_packets; u64 tx_nop; + u64 tx_mpwqe_blks; + u64 tx_mpwqe_pkts; u64 rx_lro_packets; u64 rx_lro_bytes; u64 rx_ecn_mark; @@ -345,6 +347,8 @@ struct mlx5e_sq_stats { u64 csum_partial_inner; u64 added_vlan_packets; u64 nop; + u64 mpwqe_blks; + u64 mpwqe_pkts; #ifdef CONFIG_MLX5_EN_TLS u64 tls_encrypted_packets; u64 tls_encrypted_bytes; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index 090021e26e1e..e6f584970426 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -429,6 +429,166 @@ mlx5e_sq_xmit_wqe(struct mlx5e_txqsq *sq, struct sk_buff *skb, dev_kfree_skb_any(skb); } +static inline bool mlx5e_tx_skb_supports_mpwqe(struct sk_buff *skb, struct mlx5e_tx_attr *attr) +{ + return !skb_is_nonlinear(skb) && !skb_vlan_tag_present(skb) && !attr->ihs; +} + +static inline bool mlx5e_tx_mpwqe_same_eseg(struct mlx5e_txqsq *sq, struct mlx5_wqe_eth_seg *eseg) +{ + struct mlx5e_tx_mpwqe *session = &sq->mpwqe; + + /* Assumes the session is already running and has at least one packet. 
*/ + return !memcmp(&session->wqe->eth, eseg, MLX5E_ACCEL_ESEG_LEN); +} + +static void mlx5e_tx_mpwqe_session_start(struct mlx5e_txqsq *sq, + struct mlx5_wqe_eth_seg *eseg) +{ + struct mlx5e_tx_mpwqe *session = &sq->mpwqe; + struct mlx5e_tx_wqe *wqe; + u16 pi; + + pi = mlx5e_txqsq_get_next_pi(sq, MLX5E_TX_MPW_MAX_WQEBBS); + wqe = MLX5E_TX_FETCH_WQE(sq, pi); + prefetchw(wqe->data); + + *session = (struct mlx5e_tx_mpwqe) { + .wqe = wqe, + .bytes_count = 0, + .ds_count = MLX5E_TX_WQE_EMPTY_DS_COUNT, + .pkt_count = 0, + .inline_on = 0, + }; + + memcpy(&session->wqe->eth, eseg, MLX5E_ACCEL_ESEG_LEN); + + sq->stats->mpwqe_blks++; +} + +static inline bool mlx5e_tx_mpwqe_session_is_active(struct mlx5e_txqsq *sq) +{ + return sq->mpwqe.wqe; +} + +static inline void mlx5e_tx_mpwqe_add_dseg(struct mlx5e_txqsq *sq, struct mlx5e_xmit_data *txd) +{ + struct mlx5e_tx_mpwqe *session = &sq->mpwqe; + struct mlx5_wqe_data_seg *dseg; + + dseg = (struct mlx5_wqe_data_seg *)session->wqe + session->ds_count; + + session->pkt_count++; + session->bytes_count += txd->len; + + dseg->addr = cpu_to_be64(txd->dma_addr); + dseg->byte_count = cpu_to_be32(txd->len); + dseg->lkey = sq->mkey_be; + session->ds_count++; + + sq->stats->mpwqe_pkts++; +} + +static struct mlx5_wqe_ctrl_seg *mlx5e_tx_mpwqe_session_complete(struct mlx5e_txqsq *sq) +{ + struct mlx5e_tx_mpwqe *session = &sq->mpwqe; + u8 ds_count = session->ds_count; + struct mlx5_wqe_ctrl_seg *cseg; + struct mlx5e_tx_wqe_info *wi; + u16 pi; + + cseg = &session->wqe->ctrl; + cseg->opmod_idx_opcode = cpu_to_be32((sq->pc << 8) | MLX5_OPCODE_ENHANCED_MPSW); + cseg->qpn_ds = cpu_to_be32((sq->sqn << 8) | ds_count); + + pi = mlx5_wq_cyc_ctr2ix(&sq->wq, sq->pc); + wi = &sq->db.wqe_info[pi]; + *wi = (struct mlx5e_tx_wqe_info) { + .skb = NULL, + .num_bytes = session->bytes_count, + .num_wqebbs = DIV_ROUND_UP(ds_count, MLX5_SEND_WQEBB_NUM_DS), + .num_dma = session->pkt_count, + .num_fifo_pkts = session->pkt_count, + }; + + sq->pc += wi->num_wqebbs; + + session->wqe = NULL; + + mlx5e_tx_check_stop(sq); + + return cseg; +} + +static noinline void +mlx5e_sq_xmit_mpwqe(struct mlx5e_txqsq *sq, struct sk_buff *skb, + struct mlx5_wqe_eth_seg *eseg, bool xmit_more) +{ + struct mlx5_wqe_ctrl_seg *cseg; + struct mlx5e_xmit_data txd; + + if (!mlx5e_tx_mpwqe_session_is_active(sq)) { + mlx5e_tx_mpwqe_session_start(sq, eseg); + } else if (!mlx5e_tx_mpwqe_same_eseg(sq, eseg)) { + mlx5e_tx_mpwqe_session_complete(sq); + mlx5e_tx_mpwqe_session_start(sq, eseg); + } + + sq->stats->xmit_more += xmit_more; + + txd.data = skb->data; + txd.len = skb->len; + + txd.dma_addr = dma_map_single(sq->pdev, txd.data, txd.len, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(sq->pdev, txd.dma_addr))) + goto err_unmap; + mlx5e_dma_push(sq, txd.dma_addr, txd.len, MLX5E_DMA_MAP_SINGLE); + + mlx5e_skb_fifo_push(sq, skb); + + mlx5e_tx_mpwqe_add_dseg(sq, &txd); + + mlx5e_tx_skb_update_hwts_flags(skb); + + if (unlikely(mlx5e_tx_mpwqe_is_full(&sq->mpwqe))) { + /* Might stop the queue and affect the retval of __netdev_tx_sent_queue. */ + cseg = mlx5e_tx_mpwqe_session_complete(sq); + + if (__netdev_tx_sent_queue(sq->txq, txd.len, xmit_more)) + mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg); + } else if (__netdev_tx_sent_queue(sq->txq, txd.len, xmit_more)) { + /* Might stop the queue, but we were asked to ring the doorbell anyway. 
*/ + cseg = mlx5e_tx_mpwqe_session_complete(sq); + + mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg); + } + + return; + +err_unmap: + mlx5e_dma_unmap_wqe_err(sq, 1); + sq->stats->dropped++; + dev_kfree_skb_any(skb); +} + +void mlx5e_tx_mpwqe_ensure_complete(struct mlx5e_txqsq *sq) +{ + /* Unlikely in non-MPWQE workloads; not important in MPWQE workloads. */ + if (unlikely(mlx5e_tx_mpwqe_session_is_active(sq))) + mlx5e_tx_mpwqe_session_complete(sq); +} + +static inline bool mlx5e_txwqe_build_eseg(struct mlx5e_priv *priv, struct mlx5e_txqsq *sq, + struct sk_buff *skb, struct mlx5_wqe_eth_seg *eseg) +{ + if (unlikely(!mlx5e_accel_tx_eseg(priv, sq, skb, eseg))) + return false; + + mlx5e_txwqe_build_eseg_csum(sq, skb, eseg); + + return true; +} + netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev) { struct mlx5e_priv *priv = netdev_priv(dev); @@ -443,21 +603,35 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev) /* May send SKBs and WQEs. */ if (unlikely(!mlx5e_accel_tx_begin(dev, sq, skb, &accel))) - goto out; + return NETDEV_TX_OK; mlx5e_sq_xmit_prepare(sq, skb, &accel, &attr); + + if (test_bit(MLX5E_SQ_STATE_MPWQE, &sq->state)) { + if (mlx5e_tx_skb_supports_mpwqe(skb, &attr)) { + struct mlx5_wqe_eth_seg eseg = {}; + + if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &eseg))) + return NETDEV_TX_OK; + + mlx5e_sq_xmit_mpwqe(sq, skb, &eseg, netdev_xmit_more()); + return NETDEV_TX_OK; + } + + mlx5e_tx_mpwqe_ensure_complete(sq); + } + mlx5e_sq_calc_wqe_attr(skb, &attr, &wqe_attr); pi = mlx5e_txqsq_get_next_pi(sq, wqe_attr.num_wqebbs); wqe = MLX5E_TX_FETCH_WQE(sq, pi); /* May update the WQE, but may not post other WQEs. */ - if (unlikely(!mlx5e_accel_tx_finish(priv, sq, skb, wqe, &accel))) - goto out; + mlx5e_accel_tx_finish(sq, wqe, &accel); + if (unlikely(!mlx5e_txwqe_build_eseg(priv, sq, skb, &wqe->eth))) + return NETDEV_TX_OK; - mlx5e_txwqe_build_eseg_csum(sq, skb, &wqe->eth); mlx5e_sq_xmit_wqe(sq, skb, &attr, &wqe_attr, wqe, pi, netdev_xmit_more()); -out: return NETDEV_TX_OK; }