From patchwork Thu Mar 25 05:04:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 409362 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 91E40C433DB for ; Thu, 25 Mar 2021 05:05:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4B82E61A26 for ; Thu, 25 Mar 2021 05:05:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229810AbhCYFE6 (ORCPT ); Thu, 25 Mar 2021 01:04:58 -0400 Received: from mail.kernel.org ([198.145.29.99]:47884 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229493AbhCYFEo (ORCPT ); Thu, 25 Mar 2021 01:04:44 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 3C63C61A21; Thu, 25 Mar 2021 05:04:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1616648684; bh=J3rhXXr+ZN5GHRqjsnwyBxKeuRUTirpRcOy37nkZgMw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=a+Ru7i/JeeibrbaYCJH+zEmbm5rTvGPfgtGcb6dH35uxRFws2mFOaHG6Nmc5Zk6I8 Jp05ehQa45q2sF6UVjElkQpIIQWQbNPkUX1BsJAG/WRkxiIjUbm7EYx2mcN2PKN8uI golkYALwx4PBq+eTQbHpyQDhq+WREEnUTARDEb5xehoZeF4n7hwMZUjH/lxEE1P/Uc 4jpHXPxQQxkaF3wsyU4lU2ehc2Vx9F/RSMELTNPBu45etpUAMWeOrcWevrCmc2X2LR l0xv+NLoyImNlP8hJihkHLJH7do0enw/KEvze5ylbZxrG4c9YN8Yml76dzDtt4LVex 8lV+QuIBw8tew== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Yevgeny Kliteynik , Dan Carpenter , Alex Vesker , Saeed Mahameed Subject: [net-next 02/15] net/mlx5: DR, Fix potential shift wrapping of 32-bit value in STEv1 getter Date: Wed, 24 Mar 2021 22:04:25 -0700 Message-Id: <20210325050438.261511-3-saeed@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210325050438.261511-1-saeed@kernel.org> References: <20210325050438.261511-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Yevgeny Kliteynik Fix 32-bit variable shift wrapping in dr_ste_v1_get_miss_addr. 
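For context, a minimal standalone sketch of the wrapping (illustrative only, not part of the patch; the helper names and stdint types below stand in for the driver's MLX5_GET() calls and the kernel's u32/u64):

	#include <stdint.h>

	/* miss_address_31_6 lands in index bits 0..25 and miss_address_39_32
	 * in index bits 26..33, so the OR has to be done in 64-bit arithmetic. */
	static uint64_t get_miss_addr_wrapping(uint32_t addr_31_6, uint32_t addr_39_32)
	{
		/* 32-bit shift: index bits 32..33 (address bits 39:38) are
		 * truncated before the widening assignment to index. */
		uint64_t index = addr_31_6 | addr_39_32 << 26;

		return index << 6;
	}

	static uint64_t get_miss_addr_fixed(uint32_t addr_31_6, uint32_t addr_39_32)
	{
		/* Widen to 64 bits before shifting so no address bits are lost. */
		uint64_t index = (uint64_t)addr_31_6 | (uint64_t)addr_39_32 << 26;

		return index << 6;
	}

For any miss address with bit 38 or 39 set, only the second variant reconstructs the full value; this is the same cast the hunk below adds to dr_ste_v1_get_miss_addr().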
Fixes: a6098129c781 ("net/mlx5: DR, Add STEv1 setters and getters") Reported-by: Dan Carpenter Signed-off-by: Yevgeny Kliteynik Reviewed-by: Alex Vesker Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c index 815951617e7c..616ebc38381a 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c @@ -264,8 +264,8 @@ static void dr_ste_v1_set_miss_addr(u8 *hw_ste_p, u64 miss_addr) static u64 dr_ste_v1_get_miss_addr(u8 *hw_ste_p) { u64 index = - (MLX5_GET(ste_match_bwc_v1, hw_ste_p, miss_address_31_6) | - MLX5_GET(ste_match_bwc_v1, hw_ste_p, miss_address_39_32) << 26); + ((u64)MLX5_GET(ste_match_bwc_v1, hw_ste_p, miss_address_31_6) | + ((u64)MLX5_GET(ste_match_bwc_v1, hw_ste_p, miss_address_39_32)) << 26); return index << 6; } From patchwork Thu Mar 25 05:04:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 409363 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AF76BC433E1 for ; Thu, 25 Mar 2021 05:05:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8003861A27 for ; Thu, 25 Mar 2021 05:05:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229847AbhCYFFA (ORCPT ); Thu, 25 Mar 2021 01:05:00 -0400 Received: from mail.kernel.org ([198.145.29.99]:47898 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229508AbhCYFEp (ORCPT ); Thu, 25 Mar 2021 01:04:45 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 6809661A24; Thu, 25 Mar 2021 05:04:44 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1616648685; bh=+IdPbWs7MYobGyl22tW3H3PzIM4BslGgGn5MjbmJmvQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=RIzvKeM026qCkjnMjxrXS7p+gSR+rAPsEh4djPaMqXwoGpCzRzOMlrls66ltQJWFN SEGynT9uTTg22IhaeGr9Rf2SjzbDJ1t8sMTHwJq1MclZqE71C4x9R/clLs3MwEeS/E iIbDRyIFgY3Fmt1QV+O4BsKp4B143i3V2xxwQqwZ8XnyqwrIjSqCYPx9yc2JkzDCRo /rokoAN2mAOznRispCxNWv5vczJHS5xF1QJ74kXtZFXrkwV3foe1TbPYPCm71ZEDFh g58lIPie+dv+nDpczx9SgjJC1CeaNlvdSGk9PqiuG0x99OqEMMBDl8/AztMO9fSQ2z 85QYtNX1IF7Tg== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Saeed Mahameed , Arnd Bergmann Subject: [net-next 03/15] net/mlx5e: alloc the correct size for indirection_rqt Date: Wed, 24 Mar 2021 22:04:26 -0700 Message-Id: <20210325050438.261511-4-saeed@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210325050438.261511-1-saeed@kernel.org> References: <20210325050438.261511-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Saeed Mahameed The cited patch allocated the wrong size for the indirection_rqt table, fix that. Fixes: 2119bda642c4 ("net/mlx5e: allocate 'indirection_rqt' buffer dynamically") CC: Arnd Bergmann Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en_tc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c index 730f33ada90a..251941d3a8d7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c @@ -447,11 +447,11 @@ static void mlx5e_hairpin_destroy_transport(struct mlx5e_hairpin *hp) static int mlx5e_hairpin_fill_rqt_rqns(struct mlx5e_hairpin *hp, void *rqtc) { - u32 *indirection_rqt, rqn; struct mlx5e_priv *priv = hp->func_priv; int i, ix, sz = MLX5E_INDIR_RQT_SIZE; + u32 *indirection_rqt, rqn; - indirection_rqt = kzalloc(sz, GFP_KERNEL); + indirection_rqt = kcalloc(sz, sizeof(*indirection_rqt), GFP_KERNEL); if (!indirection_rqt) return -ENOMEM; From patchwork Thu Mar 25 05:04:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 409358 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D3971C433E3 for ; Thu, 25 Mar 2021 05:05:28 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 92B2461A1F for ; Thu, 25 Mar 2021 05:05:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229866AbhCYFFC (ORCPT ); Thu, 25 Mar 2021 01:05:02 -0400 Received: from mail.kernel.org ([198.145.29.99]:47916 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229590AbhCYFEq (ORCPT ); Thu, 25 Mar 2021 01:04:46 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id B31A561A26; Thu, 25 Mar 2021 05:04:45 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1616648686; bh=8jyGKXLn6uyETEmf6MbPCgX4CZOIByAJVqxZQuatrgE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Z/1ZZgr+2hs1BkCYSnlksebNJP91jVuUX65as7QnQpt5QW6LXKqAHK23EfgmwFHgn 6bq782YPsMlNk059rhSVAAHXp9+oAAxJj+IY18Izp79WXB9G2HIHXmHS7AF0Y7kZjP /CQUxyNxJgB4AoTeTokHz90ZvxHIsFvh/YhqnzNxopki0ZOlzxPKlWMiFv6sn7q7qX wdQjMHHKmiobseoEBE18azbAbkg4BLTW36Oe+NP1QNWfnbPiNmGMh5M79UPRVfMWxf c91pcFHbwz6HH+g9Rbt9OfY8x7D/VNK1koDkzSn76W6glGXkbUiwXh3/J9p3N2ZAV8 AwIjo6aDny+Lw== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Tariq Toukan , Maxim Mikityanskiy , Aya Levin , Saeed Mahameed Subject: [net-next 05/15] net/mlx5e: Move params logic into its dedicated file Date: Wed, 24 Mar 2021 22:04:28 -0700 Message-Id: <20210325050438.261511-6-saeed@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210325050438.261511-1-saeed@kernel.org> References: <20210325050438.261511-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tariq Toukan Take params logic out of en_main.c, into the dedicated params.c. Some functions are now hidden and become static. No functional change here. Signed-off-by: Tariq Toukan Reviewed-by: Maxim Mikityanskiy Reviewed-by: Aya Levin Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 12 - .../ethernet/mellanox/mlx5/core/en/params.c | 478 +++++++++++++++++- .../ethernet/mellanox/mlx5/core/en/params.h | 30 +- .../net/ethernet/mellanox/mlx5/core/en_main.c | 454 ----------------- .../net/ethernet/mellanox/mlx5/core/en_rep.c | 1 + .../ethernet/mellanox/mlx5/core/ipoib/ipoib.c | 1 + 6 files changed, 495 insertions(+), 481 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 1f5bc4d91060..e286a6073db3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -918,8 +918,6 @@ struct mlx5e_profile { void mlx5e_build_ptys2ethtool_map(void); bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev); -bool mlx5e_striding_rq_possible(struct mlx5_core_dev *mdev, - struct mlx5e_params *params); void mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats); void mlx5e_fold_sw_stats64(struct mlx5e_priv *priv, struct rtnl_link_stats64 *s); @@ -1023,14 +1021,6 @@ void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv); void mlx5e_build_default_indir_rqt(u32 *indirection_rqt, int len, int num_channels); -void mlx5e_reset_tx_moderation(struct mlx5e_params *params, u8 cq_period_mode); -void mlx5e_reset_rx_moderation(struct mlx5e_params *params, u8 cq_period_mode); -void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode); -void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode); - -void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params); -void mlx5e_init_rq_type_params(struct mlx5_core_dev *mdev, - struct mlx5e_params *params); int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_state); void mlx5e_activate_rq(struct mlx5e_rq *rq); void mlx5e_deactivate_rq(struct mlx5e_rq *rq); @@ -1176,8 +1166,6 @@ int mlx5e_netdev_change_profile(struct mlx5e_priv *priv, void mlx5e_netdev_attach_nic_profile(struct mlx5e_priv *priv); void mlx5e_set_netdev_mtu_boundaries(struct mlx5e_priv *priv); void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16 mtu); -void mlx5e_build_rq_params(struct mlx5_core_dev *mdev, - struct mlx5e_params *params); void mlx5e_build_rss_params(struct mlx5e_rss_params *rss_params, u16 num_channels); void mlx5e_rx_dim_work(struct work_struct *work); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c index 36381a2ed5a5..d2e0e07407a8 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c @@ -3,7 +3,9 @@ #include "en/params.h" #include "en/txrx.h" -#include 
"en_accel/tls_rxtx.h" +#include "en/port.h" +#include "en_accel/en_accel.h" +#include "accel/ipsec.h" static inline bool mlx5e_rx_is_xdp(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk) @@ -37,8 +39,8 @@ u32 mlx5e_rx_get_min_frag_sz(struct mlx5e_params *params, return linear_rq_headroom + hw_mtu; } -u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params, - struct mlx5e_xsk_param *xsk) +static u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params, + struct mlx5e_xsk_param *xsk) { u32 frag_sz = mlx5e_rx_get_min_frag_sz(params, xsk); @@ -186,3 +188,473 @@ int mlx5e_validate_params(struct mlx5e_priv *priv, struct mlx5e_params *params) return 0; } + +static struct dim_cq_moder mlx5e_get_def_tx_moderation(u8 cq_period_mode) +{ + struct dim_cq_moder moder; + + moder.cq_period_mode = cq_period_mode; + moder.pkts = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_PKTS; + moder.usec = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_USEC; + if (cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE) + moder.usec = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_USEC_FROM_CQE; + + return moder; +} + +static struct dim_cq_moder mlx5e_get_def_rx_moderation(u8 cq_period_mode) +{ + struct dim_cq_moder moder; + + moder.cq_period_mode = cq_period_mode; + moder.pkts = MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_PKTS; + moder.usec = MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC; + if (cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE) + moder.usec = MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC_FROM_CQE; + + return moder; +} + +static u8 mlx5_to_net_dim_cq_period_mode(u8 cq_period_mode) +{ + return cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE ? + DIM_CQ_PERIOD_MODE_START_FROM_CQE : + DIM_CQ_PERIOD_MODE_START_FROM_EQE; +} + +void mlx5e_reset_tx_moderation(struct mlx5e_params *params, u8 cq_period_mode) +{ + if (params->tx_dim_enabled) { + u8 dim_period_mode = mlx5_to_net_dim_cq_period_mode(cq_period_mode); + + params->tx_cq_moderation = net_dim_get_def_tx_moderation(dim_period_mode); + } else { + params->tx_cq_moderation = mlx5e_get_def_tx_moderation(cq_period_mode); + } +} + +void mlx5e_reset_rx_moderation(struct mlx5e_params *params, u8 cq_period_mode) +{ + if (params->rx_dim_enabled) { + u8 dim_period_mode = mlx5_to_net_dim_cq_period_mode(cq_period_mode); + + params->rx_cq_moderation = net_dim_get_def_rx_moderation(dim_period_mode); + } else { + params->rx_cq_moderation = mlx5e_get_def_rx_moderation(cq_period_mode); + } +} + +void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) +{ + mlx5e_reset_tx_moderation(params, cq_period_mode); + MLX5E_SET_PFLAG(params, MLX5E_PFLAG_TX_CQE_BASED_MODER, + params->tx_cq_moderation.cq_period_mode == + MLX5_CQ_PERIOD_MODE_START_FROM_CQE); +} + +void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) +{ + mlx5e_reset_rx_moderation(params, cq_period_mode); + MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_CQE_BASED_MODER, + params->rx_cq_moderation.cq_period_mode == + MLX5_CQ_PERIOD_MODE_START_FROM_CQE); +} + +bool slow_pci_heuristic(struct mlx5_core_dev *mdev) +{ + u32 link_speed = 0; + u32 pci_bw = 0; + + mlx5e_port_max_linkspeed(mdev, &link_speed); + pci_bw = pcie_bandwidth_available(mdev->pdev, NULL, NULL, NULL); + mlx5_core_dbg_once(mdev, "Max link speed = %d, PCI BW = %d\n", + link_speed, pci_bw); + +#define MLX5E_SLOW_PCI_RATIO (2) + + return link_speed && pci_bw && + link_speed > MLX5E_SLOW_PCI_RATIO * pci_bw; +} + +bool mlx5e_striding_rq_possible(struct mlx5_core_dev *mdev, + struct mlx5e_params *params) +{ + if 
(!mlx5e_check_fragmented_striding_rq_cap(mdev)) + return false; + + if (MLX5_IPSEC_DEV(mdev)) + return false; + + if (params->xdp_prog) { + /* XSK params are not considered here. If striding RQ is in use, + * and an XSK is being opened, mlx5e_rx_mpwqe_is_linear_skb will + * be called with the known XSK params. + */ + if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL)) + return false; + } + + return true; +} + +void mlx5e_init_rq_type_params(struct mlx5_core_dev *mdev, + struct mlx5e_params *params) +{ + params->log_rq_mtu_frames = is_kdump_kernel() ? + MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE : + MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE; + + mlx5_core_info(mdev, "MLX5E: StrdRq(%d) RqSz(%ld) StrdSz(%ld) RxCqeCmprss(%d)\n", + params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ, + params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ ? + BIT(mlx5e_mpwqe_get_log_rq_size(params, NULL)) : + BIT(params->log_rq_mtu_frames), + BIT(mlx5e_mpwqe_get_log_stride_size(mdev, params, NULL)), + MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)); +} + +void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params) +{ + params->rq_wq_type = mlx5e_striding_rq_possible(mdev, params) && + MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_STRIDING_RQ) ? + MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ : + MLX5_WQ_TYPE_CYCLIC; +} + +void mlx5e_build_rq_params(struct mlx5_core_dev *mdev, + struct mlx5e_params *params) +{ + /* Prefer Striding RQ, unless any of the following holds: + * - Striding RQ configuration is not possible/supported. + * - Slow PCI heuristic. + * - Legacy RQ would use linear SKB while Striding RQ would use non-linear. + * + * No XSK params: checking the availability of striding RQ in general. + */ + if (!slow_pci_heuristic(mdev) && + mlx5e_striding_rq_possible(mdev, params) && + (mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL) || + !mlx5e_rx_is_linear_skb(params, NULL))) + MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_STRIDING_RQ, true); + mlx5e_set_rq_type(mdev, params); + mlx5e_init_rq_type_params(mdev, params); +} + +/* Build queue parameters */ + +void mlx5e_build_create_cq_param(struct mlx5e_create_cq_param *ccp, struct mlx5e_channel *c) +{ + *ccp = (struct mlx5e_create_cq_param) { + .napi = &c->napi, + .ch_stats = c->stats, + .node = cpu_to_node(c->cpu), + .ix = c->ix, + }; +} + +#define DEFAULT_FRAG_SIZE (2048) + +static void mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev, + struct mlx5e_params *params, + struct mlx5e_xsk_param *xsk, + struct mlx5e_rq_frags_info *info) +{ + u32 byte_count = MLX5E_SW2HW_MTU(params, params->sw_mtu); + int frag_size_max = DEFAULT_FRAG_SIZE; + u32 buf_size = 0; + int i; + + if (MLX5_IPSEC_DEV(mdev)) + byte_count += MLX5E_METADATA_ETHER_LEN; + + if (mlx5e_rx_is_linear_skb(params, xsk)) { + int frag_stride; + + frag_stride = mlx5e_rx_get_linear_frag_sz(params, xsk); + frag_stride = roundup_pow_of_two(frag_stride); + + info->arr[0].frag_size = byte_count; + info->arr[0].frag_stride = frag_stride; + info->num_frags = 1; + info->wqe_bulk = PAGE_SIZE / frag_stride; + goto out; + } + + if (byte_count > PAGE_SIZE + + (MLX5E_MAX_RX_FRAGS - 1) * frag_size_max) + frag_size_max = PAGE_SIZE; + + i = 0; + while (buf_size < byte_count) { + int frag_size = byte_count - buf_size; + + if (i < MLX5E_MAX_RX_FRAGS - 1) + frag_size = min(frag_size, frag_size_max); + + info->arr[i].frag_size = frag_size; + info->arr[i].frag_stride = roundup_pow_of_two(frag_size); + + buf_size += frag_size; + i++; + } + info->num_frags = i; + /* number of different wqes sharing a page */ + 
info->wqe_bulk = 1 + (info->num_frags % 2); + +out: + info->wqe_bulk = max_t(u8, info->wqe_bulk, 8); + info->log_num_frags = order_base_2(info->num_frags); +} + +static inline u8 mlx5e_get_rqwq_log_stride(u8 wq_type, int ndsegs) +{ + int sz = sizeof(struct mlx5_wqe_data_seg) * ndsegs; + + switch (wq_type) { + case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: + sz += sizeof(struct mlx5e_rx_wqe_ll); + break; + default: /* MLX5_WQ_TYPE_CYCLIC */ + sz += sizeof(struct mlx5e_rx_wqe_cyc); + } + + return order_base_2(sz); +} + +static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv, + struct mlx5e_cq_param *param) +{ + void *cqc = param->cqc; + + MLX5_SET(cqc, cqc, uar_page, priv->mdev->priv.uar->index); + if (MLX5_CAP_GEN(priv->mdev, cqe_128_always) && cache_line_size() >= 128) + MLX5_SET(cqc, cqc, cqe_sz, CQE_STRIDE_128_PAD); +} + +static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv, + struct mlx5e_params *params, + struct mlx5e_xsk_param *xsk, + struct mlx5e_cq_param *param) +{ + struct mlx5_core_dev *mdev = priv->mdev; + bool hw_stridx = false; + void *cqc = param->cqc; + u8 log_cq_size; + + switch (params->rq_wq_type) { + case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: + log_cq_size = mlx5e_mpwqe_get_log_rq_size(params, xsk) + + mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk); + hw_stridx = MLX5_CAP_GEN(mdev, mini_cqe_resp_stride_index); + break; + default: /* MLX5_WQ_TYPE_CYCLIC */ + log_cq_size = params->log_rq_mtu_frames; + } + + MLX5_SET(cqc, cqc, log_cq_size, log_cq_size); + if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) { + MLX5_SET(cqc, cqc, mini_cqe_res_format, hw_stridx ? + MLX5_CQE_FORMAT_CSUM_STRIDX : MLX5_CQE_FORMAT_CSUM); + MLX5_SET(cqc, cqc, cqe_comp_en, 1); + } + + mlx5e_build_common_cq_param(priv, param); + param->cq_period_mode = params->rx_cq_moderation.cq_period_mode; +} + +void mlx5e_build_rq_param(struct mlx5e_priv *priv, + struct mlx5e_params *params, + struct mlx5e_xsk_param *xsk, + u16 q_counter, + struct mlx5e_rq_param *param) +{ + struct mlx5_core_dev *mdev = priv->mdev; + void *rqc = param->rqc; + void *wq = MLX5_ADDR_OF(rqc, rqc, wq); + int ndsegs = 1; + + switch (params->rq_wq_type) { + case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: + MLX5_SET(wq, wq, log_wqe_num_of_strides, + mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk) - + MLX5_MPWQE_LOG_NUM_STRIDES_BASE); + MLX5_SET(wq, wq, log_wqe_stride_size, + mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk) - + MLX5_MPWQE_LOG_STRIDE_SZ_BASE); + MLX5_SET(wq, wq, log_wq_sz, mlx5e_mpwqe_get_log_rq_size(params, xsk)); + break; + default: /* MLX5_WQ_TYPE_CYCLIC */ + MLX5_SET(wq, wq, log_wq_sz, params->log_rq_mtu_frames); + mlx5e_build_rq_frags_info(mdev, params, xsk, ¶m->frags_info); + ndsegs = param->frags_info.num_frags; + } + + MLX5_SET(wq, wq, wq_type, params->rq_wq_type); + MLX5_SET(wq, wq, end_padding_mode, MLX5_WQ_END_PAD_MODE_ALIGN); + MLX5_SET(wq, wq, log_wq_stride, + mlx5e_get_rqwq_log_stride(params->rq_wq_type, ndsegs)); + MLX5_SET(wq, wq, pd, mdev->mlx5e_res.hw_objs.pdn); + MLX5_SET(rqc, rqc, counter_set_id, q_counter); + MLX5_SET(rqc, rqc, vsd, params->vlan_strip_disable); + MLX5_SET(rqc, rqc, scatter_fcs, params->scatter_fcs_en); + + param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(mdev)); + mlx5e_build_rx_cq_param(priv, params, xsk, ¶m->cqp); +} + +void mlx5e_build_drop_rq_param(struct mlx5e_priv *priv, + u16 q_counter, + struct mlx5e_rq_param *param) +{ + struct mlx5_core_dev *mdev = priv->mdev; + void *rqc = param->rqc; + void *wq = MLX5_ADDR_OF(rqc, rqc, wq); + + MLX5_SET(wq, wq, 
wq_type, MLX5_WQ_TYPE_CYCLIC); + MLX5_SET(wq, wq, log_wq_stride, + mlx5e_get_rqwq_log_stride(MLX5_WQ_TYPE_CYCLIC, 1)); + MLX5_SET(rqc, rqc, counter_set_id, q_counter); + + param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(mdev)); +} + +void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv, + struct mlx5e_params *params, + struct mlx5e_cq_param *param) +{ + void *cqc = param->cqc; + + MLX5_SET(cqc, cqc, log_cq_size, params->log_sq_size); + + mlx5e_build_common_cq_param(priv, param); + param->cq_period_mode = params->tx_cq_moderation.cq_period_mode; +} + +void mlx5e_build_sq_param_common(struct mlx5e_priv *priv, + struct mlx5e_sq_param *param) +{ + void *sqc = param->sqc; + void *wq = MLX5_ADDR_OF(sqc, sqc, wq); + + MLX5_SET(wq, wq, log_wq_stride, ilog2(MLX5_SEND_WQE_BB)); + MLX5_SET(wq, wq, pd, priv->mdev->mlx5e_res.hw_objs.pdn); + + param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(priv->mdev)); +} + +void mlx5e_build_sq_param(struct mlx5e_priv *priv, struct mlx5e_params *params, + struct mlx5e_sq_param *param) +{ + void *sqc = param->sqc; + void *wq = MLX5_ADDR_OF(sqc, sqc, wq); + bool allow_swp; + + allow_swp = mlx5_geneve_tx_allowed(priv->mdev) || + !!MLX5_IPSEC_DEV(priv->mdev); + mlx5e_build_sq_param_common(priv, param); + MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); + MLX5_SET(sqc, sqc, allow_swp, allow_swp); + param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE); + param->stop_room = mlx5e_calc_sq_stop_room(priv->mdev, params); + mlx5e_build_tx_cq_param(priv, params, ¶m->cqp); +} + +static void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv, + u8 log_wq_size, + struct mlx5e_cq_param *param) +{ + void *cqc = param->cqc; + + MLX5_SET(cqc, cqc, log_cq_size, log_wq_size); + + mlx5e_build_common_cq_param(priv, param); + + param->cq_period_mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE; +} + +static u8 mlx5e_get_rq_log_wq_sz(void *rqc) +{ + void *wq = MLX5_ADDR_OF(rqc, rqc, wq); + + return MLX5_GET(wq, wq, log_wq_sz); +} + +static u8 mlx5e_build_icosq_log_wq_sz(struct mlx5e_params *params, + struct mlx5e_rq_param *rqp) +{ + switch (params->rq_wq_type) { + case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: + return max_t(u8, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE, + order_base_2(MLX5E_UMR_WQEBBS) + + mlx5e_get_rq_log_wq_sz(rqp->rqc)); + default: /* MLX5_WQ_TYPE_CYCLIC */ + return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE; + } +} + +static u8 mlx5e_build_async_icosq_log_wq_sz(struct net_device *netdev) +{ + if (netdev->hw_features & NETIF_F_HW_TLS_RX) + return MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE; + + return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE; +} + +static void mlx5e_build_icosq_param(struct mlx5e_priv *priv, + u8 log_wq_size, + struct mlx5e_sq_param *param) +{ + void *sqc = param->sqc; + void *wq = MLX5_ADDR_OF(sqc, sqc, wq); + + mlx5e_build_sq_param_common(priv, param); + + MLX5_SET(wq, wq, log_wq_sz, log_wq_size); + MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq)); + mlx5e_build_ico_cq_param(priv, log_wq_size, ¶m->cqp); +} + +static void mlx5e_build_async_icosq_param(struct mlx5e_priv *priv, + u8 log_wq_size, + struct mlx5e_sq_param *param) +{ + void *sqc = param->sqc; + void *wq = MLX5_ADDR_OF(sqc, sqc, wq); + + mlx5e_build_sq_param_common(priv, param); + param->stop_room = mlx5e_stop_room_for_wqe(1); /* for XSK NOP */ + MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq)); + MLX5_SET(wq, wq, log_wq_sz, log_wq_size); + mlx5e_build_ico_cq_param(priv, log_wq_size, ¶m->cqp); +} + +void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv, + struct mlx5e_params *params, 
+ struct mlx5e_sq_param *param) +{ + void *sqc = param->sqc; + void *wq = MLX5_ADDR_OF(sqc, sqc, wq); + + mlx5e_build_sq_param_common(priv, param); + MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); + param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_XDP_TX_MPWQE); + mlx5e_build_tx_cq_param(priv, params, ¶m->cqp); +} + +void mlx5e_build_channel_param(struct mlx5e_priv *priv, + struct mlx5e_params *params, + u16 q_counter, + struct mlx5e_channel_param *cparam) +{ + u8 icosq_log_wq_sz, async_icosq_log_wq_sz; + + mlx5e_build_rq_param(priv, params, NULL, q_counter, &cparam->rq); + + icosq_log_wq_sz = mlx5e_build_icosq_log_wq_sz(params, &cparam->rq); + async_icosq_log_wq_sz = mlx5e_build_async_icosq_log_wq_sz(priv->netdev); + + mlx5e_build_sq_param(priv, params, &cparam->txq_sq); + mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq); + mlx5e_build_icosq_param(priv, icosq_log_wq_sz, &cparam->icosq); + mlx5e_build_async_icosq_param(priv, async_icosq_log_wq_sz, &cparam->async_icosq); +} diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h index 97a5bf50a53b..a43776e8a86b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h @@ -84,12 +84,21 @@ static inline bool mlx5e_qid_validate(const struct mlx5e_profile *profile, /* Parameter calculations */ +void mlx5e_reset_tx_moderation(struct mlx5e_params *params, u8 cq_period_mode); +void mlx5e_reset_rx_moderation(struct mlx5e_params *params, u8 cq_period_mode); +void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode); +void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode); + +bool slow_pci_heuristic(struct mlx5_core_dev *mdev); +bool mlx5e_striding_rq_possible(struct mlx5_core_dev *mdev, struct mlx5e_params *params); +void mlx5e_build_rq_params(struct mlx5_core_dev *mdev, struct mlx5e_params *params); +void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params); +void mlx5e_init_rq_type_params(struct mlx5_core_dev *mdev, struct mlx5e_params *params); + u16 mlx5e_get_linear_rq_headroom(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk); u32 mlx5e_rx_get_min_frag_sz(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk); -u32 mlx5e_rx_get_linear_frag_sz(struct mlx5e_params *params, - struct mlx5e_xsk_param *xsk); u8 mlx5e_mpwqe_log_pkts_per_wqe(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk); bool mlx5e_rx_is_linear_skb(struct mlx5e_params *params, @@ -117,26 +126,23 @@ void mlx5e_build_rq_param(struct mlx5e_priv *priv, struct mlx5e_xsk_param *xsk, u16 q_counter, struct mlx5e_rq_param *param); +void mlx5e_build_drop_rq_param(struct mlx5e_priv *priv, + u16 q_counter, + struct mlx5e_rq_param *param); void mlx5e_build_sq_param_common(struct mlx5e_priv *priv, struct mlx5e_sq_param *param); void mlx5e_build_sq_param(struct mlx5e_priv *priv, struct mlx5e_params *params, struct mlx5e_sq_param *param); -void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv, - struct mlx5e_params *params, - struct mlx5e_xsk_param *xsk, - struct mlx5e_cq_param *param); void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv, struct mlx5e_params *params, struct mlx5e_cq_param *param); -void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv, - u8 log_wq_size, - struct mlx5e_cq_param *param); -void mlx5e_build_icosq_param(struct mlx5e_priv *priv, - u8 log_wq_size, - struct mlx5e_sq_param *param); void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv, struct 
mlx5e_params *params, struct mlx5e_sq_param *param); +void mlx5e_build_channel_param(struct mlx5e_priv *priv, + struct mlx5e_params *params, + u16 q_counter, + struct mlx5e_channel_param *cparam); u16 mlx5e_calc_sq_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params); int mlx5e_validate_params(struct mlx5e_priv *priv, struct mlx5e_params *params); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 475d1ed6fe9c..de40ed634011 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -87,51 +87,6 @@ bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev) return true; } -void mlx5e_init_rq_type_params(struct mlx5_core_dev *mdev, - struct mlx5e_params *params) -{ - params->log_rq_mtu_frames = is_kdump_kernel() ? - MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE : - MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE; - - mlx5_core_info(mdev, "MLX5E: StrdRq(%d) RqSz(%ld) StrdSz(%ld) RxCqeCmprss(%d)\n", - params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ, - params->rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ ? - BIT(mlx5e_mpwqe_get_log_rq_size(params, NULL)) : - BIT(params->log_rq_mtu_frames), - BIT(mlx5e_mpwqe_get_log_stride_size(mdev, params, NULL)), - MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)); -} - -bool mlx5e_striding_rq_possible(struct mlx5_core_dev *mdev, - struct mlx5e_params *params) -{ - if (!mlx5e_check_fragmented_striding_rq_cap(mdev)) - return false; - - if (mlx5_fpga_is_ipsec_device(mdev)) - return false; - - if (params->xdp_prog) { - /* XSK params are not considered here. If striding RQ is in use, - * and an XSK is being opened, mlx5e_rx_mpwqe_is_linear_skb will - * be called with the known XSK params. - */ - if (!mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL)) - return false; - } - - return true; -} - -void mlx5e_set_rq_type(struct mlx5_core_dev *mdev, struct mlx5e_params *params) -{ - params->rq_wq_type = mlx5e_striding_rq_possible(mdev, params) && - MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_STRIDING_RQ) ? 
- MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ : - MLX5_WQ_TYPE_CYCLIC; -} - void mlx5e_update_carrier(struct mlx5e_priv *priv) { struct mlx5_core_dev *mdev = priv->mdev; @@ -1860,16 +1815,6 @@ static int mlx5e_set_tx_maxrate(struct net_device *dev, int index, u32 rate) return err; } -void mlx5e_build_create_cq_param(struct mlx5e_create_cq_param *ccp, struct mlx5e_channel *c) -{ - *ccp = (struct mlx5e_create_cq_param) { - .napi = &c->napi, - .ch_stats = c->stats, - .node = cpu_to_node(c->cpu), - .ix = c->ix, - }; -} - static int mlx5e_open_queues(struct mlx5e_channel *c, struct mlx5e_params *params, struct mlx5e_channel_param *cparam) @@ -2111,299 +2056,6 @@ static void mlx5e_close_channel(struct mlx5e_channel *c) kvfree(c); } -#define DEFAULT_FRAG_SIZE (2048) - -static void mlx5e_build_rq_frags_info(struct mlx5_core_dev *mdev, - struct mlx5e_params *params, - struct mlx5e_xsk_param *xsk, - struct mlx5e_rq_frags_info *info) -{ - u32 byte_count = MLX5E_SW2HW_MTU(params, params->sw_mtu); - int frag_size_max = DEFAULT_FRAG_SIZE; - u32 buf_size = 0; - int i; - - if (mlx5_fpga_is_ipsec_device(mdev)) - byte_count += MLX5E_METADATA_ETHER_LEN; - - if (mlx5e_rx_is_linear_skb(params, xsk)) { - int frag_stride; - - frag_stride = mlx5e_rx_get_linear_frag_sz(params, xsk); - frag_stride = roundup_pow_of_two(frag_stride); - - info->arr[0].frag_size = byte_count; - info->arr[0].frag_stride = frag_stride; - info->num_frags = 1; - info->wqe_bulk = PAGE_SIZE / frag_stride; - goto out; - } - - if (byte_count > PAGE_SIZE + - (MLX5E_MAX_RX_FRAGS - 1) * frag_size_max) - frag_size_max = PAGE_SIZE; - - i = 0; - while (buf_size < byte_count) { - int frag_size = byte_count - buf_size; - - if (i < MLX5E_MAX_RX_FRAGS - 1) - frag_size = min(frag_size, frag_size_max); - - info->arr[i].frag_size = frag_size; - info->arr[i].frag_stride = roundup_pow_of_two(frag_size); - - buf_size += frag_size; - i++; - } - info->num_frags = i; - /* number of different wqes sharing a page */ - info->wqe_bulk = 1 + (info->num_frags % 2); - -out: - info->wqe_bulk = max_t(u8, info->wqe_bulk, 8); - info->log_num_frags = order_base_2(info->num_frags); -} - -static inline u8 mlx5e_get_rqwq_log_stride(u8 wq_type, int ndsegs) -{ - int sz = sizeof(struct mlx5_wqe_data_seg) * ndsegs; - - switch (wq_type) { - case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: - sz += sizeof(struct mlx5e_rx_wqe_ll); - break; - default: /* MLX5_WQ_TYPE_CYCLIC */ - sz += sizeof(struct mlx5e_rx_wqe_cyc); - } - - return order_base_2(sz); -} - -static u8 mlx5e_get_rq_log_wq_sz(void *rqc) -{ - void *wq = MLX5_ADDR_OF(rqc, rqc, wq); - - return MLX5_GET(wq, wq, log_wq_sz); -} - -void mlx5e_build_rq_param(struct mlx5e_priv *priv, - struct mlx5e_params *params, - struct mlx5e_xsk_param *xsk, - u16 q_counter, - struct mlx5e_rq_param *param) -{ - struct mlx5_core_dev *mdev = priv->mdev; - void *rqc = param->rqc; - void *wq = MLX5_ADDR_OF(rqc, rqc, wq); - int ndsegs = 1; - - switch (params->rq_wq_type) { - case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: - MLX5_SET(wq, wq, log_wqe_num_of_strides, - mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk) - - MLX5_MPWQE_LOG_NUM_STRIDES_BASE); - MLX5_SET(wq, wq, log_wqe_stride_size, - mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk) - - MLX5_MPWQE_LOG_STRIDE_SZ_BASE); - MLX5_SET(wq, wq, log_wq_sz, mlx5e_mpwqe_get_log_rq_size(params, xsk)); - break; - default: /* MLX5_WQ_TYPE_CYCLIC */ - MLX5_SET(wq, wq, log_wq_sz, params->log_rq_mtu_frames); - mlx5e_build_rq_frags_info(mdev, params, xsk, ¶m->frags_info); - ndsegs = param->frags_info.num_frags; - } - - 
MLX5_SET(wq, wq, wq_type, params->rq_wq_type); - MLX5_SET(wq, wq, end_padding_mode, MLX5_WQ_END_PAD_MODE_ALIGN); - MLX5_SET(wq, wq, log_wq_stride, - mlx5e_get_rqwq_log_stride(params->rq_wq_type, ndsegs)); - MLX5_SET(wq, wq, pd, mdev->mlx5e_res.hw_objs.pdn); - MLX5_SET(rqc, rqc, counter_set_id, q_counter); - MLX5_SET(rqc, rqc, vsd, params->vlan_strip_disable); - MLX5_SET(rqc, rqc, scatter_fcs, params->scatter_fcs_en); - - param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(mdev)); - mlx5e_build_rx_cq_param(priv, params, xsk, ¶m->cqp); -} - -static void mlx5e_build_drop_rq_param(struct mlx5e_priv *priv, - u16 q_counter, - struct mlx5e_rq_param *param) -{ - struct mlx5_core_dev *mdev = priv->mdev; - void *rqc = param->rqc; - void *wq = MLX5_ADDR_OF(rqc, rqc, wq); - - MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_CYCLIC); - MLX5_SET(wq, wq, log_wq_stride, - mlx5e_get_rqwq_log_stride(MLX5_WQ_TYPE_CYCLIC, 1)); - MLX5_SET(rqc, rqc, counter_set_id, q_counter); - - param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(mdev)); -} - -void mlx5e_build_sq_param_common(struct mlx5e_priv *priv, - struct mlx5e_sq_param *param) -{ - void *sqc = param->sqc; - void *wq = MLX5_ADDR_OF(sqc, sqc, wq); - - MLX5_SET(wq, wq, log_wq_stride, ilog2(MLX5_SEND_WQE_BB)); - MLX5_SET(wq, wq, pd, priv->mdev->mlx5e_res.hw_objs.pdn); - - param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(priv->mdev)); -} - -void mlx5e_build_sq_param(struct mlx5e_priv *priv, struct mlx5e_params *params, - struct mlx5e_sq_param *param) -{ - void *sqc = param->sqc; - void *wq = MLX5_ADDR_OF(sqc, sqc, wq); - bool allow_swp; - - allow_swp = mlx5_geneve_tx_allowed(priv->mdev) || - !!MLX5_IPSEC_DEV(priv->mdev); - mlx5e_build_sq_param_common(priv, param); - MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); - MLX5_SET(sqc, sqc, allow_swp, allow_swp); - param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE); - param->stop_room = mlx5e_calc_sq_stop_room(priv->mdev, params); - mlx5e_build_tx_cq_param(priv, params, ¶m->cqp); -} - -static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv, - struct mlx5e_cq_param *param) -{ - void *cqc = param->cqc; - - MLX5_SET(cqc, cqc, uar_page, priv->mdev->priv.uar->index); - if (MLX5_CAP_GEN(priv->mdev, cqe_128_always) && cache_line_size() >= 128) - MLX5_SET(cqc, cqc, cqe_sz, CQE_STRIDE_128_PAD); -} - -void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv, - struct mlx5e_params *params, - struct mlx5e_xsk_param *xsk, - struct mlx5e_cq_param *param) -{ - struct mlx5_core_dev *mdev = priv->mdev; - bool hw_stridx = false; - void *cqc = param->cqc; - u8 log_cq_size; - - switch (params->rq_wq_type) { - case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: - log_cq_size = mlx5e_mpwqe_get_log_rq_size(params, xsk) + - mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk); - hw_stridx = MLX5_CAP_GEN(mdev, mini_cqe_resp_stride_index); - break; - default: /* MLX5_WQ_TYPE_CYCLIC */ - log_cq_size = params->log_rq_mtu_frames; - } - - MLX5_SET(cqc, cqc, log_cq_size, log_cq_size); - if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) { - MLX5_SET(cqc, cqc, mini_cqe_res_format, hw_stridx ? 
- MLX5_CQE_FORMAT_CSUM_STRIDX : MLX5_CQE_FORMAT_CSUM); - MLX5_SET(cqc, cqc, cqe_comp_en, 1); - } - - mlx5e_build_common_cq_param(priv, param); - param->cq_period_mode = params->rx_cq_moderation.cq_period_mode; -} - -void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv, - struct mlx5e_params *params, - struct mlx5e_cq_param *param) -{ - void *cqc = param->cqc; - - MLX5_SET(cqc, cqc, log_cq_size, params->log_sq_size); - - mlx5e_build_common_cq_param(priv, param); - param->cq_period_mode = params->tx_cq_moderation.cq_period_mode; -} - -void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv, - u8 log_wq_size, - struct mlx5e_cq_param *param) -{ - void *cqc = param->cqc; - - MLX5_SET(cqc, cqc, log_cq_size, log_wq_size); - - mlx5e_build_common_cq_param(priv, param); - - param->cq_period_mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE; -} - -void mlx5e_build_icosq_param(struct mlx5e_priv *priv, - u8 log_wq_size, - struct mlx5e_sq_param *param) -{ - void *sqc = param->sqc; - void *wq = MLX5_ADDR_OF(sqc, sqc, wq); - - mlx5e_build_sq_param_common(priv, param); - - MLX5_SET(wq, wq, log_wq_sz, log_wq_size); - MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq)); - mlx5e_build_ico_cq_param(priv, log_wq_size, ¶m->cqp); -} - -void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv, - struct mlx5e_params *params, - struct mlx5e_sq_param *param) -{ - void *sqc = param->sqc; - void *wq = MLX5_ADDR_OF(sqc, sqc, wq); - - mlx5e_build_sq_param_common(priv, param); - MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); - param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_XDP_TX_MPWQE); - mlx5e_build_tx_cq_param(priv, params, ¶m->cqp); -} - -static u8 mlx5e_build_icosq_log_wq_sz(struct mlx5e_params *params, - struct mlx5e_rq_param *rqp) -{ - switch (params->rq_wq_type) { - case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: - return max_t(u8, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE, - order_base_2(MLX5E_UMR_WQEBBS) + - mlx5e_get_rq_log_wq_sz(rqp->rqc)); - default: /* MLX5_WQ_TYPE_CYCLIC */ - return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE; - } -} - -static u8 mlx5e_build_async_icosq_log_wq_sz(struct net_device *netdev) -{ - if (netdev->hw_features & NETIF_F_HW_TLS_RX) - return MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE; - - return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE; -} - -static void mlx5e_build_channel_param(struct mlx5e_priv *priv, - struct mlx5e_params *params, - u16 q_counter, - struct mlx5e_channel_param *cparam) -{ - u8 icosq_log_wq_sz, async_icosq_log_wq_sz; - - mlx5e_build_rq_param(priv, params, NULL, q_counter, &cparam->rq); - - icosq_log_wq_sz = mlx5e_build_icosq_log_wq_sz(params, &cparam->rq); - async_icosq_log_wq_sz = mlx5e_build_async_icosq_log_wq_sz(priv->netdev); - - mlx5e_build_sq_param(priv, params, &cparam->txq_sq); - mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq); - mlx5e_build_icosq_param(priv, icosq_log_wq_sz, &cparam->icosq); - mlx5e_build_icosq_param(priv, async_icosq_log_wq_sz, &cparam->async_icosq); -} - int mlx5e_open_channels(struct mlx5e_priv *priv, struct mlx5e_channels *chs) { @@ -4867,93 +4519,6 @@ void mlx5e_build_default_indir_rqt(u32 *indirection_rqt, int len, indirection_rqt[i] = i % num_channels; } -static bool slow_pci_heuristic(struct mlx5_core_dev *mdev) -{ - u32 link_speed = 0; - u32 pci_bw = 0; - - mlx5e_port_max_linkspeed(mdev, &link_speed); - pci_bw = pcie_bandwidth_available(mdev->pdev, NULL, NULL, NULL); - mlx5_core_dbg_once(mdev, "Max link speed = %d, PCI BW = %d\n", - link_speed, pci_bw); - -#define MLX5E_SLOW_PCI_RATIO (2) - - return link_speed && pci_bw && - link_speed > MLX5E_SLOW_PCI_RATIO * 
pci_bw; -} - -static struct dim_cq_moder mlx5e_get_def_tx_moderation(u8 cq_period_mode) -{ - struct dim_cq_moder moder; - - moder.cq_period_mode = cq_period_mode; - moder.pkts = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_PKTS; - moder.usec = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_USEC; - if (cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE) - moder.usec = MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_USEC_FROM_CQE; - - return moder; -} - -static struct dim_cq_moder mlx5e_get_def_rx_moderation(u8 cq_period_mode) -{ - struct dim_cq_moder moder; - - moder.cq_period_mode = cq_period_mode; - moder.pkts = MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_PKTS; - moder.usec = MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC; - if (cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE) - moder.usec = MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC_FROM_CQE; - - return moder; -} - -static u8 mlx5_to_net_dim_cq_period_mode(u8 cq_period_mode) -{ - return cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE ? - DIM_CQ_PERIOD_MODE_START_FROM_CQE : - DIM_CQ_PERIOD_MODE_START_FROM_EQE; -} - -void mlx5e_reset_tx_moderation(struct mlx5e_params *params, u8 cq_period_mode) -{ - if (params->tx_dim_enabled) { - u8 dim_period_mode = mlx5_to_net_dim_cq_period_mode(cq_period_mode); - - params->tx_cq_moderation = net_dim_get_def_tx_moderation(dim_period_mode); - } else { - params->tx_cq_moderation = mlx5e_get_def_tx_moderation(cq_period_mode); - } -} - -void mlx5e_reset_rx_moderation(struct mlx5e_params *params, u8 cq_period_mode) -{ - if (params->rx_dim_enabled) { - u8 dim_period_mode = mlx5_to_net_dim_cq_period_mode(cq_period_mode); - - params->rx_cq_moderation = net_dim_get_def_rx_moderation(dim_period_mode); - } else { - params->rx_cq_moderation = mlx5e_get_def_rx_moderation(cq_period_mode); - } -} - -void mlx5e_set_tx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) -{ - mlx5e_reset_tx_moderation(params, cq_period_mode); - MLX5E_SET_PFLAG(params, MLX5E_PFLAG_TX_CQE_BASED_MODER, - params->tx_cq_moderation.cq_period_mode == - MLX5_CQ_PERIOD_MODE_START_FROM_CQE); -} - -void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params, u8 cq_period_mode) -{ - mlx5e_reset_rx_moderation(params, cq_period_mode); - MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_CQE_BASED_MODER, - params->rx_cq_moderation.cq_period_mode == - MLX5_CQ_PERIOD_MODE_START_FROM_CQE); -} - static u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout) { int i; @@ -4966,25 +4531,6 @@ static u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeo return MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]); } -void mlx5e_build_rq_params(struct mlx5_core_dev *mdev, - struct mlx5e_params *params) -{ - /* Prefer Striding RQ, unless any of the following holds: - * - Striding RQ configuration is not possible/supported. - * - Slow PCI heuristic. - * - Legacy RQ would use linear SKB while Striding RQ would use non-linear. - * - * No XSK params: checking the availability of striding RQ in general. 
- */ - if (!slow_pci_heuristic(mdev) && - mlx5e_striding_rq_possible(mdev, params) && - (mlx5e_rx_mpwqe_is_linear_skb(mdev, params, NULL) || - !mlx5e_rx_is_linear_skb(params, NULL))) - MLX5E_SET_PFLAG(params, MLX5E_PFLAG_RX_STRIDING_RQ, true); - mlx5e_set_rq_type(mdev, params); - mlx5e_init_rq_type_params(mdev, params); -} - void mlx5e_build_rss_params(struct mlx5e_rss_params *rss_params, u16 num_channels) { diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c index 4cc902e0d71b..e5123e9182c0 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c @@ -40,6 +40,7 @@ #include "eswitch.h" #include "en.h" #include "en_rep.h" +#include "en/params.h" #include "en/txrx.h" #include "en_tc.h" #include "en/rep/tc.h" diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c index 0fc055cdf221..7eeeb02a78c9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c @@ -33,6 +33,7 @@ #include #include #include "en.h" +#include "en/params.h" #include "ipoib.h" #define IB_DEFAULT_Q_KEY 0xb1b From patchwork Thu Mar 25 05:04:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 409360 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08263C433E4 for ; Thu, 25 Mar 2021 05:05:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D0E7E61A27 for ; Thu, 25 Mar 2021 05:05:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229882AbhCYFFG (ORCPT ); Thu, 25 Mar 2021 01:05:06 -0400 Received: from mail.kernel.org ([198.145.29.99]:47926 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229653AbhCYFEq (ORCPT ); Thu, 25 Mar 2021 01:04:46 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id 511FD619BA; Thu, 25 Mar 2021 05:04:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1616648686; bh=fmuF4DAHR3AWzMvpDT7WWXF/CEpd41rhDmHFejeyGc4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GEotLVGq0rFsf7tf6uhT11pfeYTCkAi4OGaNn+7cOtYMZHWYYGxnqg26eZKVwfkBp +V+b2jOVwtabg3jkNPCZoSIYKwQRLV9neGvMaml3AUjKMswbgcdpJfwklj/XTlhtU5 ti1mZcDbIEk7WOS6KSblfhA4ZfMdbFIqC1o5b9VEY9K3zISWFvhf0necPH6yvC8Xdc 1RVAIVqkpfgezUhlaSrygJ7Cbh/IEL8/BJKaUtEXSakmUpfeanDeQgeXyjLVlxOw5h x3ksIjCtLjLfNKyKVWTURGl6ieiARwtb2BRrg/0b/CR/qFwM2K3bLdSCWguVrO4m16 xNemc4CgHCTrQ== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Tariq Toukan , Aya Levin , Maxim Mikityanskiy , Saeed Mahameed Subject: [net-next 06/15] net/mlx5e: Restrict usage of mlx5e_priv in params logic functions Date: Wed, 24 Mar 2021 22:04:29 -0700 Message-Id: <20210325050438.261511-7-saeed@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210325050438.261511-1-saeed@kernel.org> References: <20210325050438.261511-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Tariq Toukan Do not use generic struct mlx5e_priv as a parameter to param functions, as it is too generic. All calculations of the channel's param should be mainly based on struct mlx5_core_dev and struct mlx5e_params. Additional info can be explicitly passed. Signed-off-by: Tariq Toukan Signed-off-by: Aya Levin Reviewed-by: Maxim Mikityanskiy Signed-off-by: Saeed Mahameed --- .../ethernet/mellanox/mlx5/core/en/params.c | 94 +++++++++---------- .../ethernet/mellanox/mlx5/core/en/params.h | 17 ++-- .../net/ethernet/mellanox/mlx5/core/en/ptp.c | 8 +- .../net/ethernet/mellanox/mlx5/core/en/qos.c | 4 +- .../net/ethernet/mellanox/mlx5/core/en/trap.c | 12 ++- .../mellanox/mlx5/core/en/xsk/setup.c | 9 +- .../ethernet/mellanox/mlx5/core/en_ethtool.c | 2 +- .../net/ethernet/mellanox/mlx5/core/en_main.c | 6 +- 8 files changed, 77 insertions(+), 75 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c index d2e0e07407a8..c350e72215d9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c @@ -174,15 +174,15 @@ u16 mlx5e_calc_sq_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *par return stop_room; } -int mlx5e_validate_params(struct mlx5e_priv *priv, struct mlx5e_params *params) +int mlx5e_validate_params(struct mlx5_core_dev *mdev, struct mlx5e_params *params) { size_t sq_size = 1 << params->log_sq_size; u16 stop_room; - stop_room = mlx5e_calc_sq_stop_room(priv->mdev, params); + stop_room = mlx5e_calc_sq_stop_room(mdev, params); if (stop_room >= sq_size) { - netdev_err(priv->netdev, "Stop room %u is bigger than the SQ size %zu\n", - stop_room, sq_size); + mlx5_core_err(mdev, "Stop room %u is bigger than the SQ size %zu\n", + stop_room, sq_size); return -EINVAL; } @@ -421,22 +421,21 @@ static inline u8 mlx5e_get_rqwq_log_stride(u8 wq_type, int ndsegs) return order_base_2(sz); } -static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv, +static void mlx5e_build_common_cq_param(struct mlx5_core_dev *mdev, struct mlx5e_cq_param *param) { void *cqc = param->cqc; - MLX5_SET(cqc, cqc, uar_page, priv->mdev->priv.uar->index); - if (MLX5_CAP_GEN(priv->mdev, cqe_128_always) && cache_line_size() >= 128) + MLX5_SET(cqc, cqc, uar_page, mdev->priv.uar->index); + if (MLX5_CAP_GEN(mdev, cqe_128_always) && cache_line_size() >= 128) MLX5_SET(cqc, cqc, cqe_sz, CQE_STRIDE_128_PAD); } -static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv, +static void mlx5e_build_rx_cq_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_xsk_param *xsk, struct mlx5e_cq_param *param) { - struct mlx5_core_dev *mdev = priv->mdev; bool hw_stridx = false; void *cqc = param->cqc; u8 log_cq_size; @@ -458,17 +457,16 @@ static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv, MLX5_SET(cqc, cqc, cqe_comp_en, 1); } - mlx5e_build_common_cq_param(priv, param); + mlx5e_build_common_cq_param(mdev, param); param->cq_period_mode = 
params->rx_cq_moderation.cq_period_mode; } -void mlx5e_build_rq_param(struct mlx5e_priv *priv, +void mlx5e_build_rq_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_xsk_param *xsk, u16 q_counter, struct mlx5e_rq_param *param) { - struct mlx5_core_dev *mdev = priv->mdev; void *rqc = param->rqc; void *wq = MLX5_ADDR_OF(rqc, rqc, wq); int ndsegs = 1; @@ -499,14 +497,13 @@ void mlx5e_build_rq_param(struct mlx5e_priv *priv, MLX5_SET(rqc, rqc, scatter_fcs, params->scatter_fcs_en); param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(mdev)); - mlx5e_build_rx_cq_param(priv, params, xsk, ¶m->cqp); + mlx5e_build_rx_cq_param(mdev, params, xsk, ¶m->cqp); } -void mlx5e_build_drop_rq_param(struct mlx5e_priv *priv, +void mlx5e_build_drop_rq_param(struct mlx5_core_dev *mdev, u16 q_counter, struct mlx5e_rq_param *param) { - struct mlx5_core_dev *mdev = priv->mdev; void *rqc = param->rqc; void *wq = MLX5_ADDR_OF(rqc, rqc, wq); @@ -518,7 +515,7 @@ void mlx5e_build_drop_rq_param(struct mlx5e_priv *priv, param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(mdev)); } -void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv, +void mlx5e_build_tx_cq_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_cq_param *param) { @@ -526,40 +523,41 @@ void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv, MLX5_SET(cqc, cqc, log_cq_size, params->log_sq_size); - mlx5e_build_common_cq_param(priv, param); + mlx5e_build_common_cq_param(mdev, param); param->cq_period_mode = params->tx_cq_moderation.cq_period_mode; } -void mlx5e_build_sq_param_common(struct mlx5e_priv *priv, +void mlx5e_build_sq_param_common(struct mlx5_core_dev *mdev, struct mlx5e_sq_param *param) { void *sqc = param->sqc; void *wq = MLX5_ADDR_OF(sqc, sqc, wq); MLX5_SET(wq, wq, log_wq_stride, ilog2(MLX5_SEND_WQE_BB)); - MLX5_SET(wq, wq, pd, priv->mdev->mlx5e_res.hw_objs.pdn); + MLX5_SET(wq, wq, pd, mdev->mlx5e_res.hw_objs.pdn); - param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(priv->mdev)); + param->wq.buf_numa_node = dev_to_node(mlx5_core_dma_dev(mdev)); } -void mlx5e_build_sq_param(struct mlx5e_priv *priv, struct mlx5e_params *params, +void mlx5e_build_sq_param(struct mlx5_core_dev *mdev, + struct mlx5e_params *params, struct mlx5e_sq_param *param) { void *sqc = param->sqc; void *wq = MLX5_ADDR_OF(sqc, sqc, wq); bool allow_swp; - allow_swp = mlx5_geneve_tx_allowed(priv->mdev) || - !!MLX5_IPSEC_DEV(priv->mdev); - mlx5e_build_sq_param_common(priv, param); + allow_swp = mlx5_geneve_tx_allowed(mdev) || + !!MLX5_IPSEC_DEV(mdev); + mlx5e_build_sq_param_common(mdev, param); MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); MLX5_SET(sqc, sqc, allow_swp, allow_swp); param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_SKB_TX_MPWQE); - param->stop_room = mlx5e_calc_sq_stop_room(priv->mdev, params); - mlx5e_build_tx_cq_param(priv, params, ¶m->cqp); + param->stop_room = mlx5e_calc_sq_stop_room(mdev, params); + mlx5e_build_tx_cq_param(mdev, params, ¶m->cqp); } -static void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv, +static void mlx5e_build_ico_cq_param(struct mlx5_core_dev *mdev, u8 log_wq_size, struct mlx5e_cq_param *param) { @@ -567,7 +565,7 @@ static void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv, MLX5_SET(cqc, cqc, log_cq_size, log_wq_size); - mlx5e_build_common_cq_param(priv, param); + mlx5e_build_common_cq_param(mdev, param); param->cq_period_mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE; } @@ -592,69 +590,69 @@ static u8 mlx5e_build_icosq_log_wq_sz(struct mlx5e_params *params, } } -static u8 
mlx5e_build_async_icosq_log_wq_sz(struct net_device *netdev) +static u8 mlx5e_build_async_icosq_log_wq_sz(struct mlx5_core_dev *mdev) { - if (netdev->hw_features & NETIF_F_HW_TLS_RX) + if (mlx5_accel_is_ktls_rx(mdev)) return MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE; return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE; } -static void mlx5e_build_icosq_param(struct mlx5e_priv *priv, +static void mlx5e_build_icosq_param(struct mlx5_core_dev *mdev, u8 log_wq_size, struct mlx5e_sq_param *param) { void *sqc = param->sqc; void *wq = MLX5_ADDR_OF(sqc, sqc, wq); - mlx5e_build_sq_param_common(priv, param); + mlx5e_build_sq_param_common(mdev, param); MLX5_SET(wq, wq, log_wq_sz, log_wq_size); - MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq)); - mlx5e_build_ico_cq_param(priv, log_wq_size, ¶m->cqp); + MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(mdev, reg_umr_sq)); + mlx5e_build_ico_cq_param(mdev, log_wq_size, ¶m->cqp); } -static void mlx5e_build_async_icosq_param(struct mlx5e_priv *priv, +static void mlx5e_build_async_icosq_param(struct mlx5_core_dev *mdev, u8 log_wq_size, struct mlx5e_sq_param *param) { void *sqc = param->sqc; void *wq = MLX5_ADDR_OF(sqc, sqc, wq); - mlx5e_build_sq_param_common(priv, param); + mlx5e_build_sq_param_common(mdev, param); param->stop_room = mlx5e_stop_room_for_wqe(1); /* for XSK NOP */ - MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq)); + MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(mdev, reg_umr_sq)); MLX5_SET(wq, wq, log_wq_sz, log_wq_size); - mlx5e_build_ico_cq_param(priv, log_wq_size, ¶m->cqp); + mlx5e_build_ico_cq_param(mdev, log_wq_size, ¶m->cqp); } -void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv, +void mlx5e_build_xdpsq_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_sq_param *param) { void *sqc = param->sqc; void *wq = MLX5_ADDR_OF(sqc, sqc, wq); - mlx5e_build_sq_param_common(priv, param); + mlx5e_build_sq_param_common(mdev, param); MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); param->is_mpw = MLX5E_GET_PFLAG(params, MLX5E_PFLAG_XDP_TX_MPWQE); - mlx5e_build_tx_cq_param(priv, params, ¶m->cqp); + mlx5e_build_tx_cq_param(mdev, params, ¶m->cqp); } -void mlx5e_build_channel_param(struct mlx5e_priv *priv, +void mlx5e_build_channel_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, u16 q_counter, struct mlx5e_channel_param *cparam) { u8 icosq_log_wq_sz, async_icosq_log_wq_sz; - mlx5e_build_rq_param(priv, params, NULL, q_counter, &cparam->rq); + mlx5e_build_rq_param(mdev, params, NULL, q_counter, &cparam->rq); icosq_log_wq_sz = mlx5e_build_icosq_log_wq_sz(params, &cparam->rq); - async_icosq_log_wq_sz = mlx5e_build_async_icosq_log_wq_sz(priv->netdev); + async_icosq_log_wq_sz = mlx5e_build_async_icosq_log_wq_sz(mdev); - mlx5e_build_sq_param(priv, params, &cparam->txq_sq); - mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq); - mlx5e_build_icosq_param(priv, icosq_log_wq_sz, &cparam->icosq); - mlx5e_build_async_icosq_param(priv, async_icosq_log_wq_sz, &cparam->async_icosq); + mlx5e_build_sq_param(mdev, params, &cparam->txq_sq); + mlx5e_build_xdpsq_param(mdev, params, &cparam->xdp_sq); + mlx5e_build_icosq_param(mdev, icosq_log_wq_sz, &cparam->icosq); + mlx5e_build_async_icosq_param(mdev, async_icosq_log_wq_sz, &cparam->async_icosq); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h index a43776e8a86b..602e41a2bddd 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h @@ -121,30 
+121,31 @@ u16 mlx5e_get_rq_headroom(struct mlx5_core_dev *mdev, /* Build queue parameters */ void mlx5e_build_create_cq_param(struct mlx5e_create_cq_param *ccp, struct mlx5e_channel *c); -void mlx5e_build_rq_param(struct mlx5e_priv *priv, +void mlx5e_build_rq_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_xsk_param *xsk, u16 q_counter, struct mlx5e_rq_param *param); -void mlx5e_build_drop_rq_param(struct mlx5e_priv *priv, +void mlx5e_build_drop_rq_param(struct mlx5_core_dev *mdev, u16 q_counter, struct mlx5e_rq_param *param); -void mlx5e_build_sq_param_common(struct mlx5e_priv *priv, +void mlx5e_build_sq_param_common(struct mlx5_core_dev *mdev, struct mlx5e_sq_param *param); -void mlx5e_build_sq_param(struct mlx5e_priv *priv, struct mlx5e_params *params, +void mlx5e_build_sq_param(struct mlx5_core_dev *mdev, + struct mlx5e_params *params, struct mlx5e_sq_param *param); -void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv, +void mlx5e_build_tx_cq_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_cq_param *param); -void mlx5e_build_xdpsq_param(struct mlx5e_priv *priv, +void mlx5e_build_xdpsq_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_sq_param *param); -void mlx5e_build_channel_param(struct mlx5e_priv *priv, +void mlx5e_build_channel_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, u16 q_counter, struct mlx5e_channel_param *cparam); u16 mlx5e_calc_sq_stop_room(struct mlx5_core_dev *mdev, struct mlx5e_params *params); -int mlx5e_validate_params(struct mlx5e_priv *priv, struct mlx5e_params *params); +int mlx5e_validate_params(struct mlx5_core_dev *mdev, struct mlx5e_params *params); #endif /* __MLX5_EN_PARAMS_H__ */ diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c index bb5d108f75d0..27b5c7b143b2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c @@ -389,19 +389,19 @@ static void mlx5e_ptp_close_cqs(struct mlx5e_port_ptp *c) mlx5e_close_cq(&c->ptpsq[tc].txqsq.cq); } -static void mlx5e_ptp_build_sq_param(struct mlx5e_priv *priv, +static void mlx5e_ptp_build_sq_param(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_sq_param *param) { void *sqc = param->sqc; void *wq; - mlx5e_build_sq_param_common(priv, param); + mlx5e_build_sq_param_common(mdev, param); wq = MLX5_ADDR_OF(sqc, sqc, wq); MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size); param->stop_room = mlx5e_stop_room_for_wqe(MLX5_SEND_WQE_MAX_WQEBBS); - mlx5e_build_tx_cq_param(priv, params, &param->cqp); + mlx5e_build_tx_cq_param(mdev, params, &param->cqp); } static void mlx5e_ptp_build_params(struct mlx5e_port_ptp *c, @@ -419,7 +419,7 @@ static void mlx5e_ptp_build_params(struct mlx5e_port_ptp *c, /* SQ */ params->log_sq_size = orig->log_sq_size; - mlx5e_ptp_build_sq_param(c->priv, params, &cparams->txq_sq_param); + mlx5e_ptp_build_sq_param(c->mdev, params, &cparams->txq_sq_param); } static int mlx5e_ptp_open_queues(struct mlx5e_port_ptp *c, diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c index 12d7ad061237..5efe3278b0f6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c @@ -232,8 +232,8 @@ static int mlx5e_open_qos_sq(struct mlx5e_priv *priv, struct mlx5e_channels *chs memset(&param_sq, 0, sizeof(param_sq)); memset(&param_cq, 0, sizeof(param_cq)); - mlx5e_build_sq_param(priv,
params, &param_sq); - mlx5e_build_tx_cq_param(priv, params, &param_cq); + mlx5e_build_sq_param(priv->mdev, params, &param_sq); + mlx5e_build_tx_cq_param(priv->mdev, params, &param_cq); err = mlx5e_open_cq(priv, params->tx_cq_moderation, &param_cq, &ccp, &sq->cq); if (err) goto err_free_sq; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c index 0bfbc8cbe840..ba36657cd297 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c @@ -238,14 +238,16 @@ static void mlx5e_deactivate_trap_rq(struct mlx5e_rq *rq) clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state); } -static void mlx5e_build_trap_params(struct mlx5e_priv *priv, struct mlx5e_trap *t) +static void mlx5e_build_trap_params(struct mlx5_core_dev *mdev, + int max_mtu, u16 q_counter, + struct mlx5e_trap *t) { struct mlx5e_params *params = &t->params; params->rq_wq_type = MLX5_WQ_TYPE_CYCLIC; - mlx5e_init_rq_type_params(priv->mdev, params); - params->sw_mtu = priv->netdev->max_mtu; - mlx5e_build_rq_param(priv, params, NULL, priv->q_counter, &t->rq_param); + mlx5e_init_rq_type_params(mdev, params); + params->sw_mtu = max_mtu; + mlx5e_build_rq_param(mdev, params, NULL, q_counter, &t->rq_param); } static struct mlx5e_trap *mlx5e_open_trap(struct mlx5e_priv *priv) @@ -259,7 +261,7 @@ static struct mlx5e_trap *mlx5e_open_trap(struct mlx5e_priv *priv) if (!t) return ERR_PTR(-ENOMEM); - mlx5e_build_trap_params(priv, t); + mlx5e_build_trap_params(priv->mdev, netdev->max_mtu, priv->q_counter, t); t->priv = priv; t->mdev = priv->mdev; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c index d84a2299adf4..313a708e351b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c @@ -35,13 +35,14 @@ bool mlx5e_validate_xsk_param(struct mlx5e_params *params, } } -static void mlx5e_build_xsk_cparam(struct mlx5e_priv *priv, +static void mlx5e_build_xsk_cparam(struct mlx5_core_dev *mdev, struct mlx5e_params *params, struct mlx5e_xsk_param *xsk, + u16 q_counter, struct mlx5e_channel_param *cparam) { - mlx5e_build_rq_param(priv, params, xsk, priv->q_counter, &cparam->rq); - mlx5e_build_xdpsq_param(priv, params, &cparam->xdp_sq); + mlx5e_build_rq_param(mdev, params, xsk, q_counter, &cparam->rq); + mlx5e_build_xdpsq_param(mdev, params, &cparam->xdp_sq); } int mlx5e_open_xsk(struct mlx5e_priv *priv, struct mlx5e_params *params, @@ -61,7 +62,7 @@ int mlx5e_open_xsk(struct mlx5e_priv *priv, struct mlx5e_params *params, if (!cparam) return -ENOMEM; - mlx5e_build_xsk_cparam(priv, params, xsk, cparam); + mlx5e_build_xsk_cparam(priv->mdev, params, xsk, priv->q_counter, cparam); err = mlx5e_open_cq(c->priv, params->rx_cq_moderation, &cparam->rq.cqp, &ccp, &c->xskrq.cq); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index abdf721bb264..d2d8ba4b3508 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -368,7 +368,7 @@ int mlx5e_ethtool_set_ringparam(struct mlx5e_priv *priv, new_channels.params.log_rq_mtu_frames = log_rq_size; new_channels.params.log_sq_size = log_sq_size; - err = mlx5e_validate_params(priv, &new_channels.params); + err = mlx5e_validate_params(priv->mdev, &new_channels.params); if (err) goto unlock; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index de40ed634011..fe98f7d7921f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -2070,7 +2070,7 @@ int mlx5e_open_channels(struct mlx5e_priv *priv, if (!chs->c || !cparam) goto err_free; - mlx5e_build_channel_param(priv, &chs->params, priv->q_counter, cparam); + mlx5e_build_channel_param(priv->mdev, &chs->params, priv->q_counter, cparam); for (i = 0; i < chs->num; i++) { struct xsk_buff_pool *xsk_pool = NULL; @@ -3029,7 +3029,7 @@ int mlx5e_open_drop_rq(struct mlx5e_priv *priv, struct mlx5e_cq *cq = &drop_rq->cq; int err; - mlx5e_build_drop_rq_param(priv, priv->drop_rq_q_counter, &rq_param); + mlx5e_build_drop_rq_param(mdev, priv->drop_rq_q_counter, &rq_param); err = mlx5e_alloc_drop_cq(priv, cq, &cq_param); if (err) @@ -3856,7 +3856,7 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu, new_channels.params = *params; new_channels.params.sw_mtu = new_mtu; - err = mlx5e_validate_params(priv, &new_channels.params); + err = mlx5e_validate_params(priv->mdev, &new_channels.params); if (err) goto out; From patchwork Thu Mar 25 05:04:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 409361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 336F6C433E9 for ; Thu, 25 Mar 2021 05:05:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1DD3F61A26 for ; Thu, 25 Mar 2021 05:05:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229915AbhCYFFT (ORCPT ); Thu, 25 Mar 2021 01:05:19 -0400 Received: from mail.kernel.org ([198.145.29.99]:47968 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229693AbhCYFEt (ORCPT ); Thu, 25 Mar 2021 01:04:49 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id BCE0C619BA; Thu, 25 Mar 2021 05:04:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1616648689; bh=P7AQjvy1CBSl8v21G/SQRv8ZSDinZRk3GoiqM3aDxwg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UXMkWeEgeO49MNTVhznX9vLkiA1M7UGli8UtWW5hMViIZrSYV1dktPehUxv/iHuzk 3VLQ3Py6OaGcsJy6xoZZIIh+E8n4ha/4+kdWYZzceiGHd+za1rUiTgyOXyU/QNzSEt 8CSHE2avc98wr9oogtjCzzkpPqZtxAvFlWvVwn/SkyiAgs5rg79UB1PbbxnDag0J0v TyDEoRvBaSnlFZP7LYCKZShpJQQ2nmqYRJ8TDsEHHyJghFtmzVRDsdeJR686y3skbe mTv+TKvvP8WnXTKDzKUXDJNhXpCX0Clx54WzDyv5VVh20/aYlOXat9zmRidZNJMqNc aP0pjTEULIKMw== From: Saeed Mahameed To: "David S. 
Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Aya Levin , Tariq Toukan , Saeed Mahameed Subject: [net-next 11/15] net/mlx5e: Generalize close RQ Date: Wed, 24 Mar 2021 22:04:34 -0700 Message-Id: <20210325050438.261511-12-saeed@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210325050438.261511-1-saeed@kernel.org> References: <20210325050438.261511-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Aya Levin Allow different flavours of RQ to use the same close flow. Add validity checks to support different RQ types which not necessarily initialize all the RQ's functionality. Signed-off-by: Aya Levin Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en/trap.c | 12 +----------- drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 13 ++++++++----- 2 files changed, 9 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c index d6e6641e9288..86ab4e864fe6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/trap.c @@ -30,14 +30,6 @@ static int mlx5e_trap_napi_poll(struct napi_struct *napi, int budget) return work_done; } -static void mlx5e_free_trap_rq(struct mlx5e_rq *rq) -{ - page_pool_destroy(rq->page_pool); - mlx5e_free_di_list(rq); - kvfree(rq->wqe.frags); - mlx5_wq_destroy(&rq->wq_ctrl); -} - static void mlx5e_init_trap_rq(struct mlx5e_trap *t, struct mlx5e_params *params, struct mlx5e_rq *rq) { @@ -93,9 +85,7 @@ static int mlx5e_open_trap_rq(struct mlx5e_priv *priv, struct mlx5e_trap *t) static void mlx5e_close_trap_rq(struct mlx5e_rq *rq) { - mlx5e_destroy_rq(rq); - mlx5e_free_rx_descs(rq); - mlx5e_free_trap_rq(rq); + mlx5e_close_rq(rq); mlx5e_close_cq(&rq->cq); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index b25e1501e236..eda30bc80a51 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -592,10 +592,12 @@ static void mlx5e_free_rq(struct mlx5e_rq *rq) struct bpf_prog *old_prog; int i; - old_prog = rcu_dereference_protected(rq->xdp_prog, - lockdep_is_held(&rq->priv->state_lock)); - if (old_prog) - bpf_prog_put(old_prog); + if (xdp_rxq_info_is_reg(&rq->xdp_rxq)) { + old_prog = rcu_dereference_protected(rq->xdp_prog, + lockdep_is_held(&rq->priv->state_lock)); + if (old_prog) + bpf_prog_put(old_prog); + } switch (rq->wq_type) { case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: @@ -901,7 +903,8 @@ void mlx5e_deactivate_rq(struct mlx5e_rq *rq) void mlx5e_close_rq(struct mlx5e_rq *rq) { cancel_work_sync(&rq->dim.work); - cancel_work_sync(&rq->icosq->recover_work); + if (rq->icosq) + cancel_work_sync(&rq->icosq->recover_work); cancel_work_sync(&rq->recover_work); mlx5e_destroy_rq(rq); mlx5e_free_rx_descs(rq); From patchwork Thu Mar 25 05:04:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 409357 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: 
from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 65FA5C433EA for ; Thu, 25 Mar 2021 05:05:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4370261A0D for ; Thu, 25 Mar 2021 05:05:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229934AbhCYFFV (ORCPT ); Thu, 25 Mar 2021 01:05:21 -0400 Received: from mail.kernel.org ([198.145.29.99]:47986 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229716AbhCYFEu (ORCPT ); Thu, 25 Mar 2021 01:04:50 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id BF7C661A26; Thu, 25 Mar 2021 05:04:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1616648690; bh=0SCwwd+q02qMtvdw2Hqx3/cqYHztqsspJ01MMJEuB/I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tVwB0ApD9S3lg127zZYn8CF6nJcRTruGmFOQ76Bp140n3+u7fo36RA8p9DLZc180U tMimlFE1tf7hylqFf/pkC6FinCos9i5DEmqVgIUu/1gPOO3pVLt+b5txdJ59B0DXKH 1VxFxZJUZTCXrHCVgVknWkJwDHpmN6Y09c3uAL2yGSi87d8h37EONFlBOiDK/6P+Cq h/pgjRqSYJ73xi/ZrMguXDHGmIP67k+1ik0Il+Bf2delQL4kt/y3JO2zgXXtGeDK9B ohd6s29E1wLFN8nYDCOSlWuKm6zc6EdYinO6/l+89IzQpVOqV+Eiqaf7Ccm386qCQL drrdH7OyxrBLQ== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Aya Levin , Tariq Toukan , Saeed Mahameed Subject: [net-next 13/15] net/mlx5e: Generalize PTP implementation Date: Wed, 24 Mar 2021 22:04:36 -0700 Message-Id: <20210325050438.261511-14-saeed@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210325050438.261511-1-saeed@kernel.org> References: <20210325050438.261511-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Aya Levin Following patches in the set add support for RX PTP. Rename PTP prefix from %s/port_ptp/ptp/g to include RX PTP too. In addition rename indication (used in statistics context) that PTP-SQ was opened: %s/port_ptp_opened/tx_ptp_opened/g. This will simplify adding indication that PTP-RQ was opened. 
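For reference, a minimal call-sequence sketch of the renamed interface, as exercised by the en_main.c hunks later in this patch (it assumes the usual priv/chs context of mlx5e_open_channels(); error handling is trimmed for brevity):

    struct mlx5e_ptp *ptp = NULL;
    int err;

    /* open and activate the (renamed) PTP channel */
    err = mlx5e_ptp_open(priv, &chs->params, chs->c[0]->lag_port, &ptp);
    if (err)
        return err;
    mlx5e_ptp_activate_channel(ptp);

    /* ... datapath runs; counters accumulate in priv->ptp_stats ... */

    mlx5e_ptp_deactivate_channel(ptp);
    mlx5e_ptp_close(ptp);
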
Signed-off-by: Aya Levin Signed-off-by: Tariq Toukan Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/en.h | 10 ++--- .../net/ethernet/mellanox/mlx5/core/en/ptp.c | 39 +++++++++---------- .../net/ethernet/mellanox/mlx5/core/en/ptp.h | 12 +++--- .../mellanox/mlx5/core/en/reporter_tx.c | 8 ++-- .../ethernet/mellanox/mlx5/core/en_ethtool.c | 2 +- .../net/ethernet/mellanox/mlx5/core/en_main.c | 23 ++++++----- .../ethernet/mellanox/mlx5/core/en_stats.c | 18 ++++----- .../net/ethernet/mellanox/mlx5/core/en_tx.c | 2 +- 8 files changed, 56 insertions(+), 58 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h index 1ff8b79606f9..c32c2e265c26 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h @@ -706,11 +706,11 @@ struct mlx5e_channel { int cpu; }; -struct mlx5e_port_ptp; +struct mlx5e_ptp; struct mlx5e_channels { struct mlx5e_channel **c; - struct mlx5e_port_ptp *port_ptp; + struct mlx5e_ptp *ptp; unsigned int num; struct mlx5e_params params; }; @@ -725,7 +725,7 @@ struct mlx5e_channel_stats { struct mlx5e_xdpsq_stats xsksq; } ____cacheline_aligned_in_smp; -struct mlx5e_port_ptp_stats { +struct mlx5e_ptp_stats { struct mlx5e_ch_stats ch; struct mlx5e_sq_stats sq[MLX5E_MAX_NUM_TC]; struct mlx5e_ptp_cq_stats cq[MLX5E_MAX_NUM_TC]; @@ -854,10 +854,10 @@ struct mlx5e_priv { struct mlx5e_stats stats; struct mlx5e_channel_stats channel_stats[MLX5E_MAX_NUM_CHANNELS]; struct mlx5e_channel_stats trap_stats; - struct mlx5e_port_ptp_stats port_ptp_stats; + struct mlx5e_ptp_stats ptp_stats; u16 max_nch; u8 max_opened_tc; - bool port_ptp_opened; + bool tx_ptp_opened; struct hwtstamp_config tstamp; u16 q_counter; u16 drop_rq_q_counter; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c index fb93edf910b9..2bc6d4362670 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c @@ -116,8 +116,7 @@ static bool mlx5e_ptp_poll_ts_cq(struct mlx5e_cq *cq, int budget) static int mlx5e_ptp_napi_poll(struct napi_struct *napi, int budget) { - struct mlx5e_port_ptp *c = container_of(napi, struct mlx5e_port_ptp, - napi); + struct mlx5e_ptp *c = container_of(napi, struct mlx5e_ptp, napi); struct mlx5e_ch_stats *ch_stats = c->stats; bool busy = false; int work_done = 0; @@ -153,7 +152,7 @@ static int mlx5e_ptp_napi_poll(struct napi_struct *napi, int budget) return work_done; } -static int mlx5e_ptp_alloc_txqsq(struct mlx5e_port_ptp *c, int txq_ix, +static int mlx5e_ptp_alloc_txqsq(struct mlx5e_ptp *c, int txq_ix, struct mlx5e_params *params, struct mlx5e_sq_param *param, struct mlx5e_txqsq *sq, int tc, @@ -177,7 +176,7 @@ static int mlx5e_ptp_alloc_txqsq(struct mlx5e_port_ptp *c, int txq_ix, sq->uar_map = mdev->mlx5e_res.hw_objs.bfreg.map; sq->min_inline_mode = params->tx_min_inline_mode; sq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu); - sq->stats = &c->priv->port_ptp_stats.sq[tc]; + sq->stats = &c->priv->ptp_stats.sq[tc]; sq->ptpsq = ptpsq; INIT_WORK(&sq->recover_work, mlx5e_tx_err_cqe_work); if (!MLX5_CAP_ETH(mdev, wqe_vlan_insert)) @@ -241,7 +240,7 @@ static void mlx5e_ptp_free_traffic_db(struct mlx5e_skb_fifo *skb_fifo) kvfree(skb_fifo->fifo); } -static int mlx5e_ptp_open_txqsq(struct mlx5e_port_ptp *c, u32 tisn, +static int mlx5e_ptp_open_txqsq(struct mlx5e_ptp *c, u32 tisn, int txq_ix, struct mlx5e_ptp_params *cparams, int tc, struct mlx5e_ptpsq 
*ptpsq) { @@ -291,7 +290,7 @@ static void mlx5e_ptp_close_txqsq(struct mlx5e_ptpsq *ptpsq) mlx5e_free_txqsq(sq); } -static int mlx5e_ptp_open_txqsqs(struct mlx5e_port_ptp *c, +static int mlx5e_ptp_open_txqsqs(struct mlx5e_ptp *c, struct mlx5e_ptp_params *cparams) { struct mlx5e_params *params = &cparams->params; @@ -319,7 +318,7 @@ static int mlx5e_ptp_open_txqsqs(struct mlx5e_port_ptp *c, return err; } -static void mlx5e_ptp_close_txqsqs(struct mlx5e_port_ptp *c) +static void mlx5e_ptp_close_txqsqs(struct mlx5e_ptp *c) { int tc; @@ -327,7 +326,7 @@ static void mlx5e_ptp_close_txqsqs(struct mlx5e_port_ptp *c) mlx5e_ptp_close_txqsq(&c->ptpsq[tc]); } -static int mlx5e_ptp_open_cqs(struct mlx5e_port_ptp *c, +static int mlx5e_ptp_open_cqs(struct mlx5e_ptp *c, struct mlx5e_ptp_params *cparams) { struct mlx5e_params *params = &cparams->params; @@ -360,7 +359,7 @@ static int mlx5e_ptp_open_cqs(struct mlx5e_port_ptp *c, if (err) goto out_err_ts_cq; - ptpsq->cq_stats = &c->priv->port_ptp_stats.cq[tc]; + ptpsq->cq_stats = &c->priv->ptp_stats.cq[tc]; } return 0; @@ -376,7 +375,7 @@ static int mlx5e_ptp_open_cqs(struct mlx5e_port_ptp *c, return err; } -static void mlx5e_ptp_close_cqs(struct mlx5e_port_ptp *c) +static void mlx5e_ptp_close_cqs(struct mlx5e_ptp *c) { int tc; @@ -402,7 +401,7 @@ static void mlx5e_ptp_build_sq_param(struct mlx5_core_dev *mdev, mlx5e_build_tx_cq_param(mdev, params, &param->cqp); } -static void mlx5e_ptp_build_params(struct mlx5e_port_ptp *c, +static void mlx5e_ptp_build_params(struct mlx5e_ptp *c, struct mlx5e_ptp_params *cparams, struct mlx5e_params *orig) { @@ -420,7 +419,7 @@ static void mlx5e_ptp_build_params(struct mlx5e_port_ptp *c, mlx5e_ptp_build_sq_param(c->mdev, params, &cparams->txq_sq_param); } -static int mlx5e_ptp_open_queues(struct mlx5e_port_ptp *c, +static int mlx5e_ptp_open_queues(struct mlx5e_ptp *c, struct mlx5e_ptp_params *cparams) { int err; @@ -441,19 +440,19 @@ static int mlx5e_ptp_open_queues(struct mlx5e_port_ptp *c, return err; } -static void mlx5e_ptp_close_queues(struct mlx5e_port_ptp *c) +static void mlx5e_ptp_close_queues(struct mlx5e_ptp *c) { mlx5e_ptp_close_txqsqs(c); mlx5e_ptp_close_cqs(c); } -int mlx5e_port_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params, - u8 lag_port, struct mlx5e_port_ptp **cp) +int mlx5e_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params, + u8 lag_port, struct mlx5e_ptp **cp) { struct net_device *netdev = priv->netdev; struct mlx5_core_dev *mdev = priv->mdev; struct mlx5e_ptp_params *cparams; - struct mlx5e_port_ptp *c; + struct mlx5e_ptp *c; unsigned int irq; int err; int eqn; @@ -475,7 +474,7 @@ int mlx5e_port_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params, c->netdev = priv->netdev; c->mkey_be = cpu_to_be32(priv->mdev->mlx5e_res.hw_objs.mkey.key); c->num_tc = params->num_tc; - c->stats = &priv->port_ptp_stats.ch; + c->stats = &priv->ptp_stats.ch; c->lag_port = lag_port; netif_napi_add(netdev, &c->napi, mlx5e_ptp_napi_poll, 64); @@ -500,7 +499,7 @@ int mlx5e_port_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params, return err; } -void mlx5e_port_ptp_close(struct mlx5e_port_ptp *c) +void mlx5e_ptp_close(struct mlx5e_ptp *c) { mlx5e_ptp_close_queues(c); netif_napi_del(&c->napi); @@ -508,7 +507,7 @@ void mlx5e_port_ptp_close(struct mlx5e_port_ptp *c) kvfree(c); } -void mlx5e_ptp_activate_channel(struct mlx5e_port_ptp *c) +void mlx5e_ptp_activate_channel(struct mlx5e_ptp *c) { int tc; @@ -518,7 +517,7 @@ void mlx5e_ptp_activate_channel(struct mlx5e_port_ptp *c)
mlx5e_activate_txqsq(&c->ptpsq[tc].txqsq); } -void mlx5e_ptp_deactivate_channel(struct mlx5e_port_ptp *c) +void mlx5e_ptp_deactivate_channel(struct mlx5e_ptp *c) { int tc; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h index 90c98ea63b7f..4cae06f2c312 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/ptp.h @@ -17,7 +17,7 @@ struct mlx5e_ptpsq { struct mlx5e_ptp_cq_stats *cq_stats; }; -struct mlx5e_port_ptp { +struct mlx5e_ptp { /* data path */ struct mlx5e_ptpsq ptpsq[MLX5E_MAX_NUM_TC]; struct napi_struct napi; @@ -43,11 +43,11 @@ struct mlx5e_ptp_params { struct mlx5e_sq_param txq_sq_param; }; -int mlx5e_port_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params, - u8 lag_port, struct mlx5e_port_ptp **cp); -void mlx5e_port_ptp_close(struct mlx5e_port_ptp *c); -void mlx5e_ptp_activate_channel(struct mlx5e_port_ptp *c); -void mlx5e_ptp_deactivate_channel(struct mlx5e_port_ptp *c); +int mlx5e_ptp_open(struct mlx5e_priv *priv, struct mlx5e_params *params, + u8 lag_port, struct mlx5e_ptp **cp); +void mlx5e_ptp_close(struct mlx5e_ptp *c); +void mlx5e_ptp_activate_channel(struct mlx5e_ptp *c); +void mlx5e_ptp_deactivate_channel(struct mlx5e_ptp *c); enum { MLX5E_SKB_CB_CQE_HWTSTAMP = BIT(0), diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c index 63ee3b9416de..e107801adf48 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c @@ -315,8 +315,8 @@ mlx5e_tx_reporter_diagnose_common_config(struct devlink_health_reporter *reporte if (err) return err; - generic_ptpsq = priv->channels.port_ptp ? - &priv->channels.port_ptp->ptpsq[0] : + generic_ptpsq = priv->channels.ptp ? 
+ &priv->channels.ptp->ptpsq[0] : NULL; if (!generic_ptpsq) goto out; @@ -346,7 +346,7 @@ static int mlx5e_tx_reporter_diagnose(struct devlink_health_reporter *reporter, struct netlink_ext_ack *extack) { struct mlx5e_priv *priv = devlink_health_reporter_priv(reporter); - struct mlx5e_port_ptp *ptp_ch = priv->channels.port_ptp; + struct mlx5e_ptp *ptp_ch = priv->channels.ptp; int i, tc, err = 0; @@ -460,7 +460,7 @@ static int mlx5e_tx_reporter_dump_sq(struct mlx5e_priv *priv, struct devlink_fms static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg) { - struct mlx5e_port_ptp *ptp_ch = priv->channels.port_ptp; + struct mlx5e_ptp *ptp_ch = priv->channels.ptp; struct mlx5_rsc_key key = {}; int i, tc, err; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c index d2d8ba4b3508..1b3183f521cb 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c @@ -2023,7 +2023,7 @@ static int set_pflag_tx_port_ts(struct net_device *netdev, bool enable) mlx5e_num_channels_changed_ctx, NULL); out: if (!err) - priv->port_ptp_opened = true; + priv->tx_ptp_opened = true; return err; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 9096a7e6f4d6..18582e934b61 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -2088,8 +2088,7 @@ int mlx5e_open_channels(struct mlx5e_priv *priv, } if (MLX5E_GET_PFLAG(&chs->params, MLX5E_PFLAG_TX_PORT_TS)) { - err = mlx5e_port_ptp_open(priv, &chs->params, chs->c[0]->lag_port, - &chs->port_ptp); + err = mlx5e_ptp_open(priv, &chs->params, chs->c[0]->lag_port, &chs->ptp); if (err) goto err_close_channels; } @@ -2103,8 +2102,8 @@ int mlx5e_open_channels(struct mlx5e_priv *priv, return 0; err_close_ptp: - if (chs->port_ptp) - mlx5e_port_ptp_close(chs->port_ptp); + if (chs->ptp) + mlx5e_ptp_close(chs->ptp); err_close_channels: for (i--; i >= 0; i--) @@ -2124,8 +2123,8 @@ static void mlx5e_activate_channels(struct mlx5e_channels *chs) for (i = 0; i < chs->num; i++) mlx5e_activate_channel(chs->c[i]); - if (chs->port_ptp) - mlx5e_ptp_activate_channel(chs->port_ptp); + if (chs->ptp) + mlx5e_ptp_activate_channel(chs->ptp); } #define MLX5E_RQ_WQES_TIMEOUT 20000 /* msecs */ @@ -2152,8 +2151,8 @@ static void mlx5e_deactivate_channels(struct mlx5e_channels *chs) { int i; - if (chs->port_ptp) - mlx5e_ptp_deactivate_channel(chs->port_ptp); + if (chs->ptp) + mlx5e_ptp_deactivate_channel(chs->ptp); for (i = 0; i < chs->num; i++) mlx5e_deactivate_channel(chs->c[i]); @@ -2163,8 +2162,8 @@ void mlx5e_close_channels(struct mlx5e_channels *chs) { int i; - if (chs->port_ptp) - mlx5e_port_ptp_close(chs->port_ptp); + if (chs->ptp) + mlx5e_ptp_close(chs->ptp); for (i = 0; i < chs->num; i++) mlx5e_close_channel(chs->c[i]); @@ -2756,11 +2755,11 @@ static void mlx5e_build_txq_maps(struct mlx5e_priv *priv) } } - if (!priv->channels.port_ptp) + if (!priv->channels.ptp) return; for (tc = 0; tc < num_tc; tc++) { - struct mlx5e_port_ptp *c = priv->channels.port_ptp; + struct mlx5e_ptp *c = priv->channels.ptp; struct mlx5e_txqsq *sq = &c->ptpsq[tc].txqsq; priv->txq2sq[sq->txq_ix] = sq; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c index 92c5b81427b9..32331ac9288c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c +++ 
b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c @@ -407,13 +407,13 @@ static void mlx5e_stats_grp_sw_update_stats_ptp(struct mlx5e_priv *priv, { int i; - if (!priv->port_ptp_opened) + if (!priv->tx_ptp_opened) return; - mlx5e_stats_grp_sw_update_stats_ch_stats(s, &priv->port_ptp_stats.ch); + mlx5e_stats_grp_sw_update_stats_ch_stats(s, &priv->ptp_stats.ch); for (i = 0; i < priv->max_opened_tc; i++) { - mlx5e_stats_grp_sw_update_stats_sq(s, &priv->port_ptp_stats.sq[i]); + mlx5e_stats_grp_sw_update_stats_sq(s, &priv->ptp_stats.sq[i]); /* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92657 */ barrier(); @@ -1851,7 +1851,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(qos) { return; } static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ptp) { - return priv->port_ptp_opened ? + return priv->tx_ptp_opened ? NUM_PTP_CH_STATS + ((NUM_PTP_SQ_STATS + NUM_PTP_CQ_STATS) * priv->max_opened_tc) : 0; @@ -1861,7 +1861,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(ptp) { int i, tc; - if (!priv->port_ptp_opened) + if (!priv->tx_ptp_opened) return idx; for (i = 0; i < NUM_PTP_CH_STATS; i++) @@ -1884,24 +1884,24 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(ptp) { int i, tc; - if (!priv->port_ptp_opened) + if (!priv->tx_ptp_opened) return idx; for (i = 0; i < NUM_PTP_CH_STATS; i++) data[idx++] = - MLX5E_READ_CTR64_CPU(&priv->port_ptp_stats.ch, + MLX5E_READ_CTR64_CPU(&priv->ptp_stats.ch, ptp_ch_stats_desc, i); for (tc = 0; tc < priv->max_opened_tc; tc++) for (i = 0; i < NUM_PTP_SQ_STATS; i++) data[idx++] = - MLX5E_READ_CTR64_CPU(&priv->port_ptp_stats.sq[tc], + MLX5E_READ_CTR64_CPU(&priv->ptp_stats.sq[tc], ptp_sq_stats_desc, i); for (tc = 0; tc < priv->max_opened_tc; tc++) for (i = 0; i < NUM_PTP_CQ_STATS; i++) data[idx++] = - MLX5E_READ_CTR64_CPU(&priv->port_ptp_stats.cq[tc], + MLX5E_READ_CTR64_CPU(&priv->ptp_stats.cq[tc], ptp_cq_stats_desc, i); return idx; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c index d2efe2455955..cfb6ffd7df54 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c @@ -142,7 +142,7 @@ u16 mlx5e_select_queue(struct net_device *dev, struct sk_buff *skb, return txq_ix; } - if (unlikely(priv->channels.port_ptp)) + if (unlikely(priv->channels.ptp)) if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) && mlx5e_use_ptpsq(skb)) return mlx5e_select_ptpsq(dev, skb); From patchwork Thu Mar 25 05:04:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Saeed Mahameed X-Patchwork-Id: 409359 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-19.2 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, INCLUDES_CR_TRAILER, INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 92508C433EC for ; Thu, 25 Mar 2021 05:05:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6DB1161A27 for ; Thu, 25 Mar 2021 05:05:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229957AbhCYFFZ (ORCPT ); Thu, 25 Mar 2021 01:05:25 -0400 Received: from mail.kernel.org ([198.145.29.99]:48002 "EHLO 
mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229730AbhCYFEv (ORCPT ); Thu, 25 Mar 2021 01:04:51 -0400 Received: by mail.kernel.org (Postfix) with ESMTPSA id BD8C561A24; Thu, 25 Mar 2021 05:04:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1616648691; bh=HE/QZFdqS671tqHl5nlwxCKvdp++TU0y/bfIoDjsvEc=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JNeZ0ey3lfaIQtLhxhzb/7+nHw4XHQBB6f0omnB7ZII8YQ415lpdqpFF5a2aOBsq2 zhGMTPqUy/mGX6TL+6NjmjF20X518K3VmtSlxi9LQiy3AldMkom/gdRAqsWK6E4JHe 15J3hqD46BqRmLd4CG54GVAciB9qvUOzHzdR4P1QMQeHjWs0cNZp1DwUoRGF+MhQR+ PQrQbxfKZpqcQZi52pGOStb5HrzS5Rg7mJOeodmpGEU2ycabCE+iPDiVYzQbgbfTqk W2RTM04wx1fCy3cYDC2bj2iga019mQ1zrSFJRRL5L/ZOlmBz1tJ0MI3J3YbQXFN3I4 s9GZeItFqiryg== From: Saeed Mahameed To: "David S. Miller" , Jakub Kicinski Cc: netdev@vger.kernel.org, Colin Ian King , Saeed Mahameed Subject: [net-next 15/15] net/mlx5: Fix spelling mistakes in mlx5_core_info message Date: Wed, 24 Mar 2021 22:04:38 -0700 Message-Id: <20210325050438.261511-16-saeed@kernel.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210325050438.261511-1-saeed@kernel.org> References: <20210325050438.261511-1-saeed@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Colin Ian King There are two spelling mistakes in a mlx5_core_info message. Fix them. Signed-off-by: Colin Ian King Signed-off-by: Saeed Mahameed --- drivers/net/ethernet/mellanox/mlx5/core/health.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c index a0a851640804..9ff163c5bcde 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c @@ -340,7 +340,7 @@ static int mlx5_health_try_recover(struct mlx5_core_dev *dev) return -EIO; } - mlx5_core_info(dev, "health revovery succeded\n"); + mlx5_core_info(dev, "health recovery succeeded\n"); return 0; }