From patchwork Thu Sep 17 06:49:02 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 260736
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, petrm@nvidia.com,
    mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 2/3] mlxsw: spectrum_dcb: Implement dcbnl_setbuffer / getbuffer
Date: Thu, 17 Sep 2020 09:49:02 +0300
Message-Id: <20200917064903.260700-3-idosch@idosch.org>
In-Reply-To: <20200917064903.260700-1-idosch@idosch.org>
References: <20200917064903.260700-1-idosch@idosch.org>

From: Petr Machata

Add dcbnl_setbuffer, which bounces requests if the headroom is in DCB
mode. Implement dcbnl_getbuffer such that it can always be used to
determine the port-buffer configuration, regardless of headroom mode.

Signed-off-by: Petr Machata
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 .../ethernet/mellanox/mlxsw/spectrum_dcb.c | 59 +++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
index d9a556dbe85e..5f92b1691360 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
@@ -592,6 +592,62 @@ static int mlxsw_sp_dcbnl_ieee_setpfc(struct net_device *dev,
 	return err;
 }
 
+static int mlxsw_sp_dcbnl_getbuffer(struct net_device *dev, struct dcbnl_buffer *buf)
+{
+	struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+	struct mlxsw_sp_hdroom *hdroom = mlxsw_sp_port->hdroom;
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	int prio;
+	int i;
+
+	buf->total_size = 0;
+
+	BUILD_BUG_ON(DCBX_MAX_BUFFERS > MLXSW_SP_PB_COUNT);
+	for (i = 0; i < MLXSW_SP_PB_COUNT; i++) {
+		u32 bytes = mlxsw_sp_cells_bytes(mlxsw_sp, hdroom->bufs.buf[i].size_cells);
+
+		if (i < DCBX_MAX_BUFFERS)
+			buf->buffer_size[i] = bytes;
+		buf->total_size += bytes;
+	}
+
+	buf->total_size += mlxsw_sp_cells_bytes(mlxsw_sp, hdroom->int_buf.size_cells);
+
+	for (prio = 0; prio < IEEE_8021Q_MAX_PRIORITIES; prio++)
+		buf->prio2buffer[prio] = hdroom->prios.prio[prio].buf_idx;
+
+	return 0;
+}
+
+static int mlxsw_sp_dcbnl_setbuffer(struct net_device *dev, struct dcbnl_buffer *buf)
+{
+	struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+	struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+	struct mlxsw_sp_hdroom hdroom;
+	int prio;
+	int i;
+
+	hdroom = *mlxsw_sp_port->hdroom;
+
+	if (hdroom.mode != MLXSW_SP_HDROOM_MODE_TC) {
+		netdev_err(dev, "The use of dcbnl_setbuffer is only allowed if egress is configured using TC\n");
+		return -EINVAL;
+	}
+
+	for (prio = 0; prio < IEEE_8021Q_MAX_PRIORITIES; prio++)
+		hdroom.prios.prio[prio].set_buf_idx = buf->prio2buffer[prio];
+
+	BUILD_BUG_ON(DCBX_MAX_BUFFERS > MLXSW_SP_PB_COUNT);
+	for (i = 0; i < DCBX_MAX_BUFFERS; i++)
+		hdroom.bufs.buf[i].set_size_cells = mlxsw_sp_bytes_cells(mlxsw_sp,
+									 buf->buffer_size[i]);
+
+	mlxsw_sp_hdroom_prios_reset_buf_idx(&hdroom);
+	mlxsw_sp_hdroom_bufs_reset_lossiness(&hdroom);
+	mlxsw_sp_hdroom_bufs_reset_sizes(mlxsw_sp_port, &hdroom);
+	return mlxsw_sp_hdroom_configure(mlxsw_sp_port, &hdroom);
+}
+
 static const struct dcbnl_rtnl_ops mlxsw_sp_dcbnl_ops = {
 	.ieee_getets		= mlxsw_sp_dcbnl_ieee_getets,
 	.ieee_setets		= mlxsw_sp_dcbnl_ieee_setets,
@@ -604,6 +660,9 @@ static const struct dcbnl_rtnl_ops mlxsw_sp_dcbnl_ops = {
 	.getdcbx		= mlxsw_sp_dcbnl_getdcbx,
 	.setdcbx		= mlxsw_sp_dcbnl_setdcbx,
+
+	.dcbnl_getbuffer	= mlxsw_sp_dcbnl_getbuffer,
+	.dcbnl_setbuffer	= mlxsw_sp_dcbnl_setbuffer,
 };
 
 static int mlxsw_sp_port_ets_init(struct mlxsw_sp_port *mlxsw_sp_port)
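
[Editor's note] As a quick illustration of what the getbuffer callback in the
patch above reports, here is a minimal, stand-alone sketch of the same
aggregation: per-buffer sizes are converted from device cells to bytes and
summed into a total, and the priority-to-buffer map is copied out. The cell
size, buffer counts, and type names are illustrative assumptions for the
example only, not the mlxsw definitions used in the patch.

/* Stand-alone sketch of the getbuffer-style aggregation.  CELL_SIZE,
 * NUM_PORT_BUFFERS and struct sketch_buf are assumptions for this
 * example, not the driver's actual definitions. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define CELL_SIZE        144u	/* assumed cell granularity, in bytes */
#define NUM_PORT_BUFFERS 10u	/* assumed number of port buffers */
#define NUM_PRIOS        8u

struct sketch_buf {
	uint32_t size_cells;
};

static uint32_t cells_to_bytes(uint32_t cells)
{
	/* Cells to bytes, conceptually what mlxsw_sp_cells_bytes() does. */
	return cells * CELL_SIZE;
}

int main(void)
{
	struct sketch_buf bufs[NUM_PORT_BUFFERS] = { { 500 }, { 250 } };
	uint8_t prio2buffer[NUM_PRIOS] = { 0, 0, 1, 1, 0, 0, 0, 0 };
	uint32_t total_size = 0;
	unsigned int i;

	/* Sum every port buffer into the reported total, as the callback
	 * does for buf->total_size; the driver additionally adds the
	 * internal buffer on top. */
	for (i = 0; i < NUM_PORT_BUFFERS; i++)
		total_size += cells_to_bytes(bufs[i].size_cells);

	printf("total headroom: %" PRIu32 " bytes\n", total_size);
	for (i = 0; i < NUM_PRIOS; i++)
		printf("prio %u -> buffer %u\n", i, (unsigned int)prio2buffer[i]);
	return 0;
}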

From patchwork Thu Sep 17 06:49:03 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 260735
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, petrm@nvidia.com,
    mlxsw@nvidia.com, Ido Schimmel
Subject: [PATCH net-next 3/3] mlxsw: spectrum_qdisc: Disable port buffer autoresize with qdiscs
Date: Thu, 17 Sep 2020 09:49:03 +0300
Message-Id: <20200917064903.260700-4-idosch@idosch.org>
In-Reply-To: <20200917064903.260700-1-idosch@idosch.org>
References: <20200917064903.260700-1-idosch@idosch.org>

From: Petr Machata

There are two interfaces to configure ETS: qdiscs and DCB. Historically,
DCB ETS configuration was projected to ingress as well, and configured
port buffers. Qdisc configuration was not.

Keep qdiscs behaving this way, and if an offloaded qdisc is configured on
a port, move this port's headroom to a manual mode, thus allowing
configuration of port buffers through dcbnl_setbuffer.

Signed-off-by: Petr Machata
Reviewed-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 .../ethernet/mellanox/mlxsw/spectrum_qdisc.c | 34 ++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c
index 964fd444bb10..fd672c6c9133 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_qdisc.c
@@ -140,18 +140,31 @@ static int
 mlxsw_sp_qdisc_destroy(struct mlxsw_sp_port *mlxsw_sp_port,
 		       struct mlxsw_sp_qdisc *mlxsw_sp_qdisc)
 {
+	struct mlxsw_sp_qdisc *root_qdisc = &mlxsw_sp_port->qdisc->root_qdisc;
+	int err_hdroom = 0;
 	int err = 0;
 
 	if (!mlxsw_sp_qdisc)
 		return 0;
 
+	if (root_qdisc == mlxsw_sp_qdisc) {
+		struct mlxsw_sp_hdroom hdroom = *mlxsw_sp_port->hdroom;
+
+		hdroom.mode = MLXSW_SP_HDROOM_MODE_DCB;
+		mlxsw_sp_hdroom_prios_reset_buf_idx(&hdroom);
+		mlxsw_sp_hdroom_bufs_reset_lossiness(&hdroom);
+		mlxsw_sp_hdroom_bufs_reset_sizes(mlxsw_sp_port, &hdroom);
+		err_hdroom = mlxsw_sp_hdroom_configure(mlxsw_sp_port, &hdroom);
+	}
+
 	if (mlxsw_sp_qdisc->ops && mlxsw_sp_qdisc->ops->destroy)
 		err = mlxsw_sp_qdisc->ops->destroy(mlxsw_sp_port,
 						   mlxsw_sp_qdisc);
 
 	mlxsw_sp_qdisc->handle = TC_H_UNSPEC;
 	mlxsw_sp_qdisc->ops = NULL;
-	return err;
+
+	return err_hdroom ?: err;
 }
 
 static int
@@ -159,6 +172,8 @@ mlxsw_sp_qdisc_replace(struct mlxsw_sp_port *mlxsw_sp_port, u32 handle,
 		       struct mlxsw_sp_qdisc *mlxsw_sp_qdisc,
 		       struct mlxsw_sp_qdisc_ops *ops, void *params)
 {
+	struct mlxsw_sp_qdisc *root_qdisc = &mlxsw_sp_port->qdisc->root_qdisc;
+	struct mlxsw_sp_hdroom orig_hdroom;
 	int err;
 
 	if (mlxsw_sp_qdisc->ops && mlxsw_sp_qdisc->ops->type != ops->type)
@@ -168,6 +183,21 @@ mlxsw_sp_qdisc_replace(struct mlxsw_sp_port *mlxsw_sp_port, u32 handle,
 	 * new one.
 	 */
 	mlxsw_sp_qdisc_destroy(mlxsw_sp_port, mlxsw_sp_qdisc);
+
+	orig_hdroom = *mlxsw_sp_port->hdroom;
+	if (root_qdisc == mlxsw_sp_qdisc) {
+		struct mlxsw_sp_hdroom hdroom = orig_hdroom;
+
+		hdroom.mode = MLXSW_SP_HDROOM_MODE_TC;
+		mlxsw_sp_hdroom_prios_reset_buf_idx(&hdroom);
+		mlxsw_sp_hdroom_bufs_reset_lossiness(&hdroom);
+		mlxsw_sp_hdroom_bufs_reset_sizes(mlxsw_sp_port, &hdroom);
+
+		err = mlxsw_sp_hdroom_configure(mlxsw_sp_port, &hdroom);
+		if (err)
+			goto err_hdroom_configure;
+	}
+
 	err = ops->check_params(mlxsw_sp_port, mlxsw_sp_qdisc, params);
 	if (err)
 		goto err_bad_param;
@@ -191,6 +221,8 @@ mlxsw_sp_qdisc_replace(struct mlxsw_sp_port *mlxsw_sp_port, u32 handle,
 
 err_bad_param:
 err_config:
+	mlxsw_sp_hdroom_configure(mlxsw_sp_port, &orig_hdroom);
+err_hdroom_configure:
 	if (mlxsw_sp_qdisc->handle == handle && ops->unoffload)
 		ops->unoffload(mlxsw_sp_port, mlxsw_sp_qdisc, params);
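
[Editor's note] The replace path in the hunk above follows a
save/attempt/roll-back pattern: snapshot the current headroom configuration,
switch it to TC mode, and restore the snapshot if a later step fails. Below is
a minimal stand-alone sketch of that pattern; the types, function names, and
mode encoding are illustrative assumptions, not the driver's actual
interfaces.

/* Save/attempt/roll-back, mirroring the flow of mlxsw_sp_qdisc_replace(). */
#include <stdio.h>

/* Illustrative stand-in for the headroom configuration. */
struct sketch_hdroom {
	int mode;	/* 0 = DCB-managed, 1 = TC-managed (assumed encoding) */
};

static struct sketch_hdroom current_hdroom = { .mode = 0 };

/* Stand-in for mlxsw_sp_hdroom_configure(): pretend to push the
 * configuration to hardware and remember it on success. */
static int apply_hdroom(const struct sketch_hdroom *h)
{
	current_hdroom = *h;
	return 0;
}

/* Stand-in for check_params()/replace(); returns nonzero to simulate a
 * failure after the headroom was already switched. */
static int configure_qdisc_offload(void)
{
	return -1;
}

int main(void)
{
	struct sketch_hdroom orig = current_hdroom;	/* snapshot, like orig_hdroom */
	struct sketch_hdroom tc_mode = { .mode = 1 };
	int err;

	err = apply_hdroom(&tc_mode);		/* move headroom to TC mode */
	if (err)
		return 1;

	err = configure_qdisc_offload();
	if (err) {
		/* Roll back, as the err_bad_param/err_config labels do. */
		apply_hdroom(&orig);
		printf("offload failed, headroom restored (mode=%d)\n",
		       current_hdroom.mode);
		return 1;
	}

	return 0;
}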