From patchwork Mon Dec 14 11:30:28 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 343862
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com,
 Ido Schimmel
Subject: [PATCH net-next v2 02/15] mlxsw: reg: Add Router XLT Enable Register
Date: Mon, 14 Dec 2020 13:30:28 +0200
Message-Id: <20201214113041.2789043-3-idosch@idosch.org>
In-Reply-To: <20201214113041.2789043-1-idosch@idosch.org>
References: <20201214113041.2789043-1-idosch@idosch.org>

From: Jiri Pirko

The RXLTE enables XLT (eXtended Lookup Table) LPM lookups if a capable
XM is present on the system.

Signed-off-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 drivers/net/ethernet/mellanox/mlxsw/reg.h | 44 +++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index e7979edadf4c..ebde4fc860e2 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -8469,6 +8469,49 @@ mlxsw_reg_rmft2_ipv6_pack(char *payload, bool v, u16 offset, u16 virtual_router,
 	mlxsw_reg_rmft2_sip6_mask_memcpy_to(payload, (void *)&sip6_mask);
 }
 
+/* RXLTE - Router XLT Enable Register
+ * ----------------------------------
+ * The RXLTE enables XLT (eXtended Lookup Table) LPM lookups if a capable
+ * XM is present on the system.
+ */
+
+#define MLXSW_REG_RXLTE_ID 0x8050
+#define MLXSW_REG_RXLTE_LEN 0x0C
+
+MLXSW_REG_DEFINE(rxlte, MLXSW_REG_RXLTE_ID, MLXSW_REG_RXLTE_LEN);
+
+/* reg_rxlte_virtual_router
+ * Virtual router ID associated with the router interface.
+ * Range is 0..cap_max_virtual_routers-1
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, rxlte, virtual_router, 0x00, 0, 16);
+
+enum mlxsw_reg_rxlte_protocol {
+	MLXSW_REG_RXLTE_PROTOCOL_IPV4,
+	MLXSW_REG_RXLTE_PROTOCOL_IPV6,
+};
+
+/* reg_rxlte_protocol
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, rxlte, protocol, 0x04, 0, 4);
+
+/* reg_rxlte_lpm_xlt_en
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, rxlte, lpm_xlt_en, 0x08, 0, 1);
+
+static inline void mlxsw_reg_rxlte_pack(char *payload, u16 virtual_router,
+					enum mlxsw_reg_rxlte_protocol protocol,
+					bool lpm_xlt_en)
+{
+	MLXSW_REG_ZERO(rxlte, payload);
+	mlxsw_reg_rxlte_virtual_router_set(payload, virtual_router);
+	mlxsw_reg_rxlte_protocol_set(payload, protocol);
+	mlxsw_reg_rxlte_lpm_xlt_en_set(payload, lpm_xlt_en);
+}
+
 /* Note that XMDR and XRALXX register positions violate the rule of ordering
  * register definitions by the ID. However, XRALXX pack helpers are
  * using RALXX pack helpers, RALXX registers have higher IDs.
  */

@@ -11754,6 +11797,7 @@ static const struct mlxsw_reg_info *mlxsw_reg_infos[] = {
 	MLXSW_REG(rigr2),
 	MLXSW_REG(recr2),
 	MLXSW_REG(rmft2),
+	MLXSW_REG(rxlte),
 	MLXSW_REG(xmdr),
 	MLXSW_REG(xralta),
 	MLXSW_REG(xralst),
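For orientation, a minimal usage sketch of the pack helper added above. The
series wires up the real enabling call later, so the wrapper name and call
site below are assumptions; only the pack-then-write pattern follows the
driver's existing idiom.

/* Hedged sketch: enable XLT LPM lookups for one virtual router.
 * The helper name and its place in the init sequence are hypothetical;
 * the RXLTE pack/write pair mirrors the register definition above.
 */
static int mlxsw_sp_router_xm_lpm_enable(struct mlxsw_sp *mlxsw_sp,
					 u16 virtual_router)
{
	char rxlte_pl[MLXSW_REG_RXLTE_LEN];

	mlxsw_reg_rxlte_pack(rxlte_pl, virtual_router,
			     MLXSW_REG_RXLTE_PROTOCOL_IPV4, true);
	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rxlte), rxlte_pl);
}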
From patchwork Mon Dec 14 11:30:31 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 343860
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com,
 Ido Schimmel
Subject: [PATCH net-next v2 05/15] mlxsw: Ignore ports that are connected to
 eXtended mezzanine
Date: Mon, 14 Dec 2020 13:30:31 +0200
Message-Id: <20201214113041.2789043-6-idosch@idosch.org>
In-Reply-To: <20201214113041.2789043-1-idosch@idosch.org>
References: <20201214113041.2789043-1-idosch@idosch.org>

From: Jiri Pirko

Use the information stored in the bus_info struct about the ports
connected to the eXtended mezzanine and do not expose them.

Signed-off-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 drivers/net/ethernet/mellanox/mlxsw/core.c     | 12 ++++++++++++
 drivers/net/ethernet/mellanox/mlxsw/core.h     |  1 +
 drivers/net/ethernet/mellanox/mlxsw/minimal.c  |  3 ++-
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c |  3 +++
 4 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index c67825a68a26..685037e052af 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -2856,6 +2856,18 @@ mlxsw_core_port_devlink_port_get(struct mlxsw_core *mlxsw_core,
 }
 EXPORT_SYMBOL(mlxsw_core_port_devlink_port_get);
 
+bool mlxsw_core_port_is_xm(const struct mlxsw_core *mlxsw_core, u8 local_port)
+{
+	const struct mlxsw_bus_info *bus_info = mlxsw_core->bus_info;
+	int i;
+
+	for (i = 0; i < bus_info->xm_local_ports_count; i++)
+		if (bus_info->xm_local_ports[i] == local_port)
+			return true;
+	return false;
+}
+EXPORT_SYMBOL(mlxsw_core_port_is_xm);
+
 struct mlxsw_env *mlxsw_core_env(const struct mlxsw_core *mlxsw_core)
 {
 	return mlxsw_core->env;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
index ec424d388ecc..6558f9cde3d6 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
@@ -223,6 +223,7 @@ enum devlink_port_type mlxsw_core_port_type_get(struct mlxsw_core *mlxsw_core,
 struct devlink_port *
 mlxsw_core_port_devlink_port_get(struct mlxsw_core *mlxsw_core,
 				 u8 local_port);
+bool mlxsw_core_port_is_xm(const struct mlxsw_core *mlxsw_core, u8 local_port);
 struct mlxsw_env *mlxsw_core_env(const struct mlxsw_core *mlxsw_core);
 bool mlxsw_core_is_initialized(const struct mlxsw_core *mlxsw_core);
 int mlxsw_core_module_max_width(struct mlxsw_core *mlxsw_core, u8 module);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/minimal.c b/drivers/net/ethernet/mellanox/mlxsw/minimal.c
index c010db2c9dba..b34c44723f8b 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/minimal.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/minimal.c
@@ -291,7 +291,8 @@ static int mlxsw_m_ports_create(struct mlxsw_m *mlxsw_m)
 
 	/* Create port objects for each valid entry */
 	for (i = 0; i < mlxsw_m->max_ports; i++) {
-		if (mlxsw_m->module_to_port[i] > 0) {
+		if (mlxsw_m->module_to_port[i] > 0 &&
+		    !mlxsw_core_port_is_xm(mlxsw_m->core, i)) {
 			err = mlxsw_m_port_create(mlxsw_m,
 						  mlxsw_m->module_to_port[i],
 						  i);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index df8175cd44ab..516d6cb45c9f 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -1840,6 +1840,9 @@ static int mlxsw_sp_port_module_info_init(struct mlxsw_sp *mlxsw_sp)
 		return -ENOMEM;
 
 	for (i = 1; i < max_ports; i++) {
+		if (mlxsw_core_port_is_xm(mlxsw_sp->core, i))
+			continue;
+
 		err = mlxsw_sp_port_module_info_get(mlxsw_sp, i, &port_mapping);
 		if (err)
 			goto err_port_module_info_get;
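A brief design note: mlxsw_core_port_is_xm() is a linear scan over
bus_info->xm_local_ports, which is entirely adequate for the handful of
XM-connected ports involved. For contrast, a hypothetical O(1) variant using
a bitmap might look like the sketch below; the names and the port-count bound
are assumptions, not driver code.

/* Hedged sketch of an O(1) alternative. Illustrative only; the driver's
 * small linear scan is the simpler and perfectly adequate choice here.
 */
#include <linux/bitmap.h>

#define SKETCH_MAX_PORTS 256 /* assumed bound, not a driver constant */

static DECLARE_BITMAP(sketch_xm_ports, SKETCH_MAX_PORTS);

static void sketch_xm_ports_init(const struct mlxsw_bus_info *bus_info)
{
	int i;

	for (i = 0; i < bus_info->xm_local_ports_count; i++)
		set_bit(bus_info->xm_local_ports[i], sketch_xm_ports);
}

static bool sketch_port_is_xm(u8 local_port)
{
	return test_bit(local_port, sketch_xm_ports);
}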
From patchwork Mon Dec 14 11:30:35 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 343856
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com,
 Ido Schimmel
Subject: [PATCH net-next v2 09/15] mlxsw: reg: Add XM Router M Table Register
Date: Mon, 14 Dec 2020 13:30:35 +0200
Message-Id: <20201214113041.2789043-10-idosch@idosch.org>
In-Reply-To: <20201214113041.2789043-1-idosch@idosch.org>
References: <20201214113041.2789043-1-idosch@idosch.org>

From: Jiri Pirko

The XRMT configures the M-Table for the XLT-LPM.

Signed-off-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 drivers/net/ethernet/mellanox/mlxsw/reg.h | 33 +++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index 6db3a5b22f5d..0e3abb315e06 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -8543,10 +8543,10 @@ static inline void mlxsw_reg_rxltm_pack(char *payload, u8 m0_val_v4, u8 m0_val_v
 	mlxsw_reg_rxltm_m0_val_v4_set(payload, m0_val_v4);
 }
 
-/* Note that XLTQ, XMDR and XRALXX register positions violate the rule
+/* Note that XLTQ, XMDR, XRMT and XRALXX register positions violate the rule
  * of ordering register definitions by the ID. However, XRALXX pack helpers are
  * using RALXX pack helpers, RALXX registers have higher IDs.
- * Also XMDR is using RALUE enums. XLTQ is just put alongside with the
+ * Also XMDR is using RALUE enums. XLTQ and XRMT are just put alongside with the
  * related registers.
  */
 
@@ -8874,6 +8874,34 @@ static inline void mlxsw_reg_xmdr_c_ltr_act_ip2me_tun_pack(char *xmdr_payload,
 	mlxsw_reg_xmdr_c_ltr_pointer_to_tunnel_set(payload, pointer_to_tunnel);
 }
 
+/* XRMT - XM Router M Table Register
+ * ---------------------------------
+ * The XRMT configures the M-Table for the XLT-LPM.
+ */
+#define MLXSW_REG_XRMT_ID 0x7810
+#define MLXSW_REG_XRMT_LEN 0x14
+
+MLXSW_REG_DEFINE(xrmt, MLXSW_REG_XRMT_ID, MLXSW_REG_XRMT_LEN);
+
+/* reg_xrmt_index
+ * Index in M-Table.
+ * Range 0..cap_xlt_mtable-1
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, xrmt, index, 0x04, 0, 20);
+
+/* reg_xrmt_l0_val
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, xrmt, l0_val, 0x10, 24, 8);
+
+static inline void mlxsw_reg_xrmt_pack(char *payload, u32 index, u8 l0_val)
+{
+	MLXSW_REG_ZERO(xrmt, payload);
+	mlxsw_reg_xrmt_index_set(payload, index);
+	mlxsw_reg_xrmt_l0_val_set(payload, l0_val);
+}
+
 /* XRALTA - XM Router Algorithmic LPM Tree Allocation Register
  * -----------------------------------------------------------
  * The XRALTA is used to allocate the XLT LPM trees.
@@ -11891,6 +11919,7 @@ static const struct mlxsw_reg_info *mlxsw_reg_infos[] = {
 	MLXSW_REG(rxltm),
 	MLXSW_REG(xltq),
 	MLXSW_REG(xmdr),
+	MLXSW_REG(xrmt),
 	MLXSW_REG(xralta),
 	MLXSW_REG(xralst),
 	MLXSW_REG(xraltb),
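For orientation, a minimal pack/write sketch for the new register. The
wrapper name is hypothetical; the next patch in the series adds the real
caller, keyed off the tracked L-values.

/* Hedged sketch: program the L0 value for one M-table entry.
 * Mirrors the driver's usual pack-then-write pattern.
 */
static int mlxsw_sp_router_xm_mtable_write(struct mlxsw_sp *mlxsw_sp,
					   u32 mindex, u8 l0_val)
{
	char xrmt_pl[MLXSW_REG_XRMT_LEN];

	mlxsw_reg_xrmt_pack(xrmt_pl, mindex, l0_val);
	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(xrmt), xrmt_pl);
}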
From patchwork Mon Dec 14 11:30:36 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 343857
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com,
 Ido Schimmel
Subject: [PATCH net-next v2 10/15] mlxsw: spectrum_router_xm: Implement
 L-value tracking for M-index
Date: Mon, 14 Dec 2020 13:30:36 +0200
Message-Id: <20201214113041.2789043-11-idosch@idosch.org>
In-Reply-To: <20201214113041.2789043-1-idosch@idosch.org>
References: <20201214113041.2789043-1-idosch@idosch.org>

From: Jiri Pirko

There is a table that assigns an L-value per M-index. The L-value is
always the biggest one among the currently inserted prefixes. Set up a
hashtable to track the M-index information and the prefixes related to
it, and ensure the L-value is always correctly set.

Signed-off-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 .../ethernet/mellanox/mlxsw/spectrum_router.c |   1 +
 .../ethernet/mellanox/mlxsw/spectrum_router.h |   1 +
 .../mellanox/mlxsw/spectrum_router_xm.c       | 203 ++++++++++++++++++
 3 files changed, 205 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index 3b32d9648578..62d51b281b58 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -7098,6 +7098,7 @@ static void mlxsw_sp_router_fib_event_work(struct work_struct *work)
 		op_ctx->bulk_ok = !list_is_last(&fib_event->list, &fib_event_queue) &&
 				  fib_event->family == next_fib_event->family &&
 				  fib_event->event == next_fib_event->event;
+		op_ctx->event = fib_event->event;
 
 		/* In case family of this and the previous entry are different, context
 		 * reinitialization is going to be needed now, indicate that.
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
index d6f7aba6eb9c..31612891ad48 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.h
@@ -24,6 +24,7 @@ struct mlxsw_sp_fib_entry_op_ctx {
 			 * the context priv is initialized.
 			 */
 	struct list_head fib_entry_priv_list;
+	unsigned long event;
 	unsigned long ll_priv[];
 };
 
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router_xm.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router_xm.c
index 966a20f3bc0d..c069092aa5ac 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router_xm.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router_xm.c
@@ -3,6 +3,7 @@
 
 #include <linux/kernel.h>
 #include <linux/types.h>
+#include <linux/rhashtable.h>
 
 #include "spectrum.h"
 #include "core.h"
@@ -16,14 +17,35 @@ static const u8 mlxsw_sp_router_xm_m_val[] = {
 	[MLXSW_SP_L3_PROTO_IPV6] = 0, /* Currently unused. */
 };
 
+#define MLXSW_SP_ROUTER_XM_L_VAL_MAX 16
+
 struct mlxsw_sp_router_xm {
 	bool ipv4_supported;
 	bool ipv6_supported;
 	unsigned int entries_size;
+	struct rhashtable ltable_ht;
+};
+
+struct mlxsw_sp_router_xm_ltable_node {
+	struct rhash_head ht_node; /* Member of router_xm->ltable_ht */
+	u16 mindex;
+	u8 current_lvalue;
+	refcount_t refcnt;
+	unsigned int lvalue_ref[MLXSW_SP_ROUTER_XM_L_VAL_MAX + 1];
+};
+
+static const struct rhashtable_params mlxsw_sp_router_xm_ltable_ht_params = {
+	.key_offset = offsetof(struct mlxsw_sp_router_xm_ltable_node, mindex),
+	.head_offset = offsetof(struct mlxsw_sp_router_xm_ltable_node, ht_node),
+	.key_len = sizeof(u16),
+	.automatic_shrinking = true,
 };
 
 struct mlxsw_sp_router_xm_fib_entry {
 	bool committed;
+	struct mlxsw_sp_router_xm_ltable_node *ltable_node; /* Parent node */
+	u16 mindex; /* Store for processing from commit op */
+	u8 lvalue;
 };
 
 #define MLXSW_SP_ROUTE_LL_XM_ENTRIES_MAX \
@@ -68,6 +90,20 @@ static int mlxsw_sp_router_ll_xm_raltb_write(struct mlxsw_sp *mlxsw_sp, char *xr
 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(xraltb), xraltb_pl);
 }
 
+static u16 mlxsw_sp_router_ll_xm_mindex_get4(const u32 addr)
+{
+	/* Currently the M-index is set to linear mode. That means it is
+	 * defined as the 16 MSBs of the IP address.
+	 */
+	return addr >> MLXSW_SP_ROUTER_XM_L_VAL_MAX;
+}
+
+static u16 mlxsw_sp_router_ll_xm_mindex_get6(const unsigned char *addr)
+{
+	WARN_ON_ONCE(1);
+	return 0; /* currently unused */
+}
+
 static void mlxsw_sp_router_ll_xm_op_ctx_check_init(struct mlxsw_sp_fib_entry_op_ctx *op_ctx,
 						    struct mlxsw_sp_fib_entry_op_ctx_xm *op_ctx_xm)
 {
@@ -114,11 +150,13 @@ static void mlxsw_sp_router_ll_xm_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ct
 		len = mlxsw_reg_xmdr_c_ltr_pack4(op_ctx_xm->xmdr_pl, op_ctx_xm->trans_offset,
 						 op_ctx_xm->entries_count, xmdr_c_ltr_op,
 						 virtual_router, prefix_len, (u32 *) addr);
+		fib_entry->mindex = mlxsw_sp_router_ll_xm_mindex_get4(*((u32 *) addr));
 		break;
 	case MLXSW_SP_L3_PROTO_IPV6:
 		len = mlxsw_reg_xmdr_c_ltr_pack6(op_ctx_xm->xmdr_pl, op_ctx_xm->trans_offset,
 						 op_ctx_xm->entries_count, xmdr_c_ltr_op,
 						 virtual_router, prefix_len, addr);
+		fib_entry->mindex = mlxsw_sp_router_ll_xm_mindex_get6(addr);
 		break;
 	default:
 		WARN_ON_ONCE(1);
@@ -130,6 +168,9 @@ static void mlxsw_sp_router_ll_xm_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ct
 	WARN_ON_ONCE(op_ctx_xm->trans_item_len != len);
 
 	op_ctx_xm->entries[op_ctx_xm->entries_count] = fib_entry;
+
+	fib_entry->lvalue = prefix_len > mlxsw_sp_router_xm_m_val[proto] ?
+			    prefix_len - mlxsw_sp_router_xm_m_val[proto] : 0;
 }
 
 static void
@@ -172,6 +213,147 @@ mlxsw_sp_router_ll_xm_fib_entry_act_ip2me_tun_pack(struct mlxsw_sp_fib_entry_op_
 					       tunnel_ptr);
 }
 
+static struct mlxsw_sp_router_xm_ltable_node *
+mlxsw_sp_router_xm_ltable_node_get(struct mlxsw_sp_router_xm *router_xm, u16 mindex)
+{
+	struct mlxsw_sp_router_xm_ltable_node *ltable_node;
+	int err;
+
+	ltable_node = rhashtable_lookup_fast(&router_xm->ltable_ht, &mindex,
+					     mlxsw_sp_router_xm_ltable_ht_params);
+	if (ltable_node) {
+		refcount_inc(&ltable_node->refcnt);
+		return ltable_node;
+	}
+	ltable_node = kzalloc(sizeof(*ltable_node), GFP_KERNEL);
+	if (!ltable_node)
+		return ERR_PTR(-ENOMEM);
+	ltable_node->mindex = mindex;
+	refcount_set(&ltable_node->refcnt, 1);
+
+	err = rhashtable_insert_fast(&router_xm->ltable_ht, &ltable_node->ht_node,
+				     mlxsw_sp_router_xm_ltable_ht_params);
+	if (err)
+		goto err_insert;
+
+	return ltable_node;
+
+err_insert:
+	kfree(ltable_node);
+	return ERR_PTR(err);
+}
+
+static void mlxsw_sp_router_xm_ltable_node_put(struct mlxsw_sp_router_xm *router_xm,
+					       struct mlxsw_sp_router_xm_ltable_node *ltable_node)
+{
+	if (!refcount_dec_and_test(&ltable_node->refcnt))
+		return;
+	rhashtable_remove_fast(&router_xm->ltable_ht, &ltable_node->ht_node,
+			       mlxsw_sp_router_xm_ltable_ht_params);
+	kfree(ltable_node);
+}
+
+static int mlxsw_sp_router_xm_ltable_lvalue_set(struct mlxsw_sp *mlxsw_sp,
+						struct mlxsw_sp_router_xm_ltable_node *ltable_node)
+{
+	char xrmt_pl[MLXSW_REG_XRMT_LEN];
+
+	mlxsw_reg_xrmt_pack(xrmt_pl, ltable_node->mindex, ltable_node->current_lvalue);
+	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(xrmt), xrmt_pl);
+}
+
+static int
+mlxsw_sp_router_xm_ml_entry_add(struct mlxsw_sp *mlxsw_sp,
+				struct mlxsw_sp_router_xm_fib_entry *fib_entry)
+{
+	struct mlxsw_sp_router_xm *router_xm = mlxsw_sp->router->xm;
+	struct mlxsw_sp_router_xm_ltable_node *ltable_node;
+	u8 lvalue = fib_entry->lvalue;
+	int err;
+
+	ltable_node = mlxsw_sp_router_xm_ltable_node_get(router_xm,
+							 fib_entry->mindex);
+	if (IS_ERR(ltable_node))
+		return PTR_ERR(ltable_node);
+	if (lvalue > ltable_node->current_lvalue) {
+		/* The L-value is bigger than the one currently set, update. */
+		ltable_node->current_lvalue = lvalue;
+		err = mlxsw_sp_router_xm_ltable_lvalue_set(mlxsw_sp,
+							   ltable_node);
+		if (err)
+			goto err_lvalue_set;
+	}
+
+	ltable_node->lvalue_ref[lvalue]++;
+	fib_entry->ltable_node = ltable_node;
+	return 0;
+
+err_lvalue_set:
+	mlxsw_sp_router_xm_ltable_node_put(router_xm, ltable_node);
+	return err;
+}
+
+static void
+mlxsw_sp_router_xm_ml_entry_del(struct mlxsw_sp *mlxsw_sp,
+				struct mlxsw_sp_router_xm_fib_entry *fib_entry)
+{
+	struct mlxsw_sp_router_xm_ltable_node *ltable_node =
+							fib_entry->ltable_node;
+	struct mlxsw_sp_router_xm *router_xm = mlxsw_sp->router->xm;
+	u8 lvalue = fib_entry->lvalue;
+
+	ltable_node->lvalue_ref[lvalue]--;
+	if (lvalue == ltable_node->current_lvalue && lvalue &&
+	    !ltable_node->lvalue_ref[lvalue]) {
+		u8 new_lvalue = lvalue - 1;
+
+		/* Find the biggest L-value left out there. */
+		while (new_lvalue > 0 && !ltable_node->lvalue_ref[new_lvalue])
+			new_lvalue--;
+
+		ltable_node->current_lvalue = new_lvalue;
+		mlxsw_sp_router_xm_ltable_lvalue_set(mlxsw_sp, ltable_node);
+	}
+	mlxsw_sp_router_xm_ltable_node_put(router_xm, ltable_node);
+}
+
+static int
+mlxsw_sp_router_xm_ml_entries_add(struct mlxsw_sp *mlxsw_sp,
+				  struct mlxsw_sp_fib_entry_op_ctx_xm *op_ctx_xm)
+{
+	struct mlxsw_sp_router_xm_fib_entry *fib_entry;
+	int err;
+	int i;
+
+	for (i = 0; i < op_ctx_xm->entries_count; i++) {
+		fib_entry = op_ctx_xm->entries[i];
+		err = mlxsw_sp_router_xm_ml_entry_add(mlxsw_sp, fib_entry);
+		if (err)
+			goto rollback;
+	}
+	return 0;
+
+rollback:
+	for (i--; i >= 0; i--) {
+		fib_entry = op_ctx_xm->entries[i];
+		mlxsw_sp_router_xm_ml_entry_del(mlxsw_sp, fib_entry);
+	}
+	return err;
+}
+
+static void
+mlxsw_sp_router_xm_ml_entries_del(struct mlxsw_sp *mlxsw_sp,
+				  struct mlxsw_sp_fib_entry_op_ctx_xm *op_ctx_xm)
+{
+	struct mlxsw_sp_router_xm_fib_entry *fib_entry;
+	int i;
+
+	for (i = 0; i < op_ctx_xm->entries_count; i++) {
+		fib_entry = op_ctx_xm->entries[i];
+		mlxsw_sp_router_xm_ml_entry_del(mlxsw_sp, fib_entry);
+	}
+}
+
 static int mlxsw_sp_router_ll_xm_fib_entry_commit(struct mlxsw_sp *mlxsw_sp,
 						  struct mlxsw_sp_fib_entry_op_ctx *op_ctx,
 						  bool *postponed_for_bulk)
@@ -197,6 +379,15 @@ static int mlxsw_sp_router_ll_xm_fib_entry_commit(struct mlxsw_sp *mlxsw_sp,
 		return 0;
 	}
 
+	if (op_ctx->event == FIB_EVENT_ENTRY_REPLACE) {
+		/* The L-table is updated inside. It has to be done before
+		 * the prefix is inserted.
+		 */
+		err = mlxsw_sp_router_xm_ml_entries_add(mlxsw_sp, op_ctx_xm);
+		if (err)
+			goto out;
+	}
+
 	err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(xmdr), op_ctx_xm->xmdr_pl);
 	if (err)
 		goto out;
@@ -217,6 +408,12 @@ static int mlxsw_sp_router_ll_xm_fib_entry_commit(struct mlxsw_sp *mlxsw_sp,
 		}
 	}
 
+	if (op_ctx->event == FIB_EVENT_ENTRY_DEL)
+		/* The L-table is updated inside. It has to be done after
+		 * the prefix was removed.
+		 */
+		mlxsw_sp_router_xm_ml_entries_del(mlxsw_sp, op_ctx_xm);
+
 out:
 	/* Next pack call is going to do reinitialization */
 	op_ctx->initialized = false;
@@ -289,9 +486,14 @@ int mlxsw_sp_router_xm_init(struct mlxsw_sp *mlxsw_sp)
 	if (err)
 		goto err_rxltm_write;
 
+	err = rhashtable_init(&router_xm->ltable_ht, &mlxsw_sp_router_xm_ltable_ht_params);
+	if (err)
+		goto err_ltable_ht_init;
+
 	mlxsw_sp->router->xm = router_xm;
 
 	return 0;
 
+err_ltable_ht_init:
 err_rxltm_write:
 err_mindex_size_check:
 err_device_id_check:
@@ -307,5 +509,6 @@ void mlxsw_sp_router_xm_fini(struct mlxsw_sp *mlxsw_sp)
 	if (!mlxsw_sp->bus_info->xm_exists)
 		return;
 
+	rhashtable_destroy(&router_xm->ltable_ht);
 	kfree(router_xm);
 }
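To make the bookkeeping above concrete, here is a small worked example of the
M-index and L-value computation. The standalone model below is illustrative
only (plain C, not driver code), and the sample prefixes are arbitrary.

#include <stdint.h>
#include <assert.h>

#define XM_M_VAL 16 /* matches the IPv4 entry of mlxsw_sp_router_xm_m_val[] */

/* Linear mode: the M-index is the 16 MSBs of the IPv4 address. */
static uint16_t mindex_get4(uint32_t addr)
{
	return addr >> 16;
}

/* L-value: how many prefix bits extend past the M-value. */
static uint8_t lvalue_get(uint8_t prefix_len)
{
	return prefix_len > XM_M_VAL ? prefix_len - XM_M_VAL : 0;
}

int main(void)
{
	/* 10.20.0.0/24 and 10.20.0.0/20 share M-index 0x0a14. */
	assert(mindex_get4(0x0a140000) == 0x0a14);
	assert(lvalue_get(24) == 8);	/* lvalue_ref[8]++ */
	assert(lvalue_get(20) == 4);	/* lvalue_ref[4]++ */
	/* The M-table entry must carry the biggest L-value, 8. When the
	 * /24 is deleted and lvalue_ref[8] drops to zero, current_lvalue
	 * falls back to the next referenced value, 4.
	 */
	return 0;
}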
From patchwork Mon Dec 14 11:30:38 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 343861
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com,
 Ido Schimmel
Subject: [PATCH net-next v2 12/15] mlxsw: reg: Add Router LPM Cache Enable
 Register
Date: Mon, 14 Dec 2020 13:30:38 +0200
Message-Id: <20201214113041.2789043-13-idosch@idosch.org>
In-Reply-To: <20201214113041.2789043-1-idosch@idosch.org>
References: <20201214113041.2789043-1-idosch@idosch.org>

From: Jiri Pirko

The RLPMCE allows disabling the LPM cache. It can be changed on the fly.

Signed-off-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 drivers/net/ethernet/mellanox/mlxsw/reg.h | 35 +++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index f1c5a532454e..16e2df6ef2f4 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -8653,6 +8653,40 @@ static inline void mlxsw_reg_rlcmld_pack6(char *payload,
 	mlxsw_reg_rlcmld_dip_mask6_memcpy_to(payload, dip_mask);
 }
 
+/* RLPMCE - Router LPM Cache Enable Register
+ * -----------------------------------------
+ * Allows disabling the LPM cache. Can be changed on the fly.
+ */
+
+#define MLXSW_REG_RLPMCE_ID 0x8056
+#define MLXSW_REG_RLPMCE_LEN 0x4
+
+MLXSW_REG_DEFINE(rlpmce, MLXSW_REG_RLPMCE_ID, MLXSW_REG_RLPMCE_LEN);
+
+/* reg_rlpmce_flush
+ * Flush:
+ * 0: do not flush the cache (default)
+ * 1: flush (clear) the cache
+ * Access: WO
+ */
+MLXSW_ITEM32(reg, rlpmce, flush, 0x00, 4, 1);
+
+/* reg_rlpmce_disable
+ * LPM cache:
+ * 0: enabled (default)
+ * 1: disabled
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, rlpmce, disable, 0x00, 0, 1);
+
+static inline void mlxsw_reg_rlpmce_pack(char *payload, bool flush,
+					 bool disable)
+{
+	MLXSW_REG_ZERO(rlpmce, payload);
+	mlxsw_reg_rlpmce_flush_set(payload, flush);
+	mlxsw_reg_rlpmce_disable_set(payload, disable);
+}
+
 /* Note that XLTQ, XMDR, XRMT and XRALXX register positions violate the rule
  * of ordering register definitions by the ID. However, XRALXX pack helpers are
  * using RALXX pack helpers, RALXX registers have higher IDs.
  */

@@ -12028,6 +12062,7 @@ static const struct mlxsw_reg_info *mlxsw_reg_infos[] = {
 	MLXSW_REG(rxlte),
 	MLXSW_REG(rxltm),
 	MLXSW_REG(rlcmld),
+	MLXSW_REG(rlpmce),
 	MLXSW_REG(xltq),
 	MLXSW_REG(xmdr),
 	MLXSW_REG(xrmt),
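For orientation, a minimal sketch of a one-shot cache flush using the new
helper. The wrapper name is hypothetical, though patch 13/15 of this series
issues exactly this pack/write pair (flush = true, disable = false) from its
flush work.

/* Hedged sketch: flush the LPM cache once, leaving it enabled. */
static int mlxsw_sp_lpm_cache_flush(struct mlxsw_sp *mlxsw_sp)
{
	char rlpmce_pl[MLXSW_REG_RLPMCE_LEN];

	mlxsw_reg_rlpmce_pack(rlpmce_pl, true, false);
	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rlpmce), rlpmce_pl);
}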
From patchwork Mon Dec 14 11:30:39 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 343859
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com,
 Ido Schimmel
Subject: [PATCH net-next v2 13/15] mlxsw: spectrum_router_xm: Introduce basic
 XM cache flushing
Date: Mon, 14 Dec 2020 13:30:39 +0200
Message-Id: <20201214113041.2789043-14-idosch@idosch.org>
In-Reply-To: <20201214113041.2789043-1-idosch@idosch.org>
References: <20201214113041.2789043-1-idosch@idosch.org>

From: Jiri Pirko

Upon route insertion and removal, possibly cached entries need to be
flushed from the XM cache. Extend the XM op context to carry the
information needed for the flush. Implement the flush in delayed work,
since for HW design reasons there is a need to wait 50 usec before the
flush can be done. If the same flush request arrives during this time,
consolidate it with the first one. Track these queued flushes in a
hashtable.

v2:
* Fix GENMASK() high bit

Signed-off-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 .../mellanox/mlxsw/spectrum_router_xm.c       | 291 ++++++++++++++++++
 1 file changed, 291 insertions(+)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router_xm.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router_xm.c
index c069092aa5ac..2f1e70e5a262 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router_xm.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router_xm.c
@@ -24,6 +24,9 @@ struct mlxsw_sp_router_xm {
 	bool ipv6_supported;
 	unsigned int entries_size;
 	struct rhashtable ltable_ht;
+	struct rhashtable flush_ht; /* Stores items about to be flushed from cache */
+	unsigned int flush_count;
+	bool flush_all_mode;
 };
 
 struct mlxsw_sp_router_xm_ltable_node {
@@ -41,11 +44,20 @@ static const struct rhashtable_params mlxsw_sp_router_xm_ltable_ht_params = {
 	.automatic_shrinking = true,
 };
 
+struct mlxsw_sp_router_xm_flush_info {
+	bool all;
+	enum mlxsw_sp_l3proto proto;
+	u16 virtual_router;
+	u8 prefix_len;
+	unsigned char addr[sizeof(struct in6_addr)];
+};
+
 struct mlxsw_sp_router_xm_fib_entry {
 	bool committed;
 	struct mlxsw_sp_router_xm_ltable_node *ltable_node; /* Parent node */
 	u16 mindex; /* Store for processing from commit op */
 	u8 lvalue;
+	struct mlxsw_sp_router_xm_flush_info flush_info;
 };
 
 #define MLXSW_SP_ROUTE_LL_XM_ENTRIES_MAX \
@@ -125,6 +137,7 @@ static void mlxsw_sp_router_ll_xm_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ct
 {
 	struct mlxsw_sp_fib_entry_op_ctx_xm *op_ctx_xm = (void *) op_ctx->ll_priv;
 	struct mlxsw_sp_router_xm_fib_entry *fib_entry = (void *) priv->priv;
+	struct mlxsw_sp_router_xm_flush_info *flush_info;
 	enum mlxsw_reg_xmdr_c_ltr_op xmdr_c_ltr_op;
 	unsigned int len;
 
@@ -171,6 +184,15 @@ static void mlxsw_sp_router_ll_xm_fib_entry_pack(struct mlxsw_sp_fib_entry_op_ct
 
 	fib_entry->lvalue = prefix_len > mlxsw_sp_router_xm_m_val[proto] ?
 			    prefix_len - mlxsw_sp_router_xm_m_val[proto] : 0;
+
+	flush_info = &fib_entry->flush_info;
+	flush_info->proto = proto;
+	flush_info->virtual_router = virtual_router;
+	flush_info->prefix_len = prefix_len;
+	if (addr)
+		memcpy(flush_info->addr, addr, sizeof(flush_info->addr));
+	else
+		memset(flush_info->addr, 0, sizeof(flush_info->addr));
 }
 
 static void
@@ -262,6 +284,231 @@ static int mlxsw_sp_router_xm_ltable_lvalue_set(struct mlxsw_sp *mlxsw_sp,
 	return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(xrmt), xrmt_pl);
 }
 
+struct mlxsw_sp_router_xm_flush_node {
+	struct rhash_head ht_node; /* Member of router_xm->flush_ht */
+	struct list_head list;
+	struct mlxsw_sp_router_xm_flush_info flush_info;
+	struct delayed_work dw;
+	struct mlxsw_sp *mlxsw_sp;
+	unsigned long start_jiffies;
+	unsigned int reuses; /* By how many flush calls this was reused. */
+	refcount_t refcnt;
+};
+
+static const struct rhashtable_params mlxsw_sp_router_xm_flush_ht_params = {
+	.key_offset = offsetof(struct mlxsw_sp_router_xm_flush_node, flush_info),
+	.head_offset = offsetof(struct mlxsw_sp_router_xm_flush_node, ht_node),
+	.key_len = sizeof(struct mlxsw_sp_router_xm_flush_info),
+	.automatic_shrinking = true,
+};
+
+static struct mlxsw_sp_router_xm_flush_node *
+mlxsw_sp_router_xm_cache_flush_node_create(struct mlxsw_sp *mlxsw_sp,
+					   struct mlxsw_sp_router_xm_flush_info *flush_info)
+{
+	struct mlxsw_sp_router_xm *router_xm = mlxsw_sp->router->xm;
+	struct mlxsw_sp_router_xm_flush_node *flush_node;
+	int err;
+
+	flush_node = kzalloc(sizeof(*flush_node), GFP_KERNEL);
+	if (!flush_node)
+		return ERR_PTR(-ENOMEM);
+
+	flush_node->flush_info = *flush_info;
+	err = rhashtable_insert_fast(&router_xm->flush_ht, &flush_node->ht_node,
+				     mlxsw_sp_router_xm_flush_ht_params);
+	if (err) {
+		kfree(flush_node);
+		return ERR_PTR(err);
+	}
+	router_xm->flush_count++;
+	flush_node->mlxsw_sp = mlxsw_sp;
+	flush_node->start_jiffies = jiffies;
+	refcount_set(&flush_node->refcnt, 1);
+	return flush_node;
+}
+
+static void
+mlxsw_sp_router_xm_cache_flush_node_hold(struct mlxsw_sp_router_xm_flush_node *flush_node)
+{
+	if (!flush_node)
+		return;
+	refcount_inc(&flush_node->refcnt);
+}
+
+static void
+mlxsw_sp_router_xm_cache_flush_node_put(struct mlxsw_sp_router_xm_flush_node *flush_node)
+{
+	if (!flush_node || !refcount_dec_and_test(&flush_node->refcnt))
+		return;
+	kfree(flush_node);
+}
+
+static void
+mlxsw_sp_router_xm_cache_flush_node_destroy(struct mlxsw_sp *mlxsw_sp,
+					    struct mlxsw_sp_router_xm_flush_node *flush_node)
+{
+	struct mlxsw_sp_router_xm *router_xm = mlxsw_sp->router->xm;
+
+	router_xm->flush_count--;
+	rhashtable_remove_fast(&router_xm->flush_ht, &flush_node->ht_node,
+			       mlxsw_sp_router_xm_flush_ht_params);
+	mlxsw_sp_router_xm_cache_flush_node_put(flush_node);
+}
+
+static u32 mlxsw_sp_router_xm_flush_mask4(u8 prefix_len)
+{
+	return GENMASK(31, 32 - prefix_len);
+}
+
+static unsigned char *mlxsw_sp_router_xm_flush_mask6(u8 prefix_len)
+{
+	static unsigned char mask[sizeof(struct in6_addr)];
+
+	memset(mask, 0, sizeof(mask));
+	memset(mask, 0xff, prefix_len / 8);
+	mask[prefix_len / 8] = GENMASK(8, 8 - prefix_len % 8);
+	return mask;
+}
+
+#define MLXSW_SP_ROUTER_XM_CACHE_PARALLEL_FLUSHES_LIMIT 15
+#define MLXSW_SP_ROUTER_XM_CACHE_FLUSH_ALL_MIN_REUSES 15
+#define MLXSW_SP_ROUTER_XM_CACHE_DELAY 50 /* usecs */
+#define MLXSW_SP_ROUTER_XM_CACHE_MAX_WAIT (MLXSW_SP_ROUTER_XM_CACHE_DELAY * 10)
+
+static void mlxsw_sp_router_xm_cache_flush_work(struct work_struct *work)
+{
+	struct mlxsw_sp_router_xm_flush_info *flush_info;
+	struct mlxsw_sp_router_xm_flush_node *flush_node;
+	char rlcmld_pl[MLXSW_REG_RLCMLD_LEN];
+	enum mlxsw_reg_rlcmld_select select;
+	struct mlxsw_sp *mlxsw_sp;
+	u32 addr4;
+	int err;
+
+	flush_node = container_of(work, struct mlxsw_sp_router_xm_flush_node,
+				  dw.work);
+	mlxsw_sp = flush_node->mlxsw_sp;
+	flush_info = &flush_node->flush_info;
+
+	if (flush_info->all) {
+		char rlpmce_pl[MLXSW_REG_RLPMCE_LEN];
+
+		mlxsw_reg_rlpmce_pack(rlpmce_pl, true, false);
+		err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rlpmce),
+				      rlpmce_pl);
+		if (err)
+			dev_err(mlxsw_sp->bus_info->dev, "Failed to flush XM cache\n");
+
+		if (flush_node->reuses <
+		    MLXSW_SP_ROUTER_XM_CACHE_FLUSH_ALL_MIN_REUSES)
+			/* Leaving flush-all mode. */
+			mlxsw_sp->router->xm->flush_all_mode = false;
+		goto out;
+	}
+
+	select = MLXSW_REG_RLCMLD_SELECT_M_AND_ML_ENTRIES;
+
+	switch (flush_info->proto) {
+	case MLXSW_SP_L3_PROTO_IPV4:
+		addr4 = *((u32 *) flush_info->addr);
+		addr4 &= mlxsw_sp_router_xm_flush_mask4(flush_info->prefix_len);
+
+		/* In case the flush prefix length is bigger than the M-value,
+		 * it makes no sense to flush M entries. So just flush
+		 * the ML entries.
+		 */
+		if (flush_info->prefix_len > MLXSW_SP_ROUTER_XM_M_VAL)
+			select = MLXSW_REG_RLCMLD_SELECT_ML_ENTRIES;
+
+		mlxsw_reg_rlcmld_pack4(rlcmld_pl, select,
+				       flush_info->virtual_router, addr4,
+				       mlxsw_sp_router_xm_flush_mask4(flush_info->prefix_len));
+		break;
+	case MLXSW_SP_L3_PROTO_IPV6:
+		mlxsw_reg_rlcmld_pack6(rlcmld_pl, select,
+				       flush_info->virtual_router, flush_info->addr,
+				       mlxsw_sp_router_xm_flush_mask6(flush_info->prefix_len));
+		break;
+	default:
+		WARN_ON(true);
+		goto out;
+	}
+	err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(rlcmld), rlcmld_pl);
+	if (err)
+		dev_err(mlxsw_sp->bus_info->dev, "Failed to flush XM cache\n");
+
+out:
+	mlxsw_sp_router_xm_cache_flush_node_destroy(mlxsw_sp, flush_node);
+}
+
+static bool
+mlxsw_sp_router_xm_cache_flush_may_cancel(struct mlxsw_sp_router_xm_flush_node *flush_node)
+{
+	unsigned long max_wait = usecs_to_jiffies(MLXSW_SP_ROUTER_XM_CACHE_MAX_WAIT);
+	unsigned long delay = usecs_to_jiffies(MLXSW_SP_ROUTER_XM_CACHE_DELAY);
+
+	/* In case there is the same flushing work pending, check if we can
+	 * consolidate with it. We can do it up to MAX_WAIT. Cancel the
+	 * delayed work if it was still pending.
+	 */
+	if (time_is_before_jiffies(flush_node->start_jiffies + max_wait - delay) &&
+	    cancel_delayed_work_sync(&flush_node->dw))
+		return true;
+	return false;
+}
+
+static int
+mlxsw_sp_router_xm_cache_flush_schedule(struct mlxsw_sp *mlxsw_sp,
+					struct mlxsw_sp_router_xm_flush_info *flush_info)
+{
+	unsigned long delay = usecs_to_jiffies(MLXSW_SP_ROUTER_XM_CACHE_DELAY);
+	struct mlxsw_sp_router_xm_flush_info flush_all_info = {.all = true};
+	struct mlxsw_sp_router_xm *router_xm = mlxsw_sp->router->xm;
+	struct mlxsw_sp_router_xm_flush_node *flush_node;
+
+	/* Check if the queued number of flushes reached a critical amount
+	 * after which it is better to just flush the whole cache.
+	 */
+	if (router_xm->flush_count == MLXSW_SP_ROUTER_XM_CACHE_PARALLEL_FLUSHES_LIMIT)
+		/* Entering flush-all mode. */
+		router_xm->flush_all_mode = true;
+
+	if (router_xm->flush_all_mode)
+		flush_info = &flush_all_info;
+
+	rcu_read_lock();
+	flush_node = rhashtable_lookup_fast(&router_xm->flush_ht, flush_info,
+					    mlxsw_sp_router_xm_flush_ht_params);
+	/* Take a reference so the object is not freed before a possible
+	 * delayed work cancel can be done.
+	 */
+	mlxsw_sp_router_xm_cache_flush_node_hold(flush_node);
+	rcu_read_unlock();
+
+	if (flush_node && mlxsw_sp_router_xm_cache_flush_may_cancel(flush_node)) {
+		flush_node->reuses++;
+		mlxsw_sp_router_xm_cache_flush_node_put(flush_node);
+		/* Original work was within wait period and was canceled.
+		 * That means that the reference is still held and the
+		 * flush_node_put() call above did not free the flush_node.
+		 * Reschedule it with fresh delay.
+		 */
+		goto schedule_work;
+	} else {
+		mlxsw_sp_router_xm_cache_flush_node_put(flush_node);
+	}
+
+	flush_node = mlxsw_sp_router_xm_cache_flush_node_create(mlxsw_sp, flush_info);
+	if (IS_ERR(flush_node))
+		return PTR_ERR(flush_node);
+	INIT_DELAYED_WORK(&flush_node->dw, mlxsw_sp_router_xm_cache_flush_work);
+
+schedule_work:
+	mlxsw_core_schedule_dw(&flush_node->dw, delay);
+	return 0;
+}
+
 static int
 mlxsw_sp_router_xm_ml_entry_add(struct mlxsw_sp *mlxsw_sp,
 				struct mlxsw_sp_router_xm_fib_entry *fib_entry)
@@ -282,10 +529,18 @@ mlxsw_sp_router_xm_ml_entry_add(struct mlxsw_sp *mlxsw_sp,
 							   ltable_node);
 		if (err)
 			goto err_lvalue_set;
+
+		/* The L value for prefix/M is increased.
+		 * Therefore, all entries in M and ML caches matching
+		 * {prefix/M, proto, VR} need to be flushed. Set the flush
+		 * prefix length to M to achieve that.
+		 */
+		fib_entry->flush_info.prefix_len = MLXSW_SP_ROUTER_XM_M_VAL;
 	}
 
 	ltable_node->lvalue_ref[lvalue]++;
 	fib_entry->ltable_node = ltable_node;
+
 	return 0;
 
 err_lvalue_set:
@@ -313,6 +568,13 @@ mlxsw_sp_router_xm_ml_entry_del(struct mlxsw_sp *mlxsw_sp,
 
 		ltable_node->current_lvalue = new_lvalue;
 		mlxsw_sp_router_xm_ltable_lvalue_set(mlxsw_sp, ltable_node);
+
+		/* The L value for prefix/M is decreased.
+		 * Therefore, all entries in M and ML caches matching
+		 * {prefix/M, proto, VR} need to be flushed. Set the flush
+		 * prefix length to M to achieve that.
+		 */
+		fib_entry->flush_info.prefix_len = MLXSW_SP_ROUTER_XM_M_VAL;
 	}
 	mlxsw_sp_router_xm_ltable_node_put(router_xm, ltable_node);
 }
@@ -354,6 +616,23 @@ mlxsw_sp_router_xm_ml_entries_del(struct mlxsw_sp *mlxsw_sp,
 	}
 }
 
+static void
+mlxsw_sp_router_xm_ml_entries_cache_flush(struct mlxsw_sp *mlxsw_sp,
+					  struct mlxsw_sp_fib_entry_op_ctx_xm *op_ctx_xm)
+{
+	struct mlxsw_sp_router_xm_fib_entry *fib_entry;
+	int err;
+	int i;
+
+	for (i = 0; i < op_ctx_xm->entries_count; i++) {
+		fib_entry = op_ctx_xm->entries[i];
+		err = mlxsw_sp_router_xm_cache_flush_schedule(mlxsw_sp,
+							      &fib_entry->flush_info);
+		if (err)
+			dev_err(mlxsw_sp->bus_info->dev, "Failed to flush XM cache\n");
+	}
+}
+
 static int mlxsw_sp_router_ll_xm_fib_entry_commit(struct mlxsw_sp *mlxsw_sp,
 						  struct mlxsw_sp_fib_entry_op_ctx *op_ctx,
 						  bool *postponed_for_bulk)
@@ -414,6 +693,11 @@ static int mlxsw_sp_router_ll_xm_fib_entry_commit(struct mlxsw_sp *mlxsw_sp,
 		 */
 		mlxsw_sp_router_xm_ml_entries_del(mlxsw_sp, op_ctx_xm);
 
+	/* At the very end, do the XLT cache flushing to evict stale
+	 * M and ML cache entries after prefixes were inserted/removed.
+	 */
+	mlxsw_sp_router_xm_ml_entries_cache_flush(mlxsw_sp, op_ctx_xm);
+
 out:
 	/* Next pack call is going to do reinitialization */
 	op_ctx->initialized = false;
@@ -490,9 +774,15 @@ int mlxsw_sp_router_xm_init(struct mlxsw_sp *mlxsw_sp)
 	if (err)
 		goto err_ltable_ht_init;
 
+	err = rhashtable_init(&router_xm->flush_ht, &mlxsw_sp_router_xm_flush_ht_params);
+	if (err)
+		goto err_flush_ht_init;
+
 	mlxsw_sp->router->xm = router_xm;
 
 	return 0;
 
+err_flush_ht_init:
+	rhashtable_destroy(&router_xm->ltable_ht);
 err_ltable_ht_init:
 err_rxltm_write:
 err_mindex_size_check:
@@ -509,6 +799,7 @@ void mlxsw_sp_router_xm_fini(struct mlxsw_sp *mlxsw_sp)
 	if (!mlxsw_sp->bus_info->xm_exists)
 		return;
 
+	rhashtable_destroy(&router_xm->flush_ht);
 	rhashtable_destroy(&router_xm->ltable_ht);
 	kfree(router_xm);
 }
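As a side note on the mask arithmetic above, here is a small standalone model
of the IPv4 flush-key computation; it is illustrative only (the sample address
is arbitrary, and prefix_len is assumed to be between 1 and 32). On the
scheduling side, each flush then waits MLXSW_SP_ROUTER_XM_CACHE_DELAY
(50 usec), and an identical request arriving within the ten-times-longer
MAX_WAIT window cancels and re-arms the pending work instead of queueing a
second one.

#include <stdint.h>
#include <stdio.h>

/* Model of mlxsw_sp_router_xm_flush_mask4(): GENMASK(31, 32 - prefix_len)
 * sets the top prefix_len bits. Assumes 1 <= prefix_len <= 32.
 */
static uint32_t flush_mask4(uint8_t prefix_len)
{
	return (uint32_t)(~0U) << (32 - prefix_len);
}

int main(void)
{
	uint32_t addr = 0x0a141e28;	/* 10.20.30.40, arbitrary sample */

	/* /20: 0x0a141e28 & 0xfffff000 = 0x0a141000, i.e. 10.20.16.0;
	 * every cached entry sharing these 20 address bits is flushed.
	 */
	printf("flush key: 0x%08x\n", addr & flush_mask4(20));
	return 0;
}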
From patchwork Mon Dec 14 11:30:40 2020
X-Patchwork-Submitter: Ido Schimmel
X-Patchwork-Id: 343858
From: Ido Schimmel
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, jiri@nvidia.com, mlxsw@nvidia.com,
 Ido Schimmel
Subject: [PATCH net-next v2 14/15] mlxsw: spectrum: Set KVH XLT cache mode
 for Spectrum2/3
Date: Mon, 14 Dec 2020 13:30:40 +0200
Message-Id: <20201214113041.2789043-15-idosch@idosch.org>
In-Reply-To: <20201214113041.2789043-1-idosch@idosch.org>
References: <20201214113041.2789043-1-idosch@idosch.org>

From: Jiri Pirko

Set a profile option to instruct the FW to use half of the KVH for the
XLT cache, rather than the whole of it.

Signed-off-by: Jiri Pirko
Signed-off-by: Ido Schimmel
---
 drivers/net/ethernet/mellanox/mlxsw/cmd.h      | 13 +++++++++++++
 drivers/net/ethernet/mellanox/mlxsw/core.h     |  4 +++-
 drivers/net/ethernet/mellanox/mlxsw/pci.c      |  6 ++++++
 drivers/net/ethernet/mellanox/mlxsw/spectrum.c |  2 ++
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/mellanox/mlxsw/cmd.h b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
index 4de15c56542f..392ce3cb27f7 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/cmd.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
@@ -674,6 +674,12 @@ MLXSW_ITEM32(cmd_mbox, config_profile, set_kvd_hash_double_size, 0x0C, 26, 1);
  */
 MLXSW_ITEM32(cmd_mbox, config_profile, set_cqe_version, 0x08, 0, 1);
 
+/* cmd_mbox_config_set_kvh_xlt_cache_mode
+ * Capability bit. Setting a bit to 1 configures the profile
+ * according to the mailbox contents.
+ */
+MLXSW_ITEM32(cmd_mbox, config_profile, set_kvh_xlt_cache_mode, 0x08, 3, 1);
+
 /* cmd_mbox_config_profile_max_vepa_channels
  * Maximum number of VEPA channels per port (0 through 16)
  * 0 - multi-channel VEPA is disabled
@@ -800,6 +806,13 @@ MLXSW_ITEM32(cmd_mbox, config_profile, adaptive_routing_group_cap, 0x4C, 0, 16);
  */
 MLXSW_ITEM32(cmd_mbox, config_profile, arn, 0x50, 31, 1);
 
+/* cmd_mbox_config_profile_kvh_xlt_cache_mode
+ * KVH XLT cache mode:
+ * 0 - XLT can use all KVH as best-effort
+ * 1 - XLT cache uses 1/2 KVH
+ */
+MLXSW_ITEM32(cmd_mbox, config_profile, kvh_xlt_cache_mode, 0x50, 8, 4);
+
 /* cmd_mbox_config_kvd_linear_size
  * KVD Linear Size
  * Valid for Spectrum only
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
index 6558f9cde3d6..6b3ccbf6b238 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
@@ -256,7 +256,8 @@ struct mlxsw_config_profile {
 		used_max_pkey:1,
 		used_ar_sec:1,
 		used_adaptive_routing_group_cap:1,
-		used_kvd_sizes:1;
+		used_kvd_sizes:1,
+		used_kvh_xlt_cache_mode:1;
 	u8	max_vepa_channels;
 	u16	max_mid;
 	u16	max_pgt;
@@ -278,6 +279,7 @@ struct mlxsw_config_profile {
 	u32	kvd_linear_size;
 	u8	kvd_hash_single_parts;
 	u8	kvd_hash_double_parts;
+	u8	kvh_xlt_cache_mode;
 	struct mlxsw_swid_config swid_config[MLXSW_CONFIG_PROFILE_SWID_COUNT];
 };
 
diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
index aae472f0e62f..4eeae8d78006 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
@@ -1196,6 +1196,12 @@ static int mlxsw_pci_config_profile(struct mlxsw_pci *mlxsw_pci, char *mbox,
 		mlxsw_cmd_mbox_config_profile_kvd_hash_double_size_set(mbox,
 					MLXSW_RES_GET(res, KVD_DOUBLE_SIZE));
 	}
+	if (profile->used_kvh_xlt_cache_mode) {
+		mlxsw_cmd_mbox_config_profile_set_kvh_xlt_cache_mode_set(
+			mbox, 1);
+		mlxsw_cmd_mbox_config_profile_kvh_xlt_cache_mode_set(
+			mbox, profile->kvh_xlt_cache_mode);
+	}
 
 	for (i = 0; i < MLXSW_CONFIG_PROFILE_SWID_COUNT; i++)
 		mlxsw_pci_config_profile_swid_config(mlxsw_pci, mbox, i,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index 516d6cb45c9f..1650d9852b5b 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -2936,6 +2936,8 @@ static const struct mlxsw_config_profile mlxsw_sp2_config_profile = {
 	.max_ib_mc			= 0,
 	.used_max_pkey			= 1,
 	.max_pkey			= 0,
+	.used_kvh_xlt_cache_mode	= 1,
+	.kvh_xlt_cache_mode		= 1,
 	.swid_config			= {
 		{
 			.used_type	= 1,